| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string) |
|---|---|---|---|---|---|---|---|---|
78,779,650
| 12,131,013
|
Capture KeyboardInterrupt in context manager when OpenMPI run is manually terminated
|
<p>I am running code in parallel using <code>mpi4py</code>. If I perform a keyboard interrupt, my context manager's <code>__exit__</code> runs when the code is launched as <code>python file.py</code>, but not when launched as <code>mpirun -np 1 python file.py</code> (shown with one process here, but the behavior is the same with more processes).</p>
<p>How do I get the terminated MPI run to cause the code to exit like a normal python process and enter the context manager's <code>__exit__</code> process?</p>
<p>Minimal reproducible example:</p>
<pre class="lang-py prettyprint-override"><code>from mpi4py import MPI
def f(i):
# raise KeyboardInterrupt()
return i**0.5
class ContextManager():
def __init__(self,):
return
def __enter__(self,):
return self
def __exit__(self, exc_type, exc_value, traceback):
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
print("Exiting.")
if rank == 0:
print(rank, size)
print(exc_type)
print(exc_value)
print(traceback)
if __name__ == "__main__":
with ContextManager():
for i in range(1_000_000):
print(f(i))
</code></pre>
<p>If I uncomment the interrupt, whether the code is run via <code>python file.py</code> or <code>mpirun -np 1 file.py</code>, the context manager <code>__exit__</code> is run and the printouts are displayed.</p>
<p>If I run the code as shown above (the interrupt is commented) and hit <code>ctrl+C</code> in the middle of the run, then:</p>
<ul>
<li>if I ran the code as <code>python file.py</code>, <code>__exit__</code> is entered and the printouts are displayed</li>
<li>if I ran the code as <code>mpirun -np 1 python file.py</code>, the code simply terminates and never enters <code>__exit__</code> (i.e., there are no printouts).</li>
</ul>
<p>Versions:</p>
<pre><code>python: 3.10.14
mpirun: 4.1.4
mpi4py: 3.1.6
Ubuntu: 22.04.4 LTS
</code></pre>
<hr />
<p>Edit: Following the answers in <a href="https://stackoverflow.com/q/58117400/12131013">this question</a>, the interrupt is still not captured.</p>
<pre class="lang-py prettyprint-override"><code>from mpi4py import MPI
import signal
def sigterm_handler(signum, frame):
print("Here")
raise KeyboardInterrupt
signal.signal(signal.SIGTERM, sigterm_handler)
def f(i):
# raise KeyboardInterrupt()
return i**0.5
class ContextManager():
def __init__(self,):
return
def __enter__(self,):
return self
def __exit__(self, exc_type, exc_value, traceback):
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
print("Exiting.")
if rank == 0:
print(rank, size)
print(exc_type)
print(exc_value)
print(traceback)
if __name__ == "__main__":
with ContextManager():
for i in range(1_000_000):
print(f(i))
</code></pre>
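<p>Under <code>mpirun</code>, Ctrl+C is delivered to the <code>mpirun</code> launcher, which then typically terminates the ranks itself (often with SIGTERM, and SIGKILL after a grace period), so Python may never get to raise <code>KeyboardInterrupt</code> inside the ranks at all. A minimal sketch of the general mechanism, independent of MPI: a signal handler that raises <code>SystemExit</code> makes the interpreter unwind the stack, which runs <code>__exit__</code> on the way out. Whether this actually fires under <code>mpirun</code> depends on which signal your Open MPI build forwards to the ranks.</p>

```python
import os
import signal

class ContextManager:
    def __init__(self):
        self.exited = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.exited = True
        print("Exiting.", exc_type)

def handler(signum, frame):
    # Raising an exception from the handler lets the interpreter unwind
    # the stack, running __exit__ blocks and finally clauses on the way out.
    raise SystemExit(1)

signal.signal(signal.SIGTERM, handler)

cm = ContextManager()
try:
    with cm:
        # Simulate the launcher terminating this rank with SIGTERM.
        os.kill(os.getpid(), signal.SIGTERM)
except SystemExit:
    pass

print(cm.exited)  # True
```

<p>If the launcher kills the ranks with SIGKILL instead, no Python-level cleanup can run, so any such handler has to finish within the launcher's grace period.</p>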
|
<python><mpi><contextmanager><mpi4py>
|
2024-07-22 15:49:20
| 1
| 9,583
|
jared
|
78,779,292
| 4,691,830
|
Cannot use fileinput.input as stdin in subprocess.run
|
<p>I was under the impression that <code>fileinput.input([path_1, path2, ...])</code> is basically interchangeable with <code>open(path_1)</code> except that the former concatenates the contents of all the files given. However, <a href="https://docs.python.org/3/library/fileinput.html#fileinput.input" rel="nofollow noreferrer"><code>fileinput.input</code></a> behaves differently when passed as <code>stdin</code> to <a href="https://docs.python.org/3/library/subprocess.html#subprocess.run" rel="nofollow noreferrer"><code>subprocess.run</code></a>. That is, <code>subprocess.run</code> just hangs and doesn't read anything from the file(s) at all.</p>
<p>Here's an example:</p>
<p>example.py:</p>
<pre class="lang-py prettyprint-override"><code>import fileinput
import subprocess
import sys
idle_path = sys.exec_prefix + "/bin/idle3"
paths = [idle_path]
with fileinput.input(paths) as combined_file:
# with open(idle_path) as combined_file:
subprocess.run(["cat"], stdin=combined_file)
</code></pre>
<p>Running the above in a terminal with <code>python example.py</code> hangs and never finishes.</p>
<p>With <code>open</code> instead of <code>fileinput.input</code> it works and produces the contents of <code>/bin/idle3</code> as expected:</p>
<pre><code>$ python example.py
#!/home/me/mambaforge/envs/testenv/bin/python3.12
from idlelib.pyshell import main
if __name__ == '__main__':
main()
</code></pre>
<p>What's going on there? How can I pipe the combined content of multiple files to any command I want to use with <code>subprocess.run</code>?</p>
<p>I see this on Python 3.12; I'm not sure whether the version makes any difference. The command I'm actually trying to run is not <code>cat</code> but <code>subprocess.run([sys.executable, "-m", "cantools", "plot", "/path/to/vehicle.dbc"], stdin=combined_candumps)</code> (see <a href="https://cantools.readthedocs.io/en/latest/#the-plot-subcommand" rel="nofollow noreferrer">cantools docs</a>). Running the whole thing in the shell isn't really an option because I want to get some of the arguments from a GUI that is itself written in Python. While cantools is written in Python, it has a shell-only API and no documented Python interface; that's the reason for the roundtrip via <code>subprocess.run</code>.</p>
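<p>A detail worth checking: before any line has been read, <code>FileInput.fileno()</code> returns <code>-1</code> (no file is open yet), and <code>-1</code> happens to be the value of <code>subprocess.PIPE</code> — so <code>subprocess.run</code> may interpret your <code>stdin</code> argument as "create a pipe" and the child then waits forever on input that never arrives. Under that assumption, one workaround is to concatenate the file contents yourself and pass the bytes via <code>input=</code>. A sketch (the temporary files stand in for your real paths, and a POSIX <code>cat</code> is assumed):</p>

```python
import os
import subprocess
import tempfile

# Two throwaway files standing in for the real input paths.
paths = []
for text in ("first\n", "second\n"):
    tmp = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
    tmp.write(text)
    tmp.close()
    paths.append(tmp.name)

# Concatenate the contents ourselves and hand the bytes to the child
# process via `input=` instead of passing a stdin file object.
combined = b"".join(open(p, "rb").read() for p in paths)
result = subprocess.run(["cat"], input=combined, capture_output=True)
print(result.stdout)  # b'first\nsecond\n'

for p in paths:
    os.remove(p)
```

<p>For very large inputs, writing the files to the child's stdin pipe incrementally via <code>subprocess.Popen</code> avoids loading everything into memory.</p>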
|
<python><file-io><subprocess>
|
2024-07-22 14:33:43
| 3
| 4,145
|
Joooeey
|
78,779,247
| 5,333,970
|
Upgrading python3-pip for security issues
|
<p>A security scan has identified python3-pip 9.0.3 on EL8 as a security issue. I don't use Python on the machine, but when I tried to remove it, I got an error because dnf depends on Python.</p>
<p>There is no newer version of python3-pip for EL8 that I can find.</p>
<p>Does anyone have any suggestions on how I can resolve this? I was wondering if I could remove the platform Python and install a newer version, like 3.8. This is on a Docker image, so I have room to experiment.</p>
|
<python><python-3.x><rpm><yum><dnf>
|
2024-07-22 14:26:51
| 1
| 303
|
Daniel Cosio
|
78,779,076
| 9,284,651
|
add date columns
|
<p>I have two date columns and need to create a third based on their sum. The df looks like this:</p>
<pre><code>date_1 date_2 result_date
2024-07-07 18:00:00.000000 0001-01-02T01:12:53.832 2024-07-08 19:12:53
2024-07-07 08:46:00.000000 0001-01-04T08:00:00 2024-07-10 16:46:00
2024-07-07 17:42:00.000000 0001-01-08T02:00:00 2024-07-14 19:42:00
2024-07-07 17:42:00.000000 0002-01-01T02:00:00 2025-07-07 19:42:00
</code></pre>
<p>The date_1 and date_2 columns both have dtype "object". The date_2 column holds durations expressed as dates counted from year one. I am not sure how to add them.</p>
<pre><code>data = {
'date_1': ['2024-07-07 18:00:00.000000', '2024-07-07 08:46:00.000000', '2024-07-07 17:42:00.000000', '2024-07-07 17:42:00.000000'],
'date_2': ['0001-01-02T01:12:53.832', '0001-01-04T08:00:00', '0001-01-08T02:00:00', '0002-01-01T02:00:00']
}
df = pd.DataFrame(data)
</code></pre>
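<p>Year-one timestamps are outside the range of <code>pandas.Timestamp</code>, but the stdlib <code>datetime</code> handles them. So one sketch, assuming <code>date_2</code> is meant as an offset counted from <code>0001-01-01T00:00:00</code> (which matches the expected results above), converts each <code>date_2</code> into a <code>Timedelta</code> and adds it to <code>date_1</code>:</p>

```python
import pandas as pd
from datetime import datetime

data = {
    'date_1': ['2024-07-07 18:00:00.000000', '2024-07-07 08:46:00.000000'],
    'date_2': ['0001-01-02T01:12:53.832', '0001-01-04T08:00:00'],
}
df = pd.DataFrame(data)

# Interpret date_2 as a duration measured from year one's first instant;
# stdlib datetime accepts year 1, which pandas.Timestamp cannot.
offset = df['date_2'].map(lambda s: datetime.fromisoformat(s) - datetime(1, 1, 1))

df['result_date'] = pd.to_datetime(df['date_1']) + pd.to_timedelta(offset)
print(df['result_date'].iloc[0])  # 2024-07-08 19:12:53.832000
```

<p>This only works while the offsets stay within the roughly 292-year range of <code>Timedelta</code>; the <code>0002-…</code> row (about a year) is well inside it.</p>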
|
<python><pandas><dataframe><datetime><time>
|
2024-07-22 13:48:26
| 2
| 403
|
Tmiskiewicz
|
78,779,017
| 2,855,689
|
When using Docker and python:3.11 image the module "test" is not found
|
<p>I have a multistage docker image as below.</p>
<pre><code># syntax=docker/dockerfile:1
FROM python:3.11 as DEP_BUILDER
ENV PYTHONFAULTHANDLER=1 \
PYTHONUNBUFFERED=1 \
PYTHONHASHSEED=random \
PIP_NO_CACHE_DIR=off \
PIP_DISABLE_PIP_VERSION_CHECK=on \
PIP_DEFAULT_TIMEOUT=100 \
POETRY_CACHE_DIR='/var/cache/pypoetry' \
POETRY_HOME='/usr/local'
ARG POETRY_VERSION=1.8.3
WORKDIR /venv
RUN apt-get update && apt-get install -y curl
RUN curl -sSL https://install.python-poetry.org | python3 -
COPY pyproject.toml /venv/
RUN python -m venv .venv
RUN . .venv/bin/activate && \
poetry config virtualenvs.in-project true && \
poetry install --no-interaction --no-ansi --with dev
FROM python:3.11
WORKDIR /server
COPY /app /server/app
COPY /tests /server/tests
COPY --from=DEP_BUILDER /venv/.venv ./.venv
ENV PATH="/server/.venv/bin:$PATH"
CMD ./.venv/bin/python3 -m pytest tests
</code></pre>
<p>While in my local computer running <code>pytest</code> works fine, when doing it inside Docker I get the following error:</p>
<pre><code>Attaching to app-1
app-1 | ImportError while loading conftest '/server/tests/conftest.py'.
app-1 | tests/conftest.py:10: in <module>
app-1 | from test.support.os_helper import EnvironmentVarGuard
app-1 | E ModuleNotFoundError: No module named 'test'
app-1 exited with code 4
</code></pre>
<p>Could someone be so kind as to explain why a module that is supposed to be a Python built-in is inaccessible from inside the image?</p>
<p>Thank you very much</p>
<h2>Update 1:</h2>
<p>To test whether or not it's a virtual env issue I updated the second stage like</p>
<pre><code>FROM python:3.11
WORKDIR /server
COPY /app /server/app
COPY /tests /server/tests
COPY --from=DEP_BUILDER /venv/.venv/lib/python3.11/site-packages /usr/local/lib/python3.11/site-packages
CMD python -m pytest
</code></pre>
<p>The result didn't change</p>
<pre><code>Attaching to app-1
app-1 | ImportError while loading conftest '/server/tests/conftest.py'.
app-1 | tests/conftest.py:10: in <module>
app-1 | from test.support.os_helper import EnvironmentVarGuard
app-1 | E ModuleNotFoundError: No module named 'test'
app-1 exited with code 4
</code></pre>
<h2>Update 2</h2>
<p>After further investigation I've found out that the test module is completely missing in the docker image.</p>
<p><a href="https://i.sstatic.net/LmsxnBdr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LmsxnBdr.png" alt="enter image description here" /></a></p>
<p>Is it possible that official Python Docker images come without the test module? If so is there any Python Docker image that comes with it?</p>
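<p>As far as I can tell, the official <code>python</code> images build CPython from source and prune the stdlib's <code>test</code>/<code>tests</code> directories to shrink the image, so the module being absent is expected rather than a virtualenv problem. Rather than restoring it, it is safer not to depend on CPython's internal <code>test</code> package at all: <code>EnvironmentVarGuard</code> can usually be replaced with <code>unittest.mock.patch.dict</code> on <code>os.environ</code> (or pytest's <code>monkeypatch.setenv</code>). A sketch with a hypothetical variable name:</p>

```python
import os
from unittest import mock

# Temporarily set an environment variable and restore the environment
# afterwards -- the same job EnvironmentVarGuard does in CPython's tests.
with mock.patch.dict(os.environ, {"HYPOTHETICAL_SETTING": "42"}):
    assert os.environ["HYPOTHETICAL_SETTING"] == "42"

# Outside the context manager the original environment is restored.
print("HYPOTHETICAL_SETTING" in os.environ)  # False
```

<p>This also removes the dependency on a package whose API CPython explicitly documents as internal and changeable between releases.</p>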
|
<python><python-3.x><docker><dockerfile>
|
2024-07-22 13:36:03
| 1
| 732
|
Aitor Martin
|
78,779,016
| 2,107,030
|
Setting the same axis limits for all *unpacked* subplots
|
<p>What is an efficient way to assign the same specs (for example, <code>xlim</code>) to matplotlib unpacked subplots?</p>
<p>I have</p>
<pre><code>fig, (axs1, axs2,axs3) = plt.subplots(3, sharex=True, figsize=(10,6), gridspec_kw={'height_ratios': [2, 1, 1], 'hspace':0})
</code></pre>
<p>I want to cycle over them, or find a common object similar to <a href="https://stackoverflow.com/a/56646584/2107030">plt.setp()</a>, to apply the same specs to all axes, for example to avoid:</p>
<pre><code>axs1.set_xlim(0., 1.)
axs2.set_xlim(0., 1.)
axs3.set_xlim(0., 1.)
</code></pre>
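<p><code>plt.setp()</code> accepts a sequence of artists, so it works directly on the unpacked axes; a plain loop covers anything <code>setp</code> can't express. A sketch (using the non-interactive Agg backend so it runs without a display):</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe without a display
import matplotlib.pyplot as plt

fig, (axs1, axs2, axs3) = plt.subplots(
    3, sharex=True, figsize=(10, 6),
    gridspec_kw={'height_ratios': [2, 1, 1], 'hspace': 0})

# Option 1: plt.setp applies the same properties to every axes in the tuple.
plt.setp((axs1, axs2, axs3), xlim=(0., 1.), ylim=(-1., 1.))

# Option 2: an explicit loop, for methods that setp cannot reach.
for ax in (axs1, axs2, axs3):
    ax.grid(True)

print(axs2.get_xlim())  # (0.0, 1.0)
```

<p>Note that with <code>sharex=True</code> the x-axis is already shared, so setting <code>xlim</code> on any one of the axes propagates to the others anyway; the techniques above matter for the properties that are not shared.</p>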
|
<python><matplotlib><subplot>
|
2024-07-22 13:35:59
| 1
| 2,166
|
Py-ser
|
78,778,988
| 9,714,255
|
Do I need to use Named Entity Recognition (NER) in tokenization?
|
<p>I am working on an NLP project for sentiment analysis. I am using SpaCy to tokenize sentences. As I was reading the <a href="https://spacy.io/usage/linguistic-features#named-entities" rel="nofollow noreferrer">documentation</a>, I learned about NER. I've read that it can be used to extract entities from text for aiding a user's searching.</p>
<p>What I am trying to understand is how to incorporate it (<em>if I should</em>) into my tokenization process. Here is an example.</p>
<pre class="lang-py prettyprint-override"><code>text = "Let's not forget that Apple Pay in 2014 required a brand new iPhone in order to use it. A significant portion of Apple's user base wasn't able to use it even if they wanted to. As each successive iPhone incorporated the technology and older iPhones were replaced the number of people who could use the technology increased."
sentence = sp(text) # sp = spacy.load('en_core_web_sm')
for word in sentence:
print(word.text)
# Let
# 's
# not
# forget
# that
# Apple
# Pay
# in
# etc...
for word in sentence.ents:
print(word.text + " _ " + word.label_ + " _ " + str(spacy.explain(word.label_)))
# Apple Pay _ ORG _ Companies, agencies, institutions, etc.
# 2014 _ DATE _ Absolute or relative dates or periods
# iPhone _ ORG _ Companies, agencies, institutions, etc.
# Apple _ ORG _ Companies, agencies, institutions, etc.
# iPhones _ ORG _ Companies, agencies, institutions, etc.
</code></pre>
<p>The first loop shows that 'Apple' and 'Pay' are separate tokens. When printing the discovered entities in the second loop, spaCy recognizes that 'Apple Pay' is an ORG.</p>
<p>My thinking is, shouldn't 'Apple' and 'Pay' be tokenized together as a single token, so that when I create my classifier it recognizes an entity rather than a fruit ('Apple') and a verb ('Pay')? If so, how could I achieve that (let's say) "type" of tokenization?</p>
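<p>spaCy tokenizes first and recognizes entities on top of the tokens, so 'Apple' and 'Pay' start out separate. If you want entity spans to become single tokens, you can merge them after NER — either with the built-in <code>nlp.add_pipe("merge_entities")</code> pipe or with the retokenizer. A sketch using a blank pipeline and a hand-set entity, so it runs without downloading a model (with <code>en_core_web_sm</code>, <code>doc.ents</code> would already be filled in by the NER component):</p>

```python
import spacy
from spacy.tokens import Span

nlp = spacy.blank("en")  # no statistical model needed for this demo
doc = nlp("Apple Pay launched in 2014.")

# Pretend the NER component found "Apple Pay"; with en_core_web_sm this
# span would already be present in doc.ents.
doc.ents = [Span(doc, 0, 2, label="ORG")]

# Merge every entity span into a single token.
with doc.retokenize() as retokenizer:
    for ent in doc.ents:
        retokenizer.merge(ent)

print([t.text for t in doc])  # ['Apple Pay', 'launched', 'in', '2014', '.']
```

<p>Whether you should do this for sentiment analysis is a modeling choice: merging keeps multi-word entities intact as features, at the cost of a larger, sparser vocabulary.</p>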
|
<python><python-3.x><nlp><spacy><named-entity-recognition>
|
2024-07-22 13:28:14
| 1
| 1,348
|
LoukasPap
|
78,778,978
| 7,290,715
|
Get the last child node and its associated namespace mapping dynamically in xml file using python
|
<p>I have an xml file as below:</p>
<pre><code>xml_s = """
<?xml version="1.0" encoding="utf-8"?>
<feed xml:base="https://url_path/1f3a6012-d6b2-4258-a91a-fc6d4e86c304/"
xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
xmlns="http://www.w3.org/2005/Atom">
<title type="text">Data</title>
<id>https://<url_path>/1f3a6012-d6b2-4258-a91a-fc6d4e86c304/Data</id>
<updated>2024-07-18T05:22:18Z</updated>
<link rel="self" title="Data" href="Data" />
<m:count>5746</m:count>
<entry>
<id>uuid:58aaa654-1649-4061-97fe-ecc53e186fee;id=1</id>
<title type="text"></title>
<updated>2024-07-18T05:22:18Z</updated>
<category term="Intelex.Data"
scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
<content type="application/xml">
<m:properties>
<d:RowNum m:type="Edm.Int32">1</d:RowNum>
<d:Record_x0020_No m:type="Edm.Int64">491</d:Record_x0020_No>
<d:Location m:type="Edm.String">Amine &gt; Shared</d:Location>
<d:MOC_x0020_Initiator m:type="Edm.String" m:null="true" />
<d:Status m:type="Edm.String">Vertical Heads Approval</d:Status>
<d:Workflow_x0020_Status m:type="Edm.String">1</d:Workflow_x0020_Status>
<d:Date_x0020_Created m:type="Edm.DateTime">2022-11-15T15:01:17</d:Date_x0020_Created>
</m:properties>
</content>
</entry>
<entry>
<id>uuid:58aaa654-1649-4061-97fe-ecc53e186fee;id=2</id>
<title type="text"></title>
<updated>2024-07-18T05:22:18Z</updated>
<category term="Intelex.Data" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
<content type="application/xml">
<m:properties>
<d:RowNum m:type="Edm.Int32">2</d:RowNum>
<d:Record_x0020_No m:type="Edm.Int64">586</d:Record_x0020_No>
<d:Location m:type="Edm.String">Ruby &gt; DNB-NT</d:Location>
<d:MOC_x0020_Initiator m:type="Edm.String">XYZ</d:MOC_x0020_Initiator>
<d:Status m:type="Edm.String">Vertical Heads Approval</d:Status>
<d:Workflow_x0020_Status m:type="Edm.String">1</d:Workflow_x0020_Status>
<d:Date_x0020_Created m:type="Edm.DateTime">2022-12-07T14:14:19</d:Date_x0020_Created>
</m:properties>
</content>
</entry>
</feed>
"""
</code></pre>
<p>I am trying to convert this into a pandas DataFrame using <code>pd.read_xml()</code>.</p>
<pre><code>import pandas as pd
import xml.etree.ElementTree as ET
root = ET.fromstring(xml_s.strip())  # fromstring, since xml_s is a string, not a file path
s = ET.tostring(root).decode()
namespace = {'ns_0': 'http://www.w3.org/2005/Atom',
             'ns_1': 'http://schemas.microsoft.com/ado/2007/08/dataservices',
             'ns_2': 'http://schemas.microsoft.com/ado/2007/08/dataservices/metadata'}
df = pd.read_xml(s, namespaces=namespace, xpath='.//properties')
</code></pre>
<p>But this is giving me an error.</p>
<pre><code>ValueError: xpath does not return any nodes. Be sure row level nodes are in xpath. If
document uses namespaces denoted with xmlns, be sure to define namespaces and use them
in xpath.
</code></pre>
<p>Then I noticed that the <code>properties</code> tag is mapped to the <code>m</code> namespace. So I supplied <code>xpath = './/ns_2:properties'</code>, and that worked!</p>
<p>So with that my question would be:</p>
<ol>
<li>How can I dynamically identify the "last child node" (the deepest row-level node) of any xml? My assumption is that the actual data will be present within it.</li>
<li>Since in this case the last child node was associated with the <code>m</code> namespace, how can this mapping be done dynamically?</li>
</ol>
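<p>A sketch of both steps using only the stdlib: walk the tree to find the deepest element that still has children (a reasonable guess for the row-level node), then recover its namespace URI from the <code>{uri}local</code> form that ElementTree stores in <code>.tag</code>. The helper names are made up, and a trimmed version of the feed stands in for the full file:</p>

```python
import xml.etree.ElementTree as ET

snippet = """<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
      xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices">
  <entry>
    <content>
      <m:properties>
        <d:RowNum>1</d:RowNum>
      </m:properties>
    </content>
  </entry>
</feed>"""

root = ET.fromstring(snippet)

def deepest_parents(root):
    """Return the deepest elements that still have children (row-level nodes)."""
    best_depth, best = -1, []
    def walk(elem, depth):
        nonlocal best_depth, best
        if len(elem):  # element has child elements
            if depth > best_depth:
                best_depth, best = depth, [elem]
            elif depth == best_depth:
                best.append(elem)
        for child in elem:
            walk(child, depth + 1)
    walk(root, 0)
    return best

def split_tag(elem):
    """Split a '{uri}local' tag into (uri, local)."""
    if elem.tag.startswith("{"):
        uri, _, local = elem.tag[1:].partition("}")
        return uri, local
    return None, elem.tag

for node in deepest_parents(root):
    print(split_tag(node))
```

<p>With <code>uri</code> and <code>local</code> in hand, the mapping and xpath can be built dynamically, e.g. <code>pd.read_xml(s, namespaces={'ns': uri}, xpath=f'.//ns:{local}')</code>.</p>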
|
<python><xml>
|
2024-07-22 13:26:32
| 2
| 1,259
|
pythondumb
|
78,778,961
| 1,376,848
|
Equivalent of TS Parameters in Python
|
<p>In TypeScript I can do this:</p>
<pre class="lang-js prettyprint-override"><code>function a(b: string): string {
return b + "Bar";
}
let c: Parameters<typeof a>[0]; // string
c = "Foo";
let e = a(c); // "FooBar"
</code></pre>
<p>How do I do <code>Parameters</code> in Python?</p>
<p>Context: in my case now I want to define a function that takes as parameter an object that I can pass to function of a lib I'm using. Unfortunately that lib's types aren't available at runtime, so I can't import them. Contrived:</p>
<pre class="lang-py prettyprint-override"><code>from somelib import func
def myfunc(arg: XYZ): # I want to type XYZ
func(arg)
</code></pre>
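<p>Python has no static equivalent of <code>Parameters&lt;typeof a&gt;</code> for annotating your own signature, but at runtime you can pull a function's parameter annotations with <code>typing.get_type_hints</code> or <code>inspect.signature</code>. A sketch with a stand-in for <code>somelib.func</code>:</p>

```python
import inspect
from typing import get_type_hints

# Stand-in for the library function whose parameter type we want.
def func(arg: dict) -> None:
    pass

# By-name lookup of the annotation on `arg`:
hints = get_type_hints(func)
print(hints["arg"])  # <class 'dict'>

# inspect.signature gives positional access, closest to Parameters<...>[0]:
first = next(iter(inspect.signature(func).parameters.values()))
print(first.annotation)  # <class 'dict'>
```

<p>These are runtime objects, not something a static checker will apply to <code>myfunc</code>'s signature; and if the library's annotations genuinely aren't resolvable at runtime, <code>get_type_hints</code> may fail too, in which case writing an explicit <code>Protocol</code> describing the shape you need is the usual fallback.</p>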
|
<python><python-typing>
|
2024-07-22 13:22:31
| 0
| 2,086
|
Otto
|
78,778,953
| 11,291,788
|
Chrome is ignoring my attempts to unset cookies (Django)
|
<p>I'm trying to unset a cookie on an HTTPS site. The cookie was set by the server after a successful login, but Chrome does not unset it when logout is called, even though the Set-Cookie header is present on the response with the correct directives.</p>
<pre><code>@api_view(['POST'])
def login(request):
data = request.data
email = data.get('email')
password = data.get('password')
if not email or not password:
return JsonResponse({'error': 'Email and password are required'}, status=400)
user = authenticate_user(email, password)
if user is not None:
token = RefreshToken.for_user(user)
# Create response object
response = JsonResponse({'message': 'Login successful'})
# Set the token in a secure, HTTP-only cookie
response.set_cookie(
key='access_token',
value=str(token.access_token),
httponly=True,
secure=True, # Ensure you use HTTPS
samesite='Lax',
path='/',
domain='my-domain.com'
)
return response
else:
# Authentication failed
return JsonResponse({'error': 'Invalid credentials'}, status=401)
</code></pre>
<p>This is my logout method:</p>
<pre><code>@api_view(['POST'])
def logout(request):
# Create response object
response = JsonResponse({'message': 'Logout successful'})
response.set_cookie(
'access_token',
value='',
max_age=0,
path='/',
secure=True,
httponly=True,
samesite='Lax',
domain='my-domain.com'
)
return response
</code></pre>
<p>Additionally I have the following in my settings.py file:</p>
<pre><code>SECURE_SSL_REDIRECT = True
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_HTTPONLY = True
SESSION_COOKIE_SAMESITE = 'Lax'
SESSION_COOKIE_PATH = '/'
CSRF_COOKIE_PATH = '/'
SESSION_COOKIE_DOMAIN = 'my-domain.com'
CSRF_COOKIE_DOMAIN = 'my-domain.com'
</code></pre>
<p>This is what my client side request looks like, I'm using Redux. The request succeeds.</p>
<pre><code>const protectConfig = {
headers: {
'Content-Type': 'application/json',
},
withCredentials: true
};
export const logout = () => async dispatch => {
dispatch(setAlert('Logged Out Successfully', 'success'));
try {
const { data } = await axios.post(
process.env.REACT_APP_API_URL + API_LOGOUT,
protectConfig
);
dispatch({
type: LOGOUT,
payload: data,
});
} catch (err) {
dispatch(setAlert(err.response.data.detail, 'error'));
dispatch({
type: LOGOUT_FAIL,
});
}
};
</code></pre>
<p>I've tested on Firefox and also in incognito mode. What could possibly be causing the cookies not to be unset by my logout method?</p>
|
<python><reactjs><django><google-chrome>
|
2024-07-22 13:20:33
| 1
| 534
|
Warwick
|
78,778,926
| 2,521,423
|
Creating a metaclass that inherits from ABCMeta and QObject
|
<p>I am building an app in PySide6 that will involve dynamic loading of plugins. To facilitate this, I am using <code>ABCMeta</code> to define a custom metaclass for the plugin interface, and I would like this custom metaclass to inherit from <code>ABC</code> and from <code>QObject</code> so that I can abstract as much of the behavior as possible, including things like standard signals and slots that will be common to all subclasses.</p>
<p>I have set up a MWE that shows the chain of logic that enabled me to get this setup working, but the chain of inheritance goes deeper than I thought it would, and the <code>@abstractmethod</code> enforcement of ABC does not seem to carry through (in the sense that not overriding <code>print_value()</code> does not cause an error). Is it possible to shorten this while still having the desired inheritance? In the end, my goal is to have the <code>MetaTab</code> class as an abstract base class that inherits from ABC so that I can define <code>@abstractmethod</code>s inside it, and then subclass that for individual plugins. Do I really need both <code>QABCMeta</code> and <code>QObjectMeta</code> to make this work, or is there a way to clean it up that eliminates one of the links in this inheritance chain?</p>
<pre><code>from abc import ABC, ABCMeta, abstractmethod
from PySide6.QtCore import QObject
class QABCMeta(ABCMeta, type(QObject)):
pass
class QObjectMeta(ABC, metaclass=QABCMeta):
pass
class MetaTab(QObject, QObjectMeta):
def __init__(self, x):
print('initialized')
self.x = x
@abstractmethod
def print_value(self):
pass
class Tab(MetaTab):
def __init__(self, x):
super().__init__(x)
def print_value(self):
print(self.x)
def main():
obj = Tab(5)
for b in obj.__class__.__bases__:
print("Base class name:", b.__name__)
print("Class name:", obj.__class__.__name__)
obj.print_value()
if __name__=='__main__':
main()
</code></pre>
|
<python><abstract-class><multiple-inheritance><metaclass><pyside6>
|
2024-07-22 13:14:40
| 1
| 1,488
|
KBriggs
|
78,778,894
| 15,948,240
|
How to make a clickable URL in Shiny for Python?
|
<p>I tried this:</p>
<pre><code>app_ui = ui.page_fluid(
ui.output_text("the_txt")
)
def server(input, output, session):
@render.text
def the_txt():
url = 'https://stackoverflow.com'
clickable_url = f'<a> href="{url}" target="_blank">Click here</a>'
return ui.HTML(clickable_url)
</code></pre>
<p>But the displayed text is the raw HTML: <code><a> href="https://stackoverflow.com" target="_blank">Click here</a></code></p>
<p>How do I display a clickable link in a Shiny for Python app?</p>
|
<python><py-shiny>
|
2024-07-22 13:09:29
| 1
| 1,075
|
endive1783
|
78,778,792
| 3,018,860
|
Plotting star maps with equatorial coordinates system
|
<p>I'm trying to generate star maps in the equatorial coordinate system (RAJ2000 and DEJ2000). However, I only get a grid where meridians and parallels are drawn as straight, parallel lines, whereas the parallels should be curved and the meridians should converge toward the north and south celestial poles.</p>
<p>I'm using some Python modules: matplotlib, skyfield (for the stereographic projection), astroquery (so I can target any deep-sky object) and astropy.</p>
<p>This is my code:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Generate a skymap with equatorial grid"""
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.collections import LineCollection
from skyfield.api import Star, load
from skyfield.data import hipparcos, stellarium
from skyfield.projections import build_stereographic_projection
from astroquery.simbad import Simbad
from astropy.coordinates import SkyCoord
import astropy.units as u
from astropy.wcs import WCS
from astropy.visualization.wcsaxes import WCSAxes
# Design
plt.style.use("dark_background")
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = ['Times New Roman']
# Query object from Simbad
OBJECT = "Alioth"
FOV = 30.0
MAG = 6.5
TABLE = Simbad.query_object(OBJECT)
RA = TABLE['RA'][0]
DEC = TABLE['DEC'][0]
COORD = SkyCoord(f"{RA} {DEC}", unit=(u.hourangle, u.deg), frame='fk5')
print("RA is", RA)
print("DEC is", DEC)
ts = load.timescale()
t = ts.now()
# An ephemeris from the JPL provides Sun and Earth positions.
eph = load('de421.bsp')
earth = eph['earth']
# Load constellation outlines from Stellarium
url = ('https://raw.githubusercontent.com/Stellarium/stellarium/master'
'/skycultures/modern_st/constellationship.fab')
with load.open(url) as f:
constellations = stellarium.parse_constellations(f)
edges = [edge for name, edges in constellations for edge in edges]
edges_star1 = [star1 for star1, star2 in edges]
edges_star2 = [star2 for star1, star2 in edges]
# The Hipparcos mission provides our star catalog.
with load.open(hipparcos.URL) as f:
stars = hipparcos.load_dataframe(f)
# Center the chart on the specified object's position.
center = earth.at(t).observe(Star(ra_hours=COORD.ra.hour, dec_degrees=COORD.dec.degree))
projection = build_stereographic_projection(center)
# Compute the x and y coordinates that each star will have on the plot.
star_positions = earth.at(t).observe(Star.from_dataframe(stars))
stars['x'], stars['y'] = projection(star_positions)
# Create a True/False mask marking the stars bright enough to be included in our plot.
bright_stars = (stars.magnitude <= MAG)
magnitude = stars['magnitude'][bright_stars]
marker_size = (0.5 + MAG - magnitude) ** 2.0
# The constellation lines will each begin at the x,y of one star and end at the x,y of another.
xy1 = stars[['x', 'y']].loc[edges_star1].values
xy2 = stars[['x', 'y']].loc[edges_star2].values
lines_xy = np.rollaxis(np.array([xy1, xy2]), 1)
# Define the limit for the plotting area
angle = np.deg2rad(FOV / 2.0)
limit = np.tan(angle) # Calculate limit based on the field of view
# Build the figure with WCS axes
fig = plt.figure(figsize=[6, 6])
wcs = WCS(naxis=2)
wcs.wcs.crpix = [1, 1]
wcs.wcs.cdelt = np.array([-FOV / 360, FOV / 360])
wcs.wcs.crval = [COORD.ra.deg, COORD.dec.deg]
wcs.wcs.ctype = ["RA---STG", "DEC--STG"]
ax = fig.add_subplot(111, projection=wcs)
# Draw the constellation lines
ax.add_collection(LineCollection(lines_xy, colors='#ff7f2a', linewidths=1, linestyle='-'))
# Draw the stars
ax.scatter(stars['x'][bright_stars], stars['y'][bright_stars],
s=marker_size, color='white', zorder=2)
ax.scatter(RA, DEC, marker='*', color='red', zorder=3)
angle = np.pi - FOV / 360.0 * np.pi
limit = np.sin(angle) / (1.0 - np.cos(angle))
# Set plot limits
ax.set_xlim(-limit, limit)
ax.set_ylim(-limit, limit)
ax.set_aspect('equal')
# Add RA/Dec grid lines
ax.coords.grid(True, color='white', linestyle='dotted')
# Set the coordinate grid
ax.coords[0].set_axislabel('RA (hours)')
ax.coords[1].set_axislabel('Dec (degrees)')
ax.coords[0].set_major_formatter('hh:mm:ss')
ax.coords[1].set_major_formatter('dd:mm:ss')
# Title
ax.set_title(f'Sky map centered on {OBJECT}', color='white', y=1.04)
# Save the image
FILE = "chart.png"
plt.savefig(FILE, dpi=100, facecolor='#1a1a1a')
</code></pre>
<p>And this is the resulting image:</p>
<p><a href="https://i.sstatic.net/KpQPMQGy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KpQPMQGy.png" alt="enter image description here" /></a></p>
<p>As you can see, the grid (parallels and meridians) are totally parallel. However, my goal is to achieve this grid:</p>
<p><a href="https://i.sstatic.net/rE5RZ34k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rE5RZ34k.png" alt="enter image description here" /></a></p>
<p>In that case, I got the right WCS from a FITS image in the DSS survey, where it comes automatically from the file's header. For plotted star maps, however, I need to build an equivalent WCS myself, with working labels and coordinate grid, rather than relying on a background image.</p>
|
<python><matplotlib><astropy><skyfield>
|
2024-07-22 12:45:55
| 2
| 2,834
|
Unix
|
78,778,698
| 8,964,393
|
Random stratified sampling in pandas
|
<p>I have created a pandas dataframe as follows:</p>
<pre><code>import pandas as pd
import numpy as np
ds = {'col1' : [1,1,1,1,1,1,1,2,2,2,2,3,3,3,3,3,4,4,4,4,4,4,4,4,4],
'col2' : [12,3,4,5,4,3,2,3,4,6,7,8,3,3,65,4,3,2,32,1,2,3,4,5,32],
}
df = pd.DataFrame(data=ds)
</code></pre>
<p>The dataframe looks as follows:</p>
<pre><code>print(df)
col1 col2
0 1 12
1 1 3
2 1 4
3 1 5
4 1 4
5 1 3
6 1 2
7 2 3
8 2 4
9 2 6
10 2 7
11 3 8
12 3 3
13 3 3
14 3 65
15 3 4
16 4 3
17 4 2
18 4 32
19 4 1
20 4 2
21 4 3
22 4 4
23 4 5
24 4 32
</code></pre>
<p>Based on the values of column <code>col1</code>, I need to extract:</p>
<ul>
<li>3 random records where <code>col1 == 1</code></li>
<li>2 random records where <code>col1 == 2</code></li>
<li>2 random records where <code>col1 == 3</code></li>
<li>3 random records where <code>col1 == 4</code></li>
</ul>
<p>Can anyone help me please?</p>
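<p>One sketch: <code>groupby</code> plus a per-group <code>sample</code>, with the requested sizes kept in a dict keyed by the <code>col1</code> value (<code>random_state</code> is only there to make the draw reproducible):</p>

```python
import pandas as pd

ds = {'col1': [1,1,1,1,1,1,1,2,2,2,2,3,3,3,3,3,4,4,4,4,4,4,4,4,4],
      'col2': [12,3,4,5,4,3,2,3,4,6,7,8,3,3,65,4,3,2,32,1,2,3,4,5,32]}
df = pd.DataFrame(data=ds)

# Number of records to draw for each value of col1.
sizes = {1: 3, 2: 2, 3: 2, 4: 3}

sample = (
    df.groupby('col1', group_keys=False)
      .apply(lambda g: g.sample(n=sizes[g.name], random_state=42))
)
print(sample['col1'].value_counts().sort_index())
```

<p><code>group_keys=False</code> keeps the original row index instead of prepending the group key, so the result looks like a plain subset of <code>df</code>.</p>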
|
<python><pandas><dataframe><random><sampling>
|
2024-07-22 12:25:42
| 1
| 1,762
|
Giampaolo Levorato
|
78,778,569
| 1,424,746
|
Sentry Logging python integration - which handler?
|
<p>Suppose a logger has two handlers:</p>
<pre><code> ch = logging.StreamHandler(sys.stdout)
ch.setLevel(logging.DEBUG)
ch.setFormatter(logging.Formatter('%(message)s')) # Logs are pre-formatted as JSON strings
# File handler for custom human-readable output
fh = logging.FileHandler(os.environ["LOG_FILE_LOCATION"])
fh.setLevel(logging.DEBUG)
fh.setFormatter(CustomFileFormatter())
logger.addHandler(ch)
logger.addHandler(fh)
</code></pre>
<p>Which handler will Sentry use to display an error message? Also, if I want to custom-format the output that's sent to Sentry, what should I do? For example, if a sample message in Sentry looks like this:</p>
<pre><code>{"event": "Error event", "logger": "A.a1", "level": "error", "environment": "staging", "image_name": "X", "target_module": "Y", "filename": "/usr/local/lib/python3.10/site-packages/structlog/stdlib.py", "lineno": 247, "function": "_proxy_to_logger", "timestamp": "2024-07-22T06:58:00.499028Z"}
</code></pre>
<p>what can I do to get a Sentry record whose message is the event text, with all the other key/value pairs attached as tags?</p>
|
<python><logging><sentry>
|
2024-07-22 12:01:23
| 1
| 369
|
Ram
|
78,778,481
| 8,973,620
|
Joblib parallel processing introduces {built-in method time.sleep} in profiling
|
<p>I am using joblib to run parallel jobs in my Python application. While profiling, I noticed that the slowest entry was <code>{built-in method time.sleep}</code>. Interestingly, this disappears when I remove the joblib parallelism. Why does <code>{built-in method time.sleep}</code> become a bottleneck with joblib parallel processing?</p>
<p>Here is a simplified version of my code:</p>
<pre><code>from joblib import Parallel, delayed
def my_function(x):
# Some computation
return x * x
results = Parallel(n_jobs=2)(delayed(my_function)(i) for i in range(10))
</code></pre>
<p>Profiling Output</p>
<pre><code> ncalls tottime percall cumtime percall filename:lineno(function)
17153 209.317 0.012 209.317 0.012 {built-in method time.sleep}
588 0.835 0.001 0.835 0.001 {method 'poll' of 'select.poll' objects}
</code></pre>
|
<python><parallel-processing><multiprocessing><joblib>
|
2024-07-22 11:40:22
| 1
| 18,110
|
Mykola Zotko
|
78,778,438
| 12,331,569
|
A created Python thread gets stuck when the code is compiled to Cython
|
<p>I have been trying to figure out why a thread I create gets stuck when the code is compiled with Cython, even though it works normally under CPython...</p>
<p>main.py</p>
<pre class="lang-py prettyprint-override"><code>from count import count_launcher
if __name__ == "__main__":
tracker = {"running": True, "count": 0}
count_launcher(tracker)
</code></pre>
<p>count.py</p>
<pre class="lang-py prettyprint-override"><code>import threading
import time
def count(tracker: dict):
start = time.time()
while tracker["running"]:
tracker["count"] += 1
# While It works if I added this print statement:
# print("Hello, world!")
end = time.time()
print(f"Counted to: {tracker['count']} in {end - start} seconds")
def count_launcher(tracker: dict):
t = threading.Thread(target=count, args=(tracker,), daemon=True)
t.start()
print("Started the counting process...")
time.sleep(2)
tracker["running"] = False
t.join()
</code></pre>
<p>I have tried different Python versions (3.12, 3.11) with Cython but still get the same results. Also, there is something strange: when I add a print statement within the while loop of the count function, it works, which is very strange...</p>
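<p>A possible explanation (an assumption, not confirmed here): a tight Cython-compiled loop never executes Python bytecode, so the interpreter's periodic check that releases the GIL may never run, starving the other thread and explaining why a print statement (which releases the GIL on I/O) "fixes" it. A minimal pure-Python sketch of the mitigation is to yield explicitly inside the loop:</p>

```python
import threading
import time

def count(tracker: dict):
    start = time.time()
    while tracker["running"]:
        tracker["count"] += 1
        time.sleep(0)  # hypothetical mitigation: explicitly yields the GIL each iteration
    end = time.time()
    print(f"Counted to: {tracker['count']} in {end - start} seconds")

def count_launcher(tracker: dict):
    t = threading.Thread(target=count, args=(tracker,), daemon=True)
    t.start()
    time.sleep(0.2)  # shortened from 2s for the sketch
    tracker["running"] = False
    t.join()
```

<p>The counting rate drops, of course; this only illustrates the suspected GIL-starvation mechanism.</p>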
|
<python><cython><python-multithreading><cythonize>
|
2024-07-22 11:30:47
| 1
| 316
|
Muhammad Hawash
|
78,778,401
| 1,587,329
|
webbrowser.open steals focus, how to avoid
|
<p>On Windows, calling <a href="https://docs.python.org/3/library/webbrowser.html#webbrowser.open" rel="nofollow noreferrer"><code>webbrowser.open</code></a> steals the e.g. keyboard input focus, and sets it to the newly opened webbrowser tab.</p>
<p>Is there a way to avoid this, that is, to open the page in the background?</p>
|
<python><focus><python-webbrowser>
|
2024-07-22 11:21:12
| 1
| 38,730
|
serv-inc
|
78,778,227
| 5,255,911
|
How to make code formatting work again for Python (Visual Studio Code on Mac)?
|
<p>On Mac, <code>Option</code> +<code>Shift</code> + <code>F</code> now brings up the "There is no formatter for 'python' files installed." message box:</p>
<img src="https://i.sstatic.net/WK2r2vwX.png" width="300" />
<p>I tried installing this plugin, without a change seen to this situation:</p>
<img src="https://i.sstatic.net/H3UeWIbO.png" width="300" />
<p>I already have these two plugins installed for Python:</p>
<img src="https://i.sstatic.net/3K9Atc9l.png" width="450" />
<p>However as @starball mentioned, it may have <a href="https://stackoverflow.com/questions/77283648/vs-code-python-extension-circa-v2018-19-no-longer-includes-support-for-linters">reduced support</a> now.</p>
<p>So according to the new world, is there a different way that I can get the on-demand (keyboard shortcut based) automatic code formatting for Python in VSCode to work?</p>
<p>Versions:</p>
<ul>
<li>VS Code: 1.91.1 (Universal)</li>
<li>Python: 3.12.4</li>
</ul>
|
<python><visual-studio-code>
|
2024-07-22 10:44:40
| 5
| 896
|
MaduKan
|
78,778,114
| 19,626,271
|
Why is np.sort() with order slow on multi dtype arrays? (sorting by unsigned integer ID)
|
<p>Working with embedded software, I'm using <code>numpy</code> arrays with specified datatypes to save memory, among other reasons. I'm using version '1.23.4' and Python 3.10.0, but this seems to be an issue with the latest versions too. In a task I noticed that sorting this array with <code>np.sort(..., order='id')</code> seems to be very slow. This comes as a surprise: if we just sort the ID column separately with <code>np.sort</code>, it is handled quickly, and no other column is relevant for the sorting.</p>
<p>A similar small example:</p>
<pre><code>dtypes = [('x', 'u1'), ('id', '<u8')]
x_values = [n % 2 for n in range(5000000)]
ids = list(range(5000000, 0, -1))
x = np.array(list(zip(x_values, ids)), dtype=dtypes)
</code></pre>
<p>Now, if I run <code>np.sort(x, order='id')</code>, it takes around 15-20 seconds on my machine. Compared to <code>np.sort(x['id'])</code>, which runs in 1/100th of the time, it seems unreasonably slow. This suggests looking for workarounds that avoid sorting the complete array directly.</p>
<p>What increases this complexity when using a multi datatype array?</p>
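<p>One such workaround sketch (an illustration, not necessarily the fastest option): argsort only the <code>id</code> field, then reorder the full structured array with fancy indexing in a single gather:</p>

```python
import numpy as np

dtypes = [('x', 'u1'), ('id', '<u8')]
x = np.zeros(5, dtype=dtypes)
x['x'] = [n % 2 for n in range(5)]
x['id'] = range(5, 0, -1)

# sort the cheap scalar column only, then gather whole records once
order = np.argsort(x['id'], kind='stable')
x_sorted = x[order]
```

<p>This sidesteps whatever per-record comparison machinery <code>np.sort(..., order=...)</code> uses on structured dtypes.</p>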
|
<python><arrays><numpy><performance><sorting>
|
2024-07-22 10:17:56
| 0
| 395
|
me9hanics
|
78,777,978
| 10,267,104
|
Socket IO is rejecting requests from Nginx proxy
|
<p>I have this docker application running several containers. One of these containers is a Python application that can handle both socket io requests and normal HTTP requests. Django's ASGI handles HTTP/ASGI requests while <code>python-socketio</code> handles socket requests.</p>
<p>Since there are 4 other applications like this, which must all be served via Nginx, I have to specify namespace URLs for both the socket.io and ASGI applications for all of them.</p>
<p>I am using the following configuration for the application:</p>
<pre><code>location /app/ {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://app:8000/; # docker container name is "app"
}
location /app/socket.io/ {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://app:8000/socket.io/; # docker container name is "app"
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
}
</code></pre>
<p><code>asgi.py</code></p>
<pre class="lang-py prettyprint-override"><code>import os
import socketio
from django.core.asgi import get_asgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings")
django_application = get_asgi_application()
from app.async_socketio.server import sio # noqa 402
# this allows Socket IO to handle requests by default to a
# default path of /socket.io and forwards any other requests
# to the Django application.
application = socketio.ASGIApp(sio, django_application)
# just so all my socket event handles can load
from app.async_socketio.namespace import * # noqa 403
</code></pre>
<p>Entrypoint: <code>main.py</code> which uses Uvicorn</p>
<pre class="lang-py prettyprint-override"><code>import uvicorn
if __name__ == "__main__":
uvicorn.run(
"config.asgi:application",
host="0.0.0.0",
reload=True,
log_level="debug",
)
</code></pre>
<p>When I make a request to connect using Postman to URL, <code>http://localhost:8089/app/socket.io/?token=token&organisation_id=org_id</code>, I get the following logs:</p>
<pre><code>app_store_service | DEBUG: = connection is CONNECTING
app_store_service | DEBUG: < GET /socket.io/?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNzI0MjA5MzQwLCJpYXQiOjE3MjE2MTczNDAsImp0aSI6ImRlMmQ1ZmUyZTk5ZTQ0ZWY5YTdlMTc2NjhmM2UyNTZmIiwidXNlcl9pZCI6IjA2NjdmZDI4LWZkYWMtNzI1Yi04MDAwLWNjM2U5YWE2OGEzNSJ9.zAENI3Xl0k5efwE83u__svuXDUZajv00462XerXCo2c&organisation_id=06689368-db4a-70dd-8000-c37727d378ad&EIO=4&transport=websocket HTTP/1.1
app_store_service | DEBUG: < upgrade: websocket
app_store_service | DEBUG: < connection: upgrade
app_store_service | DEBUG: < host: localhost
app_store_service | DEBUG: < x-real-ip: 172.18.0.1
app_store_service | DEBUG: < x-forwarded-for: 172.18.0.1
app_store_service | DEBUG: < x-forwarded-proto: http
app_store_service | DEBUG: < sec-websocket-version: 13
app_store_service | DEBUG: < sec-websocket-key: OR4YrxOcqdovU4hNL+xkMw==
app_store_service | DEBUG: < sec-websocket-extensions: permessage-deflate; client_max_window_bits
app_store_service | INFO: ('172.18.0.21', 48800) - "WebSocket /socket.io/?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNzI0MjA5MzQwLCJpYXQiOjE3MjE2MTczNDAsImp0aSI6ImRlMmQ1ZmUyZTk5ZTQ0ZWY5YTdlMTc2NjhmM2UyNTZmIiwidXNlcl9pZCI6IjA2NjdmZDI4LWZkYWMtNzI1Yi04MDAwLWNjM2U5YWE2OGEzNSJ9.zAENI3Xl0k5efwE83u__svuXDUZajv00462XerXCo2c&organisation_id=06689368-db4a-70dd-8000-c37727d378ad&EIO=4&transport=websocket" [accepted]
app_store_service | DEBUG: > HTTP/1.1 101 Switching Protocols
app_store_service | DEBUG: > Upgrade: websocket
app_store_service | DEBUG: > Connection: Upgrade
app_store_service | DEBUG: > Sec-WebSocket-Accept: QhVhFHD/iSPmU+0qOzIVQgy0HRg=
app_store_service | DEBUG: > Sec-WebSocket-Extensions: permessage-deflate
app_store_service | DEBUG: > date: Mon, 22 Jul 2024 09:41:41 GMT
app_store_service | DEBUG: > server: uvicorn
app_store_service | INFO: connection open
app_store_service | DEBUG: = connection is OPEN
app_store_service | DEBUG: > TEXT '0{"sid":"POZUXfoHMZRUnheFAAAS","upgrades":[],"p...0,"pingInterval":25000}' [86 bytes]
app_store_service | DEBUG: < TEXT '40/app/socket.io/,' [21 bytes]
app_store_service | DEBUG: > TEXT '44/app/socket.io/,"Unable to connect"' [40 bytes]
app_store_service | DEBUG: < CLOSE 1005 (no status received [internal]) [0 bytes]
app_store_service | DEBUG: = connection is CLOSING
app_store_service | DEBUG: > CLOSE 1005 (no status received [internal]) [0 bytes]
app_store_service | DEBUG: x half-closing TCP connection
app_store_service | DEBUG: = connection is CLOSED
nginx-1 | 172.18.0.1 - - [22/Jul/2024:09:41:41 +0000] "GET /app/socket.io/?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNzI0MjA5MzQwLCJpYXQiOjE3MjE2MTczNDAsImp0aSI6ImRlMmQ1ZmUyZTk5ZTQ0ZWY5YTdlMTc2NjhmM2UyNTZmIiwidXNlcl9pZCI6IjA2NjdmZDI4LWZkYWMtNzI1Yi04MDAwLWNjM2U5YWE2OGEzNSJ9.zAENI3Xl0k5efwE83u__svuXDUZajv00462XerXCo2c&organisation_id=06689368-db4a-70dd-8000-c37727d378ad&EIO=4&transport=websocket HTTP/1.1" 101 128 "-" "-" "-"
app_store_service | INFO: connection closed
</code></pre>
<p>I can see the request is reaching the application but the connection gets closed before any application logic can be executed.</p>
<p>But if I connect directly to the application port and not through Nginx, it works without any issues.</p>
|
<python><django><sockets><nginx><python-socketio>
|
2024-07-22 09:46:24
| 2
| 369
|
lordsarcastic
|
78,777,855
| 4,451,315
|
Dummy-encoded PyArrow table from PyArrow ChunkedArray, without going through pandas?
|
<p>Say I have</p>
<pre><code>import pyarrow as pa
ca = pa.chunked_array([['a', 'b', 'b', 'c']])
print(ca)
</code></pre>
<pre><code><pyarrow.lib.ChunkedArray object at 0x7fc938bcea70>
[
[
"a",
"b",
"b",
"c"
]
]
</code></pre>
<p>I'd like to end up with:</p>
<pre class="lang-py prettyprint-override"><code>pyarrow.Table
_a: uint8
_b: uint8
_c: uint8
----
_a: [[1,0,0,0]]
_b: [[0,1,1,0]]
_c: [[0,0,0,1]]
</code></pre>
<p>How can I do this?</p>
<p>I'm aware that it's possible to do this by converting to pandas, but is it possible to do it with just PyArrow (to avoid taking on an extra dependency)?</p>
<p><strong>EDIT</strong>: numpy is already a required dependency of PyArrow, so I'm OK with using that. However, it's not the ideal solution, so a solution that avoids NumPy would be best.</p>
|
<python><pyarrow>
|
2024-07-22 09:16:18
| 3
| 11,062
|
ignoring_gravity
|
78,777,829
| 5,269,892
|
PyCharm breakpoint in Ipython console not triggering
|
<p>I have two python scripts, <em>test_breakpoints.py</em> and <em>test_breakpoints_import.py</em> (see below), with breakpoints set in <em>PyCharm</em>. The former script imports a function from the latter.</p>
<p>When using the interactive console (IPython), connecting the debugger, and running code (via Alt+Shift+e), PyCharm ignores the breakpoints in <em>test_breakpoints.py</em> and only triggers on the breakpoint in the imported function from <em>test_breakpoints_import.py</em>. This is different from <em>Debug Script</em> (via Shift+F9), which triggers on the first breakpoint.</p>
<p><strong>Why are breakpoints for non-imported code ignored when using the debugger inside the interactive console? Is there a way that PyCharm triggers these breakpoints (without putting the code into a different script and importing it from there, or debugging the entire script)?</strong></p>
<p>I am often using the console for interactive coding and would like to avoid running entire scripts just to debug small code parts.</p>
<hr />
<p><em><strong>test_breakpoints.py:</strong></em></p>
<pre><code>from test_breakpoints_import import myfunc_import
def myfunc_local():
print('Bye')
print('Hello')
myfunc_local()
myfunc_import()
</code></pre>
<p><a href="https://i.sstatic.net/0kgF5mYC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0kgF5mYC.png" alt="enter image description here" /></a></p>
<p><em><strong>test_breakpoints_import.py:</strong></em></p>
<pre><code>def myfunc_import():
print('Hi')
</code></pre>
<p><a href="https://i.sstatic.net/4aY9bJCL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4aY9bJCL.png" alt="enter image description here" /></a></p>
<hr />
<p><strong>Running code in IPython interactive console (Alt+Shift+e):</strong></p>
<p><a href="https://i.sstatic.net/19zSDru3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19zSDru3.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/4a6rAiDL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4a6rAiDL.png" alt="enter image description here" /></a></p>
<p><strong>Debug script (Shift+F9):</strong></p>
<p><a href="https://i.sstatic.net/266us5KM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/266us5KM.png" alt="enter image description here" /></a></p>
|
<python><pycharm><breakpoints>
|
2024-07-22 09:10:29
| 0
| 1,314
|
silence_of_the_lambdas
|
78,777,372
| 16,723,655
|
Pydub library has an error with version check
|
<p>I am just curious why only this library raises an error when checking its version.</p>
<p>Below is the code.</p>
<pre><code>import pydub
print('Pydub version: ', pydub.__version__)
</code></pre>
<p>I got the error as below.</p>
<pre><code>AttributeError: module 'pydub' has no attribute '__version__'
</code></pre>
<p>Other libraries can print their versions.</p>
<p>I could check the version (0.25.1) of 'pydub' from the output of installing it at the prompt, as below.</p>
<pre><code>pip install pydub
</code></pre>
<p>Does anyone know the reason for this version-check error in 'Pydub'?</p>
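<p>For reference, the version can be read from the installed package's metadata with the standard library, which works even for packages that do not define a <code>__version__</code> attribute (a sketch; whether pydub is installed in your environment is an assumption):</p>

```python
from importlib.metadata import version, PackageNotFoundError

def dist_version(name: str):
    # reads the installed distribution's metadata instead of a
    # module-level __version__ attribute
    try:
        return version(name)
    except PackageNotFoundError:
        return None

print(dist_version("pydub"))
```
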
|
<python><version><pydub>
|
2024-07-22 07:15:17
| 1
| 403
|
MCPMH
|
78,777,096
| 972,647
|
Python: Submitting and tracking many subprocesses leads to "stuck" subprocesses
|
<p>I have a 3rd-party CLI executable I need to call from my Python code. These are heavy (CPU) calculations and I need to call it around 50-100 times. The executable itself is to some degree multi-threaded, but not all steps are, and I have a lot of cores available.</p>
<p>This means I want to have multiple subprocesses running at once, but not all of them. So I need to submit some of them, track when one completes, and then start a new one to optimize CPU usage.</p>
<p>I had a working version, but it was very naive. It just waited for the first submitted process to complete. But that is data-dependent, so at some point a very long-running process was the "first" one submitted of the remaining ones, and it blocked any other process from being submitted until it completed. There is also a long sequential/IO? phase, during which this long-running process can sit at 1% CPU for hours while the CPU is mostly idle.</p>
<p>So I needed to optimize my process submission code:</p>
<pre><code>num_concurrent_processes = 6
for path in files:
# Building cmd omitted
rs_process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True,
bufsize=1, creationflags=0x00000008)
rs_processes.append(rs_process)
if len(rs_processes) >= num_concurrent_processes:
# wait until at least one process has completed before submitting the next one
continue_polling = True
while continue_polling:
for idx, proc in enumerate(rs_processes):
poll = proc.poll()
if poll is not None:
# process complete
stdout_data, stderr_data = proc.communicate()
# removed some logging in case of error
#remove completed process from list
del rs_processes[idx]
# exit polling loop as a new subprocess can be submitted
continue_polling = False
break
if continue_polling:
# Put some breaks on the polling
time.sleep(10)
</code></pre>
<p>I'm blocking submission of more subprocesses if the limit is reached. Then I poll until I find a process that finished (<a href="https://docs.python.org/3/library/subprocess.html#subprocess.Popen.poll" rel="nofollow noreferrer">poll is not None according to python documentation</a>). I get the output from the process via communicate and do some action on it (logging, not shown). Then I remove the process from the "tracking list" and exit from the polling loop.</p>
<p>But there is some kind of logic flaw in the code: when running it, "stale" processes accumulate. They all sit at 0% CPU, but somehow the polling doesn't consider them completed. And once there are <code>num_concurrent_processes</code> stale processes, progress stops entirely. What is going wrong here? Does using <code>poll()</code> mean I now need to manually terminate the process? With the old method all processes ran just fine and stopped on their own, and I'm using the same data here with the new method.</p>
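<p>One suspicion worth testing (an assumption, not a confirmed diagnosis): with <code>stdout=PIPE</code> and no one reading until after <code>poll()</code> reports completion, a chatty child can fill the OS pipe buffer and block at 0% CPU while still counted as running. A sketch of an alternative pattern that avoids this: a thread pool caps concurrency and each worker calls <code>communicate()</code>, which drains stdout continuously. The <code>cmds</code> list here is a hypothetical stand-in for the real CLI invocations:</p>

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_one(cmd):
    # communicate() reads stdout as the child produces it, so the child
    # can never block on a full pipe buffer
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    out, _ = proc.communicate()
    return proc.returncode, out

# hypothetical commands standing in for the real executable
cmds = [[sys.executable, "-c", f"print({i} ** 2)"] for i in range(4)]

with ThreadPoolExecutor(max_workers=2) as pool:  # caps concurrency
    results = list(pool.map(run_one, cmds))
```

<p>The pool also removes the hand-rolled polling loop entirely: a new command starts the moment a worker thread frees up.</p>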
|
<python><popen>
|
2024-07-22 05:42:54
| 2
| 7,652
|
beginner_
|
78,777,074
| 1,488,641
|
Uploaded media doesn't serve in production mode without restarting Django
|
<p>I need to serve media in Django in production mode; it is only a small need, to serve Telegram user photos in the Django admin. I know Django is not meant for serving files or media, so there is no need to repeat that. I just need to serve media in production mode for my purpose, so I use WhiteNoise and added these lines:</p>
<pre><code>MIDDLEWARE = [
...
'whitenoise.middleware.WhiteNoiseMiddleware',
...
]
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'medias')
</code></pre>
<p>in urls.py:</p>
<pre><code>from django.conf import settings
from django.conf.urls.static import static
from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('admin/', admin.site.urls),
path('', include('botAPI.urls')),
]
if settings.DEBUG:
urlpatterns += static(settings.STATIC_URL, document_root=settings.STATICFILES_DIRS[0])
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
else:
urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
</code></pre>
<p>in wsgi.py I put this:</p>
<pre><code>...
application = get_wsgi_application()
application = WhiteNoise(application, root=MEDIA_ROOT, prefix='/media/')
....
</code></pre>
<p>It works correctly and serves media files in production mode. But for new media, for example an image uploaded in the Django admin, I have to restart Django to access that media. Is there any way to solve this problem, or another way in Django to serve media files dynamically? (I can't use any external services, cloud, CDN, or web server.) Everything should work with just Django running.</p>
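<p>One detail worth checking (based on WhiteNoise's documented behaviour, applied here as an assumption): by default WhiteNoise scans the servable files once, at startup, which would explain why new uploads only appear after a restart. The <code>autorefresh</code> option makes it check the filesystem on each request instead; a sketch of the tweak in <code>wsgi.py</code>:</p>

```python
application = get_wsgi_application()
# autorefresh=True: re-check the filesystem per request instead of
# building the file index once at startup; documented mainly for
# development, so weigh the per-request stat() cost before relying
# on it in production
application = WhiteNoise(application, root=MEDIA_ROOT, prefix='/media/',
                         autorefresh=True)
```
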
|
<python><django><webserver><django-media><whitenoise>
|
2024-07-22 05:33:28
| 1
| 435
|
minttux
|
78,776,546
| 3,486,684
|
How do I type annotate a variable that stores a type annotation?
|
<p>Suppose I have:</p>
<pre class="lang-py prettyprint-override"><code>x: type = Optional[str] # <--- type error
</code></pre>
<p>This results in the type error:</p>
<pre><code>Expression of type "UnionType" is incompatible with declared type "type"
"UnionType" is incompatible with "type"
</code></pre>
<p>I do not want to type annotate <code>x</code> with <code>typing.UnionType</code>, because in general it could be any annotation. For example:</p>
<pre><code>x: ? = NamedTuple
x: ? = MyClass
x: ? = str
x: ? = Any
</code></pre>
<p>Whatever I put down for <code>?</code> should work for a broad class of type annotations. Do I have any good options beyond <code>Any</code>?</p>
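<p>For reference, one pragmatic option (a sketch, not necessarily the only answer): annotate with <code>object</code>, which is a valid static type for plain classes, unions, and typing special forms alike, since every runtime value of an annotation is at minimum an object, while still being narrower than <code>Any</code>:</p>

```python
from typing import Any, NamedTuple, Optional

# every runtime annotation value is an object, so "object" type-checks
# for classes, unions, and special forms alike
x: object = Optional[str]
y: object = NamedTuple
z: object = str
w: object = Any
```

<p>The trade-off: <code>object</code> forbids any further operations on the value without narrowing, whereas <code>Any</code> permits everything unchecked.</p>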
|
<python><python-typing><pyright>
|
2024-07-21 23:20:54
| 1
| 4,654
|
bzm3r
|
78,776,507
| 3,486,684
|
How to inherit (from a parent class) dataclass field introspection functionality?
|
<p>I have a parent dataclass, and various other classes will then extend this parent dataclass. Let's call these dataclasses <code>DC</code>s. In the example code below, see <code>ParentDC</code>, and an example <code>ChildDC</code>:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass, field, fields
from typing import Optional
@dataclass
class ParentDC:
type_map: dict[str, type] = field(init=False, metadata={"support": True})
primary_fields: set[str] = field(
init=False, default_factory=set, metadata={"support": True}
)
secondary_fields: set[str] = field(
init=False, default_factory=set, metadata={"support": True}
)
dtype_map: dict[str, type] = field(
init=False, default_factory=dict, metadata={"support": True}
)
def __init_subclass__(cls, type_map: dict[str, type]) -> None:
print(cls.__class__.__qualname__)
cls.type_map = type_map
cls.primary_fields = set()
cls.secondary_fields = set()
field_data = fields(cls)
for fdat in field_data:
if not fdat.metadata.get("support", False):
if fdat.metadata.get("secondary", False):
cls.secondary_fields.add(fdat.name)
else:
cls.primary_fields.add(fdat.name)
cls.dtype_map = {
k: type_map[v].dtype
for k, v in cls.__annotations__.items()
if k in cls.primary_fields.union(cls.secondary_fields)
}
type_map = {
"alpha": int,
"beta": float,
}
@dataclass
class ChildDC(ParentDC, type_map=type_map):
alpha: Optional[str] = field(
default=None, kw_only=True, metadata={"secondary": True}
)
beta: str = field(kw_only=True)
print(f"{ChildDC.primary_fields=}")
print(f"{ChildDC.secondary_fields=}")
print(f"{ChildDC.dtype_map=}")
</code></pre>
<p>I want to create some "introspection functionality" common to all <code>DC</code>s. This introspection functionality rests at the <em>class</em> level: you should not need to create an <em>instance</em> of <code>ChildDC</code> to be able to access which fields are its primary fields, etc.</p>
<p>The code as it stands does not work:</p>
<pre><code>type
ChildDC.primary_fields=set()
ChildDC.secondary_fields=set()
ChildDC.dtype_map={}
</code></pre>
<p>And I have some inkling of why: <code>__init_subclass__</code> is the wrong place for such introspection data to be generated and stored, because the parent does not yet have access to the child's fields in <code>__init_subclass__</code>.</p>
<p>Where should this introspection information be generated and visualized?</p>
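<p>For context, a sketch of one possible relocation (an illustration, not necessarily the only fix): <code>__init_subclass__</code> fires while the child class is being created, before the <code>@dataclass</code> decorator has processed the child's fields, so one option is a class decorator applied after <code>@dataclass</code>. This is a simplified stand-in for the real hierarchy above:</p>

```python
from dataclasses import dataclass, field, fields

def register_fields(cls):
    # runs after @dataclass, so fields(cls) already includes the
    # child's own fields
    cls.primary_fields = {
        f.name for f in fields(cls) if not f.metadata.get("support", False)
    }
    return cls

@dataclass
class Parent:
    primary_fields: set = field(
        init=False, default_factory=set, metadata={"support": True}
    )

@register_fields
@dataclass
class Child(Parent):
    alpha: int = 0
    beta: str = "x"

print(Child.primary_fields)  # {'alpha', 'beta'} (set order may vary)
```

<p>The introspection data still lives at the class level, so no instance of <code>Child</code> is needed to read it.</p>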
|
<python><inheritance><python-dataclasses><python-class>
|
2024-07-21 22:53:44
| 1
| 4,654
|
bzm3r
|
78,776,495
| 3,486,684
|
No hang when stepping manually through problematic code; otherwise, it hangs. What tools does the editor + debugger have to examine?
|
<p>I have a problematic section of code. The start of the section is decorated with a print statement, as is the end:</p>
<pre class="lang-py prettyprint-override"><code>print("starting problematic section") # <-- breakpoint set here
# various calls # <--- program hangs somewhere here if not manually stepping through
print("done.") # <-- breakpoint set here
</code></pre>
<p>When I use the debugger to step through the code manually everything works okay, and I hit both starting and ending print statements.</p>
<p>When I remove the breakpoints and do not step through the code manually, program execution hangs. Therefore, it seems useful to be able to see where the program currently is in its execution, independently of any breakpoints I might have set.</p>
<p>How can I do so?</p>
<p>I have two ideas:</p>
<ul>
<li><p>Can the editor continuously and automatically jump me to whatever point in the source corresponds to where the program currently is in its execution? (Note: this would be happening <em>without</em> any breakpoints being set, or any manual stepping through.)</p>
</li>
<li><p>Is it possible to examine a live-updating trace/call-stack of the program's execution?</p>
</li>
</ul>
<p>(An unfeasible idea I considered: setting a <a href="https://code.visualstudio.com/docs/editor/debugging#_logpoints" rel="nofollow noreferrer">"logpoint"</a> on every single line of source code, which prints out the file/line of the breakpoint. Obviously, setting a logpoint on every line in every file is not possible, since it would be awfully time consuming. But the idea illustrates what I wish I could do: finding out where my program is stuck when breakpoints/print-statements are not enough.)</p>
|
<python><visual-studio-code><debugging>
|
2024-07-21 22:40:26
| 0
| 4,654
|
bzm3r
|
78,776,454
| 3,329,877
|
NumPy successive matrix multiplication vectorization
|
<p>I have a NumPy array <code>ar</code> of shape <code>(M, L, 2, 2)</code>. <code>M</code> is usually in the hundreds or thousands, <code>L</code> is usually <code><= 5</code>.</p>
<p>I want to multiply the <code>L</code> <code>(N, N)</code> matrices successively (<code>multiplied_ar[m] = ar[m, 0, :, :] @ ar[m, 1, :, :] @ ...</code>) to get an array of shape <code>(M, 2, 2)</code>. I don't care about the intermediary results, just the final result.</p>
<p>Is it possible to <strong>vectorize</strong> this somehow so I wouldn't have to iterate over <code>m</code> and <code>l</code>?</p>
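<p>For reference, a partial-vectorization sketch (assuming iterating over only the short <code>L</code> axis is acceptable): the <code>@</code> operator batches over leading dimensions, so the whole <code>M</code> axis is handled in vectorized code and only <code>L - 1</code> Python-level iterations remain:</p>

```python
import numpy as np

def chain_matmul(ar):
    # ar: (M, L, 2, 2); '@' multiplies the trailing (2, 2) matrices,
    # broadcasting the product over the M axis
    out = ar[:, 0]
    for l in range(1, ar.shape[1]):
        out = out @ ar[:, l]  # equivalent: np.einsum('mij,mjk->mik', out, ar[:, l])
    return out
```

<p>Since <code>L &lt;= 5</code>, the remaining loop cost is negligible compared to the work over the hundreds or thousands of <code>m</code> entries.</p>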
|
<python><numpy><numpy-ndarray><matrix-multiplication><numpy-einsum>
|
2024-07-21 22:08:23
| 1
| 397
|
VaNa
|
78,776,354
| 3,437,787
|
How to fill missing values based on relationships between existing data
|
<h2>Question</h2>
<p>How to fill missing values of a pandas dataframe based on the relationship between an existing preceding row (predictions for a commodity) and an associated existing value in another column (actual values for a commodity).</p>
<h2>Details</h2>
<p>I have a pandas dataframe with 10 columns and 40 rows. The columns are <code>Date, Actual, time_from_actual_1, time_from_actual_2, time_from_actual_3...</code> up to <code>time_from_actual_8</code>.</p>
<p>The <code>Actual</code> column contains actual values for a commodity with hourly timestamps in the <code>Date</code> column. The <code>time_from_actual</code> columns are predictions forward in time for the same commodity. These are produced once per day, hence the pre-existing values at <code>Index 0</code> and <code>Index 1</code>. And so, there are 23 missing observations per day.</p>
<h3>Input dataframe</h3>
<p><a href="https://i.sstatic.net/fzIyjW76.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzIyjW76.png" alt="inptu_df_vscode" /></a></p>
<p>I would like to fill those missing values in a very specific way. I want the values for "time_from_actual" at index 1 up to 24 to follow the same pattern as the first column with regard to the differences between the different timesteps and the actual values.</p>
<h3>Output dataframe</h3>
<p><a href="https://i.sstatic.net/FY4oHrVo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FY4oHrVo.png" alt="output_df_vscode" /></a></p>
<p>I've succeeded doing this with a <code>nested for loop</code>, but I would very much like to see suggestions for more elegant approaches. Below you will find a complete attempt with sample data, code and output. Thank you for any suggestions!</p>
<h3>Code</h3>
<pre><code># imports
import pandas as pd
import numpy as np
# Random seed
np.random.seed(42)
# Sample data
data = {
'Date': pd.date_range(start='2023-01-01', periods=40, freq='H'),
'Actual': [100, 99.72, 101.02, 104.06, 103.60, 103.13, 106.29, 107.82, 106.88, 107.97,
107.04, 106.11, 106.59, 102.77, 99.32, 98.19, 96.17, 96.80, 94.98, 92.15,
95.09, 94.63, 94.77, 91.92, 90.83, 91.05, 88.75, 89.50, 88.30, 87.72,
86.51, 90.22, 90.19, 88.08, 89.72, 87.28, 87.70, 83.78, 81.12, 131.52],
'time_from_actual_1': [97] + [np.nan]*23 + [90] + [np.nan]*15,
'time_from_actual_2': [99] + [np.nan]*23 + [89] + [np.nan]*15,
'time_from_actual_3': [98] + [np.nan]*23 + [88] + [np.nan]*15,
'time_from_actual_4': [97] + [np.nan]*23 + [87] + [np.nan]*15,
'time_from_actual_5': [96] + [np.nan]*23 + [86] + [np.nan]*15,
'time_from_actual_6': [95] + [np.nan]*23 + [85] + [np.nan]*15,
'time_from_actual_7': [94] + [np.nan]*23 + [84] + [np.nan]*15,
'time_from_actual_8': [93] + [np.nan]*23 + [83] + [np.nan]*15,
}
# dataframe
df = pd.DataFrame(data)
# copy of the dataframe to reference original values only
original_df = df.copy()
# Fill missing values for columns starting with "time_from_actual"
time_cols = [col for col in df.columns if col.startswith('time_from_actual')]
for col in time_cols:
for i in range(1, len(df)):
if pd.isnull(df.loc[i, col]):
j = i
while j < len(df) and pd.isnull(original_df.loc[j, col]):
previous_actual = df.loc[j - 1, 'Actual']
previous_time = df.loc[j - 1, col]
current_actual = df.loc[j, 'Actual']
difference = previous_time - previous_actual
df.loc[j, col] = current_actual + difference
j += 1
</code></pre>
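<p>For comparison, a vectorized sketch of the same logic (derived from the loop above: each filled value equals <code>Actual</code> plus the offset of the last originally observed prediction, and that offset stays constant until the next observation, so it can simply be forward-filled). The tiny <code>df</code> here is a hypothetical stand-in for the real data:</p>

```python
import numpy as np
import pandas as pd

def fill_offsets(df, time_cols):
    # offset of each original prediction from Actual; NaN where missing
    offset = df[time_cols].sub(df['Actual'], axis=0)
    # the offset carries forward unchanged between observations
    filled = offset.ffill().add(df['Actual'], axis=0)
    out = df.copy()
    out[time_cols] = df[time_cols].fillna(filled)
    return out

df = pd.DataFrame({
    'Actual': [10.0, 12.0, 11.0, 15.0],
    'time_from_actual_1': [13.0, np.nan, np.nan, 16.0],
})
out = fill_offsets(df, ['time_from_actual_1'])
```

<p>No Python-level loops remain; the offset arithmetic applies to all <code>time_from_actual</code> columns at once.</p>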
|
<python><pandas><missing-data>
|
2024-07-21 21:09:55
| 3
| 62,064
|
vestland
|
78,776,334
| 13,559,669
|
pd.to_datetime() not working for old dates
|
<p>When trying to convert the "Date" field of a pandas DataFrame using <code>pd.to_datetime()</code>, I get an "OutOfBoundsDatetime" error only when the date is before the year 1700 (otherwise the conversion works fine). Any help?</p>
<pre><code>df = pd.read_excel('Timeline_SecondMillenialH2.xlsx')
df['Date'] = pd.to_datetime(df['Date'], format='%Y-%m-%d')
</code></pre>
<p>First 2 records of the Excel file for reference:</p>
<p><a href="https://i.sstatic.net/Qs5XFL2n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qs5XFL2n.png" alt="First 2 records of the Excel file for reference" /></a></p>
<p>Error:</p>
<pre><code>OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 1516-01-01 00:00:00
</code></pre>
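<p>For reference, one workaround sketch (there are others, e.g. non-nanosecond datetime dtypes in pandas &gt;= 2.0): represent the dates as <code>Period</code>s, which are not limited to the nanosecond <code>Timestamp</code> range of roughly 1677-2262 that causes the error above:</p>

```python
import pandas as pd

# daily-frequency periods cover arbitrary years, unlike ns Timestamps
dates = pd.PeriodIndex(["1516-01-01", "1687-07-05"], freq="D")
print(dates.year.tolist())  # [1516, 1687]
```

<p>Periods support most calendar accessors (<code>.year</code>, <code>.month</code>, ordering), though not full <code>Timestamp</code> arithmetic.</p>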
|
<python><pandas><dataframe><datetime><python-datetime>
|
2024-07-21 20:54:03
| 0
| 436
|
el_pazzu
|
78,776,284
| 3,579,212
|
How to register Django hooks with PyInstaller
|
<p>I want to use the Django <a href="https://github.com/pyinstaller/pyinstaller/blob/develop/PyInstaller/hooks/hook-django.py" rel="nofollow noreferrer">hooks</a> in Pyinstaller. I have tried the following:</p>
<pre><code> python -m PyInstaller
--runtime-hook='hook-django.contrib.sessions.py'
--runtime-hook='hook-django.core.cache.py'
--runtime-hook='hook-django.core.management.py'
--runtime-hook='hook-django.db.backends.py'
--runtime-hook='hook-django.py'
--runtime-hook='hook-django.template.loaders.py'
...
--noconfirm
--console
--clean
manage.py
</code></pre>
<p>But I am getting</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: '.../hook-django.*.py'
</code></pre>
<p>Should I use the <code>additional-hooks-dir</code> parameter even with the built-in hooks, if so what value should I provide?</p>
|
<python><django><pyinstaller>
|
2024-07-21 20:21:22
| 2
| 3,664
|
adnanmuttaleb
|
78,776,268
| 3,486,684
|
How can I efficiently `fill_null` only certain columns of a DataFrame?
|
<p>For example, let us say I want to <code>fill_null(strategy="zero")</code> only the numeric columns of my DataFrame. My current strategy is to do this:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import polars.selectors as cs
df = pl.DataFrame(
[
pl.Series("id", ["alpha", None, "gamma"]),
pl.Series("xs", [None, 100, 2]),
]
)
final_df = df.select(cs.exclude(cs.numeric()))
final_df = final_df.with_columns(
df.select(cs.numeric()).fill_null(strategy="zero")
)
print(final_df)
</code></pre>
<pre><code>shape: (3, 2)
┌───────┬─────┐
│ id ┆ xs │
│ --- ┆ --- │
│ str ┆ i64 │
╞═══════╪═════╡
│ alpha ┆ 0 │
│ null ┆ 100 │
│ gamma ┆ 2 │
└───────┴─────┘
</code></pre>
<p>Are there alternative, either more idiomatic or more efficient methods to achieve what I'd like to do?</p>
|
<python><python-polars>
|
2024-07-21 20:11:25
| 1
| 4,654
|
bzm3r
|
78,776,267
| 4,974,980
|
Blocking OpenAI Requests
|
<p>After multiple attempts at this myself along with numerous ChatGPT and Claude queries, I am throwing my hands up and asking StackOverflow what would seem to be an easy question:</p>
<p>How does one block OpenAI, at the global level, from making HTTP requests when testing using pytest?</p>
<p>Things I have tried (and this list is likely forgetting many):</p>
<ol>
<li>Use <code>httpx-blockage</code></li>
<li>In <code>conftest.py</code> add in a patch for OpenAI:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture(autouse=True)
def block_azure_openai():
with patch.object(AzureOpenAI, '__init__', return_value=None):
yield
</code></pre>
<ol start="3">
<li>In <code>conftest.py</code> patch OpenAI <em>and</em> the utility function that returns an instance of the OpenAI client:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture(autouse=True)
def block_httpx(monkeypatch):
with patch('ai.utils.get_openai_client', return_value=Mock()), patch('ai.utils.AzureOpenAI', return_value=None):
yield
</code></pre>
<ol start="4">
<li>Patch the AsyncClient</li>
</ol>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture(scope='session')
async def block_httpx(monkeypatch):
async def mock_httpx_request(*args, **kwargs):
raise RuntimeError("HTTPx requests are blocked during tests")
monkeypatch.setattr(httpx.AsyncClient, "request", mock_httpx_request)
# Add more if needed for other methods like .get, .post, etc.
</code></pre>
<ol start="5">
<li>Patch the Client</li>
</ol>
<pre><code>@pytest.fixture(scope='session')
def block_httpx(monkeypatch):
def mock_httpx_request(*args, **kwargs):
raise RuntimeError("HTTPx requests are blocked during tests")
with patch('httpcore'):
monkeypatch.setattr(httpx.Client, "request", mock_httpx_request)
yield
</code></pre>
<p>None of these have worked, and while running tests I continue to see requests coming in via Azure's metrics.</p>
<p><a href="https://i.sstatic.net/26p6U5yM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/26p6U5yM.png" alt="OpenAI Azure Requests" /></a></p>
<p>The only idea I have remaining is to quit this job so the user story can be someone else's problem. Hoping someone here has one better than that though 😛</p>
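<p>For completeness, the most drastic approach I can think of (untested against the real suite) is patching the socket layer itself, below httpx and the OpenAI client entirely, so any outbound connection attempt fails fast:</p>
<pre class="lang-py prettyprint-override"><code>import socket

_real_connect = socket.socket.connect

def _blocked_connect(self, address):
    # Fail fast before any bytes leave the machine.
    raise RuntimeError(f"Network call blocked during tests: {address}")

def block_network():
    """Patch socket.connect so every outbound connection attempt raises."""
    socket.socket.connect = _blocked_connect

def unblock_network():
    socket.socket.connect = _real_connect
</code></pre>
<p>In <code>conftest.py</code> this would be wrapped in an autouse fixture that calls <code>block_network()</code> before each test and <code>unblock_network()</code> after.</p>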
|
<python><unit-testing><openai-api><azure-openai>
|
2024-07-21 20:10:36
| 0
| 2,454
|
Jens Astrup
|
78,776,035
| 19,251,893
|
nlst times out while connecting FTPS server with Python
|
<p>I can log in with Total Commander to<br />
server: ftps://publishedprices.co.il<br />
username: "XXXX"<br />
password empty</p>
<p>And with</p>
<pre><code>lftp -u XXXX: publishedprices.co.il
</code></pre>
<p>But when I try to log in and get the file list with Python on the same machine, the <code>nlst</code> call times out.</p>
<p>Code:</p>
<pre><code>from ftplib import FTP_TLS
ftp_server = "publishedprices.co.il"
username = 'XXXX'
password = ""
ftps = FTP_TLS()
ftps.set_debuglevel(2)
ftps.connect(ftp_server,timeout=30)
print('connected')
ftps.login(username, password)
ftps.prot_p()
print('log in')
file_list = ftps.nlst()
</code></pre>
<p>debug print:</p>
<pre class="lang-none prettyprint-override"><code>*get* '220-Welcome to Public Published Prices Server\n'
*get* '220- Created by NCR L.T.D\n'
*get* '220-\n'
*get* '220-\n'
*get* '220 ** The site is open! Have a good day.\n'
*resp* '220-Welcome to Public Published Prices Server\n220- Created by NCR L.T.D\n220-\n220-\n220 ** The site is open! Have a good day.'
connected
*cmd* 'AUTH TLS'
*put* 'AUTH TLS\r\n'
*get* '234 Authentication method accepted\n'
*resp* '234 Authentication method accepted'
*cmd* 'USER XXXX'
*put* 'USER XXXX\r\n'
*get* '331 User XXXX, password please\n'
*resp* '331 User XXXX, password please'
*cmd* 'PASS '
*put* 'PASS \r\n'
*get* '230 Password Ok, User logged in\n'
*resp* '230 Password Ok, User logged in'
*cmd* 'PBSZ 0'
*put* 'PBSZ 0\r\n'
*get* '200 PBSZ=0\n'
*resp* '200 PBSZ=0'
*cmd* 'PROT P'
*put* 'PROT P\r\n'
*get* '200 PROT P OK, data channel will be secured\n'
*resp* '200 PROT P OK, data channel will be secured'
log in
*cmd* 'TYPE A'
*put* 'TYPE A\r\n'
*get* '200 Type ASCII\n'
*resp* '200 Type ASCII'
*cmd* 'PASV'
*put* 'PASV\r\n'
*get* '227 Entering Passive Mode (194,90,26,21,47,54)\n'
*resp* '227 Entering Passive Mode (194,90,26,21,47,54)'
</code></pre>
<p>Exception:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "c:\project\test.py", line 18, in <module>
file_list = ftps.nlst()
^^^^^^^^^^^
File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\ftplib.py", line 553, in nlst
self.retrlines(cmd, files.append)
File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\ftplib.py", line 462, in retrlines
with self.transfercmd(cmd) as conn, \
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\ftplib.py", line 393, in transfercmd
return self.ntransfercmd(cmd, rest)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\ftplib.py", line 793, in ntransfercmd
conn, size = super().ntransfercmd(cmd, rest)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\ftplib.py", line 354, in ntransfercmd
conn = socket.create_connection((host, port), self.timeout,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\socket.py", line 851, in create_connection
raise exceptions[0]
File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\socket.py", line 836, in create_connection
sock.connect(sa)
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
</code></pre>
<p>When I set
<code>ftps.set_pasv(False)</code>
I got</p>
<pre class="lang-none prettyprint-override"><code>*get* '220-Welcome to Public Published Prices Server\n' [27/11310]*get* '220- Created by NCR L.T.D\n'
*get* '220-\n'
*get* '220-\n'
*get* '220 ** The site is open! Have a good day.\n'
*resp* '220-Welcome to Public Published Prices Server\n220- Created by NCR L.T.D\n220-\n220-\n220 ** The site is open! Have a good day.'
connected
*cmd* 'AUTH TLS'
*put* 'AUTH TLS\r\n'
*get* '234 Authentication method accepted\n'
*resp* '234 Authentication method accepted'
*cmd* 'USER XXXX'
*put* 'USER XXXX\r\n'
*get* '331 User XXXX, password please\n'
*resp* '331 User XXXX, password please'
*cmd* 'PASS '
*put* 'PASS \r\n'
*get* '230 Password Ok, User logged in\n'
*resp* '230 Password Ok, User logged in'
*cmd* 'PBSZ 0'
*put* 'PBSZ 0\r\n'
*get* '200 PBSZ=0\n'
*resp* '200 PBSZ=0'
*cmd* 'PROT P'
*put* 'PROT P\r\n'
*get* '200 PROT P OK, data channel will be secured\n'
*resp* '200 PROT P OK, data channel will be secured'
log in
*cmd* 'TYPE A'
*put* 'TYPE A\r\n'
*get* '200 Type ASCII\n'
*resp* '200 Type ASCII'
*cmd* 'PORT 172,17,118,200,159,233'
*put* 'PORT 172,17,118,200,159,233\r\n'
*get* '500 Port command invalid\n'
*resp* '500 Port command invalid'
</code></pre>
<p>lftp log:</p>
<pre class="lang-none prettyprint-override"><code>lftp :~>debug -o log1.txt -c -t 9
lftp :~>set ftp:ssl-force true
lftp :~>set ssl:verify-certificate no
lftp :~>set ftp:use-feat false
lftp :~>connect -u XXXX: publishedprices.co.il
lftp :~>ls
</code></pre>
<p>log file:</p>
<pre class="lang-none prettyprint-override"><code>2024-07-22 02:22:22 publishedprices.co.il ---- Resolving host address...
2024-07-22 02:22:22 publishedprices.co.il ---- IPv6 is not supported or configured
2024-07-22 02:22:22 publishedprices.co.il ---- 1 address found: 194.90.26.22
2024-07-22 02:22:26 publishedprices.co.il ---- Connecting to publishedprices.co.il (194.90.26.22) port 21
2024-07-22 02:22:26 publishedprices.co.il <--- 220-Welcome to Public Published Prices Server
2024-07-22 02:22:26 publishedprices.co.il <--- 220- Created by NCR L.T.D
2024-07-22 02:22:26 publishedprices.co.il <--- 220-
2024-07-22 02:22:26 publishedprices.co.il <--- 220-
2024-07-22 02:22:26 publishedprices.co.il <--- 220 ** The site is open! Have a good day.
2024-07-22 02:22:26 publishedprices.co.il ---> AUTH TLS
2024-07-22 02:22:26 publishedprices.co.il <--- 234 Authentication method accepted
2024-07-22 02:22:26 publishedprices.co.il ---> USER XXXX
2024-07-22 02:22:26 Certificate: CN=*.publishedprices.co.il
2024-07-22 02:22:26 Issued by: C=GB,ST=Greater Manchester,L=Salford,O=Sectigo Limited,CN=Sectigo RSA Domain Validation Secure Server CA
2024-07-22 02:22:26 Checking against: C=GB,ST=Greater Manchester,L=Salford,O=Sectigo Limited,CN=Sectigo RSA Domain Validation Secure Server CA
2024-07-22 02:22:26 Trusted
2024-07-22 02:22:26 Certificate: C=GB,ST=Greater Manchester,L=Salford,O=Sectigo Limited,CN=Sectigo RSA Domain Validation Secure Server CA
2024-07-22 02:22:26 Issued by: C=US,ST=New Jersey,L=Jersey City,O=The USERTRUST Network,CN=USERTrust RSA Certification Authority
2024-07-22 02:22:26 Checking against: C=US,ST=New Jersey,L=Jersey City,O=The USERTRUST Network,CN=USERTrust RSA Certification Authority
2024-07-22 02:22:26 Trusted
2024-07-22 02:22:26 Certificate: C=US,ST=New Jersey,L=Jersey City,O=The USERTRUST Network,CN=USERTrust RSA Certification Authority
2024-07-22 02:22:26 Issued by: C=GB,ST=Greater Manchester,L=Salford,O=Comodo CA Limited,CN=AAA Certificate Services
2024-07-22 02:22:26 Trusted
2024-07-22 02:22:26 publishedprices.co.il <--- 331 User XXXX, password please
2024-07-22 02:22:26 publishedprices.co.il ---> PASS
2024-07-22 02:22:26 publishedprices.co.il <--- 230 Password Ok, User logged in
2024-07-22 02:22:26 publishedprices.co.il ---> PWD
2024-07-22 02:22:26 publishedprices.co.il <--- 257 "/" is the current directory
2024-07-22 02:22:26 publishedprices.co.il ---> PBSZ 0
2024-07-22 02:22:26 publishedprices.co.il <--- 200 PBSZ=0
2024-07-22 02:22:26 publishedprices.co.il ---> PROT P
2024-07-22 02:22:26 publishedprices.co.il <--- 200 PROT P OK, data channel will be secured
2024-07-22 02:22:26 publishedprices.co.il ---> PASV
2024-07-22 02:22:26 publishedprices.co.il <--- 227 Entering Passive Mode (194,90,26,21,48,206)
2024-07-22 02:22:26 publishedprices.co.il ---- Connecting data socket to (194.90.26.21) port 12494
2024-07-22 02:22:26 publishedprices.co.il ---- Data connection established
2024-07-22 02:22:26 publishedprices.co.il ---> LIST
2024-07-22 02:22:26 publishedprices.co.il <--- 150 Opening data connection
2024-07-22 02:22:26 Certificate: CN=*.publishedprices.co.il
2024-07-22 02:22:26 Issued by: C=GB,ST=Greater Manchester,L=Salford,O=Sectigo Limited,CN=Sectigo RSA Domain Validation Secure Server CA
2024-07-22 02:22:26 Checking against: C=GB,ST=Greater Manchester,L=Salford,O=Sectigo Limited,CN=Sectigo RSA Domain Validation Secure Server CA
2024-07-22 02:22:26 Trusted
2024-07-22 02:22:26 Certificate: C=GB,ST=Greater Manchester,L=Salford,O=Sectigo Limited,CN=Sectigo RSA Domain Validation Secure Server CA
2024-07-22 02:22:26 Issued by: C=US,ST=New Jersey,L=Jersey City,O=The USERTRUST Network,CN=USERTrust RSA Certification Authority
2024-07-22 02:22:26 Checking against: C=US,ST=New Jersey,L=Jersey City,O=The USERTRUST Network,CN=USERTrust RSA Certification Authority
2024-07-22 02:22:26 Trusted
2024-07-22 02:22:26 Certificate: C=US,ST=New Jersey,L=Jersey City,O=The USERTRUST Network,CN=USERTrust RSA Certification Authority
2024-07-22 02:22:26 Issued by: C=GB,ST=Greater Manchester,L=Salford,O=Comodo CA Limited,CN=AAA Certificate Services
2024-07-22 02:22:26 Trusted
2024-07-22 02:22:26 publishedprices.co.il <--- 226 Transfer complete
2024-07-22 02:22:26 publishedprices.co.il ---- Got EOF on data connection
2024-07-22 02:22:26 publishedprices.co.il ---- Closing data socket
2024-07-22 02:22:29 publishedprices.co.il ---> QUIT
2024-07-22 02:22:29 publishedprices.co.il <--- 221 Goodbye
2024-07-22 02:22:29 publishedprices.co.il ---- Closing control socket
</code></pre>
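<p>One detail that stands out when comparing the two logs: the PASV reply advertises <code>194.90.26.21</code> while the control connection goes to <code>194.90.26.22</code>. A workaround sometimes suggested for servers that advertise an unreachable passive address is to ignore it and reuse the control-connection host. Note that recent Python versions already do this by default unless <code>trust_server_pasv_ipv4_address</code> is set to true, so if the timeout persists the data port itself may simply be blocked. Sketch, untested against this server:</p>
<pre><code>from ftplib import FTP_TLS

class PatchedFTPTLS(FTP_TLS):
    def makepasv(self):
        # Keep the port the server chose, but connect to the same host
        # as the control connection instead of the advertised address.
        host, port = super().makepasv()
        return self.host, port
</code></pre>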
|
<python><python-3.x><ftp><ftplib><ftps>
|
2024-07-21 18:19:44
| 1
| 345
|
python3.789
|
78,775,967
| 1,649,171
|
One project (Django 3 with mod_wsgi installed within py 3.8 venv) works but another (Django 5 with mod_wsgi installed within py 3.12) fails
|
<p><strong>Issue:</strong> Internal Server Error with Django and mod_wsgi on Ubuntu Server</p>
<p><strong>Setup:</strong></p>
<ul>
<li>Ubuntu 20.04</li>
<li>Python 3.8 for existing project (and the base python for the OS too)</li>
<li>Python 3.12.4 for new project</li>
<li>Apache</li>
<li>mod_wsgi installed in respective virtual environments. The system level one has been removed.</li>
</ul>
<p><strong>Projects:</strong></p>
<ol>
<li><p><strong>Existing Project (Working ✅)</strong></p>
<ul>
<li><strong>Directory:</strong> <code>/var/www/existing-project/</code></li>
<li><strong>Python Version:</strong> 3.8</li>
<li><strong>Django Version:</strong> 3.x</li>
<li><strong>Virtual Environment Path:</strong> <code>/var/www/existing-project/venv</code></li>
<li><strong>WSGI File Path:</strong> <code>/var/www/existing-project/project/project/wsgi.py</code></li>
<li><strong>Apache Configuration:</strong>
<pre><code>Protocols h2 http/1.1
WSGIApplicationGroup %{GLOBAL}
LoadModule wsgi_module "/var/www/existing-project/venv/lib/python3.8/site-packages/mod_wsgi/server/mod_wsgi.cpython-38-x86_64-linux-gnu.so"
<VirtualHost *:80>
ServerAdmin admin@example.com
ServerName existing-project.com
DocumentRoot /var/www/existing-project/project/
RemoteIPHeader CF-Connecting-IP
ErrorLog ${APACHE_LOG_DIR}/existing-project_error.log
CustomLog ${APACHE_LOG_DIR}/existing-project_access.log combined
<Directory /var/www/existing-project/project/project>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
Alias /favicon.ico /var/www/existing-project/project/static/images/favicon/favicon.ico
Alias /static /var/www/existing-project/project/static
<Directory /var/www/existing-project/project/static>
Require all granted
</Directory>
<IfModule mod_expires.c>
<FilesMatch "\.(png|jp?g|gif|ico|mp4|wmv|mov|mpeg|css|map|woff?|eot|svg|ttf|js|json|pdf|csv)">
ExpiresActive on
ExpiresDefault "access plus 1 year"
</FilesMatch>
</IfModule>
WSGIDaemonProcess existing-project python-home=/var/www/existing-project/venv python-path=/var/www/existing-project/project
WSGIProcessGroup existing-project
WSGIScriptAlias / /var/www/existing-project/project/project/wsgi.py
RewriteEngine on
RewriteCond %{SERVER_NAME} =existing-project.com
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>
</code></pre>
</li>
</ul>
</li>
<li><p><strong>New Project (Not working ❌)</strong></p>
<ul>
<li><strong>Directory:</strong> <code>/var/www/new-project/</code></li>
<li><strong>Python Version:</strong> 3.12.4</li>
<li><strong>Django Version:</strong> 5</li>
<li><strong>Virtual Environment Path:</strong> <code>/var/www/new-project/venv</code></li>
<li><strong>WSGI File Path:</strong> <code>/var/www/new-project/project/project/wsgi.py</code></li>
<li><strong>Apache Configuration:</strong>
<pre><code>WSGIApplicationGroup %{GLOBAL}
LoadModule wsgi_module "/var/www/new-project/venv/lib/python3.12/site-packages/mod_wsgi/server/mod_wsgi-py312.cpython-312-x86_64-linux-gnu.so"
<VirtualHost *:80>
ServerAdmin admin@example.com
ServerName new-project.com
DocumentRoot /var/www/new-project/project/
RemoteIPHeader CF-Connecting-IP
# Basic HTTP Authentication
<Directory /var/www/new-project/project>
AuthType Basic
AuthName "Restricted Area"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user
</Directory>
ErrorLog ${APACHE_LOG_DIR}/new-project_error.log
CustomLog ${APACHE_LOG_DIR}/new-project_access.log combined
<Directory /var/www/new-project/project/project>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
Alias /favicon.ico /var/www/new-project/project/static/images/favicon/favicon.ico
Alias /static /var/www/new-project/project/static
<Directory /var/www/new-project/project/static>
Require all granted
</Directory>
<IfModule mod_expires.c>
<FilesMatch "\.(png|jp?g|gif|ico|mp4|wmv|mov|mpeg|css|map|woff?|eot|svg|ttf|js|json|pdf|csv)">
ExpiresActive on
ExpiresDefault "access plus 1 year"
</FilesMatch>
</IfModule>
WSGIDaemonProcess new-project python-home=/var/www/new-project/venv python-path=/var/www/new-project/project
WSGIProcessGroup new-project
WSGIScriptAlias / /var/www/new-project/project/project/wsgi.py
# RewriteEngine on
# RewriteCond %{SERVER_NAME} =new-project.com
# RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>
</code></pre>
</li>
</ul>
</li>
</ol>
<p><strong>Issue:</strong>
When accessing <code>new-project.com</code>, I receive an "Internal Server Error" with the following error logs:</p>
<pre><code>[wsgi:error] [pid 1198771] mod_wsgi (pid=1198771): Exception occurred processing WSGI script '/var/www/new-project/project/project/wsgi.py'.
[wsgi:error] [pid 1198771] Traceback (most recent call last):
File "/var/www/new-project/project/project/wsgi.py", line 12, in <module>
from django.core.wsgi import get_wsgi_application
ModuleNotFoundError: No module named 'django'
</code></pre>
<p><strong>Troubleshooting Steps Taken:</strong></p>
<ol>
<li>Confirmed Django installation:
<pre class="lang-bash prettyprint-override"><code>(venv) $ pip show django
(venv) $ python -m django --version
5.0.6
</code></pre>
</li>
<li>Verified permissions:
<pre class="lang-bash prettyprint-override"><code>sudo chown -R www-data:www-data /var/www/new-project/
sudo chmod -R 755 /var/www/new-project/
</code></pre>
</li>
<li>Checked mod_wsgi path and installation:
<pre class="lang-bash prettyprint-override"><code>(venv) /usr/bin$ mod_wsgi-express module-config
LoadModule wsgi_module "/var/www/new-project/venv/lib/python3.12/site-packages/mod_wsgi/server/mod_wsgi-py312.cpython-312-x86_64-linux-gnu.so"
WSGIPythonHome "/var/www/new-project/venv"
ldd /var/www/new-project/venv/lib/python3.12/site-packages/mod_wsgi/server/mod_wsgi-py312.cpython-312-x86_64-linux-gnu.so
</code></pre>
</li>
<li>Verified Python version in WSGI script:
<pre class="lang-py prettyprint-override"><code>import sys
import os
print("Python executable:", sys.executable)
print("Python version:", sys.version)
print("Python path:", sys.path)
print("Current working directory:", os.getcwd())
</code></pre>
<pre class="lang-bash prettyprint-override"><code>[wsgi:error] [pid 1201444] Python executable: /var/www/new-project/venv/bin/python
[wsgi:error] [pid 1201444] Python version: 3.8.10 (default, Mar 25 2024, 10:42:49)
[wsgi:error] [pid 1201444] [GCC 9.4.0]
[wsgi:error] [pid 1201444] Python path: ['/var/www/new-project/project', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload']
[wsgi:error] [pid 1201444] Current working directory: /
</code></pre>
</li>
</ol>
<p><strong>Question:</strong></p>
<p>I've been stuck on this for three days and I am out of options to troubleshoot. How can I resolve the "ModuleNotFoundError"? What is happening here?</p>
|
<python><django><apache><ubuntu><mod-wsgi>
|
2024-07-21 17:53:18
| 0
| 910
|
Ken
|
78,775,890
| 13,215,289
|
How to relay and stream HTTP response chunks directly using httpx and FastAPI?
|
<p>I'm working on a relay handler using <code>httpx</code> and FastAPI to stream HTTP response chunks directly from an upstream server to a client. The goal is to send each chunk of the streamed response directly as it is received, instead of buffering all data and sending it at the end.</p>
<p><strong>Here's what I've tried so far:</strong></p>
<ol>
<li><p><strong>Streaming with async generator:</strong></p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, HTTPException
from fastapi.responses import StreamingResponse
import httpx
import logging
logger = logging.getLogger(__name__)
app = FastAPI()
@app.post("/relay")
async def relay_request():
try:
async with httpx.AsyncClient() as client:
async with client.stream("POST", relayhandler.url, headers=preparedheaders, json=prepared_request) as response:
logger.debug(f"Received response with status code {response.status_code}.")
if response.status_code != 200:
response_text = await response.aread()
logger.error(f"Relay request failed with status code {response.status_code}: {response_text.decode()}")
raise HTTPException(status_code=response.status_code, detail=response_text.decode())
async def stream_response():
try:
async for chunk in response.aiter_bytes():
logger.debug(f"Received chunk: {chunk}")
yield chunk
except httpx.StreamClosed:
logger.error("Upstream response stream was unexpectedly closed")
raise HTTPException(status_code=500, detail="Upstream response stream was unexpectedly closed")
headers = {
"content-type": "text/event-stream; charset=utf-8",
}
return StreamingResponse(stream_response(), status_code=response.status_code, headers=headers, media_type="text/event-stream;charset=UTF-8")
except httpx.RequestError as e:
logger.error(f"HTTPX Request error: {e}")
raise HTTPException(status_code=500, detail=str(e))
except Exception as e:
logger.error(f"Unexpected error: {e}")
raise HTTPException(status_code=500, detail="Unexpected internal server error")
</code></pre>
<p><strong>Result:</strong> The connection gets closed unexpectedly.</p>
</li>
<li><p><strong>Testing streaming generator directly:</strong></p>
<pre class="lang-py prettyprint-override"><code>import httpx
timeout = httpx.Timeout(10.0, read=None) # Set a read timeout to None to allow streaming indefinitely
async with httpx.AsyncClient(timeout=timeout) as client:
try:
stream_gen = stream_response(client, url, headers, payload)
print("Streaming response:")
async for chunk in stream_gen:
print(chunk.decode())
except httpx.RequestError as e:
print(f"Request error: {e}")
except httpx.StreamClosed:
print("Stream was unexpectedly closed")
except Exception as e:
print(f"Unexpected error: {e}")
async def stream_response(client, url, headers, payload):
async with client.stream('POST', url, headers=headers, json=payload) as response:
async for chunk in response.aiter_bytes():
if chunk:
yield chunk
</code></pre>
<p><strong>Result:</strong> All the data is collected and then printed at once, not streamed chunk by chunk.</p>
</li>
<li><p><strong>Direct StreamingResponse:</strong></p>
<pre class="lang-py prettyprint-override"><code>from fastapi.responses import StreamingResponse
async with httpx.AsyncClient() as client:
return StreamingResponse(
client.stream('POST', relayhandler.url, headers=preparedheaders, json=prepared_request),
status_code=200,
media_type="text/event-stream;charset=UTF-8"
)
</code></pre>
<p><strong>Result:</strong> This raises <code>TypeError: '_AsyncGeneratorContextManager' object is not iterable</code>.</p>
</li>
<li><p><strong>Currently implemented:</strong></p>
<pre class="lang-py prettyprint-override"><code>async with httpx.AsyncClient() as client:
response = await client.post(
relay_handler.url,
headers=prepared_headers,
json=prepared_request,
)
logger.debug(f"Received response with status code {response.status_code}.")
if response.status_code != 200:
response_text = await response.aread()
logger.error(f"Relay request failed with status code {response.status_code}: {response_text.decode()}")
raise HTTPException(status_code=response.status_code, detail=response_text.decode())
async def stream_response():
async for chunk in response.aiter_bytes():
logger.debug(f"Received chunk: {chunk}")
yield chunk
headers = {
"content-type": "text/event-stream; charset=utf-8",
}
return StreamingResponse(stream_response(), status_code=response.status_code, headers=dict(headers), media_type="text/event-stream;charset=UTF-8")
</code></pre>
<p><strong>Result:</strong> It works but it also buffers everything and then streams everything out at once.</p>
</li>
</ol>
<hr />
<p><strong>Objective:</strong></p>
<p>The ultimate goal is to directly stream every response chunk from the upstream server in a <code>StreamingResponse</code> to the recipient. Can someone help me get this to work correctly?</p>
<p><strong>Environment:</strong></p>
<ul>
<li>Python 3.9</li>
<li>httpx 0.26.0</li>
<li>FastAPI 0.103.0</li>
<li>Uvicorn 0.20.0</li>
</ul>
<p>Any help or insights are greatly appreciated!</p>
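<p>For reference, my understanding of why attempt 1 closes the connection: the <code>async with</code> blocks exit as soon as the endpoint returns, tearing down the client and the stream before <code>StreamingResponse</code> ever starts iterating. A sketch of a lifetime-safe pattern (hypothetical <code>UPSTREAM_URL</code>; the <code>transport</code> parameter only exists so the generator can be exercised without a live server):</p>
<pre class="lang-py prettyprint-override"><code>import httpx

UPSTREAM_URL = "http://upstream.example/stream"  # hypothetical upstream

def relay_stream(url, payload=None, transport=None):
    # The async generator owns both the client and the stream, so they
    # stay open for as long as the consumer (e.g. StreamingResponse)
    # keeps iterating, instead of closing when the endpoint returns.
    async def gen():
        async with httpx.AsyncClient(transport=transport) as client:
            async with client.stream("POST", url, json=payload) as response:
                async for chunk in response.aiter_raw():
                    yield chunk
    return gen()
</code></pre>
<p>In the endpoint this would become <code>return StreamingResponse(relay_stream(UPSTREAM_URL, payload), media_type="text/event-stream")</code>.</p>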
|
<python><http><fastapi><http-streaming><httpx>
|
2024-07-21 17:25:51
| 1
| 542
|
leon
|
78,775,683
| 110,963
|
Change title (verbose name plural) in custom admin.TabularInline
|
<p>In my Django admin UI I use a <code>TabularInline</code> like this:</p>
<pre><code>class MyCustomInline(admin.TabularInline):
# ...
def get_queryset(self, request):
# ...
</code></pre>
<p>The overridden function adds a filter, so that only a subset of related objects is displayed. In the admin UI the table uses the plural of the model class as its title, which is wrong in this case. I would like to give it a custom name.</p>
<p>I cannot do this on the model itself, because I want to change the title only for this filtered set, not for the model class in general.</p>
<p>I tried to figure it out from the source code. In the template I found that the <code>verbose_name</code> is passed in via <code>opts</code>, but I could not follow its path to find a way to override it.</p>
<p>Could somebody give me a hint on how to change the title (i.e. the verbose name) just for this specific <code>TabularInline</code>?</p>
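<p>For what it's worth, <code>InlineModelAdmin</code> documents <code>verbose_name</code> and <code>verbose_name_plural</code> attributes that override the model's <code>Meta</code> names for one specific inline, which sounds like exactly this case. A standalone sketch (the <code>settings.configure()</code> boilerplate is only there so the snippet runs outside a project; the inline's <code>model</code> is omitted):</p>
<pre><code>import django
from django.conf import settings

# Minimal setup so the sketch runs outside a real project.
if not settings.configured:
    settings.configure(
        INSTALLED_APPS=[
            "django.contrib.contenttypes",
            "django.contrib.auth",
        ],
        DATABASES={},
    )
    django.setup()

from django.contrib import admin

class MyCustomInline(admin.TabularInline):
    # model = RelatedModel  # as in the real project
    # Override the titles just for this inline, leaving the model's
    # Meta.verbose_name(_plural) untouched everywhere else:
    verbose_name = "matching item"
    verbose_name_plural = "Matching items only"
</code></pre>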
|
<python><django><django-admin>
|
2024-07-21 16:00:54
| 1
| 15,684
|
Achim
|
78,775,434
| 19,381,612
|
How do I create a new line break in python-docx with a specified character size?
|
<p>I currently have a paragraph as a header that is <code>Pt(48)</code> in size, and I want to create a new line break using <code>add_break()</code>. That works perfectly fine, but the break inherits the paragraph's character size. Is there a way to specify the new line break's character size? I've looked at the documentation and cannot find anything that does this. <a href="https://python-docx.readthedocs.io/en/stable/dev/analysis/features/text/breaks.html" rel="nofollow noreferrer">Line Break Reference</a> | <a href="https://python-docx.readthedocs.io/en/latest/user/text.html" rel="nofollow noreferrer">Working with Text Reference</a></p>
<p>Some code for reference</p>
<pre><code>document = Document()
initiate_all_styles(document)
invoice_header = document.add_paragraph(style='invoice')
invoice_header_run = invoice_header.add_run('INVOICE')
invoice_header_run.bold = True
invoice_header_run.add_break()
document.save('invoice.docx')
</code></pre>
<p>Styles code</p>
<pre><code>def initiate_all_styles(document):
initiate_invoice_style(document)
#other methods for styles located here
def initiate_invoice_style(document):
font_styles = document.styles
font_charstyle = font_styles.add_style('invoice', WD_STYLE_TYPE.PARAGRAPH)
font_charstyle.font.color.rgb = RGBColor.from_string(MAIN_COLOR)
font_object = font_charstyle.font
font_object.size = Pt(48)
font_object.name = 'Arial'
</code></pre>
<p><strong>Edit</strong>:<br />
My current workaround is to create an invisible paragraph containing a character that matches the background colour, and set its size to my desired spacing, <code>Pt(10)</code>.</p>
<p>This is definitely more of a hack than a solution, but it works. It also takes a couple of extra lines of code rather than potentially one, and maintaining it is a pain.</p>
<pre><code>invisible_character = document.add_paragraph('-', style='invisible')
invisible_character.paragraph_format.line_spacing = ONE_DOT_ZERO_LINE_SPACING
invisible_character.paragraph_format.space_after = ZERO_SPACE_AFTER
</code></pre>
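<p>If the height of the empty line is really driven by the run the break lives in (which is my reading of how Word sizes line breaks), a less hacky variant would be to put the break in its own run with a smaller font. A hypothetical sketch:</p>
<pre><code>from docx import Document
from docx.shared import Pt

document = Document()
invoice_header = document.add_paragraph()
header_run = invoice_header.add_run("INVOICE")
header_run.bold = True
header_run.font.size = Pt(48)

# The break goes into a separate run whose font size controls the
# height of the resulting empty line, independent of the Pt(48) header.
break_run = invoice_header.add_run()
break_run.font.size = Pt(10)
break_run.add_break()

document.save("invoice.docx")
</code></pre>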
|
<python><python-docx>
|
2024-07-21 14:20:29
| 1
| 1,124
|
ThisQRequiresASpecialist
|
78,775,408
| 6,703,592
|
dataframe filter groupby based on a subset
|
<pre><code>df_example = pd.DataFrame({'name': ['a', 'a', 'a', 'b', 'b', 'b'],
'class': [1, 2, 2, 3, 2, 2],
'price': [3, 4, 2, 1, 6, 5]})
</code></pre>
<p>I want to filter within each <code>name</code> group: keep the rows where <code>price</code> is larger than the smallest <code>price</code> among the rows with <code>class == 2</code> in that group:</p>
<pre><code>df_example.sort_values(['name', 'price'], inplace=True)
df_tem = df_example[df_example['class'] == 2].groupby('name').first()
</code></pre>
<p>Below is the pseudocode:</p>
<pre><code>df_example.groupby('name').apply(lambda key, val: val['price'] > df_tem.loc[key]['price']).reset_index()
</code></pre>
<p>Is there an efficient way to filter a DataFrame based on a subset within a groupby?</p>
<p><strong>result:</strong></p>
<p>the smallest price with <code>class=2</code> for each name group <code>df_tem</code>:</p>
<pre><code> class price
name
a 2 2
b 2 5
</code></pre>
<p>Therefore,</p>
<pre><code>group a: price>2; group b: price>5
</code></pre>
<p>the output:</p>
<pre><code>pd.DataFrame({'name': ['a', 'a', 'b'],
'class': [1, 2, 2],
'price': [3, 4, 6]})
</code></pre>
<p>Update:</p>
<p>Actually, I have an idea: create a new column called <code>smallest</code>, then filter</p>
<pre><code>df_example by df_example['price'] > df_example['smallest '].
</code></pre>
<p>Do you know how to quickly create such a column, with something like</p>
<pre><code>df_example['smallest '] = df_example[df_example['class'] == 2].groupby('name')['price'].transform('first')
</code></pre>
<p>The approach above still leaves <code>NaN</code> values, though.</p>
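<p>Following up on the update: <code>where</code> can blank out the non-<code>class==2</code> prices before the groupby, so <code>transform('min')</code> broadcasts the group minimum to every row and nothing is left <code>NaN</code> (assuming every <code>name</code> group has at least one <code>class == 2</code> row):</p>
<pre><code>import pandas as pd

df_example = pd.DataFrame({'name': ['a', 'a', 'a', 'b', 'b', 'b'],
                           'class': [1, 2, 2, 3, 2, 2],
                           'price': [3, 4, 2, 1, 6, 5]})

# Keep price only where class == 2, then spread each group's minimum
# of those prices to every row of the group.
df_example['smallest'] = (df_example['price']
                          .where(df_example['class'].eq(2))
                          .groupby(df_example['name'])
                          .transform('min'))

out = df_example[df_example['price'] > df_example['smallest']]
print(out[['name', 'class', 'price']])  # keeps (a,1,3), (a,2,4), (b,2,6)
</code></pre>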
|
<python><pandas><dataframe><group-by>
|
2024-07-21 14:06:28
| 3
| 1,136
|
user6703592
|
78,775,364
| 16,527,170
|
Pandas - Convert Multiple colums to rows in Dataframe
|
<p>I have df as below:</p>
<pre><code>import pandas as pd
import numpy as np
data = {
'NIFTYBEES.NS': [0.26, 0.18],
'abc.NS': [0.1, 0.1],
'NIFTYBEES.NS_pct_change': [0.86, -0.21],
'abc.NS_pct_change': [0.2, 0.2],
'avg': [0.4775, 0.76],
'weekday': ['Monday', 'Tuesday']
}
Date = pd.date_range('2024-04-29', periods=2, freq='D')
df = pd.DataFrame(data, index=Date)
df.reset_index(inplace=True)
# Use melt to reshape the DataFrame
df_melted = df.melt(id_vars=['index', 'avg', 'weekday'], value_vars=df.filter(regex='\.NS$').columns,
var_name='etf', value_name='value')
df_melted['etf_mrr'] = df_melted['value']
df_melted.drop(columns=['value'], inplace=True)
df_melted = df_melted[['index', 'etf', 'etf_mrr', 'avg', 'weekday']]
df_melted.rename(columns={'index': 'Date'}, inplace=True)
print(df_melted)
</code></pre>
<p>Current Output:</p>
<pre><code> Date etf etf_mrr avg weekday
0 2024-04-29 NIFTYBEES.NS 0.26 0.4775 Monday
1 2024-04-30 NIFTYBEES.NS 0.18 0.7600 Tuesday
2 2024-04-29 abc.NS 0.10 0.4775 Monday
3 2024-04-30 abc.NS 0.10 0.7600 Tuesday
</code></pre>
<p>For the columns ending with <code>.NS_pct_change</code>, I need one more column, <code>pct_change</code>. In the <code>melt</code> condition I tried an <code>OR</code> condition, but it's not working; a single <code>melt</code> statement also cannot create multiple value columns.</p>
<p>I need expected output as below:</p>
<p>Expected Output:</p>
<pre><code> Date etf etf_mrr etf_pct_change avg weekday
0 2024-04-29 NIFTYBEES.NS 0.26 0.86 0.4775 Monday
1 2024-04-29 abc.NS 0.1 0.2 0.4775 Monday
2 2024-04-30 NIFTYBEES.NS 0.18 -0.21 0.7600 Tuesday
3 2024-04-30 abc.NS 0.1 0.2 0.7600 Tuesday
</code></pre>
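<p>One way I can sketch this is to melt the two column groups separately and merge on date plus etf, stripping the <code>_pct_change</code> suffix so the keys line up:</p>
<pre><code>import pandas as pd

data = {
    'NIFTYBEES.NS': [0.26, 0.18],
    'abc.NS': [0.1, 0.1],
    'NIFTYBEES.NS_pct_change': [0.86, -0.21],
    'abc.NS_pct_change': [0.2, 0.2],
    'avg': [0.4775, 0.76],
    'weekday': ['Monday', 'Tuesday'],
}
df = pd.DataFrame(data, index=pd.date_range('2024-04-29', periods=2, freq='D'))
df = df.reset_index().rename(columns={'index': 'Date'})

mrr_cols = [c for c in df.columns if c.endswith('.NS')]
pct_cols = [c for c in df.columns if c.endswith('_pct_change')]

# Melt each column group on its own, then join them back together.
mrr = df.melt(id_vars=['Date', 'avg', 'weekday'], value_vars=mrr_cols,
              var_name='etf', value_name='etf_mrr')
pct = df.melt(id_vars=['Date'], value_vars=pct_cols,
              var_name='etf', value_name='etf_pct_change')
pct['etf'] = pct['etf'].str.replace('_pct_change', '', regex=False)

out = (mrr.merge(pct, on=['Date', 'etf'])
          .sort_values(['Date', 'etf'], ignore_index=True)
          [['Date', 'etf', 'etf_mrr', 'etf_pct_change', 'avg', 'weekday']])
print(out)
</code></pre>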
|
<python><pandas><dataframe>
|
2024-07-21 13:47:25
| 2
| 1,077
|
Divyank
|
78,775,326
| 16,883,182
|
How to generically type-annotate a fuction/method parameter to accept any real number type?
|
<p>I would like to write a generic function that takes an instance of any type that implements all real number operations (such as <code>int</code>, <code>float</code>, <code>fractions.Fraction</code>, or third-party types). I found the <code>numbers.Real</code> ABC which works great for enforcing the type at runtime with a <code>isinstance(x, numbers.Real)</code> check, and all built-in real number numeric types in Python are registered as a virtual subclass of it.</p>
<p>But I found out the hard way that while it works great for run-time checks, it <a href="https://github.com/python/mypy/issues/2922" rel="nofollow noreferrer">doesn't work well</a> for <em>type annotation</em> purposes because all major type checkers <a href="https://github.com/python/mypy/issues/3186#issuecomment-1571512649" rel="nofollow noreferrer">ignore</a> usages of <code>abc.ABC.register</code>. So <em>how</em> do I annotate my parameters which accepts any real numeric type then?</p>
<p><a href="https://discuss.python.org/t/numeric-generics-where-do-we-go-from-pep-3141-and-present-day-mypy/17155/10" rel="nofollow noreferrer">This</a> post in a discussion suggests using the <code>typing.SupportsInt</code> protocol in favor of the <code>numbers.Integral</code> ABC, but in my case I need a replacement for <code>numbers.Real</code>. Also, it only checks if a class implements <code>__int__</code>, not if it supports all integral operations. And something like <code>typing.SupportsReal</code> doesn't exist... Would I just have to define my own protocol that checks for the existence of all real number operation dunder methods (<code>__add__</code>, <code>__sub__</code>, <code>__mul__</code>, <code>__div__</code>, <code>__truediv__</code>, etc...)?</p>
|
<python><python-typing>
|
2024-07-21 13:30:24
| 1
| 315
|
I Like Python
|
78,775,207
| 22,886,184
|
Cloud Run - Authenticated request doesn't work on custom (sub)domain
|
<p>I have a Cloud Run service running with authenticated requests turned on. I've created a domain mapping using Load Balancing to point a subdomain to the container.</p>
<p>I have been sending requests to the direct container url without any problems.</p>
<pre class="lang-py prettyprint-override"><code>import os
from google.oauth2 import service_account
import google.auth.transport.requests
import google.oauth2.id_token
import requests
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'service-account.json'
audience = "https://app-id.a.run.app"
request = google.auth.transport.requests.Request()
id_token = google.oauth2.id_token.fetch_id_token(request, audience)
requests.post(
audience + "/job",
headers={'Authorization': f"Bearer {id_token}"},
)
</code></pre>
<p>My service account has the Cloud Run Invoker permission and requests get authenticated fine.</p>
<p>The domain mapping is configured to the right Cloud Run service & region.
<a href="https://i.sstatic.net/9tlkZeKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9tlkZeKN.png" alt="enter image description here" /></a></p>
<p>Why is it that when I change <code>audience</code> to my subdomain and send the request, I'm not authenticated?</p>
<p>Edit: The subdomain is correctly pointing at the container and requests are being logged.</p>
|
<python><google-cloud-platform><google-cloud-run>
|
2024-07-21 12:39:55
| 1
| 537
|
Victor
|
78,775,206
| 4,306,274
|
How to plot a line on the second axis over a HORIZONTAL (not VERTICAL) bar chart with Matplotlib?
|
<p>I know how to plot a line on the second axis over a VERTICAL bar chart.</p>
<p>Now I want the bars HORIZONTAL and the line top-to-bottom, just like the whole chart rotates 90°.</p>
<p>If I simply replace <code>bar</code> with <code>barh</code>, the line is still left-to-right...</p>
<p>Can I do this with Matplotlib?</p>
<p>Here is a sample for VERTICAL bar chart:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
df = pd.DataFrame(
{
"A": [2, 4, 8],
"B": [3, 5, 7],
"C": [300, 100, 200],
},
index=["a", "b", "c"],
)
ax0 = df.plot(y=df.columns[:2], kind="bar", figsize=(10.24, 5.12))
ax1 = df["C"].plot(
kind="line",
secondary_y=True,
color="g",
marker=".",
)
plt.tight_layout()
plt.show()
</code></pre>
<p>Let me stress: I do see those questions related to VERTICAL bar charts. Now I'm asking about HORIZONTAL bar charts.</p>
<p>So this is not a duplicate question.</p>
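<p>For what it's worth, one way to do the rotated version (a sketch, not necessarily the only approach): pandas has no <code>secondary_y</code> counterpart for the x-axis, so create the twin axis manually with <code>twiny()</code> and plot C against the integer bar positions:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame(
    {"A": [2, 4, 8], "B": [3, 5, 7], "C": [300, 100, 200]},
    index=["a", "b", "c"],
)

ax0 = df.plot(y=["A", "B"], kind="barh", figsize=(10.24, 5.12))
# Rotated 90 degrees, the "secondary axis" becomes a second x-axis on top.
ax1 = ax0.twiny()
# pandas draws the bar groups at integer y positions 0..n-1, so plotting C
# against those positions runs the line top-to-bottom along the bars.
ax1.plot(df["C"].to_numpy(), range(len(df)), color="g", marker=".")
ax1.set_xlabel("C")
plt.tight_layout()
```
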
|
<python><pandas><matplotlib>
|
2024-07-21 12:39:43
| 1
| 1,503
|
chaosink
|
78,775,101
| 828,647
|
Iterate through child elements in robot framework
|
<p>I have a list of users on a webpage like this</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><div id="users">
<li class="user_list">
<div class="users-name">User One</div>
<div class="other details">...</div>
</li>
<li class="user_list">
<div class="users-name">User Two</div>
<div class="other details">...</div>
</li>
</div></code></pre>
</div>
</div>
</p>
<p>I want to iterate through the list and get the names of the users like this</p>
<pre><code>@{users_list}= Get WebElements xpath=//li[contains(@class, 'user_list')]
FOR ${user} IN @{users_list}
${name_el}= Get WebElement xpath=${user}/div[@class='users-name']
    ${name_text}=    Get Text    ${name_el}
Log Name= ${name_text}
END
</code></pre>
<p>But I am getting a syntax error that this is not a valid Xpath expression. How can I resolve this? I am using robotframework v7.0 with seleniumlibrary 6.0.</p>
|
<python><selenium-webdriver><robotframework>
|
2024-07-21 11:51:25
| 1
| 515
|
user828647
|
78,774,956
| 13,132,640
|
Vectorize a sampling of a dataframe based on filtered conditions?
|
<p>I have two dataframes, one which has three variables (one discrete, two continuous) and the other which has the same, except with also an additional 4th variable. For the purposes of a minimum reproducible example, I use random data, but in reality these variables are related to each other.</p>
<p>Anyway, for the first dataframe, which is always of significant length (minimum 100k rows), I need to assign values for the "additional 4th variable". I do this by filtering the second dataframe to similar conditions, and then randomly sampling the fourth variable from this filtered dataframe. This works perfectly for my application. However, it's rather slow to iterate over a very long dataframe like this. I'm wondering if there is a more clever vectorized way of doing this operation. My thoughts are to maybe bin the three variables, then assemble filtered versions of the second dataframe for all bin combinations up-front, and then somehow to query these pre-filtered dataframes and sample. But I'm not sure if this is a sensible method. Any thoughts?</p>
<p>Here's an example of my current technique below:</p>
<pre><code>import pandas as pd
import numpy as np
import random
# df1
varA1 = [random.randint(1,3) for i in range(1000)]
varB1 = [random.uniform(0, 50) for i in range(1000)]
varC1 = [random.uniform(0, 100) for i in range(1000)]
df1 = pd.DataFrame({'A':varA1,'B':varB1,'C':varC1})
# df2 - reference, look-up.
varA2 = [random.randint(1,3) for i in range(1000)]
varB2 = [random.uniform(0, 50) for i in range(1000)]
varC2 = [random.uniform(0, 100) for i in range(1000)]
varD2 = [random.uniform(0, 1000) for i in range(1000)]
df2 = pd.DataFrame({'A':varA2,'B':varB2,'C':varC2,'D':varD2})
# Assign a value for "D" in Df1, based on values in Df2, considering similar values of other parameters.
df1['D'] = np.nan
for r in df1.iterrows():
temp = df2.loc[(df2['A']==r[1]['A'])& \
(df2['B']>=r[1]['B']-5)& \
(df2['B']<=r[1]['B']+5)& \
(df2['C'] >= r[1]['C']-10) & \
(df2['C'] <= r[1]['C']+10),:]
df1.loc[r[0],'D'] = temp['D'].sample().values
</code></pre>
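<p>One direction worth considering (a sketch under the question's random-data setup, not a drop-in replacement): build the whole boolean match matrix once with NumPy broadcasting, so the remaining per-row work is only a cheap index sample. For 100k+ rows the matrix would need to be built in chunks of <code>df1</code> to bound memory:</p>

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n1, n2 = 200, 500  # small here; for 100k+ rows, process df1 in chunks
df1 = pd.DataFrame({'A': rng.integers(1, 4, n1),
                    'B': rng.uniform(0, 50, n1),
                    'C': rng.uniform(0, 100, n1)})
df2 = pd.DataFrame({'A': rng.integers(1, 4, n2),
                    'B': rng.uniform(0, 50, n2),
                    'C': rng.uniform(0, 100, n2),
                    'D': rng.uniform(0, 1000, n2)})

# One broadcasted (n1, n2) boolean matrix of "which df2 rows match row i of df1".
match = ((df2['A'].to_numpy()[None, :] == df1['A'].to_numpy()[:, None])
         & (np.abs(df2['B'].to_numpy()[None, :] - df1['B'].to_numpy()[:, None]) <= 5)
         & (np.abs(df2['C'].to_numpy()[None, :] - df1['C'].to_numpy()[:, None]) <= 10))

d2 = df2['D'].to_numpy()
out = np.full(n1, np.nan)
for i in range(n1):                      # the loop body is now pure NumPy and cheap
    idx = np.flatnonzero(match[i])
    if idx.size:
        out[i] = d2[rng.choice(idx)]
df1['D'] = out
```

<p>This keeps the exact filter semantics but avoids re-filtering a pandas DataFrame on every iteration, which is where most of the original time goes.</p>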
|
<python><pandas><vectorization>
|
2024-07-21 10:42:30
| 2
| 379
|
user13132640
|
78,774,602
| 739,809
|
How to parallelize transformer model for machine translation on 8 GPUs?
|
<p>I am attempting to perform machine translation using a transformer model in a manner almost identical to the original article. While the model works reasonably well, it requires considerable computational resources. To address this, I ran the model on a computer with 8 GPUs, but I lack experience in this area.
I tried to make the necessary adjustments for parallelization:</p>
<pre><code>transformer = nn.DataParallel(transformer)
transformer = transformer.to(DEVICE)
</code></pre>
<p>However, due to my lack of experience, things are not working well. Specifically, I have been stuck for a long time on the following error message:</p>
<pre class="lang-none prettyprint-override"><code>File "C:\Projects\MT005\.venv\Lib\site-packages\torch\nn\functional.py", line 5382,
in multi_head_attention_forward raise RuntimeError(f"The shape of the 2D attn_mask is {attn_mask.shape}, but should be {correct_2d_size}.")
RuntimeError: The shape of the 2D attn_mask is torch.Size([8, 64]), but should be (4, 4).
</code></pre>
<p>Could someone help me solve this problem and get the model running on all 8 GPUs?</p>
|
<python><machine-learning><parallel-processing>
|
2024-07-21 07:14:42
| 0
| 2,537
|
dsb
|
78,773,967
| 5,626,776
|
Module import error for PYTHON_PYLINT when using super-linter/super-linter@v6.7.0
|
<p>I've been trying for a while, without success, to get super-linter running in a GitHub Action to accept an import in my recent Python project. I am using <code>py3langid</code> to detect English language in a pretty basic Caesar Cipher implementation.</p>
<p>The linting error is:</p>
<pre><code>PYTHON_PYLINT
2024-07-20 21:05:46 [INFO] Linting PYTHON_PYLINT items...
Error: -20 21:05:47 [ERROR] Found errors when linting PYTHON_PYLINT. Exit code: 1.
2024-07-20 21:05:47 [INFO] Command output for PYTHON_PYLINT:
------
************* Module cipher
caesar_cipher/cipher.py:3:0: E0401: Unable to import 'py3langid.langid' (import-error)
-----------------------------------
Your code has been rated at 8.91/10
------
</code></pre>
<p>Top of cipher.py</p>
<pre><code>"""Caesar Cipher implementation"""
from py3langid.langid import MODEL_FILE, LanguageIdentifier
...
</code></pre>
<p>Folder structure (shortened for brevity)</p>
<pre><code>~/Projects/cryptography/
.git
.github/
workflows/
tests.yml
...
caesar_cipher/
cipher.py
test_cipher.py
...
__init__.py
requirements.txt
</code></pre>
<p>tests.yml</p>
<pre><code>---
name: Test project
on: push
permissions: read-all
jobs:
lint:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Lint
uses: super-linter/super-linter@v6.7.0
env:
# To report GitHub Actions status checks
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
test:
needs: lint
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Install Python
uses: actions/setup-python@v5
with:
python-version: "3.10"
- name: Install dependencies
run: pip install -r requirements.txt
- name: Run tests
run: pytest
</code></pre>
<p>I had not used <code>pylint</code> locally before receiving this error. After it appeared I ran</p>
<pre><code>docker run --rm -v $(pwd):/data cytopia/pylint .
</code></pre>
<p>and got the following error</p>
<pre><code>************* Module .
__init__.py:1:0: F0010: error while code parsing: Unable to load file __init__.py:
[Errno 2] No such file or directory: '__init__.py' (parse-error)
</code></pre>
<p>After adding <code>~/Projects/cryptography/__init__.py</code>, <code>pylint</code> ran fine locally without any errors, but my remote workflow still fails with the same error as originally stated.</p>
<p>Any idea what I could be doing wrong? Thanks</p>
<p><strong>EDIT</strong></p>
<p>I also tried creating a <code>.pylintrc</code> on the remote GitHub Actions runner using this SO question to populate the file</p>
<p><a href="https://stackoverflow.com/questions/1899436/pylint-unable-to-import-error-how-to-set-pythonpath">PyLint "Unable to import" error - how to set PYTHONPATH?</a></p>
<p>setting <code>~/.pylintrc</code> to the following didn't effect the super-linter result</p>
<pre><code>[MASTER]
init-hook='import sys; sys.path.append("./")'
</code></pre>
<p><strong>EDIT 2</strong></p>
<p>Tried to update <code>PYTHONPATH</code> on the runner to include the working directory</p>
<pre><code>...
- name: Set PYTHONPATH
run: export PYTHONPATH=${PYTHONPATH}:${pwd}
...
</code></pre>
<p>This also hasn't helped to resolve the linting error</p>
<p><strong>EDIT 3</strong></p>
<p>As per @Azeem suggestion I tried adding the following to <code>tests.yml</code> under <code>lint</code></p>
<pre><code>...
steps
...
- name: Create .pylintrc # <- this
run: echo "[MASTER]" > ~/.pylintrc && echo "init-hook='import sys; sys.path.append("./")'" >> ~/.pylintrc # <- this
- name: Lint
uses: super-linter/super-linter@v6.7.0
env:
PYTHON_PYLINT_CONFIG_FILE: ~/.pylintrc # <- this
# To report GitHub Actions status checks
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
</code></pre>
<p>This is the error that came back</p>
<pre><code>[FATAL] -> PYTHON_PYLINT_LINTER_RULES rules file (/action/lib/.automation/~/.pylintrc) doesn't exist. Terminating...
</code></pre>
<p>Does anyone have any more ideas? I'm not even coding the project any more, just fighting the linter. I might have to open an issue on GitHub.</p>
<p>Thanks to @Azeem for chipping in</p>
<p>Read error again and tried to set location <code>.pylintrc</code> to <code>/action/lib/.automation/.pylintrc</code> which also failed</p>
<pre><code>/home/runner/work/_temp/91c876a5-f511-4156-91ca-5e23ad639603.sh: line 1: /action/lib/.automation/.pylintrc: No such file or directory
</code></pre>
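<p>For what it's worth, super-linter resolves <code>PYTHON_PYLINT_CONFIG_FILE</code> as a file <em>name</em> relative to its rules directory (<code>.github/linters</code> by default, configurable via <code>LINTER_RULES_PATH</code>), which would explain the mangled <code>/action/lib/.automation/~/.pylintrc</code> path in the error. A hedged sketch of a config committed at the conventional location (<code>.python-lint</code> is super-linter's default pylint config file name):</p>

```ini
; .github/linters/.python-lint
[MASTER]
init-hook='import sys; sys.path.append(".")'
```

<p>With the file committed at that path, no <code>PYTHON_PYLINT_CONFIG_FILE</code> env entry should be needed at all.</p>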
|
<python><python-3.x><github-actions><super-linter>
|
2024-07-20 21:45:48
| 1
| 925
|
PumpkinBreath
|
78,773,889
| 11,443,551
|
TRL SFTTrainer clarification on truncation
|
<p>I am currently finetuning LLaMA models using SFTTrainer in huggingface. However, I came up with a question that I cannot answer from the documentation (at least, it is a bit ambiguous).</p>
<p>My dataset contains samples from 20 tokens to 5k tokens.</p>
<p>Currently I am using <code>max_seq_length=512</code> and <code>packing=True</code>.</p>
<p>However, what is unclear to me is what happens with samples of &gt;512 tokens. Are they simply truncated?</p>
<p>If yes, is there any simple option to split them rather than truncate them?</p>
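<p>(Not from the TRL docs, just a hedged note.) As far as I understand, with <code>packing=True</code> the trainer concatenates the tokenized stream and slices it into <code>max_seq_length</code> windows, so long samples end up split across packed sequences rather than dropped; hard truncation applies in the unpacked path. If you want explicit control, you can pre-chunk long samples yourself before handing them to the trainer, e.g.:</p>

```python
def chunk_tokens(tokens, max_len=512, overlap=0):
    """Split one tokenized sample into consecutive chunks instead of truncating.

    `overlap` repeats the tail of each chunk at the head of the next, which
    softens the hard cut at chunk boundaries.
    """
    step = max_len - overlap
    return [tokens[i:i + max_len] for i in range(0, len(tokens), step)]

sample = list(range(1200))            # stand-in for a 1200-token sample
chunks = chunk_tokens(sample, max_len=512)
print([len(c) for c in chunks])       # [512, 512, 176]
```
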
|
<python><large-language-model><huggingface><llama>
|
2024-07-20 20:46:23
| 0
| 369
|
iiiiiiiiiiiiiiiiiiii
|
78,773,843
| 893,254
|
How to handle external dependencies like datetime.now() in FastAPI tests?
|
<p>I have some existing FastAPI tests which no longer pass, because some internal server logic has changed such that there is now a dependency on a value returned by <code>datetime.now()</code>.</p>
<p>This is an external dependency. Normally to handle external dependencies, we would write a mock implementation of the external dependency and find a way to inject it into the code being tested.</p>
<p>I am not sure if that is the best approach for something as simple as a datetime dependency. It might be, or it might not be.</p>
<p>Here is some MWE code which illustrates the problem:</p>
<pre><code># lib_datetime.py
from datetime import datetime
from datetime import timezone
def now() -> datetime:
return datetime.now(timezone.utc)
</code></pre>
<pre><code># fastapi_webserver.py
from fastapi import FastAPI
from lib_datetime import now
from datetime import datetime
app = FastAPI()
@app.get('/datetime_now')
def datetime_now():
the_datetime_now = now()
the_datetime_now_str = the_datetime_now.strftime('%Y-%m-%d %H:%M:%S.%f %z')
return {
'datetime_now': the_datetime_now_str,
}
</code></pre>
<pre><code># test_fastapi_webserver.py
from fastapi.testclient import TestClient
from fastapi_webserver import app
from datetime import datetime
from datetime import timezone
client = TestClient(app)
def test_fastapi_webserver_datetime_now():
datetime_now = datetime(year=2024, month=7, day=20, tzinfo=timezone.utc)
datetime_now_str = datetime_now.strftime('%Y-%m-%d %H:%M:%S.%f %z')
response = client.get('/datetime_now')
print(response.json())
assert response.status_code == 200
assert response.json() == {
'datetime_now': datetime_now_str,
}
</code></pre>
<p>The problem should be fairly obvious. The value of the datetime returned varies depending on when the test is run. No good.</p>
<p>Here is a summary of what I have tried to fix it:</p>
<ul>
<li>Monkeypatching. I tried to change what <code>lib_datetime.now()</code> points to from within the test file. I couldn't figure out how to make this work.</li>
<li>Hoisting external dependencies. In theory, <code>now()</code> could depend on an object. Or it could become an object. We could create two objects, one which returns the current datetime, and one which returns a fixed value for testing. I could not figure out how to inject that into the FastAPI <code>app</code>, or even if this is possible. This would be my prefered approach as it is what I am most familiar with.</li>
<li>Use an environment variable to change the runtime behaviour of <code>now()</code>. Again, I was not sure how to integrate this into the test code, and I am not sure this is really a good approach.</li>
</ul>
<p>The blocker here is that I am not that familiar with FastAPI, as I only recently started using it. So I don't know what I can do with it, yet.</p>
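<p>For what it's worth, the usual reason monkeypatching "doesn't take" in this setup is patching <code>lib_datetime.now</code> instead of the name the webserver module actually looks up (<code>fastapi_webserver.now</code>, the binding created by <code>from lib_datetime import now</code>). A self-contained sketch of that pitfall using stand-in modules (module names here are hypothetical, built inline so the sketch runs on its own):</p>

```python
import sys
import types
from datetime import datetime, timezone
from unittest import mock

# Stand-ins for lib_datetime and fastapi_webserver from the question.
lib = types.ModuleType('lib_datetime_demo')
lib.now = lambda: datetime.now(timezone.utc)
sys.modules['lib_datetime_demo'] = lib

web = types.ModuleType('webserver_demo')
exec('from lib_datetime_demo import now\n'
     'def handler():\n'
     '    return now().isoformat()\n', web.__dict__)
sys.modules['webserver_demo'] = web

fixed = datetime(2024, 7, 20, tzinfo=timezone.utc)

# Patching the defining module does nothing: webserver_demo already holds
# its own reference to the original function.
with mock.patch.object(lib, 'now', return_value=fixed):
    unaffected = web.handler()

# Patch the name where it is *looked up*, and the handler is frozen.
with mock.patch.object(web, 'now', return_value=fixed):
    frozen = web.handler()

print(frozen)  # 2024-07-20T00:00:00+00:00
```

<p>With pytest this is one line inside the test: <code>monkeypatch.setattr('fastapi_webserver.now', lambda: fixed)</code> before calling the <code>TestClient</code>.</p>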
|
<python><python-3.x><fastapi>
|
2024-07-20 20:24:24
| 1
| 18,579
|
user2138149
|
78,773,780
| 1,179,689
|
BrokenPipeError when using python print with gdb
|
<p>I am trying to run an application in Linux and generate the input with Python:</p>
<pre><code>python3 -c 'print(".....")' | ./someapp
</code></pre>
<p>but getting the next error:</p>
<pre><code>Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>
BrokenPipeError: [Errno 32] Broken pipe
</code></pre>
<p>I am using bash.
Any suggestions?</p>
<p><strong>UPDATE:</strong>
I am trying to solve some pwn challenges, and I read in the writeups that they do this without any errors... the only difference I see is that they are using Python 2 rather than Python 3...</p>
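<p>A likely explanation (hedged, but consistent with the Python 2 vs 3 difference): CPython 3 ignores SIGPIPE at interpreter startup, so writing to a pipe whose reader has exited surfaces as <code>BrokenPipeError</code> — often during the shutdown flush of stdout, which produces exactly the "Exception ignored in TextIOWrapper" message. Restoring the default Unix behaviour in the generator one-liner makes it die quietly like most Unix tools:</p>

```python
import signal

# CPython installs SIG_IGN for SIGPIPE at startup, so a closed pipe raises
# BrokenPipeError instead of silently terminating the process.  Restoring
# SIG_DFL makes the script behave like the Python 2 one-liners in those
# writeups appeared to.
signal.signal(signal.SIGPIPE, signal.SIG_DFL)

print('.....')  # payload for ./someapp; a closed pipe now just ends the process
```

<p>Inline, that would look like <code>python3 -c 'import signal; signal.signal(signal.SIGPIPE, signal.SIG_DFL); print(".....")' | ./someapp</code>.</p>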
|
<python>
|
2024-07-20 19:56:58
| 1
| 2,708
|
yehudahs
|
78,773,489
| 11,748,924
|
Combine three trained binary classification models into single multiclassification model in keras
|
<p>I have three trained binary classification models, they are trained with <strong>sigmoid</strong> activation at output layer.</p>
<ol>
<li>First model returning probability scalar from 0 to 1 to check whether image is number <strong>ZERO</strong> or not.</li>
<li>Second model returning probability scalar from 0 to 1 to check whether image is number <strong>ONE</strong> or not.</li>
<li>Third model returning probability scalar from 0 to 1 to check whether image is number <strong>TWO</strong> or not.</li>
</ol>
<p><a href="https://i.sstatic.net/1GxF5p3L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1GxF5p3L.png" alt="enter image description here" /></a></p>
<p>I know I could just train them with <strong>softmax</strong> by constructing a model with three neurons at the output layer. But suppose I am in a situation where training their weights takes a really long time due to model complexity, and all I have are the individual binary classification models. Or suppose I want to extract their hidden-representation features at a hidden layer, for example from <code>model_0</code> (the binary classifier that checks whether an image is the number zero).</p>
<p>So, how do I concat/combine/merge them into a single model?
My code is currently stuck at this point:</p>
<pre><code>model_0 = init_binary_classification_model((28,28))
model_0.load_weights('trained_weight_of_binary_classification_to_check_whether_image_is_zero.h5')
model_1 = init_binary_classification_model((28,28))
model_1.load_weights('trained_weight_of_binary_classification_to_check_whether_image_is_one.h5')
model_2 = init_binary_classification_model((28,28))
model_2.load_weights('trained_weight_of_binary_classification_to_check_whether_image_is_two.h5')
</code></pre>
<p>Where:</p>
<pre><code>def init_binary_classification_model(input_shape=(28,28)):
input_layer = Input(shape=input_shape)
tensor = Flatten()(input_layer)
tensor = Dense(16, activation='relu')(tensor)
tensor = Dense(8, activation='relu')(tensor)
output_layer = Dense(1, activation='sigmoid')(tensor)
return Model(inputs=input_layer, outputs=output_layer)
</code></pre>
<p>I expect the multi-classification model to have the same input shape <code>(28,28)</code> and a different output shape <code>(3)</code>, and I don't need to retrain the model (if possible).</p>
<p>Full code is available at <a href="https://colab.research.google.com/drive/1y1mvAzebIFU_cuEQo8Q60L1I6uT8i2Ce?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1y1mvAzebIFU_cuEQo8Q60L1I6uT8i2Ce?usp=sharing</a></p>
|
<python><tensorflow><machine-learning><keras><deep-learning>
|
2024-07-20 17:44:05
| 1
| 1,252
|
Muhammad Ikhwan Perwira
|
78,773,339
| 2,612,259
|
Is there a way to keep the context of a context manager local to the as block?
|
<p>I am trying to find a way to avoid having to create a bunch of names for some functions that I pass into another function. I realize that if the functions were just simple expressions then I could just pass in lambdas and avoid creating named functions, but in many cases they are longer functions needing statements.</p>
<p>Here is an example of what I am doing. This approach works fine but requires me to define a unique named function and pass it into the subscribe function.</p>
<pre class="lang-py prettyprint-override"><code>subscribers = []
def subscribe(handler):
subscribers.append(handler)
def handler_1():
print(f'handler 1')
subscribe(handler_1)
def handler_2():
print(f'handler 2')
subscribe(handler_2)
for subscriber in subscribers:
subscriber()
</code></pre>
<p>Running this yields the expected:</p>
<pre><code>handler 1
handler 2
</code></pre>
<p>I was attempting to solve this by using a context manager. Inside each <code>with</code> statement I was hoping to create a closure simply named <code>handler</code> and pass that to the subscribe function. This does not work, however, since it seems the context variable is not local to the statements in the <code>as</code> block, but is in the scope outside the with statement. The result is that each closure defined in each <code>with / as</code> statement captures the same context variable (from the last <code>with</code> statement).</p>
<p>Here is an example of how I was hoping to avoid creating a bunch of uniquely named functions.</p>
<pre class="lang-py prettyprint-override"><code>from contextlib import contextmanager
@contextmanager
def foo(target):
yield target
subscribers = []
with foo('target 1') as context:
def handler():
print(f'target = {context}')
return context
subscribe(handler)
with foo('target 2') as context:
def handler():
print(f'target = {context}')
return context
subscribe(handler)
for subscriber in subscribers:
subscriber()
</code></pre>
<p>So the above code produces:</p>
<pre><code>target = target 2
target = target 2
</code></pre>
<p>I guess in the end I was trying to use a context manager to implement something like a trailing closure, but that does not seem possible if the context variable is not local.
Is there a way to avoid my original approach, which works but requires a bunch of named functions, which I feel might be error prone?</p>
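<p>As an aside, the standard way to get per-block capture without unique names is to bind the value at definition time — e.g. through a default argument — with no context manager at all. A minimal sketch:</p>

```python
subscribers = []

def subscribe(handler):
    subscribers.append(handler)

# Default-argument values are evaluated when `def` runs, so each handler
# keeps its own copy of `target` instead of sharing one outer variable.
for target in ('target 1', 'target 2'):
    def handler(context=target):
        return f'target = {context}'
    subscribe(handler)

results = [s() for s in subscribers]
print(results)  # ['target = target 1', 'target = target 2']
```

<p>A factory function (<code>def make_handler(context): def handler(): ...; return handler</code>) achieves the same binding and keeps the signature clean.</p>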
|
<python>
|
2024-07-20 16:37:47
| 1
| 16,822
|
nPn
|
78,773,176
| 11,626,909
|
How to ignore or bypass instances that result in NoneType non-iterable object
|
<p>I am trying to parse a section from 10-K filings in the EDGAR database. When I run the following code,</p>
<pre><code># pip install edgartools
import pandas as pd
from edgar import *
# Tell the SEC who you are
set_identity("Your Name youremail@outlook.com")
filings2 = get_filings(form='10-K', amendments=False,
filing_date="2024-03-01")
filings2_df = filings2.to_pandas()
# Create a list to store the Item 1c text
item1c_texts = []
# Iterate over each filing
for filing in filings2:
url = filing.document.url
cik = filing.cik
filing_date = filing.header.filing_date
reporting_date = filing.header.period_of_report
comn = filing.company
# Extract the text for Item 1c
TenK = filing.obj()
item1c_text = TenK['Item 1C']
item1c_texts.append({
'CIK': cik,
'Filing Date': str(filing_date),
'Item 1c Text': item1c_text,
'url': url,
'reporting_date': str(reporting_date),
'comn': comn
})
# Create a DataFrame from the Item 1c text data
item1c_df = pd.DataFrame(item1c_texts)
</code></pre>
<p>I got this error -</p>
<pre><code>TypeError: cannot unpack non-iterable NoneType object
Cell In[463], line 11
9 if TenK is not None:
10 TenK = filing.obj()
---> 11 item1c_text = TenK['Item 1C']
12 item1c_texts.append({
13 'CIK': cik,
14 'Filing Date': str(filing_date),
(...)
18 'comn': comn
19 })
Show Traceback
</code></pre>
<p>Is there any way I can bypass the non-iterable NoneType issue? I read about many other NoneType issues in this platform, but none of them is very helpful for me.</p>
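<p>In case it helps, a hedged sketch of a defensive wrapper (the helper name is made up; the <code>edgar</code> objects are as described in the question). The idea is simply to treat both a <code>None</code> filing object and a failing lookup as "no Item 1C" and keep iterating:</p>

```python
def safe_item(tenk, key):
    """Return tenk[key], or None when the filing object is missing or broken."""
    if tenk is None:
        return None
    try:
        return tenk[key]
    except (KeyError, TypeError):
        # TypeError covers the "cannot unpack non-iterable NoneType" case
        # raised inside the library's item lookup.
        return None

# Stand-in demo: a dict-like filing, a missing filing, and an empty one.
print(safe_item({'Item 1C': 'cybersecurity text'}, 'Item 1C'))  # cybersecurity text
print(safe_item(None, 'Item 1C'))                               # None
print(safe_item({}, 'Item 1C'))                                 # None
```

<p>Inside the loop, <code>item1c_text = safe_item(TenK, 'Item 1C')</code> then lets the append run unconditionally, with <code>None</code> marking the filings that had no extractable section.</p>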
|
<python><edgar>
|
2024-07-20 15:32:29
| 1
| 401
|
Sharif
|
78,773,065
| 4,701,426
|
Lists in pandas dataframe cells
|
<p>If we have a (part of a bigger) dataframe that shows what states individuals (rows) visited in a trip:</p>
<pre><code>df = pd.DataFrame({'states_visited': [['NY', 'CA'], 'CA', 'CA']}, index = ['John', 'Mary', 'Joe'])
states_visited
John [NY, CA]
Mary CA
Joe CA
</code></pre>
<p>Because the <code>states_visited</code> column has values of type <code>list</code>, we can't use usual pandas methods like:</p>
<pre><code>df['states_visited'].unique()
</code></pre>
<p>to get the expected outcome of: <code>[[NY, CA] , CA]</code> which is what we need for example if we want to know what states were visited together (NY and CA) and which states were visited separately (CA).</p>
<p>Instead, we get <code>TypeError: unhashable type: 'list'</code></p>
<p>Similarly, we can't do <code>df['states_visited'].str.contains('NY')</code> to know who visited NY in his/her trip (it will return <code>NaN</code> for John). To achieve this, we have to go through the hoop of something like:</p>
<pre><code>(df['states_visited'].explode()
    .dropna()
    .str.contains('NY')
    .groupby(level=0).any()
    .astype(float)
)
</code></pre>
<p>So, if pandas does not like nested data, what is the proper way of having list-like values in a pandas dataframe/series (not simple dictionaries because this example data is a part of a bigger dataframe)?</p>
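<p>For reference, one common workaround (a sketch, not the only idiom): normalise every cell to a tuple. Tuples are hashable, so <code>unique()</code>, <code>value_counts()</code>, and groupby keys all work again, and membership tests stay one-liners:</p>

```python
import pandas as pd

df = pd.DataFrame({'states_visited': [['NY', 'CA'], 'CA', 'CA']},
                  index=['John', 'Mary', 'Joe'])

# Wrap scalars and convert lists, so every cell is a hashable tuple.
norm = df['states_visited'].map(lambda v: tuple(v) if isinstance(v, list) else (v,))

print(norm.unique())                    # which state combinations occurred
visited_ny = norm.map(lambda t: 'NY' in t)
print(visited_ny.tolist())              # [True, False, False]
```
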
|
<python><pandas>
|
2024-07-20 14:45:42
| 2
| 2,151
|
Saeed
|
78,773,026
| 1,391,441
|
Hide warnings and/or errors when importing Keras
|
<p>My script imports the following <a href="https://keras.io/" rel="nofollow noreferrer">Keras</a> modules:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense, Input
from keras.utils import to_categorical
</code></pre>
<p>and every time the same warnings/errors are shown:</p>
<pre><code>2024-07-20 10:51:48.653282: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2024-07-20 10:51:48.657088: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2024-07-20 10:51:48.670352: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-07-20 10:51:48.710318: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-07-20 10:51:48.722894: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-07-20 10:51:50.112114: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
</code></pre>
<p>I've tried some suggestions I found like:</p>
<pre><code>from keras.config import disable_interactive_logging
disable_interactive_logging()
</code></pre>
<p>or</p>
<pre><code>import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
</code></pre>
<p>but neither worked. Can these messages be hidden somehow?</p>
|
<python><tensorflow><keras>
|
2024-07-20 14:27:59
| 1
| 42,941
|
Gabriel
|
78,772,912
| 21,376,217
|
Why does `ssl.SSLContext.wrap_socket` cause socket to close?
|
<p>I found some "strange" phenomena when using Python for network programming.</p>
<p>After constructing the SSL context and obtaining the SSL socket, the original socket will be automatically closed. Why is this? Is there any need to do this?</p>
<p>Just like the following code:</p>
<p>main.py</p>
<pre class="lang-py prettyprint-override"><code>ssl_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
fd = socket.socket()
ssl_fd = ssl_ctx.wrap_socket(fd, server_hostname='www.baidu.com')
print(fd)
</code></pre>
<p>The output of this code is as follows:</p>
<pre class="lang-bash prettyprint-override"><code>E:\Projects\_library_py>python main.py
<socket.socket [closed] fd=-1, family=2, type=1, proto=0>
</code></pre>
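<p>As far as I can tell this is by design: <code>wrap_socket</code> takes over the file descriptor via <code>socket.detach()</code>, so the OS-level resource has exactly one owner and is never closed twice. The plain-socket <em>object</em> is left "closed", but the descriptor itself lives on inside the SSL socket. The detach mechanics in isolation:</p>

```python
import os
import socket

s = socket.socket()
fd = s.detach()        # what wrap_socket effectively does internally

# The Python object is now "closed" (fileno() == -1), but the OS
# descriptor is still open and owned by whoever holds `fd`.
print(s.fileno())      # -1
os.close(fd)           # the new owner is responsible for closing it
```

<p>So after wrapping, only the <code>SSLSocket</code> should be used and closed; keeping the original variable around serves no purpose.</p>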
|
<python><sockets><ssl>
|
2024-07-20 13:36:21
| 1
| 402
|
S-N
|
78,772,897
| 3,225,420
|
How to store nested lists in Postgres?
|
<p>How can I store nested lists in <code>Postgres</code> in a way that's easy to use in my <code>Python</code> program later?</p>
<p>I plan to write the lists to the database once and reuse them many times. I've been able to store the nested lists as a <code>string</code> but it's not optimal, I'm trying to accomplish this with as little post-processing as possible so I'd rather do more up-front work for speed/ease of use on retrieval later.</p>
<p>These nested lists are for layout purposes, not poor data normalization.</p>
<p>Here is what I've tried based off <a href="https://builtin.com/data-science/postgresql-in-array#:%7E:text=You%20could%20expand%20the%20list,for%20storing%20and%20manipulating%20lists." rel="nofollow noreferrer">here</a>:</p>
<p>Created table in my database with field that supports <code>ARRAY</code> (I'm using DBeaver, the annotation for <code>ARRAY</code> is the <a href="https://dbeaver.com/docs/dbeaver/PostgreSQL-Arrays/" rel="nofollow noreferrer">underscore before text</a>: <code>_text</code>)</p>
<pre><code>CREATE TABLE public.layout (
f_id int8 NULL,
layout _text NULL
);
</code></pre>
<p>When trying this:</p>
<pre><code>insert into mpl_layout (f_id, layout)
values (7, ARRAY[["list_1"],["list_2"],["list_3"],["list_4"]]);
</code></pre>
<p>I get an error:</p>
<blockquote>
<p>SQL Error [42703]: ERROR: column "list_1" does not exist</p>
</blockquote>
<p>Adding parentheses around the <code>ARRAY</code> arguments only changes the error message:</p>
<pre><code>insert into mpl_layout (f_id, layout)
values (7, ARRAY([["list_1"],["list_2"],["list_3"],["list_4"]]));
</code></pre>
<blockquote>
<p>SQL Error [42601]: ERROR: syntax error at or near "7"</p>
</blockquote>
<p>I tried the curly brace <code>'{}'</code> format:</p>
<pre><code>insert into mpl_mosaic_layout (figure_id, layout)
values (7, '{[["list_1"],["list_2"],["list_3"],["list_4"]]}');
</code></pre>
<p>And got this error:</p>
<blockquote>
<p>SQL Error [22P02]: ERROR: malformed array literal:
"{[["list_1"],["list_2"],["list_3"],["list_4"]]}" Detail: Unexpected
array element.</p>
</blockquote>
<p>What should I try next?</p>
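<p>One low-friction route worth trying (a sketch, not the only answer): declare the column as <code>jsonb</code> instead of <code>_text</code> — Postgres array literals use nested braces with their own quoting rules and multidimensional arrays must be rectangular, while <code>jsonb</code> sidesteps both constraints — and round-trip through Python's <code>json</code> module. The table/column names below come from the question; the <code>cur.execute</code> line is illustrative only:</p>

```python
import json

nested = [["list_1"], ["list_2"], ["list_3"], ["list_4"]]

# Serialise once on write...
payload = json.dumps(nested)
# e.g. cur.execute("INSERT INTO public.layout (f_id, layout) VALUES (%s, %s::jsonb)",
#                  (7, payload))

# ...and get real Python lists back on read with no post-processing.
restored = json.loads(payload)
print(restored == nested)  # True
```

<p>This matches the stated goal of more up-front work for zero effort on retrieval: the driver hands back a string (or already-decoded structure, depending on the driver) that maps straight onto the original nested lists.</p>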
|
<python><postgresql><nested-lists>
|
2024-07-20 13:28:38
| 1
| 1,689
|
Python_Learner
|
78,772,795
| 6,657,842
|
What is the Python code to check all the GenerativeAI models supported by Google Gemini?
|
<p>Being new to <strong>GenerativeAI</strong> world, I am trying to load a pre-trained text generation model and doing some stuff which is not working. This is how I load the <strong>GenerativeAI</strong> model.</p>
<pre><code>from vertexai.generative_models import GenerativeModel
generation_model = GenerativeModel("gemini-pro")
</code></pre>
<p>Since it does not work, I feel I might have to use some other <strong>GenerativeAI</strong> model, not "gemini-pro". I even tried the following piece of code to check all the models supported by <strong>Gemini</strong>.</p>
<pre><code>import google.generativeai as genai
for model in genai.list_models():
if 'generateContent' in model.supported_generation_methods:
print(model.name)
</code></pre>
<p>But I get a '<strong>PermissionDenied</strong>' error, as the attached image shows.
<a href="https://i.sstatic.net/bm2itNDU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bm2itNDU.png" alt="enter image description here" /></a></p>
<p>Now as a programmer I have 2 queries here.</p>
<p>Q1. Can I see the <strong>GenerativeAI</strong> models supported by <strong>Gemini</strong> or not?</p>
<p>Q2. If answer is 'Yes', what is the Python code?</p>
|
<python><google-cloud-platform><google-gemini><google-generativeai>
|
2024-07-20 12:43:31
| 3
| 2,025
|
RLD
|
78,772,666
| 3,107,798
|
How to start consuming messages from specific offset using Confluent Kafka Python consumer
|
<p>Using the <a href="https://docs.confluent.io/platform/current/clients/confluent-kafka-python/html/index.html#pythonclient-consumer" rel="nofollow noreferrer">Confluent Kafka Python consumer</a>, how do I start consuming from a specific offset? At first I was going use the <a href="https://docs.confluent.io/platform/current/clients/confluent-kafka-python/html/index.html#confluent_kafka.Consumer.seek" rel="nofollow noreferrer">seek()</a> method, but documentation makes it seem like you can not <strong>start</strong> reading from a specific offset.</p>
<blockquote>
<p>seek() may only be used to update the consume offset of an actively
consumed partition (i.e., after assign()), to set the starting offset
of partition not being consumed instead pass the offset in an assign()
call.</p>
</blockquote>
<p>Simple example:</p>
<pre><code>from confluent_kafka import Consumer, TopicPartition
consumer = Consumer({...})
consumer.subscribe(['topic-test'])
tp = TopicPartition('topic-test', 0, 10)
consumer.seek(tp)
</code></pre>
<p>Running this code throws the exception:</p>
<pre><code>cimpl.KafkaException: KafkaError {
    code=_UNKNOWN_PARTITION,
    val=-190,
    str="Failed to seek to offset 10: Local: Unknown partition"
}
</code></pre>
|
<python><apache-kafka><confluent-kafka-python>
|
2024-07-20 11:51:12
| 1
| 11,245
|
jjbskir
|
78,772,435
| 4,447,899
|
The AWS SAM common layer can't find the modules
|
<p>I have the following directory structure:</p>
<pre><code>common =>
__init__.py
common.py
customer =>
app.py
template.yaml
</code></pre>
<p>My template looks like this:</p>
<pre><code>AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'

Resources:
  CommonLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: common-layer
      Description: Common code for multiple Lambda functions
      ContentUri: common/
      CompatibleRuntimes:
        - python3.11

  CustomerFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler
      Runtime: python3.11
      CodeUri: customer/
      MemorySize: 128
      Timeout: 30
      Environment:
        Variables:
          LOG_LEVEL: INFO
      Layers:
        - !Ref CommonLayer
      Events:
        CustomerApi:
          Type: Api
          Properties:
            Path: /customer/{customer_id}
            Method: get
</code></pre>
<p>.....</p>
<pre><code>Outputs:
  CustomerApiEndpoint:
    Description: "API Gateway endpoint URL for GET /customer/{customer_id}"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/customer/{customer_id}"
    Export:
      Name: CustomerApiEndpoint
</code></pre>
<p>Just to test, I made a simpler version of get_customer in common (the final version reads from a database, etc.) as follows:</p>
<pre><code>def get_customer(customer_id):
    # Hard-coded customer object
    return {
        "customer_id": customer_id,
        "firstname": "John",
        "surname": "Doe",
        "msisdn": "0696542321"
    }
</code></pre>
<p>When I run <code>sam build</code> and <code>sam local start-api</code> and call <a href="http://127.0.0.1:3000/customer/1" rel="nofollow noreferrer">http://127.0.0.1:3000/customer/1</a></p>
<p>I am getting:</p>
<pre><code>[ERROR] Runtime.ImportModuleError: Unable to import module 'app': No module named 'common'
</code></pre>
<p>In my app.py I have:</p>
<pre><code>import json
import logging
from common import common

# Set up logging
LOG_LEVEL = "INFO"
logging.basicConfig(level=LOG_LEVEL)
logger = logging.getLogger()

def lambda_handler(event, context):
    try:
        logger.info("Received event: %s", json.dumps(event))
        # Extract customer_id from path parameters
        customer_id = int(event['pathParameters']['customer_id'])
        # Get customer details from common code
        customer = common.get_customer(customer_id)
        return {
            'statusCode': 200,
            'body': json.dumps(customer)
        }
    except Exception as e:
        logger.error("Error processing request: %s", str(e))
        return {
            'statusCode': 500,
            'body': json.dumps({'message': 'Internal Server Error'})
        }
</code></pre>
<p>Please help resolve this</p>
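<p>For what it's worth, one hedged guess at the cause of the <code>No module named 'common'</code> error: Python Lambda layers are extracted under <code>/opt</code>, and the Python runtime only puts <code>/opt/python</code> on <code>sys.path</code>, so the layer content may need a <code>python/</code> prefix. A sketch of the restructured layout (the exact directory names beyond those in the question are assumptions):</p>

```shell
# Hypothetical restructure: nest the module tree under a python/ directory so
# the layer files land at /opt/python/common/ when the layer is mounted.
mkdir -p common/python/common
touch common/python/common/__init__.py
touch common/python/common/common.py
find common -type f | sort
```

<p>With that layout, <code>ContentUri: common/</code> would ship <code>python/common/*.py</code>, which would end up at <code>/opt/python/common/</code> and make <code>from common import common</code> importable.</p>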
|
<python><aws-lambda><aws-sam><aws-sam-cli>
|
2024-07-20 10:00:37
| 0
| 342
|
user4447899
|
78,772,431
| 13,562,186
|
Load 2 Tkinter windows simultaneously. one with animation
|
<p>The following script runs standalone to run a scenario:</p>
<p>The results window appears first; once it is closed, the plot follows with an animation.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import tkinter as tk
from tkinter import ttk
from matplotlib.animation import FuncAnimation

def run_model():
    # Input parameters (example values)
    frequency_per_year = 365
    exposure_duration = 240  # minutes
    product_amount = 5  # g
    weight_fraction_substance = 0.6  # fraction
    room_volume = 58  # m³
    ventilation_rate = 0.5  # per hour
    inhalation_rate = 10  # m³/hr
    body_weight_kg = 65  # kg

    # Advanced parameters
    vapour_pressure = 2  # Pa
    application_temperature = 18  # °C
    molecular_weight = 100  # g/mol

    # Conversion functions
    def convert_to_seconds(duration):
        return duration * 60  # minutes to seconds

    def convert_to_kg(amount):
        return amount / 1000  # g to kg

    def convert_to_per_second(rate):
        return rate / 3600  # per hour to per second

    def convert_inhalation_rate(rate):
        return rate  # m³/hr

    def convert_pressure(value):
        return value  # Pa

    def convert_temperature(value):
        return value + 273.15  # °C to K

    def convert_molecular_weight(value):
        return value  # g/mol

    # Convert units
    exposure_duration_seconds = convert_to_seconds(exposure_duration)
    product_amount_kg = convert_to_kg(product_amount)
    ventilation_rate_per_second = convert_to_per_second(ventilation_rate)
    inhalation_rate_m3_hr = convert_inhalation_rate(inhalation_rate)
    temperature_k = convert_temperature(application_temperature)
    vapour_pressure_pa = convert_pressure(vapour_pressure)
    molecular_weight_kg_mol = convert_molecular_weight(molecular_weight)

    # Universal gas constant
    R = 8.314  # J/mol/K

    # Time points (in seconds)
    time_points = np.linspace(0, exposure_duration_seconds, 500)

    # Calculate C_air over time
    C_air_over_time = (product_amount_kg * weight_fraction_substance / room_volume) * np.exp(-ventilation_rate_per_second * time_points)

    # Calculate C_sat
    C_sat = (molecular_weight_kg_mol * vapour_pressure_pa) / (R * temperature_k)

    # Convert C_air_over_time to mg/m^3 for plotting
    C_air_over_time_mg_m3 = C_air_over_time * 1e6

    # Calculate the total inhaled dose
    total_inhaled_volume_m3 = inhalation_rate_m3_hr * (exposure_duration_seconds / 3600)  # m^3
    total_inhaled_dose_kg = np.trapz(C_air_over_time, time_points) * inhalation_rate_m3_hr / 3600  # kg
    total_inhaled_dose_mg = total_inhaled_dose_kg * 1e6  # mg

    # Calculate the external event dose
    external_event_dose_mg_kg_bw = total_inhaled_dose_mg / body_weight_kg  # mg/kg bw

    # Calculate mean event concentration
    mean_event_concentration = np.mean(C_air_over_time_mg_m3)

    # Calculate peak concentration (TWA 15 min)
    time_interval_15min = 15 * 60  # 15 minutes in seconds
    if exposure_duration_seconds >= time_interval_15min:
        time_points_15min = time_points[time_points <= time_interval_15min]
        C_air_15min = C_air_over_time[time_points <= time_interval_15min]
        TWA_15_min = np.mean(C_air_15min) * 1e6  # Convert to mg/m³
    else:
        TWA_15_min = mean_event_concentration

    # Calculate mean concentration on day of exposure
    mean_concentration_day_of_exposure = mean_event_concentration * (exposure_duration_seconds / 86400)

    # Calculate year average concentration
    year_average_concentration = mean_concentration_day_of_exposure * frequency_per_year / 365

    # Calculate cumulative dose over time and convert to mg/kg body weight
    time_step = time_points[1] - time_points[0]
    cumulative_dose_kg = np.cumsum(C_air_over_time) * time_step * inhalation_rate_m3_hr / 3600  # kg
    cumulative_dose_mg = cumulative_dose_kg * 1e6  # mg
    cumulative_dose_mg_kg_bw = cumulative_dose_mg / body_weight_kg  # mg/kg bw

    # Final values for annotations
    final_air_concentration = C_air_over_time_mg_m3[-1]
    final_external_event_dose = cumulative_dose_mg_kg_bw[-1]

    # Prepare results text
    result_text = (
        f"Mean event concentration: {mean_event_concentration:.1e} mg/m³\n"
        f"Peak concentration (TWA 15 min): {TWA_15_min:.1e} mg/m³\n"
        f"Mean concentration on day of exposure: {mean_concentration_day_of_exposure:.1e} mg/m³\n"
        f"Year average concentration: {year_average_concentration:.1e} mg/m³\n"
        f"External event dose: {external_event_dose_mg_kg_bw:.1f} mg/kg bw\n"
        f"External dose on day of exposure: {external_event_dose_mg_kg_bw:.1f} mg/kg bw"
    )

    # Display results in a Tkinter window
    result_window = tk.Tk()
    result_window.title("Model Results")
    ttk.Label(result_window, text=result_text, justify=tk.LEFT).grid(column=0, row=0, padx=10, pady=10)
    ttk.Button(result_window, text="Close", command=result_window.destroy).grid(column=0, row=1, pady=(0, 10))
    result_window.mainloop()

    # Plotting with animation
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))

    # Initialize lines
    line1, = ax1.plot([], [], lw=2)
    line2, = ax2.plot([], [], lw=2)

    # Plot air concentration over time
    def init_air_concentration():
        ax1.set_xlim(0, exposure_duration)
        ax1.set_ylim(0, np.max(C_air_over_time_mg_m3) * 1.1)
        ax1.set_title('Inhalation - Air concentration')
        ax1.set_xlabel('Time (min)')
        ax1.set_ylabel('Concentration (mg/m³)')
        ax1.grid(True)
        return line1,

    def update_air_concentration(frame):
        line1.set_data(time_points[:frame] / 60, C_air_over_time_mg_m3[:frame])
        return line1,

    # Plot external event dose over time
    def init_external_dose():
        ax2.set_xlim(0, exposure_duration)
        ax2.set_ylim(0, np.max(cumulative_dose_mg_kg_bw) * 1.1)
        ax2.set_title('Inhalation - External event dose over time')
        ax2.set_xlabel('Time (min)')
        ax2.set_ylabel('Dose (mg/kg bw)')
        ax2.grid(True)
        return line2,

    def update_external_dose(frame):
        line2.set_data(time_points[:frame] / 60, cumulative_dose_mg_kg_bw[:frame])
        return line2,

    ani1 = FuncAnimation(fig, update_air_concentration, frames=len(time_points), init_func=init_air_concentration, blit=True, interval=300/len(time_points), repeat=False)
    ani2 = FuncAnimation(fig, update_external_dose, frames=len(time_points), init_func=init_external_dose, blit=True, interval=300/len(time_points), repeat=False)

    plt.tight_layout()
    plt.show()

if __name__ == "__main__":
    run_model()
</code></pre>
<p>However, I would like them to appear simultaneously. Or, if not, then one immediately after the other, without having to close the results window first.</p>
<p>Various attempts at this cause this issue:</p>
<blockquote>
<p>Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[NSApplication macOSVersion]: unrecognized selector sent to instance 0x13616f730'
First throw call stack:</p>
</blockquote>
|
<python><matplotlib><tkinter><tkinter-canvas><matplotlib-animation>
|
2024-07-20 09:59:10
| 1
| 927
|
Nick
|
78,771,991
| 17,947,699
|
How to add value to mood in an mp3 file via mutagen (Python)?
|
<p>I couldn't find a way to write mood to an mp3 file via mutagen (Python library)</p>
<p>Initializing:</p>
<pre><code>from mutagen.mp3 import MP3
from mutagen.id3 import ID3, TIT2, TALB, TPE1, TPE2, TCON, TPUB, TENC, TIT3, APIC, WOAR, PRIV
audio = MP3(mp3_file, ID3=ID3)
</code></pre>
<p>I could write the subtitle using
<code>audio['TIT3'] = TIT3(encoding=3, text=subtitle)</code></p>
<p>Also, I could write the Author URL using
<code>audio['WOAR'] = WOAR(encoding=3, url=author_url)</code></p>
<p>But I couldn't find a way to write the mood</p>
<p>I wrote the mood (Sad) manually in the file properties then ran
<code>mutagen.File("file.MP3").keys()</code></p>
<p>and I got the result
<code>dict_keys(['TIT2', 'TALB', 'TCON', 'TPE2', 'TPUB', 'TSSE', 'TENC', 'TIT3', "APIC:'Album Cover'", 'APIC:', 'WOAR:https://youtube.com', 'PRIV:WM/Mood:S\x00a\x00d\x00\x00\x00', 'TPE1'])</code></p>
<p>I tried many ways but couldn't get the Mood to work.</p>
<p>I even tried with TMOO like said in the first comment. Still didn't work.</p>
<p>Here is the full python code</p>
<pre><code>import os
from mutagen.mp3 import MP3
from mutagen.id3 import ID3, TIT2, TALB, TPE1, TPE2, TCON, TPUB, TENC, TIT3, APIC, WOAR, TMOO
from PIL import Image

def add_album_art(mp3_file, album_art_file):
    audio = MP3(mp3_file, ID3=ID3)
    if audio.tags is None:
        audio.add_tags()
    # Open album art image
    with open(album_art_file, 'rb') as img:
        audio.tags.add(
            APIC(
                encoding=3,  # 3 is for utf-8
                mime='image/jpeg',  # or image/png
                type=3,  # 3 is for the album front cover
                desc='Cover',
                data=img.read()
            )
        )
    audio.save()

def edit_mp3_metadata(mp3_file, title, subtitle, contributing_artists, album_artist, album, genre, publisher, encoded_by, author_url, mood):
    audio = MP3(mp3_file, ID3=ID3)
    if title:
        audio['TIT2'] = TIT2(encoding=3, text=title)  # Title
    if subtitle:
        audio['TIT3'] = TIT3(encoding=3, text=subtitle)  # Subtitle
    if contributing_artists:
        audio['TPE1'] = TPE1(encoding=3, text=contributing_artists)  # Contributing artists
    audio['TPE2'] = TPE2(encoding=3, text=album_artist)  # Album artist
    audio['TALB'] = TALB(encoding=3, text=album)  # Album
    if genre:
        audio['TCON'] = TCON(encoding=3, text=genre)  # Genre
    audio['TPUB'] = TPUB(encoding=3, text=publisher)  # Publisher
    audio['TENC'] = TENC(encoding=3, text=encoded_by)  # Encoded by
    audio['WOAR'] = WOAR(encoding=3, url=author_url)  # Author URL
    if mood:
        audio['TMOO'] = TMOO(encoding=3, text=mood)  # Mood
    audio.save()

def main():
    mp3_file = input("Enter the path to the MP3 file: ")
    if not os.path.isfile(mp3_file):
        print(f"Error: The file '{mp3_file}' does not exist.")
        return
    album_art_file = input("\n(Enter to skip from now)\n\nEnter the path to the album art image: ")
    if album_art_file:
        if not os.path.isfile(album_art_file):
            print(f"Error: The file '{album_art_file}' does not exist.")
            return
    title = str(input("Enter the title: "))
    subtitle = str(input("Enter the subtitle (Title + Version/Type): "))
    contributing_artists = str(input("Enter the contributing artists (A semicolon and a space at the end of each artist): "))
    album_artist = str(input("Enter the album artist: "))
    album = str(input("Enter the album: "))
    genre = str(input("Enter the genre: "))
    publisher = str(input("Enter the publisher: "))
    encoded_by = str(input("Enter the encoder: "))
    author_url = str(input("Enter the author URL: "))
    mood = str(input("Enter the mood: "))
    if album_art_file:
        add_album_art(mp3_file, album_art_file)
    edit_mp3_metadata(mp3_file, title, subtitle, contributing_artists, album_artist, album, genre, publisher, encoded_by, author_url, mood)
    print("MP3 file metadata and album art updated successfully.")

if __name__ == "__main__":
    main()
</code></pre>
<p>Windows version info:</p>
<ul>
<li>Edition: Windows 10 Pro</li>
<li>Version: 22H2</li>
<li>Installed on: 21-02-24</li>
<li>OS build: 19045.4651</li>
<li>Experience: Windows Feature Experience Pack 1000.19060.1000.0</li>
</ul>
|
<python><python-3.x><mp3><id3><mutagen>
|
2024-07-20 06:13:26
| 0
| 374
|
Parijat Softwares
|
78,771,514
| 395,857
|
How can I export all submission attachments with the OpenReview API?
|
<p>I have access to the program chairs console of a venue in OpenReview. I followed <a href="https://docs.openreview.net/how-to-guides/data-retrieval-and-modification/how-to-export-all-submission-attachments" rel="nofollow noreferrer">https://docs.openreview.net/how-to-guides/data-retrieval-and-modification/how-to-export-all-submission-attachments</a>:</p>
<p>Using:</p>
<pre class="lang-py prettyprint-override"><code>#pip install openreview-py
import openreview

# API V2
client = openreview.api.OpenReviewClient(
    baseurl='https://api2.openreview.net',
    username='[redacted]',
    password='[redacted]'
)

notes = client.get_all_notes(invitation = "Your/Venue/ID/-/Submission")

for note in notes:
    if(note.content.get("pdf")):
        f = client.get_attachment(note.id,'pdf')
        with open(f'paper{note.number}.pdf','wb') as op:
            op.write(f)
</code></pre>
<p>worked well to download all submission attachments.</p>
<p>I now want to download all the supplementary materials for all submissions.</p>
<pre><code>for note in notes:
    if(note.content.get("supplementary_material")):
        f = client.get_attachment(note.id,'supplementary_material')
        with open(f'paper{note.number}_supplementary_material.zip','wb') as op:
            op.write(f)
</code></pre>
<p><code>supplementary_material</code> matches my venue's <a href="https://i.sstatic.net/JNNKsh2C.png" rel="nofollow noreferrer">config</a>:</p>
<pre><code>{
    "supplementary_material": {
        "value": {
            "param": {
                "type": "file",
                "extensions": [
                    "zip",
                    "pdf"
                ],
                "maxSize": 100,
                "optional": true,
                "deletable": true
            }
        },
        "description": "All supplementary material must be self-contained and zipped into a single file. Note that supplementary material will be visible to reviewers throughout and after the review period, and ensure all material is anonymized. The maximum file size is 100MB.",
        "order": 7,
        "readers": [
            "confname/2024/Industry_Track",
            "confname/2024/Industry_Track/Submission${4/number}/Area_Chairs",
            "confname/2024/Industry_Track/Submission${4/number}/Reviewers",
            "confname/2024/Industry_Track/Submission${4/number}/Authors"
        ]
    }
}
</code></pre>
<p>I'm getting the error message:</p>
<pre><code>C:\Users\dernoncourt\anaconda3\envs\openreview\python.exe C:\Users\dernoncourt\openreview\get_all_pdfs.py
Traceback (most recent call last):
File "C:\Users\dernoncourt\anaconda3\envs\openreview\Lib\site-packages\openreview\api\client.py", line 141, in __handle_response
response.raise_for_status()
File "C:\Users\dernoncourt\anaconda3\envs\openreview\Lib\site-packages\requests\models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://api2.openreview.net/attachment?id=[redacted]&name=supplementary_material
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\dernoncourt\openreview\get_all_pdfs.py", line 26, in <module>
f = client.get_attachment(note.id, 'supplementary_material')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\dernoncourt\anaconda3\envs\openreview\Lib\site-packages\openreview\api\client.py", line 609, in get_attachment
response = self.__handle_response(response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\dernoncourt\anaconda3\envs\openreview\Lib\site-packages\openreview\api\client.py", line 156, in __handle_response
raise OpenReviewException(error)
openreview.openreview.OpenReviewException: {'name': 'NotFoundError', 'message': 'The Supplementary Material file was not found (2024-07-19-3654726)', 'status': 404, 'details': {'path': 'note.content.supplementary_material', 'reqId': '2024-07-19-3654726'}}
Process finished with exit code 1
</code></pre>
<p>How to fix it?</p>
|
<python><download><openreview>
|
2024-07-19 23:59:11
| 0
| 84,585
|
Franck Dernoncourt
|
78,771,443
| 9,779,026
|
Maximizing sum of And()ed BoolVars in Python ortools
|
<p>Given a set of cards, I have a list of combos that each involve one or more of these cards. I want to find a subset of these cards of a fixed size (<code>DECKSIZE</code>), that optimizes the number of combos that can be played using the chosen cards (the number of combos that include only the chosen cards is maximized).</p>
<pre class="lang-py prettyprint-override"><code>cards = [BoolVar("card1"), BoolVar("card2"), BoolVar("card3"), BoolVar("card4")]
combos = [
    [cards[0], cards[1]],
    [cards[0], cards[2]],
    [cards[1], cards[3]],
    [cards[0], cards[2], cards[3]]
]
DECKSIZE = 3

solver.Add(sum(cards) == DECKSIZE)
num_combos = sum([And(combo) for combo in combos])
solver.Maximize(num_combos)

print("chosen cards:", [card for card in cards if card.solution_value()])
</code></pre>
<p>Is there a way to express this maximization problem with ortools? I managed to do this with z3py but perhaps ortools is faster.</p>
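<p>For reference, the optimum of the toy instance above can be verified by brute force in plain Python (this is just a sanity check of the objective, not an ortools formulation):</p>

```python
from itertools import combinations

# Each combo as a set of card indices, mirroring the example above.
combos = [{0, 1}, {0, 2}, {1, 3}, {0, 2, 3}]
DECKSIZE = 3
NUM_CARDS = 4

# Enumerate every DECKSIZE-card deck and count the combos it fully contains.
best_count, best_deck = max(
    (sum(combo <= set(deck) for combo in combos), deck)
    for deck in combinations(range(NUM_CARDS), DECKSIZE)
)
print(best_count, best_deck)  # the best deck covers 2 combos
```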
|
<python><optimization><or-tools><cp-sat>
|
2024-07-19 23:09:32
| 1
| 1,437
|
2080
|
78,771,249
| 3,486,684
|
Given a type `ScalarOrList` defined `ScalarOrList[T] = T | list[T]`, static check refuses `input: list[str]` for `arg: ScalarOrList[str | int]`?
|
<p>I had:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Optional, Sequence

type ScalarOrList[T] = T | list[T]

def as_list[T](xs: Optional[ScalarOrList[T]], length: int = 1) -> list[T]:
    if isinstance(xs, Sequence) and not isinstance(xs, str):
        return list(x for x in xs)
    elif xs is not None:
        return [xs for _ in range(length)]
    else:
        return []

def f(xs: ScalarOrList[str | int]):
    print(xs)

only_strs: list[str] = ["hello", "world"]
f(only_strs)  # <--- complaint here
</code></pre>
<p>But this caused the type checker to complain:</p>
<pre class="lang-none prettyprint-override"><code>Argument of type "list[str]" cannot be assigned to parameter "xs" of type "ScalarOrList[str | int]" in function "f"
Type "list[str]" is incompatible with type "ScalarOrList[str | int]"
"list[str]" is incompatible with "str"
"list[str]" is incompatible with "int"
"list[str]" is incompatible with "list[str | int]"
Type parameter "_T@list" is invariant, but "str" is not the same as "str | int"
Consider switching from "list" to "Sequence" which is covariant
</code></pre>
<p>I tried switching to <code>Sequence</code>, but I ran into confusing type checking issues there too:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Optional, Sequence

type ScalarOrSequence[T] = T | Sequence[T]

def as_list[T](xs: Optional[ScalarOrSequence[T]], length: int = 1) -> list[T]:
    if isinstance(xs, Sequence) and not isinstance(xs, str):
        return list(x for x in xs)
    elif xs is not None:
        return [xs for _ in range(length)]  # <--- complaint here
    else:
        return []
</code></pre>
<p>This causes the type checker to complain:</p>
<pre class="lang-none prettyprint-override"><code>Expression of type "list[object* | str* | str]" is incompatible with return type "list[T@as_list]
</code></pre>
<p>Switching to <code>Sequence</code> is actually probably not what I want, because it turns out that there are things that are considered sequences, that I would prefer to consider objects in the context of my usage. For example: <code>str</code>, <code>Enum</code>, etc.</p>
|
<python><python-typing><pyright>
|
2024-07-19 21:20:30
| 1
| 4,654
|
bzm3r
|
78,771,213
| 3,826,115
|
Downloading from THREDDS Data Server fails when running in parallel, but works when running in series
|
<p>I am attempting to download/subset a list of gridded parameters from the UCAR THREDDS servers.</p>
<p>Here is how I am setting things up:</p>
<pre><code>import pandas as pd
import xarray as xr
import concurrent.futures
from functools import partial
from tqdm import tqdm
from siphon.catalog import TDSCatalog

bounds = [-10,-10,10,10]
cat_url = 'https://thredds.rda.ucar.edu/thredds/catalog/files/g/ds633.0/e5.oper.an.pl/200001/catalog.xml'
dataset_names = ['e5.oper.an.pl.128_131_u.ll025uv.2000010100_2000010123.nc',
                 'e5.oper.an.pl.128_132_v.ll025uv.2000010100_2000010123.nc',
                 'e5.oper.an.pl.128_130_t.ll025sc.2000010100_2000010123.nc',
                 'e5.oper.an.pl.128_133_q.ll025sc.2000010100_2000010123.nc',
                 'e5.oper.an.pl.128_129_z.ll025sc.2000010100_2000010123.nc']
var_strs = ['U', 'V', 'T', 'Q', 'Z']
start_date = pd.to_datetime('2000-01-01 00:00:00')
end_date = pd.to_datetime('2000-01-02 00:00:00')

def pull_data_from_url(dataset_name, var, cat_url, bounds, start_date, end_date):
    catalog = TDSCatalog(cat_url)
    ds = catalog.datasets[dataset_name]
    ncss = ds.subset()
    query = ncss.query()
    query.lonlat_box(east=bounds[2], west=bounds[0], south=bounds[1], north=bounds[3])
    query.time_range(start_date, end_date)
    query.variables(var)
    nc = ncss.get_data(query)
    data = xr.open_dataset(xr.backends.NetCDF4DataStore(nc))
    return data
</code></pre>
<p>If I run the function in a loop (in series), it works fine:</p>
<pre><code>for dat, var in zip(dataset_names, tqdm(var_strs)):
    pull_data_from_url(dat, var, cat_url, bounds, start_date, end_date)
</code></pre>
<p>However, if I try to do it in parallel:</p>
<pre><code>partial_pull_data = partial(pull_data_from_url, bounds=bounds, start_date=start_date, end_date=end_date, cat_url=cat_url)

with concurrent.futures.ThreadPoolExecutor() as executor:
    list(tqdm(executor.map(lambda p: partial_pull_data(*p), zip(dataset_names, var_strs)), total=len(dataset_names)))
</code></pre>
<p>I get the following error message:</p>
<pre><code>Traceback (most recent call last):
File ~\AppData\Local\miniconda3\lib\site-packages\urllib3\connectionpool.py:715 in urlopen
httplib_response = self._make_request(
File ~\AppData\Local\miniconda3\lib\site-packages\urllib3\connectionpool.py:467 in _make_request
six.raise_from(e, None)
File <string>:3 in raise_from
File ~\AppData\Local\miniconda3\lib\site-packages\urllib3\connectionpool.py:462 in _make_request
httplib_response = conn.getresponse()
File ~\AppData\Local\miniconda3\lib\http\client.py:1377 in getresponse
response.begin()
File ~\AppData\Local\miniconda3\lib\http\client.py:320 in begin
version, status, reason = self._read_status()
File ~\AppData\Local\miniconda3\lib\http\client.py:289 in _read_status
raise RemoteDisconnected("Remote end closed connection without"
RemoteDisconnected: Remote end closed connection without response
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File ~\AppData\Local\miniconda3\lib\site-packages\requests\adapters.py:486 in send
resp = conn.urlopen(
File ~\AppData\Local\miniconda3\lib\site-packages\urllib3\connectionpool.py:799 in urlopen
retries = retries.increment(
File ~\AppData\Local\miniconda3\lib\site-packages\urllib3\util\retry.py:550 in increment
raise six.reraise(type(error), error, _stacktrace)
File ~\AppData\Local\miniconda3\lib\site-packages\urllib3\packages\six.py:769 in reraise
raise value.with_traceback(tb)
File ~\AppData\Local\miniconda3\lib\site-packages\urllib3\connectionpool.py:715 in urlopen
httplib_response = self._make_request(
File ~\AppData\Local\miniconda3\lib\site-packages\urllib3\connectionpool.py:467 in _make_request
six.raise_from(e, None)
File <string>:3 in raise_from
File ~\AppData\Local\miniconda3\lib\site-packages\urllib3\connectionpool.py:462 in _make_request
httplib_response = conn.getresponse()
File ~\AppData\Local\miniconda3\lib\http\client.py:1377 in getresponse
response.begin()
File ~\AppData\Local\miniconda3\lib\http\client.py:320 in begin
version, status, reason = self._read_status()
File ~\AppData\Local\miniconda3\lib\http\client.py:289 in _read_status
raise RemoteDisconnected("Remote end closed connection without"
ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File ~\AppData\Local\miniconda3\lib\site-packages\spyder_kernels\py3compat.py:356 in compat_exec
exec(code, globals, locals)
File c:\users\kchudler\onedrive - research triangle institute\documents\hydromet_tools\era5_tools\untitled0.py:55
list(tqdm(executor.map(lambda p: partial_pull_data(*p), zip(dataset_names, var_strs)), total=len(dataset_names)))
File ~\AppData\Local\miniconda3\lib\site-packages\tqdm\std.py:1178 in __iter__
for obj in iterable:
File ~\AppData\Local\miniconda3\lib\concurrent\futures\_base.py:609 in result_iterator
yield fs.pop().result()
File ~\AppData\Local\miniconda3\lib\concurrent\futures\_base.py:446 in result
return self.__get_result()
File ~\AppData\Local\miniconda3\lib\concurrent\futures\_base.py:391 in __get_result
raise self._exception
File ~\AppData\Local\miniconda3\lib\concurrent\futures\thread.py:58 in run
result = self.fn(*self.args, **self.kwargs)
File c:\users\kchudler\onedrive - research triangle institute\documents\hydromet_tools\era5_tools\untitled0.py:55 in <lambda>
list(tqdm(executor.map(lambda p: partial_pull_data(*p), zip(dataset_names, var_strs)), total=len(dataset_names)))
File c:\users\kchudler\onedrive - research triangle institute\documents\hydromet_tools\era5_tools\untitled0.py:31 in pull_data_from_url
nc = ncss.get_data(query)
File ~\AppData\Local\miniconda3\lib\site-packages\siphon\ncss.py:114 in get_data
resp = self.get_query(query)
File ~\AppData\Local\miniconda3\lib\site-packages\siphon\http_util.py:410 in get_query
return self.get(url, query)
File ~\AppData\Local\miniconda3\lib\site-packages\siphon\http_util.py:486 in get
resp = self._session.get(path, params=params)
File ~\AppData\Local\miniconda3\lib\site-packages\requests\sessions.py:602 in get
return self.request("GET", url, **kwargs)
File ~\AppData\Local\miniconda3\lib\site-packages\requests\sessions.py:589 in request
resp = self.send(prep, **send_kwargs)
File ~\AppData\Local\miniconda3\lib\site-packages\requests\sessions.py:703 in send
r = adapter.send(request, **kwargs)
File ~\AppData\Local\miniconda3\lib\site-packages\requests\adapters.py:501 in send
raise ConnectionError(err, request=request)
ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
</code></pre>
<p>Can someone provide help on this matter? The files take a while to retrieve, so I would like a faster way to get them than iterating over a loop.</p>
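<p>One common workaround when a server drops concurrent connections is to cap how many requests run at once. A sketch using a semaphore around a placeholder fetch function (the names here are placeholders, not siphon API):</p>

```python
import concurrent.futures
import threading

MAX_CONCURRENT = 2                    # assumed limit; tune for the server
gate = threading.Semaphore(MAX_CONCURRENT)

def throttled_pull(name):
    with gate:                        # at most MAX_CONCURRENT downloads at once
        return f"pulled {name}"       # placeholder for the real ncss.get_data(query) call

with concurrent.futures.ThreadPoolExecutor() as executor:
    results = list(executor.map(throttled_pull, ["U", "V", "T", "Q", "Z"]))
print(results)
```

<p>This keeps the thread pool but serializes the actual HTTP requests down to a level the server tolerates; whether the UCAR server rate-limits or simply rejects parallel NCSS requests is an assumption worth testing by varying <code>MAX_CONCURRENT</code>.</p>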
|
<python><thredds><python-siphon>
|
2024-07-19 21:04:08
| 1
| 1,533
|
hm8
|
78,770,947
| 12,027,869
|
Map Dataframe Column Values Based on Two Dictionaries Conditionally
|
<p>I have a dataframe <code>df_test</code>. I want to map the column <code>color</code> conditionally:</p>
<ul>
<li>if <code>category</code> is <code>'tv'</code>, then map using the <code>tv_map</code> dictionary</li>
<li>else map using the <code>radio_map</code> dictionary</li>
</ul>
<p>I could split <code>df_test</code> by <code>category</code>, map with each dictionary, then row-bind, but I would like to do it in one go, something like:</p>
<pre><code>df_test['new_col'] = df_test.apply(lambda x: tv_map.get(x, x) if x['category'] == "tv" else color_map.get(x, x), axis=1)
</code></pre>
<pre><code>df_test = pd.DataFrame({
    'category': ['tv', 'radio', 'tv', 'radio'],
    'color': ['red', 'green', 'green', 'red']
})

tv_map = {'red': 'tv_red', 'green': 'tv_green'}
radio_map = {'red': 'radio_red', 'green': 'radio_green'}

df_test['new_col'] = df_test.query('category == "tv"')['color'].map(tv_map)
df_test['new_col'] = df_test.query('category == "radio"')['color'].map(radio_map)
df_test
</code></pre>
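<p>For comparison, the split-map-concat route described above would look something like this (note the one-go <code>.apply</code> sketch earlier would also need <code>x['color']</code>, not the whole row, as the dictionary key):</p>

```python
import pandas as pd

df_test = pd.DataFrame({
    'category': ['tv', 'radio', 'tv', 'radio'],
    'color': ['red', 'green', 'green', 'red']
})
tv_map = {'red': 'tv_red', 'green': 'tv_green'}
radio_map = {'red': 'radio_red', 'green': 'radio_green'}

# Split by category, map each part with its own dictionary, then recombine
# in the original row order.
tv_part = df_test[df_test['category'] == 'tv'].assign(new_col=lambda d: d['color'].map(tv_map))
radio_part = df_test[df_test['category'] == 'radio'].assign(new_col=lambda d: d['color'].map(radio_map))
result = pd.concat([tv_part, radio_part]).sort_index()
print(result['new_col'].tolist())  # -> ['tv_red', 'radio_green', 'tv_green', 'radio_red']
```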
|
<python><pandas>
|
2024-07-19 19:27:42
| 2
| 737
|
shsh
|
78,770,943
| 8,479,386
|
Banner is not displaying at the top and css is not working
|
<p>I have HTML like the following in my Django project:</p>
<pre><code><!DOCTYPE html>
{% load static %}
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Index Page</title>
    <link rel="icon" type="image/png" sizes="16x16" href="{% static 'favicon.png' %}">
    <link rel="stylesheet" href="{% static 'style.css' %}">
</head>
<body>
    <div id="error-banner" class="error-banner" style="display: none;"></div>
    <header>
        <div class="header-content">
            <!--some header contents-->
        </div>
    </header>
    <div>
        <!--some other divs-->
    </div>
    <script>
        function showErrorBanner(message) {
            var banner = document.getElementById('error-banner');
            banner.textContent = message;
            banner.style.display = 'block';
            setTimeout(function() {
                banner.style.display = 'none';
            }, 10000); // 10 seconds
        }

        {% if error_message %}
        showErrorBanner("{{ error_message }}");
        {% endif %}
    </script>
</body>
</html>
</code></pre>
<p><em><strong>My signin_view is like the following:</strong></em></p>
<pre><code>def signin_view(request):
    if request.method == 'POST':
        form = AuthenticationForm(request, data=request.POST)
        if form.is_valid():
            username = form.cleaned_data.get('username')
            password = form.cleaned_data.get('password')
            user = authenticate(request, username=username, password=password)
            if user is not None:
                login(request, user)
                return redirect('login')  # Change 'home' to the name of your homepage URL pattern
            else:
                error_message = "Invalid username or password."
        else:
            error_message = "Invalid form submission."
    else:
        form = AuthenticationForm()
        error_message = None
    return render(request, 'login.html', {'login_form': form, 'error_message': error_message})
</code></pre>
<p><em><strong>and the entire style.css is like the following:</strong></em></p>
<pre><code>body {
    font-family: Arial, sans-serif;
    margin: 0;
    padding: 0;
    background-image: url('resources/background.jpg');
    background-size: cover;
    background-position: center;
    background-repeat: no-repeat;
    background-attachment: fixed;
    display: flex;
    justify-content: center;
    align-items: center;
    height: 100vh;
    overflow: hidden;
}

header {
    background-color: rgba(87, 184, 148, 0.8);
    padding: 10px 20px;
    color: white;
    position: absolute;
    top: 0;
    width: 100%;
}

.header-content {
    display: flex;
    justify-content: flex-end;
}

nav ul {
    list-style: none;
    margin: 0;
    padding: 0;
    display: flex;
}

nav ul li {
    margin-left: 20px;
}

nav ul li a {
    color: white;
    text-decoration: none;
    font-size: 16px;
}

nav ul li a:hover {
    text-decoration: underline;
}

.form-container {
    background-color: rgba(255, 255, 255, 0.9);
    padding: 20px;
    border-radius: 10px;
    box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
    width: 300px;
    text-align: center;
}

.form-toggle {
    display: flex;
    justify-content: center;
    margin-bottom: 20px;
}

.form-toggle button {
    background-color: #57B894;
    color: white;
    border: none;
    padding: 10px 20px;
    cursor: pointer;
    margin: 0 10px;
    border-radius: 5px;
    transition: background-color 0.3s ease;
}

.form-toggle button:hover {
    background-color: #469D7B;
}

.form-content {
    display: none;
}

.form-content.active {
    display: block;
}

form input {
    width: calc(100% - 20px);
    padding: 10px;
    margin: 10px 0;
    border: 1px solid #ccc;
    border-radius: 5px;
}

.form-options {
    display: flex;
    justify-content: space-between;
    align-items: center;
    margin: 10px 0;
}

form button {
    background-color: #57B894;
    color: white;
    border: none;
    padding: 10px 20px;
    cursor: pointer;
    border-radius: 5px;
    transition: background-color 0.3s ease;
}

form button:hover {
    background-color: #469D7B;
}

form p {
    margin: 10px 0 0;
}

form a {
    color: #57B894;
    text-decoration: none;
}

form a:hover {
    text-decoration: underline;
}

.required-field {
    position: relative;
}

.required-field input {
    padding-left: 15px; /* Adjust as needed */
}

.required-field span {
    position: absolute;
    left: 0;
    top: 0;
    color: red;
    font-size: 20px; /* Adjust size as needed */
    line-height: 1;
}

.error-banner {
    background-color: #f8d7da;
    color: #721c24;
    padding: 10px;
    margin: 10px 0;
    border: 1px solid #f5c6cb;
    border-radius: 4px;
    text-align: center;
}
</code></pre>
<p>The issue is that whenever the credentials are incorrect, the banner is shown, but not at the top of the page; it does disappear after 10 seconds as expected. Currently it shows up like this:</p>
<p><a href="https://i.sstatic.net/efWMClvI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/efWMClvI.png" alt="enter image description here" /></a></p>
<p>I am using PyCharm Professional. I have invalidated caches and restarted PyCharm as well as my system. Please let me know what is causing this.</p>
|
<javascript><python><html><css><django>
|
2024-07-19 19:26:53
| 2
| 866
|
Karan
|
78,770,897
| 1,788,771
|
Filtered reverse of a one to many relation in Django
|
<p>I have a model called <code>Item</code> and another called <code>ItemSynonym</code> with a one to many relation like so:</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models
from django.conf import settings
from project.utils.models import UUIDModel
class Item(UUIDModel):
name = models.CharField(max_length=200)
class ItemSynonym(UUIDModel):
marker = models.ForeignKey(
        Item,
on_delete=models.CASCADE,
related_name="synonyms",
)
name = models.CharField(max_length=200)
</code></pre>
<p>I would now like to add a new field to the <code>ItemSynonym</code> model called language code so that it becomes:</p>
<pre><code>class ItemSynonym(UUIDModel):
marker = models.ForeignKey(
        Item,
on_delete=models.CASCADE,
related_name="synonyms",
)
language_code = models.CharField(
db_index=True,
max_length=8,
verbose_name="Language",
choices=settings.LANGUAGES,
default=settings.LANGUAGE_CODE,
)
name = models.CharField(max_length=200)
</code></pre>
<p>However, I am wondering if it is somehow possible to filter the synonyms by only modifying the models, so that I don't have to hunt through the entire code base and filter the synonyms individually. Something like this:</p>
<pre class="lang-py prettyprint-override"><code>class Item(UUIDModel):
name = models.CharField(max_length=200)
synonyms = models.FilteredRelation(
"all_synonyms",
condition=models.Q(language_code=get_language()),
)
class ItemSynonym(UUIDModel):
marker = models.ForeignKey(
        Item,
on_delete=models.CASCADE,
related_name="all_synonyms",
)
language_code = models.CharField(
db_index=True,
max_length=8,
verbose_name="Language",
choices=settings.LANGUAGES,
default=settings.LANGUAGE_CODE,
)
name = models.CharField(max_length=200)
</code></pre>
<p>Though obviously the above doesn't work, otherwise I wouldn't be here. I've also tried adding an annotation for the <code>FilteredRelation</code> inside a custom manager for <code>Item</code>, but for whatever reason the annotation isn't even accessible as a property.</p>
<p>I'm fairly new to Django, but I'm sure this isn't an uncommon problem so I would also appreciate any info on what the idiomatic way to filter such relations is.</p>
|
<python><django><django-models><orm>
|
2024-07-19 19:10:43
| 1
| 4,107
|
kaan_atakan
|
78,770,569
| 13,849,446
|
Converting pigpio code to Gpiod library python
|
<p>I have the following code using the <a href="https://abyz.me.uk/rpi/pigpio/" rel="nofollow noreferrer">pigpio</a> library and I have to convert its functionality to the <a href="https://pypi.org/project/gpiod/" rel="nofollow noreferrer">gpiod</a> library. While pigpio has good documentation that lets me understand how the existing code works, the problem arises when I try to convert it: gpiod has no documentation at all (or I couldn't find one).
The following is the pigpio code:</p>
<pre><code>import pigpio
relay_1 = 5
pi = pigpio.pi()
def setupGPIO():
if not pi.connected: # Check connected
print("Not connected to PIGPIO Daemon")
setupGPIO()
else:
print("Connected to PIGPIO Daemon")
pi.set_pull_up_down(relay_1, pigpio.PUD_DOWN)
cb1 = pi.callback(relay_1, pigpio.EITHER_EDGE, pinChange)
def pinChange(pin, state, t):
if (state == 0):
print('LOW', pin, str(t))
else:
print('HIGH', pin, str(t))
lastPin = t # used somewhere else
if (pin == interCom):
some_func(state)
else:
check_lights()
def check_lights(self):
r1 = pi.read(relay_1)
totals = r1
print(f'totals != pinTotals {totals} {pinTotals}')
if __name__ == "__main__":
setupGPIO()
</code></pre>
<p>What I was able to achieve</p>
<pre><code>import gpiod
relay_1 = 5
def setupGPIO(self):
print('setup gpio initiated')
try:
config = {
relay_1: gpiod.LineSettings(
direction=gpiod.line.Direction.INPUT,
bias=gpiod.line.Bias.PULL_DOWN
)
}
with gpiod.request_lines(
"/dev/gpiochip4",
consumer='gpio-setup',
config=config
) as request:
print('Connected to GPIO')
except Exception as e:
print(f'Failed to setup GPIO: {e}')
setupGPIO()
print('setup gpio executed')
if __name__ == "__main__":
setupGPIO()
</code></pre>
<p>I do not know whether this code is correct, but I wrote it and tested it against the blink test that was mentioned in one example.
If anyone can help me convert this code, that would be great; alternatively, any guidance or documentation would really help. Thanks in advance for any help.</p>
|
<python><python-3.x><libgpiod><pigpio><gpiod>
|
2024-07-19 17:23:06
| 0
| 1,146
|
farhan jatt
|
78,770,408
| 5,431,734
|
cannot create a storer when reading an hdf5 file with `dd.read_hdf`
|
<p>I want to use a dask dataframe to load a pandas dataframe using the <code>dd.read_hdf()</code> method. I create a very basic pandas dataframe, then I separate the values from the column headers and index and save them in an hdf5 file. I can read the hdf5 file back and recreate the original dataframe, so that part looks OK.</p>
<p>However, I cannot read the hdf5 file with <code>dd.read_hdf()</code>; it gives me this error:</p>
<pre><code>TypeError: An error occurred while calling the read_hdf method registered to the pandas backend.
Original Message: cannot create a storer if the object is not existing nor a value are passed
</code></pre>
<p>Below is a minimal example to reproduce this error above</p>
<pre><code>import pandas as pd
import numpy as np
import dask.dataframe as dd
import h5py
def save_hdf5(df, hdf5_path):
# Separate the DataFrame values, column names, and index
values = df.values
columns = df.columns.to_numpy()
index = df.index.to_numpy()
# Save to HDF5 file with different groups
with h5py.File(hdf5_path, 'w') as hdf:
hdf.create_dataset('values', data=values)
hdf.create_dataset('columns', data=columns)
hdf.create_dataset('index', data=index)
def load_hdf5(hdf5_path):
# Load the data from the HDF5 file
with h5py.File(hdf5_path, 'r') as hdf:
values = hdf['values'][:] # Load entire dataset into memory
columns = hdf['columns'][:].astype(str) # Load and convert back to strings
index = hdf['index'][:].astype(str) # Load and convert back to strings
return pd.DataFrame(values, columns=columns, index=index)
# Create a simple pandas DataFrame
data = {'A': [1, 2, 3], 'B': [4, 5, 6]}
df = pd.DataFrame(data)
df.index =['one','two','three']
# File path for the HDF5 file
my_path = 'my_df.h5'
save_hdf5(df, my_path)
my_df = load_hdf5(my_path)
</code></pre>
<p>If I now call <code>dd.read_hdf(my_path, key="values")</code> then I get the error above. What am I doing wrong? It looks to me that the store is empty. Am I missing something really basic?</p>
<pre><code>store = pd.HDFStore(my_path)
print(store.keys())
</code></pre>
|
<python><dask>
|
2024-07-19 16:35:54
| 0
| 3,725
|
Aenaon
|
78,770,327
| 8,507,034
|
Read/Write pipeline in pandas
|
<p>I'm building a simple pipeline to pull a dataset from a database and write it to a csv so that I can access the data more quickly in the future.</p>
<p>Currently, I have this:</p>
<pre><code># data loading as a pipeline
# ingests and writes batches of CHUNK_SIZE
CHUNK_SIZE = 10_000
data_loader = pd.read_sql(sql=SQL, con=conn, chunksize=CHUNK_SIZE)
for chunk in tqdm(data_loader, total=chunks) :
chunk.to_csv("data/raw/statanom.pit_train.csv", mode="a")
</code></pre>
<p>It occurs to me that there is a bottleneck in that it works sequentially, when it could begin pulling the next chunk of data while the previous chunk is being written to disk.</p>
<p>Assuming it's possible and recommended, how can I parallelize these tasks? I'd like to initiate the query for the next iteration at the same time the previous chunk is being written to disk.</p>
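<p>One way to get that overlap, sketched minimally below with only the stdlib: a bounded queue plus a background writer thread, so the main thread keeps fetching while the previous chunk is being written. The <code>chunks</code> iterable and <code>write</code> callable stand in for <code>pd.read_sql(..., chunksize=...)</code> and <code>chunk.to_csv(..., mode="a")</code>; this is an illustration of the pattern, not a claim about your exact setup.</p>

```python
import queue
import threading

def overlapped_pipeline(chunks, write, prefetch=2):
    """Write each chunk on a background thread while the next one is fetched."""
    q = queue.Queue(maxsize=prefetch)  # bounded, so memory use stays flat
    done = object()                    # sentinel marking the end of the stream

    def writer():
        while True:
            chunk = q.get()
            if chunk is done:
                return
            write(chunk)               # e.g. chunk.to_csv(path, mode="a")

    t = threading.Thread(target=writer)
    t.start()
    for chunk in chunks:               # e.g. pd.read_sql(sql, conn, chunksize=N)
        q.put(chunk)                   # blocks only if the writer falls behind
    q.put(done)
    t.join()

written = []
overlapped_pipeline([[1, 2], [3, 4], [5]], written.append)
print(written)  # [[1, 2], [3, 4], [5]]
```

<p>Threads are usually enough here because both the database fetch and the CSV write spend most of their time in I/O, where the GIL is released; a process pool would add pickling overhead for every chunk.</p>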
|
<python><pandas><parallel-processing><file-io><export-to-csv>
|
2024-07-19 16:15:15
| 1
| 315
|
Jred
|
78,770,307
| 1,833,218
|
sharex and imshow do not produce plots of the same width
|
<p><strong>Context:</strong> I have the x,y and x,z projection of an image. I want the two images to be stacked vertically sharing the same width.</p>
<p><strong>My trial:</strong> I tried the following:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
N1, N2 = 256, 512
table1 = np.random.rand(N1, N1)
table2 = np.random.rand(N1, N2)
f, axes = plt.subplots(2,sharex=True)
f.subplots_adjust(hspace=0,wspace=0)
axes[0].imshow(table1.T,extent = [0, N1, 0, N1])
axes[1].imshow(table2.T,extent = [0, N1, 0, N2])
</code></pre>
<p><strong>The wrong result:</strong> Note that <code>axes[0]</code> and <code>axes[1]</code> share the same x-extent and the same number of pixels in the x axis, and I used <code>sharex=True</code>. Therefore one would expect the example above to stack the two images vertically with the same width. However, this is not the case. This is what I find:</p>
<p><a href="https://i.sstatic.net/V08dN7Ut.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V08dN7Ut.png" alt="output of the snippet" /></a></p>
<p>You can see that the second image is smaller, however, I stress, the two have the same x-extent and same number of pixels in the x axis.</p>
<p><strong>So my question is:</strong> how can I stack two images vertically with the same extent and number of pixels in the x axis (while allowing the number of pixels in the y axis to vary)?</p>
<p><strong>Why not use <code>aspect='auto'</code>?</strong> One may think that just adding <code>aspect='auto'</code> will solve the issue. While the images will then end up with the same width, they will also have the same height. However, the second image has a larger y-extent (and twice as many pixels in the y axis). I want the images not to be stretched, so as to preserve their aspect ratio.</p>
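<p>For what it's worth, a hedged sketch of one workaround: keep <code>aspect='auto'</code> but give the rows <code>height_ratios</code> proportional to the two y-extents. The axes then have equal widths, and because their heights are proportional to the data ranges, neither image is stretched relative to the other (you may still need to choose <code>figsize</code> if you want exactly square pixels).</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

N1, N2 = 256, 512
table1 = np.random.rand(N1, N1)
table2 = np.random.rand(N1, N2)

# Heights proportional to the y-extents, so aspect='auto' does not distort
# one panel relative to the other; both panels get the full column width.
f, axes = plt.subplots(2, sharex=True,
                       gridspec_kw={"height_ratios": [N1, N2], "hspace": 0})
axes[0].imshow(table1.T, extent=[0, N1, 0, N1], aspect="auto")
axes[1].imshow(table2.T, extent=[0, N1, 0, N2], aspect="auto")

w0 = axes[0].get_position().width
w1 = axes[1].get_position().width
```
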
|
<python><numpy><matplotlib>
|
2024-07-19 16:09:55
| 2
| 2,317
|
Antonio Ragagnin
|
78,770,272
| 12,553,917
|
Redirecting python script's stdin to fifo results in RuntimeError: input(): lost sys.stdin
|
<p>I have this python script that's meant to function as a server which reads commands from stdin which is redirected to a fifo:</p>
<p>test.py:</p>
<pre class="lang-py prettyprint-override"><code>while True:
try:
line = input()
except EOFError:
break
print(f'Received: {line}')
</code></pre>
<p>In bash I run the commands:</p>
<pre class="lang-bash prettyprint-override"><code>mkfifo testfifo
test.py < testfifo &
echo word > testfifo
</code></pre>
<p>And instead of printing the line it received, the script quits with an error:</p>
<pre><code>Traceback (most recent call last):
File "test.py", line 5, in <module>
line = input()
^^^^^^^
RuntimeError: input(): lost sys.stdin
[1]+ Exit 1 test.py < testfifo
</code></pre>
<p>I've read some answers which say that there needs to be one writer to the fifo open at all times. So I tried to also run:</p>
<pre class="lang-bash prettyprint-override"><code>sleep 100 > testfifo &
</code></pre>
<p>Same error. I also tried replacing the <code>input()</code> line with this:</p>
<pre class="lang-py prettyprint-override"><code>line = sys.stdin.readline()
</code></pre>
<p>And all that did was give me this error instead:</p>
<pre><code>AttributeError: 'NoneType' object has no attribute 'readline'
</code></pre>
<p>I'm on Windows with mingw64; I suspect this may be a bug in mingw. Any ideas? Thanks.</p>
<p>EDIT: Since this problem seems rather hopeless and out of my control, I've opted for now to just use a UDP server instead. Bash supports redirecting to <code>/dev/udp/host/port</code> which makes this easy to do.</p>
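<p>For completeness, a minimal stdlib sketch of the UDP fallback mentioned in the edit (the host, port handling and message format are illustrative assumptions, not the only way to do it):</p>

```python
import socket
import threading

def run_udp_command_server(handle, ready, stop_after=1):
    """Receive datagrams on an ephemeral localhost port; pass each line to handle()."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
    ready["port"] = srv.getsockname()[1]
    ready["event"].set()                 # tell the client the server is listening
    for _ in range(stop_after):
        data, _addr = srv.recvfrom(4096)
        handle(data.decode().rstrip("\n"))
    srv.close()

received = []
ready = {"event": threading.Event()}
t = threading.Thread(target=run_udp_command_server, args=(received.append, ready))
t.start()
ready["event"].wait()
# From bash this send would be:  echo word > /dev/udp/127.0.0.1/PORT
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.sendto(b"word\n", ("127.0.0.1", ready["port"]))
t.join()
cli.close()
print(received)  # ['word']
```
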
|
<python><bash><mingw><fifo><mkfifo>
|
2024-07-19 16:00:49
| 0
| 776
|
Verpous
|
78,770,245
| 6,618,289
|
shiboken6: constexpr variable must be initialized by a constant expression
|
<p>I am trying to build some python wrappers using shiboken6 and Pyside6. I have managed to get the code compiling on a single machine, but now that I migrated the code to a build server, I am getting a compile error from within the shiboken compiler:</p>
<blockquote>
<p>QtCore/qplugin.h(146,33): error G3F63BFAE: constexpr variable 'HeaderOffset' must be initialized by a constant expression</p>
</blockquote>
<p>This code seems to work perfectly fine on the reference machine, but I do not know in what way the setup differs. I installed a conda environment from the .yml file generated on the working machine to the new machine:</p>
<pre class="lang-yaml prettyprint-override"><code>channels:
- defaults
dependencies:
... // omitted most packages for brevity
- pip=24.0=py312haa95532_0
- pybind11-abi=5=hd3eb1b0_0
- python=3.12.4=h14ffc60_1
... // omitted most packages for brevity
- pip:
- pyside6==6.7.2
- shiboken6==6.7.2
- shiboken6-generator==6.7.2
</code></pre>
<p>I am passing the following options to shiboken:</p>
<pre class="lang-none prettyprint-override"><code>C:/ProgramData/Miniconda3/envs/myenv/Lib/site-packages/shiboken6_generator/shiboken6.exe
--generator-set=shiboken
--compiler=msvc
--enable-parent-ctor-heuristic
--enable-return-value-heuristic
--use-isnull-as-nb_nonzero
--enable-pyside-extensions
--avoid-protected-hack
--debug-level=full
--output-directory=build/x64/Release/
-TD:/myrepo/myproject/src/core/python_core/
-TC:/ProgramData/Miniconda3/envs/myenv/Lib/site-packages/PySide6/typesystems
-ID:/myrepo/myproject/src/core/python_core/
-IC:/ProgramData/Miniconda3/envs/myenv/Lib/site-packages/shiboken6_generator/include
-IC:/ProgramData/Miniconda3/envs/myenv/Lib/site-packages/PySide6/include
-IC:/ProgramData/Miniconda3/envs/myenv/include
-IC:/ProgramData/Miniconda3/envs/myenv/Lib/site-packages/PySide6/include/QtCore
-IC:/ProgramData/Miniconda3/envs/myenv/Lib/site-packages/numpy/core/include
-IC:/Qt/6.7.2/msvc2019_64/include/QtXml
-IC:/Qt/6.7.2/msvc2019_64/include/QtGui
-IC:/Qt/6.7.2/msvc2019_64/include/QtCore
-IC:/Qt/6.7.2/msvc2019_64/include/QtQml
-IC:/Qt/6.7.2/msvc2019_64/include
bindings.h
bindings.xml
</code></pre>
<p>Which should be fairly standard. I am not sure what causes the problem, as the definition of <code>offsetof</code> is identical between the two machines:</p>
<pre class="lang-cpp prettyprint-override"><code>#if defined _MSC_VER && !defined _CRT_USE_BUILTIN_OFFSETOF
#ifdef __cplusplus
#define offsetof(s,m) ((::size_t)&reinterpret_cast<char const volatile&>((((s*)0)->m)))
#else
#define offsetof(s,m) ((size_t)&(((s*)0)->m))
#endif
#else
#define offsetof(s,m) __builtin_offsetof(s,m)
#endif
</code></pre>
<p>It confuses me that I find questions saying that <code>reinterpret_cast</code> cannot be part of a valid constexpr (see this <a href="https://stackoverflow.com/questions/69054606/why-cant-reinterpret-cast-be-used-in-a-constant-expression">other stackoverflow post</a>), which would mean that this should not compile no matter what. One machine does compile it, however, and the other does not.</p>
<p>Note that the <code>--compiler</code> option was only added as an attempt to fix the issue and changes the error. When nothing is provided or when the option <code>msvc</code> is passed, I get the error above. Passing the value <code>--compiler=clang</code> results in the even more cryptic error</p>
<blockquote>
<p>C1083 Cannot open source file 'c++'.</p>
</blockquote>
<p>I am using Visual Studio 2019 (version 16.11.35) to compile the resulting python wrapper code and Qt 6.7.2 on both setups.</p>
<p>The only related issue I could find is this one from an ITK mailing list from 2017:
<a href="https://itk.org/pipermail/community/2017-May/013075.html" rel="nofollow noreferrer">https://itk.org/pipermail/community/2017-May/013075.html</a>
quote: "it is a known bug in VS2017"</p>
<p>Does anyone have an idea what the problem could be? Is this a bug in qplugin.h? Or am I missing a configuration that happens to be present on one machine, but is not on the other? Is this a problem with MSVC?</p>
|
<python><c++><qt><pyside6>
|
2024-07-19 15:53:15
| 1
| 925
|
meetaig
|
78,770,236
| 4,538,529
|
Calling python "concurrent.futures" multiprocessing from C# opens multiple winform instances of the application
|
<ul>
<li>pythonnet by Python.Runtime 3.0.3 (Installed the winform application in Visual Studio 2022)</li>
<li>adfuller_test.py (This is my python script file)</li>
</ul>
<hr />
<p>I am calling python code from a winform application which works great normally.</p>
<p>The problem that I encounter now is that when I now try to use: "multiprocessing" with:</p>
<ul>
<li><strong>concurrent.futures.ProcessPoolExecutor</strong> in the "adfuller_test.py".</li>
</ul>
<ol>
<li>To clarify. If I run the "adfuller_test.py" from CMD. The script works as expected.</li>
</ol>
<p>The problem happens when I call the below line of code from within my C# application. What happens is that 2 new instances of my WinForms application open, which is very strange and is the whole problem. The function called below also does not return anything.</p>
<p>Notice that the second argument, "2", is "num_cpu_cores": the number of CPU cores to use. That number matches how many new WinForms instances are opened by the C# application.</p>
<pre><code>dynamic result = pythonModule.adfuller_engle_granger_multicore(new[] { "pair1", "pair2", "pair3" }, 2);
</code></pre>
<p>What is causing this and how can we prevent this and make the code work?</p>
<p>Below are all relevant code.</p>
<p><strong>C# code in the winform application:</strong></p>
<pre><code>dynamic pythonModule = null;
public void initializePythonEngine()
{
if (pythonModule == null)
{
PythonEngine.Initialize(); // Initialize the Python engine
PythonEngine.BeginAllowThreads(); // Allow threads to interact with Python
using (Py.GIL())
{
dynamic sys = Py.Import("sys");
string projectDirectory = Directory.GetParent(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location))?.Parent?.FullName;
string adfullerTestPath = Path.Combine(projectDirectory, "adfuller_test");
sys.path.append(adfullerTestPath);
pythonModule = Py.Import("adfuller_test"); // Import adfuller_test module
}
}
}
void function1()
{
try
{
initializePythonEngine();
using (Py.GIL())
{
dynamic result = pythonModule.adfuller_engle_granger_multicore(new[] { "pair1", "pair2", "pair3" }, 2);
}
}
catch (PythonException ex) { MessageBox.Show("Python error: " + ex.Message); }
catch (Exception ex) { MessageBox.Show("C# error: " + ex.Message); }
finally { PythonEngine.Shutdown(); }
}
</code></pre>
<p><strong>Python code in: "adfuller_test.py"</strong></p>
<pre><code>import concurrent.futures
import numpy as np
def process_pair(pair):
# Simulate some processing
result = f"Processed {pair}"
return result
def adfuller_engle_granger_multicore(pairs, num_cpu_cores):
# Using ProcessPoolExecutor to parallelize the processing of pairs
optimal_pair_windows = []
with concurrent.futures.ProcessPoolExecutor(max_workers=num_cpu_cores) as executor:
# Map pairs to the process_pair function
future_to_pair = {executor.submit(process_pair, pair): pair for pair in pairs}
for future in concurrent.futures.as_completed(future_to_pair):
result = future.result()
if result is not None:
optimal_pair_windows.append(result)
# Return a combined string of results
return "\n".join(optimal_pair_windows)
# Test function
if __name__ == "__main__":
pairs = ["pair1", "pair2", "pair3"]
num_cpu_cores = 2
result = adfuller_engle_granger_multicore(pairs, num_cpu_cores)
print(result)
</code></pre>
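<p>A hedged note on the likely cause: on Windows, <code>concurrent.futures.ProcessPoolExecutor</code> starts workers with the "spawn" method, which re-launches <code>sys.executable</code>. Under pythonnet, <code>sys.executable</code> resolves to the host WinForms .exe, so each worker opens another copy of the GUI, which matches the symptom described above. The usual mitigation (sketched below; the python.exe path is an assumption about the target machine) is to point multiprocessing at a standalone interpreter before any pool is created:</p>

```python
import multiprocessing
import sys

def configure_for_embedding(python_exe):
    """Tell multiprocessing which interpreter to launch for 'spawn' workers.

    When Python is embedded in a host process, spawned workers re-run
    sys.executable, i.e. the host .exe itself. Pointing at a standalone
    python.exe avoids duplicating the host application.
    """
    multiprocessing.set_executable(python_exe)

# The embedded case would pass something like r"C:\Python312\python.exe"
# (an assumed path); in a normal interpreter sys.executable is already correct.
configure_for_embedding(sys.executable)
ctx = multiprocessing.get_context("spawn")
print(ctx.get_start_method())  # spawn
```
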
<hr />
<hr />
<p><strong>UPDATED APPROACH USING APPDOMAINS</strong></p>
<p>I am trying to create multiple AppDomains in the below code approach in order to run Tasks in parallel.</p>
<p>The function that is called is: <code>function1();</code></p>
<p>However the code breaks with this error:</p>
<p><em><strong>"System.ArgumentException: 'Cannot pass a GCHandle across AppDomains.'"</strong></em></p>
<p>The code breaks on the following line. I am not sure what this means. Can we find a solution so that the below code executes the Python function "<strong>adfuller_engle_granger_multicore</strong>" in parallel from C#?</p>
<pre><code>dynamic result = _pythonModule.adfuller_engle_granger_multicore(
data.SymbolCombination,
data.CombinationList,
data.Series);
</code></pre>
<hr />
<pre><code>[Serializable]
public class SerializableData
{
public List<string> SymbolCombination { get; set; }
public List<string> CombinationList { get; set; }
public List<double[]> Series { get; set; }
}
public class PythonCaller : MarshalByRefObject
{
private dynamic _pythonModule;
public PythonCaller()
{
InitializePythonEngine();
}
private void InitializePythonEngine()
{
if (_pythonModule == null)
{
PythonEngine.Initialize(); // Initialize the Python engine
PythonEngine.BeginAllowThreads(); // Allow threads to interact with Python
using (Py.GIL())
{
dynamic sys = Py.Import("sys");
string projectDirectory = Directory.GetParent(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location))?.Parent?.FullName;
string adfullerTestPath = Path.Combine(projectDirectory, "adfuller_test");
sys.path.append(adfullerTestPath);
_pythonModule = Py.Import("adfuller_test"); // Import adfuller_test module
}
}
}
public string ExecutePythonCode(SerializableData data)
{
using (Py.GIL())
{
dynamic result = _pythonModule.adfuller_engle_granger_multicore(
data.SymbolCombination,
data.CombinationList,
data.Series
);
return result.ToString();
}
}
}
void function1()
{
//Below variables are filled with memories
var combinationDICT = new Dictionary<string, List<string>>(); var combinationLIST = new List<string>(); var symbolCombination = new List<string>(); var SERIES = new List<double[]>();
int num_cpu_cores = 10;
int batchSize = (int)Math.Ceiling((double)combinationDICT.Count / num_cpu_cores);
var tasks = new List<Task>();
var combinationDictList = combinationDICT.ToList(); // Convert dictionary to list for indexed access
for (int i = 0; i < num_cpu_cores; i++)
{
// Create a batch for each core
var batch = combinationDictList.Skip(i * batchSize).Take(batchSize).ToList();
if (batch.Count > 0)
{
tasks.Add(Task.Run(() =>
{
// Create an AppDomain for this batch of tasks
AppDomain newDomain = AppDomain.CreateDomain("PythonExecutionDomain_" + i);
try
{
// Create an instance of PythonCaller in the new AppDomain
PythonCaller pythonCaller = (PythonCaller)newDomain.CreateInstanceAndUnwrap(
typeof(PythonCaller).Assembly.FullName,
typeof(PythonCaller).FullName
);
foreach (var pair in batch)
{
try
{
// Create data for the PythonCaller
List<string> combinationlist = new List<string>(pair.Value);
var data = new SerializableData
{
CombinationList = new List<string>(combinationlist),
SymbolCombination = new List<string>(symbolCombination),
Series = new List<double[]>(SERIES)
};
// Call the method to execute Python code
string result = pythonCaller.ExecutePythonCode(data);
}
catch (Exception ex)
{
// Handle exceptions for individual pair processing
MessageBox.Show($"Error processing pair {pair.Key}: {ex.Message}");
}
}
}
catch (Exception ex)
{
// Handle exceptions for AppDomain creation or interaction
MessageBox.Show($"Error with AppDomain {i}: {ex.Message}");
}
finally
{
// Unload the AppDomain
AppDomain.Unload(newDomain);
}
}));
}
}
// Wait for all tasks to complete
Task.WaitAll(tasks.ToArray());
}
</code></pre>
|
<python><c#><winforms><concurrent.futures><python.net>
|
2024-07-19 15:49:23
| 0
| 1,295
|
Andreas
|
78,770,228
| 6,328,506
|
Ensuring at least one required field in a TypedDict in Python
|
<p>Giving the following code:</p>
<pre class="lang-py prettyprint-override"><code>from typing_extensions import NotRequired, TypedDict, Unpack
class LoginParams(TypedDict):
url: NotRequired[str]
username: NotRequired[str]
password: NotRequired[str]
api_key: NotRequired[str]
def do_login(**kwargs: Unpack[LoginParams]):
pass
</code></pre>
<p>How can I ensure that at least one of the LoginParams fields is required, no matter which one? In other words, an instance of LoginParams should not be empty. If this is not possible using TypedDict, what other approach can I take?</p>
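<p>For context, <code>TypedDict</code> information is erased at runtime, so "at least one key" cannot be expressed by the dict type itself. A hedged sketch of one common pattern: keep the TypedDict for the type checker and add a runtime guard (a purely static alternative is a union of several TypedDicts, one per required field):</p>

```python
from typing import TypedDict

class LoginParams(TypedDict, total=False):
    url: str
    username: str
    password: str
    api_key: str

def do_login(**kwargs):
    # The type system cannot require "at least one of these" here,
    # so enforce it at runtime.
    if not kwargs:
        raise TypeError(
            "do_login() requires at least one of: url, username, password, api_key"
        )
    return kwargs

print(do_login(url="https://example.com"))  # {'url': 'https://example.com'}
```
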
|
<python><python-typing>
|
2024-07-19 15:47:24
| 0
| 416
|
Kafka4PresidentNow
|
78,770,134
| 7,380,827
|
Why is the 'limit' limit of maximum recursion depth in python 2**32 / 2 - 31?
|
<p>In Python, some programs raise the error <code>RecursionError: maximum recursion depth exceeded in comparison</code> or similar.</p>
<p>This is because there is a limit set for how deep the recursion can be nested.</p>
<p>To get the current value for maximum recursion depth, use</p>
<pre class="lang-py prettyprint-override"><code>import sys
sys.getrecursionlimit()
</code></pre>
<p>The default value seems to be 1000 or 1500 depending on the specific system/python/... combination.</p>
<p>The value can be increased to have access to deeper recursion.</p>
<pre class="lang-py prettyprint-override"><code>import sys
sys.setrecursionlimit(LIMIT)
</code></pre>
<p>where <code>LIMIT</code> is the new limit.</p>
<p>This is the error that I get when I exceed the 'limit' limit:</p>
<pre><code>OverflowError: Python int too large to convert to C int
</code></pre>
<p>Why is the 'limit' limit <code>int(2**32 / 2) - 31</code>?</p>
<p>I would expect that it maybe is <code>int(2**32 / 2) - 1</code> as it makes sense that integers are signed (32-bit) numbers, but why <code>- 31</code>?</p>
<p>The question first came up as I was testing with 3.9.13 (64-bit). I also tested with 3.12 (64-bit).</p>
<p>Edit: Here is a demo on IDLE with Python 3.9.13.</p>
<pre><code>Python 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license()" for more information.
>>> import sys
>>> v = 2**31-1
>>> print(v)
2147483647
>>> sys.setrecursionlimit(v)
Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
sys.setrecursionlimit(v)
OverflowError: Python int too large to convert to C int
>>> v = 2**31-30
>>> sys.setrecursionlimit(v)
Traceback (most recent call last):
File "<pyshell#5>", line 1, in <module>
sys.setrecursionlimit(v)
OverflowError: Python int too large to convert to C int
>>> v = 2**31-31
>>> sys.setrecursionlimit(v)
>>>
</code></pre>
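<p>Rather than hard-coding the cutoff, the largest value a given build accepts can be probed with a binary search; the exact cutoff is an implementation detail of CPython's C-level <code>int</code> handling and may differ between versions (this sketch only reports what the running interpreter accepts):</p>

```python
import sys

def max_accepted_recursion_limit(hi=2**31):
    """Binary-search the largest value sys.setrecursionlimit() will accept."""
    old = sys.getrecursionlimit()
    lo = old  # anything at or above the current limit is safe to try
    while lo < hi:
        mid = (lo + hi + 1) // 2
        try:
            sys.setrecursionlimit(mid)
            lo = mid
        except (OverflowError, ValueError):
            hi = mid - 1
    sys.setrecursionlimit(old)  # restore the original limit
    return lo

limit = max_accepted_recursion_limit()
print(limit >= sys.getrecursionlimit())  # True
```
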
|
<python><recursion>
|
2024-07-19 15:21:02
| 1
| 534
|
Twistios_Player
|
78,770,117
| 8,964,393
|
How to calculate the Relative Strength Index (RSI) through record iterations in pandas dataframe
|
<p>I have created a pandas dataframe as follows:</p>
<pre><code>import pandas as pd
import numpy as np
ds = { 'trend' : [1,1,1,1,2,2,3,3,3,3,3,3,4,4,4,4,4], 'price' : [23,43,56,21,43,55,54,32,9,12,11,12,23,3,2,1,1]}
df = pd.DataFrame(data=ds)
</code></pre>
<p>The dataframe looks as follows:</p>
<pre><code>display(df)
trend price
0 1 23
1 1 43
2 1 56
3 1 21
4 2 43
5 2 55
6 3 54
7 3 32
8 3 9
9 3 12
10 3 11
11 3 12
12 4 23
13 4 3
14 4 2
15 4 1
16 4 1
</code></pre>
<p>I have saved the dataframe to a .csv file called <code>df.csv</code>:</p>
<pre><code>df.to_csv("df.csv", index = False)
</code></pre>
<p>I have then created a function that calculates the Relative Strength Index (RSI - see: <a href="https://www.investopedia.com/terms/r/rsi.asp" rel="nofollow noreferrer">https://www.investopedia.com/terms/r/rsi.asp</a>):</p>
<pre><code>def get_RSI(df, column, time_window):
"""Return the RSI indicator for the specified time window."""
diff = df[column].diff(1)
    # This preserves the dimensions of the diff values.
up_chg = 0 * diff
down_chg = 0 * diff
# Up change is equal to the positive difference, otherwise equal to zero.
up_chg[diff > 0] = diff[diff > 0]
    # Down change is equal to the negative difference, otherwise equal to zero.
down_chg[diff < 0] = diff[diff < 0]
# We set com = time_window-1 so we get decay alpha=1/time_window.
up_chg_avg = up_chg.ewm(com=time_window - 1,
min_periods=time_window).mean()
down_chg_avg = down_chg.ewm(com=time_window - 1,
min_periods=time_window).mean()
RS = abs(up_chg_avg / down_chg_avg)
df['RSI'] = 100 - 100 / (1 + RS)
df = df[['RSI']]
return df
</code></pre>
<p>I need to create a new field called <code>RSI</code> which:</p>
<ol>
<li>iterates through each and every record of the dataframe</li>
<li>calculates the RSI by considering the <code>price</code> observed at each iteration and the last
prices (RSI length is 3 in this example) observed in the previous trends.</li>
</ol>
<p>For example:</p>
<ul>
<li>I iterate at record 0 and the RSI is NaN (missing).</li>
<li>I iterate at record 1 and the RSI is still NaN (missing)</li>
<li>I iterate at record 12 and the RSI is 47.667343 (it considers the price at record 3, the price at record 5 and the price at record 12)</li>
<li>I iterate at record 13 and the RSI is 28.631579 (it considers the price at record 3, the price at record 5 and the price at record 13)</li>
<li>I iterate at record 15 and the RSI is 27.586207 (it considers the price at record 3, the price at record 5 and the price at record 15)</li>
<li>and so on .....</li>
</ul>
<p>I have then written this code:</p>
<pre><code>rsi = []
for i in range(len(df)):
ds = pd.read_csv("df.csv", nrows=i+1)
print(ds.info())
d = ds.groupby(['trend'], as_index=False).agg(
{'price':'last'})
get_RSI(d,'price',3)
rsi.append(d['RSI'].iloc[-1])
df['RSI'] = rsi
</code></pre>
<p>The dataset looks correct:</p>
<pre><code>display(df)
trend price RSI
0 1 23 NaN
1 1 43 NaN
2 1 56 NaN
3 1 21 NaN
4 2 43 NaN
5 2 55 NaN
6 3 54 NaN
7 3 32 NaN
8 3 9 NaN
9 3 12 NaN
10 3 11 NaN
11 3 12 NaN
12 4 23 47.667343
13 4 3 28.631579
14 4 2 28.099174
15 4 1 27.586207
16 4 1 27.586207
</code></pre>
<p>The problem is that I need to process about 4 million records and it would take approximately 60 hours.</p>
<p>Does anyone know how to get the same results in a quick, efficient way, please?</p>
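<p>A hedged sketch of one speedup: the 60 hours are dominated by re-reading the growing CSV on every iteration. Since the grouped series for row <code>i</code> is just "the last price of each completed trend, plus the current price", that small series can be maintained incrementally, which removes all file I/O and keeps each <code>get_RSI</code> call on a tiny frame. The snippet below is self-contained (it repeats the sample data and a condensed <code>get_RSI</code>); whether the remaining per-call overhead is small enough for 4 million rows would still need measuring.</p>

```python
import pandas as pd

def get_RSI(df, column, time_window):
    """Condensed copy of the RSI helper from the question."""
    diff = df[column].diff(1)
    up_chg = 0 * diff
    down_chg = 0 * diff
    up_chg[diff > 0] = diff[diff > 0]
    down_chg[diff < 0] = diff[diff < 0]
    up_avg = up_chg.ewm(com=time_window - 1, min_periods=time_window).mean()
    down_avg = down_chg.ewm(com=time_window - 1, min_periods=time_window).mean()
    df['RSI'] = 100 - 100 / (1 + abs(up_avg / down_avg))
    return df

ds = {'trend': [1, 1, 1, 1, 2, 2, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4],
      'price': [23, 43, 56, 21, 43, 55, 54, 32, 9, 12, 11, 12, 23, 3, 2, 1, 1]}
df = pd.DataFrame(ds)

rsi = []
prior_lasts = []            # last price of each completed trend so far
prev_trend = prev_price = None
for trend, price in zip(df['trend'], df['price']):
    if prev_trend is not None and trend != prev_trend:
        prior_lasts.append(prev_price)   # the previous trend just closed
    small = pd.DataFrame({'price': prior_lasts + [price]})
    rsi.append(get_RSI(small, 'price', 3)['RSI'].iloc[-1])
    prev_trend, prev_price = trend, price
df['RSI'] = rsi
print(round(df['RSI'].iloc[12], 6))  # ≈ 47.667343, the value in the table above
```
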
|
<python><pandas><dataframe><loops><iterator>
|
2024-07-19 15:17:30
| 2
| 1,762
|
Giampaolo Levorato
|
78,770,116
| 2,132,593
|
Limit Horizontal Scrolling in Dash DataTable with multi-level header (no Extra Blank Space)
|
<p>I'm building a Dash app using Plotly that displays two <code>dash_table.DataTable</code> tables with many columns. The first table has a multi-level header.</p>
<p>The first columns of each table are fixed and horizontal scrolling is enabled for the remaining columns.</p>
<p>I'm experiencing an issue when horizontally scrolling the table with the multi-level header: it results in excessive blank space to the right of the last column (see screenshot).</p>
<p>I have tried using CSS to control overflow properties, but without success.</p>
<p>Has anyone experienced this problem before?</p>
<p>This is the example code to replicate the issue:</p>
<pre><code>import dash
from dash import dash_table, dcc, html
from dash.dependencies import Input, Output
import pandas as pd
import numpy as np

app = dash.Dash(__name__)
app.title = 'test'

# Sample Data
data = {
    'Column 1': ['A', 'B', 'C', 'D'],
    'Column 2': [1, 2, 3, 4],
    'Column 3': [5, 6, 7, 8],
    'Column 4': [9, 10, 11, 12],
    'Column 5': [13, 14, 15, 16],
    'Column 6': [13, 14, 15, 16],
    'Column 7': [13, 14, 15, 16],
    'Column 8': [13, 14, 15, 16],
    'Column 9': [13, 14, 15, 16],
    'Column 10': [9, 10, 11, 12],
    'Column 11': [13, 14, 15, 16],
    'Column 12': [13, 14, 15, 16],
    'Column 13': [13, 14, 15, 16],
    'Column 14': [13, 14, 15, 16],
    'Column 15': [13, 14, 15, 16],
    'Column 16': [9, 10, 11, 12],
    'Column 17': [13, 14, 15, 16],
    'Column 18': [13, 14, 15, 16],
    'Column 19': [13, 14, 15, 16],
    'Column 20': [13, 14, 15, 16],
    'Column 21': [13, 14, 15, 16],
}
df = pd.DataFrame(data)

app.layout = html.Div(
    children=[
        dcc.Interval(id="interval", interval=5 * 60 * 1000, n_intervals=0),  # 5 minutes
        html.Div(id='raw-data', style={'display': 'none'}, children='{}'),
        html.H1(children='Test'),
        # Monthly Data Table
        html.Div(
            className="data-table-container",
            children=[
                dash_table.DataTable(
                    id='table_monthly',
                    columns=[
                        {"name": ["A", "Column 1"], "id": "Column 1"},
                        {"name": ["A", "Column 2"], "id": "Column 2"},
                        {"name": ["B", "Column 3"], "id": "Column 3"},
                        {"name": ["B", "Column 4"], "id": "Column 4"},
                        {"name": ["B", "Column 5"], "id": "Column 5"},
                        {"name": ["C", "Column 6"], "id": "Column 6"},
                        {"name": ["C", "Column 7"], "id": "Column 7"},
                        {"name": ["C", "Column 8"], "id": "Column 8"},
                        {"name": ["C", "Column 9"], "id": "Column 9"},
                        {"name": ["C", "Column 10"], "id": "Column 10"},
                        {"name": ["D", "Column 11"], "id": "Column 11"},
                        {"name": ["D", "Column 12"], "id": "Column 12"},
                        {"name": ["D", "Column 13"], "id": "Column 13"},
                        {"name": ["E", "Column 14"], "id": "Column 14"},
                        {"name": ["E", "Column 15"], "id": "Column 15"},
                        {"name": ["E", "Column 16"], "id": "Column 16"},
                        {"name": ["E", "Column 17"], "id": "Column 17"},
                        {"name": ["F", "Column 18"], "id": "Column 18"},
                        {"name": ["G", "Column 19"], "id": "Column 19"},
                        {"name": ["G", "Column 20"], "id": "Column 20"},
                        {"name": ["H", "Column 21"], "id": "Column 21"},
                    ],
                    data=df.to_dict('records'),
                    fixed_columns={'headers': True, 'data': 1},
                    merge_duplicate_headers=True,
                    style_table={
                        'overflowX': 'auto',
                        'minWidth': '100%',
                        'width': 'max-content',  # Use max-content to fit contents
                        'maxWidth': 'none'
                    },
                    style_cell={
                        'minWidth': 100, 'maxWidth': 200, 'width': 100,
                        'textAlign': 'center'
                    }
                )
            ]
        ),
        # Daily Data Table
        html.Div(
            className="data-table-container",
            children=[
                dash_table.DataTable(
                    id='table_daily',
                    columns=[{'name': i, 'id': i} for i in df.columns],
                    data=df.to_dict('records'),
                    fixed_columns={'headers': True, 'data': 1},
                    style_table={
                        'overflowX': 'auto',
                        'minWidth': '100%',
                        'width': 'max-content',
                        'maxWidth': 'none'
                    },
                    style_cell={
                        'minWidth': 100, 'maxWidth': 200, 'width': 100,
                        'textAlign': 'center'
                    }
                )
            ]
        )
    ]
)

if __name__ == '__main__':
    app.run_server(debug=True)
</code></pre>
<p>And this is a screenshot:</p>
<p><a href="https://i.sstatic.net/KPboQb2G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KPboQb2G.png" alt="enter image description here" /></a></p>
|
<python><css><plotly-dash>
|
2024-07-19 15:17:28
| 1
| 1,964
|
Giacomo
|
78,770,111
| 1,304,376
|
better way of initializing python object method variables with object variable values if not passed in
|
<p>I have many object methods where I can pass in values, but want to be able to default them to the corresponding object attribute if not passed. Something like this example that I came up with to illustrate what I want to do:</p>
<pre><code>from datetime import datetime
class Appointment :
def __init__(self, resource, startTime:datetime, endTime:datetime) :
self.resource = resource
self.startTime = startTime
self.endTime = endTime
def isValid(self, startTime:datetime=None, endTime:datetime=None) :
# would be nice if I could:
# def isValid(self, startTime:datetime=self.startTime, endTime:datetime=self.endTime) :
# first way of initializing
if not startTime :
startTime = self.startTime
# second way of initializing
endTime = endTime if endTime else self.endTime
return self.resource.isAvailable(startTime, endTime)
</code></pre>
<p>Those are the two ways (three if you count the illegal syntax one) I can think of doing it. First has lots of nesting and more lines, second has redundant assignment, so I dislike both. Is there a better way? Is there a way to make the illegal syntax one work?</p>
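<p>One common pattern that avoids both drawbacks is a module-level sentinel: it needs only one line per argument and, unlike truthiness checks, it does not misfire on falsy-but-valid values such as <code>None</code>. A sketch (the <code>EchoResource</code> class is a made-up stand-in for the real resource):</p>

```python
from datetime import datetime

_UNSET = object()  # sentinel distinguishing "not passed" from a falsy value

class Appointment:
    def __init__(self, resource, startTime: datetime, endTime: datetime):
        self.resource = resource
        self.startTime = startTime
        self.endTime = endTime

    def isValid(self, startTime=_UNSET, endTime=_UNSET):
        # one line per argument; an explicitly passed None still counts
        startTime = self.startTime if startTime is _UNSET else startTime
        endTime = self.endTime if endTime is _UNSET else endTime
        return self.resource.isAvailable(startTime, endTime)

class EchoResource:
    # toy resource used only to show which values isValid ends up with
    def isAvailable(self, start, end):
        return (start, end)

appt = Appointment(EchoResource(), datetime(2024, 1, 1), datetime(2024, 1, 2))
```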
|
<python>
|
2024-07-19 15:16:14
| 1
| 1,676
|
Ching Liu
|
78,770,014
| 188,331
|
How to use HuggingFace's run_translation.py script to train a translation from scratch?
|
<p>I tried various HuggingFace scripts to build language models, such as <code>run_mlm.py</code> (<a href="https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py" rel="nofollow noreferrer">link</a>), <code>run_clm.py</code> (<a href="https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py" rel="nofollow noreferrer">link</a>) and <code>run_translation.py</code> (<a href="https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation.py" rel="nofollow noreferrer">link</a>). For the former 2 scripts, it can train a language model from scratch (i.e. without a base model).</p>
<p>However, I cannot build a model from scratch using <code>run_translation.py</code>, as the model name or path is required in the command line. Given that I have a considerably large parallel translation dataset, how can I build a translation model from scratch?</p>
<hr />
<p><strong>UPDATE:</strong> The command is:</p>
<pre><code>NCCL_P2P_DISABLE="1" NCCL_IB_DISABLE="1" python run_translation.py \
--output_dir models/TestTranslation-v1 \
--model_name_or_path fnlp/bart-base-chinese \
--tokenizer_name path/to/custom-tokenizer \
--train_file data/bart_parallel/train.json \
    --validation_file data/bart_parallel/validation.json \
--do_train \
--do_eval \
--source_lang yue \
--target_lang zh
</code></pre>
<p>p.s. The <code>NCCL_*</code> flags are required for NVIDIA 4090 cards.</p>
|
<python><huggingface-transformers>
|
2024-07-19 14:53:00
| 1
| 54,395
|
Raptor
|
78,769,948
| 5,189,215
|
Slow __repr__ calculation for one of Python classes slows down the jupyter notebook debugger in VSCode
|
<p>I am trying to debug some code in VSCode Jupyter Notebook.
I get the following warning:</p>
<pre><code>pydevd warning: Computing repr of Out (dict) was slow (took 6.62s)
Customize report timeout by setting the `PYDEVD_WARN_SLOW_RESOLVE_TIMEOUT` environment variable to a higher timeout (default is: 0.5s)
</code></pre>
<p>And every step-over in the debugger takes a really long time.<br />
As I understand it, the debugger tries to automatically resolve repr of all currently defined variables, even if I am not asking it to print anything.<br />
I've tried to disable automatic variable resolution following the advice <a href="https://stackoverflow.com/questions/73471088/vscode-disable-variable-view-in-debugging#comment129753356_73471088">here</a>.<br />
I've tried setting environment variables such as <code>PYDEVD_WARN_SLOW_RESOLVE_TIMEOUT</code> and <code>PYDEVD_DISABLE_VARIABLE_LOOKUP</code>, either in jupyter notebook or in the debug session itself.
None of the above helped.</p>
<p>Reproducible example:</p>
<pre class="lang-py prettyprint-override"><code>import time

class MyClass:
    def __init__(self, value):
        self.value = value

    def __repr__(self):
        time.sleep(5)
        return f"MyClass({self.value!r})"  # __repr__ must return a str

c1 = MyClass("asd")
c2 = MyClass("bcd")
c3 = MyClass("bcd2")
<p>The debugger "hangs" if you break beyond the initialization of "c1".</p>
<p>I've also noted that there is a problem only when I use Jupyter Notebook in VSCode. Running the Python debugger on a script with similar code doesn't hang (after hiding the "variables" section, at least).</p>
|
<python><visual-studio-code>
|
2024-07-19 14:35:24
| 0
| 1,102
|
Maksim Gayduk
|
78,769,892
| 5,188,353
|
Wrong greeks (delta, vega, ...) results when requesting data using reqMktData
|
<p>I am using the native library ibapi (Interactive Brokers). I have the following code:</p>
<pre class="lang-py prettyprint-override"><code>from ibapi.client import EClient
from ibapi.wrapper import EWrapper
from ibapi.contract import Contract
import threading
import time

class TradingApp(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, wrapper=self)
        self.nextOrderId = None
        self.option_params_received = False

    def error(self, reqId, errorCode, errorString, advancedOrderRejectJson=None):
        print("Error {} {} {}".format(reqId, errorCode, errorString))

    def tickOptionComputation(self, reqId, tickType, impliedVol, delta, optPrice, pvDividend, gamma, vega, theta, undPrice, pvDividend2):
        if tickType == 13:  # This is the type for option computation
            print(f"Greeks for reqId {reqId}: Delta={delta}, Gamma={gamma}, Vega={vega}, Theta={theta}")

def websocket_con():
    app.run()

# Create an instance of the TradingApp class and connect to the server.
app = TradingApp()
print("Attempting to connect...")
app.connect("127.0.0.1", 7497, clientId=7)

# Create a thread to connect to the websocket.
con_thread = threading.Thread(target=websocket_con)
con_thread.start()

# Give some lag time for connection
time.sleep(2)

# Create a simple put options contract on AAPL
option_contract = Contract()
option_contract.symbol = "AAPL"  # Added symbol for the underlying
option_contract.secType = "OPT"
option_contract.currency = "USD"
option_contract.exchange = "SMART"
option_contract.lastTradeDateOrContractMonth = "20240802"
option_contract.strike = 195
option_contract.tradingClass = 'AAPL'
option_contract.right = "P"  # Put option

app.reqMktData(reqId=111,
               contract=option_contract,
               genericTickList="101",
               snapshot=False,
               regulatorySnapshot=False,
               mktDataOptions=[])

time.sleep(5)
app.cancelMktData(111)
</code></pre>
<p>The problem I have is that the results for all greeks are wrong from what I see in the TWS. I am using real-time data, no snapshots. I see the values in parallel (no timing issue), and the options contract is described in detail. The result for the delta is even a positive value, although I sent a request for a put option. Is there anything I forgot in the code to receive the same values I see in TWS?</p>
|
<python><algorithmic-trading><interactive-brokers><ib-api>
|
2024-07-19 14:21:39
| 1
| 675
|
clex
|
78,769,876
| 4,701,426
|
Filtering a pandas series consisting of lists and NaN values if the elements contain a string
|
<p>Let's consider this dataframe:</p>
<pre><code>temp = pd.DataFrame({'x': [['ab', 'bc'], ['hg'], np.nan]})
temp
x
0 [ab, bc]
1 [hg]
2 NaN
</code></pre>
<p>I'd like to create a new column called <code>dummy</code> that takes the value of 1 if a row contains letter 'a' in any of its elements, value of 0 if it does not, and value of NaN if it's NaN.</p>
<p><strong>Expected outcome:</strong></p>
<pre><code> x dummy
0 [ab, bc] 1
1 [hg] 0
2 NaN NaN
</code></pre>
<p>Sounds simple but I'm stuck. What I've tried:</p>
<p>1)</p>
<pre><code>temp['dummy'] = np.where(temp.x.str.contains('a', case = False, na = False), 1, 0)
</code></pre>
<p>will assign all 0s because it compares the whole list to 'a'</p>
<p>2)</p>
<pre><code>temp['dummy'] = np.where(temp.x.astype(str).str.contains('a', case = False, na = False), 1, 0)
</code></pre>
<p><code>astype(str)</code> will take care of the above issue by flattening the list into a string, but now np.NaN is 'nan' and <code>na = False</code> does not work on it.</p>
<p>3)</p>
<pre><code>temp['dummy'] = np.where(all([temp.x.astype(str).str.contains('a', case = False, na = False) , temp.x.astype(str) != 'nan']), 1, 0)
</code></pre>
<p>I think my second condition should take care of the above issue, but now I get the error: <code>ValueError: The truth value of a Series is ambiguous.</code></p>
<p>4)</p>
<pre><code>temp['dummy'] = [1 if all(['a' in y , y != np.nan]) else 0 for y in temp.x ]
</code></pre>
<p>Error: <code>TypeError: argument of type 'float' is not iterable</code></p>
<p>5)</p>
<p>The only thing that will work is:</p>
<pre><code>temp['dummy'] = np.nan # placeholder
temp['dummy'][temp.x.notnull()] = np.where(temp[temp.x.notnull()].x.astype(str).str.contains('a', case = False, na = False), 1, 0)
temp
</code></pre>
<p>But it's two lines and ugly.</p>
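<p>A sketch of a one-line variant that sidesteps the string-casting issues by testing the element type explicitly (<code>isinstance(v, list)</code> is an assumption here; adjust it if the column can hold other iterables):</p>

```python
import numpy as np
import pandas as pd

temp = pd.DataFrame({'x': [['ab', 'bc'], ['hg'], np.nan]})

# NaN rows are not lists, so they fall through to np.nan;
# list rows get 1 if any element contains 'a', else 0
temp['dummy'] = temp['x'].apply(
    lambda v: np.nan if not isinstance(v, list)
    else int(any('a' in s for s in v))
)
```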
|
<python><pandas>
|
2024-07-19 14:18:19
| 1
| 2,151
|
Saeed
|
78,769,837
| 7,206,559
|
How do I compose or chain multiple functions in Python
|
<p>When writing a program, I often want to perform a series of steps on a piece of data (usually a list or string). For example I might want to call a function which returns a string, convert it to a list, map over the list, and then reduce to get a final result.</p>
<p>In programming languages I've used in the past, I would expect to be able to do something like one of the following:</p>
<pre><code>compose(
    getSomeString,
    list,
    map(someMapFunction),
    reduce(someReduceFunction)
)()
</code></pre>
<pre><code>getSomeString()
.list()
.map(someMapFunction)
.reduce(someReduceFunction)
</code></pre>
<pre><code>getSomeFunction
=> list
=> map(someMapFunction)
=> reduce(someReduceFunction)()
</code></pre>
<p>However, I can't figure out a clean and compact way to do composition/chaining in Python. The one exception I've found is that a list comprehension works for the case where I want to compose a map and a filter.</p>
<p>What is the Pythonic way of writing composition oriented code without either having tons of nesting that makes my code look like Lisp:</p>
<pre><code>reduce(
    someReduceFunction,
    map(
        someMapFunction,
        list(
            getSomeString()
        )
    )
)
</code></pre>
<p>Or creating intermediate values for everything:</p>
<pre><code>myString = getSomeString()
stringAsList = list(myString)
mappedString = map(someMapFunction, stringAsList)
reducedString = reduce(someReduceFunction, mappedString)
</code></pre>
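<p>For reference, a left-to-right <code>compose</code> helper is only a few lines; this sketch uses hypothetical stand-ins (<code>get_some_string</code> and the lambdas) for the question's functions:</p>

```python
from functools import reduce

def compose(*fns):
    # left-to-right composition: compose(f, g, h)(x) == h(g(f(x)))
    def composed(*args, **kwargs):
        result = fns[0](*args, **kwargs)
        for fn in fns[1:]:
            result = fn(result)
        return result
    return composed

# hypothetical pipeline mirroring the question's example
get_some_string = lambda: "abc"
pipeline = compose(
    get_some_string,
    list,
    lambda chars: map(str.upper, chars),
    lambda chars: reduce(lambda a, b: a + b, chars),
)
```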
|
<python><python-3.x><method-chaining><function-composition>
|
2024-07-19 14:10:07
| 2
| 899
|
David Moneysmith
|
78,769,769
| 1,484,601
|
python typing TypeVar: understanding unbound error
|
<p>I do not understand error like</p>
<blockquote>
<p>Type variable "T" is unbound</p>
</blockquote>
<p>Here a concrete example:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Dict, Type, Callable, TypeVar
class Sup: ...
class A(Sup): ...
class B(Sup): ...
def get_A(a: A) -> int:
    return 1

def get_B(b: B) -> int:
    return 2
T = TypeVar("T", bound=Sup)
f: Dict[Type[T], Callable[[T], int]] = {A: get_A, B: get_B}
</code></pre>
<p>mypy returns:</p>
<pre class="lang-none prettyprint-override"><code>error: Type variable "foo.T" is unbound [valid-type]
note: (Hint: Use "Generic[T]" or "Protocol[T]" base class to bind "T" inside a class)
note: (Hint: Use "T" in function signature to bind "T" inside a function)
error: Dict entry 0 has incompatible type "type[A]": "Callable[[A], int]"; expected "type[T?]": "Callable[[T], int]" [dict-item]
error: Dict entry 1 has incompatible type "type[B]": "Callable[[B], int]"; expected "type[T?]": "Callable[[T], int]" [dict-item]
</code></pre>
<p>My understanding (which may be incorrect) is that mypy complains because it cannot guarantee that, later on, no (key, value) pair violating the constraint on T will be added. For example:</p>
<pre class="lang-py prettyprint-override"><code># not a subclass of Sup !
class D: ...
def get_D(d: D) -> int:
    return 2
f[D] = get_D # D is not a subclass of Sup, so violation of the type of f
</code></pre>
<p>Question:</p>
<ul>
<li><p>is my understanding of the error correct ? If not, could an explanation be provided ?</p>
</li>
<li><p>if my understanding is correct, why would the error occur here:</p>
</li>
</ul>
<pre class="lang-py prettyprint-override"><code>f: Dict[Type[T], Callable[[T], int]] = {A: get_A, B: get_B}
</code></pre>
<p>and not here ?</p>
<pre class="lang-py prettyprint-override"><code># This is the line not respecting the type of f !
f[D] = get_D
</code></pre>
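<p>Following the hint in mypy's note ("Use "T" in function signature to bind "T" inside a function"), one sketch of a workaround is to bind <code>T</code> per call in a small registry; the internal dict is deliberately loosely typed, because a plain <code>Dict</code> annotation cannot express the per-key correlation between <code>Type[T]</code> and <code>Callable[[T], int]</code>:</p>

```python
from typing import Callable, Dict, Type, TypeVar

class Sup: ...
class A(Sup): ...
class B(Sup): ...

def get_A(a: A) -> int:
    return 1

def get_B(b: B) -> int:
    return 2

T = TypeVar("T", bound=Sup)

class Registry:
    def __init__(self) -> None:
        # the key/value correlation cannot be expressed here, so
        # values are stored with a loose type on purpose
        self._handlers: Dict[type, Callable[..., int]] = {}

    def register(self, key: Type[T], fn: Callable[[T], int]) -> None:
        # T is bound by this signature, so mypy checks that key
        # and fn agree with each other at every call site
        self._handlers[key] = fn

    def dispatch(self, obj: Sup) -> int:
        return self._handlers[type(obj)](obj)

reg = Registry()
reg.register(A, get_A)
reg.register(B, get_B)
```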
|
<python><python-typing><type-variables>
|
2024-07-19 13:55:30
| 1
| 4,521
|
Vince
|
78,769,759
| 6,128,612
|
Jupyter notebook lost syntax highlithing
|
<p>After creating environments (Ubuntu VM), Jupyter Notebook lost syntax highlighting of code for some reason. Highlighting is now unavailable for all environments. How can I fix that? Below you can see that my language syntax highlighting is grayed out, and how my code looks right now.</p>
<p><a href="https://i.sstatic.net/ZjhCFHmS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZjhCFHmS.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/QsRX4ZTn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QsRX4ZTn.png" alt="enter image description here" /></a></p>
|
<python><jupyter-notebook>
|
2024-07-19 13:53:40
| 0
| 343
|
Alexandros
|
78,769,746
| 200,783
|
Why does mypy give an error: Generator has incompatible item type "Sequence[str]"?
|
<p>I've used mypy to type-check <code>rbast.py</code> from <a href="https://medium.com/workpath-thewaywework/learning-ruby-2-things-i-like-2-things-i-miss-from-python-6f60af8ed16c" rel="nofollow noreferrer">this post</a>:</p>
<p><a href="https://mypy-play.net/?mypy=latest&python=3.12&gist=19931c644949c29b715b797a90b59a01" rel="nofollow noreferrer">https://mypy-play.net/?mypy=latest&python=3.12&gist=19931c644949c29b715b797a90b59a01</a></p>
<p>It gives three errors. The first two are obvious, but the third is:</p>
<p><code>main.py:38: error: Generator has incompatible item type "Sequence[str]"; expected "str" [misc]</code></p>
<p>How do I need to change the code to fix this error?</p>
|
<python><mypy>
|
2024-07-19 13:50:52
| 1
| 14,493
|
user200783
|
78,769,678
| 3,932,263
|
VSCode autosuggest python imports without relying on Intellisense
|
<p>This is explicitly about not using pylance/intellisense because it is broken sometimes, and would be easier to fix through text matching by identifying common names with hardcoded import statements.</p>
<p>I am using objects like AutoModel from the transformers library and frequently come across objects that the auto-import suggestions cannot find.</p>
<p>Rather than bothering with finding the reason it doesn't find, I always want VSCode to suggest to do "from transformers import AutoModel" whenever it sees an undefined "AutoModel", so no need to scan through any python import directories.</p>
<p>Is this possible out-of-the-box or with some extension?</p>
<p>By the way, I already tried to add this to no avail (after reloading the window):</p>
<pre><code>"python.analysis.packageIndexDepths": [
    {"name": "transformers", "index": 4},
]
</code></pre>
|
<python><visual-studio-code><python-import><intellisense>
|
2024-07-19 13:35:48
| 1
| 1,399
|
Maximilian Mordig
|
78,769,542
| 1,484,601
|
Type hints for a callable taking a subclass type as argument
|
<p><code>A</code> and <code>B</code> are sub-classes of <code>Sup</code>. These functions are defined:</p>
<pre class="lang-py prettyprint-override"><code>def get_A_report(a: A) -> int
def get_B_report(b: B) -> int
</code></pre>
<p>What are the correct typing hints for <code>f</code>?</p>
<pre class="lang-py prettyprint-override"><code>f = {
    A: get_A_report,
    B: get_B_report,
}
</code></pre>
<p>Note: the naive:</p>
<pre class="lang-py prettyprint-override"><code>f: Dict[Type[Sup],Callable[[Sup],int]]
</code></pre>
<p>results in this error:</p>
<pre><code>error: Dict entry 0 has incompatible type
error: Dict entry 1 has incompatible type
</code></pre>
<p>I tried to use <code>TypeVar</code> and <code>Protocol</code>, with no success.</p>
|
<python><python-typing>
|
2024-07-19 13:08:28
| 0
| 4,521
|
Vince
|
78,768,949
| 1,818,935
|
A seemingly stopped HTTP service continues to work
|
<p>I wrote a python function, detailed <a href="https://stackoverflow.com/a/78768222/1818935">in another post</a>, which communicates with an HTTP-based <a href="https://rumbledb.org/" rel="nofollow noreferrer">RumbleDB</a> server, running on my local machine, in order to evaluate <a href="https://www.jsoniq.org/" rel="nofollow noreferrer">JSONiq</a> queries.</p>
<p>I tested the function in a Jupyter Notebook inside Visual Studio Code running on my Windows 10 laptop. Before executing the function, I started the RumbleDB server in a Git Bash console by executing the command</p>
<pre><code>spark-submit rumbledb-1.21.0-for-spark-3.5.jar serve -p 9090
</code></pre>
<p>as instructed <a href="https://rumble.readthedocs.io/en/latest/HTTPServer/" rel="nofollow noreferrer">in the RumbleDB documentation</a>.</p>
<p>After a few tests, I hit <code>Ctrl</code>+<code>C</code> inside the bash console in order to stop the server (as suggested <a href="https://stackoverflow.com/a/61520496/1818935">in this post</a>).</p>
<p>The server having been stopped (or so I thought), I restarted the Jupyter notebook where I did the testing, and reran it. To my surprise, the tests succeeded, and returned the expected values, with no error messages.</p>
<p>How can it be, if the RumbleDB process has stopped?</p>
<p>I tested which HTTP processes are listening to ports 9090 and 8001 (8001 is another port I used during the testing) by running the following commands in the Git Bash console:</p>
<pre><code>netstat -ano | findstr 9090
netstat -ano | findstr 8001
</code></pre>
<p>The commands returned nothing, indicating, I presume, that no HTTP process is listening to these ports.</p>
<p>This only increased my puzzlement. Please help.</p>
<p>EDIT: I have restarted my computer, and immediately after it came back up, I opened Visual Studio Code and ran the tests. They succeeded! How can this be?</p>
|
<python><windows><apache-spark><http><git-bash>
|
2024-07-19 10:47:20
| 1
| 6,053
|
Evan Aad
|
78,768,907
| 2,107,030
|
split pandas dataframe based on given row string
|
<p>I have a text file with a data set of the form</p>
<pre><code>Line 1
Line 2
!
1.01499999 0.504999995 6.19969398E-7 5.38933136E-7 1.35450875E-6
1.74000001 0.220000029 7.92876381E-6 4.1831604E-6 6.61433387E-6
2.10750008 0.147500038 2.06282803E-5 9.86384475E-6 9.99511121E-6
2.54500008 0.289999962 1.9321451E-5 9.41255712E-6 1.40418542E-5
3.19000006 0.355000019 2.1970769E-5 1.08561271E-5 1.98473426E-5
4.18249989 0.637500048 1.94816221E-5 1.02610993E-5 2.58007622E-5
6.23999977 1.41999984 1.95048287E-5 1.50751257E-5 2.59193912E-5
8.50250053 0.84250021 -3.13448794E-7 7.45566576E-5 1.58940475E-5
-------
1.02499998 0.389999986 1.25625479E-6 8.79037373E-7 1.47827166E-6
1.57249999 0.157500029 3.83437873E-6 4.57433907E-6 4.91447827E-6
2.16750002 0.4375 9.80125606E-6 4.12876625E-6 9.34428499E-6
3.00500011 0.399999976 2.13497806E-5 9.10624203E-6 1.67928665E-5
4.25500011 0.850000024 1.35475839E-5 8.33513332E-6 2.37308996E-5
5.84749985 0.742500067 4.97072215E-5 2.55404848E-5 2.50197209E-5
8.25749969 1.66750002 1.56722888E-6 4.85851124E-5 1.65847723E-5
-------
0.741162002 0.158674002 1.99696819E-6 8.0933961E-7 8.04328749E-7
1.11103797 0.211201996 2.83219379E-6 1.18746482E-6 2.33293395E-6
1.48358989 0.161349952 6.8232016E-6 2.48437755E-6 4.83310259E-6
2.01257992 0.367640018 4.58416616E-6 2.24343057E-6 9.16807585E-6
2.67011499 0.289895058 2.16971075E-5 7.58860506E-6 1.58778421E-5
3.64500499 0.684994936 1.36258495E-5 5.31197475E-6 2.35662919E-5
5.09749985 0.767499924 1.92129683E-5 9.40257451E-6 2.89442723E-5
6.32749987 0.462500095 7.3068717E-5 2.60800098E-5 2.67253181E-5
8.11250019 1.32250023 3.10791838E-5 1.88406557E-5 1.89492275E-5
-------
</code></pre>
<p>I want to read it as a pandas dataframe, splitting it into the data sets separated by the string <code>-------</code>, so that I can read it in a fashion similar to this:</p>
<pre><code>names = ['column_name1','column_name2','column_name3','column_name4','column_name5']
for j in range(5):
    names.append('cols%i' % j)
kwargs = {
    "names": names,
    "delimiter": ' ',
    "skip_blank_lines": True
}
skiprows = 3
with open('file.txt') as f:
    content = "".join(f.readlines()[skiprows:])
dfs = {
    k: pd.read_table(io.StringIO(data), **kwargs)
    for k, data in enumerate(content.split(u'-------'))  # dfs.values()[i] will be a separated dataset discriminated by the string `-------`
}
axs1.errorbar(dfs.values()[0].column_name1, dfs.values()[0].column_name2, xerr=dfs.values()[0].column_name3, yerr=dfs.values()[0].column_name4, fmt='.')
</code></pre>
<p>The above code returns the error <code>TypeError: 'dict_values' object is not subscriptable</code>.</p>
<p>How can I read the data frame <code>dfs</code>, splitting it based on the string <code>-------</code>, so that I can plot its values in a way similar to the <code>ax1.errorbar()</code> call in the code, e.g., using <code>dfs.values()[0].column_name1</code>?</p>
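<p>On the <code>TypeError</code> itself: <code>dict_values</code> is iterable but not subscriptable, so either wrap it in <code>list(...)</code> or build a list directly. A sketch with inline data standing in for <code>file.txt</code>:</p>

```python
import io
import pandas as pd

# inline stand-in for the question's file.txt (header lines already skipped)
content = """1.0 0.5 1e-6 2e-6 3e-6
2.0 0.2 4e-6 5e-6 6e-6
-------
3.0 0.3 7e-6 8e-6 9e-6
-------
"""

names = ['column_name1', 'column_name2', 'column_name3',
         'column_name4', 'column_name5']
dfs = [
    pd.read_table(io.StringIO(chunk), sep=r'\s+', names=names,
                  skip_blank_lines=True)
    for chunk in content.split('-------')
    if chunk.strip()            # drop the empty trailing piece
]
# a list is subscriptable, unlike dict_values:
first = dfs[0]
```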
|
<python><pandas><dataframe><split>
|
2024-07-19 10:37:40
| 1
| 2,166
|
Py-ser
|
78,768,827
| 240,443
|
A local variable seemingly breaking scope in Django
|
<p>I have introduced this function in a file <code>storage.py</code> that is imported from <code>models.py</code>:</p>
<pre><code>def uuid_file_name(kind):
    def file_name(instance, filename):
        h = str(uuid.uuid4())
        basename, ext = os.path.splitext(filename)
        return os.path.join(f'{kind}_files', h[0:1], h[1:2], h + ext.lower())
    return file_name
</code></pre>
<p>The intention of the function is to dynamically generate file names in a field declaration, such as:</p>
<pre><code>class Thingy(models.Model):
    widget = FileField(upload_to=uuid_file_name("widget"))
</code></pre>
<p>Now when I <code>makemigrations</code>, I get an odd error:</p>
<pre><code>ValueError: Could not find function file_name in <app>.storage.
</code></pre>
<p>There is no other mention of <code>file_name</code> in my project, and if I change the identifier, the error message changes accordingly, so the error is definitely from this. However, without some metaprogramming, I can't see how <code>file_name</code> can leak from within this function.</p>
<p>I thought I could avoid whatever magic was happening by <code>from .storage import uuid_file_name as _uuid_file_name</code>, hoping Django wouldn't act on a private module member, but the same error occurs.</p>
<p>So, a two-fold question:</p>
<ol>
<li><p>Why is this happening, and how do I prevent it?</p>
</li>
<li><p>Is there another way to write the equivalent code to circumvent the error?</p>
</li>
</ol>
<hr />
<p>Full traceback:</p>
<pre><code>Traceback (most recent call last):
File "<project>/./manage.py", line 22, in <module>
main()
File "<project>/./manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "<project>/<venv>/lib/python3.12/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line
utility.execute()
File "<project>/<venv>/lib/python3.12/site-packages/django/core/management/__init__.py", line 436, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "<project>/<venv>/lib/python3.12/site-packages/django/core/management/base.py", line 413, in run_from_argv
self.execute(*args, **cmd_options)
File "<project>/<venv>/lib/python3.12/site-packages/django/core/management/base.py", line 459, in execute
output = self.handle(*args, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<project>/<venv>/lib/python3.12/site-packages/django/core/management/base.py", line 107, in wrapper
res = handle_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<project>/<venv>/lib/python3.12/site-packages/django/core/management/commands/makemigrations.py", line 259, in handle
self.write_migration_files(changes)
File "<project>/<venv>/lib/python3.12/site-packages/django/core/management/commands/makemigrations.py", line 364, in write_migration_files
migration_string = writer.as_string()
^^^^^^^^^^^^^^^^^^
File "<project>/<venv>/lib/python3.12/site-packages/django/db/migrations/writer.py", line 141, in as_string
operation_string, operation_imports = OperationWriter(operation).serialize()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<project>/<venv>/lib/python3.12/site-packages/django/db/migrations/writer.py", line 99, in serialize
_write(arg_name, arg_value)
File "<project>/<venv>/lib/python3.12/site-packages/django/db/migrations/writer.py", line 51, in _write
arg_string, arg_imports = MigrationWriter.serialize(item)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<project>/<venv>/lib/python3.12/site-packages/django/db/migrations/writer.py", line 287, in serialize
return serializer_factory(value).serialize()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<project>/<venv>/lib/python3.12/site-packages/django/db/migrations/serializer.py", line 42, in serialize
item_string, item_imports = serializer_factory(item).serialize()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<project>/<venv>/lib/python3.12/site-packages/django/db/migrations/serializer.py", line 231, in serialize
return self.serialize_deconstructed(path, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<project>/<venv>/lib/python3.12/site-packages/django/db/migrations/serializer.py", line 96, in serialize_deconstructed
arg_string, arg_imports = serializer_factory(arg).serialize()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<project>/<venv>/lib/python3.12/site-packages/django/db/migrations/serializer.py", line 187, in serialize
raise ValueError(
ValueError: Could not find function file_name in <app>.storage.
</code></pre>
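<p>For illustration only (not a confirmed fix): Django's migration serializer can reference module-level functions, and recent Django versions can also serialize <code>functools.partial</code> objects wrapping them, so one workaround sketch moves <code>kind</code> into a <code>partial</code> instead of a closure:</p>

```python
import os
import uuid
from functools import partial

def uuid_file_name(instance, filename, kind):
    # a module-level function has a real import path, so the migration
    # writer can serialize a reference to it (a closure returned by a
    # factory does not, hence "Could not find function file_name")
    h = str(uuid.uuid4())
    _, ext = os.path.splitext(filename)
    return os.path.join(f"{kind}_files", h[0:1], h[1:2], h + ext.lower())

# e.g. FileField(upload_to=partial(uuid_file_name, kind="widget"))
widget_upload_to = partial(uuid_file_name, kind="widget")
name = widget_upload_to(None, "photo.JPG")
```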
|
<python><django><scope>
|
2024-07-19 10:19:35
| 0
| 199,494
|
Amadan
|
78,768,753
| 13,942,929
|
Cython - How do we implement Tuple of Object in Cython?
|
<p>I want to implement a tuple of objects in my Cython project.
I couldn't find a way to implement it in the <code>.pyx</code> file so that I can use it in Python.</p>
<p>You can check my <code>get_language</code> method for some reference.</p>
<p>And please help me with my <code>get_value</code> method, which takes a <code>tuple of object</code> as a parameter and returns a <code>tuple of object</code> value.</p>
<p>In CPP Side</p>
<pre><code>#include "Career.h"
#include "Language.h"

using triple_language_ptr = std::tuple<std::shared_ptr<_Language>, std::shared_ptr<_Language>, std::shared_ptr<_Language>>;

class _Career {
    std::shared_ptr<_Language> get_language(std::shared_ptr<_Language> value);
    triple_language_ptr get_value(triple_language_ptr value);
};
</code></pre>
<p>In Cython Side
[career.pxd]</p>
<pre><code>cdef extern from "Career.h":
    cdef cppclass _Career:
        _Career(...)
        shared_ptr[_Language] get_language(shared_ptr[_Language] value)
        tuple[shared_ptr[_Language], shared_ptr[_Language], shared_ptr[_Language]] get_value(
            tuple[shared_ptr[_Language], shared_ptr[_Language], shared_ptr[_Language]] value
        )

cdef class Career:
    cdef shared_ptr[_Career] career_ptr
<p>[career.pyx]</p>
<pre><code>cdef class Career:
    def __cinit__(...):
        pass

    def get_language(self, value: Language):
        cdef Language py_lang = Language()
        py_lang.lang_ptr = self.career_ptr.get().get_language(value.lang_ptr)
        return py_lang

    def get_value(self, value: ???):
        # Fill this part of code for me
        pass
</code></pre>
|
<python><c++><cython><cythonize>
|
2024-07-19 10:04:49
| 1
| 3,779
|
Punreach Rany
|
78,768,062
| 9,097,114
|
Issues with pyinstaller (Module Not Found)
|
<p>I have created a Python (.PY) file that uses the Tkinter package and executes perfectly in Spyder.</p>
<p>Now I am trying to convert it to an .EXE file using the pyinstaller package; once converted, I tried to run it and got the below error:</p>
<p><a href="https://i.sstatic.net/FyaoN74V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FyaoN74V.png" alt="enter image description here" /></a></p>
<p>The versions of the packages are as below:</p>
<ol>
<li><p>Python: 3.11.7</p>
</li>
<li><p><code>tkinter.TkVersion</code>: 8.6</p>
</li>
<li><p>Tcl patch level: 8.6.12</p>
<pre><code>import tkinter
tcl = tkinter.Tcl()
print(tcl.call("info", "patchlevel"))
</code></pre>
</li>
</ol>
|
<python><python-3.x><tkinter>
|
2024-07-19 07:26:03
| 1
| 523
|
san1
|
78,767,993
| 11,756,186
|
Shift part of row in dataframe to new row
|
<p>I have a pandas dataframe that I want to transform for display purposes.
To do so, I want to shift part of some rows into new rows, like below:</p>
<pre><code> col1 col2 col_to_shift col_not_to_shift1 col_not_to_shift2
0 a1 a2 a3 a4 a5
1 b1 b2 NaN b4 NaN
2 c1 c2 c3 c4 c5
</code></pre>
<p>I would like to obtain the following dataframe, where a new row is created for each non-<code>NaN</code> value in the column to shift, duplicating the row description contained in <code>col1</code> and <code>col2</code>, while keeping the columns that I don't want to shift even if they contain NaNs:</p>
<pre><code> col1 col2 col_to_shift col_not_to_shift1 col_not_to_shift2
0 a1 a2 a3 NaN NaN
1 a1 a2 NaN a4 a4
2 b1 b2 NaN b4 NaN
3 c1 c2 c3 NaN NaN
4 c1 c2 NaN c4 c4
</code></pre>
<p>I tried looking at <code>pd.shift</code> but was not able to make it work.</p>
<p>Here is a piece of code to produce the dataframe :</p>
<pre><code>import numpy as np
import pandas as pd

# note: np.NaN was removed in NumPy 2.0; np.nan works everywhere
data = {"col1": ['a1', 'b1', 'c1'], 'col2': ['a2', 'b2', 'c2'],
        'col_to_shift': ['a3', np.nan, 'c3'],
        'col_not_to_shift1': ['a4', 'b4', 'c4'],
        'col_not_to_shift2': ['a5', np.nan, 'c5']}
df = pd.DataFrame(data)
</code></pre>
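<p>One approach I can sketch (not necessarily the only or best one): split off the rows that have a value in the shift column, drop that column from the original rows, and re-interleave the two frames with a stable sort on the original index; <code>concat</code> fills the missing columns with NaN.</p>

```python
import numpy as np
import pandas as pd

data = {"col1": ["a1", "b1", "c1"], "col2": ["a2", "b2", "c2"],
        "col_to_shift": ["a3", np.nan, "c3"],
        "col_not_to_shift1": ["a4", "b4", "c4"],
        "col_not_to_shift2": ["a5", np.nan, "c5"]}
df = pd.DataFrame(data)

id_cols = ["col1", "col2"]
shift_cols = ["col_to_shift"]

# Rows carrying only the shifted value (plus the identifying columns)
shifted = df.loc[df["col_to_shift"].notna(), id_cols + shift_cols]
# The original rows without the shifted column
rest = df.drop(columns=shift_cols)

# A stable sort on the shared original index puts each "shifted" row
# directly above its source row; reset_index renumbers the result
out = (pd.concat([shifted, rest])
         .sort_index(kind="stable")
         .reset_index(drop=True))
print(out)
```

<p>This keeps the unshifted columns (NaNs included) on the duplicated row and only spawns an extra row where <code>col_to_shift</code> is non-null.</p>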
|
<python><pandas><dataframe>
|
2024-07-19 07:09:24
| 4
| 681
|
Arthur
|
78,767,971
| 9,879,534
|
Why does `timezone` not work in `sqlmodel`?
|
<p>I'm using <code>sqlmodel</code> for SQL ORM work. I followed <a href="https://stackoverflow.com/questions/75350395/how-should-we-manage-datetime-fields-in-sqlmodel-in-python">this page</a> and <a href="https://github.com/tiangolo/sqlmodel/issues/370" rel="nofollow noreferrer">this page</a>, but both failed: the time inserted into the table is always UTC, no matter whether I set <code>timezone</code> to True or False. In the end I had to set <code>cur_time: str = Field(default=datetime.now().strftime("%Y-%m-%d %H:%M:%S"))</code>, and then everything works. But I'm confused about why I can't insert local time with <code>Column(DateTime(timezone=True))</code>.</p>
<p>My code is</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime
from sqlmodel import (
TIMESTAMP,
Column,
DateTime,
Field,
Session,
SQLModel,
create_engine,
func,
text,
)
class Test(SQLModel, table=True):
__tablename__ = "test"
id: int | None = Field(default=None, primary_key=True)
name: str
# 1st page, utc time
first_time: datetime | None = Field(
sa_column=Column(
TIMESTAMP(timezone=True),
nullable=False,
server_default=text("CURRENT_TIMESTAMP"),
)
)
# 2nd page, utc time
second_time: datetime | None = Field(
sa_column=Column(DateTime(timezone=True), server_default=func.now())
)
# datetime.now, local time
third_time: str = Field(default=datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
uri = "sqlite:///./test.db"
engine = create_engine(uri, echo=True)
SQLModel.metadata.create_all(engine)
with Session(engine) as session:
test = Test(name="test")
session.add(test)
session.commit()
</code></pre>
<p>And the result is:
<a href="https://i.sstatic.net/H7Ebw5Oy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H7Ebw5Oy.png" alt="enter image description here" /></a>
I'm using sqlmodel 0.0.20, sqlalchemy 2.0.31, Python 3.10.12. I tried this on both Windows 10 and Ubuntu 22.04 with the same results; things only work when I use <code>datetime.now().strftime</code>.</p>
<p>On Ubuntu, the output of <code>timedatectl</code> is:</p>
<pre><code> Local time: Fri 2024-07-19 15:04:41 CST
Universal time: Fri 2024-07-19 07:04:41 UTC
RTC time: Fri 2024-07-19 07:04:42
Time zone: Asia/Shanghai (CST, +0800)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
</code></pre>
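<p>My current understanding (please correct me if wrong): <code>timezone=True</code> only tells SQLAlchemy how to treat the value on the Python side, while <code>server_default=text("CURRENT_TIMESTAMP")</code> and <code>func.now()</code> are evaluated inside the database, and SQLite always evaluates <code>CURRENT_TIMESTAMP</code> in UTC with no timezone-aware storage at all. This can be checked with just the stdlib, no sqlmodel involved:</p>

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
raw = conn.execute("SELECT CURRENT_TIMESTAMP").fetchone()[0]
conn.close()

# SQLite returns 'YYYY-MM-DD HH:MM:SS' in UTC, regardless of the
# machine's configured timezone, so it matches UTC "now", not local time
server_time = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
delta = abs((datetime.now(timezone.utc) - server_time).total_seconds())
print(raw, f"({delta:.1f}s from UTC now)")
```

<p>If that is right, storing local time would need a Python-side default instead of a server default, e.g. <code>Field(default_factory=datetime.now)</code>, rather than anything the <code>timezone</code> flag controls.</p>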
|
<python><sqlmodel>
|
2024-07-19 07:04:09
| 1
| 365
|
Chuang Men
|