QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (string date, 2022-12-10 09:42:47 – 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars)
|---|---|---|---|---|---|---|---|---|
75,731,369
| 4,948,889
|
Sending message to private Telegram channel fails using Bot API python
|
<p>I have tried sending a message to a Telegram channel using Telegram's Bot API with Python 3.11.0.</p>
<p>At first I made the channel public, and the code worked fine:</p>
<pre><code>import requests
# Replace YOUR_BOT_TOKEN with your actual bot token
bot_token = BOT_API_KEY
# Replace YOUR_CHAT_ID with the chat ID of the recipient
chat_id = '@testme100'
# The message you want to send
text = 'Hello, world!'
# The URL for the Telegram Bot API
url = f'https://api.telegram.org/bot{bot_token}/sendMessage'
# The data to be sent in the POST request
data = {'chat_id': chat_id, 'text': text}
# Send the POST request
response = requests.post(url, data=data)
# Check the response status code
if response.status_code == 200:
print('Message sent successfully.')
else:
print(f'Error sending message: {response.text}')
</code></pre>
<p>The output:<br />
<a href="https://i.sstatic.net/KckSy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KckSy.png" alt="output of the code" /></a></p>
<p>But then I decided to make it a private channel and changed the channel ID:<br />
The URL is: <code>https://t.me/+vMjFCXBghv00NDZl</code>, and I thought the text after the <code>+</code> was the channel ID, so I changed it to <code>chat_id = '@vMjFCXBghv00NDZl'</code>.</p>
<p>But now it's not sending the messages to the channel. When I make it public again, it works fine.</p>
<p>The error I received: <code>Error sending message: {"ok":false,"error_code":400,"description":"Bad Request: chat not found"}</code></p>
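The string after the <code>+</code> in a private-channel invite link is not a chat ID, so one likely fix (a sketch, not confirmed against this exact setup) is to use the channel's numeric id, usually of the form -100…, which the bot can discover from getUpdates after being added as a channel admin:

```python
import requests

def api_url(bot_token: str, method: str) -> str:
    # Build a Telegram Bot API endpoint URL
    return f"https://api.telegram.org/bot{bot_token}/{method}"

def send_to_channel(bot_token: str, chat_id, text: str) -> dict:
    # For a private channel, chat_id must be the numeric -100... id,
    # not the invite-link suffix and not an @username.
    resp = requests.post(api_url(bot_token, "sendMessage"),
                         data={"chat_id": chat_id, "text": text})
    return resp.json()

# To discover the numeric id: add the bot as a channel admin, post a message
# in the channel, then inspect
#   requests.get(api_url(bot_token, "getUpdates")).json()
# The id appears under "chat": {"id": -100...} (the value here is hypothetical).
```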
|
<python><python-3.x><python-requests><telegram><telegram-bot>
|
2023-03-14 09:40:10
| 1
| 7,303
|
Jaffer Wilson
|
75,731,226
| 6,734,243
|
How to create a custom sphinx directive that uses the image directive?
|
<p>I need to display a bunch of images that are stored in a json file (the list of our sponsors), like this:</p>
<pre class="lang-json prettyprint-override"><code>{
"toto": "path/to/image",
"tutu": "path/to/image",
...
}
</code></pre>
<p>I have colleagues who are not very comfortable with rst, and asking them to write an extra image directive is too much. To help them I would like to create a custom directive that parses the json file and then "calls" the image directive. Is it possible?</p>
<p>I would write something like:</p>
<pre><code>.. logo:: path/to/json
</code></pre>
<p>and that's all.</p>
<p>What I have already tried is a fully customized directive, but it only works in html; instead of creating extra visit_ methods I would like to add an image node for each image in the list.</p>
<pre class="lang-py prettyprint-override"><code>"""Logo extention to create a list of logos"""
import json
from pathlib import Path
from typing import Dict, List
from docutils import nodes
from sphinx.application import Sphinx
from sphinx.util import logging
from sphinx.util.docutils import SphinxDirective, SphinxTranslator
logger = logging.getLogger(__name__)
SIZES = {"xs": 6, "sm": 8, "md": 10, "lg": 12, "xl": 15, "xxl": 20}
"Accommodate different logo shapes (width values in rem)"
class logo_node(nodes.General, nodes.Element):
"""Logo Node"""
pass
class Logos(SphinxDirective):
"""Logo directive to create a grid of logos from a json file
Example:
.. logo:: funder
"""
has_content = True
required_arguments = 1
optional_arguments = 0
final_argument_whitespace = False
option_spec = {}
def run(self) -> List[logo_node]:
# get the env
self.env
# get the data
data_dir = Path(__file__).parents[1] / "_data" / "logo"
logo_file = data_dir / f"{self.arguments[0]}.json"
logos = json.loads(logo_file.read_text())
return [logo_node(logos=logos)]
def visit_logo_node_html(translator: SphinxTranslator, node: logo_node) -> None:
"""Entry point of the html logo node"""
# Write the html content line by line for readability
# init the html content
html = '<div class="container my-4">\n'
html += '\t<div id="funder-logos" class="d-flex flex-wrap flex-row justify-content-center align-items-center">\n'
for v in node["logos"].values():
# get informations from the parameters
size = SIZES[v["size"]]
link = v["html"]
light_logo = f"_static/logos/{v['light']}"
dark_logo = f"_static/logos/{v['dark']}"
# write the html files
html += f'\t\t<div class="my-1 mx-2" style="width:{size}rem;">'
html += f'<a href="{link}">'
html += f'<img class="card-img only-light" src="{light_logo}">'
html += f'<img class="card-img only-dark" src="{dark_logo}">'
html += "</a>"
html += "</div>\n"
translator.body.append(html)
def depart_logo_node_html(translator: SphinxTranslator, node: logo_node) -> None:
"""exit from the html node and close the container"""
translator.body.append("\t</div>\n</div>\n")
def visit_logo_node_unsuported(translator: SphinxTranslator, node: logo_node) -> None:
"""Entry point of the ignored logo node."""
logger.warning("Logo: unsupported output format (node skipped)")
raise nodes.SkipNode
def setup(app: Sphinx) -> Dict[str, bool]:
app.add_node(
logo_node,
html=(visit_logo_node_html, depart_logo_node_html),
epub=(visit_logo_node_unsuported, None),
latex=(visit_logo_node_unsuported, None),
man=(visit_logo_node_unsuported, None),
texinfo=(visit_logo_node_unsuported, None),
text=(visit_logo_node_unsuported, None),
)
app.add_directive("logos", Logos)
return {
"parallel_read_safe": True,
"parallel_write_safe": True,
}
</code></pre>
|
<python><python-sphinx>
|
2023-03-14 09:26:08
| 1
| 2,670
|
Pierrick Rambaud
|
75,731,212
| 14,169,836
|
Cumulative subtraction with two columns in pandas
|
<p>I have a table with two values for each day. I need to subtract Val2 from Val1 row by row, carrying the result forward. E.g.:</p>
<pre><code> day Val1 Val2
1 20230310 100 50
2 20230311 NaN 10
3 20230312 NaN 5
4 20230313 NaN -3
5 20230314 NaN 40
</code></pre>
<p>How it should look at the end:</p>
<pre><code> day Val1 Val2
1 20230310 100 10
2 20230311 50 50
3 20230312 45 5
4 20230313 48 -3
5 20230314 8 40
</code></pre>
<p>Below is the same example, but showing the logic, not just the results:</p>
<pre><code> day Val1 Val2
1 20230310 100 10
2 20230311 100-50=50 50
3 20230312 50-5=45 5
4 20230313 45-(-3)=48 -3
5 20230314 48-40=8 40
</code></pre>
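A sketch of one way to do this in pandas, assuming (per the logic table) that each row's Val1 is the previous Val1 minus the current row's Val2, and using the Val2 values from the expected-output table (whose first two rows differ from the input table): subtract the running total of Val2, excluding the first row, from the starting value.

```python
import pandas as pd

df = pd.DataFrame({"day": [20230310, 20230311, 20230312, 20230313, 20230314],
                   "Val1": [100, None, None, None, None],
                   "Val2": [10, 50, 5, -3, 40]})

# Zero out the first row's Val2, cumulate the rest, and subtract from the
# starting value: 100, 100-50=50, 50-5=45, 45-(-3)=48, 48-40=8
df["Val1"] = df["Val1"].iloc[0] - df["Val2"].where(df.index > 0, 0).cumsum()
```

This avoids an explicit Python loop: the chained subtraction is just the start value minus a cumulative sum.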
|
<python><pandas><subtraction>
|
2023-03-14 09:25:00
| 1
| 329
|
Ulewsky
|
75,730,973
| 1,649,021
|
Converting string value to Boolean for dataframe
|
<p>I'm trying to convert the values in my dataframe to Booleans easily. Each column can contain values like "YES", "NO" or "NaN".</p>
<p>I converted the values with numpy to ndarrays in a list comprehension, but when I convert the result to a dataframe I get the error</p>
<pre><code>ValueError: 5 columns passed, passed data had 159 columns
</code></pre>
<p>However, if I print the length of my list it outputs 5</p>
<pre><code>import numpy as np
death_columns = ['Death1', 'Death2', 'Death3', 'Death4', 'Death5']
death_counts = combined_dataframe[death_columns]
death_data = [np.where(death_counts[c] == "YES", True, False) for c in death_columns]
len(death_data)
</code></pre>
<p>This is how I tried to create a dataframe again:</p>
<pre><code>death_counts = pd.DataFrame(death_data, columns = death_columns)
</code></pre>
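The list comprehension produces five arrays of row length, and the DataFrame constructor reads a list of arrays as five *rows*, hence "passed data had 159 columns". One sketch that avoids the round-trip entirely: element-wise equality on the sliced frame, where NaN compares unequal to "YES" and so becomes False.

```python
import numpy as np
import pandas as pd

# Stand-in for the real combined_dataframe from the question
combined_dataframe = pd.DataFrame({
    "Death1": ["YES", "NO", np.nan],
    "Death2": ["NO", "YES", "YES"],
})
death_columns = ["Death1", "Death2"]

# .eq("YES") returns a boolean DataFrame of the same shape;
# NaN != "YES", so missing values map to False.
death_counts = combined_dataframe[death_columns].eq("YES")
```

Alternatively, the original approach works if the list of arrays is transposed first, e.g. `pd.DataFrame(np.column_stack(death_data), columns=death_columns)`.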
|
<python><pandas><dataframe>
|
2023-03-14 09:02:50
| 0
| 5,077
|
Mark Molina
|
75,730,853
| 12,871,587
|
Keep only duplicated rows with a subset
|
<p>I have a dataframe that I'd like to explore and look only the duplicated rows based on two or more columns.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({"A": [1, 6, 5, 4, 5, 6],
"B": ["A", "B", "C", "D", "C", "A"],
"C": [2, 2, 2, 1, 1, 1]})
</code></pre>
<p>I'd like to return duplicate combinations for columns A and B only. I've tried:</p>
<pre class="lang-py prettyprint-override"><code>df.filter(pl.col("A", "B").is_duplicated()) # Returns: This is ambiguous. Try to combine the predicates with the 'all' or `any' expression.
</code></pre>
<p>When adding .all() in between, the result is the same as above</p>
<pre class="lang-py prettyprint-override"><code>df.filter(pl.col("A", "B").all().is_duplicated()) # Same as above
</code></pre>
<p>Unique with keep "none" returns the opposite of the result I'd like to have, so I tried the below:</p>
<pre class="lang-py prettyprint-override"><code>df.unique(subset=["A", "B"], keep="none").is_not() # 'DataFrame' object has no attribute 'is_not'
</code></pre>
<p>Expected output would be to see only the rows:</p>
<pre><code>shape: (2, 3)
βββββββ¬ββββββ¬ββββββ
β A | B | C β
β --- | --- | --- β
β i64 | str | i64 β
βββββββͺββββββͺββββββ‘
β 5 | C | 2 β
β 5 | C | 1 β
βββββββ΄ββββββ΄ββββββ
</code></pre>
|
<python><dataframe><python-polars>
|
2023-03-14 08:49:27
| 2
| 713
|
miroslaavi
|
75,730,699
| 13,423,321
|
How to turn off sqlite json query with JSON_QUOTE? There are different between SQLAlchemy's sqlite and mysql json query
|
<p>Common code</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import create_engine, Column, Integer, JSON
Base = declarative_base()
class Person(Base):
__tablename__ = 'person'
id = Column(Integer(), primary_key=True)
message = Column(JSON)
</code></pre>
<p><strong>MySQL</strong></p>
<p>The JSON query statement is clear and beautiful, like this:</p>
<pre class="lang-py prettyprint-override"><code>engine = create_engine('mysql+pymysql://root:123456@localhost:3306/test', pool_recycle=3600) # MySQL
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()
person = Person(message={'name': 'Bob'}) # Insert a record
session.add(person)
session.commit()
query = session.query(Person.id).filter(Person.message['name'] == 'Bob') # MySQL will work, clear and beautiful
print(query[0].id) # 1
print(query)
# SELECT person.id AS person_id
# FROM person
# WHERE JSON_EXTRACT(person.message, %(message_1)s) = %(param_1)s
</code></pre>
<p><strong>SQLite</strong></p>
<p>It uses a needless <code>json_quote()</code>, and the resulting SQL isn't clean. I want to turn off the JSON_QUOTE in the sqlite JSON query.</p>
<p>How can I solve this?</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import func

engine = create_engine('sqlite:///:memory:') # SQLite
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()
person = Person(message={'name': 'Bob'}) # Insert a record
session.add(person)
session.commit()
query = session.query(Person.id).filter(Person.message['name'] == func.json_quote('Bob')) # SQLite will work, but need to use json_quote
print(query[0].id) # 1
print(query)
# SELECT person.id AS person_id
# FROM person
# WHERE JSON_QUOTE(JSON_EXTRACT(person.message, ?)) = json_quote(?)
</code></pre>
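One way to get the same plain comparison on both backends (a sketch, not the only option) is the `.as_string()` comparator on the JSON path, which casts the extracted value to text and sidesteps the JSON_QUOTE round-trip on SQLite; this assumes SQLAlchemy 1.4+ for the `sqlalchemy.orm.declarative_base` import:

```python
from sqlalchemy import create_engine, Column, Integer, JSON
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Person(Base):
    __tablename__ = "person"
    id = Column(Integer, primary_key=True)
    message = Column(JSON)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Person(message={"name": "Bob"}))
session.commit()

# .as_string() compares the extracted JSON member as text, so SQLite
# emits no json_quote() wrapper, and the same expression works on MySQL.
query = session.query(Person.id).filter(
    Person.message["name"].as_string() == "Bob")
```

`as_integer()`, `as_float()` and `as_boolean()` exist for the other JSON scalar types.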
|
<python><mysql><json><sqlite><sqlalchemy>
|
2023-03-14 08:33:54
| 1
| 1,127
|
XerCis
|
75,730,659
| 8,428,347
|
AttributeError: /src/libmi.so: undefined symbol: logarithm
|
<p>I'm trying to call a C/C++ function from Python 3 code and get the error shown in the title. Earlier this C/C++ function could be imported and worked in Python, but I forget how it was done.</p>
<p>C/C++ function:</p>
<pre><code>#ifdef __cplusplus
extern "C" double logarithm(unsigned a)
#else
double logarithm(unsigned a)
#endif
{
if (a != 0)
{
return log2(a);
}
else
{
return 0;
}
}
</code></pre>
<p>Python code:</p>
<pre><code>import os
from ctypes import *
libname = os.path.abspath("/src/libmi.so")
mi = CDLL(libname)
mi.logarithm.restype = c_double
</code></pre>
<p>Full text of exception:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Input In [1], in <cell line: 5>()
3 libname = os.path.abspath("/src/libmi.so")
4 mi = CDLL(libname)
----> 5 mi.logarithm.restype = c_double
File /usr/local/lib/python3.8/ctypes/__init__.py:386, in CDLL.__getattr__(self, name)
384 if name.startswith('__') and name.endswith('__'):
385 raise AttributeError(name)
--> 386 func = self.__getitem__(name)
387 setattr(self, name, func)
388 return func
File /usr/local/lib/python3.8/ctypes/__init__.py:391, in CDLL.__getitem__(self, name_or_ordinal)
390 def __getitem__(self, name_or_ordinal):
--> 391 func = self._FuncPtr((name_or_ordinal, self))
392 if not isinstance(name_or_ordinal, int):
393 func.__name__ = name_or_ordinal
AttributeError: /src/libmi.so: undefined symbol: logarithm
</code></pre>
<p>The compiler and linker commands:</p>
<pre><code>g++ -fPIC -c $(python3.8-config --includes) -o mi.o mi.cpp
g++ $(python3.8-config --ldflags) -lpython3.8 mi.o -o libmi.so
</code></pre>
<p>I have checked the symbol link in libmi.so:</p>
<pre><code>root@5c574a48a4ce:/src# objdump -t libmi.so | grep logarithm
0000000000001225 g F .text 0000000000000034 logarithm
</code></pre>
<p>How can this be solved?</p>
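Two things look suspect here (an educated guess, since the full build setup isn't shown): the link command lacks <code>-shared</code>, so g++ is not asked to produce a real shared object, and <code>objdump -t</code> lists the *static* symbol table, while the dynamic loader that ctypes goes through resolves symbols via the *dynamic* table, shown by <code>objdump -T</code>. A sketch of the usual recipe:

```shell
# compile position-independent object code
g++ -fPIC -c $(python3.8-config --includes) -o mi.o mi.cpp

# -shared is required to build a shared object; linking against
# libpython is unnecessary for a plain ctypes library
g++ -shared mi.o -o libmi.so

# -T (not -t) shows the dynamic symbol table that ctypes actually uses
objdump -T libmi.so | grep logarithm
```

If `logarithm` appears in the `-T` output, `CDLL` should resolve it.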
|
<python><c++><dll><dllimport>
|
2023-03-14 08:29:33
| 1
| 343
|
volkoshkursk
|
75,730,605
| 14,014,925
|
Visual Studio Code Debugger not being able to import modules from my Conda environment
|
<p>I'm trying to use the VS Code Python debugger with my Conda environment. If I run my code through the terminal or by pressing the play button, it runs correctly. However, if I try to debug it, when I import an external module like pandas or numpy, I get</p>
<pre><code>ModuleNotFoundError: No module named 'pandas'
</code></pre>
<p>The environment is selected with the interpreter, and I've tried several launch.json configurations, but nothing seems to work. I've also checked, and pandas is correctly installed in the environment; in fact, I can import it if I run the code through the terminal. Does anybody know what's going on? I'm using VS Code on Windows 10 with Anaconda. I've also tried the PyCharm debugger and I run into the same problem.</p>
<p>Thanks in advance!</p>
|
<python><visual-studio-code><anaconda><conda><vscode-debugger>
|
2023-03-14 08:23:45
| 1
| 345
|
ignacioct
|
75,730,596
| 11,198,558
|
How to start multiple faust app in the same time?
|
<p>I'm a new user of Faust and don't know how to fix a problem that occurs when I run 3 Faust apps at the same time. Specifically:</p>
<p>I have 3 Python files. In each, I run one service listening to a Kafka server. Each file contains the code below; the only difference between the files is the TOPIC_INPUT name.</p>
<pre><code>app = faust.App(
'UserInfoReceive',
broker= 'kafka://' + SERVER_INPUT + f':{DVWAP_KAFKA_PORT}',
value_serializer='raw',
)
kafka_topic = app.topic(TOPIC_INPUT)
@app.agent(kafka_topic)
async def userSettingInput(streamInput):
async for msg in streamInput:
userResgister(msg)
</code></pre>
<h2>Expected behavior</h2>
<p>All 3 Python files should run normally and listen for incoming Kafka events.</p>
<h2>Actual behavior</h2>
<p>It generates an OSError, shown in this image:</p>
<p><a href="https://i.sstatic.net/8DJEp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8DJEp.png" alt="image" /></a></p>
<h2>Versions</h2>
<p>Python version: 3.9<br />
Faust version: 1.10.4<br />
Operating system: WSL (Windows Subsystem for Linux)<br />
Kafka client: kafka-python==1.4.7</p>
|
<python><apache-kafka><faust>
|
2023-03-14 08:23:14
| 1
| 981
|
ShanN
|
75,730,577
| 12,319,746
|
Cannot send mail from inside a function
|
<pre><code>import os
import logging
import smtplib

def send_mail():
recv = os.environ.get('mail_username')
sender = 'alerts@.com'
receivers = [recv]
message = f"""From: QA-Security Gate Alerts <alerts@.com>
To: To Person {recv}
Subject: Azure function down
Check Logs
"""
try:
smt_obj = smtplib.SMTP('mailrelay.site.com')
smt_obj.sendmail(sender, receivers, message)
print("Successfully sent email")
except Exception as e:
logging.error('Sending mail Failed')
raise e
</code></pre>
<p><code>send_mail()</code></p>
<p>This code works fine when I run it outside the function; however, once I put it in the function it runs and prints <code>"Successfully sent email"</code> but doesn't actually send anything. I have tried adding a <code>return</code> and assigning the function's result to a variable, but it still doesn't work.</p>
<pre><code>return smt_obj( Inside the function)
# Outside the function
mail = send_mail()
mail.quit()
</code></pre>
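One likely culprit (a guess, since the relay's behaviour isn't shown) is that the triple-quoted message is indented inside the function, so every header line starts with whitespace and the server no longer parses valid From:/To:/Subject: headers. A sketch using the stdlib email.message API, which assembles the headers correctly regardless of indentation (the addresses and relay host are placeholders from the question):

```python
import smtplib
from email.message import EmailMessage

def build_message(recipient: str) -> EmailMessage:
    # EmailMessage writes the headers itself, so code indentation
    # cannot corrupt them the way an indented triple-quoted string can.
    msg = EmailMessage()
    msg["From"] = "QA-Security Gate Alerts <alerts@example.com>"  # hypothetical address
    msg["To"] = recipient
    msg["Subject"] = "Azure function down"
    msg.set_content("Check Logs")
    return msg

def send_mail(recipient: str) -> None:
    # The relay host comes from the question; send_message takes the
    # envelope addresses from the message headers.
    with smtplib.SMTP("mailrelay.site.com") as smtp:
        smtp.send_message(build_message(recipient))
```

Note that `sendmail()` returning without an exception only means the relay *accepted* the message; delivery can still fail downstream, which is worth checking with the mail admin.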
|
<python>
|
2023-03-14 08:21:04
| 1
| 2,247
|
Abhishek Rai
|
75,730,369
| 6,797,250
|
Transforming pandas data frame. Sort of melting
|
<p>I have this data frame:</p>
<pre class="lang-py prettyprint-override"><code>pd.DataFrame({'day': [1, 1, 2, 2], 'category': ['a', 'b', 'a', 'b'],
'min_feature1': [1, 2, 3, 4], 'max_feature1': [8, 9, 10, 11],
'min_feature2': [2, 3, 4, 5], 'max_feature2': [6, 9, 12, 13]})
</code></pre>
<p>The result looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">day</th>
<th style="text-align: left;">category</th>
<th style="text-align: left;">min_feature1</th>
<th style="text-align: left;">max_feature1</th>
<th style="text-align: left;">min_feature2</th>
<th style="text-align: left;">max_feature2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;">a</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">8</td>
<td style="text-align: left;">2</td>
<td style="text-align: left;">6</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;">b</td>
<td style="text-align: left;">2</td>
<td style="text-align: left;">9</td>
<td style="text-align: left;">3</td>
<td style="text-align: left;">9</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;">a</td>
<td style="text-align: left;">3</td>
<td style="text-align: left;">10</td>
<td style="text-align: left;">4</td>
<td style="text-align: left;">12</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;">b</td>
<td style="text-align: left;">4</td>
<td style="text-align: left;">11</td>
<td style="text-align: left;">5</td>
<td style="text-align: left;">13</td>
</tr>
</tbody>
</table>
</div>
<p>I want to transform this data, so it looks like this:</p>
<pre class="lang-py prettyprint-override"><code>pd.DataFrame([[1, 'a', 'feature1', 1, 8],
[1, 'a', 'feature2', 2, 6],
[1, 'b', 'feature1', 2, 9],
[1, 'b', 'feature2', 3, 9],
[2, 'a', 'feature1', 3, 10],
[2, 'a', 'feature2', 4, 12],
[2, 'b', 'feature1', 4, 11],
[2, 'b', 'feature2', 5, 13],], columns=['day', 'category', 'feature', 'min', 'max'])
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">day</th>
<th style="text-align: left;">category</th>
<th style="text-align: left;">feature</th>
<th style="text-align: left;">min</th>
<th style="text-align: left;">max</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;">a</td>
<td style="text-align: left;">feature1</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">8</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;">a</td>
<td style="text-align: left;">feature2</td>
<td style="text-align: left;">2</td>
<td style="text-align: left;">6</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;">b</td>
<td style="text-align: left;">feature1</td>
<td style="text-align: left;">2</td>
<td style="text-align: left;">9</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;">b</td>
<td style="text-align: left;">feature2</td>
<td style="text-align: left;">3</td>
<td style="text-align: left;">9</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;">a</td>
<td style="text-align: left;">feature1</td>
<td style="text-align: left;">3</td>
<td style="text-align: left;">10</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;">a</td>
<td style="text-align: left;">feature2</td>
<td style="text-align: left;">4</td>
<td style="text-align: left;">12</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;">b</td>
<td style="text-align: left;">feature1</td>
<td style="text-align: left;">4</td>
<td style="text-align: left;">11</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;">b</td>
<td style="text-align: left;">feature2</td>
<td style="text-align: left;">5</td>
<td style="text-align: left;">13</td>
</tr>
</tbody>
</table>
</div>
<p>How can I do this?</p>
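Since every value column follows the min_/max_ + feature-name pattern, `pd.wide_to_long` can do this in one call (a sketch; `stubnames`, `sep` and `suffix` must match the actual column names):

```python
import pandas as pd

df = pd.DataFrame({'day': [1, 1, 2, 2], 'category': ['a', 'b', 'a', 'b'],
                   'min_feature1': [1, 2, 3, 4], 'max_feature1': [8, 9, 10, 11],
                   'min_feature2': [2, 3, 4, 5], 'max_feature2': [6, 9, 12, 13]})

# stubnames are the column prefixes; j collects the suffix after sep
# ("feature1", "feature2") into a new 'feature' column.
out = (pd.wide_to_long(df, stubnames=["min", "max"], i=["day", "category"],
                       j="feature", sep="_", suffix=r"\w+")
         .reset_index()
         .sort_values(["day", "category", "feature"], ignore_index=True))
```

The non-default `suffix=r"\w+"` is needed because the suffixes are words like "feature1", not the purely numeric suffixes `wide_to_long` expects by default.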
|
<python><pandas><dataframe>
|
2023-03-14 07:57:58
| 3
| 3,871
|
Andrey Lukyanenko
|
75,730,352
| 221,270
|
cx_Oracle config dsn
|
<p>I want to connect to an Oracle APEX table via Python to update the table and insert new data. What must I enter for the dsn parameter if my workspace is under the following URL?</p>
<pre><code>https://apex.oracle.com/pls/apex/r/meteor_sdr
import cx_Oracle
# Establish the database connection
connection = cx_Oracle.connect(user="hr", password=userpwd,
dsn="dbhost.example.com/orclpdb1")
</code></pre>
|
<python><oracle-apex><cx-oracle>
|
2023-03-14 07:56:14
| 0
| 2,520
|
honeymoon
|
75,730,243
| 11,572,712
|
from pm4py.objects.log.util import func as functools -> ImportError
|
<p>I am running this code to import a module:</p>
<pre><code>from pm4py.objects.log.util import func as functools
</code></pre>
<p>And get this ImportError:</p>
<pre><code>ImportError Traceback (most recent call last)
<ipython-input-15-d14af87451f6> in <module>
1 import pm4py
----> 2 from pm4py.objects.log.util import func as functools
ImportError: cannot import name 'func' from 'pm4py.objects.log.util'
</code></pre>
<p>Does someone know how to resolve this?</p>
|
<python><importerror>
|
2023-03-14 07:44:32
| 1
| 1,508
|
Tobitor
|
75,730,103
| 21,295,456
|
How to continue to further train a pre-trained YOLOv8 model
|
<p>I've trained a YOLOv8 model for object detection with 4 classes on a custom dataset.</p>
<p>Now I want to train it to detect more classes by increasing my dataset.</p>
<p>Instead of training the model from the start can I further train it specifically on the new dataset?</p>
|
<python><machine-learning><object-detection><yolo>
|
2023-03-14 07:23:23
| 2
| 339
|
akashKP
|
75,729,807
| 4,490,974
|
Speech to Text - recognise the speakers
|
<p>I am having an issue where we want to transcribe some audio of a press conference.</p>
<p>However it is not picking up when the speakers change.</p>
<p>For this demo I'll use a YouTube press conference (our script downloads it as mp3)
<a href="https://www.youtube.com/watch?v=3fFkMrTYZTs" rel="nofollow noreferrer">https://www.youtube.com/watch?v=3fFkMrTYZTs</a></p>
<p>In this video there are 3 speakers.</p>
<p>Speaker 1 is from 00:00
speaker 2 is from 02:02
Speaker 3 is from 04:10</p>
<p>However, the transcript is not picking this up. How can I improve my Python code to better recognise when each of these people is speaking?</p>
<p>Function to transcribe audio using Google Cloud Speech-to-Text API</p>
<pre><code># Function to transcribe audio using Google Cloud Speech-to-Text API
def transcribe_audio(audio_file):
client = speech.SpeechClient()
config = speech.RecognitionConfig(
encoding=speech.RecognitionConfig.AudioEncoding.MP3,
sample_rate_hertz=48000,
language_code="en-US",
audio_channel_count=2,
enable_automatic_punctuation=True,
diarization_config={
"enable_speaker_diarization": True,
"min_speaker_count": 2,
"max_speaker_count": 6,
},
speech_contexts=[
speech.SpeechContext(
phrases=["Australia", "United States", "United Kingdom"]
)
],
enable_word_time_offsets=True,
)
uri = audio_file
# Perform asynchronous speech recognition
audio = speech.RecognitionAudio(uri=uri)
operation = client.long_running_recognize(config=config, audio=audio)
response = operation.result(timeout=1000)
# Process the transcription results
transcription = ""
speaker_tags = []
for result in response.results:
alternative = result.alternatives[0]
transcription += alternative.transcript.strip() + " "
speaker_tags += alternative.words
return transcription, speaker_tags
</code></pre>
<p>The code above gets called via:</p>
<pre><code>transcription, speaker_tags = transcribe_audio(audio_file)
#print("transcript:",transcription)
#print("Speaker tags:", speaker_tags)
speaker_dict = {}
current_speaker = ""
for tag in speaker_tags:
speaker = "Speaker_" + str(tag.speaker_tag)
if current_speaker != speaker:
current_speaker = speaker
speaker_dict[current_speaker] = ""
speaker_dict[current_speaker] += " " + tag.word
transcript = ""
for speaker, text in speaker_dict.items():
transcript += speaker + ":" + text + "\n"
print(transcript)
</code></pre>
<p>However, when the transcription does come back, I get this:</p>
<pre class="lang-none prettyprint-override"><code>Speaker_0: I've always said when I asked the United States is Pacific, Power United States. Route to Pacific Islanders and kept the sea Lanes in Skies open and navigable for all the basic rules of the road and fuels International Commerce. And our Partnerships have helped underwrite credible growth and Innovation. Today Prospect a piece for decades to come United States. Could ask for no better Partners in the end up as if he were so much. Washington State partnership for showing again, how democracies can deliver our own security and prosperity and not just for us. But for the entire world Day stuff to carry out, our first project under office and developing Australia's conventionally armed nuclear, powered, submarine capacity to be clear. One recruit everyone from the outset right off the bat, there's no confusion Subs, not your arm, supposed to clear up our route. These both want each of us standing here today. Australia. Deeply committed. Turn off the giant, new chapter in the relationship between the United States. And the United Kingdom begins a friendship on a shared values and vision for a peaceful and a prosperous future. The orcas agreement we can firm Hearing Center represents the biggest single investment in Australian Defence capability. In all of our history, strengthening security, and stability in every building a future investment in skills, jobs and infrastructure and delivering I used to. If any type of bility into the future, Bility security relationships across from Italy in the next check. I just driving will take delivery of three us virginia-class nuclear-powered, submarine BCC in 65 years and only the second time in history that the United States has its nuclear propulsion technology. And we thank you for We are also proud to partner with the United Kingdom, to construct the Next Generation submarine to be cool. 
If I say no, and you conventionally armed nuclear powered submarine by Stone, the British design and incorporating cutting-edge Australian UK and US Technologies. This will be an Australian Sovereign capability built by aliens with construction to begin 60 years ago. The maintenance of Freedom, peace and security. Today, we stand together and recognizing just be hopeful window. and the lost by Russia in the legal invasion of Ukraine has growing assertiveness, the destabilizing the School threatened to create a world defined by danger disorder and Division, Brace for this new reality, it is more important than ever that we strengthen the resilience of our own country. That's why the UK and today announced a significant amount of Defense budget with providing an extra ten pounds over the next two years and need you to send it. This will allow us to replenish our. And modernize our nuclear Enterprise delivering orcas and strengthening Elgin Tavern.
Speaker_1: The maintenance of Freedom, peace and security. Today, we stand together and recognizing just be hopeful window. and the lost by Russia in the legal invasion of Ukraine has growing assertiveness, the destabilizing the School threatened to create a world defined by danger disorder and Division, Brace for this new reality, it is more important than ever that we strengthen the resilience of our own country. That's why the UK and today announced a significant amount of Defense budget with providing an extra ten pounds over the next two years and need you to send it. This will allow us to replenish our. And modernize our nuclear Enterprise delivering orcas and strengthening Elgin Tavern.
Speaker_2: peaceful and a prosperous future. The orcas agreement we can firm Hearing Center represents the biggest single investment in Australian Defence capability. In all of our history, strengthening security, and stability in every building a future investment in skills, jobs and infrastructure and delivering I used to. If any type of bility into the future, Bility security relationships across from Italy in the next check. I just driving will take delivery of three us virginia-class nuclear-powered, submarine BCC in 65 years and only the second time in history that the United States has its nuclear propulsion technology. And we thank you for We are also proud to partner with the United Kingdom, to construct the Next Generation submarine to be cool. If I say no, and you conventionally armed nuclear powered submarine by Stone, the British design and incorporating cutting-edge Australian UK and US Technologies. This will be an Australian Sovereign capability built by aliens with construction to begin 60 years ago.
</code></pre>
<p>I also note that the transcription is largely wrong; it has not been able to transcribe the audio correctly.</p>
|
<python><google-speech-to-text-api>
|
2023-03-14 06:41:33
| 1
| 895
|
Russell Harrower
|
75,729,733
| 14,109,040
|
Replace values in pandas columns based on starting value of column name
|
<p>I have a dataframe with the following structure:</p>
<pre><code> Index1 Index2 -2 -1 0 1 2 -1 ph -2 ph 1 ph 2 ph
0 A W 0 0 28 29 18 -28 -28 1 -10
1 B X 0 20 26 21 27 -6 -26 -5 1
2 C Y 19 20 20 17 17 0 -1 -3 -3
3 D Z 15 0 15 18 18 -15 0 3 3
</code></pre>
<p>The columns starting with '-' can have 0 values, and I want to replace those with a dash, as follows.</p>
<pre><code> Index1 Index2 -2 -1 0 1 2 -1 ph -2 ph 1 ph 2 ph
0 A W - - 28 29 18 -28 -28 1 -10
1 B X - 20 26 21 27 -6 -26 -5 1
2 C Y 19 20 20 17 17 - -1 -3 -3
3 D Z 15 - 15 18 18 -15 - 3 3
</code></pre>
<p>I have currently done this for each column manually:</p>
<pre><code>import pandas as pd
# Create DataFrame
data = {'Index1': ['A', 'B', 'C', 'D'],
'Index2': ['W', 'X', 'Y', 'Z'],
'-2': [0, 0, 19, 15],
'-1': [0, 20, 20, 0],
'0': [28, 26, 20, 15],
'1': [29, 21, 17, 18],
'2': [18, 27, 17, 18]
}
df = pd.DataFrame(data)
df = df.assign(**df[['-1','-2','1', '2']].sub(df['0'], axis=0).add_suffix(' ph'))
df['-1'].replace(to_replace=0, value='-', inplace=True)
df['-2'].replace(to_replace=0, value='-', inplace=True)
df['-1 ph'].replace(to_replace=0, value='-', inplace=True)
df['-2 ph'].replace(to_replace=0, value='-', inplace=True)
</code></pre>
<p>However, the columns (except for the indexes) can vary (so they could be -3, -2, -1, 0, 1, etc.). So I want to update my code to say: if the column name starts with a '-', replace 0s with a '-'.</p>
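Selecting the target columns by name prefix keeps this generic, whatever the column range happens to be (a sketch against the sample data above):

```python
import pandas as pd

data = {'Index1': ['A', 'B', 'C', 'D'],
        'Index2': ['W', 'X', 'Y', 'Z'],
        '-2': [0, 0, 19, 15],
        '-1': [0, 20, 20, 0],
        '0': [28, 26, 20, 15],
        '1': [29, 21, 17, 18],
        '2': [18, 27, 17, 18]}
df = pd.DataFrame(data)
df = df.assign(**df[['-1', '-2', '1', '2']].sub(df['0'], axis=0).add_suffix(' ph'))

# Every column whose name starts with '-' (including the ' ph' variants)
# gets its zeros replaced with a dash in one pass.
neg_cols = [c for c in df.columns if c.startswith('-')]
df[neg_cols] = df[neg_cols].replace(0, '-')
```

Note this makes the affected columns object-dtype (mixed numbers and strings), so it is best done as a final display step.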
|
<python><pandas>
|
2023-03-14 06:30:09
| 1
| 712
|
z star
|
75,729,449
| 11,649,397
|
How to let producer know that consumer finishes work in python
|
<p>Hi, I want to parallelize the process of reading and analyzing multiple text files. So I have a producer, say <strong>Producer</strong>, and two consumers, <strong>TextAnalyzer A</strong> and <strong>TextAnalyzer B</strong>.</p>
<p><strong>Producer</strong> has a RequestQueue which contains tasks for <strong>TextAnalyzer A</strong> and <strong>TextAnalyzer B</strong>.</p>
<p>Say <strong>TextAnalyzerA</strong> thread has a task assigned and starts working. Once it finishes it notifies Producer and similarly <strong>TextAnalyzerB</strong> thread finishes a task and notifies Producer.</p>
<p>Once both consumers finish their tasks I want <strong>Producer</strong> to start another thread <strong>"ResultSender"</strong> to execute.</p>
<p>My question is what are the best ways to notify <strong>Producer</strong> that consumers are finished their work in python ?</p>
<p><a href="https://i.sstatic.net/rSJoA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rSJoA.png" alt="enter image description here" /></a></p>
<p>I created some sample code here by introducing RequestQueue and FinishedQueue to enable communication between Producer and Consumer.</p>
<pre><code>from time import sleep
from random import random
from threading import Thread
from queue import Queue

# global queues shared by producer and consumer
RequestQueue = Queue()
FinishedQueue = Queue()
class Task:
def __init__(self, name, index, value, done):
self.name = name
self.index = index
self.value = value
self.done = done
def getName(self):
return self.name
def getIndex(self):
return self.index
def getValue(self):
return self.value
def getDone(self):
return self.done
def setDone(self, done):
self.done = done
def producer():
print('Producer: Running')
# generate items
task = Task("(csv)", 1, "test", False)
RequestQueue.put(task)
print(">producer added {} {} {} {}".format(
task.getName(), task.getIndex(), task.getValue(), task.getDone()))
value = random()
sleep(value)
while True:
if (FinishedQueue.empty() == False):
fq = FinishedQueue.get()
print(">producer found {} {} {} {}".format(
fq.getName(), fq.getIndex(), fq.getValue(), fq.getDone()))
break
sleep(1)
RequestQueue.put(None)
print('Producer: Done')
# consumer task
def consumer():
print('Consumer: Running')
while True:
item = RequestQueue.get()
# check for stop
if item is None:
break
print(">consumer is working on {} {} {} {}".format(
item.getName(), item.getIndex(), item.getValue(), item.getDone()))
# consumer do some work
sleep(1)
# set done to be true
item.setDone(True)
print(">consumer finished {} {} {} {}".format(
item.getName(), item.getIndex(), item.getValue(), item.getDone()))
FinishedQueue.put(item)
# all done
print('Consumer: Done')
def test():
# create the shared queue
tc = Thread(target=consumer, args=())
tc.start()
tp = Thread(target=producer, args=())
tp.start()
tp.join()
tc.join()
test()
</code></pre>
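<p>An alternative (not from the original post) to polling <code>FinishedQueue</code> in a busy loop is <code>threading.Event</code>: each consumer sets its event when done, and the producer blocks on <code>wait()</code> — no sleeping loop needed. A minimal sketch with placeholder analyzer bodies:</p>

```python
import threading

done_a = threading.Event()
done_b = threading.Event()

def analyzer(done_event):
    # ... read and analyze the assigned files here ...
    done_event.set()  # signal the producer that this consumer is finished

ta = threading.Thread(target=analyzer, args=(done_a,))
tb = threading.Thread(target=analyzer, args=(done_b,))
ta.start()
tb.start()

# The producer blocks here until BOTH consumers have signalled completion;
# only then should it start the ResultSender thread
done_a.wait()
done_b.wait()
ta.join()
tb.join()
```

<p>When the number of consumers is dynamic, <code>threading.Barrier</code> or counting <code>Queue.task_done()</code>/<code>Queue.join()</code> are the usual alternatives.</p>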
|
<python><multithreading><python-multithreading><producer-consumer><condition-variable>
|
2023-03-14 05:42:25
| 1
| 353
|
takanoha
|
75,729,285
| 9,906,395
|
Pandas: Find the left-most value in a pandas dataframe followed by all 1s
|
<p>I have the following dataset</p>
<pre><code>data = {'ID': ['A', 'B', 'C', 'D'],
'2012': [0, 1, 1, 1],
'2013': [0, 0, 1, 1],
'2014': [0, 0, 0, 1],
'2015': [0, 0, 1, 1],
'2016': [0, 0, 1, 0],
'2017': [1, 0, 1,1]}
df = pd.DataFrame(data)
</code></pre>
<p>For each row I want to generate a new column - <code>Baseline_Year</code> - which takes the name of the left-most column such that all values from it to the right are equal to 1. In case there is no such column, I would like <code>Baseline_Year</code> to be missing.</p>
<p>See the expected results</p>
<pre><code>data = {'ID': ['A', 'B', 'C', 'D', 'E'],
'2012': [0, 1, 1, 1, 1],
'2013': [0, 0, 1, 1, 1],
'2014': [0, 0, 0, 1, 1],
'2015': [0, 0, 1, 1, 1],
'2016': [0, 0, 1, 0, 1],
'2017': [1, 0, 1,1, 1],
'Baseline_Year': [np.nan, np.nan, '2015','2017', '2012'],
}
df_results = pd.DataFrame(data)
df_results
</code></pre>
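<p>A vectorized sketch (not from the original post) of the literal rule — a column qualifies when it and every column to its right are 1, which is exactly where a right-to-left cumulative product stays 1. Note the expected output above gives <code>NaN</code> for row <code>A</code> even though its 2017 suffix is all 1s, so an extra condition may be intended there:</p>

```python
import pandas as pd

data = {'ID': ['A', 'B', 'C', 'D', 'E'],
        '2012': [0, 1, 1, 1, 1],
        '2013': [0, 0, 1, 1, 1],
        '2014': [0, 0, 0, 1, 1],
        '2015': [0, 0, 1, 1, 1],
        '2016': [0, 0, 1, 0, 1],
        '2017': [1, 0, 1, 1, 1]}
df = pd.DataFrame(data)

years = df.columns[1:]
# Right-to-left cumprod is 1 exactly on a suffix of all-1 columns
suffix_all_ones = df[years].iloc[:, ::-1].cumprod(axis=1).iloc[:, ::-1].astype(bool)
# idxmax picks the left-most qualifying column; rows with none become NaN
df['Baseline_Year'] = suffix_all_ones.idxmax(axis=1).where(suffix_all_ones.any(axis=1))
```
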
|
<python><pandas><sum><row>
|
2023-03-14 05:07:48
| 1
| 1,122
|
Filippo Sebastio
|
75,729,249
| 346,977
|
Pandas, timedeltas, and dividing by zero
|
<p><strong>Update: Apologies for the confusion this question caused: The issue was that the <code>timedelta</code> fields I was looking at had been cast into pandas as (non-timedelta) objects. Recasting them to time deltas resolved the issue</strong></p>
<p>I have the following pandas dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>proactive_time</th>
<th>passive_time</th>
</tr>
</thead>
<tbody>
<tr>
<td>foo</td>
<td>timedelta(seconds=30)</td>
<td>timedelta(seconds=20)</td>
</tr>
<tr>
<td>bar</td>
<td>timedelta(0)</td>
<td>timedelta(0)</td>
</tr>
</tbody>
</table>
</div>
<p>And I'm looking to calculate the percentage of proactive time over proactive+passive time via:</p>
<pre><code>dataframe['proactive_ratio'] = dataframe['proactive_time'] / (dataframe['proactive_time'] + dataframe['passive_time'])
</code></pre>
<p>The row <code>foo</code> works just fine, however <code>bar</code> returns a <code>DivideByZero</code> error. Fair enough - but I'd like those values to return <del>timedelta(0)</del> <code>NaN</code> instead of throwing the error.</p>
<p>This is where my lack of pandas knowledge presents problems: I can solve this via python conditionals in a <code>for</code> loop, but for performance reasons, I'd really prefer to keep this in pandas.</p>
<p>The strategies I've tried (e.g. <code>.apply</code>) seem to only have knowledge of one column at a time, which makes combining proactive_time and passive_time unfeasible.</p>
<p>Any thoughts on how to resolve?</p>
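<p>For reference, a sketch (not from the original post) of the behaviour after recasting: once both columns are genuine timedelta dtype, pandas divides element-wise and the 0/0 row yields <code>NaN</code> instead of raising:</p>

```python
from datetime import timedelta

import pandas as pd

df = pd.DataFrame({
    'proactive_time': [timedelta(seconds=30), timedelta(0)],
    'passive_time': [timedelta(seconds=20), timedelta(0)],
}, index=['foo', 'bar'])

# If the columns arrived as plain objects, recast them to real timedeltas
df['proactive_time'] = pd.to_timedelta(df['proactive_time'])
df['passive_time'] = pd.to_timedelta(df['passive_time'])

# timedelta / timedelta yields a float; the 0/0 row becomes NaN, not an error
df['proactive_ratio'] = df['proactive_time'] / (df['proactive_time'] + df['passive_time'])
```
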
|
<python><pandas>
|
2023-03-14 05:00:43
| 2
| 12,635
|
PlankTon
|
75,729,220
| 5,302,323
|
Running all python codes saved in a folder from Jupyter Notebook
|
<p>Is there any python code I can use to automatically detect all ipynb files saved inside a specific folder and run them sequentially?</p>
<p>I can identify them all using this function:</p>
<pre class="lang-py prettyprint-override"><code>path = glob.iglob(f"/Users/[...]")
</code></pre>
<p>But unfortunately I cannot run them using</p>
<pre class="lang-py prettyprint-override"><code>for file in path:
    %run file
</code></pre>
<p>Any help would be really appreciated!</p>
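<p>One sketch (not from the original post): <code>%run</code> is IPython-only and targets <code>.py</code> scripts, so a common route is to shell out to <code>jupyter nbconvert --execute</code> instead. The folder path is a placeholder:</p>

```python
import glob
import os
import subprocess

def notebooks_in(folder):
    """Return the .ipynb files in `folder`, sorted for a deterministic order."""
    return sorted(glob.glob(os.path.join(folder, "*.ipynb")))

def run_all(folder):
    for nb in notebooks_in(folder):
        # nbconvert executes the notebook and writes the results back in place
        subprocess.run(
            ["jupyter", "nbconvert", "--to", "notebook", "--execute", "--inplace", nb],
            check=True,
        )
```

<p>The <code>papermill</code> package is another option when the notebooks need parameters.</p>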
|
<python>
|
2023-03-14 04:56:04
| 1
| 365
|
Cla Rosie
|
75,729,192
| 6,451,136
|
Argon2 hash key output is over 32 bytes
|
<p>I am learning cryptography and started a simple project where I intend on using <code>Argon2</code> to hash my password with a randomly generated salt and then use this hash to generate cipher text and encrypt data using AES algorithm.</p>
<p>However, my hash always seems to be over 32 bytes.</p>
<p>For Hash key generation, I use the following:</p>
<pre class="lang-py prettyprint-override"><code>ph = PasswordHasher(hash_len=32, salt_len=16)
hash = ph.hash(key)
</code></pre>
<p>When I print this, I find the output that looks like this:</p>
<pre class="lang-bash prettyprint-override"><code>Hash: $argon2id$v=19$m=65536,t=3,p=4$BkGV9OL8x2VlYhPyj7efVA$5U6H89HJb+6IkQxYYWkp9CQd42dEXdiSwKfdB0PnEZI
Hashed password: 5U6H89HJb+6IkQxYYWkp9CQd42dEXdiSwKfdB0PnEZI
Salt: BkGV9OL8x2VlYhPyj7efVA
Length of Password: 43
Length of Salt: 22
</code></pre>
<p>I am able to <code>verify</code> this password without any issue as follows:</p>
<pre><code>ph.verify(hash, key)
</code></pre>
<p>I try to encrypt data using AES as follows:</p>
<pre><code>cipher = AES.new(key, AES.MODE_EAX)
nonce = cipher.nonce
ciphertext, tag = cipher.encrypt_and_digest(data)
</code></pre>
<p>This always results in an error:</p>
<pre><code>ValueError: Incorrect AES key length (43 bytes)
</code></pre>
<p>Why does the length of the key always exceed 32 bytes even when I explicitly specify it in the <code>PasswordHasher</code>?
Am I missing something?</p>
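<p>One observation (not from the original post): the 43 characters are not 43 bytes of key material. <code>PasswordHasher.hash</code> returns a PHC-formatted <em>string</em>, and its last field is the 32-byte digest base64-encoded without padding — 43 base64 characters decode to exactly 32 raw bytes. A sketch on the hash printed above (for deriving an AES key directly, <code>argon2.low_level.hash_secret_raw</code> returns raw bytes instead):</p>

```python
import base64

encoded = "5U6H89HJb+6IkQxYYWkp9CQd42dEXdiSwKfdB0PnEZI"
# The PHC encoding strips base64 padding; restore it before decoding
raw = base64.b64decode(encoded + "=" * (-len(encoded) % 4))
print(len(raw))  # 32 -- a valid AES-256 key length
```
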
|
<python><encryption><cryptography><argon><argon2-cffi>
|
2023-03-14 04:50:29
| 0
| 345
|
Varun Pius Rodrigues
|
75,729,172
| 4,616,611
|
Creating nested json with list from pandas dataframe
|
<p>I have a DataFrame with these columns: <code>id, jan_x, jan_y, feb_x, feb_y, ..., dec_x, dec_y</code> that I would like export as a json that is structured like this:</p>
<pre><code>{
"id1": [
[jan_x, feb_x, ..., dec_x],
[jan_y, feb_y, ..., dec_y]
],
"id2": [
[jan_x, feb_x, ..., dec_x],
[jan_y, feb_y, ..., dec_y]
]
}
</code></pre>
<p>The initial keys, e.g. <code>id1</code>, correspond to an id in column <code>id</code> of my dataframe. Without any custom parsing function, is there a straightforward, functional way of achieving this? I have tried dumping it as JSON, but the desired list structure isn't captured.</p>
<p>Here is a sample data frame with just two months.</p>
<pre><code>data = {'id': ['1', '2', '3', '4'],
'jan_x': [1, 2, 3, 4],
'jan_y': [5, 6, 7, 8],
'feb_x': [9, 10, 11, 12],
'feb_y': [13, 14, 15, 16]}
df = pd.DataFrame(data)
</code></pre>
<p>Sample Output:</p>
<pre><code>{
"1": [
[1, 9],
[5, 13]
],
"2": [
[2, 10],
[6, 14]
],
"3": [
[3, 11],
[7, 15]
],
"4": [
[4, 12],
[8, 16]
]
}
</code></pre>
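<p>A short sketch (not from the original post) that builds the structure with a dict comprehension, grouping columns by their <code>_x</code>/<code>_y</code> suffix:</p>

```python
import json

import pandas as pd

data = {'id': ['1', '2'],
        'jan_x': [1, 2], 'jan_y': [5, 6],
        'feb_x': [9, 10], 'feb_y': [13, 14]}
df = pd.DataFrame(data)

x_cols = [c for c in df.columns if c.endswith('_x')]
y_cols = [c for c in df.columns if c.endswith('_y')]
# .tolist() converts numpy scalars to plain ints so json.dumps accepts them
result = {row['id']: [row[x_cols].tolist(), row[y_cols].tolist()]
          for _, row in df.iterrows()}
print(json.dumps(result))
```
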
|
<python><json><pandas><parsing>
|
2023-03-14 04:47:52
| 2
| 1,669
|
Teodorico Levoff
|
75,729,141
| 14,109,040
|
Differencing all other (not selected) columns in a dataframe by another column
|
<p>I have a data frame with the following structure.</p>
<pre><code> Index1 Index2 -2 -1 0 1 2
0 A W 20 25 28 29 18
1 B X 21 20 26 21 27
2 C Y 19 19 20 17 17
3 D Z 18 17 15 18 18
</code></pre>
<p>I want to subtract the '0' column from all columns except for the index columns and create new columns with the result, titled using the column name and a suffix. I successfully managed to do this:</p>
<pre><code> Index1 Index2 -2 -1 0 1 2 -1 ph -2 ph 1 ph 2 ph
0 A W 20 25 28 29 18 -3 -8 1 -10
1 B X 21 20 26 21 27 -6 -5 -5 1
2 C Y 19 19 20 17 17 -1 -1 -3 -3
3 D Z 18 17 15 18 18 2 3 3 3
</code></pre>
<p>The code to recreate the example:</p>
<pre><code>import pandas as pd
# Create DataFrame
data = {'Index1': ['A', 'B', 'C', 'D'],
'Index2': ['W', 'X', 'Y', 'Z'],
'-2': [20, 21, 19, 18],
'-1': [25, 20, 19, 17],
'0': [28, 26, 20, 15],
'1': [29, 21, 17, 18],
'2': [18, 27, 17, 18]
}
df = pd.DataFrame(data)
df = df.assign(**df[['-1','-2','1', '2']].sub(df['0'], axis=0).add_suffix(' ph'))
print(df)
</code></pre>
<p>However, the columns (except for the indexes) can vary (so could be -3,-2,1,0,1 etc.) So I want to update my code to say: subtract column '0' from all columns except for the indexes (Index1 and Index2).</p>
<p>I tried this (and a few other alternatives):</p>
<pre><code>df = df.assign(**df[~['Index1','Index2']].sub(df['0'], axis=0).add_suffix(' ph'))
</code></pre>
<p>But I didn't manage to figure it out.</p>
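<p>A sketch (not from the original post): <code>~</code> only negates boolean arrays, not lists of labels; <code>Index.difference</code> expresses "everything except" directly. Column <code>'0'</code> is also excluded here, matching the desired output, which has no <code>0 ph</code> column:</p>

```python
import pandas as pd

data = {'Index1': ['A', 'B'], 'Index2': ['W', 'X'],
        '-1': [25, 20], '0': [28, 26], '1': [29, 21]}
df = pd.DataFrame(data)

# All columns except the index columns and '0' itself
value_cols = df.columns.difference(['Index1', 'Index2', '0'])
df = df.assign(**df[value_cols].sub(df['0'], axis=0).add_suffix(' ph'))
```
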
|
<python><python-3.x><pandas>
|
2023-03-14 04:41:33
| 1
| 712
|
z star
|
75,729,116
| 4,082,104
|
How to get numpy to interpret object array slice as single array?
|
<p>When you ask Numpy to make an array out of a collection including arbitrary objects, it will create an array of "object" type, which allows you to use index slicing across those objects, but since the object itself is unknown to numpy, you cannot index into the object in one go (even if that particular object <strong>is</strong> actually a numpy array).</p>
<p>However, if you slice into the object array to select the parts of the object array that are actually numpy arrays, it seems that numpy won't collapse that slice into a single numpy array, even with another call to <code>np.array()</code>. Here is a little example of what I mean:</p>
<pre class="lang-py prettyprint-override"><code>>>> aa = np.array([np.random.randn(3, 4), {'something': 'blah'}], dtype=object)
>>> aa.shape
(2,)
>>> np.array(aa[0:1])
array([array([[ 1.78237043, -0.61082005, 0.92160137, 0.58961677],
[ 1.54183639, -0.43097464, 1.36213935, -1.2695875 ],
[ 0.01431181, -0.62073519, 0.56267489, -0.46113538]])],
dtype=object)
>>> np.array(aa[0:1]).shape # I want this to be (1, 3, 4)
(1,)
</code></pre>
<p>Is there any way to do this without a double copy (e.g. not like this: <code>np.array(aa[0:1].tolist())</code>)? Does an object array even allow you to do this without such a copy?</p>
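<p>One sketch (not from the original post): <code>np.stack</code> iterates the slice once and assembles the numeric array directly — a single copy, with no <code>.tolist()</code> round trip. A copy is unavoidable in any case, because an object array stores pointers to separately allocated arrays rather than one contiguous buffer:</p>

```python
import numpy as np

aa = np.array([np.random.randn(3, 4), {'something': 'blah'}], dtype=object)
# stack treats the 1-element object slice as a sequence of arrays and
# assembles a single float array from it
out = np.stack(aa[0:1])
print(out.shape)  # (1, 3, 4)
```
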
|
<python><numpy>
|
2023-03-14 04:34:44
| 1
| 6,060
|
Multihunter
|
75,728,785
| 5,059,995
|
Conflicting module dependence in python project
|
<pre><code>ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
cosmic-ray 8.3.5 requires virtualenv<=16.7.10, but you have virtualenv 20.21.0 which is incompatible.
</code></pre>
<p>Cosmic-ray uses a very old version of virtualenv, and it's causing conflicts with other modules that use relatively newer versions.</p>
<p>In my requirements-dev.txt I have two packages: one internally uses virtualenv version 20, and the other uses virtualenv version 16.
When I try to install the packages I get the above error.</p>
<p>How should we handle such scenarios?</p>
<p>I tried the usual approach of not pinning the version in requirements.txt, but the same error happens.</p>
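<p>One common workaround (not from the original post) is to keep cosmic-ray out of the shared environment entirely — its own venv, or <code>pipx</code> — so its <code>virtualenv<=16.7.10</code> pin never meets the rest of requirements-dev.txt. The paths below are placeholders, and the install step needs network access:</p>

```shell
# Give cosmic-ray a private environment
python3 -m venv .venv-cosmic-ray
./.venv-cosmic-ray/bin/pip install cosmic-ray
# Run it via the dedicated interpreter while the main venv keeps virtualenv 20.x
./.venv-cosmic-ray/bin/cosmic-ray --help
```
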
|
<python><dependencies><mutation-testing>
|
2023-03-14 03:12:32
| 1
| 311
|
Dipu Krishnan
|
75,728,717
| 16,971,617
|
Can i initialize optimizer before changing the last layer of my model
|
<p>Say I want to change the last layer of my model, but my optimizer is defined at the top of my script — what is the better practice?</p>
<pre><code>batch_size = 8
learning_rate = 2e-4
num_epochs = 100
cnn = models.resnet18(weights='DEFAULT')
loss_func = nn.CrossEntropyLoss()
optimizer = optim.Adam(cnn.parameters(), lr=learning_rate)
def main():
dataset= datasets.ImageFolder(root=mydir)
loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
num_classes = len(dataset.classes)
num_ftrs = cnn.fc.in_features
cnn.fc = nn.Linear(num_ftrs, num_classes)
# Do i need to reinitialize the optimizer again here?
if __name__ == '__main__':
main()
</code></pre>
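<p>A sketch (not from the original post) of why the answer is yes: an optimizer built before the swap still holds references to the old <code>fc</code> parameters, so the new layer would never be updated. A plain <code>nn.Sequential</code> stands in for the resnet here:</p>

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(nn.Linear(8, 4), nn.Linear(4, 2))

# Swap the head *before* constructing the optimizer, so the optimizer
# tracks the new layer's parameters rather than the discarded ones
model[1] = nn.Linear(4, 3)
optimizer = optim.Adam(model.parameters(), lr=2e-4)

tracked = {id(p) for g in optimizer.param_groups for p in g['params']}
assert all(id(p) in tracked for p in model[1].parameters())
```
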
|
<python><pytorch>
|
2023-03-14 02:56:58
| 1
| 539
|
user16971617
|
75,728,582
| 4,390,160
|
Visibility of superclass constructor parameters in subclass constructor
|
<p>Consider this example:</p>
<pre><code>class Foo:
def __init__(self, bar: int, baz: str):
self.bar = bar
self.baz = baz
class SubFoo(Foo):
def __init__(self, qux: float, *args, **kwargs):
super().__init__(*args, **kwargs)
self.qux = qux
some_sub_foo = SubFoo(1.0, 3, 'test')
other_sub_foo = SubFoo(bar=4, baz='hello', qux=42.0) # or in whatever order
</code></pre>
<p>However, when writing a <code>SubFoo()</code> definition, editors and IDEs will only be able to provide type hints and auto-completion for the subclass' own parameters, the superclass' parameters are effectively hidden. (interestingly, CoPilot sees right through it, but that's not a solution to my question)</p>
<p>My question: is there some way, Pythonic or nasty, that would allow me to write SubFoo without having to repeat the full signature of <code>Foo</code> verbatim, while retaining autocompletion, type hints, etc.?</p>
<p>A solution that requires rewriting of <code>Foo</code> would be acceptable, but of course a solution that only requires work on <code>SubFoo</code> would make it more useful. And if there's some solution that would work for other methods, but not for constructors, I'd be interested to see it as well.</p>
<p>As for repeating the entire signature of <code>Foo</code> verbatim, this works and behaves as required, but is impractical when extending more complex classes:</p>
<pre><code>class SubFoo(Foo):
def __init__(self, qux: float, bar: int, baz: str):
super().__init__(bar, baz)
self.qux = qux
</code></pre>
<p>Note that I don't see objections to what I'm asking out of principle. Constructors like this break the Liskov substitution principle, but that typically doesn't apply to constructors anyway. If there's some other reason why nobody should want this in the first place, that might be a good answer as well.</p>
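<p>One Pythonic option (not from the original post, and it does require rewriting <code>Foo</code>) is <code>dataclasses</code>: the generated <code>__init__</code> of a subclass includes the inherited fields, and IDEs and type checkers understand it. Note the positional order puts parent fields first:</p>

```python
from dataclasses import dataclass

@dataclass
class Foo:
    bar: int
    baz: str

@dataclass
class SubFoo(Foo):
    qux: float

# The generated signature is SubFoo(bar: int, baz: str, qux: float),
# fully visible to autocompletion without repeating it by hand
sub = SubFoo(bar=4, baz='hello', qux=42.0)
```
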
|
<python><oop>
|
2023-03-14 02:25:30
| 2
| 32,399
|
Grismar
|
75,728,510
| 736,312
|
Why does re.findall return a list of tuples containing empty strings but re.finditer works correctly?
|
<p>I am using Python version 3.9.13.<br>
I am trying to use the findall function from the re library, but I am getting empty results.</p>
<p>The regex I am using is:<br></p>
<pre><code>_regex = re.compile(r"(?:)0\d{1,4}(?:-?\d{2,4}-?\d{2,4}|\d{8}|\d(-)?\d{2,4}(-)?\d{3,4})")
</code></pre>
<p>I am testing that on the following string:<br></p>
<pre><code>_text = "06-6206-567903-3668-067403-3668-400503-3668-429503-3668-432403-3668-039206-6206-572906-6206-630303-3668-481806-6206-564403-3668-053703-3668-070606-6206-5663"
</code></pre>
<p>From re.finditer, I am getting correct results:<br></p>
<pre><code>_test = re.finditer(_regex, _text)
for item in _test:
print(item)
<re.Match object; span=(0, 12), match='06-6206-5679'>
<re.Match object; span=(12, 24), match='03-3668-0674'>
<re.Match object; span=(24, 36), match='03-3668-4005'>
<re.Match object; span=(36, 48), match='03-3668-4295'>
<re.Match object; span=(48, 60), match='03-3668-4324'>
<re.Match object; span=(60, 72), match='03-3668-0392'>
<re.Match object; span=(72, 84), match='06-6206-5729'>
<re.Match object; span=(84, 96), match='06-6206-6303'>
<re.Match object; span=(96, 108), match='03-3668-4818'>
<re.Match object; span=(108, 120), match='06-6206-5644'>
<re.Match object; span=(120, 132), match='03-3668-0537'>
<re.Match object; span=(132, 144), match='03-3668-0706'>
<re.Match object; span=(144, 156), match='06-6206-5663'>
</code></pre>
<p>However, when using the re.findall function, I am getting empty results.</p>
<pre><code> _test = re.findall(_regex, _text)
[('', ''), ('', ''), ('', ''), ('', ''), ('', ''), ('', ''), ('', ''), ('', ''), ('', ''), ('', ''), ('', ''), ('', ''), ('', '')]
</code></pre>
<p>I am wondering if this problem comes from the regex I am using (maybe the first non-capturing group?).
Please help.</p>
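<p>A note and sketch (not from the original post): it is the <em>capturing</em> groups <code>(-)</code>, not the non-capturing ones. <code>re.findall</code> returns the captured groups whenever a pattern contains any, and those optional <code>(-)</code> groups capture empty strings here. Either read <code>group(0)</code> via <code>finditer</code>, or make every group non-capturing. Illustrated on a simplified hypothetical pattern:</p>

```python
import re

text = "06-6206-567903-3668-0674"
# A pattern with *capturing* groups: findall returns tuples of the groups
pattern = re.compile(r"(0\d)-(\d{4})-(\d{4})")
print(re.findall(pattern, text))  # [('06', '6206', '5679'), ('03', '3668', '0674')]

# Fix 1: read the whole match via finditer
full = [m.group(0) for m in pattern.finditer(text)]
# Fix 2: make every group non-capturing, then findall returns whole matches
noncap = re.findall(r"(?:0\d)-(?:\d{4})-(?:\d{4})", text)
assert full == noncap == ['06-6206-5679', '03-3668-0674']
```
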
|
<python><python-re>
|
2023-03-14 02:09:48
| 1
| 796
|
Toyo
|
75,728,476
| 8,391,698
|
How to save HappyTransformer model in specified directory instead of ~/.cache
|
<p>I have this code, which saves a <a href="https://happytransformer.com" rel="nofollow noreferrer">HappyTransformer</a> model in the <code>.cache/</code> directory by default:</p>
<pre><code>from happytransformer import HappyTextToText, TTSettings
happy_tt = HappyTextToText(model_type= "gpt")
</code></pre>
<p>I would like it to be saved automatically in a specified cache directory:</p>
<pre><code>"/home/ubuntu/storage1/various_transformer_models/"
</code></pre>
<p>What should I do?</p>
<p>I tried this:</p>
<pre><code>from happytransformer import HappyTextToText, TTSettings
cache_dir = "/home/ubuntu/storage1/various_transformer_models/"
tt_settings = TTSettings(cache_dir=cache_dir)
happy_tt = HappyTextToText(model_type="gpt", tt_settings=tt_settings)
</code></pre>
<p>But it failed:</p>
<pre><code>----> 5 tt_settings = TTSettings(cache_dir=cache_dir)
6 happy_tt = HappyTextToText(model_type="gpt", tt_settings=tt_settings)
TypeError: __init__() got an unexpected keyword argument 'cache_dir'
</code></pre>
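<p>A workaround sketch (not from the original post): <code>TTSettings</code> has no <code>cache_dir</code> field, as the error shows. Assuming HappyTransformer delegates model downloads to Hugging Face <code>transformers</code>, the standard cache environment variable can redirect the download location — set it before the model is constructed:</p>

```python
import os

# Hugging Face transformers honours this variable for its model cache.
# Assumption: HappyTransformer loads models through transformers.
os.environ["TRANSFORMERS_CACHE"] = "/home/ubuntu/storage1/various_transformer_models/"

# ... now construct HappyTextToText as usual ...
```
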
|
<python><huggingface-transformers><huggingface-datasets>
|
2023-03-14 02:01:29
| 0
| 5,189
|
littleworth
|
75,728,463
| 7,742,405
|
Json data to raw data
|
<p>I need your help: I need to transform this JSON data into raw (flat) data, but I don't want to use a lot of nested for loops to iterate over this JSON. Do you have some idea how to do that in a better way?</p>
<p>I don't know how to do that in an "easy" way; I don't want <code>for: for: for: for</code>.</p>
<p>Thank you guys!</p>
<hr />
<h1>EDIT</h1>
<p>JSON input with the whole data:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"name": "Dummy_App_Name",
"appKey": "Dummy_App_Key",
"platform": "Dummy_Platform",
"data": [
[
{
"id": "ffb1e945-f619-48d9-ab7f-e7a2c1792003",
"name": "Dummy_Ad_Network_Instance_1",
"contents": [
{
"id": "Dummy_id",
"name": "Dummy_Name",
                    "isSkippable": true,
                    "offerwallAbTest": null,
"type": "Dummy_Type",
"insights": {
"reports": [
{
"country": "TD",
"clicks": [ 0, 1, 2, 3, 4, 5, 6 ],
"conversions": [ 0, 1, 2, 3, 4, 5, 6 ],
"impressions": [ 0, 1, 2, 3, 4, 5, 6 ],
"dailyUniqueViewers": [ 0, 1, 2, 3, 4, 5, 6 ],
"dailyUniqueConversions": [ 0, 1, 2, 3, 4, 5, 6 ],
"earnings": [ 0, 1, 2, 3, 4, 5, 6 ],
},
{
"country": "SC",
"clicks": [ 0, 1, 2, 3, 4, 5, 6 ],
"conversions": [ 0, 1, 2, 3, 4, 5, 6 ],
"impressions": [ 0, 1, 2, 3, 4, 5, 6 ],
"dailyUniqueViewers": [ 0, 1, 2, 3, 4, 5, 6 ],
"dailyUniqueConversions": [ 0, 1, 2, 3, 4, 5, 6 ],
"earnings": [ 0, 1, 2, 3, 4, 5, 6 ],
}
],
"timestamps": [
"2023-03-06T00:00:00Z",
"2023-03-07T00:00:00Z",
"2023-03-08T00:00:00Z",
"2023-03-09T00:00:00Z",
"2023-03-10T00:00:00Z",
"2023-03-11T00:00:00Z",
"2023-03-12T00:00:00Z"
]
}
}
]
},
{
"id": "be70f064-6226-412f-942c-2a2eeabb8d79",
"name": "Dummy_Ad_Network_Instance_2",
"contents": [
{
"id": "Dummy_Id",
"name": "Dummy_Name",
                    "isSkippable": true,
                    "offerwallAbTest": null,
"type": "Dummy_Type",
"insights": {
"reports": [
{
"country": "BY",
"clicks": [ 0, 1, 2, 3, 4, 5, 6 ],
"conversions": [ 0, 1, 2, 3, 4, 5, 6 ],
"impressions": [ 0, 1, 2, 3, 4, 5, 6 ],
"dailyUniqueViewers": [ 0, 1, 2, 3, 4, 5, 6 ],
"dailyUniqueConversions": [ 0, 1, 2, 3, 4, 5, 6 ],
"earnings": [ 0, 1, 2, 3, 4, 5, 6 ],
},
{
"country": "CA",
"clicks": [ 0, 1, 2, 3, 4, 5, 6 ],
"conversions": [ 0, 1, 2, 3, 4, 5, 6 ],
"impressions": [ 0, 1, 2, 3, 4, 5, 6 ],
"dailyUniqueViewers": [ 0, 1, 2, 3, 4, 5, 6 ],
"dailyUniqueConversions": [ 0, 1, 2, 3, 4, 5, 6 ],
"earnings": [ 0, 1, 2, 3, 4, 5, 6 ]
}
],
"timestamps": [
"2023-03-06T00:00:00Z",
"2023-03-07T00:00:00Z",
"2023-03-08T00:00:00Z",
"2023-03-09T00:00:00Z",
"2023-03-10T00:00:00Z",
"2023-03-11T00:00:00Z",
"2023-03-12T00:00:00Z"
]
}
}
]
}
]
]
}
]
</code></pre>
<p>Output expected</p>
<pre><code>"date", "app_name", "appKey", "platform", "ad_network_instance", "placement", "country", "earnings", "impressions", "clicks", "conversions", "ecpm", "dailyUniqueViewers", "dailyUniqueConversions"
"2023-03-06T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_1","TD",0,0,0,0,((earning/1000000)/impressions)*1000,0,0,0
"2023-03-07T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_1","TD",1,1,1,1,((earning/1000000)/impressions)*1000,1,1,1
"2023-03-08T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_1","TD",2,2,2,2,((earning/1000000)/impressions)*1000,2,2,2
"2023-03-09T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_1","TD",3,3,3,3,((earning/1000000)/impressions)*1000,3,3,3
"2023-03-10T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_1","TD",4,4,4,4,((earning/1000000)/impressions)*1000,4,4,4
"2023-03-11T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_1","TD",5,5,5,5,((earning/1000000)/impressions)*1000,5,5,5
"2023-03-12T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_1","TD",6,6,6,6,((earning/1000000)/impressions)*1000,6,6,6
"2023-03-06T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_1","SC",0,0,0,0,((earning/1000000)/impressions)*1000,0,0,0
"2023-03-07T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_1","SC",1,1,1,1,((earning/1000000)/impressions)*1000,1,1,1
"2023-03-08T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_1","SC",2,2,2,2,((earning/1000000)/impressions)*1000,2,2,2
"2023-03-09T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_1","SC",3,3,3,3,((earning/1000000)/impressions)*1000,3,3,3
"2023-03-10T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_1","SC",4,4,4,4,((earning/1000000)/impressions)*1000,4,4,4
"2023-03-11T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_1","SC",5,5,5,5,((earning/1000000)/impressions)*1000,5,5,5
"2023-03-12T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_1","SC",6,6,6,6,((earning/1000000)/impressions)*1000,6,6,6
"2023-03-06T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_2","BY",0,0,0,0,((earning/1000000)/impressions)*1000,0,0,0
"2023-03-07T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_2","BY",1,1,1,1,((earning/1000000)/impressions)*1000,1,1,1
"2023-03-08T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_2","BY",2,2,2,2,((earning/1000000)/impressions)*1000,2,2,2
"2023-03-09T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_2","BY",3,3,3,3,((earning/1000000)/impressions)*1000,3,3,3
"2023-03-10T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_2","BY",4,4,4,4,((earning/1000000)/impressions)*1000,4,4,4
"2023-03-11T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_2","BY",5,5,5,5,((earning/1000000)/impressions)*1000,5,5,5
"2023-03-12T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_2","BY",6,6,6,6,((earning/1000000)/impressions)*1000,6,6,6
"2023-03-06T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_2","CA",0,0,0,0,((earning/1000000)/impressions)*1000,0,0,0
"2023-03-07T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_2","CA",1,1,1,1,((earning/1000000)/impressions)*1000,1,1,1
"2023-03-08T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_2","CA",2,2,2,2,((earning/1000000)/impressions)*1000,2,2,2
"2023-03-09T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_2","CA",3,3,3,3,((earning/1000000)/impressions)*1000,3,3,3
"2023-03-10T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_2","CA",4,4,4,4,((earning/1000000)/impressions)*1000,4,4,4
"2023-03-11T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_2","CA",5,5,5,5,((earning/1000000)/impressions)*1000,5,5,5
"2023-03-12T00:00:00Z","Dummy_App_Name","Dummy_App_Key","Dummy_Platform","Dummy_Ad_Network_Instance_2","CA",6,6,6,6,((earning/1000000)/impressions)*1000,6,6,6
</code></pre>
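<p>A sketch (not from the original post): the nesting has to appear somewhere, because it mirrors the JSON structure, but hiding it in one generator keeps every call site flat and avoids intermediate lists. Column choices and the ecpm formula follow the expected output above; zero impressions yield <code>None</code> by assumption, and <code>content["name"]</code> is assumed to be the "placement":</p>

```python
def flatten(apps):
    """Yield one flat dict per (app, instance, country, timestamp)."""
    for app in apps:
        for group in app["data"]:
            for instance in group:
                for content in instance["contents"]:
                    insights = content["insights"]
                    for report in insights["reports"]:
                        for i, ts in enumerate(insights["timestamps"]):
                            earn = report["earnings"][i]
                            imp = report["impressions"][i]
                            yield {
                                "date": ts,
                                "app_name": app["name"],
                                "appKey": app["appKey"],
                                "platform": app["platform"],
                                "ad_network_instance": instance["name"],
                                "placement": content["name"],
                                "country": report["country"],
                                "earnings": earn,
                                "impressions": imp,
                                "clicks": report["clicks"][i],
                                "conversions": report["conversions"][i],
                                "ecpm": (earn / 1e6) / imp * 1000 if imp else None,
                                "dailyUniqueViewers": report["dailyUniqueViewers"][i],
                                "dailyUniqueConversions": report["dailyUniqueConversions"][i],
                            }
```

<p>Feeding the generator to <code>csv.DictWriter.writerows</code> (or <code>pd.DataFrame(flatten(data))</code>) produces the table in one pass.</p>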
|
<python><json><csv>
|
2023-03-14 01:57:00
| 2
| 449
|
Oracy Martos
|
75,728,387
| 6,379,197
|
SGD Optimizer Not Working on cost function
|
<p>I wanted to build my own neural network for a speech dataset using TensorFlow. I imported the libraries and the dataset, did one-hot encoding, assigned the weights and biases, and ran forward propagation with random values. For backpropagation and cost minimization I used a loss function, but I am unable to use the optimizer and don't know why.</p>
<pre><code> tf.compat.v1.disable_v2_behavior()
X = tf.compat.v1.placeholder(tf.float32, [None, n_dim])
Y = tf.compat.v1.placeholder(tf.float32, [None, n_classes])
W_1 = tf.Variable(tf.random.normal([n_dim, n_hidden_units_one], mean=0, stddev=sd))
b_1 = tf.Variable(tf.random.normal([n_hidden_units_one], mean=0, stddev=sd))
h_1 = tf.nn.tanh(tf.matmul(X, W_1) + b_1)
W_2 = tf.Variable(tf.random.normal([n_hidden_units_one, n_hidden_units_two], mean=0, stddev=sd))
b_2 = tf.Variable(tf.random.normal([n_hidden_units_two], mean=0, stddev=sd))
h_2 = tf.nn.sigmoid(tf.matmul(h_1, W_2) + b_2)
W = tf.Variable(tf.random.normal([n_hidden_units_two, n_classes], mean=0, stddev=sd))
b = tf.Variable(tf.random.normal([n_classes], mean=0, stddev=sd))
y_ = tf.nn.softmax(tf.matmul(h_2, W) + b)
init = tf.compat.v1.global_variables_initializer()
saver = tf.compat.v1.train.Saver()
cost_function = tf.reduce_mean(-tf.reduce_sum(Y * tf.math.log(y_), axis=[1]))
tape_df = tf.GradientTape(persistent=True)
opt = tf.optimizers.SGD(learning_rate=0.01)
optimizer = opt.minimize(cost_function, var_list=[W, b], tape =tape_df)
</code></pre>
<p>But I am getting this error:</p>
<pre><code> Traceback (most recent call last):
File "D:/Sound Data/Sound-classification-on-Raspberry-Pi-with-Tensorflow/trainModel.py", line 134, in <module>
optimizer = opt.minimize(cost_function, var_list=[W, b], tape =tape_df)
File "D:\Sound Data\Sound-classification-on-Raspberry-Pi-with-Tensorflow\venv\lib\site-packages\keras\optimizers\optimizer_experimental\optimizer.py", line 526, in minimize
grads_and_vars = self.compute_gradients(loss, var_list, tape)
File "D:\Sound Data\Sound-classification-on-Raspberry-Pi-with-Tensorflow\venv\lib\site-packages\keras\optimizers\optimizer_experimental\optimizer.py", line 259, in compute_gradients
grads = tape.gradient(loss, var_list)
File "D:\Sound Data\Sound-classification-on-Raspberry-Pi-with-Tensorflow\venv\lib\site-packages\tensorflow\python\eager\backprop.py", line 1052, in gradient
raise RuntimeError("A non-persistent GradientTape can only be used to "
RuntimeError: A non-persistent GradientTape can only be used to compute one set of gradients (or jacobians)
</code></pre>
<p>How can I solve this error? I am using TensorFlow 2.11.</p>
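<p>A sketch (not from the original post): v1 graph-mode placeholders and a v2 <code>GradientTape</code> don't mix — the tape must actually <em>record</em> the forward pass inside a <code>with</code> block, in eager mode. A minimal eager-style version of the same softmax/cross-entropy step, reduced to one layer:</p>

```python
import tensorflow as tf

W = tf.Variable(tf.random.normal([4, 2]))
b = tf.Variable(tf.zeros([2]))
x = tf.random.normal([8, 4])
y = tf.one_hot(tf.random.uniform([8], maxval=2, dtype=tf.int32), 2)

opt = tf.optimizers.SGD(learning_rate=0.01)

# The forward pass must run inside the tape context so it gets recorded
with tf.GradientTape() as tape:
    y_ = tf.nn.softmax(tf.matmul(x, W) + b)
    cost = tf.reduce_mean(-tf.reduce_sum(y * tf.math.log(y_ + 1e-9), axis=1))
grads = tape.gradient(cost, [W, b])
opt.apply_gradients(zip(grads, [W, b]))
```
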
|
<python><tensorflow><speech-recognition><sgd>
|
2023-03-14 01:37:43
| 0
| 2,230
|
Sultan Ahmed
|
75,728,358
| 610,569
|
Is there an efficient way to prepend items from another list to a list?
|
<p>Given two lists, the goal is to prepend each item of the 2nd list to a copy of the 1st and return the matrix of lists, e.g.</p>
<pre><code>x = [1, 2, 3, 4, 5]
y = [123, 456] # m no. of items
def func(x, y):
return [[i] + x for i in y]
func(x, y)
</code></pre>
<p>[out]:</p>
<pre><code>[[123, 1, 2, 3, 4, 5], [456, 1, 2, 3, 4, 5]]
</code></pre>
<p><strong>While this works for a small number of items in <code>y</code>, each of the m rows copies all n items of <code>x</code>, so the cost is O(m·n), which is kind of expensive. Is there an efficient way to optimize this prepending, or is that the best possible?</strong></p>
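<p>A note and sketch (not from the original post): materializing real lists necessarily copies <code>x</code> once per row, so O(m·n) is the floor for that output type. If the rows only need to be iterated, <code>itertools.chain</code> builds them lazily in O(m) without copying <code>x</code> at all:</p>

```python
from itertools import chain

x = [1, 2, 3, 4, 5]
y = [123, 456]

# Eager: one copy of x per row -- O(m*n) overall
rows = [[i, *x] for i in y]

# Lazy: O(m) to build; x is shared, not copied, and each chain
# yields i followed by the items of x when iterated
lazy_rows = [chain([i], x) for i in y]
assert [list(r) for r in lazy_rows] == rows
```
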
|
<python><list><vectorization><prepend>
|
2023-03-14 01:30:08
| 1
| 123,325
|
alvas
|
75,728,328
| 6,676,101
|
In Python, is a file opened using the `w` access flag encoded using ASCII, UTF-8, Unicode, or something else?
|
<p>In Python, if a file is opened using the <code>w</code> access flag, will the data written to the file be:</p>
<blockquote>
<ul>
<li>ASCII-encoded</li>
<li>encoded using the UTF-8 scheme</li>
<li>encoded using some other scheme?</li>
</ul>
</blockquote>
<p>If it is platform or system specific, how do we find out which encoding scheme is used, or is there a different library I should use for file I/O for which we explicitly specify the encoding scheme?</p>
<pre class="lang-python prettyprint-override"><code>file = open("data.txt", "w")
</code></pre>
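<p>For reference (not from the original post): neither ASCII nor UTF-8 is guaranteed. In text mode the default is <code>locale.getpreferredencoding(False)</code> — UTF-8 on most Linux and macOS systems, historically a legacy code page such as cp1252 on Windows. Passing <code>encoding=</code> explicitly removes the ambiguity:</p>

```python
import locale
import os
import tempfile

# What open() uses when no encoding is given -- platform dependent
print(locale.getpreferredencoding(False))

path = os.path.join(tempfile.mkdtemp(), "data.txt")
with open(path, "w", encoding="utf-8") as f:  # explicit and portable
    f.write("über")
```
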
|
<python><python-3.x><file-io><io><output>
|
2023-03-14 01:24:45
| 1
| 4,700
|
Toothpick Anemone
|
75,728,204
| 9,599,508
|
Pythonic way to encode UTF-8 strings to ASCII/Hex and back again?
|
<p>I am using a black-box database to store objects from my python code which can only store ASCII characters. Let's assume this database cannot be swapped in for another, more friendly one. Unfortunately, the data I need to store is in UTF-8 and contains non-english characters, so simply converting my strings to ASCII and back leads to data loss.</p>
<p>My best idea for how to solve this is to convert my string to hex (which uses all ASCII-compliant characters), store it, and then upon retrieval convert the hex back to UTF-8.</p>
<p>I have tried varying combinations of encode and decode but none have given me the intended result.</p>
<p>Example of how I'd like this to work:</p>
<pre><code>original_string='ParabΓ©ns'
original_string.some_decode_function('hex') # now it looks like A4 B8 C7 etc
database.store(original_string)
</code></pre>
<p>Upon retrieval:</p>
<pre><code>retrieved_string=database.retrieve(storage_location) # now it looks like A4 B8 C7 etc
final-string=retrieved_string.decode('UTF-8) # now it looks like 'ParabΓ©ns'
</code></pre>
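<p>A sketch of the hex round trip with built-ins only (not from the original post): <code>bytes.hex()</code> produces a pure-ASCII string and <code>bytes.fromhex()</code> reverses it. Base64 is the usual alternative when the 2x size overhead of hex matters:</p>

```python
original = 'Parabéns'

# UTF-8 bytes -> lowercase hex digits, guaranteed ASCII-safe for storage
stored = original.encode('utf-8').hex()
assert stored.isascii()

# Retrieval: hex digits -> bytes -> original UTF-8 string, losslessly
roundtrip = bytes.fromhex(stored).decode('utf-8')
assert roundtrip == original
```
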
|
<python><character-encoding>
|
2023-03-14 00:59:37
| 1
| 302
|
Mr. T
|
75,728,104
| 10,807,390
|
How do I make a correlation matrix for each subset of a column of my pandas dataframe?
|
<p>Here's the head of my dataframe:</p>
<pre><code>
</code></pre>
<p>There are 100 different loggers and 10 different years. I want to subset the table by logger and find the Pearson correlation values of year with avg_max_temp, avg_min_temp, and tot_precipitation for each logger. Because there are 100 loggers, I'd expect the resulting dataframe to have 100 rows of 3 output columns, as well as a column for the logger ID.</p>
<p>Here's how I would do this analysis for all the data combined:</p>
<pre><code># Create a new dataframe with the correlation values
corr_df = pd.DataFrame(df.corr(method='pearson'))
corr_df.drop(['year', 'yield'], axis=1, inplace=True)
corr_df.drop(['avg_max_temp', 'avg_min_temp', 'tot_precipitation','yield'], axis=0, inplace=True)
# Print the dataframe
corr_df.head()
</code></pre>
<p>However, I can't figure out how to do this for each of the 100 dataloggers. Any help would be hugely appreciated. Thanks in advance!</p>
|
<python><pandas><dataframe><pearson-correlation>
|
2023-03-14 00:36:10
| 1
| 427
|
Tom
|
75,727,884
| 12,609,881
|
Manually create span with start and end timestamps - OpenTelemetry Python
|
<p>I am trying to trace the execution of a task in Celery through the Publisher -> Queue -> Consumer lifecycle. I am using OpenTelemetry 1.16.0 in Python 3.9.15 in order to instrument the application. I already know how to propagate a trace through the Queue as described in the answers to this Stack Overflow question.</p>
<p><a href="https://stackoverflow.com/questions/68530363/opentelemetry-python-how-to-instanciate-a-new-span-as-a-child-span-for-a-given">OpenTelemetry Python - How to instanciate a new span as a child span for a given trace_id</a></p>
<p>In addition to propagating the trace through the Queue, I want to create a span in the trace for the time spent on the Queue. Right now, that period of time just appears as a gap in the trace as viewed in Grafana Tempo. I should be able to manually create this span, assuming that I have the trace ID, the parent span ID, the timestamp when the task is added to the Queue, and the timestamp when the task is taken from the Queue.</p>
|
<python><open-telemetry>
|
2023-03-13 23:48:46
| 1
| 911
|
Matthew Thomas
|
75,727,856
| 4,764,787
|
Send SIGINT to subprocess and also see output
|
<p>In a celery task, I am launching a <code>subprocess</code> and need (1) to be able to send a <code>SIGINT</code> while (2) also having access to the subprocess's stdout and stderr. I am able to do one or the other, but not both simultaneously.</p>
<p>I can send <code>SIGINT</code> when the command in subprocess is given as a list, or as a string prepended with bash:</p>
<pre><code>proc = subprocess.Popen(
    [sys.executable, "-m", path.to.module, myarg1, myarg2, ...], # also works with f"/bin/bash -c {sys.executable} -m path.to.module {myarg1} {myarg2} ..."
    stdin=sys.stdin, stdout=PIPE, stderr=PIPE, shell=False
)
</code></pre>
<p>As far as I understand, both options are ultimately launching bash and it seems that only a running bash will react to <code>SIGINT</code>.</p>
<p>Conversely, running "python -m ..." means my program no longer reacts to the <code>SIGINT</code>, but on the other hand it allows me to start seeing the stdout/stderr and logging inside my python program:</p>
<pre><code>proc = subprocess.Popen(
    f"{sys.executable} -m path.to.module {myarg1} {myarg2} ...",
    stdin=sys.stdin, stdout=PIPE, stderr=PIPE, shell=False
)
</code></pre>
<p>With the above, now I'm no longer able to send <code>SIGINT</code> to my program but the logging is working.</p>
<p>How can I get both things to work at the same time? I've played around with <code>shell=True</code> and the various stdin/out/err tweaks, but no luck.</p>
<p>EDIT: With the top form (command as a list) and adding <code>signal.signal()</code> to my program in <code>path.to.module</code> I am able to both receive the SIGINT as well as see some output.</p>
|
<python>
|
2023-03-13 23:41:26
| 1
| 381
|
Jaime Salazar
|
75,727,762
| 145,129
|
Graphing O(n^2) Function Using Quadratic Algorithm Returns Linear Graph instead of Standard O(n^2) Graph
|
<p>I'm trying to generate a simple graph to show O(n^2) over a thousand items. The code seems to work just fine, however when I plot it using matplotlib, the graph comes out in a linear fashion, meaning the graph of y seems to equal x. I would expect a nice O(n^2) graph like this:</p>
<p><a href="https://www.section.io/engineering-education/big-o-notation/n-square.png" rel="nofollow noreferrer">O(n^2) graph</a></p>
<p>I am running my loop over 10,000 items, and still it's linear?</p>
<p>Here is what I'm running:</p>
<pre><code>import time
import matplotlib.pyplot as plt
counter = 0
start_time = time.perf_counter()
N = 10000
timings = []
x = []
y = []
for i in range(0, N):
    for j in range(0, N):
        # do something
        pass
    step_time = time.perf_counter() - start_time
    timings.append([counter, round(step_time, 2)])
    counter += 1

# NOT efficient code!
for t in timings:
    x.append(t[0])
    y.append(t[1])
plt.xlabel('number of points')
plt.ylabel('time')
plt.plot(x, y)
plt.title('Quadratic Graph!')
plt.show()
</code></pre>
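<p>One likely reason for the straight line: <code>step_time</code> is the <em>cumulative</em> elapsed time, and each outer iteration adds a near-constant O(N) chunk of work, so the cumulative curve grows linearly in <code>i</code>. To expose quadratic behaviour, time the whole double loop for several values of <code>n</code> instead, a sketch:</p>

```python
import time

def work(n):
    """Time the full n x n double loop (total work is O(n^2))."""
    start = time.perf_counter()
    for i in range(n):
        for j in range(n):
            pass
    return time.perf_counter() - start

sizes = [500, 1000, 2000]
times = [work(n) for n in sizes]
print(times)  # doubling n should roughly quadruple the time, modulo timer noise
```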
|
<python><algorithm><big-o>
|
2023-03-13 23:21:05
| 1
| 9,233
|
KingFish
|
75,727,688
| 2,312,801
|
Pandas findall re.IGNORECASE doesn't work
|
<p>I have a list of keywords:</p>
<pre><code>keywords = ['fake', 'hoax', 'misleading', etc.]
</code></pre>
<p>I'd like to search the <code>text</code> column of DataFrame <code>df1</code> for the above keywords and return rows containing these keywords (exact match), both in uppercase and lowercase (case-insensitive).</p>
<p>I tried the following:</p>
<pre><code>df2 = df1[df1.text.apply(lambda x: any(i for i in re.findall('\w+', x, flags=re.IGNORECASE) if i in keywords))]
df2
</code></pre>
<p>The above code returns all rows with the specified keywords, BUT it doesn't include the uppercase words (e.g., it returns text containing "hoax", but not "HOAX").</p>
<p>Can someone please help me with this?</p>
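<p>For reference, <code>re.IGNORECASE</code> only affects the pattern match; the membership test against <code>keywords</code> is still case-sensitive. A sketch that normalises the tokens before comparing:</p>

```python
import re

keywords = ['fake', 'hoax', 'misleading']

def contains_keyword(text):
    # re.findall('\\w+', ...) returns tokens in their original case;
    # lowercase each token so the comparison to `keywords` is case-insensitive.
    return any(tok.lower() in keywords for tok in re.findall(r'\w+', text))

assert contains_keyword("This is a HOAX")
assert contains_keyword("a hoax indeed")
assert not contains_keyword("all genuine")
```

<p>With pandas, something like <code>df2 = df1[df1.text.apply(contains_keyword)]</code> should then keep the case-insensitive matches.</p>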
|
<python><pandas><findall><keyword-search><ignore-case>
|
2023-03-13 23:05:37
| 1
| 2,459
|
mOna
|
75,727,685
| 781,938
|
How do I get a list of table-like objects visible to duckdb in a python session?
|
<p>I like how duckdb lets me query DataFrames as if they were sql tables:</p>
<pre class="lang-py prettyprint-override"><code>df = pandas.read_parquet("my_data.parquet")
con.query("select * from df limit 10").fetch_df()
</code></pre>
<p>I also like how duckdb has metadata commands like <code>SHOW TABLES;</code>, like a real database. However, <code>SHOW TABLES;</code> doesn't show pandas DataFrames or other table-like objects.</p>
<p>my question is: does duckdb offer something like <code>SHOW TABLES;</code> that includes both (1) real database tables and (2) table-like objects (e.g. pandas DataFrames) and their schemas?</p>
<p>Thanks!</p>
|
<python><pandas><duckdb>
|
2023-03-13 23:05:26
| 2
| 6,130
|
william_grisaitis
|
75,727,555
| 10,807,390
|
How can I find the year that has the highest value for each subset of a large dataframe using Pandas?
|
<p>I have a dataframe with 10 years of humidity and temperature data for each of 100 dataloggers. Here is the head:</p>
<pre><code>
</code></pre>
<p>The loggers are labeled as L1 through L100.</p>
<p>My end goal is to have a dataframe with three columns: year, the count of dataloggers where that year had the highest average humidity, and the count of dataloggers where that year had the highest average temperature.</p>
<p>I thought that this code would work:</p>
<pre><code>import pandas as pd
df_years = pd.DataFrame({'year': list(range(1985, 1996))})
traits = ['avg_humidity', 'avg_temp']
for trait in traits:
    df_logger = df.groupby('logger')['year', trait].max().reset_index()
    df_input = df_logger['year'].value_counts().reset_index()
    df_input.columns = ['year', f'count_{trait}']
    df_years = df_years.merge(df_input, how='left', on='year').fillna(0)
</code></pre>
<p>But this output is leaving me with the same values for both traits, making me think I've done things wrong. Even when I just look at one trait:</p>
<pre><code>df_logger = df.groupby('logger')['year','avg_humidity'].max().reset_index()
df_input = df_logger['year'].value_counts().reset_index()
df_input.columns = ['year', 'count']
df_merged = df_years.merge(df_input, how='left', on='year').fillna(0)
</code></pre>
<p>The data seem incorrect. I think my whole process is wrong here. Any help would be massively appreciated. Thanks so much in advance.</p>
|
<python><pandas>
|
2023-03-13 22:40:53
| 1
| 427
|
Tom
|
75,727,523
| 3,380,902
|
format string to json / dict
|
<p>I have a string that has the following contents:</p>
<pre><code>print("json str:", type(json_str), json_str)
json str: <class 'str'> json
{
"word": [
"table name"
]
}
</code></pre>
<p>I get an exception when I attempt to run:</p>
<pre><code>import json
json.loads(json_str).get('word')
</code></pre>
<p>How do I format the string, such that I can call <code>.get</code> method?</p>
<pre><code>Exception Expecting value: line 1 column 1 (char 0)
</code></pre>
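<p>Judging from the printed repr, the string carries a literal <code>json</code> prefix before the payload, which is why parsing fails at line 1, column 1. A sketch that slices from the first brace (assuming the payload is a single object):</p>

```python
import json

# Reproduce the observed shape: a stray "json" prefix before the JSON body
json_str = 'json\n{\n"word": [\n"table name"\n]\n}'

payload = json_str[json_str.index('{'):]   # drop anything before the first brace
data = json.loads(payload)
print(data.get('word'))  # ['table name']
```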
|
<python><json><string><dictionary>
|
2023-03-13 22:36:19
| 1
| 2,022
|
kms
|
75,727,004
| 13,215,988
|
How do I add widgets to the top left of Pyside Qt layout instead of having the items centered and evenly spaced?
|
<p>I have a PySide6 Qt screen that looks like this:<br />
<a href="https://i.sstatic.net/XUwKq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XUwKq.png" alt="enter image description here" /></a></p>
<p>The problem is that I want the buttons "0 test1.rtf," "1 test2.rtf," "2 test3.rtf," etc... to populate at the top left of the scroll area without dynamic spacing in between them which incorrectly makes the buttons fill up the entire area.</p>
<p>See how the spacing between the buttons dynamically changes when I add buttons:<br />
<a href="https://i.sstatic.net/j0lHz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j0lHz.png" alt="enter image description here" /></a></p>
<p>I want it to look like this:<br />
<a href="https://i.sstatic.net/odcVi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/odcVi.png" alt="enter image description here" /></a></p>
<p>I have the following code for my PySide6 Qt screen:</p>
<pre><code>wid = QtWidgets.QWidget()
grid = QtWidgets.QVBoxLayout(wid)
# setting the inner widget and layout
grid_inner = QtWidgets.QVBoxLayout(wid)
wid_inner = QtWidgets.QWidget(wid)
wid_inner.setLayout(grid_inner)
# add the inner widget to the outer layout
grid.addWidget(wid_inner)
# add tab frame to widget
wid_inner.tab = QtWidgets.QTabWidget(wid_inner)
grid_inner.addWidget(wid_inner.tab)
# create tab
new_tab = QtWidgets.QScrollArea(wid_inner.tab)
grid_tab_1 = QtWidgets.QVBoxLayout(new_tab)
new_tab.tab_name_private = "test1"
wid_inner.tab.addTab(new_tab, "test1")
for idx, tx in enumerate(self.log):
    fname_array = (tx.file_name).split('/')
    btn = QtWidgets.QPushButton(str(idx) + ' ' + fname_array[(len(fname_array)) - 1])
    btn.clicked.connect(lambda checked=False, x=idx: self.display_transaction(x))
    grid_tab_1.addWidget(btn)

# create tab 2
new_tab2 = QtWidgets.QScrollArea(wid_inner.tab)
grid_tab_2 = QtWidgets.QVBoxLayout(new_tab2)
wid_inner.tab.addTab(new_tab2, "test2")
for idx, tx in enumerate(self.log):
    fname_array = (tx.file_name).split('/')
    btn = QtWidgets.QPushButton(str(idx) + ' ' + fname_array[(len(fname_array)) - 1])
    btn.clicked.connect(lambda checked=False, x=idx: self.display_transaction(x))
    grid_tab_2.addWidget(btn)
continue_btn = QtWidgets.QPushButton("Ok")
self.layout = QtWidgets.QVBoxLayout()
self.layout.addWidget(wid)
self.layout.addWidget(continue_btn)
self.setLayout(self.layout)
continue_btn.clicked.connect(self.continue_to_main)
</code></pre>
<p>How do I alter my code to get the screen to look the way I want?</p>
|
<python><qt><pyside2><pyside6>
|
2023-03-13 21:21:46
| 0
| 1,212
|
ChristianOConnor
|
75,726,965
| 2,725,742
|
Why does my pySerial data get corrupted mid-stream?
|
<p>This works wonderfully on Windows, with Python 3.10. Incoming data has a delimiter, this delivers one message at a time, life is good...until I test on Linux. Specifically Ubuntu 18.04.</p>
<p>On Linux, it receives a handful of messages successfully, and then it gets garbage, non-ASCII values. Oddly it often turns to garbage right after the delimiter, which seems like it is a hint, but I am not sure how. I have tried changing the decode to 'utf-8', 'utf-16', and a variety of other things but observe the same behavior.</p>
<pre><code>import serial, time
incoming= ''
delim = 'xyz'
ser = serial.Serial(port=port, baudrate=baud, timeout=timeout)
while True:
    while ser.in_waiting > 0:
        incoming += ser.read(ser.in_waiting).decode('ascii')
    while delim in incoming:
        deliver, incoming = incoming.split(delim, 1)
        if deliver:
            print(deliver)
    time.sleep(0.1)
</code></pre>
<p>Any suggestions on what the issue could be?</p>
|
<python><serial-port><embedded-linux><pyserial>
|
2023-03-13 21:16:42
| 1
| 448
|
fm_user8
|
75,726,959
| 11,048,519
|
How to reroute requests to a different URL/endpoint in FastAPI?
|
<p>I am trying to write a middleware in my FastAPI application, so that requests coming to endpoints matching a particular format will be rerouted to a different URL, but I am unable to find a way to do that since <code>request.url</code> is read-only.</p>
<p>I am also looking for a way to update request headers before rerouting.</p>
<p>Are these things even possible in FastAPI?</p>
<p>Redirection is the best I could do so far:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import Request
from fastapi.responses import RedirectResponse
@app.middleware("http")
async def redirect_middleware(request: Request, call_next):
    if matches_certain_format(request.url.path):
        new_url = create_target_url(request.url.path)
        return RedirectResponse(url=new_url)
</code></pre>
|
<python><fastapi><middleware><starlette><reroute>
|
2023-03-13 21:15:14
| 1
| 868
|
Aniket Tiratkar
|
75,726,867
| 19,299,757
|
interact with file open window in headless chrome
|
<p>I have an application which I am automating using headless Chrome with Selenium in Python. One of the features of this application is to upload files and send them into the application for processing.
For this there is a "Drag and Drop" button in the GUI which, when clicked, opens the file "Open" window. From there I have to select the file and press the "Open" button, which then uploads it to the application.
I used AutoIT to send the file name in the File Open dialog box, which works great.
When I try to use headless mode, it doesn't work.</p>
<p>I also tried pyautogui and this doesn't seem to work in headless mode as well (although I saw some notes that it can be used in headless browsers as well). I used the below commands in pyautogui to send file name in file open dialog window.</p>
<pre><code>driver.get('URL')
myfile = "full path to myfile.pdf"
pyautogui.hotkey('alt', 'o')
pyautogui.PAUSE = 2
pyautogui.typewrite(myfile)
pyautogui.press('enter')
</code></pre>
<p>When executed, this opens the URL, but instead of sending the file name to the file open dialog, it types the file name into the PyCharm editor. Not sure what mistake I am making here.</p>
<p>Can pyautogui can be used in headless mode at all? If yes, can someone list the commands to enter file name in file open dialog window?</p>
<p>The same code works great without headless mode.</p>
<p>Thanks much.</p>
|
<python><selenium-webdriver><pyautogui>
|
2023-03-13 21:01:57
| 1
| 433
|
Ram
|
75,726,722
| 7,984,318
|
How to append rows in pandas if a column value is not in an external list
|
<p>I have a DataFrame you can have it by running this code:</p>
<pre><code>df = """
Facility Quarter Price
F1 Q1 2.1
F1 Q2 1.2
"""
df= pd.read_csv(StringIO(df.strip()), sep='\s\s+', engine='python')
</code></pre>
<p>And a list:</p>
<pre><code>quarters=['Q1','Q2','Q3','Q4']
</code></pre>
<p>The logic is: for each quarter in the <code>quarters</code> list that has no row in the Quarter column, append a row with the Price set to 0.</p>
<p>The output should looks like:</p>
<pre><code>Facility Quarter Price
F1 Q1 2.1
F1 Q2 1.2
F1 Q3 0
F1 Q4 0
</code></pre>
|
<python><python-3.x><pandas><dataframe>
|
2023-03-13 20:41:29
| 3
| 4,094
|
William
|
75,726,719
| 47,152
|
Confused by python async for loop--executes sequentially
|
<p>I am new to asyncio and trying to understand basic for loop behavior. The code below executes sequentially, but my naive assumption was that while the sleeps are occurring, other items could be fetched via the for loop and start processing. But that doesn't seem to happen.</p>
<p>For example, while the code is "doing something else with 1" it seems like it could fetch the next item from the loop and start working on it while waiting for the sleep to end on item 1. But when I run, it executes sequentially with pauses for the sleeps like a non-async program.</p>
<p>What am I missing here?</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
class CustomIterator():
    def __init__(self):
        self.counter = 0

    def __aiter__(self):
        return self

    async def __anext__(self):
        if self.counter >= 3:
            raise StopAsyncIteration
        await asyncio.sleep(1)
        self.counter += 1
        return self.counter


async def f(item):
    print(f"doing something with {item}")
    await asyncio.sleep(3)


async def f2(item):
    print(f"doing something else with {item}")
    await asyncio.sleep(2)


async def do_async_stuff():
    async for item in CustomIterator():
        print(f"got {item}")
        await f(item)
        await f2(item)


if __name__ == '__main__':
    asyncio.run(do_async_stuff())
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>got 1
doing something with 1
doing something else with 1
got 2
doing something with 2
doing something else with 2
got 3
doing something with 3
doing something else with 3
</code></pre>
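<p>The likely missing piece: <code>await</code> only yields control to <em>other scheduled tasks</em>, and this loop never schedules any, so everything runs back to back by construction. A sketch of the difference using <code>asyncio.gather</code> (shorter sleeps so it runs quickly):</p>

```python
import asyncio
import time

async def f(item):
    await asyncio.sleep(0.3)
    return f"f({item})"

async def f2(item):
    await asyncio.sleep(0.2)
    return f"f2({item})"

async def main():
    # Sequential: each await suspends main() until that one coroutine is done.
    start = time.perf_counter()
    await f(1)
    await f2(1)
    seq = time.perf_counter() - start          # ~0.5 s

    # Concurrent: schedule both first, then await them together.
    start = time.perf_counter()
    results = await asyncio.gather(f(2), f2(2))
    conc = time.perf_counter() - start         # ~0.3 s, the longer of the two
    return results, seq, conc

results, seq, conc = asyncio.run(main())
print(results)
assert conc < seq
```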
|
<python><python-asyncio>
|
2023-03-13 20:41:00
| 4
| 1,465
|
chacmool
|
75,726,487
| 18,086,775
|
Unable to reference an Excel macro file while running a Jupyter notebook from another notebook
|
<p>I'm trying to run a jupyter notebook file called <code>Seg_pivot.ipynb</code> from <code>main.ipynb</code>. The main.ipynb has one line of code: <code>%run "fi/Seg_pivot.ipynb"</code></p>
<p>The current directory structure looks like the following: <a href="https://i.sstatic.net/2emDi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2emDi.png" alt="enter image description here" /></a></p>
<p>The <code>Seg_pivot.ipynb</code> runs a module in the MyMacro, used to style the files in the result folder. When I run the <code>Seg_pivot.ipynb</code> everything works. But when I run the <code>main.ipynb</code>, it gives a <strong>FileNotFound error: No such file: 'mymacro.xlsm'</strong></p>
<p>In the <code>Seg_pivot.ipynb</code> this is how I call the macro:</p>
<pre><code>import xlwings as xw
wb = xw.Book("MyMacro.xlsm")
s_macro = wb.macro("Module2.styleF")
s_macro()
</code></pre>
|
<python><excel><vba><jupyter-notebook>
|
2023-03-13 20:11:35
| 1
| 379
|
M J
|
75,726,455
| 4,534,466
|
Disable ssl verification on a third party module using HTTPX
|
<p>I'm using a module that uses <code>httpx</code> to make a request. Because of my company's network policy, I get a <code>CERTIFICATE_VERIFY_FAILED</code> every time I try to run my code. The data I'm sending and receiving in the request is not sensitive so I want to disable the SSL verification on the module.</p>
<p>If I were to write the request I could use a session as such:</p>
<pre><code>client = httpx.Client(**kwargs, verify=False)
</code></pre>
<p>But the request is inside the module.</p>
<p>Blender gave an excellent response in <a href="https://stackoverflow.com/questions/15445981/how-do-i-disable-the-security-certificate-check-in-python-requests">this Stack Overflow answer</a>, where they wrote a context manager with the code below, but that code does not work for the <code>httpx</code> module:</p>
<pre><code>import warnings
import contextlib
import requests
from urllib3.exceptions import InsecureRequestWarning
old_merge_environment_settings = requests.Session.merge_environment_settings

@contextlib.contextmanager
def no_ssl_verification():
    opened_adapters = set()

    def merge_environment_settings(self, url, proxies, stream, verify, cert):
        # Verification happens only once per connection so we need to close
        # all the opened adapters once we're done. Otherwise, the effects of
        # verify=False persist beyond the end of this context manager.
        opened_adapters.add(self.get_adapter(url))
        settings = old_merge_environment_settings(self, url, proxies, stream, verify, cert)
        settings['verify'] = False
        return settings

    requests.Session.merge_environment_settings = merge_environment_settings

    try:
        with warnings.catch_warnings():
            warnings.simplefilter('ignore', InsecureRequestWarning)
            yield
    finally:
        requests.Session.merge_environment_settings = old_merge_environment_settings
        for adapter in opened_adapters:
            try:
                adapter.close()
            except:
                pass
</code></pre>
<p>Is there a way I could replicate the same behavior inside a request made through httpx?</p>
|
<python><ssl><httpx>
|
2023-03-13 20:07:27
| 1
| 1,530
|
João Areias
|
75,726,452
| 10,335
|
Can I force the install of a package that requires a newer Python version than the one I have installed?
|
<p>I know it isn't a correct thing to do, but I would like to try to install package that requires Python 3.8, but my installed Python is 3.7.</p>
<p>Is it possible using pip? Or I must clone the repository and change the <code>setup.py</code>?</p>
|
<python><pip>
|
2023-03-13 20:07:22
| 1
| 40,291
|
neves
|
75,726,343
| 119,527
|
Reconciling the differences between BinaryIO and RawIOBase
|
<p>My modest goal: Correctly annotate a function argument as an object supporting <code>read()</code> which returns <code>bytes</code>.</p>
<ul>
<li>If I use <a href="https://docs.python.org/3/library/typing.html#typing.BinaryIO" rel="nofollow noreferrer"><code>typing.BinaryIO</code></a>, I cannot accept a <code>serial.Serial</code> object (which derives from <code>io.RawIOBase</code>).</li>
<li>If I use <a href="https://docs.python.org/3/library/io.html#io.RawIOBase" rel="nofollow noreferrer"><code>io.RawIOBase</code></a>, I cannot accept a <code>BinaryIO</code> object (e.g. <a href="https://github.com/python/typeshed/blob/ad9c7c1d0c842ac0cb3b010ecdd9402aac5f1bd3/stdlib/io.pyi#L157-L158" rel="nofollow noreferrer"><code>sys.stdin.buffer</code></a>)</li>
</ul>
<pre><code>import io
import serial
from typing import BinaryIO, Optional
import sys
def foo_rawiobase(infile: io.RawIOBase) -> Optional[bytes]:
    return infile.read(42)

def foo_binaryio(infile: BinaryIO) -> bytes:
    return infile.read(42)

def foo_serial(dev: serial.Serial) -> None:
    # error: Argument 1 to "foo_binaryio" has incompatible type "Serial"; expected "BinaryIO" [arg-type]
    foo_binaryio(dev)
    # ok
    foo_rawiobase(dev)

def foo_file(f: BinaryIO) -> None:
    # ok
    foo_binaryio(f)
    # error: Argument 1 to "foo_rawiobase" has incompatible type "BinaryIO"; expected "RawIOBase" [arg-type]
    foo_rawiobase(f)

def foo_stdin() -> None:
    foo_file(sys.stdin.buffer)
</code></pre>
<p><code>typeshed</code> definitions:</p>
<ul>
<li><a href="https://github.com/python/typeshed/blob/ad9c7c1d0c842ac0cb3b010ecdd9402aac5f1bd3/stdlib/io.pyi#L78-L82" rel="nofollow noreferrer"><code>io.RawIOBase</code></a></li>
<li><a href="https://github.com/python/typeshed/blob/ad9c7c1d0c842ac0cb3b010ecdd9402aac5f1bd3/stdlib/typing.pyi#L704-L706" rel="nofollow noreferrer"><code>typing.BinaryIO</code></a></li>
</ul>
<p>How do I properly handle both cases?</p>
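<p>One way out, if a structural type is acceptable: a <code>Protocol</code> describing just "has a <code>read()</code> returning bytes", which both hierarchies satisfy. A sketch (the protocol name is made up):</p>

```python
import io
from typing import Optional, Protocol, runtime_checkable

@runtime_checkable
class SupportsReadBytes(Protocol):
    """Structural type: anything with read(n) -> bytes (or None at EOF/no-data)."""
    def read(self, size: int = ..., /) -> Optional[bytes]: ...

def foo(infile: SupportsReadBytes) -> Optional[bytes]:
    return infile.read(42)

# A BinaryIO-style object satisfies the protocol...
assert isinstance(io.BytesIO(b"abc"), SupportsReadBytes)

# ...and so does a RawIOBase subclass (read() is inherited from RawIOBase).
class OneByteRaw(io.RawIOBase):
    def readinto(self, b):
        b[:1] = b"z"
        return 1

assert isinstance(OneByteRaw(), SupportsReadBytes)
print(foo(io.BytesIO(b"hello")))  # b'hello'
```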
|
<python><python-typing>
|
2023-03-13 19:52:20
| 2
| 138,383
|
Jonathon Reinhart
|
75,726,225
| 217,332
|
Type hinting decorator with bounded arguments
|
<p>An example of what I'd like to do is as follows:</p>
<pre><code>@dataclass
class Arg1:
    x = field()

@dataclass
class Arg2:
    x = field()

@dataclass
class Obj:
    y = field()

    T = TypeVar("T")
    R = TypeVar("R", bound=(Arg1 | Arg2))
    C = TypeVar("C", bound=Callable[[Self, R], T])

    @staticmethod
    def deco(f: C) -> C:
        def wrapper(self: Self, arg: Obj.R) -> Obj.T:
            print(arg.x)
            print(self.y)
            return f(self, arg)
        return wrapper

    @deco
    def f1(self: Self, arg: Arg1) -> str:
        return "ok"

    @deco
    def f2(self: Self, arg: Arg2) -> int:
        return 3
</code></pre>
<p>I'm getting the following two errors from mypy:</p>
<pre><code>test.py:22: error: A function returning TypeVar should receive at least one argument containing the same TypeVar [type-var]
test.py:26: error: Incompatible return value type (got "Callable[[Self, Obj.R], Obj.T]", expected "C") [return-value]
</code></pre>
<p>If I remove <code>T</code> and replace uses of it with <code>Any</code>, I can fix the first error (at the expense of weakening my type assertions) but still get the second error. This one is particularly puzzling to me because mypy appears to be giving me back the definition of <code>C</code> in complaining that I'm not returning a <code>C</code>. What should I be doing to convince it of the type equivalence here?</p>
<p>Regarding the first error, I expect that I could solve it by unpacking the definition of <code>C</code> in <code>def deco(f: C) -> C</code> but then I would lose the restriction that the type of the argument being passed to <code>f</code> is the same as the type of the argument to the return value of <code>deco</code>. Is there anything I can do about this?</p>
<p>Edit: after discussion in comments I updated the definition of deco to be <code>def deco(f: Callable[[Self, R], T]) -> Callable[[Self, R], T]:</code> and now get the following error instead:</p>
<pre><code>test.py:21: error: Static methods cannot use Self type [misc]
</code></pre>
<p>This seems like progress but I'm not sure what to do about this either.</p>
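<p>A sketch of one direction consistent with those errors: parameterise the decorator over <code>R</code> and <code>T</code> directly instead of binding a TypeVar to a <code>Callable</code>, and keep it at module level to sidestep the <code>Self</code>-in-staticmethod complaint. The concrete field types and defaults below are made up so the sketch runs:</p>

```python
from dataclasses import dataclass
from typing import Callable, TypeVar, Union

@dataclass
class Arg1:
    x: int = 0

@dataclass
class Arg2:
    x: int = 1

T = TypeVar("T")
R = TypeVar("R", bound=Union[Arg1, Arg2])

def deco(f: Callable[["Obj", R], T]) -> Callable[["Obj", R], T]:
    # Module-level decorator: the argument type of the wrapped and returned
    # callables share the same R, and no Self appears in a staticmethod.
    def wrapper(self: "Obj", arg: R) -> T:
        print(arg.x)
        print(self.y)
        return f(self, arg)
    return wrapper

@dataclass
class Obj:
    y: int = 7

    @deco
    def f1(self, arg: Arg1) -> str:
        return "ok"

    @deco
    def f2(self, arg: Arg2) -> int:
        return 3

o = Obj()
assert o.f1(Arg1()) == "ok"
assert o.f2(Arg2()) == 3
```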
|
<python><python-decorators><mypy><python-typing><python-3.11>
|
2023-03-13 19:37:26
| 2
| 83,780
|
danben
|
75,725,895
| 2,774,589
|
Changing `ylim` on `cartopy` with different `central_longitude`
|
<p>I want to create a map on Mercator projection from -60S to 60N but with -160W as the central longitude.</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import cartopy.crs as ccrs
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1,
                     projection=ccrs.PlateCarree(central_longitude=0)
                     )
ax.set_extent([-180,180,-60,60])
ax.coastlines(resolution='110m')
ax.gridlines(draw_labels=True)
</code></pre>
<p>Returns the expected result for <code>central_longitude=0</code></p>
<p><a href="https://i.sstatic.net/RCZiX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RCZiX.png" alt="enter image description here" /></a></p>
<p>But if <code>central_longitude=-60</code></p>
<p>It returns</p>
<p><a href="https://i.sstatic.net/Wdqwo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wdqwo.png" alt="enter image description here" /></a></p>
<p>My questions are:</p>
<ol>
<li>Why <code>cartopy</code> is behaving like this?</li>
<li>How can I solve this issue?</li>
</ol>
|
<python><matplotlib><gis><cartopy>
|
2023-03-13 18:56:09
| 1
| 1,660
|
iury simoes-sousa
|
75,725,842
| 9,363,441
|
Continuously changing LED parameters on a Raspberry Pi via remote access
|
<p>I have a Raspberry Pi 4 with an RGB LED attached. This LED has to blink forever with a certain frequency and color. I have this Python service:</p>
<pre><code>#libraries
import RPi.GPIO as GPIO
from time import sleep
import os
import schedule
import time
#disable warnings (optional)
GPIO.setwarnings(False)
#Select GPIO Mode
GPIO.setmode(GPIO.BCM)
#set red,green and blue pins
redPin = 12
greenPin = 19
bluePin = 13
isLight = False
#set pins as outputs
GPIO.setup(redPin,GPIO.OUT)
GPIO.setup(greenPin,GPIO.OUT)
GPIO.setup(bluePin,GPIO.OUT)
def turnOff():
    GPIO.output(redPin,GPIO.HIGH)
    GPIO.output(greenPin,GPIO.HIGH)
    GPIO.output(bluePin,GPIO.HIGH)

def white():
    GPIO.output(redPin,GPIO.LOW)
    GPIO.output(greenPin,GPIO.LOW)
    GPIO.output(bluePin,GPIO.LOW)

def red():
    GPIO.output(redPin,GPIO.LOW)
    GPIO.output(greenPin,GPIO.HIGH)
    GPIO.output(bluePin,GPIO.HIGH)

def green():
    GPIO.output(redPin,GPIO.HIGH)
    GPIO.output(greenPin,GPIO.LOW)
    GPIO.output(bluePin,GPIO.HIGH)

def blue():
    GPIO.output(redPin,GPIO.HIGH)
    GPIO.output(greenPin,GPIO.HIGH)
    GPIO.output(bluePin,GPIO.LOW)

if __name__ == '__main__':
    turnOff()
    isLight = False
    while True:
        led_info = os.getenv('LED_INFO')
        print(os.environ)
        color = led_info[-1]
        frequency = led_info[:-1]
        time.sleep(int(frequency) / 1000)
        if isLight:
            isLight = False
            turnOff()
        else:
            isLight = True
            if color == 'R':
                red()
            elif color == 'G':
                green()
            elif color == 'B':
                blue()
</code></pre>
<p>As you can see, here I get the info from an environment variable and apply it in the loop. But what I'm trying to do is change the blinking frequency and color at runtime, for example from a mobile device. I don't understand which solution would be better:</p>
<ul>
<li>open socket connection</li>
<li>open ssh connection</li>
</ul>
<p>In both of these solutions I would then (as I understand it) edit the environment variable, which is then used in the loop. The problem with this approach is that when I change something at runtime I get an exception saying the variable is broken. I then thought about a local project configuration file, e.g. via <code>dotenv</code>, but I'm not sure I'm on the right track.</p>
<h2>update</h2>
<pre><code>import RPi.GPIO as GPIO
import os
import time
from time import sleep
from http.server import BaseHTTPRequestHandler, HTTPServer
import schedule
host_name = '' # Change this to your Raspberry Pi IP address
host_port = 8000
# GPIO setup
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
GPIO.setup(18, GPIO.OUT)
# set red,green and blue pins
redPin = 12
greenPin = 19
bluePin = 13
# set pins as outputs
GPIO.setup(redPin, GPIO.OUT)
GPIO.setup(greenPin, GPIO.OUT)
GPIO.setup(bluePin, GPIO.OUT)
class MyServer(BaseHTTPRequestHandler):
    isLight = False
    """ A special implementation of BaseHTTPRequestHandler for reading data from
    and controlling GPIO of a Raspberry Pi
    """

    def do_HEAD(self):
        """ do_HEAD() can be tested using curl command
        'curl -I http://server-ip-address:port'
        """
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()

    def _redirect(self, path):
        self.send_response(303)
        self.send_header('Content-type', 'text/html')
        self.send_header('Location', path)
        self.end_headers()

    def turnOff(self):
        GPIO.output(redPin, GPIO.HIGH)
        GPIO.output(greenPin, GPIO.HIGH)
        GPIO.output(bluePin, GPIO.HIGH)

    def white(self):
        GPIO.output(redPin, GPIO.LOW)
        GPIO.output(greenPin, GPIO.LOW)
        GPIO.output(bluePin, GPIO.LOW)

    def red(self):
        GPIO.output(redPin, GPIO.LOW)
        GPIO.output(greenPin, GPIO.HIGH)
        GPIO.output(bluePin, GPIO.HIGH)

    def green(self):
        GPIO.output(redPin, GPIO.HIGH)
        GPIO.output(greenPin, GPIO.LOW)
        GPIO.output(bluePin, GPIO.HIGH)

    def blue(self):
        GPIO.output(redPin, GPIO.HIGH)
        GPIO.output(greenPin, GPIO.HIGH)
        GPIO.output(bluePin, GPIO.LOW)

    def runLoop(self, color, frequency):
        while True:
            time.sleep(int(frequency) / 1000)
            if self.isLight:
                self.isLight = False
                self.turnOff()
            else:
                self.isLight = True
                if color == 'R':
                    self.red()
                elif color == 'G':
                    self.green()
                elif color == 'B':
                    self.blue()

    def do_GET(self):
        """ do_GET() can be tested using curl command
        'curl http://server-ip-address:port'
        """
        html = '''
            <html>
            <body style="width:960px; margin: 20px auto;">
            <h1>Welcome to my Raspberry Pi</h1>
            <p>Current GPU temperature is {}</p>
            <form action="/" method="POST">
                Turn LED :
                <input type="submit" name="submit" value="On">
                <input type="submit" name="submit" value="Off">
                <input type="text" name="submit">
            </form>
            </body>
            </html>
        '''
        temp = os.popen("/opt/vc/bin/vcgencmd measure_temp").read()
        self.do_HEAD()
        self.wfile.write(html.format(temp[5:]).encode("utf-8"))

    def do_POST(self):
        """ do_POST() can be tested using curl command
        'curl -d "submit=On" http://server-ip-address:port'
        """
        content_length = int(self.headers['Content-Length'])  # Get the size of data
        post_data = self.rfile.read(content_length).decode("utf-8")  # Get the data
        post_data = post_data.split("=")[1]  # Only keep the value
        color = post_data[-1]
        frequency = post_data[:-1]
        self.runLoop(color, frequency)
        print("LED is {}".format(post_data))
        self._redirect('/')  # Redirect back to the root url


if __name__ == '__main__':
    http_server = HTTPServer((host_name, host_port), MyServer)
    print("Server Starts - %s:%s" % (host_name, host_port))

    try:
        http_server.serve_forever()
    except KeyboardInterrupt:
        http_server.server_close()
</code></pre>
<p>Here I can pass a value for the color/blink frequency only once; after that, the server is stuck.</p>
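<p>For reference, a minimal GPIO-free sketch of one assumed approach (not necessarily the intended design): run the blink loop on a daemon thread so <code>do_POST</code> can return, and stop any previous loop before starting a new one. The <code>on_tick</code> callback is a hypothetical stand-in for the GPIO pin writes.</p>

```python
import threading

class Blinker:
    def __init__(self):
        self._stop = threading.Event()
        self._thread = None

    def start(self, color, frequency_ms, on_tick):
        """Begin blinking on a background thread; returns immediately."""
        self.stop()  # cancel any previous blink loop first
        stop = self._stop = threading.Event()

        def loop():
            lit = False
            # Event.wait doubles as an interruptible sleep
            while not stop.wait(frequency_ms / 1000.0):
                lit = not lit
                on_tick(color, lit)  # e.g. drive the GPIO pins here

        self._thread = threading.Thread(target=loop, daemon=True)
        self._thread.start()

    def stop(self):
        if self._thread is not None:
            self._stop.set()
            self._thread.join()
            self._thread = None
```

<p>With this shape, <code>do_POST</code> would call something like <code>blinker.start(color, int(frequency), drive_pins)</code> and then redirect immediately, instead of blocking in <code>while True</code>.</p>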
|
<python><linux><raspberry-pi>
|
2023-03-13 18:49:29
| 0
| 2,187
|
Andrew
|
75,725,836
| 18,749,472
|
"The request's session was deleted before the request completed" on heavier requests
|
<p>In my Django project I am frequently coming across the error:</p>
<p><em>The request's session was deleted before the request completed. The user may have logged out in a concurrent request, for example.</em></p>
<p>This happens across a number of views and because of a number of different processes within my website. I have noticed that this error occurs on heavier requests for my local server to process, such as loading more images or returning more data from the database when calling the view associated with the current page the user is requesting.</p>
<p>This issue does get resolved when configuring the session engine in <em>settings.py</em>:</p>
<pre><code>SESSION_ENGINE = "django.contrib.sessions.backends.cache"
</code></pre>
<p>but this causes the user's session to be cleared on every server restart, which is a pain during development, so another option is needed.</p>
<p>I primarily use Python's sqlite3 package to execute most queries, which could be a factor contributing to this error.</p>
<p><em>sqlite3 connection settings:</em></p>
<pre><code>connection = sqlite3.connect(r"C:\Users\logan..\....\db.sqlite3", check_same_thread=False)
</code></pre>
<p><em>django database settings in settings.py:</em></p>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
}
}
</code></pre>
<p>Perhaps using a different session engine would resolve this issue, or some other configuration in <em>settings.py</em>.</p>
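<p>One suspect worth noting: sharing a single <code>sqlite3</code> connection across request threads with <code>check_same_thread=False</code> can cause exactly this kind of cross-request interference. A hedged sketch of an alternative (names and path hypothetical), opening a short-lived connection per query:</p>

```python
import os
import sqlite3
import tempfile
from contextlib import closing

# Hypothetical database path; in the project this would be BASE_DIR / 'db.sqlite3'
DB_PATH = os.path.join(tempfile.gettempdir(), "demo_db.sqlite3")

def run_query(sql, params=()):
    """Open, use, commit, and close a connection per call, so no
    connection object is ever shared between request threads."""
    with closing(sqlite3.connect(DB_PATH)) as conn, conn:
        return conn.execute(sql, params).fetchall()
```

<p>Django's own ORM already manages per-thread database connections automatically, which is another reason to prefer it over a hand-managed raw <code>sqlite3</code> connection here.</p>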
|
<python><django><session><server><django-sessions>
|
2023-03-13 18:48:40
| 0
| 639
|
logan_9997
|
75,725,818
| 10,255,994
|
Loading Hugging face model is taking too much memory
|
<p>I am trying to load a large Hugging face model with code like below:</p>
<pre><code>model_from_disc = AutoModelForCausalLM.from_pretrained(path_to_model)
tokenizer_from_disc = AutoTokenizer.from_pretrained(path_to_model)
generator = pipeline("text-generation", model=model_from_disc, tokenizer=tokenizer_from_disc)
</code></pre>
<p>The program is quickly crashing <strong>after the first line</strong> because it is running out of memory. Is there a way to chunk the model as I am loading it, so that the program doesn't crash?</p>
<hr>
<p><strong>EDIT</strong>
<br>
See cronoik's answer for accepted solution, but here are the relevant pages on Hugging Face's documentation:</p>
<p><strong>Sharded Checkpoints:</strong> <a href="https://huggingface.co/docs/transformers/big_models#sharded-checkpoints:%7E:text=in%20the%20future.-,Sharded%20checkpoints,-Since%20version%204.18.0" rel="noreferrer">https://huggingface.co/docs/transformers/big_models#sharded-checkpoints:~:text=in%20the%20future.-,Sharded%20checkpoints,-Since%20version%204.18.0</a>
<br>
<strong>Large Model Loading:</strong> <a href="https://huggingface.co/docs/transformers/main_classes/model#:%7E:text=the%20weights%20instead.-,Large%20model%20loading,-In%20Transformers%204.20.0" rel="noreferrer">https://huggingface.co/docs/transformers/main_classes/model#:~:text=the%20weights%20instead.-,Large%20model%20loading,-In%20Transformers%204.20.0</a></p>
|
<python><pytorch><nlp><huggingface-transformers><huggingface>
|
2023-03-13 18:46:57
| 1
| 483
|
Bud Linville
|
75,725,658
| 20,484,525
|
How can I update my Python version and keeping all the packages?
|
<p>I want to ask a question about updating Python.
If I wanted to update my Python version from 3.10 to 3.11, what should I do?
And, most importantly, do I have to reinstall all the packages like Pandas, NumPy, Matplotlib, ...?
In your opinion, when is it necessary to update?</p>
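<p>For context, a hedged sketch of the usual workflow (assuming pip is used; the interpreter command names vary by system and install method). Packages are installed into each interpreter's own <code>site-packages</code>, so after moving to 3.11 they have to be reinstalled:</p>

```shell
# Snapshot the packages installed under the current (old) interpreter.
python -m pip freeze > requirements.txt
# After installing Python 3.11, reinstall them under the new interpreter:
# python -m pip install -r requirements.txt
```

<p>The install line is shown commented out because it must be run with the new interpreter (e.g. <code>python3.11 -m pip install -r requirements.txt</code> on many systems).</p>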
|
<python><python-3.x>
|
2023-03-13 18:28:48
| 2
| 429
|
Flavio Brienza
|
75,725,633
| 13,135,901
|
Filter queryset based on related object's field value
|
<p>I have two models:</p>
<pre><code>class PartUse(models.Model):
...
imported = models.BooleanField()
class PartReturn(models.Model):
partuse = models.ForeignKey(PartUse, on_delete=models.CASCADE)
...
imported = models.BooleanField()
class PartUseListView(ListView):
model = PartUse
def get_queryset(self):
if self.request.GET.get('show_imported', None):
qs = self.model.objects.all()
else:
qs = self.model.objects.filter(Exists(PartReturn.objects.filter(
imported=False, id__in=OuterRef("partreturn"))))
return qs
</code></pre>
<p>I want to filter the queryset for <code>PartUse</code> to return all <code>PartUse</code> instances that have <code>imported == False</code> on either model. What's the best way to achieve that?</p>
|
<python><django><django-models><django-views><django-admin>
|
2023-03-13 18:26:05
| 1
| 491
|
Viktor
|
75,725,261
| 3,696,490
|
Grid visualization of a graph from adjacency list
|
<p>I have a program that generated the following adjacency list.
The data represent a 2D grid graph of size 4 by 14, but each node is not necessarily connected to all of its neighbors.</p>
<p>My adjacency list is:</p>
<pre><code>list_ng2d={(0, 0): [(0, 1)],
(0, 1): [(0, 2), (0, 0)],
(0, 2): [(1, 2), (0, 3), (0, 1)],
(0, 3): [(0, 4), (0, 2)],
(0, 4): [(1, 4), (0, 5), (0, 3)],
(0, 5): [(0, 6), (0, 4)],
(0, 6): [(0, 7), (0, 5)],
(0, 7): [(0, 8), (0, 6)],
(0, 8): [(0, 9), (0, 7)],
(0, 9): [(1, 9), (0, 10), (0, 8)],
(0, 10): [(0, 11), (0, 9)],
(0, 11): [(1, 11), (0, 12), (0, 10)],
(0, 12): [(0, 13), (0, 11)],
(0, 13): [(0, 12)],
(1, 0): [(1, 1)],
(1, 1): [(1, 2), (1, 0)],
(1, 2): [(0, 2), (1, 3), (1, 1)],
(1, 3): [(1, 4), (1, 2)],
(1, 4): [(0, 4), (1, 5), (1, 3)],
(1, 5): [(1, 6), (1, 4)],
(1, 6): [(2, 6), (1, 7), (1, 5)],
(1, 7): [(1, 8), (1, 6)],
(1, 8): [(2, 8), (1, 9), (1, 7)],
(1, 9): [(0, 9), (1, 10), (1, 8)],
(1, 10): [(1, 11), (1, 9)],
(1, 11): [(0, 11), (1, 12), (1, 10)],
(1, 12): [(1, 13), (1, 11)],
(1, 13): [(1, 12)],
(2, 0): [(2, 1)],
(2, 1): [(2, 2), (2, 0)],
(2, 2): [(3, 2), (2, 3), (2, 1)],
(2, 3): [(2, 4), (2, 2)],
(2, 4): [(3, 4), (2, 5), (2, 3)],
(2, 5): [(2, 6), (2, 4)],
(2, 6): [(1, 6), (2, 7), (2, 5)],
(2, 7): [(2, 8), (2, 6)],
(2, 8): [(1, 8), (2, 9), (2, 7)],
(2, 9): [(3, 9), (2, 10), (2, 8)],
(2, 10): [(2, 11), (2, 9)],
(2, 11): [(3, 11), (2, 12), (2, 10)],
(2, 12): [(2, 13), (2, 11)],
(2, 13): [(2, 12)],
(3, 0): [(3, 1)],
(3, 1): [(3, 2), (3, 0)],
(3, 2): [(2, 2), (3, 3), (3, 1)],
(3, 3): [(3, 4), (3, 2)],
(3, 4): [(2, 4), (3, 5), (3, 3)],
(3, 5): [(3, 6), (3, 4)],
(3, 6): [(3, 7), (3, 5)],
(3, 7): [(3, 8), (3, 6)],
(3, 8): [(3, 9), (3, 7)],
(3, 9): [(2, 9), (3, 10), (3, 8)],
(3, 10): [(3, 11), (3, 9)],
(3, 11): [(2, 11), (3, 12), (3, 10)],
(3, 12): [(3, 13), (3, 11)],
(3, 13): [(3, 12)]}
</code></pre>
<p>I am trying to draw this graph to visualize the form of the graph (I drew it by hand for small instances on paper)</p>
<p>What I tried so far is to make a networkx instance and draw it, but I could not get it to draw things properly; it adds edges that are not in the adjacency list.</p>
<pre><code>
H = nx.Graph(list_ng2d)
# nx.draw(H, with_labels=True, font_weight='bold')
G=nx.grid_2d_graph(N,M)
# plt.figure(figsize=(6,6))
pos=dict()
for node,val in list_ng2d.items():
# print(f" {node} ngbrs {val}")
for ngbr in val:
pos[node]=ngbr
# pos = {(x,y):(y,-x) for x,y in G.nodes()}
# G.remove_edges_from([(0,13),(1,13)])
# nx.draw(G, pos=nx.get_node_attributes(G, 'pos'),
# node_color='lightgreen',
# with_labels=True,
# node_size=100)
nx.relabel_nodes(G,labels,False)
labels = dict(((i, j), i *M + (j)) for i, j in H.nodes()) #M=14
pos = {y:x for x,y in labels.items()}
nx.draw_networkx(G, pos=pos, with_labels=True, node_size = 100)
labels.items()
</code></pre>
<p>What I get</p>
<p><a href="https://i.sstatic.net/W37F5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W37F5.png" alt="enter image description here" /></a></p>
<p>What I am trying to draw (the image is rotated compared to networkx)
<a href="https://i.sstatic.net/89uiA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/89uiA.png" alt="enter image description here" /></a></p>
<p>Can someone point me in the right direction or give ideas on how to achieve this?</p>
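<p>For reference, a minimal sketch (with an abbreviated adjacency dict; the full <code>list_ng2d</code> drops in unchanged) of building the graph straight from the dict so only the listed edges exist, with positions derived from the <code>(row, col)</code> node names. The extra edges in the attempt above likely come from drawing the full <code>grid_2d_graph</code> <code>G</code> instead of <code>H</code>:</p>

```python
import networkx as nx

# Abbreviated adjacency dict in the question's (row, col) format.
list_ng2d = {
    (0, 0): [(0, 1)],
    (0, 1): [(0, 2), (0, 0)],
    (0, 2): [(1, 2), (0, 1)],
    (1, 2): [(0, 2)],
}

# nx.Graph accepts a dict-of-lists: only the listed edges are created.
H = nx.Graph(list_ng2d)

# Place each node at (col, -row) so the picture matches the grid layout.
pos = {(r, c): (c, -r) for (r, c) in H.nodes()}

# With matplotlib installed:
# nx.draw_networkx(H, pos=pos, with_labels=True, node_size=100)
```

<p>Drawing <code>H</code> rather than the complete <code>nx.grid_2d_graph(N, M)</code> keeps exactly the edges from the adjacency list.</p>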
|
<python><graph><networkx>
|
2023-03-13 17:47:34
| 2
| 550
|
user206904
|
75,725,243
| 5,583,772
|
What is a good best practice way to log a nested-dict
|
<p>In a Python simulation code, the outputs are stored in a nested dict. I'd like to log this nested dict to a file at the end of each iteration, and I'm having trouble identifying the best-practice approach. Currently, the nested dict is logged by passing it to a logger like:</p>
<pre><code>self.logger.info(self.input_dict)
</code></pre>
<p>But this produces a perhaps wasteful output file, as the dictionary is represented as it would appear with</p>
<pre><code>print(self.input_dict)
</code></pre>
<p>So my related questions are:</p>
<ol>
<li>Are there any best practices around logging nested dictionaries?</li>
<li>Is logging.info the correct method to use, or are there dedicated methods in Python for logging data, as opposed to logging events?</li>
</ol>
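<p>For question 1, a common pattern (a sketch, assuming the dict contains only JSON-serializable values) is to emit one JSON line per iteration, which is compact and trivially machine-readable later:</p>

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("sim")

# Hypothetical per-iteration output dict.
outputs = {"iteration": 3, "units": {"u1": {"power": 1.2}, "u2": {"power": 0.9}}}

# One JSON line per iteration: compact, and re-loadable with json.loads.
logger.info(json.dumps(outputs))
```

<p>For question 2, the <code>logging</code> module has no dedicated data-logging method; <code>logging.info</code> is aimed at events, and bulk results are often better written directly with <code>json</code>/<code>csv</code> files, keeping the logger for progress messages.</p>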
|
<python>
|
2023-03-13 17:45:43
| 1
| 556
|
Paul Fleming
|
75,725,137
| 11,665,178
|
Python Cloud functions use too much memory
|
<p>I am coming from an AWS backend using Lambda functions with python 3.9 runtime.</p>
<p>I have decided to run the equivalent on GCP for another project and everything is fine, except the memory usage.</p>
<p>On my AWS Lambda, I have a python function that consumes 79MB (you can see the memory usage in CloudWatch logs).</p>
<p>When I run the exact same function on a GCP Cloud Function (python39 runtime as well of course) it says memory exceeded 131MB over 128MB.</p>
<p>Is this coming from the Flask backend used in Cloud Functions? How can the memory usage be almost double for the same Python code?</p>
|
<python><google-cloud-functions>
|
2023-03-13 17:33:57
| 0
| 2,975
|
Tom3652
|
75,725,092
| 12,131,472
|
how do I insert each row of a dataframe before the matching rows in a different dataframe?
|
<p>I have one small and one big dataframe</p>
<p>the small one</p>
<pre><code> WS period shortCode identifier
6 197.78 2023-03-10 TC2-FFA spot
7 196.79 2023-03-10 TC5-FFA spot
8 253.13 2023-03-10 TC6-FFA spot
9 198.13 2023-03-13 TC12-FFA spot
10 166.67 2023-03-10 TC14-FFA spot
11 217.86 2023-03-10 TC17-FFA spot
18 97.00 2023-03-10 TD3-FFA spot
19 172.19 2023-03-10 TD7-FFA spot
20 205.71 2023-03-13 TD8-FFA spot
21 175.63 2023-03-10 TD19-FFA spot
22 115.45 2023-03-10 TD20-FFA spot
23 11350000.00 2023-03-10 TD22-FFA spot
24 232.14 2023-03-10 TD25-FFA spot
</code></pre>
<p>the big one with MultiIndex</p>
<pre><code>datumUnit $/mt WS
identifier period shortCode
TC2BALMO Mar 23 TC2-FFA 39.376 228.930
TC2CURMON Mar 23 TC2-FFA 35.946 208.988
TC2+1_M Apr 23 TC2-FFA 38.444 223.512
TC2+2_M May 23 TC2-FFA 37.786 219.686
TC2+3_M Jun 23 TC2-FFA 36.613 212.866
... ...
TD25+3Q Q4 23 TD25-FFA 42.909 185.432
TD25+4Q Q1 24 TD25-FFA 39.000 NaN
TD25+5Q Q2 24 TD25-FFA 32.421 NaN
TD25+1CAL Cal 24 TD25-FFA 34.250 NaN
TD25+2CAL Cal 25 TD25-FFA 33.955 NaN
</code></pre>
<p>This is its MultiIndex</p>
<pre><code>MultiIndex([( 'TC2BALMO', 'Mar 23', 'TC2-FFA'),
('TC2CURMON', 'Mar 23', 'TC2-FFA'),
( 'TC2+1_M', 'Apr 23', 'TC2-FFA'),
...
( 'TD25+4Q', 'Q1 24', 'TD25-FFA'),
( 'TD25+5Q', 'Q2 24', 'TD25-FFA'),
('TD25+1CAL', 'Cal 24', 'TD25-FFA'),
('TD25+2CAL', 'Cal 25', 'TD25-FFA')],
names=['identifier', 'period', 'shortCode'], length=198)
</code></pre>
<p>I wish to insert the "spot" row of the small dataframe at the top of each <code>shortCode</code> block of the second dataframe, without changing the row order of the big dataframe.</p>
<p>intended result</p>
<pre><code>datumUnit $/mt WS
identifier period shortCode
spot 23-03-10 TC2-FFA NaN 197.78
TC2BALMO Mar 23 TC2-FFA 39.376 228.930
TC2CURMON Mar 23 TC2-FFA 35.946 208.988
TC2+1_M Apr 23 TC2-FFA 38.444 223.512
TC2+2_M May 23 TC2-FFA 37.786 219.686
TC2+3_M Jun 23 TC2-FFA 36.613 212.866
... ...
spot 23-03-10 TD25-FFA NaN 232.14
TD25BALMO Mar 23 TD25-FFA 48.902 211.331
TD25CURMON Mar 23 TD25-FFA 53.254 230.138
TD25+1_M Apr 23 TD25-FFA 46.815 202.312
TD25+2_M May 23 TD25-FFA 43.717 188.924
TD25+3_M Jun 23 TD25-FFA 41.571 179.650
TD25+4_M Jul 23 TD25-FFA 40.776 176.214
TD25+5_M Aug 23 TD25-FFA 40.281 174.075
TD25CURQ Q1 23 TD25-FFA 46.668 201.677
TD25+1Q Q2 23 TD25-FFA 44.035 190.298
TD25+2Q Q3 23 TD25-FFA 40.367 174.447
TD25+3Q Q4 23 TD25-FFA 42.909 185.432
TD25+4Q Q1 24 TD25-FFA 39.000 NaN
TD25+5Q Q2 24 TD25-FFA 32.421 NaN
TD25+1CAL Cal 24 TD25-FFA 34.250 NaN
TD25+2CAL Cal 25 TD25-FFA 33.955 NaN
</code></pre>
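<p>For reference, a sketch of one approach (small illustrative frames, values abridged; it assumes every spot <code>shortCode</code> also appears in the big frame): give the spot rows a matching MultiIndex, concatenate, then reorder with a stable sort that groups rows by each <code>shortCode</code>'s first appearance and puts the spot row first in its block:</p>

```python
import numpy as np
import pandas as pd

# Small illustrative frames (shapes assumed from the question, values abridged).
big = pd.DataFrame(
    {"$/mt": [39.376, 35.946, 48.902], "WS": [228.930, 208.988, 211.331]},
    index=pd.MultiIndex.from_tuples(
        [("TC2BALMO", "Mar 23", "TC2-FFA"),
         ("TC2CURMON", "Mar 23", "TC2-FFA"),
         ("TD25BALMO", "Mar 23", "TD25-FFA")],
        names=["identifier", "period", "shortCode"],
    ),
)
small = pd.DataFrame(
    {"WS": [197.78, 232.14],
     "period": ["2023-03-10", "2023-03-10"],
     "shortCode": ["TC2-FFA", "TD25-FFA"],
     "identifier": ["spot", "spot"]}
)

# Give the spot rows the same MultiIndex levels, then stack the frames.
spot = small.set_index(["identifier", "period", "shortCode"])
combined = pd.concat([spot, big])  # spot rows lack $/mt, so they get NaN

# Reorder: group blocks by each shortCode's first appearance in `big`,
# put the spot row first in its block, keep the original order otherwise.
block = {sc: i for i, sc in enumerate(big.index.get_level_values("shortCode").unique())}
primary = np.asarray(combined.index.get_level_values("shortCode").map(block))
spot_last = np.asarray(combined.index.get_level_values("identifier") != "spot", dtype=int)
original = np.arange(len(combined))
combined = combined.iloc[np.lexsort((original, spot_last, primary))]
```

<p><code>np.lexsort</code> sorts by its last key first, so rows are ordered by block, then spot-first within each block, then by their original position.</p>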
|
<python><pandas><dataframe>
|
2023-03-13 17:30:42
| 1
| 447
|
neutralname
|
75,725,091
| 1,764,353
|
How to run terraform apply inside an AWS Lambda function?
|
<p>I came across an older <a href="https://github.com/FitnessKeeper/terraform-lambda" rel="nofollow noreferrer">repo</a> on GitHub that seems like it's exactly what I need for my use case. Unfortunately, the code needs to be updated because Lambda and/or Python no longer support certain operations.</p>
<p>I keep getting <code>PermissionError: [Errno 13] Permission denied: '/tmp/tmpg4l5twh9/terraform'</code> when I try to execute <code>terraform --version</code>. (I get the same error in my local environment and when the lambda function is packaged and deployed to AWS).</p>
<p>Here is my updated sandbox code to update the lambda function:</p>
<pre><code>import os
import subprocess
import urllib.request
import tarfile
import json
import tempfile
# Fetch the latest version of Terraform
URL = 'https://api.github.com/repos/hashicorp/terraform/releases/latest'
with urllib.request.urlopen(URL) as response:
data = response.read()
encoding = response.info().get_content_charset('utf-8')
TERRAFORM_DOWNLOAD_URL = json.loads(data.decode(encoding))['tarball_url']
# Download and extract the latest version of Terraform
with tempfile.TemporaryDirectory() as tmpdir:
TERRAFORM_TAR_PATH = os.path.join(tmpdir, 'terraform.tar.gz')
TERRAFORM_PATH = os.path.join(tmpdir, 'terraform')
urllib.request.urlretrieve(TERRAFORM_DOWNLOAD_URL, TERRAFORM_TAR_PATH)
with tarfile.open(TERRAFORM_TAR_PATH, "r:gz") as tf:
print(f"Extracting {TERRAFORM_TAR_PATH} to {TERRAFORM_PATH}")
tf.extractall(path=tmpdir)
# Remove the tar file after it's extracted
os.remove(TERRAFORM_TAR_PATH)
print(f"All files extracted to {TERRAFORM_PATH}")
print(f"{TERRAFORM_PATH} contents: {os.listdir(tmpdir)}")
# Add Terraform to PATH
os.rename(f'{tmpdir}/{os.listdir(tmpdir)[0]}', TERRAFORM_PATH)
os.environ["PATH"] += os.pathsep + TERRAFORM_PATH
os.chmod(TERRAFORM_PATH, 0o777)
# os.system(f'chmod -R 777 {TERRAFORM_PATH}')
print(os.listdir(TERRAFORM_PATH))
subprocess.check_output([TERRAFORM_PATH, "--version"])
</code></pre>
<p>I don't believe this is a duplicate of <a href="https://stackoverflow.com/questions/73804942/is-there-a-way-to-deploy-a-terraform-file-via-an-aws-lambda-function">Is there a way to deploy a terraform file via an AWS lambda function?</a> because I am working from a solution that was previously working.</p>
|
<python><amazon-web-services><aws-lambda><terraform><terraform-provider-aws>
|
2023-03-13 17:30:38
| 1
| 389
|
John R
|
75,725,071
| 13,262,692
|
Pad sequence to multiple of a number TensorFlow
|
<p>I want to pad a sequence in Tensorflow to multiple of a number. The code I have tried is:</p>
<pre><code>import tensorflow as tf
def pad_to_multiple(tensor, multiple, dim=-1, value=0):
seqlen = tf.shape(tensor)[dim]
seqlen = tf.cast(seqlen, tf.float32)
multiple = tf.cast(multiple, tf.float32)
m = seqlen / multiple
if tf.math.equal(tf.math.floor(m), m):
return False, tensor
remainder = tf.math.ceil(m) * multiple - seqlen
paddings = tf.zeros([tf.rank(tensor), 2], dtype=tf.int32)
paddings = tf.tensor_scatter_nd_update(paddings, tf.reshape([dim, 1], [1, 2]), tf.reshape([remainder, 0], [1, 2]))
tensor = tf.pad(tensor, paddings, constant_values=value)
return True, tensor
</code></pre>
<p>It throws the following error at <code>tf.tensor_scatter_nd_update</code>:</p>
<pre><code>Inner dimensions of output shape must match inner dimensions of updates shape. Output: [4,2] updates: [1,2]
</code></pre>
<p>Is there a way I can fix this dimension mismatch in <code>tensor_scatter_nd_update</code>?</p>
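<p>Setting the <code>tensor_scatter_nd_update</code> shapes aside, the padding arithmetic itself can be sanity-checked framework-free. A NumPy sketch of the same logic (the TF version would build <code>paddings</code> the same way as a plain list and call <code>tf.pad</code>):</p>

```python
import numpy as np

def pad_to_multiple(arr, multiple, axis=-1, value=0):
    """Pad `arr` at the start of `axis` so its length is a multiple of `multiple`."""
    seqlen = arr.shape[axis]
    remainder = (-seqlen) % multiple  # 0 when already a multiple
    if remainder == 0:
        return False, arr
    paddings = [(0, 0)] * arr.ndim
    paddings[axis] = (remainder, 0)  # pad before, matching the TF code's intent
    return True, np.pad(arr, paddings, constant_values=value)
```

<p>The same <code>paddings</code> list of per-axis <code>(before, after)</code> pairs is exactly what <code>tf.pad</code> accepts, which avoids the scatter update entirely.</p>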
|
<python><tensorflow><tensorflow2.0>
|
2023-03-13 17:28:40
| 1
| 308
|
Muhammad Anas Raza
|
75,724,886
| 7,085,817
|
`func azure functionapp list-functions` returns unauthorised, 401
|
<p>I'm trying to deploy a function app using the CLI and the VS Code plugin, and as the title states I'm receiving a 401. I tried logging out and in again: when logged out, the error is different.
The documentation suggests checking the storage account keys: I checked and rotated them, which didn't help.</p>
<p>As mentioned in the title, even the command to list functions doesn't work. I'm all out of ideas.</p>
<p>To deploy I use Terraform, and I tried deleting and redeploying the function app: no change.</p>
<p>Here is my function.json:</p>
<pre><code>{
"scriptFile": "__init__.py",
"bindings": [
{
"name": "mytimer",
"type": "timerTrigger",
"direction": "in",
"schedule": "0 */5 * * * *"
}
]
}
</code></pre>
<p>And here is the TF block:</p>
<pre><code>
resource "azurerm_linux_function_app" "data-extract-fa" {
name = "${var.prefix}-function-app"
resource_group_name = var.rg
location = var.location
storage_account_name = azurerm_storage_account.data-extract-sa.name
storage_account_access_key = azurerm_storage_account.data-extract-sa.primary_access_key
service_plan_id = azurerm_service_plan.data-extract-sp.id
identity {
type = "SystemAssigned"
}
app_settings = {
AzureWebJobsDisableHomepage = true
APPINSIGHTS_INSTRUMENTATIONKEY = azurerm_application_insights.data-extract-ai.instrumentation_key
SCM_DO_BUILD_DURING_DEPLOYMENT = true
}
site_config {
application_stack {
python_version = "3.9"
}
}
dynamic "connection_string" {
for_each = { for tuple in regexall("(.*)=(.*)", file("../functions/.env")) : tuple[0] => tuple[1] }
content {
name = connection_string.key
type = "Custom"
value = connection_string.value
}
}
}
</code></pre>
|
<python><azure><azure-functions><terraform-provider-azure>
|
2023-03-13 17:06:49
| 1
| 350
|
Alex Dembo
|
75,724,665
| 5,679,047
|
How to maximize performance of spaCy on an M1 Mac (currently much slower than Intel)
|
<p>I've observed that <code>nlp.pipe</code> is 30-40% slower on my almost brand new M1 Pro Macbook than on my old Macbook Pro from 2017. Most other functions are faster on the M1 by a similar margin, so this is not the performance I would expect.</p>
<p>For a benchmark, I'm running the following code (with scispacy):</p>
<pre><code>import os
from pathlib import Path
import re
import spacy
import time
with open(Path(os.path.expanduser('~')) / 'Downloads' / 'icd10cm-table and index-April 1 2023' / 'icd10cm-tabular-April 1 2023.xml') as f:
content = f.read()
matches = re.findall(r'<desc>([^<]*)</desc>', content)
nlp = spacy.load('en_core_sci_lg')
start_time = time.time()
for x in nlp.pipe(matches):
pass
print('%s seconds elapsed' % (time.time() - start_time))
</code></pre>
<p>My M1 Mac takes over 75 seconds to complete the task, while my 2017 Intel Mac can do it in 46 seconds.</p>
<p>I don't know whether <code>spacy</code> uses <code>numpy</code>, but I installed a fast version of <code>numpy</code> using ABarrier's answer to <a href="https://stackoverflow.com/questions/70240506/why-is-numpy-native-on-m1-max-greatly-slower-than-on-old-intel-i5">this question</a>. That made <code>numpy</code> faster, but made no difference for <code>spacy</code>. I'm assuming that somewhere there is an unoptimized binary being used, but I don't know how to figure out what it is.</p>
<hr />
<p>Instructions to replicate my benchmark:</p>
<p>You can get the file I'm using <a href="https://ftp.cdc.gov/pub/Health_Statistics/NCHS/Publications/ICD10CM/April-1-2023-Update/icd10cm-table%20and%20index-April%201%202023.zip" rel="nofollow noreferrer">here (cdc.gov)</a>; it's a table of ICD-10 concepts in XML format. If you don't have scispacy or don't have the <code>en_core_sci_lg</code> model, create an environment and run</p>
<pre><code>pip install scispacy
pip install https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.1/en_core_sci_lg-0.5.1.tar.gz
</code></pre>
|
<python><spacy><apple-m1>
|
2023-03-13 16:43:02
| 1
| 681
|
Zorgoth
|
75,724,652
| 1,348,572
|
Cannot assign "default_connection" value to Tortoise ORM model with asyncpg driver?
|
<p>I'm writing a plugin-based system in my application that loads Tortoise-ORM (<code>tortoise-orm = {extras = ["asyncpg"], version = "^0.19.3"} </code>) models on demand. I've written a very simple test one, below, with some basic fields.</p>
<pre class="lang-py prettyprint-override"><code>class PingModel(Model):
uid = fields.TextField()
ping_count = fields.IntField()
def __str__(self):
return self.uid
</code></pre>
<p>This being said, if I use <code>sqlite</code> as my Tortoise-ORM database, all works well. However, if I create configuration like the following for <code>postgres</code>, I get an error when attempting to use the Model:</p>
<pre class="lang-py prettyprint-override"><code> elif 'postgres' in self.config['database']:
db_config = {
'connections': {
'default': {
'engine': "tortoise.backends.asyncpg",
'credentials': {
'host': self.config['database']['postgres']['db_url'],
'port': self.config['database']['postgres']['db_port'],
'user': self.config['database']['postgres']['db_user'],
'password': self.config['database']['postgres']['db_password'],
'database': self.config['database']['postgres']['db_name'],
},
},
},
'apps': {
'models': {
'models': self.models,
'default_connection': 'default'
},
},
}
[...]
await Tortoise.init(config=db_config)
await Tortoise.generate_schemas()
</code></pre>
<p>And the exception I get when using it:</p>
<pre><code>Traceback (most recent call last):
File "/home/tunacado/.cache/pypoetry/virtualenvs/fishbot-5Nu9GTXz-py3.10/lib/python3.10/site-packages/discord/commands/core.py", line 124, in wrapped
ret = await coro(arg)
File "/home/tunacado/.cache/pypoetry/virtualenvs/fishbot-5Nu9GTXz-py3.10/lib/python3.10/site-packages/discord/commands/core.py", line 978, in _invoke
await self.callback(self.cog, ctx, **kwargs)
File "/home/tunacado/Source/fishbot/fishbot/cogs/ping.py", line 17, in ping
record = await PingModel.filter(uid__contains=user)
File "/home/tunacado/.cache/pypoetry/virtualenvs/fishbot-5Nu9GTXz-py3.10/lib/python3.10/site-packages/tortoise/models.py", line 1255, in filter
return cls._meta.manager.get_queryset().filter(*args, **kwargs)
File "/home/tunacado/.cache/pypoetry/virtualenvs/fishbot-5Nu9GTXz-py3.10/lib/python3.10/site-packages/tortoise/manager.py", line 15, in get_queryset
return QuerySet(self._model)
File "/home/tunacado/.cache/pypoetry/virtualenvs/fishbot-5Nu9GTXz-py3.10/lib/python3.10/site-packages/tortoise/queryset.py", line 299, in __init__
super().__init__(model)
File "/home/tunacado/.cache/pypoetry/virtualenvs/fishbot-5Nu9GTXz-py3.10/lib/python3.10/site-packages/tortoise/queryset.py", line 96, in __init__
self.capabilities: Capabilities = model._meta.db.capabilities
File "/home/tunacado/.cache/pypoetry/virtualenvs/fishbot-5Nu9GTXz-py3.10/lib/python3.10/site-packages/tortoise/models.py", line 285, in db
raise ConfigurationError(
tortoise.exceptions.ConfigurationError: default_connection for the model <class 'fishbot.cogs.ping.PingModel'> cannot be None
</code></pre>
<p>Just to note, I have ensured that we are in fact using this form of <code>db_config</code> and this isn't a logic error in my conditionals. The <code>db_config</code> looks like this when I pass it to <code>init</code>:</p>
<pre class="lang-py prettyprint-override"><code>{'connections': {'default': {'engine': 'tortoise.backends.asyncpg', 'credentials': {'host': 'localhost', 'port': 5432, 'user': 'myuser', 'password': 'mypassword', 'database': 'mydb'}}}, 'apps': {'models': {'models': ['fishbot.cogs.ping'], 'default_connection': 'default'}}}
</code></pre>
<p>After research, I note that the Model itself <em>must</em> have an attribute of <code>Model._meta.default_connection</code> set to the connection name (<code>default</code>, in my case). This is what this <a href="https://github.com/tortoise/tortoise-orm/issues/1188" rel="nofollow noreferrer">GitHub issue describes</a>. I even tried setting <code>class Meta</code> in my Model and manually changing the <code>_meta.default_connection</code> value, but none of that worked: I get the exact same exception.</p>
<p>My question is why my Model never inherits the <code>default_connection</code> assigned in the configuration. If we check the <code>self.models</code> in the above <code>apps:models</code> part of <code>db_config</code> it shows up as <code>fishbot.cog.ping</code>, which is the module that <em>contains</em> the <code>PingModel</code> Model I've built. I can't import the model itself in the <code>models</code> section either; <code>fishbot.cogs.ping.PingModel</code> as the <code>models:models</code> section doesn't work either with <code>tortoise.exceptions.ConfigurationError: Module "fishbot.cogs.ping.PingModel" not found</code>.</p>
<p>Even if I hard-code it to <code>'models': ['fishbot.cogs.ping'],</code> I get the exact same exception; no matter what I can't get <code>default_connection</code> to stick to my models I import this way.</p>
<p>So that leaves me with my questions:</p>
<ul>
<li>Why isn't my model inheriting the <code>default_connection</code> from the configuration I placed?</li>
<li>What is the proper way to ensure that I am setting <code>default_connection</code> on all of my models with initializing a <code>postgres</code> database with Tortoise-ORM?</li>
</ul>
<p>Any advice is appreciated, including telling me what I messed up.</p>
<h3>EDIT 0</h3>
<p>So I ran the <code>postgres.py</code> in the <a href="https://github.com/tortoise/tortoise-orm/blob/develop/examples/postgres.py" rel="nofollow noreferrer">Tortoise example section</a> in my same <code>poetry</code> environment and it worked; it has to be specific to how my models are being passed in then...</p>
|
<python><tortoise-orm>
|
2023-03-13 16:41:22
| 2
| 349
|
robbmanes
|
75,724,561
| 7,776,781
|
Decorate behaviour of subclass methods of an abstract base class
|
<p>I have an inheritance scenario that looks like this:</p>
<pre><code>import abc
class A(metaclass=abc.ABCMeta):
def __init__(self, x: int | None = None) -> None:
self.x = x
@abc.abstractmethod
def foo(self, x: int | None = None) -> None:
pass
class B(A):
def foo(self, x: int | None = None) -> None:
x = x or self.x
if x is None:
raise ValueError("x must be defined")
print(x)
class C(B):
... # defines a bunch of other stuff
</code></pre>
<p>The idea is that it will be possible for users (other developers) to use my class in two different ways. Either they interact directly with <code>B</code> (or another equivalent subclass of <code>A</code>), in which case they have to pass certain values to its methods (here, they need to pass <code>x</code> to the <code>B.foo</code> method). Alternatively, they can subclass <code>B</code> (like <code>C</code> in this example) and set <code>x</code> directly in the init, after which there is no need to pass <code>x</code> to <code>C.foo</code>.</p>
<p>In the end, both of these usages should be valid:</p>
<pre><code>b = B()
b.foo(x=1)
c = C(x=1)
c.foo()
</code></pre>
<p>Now, the problem is that I would like to avoid having to repeat this clause in different methods and different subclasses of <code>A</code>:</p>
<pre><code>x = x or self.x
if x is None:
    raise ValueError("x must be defined")
</code></pre>
<p>Is there any way I can do something in <code>A</code> that will essentially run this check, and effectively pass the along the value of <code>x</code> to the <code>foo</code> method if it has not been given a value for <code>x</code>?</p>
<p>I would essentially want to make <code>self.x</code> the default value for <code>x</code>, but I don't want to have to re-write the <code>if x is None</code> check if I can avoid it.</p>
<p>Something that comes to mind would be if I could somehow automatically register a decorator onto the methods of my subclass, but I don't think that it is possible?</p>
|
<python><python-3.x><inheritance>
|
2023-03-13 16:32:08
| 1
| 619
|
Fredrik Nilsson
|
75,724,490
| 8,040,369
|
DataFrame: convert different format of date in df column to same format
|
<p>I have a df as below</p>
<pre><code>ID Date
===================
1 1/1/2023
2 2/1/2023
3 2/1/2023
4 31-01-2023
5 31-01-2023
</code></pre>
<p>All the dates mentioned above are for January 2023 and will always arrive in the formats shown.
I need to convert them to the format <strong>YYYY-MM-DD</strong>.</p>
<p>If I try <code>df['Date'] = pd.to_datetime(df['Date'].str[:10])</code>, it parses the 2nd and 3rd rows as February 2023.</p>
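<p>For reference, a hedged sketch of one approach: normalise the separator first so pandas can infer a single day-first format (on pandas 2.x, <code>pd.to_datetime(..., dayfirst=True, format="mixed")</code> is an alternative):</p>

```python
import pandas as pd

df = pd.DataFrame({"ID": [1, 2, 3, 4, 5],
                   "Date": ["1/1/2023", "2/1/2023", "2/1/2023",
                            "31-01-2023", "31-01-2023"]})

# Normalise the separator, then parse everything day-first.
normalised = df["Date"].str.replace("-", "/", regex=False)
df["Date"] = pd.to_datetime(normalised, dayfirst=True).dt.strftime("%Y-%m-%d")
```

<p>Dropping the <code>.dt.strftime</code> call keeps the column as a proper <code>datetime64</code> dtype, which is usually preferable if further date arithmetic is needed.</p>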
|
<python><pandas><dataframe><datetime>
|
2023-03-13 16:25:01
| 1
| 787
|
SM079
|
75,724,281
| 7,648
|
RuntimeError: Error(s) in loading state_dict
|
<p>I have the following PyTorch model:</p>
<pre><code>import math
from abc import abstractmethod
import torch.nn as nn
class AlexNet3D(nn.Module):
@abstractmethod
def get_head(self):
pass
def __init__(self, input_size):
super().__init__()
self.input_size = input_size
self.features = nn.Sequential(
nn.Conv3d(1, 64, kernel_size=(5, 5, 5), stride=(2, 2, 2), padding=0),
nn.BatchNorm3d(64),
nn.ReLU(inplace=True),
nn.MaxPool3d(kernel_size=3, stride=3),
nn.Conv3d(64, 128, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=0),
nn.BatchNorm3d(128),
nn.ReLU(inplace=True),
nn.MaxPool3d(kernel_size=3, stride=3),
nn.Conv3d(128, 192, kernel_size=(3, 3, 3), padding=1),
nn.BatchNorm3d(192),
nn.ReLU(inplace=True),
nn.Conv3d(192, 192, kernel_size=(3, 3, 3), padding=1),
nn.BatchNorm3d(192),
nn.ReLU(inplace=True),
nn.Conv3d(192, 128, kernel_size=(3, 3, 3), padding=1),
nn.BatchNorm3d(128),
nn.ReLU(inplace=True),
nn.MaxPool3d(kernel_size=3, stride=3),
)
self.classifier = self.get_head()
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
elif isinstance(m, nn.BatchNorm3d):
m.weight.data.fill_(1)
m.bias.data.zero_()
def forward(self, x):
xp = self.features(x)
x = xp.view(xp.size(0), -1)
x = self.classifier(x)
return [x, xp]
class AlexNet3DDropoutRegression(AlexNet3D):
def get_head(self):
return nn.Sequential(nn.Dropout(),
nn.Linear(self.input_size, 64),
nn.ReLU(inplace=True),
nn.Dropout(),
nn.Linear(64, 1),
)
</code></pre>
<p>I am initializing the model like this:</p>
<pre><code>def init_model(self):
model = AlexNet3DDropoutRegression(4608)
if self.use_cuda:
log.info("Using CUDA; {} devices.".format(torch.cuda.device_count()))
if torch.cuda.device_count() > 1:
model = nn.DataParallel(model)
model = model.to(self.device)
return model
</code></pre>
<p>After training, I save the model like this:</p>
<pre><code> torch.save(self.model.state_dict(), self.cli_args.model_save_location)
</code></pre>
<p>I then attempt to load the saved model:</p>
<pre><code>import torch
from reprex.models import AlexNet3DDropoutRegression
model_save_location = "/home/feczk001/shared/data/AlexNet/LoesScoring/loes_scoring_01.pt"
model = AlexNet3DDropoutRegression(4608)
model.load_state_dict(torch.load(model_save_location,
map_location='cpu'))
</code></pre>
<p>But I get the following error:</p>
<pre><code>RuntimeError: Error(s) in loading state_dict for AlexNet3DDropoutRegression:
Missing key(s) in state_dict: "features.0.weight", "features.0.bias", "features.1.weight", "features.1.bias", "features.1.running_mean", "features.1.running_var", "features.4.weight", "features.4.bias", "features.5.weight", "features.5.bias", "features.5.running_mean", "features.5.running_var", "features.8.weight", "features.8.bias", "features.9.weight", "features.9.bias", "features.9.running_mean", "features.9.running_var", "features.11.weight", "features.11.bias", "features.12.weight", "features.12.bias", "features.12.running_mean", "features.12.running_var", "features.14.weight", "features.14.bias", "features.15.weight", "features.15.bias", "features.15.running_mean", "features.15.running_var", "classifier.1.weight", "classifier.1.bias", "classifier.4.weight", "classifier.4.bias".
Unexpected key(s) in state_dict: "module.features.0.weight", "module.features.0.bias", "module.features.1.weight", "module.features.1.bias", "module.features.1.running_mean", "module.features.1.running_var", "module.features.1.num_batches_tracked", "module.features.4.weight", "module.features.4.bias", "module.features.5.weight", "module.features.5.bias", "module.features.5.running_mean", "module.features.5.running_var", "module.features.5.num_batches_tracked", "module.features.8.weight", "module.features.8.bias", "module.features.9.weight", "module.features.9.bias", "module.features.9.running_mean", "module.features.9.running_var", "module.features.9.num_batches_tracked", "module.features.11.weight", "module.features.11.bias", "module.features.12.weight", "module.features.12.bias", "module.features.12.running_mean", "module.features.12.running_var", "module.features.12.num_batches_tracked", "module.features.14.weight", "module.features.14.bias", "module.features.15.weight", "module.features.15.bias", "module.features.15.running_mean", "module.features.15.running_var", "module.features.15.num_batches_tracked", "module.classifier.1.weight", "module.classifier.1.bias", "module.classifier.4.weight", "module.classifier.4.bias".
</code></pre>
<p>What is going wrong here?</p>
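<p>For what it's worth, the <code>module.</code> prefix on every unexpected key is the signature of a model saved while wrapped in <code>nn.DataParallel</code>. A common workaround (a sketch on a plain dict, not verified against this training code) is to strip that prefix before calling <code>load_state_dict</code>, or alternatively to save <code>self.model.module.state_dict()</code> when the model is wrapped:</p>

```python
def strip_module_prefix(state_dict, prefix="module."):
    """Remove the nn.DataParallel wrapper prefix from saved keys."""
    return {key[len(prefix):] if key.startswith(prefix) else key: value
            for key, value in state_dict.items()}

# Hypothetical usage against the code above:
# model.load_state_dict(
#     strip_module_prefix(torch.load(model_save_location, map_location="cpu")))
```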
|
<python><deep-learning><pytorch>
|
2023-03-13 16:05:42
| 3
| 7,944
|
Paul Reiners
|
75,724,115
| 13,158,157
|
Azure blob storage directory download (win, python)
|
<p>My goal is to download part of the blob storage that looks like a directory inside other directories.</p>
<p>The official <a href="https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-download-python" rel="nofollow noreferrer">Download to a file path</a> function requires a self argument that I don't understand how to use, and other Stack Overflow questions often suggest <a href="https://stackoverflow.com/questions/37079951/download-all-blobs-within-an-azure-storage-container">outdated solutions implementing BlockBlobService</a>. I also found <a href="https://gist.github.com/amalgjose/7de7b93a326e5a6f53f8b43ba5187932" rel="nofollow noreferrer">this</a>, but it doesn't seem to work.</p>
<p>Generally, it seems that I have to loop over <code>blob_list</code>, but I have trouble recreating the structure of a "directory" in blob storage locally, for different reasons. Sometimes I have .parquet files that are actually folders with snappy compression, etc.
So far I have this code:</p>
<pre><code>import pathlib

from azure.storage.blob import BlobServiceClient

blob_service_client = BlobServiceClient.from_connection_string(conn_str)
container_client = blob_service_client.get_container_client(container="container_name")
blob_list = container_client.list_blobs("dir_1/sub_dir_2/desired_dir/")
local_directory = pathlib.Path("local_directory/")
for blob in blob_list:
local_file_path = local_directory / pathlib.Path(blob.name)
local_file_path.parent.mkdir(parents=True, exist_ok=True)
with open(local_file_path, "wb") as local_file:
print('yes it is downloaded')
blob_client = container_client.get_blob_client(blob)
blob_data = blob_client.download_blob()
blob_data.readinto(local_file)
</code></pre>
<p>What is the current best way to download a sub-directory, mirroring the Azure blob storage structure, on a Windows machine?</p>
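<p>One wrinkle in the loop above is that <code>blob.name</code> contains the full remote path, so the local tree repeats <code>dir_1/sub_dir_2/...</code>. To mirror only the desired sub-directory, the prefix can be stripped first — a stdlib sketch (the helper name is mine):</p>

```python
import pathlib
import posixpath


def local_path_for(blob_name, prefix, dest):
    """Map a blob name to a local path that mirrors only the sub-directory.

    `prefix` is the virtual directory being downloaded; everything above it
    is stripped so the local tree starts at `dest`.
    """
    relative = posixpath.relpath(blob_name, prefix)
    return pathlib.Path(dest) / pathlib.PurePosixPath(relative)
```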
|
<python><azure>
|
2023-03-13 15:49:51
| 1
| 525
|
euh
|
75,724,033
| 14,720,380
|
Set the media type of a custom Error Response via a pydantic model in FastAPI
|
<p>In my FastAPI application I want to return my errors as RFC Problem JSON:</p>
<pre><code>from pydantic import BaseModel
class RFCProblemJSON(BaseModel):
type: str
title: str
detail: str | None
status: int | None
</code></pre>
<p>I can set the response model in the OpenAPI docs with the <code>responses</code> argument of the FastAPI class:</p>
<pre><code>from fastapi import FastAPI, status
api = FastAPI(
responses={
status.HTTP_401_UNAUTHORIZED: {'model': RFCProblemJSON},
status.HTTP_422_UNPROCESSABLE_ENTITY: {'model': RFCProblemJSON},
status.HTTP_500_INTERNAL_SERVER_ERROR: {'model': RFCProblemJSON}
}
)
</code></pre>
<p>However, I want to set the media type to 'application/problem+json'. I tried two methods: first, adding a <code>media_type</code> field to the BaseModel:</p>
<pre><code>class RFCProblemJSON(BaseModel):
media_type = "application/problem+json"
type: str
title: str
detail: str | None
status: int | None
</code></pre>
<p>and also, inheriting from <code>fastapi.responses.Response</code>:</p>
<pre><code>class RFCProblemJSON(Response):
media_type = "application/problem+json"
type: str
title: str
detail: str | None
status: int | None
</code></pre>
<p>However, neither of these modifies the media_type in the openapi.json file or the Swagger UI.</p>
<p>When you add the media_type field to the BaseModel, the media type in the Swagger UI is not modified:
<a href="https://i.sstatic.net/tiPq5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tiPq5.png" alt="Incorrect media type" /></a></p>
<p>And when you make the model inherit from Response, you just get an error (this was a long shot, but I tried it anyway).</p>
<pre><code> raise fastapi.exceptions.FastAPIError(
fastapi.exceptions.FastAPIError: Invalid args for response field! Hint: check that <class 'RoutingServer.RestAPI.schema.errors.RFCProblemJSON'> is a valid Pydantic field type. If you are using a return type annotation that is not a valid Pydantic field (e.g. Union[Response, dict, None]) you can disable generating the response model from the type annotation with the path operation decorator parameter response_model=None. Read more: https://fastapi.tiangolo.com/tutorial/response-model/
</code></pre>
<p>It is possible to get the Swagger UI to show the correct media type if you manually fill out the OpenAPI definition:</p>
<pre><code>api = FastAPI(
debug=debug,
version=API_VERSION,
title="RoutingServer API",
openapi_tags=tags_metadata,
swagger_ui_init_oauth={"clientID": oauth2_scheme.client_id},
responses={
status.HTTP_401_UNAUTHORIZED: {
"content": {"application/problem+json": {
"example": {
"type": "string",
"title": "string",
"detail": "string"
}}},
"description": "Return the JSON item or an image.",
},
}
)
</code></pre>
<p>However, I want to try and implement this with a BaseModel so that I can inherit from RFCProblemJSON and provide some optional extras for some specific errors.</p>
<p>The minimal example to reproduce my problem is:</p>
<pre><code>from pydantic import BaseModel
from fastapi import FastAPI, status, Response, Request
from fastapi.exceptions import RequestValidationError
from pydantic import error_wrappers
import json
import uvicorn
from typing import List, Tuple, Union, Dict, Any
from typing_extensions import TypedDict
Loc = Tuple[Union[int, str], ...]
class _ErrorDictRequired(TypedDict):
loc: Loc
msg: str
type: str
class ErrorDict(_ErrorDictRequired, total=False):
ctx: Dict[str, Any]
class RFCProblemJSON(BaseModel):
type: str
title: str
detail: str | None
status: int | None
class RFCUnprocessableEntity(RFCProblemJSON):
instance: str
issues: List[ErrorDict]
class RFCProblemResponse(Response):
media_type = "application/problem+json"
def render(self, content: RFCProblemJSON) -> bytes:
return json.dumps(
content.dict(),
ensure_ascii=False,
allow_nan=False,
indent=4,
separators=(", ", ": "),
).encode("utf-8")
api = FastAPI(
responses={
status.HTTP_422_UNPROCESSABLE_ENTITY: {'model': RFCUnprocessableEntity},
}
)
@api.get("/{x}")
def hello(x: int) -> int:
return x
@api.exception_handler(RequestValidationError)
def format_validation_error_as_problem_json(request: Request, exc: error_wrappers.ValidationError):
status_code = status.HTTP_422_UNPROCESSABLE_ENTITY
content = RFCUnprocessableEntity(
type="/errors/unprocessable_entity",
title="Unprocessable Entity",
status=status_code,
detail="The request has validation errors.",
instance=request.url.path,
issues=exc.errors()
)
return RFCProblemResponse(content, status_code=status_code)
uvicorn.run(api)
</code></pre>
<p>When you go to <code>http://localhost:8000/hello</code>, it returns <code>application/problem+json</code> in the headers; however, the Swagger UI docs show that the response will be <code>application/json</code>. I don't know how to keep the style of my code while updating the OpenAPI definition to show that it will return <code>application/problem+json</code>.</p>
<p>Is this possible to do?</p>
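<p>One direction I've considered is generating the verbose <code>responses</code> entries from a helper instead of writing them by hand — a plain-dict sketch (the helper name and shape are my own, untested against FastAPI's schema generation):</p>

```python
def problem_responses(models):
    """Build a FastAPI-style `responses` mapping that advertises
    application/problem+json for each status code.

    `models` maps status codes to JSON-schema dicts; in real use these could
    come from `YourModel.schema()`.
    """
    return {
        status_code: {
            "content": {"application/problem+json": {"schema": schema}},
            "description": "RFC 7807 problem detail",
        }
        for status_code, schema in models.items()
    }
```

<p>In real use this would be passed as <code>FastAPI(responses=problem_responses({422: RFCUnprocessableEntity.schema()}))</code>, though I haven't verified how FastAPI merges a raw <code>content</code> entry with a <code>model</code> entry.</p>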
|
<python><swagger><fastapi><swagger-ui><openapi>
|
2023-03-13 15:40:44
| 1
| 6,623
|
Tom McLean
|
75,723,868
| 435,129
|
Avoid "-bash: __vsc_prompt_cmd_original: command not found" when launching Terminal from Python in macOS
|
<p>I have a basic macOS app (PyQt5) that launches Terminal.app and executes a command in it:</p>
<pre class="lang-py prettyprint-override"><code> cmd = f" cd {os.getcwd()}; bash -c ./backup.sh"
subprocess.run(['osascript', '-e', f'tell application "Terminal" to do script "{cmd}"'])
</code></pre>
<p>It works, however every command executed in this terminal produces this warning:</p>
<p><a href="https://i.sstatic.net/TIElh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TIElh.png" alt="enter image description here" /></a></p>
<p>That bottom <code>ls</code> is me experimenting. I see the warning comes after each command has executed.</p>
<p>Googling I find an instruction to <code>unset PROMPT_COMMAND</code>. If I <code>echo $PROMPT_COMMAND</code> from this newly created Terminal.app instance I get:</p>
<pre><code>π§’ pi@pis-MacBook-Pro ~/Desktop/tmp/tray
> echo $PROMPT_COMMAND
update_terminal_cwd; __vsc_prompt_cmd_original history -a; history -c; history -r
-bash: __vsc_prompt_cmd_original: command not found
</code></pre>
<p>I'm running the app from VSCode's interactive terminal. Presumably this is related. However <code>echo $PROMPT_COMMAND</code> from this interactive terminal gives:</p>
<pre><code>(.venv)
π§’ pi@pis-MacBook-Pro ~/Desktop/tmp/tray
> echo $PROMPT_COMMAND
__vsc_prompt_cmd_original
</code></pre>
<p>But I think VSCode is using iTerm2 for its interactive terminal. (The backcolour matches that of iTerm2 not Terminal).</p>
<p>My question is: <strong>How to launch the Terminal.app without incurring this warning?</strong></p>
<p>If I try a preceding <code>unset PROMPT_COMMAND</code>:</p>
<pre class="lang-py prettyprint-override"><code> cmd = f" unset PROMPT_COMMAND; cd {os.getcwd()}; bash -c ./backup.sh"
subprocess.run(['osascript', '-e', f'tell application "Terminal" to do script "{cmd}"'])
</code></pre>
<p>... I still get an initial warning, even though it has now disappeared from subsequent commands executed in this new Terminal window:</p>
<p><a href="https://i.sstatic.net/ohbTF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ohbTF.png" alt="enter image description here" /></a></p>
<p>EDIT: I tried running <code>python tray.py</code> from iTerm2 rather than from VSCode's terminal, but it doesn't change anything.</p>
<pre><code>> echo $PROMPT_COMMAND # from iTerm2
history -a; history -c; history -r
</code></pre>
|
<python><bash><macos><terminal>
|
2023-03-13 15:26:27
| 0
| 31,234
|
P i
|
75,723,616
| 1,079,110
|
Does ZMQ have a mechanism to tell whether a connection is still alive?
|
<p>I have a ZMQ server that performs a heavy computation and thus sending the result back to the client via <code>server_socket.send()</code> can take several minutes. The client should wait indefinitely for the result of the computation. However, if the connection drops during the computation, then the client should find another server to connect to.</p>
<p>I know this could be implemented manually by using a background thread on the server that keeps sending "please wait" messages to the client until the result is ready. This way, the client can use <code>client_socket.RCVTIMEO = 1000</code> to raise <code>zmq.Again</code> if the server does not respond within 1 second.</p>
<p>However, I'm wondering whether there is a built-in mechanism in ZMQ for this, because it already uses background threads itself to send and receive messages. Is there a mechanism in ZMQ to tell whether a connection is still alive even though the server code has not called <code>server_socket.send()</code> in a while?</p>
<p>Here is the manual solution (which also only works for the case of a single client for now) that I would like to simplify:</p>
<pre><code>import threading
import time
import zmq
def server():
context = zmq.Context.instance()
socket = context.socket(zmq.ROUTER)
socket.bind('tcp://*:5555')
while True:
identity, _, message = socket.recv_multipart()
print('Received request from client')
print('Start telling the client to wait')
waiting = True
def say_wait():
while waiting:
socket.send_multipart([identity, b'', b'wait'])
time.sleep(0.1)
# TODO: This also needs to get a message from the same client, not any.
_, _, message = socket.recv_multipart()
assert message == b'alright', message
thread = threading.Thread(target=say_wait)
thread.start()
print('Perform heavy server computation')
time.sleep(3)
print('Stop telling the client to wait')
waiting = False
thread.join()
print('Send the result to the client')
socket.send_multipart([identity, b'', b'result'])
def client():
socket = None
while True:
if not socket:
print('Client finds a new server to connect to')
context = zmq.Context.instance()
socket = context.socket(zmq.REQ)
socket.RCVTIMEO = 1000 # 1 second timeout.
address = find_available_server()
socket.connect(f'tcp://{address}')
socket.send(b'request')
try:
while True:
message = socket.recv()
if message == b'wait':
print('Client heard that it should wait longer')
socket.send(b'alright')
continue
else:
print(f'Client got the result of the computation: {message}')
break
except zmq.Again:
print('Client did not hear back from the server')
socket.close(linger=0)
socket = None
def find_available_server():
# In practice, this function asks a central coordinator for
# the IP address of an available server.
return 'localhost:5555'
threading.Thread(target=server).start()
threading.Thread(target=client).start()
</code></pre>
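<p>For context, what I've found so far is that libzmq ≥ 4.2 exposes transport-level heartbeat socket options (ZMTP PING/PONG handled by the background I/O threads), though I haven't verified they cover this REQ/ROUTER case:</p>

```python
import zmq

# Transport-level heartbeats: no extra application messages are needed,
# the I/O threads ping the peer on their own.
context = zmq.Context.instance()
socket = context.socket(zmq.REQ)
socket.setsockopt(zmq.HEARTBEAT_IVL, 1000)      # send a ping every 1000 ms
socket.setsockopt(zmq.HEARTBEAT_TIMEOUT, 5000)  # consider the peer dead after 5 s without a pong
socket.setsockopt(zmq.HEARTBEAT_TTL, 5000)      # TTL advertised to the remote peer
```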
|
<python><zeromq><pyzmq>
|
2023-03-13 15:02:21
| 2
| 34,449
|
danijar
|
75,723,564
| 5,558,497
|
list re-order based on multi-level string match
|
<p>I have python a list that looks like the following:</p>
<pre><code>['T20221019A_E3.B Allele Freq',
'T20221019A_E3.Log R Ratio',
'T20221019A_E3.Gtype',
'Father_FM.B Allele Freq',
'Father_FM.Log R Ratio',
'Father_FM.Gtype',
'Mother_FS.B Allele Freq',
'Mother_FS.Log R Ratio',
'Mother_FS.Gtype']
</code></pre>
<p>First I would like to order this list based on what is left of the dot '.' marker, with the following order: <code>Mother</code>, <code>Father</code> and <code>T20221019A_E3</code>. This would be my first-level sort.</p>
<p>Then I would like to sort by the string to the right of the dot '.', with the following order: <code>Gtype</code>, <code>B Allele Freq</code> and <code>Log R Ratio</code></p>
<p>such that my re-ordered list look like so:</p>
<pre><code>['Mother_FS.Gtype',
'Mother_FS.B Allele Freq',
'Mother_FS.Log R Ratio',
'Father_FM.Gtype',
'Father_FM.B Allele Freq',
'Father_FM.Log R Ratio',
'T20221019A_E3.Gtype',
'T20221019A_E3.B Allele Freq',
'T20221019A_E3.Log R Ratio',]
</code></pre>
<p>What would be the cleanest way of achieving this?</p>
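<p>One clean option is a single <code>sorted</code> call with a two-part key (the order lists below are read off the sample data; anything not starting with <code>Mother</code> or <code>Father</code> sorts last):</p>

```python
cols = [
    'T20221019A_E3.B Allele Freq',
    'T20221019A_E3.Log R Ratio',
    'T20221019A_E3.Gtype',
    'Father_FM.B Allele Freq',
    'Father_FM.Log R Ratio',
    'Father_FM.Gtype',
    'Mother_FS.B Allele Freq',
    'Mother_FS.Log R Ratio',
    'Mother_FS.Gtype',
]

first_level = ['Mother', 'Father']          # everything else sorts after these
second_level = ['Gtype', 'B Allele Freq', 'Log R Ratio']

def sort_key(name):
    left, right = name.split('.', 1)
    # rank of the prefix left of the dot, then rank of the suffix
    left_rank = next((i for i, p in enumerate(first_level) if left.startswith(p)),
                     len(first_level))
    return (left_rank, second_level.index(right))

ordered = sorted(cols, key=sort_key)
```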
|
<python><python-3.x><list>
|
2023-03-13 14:58:14
| 1
| 2,249
|
BCArg
|
75,723,546
| 8,391,698
|
How to resolve "the size of tensor a (1024) must match the size of tensor b" in happytransformer
|
<p>I have the following code. This code uses the GPT-2 language model from the Transformers library to generate text from a given input text. The input text is split into smaller chunks of 1024 tokens, and then the GPT-2 model is used to generate text for each chunk. The generated text is concatenated to produce the final output text. The <a href="https://happytransformer.com" rel="nofollow noreferrer">HappyTransformer</a> library is used to simplify the generation process by providing a pre-trained model and an interface to generate text with a given prefix and some settings. The GPT-2 model and tokenizer are also saved to a local directory. The output of the code is the generated text for the input text, with corrections for grammar suggested by the prefix "grammar: ".</p>
<pre><code>from transformers import GPT2Tokenizer, GPT2LMHeadModel
from happytransformer import HappyGeneration, GENSettings
import torch
model_name = "gpt2"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
save_path = "/home/ubuntu/storage1/various_transformer_models/gpt2"
# save the tokenizer and model to a local directory
tokenizer.save_pretrained(save_path)
model.save_pretrained(save_path)
# Processing
happy_gen = HappyGeneration("GPT-2", "gpt2")
args = GENSettings(num_beams=5, max_length=1024)
mytext = "This sentence has bad grammar. This is a very long sentence that exceeds the maximum length of 512 tokens. Therefore, we need to split it into smaller chunks and process each chunk separately."
prefix = "grammar: "
# Split the text into chunks of maximum length 1024 tokens
max_length = 1024
chunks = [mytext[i:i+max_length] for i in range(0, len(mytext), max_length)]
# Process each chunk separately
results = []
for chunk in chunks:
# Generate outputs for each chunk
result = happy_gen.generate_text(prefix + chunk, args=args)
results.append(result.text)
# Concatenate the results
output_text = " ".join(results)
print(output_text)
</code></pre>
<p>But it gives me this error:</p>
<pre><code>RuntimeError: The size of tensor a (1024) must match the size of tensor b (1025) at non-singleton dimension 3
</code></pre>
<p>How can I resolve it?</p>
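<p>As far as I can tell, the 1024 limit counts <em>tokens</em> while the slicing above counts characters, and <code>max_length=1024</code> equals GPT-2's full context window, so the prompt plus even one generated token overflows it (1025 vs 1024). A stdlib sketch of chunking token ids with headroom reserved for generation (the <code>reserve</code> value is my assumption; the ids would come from <code>tokenizer.encode</code>):</p>

```python
def chunk_token_ids(token_ids, context_size=1024, reserve=64):
    """Split a token-id list so each chunk leaves `reserve` slots free
    for generated tokens inside the model's context window."""
    size = context_size - reserve
    return [token_ids[i:i + size] for i in range(0, len(token_ids), size)]
```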
|
<python><huggingface-transformers><huggingface><gpt-2>
|
2023-03-13 14:56:59
| 1
| 5,189
|
littleworth
|
75,723,462
| 6,036,058
|
Bold and italics with taipy-gui
|
<p>How do I put a string in bold and italics with taipy-gui?</p>
<p>For example, I would like the word <code>famous</code> to have both styles.</p>
<pre><code>from taipy import Gui
Gui(page="My *famous* app").run()
</code></pre>
|
<python><python-3.x><taipy>
|
2023-03-13 14:49:56
| 1
| 667
|
Florian Vuillemot
|
75,723,386
| 15,112,773
|
Rank does not go in order if the value does not change
|
<p>I have a dataframe:</p>
<pre><code>data = [['p1', 't1'], ['p4', 't2'], ['p2', 't1'],['p4', 't3'],
['p4', 't3'], ['p3', 't1'],]
sdf = spark.createDataFrame(data, schema = ['id', 'text'])
sdf.show()
+---+----+
| id|text|
+---+----+
| p1| t1|
| p4| t2|
| p2| t1|
| p4| t3|
| p4| t3|
| p3| t1|
+---+----+
</code></pre>
<p>I want to group by <code>text</code>: if the text does not change, the rank stays the same. For example:</p>
<pre><code>+---+----+----+
| id|text|rank|
+---+----+----+
| p1| t1| 1|
| p2| t1| 1|
| p3| t1| 1|
| p4| t2| 2|
| p4| t3| 3|
| p4| t3| 3|
+---+----+----+
</code></pre>
<p>Unfortunately, the rank function does not give what I need.</p>
<pre><code>w = Window.partitionBy("text").orderBy("id")
sdf2 = sdf.withColumn("rank", F.rank().over(w))
sdf2.show()
+---+----+----+
| id|text|rank|
+---+----+----+
| p1| t1| 1|
| p2| t1| 2|
| p3| t1| 3|
| p4| t2| 1|
| p4| t3| 1|
| p4| t3| 1|
+---+----+----+
</code></pre>
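<p>The behaviour described looks like a dense rank over the whole frame ordered by <code>text</code> — in PySpark that would be <code>F.dense_rank().over(Window.orderBy("text"))</code> with no <code>partitionBy</code> (untested here). The underlying logic, sketched in plain Python:</p>

```python
def dense_rank(values):
    """1-based dense rank: equal values share a rank, ranks have no gaps."""
    order = {value: i + 1 for i, value in enumerate(sorted(set(values)))}
    return [order[value] for value in values]
```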
|
<python><apache-spark><pyspark><apache-spark-sql><label-encoding>
|
2023-03-13 14:43:16
| 1
| 383
|
Rory
|
75,723,316
| 11,906,819
|
Django Admin - Show ManyToOne data on a model in admin
|
<p>I have 2 models, <code>Patient</code> and <code>History</code>.
I would like to show all <code>History</code> records of a <code>Patient</code> inside the <code>Patient</code> model in the Django admin as a table. I was thinking of using a many-to-one relation, since one patient can have more than one history and one history is owned by exactly one patient. In the <code>Patient</code> model admin I would also like to be able to add a new history.
I have tried to use a ManyToMany field, but it doesn't look like the right solution.
Can anyone please help me solve this?</p>
<p>Below is the code:</p>
<pre class="lang-py prettyprint-override"><code>class Patient(models.Model):
name = models.CharField(max_length=100)
class History(models.Model):
patient = models.ForeignKey(Patient, on_delete=models.CASCADE)
diagnosis = models.TextField()
</code></pre>
|
<python><django><django-models><django-forms><django-admin>
|
2023-03-13 14:37:07
| 1
| 375
|
Kevin Lee
|
75,723,093
| 14,269,252
|
Scatter plot with multiple Y axis using plotly express
|
<p>I have to plot my data with multiple Y axes. With the code below I am facing several issues:</p>
<ul>
<li><p>No coloring is shown.</p>
</li>
<li><p>The texts overlap when there are multiple Y axes.</p>
</li>
<li><p>Not all the values of the Y axes are shown.</p>
</li>
<li><p>The X axis has to show dates such as March 10 2012, with as many dates as possible.</p>
</li>
<li><p>One of the Y axes has to show all the numbers as they are, not as a range from 50K to 100K.</p>
</li>
</ul>
<p>How should I modify the code?</p>
<pre><code>fig = go.Figure()
fig.add_trace(go.Scatter(
x=l1.DATE,
y=l1.CODE,
name="yaxis1 data",mode='markers',yaxis="y1", marker = {'color' : 'red'}
))
fig.add_trace(go.Scatter(
x=l2.DATE,
y=l2.CODE,
name="yaxis2 data",mode='markers',marker = {'color' : 'blue'},
yaxis="y2"
))
fig.add_trace(go.Scatter(
x=l3.DATE,
y=l3.CODE,
name="yaxis3 data",mode='markers',marker = {'color' : 'green'},
yaxis="y3"
))
fig.add_trace(go.Scatter(
x=l4.DATE,
y=l4.CODE,
name="yaxis4 data",mode='markers',marker = {'color' : 'black'},
yaxis="y4"
))
# Create axis objects
fig.update_layout(
xaxis=dict(
domain=[0.3, 0.7]
),
yaxis=dict(
title="yaxis title",
titlefont=dict(
color="#1f77b4"
),
tickfont=dict(
color="#1f77b4"
)
),
yaxis2=dict(
title="yaxis2 title",
titlefont=dict(
color="#ff7f0e"
),
tickfont=dict(
color="#ff7f0e"
),
anchor="free",
side="left",
position=0.15
),
yaxis3=dict(
title="yaxis3 title",
titlefont=dict(
color="#d62728"
),
tickfont=dict(
color="#d62728"
),
anchor="x",
side="right"
),
yaxis4=dict(
title="yaxis4 title",
titlefont=dict(
color="#9467bd"
),
tickfont=dict(
color="#9467bd"
),
anchor="free",
side="right",
position=0.85
)
)
# Update layout properties
fig.update_layout(
title_text="multiple y-axes example",
width=800,
)
fig.show()
</code></pre>
<p>What I produced:
<a href="https://i.sstatic.net/t7W3H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t7W3H.png" alt="enter image description here" /></a></p>
|
<python><plotly>
|
2023-03-13 14:18:10
| 1
| 450
|
user14269252
|
75,723,026
| 3,896,008
|
Meaning of this error message python version conflicts
|
<p>While trying to install the package line-profiler-pycharm, I installed it using pip inside my conda environment called <code>wcs</code>. But when I try to run it, I get the following error:</p>
<pre><code> 22:01 Error running 'prep_network': Could not
do Profile Lines execution because the
python package 'line-profile-pycharm' is not
installed to Python 3.8 (wcs): Python 3.10.5 (/Users/myusername/opt/anaconda3/envs/wcs/bin/python)
</code></pre>
<p>Why does it show Python 3.8 and then Python 3.10 after the colon <code>:</code>?</p>
<p>I checked the Python versions inside <code>~/opt/anaconda3/envs/wcs/bin/</code>; I can see <code>python3.11</code> and <code>python3.10</code>, but <code>python3.8</code> does not even exist in there.</p>
|
<python><python-3.x><conda><environment>
|
2023-03-13 14:11:35
| 1
| 1,347
|
lifezbeautiful
|
75,723,005
| 12,596,824
|
Grouping by ID and getting multinomial distribution column
|
<p>I have a data frame like so:</p>
<pre><code> id test
0 1 1.000000
1 2 0.582594
2 2 0.417406
3 3 0.016633
4 3 0.983367
5 4 1.000000
6 5 0.501855
7 5 0.498145
8 6 1.000000
9 7 1.000000
</code></pre>
<p>I want to use the <code>np.random.multinomial()</code> function to generate a new column (<code>target</code>) where for each ID I generate either 1 or 0, depending on the <code>test</code> column (which holds the probabilities for the <code>pvals</code> argument). But for each id, the sum of the new <code>target</code> column will always be 1.</p>
<p>For example, for id 2 I might get something like the array in the commented code:</p>
<pre><code>np.random.multinomial(n = 1, pvals = [0.582594, 0.417406])
# array([1, 0])
</code></pre>
<p>I would then want to create the new column like so, where I may have values like this (obviously the multinomial distribution is probabilistic).</p>
<pre><code> id test target
0 1 1.000000 1
1 2 0.582594 1
2 2 0.417406 0
3 3 0.016633 0
4 3 0.983367 1
5 4 1.000000 1
6 5 0.501855 0
7 5 0.498145 1
8 6 1.000000 1
9 7 1.000000 1
</code></pre>
<p>How can I do this in Python without writing a loop that iterates through each id?</p>
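<p>For what it's worth, a loop-free sketch using <code>groupby(...).transform</code> (renormalising the probabilities defensively and using a seeded <code>Generator</code>; column names follow the example):</p>

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

df = pd.DataFrame({
    "id":   [1, 2, 2, 3, 3, 4, 5, 5, 6, 7],
    "test": [1.0, 0.582594, 0.417406, 0.016633, 0.983367,
             1.0, 0.501855, 0.498145, 1.0, 1.0],
})

# One multinomial draw per id; transform broadcasts the per-group
# result array back onto the original rows.
df["target"] = df.groupby("id")["test"].transform(
    lambda p: rng.multinomial(1, (p / p.sum()).to_numpy())
)
```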
|
<python><pandas><numpy>
|
2023-03-13 14:09:57
| 2
| 1,937
|
Eisen
|
75,722,941
| 4,907,339
|
Can I make order unimportant when localizing a timedelta?
|
<p>If I localize a <code>datetime()</code> object after it has been passed through <code>pandas.Timedelta()</code>, it has the correct date, time, timezone and timezone offset.</p>
<p>However, if I localize a <code>datetime()</code> object before it has been passed through <code>pandas.Timedelta()</code>, its timezone offset is relative to the unaware <code>datetime()</code> object.</p>
<p>I wasn't expecting order to be important. Is there a way that I can make order unimportant?</p>
<pre class="lang-py prettyprint-override"><code>import pandas
import pytz
import datetime
unaware_date = datetime.datetime(2023,3,12,17)
unaware_timedelta_date = unaware_date - pandas.Timedelta(weeks=110)
aware_and_localized_unaware_timedelta_date = pytz.timezone('EST5EDT').localize(unaware_timedelta_date)
print(aware_and_localized_unaware_timedelta_date)
aware_and_localized_date = pytz.timezone('EST5EDT').localize(unaware_date)
aware_and_localized_timedelta_date = aware_and_localized_date - pandas.Timedelta(weeks=110)
print(aware_and_localized_timedelta_date)
</code></pre>
<pre><code>2021-01-31 17:00:00-05:00
2021-01-31 17:00:00-04:00
</code></pre>
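<p>One way to make the result well-defined regardless of where the subtraction happens is to do the arithmetic on the absolute instant — convert to UTC before subtracting, then convert back. A stdlib sketch with <code>zoneinfo</code> (using America/New_York as a stand-in for EST5EDT); note the wall-clock hour then shifts across the DST boundary:</p>

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # stdlib in Python 3.9+

eastern = ZoneInfo("America/New_York")
aware = datetime(2023, 3, 12, 17, tzinfo=eastern)

# Subtracting in UTC applies the shift to the absolute instant,
# then re-expresses it in the local zone with the correct offset.
result = (aware.astimezone(timezone.utc) - timedelta(weeks=110)).astimezone(eastern)
print(result)  # 2021-01-31 16:00:00-05:00
```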
|
<python><pandas>
|
2023-03-13 14:02:19
| 1
| 492
|
Jason
|
75,722,878
| 6,067,528
|
Invalid ELF header when using pyarmor library even though installation matches my architecture
|
<p>I'm trying to test <code>pyarmor</code> with</p>
<p><code>echo "print(1)" > test.py</code></p>
<p><code>pyarmor gen test.py</code></p>
<p>But am receiving the following error:</p>
<p><code>ERROR /usr/local/lib/python3.11/dist-packages/pyarmor/cli/core/pytransform3.so: invalid ELF header</code></p>
<p>AFAIK, ELF header errors are typically due to architecture incompatibility with the installed library. However, my platform & specs inside the <code>ubuntu:23.04</code> container I'm building are:</p>
<pre><code>root@513aa8169a6c:/# uname -a
Linux 513aa8169a6c 5.15.78-0-virt #1-Alpine SMP Fri, 11 Nov 2022 10:19:45 +0000 x86_64 x86_64 x86_64 GNU/Linux
</code></pre>
<p>When I run <code>pip install -vvv pyarmor==8.0.1</code>, I can see the correct .whl for my architecture being detected, i.e. <strong>py2.py3-none-any.whl</strong>.</p>
<pre><code>https://files.pythonhosted.org:443 "GET /packages/2d/47/fa465e1d7712816320928b9a27b7d7dc1a3f90f800f9c3b064f8eacc7578/pyarmor-8.0.1-py2.py3-none-any.whl HTTP/1.1" 200 207362
Downloading pyarmor-8.0.1-py2.py3-none-any.whl (207 kB)
</code></pre>
<p>So what is going wrong here?</p>
|
<python><pip><pyarmor>
|
2023-03-13 13:57:04
| 0
| 1,313
|
Sam Comber
|
75,722,839
| 347,924
|
SendGrid - Email with Dynamic Template and Multiple Recipients only showing dynamic data for one of the recipients?
|
<p>I'm trying to use SendGrid to send out emails to multiple recipients. I don't want the recipients to see each other's email, so I've tried two methods of accomplishing that:</p>
<ul>
<li>adding <code>is_multiple=True</code> to the <code>Email</code> constructor and setting <code>to_emails</code></li>
<li>adding personalizations with each name / email pair</li>
</ul>
<p>In both cases, I'm using a dynamic template. I've tried setting the template data in two ways as well:</p>
<ul>
<li><code>message.dynamic_template_data = {...python dict...}</code> (where <code>message</code> is the result from the <code>Email()</code> constructor)</li>
<li>adding it to each personalization via <code>personalization.dynamic_template_data = {...python dict...}</code></li>
</ul>
<p>I'm testing this with two email addresses.</p>
<p>In all cases, one of the recipients gets the email formatted correctly. The other recipient receives an email with all of the dynamic data missing.</p>
<p>What am I doing wrong?</p>
<hr />
<p><strong>Correction:</strong> I am getting it to work with the personalizations method - however, I'm sending the same data for each recipient... and I'd rather only send it once.</p>
<p>So the question is how to get this to work with a single copy of the <code>dynamic_template_data</code>?</p>
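<p>From reading the v3 mail-send payload, the dynamic template data appears to live on each personalization, which would explain why only the entry that carried the data rendered. A plain-dict sketch of attaching the same data to every recipient (the helper name is mine, and this mirrors the raw JSON body rather than the sendgrid helper library):</p>

```python
def personalizations_for(recipients, shared_template_data):
    """Build the v3 `personalizations` array: one entry per recipient
    (so recipients can't see each other), each carrying its own copy of
    the shared dynamic_template_data."""
    return [
        {"to": [{"email": email}],
         "dynamic_template_data": dict(shared_template_data)}
        for email in recipients
    ]
```

<p>As far as I can tell, the API offers no way to declare the data once for all personalizations, so duplicating it per entry seems unavoidable.</p>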
|
<python><sendgrid><sendgrid-templates>
|
2023-03-13 13:54:24
| 1
| 4,700
|
naydichev
|
75,722,810
| 235,671
|
Preserve argument information for inspect when stacking decorators
|
<p>I'm writing a decorator that should pass a query to the function based on one of its parameters.</p>
<p>This is what it currently looks like, though it is not finished yet (still experimenting):</p>
<pre class="lang-py prettyprint-override"><code>def pass_query(service: str, query: str):
def decorate(decoratee):
@functools.wraps(decoratee)
def decorator(*args, **kwargs):
params = inspect.getfullargspec(decoratee)
if "query" in params and service == params.get("service", None):
kwargs["query"] = query
return decoratee(*args, **kwargs)
return decorator
return decorate
</code></pre>
<p>The problem is that <code>inspect.getfullargspec(decoratee)</code> doesn't retrieve any useful information with two or more decorators, as the outer ones lose the information about the function parameters.</p>
<pre class="lang-py prettyprint-override"><code>@pass_query("foo", "q1.sql")
@pass_query("bar", "q2.sql")
def track_report(service: str, query: Optional[str] = None) -> None:
pass
</code></pre>
<p>Is there a way to preserve the parameters so that the decorators won't break reflection as if there was only one decorator?</p>
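<p>One thing I found while experimenting (self-contained sketch): <code>functools.wraps</code> records the original function in <code>__wrapped__</code>, and <code>inspect.signature()</code> follows that chain even through stacked decorators, whereas <code>inspect.getfullargspec()</code> does not — so switching the introspection call may be enough:</p>

```python
import functools
import inspect

def passthrough(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

@passthrough
@passthrough
def track_report(service: str, query: str = None):
    pass

# inspect.signature follows the __wrapped__ chain set by functools.wraps,
# so the original parameters survive any number of decorators.
params = inspect.signature(track_report).parameters
print(list(params))  # ['service', 'query']
```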
|
<python><python-3.x><dependency-injection><reflection><decorator>
|
2023-03-13 13:52:26
| 1
| 19,283
|
t3chb0t
|
75,722,781
| 1,975,199
|
Server Side Event not being received
|
<p>I've implemented some "real time streaming" to the browser. This method is using Server Side events, the backend is Python/FastAPI. I'm developing this primarily on my workstation. Theory of operation is: There are some custom devices on my network that send "heartbeat/health" information. I pick those up, process them and then send them to the browser (ChartJS). At the moment, I have 3 devices. 2 of these devices are the same type of device. I'll call them Device A (qty 1) and Device B (qty 2). While the browser is navigated away from one of the real time charts the server backend stores the data for a small period of time. Thus, when you navigate to the real-time graph it automatically fills in using the last 60 seconds of data.</p>
<p>I'm doing this all with Asynchronous Queues. Device A has it's own Queue, and Devices B have their own Queue. The packet in the Queue is JSON and the Device ID is inside it.</p>
<p><strong>On my workstation - everything works as I want it to.</strong></p>
<p>Then, I move this software over to the device it will live on, which is an ARM device running Linux at about 800Mhz.</p>
<p>Once I get this software loaded and connected I start to check and make sure it works. Device A is streaming. If I wait, and check Device A the historic data flushes successfully.</p>
<p>If I go to Device B I see only 1 of the 2 devices' data being received. If I reload the page, that device may change, or it may not. If I navigate away and come back, BOTH devices' history data is populated, but only one is actively showing data. I can refresh (F5) and this may change back and forth. The amount of data per device is about 100 bytes.</p>
<p>My first thought, was that this was a processing issue. But flushing the historic data is significantly more data all at once.</p>
<p>Through logging in the JS console I see that one of the messages is never received.<br />
Using a basic print() statement in Python I can see that the message <strong>is</strong> being sent.
Using wireshark I can see the message on the network, and I can also see that the payload is correct.</p>
<p>I am not sure what else to check. Some code follows.</p>
<p>Javascript front end (This is truncated, it's too long)</p>
<pre><code> source.onmessage = function (event) {
/* This line only shows 1 of the 2 packets being received */
console.log(event.data);
/* comes over with single quotes, which is not json friendly */
var data = event.data.replace(/'/g, '"');
const json_data = JSON.parse(data);
</code></pre>
<p>Python side of the SSE: (Variables modified for SO - hopefully no mistakes)</p>
<pre><code>@app.get("/data")
async def data():
async def get_data():
# clear the queue first, then fill it up
await interface.empty_queue()
# this loads the queue with historic data - up to 120 packets of 200 bytes each
for item in interface.objs.keys():
await interface.objs[item].load_queue(interface.health_queue, False)
while True:
# process the queue
message = await interface.health_queue.get()
print(message)
yield f"data:{message}\n\n"
interface.health_queue.task_done()
</code></pre>
<p>Finally, I can see this in wireshark:
Both packets are received on the PC:
<a href="https://i.sstatic.net/jBf7F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jBf7F.png" alt="enter image description here" /></a></p>
<p>Device 1's payload with the correct ID:
<a href="https://i.sstatic.net/rVpU8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rVpU8.png" alt="enter image description here" /></a></p>
<p>Device 2's payload with the correct ID:
<a href="https://i.sstatic.net/RHvn6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RHvn6.png" alt="enter image description here" /></a></p>
<p>However, as I mentioned, only one of the IDs is being processed by the browser:</p>
<p><a href="https://i.sstatic.net/kHpms.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kHpms.png" alt="enter image description here" /></a></p>
<p>What I expect it to look like:
<a href="https://i.sstatic.net/Yo2gy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Yo2gy.png" alt="enter image description here" /></a></p>
<p>I know this is a lot, and very custom, but I am hoping someone can help me identify the source of this issue. This is my first real development with SSE and FastAPI, and it works acceptably when it's not on the target device.</p>
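<p>In case it matters, here is a stripped-down, runnable sketch of the queue/yield pattern I described (the dict payload is made up; my real messages are larger). Note that if the dict is run through json.dumps before yielding, the frame already uses double quotes, so the quote replacement in my JS would not be needed:</p>

```python
import asyncio
import json

async def demo():
    # One shared queue, as in my backend; a fake health packet stands in
    # for the real device data (assumption: messages are dicts).
    queue = asyncio.Queue()
    await queue.put({"device_id": "B1", "health": 99})

    message = await queue.get()
    # SSE frame: "data:<payload>\n\n" — json.dumps emits double quotes.
    frame = f"data:{json.dumps(message)}\n\n"
    queue.task_done()
    return frame

frame = asyncio.run(demo())
print(frame)
```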
|
<python><fastapi><server-sent-events>
|
2023-03-13 13:49:54
| 0
| 432
|
jgauthier
|
75,722,684
| 14,230,633
|
Jupyter Lab Autocomplete repeats text I've already input
|
<p>Example: suppose I'm writing <code>x = df.groupby('column')</code>. If I've typed <code>x = df.group</code> and then hit tab, it autofills to <code>x = x = df.groupby()</code> (notice the extra <code>x = </code>).</p>
<p>Another example: if I'm writing <code>sns.histplot(df.my_column)</code> and I type <code>sns.histplot(df.my_c)</code> and then hit tab, it autofills to <code>sns.histplot(sns.histplot(df.my_column</code></p>
|
<python><jupyter-notebook><autocomplete><jupyter><jupyter-lab>
|
2023-03-13 13:41:18
| 0
| 567
|
dfried
|
75,722,675
| 2,255,168
|
Jinja2: Object of type Namespace is not JSON serializable
|
<p>To send my backend data to my frontend JavaScript in my Jinja2 view, I'm using something like:</p>
<pre><code>{% for organization in team.organizations %}
{{ teamOrganizations.append(organization.namespace) or "" }}
{% endfor %}
<div data-json='{{ teamOrganizations|tojson }}'></div>
</code></pre>
<p>But with a namespace, I have the following error : <code>Object of type Namespace is not JSON serializable</code></p>
<p>If I do a {{ organization.namespace }}, I get the expected value displayed as a string, so why is this not JSON serializable?</p>
<p>Thanks</p>
<p>EDIT: I fixed it with <code>teamOrganizations.append("" ~ organization.namespace)</code></p>
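<p>For anyone curious, a plain-Python sketch of what I believe is going on (the <code>FakeNamespace</code> class is made up — it just mimics an object that renders fine as a string but has no JSON encoder):</p>

```python
import json

# Hypothetical stand-in for the namespace-like object: __str__ makes it
# display correctly in a template, but json.dumps has no encoder for it.
class FakeNamespace:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return self.name

ns = FakeNamespace("acme")

try:
    json.dumps([ns])          # fails: Object of type FakeNamespace is not JSON serializable
    serializable = True
except TypeError:
    serializable = False

# The string coercion from my edit makes it serializable.
coerced = json.dumps([str(ns)])
print(coerced)  # ["acme"]
```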
|
<python><jinja2>
|
2023-03-13 13:40:58
| 0
| 1,380
|
Paul
|
75,722,645
| 13,345,744
|
How to convert categorical data to indices and print the assignment?
|
<p><strong>Context</strong>
I have a series with categorical data. My goal is to convert it to indices, like in the example below. However, there are two other requirements:</p>
<ul>
<li>nan values should stay nan and not converted to an index e.g. -1</li>
<li>I would like to print the assignment of category to index</li>
</ul>
<hr />
<p><strong>Code</strong></p>
<pre class="lang-py prettyprint-override"><code>series = series.astype('category').cat.codes
</code></pre>
<p>Desired conversion:</p>
<pre class="lang-py prettyprint-override"><code>red -> 0
blue -> 1
green -> 2
nan -> nan
red -> 0
yellow -> 3
green -> 2
nan -> nan
</code></pre>
<p><strong>Question</strong></p>
<p>How can I achieve this goal?</p>
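<p>For reference, a minimal runnable sketch of what I have so far (toy data assumed) — note that the codes come out alphabetical and NaN becomes -1, neither of which matches my goal:</p>

```python
import numpy as np
import pandas as pd

series = pd.Series(['red', 'blue', 'green', np.nan, 'red', 'yellow', 'green', np.nan])
cat = series.astype('category')

codes = cat.cat.codes                          # NaN is encoded as -1 here
mapping = dict(enumerate(cat.cat.categories))  # index -> category, alphabetical order
print(mapping)  # {0: 'blue', 1: 'green', 2: 'red', 3: 'yellow'}
```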
|
<python><pandas><categorical-data>
|
2023-03-13 13:37:58
| 2
| 1,721
|
christophriepe
|
75,722,627
| 6,492,292
|
Get text based on coordinates as same format as in PDF
|
<p>I have coordinate details, but I'm unable to find any method in pymupdf to fetch a block of data based on the coordinates. Is there any method in pymupdf that can do this? I'm open to other libraries, though I already used PDFQuery which is not working properly.</p>
<p>Explanation:
I want to read block of text within the given coordinates using pymupdf. For example, if I have coordinates x0, y0, x1, y1 I should be able to get the text within the block the same format as in PDF.</p>
<p>For example:
if I do</p>
<pre class="lang-py prettyprint-override"><code>print(page.get_textbox(fitz.Rect([40.91999816894531, 274.94500732421875, 349.88214111328125, 364.9531555175781])))
</code></pre>
<p>It's giving me a string with each word in that block separated by a new line. Is there a way I can get the block as the same format as in PDF?</p>
|
<python><coordinates><pymupdf>
|
2023-03-13 13:36:51
| 0
| 1,721
|
m9m9m
|
75,722,594
| 11,261,546
|
boost smart pointer can cast but std cant?
|
<p>I'm working on some quite old code that has functions that return an allocated raw pointer (here's an oversimplification of what the function does):</p>
<pre><code>MyClass* func(){
MyClass* myPtr = new MyClass();
return myPtr;
}
</code></pre>
<p>I want to bind this class and the function that returns the pointer into python using pybind11:</p>
<p>The class:</p>
<pre><code>py::class_<MyClass>(m, "PyClass")
    // Methods and attributes ...
</code></pre>
<p>The function (?):</p>
<pre><code>m.def("get_object", &func);
</code></pre>
<p>This compiles and works the first time I launch it, however it crashes afterwards. I thought it's because the allocated object was never de-allocated, so I tried (with no success) using smart pointers and setting the return policy:</p>
<pre><code>m.def("get_object_smart", [](){
auto a = std::unique_ptr<MyClass>(func());
return a;
}, py::return_value_policy::copy); // also tried take_ownership
</code></pre>
<p>Is there a way of telling pybind11 when to call the object's destructor? Is it different when the object is allocated on the heap with <code>new</code>?</p>
|
<python><c++><pybind11>
|
2023-03-13 13:34:32
| 1
| 1,551
|
Ivan
|
75,722,419
| 8,981,425
|
Which is the best way to define module variables in Python?
|
<p>I am developing a project with FastAPI and I have to connect to a database. I have decided to create a module with database-related functionality, like managing the connection or getting the table's metadata (I am using SQLAlchemy). I want to retrieve this information without recalculating it or duplicating the objects. This can be done in many ways. The one I have chosen due to its simplicity and clarity is to store the values in variables that get the data from functions:</p>
<p>db.py:</p>
<pre><code>def get_conn():
[...]
return conn
def get_metadata(conn):
[...]
return metadata
conn = get_conn()
meta = get_metadata(conn)
</code></pre>
<p>other_file.py:</p>
<pre><code>from db import conn, meta
[...]
</code></pre>
<p>This works fine, but I am not sure if it is the best way to do it. Another option is to use the @cache decorator on the functions, remove the variables, and call the functions instead of accessing the variables. I would like to know if there is a better or standard way to solve this. Thanks!</p>
<p><strong>EDIT</strong>: I am not asking for an opinion. I want to know which are my options and their tradeoffs. Some of the replies and comments have been really useful so far since they have pointed out problems with my implementation or have proposed interesting alternatives.</p>
|
<python><module>
|
2023-03-13 13:20:20
| 1
| 367
|
edoelas
|
75,722,375
| 10,518,336
|
How to get zero values in list comprehension Python
|
<p>I would like every element in <code>ll</code> to be <code>0</code> where the corresponding element in <code>frac</code> is <code>0</code>. So the first value of <code>ll</code> should be <code>0</code> and the rest unchanged. Thank you.</p>
<pre><code>
frac = [0,.1,.5,.3,.7,1.1]
ll = [[i + 1 for i in range(5) if i <= k*4 < (i+1)] for k in frac]
[[1], [1], [3], [2], [3], [5]]
</code></pre>
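<p>For clarity, the output I am after could come from something like this (assuming a plain <code>0</code>, rather than <code>[0]</code>, is what is wanted for the zero entries — I am open to either):</p>

```python
frac = [0, .1, .5, .3, .7, 1.1]

# Same comprehension as above, but zero entries of frac map to a plain 0.
ll = [0 if k == 0 else [i + 1 for i in range(5) if i <= k * 4 < (i + 1)]
      for k in frac]
print(ll)  # [0, [1], [3], [2], [3], [5]]
```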
|
<python><list>
|
2023-03-13 13:15:19
| 2
| 366
|
Expectation mean first moment
|
75,722,345
| 14,550,787
|
Accessing GCP Secret Version Value using Discovery.build with python
|
<p>Something weird is happening when trying to pull a secret version value using Python. I created a client with the discovery.build method, as documented here: <a href="https://googleapis.github.io/google-api-python-client/docs/dyn/secretmanager_v1.projects.secrets.versions.html" rel="nofollow noreferrer">https://googleapis.github.io/google-api-python-client/docs/dyn/secretmanager_v1.projects.secrets.versions.html</a></p>
<pre><code>sm = discovery.build("secretmanager", "v1", credentials = credentials)
</code></pre>
<p>I can return the secret,</p>
<pre><code>response = self.sm.projects().secrets().versions().access(name 'projects/17344xxxxx/secrets/api/versions/latest').execute()
pwd = response['payload']['data']
</code></pre>
<p>I am not sure how to decode the response to get the actual value; after filtering the dictionary down, I just get an encoded string.</p>
<p>Thanks</p>
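<p>If it helps, my understanding from the REST docs (an assumption I have not yet confirmed for my project) is that the payload data comes back base64-encoded, so decoding would look something like this — a simulated response stands in for the real API call:</p>

```python
import base64

# Simulated API response: the secret bytes arrive base64-encoded in
# payload.data (assumption based on the REST documentation).
response = {"payload": {"data": base64.b64encode(b"s3cret-value").decode()}}

pwd = base64.b64decode(response["payload"]["data"]).decode("utf-8")
print(pwd)  # s3cret-value
```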
|
<python><google-cloud-platform><google-secret-manager>
|
2023-03-13 13:12:57
| 1
| 583
|
ColeGulledge
|
75,722,326
| 15,959,591
|
slicing all the columns of a dataframe simultaneously
|
<p>I have a data frame of an ECG. The data frame consists of 12 columns (leads, in ECG vocabulary). I want to cut some slices of all columns, like in the image below.
I find the peak of the first column, define the length of the cut, and slice all the columns simultaneously.
<strong>Edit:</strong> In fact, I choose the highest peaks of the first column (lead) and stack them in a list named pos. Then I use these peaks as my reference line and cut, for example, 500 data points before and 500 data points after the reference line, and stack the slices in a list named cut.
This is my code, but I get the error "list assignment index out of range":</p>
<pre><code>def slicing (patient, pos, len):
cut = []
for lead in patient:
for position in pos:
cut[position] = patient.loc[:, str(lead)][(position - len) : (position + len)]
print (cut)
</code></pre>
<p>Could you please tell me how I can fix the error, and how I can have my slices named like
cut[1] = [...]
cut[2] = [...]?</p>
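<p>A sketch of the direction I am considering — a dict keyed by (lead, position) so the slices can be looked up by name (toy data, and a <code>length</code> parameter instead of shadowing the built-in <code>len</code>; the naming scheme is my own assumption):</p>

```python
import pandas as pd

def slicing(patient, pos, length):
    # Collect slices in a dict keyed by (lead, position) so each slice
    # can be addressed individually afterwards.
    cut = {}
    for lead in patient.columns:
        for position in pos:
            cut[(lead, position)] = patient[lead].iloc[position - length : position + length]
    return cut

# Two tiny fake leads stand in for the 12 real ECG columns.
patient = pd.DataFrame({"I": range(20), "II": range(100, 120)})
cut = slicing(patient, pos=[10], length=3)
print(cut[("I", 10)].tolist())  # [7, 8, 9, 10, 11, 12]
```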
|
<python><pandas><dataframe><data-processing>
|
2023-03-13 13:11:21
| 1
| 554
|
Totoro
|
75,722,293
| 1,862,965
|
How to send multiple file over Rest API and then read files and merge with pandas
|
<p>How can I send multiple files over a REST API and then read the files and merge them with pandas?</p>
<p>The problem is:
I have 4 or more files.
2 files are CSV and the filename matches *_data.csv.
2 files are CSV and the filename matches *_prices.csv.</p>
<p>How can I read only the *_data.csv files with df = pd.read_csv,
and likewise only the *_prices.csv files into df1 = pd.read_csv, from the REST API?</p>
<p>After all that, I want to merge df with df1.</p>
<p>Thanks a lot</p>
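<p>A sketch of the filter-by-filename-then-merge part (in-memory CSV strings stand in for the files received over the API, and the merge key <code>id</code> is an assumption about my data):</p>

```python
import io
import pandas as pd

# Stand-ins for files received over the API: name -> content.
files = {
    "a_data.csv":   "id,x\n1,10\n2,20\n",
    "a_prices.csv": "id,price\n1,99\n2,88\n",
}

# Read only the files whose names match each pattern.
df = pd.concat(pd.read_csv(io.StringIO(c))
               for n, c in files.items() if n.endswith("_data.csv"))
df1 = pd.concat(pd.read_csv(io.StringIO(c))
                for n, c in files.items() if n.endswith("_prices.csv"))

merged = df.merge(df1, on="id")
print(merged.shape)  # (2, 3)
```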
|
<python><pandas>
|
2023-03-13 13:08:29
| 1
| 386
|
user1862965
|
75,722,285
| 19,369,310
|
New column based on the largest 3 elements in a group indexed by Group ID
|
<p>I have a dataframe that looks like</p>
<pre><code>Group_ID probability
34883 0.002676
34883 0.17826266
34883 0.01399753
34883 0.04569782
34883 0.02799506
34883 0.02634829
34883 0.02923014
34883 0.13544669
34883 0.07595718
34883 0.19246604
34883 0.20028818
34883 0
34883 0
34883 0.07163442
34897 0.03329843
34897 0.07643979
34897 0.09570681
34897 0.00376963
34897 0.01780105
34897 0.0008377
34897 0.08125654
34897 0.10764398
34897 0.25780105
34897 0.10910995
34897 0
34897 0.02743455
34897 0.18890052
34897 0
</code></pre>
<p>where the probabilities in <code>probability</code> sum to 1 for each <code>Group_ID</code>. I want to create a new column called <code>top3</code> which flags the three largest probabilities for each <code>Group_ID</code>: top3 = 1 if the row holds one of the three largest probabilities for that Group_ID, and 0 otherwise. Hence the outcome looks like:</p>
<pre><code>Group_ID probability top3
34883 0.002676 0
34883 0.17826266 1
34883 0.01399753 0
34883 0.04569782 0
34883 0.02799506 0
34883 0.02634829 0
34883 0.02923014 0
34883 0.13544669 0
34883 0.07595718 0
34883 0.19246604 1
34883 0.20028818 1
34883 0 0
34883 0 0
34883 0.07163442 0
34897 0.03329843 0
34897 0.07643979 0
34897 0.09570681 0
34897 0.00376963 0
34897 0.01780105 0
34897 0.0008377 0
34897 0.08125654 0
34897 0.10764398 0
34897 0.25780105 1
34897 0.10910995 1
34897 0 0
34897 0.02743455 0
34897 0.18890052 1
34897 0 0
</code></pre>
<p>I googled for a bit and I think it might have something to do with idxmax but I am not sure how to proceed. Thank you in advance.</p>
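<p>A toy sketch of the direction I suspect might work — rank within each group and flag the top three (unverified against my real data; the small frame below is made up):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Group_ID":    [1, 1, 1, 1, 2, 2, 2, 2],
    "probability": [.1, .4, .3, .2, .05, .5, .25, .2],
})

# Rank probabilities within each group, largest first, then flag the
# three best-ranked rows per group.
df["top3"] = (df.groupby("Group_ID")["probability"]
                .rank(method="first", ascending=False) <= 3).astype(int)
print(df["top3"].tolist())  # [0, 1, 1, 1, 0, 1, 1, 1]
```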
|
<python><python-3.x><pandas><dataframe><group-by>
|
2023-03-13 13:07:43
| 2
| 449
|
Apook
|
75,722,191
| 17,630,139
|
ValueError: Error code 'I2041' supplied to 'ignore' option does not match '^[A-Z]{1,3}[0-9]{0,3}$'
|
<p>Upon installing this <a href="https://pypi.org/project/flake8-import-restrictions/" rel="nofollow noreferrer">flake8 plugin</a> <code>flake8-import-restrictions==1.1.1</code>, I noticed this error appear. I tried ignoring it by adding the code to the ignore option in my setup.cfg file, as shown below.</p>
<pre class="lang-py prettyprint-override"><code>[isort]
force_single_line=True
ensure_newline_before_comments = True
single_line_exclusions=[]
line_length = 100
src_paths = myproject,tests
skip= .gitignore, .env
skip_glob=env/*, spec/*
profile=google
lexicographical=True
order_by_type=True
color_output=True
[flake8]
ignore = D413, E203, E266, E501, E711, F401, F403, I2041, W503
max-line-length = 100
max-complexity = 18
application-import-names = myproject,tests
import-order-style = google
exclude = tests/integration/.env.shadow, env
[pylint]
max-line-length = 100
[pylint.messages_control]
disable = C0330, C0326
</code></pre>
<p>However, flake8 appears to not like this ignore option:</p>
<pre class="lang-py prettyprint-override"><code> File "/Users/gree030/Workspace/myproject/env/lib/python3.8/site-packages/flake8/options/config.py", line 131, in parse_config
raise ValueError(
ValueError: Error code 'I2041' supplied to 'ignore' option does not match '^[A-Z]{1,3}[0-9]{0,3}$'
</code></pre>
<h1>What I have Tried</h1>
<ul>
<li>uninstalling the plugin to verify it's that particular plugin</li>
</ul>
<p>Of course, not seeing this error would mean uninstalling the plugin, but I would really like to use it. How can this be resolved?</p>
|
<python><flake8>
|
2023-03-13 13:00:01
| 1
| 331
|
Khalil
|