| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,340,220
| 3,798,292
|
pgf backend in matplotlib - "input stack size" capacity exceeded with pdflatex and lualatex
|
<p>I'm trying to use the pgf backend in matplotlib to create figure files for a manuscript, following the example in the matplotlib docs <a href="https://matplotlib.org/stable/tutorials/text/pgf.html" rel="nofollow noreferrer">here</a>. I don't have xelatex so I've tried using pdflatex and lualatex. However, if I try to use the docs' example and save the figure, I get an error. Internet searching hasn't helped much. I would be grateful to know if I am doing something wrong, or if it works for other people (in which case knowing your matplotlib and pdfTeX/LuaTeX versions would be helpful - mine are matplotlib 3.5.1, pdfTeX 3.1415926-2.5-1.40.14 (TeX Live 2013) and LuaTeX beta-0.76.0-2020040104). Or is it likely that something in my TeX setup needs to be changed?</p>
<p>The error message I get is</p>
<pre><code>! LaTeX Error: File `pdftex.def' not found.
Type X to quit or <RETURN> to proceed,
or enter new name. (Default extension: def)
Enter file name: ! TeX capacity exceeded, sorry [input stack size=5000].
<argument> \usepackage
[T1]{fontenc}
l.92 ...ed{ver@\Gin@driver}{\input{\Gin@driver}}{}
</code></pre>
<p>This is with the following code, using the example in the docs:</p>
<pre><code>import matplotlib.pyplot as plt
plt.rcParams.update({
"pgf.texsystem": "pdflatex",
"pgf.preamble": "\n".join([
r"\usepackage[utf8x]{inputenc}",
r"\usepackage[T1]{fontenc}",
r"\usepackage{cmbright}",
]),
})
fig, ax = plt.subplots(figsize=(4.5, 2.5))
ax.plot(range(5))
ax.text(0.5, 3., "serif", family="serif")
ax.text(0.5, 2., "monospace", family="monospace")
ax.text(2.5, 2., "sans-serif", family="sans-serif")
ax.set_xlabel(r"µ is not $\mu$")
fig.tight_layout(pad=.5)
fig.savefig('/user/home/test.pdf', backend='pgf')
</code></pre>
<p>The last line is how I'm trying to save the file, which is the only line not in the docs' example, but it says above that this should work. The other methods it gives to use the backend don't work either.</p>
|
<python><matplotlib><tex>
|
2023-02-03 19:28:23
| 1
| 351
|
PeterW
|
75,340,187
| 5,718,264
|
How to run a second Vertex AI pipeline after the first one has completed successfully?
|
<p>I have a scenario where I want to deploy two Vertex AI pipelines, where the second one depends on the first run. Is there a way I can run the second Vertex AI pipeline only after the first one has completed?</p>
|
<python><google-cloud-platform><google-cloud-vertex-ai>
|
2023-02-03 19:24:34
| 1
| 834
|
Shadab Hussain
|
75,339,760
| 1,075,996
|
Postfix: Python Milter not able to change email header
|
<p>I have some servers that create automated emails which all pass through a Postfix MTA. The software that generates the email does not strictly follow RFCs, and sometimes generates emails with duplicate message-ID headers. The software cannot be changed, so I am trying to intercept and fix these messages on their way through the MTA.</p>
<p>I have a milter daemon written in Python that is attempting to remove duplicate message IDs from inbound messages.</p>
<p>The code is below:</p>
<pre><code>import Milter
import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
file_handler = logging.FileHandler('/var/log/milter.log')
file_handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
logger.addHandler(file_handler)
seen_message_ids = set()
class MessageIDMilter(Milter.Base):
def __init__(self):
self.id = Milter.uniqueID()
@Milter.noreply
def connect(self, IPname, family, hostaddr):
logger.debug("Milter connected to %s family %s at address %s" % (IPname, family, hostaddr))
self.IPname = IPname
self.family = family
self.hostaddr = hostaddr
return Milter.CONTINUE
@Milter.noreply
def header(self, name, hval):
logger.debug("Received header %s with value %s" % (name, hval))
if name.lower() == "message-id":
logger.debug("Found message ID: %s" % hval)
if hval in seen_message_ids:
logger.debug("Deleting duplicate message ID: %s" % hval)
try:
self.chgheader(name, 1, "")
except Exception as e:
logger.error("Error removing from: %s error message: %s" % (name, e))
else:
seen_message_ids.add(hval)
return Milter.CONTINUE
@Milter.noreply
def eoh(self):
logger.debug("Reached end of headers")
return Milter.ACCEPT
if __name__ == "__main__":
logger.debug("Script started OK")
Milter.factory = MessageIDMilter
Milter.runmilter("message-id-milter", "inet:10001@localhost", 0)
</code></pre>
<p>The script runs and can be called from Postfix. When attempting to delete the duplicate header with chgheader, the following error is thrown:</p>
<p><code>2023-02-03 18:22:44,983 ERROR Error removing from: Message-ID error message: cannot change header</code></p>
<p>I cannot see anything wrong with this request, nor any other method to remove the header. The docs suggest this should work: <a href="https://pythonhosted.org/pymilter/milter_api/smfi_chgheader.html" rel="nofollow noreferrer">https://pythonhosted.org/pymilter/milter_api/smfi_chgheader.html</a></p>
|
<python><postfix-mta><mta><milter>
|
2023-02-03 18:34:41
| 0
| 453
|
btongeorge
|
75,339,741
| 7,437,143
|
Plotly Dash: change networkx node colours based on user input?
|
<p>After creating the minimal working example below, I tried to change the color of the nodes in the graphs based on user input. Specifically, I have <code>n</code> lists of colors (one color per node), and I would like the user to be able to loop (ideally forward and backwards) through the node color lists. (In essence I show firing neurons using the color).</p>
<h2>MWE</h2>
<pre class="lang-py prettyprint-override"><code>"""Generates a graph in dash."""
import dash
import dash_core_components as dcc
import dash_html_components as html
import networkx as nx
import plotly.graph_objs as go
# Create graph G
G = nx.DiGraph()
G.add_nodes_from([0, 1, 2])
G.add_edges_from(
[
(0, 1),
(0, 2),
],
weight=6,
)
# Create a x,y position for each node
pos = {
0: [0, 0],
1: [1, 2],
2: [2, 0],
}
# Set the position attribute with the created positions.
for node in G.nodes:
G.nodes[node]["pos"] = list(pos[node])
# add color to node points
colour_set_I = ["rgb(31, 119, 180)", "rgb(255, 127, 14)", "rgb(44, 160, 44)"]
colour_set_II = ["rgb(10, 20, 30)", "rgb(255, 255, 0)", "rgb(0, 255, 255)"]
# Create nodes
node_trace = go.Scatter(
x=[],
y=[],
text=[],
mode="markers",
hoverinfo="text",
marker=dict(size=30, color=colour_set_I),
)
for node in G.nodes():
x, y = G.nodes[node]["pos"]
node_trace["x"] += tuple([x])
node_trace["y"] += tuple([y])
# Create Edges
edge_trace = go.Scatter(
x=[],
y=[],
line=dict(width=0.5, color="#888"),
hoverinfo="none",
mode="lines",
)
for edge in G.edges():
x0, y0 = G.nodes[edge[0]]["pos"]
x1, y1 = G.nodes[edge[1]]["pos"]
edge_trace["x"] += tuple([x0, x1, None])
edge_trace["y"] += tuple([y0, y1, None])
################### START OF DASH APP ###################
app = dash.Dash()
fig = go.Figure(
data=[edge_trace, node_trace],
layout=go.Layout(
xaxis=dict(showgrid=True, zeroline=True, showticklabels=True),
yaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
),
)
app.layout = html.Div(
[
html.Div(dcc.Graph(id="Graph", figure=fig)),
]
)
if __name__ == "__main__":
app.run_server(debug=True)
</code></pre>
<p>When you save this as <code>graph.py</code> and run it with: <code>python graph.py</code>, you can open a browser, go to 127.0.0.1:8050 and see:
<a href="https://i.sstatic.net/n5Cn6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n5Cn6.png" alt="enter image description here" /></a>
For <code>colour_set_I</code>, and:
<a href="https://i.sstatic.net/9RPXi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9RPXi.png" alt="enter image description here" /></a>
For <code>colour_set_II</code>.</p>
<h2>Question</h2>
<p>How can I get a slider e.g. from <code>0</code> to <code>n</code> or a <code>next></code> and <code>back<</code> button that loads the next/previous colour list into the nodes?</p>
<h2>Hacky</h2>
<p>I noticed when I change the <code>graph.py</code> file line:</p>
<pre class="lang-py prettyprint-override"><code>marker=dict(size=30, color=colour_set_I),
</code></pre>
<p>to:</p>
<pre class="lang-py prettyprint-override"><code>marker=dict(size=30, color=colour_set_II),
</code></pre>
<p>it automatically updates the node colors, however, typing the frame index into the colour set and pressing <code>ctrl+s</code> is somewhat elaborate, even though I thoroughly enjoy keyboard control over clicking.</p>
|
<python><colors><plotly><networkx><plotly-dash>
|
2023-02-03 18:32:12
| 2
| 2,887
|
a.t.
|
75,339,687
| 11,099,153
|
Python's standard hashing algorithm
|
<p>Python documentation describes hashing procedure of fractions (and ints, and floats) <a href="https://docs.python.org/3/library/stdtypes.html" rel="nofollow noreferrer">here</a>.</p>
<p>I understand the approach as far as 'treat fractions as mod P' and 'inverses mod P are easyish to compute'. Afterwards encode sign by negating hash when appropriate.</p>
<p>There is an additional rule:</p>
<blockquote>
<p>If x = m / n is a negative rational number define hash(x) as -hash(-x). If the resulting hash is -1, replace it with -2.</p>
</blockquote>
<p>This makes no sense to me since it leads to</p>
<pre class="lang-py prettyprint-override"><code>hash(-1) == hash(-2)
</code></pre>
<p>Surely getting distinct values on common inputs is rather important for a good hash function!</p>
<p>How come this is a better choice than <code>hash(-1) == -1</code>?</p>
<p>The implementation in <a href="https://github.com/python/cpython/blob/main/Python/pyhash.c" rel="nofollow noreferrer">CPython</a> gives no comment on this odd choice either.</p>
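For what it's worth, the collision is real and easy to confirm. The reason is a CPython implementation detail: a C-level hash function returning -1 signals an error, so -1 can never be a legitimate hash value and is remapped to -2:

```python
# In CPython, tp_hash returning -1 means "an exception occurred", so no
# object may legitimately hash to -1; such values are remapped to -2.
print(hash(-1))              # -2
print(hash(-2))              # -2 (collision by design)
print(hash(-1) == hash(-2))  # True
```
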
|
<python><hash>
|
2023-02-03 18:25:37
| 0
| 349
|
Radost Waszkiewicz
|
75,339,598
| 19,504,610
|
Using SQLModel to have multiple tables with same columns
|
<p>Let's say I have two SQL tables, <code>address</code> and <code>email</code>.</p>
<p>For the <code>address</code> table I may have the following generic fields:</p>
<ol>
<li><code>postal_code</code></li>
<li><code>street_name</code></li>
</ol>
<p>and additionally, I would want the following two fields:</p>
<ol start="3">
<li><code>is_verified</code>, of type <code>Enum</code> with only one of the three variants in <code>Unverified</code>, <code>Verified</code>, <code>InProgress</code></li>
<li><code>in_progress_description</code>, a <code>String</code> of a comment on the progress status</li>
</ol>
<p>Similarly, for the <code>email</code> table, I would want the following generic field:</p>
<ol>
<li><code>email_addr</code> of type <code>pydantic.EmailStr</code></li>
</ol>
<p>and also the fields <code>is_verified</code> and <code>in_progress_description</code>.</p>
<p>Should I create a mixin like the following for the verifiability of the two tables (SQLModels), or how should I write my <code>Email</code> and <code>Address</code> classes to avoid duplicating code for the <code>is_verified</code> and <code>in_progress_description</code> fields?</p>
<p>=== Mixin ===</p>
<pre><code>class VerifiableMixin:
verification_status: VerificationStatusEnum = VerificationStatusEnum.Unverified
verification_description: str = None
</code></pre>
<p>Then have the <code>Email(SQLModel, table=True)</code> subclassing it too.</p>
<pre><code>class Email(SQLModel, VerifiableMixin, table=True):
email_addr: EmailStr
</code></pre>
<p>=== SQLModel ===</p>
<pre><code>class VerifiableBaseSQLModel:
verification_status: VerificationStatusEnum = Field(default=VerificationStatusEnum.Unverified)
verification_description: str = Field(default=None)
</code></pre>
<pre><code>class Email(SQLModel, VerifiableBaseSQLModel, table=True):
email_addr: EmailStr
</code></pre>
|
<python><fastapi><pydantic><sqlmodel>
|
2023-02-03 18:14:14
| 1
| 831
|
Jim
|
75,339,463
| 371,683
|
Read columns from Parquet in GCS without reading the entire file?
|
<p>Reading a parquet file from disc I can choose to read only a few columns (I assume it scans the header/footer, then decides). Is it possible to do this remotely (such as via Google Cloud Storage?)</p>
<p>We have 100 MB parquet files with about 400 columns and we have a use-case where we want to read 3 of them, and show them to the user. The user can choose which columns.</p>
<p>Currently we download the entire file, and then filter it but this takes time.</p>
<p>Long term we will be putting it into Google BigQuery, and the problem will be solved.</p>
<p>More specifically we use Python with either pandas or PyArrow and ideally would like to use those (either with a GCS backend or manually getting the specific data we need via a wrapper). This runs in Cloud Run so we would prefer to not use Fuse, although that is certainly possible.</p>
<p>I intend to use Python and pandas/pyarrow as the backend for this, running in Cloud Run (hence why data size matters, because a 100 MB download to disk actually means 100 MB downloaded to RAM).</p>
<p>We use <code>pyarrow.parquet.read_parquet</code> with <code>to_pandas()</code> or <code>pandas.read_parquet</code>.</p>
|
<python><pandas><google-cloud-storage><parquet><pyarrow>
|
2023-02-03 17:57:18
| 1
| 1,999
|
Niklas B
|
75,339,340
| 14,791,134
|
How to use Machine Learning in Python to predict a binary outcome with a Pandas Dataframe
|
<p>I have the following code:</p>
<pre class="lang-py prettyprint-override"><code>import nfl_data_py as nfl
pbp = nfl.import_pbp_data([2022], downcast=True, cache=False, alt_path=None)
</code></pre>
<p>which returns a dataframe of every play that occurred in the 2022 NFL season. The columns I want to train it on are <code>score_differential</code>, <code>yardline_100</code>, <code>ydstogo</code>, <code>down</code> and <code>half_seconds_remaining</code> to predict the <code>play_type</code> - either <code>run</code>, or <code>pass</code>.</p>
<p>Example: I feed it a -4 score differential, 25 yard line, 4th down, 16 yards to go, and 300 half seconds remaining - it would return whatever it learned from the dataframe, probably <code>pass</code>.</p>
<p>How would I go about doing this? Should I use a scikit-learn decision tree?</p>
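As a rough illustration of the kind of pipeline being described, a scikit-learn decision tree can be fit on those five feature columns (the dataframe below is tiny synthetic stand-in data, not the real NFL play-by-play data):

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Tiny synthetic stand-in for the real play-by-play dataframe.
df = pd.DataFrame({
    "score_differential":     [-4,  3,  0, -7, 10, -3],
    "yardline_100":           [25, 60, 40, 75, 30, 50],
    "ydstogo":                [16,  2, 10,  8,  1, 12],
    "down":                   [ 4,  1,  2,  3,  4,  3],
    "half_seconds_remaining": [300, 1500, 900, 120, 60, 600],
    "play_type":              ["pass", "run", "pass", "pass", "run", "pass"],
})

features = ["score_differential", "yardline_100", "ydstogo", "down",
            "half_seconds_remaining"]
X, y = df[features], df["play_type"]

# Fit the tree, then predict a single hypothetical game situation.
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
situation = pd.DataFrame([[-4, 25, 16, 4, 300]], columns=features)
print(clf.predict(situation)[0])  # "run" or "pass", learned from the data
```
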
|
<python><pandas><dataframe><machine-learning><artificial-intelligence>
|
2023-02-03 17:47:44
| 1
| 468
|
earningjoker430
|
75,339,183
| 2,868,899
|
Reshape rows of columns into a single column
|
<p>Would I need to use pivot to make a row of columns into one column? I have a dataset like the one shown, but within each row are 8 separate rows. I need to make each cell its own row.</p>
<p>This would be an example of what I would start with:</p>
<pre><code>d = {'col1':[1,9,17],'col2':[2,10,18],'col3':[3,11,19],'col4':[4,12,20],'col5':[5,13,21],'col6':[6,14,22],'col7':[7,15,23],'col8':[8,16,24]}
import pandas as pd
df = pd.DataFrame(data=d)
</code></pre>
<p>And then would need to have a new df like this:</p>
<pre><code>1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
</code></pre>
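One way to get that output without <code>pivot</code> (a sketch; <code>melt</code> or <code>stack</code> could also work, but they order column-wise by default):

```python
import pandas as pd

d = {'col1': [1, 9, 17], 'col2': [2, 10, 18], 'col3': [3, 11, 19],
     'col4': [4, 12, 20], 'col5': [5, 13, 21], 'col6': [6, 14, 22],
     'col7': [7, 15, 23], 'col8': [8, 16, 24]}
df = pd.DataFrame(data=d)

# .to_numpy() yields values in row-major order, and .ravel() flattens
# them into a single 1-D sequence, so each row is emitted left to right.
out = pd.DataFrame({'value': df.to_numpy().ravel()})
print(out['value'].tolist())  # [1, 2, 3, ..., 24]
```
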
|
<python><pandas>
|
2023-02-03 17:33:04
| 2
| 2,790
|
OldManSeph
|
75,339,171
| 3,852,723
|
How to connect to an application running on localhost from a Docker container
|
<p>I am trying to connect to an application (which is not running in Docker) from a Docker container.<a href="https://i.sstatic.net/9SB7m.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9SB7m.jpg" alt="enter image description here" /></a></p>
<ul>
<li><p>I am trying to run this Docker image with the help of Docker Compose.</p>
</li>
<li><p>I am using host network mode, connecting to external services on
<strong>host.docker.internal</strong> on port 7497.</p>
</li>
<li><p>I am trying to call it from the Python code in the Docker container;
this container is not given any port config
<pre><code> services:
ibkr-bot-eminisp500:
container_name: ibkr-bot-eminisp500
image: |my-image|
network_mode: host
extra_hosts:
- "host.docker.internal:host-gateway"
environment:
- IBKR_CLIENT_URL_KEY= "host.docker.internal"
- IBKR_PORT_KEY=7497
</code></pre>
</li>
</ul>
<p>But I am getting the following error. What am I missing?</p>
<pre><code>| API connection failed: gaierror(-2, 'Name or service not known')
ibkr-bot-eminisp500 | Traceback (most recent call last):
ibkr-bot-eminisp500 | File "/usr/bin/src/app/main.py", line 8, in <module>
ibkr-bot-eminisp500 | ibkrBot = IBKRBot()
</code></pre>
|
<python><docker><docker-compose>
|
2023-02-03 17:31:44
| 1
| 453
|
Ganesh Pol
|
75,339,122
| 14,141,126
|
Return value of key in nested dict when key not present in all nested dicts
|
<p>I am trying to get the values of a particular key from nested dictionaries, but the key is not always present. The key in question is 'action'. I have tried several ways but can't get it right: I either get an error saying the key doesn't exist, or I get a partial result. My latest attempts are as follows.</p>
<pre><code>def events_query():
query_res = {
'took': 52,
'timed_out': False,
'_shards': {
'total': 3,
'successful': 3,
'skipped': 1,
'failed': 0
},
'hits': {
'total': {'value': 10000, 'relation': 'gte'},
'max_score': None,
'hits': [
{
'_index': 'winlogbeat-dc-2023.01.16-000195',
'_type': '_doc',
'_id': '_LrxCYYBiABa0UinUkYt',
'_score': None,
'_source': {
'agent': {'hostname': 'SRVDCMI'},
'event': {
'code': '7036',
'provider': 'Service Control Manager',
'created': '2023-01-31T22:27:34.585Z',
'kind': 'event'
}
},
'sort': [-9223372036854775808]
},
{
'_index': 'winlogbeat-dc-2023.01.16-000195',
'_type': '_doc',
'_id': '_brxCYYBiABa0UinUkYt',
'_score': None,
'_source': {
'agent': {'hostname': 'SRVDCMI'},
'event': {
'code': '7036',
'provider': 'Service Control Manager',
'kind': 'event',
'created': '2023-01-31T22:27:34.585Z'
}
},
'sort': [-9223372036854775808]
},
{
'_index': 'winlogbeat-dc-2023.01.16-000195',
'_type': '_doc',
'_id': '_rrxCYYBiABa0UinUkYt',
'_score': None,
'_source': {
'agent': {'hostname': 'SRVDCMI'},
'event': {
'code': '7036',
'provider': 'Service Control Manager',
'kind': 'event',
'created': '2023-01-31T22:27:34.585Z'
}
},
'sort': [-9223372036854775808]
},
{
'_index': 'winlogbeat-dc-2023.01.16-000195',
'_type': '_doc',
'_id': '_7rxCYYBiABa0UinUkZI',
'_score': None,
'_source': {
'agent': {'hostname': 'SRVDC01'},
'event': {
'code': '4624',
'provider': 'Microsoft-Windows-Security-Auditing',
'created': '2023-01-31T22:27:34.622Z',
'kind': 'event',
'module': 'security',
'action': 'logged-in',
'category': ['authentication'],
'type': ['start'],
'outcome': 'success'
}
},
'sort': [-9223372036854775808]
},
{
'_index': 'winlogbeat-dc-2023.01.16-000195',
'_type': '_doc',
'_id': 'ALrxCYYBiABa0UinUkdI',
'_score': None,
'_source': {
'agent': {'hostname': 'SRVDC01'},
'event': {
'code': '4776',
'provider': 'Microsoft-Windows-Security-Auditing',
'created': '2023-01-31T22:27:34.622Z',
'kind': 'event',
'module': 'security',
'action': 'credential-validated',
'category': ['authentication'],
'type': ['start'],
'outcome': 'success'
}
},
'sort': [-9223372036854775808]
}]}}
for q in query_res:
if 'action' in query_res['hits']['hits'][0]['_source']['event']:
print(query_res['hits']['hits'][0]['_source']['event']['action'])
else:
print('not found')
#or
action = query_res['hits']['hits']
action_list = [a['_source']['event']['action'] for a in action]
print(action_list)
events_query()
</code></pre>
<p>Any help is appreciated.</p>
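A sketch of how the loop could be written so it walks every hit and tolerates a missing key, using <code>dict.get</code> on a trimmed-down stand-in for the structure above:

```python
# Minimal stand-in for the Elasticsearch-style response in the question:
# only some events carry an 'action' key.
query_res = {
    'hits': {'hits': [
        {'_source': {'event': {'code': '7036'}}},
        {'_source': {'event': {'code': '4624', 'action': 'logged-in'}}},
        {'_source': {'event': {'code': '4776', 'action': 'credential-validated'}}},
    ]}
}

# Iterate over the hits themselves (not over the top-level dict keys),
# and use .get() so an absent key yields a default instead of a KeyError.
actions = [hit['_source']['event'].get('action', 'not found')
           for hit in query_res['hits']['hits']]
print(actions)  # ['not found', 'logged-in', 'credential-validated']
```
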
|
<python>
|
2023-02-03 17:25:59
| 2
| 959
|
Robin Sage
|
75,339,092
| 12,907,088
|
Why is the left number inclusive and the right number exclusive when using the colon operator in Python arrays/strings?
|
<p>I was just confused why in the following example the number on the left is included, while the number on the right isn't:</p>
<pre class="lang-py prettyprint-override"><code>a = "0123456789"
a[:] # "0123456789"
a[1:] # "123456789" -> includes the 1
# and this confuses me:
a[:5] # "01234" -> excludes the 5
a[1:5] # "1234" -> again
</code></pre>
<p>Can anybody explain why it is designed this way?</p>
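Two properties of the half-open convention that any explanation should account for: adjacent slices partition the sequence with no overlap, and a slice's length is simply <code>stop - start</code>:

```python
a = "0123456789"

# Adjacent half-open slices share no elements and concatenate back
# to the original, for any split point n.
n = 5
assert a[:n] + a[n:] == a

# The length of an in-range slice is just stop - start.
assert len(a[1:5]) == 5 - 1
print(a[1:5])  # 1234
```
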
|
<python><slice>
|
2023-02-03 17:22:50
| 1
| 694
|
MoPaMo
|
75,339,087
| 10,852,841
|
Fill facecolor in convex hulls for custom seaborn mapping function
|
<p>I'm trying to overlay shaded convex hulls to the different groups in a scatter <code>seaborn.relplot</code> using Matplotlib. Based on this <a href="https://stackoverflow.com/questions/29968097/seaborn-facetgrid-user-defined-plot-function">question</a> and the <a href="https://seaborn.pydata.org/tutorial/axis_grids.html?highlight=map#using-custom-functions" rel="nofollow noreferrer">Seaborn example</a>, I've been able to successfully overlay the convex hulls for each <code>sex</code> in the penguins dataset.</p>
<pre><code># Import libraries
import pandas as pd
import numpy as np
from scipy.spatial import ConvexHull
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
data = sns.load_dataset("penguins")
xcol = 'bill_length_mm'
ycol = 'bill_depth_mm'
g = sns.relplot(data = data, x=xcol, y = ycol,
hue = "sex", style = "sex",
col = 'species', palette="Paired",
kind = 'scatter')
def overlay_cv_hull_dataframe(x, y, color, **kwargs):
data = kwargs.pop('data')
# Get the Convex Hull for each group based on hue
for _, group in data.groupby('hue'):
points = group[['x', 'y']].values
hull = ConvexHull(points)
for simplex in hull.simplices:
plt.fill(points[simplex, 0], points[simplex, 1],
facecolor = color, alpha=0.5,
edgecolor = color)
# Overlay convex hulls
g.map_dataframe(overlay_cv_hull_dataframe, x=xcol, y=ycol,
hue='sex')
g.set_axis_labels(xcol, ycol)
</code></pre>
<p><a href="https://i.sstatic.net/RkHEL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RkHEL.png" alt="enter image description here" /></a></p>
<p>However, the convex hulls are not filled in with the same color as the edge, even though I specified that</p>
<pre><code>plt.fill(points[simplex, 0], points[simplex, 1],
facecolor = color, alpha=0.5,
edgecolor = color, # color is an RGB tuple like (0.12, 0.46, 0.71)
)
</code></pre>
<p>I've also tried setting <code>facecolor='lightsalmon'</code> like this <a href="https://matplotlib.org/stable/gallery/lines_bars_and_markers/fill.html#sphx-glr-gallery-lines-bars-and-markers-fill-py" rel="nofollow noreferrer">example</a> and removing the <code>alpha</code> parameter, but get the same plot. I think I'm really close but I'm not sure what else to try.</p>
<p>How can I get the convex hulls to be filled with the same <code>color</code> as <code>edgecolor</code> and the points?</p>
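A possible cause worth checking (a sketch, not a verified fix for the exact code above): each entry of <code>hull.simplices</code> is a single edge, i.e. a pair of points, so filling simplex by simplex fills zero-area segments and only the edges show. Filling the polygon given by <code>hull.vertices</code>, which for 2-D hulls lists the boundary points in counter-clockwise order, produces a shaded region:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this standalone sketch
import matplotlib.pyplot as plt
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
points = rng.random((30, 2))
hull = ConvexHull(points)

# hull.vertices indexes the boundary points in CCW order, so a single
# fill() call shades the whole hull instead of its individual edges.
plt.fill(points[hull.vertices, 0], points[hull.vertices, 1],
         facecolor="lightsalmon", edgecolor="tab:blue", alpha=0.5)
plt.scatter(points[:, 0], points[:, 1], s=10)
plt.savefig("hull.png")
```
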
|
<python><matplotlib><seaborn>
|
2023-02-03 17:22:06
| 1
| 2,379
|
m13op22
|
75,339,009
| 15,549,110
|
Python | traverse every element from a list in another for loop
|
<p>I want to traverse each element of the list below, one by one, in the following for loops.</p>
<p>How should I do it? How can I fit one more for loop into it?</p>
<p>Currently I am accessing only one element, <strong>summaryList[1]</strong>.</p>
<pre><code>summaryList=['hi','bye','hello']
def list(path):
result = []
for root, dirs, files in os.walk(path):
for name in files:
if fnmatch.fnmatch(name, summaryList[1] + '.txt'):
result.append(os.path.join(root, name))
</code></pre>
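A sketch of where the extra loop could fit: move the name match into one more <code>for</code> over the whole list. The throwaway directory below is invented purely for demonstration:

```python
import fnmatch
import os
import tempfile

summaryList = ['hi', 'bye', 'hello']

def find_summaries(path, names):
    """Collect every file matching any name in `names` plus '.txt'."""
    result = []
    for root, dirs, files in os.walk(path):
        for fname in files:
            # The extra loop: try every summary name, not just names[1].
            for summary in names:
                if fnmatch.fnmatch(fname, summary + '.txt'):
                    result.append(os.path.join(root, fname))
    return result

# Demonstrate against a temporary directory with three files.
with tempfile.TemporaryDirectory() as d:
    for n in ('hi.txt', 'hello.txt', 'other.txt'):
        open(os.path.join(d, n), 'w').close()
    found = find_summaries(d, summaryList)

print(sorted(os.path.basename(p) for p in found))  # ['hello.txt', 'hi.txt']
```
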
|
<python><python-3.x><list><dictionary><for-loop>
|
2023-02-03 17:14:08
| 2
| 379
|
pkk
|
75,338,989
| 10,853,071
|
How to filter/query data from the Python SASPY to_df function
|
<p>I am working in Python on some data retrieved from a SAS server. I am currently using the SASPY <code>to_df()</code> function to bring it from SAS to local pandas.</p>
<p>I would like to know if it is possible to filter/query the data that is being transferred, so I could avoid bringing unneeded data and speed up my download.</p>
<p>I couldn't find anything in the saspy documentation; it only offers the possibility of using <code>**kwargs</code>, but I couldn't figure out how to use it.</p>
<p>Thanks.</p>
|
<python><pandas><saspy>
|
2023-02-03 17:11:59
| 1
| 457
|
FábioRB
|
75,338,898
| 7,089,108
|
How do I calculate the matrix exponential of a sparse matrix?
|
<p>I'm trying to find the matrix exponential of a sparse matrix:</p>
<pre><code>import numpy as np
b = np.array([[1, 0, 1, 0, 1, 0, 1, 1, 1, 0],
[1, 0, 0, 0, 1, 1, 0, 1, 1, 0],
[0, 1, 1, 0, 1, 1, 0, 0, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
[1, 1, 0, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 1, 0, 0, 1, 0, 0, 1, 1],
[0, 0, 1, 0, 1, 0, 1, 1, 0, 0],
[1, 0, 0, 0, 1, 1, 0, 0, 1, 1],
[0, 0, 0, 0, 1, 0, 1, 1, 1, 0],
[0, 0, 0, 1, 0, 1, 1, 0, 0, 1]])
</code></pre>
<p>I can calculate this using <code>scipy.linalg.expm</code>, but it is slow for larger matrices.</p>
<pre><code>from scipy.linalg import expm
S1 = expm(b)
</code></pre>
<p>Since this is a sparse matrix, I tried converting <code>b</code> to a <code>scipy.sparse</code> matrix and calling that function on the converted sparse matrix:</p>
<pre class="lang-py prettyprint-override"><code>import scipy.sparse as sp
import numpy as np
sp_b = sp.csr_matrix(b)
S1 = expm(sp_b);
</code></pre>
<p>But I get the following error:</p>
<pre><code>loop of ufunc does not support argument 0 of type csr_matrix which has no callable exp method
</code></pre>
<p>How can I calculate the matrix exponential of a sparse matrix?</p>
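For what it's worth, SciPy ships a sparse-aware <code>expm</code> in <code>scipy.sparse.linalg</code>; it accepts and returns sparse matrices (CSC format is its preferred layout). A minimal sketch on a smaller matrix, checked against the dense result:

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import expm as dense_expm
from scipy.sparse.linalg import expm as sparse_expm  # sparse-aware variant

b = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # small example matrix

# scipy.linalg.expm expects dense arrays; the sparse variant lives in
# scipy.sparse.linalg and works directly on sparse input (CSC preferred).
S_sparse = sparse_expm(sp.csc_matrix(b))
S_dense = dense_expm(b)
print(np.allclose(S_sparse.toarray(), S_dense))  # True
```
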
|
<python><scipy><sparse-matrix>
|
2023-02-03 17:03:15
| 1
| 433
|
cerv21
|
75,338,840
| 4,506,929
|
Using `xarray.apply_ufunc` with `np.linalg.pinv` returns an error with `dask.array`
|
<p>I get an error when running the following MWE:</p>
<pre class="lang-py prettyprint-override"><code>import xarray as xr
import numpy as np
from numpy.linalg import pinv
import dask
data = np.random.randn(4, 4, 3, 2)
da = xr.DataArray(data=data, dims=("x", "y", "i", "j"),)
da = da.chunk(x=1, y=1)
da_inv = xr.apply_ufunc(pinv, da,
input_core_dims=[["i", "j"]],
output_core_dims=[["i", "j"]],
exclude_dims=set(("i", "j")),
dask = "parallelized",
)
</code></pre>
<p>This throws me this error:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "/glade/scratch/tomasc/tracer_inversion2/mwe.py", line 14, in <module>
da_inv = xr.apply_ufunc(pinv, da,
File "/glade/u/home/tomasc/miniconda3/envs/py310/lib/python3.10/site-packages/xarray/core/computation.py", line 1204, in apply_ufunc
return apply_dataarray_vfunc(
File "/glade/u/home/tomasc/miniconda3/envs/py310/lib/python3.10/site-packages/xarray/core/computation.py", line 315, in apply_dataarray_vfunc
result_var = func(*data_vars)
File "/glade/u/home/tomasc/miniconda3/envs/py310/lib/python3.10/site-packages/xarray/core/computation.py", line 771, in apply_variable_ufunc
result_data = func(*input_data)
File "/glade/u/home/tomasc/miniconda3/envs/py310/lib/python3.10/site-packages/xarray/core/computation.py", line 747, in func
res = da.apply_gufunc(
File "/glade/u/home/tomasc/miniconda3/envs/py310/lib/python3.10/site-packages/dask/array/gufunc.py", line 489, in apply_gufunc
core_output_shape = tuple(core_shapes[d] for d in ocd)
File "/glade/u/home/tomasc/miniconda3/envs/py310/lib/python3.10/site-packages/dask/array/gufunc.py", line 489, in <genexpr>
core_output_shape = tuple(core_shapes[d] for d in ocd)
KeyError: 'dim0'
</code></pre>
<p>Even though when using <code>dask.array.map_blocks</code> directly, things seem to work right out of the box:</p>
<pre class="lang-py prettyprint-override"><code>data_inv = dask.array.map_blocks(pinv, da.data).compute() # works!
</code></pre>
<p>What am I missing here?</p>
|
<python><dask><python-xarray>
|
2023-02-03 16:56:43
| 1
| 3,547
|
TomCho
|
75,338,838
| 6,263,317
|
JAX best way to iterate RNGKeys?
|
<p>In JAX I find myself needing a <code>PRNGKey</code> that changes on each iteration of a loop. I'm not sure of the best pattern. I've considered</p>
<p>a) <code>split</code></p>
<pre class="lang-py prettyprint-override"><code>for i in range(N):
rng, _ = jax.random.split(rng)
# Alternatively.
rng = jax.random.split(rng, 1)[0]
</code></pre>
<p>b) <code>fold_in</code></p>
<pre class="lang-py prettyprint-override"><code>for i in range(N):
rng = jax.random.fold_in(rng, i)
</code></pre>
<p>c) use the iterator index? This seems bad, since the rng doesn't depend on a prior rng.</p>
<pre class="lang-py prettyprint-override"><code>for i in range(N):
rng = jax.random.PRNGKey(i)
</code></pre>
<p>Which of these is the best pattern and why? I am leaning towards (b) as it maintains dependency on the previous rng key (e.g. passed in as an argument), but I'm not sure if this is really the intended use-case for <code>jax.random.fold_in</code>.</p>
|
<python><jax>
|
2023-02-03 16:56:34
| 1
| 4,499
|
Jon Deaton
|
75,338,703
| 8,781,465
|
How can I explain a part of my predictions as a whole with SHAP values? (and not every single prediction)
|
<p><strong>Background information</strong></p>
<p>I fit a classifier on my training data. When testing my fitted best estimator, I predict the probabilities for one of the classes. I order both my X_test and my y_test by the probabilities in descending order.</p>
<p><strong>Question</strong></p>
<p>I want to understand which features were important (and to what extent) for the classifier to predict only the 500 predictions with the highest probability as a whole, not for each prediction. Is the following code correct for this purpose?</p>
<pre><code>y_test_probas = clf.predict_proba(X_test)[:, 1]
explainer = shap.Explainer(clf, X_train) # <-- here I put the X which the classifier was trained on?
top_n_indices = np.argsort(y_test_probas)[-500:]
shap_values = explainer(X_test.iloc[top_n_indices]) # <-- here I put the X I want the SHAP values for?
shap.plots.bar(shap_values)
</code></pre>
<p>Unfortunately, the <a href="https://shap.readthedocs.io/en/latest/example_notebooks/api_examples/plots/bar.html" rel="nofollow noreferrer">shap documentation (bar plot)</a> does not cover this case. Two things are different there:</p>
<ol>
<li>They use the data the classifier was trained on <--> I want to use the data the classifier is tested on</li>
<li>They use the whole X and not part of it <--> I want to use only part of the data</li>
</ol>
<p><strong>Minimal reproducible example</strong></p>
<pre><code>import numpy as np
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
# Load the Titanic Survival dataset
data = pd.read_csv("https://web.stanford.edu/class/archive/cs/cs109/cs109.1166/stuff/titanic.csv")
# Preprocess the data
data = data.drop(["Name"], axis=1)
data = data.dropna()
data["Sex"] = (data["Sex"] == "male").astype(int)
# Split the data into predictors (X) and response variable (y)
X = data.drop("Survived", axis=1)
y = data["Survived"]
# Split the dataset into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Fit a logistic regression classifier
clf = RandomForestClassifier().fit(X_train, y_train)
# Get the predicted class probabilities for the positive class
y_test_probas = clf.predict_proba(X_test)[:, 1]
# Select the indices of the top 500 test samples with the highest predicted probability of the positive class
top_n_indices = np.argsort(y_test_probas)[-500:]
# Initialize the Explainer object with the classifier and the training set
explainer = shap.Explainer(clf, X_train)
# Compute the SHAP values for the top 500 test samples
shap_values = explainer(X_test.iloc[top_n_indices, :])
# Plot the bar plot of the computed SHAP values
shap.plots.bar(shap_values)
</code></pre>
<p>I don't want to know how the classifier decides <strong>all</strong> the predictions, but on the <strong>predictions with the highest probability</strong>. Is that code suitable to answer this question? If not, how would a suitable code look like?</p>
|
<python><shap>
|
2023-02-03 16:44:45
| 1
| 1,815
|
DataJanitor
|
75,338,668
| 10,795,473
|
Why does mypy complain about my dictionary comprehension?
|
<p>I'm trying to do an update of a dictionary with a dictionary comprehension. My code works fine, but mypy raises an error while parsing the types.</p>
<p>Here's the code:</p>
<pre class="lang-py prettyprint-override"><code>load_result = {"load_code": "something"}
load_result.update({
quality_result.quality_code: [quality_result.quantity]
for quality_result in random_quality_results()
})
</code></pre>
<p>In that code the <code>quality_result</code> objects have those two attributes <code>quality_code</code> and <code>quantity</code> which are a string and a float respectively.</p>
<p>Here the code for those quality result objects:</p>
<pre class="lang-py prettyprint-override"><code>class QualityResult(BaseSchema):
    """Asset quality score schema."""

    quality_code: str
    quantity: float = Field(
        ...,
        description="Value obtained [0,1]",
        ge=0,
        le=1,
    )
</code></pre>
<p>My code works as expected and returns the desired dictionary, but when running mypy it throws this error:</p>
<pre><code>error: Value expression in dictionary comprehension has incompatible type "List[float]"; expected type "str"
</code></pre>
<p>I see mypy is getting the types correctly as I'm inserting a list of floats, the thing is I don't understand why it complains. I assume I must be missing something, but I'm not being able to figure it out.</p>
<p>Why does it say it must be a string? How can I fix it?</p>
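For what it's worth, mypy infers `load_result` as `dict[str, str]` from the literal on the first line, so updating it with `str -> list[float]` entries is rejected. A minimal sketch of one possible fix, annotating the union value type up front (the `q1`/`q2` pairs below are illustrative stand-ins for the quality results):

```python
from typing import Dict, List, Union

# Annotate the dict so both value types are legal from the start; without
# this, mypy locks the value type to str based on the initial literal.
load_result: Dict[str, Union[str, List[float]]] = {"load_code": "something"}

# stand-in for the (quality_code, quantity) pairs from random_quality_results()
results = [("q1", 0.5), ("q2", 0.9)]

load_result.update({code: [qty] for code, qty in results})
print(load_result)
```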
|
<python><mypy>
|
2023-02-03 16:41:55
| 1
| 309
|
aarcas
|
75,338,615
| 9,850,681
|
How can I test an Exception inside an async with?
|
<p>In my tests I ran coverage and ended up with untested parts inside an <code>async with</code>; what the coverage report flags is the exception-handling part. My code looks like this:</p>
<pre class="lang-py prettyprint-override"><code>@classmethod
async def update_eda_configuration(
    cls,
    configuration: ConfigurationInputOnUpdate
) -> Union['Configuration', ConfigurationOperationError]:
    async with get_session() as conn:
        result = await conn.execute(
            select(Configuration).where(Configuration.id == configuration.id))
        configuration_to_update = result.scalars().unique().first()
        if configuration_to_update is not None:
            configuration_to_update.attribute = configuration.attribute
            configuration_to_update.value = configuration.value
            try:
                await conn.commit()
                return configuration_to_update
            except Exception as e:
                matches = re.findall(
                    pattern='DETAIL:.*',
                    string=str(e.orig),
                )
</code></pre>
<p>Coverage looks like this:
<a href="https://i.sstatic.net/FBBIZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FBBIZ.png" alt="enter image description here" /></a></p>
<p>I tried some tests but to no avail, for example patching <code>conn.commit</code>.</p>
<p>How can I do this?</p>
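One way to reach the `except` branch is to inject a connection whose `commit` raises. A minimal, self-contained sketch using `unittest.mock.AsyncMock` (in a real pytest test you would patch `get_session()` to yield this mock instead of passing it in; the `update` function below is a simplified stand-in for the classmethod in the question):

```python
import asyncio
from unittest.mock import AsyncMock

# Simplified stand-in for update_eda_configuration: a commit inside try/except.
async def update(conn) -> str:
    try:
        await conn.commit()
        return "ok"
    except Exception as e:
        return f"error: {e}"

# Force the failure branch: awaiting conn.commit() now raises.
conn = AsyncMock()
conn.commit.side_effect = RuntimeError("DETAIL: duplicate key")

result = asyncio.run(update(conn))
print(result)  # error: DETAIL: duplicate key
```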
|
<python><pytest><coverage.py>
|
2023-02-03 16:36:49
| 0
| 460
|
Plaoo
|
75,338,450
| 5,810,060
|
FuzzyWuzzy on 2 col from different DataFrames
|
<p>I have a question that is easy to state but (to me at least!) not simple to solve.
I have 2 DFs:</p>
<pre><code>df1:
Account_Name
samsung
tesla
microsoft
df2:
Company_name
samsung electronics
samsung Ltd
tesla motors
Microsoft corporation
</code></pre>
<p>all I am trying to do is to find the best match for every row in df1 from df2 and also have an extra column that will tell me the similarity score for the best match found from df2.</p>
<p>I have got the code that allows me to compare the 2 columns and produce the similarity score but I have no clue how to iterate through df2 to find the best match for the row in question from df1</p>
<p>the similarity score code is below just in case but I don't think it is relevant to this question</p>
<pre><code>from difflib import SequenceMatcher

def similar(a, b):
    return SequenceMatcher(None, a, b).ratio()

for col in ['Account_Name']:
    df[f"{col}_score"] = df.apply(lambda x: similar(x["Company_name"], x[col]) * 100 if
                                  pd.notna(x[col]) else np.nan, axis=1)
</code></pre>
<p>The main issue is finding the best similarity match when the data is in 2 separate DFs.
Help please!</p>
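A minimal sketch of one approach with the same `difflib` scorer: for each name in df1, score every candidate in df2 and keep the best one (the sample data mirrors the question):

```python
from difflib import SequenceMatcher
import pandas as pd

df1 = pd.DataFrame({"Account_Name": ["samsung", "tesla", "microsoft"]})
df2 = pd.DataFrame({"Company_name": ["samsung electronics", "samsung Ltd",
                                     "tesla motors", "Microsoft corporation"]})

def best_match(name, candidates):
    # score every candidate and return (best_candidate, score_in_percent)
    scored = [(c, SequenceMatcher(None, name.lower(), c.lower()).ratio() * 100)
              for c in candidates]
    return max(scored, key=lambda t: t[1])

matches = df1["Account_Name"].apply(lambda n: best_match(n, df2["Company_name"]))
df1["best_match"], df1["score"] = zip(*matches)
print(df1)
```

fuzzywuzzy's `process.extractOne(name, candidates)` performs the same inner loop with a different scorer, if you prefer its ratios.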
|
<python><python-3.x><dataframe><similarity>
|
2023-02-03 16:23:21
| 1
| 906
|
Raul Gonzales
|
75,338,336
| 1,864,294
|
subprocess stdout without colors
|
<p>I am running a command line tool that returns a coloured output (similar to <code>ls --color</code>). I run this tool via <code>subprocess</code>:</p>
<pre><code>process = subprocess.run(['ls --color'], shell=True, stdout=subprocess.PIPE)
process.stdout.decode()
</code></pre>
<p>But the result is, of course, with the color instructions like <code>\x1b[m\x1b[m</code> which makes further processing of the output impossible.</p>
<p>How can I remove the colouring and use pure text?</p>
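Two common options: strip the ANSI escape sequences with a regex after the fact, or ask the tool not to colour at all (many tools, `ls` included, accept `--color=never`, and most only colour when stdout is a terminal anyway). A sketch of the regex route:

```python
import re

# Matches SGR colour sequences like \x1b[01;34m and \x1b[0m
ansi = re.compile(r"\x1b\[[0-9;]*m")

colored = "\x1b[01;34mdir\x1b[0m  file.txt"
plain = ansi.sub("", colored)
print(plain)  # dir  file.txt
```

For the subprocess case, `subprocess.run(['ls', '--color=never'], stdout=subprocess.PIPE)` avoids the problem at the source.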
|
<python><encoding><colors><subprocess>
|
2023-02-03 16:12:56
| 2
| 20,605
|
Michael Dorner
|
75,338,329
| 13,285,583
|
How to handle unexpected fatal error with signal?
|
<p>My goal is to handle an unexpected fatal error with <code>signal</code>. However, none of the signals is triggered, for example on an <code>AttributeError</code>. The example is trivial. What I want is to execute a function, i.e. <code>camera.stop()</code>, before closing the app regardless of how badly it fails. Otherwise I need to reboot to use the camera again.</p>
<pre><code>def signal_handler():
    print("do something")

signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
signal.signal(signal.SIGHUP, signal_handler)
signal.signal(signal.SIGABRT, signal_handler)

if __name__ == "__main__":
    print("a{}".fomrat())
</code></pre>
<p>The error</p>
<pre><code>Traceback (most recent call last):
File "exception_not_rethrown.py", line 12, in <module>
print("a{}".fomrat())
AttributeError: 'str' object has no attribute 'fomrat'
</code></pre>
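An unhandled Python exception is not a signal, so none of the registered handlers will ever see it (and SIGKILL / `kill -9` cannot be caught at all). For the exception case, a `try`/`finally` around the entry point guarantees the cleanup runs; a minimal sketch (the list is a stand-in for `camera.stop()`):

```python
released = []

def cleanup():
    released.append(True)  # stand-in for camera.stop()

def main():
    # stand-in for whatever raises unexpectedly (e.g. the AttributeError above)
    raise AttributeError("simulated fatal error")

try:
    main()
except Exception as exc:
    error = str(exc)   # log/inspect the failure here
finally:
    cleanup()          # runs on success, exception, SystemExit, ...

print(error, released)
```

`sys.excepthook` is another hook point for unhandled exceptions; signal handlers remain the right tool only for external events like SIGTERM.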
|
<python>
|
2023-02-03 16:12:35
| 0
| 2,173
|
Jason Rich Darmawan
|
75,338,231
| 1,960,266
|
What is the use of the next() function for printing the size of an array?
|
<p>I have been reading about how to use PyTorch for MNIST character recognition; so far the code is the following:</p>
<pre><code>import torch
import torchvision
train_loader=torch.utils.data.DataLoader(torchvision.datasets.MNIST('/files/',train=True, download=True,
transform=torchvision.transforms.Compose(
[torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize((0.1307), (0.3081))
])),
batch_size=batch_size_train,shuffle=True)
test_loader=torch.utils.data.DataLoader(
torchvision.datasets.MNIST('/files/',train=False,download=True,
transform=torchvision.transforms.Compose(
[torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,),(0.3081,))
])),
batch_size=batch_size_test,shuffle=True
)
examples=enumerate(test_loader)
batch_idx,(example_data,example_targets)=next(examples)
</code></pre>
<p>However, the problem is the last line:</p>
<pre><code>batch_idx,(example_data,example_targets)=next(examples)
</code></pre>
<p>I could replace it by:</p>
<pre><code>example_data,example_targets=next(examples)
</code></pre>
<p>and the program compiles, but when I want to do the following:</p>
<pre><code>print (example_data.shape)
</code></pre>
<p>Only the first version of <code>batch_idx,(example_data,example_targets)</code> works, but not the second one without the <code>batch_idx</code> part.</p>
<p>Also, when I print the value of <code>batch_idx</code> it always returns <code>0</code>. So my question is: what is the use of this <code>batch_idx</code> part, why does it have the value 0, and what is its relationship with the <code>next()</code> function and the <code>shape</code> attribute?</p>
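The key point is that `enumerate()` wraps an iterator so each `next()` yields an `(index, item)` pair, and a DataLoader's items are themselves `(data, targets)` pairs. `batch_idx` is 0 simply because only the first batch is ever pulled. Without `batch_idx` in the unpacking, `example_data` ends up bound to the integer index, which has no `.shape`. A torch-free sketch with plain tuples:

```python
# Stand-in for a DataLoader: an iterable of (data, targets) batches.
loader = [("data0", "targets0"), ("data1", "targets1")]

examples = enumerate(loader)
batch_idx, (data, targets) = next(examples)   # (0, ("data0", "targets0"))
print(batch_idx, data, targets)               # 0 data0 targets0

# Without enumerate there is one less level to unpack:
data2, targets2 = next(iter(loader))
print(data2, targets2)                        # data0 targets0
```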
|
<python><pytorch>
|
2023-02-03 16:03:01
| 1
| 3,477
|
Little
|
75,338,157
| 1,581,090
|
How to read a binary file and append them to a new file?
|
<p>I read data from a file as binary data like</p>
<pre><code>with open(filename, "rb") as filein:
    content = filein.read()

print(type(content[0]))
</code></pre>
<p>and I expected the data type of the data read to be <code>byte</code>, but I get <code>int</code>.</p>
<p>How can I read data from a file as type <code>byte</code> (i.e. the Python structure where I put a "b" in front of it, like</p>
<pre><code>mybyte = b"bytes"
</code></pre>
<p>so I can "add" them to other byte strings?</p>
<p>What I actually want to do is essentially this:</p>
<pre><code># Read the complete(!) content of the file
with open(filename, "rb") as filein:
    content = filein.read()

# Create new content, where I manipulate some single bytes
# For simplicity this is not shown here
new_content = b""
for byte in content:
    # some manipulating of single bytes, omitted for simplicity
    new_content += byte

# Write the modified content again to a new file
# In this very example, it should replicate the exact same file
with open(filename + "-changed", "wb") as fileout:
    fileout.write(new_content)
</code></pre>
<p>But here I get an error</p>
<pre><code> Traceback (most recent call last):
File "break_software.py", line 29, in <module>
new_content += byte
TypeError: can't concat int to bytes
</code></pre>
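Iterating over a `bytes` object yields `int`s; to keep bytes, either slice (`content[i:i+1]`) or rebuild each value. A `bytearray` is both the easy and the fast way to accumulate the result, since repeated `+=` on `bytes` copies the whole buffer each time. A sketch:

```python
content = b"\x00\x01\x02"

# Accumulate into a mutable bytearray instead of concatenating bytes.
new_content = bytearray()
for byte in content:                  # byte is an int (0..255)
    new_content.append(byte ^ 0xFF)   # example manipulation: invert each byte

result = bytes(new_content)
print(result)                         # b'\xff\xfe\xfd'

# Alternative: slicing preserves the bytes type
first = content[0:1]                  # b'\x00', not the int 0
```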
|
<python><byte>
|
2023-02-03 15:57:03
| 1
| 45,023
|
Alex
|
75,337,910
| 20,266,647
|
Issue with the aggregation function in the pipeline during online ingest
|
<p>I see an issue in the aggregation function (part of the pipeline) during online ingest, because the aggregation output is invalid (the output is different from what I expect: I got the value 0 instead of 6). The pipeline is really very simple:</p>
<p><a href="https://i.sstatic.net/2tqaT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2tqaT.png" alt="enter image description here" /></a></p>
<p>See part of code (Python and MLRun):</p>
<pre><code>import datetime
import mlrun
import mlrun.feature_store as fstore
from mlrun.datastore.targets import ParquetTarget, NoSqlTarget
# Prepare data, four columns key0, key1, fn1, sysdate
data = {"key0":[1,1,1,1,1,1], "key1":[0,0,0,0,0,0],"fn1":[1,1,2,3,1,0],
"sysdate":[datetime.datetime(2021,1,1,1), datetime.datetime(2021,1,1,1),
datetime.datetime(2021,1,1,1), datetime.datetime(2021,1,1,1),
datetime.datetime(2021,1,1,1), datetime.datetime(2021,1,1,1)]}
# Create project and featureset with NoSqlTarget & ParquetTarget
project = mlrun.get_or_create_project("jist-agg",context='./', user_project=False)
feature_set=featureGetOrCreate(True,project_name, 'sample')
# Add easy aggregation 'agg1'
feature_set.add_aggregation(name='fn1',column='fn1',operations=['count'],windows=['60d'],step_name="agg1")
# Ingest data to the on-line and off-line targets
output_df=fstore.ingest(feature_set, input_df, overwrite=True, infer_options=fstore.InferOptions.default())
# Read data from online source
svc=fstore.get_online_feature_service(fstore.FeatureVector("my-vec", ["sample.*"], with_indexes=True))
resp = svc.get([{"key0": 1, "key1":0} ])
# Output validation
assert resp[0]['fn1_count_60d'] == 6.0, 'Mistake in solution'
</code></pre>
<p>Do you see the mistake?</p>
|
<python><pipeline><mlops><feature-store><mlrun>
|
2023-02-03 15:35:11
| 1
| 1,390
|
JIST
|
75,337,685
| 3,575,623
|
Lineplot above clustermap
|
<p>Based on <a href="https://stackoverflow.com/a/66296010/3575623">this answer</a>, I can use <code>.fig.subplots_adjust()</code> to shift my clustermap to the bottom half of my final figure. I'd like to plot a line above it representing some associated data, but I can't seem to change its size. Most solutions I've seen online for changing a lineplot's dimensions are actually changing the dimensions of the figure containing it, which won't work here, and I've tried passing it arguments like figsize, height or aspect and none seem to work. How can I get the aspect ratio to fit in this combined figure?</p>
<p>Basic code to generate where I'm at:</p>
<pre><code>iris = sns.load_dataset("iris")
species = iris.pop("species")
g = sns.clustermap(iris)
g.fig.set_size_inches(20, 10)
g.fig.subplots_adjust(top=0.5)
ax = g.fig.add_axes([0.05, 0.05, 0.4, 0.9])
rand_melt = pd.melt(pd.DataFrame(np.random.randn(1000).cumsum()))
sns.lineplot(x=rand_melt.index, y="value", data=rand_melt, ax=ax)
plt.plot()
</code></pre>
<p><a href="https://i.sstatic.net/PNgsG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PNgsG.png" alt="plot created from code above" /></a></p>
<p>I would rather use seaborn for this as my data is formatted for it and I know what style and parameters to use for the line plot, but if it isn't possible at all, I'll create another question.</p>
|
<python><seaborn>
|
2023-02-03 15:17:04
| 0
| 507
|
Whitehot
|
75,337,665
| 3,433,875
|
Dynamic SQL queries with SQLite3
|
<p>I would like to allow users to query a SQL database.
The database is <a href="https://github.com/Curbal-Data-Labs/python-bites/tree/main/Data" rel="nofollow noreferrer">here</a>.</p>
<p>So the user will be able to enter the queries they want:</p>
<pre><code>csr_city= input("Enter a city >>").lower().strip().replace(' ','')
csr_type = input("Enter a type >>").lower().strip()
</code></pre>
<p>and then the query will execute:</p>
<pre><code>cur = conn.cursor()
cur.execute('SELECT * FROM crime_scene_report WHERE city=? AND type=? ', (csr_city,csr_type))
rows = cur.fetchall()
rows
</code></pre>
<p>If the user enters both variables like <code>city='SQL City'</code> and <code>type='murder'</code> it works as it finds both values in the same row, but if the user leaves one empty, ex type, then it returns a blank table as both conditions do not exist on one single row.</p>
<p>What I would like to do is for the SELECT to ignore a variable if it is empty.
I guess I could do it with if statements, but that would create a mess. Is there a better way?</p>
<p>I tried <a href="https://stackoverflow.com/questions/65702995/how-to-ignore-the-condition-that-user-did-not-pass-in-sqlite">How to ignore the condition that user did not pass in SQLITE?</a>, but it didn't work for me; I still get empty tables.</p>
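One common pattern is to build the WHERE clause from only the filters the user actually supplied, keeping the `?` placeholders so the query stays parameterised. A self-contained sketch against an in-memory table (the sample row is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE crime_scene_report (city TEXT, type TEXT)")
conn.execute("INSERT INTO crime_scene_report VALUES ('sqlcity', 'murder')")

def search(csr_city, csr_type):
    # add a condition only for non-empty inputs
    clauses, params = [], []
    if csr_city:
        clauses.append("city = ?")
        params.append(csr_city)
    if csr_type:
        clauses.append("type = ?")
        params.append(csr_type)
    sql = "SELECT * FROM crime_scene_report"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchall()

print(search("sqlcity", ""))   # type left empty: only the city filters
```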
|
<python><python-3.x><sqlite><sqlite3-python>
|
2023-02-03 15:15:11
| 1
| 363
|
ruthpozuelo
|
75,337,606
| 18,758,062
|
Optuna HyperbandPruner not pruning?
|
<p>My study is setup to use the Hyperband pruner with 60 trials, 10M max resource and reduction factor of 2.</p>
<pre class="lang-py prettyprint-override"><code>def optimize_agent(trial):
    # ...
    model = PPO("MlpPolicy", env, **params)
    model.learn(total_timesteps=2000000)

study = optuna.create_study(
    direction="maximize",
    pruner=optuna.pruners.HyperbandPruner(
        min_resource=1, max_resource=10000000, reduction_factor=2
    ),
)
study.optimize(optimize_agent, n_trials=60, n_jobs=2)
</code></pre>
<p>When I let the study run overnight, it ran the first 6 trials to completion (2M steps each). Isn't the HyberbandPruner supposed to stop at least some trials before they complete?</p>
|
<python><stable-baselines><optuna>
|
2023-02-03 15:10:46
| 1
| 1,623
|
gameveloster
|
75,337,401
| 10,353,865
|
Is the any-call on a boolean series/array efficient?
|
<p>Say we have a call like:</p>
<pre><code>ser = pd.Series([1,2,3,4])
ser[ser>1].any()
</code></pre>
<p>Now my question is: is pandas "smart enough" to stop the computation and return <code>True</code> when it encounters the 2, or does it really go through the whole array first and apply <code>any()</code> afterwards? If the latter is true: how can I avoid this behaviour?</p>
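For what it's worth, `ser[ser > 1].any()` cannot stop early: the boolean mask and the filtered series are fully materialised before `any()` runs. `(ser > 1).any()` at least skips building the filtered series, and Python's built-in `any()` over a generator genuinely short-circuits, though for large numeric data the vectorised full scan is usually still faster than a Python-level loop. A sketch:

```python
import pandas as pd

ser = pd.Series([1, 2, 3, 4])

full = (ser > 1).any()            # vectorised; scans the whole array
lazy = any(x > 1 for x in ser)    # Python-level; stops at the first True (the 2)
print(full, lazy)                 # True True
```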
|
<python><pandas>
|
2023-02-03 14:53:19
| 1
| 702
|
P.Jo
|
75,337,352
| 20,266,647
|
How to identify different type of exit 'shutdown', 'restart', 'kill', 'kill -x' in python?
|
<p>Is it possible to identify in the python application type of external exit/break? I would like to do different actions (release sources slow & correct/complex or fast & partly/dirty) based on different external exit reasons e.g. 'shutdown', 'restart', 'kill', 'kill -x'?</p>
<p>I used simple code with <strong>atexit</strong>, but it is without ability to identify the reason/urgency (it is not useful for me). See sample code:</p>
<pre><code>import atexit
def OnCorrectExit(user):
print(user, "Release sources and exit Python application")
atexit.register(OnCorrectExit)
</code></pre>
<p>or version with decorator</p>
<pre><code>import atexit
@atexit.register
def OnCorrectExit(user):
print(user, "Release sources and exit Python application")
</code></pre>
<p>Do you know, how to identify different exit urgency in python and based on that build different type of resource cleaning?</p>
|
<python><exit><kill><restart><shutdown>
|
2023-02-03 14:49:22
| 1
| 1,390
|
JIST
|
75,337,266
| 7,077,532
|
How is the LRU deleted in this LRUCache implementation?
|
<p>I'm going over the optimal Leetcode solution for LRU Cache LeetCode Medium question. The original problem can be found here: <a href="https://leetcode.com/problems/lru-cache/description/" rel="nofollow noreferrer">https://leetcode.com/problems/lru-cache/description/</a></p>
<p>I'm confused by a specific portion of the <a href="https://neetcode.io/practice" rel="nofollow noreferrer">solution code</a>.</p>
<pre><code>class Node:
    def __init__(self, key, val):
        self.key, self.val = key, val
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity: int):
        self.cap = capacity
        self.cache = {}  # map key to node
        self.left, self.right = Node(0, 0), Node(0, 0)
        self.left.next, self.right.prev = self.right, self.left

    # remove node from list
    def remove(self, node):
        prev, nxt = node.prev, node.next
        prev.next, nxt.prev = nxt, prev

    # insert node at right
    def insert(self, node):
        prev, nxt = self.right.prev, self.right
        prev.next = nxt.prev = node
        node.next, node.prev = nxt, prev

    def get(self, key: int) -> int:
        if key in self.cache:
            self.remove(self.cache[key])
            self.insert(self.cache[key])
            return self.cache[key].val
        return -1

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            self.remove(self.cache[key])
        self.cache[key] = Node(key, value)
        self.insert(self.cache[key])
        if len(self.cache) > self.cap:
            # remove from the list and delete the LRU from hashmap
            lru = self.left.next
            self.remove(lru)
            del self.cache[lru.key]
</code></pre>
<p>Why am I deleting the LRU? In other words, why does <code>self.left.next</code> represent the least recently used key?</p>
<p>To illustrate this let's say your capacity param is 2. And you currently have two nodes comprising your doubly linked list: [1,1] and [2,2]</p>
<p>Now you put a 3rd node [3,3] so the linked list looks like this: [2,2] [1,1] [3,3]. But you have exceeded the capacity constraint of two so you need to remove the LRU (least recently used) node which is [2,2]. For lru = self.left.next ... why does it equal [2,2] and not [1,1]?</p>
|
<python><doubly-linked-list>
|
2023-02-03 14:41:22
| 1
| 5,244
|
PineNuts0
|
75,337,193
| 13,231,896
|
How to draw polygons in python (giving cordenates) and number its edges?
|
<p>I have a list of agroforestry plots (polygons) in Python. Every polygon is represented as latitude/longitude pairs.</p>
<pre><code> polygon1_coords = [
(11.946898956, -86.109286248),
(11.947196373, -86.109159886),
(11.947092456, -86.108962101),
(11.946897164, -86.10886504),
(11.946898956, -86.109286248 )]
polygon2_coords = [
(11.946895372, -86.110055411),
(11.94718204, -86.110046254),
(11.947194581, -86.109172705),
(11.946900747, -86.109313718),
(11.946895372, -86.110055411)
]
</code></pre>
<p>I need a way to draw those polygons as an image and place numbers on their edges. Each polygon also needs to be labelled with a letter. This is an example of what I need:</p>
<p><a href="https://i.sstatic.net/qa5ub.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qa5ub.png" alt="Polygons I need to draw" /></a></p>
<p>As you can see I need to be able to represent several polygons in the same picture and number its edges individually. How can I do it in python?</p>
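A minimal sketch with matplotlib (an assumption on my part; PIL would work too): draw each polygon as a patch, number its vertices, and put the letter at a simple average of the points. Note the question's coordinates are (lat, lon) pairs; for a map-oriented picture you may want to plot (lon, lat) instead.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen to a file; drop for interactive use
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon

polygon1_coords = [
    (11.946898956, -86.109286248),
    (11.947196373, -86.109159886),
    (11.947092456, -86.108962101),
    (11.946897164, -86.10886504),
    (11.946898956, -86.109286248),
]
polygon2_coords = [
    (11.946895372, -86.110055411),
    (11.94718204, -86.110046254),
    (11.947194581, -86.109172705),
    (11.946900747, -86.109313718),
    (11.946895372, -86.110055411),
]

fig, ax = plt.subplots()
for letter, coords in [("A", polygon1_coords), ("B", polygon2_coords)]:
    pts = coords[:-1]  # last point repeats the first; drop it
    ax.add_patch(Polygon(pts, closed=True, fill=False, edgecolor="black"))
    for i, (x, y) in enumerate(pts, start=1):  # number each vertex
        ax.annotate(str(i), (x, y), color="red")
    # place the letter at the simple average of the points
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    ax.annotate(letter, (cx, cy), fontsize=14, ha="center")
ax.autoscale_view()
fig.savefig("plots.png")
```

To number edges rather than vertices, annotate the midpoint of each pair of consecutive points instead.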
|
<python><python-imaging-library><gis>
|
2023-02-03 14:35:48
| 1
| 830
|
Ernesto Ruiz
|
75,337,122
| 658,209
|
Automatically sync new files from google drive to google cloud storage
|
<p>I want to automatically sync new files that are added to google drive to google cloud storage.
I have seen various people asking this on the web and most of them suggest something along the lines of:</p>
<ul>
<li>Develop an app to poll for new files in the drive</li>
<li>Retrieve new files and upload them to GCS</li>
</ul>
<p>If someone has already written an open-source library/script for this then I would like to reuse it instead of re-inventing the wheel.</p>
<p>Edit:</p>
<p>I have now written a watcher webhook API in Python and subscribed to the folder to get notified when a new file is added to Google Drive.
The issue now is that when the webhook is called by Google, no information is provided about the new files/folders that were added.</p>
|
<python><google-cloud-platform><google-drive-api><google-cloud-storage>
|
2023-02-03 14:29:26
| 1
| 1,364
|
Prim
|
75,337,051
| 9,257,578
|
How to trim text from file and put it another file using python?
|
<p>I have a text file called file1 like</p>
<pre><code>
HelloWorldTestClass
MyTestClass2
MyTestClass4
MyHelloWorld
ApexClass
*
ApexTrigger
Book__c
CustomObject
56.0
</code></pre>
<p>Now I want to write to file2 only the lines that contain <code>test</code>, so the output looks like this:</p>
<pre><code> HelloWorldTestClass
MyTestClass2
MyTestClass4
</code></pre>
<p>I have a code like this</p>
<pre><code>import re
import os

file_contents1 = f'{os.getcwd()}/build/testlist.txt'
file2_path = f'{os.getcwd()}/build/optestlist.txt'

with open(file_contents1, 'r') as file1:
    file1_contents = file1.read()

# Use a regular expression pattern to match strings that contain "test"
test_strings = [x for x in file1_contents.split("\n") if re.search(r"test", x, re.IGNORECASE)]

with open(file2_path, 'w') as file2:
    # write the contents of the first file to the second file
    for test in test_strings:
        file2.write(test)
</code></pre>
<p>But it is outputting everything on one line:
<code> HelloWorldTestClass MyTestClass2 MyTestClass4</code></p>
<p>I didn't find a related question; if one has already been asked, please link it. Thanks!</p>
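One likely fix: `file.write()` does not append a newline, so the matches run together. Writing a newline per match (sketched below with the expected matches inlined, rather than read from the file) restores the line structure:

```python
test_strings = ["HelloWorldTestClass", "MyTestClass2", "MyTestClass4"]

# join with newlines (plus a trailing one) instead of bare write() calls
with open("optestlist.txt", "w") as file2:
    file2.write("\n".join(test_strings) + "\n")

with open("optestlist.txt") as check:
    print(check.read())
```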
|
<python>
|
2023-02-03 14:24:36
| 1
| 533
|
Neetesshhr
|
75,336,802
| 5,510,540
|
python: label position lineplot() with secondary y-axes
|
<p>I have produced a plot with secondary y-axes and the labels are on top of each other:
<a href="https://i.sstatic.net/h00fD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h00fD.png" alt="enter image description here" /></a></p>
<p>The code is the following:</p>
<pre><code> sns.lineplot(data=incident_cnt["A"], ax=ax,color="#FF4613", marker='o')
sns.lineplot(data=incident_cnt["B"], ax=ax, color="#80D5FF", marker='o')
ax2 = ax.twinx()
sns.lineplot(data=incident_cnt["C"], ax=ax2, color="#00FFAA", marker='o')
ax.set_xlabel("Date of index event")
ax.set_ylabel("Patients (no)")
ax2.set_ylabel("Patients (no) - DIC", color="#00FFAA")
ax2.set_ylim(0, ax.get_ylim()[1]/10)
</code></pre>
<p>Is there a way to manually change the position of label C so it does not sit on top of A?</p>
|
<python><plot>
|
2023-02-03 14:04:53
| 1
| 1,642
|
Economist_Ayahuasca
|
75,336,749
| 1,770,902
|
Pyglet render into texture
|
<p>I'm writing a drawing program using <a href="https://pyglet.org/" rel="nofollow noreferrer">pyglet</a>, and I want to be able to have the image being created as separate from the window's buffer (for instance, the image could be larger than the window, or may want to draw to this image at a different rate than the main window is being re-drawn). I want to be able to draw into this off-screen image, then display it in the window, but pyglet doesn't allow drawing to anything else than a window. Is there any simple way I can do this?</p>
<p>I've tried creating a second hidden pyglet window, but this gets rendered at the same rate as the main window which I definitely don't want.</p>
<p>The closest I found was <a href="https://stackoverflow.com/questions/44604391/pyglet-draw-text-into-texture">Pyglet draw text into texture</a>, but the code there isn't complete, and also no longer works as the opengl version used by pyglet has moved on.</p>
|
<python><opengl><pyglet><render-to-texture>
|
2023-02-03 14:00:35
| 1
| 463
|
timfoden
|
75,336,535
| 18,221,164
|
Downloading from Nexus Repository using Python
|
<p>I have a task to download some artifacts from Nexus, using the <code>nexuscli</code> library.
The Nexus repository structure is very hierarchical.</p>
<pre><code>https://localhostABC.de/nexus/#browse/browse
</code></pre>
<p>will lead to a folder structure as below:</p>
<pre><code>release
--de
----Updates
------operating
--------libs
-----------Alib
-------------1.0.0
---------------Alib-1.0.0-zip
</code></pre>
<p>On the right click of the zip, I get a file path: <code>de/release/operating/libs/Alib/1.0.0-zip</code></p>
<p>When the file path is copied and pasted to another tab in a browser, it directly downloads the zip. The address is <a href="https://localhost.de/nexus/repository/release/de/Updates/operating/libs/Alib/1.0.0/Alib-1.0.0-32-bit.zip" rel="nofollow noreferrer">https://localhost.de/nexus/repository/release/de/Updates/operating/libs/Alib/1.0.0/Alib-1.0.0-32-bit.zip</a></p>
<p>To accomodate all of this inside the script for python:</p>
<pre><code>import nexuscli.nexus_http
from nexuscli.api.repository.base_models.repository import Repository
nexus_config = nexuscli.nexus_config.NexusConfig(url="https://localhost.de/nexus", username="XXX",password="XXX")
nexus_client = nexuscli.NexusClient(nexus_config)
nexus_http = nexuscli.nexus_http.NexusHttp(nexus_config)
nexus_repository = Repository(name="release", nexus_client=nexus_client, nexus_http=nexus_http)
nexus_repository.download("de/release/operating/libs/Alib/1.0.0-zip",
"C:\\Users\\XXX\\Pycharm\\Projects\\Trial", flatten=True)
print("HELLO")
</code></pre>
<p>It fails and throws an error saying provide creds for Nexus3.</p>
<p>But the credentials are correct as I can download from the browser.</p>
<p>Any suggestions?</p>
|
<python><nexus>
|
2023-02-03 13:41:38
| 0
| 511
|
RCB
|
75,336,506
| 1,868,775
|
Nodejs equivalent of Python's b'string'?
|
<p>Give this <code>salt</code> in Python</p>
<p><code>salt = b"0000000000000000004d6ec16dafe9d8370958664c1dc422f452892264c59526"</code></p>
<p>What's the equivalent in Nodejs?</p>
<p>I have this</p>
<p><code>const salt = Buffer.from("0000000000000000004d6ec16dafe9d8370958664c1dc422f452892264c59526", "hex");</code></p>
<p>But upon conversion to string, they don't match.</p>
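The mismatch is the encoding: the Python literal `b"0000…9526"` is the 64 ASCII bytes of the hex characters themselves, while `Buffer.from(str, "hex")` decodes them into 32 raw bytes. So the Node equivalent of the Python literal would be `Buffer.from(str, "utf8")` (or `"ascii"`), and the Python equivalent of the hex-decoding Buffer is `bytes.fromhex`. A sketch on the Python side:

```python
salt_ascii = b"0000000000000000004d6ec16dafe9d8370958664c1dc422f452892264c59526"
salt_raw = bytes.fromhex(salt_ascii.decode())   # what Buffer.from(..., "hex") holds

print(len(salt_ascii), len(salt_raw))  # 64 32
```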
|
<python><node.js>
|
2023-02-03 13:39:10
| 1
| 4,292
|
Filipe Aleixo
|
75,336,478
| 15,452,168
|
Extracting dates in a pandas dataframe column using regex
|
<p>I have a data frame with a column <code>Campaign</code> which consists of the campaign name (start date - end date) format. I need to create 3 new columns by extracting the start and end dates.</p>
<pre class="lang-none prettyprint-override"><code>start_date, end_date, days_between_start_and_end_date.
</code></pre>
<p>The issue is that the <code>Campaign</code> column value is not in a fixed format; for the value below my code block works well.</p>
<pre><code>1. Season1 hero (18.02. -24.03.2021)
</code></pre>
<p>What I am doing in my code snippet is extracting the start and end dates from the campaign column and, as you see, the start date doesn't have a year, so I add the year by checking the month value.</p>
<pre><code>import pandas as pd
import re
import datetime

# read csv file
df = pd.read_csv("report.csv")

# extract start and end dates from the 'Campaign' column
dates = df['Campaign'].str.extract(r'(\d+\.\d+)\.\s*-\s*(\d+\.\d+\.\d+)')
df['start_date'] = dates[0]
df['end_date'] = dates[1]

# convert start and end dates to datetime format
df['start_date'] = pd.to_datetime(df['start_date'], format='%d.%m')
df['end_date'] = pd.to_datetime(df['end_date'], format='%d.%m.%Y')

# Add year to start date
for index, row in df.iterrows():
    if pd.isna(row["start_date"]) or pd.isna(row["end_date"]):
        continue
    start_month = row["start_date"].month
    end_month = row["end_date"].month
    year = row["end_date"].year
    if start_month > end_month:
        year = year - 1
    dates_str = str(row["start_date"].strftime("%d.%m")) + "." + str(year)
    df.at[index, "start_date"] = pd.to_datetime(dates_str, format="%d.%m.%Y")
    dates_str = str(row["end_date"].strftime("%d.%m")) + "." + str(row["end_date"].year)
    df.at[index, "end_date"] = pd.to_datetime(dates_str, format="%d.%m.%Y")
</code></pre>
<p>But I have multiple other column values where my regex fails and I receive <code>nan</code> values, for example:</p>
<pre><code>1. Sales is on (30.12.21-12.01.2022)
2. Sn 2 Fol CAMPAIGN A (24.03-30.03.2023)
3. M SALE (19.04 - 04.05.2022)
4. NEW SALE (29.12.2022-11.01.2023)
5. Year End (18.12. - 12.01.2023)
6. XMAS 1 - THE TRIBE CELEBRATES XMAS (18.11.-08.12.2021) (gifting communities)
Year End (18.12. - 12.01.2023)
</code></pre>
<p>In all the above examples, my date format is completely different.</p>
<p>expected output</p>
<pre><code>start date   end date
2021-12-30   2022-01-12
2023-03-24   2023-03-30
2022-04-19   2022-05-04
2022-12-29   2023-01-11
2022-12-18   2023-01-12
2021-11-18   2021-12-08
</code></pre>
<p>Can someone please help me here?</p>
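A sketch of one more tolerant pattern, checked only against the examples in the question: the start date's year and trailing dot are optional, the end date always carries a year, and a missing start year is borrowed from the end date (minus one when the campaign wraps over New Year):

```python
import re
import pandas as pd

campaigns = [
    "Sales is on (30.12.21-12.01.2022)",
    "Sn 2 Fol CAMPAIGN A (24.03-30.03.2023)",
    "M SALE (19.04 - 04.05.2022)",
    "NEW SALE (29.12.2022-11.01.2023)",
    "Year End (18.12. - 12.01.2023)",
    "XMAS 1 - THE TRIBE CELEBRATES XMAS (18.11.-08.12.2021) (gifting communities)",
]

# start: day.month with optional (2- or 4-digit) year; end: day.month.year
pattern = r"(\d{1,2}\.\d{1,2}\.?(?:\d{2,4})?)\s*-\s*(\d{1,2}\.\d{1,2}\.\d{2,4})"

def parse(campaign):
    m = re.search(pattern, campaign)
    if not m:
        return None, None
    start_raw, end_raw = m.group(1).rstrip("."), m.group(2)
    end = pd.to_datetime(end_raw, dayfirst=True)
    if start_raw.count(".") == 1:            # no year on the start date
        start = pd.to_datetime(f"{start_raw}.{end.year}", dayfirst=True)
        if start > end:                      # campaign wraps over New Year
            start = start.replace(year=end.year - 1)
    else:
        start = pd.to_datetime(start_raw, dayfirst=True)
    return start, end

for c in campaigns:
    print(parse(c))
```

The third column would then be `(end - start).days`.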
|
<python><pandas><regex><dataframe><datetime>
|
2023-02-03 13:36:40
| 2
| 570
|
sdave
|
75,336,477
| 19,556,055
|
Replace a value in a single column of a dataframe within a list of dataframes
|
<p>I have a long list of dataframes. For each dataframe, I want to replace all 0 values in a column with an empty string. I can use the code below to do it, but I was wondering if there is a faster way? There are about 18000 dataframes in the list, so any speed gain is a huge help.</p>
<pre><code>for df_player in list_all_dfs:
    df_player["Position"] = df_player["Position"].replace(0, "")
</code></pre>
|
<python><pandas><for-loop>
|
2023-02-03 13:36:17
| 1
| 338
|
MKJ
|
75,336,237
| 1,284,499
|
Change json.dumps behaviour : customize serialization
|
<p>Imagine, I've got a dict <code>{"a": "hello", "b": b"list"}</code></p>
<ul>
<li>'a' is a string</li>
<li>'b' is a byte string</li>
</ul>
<p>I would like to serialize the dict into the "json"(*) string --> '{"a": "hello", "b": list}'</p>
<p>(*) : not really json compliant</p>
<p>For that, i've written that method, it works ....</p>
<pre class="lang-py prettyprint-override"><code>def stringify(obj):
    def my(obj):
        if isinstance(obj, bytes):
            return "<:<:%s:>:>" % obj.decode()
    return json.dumps(obj, default=my).replace('"<:<:', "").replace(':>:>"', "")
</code></pre>
<p>(the "<:<:" & ":>:>" are just added before serialization, to be replaced, post json serialization, to obtain the desired result)</p>
<p>It's a little bit hacky, using string substitution to obtain the result ... but it works ;-)</p>
<p>I ask myself, and you, whether it can be done in a better/more pythonic way ...
Do you have any ideas?</p>
<p><strong>EDIT</strong>
I would like to rewrite my stringify, in a better way, with assertions :</p>
<pre class="lang-py prettyprint-override"><code>assert stringify( dict(a="hello",b=b"byte") ) == '{"a": "hello", "b": byte}'
assert stringify( ["hello", b"world"] ) == '["hello", world]'
assert stringify( "hello" ) == '"hello"'
assert stringify( b"world" ) == "world"
</code></pre>
|
<python><json><string><serialization>
|
2023-02-03 13:13:46
| 1
| 1,131
|
manatlan
|
75,336,092
| 12,430,026
|
Seaborn clustermap showing less columns that the input dataframe has
|
<p>When using the <code>seaborn</code> <code>clustermap</code> method, the resulting plot has fewer columns than the input dataframe. Does anyone know when this can happen?</p>
<p>The input data is a 70x64 dataframe of counts, filled mostly with 0s. No row or column is all 0s, as can be seen here: <a href="https://pastebin.com/v5D8MRTP" rel="nofollow noreferrer">https://pastebin.com/v5D8MRTP</a></p>
<pre class="lang-py prettyprint-override"><code>import seaborn as sns
import pandas as pd
import numpy as np
import scipy.cluster.hierarchy

cluster_df = pd.read_csv("some/path/to.csv")

# Generate clustering on binary data
row_clus = scipy.cluster.hierarchy.linkage(np.where(cluster_df > 0, 1, 0), method="ward")
col_clus = scipy.cluster.hierarchy.linkage(np.where(cluster_df.transpose() > 0, 1, 0), method="ward")
</code></pre>
# Clustering heatmap
plot_clus = sns.clustermap(cluster_df, standard_scale = None,
row_linkage = row_clus,
col_linkage = col_clus)
</code></pre>
<p>This results in the following, which I get even if just calling <code>sns.clustermap(cluster_df)</code>:</p>
<p><a href="https://i.sstatic.net/z6tts.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z6tts.png" alt="enter image description here" /></a></p>
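One sanity check worth running (a sketch with random stand-in data, since the linked CSV isn't reproduced here): a linkage over n observations always has n - 1 merge rows, so the shapes below confirm whether the clustering really covers every row and every column of the dataframe.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# stand-in for cluster_df's 70x64 count matrix (the real data comes from the CSV)
rng = np.random.default_rng(0)
data = rng.integers(0, 5, size=(70, 64))

row_clus = linkage(np.where(data > 0, 1, 0), method="ward")
col_clus = linkage(np.where(data.T > 0, 1, 0), method="ward")

# n observations -> n - 1 merge rows, 4 columns per merge
print(row_clus.shape)  # (69, 4)
print(col_clus.shape)  # (63, 4)
```

If the shapes are right, the apparently missing columns may be a rendering issue (for example, omitted tick labels in a small figure) rather than missing leaves.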
|
<python><seaborn><clustermap>
|
2023-02-03 12:59:14
| 0
| 1,577
|
Lamma
|
75,336,039
| 7,376,511
|
ClassVar for variable type defined on __init__subclass__
|
<pre><code>from typing import ClassVar, TypeVar

class School:
    def __init__(self) -> None:
        self.number = 0
    def test(self) -> None:
        self.number = 0

class Sophism(School):
    def test(self) -> None:
        self.number = 1

class Epicureanism(School):
    def test(self) -> None:
        self.number = 2

PhilosophySchool = TypeVar("PhilosophySchool", bound=School)

class Philosopher:
    school: ClassVar[PhilosophySchool]  # Type variable "PhilosophySchool" is unbound [valid-type]
    def __init_subclass__(cls, /, school: type[PhilosophySchool], **kwargs: object) -> None:
        super().__init_subclass__(**kwargs)
        cls.school = school()

class Sophist(Philosopher, school=Sophism):
    pass

s1 = Sophist()
s2 = Sophist()
s1.school.test()  # PhilosophySchool? has no attribute "test"
s1.school.number == s2.school.number == Sophist.school.number  # True # PhilosophySchool? has no attribute "number"
s1.school == s2.school == Sophist.school  # True # Unsupported left operand type for == (PhilosophySchool?)
</code></pre>
<p>I am trying to make a class that automatically instantiates some properties on definition. I get multiple warnings from mypy, but I cannot understand any of them, because this code works in the interpreter.</p>
<p>How can I tell mypy that Philosopher's "school" variable, which I define on subclassing, is <strong>always</strong> a subclass of School, the very same subclass that I pass on <code>school=Sophism</code>?
In the last line, <code>s.school.test()</code>, mypy cannot even tell that school is an instance of Sophism(), not School(), and somehow it thinks it doesn't have <code>test</code> nor <code>number</code>, despite School itself having them defined.</p>
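One workaround (a sketch, not necessarily the idiomatic fix): since <code>ClassVar</code> may not be parameterised with a type variable, annotate the class variable with the bound itself. The cost is precision: the checker then sees <code>Sophist.school</code> as a <code>School</code> rather than a <code>Sophism</code>, but <code>test()</code> and <code>number</code> resolve fine.

```python
from typing import ClassVar

class School:
    def __init__(self) -> None:
        self.number = 0
    def test(self) -> None:
        self.number = 0

class Sophism(School):
    def test(self) -> None:
        self.number = 1

class Philosopher:
    # ClassVar may not contain a TypeVar, so use the bound:
    school: ClassVar[School]

    def __init_subclass__(cls, /, school: type[School], **kwargs: object) -> None:
        super().__init_subclass__(**kwargs)
        cls.school = school()

class Sophist(Philosopher, school=Sophism):
    pass
```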
|
<python><mypy><python-typing><metaclass>
|
2023-02-03 12:54:46
| 2
| 797
|
Some Guy
|
75,336,005
| 8,077,270
|
Splinter Python ElementDoesNotExist for is_visible()
|
<p>In my code I have following line:</p>
<pre><code>browser.find_by_css(business_role_expand).is_visible(1000)
</code></pre>
<p>According to the <a href="https://splinter.readthedocs.io/en/latest/api/driver-and-element-api.html?highlight=is_visible#splinter.driver.ElementAPI.is_visible" rel="nofollow noreferrer">documentation</a>, the code should wait a maximum of 1000 seconds for the element specified by the CSS selector to load and become visible, and return <code>False</code> otherwise. However, instead I get this error:</p>
<pre><code>splinter.exceptions.ElementDoesNotExist: no elements could be found with css "div.panel:nth-child(4) > div:nth-child(1) > a:nth-child(1)"
</code></pre>
<p>Can anyone advise me? I don't understand why this happens. I'm using Firefox driver.</p>
|
<python><selenium><firefox><webdriver><splinter>
|
2023-02-03 12:51:28
| 2
| 475
|
eXPRESS
|
75,335,972
| 12,027,232
|
ONNX runtime predictions on raw CuPy arrays?
|
<p>I am following the ONNX inference tutorial at: <a href="https://github.com/onnx/models/blob/main/vision/classification/onnxrt_inference.ipynb" rel="nofollow noreferrer">https://github.com/onnx/models/blob/main/vision/classification/onnxrt_inference.ipynb</a>.</p>
<p>Instead of doing the pre-processing in pure NumPy, I have re-written the function to be done in CuPy for GPU-acceleration.</p>
<p>The pre-processing function thus looks like:</p>
<pre><code>import cupy as cp

def preprocess_gpu(cuImage):
    img = cuImage / 255.
    h, w = img.shape[0], img.shape[1]
    y0 = (h - 224) // 2
    x0 = (w - 224) // 2
    img = img[y0 : y0+224, x0 : x0+224, :]
    img = cp.divide(cp.subtract(img, cp.array([0.485, 0.456, 0.406])), cp.array([0.229, 0.224, 0.225]))
    img = cp.transpose(img, axes=[2, 0, 1])
    img = cp.expand_dims(img, axis=0)
    return img
</code></pre>
<p>When feeding such an array into the prediction function,</p>
<pre><code>def predict(path):
    img = get_cuimage(path)
    img = preprocess_gpu(img)
    ort_inputs = {session.get_inputs()[0].name: img}
    preds = session.run(None, ort_inputs)[0]
    preds = np.squeeze(preds)
    a = np.argsort(preds)[::-1]
    print('class=%s ; probability=%f' % (labels[a[0]], preds[a[0:1]]))

predict(path)
</code></pre>
<p>I get the error:</p>
<pre><code>RuntimeError: Input must be a list of dictionaries or a single numpy array for input 'data'.
</code></pre>
<p>Are there any workarounds? I know that the ONNX runtime is currently using the CPU, but that should not be a problem. Furthermore, I cannot seem to find onnxruntime-gpu on Conda anywhere?</p>
<p>Any tips greatly appreciated.</p>
|
<python><onnx><cupy><onnxruntime>
|
2023-02-03 12:48:57
| 0
| 410
|
JOKKINATOR
|
75,335,785
| 7,790,226
|
How to access Request object & dependencies of FastAPI in models created from Pydantic's BaseModel
|
<p>I am writing APIs using the stack FastAPI, Pydantic & SQLAlchemy, and I have come across many cases where I had to query the database to perform validations on payload values. Let's consider one example API, <code>/forgot-password</code>. This API will accept <code>email</code> in the payload, and I need to validate the existence of the email in the database. If the email exists in the database, then the necessary actions like creating a token and sending mail would be performed; otherwise an error response against that field should be raised by Pydantic. The error responses must be the standard <code>PydanticValueError</code> response, so that all validation errors have consistent responses, which makes them easy for consumers to handle.</p>
<h1>Payload -</h1>
<pre><code>{
"email": "example@gmail.com"
}
</code></pre>
<p>In Pydantic this schema and the validation for email is implemented as -</p>
<pre><code>class ForgotPasswordRequestSchema(BaseModel):
    email: EmailStr

    @validator("email")
    def validate_email(cls, v):
        # this is the db query I want to perform but
        # I do not have access to the active session of this request.
        user = session.get(Users, email=v)
        if not user:
            raise ValueError("Email does not exist in the database.")
        return v
</code></pre>
<p>Now this can be easily handled if we simply create an Alchemy session in the Pydantic model like this.</p>
<pre><code>class ForgotPasswordRequestSchema(BaseModel):
    email: EmailStr
    _session = get_db()  # this will simply return the database session.
    _user = None

    @validator("email")
    def validate_email(cls, v):
        # Here I want to query the Users model to see if the email exists in
        # the database. If the email does not exist then I would like to raise
        # a custom Python exception as shown below.
        user = cls._session.get(Users, email=v)  # Here I can use the session
        # as I have already initialised it as a class variable.
        if not user:
            cls._session.close()
            raise ValueError("Email does not exist in the database.")
        cls._user = user  # this is because we want to use the user object in
        # the request function.
        cls._session.close()
        return v
</code></pre>
<p>But it is not the right approach, as throughout the request only one session should be used. As you can see in the above example we are closing the session, so we won't be able to use the user object in the request function as <code>user = payload._user</code>. This means we will have to query for the same row again in the request function. If we do not close the session, then we see SQLAlchemy exceptions like this: <code>sqlalchemy.exc.PendingRollbackError</code>.</p>
<p>Now, the best approach is to be able to use the same session in the Pydantic model which is created at the start of request and is also closing at the end of the request.</p>
<p>So, I am basically looking for a way to pass that session to Pydantic as context. Session to my request function is provided as dependency.</p>
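One framework-agnostic way to make a per-request session reachable from code that cannot receive it as an argument (such as a Pydantic validator) is a <code>contextvars.ContextVar</code> that the dependency sets at the start of the request and resets at the end. A minimal sketch, not FastAPI-specific; <code>FakeSession</code> and <code>request_scope</code> are stand-ins for the real dependency machinery:

```python
import contextvars

# per-request session holder; a FastAPI dependency would set and reset it
db_session: contextvars.ContextVar = contextvars.ContextVar("db_session")

class FakeSession:
    """Stand-in for the SQLAlchemy session."""
    def get(self, model, **filters):
        return {"email": filters.get("email")}

def request_scope(session):
    """Simulates what the dependency does around one request."""
    token = db_session.set(session)
    try:
        # any validator running inside the request can call db_session.get()
        current = db_session.get()
        return current.get(None, email="example@gmail.com")
    finally:
        db_session.reset(token)  # the session never leaks across requests

result = request_scope(FakeSession())
```

Inside the validator, <code>db_session.get()</code> would then return the same session object the route function uses, so the row only has to be fetched once.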
|
<python><sqlalchemy><fastapi><pydantic>
|
2023-02-03 12:32:06
| 2
| 1,261
|
Jeet Patel
|
75,335,722
| 15,724,084
|
python function global and local scope confusion
|
<p>I have code in which I declare a variable globally. Then inside a function, when I try to use it, it raises <code>UnboundLocalError: local variable referenced before assignment</code>.
My code:</p>
<pre><code>count_url = 1
def foo():
    ...
    ttk.Label(canvas1, text=f'{varSongTitle}...Done! {count_url}/{str(var_len)}').pack(padx=3, pady=3)
    root.update()
    count_url = count_url + 1
</code></pre>
<p>I read <a href="https://stackoverflow.com/questions/10851906/python-3-unboundlocalerror-local-variable-referenced-before-assignment">here</a> how to bypass this issue. My guess is that inside the function my globally declared variable was becoming local, because after printing it out I was reassigning it with <code>count_url = count_url + 1</code>. That's why I also needed to declare it with <code>global</code> inside the function, as below:</p>
<pre><code>count_url = 1
def foo():
    global count_url
    ...
    ttk.Label(canvas1, text=f'{varSongTitle}...Done! {count_url}/{str(var_len)}').pack(padx=3, pady=3)
    root.update()
    count_url = count_url + 1
</code></pre>
<p>Now the code works perfectly! But I have a pair of questions: how, and why? Why does it not behave the same if I put <code>global</code> in the global scope, like</p>
<pre><code>global count_url
count_url=1
def foo():
...
</code></pre>
<p>And also, how can it be that assigning a value to my global variable inside the function makes it local?</p>
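The rule is decided at compile time: if a name is assigned anywhere in a function body, Python treats that name as local for the entire function. And <code>global</code> at module level is a no-op, because module scope already is the global scope. A minimal demonstration:

```python
x = 1

def read_only():
    # no assignment to x anywhere in this body, so x refers to the global
    return x

def assigns():
    # the compiler sees `x = 0` below, so x is local for the WHOLE function,
    # and reading it before the assignment raises UnboundLocalError
    try:
        return x
    except UnboundLocalError:
        return "unbound"
    x = 0  # never reached, but its mere presence makes x local
```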
|
<python><function><global-variables><local-variables>
|
2023-02-03 12:24:47
| 1
| 741
|
xlmaster
|
75,335,650
| 13,334,778
|
TypeError: MetaData.__init__() got multiple values for argument 'schema'
|
<p>I am trying to execute an SQL query to get a pandas dataframe from a Postgres server using SQLAlchemy.</p>
<p>I made this class to connect to the Postgres server:</p>
<pre><code>import os
import pandas as pd
import sqlalchemy
from sqlalchemy import text
from sqlalchemy.exc import SQLAlchemyError
from dotenv import load_dotenv

class Connection:
    def __init__(self):
        load_dotenv()
        engine = sqlalchemy.create_engine(os.getenv('CONNECTION_STRING'))
        try:
            self.connection = engine.connect()
            print("Connected to server : SUCCESS")
        except SQLAlchemyError as error:
            raise ConnectionError(f'[CONNECTION ERROR] {error}')

    def sql_to_frame(self, filename):
        with open(filename, 'r') as file:
            query = file.read()
        dataframe = pd.read_sql(text(query), self.connection)
        return dataframe

    def close(self):
        self.connection.close()
</code></pre>
<p>on trying to use this <code>Connection</code> object :</p>
<pre><code>data_server = Connection()
my_dataframe = data_server.sql_to_frame('data_table.sql')
</code></pre>
<p>I am receiving the following error:</p>
<p><img src="https://i.sstatic.net/fOWie.png" alt="error_screenshot" /></p>
<p>I am using <code>venv</code> with following packages :</p>
<pre><code>Package Version
-------------------- -----------
anyio 3.6.2
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
attrs 22.2.0
backcall 0.2.0
beautifulsoup4 4.11.2
bleach 6.0.0
cffi 1.15.1
colorama 0.4.6
debugpy 1.6.6
decorator 5.1.1
defusedxml 0.7.1
entrypoints 0.4
fastjsonschema 2.16.2
ftfy 6.1.1
fuzzywuzzy 0.18.0
greenlet 2.0.2
idna 3.4
importlib-metadata 6.0.0
importlib-resources 5.10.2
ipykernel 6.16.2
ipython 7.34.0
ipython-genutils 0.2.0
ipywidgets 8.0.4
jedi 0.18.2
Jinja2 3.1.2
joblib 1.2.0
jsonschema 4.17.3
jupyter 1.0.0
jupyter_client 7.4.9
jupyter-console 6.4.4
jupyter_core 4.12.0
jupyter-server 1.23.5
jupyterlab-pygments 0.2.2
jupyterlab-widgets 3.0.5
Levenshtein 0.20.9
MarkupSafe 2.1.2
matplotlib-inline 0.1.6
mistune 2.0.4
nbclassic 0.5.1
nbclient 0.7.2
nbconvert 7.2.9
nbformat 5.7.3
nest-asyncio 1.5.6
notebook 6.5.2
notebook_shim 0.2.2
numpy 1.21.6
packaging 23.0
pandas 1.1.5
pandocfilters 1.5.0
parso 0.8.3
pickleshare 0.7.5
pip 23.0
pkgutil_resolve_name 1.3.10
prometheus-client 0.16.0
prompt-toolkit 3.0.36
psutil 5.9.4
psycopg2 2.9.5
pycparser 2.21
Pygments 2.14.0
pyrsistent 0.19.3
python-dateutil 2.8.2
python-dotenv 0.21.1
python-Levenshtein 0.20.9
pytz 2022.7.1
pywin32 305
pywinpty 2.0.10
pyzmq 25.0.0
qtconsole 5.4.0
QtPy 2.3.0
rapidfuzz 2.13.7
scikit-learn 1.0.2
scipy 1.7.3
Send2Trash 1.8.0
setuptools 67.1.0
six 1.16.0
sniffio 1.3.0
soupsieve 2.3.2.post1
SQLAlchemy 2.0.1
terminado 0.17.1
threadpoolctl 3.1.0
tinycss2 1.2.1
tornado 6.2
traitlets 5.9.0
typing_extensions 4.4.0
wcwidth 0.2.6
webencodings 0.5.1
websocket-client 1.5.0
wheel 0.38.4
widgetsnbextension 4.0.5
zipp 3.12.0
</code></pre>
<ul>
<li>I have looked for similar issues but found no results</li>
<li>The Python version used to create the <code>venv</code> is <code>python3.7</code></li>
<li>Is there any Postgres driver that I have to import, which SQLAlchemy uses internally?</li>
</ul>
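For what it's worth, one hypothesis consistent with the error: pandas 1.1.5 predates SQLAlchemy 2.0, whose reworked API (including <code>MetaData</code>) older pandas releases cannot drive, so pinning <code>SQLAlchemy&lt;2.0</code> or upgrading pandas would be the thing to try. The exact cutoff below is an assumption; a stdlib-only sketch of the version comparison:

```python
# versions copied from the pip list in the question, as plain tuples
pandas_version = (1, 1, 5)
sqlalchemy_version = (2, 0, 1)

# assumption: pandas releases before the 1.4/2.x era cannot talk to SQLAlchemy 2.0
incompatible = sqlalchemy_version >= (2, 0, 0) and pandas_version < (1, 4, 0)
print(incompatible)  # True
```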
|
<python><postgresql><sqlalchemy>
|
2023-02-03 12:18:39
| 0
| 341
|
abhinit21
|
75,335,466
| 8,406,122
|
dataclass_transform() got an unexpected keyword argument 'field_specifiers'
|
<p>I am trying to work with this <a href="https://pypi.org/project/g2p-en/" rel="nofollow noreferrer">G2P package</a>, but I am getting the error <code>dataclass_transform() got an unexpected keyword argument 'field_specifiers'</code>. I am trying to run the sample code they provide there, after running <code>!pip install g2p_en</code> in my Jupyter notebook. The sample code for which I am getting the error is attached below. Can someone please help me with the issue?</p>
<pre><code>from g2p_en import G2p

texts = ["I have $250 in my pocket.",  # number -> spell-out
         "popular pets, e.g. cats and dogs",  # e.g. -> for example
         "I refuse to collect the refuse around here.",  # homograph
         "I'm an activationist."]  # newly coined word
g2p = G2p()
for text in texts:
    out = g2p(text)
    print(out)
</code></pre>
|
<python>
|
2023-02-03 12:03:25
| 2
| 377
|
Turing101
|
75,335,461
| 11,155,419
|
How to remove a downstream or upstream task dependency in Airflow
|
<p>Assuming we have the two following Airflow tasks in a DAG,</p>
<pre><code>from airflow.operators.dummy import DummyOperator
t1 = DummyOperator(task_id='dummy_1')
t2 = DummyOperator(task_id='dummy_2')
</code></pre>
<p>we can specify dependencies as:</p>
<pre><code># Option A
t1 >> t2
# Option B
t2.set_upstream(t1)
# Option C
t1.set_downstream(t2)
</code></pre>
<hr />
<p>My question is whether there is any functionality that lets you remove downstream and/or upstream dependencies once they are defined.</p>
<p>I have a fairly big DAG where most of the tasks (and their dependencies) are generated dynamically. Once the tasks are created, I would like to re-arrange some of the dependencies and/or introduce some new tasks.</p>
<p>For example, assuming that the functionality implements the following logic</p>
<pre><code>from airflow.operators.dummy import DummyOperator
t1 = DummyOperator(task_id='dummy_1')
t2 = DummyOperator(task_id='dummy_2')
t1 >> t2
</code></pre>
<p>I would like to then be able to add a new task, add it in between the two tasks, and then remove the old dependency between <code>t1</code> and <code>t2</code>. Is this possible?</p>
<pre><code>from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator

def function_that_creates_dags_dynamically():
    tasks = {
        't1': DummyOperator(task_id='dummy_1'),
        't2': DummyOperator(task_id='dummy_2'),
    }
    tasks['t1'] >> tasks['t2']
    return tasks

with DAG(
    dag_id='test_dag',
    start_date=datetime(2021, 1, 1),
    catchup=False,
    tags=['example'],
) as dag:
    tasks = function_that_creates_dags_dynamically()
    t3 = DummyOperator(task_id='dummy_3')
    tasks['t1'] >> t3
    t3 >> tasks['t2']
    # Somehow remove tasks['t1'] >> tasks['t2']
</code></pre>
|
<python><airflow><airflow-2.x>
|
2023-02-03 12:03:00
| 1
| 843
|
Tokyo
|
75,335,433
| 5,510,540
|
secondary axes 10 times smaller that the main axes using twinx()
|
<p>I am producing a plot, and I want to make the secondary y-axis 10 times smaller than the main y-axis.</p>
<pre><code>ax2 = ax.twinx()
sns.lineplot(data=df["A"], ax=ax)
sns.lineplot(data=df["B"], ax=ax2)
</code></pre>
<p>Is it possible to define <code>ax2 = ax.twinx()/10</code>? How can I specify that <code>ax2</code> should be 10 times lower than <code>ax</code>? For example, if <code>ax</code> goes from 0 to 100, <code>ax2</code> should go from 0 to 1.</p>
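<code>twinx()</code> returns an independent axes, so there is no arithmetic shortcut like <code>ax.twinx()/10</code>. One approach (a sketch; the factor 10 here matches the "10 times smaller" wording, and swapping in 100 would give the 0-to-100 -> 0-to-1 mapping) is to set <code>ax2</code>'s limits from <code>ax</code>'s and keep them locked with a <code>ylim_changed</code> callback:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

FACTOR = 10  # ax2 spans 1/FACTOR of ax

fig, ax = plt.subplots()
ax.plot(range(101))
ax2 = ax.twinx()

def sync_ylim(axes):
    lo, hi = axes.get_ylim()
    ax2.set_ylim(lo / FACTOR, hi / FACTOR)

sync_ylim(ax)                                    # initial sync
ax.callbacks.connect("ylim_changed", sync_ylim)  # re-sync on zoom/autoscale

ax.set_ylim(0, 100)
print(ax2.get_ylim())  # (0.0, 10.0)
```

The two seaborn <code>lineplot</code> calls would then target <code>ax</code> and <code>ax2</code> exactly as in the question.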
|
<python><plot><seaborn>
|
2023-02-03 12:00:23
| 1
| 1,642
|
Economist_Ayahuasca
|
75,335,346
| 7,376,511
|
Mypy error on __init_subclass__ example from python documentation
|
<p>From the <a href="https://docs.python.org/3/reference/datamodel.html#object.__init_subclass__" rel="nofollow noreferrer">official Python 3 documentation</a> for <code>__init_subclass__</code>:</p>
<pre><code>class Philosopher:
    def __init_subclass__(cls, /, default_name: str, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.default_name = default_name

class AustralianPhilosopher(Philosopher, default_name="Bruce"):
    pass
</code></pre>
<p>The problem is, mypy raises <code>"Type[Philosopher]" has no attribute "default_name"</code>. What is the solution for this? How can I make mypy take these values?</p>
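One commonly suggested fix (sketched here) is to declare the attribute as a bare annotation on the base class body, so the checker knows <code>default_name</code> exists on <code>Philosopher</code>; the annotation changes nothing at runtime:

```python
class Philosopher:
    default_name: str  # declare the attribute so the type checker knows it exists

    def __init_subclass__(cls, /, default_name: str, **kwargs) -> None:
        super().__init_subclass__(**kwargs)
        cls.default_name = default_name

class AustralianPhilosopher(Philosopher, default_name="Bruce"):
    pass
```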
|
<python><mypy><metaclass>
|
2023-02-03 11:53:00
| 1
| 797
|
Some Guy
|
75,335,264
| 17,160,160
|
Pandas. merge/join/concat. Rows into columns
|
<p>Given data frames similar to the following:</p>
<pre><code>df1 = pd.DataFrame({'Customer': ['Customer1', 'Customer2', 'Customer3'],
'Status': [0, 1, 1]}
Customer Status
0 Customer1 0
1 Customer2 1
2 Customer3 1
df2 = pd.DataFrame({'Customer': ['Customer1', 'Customer1', 'Customer1', 'Customer2', 'Customer2', 'Customer3'],
'Call': ['01-01', '01-02', '01-03', '02-01', '03-02', '06-01']})
Customer Call
0 Customer1 01-01
1 Customer1 01-02
2 Customer1 01-03
3 Customer2 02-01
4 Customer2 03-02
5 Customer3 06-01
</code></pre>
<p>What is the most efficient method to merge the two into a third data frame in which the rows from df2 become columns added to df1? In the new df each row should be a unique customer, and 'Call' from df2 is added as incrementing columns populated by NaN values as required.</p>
<p>I'd like to end up with something like:</p>
<pre><code> Customer Status Call_1 Call_2 Call_3
0 Customer1 0 01-01 01-02 01-03
1 Customer2 1 02-01 03-02 NaN
2 Customer3 1 06-01 NaN NaN
</code></pre>
<p>I assume some combination of <code>stack()</code> and <code>merge()</code> is required but can't seem to figure it out.</p>
<p>Help appreciated</p>
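One possible approach (a sketch using the sample frames above): number each customer's calls with <code>groupby(...).cumcount()</code>, pivot those numbers into columns, then merge back onto df1:

```python
import pandas as pd

df1 = pd.DataFrame({'Customer': ['Customer1', 'Customer2', 'Customer3'],
                    'Status': [0, 1, 1]})
df2 = pd.DataFrame({'Customer': ['Customer1', 'Customer1', 'Customer1',
                                 'Customer2', 'Customer2', 'Customer3'],
                    'Call': ['01-01', '01-02', '01-03', '02-01', '03-02', '06-01']})

# number each customer's calls, pivot them into columns, then merge onto df1
calls = (df2.assign(n=df2.groupby('Customer').cumcount() + 1)
            .pivot(index='Customer', columns='n', values='Call')
            .add_prefix('Call_')
            .reset_index())
out = df1.merge(calls, on='Customer', how='left')
print(out)
```

Customers with fewer calls than the maximum end up with NaN in the trailing <code>Call_*</code> columns, as in the desired output.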
|
<python><pandas>
|
2023-02-03 11:45:46
| 3
| 609
|
r0bt
|
75,335,094
| 20,646,427
|
__init__() got multiple values for argument 'user'
|
<p>I have a form, model and view, and I am trying to show a ModelChoiceField with filters.</p>
<p>I wrote an <code>__init__</code> in my forms.py, but when I try to submit my form on the HTML page I get an error:</p>
<p>"<code>__init__()</code> got multiple values for argument 'user' "</p>
<p>forms.py</p>
<pre><code>class WorkLogForm(forms.ModelForm):
    worklog_date = forms.DateField(label='Дата', widget=forms.DateInput(
        attrs={'class': 'form-control', 'placeholder': 'Введите дату'}))
    author = forms.EmailField(label='Автор',
                              widget=forms.EmailInput(attrs={'class': 'form-control', 'placeholder': 'Email автора'}))
    contractor_counter = forms.ModelChoiceField(queryset=CounterParty.objects.none())
    contractor_object = forms.ModelChoiceField(queryset=ObjectList.objects.none())
    contractor_section = forms.ModelChoiceField(queryset=SectionList.objects.none())
    description = forms.CharField(label='Описание',
                                  widget=forms.TextInput(attrs={'class': 'form-control', 'placeholder': 'Описание'}))

    def __init__(self, user, *args, **kwargs):
        super(WorkLogForm, self).__init__(*args, **kwargs)
        counter_queryset = CounterParty.objects.filter(counter_user=user)
        object_queryset = ObjectList.objects.filter(
            Q(customer_guid__in=counter_queryset) | Q(contractor_guid__in=counter_queryset))
        section_queryset = SectionList.objects.filter(object__in=object_queryset)
        self.fields['contractor_counter'].queryset = counter_queryset
        self.fields['contractor_object'].queryset = object_queryset
        self.fields['contractor_section'].queryset = section_queryset

    class Meta:
        model = WorkLog
        fields = (
            'worklog_date', 'author', 'contractor_counter', 'contractor_object', 'contractor_section', 'description')
</code></pre>
<p>views.py</p>
<pre><code>def create_work_log(request):
    if request.method == 'POST':
        form = WorkLogForm(request.POST, user=request.user)
        if form.is_valid():
            form.author = request.user
            work_log = form.save()
            return render(request, 'common/home.html')
    else:
        form = WorkLogForm(user=request.user)
    return render(request, 'contractor/create_work_log.html', {'form': form})
</code></pre>
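The clash comes from the call <code>WorkLogForm(request.POST, user=request.user)</code>: because <code>user</code> is the first positional parameter of <code>__init__</code>, <code>request.POST</code> binds to it positionally while <code>user=</code> passes it again. A Django-free sketch of both the failure and the usual <code>kwargs.pop('user')</code> fix:

```python
# minimal reproduction without Django: Base stands in for forms.ModelForm
class Base:
    def __init__(self, *args, **kwargs):
        self.args, self.kwargs = args, kwargs

class Broken(Base):
    def __init__(self, user, *args, **kwargs):
        # first positional arg lands in `user`, then user= is passed again
        super().__init__(*args, **kwargs)
        self.user = user

class Fixed(Base):
    def __init__(self, *args, **kwargs):
        user = kwargs.pop('user')   # take user out of kwargs instead
        super().__init__(*args, **kwargs)
        self.user = user

try:
    Broken({'data': 1}, user='alice')
    broken_ok = True
except TypeError:                   # "got multiple values for argument 'user'"
    broken_ok = False

fixed = Fixed({'data': 1}, user='alice')
```

In the form this becomes <code>def __init__(self, *args, **kwargs): user = kwargs.pop('user')</code> before calling <code>super().__init__(*args, **kwargs)</code>, leaving the view code unchanged.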
|
<python><django>
|
2023-02-03 11:29:57
| 1
| 524
|
Zesshi
|
75,335,059
| 3,922,727
|
Azure function deployed but never run on blob input
|
<p>We are setting up an Azure Function to be triggered once we have a file in an Azure blob storage container.</p>
<p>This file will be used as an input of a python script hosted on Github.</p>
<p>Here is the basic Azure Function script that was generated once the function was set up using Visual Studio Code:</p>
<pre><code>import logging

import azure.functions as func

def main(myblob: func.InputStream):
    logging.info(f"Python blob trigger function processed blob \n"
                 f"Name: {myblob.name}\n"
                 f"Blob Size: {myblob.length} bytes")
</code></pre>
<p>The aim is that this TOML input file uploaded into the blob should serve as the loader of the variables.</p>
<p>The script then run and generates another file that would be saved in another blob.</p>
<p>Using a web app, we are able to upload into the blob; however, judging by the Monitor tab, the function is not triggered:</p>
<p><a href="https://i.sstatic.net/VsGdf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VsGdf.png" alt="enter image description here" /></a></p>
<p>What we want is that within the main() of the azure function, to trigger a python project on github to run with the input file. so it becomes:</p>
<pre><code>def main(myblob: func.InputStream):
    logging.info(f"Python blob trigger function processed blob \n"
                 f"Name: {myblob.name}\n"
                 f"Blob Size: {myblob.length} bytes")
    # python src/main.py fileInput.toml
</code></pre>
<p>Any idea why the enabled function is not running, and what to add to its function?</p>
|
<python><azure><azure-functions>
|
2023-02-03 11:27:03
| 1
| 5,012
|
alim1990
|
75,334,901
| 774,133
|
Transposed dataframe to LaTeX
|
<p>I am not able to change the number format in the LaTeX output of the pandas library.</p>
<p>Consider this example:</p>
<pre><code>import pandas as pd
values = [ { "id":"id1", "c1":1e-10, "c2":int(1000) }]
df = pd.DataFrame.from_dict(values).set_index("id")
print(df)
</code></pre>
<p>with output:</p>
<pre><code> c1 c2
id
id1 1.000000e-10 1000
</code></pre>
<p>Let's say that I want <code>c1</code> formatted with two decimal places, <code>c2</code> as an integer:</p>
<pre><code>s = df.style
s.clear()
s.format({ "c1":"{:.2f}", "c2":"{:d}" })
print(s.to_latex())
</code></pre>
<p>with output:</p>
<pre><code>\begin{tabular}{lrr}
& c1 & c2 \\
id & & \\
id1 & 0.00 & 1000 \\
\end{tabular}
</code></pre>
<p>However, I do not need a LaTeX table for <code>df</code> but for <code>df.T</code>.</p>
<p><strong>Question</strong>: since I can specify the styles only for the columns (at least it seems so in the <a href="https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.format.html" rel="nofollow noreferrer">docs</a>), <strong>how can I specify the row-based output format for <code>df.T</code>?</strong></p>
<p>If I simply write this:</p>
<pre><code>dft = df.T
s2 = dft.style
# s2.clear() # nothing changes with this instruction
print(s2.to_latex())
</code></pre>
<p>it is even worse, as I get:</p>
<pre><code>\begin{tabular}{lr}
id & id1 \\
c1 & 0.000000 \\
c2 & 1000.000000 \\
\end{tabular}
</code></pre>
<p>where even the integer (the one with value <code>int(1000)</code>) became a float using the default style/format.</p>
<p>I played with the <code>subset</code> parameter and various slices with no success.</p>
|
<python><pandas><dataframe><latex><pandas-styles>
|
2023-02-03 11:13:43
| 1
| 3,234
|
Antonio Sesto
|
75,334,890
| 3,575,623
|
Not-quite gradient of dataframe
|
<p>I have a dataframe of ints:</p>
<pre><code>mydf = pd.DataFrame([[0,0,0,1,0,2,2,5,2,4],
[0,1,0,0,2,2,4,5,3,3],
[1,1,1,1,2,2,0,4,4,4]])
</code></pre>
<p>I'd like to calculate something that resembles the gradient given by <code>pd.Series.diff()</code> for each row, but with one big change: my ints represent categorical data, so I'm only interested in detecting a change, not the magnitude of it. So the step from 0 to 1 should be the same as the step from 0 to 4.</p>
<p>Is there a way for pandas to interpret my data as categorical in the data frame, and then calculate a <code>Series.diff()</code> on that? Or could you "flatten" the output of <code>Series.diff()</code> to be only 0s and 1s?</p>
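One way to get the flattened 0/1 output (a sketch on the sample frame above): compare each row against itself shifted one column, which ignores the magnitude of the jump entirely:

```python
import pandas as pd

mydf = pd.DataFrame([[0, 0, 0, 1, 0, 2, 2, 5, 2, 4],
                     [0, 1, 0, 0, 2, 2, 4, 5, 3, 3],
                     [1, 1, 1, 1, 2, 2, 0, 4, 4, 4]])

# compare each row with itself shifted one step right: 1 where the category
# changed, 0 where it stayed the same (the size of the jump is ignored)
changed = mydf.ne(mydf.shift(axis=1)).astype(int)
changed.iloc[:, 0] = 0  # the first column has no predecessor
```

Casting the columns to <code>pd.Categorical</code> first would also work, but the inequality test above already treats every change equally.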
|
<python><pandas>
|
2023-02-03 11:12:32
| 1
| 507
|
Whitehot
|
75,334,864
| 15,958,930
|
Use numpy to mask a row containing only zeros
|
<p>I have a large array of point cloud data which is generated using the Azure Kinect. All erroneous measurements are assigned the coordinate [0,0,0]. I want to remove all coordinates with the value [0,0,0]. Since my array is rather large (1 million points) and since I need to do this process in real time, speed is of the essence.</p>
<p>In my current approach I try to use numpy to mask out all rows that contain three zeroes ([0,0,0]). However, the np.ma.masked_equal function does not evaluate an entire row, but only evaluates single elements. As a result, rows that contain at least one 0 are already filtered by this approach. I only want rows to be filtered when all values in the row are 0. Find an example of my code below:</p>
<p><code>my_data = np.array([[1,2,3],[0,0,0],[3,4,5],[2,5,7],[0,0,1]])</code></p>
<p><code>my_data = np.ma.masked_equal(my_data, [0,0,0])</code></p>
<p><code>my_data = np.ma.compress_rows(my_data)</code></p>
<h3>output</h3>
<pre><code>array([[1, 2, 3],
[3, 4, 5],
[2, 5, 7]])
</code></pre>
<h3>desired output</h3>
<pre><code>array([[1, 2, 3],
[3, 4, 5],
[2, 5, 7],
       [0, 0, 1]])
</code></pre>
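For the row-wise filter, plain boolean indexing is usually both simpler and fast (a sketch on the sample array; it keeps a row iff any of its elements is nonzero):

```python
import numpy as np

my_data = np.array([[1, 2, 3], [0, 0, 0], [3, 4, 5], [2, 5, 7], [0, 0, 1]])

# boolean indexing: keep a row iff any element in it is nonzero
filtered = my_data[np.any(my_data != 0, axis=1)]
```

Equivalently, <code>my_data[~np.all(my_data == 0, axis=1)]</code> drops only the all-zero rows.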
|
<python><numpy><real-time>
|
2023-02-03 11:10:03
| 2
| 577
|
Thijs Ruigrok
|
75,334,831
| 2,583,765
|
Pylance doesn't show suggestions for not-yet-imported definitions
|
<p>According to the <a href="https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance" rel="nofollow noreferrer">Pylance extension page</a>, I should be able to write "g" in my python code, and then Pylance should suggest that I import the <code>gc</code> module (see screenshot below, or <a href="https://github.com/microsoft/pylance-release/raw/prerelease/images/all-features.gif" rel="nofollow noreferrer">the gif from the docs itself</a>). But I can't get this to work on my system.</p>
<p>This is what I expect to happen (screenshot from Pylance extension page):</p>
<p><a href="https://i.sstatic.net/IFUXX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IFUXX.png" alt="expected" /></a></p>
<p>This is what happens on my system:</p>
<p><a href="https://i.sstatic.net/KkFln.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KkFln.png" alt="actual" /></a></p>
<p>Can someone please help me understand why Pylance isn't working as advertised on my system?</p>
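One thing worth checking (an assumption, in case it is disabled on your system): Pylance gates these suggestions behind its auto-import setting, which lives in settings.json:

```json
{
    "python.analysis.autoImportCompletions": true
}
```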
|
<python><visual-studio-code><pylance>
|
2023-02-03 11:07:16
| 1
| 5,175
|
birgersp
|
75,334,744
| 606,576
|
CSV DictWriter cannot write to file open in binary mode, BigQuery Client cannot upload file read in text mode
|
<p>Using Python 3.10 with <code>google-cloud-bigquery==3.4.2</code>.</p>
<p>I have CSV data that I need to load, transform and upload to BigQuery. I want to use a <code>SpooledTemporaryFile</code> for the intermediate data to avoid disk I/O. Simplified code:</p>
<pre><code>from csv import DictReader, DictWriter
from tempfile import SpooledTemporaryFile

from google.cloud.bigquery import Client, LoadJobConfig, SourceFormat

csv_lines = ["A;2;3", "B;4;6", "C;8;12"]
fieldnames = ["Foo", "Bar", "Baz"]
csv_reader = DictReader(csv_lines, fieldnames=fieldnames, delimiter=";")

with SpooledTemporaryFile(mode="w+") as tmp:
    writer = DictWriter(tmp, fieldnames=fieldnames)
    for row in csv_reader:
        writer.writerow({"Foo": row["Foo"], "Bar": row["Bar"], "Baz": row["Baz"]})
    job = Client().load_table_from_file(
        tmp,
        "GCP_PROJECT_ID.BIGQUERY_DATASET_ID.BIGQUERY_TABLE_ID",
        job_config=LoadJobConfig(
            source_format=SourceFormat.CSV, skip_leading_rows=0, autodetect=True
        ),
    )
    job.result()
<p>The problem is that if I open the <code>SpooledTemporaryFile</code> with <code>mode="w+b"</code> (the default), <code>DictWriter</code> fails:</p>
<pre><code>TypeError: a bytes-like object is required, not 'str'
</code></pre>
<p>And if I use <code>mode="w+"</code>, the <code>DictWriter</code> writes but the GCP upload job fails with</p>
<pre><code>ValueError: Cannot upload files opened in text mode:
use open(filename, mode='rb') or open(filename, mode='r+b')
</code></pre>
<p>Any ideas on how to solve this Gordian knot are welcome.</p>
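One way to cut the knot (a sketch): keep the <code>SpooledTemporaryFile</code> binary for the uploader, and give <code>DictWriter</code> a tiny shim that encodes each written string, since csv writers only ever call <code>.write(str)</code>:

```python
from csv import DictWriter
from tempfile import SpooledTemporaryFile

class TextToBytes:
    """Minimal adapter: encodes every string written to a binary file."""
    def __init__(self, binary_file, encoding="utf-8"):
        self.binary_file = binary_file
        self.encoding = encoding
    def write(self, s):
        self.binary_file.write(s.encode(self.encoding))

fieldnames = ["Foo", "Bar", "Baz"]
with SpooledTemporaryFile(mode="w+b") as tmp:   # binary, as the uploader wants
    writer = DictWriter(TextToBytes(tmp), fieldnames=fieldnames)
    writer.writerow({"Foo": "A", "Bar": 2, "Baz": 3})
    tmp.seek(0)
    payload = tmp.read()   # load_table_from_file(tmp, ...) would go here
```

Note the <code>tmp.seek(0)</code>: without rewinding, the upload would read zero bytes from the end of the file.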
|
<python><csv><google-bigquery>
|
2023-02-03 10:59:18
| 1
| 915
|
kthy
|
75,334,695
| 6,026,338
|
Rotate an image in python and fill the cropped area with image
|
<p>Have a look at the image and it will give you a better idea of what I want to achieve. I want to rotate the image and fill the black part of the image, just like in the required image. <a href="https://i.sstatic.net/5Ymji.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5Ymji.png" alt="enter image description here" /></a></p>
<pre><code>import cv2
import numpy as np

# Read the image
img = cv2.imread("input.png")
# Get the image size
h, w = img.shape[:2]
# Define the rotation matrix
M = cv2.getRotationMatrix2D((w/2, h/2), 30, 1)
# Rotate the image
rotated = cv2.warpAffine(img, M, (w, h))
mask = np.zeros(rotated.shape[:2], dtype=np.uint8)
mask[np.where((rotated == [0, 0, 0]).all(axis=2))] = 255
img_show(mask)
</code></pre>
<p>From the code I am able to get the mask of the black regions. Now I want to replace these black regions with the image portion as shown in image <a href="https://i.sstatic.net/5Ymji.png" rel="nofollow noreferrer">1</a>. Is there any better solution for how I can achieve this?</p>
<p><a href="https://i.sstatic.net/Lk7Sb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lk7Sb.png" alt="enter image description here" /></a></p>
|
<python><opencv><image-processing><deep-learning><computer-vision>
|
2023-02-03 10:54:48
| 2
| 1,604
|
Qazi Ammar
|
75,334,569
| 5,618,856
|
python-docx: How to get the document properties into an object
|
<p>I'm using python-docx and I can list the properties with <code>doc.core_properties.__dir__()</code> <a href="https://stackoverflow.com/questions/52120217/python-coreproperties-python-docx">as in this question</a>.
But assigning <code>prs.core_properties</code> to a Python variable doesn't give a usable object.</p>
<p>How do I get the document properties into a python object?</p>
<p>Here is what I tried:</p>
<pre class="lang-py prettyprint-override"><code>from pptx import Presentation
import os
pptFile = "myfile.pptx"
prs = Presentation(os.path.realpath(pptFile))
prsProps = prs.core_properties
print(prs.core_properties.__dir__())
print(prs.core_properties.modified)
prsProps
</code></pre>
<p>yields</p>
<pre><code>['_partname', '_content_type', '_package', '_blob', '_element', '_rels', '__module__', '__doc__', 'default', 'author', 'category', 'comments', 'content_status', 'created', 'identifier', 'keywords', 'language', 'last_modified_by', 'last_printed', 'modified', 'revision', 'subject', 'title', 'version', '_new', '__init__', 'load', 'blob', 'part', 'content_type', 'drop_rel', 'load_rels_from_xml', 'package', 'partname', 'rels', '_blob_from_file', '_rel_ref_count', 'part_related_by', 'relate_to', 'related_part', 'target_ref', '__dict__', '__weakref__', '__new__', '__repr__', '__hash__', '__str__', '__getattribute__', '__setattr__', '__delattr__', '__lt__', '__le__', '__eq__', '__ne__', '__gt__', '__ge__', '__reduce_ex__', '__reduce__', '__getstate__', '__subclasshook__', '__init_subclass__', '__format__', '__sizeof__', '__dir__', '__class__']
2019-11-15 12:30:37
<pptx.parts.coreprops.CorePropertiesPart at 0x1d9a3fa6fd0>
</code></pre>
<p>I expected an object in <code>prsProps</code>.</p>
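A plain dict can be built by reading the documented property names off the object (a sketch; <code>PROP_NAMES</code> is copied from the core property names visible in the <code>__dir__()</code> output above, and the <code>SimpleNamespace</code> below is only a stand-in so the snippet runs without a .pptx file; with python-pptx you would pass <code>prs.core_properties</code>):

```python
from types import SimpleNamespace

# core property names as listed in the __dir__() output above
PROP_NAMES = ["author", "category", "comments", "content_status", "created",
              "identifier", "keywords", "language", "last_modified_by",
              "last_printed", "modified", "revision", "subject", "title",
              "version"]

def props_to_dict(core_properties):
    """Collect the core properties into a plain dict."""
    return {name: getattr(core_properties, name) for name in PROP_NAMES}

# stand-in object here; real usage: props_to_dict(prs.core_properties)
demo = SimpleNamespace(**{name: None for name in PROP_NAMES})
result = props_to_dict(demo)
```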
|
<python><python-docx><python-pptx>
|
2023-02-03 10:45:20
| 1
| 603
|
Fred
|
75,334,495
| 6,013,016
|
how to check if ctrl key is released (pygame)
|
<p>How do I check if the left Ctrl key is released in pygame? I saw the documentation, but the following code didn't work for me:</p>
<pre><code>if event.type == pygame.KEYUP:
if pygame.key.get_mods() & pygame.KMOD_CTRL: # this is not working!
print("Left control is released")
</code></pre>
<p>What am I doing wrong? Or what is the proper way of checking it?</p>
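For reference, one common approach is to test the released key on the KEYUP event itself rather than `pygame.key.get_mods()`, since `get_mods()` reports the modifier state *after* the release. A dependency-free sketch (the two constants below stand in for `pygame.KEYUP` and `pygame.K_LCTRL`):

```python
# Placeholders standing in for pygame.KEYUP and pygame.K_LCTRL; in a real
# program these come straight from the pygame module.
KEYUP, K_LCTRL = object(), object()

def is_left_ctrl_release(event):
    """True for a left-Ctrl key-up event (anything with .type and .key)."""
    return event.type == KEYUP and event.key == K_LCTRL
```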
|
<python><pygame>
|
2023-02-03 10:37:39
| 2
| 5,926
|
Scott
|
75,334,441
| 9,703,039
|
How to select all Dataframe columns with the same names?
|
<p>I am creating a dataframe based on a csv import:</p>
<pre><code>ID, attachment, attachment, comment, comment
1, lol.jpg, lmfao.png, 'Luigi',
2, cat.docx, , 'It's me', 'Mario'
</code></pre>
<p>Basically, the number of 'attachment' and 'comment' columns corresponds to the row that has the largest number of attachments and comments.
Since I am exporting the CSV from a third-party software, I do not know in advance how many attachment and comment columns there will be.</p>
<p>Importing this CSV with <code>pd.read_csv</code> creates the following dataframe</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>ID</th>
<th>attachment</th>
<th>attachment.1</th>
<th>comment</th>
<th>comment.1</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>1</td>
<td>lol.jpg</td>
<td>lmfao.png</td>
<td>'Luigi'</td>
<td></td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>cat.docx</td>
<td></td>
<td>'It's me'</td>
<td>'Mario'</td>
</tr>
</tbody>
</table>
</div>
<p>Is there a simple way to select all attachment/comment columns?</p>
<p>Such as <code>attachments_df = imported_df.attachment.all</code> or <code>comments_df = imported_df['comment].??</code></p>
<p>Thanks.</p>
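For what it's worth, pandas deduplicates repeated CSV headers by appending `.1`, `.2`, ..., so one plausible approach is `DataFrame.filter` with a regex over that naming scheme (a sketch with made-up data mirroring the question):

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 2],
    "attachment": ["lol.jpg", "cat.docx"],
    "attachment.1": ["lmfao.png", None],
    "comment": ["'Luigi'", "'It's me'"],
    "comment.1": [None, "'Mario'"],
})

# Match "attachment" plus any "attachment.N" mangled duplicate.
attachments_df = df.filter(regex=r"^attachment(\.\d+)?$")
comments_df = df.filter(regex=r"^comment(\.\d+)?$")
```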
|
<python><python-3.x><pandas><dataframe>
|
2023-02-03 10:32:16
| 3
| 339
|
Odyseus_v4
|
75,334,440
| 221,270
|
Meteor spectrogram from wave files
|
<p>I am using <a href="https://www.qsl.net/dl4yhf/spectra1.html" rel="nofollow noreferrer">SpectrumLab</a> for logging meteors with an SDR. SpectrumLab records <a href="https://www.qsl.net/dl4yhf/speclab/specdisp.htm#waterfall" rel="nofollow noreferrer">waterfall</a> screenshots and a wav file of an event. I am trying to reproduce the waterfall screenshot of SpectrumLab from the wave file but the pattern looks different:</p>
<pre><code>import librosa
import matplotlib.pyplot as plt
import librosa.display
audio_data = 'event20230131_101027_26.wav'
x , sr = librosa.load(audio_data)
plt.figure(figsize=(14, 5))
librosa.display.waveshow(x, sr=sr)
X = librosa.stft(x)
Xdb = librosa.amplitude_to_db(abs(X))
plt.figure(figsize=(14, 5))
librosa.display.specshow(Xdb, sr=sr, x_axis='time', y_axis='log')
plt.colorbar()
plt.show()
</code></pre>
<p>Screenshot of SpectrumLab:
<a href="https://i.sstatic.net/R54A6.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R54A6.jpg" alt="enter image description here" /></a></p>
<p>Screenshot generated with librosa:
<a href="https://i.sstatic.net/282rV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/282rV.png" alt="enter image description here" /></a></p>
<p>Wave file which was recorded together with the screenshot in SpectrumLab:</p>
<p><a href="https://github.com/snowformatics/sdr_scatter/blob/master/convert_wave/event20230131_101027_26.wav" rel="nofollow noreferrer">Wavefile</a></p>
|
<python><signal-processing><fft><librosa><rtl-sdr>
|
2023-02-03 10:32:16
| 0
| 2,520
|
honeymoon
|
75,334,327
| 9,236,505
|
Provide sample DataFrame from csv file
|
<p>When asking a Python/pandas question on Stack Overflow, I often like to provide a sample dataframe.
I usually have a local csv file I deal with for testing.</p>
<p>So for a DataFrame I like to provide a code in my question like</p>
<pre><code>df = pd.DataFrame()
</code></pre>
<p>Is there an easy way or tool to get a csv file into code in a format like this, so another user can easily recreate the dataframe?</p>
<p>For now I usually do it manually, which is annoying and time consuming. I have to copy/paste the data from excel to stackoverflow, remove tabs/spaces, rearrange numbers to get a list or dictionary and so on.</p>
<p>Example csv file:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>col1</th>
<th>col2</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>3</td>
</tr>
<tr>
<td>2</td>
<td>4</td>
</tr>
</tbody>
</table></div>
<p>If I want to provide this table I can provide code like:</p>
<pre><code>d = {'col1': [1, 2], 'col2': [3, 4]}
df = pd.DataFrame(data=d)
</code></pre>
<p>I will have to create the dictionary and DataFrame manually and write the code into the Stack Overflow editor by hand.
For a more complex table this could lead to a lot of work.</p>
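A small sketch of one way to automate this: read the CSV once and print `df.to_dict("list")`, which pastes straight back into `pd.DataFrame`:

```python
import pandas as pd
from io import StringIO

csv_text = "col1,col2\n1,3\n2,4"          # stands in for the local csv file
df = pd.read_csv(StringIO(csv_text))

snippet = df.to_dict("list")              # paste this dict into the question
rebuilt = pd.DataFrame(snippet)           # round-trips to the same frame
```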
|
<python><pandas>
|
2023-02-03 10:21:40
| 1
| 336
|
Paul
|
75,334,245
| 4,865,723
|
Second optional capturing group depending on optional delimiter in regex
|
<p>I'm sorry for asking this maybe duplicate question. I checked the existing questions and answers about <em>optional capturing groups</em>. I tried some things but I'm not able to translate the answer to my own example.</p>
<p>These are the two input lines:</p>
<pre><code>id:target][label
id:target
</code></pre>
<p>I would like to capture <code>id:</code> (group 1), <code>target</code> (group 2) and if <code>][</code> is present <code>label</code> (group 3).</p>
<p>The used regex (Python regex) only works on the first line (<a href="https://regex101.com/r/JPPV0W/1" rel="nofollow noreferrer">live example on regex101</a>).</p>
<pre><code>^(.+:)(.*)\]\[(.*)
</code></pre>
<p><a href="https://i.sstatic.net/rfU9c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rfU9c.png" alt="enter image description here" /></a></p>
<p>In the other examples I don't get what makes a capturing group in the regex optional. And maybe the <code>][</code> delimiter I use also adds to my confusion.</p>
<p>One thing I tried was this</p>
<pre><code>^(.+:)(.*)(\]\[(.*))?
</code></pre>
<p>This doesn't work as expected
<a href="https://i.sstatic.net/S0kum.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/S0kum.png" alt="enter image description here" /></a></p>
|
<python><regex><regex-group>
|
2023-02-03 10:14:09
| 1
| 12,450
|
buhtz
|
75,334,103
| 6,612,915
|
Explanation needed for a calculation
|
<p>I am fairly new to Python and do not understand the explanation given by the course I am doing. I cannot follow why width gets 2.</p>
<p>To my understanding the <code>print(combine(1)[2])</code> assigns the value to position one. But I thought <code>is_3D</code> is in position 0, hence height would be in position 2.
So I do not understand what is going on here.</p>
<pre><code>def combine(width, height=2, depth=0, is_3D=False):
    return [is_3D, width, height, depth]
print(combine(1)[2])
</code></pre>
<p><a href="https://i.sstatic.net/Gn7WJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gn7WJ.png" alt="enter image description here" /></a></p>
|
<python>
|
2023-02-03 10:00:13
| 2
| 464
|
Anna
|
75,334,025
| 9,236,505
|
bar plot a multiheader dataframe in a desired format
|
<p>I have the following DataFrame:</p>
<pre><code>data = {('Case1', 'A'): {'One': 0.96396415, 'Two': 0.832049574, 'Three': 0.636568627, 'Four': 0.765846157},
('Case1', 'B'): {'One': 0.257496625, 'Two': 0.984418254, 'Three': 0.018891398, 'Four': 0.440278509},
('Case1', 'C'): {'One': 0.512732941, 'Two': 0.622697929, 'Three': 0.731555346, 'Four': 0.031419349},
('Case2', 'A'): {'One': 0.736783294, 'Two': 0.460765675, 'Three': 0.078558864, 'Four': 0.566186283},
('Case2', 'B'): {'One': 0.921473211, 'Two': 0.274749932, 'Three': 0.312766018, 'Four': 0.159229808},
('Case2', 'C'): {'One': 0.146389032, 'Two': 0.893299471, 'Three': 0.536288712, 'Four': 0.775763286},
('Case3', 'A'): {'One': 0.351607026, 'Two': 0.041402396, 'Three': 0.924265706, 'Four': 0.639154727},
('Case3', 'B'): {'One': 0.966538215, 'Two': 0.658236148, 'Three': 0.473447279, 'Four': 0.545974617},
('Case3', 'C'): {'One': 0.036585457, 'Two': 0.279443317, 'Three': 0.407991168, 'Four': 0.101083315}}
pd.DataFrame(data=data)
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>Case1</th>
<th>Case1</th>
<th>Case1</th>
<th>Case2</th>
<th>Case2</th>
<th>Case2</th>
<th>Case3</th>
<th>Case3</th>
<th>Case3</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>A</td>
<td>B</td>
<td>C</td>
<td>A</td>
<td>B</td>
<td>C</td>
<td>A</td>
<td>B</td>
<td>C</td>
</tr>
<tr>
<td>One</td>
<td>0,963964</td>
<td>0,257497</td>
<td>0,512733</td>
<td>0,736783</td>
<td>0,921473</td>
<td>0,146389</td>
<td>0,351607</td>
<td>0,966538</td>
<td>0,036585</td>
</tr>
<tr>
<td>Two</td>
<td>0,83205</td>
<td>0,984418</td>
<td>0,622698</td>
<td>0,460766</td>
<td>0,27475</td>
<td>0,893299</td>
<td>0,041402</td>
<td>0,658236</td>
<td>0,279443</td>
</tr>
<tr>
<td>Three</td>
<td>0,636569</td>
<td>0,018891</td>
<td>0,731555</td>
<td>0,078559</td>
<td>0,312766</td>
<td>0,536289</td>
<td>0,924266</td>
<td>0,473447</td>
<td>0,407991</td>
</tr>
<tr>
<td>Four</td>
<td>0,765846</td>
<td>0,440279</td>
<td>0,031419</td>
<td>0,566186</td>
<td>0,15923</td>
<td>0,775763</td>
<td>0,639155</td>
<td>0,545975</td>
<td>0,101083</td>
</tr>
</tbody>
</table>
</div>
<p>There are 2 header rows.</p>
<p>In the end I need a plot like the following (which I created in Excel).
Another solution would be a separate plot for every Case, instead of all in one.</p>
<p><a href="https://i.sstatic.net/a1iKT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/a1iKT.png" alt="enter image description here" /></a></p>
<p>What I tried so far is:</p>
<pre><code>df.T.melt(ignore_index=False)
</code></pre>
<p>to get the DataFrame in a format like i used in excel.
But from there i could not figure any solution to get the right plot. Maybe the transpose/melt is not even necessary.</p>
<p>Can anyone give me a hint on how to achieve the desired plot?</p>
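One possible direction (a sketch with a trimmed-down copy of the data): since the columns form a two-level MultiIndex, selecting one first-level Case yields a plain frame whose `.plot.bar()` already groups A/B/C, which covers the "separate plot per Case" variant:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
import pandas as pd

data = {("Case1", "A"): {"One": 0.96, "Two": 0.83},
        ("Case1", "B"): {"One": 0.26, "Two": 0.98},
        ("Case2", "A"): {"One": 0.74, "Two": 0.46},
        ("Case2", "B"): {"One": 0.92, "Two": 0.27}}
df = pd.DataFrame(data)

cases = df.columns.get_level_values(0).unique()
fig, axes = plt.subplots(1, len(cases), sharey=True)
for ax, case in zip(axes, cases):
    df[case].plot.bar(ax=ax, title=case)  # one panel per Case, bars per column
```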
|
<python><pandas><dataframe><matplotlib>
|
2023-02-03 09:51:58
| 2
| 336
|
Paul
|
75,333,793
| 3,922,727
|
Python reading a file from blob storage on azure using argparse is returning an error of attribute not found
|
<p>We are trying to add a TOML file as an argument when we want to run the following:</p>
<p><code>python src/main --file=something.toml</code></p>
<p>Within the argument parser function, we've added this line:</p>
<pre><code>def parse_args(activity_choices, country_choices):
parse = argparse.ArgumentParser()
...
parse.add_argument('--file', type = argparse.FileType('r'))
parser = parse.parse_args()
file = parser.file.readlines()
</code></pre>
<p>When printing the file, which is of type <code>list</code>, using:</p>
<p><code>print(file[0])</code></p>
<p>We see that the only returned value is <code>[mainparams]</code> and not the whole list of values inside of mainparams.</p>
<p>We tried the solution of <a href="https://stackoverflow.com/questions/64427533/reading-a-toml-config-file-from-cli-with-argparse">this question</a>:</p>
<pre><code>toml = tomli.loads(parser.file)
</code></pre>
<p>But we used tomli, as it was already used in a different place within the script.</p>
<p>The error was:</p>
<blockquote>
<p>AttributeError: '_io.TextIOWrapper' object has no attribute 'replace'</p>
</blockquote>
<p>We need to get the country and activity values into variable in order to proceed.</p>
<p>Here is a toml example:</p>
<pre><code>[mainparams]
country='USA'
activity='HEALTH'
[optionalparams]
csv_path=TRUE
</code></pre>
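For reference, `tomli.loads` expects a *string* while `tomli.load` expects a file opened in *binary* mode; passing argparse's default text handle is exactly what raises the `'_io.TextIOWrapper'` error. A self-contained sketch (it writes a throwaway TOML file; `example_params.toml` is made up):

```python
import argparse

try:
    import tomllib as toml  # standard library from Python 3.11
except ImportError:
    import tomli as toml    # same API on older versions

# Throwaway TOML file so the sketch is self-contained.
with open("example_params.toml", "w") as fh:
    fh.write("[mainparams]\ncountry = 'USA'\nactivity = 'HEALTH'\n")

parser = argparse.ArgumentParser()
parser.add_argument("--file", type=argparse.FileType("rb"))  # binary for toml
args = parser.parse_args(["--file", "example_params.toml"])

config = toml.load(args.file)
country = config["mainparams"]["country"]
activity = config["mainparams"]["activity"]
```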
|
<python><file><argparse>
|
2023-02-03 09:28:50
| 1
| 5,012
|
alim1990
|
75,333,663
| 11,197,301
|
numpy array in a python function and the correct usage of if condition
|
<p>Let's say that I have this function:</p>
<pre><code>def funct(x, bb, aa):
if x>aa:
res = aa
else:
xxr = x/aa
res = bb*(1.5*xxr-0.5*xxr**3)
return res
</code></pre>
<p>If I do:</p>
<pre><code>xx = np.linspace(0,49,50)
yy = funct(xx,74,33)
</code></pre>
<p>I get the following error:</p>
<pre><code>The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>The usage of a.any() or a.all() does not solve the problem. The results is different from what I am looking for.</p>
<p>The usage of a.any() or a.all() leads to</p>
<pre><code>array([ 0. , 3.36260678, 6.71903609, 10.06311044, 13.38865236,
16.68948438, 19.959429 , 23.19230876, 26.38194618, 29.52216379,
32.60678409, 35.62962963, 38.58452292, 41.46528647, 44.26574283,
46.9797145 , 49.60102401, 52.12349389, 54.54094666, 56.84720483,
59.03609094, 61.1014275 , 63.03703704, 64.83674208, 66.49436514,
68.00372875, 69.35865542, 70.55296769, 71.58048808, 72.4350391 ,
73.11044328, 73.60052314, 73.8991012 , 74. , 73.89704205,
73.58404987, 73.05484598, 72.30325291, 71.32309319, 70.10818933,
68.65236386, 66.9494393 , 64.99323817, 62.77758299, 60.2962963 ,
57.5432006 , 54.51211843, 51.1968723 , 47.59128475, 43.68917828])
</code></pre>
<p>but this is what I expect</p>
<pre><code>array([ 0. , 3.36260678, 6.71903609, 10.06311044, 13.38865236,
16.68948438, 19.959429 , 23.19230876, 26.38194618, 29.52216379,
32.60678409, 35.62962963, 38.58452292, 41.46528647, 44.26574283,
46.9797145 , 49.60102401, 52.12349389, 54.54094666, 56.84720483,
59.03609094, 61.1014275 , 63.03703704, 64.83674208, 66.49436514,
68.00372875, 69.35865542, 70.55296769, 71.58048808, 72.4350391 ,
73.11044328, 73.60052314, 73.8991012 , 74. , 33. ,
33. , 33. , 33. , 33. , 33. ,
33. , 33. , 33. , 33. , 33. ,
33. , 33. , 33. , 33. , 33. ])
</code></pre>
<p>As you can notice, there is a constant value for x>aa, according to the function definition. Maybe I have to rewrite the if statement in another way.</p>
<p>Could someone give me a clue? It is not possible for me to use a for loop, because I need to use the function in another function.
Thanks for any kind of help.</p>
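One standard way to vectorise the branch (a sketch): `np.where` evaluates both branches elementwise and picks per element, which reproduces the expected array:

```python
import numpy as np

def funct(x, bb, aa):
    xxr = x / aa
    # Elementwise: aa where x > aa, the polynomial elsewhere.
    return np.where(x > aa, aa, bb * (1.5 * xxr - 0.5 * xxr**3))

xx = np.linspace(0, 49, 50)
yy = funct(xx, 74, 33)
```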
|
<python><arrays><if-statement><vectorization>
|
2023-02-03 09:17:03
| 0
| 623
|
diedro
|
75,333,654
| 9,490,769
|
Localize time zone based on column in pandas
|
<p>I am trying to assign a timezone to a datetime column, based on another column containing the time zone.</p>
<p>Example data:</p>
<pre class="lang-py prettyprint-override"><code> DATETIME VALUE TIME_ZONE
0 2021-05-01 00:00:00 1.00 Europe/Athens
1 2021-05-01 00:00:00 2.13 Europe/London
2 2021-05-01 00:00:00 5.13 Europe/London
3 2021-05-01 01:00:00 4.25 Europe/Dublin
4 2021-05-01 01:00:00 4.25 Europe/Paris
</code></pre>
<p>I am trying to assign a time zone to the <code>DATETIME</code> column, but using the <code>tz_localize</code> method, I cannot avoid using an apply call, which will be very slow on my large dataset. Is there some way to do this without using apply?</p>
<p>What I have now (which is slow):</p>
<pre class="lang-py prettyprint-override"><code>df['DATETIME_WITH_TZ'] = df.apply(lambda row: row['DATETIME'].tz_localize(row['TIME_ZONE']), axis=1)
</code></pre>
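A sketch of a middle ground: localize once per time zone via `groupby` instead of once per row, converting to UTC so the result column keeps a single dtype (this assumes converting everything to one common zone is acceptable):

```python
import pandas as pd

df = pd.DataFrame({
    "DATETIME": pd.to_datetime(["2021-05-01 00:00:00",
                                "2021-05-01 00:00:00",
                                "2021-05-01 01:00:00"]),
    "TIME_ZONE": ["Europe/Athens", "Europe/London", "Europe/Paris"],
})

# tz_localize runs once per zone, not once per record.
parts = [g["DATETIME"].dt.tz_localize(tz).dt.tz_convert("UTC")
         for tz, g in df.groupby("TIME_ZONE")]
df["DATETIME_UTC"] = pd.concat(parts).sort_index()
```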
|
<python><pandas><pytz><zoneinfo>
|
2023-02-03 09:16:08
| 1
| 3,345
|
oskros
|
75,333,571
| 4,388,099
|
Get Aerospike hyperLogLog(HLL) intersection count of multiple HLL unions
|
<p>I have 2 or more HLLs that are unioned, and I want to get the intersection count of those unions.
I have used the example from here: <a href="https://github.com/aerospike-examples/hll-python/blob/master/hll.py" rel="nofollow noreferrer">hll-python example</a>.
Following is my code:</p>
<pre><code>ops = [hll_ops.hll_get_union(HLL_BIN, records)]
_, _, result1 = client.operate(getKey(value), ops)
ops = [hll_ops.hll_get_union(HLL_BIN, records2)]
_, _, result2 = client.operate(getKey(value2), ops)
ops = [hll_ops.hll_get_intersect_count(HLL_BIN, [result1[HLL_BIN]] + [result2[HLL_BIN]])]
_, _, resultVal = client.operate(getKey(value), ops)
print(f'intersectAll={resultVal}')
_, _, resultVal2 = client.operate(getKey(value2), ops)
print(f'intersectAll={resultVal2}')
</code></pre>
<p>I get 2 different results when I use different keys for the intersection using <code>hll_get_intersect_count</code>, i.e. resultVal and resultVal2 are not the same. This does not happen in the case of the union count using the function <code>hll_get_union_count</code>. Ideally the value of the intersection should be the same.<br />
Can anyone tell me why this is happening and what is the right way to do it?</p>
|
<python><aerospike><hyperloglog>
|
2023-02-03 09:08:07
| 1
| 301
|
darekarsam
|
75,333,570
| 16,220,410
|
generate unique id code in faker data set
|
<p>I'm trying to create a data set with a unique ID code but I get a</p>
<blockquote>
<p><strong>'ValueError not enough values to unpack (expected 6, got 5)'</strong></p>
</blockquote>
<p>on line 8, basically, I am trying to:</p>
<ol>
<li>generate a unique 6 digit id code</li>
<li>append dataset value with 'ID' ex: ID123456</li>
</ol>
<p>UPDATE:
<strong>fixed the error and ID append, now how do i make sure the generated id is unique in the dataset?</strong></p>
<pre><code>from faker import Faker
import random
import pandas as pd
Faker.seed(0)
random.seed(0)
fake = Faker("en_US")
fixed_digits = 6
concatid = 'ID'
idcode,name, city, country, job, age = [[] for k in range(0,6)]
for row in range(0,100):
idcode.append(concatid + str(random.randrange(111111, 999999, fixed_digits)))
name.append(fake.name())
city.append(fake.city())
country.append(fake.country())
job.append(fake.job())
age.append(random.randint(20,100))
d = {"ID Code":idcode, "Name":name, "Age":age, "City":city, "Country":country, "Job":job}
df = pd.DataFrame(d)
df.head()
</code></pre>
<p>planning to generate 1k rows</p>
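On the uniqueness question, a sketch: `random.sample` draws without replacement, so the codes are unique by construction and no per-row check against the dataset is needed:

```python
import random

random.seed(0)
n_rows = 1000  # the planned dataset size

# Six-digit codes drawn without replacement, then prefixed with 'ID'.
idcode = ["ID{}".format(n)
          for n in random.sample(range(100000, 1000000), n_rows)]
```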
|
<python>
|
2023-02-03 09:08:05
| 1
| 1,277
|
k1dr0ck
|
75,333,526
| 12,304,000
|
send POST request to Itunes API via Postman
|
<p>For the Itunes Reporter API, I have an <strong>access_token</strong> and <strong>vendor_number</strong>.</p>
<p><a href="https://help.apple.com/itc/appsreporterguide/#/apd68da36164" rel="nofollow noreferrer">https://help.apple.com/itc/appsreporterguide/#/apd68da36164</a></p>
<p>I found some old Python code that was used to send API requests to this API:</p>
<pre><code> def _make_request(self,
cmd_type: str,
command: str,
credentials: Dict[str, str],
extra_params: Dict[str, str] = None
) -> requests.Response:
if not extra_params:
extra_params = {}
# command does not differ anymore, no matter if the apple id has multiple accoutns or not. a= is an invalid parameter by now.
command = f'[p=Reporter.properties, {cmd_type.capitalize()}.{command}]'
endpoint = ('https://reportingitc-reporter.apple.com'
f'/reportservice/{cmd_type}/v1')
# account needs to be passed as data, not as parameter
if self.account:
data = {
'version': self.version,
'mode': self.mode,
**credentials,
'queryInput': command,
'account': self.account
}
else:
data = {
'version': self.version,
'mode': self.mode,
**credentials,
'queryInput': command
}
data = self._format_data(data)
data.update(extra_params)
response = requests.post(endpoint, data=data)
response.raise_for_status()
return response
def download_sales_report(self,
vendor: str,
report_type: str,
date_type: str,
date: str,
report_subtype: str = '',
report_version: str = '') -> Data:
"""Downloads sales report, puts the TSV file into a Python list
Information on the parameters can be found in the iTunes Reporter
documentation:
https://help.apple.com/itc/appsreporterguide/#/itcbd9ed14ac
:param vendor:
:param report_type:
:param date_type:
:param date:
:param report_subtype:
:param report_version:
:return:
"""
credentials = {
'accesstoken': self.access_token
}
command = (f'getReport, {vendor},{report_type},{report_subtype},'
f'{date_type},{date},{report_version}')
ordered_dict_sales_report = self._process_gzip(self._make_request('sales', command,
credentials))
return ordered_dict_sales_report
</code></pre>
<p>Now, I want to replicate this in Postman but I am <strong>not sure where to place the parameters from the "command", i.e. vendor, reportType</strong> etc. Do I pass them as raw JSON in the body? Or as query params?</p>
<p>The endpoint I am using currently for a POST request is this:</p>
<p><a href="https://reportingitc-reporter.apple.com/reportservice/sales/v1" rel="nofollow noreferrer">https://reportingitc-reporter.apple.com/reportservice/sales/v1</a></p>
<p>I am passing a "BearerToken" as the authorization and this as the Body:</p>
<pre><code>{
"version": "1.0",
"mode": "Test",
"queryInput": "[p=Reporter.properties, Sales.getReport, 85040615, sales, Summary, Daily, 20230101]"
}
</code></pre>
<p>but I get a 400 Bad Request error.</p>
<p>According to the documentation, this is the Java syntax that I need to convert to Python:</p>
<pre><code>Syntax
$ java -jar Reporter.jar p=[properties file] Sales.getReport [vendor number], [report type], [report subtype], [date type], [date], [version]* (if applicable)
</code></pre>
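For what it's worth, the Python client above posts *form-encoded* fields (requests' `data=`), not raw JSON, and the vendor/report parameters ride inside the `queryInput` string. In Postman that corresponds to Body → x-www-form-urlencoded with keys along these lines (the token is a placeholder, and the exact `mode` value should be checked against the Reporter docs):

```python
# x-www-form-urlencoded fields mirroring the old client's requests.post(data=...)
data = {
    "version": "1.0",
    "mode": "Robot.XML",                       # assumption; see Reporter docs
    "accesstoken": "YOUR-ACCESS-TOKEN",        # placeholder
    "queryInput": "[p=Reporter.properties, Sales.getReport, "
                  "85040615,Sales,Summary,Daily,20230101]",
}
# import requests
# response = requests.post(
#     "https://reportingitc-reporter.apple.com/reportservice/sales/v1",
#     data=data)  # note data=, not json=
```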
|
<python><post><https><postman><itunes>
|
2023-02-03 09:03:19
| 0
| 3,522
|
x89
|
75,333,407
| 8,318,946
|
Docker is taking wrong settings file when creating image
|
<p>I have a Django application where my settings are placed in a folder named settings. Inside this folder I have <code>__init__.py</code>, base.py, deployment.py and production.py.</p>
<p>My wsgi.py looks like this:</p>
<pre><code>os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp_settings.settings.production")
application = get_wsgi_application()
</code></pre>
<p>My Dockerfile:</p>
<pre><code>FROM python:3.8
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
RUN mkdir /code
COPY . /code/
WORKDIR /code
RUN pip install --no-cache-dir git+https://github.com/ByteInternet/pip-install-privates.git@master#egg=pip-install-privates
RUN pip install --upgrade pip
RUN pip_install_privates --token {GITHUB-TOKEN} /code/requirements.txt
RUN playwright install --with-deps chromium
RUN playwright install-deps
RUN touch /code/logs/celery.log
RUN chmod +x /code/logs/celery.log
EXPOSE 80
</code></pre>
<p>My docker-compose file:</p>
<pre><code>version: '3'
services:
app:
container_name: myapp_django_app
build:
context: ./backend
dockerfile: Dockerfile
restart: always
command: gunicorn myapp_settings.wsgi:application --bind 0.0.0.0:80
networks:
- myapp_default
ports:
- "80:80"
env_file:
- ./.env
</code></pre>
<p><strong>Problem</strong></p>
<p>Every time I create the image, Docker takes the settings from development.py instead of production.py. I tried to change my settings using this command:</p>
<pre><code>set DJANGO_SETTINGS_MODULE=myapp_settings.settings.production
</code></pre>
<p>It works fine when using conda/venv and I am able to switch to production mode however when creating Docker image it does not take into consideration production.py file at all.</p>
<p><strong>Question</strong></p>
<p>Is there anything else I should be aware of that causes issues like this and how can I fix it?</p>
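One hedged observation: `set DJANGO_SETTINGS_MODULE=...` only affects the local Windows shell, never the container. A common fix is to pin the variable in the compose service (a sketch, reusing the service name from above):

```yaml
services:
  app:
    environment:
      # Wins over any value from env_file and over setdefault() in wsgi.py
      - DJANGO_SETTINGS_MODULE=myapp_settings.settings.production
```

Alternatively, `ENV DJANGO_SETTINGS_MODULE=myapp_settings.settings.production` in the Dockerfile bakes the setting into the image itself.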
|
<python><django>
|
2023-02-03 08:51:34
| 1
| 917
|
Adrian
|
75,333,342
| 4,451,521
|
How can I run a script from another script when I change folders?
|
<p>Originally I had two scripts <code>main.py</code> and <code>call_main.py</code> They were in the same level.</p>
<p>In <code>call_main.py</code> I had</p>
<pre><code>import subprocess
cmd = [
"python3",
"main.py",
]
subprocess.run(cmd)
</code></pre>
<p>No problem there.
However now I am moving the <code>call_main.py</code> script two levels deeper to a folder <code>/tools/mytools/call_main.py</code></p>
<p>How should I modify the script to be able to call <code>main.py</code>?</p>
<p>In the beginning I tried changing the call to</p>
<pre><code>import subprocess
cmd = [
"python3",
"../../main.py",
]
subprocess.run(cmd)
</code></pre>
<p>and changing any parameters that I call accordingly,</p>
<p>However, it seems <code>main.py</code> calls other scripts, so when I do it the above way, it cannot find some of the imports it does.</p>
<p>So, is there a good way to be able to run main?</p>
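A sketch of one way around it: keep the command as plain `main.py` but run it with `cwd=` pointing at main.py's own folder, so its relative imports and file paths resolve exactly as before (the `parents[2]` hop assumes the tools/mytools layout from the question):

```python
import subprocess
from pathlib import Path

def run_main(script_dir, python="python3"):
    """Run main.py from inside its own directory so relative paths resolve."""
    return subprocess.run([python, "main.py"], cwd=script_dir, check=True)

# From tools/mytools/call_main.py the directory would be two levels up:
# run_main(Path(__file__).resolve().parents[2])
```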
|
<python><path>
|
2023-02-03 08:44:40
| 0
| 10,576
|
KansaiRobot
|
75,333,089
| 5,580,309
|
Raspberry Pi - Crontab task not running properly
|
<p>I have scheduled a task <code>arp -a</code> which runs once per hour, that scans my wi-fi network to save all the info about currently connected devices into a <code>scan.txt</code> file. After the scan, a python script reads the <code>scan.txt</code> and saves the data into a database.</p>
<p>This is what my <code>wifiscan.sh</code> script looks like:</p>
<pre><code>cd /home/pi/python/wifiscan/
arp -a > /home/pi/python/wifiscan/scan.txt
python wifiscan.py
</code></pre>
<p>This is my crontab task:</p>
<pre><code>#wifiscan
59 * * * * sh /home/pi/launcher/wifiscan.sh
</code></pre>
<p>If I run the <code>wifiscan.sh</code> file manually, the whole process works perfectly; when it is run by crontab, the <code>scan.txt</code> file is generated empty and the rest of the process works, but with no data, so I'm assuming that the problem lies in the <code>arp -a</code> command.</p>
<p>How is it possible that <code>arp -a</code> does not produce any output when it is run by crontab? Are there any mistakes I'm making?</p>
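A likely explanation (worth verifying on the Pi): cron runs with a minimal PATH that usually omits /usr/sbin, where `arp` lives, so the command silently produces nothing under cron. A hedged rewrite of wifiscan.sh:

```sh
#!/bin/sh
# cron's default PATH usually lacks /usr/sbin, where arp lives
PATH=/usr/sbin:/usr/bin:/bin
cd /home/pi/python/wifiscan/ || exit 1
/usr/sbin/arp -a > /home/pi/python/wifiscan/scan.txt
python wifiscan.py
```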
|
<python><cron><raspberry-pi><arp>
|
2023-02-03 08:18:17
| 1
| 1,038
|
sirdan
|
75,332,748
| 10,689,857
|
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-bu
|
<p>I have a virtual environment with Python 3.11.0 in it.
I get the error below when trying to install cx-Oracle:</p>
<pre class="lang-none prettyprint-override"><code>pip install cx-Oracle
</code></pre>
<p>Is there any way to make it work with python 3.11?</p>
<p>Full error:</p>
<pre class="lang-none prettyprint-override"><code>(venv) PS C:\Users\XXXXX\Pruebas\sp-back-office-toolscoe> pip install cx-Oracle
Collecting cx-Oracle
Using cached cx_Oracle-8.3.0.tar.gz (363 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: cx-Oracle
Building wheel for cx-Oracle (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for cx-Oracle (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [7 lines of output]
C:\Users\XXXXX\AppData\Local\Temp\pip-build-env-__332i7h\overlay\Lib\site-packages\setuptools\config\expand.py:144: UserWarning: File 'C:\\Users
\\XXXXXX\\AppData\\Local\\Temp\\pip-install-akmlg3ac\\cx-oracle_559c2c2b67a543f586a98b0333592264\\README.md' cannot be found
warnings.warn(f"File {path!r} cannot be found")
running bdist_wheel
running build
running build_ext
building 'cx_Oracle' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-bu
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for cx-Oracle
Failed to build cx-Oracle
ERROR: Could not build wheels for cx-Oracle, which is required to install pyproject.toml-based projects
</code></pre>
<p>As the error suggests, I have installed Microsoft C++ Build Tools and added it to my path, but I still get the same error. Not sure what should I do.</p>
|
<python><pip>
|
2023-02-03 07:38:49
| 0
| 854
|
Javi Torre
|
75,332,507
| 11,607,378
|
Is it possible to separate CPU resources between subprocess and main process in Python?
|
<p>Suppose I have Python program with a main process A, it would have a main thread a, and invoke some threads a1, a2. And I spawn a subprocess from A, let's say it B, and have the main thread b.</p>
<p>Is it possible to separate the CPU resources (CPU cores) between b and (a, a1, a2), so that their CPU usages would not compete with each other?</p>
<p><strong>Update (Answer)</strong>: We need not treat the subprocess and main process differently. Just follow the accepted answer: e.g. set A's CPU affinity to some cores; B may inherit this setting, but simply setting B's CPU affinity to some other cores after B is spawned achieves the expected behavior.</p>
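On Linux this can be sketched with `os.sched_setaffinity` (the core sets below are derived from whatever the current process may use, so the sketch adapts to the machine):

```python
import os
import subprocess

cores = sorted(os.sched_getaffinity(0))
half = max(1, len(cores) // 2)
a_cores = set(cores[:half])
b_cores = set(cores[half:]) or a_cores   # single-core fallback

os.sched_setaffinity(0, a_cores)             # pin main process A
child = subprocess.Popen(["sleep", "0.2"])   # stands in for subprocess B
os.sched_setaffinity(child.pid, b_cores)     # re-pin B after it is spawned
child_affinity = os.sched_getaffinity(child.pid)
child.wait()
```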
|
<python><cpu>
|
2023-02-03 07:07:20
| 1
| 673
|
Litchy
|
75,332,474
| 275,002
|
Python: MySQL Connection not available
|
<p>I have a routine that is accessed every second; it worked fine for days and then gave the error:</p>
<pre><code>Exception in store_price
[2023-02-03 05:02:56] - Traceback (most recent call last):
File "/x/db.py", line 86, in store_price
with connection.cursor() as cursor:
File "/root/.pyenv/versions/3.9.4/lib/python3.9/site-packages/mysql/connector/connection_cext.py", line 632, in cursor
raise OperationalError("MySQL Connection not available.")
mysql.connector.errors.OperationalError: MySQL Connection not available
</code></pre>
<p>Below is my code</p>
<pre><code>def store_price(connection, symbol, last_price, timestamp):
"""
:param connection:
:param symbol:
:param last_price:
:param timestamp:
:return:
"""
table_name = 'deribit_price_{}'.format(symbol.upper())
try:
if connection is None:
print('Null found..reconnecting')
connection.reconnect()
if connection is not None: # this is line # 86
with connection.cursor() as cursor:
sql = "INSERT INTO {} (last_price,timestamp) VALUES (%s,%s)".format(table_name)
cursor.execute(sql, (last_price, timestamp,))
connection.commit()
except Exception as ex:
print('Exception in store_perpetual_data')
crash_date = time.strftime("%Y-%m-%d %H:%m:%S")
crash_string = "".join(traceback.format_exception(etype=type(ex), value=ex, tb=ex.__traceback__))
exception_string = '[' + crash_date + '] - ' + crash_string + '\n'
print(exception_string)
</code></pre>
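A hedged observation: after a dropped connection the object is usually *stale* rather than None, so the `if connection is None` branch never triggers the reconnect. mysql-connector's `reconnect(attempts=..., delay=...)` paired with `is_connected()` is the usual pattern; a minimal, dependency-free sketch of the guard:

```python
def ensure_connected(connection, attempts=3, delay=2):
    """Reconnect a stale (but not None) connection before handing it back."""
    if not connection.is_connected():
        connection.reconnect(attempts=attempts, delay=delay)
    return connection

# In store_price this would replace the `if connection is None` check:
#   connection = ensure_connected(connection)
#   with connection.cursor() as cursor: ...
```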
|
<python><mysql><mysql-connector>
|
2023-02-03 07:04:27
| 0
| 15,089
|
Volatil3
|
75,332,327
| 11,280,068
|
python f-strings with path parameters within fastapi path
|
<p>I'm wondering if you can use f-strings within a path in FastAPI. For example, I want to do the following:</p>
<pre><code>common_path = '/{user}/{item_id}'
@app.get(f'{common_path}/testing')
</code></pre>
<p>would this work?</p>
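For reference, the decorator argument is just a string, so f-string composition works; the key detail is that the `{user}`/`{item_id}` braces live inside `common_path` and are therefore not f-string replacement fields. A framework-free sketch of the string part:

```python
common_path = "/{user}/{item_id}"
route = f"{common_path}/testing"
print(route)  # /{user}/{item_id}/testing -- braces survive for FastAPI to parse
```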
|
<python><python-3.x><fastapi><f-string>
|
2023-02-03 06:43:45
| 1
| 1,194
|
NFeruch - FreePalestine
|
75,332,264
| 2,161,250
|
compare two list of dictonary with different order in Python
|
<p>I have two lists of dicts which I need to compare, but the dicts are in a different order, so I am not sure of the correct way to do it.</p>
<pre><code>l1 = [{'a': '1'}, {'b': '2'}]
l2 = [{'b': '2'}, {'a': '1'}]
</code></pre>
<p>The result should be true when I compare l1 and l2, as both have the same dictionaries in their respective lists.</p>
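A sketch of one order-insensitive comparison: dicts aren't hashable, so compare canonical sorted forms instead (this assumes the dict items are mutually comparable):

```python
def same_dicts(list1, list2):
    """True when both lists hold the same dicts, ignoring order."""
    canon = lambda d: sorted(d.items())
    return sorted(map(canon, list1)) == sorted(map(canon, list2))

l1 = [{'a': '1'}, {'b': '2'}]
l2 = [{'b': '2'}, {'a': '1'}]
print(same_dicts(l1, l2))  # True
```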
|
<python><list><dictionary>
|
2023-02-03 06:34:32
| 6
| 338
|
Lavish Karankar
|
75,332,136
| 305,135
|
Pandas groupby having condition on column
|
<p>I have data like this :</p>
<pre><code> userID activity count
0 3 running 5
1 3 running 6
2 3 walking 0
3 3 walking 1
4 3 stopped 2
</code></pre>
<p>I want to group the data conditionally, by userID and by whether the activity is running (1) or not (0).</p>
<p>This will only group by userID, of course (I'm a newbie in Pandas!):</p>
<pre><code>numUsers = df.groupby(by=["userID"])["count"].sum()
</code></pre>
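One possible reading of the requirement (a sketch): derive a 0/1 flag from the activity column, then group on userID plus the flag:

```python
import pandas as pd

df = pd.DataFrame({"userID": [3, 3, 3, 3, 3],
                   "activity": ["running", "running", "walking",
                                "walking", "stopped"],
                   "count": [5, 6, 0, 1, 2]})

# 1 when the activity is running, 0 otherwise.
df["is_running"] = (df["activity"] == "running").astype(int)
sums = df.groupby(["userID", "is_running"])["count"].sum()
```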
|
<python><sql><pandas>
|
2023-02-03 06:15:38
| 0
| 19,540
|
AVEbrahimi
|
75,332,124
| 13,538,030
|
Stratified sampling with multiple variables in Python
|
<p>I need your advice. I am performing stratified sampling in Python. There are 3 variables (A, B, and C), and each of them has 3 levels.</p>
<pre><code>A: a1, a2, a3
B: b1, b2, b3
C: c1, c2, c3
</code></pre>
<p>That being said, there are 3 * 3 * 3 = 27 strata, and for each stratum I want to randomly sample 1000 rows. What is the best strategy to implement the code?</p>
<p>Thank you.</p>
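One common strategy (a sketch with synthetic data): `GroupBy.sample` draws within each of the 27 strata in a single call; `n` is smaller here so the demo data suffices, and it assumes every stratum holds at least `n` rows:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "A": rng.choice(["a1", "a2", "a3"], 27_000),
    "B": rng.choice(["b1", "b2", "b3"], 27_000),
    "C": rng.choice(["c1", "c2", "c3"], 27_000),
})

n = 100  # 1000 in the question
sample = df.groupby(["A", "B", "C"]).sample(n=n, random_state=0)
```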
|
<python><sampling>
|
2023-02-03 06:14:21
| 0
| 384
|
Sophia
|
75,332,053
| 10,844,937
|
How to merge multiple columns of a dataframe using regex?
|
<p>I have a <code>df</code> which is as follows:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
    {'number_C1_E1': ['1', '2', None, None, '5', '6', '7', '8'],
     'fruit_C11_E1': ['apple', 'banana', None, None, 'watermelon', 'peach', 'orange', 'lemon'],
     'name_C111_E1': ['tom', 'jerry', None, None, 'paul', 'edward', 'reggie', 'nicholas'],
     'number_C2_E2': [None, None, '3', None, None, None, None, None],
     'fruit_C22_E2': [None, None, 'blueberry', None, None, None, None, None],
     'name_C222_E2': [None, None, 'anthony', None, None, None, None, None],
     'number_C3_E1': [None, None, '3', '4', None, None, None, None],
     'fruit_C33_E1': [None, None, 'blueberry', 'strawberry', None, None, None, None],
     'name_C333_E1': [None, None, 'anthony', 'terry', None, None, None, None],
    }
)
</code></pre>
<p>Here what I want to do is combine those columns and we have two rules:</p>
<ol>
<li>If a column name, after removing <code>_C{0~9}</code>, <code>_C{0~9}{0~9}</code>, or <code>_C{0~9}{0~9}{0~9}</code>, is equal to another column name, these two columns can be combined.</li>
</ol>
<blockquote>
<p>Let's take <code>number_C1_E1</code> <code>number_C2_E2</code> <code>number_C3_E1</code> as an example, here <code>number_C1_E1</code> and <code>number_C3_E1</code> can be combined because they are both <code>number_E1</code> after <code>removing _C{0~9}</code>.</p>
</blockquote>
<ol start="2">
<li>The two combined columns should get rid of the <code>None</code> values.</li>
</ol>
<p>The desired result is</p>
<pre><code>  number_C1_1_E1 fruit_C11_1_E1 name_C111_1_E1 number_C2_1_E2 fruit_C22_1_E2 name_C222_1_E2
0              1          apple            tom           None           None           None
1              2         banana          jerry           None           None           None
2              3      blueberry        anthony              3      blueberry        anthony
3              4     strawberry          terry           None           None           None
4              5     watermelon           paul           None           None           None
5              6          peach         edward           None           None           None
6              7         orange         reggie           None           None           None
7              8          lemon       nicholas           None           None           None
</code></pre>
<p>Does anyone have a good solution?</p>
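<p>One possible sketch (trimmed to just the <code>number</code> columns for brevity): strip the <code>_C&lt;digits&gt;</code> segment with a regex to find the groups, then take the first non-null value across each group, row by row. Note the output uses the canonical names (e.g. <code>number_E1</code>) rather than the <code>number_C1_1_E1</code> style shown in the desired result.</p>

```python
import re
import pandas as pd

# Trimmed version of the question's DataFrame.
df = pd.DataFrame({
    'number_C1_E1': ['1', '2', None, None, '5', '6', '7', '8'],
    'number_C2_E2': [None, None, '3', None, None, None, None, None],
    'number_C3_E1': [None, None, '3', '4', None, None, None, None],
})

def canon(col):
    # "number_C1_E1" -> "number_E1"
    return re.sub(r"_C\d+", "", col)

# Map each canonical name to the columns that share it.
groups = {}
for col in df.columns:
    groups.setdefault(canon(col), []).append(col)

# For each group, back-fill across the columns and keep the first,
# i.e. the first non-null value in each row.
out = pd.DataFrame({
    name: df[cols].bfill(axis=1).iloc[:, 0]
    for name, cols in groups.items()
})
print(out["number_E1"].tolist())  # ['1', '2', '3', '4', '5', '6', '7', '8']
```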
|
<python><pandas>
|
2023-02-03 06:03:33
| 2
| 783
|
haojie
|
75,331,691
| 11,901,732
|
ORA-00942: table or view does not exist when querying in Python
|
<p>I got this error when querying Oracle data from Python, with the query below:</p>
<pre><code>import cx_Oracle
import pandas as pd

CONN_INFO = {
    'host': '192.xxx.xxx.xx',
    'port': 4445,
    'user': 'User27',
    'psw': '12345678',
    'service': 'xxxxxxx.xxxx.com',
}
CONN_STR = '{user}/{psw}@{host}:{port}/{service}'.format(**CONN_INFO)
connection = cx_Oracle.connect(CONN_STR)
query = """
select * from EM.df
"""
df = pd.read_sql(query, con=connection)
df
</code></pre>
<p>the error looks like this:</p>
<pre><code>DatabaseError: Execution failed on sql '
select * from EM.Sales
': ORA-00942: table or view does not exist
</code></pre>
<p>However, I have verified that table 'df' does exist and that the select query runs fine within Oracle DB. What could be the reasons why this happened?</p>
|
<python><oracle-database>
|
2023-02-03 04:59:10
| 1
| 5,315
|
nilsinelabore
|
75,331,550
| 2,130,515
|
Unable to create chrome webdriver in an abstract class
|
<p>This code is working just fine:</p>
<pre><code>from selenium import webdriver

class example:
    def __init__(self) -> None:
        remote_debug_port = 8956
        options = webdriver.ChromeOptions()
        options.add_argument(f'--remote-debugging-port={remote_debug_port}')
        options.add_argument("--start-maximized")
        options.add_argument('--no-sandbox')
        options.add_argument('--headless')
        options.add_argument('--disable-infobars')
        options.add_argument('--disable-dev-shm-usage')
        driver = webdriver.Chrome("/usr/bin/chromedriver", options=options)

a = example()
</code></pre>
<h1>Create a webdriver inside an abstract class, then create a child class</h1>
<pre><code>from abc import ABC, abstractmethod
from selenium import webdriver

class ChromeDriverBase(ABC):
    def __init__(self, url_source: str,
                 webdriver_path: str = "/usr/bin/chromedriver",
                 wait_time: int = 10,
                 remote_port: int = 2345) -> None:
        self.url_source = url_source
        self.wait_time = wait_time
        self.remote_port = remote_port
        self.create_options()
        self.driver = webdriver.Chrome(webdriver_path, options=self.options)

    def create_options(self):
        self.options = webdriver.ChromeOptions()
        self.options.add_argument(f'--remote-debugging-port={self.remote_port}')
        self.options.add_argument("--start-maximized")
        self.options.add_argument('--no-sandbox')
        self.options.add_argument('--headless')
        self.options.add_argument('--disable-infobars')
        self.options.add_argument('--disable-dev-shm-usage')

    @abstractmethod
    def function1(self):
        pass

class ChromeChild(ChromeDriverBase):
    def __init__(self, url_source: str, wait_time: int = 10, remote_port: int = 2345) -> None:
        super().__init__(url_source=url_source, wait_time=wait_time, remote_port=remote_port)

    def function1(self):
        # just for illustration
        return self.url_source

a = ChromeChild(url_source='www.google.com')

# this triggers this error:
selenium.common.exceptions.SessionNotCreatedException: Message: session not
created: This version of ChromeDriver only supports Chrome version 109
Current browser version is 103.0.5060.114 with binary path /usr/bin/google-chrome
</code></pre>
<p>I checked the version:</p>
<pre><code>Google Chrome 109.0.5414.119
ChromeDriver 109.0.5414.74
</code></pre>
|
<python><selenium><google-chrome><selenium-webdriver>
|
2023-02-03 04:32:19
| 1
| 1,790
|
LearnToGrow
|
75,331,489
| 1,039,860
|
how to run libreoffice python script using scriptforge
|
<p>I am trying to organize my python project that currently only adds a menu to a libreoffice calc menu bar:</p>
<pre><code># this file is called AddMenu.py, which is located in 'C:\Program Files\LibreOffice\share\Scripts\python'
from scriptforge import CreateScriptService

def create_menu(args=None):
    o_doc = CreateScriptService("Document")
    o_menu = o_doc.CreateMenu("Test Menu")
    o_menu.AddItem("About", command="About")
    o_menu.AddItem("Testing",
                   script="vnd.sun.star.script:AddMenu.py$item_b_listener?language=Python&location=user")

def item_b_listener(args):
    bas = CreateScriptService("Basic")
    s_args = args.split(",")
    msg = f"Menu name: {s_args[0]}\n"
    msg += f"Menu item: {s_args[1]}\n"
    msg += f"Item ID: {s_args[2]}\n"
    msg += f"Item status: {s_args[3]}"
    bas.MsgBox(msg)
</code></pre>
<p>The menu and buttons are added as expected, and About works fine. However, when I click on the "Test Menu" I get an error:</p>
<pre><code>Library : ScriptForge
Service : Session
Method : ExecutePythonScript
Arguments: [Scope], Script, arg0[, arg1] ...
A serious error has been detected in your code on argument : « Script ».
The requested Python script could not be located in the given libraries and modules.
« Scope » = user
« Script » = AddMenu.py$item_b_listener
THE EXECUTION IS CANCELLED.
Do you want to receive more information about the 'ExecutePythonScript' method ?
</code></pre>
<p>Any suggestions?</p>
<p>Follow-up question: how do I run a python script when calc starts? Since the menu doesn't persist on restarting calc, I need to run the macro to reinstall it.</p>
|
<python><libreoffice-calc>
|
2023-02-03 04:19:36
| 2
| 1,116
|
jordanthompson
|
75,331,478
| 754,444
|
Crop a box around n percentile of maximum values
|
<p>Given a binary image, how do I box around the majority of the white pixels? For example, consider the following image:
<a href="https://i.sstatic.net/pSdty.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pSdty.png" alt="image processing to edge detection" /></a></p>
<p>As canny segmentation results in a binary image, I thought I could use np.nonzero to identify the location of the points and then draw a box around them. I have the following function to identify the location of the bounding box, but it's not working as intended (as you can see by the box in the image above):</p>
<pre><code>def get_bounding_box(image, thresh=0.95):
    nonzero_indices = np.nonzero(image)
    min_row, max_row = np.min(nonzero_indices[0]), np.max(nonzero_indices[0])
    min_col, max_col = np.min(nonzero_indices[1]), np.max(nonzero_indices[1])
    box_size = max_row - min_row + 1, max_col - min_col + 1
    print(box_size)
    # box_size_thresh = (int(box_size[0] * thresh), int(box_size[1] * thresh))
    box_size_thresh = (int(box_size[0]), int(box_size[1]))
    # coordinates of the box that contains 95% of the highest pixel values
    top_left = (min_row + int((box_size[0] - box_size_thresh[0]) / 2), min_col + int((box_size[1] - box_size_thresh[1]) / 2))
    bottom_right = (top_left[0] + box_size_thresh[0], top_left[1] + box_size_thresh[1])
    print((top_left[0], top_left[1]), (bottom_right[0], bottom_right[1]))
    return (top_left[0], top_left[1]), (bottom_right[0], bottom_right[1])
</code></pre>
<p>and using the following code to get the coords and draw the box as follows:</p>
<pre><code>seg= canny_segmentation(gray)
bb_thresh = get_bounding_box(seg,0.95)
im_crop = gray[bb_thresh[0][1]:bb_thresh[1][1],bb_thresh[0][0]:bb_thresh[1][0]]
</code></pre>
<p>why is this code not giving me the right top left / bottom right coordinates?</p>
<p>I have an example colab workbook here <a href="https://colab.research.google.com/drive/15TNVPsYeZOCiOB51I-geVXgGFyIp5PjU?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/15TNVPsYeZOCiOB51I-geVXgGFyIp5PjU?usp=sharing</a></p>
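<p>For what it's worth, one way to make the threshold actually trim outliers (a sketch of an alternative, not a fix of the function above): take percentiles of the nonzero pixel coordinates instead of shrinking the full-extent box symmetrically, so a few stray pixels can't stretch the box.</p>

```python
import numpy as np

def percentile_bbox(mask, keep=0.95):
    # Box around the central `keep` fraction of white-pixel coordinates,
    # so isolated outlier pixels don't stretch the box.
    ys, xs = np.nonzero(mask)
    lo = (1 - keep) / 2 * 100
    hi = 100 - lo
    r0, r1 = np.percentile(ys, [lo, hi]).astype(int)
    c0, c1 = np.percentile(xs, [lo, hi]).astype(int)
    return (r0, c0), (r1, c1)

# A 100x100 mask: a solid block plus one stray pixel at the origin.
mask = np.zeros((100, 100), dtype=np.uint8)
mask[40:60, 40:60] = 1
mask[0, 0] = 1
print(percentile_bbox(mask))  # ((40, 40), (59, 59)) -- the stray pixel is ignored
```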
|
<python><opencv><image-processing><crop><edge-detection>
|
2023-02-03 04:15:40
| 3
| 922
|
zaza
|
75,331,455
| 19,425,874
|
Scraping Table Data from Multiple Pages
|
<p>So I think this is going to be complex...hoping someone is up for a challenge.</p>
<p>Basically, I'm trying to visit all HREF tags on a specific URL and then print their "profile-box" class into a Google Sheet.</p>
<p>I have a <strong>working</strong> example with a <strong>different</strong> link below. This code goes to each of the URLs, visits the Player Link, and then returns their associated data:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import gspread

gc = gspread.service_account(filename='creds.json')
sh = gc.open_by_key('1DpasSS8yC1UX6WqAbkQ515BwEEjdDL-x74T0eTW8hLM')
worksheet = sh.get_worksheet(3)

# AddValue = ["Test", 25, "Test2"]
# worksheet.insert_row(AddValue, 3)

def get_links(url):
    data = []
    req_url = requests.get(url)
    soup = BeautifulSoup(req_url.content, "html.parser")
    for td in soup.find_all('td', {'data-th': 'Player'}):
        a_tag = td.a
        name = a_tag.text
        player_url = a_tag['href']
        print(f"Getting {name}")
        req_player_url = requests.get(
            f"https://basketball.realgm.com{player_url}")
        soup_player = BeautifulSoup(req_player_url.content, "html.parser")
        div_profile_box = soup_player.find("div", class_="profile-box")
        row = {"Name": name, "URL": player_url}
        for p in div_profile_box.find_all("p"):
            try:
                key, value = p.get_text(strip=True).split(':', 1)
                row[key.strip()] = value.strip()
            except:  # not all entries have values
                pass
        data.append(row)
    return data

urls = [
    'https://basketball.realgm.com/dleague/players/2022',
    'https://basketball.realgm.com/dleague/players/2021',
    'https://basketball.realgm.com/dleague/players/2020',
    'https://basketball.realgm.com/dleague/players/2019',
    'https://basketball.realgm.com/dleague/players/2018',
]

res = []
for url in urls:
    print(f"Getting: {url}")
    data = get_links(url)
    res = [*res, *data]

if res != []:
    header = list(res[0].keys())
    values = [
        header, *[[e[k] if e.get(k) else "" for k in header] for e in res]]
    worksheet.append_rows(values, value_input_option="USER_ENTERED")
</code></pre>
<p>RESULTS OF THIS CODE (CORRECT):</p>
<p><a href="https://i.sstatic.net/Txj2d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Txj2d.png" alt="G League Profile boxes" /></a></p>
<p>Secondarily - I have a <strong>working</strong> code that takes a separate URL, loops through 66 pages, and returns the table data:</p>
<pre><code>import requests
import pandas as pd

url = 'https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/player/All/desc'

res = []
for count in range(1, 66):
    # pd.read_html accepts a URL too so no need to make a separate request
    df_list = pd.read_html(f"{url}/{count}")
    res.append(df_list[-1])

pd.concat(res).to_csv('my data.csv')
</code></pre>
<p>This returns the table data from the URL and works perfectly:</p>
<p><a href="https://i.sstatic.net/j0JfL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j0JfL.png" alt="International Stats" /></a></p>
<p>So... this brings me to my current issue:</p>
<p>I'm trying to take this same link (<a href="https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/player/All/desc" rel="nofollow noreferrer">https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/player/All/desc</a>)
and repeat the same action as the first code.</p>
<p>Meaning, I want to visit each profile (on all 66 or x number of pages), and print the profile data just like in the first code.</p>
<p>I thought/hoped I'd be able to just replace the original D League URLs with this URL and it would work; it doesn't. I'm a little confused why, because the table data seems to have the same setup.</p>
<p>I started trying to re-work this, but I'm struggling. I have very basic code, but I think I'm taking steps backwards:</p>
<pre><code>import requests
from bs4 import BeautifulSoup

url = "https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/player/All/desc"
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")

for link in soup.find_all("a"):
    profile_url = link.get("href")
    profile_response = requests.get(profile_url)
    profile_soup = BeautifulSoup(profile_response.text, "html.parser")
    profile_box = profile_soup.find("div", class_="profileBox")
    if profile_box:
        print(profile_box)
</code></pre>
<p>Any thoughts on this? Like I said, ultimately trying to recreate the same action as the first script, just for the 2nd URL.</p>
<p>Thanks in advance.</p>
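<p>A guess at why the rewrite fails: <code>soup.find_all("a")</code> grabs every link on the page (navigation, sorting links, relative URLs), not just player cells, and the relative hrefs then break <code>requests.get</code>; the rewrite also searches for class <code>profileBox</code> while the working script uses <code>profile-box</code>. The first script's selector should carry over. A minimal sketch on a stand-in HTML fragment (the structure is assumed from the working D-League script, not verified against the live page):</p>

```python
from bs4 import BeautifulSoup

# Stand-in for one row of the stats table.
html = """
<table><tr>
  <td data-th="Player"><a href="/player/Jane-Doe/Summary/1">Jane Doe</a></td>
  <td data-th="Team"><a href="/team/42">Somewhere</a></td>
</tr></table>
"""
soup = BeautifulSoup(html, "html.parser")

# Only the Player cells are matched; other links on the page are skipped,
# and the relative href is joined onto the site root before requesting it.
links = [(td.a.text, "https://basketball.realgm.com" + td.a["href"])
         for td in soup.find_all("td", {"data-th": "Player"})]
print(links)
```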
|
<python><pandas><beautifulsoup><python-requests>
|
2023-02-03 04:12:39
| 1
| 393
|
Anthony Madle
|
75,331,340
| 4,500,749
|
Send tasks to a Celery app on a remote server
|
<p>I have a server (Ubuntu Server) on the local network on ip address: 192.168.1.9.
This server is running RabbitMQ in docker.</p>
<p>I defined a basic Celery app:</p>
<pre><code>from celery import Celery

app = Celery(
    'tasks',
    brocker='pyamqp://<username>:<password>@localhost//',
    backend='rpc://',
)

@app.task
def add(x, y):
    return x + y
</code></pre>
<p>Connected on the server I run the script with <code>celery -A tasks worker --loglevel=INFO -c 2 -E</code></p>
<p>On my local laptop in a python shell I try to execute the task remotely by creating a new Celery instance with this time the ip address of my remote server.</p>
<pre><code>from celery import Celery

app = Celery(
    'tasks',
    brocker='pyamqp://<username>:<password>@192.168.1.9//',
    backend='rpc://',
)

result = app.send_task('add', (2,2))
# Note: I also tried app.send_task('tasks.add', (2,2))
</code></pre>
<p>And from there nothing happens; the task stays <code>PENDING</code> forever, I can't see anything in the logs, and the server doesn't seem to pick up the task.
If I connect to the server and run the same commands locally (but with <code>localhost</code> as the address) it works fine.</p>
<p>What is wrong? How can I send tasks remotely?
Thank you.</p>
|
<python><celery><remote-server>
|
2023-02-03 03:52:35
| 2
| 326
|
Romn
|
75,331,308
| 1,496,362
|
What is the int needed for in map(int, icount) in Pydoop
|
<p>In the official <a href="https://github.com/malli3131/HadoopTutorial/blob/master/Pydoop/Tutorial" rel="nofollow noreferrer">Pydoop tutorial</a> there is a word count example.</p>
<p>I understand how it works, but I am wondering about the inner workings of <code>map(int, icounts)</code>.</p>
<p>Do I follow correctly that icounts is a list of 1s? Where does the int come from and why map?</p>
<pre><code># Compute the word frequency
import pydoop

def mapper(_, text, writer):
    for word in text.split():
        writer.emit(word, "1")

def reducer(word, icounts, writer):
    writer.emit(word, sum(map(int, icounts)))
</code></pre>
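<p>For illustration, outside of Pydoop: <code>icounts</code> is an iterable of the string values the mapper emitted (<code>"1"</code> for each occurrence of the word), and <code>map(int, ...)</code> applies the <code>int</code> constructor to each string so that <code>sum</code> can add them as numbers:</p>

```python
icounts = ["1", "1", "1"]        # what the mapper emitted for one word
total = sum(map(int, icounts))   # int("1") -> 1, applied to each element
print(total)  # 3
```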
|
<python><hadoop><mapreduce>
|
2023-02-03 03:48:42
| 1
| 5,417
|
dorien
|
75,331,240
| 825,920
|
Deserialize Protobuf datetime?
|
<p>I am trying to deserialize the <code>datetime?</code> data serialized using Protobuf from Redis records. (They are not epoch numbers).</p>
<p>I found the following integers (left) in the Redis records (serialized <code>datetime?</code> as protobuf).</p>
<pre><code> 1354218408 => "2023-02-02T04:51:19.5480532Z"
2719022476 => "2023-02-01T13:43:21.7035974Z"
3430755584 => "2023-02-01T14:01:51.0320768Z"
2719022538 => "2023-02-01T13:43:21.7036005Z"
674672264 => "2023-02-02T04:14:58.087098Z"
2184901194 => "2023-02-02T21:32:05.7918176Z"
</code></pre>
<p>The following is the raw protobuf for "2023-02-02T04:15:41.406221Z" (<code>datetime?</code>).</p>
<pre><code> {
"2": [
{
"1": 1541054724
}
]
},
</code></pre>
<p>Given a serialized integer, how to convert it to datetime?</p>
<p>I tried the following python code and it doesn't get the right number.</p>
<pre><code>from datetime import datetime
timestamp = 1354218408 # Example integer representing a timestamp
dt = datetime.fromtimestamp(timestamp)
print(dt)
</code></pre>
<p>It prints <code>2012-11-29 14:46:48</code> instead of "2023-02-02T04:51:19.5480532Z"</p>
|
<python><c#><protocol-buffers><protobuf-net><stackexchange.redis>
|
2023-02-03 03:36:14
| 0
| 29,537
|
ca9163d9
|
75,331,236
| 6,494,707
|
How to take only the array matrix (item) from np.array()
|
<p>I have a list <code>mask_arr</code> of numpy arrays, and each element of the list is a numpy array like this:</p>
<pre><code>mask_arr[0][:]
array([[255, 255, 255, ..., 255, 255, 255],
       [255, 255, 255, ..., 255, 255, 255],
       [255, 255, 255, ..., 255, 255, 255],
       ...,
       [255, 255, 255, ..., 255, 255, 255],
       [255, 255, 255, ..., 255, 255, 255],
       [255, 255, 255, ..., 255, 255, 255]], dtype=uint8)
</code></pre>
<p>How to take only the 2D array without the <code>dtype</code> part:</p>
<pre><code>[[255, 255, 255, ..., 255, 255, 255],
 [255, 255, 255, ..., 255, 255, 255],
 [255, 255, 255, ..., 255, 255, 255],
 ...,
 [255, 255, 255, ..., 255, 255, 255],
 [255, 255, 255, ..., 255, 255, 255],
 [255, 255, 255, ..., 255, 255, 255]]
</code></pre>
<p>the reason is that I am getting the following error:</p>
<pre><code>im = mask_arr[i]
*** TypeError: only integer scalar arrays can be converted to a scalar index
</code></pre>
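<p>A note that may clarify the situation: the <code>dtype=uint8</code> is only part of the printed representation, not something stored alongside the data that needs removing; the TypeError above is about the index <code>i</code> being an array rather than a plain integer. A small sketch:</p>

```python
import numpy as np

mask_arr = [np.full((3, 3), 255, dtype=np.uint8)]

im = mask_arr[0]      # fine: integer index
print(im.tolist())    # plain nested lists -- no dtype in the output

try:
    mask_arr[np.array([0, 1])]   # a list indexed with an array
except TypeError as e:
    print(e)          # the same "only integer scalar arrays" error
```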
|
<python><arrays><list><numpy><numpy-ndarray>
|
2023-02-03 03:35:31
| 1
| 2,236
|
S.EB
|
75,331,127
| 16,971,617
|
One Mask subtracting another mask on numpy
|
<p>I am new to numpy, so any help is appreciated. Say I have two 1-0 masks A and B, 2D numpy arrays with the same dimensions.
Now I would like to do a logical operation to subtract B from A:</p>
<pre><code>A  B  Expected Result
1  1  0
1  0  1
0  1  0
0  0  0
</code></pre>
<p>But I am not sure it works when a = 0 and b = 1, where a and b are elements from A and B respectively, for <code>A = A - B</code>.
So I do something like:</p>
<pre><code>A = np.where(B == 0, A, 0)
</code></pre>
<p>But this is not very readable. Is there a better way to do that?
Because for logical or, I can do something like:</p>
<pre><code>A = A | B
</code></pre>
<p>Is there a similar operator that I can do the subtraction?</p>
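<p>For 0/1 integer (or boolean) masks, <code>A &amp; ~B</code> expresses "in A and not in B" directly, mirroring the <code>A | B</code> style above. A sketch:</p>

```python
import numpy as np

A = np.array([1, 1, 0, 0])
B = np.array([1, 0, 1, 0])

result = A & ~B            # "in A and not in B"
print(result)              # [0 1 0 0]

# Equivalent, arguably more explicit, forms:
alt1 = A * (1 - B)
alt2 = np.logical_and(A, np.logical_not(B)).astype(A.dtype)
```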
|
<python><numpy>
|
2023-02-03 03:12:53
| 2
| 539
|
user16971617
|
75,330,993
| 13,689,939
|
Python requests PUT request with json parameter fails and data parameter succeeds
|
<p><strong>Problem</strong></p>
<p>I've looked at some of the documentation about the <code>json</code> and <code>data</code> parameters and the differences between them. I think I understand the difference, best explained <a href="https://stackoverflow.com/questions/47188244/what-is-the-difference-between-the-data-and-json-named-arguments-with-reques">here</a>, in my opinion.</p>
<p>However, I have a specific request that fails on <code>PUT</code> using <code>json</code> but succeeds using <code>data</code>, and I'm not sure why. Can someone clarify why this is the case? Could it be that there is a list in the payload?</p>
<p><strong>Context</strong></p>
<p>I have <code>requests==2.28.0</code> installed. Below is the code that submits the <code>PUT</code> requests to an API for PagerDuty, the incident management software, one using <code>data</code> (successful) and one using <code>json</code> (failing). Otherwise they are identical.</p>
<p>The weird thing is that <a href="https://developer.pagerduty.com/api-reference/b3A6Mjc0ODE0OA-merge-incidents" rel="nofollow noreferrer">their examples</a> use the <code>json</code> parameter.</p>
<pre><code>payload = f'{{"source_incidents": [{{"id": "{child_incident_id}", "type": "incident_reference"}}]}}'
headers = {
    'Content-Type': "application/json",
    'Accept': "application/vnd.pagerduty+json;version=2",
    'From': email,
    'Authorization': f"Token token={read_write_api_token}"
}
response = requests.put(f'https://api.pagerduty.com/incidents/{parent_incident_id}/merge', data=payload, headers=headers)
print("response: ", response)
</code></pre>
<p>Result: <code>response: <Response [200]></code></p>
<pre><code>payload = f'{{"source_incidents": [{{"id": "{child_incident_id}", "type": "incident_reference"}}]}}'
headers = {
    'Content-Type': "application/json",
    'Accept': "application/vnd.pagerduty+json;version=2",
    'From': email,
    'Authorization': f"Token token={read_write_api_token}"
}
response = requests.put(f'https://api.pagerduty.com/incidents/{parent_incident_id}/merge', json=payload, headers=headers)
print("response: ", response)
</code></pre>
<p>Result: <code>response: <Response [400]></code></p>
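<p>A likely explanation: <code>json=</code> serializes whatever it is given, and the payload here is already a JSON <em>string</em>, so it gets encoded a second time, into a JSON string literal rather than an object. The PagerDuty examples pass a dict to <code>json=</code>, not a pre-serialized string. A sketch of the difference, independent of PagerDuty:</p>

```python
import json

payload_str = '{"source_incidents": [{"id": "ABC", "type": "incident_reference"}]}'

# What json= effectively sends for a string: json.dumps(payload_str),
# i.e. a quoted string literal, not an object -- hence the 400.
print(json.dumps(payload_str)[:40])

# Passing a dict serializes once, producing the intended object.
payload_dict = {"source_incidents": [{"id": "ABC", "type": "incident_reference"}]}
print(json.dumps(payload_dict)[:40])
```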
|
<python><json><http><python-requests>
|
2023-02-03 02:45:14
| 3
| 986
|
whoopscheckmate
|
75,330,988
| 10,834,788
|
How to write a key function for bisect.bisect_left that compares both the index of two arrays?
|
<p>I want to write a key function for <code>bisect.bisect_left</code> and my objective is to compare two lists, call one list smaller than the other only if both elements of it are smaller than or equal to the other list's elements.</p>
<p><code>[x1, y1]</code> should be placed before <code>[x2, y2]</code> only if <code>x1 <= x2 and y1 <= y2</code>.</p>
<p>My objective is to figure out the placement of a point with <code>(x, y)</code> coordinates in the sorted list of rectangles (each element being a (length, breadth) pair), in order to count the number of rectangles the point could fall in.</p>
<p>It can be possible that a point cannot be placed at any such index.</p>
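<p>Worth noting, as a sketch of the underlying issue: <code>bisect</code> assumes a total order, and "both coordinates <=" is only a partial order (e.g. <code>[1, 5]</code> and <code>[5, 1]</code> are incomparable), so no key function can make <code>bisect_left</code> answer this directly. A plain scan gives the count:</p>

```python
rects = [(3, 4), (5, 2), (6, 6)]   # (length, breadth) pairs, hypothetical data
x, y = 4, 3                        # the query point

# A point fits in a rectangle when both coordinates are <= its sides.
count = sum(1 for w, h in rects if x <= w and y <= h)
print(count)  # 1  (only the 6x6 rectangle fits the point)
```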
|
<python><binary-search><bisect>
|
2023-02-03 02:44:15
| 1
| 4,662
|
Aviral Srivastava
|