QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 β |
|---|---|---|---|---|---|---|---|---|
77,054,028 | 11,578,282 | Why am I missing data when I receive packets over UDP? | <p>I'm trying to receive data over a UDP connection from another app running locally but I don't seem to be receiving all the data that is being sent. I believe it's because I'm receiving the data in a while loop which sometimes doesn't run quickly enough to receive all the data but I'm not sure of an alternative I can use to constantly receive data. I should be receiving a sine wave but I'm actually receiving what is shown in the figure.</p>
<pre><code>import socket
import struct
import time
import numpy as np
import sys
UDP_IP = "127.0.0.1"
UDP_PORT = 2000
sock = socket.socket(socket.AF_INET, # Internet
socket.SOCK_DGRAM) # UDP
sock.bind((UDP_IP, UDP_PORT))
data_fifo = []
def receive_data():
while True:
data, addr = sock.recvfrom(1472) # buffer size is 1472 bytes
temp = struct.unpack('368f', data)
data_fifo.extend(temp)
</code></pre>
<p><a href="https://i.sstatic.net/97s5G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/97s5G.png" alt="received data" /></a></p>
| <python><sockets><udp> | 2023-09-06 17:12:01 | 0 | 467 | Eind997 |
77,053,969 | 2,112,406 | Using pyspark in a python module | <p>Can I add functionality to my Python module to use pyspark? I have to parse very large data files in a non-trivial way.</p>
<p>If so, could anyone point me towards a resource? I'm a bit confused about this whole thing, because whenever I write a python script using pyspark, I have to run it with <code>spark-submit</code> etc., so I'm not sure how I'd set that up within a module. I'm also not sure how one would pass in the required/available resources on the system the module is running.</p>
| <python><pyspark> | 2023-09-06 17:00:56 | 1 | 3,203 | sodiumnitrate |
77,053,921 | 6,791,157 | Path name with dot-colon on Windows creates a ghost file | <p>I do not understand this behavior. This is a minimal reproduction of a strange behavior I discovered while trying to write to a file whose name is an ISO format time string including <code>:</code>. The file name was truncated starting with <code>:</code>.</p>
<p>The colon is an invalid path character on Windows when not part of a drive name.</p>
<p>However, this command on cmd:</p>
<pre><code>echo hello >x.:x
</code></pre>
<p>Creates a file that shows up on <code>dir</code> as <code>x.</code>. <code>Path().iterdir()</code> in Python also lists <code>x.</code>.</p>
<p>However, <code>x.</code> is not readable in cmd, and neither is <code>x.:x</code>:</p>
<pre><code>C:\Users\user>dir x*
Volume in drive C has no label.
Volume Serial Number is F606-918D
Directory of C:\Users\user
05/09/2023 01:59 0 x.
1 File(s) 0 bytes
0 Dir(s) 6,182,993,920 bytes free
C:\Users\user>type x.
The system cannot find the file specified.
C:\Users\user>type x.:x
The filename, directory name, or volume label syntax is incorrect
</code></pre>
<p>Though it is readable in Python:</p>
<pre><code>In [1]: Path('x.:x').read_text()
Out[1]: 'hello \n'
</code></pre>
<p>I would expect Windows to reject a file name like this, whether on cmd or Python, just like it rejects <code>x:x</code> with <code>The system cannot find the path specified</code> or <code>FileNotFoundError</code>. However, the dot-colon escapes error for some reason.</p>
<p>I am on Windows 10 and My Python <code>sys.version</code> is <code>'3.11.1 (tags/v3.11.1:a7a450f, Dec 6 2022, 19:58:39) [MSC v.1934 64 bit (AMD64)]'</code></p>
<p>Does anyone know what is going on here?</p>
| <python><windows><cmd> | 2023-09-06 16:51:18 | 1 | 789 | roeen30 |
77,053,906 | 11,500,371 | Can you combine pd.wide_to_long with the creation of a multiindex? | <p>I have a dataframe in a wide format that looks like this:</p>
<pre><code>citydata = pd.DataFrame(columns=['Trip Date','City 1 Name','City 2 Name','City 3 Name','City 1 Cost','City 2 Cost','City 3 Cost','City 1 Hotel','City 2 Hotel','City 3 Hotel'])
citydata.loc[0] = ['2023-01-15','London','Paris','Barcelona',5500,4375,2530,"Marriott","Hilton","W"]
citydata.loc[1] = ['2023-08-15','Sydney','Auckland','Beijing',9500,2300,11000,"Peninsula","Meridien","AirBnb"]
citydata
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Trip Date</th>
<th>City 1 Name</th>
<th>City 2 Name</th>
<th>City 3 Name</th>
<th>City 1 Cost</th>
<th>City 2 Cost</th>
<th>City 3 Cost</th>
<th>City 1 Hotel</th>
<th>City 2 Hotel</th>
<th>City 3 Hotel</th>
</tr>
</thead>
<tbody>
<tr>
<td>2023-01-15</td>
<td>London</td>
<td>Paris</td>
<td>Barcelona</td>
<td>5500</td>
<td>4375</td>
<td>2530</td>
<td>Marriott</td>
<td>Hilton</td>
<td>W</td>
</tr>
<tr>
<td>2023-08-15</td>
<td>Sydney</td>
<td>Auckland</td>
<td>Beijing</td>
<td>9500</td>
<td>2300</td>
<td>11000</td>
<td>Peninsula</td>
<td>Meridien</td>
<td>AirBnb</td>
</tr>
</tbody>
</table>
</div>
<p>I want to use pd.wide_to_long (or another function like melt?) to remove the City 1, 2, and 3 distinction, have a long format, and have simple columns for City Name, City Cost, and City Hotel. The end result would be:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Trip Date</th>
<th>City Name</th>
<th>City Cost</th>
<th>City Hotel</th>
</tr>
</thead>
<tbody>
<tr>
<td>2023-01-15</td>
<td>London</td>
<td>5500</td>
<td>Marriott</td>
</tr>
<tr>
<td>2023-01-15</td>
<td>Paris</td>
<td>4375</td>
<td>Hilton</td>
</tr>
<tr>
<td>2023-01-15</td>
<td>Barcelona</td>
<td>2530</td>
<td>W</td>
</tr>
<tr>
<td>2023-08-15</td>
<td>Sydney</td>
<td>9500</td>
<td>Peninsula</td>
</tr>
<tr>
<td>2023-08-15</td>
<td>Auckland</td>
<td>2300</td>
<td>Meridien</td>
</tr>
<tr>
<td>2023-08-15</td>
<td>Beijing</td>
<td>11000</td>
<td>AirBnb</td>
</tr>
</tbody>
</table>
</div>
<p>I can't seem to figure out how pd.wide_to_long can do this while preserving the trip date index.</p>
| <python><pandas><dataframe> | 2023-09-06 16:48:33 | 3 | 337 | Sean R |
77,053,878 | 9,046,275 | Row-wise aggregation based on non-zero columns | <p>I am facing a scenario where I have a sparse dataset (for each row, only 4 columns are populated at the same time, the rest will be zeros). For each variable prefix, there will be 3 variables (e.g <code>a_qty</code>, <code>a_height</code>, <code>ma_width</code>), the number of columns is completely arbitrary as new prefixes might be added anytime.</p>
<p>To make an example, the dataframe could resemble something like:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(
{
"some_id": ["x", "y", "z"],
"a_qty": [1, 0, 0],
"a_height": [2, 0, 0],
"a_width": [3, 0, 0],
"b_qty": [0, 3, 0],
"b_height": [0, 5, 0],
"b_width": [0, 8, 0],
"c_qty": [0, 0, 10],
"c_height": [0, 0, 11],
"c_width": [0, 0, 12],
}
)
# prints
βββββββββββ¬ββββββββ¬βββββββββββ¬ββββββββββ¬ββββ¬ββββββββββ¬ββββββββ¬βββββββββββ¬ββββββββββ
β some_id β a_qty β a_height β a_width β β¦ β b_width β c_qty β c_height β c_width β
β --- β --- β --- β --- β β --- β --- β --- β --- β
β str β i64 β i64 β i64 β β i64 β i64 β i64 β i64 β
βββββββββββͺββββββββͺβββββββββββͺββββββββββͺββββͺββββββββββͺββββββββͺβββββββββββͺββββββββββ‘
β x β 1 β 2 β 3 β β¦ β 0 β 0 β 0 β 0 β
β y β 0 β 0 β 0 β β¦ β 8 β 0 β 0 β 0 β
β z β 0 β 0 β 0 β β¦ β 0 β 10 β 11 β 12 β
βββββββββββ΄ββββββββ΄βββββββββββ΄ββββββββββ΄ββββ΄ββββββββββ΄ββββββββ΄βββββββββββ΄ββββββββββ
</code></pre>
<p>I am looking for a way to narrow down the dataset by reducing the number of columns and getting rid of the 0s for a given row. Each row of the target dataframe should contain:</p>
<ol>
<li><code>qty</code>: the sum of the columns containing <code>qty</code> for a given row (same applies to <code>width</code> and <code>height</code>)</li>
<li><code>is_prod_*</code> a dummy variable set to 1 depending on the prefix of the non-zero columns. Prefixes won't mix (only a single letter per row)</li>
</ol>
<pre class="lang-py prettyprint-override"><code>df_target = pl.DataFrame(
{
"some_id": ["x", "y", "z"],
"height": [2, 5, 11],
"width": [3, 8, 12],
"qty": [1, 3, 10],
"is_prod_a": [1, 0, 0],
"is_prod_b": [0, 1, 0],
"is_prod_c": [0, 0, 1]
}
)
# prints
βββββββββββ¬βββββββββ¬ββββββββ¬ββββββ¬βββββββββ¬βββββββββ¬βββββββββ
β some_id β height β width β qty β prod_a β prod_b β prod_c β
β --- β --- β --- β --- β --- β --- β --- β
β str β i64 β i64 β i64 β i64 β i64 β i64 β
βββββββββββͺβββββββββͺββββββββͺββββββͺβββββββββͺβββββββββͺβββββββββ‘
β x β 2 β 3 β 1 β 1 β 0 β 0 β
β y β 5 β 8 β 3 β 0 β 1 β 0 β
β z β 11 β 12 β 10 β 0 β 0 β 1 β
βββββββββββ΄βββββββββ΄ββββββββ΄ββββββ΄βββββββββ΄βββββββββ΄βββββββββ
</code></pre>
<p>I have tried grouping by row number and summing variables by type:</p>
<pre class="lang-py prettyprint-override"><code>width_cols = [col for col in df_test.columns if "width" in col],
height_cols = [col for col in df_test.columns if "height" in col]
qty_cols = [col for col in df_test.columns if "qty" in col]
(
df
.with_row_index()
.group_by("index")
.agg(
pl.col(*width_cols).sum().alias('width')
)
)
# DuplicateError: column with name 'width' has more than one occurrence
</code></pre>
<p>This is clearly not working as I am trying to create a duplicate column on the second row. What would be the most elegant way to achieve such aggregation?</p>
<h3>Edit 1</h3>
<p>It is possible to get the row-wise column sums using <a href="https://docs.pola.rs/api/python/stable/reference/dataframe/api/polars.DataFrame.fold.html" rel="nofollow noreferrer"><code>pl.DataFrame.fold</code></a> which seems to do the trick. Still unsure on how to create the dummy column named base on a condition.</p>
<pre class="lang-py prettyprint-override"><code>(
df_test
.with_columns(
pl.fold(0, lambda acc, s: acc + s, pl.col(*width_cols)).alias("width"),
pl.fold(0, lambda acc, s: acc + s, pl.col(*height_cols)).alias("height"),
pl.fold(0, lambda acc, s: acc + s, pl.col(*qty_cols)).alias("qty")
)
)
</code></pre>
| <python><python-polars> | 2023-09-06 16:43:54 | 1 | 1,671 | anddt |
77,053,825 | 10,472,446 | Sorting list of complex elements like Gray code is organized | <p>I have a list of complex elements, namely tuples containing a predefined number of numbers, here is a small example:</p>
<p><code>[(2, 3, 5), (1, 5, 6), (1, 3, 4), (1, 3, 6), (2, 3, 7)]</code></p>
<p>I need to sort this list in the same way than Gray code is organized, meaning that only one element of the tuple can change from one item to the next. Order of the numbers in the tuples is important. Example:
<code>(1, 5, 6)</code> can be followed by <code>(1, 3, 6)</code> or vice-versa.</p>
<p>The first tuple in the resulting list can be any of the tuple in the initial list.</p>
<p>If a tuple cannot be placed according to the rule in a group of consecutive tuples, than it should be placed within another group where the rule can be applied. Example:
<code>[(2, 3, 5), (1, 5, 6), (1, 3, 4), (1, 3, 6), (2, 3, 7)]</code>
can be sorted such as:</p>
<p><code>[(1, 5, 6), (1, 3, 6), (1, 3, 4), (2, 3, 7), (2, 3, 5)]</code></p>
<p>or</p>
<p><code>[(2, 3, 5), (2, 3, 7), (1, 3, 4), (1, 3, 6), (1, 5, 6)]</code></p>
<p>The goal is to limit the number of changes when passing from one element to another, and obviously in total when considering the full list. The following list would not be optimal because it has a total number of changes of 6 (1+1+3+1=6) when the one before had only 5 changes in total:</p>
<p><code>[(1, 3, 4), (1, 3, 6), (1, 5, 6), (2, 3, 5), (2, 3, 7)]</code></p>
<p>Has anybody encountered such an algorithm?</p>
| <python><python-3.x><algorithm><sorting> | 2023-09-06 16:34:34 | 1 | 379 | Toady |
77,053,756 | 11,318,930 | holoviews - bokeh: histogram with log axis is blank | <p>I am trying to do a log plot in Jupyter Notebook using <code>Holoviews</code> with the <code>bokeh</code> backend. If I set the y-axis to <code>log</code>, the plot shows only the axes with no data plotted and the following warning:</p>
<pre><code>WARNING:param.HistogramPlot03582: Logarithmic axis range encountered value less than or equal to zero, please supply explicit lower-bound to override default of 0.010.
</code></pre>
<p>I can eliminate the warning by explicitly setting the y-axis range but there is still no plot. The results can be replicated using this:</p>
<pre><code>import holoviews as hv
import numpy as np
hv.extension('bokeh')
tl = [2079, 76, 15022, 3475, 31550, 38, 564, 551, 77, 117, 8962, 14, 186, 691, 896,
2041, 1784, 5225, 2046, 15216, 176, 460, 219, 37903, 7806, 6325, 25396, 38016, 4154]
frequencies, edges = np.histogram(tl, 20)
frequencies = frequencies.astype('float') # not really required
frequencies[frequencies <= 0] = .1 # not really required
print('min freq', frequencies.min())
print('freq list', frequencies)
hv.Histogram((edges, frequencies)).opts(logy=True, height=400, width=400, ylim=(1E-1, 3E6))
</code></pre>
<p>The result will look like this.</p>
<p><a href="https://i.sstatic.net/VetgR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VetgR.png" alt="enter image description here" /></a></p>
<p>Would anyone know of a workaround? Note: the problem is likely with <code>bokeh</code> to any <code>bokeh</code> solutions will likely work.</p>
| <python><plot><jupyter-notebook><bokeh><holoviews> | 2023-09-06 16:25:20 | 0 | 1,287 | MikeB2019x |
77,053,712 | 3,505,206 | Pydantic 2.0: Initializing a BaseModel with a private variable to be used to create computed_fields | <p>Using the following version</p>
<pre class="lang-ini prettyprint-override"><code>pydantic = "^2.0.2"
pydantic-settings = "^2.0.1"
</code></pre>
<p>Want to create a pydantic BaseModel for AWS SQS Messages, where the input is hidden after dumping the json via <code>print(model.model_dump_json())</code></p>
<p>What I have without hidden / private attributes.</p>
<pre class="lang-py prettyprint-override"><code>
class SQSMessage(BaseModel):
message: dict
@property
def sub_msg(self) -> dict:
return json.loads(self.message["Message"])
@computed_field
@property
def message_id(self) -> str:
return self.message["MessageId"]
@computed_field
@property
def bucket(self) -> str:
return self.sub_msg["Records"][0]["s3"]["bucket"]["name"]
@computed_field
@property
def key(self) -> str:
return self.sub_msg["Records"][0]["s3"]["object"]["key"]
@computed_field
@property
def event_datetime(self) -> datetime:
_dt = self.sub_msg["Records"][0]["eventTime"]
if "Z" in _dt:
return datetime.fromisoformat(self.sub_msg["Records"][0]["eventTime"].replace("Z", ""))
else:
return datetime.fromisoformat(self.sub_msg["Records"][0]["eventTime"])
</code></pre>
<p>This works well, but the messsage var is still shown when dumping the model json.</p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>msg = {
"Type": "Notification",
"MessageId": "0d69f2aa-3384-5435-b75a-a9102074b9a3",
"TopicArn": "arn:aws:sns:us-east-1:12345678910:example-sns-topic",
"Subject": "Amazon S3 Notification",
"Message": '{"Records":[{"eventVersion":"2.1","eventSource":"aws:s3","awsRegion":"us-east-1",'
'"eventTime":"2022-10-07T11:46:55.304Z","eventName":"ObjectCreated:Put",'
'"userIdentity":{"principalId":"AWS:AROARSMOKBLH3KNNAYOF2:ABCD12345@SOMEDOMAIN.com"},'
'"requestParameters":{"sourceIPAddress":"159.53.46.223"},'
'"s3":{"s3SchemaVersion":"1.0","configurationId":"tf-s3-topic-20220908215027661200000001",'
'"bucket":{"name":"app-bucket_name",'
'"ownerIdentity":{"principalId":"A1GXDFZQ55BKMX"},'
'"arn":"arn:aws:s3:::app-bucket_name"},'
'"object":{"key":"export/raw/v1/example.csv",'
'"size":0,'
'"eTag":"c258a1bafcdc3e550a13265867e8e5f4","sequencer":"00634011AF3640E697"}}}]}',
}
model = SQSMessage(message=msg)
print(model.model_dump_json())
>>> {"message":{"Type":"Notification","MessageId":"0d69f2aa-3384-5435-b75a-a9102074b9a3","TopicArn":"arn:aws:sns:us-east-1:12345678910:example-sns-topic","Subject":"Amazon S3 Notification","Message":"{\"Records\":[{\"eventVersion\":\"2.1\",\"eventSource\":\"aws:s3\",\"awsRegion\":\"us-east-1\",\"eventTime\":\"2022-10-07T11:46:55.304Z\",\"eventName\":\"ObjectCreated:Put\",\"userIdentity\":{\"principalId\":\"AWS:AROARSMOKBLH3KNNAYOF2:ABCD12345@SOMEDOMAIN.com\"},\"requestParameters\":{\"sourceIPAddress\":\"159.53.46.223\"},\"s3\":{\"s3SchemaVersion\":\"1.0\",\"configurationId\":\"tf-s3-topic-20220908215027661200000001\",\"bucket\":{\"name\":\"app-bucket_name\",\"ownerIdentity\":{\"principalId\":\"A1GXDFZQ55BKMX\"},\"arn\":\"arn:aws:s3:::app-bucket_name\"},\"object\":{\"key\":\"export/raw/v1/example.csv\",\"size\":0,\"eTag\":\"c258a1bafcdc3e550a13265867e8e5f4\",\"sequencer\":\"00634011AF3640E697\"}}}]}"},"message_id":"0d69f2aa-3384-5435-b75a-a9102074b9a3","bucket":"app-bucket_name","key":"export/raw/v1/example.csv","event_datetime":"2022-10-07T11:46:55.304000"}
</code></pre>
<p>When I try to use <code>PrivateAttrs</code>, like so</p>
<pre class="lang-py prettyprint-override"><code>...
_message: dict = PrivateAttr(default_factory=dict)
...
model = SQSMessage(_message=msg)
print(model.model_dump_json())
>>> {}
</code></pre>
<p>This does hide the private attributes, however we are unable to reference this field to create computed_fields. How could this be achieved with pydantic 2?</p>
<p>EDIT:
I have also tried adding the following Config, but this does not hide the message field when dumping to json.</p>
<pre class="lang-py prettyprint-override"><code> class Config:
@staticmethod
def json_schema_extra(schema: dict, _):
props = {}
for k, v in schema.get('properties', {}).items():
if not v.get("hidden", False):
props[k] = v
schema["properties"] = props
message: dict = Field(hidden=True)
</code></pre>
| <python><fastapi><pydantic> | 2023-09-06 16:18:13 | 1 | 456 | Jenobi |
77,053,686 | 19,130,803 | Save dash component or plotly express graph as json | <p>I am creating a <code>dash</code> application and plotting graphs using <code>plotly express</code>. I trying to save the <code>dash components</code> and <code>graphs</code> into a <code>json</code> file using <code>to_plotly_json()</code>.
But for <code>pie chart</code> graph, getting an error as below:</p>
<pre><code>Error-1 The first argument to the plotly.graph_objs.layout.Template
constructor must be a dict or
an instance of :class:`plotly.graph_objs.layout.Template`
Error-1 The first argument to the plotly.graph_objs.layout.Template
constructor must be a dict or
</code></pre>
<p>Below is MWE:</p>
<pre><code>import json
import dash_bootstrap_components as dbc
from dash import dcc
import plotly.express as px
import pandas as pd
def generate_pie_charts(df, template) -> list[dict[str, Any]]:
pie_charts = list()
for field in df.columns.tolist():
value_count_df = df[field].value_counts().reset_index()
cols = value_count_df.columns.tolist()
name: str = cols[0]
value: str = cols[1]
try:
figure = px.pie(
data_frame=value_count_df,
values=value,
names=name,
title=f"Pie chart of {field}",
template=template,
).to_plotly_json()
pie_chart = dcc.Graph(figure=figure).to_plotly_json()
pie_charts.append(pie_chart)
except Exception as e:
print(f"Error-1 {e}")
return pie_charts
def perform_exploratory_data_analysis():
rows = list()
template = "darkly"
info = {
"A": ["a", "a", "b", "b", "c", "a", "a", "b", "b", "c", "a", "a", "b", "b", "c"],
"B": ["c", "c", "c", "c", "c", "a", "a", "b", "b", "c", "a", "a", "b", "b", "c"],
}
df = pd.DataFrame(info)
try:
row = dbc.Badge(
"For Pie Charts", color="info", className="ms-1"
).to_plotly_json()
rows.append(row)
row = generate_pie_charts(df, template)
rows.append(row)
data = {"contents": rows}
status = False
msg = "Error creating EDA graphs."
file = "eda.json"
with open(file, "w") as json_file:
json.dump(data, json_file)
msg = "EDA graphs created."
status = True
except Exception as e:
print(f"Error-2 {e}")
result = (status, msg)
return result
perform_exploratory_data_analysis()
</code></pre>
<p>What I am missing?</p>
| <python><pandas><plotly-dash><plotly-express> | 2023-09-06 16:13:59 | 1 | 962 | winter |
77,053,670 | 8,256,981 | In numpy genfromtxt, missing_values, filling_values, excludelist, deletechars and replace_space are not working properly | <p>This is my test.csv file, where "A 1" and "A+2" are headers:</p>
<pre><code>A 1,A+2
test& ,1
skip,
#,
N/A,NA
</code></pre>
<p>This is my Jupyter code:</p>
<pre><code>import numpy as np
test = np.genfromtxt("test.csv",
delimiter = ',',
dtype = str,
comments = '#',
converters = None,
missing_values='NA',
filling_values=np.nan,
excludelist = ['skip'],
deletechars = " !#$%&'()*+,-./:;<=>?@[\\]^{|}~",
replace_space = '_',
autostrip = True
)
print(test)
</code></pre>
<p>This is my output:</p>
<pre><code>[['A 1' 'A+2']
['test&' '1']
['skip' '']
['N/A' 'NA']]
</code></pre>
<p>Why is it not this output:</p>
<pre><code>[['A_1' 'A2']
['test' '1']
['NA' nan]]
</code></pre>
<p>What is the correct way to use missing_values, filling_values, excludelist, deletechars and replace_space?</p>
| <python><numpy> | 2023-09-06 16:10:58 | 1 | 491 | maxloo |
77,053,535 | 13,258,525 | How can I sampleBy a date column without casting it to another type in PySpark? | <p>I have a data frame which I'd like to sample by its date column. However, <code>sampleBy</code> will only accept float, int, or str types:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import date
from pyspark.sql import Row
import pyspark.sql.functions as F
# create dummy data
df_rows = [Row(date = date(2020, 1, d), value=d*2) for d in range(1,31)]
df = spark.createDataFrame(df_rows)
df.show(5)
+----------+-----+
| date|value|
+----------+-----+
|2020-01-01| 2|
|2020-01-02| 4|
|2020-01-03| 6|
|2020-01-04| 8|
|2020-01-05| 10|
+----------+-----+
only showing top 5 rows
</code></pre>
<p>Following <a href="https://stackoverflow.com/a/47672336/13258525">this SO approach</a> I get the following error:</p>
<pre><code>fractions = (df
.select("date")
.distinct()
.withColumn("fraction", F.lit(1.0))
.rdd.collectAsMap()
)
sampled_df = df.sampleBy("date", fractions, seed=42)
</code></pre>
<blockquote>
<p>PySparkTypeError: [DISALLOWED_TYPE_FOR_CONTAINER] Argument <code>fractions</code>(type: dict) should only contain a type in [float, int, str], got date</p>
</blockquote>
<p>I tried formatting the dictionary values as strings, but this then didn't match the original columns, and my sampled dataframe was empty:</p>
<pre class="lang-py prettyprint-override"><code>fractions = (df
.select(F.date_format(F.col("date"), "yyyy-MM-dd" ))
.distinct()
.withColumn("fraction", lit(1.0))
.rdd.collectAsMap()
)
sampled_df = df.sampleBy("date", fractions, seed=42)
sampled_df.groupBy("date").count().show()
+----+-----+
|date|count|
+----+-----+
+----+-----+
</code></pre>
<p>If I cast the original column to string first, I can then sample, but I'm hoping to avoid this partly to keep the code simple, and partly for computational speed:</p>
<pre><code>sampled_df = (df
.withColumn("date", F.date_format(F.col("date"), "yyyy-MM-dd" ))
.sampleBy("date", fractions, seed=42)
)
sampled_df.groupBy("date").count().show(5)
+----------+-----+
| date|count|
+----------+-----+
|2020-01-02| 1|
|2020-01-06| 1|
|2020-01-03| 1|
|2020-01-07| 1|
|2020-01-10| 1|
+----------+-----+
only showing top 5 rows
</code></pre>
<p>Is it possible to sampleBy a column that isn't float, int or str type without the intermediate casting step?</p>
| <python><dataframe><pyspark> | 2023-09-06 15:50:35 | 0 | 2,183 | s_pike |
77,053,484 | 10,808,155 | How to merge rows with condition on time range in PySpark Dataframe | <p>Here is my input pyspark dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Name</th>
<th>Start Date</th>
<th>End Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>John</td>
<td>2022-06-29 20:00:00</td>
<td>2022-07-12 20:00:00</td>
</tr>
<tr>
<td>1</td>
<td>John</td>
<td>2022-01-02 19:00:00</td>
<td>2022-05-18 20:00:00</td>
</tr>
<tr>
<td>2</td>
<td>Bob</td>
<td>2021-07-05 20:00:00</td>
<td>2021-09-25 20:00:00</td>
</tr>
<tr>
<td>2</td>
<td>Jack</td>
<td>2022-04-24 20:00:00</td>
<td>2022-06-25 20:00:00</td>
</tr>
<tr>
<td>3</td>
<td>Mike</td>
<td>2021-10-31 20:00:00</td>
<td>2021-12-11 19:00:00</td>
</tr>
</tbody>
</table>
</div>
<p>If they have exact same ID and Name, I would like to merge to one row and their start date should take earliest start date and end date should take the latest end date, the output should be like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Name</th>
<th>Start Date</th>
<th>End Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>John</td>
<td>2022-01-02 19:00:00</td>
<td>2022-07-12 20:00:00</td>
</tr>
<tr>
<td>2</td>
<td>Bob</td>
<td>2021-07-05 20:00:00</td>
<td>2021-09-25 20:00:00</td>
</tr>
<tr>
<td>2</td>
<td>Jack</td>
<td>2022-04-24 20:00:00</td>
<td>2022-06-25 20:00:00</td>
</tr>
<tr>
<td>3</td>
<td>Mike</td>
<td>2021-10-31 20:00:00</td>
<td>2021-12-11 19:00:00</td>
</tr>
</tbody>
</table>
</div>
<p>John has been merged and his start date updated to 2022-01-02 and end date updated to 2022-07-12 20:00:00.</p>
<p>How can I achieve in pyspark?</p>
| <python><dataframe><pyspark> | 2023-09-06 15:43:21 | 1 | 305 | Jie Zhang |
77,053,140 | 2,886,640 | How to make an ordered file structure in KivyMD? | <p>I watched a lot of tutorials and examples about Kivy and KivyMD, but there are only a few doing complex apps, and they use to pile all the code in almost one or two files.</p>
<p>I am trying to do it properly, but I'm having errors that I can't understand very well as I'm still a newbie on this subject.</p>
<p><strong>THE CODE</strong></p>
<p>I made an app with the following <strong>directory structure</strong>:</p>
<pre><code>mytestapp
|_ screens
| |_ screen1.py
| |_ screen2.py
|_ src
| |_ kv
| |_ screen1.kv
| |_ screen2.kv
|_ main.py
|_ mytest.kv
</code></pre>
<p>Here's the content from <strong>main.py</strong>:</p>
<pre><code>from kivymd.app import MDApp
from kivy.lang import Builder
from kivymd.uix.screenmanager import MDScreenManager
from screens.screen1 import Screen1
from screens.screen2 import Screen2
import os
KV_FILES = [
'src/kv/screen1.kv',
'src/kv/screen2.kv',
]
class MyScreenManager(MDScreenManager):
def load_screen_kv_files(self):
for kv_file in KV_FILES:
kv_path = os.path.abspath(kv_file)
if kv_path in Builder.files:
continue
Builder.load_file(kv_file)
class MyTestApp(MDApp):
def build(self):
self.screen_manager = MyScreenManager()
self.screen_manager.load_screen_kv_files()
return self.screen_manager
</code></pre>
<p>Here's the content from <strong>screen1.py</strong>:</p>
<pre><code>from kivymd.uix.screen import MDScreen
class Screen1(MDScreen):
pass
</code></pre>
<p>Here's the content from <strong>screen2.py</strong>:</p>
<pre><code>from kivymd.uix.screen import MDScreen
class Screen2(MDScreen):
pass
</code></pre>
<p>And the code from the Kivy files:</p>
<p>Content from <strong>mytest.kv</strong>:</p>
<pre><code><MyScreenManager>:
Screen1:
Screen2:
</code></pre>
<p>Content from <strong>screen1.kv</strong></p>
<pre><code><Screen1>:
name: 'screen-1'
... [Code which works OK individually] ...
</code></pre>
<p>Content from <strong>screen2.kv</strong></p>
<pre><code><Screen2>:
name: 'screen-2'
... [Code which works OK individually] ...
</code></pre>
<p><strong>THE PROBLEM</strong></p>
<p>The problem is that nothing happens when I ran the app... Screens are not loaded.</p>
<p><strong>ALTERNATIVES</strong></p>
<p>If I remove the content from <strong>mytest.kv</strong> and modify the method <code>load_screen_kv_files</code> this way:</p>
<pre><code>def load_screen_kv_files(self):
for kv_file in KV_FILES:
kv_path = os.path.abspath(kv_file)
if kv_path in Builder.files:
continue
self.add_widget(Builder.load_file(kv_file))
</code></pre>
<p>I get the error:</p>
<pre><code>kivy.uix.screenmanager.ScreenManagerException: ScreenManager accepts only Screen widget.
</code></pre>
<p>So if I change the <code>screen1.kv</code> and <code>screen2.kv</code> files to just replace <code><Screen1></code> by <code>MDScreen</code> and <code><Screen2></code> by <code>MDScreen</code> too, the app works.</p>
<p>But the problem here is that my KV content belong to an instance of MDScreen instead of an instance of Screen1 and Screen2. And in the future, I will need to use instances of Screen1 and Screen2 which will define methods called by the KV content elements.</p>
<p>Do you know how to manage that preserving this file structure (or improving it)?</p>
| <python><python-3.x><kivy><kivymd> | 2023-09-06 14:59:58 | 0 | 10,269 | forvas |
77,052,972 | 11,832,749 | How can I make transparent histograms in subplots? | <p>So far I have:</p>
<pre><code>def add_alpha_to_colormap(cmap, alpha):
# borrowed from https://saturncloud.io/blog/adding-alpha-to-an-existing-matplotlib-colormap-a-guide/
cmap = plt.cm.get_cmap(cmap)
colors = cmap(np.arange(cmap.N))
# Add alpha to the RGB array
RGBA = np.hstack([colors[:, :3], np.full((cmap.N, 1), alpha)])
# Create new colormap
new_cmap = mcolors.ListedColormap(RGBA)
return new_cmap
...
num_bins = 20
fig, axes = plt.subplots(figsize=(18, 14), dpi=120, nrows=2, ncols=4)
cmap = add_alpha_to_colormap('viridis', alpha=0.5)
for model in set(df.index):
df.loc[model]['rouge1_recall'].plot.hist(cmap=cmap, bins=num_bins, title='Rouge1 recall', ax=axes[0, 0])
df.loc[model]['rouge1_precision'].plot.hist(cmap=cmap, bins=num_bins, title='Rouge1 precision', ax=axes[1, 0])
df.loc[model]['rouge2_recall'].plot.hist(cmap=cmap, bins=num_bins, title='Rouge2 recall', ax=axes[0, 1])
df.loc[model]['rouge2_precision'].plot.hist(cmap=cmap, bins=num_bins, title='Rouge2 precision', ax=axes[1, 1])
df.loc[model]['bert_recall'].plot.hist(cmap=cmap, bins=num_bins, title='BertScore recall', ax=axes[0, 2])
df.loc[model]['bert_precision'].plot.hist(cmap=cmap, bins=num_bins, title='BertScore recall', ax=axes[1, 2])
df.loc[model]['bleu'].plot.hist(cmap=cmap, bins=num_bins, title='Bleu', ax=axes[0, 3])
plt.show()
</code></pre>
<p>which gives me this:
<a href="https://i.sstatic.net/2EQIr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2EQIr.png" alt="enter image description here" /></a></p>
<p>I don't know:</p>
<ol>
<li>Why it's coming off purple instead of the actual default colors with a reduced alpha.</li>
<li>How to add a legend for each color.</li>
</ol>
| <python><pandas><matplotlib><colors><histogram> | 2023-09-06 14:38:52 | 1 | 371 | Michael |
77,052,942 | 6,236,430 | Python (3) flask to rebuild tables fails (usually) using mysql.connector | <p>I have a few tables I wish to truncate and then repopulate (legacy and not how I'd do it but company won't pay to fix process) Code is logging the state but when it gets to truncate a table it is just hanging.</p>
<p>Here's the relevant code:</p>
<pre><code> print(str(datetime.utcnow()), 'Running Database Sync...')
cur = cnx.cursor(dictionary=True)
logging.info(cnx.is_connected())
try:
# Clear the database ready to re-import
logging.info("Clear Tables")
# Clear lookup tables first
cur.execute("TRUNCATE TABLE member_additional_skills;")
logging.info("Reset member_additional_skills")
except mysql.connector.Error as err:
print("Failed truncating database: {}".format(err))
exit(1)
try:
# cur.execute("ALTER TABLE member_additional_skills AUTO_INCREMENT = 1;")
# logging.debug("Reset inc member_additional_skills")
cur.execute("TRUNCATE TABLE member_employment_status;")
logging.info("Reset member_employment_skills")
except mysql.connector.Error as err:
print("Failed truncating database: {}".format(err))
exit(1)
try:
# cur.execute("ALTER TABLE member_employment_status AUTO_INCREMENT = 1;")
# logging.debug("Reset inc member_employment_status")
cur.execute("TRUNCATE TABLE member_work_domain;")
logging.info("Reset member_work_domain")
except mysql.connector.Error as err:
print("Failed truncating database: {}".format(err))
exit(1)
try:
# cur.execute("ALTER TABLE member_work_domain AUTO_INCREMENT = 1;")
# logging.debug("Reset inc member_work_domain")
cnx.commit()
logging.info("Delete Members")
# Clear members table
sql = "DELETE FROM members;"
cur.execute(sql)
cnx.commit()
except mysql.connector.Error as err:
print("Failed truncating database: {}".format(err))
exit(1)
</code></pre>
<p>So sometimes this is working but mostly it is failing on the first truncate call. The server isn't running out of memory and its cpu and load are reasonable. The first table (member_additional_skills) has 850 rows in it so should be a simple task right? I do not get any errors and the app hangs indefinitely after logging <code>Clear Tables</code></p>
<p>I usually end up manually running this a few times and a few server reboots and it will either work or will continue to fail. The flask schedule call is on a cron job. Usually whatever i do fails and then suddenly it has worked (although this is not the case now for over a week).</p>
<p>*** EDIT This is the full function for completeness, the issue is however only happening at the point outlined above (after Clear Tables) ***</p>
<pre><code>@app.cli.command()
def scheduled():
"""Run scheduled job."""
logging.info("Start Scheduled Sync")
# Compile regex for phone number validation
phone_num_check = re.compile(r"^[0-9 ]+$")
# Load the entire _member directory into a dict
with open(config["Path"]["root"] + "opt/find-interpreter-db-populate/members.json", "r") as fp:
members = json.load(fp)
logging.info("Fixing Members")
# Loop through each _member and fix the data
for member_id in members:
# Check if _member wants profile shown
if 'show_profile' in members[member_id]:
if members[member_id]['show_profile'].lower() == "yes":
members[member_id]['show_profile'] = 1
else:
members[member_id]['show_profile'] = 0
else:
members[member_id]['show_profile'] = 0
# Check that _member has a billing postcode
if 'billing_postcode' in members[member_id]:
if len(members[member_id]['billing_postcode']) > 2 and ' ' in members[member_id]['billing_postcode']:
members[member_id]['postcode_area'] = members[member_id]['billing_postcode'].split(' ')[0].upper()
# If we weren't able to get a postcode area then don't store this user
if 'postcode_area' not in members[member_id]:
continue
# If the user is missing their first name or last name then don't use
if 'first_name' not in members[member_id] or 'last_name' not in members[member_id]:
continue
if 'contact_tel' not in members[member_id]:
members[member_id]['contact_tel'] = None
else:
members[member_id]['contact_tel'] = members[member_id]['contact_tel'].replace('+44', '0')
members[member_id]['contact_tel'] = members[member_id]['contact_tel'].replace(' ', '')
# If phone number not valid then don't store it
if not phone_num_check.match(members[member_id]['contact_tel']):
members[member_id]['contact_tel'] = None
# If the user doesn't have a description then set it to None
if 'description' not in members[member_id]:
members[member_id]['description'] = None
# If the user has work domains set then split them
if 'work_domains' in members[member_id]:
members[member_id]['work_domains'] = members[member_id]['work_domains'].split(', ')
else:
members[member_id]['work_domains'] = []
# If the user has additional skills then split them
if 'additional_skills' in members[member_id]:
members[member_id]['additional_skills'] = members[member_id]['additional_skills'].split(', ')
else:
members[member_id]['additional_skills'] = []
# Changed DIN Member from Yes/No to 1/0
if 'din_member' in members[member_id]:
if members[member_id]['din_member'] == "Yes":
members[member_id]['din_member'] = 1
else:
members[member_id]['din_member'] = 0
else:
members[member_id]['din_member'] = 0
# Change NRCPD from Yes/No to 1/0
if 'nrcpd_registered' in members[member_id]:
if members[member_id]['nrcpd_registered'] == "Yes":
members[member_id]['nrcpd_registered'] = 1
else:
members[member_id]['nrcpd_registered'] = 0
else:
members[member_id]['nrcpd_registered'] = 0
# Change full/associate/student _member types to FULL/ASSO/STUD for DB
if 'full' in members[member_id]['membership_plan'].lower():
members[member_id]['membership_plan'] = 'FULL'
elif 'assoc' in members[member_id]['membership_plan'].lower():
members[member_id]['membership_plan'] = 'ASSO'
elif 'stud' in members[member_id]['membership_plan'].lower():
members[member_id]['membership_plan'] = 'STUD'
else:
continue
# Convert and split employment statuses (if set)
if 'employment_status' in members[member_id]:
emp_types = []
if 'free' in members[member_id]['employment_status'].lower():
emp_types.append('FREE')
if 'empl' in members[member_id]['employment_status'].lower():
emp_types.append('EMPL')
members[member_id]['employment_status'] = emp_types
else:
members[member_id]['employment_status'] = []
# If we have an avatar attachment ID, get the URL"
if "wp_user_avatar" in members[member_id]:
attachment_urls = get_attachment_url(members[member_id]["wp_user_avatar"])
if attachment_urls:
thumbnail_attachment_url = attachment_urls["thumbnail"]
full_attachment_url = attachment_urls["full"]
members[member_id]["avatar_thumbnail_url"] = thumbnail_attachment_url
members[member_id]["avatar_full_url"] = full_attachment_url
else:
members[member_id]["avatar_thumbnail_url"] = None
members[member_id]["avatar_full_url"] = None
else:
members[member_id]["avatar_thumbnail_url"] = None
members[member_id]["avatar_full_url"] = None
# Generate the _member key
members[member_id]['member_key'] = (str(member_id) + "|" + app.config['SECRET_KEY']).encode("utf-8")
members[member_id]['member_key'] = hashlib.sha256(members[member_id]['member_key']).hexdigest()[-10:].upper()
# All data valid - we'll want to add it to the DB so set the add field
members[member_id]['add'] = True
print(str(datetime.utcnow()), 'Running Database Sync...')
cur = cnx.cursor(dictionary=True)
logging.info(cnx.is_connected())
try:
# Clear the database ready to re-import
logging.info("Clear Tables")
# Clear lookup tables first
cur.execute("TRUNCATE TABLE member_additional_skills;")
logging.info("Reset member_additional_skills")
except mysql.connector.Error as err:
print("Failed truncating database: {}".format(err))
exit(1)
try:
# cur.execute("ALTER TABLE member_additional_skills AUTO_INCREMENT = 1;")
# logging.debug("Reset inc member_additional_skills")
cur.execute("TRUNCATE TABLE member_employment_status;")
logging.info("Reset member_employment_skills")
except mysql.connector.Error as err:
print("Failed truncating database: {}".format(err))
exit(1)
try:
# cur.execute("ALTER TABLE member_employment_status AUTO_INCREMENT = 1;")
# logging.debug("Reset inc member_employment_status")
cur.execute("TRUNCATE TABLE member_work_domain;")
logging.info("Reset member_work_domain")
except mysql.connector.Error as err:
print("Failed truncating database: {}".format(err))
exit(1)
try:
# cur.execute("ALTER TABLE member_work_domain AUTO_INCREMENT = 1;")
# logging.debug("Reset inc member_work_domain")
cnx.commit()
logging.info("Delete Members")
# Clear members table
sql = "DELETE FROM members;"
cur.execute(sql)
cnx.commit()
except mysql.connector.Error as err:
print("Failed truncating database: {}".format(err))
exit(1)
try:
logging.info("Rebuilding Members")
# Loop through each _member. If add is set, add to the database
for _member in members:
this_member = members[_member]
if 'add' in this_member:
sql = """INSERT INTO members (id, member_key, first_name, last_name,website_url, email,
contact_tel,postcode, postcode_area, description, membership_type_id,
nrcpd_registered,din_member, show_profile,avatar_thumbnail_url, avatar_large_url, rnd)
VALUES (%s, %s, %s, %s, %s, %s, %s,%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)"""
member_data = (_member, this_member['member_key'], this_member['first_name'], this_member['last_name'],
this_member['website_url'], this_member['email'], this_member['contact_tel'],
this_member['billing_postcode'], this_member['postcode_area'],
this_member['description'],
this_member['membership_plan'], this_member['nrcpd_registered'],
this_member['din_member'],
this_member['show_profile'], this_member["avatar_thumbnail_url"],
this_member["avatar_full_url"], random.randint(0, 50000)
)
cur.execute(sql, member_data)
cnx.commit()
logging.info("Lookup Data")
# Do lookup data
for _member in members:
this_member = members[_member]
if 'add' in this_member:
# Additional skills first
for additional_skill in this_member['additional_skills']:
# Check if additional skill already exists
cur.execute("SELECT id FROM additional_skills WHERE name = %s", (additional_skill,))
result = cur.fetchall()
# If the additional skill doesn't already exist, create it
if cur.rowcount == 0:
cur.execute("INSERT INTO additional_skills (name) VALUES (%s)", (additional_skill,))
add_skill_id = cur.lastrowid
else:
add_skill_id = result[0]['id']
cur.execute(
"INSERT INTO member_additional_skills (member_id, additional_skills_id) VALUES (%s, %s)",
(_member, add_skill_id))
# Do work domains
for work_domain in this_member['work_domains']:
# Check if work domain already exists
cur.execute("SELECT id FROM work_domains WHERE name = %s", (work_domain,))
result = cur.fetchall()
# If the work domain doesn't already exist, create it
if cur.rowcount == 0:
cur.execute("INSERT INTO work_domains (name) VALUES (%s)", (work_domain,))
work_dom_id = cur.lastrowid
else:
work_dom_id = result[0]['id']
cur.execute("INSERT INTO member_work_domain (member_id, work_domain_id) VALUES (%s, %s)",
(_member, work_dom_id))
# Do employment status
if 'employment_status' in this_member:
for employment_status in this_member['employment_status']:
sql = """INSERT INTO member_employment_status (members_id, employment_statuses_id)
VALUES (%s, %s)"""
cur.execute(sql, (_member, employment_status))
cnx.commit()
except mysql.connector.Error as error:
print(str(datetime.utcnow()), error)
print(str(datetime.utcnow()), "MySql Error Code: ", error.errno)
print(str(datetime.utcnow()), "MySql SQLSTATE: ", error.sqlstate)
print(str(datetime.utcnow()), "MySql Error Message: ", error.msg)
logging.debug("Start Scheduled Sync")
logging.info("Finished!")
print(str(datetime.utcnow()), 'Sync finished.')
</code></pre>
<p>I am considering removing the truncate sections and updating existing data if it exists or adding entry if it doesn't exist but my python/mysql isn't too strong</p>
| <python><mysql><flask> | 2023-09-06 14:35:14 | 0 | 869 | ThurstonLevi |
77,052,912 | 1,747,834 | Does pip support local plugins? | <p>I'd like to execute certain Python code each time <code>pip</code> installs (or removes) a package.</p>
<p>Do I need to modify the existing pip-code for that, or can I simply register my add-on in <code>pip.conf</code>?</p>
| <python><python-3.x><pip> | 2023-09-06 14:31:42 | 0 | 4,246 | Mikhail T. |
77,052,891 | 22,221,987 | Prevent QChart PlotArea shifting after axis labels values changes | <p>When you zoom the plot, axis values labels changes and increase their length like:
<code>5.0->5.01->5.001->5.0001 etc</code>. This cause next behaviour:</p>
<ol>
<li>Vertical axis shifts in horizontal direction every time value label append digit in itself. This cause the whole plot area shifting. Looks kinda 'unstable'.</li>
<li>Too much zoom cause too long values. This cause lag and continues disfunction of zoom.</li>
</ol>
<p>It's not possible to fix the second problem with `y_axis.setLabelFormat('%d') because i can't just cut off values. I need to display them in necessary value-gap.</p>
<p>So, the main idea is to force chart to draw plotArea, which is already shifted enough, to fit maximum length of axis label. And to prevent shifting even here, we need to stop zooming plot area when we reach some length of the axis labels. So, it will look like this: <a href="https://i.sstatic.net/ZHvAc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZHvAc.png" alt="enter image description here" /></a>.</p>
<p>Here is the code:</p>
<pre><code>import sys
from PySide6.QtCore import Qt
from PySide6.QtGui import QPainter
from PySide6.QtWidgets import QApplication, QMainWindow, QVBoxLayout, QWidget
from PySide6.QtCharts import QChart, QChartView, QLineSeries, QValueAxis
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
self.setWindowTitle("GeeksCoders LineChart Example")
# Create the main widget and layout
widget = QWidget()
layout = QVBoxLayout(widget)
# Create the chart view and add it to the layout
chart_view = QChartView()
chart_view.setRubberBand(QChartView.RectangleRubberBand)
chart_view.setRenderHint(QPainter.Antialiasing)
layout.addWidget(chart_view)
# Create the chart and add it to the view
chart = QChart()
chart.setTitle("GeeksCoders Monthly Sales")
chart.setAnimationOptions(QChart.SeriesAnimations)
# Add data to the chart
series = QLineSeries()
series.setName("Sales Data")
sales_data = [10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65]
months = range(1, 13)
for i in range(12):
series.append(months[i], sales_data[i])
chart.addSeries(series)
# Add axes to the chart
axis_x = QValueAxis()
axis_x.setLabelFormat("%i")
axis_x.setTitleText("Month")
chart.addAxis(axis_x, Qt.AlignBottom)
series.attachAxis(axis_x)
axis_y = QValueAxis()
axis_y.setLabelFormat("%i")
axis_y.setTitleText("Sales")
chart.addAxis(axis_y, Qt.AlignLeft)
series.attachAxis(axis_y)
# Set the chart view's chart
chart_view.setChart(chart)
coordinates = chart.plotArea().getCoords()
chart.plotArea().setCoords(300, coordinates[1], coordinates[2], coordinates[3])
# Set the central widget
self.setCentralWidget(widget)
if __name__ == '__main__':
app = QApplication(sys.argv)
window = MainWindow()
window.show()
sys.exit(app.exec())
</code></pre>
<p>In 'Green right' example we need to force axes labels to expand outside plotArea, instead of 'Red' inside expansion. And we need to prevent too deep zoom. If we reach some digits count in label we stop zooming.</p>
<p>As example, <code>setFormat('%.3f')</code> doesn't help, because we just don't show the full labels, but we still zooming and real values expand.</p>
<p><strong>UPD:</strong> Thanks to @relent95, it's possible to fix plotArea left border with setting plotArea manually. But, it affects the whole chartView. Resizing doesn't work anymore. And plot area unattaches from the background. Using <code>setPlotArea(Qrect(300, 0, 0, 0))</code> to set only left border doesn't help. Behaviour in this case looks like with <code>QRect(0, 0, 0, 0)</code>.</p>
<p>Here is the code updates:</p>
<pre><code>class CustomChartView(QChartView):
def __init__(self):
super().__init__()
def set_plot_once(self):
old_plot_area = self.chart().plotArea()
# self.chart().setPlotArea(QRect(300, 0, 0, 0)) # causes default behaviour without shifting
self.chart().setPlotArea(QRect(300, old_plot_area.y(), old_plot_area.width(), old_plot_area.height()))
# This method causes fixed sizes without resizing
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
...
...
self.chart_view.setChart(chart)
self.chart_view.set_plot_once()
self.setCentralWidget(widget)
</code></pre>
| <python><python-3.x><qt><pyqt><pyside> | 2023-09-06 14:29:13 | 0 | 309 | Mika |
77,052,708 | 4,575,197 | How to get the matched groups in regex Python and save it as a new column | <p>I have a dataframe and i want to find out, if there was any mentions of the firms that i'm looking for in <code>DocumentIdentifier</code> column. probably it should be done through Regex groups, but I'm not sure and currently i use <code>contains()</code>.</p>
<p>The data looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>GKGRECORDID</th>
<th>DATE</th>
<th>SourceCommonName</th>
<th>DocumentIdentifier</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>20160101223000-T417</td>
<td>sueddeutsche.de</td>
<td>"http://www.sueddeutsche.de/wirtschaft/vw-skandal-schein-und-sein-1.2802686"</td>
</tr>
<tr>
<td>3</td>
<td>20151231060000-T360</td>
<td>focus.de</td>
<td>"http://www.focus.de/finanzen/boerse/volkswagen-skandal-im-news-ticker-vw-betriebsrat-fordert-nachhaltigkeitsbeirat-fuer-autobauer_id_5183047.html"</td>
</tr>
<tr>
<td>4</td>
<td>20151231100000-T827</td>
<td>welt.de</td>
<td>"http://www.welt.de/regionales/niedersachsen/article150494146/Osterloh-will-fuer-VW-Nachhaltigkeitsbeirat-mit-externen-Fachleuten.html"</td>
</tr>
<tr>
<td>5</td>
<td>20151231101500-T428</td>
<td>focus.de</td>
<td>"http://www.focus.de/regional/wolfsburg/auto-osterloh-will-fuer-vw-nachhaltigkeitsbeirat-mit-externen-fachleuten_id_5183279.html"</td>
</tr>
<tr>
<td>6</td>
<td>20151231140000-T543</td>
<td>focus.de</td>
<td>"http://www.focus.de/finanzen/news/wirtschaftsticker/unternehmen-osterloh-will-fuer-vw-nachhaltigkeitsbeirat-mit-externen-fachleuten_id_5183525.html"</td>
</tr>
</tbody>
</table>
</div>
<p>Using <code>contain()</code> method, which filters the data correctly, however i don't think that's the right way of finding, which firm it matches. My code looks like this:</p>
<pre><code>firm_pattern='|'.join(['adidas', 'Airbus', 'Allianz','Volkswagen','VW')
pattern = '|'.join(['welt.de','focus.de'])
results=[results[(results['DocumentIdentifier'].str.contains(f'{firm_pattern}', case=False, na=False)) &
(results['SourceCommonName'].str.contains(pattern,case=False, na=False))]
</code></pre>
<p>What i want is to find out which of the firms it matches and create as new columns as it needs like for first row in a new column like <code>Firm1</code> should be written <code>vw</code>. For second row in <code>Firm1</code> column <code>vw</code> and for <code>Firm2</code> <code>Volkswagen</code>.</p>
<p>what i found out is that i can get the groups using <code>.group()</code> method.</p>
<pre><code>re.search(r'(volkswagen)|(vw)',
'http://www.focus.de/finanzen/boerse/volkswagen-skandal-im-news-ticker-vw-betriebsrat-fordert-nachhaltigkeitsbeirat-fuer-autobauer_id_5183047.html'
).group(0) #0-1 returns volkswagen only but it should return vw as well
</code></pre>
<p>how can i improve my code?</p>
<h2>Edit:</h2>
<p>If you want to use my data copy this code:</p>
<pre><code>import pandas as pd
data = {
'GKGRECORDID': [1, 3, 4, 5, 6],
'DATE': ['20160101223000-T417', '20151231060000-T360', '20151231100000-T827', '20151231101500-T428', '20151231140000-T543'],
'SourceCommonName': ['sueddeutsche.de', 'focus.de', 'welt.de', 'focus.de', 'focus.de'],
'DocumentIdentifier': [
'http://www.sueddeutsche.de/wirtschaft/vw-skandal-schein-und-sein-1.2802686',
'http://www.focus.de/finanzen/boerse/volkswagen-skandal-im-news-ticker-vw-betriebsrat-fordert-nachhaltigkeitsbeirat-fuer-autobauer_id_5183047.html',
'http://www.welt.de/regionales/niedersachsen/article150494146/Osterloh-will-fuer-VW-Nachhaltigkeitsbeirat-mit-externen-Fachleuten.html',
'http://www.focus.de/regional/wolfsburg/auto-osterloh-will-fuer-vw-nachhaltigkeitsbeirat-mit-externen-fachleuten_id_5183279.html',
'http://www.focus.de/finanzen/news/wirtschaftsticker/unternehmen-osterloh-will-fuer-vw-nachhaltigkeitsbeirat-mit-externen-fachleuten_id_5183525.html'
]
}
df = pd.DataFrame(data)
print(df)
</code></pre>
| <python><pandas><regex><data-cleaning><string-matching> | 2023-09-06 14:09:01 | 1 | 10,490 | Mostafa Bouzari |
77,052,659 | 3,387,716 | python list comprehension: list of dicts to dict of lists with key intersection | <p>I have a list with a variable number of dictionaries, for ex:</p>
<pre class="lang-py prettyprint-override"><code>var = [ {'a': 1, 'b': 2}, {'b': 20, 'a': 10, 'c': 30}, {'c': 300, 'a': 100} ]
</code></pre>
<p>I need to extract the keys that are common to all dicts, make a list of their associated values, create a new dict out of it, and store it in the same variable:</p>
<p>The expected result would be:</p>
<pre class="lang-py prettyprint-override"><code>var = { 'a': [1, 10, 100] }
</code></pre>
<p>I can find the intersection of the keys with:</p>
<pre class="lang-bash prettyprint-override"><code>[k for k in var[0] if all(k in d for d in var[1:])]
</code></pre>
<p>But how can you do the rest of the transformation?</p>
| <python><list-comprehension> | 2023-09-06 14:02:20 | 2 | 17,608 | Fravadona |
77,052,482 | 7,862,279 | Can't connect asynchronously to opcua server | <p>I'm trying to get a value of a node in my OPCUA server.</p>
<p>I'm using <code>asyncua</code> on a windows 10 x64 computer.
The server is on a PLC.</p>
<p>When I write this in a normal task it works</p>
<pre><code>client = Client("opc.tcp://192.168.1.88:4840")
# client.max_chunkcount = 5000 # in case of refused connection by the server
# client.max_messagesize = 16777216 # in case of refused connection by the server
client.connect()
</code></pre>
<p>But when I use the basic example using an async function with the await the line,</p>
<pre><code>await client.connect()
</code></pre>
<p>Returns a <code>timeoutError</code> with <code>asyncio.exceptions.CancelledError</code> but nothing else to explain why it doesn't work.</p>
<p>I would've kept trying without the the <code>await</code> but when I try to get my value with <code>client.nodes.root.get_child([...])</code> the returned value prints <code><coroutine object Node.get_child at 0x000002C38EE45E40></code> (when it should be a simple integer or boolean) and I don't know what to do with that so I guess I should keep going with the examples.</p>
<p>Do you have any idea why <code>await client.connect()</code> return this exception ?</p>
<p>I also tried with 2 different (and built under different langages) opcua clients just to be sure it wasn't the server that was broken. And the clients can connect properly.</p>
<hr />
<p>EDIT: in case that helps, the console gives :</p>
<pre><code>disconnect_socket was called but connection is closed
close_session was called but connection is closed
close_secure_channel was called but connection is closed
disconnect_socket was called but connection is closed
Traceback (most recent call last):
File "Python311\Lib\asyncio\tasks.py", line 490, in wait_for
return fut.result()
^^^^^^^^^^^^
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "OPCgpt.py", line 28, in <module>
loop.run_until_complete(connect_to_opc_server())
File "Python311\Lib\asyncio\base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "OPCgpt.py", line 12, in connect_to_opc_server
await client.connect()
File "Python311\Lib\site-packages\asyncua\client\client.py", line 287, in connect
await self.open_secure_channel()
File "Python311\Lib\site-packages\asyncua\client\client.py", line 374, in open_secure_channel
result = await self.uaclient.open_secure_channel(params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Python311\Lib\site-packages\asyncua\client\ua_client.py", line 315, in open_secure_channel
return await self.protocol.open_secure_channel(params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Python311\Lib\site-packages\asyncua\client\ua_client.py", line 232, in open_secure_channel
await asyncio.wait_for(self._send_request(request, message_type=ua.MessageType.SecureOpen), self.timeout)
File "Python311\Lib\asyncio\tasks.py", line 492, in wait_for
raise exceptions.TimeoutError() from exc
TimeoutError
</code></pre>
<hr />
<p>EDIT 2 : Someone asked me to post my code for people to be able to reproduce it. It's very simple so the bug is pretty odd. Someone on github mentionned that I may be blocking the eventloop but I didn't go too far off from <a href="https://github.com/FreeOpcUa/opcua-asyncio/tree/master/examples" rel="nofollow noreferrer">freeopcua's examples</a>..</p>
<pre><code>import asyncio
from asyncua import Client
async def connect_to_opc_server():
# CrΓ©ez un client OPC UA
client = Client(url="opc.tcp://192.168.1.88:4840")
# client.max_chunkcount = 5000
# client.max_messagesize = 16777216
client.name = "Re_Shoes"
try:
# Connexion au serveur OPC UA
await client.connect()
# Lister tous les nΕuds enfants du nΕud racine (Object)
root_node = await client.get_root_node()
print("root node is: ", root_node)
finally:
# DΓ©connexion du serveur OPC UA
await client.disconnect()
if __name__ == "__main__":
loop = asyncio.get_event_loop()
loop.run_until_complete(connect_to_opc_server())
loop.close()
</code></pre>
<p>This is my main code but I have tried many variations of this.</p>
| <python><async-await><opc-ua> | 2023-09-06 13:41:10 | 1 | 763 | Jack |
77,052,324 | 14,073,111 | Call functions using dictionaries in python | <p>Let's say that I have this pseudo code:</p>
<pre><code>x11 = {
"exec": {
"test1": "test1"
},
"mount": {
"test2": "test2"
},
"unmount": {
"test3": "test3"
},
}
def get_exec(dict1):
return dict1["test1"]
def get_mount(dict1):
return dict1["test2"]
def get_unmount(dict1):
return dict1["test2"]
x1 = ["exec", "mount", "unmount"]
for elem in x1:
e1 = x11.get(elem)
get_e = {
"exec": get_exec(e1),
"mount": get_mount(e1),
"unmount": get_unmount(e1),
}
get_e[elem]
</code></pre>
<p>Basically, I am trying to avoid a lot of if conditions and want to use a dictionary and call the right function in each iteration. But this what i have is not working becuase then the dict is being called, it is going to each function and do operations. Basically, in my case each function is checking different keys and when <strong>e1</strong> is passed to the function (which is valid only in the "exec" case, then it is failing on other function...</p>
<p>Is there a way to have something similar which works?</p>
| <python><python-3.x><function><dictionary><methods> | 2023-09-06 13:20:50 | 3 | 631 | user14073111 |
77,052,314 | 12,725,674 | Python regex: Extract string between multiple dashes | <p>I have transcripts of speeches which have the following structure:</p>
<pre><code>--------------------------------------------------------------------------------
Person 1 [1]
--------------------------------------------------------------------------------
Text spoken by Person 1
--------------------------------------------------------------------------------
Person 2 [2]
--------------------------------------------------------------------------------
Text spoken by Person 2
--------------------------------------------------------------------------------
Person 3 [3]
--------------------------------------------------------------------------------
Text spoken by Person 3
--------------------------------------------------------------------------------
Person 2 [4]
--------------------------------------------------------------------------------
Text spoken by Person 2
--------------------------------------------------------------------------------
Person 4 [5]
--------------------------------------------------------------------------------
Text spoken by Person 4
</code></pre>
<p>I use the following regex to extract the text spoken by Person 2:</p>
<pre><code>regex_pattern= r"Person 2\s+\[\d+\]\s+-+\s+((?:.*?\n?)*(?=\s+-+|$))"
</code></pre>
<p>The regex works alright but in some instances, the written text includes two hyphens (--) which I want to account for.
For example:</p>
<pre><code>--------------------------------------------------------------------------------
Person 2 [12]
--------------------------------------------------------------------------------
Bla bla bla -- bla bla bla bla -- bla bla bla bla
--------------------------------------------------------------------------------
Person 6 [13]
--------------------------------------------------------------------------------
Bla bla bla bla bla
</code></pre>
<p>The solution has to be fairly easy but somehow I didn't manage to get it right.
Any help would be much appreciated</p>
| <python><regex> | 2023-09-06 13:20:04 | 1 | 367 | xxgaryxx |
77,052,236 | 12,888,591 | Pandas read_excel converting dates incorrectly | <p>I have a large number of separate .xlsx files I am trying to compile into one master list. There are dates in two of the columns of data but they are in inconsistent formats, like 01/03/2021 and 01.03.2021, among other things. The data is so large I am not sure of all the different formats. I want all data in this new dataframe to just appear 1:1 with the original files so I can properly assess how to standardise all the data. Currently, it appears like any date with a leading 0 and separated by a / is converted into a date and timestamp like so:</p>
<pre><code>01/11/2021 = 2021-01-11 00:00:00
</code></pre>
<p>This issue doesnt affect dates with dot seperations nor dates without the leading 0.</p>
<p>I am using this line of code to read each file into a dataframe:</p>
<pre><code>df = pd.read_excel(file_path, engine="openpyxl", skiprows=skip_rows, header=None, dtype=str, parse_dates=False)
</code></pre>
<p>As you can see, I have included the dtype=str arguments and parse_dates=False in an effort to stop these dates being converted, but it does not change anything. It is important that no data is lossed or changed from its original before I have a chance to verify correctness. How can I preserve the integrity of the original data?</p>
| <python><pandas><excel> | 2023-09-06 13:10:15 | 1 | 415 | Scott Adamson |
77,052,195 | 241,515 | Pandas: groupby by one column, select rows with the (local) minimum value in one column and the (local) maximum in another | <p>I have a <code>DataFrame</code> with this structure:</p>
<pre><code> sampleid ploidy purity error
0 1962666_A1 2 0.05 0.330501
1 1962666_A1 2 0.09 0.337803
2 1962666_A1 2 0.97 0.991373
3 1962666_A1 3 0.05 0.283901
4 1962666_A1 3 0.10 0.331267
1233 1640057_030959B2_P5 5 0.11 0.359961
1234 1640057_030959B2_P5 5 0.13 0.395113
1235 1640057_030959B2_P5 5 0.19 0.408185
1236 1640057_030959B2_P5 5 0.34 0.437057
1237 1640057_030959B2_P5 5 1.00 0.523207
</code></pre>
<p>I need to identify, for each <code>sampleid</code> (hence, requiring a groupby) the row with the <strong>largest</strong> purity but with the lowest error. It doesn't necessarily mean the absolute lowest or maximum value, see below. In addition, I need to keep whatever value is in the <code>ploidy</code> column.</p>
<p>Given the above data as an example, for the first <code>sampleid</code> the fifth row satisfies the conditions: it has 0.10 in purity, and 0.331267 error (not 0.97, because the associated error is larger than the others).</p>
<p>Aside the fact that this requires <code>groupby</code>, I've tried with different approaches but no luck.</p>
<p>For example,</p>
<p><code>df.groupby("sampleid").agg({"purity": "max", "error": "min"})</code></p>
<p>Actually shuffles the columns around (example shown for another sample):</p>
<pre><code>df.groupby("sampleid").agg({"purity": "max", "error": "min"}).loc["1962666_B2"]
purity 0.670000
error 0.680062
Name: 1962666_B2, dtype: float64
df.query("sampleid == '1962666_B2' and purity == 0.670000")
sampleid ploidy purity error
30 1962666_B2 5 0.67 0.988684
</code></pre>
<p>Other approaches I've tried involved using <code>scipy</code>'s <code>argrelextrema</code> but I could use only one condition at a time, which is incorrect (it will select either the rows with the lowest error but also the lowest purity, or the highest purity and the highest error).</p>
<p>I've looked through SO (e.g. <a href="https://stackoverflow.com/questions/49148539/pandas-select-maximum-from-one-column-and-minimum-from-another">Pandas, select maximum from one column and minimum from another</a>) , but I didn't find a question that takes on this problem explicitly, most of the solutions are for one column only or they don't fit the exact problem.</p>
<p>What would the correct approach be here?</p>
| <python><pandas><dataframe> | 2023-09-06 13:06:14 | 1 | 4,973 | Einar |
77,051,982 | 4,905,285 | Python pandas: filter columns in dataframe BEGINNING with pattern | <p>I have 2 columns called <code>eta</code> and <code>theta</code> in a pandas data frame.</p>
<p>I'm trying to select the column called <code>eta</code> only.</p>
<p>So far I have tried</p>
<pre><code>df.filter(like='eta')
</code></pre>
<p>...however this selects both <code>eta</code> and <code>theta</code>.</p>
<p>I've also tried <code>pattern</code>, but this is an R command:</p>
<pre><code>df.filter(pattern='^eta')
</code></pre>
| <python><pandas><dataframe><filter> | 2023-09-06 12:37:52 | 1 | 441 | AlexLee |
77,051,620 | 22,070,773 | nanobind trampoline method name issue | <p>I have a trampoline class that I am trying to wrap in nanobind like so:</p>
<pre><code>class PyT : T {
public:
NB_TRAMPOLINE(T, 1);
void F() override {
NB_OVERRIDE(F);
}
};
// ...
nb::class_<T, PyT>(m, "T")
.def(nb::init<>())
.def("f", &PyT::F)
</code></pre>
<p>It compiles fine, but when i go to use it in python, it only works if I call the function as <code>F()</code> despite wrapping it with the method name <code>f()</code>.</p>
<p>How do I solve this?</p>
| <python><c++><nanobind> | 2023-09-06 11:53:01 | 1 | 451 | got here |
77,051,594 | 424,556 | Inconsistent font sizes in ticks on Y axis | <p>I use Seaborn to plot violins, though I assume the bug comes from Matplotlib. I do not do anything fancy regarding the visualization itself, and merely set the font size of tick labels on the Y axis using both</p>
<pre><code>sns.set_context("notebook", font_scale=1, rc={"lines.linewidth": 1.5})
</code></pre>
<p>(to adjust globally) and</p>
<pre><code>ax.tick_params(axis='y', labelsize=19)
ax.yaxis.get_offset_text().set_fontsize(19)
</code></pre>
<p>However, the result I obtain depends on the <strong>value</strong> of the tick's label:</p>
<p><img src="https://i.imgur.com/YlLUyea.jpeg" alt="Bad font size" /></p>
<p>That is, each time the label is a perfect inverse power of 10 (here 1E-4) I get one font size, and otherwise I get another font size. I tried multiple combinations of values (e.g., changing the "font_scale", or labelsize..), but I never get something consistant. When I set different notation styles:</p>
<p>ax.yaxis.set_major_formatter(mpl.ticker.ScalarFormatter(useMathText=True, useOffset=False))</p>
<p>it only affects values that are perfect inverse powers of 10.</p>
<p>The other (maybe) relevant plotting functions I use here are:</p>
<pre><code>b = sns.violinplot(data=mydata_without_outliers, cut=0, palette=colors, scale='width')
ax.set(yscale="log")
sns.scatterplot(data=outlier_data, marker='D', facecolor='blue', edgecolor=(0,0,0), color='blue')
plt.subplots_adjust(top=0.947, bottom=0.074, left=0.052, right=0.99, hspace=0.095, wspace=0.185);
sns.despine()
</code></pre>
<p>which produce violin plots with scatterplots superimposed to show outliers.</p>
<p><strong>EDIT</strong>: To clarify, this is <strong>not</strong> a question about "how to change fonts in Matplotlib" : I could have googled that. This is <strong>not</strong> a question about "how to change minor and major ticks" because this assumes one <em>already</em> knows what minor and major ticks are. This question is really about what it is asked above : how to make fonts consistent between powers and non powers of 10, which other questions do not answer, and which the proposed answer by tmdvision perfectly answers. Googling "power of 10 font matplotlib" result in totally different answers to totally different questions, including on StackOverflow.</p>
| <python><matplotlib><seaborn> | 2023-09-06 11:48:14 | 1 | 3,336 | nbonneel |
77,051,502 | 22,070,773 | nanobind incompatible function argument types | <p>I have this basic example:</p>
<pre><code>#include <iostream>
#include <nanobind/nanobind.h>
class T {
public:
T(std::string s) {
std::cout << "hello world" << std::endl;
};
}
NB_MODULE(basic_example, m) {
nb::class_<T>(m, "T")
.def(nb::init<std::string>())
;
}
</code></pre>
<p>When I try to compile, I'm getting the error:</p>
<blockquote>
<p>TypeError: <strong>init</strong>(): incompatible function arguments. The following argument types are supported: 1. <strong>init</strong>(self, arg: std::__cxx11::basic_string<char, std::char_traits, std::allocator >, /) -> None</p>
</blockquote>
<p>I'm new to nanobind and unfamiliar with how to fix it.</p>
| <python><c++><nanobind> | 2023-09-06 11:33:49 | 1 | 451 | got here |
77,051,492 | 6,884,119 | Python: List Calendars using Google Calendar API | <p>I am trying to list my account's Google Calendars using the Calendar API, but it is returning <code>No Calendars Found</code>.</p>
<p>Below is my code:</p>
<p><code>models/cal_setup.py</code></p>
<pre><code>import pickle
import os.path
from google.oauth2 import service_account
from googleapiclient.discovery import build
from google.auth.transport.requests import Request
# You can change the scope of the application, make sure to delete token.pickle file first
SCOPES = ['https://www.googleapis.com/auth/calendar']
CREDENTIALS_FILE = 'credentials.json' # Give path to your credentials.json file
def get_calendar_service():
cred = None
'''
The file token.pickle stores the user's access and refresh tokens, and is created automatically when
the authorization flow completes for the first time. In other words when the user give access to this
channel
'''
if os.path.exists('token.pickle'):
with open('token.pickle','rb') as token:
cred = pickle.load(token)
if not cred or not cred.valid:
if cred and cred.expired and cred.refresh_token:
cred.refresh(Request())
else:
cred = service_account.Credentials.from_service_account_file(CREDENTIALS_FILE, scopes=SCOPES)
with open('token.pickle', 'wb') as token:
pickle.dump(cred, token)
service = build('calendar','v3',credentials=cred)
return service
</code></pre>
<p><code>calendar.py</code></p>
<pre><code>from models.cal_setup import get_calendar_service
def list_cal():
print("List all calendar")
service = get_calendar_service()
print('Getting list of calendars')
calendars_result = service.calendarList().list().execute()
calendars = calendars_result.get('items', [])
if not calendars:
print('No calendars found.')
for calendar in calendars:
summary = calendar['summary']
id = calendar['id']
primary = "Primary" if calendar.get('primary') else ""
print("%s\t%s\t%s" % (summary, id, primary))
list_cal()
</code></pre>
<p>I am using a key generated from the Google Cloud service account to achieve this and below is its permissions:</p>
<p><a href="https://i.sstatic.net/Mf46F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mf46F.png" alt="enter image description here" /></a></p>
<p>I have added my email account as an owner.</p>
<p>Can someone help me out?</p>
| <python><google-calendar-api><google-workspace><google-api-python-client><service-accounts> | 2023-09-06 11:32:14 | 1 | 2,243 | Mervin Hemaraju |
77,051,489 | 5,221,435 | API in my main.py flask application doesn't work after dockerizing the app on Windows | <p>I dockerized my Flask application on windows. I use Docker Desktop to run docker. In windows, to join the container you can't use localhost but the ip "192.168.99.100". So, after dockerizing app, I try to reach one API from this ip address, but the request goes in timeout:</p>
<p><a href="https://i.sstatic.net/YN90C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YN90C.png" alt="enter image description here" /></a></p>
<p>The strange thing is that I enabled debug inside my main.py file, but when I run docker-compose up, it seems that it doesnt' run the application in debug mode:
<a href="https://i.sstatic.net/g6l67.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g6l67.png" alt="enter image description here" /></a></p>
<p>My Dockerfile, in which I specified where to find my main.py app (the same path) as FLASK_APP:</p>
<pre><code>FROM python:3.8
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8080
ENV FLASK_APP=main.py
CMD ["flask", "run", "--host", "0.0.0.0"]
</code></pre>
<p>docker-compose.yaml</p>
<pre><code>version: "3"
services:
api:
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
expose:
- "8080"
volumes:
- .:/app
stdin_open: true
environment:
- FLASK_APP=main.py
- GOOGLE_APPLICATION_CREDENTIALS=cred.json
</code></pre>
<p>docker-compose.yaml and Dockerfile are in the same path.</p>
<p>main.py, where you can see at last lines that I run it in debug mode:</p>
<pre><code>import os, io
import logging
import json
import jinja2
import requests
import re
from flask import Flask, request, render_template, send_file, abort, session, redirect, jsonify, url_for
from flask_cors import CORS
from flask.helpers import make_response
from google.cloud import tasks_v2, bigquery, firestore, kms_v1, storage
import httplib2shim
import env_config
from audience_manager import AudienceManager
from datetime import datetime, timedelta
import time
import mysql.connector
import cron
from google.cloud import pubsub_v1
import google.oauth2.credentials
from googleapiclient.discovery import build
from google.auth import app_engine
import google_auth_oauthlib.flow
from exceptions import AuthenticatorException, UserVisibilityNotFoundException, InvalidEmailException, \
NoneEmailException
from jwt.exceptions import ExpiredSignatureError
from authlib.integrations.base_client import MismatchingStateError
from authlib.integrations.flask_client import OAuth
from oauth2_authentication import Oauth2Authentication
from services import db_service
from spreadsheet import Spreadsheet
from ggm import start_ggm_check, ggm_module
import secret_manager_module
from flask.globals import g
from task_module import CloudTask
import iap_service
httplib2shim.patch()
app = Flask(__name__)
@app.before_request
def before_request():
print("#### abc")
g.auth_user = "giacomo.brunetta@external.stellantis.com"
@app.route('/hello', methods=['POST'])
def ciao():
print("## hello")
return 'ok'
if __name__ == "__main__":
app.run(host='127.0.0.1', port=8080, debug=True)
</code></pre>
<p>Flask version in requirements is 2.0.1</p>
| <python><docker><flask><docker-compose><dockerfile> | 2023-09-06 11:31:57 | 2 | 1,585 | Giacomo Brunetta |
77,051,447 | 6,876,422 | Chunk a text based on regex expression with delimiter including | <p>I have a long text, about 10k characters that contain many sections. I need to chunk the text based on these sections. Every chunk should contain a section. The text template is represented by sections with titles start with "SECTION|RUBRIQUE n" where n is the number of the section.</p>
<p>This is my try:</p>
<pre><code>import re
def get_text_chunks(text):
section_pattern = r"(SECTION|RUBRIQUE) \d+: .+"
section_headings = re.findall(section_pattern, text)
chunks = re.split(section_pattern, text)
return chunks
long_text = """
This text should be ignored.
RUBRIQUE 1: IDENTIFICATION OF THE SUBSTANCE/MIXTURE AND OF THE COMPANY/UNDERTAKING
1.1. Product identifier
Product name: - SANITARY DEODORIZING DESCALING CLEANER
Product code: DUOBAC SANIT
1.2. Applicable uses of the substance or mixture and uses advised against
SANITARY HYGIENE
RUBRIQUE 2: HAZARDS IDENTIFICATION
2.1. Classification of the substance or mixture
In accordance with Regulation (EC) No. 1272/2008 and its adaptations.
Skin corrosion, Category 1B (Skin Corr. 1B, H31 4).
RUBRIQUE 2: HAZARDS IDENTIFICATION
2.2. Another Classification
Lorem Ipsum is simply dummy text of the printing and typesetting industry.
"""
chunks = get_text_chunks(long_text)
for chunk in chunks:
print(chunk)
print("-----------------------")
</code></pre>
<p>But I'm getting this output:</p>
<pre><code>This text should be ignored.
-----------------------
RUBRIQUE 1: IDENTIFICATION OF THE SUBSTANCE/MIXTURE AND OF THE COMPANY/UNDERTAKING
-----------------------
1.1. Product identifier
Product name: - SANITARY DEODORIZING DESCALING CLEANER
Product code: DUOBAC SANIT
1.2. Applicable uses of the substance or mixture and uses advised against
SANITARY HYGIENE
-----------------------
RUBRIQUE 2: HAZARDS IDENTIFICATION
-----------------------
2.1. Classification of the substance or mixture
In accordance with Regulation (EC) No. 1272/2008 and its adaptations.
Skin corrosion, Category 1B (Skin Corr. 1B, H31 4).
-----------------------
</code></pre>
<p>Instead of having this Output:</p>
<pre><code>RUBRIQUE 1: IDENTIFICATION OF THE SUBSTANCE/MIXTURE AND OF THE COMPANY/UNDERTAKING
1.1. Product identifier
Product name: - SANITARY DEODORIZING DESCALING CLEANER
Product code: DUOBAC SANIT
1.2. Applicable uses of the substance or mixture and uses advised against
SANITARY HYGIENE
-----------------------
RUBRIQUE 2: HAZARDS IDENTIFICATION
2.1. Classification of the substance or mixture
In accordance with Regulation (EC) No. 1272/2008 and its adaptations.
Skin corrosion, Category 1B (Skin Corr. 1B, H31 4).
-----------------------
</code></pre>
<p>PS: my input text does not start with SECTION|RUBRIQUE from the first line. So that first part should be ignored.</p>
| <python><regex><split> | 2023-09-06 11:26:08 | 1 | 2,330 | famas23 |
77,051,260 | 19,238,204 | How to Highlight a 3D Surface Plot slice | <p>I was wondering how to highlight a 3D surface plot (done by SymPy Plotting Backends) with the help of matplotlib (because I find an example here: <a href="https://stackoverflow.com/questions/57483209/how-do-i-highlight-a-slice-on-a-matplotlib-3d-surface-plot">How do I highlight a slice on a matplotlib 3D surface plot?</a>)
it is using matplotlib.</p>
<p>To create the surface 3D plot is done by SPB (with backend=MB) with this code:</p>
<pre><code>from sympy import *
from spb import * # pip install sympy_plot_backends
import matplotlib.pyplot as plt
init_printing()
var("t, omega, s, alpha")
expr = exp(-t) * sin(t)
res = laplace_transform(expr, t, s, noconds=True)
print(res)
colorscale = [
[0, "green"],
[0.01, "orange"],
[0.05, "red"],
[1, "red"]
]
print('The Laplace Transform plot in 3-D:')
plot3d(abs(res.subs(s, alpha + I * omega)), (alpha, -2, 2), (omega, -2, 2),
use_cm=True, colorbar=False, backend=MB)
print('The slice highlighted plot:')
#Z = abs(res.subs(s, alpha + I * omega))
#y0 = 10
#C[y0] = plt.cm.Reds_r(Z[y0]/2)
#plt.figure(figsize = (15,15))
#plt.plot((alpha, -2, 2), (omega, -2, 2),Z[y0,:], color=plt.cm.Reds(.7))
#plt.show()
</code></pre>
<p>I comment out my trial, it does not work as I want it to, this is the slice should look like from one side of alpha axis:</p>
<p><a href="https://i.sstatic.net/oGBuB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oGBuB.png" alt="1" /></a></p>
| <python><matplotlib><sympy><matplotlib-3d> | 2023-09-06 10:56:58 | 1 | 435 | Freya the Goddess |
77,051,119 | 10,811,647 | Tensorflow unable to load model | <p>I trained some models in a docker. I am trying to load the models on my host machine but I am unable to do so.</p>
<p>At first I saved the models as .keras files as mentionned on the documentation. When trying to load the model using <code>tf.keras.models.load_model("model.keras")</code>, I get the following error:</p>
<ul>
<li>OSError: Unable to open file (file signature not found)</li>
</ul>
<p>I trained the models again and saved them as .h5 files this time, which led to two different errors depending on the model I was trying to load:</p>
<ul>
<li>TypeError: <strong>init</strong>() got an unexpected keyword argument 'fn'</li>
<li>ValueError: The last dimension of the inputs to a Dense layer should be defined. Found None. Full input shape received: (None, None)</li>
</ul>
<p>I could not find any information on how to fix these errors, I would love some help!</p>
| <python><tensorflow><keras> | 2023-09-06 10:36:38 | 1 | 397 | The Governor |
77,051,111 | 17,160,160 | Linearize Pyomo model with BigM Constraint | <p>I <em>may</em> have answered my own question posted below with the following.</p>
<p>I believe the construction is now appropriately linear and returns a feasible solution. Though I would appreciate any feedback as I am new to this.</p>
<p>In replacement of the <code>model.bigM_cons</code> I outlined below, I have now implemented the following which creates a new variable (<code>model.new</code>) to represent the product of <code>model.auction*model.volume</code></p>
<p>Relevant constraints are then applied to <code>model.new</code> and the new variable is also used in the decision function:</p>
<p><strong>New BigM Constraint</strong></p>
<pre><code>model.m = Param(initialize = 100)
model.new = Var(model.WEEK_PROD_flat, within = NonNegativeIntegers)
# force new to be 0 if auction is 0
# force new to be between 0 and M if auction is 1
def c1_rule(model,w,p):
return model.new[w,p] <= model.auction[w,p]*model.m
model.c1 = Constraint(model.WEEK_PROD_flat, rule = c1_rule)
# force new to always be <= volume
def c2_rule(model,w,p):
return(model.new[w,p] <= model.volume[w,p])
model.c2 = Constraint(model.WEEK_PROD_flat, rule = c2_rule)
# force new to equal volume if auction = 1
def c3_rule(model,w,p):
return model.new[w,p] >= model.volume[w,p]-(1-model.auction[w,p])*model.m
model.c3 = Constraint(model.WEEK_PROD_flat, rule = c3_rule)
</code></pre>
<hr />
<p><strong>Original Post</strong><br />
I am constructing a model that involves 2 decision variables indexed over a sparse set of (week, product) indices (<code>model.WEEK_PROD_flat</code>):</p>
<p><code>model.volume</code> is an non-negative integer variable that quantifies the amount of product to be sold.<br />
<code>model.auction</code> is a binary variable that indicates if a particular (week, product) sale should occur.</p>
<p>As it stands, my problem is non-linear as several constraints and the objective function involve the product of <code>model.auction * model.volume</code>. Effectively, <code>model.auction</code> is being used to turn on/off <code>model.volume</code>.</p>
<p>I understand that I need to create a bigM constraint to avoid the multiplication of decision variables and to ensure that my problem formulation is linear.</p>
<p>I am trying to formulate a constraint that forces <code>model.volume</code> to equal zero if <code>model.auction</code> is equal to zero and assign a value to <code>model.volume</code> only if <code>model.auction</code> is equal to 1.</p>
<p>My current formulation of this constraint looks like this:</p>
<pre><code>model.bigM = Param(initialize = 100) # define bigM value
def bigM_rule(model,w,p):
return model.volume[w,p] <= model.auction[w,p] * model.bigM
model.bigM_cons = Constraint(model.WEEK_PROD_flat, rule = bigM_rule)
</code></pre>
<p>When applied in a simple MWE (below), things seem to function correctly. However, when the new constraint is incorporated into my complete model, an infeasible solution is returned.</p>
<p>It should be noted that this complete model, when utilising the non-linear constraint and objective function DID reach a feasible solution (though I am aware this is unlikely to be a global max). The infeasible output is only returned when applying the bigM constraint and so leads me to believe that I may formulated it incorrectly?</p>
<p>I would appreciate some insight into where I may have gone wrong here.</p>
<p><strong>Complete MWE</strong></p>
<pre><code>weekly_products = {
1: ['Q24'],
2: ['Q24', 'J24'],
3: ['Q24', 'J24','F24'],
4: ['J24', 'F24'],
5: ['F24']
}
product_weeks = {'Q24': [1, 2, 3],
'J24': [2, 3, 4],
'F24': [3, 4, 5]}
prices = {(1, 'Q24'):43.42,
(2, 'Q24'):43.73,
(2, 'J24'):24.89,
(3, 'Q24'):44.03,
(3, 'J24'):25.54,
(3, 'F24'):43.10,
(4, 'J24'):26.15,
(4, 'F24'):43.45,
(5, 'F24'):43.77}
from pyomo.environ import *
model = ConcreteModel()
# define Sets
model.WEEKS = Set(initialize = [1,2,3,4,5])
model.PRODS = Set(initialize = ['Q24','J24','F24'])
model.WEEK_PROD = Set(model.WEEKS, initialize=weekly_products)
model.WEEK_PROD_flat = Set(initialize=[(w, p) for w in model.WEEKS for p in model.WEEK_PROD[w]])
model.PROD_WEEK = Set(model.PRODS, initialize = product_weeks)
# deine Vars
model.volume = Var(model.WEEK_PROD_flat, within = NonNegativeIntegers, bounds = (0,60))
model.auction = Var(model.WEEK_PROD_flat, within = Binary)
# Define Params
model.price = Param(model.WEEK_PROD_flat, initialize = prices)
model.weekMax = Param(initialize = 1)
model.prodMax = Param(initialize = 3)
model.bigM = Param(initialize = 100)
# Define Cons
def weekMax_rule(model,i):
return sum(model.auction[i,j] for j in model.WEEK_PROD[i]) <= model.weekMax
model.weekMax_const = Constraint(model.WEEKS, rule = weekMax_rule)
def prodMax_rule(model,j):
return sum(model.auction[i,j] for i in model.PROD_WEEK[j]) <=model.prodMax
model.prodMax_const = Constraint(model.PRODS, rule = prodMax_rule)
def bigM_rule(model,w,p):
return model.volume[w,p] <= model.auction[w,p] * model.bigM
model.bigM_cons= Constraint(model.WEEK_PROD_flat, rule = bigM_rule)
# Objective function
def objective_rule(model):
return sum(model.volume[w,p] * model.price[w,p] for p in model.PRODS for w in model.PROD_WEEK[p])
model.maximiseRev = Objective(rule = objective_rule, sense = maximize)
optimizer = SolverFactory('scip')
results = optimizer.solve(model)
model.display()
</code></pre>
| <python><pyomo> | 2023-09-06 10:35:59 | 1 | 609 | r0bt |
77,050,866 | 12,129,443 | Why I am getting "Notebook out of memory" error upon notebook submission in Kaggle | <p>I am participating in a Kaggle competition. Over the past 7-10 days I have been facing a peculiar problem. I am trying to make my submission to the competition, but am getting "Notebook out of memory" error again and again as you can see in the picture.</p>
<p>I have tried a variety of things to get rid of the problem, like keeping bare minimum package installations & imports, deleting variables at various junctures and using garbage collection using gc.collect() and even reducing my training dataset to just 50 records. Nothing is working. I am running a simple, straightforward model using "facebook-wav2vec2largexlsr53".</p>
<p>Apart from this model as a dataset, I am installing "Jiwer package with Rapidfuzz" (see the bottom pic). When I SAVEALL or just RUN ALL the notebook, the kernel runs perfectly even with 10000 records and I get the result as expected, but the problem is only for submission (on pressing the SUBMIT button either from output or from the right panel of notebook). By the way I was able to make a submission around 2 weeks back.</p>
<p>What am I doing wrong? How do get rid of this problem and make a submission? This has already caused enormous wastage of time and even Kaggle's valuable platform resources. If I am not doing anything wrong and the problem is with Kaggle platform, how do I take it to their attention? Appreciate inputs.</p>
<p><a href="https://i.sstatic.net/3UfQP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3UfQP.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/RvDvE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RvDvE.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/CfWik.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CfWik.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/RSDWF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RSDWF.png" alt="enter image description here" /></a></p>
| <python><nlp><huggingface-transformers><kaggle><large-language-model> | 2023-09-06 09:55:50 | 1 | 668 | Srinivas |
77,050,847 | 1,852,526 | Python trying to read XML file with Open() to string is adding invalid characters | <p>I have the following code where I am trying to read an XML file (.csproj) into string. Problem is, when I try to do so (see <code>file_content=f.read().replace('\n','')</code>), its introducing invalid characters before the '<xml' tag like '' (see the screenshot) and I get an error that the XML is not well formed.</p>
<pre><code>def parse_csproj(repository_root, repository_name, source):
# If the file is not XML, then a ParseError
# is raised. Return None to indicate that the
# file is not a csproj file.
try:
p = repository_root / repository_name / source
with p.open() as f:
file_content=f.read().replace('\n','') #I put a breakpoint here and I see invalid characters before <xml tag
root=ET.fromstring(str(file_content))
#root = ET.parse(f).getroot()
except Exception as ex:
return None
# Extract the list of NuGet packages that are
# used by the csproj file.
refs = []
for element in root.findall('.//{*}PackageReference'): #You need to add {*} otherwise it won't work. Or you
#will need to access based on namespace. Because notice in .csproj, the root element contains a namespace.
ref = PackageReference(
ecosystem = 'NuGet',
repository_root = repository_root,
repository_name = repository_name,
source = source,
package_name = element.attrib['Include'],
version = element.attrib['Version'])
refs.append(ref)
return refs
</code></pre>
<p><a href="https://i.sstatic.net/QGM2U.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QGM2U.jpg" alt="Invalid  characters" /></a></p>
<p>This is file I am trying to read:</p>
<p><a href="https://i.sstatic.net/A56Fi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A56Fi.png" alt="file type" /></a></p>
| <python><xml><elementtree> | 2023-09-06 09:52:56 | 0 | 1,774 | nikhil |
77,050,824 | 1,928,054 | Pandas stack with level parameter | <p>Consider the following DataFrame:</p>
<pre><code>df1 = pd.DataFrame(np.ones((2, 2)), index=["A", "B"], columns=["foo", "bar"])
print(df1)
foo bar
A 1.0 1.0
B 1.0 1.0
print(df1.stack())
A foo 1.0
bar 1.0
B foo 1.0
bar 1.0
</code></pre>
<p>I want to stack this DataFrame such that the column is at level 0 instead of -1. However, when I pass this in the parameter, I seem to get the same result:</p>
<pre><code>print(df1.stack(level=0))
A foo 1.0
bar 1.0
B foo 1.0
bar 1.0
</code></pre>
<p>It seems like I'm not understanding correctly how the level parameter works. Moreover, I would like to use level parameter instead of e.g. first pivoting the DataFrame, as I am dealing with MultiIndices in practice.</p>
| <python><pandas> | 2023-09-06 09:49:24 | 1 | 503 | BdB |
77,050,751 | 5,510,713 | Unable to decode QR code using any Python libraries but works on QR scanners app | <p>Following is a QR code captured from holography experiment with highest level of error correction. I can decode the image using my native camera app on iPhone but unable to decode using any of the Python QR code decoding libraries.</p>
<p><a href="https://i.sstatic.net/P0gyV.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/P0gyV.jpg" alt="enter image description here" /></a></p>
<pre><code>#---- Using opencv ----#
import cv2
img = cv2.imread('image.jpg')
detector = cv2.QRCodeDetector()
val, pts, qr_code = detector.detectAndDecode(img)
print(val)
#---- Using zxing ----#
import zxing
reader = zxing.BarCodeReader()
result = reader.decode('image.jpg')
result_content = result.raw if result else "Unable to decode"
print(result_content)
#---- using zbarlight ----#
from PIL import Image
import zbarlight
image = Image.open('image.jpg')
image = image.convert('L')
codes = zbarlight.scan_codes(['qrcode'], image)
decoded_code = codes[0].decode('utf-8') if codes else "Unable to decode"
print(decoded_code)
</code></pre>
<p>I understand that the image has noise and pincushion distortion in it. I just don't understand how a simple iPhone camera app can decode it straight away but none of the above libraries couldn't.</p>
| <python><opencv><qr-code><zxing><zbar> | 2023-09-06 09:37:45 | 0 | 776 | DhiwaTdG |
77,050,748 | 8,510,149 | Most efficient way to collect a list of values in a column Pandas - Numpy | <p>Below I have a logic that for each row in the dataframe test collects all values for id that has level lower than the current row and shares id_group.</p>
<p>This code is very slow when I apply it to my real world dataset. Is it possible to do something similar using numpy?</p>
<pre><code>test = pd.DataFrame({'id_group':[10]*10+[11]*5,
'level':list(range(0,10)) + list(range(0,5)),
'id':[i+20 for i in list(range(0,10))]+[i+30 for i in list(range(0,5))]})
def unique_id(row):
threshold = row['level']
main_id = row['id_group']
unique_values = test[(test['level'] < threshold) & (test.id_group==main_id)]['id'].unique()
return unique_values.tolist()
test['list of ids'] = test.apply(lambda row: unique_id(row), axis=1)
</code></pre>
| <python><pandas><numpy> | 2023-09-06 09:37:30 | 1 | 1,255 | Henri |
77,050,694 | 11,578,996 | How to create custom heatmap annotations with a mask | <p>I want to only label the highest value on my heatmap, but only the first digit is showing. I don't know why. Shrinking the font doesn't seem to work. While writing this I guess ignoring the annotation variable and adding a text might work but I can't wrap my head around this for subplot :cryingface:</p>
<p>You can see what I'm getting here:</p>
<p><a href="https://i.sstatic.net/IWntf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IWntf.png" alt="enter image description here" /></a></p>
<p>Toy data generation</p>
<pre><code>np.random.seed(42)
n_rows = 10**6
n_ids = 1000
n_groups = 3
times = np.random.normal(12, 2.5, n_rows).round().astype(int) + np.random.choice([0,24,48,72,96,120,144], size=n_rows, p=[0.2,0.2,0.2,0.2,0.15,0.04,0.01])
timeslots= np.arange(168)
id_list = np.random.randint(low=1000, high=5000, size=1000)
ID_probabilities = np.random.normal(10, 1, n_ids-1)
ID_probabilities = ID_probabilities/ID_probabilities.sum()
final = 1 - ID_probabilities.sum()
ID_probabilities = np.append(ID_probabilities,final)
id_col = np.random.choice(id_list, size=n_rows, p=ID_probabilities)
data = pd.DataFrame(times[:,None]==timeslots, index=id_col)
n_ids = data.index.nunique()
data = data.groupby(id_col).sum()
data['grp'] = np.random.choice(range(n_groups), n_ids)
data
</code></pre>
<p>Copy pasta sample of the toy data:</p>
<pre><code> 0 1 2 3 4 5 6 7 8 9 ... 159 160 161 162 163 164 165 166 167 grp
1011 0 0 0 0 0 0 2 3 15 21 ... 1 1 0 0 0 0 0 0 0 1
1016 0 0 0 0 0 0 4 3 18 41 ... 2 0 0 0 0 0 0 0 0 2
1020 0 0 0 0 0 1 1 2 6 16 ... 1 1 0 0 0 0 0 0 0 0
1024 0 0 0 0 0 0 2 3 7 13 ... 0 1 1 0 0 0 0 0 0 0
1029 0 0 0 0 0 0 1 5 3 14 ... 1 0 1 0 0 0 0 0 0 1
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
4965 0 0 0 0 0 2 4 2 10 9 ... 0 1 0 0 0 0 0 0 0 1
4984 0 0 0 0 0 1 0 6 10 12 ... 0 0 0 0 0 0 0 0 0 2
4989 0 0 0 0 0 1 3 4 7 16 ... 1 1 0 0 0 0 0 0 0 0
4995 0 0 0 0 2 0 2 2 2 23 ... 0 1 0 0 0 0 0 0 0 0
4999 0 0 0 0 0 1 1 7 9 11 ... 0 0 0 0 0 0 0 0 0 2
</code></pre>
<p>My code for generating the graphs</p>
<pre><code>import seaborn as sns
import matplotlib.pyplot as plt
rows = 1
cols = n_groups
# profiles['grp'] = results
grpr = data.groupby('grp')
actual_values = []
fig, axs = plt.subplots(rows, cols, figsize=(cols*3, rows*3), sharey=True, sharex=True)
for grp, df in grpr:
plt.subplot(rows,cols,grp+1)
annot_labels = np.empty_like(df[range(168)].sum(), dtype=str)
annot_mask = df[range(168)].sum() == df[range(168)].sum().max()
actual_values.append(df[range(168)].max().max())
annot_labels[annot_mask] = str(df[range(168)].max().max())
sns.heatmap(df[range(168)].sum().values.reshape(7,-1), cbar=False, annot=annot_labels.reshape(7,-1), annot_kws={'rotation':90, 'fontsize':'x-small'}, fmt='')
ppl = df.shape[0]
journs = int(df.sum().sum()/1000)
plt.title(f'{grp}: {ppl:,} people, {journs:,}k trips')
for ax in axs.flat:
ax.set(xlabel='Hour', ylabel='Day')
ax.set_yticklabels(['M','T','W','T','F','S','S'], rotation=90)
# Hide x labels and tick labels for top plots and y ticks for right plots.
for ax in axs.flat:
ax.label_outer()
score_ch = ordered_scores['calinski_harbasz'][p]
score_si = ordered_scores['silhouette'][p]
plt.suptitle(f"Why don't these labels work? Actual values = {actual_values}")
plt.tight_layout()
plt.show()
</code></pre>
| <python><matplotlib><seaborn><heatmap><plot-annotations> | 2023-09-06 09:30:01 | 1 | 389 | ciaran haines |
77,050,566 | 11,098,908 | Procedure of a recursive function | <p>I came across this code that traces the flow of a recursive function</p>
<pre><code>def summation(lower, upper, margin): (1)
blanks = " " * margin (2)
print(blanks, lower, upper) (3)
if lower > upper: (5)
print(blanks, 0) (6)
return 0
else:
result = lower + summation(lower + 1, upper, margin + 4) (4)
print('*', blanks, result) (7)
return result
</code></pre>
<p>However, I can't understand the flow of the <strong>second half of the output</strong> in the example below</p>
<pre><code>summation(1, 4, 0)
# Output:
1 4 (from step 3)
2 4 (from step 3)
3 4 (from step 3)
4 4 (from step 3)
5 4 (from step 3)
0
4 <=== How could the function "know" to calculate the `blanks` and the `result`?
7 <=== ???
9 <=== ???
10 <=== ???
10
</code></pre>
<p>From my understanding the code was executed from step (1) to step (6), but how could it know to execute step(7) <em>repeatedly</em> with <em>correct</em> values of <code>result</code> and <code>blanks</code>?</p>
<p>Could someone please explain to me as plainly/clearly as possible?</p>
<p>Thanks.</p>
| <python><recursion> | 2023-09-06 09:13:48 | 3 | 1,306 | Nemo |
77,050,460 | 736,662 | Python/Locust and csv-file with parameters | <p>Today I have hard-coded the values used in this Locust test into the script (TSIDs).
How can I, instead, feed the values from a .csv file into the script?</p>
<pre><code>from locust import HttpUser, between, task
import random
from datetime import datetime, timedelta
server_name = "https://xxx"
TSIDs = {'128132','85624'}
def set_from_date():
<to be implemented>
return from_date
def set_to_date():
<to be implemented>
return to_date
class LoadValues(HttpUser):
host = server_name
def _run_read_ts(self, series_list, resolution, start, end):
TSIDs = ",".join(series_list)
resp = self.client.get(f'/api/loadValues?tsIds={TSIDs}&resolution=
{resolution}'f'&startUtc={set_from_date()}&endUtc{set_to_date()}',
headers={'X-API-KEY': 'AKjCg9hTcYQ='})
print("Response status code:", resp.status_code)
@task(1)
def test_get_ts_1(self):
self._run_read_ts(random.sample(TSIDs, 1), 'PT15M',
set_from_date(), set_to_date())
</code></pre>
<p>In other tools I have used the is a possiblity to use a feeder-class and refer to a .csv file sitting on the same place as the script.</p>
| <python><locust> | 2023-09-06 09:00:37 | 1 | 1,003 | Magnus Jensen |
77,050,395 | 3,535,537 | How to to burst array by timestamp dict attribute and sort by timestamp | <p>I want burst array with same <code>["content"]["system.id"]</code> or <code>["content"]["target.uid"]</code>. With <code>typeB</code>, this object contain a list of timestamp (key) or with <code>typeA</code>, this object contain "date" attribute.</p>
<p>We need keep order of timestamp (<code>typeB</code>) and date (<code>typeA</code>).</p>
<p>in a sample, <code>["content"]["system.id"]</code> or <code>["content"]["target.uid"]</code> = 123456789</p>
<p>and the list of timestamp in order is 1693217400000, 1693217450000, 1693217460000, 1693217500000 and 1693217550000</p>
<p>Init of my code is:</p>
<pre><code>products = ...
tmp = {}
for item in products :
if item["name"] == "typeB":
tmp.setdefault(p["content"]["system.id"], []).append(p)
elif item["name"] == "typeA":
tmp.setdefault(p["content"]["target.uid"], []).append(p)
out = []
for k, v in tmp.items():
for e in v:
for t in e["content"].items():
out.append(v[0])
</code></pre>
<p>My input is:</p>
<pre><code>products = [
{
"name": "typeA",
"date": 1693217460000,
"content": {
"rule.uid": "id",
"target.uid": "123456789",
"state": None,
"attributes": ""
}
},
{
"name": "typeB",
"date": 1693217443241,
"content": {
"system.id": "123456789",
"communication.id": "99",
"message.id": "b0fa2",
"1693217400000": {"c1": "0", "c2": "0"},
"1693217500000": {"c1": "0", "c2": "0"},
},
},
{
"name": "typeB",
"date": 1693217443241,
"content": {
"system.id": "000000001",
"communication.id": "99",
"message.id": "b0fa2",
"1693217400000": {"c1": "0", "c2": "0"},
"1693217500000": {"c1": "0", "c2": "0"},
},
},
{
"name": "typeB",
"date": 1693217443241,
"content": {
"system.id": "123456789",
"communication.id": "99",
"message.id": "b0fa2",
"1693217450000": {"c1": "0", "c2": "1"},
"1693217550000": {"c1": "0", "c2": "0"},
},
},
]
</code></pre>
<p>and I need this output:</p>
<pre><code>[
{
"name": "typeB",
"date": 1693217443241,
"content": {
"system.id": "123456789",
"communication.id": "99",
"message.id": "b0fa2",
"1693217400000": {"c1": "0", "c2": "0"}
},
},
{
"name": "typeB",
"date": 1693217443241,
"content": {
"system.id": "123456789",
"communication.id": "99",
"message.id": "b0fa2",
"1693217450000": {"c1": "0", "c2": "1"}
},
},
{
"name": "typeA",
"date": 1693217460000,
"content": {
"rule.uid": "id",
"target.uid": "123456789",
"state": None,
"attributes": ""
}
},
{
"name": "typeB",
"date": 1693217443241,
"content": {
"system.id": "123456789",
"communication.id": "99",
"message.id": "b0fa2",
"1693217500000": {"c1": "0", "c2": "0"},
},
},
{
"name": "typeB",
"date": 1693217443241,
"content": {
"system.id": "123456789",
"communication.id": "99",
"message.id": "b0fa2",
"1693217550000": {"c1": "0", "c2": "0"},
},
},
{
"name": "typeB",
"date": 1693217443241,
"content": {
"system.id": "000000001",
"communication.id": "99",
"message.id": "b0fa2",
"1693217400000": {"c1": "0", "c2": "0"}
},
},
{
"name": "typeB",
"date": 1693217443241,
"content": {
"system.id": "000000001",
"communication.id": "99",
"message.id": "b0fa2",
"1693217500000": {"c1": "0", "c2": "0"},
},
}
]
</code></pre>
| <python> | 2023-09-06 08:52:22 | 2 | 11,934 | StΓ©phane GRILLON |
77,050,392 | 774,133 | plotnine: how to modify colormaps | <p>Please consider this code (it is an example in the doc):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
from plotnine import *
df = pd.DataFrame({
'variable': ['gender', 'gender', 'age', 'age', 'age', 'income', 'income', 'income', 'income'],
'category': ['Female', 'Male', '1-24', '25-54', '55+', 'Lo', 'Lo-Med', 'Med', 'High'],
'value': [60, 40, 50, 30, 20, 10, 25, 25, 40],
})
df['variable'] = pd.Categorical(df['variable'], categories=['gender', 'age', 'income'])
df['category'] = pd.Categorical(df['category'], categories=df['category'])
(ggplot(df, aes(x='variable', y='value', fill='category'))
+ geom_bar(stat='identity')
)
</code></pre>
<p>that produces:</p>
<p><a href="https://i.sstatic.net/4pX7x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4pX7x.png" alt="enter image description here" /></a></p>
<p>I would like to have a monochrome image using only grays. In matplotlib I would just change the colormap.</p>
<p>I cannot understand how to do this from the (poor) plotnine documentation or from the examples I have found.</p>
<p>I tried these lines:</p>
<pre><code>theme_set(theme_bw) # and also theme_set(theme_bw()), both seem to be correct, with no effects
(ggplot(df, aes(x='variable', y='value', fill='category'))
+ geom_bar(stat='identity')
)
</code></pre>
<pre><code>(ggplot(df, aes(x='variable', y='value', fill='category'))
+ geom_bar(stat='identity')
+ theme_bw()
)
</code></pre>
<pre><code>(ggplot(df, aes(x='variable', y='value', fill='category'))
+ geom_bar(stat='identity')
+ scale_color_cmap(cmap_name='gray')
)
</code></pre>
<p>with no success.</p>
<p>Any help?</p>
<p>Second question: do you have any suggestions on resources/tutorials explaining how to use plotnine. It is a wonderful library, but it is simply too hard to go on simply trying how to combine commands...</p>
| <python><plotnine> | 2023-09-06 08:52:08 | 1 | 3,234 | Antonio Sesto |
77,050,065 | 22,466,650 | How to convert a string (one single column) to a DataFrame? | <p>My input is a string :</p>
<pre><code>text = '''10 February 2023
abc
def
23 March 2023
ghi
jkl'''
</code></pre>
<p>I made the code below (using regex but I'm open to any other alternative) :</p>
<pre><code>data = []
for m in re.finditer(r'(\d+ \w+ \w+)\n(.*)', text, flags=re.I|re.S):
data.append([m.group(1), m.group(2).splitlines()])
df = pd.DataFrame(data, columns=['date', 'letters']).explode('letters')
</code></pre>
<p>My code gives me this weird result :</p>
<pre><code> date letters
0 10 February 2023 abc
0 10 February 2023 def
0 10 February 2023 23 March 2023
0 10 February 2023 ghi
0 10 February 2023 jkl
</code></pre>
<p>While I was expecting this one :</p>
<pre><code> date letters
0 10 February 2023 abc
0 10 February 2023 def
1 23 March 2023 ghi
1 23 March 2023 jkl
</code></pre>
<p>How can I fix my code ? Also, do you have any alternatives to suggest ? I'd be very interested to learn from them.</p>
| <python><pandas> | 2023-09-06 08:06:35 | 1 | 1,085 | VERBOSE |
77,049,879 | 9,657,938 | Send unlimited args on command line with value | <p>I am trying to build a generic file for create a table on database I want from command line send something like this <code>python create_table.py table_name=test name=TEXT city=TEXT </code></p>
<p>So every time I run the file I can create a new table with new fields every field with his type</p>
<p>I figure out to use <code>sys.argv</code> and send it like this</p>
<pre><code>python create_table.py test name TEXT city TEXT ...etc
</code></pre>
<p>and take sys.arg[1] as table name and sys.argv[2] as first column and sys.argv[3] as the type of it and the other column like this with unlimited of column number every time I run the code send different number of columns.</p>
<p>My question is there is any better way than using <code>sys.argv</code> ?</p>
<p>my code now look like this</p>
<pre><code>import sqlite3
import sys
conn = sqlite3.connect('db/'+sys.argv[1]+'.db')
print ("Opened database successfully");
query = 'CREATE TABLE ' + sys.argv[2] +' ('
for i in range(3,len(sys.argv[3:]),2):
if i != 3:
query +=', '
query+= sys.argv[i] +' ' +sys.argv[i+1]
query += ')'
print(query)
conn.execute(query)
print ("Table created successfully");
conn.close()
</code></pre>
| <python><python-3.x><sqlite><command-line-arguments><argparse> | 2023-09-06 07:39:19 | 1 | 369 | Sideeg MoHammed |
77,049,753 | 2,235,673 | Handling overflow and underflow conditions in Python decimal conversions | <p>I'm working on a Python function that takes a <em>float</em> number as input and converts it to a <em>decimal</em>, then to a <em>string</em>. However, I need to restrict the number of digits in the exponent of the <em>decimal</em> representation to two, while allowing any exponent for the <em>float</em> number. Consequently, any exponent <em>>= e+100</em> should be reduced to <em>e+99</em>, and any number with exponent <em><= e-100</em> should be considered as 0. How can I achieve this in Python?</p>
<p>Currently, I'm struggling to catch when a number is <em>Overflow</em> or <em>Underflow</em>:</p>
<pre><code>from decimal import (Clamped, Context, Decimal, Inexact, Overflow, Rounded, Underflow)
import math
def format_number(value):
# Parameters
MAX_EXPONENT = 99
DYNAMODB_CONTEXT = Context(Emin=-99, Emax=99, prec=30, traps=[Clamped, Overflow, Underflow])
try:
number = str(DYNAMODB_CONTEXT.create_decimal(value))
if number in ['-Infinity', 'Infinity', 'NaN']:
print('Infinity and NaN not supported')
return(false)
except decimal.Overflow as e:
print("Overflow")
return(false)
except decimal.Underflow as e:
print("Underflow")
return(false)
print(number)
return number
format_number(-1e-322)
format_number(1e-322)
format_number(-1e322)
format_number(1e322)
</code></pre>
<p>This code returns:</p>
<pre><code>ERROR!
Traceback (most recent call last):
File "<string>", line 10, in format_number
decimal.Underflow: [<class 'decimal.Underflow'>, <class 'decimal.Clamped'>]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 23, in <module>
File "<string>", line 14, in format_number
NameError: name 'decimal' is not defined. Did you mean: 'Decimal'?
</code></pre>
| <python><python-3.x> | 2023-09-06 07:19:23 | 1 | 2,626 | Medical physicist |
77,049,740 | 3,678,257 | Redis vector search does not return required number of documents for large K | <p>I have deployed a RedisSearch server to test vector similarity search. I have followed this guide pretty much exactly, <strong>using HNSW algorithm</strong>:</p>
<ol>
<li><a href="https://github.com/openai/openai-cookbook/blob/main/examples/vector_databases/redis/getting-started-with-redis-and-openai.ipynb" rel="nofollow noreferrer">Using Redis as a Vector Database with OpenAI</a></li>
</ol>
<p>My main goal was to test RedisSearch performance for medium size datasets (~20 million documents) and large number of returned documents (~5_000 to 10_000) because ElasticSearch solution for vector similarity search was very slow once we start increasing <code>k</code> and <code>num_candidates</code> to return a large resulting set.</p>
<p>We have an embedding model and based on tests with ES it generates adequate embeddings.</p>
<p>So since we only needed to test the performance (time aspect) of Redis VSS, to save time and money on generating real embeddings with our model, I generated fake embeddings for every document by:</p>
<pre><code>list(np.random.rand(1024))
</code></pre>
<p>and then indexed 20 million documents into Redis. So now when I perform search and request 1_000 documents I do get 1_000 documents back consistently.
But requesting 5_000 documents or more, I start getting either empty sets or sets with number of documents less than specified amount.</p>
<p>This part I do not understand. Nowhere in the code I specified that I needed documents with similarity score above some specific value. I just need 5_000 or 10_000 documents with any score.</p>
<p>How can I have Redis VSS return required number of documents with any similarity score?</p>
<p>Search snippet:</p>
<pre><code>def search_redis(
redis_client: redis.Redis,
user_query: str,
index_name: str = INDEX_NAME,
vector_field: str = "embedding",
return_fields: list = ["id", "user_id", "title", "vector_score"],
hybrid_fields = "*",
k: int = 100,
):
embedded_query = get_fake_embeddings() # 1024 random floats
base_query = f'{hybrid_fields}=>[KNN {k} @{vector_field} $vector AS vector_score]'
query = (
Query(base_query)
.return_fields(*return_fields)
.sort_by("vector_score")
.paging(0, k)
.dialect(2)
)
params_dict = {"vector": np.array(embedded_query).astype(dtype=np.float32).tobytes()}
results = redis_client.ft(index_name).search(query, params_dict)
return results.docs
</code></pre>
| <python><search><redis><vector-search> | 2023-09-06 07:16:42 | 1 | 664 | ruslaniv |
77,049,671 | 3,073,612 | My MongoDB server crashes with Invariant failure | <p>I have a PHP code and Python code that accesses this MongoDB server in production running on Docker. But every now and then, the MongoDB container restarts with this log, and I am unable to find what is the root cause of this random crash. This is a standalone server with no replica sets. I tried posting in MongoDB Developer Community, and raised a ticket in their Jira, so far, no responses.</p>
<p>Here is the crash log:</p>
<pre><code>{"t":{"$date":"2023-09-06T06:30:50.517+00:00"},"s":"E", "c":"WT", "id":22435, "ctx":"conn3","msg":"WiredTiger error message","attr":{"error":22,"message":{"ts_sec":1693981850,"ts_usec":517620,"thread":"1:0x7f9c42ef9640","session_name":"WT_SESSION.get_key","category":"WT_VERB_DEFAULT","category_id":9,"verbose_level":"ERROR","verbose_level_id":-3,"msg":"__wt_txn_context_prepare_check:19:not permitted in a prepared transaction","error_str":"Invalid argument","error_code":22}}}
{"t":{"$date":"2023-09-06T06:30:50.517+00:00"},"s":"F", "c":"ASSERT", "id":23083, "ctx":"conn3","msg":"Invariant failure","attr":{"expr":"cursor->get_key(cursor, &recordId)","error":"BadValue: 22: Invalid argument","file":"src/mongo/db/storage/wiredtiger/wiredtiger_record_store.cpp","line":2552}}
{"t":{"$date":"2023-09-06T06:30:50.517+00:00"},"s":"F", "c":"ASSERT", "id":23084, "ctx":"conn3","msg":"\n\n***aborting after invariant() failure\n\n"}
{"t":{"$date":"2023-09-06T06:30:50.517+00:00"},"s":"F", "c":"CONTROL", "id":6384300, "ctx":"conn3","msg":"Writing fatal message","attr":{"message":"Got signal: 6 (Aborted).\n"}}
</code></pre>
<p>For full log of the multiple crash incidents: <a href="https://s3.selfmade.ninja/crashlog/mongo-crash-logs.zip" rel="nofollow noreferrer">https://s3.selfmade.ninja/crashlog/mongo-crash-logs.zip</a></p>
<p>I have <code>featureCompatiblityVersion</code> set to 6.0 and I am running on MongoDB 6.0.9 server. I tried to move up to MongoDB 7.0 also, but nothing changed. I tried upgrading the PHP MongoDB drivers to latest version, no use, the container restarts every now and then.</p>
<p>Here are the things I tried:</p>
<ul>
<li>I dumped all data and moved them to 6.0, 6.0.9, 7.0 servers, crash happens everywhere</li>
<li>I updated the MongoDB drivers for PHP and Python</li>
<li>I repaired the database with <code>--repair</code> and it said nothing is wrong</li>
<li>I upgraded the database with <code>--upgrade</code> and it did nothing, still crashing</li>
</ul>
<p>This restart is putting the production server stalled for over 10 seconds in random times and I have no clue what to do from here.</p>
<p><strong>Update 1:</strong></p>
<p>I have increased the verbosity of the MongoDB logs and found these interesting among the pile of logs it generated at level 3.</p>
<pre><code>{"t":{"$date":"2023-09-06T07:24:05.844+00:00"},"s":"I", "c":"COMMAND", "id":51803, "ctx":"conn1023","msg":"Slow query","attr":{"type":"command","ns":"vpn.session","command":{"find":"session","filter":{"access_token":"<hidden>"},"limit":1,"$db":"vpn","lsid":{"id":{"$uuid":"a44079c4-ebd9-4126-8aaf-bf439aecda11"}}},"planSummary":"COLLSCAN","keysExamined":0,"docsExamined":145708,"cursorExhausted":true,"numYields":145,"nreturned":1,"queryHash":"CAEA5ECC","planCacheKey":"CAEA5ECC","queryFramework":"classic","reslen":384,"locks":{"FeatureCompatibilityVersion":{"acquireCount":{"r":146}},"Global":{"acquireCount":{"r":146}},"Mutex":{"acquireCount":{"r":1}}},"storage":{},"remote":"172.18.0.1:57968","protocol":"op_msg","durationMillis":30}}
{"t":{"$date":"2023-09-06T07:24:05.844+00:00"},"s":"D2", "c":"QUERY", "id":22783, "ctx":"conn1023","msg":"Received interrupt request for unknown op","attr":{"opId":296291,"knownOps":[]}}
{"t":{"$date":"2023-09-06T07:24:05.844+00:00"},"s":"D3", "c":"-", "id":5127803, "ctx":"conn1023","msg":"Released the Client","attr":{"client":"conn1023"}}
{"t":{"$date":"2023-09-06T07:24:05.844+00:00"},"s":"D3", "c":"-", "id":5127801, "ctx":"conn1023","msg":"Setting the Client","attr":{"client":"conn1023"}}
</code></pre>
<p>We run Wireguard Manager in PHP and a new intern moved it from MySQL to MongoDB recently. He didn't set any keys for this <code>vpn.session</code> collection, and I believe everytime an access token is looked up, its considered slow query and the above log says the same connection context has some invalid operation, and I believe this may be linked to the assertion error the MongoDB is throwing, still I wonder how can it crash the MongoDB server. The collection has millions of documents without index, and that may be the cause of the race condition which threw the Invariant Failure.</p>
<p>I am not sure about this Hypothesis, but I am gonna build some index in that collection and gonna watch for a while and see if the crash is gonna happen again.</p>
| <python><php><mongodb><docker> | 2023-09-06 07:05:03 | 0 | 2,756 | Sibidharan |
77,049,610 | 10,994,166 | Spark simple join not working, work not getting properly distributed among executors | <p>Hi I have two df like this:</p>
<pre><code>df1
id items
1 [[c,b,..], [z,x,..],..]
2 [[d,t,..], [q,a,..],..]
. .
. .
schema: [('id', 'string'), ('items', 'array<array<string>>')]
Total Records: 157
Total Partition while saving: 20
Total size of data: 700mb
df2
id rand
1 a
2 b
4 h
. .
. .
Total Records: 1.5 Million
Total Partition while saving: 10
Total size of data: 200mb
</code></pre>
<p>Now I'm applying inner join using <code>id</code> col and trying to save the dataframe, but it's taking forever and when I check spark ui:</p>
<ul>
<li>I noticed that it's getting stuck after running fix number of Task,</li>
<li>Also I noticed that task is not getting properly split into executors, like most of the shuffle read size/records are split into 10 executors even though I have more executors</li>
</ul>
<p>Here's my spark config:</p>
<pre><code>spark_config["spark.executor.memory"] = "24G"
spark_config["spark.executor.memoryOverhead"] = "8G"
spark_config["spark.executor.cores"] = "24"
spark_config["spark.driver.memory"] = "10G"
spark_config["spark.sql.shuffle.partitions"] = "50"
spark_config["spark.default.parallelism"] = "50"
spark_config["spark.dynamicAllocation.enabled"] = "true"
spark_config["spark.sql.execution.arrow.pyspark.enabled"] = "true"
spark_config["spark.shuffle.service.enabled"] = "true"
spark_config["spark.dynamicAllocation.minExecutors"] = "100"
spark_config["spark.dynamicAllocation.maxExecutors"] = "500"
spark_config["spark.submit.deployMode"] = "client"
spark_config["spark.yarn.queue"] = "default"
</code></pre>
<p>I have tried <code>increasing-decreasing shufffle-partition and default-parallelism</code> also tried giving more memory to executors but nothing is working, it's getting stuck after running fix number of task, eg if partition is <code>100 then it'll get stuck at 76, if partition is 500 then 476 and if 50 then 28</code></p>
<p>I have also attached ss of executor and task from ui</p>
<p>All the task where there's Shuflle Read/records it's getting failed
<a href="https://i.sstatic.net/Yq3nt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Yq3nt.png" alt="enter image description here" /></a></p>
<p>Where there's no Shuflle Read/records it's getting su
<a href="https://i.sstatic.net/7F6KV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7F6KV.png" alt="enter image description here" /></a></p>
<p>In executors all the work is getting split into 8-10 executors where few tasks are getting failed after performing some task and other executors doesn't have any work, seems like this is the main problem.
<a href="https://i.sstatic.net/VRyDk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VRyDk.png" alt="enter image description here" /></a></p>
<p>And where task are getting failed in executors I'm getting this <code>stdout</code></p>
<pre><code>#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="kill %p"
# Executing /bin/sh -c "kill 12155"...
</code></pre>
<p>Edit:
1)PySpark 2.4.8
2) Joining code:</p>
<pre><code>df1 = df1.join(df2, how = 'inner', on = 'id')
</code></pre>
<ol start="3">
<li>I have 100 executors, I have tried adding and removing them.</li>
</ol>
| <python><scala><pyspark><memory><apache-spark-sql> | 2023-09-06 06:55:07 | 1 | 923 | Chris_007 |
77,049,559 | 90,580 | How can I generate a PBKDF2 password hash in python? | <p>I'm using some software that stores it's internal user database in a JSON file (users.json) where each user has a field called <code>password</code> where the values look like this:</p>
<pre><code>pbkdf2:10000:f30dd5755d4d172d:e7b2fdda936b4ad5335c3b76c8f3568b0e3b14ce1d9b8ca1e32b7627545d0a3811aa3407a814731b1ee1c86e108c66c1616b1ea2570f7ecf8d04d4f465c33947
</code></pre>
<p>I want to modify this <code>users.json</code> programatically from python to reset a user password, without having to go through this application UI.</p>
<p>How can I generate a <code>pbkdf2:xxxxxx</code> string for a new password?</p>
| <python><pbkdf2> | 2023-09-06 06:47:30 | 1 | 25,455 | RubenLaguna |
77,049,365 | 6,032,140 | yaml/ruamel string formatting and printing output isn't working as expected | <ol>
<li><p>I have the following python dit string format as shown below.</p>
<pre><code># With two dictionary input
dit = "{p_d: '{a:3, what:3.6864e-05, s:lion, vec_mode:'{2.5, -2.9, 3.4, 5.6, -8.9, -5.67, 2, 2, 2, 2, 5.4, 2, 2, 6.545, 2, 2}, sst:'{c:-20, b:6, p:panther}}}\n{mgbp: '{ifftp:'{ipdp:'{ipdncf:'{r_t:no, mfn:\"bbbb.txt\", r_s:-967901775, np:-1187634210}}}}"
# With single dictionary input
#1 dit = "'{p_d: '{a:3, what:3.6864e-05, s:lion, vec_mode:'{2.5, -2.9, 3.4, 5.6, -8.9, -5.67, 2, 2, 2, 2, 5.4, 2, 2, 6.545, 2, 2}, sst:'{c:-20, b:6, p:panther}}}"
</code></pre>
</li>
<li><p>And I am trying to use the ruamel.yaml to dump it as yaml output file with the below code.</p>
</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import sys
from pathlib import Path
import ruamel.yaml
path = Path('yamloutput.yaml')
#1 dit = "'{p_d: '{a:3, what:3.6864e-05, s:lion, vec_mode:'{2.5, -2.9, 3.4, 5.6, -8.9, -5.67, 2, 2, 2, 2, 5.4, 2, 2, 6.545, 2, 2}, sst:'{c:-20, b:6, p:panther}}}"
dit = "{p_d: '{a:3, what:3.6864e-05, s:lion, vec_mode:'{2.5, -2.9, 3.4, 5.6, -8.9, -5.67, 2, 2, 2, 2, 5.4, 2, 2, 6.545, 2, 2}, sst:'{c:-20, b:6, p:panther}}}\n{mgbp: '{ifftp:'{ipdp:'{ipdncf:'{r_t:no, mfn:\"bbbb.txt\", r_s:-967901775, np:-1187634210}}}}"
yaml_str = dit.replace('"', '').replace("'",'').replace(':', ': ').replace('{','[').replace('}',']')
def cleanup(d):
if isinstance(d, list):
ret_val = {}
for elem in d:
assert len(elem) == 1 and isinstance(elem, dict)
ret_val.update(elem)
return ret_val
return d
yaml = ruamel.yaml.YAML(typ='safe')
yaml.default_flow_style = False
data = yaml.load(yaml_str)
data = cleanup(data)
data['p_d'] = cleanup(data['p_d'])
data['p_d']['sst'] = cleanup(data['p_d']['sst'])
yaml.dump(data, path)
sys.stdout.write(path.read_text())
</code></pre>
<ol start="3">
<li>When executed the above code I was getting the below error.</li>
</ol>
<pre><code>File "_ruamel_yaml.pyx", line 707, in _ruamel_yaml.CParser.get_single_node
File "_ruamel_yaml.pyx", line 904, in _ruamel_yaml.CParser._parse_next_event
ruamel.yaml.parser.ParserError: did not find expected <document start>
in "<unicode string>", line 3, column 1
</code></pre>
<ol start="4">
<li>So I tried include the document start format using --- and inserted new line character \n between each dictionary in the string as shown below</li>
</ol>
<pre><code>dit = "---\n{p_d: '{a:3, what:3.6864e-05, s:lion, vec_mode:'{2.5, -2.9, 3.4, 5.6, -8.9, -5.67, 2, 2, 2, 2, 5.4, 2, 2, 6.545, 2, 2}, sst:'{c:-20, b:6, p:panther}}}\n{mgbp: '{ifftp:'{ipdp:'{ipdncf:'{r_t:no, mfn:\"bbbb.txt\", r_s:-967901775, np:-1187634210}}}}"
</code></pre>
<ol start="5">
<li><p>Still getting the same above error. But if its run with single dictionary string it works fine.</p>
</li>
<li><p>Expected Output:</p>
</li>
</ol>
<pre><code>p_d:
a: 3
s: lion
sst:
b: 6
c: -20
p: panther
vec_mode:
- 2.5
- -2.9
- 3.4
- 5.6
- -8.9
- -5.67
- 2
- 2
- 2
- 2
- 5.4
- 2
- 2
- 6.545
- 2
- 2
what: 3.6864e-05
mgbp:
ifftp:
ipdp:
ipdncf:
r_t: no
mfn: bbbb.txt
r_s: -967901775
np: -967901775
</code></pre>
<p>Query:</p>
<ol>
<li>If I am having multiple dict string to be converted into yaml output, do I need to specify any other "document start" format ? if so what is I am missing in the above code ?</li>
</ol>
<p>Please provide your comments.</p>
| <python><yaml><ruamel.yaml> | 2023-09-06 06:10:43 | 1 | 1,163 | Vimo |
77,049,291 | 8,961,082 | Apache Airflow running orphaned dag from days ago | <p>New to airflow, I was running a dag a few days ago on the scheduler. I want to kill it but it won't stop. I've uninstalled airflow and reinstalled, I have even created a new project just to start from scratch. Additionally, I have deleted all those files associated with the first project.</p>
<p>Every time I run the scheduler, after a minute of so this appears:</p>
<pre><code>schedule_job_runner.py resetting orphaned tasks for active dags
</code></pre>
<p>And my old dag from days ago begins to run again.</p>
<p>I can't find anything in the <code>airflow.db</code> or can find any place where this being stored somewhere even though I'm deleting airflow and changing projects.</p>
<p>Could someone explain what is happening.</p>
| <python><airflow><directed-acyclic-graphs><airflow-webserver> | 2023-09-06 05:52:11 | 0 | 377 | d789w |
77,049,120 | 13,238,846 | Structured Tool Agent Finish | <p>I have created a structured tool using Langchain and I want to output directly agent finish after using this tool. Is there any option in kwargs to pass to that when initlaizing the tool? Or I just have to remake the tool using Base Tool.</p>
<pre><code> Tool(
name="GetStock",
func=get_stock,
description=get_stock_description,
)
</code></pre>
| <python><agent><langchain> | 2023-09-06 05:05:14 | 3 | 427 | Axen_Rangs |
77,048,844 | 13,684,789 | How to Copy Worksheet to an Existing Sheet with gspread? | <p>In Google Sheets one is able to manually copy a worksheet in a given sheet to another sheet; I want to do this programmatically with <code>gspread</code>. For example, if I have 2 sheets in my drive--<em>spreadsheet_1</em> and <em>spreadsheet_2</em>--I want to copy worksheet <em>ws_1</em> from <em>spreadsheet_1</em> to <em>spreadsheet_2</em>.</p>
<p>I read through some of the <a href="https://docs.gspread.org/en/v5.10.0/api/models/worksheet.html#gspread.worksheet.Worksheet.copy_to" rel="nofollow noreferrer">gspread documentation</a> and found that the <code>Worksheet</code> object has a method called <code>copy_to</code> that copies a given worksheet to another sheet. I tried using it in the following code.</p>
<pre><code># import, authenticate, make gspread client
from google.colab import auth
auth.authenticate_user()
import gspread
from google.auth import default
creds, _ = default()
gc = gspread.authorize(creds)
# access worksheet ws_1 from sheet spreadsheet_1
sh_1 = gc.open('spreadsheet_1')
ws = sh_1.worksheet('ws_1')
# copy the worksheet to spreadsheet_2
sh_2 = gc.open('spreadsheet_2')
ws.copy_to(sh_2.id) # reference spreadsheet_2 with its id
</code></pre>
<p>Error:</p>
<pre><code>AttributeError Traceback (most recent call last)
<ipython-input-9-b388ead7ce3a> in <cell line: 5>()
3
4 sh_2 = gc.open('spreadsheet_2')
----> 5 ws.copy_to(sh_2.id)
AttributeError: 'Worksheet' object has no attribute 'copy_to'
</code></pre>
<p>But as per the attribute error I received, <code>Worksheet</code> doesn't actually have such a method. I tried looking further in to the docs but its really not clear how this task can be accomplished.</p>
| <python><gspread> | 2023-09-06 03:23:07 | 1 | 330 | Γbermensch |
77,048,438 | 3,121,975 | Modifying class decorator on class with property decorator | <p>I have a bunch of data classes that I'm trying to build from a dictionary of parsed XML. The keys in this XML have been obfuscated so I've had to go through tedious amounts of documentation, written in a language I have only passing familiarity with, to be able to construct this structure. Right, now I have something like this:</p>
<pre><code>@dataclass
class Record:
point_id: str
generator_id: str
generator_name: str
control: str
success: bool
amount: int
notes: str
@classmethod
def from_xml(cls, xml: dict) -> FixedGenerationRecord:
return cls(
xml["JP06400"],
value_or_empty(xml, "JP06119"),
value_or_empty(xml, "JP06120"),
xml["JP06121"],
xml["JP06122"] == "0",
int(xml["JP06123"]),
value_or_empty(xml, "JP06124"),
)
</code></pre>
<p>Needless to say, doing this for every class is tedious, so I thought of handling this automatically through a system of decorators:</p>
<pre><code>def parse_xml(cls):
class Wrapper:
def __init__(self, *args, **kwargs):
self._inst = cls(*args, **kwargs)
self.commands = []
def __getattribute__(self, attrib):
try:
obj = super().__getattribute__(attrib)
return obj
except AttributeError:
# will handle the situation below
pass
return self._inst.__getattribute__(attrib)
def parse_xml(self, xml):
for command in self.commands:
command(self._inst, xml)
return Wrapper
</code></pre>
<p>The only thing this is missing is a decorator that could be added to each field I wanted to set in the <code>parse_xml</code> method. Ideally, this decorator would create a setter for the data-class property and add it to <code>self.commands</code>. Only, I'm not sure how to do that. How do I create a property-level decorator that can modify a value in a class-level decorator?</p>
| <python><decorator> | 2023-09-06 00:44:37 | 1 | 8,192 | Woody1193 |
77,048,324 | 1,459,079 | pex built by pex doesn't run outside its virtualenv | <p><em>TL;DR</em></p>
<p>I built <code>pex</code> according to the guide <a href="https://pex.readthedocs.io/en/v2.1.145/buildingpex.html" rel="nofollow noreferrer">https://pex.readthedocs.io/en/v2.1.145/buildingpex.html</a> and was expecting it to run fine outside of the virtualenv I used to build it, but alas:</p>
<pre><code>$ ~/bin/pex
env: python3.9: No such file or directory
</code></pre>
<p><em>What I tried</em></p>
<p>I am reading pex's guide <a href="https://pex.readthedocs.io/en/v2.1.145/buildingpex.html" rel="nofollow noreferrer">https://pex.readthedocs.io/en/v2.1.145/buildingpex.html</a> and following their first "recipe" to build <code>pex</code> itself</p>
<blockquote>
<p>You can build .pex files using the pex utility, which is made available when you pip install pex.
Do this within a virtualenv, then you can use pex to bootstrap itself:</p>
</blockquote>
<ul>
<li>In an empty directory, I created a fresh <code>venv</code></li>
</ul>
<pre><code>$ python3 -m venv venv
</code></pre>
<ul>
<li>Activate the <code>venv</code></li>
</ul>
<pre><code>$ . venv/bin/activate
</code></pre>
<ul>
<li>Insatll <code>pex</code></li>
</ul>
<pre><code>(venv) tmp $ pip install pex
Collecting pex
Downloading pex-2.1.145-py2.py3-none-any.whl (2.9 MB)
|ββββββββββββββββββββββββββββββββ| 2.9 MB 4.1 MB/s
Installing collected packages: pex
Successfully installed pex-2.1.145
</code></pre>
<ul>
<li>Build <code>pex</code></li>
</ul>
<pre><code>$ pex pex requests -c pex -o ~/bin/pex
</code></pre>
<ul>
<li>Try <code>pex</code> interactively</li>
</ul>
<pre><code>$ pex
Python 3.9.6 (default, May 7 2023, 23:32:44)
[Clang 14.0.3 (clang-1403.0.22.14.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> ^D
now exiting InteractiveConsole...
</code></pre>
<ul>
<li>The documentation says:</li>
</ul>
<blockquote>
<p>you can use pex in or outside of any virtualenv.</p>
</blockquote>
<ul>
<li>Therefore, I deactivate the <code>venv</code> and then try to run <code>pex</code> again:</li>
</ul>
<pre><code>$ deactivate
$ ~/bin/pex
env: python3.9: No such file or directory
</code></pre>
<p>What am I missing?</p>
<p><em>Additional information</em></p>
<p>Platform: MacOS Ventura 13.5.1</p>
| <python><twitter><pex> | 2023-09-06 00:04:19 | 1 | 3,181 | Marcello Romani |
77,048,121 | 6,395,618 | How to format Ragged Tensor for Encoder-Decoder model? | <p>I'm working on building a <strong>seq2seq</strong> model using encoder-decoder architecture for which I have built a <code>tf.data.Dataset</code> pipeline that reads the text from the directories, vectorizes using them <code>tf.keras.layers.TextVectorization</code> and preprocess it to be fed for model training. I'm not able to format my <code>labels</code> such that it is of the shape <code>(None, seq_len, target_vocab_size)</code>. I tried using to map <code>tf.utils.to_categorical</code> to the labels but it won't work on the tensors. Strangely there is no material out there where there was a similar problem discussed. Below is my implementation:</p>
<pre><code>BUFFER_SIZE = len(articles)
BATCH_SIZE = 64
train_raw = (tf.data.Dataset
.from_tensor_slices((articles[is_train], summaries[is_train]))
.shuffle(BUFFER_SIZE)
.batch(BATCH_SIZE))
val_raw = (tf.data.Dataset
.from_tensor_slices((articles[~is_train], summaries[~is_train]))
.shuffle(BUFFER_SIZE)
.batch(BATCH_SIZE))
context_vectorizer = tf.keras.layers.TextVectorization(
standardize = tf_lower_and_split_punct,
max_tokens = MAX_VOCAB_SIZE,
ragged=True)
target_vectorizer = tf.keras.layers.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=MAX_VOCAB_SIZE,
ragged=True)
context_vectorizer.adapt(train_raw.map(lambda context, target: context))
target_vectorizer.adapt(train_raw.map(lambda context, target: target))
def preprocess_text(context, target):
context = context_vectorizer(context).to_tensor()
target = target_vectorizer(target)
target_in = target[:,:-1].to_tensor()
target_out = target[:,1:].to_tensor()
# target_out = target[:,:-1]
return (context, target_in), target_out
train_ds = train_raw.map(preprocess_text, tf.data.AUTOTUNE)
val_ds = val_raw.map(preprocess_text, tf.data.AUTOTUNE)
def encoder(hsize, embed_dim=200):
en_input_layer = Input(shape=(None,), name='encoder_input_layer', ragged=True)
en_embed = Embedding(context_vectorizer.vocabulary_size()+1, output_dim=embed_dim, name='encoder_embedding_layer')
en_embed_out = en_embed(en_input_layer)
en_gru_1 = GRU(hsize, return_sequences=True, return_state=True, name='encoder_gru_layer_1')
en_gru_1_out, en_gru_states = en_gru_1(en_embed_out)
return en_input_layer, en_gru_1_out, en_gru_states
def decoder(hsize, encoder_states, embed_dim=200):
de_input_layer = Input(shape=(None,), name='decoder_input_layer', ragged=True)
de_embed = Embedding(target_vectorizer.vocabulary_size()+1, output_dim=embed_dim, name='decode_embedding_layer')
de_embed_out = de_embed(de_input_layer)
de_gru_1 = GRU(hsize, return_sequences=True, name='decoder_gru_layer_1')
de_gru_1_out = de_gru_1(de_embed_out, initial_state=encoder_states)
de_dense = TimeDistributed(Dense(target_vectorizer.vocabulary_size(), activation='softmax'), name='time_distributed_output_layer')
de_preds = de_dense(de_gru_1_out)
return de_input_layer, de_preds
hsize = 256
def create_model(hsize):
en_input_layer, enc_out, enc_states = encoder(hsize)
de_input_layer, de_preds = decoder(hsize, enc_states)
model = Model(inputs=[en_input_layer, de_input_layer], outputs=de_preds)
model.compile(optimizer='adam', loss='categorical_crossentropy',
metrics=["acc"])
return model
### Model training
m = create_model(hsize)
history = m.fit(
train_ds.repeat(),
steps_per_epoch=100,
epochs=100,
validation_data=val_ds,
callbacks=[
tf.keras.callbacks.ModelCheckpoint('./checkpoints_trial_1',
save_weights_only=True),
tf.keras.callbacks.EarlyStopping(patience=3)])
</code></pre>
<p><strong>The model summary is below:</strong></p>
<hr />
<pre><code> Layer (type) Output Shape Param # Connected to
==================================================================================================
encoder_input_layer (Input [(None, None)] 0 []
Layer)
decoder_input_layer (Input [(None, None)] 0 []
Layer)
encoder_embedding_layer (E (None, None, 200) 437200 ['encoder_input_layer[0][0]']
mbedding)
decode_embedding_layer (Em (None, None, 200) 244200 ['decoder_input_layer[0][0]']
bedding)
encoder_gru_layer_1 (GRU) [(None, None, 256), 351744 ['encoder_embedding_layer[0][0
(None, 256)] ]']
decoder_gru_layer_1 (GRU) (None, None, 256) 351744 ['decode_embedding_layer[0][0]
',
'encoder_gru_layer_1[0][1]']
time_distributed_output_la (None, None, 1220) 313540 ['decoder_gru_layer_1[0][0]']
yer (TimeDistributed)
==================================================================================================
Total params: 1698428 (6.48 MB)
Trainable params: 1698428 (6.48 MB)
Non-trainable params: 0 (0.00 Byte)
__________________________________________________________________________________________________
</code></pre>
<p>The model compile's fine but when I run the <code>fit</code> method I get the following error:</p>
<pre><code>ValueError: Shapes (None, None) and (None, None, 1220) are incompatible
</code></pre>
<p>I'm struggling with defining the model's <code>Input</code> layers correctly, or <code>preprocess_text</code> output that would work with the model definition.</p>
| <python><tensorflow><keras><deep-learning><nlp> | 2023-09-05 22:46:08 | 1 | 2,606 | Krishnang K Dalal |
77,048,094 | 9,140 | How to do rolling() grouped by day by hour in Polars? | <p>Let's say you have a dataset with the following column, Node, Time, date, Hour, Amount. Time is spaced hourly. date is simply the current date corresponding to Time. You should have only one instance of a Node per hour.</p>
<p>Every day, for every Hour I want the average of Amount for the past 31 days in this hour. Here is the first thing I tried:</p>
<pre><code>df.rolling('Time', period='31d', group_by=['date', 'Hour']).agg(pl.col('Amount').mean())
</code></pre>
<p>I expected to get back one row per Hour per day, but I got the same amount of rows as the original df:</p>
<p><a href="https://i.sstatic.net/EDcTB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EDcTB.png" alt="enter image description here" /></a></p>
<p>What is it calculating exactly here and how can I do what I want it to do?</p>
<p>Thanks!</p>
| <python><dataframe><python-polars> | 2023-09-05 22:38:22 | 3 | 5,498 | EtienneT |
77,048,073 | 1,040,915 | How to iterate through paginated results in the Python SDK for Microsoft Graph? | <p>I'm getting to grips with the <a href="https://github.com/microsoftgraph/msgraph-sdk-python" rel="noreferrer">Python SDK</a>, having never used GraphQL before (but I'm familiar with the basic concept). I'm able to retrieve the <code>odata_next_link</code> value from responses, but I'm not sure how to use it. I note from <a href="https://learn.microsoft.com/en-us/graph/paging" rel="noreferrer">here</a> that:</p>
<blockquote>
<p>You should include the entire URL in the @odata.nextLink property in your request for the next page of results. Depending on the API that the query is being performed against, the @odata.nextLink URL value will contain either a $skiptoken or a $skip query parameter. The URL also contains all the other query parameters present in the original request. Do not try to extract the $skiptoken or $skip value and use it in a different request.</p>
</blockquote>
<p>However, I'm not sure <em>how</em> to include that URL in the next request. Currently, my queries look like <code>response = await graph_client.groups.by_group_id(group_id).transitive_members.get()</code> - I don't see an option there to change the base url. I thought I could do something like:</p>
<pre><code>query_params = GroupsRequestBuilder.GroupsRequestBuilderGetQueryParameters(
skip_token = parse_qs(urlparse(response.odata_next_link).query)['$skipToken'][0]
)
request_configuration = GroupsRequestBuilder.GroupsRequestBuilderGetRequestConfiguration(
query_parameters=query_params
)
response = await graph_client.[...].get(request_configuration)
</code></pre>
<p>but that reports <code>GroupsRequestBuilder.GroupsRequestBuilderGetQueryParameters.__init__() got an unexpected keyword argument 'skip_token'</code> (and similarly for if I try naming the parameter <code>skiptoken</code> or <code>skipToken</code>)</p>
<p>Frustratingly, there's no code example <a href="https://learn.microsoft.com/en-us/graph/sdks/paging?tabs=csharp" rel="noreferrer">here</a> - but, based on those examples, I did search the repo for an Iterator - <a href="https://github.com/search?q=repo%3Amicrosoftgraph%2Fmsgraph-sdk-python%20iterator&type=code" rel="noreferrer">with no results</a>.</p>
| <python><microsoft-graph-api><microsoft-graph-sdks> | 2023-09-05 22:33:20 | 2 | 5,977 | scubbo |
77,047,987 | 4,921,853 | ValidationError when instantiating SimpleNodeParser class from llama index | <p>Taking straight from the docs ...</p>
<pre><code>from llama_index.node_parser import SimpleNodeParser
from llama_index import SimpleDirectoryReader
# load the blogs in using the reader
docs = SimpleDirectoryReader('./data').load_data()
# chunk up the blog posts into nodes
parser = SimpleNodeParser()
nodes = parser.get_nodes_from_documents(docs)
</code></pre>
<p>Gives me this error:</p>
<pre><code>ValidationError Traceback (most recent call last)
Cell In[20], line 8
5 docs = SimpleDirectoryReader('./data').load_data()
7 # chunk up the blog posts into nodes
----> 8 parser = SimpleNodeParser()
9 nodes = parser.get_nodes_from_documents(docs)
File ~/Emily/trials/.env/lib/python3.9/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for SimpleNodeParser
text_splitter
field required (type=value_error.missing)
</code></pre>
<p>I am on an M1 Mac and in a virtual environment. How can I hunt down this error?</p>
| <python><parsing><large-language-model><llama-index> | 2023-09-05 22:03:35 | 1 | 305 | superhero |
77,047,935 | 5,378,132 | How to Create Conda Environment in AWS Lambda at Run Time? | <p>I am using an AWS lambda as part of my CI/CD pipeline. The lambda checks out some code using the provided git url and commit hash in the event. Then, it goes into the source code and does a <code>conda env create -f env.yml</code>, and then runs a python file <code>python flow.py run</code>. Unfortunately, installing and activating a conda environment is proving to be problematic as conda creates cache files in the <code>/home/sbx_user</code> directory, and AWS Lambda doesn't have option to write to any direction except <code>/tmp</code>. I get the following error as an example:</p>
<pre><code>Creating Conda environment 'torch_model'...
CondaError: Cannot write to condarc file at /home/sbx_user1051/.condarc
Caused by FileNotFoundError(2, 'No such file or directory')
CondaError: Cannot write to condarc file at /home/sbx_user1051/.condarc
Caused by FileNotFoundError(2, 'No such file or directory')
CondaError: Error encountered while attempting to create cache directory.
Directory: /home/sbx_user1051/.cache/conda/notices
Exception: [Errno 30] Read-only file system: '/home/sbx_user1051'
</code></pre>
<p>One thing I've also tried is setting <code>HOME</code> as <code>/tmp</code> in hopes that the conda environment and cache files would be installed in the <code>tmp</code> folder, which the Lambda does have write access to. But this attempt was unsuccessful as well and returned the following error:</p>
<pre><code>environment variables:
conda info could not be constructed.
KeyError('pkgs_dirs')
</code></pre>
<p>Any help/guidance would be greatly appreciated!</p>
| <python><amazon-web-services><aws-lambda><anaconda><conda> | 2023-09-05 21:48:09 | 0 | 2,831 | Riley Hun |
77,047,920 | 6,734,243 | is there a way to create and init a dir variable in one line using pathlib? | <p>I try to stick to pathlib to create and manipulate path but generating folders is always a pain. If I want to init a "toto" folder in a "tutu" folder and store the path in a variable I'm forced to do the following:</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
tutu = Path("tutu")
toto = tutu / "toto"
toto.mkdir()
</code></pre>
<p>Is there a way to generate <code>toto</code> variable AND init the folder in one line ?</p>
| <python><pathlib> | 2023-09-05 21:46:03 | 1 | 2,670 | Pierrick Rambaud |
77,047,916 | 1,448,529 | How to remove special character from excel header in pandas | <p>I'm trying to read an excel file in pandas which has special characters in its header. I have a column with name like this "Col(β€9%)". How can I remove less than equal sign here?</p>
<p>If the ingest the file as it is, it fails with below error message</p>
<pre><code>pandas._libs.lib.map_infer_mask UnicodeEncodeError: 'ascii' codec can't encode character u'\u2264' in position 39: ordinal not in range(128)
</code></pre>
| <python><pandas> | 2023-09-05 21:45:23 | 1 | 731 | hampi2017 |
77,047,749 | 1,394,763 | How to keep Processes alive in python multiprocessing so that they maintain state and can be reused? | <p>I would like to divide an expensive computation among many subprocesses that can all work on part of the problem in an embarassing parallel way. However, each subprocess needs a large amount of initial data (different for each subprocess). I would like to pass in this initial data only once when the processes are created and then be able to call each subprocess multiple times to do work with different arguments (but always with the same initial data).</p>
<p>I feel like this should be a very common situation but I can't figure out how to do it.</p>
<p>A concrete example:</p>
<pre><code>class MyWorker:
def __init__(self, data):
self.data = data #this is a large array
def dosomework(self, arg):
#does some work using self.data and the arg
#arg is cheap to pickle and send from the main process, data is not
return result
datachunks = [hugearray[i:i+chunksize] for i in range(0, len(hugearray), chunksize)] #break up huge array into chunks
# I want to create a bunch of new process and in each process initialize a MyWorker object with a different chunk of the data.
workers = [MyWorker(chunk) for chunk in datachunks] # These all live in the main process (not what I want)
# Now I want to call the workers' dosomework method in parallel many times with different args.
argstodo = [1,2,3,4,5]
results = []
for arg in argstodo:
r = [worker.dosomework(arg) for worker in workers] #want the workers to run in parallel in their own processes
results.append(r)
</code></pre>
<p>I can create all the MyWorkers in the main process and then create a <code>multiprocessing.Pool</code>. Then I can use <code>pool.map</code> and pass in each worker's <code>dosomework</code> function along with a corresponding <code>arg</code>. Something like:</p>
<pre><code>def apply(x):
func, args = x
return func(args)
with multiprocessing.Pool() as pool:
for arg in argstodo:
r = pool.map(apply, [(worker,arg) for worker in workers])
results.append(r)
</code></pre>
<p>But this seems extremely inefficient since the workers' <code>self.data</code> has to be pickled and unpickled over and over again.</p>
| <python><parallel-processing><multiprocessing><python-multiprocessing> | 2023-09-05 21:07:03 | 2 | 1,502 | Alex |
77,047,680 | 6,068,731 | imshow with x-axis as log scale is not equally-spaced | <p>I am using <code>pcolor</code> to generate the following plot (code below). It has a <code>colorbar</code> in log scale and the x-values are in log-scale too. The problem is that the rectangles in this plot have different widths (I've put a red grid to show the rectangles better, suggestion of Trenton). Is there any way in which I can make sure the width of each rectangle is the same?</p>
<p><a href="https://i.sstatic.net/gvVFV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gvVFV.png" alt="enter image description here" /></a></p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
import numpy as np
# Generate Values
x_values = np.geomspace(start=1, stop=1e-2, num=6)
y_values = np.arange(start=0, stop=50, step=4, dtype=int)
x_grid, y_grid = np.meshgrid(x_values, y_values)
z_values = np.random.randn(len(y_values), len(x_values))
fig, ax = plt.subplots()
im = ax.pcolor(x_grid, y_grid, z_values, norm=matplotlib.colors.LogNorm(), ec='r', lw=2)
ax.set_xscale('log')
fig.colorbar(im)
plt.show()
</code></pre>
| <python><numpy><matplotlib><imshow> | 2023-09-05 20:52:30 | 1 | 728 | Physics_Student |
77,047,569 | 6,734,243 | how to define a pytest plugin in a setuptools pyproject.toml? | <p>According to pytest documentation, plugins must be registered as entry points so I want ot find the pyproject.toml equivalent to:</p>
<pre><code>entry_points={
"pytest11": [
"copie = pytest_copi.plugin",
],
}
</code></pre>
| <python><pytest><pytest-fixtures> | 2023-09-05 20:30:50 | 1 | 2,670 | Pierrick Rambaud |
77,047,518 | 3,174,124 | Get the printed length of a string in terminal | <p>It seems like a fairly simple task, yet I can't find a fast and reliable solution to it.</p>
<p>I have strings in bash, and I want to know the number of characters that will be printed on the terminal. The reason I need this, is to nicely align the strings in three columns of <code>n</code> characters each. For that, I need to add as many "space" as necessary to make sure the second and third columns always starts at the same location in the terminal.</p>
<p>Example of problematic string length:</p>
<pre class="lang-bash prettyprint-override"><code>v='feΜeΜ'
echo "${#v1}"
> # 5 (should be 3)
printf '%s' "${v1}" | wc -m
> # 5 (should be 3)
printf '%s' "${v1}" | awk '{print length}'
> # 5 (should be 3)
</code></pre>
<p>The best I have found is this, that works <em>most of the time</em>.</p>
<pre class="lang-bash prettyprint-override"><code>echo "${v}" | python3 -c 'v=input();print(len(v))'
> # 3 (yeah!)
</code></pre>
<p>But sometimes, I have characters that are modified by the following sequences. I can't copy/past that here, but this is how it looks like:</p>
<pre class="lang-bash prettyprint-override"><code>v="de\314\201tresse"
echo "${v}"
> # deΜtresse
echo "${v}" | python3 -c 'v=input();print(len(v))'
> # 9 (should be 8)
</code></pre>
<p>I know it can be even more complicated with <code>\r</code> character or ANSI sequences, but I am only going to have to deal with "regular" strings that can be commonly found in filenames, documents and other file content writing by humans. Since the string IS printed in the terminal, I guess there must be some engine that knows or can know the printed length of the string.</p>
<p>I have also considered the possible solution of sending ANSI sequence to get the position of the cursor in the terminal before and after printing the string, and use the difference to compute the length, but it looks like a rabbit hole I don't want to dig. Plus it will be very slow.</p>
| <python><bash><string-length> | 2023-09-05 20:19:52 | 4 | 611 | Slagt |
77,047,492 | 1,060,747 | Tracking down cause of error in unittest/mock | <p>I'm using the Python Responses module (latest v0.23.3) alongside Requests (v2.31.0) in Python 3.11. It's part of a fairly large application, and I'm using Responses to simulate a device which I communicate with via Requests sending simple HTML GET strings e.g. "action=instruction&pin=5".</p>
<p>My standard "Device" module (containing a Device class) has a send_request method, which basically does the following:</p>
<pre><code> r = requests.post(f'http://{self.ip}:{self.node_port}/{target}',
data=data,
timeout=self.server_timeout,
headers={'Connection': 'close'})
</code></pre>
<p>where data is a dict containing the key-value pairs needed to make up those GET strings. The only other stuff in that send_request method is some logging and exception handling.</p>
<p>I then have a "Device_Sim" module (and Device_Sim class) which extends the "Device" class, and overrides the send_request method like this:</p>
<pre><code> @responses.activate
def send_request(self, target: str, data: dict):
""" Intercept calls to Device.send_request, implementing a Responses library callback for simulated responses.
All parameters are the same as in Device.send_request, and responses are simulated so should be the same.
However, rather than sending a message to the Device itself, the Responses library redirects that message to
the handle_post_requests method below, giving full control over how the Simulated Device behaves and responds
to any command.
Note that the Device will only respond if powered on
"""
if self.sim_actual_power_on:
# The callback here will override any Request library action, preventing the Request library from attempting
# communication via the network, and passing it directly to the callback method below. This is only used
# for simulated Devices - allowing easy testing of behaviours without needing "real" test Devices.
responses.add_callback(responses.POST,
url=f'http://{self.ip}:{self.node_port}/{target}',
callback=self.handle_post_requests,
content_type='text/plain')
# Once the callback is setup, call the parent send_request() method as normal to handle everything else.
super().send_request(target=target, data=data)
</code></pre>
<p>The above is the <em>only</em> place I use the Responses module.</p>
<p>When I run the code, I intermittently get an exception which creates the following Traceback:</p>
<pre><code>Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 975, in run
self._target(*self._args, **self._kwargs)
File "/Users/dave/Documents/Development/KDeHome/Python/devices.py", line 522, in send_query
return self.send_request(target='query', data=data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/responses/__init__.py", line 225, in wrapper
with assert_mock, responses:
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/unittest/mock.py", line 1564, in __exit__
if self.is_local and self.temp_original is not DEFAULT:
^^^^^^^^^^^^^
AttributeError: '_patch' object has no attribute 'is_local'. Did you mean: 'has_local'?
</code></pre>
<p>Searching for the error and issues with the Responses module have drawn a blank - it doesn't seem to be a specific issue with Responses, as far as I can tell. But I'm also struggling to see what I can have done wrong in my own code - as I said, the above is the only place I use the Responses module, and my code seems fairly standard.</p>
| <python><python-requests><python-responses> | 2023-09-05 20:13:37 | 1 | 561 | DaveWalker |
77,047,476 | 315,168 | Pandas DataFrame.write_parquet() and setting the Zstd compression level | <p>I am writing out a compressed Parquet file from <code>DataFrame</code> as following:</p>
<pre class="lang-py prettyprint-override"><code>result_df.to_parquet("my-data.parquet", compression="zstd")
</code></pre>
<p>How can I instruct Pandas on the compression level of zstd coding?</p>
| <python><pandas><parquet><zstd> | 2023-09-05 20:10:01 | 3 | 84,872 | Mikko Ohtamaa |
77,047,383 | 4,541,974 | Google Speech V2 real time streaming from mic | <p>I can't seem to find anywhere in the documentation how to use google's speech V2 API. For some reason V2 seems to be cheaper than V1 (as per <a href="https://cloud.google.com/speech-to-text/pricing" rel="nofollow noreferrer">google's speech pricing table</a> -- although I have no idea why an outdated version seems to be more expensive), and also V2 supports automatic language detection.</p>
<p>Here's a working example with V1 to "listen" from the microphone and transcribe in real time:</p>
<pre><code>import queue
import re
import sys
import os
from google.cloud import speech
import pyaudio
# Audio recording parameters
RATE = 16000
CHUNK = int(RATE / 10) # 100ms
os.environ["GOOGLE_APPLICATION_CREDENTIALS"]="key_google.json"
class MicrophoneStream:
"""Opens a recording stream as a generator yielding the audio chunks."""
def __init__(self: object, rate: int = RATE, chunk: int = CHUNK) -> None:
"""The audio -- and generator -- is guaranteed to be on the main thread."""
self._rate = rate
self._chunk = chunk
# Create a thread-safe buffer of audio data
self._buff = queue.Queue()
self.closed = True
def __enter__(self: object) -> object:
self._audio_interface = pyaudio.PyAudio()
self._audio_stream = self._audio_interface.open(
format=pyaudio.paInt16,
# The API currently only supports 1-channel (mono) audio
channels=1,
rate=self._rate,
input=True,
frames_per_buffer=self._chunk,
# Run the audio stream asynchronously to fill the buffer object.
# This is necessary so that the input device's buffer doesn't
# overflow while the calling thread makes network requests, etc.
stream_callback=self._fill_buffer,
)
self.closed = False
return self
def __exit__(
self: object,
type: object,
value: object,
traceback: object,
) -> None:
"""Closes the stream, regardless of whether the connection was lost or not."""
self._audio_stream.stop_stream()
self._audio_stream.close()
self.closed = True
# Signal the generator to terminate so that the client's
# streaming_recognize method will not block the process termination.
self._buff.put(None)
self._audio_interface.terminate()
def _fill_buffer(
self: object,
in_data: object,
frame_count: int,
time_info: object,
status_flags: object,
) -> object:
"""Continuously collect data from the audio stream, into the buffer.
Args:
in_data: The audio data as a bytes object
frame_count: The number of frames captured
time_info: The time information
status_flags: The status flags
Returns:
The audio data as a bytes object
"""
self._buff.put(in_data)
return None, pyaudio.paContinue
def generator(self: object) -> object:
"""Generates audio chunks from the stream of audio data in chunks.
Args:
self: The MicrophoneStream object
Returns:
A generator that outputs audio chunks.
"""
while not self.closed:
# Use a blocking get() to ensure there's at least one chunk of
# data, and stop iteration if the chunk is None, indicating the
# end of the audio stream.
chunk = self._buff.get()
if chunk is None:
return
data = [chunk]
# Now consume whatever other data's still buffered.
while True:
try:
chunk = self._buff.get(block=False)
if chunk is None:
return
data.append(chunk)
except queue.Empty:
break
yield b"".join(data)
def listen_print_loop(responses: object) -> None: # Changed the return type to None
num_chars_printed = 0
all_transcripts = [] # To store all transcripts
for response in responses:
if not response.results:
continue
result = response.results[0]
if not result.alternatives:
continue
transcript = result.alternatives[0].transcript
overwrite_chars = " " * (num_chars_printed - len(transcript))
if not result.is_final:
sys.stdout.write(transcript + overwrite_chars + "\r")
sys.stdout.flush()
num_chars_printed = len(transcript)
else:
print(transcript + overwrite_chars)
all_transcripts.append(transcript) # Storing the transcript
if re.search(r"\b(exit|quit)\b", transcript, re.I):
print("Exiting..")
print("All Transcripts: ", all_transcripts) # Print all transcripts if needed
break
num_chars_printed = 0
def main() -> None:
"""Transcribe speech from audio file."""
# See http://g.co/cloud/speech/docs/languages
# for a list of supported languages.
language_code = "es" # a BCP-47 language tag
client = speech.SpeechClient()
config = speech.RecognitionConfig(
encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
sample_rate_hertz=RATE,
language_code=language_code,
)
streaming_config = speech.StreamingRecognitionConfig(
config=config, interim_results=True
)
with MicrophoneStream(RATE, CHUNK) as stream:
audio_generator = stream.generator()
requests = (
speech.StreamingRecognizeRequest(audio_content=content)
for content in audio_generator
)
responses = client.streaming_recognize(streaming_config, requests)
# Now, put the transcription responses to use.
listen_print_loop(responses)
if __name__ == "__main__":
main()
</code></pre>
<p>However, I have no idea how to convert the same code to use V2 instead. <a href="https://cloud.google.com/speech-to-text/v2/docs/streaming-recognize" rel="nofollow noreferrer">Here's the only example</a> google has, but that code over there is basically to read a .wav file or any other recorded file, but I need to so it in real time. Anyone knows how to use V2 for real time stream transcribing?</p>
<p>This is my attempt (but it's not working at all):</p>
<pre><code>import os
import queue
from google.cloud.speech_v2 import SpeechClient
from google.cloud.speech_v2.types import cloud_speech
import pyaudio
# Audio recording parameters
RATE = 16000
CHUNK = int(RATE / 10) # 100ms
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "key_google.json"
class MicrophoneStream:
def __init__(self, rate, chunk):
self._rate = rate
self._chunk = chunk
self._buff = queue.Queue()
self.closed = True
def __enter__(self):
self._audio_interface = pyaudio.PyAudio()
self._audio_stream = self._audio_interface.open(
format=pyaudio.paInt16,
channels=1,
rate=self._rate,
input=True,
frames_per_buffer=self._chunk,
stream_callback=self._fill_buffer,
)
self.closed = False
return self
def __exit__(self, type, value, traceback):
self._audio_stream.stop_stream()
self._audio_stream.close()
self.closed = True
self._buff.put(None)
self._audio_interface.terminate()
def _fill_buffer(self, in_data, frame_count, time_info, status_flags):
self._buff.put(in_data)
return None, pyaudio.paContinue
def generator(self):
while not self.closed:
chunk = self._buff.get()
if chunk is None:
return
data = [chunk]
while True:
try:
chunk = self._buff.get(block=False)
if chunk is None:
return
data.append(chunk)
except queue.Empty:
break
yield b"".join(data)
def main():
project_id = "stellar-cumulus-379717"
client = SpeechClient()
recognition_config = cloud_speech.RecognitionConfig(
auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
language_codes=["en-US"],
model="long",
)
streaming_config = cloud_speech.StreamingRecognitionConfig(config=recognition_config)
config_request = cloud_speech.StreamingRecognizeRequest(
recognizer=f"projects/{project_id}/locations/global/recognizers/_",
streaming_config=streaming_config,
)
with MicrophoneStream(RATE, CHUNK) as stream:
audio_generator = stream.generator()
audio_requests = (
cloud_speech.StreamingRecognizeRequest(audio_content=content)
for content in audio_generator
)
def requests():
yield config_request
yield from audio_requests
responses = client.streaming_recognize(requests=requests())
for response in responses:
for result in response.results:
print(f"Transcript: {result.alternatives[0].transcript}")
if __name__ == "__main__":
main()
</code></pre>
<p>I am getting a bunch of errors:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "C:\Python3\Lib\site-packages\google\api_core\grpc_helpers.py", line 162, in error_remapped_callable
return _StreamingResponseIterator(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python3\Lib\site-packages\google\api_core\grpc_helpers.py", line 88, in __init__
self._stored_first_result = next(self._wrapped)
^^^^^^^^^^^^^^^^^^^
File "C:\Python3\Lib\site-packages\grpc\_channel.py", line 541, in __next__
return self._next()
^^^^^^^^^^^^
File "C:\Python3\Lib\site-packages\grpc\_channel.py", line 967, in _next
raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.UNKNOWN
details = "Exception iterating requests!"
debug_error_string = "None"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\AI_workers\pocketchat\stt2.py", line 98, in <module>
main()
File "D:\AI_workers\pocketchat\stt2.py", line 90, in main
responses = client.streaming_recognize(requests=requests())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python3\Lib\site-packages\google\cloud\speech_v2\services\speech\client.py", line 1639, in streaming_recognize
response = rpc(
^^^^
File "C:\Python3\Lib\site-packages\google\api_core\gapic_v1\method.py", line 113, in __call__
return wrapped_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python3\Lib\site-packages\google\api_core\grpc_helpers.py", line 166, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.Unknown: None Exception iterating requests!
</code></pre>
| <python><google-speech-api><google-speech-to-text-api> | 2023-09-05 19:51:37 | 1 | 1,654 | Luis Cruz |
77,047,372 | 6,484,726 | FastAPI, how to block requests until specific one completes? | <p>My path operator in FastAPI making API calls.
External API is protected and I have to provide Bearer token in my requests.
In my application I check whether current Bearer token has expired and issue a new token, if so.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI
app = FastAPI()
class Token:
def expired(self) -> bool:
...
def __str__(self) -> str:
...
def get_token() -> Token:
...
async def issue_new_token() -> Token:
...
async def touch_external_api(headers: dict):
...
@app.post('/process/{task_id}')
async def handler(task_id):
token = get_token()
if token.expired():
token = await issue_new_token()
await touch_external_api(headers={'Authorization': f'Bearer {token}'})
</code></pre>
<p>How to ensure that when multiple concurrent requests hit my path operator and <code>token.expired()</code> returns <code>True</code> only the first one will call <code>issue_new_token</code> and others will wait for it to complete? Or how can I ensure that issuing new token will be done only once?
Is preemptive update in the background task seems like a solution?</p>
<p>I'm thinking about using asyncio primitives but it seems hackish and I haven't got an opportunity to test it:</p>
<pre class="lang-py prettyprint-override"><code>token_lock = asyncio.Lock()
async def issue_new_token() -> Token:
...
async def issue_token() -> Token:
if not token_lock.locked:
await token_lock.acquire()
token = await issue_new_token()
token_lock.release()
else:
await token_lock.acquire() # wait until coroutine whish issues new token releases
token_lock.release()
token = get_token()
return token
@app.post('/process/{task_id}')
async def handler(task_id):
token = get_token()
if token.expired():
token = await issue_token()
await touch_external_api(headers={'Authorization': f'Bearer {token}'})
</code></pre>
| <python><python-asyncio><fastapi> | 2023-09-05 19:49:08 | 1 | 398 | hardhypochondria |
77,047,030 | 1,903,852 | Combining time and time zone columns in data frame | <p>I've got a csv file with trip data. Each row specifies where the trip started (<code>origin</code>), where the trip ended (<code>destination</code>), as well as the starting time (<code>depart_time</code>) and ending time of the trip (<code>arrival_time</code>). The <code>depart_time</code> and <code>arrival_time</code> are in <strong>local</strong> times. Columns <code>depart_time_zone</code> and <code>arrival_time_zone</code> specify the corresponding time zones of the origin and destination respectively.</p>
<p>Example data:</p>
<pre><code>origin,destination,depart_time,depart_time_zone,arrival_time,arrival_time_zone
A,B,04/04/2023 08:01,GMT-04:00,04/04/2023 09:02,GMT-04:00
C,D,04/04/2023 09:20,GMT-04:00,04/04/2023 16:48,GMT-05:00
</code></pre>
<p>I'd like to combine the time and time_zone columns. For that I used the following function:</p>
<pre><code>data = pd.read_csv('trips.csv', parse_dates={'depart' : ['depart_time','depart_time_zone'],'arrive' : ['arrival_time','arrival_time_zone']})
</code></pre>
<p>This seems to accomplish my goal. However, notice from the first line in this CSV that the <code>depart_time=04/04/2023 08:00</code> and <code>depart_time_zone=GMT-04:00</code>
When looking at the output of the above command, I get: <code>depart=2023-04-04 08:00:00+04:00</code>. Notice the <code>-04</code> changed into <code>+04</code>. This seems incorrect?</p>
| <python><python-3.x><dataframe> | 2023-09-05 18:46:36 | 1 | 2,431 | Joris Kinable |
77,046,935 | 7,766,155 | Get word frequencies from a pretrained word2vec model in gensim | <p>I am trying to get the CBOW represention (word frequencies) out of a pretrained word2vec model in gensim. I know that the word2vec model I am using was originally trained using CBOW so the word2vec object must store word frequencies somewhere but I cannot find it. I did read the documentation but did not help at all, or maybe I am misunderstanding something.</p>
<p>Edit: If there's no way to retrieve the word frequencies except by having the original dataset, please tell me straight away.</p>
| <python><nlp><gensim> | 2023-09-05 18:29:56 | 0 | 301 | AmirWG |
77,046,872 | 8,126,390 | Python asyncio how to detect unexpected exceptions | <p>I'm building a project on top of Django Channels and as such, I would like to avoid mucking with code above my sub-classed consumer.</p>
<p>With asyncio, uncaught and unexpected exceptions aren't being thrown to the console even with asyncio debug enabled, and Python dev mode enabled.</p>
<p>I came up with the following so that async code I develop can be wrapped in a task, and automatically capture exceptions, log them, and so on:</p>
<pre><code>async def _task_monitoring(self):
while True:
for task in self._task_list:
if task.done():
self._task_list.remove(task)
try:
_ = task.result()
except ConnectionClosedError:
self.log.warning(traceback.format_exc())
self.user_warning(f'remote connection disconnected unexpectedly')
continue
except ConnectionClosedOK:
self.user_warning(f'Remote connection shutdown')
continue
except asyncio.exceptions.CancelledError:
self.log.warning(f'task {task.get_name()} was cancelled')
continue
except Exception as e:
self.log.error(traceback.format_exc())
raise e
</code></pre>
<p>when I use <code>asyncio.create_task</code> I add the resulting task to <code>self._task_list</code>. I know this probably isn't the best use of resources, but it's what I have to work with right now.</p>
<p>My main question concerns the blanket <code>except Exception as e:</code> which is not good practice I know, but is the best way I can be sure to see the exception from a coroutine in instances where I don't otherwise expect one.</p>
<p>Is there a better way to handle unexpected exceptions in asyncio coroutines?</p>
| <python><python-asyncio> | 2023-09-05 18:17:33 | 1 | 740 | Brian |
77,046,862 | 12,417,488 | Python pandas read file with delimiters inside the data? | <p>I regularly have to read text files as such:</p>
<pre><code>Column1; Column2; Column3
data1;dat;a2;data3
</code></pre>
<p>Meaning that the data in Column 2 uses the same character used as the delimiter, therefore pandas incorrectly reads this data as two columns. This is just an illustrative example, this is not the real data.</p>
<p>How do I efficiently overcome this? I don't know a way besides manually editing the text file, but this becomes highly inefficient very fast.</p>
<p>Edit:
Best preemptive knowledge I can have is predicting which columns are likely to have delimiters. For example, I could realistically guess beforehand that column 2 will have delimiters in the text.</p>
| <python><pandas><csv><text> | 2023-09-05 18:15:31 | 1 | 663 | PJ_ |
77,046,810 | 7,347,774 | Is it possible to use mlxtend's fpgrowth() in Snowpark without transforming data to Pandas DF? | <p>I've been trying to get the Market Basket Analysis done with FP-Growth algorithm with <code>fpgrowth</code> function from <code>mlxtend</code> library available in Snowpark.</p>
<p>It works with smaller datasets but fails for the whole dataset (over 4 million rows and 6000 columns of one-hot encoding). It exits with error:</p>
<blockquote>
<p>Function available memory exhausted. Please visit <a href="https://docs.snowflake.com/en/developer-guide/udf/python/udf-python-designing.html#memory" rel="nofollow noreferrer">https://docs.snowflake.com/en/developer-guide/udf/python/udf-python-designing.html#memory</a> for help.</p>
</blockquote>
<p>The log says that the Python sandbox max memory usage was 80 GB.
I tried Snowpark-optimized XL and 2XL warehouses but it failed with the same max memory usage.</p>
<p>Is it possible to provide Snowpark DataFrame to mlxtend's fpgrowth function without converting it first to Pandas DF which makes it fail because of the data size?</p>
| <python><snowflake-cloud-data-platform><market-basket-analysis><mlxtend><fpgrowth> | 2023-09-05 18:03:55 | 1 | 1,055 | Piotr K |
77,046,693 | 2,856,499 | Exception in Python breaks for loop despite putting in continue/pass | <p>Im writing a python code that loops file directories and sub-directories to extract word doc file and write the content of word doc to excel/csv files. If some files are invalid (for example they have XML, the loop would skip)</p>
<p>The issue now is I managed to get it more or less working, but for some reason, my for loop is breaking, and I can't continue.</p>
<p>So for example: In this file path, I have 6 word documents. 1 of these documents is not valid because it is a XML file. The rest of the 5 word documents are valid and I expect 5 file content to be saved into a csv file. However, in my current final.csv, I only have 4 word doc content being saved into the csv file. I can only assume that when the exception happens it does not continue. However, I already placed a pass inside my exception.</p>
<p>Is my understanding not correct in which pass does not continue the loop?</p>
<p>My code looks like this below</p>
<pre><code>import pandas as pd
import numpy as np
import docx2txt
import csv
import re
import os
def read_files():
list_of_lists = []
full_path =""
rootdir = '/Users/xxx/Documents/github/xxxx/xxxxx/Financial Statements/'
for subdir, dirs, files in os.walk(rootdir):
try:
for file in files:
print(os.path.join(subdir, file))
content = [" "] * 22
if file.endswith(".docx") or file.endswith(".doc"):
full_path = os.path.join(subdir, file)
print(full_path)
text = docx2txt.process(full_path)
s = " "
for line in text.splitlines():
#This will ignore empty/blank lines.
if line != '':
s = s+ " " + line
s = s.replace(",", "")
s = " ".join(s.split())
s = s.strip()
content.insert(6, s)
content.insert(0, subdir)
content.insert(1, file)
list_of_lists.append(content)
else:
print("AAAA")
#print(list_of_lists)
except Exception as error:
print(full_path)
f = open("exception.txt", "a")
f.write(full_path + "\n")
f.write(str(error) + "\n")
f.close()
print("An exception occurred", error)
pass
finally:
df = pd.DataFrame(list_of_lists)
#print(df)
df.to_csv('final.csv', encoding='utf-8')
read_files()
</code></pre>
| <python> | 2023-09-05 17:44:09 | 1 | 1,315 | Adam |
77,046,635 | 4,260,095 | JSON Schema referencing in array items from another file | <p>I have two JSON schemas - <code>publisher</code> and <code>article</code>, such that there could be multiple articles in a single publisher.</p>
<p>This is the <code>article</code> schema (in <code>a.json</code>):</p>
<pre class="lang-json prettyprint-override"><code>{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"title": "Article",
"type": "object",
"properties": {
"aid": {
"type": "integer"
},
"author": {
"type": "string"
},
"title": {
"type": "string"
}
},
"required": ["aid", "author", "title"]
}
</code></pre>
<p>And I'm trying to reference this in <code>publisher</code> schema as below (in <code>p.json</code>, in the same dir):</p>
<pre class="lang-json prettyprint-override"><code>{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"title": "Publisher",
"type": "object",
"properties": {
"pid": {
"type": "integer"
},
"name": {
"type": "integer"
},
"lang": {
"type": "string"
},
"articles": {
"type": "array",
"uniqueItems": true,
"items": {
"type": "object",
"$ref": "./a.json"
}
}
},
"required": ["pid", "articles"]
}
</code></pre>
<p>I would ideally want the data to raise error if any of the articles do not have the required fields as mentioned in <code>a.json</code>. But this does not seem to happen:</p>
<pre class="lang-py prettyprint-override"><code>import json, jsonschema
schema = json.load(open("p.json"))
data = {
"pid": 1,
"articles": [
{
"aid": 100,
"title": "test",
}
]
}
jsonschema.validate(data, schema)
</code></pre>
<p>The last line should raise an exception, as <code>article</code> does not have the <code>author</code> key that is marked as a requierd field in <code>a.json</code>.</p>
<p>But somehow, it raises <code>_WrappedReferencingError: Unresolvable: ./a.json</code> exception.</p>
<p>How can I reference <code>a.json</code> here such that it raise the correct exception?</p>
| <python><json><jsonschema><python-jsonschema> | 2023-09-05 17:35:01 | 2 | 955 | Shod |
77,046,624 | 7,200,745 | How to filter messages from Kafka based on headers value in AWS lambda? | <p>I have a message coming from Kafka to the lambda in the format:</p>
<pre><code>{
"eventSource":"aws:kafka",
"eventSourceArn":"arn:aws:kafka:sa-east-1:123456789012:cluster/vpc-2priv-2pub/751d2973-a626-431c-9d4e-d7975eb44dd7-2",
"bootstrapServers":"b-2.demo-cluster-1.a1bcde.c1.kafka.us-east-1.amazonaws.com:9092,b-1.demo-cluster-1.a1bcde.c1.kafka.us-east-1.amazonaws.com:9092",
"records":{
"mytopic-0":[
{
"topic":"mytopic",
"partition":0,
"offset":15,
"timestamp":1545084650987,
"timestampType":"CREATE_TIME",
"key":"abcDEFghiJKLmnoPQRstuVWXyz1234==",
"value":"SGVsbG8sIHRoaXMgaXMgYSB0ZXN0Lg==",
"headers":[
{
"headerKey":[
104,
101,
97,
100,
101,
114,
86,
97,
108,
117,
101
]
}
]
}
]
}
}
</code></pre>
<p>here in the value of headerkey the string 'headerValue' is stored. you can see it if you decode using the python script:</p>
<pre><code>bytes(h).decode()
</code></pre>
<p>It is achieved by triggering AWS lambda function by self-managed Kafka trigger. I want to add filter messages to minimize number of not necessary runnings of heavy lambda.
So I want to add filter t the trigger: headers["headerKey"] = "headerValue" and send <strong>only such values</strong> to the lambda.
I found some docs on filtering:</p>
<p><a href="https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html#filtering-syntax" rel="noreferrer">https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html#filtering-syntax</a>
<a href="https://docs.aws.amazon.com/lambda/latest/dg/with-kafka.html" rel="noreferrer">https://docs.aws.amazon.com/lambda/latest/dg/with-kafka.html</a></p>
<p>it works well for filtering strings, not array because how I understand headers are always given as a list of integers and it is not clear how to it. Help me to solve the problem, please!</p>
<p>P.S.
I found message exmaple from here: <a href="https://docs.aws.amazon.com/lambda/latest/dg/with-msk.html" rel="noreferrer">https://docs.aws.amazon.com/lambda/latest/dg/with-msk.html</a></p>
| <python><amazon-web-services><apache-kafka><aws-lambda> | 2023-09-05 17:33:52 | 2 | 444 | Eugene W. |
77,046,591 | 14,141,072 | Discord.py : Slash Command working but simultaneously giving and error | <p>I have just come back to coding in Discord.py and I have this error.
<a href="https://i.sstatic.net/bel6Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bel6Q.png" alt="enter image description here" /></a></p>
<p>So basically I am unable to figure out the reason. The code is here:</p>
<pre><code>@cerium.slash_command(name="avatar", description="get the avatar for the user you want to")
async def avatar(ctx, *, member : discord.Member = None):
await ctx.send(member.display_avatar)
</code></pre>
| <python><discord.py> | 2023-09-05 17:27:18 | 1 | 831 | Bhavyadeep Yadav |
77,046,536 | 13,757,692 | Split numpy array into segments where condition is met | <p>I have an array like so:</p>
<pre><code>arr = np.array([1, 2, 3, 4, -5, -6, 3, 5, 1, -2, 5, -1, -1, 10])
</code></pre>
<p>I want to get rid of all negative values, and split the array at each index where there was a negative value. The result should look like this:</p>
<pre><code>split_list = [[1, 2, 3, 4], [3, 5, 1], [5], [10]]
</code></pre>
<p>I know how to do this using list comprehension, but since the array can get quite large and I have to do the calculation many times, I want to find a solution using numpy. I found this <a href="https://www.geeksforgeeks.org/python-split-list-into-lists-by-particular-value/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/python-split-list-into-lists-by-particular-value/</a>, which I can use to split the array where there are negative values, but I can't simultaneously remove them.</p>
| <python><arrays><numpy> | 2023-09-05 17:20:35 | 6 | 466 | Alex V. |
77,046,517 | 11,500,371 | How to I create columns that are named based on the results of a groupby and matching another condition? | <p>I am trying to create three new columns ("Product 1 Price", "Product 2 Price", and "Product 3 Price") from an existing column "Product Price". The order of the product numbers is important and is based on three existing values in the dataframe (columns Product 1, Product 2, and Product 3).</p>
<p>Here is how the dataframe currently looks:</p>
<pre><code>df = pd.DataFrame(columns=['Product Name','Product Price','Campaign Name','Campaign Start Date','Product 1','Product 2','Product 3'])
df.loc[0] = ['Apples',5,'Summer','2023-06-01','Mangos','Apples','Bananas']
df.loc[1] = ['Mangos',15,'Summer','2023-06-01','Mangos','Apples','Bananas']
df.loc[2] = ['Bananas',10,'Summer','2023-06-01','Mangos','Apples','Bananas']
df.loc[3] = ['Guava',9,'Fall','2023-10-01','Guava','Apples',np.nan]
df.loc[4] = ['Apples',7,'Fall','2023-10-01','Guava','Apples',np.nan]
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Product Name</th>
<th>Product Price</th>
<th>Campaign Name</th>
<th>Campaign Start Date</th>
<th>Product 1</th>
<th>Product 2</th>
<th>Product 3</th>
</tr>
</thead>
<tbody>
<tr>
<td>Apples</td>
<td>5</td>
<td>Summer</td>
<td>2023-06-01</td>
<td>Mangos</td>
<td>Apples</td>
<td>Bananas</td>
</tr>
<tr>
<td>Mangos</td>
<td>15</td>
<td>Summer</td>
<td>2023-06-01</td>
<td>Mangos</td>
<td>Apples</td>
<td>Bananas</td>
</tr>
<tr>
<td>Bananas</td>
<td>10</td>
<td>Summer</td>
<td>2023-06-01</td>
<td>Mangos</td>
<td>Apples</td>
<td>Bananas</td>
</tr>
<tr>
<td>Guava</td>
<td>9</td>
<td>Fall</td>
<td>2023-10-01</td>
<td>Guava</td>
<td>Apples</td>
<td>NaN</td>
</tr>
<tr>
<td>Apples</td>
<td>7</td>
<td>Fall</td>
<td>2023-10-01</td>
<td>Guava</td>
<td>Apples</td>
<td>NaN</td>
</tr>
</tbody>
</table>
</div>
<p>I need to create a column for each product's price, based on the specific campaign's price. For example, Apples are $5 in the Summer campaign but $7 in the Fall campaign.</p>
<pre><code>df2 = pd.DataFrame(columns=['Campaign Name','Campaign Start Date','Product 1','Product 2','Product 3','Product 1 Price','Product 2 Price','Product 3 Price'])
df2.loc[0] = ['Summer','2023-06-01','Mangos','Apples','Bananas',15,5,10]
df2.loc[1] = ['Fall','2023-10-01','Guava','Apples',np.nan,9,7,np.nan]
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Campaign Name</th>
<th>Campaign Start Date</th>
<th>Product 1</th>
<th>Product 2</th>
<th>Product 3</th>
<th>Product 1 Price</th>
<th>Product 2 Price</th>
<th>Product 3 Price</th>
</tr>
</thead>
<tbody>
<tr>
<td>Summer</td>
<td>2023-06-01</td>
<td>Mangos</td>
<td>Apples</td>
<td>Bananas</td>
<td>15</td>
<td>5</td>
<td>10.0</td>
</tr>
<tr>
<td>Fall</td>
<td>2023-10-01</td>
<td>Guava</td>
<td>Apples</td>
<td>NaN</td>
<td>9</td>
<td>7</td>
<td>NaN</td>
</tr>
</tbody>
</table>
</div>
<p>Any ideas on how to accomplish this? I've tried a pivot table</p>
<pre><code>df.pivot(index='Campaign Name',columns='Product Name',values='Product Price').reset_index().rename_axis(columns=None)
</code></pre>
<p>But this doesn't return the columns labeled by Product 1/2/3, which is how I need them for later in my analysis.</p>
| <python><pandas><dataframe> | 2023-09-05 17:17:37 | 2 | 337 | Sean R |
77,046,468 | 4,225,430 | Where is the parameter 'request' used in view.py in django | <p>A newbie in Django web development. I've got a tiny but puzzling question about request / response in views.py.</p>
<ol>
<li>I self-learn Django online and here comes the http request / response section. The code is as follows:</li>
</ol>
<pre><code>#from django.shortcuts import render
from django.http import HttpResponse
# Create your views here.
def index(request):
return HttpResponse("<h1>123</h1>")
</code></pre>
<p>I understand that the index page is requested and django creates a response which shows 123 in the index page. My questions are:</p>
<ol>
<li><p>with the <code>request</code> as the parameter of the function, isn't it used in the body of the function, just like <code>def math(x): return x+1</code>?</p>
</li>
<li><p>Also, why does <code>render</code> become redundant in this trunk of code, where it should be used to produce output?</p>
</li>
</ol>
<p>Thanks for answering.</p>
| <python><django><httprequest><render><httpresponse> | 2023-09-05 17:08:44 | 2 | 393 | ronzenith |
77,046,465 | 10,200,497 | Add a string to a multiline string in python without creating new line | <p>I have two strings. One of them is a normal string and the other one is multiline.</p>
<pre><code>a = '1'
b = """
abc
xyz
"""
</code></pre>
<p>This is the output that I want:</p>
<pre><code>1- abc
xyz
</code></pre>
<p>I have tried to use a f-string:</p>
<pre><code>result = f'{a}- {b}'
</code></pre>
<p>But it creates a new line at the beginning which I don't want.</p>
| <python> | 2023-09-05 17:08:10 | 4 | 2,679 | AmirX |
77,046,319 | 22,213,065 | Move last lines to first line | <p>I have high number of txt files in <code>E:\Desktop\Social_media\edit8\New folder (2)</code> directory and each file have an arrangement like following:</p>
<pre><code>Bolt;539,110
Classmates;263,454
PlanetAll;126,907
theGlobe;73,063
SixDegrees;64,065
JANUARY 1997
</code></pre>
<p>Now I want to move last lines to first line like following:</p>
<pre><code>JANUARY 1997
Bolt;539,110
Classmates;263,454
PlanetAll;126,907
theGlobe;73,063
SixDegrees;64,065
</code></pre>
<p>I write following python script for this:</p>
<pre><code>import os
directory = r'E:\Desktop\Social_media\edit8\New folder (2)' # Replace with the directory path containing your text files
# Get a list of all text files in the directory
files = [file for file in os.listdir(directory) if file.endswith('.txt')]
# Process each file
for file in files:
file_path = os.path.join(directory, file)
# Read the file content
with open(file_path, 'r') as f:
lines = f.readlines()
# Extract the last line and strip the newline character
last_line = lines.pop().strip()
# Insert the last line at the beginning
lines.insert(0, last_line)
# Write the modified content back to the file
with open(file_path, 'w') as f:
f.writelines(lines)
</code></pre>
<p><strong>My script working good but I don't know why it move last line to first of first line like following:</strong></p>
<pre><code>JANUARY 1997Bolt;539,110
Classmates;263,454
PlanetAll;126,907
theGlobe;73,063
SixDegrees;64,065
</code></pre>
<p><strong>Where is my script problem? and how to fix it?</strong></p>
| <python><python-3.x> | 2023-09-05 16:44:35 | 1 | 781 | Pubg Mobile |
77,046,307 | 7,169,895 | PySide6 QAbstractTableView - Set table header color | <p>I saw <a href="https://stackoverflow.com/questions/12748460/qtableview-custom-table-model-set-text-color-in-header">this question</a> which is asking the same as me. However, the solution is not working for coloring my header row. Here is my code</p>
<pre><code> def headerData(self, col, orientation, role):
if orientation == QtCore.Qt.Orientation.Horizontal:
if role == QtCore.Qt.ItemDataRole.DisplayRole:
return str(self._dfDisplay.columns[col])
elif role == Qt.BackgroundRole:
return QtGui.QColor(173, 216, 230) # light blue
else:
return None
return None
</code></pre>
<p>Why is are my table headers not being colored?
There is also <a href="https://stackoverflow.com/questions/13837403/qtbackgroundrole-seems-to-be-ignored">this</a> but my rows themselves color fine.</p>
<p>Edit: Full Code:</p>
<pre><code>import sys
import pandas as pd
from PySide6 import QtCore, QtGui, QtWidgets
from PySide6.QtCore import Qt
import numpy as np
class CustomHeaderView(QHeaderView):
def paintSection(self, painter, rect, logicalIndex):
painter.save()
painter.restore()
brush = QBrush(Qt.GlobalColor.blue)
brush.setStyle(Qt.SolidPattern)
painter.setBrush(brush)
painter.drawRect(rect)
value = self.model().headerData(logicalIndex, Qt.Orientation.Horizontal, Qt.ItemDataRole.DisplayRole)
print(value)
painter.drawText(rect, Qt.AlignmentFlag.AlignCenter, value)
class TableModel(QtCore.QAbstractTableModel):
def __init__(self, data):
super(TableModel, self).__init__()
self._dfDisplay = data
self._data = data
def data(self, index, role):
if role == Qt.DisplayRole:
value = self._data[index.column()][index.row()]
return str(value)
def rowCount(self, index):
return self._data.shape[0]
def columnCount(self, index):
return self._data.shape[1]
def headerData(self, col, orientation, role):
if orientation == QtCore.Qt.Orientation.Horizontal:
if role == QtCore.Qt.ItemDataRole.DisplayRole:
return str(self._dfDisplay.columns[col])
elif role == Qt.BackgroundRole:
return QtGui.QColor(173, 216, 230) # light blue
else:
return None
return None
class MainWindow(QtWidgets.QMainWindow):
def __init__(self):
super().__init__()
self.table = QtWidgets.QTableView()
# self.table.horizontalHeader().setStyle(QStyleFactory.create('fusion')) # set theme to fusion to allow coloring of headers -- works
header_view = CustomHeaderView(QtCore.Qt.Orientation.Horizontal)
self.table.setHorizontalHeader(header_view)
data = pd.DataFrame([
[1, 9, 2],
[1, 0, -1],
[3, 5, 2],
[3, 3, 2],
[5, 8, 9],
])
self.model = TableModel(data)
self.table.setModel(self.model)
self.setCentralWidget(self.table)
app=QtWidgets.QApplication(sys.argv)
window=MainWindow()
window.show()
app.exec_()
</code></pre>
| <python><pyside6> | 2023-09-05 16:42:22 | 0 | 786 | David Frick |
77,046,175 | 5,862,756 | how to install & use numpy on macOS Ventura, ARM | <p>it's there, but computer says "no"...</p>
<pre><code>% python --version
Python 3.11.3
% find /opt -name "numpy"
/opt/homebrew/lib/python3.11/site-packages/numpy
/opt/homebrew/lib/python3.11/site-packages/numpy/core/include/numpy
% export PYTHONPATH="/opt/homebrew/lib/python3.11/site-packages/numpy"
% echo $PYTHONPATH
/opt/homebrew/lib/python3.11/site-packages/numpy
% python -c "import numpy"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'numpy'
</code></pre>
| <python><macos><numpy> | 2023-09-05 16:21:31 | 0 | 1,293 | VorpalSword |
77,046,121 | 15,587,184 | Python regular expressions to continuously extract multiple pieces of information from text data | <p>I have a dataset that has a col named "plain_text" in which we have a dumped log of conversations among several user in a group chat, the servers saves the information in a format like this:</p>
<pre><code>a "timestamp" + "user name" + ":" + "text"
</code></pre>
<p>the timestamp may by enclosed in slashes or parentheses or none for instance:
/8:08:57/ or (14:05:14) or just 16:59:59</p>
<p>the user name is entered manually by the one using the server and the "text" is the message that the person is sending it can be as long or as short as they want and may have tabs new lines, etc.</p>
<p>this is a preview of just one cell of information on my dataste:</p>
<pre><code>text = """
(19:04:45) Server 526.785 : Ongoing Push
(19:08:46) Main Deck : Operation was not uploaded the error code will be
55-858-658-458
No More Handing is needed
(19:50:46) Server UJI-OP : Reset Deck main
OP may take up to 6 mins or more...
(19:51:46) Server UJI-OP : Main Deck status ON
please stand up for opening doors
23:20:04 Jill : Windows Closed
5:16:58 Carl V: Is someone on the Front door?
(17:11:49) IUJO-66 : No Response on Deck (5:10:43) Van UHJ : Flights delay 8:34:08 H2047: Buy Concert Tickets 9:05:42 Mark P.: Gen. OK
7:00:15 Jill : Status not ok updated 21:22:34 YHXO: Front desk clear
"""
df = pd.DataFrame({'plain text': [text]})
</code></pre>
<p>My desired output looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>time</th>
<th>user</th>
<th>text_sent</th>
</tr>
</thead>
<tbody>
<tr>
<td>19:08:46</td>
<td>Main Deck</td>
<td>Operation was not uploaded the error code will be 55-858-658-458 No More Handing is needed</td>
</tr>
<tr>
<td>19:04:45</td>
<td>Server 526.785</td>
<td>Ongoing Push</td>
</tr>
<tr>
<td>19:50:46</td>
<td>Server UJI-OP</td>
<td>Reset Deck main</td>
</tr>
</tbody>
</table>
</div>
<p>basically each piece of information will be put in there own column and we will capture all the text sent. (I have shown only a few raws of my desired output so that I dont overload the screen)</p>
<p>I'm using this regex:</p>
<pre><code>"(?P<timestamp>\d+:\d+:\d+)\S*\s+(?P<user>[^:]+?)\s*:\s*(?P<msg>.*?)(?=\s*\S*\d+:\d+:\d+|$)"
</code></pre>
<p>But is not working for some of the messages look:</p>
<p><a href="https://i.sstatic.net/kLJj1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kLJj1.png" alt="enter image description here" /></a></p>
<p>Some messages are skip eventhough they do follow my regex</p>
<p>here is regex: <a href="https://regex101.com/r/G0rdpa/1" rel="nofollow noreferrer">https://regex101.com/r/G0rdpa/1</a></p>
<p>could you please help me out modifying to capture all information?</p>
| <python><regex> | 2023-09-05 16:13:03 | 1 | 809 | R_Student |
77,046,105 | 22,213,065 | Merge the last 3 lines of each text file with specific arrangement | <p>I have high number of txt files in <code>E:\Desktop\Social_media\edit8\New folder</code> directory and each file has an arrangement similar to the following:</p>
<pre><code>Bolt
2,739,393
Classmates
1,267,092
SixDegrees
1,077,353
PlanetAll
552,488
theGlobe
437,847
OpenDiary
9,251
1998
MARCH
034+
</code></pre>
<p>Now I want to merge each txt file last 3 lines like following:</p>
<pre><code>Bolt
2,739,393
Classmates
1,267,092
SixDegrees
1,077,353
PlanetAll
552,488
theGlobe
437,847
OpenDiary
9,251
034+ MARCH 1998
</code></pre>
<p>this mean last 3 lines must have an arrangement like <code>number+ month year</code></p>
<p><strong>I write following python script for this but I don't know why not working:</strong></p>
<pre><code>import os
# Define the directory where your text files are located
directory_path = r'E:\Desktop\Social_media\edit8\New folder'
# Function to rearrange the lines and write to a new file
def rearrange_lines(file_path):
with open(file_path, 'r') as file:
lines = [line.strip() for line in file.readlines() if line.strip()] # Read non-empty lines
# Check if there are at least 3 non-empty lines
if len(lines) >= 3:
lines[-1], lines[-2], lines[-3] = lines[-3], lines[-2], lines[-1] # Rearrange the last 3 lines
# Create a new file with the rearranged lines
with open(file_path, 'w') as file:
file.write('\n'.join(lines))
# Iterate through each file in the directory
for root, dirs, files in os.walk(directory_path):
for file_name in files:
if file_name.endswith('.txt'):
file_path = os.path.join(root, file_name)
rearrange_lines(file_path)
print(f'Rearranged lines in {file_name}')
print('Done!')
</code></pre>
<p><strong>Where is my script problem? and how to fix it problem?</strong></p>
| <python><python-3.x> | 2023-09-05 16:09:57 | 2 | 781 | Pubg Mobile |
77,046,075 | 901,426 | dump Protobuf class variables to a list/dict and swap keys | <p>i'm working with the lovely Protobuf (<em>cough</em>) and i'm running into an odd problem: instead of returning 'value' as a member of the 'Metric' class in the MQTT message, it is returning, through the <code>oneof</code> declaration, keys like <code>boolean_value</code>, <code>string_value</code>, &c. since this is supposed to be Sparkplug B message, the key <em>should</em> be just <code>value</code>. this is breaking our MQTT code for handling messages as well as confuses the hell out of me because Sparkplug B is kinda the bread-and-butter/raison-d'etre of Protobuf, is it not?</p>
<p>at any rate, since i have to move forward, i just figured i'd do a key-swap after dumping the class variables/properties to a list, like so:</p>
<pre class="lang-py prettyprint-override"><code>def on_message():
inboundPayload = sparkplug_b_pb2.Payload()
inboundPayload.ParseFromString(msg.payload) #<-- msg is the MQTT message
# let's see what's in the 'metric` part of the payload
for metric in inboundPayload.metrics:
print(metric)
# alias: 19
# timestamp: shpak-of-numbers
# datatype: 11
# is_null: false
# boolean_value: true #<-- this key should be 'value' but it isn't, so let's just
# replace the key with some hoop-jumping β
metric['value'] = metric.pop(list(metric.items())[-1] #<-- in concept test, works; here: no
# blah blah rest of code block
</code></pre>
<p>as the comment says, when i tested this with a simple dict, it worked just fine. but when i dropped it into the code here, it tanks when i try to replace the key.</p>
<p>can someone please point out what i'm doing wrong? my intuition says it has something to do with the <code>metric.items()</code> part, but, i'll admit, my Python chops are still wobbly working with the Protobuf classes.</p>
| <python><class><protocol-buffers> | 2023-09-05 16:05:40 | 1 | 867 | WhiteRau |
77,046,010 | 6,231,251 | Iteratively select rows in Pandas dataframes to form single df | <p>I have a dict of dataframes:</p>
<pre><code>data1 = {'A': [1, 2, 3],
'B': [4, 5, 6],
'my_value': [0.1, 0.4, 0.3]}
df1 = pd.DataFrame(data1)
# Example DataFrame 2
data2 = {'A': [7, 8, 9],
'B': [10, 11, 12],
'my_value': [0.2, 0.5, 0.6]}
df2 = pd.DataFrame(data2)
# Example DataFrame 3
data3 = {'A': [13, 14, 15],
'B': [16, 17, 18],
'my_value': [0.9, 0.8, 0.7]}
df3 = pd.DataFrame(data3)
dfs = {'df1': df1, 'df2': df2, 'df3': df3}
</code></pre>
<p>In the above dataframes, the order counts. I would like to obtain a new, single dataframe by iteratively selecting the first rows of each df, then the second rows, then the third etc. Each of these selections should be then sorted by 'my_value', and concatenated together in a single df.</p>
<p>Basically, starting from the above example, the first three rows of the outcome should be the first rows of each dataset, sorted by 'my_value'; then the seconds, and the thirds in the same fashion.</p>
| <python><pandas><dataframe> | 2023-09-05 15:56:29 | 3 | 882 | sato |
77,045,962 | 5,491,260 | How to use celery rdb while working with python, flask, docker and docker compose? | <p>I use python, flask, celery, docker and docker compose in my project and wanted to use <a href="https://docs.celeryq.dev/en/stable/reference/celery.contrib.rdb.html" rel="nofollow noreferrer">celery rdb</a> to debug my tasks. I could not connect to celery container using telnet from my local machine to docker container.</p>
| <python><docker><debugging><docker-compose><celery> | 2023-09-05 15:50:05 | 1 | 589 | adnan kaya |
77,045,947 | 2,258,585 | Polars join with OR condition | <p>I have two dataset coming from two very different data sources.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df1 = pl.read_csv(b"""date,label,org_slug,org_id,org_name,issues_count
2023-08-29,label1,org-slug-name-1,org-id-1,org-name,1""", try_parse_dates=True)
df2 = pl.read_csv(b"""date,org_name,org_id,org_slug,info_1,info_2
2023-08-29,org-name,org-id-1,org-slug-name-1,10,12""", try_parse_dates=True)
</code></pre>
<p>Dataframe 1</p>
<pre><code>ββββββββββββββ¬ββββββββββββββββββ¬ββββββββββββββββββ¬ββββββββββββββββββ¬βββββββββββββββββ¬βββββββββββββββ
β date β label β org_slug β org_id β org_name β issues_count β
β --- β --- β --- β --- β --- β --- β
β date β str β str β str β str β i64 β
ββββββββββββββͺββββββββββββββββββͺββββββββββββββββββͺββββββββββββββββββͺβββββββββββββββββͺβββββββββββββββ‘
β 2023-08-29 β label1 β org-slug-name-1 β org-id-1 β org-name β 1 β β
ββββββββββββββ΄ββββββββββββββββββ΄ββββββββββββββββββ΄ββββββββββββββββββ΄βββββββββββββββββ΄βββββββββββββββ
</code></pre>
<p>Dataframe 2</p>
<pre><code>ββββββββββββββ¬ββββββββββββββ¬ββββββββββββββββββββ¬ββββββββββββββββββββ¬ββββββββββββ¬ββββββββββ
β date β org_name β org_id β org_slug β info_1 β info_2 β
β --- β --- β --- β --- β --- β --- β
β date β str β str β str β i64 β i64 β
ββββββββββββββͺββββββββββββββͺββββββββββββββββββββͺββββββββββββββββββββͺββββββββββββͺββββββββββ‘
β 2023-08-29 β org-name β org-id-1 β org-slug-name-1 β 10 β 12
ββββββββββββββ΄ββββββββββββββ΄ββββββββββββββββββββ΄ββββββββββββββββββββ΄ββββββββββββ΄ββββββββββ
</code></pre>
<p>I'm trying to join these two dataframe based on <code>date</code>.
Then I would like to still joining via <code>org_id</code> or <code>org_slug</code> or <code>org_name</code>.</p>
<p>These generally match, but there are times where org_slug or org_id can be null and I want to rely on the next condition.</p>
<p>is it possible to achieve it in polars?</p>
| <python><dataframe><python-polars> | 2023-09-05 15:48:05 | 2 | 2,246 | Tizianoreica |
77,045,943 | 7,655,447 | How to use logical AND operation on multiple pandas columns and change the value of column in python | <p>I have a pandas dataframe with multiple columns as below:
<a href="https://i.sstatic.net/8esc0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8esc0.png" alt="enter image description here" /></a></p>
<p>How can I create a new column named as 'result' and perform logical operation like '&' on all the columns cells in a row to check if there is any single 'Fail' value present if yes 'result' column value should be 'Fail' else if in entire row is having only 'Pass' the 'result' column value for that row should be 'Pass'. Desired output should look like below:
<a href="https://i.sstatic.net/uGecH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uGecH.png" alt="enter image description here" /></a></p>
| <python><pandas><dataframe> | 2023-09-05 15:47:23 | 0 | 407 | Nikhil Mangire |
77,045,910 | 1,314,404 | How to use higher version of sqlite3 in mac | <p>I got this error <code>"RuntimeError: Your system has an unsupported version of sqlite3. Chroma requires sqlite3 >= 3.35.0."</code>
To check the version, I tried
<code>sqlite3 --version</code>
and I got
<code>3.33.0 2020-08-14 13:23:32 fca8dc8b578f215a969cd899336378966156154710873e68b3d9ac5881b0alt2</code>
So I go to :<a href="https://www.sqlite.org/download.html" rel="nofollow noreferrer">https://www.sqlite.org/download.html</a>
and download <code>Precompiled Binaries for Mac OS X (x86)</code> which seems to be version <code>3.43</code></p>
<p>But I don't know how to use it instead of the old version.</p>
<p>I have tried to follow:
<a href="https://gist.github.com/defulmere/8b9695e415a44271061cc8e272f3c300" rel="nofollow noreferrer">https://gist.github.com/defulmere/8b9695e415a44271061cc8e272f3c300</a>
to do these 2</p>
<p>I executed following steps to resolve this error:</p>
<p>Inside my python3.10.8's virtual environment i.e. venv3.10, installed pysqlite3-binary using command: <code>pip install pysqlite3-binary</code></p>
<p>Added these 3 lines in <code>venv3.10/lib/python3.10/site-packages/chromadb/__init__.py</code> at the beginning:</p>
<pre><code>__import__('pysqlite3')
import sys
sys.modules['sqlite3'] = sys.modules.pop('pysqlite3')
</code></pre>
<p>but I got this error:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement pysqlite3-binary (from versions: none)
ERROR: No matching distribution found for pysqlite3-binary
</code></pre>
<p>I also try to add the lines to the code
<a href="https://i.sstatic.net/UxtAh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UxtAh.png" alt="enter image description here" /></a>
but got error as well:
ModuleNotFoundError: No module named 'pysqlite3'</p>
<p>What I am doing wrongly and how to fix it please?</p>
| <python><macos><virtualenv><sqlite3-python> | 2023-09-05 15:43:09 | 1 | 1,315 | user1314404 |
77,045,877 | 13,559,669 | Device Code Flow - Microsoft Azure Authentication | <p>What is the duration of the token we get when authenticating to Azure resources using Device Code Flow: <a href="https://learn.microsoft.com/en-us/azure/active-directory/develop/scenario-desktop-acquire-token-device-code-flow?tabs=dotnet" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/active-directory/develop/scenario-desktop-acquire-token-device-code-flow?tabs=dotnet</a>.</p>
<p>Is there a "refresh token" that we can activate after the first one expires? If so, is the refresh token activated behind the scenes, or should I manually activate it (and how)?</p>
| <python><azure><authentication><access-token><azure-authentication> | 2023-09-05 15:38:08 | 1 | 436 | el_pazzu |
77,045,855 | 7,945,506 | Python pandas: Why is concat not working? | <p>I have a list of pd.DataFrames <code>all_predictions</code> that I want to concatinate. As I am getting errors, I began to investigate:</p>
<p>From two of the DataFrames in the list, I took a subset of rows and columns; and reset the index:</p>
<pre><code>df1 = all_predictions[1]
df2 = all_predictions[2]
df1 = df1.head(3)
df2 = df2.head(3)
df1.reset_index(drop=True, inplace=True)
df2.reset_index(drop=True, inplace=True)
df1 = df1[["key_id", "prediction", "yrmon"]]
df2 = df2[["key_id", "prediction", "yrmon"]]
</code></pre>
<p><a href="https://i.sstatic.net/HoHGA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HoHGA.png" alt="enter image description here" /></a></p>
<p>But I cannot concat them:</p>
<pre><code>ValueError Traceback (most recent call last)
Cell In[78], line 1
----> 1 pd.concat([df1, df2], ignore_index=True)
File ~/projects/bcs-modeling/.venv/lib/python3.9/site-packages/pandas/core/reshape/concat.py:393, in concat(objs, axis, join, ignore_index, keys, levels, names, verify_integrity, sort, copy)
378 copy = False
380 op = _Concatenator(
381 objs,
382 axis=axis,
(...)
390 sort=sort,
391 )
--> 393 return op.get_result()
File ~/projects/bcs-modeling/.venv/lib/python3.9/site-packages/pandas/core/reshape/concat.py:680, in _Concatenator.get_result(self)
676 indexers[ax] = obj_labels.get_indexer(new_labels)
678 mgrs_indexers.append((obj._mgr, indexers))
--> 680 new_data = concatenate_managers(
681 mgrs_indexers, self.new_axes, concat_axis=self.bm_axis, copy=self.copy
682 )
683 if not self.copy and not using_copy_on_write():
684 new_data._consolidate_inplace()
File ~/projects/bcs-modeling/.venv/lib/python3.9/site-packages/pandas/core/internals/concat.py:199, in concatenate_managers(mgrs_indexers, axes, concat_axis, copy)
...
2116 if block_shape[0] == 0:
2117 raise ValueError("Empty data passed with indices specified.")
-> 2118 raise ValueError(f"Shape of passed values is {passed}, indices imply {implied}")
ValueError: Shape of passed values is (3, 3), indices imply (6, 3)
</code></pre>
<p>Then I recreated the DataFrames (I got the exact values from e.g. <code>df1.to_dict()</code>):</p>
<pre><code>d1 = pd.DataFrame({
"key_id": ["99c5a5fef58b0e89d22f6e2d99a7cdf5", "b2074747c5bbcaddfc588339e75542f4", "bd7798b548f115440054473557ec90f7"],
"prediction": [57.9340063165089, -82.29114989923285, -141.58455971583805],
"yrmon": ["2021-09-30","2021-09-30","2021-09-30"],
})
d2 = pd.DataFrame({
"key_id": ["0873065925bca4e1cd2b5ef42ca979fa", "8c55ac2774db7b6c20a4f1b0cf80b2b5", "cdaa56f8ff1863040a1593086c8e61c6"],
"prediction": [-298.70691, -24907.70776706384, 1.2290192287788002],
"yrmon": ["2021-09-30","2021-09-30","2021-09-30"],
})
</code></pre>
<p><a href="https://i.sstatic.net/vHIw5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vHIw5.png" alt="enter image description here" /></a></p>
<p>And the concat went fine:</p>
<p><a href="https://i.sstatic.net/1xsHy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1xsHy.png" alt="enter image description here" /></a></p>
<p>Why can I not concat df1 and df2? How to solve this?</p>
<p>Thanks a lot!</p>
<hr />
<p>Edit:
@Corralien:</p>
<p><a href="https://i.sstatic.net/DExWy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DExWy.png" alt="enter image description here" /></a></p>
<pre><code>ValueError Traceback (most recent call last)
Cell In[84], line 3
1 df1 = all_predictions[1].head(3).reset_index(drop=True)
2 df2 = all_predictions[2].head(3).reset_index(drop=True)
----> 3 pd.concat([df1, df2], ignore_index=True)
File ~/projects/bcs-modeling/.venv/lib/python3.9/site-packages/pandas/core/reshape/concat.py:393, in concat(objs, axis, join, ignore_index, keys, levels, names, verify_integrity, sort, copy)
378 copy = False
380 op = _Concatenator(
381 objs,
382 axis=axis,
(...)
390 sort=sort,
391 )
--> 393 return op.get_result()
File ~/projects/bcs-modeling/.venv/lib/python3.9/site-packages/pandas/core/reshape/concat.py:680, in _Concatenator.get_result(self)
676 indexers[ax] = obj_labels.get_indexer(new_labels)
678 mgrs_indexers.append((obj._mgr, indexers))
--> 680 new_data = concatenate_managers(
681 mgrs_indexers, self.new_axes, concat_axis=self.bm_axis, copy=self.copy
682 )
683 if not self.copy and not using_copy_on_write():
684 new_data._consolidate_inplace()
...
2116 if block_shape[0] == 0:
2117 raise ValueError("Empty data passed with indices specified.")
-> 2118 raise ValueError(f"Shape of passed values is {passed}, indices imply {implied}")
ValueError: Shape of passed values is (3, 195), indices imply (6, 195)
</code></pre>
| <python><pandas><concatenation> | 2023-09-05 15:35:07 | 1 | 613 | Julian |
77,045,849 | 11,879,339 | How to Plot GeoJSON Geometry Data in Streamlit Using Plotly Express Choropleth Mapbox? | <p>I'm trying to plot GeoJSON data of Indonesian cities and provinces using Streamlit and Plotly Express. My data comes from a DuckDB database, and I merge it with some additional data before plotting. I convert the merged data to GeoJSON format for plotting, but the map shows up empty.</p>
<p>What I've Tried:</p>
<ol>
<li>Checked that the GeoJSON and DataFrame indices match.</li>
<li>Used <code>featureidkey</code> in <code>px.choropleth_mapbox</code>.</li>
<li>Verified that the data types for the index and GeoJSON id match.</li>
</ol>
<p>Here's the snippet related to plotting:</p>
<pre class="lang-py prettyprint-override"><code>class InteractiveMap:
def __call__(self):
return self.interactive_map()
def interactive_map(self):
st.title('Interactive Map of Indonesia')
choice = st.selectbox("Choose between Cities and Provinces", ["Cities", "Provinces"])
conn = duckdb.connect('oeroenremboog.db')
cursor = conn.cursor()
if choice == "Cities":
cursor.execute("SELECT * FROM indonesia_cities;")
else:
cursor.execute("SELECT * FROM indonesia_provinces;")
columns = [desc[0] for desc in cursor.description]
indonesia_map = pd.DataFrame(cursor.fetchall(), columns=columns)
if 'geometry' in indonesia_map.columns:
indonesia_map['geometry'] = indonesia_map['geometry'].apply(wkb.loads, hex=True)
indonesia_map = gpd.GeoDataFrame(indonesia_map, geometry='geometry')
# Load the JSON data
with open('src/data/diff_percentage_dm1_dm2.json') as f:
diff_data = json.load(f)
diff_df = pd.DataFrame(diff_data)
name_column = 'Name' if choice == "Cities" else 'Propinsi'
if name_column in indonesia_map.columns and 'kabupaten_tinggal' in diff_df.columns:
merged_data = pd.merge(indonesia_map, diff_df, left_on=name_column, right_on='kabupaten_tinggal', how='left')
else:
st.error(f"Missing column {name_column} in either of the DataFrames.")
return
# Convert the 'diff_percentage' column to numeric
merged_data['diff_percentage'] = pd.to_numeric(merged_data['diff_percentage'], errors='coerce')
merged_data['diff_percentage'].fillna(0, inplace=True)
indonesia_map.crs = "EPSG:4326"
# Convert GeoDataFrame to GeoJSON
geojson_data = json.loads(merged_data.to_json())
# Debugging
st.write(f"Sample GeoJSON: {str(geojson_data)[:500]}")
st.write(f"Sample merged_data: {str(merged_data)[:500]}")
# Plotting
fig = px.choropleth_mapbox(merged_data,
geojson=geojson_data,
locations=merged_data.index, # DataFrame index
color='diff_percentage',
color_continuous_scale="Viridis",
range_color=(-100, 100),
mapbox_style="carto-positron",
opacity=0.5,
labels={'diff_percentage':'Difference Percentage'},
center={"lat": -2, "lon": 118},
zoom=3.4,
featureidkey="properties.id")
st.plotly_chart(fig)
cursor.close()
return merged_data
</code></pre>
<p>Debugging Info:</p>
<ul>
<li>Sample GeoJSON: (a snippet of the GeoJSON data)</li>
<li>Unique Locations: (unique locations from the DataFrame)</li>
<li>Unique diff_percentage: (unique values in the diff_percentage column)</li>
</ul>
<p>Here's a sample of the first 10 rows from my indonesia_map DataFrame converted to dictionary format:</p>
<pre class="lang-py prettyprint-override"><code>Sample indonesia_map (first 10 rows as dict): {'Name': {0: 'SIMEULUE', 1: 'ACEH SINGKIL', 2: 'ACEH SELATAN', 3: 'ACEH TENGGARA', 4: 'ACEH TIMUR', 5: 'ACEH TENGAH', 6: 'ACEH BARAT', 7: 'ACEH BESAR', 8: 'PIDIE', 9: 'BIREUEN'}, 'latitude': {0: 2.613334894180298, 1: 2.349949598312378, 2: 3.1632587909698486, 3: 3.369655132293701, 4: 4.628895282745361, 5: 4.530141830444336, 6: 4.456692218780518, 7: 5.3799920082092285, 8: 5.068343639373779, 9: 5.093278884887695}, 'longitude': {0: 96.08564758300781, 1: 97.84710693359375, 2: 97.43519592285156, 3: 97.69552612304688, 4: 97.62864685058594, 5: 96.85894012451172, 6: 96.18546295166016, 7: 95.51558685302734, 8: 96.00715637207031, 9: 96.60938262939453}, 'geometry': {0: <POINT (96.086 2.613)>, 1: <POINT (97.847 2.35)>, 2: <POINT (97.435 3.163)>, 3: <POINT (97.696 3.37)>, 4: <POINT (97.629 4.629)>, 5: <POINT (96.859 4.53)>, 6: <POINT (96.185 4.457)>, 7: <POINT (95.516 5.38)>, 8: <POINT (96.007 5.068)>, 9: <POINT (96.609 5.093)>}}
Sample merged_data (first 10 rows as dict): {'Name': {0: 'SIMEULUE', 1: 'ACEH SINGKIL', 2: 'ACEH SELATAN', 3: 'ACEH TENGGARA', 4: 'ACEH TIMUR', 5: 'ACEH TENGAH', 6: 'ACEH BARAT', 7: 'ACEH BESAR', 8: 'PIDIE', 9: 'BIREUEN'}, 'latitude': {0: 2.613334894180298, 1: 2.349949598312378, 2: 3.1632587909698486, 3: 3.369655132293701, 4: 4.628895282745361, 5: 4.530141830444336, 6: 4.456692218780518, 7: 5.3799920082092285, 8: 5.068343639373779, 9: 5.093278884887695}, 'longitude': {0: 96.08564758300781, 1: 97.84710693359375, 2: 97.43519592285156, 3: 97.69552612304688, 4: 97.62864685058594, 5: 96.85894012451172, 6: 96.18546295166016, 7: 95.51558685302734, 8: 96.00715637207031, 9: 96.60938262939453}, 'geometry': {0: <POINT (96.086 2.613)>, 1: <POINT (97.847 2.35)>, 2: <POINT (97.435 3.163)>, 3: <POINT (97.696 3.37)>, 4: <POINT (97.629 4.629)>, 5: <POINT (96.859 4.53)>, 6: <POINT (96.185 4.457)>, 7: <POINT (95.516 5.38)>, 8: <POINT (96.007 5.068)>, 9: <POINT (96.609 5.093)>}, 'kabupaten_tinggal': {0: 'SIMEULUE', 1: 'ACEH SINGKIL', 2: 'ACEH SELATAN', 3: 'ACEH TENGGARA', 4: 'ACEH TIMUR', 5: 'ACEH TENGAH', 6: 'ACEH BARAT', 7: 'ACEH BESAR', 8: 'PIDIE', 9: 'BIREUEN'}, 'total_dm_tipe_i': {0: '236', 1: '165', 2: '464', 3: '194', 4: '117', 5: '231', 6: '167', 7: '1216', 8: '221', 9: '200'}, 'total_dm_tipe_ii': {0: '180', 1: '321', 2: '1076', 3: '256', 4: '447', 5: '343', 6: '916', 7: '1788', 8: '731', 9: '706'}, 'diff_percentage': {0: 0.1346153846153846, 1: -0.3209876543209876, 2: -0.3974025974025974, 3: -0.1377777777777777, 4: -0.5851063829787234, 5: -0.1951219512195122, 6: -0.6915974145891042, 7: -0.1904127829560586, 8: -0.5357142857142857, 9: -0.5584988962472405}}
Sample GeoJSON (first 500 characters): {'type': 'FeatureCollection', 'features': [{'id': '0', 'type': 'Feature', 'properties': {'Name': 'SIMEULUE', 'latitude': 2.613334894180298, 'longitude': 96.08564758300781, 'kabupaten_tinggal': 'SIMEULUE', 'total_dm_tipe_i': '236', 'total_dm_tipe_ii': '180', 'diff_percentage': 0.1346153846153846}, 'geometry': {'type': 'Point', 'coordinates': [96.08564793225175, 2.6133349583186596]}}, {'id': '1', 'type': 'Feature', 'properties': {'Name': 'ACEH SINGKIL', 'latitude': 2.349949598312378, 'longitude':
</code></pre>
<p>Error:
The map shows up, but it's emptyβno data is plotted.
<a href="https://i.sstatic.net/sc6lr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sc6lr.png" alt="enter image description here" /></a></p>
<p>How can I correctly plot GeoJSON geometry data using Streamlit and Plotly Express?</p>
| <python><plotly><geojson><streamlit><plotly-express> | 2023-09-05 15:33:00 | 0 | 1,244 | ebuzz168 |
77,045,818 | 1,838,726 | Is it okay to use python operators for tensorflow tensors? | <p><strong>TL;DR</strong><br />
Is <code>(a and b)</code> equivalent to <code>tf.logical_and(a, b)</code> in terms of optimization and performance? (<code>a</code> and <code>b</code> are tensorflow tensors)</p>
<p><strong>Details</strong>:<br />
I use python with tensorflow. My first priority is to make the code run fast and my second priority is to make it readable. I have working and fast code that, for my personal feeling, looks ugly:</p>
<pre><code>@tf.function
# @tf.function(jit_compile=True)
def my_tf_func():
# ...
a = ... # some tensorflow tensor
b = ... # another tensorflow tensor
# currently ugly: prefix notation with tf.logical_and
c = tf.math.count_nonzero(tf.logical_and(a, b))
# more readable alternative: infix notation:
c = tf.math.count_nonzero(a and b)
# ...
</code></pre>
<p>The code that uses <a href="https://en.wikipedia.org/wiki/Polish_notation" rel="nofollow noreferrer">prefix notation</a> works and runs fast, but I don't think it's very readable due to the prefix notation (it's called prefix notation, because the name of the operation <code>logical_and</code> comes before the operands <code>a</code> and <code>b</code>).</p>
<p>Can I use <a href="https://en.wikipedia.org/wiki/Infix_notation" rel="nofollow noreferrer">infix notation</a>, i.e. the alternative at the end of above code, with usual python operators like <code>and</code>, <code>+</code>, <code>-</code>, or <code>==</code> and still get all the benefits of tensorflow on the GPU and compile it with XLA support? Will it compile to the same result?</p>
<p>The same question applies to unary operators like <code>not</code> vs. <code>tf.logical_not(...)</code>.</p>
<p><sub>This question was crossposted at <a href="https://software.codidact.com/posts/289588" rel="nofollow noreferrer">https://software.codidact.com/posts/289588</a> .</sub></p>
| <python><tensorflow><infix-operator><prefix-operator><xla> | 2023-09-05 15:28:00 | 1 | 6,700 | Daniel S. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.