QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,159,149
| 11,602,367
|
Debugging Dash clientside callback
|
<p>I have the following Dash app (simplified). It displays text boxes and a list of words rendered as buttons; clicking a word button adds the corresponding text to a text box (this is the part I want a client-side callback for). There is another callback with the same output (now permitted since Dash 2.9) which triggers when the user clicks the "->" button and refreshes the terms.</p>
<pre><code># Font Survey app
import re

import dash
import dash_bootstrap_components as dbc
import pandas as pd
from dash import dcc, html
from dash.dependencies import ALL, Input, Output, State
from dash.exceptions import PreventUpdate

EMPTY_DIV = html.Div()

# Div for sample list of terms
WORDS = list(pd.read_csv('font_terms.csv')['adj'])
word_items = [html.Li(children=dbc.Button(word, id={'type': 'fill-word-button', 'index': i}))
              for i, word in enumerate(WORDS)]
terms_div = html.Div(id='terms', children=word_items)

app = dash.Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP],
                prevent_initial_callbacks="initial_duplicate")
server = app.server

app.layout = html.Div([
    dcc.Store(id='terms-store', data=WORDS),
    html.Div(id="adj-inputs", className="column",
             children=[dcc.Input(id={"type": "adj-input", "index": i}, value='') for i in range(5)]),
    terms_div,
    html.Button("→", id='forward-button', n_clicks=0),
])


@app.callback(
    [Output({"type": "adj-input", "index": i}, 'value') for i in range(5)],
    Input({"type": "fill-word-button", "index": ALL}, "n_clicks"),
    State({'type': 'adj-input', 'index': ALL}, "value"),
)
def fill_word_button(button_click, adj_inputs):
    ctx = dash.callback_context
    if not ctx.triggered:
        raise PreventUpdate
    button_id = ctx.triggered[0]['prop_id'].split('.')[0]
    button_index = int(re.search(r"\d+", button_id).group())
    # check if there is an empty text box
    if '' not in adj_inputs:
        raise PreventUpdate
    adj_inputs = ['' if a is None else a for a in adj_inputs] + [WORDS[button_index]]
    adj_inputs = list(set([a for a in adj_inputs if a != ''])) + [a for a in adj_inputs if a == '']
    return adj_inputs[:5]


# The following callback, once debugged, should replace the callback above ("fill_word_button")
# @app.clientside_callback(
#     """
#     function(n_clicks, adj_inputs, words) {
#         const ctx = dash_clientside.callbackContext;
#         if (!ctx.triggered.length) {
#             return adj_inputs;
#         }
#         const button_id = ctx.triggered[0]['prop_id'].split('.')[0];
#         const button_index = parseInt(button_id.match(/\d+/)[0]);
#         // check if there is an empty text box
#         if (adj_inputs.every(a => a !== '')) {
#             return adj_inputs;
#         }
#         adj_inputs = adj_inputs.concat([words[button_index]]);
#         adj_inputs = [...new Set(adj_inputs.filter(a => a !== '')), ...adj_inputs.filter(a => a === '')];
#         return adj_inputs.slice(0, 5);
#     }
#     """,
#     Output({"type": "adj-input", "index": ALL}, 'value'),
#     Input({"type": "fill-word-button", "index": ALL}, "n_clicks"),
#     State({'type': 'adj-input', 'index': ALL}, "value"),
#     State('terms-store', 'data')
# )


@app.callback(
    [Output({"type": "adj-input", "index": i}, 'value', allow_duplicate=True) for i in range(5)],
    [Input("forward-button", "n_clicks")],
    [State({'type': 'adj-input', 'index': ALL}, "value")],
)
def refresh_terms(forward_clicks, adj_inputs):
    ctx = dash.callback_context
    if not ctx.triggered:
        raise PreventUpdate
    button_id = ctx.triggered[0]['prop_id'].split('.')[0]
    if button_id == "forward-button":
        adj_inputs = [adj for adj in adj_inputs if adj != ""]
        # Do something with the adj_inputs, e.g. upload to database:
        # db.insert_response_mysql(adj_inputs)
    # Reset adj_inputs
    return [""] * 5


if __name__ == "__main__":
    app.run_server(debug=True, port=8000, host="0.0.0.0")
</code></pre>
<p>The error I have is connected to the <code>refresh_terms</code> callback</p>
<pre><code>[State({'type': 'adj-input', 'index': ALL}, "value")],
TypeError: 'NoneType' object is not callable
</code></pre>
<p>This only happens when I try to use the client-side callback; everything is fine when I use a separate server-side callback. Please help me debug this error.</p>
|
<javascript><python><plotly-dash>
|
2023-05-02 21:59:22
| 1
| 551
|
acciolurker
|
76,159,107
| 4,500,155
|
Is it possible to use Flatgeobuf in Python?
|
<p>I would like to read from / write to <a href="https://github.com/flatgeobuf/flatgeobuf" rel="nofollow noreferrer">flatgeobuffers</a> from a Python GIS application. My understanding is that this format can currently only be read and written from JavaScript and TypeScript (the schema compilation stage is of course language-agnostic, insofar as the official CLI handles that just fine).</p>
<p>Is there a known technique or workaround to read from / write to flatgeobuffers in Python?</p>
|
<python><gis><flatbuffers>
|
2023-05-02 21:50:10
| 1
| 544
|
WhyNotTryCalmer
|
76,159,055
| 1,985,211
|
Convert numpy array of strings to datetime
|
<p>I am shocked at how long I've spent running down various rabbit holes trying to figure out this problem, which (I thought) should be relatively simple.</p>
<p>I have a numpy array of strings saved as variable <code>t</code> with some associated data as raw_data:</p>
<pre><code>t = array(['20141017000000','20141017000001','20141017000002'],dtype='<U14')
raw_data = np.array([1,2,3],dtype='float')
</code></pre>
<p>The date format is YYYYmmddHHMMSS</p>
<p>I just want to convert this to a datetime object that is compatible with matplotlib for plotting purposes.</p>
<p>Various answers I've found inevitably lead to errors including:</p>
<p>Option #1</p>
<pre><code>import matplotlib.dates as dates
convertedDate = dates.num2date(t)
Error: can't multiply sequence by non-int of type 'float'
</code></pre>
<p>Option #2 (<a href="https://datagy.io/python-string-to-date/" rel="nofollow noreferrer">from here</a>)</p>
<pre><code>from datetime import datetime
convertedDate = datetime.strptime(t, '%YY%mm%dd%HH%MM%ss')
Error: strptime() argument 1 must be str, not numpy.ndarray
</code></pre>
<p>Option #3 (from <a href="https://stackoverflow.com/questions/27103044/converting-datetime-string-to-datetime-in-numpy-python">here</a>)</p>
<pre><code>import numpy as np
convertedDate = [np.datetime64(x) for x in t]
</code></pre>
<p>While this option 3 works, the output doesn't quite make sense to me since it looks identical to the original string for example <code>convertedDate[0]</code> returns <code>numpy.datetime64('20141017000000')</code>. And furthermore when I try to plot it I get this:</p>
<pre><code>import matplotlib.pyplot as plt
plt.plot(convertedDate,raw_data)
OverflowError: int too big to convert
</code></pre>
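<p>For what it's worth, parsing a single element does work with <code>strptime</code> once I use single-letter directives (so I suspect the format string in option #2 was wrong too; this is my own sketch):</p>

```python
from datetime import datetime

# One timestamp in YYYYmmddHHMMSS form, parsed with single-letter directives
single = datetime.strptime('20141017000002', '%Y%m%d%H%M%S')
```

<p>But I still don't know the idiomatic way to apply this across the whole array so that matplotlib is happy.</p>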
<p>Any help is appreciated.</p>
|
<python><numpy><date><datetime><type-conversion>
|
2023-05-02 21:40:03
| 2
| 669
|
Darcy
|
76,159,050
| 9,718,879
|
Heroku migrations refuse being made
|
<p>I deployed a Django REST application on Heroku, but when I went to the API route I saw a <code>no such table</code> error. I figured I needed to do:</p>
<ol>
<li>heroku run python manage.py makemigrations</li>
<li>heroku run python manage.py migrate</li>
<li>heroku run python manage.py createsuperuser</li>
</ol>
<p>Which I did: I got a lot of OKs from running those migrations, but when I then tried to create a superuser, it said I have 20 unapplied migrations, even though I had just run the migrate command and gotten plenty of OKs.</p>
<p><a href="https://i.sstatic.net/8e9xA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8e9xA.png" alt="enter image description here" /></a></p>
|
<python><django><heroku><django-rest-framework>
|
2023-05-02 21:39:04
| 0
| 1,121
|
Laspeed
|
76,159,000
| 3,366,592
|
invalid results of process_time() when measuring model.fit() performance
|
<p>I use the snippet below to measure and output the time spent during model fitting.</p>
<pre><code>perf_counter_train_begin = time.perf_counter()
process_time_train_begin = time.process_time()
model.fit(data, ...)
perf_counter_train = time.perf_counter() - perf_counter_train_begin
process_time_train = time.process_time() - process_time_train_begin
print(f"System Time: {perf_counter_train}; Process Time: {process_time_train}")
</code></pre>
<p>It is expected that the system time (acquired from <code>time.perf_counter()</code>) might take much greater values than the process time (from <code>time.process_time()</code>) due to various factors like system calls, process scheduling and so on. On the other hand, when I run my neural network training script, I get results like this:</p>
<pre><code>System Time: 51.13854772000013; Process Time: 115.725974476
</code></pre>
<p>Judging by my clock, the system time is measured correctly, and the process time is bogus. What am I doing wrong here?</p>
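<p>To sanity-check my understanding of the two clocks, I ran this minimal experiment (my own sketch): time spent sleeping shows up in <code>perf_counter</code> but barely in <code>process_time</code>; conversely, <code>process_time</code> sums CPU time across all threads of the process, which I suspect is what a multithreaded <code>fit()</code> triggers:</p>

```python
import time

wall_begin = time.perf_counter()
cpu_begin = time.process_time()
time.sleep(0.2)  # off-CPU waiting: visible to the wall clock only
wall_elapsed = time.perf_counter() - wall_begin
cpu_elapsed = time.process_time() - cpu_begin
```

<p>On my machine the sleep case behaves as expected, which makes the inverted result during training all the more confusing.</p>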
|
<python><performance><tensorflow><keras>
|
2023-05-02 21:29:16
| 1
| 449
|
user3366592
|
76,158,929
| 15,008,906
|
aiokafka exits when running in a multiprocessing class
|
<p>I've been banging my head against the wall today trying to figure out why this isn't working. I created this multiprocessing class:</p>
<pre><code>class Consumer(multiprocessing.Process):
    def __init__(self, topic, **kwargs):
        self.topic = topic
        super(Consumer, self).__init__(**kwargs)

    def _deserializer(self, serialized):
        return json.loads(serialized)

    async def _consume(self):
        consumer = AIOKafkaConsumer(
            self.topic,
            # group_id=None,
            group_id="Deployment",
            value_deserializer=self._deserializer,
            bootstrap_servers='localhost:30322',
        )
        await consumer.start()
        tasks = []
        try:
            async for msg in consumer:
                logging.info("***** reading message *****")
                tasks.append(asyncio.create_task(process_msg(msg, 1)))
        finally:
            await consumer.stop()
            await asyncio.gather(*tasks)

    def run(self):
        asyncio.run(self._consume())
</code></pre>
<p>And my main file does this:</p>
<pre><code>num_procs = 1
processes = [Consumer("deployment_requests") for _ in range(num_procs)]

for p in processes:
    p.start()

for p in processes:
    logging.info(f'pid is {p.pid}')

for p in processes:
    p.join()
    logging.info(f'pid is {p.pid}')
</code></pre>
<p>And the output</p>
<pre><code>2023-05-02 16:03:58 - INFO - pid is 23520
2023-05-02 16:04:03 - INFO - Updating subscribed topics to: frozenset({'deployment_requests'})
2023-05-02 16:04:03 - INFO - Discovered coordinator 0 for group Deployment
2023-05-02 16:04:03 - INFO - Revoking previously assigned partitions set() for group Deployment
2023-05-02 16:04:03 - INFO - (Re-)joining group Deployment
2023-05-02 16:04:03 - INFO - Joined group 'Deployment' (generation 182) with member_id aiokafka-0.8.0-e331a252-b6cd-4521-9140-6bb70cf9e838
2023-05-02 16:04:03 - INFO - Elected group leader -- performing partition assignments using roundrobin
2023-05-02 16:04:03 - INFO - Successfully synced group Deployment with generation 182
2023-05-02 16:04:03 - INFO - Setting newly assigned partitions {TopicPartition(topic='deployment_requests', partition=0)} for group Deployment
2023-05-02 16:04:03 - INFO - LeaveGroup request succeeded
Process Consumer-1:
2023-05-02 16:04:04 - INFO - pid is 23520
</code></pre>
<p>If I take the code out of the class, this code works as expected. But as is, it never even prints "***** reading message *****" so it's not even waiting on messages. So I think it has something to do with p.start() not using the run() method correctly for the asyncio call. But it could also be something completely different :)</p>
<p>Here are the producer-side logs, but there aren't any issues on the producer side.</p>
<pre><code>[2023-05-02 21:04:03,943] INFO [GroupCoordinator 0]: Stabilized group Deployment generation 182 (__consumer_offsets-15) (kafka.coordinator.group.GroupCoordinator)
[2023-05-02 21:04:03,947] INFO [GroupCoordinator 0]: Assignment received from leader for group Deployment for generation 182 (kafka.coordinator.group.GroupCoordinator)
[2023-05-02 21:04:03,966] INFO [GroupCoordinator 0]: Member[group.instance.id None, member.id aiokafka-0.8.0-e331a252-b6cd-4521-9140-6bb70cf9e838] in group Deployment has left, removing it from the group (kafka.coordinator.group.GroupCoordinator)
[2023-05-02 21:04:03,966] INFO [GroupCoordinator 0]: Preparing to rebalance group Deployment in state PreparingRebalance with old generation 182 (__consumer_offsets-15) (reason: removing member aiokafka-0.8.0-e331a252-b6cd-4521-9140-6bb70cf9e838 on LeaveGroup) (kafka.coordinator.group.GroupCoordinator)
</code></pre>
|
<python><python-asyncio><python-multiprocessing><aiokafka>
|
2023-05-02 21:15:43
| 1
| 413
|
Jon Hayden
|
76,158,821
| 5,420,333
|
Blazepose Mediapipe: Differences between Python and Javascript implementation
|
<p>I was building a system that processes poses from videos using Python, plus a JavaScript (React) application that estimates the user's pose from the webcam in real time and compares it with the Python-processed poses.</p>
<p>The thing is that I started encountering very different results for the coordinates... I ran the same video through both applications as a test, and the results are very discrepant. I've tried to find some pattern to transform the data (sometimes the X axis in Python seems to be the Y axis in JavaScript, and vice versa), but after testing more than one scenario I just couldn't get a reliable transformation to match the data.</p>
<p>I'm using the same version of MediaPipe in both applications. I know that the Python and JavaScript MediaPipe implementations can be slightly different... but is it really that different, or am I missing something?</p>
<p>Thank you!</p>
|
<javascript><python><computer-vision><mediapipe><pose>
|
2023-05-02 20:58:01
| 1
| 471
|
Dhiogo Corrêa
|
76,158,814
| 1,056,563
|
How to structure a nested "x if condition else y" so Black will leave it legible?
|
<p>A doubly nested <code>x if condition else y</code> was legible before <code>black</code> got into the fray. It loses the nice indentation I had in place, and now it's just a <em>Wall of Code</em>:</p>
<pre><code>    clause = (
        (f"{self.colname} " if self.colname else "") + self.sql
        if self.sql
        else self.values_filter()
        if self.values is not None
        and len(self.values) > 0
        and (self.colname is not None)
        else self.range_filter()
        if self.range is not None and (self.colname is not None)
        else None
    )
</code></pre>
<p>I'm going to break this into separate pieces for expediency, but for legacy purposes I would like to know whether there's some way to get a legible format for this language construct.</p>
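<p>For reference, the separate-pieces version I have in mind is a plain if/elif chain; here it is as a standalone sketch with the attribute lookups and filter methods turned into parameters (names invented):</p>

```python
def choose_clause(colname, sql, values, range_, values_filter, range_filter):
    # The same logic as the nested conditional expression, one branch per test
    if sql:
        return (f"{colname} " if colname else "") + sql
    elif values is not None and len(values) > 0 and colname is not None:
        return values_filter()
    elif range_ is not None and colname is not None:
        return range_filter()
    else:
        return None
```

<p>But that is a rewrite, not a formatting fix, which is why I'm asking about the conditional-expression form.</p>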
|
<python><python-black>
|
2023-05-02 20:56:50
| 1
| 63,891
|
WestCoastProjects
|
76,158,797
| 1,292,652
|
Why is mypy/PyCharm/etc not detecting type errors for Type[T]?
|
<p>Consider the following code:</p>
<pre><code>from typing import Type, TypeVar

T = TypeVar("T")

def verify(schema: Type[T], data: T) -> None:
    pass

verify(int, "3")
verify(float, "3")
verify(str, "3")
</code></pre>
<p>I would expect the first two <code>verify()</code> calls to show up as a type error, and the last one to not.</p>
<p>However, none of them show up with type errors, in PyCharm and in mypy. I tried enabling every possible flag for strictness and error codes, yet nothing.</p>
<p>How can I get a type-checker to type-check this? Why does it fail?</p>
<p>Libraries like <code>apischema</code> rely on functionality like this for type-checking, e.g., <code>apischema.serialize(MyDataclass, my_dataclass)</code>, but that doesn't work either.</p>
|
<python><pycharm><python-typing><mypy>
|
2023-05-02 20:54:17
| 1
| 4,580
|
Yatharth Agarwal
|
76,158,756
| 21,376,217
|
How to convert integer number and arrays of bytes to each other in C?
|
<p>After using Python, I found that its struct module can package numbers into byte strings or unpack byte strings into numbers.</p>
<p>The code is like this:</p>
<pre class="lang-py prettyprint-override"><code>import struct
struct.pack('>I', 13934)
# b'\x00\x006n'
struct.pack('>Q', 12345678901234567890)
# b'\xabT\xa9\x8c\xeb\x1f\n\xd2'
struct.unpack('>I', b'\x00\x006n')
# (13934,)
struct.unpack('>Q', b'\xabT\xa9\x8c\xeb\x1f\n\xd2')
# (12345678901234567890,)
</code></pre>
<p>How to implement such a function in C?</p>
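<p>(For clarity, the round trip I want to reproduce in C can also be written in Python without <code>struct</code>, using <code>int.to_bytes</code> / <code>int.from_bytes</code>:)</p>

```python
# Big-endian 4-byte pack/unpack, equivalent to struct's '>I' format
packed = (13934).to_bytes(4, byteorder='big')
unpacked = int.from_bytes(b'\x00\x006n', byteorder='big')
```
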
|
<python><c><struct>
|
2023-05-02 20:47:44
| 2
| 402
|
S-N
|
76,158,646
| 13,689,939
|
How to Convert Pandas any Function with mean into SQL (Snowflake)?
|
<p><strong>Problem</strong></p>
<p>I'm converting a Python Pandas data pipeline into a series of views in Snowflake. The transformations are mostly straightforward, but some of them seem to be more difficult in SQL. I'm wondering if there are straightforward methods.</p>
<p><strong>Question</strong></p>
<p>How can I write a Pandas <code>df['col'].any()</code> as simply as possible using SnowSQL? I'm assuming a group by with some aggregate function, but <code>ANY</code> isn't implemented in SnowSQL.</p>
<p><strong>Example</strong></p>
<p>Here's a sample dataframe with the result I'm looking for:</p>
<pre><code>>>> import pandas as pd
>>> import numpy as np
>>> df = pd.DataFrame({'col':[0, 0, 1, 3, 5, np.nan, np.nan]})
>>> any_value = (df['col'] == 3).any()
>>> any_value
True
</code></pre>
|
<python><sql><pandas><migration><snowflake-cloud-data-platform>
|
2023-05-02 20:31:36
| 2
| 986
|
whoopscheckmate
|
76,158,640
| 978,288
|
matplotlib Annotation: how to get bbox only for text
|
<p>I would like to <em>test annotation objects</em> in my graph <em>for overlapping</em> and, if one object covers another, move them accordingly.</p>
<p>However <code>box = ann.get_window_extent(renderer)</code> gets the box for whole Annotation object and it does not help.</p>
<p>Is it possible to get the <code>bbox</code> for a Text object inside the Annotation?</p>
<p>The annotations in my example were created with</p>
<pre><code>ax.annotate(str(point), (0, 0),
            (point[0], point[1]),
            xycoords=xycoords, textcoords=textcoords,
            bbox=bbox_props, size=96, ha=ha, va=va,
            arrowprops=arrowprops)
</code></pre>
<p>where <code>point</code> was of the form <code>(1, 1)</code>, <code>bbox_props = dict(boxstyle="round,pad=0.15", fc="blue", alpha=0.4)</code> and <code>arrowprops=dict(arrowstyle="->", lw=0.8)</code>.</p>
<p>I'm thinking about placing empty annotations and Text objects separately, so the testing would be possible -- but, perhaps, it is not necessary?</p>
<p>(The image shows the points I scattered at a corner of each bbox; they indicate that the boxes are way too big for my overlap test: "+" for (0, 0), "x" for (1, 0) and triangles for (1, 1).
The boxes were accessed with <code>box = ann.get_window_extent(rr).transformed(ax.transData.inverted())</code> -- the transformation was necessary -- and the corners with <code>box.corners()</code>. The labels in the example image are just fine, but only after it was enlarged to full screen.)
<a href="https://i.sstatic.net/baxCU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/baxCU.png" alt="plot with example annotations with bboxes indicated" /></a></p>
|
<python><matplotlib><annotations>
|
2023-05-02 20:30:53
| 0
| 462
|
khaz
|
76,158,635
| 7,247,147
|
How to efficiently filter out duplicate objects in a list based on multiple properties in Python?
|
<p>I'm working on a Python project where I have a list of custom objects, and I need to filter out duplicates based on multiple properties of these objects. Each object has three properties: <code>id</code>, <code>name</code>, and <code>timestamp</code>. I want to consider an object as a duplicate if both the <code>id</code> and <code>name</code> properties match another object in the list. The <code>timestamp</code> property should not be considered when determining duplicates.</p>
<p>Here's an example of what the custom object class looks like:</p>
<pre class="lang-py prettyprint-override"><code>class CustomObject:
    def __init__(self, id, name, timestamp):
        self.id = id
        self.name = name
        self.timestamp = timestamp
</code></pre>
<p>And a sample list of objects:</p>
<pre class="lang-py prettyprint-override"><code>data = [
    CustomObject(1, "Alice", "2023-01-01"),
    CustomObject(2, "Bob", "2023-01-02"),
    CustomObject(1, "Alice", "2023-01-03"),
    CustomObject(3, "Eve", "2023-01-04"),
    CustomObject(2, "Bob", "2023-01-05"),
]
</code></pre>
<p>In this case, I want to remove the duplicates and keep the objects with the earliest <code>timestamp</code>.</p>
<p>The expected output should be:</p>
<pre class="lang-py prettyprint-override"><code>[
    CustomObject(1, "Alice", "2023-01-01"),
    CustomObject(2, "Bob", "2023-01-02"),
    CustomObject(3, "Eve", "2023-01-04"),
]
</code></pre>
<p>I know that I can use a loop to compare each object with every other object in the list, but I'm concerned about the performance, especially when the list gets large. Is there a more efficient way to achieve this in Python, possibly using built-in functions or libraries?</p>
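<p>For reference, the quadratic loop I'd like to avoid looks roughly like this (note it relies on my list already being sorted by timestamp, so the first occurrence of each <code>(id, name)</code> pair is the earliest):</p>

```python
class CustomObject:
    def __init__(self, id, name, timestamp):
        self.id = id
        self.name = name
        self.timestamp = timestamp

def dedupe_quadratic(objects):
    # Compare each object against everything kept so far: O(n^2)
    kept = []
    for obj in objects:
        if not any(k.id == obj.id and k.name == obj.name for k in kept):
            kept.append(obj)
    return kept

result = dedupe_quadratic([
    CustomObject(1, "Alice", "2023-01-01"),
    CustomObject(2, "Bob", "2023-01-02"),
    CustomObject(1, "Alice", "2023-01-03"),
    CustomObject(3, "Eve", "2023-01-04"),
    CustomObject(2, "Bob", "2023-01-05"),
])
```

<p>This gives the expected output, but I'd like something that scales better.</p>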
|
<python>
|
2023-05-02 20:29:56
| 3
| 1,115
|
user7247147
|
76,158,570
| 7,090,501
|
Break list of words into whole word chunks under a max token size
|
<p>Let's say I have a long list of names that I would like to feed into an LLM in chunks. How can I split up my list of names so that each group is a list with <code>< max_tokens</code> items without repeating or breaking up any individual entries in the list? I know from the <a href="https://platform.openai.com/docs/guides/embeddings/how-can-i-tell-how-many-tokens-a-string-has-before-i-embed-it" rel="nofollow noreferrer">OpenAI docs</a> that I can turn my list into a big string and use <code>tiktoken</code> to truncate the string to a token count, but I don't know how to make sure each chunk contains only whole entries.</p>
<pre class="lang-py prettyprint-override"><code>import tiktoken
city_reprex = ['The Colony', 'Bridgeport', 'Toledo', 'Barre', 'Newburyport', 'Dover', 'Jonesboro', 'South Haven', 'Ogdensburg', 'Berkeley', 'Ray', 'Sugar Land', 'Telluride', 'Erwin', 'Milpitas', 'Jonesboro', 'Orem', 'Winnemucca', 'Calabash', 'Sugarcreek']
max_tokens = 25
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
prompt = ', '.join(city_reprex)
prompt_size_in_tokens = len(encoding.encode(prompt))
record_encoding = encoding.encode(prompt)
# How can I get my chunks as close to the max size as possible while also making sure each item in the chunk is a whole item in the list?
print(f"Chunk 1: --> {encoding.decode(record_encoding[:max_tokens])}")
print(f"Chunk 2: --> {encoding.decode(record_encoding[max_tokens:max_tokens*2])}")
</code></pre>
<p>Output:</p>
<pre><code>Chunk 1: --> The Colony, Bridgeport, Toledo, Barre, Newburyport, Dover, Jonesboro, South Haven, Ogd
Chunk 2: --> ensburg, Berkeley, Ray, Sugar Land, Telluride, Erwin, Milpitas, Jonesboro, Orem
</code></pre>
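<p>The shape of the solution I'm imagining is a greedy accumulator over whole entries, parameterized by some token-counting function; sketched here with a stand-in counter (<code>len</code>, i.e. one "token" per character), since I'm not sure of the exact <code>tiktoken</code> incantation:</p>

```python
def chunk_items(items, max_tokens, count_tokens, sep=', '):
    # Greedily pack whole items so each rendered chunk stays within max_tokens
    chunks, current = [], []
    for item in items:
        candidate = current + [item]
        if current and count_tokens(sep.join(candidate)) > max_tokens:
            chunks.append(current)
            current = [item]
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

# Stand-in counter; len(encoding.encode(...)) from tiktoken would go here
chunks = chunk_items(['The Colony', 'Bridgeport', 'Toledo'], max_tokens=25, count_tokens=len)
```

<p>Is something like this the right approach, or is there a more idiomatic way with <code>tiktoken</code> itself?</p>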
|
<python><openai-api>
|
2023-05-02 20:20:26
| 1
| 333
|
Marshall K
|
76,158,457
| 11,498,718
|
Why is my Python class memoizing by default?
|
<p>Context: I am running Python 2.7 (I am restricted to this version because reasons.)</p>
<p>I have a Python class, that looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>class MyClient(object):
    def __init__(self, my_dict):
        self.value = my_dict

    def run(self):
        # do some stuff and update self.value
        ...
</code></pre>
<p>In a Python shell, if I do:</p>
<pre class="lang-py prettyprint-override"><code>test_dict = {"foo": "bar"}
x = MyClient(test_dict).run()
y = MyClient(test_dict).run()
</code></pre>
<p>Y is a cached version of X! I know that it's returning a cached version because <code>self.value</code> is updated after by <code>run()</code>. When I do <code>y = MyClient(test_dict)</code>, the output shows me a value that would only appear if the code ran once before and updated <code>self.value</code>.</p>
<p>I haven't ever come across this behavior before... I'm not explicitly using a cache on any properties.</p>
<p>However, if I do:</p>
<pre class="lang-py prettyprint-override"><code>test_dict = {"foo": "bar"}
x = MyClient(test_dict)
y = MyClient({"foo": "bar"})
</code></pre>
<p>Y is NOT the cached version of X.</p>
<p>I'm wondering if the Python class is trying to be helpful by seeing that I initialized a class with the same object in memory, and returning the already existing one rather than creating a new one?</p>
<p>I have looked at some other SO questions, but all of them are related to implementing memoization in a Python class. I can't find anything about Python classes self-memoizing (which I believe is my issue here).</p>
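<p>For what it's worth, a quick check I ran shows that both instances constructed from <code>test_dict</code> share the very same dict object, so mutations through one are visible through the other:</p>

```python
class MyClient(object):
    def __init__(self, my_dict):
        self.value = my_dict

test_dict = {"foo": "bar"}
x = MyClient(test_dict)
y = MyClient(test_dict)

shared = x.value is y.value is test_dict  # all three names point at one dict
x.value["foo"] = "changed"
seen_through_y = y.value["foo"]           # the "cached" value appears here too
```

<p>I'm not sure whether this aliasing fully explains what I'm seeing, though.</p>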
|
<python><python-2.7>
|
2023-05-02 20:01:33
| 1
| 494
|
Klutch27
|
76,158,220
| 10,603,191
|
Python Playwright, how to fill data into these boxes
|
<p>I am writing a Python script using the Playwright package to auto-complete a questionnaire. <a href="https://ee.kobotoolbox.org/x/cmKGMgos" rel="nofollow noreferrer">Here is a dummy version of the form</a>, and here is my code so far:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio

from playwright.async_api import async_playwright

async def form_entry():
    async with async_playwright() as pw:
        browser = await pw.chromium.launch(
            headless=False
        )
        page = await browser.new_page()
        await page.goto('https://ee.kobotoolbox.org/x/cmKGMgos')
        await page.wait_for_timeout(3000)
        await page.get_by_label("ENTER YOUR PASSWORD:").fill("123")
        await page.get_by_label('Q4 (Oct-Nov)').check()
        await page.get_by_label('A').check()
        ### code to enter values in the 6 boxes in section 1.1
        await browser.close()

if __name__ == '__main__':
    asyncio.run(form_entry())
</code></pre>
<p>This code can input/select the correct responses to the first few questions (Quarter, Year, Enter your password, name of the company), but I can't get it to work for the input boxes in section 1.1 (3 boxes each for <code>NO. OF MEN:</code> & <code>NO. OF WOMEN:</code>)</p>
<p>In the html I can see the <code><input ...</code> with <code>type="number"</code> and <code>name="/a8v6GUnyWqGLNQoNqnNkps/level_1/level_1_1/cc_1a_num_men_m1"</code>, so I would have thought that</p>
<pre class="lang-py prettyprint-override"><code>await page.get_by_role('number',name='/a8v6GUnyWqGLNQoNqnNkps/level_1/level_1_1/cc_1a_num_men_m1').fill(12)
</code></pre>
<p>would work, but it doesn't.</p>
<p>Can anyone advise the correct syntax to fill numbers into these 6 boxes? Like this:
<a href="https://i.sstatic.net/0WRgP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0WRgP.png" alt="Example ideal output" /></a></p>
<p>Thanks</p>
|
<python><web-scraping><playwright><playwright-python>
|
2023-05-02 19:23:21
| 1
| 459
|
Chris Browne
|
76,158,219
| 2,725,810
|
Method inheritance for DRF views
|
<p>Consider:</p>
<pre class="lang-py prettyprint-override"><code>class CourseListViewBase():
    def list(self, request, format=None):
        ...  # whatever

class CourseListView(generics.ListAPIView, CourseListViewBase):
    permission_classes = [IsAuthenticated]

    def list(self, request, format=None):  # Why can't I omit this?
        return CourseListViewBase.list(self, request, format)

class CourseListViewGuest(generics.ListAPIView, CourseListViewBase):
    permission_classes = []
    authentication_classes = []

    def list(self, request, format=None):  # Why can't I omit this?
        return CourseListViewBase.list(self, request, format)
</code></pre>
<p>Why do I have to define the <code>list</code> method in the derived classes instead of relying on it being inherited from the base class? If I don't define it, I get a warning like this:</p>
<blockquote>
<p>AssertionError: 'CourseListViewGuest' should either include a <code>queryset</code> attribute, or override the <code>get_queryset()</code> method.</p>
</blockquote>
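<p>To check my understanding of the method resolution order involved, here is a plain-Python sketch of the same base ordering (no DRF; the library class name is invented):</p>

```python
class CourseListViewBase:
    def list(self):
        return "base implementation"

class LibraryListView:  # stands in for generics.ListAPIView, which also defines list()
    def list(self):
        return "library implementation"

class CourseListView(LibraryListView, CourseListViewBase):
    pass

resolved = CourseListView().list()  # the library's list() wins in the MRO
```

<p>So if both parents define <code>list</code>, the first base in the class statement wins, which might be related to my problem, but I don't see how it produces the <code>queryset</code> assertion.</p>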
|
<python><django><django-rest-framework><multiple-inheritance>
|
2023-05-02 19:23:18
| 1
| 8,211
|
AlwaysLearning
|
76,158,147
| 5,091,720
|
Pandas - groupby ValueError: Cannot subset columns with a tuple with more than one element. Use a list instead
|
<p>I updated my Pandas from (I think) 1.5.1 to 2.0.1. Anyhow, I started getting an error on some code that worked just fine before.</p>
<pre><code>df = df.groupby(df['date'].dt.date)['Lake', 'Canyon'].mean().reset_index()
</code></pre>
<blockquote>
<p>Traceback (most recent call last):
File "f:...\My_python_file.py", line 37, in &lt;module&gt;
df = df.groupby(df['date'].dt.date)['Lake', 'Canyon'].mean().reset_index()
File "C:\Users...\Local\Programs\Python\Python310\lib\site-packages\pandas\core\groupby\generic.py", line 1767, in <code>__getitem__</code>
raise ValueError(
ValueError: Cannot subset columns with a tuple with more than one element. Use a list instead.</p>
</blockquote>
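<p>For context, here is a minimal toy reproduction I put together (column values invented); selecting the two columns with a list instead of a tuple works in pandas 2.x:</p>

```python
import pandas as pd

df = pd.DataFrame({'g': ['a', 'a', 'b'], 'Lake': [1.0, 3.0, 5.0], 'Canyon': [2.0, 4.0, 6.0]})
# List-based selection; the bare ['Lake', 'Canyon'] tuple form raises in pandas 2.x
out = df.groupby('g')[['Lake', 'Canyon']].mean().reset_index()
```

<p>Is the extra pair of brackets really all that changed, or is there more to it?</p>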
|
<python><pandas><dataframe>
|
2023-05-02 19:13:20
| 2
| 2,363
|
Shane S
|
76,158,142
| 7,339,624
|
What is the point of class-level type hints in Python?
|
<p>I'm trying to understand the point of class-level type hints. I know that we can use type hints in the <code>__init__</code> method of a class, like this:</p>
<pre><code>class Foo:
    def __init__(self, x: int):
        self.x = x
</code></pre>
<p>However, I came across another way of defining type hints at the class level, as shown in the following example:</p>
<pre><code>class Bar:
    x: int

    def __init__(self, x: int):
        self.x = x
</code></pre>
<p>What is the purpose of using class-level type hints like this? Do they provide any additional benefits or information compared to using type hints only in the <code>__init__</code> method?</p>
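<p>One concrete difference I noticed while experimenting: the class-level annotation is recorded on the class itself and is visible at runtime, while the parameter annotation only lives on <code>__init__</code>:</p>

```python
class Foo:
    def __init__(self, x: int):
        self.x = x

class Bar:
    x: int

    def __init__(self, x: int):
        self.x = x

bar_hints = Bar.__annotations__                   # {'x': int} at class level
foo_hints = getattr(Foo, '__annotations__', {})   # nothing at class level
init_hints = Foo.__init__.__annotations__         # the hint lives on the function
```

<p>But I don't know whether this runtime visibility is the whole story, or whether type checkers also treat the two forms differently.</p>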
|
<python><python-typing>
|
2023-05-02 19:12:41
| 1
| 4,337
|
Peyman
|
76,158,139
| 8,028,981
|
Getting WinError 6 (invalid handle) when trying to mute the output of an f2py process
|
<p>I want to suppress the console output from an f2py extension module. Standard approaches like the following don't work, presumably because the Fortran output comes from somewhere other than Python's standard print machinery (?), so the Fortran output is still printed to the console.</p>
<pre><code>from contextlib import redirect_stdout
import io

with redirect_stdout(io.StringIO()) as f:
    call_my_f2py_module()
</code></pre>
<p>Until recently I have been working with this, and it worked:</p>
<pre><code>class LoggerLowLevelMuted():
    """A logger to mute low-level output from Fortran modules.
    Inspired by https://stackoverflow.com/a/17753573/8028981"""

    def __init__(self, filename=None):
        self.filename = filename
        if filename is None:
            self.filename = os.devnull
        self.stdchannel = sys.__stdout__

    def __enter__(self):
        self.dest_file = open(self.filename, 'a')
        self.oldstdchannel = os.dup(self.stdchannel.fileno())
        os.dup2(self.dest_file.fileno(), self.stdchannel.fileno())
        os.close(self.stdchannel.fileno())

    def __exit__(self, exc_type, exc_val, exc_tb):
        os.dup2(self.oldstdchannel, self.stdchannel.fileno())
        self.dest_file.close()
        os.close(self.oldstdchannel)
</code></pre>
<p>and then:</p>
<pre><code>with log.LoggerLowLevelMuted(filename=nfmds_logfile):
    call_my_f2py_module()
</code></pre>
<p>To be honest, I have no clue what that class does exactly, but it worked in the sense that the Fortran output from the f2py module was suppressed. It still works under Linux, but under Windows I now get the following error (only since recently):</p>
<blockquote>
<p>OSError: [WinError 6] Invalid handle</p>
</blockquote>
<p>To reproduce the error, you can run (under Windows):</p>
<pre><code>with LoggerLowLevelMuted(filename="testout.txt"):
    print("test")
</code></pre>
<p>Can anybody see what is going wrong and what I can do to fix the problem?</p>
<p>The topic is related to <a href="https://stackoverflow.com/q/977840/8028981">this question</a>.</p>
<p>Edit: Another related post is <a href="https://stackoverflow.com/a/8825434/8028981">this one</a>.</p>
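<p>Edit 2: for context, my current (possibly incomplete) mental model is that the class does OS-level file-descriptor redirection, which on its own can be sketched like this. Note the sketch deliberately leaves out the <code>os.close(self.stdchannel.fileno())</code> call that I suspect is involved in the Windows failure:</p>

```python
import os
import sys
import tempfile

# Redirect the OS-level stdout descriptor (fd 1) into a temp file; this also
# captures output written directly to fd 1, bypassing sys.stdout (as Fortran does)
target = tempfile.NamedTemporaryFile(mode='w+', delete=False)
sys.stdout.flush()
saved_fd = os.dup(1)                 # remember where fd 1 pointed
os.dup2(target.fileno(), 1)          # point fd 1 at the temp file
os.write(1, b"low-level output\n")   # stands in for the Fortran chatter
os.dup2(saved_fd, 1)                 # restore the original fd 1
os.close(saved_fd)
target.seek(0)
captured = target.read()
target.close()
os.unlink(target.name)
```

<p>If this is indeed all the class needs to do, maybe the minimal version above would sidestep the invalid-handle problem, but I'd like to understand the error.</p>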
|
<python><stdout><f2py>
|
2023-05-02 19:12:24
| 0
| 1,240
|
Amos Egel
|
76,158,045
| 6,500,048
|
Split column list into rows without duplicating data
|
<p>I have a dataframe where the first column contains a list. How can I iterate through each list and add a value to the relevant pre-defined columns?</p>
<pre><code>workflow                cost    cam  gdp  ott  pdl
['cam', 'gdp', 'ott']   $2,346
['pdl', 'ott']          $1,200
</code></pre>
<p>should convert to:</p>
<pre><code>workflow                cost    cam  gdp  ott  pdl
['cam', 'gdp', 'ott']   $2,346  782  782  782
['pdl', 'ott']          $1,200            600  600
</code></pre>
<p>I can get the length of the list, but I can't work out how to iterate over the list in order to match it to a column heading. Basically the cost is simply split evenly between the number of processes in the list.</p>
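A sketch of one way to do the even split, assuming the workflow column holds Python lists and cost holds strings like "$2,346" (both inferred from the example rows above):

```python
import pandas as pd

df = pd.DataFrame({
    "workflow": [["cam", "gdp", "ott"], ["pdl", "ott"]],
    "cost": ["$2,346", "$1,200"],
})

# parse "$2,346" -> 2346.0, then divide evenly across the processes in each list
cost = df["cost"].str.replace("[$,]", "", regex=True).astype(float)
share = cost / df["workflow"].str.len()

for col in ["cam", "gdp", "ott", "pdl"]:
    df[col] = [s if col in wf else None for wf, s in zip(df["workflow"], share)]
```

Rows whose workflow does not contain a given process are left empty (NaN) in that column.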
|
<python><pandas>
|
2023-05-02 18:58:29
| 6
| 1,279
|
iFunction
|
76,158,026
| 8,283,848
|
What is the proper way to use raw sql in Django with params?
|
<p>Consider that I have a <em>"working"</em> PostgreSQL query -</p>
<pre><code>SELECT sum((cart->> 'total_price')::int) as total_price FROM core_foo;
</code></pre>
<p>I want to use the raw query within Django, and I used the below code to get the result-</p>
<pre class="lang-py prettyprint-override"><code>from django.db import connection
with connection.cursor() as cursor:
query = """SELECT sum((cart->> 'total_price')::int) as total_price FROM core_foo;"""
cursor.execute(query, [])
row = cursor.fetchone()
print(row)
</code></pre>
<p>But, I need to make this hard-coded query into a dynamic one with <code>params</code> (maybe, to prevent SQL injections). So, I converted the Django query into -</p>
<pre class="lang-py prettyprint-override"><code>from django.db import connection
with connection.cursor() as cursor:
query = 'SELECT sum((%(field)s->> %(key)s::int)) as foo FROM core_foo;'
kwargs = {
'field': 'cart',
'key': 'total_price',
}
cursor.execute(query, kwargs)
row = cursor.fetchone()
print(row)
</code></pre>
<p>Unfortunately, I'm getting the following error -</p>
<pre><code>DataError: invalid input syntax for type integer: "total_price"
LINE 1: SELECT sum(('cart'->> 'total_price'::int)) as foo FROM core_...
</code></pre>
<p>Note that; the <em><strong><code>field</code></strong></em> ( here the value is <code>cart</code>) input gets an additional quote symbol during the execution, which doesn't match the syntax.</p>
<hr />
<h3>Question</h3>
<p>What is the proper way to pass <em><code>kwargs</code></em> to the <code>cursor.execute(...)</code></p>
<ol>
<li>with single/double quotes?</li>
<li>without single/double quotes?</li>
</ol>
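Parameter substitution quotes every param as a value (a string literal), which is exactly why <code>cart</code> arrives wrapped in single quotes; identifiers such as column names cannot be passed as params. A common workaround, sketched here under the assumption that the set of valid column names is known up front:

```python
ALLOWED_JSON_FIELDS = {"cart"}  # hypothetical whitelist of JSONB column names

def build_total_query(field: str) -> str:
    """Interpolate a whitelisted identifier; keep the JSON key as a real param."""
    if field not in ALLOWED_JSON_FIELDS:
        raise ValueError(f"unexpected field: {field}")
    # safe to interpolate only because the name was validated above
    return f'SELECT sum(("{field}" ->> %(key)s)::int) AS total_price FROM core_foo'

query = build_total_query("cart")
# with connection.cursor() as cursor:
#     cursor.execute(query, {"key": "total_price"})  # values still go through params
```

psycopg2 also offers `psycopg2.sql.Identifier` for composing identifiers safely, which avoids hand-rolled quoting.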
|
<python><sql><django><postgresql><django-3.0>
|
2023-05-02 18:55:58
| 1
| 89,380
|
JPG
|
76,157,949
| 13,381,632
|
Update Python Package in Mamba vs. Conda
|
<p>I am attempting to update a Python package in Mamba, specifically to a version of <code>boto3=1.26.63</code>. I am trying to do so using the syntax</p>
<pre><code>mamba update boto3=1.26.63
</code></pre>
<p>but receive an error stating:</p>
<blockquote>
<p>Encountered problems while solving: nothing provides requested boto3 1.26.63</p>
</blockquote>
<p>Can someone please confirm the syntax for upgrading packages using Mamba? I am familiar with how to do so using Conda, but I am attempting to build out a software development environment using Mamba and need to confirm the appropriate syntax...any assistance is most appreciated.</p>
|
<python><anaconda><command-line-interface><boto3><mamba>
|
2023-05-02 18:44:07
| 0
| 349
|
mdl518
|
76,157,885
| 864,245
|
Match pair of numbers against dictionary containing pairs of numbers
|
<p>(Using Python 3.9, but can upgrade to newer)</p>
<p>I have two numbers - CPU (in vCPU) and RAM (in GB) - the resource requirements of an application.</p>
<p>From a list of instance sizes, I am trying to find the closest match that my application can run on - as long as the instance has enough CPU and RAM.</p>
<p>Here is the specific list of instances I want to compare against:</p>
<pre class="lang-py prettyprint-override"><code>instance_sizes = [
    {"name": "t3a.nano", "cpu": 2, "mem": 0.5},
    {"name": "t3a.micro", "cpu": 2, "mem": 1},
    {"name": "t3a.small", "cpu": 2, "mem": 2},
    {"name": "t3a.medium/c5a.large", "cpu": 2, "mem": 4},
    {"name": "t3a/m5a.large", "cpu": 2, "mem": 8},
    {"name": "c5a.xlarge", "cpu": 4, "mem": 8},
    {"name": "t3a/m5a.xlarge", "cpu": 4, "mem": 16},
    {"name": "c5a.2xlarge", "cpu": 8, "mem": 16},
    {"name": "t3a/m5a.2xlarge", "cpu": 8, "mem": 32},
]
</code></pre>
<ul>
<li><code>cpu</code> 1.8, <code>mem</code> 6 should return <code>t3a/m5a.large</code>
<ul>
<li>(this has 2 vCPU, and 8GB RAM. the <code>t3a.medium/c5a.large</code> doesn't have enough RAM with only 4GB)</li>
</ul>
</li>
<li><code>cpu</code> 0.1, <code>mem</code> 6 should return <code>t3a/m5a.large</code>
<ul>
<li>(this has 2 vCPU, and 8GB RAM. the <code>t3a.medium/c5a.large</code> doesn't have enough RAM with only 4GB. it's still oversized, e.g. 0.1 CPU to 2, but it's the smallest instance capable of running the application)</li>
</ul>
</li>
<li><code>cpu</code> 2.1, <code>mem</code> 6 should return <code>c5a.xlarge</code></li>
<li><code>cpu</code> 6, <code>mem</code> 16 should return <code>c5a.2xlarge</code></li>
</ul>
<p>Thanks in advance</p>
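For reference, the rule described by the bullet points above (smallest instance whose CPU and RAM both cover the requirement) can be sketched as:

```python
instance_sizes = [
    {"name": "t3a.nano", "cpu": 2, "mem": 0.5},
    {"name": "t3a.micro", "cpu": 2, "mem": 1},
    {"name": "t3a.small", "cpu": 2, "mem": 2},
    {"name": "t3a.medium/c5a.large", "cpu": 2, "mem": 4},
    {"name": "t3a/m5a.large", "cpu": 2, "mem": 8},
    {"name": "c5a.xlarge", "cpu": 4, "mem": 8},
    {"name": "t3a/m5a.xlarge", "cpu": 4, "mem": 16},
    {"name": "c5a.2xlarge", "cpu": 8, "mem": 16},
    {"name": "t3a/m5a.2xlarge", "cpu": 8, "mem": 32},
]

def smallest_fit(cpu: float, mem: float) -> str:
    """Smallest instance (least CPU, then least RAM) that covers both needs."""
    candidates = [s for s in instance_sizes if s["cpu"] >= cpu and s["mem"] >= mem]
    if not candidates:
        raise ValueError("no instance is large enough")
    return min(candidates, key=lambda s: (s["cpu"], s["mem"]))["name"]
```

This reproduces all four expected answers listed above; whether "closest" should weigh CPU before RAM is an assumption inferred from those examples.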
|
<python><python-3.x>
|
2023-05-02 18:35:02
| 3
| 1,316
|
turbonerd
|
76,157,876
| 1,907,631
|
Can you read HDF5 dataset directly into SharedMemory with Python?
|
<p>I need to share a large dataset from an HDF5 file between multiple processes and, for a set of reasons, mmap is not an option.</p>
<p>So I read it into a numpy array and then copy this array into shared memory, like this:</p>
<pre><code>import h5py
import numpy as np
from multiprocessing import shared_memory

dataset = h5py.File(args.input)['data']

shm = shared_memory.SharedMemory(
    name=memory_label,
    create=True,
    size=dataset.nbytes
)
shared_tracemap = np.ndarray(dataset.shape, buffer=shm.buf)
shared_tracemap[:] = dataset[:]
</code></pre>
<p>But this approach doubles the amount of required memory, because I need to use a temporary variable. Is there a way to read the dataset directly into SharedMemory?</p>
|
<python><shared-memory><hdf5><h5py><hdf>
|
2023-05-02 18:33:32
| 1
| 319
|
monday
|
76,157,864
| 8,040,369
|
Python script run from AirFlow job takes UTC time
|
<p>I have an Airflow job scheduled and running. It calls a Python script where I am using the <code>datetime.now()</code> function to get the date and time values and store them in a SQL table.</p>
<p>Since it runs via Airflow, the time that gets saved in the SQL table is in UTC.</p>
<p>Is there a way to pass the current time value through AirFlow or how to get local (+5.30 in my case) time in the SQL table?</p>
<p>The time needs to be in a proper format so that it can be used for other calculations.</p>
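Not Airflow-specific, but one option (assuming Python 3.9+, where `zoneinfo` is in the standard library) is to make the timestamp timezone-aware explicitly instead of relying on the worker's clock:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

IST = ZoneInfo("Asia/Kolkata")   # UTC+5:30
now_local = datetime.now(IST)    # timezone-aware local time

# format assumed here; adjust to whatever the SQL column expects
stamp = now_local.strftime("%Y-%m-%d %H:%M:%S")
```

Storing UTC and converting on read is the usual alternative if the table is shared across timezones.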
|
<python><datetime><airflow>
|
2023-05-02 18:32:20
| 0
| 787
|
SM079
|
76,157,847
| 2,056,201
|
Sharing data between React elements through a Route
|
<p>I am trying to exchange several data variables between multiple elements in React, essentially the equivalent of passing a pointer to a struct in C++ between various classes/functions</p>
<p>I am following this tutorial
<a href="https://react.dev/learn/sharing-state-between-components" rel="nofollow noreferrer">https://react.dev/learn/sharing-state-between-components</a></p>
<p>I am using flask on backend to serve the /data route. This part works without issues.</p>
<p>However the error I get in console is <code>act-dom.production.min.js:189 TypeError: "activeState" is read-only</code></p>
<p>How can I modify this code so that I can set <code>activeState</code> variable from within ToolbarElement?</p>
<p>Thanks</p>
<p>Here is my code:</p>
<pre><code>function App() {
  activeState = {
    graph_data: 0,
    other_data: 1,
  }
  const [activeState, setActiveState] = useState();

  return (
    <div className="App">
      <div className="App-body">
        <div className="toolbar-element" style={{ top: "50px", left: "0px" }}>
          <ToolbarElement
            active_state={activeState}
            updateState={(state) => setActiveState(state)}
          />
        </div>
        <div className="graph-element" style={{ top: "50px", right: "0px" }}>
          <GraphElement
            active_state={activeState}
            updateState={(state) => setActiveState(state)}
          />
        </div>
      </div>
    </div>
  );
}

export default App;
</code></pre>
<p>ToolbarElement.js</p>
<pre><code>function ToolbarElement({
  active_state,
  updateState
}) {
  const [data, setdata] = useState({
    lst: [],
  });

  const handleButtonClick = () => {
    console.log('Button clicked!');
    fetch("/data").then((res) =>
      res.json().then((data) => {
        // Setting a data from api
        setdata({
          lst: data,
        });
      })
    );
    active_state.graph_data = data.lst[0];
    updateState(active_state);
  };

  return (
    <div style={{ height: '33.33%', backgroundColor: '#F0F0F0' }}>
      <button onClick={handleButtonClick}>Click me!</button>
    </div>
  );
}

export default ToolbarElement;
</code></pre>
<p>GraphElement.js</p>
<pre><code>function GraphElement({
  active_state,
  updateState
}) {
  return (
    <div style={{ height: '33.33%', backgroundColor: '#E8E8E8' }}>
      <p>{active_state.graph_data}</p>
    </div>
  );
}

export default GraphElement;
</code></pre>
|
<javascript><python><html><reactjs><flask>
|
2023-05-02 18:28:54
| 2
| 3,706
|
Mich
|
76,157,845
| 21,113,865
|
Can you force a class derived from a base class to define specific member variables in Python?
|
<p>If I have a class say:</p>
<pre><code>class BaseClassOnly():
    def __init__(self):
        self._foo = None

    def do_stuff(self):
        print("doing stuff with foo" + self._foo)
</code></pre>
<p>I want to force all classes derived from BaseClassOnly to provide a value for <code>self._foo</code> so that the inherited function do_stuff() will be able to use it. Is there a way to ensure that a class inheriting from BaseClassOnly results in an error if the variable <code>self._foo</code> is not set in <code>__init__()</code>?</p>
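One way to get this behavior (a sketch of one option; abstract properties via `abc` are an alternative) is a metaclass whose `__call__` checks the attribute right after `__init__` runs — note the base class here drops the `_foo = None` default so a missing value can be detected:

```python
class RequireFoo(type):
    """Metaclass that rejects instances whose __init__ left _foo unset."""
    def __call__(cls, *args, **kwargs):
        instance = super().__call__(*args, **kwargs)
        if getattr(instance, "_foo", None) is None:
            raise TypeError(f"{cls.__name__} must set self._foo in __init__()")
        return instance

class BaseClassOnly(metaclass=RequireFoo):
    def do_stuff(self):
        print("doing stuff with foo " + self._foo)

class Good(BaseClassOnly):
    def __init__(self):
        self._foo = "bar"

class Bad(BaseClassOnly):
    def __init__(self):
        pass

Good()        # passes the check
# Bad()       # would raise TypeError at instantiation time
```

The check fires at instantiation rather than at class definition, since `_foo` is an instance attribute that only exists once `__init__` has run.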
|
<python><inheritance><base-class><python-class><abstract-base-class>
|
2023-05-02 18:28:21
| 1
| 319
|
user21113865
|
76,157,763
| 16,898,766
|
How do I set the figsize and dpi in matplotlib so that the image has the required size and quality?
|
<p>I need to combine the images into one. I would like the resulting image to consist of 7 rows and 9 columns. Each component image has dimensions of 256 x 256. I would like these images not to be resized, that is, I would like the resulting image to have dimensions of at least 9 * 256 x 7 * 256 = 2304 x 1792. The easiest way for me to do this would be using numpy and concatenating the images together using, for example, np.concatenate. However, I decided to use matplotlib, because it is much easier to add labels to the axes with it.</p>
<p>Unfortunately, I'm having trouble setting the right figsize and dpi values.</p>
<p>I've found that a good way is to set the dpi parameter to 100, and for example the height as the height in pixels divided by the dpi value. As below.</p>
<pre><code>cols = 9
rows = 7
img_size = (256, 256)
dpi = 100
fig, axes = plt.subplots(rows, cols, figsize=(img_size[0] * cols // dpi, img_size[1] * rows // dpi), dpi=dpi)
</code></pre>
<ol>
<li><p>Unfortunately, when I save this image, its dimensions are 2359 x 1720.</p>
</li>
<li><p>My second problem is that despite setting wspace=0 and hspace=0, there are gaps between the subplots.</p>
</li>
</ol>
<pre><code>plt.subplots_adjust(wspace=0, hspace=0, left=0, bottom=0, right=1, top=1)
</code></pre>
<p><a href="https://i.sstatic.net/mosWs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mosWs.png" alt="enter image description here" /></a></p>
<ol start="3">
<li>In addition, I do not know what is the best format in which I should save this image, so that the text is in good quality and so that the quality of the image does not degrade significantly when zooming (I will be using this image in Latex).</li>
</ol>
<p>Below I attach a simple example for reproduction.</p>
<pre><code>from PIL import Image
import numpy as np
import matplotlib.pyplot as plt

cols = 9
rows = 7
img_size = (256, 256)
dpi = 100

fig, axes = plt.subplots(rows, cols, figsize=(img_size[0] * cols // dpi, img_size[1] * rows // dpi), dpi=dpi)

for row_idx, row in enumerate(axes):
    images = [Image.new('RGB', (256, 256), color='grey') for i in range(cols)]
    for col_idx, (col, image) in enumerate(zip(row, images)):
        col.imshow(image)
        col.spines['top'].set_visible(False)
        col.spines['right'].set_visible(False)
        col.spines['bottom'].set_visible(False)
        col.spines['left'].set_visible(False)
        col.set_xticks([])
        col.set_yticks([])
        if col_idx == 0:
            col.set_ylabel('name', fontsize=30)

plt.subplots_adjust(wspace=0, hspace=0, left=0, bottom=0, right=1, top=1)
plt.savefig('example.png', bbox_inches='tight', format='png')
plt.show()
</code></pre>
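Two details in the snippet above likely explain the size mismatch: the `//` truncates the figsize to whole inches (2304/100 becomes 23, i.e. 2300 px), and `bbox_inches='tight'` re-crops the canvas on save. Using float division and saving without the tight bounding box should give the intended pixel size — a sketch:

```python
import os
import tempfile

import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

cols, rows, px, dpi = 9, 7, 256, 100

# float division keeps the exact size: 9*256/100 = 23.04 in -> 2304 px at dpi=100
fig, axes = plt.subplots(rows, cols,
                         figsize=(cols * px / dpi, rows * px / dpi), dpi=dpi)
plt.subplots_adjust(wspace=0, hspace=0, left=0, bottom=0, right=1, top=1)

out = os.path.join(tempfile.gettempdir(), "example.png")
fig.savefig(out, dpi=dpi)  # no bbox_inches="tight": it would re-crop the canvas
```

For the LaTeX question: a vector format such as PDF keeps labels crisp when zooming, while the embedded raster images stay at their native resolution.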
|
<python><python-3.x><matplotlib><image-processing><latex>
|
2023-05-02 18:17:06
| 1
| 333
|
nietoperz21
|
76,157,724
| 3,103,957
|
__call__() method in meta-class in python
|
<p>I am having the following piece of code.</p>
<pre><code>class CustomMetaClass(type):
    def __call__(self, *args, **kwargs):
        print("Custom call method is invoked from custom meta class.")

class Sample(metaclass=CustomMetaClass):
    def method(self):
        print("Printing from a method")
</code></pre>
<p>I have a custom meta-class and a normal class (Sample). 'Sample' is associated with my custom meta-class.</p>
<p>Since everything is an object in Python, whenever the interpreter comes across a class definition, an instance of 'type' is created for the user-defined class. This is done by calling the <code>__call__()</code> of the metaclass.</p>
<p>In the above case, the Sample class is associated with my custom metaclass, and hence the print statement should have been executed; but it does not actually run.</p>
<p>But when I create an instance of Sample (<code>Sample()</code>), the print statement is invoked. As far as I know (from a different Stack Overflow question), the same <code>__call__()</code> method in the metaclass is used both to create the user class itself (while defining the class) and to create instances of the user-defined class (i.e. <code>Sample()</code>).</p>
<p>Can someone please help where I am wrong here?</p>
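The key point: `Sample` is an *instance* of `CustomMetaClass`, so defining the class goes through `type.__call__` (the metaclass of the metaclass), while `Sample()` goes through `CustomMetaClass.__call__`. A small sketch that also returns the instance (the snippet above returns `None` because its `__call__` never calls `super()`):

```python
class CustomMetaClass(type):
    def __call__(cls, *args, **kwargs):
        print(f"CustomMetaClass.__call__ creating an instance of {cls.__name__}")
        return super().__call__(*args, **kwargs)   # actually build the instance

class Sample(metaclass=CustomMetaClass):   # class creation uses type.__call__, no print here
    pass

s = Sample()   # prints the message, then returns a real Sample instance
```

To intercept class *creation* instead, override `__new__`/`__init__` on the metaclass rather than `__call__`.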
|
<python><call><metaclass>
|
2023-05-02 18:12:24
| 1
| 878
|
user3103957
|
76,157,711
| 5,651,603
|
How can we properly mock/stub async methods of a mocked class?
|
<p>I want to mock/stub the async methods of the <code>Connect</code> class returned by <code>websockets.client.connect</code>; such as <code>send</code> and <code>recv</code>. I succeed in testing the instantiation of the class, but I can't seem to setup the methods?</p>
<p>The complete code is in <a href="https://gist.github.com/lflfm/f6fc7b5d063d57f8860e82720f511e6a" rel="nofollow noreferrer">this gist</a>, but here are the main points:</p>
<p>This is how I'm preparing the mock:</p>
<pre class="lang-py prettyprint-override"><code> self.client_class_patcher = patch('websockets.client.connect')
self.mock_connect_class = self.client_class_patcher.start()
self.addCleanup(self.client_class_patcher.stop)
self.connect_instance = MagicMock()
self.connect_instance.send.side_effect = MagicMock(return_value="fake_result_data")
self.connect_instance.recv.side_effect = MagicMock(return_value="fake_result_data")
self.mock_connect_class.return_value = self.connect_instance
</code></pre>
<p>implementation code:</p>
<pre class="lang-py prettyprint-override"><code>async with websockets.client.connect(self.ws_url, subprotocols=["aop.ipc"], extra_headers="auth and things") as ws:
    await ws.send(json.dumps(call))
    response = await ws.recv()
</code></pre>
<p>In my tests:</p>
<pre class="lang-py prettyprint-override"><code>@async_test
async def test_connection(self):
    # arrange
    _connector = MyRemoteDeviceClass("ws://nowhere:1234", "accesstoken")

    # act
    await _connector.enable_device("some_device_id")

    # assert
    self.mock_connect_class.assert_called_once_with("ws://nowhere:1234", subprotocols=["aop.ipc"], extra_headers=ANY)  # this test is working fine :)
    self.connect_instance.recv.assert_called_once()  # this test is not :(
    self.connect_instance.recv.assert_awaited_once()  # neither is this
</code></pre>
<p>...of course, in real tests, we assert just one thing in each test :)</p>
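A likely culprit: plain `MagicMock` attributes are not awaitable, so the awaited `send`/`recv` never register as awaited. A sketch using `unittest.mock.AsyncMock` (Python 3.8+), including the async context manager that `connect(...)` must return:

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

ws = MagicMock()
ws.send = AsyncMock(return_value=None)
ws.recv = AsyncMock(return_value="fake_result_data")

# MagicMock preconfigures __aenter__/__aexit__ as AsyncMock (Python 3.8+),
# so the mocked connect(...) result can be used with `async with`
cm = MagicMock()
cm.__aenter__.return_value = ws
mock_connect = MagicMock(return_value=cm)  # stand-in for websockets.client.connect

async def code_under_test():
    async with mock_connect("ws://nowhere:1234") as conn:
        await conn.send("payload")
        return await conn.recv()

result = asyncio.run(code_under_test())
```

Applied to the setup above, that would mean patching `websockets.client.connect` so its return value's `__aenter__` resolves to a mock whose `send`/`recv` are `AsyncMock`s.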
|
<python><python-3.x><python-asyncio><python-unittest><python-unittest.mock>
|
2023-05-02 18:11:08
| 0
| 1,080
|
LFLFM
|
76,157,577
| 7,386,830
|
Correct method to build a multiple polynomial regression model
|
<p>I am using Statsmodels (Python) library to develop a multi-polynomial regression model. I have 6 variable columns and 1 target column.</p>
<p>One method for example in Statsmodel, there is an in-built function such as following (for power 2):</p>
<pre><code>from sklearn.preprocessing import PolynomialFeatures
pf = PolynomialFeatures(degree=2, include_bias=True)
X_poly = pf.fit_transform(X)
X_poly.shape
</code></pre>
<p>This automatically creates 36 variables (with my 6 input-variable from dataset).</p>
<p>At the same time, I also understand that I can also introduce polynomials for specific variables in my dataset. For example:</p>
<p><a href="https://i.sstatic.net/J9hEc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J9hEc.png" alt="enter image description here" /></a></p>
<p>Question is, what is the right method to develop a polynomial regression model ? Is it better to use the first step via Statsmodel library, or the second step where I control polynomial powers for each variable ?</p>
<p>What are the differences between these two steps ?</p>
|
<python><machine-learning><regression><statsmodels><polynomials>
|
2023-05-02 17:49:54
| 0
| 754
|
Dinesh
|
76,157,511
| 11,999,957
|
How do I find where in CSV file does error occur when importing using Pandas?
|
<p>Let's say I try to import CSV file using <code>pd.read_csv()</code> and get this error.</p>
<pre><code>'utf-8' codec can't decode byte 0x93 in position 214567: invalid start byte
</code></pre>
<p>How do I interpret the error message and find which character in the CSV file is causing the issue? Is it the 214567th character, and if so, how do I find it via Notepad or Excel or something?</p>
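The position in the message is a 0-based byte offset into the raw file, not a character or line number. A sketch that locates it and shows the surrounding bytes (the function name is illustrative):

```python
def locate_decode_error(path: str, encoding: str = "utf-8"):
    """Return (byte offset, line number, surrounding bytes) of the first decode error."""
    raw = open(path, "rb").read()
    try:
        raw.decode(encoding)
        return None
    except UnicodeDecodeError as exc:
        # exc.start is the byte offset reported in the error message
        line_no = raw.count(b"\n", 0, exc.start) + 1
        context = raw[max(0, exc.start - 20):exc.start + 20]
        return exc.start, line_no, context
```

Byte 0x93 happens to be a left curly quote in Windows-1252, so `pd.read_csv(..., encoding="cp1252")` is often the actual fix for this particular error.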
|
<python><pandas><csv>
|
2023-05-02 17:40:34
| 3
| 541
|
we_are_all_in_this_together
|
76,157,451
| 3,324,136
|
FFprobe, python and Lambda: Unable to get dimensions of video using FFprobe
|
<p>I am using ffprobe and python on Lambda and trying to get the dimensions of a video. I have the code that grabs the localized source file.</p>
<pre><code># Download the source file from S3
source_object = s3.Object(source_bucket_name, find_file_name)
print(f'The source object is {source_object}')
source_file = '/tmp/source.mp4'
source_object.download_file(source_file)
# Get dimensions of video
dimensions_x = os.system('ffprobe -v error -select_streams v:0 -show_entries stream=height,width -of csv=s=x:p=0 ' + source_file )
print(f'The dimensions of x are {dimensions_x}')
</code></pre>
<p>The source file is in an S3 bucket and the stored in the <code>/tmp</code> file. However, the print file doesn't show up in my Cloudwatch logs. It seems to pass right over this code as well.</p>
<p><code>print(f'The dimensions of x are {dimensions_x}')</code></p>
<p>I have been trying different methods with FFmpeg, all with the same outcome. The code all works as expected except for the code getting the dimensions.</p>
|
<python><amazon-s3><aws-lambda><ffprobe>
|
2023-05-02 17:32:23
| 0
| 417
|
user3324136
|
76,157,347
| 11,391,711
|
Converting datetime64[ns] into a Timestamp object
|
<p>I am reading date information from an Excel file which is stored as <code>datetime64[ns]</code>. However, I need this information stored as <code>Timestamp</code> object since the function to which I pass the date information does not properly work with <code>datetime64[ns]</code>.</p>
<p>Here is how I read the date information from Excel.</p>
<pre><code>df= pd.read_excel('data.xlsx', sheet_name="data")
firstDate= pd.to_datetime(df[df["column"] == "date"]["end"], format='%Y-%m-%d')
firstDate
1 2020-11-01
Name: Value, dtype: datetime64[ns]
</code></pre>
<p>However, if I manually create a date object, it is stored as Timestamp.</p>
<pre><code>secondDate = pd.to_datetime('2020-11-01', format='%Y-%m-%d')
secondDate
Timestamp('2020-11-01 00:00:00')
</code></pre>
<p>The function that I have is not working when using <code>firstDate</code>, but perfectly works with <code>secondDate</code>. How can either convert datetime64[ns] to Timestamp or save the date from Excel as Timestamp object?</p>
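The likely cause is Series vs scalar: the filter returns a one-element Series of dtype datetime64[ns], while pulling a single element out of it yields a `Timestamp`. A small sketch:

```python
import pandas as pd

# a one-element datetime64[ns] Series, like the filtered Excel column
s = pd.to_datetime(pd.Series(["2020-11-01"]), format="%Y-%m-%d")

first_date = s.iloc[0]   # extracting a single element yields a pandas.Timestamp
```

`.squeeze()` on a one-element selection also returns the scalar, which reads well when the filter is guaranteed to match exactly one row.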
|
<python><pandas><datetime><datetime-format>
|
2023-05-02 17:19:04
| 1
| 488
|
whitepanda
|
76,157,295
| 3,553,024
|
How to install pycurl on Apple M1 Silicon using pip v23.1+
|
<p>Installing <code>pycurl</code> on computers with Apple M1 chips has always been a struggle. I have been using this command to install <code>pycurl</code> with OpenSSLv3 using <code>pip</code>:</p>
<pre><code>brew update && brew install openssl
export LDFLAGS="-L/opt/homebrew/opt/openssl@3/lib"
export CPPFLAGS="-I/opt/homebrew/opt/openssl@3/include"
pip uninstall pycurl
pip install --compile --install-option="--with-openssl" pycurl
</code></pre>
<p>But in pip v23.1+ (<a href="https://pip.pypa.io/en/stable/news/#v23-1" rel="nofollow noreferrer">see here</a>), the <code>--install-option</code> has been removed and I can't figure out how to create an equivalent command.</p>
|
<python><pip><apple-m1><pycurl>
|
2023-05-02 17:10:19
| 2
| 1,874
|
jdesilvio
|
76,157,266
| 12,894,926
|
What is difference between Pyarrow arguments for Pandas readers?
|
<p><a href="https://pandas.pydata.org/docs/user_guide/pyarrow.html#i-o-reading" rel="nofollow noreferrer">Pandas' documentation explains how to use <code>PyArrow</code> as the backend for IO methods.</a>
However, I couldn't understand from it the difference between these two options:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.read_csv(data, engine="pyarrow")
# and
df_pyarrow = pd.read_csv(data, dtype_backend="pyarrow")
</code></pre>
<p>What is the difference between them?</p>
|
<python><pandas><pyarrow>
|
2023-05-02 17:06:48
| 1
| 1,579
|
YFl
|
76,157,247
| 17,231,480
|
Unable to load my static files in Django production environment
|
<p>I am deploying my Django application to Railway.</p>
<p>I am following this guide, <a href="https://docs.djangoproject.com/en/4.2/howto/static-files/deployment/#how-to-deploy-static-files" rel="nofollow noreferrer">https://docs.djangoproject.com/en/4.2/howto/static-files/deployment/#how-to-deploy-static-files</a>. The app was deployed successfully but <strong>cannot load the static files</strong>. I followed some online tutorials on this and it is still not working. What did I miss?</p>
<p><strong>My Railway deploy log:</strong></p>
<pre><code>Not Found: /static/styles/style.css
Not Found: /static/images/logo.svg
Not Found: /static/styles/style.css
Not Found: /static/images/logo.svg
Not Found: /static/images/avatar.svg
Not Found: /static/images/avatar.svg
Not Found: /static/js/script.js
Not Found: /static/js/script.js
</code></pre>
<p><strong>settings.py:</strong></p>
<pre><code>DEBUG = False
STATIC_ROOT = BASE_DIR / "staticfiles"
STATIC_URL = '/static/'
STATICFILES_DIRS = [
    BASE_DIR / "static",
]
MEDIA_URL = '/images/'
MEDIA_ROOT = BASE_DIR / 'static/images'
</code></pre>
|
<python><django><deployment><static><railway>
|
2023-05-02 17:04:17
| 1
| 349
|
jethro-dev
|
76,157,217
| 1,761,521
|
Take elements from each group in Polars where the groups are not even
|
<p>How do I take the first <code>n</code> elements of a group where <code>n</code> > <code>G</code> and <code>G = number of items in group</code>?</p>
<p>For example,</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(dict(x=[1,1,1,2,3,3,3], y=[1,2,3,4,5,6,7]))
df.group_by("x").agg(pl.all().gather([0, 2]))
</code></pre>
<p>The above returns a <code>OutOfBoundsError: gather indices are out of bounds </code> error.</p>
|
<python><python-polars>
|
2023-05-02 17:00:56
| 1
| 3,145
|
spitfiredd
|
76,157,159
| 6,595,551
|
OpenTelemetry exporter logs/errors in AWS Lambda, Invalid type NoneType for attribute, Invalid type <class 'NoneType'> of value None, etc
|
<p>Context:</p>
<p>I'm using a couple of tools to send metrics from an AWS Lambda function to the ADOT collector. Overall, I deployed my API service (FastAPI) with AWS Lambda.</p>
<p>Tools:</p>
<ol>
<li>AWS Lambda</li>
<li>Python==3.9</li>
<li>opentelemetry-sdk==1.17.0</li>
<li><code>arn:aws:lambda:us-east-1:901920570463:layer:aws-otel-python-amd64-ver-1-17-0:1</code> (<a href="https://aws-otel.github.io/docs/getting-started/lambda/lambda-python" rel="nofollow noreferrer">AWS Distro for OpenTelemetry Lambda Support</a>)</li>
</ol>
<p>Basically, I'm sending my metrics to the local ADOT collector, and the collector is exporting them to my exporter endpoint.</p>
<p>Problem:</p>
<p>The problem is I see a couple of logs inside the lambda log group, and I cannot figure out how I should do:</p>
<ol>
<li>Disable these logs</li>
<li>Fix those problems</li>
</ol>
<p>These are environment variables to configure the OTEL in the Lambda:</p>
<pre class="lang-bash prettyprint-override"><code>AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-instrument
OPENTELEMETRY_COLLECTOR_CONFIG_FILE=/var/task/src/otel-config.yaml
OTEL_METRICS_EXPORTER=otlp
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
OPENTELEMETRY_EXTENSION_LOG_LEVEL=error
</code></pre>
<p>This is the <code>otel-config.yaml</code> file:</p>
<pre class="lang-yaml prettyprint-override"><code>receivers:
  otlp:
    protocols:
      http:
exporters:
  otlp:
    endpoint: https://<ANOTHER_SERVICE>:4317
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlp]
</code></pre>
<p>Here are some logs that I'm receiving in the log group:</p>
<pre class="lang-bash prettyprint-override"><code>[WARNING] 2023-05-01T23:56:03.374Z An instrument with name http.server.duration, type Histogram, unit ms and description measures the duration of the inbound HTTP request has been created already.
</code></pre>
<p><a href="https://i.sstatic.net/m2Cde.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/m2Cde.png" alt="enter image description here" /></a></p>
<pre class="lang-bash prettyprint-override"><code>[ERROR] 2023-05-01T23:56:04.958Z 42a36f0d-6bde-428d-833a-fc34f295a4a9 Failed to export batch code: 404, reason: 404 page not found
</code></pre>
<p>Note that everything is working as expected, and I can see the metrics in the Grafana dashboard, but these errors are making logs hard to maintain.</p>
|
<python><amazon-web-services><aws-lambda><open-telemetry>
|
2023-05-02 16:54:12
| 0
| 1,647
|
Iman Shafiei
|
76,156,942
| 11,734,659
|
Want to run a Python "print()" script from Java, but getting no output
|
<p>I have a simple Python script:</p>
<pre><code>def main():
    print("Hello world")

if __name__ == "__main__":
    main()
</code></pre>
<p>And I want to get the response in Java. The endpoint is:</p>
<pre><code>@RestController
@CrossOrigin(origins = "http://localhost:4200")
public class TestPython {

    @PostMapping("/endpoint")
    public ResponseEntity<String> endpoint() throws IOException {
        ProcessBuilder pb = new ProcessBuilder("path-to-venv", "path-to-python-script");
        pb.redirectErrorStream(true);
        Process p = pb.start();

        BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()));
        StringBuilder sb = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            sb.append(line);
        }
        String jsonResponse = sb.toString();

        BufferedReader errorReader = new BufferedReader(new InputStreamReader(p.getErrorStream()));
        StringBuilder errorSb = new StringBuilder();
        String errorLine;
        while ((errorLine = errorReader.readLine()) != null) {
            errorSb.append(errorLine);
        }
        String errorOutput = errorSb.toString();

        if (!errorOutput.isEmpty()) {
            System.out.println("Error output: " + errorOutput);
        }

        System.out.println(jsonResponse);
        System.out.println(errorOutput);
        return ResponseEntity.ok(jsonResponse);
    }
}
</code></pre>
<p>jsonResponse and errorOutput are both empty. My virtual environment is active and I am using it. When I execute the Python script via the terminal, a cmd window pops up and prints <code>Hello World</code>, but when it runs from Java the output is empty. I also tried to explicitly set the charset in Java, but I get the same problem. I also get a lot of <code>\u0000</code> characters when inspecting <code>reader.cb</code> in debug mode. Any help is welcome.</p>
|
<python><java>
|
2023-05-02 16:26:22
| 1
| 582
|
Programmer2B
|
76,156,772
| 6,195,489
|
Process only files with filename above a certain number
|
<p>I have a single directory full of millions of files with file names such as e.g.:</p>
<pre><code>234.txt
235.txt
236.txt
</code></pre>
<p>I would like to work through the files with a name that has an integer prefix above a certain value, which is determined by the last file processed in a previous run and fetched from a database.</p>
<p>At the minute I have:</p>
<pre><code>for root, dirs, files in os.walk(directory):
    for filename in files:
        if int(re.split("\.", filename)[0]) > last_processed_id:
            <do some thing with file>
</code></pre>
<p>But I have hundreds of thousands of files, so this approach takes some time doing pointless work, checking whether each filename has been processed before. Is there a faster/better way to limit the files returned from os.walk(), short of moving the files once they are processed?</p>
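Since everything lives in a single flat directory, one option is to skip `os.walk` (which is built for recursion) and filter names cheaply while iterating with `os.scandir` — a sketch:

```python
import os

def unprocessed_files(directory: str, last_processed_id: int):
    """Yield paths whose integer filename prefix exceeds last_processed_id."""
    with os.scandir(directory) as entries:
        for entry in entries:
            stem, sep, _ = entry.name.partition(".")
            if entry.is_file() and sep and stem.isdigit() and int(stem) > last_processed_id:
                yield entry.path
```

The string checks still touch every directory entry; avoiding that entirely would need an external index (the database already holding the last processed id is a natural place).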
|
<python><os.walk>
|
2023-05-02 16:03:39
| 0
| 849
|
abinitio
|
76,156,688
| 1,454,316
|
Apply Callable to NDArray
|
<p>I am new to Python and I have done a fair share of just trying things to see if it works. In this case, when I apply a Callable to an NDArray, I get a result, just not what I expected.</p>
<pre><code>from typing import Callable
from typing import Tuple
import numpy as np
callable : Callable[[float], Tuple[float, float]] = lambda x : (x , x + 1)
array : np.ndarray = np.asarray([0, 1, 2, 3])
result = callable(array)
print(result)
</code></pre>
<p>I expected (or rather hoped) to get an iterable of tuples where each tuple is the output of the callable applied to a float. What I got was a tuple of arrays:</p>
<pre><code>(array([0, 1, 2, 3]), array([1, 2, 3, 4]))
</code></pre>
<p>What is actually happening? (Why should I expect the results I actually got?)</p>
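What actually happens is NumPy operator overloading, not per-element application: inside the lambda, `x + 1` broadcasts the scalar across the whole array, so the returned tuple contains two arrays. Pairing them up afterwards recovers the per-element tuples:

```python
import numpy as np

arr = np.asarray([0, 1, 2, 3])
a, b = arr, arr + 1   # the two arrays the lambda effectively computed

# zip the element pairs back together
pairs = list(zip(a.tolist(), b.tolist()))
```

Type hints such as `Callable[[float], Tuple[float, float]]` are not enforced at runtime, which is why passing an array through a "float" callable raises no error.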
|
<python><numpy><numpy-ndarray><callable-object>
|
2023-05-02 15:51:35
| 1
| 841
|
Little Endian
|
76,156,645
| 5,675,125
|
AWS EC2 python not creating folders and then also folders deleting
|
<p>So this is a weird one (for me). I am trying to download audio files to my AWS EC2 instance. after which I will upload them to s3 (if I get passed this part)</p>
<p>The first problem I am having is with python.</p>
<p>I am importing <code>tempfile</code> package and using it like so:</p>
<pre><code>self._tempdir = tempfile.mkdtemp(dir="/home/ec2-user/tmp/")
</code></pre>
<p>I get the error:</p>
<blockquote>
<p>FileNotFoundError: [Errno 2] No such file or directory: '/home/ec2-user/tmp/test.mp4'</p>
</blockquote>
<p>so if I log into my instance and do</p>
<pre><code>$ mkdir home/ec2-user/tmp/
$ touch test.mp4
</code></pre>
<p>and run the script again, it works. The problem is, I want the file name to be randomly generated, so I can't keep logging into the instance every time. I want Python to be able to do this.</p>
<p>The second issue I am facing is this</p>
<p>when I logout of my instance, and log back into my instance the folder and file I created at <code>home/ec2-user/tmp/test.mp4</code> have gone. I have to do the manual steps again.</p>
<p>Am I missing something blatantly obvious here?</p>
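For the first problem: `tempfile.mkdtemp(dir=...)` requires the parent directory to already exist; creating it from Python first removes the manual step (a sketch, with the path factored out so it can be tried anywhere):

```python
import os
import tempfile

def make_workdir(base: str) -> str:
    """Ensure the parent exists, then create a random scratch directory in it."""
    os.makedirs(base, exist_ok=True)   # no-op if the directory is already there
    return tempfile.mkdtemp(dir=base)

# workdir = make_workdir("/home/ec2-user/tmp")   # path from the question
```

For the second: `/home/ec2-user` normally persists across SSH sessions on an EBS-backed instance, so files vanishing usually suggests the instance was replaced or the commands ran on a different host rather than the filesystem discarding them.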
|
<python><amazon-web-services><amazon-ec2><temporary-files>
|
2023-05-02 15:46:48
| 0
| 1,603
|
JamesG
|
76,156,601
| 1,860,222
|
Representing a complex object as a pyqt model class
|
<p>I'm trying to create a model/view for a complex object in pyqt. All of the examples I have found so far assume the data will be in a repeating element like a table or list. Does anyone know of a good example or tutorial that demonstrates working with something more complicated?</p>
<p>For example lets say I have a class Foo with properties a: int, b: str, and c: bool . I want to display <strong>b</strong> as a text label at the top of the window and <strong>a</strong> and <strong>c</strong> along the left and right side respectively. It doesn't make sense to represent the data for Foo as a list or table. How would I create a model to represent this class? How do I tie that model into the view?</p>
|
<python><model-view-controller><pyqt>
|
2023-05-02 15:42:17
| 0
| 1,797
|
pbuchheit
|
76,156,599
| 4,507,596
|
How to correctly use cv2.bitwise_and() with Meta AI Segment-Anything Model (SAM) generated masks?
|
<p>I am using Meta AI Segment-Anything Model (SAM) to generate masks:</p>
<pre><code>masks, scores, logits = mask_predictor.predict(
box=box, multimask_output=True)
for i, (mask, score) in enumerate(zip(masks, scores)):
print ("image.shape" + str(image.shape)) # image.shape(506, 447, 3)
print ("mask.shape" + str(mask.shape)) # mask.shape(506, 447)
masked_img = cv2.bitwise_and(image, image, mask=mask)
</code></pre>
<p>The <code>cv2.bitwise_and(image, image, mask=mask)</code> call results in a "mask data type = 0 is not supported" error, and I don't know why. My goal is to mask off all the image data from the original image that isn't included in the mask. What might I be doing wrong?</p>
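For context, <code>cv2.bitwise_and</code> expects the mask to be an 8-bit array, while SAM's <code>predict</code> returns a boolean array, which is what triggers the dtype error. A numpy-only sketch of the conversion (cv2 itself is omitted here, and the image/mask are small fakes standing in for the real data):

```python
import numpy as np

# Fake stand-ins: a 4x4 "image" and a boolean SAM-style mask.
image = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True

# cv2.bitwise_and(image, image, mask=...) needs uint8, so convert first:
mask_u8 = mask.astype(np.uint8) * 255

# Equivalent numpy operation: zero out everything outside the mask.
masked = np.where(mask[..., None], image, 0).astype(np.uint8)

print(masked[0, 0].tolist(), masked[3, 3].tolist())  # -> [0, 1, 2] [0, 0, 0]
```

With OpenCV the call would then be <code>cv2.bitwise_and(image, image, mask=mask_u8)</code>.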
|
<python><opencv>
|
2023-05-02 15:42:03
| 1
| 446
|
Will
|
76,156,557
| 6,849,363
|
Pytorch runs on GPU without CUDA
|
<p>I'm scratching my head a bit. I just got a new work computer and am trying to actually set up a clean CUDA environment this time. So far all I've done is the following:</p>
<p>Downloaded pytorch with <code>pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117</code></p>
<p>Run the following script:</p>
<pre><code>def measure_time(device, size=1024*4):
print(f"Running on {device}")
# Create random matrices
a = torch.randn(size, size, device=device)
b = torch.randn(size, size, device=device)
# Warm up the device
for _ in range(10):
c = torch.matmul(a, b)
torch.cuda.synchronize() if device.type == 'cuda' else None
# Measure the time taken for 1000 iterations
start_time = time.time()
for _ in range(1000):
c = torch.matmul(a, b)
torch.cuda.synchronize() if device.type == 'cuda' else None
end_time = time.time()
elapsed_time = end_time - start_time
print(f"Time taken: {elapsed_time:.4f} seconds\n")
return elapsed_time
def main():
# Check for the availability of GPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Perform the computation on CPU
cpu_time = measure_time(torch.device('cpu'))
# Perform the computation on GPU if available
if device.type == 'cuda':
gpu_time = measure_time(device)
print(f"Speedup factor (GPU/CPU): {cpu_time / gpu_time:.2f}")
else:
print("GPU not available.")
main()
</code></pre>
<p>Output of script:</p>
<pre><code>>>> main()
Running on cpu
Time taken: 316.8135 seconds
Running on cuda
Time taken: 31.7928 seconds
Speedup factor (GPU/CPU): 9.96
</code></pre>
<p>So it both appears to see the gpu and we see a convincing time improvement, both suggesting we have working gpu support.</p>
<p>From everything I see online though, this should NOT be working, e.g., <a href="https://discuss.pytorch.org/t/is-it-required-to-set-up-cuda-on-pc-before-installing-cuda-enabled-pytorch/60181" rel="nofollow noreferrer">here</a>. Is pytorch running without CUDA?</p>
<p>Running a tensorflow script with the same goal errors out, as it can't find the GPU.</p>
<p>Does anyone have an explanation for this? Sure I could go and just download CUDA and get tensorflow working, but I'd prefer to actually know what my computer is doing.</p>
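As I understand it, the <code>cu117</code> wheels from download.pytorch.org bundle the CUDA runtime libraries inside the <code>torch</code> package itself, so only a recent NVIDIA driver is needed on the system, not a separately installed CUDA toolkit. TensorFlow's Windows wheels at the time did not bundle CUDA, which would explain the asymmetry. A small probe of what the installed wheel reports (written so it also runs where torch is absent):

```python
# Probe what the installed wheel was built against, without assuming a GPU.
try:
    import torch
    print("torch", torch.__version__, "built for CUDA", torch.version.cuda)
    print("cuda available:", torch.cuda.is_available())
except ImportError:
    print("torch is not installed in this environment")
```

If <code>torch.version.cuda</code> prints a version string, the wheel carries its own CUDA runtime regardless of what is installed system-wide.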
|
<python><tensorflow><pytorch>
|
2023-05-02 15:37:55
| 1
| 470
|
Tanner Phillips
|
76,156,551
| 4,121,487
|
How to pass list of strings to a CFFI extension?
|
<p>I would like to pass a list of strings to a CFFI extension which expects a <code>char**</code> as input parameter.</p>
<p>Example:</p>
<p><code>extension.h</code>:</p>
<pre class="lang-c prettyprint-override"><code>#include <stddef.h>
void sort_strings(char** arr, size_t string_count);
</code></pre>
<p><code>extension.c</code>:</p>
<pre class="lang-c prettyprint-override"><code>#include <stdio.h>
#include <stdlib.h>
#include <string.h>
static int cmpstrp(const void *p1, const void *p2) {
return strlen(*(const char**)p1) < strlen(*(const char**)p2);
}
void sort_strings(char** arr, size_t string_count) {
qsort(arr, string_count, sizeof(char*), cmpstrp);
}
// `main` is defined only for testing if `sort_strings` works as expected
int main(void) {
char* string_list[] = {"cat", "penguin", "mouse"};
size_t string_count = sizeof(string_list)/sizeof(string_list[0]);
sort_strings(string_list, string_count);
for (int i = 0; i < string_count; i++) {
printf("%s\n", string_list[i]);
}
}
</code></pre>
<p><code>extension_build.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from cffi import FFI
ffibuilder = FFI()
ffibuilder.cdef('void sort_strings(char** arr, size_t string_count);')
ffibuilder.set_source('_extension',
'#include "extension.h"',
sources=['extension.c'],
libraries=[])
if __name__ == "__main__":
ffibuilder.compile(verbose=True)
</code></pre>
<p><code>demo.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from _extension.lib import sort_strings
from _extension import ffi
string_list = ["cat", "penguin", "mouse"]
bytes_list = [s.encode("latin1") for s in string_list]
cdata_list = ffi.new("char **", bytes_list)
sort_strings(cdata_list, len(string_list))
# print sorted list
</code></pre>
<p>How to test:</p>
<pre class="lang-bash prettyprint-override"><code>python extension_build.py
python demo.py
</code></pre>
<p>I have tried passing <code>string_list</code>, <code>bytes_list</code> and <code>cdata_list</code> as first input argument to <code>sort_strings</code>. I get these error messages:</p>
<pre><code>TypeError: initializer for ctype 'char *' must be a cdata pointer, not str
</code></pre>
<pre><code>TypeError: initializer for ctype 'char *' must be a cdata pointer, not bytes
</code></pre>
<pre><code>TypeError: initializer for ctype 'char *' must be a cdata pointer, not list
</code></pre>
<p>How can I pass my list of strings correctly (if possible without copying the list)?</p>
<p>(Just in case my intention is not clear: I'm not asking how to sort a list in Python.)</p>
<p><strong>SOLUTION:</strong></p>
<p>(Based on Armin Rigo's answer.)</p>
<p>It works when <code>demo.py</code> is updated like this:</p>
<pre class="lang-py prettyprint-override"><code>from _extension.lib import sort_strings
from _extension import ffi
string_list = ["cat", "penguin", "mouse"]
bytes_list = [ffi.new("char[]", s.encode("latin1")) for s in string_list]
pointer = ffi.new("char*[]", bytes_list)
sort_strings(pointer, len(string_list))
for s in pointer:
print(ffi.string(s).decode("latin1"))
</code></pre>
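Two details make the solution work: each string becomes its own <code>char[]</code> cdata, and the Python list of those cdata objects must stay alive while the <code>char*[]</code> array is in use, because cffi does not copy the buffers. The stdlib <code>ctypes</code> module has the same shape; an analogous sketch, adapted from the qsort callback example in the ctypes docs, that needs no compiled extension (assumes a Unix-like system where <code>CDLL(None)</code> resolves libc):

```python
import ctypes

# Load the C library; CDLL(None) resolves libc symbols on Linux/macOS.
libc = ctypes.CDLL(None)

strings = [b"cat", b"penguin", b"mouse"]
# A (char*)[3] array; ctypes keeps the byte buffers alive via `arr`.
arr = (ctypes.c_char_p * len(strings))(*strings)

# Comparator mirroring cmpstrp: longer strings sort first.
CMPFUNC = ctypes.CFUNCTYPE(
    ctypes.c_int,
    ctypes.POINTER(ctypes.c_char_p),
    ctypes.POINTER(ctypes.c_char_p),
)

def longest_first(p1, p2):
    return len(p2.contents.value) - len(p1.contents.value)

libc.qsort(arr, len(arr), ctypes.sizeof(ctypes.c_char_p), CMPFUNC(longest_first))
print(list(arr))  # -> [b'penguin', b'mouse', b'cat']
```

Incidentally, <code>cmpstrp</code> in the C snippet returns only 0 or 1, while qsort expects a negative/zero/positive result, so a subtraction-style comparator is the safer form.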
|
<python><c><python-cffi>
|
2023-05-02 15:37:19
| 1
| 447
|
MaxGyver
|
76,156,335
| 10,907,172
|
Django/Docker/pyaudio: pip install failed because of #include "portaudio.h"
|
<p>I am using Django on Docker with the python:3.8.3-alpine image and want to use the pyaudio library, but pip install fails because of the portaudio dependency:</p>
<blockquote>
<p>rc/pyaudio/device_api.c:9:10: fatal error: portaudio.h: No such file
or directory
9 | #include "portaudio.h"
| ^~~~~~~~~~~~~</p>
</blockquote>
<p>I've tried many things, like installing portaudio first, but nothing works.</p>
<p>Maybe it comes from my Python image, but the python:3.8.3 image takes too long to build.</p>
<p>How can I use pyaudio in a Django/Docker project?</p>
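The fatal error means the PortAudio C headers are not present in the image when pip compiles pyaudio. On Alpine the headers come from the <code>portaudio-dev</code> package, and a compiler toolchain is needed too. A hedged Dockerfile sketch; the package names are taken from Alpine's repositories and may differ across releases:

```dockerfile
FROM python:3.8.3-alpine

# portaudio-dev provides portaudio.h; build-base provides gcc/make/musl-dev.
RUN apk add --no-cache build-base portaudio-dev

RUN pip install pyaudio
```

The Debian-based <code>python:3.8.3</code> images avoid most of this because manylinux wheels work there, at the cost of a larger image.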
|
<python><django><docker><pyaudio>
|
2023-05-02 15:13:54
| 1
| 683
|
SLATER
|
76,156,170
| 11,155,419
|
How to programmatically share a Google Sheet with an email, via Python API?
|
<p>I have a list of Google Sheets that I would like to modify so that a specific email is granted <code>Editor</code> access. However, instead of doing this manually, I would like to be able to do it programmatically, using the Google Python client.</p>
<p>I am aware I can potentially use the Drive API,</p>
<pre><code>SCOPES = [
'https://www.googleapis.com/auth/spreadsheets',
'https://www.googleapis.com/auth/drive.metadata',
'https://www.googleapis.com/auth/drive.file',
'https://www.googleapis.com/auth/drive',
]
drive_client = discovery.build('drive', 'v3', http=creds.authorize(Http()))
</code></pre>
<p>however, I am still not sure how to go about granting <code>Editor</code> or <code>Viewer</code> permissions to an entity/email, for a specific Google Sheet (without possibly affecting existing permission).</p>
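For what it's worth, the Drive v3 API handles sharing through <code>permissions().create()</code> on the file ID, and adding a permission does not modify existing ones. A sketch with the request-body construction separated out so it can be shown without live credentials (the live call is left commented; <code>spreadsheet_id</code> is hypothetical):

```python
def build_permission_body(email: str, role: str = "writer") -> dict:
    """Drive v3 permission resource for one user.

    role "writer" corresponds to Editor access, "reader" to Viewer.
    """
    return {"type": "user", "role": role, "emailAddress": email}

body = build_permission_body("someone@example.com")
print(body)

# With an authorized client (spreadsheet_id is hypothetical):
# drive_client.permissions().create(
#     fileId=spreadsheet_id,
#     body=body,
#     sendNotificationEmail=False,
# ).execute()
```

The sheet's file ID is the long token in its URL, and the <code>https://www.googleapis.com/auth/drive</code> scope already in the question's <code>SCOPES</code> list covers permission changes.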
|
<python><google-sheets><google-cloud-platform><google-drive-api>
|
2023-05-02 14:55:23
| 1
| 843
|
Tokyo
|
76,156,101
| 11,994,733
|
Running PyCharm Unittests sequentially for packages while running tests within each package in parallel
|
<p>I'm writing unit tests for a published web application. This web application has a variable that changes the functionality of the app and this variable is persistent across all of a user's sessions.</p>
<p>I want to take advantage of running multiple tests in parallel, but I can't have two tests expecting different values for this variable to run at the same time. I'm looking for a way to group tests in bundles to be run in parallel with each bundle having a build-up and tear-down section (used to set the variable to what is expected for the bundle).</p>
<p>I'm using PyCharm Unittest to write all my tests and <code>-n 6</code> in <code>Run Configuration -> Additional Arguments</code> to run my tests in parallel.</p>
<p>My tests are already broken up into packages based on the state variable's expected value. How do I let Unittest know that I want it to run each of these packages one after another but run tests within a package in parallel?</p>
<p>Folder Structure</p>
<pre><code>Tests
SingleHome
__init__.py
test_Entity_CRUD.py
test_Entity_UI.py
Apartment
__init__.py
test_Entity_CRUD.py
test_Entity_UI.py
</code></pre>
<p>Sample File Structure</p>
<pre><code>class BaseDriver(unittest.TestCase):
def _test_load(self, driver):
driver.get('https://www.google.com')
self.assertEqual("Google", driver.title)
class ChromeDriver(BaseDriver):
def test_load(self):
driver = webdriver.Chrome()
self._test_load(driver)
driver.quit()
class EdgeDriver(BaseDriver):
def test_load(self):
driver = webdriver.Edge()
self._test_load(driver)
driver.quit()
</code></pre>
|
<python><parallel-processing><pycharm><python-unittest>
|
2023-05-02 14:47:42
| 1
| 2,426
|
Mandelbrotter
|
76,155,889
| 8,930,751
|
Subscribing to multiple partitions in Azure Event Hub using Python
|
<p>I have created an Event Hubs namespace. Inside the namespace I created an event hub with 8 partitions. It has one consumer group: $Default.</p>
<p>I have written the receiver code in Python which looks like this.</p>
<pre><code>import asyncio
from azure.eventhub.aio import EventHubConsumerClient
from azure.eventhub.extensions.checkpointstoreblobaio import (
BlobCheckpointStore,
)
BLOB_STORAGE_CONNECTION_STRING = "BLOB_STORAGE_CONNECTION_STRING"
BLOB_CONTAINER_NAME = "BLOB_CONTAINER_NAME"
EVENT_HUB_CONNECTION_STR = "EVENT_HUB_CONNECTION_STR"
EVENT_HUB_NAME = "EVENT_HUB_NAME"
async def on_event(partition_context, event):
# Print the event data.
print(
'Received the event: "{}" from the partition with ID: "{}"'.format(
event.body_as_str(encoding="UTF-8"), partition_context.partition_id
)
)
# Update the checkpoint so that the program doesn't read the events
# that it has already read when you run it next time.
await partition_context.update_checkpoint(event)
async def main():
# Create an Azure blob checkpoint store to store the checkpoints.
checkpoint_store = BlobCheckpointStore.from_connection_string(
BLOB_STORAGE_CONNECTION_STRING, BLOB_CONTAINER_NAME
)
# Create a consumer client for the event hub.
client = EventHubConsumerClient.from_connection_string(
EVENT_HUB_CONNECTION_STR,
consumer_group="$Default",
eventhub_name=EVENT_HUB_NAME,
checkpoint_store=checkpoint_store,
)
async with client:
# Call the receive method. Read from the beginning of the
# partition (starting_position: "-1")
await client.receive(on_event=on_event, starting_position="-1")
if __name__ == "__main__":
loop = asyncio.get_event_loop()
# Run the main method.
loop.run_until_complete(main())
</code></pre>
<p>I took this code from <a href="https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-python-get-started-send?tabs=passwordless%2Croles-azure-portal#receive-events" rel="nofollow noreferrer">this document</a>. I have run the above code on 5 different VMs, so the expectation is that the 5 receivers process 5 different messages simultaneously. Once one message is processed, the receiver that is free should consume another message. This should continue until someone stops the receiver code.</p>
<p>The problem I'm facing is that the same message is received by multiple receivers and is processed again and again. My assumption is that the checkpointing is not happening properly, but I don't know exactly why. Or perhaps the above code doesn't meet my expectation.</p>
<p>These are the versions I'm using :</p>
<blockquote>
<p>Name: azure-eventhub Version: 5.10.1</p>
<p>Name: azure-eventhub-checkpointstoreblob-aio Version: 1.1.4</p>
</blockquote>
<p>What can I try next?</p>
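Worth noting: with a checkpoint store, the consumer client assigns each partition to exactly one owner, but delivery is still at-least-once; during load balancing or after a crash, events received but not yet checkpointed can be redelivered, so handlers should be idempotent. A minimal pure-Python dedupe sketch (the partition/offset shapes are invented for illustration; a real service would keep this state somewhere shared):

```python
# partition_id -> highest offset already handled (in-memory for illustration)
processed_offsets = {}

def handle_once(partition_id: str, offset: int, body: str) -> bool:
    """Process an event only if this (partition, offset) hasn't been seen."""
    last = processed_offsets.get(partition_id, -1)
    if offset <= last:
        return False  # redelivered duplicate; skip
    # ... real processing of `body` would go here ...
    processed_offsets[partition_id] = offset
    return True

print(handle_once("0", 5, "a"))  # -> True  (first delivery)
print(handle_once("0", 5, "a"))  # -> False (duplicate)
print(handle_once("1", 5, "b"))  # -> True  (different partition)
```

This relies on offsets being monotonically increasing within a partition, which Event Hubs guarantees.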
|
<python><azure><events><azure-eventhub>
|
2023-05-02 14:24:14
| 0
| 2,416
|
CrazyCoder
|
76,155,851
| 7,237,062
|
Diart (torchaudio) on Windows x64 results in torchaudio error "ImportError: FFmpeg libraries are not found. Please install FFmpeg."
|
<p>I am trying out a speech <em>diarization</em> project named <a href="https://github.com/juanmc2005/diart" rel="nofollow noreferrer"><strong>diart</strong></a>
(based on <a href="https://huggingface.co/" rel="nofollow noreferrer">Hugging Face</a> models).</p>
<p>I follow the instructions using a <code>miniconda</code> environment which are essentially:</p>
<pre><code>conda create -n diart python=3.8
conda activate diart
conda install portaudio pysoundfile ffmpeg -c conda-forge
pip install diart
# + register some pyannote stuff on hugging face
# requiring hugging face CLI instructions for API token
</code></pre>
<p>However, I keep bumping into python import error:</p>
<blockquote>
<p>ImportError: FFmpeg libraries are not found. Please install FFmpeg.</p>
</blockquote>
<p>Trace:</p>
<pre><code>>>> from diart.sources import MicrophoneAudioSource
Traceback (most recent call last):
File "F:\DEV\miniconda3\envs\diart\lib\site-packages\torchaudio\_extension.py", line 71, in _init_ffmpeg
_load_lib("libtorchaudio_ffmpeg")
File "F:\DEV\miniconda3\envs\diart\lib\site-packages\torchaudio\_extension.py", line 52, in _load_lib
torch.ops.load_library(path)
File "F:\DEV\miniconda3\envs\diart\lib\site-packages\torch\_ops.py", line 573, in load_library
ctypes.CDLL(path)
File "F:\DEV\miniconda3\envs\diart\lib\ctypes\__init__.py", line 373, in __init__
self._handle = _dlopen(self._name, mode)
FileNotFoundError: Could not find module 'F:\DEV\miniconda3\envs\diart\Lib\site-packages\torchaudio\lib\libtorchaudio_ffmpeg.pyd' (or one of its dependencies). Try using the full path with constructor syntax.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "F:\DEV\miniconda3\envs\diart\lib\site-packages\diart\sources.py", line 11, in <module>
from torchaudio.io import StreamReader
File "F:\DEV\miniconda3\envs\diart\lib\site-packages\torchaudio\io\__init__.py", line 21, in __getattr__
torchaudio._extension._init_ffmpeg()
File "F:\DEV\miniconda3\envs\diart\lib\site-packages\torchaudio\_extension.py", line 73, in _init_ffmpeg
raise ImportError("FFmpeg libraries are not found. Please install FFmpeg.") from err
ImportError: FFmpeg libraries are not found. Please install FFmpeg.
</code></pre>
<p>It is my first time with Torch, and I do believe the issue is with a missing TorchAudio specific library.</p>
<p>The python file that raises the exception states:</p>
<pre class="lang-py prettyprint-override"><code>def _init_ffmpeg(): # line 60
# ...
try:
_load_lib("libtorchaudio_ffmpeg")
except OSError as err:
raise ImportError("FFmpeg libraries are not found. Please install FFmpeg.") from err #<=== line 73 : the exception
# ...
</code></pre>
<p><em>Below you will find many details concerning the environment.</em></p>
<h1><strong>Question</strong></h1>
<p><strong>Am I missing something? What should I do to use this project?</strong></p>
<hr />
<h1>Additional info</h1>
<p>The resulting environment setup (using Miniconda Powershell) in admin mode.</p>
<p>FFMPEG is confirmed in the path to be seen from the conda env install:</p>
<pre><code>(diart) PS C:\Windows\system32> get-command ffmpeg
CommandType Name Version Source
----------- ---- ------- ------
Application ffmpeg.exe 0.0.0.0 F:\DEV\miniconda3\envs\diart\Library\bin\ffmpeg.exe
</code></pre>
<h2>Python</h2>
<pre><code>(diart) PS C:\Windows\system32> python --version
Python 3.8.16
</code></pre>
<h2>Conda env setup</h2>
<pre><code>(diart) PS C:\Windows\system32> conda --version
conda 23.3.1
(diart) PS C:\Windows\system32> conda list
# packages in environment at F:\DEV\miniconda3\envs\diart:
#
# Name Version Build Channel
absl-py 1.4.0 pypi_0 pypi
aiohttp 3.8.4 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
alembic 1.10.4 pypi_0 pypi
antlr4-python3-runtime 4.9.3 pypi_0 pypi
aom 3.5.0 h63175ca_0 conda-forge
asteroid-filterbanks 0.4.0 pypi_0 pypi
async-timeout 4.0.2 pypi_0 pypi
attrs 23.1.0 pypi_0 pypi
audioread 3.0.0 pypi_0 pypi
backports-cached-property 1.0.2 pypi_0 pypi
bzip2 1.0.8 h8ffe710_4 conda-forge
ca-certificates 2022.12.7 h5b45459_0 conda-forge
cachetools 5.3.0 pypi_0 pypi
certifi 2022.12.7 pypi_0 pypi
cffi 1.15.1 py38h57701bc_3 conda-forge
charset-normalizer 3.1.0 pypi_0 pypi
click 8.1.3 pypi_0 pypi
cmaes 0.9.1 pypi_0 pypi
colorama 0.4.6 pypi_0 pypi
colorlog 6.7.0 pypi_0 pypi
commonmark 0.9.1 pypi_0 pypi
contourpy 1.0.7 pypi_0 pypi
cycler 0.11.0 pypi_0 pypi
decorator 5.1.1 pypi_0 pypi
diart 0.7.0 pypi_0 pypi
docopt 0.6.2 pypi_0 pypi
einops 0.3.2 pypi_0 pypi
expat 2.5.0 h63175ca_1 conda-forge
ffmpeg 5.1.2 gpl_h5b1d025_106 conda-forge
filelock 3.12.0 pypi_0 pypi
font-ttf-dejavu-sans-mono 2.37 hab24e00_0 conda-forge
font-ttf-inconsolata 3.000 h77eed37_0 conda-forge
font-ttf-source-code-pro 2.038 h77eed37_0 conda-forge
font-ttf-ubuntu 0.83 hab24e00_0 conda-forge
fontconfig 2.14.2 hbde0cde_0 conda-forge
fonts-conda-ecosystem 1 0 conda-forge
fonts-conda-forge 1 0 conda-forge
fonttools 4.39.3 pypi_0 pypi
freetype 2.12.1 h546665d_1 conda-forge
frozenlist 1.3.3 pypi_0 pypi
fsspec 2023.4.0 pypi_0 pypi
google-auth 2.17.3 pypi_0 pypi
google-auth-oauthlib 1.0.0 pypi_0 pypi
greenlet 2.0.2 pypi_0 pypi
grpcio 1.54.0 pypi_0 pypi
hmmlearn 0.2.8 pypi_0 pypi
huggingface-hub 0.14.1 pypi_0 pypi
hyperpyyaml 1.2.0 pypi_0 pypi
idna 3.4 pypi_0 pypi
importlib-metadata 6.6.0 pypi_0 pypi
importlib-resources 5.12.0 pypi_0 pypi
intel-openmp 2023.1.0 h57928b3_46319 conda-forge
joblib 1.2.0 pypi_0 pypi
julius 0.2.7 pypi_0 pypi
kiwisolver 1.4.4 pypi_0 pypi
lame 3.100 hcfcfb64_1003 conda-forge
libblas 3.9.0 16_win64_mkl conda-forge
libcblas 3.9.0 16_win64_mkl conda-forge
libexpat 2.5.0 h63175ca_1 conda-forge
libffi 3.4.2 h8ffe710_5 conda-forge
libflac 1.4.2 h63175ca_0 conda-forge
libhwloc 2.9.1 h51c2c0f_0 conda-forge
libiconv 1.17 h8ffe710_0 conda-forge
liblapack 3.9.0 16_win64_mkl conda-forge
libogg 1.3.4 h8ffe710_1 conda-forge
libopus 1.3.1 h8ffe710_1 conda-forge
libpng 1.6.39 h19919ed_0 conda-forge
librosa 0.9.2 pypi_0 pypi
libsndfile 1.2.0 h2628c91_0 conda-forge
libsqlite 3.40.0 hcfcfb64_1 conda-forge
libvorbis 1.3.7 h0e60522_0 conda-forge
libxml2 2.10.4 hc3477c8_0 conda-forge
libzlib 1.2.13 hcfcfb64_4 conda-forge
llvmlite 0.39.1 pypi_0 pypi
mako 1.2.4 pypi_0 pypi
markdown 3.4.3 pypi_0 pypi
markupsafe 2.1.2 pypi_0 pypi
matplotlib 3.7.1 pypi_0 pypi
mkl 2022.1.0 h6a75c08_874 conda-forge
mpg123 1.31.3 h63175ca_0 conda-forge
mpmath 1.3.0 pypi_0 pypi
multidict 6.0.4 pypi_0 pypi
networkx 2.8.8 pypi_0 pypi
numba 0.56.4 pypi_0 pypi
numpy 1.23.5 pypi_0 pypi
oauthlib 3.2.2 pypi_0 pypi
omegaconf 2.3.0 pypi_0 pypi
openh264 2.3.1 h63175ca_2 conda-forge
openssl 3.1.0 hcfcfb64_2 conda-forge
optuna 3.1.1 pypi_0 pypi
packaging 23.1 pypi_0 pypi
pandas 2.0.1 pypi_0 pypi
pillow 9.5.0 pypi_0 pypi
pip 23.1.2 pyhd8ed1ab_0 conda-forge
platformdirs 3.5.0 pypi_0 pypi
pooch 1.7.0 pypi_0 pypi
portaudio 19.6.0 h63175ca_7 conda-forge
primepy 1.3 pypi_0 pypi
protobuf 3.20.1 pypi_0 pypi
pthreads-win32 2.9.1 hfa6e2cd_3 conda-forge
pyannote-audio 2.1.1 pypi_0 pypi
pyannote-core 4.5 pypi_0 pypi
pyannote-database 4.1.3 pypi_0 pypi
pyannote-metrics 3.2.1 pypi_0 pypi
pyannote-pipeline 2.3 pypi_0 pypi
pyasn1 0.5.0 pypi_0 pypi
pyasn1-modules 0.3.0 pypi_0 pypi
pycparser 2.21 pyhd8ed1ab_0 conda-forge
pydeprecate 0.3.2 pypi_0 pypi
pygments 2.15.1 pypi_0 pypi
pyparsing 3.0.9 pypi_0 pypi
python 3.8.16 h4de0772_1_cpython conda-forge
python-dateutil 2.8.2 pypi_0 pypi
python_abi 3.8 3_cp38 conda-forge
pytorch-lightning 1.6.5 pypi_0 pypi
pytorch-metric-learning 1.7.3 pypi_0 pypi
pytz 2023.3 pypi_0 pypi
pyyaml 6.0 pypi_0 pypi
requests 2.29.0 pypi_0 pypi
requests-oauthlib 1.3.1 pypi_0 pypi
resampy 0.4.2 pypi_0 pypi
rich 12.6.0 pypi_0 pypi
rsa 4.9 pypi_0 pypi
ruamel-yaml 0.17.21 pypi_0 pypi
ruamel-yaml-clib 0.2.7 pypi_0 pypi
rx 3.2.0 pypi_0 pypi
scikit-learn 1.2.2 pypi_0 pypi
scipy 1.10.1 pypi_0 pypi
semver 2.13.0 pypi_0 pypi
sentencepiece 0.1.98 pypi_0 pypi
setuptools 67.7.2 pyhd8ed1ab_0 conda-forge
shellingham 1.5.0.post1 pypi_0 pypi
simplejson 3.19.1 pypi_0 pypi
singledispatchmethod 1.0 pypi_0 pypi
six 1.16.0 pypi_0 pypi
sortedcontainers 2.4.0 pypi_0 pypi
sounddevice 0.4.6 pypi_0 pypi
soundfile 0.10.3.post1 pypi_0 pypi
speechbrain 0.5.14 pypi_0 pypi
sqlalchemy 2.0.11 pypi_0 pypi
sqlite 3.40.0 hcfcfb64_1 conda-forge
svt-av1 1.4.1 h63175ca_0 conda-forge
sympy 1.11.1 pypi_0 pypi
tabulate 0.9.0 pypi_0 pypi
tbb 2021.9.0 h91493d7_0 conda-forge
tensorboard 2.12.2 pypi_0 pypi
tensorboard-data-server 0.7.0 pypi_0 pypi
tensorboard-plugin-wit 1.8.1 pypi_0 pypi
threadpoolctl 3.1.0 pypi_0 pypi
tk 8.6.12 h8ffe710_0 conda-forge
torch 1.13.1 pypi_0 pypi
torch-audiomentations 0.11.0 pypi_0 pypi
torch-pitch-shift 1.2.4 pypi_0 pypi
torchaudio 0.13.1 pypi_0 pypi
torchmetrics 0.11.4 pypi_0 pypi
torchvision 0.14.1 pypi_0 pypi
tqdm 4.65.0 pypi_0 pypi
typer 0.7.0 pypi_0 pypi
typing-extensions 4.5.0 pypi_0 pypi
tzdata 2023.3 pypi_0 pypi
ucrt 10.0.22621.0 h57928b3_0 conda-forge
urllib3 1.26.15 pypi_0 pypi
vc 14.3 h3d8a991_11 conda-forge
vs2015_runtime 14.34.31931 h4c5c07a_11 conda-forge
websocket-client 1.5.1 pypi_0 pypi
websocket-server 0.6.4 pypi_0 pypi
werkzeug 2.3.1 pypi_0 pypi
wheel 0.40.0 pyhd8ed1ab_0 conda-forge
x264 1!164.3095 h8ffe710_2 conda-forge
x265 3.5 h2d74725_3 conda-forge
xz 5.2.6 h8d14728_0 conda-forge
yarl 1.9.2 pypi_0 pypi
zipp 3.15.0 pypi_0 pypi
</code></pre>
<hr />
<h1>Things I tried</h1>
<p>Follow the PyTorch audio build process as described <a href="https://pytorch.org/audio/stable/build.windows.html" rel="nofollow noreferrer">here</a>, using the Visual Studio 2022 Community PowerShell for developers (x64):</p>
<pre><code>Enter-VsDevShell 7c1743f6 -Arch amd64 # if necessary, for Visual Studio build tools x64 within power shell for developpers
#
git clone https://github.com/pytorch/audio
cd audio
set USE_FFMPEG=1
# python setup.py develop
python setup.py develop --verbose
</code></pre>
<p>Output error slightly truncated due to post size overflow (30_000 char):</p>
<pre><code># redacted due to post char limit...
rir.cpp.obj : error LNK2019: symbole externe non résolu "__declspec(dllimport) class at::Tensor __cdecl at::fft_irfft(class at::Tensor const &,class c10::optional<__int64>,__int64,class c10::optional<class c10::basic_string_view<char> >)" (__imp_?fft_irfft@at@@YA?AVTensor@1@AEBV21@V?$optional@_J@c10@@_JV?$optional@V?$basic_string_view@D@c10@@@4@@Z) référencé dans la fonction "void __cdecl torchaudio::rir::`anonymous namespace'::make_rir_filter_impl<float>(class at::Tensor &,double,__int64,class at::Tensor &)" (??$make_rir_filter_impl@M@?A0xcc00d006@rir@torchaudio@@YAXAEAVTensor@at@@N_J0@Z)
torchaudio\csrc\libtorchaudio.pyd : fatal error LNK1120: 12 externes non résolus
ninja: build stopped: subcommand failed.
</code></pre>
<hr />
<h1>Tests</h1>
<h2>1. @Brock Brown ==> what if <code>conda uninstall ffmpeg</code>?</h2>
<pre><code>(diart) PS C:\Windows\system32> conda uninstall ffmpeg
(diart) PS C:\Windows\system32> python
Python 3.8.16 (default, Mar 2 2023, 03:18:16) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from diart.sources import *
# redacted due to post char limit...
from _sounddevice import ffi as _ffi
File "F:\DEV\miniconda3\envs\diart\lib\site-packages\_sounddevice.py", line 2, in <module>
import _cffi_backend
ModuleNotFoundError: No module named '_cffi_backend'
</code></pre>
<pre><code>(diart) PS C:\Windows\system32> Get-Command ffmpeg
CommandType Name Version Source
----------- ---- ------- ------
Application ffmpeg.exe 1.0.0.0 C:\ProgramData\chocolatey\bin\ffmpeg.exe
(diart) PS C:\Windows\system32> ffmpeg --version
ffmpeg version 6.0-essentials_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
</code></pre>
<hr />
<h2>FFMPEG version</h2>
<p>According to <a href="https://pytorch.org/audio/stable/tutorials/streamwriter_basic_tutorial.html" rel="nofollow noreferrer">this link</a> it may be preferable to use FFmpeg <4.4.</p>
<p>I tried that using chocolatey to install ffmpeg 4.3, without success.</p>
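One way to check which FFmpeg shared libraries the dynamic loader can actually see is a small probe; torchaudio's extension dlopens the libraries by name, so <code>ffmpeg.exe</code> being on PATH is not sufficient. The base names below follow the usual FFmpeg naming; as far as I can tell, torchaudio 0.13 was built against FFmpeg 4's library versions, so a conda-forge FFmpeg 5.1 install would not provide the versioned DLLs (e.g. <code>avutil-56.dll</code>) it links against:

```python
import ctypes.util

# Probe what the loader can resolve; None means not found on the search path.
for name in ("avutil", "avcodec", "avformat"):
    print(name, "->", ctypes.util.find_library(name))
```

If these all print <code>None</code> while <code>ffmpeg</code> runs fine from the shell, the CLI and the shared libraries are installed separately, which matches the traceback's failing <code>ctypes.CDLL</code> call.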
|
<python><pytorch><conda><torchaudio><diarization>
|
2023-05-02 14:19:23
| 3
| 3,347
|
LoneWanderer
|
76,155,734
| 4,357,631
|
Properly typed factory for FastAPI and Pydantic
|
<p>I have been developing my first API using FastAPI/SQLAlchemy. I have been using the same four methods (Get One, Get All, Post, Delete) for multiple different entities in the database, thus creating a lot of repeated code. For example, the code below shows the methods for a Fungus entity.</p>
<pre class="lang-py prettyprint-override"><code>from typing import List, TYPE_CHECKING
if TYPE_CHECKING: from sqlalchemy.orm import Session
import models.fungus as models
import schemas.fungus as schemas
async def create_fungus(fungus: schemas.CreateFungus, db: "Session") -> schemas.Fungus:
fungus = models.Fungus(**fungus.dict())
db.add(fungus)
db.commit()
db.refresh(fungus)
return schemas.Fungus.from_orm(fungus)
async def get_all_fungi(db: "Session") -> List[schemas.Fungus]:
fungi = db.query(models.Fungus).limit(25).all()
return [schemas.Fungus.from_orm(fungus) for fungus in fungi]
async def get_fungus(fungus_id: str, db: "Session") -> schemas.Fungus:
fungus = db.query(models.Fungus).filter(models.Fungus.internal_id == fungus_id).first()
return fungus
async def delete_fungus(fungus_id: str, db: "Session") -> int:
num_rows = db.query(models.Fungus).filter_by(id=fungus_id).delete()
db.commit()
return num_rows
</code></pre>
<p>I have been trying to come up with an abstract design pattern with an interface class that implements these four methods independent from the entity.</p>
<p>However, from my understanding new Python standards and FastAPI require Python to be typed. So, how would I type the return types of these functions, instead of <code>schemas.Fungus</code>, or the parameters <code>schemas.CreateFungus</code> or <code>models.Fungus</code>?</p>
<p>What I have thought is that I could use the types of these values, which are <code><class 'pydantic.main.ModelMetaclass'></code> and <code><class 'sqlalchemy.orm.decl_api.DeclarativeMeta'></code>. However, I am not sure whether this is correct or encouraged.</p>
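One common way to type this is with <code>TypeVar</code> parameters so every entity shares one implementation while callers keep concrete return types. A framework-free sketch using a dataclass in place of the pydantic/SQLAlchemy classes (names are illustrative; a real version would wrap a SQLAlchemy <code>Session</code> instead of a dict):

```python
from dataclasses import dataclass
from typing import Dict, Generic, List, Optional, Type, TypeVar

T = TypeVar("T")

class Repository(Generic[T]):
    """Generic in-memory CRUD; one implementation for every entity type."""

    def __init__(self, schema: Type[T]) -> None:
        self.schema = schema
        self._store: Dict[str, T] = {}

    def create(self, item_id: str, item: T) -> T:
        self._store[item_id] = item
        return item

    def get(self, item_id: str) -> Optional[T]:
        return self._store.get(item_id)

    def get_all(self) -> List[T]:
        return list(self._store.values())

    def delete(self, item_id: str) -> int:
        return 1 if self._store.pop(item_id, None) is not None else 0

@dataclass
class Fungus:
    name: str

repo: Repository[Fungus] = Repository(Fungus)
repo.create("1", Fungus("amanita"))
print(repo.get("1"), repo.delete("1"), repo.delete("1"))  # -> Fungus(name='amanita') 1 0
```

With pydantic, the bound would typically be <code>TypeVar("SchemaT", bound=pydantic.BaseModel)</code> rather than <code>ModelMetaclass</code>/<code>DeclarativeMeta</code>; the metaclasses are implementation details and not meant to appear in annotations.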
|
<python><sqlalchemy><fastapi><python-typing><pydantic>
|
2023-05-02 14:08:01
| 1
| 309
|
Skalwalker
|
76,155,694
| 10,346,275
|
Telethon sending of messages gets stuck for a while then continues
|
<p>I am trying to send messages to the groups that I have joined using Telethon. Everything works fine, but most of the time it gets stuck for about 1 minute to 1 minute 30 seconds, then continues. This issue occurs multiple times during the process. It is also really slow. Please help.</p>
<pre class="lang-py prettyprint-override"><code>from telethon import TelegramClient, events, utils
from telethon.sessions import StringSession
import asyncio
api_id = xxxxxx
api_hash = 'hash54356'
client = TelegramClient('new_session', api_id, api_hash, system_version="4.16.30-vxITSNOTBOT")
# Define a function to send a message
async def send_message(target, messages):
try:
if not str(target).startswith("-100"):
print(f"{target} is not a valid group. Skipping...")
return
for message in messages:
await client.send_message(target, message)
print(f"Sent {len(messages)} messages to {target}")
except Exception as e:
print(f"An error occurred while sending the message to the group {target}: {e}")
# Define the main function that calls the send_message function
async def main():
# Get all the dialogs (chats) that you are a member of
async for dialog in client.iter_dialogs():
# Check if the dialog is a group chat
if dialog.is_group:
# Get the ID of the group chat
target = dialog.id
# Send the message to the group chat
messages = ["https://t.me/channel1", "https://t.me/channel2"]
await send_message(target, messages)
# Run the main function using the client's event loop
with client:
client.loop.run_until_complete(main())
</code></pre>
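Those stalls are consistent with Telegram's flood-wait limits: when the API returns a flood wait shorter than the client's <code>flood_sleep_threshold</code>, Telethon silently sleeps for that long instead of raising <code>FloodWaitError</code>. Pacing your own sends reduces how often the server imposes these waits; a pure-asyncio sketch of the pattern (the delay value is illustrative, not an official limit):

```python
import asyncio

SEND_DELAY = 0.5  # seconds between sends; illustrative, tune as needed

async def paced_send(send_one, targets):
    """Call `send_one(target)` for each target with a fixed gap between calls."""
    for target in targets:
        await send_one(target)
        await asyncio.sleep(SEND_DELAY)

async def fake_send(target):
    # Stand-in for `await client.send_message(target, message)`.
    print(f"sent to {target}")

asyncio.run(paced_send(fake_send, ["group_a", "group_b"]))
```

With Telethon itself, constructing the client with a low <code>flood_sleep_threshold</code> makes it raise <code>FloodWaitError</code> instead of silently sleeping, so you can log exactly which request triggered the wait.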
|
<python><telegram-bot><telethon>
|
2023-05-02 14:03:17
| 1
| 604
|
Spidy
|
76,155,574
| 3,575,623
|
Calculate distance between columns of two data frames
|
<p>I have two data frames that contain values for different people in each column:</p>
<pre><code>import numpy as np
import pandas as pd
import math
df1 = pd.DataFrame({
"Anna":[1.5,-2,2.5],
"Bob":[2.5,-3,3.5],
"Cam":[3.5,-4,4.5]})
df2 = pd.DataFrame({
"Dave":[1,-2.5,2],
"Emma":[2,-3.5,3],
"Fred":[3,-4.5,4]})
print(df1)
Anna Bob Cam
0 1.5 2.5 3.5
1 -2.0 -3.0 -4.0
2 2.5 3.5 4.5
print(df2)
Dave Emma Fred
0 1.0 2.0 3.0
1 -2.5 -3.5 -4.5
2 2.0 3.0 4.0
</code></pre>
<p>Is there a faster way of getting a distance matrix from each person in df1 to each person in df2 than this double loop?</p>
<pre><code>results = []
for n1 in df1.columns:
results.append([])
for n2 in df2.columns:
results[-1].append(math.dist(df1[n1], df2[n2]))
res_df = pd.DataFrame(results)
res_df.columns = df1.columns
res_df.index = df2.columns
print(res_df)
Anna Bob Cam
Dave 0.866025 1.658312 3.278719
Emma 2.179449 0.866025 1.658312
Fred 3.840573 2.179449 0.866025
</code></pre>
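This can be fully vectorized: <code>math.dist</code> on columns is the Euclidean distance between column vectors, which numpy broadcasting (or <code>scipy.spatial.distance.cdist</code>) computes in one shot. A numpy-only sketch using the same numbers:

```python
import numpy as np

# Column vectors from the example DataFrames, people as rows here.
a = np.array([[1.5, -2, 2.5], [2.5, -3, 3.5], [3.5, -4, 4.5]])  # Anna, Bob, Cam
b = np.array([[1, -2.5, 2], [2, -3.5, 3], [3, -4.5, 4]])        # Dave, Emma, Fred

# Pairwise Euclidean distances: rows index df2 people, columns df1 people.
dists = np.sqrt(((b[:, None, :] - a[None, :, :]) ** 2).sum(axis=2))
print(np.round(dists, 6))
```

With the DataFrames this is <code>a = df1.to_numpy().T</code>, <code>b = df2.to_numpy().T</code>, then <code>pd.DataFrame(dists, index=df2.columns, columns=df1.columns)</code>; <code>cdist(b, a)</code> gives the same matrix. One caveat about the loop version: it builds one row per df1 column, so assigning <code>res_df.index = df2.columns</code> swaps the row/column labels; the 0.866 diagonal happens to match either way, but the off-diagonal labels are transposed.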
|
<python><pandas><dataframe><distance>
|
2023-05-02 13:53:43
| 2
| 507
|
Whitehot
|
76,155,369
| 19,155,645
|
OSRM API: find distance to nearest emergency station
|
<p>I have a TypeScript project and I am trying to get the distance to the nearest police station (and later also to the nearest fire station).</p>
<p>I tried this URL I found on the internet: <code>https://router.project-osrm.org/route/v1/driving/${lon},${lat};nearest?annotations=distance&exclude=motorway,toll</code><br>
but it does not seem to work at all.</p>
<p>I looked further in the documentation and I am not sure which I should use: <a href="https://project-osrm.org/docs/v5.24.0/api/#" rel="nofollow noreferrer">route/v1/driving</a> or <a href="https://project-osrm.org/docs/v5.24.0/api/#services" rel="nofollow noreferrer">nearest/v1/driving</a>. I did not find a solution that combines both.</p>
<p>So the general goal would be to get coordinates of my location and get the distance (in km) to nearest police station and nearest fire-station.</p>
<p>For example I also tried the following function:</p>
<pre><code>import axios from "axios";
async function getNearestPoliceStation(
lat: number,
lon: number
): Promise<[number, number]> {
try {
console.log("input lat and lon", lat, lon);
const url2 = `https://nominatim.openstreetmap.org/search?format=json&q=police+station&limit=1&lat=${lat}&lon=${lon}`;
const response2 = await axios.get(url2);
console.log(response2.data[0]);
const nearestPoliceStation = response2.data[0];
if (!nearestPoliceStation) {
return [-1, -1];
}
const lati = parseFloat(nearestPoliceStation.lat);
const longi = parseFloat(nearestPoliceStation.lon);
return [lati, longi];
} catch (error) {
console.error(error);
return [-1, -1];
}
}
</code></pre>
<p>but when I input <a href="https://www.openstreetmap.org/#map=14/49.85349/8.65859" rel="nofollow noreferrer">49.85349, 8.65859</a> (in Germany), it finds me the nearest police station in New Jersey: <a href="https://www.openstreetmap.org/#map=14/40.7007/-75.1719" rel="nofollow noreferrer">40.7007047, -75.1719147</a></p>
<p>Python code for this would also be helpful. I ran some tests in Python using the Google Maps API (instead of TS and OSRM), and that also was not straightforward.</p>
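For what it's worth, some assumptions worth flagging: OSRM's nearest service only snaps a coordinate to the road network and knows nothing about police stations, so the station itself has to come from a POI search (Nominatim or Overpass), and the free-text Nominatim query above seems to ignore the lat/lon parameters (biasing it would need something like viewbox plus bounded=1). Once both coordinates are known, a straight-line distance in km can be computed locally with the haversine formula — a sketch:

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometres between two WGS84 points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# straight-line distance between two nearby points in Darmstadt (hypothetical pair)
print(round(haversine_km(49.85349, 8.65859, 49.8601, 8.6512), 2))
```

OSRM's route service between the two points would give road distance instead of this straight-line figure.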
|
<python><typescript><google-maps-api-3><openstreetmap>
|
2023-05-02 13:28:27
| 1
| 512
|
ArieAI
|
76,155,172
| 3,531,792
|
django Request aborted Forbidden (CSRF token missing.) channels websockets
|
<p>I'm new to django and I'm trying to get a realtime chat app using channels websockets up and running. I've tried building the chat page myself as well as copy-pasting the relevant chat code from the tutorial itself. I still get "CSRF verification failed. Request aborted." and "Forbidden (CSRF token missing.):" in the terminal. Things I've tried:</p>
<ol>
<li>adding <code>{% csrf_token %}</code> above the form - then messages aren't added to the list of messages but the page reloads and the "Forbidden (CSRF token missing.):" disappears.</li>
<li>adding <code>@csrf_protect</code> in chat/views.py.</li>
<li>using channels version 3.0.5. That's the version in the tutorial.</li>
<li>I've tried rebuilding the database and deleting all migration files and restoring them. Doesn't change anything (just trying different angles)</li>
</ol>
<p><em>chat/templates/chat/chat.html</em></p>
<pre><code>{% extends 'core/base.html' %}
{% block title %} {{ chat.name }} | {% endblock %}
{% block content %}
<div class="p-10 lg:p-20 text-center">
<h1 class="text-3xl lg:text-6xl text-white">{{ chat.name }}</h1>
</div>
<!-- div for actual messages-->
<div class="lg:w-2/4 mt-6 mx-4 lg:mx-auto p-4 bg-white rounded-xl">
<div class="chat-messages space-y-3" id="chat-messages">
<div class="p-4 bg-gray-200 rounded-xl">
<p class="font-semibold">Username</p>
<p>This is the message for the user</p>
</div>
<div class="p-4 bg-gray-200 rounded-xl">
<p class="font-semibold">Username</p>
<p>This is the message for the user</p>
</div>
<div class="p-4 bg-gray-200 rounded-xl">
<p class="font-semibold">Username</p>
<p>This is the message for the user</p>
</div>
<div class="p-4 bg-gray-200 rounded-xl">
<p class="font-semibold">Username</p>
<p>This is the message for the user</p>
</div>
</div>
</div>
<!-- form for input and sending-->
<div class="lg:w-2/4 mt-6 mx-4 lg:mx-auto p-4 bg-white rounded-xl">
<form method="post" action="." class="flex">
{% csrf_token %}
<input type="text" name="content" class="flex-1 mr-3" placeholder="Your message..." id="chat-massage-input">
<button class="px-5 py-3 rounded-xl text-white bg-teal-600 hover:bg-teal-700" id="chat-message-submit">Submit</button>
</form>
</div>
{% endblock %}
{% block scripts %}
{{ chat.slug | json_script:"json-chatname"}}
{{ request.user.username | json_script:"json-username" }}
<script>
const chatName = JSON.parse(document.getElementById("json-chatname").textContent);
const userName = JSON.parse(document.getElementById("json-username").textContent);
const chatSocket = new WebSocket(
"ws://"
+ window.location.host
+ "/ws/"
+ chatName
+ "/"
);
chatSocket.onopen = function(e) {
console.log("[open] Connection established");
};
chatSocket.onmessage = function(e){
console.log("on message");
const data = JSON.parse(e.data);
if (data.message){
let html = '<div class="p-4 bg-gray-200 rounded-xl">'
html += '<p class="font-semibold">' + data.username + '</p>'
html += '<p>' + data.message + '</p> </div>'
document.querySelector("#chat-messages").innerHTML += html;
}else{
alert("The message is empty");
}
};
chatSocket.onclose = function(e){
console.log("on close");
}
document.querySelector("#chat-message-submit").onclick() = function(e){
e.preventDefault();
const messageInputDom = document.querySelector("#chat-message-input");
const message = messageInputDom.value;
chatSocket.send(JSON.stringify({
"message" : message,
"username" : userName,
"chatname" : chatName,
}))
messageInputDom.value = "";
return false;
}
</script>
{% endblock %}
</code></pre>
<p><em>chat/consumers.py</em></p>
<pre><code>import json
from channels.generic.websocket import AsyncWebsocketConsumer
class ChatConsumer(AsyncWebsocketConsumer):
async def connect(self):
self.chat_name = self.scope["url_route"]["kwargs"]["chat_name"]
self.chat_group_name = "%s_group" % self.chat_name
# add group to channel layers
await self.channel_layer.group_add(
self.chat_group_name,
self.channel_name
)
# connect to channel
await self.accept()
async def disconnect(self):
await self.channel_layer.group_discard(
self.chat_group_name,
self.channel_name
)
async def receive(self, text_data):
data = json.loads(text_data)
message = data["message"]
username = data["username"]
chat = data["chat"]
await self.channel_layer.group_send(
self.chat_group_name,
{
"type": "chat_message",
"message":message,
"username":username,
"chat":chat,
}
)
async def chat_message(self, event):
message = event["message"]
username = event["username"]
chat = event["chat"]
await self.send(text_data=json.dumps({
"message": message,
"username": username,
"chat": chat,
}))
</code></pre>
<p><em>chat/models.py</em></p>
<pre><code>from django.db import models
# Create your models here.
class Chat(models.Model):
name = models.CharField(max_length=255)
slug = models.SlugField(unique=True)
</code></pre>
<p><em>chat/views.py</em></p>
<pre><code>from django.shortcuts import render
from django.contrib.auth.decorators import login_required
from django.views.decorators.csrf import csrf_protect
# Create your views here.
from .models import Chat
@login_required
@csrf_protect
def chats(request):
chats = Chat.objects.all
return render(request, "chat/chats.html", {"chats" : chats})
@login_required
@csrf_protect
def chat(request, slug):
chat = Chat.objects.get(slug=slug)
return render(request, "chat/chat.html", {"chat" : chat})
</code></pre>
<p><em>djangochat/asgi.py</em></p>
<pre><code>import os
from django.core.asgi import get_asgi_application
from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
import chat.routing
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'djangochat.settings')
django_asgi_app = get_asgi_application()
application = ProtocolTypeRouter({
"http" : django_asgi_app,
"websocket" : AuthMiddlewareStack(
URLRouter(
chat.routing.websocket_urlpatterns
)
)
})
</code></pre>
<p>Some notes:</p>
<ol>
<li>I somehow don't even see the chat sockets handshaking at all anymore.</li>
<li><code>console.log</code> doesn't display anything, and no <code>alert</code> fires even when the message is empty and submit is clicked.</li>
<li>I've tried debugging the websocket connections but since they aren't even connecting I'm not sure how to proceed.</li>
</ol>
<p>I'm really scratching my head here. Any help would be appreciated.</p>
|
<python><django><channel>
|
2023-05-02 13:04:48
| 1
| 613
|
Kenshima
|
76,155,163
| 12,684,429
|
Create new column which is max of other columns with conditions
|
<p>I would like to make a new column which is the max of two things: the first is the average of two columns, and the second is one of those two columns.</p>
<p>So I essentially want forecast to be the max of the following:</p>
<pre><code>
df3['forecast'] = df3[['A', 'B']].mean(axis=1)
df3['forecast'] = df3[['A']]
</code></pre>
<p>Any help greatly appreciated!
Cheers</p>
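For reference, a small self-contained sketch (plain pandas, hypothetical column values) of the behaviour I'm after — the row-wise max of mean(A, B) against A itself:

```python
import pandas as pd

df3 = pd.DataFrame({"A": [1, 4, 10], "B": [3, 2, 0]})

# row-wise max of mean(A, B) and A
df3["forecast"] = pd.concat([df3[["A", "B"]].mean(axis=1), df3["A"]], axis=1).max(axis=1)
print(df3["forecast"].tolist())
```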
|
<python><pandas><max>
|
2023-05-02 13:03:50
| 1
| 443
|
spcol
|
76,155,130
| 16,436,095
|
markdownify don't remove comment from tag
|
<p>I am trying to convert HTML to Markdown using <code>markdownify</code>. This library doesn't remove the comment from a style tag, and I am trying to understand why.</p>
<p>One of the methods of the MarkdownConverter class is <code>process_tag</code>, and I think the key is somewhere here. See below (I added some prints to check):</p>
<pre><code>def process_tag(self, node, convert_as_inline, children_only=False):
text = ''
# markdown headings or cells can't include
# block elements (elements w/newlines)
isHeading = html_heading_re.match(node.name) is not None
isCell = node.name in ['td', 'th']
convert_children_as_inline = convert_as_inline
if not children_only and (isHeading or isCell):
convert_children_as_inline = True
print(f"convert_children_as_inline = {convert_children_as_inline}")
# Remove whitespace-only textnodes in purely nested nodes
def is_nested_node(el):
return el and el.name in ['ol', 'ul', 'li',
'table', 'thead', 'tbody', 'tfoot',
'tr', 'td', 'th']
if is_nested_node(node):
for el in node.children:
# Only extract (remove) whitespace-only text node if any of the
# conditions is true:
# - el is the first element in its parent
# - el is the last element in its parent
# - el is adjacent to an nested node
can_extract = (not el.previous_sibling
or not el.next_sibling
or is_nested_node(el.previous_sibling)
or is_nested_node(el.next_sibling))
if (isinstance(el, NavigableString)
and six.text_type(el).strip() == ''
and can_extract):
el.extract()
# Convert the children first
for i, el in enumerate(node.children):
cl = None
if isinstance(el, Comment):
cl = "Comment"
elif isinstance(el, Doctype):
cl = "Doctype"
elif isinstance(el, NavigableString):
cl = "NavigableString"
else:
cl = "Other"
print(f"{i}, cl = {cl}, el = {el}")
if isinstance(el, Comment) or isinstance(el, Doctype):
continue
elif isinstance(el, NavigableString):
text += self.process_text(el)
else:
text += self.process_tag(el, convert_children_as_inline)
if not children_only:
convert_fn = getattr(self, 'convert_%s' % node.name, None)
if convert_fn and self.should_convert_tag(node.name):
text = convert_fn(node, text, convert_as_inline)
return text
</code></pre>
<p>My test file consists of two parts:</p>
<pre><code><style><!-- 1. some style defenitions --></style>
<!-- 2. some style definitions -->
</code></pre>
<p>What I see in terminal:</p>
<pre><code>convert_children_as_inline = False
0, cl = NavigableString, el =
1, cl = Other, el = <style><!-- 1. some style defenitions --></style>
convert_children_as_inline = False
0, cl = NavigableString, el = <!-- 1. some style defenitions -->
2, cl = NavigableString, el =
3, cl = Comment, el = 2. some style definitions
</code></pre>
<p>And what I see in out file:</p>
<p><code><!-- 1. some style defenitions --></code></p>
<p>Please explain why the converter didn't treat the string <code><!-- 1. some style defenitions --></code> as a comment. I'm a bit confused, because it does treat the second part as a comment (I want to get an empty output file).</p>
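A minimal sketch with the standard library's html.parser (which the 'html.parser' tree builder used by BeautifulSoup sits on) seems to show why: script and style contents are parsed in CDATA mode, so comment markup inside a style tag arrives as plain character data rather than as a comment event:

```python
from html.parser import HTMLParser

class Probe(HTMLParser):
    """Records whether each chunk arrives as a comment or as plain data."""
    def __init__(self):
        super().__init__()
        self.events = []

    def handle_comment(self, data):
        self.events.append(("comment", data.strip()))

    def handle_data(self, data):
        if data.strip():
            self.events.append(("data", data.strip()))

p = Probe()
p.feed("<style><!-- 1. some style defenitions --></style>\n"
       "<!-- 2. some style definitions -->")
print(p.events)
# the text inside <style> is reported as data, the bare comment as a comment
```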
|
<python><markdown>
|
2023-05-02 13:00:43
| 1
| 370
|
maskalev
|
76,155,119
| 12,559,770
|
Merge different duplicated row based on one column in pandas
|
<p>I have a table such as :</p>
<pre><code>Chrs pos value
Chr1 1 9
Chr1 2 11
Chr1 3 13
Chr1 4 13
Chr1 5 13
Chr1 6 13
Chr1 7 14
Chr1 8 14
Chr1 9 14
</code></pre>
<p>Does someone have an idea using pandas to transform it as :</p>
<pre><code>Chrs start end value
Chr1 1 2 9
Chr1 2 3 11
Chr1 3 6 13
Chr1 7 9 14
</code></pre>
<p>As you can see, I added two columns <code>'start'</code> and <code>'end'</code> and I merged the same values into the same row by adding the last value into the end column.</p>
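For reference, a common run-length sketch (plain pandas; note that in my expected output the end value for single-row runs looks like the next run's start, whereas this version uses each run's own last position):

```python
import pandas as pd

df = pd.DataFrame({
    "Chrs": ["Chr1"] * 9,
    "pos": range(1, 10),
    "value": [9, 11, 13, 13, 13, 13, 14, 14, 14],
})

# start a new run whenever Chrs or value changes from the previous row
run = df[["Chrs", "value"]].ne(df[["Chrs", "value"]].shift()).any(axis=1).cumsum()
out = (df.groupby(run)
         .agg(Chrs=("Chrs", "first"), start=("pos", "first"),
              end=("pos", "last"), value=("value", "first"))
         .reset_index(drop=True))
print(out)
```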
|
<python><python-3.x><pandas>
|
2023-05-02 12:59:46
| 3
| 3,442
|
chippycentra
|
76,155,091
| 635,799
|
np.uint32 != np.uintc on Windows
|
<p>On my Windows machine:</p>
<pre class="lang-py prettyprint-override"><code>>>> import numpy as np
>>> np.dtype(np.uint32).itemsize
4
>>> np.dtype(np.uintc).itemsize
4
>>> np.uint32 == np.uintc
False
</code></pre>
<p>But on my Mac:</p>
<pre class="lang-py prettyprint-override"><code>>>> import numpy as np
>>> np.dtype(np.uint32).itemsize
4
>>> np.dtype(np.uintc).itemsize
4
>>> np.uint32 == np.uintc
True
</code></pre>
<p>Why <code>np.uint32 != np.uintc</code> on Windows despite the same itemsize?</p>
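My working assumption about why, sketched below: on Windows (an LLP64 platform) C long is 32 bits, so NumPy maps np.uint32 to unsigned long while np.uintc stays unsigned int — two distinct C types of the same width, hence distinct scalar types. The dtypes built from them still compare equal, because dtype equality is by kind and size:

```python
import numpy as np

# the scalar *types* may or may not be the same object, depending on platform
print(np.uint32 is np.uintc)

# the *dtypes* are equivalent wherever unsigned int is 32 bits wide
print(np.dtype(np.uint32) == np.dtype(np.uintc))
print(np.dtype(np.uint32).itemsize, np.dtype(np.uintc).itemsize)
```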
|
<python><numpy><types>
|
2023-05-02 12:56:53
| 1
| 898
|
Chang
|
76,155,063
| 815,612
|
Thread.join's timeout does not work when thread is computing a sum
|
<p>In the following code:</p>
<pre><code>import threading
def infinite_loop():
while True:
pass
def huge_sum():
return sum(range(2**100))
thread = threading.Thread(target=huge_sum)
thread.start()
thread.join(1)
print("Done")
</code></pre>
<p>I expect the script to print "Done" after one second since <code>join()</code> will time out, but instead the script hangs. If you replace the call to <code>huge_sum</code> with <code>infinite_loop</code>, it works fine. The problem seems to be with the built-in <code>sum()</code> function.</p>
<p>Is there a way I can reliably get something like <code>join()</code>'s timeout behavior <em>no matter what the thread is doing</em>? I don't mind wacky metaprogramming solutions, this is for a very niche application. However, for the most part I cannot modify the code being executed inside the thread (e.g. "use a loop instead of <code>sum</code>" is not a solution).</p>
<p>Linux, Python 3.8.</p>
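For what it's worth, my understanding (unverified): join(1) itself would time out, but sum(range(2**100)) runs entirely inside C code on huge ints without returning to the bytecode loop, so the worker thread never reaches a GIL checkpoint and the main thread cannot resume. A separate process has its own GIL and, unlike a thread, can be terminated — a sketch:

```python
import multiprocessing

def huge_sum():
    # stand-in for an uninterruptible, GIL-holding computation
    sum(range(2**100))

if __name__ == "__main__":
    p = multiprocessing.Process(target=huge_sum, daemon=True)
    p.start()
    p.join(1)            # really times out: separate process, separate GIL
    if p.is_alive():
        p.terminate()    # unlike a thread, a process can be killed
        p.join()
    print("Done")
```

(With an interruptible target like the pure-Python infinite_loop, the eval loop does hit GIL checkpoints between bytecodes, which would explain why join's timeout works there.)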
|
<python><multithreading>
|
2023-05-02 12:54:06
| 2
| 6,464
|
Jack M
|
76,155,052
| 1,040,718
|
Getting a random element in a QuerySet - Python Django
|
<p>I have the following query in Django:</p>
<pre><code>vms = VM.objects.filter(vmtype=vmtype,
status__in=['I', 'R'])
.annotate(num_jobs=Count('job'))
.order_by('num_jobs')
</code></pre>
<p>it will return a QuerySet of vms ordered by the number of jobs running in each VM. Please note that <code>num_jobs</code> is not a field in the <code>VM</code> model. This could return a QuerySet where, for example, the numbers of jobs are
[0, 0, 1, 2, 2, 3, 4, 4, 5].
Calling <code>vms.first()</code> will certainly return a vm with the least number of jobs, but it might not be the only one. From all the <code>vms</code> with the least number of jobs, I want to return a random one, not just the first. How can I accomplish that?</p>
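A sketch of the tie-breaking I mean, in plain Python (the commented ORM lines are my guess at the equivalent queryset approach, not tested):

```python
import random

def random_least_loaded(vms_with_counts):
    """Pick a random item among those sharing the minimum job count.

    vms_with_counts: iterable of (vm, num_jobs) pairs.
    """
    pairs = list(vms_with_counts)
    least = min(n for _, n in pairs)
    return random.choice([vm for vm, n in pairs if n == least])

# Hypothetical ORM equivalent, using the annotation from the question:
#   least = vms.first().num_jobs
#   vm = vms.filter(num_jobs=least).order_by("?").first()
```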
|
<python><python-3.x><django><django-models>
|
2023-05-02 12:52:46
| 1
| 11,011
|
cybertextron
|
76,154,826
| 19,467,973
|
By what principle will it be correct to distribute queues in celery and rabbitmq?
|
<p>I have a small pet project that is an API written in FastAPI, using PostgreSQL and the SQLAlchemy ORM. Recently I came across Celery and decided to add it to my project, but I ran into the problem that I don't quite understand how best to apply it to my project.</p>
<p>Briefly about my project</p>
<p>This is a RESTful API for creating companies and employees within companies. The API has logic for 4 roles. I check what role the user has with the help of JWT tokens, and I store session data in Redis.</p>
<p>Here is the tree of my project</p>
<pre><code>├───alembic
├───auth
├───celery
├───crud_routs_func
│ ├───accounts_db_func
│ └───dash_board_func
├───db
│ ├───backup_folder
├───logger
├───rabbitmq
│ └───mnesia
├───routes
└───utils
</code></pre>
<p>In main.py I add all the routes from the routes folder.</p>
<pre><code>def main():
app.include_router(ManipulationEmployeesController.create_router())
app.include_router(AccountController.create_router())
app.include_router(AuthenticateController.create_router())
app.include_router(CompanyController.create_router())
app.include_router(SellingPointsController.create_router())
app.include_router(InviteEmployeeController.create_router())
app.include_router(CriteriesController.create_router())
app.include_router(DashboardController.create_router())
uvicorn.run(app, host="0.0.0.0", port=8000, ws_ping_interval=0)
if __name__ == "__main__":
t1 = threading.Thread(target=create_backup_job, args=())
t1.start()
main()
</code></pre>
<p>Every router I have looks something like this and receives data thanks to functions of a certain class to access the database.</p>
<pre><code>class CompanyController(Controller):
prefix = "/api/company"
tags = ["company"]
@post("/create")
@check_access(["owner"])
async def create(request: Request, item: schemas.Company):
user = _T.user_items(request)
CompanyDBFunc.create_company(ur_name=item.ur_name,true_name=item.true_name,
inn=item.inn, ur_address=item.ur_address,
true_address=item.true_address, phone=item.phone,
email=item.email, owner_id=user["id"])
response = JSONResponse(status_code=201, content={"message":"You have successfully created a company!"})
return response
@put("/update")
@check_access(["owner"])
async def update(request: Request, item: schemas.Company, company_id):
user = _T.user_items(request)
CompanyDBFunc.update_company(company_id, user['id'], ur_name=item.ur_name, true_name=item.true_address, inn=item.inn, ur_address= item.ur_address,
true_address=item.true_address, phone=item.phone, email=item.email)
response = JSONResponse(status_code=200, content={"message":"Company data has been successfully updated."})
return response
</code></pre>
<p>This is about what any class looks like for me to access the database</p>
<pre><code>class CompanyDBFunc(DbConnect):
@classmethod
@Logger.log
def create_company(self, ur_name, true_name, inn, ur_address, true_address, phone, email, owner_id):
new_company = models.Company(ur_name=ur_name, true_name= true_name, inn=inn, ur_address=ur_address, true_address=true_address, phone=phone, email=email, owner_id=owner_id)
self.session_work.add(new_company)
self.session_work.commit()
self.session_work.close()
@classmethod
@Logger.log
def get_company_ids_by_owner_id(self, owner_id):
company_ids = self.session_work.query(models.Company.id).filter_by(owner_id=owner_id).all()
company_ids = [c[0] for c in company_ids]
self.session_work.close()
return company_ids
</code></pre>
<p>How to use celery correctly in such applications and by what logic should queues be created for processing ?</p>
<p>Thank you all in advance !</p>
|
<python><rabbitmq><celery><fastapi>
|
2023-05-02 12:26:32
| 0
| 301
|
Genry
|
76,154,814
| 2,986,042
|
How to print variable from Multi core Trace32 with Python command
|
<p>I want to read both static variables and local variables of a function in TRACE32 with a Python script. I found some useful references on the <code>trace32</code> site; with the documents below, I have written a script to control TRACE32 via the <code>Python remote APIs</code>.</p>
<p><a href="https://www2.lauterbach.com/pdf/app_python.pdf" rel="nofollow noreferrer">https://www2.lauterbach.com/pdf/app_python.pdf</a></p>
<p><a href="https://www2.lauterbach.com/pdf/general_ref_s.pdf" rel="nofollow noreferrer">https://www2.lauterbach.com/pdf/general_ref_s.pdf</a></p>
<p>Now, in my environment I have two cores. One is the master and the other is the slave. The slave is accessed via InterCom. Below is the script I have written with the help of the above docs.</p>
<p><strong>Code example:</strong></p>
<pre><code>Static int totalcount = 15U;
static void test_main (void)
{
int temp = 0U;
temp = get_result(); /* it will return some values*/
totalcount = totalcount + temp;
}
</code></pre>
<p><strong>Script:</strong></p>
<pre><code>import time
import ctypes # module for C data types
from ctypes import c_void_p
import enum # module for enumeration support
# Load TRACE32 Remote API DLL
t32api = ctypes.cdll.LoadLibrary('D:/demo/api/python/t32api64.dll')
# TRACE32 Debugger or TRACE32 Instruction Set Simulator as debug device
T32_DEV = 1
# Configure communication channel to the TRACE32 device
# use b for byte encoding of strings
t32api.T32_Config(b"NODE=",b"localhost")
t32api.T32_Config(b"PORT=",b"20000")
t32api.T32_Config(b"PACKLEN=",b"1024")
# Establish communication channel
rc = t32api.T32_Init()
rc = t32api.T32_Attach(T32_DEV)
# Ping to master core
rc = t32api.T32_Ping()
time.sleep(2)
# Break the slave core -> Name is mycore
rc = t32api.T32_Cmd(b"InterCom mycore Break")
time.sleep(3)
rc = t32api.T32_Cmd(b"InterCom mycore Go")
time.sleep(2)
rc = t32api.T32_Cmd(b"InterCom mycore break.Set test_main")
time.sleep(2)
#Print the variable
error = ctypes.c_int32(0)
result = ctypes.c_int32(0)
t32api.T32_Cmd (b"InterCom mycore Var totalcount")
error = t32api.T32_EvalGet(ctypes.byref(result));
if (error == 0):
print("OK");
print (result.value)
else:
print("Nok error")
# Release communication channel
rc = t32api.T32_Exit()
print ("Exit")
</code></pre>
<p>I am able to communicate with the slave using the Python API <code>t32api.T32_Cmd</code>, and it was successful.
But the problem is that I don't get the actual value of the static variable. It prints the following:</p>
<p><strong>output:</strong></p>
<pre><code>$ python test.py
Ok
c_long(0)
Exit
</code></pre>
<p>Here I get the result as <code>c_long(0)</code>; converting with <code>result.value</code> also gives 0.</p>
<p>It seems <code>t32api</code> does not retrieve the variable's value via the <code>t32api.T32_EvalGet()</code> function; the API connection is still attached to the master core only.</p>
<p>I would like to know how to get the actual value of the variable using the Python API. Can anybody suggest a solution?</p>
|
<python><python-3.x><trace32><lauterbach>
|
2023-05-02 12:24:54
| 1
| 1,300
|
user2986042
|
76,154,751
| 7,253,901
|
Pandas-on-spark API throws a NotImplementedError even though functionality should be implemented
|
<p>I am facing a weird issue with pyspark-on-pandas. I am trying to use regex to replace abbreviations with their full counterparts. The function I am using is the following (Simplified it a bit):</p>
<pre><code>def resolve_abbreviations(job_list: pspd.Series) -> pspd.Series:
"""
The job titles contain a lot of abbreviations for common terms.
We write them out to create a more standardized job title list.
:param job_list: df.SchoneFunctie during processing steps
:return: SchoneFunctie where abbreviations are written out in words
"""
abbreviations_dict = {
"1e": "eerste",
"1ste": "eerste",
"2e": "tweede",
"2de": "tweede",
"3e": "derde",
"3de": "derde",
"ceo": "chief executive officer",
"cfo": "chief financial officer",
"coo": "chief operating officer",
"cto": "chief technology officer",
"sr": "senior",
"tech": "technisch",
"zw": "zelfstandig werkend"
}
#Create a list of abbreviations
abbreviations_pob = list(abbreviations_dict.keys())
#For each abbreviation in this list
for abb in abbreviations_pob:
# define patterns to look for
patterns = [fr'((?<=( ))|(?<=(^))|(?<=(\\))|(?<=(\())){abb}((?=( ))|(?=(\\))|(?=($))|(?=(\))))',
fr'{abb}\.']
# actual recoding of abbreviations to written out form
value_to_replace = abbreviations_dict[abb]
for patt in patterns:
job_list = job_list.replace(to_replace=fr'{patt}', value=f'{value_to_replace} ', regex=True)
return job_list
</code></pre>
<p>When I call this code with a 'pyspark.pandas.series.Series' like so:</p>
<pre><code>df['CleanedUp'] = resolve_abbreviations(df['SchoneFunctie'])
</code></pre>
<p>The following error is thrown:</p>
<pre><code>Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2021.3\plugins\python\helpers\pydev\pydevd.py", line 1496, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2021.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:\path_to_python_file\python_file.py", line 180, in <module>
df['SchoneFunctie'] = resolve_abbreviations(df['SchoneFunctie'])
File "C:\path_to_python_file\python_file.py", line 164, in resolve_abbreviations
job_list = job_list.replace(to_replace=fr'{patt}', value=f'{value_to_replace} ', regex=True)
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\series.py", line 4492, in replace
raise NotImplementedError("replace currently not support for regex")
NotImplementedError: replace currently not support for regex
python-BaseException
</code></pre>
<p>But when I look in the pyspark.pandas.Series documentation, I do see a replace function which should be implemented, and also I am quite certain I've used it correctly. Link to documentation:
<a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.pandas/api/pyspark.pandas.Series.replace.html" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/api/python/reference/pyspark.pandas/api/pyspark.pandas.Series.replace.html</a></p>
<p>I am using pyspark version 3.3.1.</p>
<p><strong>What is going wrong here?</strong></p>
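A possible workaround I'm considering (unconfirmed assumption: pandas-on-Spark implements regex support in Series.str.replace even though Series.replace rejects regex=True). Illustrated here with plain pandas and simpler word-boundary patterns standing in for the lookaround patterns above:

```python
import pandas as pd

abbreviations = {"sr": "senior", "tech": "technisch"}

jobs = pd.Series(["sr developer", "tech lead", "senior tech sr"])
for abb, full in abbreviations.items():
    # \b word boundaries stand in for the longer lookaround patterns
    jobs = jobs.str.replace(rf"\b{abb}\b", full, regex=True)
print(jobs.tolist())
```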
|
<python><apache-spark><pyspark><pyspark-pandas>
|
2023-05-02 12:16:45
| 1
| 2,825
|
Psychotechnopath
|
76,154,376
| 4,022,609
|
python's asyncio exchange data between 2 coroutines - one of them executing synchronous and cpu-intensive code
|
<p>I have problems getting 2 routines cooperating, now even to the point that I've started asking myself if it's possible to do it with asyncio...</p>
<p>I have 2 routines. The first routine performs (only) synchronous and cpu-intensive processing. The second one performs ("proper") asyncio code. The first one cannot be on the same asyncio list of coroutines as the second one, because the second one cannot be waiting so long, i.e. while the synchronous, long lasting and cpu-intensive first one is running (and which cannot be interrupted, and as such isn't very suitable to some asyncio setup).</p>
<p>Both aren't related to one another, apart from the ("proper") asyncio ( = second) one indicating that the synchronous operation is to be done, and the first (synchronous) one has to inform the second (asyncio) one that is has finished execution and passing it its results.</p>
<p>Note that I have found plenty of examples of the one (asyncio) starting the other one, but none of these examples returns any result to another awaiting one. Also, when the synchronous code finishes execution, I am able to obtain its results. But I don't succeed in triggering the awaiting one, neither by setting an event (being awaited) nor by having the asyncio code await a future.</p>
<p>Would anyone be able to tell me if my setup is possible at all, or how I should set this up ? Of course, without the first (synchronous) one blocking any asyncio loop, and still keep the possibility for it to provide another (meanwhile "a"waiting) asyncio one of its results ?</p>
<p>EDIT : added the problematic code below. The problem is that self.finished_queue as an asyncio.Queue cannot be awaited (as it isn't informed about the synchronous loop having finished), and a (non-asyncio) "normal" queue.Queue cannot have get() called within an async def as it would block all awaits in the main asyncio loop...</p>
<pre><code>class TestClass:
def __init__(self):
self.unfinished_queue = queue.Queue()
self.finished_queue = queue.Queue() # an asyncio.Queue here doesn't work properly (get() is not returning)
async def asyncio_looping_run(self, duration: float):
i = 0
while True:
i += 1
print(f"taking a nap for {duration} seconds - {i} th time")
await asyncio.sleep(duration)
if i % 10 == 0:
self.unfinished_queue.put_nowait(i)
print("awaiting an entry to finish")
# can't afford to be blocking here, because we are in this async def, and this would block all
# other await'ing async defs !!!
# SO : await'ing an asyncio.Queue should be used here, but this doesn't work !!!
entry = self.finished_queue.get()
print(f"{entry}")
def long_lasting_synchronous_loop(self, msg: str):
print(f"entered long_lasting_synchronous_loop('{msg}')")
while True:
print("waiting for something to do")
input_item = self.unfinished_queue.get()
print(f"found something to do ! - found {input_item} as input")
print("mimicking a long synchronous operation by (synchronously) sleeping for 5 seconds")
time.sleep(5)
print("long synchronous operation finished ! will put it on the finished queue now")
self.finished_queue.put_nowait(f"done {input_item} !")
print(f"the result of {input_item} was put on the finished queue")
async def main():
print("started for real now !")
obj = TestClass()
print("future 1 : outputs every 1/x second, yielding control to the asyncio loop")
future1 = obj.asyncio_looping_run(0.1)
print("future 2 : runs the lengthy DB operation, NOT yielding control to the asyncio loop")
pool = concurrent.futures.ThreadPoolExecutor()
future2 = asyncio.get_event_loop().run_in_executor(
pool, obj.long_lasting_synchronous_loop, 'future2')
print(f"started at {time.strftime('%X')}")
done, pending = await asyncio.wait([future2, future1],
return_when=asyncio.FIRST_COMPLETED)
print("async main() loop exited !")
if __name__ == "__main__":
constants.init_constants()
try:
asyncio.run(
main()
)
except KeyboardInterrupt:
print(f"Terminated on user request.")
except asyncio.CancelledError:
print(f"asyncio.CancelledError: main() terminated by user?")
except ServerSocketBindingError as _e:
print(_e)
exit_code = constants.GENERAL_ERROR
except Exception as _e:
print(f"Terminated due to error: {_e}")
print(f"main() terminated due to error: {_e}")
exit_code = constants.GENERAL_ERROR
finally:
print(f"Handling cleanup.")
</code></pre>
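For reference, a minimal sketch of the pattern I'm trying to reach: keep plain queue.Queue objects on both sides, and on the asyncio side wrap the blocking get() in run_in_executor so the event loop stays responsive while awaiting the synchronous worker's result:

```python
import asyncio
import queue
import threading
import time

def worker(inq: queue.Queue, outq: queue.Queue) -> None:
    # synchronous, CPU-bound side: blocks freely on plain queues
    item = inq.get()
    time.sleep(0.2)  # stand-in for the long synchronous operation
    outq.put(f"done {item}!")

async def main() -> str:
    inq: queue.Queue = queue.Queue()
    outq: queue.Queue = queue.Queue()
    threading.Thread(target=worker, args=(inq, outq), daemon=True).start()
    inq.put(10)
    loop = asyncio.get_running_loop()
    # the blocking outq.get() runs in a worker thread, so other
    # coroutines keep running while we await the result
    return await loop.run_in_executor(None, outq.get)

print(asyncio.run(main()))
```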
|
<python><python-asyncio><synchronous>
|
2023-05-02 11:25:40
| 1
| 316
|
sudo
|
76,154,142
| 274,460
|
What is a fernet key exactly?
|
<p>According to the documentation for the <code>cryptography.fernet</code> module, fernet keys are:</p>
<blockquote>
<p>A URL-safe base64-encoded 32-byte key</p>
</blockquote>
<p>Yet this doesn't work:</p>
<pre><code>import secrets
from cryptography import fernet
f = fernet.Fernet(secrets.token_urlsafe(32))
</code></pre>
<p>failing with <code>ValueError: Fernet key must be 32 url-safe base64-encoded bytes</code> - however the documentation for <code>token_urlsafe</code> claims that it returns</p>
<blockquote>
<p>a random URL-safe text string, containing nbytes random bytes. The text is Base64 encoded ...</p>
</blockquote>
<p>Likewise, this doesn't work:</p>
<pre><code>import base64
from cryptography import fernet
key = fernet.Fernet.generate_key()
print(base64.b64decode(key))
</code></pre>
<p>failing with: <code>binascii.Error: Incorrect padding</code>.</p>
<p>So what is a Fernet key and what is the right way to go about generating one from a pre-shared string?</p>
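My current understanding, sketched with the standard library only (the iteration count and salt below are arbitrary placeholders): a Fernet key is urlsafe_b64encode of exactly 32 raw bytes — 44 characters including the trailing '=' padding that token_urlsafe strips and that plain b64decode chokes on (urlsafe_b64decode handles it). Deriving one from a pre-shared string would go through a KDF first:

```python
import base64
import hashlib
import os

# a Fernet key: urlsafe base64 of exactly 32 raw bytes (44 chars, padded)
key = base64.urlsafe_b64encode(os.urandom(32))
print(len(key), key.endswith(b"="))

def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    """Derive a Fernet-shaped key from a pre-shared string via PBKDF2."""
    raw = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 480_000)
    return base64.urlsafe_b64encode(raw)

derived = key_from_passphrase("correct horse battery staple", b"fixed-salt")
print(len(derived))
```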
|
<python><cryptography><python-cryptography><fernet>
|
2023-05-02 10:56:42
| 1
| 8,161
|
Tom
|
76,154,140
| 5,852,692
|
Networkx creating a graph from dictionary where dict shows the connections
|
<p>I have a dictionary, and I would like to create a networkx DiGraph with it. The dictionary shows the downstream nodes:</p>
<pre><code>dict_ = {0: [1],
1: [],
2: [],
3: [0, 1],
4: [0, 1, 2]}
</code></pre>
<p>The important thing here is that <code>(1)</code> and <code>(2)</code> are end nodes, and <code>(3)</code> is connected to <code>(1)</code> via <code>(0)</code>.</p>
<p>The graph should look like following:</p>
<pre><code>1 2 ^ # digraph directions always to up
| | |
0____ | |
| | | |
3 | | |
|___________| |
4 |
</code></pre>
<p>How can I add edges in a way that respects my constraints? The algorithm should be somewhat generic, since I have different dictionaries for different situations.</p>
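For a generic construction, the edge list falls straight out of the dictionary; the commented networkx lines are my assumption about the one-step equivalents (DiGraph accepts dict-of-lists, and transitive_reduction would drop shortcut edges such as 3→1 in favour of the path over 0):

```python
dict_ = {0: [1], 1: [], 2: [], 3: [0, 1], 4: [0, 1, 2]}

# explicit edge list: one directed edge per (node, downstream node) pair
edges = [(u, v) for u, targets in dict_.items() for v in targets]
print(edges)

# with networkx installed, the same thing in one step (my assumption):
#   import networkx as nx
#   G = nx.DiGraph(dict_)           # accepts dict-of-lists directly
#   H = nx.transitive_reduction(G)  # drops shortcut edges like 3->1 and 4->1
```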
|
<python><dictionary><graph><networkx><edges>
|
2023-05-02 10:56:37
| 1
| 1,588
|
oakca
|
76,154,105
| 3,762,284
|
How can I ignore empty log messages?
|
<p>I use sanic framework.</p>
<p>I added a logging module and saw strange logs like this:</p>
<pre class="lang-none prettyprint-override"><code>[2023-05-02 19:25:40 +0900] - (sanic.access)[INFO][127.0.0.1:34844]: GET http://127.0.0.1/api/v1/ta?idtype=1 200 7559
[2023-05-02 19:25:40,969] [INFO]
[2023-05-02 19:25:40 +0900] - (sanic.access)[INFO][127.0.0.1:34846]: GET http://127.0.0.1/api/v1/sa?idtype=1 200 84895
[2023-05-02 19:25:43,857] [INFO]
[2023-05-02 19:25:43 +0900] - (sanic.access)[INFO][127.0.0.1:34846]: GET http://127.0.0.1/api/v1/to?type=I&idtype=2 200 26729
[2023-05-02 19:25:43 +0900] - (sanic.access)[INFO][127.0.0.1:34842]: GET http://127.0.0.1/api/v1/ta?idtype=2 200 6233
[2023-05-02 19:25:43,870] [INFO]
[2023-05-02 19:25:43,877] [INFO]
</code></pre>
<p>I never log empty messages, so I checked the call stack.</p>
<p>Below is part of the call stack:</p>
<pre><code>[13] - File "/home/deploy/utils/miniconda3/lib/python3.7/site-packages/sanic/response.py", line 122, in send
await self.stream.send(data, end_stream=end_stream)
[14] - File "http1_response_header", line 68, in http1_response_header
HEADER_CEILING = 16_384
[15] - File "/home/deploy/utils/miniconda3/lib/python3.7/site-packages/sanic/http.py", line 477, in log_response
access_logger.info("", extra=extra)
[16] - File "/home/deploy/utils/miniconda3/lib/python3.7/logging/__init__.py", line 1383, in info
self._log(INFO, msg, args, **kwargs)
[17] - File "/home/deploy/utils/miniconda3/lib/python3.7/logging/__init__.py", line 1519, in _log
self.handle(record)
</code></pre>
<p>I think Sanic prints these empty logs (I can't understand why).</p>
<p>I want to ignore empty logs.
Below is my logger initialization code. How can I do it?</p>
<pre><code>def initialize_logger():
class StreamLogFormatter(logging.Formatter):
format_type = "[%(asctime)s] [%(levelname)s] %(message)s"
def format(self, record):
trace(record.msg)
formatter = logging.Formatter(self.format_type)
return formatter.format(record)
logger = logging.getLogger()
# remove default handler
logger.handlers = []
# console log
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)
console_handler.setFormatter(StreamLogFormatter())
logger.addHandler(console_handler)
logger.setLevel(logging.INFO)
return logger
</code></pre>
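<p>For context, one way the empty records could be dropped, assuming they really arrive with an empty message as the call stack suggests (<code>access_logger.info("", extra=extra)</code>), is a <code>logging.Filter</code> attached to the console handler — a sketch:</p>

```python
import logging

class DropEmptyMessages(logging.Filter):
    """Reject records whose message is empty or whitespace-only."""
    def filter(self, record):
        return bool(str(record.getMessage()).strip())

# inside initialize_logger(), right after creating the handler:
# console_handler.addFilter(DropEmptyMessages())
handler = logging.StreamHandler()
handler.addFilter(DropEmptyMessages())
```

<p>Alternatively, the filter could be attached to <code>logging.getLogger("sanic.access")</code> so that only Sanic's access records are affected.</p>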
|
<python><logging><python-logging>
|
2023-05-02 10:51:42
| 1
| 556
|
Redwings
|
76,154,062
| 253,898
|
Set timeout in Opensearch search method gives error
|
<p>I'm using opensearch-py to handle interaction with OpenSearch. However, I'm having some issues setting a timeout to fix a timeout problem when searching an index.</p>
<pre><code>self.get_client().search(
index=index,
body=search_body,
scroll=scroll,
ignore=ignore,
size=size,
timeout=30)
</code></pre>
<p>The error I'm getting:</p>
<blockquote>
<p>ValueError: Timeout value connect was 30.0, but it must be an int, float or None.</p>
</blockquote>
<p>I have a hard time understanding why 30 isn't resolved to an int. I've also tried "30", "30s", 30.0 and casting using int() and float(). Same error.</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "/usr/src/app/opensearchpy/connection/http_requests.py", line 179, in perform_request
response = self.session.send(prepared_request, **send_kwargs)
File "/usr/src/app/requests/sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "/usr/src/app/requests/adapters.py", line 426, in send
timeout = TimeoutSauce(connect=timeout, read=timeout)
File "/usr/src/app/urllib3/util/timeout.py", line 94, in __init__
self._connect = self._validate_timeout(connect, 'connect')
File "/usr/src/app/urllib3/util/timeout.py", line 135, in _validate_timeout
raise ValueError("Timeout value %s was %s, but it must be an "
ValueError: Timeout value connect was 30.0, but it must be an int, float or None.
</code></pre>
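<p>For what it's worth, in the opensearch-py/elasticsearch-py client family the per-request client-side timeout is usually spelled <code>request_timeout</code> (an int or float in seconds), while <code>timeout</code> on <code>search</code> is the <em>server-side</em> search timeout, a duration string such as <code>"30s"</code> — which would explain why a bare <code>timeout=30</code> ends up somewhere it is not expected. A hedged sketch of the kwargs (no client call is made here; the parameter split is the assumption to verify against your client version):</p>

```python
# assumption: request_timeout is the client/socket timeout,
# timeout is the server-side search timeout string
search_kwargs = dict(
    index="my-index",                    # hypothetical index name
    body={"query": {"match_all": {}}},   # hypothetical query body
    request_timeout=30,                  # seconds, int or float
    timeout="30s",                       # duration string for the server
)
```

<p>These would then be passed as <code>self.get_client().search(**search_kwargs)</code>.</p>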
|
<python><opensearch>
|
2023-05-02 10:45:18
| 3
| 11,088
|
Linora
|
76,153,917
| 3,521,180
|
how to get value based on matched key value pair from a single list of dictionaries in python?
|
<p>Below is the list of dictionaries; the first dictionary holds all the keys against which the remaining dictionaries have to be matched.</p>
<pre><code>tier_information= [
{
"NaN": "From",
"From": "To",
"3.00000000": "8.00000000",
"4.00000000": "11.00000000",
"5.00000000": "14.00000000",
"6.00000000": "17.00000000",
"7.00000000": "20.00000000",
"8.00000000": "23.00000000"
},
{
"NaN": "3.00000000",
"From": "8.00000000",
"3.00000000": "13.00000000",
"4.00000000": "18.00000000",
"5.00000000": "23.00000000",
"6.00000000": "28.00000000",
"7.00000000": "33.00000000",
"8.00000000": "38.00000000"
},
{
"NaN": "4.00000000",
"From": "11.00000000",
"3.00000000": "18.00000000",
"4.00000000": "25.00000000",
"5.00000000": "32.00000000",
"6.00000000": "39.00000000",
"7.00000000": "46.00000000",
"8.00000000": "53.00000000"
},
{
"NaN": "5.00000000",
"From": "14.00000000",
"3.00000000": "23.00000000",
"4.00000000": "32.00000000",
"5.00000000": "41.00000000",
"6.00000000": "50.00000000",
"7.00000000": "59.00000000",
"8.00000000": "68.00000000"
},
{
"NaN": "6.00000000",
"From": "17.00000000",
"3.00000000": "28.00000000",
"4.00000000": "39.00000000",
"5.00000000": "50.00000000",
"6.00000000": "61.00000000",
"7.00000000": "72.00000000",
"8.00000000": "83.00000000"
},
{
"NaN": "7.00000000",
"From": "20.00000000",
"3.00000000": "33.00000000",
"4.00000000": "46.00000000",
"5.00000000": "59.00000000",
"6.00000000": "72.00000000",
"7.00000000": "85.00000000",
"8.00000000": "98.0000000"
},
{
"NaN": "8.00000000",
"From": "23.00000000",
"3.00000000": "38.00000000",
"4.00000000": "53.00000000",
"5.00000000": "68.00000000",
"6.00000000": "83.00000000",
"7.00000000": "98.00000000",
"8.00000000": "113.00000000"
}
]
</code></pre>
<p>Explanation:
As we can see, <code>tier_information[0]</code> has the following format:</p>
<pre><code>"NaN": "From",
"From": "To",
</code></pre>
<p>Now,suppose tier_information[0] contains a pair <code>"3.00000000": "8.00000000"</code>, and if it matches in the 2nd dict in the list as below, i.e.</p>
<pre><code>"NaN": "3.00000000",
"From": "8.00000000",
"3.00000000":
</code></pre>
<p>Then the output should be <code>13.00000000</code>.
Similarly, if tier_information[0] contains a pair <code>"4.00000000": "11.00000000"</code>, and if it matches in the 2nd dict in the list as below, i.e.</p>
<pre><code>"NaN": "3.00000000",
"From": "8.00000000",
"4.00000000":
</code></pre>
<p>The output should be <code>"18.00000000"</code>, and so on. In a nutshell, we have to compare <code>tier_information[0]</code> with each sub-dict and fetch the value. I have written the program below.</p>
<pre><code>count = 0
reference_dict = tier_information[0]
for i in reference_dict:
count += 1
result_dict = {}
for sub_dict in tier_information:
for key, value in sub_dict.items():
if key in reference_dict:
result_dict[key] = value
print(result_dict)
</code></pre>
<p>And on executing it, I am getting below output</p>
<pre><code>{'NaN': '8.00000000', 'From': '23.00000000', '3.00000000': '38.00000000', '4.00000000': '53.00000000', '5.00000000': '68.00000000', '6.00000000': '83.00000000', '7.00000000': '98.00000000', '8.00000000': '113.00000000'}
{'NaN': '8.00000000', 'From': '23.00000000', '3.00000000': '38.00000000', '4.00000000': '53.00000000', '5.00000000': '68.00000000', '6.00000000': '83.00000000', '7.00000000': '98.00000000', '8.00000000': '113.00000000'}
{'NaN': '8.00000000', 'From': '23.00000000', '3.00000000': '38.00000000', '4.00000000': '53.00000000', '5.00000000': '68.00000000', '6.00000000': '83.00000000', '7.00000000': '98.00000000', '8.00000000': '113.00000000'}
{'NaN': '8.00000000', 'From': '23.00000000', '3.00000000': '38.00000000', '4.00000000': '53.00000000', '5.00000000': '68.00000000', '6.00000000': '83.00000000', '7.00000000': '98.00000000', '8.00000000': '113.00000000'}
{'NaN': '8.00000000', 'From': '23.00000000', '3.00000000': '38.00000000', '4.00000000': '53.00000000', '5.00000000': '68.00000000', '6.00000000': '83.00000000', '7.00000000': '98.00000000', '8.00000000': '113.00000000'}
{'NaN': '8.00000000', 'From': '23.00000000', '3.00000000': '38.00000000', '4.00000000': '53.00000000', '5.00000000': '68.00000000', '6.00000000': '83.00000000', '7.00000000': '98.00000000', '8.00000000': '113.00000000'}
{'NaN': '8.00000000', 'From': '23.00000000', '3.00000000': '38.00000000', '4.00000000': '53.00000000', '5.00000000': '68.00000000', '6.00000000': '83.00000000', '7.00000000': '98.00000000', '8.00000000': '113.00000000'}
{'NaN': '8.00000000', 'From': '23.00000000', '3.00000000': '38.00000000', '4.00000000': '53.00000000', '5.00000000': '68.00000000', '6.00000000': '83.00000000', '7.00000000': '98.00000000', '8.00000000': '113.00000000'}
</code></pre>
<p>The output is not what is required.
Please help, thank you.</p>
<p>Required output:</p>
<pre><code>[ 3.00000000 8.00000000 3.00000000 8.00000000 ] = 13.00000000
[ 3.00000000 8.00000000 4.00000000 11.00000000 ] = 18.00000000
[ 3.00000000 8.00000000 5.00000000 14.00000000 ] = 23.00000000
[ 3.00000000 8.00000000 6.00000000 17.00000000 ] = 28.00000000
[ 3.00000000 8.00000000 7.00000000 20.00000000 ] = 33.00000000
[ 3.00000000 8.00000000 8.00000000 23.00000000 ] = 38.00000000
[ 4.00000000 11.00000000 3.00000000 8.00000000 ] = 18.00000000
[ 4.00000000 11.00000000 4.00000000 11.00000000 ] = 25.00000000
[ 4.00000000 11.00000000 5.00000000 14.00000000 ] = 32.00000000
[ 4.00000000 11.00000000 6.00000000 17.00000000 ] = 39.00000000
[ 4.00000000 11.00000000 7.00000000 20.00000000 ] = 46.00000000
[ 4.00000000 11.00000000 8.00000000 23.00000000 ] = 53.00000000
[ 5.00000000 14.00000000 3.00000000 8.00000000 ] = 23.00000000
[ 5.00000000 14.00000000 4.00000000 11.00000000 ] = 32.00000000
[ 5.00000000 14.00000000 5.00000000 14.00000000 ] = 41.00000000
[ 5.00000000 14.00000000 6.00000000 17.00000000 ] = 50.00000000
[ 5.00000000 14.00000000 7.00000000 20.00000000 ] = 59.00000000
[ 5.00000000 14.00000000 8.00000000 23.00000000 ] = 68.00000000
[ 6.00000000 17.00000000 3.00000000 8.00000000 ] = 28.00000000
[ 6.00000000 17.00000000 4.00000000 11.00000000 ] = 39.00000000
[ 6.00000000 17.00000000 5.00000000 14.00000000 ] = 50.00000000
[ 6.00000000 17.00000000 6.00000000 17.00000000 ] = 61.00000000
[ 6.00000000 17.00000000 7.00000000 20.00000000 ] = 72.00000000
[ 6.00000000 17.00000000 8.00000000 23.00000000 ] = 83.00000000
[ 7.00000000 20.00000000 3.00000000 8.00000000 ] = 33.00000000
[ 7.00000000 20.00000000 4.00000000 11.00000000 ] = 46.00000000
[ 7.00000000 20.00000000 5.00000000 14.00000000 ] = 59.00000000
[ 7.00000000 20.00000000 6.00000000 17.00000000 ] = 72.00000000
[ 7.00000000 20.00000000 7.00000000 20.00000000 ] = 85.00000000
[ 7.00000000 20.00000000 8.00000000 23.00000000 ] = 98.00000000
[ 8.00000000 23.00000000 3.00000000 8.00000000 ] = 38.00000000
[ 8.00000000 23.00000000 4.00000000 11.00000000 ] = 53.00000000
[ 8.00000000 23.00000000 5.00000000 14.00000000 ] = 68.00000000
[ 8.00000000 23.00000000 6.00000000 17.00000000 ] = 83.00000000
[ 8.00000000 23.00000000 7.00000000 20.00000000 ] = 98.00000000
</code></pre>
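<p>A sketch of one way to produce exactly that output: treat each non-header pair of <code>tier_information[0]</code> as a <code>(NaN, From)</code> lookup key into the remaining dictionaries (shown here with a reduced copy of the question's data, trimmed to two value pairs, so the snippet stays short):</p>

```python
# Reduced version of the question's tier_information (first two dicts,
# trimmed to two value pairs) to keep the sketch short.
tier_information = [
    {"NaN": "From", "From": "To",
     "3.00000000": "8.00000000", "4.00000000": "11.00000000"},
    {"NaN": "3.00000000", "From": "8.00000000",
     "3.00000000": "13.00000000", "4.00000000": "18.00000000"},
]

ref = tier_information[0]
# value pairs such as ("3.00000000", "8.00000000"); skip the header entries
pairs = [(k, v) for k, v in ref.items() if k not in ("NaN", "From")]

results = []
for nan_val, from_val in pairs:
    # the sub-dict whose "NaN"/"From" pair matches this reference pair
    match = next((d for d in tier_information[1:]
                  if d["NaN"] == nan_val and d["From"] == from_val), None)
    if match is None:
        continue
    for k, v in pairs:
        results.append((nan_val, from_val, k, v, match[k]))

for row in results:
    print("[ {} {} {} {} ] = {}".format(*row))
```

<p>With the full list of seven dictionaries this yields one output line per reference pair crossed with each lookup key, matching the required output above.</p>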
|
<python><json><python-3.x>
|
2023-05-02 10:24:11
| 1
| 1,150
|
user3521180
|
76,153,855
| 4,349,869
|
Paramiko "OSError: Failure" when trying to put large file to SFTP server
|
<p>My task is to perform some action on data queried from a database and then store the output on an SFTP server.<br />
The script performing all of this is executed on a VM running Windows Server 2016 Standard through task scheduler.</p>
<p>There are a total of 4 files which I should copy at the end of the script from some location on a shared drive (accessible from the VM that's running the script) to the SFTP server. These files have size:</p>
<ol>
<li>~25 MB</li>
<li>~55 MB</li>
<li>~900 MB</li>
<li>~2 GB</li>
</ol>
<p>Here's the final part of the script, where the copying process takes place:</p>
<pre><code>hostname = 'ftp_server_name'
username = 'username'
password = 'password'
keypath = 'Sftp_Key'
known_hosts = 'known_hosts.txt'
import paramiko
mykey = paramiko.RSAKey.from_private_key_file(keypath, password)
ssh_client = paramiko.SSHClient()
ssh_client.load_host_keys(known_hosts)
ssh_client.connect(
hostname=hostname,
username=username,
allow_agent=True,
pkey=mykey,
disabled_algorithms={'pubkeys': ['rsa-sha2-256', 'rsa-sha2-512']}
)
tr = ssh_client.get_transport()
tr.default_max_packet_size = 10000000000
tr.default_window_size = 10000000000
sftp_client = ssh_client.open_sftp()
sftp_client.put(
r'\\path\to\origin\on\shared\drive\file_1.csv',
'/root/folder/path/to/destination/file_1.csv'
)
sftp_client.put(
r'\\path\to\origin\on\shared\drive\file_2.csv',
'/root/folder/path/to/destination/file_2.csv'
)
sftp_client.put(
r'\\path\to\origin\on\shared\drive\file_3.csv',
'/root/folder/path/to/destination/file_3.csv'
)
sftp_client.put(
r'\\path\to\origin\on\shared\drive\file_4.csv',
'/root/folder/path/to/destination/file_4.csv'
)
print(sftp_client.listdir())
sftp_client.close()
ssh_client.close()
</code></pre>
<p>As you can see, I am</p>
<ul>
<li>spelling out entirely the destination file name (as indicated in the documentation and as suggested everywhere on SO)</li>
<li>getting the transport and increasing the sizes (as suggested <a href="https://stackoverflow.com/a/48170689/4349869">here</a>)</li>
</ul>
<p>It all goes well until the final one (<code>file_4</code>), at which point I get the following error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "H:\folder\scripts\test_sftp_copy.py", line 45, in <module>
ftp_client.put(
File "C:\Users\MeUser\Anaconda3\lib\site-packages\paramiko\sftp_client.py", line 759, in put
return self.putfo(fl, remotepath, file_size, callback, confirm)
File "C:\Users\MeUser\Anaconda3\lib\site-packages\paramiko\sftp_client.py", line 716, in putfo
size = self._transfer_with_callback(
File "C:\Users\MeUser\Anaconda3\lib\site-packages\paramiko\sftp_client.py", line 679, in _transfer_with_callback
writer.write(data)
File "C:\Users\MeUser\Anaconda3\lib\site-packages\paramiko\file.py", line 405, in write
self._write_all(data)
File "C:\Users\MeUser\Anaconda3\lib\site-packages\paramiko\file.py", line 522, in _write_all
count = self._write(data)
File "C:\Users\MeUser\Anaconda3\lib\site-packages\paramiko\sftp_file.py", line 208, in _write
t, msg = self.sftp._read_response(req)
File "C:\Users\A-AChiap\Anaconda3\lib\site-packages\paramiko\sftp_client.py", line 874, in _read_response
self._convert_status(msg)
File "C:\Users\A-AChiap\Anaconda3\lib\site-packages\paramiko\sftp_client.py", line 907, in _convert_status
raise IOError(text)
OSError: Failure.
</code></pre>
<p>The weird thing is that, once the code breaks after attempting to <code>put</code> the fourth file (raising the <code>OSError: Failure</code>), I cannot put any other file on the server any more.</p>
<p>So, for instance, if I try copying <code>file_1.csv</code> again, I get the <code>OSError: Failure</code> error right away.<br />
In this case, I need to wait several hours or try again the next day.<br />
This feels like an issue on the SFTP side.</p>
<p>Can anybody with more experience or knowledge of the matter comment? Thank you</p>
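<p>One thing worth trying (an assumption on my part, not a confirmed fix): leave <code>default_window_size</code>/<code>default_max_packet_size</code> at their defaults — extremely large values are known to upset some servers — and stream the large file in fixed-size chunks instead of a single <code>put()</code>. A sketch:</p>

```python
import shutil

def put_in_chunks(sftp, local_path, remote_path, chunk_size=1024 * 1024):
    """Upload local_path in chunk_size pieces via an open SFTP client."""
    with open(local_path, "rb") as src, sftp.open(remote_path, "wb") as dst:
        dst.set_pipelined(True)  # don't block on a server ack per write
        shutil.copyfileobj(src, dst, chunk_size)
```

<p>This would be called in place of each <code>sftp_client.put(...)</code> line, with the same source and destination paths.</p>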
|
<python><sftp><paramiko>
|
2023-05-02 10:14:19
| 1
| 545
|
andrea
|
76,153,647
| 1,581,090
|
How to select a control element from `print_control_identifiers` in `pywinauto`?
|
<p>When using <code>pywinauto</code> with an application I can print out the control identifiers using the method <code>print_control_identifiers</code>. An example output is as follows:</p>
<pre><code> | | | | ListItem - '02.05.2023 11:31:53 - Retrieved tree structure' (L599, T1033, R1756, B1050)
| | | | ['02.05.2023 11:31:53 - Retrieved tree structure', 'ListItem3', '02.05.2023 11:31:53 - Retrieved tree structureListItem']
| | | | child_window(title="02.05.2023 11:31:53 - Retrieved tree structure", control_type="ListItem")
</code></pre>
<p>I assume this is a "ListItem".</p>
<ul>
<li>But what do the different elements mean?</li>
<li>What does "L599" mean? What "T1033"? What "R1756"? What "B1050"?</li>
<li>What do the elements in that second row mean? (<code>['02.05.2023 11:31:53 - Retrieved tree structure', 'ListItem3', '02.05.2023 11:31:53 - Retrieved tree structureListItem']</code>)?</li>
<li>What does the last line (<code>child_window</code>) mean?</li>
<li>Where is this documented?</li>
<li>How can I select for example this element when I have the app window as <code>dialog</code>?</li>
</ul>
|
<python><pywinauto>
|
2023-05-02 09:45:21
| 2
| 45,023
|
Alex
|
76,153,606
| 6,695,041
|
Dialogflow doesn't return training phrases
|
<p>I am trying to get an overview of the training phrases per intent from Dialogflow in python.</p>
<p>I have followed <a href="https://cloud.google.com/python/docs/reference/dialogflow/latest/google.cloud.dialogflow_v2.services.intents.IntentsClient#google_cloud_dialogflow_v2_services_intents_IntentsClient_list_intents" rel="nofollow noreferrer">this</a> example to generate the following code:</p>
<pre><code>from google.cloud import dialogflow_v2
# get_credentials is a custom function that loads the credentials
credentials, project_id = get_credentials()
client = dialogflow_v2.IntentsClient(credentials=credentials)
request = dialogflow_v2.ListIntentsRequest(
parent=f"projects/{project_id}/agent/environments/draft",
)
page_result = client.list_intents(request=request)
for intent in page_result:
print("Intent name: ", intent.name)
print("Intent display_name: ", intent.display_name)
print("Training phrases: ", intent.training_phrases)
</code></pre>
<p>The name and display name of the intent are printed as expected; however, <code>training_phrases</code> is always an empty list (for both the draft and the test environment). Any ideas on why I'm not seeing the training phrases that I can see in the console?</p>
<p><strong>EDIT</strong>
After hkanjih's answer I've updated my code as follows:</p>
<pre><code>from google.cloud import dialogflow_v2
# get_credentials is a custom function that loads the credentials
credentials, project_id = get_credentials()
client = dialogflow_v2.IntentsClient(credentials=credentials)
request = dialogflow_v2.ListIntentsRequest(
parent=f"projects/{project_id}/agent/environments/draft",
)
page_result = client.list_intents(request=request)
for intent in page_result:
print("Intent name: ", intent.name)
# intent.name is equal to projects/{project_id}/agent/intents/{intent_id}
intent_request = dialogflow_v2.GetIntentRequest(
name=intent.name,
)
intent = client.get_intent(request=intent_request)
# printing intent name again just to check if it's the same (it is)
print("Intent name: ", intent.name)
print("Intent display_name: ", intent.display_name)
print("Training phrases: ", intent.training_phrases)
</code></pre>
<p>Unfortunately, for all intents: <code>Training phrases: []</code></p>
|
<python><dialogflow-es>
|
2023-05-02 09:39:40
| 3
| 409
|
Hav11
|
76,153,599
| 11,973,820
|
returning oracle cursor results to json python
|
<p>I am working with a Python Oracle connection, but I have an issue returning dates in the cursor result as JSON.</p>
<p>Below is the JSON result of the cursor; the issue is the format of <code>create_dttm</code>. When creating a dataframe from this, the format is not converted. Any suggestions?</p>
<pre><code>result = cursur.execute("**my query**")
data = list(result)
final = json.dumps(data)
print(final)
[{"create_dttm": {"$date": 1677264505842}, "update_dttm": {"$date": 1677264505842}, "wo_id": "ABC-63953"},{"create_dttm": {"$date": 1677264505843}, "update_dttm": {"$date": 1677264505843}, "wo_id": "ABC-63954"}]
</code></pre>
<p>I want the data to be like below when creating a dataframe</p>
<pre><code>create_dttm update_dttm wo_id
2021-5-09 2021-5-09 ABC-63953
2021-5-09 2021-5-09 ABC-63953
</code></pre>
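<p>Those <code>{"$date": ...}</code> wrappers are millisecond epoch timestamps, so one option is to unwrap them before (or while) building the dataframe — a sketch using only the standard library, on a literal copied from the output above:</p>

```python
import json
from datetime import datetime, timezone

# one row copied from the cursor output shown above
raw = '[{"create_dttm": {"$date": 1677264505842}, "update_dttm": {"$date": 1677264505842}, "wo_id": "ABC-63953"}]'
rows = json.loads(raw)
for row in rows:
    for key in ("create_dttm", "update_dttm"):
        ms = row[key]["$date"]  # milliseconds since the epoch
        row[key] = datetime.fromtimestamp(ms / 1000, tz=timezone.utc).strftime("%Y-%m-%d")
print(rows)
```

<p><code>pd.DataFrame(rows)</code> then gives the desired columns; with pandas one could equally convert the extracted millisecond values via <code>pd.to_datetime(..., unit="ms")</code>.</p>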
|
<python><json><oracle-database>
|
2023-05-02 09:38:48
| 2
| 859
|
Jai
|
76,153,438
| 8,863,055
|
should I call declarative_base() multiple times?
|
<p>I work on a legacy project, a Python Flask API using SQLAlchemy.
In this project I can see many calls to <code>declarative_base()</code> for creating the base for DB classes.</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class ExampleClass(Base):
__tablename__ = 'example_table_name'
__table_args__ = {'schema': 'some_schema'}
vendor_id = Column(String())
</code></pre>
<p><code>declarative_base()</code> is called in many of the modules that define DB classes.
We have some performance issues in this project; sometimes the DB acts very slowly. Could the multiple base classes created via repeated calls to <code>declarative_base()</code> be the reason?</p>
|
<python><flask><sqlalchemy>
|
2023-05-02 09:20:06
| 0
| 462
|
Marek Kamiński
|
76,153,402
| 3,165,737
|
Azure Functions won't publish Python function
|
<p>I'm trying to create an Azure functions using Python code, v2 model, version 3.10.</p>
<p>Below are the contents of my <code>requirements.txt</code> file:</p>
<pre><code>azure-functions
requests
beautifulsoup4
html5lib
pyjq
extruct
js2py
</code></pre>
<p>The <code>pyjq</code> dependency has been an issue, as it fails to compile when publishing the code.</p>
<p>If I try a local build (i.e. <code>--build local</code>), there's an issue with lxml (which is an indirect requirement):</p>
<pre><code>There was an error restoring dependencies.
ERROR: cannot install lxml-4.9.2 dependency: binary dependencies without wheels are not supported when building locally.
Use the "--build remote" option to build dependencies on the Azure Functions build server,
or "--build-native-deps" option to automatically build and configure the dependencies using a Docker container.
More information at https://aka.ms/func-python-publish
</code></pre>
<p>As I have a local package (not available in pypi) that needs to be included as well, I decided to build wheels for both that and <code>pyjq</code> (using <code>pip wheel pyjq</code>), store them in a <code>wheels</code> subfolder and modify the <code>requirements.txt</code>:</p>
<pre><code><other packages>
./wheels/pyjq-2.6.0-cp310-cp310-linux_x86_64.whl
./wheels/mypackage-1.0.0-py3-none-any.whl
</code></pre>
<p>The remote build is successful:</p>
<pre><code>Uploading built content /home/site/artifacts/functionappartifact.squashfs for linux consumption function app...
Resetting all workers for fa-myapp.azurewebsites.net
Deployment successful. deployer = Push-Deployer deploymentPath = Functions App ZipDeploy. Extract zip. Remote build.
Remote build succeeded!
Syncing triggers...
Functions in fa-myapp:
</code></pre>
<p>However, no functions seem to be available.</p>
<p>I've tested the code locally (<code>func host start</code>) and confirmed that it works.</p>
<p>Is there any way I can find some sort of detailed logging which will give me more information as to what is causing this?</p>
|
<python><azure-functions>
|
2023-05-02 09:14:20
| 1
| 8,605
|
DocZerø
|
76,153,219
| 2,646,505
|
Distinguish default or command-line argument
|
<p>Consider</p>
<pre class="lang-py prettyprint-override"><code>import argparse
parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument("--foo", type=str, default="bar", help="description")
args = parser.parse_args()
</code></pre>
<p>Now I would like to know if <code>args.foo</code> got its value due to the default, or because the user specified <code>myprogram --foo bar</code>. How can I distinguish that?</p>
<p><strong>Edit:</strong> A big plus would be if the docstring still reflected the default.</p>
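<p>One common approach (a sketch, not the only option): use a sentinel object as the default, check identity after parsing, then substitute the real default yourself. The default is spelled out in the help string by hand, since <code>ArgumentDefaultsHelpFormatter</code> would otherwise print the sentinel:</p>

```python
import argparse

_UNSET = object()  # sentinel a user can never produce from the command line

def parse(argv):
    parser = argparse.ArgumentParser()
    # spell the default out in the help text by hand, since the real
    # default here is the sentinel object
    parser.add_argument("--foo", type=str, default=_UNSET,
                        help="description (default: bar)")
    args = parser.parse_args(argv)
    foo_was_given = args.foo is not _UNSET
    if not foo_was_given:
        args.foo = "bar"  # substitute the real default ourselves
    return args, foo_was_given

print(parse(["--foo", "bar"])[1])  # True  -> user passed it, even if equal to the default
print(parse([])[1])                # False -> value came from the default
```

<p>An identity check against the sentinel is reliable where a value comparison is not: <code>myprogram --foo bar</code> and the default both produce <code>"bar"</code>, but only the former replaces the sentinel.</p>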
|
<python><argparse>
|
2023-05-02 08:51:02
| 1
| 6,043
|
Tom de Geus
|
76,153,040
| 10,413,428
|
Disabling custom button inside QDialogButtonBox
|
<p>I have a QDialogButtonBox that contains the two custom buttons Launch and Cancel. According to <a href="https://stackoverflow.com/questions/31290792/how-to-change-the-caption-of-a-button-in-a-qdialogbuttonbox">this answer</a>, the best way to add these buttons is as follows:</p>
<pre class="lang-py prettyprint-override"><code>button_box = QDialogButtonBox()
button_box.addButton("Launch", QDialogButtonBox.ButtonRole.AcceptRole)
button_box.addButton("Cancel", QDialogButtonBox.ButtonRole.RejectRole)
</code></pre>
<p>One of the two buttons must now be deactivated. I tried to adopt the code from <a href="https://stackoverflow.com/questions/43440318/access-individual-button-inside-qts-qdialogbuttonbox">here</a>:</p>
<pre class="lang-py prettyprint-override"><code>button_box.button(QDialogButtonBox.ButtonRole.AcceptRole).setDisabled(True)
</code></pre>
<p>but this seems to work only with the standard Qt buttons like Ok.
I came up with the following working solution, but I wanted to ask if there is a more direct way that I missed.</p>
<pre class="lang-py prettyprint-override"><code>for btn in button_box.buttons():
if btn.text() == "Launch":
btn.setDisabled(True)
</code></pre>
|
<python><qt><pyside><pyside6><qt6>
|
2023-05-02 08:28:09
| 1
| 405
|
sebwr
|
76,152,806
| 7,745,011
|
How to add custom labels to a torchdata datapipe?
|
<p>I am trying to load image data for model training from a self-hosted S3 storage (MinIO). Pytorch provides <a href="https://pytorch.org/data/beta/generated/torchdata.datapipes.iter.S3FileLoader.html#torchdata.datapipes.iter.S3FileLoader" rel="nofollow noreferrer">new datapipes with this functionality in the torchdata library</a>.</p>
<p>So within my function to create the datapipe, I have these lines:</p>
<pre><code>dp_s3 = IterableWrapper(list(sample_dict.keys()))
dp_s3 = dp_s3.load_files_by_s3()
dp_s3 = dp_s3.map(open_image)
dp_s3 = dp_s3.map(transform)
</code></pre>
<p>The problem with this approach is that the S3 file loader datapipe returns a tuple of a string containing the file path on the S3 storage (as a label) and an <code>io.BytesIO</code> object containing the image data. However, I have all labels and the files to load in separate text files, which are loaded into <code>sample_dict</code> (a dictionary mapping file paths to classification labels) in a previous step.</p>
<p>Question is now, how can I get the labels from <code>sample_dict</code> into my mapping functions?
There seem to be two main obstacles to achieve this:</p>
<ul>
<li>The dataloader is multi-threaded and I get a pickle error if I add <code>sample_dict</code> to it. Also I cannot make the dictionary globally accessible for other worker threads which are handled by pytorch</li>
<li><code>load_files_by_s3()</code> is the functional name for <code>S3FileLoader</code>, which can only deal with S3-type file paths as input. My initial thought was that I need to use a map-style datapipe for this instead of an iterable-style one, but unfortunately there are no map-style S3 datapipes available.</li>
</ul>
|
<python><torch><pytorch-datapipe><torchdata>
|
2023-05-02 07:56:04
| 1
| 2,980
|
Roland Deschain
|
76,152,720
| 10,215,160
|
How to replace certain values in a Polars Series?
|
<p>I want to replace the <code>inf</code> values in a polars series with 0. I am using the polars Python library.</p>
<p>This is my example code:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
example = pl.Series([1,2,float('inf'),4])
</code></pre>
<p>This is my desired output:</p>
<pre><code>output = pl.Series([1.0,2.0,0.0,4.0])
</code></pre>
<p>All similar questions regarding replacements concern Polars DataFrames and use the <code>.when</code> expression (e.g. <a href="https://stackoverflow.com/questions/74814175/replace-value-by-null-in-polars">Replace value by null in Polars</a>), which does not seem to be available on a Series object:</p>
<p><em>AttributeError: 'Series' object has no attribute 'when'</em></p>
<p>Is this possible using polars expressions?</p>
<p>EDIT:
I found the following solution but it seems very convoluted:</p>
<pre><code>example.map_dict({float('inf'): 0 }, default= pl.first())
</code></pre>
|
<python><python-polars>
|
2023-05-02 07:43:49
| 2
| 1,486
|
Sandwichnick
|
76,152,464
| 880,783
|
How can I annotate an untyped import?
|
<p>I'd like to add type annotations to an untyped import. How should I do that? This does not work:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable, TypeVar, cast, reveal_type
from funcy import retry
# bug.py:5: note: Revealed type is "Any"
reveal_type(retry)
F = TypeVar("F", bound=Callable)
retry = cast(Callable[..., Callable[[F], F]], retry)
# bug.py:11: note: Revealed type is "Any"
reveal_type(retry)
</code></pre>
<p>Since I do not assign <code>retry</code> here but import it, I am unsure which notation allows me to add an annotation on import. This does not work:</p>
<pre><code>from funcy import retry : Callable[..., Callable[[F], F]]
</code></pre>
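<p>One pattern that type checkers generally accept (a sketch, using a local stand-in for the untyped <code>funcy.retry</code> so the snippet runs anywhere): import under a private name and <code>cast</code> it once to a small <code>Protocol</code> describing its shape. The checker then uses the protocol, while the runtime object is untouched:</p>

```python
from typing import Any, Callable, Protocol, TypeVar, cast

def _untyped_retry(tries: int):  # local stand-in for the untyped funcy.retry
    def deco(fn):
        return fn
    return deco

F = TypeVar("F", bound=Callable[..., Any])

class _RetryDecorator(Protocol):
    """Callable that, given retry options, returns a function decorator."""
    def __call__(self, *args: Any, **kwargs: Any) -> Callable[[F], F]: ...

# in real code: `from funcy import retry as _untyped_retry`, then cast once
retry = cast(_RetryDecorator, _untyped_retry)

@retry(3)
def add_one(x: int) -> int:
    return x + 1

print(add_one(1))  # 2
```

<p>Another option is to ship a small stub file (<code>funcy.pyi</code>) containing the signatures, which keeps the import statement itself unchanged.</p>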
|
<python><import><python-typing>
|
2023-05-02 07:04:10
| 1
| 6,279
|
bers
|
76,152,283
| 1,581,090
|
How to check if some specific text exists in an application/window with pywinauto?
|
<p>I want to use <code>pywinauto</code> to check if some specific text exists in an application/window. I used the search engine "google" to search for "pywinauto check text" but did not find anything useful. Also, the <a href="https://pywinauto.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">documentation</a> was not useful, as I do not know what methods I can use to check for text. Also the methods of the "dialog"</p>
<pre><code>['WAIT_CRITERIA_MAP', '_WindowSpecification__check_all_conditions', '_WindowSpecification__get_ctrl', '_WindowSpecification__parse_wait_args', '_WindowSpecification__resolve_control', '__call__', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_ctrl_identifiers', 'actions', 'allow_magic_lookup', 'app', 'backend', 'child_window', 'criteria', 'dump_tree', 'exists', 'print_control_identifiers', 'print_ctrl_ids', 'wait', 'wait_not', 'window', 'wrapper_object']
</code></pre>
<p>do not show something obvious to check for some text.</p>
<p>So how can I check for some specific text in an application/window?</p>
<p>I get the corresponding window(?) using the following code:</p>
<pre><code>dialog = Desktop(backend="uia")["MyProgram"]
if not dialog.exists():
dialog = Application(backend='uia').start(path, timeout=300)["MyProgram"]
dialog.wait("exists enabled visible ready")
dialog.set_focus()
</code></pre>
<p>It seems that the text I want to look for is in a pane(?):</p>
<pre><code>x = dialog["SZRCPane10"]
</code></pre>
<p>and here I tried (following <a href="https://stackoverflow.com/questions/51136194/how-can-i-get-all-listitem-in-listbox">this answer</a>):</p>
<pre><code>x.item_texts()
x["ListBox"].item_texts()
x["ListBox1"].item_texts()
x.ListBox.item_texts()
x.texts()
</code></pre>
<p>but none of them worked. When doing</p>
<pre><code>print(x.print_control_identifiers())
</code></pre>
<p>I get the output</p>
<pre><code>Pane - '' (L413, T818, R1723, B1052)
['Pane']
child_window(auto_id="2362796", control_type="Pane")
|
| ListBox - '02.05.2023 08:17:08 - The Configuration Backend has just established connection to MPData' (L413, T818, R1723, B1052)
| ['02.05.2023 08:17:08 - The Configuration Backend has just established connection to MPData', 'ListBox', '02.05.2023 08:17:08 - The Configuration Backend has just established connection to MPDataListBox']
| child_window(title="02.05.2023 08:17:08 - The Configuration Backend has just established connection to MPData", auto_id="listBoxMessages", control_type="List")
| |
| | ScrollBar - 'Vertical' (L1705, T819, R1722, B1051)
| | ['VerticalScrollBar', 'ScrollBar', 'Vertical']
| | child_window(title="Vertical", auto_id="5051336", control_type="ScrollBar")
...
</code></pre>
<p>Is there a way to access these elements/controls?</p>
|
<python><pywinauto>
|
2023-05-02 06:35:40
| 1
| 45,023
|
Alex
|
76,152,242
| 10,639,239
|
Vs Code skips Breakpoints after "import"
|
<p>When debugging my code, my breakpoints are ignored by Visual Studio Code once I use these lines of code:</p>
<pre><code>from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds
from brainflow.data_filter import DataFilter, FilterTypes, DetrendOperations, AggOperations
</code></pre>
<p>No error message is given.
I edited my launch.json in accordance with this post: <a href="https://stackoverflow.com/questions/56794940/debugger-not-stopping-at-breakpoints-in-vs-code-for-python">Debugger Not Stopping at Breakpoints in VS Code for Python</a></p>
<p>I also tried several others, but nothing solves my issue.
Perhaps my import is wrong or has an error? (If anybody knows something BrainFlow-specific, please let me know; this is my first time using it.)</p>
<p>Here my Minimal Code example:</p>
<pre><code>#Bt_Connection
#python -m pip install brainflow
#pip install pybluez
#python.exe -m pip install --upgrade pip
import argparse
import time
import socket
import os
from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds
from brainflow.data_filter import DataFilter, FilterTypes, DetrendOperations, AggOperations
msg="X"
print(msg)
test= "test"
print(test)
</code></pre>
<p>Here are some screenshots, since the problem really only becomes visible in the IDE. (A video would be better, but you will have to believe me that the breakpoints are not hit.)
<a href="https://i.sstatic.net/KRETl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KRETl.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/uH0Iw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uH0Iw.png" alt="enter image description here" /></a></p>
<p>In the second picture the code only stops at the first line, since that is specified in my launch.json. But once I hit continue, it just runs through the code without stopping and writes "X" and "test" in the terminal.</p>
<p>My launch.json, for completeness' sake:</p>
<pre><code>{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Debug Current File",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"stopOnEntry": true,
"justMyCode": false
}
]
}
</code></pre>
<p>Python Version: 3.11.2 64 bit</p>
<p>And for Visual Studio Code:</p>
<p><a href="https://i.sstatic.net/KUVUw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KUVUw.png" alt="enter image description here" /></a></p>
|
<python><visual-studio-code>
|
2023-05-02 06:27:48
| 1
| 365
|
G.M
|
76,152,193
| 3,391,549
|
Find indices of nan elements in nested lists and remove them
|
<pre><code>names=[['Pat','Sam', np.nan, 'Tom', ''], ["Angela", np.nan, "James", ".", "Jackie"]]
values=[[1, 9, 1, 2, 1], [1, 3, 1, 5, 10]]
</code></pre>
<p>I have 2 lists: <code>names</code> and <code>values</code>. Each value goes with a name, i.e., <code>Pat</code> corresponds to the value <code>1</code> and <code>Sam</code> corresponds to the value <code>9</code>.</p>
<p>I would like to remove the <code>nan</code> from <code>names</code> and the corresponding values from <code>values</code>.</p>
<p>That is, I want a <code>new_names</code> list that looks like this:</p>
<pre><code>[['Pat','Sam', 'Tom', ''], ["Angela", "James", ".", "Jackie"]]
</code></pre>
<p>and a <code>new_values</code> list that looks like this:</p>
<pre><code>[[1, 9, 2, 1], [1, 1, 5, 10]]
</code></pre>
<p>My attempt was to first find the indices of these <code>nan</code> entries:</p>
<pre><code>all_nan_idx = []
for idx, name in enumerate(names):
if pd.isnull(name):
all_nan_idx.append(idx)
</code></pre>
<p>However, the above does not account for nested lists.</p>
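<p>To make the goal concrete, here is a plain-Python sketch of the per-sublist filtering I am after (my own rough attempt; <code>math.isnan</code> stands in for <code>pd.isnull</code>, and the values are kept aligned with the names via <code>zip</code>):</p>

```python
import math

# Plain float('nan') stands in for np.nan here; np.nan is itself a float NaN
names = [['Pat', 'Sam', float('nan'), 'Tom', ''],
         ['Angela', float('nan'), 'James', '.', 'Jackie']]
values = [[1, 9, 1, 2, 1], [1, 3, 1, 5, 10]]

def is_nan(x):
    # Only floats can be NaN; strings pass straight through
    return isinstance(x, float) and math.isnan(x)

new_names = [[n for n in row if not is_nan(n)] for row in names]
new_values = [[v for n, v in zip(nrow, vrow) if not is_nan(n)]
              for nrow, vrow in zip(names, values)]
```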
|
<python><list><nested-lists>
|
2023-05-02 06:19:00
| 7
| 9,883
|
Adrian
|
76,152,159
| 1,581,090
|
Where to find proper documentation for `pywinauto`?
|
<p>On the documentation page for <code>pywinauto.application.Application</code> <a href="https://pywinauto.readthedocs.io/en/latest/code/pywinauto.application.html?highlight=pywinauto.window#pywinauto.application.Application" rel="nofollow noreferrer">HERE</a>, some methods are listed, but it seems not all of them are mentioned. I just ran the following code to open an existing application and focus on its window/application (I'm not sure what the difference is):</p>
<pre><code>dialog = Desktop(backend="uia")["MyProgram"]
if not dialog.exists():
dialog = Application(backend='uia').start(path, timeout=300)["MyProgram"]
dialog.set_focus()
dialog.ManageFirmware.click_input()
</code></pre>
<p>But the method <code>set_focus</code> does not seem to be explained in the documentation. Do I understand something wrong here?</p>
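<p>For instance, to see which methods a wrapper object actually exposes, I resorted to introspection (my own workaround, not something from the docs; the placeholder object below stands in for the resolved pywinauto wrapper):</p>

```python
def public_methods(obj):
    # List the public, callable attributes of any Python object;
    # I would pass the pywinauto window wrapper here instead of a string
    return sorted(n for n in dir(obj)
                  if not n.startswith('_') and callable(getattr(obj, n)))

print(public_methods("example string"))  # works on any object
```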
<p>And what is the difference between <code>Desktop</code>, <code>Application</code> and <code>Window</code>...?</p>
|
<python><pywinauto>
|
2023-05-02 06:13:30
| 1
| 45,023
|
Alex
|
76,152,065
| 9,880,480
|
Find the coordinates of a 3D point given the euclidean distance and rotation to another known point
|
<p>Imagine a 3D point (Point 1) with coordinates relative to the origin <code>x,y,z=1,2,3</code> and a quaternion rotation relative to the origin of <code>w,x,y,z=0.8,0.1,0.1,0.1</code> (which can be converted to a rotation matrix). Further, imagine another 3D point (Point 2) at a known Euclidean distance of <code>2</code> from Point 1. You also know the quaternion rotation from Point 2 to Point 1, which is <code>w,x,y,z=0.5,0.5,0.5,0.5</code> (and can also be converted to a rotation matrix). Using this information, how can you find the x,y,z coordinates of Point 2?</p>
<p>I am going to code this out using python, so if you have a suggestion for how to do so as well, that would be greatly appreciated.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import quaternion
p1_coords = [1,2,3]
p1_rot_to_origin = np.quaternion(0.8, 0.1, 0.1, 0.1)
p2_rot_to_p1 = np.quaternion(0.5, 0.5, 0.5, 0.5)
</code></pre>
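<p>To make my setup concrete, here is a pure-numpy sketch of how I would rotate an offset vector by Point 1's quaternion. Note that the local offset direction <code>(0, 2, 0)</code> is my own assumption taken from the RViz experiment below; the question is exactly whether distance plus rotation alone pin it down:</p>

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by the unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    # Standard identity: v' = v + 2*u x (u x v + w*v)
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

p1 = np.array([1.0, 2.0, 3.0])
q1 = np.array([0.8, 0.1, 0.1, 0.1])
q1 = q1 / np.linalg.norm(q1)              # normalize before rotating
offset_local = np.array([0.0, 2.0, 0.0])  # assumed direction, distance 2
p2 = p1 + quat_rotate(q1, offset_local)   # candidate world coordinates of Point 2
```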
<p>I have tried illustrating it using RViz. Code:</p>
<pre class="lang-py prettyprint-override"><code>class TestAnimator(Node):
def __init__(self):
super().__init__('TestAnimator')
self.tf_broadcaster = tf2_ros.TransformBroadcaster(self)
# Ros 2
self.timer = self.create_timer(1, self.timer_callback)
def timer_callback(self):
p1 = TransformStamped()
p1.header.frame_id = 'origin' #
p1.child_frame_id = f'Point_1'
p1.transform.translation = Vector3(x=1., y=2., z=3.)
p1.transform.rotation = Quaternion(w=0.8, x=0.1, y=0.1, z=0.1)
self.tf_broadcaster.sendTransform(p1)
p2 = TransformStamped()
p2.header.frame_id = f'Point_1' # 'test_frame'
p2.child_frame_id = f'Point_2'
p2.transform.translation = Vector3(x=0., y=2., z=0.)
p2.transform.rotation = Quaternion(w=0.5, x=0.5, y=0.5, z=0.5)
self.tf_broadcaster.sendTransform(p2)
# Sleep after each timestep
time.sleep(0.05)
rclpy.init(args=None)
test_animator = TestAnimator()
rclpy.spin(test_animator)
</code></pre>
<p>Image:
<a href="https://i.sstatic.net/6pGNg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6pGNg.png" alt="RVIZ illustration" /></a>
The bottom right axis is the origin (0,0,0). Point 1 is the point closest to the origin, and is rotated Quaternion(w=0.8, x=0.1, y=0.1, z=0.1) with respect to the origin axis as represented by the tilt of the axis on the picture. Point 2 is rotated Quaternion(w=0.5, x=0.5, y=0.5, z=0.5) with respect to Point 1, and is a vector from Point 1 equal to Vector3(x=0., y=2., z=0.).</p>
|
<python><math><rotation><quaternions><coordinate-systems>
|
2023-05-02 05:56:28
| 1
| 397
|
HaakonFlaar
|
76,151,787
| 800,735
|
In Apache Beam/Dataflow multi-output DoFns, how do you assign type hints to specific tags
|
<p>I have a multi-output DoFn:</p>
<pre><code>class DoFn1:
def process(self, row) -> Iterable[Union[Dict[str, Any], pvalue.TaggedOutput]]:
if something:
yield some_dict(...)
else:
yield pvalue.TaggedOutput("bad", ...)
</code></pre>
<p>And another DoFn that consumes its outputs</p>
<pre><code>class DoFn2:
def process(self, row: Dict[str, Any]) -> Iterable[...]:
if something:
yield some_dict(...)
else:
yield pvalue.TaggedOutput("bad", ...)
</code></pre>
<p>And then when I use it like this:</p>
<pre><code>pcoll = ...
pcoll = pcoll | "dofn1" >> beam.ParDo(DoFn1()).with_outputs(
"bad",
main="good",
)
pcoll["good"] | "dofn2" >> beam.ParDo(DoFn2())
</code></pre>
<p>I get an error that looks like this:</p>
<pre><code>apache_beam.typehints.decorators.TypeCheckError: Type hint violation for 'dofn2': requires Dict[str, Any] but got Union[Dict[str, Any], TaggedOutput] for row
</code></pre>
<p>How can I tell Beam type checking that the <code>good</code> tag gives <code>Dict[str, Any]</code>, and <code>bad</code> gives <code>TaggedOutput</code>? Am I forced to propagate the tagged output type hints?</p>
|
<python><google-cloud-dataflow><apache-beam><type-hinting>
|
2023-05-02 04:43:54
| 1
| 965
|
cozos
|
76,151,629
| 3,398,324
|
How to interpret predictions from LightGBM
|
<p>I am trying to obtain predictions from my LightGBM model; a simple minimal example is provided in the first answer <a href="https://stackoverflow.com/questions/62555987/lightgbm-ranking-example/67621253#67621253">here</a>. When I run the code from there (copied below) and call <code>model.predict</code>, I would expect to get predictions for the binary target, 0 or 1, but I get a continuous variable instead:</p>
<pre><code>import numpy as np
import pandas as pd
import lightgbm
df = pd.DataFrame({
"query_id":[i for i in range(100) for j in range(10)],
"var1":np.random.random(size=(1000,)),
"var2":np.random.random(size=(1000,)),
"var3":np.random.random(size=(1000,)),
"relevance":list(np.random.permutation([0,0,0,0,0, 0,0,0,1,1]))*100
})
train_df = df[:800] # first 80%
validation_df = df[800:] # remaining 20%
qids_train = train_df.groupby("query_id")["query_id"].count().to_numpy()
X_train = train_df.drop(["query_id", "relevance"], axis=1)
y_train = train_df["relevance"]
qids_validation = validation_df.groupby("query_id")["query_id"].count().to_numpy()
X_validation = validation_df.drop(["query_id", "relevance"], axis=1)
y_validation = validation_df["relevance"]
model = lightgbm.LGBMRanker(
objective="lambdarank",
metric="ndcg",
)
model.fit(
X=X_train,
y=y_train,
group=qids_train,
eval_set=[(X_validation, y_validation)],
eval_group=[qids_validation],
eval_at=10,
verbose=10,
)
model.predict(X_train)
</code></pre>
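<p>For reference, my understanding (which may be wrong) is that a ranker outputs per-query relevance scores rather than class labels, so I would turn the raw output into a ranking like this:</p>

```python
import numpy as np

# Hypothetical raw scores from model.predict for the rows of one query
scores = np.array([0.3, -1.2, 0.9, 0.1])

# Higher score = more relevant; argsort on the negated scores
# gives row indices ordered from most to least relevant
order = np.argsort(-scores)
```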
|
<python><pandas><lightgbm>
|
2023-05-02 03:57:11
| 1
| 1,051
|
Tartaglia
|
76,151,617
| 10,964,685
|
Export Plotly scatter as kml - python
|
<p>Is it possible to export a Plotly scatter figure as a kml file? I've got an example below using <code>matplotlib</code> but is it possible to execute the same output using Plotly?</p>
<p>The Plotly figure is a scatter plot. Can it be converted to a KML output? I'm getting an error when trying to export it as KML.</p>
<pre><code>import plotly.express as px
import geopandas as gpd
import simplekml
import matplotlib.pyplot as ppl
from pylab import rcParams
gdf = gpd.read_file(gpd.datasets.get_path("naturalearth_cities"))
gdf['LON'] = gdf['geometry'].x
gdf['LAT'] = gdf['geometry'].y
fig = px.scatter_mapbox(data_frame = gdf,
lat = 'LAT',
lon = 'LON',
zoom = 1,
mapbox_style = 'carto-positron',
)
fig.show()
fig.write_image('test.kml')
</code></pre>
<p>Output:</p>
<pre><code>ValueError: Invalid format 'kml'.
Supported formats: ['png', 'jpg', 'jpeg', 'webp', 'svg', 'pdf', 'eps', 'json']
</code></pre>
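<p>Since <code>write_image</code> only supports raster/vector formats, my fallback idea is to build the KML by hand from the same lon/lat columns (a rough sketch; <code>points_to_kml</code> and the sample point are my own invention, and <code>simplekml</code> could serve the same purpose):</p>

```python
def points_to_kml(points, path):
    """Write (name, lon, lat) tuples as KML placemarks to a file."""
    placemarks = "".join(
        f"<Placemark><name>{name}</name>"
        f"<Point><coordinates>{lon},{lat}</coordinates></Point></Placemark>"
        for name, lon, lat in points
    )
    kml = ('<?xml version="1.0" encoding="UTF-8"?>'
           '<kml xmlns="http://www.opengis.net/kml/2.2">'
           f"<Document>{placemarks}</Document></kml>")
    with open(path, "w") as f:
        f.write(kml)

# e.g. from the GeoDataFrame: points_to_kml(zip(gdf['name'], gdf['LON'], gdf['LAT']), 'test.kml')
points_to_kml([("Vatican City", 12.4534, 41.9033)], "test.kml")
```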
|
<python><matplotlib><plotly><kml>
|
2023-05-02 03:54:34
| 1
| 392
|
jonboy
|
76,151,427
| 12,875,823
|
Uvicorn reload options are not being followed
|
<p>I have three directories: app, config, and private</p>
<p>I'm running uvicorn programmatically like this with WatchFiles installed:</p>
<pre class="lang-py prettyprint-override"><code>uvicorn.run(
"app.main:fast",
host=host,
port=port,
log_level=log_level,
reload=reload,
reload_includes=["app/*", "config/*", "manage.py", ".env"],
)
</code></pre>
<p>But for some reason, the directory private is also watched and reloads. I tried doing this:</p>
<pre class="lang-py prettyprint-override"><code>uvicorn.run(
"app.main:fast",
host=host,
port=port,
log_level=log_level,
reload=reload,
reload_dirs=["app", "config"],
reload_includes=["app/*.py", "config/*.py", "manage.py", ".env"],
reload_excludes=["*.py"]
)
</code></pre>
<p>But this just ignores all of the Python files. How do I watch just the directories <code>app</code> and <code>config</code>?</p>
|
<python><fastapi><uvicorn>
|
2023-05-02 02:44:22
| 2
| 998
|
acw
|
76,151,347
| 14,385,814
|
Get the fields of Foreign key values in Django
|
<p>I'm confused about why the foreign-key value is missing from the Ajax result. All I want is to get the name through the foreign key id using <strong>Ajax</strong>. I tried several approaches like <code>select_related()</code>, but it doesn't work.</p>
<p>The image below shows the result of the Ajax call via <code>console.log</code>, which should contain data from the <code>supplier</code> table and the <code>supplier_type</code> table. But only the <code>supplier</code> table's fields come through; the supplier type name (the name field behind the foreign key) is missing. I tried <code>supplier_type__name</code>, but it doesn't seem to work. Is there a problem with my implementation?</p>
<p><a href="https://i.sstatic.net/YAeno.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YAeno.png" alt="enter image description here" /></a></p>
<p><strong>ajax</strong></p>
<pre><code>$('.get_supplier_details').click(function (event) {
var patient_id = $('#selected-patient').find(":selected").val();
$.ajax({
type: "POST",
url: "{% url 'supplier-details' %}",
data:{
supplier: supplier_id
}
}).done(function(data){
console.log(data); //
});
});
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>from main.models import (Supplier, SupplierType)
def patientdetails(request):
supplier_id = request.POST.get('supplier')
clients = Supplier.objects.filter(id=supplier_id ).select_related()
qs_json = serializers.serialize('json', clients, fields=['first_name', 'middle_name', 'supplier_type__name'])
return JsonResponse({'data': qs_json})
</code></pre>
<p><strong>models.py</strong></p>
<pre><code>class SupplierType(models.Model):
name = models.CharField(max_length=128, blank=True, null=True)
is_active = models.BooleanField(default=True)
created_at = models.DateTimeField(blank=True, null=True, auto_now_add=True)
updated_at = models.DateTimeField(blank=True, null=True, auto_now=True)
class Meta:
managed = True
db_table = 'supplier_type'
class Supplier(models.Model):
supplier_type = models.ForeignKey(SupplierType, models.DO_NOTHING)
first_name = models.CharField(max_length=128, blank=True, null=True)
middle_name = models.CharField(max_length=128, blank=True, null=True)
last_name = models.CharField(max_length=128, blank=True, null=True)
class Meta:
managed = True
db_table = 'supplier'
</code></pre>
|
<javascript><python><jquery><django><ajax>
|
2023-05-02 02:23:19
| 2
| 464
|
BootCamp
|
76,151,211
| 4,648,873
|
Unable to import python dependencies that come with miniconda in a Docker run
|
<p>I am trying to run a Python script inside a dockerized miniconda environment. The issue I am facing is that when I <code>docker run</code> interactively (<code>-it</code>) and run the script manually inside, it works great. But when I <code>docker run</code> non-interactively, the modules that come with the miniconda installation, like <code>cryptography</code> and <code>lxml</code>, are not found.</p>
<p>My dockerfile:</p>
<pre><code>ARG REGISTRY=harbor-west.reg.com/ci
ARG FROM_TAG=master
FROM harbor-west.reg.com/base-os/ubuntu:20.04
USER root
ENV CONDA_DIR $HOME/miniconda3
RUN apt update && \
DEBIAN_FRONTEND=noninteractive apt install -y \
python3 \
python3-pip \
wget
RUN pip install --upgrade \
google-api-python-client \
grpcio \
matplotlib \
numpy \
opencv-python \
pandas \
scikit-learn
RUN mkdir /abc
#Download a conda package under /abc/bin - steps removed for simplicity
#install miniconda
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-py38_23.3.1-0-Linux-x86_64.sh
RUN chmod 755 Miniconda3-py38_23.3.1-0-Linux-x86_64.sh
RUN /bin/bash -c "./Miniconda3-py38_23.3.1-0-Linux-x86_64.sh -b"
ENV PATH=$CONDA_DIR/bin:$PATH
RUN /root/miniconda3/condabin/conda init
WORKDIR /abc/bin
CMD ["/bin/bash", "-c", "/abc/bin/start-prediction.sh"]
#ENTRYPOINT ["/abc/bin/start-prediction.sh"]
</code></pre>
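<p>One variant I am considering (an untested guess on my part): interactive shells source <code>~/.bashrc</code>, where <code>conda init</code> installs its hook, while a plain <code>bash -c</code> does not. Forcing a login shell in <code>CMD</code> might make the two cases behave the same:</p>

```dockerfile
# Guess: -l makes bash read its login startup files, which on Ubuntu
# ultimately source ~/.bashrc and therefore run conda's initialization
CMD ["/bin/bash", "-lc", "/abc/bin/start-prediction.sh"]
```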
<p>Output with a non-interactive docker run (unexpected):</p>
<pre><code>Traceback (most recent call last):
File "prediction_server.py", line 2, in <module>
from abc.learn import prepare_data, SuperResolution
File "/abc/bin/abc/__init__.py", line 3, in <module>
from abc.auth.tools import LazyLoader
File "/abc/bin/abc/auth/__init__.py", line 1, in <module>
from .api import RegSession
File "/abc/bin/abc/auth/api.py", line 30, in <module>
from ._auth import (
File "/abc/bin/abc/auth/_auth/__init__.py", line 2, in <module>
from ._pki import PKIAuth
File "/abc/bin/abc/auth/_auth/_pki.py", line 4, in <module>
from ..tools._lazy import LazyLoader
File "/abc/bin/abc/auth/tools/__init__.py", line 1, in <module>
from .certificate import pfx_to_pem
File "/abc/bin/abc/auth/tools/certificate.py", line 6, in <module>
import cryptography
ModuleNotFoundError: No module named 'cryptography'
</code></pre>
<p>Output with an interactive docker run (as expected):</p>
<pre><code>(base)root@dcc788e0a8c5:/abc/bin# ./start-prediction.sh
server listening on 0.0.0.0:50443
</code></pre>
<p>I tried echoing the PATH inside the container, and it looks OK:</p>
<pre><code>/root/miniconda3/bin:/root/miniconda3/condabin:/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
</code></pre>
<p>I'm not able to figure out what I'm missing. I would think if it works interactively, it should work non-interactively as well. Any pointers would be appreciated.</p>
|
<python><docker><miniconda>
|
2023-05-02 01:39:43
| 1
| 1,792
|
A.R.K.S
|
76,151,197
| 18,551,983
|
Filter one DataFrame by another
|
<p>I have two DataFrames, <code>df1</code> and <code>df2</code>, with the same columns, where the index of <code>df2</code> is a subset of the index of <code>df1</code>.</p>
<p>I want to output a DataFrame with the index from <code>df1</code>, but with all cells set to 0 except the cells that have the value 1 in <code>df2</code> <em>(i.e. the values in <code>df1</code> do not actually matter)</em>.</p>
<hr />
<p>Here is an example:</p>
<p>First DataFrame <code>df1</code>:</p>
<pre><code> C1 C2 C3 C4
A 0 0 0 0
B 0 0 0 0
C 0 0 0 0
D 0 0 0 0
E 1 1 1 1
F 1 1 1 1
</code></pre>
<p>Second DataFrame <code>df2</code>:</p>
<pre><code> C1 C2 C3 C4
B 1 1 1 1
C 1 1 1 1
D 1 1 1 1
</code></pre>
<p>Expected output:</p>
<pre><code> C1 C2 C3 C4
A 0 0 0 0
B 1 1 1 1
C 1 1 1 1
D 1 1 1 1
E 0 0 0 0
F 0 0 0 0
</code></pre>
<hr />
<p>Here is code to generate the example data:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df1 = pd.DataFrame(
data={
'C1': [0, 0, 0, 0, 1, 1],
'C2': [0, 0, 0, 0, 1, 1],
'C3': [0, 0, 0, 0, 1, 1],
'C4': [0, 0, 0, 0, 1, 1],
},
index=['A', 'B', 'C', 'D', 'E', 'F'],
)
df2 = pd.DataFrame(
data={
'C1': [1, 1, 1],
'C2': [1, 1, 1],
'C3': [1, 1, 1],
'C4': [1, 1, 1],
},
index=['B', 'C', 'D'],
)
expected_output = pd.DataFrame(
data={
'C1': [0, 1, 1, 1, 0, 0],
'C2': [0, 1, 1, 1, 0, 0],
'C3': [0, 1, 1, 1, 0, 0],
'C4': [0, 1, 1, 1, 0, 0],
},
index=['A', 'B', 'C', 'D', 'E', 'F'],
)
</code></pre>
<hr />
<p>The following can be assumed:</p>
<pre class="lang-py prettyprint-override"><code>assert all(df1.columns == df2.columns)
assert len(set(df2.index) - set(df1.index)) == 0
assert {0, 1} - set(df1.values.ravel()) == set()
assert {0, 1} - set(df2.values.ravel()) == set()
</code></pre>
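<p>For what it is worth, the closest I have come is reindexing <code>df2</code> onto <code>df1</code>'s index (which ignores <code>df1</code>'s values entirely); I am unsure whether this is the idiomatic approach:</p>

```python
import pandas as pd

# Same shapes as the example above, built with scalar fills for brevity
df1 = pd.DataFrame(0, index=list('ABCDEF'), columns=['C1', 'C2', 'C3', 'C4'])
df2 = pd.DataFrame(1, index=list('BCD'), columns=['C1', 'C2', 'C3', 'C4'])

# Rows present in df2 keep their values; rows only in df1 become all zeros
result = df2.reindex(index=df1.index, fill_value=0)
```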
|
<python><python-3.x><pandas><dataframe>
|
2023-05-02 01:34:03
| 2
| 343
|
Noorulain Islam
|
76,151,112
| 6,087,667
|
pandas expanding with custom function for aggregation
|
<p>I am trying to understand why this code throws an error. Even if the first 30 rows after <code>expanding</code> are <code>np.nan</code>, that should still allow numeric operations. Why does it fail? How should I fix it?</p>
<pre><code>import pandas as pd
import numpy as np
i = pd.date_range('2000-01-01', '2000-06-01', freq='D')
x = pd.DataFrame(data=np.random.randint(0,10, (len(i), 3)), index = i, columns = list('abc'))
# this works
# x.expanding(min_periods=30).corr()
# this doesn't. Why? Is it because of nans? Isn't np.nan a float
x.expanding(min_periods=30).aggregate(lambda t: t.corr().stack().loc[[('a', 'b'), ('a', 'c')], :])
# this fails too
# x.expanding(min_periods=30).aggregate(lambda t: t+1)
</code></pre>
<p>DataError: No numeric types to aggregate</p>
<p>I can generate the desired output with this code:</p>
<pre><code>y = x.expanding(min_periods=30).corr()
y.index.names = ['dates', 'letters']
y.columns.name = 'cols'
y=y.stack()
mi = pd.MultiIndex.from_tuples([('a','b'), ('a','c')], names = ['letters', 'cols'])
y.to_frame().join(pd.DataFrame(index=mi), on=['letters', 'cols'], how='inner')
</code></pre>
|
<python><pandas><group-by><aggregate><apply>
|
2023-05-02 01:00:57
| 2
| 571
|
guyguyguy12345
|
76,151,111
| 15,632,586
|
What should I do to load my images with D3-Graphviz and Flask?
|
<p>I am trying to load image files that I reference in a JavaScript file (with D3-Graphviz) from a Flask server. Here is my current server structure:</p>
<pre><code>static
--images
--File.png
--Actor.png
--System.png
--Service.png
--background.jpg
--style.css
--visualisation.js
templates
--website.html
</code></pre>
<p>Here is my current code to implement the graph (with icons) in <code>visualisation.js</code>:</p>
<pre><code>let graph = d3.select("#graph")
let newgraphviz = graph.graphviz()
newgraphviz.transition(function () {
return d3.transition("main")
.ease(d3.easeLinear)
.delay(500)
.duration(1500);}
)
.on("initEnd", render)
.on("end", draw_edges);
let graph2 = graph.selectAll(".edge").nodes()
console.log(graph2)
function draw_edges() {
graph.selectAll(".edge")
.nodes()
.forEach(function (e, i) {
setTimeout(() => {
{
let ee = d3.select(e);
ee.select("path")
.attr("stroke-width", 1)
.transition()
.duration(800)
.attr("stroke", "red")
.attr("stroke-width", 2);
ee.select("polygon").transition().duration(800).attr("fill", "red");
}
}, 800 * (i + 1));
});
}
function renderString(str){
newgraphviz.tweenShapes(false)
.addImage("{{ url_for('static', filename='images/Actor.png') }}", "50px", "50px")
.addImage("{{ url_for('static', filename='images/File.png') }}", "50px", "50px")
.addImage("{{ url_for('static', filename='images/System.png') }}", "50px", "50px")
.addImage("{{ url_for('static', filename='images/Service.png') }}", "50px", "50px")
.renderDot(str, function() {
graph.selectAll("title").remove();
graph.selectAll("text").style("pointer-events", "none");
});
}
// This would be used to render the GV file.
function render(filename) {
fetch(filename).then(response => response.text()).then(textAsString =>
renderString(textAsString));
}
</code></pre>
<p>And here is my code (in Flask) for deploying the website:</p>
<pre><code>from flask import Flask, render_template, request, jsonify
import subprocess
app = Flask(__name__, static_url_path='/static')
@app.route('/')
def index():
return render_template('website.html')
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>However, when I run this Python file to serve <code>website.html</code>, I cannot get the icons (<code>File.png</code>, <code>Actor.png</code>, <code>System.png</code> and <code>Service.png</code>) displayed in the graph, although my <code>background.jpg</code> file (which I defined in the CSS file) still loads.</p>
<p>So, is the problem in my code related to D3-Graphviz, or just a problem about loading images in Flask? And what should I do to resolve this problem?</p>
<p><strong>Update:</strong> From what I read about Flask, I changed the addImage call to <code>.addImage("{{ url_for('static', filename='images/System.png') }}", "50px", "50px")</code>; however, this does not solve my problem.</p>
|
<javascript><python><flask><d3.js><d3-graphviz>
|
2023-05-02 01:00:30
| 0
| 451
|
Hoang Cuong Nguyen
|
76,151,098
| 5,790,653
|
python email send to list one by one
|
<p>I have a list of emails like this:</p>
<pre><code>emails = ['a@a.a', 'b@b.b', 'c@c.c', 'd@d.d']
</code></pre>
<p>And this is my <code>for</code> loop:</p>
<pre><code>for email in emails:
message['To'] = email
# ... other codes to send email
</code></pre>
<p>The problem is when I receive all three emails:</p>
<p>The first contact's <code>to</code> header is: <code>a@a.a</code>.</p>
<p>The second contact's <code>to</code> header is: <code>a@a.a</code> and <code>b@b.b</code>.</p>
<p>The third contact's <code>to</code> header is: <code>a@a.a</code> and <code>b@b.b</code> and <code>c@c.c</code>.</p>
<p>How can I send emails to all members without the mentioned problem?</p>
<p>I saw other answers but they still have issues.</p>
<p>I've been thinking about opening a file and having a list like this:</p>
<blockquote>
<p>In first iteration, the file should contain first email, then python reads the file and sends email to that. In second iteration the second email is written there and python sends email to only the second one, etc.</p>
</blockquote>
<p>I tried to have a <code>with open</code> with mode <code>w</code> but only the last email is written in the file and that's another issue.</p>
<p><strong>Edit 1</strong></p>
<p>This is the full code:</p>
<pre><code>from smtplib import SMTP
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
import datetime
import pathlib
message = MIMEMultipart('alternative')
sender_email = 'myEmail@gmail.com'
receiver_email = ['a@a.a', 'b@b.b', 'c@c.c']
password = 'myPass'
html = '<p>an HTML code</p>'
message['Subject'] = 'Some Subject'
message['From'] = sender_email
for receiver in receiver_email:
message['To'] = receiver
part1 = MIMEText(html, 'plain')
part2 = MIMEText(html, 'html')
message.attach(part1)
message.attach(part2)
smtp = SMTP('smtp.gmail.com', 587)
smtp.ehlo()
smtp.starttls()
smtp.ehlo()
smtp.login(sender_email, password)
smtp.sendmail(sender_email, receiver, message.as_string())
smtp.quit()
</code></pre>
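<p>Am I right that the headers accumulate because assigning to <code>message['To']</code> <em>appends</em> a new header instead of replacing the old one? A minimal sketch of the fix I am considering (deleting the old header on each iteration):</p>

```python
from email.mime.multipart import MIMEMultipart

emails = ['a@a.a', 'b@b.b', 'c@c.c']
message = MIMEMultipart('alternative')
for addr in emails:
    del message['To']   # removes any existing To: headers; no error if absent
    message['To'] = addr
    # ... attach parts and send here; each message now carries exactly one To:
```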
|
<python><email>
|
2023-05-02 00:58:00
| 1
| 4,175
|
Saeed
|
76,151,051
| 2,647,447
|
How to add a background image to only a selected page in tkinter in Python?
|
<p>I am trying to add a "page under construction" PNG file to only one of the three pages under my release menu, which has "Release 1", "Release 2", and "Release 3".</p>
<pre><code>def underConstruction():
    my_img = ImageTk.PhotoImage(Image.open('c:/path/to/my/png/file'))
    my_label = Label(image=my_img)
    my_label.pack()
    print("I am adding the png file here.")
</code></pre>
<p>My calling statement is:
<code>subMenu.add_command(label="Release 1...", command=underConstruction)</code>.
The problem is that only the print statement gets executed; it did not pull in the PNG file. What went wrong?</p>
|
<python><tkinter>
|
2023-05-02 00:45:42
| 1
| 449
|
PChao
|