QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,846,512
| 12,750,353
|
Cannot record with a Gradio audio input using dynamic layout
|
<p>I want to create an interface with <code>gradio</code> where I have an initially hidden audio input that becomes visible after some steps (e.g. after the user receives instructions), at which point the user can record audio. But when I make the audio input visible, it is unable to record.</p>
<pre class="lang-py prettyprint-override"><code>import gradio

with gradio.Blocks() as interface:
    recorder = gradio.Audio(source='microphone', type='filepath', visible=False)
    action_btn = gradio.Button('Start')

    def next_line(action):
        if action == 'Start':
            return {action_btn: 'Next', recorder: gradio.update(visible=True)}
        else:
            return {action_btn: 'Done', recorder: gradio.update(visible=False)}

    action_btn.click(next_line, inputs=[action_btn], outputs=[action_btn, recorder])

interface.launch(share=True)
</code></pre>
<p>Also, I am using it in a jupyter notebook at the moment for prototyping.</p>
<p>Can someone help me work around this issue of recording with a component that is sometimes hidden?</p>
|
<python><jupyter-notebook><gradio>
|
2023-03-26 07:52:33
| 1
| 14,764
|
Bob
|
75,846,414
| 1,581,090
|
How to test the python backend of a flask application?
|
<p>I have a draft of a flask application with a html/javascript frontend and a python backend, to which the frontend communicates through the flask API.</p>
<p>I want to test the python backend through the API using python only (no javascript etc. involved in the testing). I found <a href="https://circleci.com/blog/testing-flask-framework-with-pytest/" rel="nofollow noreferrer">this page</a> with some suggestions, but I guess something is missing.</p>
<p>I have tried the following test code in <code>tests/test1.py</code>:</p>
<pre><code>from flask import Flask

app = Flask(__name__)

def test_1():
    """Example test"""
    response = app.test_client().get('/')
    print(response)
</code></pre>
<p>but the response returns a 404 error. I usually start the web server with <code>flask run --host 0.0.0.0</code> using this <code>server.py</code> script:</p>
<pre><code>import sys
import json
from flask import Flask, request, render_template

app = Flask(__name__)

from mf import gameplay
game = gameplay.Game()

@app.route('/')
def index():
    return render_template('mainmap.html')

@app.route('/api', methods=['POST'])
def api():
    data_json = request.data.decode()
    data = json.loads(data_json)
    return game(data_json)

if __name__ == '__main__':
    app.run('0.0.0.0', 5000, threaded=True)
</code></pre>
<p>I guess I am missing something to start the test?</p>
<p>I found some examples <a href="https://www.patricksoftwareblog.com/testing-a-flask-application-using-pytest/" rel="nofollow noreferrer">HERE</a> and <a href="https://testdriven.io/blog/flask-pytest/" rel="nofollow noreferrer">HERE</a> that use something like</p>
<pre><code>flask_app = create_app('flask_test.cfg')
</code></pre>
<p>but 1. <code>flask</code> itself does not provide a <code>create_app</code> method, and 2. what is the content of the config file? It is not shown even in the flask documentation itself (<a href="https://flask.palletsprojects.com/en/2.2.x/config/" rel="nofollow noreferrer">HERE</a>), and I did not find a single example...</p>
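<p>For reference, <code>create_app</code> in those tutorials is an application factory the tutorial authors wrote themselves; it is a pattern, not a flask API. A minimal, self-contained sketch of the pattern (hypothetical route and response body, not the actual app above), which a <code>test_client</code> test can run against:</p>

```python
from flask import Flask

def create_app():
    # An application factory you define yourself -- not a flask built-in.
    app = Flask(__name__)

    @app.route('/')
    def index():
        return 'hello'

    return app

def test_index():
    # test_client() only knows the routes registered on *this* app object,
    # which is why a fresh, empty Flask(__name__) created in a test returns 404.
    client = create_app().test_client()
    response = client.get('/')
    assert response.status_code == 200
    assert response.data == b'hello'

test_index()
```

<p>In the case above, importing <code>app</code> from <code>server.py</code> instead of creating a new <code>Flask(__name__)</code> inside the test file should make the registered routes visible to the test client.</p>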
|
<python><flask>
|
2023-03-26 07:24:31
| 3
| 45,023
|
Alex
|
75,846,194
| 15,155,978
|
How to download Google Drive files to a Jupyter notebook using the Google Drive Python API?
|
<p>I'm trying to download my Google Drive files to a Jupyter notebook using the Google Drive Python API. Thus, I'm following this <a href="https://medium.com/@umdfirecoml/a-step-by-step-guide-on-how-to-download-your-google-drive-data-to-your-jupyter-notebook-using-the-52f4ce63c66c" rel="nofollow noreferrer">post</a>. But since this is outdated, I'm having problems when creating the credential using OAuth 2.0 verification.</p>
<p>Since Jupyter notebook opens a localhost URL, I decided to configure it as a Web Application for the application type. Then, I'm not sure what to configure in the <code>Authorized JavaScript origins</code> and <code>Authorized redirect URIs</code> fields. I guess that in the first field I could use the localhost URL that a Jupyter notebook opens; in the case of the second field, I don't know what to put in it. The configuration was done as follows.<a href="https://i.sstatic.net/YZeZo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YZeZo.png" alt="config_image" /></a>.</p>
<p>When running the code suggested on the Medium post:</p>
<pre><code>obj = lambda: None
lmao = {"auth_host_name": 'localhost', 'noauth_local_webserver': 'store_true',
        'auth_host_port': [8080, 8090], 'logging_level': 'ERROR'}
for k, v in lmao.items():
    setattr(obj, k, v)

# authorization boilerplate code
SCOPES = 'https://www.googleapis.com/auth/drive.readonly'
store = file.Storage('token.json')
creds = store.get()
# The following will give you a link if token.json does not exist;
# the link allows the user to give this app permission
if not creds or creds.invalid:
    flow = client.flow_from_clientsecrets('client_id.json', SCOPES)
    creds = tools.run_flow(flow, store, obj)
</code></pre>
<p>I'm getting this error: <code>InvalidClientSecretsError: Missing property "redirect_uris" in a client type of "web"</code>, regarding the redirect URI that I don't know how to configure.</p>
<pre><code>InvalidClientSecretsError Traceback (most recent call last)
Cell In [2], line 12
10 # The following will give you a link if token.json does not exist, the link allows the user to give this app permission
11 if not creds or creds.invalid:
---> 12 flow = client.flow_from_clientsecrets('client_secret.json', SCOPES)
13 creds = tools.run_flow(flow, store, obj)
File ~/.pyenv/versions/py-3.10.7/lib/python3.10/site-packages/oauth2client/_helpers.py:133, in positional.<locals>.positional_decorator.<locals>.positional_wrapper(*args, **kwargs)
131 elif positional_parameters_enforcement == POSITIONAL_WARNING:
132 logger.warning(message)
--> 133 return wrapped(*args, **kwargs)
File ~/.pyenv/versions/py-3.10.7/lib/python3.10/site-packages/oauth2client/client.py:2134, in flow_from_clientsecrets(filename, scope, redirect_uri, message, cache, login_hint, device_uri, pkce, code_verifier, prompt)
2097 """Create a Flow from a clientsecrets file.
2098
2099 Will create the right kind of Flow based on the contents of the
(...)
2131 invalid.
2132 """
2133 try:
-> 2134 client_type, client_info = clientsecrets.loadfile(filename,
2135 cache=cache)
2136 if client_type in (clientsecrets.TYPE_WEB,
2137 clientsecrets.TYPE_INSTALLED):
2138 constructor_kwargs = {
2139 'redirect_uri': redirect_uri,
2140 'auth_uri': client_info['auth_uri'],
2141 'token_uri': client_info['token_uri'],
2142 'login_hint': login_hint,
2143 }
File ~/.pyenv/versions/py-3.10.7/lib/python3.10/site-packages/oauth2client/clientsecrets.py:165, in loadfile(filename, cache)
162 _SECRET_NAMESPACE = 'oauth2client:secrets#ns'
164 if not cache:
--> 165 return _loadfile(filename)
167 obj = cache.get(filename, namespace=_SECRET_NAMESPACE)
168 if obj is None:
File ~/.pyenv/versions/py-3.10.7/lib/python3.10/site-packages/oauth2client/clientsecrets.py:126, in _loadfile(filename)
123 except IOError as exc:
124 raise InvalidClientSecretsError('Error opening file', exc.filename,
125 exc.strerror, exc.errno)
--> 126 return _validate_clientsecrets(obj)
File ~/.pyenv/versions/py-3.10.7/lib/python3.10/site-packages/oauth2client/clientsecrets.py:99, in _validate_clientsecrets(clientsecrets_dict)
97 for prop_name in VALID_CLIENT[client_type]['required']:
98 if prop_name not in client_info:
---> 99 raise InvalidClientSecretsError(
100 'Missing property "{0}" in a client type of "{1}".'.format(
101 prop_name, client_type))
102 for prop_name in VALID_CLIENT[client_type]['string']:
103 if client_info[prop_name].startswith('[['):
InvalidClientSecretsError: Missing property "redirect_uris" in a client type of "web".
</code></pre>
<p>Can someone tell me how to configure a credential on Google Drive API for accessing my Google Drive files with Jupyter notebooks?</p>
<p>I really appreciate any help you can provide.</p>
|
<python><jupyter-notebook><google-drive-api>
|
2023-03-26 06:23:10
| 3
| 922
|
0x55b1E06FF
|
75,845,973
| 523,612
|
What causes `None` results from BeautifulSoup functions? How can I avoid "AttributeError: 'NoneType' object has no attribute..." with BeautifulSoup?
|
<p>Often when I try using BeautifulSoup to parse a web page, I get a <code>None</code> result from the BeautifulSoup function, or else an <code>AttributeError</code> is raised.</p>
<p>Here are some self-contained examples (no Internet access is required, as the data is hard-coded), based off an example in the <a href="https://beautiful-soup-4.readthedocs.io/en/latest" rel="nofollow noreferrer">documentation</a>:</p>
<pre><code>>>> html_doc = """
... <html><head><title>The Dormouse's story</title></head>
... <body>
... <p class="title"><b>The Dormouse's story</b></p>
...
... <p class="story">Once upon a time there were three little sisters; and their names were
... <a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
... <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
... <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
... and they lived at the bottom of a well.</p>
...
... <p class="story">...</p>
... """
>>>
>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup(html_doc, 'html.parser')
>>> print(soup.sister)
None
>>> print(soup.find('a', class_='brother'))
None
>>> print(soup.select_one('a.brother'))
None
>>> soup.select_one('a.brother').text
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'text'
</code></pre>
<p>I know that <code>None</code> <a href="https://stackoverflow.com/questions/19473185">is a special value in Python</a> and that <code>NoneType</code> <a href="https://stackoverflow.com/questions/21095654">is its type</a>; but... <strong>now what?</strong> Why do I get these results, and how can I handle them properly?</p>
<hr />
<p><sub>This question is specifically about BeautifulSoup methods that look for a single result (like <code>.find</code>). If you get this result using a method like <code>.find_all</code> that normally returns a list, this may be due to a problem with the HTML parser. See <a href="https://stackoverflow.com/questions/23113803">Python Beautiful Soup 'NoneType' object error</a> for details.</sub></p>
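<p>A minimal sketch of the usual defensive pattern: check single-result lookups for <code>None</code> before dereferencing, and prefer the list-returning methods when a match may legitimately be absent (they return an empty list instead of <code>None</code>):</p>

```python
from bs4 import BeautifulSoup

html_doc = '<p class="story"><a class="sister" id="link1">Elsie</a></p>'
soup = BeautifulSoup(html_doc, 'html.parser')

# Guard a single-result lookup before touching its attributes:
tag = soup.find('a', class_='brother')
text = tag.text if tag is not None else '<no match>'
print(text)  # <no match>

# find_all / select return an empty list rather than None, so loops
# and comprehensions simply produce nothing when there is no match:
print([a.text for a in soup.find_all('a', class_='sister')])  # ['Elsie']
```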
|
<python><beautifulsoup><attributeerror><nonetype>
|
2023-03-26 05:06:12
| 2
| 61,352
|
Karl Knechtel
|
75,845,952
| 5,513,260
|
python udf iterator -> iterator giving outputted more rows error
|
<p>I have a dataframe with a text column CALL_TRANSCRIPT (string format) and a pii_allmethods column (array of strings). I am trying to search the call transcripts for the strings in the array and mask them, using a pyspark pandas UDF. I am getting "outputted more rows than input rows" errors; I tried a couple of ways to troubleshoot, but was not able to resolve it.</p>
<p>The inner for loop goes through the pii_list array and replaces matches in the call transcript (the text variable) with the mask value. The yield happens after the inner loop is done, so it is not clear why it would return more rows than input.</p>
<p><strong>NOTE:</strong> I have a Spark UDF which is working; I am trying a pandas UDF for performance improvements.</p>
<pre><code>dfs = dfs.withColumn('FULL_TRANSCRIPT', pu_mask_all_pii(col("CALL_TRANSCRIPT"),
                                                        col("pii_allmethods")))
</code></pre>
<p><strong>Python UDF function:</strong></p>
<pre><code>@pandas_udf("string")
def pu_mask_all_pii(iterator: Iterator[Tuple[pd.Series, pd.Series]]) -> Iterator[pd.Series]:
    for text, pii_list in iterator:
        pii_list = sorted(pii_list, key=len, reverse=True)
        strtext = str(text)
        for pii in pii_list:
            if len(pii) > 1:
                mask = len(pii) * 'X'
                strtext = str(re.sub(re.escape(pii), mask, strtext.encode(), flags=re.IGNORECASE))
        yield strtext
</code></pre>
<p><strong>PythonException:</strong> An exception was thrown from a UDF: 'AssertionError: Pandas SCALAR_ITER UDF outputted more rows than input rows.'. Full traceback below:</p>
|
<python><pyspark><pandas-udf>
|
2023-03-26 04:57:01
| 1
| 421
|
Mohan Rayapuvari
|
75,845,867
| 417,896
|
Arrow Julia to Python - Read Record Batch Stream
|
<p>I am trying to read an arrow file that I wrote as a sequence of record batches in python. For some reason I am only getting the first struct entry. I have verified the files are bigger than one item and of expected size.</p>
<pre><code>with pa.OSFile(input_filepath, 'rb') as source:
    with pa.ipc.open_stream(source) as reader:
        for batch in reader:
            # only one batch here
            my_struct_col = batch.column('col1')
            field1_values = my_struct_col.flatten()
            print(field1_values)
</code></pre>
<p>I am writing the file in Julia using:</p>
<pre><code>using Arrow

struct OutputData
    name::String
    age::Int32
end

writer = open(filePath, "w")
data = OutputData("Alex", 20)
for _ = 1:1000
    t = (col1=[data],)
    table = Arrow.Table(Arrow.tobuffer(t))
    Arrow.write(writer, table)
end
close(writer)
</code></pre>
<p>I believe both languages are using the streaming IPC format to file.</p>
|
<python><julia><apache-arrow>
|
2023-03-26 04:32:12
| 1
| 17,480
|
BAR
|
75,845,842
| 3,810,748
|
Is the default `Trainer` class in HuggingFace transformers using PyTorch or TensorFlow under the hood?
|
<h2>Question</h2>
<p>According to the <a href="https://huggingface.co/docs/transformers/v4.27.2/en/main_classes/trainer" rel="nofollow noreferrer">official documentation</a>, the <code>Trainer</code> class "provides an API for feature-complete training in PyTorch for most standard use cases".</p>
<p>However, when I try to actually use <code>Trainer</code> in practice, I get the following error message that seems to suggest that TensorFlow is currently being used under the hood.</p>
<pre><code>tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
</code></pre>
<p>So which one is it? Does the HuggingFace transformers library use PyTorch or TensorFlow for their internal implementation of <code>Trainer</code>? And is it possible to switch to only using PyTorch? I can't seem to find a relevant parameter in <code>TrainingArguments</code>.</p>
<p><strong>Why does my script keep printing out TensorFlow related errors? Shouldn't <code>Trainer</code> be using PyTorch only?</strong></p>
<h2>Source code</h2>
<pre><code>from transformers import GPT2Tokenizer
from transformers import GPT2LMHeadModel
from transformers import TextDataset
from transformers import DataCollatorForLanguageModeling
from transformers import Trainer
from transformers import TrainingArguments
import torch
# Load the GPT-2 tokenizer and LM head model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
lmhead_model = GPT2LMHeadModel.from_pretrained('gpt2')
# Load the training dataset and divide blocksize
train_dataset = TextDataset(
    tokenizer=tokenizer,
    file_path='./datasets/tinyshakespeare.txt',
    block_size=64
)

# Create a data collator for preprocessing batches
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=False
)

# Defining the training arguments
training_args = TrainingArguments(
    output_dir='./models/tinyshakespeare',  # output directory for checkpoints
    overwrite_output_dir=True,              # overwrite any existing content
    per_device_train_batch_size=4,          # sample batch size for training
    dataloader_num_workers=1,               # number of workers for dataloader
    max_steps=100,                          # maximum number of training steps
    save_steps=50,                          # after # steps checkpoints are saved
    save_total_limit=5,                     # maximum number of checkpoints to save
    prediction_loss_only=True,              # only compute loss during prediction
    learning_rate=3e-4,                     # learning rate
    fp16=False,                             # use 16-bit (mixed) precision
    optim='adamw_torch',                    # define the optimizer for training
    lr_scheduler_type='linear',             # define the learning rate scheduler
    logging_steps=5,                        # after # steps logs are printed
    report_to='none',                       # report to wandb, tensorboard, etc.
)

if __name__ == '__main__':
    torch.multiprocessing.freeze_support()

    trainer = Trainer(
        model=lmhead_model,
        args=training_args,
        data_collator=data_collator,
        train_dataset=train_dataset,
    )

    trainer.train()
</code></pre>
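<p>For what it's worth, <code>Trainer</code> itself is implemented in PyTorch (the TensorFlow counterparts are the Keras/<code>TFTrainer</code> paths); the startup banner typically appears only because <code>transformers</code> imports TensorFlow when it happens to be installed in the environment. A hedged sketch of one way to suppress that, using the <code>USE_TF</code> / <code>USE_TORCH</code> environment variables that recent <code>transformers</code> versions honor (they must be set before the first <code>import transformers</code> anywhere in the process):</p>

```python
import os

# Must run before `import transformers` is first executed anywhere.
os.environ["USE_TF"] = "0"     # ask transformers to skip the TensorFlow backend
os.environ["USE_TORCH"] = "1"  # keep the PyTorch backend enabled

try:
    import transformers  # TensorFlow startup messages should now be absent
except ImportError:
    pass  # transformers not installed in this environment
```

<p>Uninstalling TensorFlow from the environment has the same effect.</p>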
|
<python><tensorflow><pytorch><huggingface-transformers>
|
2023-03-26 04:19:32
| 2
| 6,155
|
AlanSTACK
|
75,845,832
| 9,729,023
|
AWS Lambda : How Can We Pass the Event by Lambda Test Event, Instead Of Transformer In Event Bridge?
|
<p>We have a scheduled job that counts and exports several tables via a Lambda function.
We usually pass the timestamp and table name to the Lambda function through a transformer in EventBridge.</p>
<p>We'd like to run the Lambda function manually via a Lambda test event when we need to run only a specific table for troubleshooting.
I set the Lambda test event to be exactly the same as the payload produced by the transformer in EventBridge, but <code>json.loads(event)</code> returns an error (it runs successfully when triggered by the scheduled transformer in EventBridge).</p>
<pre><code>-- Lambda Test Event Json Setting
{
    "time": "2023-03-26T02:05:00Z",
    "table": "T_Test"
}

-- Lambda Code
import os
import sys
import json
from datetime import datetime, date, timedelta

def lambda_handler(event, context):
    print('event: ', event)
    payload = json.loads(event)
    dateFormatter = "%Y-%m-%dT%H:%M:%S%z"
    time = payload['time']
    table = payload['table']
    tables = table.split(',')
    for table in tables:
        print(table) ..

-- Error Message by Test Event Manually
[ERROR] TypeError: the JSON object must be str, bytes or bytearray, not dict
</code></pre>
<p>We understand something is wrong in our test event JSON, but we are not sure why.</p>
<p>Could you kindly give us any advice on handling this error by revising only the test event, not the Lambda code itself?
Thank you so much in advance.</p>
<p>*** My Solution ***</p>
<p>Thanks to the advice, I changed my test event to something <code>json.loads</code> can handle, as follows, and it works now.</p>
<pre><code>-- Revised Lambda Test Event Json Setting
"{ \"time\": \"2023-03-26T02:05:00Z\" , \"table\": \"T_Test\" }"
</code></pre>
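<p>An alternative sketch, if revising the handler were ever an option (hypothetical helper name): the console test event is delivered to the handler as an already-parsed <code>dict</code>, while a raw JSON string still needs <code>json.loads</code>, so the handler can accept both:</p>

```python
import json

def normalize_event(event):
    # The Lambda console test event arrives as an already-parsed dict;
    # a JSON string (like the double-encoded test event above) does not.
    if isinstance(event, dict):
        return event
    return json.loads(event)

print(normalize_event({"time": "2023-03-26T02:05:00Z", "table": "T_Test"})["table"])    # T_Test
print(normalize_event('{"time": "2023-03-26T02:05:00Z", "table": "T_Test"}')["table"])  # T_Test
```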
|
<python><aws-lambda><aws-event-bridge>
|
2023-03-26 04:16:42
| 1
| 964
|
Sachiko
|
75,845,668
| 6,296,626
|
How to sniff network traffic while on VPN?
|
<p>Using <code>scapy</code> I am able to sniff the UDP network traffic between port 10 and 20 like shown in this simple code example:</p>
<pre class="lang-py prettyprint-override"><code>from scapy.all import *

def capture_traffic(packet):
    if packet.haslayer(IP) and packet.haslayer(UDP):
        if 10 <= packet[UDP].sport <= 20:
            print(f"Remote Address: {packet[IP].src} | Remote Port: {packet[UDP].sport}")

sniff(filter="udp and portrange 10-20", prn=capture_traffic)
</code></pre>
<p>However, when I launch my VPN (Windscribe), my Python script (which is running with Administrator privileges) is not able to sniff any packets. Why is that, and how can I sniff packets while on a VPN?</p>
|
<python><network-programming><vpn><scapy><packet-sniffers>
|
2023-03-26 03:09:14
| 0
| 1,479
|
Programer Beginner
|
75,845,492
| 12,902,027
|
Raising exception in init causes SystemError: returned a result with an error set in Python C API
|
<p>I am using pytest to test my own Python C extension module.
I am trying to check if the <code>TypeError</code> occurs properly when an argument of invalid type is input to the <code>__init__</code> method.
The method implementation is something like</p>
<pre class="lang-c prettyprint-override"><code>PyObject * myObject_init(myObject *self, PyObject *args)
{
    if ("# args are invalid")
    {
        PyErr_SetString(PyExc_TypeError, "Invalid Argument");
        return NULL;
    }
}
</code></pre>
<p>This makes TypeError occur. But the problem is that when I test this method with pytest like,</p>
<pre class="lang-py prettyprint-override"><code>def test_init_with_invalid_argument():
    x = "something invalid"
    with pytest.raises(TypeError):
        obj = MyObject(x)
</code></pre>
<p>it fails. The error message is something like:</p>
<pre class="lang-bash prettyprint-override"><code>TypeError: Invalid Argument
The above exception was the direct cause of the following exception:
self = <test_mymodule.TestMyObjectInit object at 0x00000239886D27F0>
def test_init_with_invalid_argument(self):
with pytest.raises(TypeError):
> obj = MyObject(x)
E SystemError: <class 'mymodule.MyObject'> returned a result with an error set
tests\test_init_with_invalid_argument.py:19: SystemError
</code></pre>
<p>What is the problem here, and how can I make the test pass?</p>
|
<python><error-handling><pytest><cpython><python-c-api>
|
2023-03-26 01:54:57
| 2
| 301
|
agongji
|
75,845,280
| 10,044,690
|
Constraining equation that includes multiple variables at a boundary using GEKKO
|
<p>I have a system of differential equations that I'm trying to perform some optimal control on, using Gekko. In particular, I have a point-mass orbiting a planet and would simply like to raise its orbit using modelled thrusters as control inputs. In order to set the final radial position and velocity at the new raised orbit, I need to set the following boundary conditions:</p>
<pre><code>sqrt(x[-1]**2 + y[-1]**2) = r_final
sqrt(vx[-1]**2 + vy[-1]**2) = v_final
</code></pre>
<p>Where <code>(x,y)</code> is the Cartesian position of the point-mass, and <code>(vx,vy)</code> is its velocity components, and the <code>[-1]</code> notation implies the last element in the trajectory. Furthermore, <code>r_final</code> is the desired altitude of the new orbit, and <code>v_final</code> is the desired speed of the new orbit.</p>
<p>So far I have only been able to find functions in Gekko that constrain a <em>single</em> variable (e.g. <code>fix()</code>, <code>fix_final()</code> and <code>fix_initial()</code>), however I couldn't find any functions or examples that had more complex boundary conditions that included multiple variables.</p>
<p>Looking at the above-mentioned example, I'm hoping to use Gekko to constrain the final radial position and speed at the new orbit similar to the following piece of code:</p>
<pre><code>m.fix(m.sqrt(x**2 + y**2), pos=len(m.time)-1,val=r_final)
m.fix(m.sqrt(vx**2 + vy**2), pos=len(m.time)-1,val=v_final)
</code></pre>
<p>Is something like this possible? Thanks.</p>
|
<python><gekko>
|
2023-03-26 00:29:52
| 1
| 493
|
indigoblue
|
75,845,142
| 13,566,716
|
aioredis.exceptions.ConnectionError: Connection closed by server
|
<p>I get this error randomly with redis on heroku.</p>
<pre><code>aioredis.exceptions.ConnectionError: Connection closed by server.
</code></pre>
<p>this is the full trace:</p>
<pre><code>2023-03-25T23:34:34.116795+00:00 app[web.1]: There was an exception checking if the field exists in the hashmap: await wasn't used with future
2023-03-25T23:34:34.117607+00:00 app[web.1]: [2023-03-25 23:34:34,117] ERROR in app: Exception on /thelio_bot [POST]
2023-03-25T23:34:34.117608+00:00 app[web.1]: Traceback (most recent call last):
2023-03-25T23:34:34.117608+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.10/site-packages/aioredis/connection.py", line 1422, in get_connection
2023-03-25T23:34:34.117609+00:00 app[web.1]: if await connection.can_read():
2023-03-25T23:34:34.117609+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.10/site-packages/aioredis/connection.py", line 893, in can_read
2023-03-25T23:34:34.117609+00:00 app[web.1]: return await self._parser.can_read(timeout)
2023-03-25T23:34:34.117609+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.10/site-packages/aioredis/connection.py", line 479, in can_read
2023-03-25T23:34:34.117610+00:00 app[web.1]: raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
2023-03-25T23:34:34.117610+00:00 app[web.1]: aioredis.exceptions.ConnectionError: Connection closed by server.
2023-03-25T23:34:34.117611+00:00 app[web.1]:
2023-03-25T23:34:34.117611+00:00 app[web.1]: During handling of the above exception, another exception occurred:
2023-03-25T23:34:34.117611+00:00 app[web.1]:
2023-03-25T23:34:34.117612+00:00 app[web.1]: Traceback (most recent call last):
2023-03-25T23:34:34.117612+00:00 app[web.1]: File "/app/thelioapp/redis_factory.py", line 300, in field_exists
2023-03-25T23:34:34.117612+00:00 app[web.1]: hash_exists = await redis_conn.exists(self.redis_key)
2023-03-25T23:34:34.117613+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.10/site-packages/aioredis/client.py", line 1082, in execute_command
2023-03-25T23:34:34.117613+00:00 app[web.1]: conn = self.connection or await pool.get_connection(command_name, **options)
2023-03-25T23:34:34.117614+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.10/site-packages/aioredis/connection.py", line 1425, in get_connection
2023-03-25T23:34:34.117614+00:00 app[web.1]: await connection.disconnect()
2023-03-25T23:34:34.117614+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.10/site-packages/aioredis/connection.py", line 806, in disconnect
2023-03-25T23:34:34.117620+00:00 app[web.1]: await self._writer.wait_closed() # type: ignore[union-attr]
2023-03-25T23:34:34.117620+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.10/asyncio/streams.py", line 343, in wait_closed
2023-03-25T23:34:34.117620+00:00 app[web.1]: await self._protocol._get_close_waiter(self)
2023-03-25T23:34:34.117620+00:00 app[web.1]: RuntimeError: await wasn't used with future
</code></pre>
<p>This happens when executing a redis command:</p>
<pre><code>async def update_hashmap(self):
    redis_conn = await self.get_redis_conn()
    try:
        await redis_conn.hset(self.redis_key, self.field, self.value)
    except Exception as e:
        print(f"There was an exception updating/adding to the hashmap: {e}")
        raise Exception
</code></pre>
<p>happens on line:</p>
<pre><code>await redis_conn.hset(self.redis_key, self.field, self.value)
</code></pre>
<p>my get_redis_conn function is like so:</p>
<pre><code>executed_redis_pool = False
main_redis_pool = None

def create_redis_pool():
    global executed_redis_pool
    global main_redis_pool
    if not executed_redis_pool:
        executed_redis_pool = True
        main_redis_pool = aioredis.ConnectionPool.from_url(os.environ["REDIS_URI_PROD"])
    else:
        print("Redis pool already executed")

create_redis_pool()

async def get_redis_conn():
    redis_conn = await aioredis.Redis(connection_pool=main_redis_pool, health_check_interval=30, db=0, decode_responses=True)
    return redis_conn
</code></pre>
<p>any help is appreciated!</p>
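<p>One generic mitigation for transient "connection closed by server" errors (common when a hosting provider idles out connections) is to retry the command with backoff. A library-agnostic sketch, with a hypothetical helper name and a simulated flaky command standing in for the real <code>hset</code>:</p>

```python
import asyncio

async def with_retries(coro_factory, retries=3, delay=0.1, exceptions=(ConnectionError,)):
    """Re-run an awaitable built by coro_factory when a transient
    connection error occurs (hypothetical helper, not part of aioredis)."""
    for attempt in range(retries):
        try:
            return await coro_factory()
        except exceptions:
            if attempt == retries - 1:
                raise
            await asyncio.sleep(delay * (2 ** attempt))  # exponential backoff

# Simulated flaky command: fails twice, then succeeds.
calls = {"n": 0}
async def flaky_hset():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("Connection closed by server.")
    return 1

print(asyncio.run(with_retries(flaky_hset)))  # 1
```

<p>Recent redis client versions also expose retry/health-check options on the client itself, which may be preferable to a hand-rolled wrapper.</p>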
|
<python><python-3.x><redis><python-asyncio><redis-py>
|
2023-03-25 23:41:14
| 1
| 369
|
3awny
|
75,845,010
| 7,587,176
|
NY Times API -- Comments not being returned
|
<p>I have code below that first tries the NY Times API and then falls back to bs4 for scraping, with the goal of creating a data set of every comment on an article that I pass in via a link, and in turn outputting some NLP data analysis. I am very close, but also a bit confused, as my input returns the first paragraph of the article instead. Could anyone take a fresh look and give feedback? Example input article (which has comments): <a href="https://www.nytimes.com/2022/04/11/nyregion/remote-work-hybrid-manhattan.html" rel="nofollow noreferrer">https://www.nytimes.com/2022/04/11/nyregion/remote-work-hybrid-manhattan.html</a></p>
<pre><code>from flask import Flask, render_template, request
from newspaper import Article
from textblob import TextBlob
import requests
import json
import nltk

nltk.download('punkt')

app = Flask(__name__)

def get_comments(api_key, article_id):
    url = f"url"
    response = requests.get(url)
    if response.status_code != 200:
        return None
    data = json.loads(response.text)
    comments = data['results']['comments']
    return comments

def process_article(article_url):
    article = Article(article_url)
    article.download()
    article.parse()
    api_key = 'secret'
    articles_url = f"https://api.nytimes.com/svc/search/v2/articlesearch.json?q={article.title}&fq=source:(%{api_key}"
    response = requests.get(articles_url)
    if response.status_code != 200:
        return "Article not found on The New York Times."
    data = json.loads(response.text)
    articles = data['response']['docs']
    for a in articles:
        if article_url == a['web_url']:
            article_id = a['web_url']
            break
    else:
        return "Article not found on The New York Times."
    comments = get_comments(api_key, article_id)
    if comments is None:
        article.download()
        article.parse()
        article.nlp()
        return article.summary
    sentiment_polarity = 0
    sentiment_subjectivity = 0
    topics = {}
    for comment in comments:
        comment_body = comment['commentBody']
        sentiment = TextBlob(comment_body).sentiment
        sentiment_polarity += sentiment.polarity
        sentiment_subjectivity += sentiment.subjectivity
        if sentiment.polarity > 0:
            sentiment_label = 'positive'
        elif sentiment.polarity == 0:
            sentiment_label = 'neutral'
        else:
            sentiment_label = 'negative'
        for topic in comment['commentTitle'].split():
            if topic in topics:
                topics[topic][sentiment_label] += 1
            else:
                topics[topic] = {'positive': 0, 'neutral': 0, 'negative': 0}
                topics[topic][sentiment_label] += 1
    num_comments = len(comments)
    avg_sentiment_polarity = sentiment_polarity / num_comments
    avg_sentiment_subjectivity = sentiment_subjectivity / num_comments
    return render_template('results.html',
                           article_title=article.title,
                           article_text=article.text,
                           num_comments=num_comments,
                           avg_sentiment_polarity=avg_sentiment_polarity,
                           avg_sentiment_subjectivity=avg_sentiment_subjectivity,
                           topics=topics)

@app.route('/', methods=['GET', 'POST'])
def index():
    if request.method == 'POST':
        article_url = request.form['article_url']
        try:
            return process_article(article_url)
        except:
            article = Article(article_url)
            article.download()
            article.parse()
            article.nlp()
            return article.summary
    else:
        return render_template('index.html')

if __name__ == '__main__':
    app.run(debug=True)
</code></pre>
|
<python><web-scraping>
|
2023-03-25 23:02:24
| 1
| 1,260
|
0004
|
75,845,000
| 15,255,487
|
Pydantic schema throws an error although the needed properties exist in the payload
|
<p>In my flask + react project I use pydantic for validation of the front-end payload.
I have the properties of the payload coded as below:</p>
<pre><code>class PersonsSchema(BaseModel):
    id = str
    name: Optional[str]
    lastName: Optional[str]

class TagSchema(BaseModel):
    name: Optional[str]
    id: str

class UpdatedActionSchema(BaseModel):
    id: str
    description: str
    action: str
    category: str
    tag: TagSchema
    persons: Optional[List[PersonsSchema]] = None
</code></pre>
<p>Can you please let me know why, when I check the payload in my 'PUT' method, I'm getting an error which indicates that I don't have such properties? This is the error:
<a href="https://i.sstatic.net/CLTnF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CLTnF.png" alt="enter image description here" /></a></p>
<p>My backend code looks like this:</p>
<pre><code>@blp.route('/action/<string:actionId>', methods=['GET', 'PUT'])
def get_action(actionId):
    id = actionId
    if request.method == 'PUT':
        data = request.json
        key_to_lookup = 'id'
        for person in data['persons']:
            if key_to_lookup not in person:
                person['id'] = str(uuid.uuid4())
        print(f"data:{data}")
        actions_schema = UpdatedActionSchema()
        try:
            actions_schema.parse_obj(data)
        except ValueError as e:
            return jsonify({'error': str(e)}), 400
</code></pre>
<p>When I print my payload in the terminal it shows the following; it seems all the properties are there:</p>
<pre><code>data:
{
'action': 'asd',
'category': 'asd',
'description': 'asd',
'id': '64c728b8-ba61-4038-890e-caadb79a5548',
'persons':
[
{
'id': 'e01c8916-0d82-4155-a1b8-a55ded7c1674',
'lastName': 'zzz',
'name': 'aaa'
},
{
'name': '',
'lastName': '',
'id': '84155298-dd72-490c-bc9a-4f72c8d7cd2f'
}
],
'tag':
{
'action_id': '64c728b8-ba61-4038-890e-caadb79a5548',
'id': '56d6c9ca-695f-45bb-aa00-319aa788b985',
'name': 'asd'
}
}
</code></pre>
<p>I have spent some time on it but cannot figure it out.</p>
<p>thanks in advance</p>
|
<python><flask><backend><pydantic>
|
2023-03-25 23:01:03
| 1
| 912
|
marcinb1986
|
75,844,984
| 788,153
|
Get the groupby column back in pandas
|
<p>I am doing a groupby on column <code>a</code> followed by <code>ffill</code>, but after groupby the column <code>a</code> is gone. The result df will have only column <code>b</code> and <code>c</code>. Is there a way to get the column <code>a</code> back after groupby and ffill? I am assuming that the values will shuffle in the process.</p>
<p>How to get back the groupby column in pandas?</p>
<pre><code>df = pd.DataFrame({'a':[1,1,2,2] ,
'b': [12,np.nan,14, 13],
'c' : [1, 2, np.nan, np.nan]
})
df
df.groupby('a').ffill()
</code></pre>
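A minimal sketch of one way to keep the group column, assuming the goal is simply to fill forward within each group: `groupby(...).ffill()` drops the key column, so fill only the value columns and assign them back into a frame that still carries `a`.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 1, 2, 2],
                   'b': [12, np.nan, 14, 13],
                   'c': [1, 2, np.nan, np.nan]})

# groupby(...).ffill() returns only the non-key columns, so assign the
# filled values back into a copy that still has column 'a'
out = df.copy()
out[['b', 'c']] = df.groupby('a')[['b', 'c']].ffill()
```

Because the fill happens row-aligned on the same index, nothing is shuffled and the group labels stay in place.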
|
<python><pandas>
|
2023-03-25 22:55:10
| 3
| 2,762
|
learner
|
75,844,951
| 4,627,565
|
how to efficiently unzip large datasets in our local folder in google Colab?
|
<p>What is the fastest way to unzip folders with large datasets for model training?</p>
<pre><code> from zipfile import ZipFile
# import training dataset
output='/dataset'
zf = ZipFile('/content/train_data.zip', 'r') # read input zip file
zf.extractall(output) # extract to the output_dir in this case /content/sample_data
zf.close()
</code></pre>
|
<python><google-colaboratory><unzip>
|
2023-03-25 22:47:49
| 0
| 2,359
|
Bionix1441
|
75,844,912
| 357,024
|
Python asyncio sleep forever
|
<p>In some test code I simulate a request handler coroutine that never returns a response. Below works by calling <code>sleep</code> with a sufficiently large value, but a better choice would be something like <code>await asyncio.sleep_forever()</code>. Is there anything built into python asyncio that can do this?</p>
<pre><code>async def lagging_request_handler(request):
await asyncio.sleep(10000)
</code></pre>
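There is no built-in `asyncio.sleep_forever()`; a common idiom (a sketch, not the only option) is to wait on an `Event` that nobody ever sets, which suspends indefinitely and still cancels cleanly:

```python
import asyncio

async def lagging_request_handler(request):
    # an Event that is never set: wait() suspends until cancellation
    await asyncio.Event().wait()

async def demo():
    task = asyncio.create_task(lagging_request_handler(None))
    await asyncio.sleep(0.05)
    still_pending = not task.done()  # the handler is still "sleeping"
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    return still_pending

still_pending = asyncio.run(demo())
```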
|
<python><python-asyncio>
|
2023-03-25 22:35:52
| 1
| 61,290
|
Mike
|
75,844,871
| 3,629,654
|
Clean way to open/close an optional file in python?
|
<p>My code:</p>
<pre><code>fh = None
if args.optional_input_file_path is not None:
fh = open(args.optional_input_file_path)
my_function(foo, bar, fh)
if args.optional_input_file_path is not None:
fh.close()
</code></pre>
<p>I don't like how I need to write <code>if args.optional_input_file is not None:</code> twice.</p>
<p>Moving the conditional logic inside <code>my_function</code> also isn't ideal because I'm intentionally passing in <code>IO</code> objects to the function instead of file paths to make it easier to test.</p>
<p>Is there a cleaner way to achieve the same thing?</p>
<p><strong>I want to be able to write <code>my_function(foo, bar, fh)</code> exactly once, and have <code>fh</code> be either an <code>IO</code> object or <code>None</code>, depending on the value of <code>args.optional_input_file_path</code>.</strong></p>
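One sketch using only the standard library: `contextlib.nullcontext()` is a context manager that yields `None`, so a single `with` statement covers both cases (the `my_function` body here is a placeholder that just reports what it received).

```python
import contextlib

def my_function(foo, bar, fh):
    # placeholder for the real function; returns what it was given
    return fh

def run(path, foo=None, bar=None):
    # open(path) when a path is given, otherwise a no-op context yielding None
    cm = open(path) if path is not None else contextlib.nullcontext()
    with cm as fh:
        return my_function(foo, bar, fh)
```

The conditional appears once, and the file is closed automatically when it was actually opened.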
|
<python><file-handling>
|
2023-03-25 22:28:05
| 3
| 841
|
Tan Wang
|
75,844,751
| 608,576
|
Better way of checking query string parmeter in Django
|
<p>I have quite a few integer query string params that I convert this way. Is there a built-in, better way in Python to handle this?</p>
<pre><code>current_page = request.GET.get('page')
if current_page is not None and current_page.isnumeric():
    current_page = int(current_page)
else:
current_page = 0
</code></pre>
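A small helper is one way to avoid repeating the check (a sketch; `request.GET` behaves like a dict here, and `try/except` also handles negative numbers and `None`, which `isnumeric()` does not):

```python
def get_int_param(params, name, default=0):
    """Return params[name] as an int, or default if missing/non-numeric."""
    try:
        return int(params.get(name, default))
    except (TypeError, ValueError):
        return default

# usage in a Django view:
# current_page = get_int_param(request.GET, 'page')
```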
|
<python><django>
|
2023-03-25 21:56:15
| 1
| 9,830
|
Pit Digger
|
75,844,604
| 8,372,455
|
algorithm to save frames from a video file
|
<p>Is it possible to save a desired amount of images (<code>DESIRED_FRAMES_TO_SAVE</code>) from a video file spread-out evenly throughout the entire video file footage?</p>
<p>Hopefully this makes sense: I have a video file that is 8 seconds in length, and I would like to save 60 frames of the video file in sequential order.</p>
<p>Trying to throw something together in OpenCV, the code works but I know it should be improved. For example, the <code>if count % 3 == 0:</code> came from calculating the total frames of the video file, which is <code>223</code>, and dividing by <code>DESIRED_FRAMES_TO_SAVE</code> or <code>60</code>, which comes out to about ~3.72... so in other words, roughly every 3 or 4 frames I should save one to end up with ~<code>DESIRED_FRAMES_TO_SAVE</code> by the end of the video file. Sorry if this is an odd question, but would anyone have advice on how to rewrite this better without two while loops?</p>
<pre><code>import cv2
VIDEO_FILE = "hello.mp4"
DESIRED_FRAMES_TO_SAVE = 60
count = 0
cap = cv2.VideoCapture(VIDEO_FILE)
'''
# calculate total frames
while True:
success, frame = cap.read()
if success == True:
count += 1
else:
break
total_frames = count
print("TOTAL_FRAMES",total_frames)
'''
count = 0
while True:
success, frame = cap.read()
name = './raw_images/' + str(count) + '.jpg'
if success == True:
if count % 3 == 0:
cv2.imwrite(name, frame)
print(count)
elif count > DESIRED_FRAMES_TO_SAVE:
break
count += 1
else:
break
cap.release()
cv2.destroyAllWindows()
</code></pre>
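One way to drop both loops (a sketch): ask OpenCV for the total via <code>CAP_PROP_FRAME_COUNT</code> instead of counting reads, and pick evenly spaced indices with <code>numpy.linspace</code>. The cv2 part is commented out below since it needs a real video file; the index selection is the testable piece.

```python
import numpy as np

def frame_indices(total_frames, n_wanted):
    """Evenly spaced, unique frame indices spanning the whole clip."""
    n = min(n_wanted, total_frames)
    return sorted(set(np.linspace(0, total_frames - 1, n, dtype=int)))

# With OpenCV this becomes a single pass (needs a real file, so sketched here):
# cap = cv2.VideoCapture("hello.mp4")
# total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
# wanted = set(frame_indices(total, DESIRED_FRAMES_TO_SAVE))
# for i in range(total):
#     success, frame = cap.read()
#     if not success:
#         break
#     if i in wanted:
#         cv2.imwrite(f"./raw_images/{i}.jpg", frame)
# cap.release()
```

Note that <code>CAP_PROP_FRAME_COUNT</code> can be approximate for some containers, so guarding reads with <code>success</code> is still worthwhile.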
|
<python><opencv>
|
2023-03-25 21:25:25
| 2
| 3,564
|
bbartling
|
75,844,527
| 12,760,550
|
Replace column values using another pandas dataframe mapping
|
<p>Imagine I have the following dirty data of employee information of their contracts across countries (<code>df1</code>):</p>
<pre><code>ID Country Name Job Date Grade
1 CZ John Office 2021-01-01 Senior
1 SK John . 2021-01-01 Assistant
2 AE Peter Carpinter 2000-05-03
3 PE Marcia Cleaner 1989-11-11 ERROR!
3 FR Marcia Assistant 1978-01-05 High
3 FR Marcia 1999-01-01 Senior
</code></pre>
<p>I need to look into a LOV mapping table; in it, each country has different (or the same) LOV columns whose values should be replaced by a code. For each country, then, it should check whether the column is in the LOV mapping for that country and, if the value exists in the "Values" column, replace it with the corresponding code. If not, just leave the value unchanged.</p>
<p>So using this mapping (<code>df2</code>):</p>
<pre><code>Country Field Values Code
US Job Back BA
US Job Front FR
US Job Office OFF
CZ Job Office CZ_OFF
CZ Job Field CZ_Fil
SK Job All ALL
FR Job Assistant AST
AE Job Carpinter CAR
AE Job Carpinter CAR
CZ Grade Senior S
CZ Grade Junior J
SK Grade M1 M1
FR Grade Low L
FR Grade Mid M1
FR Grade High H
</code></pre>
<p>Would result in the following dataframe:</p>
<pre><code>ID Country Name Job Date Grade
1 CZ John CZ_OFF 2021-01-01 S
1 SK John . 2021-01-01 M1
2 AE Peter CAR 2000-05-03
3 PE Marcia Cleaner 1989-11-11 ERROR!
3 FR Marcia AST 1978-01-05 H
3 FR Marcia 1999-01-01 Senior
</code></pre>
<p>Thank you so much for the support!</p>
|
<python><pandas><apply><group>
|
2023-03-25 21:09:40
| 1
| 619
|
Paulo Cortez
|
75,844,524
| 11,141,816
|
Compute matrix inverse with decimal object
|
<p>There's a related questions <a href="https://stackoverflow.com/questions/32685280/matrix-inverse-with-decimal-type-numpy">Matrix inverse with Decimal type NumPy</a> 2015 a while ago which did not have a definite answer. There's a second question from me <a href="https://stackoverflow.com/questions/75656846/is-there-a-way-for-python-to-perform-a-matrix-inversion-at-500-decimal-precision">Is there a way for python to perform a matrix inversion at 500 decimal precision</a> where hpaulj provided some updated suggestions.</p>
<p>Basically, decimal is a standard Python library capable of computing values at arbitrary precision. It can be operated on by many NumPy functions, such as polyval to evaluate polynomials:</p>
<pre><code>np.polyval([Decimal(1),Decimal(2)], Decimal(3.1) )
Decimal('5.100000000')
</code></pre>
<p>It can also be cast to a numpy array or initiated as an object-dtype array (<a href="https://stackoverflow.com/questions/7770870/are-decimal-dtypes-available-in-numpy">Are Decimal 'dtypes' available in NumPy?</a> 2011).</p>
<pre><code>np.array([[Decimal(1),Decimal(2)],[Decimal(3),Decimal(4)]])
array([[Decimal('1'), Decimal('2')],
[Decimal('3'), Decimal('4')]], dtype=object)
matrix_m=np.zeros((2,2) ,dtype=np.dtype)
for ix in range(0,2):
for iy in range(0,2):
matrix_m[ix,iy]=Decimal(ix)+Decimal(iy);
array([[Decimal('0'), Decimal('1')],
[Decimal('1'), Decimal('2')]], dtype=object)
</code></pre>
<p>Some array operation from numpy also worked when Decimal was the element,</p>
<pre><code>np.exp( np.array([[Decimal(1),Decimal(2)],[Decimal(3),Decimal(4)]]) )
array([[Decimal('2.718281828'), Decimal('7.389056099')],
[Decimal('20.08553692'), Decimal('54.59815003')]], dtype=object)
np.sqrt( np.array([[Decimal(1),Decimal(2)],[Decimal(3),Decimal(4)]]) )
array([[Decimal('1'), Decimal('1.414213562')],
[Decimal('1.732050808'), Decimal('2')]], dtype=object)
</code></pre>
<p>and, at single element, the numpy calculation agreed with decimal's native function</p>
<pre><code>np.exp(Decimal(1))==Decimal(1).exp()
True
</code></pre>
<p>The useful constant was also provided</p>
<pre><code>def pi():
"""Compute Pi to the current precision.
#https://docs.python.org/3/library/decimal.html
>>> print(pi())
3.141592653589793238462643383
"""
getcontext().prec += 2 # extra digits for intermediate steps
three = Decimal(3) # substitute "three=3.0" for regular floats
lasts, t, s, n, na, d, da = 0, three, 3, 1, 0, 0, 24
while s != lasts:
lasts = s
n, na = n+na, na+8
d, da = d+da, da+32
t = (t * n) / d
s += t
getcontext().prec -= 2
return +s # unary plus applies the new precision
</code></pre>
<p>However, it turned out that both the determinant and the inverse of the matrix in numpy</p>
<pre><code>np.linalg.det(np.array([[Decimal(1),Decimal(2)],[Decimal(1),Decimal(3)]]))
File <__array_function__ internals>:180, in det(*args, **kwargs)
File ~\anaconda3\lib\site-packages\numpy\linalg\linalg.py:2154, in det(a)
2152 t, result_t = _commonType(a)
2153 signature = 'D->D' if isComplexType(t) else 'd->d'
-> 2154 r = _umath_linalg.det(a, signature=signature)
2155 r = r.astype(result_t, copy=False)
2156 return r
np.linalg.inv(np.array([[Decimal(1),Decimal(2)],[Decimal(1),Decimal(3)]]))
File <__array_function__ internals>:180, in inv(*args, **kwargs)
File ~\anaconda3\lib\site-packages\numpy\linalg\linalg.py:552, in inv(a)
550 signature = 'D->D' if isComplexType(t) else 'd->d'
551 extobj = get_linalg_error_extobj(_raise_linalgerror_singular)
--> 552 ainv = _umath_linalg.inv(a, signature=signature, extobj=extobj)
553 return wrap(ainv.astype(result_t, copy=False))
</code></pre>
<p>returned the same error</p>
<pre><code>UFuncTypeError: Cannot cast ufunc 'inv' input from dtype('O') to dtype('float64') with casting rule 'same_kind'
</code></pre>
<p>Which is not what was intended. It should just calculate on the objects according to their arithmetic, and decimal should be able to compute the values itself. <a href="https://stackoverflow.com/a/75657327/11141816">hpaulj's post</a> provided an alternative method: cast the decimal objects to mpf objects of the mpmath package</p>
<pre><code>mp.matrix( np.array([[Decimal(1),Decimal(2)],[Decimal(3),Decimal(4)]]))
matrix(
[['1.0', '2.0'],
['3.0', '4.0']])
mp.matrix( np.array([[Decimal(1),Decimal(2)],[Decimal(3),Decimal(4)]])) [0,0]
mpf('1.0')
</code></pre>
<p>and then perform the inverse in mpmath package.</p>
<pre><code>mp.matrix( np.array([[Decimal(1),Decimal(2)],[Decimal(3),Decimal(4)]])) **(-1)
matrix(
[['-2.0', '1.0'],
['1.5', '-0.5']])
</code></pre>
<p>This could work; however, it loses the nice functionality of the decimal package and involves a large amount of casting between mpmath, numpy and decimal objects. The mpf() object's computational speed is also significantly slower than calculations with Decimal() objects.</p>
<p>Is there an easy way to write or improve the code from the numpy package directly so that a np.inverse() could be used on decimal array? Is there any way to compute the matrix inverse with decimal object?</p>
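In the absence of numpy support, one workable sketch is plain Gauss-Jordan elimination over nested lists of <code>Decimal</code>, so every operation stays in decimal arithmetic at whatever precision <code>getcontext().prec</code> specifies:

```python
from decimal import Decimal

def decimal_inv(m):
    """Gauss-Jordan inverse over lists of Decimal; precision follows
    decimal.getcontext().prec. Raises ZeroDivisionError if singular."""
    n = len(m)
    # augment [M | I], converting every entry to Decimal
    aug = [[Decimal(v) for v in row]
           + [Decimal(int(i == j)) for j in range(n)]
           for i, row in enumerate(m)]
    for col in range(n):
        # partial pivoting: bring the largest-magnitude entry into place
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col]:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]
```

This is O(n³) pure Python and far slower than LAPACK, but it answers the precision question: raise <code>getcontext().prec</code> and the result follows.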
|
<python><numpy><decimal><matrix-inverse>
|
2023-03-25 21:09:16
| 0
| 593
|
ShoutOutAndCalculate
|
75,844,488
| 6,850,351
|
Why pyperclip.copy function works only when there is breakpoint in code?
|
<p>If there is no breakpoint, the text is not copied to the clipboard. If one is present, it is. I am on Fedora 37, debugging with VS Code.</p>
<p>File and repo: <a href="https://github.com/kha-white/manga-ocr/blob/master/manga_ocr/ocr.py" rel="nofollow noreferrer">https://github.com/kha-white/manga-ocr/blob/master/manga_ocr/ocr.py</a></p>
<p>Code:</p>
<pre><code>def process_and_write_results(mocr, img_or_path, write_to):
t0 = time.time()
text = mocr(img_or_path)
t1 = time.time()
logger.info(f'Text recognized in {t1 - t0:0.03f} s: {text}')
if write_to == 'clipboard':
pyperclip.copy(text) # <--- here is the function
else:
write_to = Path(write_to)
if write_to.suffix != '.txt':
raise ValueError('write_to must be either "clipboard" or a path to a text file')
with write_to.open('a', encoding="utf-8") as f:
f.write(text + '\n')
</code></pre>
|
<python><pyperclip>
|
2023-03-25 20:58:57
| 1
| 362
|
Cute pumpkin
|
75,844,357
| 1,035,897
|
Error passing globals to Jinja2Templates constructor with FastAPI
|
<p>According to the <a href="https://github.com/encode/starlette/blob/62b5b6042a39289ed561580c251c233250c3c088/starlette/templating.py#L71ls" rel="nofollow noreferrer">starlette sources</a>, it seems the correct way to set up globals is simply to pass them to the constructor of <code>fastapi.templating.Jinja2Templates</code> as you <a href="https://jinja.palletsprojects.com/en/3.1.x/api/#jinja2.Template.globals" rel="nofollow noreferrer">would do</a> with <code>jinja2.Environment</code>.</p>
<p>So I have the following code trying to prepare for using jinja2 templates in FastAPI, with a dummy global "myglobal" for testing:</p>
<pre><code>from fastapi.templating import Jinja2Templates
templates = None
try:
templates = Jinja2Templates(directory=f"{webroot}", globals={"myglobal":"somevalue"})
except Exception as e:
logger.exception("No templates folder found, skipping...")
</code></pre>
<p>On application startup, the exception triggers with the following output:</p>
<pre><code>No templates folder found, skipping...
Traceback (most recent call last):
File "/app/main.py", line 22, in <module>
templates = Jinja2Templates(directory=f"{webroot}", globals={"myglobal":"somevalue"})
File "/app/venv/lib/python3.7/site-packages/starlette/templating.py", line 74, in __init__
self.env = self._create_env(directory, **env_options)
File "/app/venv/lib/python3.7/site-packages/starlette/templating.py", line 89, in _create_env
env = jinja2.Environment(**env_options)
TypeError: __init__() got an unexpected keyword argument 'globals'
</code></pre>
<p>So my question is, <em>what am I doing wrong here</em>? <em>How can I pass globals to my templates in FastAPI</em>?</p>
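A workaround that avoids the constructor entirely: jinja2 exposes <code>env.globals</code> as a plain dict, so with FastAPI you can presumably set <code>templates.env.globals["myglobal"] = "somevalue"</code> after constructing <code>Jinja2Templates</code> (older Starlette forwards unknown kwargs straight to <code>jinja2.Environment</code>, which has no <code>globals</code> argument, hence the TypeError). The same idea in plain jinja2, so it runs standalone:

```python
import jinja2

env = jinja2.Environment(loader=jinja2.BaseLoader())
# env.globals is an ordinary dict; Starlette's wrapper exposes the same one
env.globals["myglobal"] = "somevalue"

template = env.from_string("value is {{ myglobal }}")
rendered = template.render()  # no need to pass myglobal explicitly
```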
|
<python><templates><jinja2><fastapi>
|
2023-03-25 20:31:34
| 1
| 9,788
|
Mr. Developerdude
|
75,844,119
| 7,800,760
|
Python: finding partial names of people
|
<p>I have a list of people extracted from a news article as a list of strings. Example follows:</p>
<pre><code>persons = [
"John Doe",
"John",
"Johnson",
"Murray Gell-Mann",
"Mann",
"Murray",
"M",
]
def is_not_substring(sub, strings):
for s in strings:
if sub != s and sub in s:
return False
return True
fullnames = [
s for s in persons if is_not_substring(s, persons)
] # initial fullnames
residuals = [s for s in persons if s not in fullnames] # others
</code></pre>
<p>After this snippet runs the <em>fullnames</em> variable will be a list of names of different people.</p>
<p>The <em>residuals</em> list could hold either part of the name (firstname or lastname) for one of the above people or another different person.</p>
<p>For example "John" could be referring to "John Doe" but not to "Johnson", hence I cannot use the plain "in" substring operator.</p>
<p>For the same logic "Mann" is not referring to Dr. Gell-Mann but is probably the last name of Thomas Mann, while "Murray" can be thought as referring to Gell-Mann.</p>
<p>Also please note that you should consider "M" as the fictional James Bond character and not part of "Murray Gell-Mann".</p>
<p>I'm not finding the right operator to separate strings that are parts of the people's names versus only substrings.</p>
<p>The expected final output with this example would be:</p>
<pre><code>fullnames = ["John Doe", "Johnson", "Murray Gell-Mann", "Mann", "M"]
</code></pre>
<p>and using enum I would then transform each name to a tuple with a unique id:</p>
<pre><code>fulltuples = [(0, "John Doe"), (1, "Johnson"), (2, "Murray Gell-Mann"), (3, "Mann"), (4, "M")]
</code></pre>
<p>and then the final residuals should be:</p>
<pre><code>residualtuples = [(0, "John"), (2, "Murray")]
</code></pre>
<p>where in the latter the number in the tuple refers to the number in the fulltuples to mean it's a co-reference.</p>
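One heuristic that reproduces the expected output on this example (a sketch; it splits on whitespace only, so hyphenated surnames like "Gell-Mann" stay whole and "Mann" does not match): treat a string as a partial reference iff it equals a whitespace-separated token of some longer name.

```python
def split_people(persons):
    """Split into (id, fullname) tuples and (referenced_id, partial) tuples.
    A string is a partial reference iff it equals a whitespace-separated
    token of some other name in the list."""
    full = [p for p in persons
            if not any(q != p and p in q.split() for q in persons)]
    fulltuples = list(enumerate(full))
    residualtuples = []
    for p in persons:
        if p in full:
            continue
        for i, q in fulltuples:
            if p in q.split():
                residualtuples.append((i, p))
                break  # attribute to the first matching full name
    return fulltuples, residualtuples
```

Ambiguity (a partial matching several full names) is resolved here by first match; a real coreference pass would need more context.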
|
<python>
|
2023-03-25 19:39:05
| 1
| 1,231
|
Robert Alexander
|
75,844,027
| 6,357,916
|
Adding date column to pandas dataframe with 200000 rows
|
<p>I run following code to add date column to dataframe <code>df1</code> of shape <code>(200000, 115)</code>starting from today:</p>
<pre><code>df1 = pd.DataFrame(index=range(200000),columns=range(115)) # create dummy data frame
start_date = datetime.date.today()
end_date = start_date + datetime.timedelta(days=200000)
df1['date'] = pd.date_range(start=start_date, end=end_date)
</code></pre>
<p>But this gives me following error:</p>
<pre><code>OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 2570-10-24 00:00:00c
</code></pre>
<p>How do I add the date column without getting an error?</p>
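pandas Timestamps are nanosecond-resolution and top out in the year 2262, so 200,000 daily stamps starting today overflow. A sketch that sidesteps the limit by storing plain <code>datetime.date</code> objects (object dtype, so the <code>.dt</code> accessor won't apply):

```python
import datetime
import pandas as pd

n = 200_000
start = datetime.date.today()

# plain date objects dodge pandas' nanosecond Timestamp ceiling (year 2262)
df1 = pd.DataFrame(index=range(n))
df1['date'] = [start + datetime.timedelta(days=i) for i in range(n)]
```

If datetime semantics are needed, <code>pd.period_range(start, periods=n, freq='D')</code> is another option with a much wider representable range.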
|
<python><pandas><dataframe>
|
2023-03-25 19:22:12
| 0
| 3,029
|
MsA
|
75,844,008
| 1,482,566
|
Connect to Sharepoint using Python
|
<p>Using the clientid, client_secret and Tenant ID I'm able to connect using Postman service. But I'm trying to write a Python script and connect to Sharepoint using the below code. But I'm getting an invalid request error.</p>
<pre><code>from office365.runtime.auth.authentication_context import AuthenticationContext
from office365.sharepoint.client_context import ClientContext
# Set your credentials and endpoint
client_id = "your_client_id"
client_secret = "your_client_secret"
tenant_id = "your_tenant_id"
site_url = "https://mtu.sharepoint.com/teams/folderName"
list_title = "Your List Title"
# Authenticate
ctx_auth = AuthenticationContext(f"https://login.microsoftonline.com/{tenant_id}")
try:
if ctx_auth.acquire_token_for_app(client_id, client_secret):
ctx = ClientContext(site_url, ctx_auth)
# Load the list data
list_data = ctx.web.lists.get_by_title(list_title).get_items().execute_query()
# Print the data
for item in list_data:
print(item.properties)
else:
print("Failed to acquire access token.")
except Exception as ex:
print(f"Authentication error: {ex}")
</code></pre>
<p><strong>Error</strong>
It fails at this step: <code>ctx_auth.acquire_token_for_app</code>. "Specified tenant identifier 'none' is neither a valid DNS name nor a valid external domain"</p>
|
<python><sharepoint><sharepoint-api>
|
2023-03-25 19:17:17
| 0
| 3,342
|
shockwave
|
75,843,631
| 1,233,751
|
bazel: Cycle in the workspace file detected. This indicates that a repository is used prior to being defined
|
<p>I have the following files and try to compile a simple Python app using bazel 6.1.1, but I am getting an error.</p>
<p>my files:</p>
<pre class="lang-bash prettyprint-override"><code>$ tree
.
├── BUILD
├── main.py
└── WORKSPACE
</code></pre>
<p><strong>the error</strong>:</p>
<pre><code>$ bazel build //...
WARNING: Ignoring JAVA_HOME, because it must point to a JDK, not a JRE.
ERROR: Failed to load Starlark extension '@io_bazel_rules_docker//repositories:repositories.bzl'.
Cycle in the workspace file detected. This indicates that a repository is used prior to being defined.
The following chain of repository dependencies lead to the missing definition.
- @io_bazel_rules_docker
This could either mean you have to add the '@io_bazel_rules_docker' repository with a statement like `http_archive` in your WORKSPACE file (note that transitive dependencies are not added automatically), or move an existing definition earlier in your WORKSPACE file.
ERROR: Error computing the main repository mapping: cycles detected during computation of main repo mapping
Loading:
</code></pre>
<h3>BUILD</h3>
<pre><code>load("@io_bazel_rules_docker//python3:image.bzl", "py3_image")
py3_image(
name = "main",
main = "main.py",
srcs = ["main.py"],
base = "@python_container//image",
deps = [],
)
</code></pre>
<h3>WORKSPACE</h3>
<pre><code>load(
"@io_bazel_rules_docker//repositories:repositories.bzl",
container_repositories = "repositories",
)
container_repositories()
load(
"@io_bazel_rules_docker//python:image.bzl",
_py_image_repos = "repositories",
)
_py_image_repos()
load("@io_bazel_rules_docker//container:container.bzl", "container_pull")
load("@io_bazel_rules_docker//container:container.bzl", "container_layer", "container_image")
container_layer(
name = "python_symlink",
symlinks = {
"/usr/bin/python": "/usr/local/bin/python",
"/usr/bin/python3": "/usr/local/bin/python",
},
)
container_image(
name = "python",
base = "@python_container//image",
layers = [":python_symlink"],
visibility = ["//visibility:public"],
)
container_pull(
name = "python_container",
registry = "docker.io/library",
repository = "python",
tag = "3.9-slim",
)
</code></pre>
<h3>main.py</h3>
<pre class="lang-py prettyprint-override"><code>print("Hello world!")
</code></pre>
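The error means exactly what it says: the first <code>load()</code> in WORKSPACE references <code>@io_bazel_rules_docker</code> before anything declares that repository, so an <code>http_archive</code> for the ruleset has to come first. A hedged sketch of the top of WORKSPACE (the sha256 and the exact release URL must be checked against the version you pin; the URL pattern below is an assumption based on the rules_docker release layout):

```
# WORKSPACE -- declare the repository before load()-ing from it
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "io_bazel_rules_docker",
    sha256 = "<fill in for your pinned release>",
    urls = ["https://github.com/bazelbuild/rules_docker/releases/download/v0.25.0/rules_docker-v0.25.0.tar.gz"],
)

# only now is this load() resolvable
load(
    "@io_bazel_rules_docker//repositories:repositories.bzl",
    container_repositories = "repositories",
)
```

Separately, <code>container_layer</code> / <code>container_image</code> are build rules, so they belong in a BUILD file, not in WORKSPACE; only repository rules like <code>container_pull</code> go in WORKSPACE.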
|
<python><bazel><bazel-rules>
|
2023-03-25 18:06:36
| 1
| 10,514
|
DmitrySemenov
|
75,843,630
| 14,645,415
|
Set line character limit for emphasize-lines (code-block directive)
|
<p>When I currently do this</p>
<pre><code>.. code-block:: python
:emphasize-lines: 3,5
def some_function():
interesting = False
# 116 characters line
print('This line is highlighted. but we exceeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeed line limit')
print('This one is not...')
print('...but this one is.')
</code></pre>
<p>It renders it like this, i.e the problem here is that it doesn't highlight the whole line (the line with the 116 characters).
<a href="https://i.sstatic.net/J0krY.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J0krY.gif" alt="enter image description here" /></a></p>
<p>The <a href="https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#directive-option-code-block-emphasize-lines" rel="nofollow noreferrer">docs</a> don't show how to set the line character limit for highlighting in the emphasize-line option. My question is how do I set line character limit for this option?</p>
<p>Sphinx version that I have:</p>
<pre class="lang-bash prettyprint-override"><code>$ sphinx-build --version
sphinx-build 4.5.0
</code></pre>
|
<python><documentation><python-sphinx>
|
2023-03-25 18:06:28
| 0
| 854
|
Ibrahim
|
75,843,431
| 1,087,836
|
How to install Python debug symbols and libraries via miniconda?
|
<p>I have created a new Python 3.8 environment using:</p>
<pre><code>conda create -n myproj python=3.8
</code></pre>
<p>But the installed Python version has no debug symbols and libraries for CPython included. I know how to install them with the normal installer but is there any way to install them via miniconda?
Right now there are only the *.lib files for the release build included but the *_d.lib files for a debug build are missing. Here are 2 Screenshots for comparison.</p>
<p>An installation via the normal installer (including the libs) looks like this:
<a href="https://i.sstatic.net/bSBSC.png" rel="noreferrer"><img src="https://i.sstatic.net/bSBSC.png" alt="Python 3.11 Installation with Debug Symbols and Libraries" /></a></p>
<p>A Python installation via miniconda:
<a href="https://i.sstatic.net/JhRJm.png" rel="noreferrer"><img src="https://i.sstatic.net/JhRJm.png" alt="Python 3.8 Installation via Miniconda, Debug symbols and libs are missing" /></a></p>
|
<python><anaconda><cpython>
|
2023-03-25 17:32:30
| 0
| 363
|
Aragok
|
75,843,297
| 382,912
|
Specify max_bandwidth or TransferConfig for boto3 s3 upload_part
|
<p>The <code>upload_file</code> method allows you to specify <code>max_bandwidth</code> using the <code>boto3.s3.transfer.TransferConfig</code> object: <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3.html" rel="nofollow noreferrer">https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3.html</a></p>
<p>The <code>upload_part</code> method, however, does not seem to allow a <code>TransferConfig</code>. You get the following error when trying to use one:</p>
<pre><code>*** botocore.exceptions.ParamValidationError: Parameter validation failed:
Unknown parameter in input: "config", must be one of: Body, Bucket, ContentLength, ContentMD5, Key, PartNumber, UploadId, SSECustomerAlgorithm, SSECustomerKey, SSECustomerKeyMD5, RequestPayer, ExpectedBucketOwner
</code></pre>
<p>Is there some way to limit upload bandwidth when performing a MultiPartUpload with <code>boto3</code>? Perhaps setting some property on the <code>s3</code> <code>client</code> object?</p>
<p>NB: I know you can set <code>multipart_threshold</code> in a <code>TransferConfig</code> and use <code>upload_file</code>, but that strategy does not allow you to resume transfers where you left off (if your connection dies after uploading 20 of 30 parts and you retry, it will re-upload the first 20 again).</p>
<p><strong>I want to use <code>upload_part</code> so I can manually check which parts have already been uploaded and only upload those parts I need to (so that if my connection drops and the upload fails after uploading 20 of 30 parts, I can just upload the last 10 parts when I retry instead of uploading the first 20 again).</strong></p>
<p>Here is my multipart uploader with resume failures for reference (the stuff about <code>project</code> is specific to my application, but hopefully code makes sense and is helpful):</p>
<pre class="lang-py prettyprint-override"><code>class MultipartUpload:
def __init__(self, client, project, local_file):
self.client = client
self.project = project
self.local_file = local_file
self.parts = []
self.debug = False
def orphan(self):
project_prefix = _project_prefix(self.project)
key = self.local_file.key
uploads = self.client.list_multipart_uploads(
Bucket=S3_BUCKET, Prefix=project_prefix
)
if "Uploads" not in uploads:
self._debug("No 'Uploads' key.")
return None
# find those that match the file
orphans = [u for u in uploads["Uploads"] if u["Key"] == key]
# TODO:: also COMPARE md5 / etag
if len(orphans) == 0:
self._debug("No matching orphans.")
return None
# TODO: elect the best orphan if there is more than one:
return orphans[0]
def upload(self):
upload = self.orphan()
uploaded_parts = []
if upload:
upload_id = upload["UploadId"]
resp = self.list_parts(upload_id)
uploaded_parts = resp.get("Parts", [])
else:
upload = self.create_upload()
upload_id = upload["UploadId"]
uploaded_parts = []
self._debug(
"filesize: {}mb chunksize: {}mb num_parts: {}".format(
self.local_file.size / 1024 / 2024,
self.local_file.chunksize / 1024 / 1024,
self.local_file.num_parts,
)
)
uploaded_part_nums = set([p["PartNumber"] for p in uploaded_parts])
checksums = []
bytes_uploaded = 0
for part in self.local_file.parts():
part_num = part.part_num
chunk = part.bytes
chunksize = part.size
bytes_uploaded = bytes_uploaded + chunksize
checksums.append(part.checksum)
progress_msg = "{}/{} {}mb chunks || {}mb of {}mb || {}%".format(
part_num,
self.local_file.num_parts,
int(chunksize / 1024 / 1024),
int(bytes_uploaded / 1024 / 1024),
int(self.local_file.size / 1024 / 1024),
int(100 * bytes_uploaded / self.local_file.size),
)
# continue if part already uploaded
if part_num in uploaded_part_nums:
self._write(f"[already uploaded]: {progress_msg}")
continue
self._write(f"[uploading ......]: {progress_msg}")
self.client.upload_part(
Body=chunk,
Bucket=S3_BUCKET,
Key=self.local_file.key,
UploadId=upload_id,
PartNumber=part_num,
)
self.finalize(upload_id, checksums)
def _debug(self, message):
if self.debug:
print("{}: {}".format(self.local_file.local_path, message))
def _write(self, message):
msg = "UPLOADING: %s: %s" % (
self.local_file.local_path,
message,
)
_print_over_same_line(msg)
# sys.stdout.write("\r%s" % msg)
# sys.stdout.flush()
def list_parts(self, upload_id):
resp = self.client.list_parts(
UploadId=upload_id, Bucket=S3_BUCKET, Key=self.local_file.key
)
if "Parts" not in resp:
self._debug("No 'Parts' uploaded.")
return resp
def create_upload(self):
resp = self.client.create_multipart_upload(
Bucket=S3_BUCKET,
ContentType=self.local_file.mime_type,
Key=self.local_file.key,
)
upload_id = resp.get("UploadId")
self._debug(f"Created multipart upload_id:{upload_id}")
return resp
def _checksums(self):
checksums = []
for part in self.local_file.parts():
checksums.append(part.checksum)
return checksums
def finalize(self, upload_id, checksums=None):
# recompute the parts / etags
if checksums is None:
checksums = self._checksums()
self._write("[finalizing {} parts]: ".format(len(checksums)))
resp = self.client.complete_multipart_upload(
Bucket=S3_BUCKET,
Key=self.local_file.key,
UploadId=upload_id,
MultipartUpload={"Parts": checksums},
)
print("") # clear terminal
if 200 != resp.get("ResponseMetadata", {}).get("HTTPStatusCode"):
print(resp)
raise Exception
def _upload(client, project, local_file, max_bandwidth_mb=None):
if local_file.size < AWS_MIN_CHUNKSIZE:
x = {}
# x["MetaData"] = {"Content-MD5": local_file.etag}
if local_file.mime_type:
x["ContentType"] = local_file.mime_type
# , "ETag": local_file.etag
client.upload_file(
local_file.local_path,
S3_BUCKET,
local_file.key,
# Config=_transfer_config(max_bandwidth_mb, upload_file=local_file),
# Callback=ProgressPercentage(local_file.local_path, local_file.size),
ExtraArgs=x,
)
# add a newline since we have been flushing stdout:
# print("")
else:
mu = MultipartUpload(client, project, local_file)
mu.upload()
</code></pre>
|
<python><amazon-s3><boto3>
|
2023-03-25 17:10:44
| 1
| 6,151
|
kortina
|
75,843,045
| 15,233,108
|
compare if substring exists within an existing list of files but with a different trailing name
|
<p>I have 2 lists of filenames that I am comparing.</p>
<p>The first list is the full list of files in the directory, and the 2nd list is the list of files I've extracted from the directory with a specific filename format. I want to use this 2nd list, to find files in the full list that are slightly different in naming.</p>
<p>This is the code I have:</p>
<pre><code>f = open('Sims4_62-62.txt', 'a')
dataread = listofGenFiles
print(listofGenFiles)
for genData in dataread:
#counterRep = 0
#print(genData)
#print(dataread)
if genData in fulldata and genData in 'Color_Front':
if counterRep < 62:
f.write(genData + '\n')
counterRep += 1
print(counterRep)
f.close()
</code></pre>
<p>Example of filenames in genData:</p>
<blockquote>
<p>Ea_S6D1_fC3_Color_Left.npy<br />
Ea_S6D1_fC3_Color_Right.npy<br />
Ea_S6D1_fC4_Color_Back.npy<br />
Ea_S6D1_fC4_Color_Left.npy<br />
Ea_S6D1_fC4_Color_Right.npy<br />
Ea_S6D2_fC16_Color_Back.npy<br />
Ea_S6D2_fC16_Color_Left.npy<br />
Ea_S6D4_fC16_Color_Left.npy<br />
Ea_S6D3_fC15_Color_Left.npy</p>
</blockquote>
<p>Excerpt from the full list from the directory, with some files having the 'Front' variation in naming that wasn't initially taken into the 2nd list (from the fulldata variable):</p>
<blockquote>
<p>Ea_S6D1_fC3_Color_Front.npy <--- I want this filename added to my file<br />
Ea_S6D1_fC3_Color_Right.npy<br />
Ea_S6D1_fC4_Color_Back.npy<br />
Ea_S6D1_fC4_Color_Left.npy<br />
Ea_S6D1_fC4_Color_Right.npy<br />
Ea_S6D2_fC16_Color_Front.npy < ---- I want this filename added to my file<br />
Ea_S6D2_fC16_Color_Left.npy</p>
</blockquote>
<p>How do I fix my code ? Not sure what I am missing here :) thanks!</p>
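One bug stands out: <code>genData in 'Color_Front'</code> asks whether the whole filename is a substring of the literal <code>'Color_Front'</code>, which is never true; the membership test is backwards. A sketch of the matching step (assuming the stem, i.e. the text before <code>'_Color_'</code>, identifies the capture):

```python
def find_front_variants(gen_files, all_files):
    """Collect '..._Color_Front.npy' files whose stem (text before
    '_Color_') also appears among the already-extracted gen_files."""
    stems = {name.split('_Color_')[0] for name in gen_files}
    return sorted(f for f in all_files
                  if 'Color_Front' in f and f.split('_Color_')[0] in stems)
```

The returned names can then be written to the output file with the same 62-entry cap as in the original loop.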
|
<python><python-3.x><file><io>
|
2023-03-25 16:28:24
| 0
| 582
|
Megan Darcy
|
75,842,720
| 13,682,080
|
Python module shared between docker services
|
<p>I have a project structure like this:</p>
<pre><code>stack-example/service1
├── Dockerfile
└── main.py
stack-example/service2
├── Dockerfile
└── main.py
stack-example/shared_module
├── __init__.py
└── utils.py
stack-example/docker-compose.yml
</code></pre>
<p>Both <em>service1</em> and <em>service2</em> use <em>shared_module</em>.</p>
<pre><code>#service1/main.py == service2/main.py
from shared_module import print_hello
def main():
print_hello()
if __name__ == "__main__":
main()
</code></pre>
<p>So I have two <em>Dockerfile</em>s</p>
<pre><code>#service1/Dockerfile (service2 has the same idea)
FROM python:3.11-slim
WORKDIR /service1
USER 1002
COPY . .
</code></pre>
<p>And <em>docker-compose.yml</em></p>
<pre><code>version: '3.3'
services:
service1:
build: service1/.
command: python3 main.py
ports:
- 8012:8012
service2:
build: service2/.
command: python3 main.py
ports:
- 8013:8013
</code></pre>
<p>Of course, if I try to run it, I get a Python import error, because the interpreter obviously can't see <em>shared_module</em>.</p>
<p>What should I add to my source and docker files to achieve desired behaviour?</p>
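One common pattern (a sketch, not a definitive fix): build both images from the project root, so <code>shared_module</code> is inside the build context, and point each service at its own Dockerfile:

```yaml
# docker-compose.yml -- build context is the project root
# (assumption: each Dockerfile then does
#   COPY shared_module/ ./shared_module/
#   COPY service1/ .
# so the module sits next to main.py inside the image)
version: '3.3'
services:
  service1:
    build:
      context: .
      dockerfile: service1/Dockerfile
    command: python3 main.py
    ports:
      - 8012:8012
  service2:
    build:
      context: .
      dockerfile: service2/Dockerfile
    command: python3 main.py
    ports:
      - 8013:8013
```

With the current layout, `build: service1/.` limits the context to the service directory, so `COPY` can never reach `../shared_module`.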
|
<python><docker><microservices>
|
2023-03-25 15:31:28
| 1
| 542
|
eightlay
|
75,842,358
| 15,159,198
|
How to create a multiline triple-quoted `ast` node string in python?
|
<p>Is there a way of creating an <code>ast</code> node for a triple-quoted multiline python string (like in <code>docstrings</code>)? I am creating a python module that uses the OpenAI API to auto-generate docstrings with GPT and insert them directly into the python file. A simplified version of what I have is:</p>
<pre class="lang-py prettyprint-override"><code>import ast
class DocstringWriter(ast.NodeTransformer):
def visit_FunctionDef(self, node):
docstring = ast.get_docstring(node)
new_docstring_node = self.make_docstring_node(docstring)
if docstring:
node.body[0] = new_docstring_node
else:
node.body.insert(0, new_docstring_node)
return node
def make_docstring_node(self, docstring):
docstring = """New docstring\nwith maybe some new lines"""
s = ast.Str(docstring)
return ast.Expr(value=s)
</code></pre>
<p>For example, this gives:</p>
<pre class="lang-py prettyprint-override"><code>def esc(s):
"\n\nesc(s)\n\nReturns a string with all backslashes escaped.\n\nParameters\n----------\ns : str\n The string to escape.\n\nReturns\n-------\nstr\n The escaped string.\n\nExamples\n--------\n>>> esc('\\\\')\n'\\\\\\\\'\n\n"
return s.translate(str.maketrans({'\\': '\\\\'}))
</code></pre>
<p>When it should return:</p>
<pre class="lang-py prettyprint-override"><code>def esc(s):
"""
esc(s)
Returns a string with all backslashes escaped.
Parameters
----------
s : str
The string to escape.
Returns
-------
str
The escaped string.
Examples
--------
>>> esc('\\\\')
'\\\\\\\\'
"""
return s.translate(str.maketrans({'\\': '\\\\'}))
</code></pre>
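A sketch of the underlying constraint, assuming `ast.unparse` is used to regenerate the source: the docstring round-trips correctly through the AST (`ast.Constant` is the non-deprecated replacement for `ast.Str`), but `unparse` always renders the string as a single-quoted literal with escaped newlines, so triple quotes would have to be restored by post-processing the emitted text:

```python
import ast

src = "def f():\n    pass\n"
tree = ast.parse(src)

# Insert a multiline docstring node (ast.Constant replaces the
# deprecated ast.Str in Python 3.8+).
doc = ast.Expr(value=ast.Constant("Line one\nLine two"))
tree.body[0].body.insert(0, doc)
ast.fix_missing_locations(tree)

new_src = ast.unparse(tree)
# The docstring content survives, even though unparse renders it
# with escaped \n instead of triple quotes.
recovered = ast.get_docstring(ast.parse(new_src).body[0])
print(recovered)
```

Restoring the triple-quoted layout then becomes a text substitution on `new_src`, which is outside what the `ast` module itself controls.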
|
<python><abstract-syntax-tree>
|
2023-03-25 14:30:28
| 0
| 483
|
Clerni
|
75,842,242
| 9,152,984
|
Using aler9/rtsp-simple-server, able to stream HLS from file but not from ffmpeg's stdin
|
<p><strong>UPD:</strong> hehe, it actually works, but in Chrome, not Firefox.</p>
<p>I want to display programmatically generated stream on a webpage in real time. For this I have <a href="https://github.com/aler9/rtsp-simple-server" rel="nofollow noreferrer">rtsp-simple-server</a> with RTSP and HLS enabled. I managed to publish a stream from file with command <code>ffmpeg -re -stream_loop -1 -i file.mp4 -vcodec libx264 -f rtsp -rtsp_transport tcp rtsp://localhost:8554/mystream</code> and see it at http://localhost:8888/mystream.</p>
<p>But when I try to do the same from python via ffmpeg's stdin, I get an infinitely loading video instead of a stream, and the browser console says "Uncaught (in promise) DOMException: The fetching process for the media resource was aborted by the user agent at the user's request". Here is my code:</p>
<pre><code>import random
import shlex
import subprocess
import time
import numpy as np
def main():
width = 1024
height = 720
framerate = 1
frame_duration = 1 / framerate
cmd = shlex.split(
f'ffmpeg'
f' -y'
f' -f rawvideo'
f' -vcodec rawvideo'
f' -s {width}x{height}'
f' -pix_fmt bgr24'
f' -r {framerate}'
f' -i -'
f' -r {framerate}'
f' -force_key_frames expr:eq(mod(n,3),0)'
f' -vcodec libx264'
f' -crf 18'
f' -preset ultrafast'
f' -tune zerolatency'
f' -f rtsp'
f' -rtsp_transport tcp'
f' rtsp://localhost:8554/mystream'
)
ffmpeg_process = subprocess.Popen(cmd, stdin=subprocess.PIPE)
try:
while True:
image = np.full(
(height, width, 3),
(
random.randint(0, 255),
random.randint(0, 255),
random.randint(0, 255),
),
dtype=np.uint8,
)
ffmpeg_process.stdin.write(image.tobytes())
time.sleep(frame_duration)
finally:
ffmpeg_process.stdin.close()
ffmpeg_process.wait()
if __name__ == '__main__':
main()
</code></pre>
<p>The server's logs show no error messages, apart from a single 404 response (that's just the favicon, right?):</p>
<pre><code>2023/03/25 13:50:32 DEB [HLS] [conn 172.18.0.1] GET /mystream/
2023/03/25 13:50:32 DEB [HLS] [conn 172.18.0.1] [c->s] GET /mystream/ HTTP/1.1
Host: localhost:8888
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: ru,en-US;q=0.7,en;q=0.3
Connection: keep-alive
Cookie: csrftoken=iRZDO5rsJzhh5peyyKhViN9yRslNQbuZ; Webstorm-713207de=ca4f5f1e-40e8-4bfd-97da-1be2f15f6e9e
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: cross-site
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/111.0
2023/03/25 13:50:32 INF [HLS] [muxer mystream] created (requested by 172.18.0.1)
2023/03/25 13:50:32 INF [HLS] [muxer mystream] is converting into HLS, 1 track (H264)
2023/03/25 13:50:32 DEB [HLS] [conn 172.18.0.1] [s->c] HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Content-Type: text/html
Server: rtsp-simple-server
(body of 1240 bytes)
2023/03/25 13:50:32 DEB [HLS] [conn 172.18.0.1] GET /favicon.ico
2023/03/25 13:50:32 DEB [HLS] [conn 172.18.0.1] [c->s] GET /favicon.ico HTTP/1.1
Host: localhost:8888
Accept: image/avif,image/webp,*/*
Accept-Encoding: gzip, deflate, br
Accept-Language: ru,en-US;q=0.7,en;q=0.3
Connection: keep-alive
Cookie: csrftoken=iRZDO5rsJzhh5peyyKhViN9yRslNQbuZ; Webstorm-713207de=ca4f5f1e-40e8-4bfd-97da-1be2f15f6e9e
Referer: http://localhost:8888/mystream/
Sec-Fetch-Dest: image
Sec-Fetch-Mode: no-cors
Sec-Fetch-Site: same-origin
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/111.0
2023/03/25 13:50:32 DEB [HLS] [conn 172.18.0.1] [s->c] HTTP/1.1 404 Not Found
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Server: rtsp-simple-server
2023/03/25 13:50:32 DEB [HLS] [conn 172.18.0.1] GET /mystream/index.m3u8
2023/03/25 13:50:32 DEB [HLS] [conn 172.18.0.1] [c->s] GET /mystream/index.m3u8 HTTP/1.1
Host: localhost:8888
Accept: */*
Accept-Encoding: gzip, deflate, br
Accept-Language: ru,en-US;q=0.7,en;q=0.3
Connection: keep-alive
Cookie: csrftoken=iRZDO5rsJzhh5peyyKhViN9yRslNQbuZ; Webstorm-713207de=ca4f5f1e-40e8-4bfd-97da-1be2f15f6e9e
Referer: http://localhost:8888/mystream/
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-origin
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/111.0
2023/03/25 13:50:32 DEB [HLS] [conn 172.18.0.1] [s->c] HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Content-Type: application/x-mpegURL
Server: rtsp-simple-server
(body of 122 bytes)
2023/03/25 13:50:34 DEB [HLS] [conn 172.18.0.1] GET /mystream/index.m3u8
2023/03/25 13:50:34 DEB [HLS] [conn 172.18.0.1] [c->s] GET /mystream/index.m3u8 HTTP/1.1
Host: localhost:8888
Accept: */*
Accept-Encoding: gzip, deflate, br
Accept-Language: ru,en-US;q=0.7,en;q=0.3
Connection: keep-alive
Cookie: csrftoken=iRZDO5rsJzhh5peyyKhViN9yRslNQbuZ; Webstorm-713207de=ca4f5f1e-40e8-4bfd-97da-1be2f15f6e9e
Referer: http://localhost:8888/mystream/
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-origin
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/111.0
2023/03/25 13:50:34 DEB [HLS] [conn 172.18.0.1] [s->c] HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Content-Type: application/x-mpegURL
Server: rtsp-simple-server
(body of 122 bytes)
2023/03/25 13:50:36 DEB [HLS] [conn 172.18.0.1] GET /mystream/index.m3u8
2023/03/25 13:50:36 DEB [HLS] [conn 172.18.0.1] [c->s] GET /mystream/index.m3u8 HTTP/1.1
Host: localhost:8888
Accept: */*
Accept-Encoding: gzip, deflate, br
Accept-Language: ru,en-US;q=0.7,en;q=0.3
Connection: keep-alive
Cookie: csrftoken=iRZDO5rsJzhh5peyyKhViN9yRslNQbuZ; Webstorm-713207de=ca4f5f1e-40e8-4bfd-97da-1be2f15f6e9e
Referer: http://localhost:8888/mystream/
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-origin
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/111.0
2023/03/25 13:50:36 DEB [HLS] [conn 172.18.0.1] [s->c] HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Content-Type: application/x-mpegURL
Server: rtsp-simple-server
(body of 122 bytes)
</code></pre>
<p>How do I get my stream from python visible on a webpage?</p>
|
<python><ffmpeg><http-live-streaming><rtsp>
|
2023-03-25 14:08:51
| 1
| 775
|
Powercoder
|
75,842,202
| 4,772,565
|
How to save multiple figures from bokeh html file as separate png figures but download just as one zip file?
|
<p>This question is based on <a href="https://stackoverflow.com/questions/75768412/how-to-save-the-multiple-figures-in-a-bokeh-gridplot-into-separate-png-files">How to save the multiple figures in a bokeh gridplot into separate png files?</a>, where the accepted answer already provides working code to save the multiple figures in a bokeh grid plot into separate PNG files.</p>
<p>However, with that answer a number of PNG figures are downloaded, each as a separate file. It would be more convenient for my users to download all the PNG figures as one zip file.<br />
<em>Could you help me to do it?</em></p>
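The linked answer does the export client-side in JavaScript; if the PNG bytes are instead available on the Python side, bundling them is a standard-library `zipfile` job (a sketch -- the in-memory byte blobs here stand in for the real exported figures):

```python
import io
import zipfile

def bundle_pngs(named_blobs):
    """Pack {filename: png_bytes} into a single in-memory zip archive."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as zf:
        for name, blob in named_blobs.items():
            zf.writestr(name, blob)
    return buf.getvalue()

archive = bundle_pngs({'fig1.png': b'\x89PNG...', 'fig2.png': b'\x89PNG...'})
names = zipfile.ZipFile(io.BytesIO(archive)).namelist()
print(names)
```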
|
<python><bokeh>
|
2023-03-25 13:59:58
| 1
| 539
|
aura
|
75,842,180
| 310,370
|
How can I convert this Gradio file-upload code into one that takes a folder path?
|
<p>There is this gradio code that allows you to upload images, but I would prefer entering a folder path as a text input.</p>
<p>The upload button works, but I want to convert it into a component that takes a directory path instead:</p>
<pre><code> reference_imgs = gr.UploadButton(label="Upload Guide Frames", file_types = ['.png','.jpg','.jpeg'], live=True, file_count = "multiple")
</code></pre>
<p>I tried the snippet below, but it didn't work:</p>
<pre><code>reference_imgs = gr.inputs.FilePicker(label="Select folder with Guide Frames", type="folder")
</code></pre>
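A sketch of the text-input route, assuming a plain `gr.Textbox` for the path (`list_guide_frames` is an illustrative helper, not a Gradio API): collect the matching image files yourself, then pass the list on:

```python
from pathlib import Path

IMAGE_EXTS = {'.png', '.jpg', '.jpeg'}

def list_guide_frames(folder):
    """Return sorted image paths found directly inside `folder`."""
    return sorted(str(p) for p in Path(folder).iterdir()
                  if p.suffix.lower() in IMAGE_EXTS)

# In the UI this could back something like (hypothetical wiring):
#   folder_box = gr.Textbox(label="Folder with Guide Frames")
#   folder_box.change(list_guide_frames, inputs=folder_box, outputs=...)
```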
|
<python><huggingface><gradio>
|
2023-03-25 13:56:37
| 0
| 23,982
|
Furkan GΓΆzΓΌkara
|
75,842,155
| 15,637,435
|
How to use Selenium with chromedriver in apache-airflow in docker?
|
<h2>Problem</h2>
<p>I have a scraper with selenium that I want to run with airflow inside docker. I'm able to create the container and run the related dag of the scraper in airflow. However, whenever the selenium driver gets initiated, I get an error: <code>('Service' object has no attribute 'process'; 90)</code>. I think this can be traced back to a missing chromedriver when I initiate the driver.</p>
<h2>What I tried</h2>
<p>I'm currently trying to copy the chromedriver in the Dockerfile so I can use it inside the container. However, this isn't working. How can I use Selenium with chromedriver together with airflow in docker?</p>
<h2>Code</h2>
<p><strong>Error log</strong></p>
<pre><code>*** Reading local file: /opt/airflow/logs/dag_id=scrape_digitec_data/run_id=manual__2023-03-25T13:28:56.191607+00:00/task_id=scrape_digitec_data/attempt=1.log
[2023-03-25, 13:28:58 UTC] {taskinstance.py:1165} INFO - Dependencies all met for <TaskInstance: scrape_digitec_data.scrape_digitec_data manual__2023-03-25T13:28:56.191607+00:00 [queued]>
[2023-03-25, 13:28:58 UTC] {taskinstance.py:1165} INFO - Dependencies all met for <TaskInstance: scrape_digitec_data.scrape_digitec_data manual__2023-03-25T13:28:56.191607+00:00 [queued]>
[2023-03-25, 13:28:58 UTC] {taskinstance.py:1362} INFO -
--------------------------------------------------------------------------------
[2023-03-25, 13:28:58 UTC] {taskinstance.py:1363} INFO - Starting attempt 1 of 1
[2023-03-25, 13:28:58 UTC] {taskinstance.py:1364} INFO -
--------------------------------------------------------------------------------
[2023-03-25, 13:28:58 UTC] {taskinstance.py:1383} INFO - Executing <Task(PythonOperator): scrape_digitec_data> on 2023-03-25 13:28:56.191607+00:00
[2023-03-25, 13:28:58 UTC] {standard_task_runner.py:54} INFO - Started process 90 to run task
[2023-03-25, 13:28:58 UTC] {standard_task_runner.py:82} INFO - Running: ['***', 'tasks', 'run', 'scrape_digitec_data', 'scrape_digitec_data', 'manual__2023-03-25T13:28:56.191607+00:00', '--job-id', '100', '--raw', '--subdir', 'DAGS_FOLDER/dag_digitec_scraper.py', '--cfg-path', '/tmp/tmp9omrthbe']
[2023-03-25, 13:28:58 UTC] {standard_task_runner.py:83} INFO - Job 100: Subtask scrape_digitec_data
[2023-03-25, 13:28:58 UTC] {dagbag.py:525} INFO - Filling up the DagBag from /opt/***/dags/dag_digitec_scraper.py
[2023-03-25, 13:28:58 UTC] {task_command.py:384} INFO - Running <TaskInstance: scrape_digitec_data.scrape_digitec_data manual__2023-03-25T13:28:56.191607+00:00 [running]> on host 48fbc6fdff5d
[2023-03-25, 13:28:59 UTC] {taskinstance.py:1590} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=***
AIRFLOW_CTX_DAG_ID=scrape_digitec_data
AIRFLOW_CTX_TASK_ID=scrape_digitec_data
AIRFLOW_CTX_EXECUTION_DATE=2023-03-25T13:28:56.191607+00:00
AIRFLOW_CTX_TRY_NUMBER=1
AIRFLOW_CTX_DAG_RUN_ID=manual__2023-03-25T13:28:56.191607+00:00
[2023-03-25, 13:28:59 UTC] {warnings.py:109} WARNING - /home/***/.local/lib/python3.10/site-packages/***/utils/context.py:204: AirflowContextDeprecationWarning: Accessing 'execution_date' from the template is deprecated and will be removed in a future version. Please use 'data_interval_start' or 'logical_date' instead.
warnings.warn(_create_deprecation_warning(key, self._deprecation_replacements[key]))
[2023-03-25, 13:29:03 UTC] {digitec_deal_of_day_scraper_selenium.py:65} INFO - https://www.digitec.ch/en/product/hp-tilt-pen-styluses-8216103
[2023-03-25, 13:29:03 UTC] {logging_mixin.py:117} INFO - https://www.digitec.ch/en/product/hp-tilt-pen-styluses-8216103
[2023-03-25, 13:29:05 UTC] {taskinstance.py:1851} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/operators/python.py", line 175, in execute
return_value = self.execute_callable()
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/operators/python.py", line 193, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/opt/airflow/dags/dag_digitec_scraper.py", line 10, in _scrape_digitec_data
day_deals.get_product_info_df()
File "/opt/airflow/app/product_scraper/adapters/digitec_deal_of_day_scraper_selenium.py", line 32, in get_product_info_df
product_info_df = self._get_product_info()
File "/opt/airflow/app/product_scraper/adapters/digitec_deal_of_day_scraper_selenium.py", line 91, in _get_product_info
driver = webdriver.Chrome('chromedriver',
File "/home/airflow/.local/lib/python3.10/site-packages/selenium/webdriver/chrome/webdriver.py", line 80, in __init__
super().__init__(
File "/home/airflow/.local/lib/python3.10/site-packages/selenium/webdriver/chromium/webdriver.py", line 101, in __init__
self.service.start()
File "/home/airflow/.local/lib/python3.10/site-packages/selenium/webdriver/common/service.py", line 106, in start
self.assert_process_still_running()
File "/home/airflow/.local/lib/python3.10/site-packages/selenium/webdriver/common/service.py", line 117, in assert_process_still_running
return_code = self.process.poll()
AttributeError: 'Service' object has no attribute 'process'
[2023-03-25, 13:29:05 UTC] {taskinstance.py:1401} INFO - Marking task as FAILED. dag_id=scrape_digitec_data, task_id=scrape_digitec_data, execution_date=20230325T132856, start_date=20230325T132858, end_date=20230325T132905
[2023-03-25, 13:29:05 UTC] {standard_task_runner.py:102} ERROR - Failed to execute job 100 for task scrape_digitec_data ('Service' object has no attribute 'process'; 90)
[2023-03-25, 13:29:06 UTC] {local_task_job.py:164} INFO - Task exited with return code 1
[2023-03-25, 13:29:06 UTC] {local_task_job.py:273} INFO - 0 downstream tasks scheduled from follow-on schedule check
</code></pre>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM apache/airflow:2.4.1-python3.10
ENV PYTHONPATH="${PYTHONPATH}:/opt/airflow/app"
COPY Pipfile .
COPY Pipfile.lock .
COPY app/product_scraper/adapters/chromedriver .
USER airflow
RUN pip install selenium && \
pip install bs4 && \
pip install lxml && \
pip install selenium-stealth
</code></pre>
<p><strong>docker-compose.yaml</strong></p>
<pre><code>---
version: '3'
x-airflow-common:
&airflow-common
# In order to add custom dependencies or upgrade provider packages you can use your extended image.
# Comment the image line, place your Dockerfile in the directory where you placed the docker-compose.yaml
# and uncomment the "build" line below, Then run `docker-compose build` to build the images.
# image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.4.1}
build: .
environment:
&airflow-common-env
AIRFLOW__CORE__EXECUTOR: CeleryExecutor
AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
# For backward compatibility, with Airflow <2.3
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
AIRFLOW__CORE__FERNET_KEY: ''
AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
AIRFLOW__CORE__LOAD_EXAMPLES: 'false'
AIRFLOW__API__AUTH_BACKENDS: 'airflow.api.auth.backend.basic_auth'
_PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
volumes:
- ./app:/opt/airflow/app
- ./dags:/opt/airflow/dags
- ./logs:/opt/airflow/logs
- ./plugins:/opt/airflow/plugins
- ./minio:/data/minio
user: "${AIRFLOW_UID:-50000}:0"
depends_on:
&airflow-common-depends-on
redis:
condition: service_healthy
postgres:
condition: service_healthy
services:
postgres:
image: postgres:13
environment:
POSTGRES_USER: airflow
POSTGRES_PASSWORD: airflow
POSTGRES_DB: airflow
volumes:
- postgres-db-volume:/var/lib/postgresql/data
healthcheck:
test: ["CMD", "pg_isready", "-U", "airflow"]
interval: 5s
retries: 5
restart: always
redis:
image: redis:latest
expose:
- 6379
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 30s
retries: 50
restart: always
airflow-webserver:
<<: *airflow-common
command: webserver
ports:
- 8080:8080
healthcheck:
test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
interval: 10s
timeout: 10s
retries: 5
restart: always
depends_on:
<<: *airflow-common-depends-on
airflow-init:
condition: service_completed_successfully
airflow-scheduler:
<<: *airflow-common
command: scheduler
healthcheck:
test: ["CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"']
interval: 10s
timeout: 10s
retries: 5
restart: always
depends_on:
<<: *airflow-common-depends-on
airflow-init:
condition: service_completed_successfully
airflow-worker:
<<: *airflow-common
command: celery worker
healthcheck:
test:
- "CMD-SHELL"
- 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
interval: 10s
timeout: 10s
retries: 5
environment:
<<: *airflow-common-env
# Required to handle warm shutdown of the celery workers properly
# See https://airflow.apache.org/docs/docker-stack/entrypoint.html#signal-propagation
DUMB_INIT_SETSID: "0"
restart: always
depends_on:
<<: *airflow-common-depends-on
airflow-init:
condition: service_completed_successfully
airflow-triggerer:
<<: *airflow-common
command: triggerer
healthcheck:
test: ["CMD-SHELL", 'airflow jobs check --job-type TriggererJob --hostname "$${HOSTNAME}"']
interval: 10s
timeout: 10s
retries: 5
restart: always
depends_on:
<<: *airflow-common-depends-on
airflow-init:
condition: service_completed_successfully
airflow-init:
<<: *airflow-common
entrypoint: /bin/bash
# yamllint disable rule:line-length
command:
- -c
- |
function ver() {
printf "%04d%04d%04d%04d" $${1//./ }
}
airflow_version=$$(AIRFLOW__LOGGING__LOGGING_LEVEL=INFO && gosu airflow airflow version)
airflow_version_comparable=$$(ver $${airflow_version})
min_airflow_version=2.2.0
min_airflow_version_comparable=$$(ver $${min_airflow_version})
if (( airflow_version_comparable < min_airflow_version_comparable )); then
echo
echo -e "\033[1;31mERROR!!!: Too old Airflow version $${airflow_version}!\e[0m"
echo "The minimum Airflow version supported: $${min_airflow_version}. Only use this or higher!"
echo
exit 1
fi
if [[ -z "${AIRFLOW_UID}" ]]; then
echo
echo -e "\033[1;33mWARNING!!!: AIRFLOW_UID not set!\e[0m"
echo "If you are on Linux, you SHOULD follow the instructions below to set "
echo "AIRFLOW_UID environment variable, otherwise files will be owned by root."
echo "For other operating systems you can get rid of the warning with manually created .env file:"
echo " See: https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#setting-the-right-airflow-user"
echo
fi
one_meg=1048576
mem_available=$$(($$(getconf _PHYS_PAGES) * $$(getconf PAGE_SIZE) / one_meg))
cpus_available=$$(grep -cE 'cpu[0-9]+' /proc/stat)
disk_available=$$(df / | tail -1 | awk '{print $$4}')
warning_resources="false"
if (( mem_available < 4000 )) ; then
echo
echo -e "\033[1;33mWARNING!!!: Not enough memory available for Docker.\e[0m"
echo "At least 4GB of memory required. You have $$(numfmt --to iec $$((mem_available * one_meg)))"
echo
warning_resources="true"
fi
if (( cpus_available < 2 )); then
echo
echo -e "\033[1;33mWARNING!!!: Not enough CPUS available for Docker.\e[0m"
echo "At least 2 CPUs recommended. You have $${cpus_available}"
echo
warning_resources="true"
fi
if (( disk_available < one_meg * 10 )); then
echo
echo -e "\033[1;33mWARNING!!!: Not enough Disk space available for Docker.\e[0m"
echo "At least 10 GBs recommended. You have $$(numfmt --to iec $$((disk_available * 1024 )))"
echo
warning_resources="true"
fi
if [[ $${warning_resources} == "true" ]]; then
echo
echo -e "\033[1;33mWARNING!!!: You have not enough resources to run Airflow (see above)!\e[0m"
echo "Please follow the instructions to increase amount of resources available:"
echo " https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#before-you-begin"
echo
fi
mkdir -p /sources/logs /sources/dags /sources/plugins
chown -R "${AIRFLOW_UID}:0" /sources/{logs,dags,plugins}
exec /entrypoint airflow version
# yamllint enable rule:line-length
environment:
<<: *airflow-common-env
_AIRFLOW_DB_UPGRADE: 'true'
_AIRFLOW_WWW_USER_CREATE: 'true'
_AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
_AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
_PIP_ADDITIONAL_REQUIREMENTS: ''
user: "0:0"
volumes:
- .:/sources
airflow-cli:
<<: *airflow-common
profiles:
- debug
environment:
<<: *airflow-common-env
CONNECTION_CHECK_MAX_COUNT: "0"
# Workaround for entrypoint issue. See: https://github.com/apache/airflow/issues/16252
command:
- bash
- -c
- airflow
# You can enable flower by adding "--profile flower" option e.g. docker-compose --profile flower up
# or by explicitly targeted on the command line e.g. docker-compose up flower.
# See: https://docs.docker.com/compose/profiles/
flower:
<<: *airflow-common
command: celery flower
profiles:
- flower
ports:
- 5555:5555
healthcheck:
test: ["CMD", "curl", "--fail", "http://localhost:5555/"]
interval: 10s
timeout: 10s
retries: 5
restart: always
depends_on:
<<: *airflow-common-depends-on
airflow-init:
condition: service_completed_successfully
minio:
image: 'minio/minio:latest'
ports:
- '${FORWARD_MINIO_PORT:-9000}:9000'
- '${FORWARD_MINIO_CONSOLE_PORT:-9090}:9090'
environment:
MINIO_ROOT_USER: 'root'
MINIO_ROOT_PASSWORD: 'password'
volumes:
- ./minio:/data/minio
command: minio server /data/minio --console-address ":9090"
# volumes:
# minio:
# driver: local
selenium:
image: selenium/standalone-chrome
ports:
- "4444:4444"
volumes:
postgres-db-volume:
</code></pre>
<p><strong>driver initiation in scraper script</strong></p>
<pre><code>options = webdriver.ChromeOptions()
options.add_argument('--ignore-ssl-errors=yes')
options.add_argument('--ignore-certificate-errors')
# Set user agent
user_agent = 'userMozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36'
options.add_argument(f'user-agent={user_agent}')
options.add_argument('--disable-dev-shm-usage')
options.add_argument('--no-sandbox')
options.add_argument('-headless')
# Launch the browser
driver = webdriver.Chrome('chromedriver', options=options)
</code></pre>
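Two common ways to get a working driver into this setup (both are sketches, not verified against this exact stack). One is to install a matching browser and driver inside the image itself, instead of COPYing a host-built <code>chromedriver</code> binary that may not run on the image's libc/architecture:

```dockerfile
FROM apache/airflow:2.4.1-python3.10
ENV PYTHONPATH="${PYTHONPATH}:/opt/airflow/app"
USER root
# Debian's chromium-driver package stays in sync with its chromium build
RUN apt-get update \
    && apt-get install -y --no-install-recommends chromium chromium-driver \
    && rm -rf /var/lib/apt/lists/*
USER airflow
RUN pip install selenium bs4 lxml selenium-stealth
```

The other is to skip a local driver entirely and talk to the `selenium` service already declared in the compose file, e.g. `webdriver.Remote(command_executor='http://selenium:4444', options=options)` (with Selenium 4 the `/wd/hub` suffix is optional).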
|
<python><docker><selenium-webdriver><web-scraping><airflow>
|
2023-03-25 13:52:35
| 1
| 396
|
Elodin
|
75,842,117
| 11,720,066
|
Pydantic - apply validator on all fields of specific type
|
<p>In my project, all <code>pydantic</code> models inherit from a custom "base model" called <code>GeneralModel</code>.</p>
<p>This makes it possible to configure the same behavior for the entire project in one place.</p>
<p>Let's assume the following implementation:</p>
<pre><code>from pydantic import BaseModel
class GeneralModel(BaseModel):
class Config:
use_enum_values = True
exclude_none = True
</code></pre>
<p>One particularly desired behavior is to perform a validation on all fields of a specific type.</p>
<p>Any ideas how to achieve this using my <code>GeneralModel</code>? Different approaches are welcome as well.</p>
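One pydantic v1-style sketch (an assumption about the intended behavior -- `strip_strings` and `User` are illustrative names): a wildcard `@validator('*', pre=True)` fires for every field, and an `isinstance` check narrows the work to the one type you care about:

```python
from pydantic import BaseModel, validator

class GeneralModel(BaseModel):
    class Config:
        use_enum_values = True

    # '*' targets all fields; the isinstance check restricts the
    # validation to str-typed values only.
    @validator('*', pre=True)
    def strip_strings(cls, v):
        if isinstance(v, str):
            return v.strip()
        return v

class User(GeneralModel):
    name: str
    age: int

user = User(name='  alice  ', age=3)
print(user.name)
```

Because the validator lives on `GeneralModel`, every inheriting model picks it up automatically.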
|
<python><validation><pydantic><code-duplication>
|
2023-03-25 13:46:19
| 2
| 613
|
localhost
|
75,841,971
| 1,469,465
|
ResourceWarning: unclosed socket after removing Docker container with Python SDK
|
<p>I have Docker Desktop installed on my MacBook and I am using Docker Python SDK (<a href="https://github.com/docker/docker-py" rel="nofollow noreferrer">https://github.com/docker/docker-py</a>) version 6.0.1 (but same issue appears in 5.0.3). For a test case, I create a local container and store this container in a variable <code>container</code>. At a later stage, I want to remove the container. This is what I execute</p>
<pre class="lang-py prettyprint-override"><code>container.remove(force=True)
container = None
</code></pre>
<p>When executing this last line, I (sometimes) get the warning</p>
<pre class="lang-bash prettyprint-override"><code>$HOME/git/es-client/test/local_es_decorator.py:197: ResourceWarning: unclosed <socket.socket fd=5, family=AddressFamily.AF_UNIX, type=SocketKind.SOCK_STREAM, proto=0, raddr=$HOME/.docker/run/docker.sock>
self._local_es_container = None
Object allocated at (most recent call last):
File "$HOME/.pyenv/versions/es-client/lib/python3.8/site-packages/docker/transport/unixconn.py", lineno 28
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
</code></pre>
<p>Adding a 5-second sleep between those two lines does not solve the issue. In one virtual environment I get these warnings, in another I don't, although I don't know what is different between the two. What could explain this behavior?</p>
<p>This is the decorator that is creating and removing this Docker container:</p>
<pre><code>import asyncio
import tracemalloc
from functools import wraps
from os import getenv
from pprint import pprint
from typing import Optional
from unittest import TestCase
import requests
from docker import types, from_env
from docker.errors import NotFound
from docker.models.containers import Container
from elasticsearch_dsl import connections
from time import sleep
from ESClient.connection import set_endpoint
from ESClient.utils import measure_time
ELASTICSEARCH_TEST_PORT = 9200
TEST_ELASTICSEARCH_ENDPOINT = 'http://localhost'
TEST_ELASTICSEARCH_CONNECTION = connections.create_connection(hosts=['localhost'], port=ELASTICSEARCH_TEST_PORT,
timeout=10, max_retries=2)
ELASTICSEARCH_TEST_URL = f'{TEST_ELASTICSEARCH_ENDPOINT}:{ELASTICSEARCH_TEST_PORT}'
CONTAINER_NAME = 'elasticsearch_test'
# noinspection PyPep8Naming
class use_local_elasticsearch(TestContextDecorator):
"""
Class decorator for unittest.TestCase to make use of a Docker ElasticSearch container for that test case
This introduces a dependency to the Docker SDK
You also need to have Docker installed, see https://docs.docker.com/get-docker/
Usage example:
If we have a TestCase class called FindAutoRestockTestCase that tests integration with ES (therefore we don't want
to just mock those interactions but to actually have our code integrate to a real ES cluster), just decorate the
class with this decorator as follows:
@use_local_elasticsearch()
class FindAutoRestockTestCase(TestCase):
...
This will spin up a local ES docker container, create all required indices there, set the endpoint in ESClient
(if needed) and override settings ELASTICSEARCH_ENDPOINT and ELASTICSEARCH_CONNECTION before running the tests in
the class, so no other change is needed. After every test method in the class is executed, the container will be
removed (unless destroy_container = False), and settings are restored to their previous values.
Note that the cluster is spin up when you initialize the class. This means that at the time that
setUpClass or setUpTestData is called, the cluster is not available yet.
"""
def __init__(self, destroy_container: bool = True):
super().__init__()
tracemalloc.start()
self._destroy_container = destroy_container
self._local_es_container: Optional[Container] = None
self._decorated_class = None
self._enabled = False
def enable(self):
"""
Call method in charge of creating ES container with indices, set endpoint of ESClient if present,
and enable override_settings
If the environment variable USE_EXISTING_ES is set, we don't spin up a Docker container from Python.
This is used by CircleCI. Rationale: The tests in CircleCI are running from a Docker container with Python.
This particular container does not have access to Docker itself, and therefore cannot create containers
itself. We create an existing Docker container in the CircleCI script and reuse it here. In your local
environment however, we do have access to Docker, and as long as the app is running, we can (and will if
necessary) create a Docker container whenever it is necessary
Often, a test class wants to add test data to ElasticSearch one time for all test methods. However, we
cannot create this data in setupTestData(), since this method is called before the class is decorated
and this method enable() is called, and hence no ElasticSearch is running at that moment in time.
This method enable() is called for every test case method inside a TestClass, so if we setup the cluster
in this method and reset all the indices, we have to do this for every test method. This is time consuming
if there are many test cases inside a test class.
In order to only one time setup the cluster per test class and create the test data in ElasticSearch only
once after the cluster is initialized, we do the following:
- Run the classmethod on_elasticsearch_loaded() on the decorated test class. Individual classes can
implement this method as they see fit to generate test data. If the test class doesn't implement
this method, no test data is generated at this point.
- Keep track of whether we have initialized ElasticSearch already in the variable self._enabled. If the
cluster has been initialized, we don't set it up again.
If destroy_container=True, we setup a fresh container for every test method, and the classmethod
on_elasticsearch_loaded() is called for every test method. It is therefore important that this method is
idempotent if it has effects other than in ElasticSearch, e.g. creating data in SQL.
"""
if not self._local_es_container or not self._enabled:
if not getenv('USE_EXISTING_ES'):
self._spin_up_es_container()
self._wait_until_es_runs()
set_endpoint(TEST_ELASTICSEARCH_ENDPOINT)
if hasattr(self._decorated_class, 'on_elasticsearch_loaded'):
self._decorated_class.on_elasticsearch_loaded()
self._enabled = True
def disable(self):
"""
Remove ES container
"""
if self._destroy_container and self._local_es_container:
self._local_es_container.remove(force=True)
pprint(tracemalloc.get_traced_memory())
# tracemalloc.stop()
traceback = tracemalloc.get_object_traceback(self._local_es_container)
print(traceback)
print('-' * 80)
self._local_es_container = None
def decorate_class(self, cls):
"""
Call decorate_class of both parent and "grandparent" classes,
as both are needed but parent class does not call its parent
"""
result_cls = super().decorate_class(cls)
# Keep track of the decorated class in order to call on_elasticsearch_loaded()
# on it, see docblock of enable() for more details
self._decorated_class = result_cls
return result_cls
def _spin_up_es_container(self):
"""
Spin up a Docker ES container if there is not yet one running
For CircleCI, we set up a Docker container in the .circleci configuration, and reuse that one
In that case, the environment variable USE_EXISTING_ES=1 and this function is not called at all.
It would fail, because we could not create a Docker container with ElasticSearch from the
Python docker container in which the tests are running.
"""
# Spin up ElasticSearch docker container
es_environment = {'cluster.name': 'docker-cluster', 'discovery.type': 'single-node',
'bootstrap.memory_lock': 'true', 'xpack.security.enabled': 'false',
'ES_JAVA_OPTS': '-Xms128m -Xmx128m'}
client = from_env()
ulimit = types.Ulimit(name='memlock', soft=-1, hard=-1)
# Check if there is already a running test ES container locally. If so, use it
try:
container = client.containers.get(CONTAINER_NAME)
if container.status != 'running':
container.start()
except NotFound:
# If there is not yet a running test ES container, run it
container = client.containers.run('elasticsearch:7.12.0', name=CONTAINER_NAME, detach=True,
environment=es_environment, ulimits=[ulimit],
ports={str(ELASTICSEARCH_TEST_PORT): ELASTICSEARCH_TEST_PORT})
# container_starting, _ = container.exec_run(f'curl {ELASTICSEARCH_TEST_URL}')
# Store a reference to the container, so we can remove it on tear down
self._local_es_container = container
def _wait_until_es_runs(self):
# Wait until ElasticSearch is running before we try to create indices
container_starting = True
with measure_time() as t:
while container_starting:
try:
requests.get(ELASTICSEARCH_TEST_URL)
container_starting = False
except requests.exceptions.ConnectionError as e:
if t.seconds < 5 * 60:
sleep(5)
else:
msg = 'ElasticSearch container is not running after 5 minutes'
raise requests.exceptions.ConnectionError(msg) from e
</code></pre>
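The standard approach for testing the Python backend through its API is Flask's built-in test client driven from <code>pytest</code>: no running server, browser, or JavaScript is needed. A minimal, self-contained sketch (the app factory and route names here are illustrative, not taken from the project above):

```python
import pytest
from flask import Flask, jsonify


def create_app():
    """Stand-in application factory; in a real project, import yours instead."""
    app = Flask(__name__)

    @app.route('/ping')
    def ping():
        return jsonify({'status': 'ok'})

    return app


@pytest.fixture
def client():
    app = create_app()
    app.config['TESTING'] = True
    # The test client talks to the app in-process, through the HTTP API only
    return app.test_client()


def test_ping(client):
    response = client.get('/ping')
    assert response.status_code == 200
    assert response.get_json() == {'status': 'ok'}
```

Run with <code>pytest</code>; every route can be exercised this way, including POSTs via <code>client.post(url, json=payload)</code>.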
|
<python><docker>
|
2023-03-25 13:18:47
| 0
| 6,938
|
physicalattraction
|
75,841,918
| 1,972,356
|
Vectorizing multivariate normal distribution calculation
|
<p>I have n points in 3D space, each with a corresponding guess and certainty attached to it. I want to calculate the multivariate normal distribution for each point given its guess and certainty. Currently, I'm using an iterative approach and the <code>Scipy.stats</code> <code>multivariate_normal</code> function, as shown in the code snippet below:</p>
<pre><code>import numpy as np
from scipy.stats import multivariate_normal
n = 10
def create_points(n):
return np.random.randint(0, 1000, size=(n, 3))
real = create_points(n)
guess = create_points(n)
uncertainties = np.random.randint(1, 100, size=n)
def iterative_scoring_scipy(real, guess, uncertainties):
score = 0.0
covariances = [
np.diag([uncertainties[i]**2]*3)
for i in range(len(real))
]
for i in range(n):
score += multivariate_normal.pdf(real[i], mean=guess[i], cov=covariances[i])
return score
print(iterative_scoring_scipy(real, guess, uncertainties))
</code></pre>
<p>Here is an attempt that does not use <code>scipy</code> but instead uses numpy:</p>
<pre><code>def iterative_scoring_numpy(real, guess, uncertainties):
score = 0.0
for i in range(n):
# calculate the covariance matrix
cov = np.diag([uncertainties[i]**2]*3)
# calculate the determinant and inverse of the covariance matrix
det = np.linalg.det(cov)
inv = np.linalg.inv(cov)
# calculate the difference between the real and guess points
diff = real[i] - guess[i]
# calculate the exponent
exponent = -0.5 * np.dot(np.dot(diff, inv), diff)
# calculate the constant factor
const = 1 / np.sqrt((2 * np.pi)**3 * det)
# calculate the probability density function value
pdf = const * np.exp(exponent)
# add the probability density function value to the score
score += pdf
return score
print(iterative_scoring_numpy(real, guess, uncertainties))
</code></pre>
<p>However, both approaches are slow and I'm looking for a way to speed it up using vectorization. How can I vectorize either code snippets to make it faster?</p>
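Since each covariance is a scalar multiple of the identity (<code>uncertainty**2 * I</code>), the pdf factorizes and the whole sum can be computed without a Python loop. A sketch (pure numpy; the loop version reproduces the question's numpy implementation for comparison):

```python
import numpy as np


def iterative_scoring(real, guess, uncertainties):
    """Reference loop implementation, as in the question (isotropic covariance)."""
    score = 0.0
    for i in range(len(real)):
        var = uncertainties[i] ** 2
        diff = real[i] - guess[i]
        const = 1 / np.sqrt((2 * np.pi * var) ** 3)  # det of var * I_3 is var**3
        score += const * np.exp(-0.5 * diff @ diff / var)
    return score


def vectorized_scoring(real, guess, uncertainties):
    """Same computation, fully vectorized over all n points at once."""
    # covariance is var * I for each point, so the pdf reduces to
    # (2*pi*var)**(-3/2) * exp(-||real - guess||**2 / (2*var))
    var = np.asarray(uncertainties, dtype=float) ** 2                      # (n,)
    sq_dist = np.sum((np.asarray(real, float) - np.asarray(guess, float)) ** 2, axis=1)
    pdf = (2 * np.pi * var) ** -1.5 * np.exp(-0.5 * sq_dist / var)
    return pdf.sum()
```

Both return the same score; the vectorized version avoids n separate pdf calls.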
|
<python><scipy><vectorization><distribution>
|
2023-03-25 13:09:41
| 1
| 1,373
|
NicolaiF
|
75,841,894
| 13,955,154
|
Progress bar in a for in os.listdir() with model.predict() inside the loop
|
<p>I have a simple for loop that works in this way:</p>
<pre><code>for filename in tqdm(os.listdir('train')):
path = os.path.join('train', filename)
for i in range(9):
features = model.predict(some_array)
</code></pre>
<p>Where model is a keras model used for feature extraction, so model.predict() is the classical method.
As you can see I tried to use tqdm to see the progress of the loop, for example if my directory has 5 files, I would expect to see a progress bar that tells me how much files I already passed through. Instead with my code, I get 5 small progress bars like:</p>
<blockquote>
<p>1/1 [==============================] - 0s 31ms<br />
1/1 [==============================] - 0s 47ms<br />
1/1 [==============================] - 0s 47ms<br />
1/1 [==============================] - 0s 30ms<br />
1/1 [==============================] - 0s 32ms</p>
</blockquote>
<p>How can I solve?</p>
|
<python><for-loop><progress-bar><tqdm>
|
2023-03-25 13:06:35
| 0
| 720
|
Lorenzo Cutrupi
|
75,841,722
| 3,257,191
|
Efficiently editing large input file based on simple lookup with python dataframes
|
<p>I have a very large txt file (currently 6Gb, 50m rows) with a structure like this...</p>
<pre><code>**id amount batch transaction sequence**
a2asd 12.6 123456 12394891237124 0
bs9dj 0.6 123456 12394891237124 1
etc...
</code></pre>
<p>I read the file like this...</p>
<pre><code>inputFileDf = pd.read_csv(filename, header=None, index_col=False, sep ='\t', names=['id','amount','batch','transaction','sequence'])
</code></pre>
<p>I also have a list that I'm generating during the app run (before loading the inputFileDf) that stores millions of rows of just the "transaction" and "sequence" columns...</p>
<pre><code>runListDf = pd.DataFrame(runList, columns=['transaction','sequence-2'])
</code></pre>
<p>At the end of the run I want to update the input file based on the matches in the list as follows...</p>
<pre><code># merge the 2 input dfs (this step takes the longest)
combinedDf = pd.merge(inputFileDf, runListDf, how='left', left_on=['transaction','sequence'], right_on = ['transaction','sequence-2'])
combinedDf['sequence-2'] = combinedDf['sequence-2'].fillna(value=-1)
# create a new isValid column based on whether there's a match between the sequence fields (0 means invalid and will be removed later)
combinedDf['isValid'] = np.where(combinedDf['sequence'] == combinedDf['sequence-2'],0,1)
combinedDf = combinedDf.drop('sequence-2', axis=1)
# we only care about matching columns
combinedDf = combinedDf.loc[combinedDf['isValid'] == 1]
combinedDf.drop('isValid', axis=1, inplace=True)
combinedDf.to_csv(filename, header=None, index=None, sep ='\t')
</code></pre>
<p>I'm running into performance issues, and I'm not sure whether they are due to my code or simply the size of the comparisons I need. I've experimented (as suggested by ChatGPT!) with replacing my pd.merge operations with set_index + join, but the set_index step takes even longer...</p>
<pre><code># this is even less efficient in my case
inputFileDf.set_index(['transaction', 'sequence'], inplace=True)
runListDf.set_index(['transaction', 'sequence'], inplace=True)
combinedDf = inputFileDf.join(runListDf, how='left')
</code></pre>
<p>Really appreciate any thoughts on whether it may be possible to perform this task much more efficiently.</p>
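Since the goal is just to drop the rows of the input file whose <code>(transaction, sequence)</code> pair occurs in the run list, the merge plus <code>isValid</code> bookkeeping can be replaced by a single anti-join via <code>MultiIndex.isin</code>, which avoids materializing the merged frame. A small sketch with toy data (column values are illustrative):

```python
import pandas as pd

# Toy stand-ins for inputFileDf and runListDf
input_df = pd.DataFrame({
    'transaction': [100, 100, 200, 300],
    'sequence':    [0,   1,   0,   0],
    'amount':      [1.0, 2.0, 3.0, 4.0],
})
run_df = pd.DataFrame({'transaction': [100, 300], 'sequence': [1, 0]})

# Boolean mask: which (transaction, sequence) pairs appear in the run list
mask = pd.MultiIndex.from_frame(input_df[['transaction', 'sequence']]).isin(
    pd.MultiIndex.from_frame(run_df[['transaction', 'sequence']]))

# Anti-join: keep only the rows without a match -- no merge needed
result = input_df[~mask]
```

This does one hash-based membership test instead of building a merged frame, filling NaNs, and filtering it again.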
|
<python><pandas><dataframe><bigdata>
|
2023-03-25 12:35:12
| 1
| 1,317
|
d3wannabe
|
75,841,624
| 1,116,675
|
Problem to process visually identical looking characters (umlauts)
|
<p>This may probably be a more general issue related to character encoding, but since I came across the issue while coding an outer join of two dataframes, I post it with a Python code example.</p>
<p><strong>The bottom-line question is: why is <code>ö</code> technically not the identical character to <code>ö</code>, and how can I make sure both are not only visually but also technically identical?
If you copy-paste both characters into a text editor and search for one of them, you will never find both!</strong></p>
<p>So now the Python example trying to do a simple outer join of two dataframes on the column 'filename' (here presented as CSV data):</p>
<p>df1:</p>
<pre><code>filename;abstract
problematic_ö.txt;abc
non-problematic_ö.txt;yxz
</code></pre>
<p>df2:</p>
<pre><code>bytes;filename
374;problematic_ö.txt
128;non-problematic_ö.txt
</code></pre>
<p>Python code:</p>
<pre><code>import csv
import pandas as pd
df1 = pd.read_csv('df1.csv', header=0, sep = ';')
df2 = pd.read_csv('df2.csv', header=0, sep = ';')
print(df1)
print(df2)
df_outerjoin = pd.merge(df1, df2, how='outer', indicator=True)
df_outerjoin.to_csv('df_outerjoin.csv', sep =';', index=False, header=True, quoting=csv.QUOTE_NONNUMERIC)
print(df_outerjoin)
</code></pre>
<p>Output:</p>
<pre><code># filename abstract bytes _merge
1 problematic_ö.txt abc NaN left_only
2 non-problematic_ö.txt yxz 128.0 both
3 problematic_ö.txt NaN 374.0 right_only
</code></pre>
<p>So the 'ö' in the problematic filename isn't recognised as the same character as the 'ö' in the non-problematic filename.</p>
<p>What is happening here?</p>
<p>What can I do to overcome this issue? Can I do something "smart" by importing the data files with a special encoding setting, or will I have to do a dumb search and replace?</p>
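This is almost certainly a Unicode normalization mismatch: the problematic name very likely stores the umlaut as <code>o</code> plus a combining diaeresis (the decomposed NFD form), while the other stores the single composed code point (NFC). String equality compares code points, not rendered glyphs, so the two never match. Normalizing both sides before the merge fixes it; a minimal demonstration:

```python
import unicodedata

decomposed = 'o\u0308'   # 'o' followed by COMBINING DIAERESIS (NFD form)
composed = '\u00f6'      # single code point LATIN SMALL LETTER O WITH DIAERESIS (NFC)

# Visually identical, but not equal as strings
assert decomposed != composed

# After normalizing to NFC they compare equal
assert unicodedata.normalize('NFC', decomposed) == composed
```

With pandas you can normalize a whole column before merging, e.g. <code>df1['filename'] = df1['filename'].str.normalize('NFC')</code> (and the same for <code>df2</code>).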
|
<python><character-encoding><utf>
|
2023-03-25 12:17:02
| 3
| 938
|
Madamadam
|
75,841,470
| 2,386,605
|
Scalars method in PostgreSQL returns only the first of the joined tables
|
<p>For sqlalchemy, I am running something like</p>
<pre><code>res.scalars().all()
</code></pre>
<p>However, I just get a list of <code>Foo</code> items and not <code>(Foo, Bar)</code>.</p>
<p>Do you know how to fix that?</p>
|
<python><sql><python-3.x><postgresql><sqlalchemy>
|
2023-03-25 11:43:53
| 0
| 879
|
tobias
|
75,841,428
| 16,175,571
|
Populate Django Model with for-loop
|
<p>I have a model <code>Task</code> which I want to populate with a for loop.
In a list I have the tasks that should be passed into the model.</p>
<p>My model actually has more than three task fields (below I have shown only three). The list will also have a varying number of entries; it can also contain just one task.</p>
<pre><code>tasks = ['first task', 'second task', 'third task']
class Task(models.Model):
ad = models.OneToOneField(Ad, on_delete=models.CASCADE, primary_key=True, blank=True, null=False)
task1 = models.CharField(max_length=256, blank=True, null=True)
task2 = models.CharField(max_length=256, blank=True, null=True)
task3 = models.CharField(max_length=256, blank=True, null=True)
def __str__(self):
return f'Tasks for {self.ad}'
</code></pre>
<p>My approach looks like this:</p>
<pre><code>task_obj = Task.objects.create(
ad = ad
)
for idx, t in enumerate(tasks):
task_obj.f'task{idx+1}' = t
</code></pre>
<p>Basically this part <code>f'task{idx+1}'</code> should not be a string, but the actual variable of the model.</p>
<p>Is this even possible? Or does an other way exist I am not aware of?</p>
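Yes: attribute names built from strings are assigned with the built-in <code>setattr</code>. A sketch using a plain class as a stand-in for the Django model (on the real model you would call <code>task_obj.save()</code> afterwards):

```python
# Plain stand-in for the Django model, to show the mechanism only
class Task:
    task1 = None
    task2 = None
    task3 = None


tasks = ['first task', 'second task', 'third task']
task_obj = Task()
for idx, t in enumerate(tasks):
    # setattr(obj, name_string, value) is the dynamic form of obj.name = value
    setattr(task_obj, f'task{idx + 1}', t)
```

The same loop works on a Django model instance created with <code>Task.objects.create(ad=ad)</code>, followed by one <code>save()</code>.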
|
<python><django><django-models>
|
2023-03-25 11:34:05
| 1
| 337
|
GCMeccariello
|
75,841,217
| 5,558,021
|
Folium Choropleth doesn't change colors on tiles
|
<pre><code>DF = pd.DataFrame({'REGION_ID':list(range(0,88)), 'share': list(map(lambda x: x* 5.3, list(range(0,88))))})
m = folium.Map(location=[63.391522, 96.328125], zoom_start=3)
rel_ = folium.Choropleth(
geo_data = './admin_level_4.geojson',
name = 'Имя',
data = DF,
columns=['REGION_ID', 'share'],
key_on='id',
bins = 10,
fill_color='BuGn',
nan_fill_color='darkblue',
nan_fill_opacity=0.5,
fill_opacity=0.7,
line_opacity=0.2,
legend_name = 'Легенда',
highlight = True,
show = False
)
rel_.add_to(m)
</code></pre>
<p>The tutorials mention that REGION_ID should correspond to the same REGION_ID in the GeoJSON, but I can't find any...</p>
<pre><code>gpd.read_file('./admin_level_4.geojson').columns.values
array(['flag', 'ref', 'name', 'note', 'is_in', 'koatuu', 'place',
'ref:en', 'ref:ru', 'ref:uk', 'source', 'int_ref', 'name:ab',
'name:af', 'name:ar', 'name:av', 'name:az', 'name:ba', 'name:be',
'name:bg', 'name:bs', 'name:ca', 'name:ce', 'name:cs', 'name:cu',
'name:cv', 'name:cy', 'name:da', 'name:de', 'name:el', 'name:en',
'name:eo', 'name:es', 'name:et', 'name:eu', 'name:fa', 'name:fi',
'name:fr', 'name:fy', 'name:ga', 'name:he', 'name:hi', 'name:hr',
'name:hu', 'name:hy', 'name:id', 'name:is', 'name:it', 'name:ja',
'name:ka', 'name:kk', 'name:ko', 'name:ku', 'name:kv', 'name:ky',
'name:la', 'name:lb', 'name:lt', 'name:lv', 'name:mk', 'name:mn',
'name:mr', 'name:ms', 'name:nb', 'name:nl', 'name:nn', 'name:no',
'name:oc', 'name:os', 'name:pl', 'name:pt', 'name:ro', 'name:ru',
'name:se', 'name:sh', 'name:sk', 'name:sl', 'name:sq', 'name:sr',
'name:su', 'name:sv', 'name:sw', 'name:ta', 'name:tg', 'name:tl',
'name:tr', 'name:tt', 'name:ug', 'name:uk', 'name:ur', 'name:uz',
'name:vi', 'name:yi', 'name:zh', 'alt_name', 'website', 'boundary',
'int_name', 'name:aba', 'name:ace', 'name:ast', 'name:atv',
'name:bxr', 'name:crh', 'name:dsb', 'name:hak', 'name:hsb',
'name:inh', 'name:kaa', 'name:kbd', 'name:koi', 'name:krc',
'name:krl', 'name:lbe', 'name:lez', 'name:lrc', 'name:mhr',
'name:mrj', 'name:myv', 'name:nog', 'name:pam', 'name:pnb',
'name:sah', 'name:sco', 'name:szl', 'name:tyv', 'name:tzl',
'name:udm', 'name:vep', 'name:war', 'name:xal', 'name_int',
'old_name', 'timezone', 'wikidata', 'ISO3166-2', 'ssrf:code',
'wikipedia', 'cladr:code', 'oktmo:user', 'omkte:code',
'addr:region', 'population', 'short_name', 'source:url',
'admin_level', 'alt_name:ca', 'alt_name:de', 'alt_name:en',
'alt_name:fr', 'alt_name:nl', 'alt_name:ru', 'alt_name:sl',
'alt_name:vi', 'border_type', 'name:az-cyr', 'old_name:en',
'old_name:eo', 'old_name:fr', 'old_name:os', 'old_name:vi',
'addr:country', 'addr:postcode', 'cadaster:code', 'name:be-x-old',
'gis-lab:status', 'wikipedia:ru', 'wikipedia:uk', 'is_in:country',
'official_name', 'gost_7.67-2003', 'name:be-tarask',
'is_in:continent', 'name:zh-min-nan', 'official_name:av',
'official_name:az', 'official_status', 'old_alt_name:vi',
'population:date', 'official_name:ca', 'official_name:ce',
'official_name:de', 'official_name:en', 'official_name:es',
'official_name:fr', 'official_name:it', 'official_name:ru',
'official_name:lez', 'official_name:tt', 'official_name:bxr',
'official_name:udm', 'population:source', 'is_in:country_code',
'geometry'], dtype=object)
</code></pre>
<p>And in the map visualization I don't see any color variation between the regions.</p>
<p>What is the problem here?</p>
<p>I have tried adding an id column manually via geopandas and saving it to GeoJSON, but I get an error: ValueError: key_on <code>'id'</code> not found in GeoJSON.</p>
|
<python><gis><geospatial><folium><choropleth>
|
2023-03-25 10:56:53
| 1
| 1,383
|
Dmitry Sokolov
|
75,841,155
| 673,600
|
Making a Pandas DataFrame allowable for an update by reference after a filter
|
<p>I'm filtering a df as follows with a function named <code>sell_shares</code>:</p>
<pre><code>def sell_shares(df_company, number_shares, date_end):
df_filtered = df_company.copy()[(df_company['Date'] <= date_end)]
df_filtered['Date'] = pd.to_datetime(df_filtered['Date'], dayfirst=True)
remaining_shares = number_shares  # default
for i, row in df_filtered.iterrows():
row = df_filtered.iloc[i]
remaining_shares, df_filtered = remove_shares(df_filtered, i, remaining_shares)
</code></pre>
<p>and then calling a function with <code>df_filtered</code></p>
<pre><code>def remove_shares(df, i, number_shares):
#print("Considering row: ", row)
print("Before sell, Number of shares: {:}".format(df.iloc[i]["Quantity"]))
remaining_shares = 0
if abs(number_shares) < abs(df.iloc[i]["Quantity"]):
# have less than in batch - so decrement
#row.columns.set_loc("Quantity") += - abs(number_shares)
#row["Quantity"] += - abs(number_shares)
df.iloc[i, df.columns.get_loc("Quantity")] += - abs(number_shares)
</code></pre>
<p>and my original calling function</p>
<pre><code>def compute_gain_before_tax_year(df, company_name, date_start, date_end):
df_company = df.copy(deep=True)
df_company = df_company[df_company["Market"]==company_name]
df_company = df_company[( df_company["Date"] < pd.to_datetime(date_start, dayfirst=True))]
</code></pre>
<p>when I eventually update that dataframe</p>
<pre><code>df.iloc[i, df.columns.get_loc("Quantity")] = 0
</code></pre>
<p>How do I ensure this is all done by reference? In the function <code>remove_shares</code> I can see the changes reflected in the table at a local level when I print, but they do not propagate upwards.</p>
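One robust option is to stop relying on view semantics entirely: treat the filtered frame as an explicit copy, modify it, and write the changes back to the original frame aligned on the index (which filtering preserves). A minimal sketch of the pattern:

```python
import pandas as pd

df = pd.DataFrame({'Quantity': [10, 20, 30]})

# A boolean filter produces a new object; edits to it do not touch df
filtered = df[df['Quantity'] > 15].copy()
filtered.loc[:, 'Quantity'] = 0

# Propagate the changes back explicitly, aligned on the original index
df.loc[filtered.index, 'Quantity'] = filtered['Quantity']
```

Having each helper return the modified frame (as <code>remove_shares</code> already does) and reassigning it in the caller is the same idea: make the data flow explicit rather than depending on pandas views.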
|
<python><pandas>
|
2023-03-25 10:44:06
| 0
| 6,026
|
disruptive
|
75,841,120
| 5,868,293
|
For every row keep only the non-nas and then concatenate while ignoring index, pandas
|
<p>I have the following pandas dataframe:</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame({'col1': [1, np.nan, np.nan],
'results': ['Sub', 'Sub', 'Sub'],
'group': ['a', 'a', 'a'],
'seed': [6, 6, 6],
'col2': [np.nan, 2, np.nan],
'col3': [np.nan, np.nan, 3]})
df
col1 results group seed col2 col3
0 1.0 Sub a 6 NaN NaN
1 NaN Sub a 6 2.0 NaN
2 NaN Sub a 6 NaN 3.0
</code></pre>
<p>I would like, for every row, to keep only the columns that don't have NaNs, and then concatenate back, ignoring the index.</p>
<p>The end result looks like this</p>
<pre><code>pd.DataFrame({'col1':[1],
'results':['Sub',],
'group':['a'],
'seed':[6],
'col2':[2],
'col3':[3]}, index=[0])
col1 results group seed col2 col3
0 1 Sub a 6 2 3
</code></pre>
<p>How could I do that ?</p>
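Because the constant columns identify the group and <code>GroupBy.first()</code> returns the first non-NaN value per column, a single call collapses the frame (note the column order in the result may differ from the original):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': [1, np.nan, np.nan],
                   'results': ['Sub', 'Sub', 'Sub'],
                   'group': ['a', 'a', 'a'],
                   'seed': [6, 6, 6],
                   'col2': [np.nan, 2, np.nan],
                   'col3': [np.nan, np.nan, 3]})

# first() skips NaN, so each group's row is assembled from the non-NaN values
result = df.groupby(['results', 'group', 'seed'], as_index=False).first()
```

This generalizes to multiple groups, not just the single one in the example.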
|
<python><pandas>
|
2023-03-25 10:39:27
| 1
| 4,512
|
quant
|
75,841,032
| 9,391,359
|
Can't connect to MySQL server on 'localhost:3306'
|
<p>I have a simple Dockerfile for a micro app:</p>
<pre><code>FROM python:3.8.16-slim
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends make build-essential libssl-dev \
zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev xz-utils tk-dev libxml2-dev \
libxmlsec1-dev libffi-dev liblzma-dev git ca-certificates
RUN python -m pip install --upgrade pip setuptools
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
RUN python3 -m spacy download xx_ent_wiki_sm
COPY app /app
WORKDIR /app
ENTRYPOINT ["bash", "run.sh"]
</code></pre>
<p>run.sh just calls a Python script that connects to MySQL on localhost:</p>
<pre><code>db = mysql.connector.connect(
host=os.environ['host'],
user=os.environ['user'],
passwd=os.environ['passwd'],
database=os.environ['database'])
</code></pre>
<p>the <code>.env</code> file looks like</p>
<pre><code>host=localhost
user=test
passwd=test
database=semantic_textual_similarity
</code></pre>
<p>I build the image using <code>docker build -t semantic_textual_similarity:semantic_textual_similarity .</code></p>
<p>and tried to run it using</p>
<pre><code>docker run --env-file=.env -p 3306:3306 semantic_textual_similarity:semantic_textual_similarity
</code></pre>
<p>and got the following error:</p>
<pre><code>mysql.connector.errors.InterfaceError: 2003: Can't connect to MySQL server on 'localhost:3306'
</code></pre>
<p>What should I pass to <code>docker run</code> to make it work?</p>
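Inside the container, <code>localhost</code> is the container itself, not the machine running MySQL, which is why the connection is refused. Point the app at the host instead, for example via <code>host.docker.internal</code>. A config sketch (the <code>--add-host</code> flag needs Docker 20.10+ on Linux; <code>-p 3306:3306</code> is not needed, since the app connects out rather than listening):

```shell
# Set host=host.docker.internal (instead of localhost) in .env, then:
docker run --env-file=.env \
  --add-host=host.docker.internal:host-gateway \
  semantic_textual_similarity:semantic_textual_similarity
```

Alternatively, <code>--network=host</code> on Linux makes the container share the host's network namespace, so <code>localhost</code> works unchanged.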
|
<python><mysql><docker>
|
2023-03-25 10:25:13
| 1
| 941
|
Alex Nikitin
|
75,840,827
| 7,251,207
|
How to properly generate a documentation with Swagger for Flask
|
<p>I would like to ask how to generate proper documentation with Swagger for Flask. I have tried many libraries like flasgger, apispec, marshmallow, flask_apispec, etc., but I didn't find one that worked best for me.</p>
<p>So I have a function here:</p>
<pre class="lang-py prettyprint-override"><code>@app.route('/test', methods=['POST'])
def test_function():
data = request.get_json()
validated_data = TestSchema().load(data)
# Doing something here
return jsonify({'data': 'success'}), 200
</code></pre>
<p>I already have a schema object with marshmallow as follows:</p>
<pre class="lang-py prettyprint-override"><code>class TestSchema(Schema):
id = fields.Int(required=True)
name = fields.Str(required=True)
</code></pre>
<p>I want to automatically generate the documentation so that it is easier to know how to call my test API. How can I do this with swagger UI? I'm not really familiar with web programming and APISpec things, so it is inconvenient for me to generate it manually by writing a yaml file. I would like to use the marshmallow schemas if possible</p>
|
<python><flask><swagger><openapi>
|
2023-03-25 09:43:55
| 1
| 305
|
juliussin
|
75,840,780
| 288,201
|
Convert singular values into lists when parsing Pydantic fields
|
<p>I have an application that needs to parse some configuration. These structures often contain fields that can be either a string, or an array of strings, e.g. in YAML:</p>
<pre class="lang-yaml prettyprint-override"><code>fruit: apple
vegetable:
- tomato
- cucumber
</code></pre>
<p>However, internally I'd like to have <code>fruit=['apple']</code> and <code>vegetable=['tomato', 'cucumber']</code> for uniformity.</p>
<p>I'd like to use Pydantic to do the parsing. How do I declare these fields with as little repetition as possible?</p>
<p>Ideal solution:</p>
<ul>
<li>Retains the type of <code>fruit</code> and <code>vegetable</code> as <code>list[str]</code> for both typechecking (mypy) and at runtime.</li>
<li>Has at most one line of code per field, even when used in multiple classes.</li>
<li>Generalizes to arbitrary types, not just <code>str</code>.</li>
</ul>
<p>I have considered:</p>
<ul>
<li><code>fruit: Union[str, list[str]]</code> - I will have to check the type of <code>fruit</code> everywhere it's used, which defeats the purpose of parsing in the first place.</li>
<li><code>@validator("fruit", pre=True)</code> that converts a non-list to a list - this will have to be repeated for every field in every class, bloating up the definition by 3 extra lines per field.</li>
</ul>
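The coercion itself is one line; the trick is attaching it without repeating it per field. A sketch of the helper as plain Python (so it can be unit-tested on its own), with the Pydantic v1 attachment shown in a comment — the model and field names there are illustrative:

```python
from typing import Any, List


def one_or_many(value: Any) -> List[Any]:
    """Coerce a scalar into a one-element list; pass lists through unchanged."""
    return value if isinstance(value, list) else [value]


# Sketch of attaching it in a Pydantic v1 model (one line per model, not per field):
#
#   class Settings(BaseModel):
#       fruit: list[str]
#       vegetable: list[str]
#       _listify = validator('fruit', 'vegetable', pre=True,
#                            allow_reuse=True)(one_or_many)
```

The fields keep the type <code>list[str]</code> for both mypy and runtime, and the pre-validator runs before type coercion, so the YAML scalar form parses cleanly.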
|
<python><pydantic>
|
2023-03-25 09:31:38
| 1
| 8,287
|
Koterpillar
|
75,840,710
| 14,729,820
|
How to rename images with specific pattern?
|
<p>I have the image folder shown below, whose file names repeat the character <code>A</code>. I want to keep only one letter <code>A</code>, followed by the rest (<code>numbers + ext</code>).
The input:</p>
<pre><code>input_folder --|
|--- imgs -- |-- A_0.jpg
|-- A_A_A_1.jpg
|-- A_A_2.jpg
|-- A_A_A_A_3.jpg
|-- A_4.jpg
.........
</code></pre>
<p>I want to rename each image, keeping only <code>A+numbers+.jpg</code>.
For example, if the image name is <code>A_0.jpg</code>, keep it; if the image name is <code>A_A_A_1.jpg</code>, rename it to <code>A_1.jpg</code>, and so on.</p>
<p>I am trying to loop through the image folder using <code>python</code> and a <code>regular expression</code>, following this article: <a href="https://medium.com/@nkrh/how-to-bulk-rename-with-regular-expression-and-python-2267580bb3a1" rel="nofollow noreferrer">Rename with Regular Expression</a></p>
<p>The expected output in the same folder:</p>
<pre><code>input_folder --|
|--- imgs -- |-- A_0.jpg
|-- A_1.jpg
|-- A_2.jpg
|-- A_3.jpg
|-- A_4.jpg
.........
</code></pre>
<p>This issue was caused by running the following script for <code>500 000 images</code>: <a href="https://stackoverflow.com/questions/75818186/how-to-rename-images-name-using-data-frame">related</a></p>
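Collapsing the leading repeats can be done with one substitution per file name. A sketch (it assumes the repeated letter is literally <code>A</code>, as in the example; the commented pattern shows one way to generalize):

```python
import re


def dedupe(name):
    """Collapse a run of leading 'A_' repeats down to a single 'A_'."""
    return re.sub(r'^(?:A_)+', 'A_', name)
    # For an arbitrary repeated leading letter, a backreference works instead:
    # return re.sub(r'^([A-Za-z]_)\1+', r'\1', name)


print(dedupe('A_A_A_1.jpg'))  # A_1.jpg
print(dedupe('A_0.jpg'))      # A_0.jpg (already fine, unchanged)
```

Applied over the folder it becomes <code>os.rename(os.path.join(d, f), os.path.join(d, dedupe(f)))</code> for each <code>f</code> in <code>os.listdir(d)</code>.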
|
<python><regex><operating-system><glob>
|
2023-03-25 09:17:03
| 1
| 366
|
Mohammed
|
75,840,467
| 12,189,799
|
flask send base 64 encoded pdf string without saving
|
<p>I have a flask application. I am using selenium cdp commands to generate pdfs. I get a base64 encoded string from selenium. Before sending the response as a pdf I save the file in my filesystem.</p>
<pre><code>file.write(base64.b64decode(pdf_string['data'])
</code></pre>
<p>And the response is just <code>send_file('path', 'name.pdf')</code></p>
<p>How do I serve the PDF without saving it to a file? I was trying to look for a BytesIO solution, but I don't fully understand it yet.
I also tried setting the mimetype / header to 'application/pdf' and 'application/pdf;base64'.</p>
<p>type of decoded string is 'bytes'
and type of pdf_string['data'] is 'str'</p>
<p>Is there a way I can accomplish this without saving files?</p>
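You can decode the base64 string into memory and hand Flask a <code>BytesIO</code>: <code>send_file</code> accepts any file-like object, so nothing touches the filesystem. The decode step is sketched below; the Flask call is shown in a comment (<code>download_name</code> is the Flask ≥ 2.0 parameter name; older versions call it <code>attachment_filename</code>):

```python
import base64
import io


def pdf_stream(b64_data):
    """Decode the base64 PDF string from Selenium into an in-memory file object."""
    return io.BytesIO(base64.b64decode(b64_data))


# In the Flask view (sketch):
#   return send_file(pdf_stream(pdf_string['data']),
#                    mimetype='application/pdf',
#                    as_attachment=True,
#                    download_name='name.pdf')
```

No base64 mimetype tricks are needed: by the time <code>send_file</code> sees the data it is raw PDF bytes, so plain <code>application/pdf</code> is correct.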
|
<python><flask><selenium-webdriver><pdf><http-headers>
|
2023-03-25 08:25:18
| 1
| 371
|
Alan Dsilva
|
75,840,283
| 4,442,753
|
Select line in current python script and run them in python interpreter?
|
<p>I am new to SpaceVim and would like to use it to develop in python.
I am actually new to non-GUI IDEs, which is why I am starting with SpaceVim instead of vanilla NeoVim.</p>
<p>This said, I would like to run part of a script (only part of it) into a terminal to check if the code is working as intended.
Should I open an external terminal, start python, and copy/past my code to it?</p>
<p>SpaceVim states how to run the full python script, <code>SPC l r</code>
but not how to select lines to only run those.</p>
<p>Please, how could I do that?</p>
<p>Thanks a lot for your help!</p>
|
<python><spacevim>
|
2023-03-25 07:44:06
| 0
| 1,003
|
pierre_j
|
75,840,240
| 9,403,794
|
Why does this code take more than 5 seconds on the first execution, but less than 0.2s when repeated?
|
<p>I begin with a question:</p>
<p>Why does the first execution take longer than usual? Should I be worried about malware or something similar?</p>
<p>I have code snippets below. I have pickled numpy ndarrays in files. I try to read them all and build one combined ndarray.</p>
<p>I noticed some strange behavior.</p>
<p>Execution takes more than 5s, but only the first time I run this program after booting the Linux machine and opening a terminal.
Subsequent executions take less than 0.2s.</p>
<p>Why does the first execution take longer than usual?
At first I thought it was extra time for <code>__pycache__</code>.</p>
<p>I am running this code under venv and venv is activated via main.sh</p>
<p>read_pickle.py:</p>
<pre><code>#!/home/****/bin/python3
# coding=utf-8
from os import listdir
from os.path import isfile, join
import multiprocessing
import numpy as np
import pandas as pd
import time as t
from utilsC0 import read_pickle_from_file
from enums import *
def process(file):
directory = "some localization" # external hdd
path = directory + f"/{file}"
arr = read_pickle_from_file(path)
return arr
def f2():
directory = "some localization" # external hdd
onlyfiles = [f for f in listdir(directory) if isfile(join(directory, f))]
with multiprocessing.Pool() as p:
result = p.map(process, onlyfiles)
result = np.vstack(result)
print("End f2")
if __name__ == "__main__":
t1 = t.time()
f2()
t2 = t.time()
print(f"{t2 - t1}")
</code></pre>
<p>And simple main.sh:</p>
<pre><code>#!/usr/bin/env bash
source /home/***/bin/activate
python3 src/read_pickle.py
</code></pre>
|
<python><python-3.x>
|
2023-03-25 07:32:44
| 1
| 309
|
luki
|
75,839,980
| 6,225,526
|
How to split the groups and apply a function in Pandas based on a condition
|
<p>Let's say I have a dataframe defined as below:</p>
<pre><code>df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', 'foo', 'bar','foo', 'bar'],
'B' : [1, 2, 3, 4, 5, 6, 7, 8],
'C' : ['mean', 'halfmean', 'mean', 'halfmean', 'mean', 'halfmean', 'mean', 'halfmean']})
</code></pre>
<p>I would like to aggregate <code>B</code> for each group of <code>A</code> based on a condition in <code>C</code>. What is an effective way to do it?</p>
<p>Note: <code>C</code> is the same within a group. That is, I need the mean of all numbers for <code>foo</code>, and the mean of the largest half of the numbers for <code>bar</code>.</p>
<pre><code>grouped = df.groupby('A')
????
</code></pre>
<p>Expected result</p>
<pre><code>foo 4 #(1 + 3 + 5 + 7)/4
bar 7 #(6 + 8)/2
</code></pre>
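Since <code>C</code> is constant within each group, a single <code>apply</code> can branch on it per group. A sketch reproducing the expected numbers:

```python
import pandas as pd

df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'bar'],
                   'B': [1, 2, 3, 4, 5, 6, 7, 8],
                   'C': ['mean', 'halfmean'] * 4})


def conditional_agg(group):
    # C is constant per group, so inspecting the first value is enough
    if group['C'].iloc[0] == 'mean':
        return group['B'].mean()
    # 'halfmean': mean of the largest half of the values
    return group['B'].nlargest(len(group) // 2).mean()


result = df.groupby('A')[['B', 'C']].apply(conditional_agg)
```

This yields 4.0 for <code>foo</code> (mean of 1, 3, 5, 7) and 7.0 for <code>bar</code> (mean of the two largest values, 6 and 8).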
|
<python><pandas>
|
2023-03-25 06:23:18
| 4
| 1,161
|
Selva
|
75,839,927
| 16,728,255
|
How does `mypy` know the signature of the pydantic model?
|
<p>How does <code>mypy</code> know the signature of the <code>pydantic</code> model in this manner?</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel
class Model(BaseModel):
a: int
Model(a='asd') # error: Argument "a" to "Model" has incompatible type "str"; expected "int"
</code></pre>
<p>How can <code>pydantic</code> <code>BaseModel</code>'s metaclass change the <code>__init__</code> signature?</p>
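mypy never executes the metaclass, so by itself it cannot see the synthesized <code>__init__</code>. The checking in the example comes from pydantic's mypy plugin, which generates the signature from the annotated fields at type-check time; it is enabled in the mypy configuration, for example:

```ini
# mypy.ini (or plugins = ["pydantic.mypy"] under [tool.mypy] in pyproject.toml)
[mypy]
plugins = pydantic.mypy
```

In newer pydantic (v2), the same effect also comes from PEP 681's <code>@dataclass_transform</code> decorator on the model metaclass, which recent mypy versions understand natively.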
|
<python><python-typing><mypy><pydantic>
|
2023-03-25 06:08:35
| 1
| 372
|
Karen Petrosyan
|
75,839,831
| 10,313,194
|
Django cannot add null data from url to database
|
<p>I created a model like this; it should accept null values without error:</p>
<pre><code>class ReceiveData(models.Model):
api_key=models.ForeignKey(DeviceKey,on_delete=models.CASCADE)
field1= models.FloatField(null=True)
field2= models.FloatField(null=True)
field3= models.FloatField(null=True)
</code></pre>
<p>I use is_float to check the type of the data before adding it to the database:</p>
<pre><code>def is_float(element: any) -> bool:
#If you expect None to be passed:
if element is None :
return False
try:
float(element)
return True
except ValueError:
return False
def device(request):
key = request.GET.get('key')
f1 = request.GET.get('f1')
f2 = request.GET.get('f2')
f3 = request.GET.get('f3')
if DeviceKey.objects.filter(api_key=key).exists():
if(is_float(f1) or
is_float(f2) or
is_float(f3)):
recv_data = ReceiveData(
api_key = key,
field1 = float(f1),
field2 = float(f2),
field3 = float(f3)
)
recv_data.save()
</code></pre>
<p>I send data by URL like this, without f3:</p>
<pre><code>http://127.0.0.1:8000/device/?key=002&f1=25&f2=26
</code></pre>
<p>It shows an error like this:</p>
<pre><code> field3 = float(f3),
TypeError: float() argument must be a string or a number, not 'NoneType'
</code></pre>
<p>I don't send f3 in the URL, so I think it should be null, but it shows a TypeError. How can I fix it?</p>
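One way is a helper that maps a missing or invalid parameter to <code>None</code>, which the nullable model fields accept, instead of calling <code>float()</code> directly on a value that may be <code>None</code>. A sketch (the view would then use <code>field3=to_float(f3)</code>, and so on for each field):

```python
def to_float(value):
    """Return float(value), or None when the parameter is absent or invalid."""
    if value is None:
        return None
    try:
        return float(value)
    except (TypeError, ValueError):
        return None


print(to_float('26'))   # 26.0
print(to_float(None))   # None
print(to_float('abc'))  # None
```

Since <code>field3</code> is declared with <code>null=True</code>, saving <code>None</code> stores NULL in the database without an error.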
|
<python><django><django-views>
|
2023-03-25 05:41:55
| 1
| 639
|
user58519
|
75,839,825
| 8,391,698
|
How to prevent transformer generate function to produce certain words?
|
<p>I have the following <a href="https://huggingface.co/docs/transformers/model_doc/t5#inference" rel="nofollow noreferrer">code</a>:</p>
<pre><code>from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
sequence_ids = model.generate(input_ids)
sequences = tokenizer.batch_decode(sequence_ids)
sequences
</code></pre>
<p>Currently it produces this:</p>
<pre><code>['<pad><extra_id_0> park offers<extra_id_1> the<extra_id_2> park.</s>']
</code></pre>
<p>Is there a way to prevent the generator to produce certain words (e.g. <code>stopwords = ["park", "offer"]</code>)?</p>
|
<python><nlp><huggingface-transformers><generative-pretrained-transformer>
|
2023-03-25 05:39:59
| 1
| 5,189
|
littleworth
|
75,839,562
| 1,056,563
|
"ModuleNotFoundError: No module named 'azure'" even though azure-cli-core was installed
|
<p>The various <code>azure-cli</code>* modules were installed via <code>pip3.10</code>:</p>
<pre><code>python3.10 -m pip list
2023-03-24T23:57:52.8637926Z azure-cli 2.46.0
2023-03-24T23:57:52.8638330Z azure-cli-core 2.46.0
2023-03-24T23:57:52.8638523Z azure-cli-telemetry 1.0.8
2023-03-24T23:57:52.8638722Z azure-common 1.1.28
2023-03-24T23:57:52.8638929Z azure-core 1.26.3
</code></pre>
<p>But when running <code>pytest</code> using the same <code>python3.10</code> <code>azure.cli.core</code> can not be imported?</p>
<pre><code> + python3.10 -m pytest -s -rA
</code></pre>
<pre><code>2023-03-25T00:01:08.3698899Z =============================
________________________ ERROR collecting test session _________________________
2023-03-25T00:01:08.3796409Z /usr/lib/python3.10/importlib/__init__.py:126: in import_module
from azure.cli.core import AzCommandsLoader
E ModuleNotFoundError: No module named 'azure'
2023-03-25T00:01:08.3800743Z =========================== short test summary info ============================
2023-03-25T00:01:08.3800996Z ERROR - ModuleNotFoundError: No module named 'azure'
</code></pre>
|
<python><pip><azure-cli>
|
2023-03-25 03:51:30
| 0
| 63,891
|
WestCoastProjects
|
75,839,431
| 17,101,330
|
How to scale two timeseries to match cumulative pct change but keep initial values of one of them
|
<p>I have two timeseries like this:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'A': [1, 3, 2, 5, 2, 8, 3],
'B': [10, 33, 22, 39, 30, 66, 34],
})
df.A.plot(color="blue")
df.B.plot(color="orange")
</code></pre>
<p><a href="https://i.sstatic.net/VzCOY.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VzCOY.jpg" alt="enter image description here" /></a></p>
<p>I want to plot them on the same graph to compare them, and therefore I need to scale them.
I want to keep the values of A and scale B to approximately the same range as A.</p>
<p>So it should look like this (except that I need the original values of A to display them on the y-axis):</p>
<pre><code>df.A.pct_change().fillna(0).cumsum().plot(color="blue")
df.B.pct_change().fillna(0).cumsum().plot(color="orange")
</code></pre>
<p><a href="https://i.sstatic.net/T7k4M.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/T7k4M.jpg" alt="enter image description here" /></a></p>
<p>I tried e.g. minmax scaling, but that doesn't work (at least how I am doing it):</p>
<pre><code>import matplotlib.pyplot as plt
from sklearn.preprocessing import minmax_scale
fig, ax = plt.subplots()
df.A.plot(color="blue", ax=ax)
pd.DataFrame(minmax_scale(X=df.B.values, feature_range=(df.A.min(), df.A.max()))).plot(color="orange", ax=ax)
</code></pre>
<p><a href="https://i.sstatic.net/269ij.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/269ij.jpg" alt="enter image description here" /></a></p>
<p>I also tried e.g. this (but my math doesn't work):</p>
<pre><code>df.A.plot(color="blue")
d = df.A.iloc[-1] / df.B.iloc[-1]
(df.B * d).plot(color="orange")
</code></pre>
<p><a href="https://i.sstatic.net/QntEW.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QntEW.jpg" alt="enter image description here" /></a></p>
<p>ChatGPT came up with this approach, which goes in the right direction but it's not working either:</p>
<pre><code>pca = df.A.pct_change().fillna(0)
pcb = df.B.pct_change().fillna(0)
cpca = (1 + pca).cumprod() - 1
cpcb = (1 + pcb).cumprod() - 1
ratio = cpca.iloc[-1] / cpcb.iloc[-1]
scaled = pd.Series(df.B * ratio)
scaled = scaled.add(df.A.iloc[0]).values
</code></pre>
<p>How can I achieve this?
I need the actual calculation of how to scale the values of B.
(In fact I am not even using matplotlib, so please don't link answers referring to matplotlib.)</p>
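<p>A sketch of one way to read the requirement: multiplying B by a constant leaves every one of its percentage changes untouched, so rebasing B to start at A's first value makes the two curves directly comparable while A keeps its original y-axis values. This assumes "match cumulative pct change" means B should keep its own relative moves, just on A's scale:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "A": [1, 3, 2, 5, 2, 8, 3],
    "B": [10, 33, 22, 39, 30, 66, 34],
})

# Rebase B to start at A's first value. A constant multiplier preserves
# every pct_change of B exactly, so only the level changes, not the shape.
scaled_B = df["B"] * (df["A"].iloc[0] / df["B"].iloc[0])
```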
|
<python><pandas><numpy><time-series><scale>
|
2023-03-25 02:55:51
| 1
| 530
|
jamesB
|
75,839,379
| 7,267,480
|
SCIPY minimization using L-BFGS_B, error "TOTAL NO. of f AND g EVALUATIONS EXCEEDS LIMIT"
|
<p>I am trying to optimize a function with scipy minimize using the <a href="https://docs.scipy.org/doc/scipy/reference/optimize.minimize-lbfgsb.html" rel="nofollow noreferrer">L-BFGS-B method</a>.</p>
<p>I have a simple code snippet to catch some errors and messages from minimize function:</p>
<pre><code>result = minimize(fun = res_fit_obj_f,
x0 = input_params_initial_guess,
args = (all_fitted_windows_df,
...
debug),
method = method,
bounds = input_params_bounds,
options = options
)
# Check if the optimization was successful
if result.success:
success = True
xopt = result.x
fopt = result.fun
benefit = result.fun - ig_obj_f_val
rel_benefit = np.round(benefit / ig_obj_f_val * 100 , 2)
else:
success = False
# Print an error message if the optimization was not successful
print('Warning!')
print('SCIPY Minimization Error (used method =',method,') failed. Error message:', result.message)
</code></pre>
<p>Now I get the following message in some cases:</p>
<p><strong>STOP: TOTAL NO. of f AND g EVALUATIONS EXCEEDS LIMIT.</strong></p>
<p>When I have tuned previously in options for SCIPY L-BFGS-B:</p>
<pre><code>hps['options'] = {
'maxiter': 1000000,
'disp' : hps['debug'], # to display convergence messages
'maxls': 200,
'ftol': 1e-10,
'gtol': 1e-10,
}
</code></pre>
<p>Given the calculation time, I don't believe the solver actually performed 1,000,000 iterations in this case, so it's strange that it produced that message.</p>
<ol>
<li><p>How can I check the history of iterations in this case?</p>
</li>
<li><p>How can I increase the allowed number of function evaluations to give the optimizer more time/effort to get the most accurate solution?</p>
</li>
</ol>
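<p>A hedged note on both points: the STOP message comes from L-BFGS-B's separate <code>maxfun</code> limit (total function evaluations, default 15000), which the options above never raise, and iteration history can be recorded via the <code>callback</code> hook. A minimal sketch on a toy quadratic (the objective here is an assumption, not the original one):</p>

```python
import numpy as np
from scipy.optimize import minimize

history = []  # one entry per iteration, recorded by the callback hook

def objective(x):
    # Toy stand-in for the real objective: minimum 0 at x = 3.
    return float(np.sum((x - 3.0) ** 2))

result = minimize(
    objective,
    x0=np.zeros(4),
    method="L-BFGS-B",
    callback=lambda xk: history.append(xk.copy()),
    options={
        "maxiter": 1_000_000,
        "maxfun": 1_000_000,  # the evaluation budget behind the STOP message
        "maxls": 200,
        "ftol": 1e-10,
        "gtol": 1e-10,
    },
)
```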
|
<python><scipy><scipy-optimize-minimize>
|
2023-03-25 02:35:13
| 1
| 496
|
twistfire
|
75,839,281
| 6,727,914
|
What is the difference between grid[index] VS grid[index, :] in python
|
<p>In this <a href="https://colab.research.google.com/drive/1gS2aJo711XJodqqPIVIbzgX1ktZzS8d8?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1gS2aJo711XJodqqPIVIbzgX1ktZzS8d8?usp=sharing</a> , they used <code>np.max(qtable[new_state, :])</code></p>
<p>But I ran an experiment and I don't understand the need for <code>:</code>. My experiment shows the same values and the same array shape.</p>
<pre><code>import numpy as np
N = 10
grid = np.array([[np.array(k) for i in range(N)] for k in range(N)])
print(grid)
index = 5
d = grid[index]
e = grid[index, :]
print(d)
print(e)
</code></pre>
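<p>For what it's worth, a short sketch confirming the observation: on a 2-D ndarray the two spellings select the same row (omitted trailing axes are taken in full), so the <code>:</code> is mostly explicitness. The difference becomes real when the object is a plain Python list, which does not accept a tuple index:</p>

```python
import numpy as np

grid = np.arange(12).reshape(3, 4)

row_a = grid[1]      # implicit: remaining axes taken in full
row_b = grid[1, :]   # explicit trailing slice -- same result for an ndarray

# A plain Python list rejects the tuple index form:
try:
    [[0, 1], [2, 3]][1, :]
    tuple_index_failed = False
except TypeError:
    tuple_index_failed = True
```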
|
<python><arrays><list><numpy><q-learning>
|
2023-03-25 01:56:46
| 1
| 21,427
|
TSR
|
75,839,133
| 8,475,638
|
MPI: block other ranks from executing until rank 0's task is finished
|
<p>I am new to MPI. Here is the code structure I am implementing right now:</p>
<pre><code>import time
def A():
    # some simple code here to execute; let's say:
print('Hello World from A')
time.sleep(5)
def B():
    # some simple code here to execute; let's say:
print('Hello World from B')
time.sleep(5)
def C():
print('Hello World from C')
print('>>>>>>>>>>>>> The place where I will start to make parallelization')
scores = []
    for i in range(1, 1000):
        score = score_calculator(i)  # ignore this function
scores.append(score)
print('<<<<<<<<<<<<< The place where I will stop to make parallelization')
</code></pre>
<p>for example this is in <code>hello_world.py</code></p>
<p>I will call <code>mpiexec -n 4 hello_world.py</code></p>
<p>So my point is: there will be 4 ranks here. The functions A and B are sequential, so the other ranks are not needed there. My parallelization will happen in the C function.</p>
<p>What I want to do</p>
<p>A and B will run sequentially, so I will guard each of them with <code>if rank == 0</code>.</p>
<p>After <code>print('Hello World from C')</code>, all other ranks will be forced to wait until rank 0 is finished.</p>
<p>When rank 0 is finished, I will configure the next parts for parallelization.</p>
<p>My Question is:</p>
<p>How can I make the other ranks wait until rank 0 has finished?</p>
<p>An implementation in either Python or C is welcome.</p>
<p>Thanks in advance!</p>
|
<python><c++><c><parallel-processing><mpi>
|
2023-03-25 01:05:23
| 0
| 2,336
|
Ankur Lahiry
|
75,839,069
| 630,544
|
How to DRY up this psycopg connection pool boilerplate code with a reusable async function or generator?
|
<p>I'm using <code>psycopg</code> to connect to a PostgreSQL database using a connection pool. It works great, but any function that needs to run SQL in a transaction gets three extra layers of nesting:</p>
<p><code>/app/db.py</code></p>
<pre class="lang-py prettyprint-override"><code>from os import getenv
from psycopg_pool import AsyncConnectionPool
pool = AsyncConnectionPool(getenv('POSTGRES_URL'))
</code></pre>
<p><code>/app/foo.py</code></p>
<pre class="lang-py prettyprint-override"><code>from db import pool
from psycopg.rows import dict_row
async def create_foo(**kwargs):
foo = {}
async with pool.connection() as conn:
async with conn.transaction():
async with conn.cursor(row_factory=dict_row) as cur:
# use cursor to execute SQL queries
return foo
async def update_foo(foo_id, **kwargs):
foo = {}
async with pool.connection() as conn:
async with conn.transaction():
async with conn.cursor(row_factory=dict_row) as cur:
# use cursor to execute SQL queries
return foo
</code></pre>
<p>I wanted to abstract that away into a helper function, so I tried refactoring it:</p>
<p><code>/app/db.py</code></p>
<pre class="lang-py prettyprint-override"><code>from contextlib import asynccontextmanager
from os import getenv
from psycopg_pool import AsyncConnectionPool
pool = AsyncConnectionPool(getenv('POSTGRES_URL'))
@asynccontextmanager
async def get_tx_cursor(**kwargs):
async with pool.connection() as conn:
conn.transaction()
cur = conn.cursor(**kwargs)
yield cur
</code></pre>
<p>...and calling it like this:</p>
<p><code>/app/foo.py</code></p>
<pre class="lang-py prettyprint-override"><code>from db import get_tx_cursor
from psycopg.rows import dict_row
async def create_foo(**kwargs):
foo = {}
async with get_tx_cursor(row_factory=dict_row) as cur:
# use cursor to execute SQL queries
return foo
</code></pre>
<p>...but that resulted in an error:</p>
<pre><code>TypeError: '_AsyncGeneratorContextManager' object does not support the context manager protocol
</code></pre>
<p>I also tried variations of the above, like this:</p>
<pre class="lang-py prettyprint-override"><code>async def get_tx_cursor(**kwargs):
async with pool.connection() as conn:
async with conn.transaction():
async with conn.cursor(**kwargs) as cur:
yield cur
</code></pre>
<p>...but got similar results, so it appears using a generator is not possible.</p>
<p>Does anyone know of a clean and simple way to expose the cursor to a calling function, without using another library?</p>
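<p>A generator <em>is</em> possible; the likely culprit is that the fully nested variant lost its <code>@asynccontextmanager</code> decorator (and the flat variant dropped the <code>async with conn.transaction()</code> line entirely). A sketch with stdlib stand-ins in place of the psycopg objects, so the shape of the helper — one decorated generator holding all three <code>async with</code> levels — is what's demonstrated:</p>

```python
import asyncio
from contextlib import asynccontextmanager

# Stand-ins for pool.connection() / conn.transaction() / conn.cursor();
# the real psycopg objects would take their place.
@asynccontextmanager
async def fake_resource(name, log):
    log.append(f"enter {name}")
    try:
        yield name
    finally:
        log.append(f"exit {name}")

@asynccontextmanager
async def get_tx_cursor(log):
    # Keep all three levels inside ONE decorated async generator.
    async with fake_resource("conn", log):
        async with fake_resource("tx", log):
            async with fake_resource("cursor", log) as cur:
                yield cur

async def main():
    log = []
    async with get_tx_cursor(log) as cur:
        log.append(f"use {cur}")
    return log

log = asyncio.run(main())
```

Exit happens in reverse order of entry, so the transaction closes before the connection is returned to the pool, as with the original nested blocks.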
<h3></h3>
<p>Here are the versions I'm using:</p>
<ul>
<li><strong>python:</strong> 3.11</li>
<li><strong>psycopg:</strong> 3.1.8</li>
<li><strong>psycopg-pool:</strong> 3.1.6</li>
</ul>
|
<python><python-asyncio><generator><contextmanager><psycopg3>
|
2023-03-25 00:47:29
| 1
| 4,007
|
Shaun Scovil
|
75,839,037
| 6,296,626
|
How to capture/sniff network traffic of certain process (executable) in Python?
|
<p>I am trying to listen and capture network traffic (packets) of certain process (.exe executable on Windows) using Python.</p>
<p>So far using <code>scapy</code> I am able to sniff the network...</p>
<p>Here, for example, I am sniffing all UDP packets on ports 10 to 20 and printing the remote IP address and port.</p>
<pre class="lang-py prettyprint-override"><code>from scapy.all import *
def capture_traffic(packet):
if packet.haslayer(IP) and packet.haslayer(UDP):
print(f"Remote Address: {packet[IP].src} | Remote Port: {packet[UDP].sport}")
sniff(filter="udp and portrange 10-20", prn=capture_traffic)
</code></pre>
<p>I also know that using <code>psutil</code> I can get process PID like so:</p>
<pre class="lang-py prettyprint-override"><code>for process in psutil.process_iter():
if "some_app.exe" in process.name():
process_pid = process.pid
break
</code></pre>
<p><strong>How can I sniff/capture traffic/packet from only the one specific process?</strong></p>
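<p>One common approach, sketched with assumptions: resolve the ports the PID currently owns (e.g. <code>{c.laddr.port for c in psutil.Process(pid).connections()}</code> — mentioned only in the comment, not executed here) and then keep only sniffed packets that touch those ports. Only the port-matching half is shown:</p>

```python
# Filtering half of the sketch: 'process_ports' would come from psutil,
# e.g. {c.laddr.port for c in psutil.Process(pid).connections()}.
def make_port_filter(process_ports):
    def belongs_to_process(sport, dport):
        return sport in process_ports or dport in process_ports
    return belongs_to_process

flt = make_port_filter({5000, 5001})
```

Inside the scapy callback this would be called with <code>packet[UDP].sport</code> and <code>packet[UDP].dport</code>; the port set should be refreshed periodically, since a process's connections change over time.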
|
<python><network-programming><scapy><packet><packet-sniffers>
|
2023-03-25 00:36:45
| 1
| 1,479
|
Programer Beginner
|
75,839,017
| 1,806,124
|
How to run Python module with PM2 ecosystem file?
|
<p>I can run a Python script with PM2 with the following ecosystem file:</p>
<pre class="lang-json prettyprint-override"><code>{
"apps": [{
"name": "my_app",
"script": "script.py",
"instances": "1",
"wait_ready": true,
"autorestart": false,
"interpreter" : "/home/my_user/.cache/pypoetry/virtualenvs/my_app-ij6Dv2sY-py3.10/bin/python",
}]
}
</code></pre>
<p>That works fine, but I have another program that I can only start this way:</p>
<pre class="lang-bash prettyprint-override"><code>python -m my_app
</code></pre>
<p>But for this I cannot simply change the ecosystem file to this:</p>
<pre class="lang-json prettyprint-override"><code>{
"apps": [{
"name": "my_app",
"script": "-m my_app",
"instances": "1",
"wait_ready": true,
"autorestart": false,
"interpreter" : "/home/my_user/.cache/pypoetry/virtualenvs/my_app-ij6Dv2sY-py3.10/bin/python",
}]
}
</code></pre>
<p>PM2 will run it without errors, but my app will not be running. How can I use <code>-m my_app</code> with a PM2 ecosystem file?</p>
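<p>For reference, a sketch of an ecosystem file that sidesteps the problem by pointing <code>script</code> at the interpreter binary itself and passing <code>-m my_app</code> as arguments (the <code>cwd</code> value is an assumption, and this is untested against this exact PM2 version):</p>

```json
{
  "apps": [{
    "name": "my_app",
    "cwd": "/home/my_user/my_app",
    "script": "/home/my_user/.cache/pypoetry/virtualenvs/my_app-ij6Dv2sY-py3.10/bin/python",
    "args": ["-m", "my_app"],
    "interpreter": "none",
    "instances": "1",
    "wait_ready": true,
    "autorestart": false
  }]
}
```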
|
<python><python-3.x><pm2>
|
2023-03-25 00:30:57
| 1
| 661
|
Endogen
|
75,838,706
| 10,791,262
|
CentOS Apache(httpd) deploying of flask occurs 503 error
|
<p>I am getting a 503 error while deploying my Flask app on a GoDaddy CentOS VM; the Flask app runs on port 5002.</p>
<p>The httpd config in /etc/httpd/conf.d/default-site.conf is the following:</p>
<pre><code><VirtualHost *:80>
servername domain.com
serveralias domain.com
serveralias domain.com
ProxyPreserveHost On
ProxyPass / http://127.0.0.1:5002/
ProxyPassReverse / http://127.0.0.1:5002/
</VirtualHost>
</code></pre>
<p>but I am getting the 503 error - "503 Service Unavailable"</p>
<p>Please help me. Thanks in advance</p>
<p>Firewall and port status:</p>
<pre><code>sudo firewall-cmd --list-all
services: dhcpv6-client http https ssh
ports: 100/tcp 2022/tcp 8080/tcp 5000/tcp 3036/tcp 443/tcp 80/tcp 5002/tcp
</code></pre>
<p><code>netstat -l</code>:</p>
<pre><code>tcp 0 0 0.0.0.0:down 0.0.0.0:* LISTEN
tcp 0 0 localhost:commplex-main 0.0.0.0:* LISTEN
tcp6 0 0 [::]:down [::]:* LISTEN
tcp6 0 0 [::]:mysql [::]:* LISTEN
tcp6 0 0 [::]:http [::]:* LISTEN
udp 0 0 localhost:323 0.0.0.0
</code></pre>
|
<python><flask><devops>
|
2023-03-24 22:58:43
| 0
| 311
|
Alex
|
75,838,661
| 14,509,475
|
Overload unary operator as binary operator
|
<p>In Python, is it possible to override a unary operator (such as <code>~</code>) so that it acts as a binary one?</p>
<p>The result would be that <code>a ~ b</code> would become a meaningful expression.</p>
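<p>For context: <code>a ~ b</code> is a <code>SyntaxError</code>, so no operator overload can ever make it meaningful — Python's grammar only allows <code>~</code> as a unary prefix. The closest workaround is the well-known "pipe-infix" trick, which overloads the genuinely binary <code>|</code> on both sides of a wrapper object so that <code>a |op| b</code> reads like a custom infix operator:</p>

```python
class Infix:
    """Wrap a two-argument function so it can be used as: a |op| b."""
    def __init__(self, fn):
        self.fn = fn
    def __ror__(self, left):          # handles the left half:  a | op
        return Infix(lambda right: self.fn(left, right))
    def __or__(self, right):          # handles the right half: (a|op) | b
        return self.fn(right)

# Example infix: approximate float equality.
approx = Infix(lambda a, b: abs(a - b) < 1e-9)

result = 0.1 + 0.2 |approx| 0.3
```

Note that <code>|</code> binds more loosely than arithmetic, so <code>0.1 + 0.2</code> is evaluated first; parenthesize operands when in doubt.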
|
<python><python-3.x><operator-overloading>
|
2023-03-24 22:50:40
| 0
| 496
|
trivicious
|
75,838,658
| 15,392,319
|
Certain Characters in Foreign Languages Cause Lambda Post Invocation Errors
|
<p>When I query data from certain countries, namely Belarus, Bangladesh and Kazakhstan, I get this error:</p>
<pre><code>[ERROR] [1679697735190] LAMBDA_RUNTIME Failed to post handler success response. Http response code: 413.
Traceback (most recent call last):
File "/var/runtime/bootstrap.py", line 480, in <module>
main()
File "/var/runtime/bootstrap.py", line 468, in main
handle_event_request(lambda_runtime_client,
File "/var/runtime/bootstrap.py", line 148, in handle_event_request
lambda_runtime_client.post_invocation_result(invoke_id, result, result_content_type)
File "/var/runtime/lambda_runtime_client.py", line 62, in post_invocation_result
rapid_client.post_invocation_result(invoke_id, result_data if isinstance(result_data, bytes) else result_data.encode('utf-8'), content_type)
RuntimeError: Failed to post invocation response
equestId: c0f7e100-80b0-4e6a-84ff-9f5d0f8c708f Error: Runtime exited with error: exit status 1 Runtime.ExitError
</code></pre>
<p>I know this error usually has to do with the invocation response data being too large; however, even when only returning two or three results it errors out. I believe it has to do with the languages — for example, Bangladesh's data is in Bangla. Is there something I can do to preprocess the data to keep the error from occurring?</p>
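<p>A hedged note on why "two or three results" can still be large: HTTP 413 from the runtime API indicates the serialized response exceeded Lambda's synchronous response limit (around 6 MB), and non-Latin scripts such as Bangla or Cyrillic cost 3 bytes per character in UTF-8 — or 6 bytes per character when <code>json.dumps</code> escapes them as <code>\uXXXX</code>, the default. Measuring the encoded size before returning, and serializing with <code>ensure_ascii=False</code>, can shrink the payload considerably:</p>

```python
import json

record = {"country": "বাংলাদেশ", "value": 42}  # hypothetical Bangla payload

escaped = json.dumps(record)                      # default: \uXXXX escapes
compact = json.dumps(record, ensure_ascii=False)  # raw UTF-8 instead

escaped_size = len(escaped.encode("utf-8"))
compact_size = len(compact.encode("utf-8"))

LIMIT = 6 * 1024 * 1024  # approximate synchronous response limit, in bytes
fits = compact_size < LIMIT
```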
|
<python><amazon-web-services><aws-lambda><invocation>
|
2023-03-24 22:50:17
| 0
| 428
|
cmcnphp
|
75,838,609
| 17,639,970
|
Is it possible to add json info to a OSM file in julia?
|
<p>I'm trying to extract a set of public transit nodes from OpenStreetMap. After reading wikis and documentation, it seems Overpass turbo gives nodes and edges related to public transit. So I ran this query on <a href="https://overpass-turbo.eu/" rel="nofollow noreferrer">NYC</a>; when exporting, it offers an option to export as raw OSM data, but the downloaded file is JSON, not an OSM file. So, having an existing .osm file of New York City, how do I integrate this JSON into that OSM file?</p>
<p>Can OSMX.jl from Julia (or Python) open an existing OSM file and somehow integrate this JSON info into the existing map?</p>
|
<python><julia><openstreetmap>
|
2023-03-24 22:41:35
| 0
| 301
|
Rainbow
|
75,838,573
| 843,036
|
How to upload recorded audio blob to server without method='post' in Javascript
|
<p>I am trying to create a voice recorder in flask that takes the voice recording and then further processes the audio file.</p>
<p>For the voice recorder I am using the recorder code in this github repo: <a href="https://github.com/addpipe/simple-recorderjs-demo" rel="nofollow noreferrer">https://github.com/addpipe/simple-recorderjs-demo</a></p>
<p>After the voice is recorded, I want to save it to disk and then process it using another Python library.</p>
<p>Here is the layout of my website:</p>
<p><a href="https://i.sstatic.net/mc4tw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mc4tw.png" alt="enter image description here" /></a></p>
<p>I want the user to record their audio, and when stop button is pressed the audio should save to disk. then when <code>submit</code> is pressed, I want the python code to get the audio and process it.</p>
<p>To make this work, I changed the code from the above linked github to the following:</p>
<pre><code>function stopRecording() {
console.log("stopButton clicked");
//disable the stop button, enable the record too allow for new recordings
stopButton.disabled = true;
recordButton.disabled = false;
pauseButton.disabled = true;
//reset button just in case the recording is stopped while paused
pauseButton.innerHTML="Pause";
//tell the recorder to stop the recording
rec.stop();
//stop microphone access
gumStream.getAudioTracks()[0].stop();
rec.exportWAV(saveAudioFile);
}
function saveAudioFile(blob) {
// the file name is taken from python code.. for some reason this one is not considered
var filename = new Date().toISOString();
var xhr=new XMLHttpRequest();
xhr.onload=function(e) {
if(this.readyState === 4) {
console.log("Server returned: ",e.target.responseText);
}
};
var fd=new FormData();
fd.append("audio_data",blob, filename);
xhr.open("POST","/",true);
xhr.send(fd);
}
</code></pre>
<p>When the <code>stop</code> button is pressed it calls the <code>saveAudioFile</code> function that saves the audio file to disk.</p>
<p>Here is my html code:</p>
<pre><code><h2>Step 1: Record your voice</h2>
<br>
<div align="center" id="controls">
<button type="button" class="btn btn-danger" id="recordButton">Start Recording</button>
<button type="button" class="btn btn-secondary" id="pauseButton">Pause</button>
<button type="button" class="btn btn-primary" id="stopButton">Stop Recording</button>
</div>
<div align="center" id="formats">Format: start recording to see sample rate</div>
<br>
<h2>Step 2: Select your target language</h2>
<br>
<form class="languageSelect" align="center" method="POST">
<select class="form-select" name="languageSelector" id="languageSelector" aria-label="Select your target language">
<option selected>Select target language</option>
<option value="ar">Arabic</option>
<option value="zh-Hans">Chinese Simplified</option>
<option value="de">German (Germany)</option>
<option value="ja">Japanese</option>
<option value="fr">French</option>
<option value="it">Italian</option>
</select>
<br>
<br>
<button class="btn btn-success" id="submit2" onclick="loading()" type="submit">
<i class="fa fa-circle-o-notch fa-spin" style="display:none;"></i>
<span class="btn-text">Submit</span>
</button>
</form>
<br>
<!-- inserting these scripts at the end to be able to use all the elements in the DOM -->
<script src="/static/js/recorder.js"></script>
<script src="/static/js/app.js"></script>
<!-- below script will show the loading animation on button click..-->
<script type="text/javascript">
function loading() {
$(".btn .fa-circle-o-notch").show();
$(".btn .btn-text").html("Please Wait...");
}
</script>
</code></pre>
<p>This works and creates the audio file, but the problem is that it does so with <code>method='POST'</code>.</p>
<p>So when the stop button is clicked to stop the recording, the <code>flask</code> code for <code>method=='POST'</code> also gets executed and that causes problems as the language selector is empty and the submit button is not pressed.</p>
<p>Here is the flask code for this page:</p>
<pre><code>@app.route('/', methods=['GET', 'POST'])
def home():
if request.method == "POST":
lang = request.form.getlist("languageSelector")
print("Language Selected: " + str(lang))
filename = 'audio.wav'
filepath = os.path.join('audio-files', filename)
print(request.files)
f = request.files['audio_data']
with open(filepath, 'wb') as audio:
f.save(audio)
print('file uploaded successfully')
translted_text = translate_audio_to_text(filepath, lang)
synthesize_audio(translted_text)
return render_template('home.html', request="POST")
else:
return render_template('home.html')
</code></pre>
<p>As can be seen, when the stop button is pressed in the recorder, it also executes all the code in the <code>if request.method == "POST":</code> block which causes the app to fail.</p>
<p>I'm very new to javascript so am struggling with this. Is there a way to save the audio file without <code>POST</code> method?</p>
|
<javascript><python><flask>
|
2023-03-24 22:34:56
| 1
| 2,699
|
StuckInPhDNoMore
|
75,838,499
| 355,931
|
Pandas read_csv: Data is not being read from text file (open() reads hex chars)
|
<p>I'm trying to read a text file with <code>pandas.read_csv</code>, but the data is not being loaded (I only get a dataframe full of <code>NA</code> values). The text file contains valid data (I can open it with Excel). When I try to read it with <code>pathlib.Path.open()</code> it shows lines with hex codes.</p>
<p>Let me show you what is happening:</p>
<pre><code>import pandas as pd
from pathlib import Path
path = Path('path/to/my/file.txt')
# This shows an error: Unidecode Error... as usual with windows files
df = pd.read_csv(path, dtype=str)
## UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf1 in position 96: invalid continuation byte
# This imports a dataframe full of null values:
df = pd.read_csv(path, dtype=str, encoding='latin1')
print(df)
## C Unnamed: 1 Unnamed: 2 Unnamed: 3 Unnamed: 4 Unnamed: 5 Unnamed: 6 \
## 0 <NA> <NA> <NA> <NA> <NA> <NA> <NA>
## 1 <NA> <NA> <NA> <NA> <NA> <NA> <NA>
## ...
# So, what is Python reading? I tried this:
with path.open('r') as f:
data = f.readline()
print(data)
## 'C\x00e\x00n\x00t\x00r\x00o\x00 \x00B\x00e\x00n\x00e\x00f\x00i\x00c\x00i\x00o\x00s\x00\n
</code></pre>
<p>And, as I said before, when I open the file with Excel, it shows exactly how it is supposed to look: a text file with values separated by pipes (<code>|</code>). So, right now, I'm quite surprised.</p>
<p>What am I missing? Can anyone point me in the right direction? Which is the right encoding?</p>
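<p>A hedged diagnosis: the interleaved <code>\x00</code> bytes after every character (<code>C\x00e\x00n\x00...</code>) are the signature of UTF-16 little-endian text, which Excel reads transparently but which decodes to garbage as latin1. A pure-bytes sketch of the pattern and its fix; the pandas call in the comment is the likely remedy but is not executed here:</p>

```python
# 'C\x00e\x00n\x00...' is UTF-16-LE: one payload byte plus one NUL byte
# per ASCII character. Decoding with the right codec recovers the text.
raw = "Centro Beneficios".encode("utf-16-le")

decoded = raw.decode("utf-16-le")

# With pandas this would become (not executed here):
#   pd.read_csv(path, sep="|", encoding="utf-16", dtype=str)
```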
|
<python><pandas><character-encoding><text-files>
|
2023-03-24 22:19:04
| 1
| 21,147
|
Barranka
|
75,838,473
| 12,309,386
|
python ray pid unavailable in RayTaskError
|
<p>I am running a function as a number of ray tasks, collecting the object ids in a list called <code>futures</code>.</p>
<p>I then have a try/except block within which I attempt to get the results. My function throws a <code>RuntimeError</code>. My understanding is that ray wraps this within a <code>RayTaskError</code>. I want to catch the <code>RayTaskError</code> and print its pid.</p>
<pre><code>...
try:
results = ray.get(futures)
except ray.exceptions.RayTaskError as e:
print(e.pid)
</code></pre>
<p>Doing this gives me <code>AttributeError: 'RuntimeError' object has no attribute 'pid'</code></p>
<p>If I change <code>print(e.pid)</code> to simply <code>print(e)</code> then I do see a lot of information including the pid:</p>
<pre><code>ray::file_process_worker() (pid=22996, ip=127.0.0.1)
File "python\ray\_raylet.pyx", line 850, in ray._raylet.execute_task
File "python\ray\_raylet.pyx", line 902, in ray._raylet.execute_task
File "python\ray\_raylet.pyx", line 857, in ray._raylet.execute_task
File "python\ray\_raylet.pyx", line 861, in ray._raylet.execute_task
File "C:\Defacto offline\PriceTransparency\pt_ray_spike\ptrans\fileprocessors\basefileprocessor.py", line 601, in file_process_worker
raise RuntimeError(err_msg)
RuntimeError: Error processing file sample.txt
</code></pre>
<p>If I print the type of the exception with <code>print(type(e))</code> then I get</p>
<pre><code><class 'ray.exceptions.RayTaskError(RuntimeError)'>
</code></pre>
<p>Looking at <a href="https://docs.ray.io/en/latest/_modules/ray/exceptions.html#RayTaskError" rel="nofollow noreferrer">the source code</a> it seems like the pid should be available as an attribute.</p>
<p>So my question is, <em><strong>why am I unable to get the pid from the exception?</strong></em></p>
<hr />
<p><strong>2023.03.27 update</strong></p>
<p>Adding a breakpoint within the exception handling block and inspecting what's available, I see that <code>e</code> is an object of <code><class 'ray.exceptions.RayTaskError(RuntimeError)'></code> and has the attributes:</p>
<ul>
<li><code>cause</code>: a <code>RuntimeError</code> object (which was thrown by the remote function being executed)</li>
<li><code>args</code>: a tuple containing a single member, the <code>RuntimeError</code> object.</li>
<li><code>_annotated</code>: a string 'RayTaskError'</li>
</ul>
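<p>Given that the dynamically created <code>RayTaskError(RuntimeError)</code> instance here only exposes <code>cause</code>, <code>args</code>, and <code>_annotated</code>, one pragmatic fallback is parsing the pid out of the exception's string form, whose <code>(pid=..., ip=...)</code> header appears in the traceback above. This assumes that header format is stable, which is not guaranteed across ray versions:</p>

```python
import re

def pid_from_ray_error(exc_text):
    """Best-effort: pull the worker pid out of a RayTaskError's str()."""
    match = re.search(r"\(pid=(\d+)\b", exc_text)
    return int(match.group(1)) if match else None

# Header copied from the error output shown above.
sample = "ray::file_process_worker() (pid=22996, ip=127.0.0.1)"
pid = pid_from_ray_error(sample)
```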
|
<python><ray>
|
2023-03-24 22:14:41
| 0
| 927
|
teejay
|
75,838,452
| 10,997,438
|
Python parallel execution slower than serial, even with batches
|
<p>I have a txt file with several hundred thousand strings, each on a new line.<br />
I need to create a list with an element for each string. Each of these elements must be a list that contains ordered substrings of the original string. The length of the substrings is a k given.<br />
For example with the following file content</p>
<pre><code>ABCDEFH
EJDNOENDE
DEMD
</code></pre>
<p>And <code>k = 3</code> I get</p>
<pre><code>[
[ABC, BCD, CDE, DEF, EFH],
[EJD, JDN, DNO, NOE, OEN, END, NDE],
[DEM, ]
]
</code></pre>
<p>Consider that the file is generally several hundred thousand lines long and the average length of each line is 150. k is almost always above 10, sometimes up to 50.</p>
<p>My code in series (that works) is:</p>
<pre class="lang-py prettyprint-override"><code>def build_k_mers(reads: list[str], k: int) -> list[list[str]]:
return [[read[i : i + k] for i in range(len(read) - k + 1)] for read in reads]
</code></pre>
<p>Since building each sublist is an independent process I thought of parallelizing. My first attempt was:</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing import Pool, cpu_count
def build_k_mers_helper(args: tuple[str, int]) -> list[str]:
read, k = args
k_mers = [read[i : i + k] for i in range(len(read) - k + 1)]
return k_mers
def build_k_mers_parallel(reads: list[str], k: int) -> list[list[str]]:
with Pool(cpu_count()) as pool:
k_mers = pool.map(build_k_mers_helper, [(read, k) for read in reads])
return k_mers
</code></pre>
<p>But this code would take slightly more time than the serial one. I thought it might have been because Python (in particular Jupyter Notebook) uses only one process anyway, and creating multiple processes would only add the switching context time. However, to test this, I opened Resource Monitor and I saw all cores of my CPU running at 100% during the script execution.<br />
Seeing this I thought that maybe it is indeed able to use multiple cores, so I thought that maybe since each <code>build_k_mers_helper</code> is quite fast (just a list comprehension on a short string) still switching context for each process that starts and terminates almost immediately slows down everything.</p>
<p>My second attempt then was:</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing import Pool, cpu_count
def build_k_mers_helper_batch(args: tuple[list[str], int]) -> list[list[str]]:
batch, k = args
return [[read[i : i + k] for i in range(len(read) - k + 1)] for read in batch]
def build_k_mers_parallel_batch(reads: list[str], k: int) -> list[list[str]]:
seqs = [read for read in reads]
step = int(len(seqs) / cpu_count()) or 1
with Pool(cpu_count()) as pool:
k_mers_batches = pool.map(build_k_mers_helper_batch, [(seqs[i : i + step], k) for i in range(0, len(seqs), step)])
print('Terminated batches')
return [k_mers for batch in k_mers_batches for k_mers in batch]
</code></pre>
<p>To avoid the "too many context switching" problem. Basically instead of creating a process for each line, I create a process for each batch of lines such that each CPU core (each process) takes care of the same amount of batches and very few switching context are needed (indeed only setupping the processes and terminating them should be needed).</p>
<p>This attempt still did not manage to get better performance than the serial one, and was even a tiny bit slower than the previous attempt.</p>
<p>What am I doing wrong or how could I actually improve execution time by parallelizing?</p>
<p>My system has 64GB of RAM and an Intel Core i7-12700H (<code>cpu_count()</code> returns <code>20</code>).</p>
<p>My benchmark code is:</p>
<pre class="lang-py prettyprint-override"><code>from kmers import build_k_mers_parallel, build_k_mers_parallel_batch, build_k_mers
from timeit import Timer
setup = '''
from kmers import build_k_mers_parallel, build_k_mers_parallel_batch, build_k_mers
with open('filename.txt') as file:
reads = file.readlines()
'''
print(min(Timer('build_k_mers(reads, 50)', setup).repeat(3, 100)))
</code></pre>
<p>Of course changing <code>filename</code> and the function called in the timeit Timer according to what I was benchmarking.<br />
I tested with the setups below, keeping <code>k = 50</code> across all tests.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>number of lines</th>
<th>serial</th>
<th>parallel</th>
<th>batch parallel</th>
</tr>
</thead>
<tbody>
<tr>
<td>#0</td>
<td>500k</td>
<td>0m 14s</td>
<td>0m 15s</td>
<td>0m 16s</td>
</tr>
<tr>
<td>#1</td>
<td>1M</td>
<td>0m 19s</td>
<td>0m 21s</td>
<td>0m 23s</td>
</tr>
<tr>
<td>#2</td>
<td>2M</td>
<td>0m 27s</td>
<td>0m 33s</td>
<td>0m 36s</td>
</tr>
<tr>
<td>#3</td>
<td>3M</td>
<td>0m 38s</td>
<td>0m 43s</td>
<td>0m 45s</td>
</tr>
<tr>
<td>#4</td>
<td>4M</td>
<td>0m 43s</td>
<td>0m 57s</td>
<td>1m 00s</td>
</tr>
<tr>
<td>#5</td>
<td>5M</td>
<td>0m 54s</td>
<td>1m 10s</td>
<td>1m 15s</td>
</tr>
</tbody>
</table>
</div>
<p>Bigger files would start using the disk as swap space, slowing everything down considerably thus distorting the benchmark.</p>
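<p>One likely explanation worth quantifying, hedged as a hypothesis rather than a diagnosis: the bottleneck may be inter-process transfer rather than context switching. Each worker's result is roughly k times larger than its input line (every character appears in about k substrings), so pickling the k-mer lists back to the parent can cost more than computing them — overhead the serial version never pays. A stdlib-only sketch measuring the blow-up for one typical line:</p>

```python
import pickle

def k_mers(read, k):
    return [read[i:i + k] for i in range(len(read) - k + 1)]

# One line of typical length, with distinct characters so pickle
# cannot shortcut anything.
read = "".join(chr(65 + i % 26) for i in range(150))
k = 50

in_size = len(pickle.dumps(read))            # what the parent sends a worker
out_size = len(pickle.dumps(k_mers(read, k)))  # what the worker ships back

# The result is many times larger than the input it was derived from.
blowup = out_size / in_size
```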
|
<python><performance><parallel-processing><multiprocessing><python-multiprocessing>
|
2023-03-24 22:09:00
| 1
| 389
|
CrystalSpider
|
75,838,347
| 8,318,946
|
Issue with complex signal in Django
|
<p>Below are my Django models. I'd like to write signal that will do the following:</p>
<ol>
<li>User updates project that contains let's say 3 Spider objects.</li>
<li>Each Spider object contains config_file so signal should get list of all Spider objects and update group from all config files from the list.</li>
<li>In the end when the user updates group name in Project all <code>config_files</code> should change group name as well.</li>
</ol>
<pre><code>class Project(models.Model):
name = models.CharField(max_length=200, default="")
user = models.ForeignKey(User, on_delete=models.CASCADE, null=True, default='')
is_shared = models.BooleanField(default=False)
group = models.ForeignKey(Group, on_delete=models.CASCADE, null=True, blank=True, default=1)
class Spider(models.Model):
name = models.CharField(max_length=200, default="", unique=True)
user = models.ForeignKey(User, on_delete=models.CASCADE, null=True, default='')
project = models.ForeignKey(Project, on_delete=models.CASCADE, blank=True, null=True, related_name='project_spider')
config_file = models.ForeignKey(BaseFileModel, on_delete=models.SET_NULL, null=True, default='')
class BaseFileModel(models.Model):
file = models.FileField(upload_to=custom_upload_files, null=True, default='')
group = models.ForeignKey(Group, on_delete=models.CASCADE, null=True, blank=True, default=1)
version = models.IntegerField(default=1)
</code></pre>
<p>I tried to write a pre_save signal, but it doesn't work as expected: the file field in BaseFileModel is null after <code>spider.config_file.save()</code>, and not all config files have their group updated.</p>
<p>The problem I have is that there is no direct connection between BaseFileModel and Project, and I am not sure how to write a lookup/query to update all config files after updating the project group.</p>
<pre><code>@receiver(pre_save, sender=Project)
def update_spiders_group(sender, instance, **kwargs):
if instance.pk:
try:
old_project = Project.objects.get(pk=instance.pk)
except Project.DoesNotExist:
pass
else:
            if old_project.group != instance.group:
spiders = Spider.objects.filter(project=instance)
for spider in spiders:
spider.config_file.group = instance.group
spider.config_file.save()
</code></pre>
<p>I would appreciate any help on how to achieve my goal.</p>
|
<python><django>
|
2023-03-24 21:48:40
| 1
| 917
|
Adrian
|
75,838,234
| 20,589,631
|
how do i show a widget from a diffrent widget?
|
<p>I have two widgets: one of them is a main widget (even though it uses QtWidgets.QWidget), and the other one is a side widget that is supposed to appear from the main widget.
I tried to use a button so that, on click, the side widget would show, but instead the app crashes after a second.</p>
<p>here's the code for the main:</p>
<pre><code>from PyQt5 import QtWidgets
import sys
# help is the script of the side widget
import help
class KeyGrabber(QtWidgets.QWidget):
def __init__(self):
try:
super().__init__()
layout = QtWidgets.QVBoxLayout(self)
self.button = QtWidgets.QPushButton('start')
layout.addWidget(self.button)
self.button.setCheckable(True)
self.button.toggled.connect(self.show_widget)
except Exception as error:
print(error.__str__())
def show_widget(self):
try:
help.main(sys.argv)
except Exception as error:
print(error.__str__())
if __name__ == "__main__":
try:
app = QtWidgets.QApplication(sys.argv)
grabber = KeyGrabber()
grabber.show()
sys.exit(app.exec_())
except Exception as error:
print(str(error))
</code></pre>
<p>and here's the code for the side widget:</p>
<pre><code># -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'help.ui'
#
# Created by: PyQt5 UI code generator 5.15.9
#
# WARNING: Any manual changes made to this file will be lost when pyuic5 is
# run again. Do not edit this file unless you know what you are doing.
from PyQt5 import QtCore, QtGui, QtWidgets
import sys
class Ui_Form(object):
def setupUi(self, Form):
Form.setObjectName("Form")
Form.resize(273, 376)
self.name_label = QtWidgets.QLabel(Form)
self.name_label.setGeometry(QtCore.QRect(10, 10, 47, 13))
self.name_label.setObjectName("name_label")
self.name_input = QtWidgets.QLineEdit(Form)
self.name_input.setGeometry(QtCore.QRect(10, 30, 201, 21))
self.name_input.setObjectName("name_input")
self.issue_label = QtWidgets.QLabel(Form)
self.issue_label.setGeometry(QtCore.QRect(10, 60, 91, 16))
self.issue_label.setObjectName("issue_label")
self.issue_input = QtWidgets.QTextEdit(Form)
self.issue_input.setGeometry(QtCore.QRect(10, 80, 241, 111))
self.issue_input.setObjectName("issue_input")
self.email_label = QtWidgets.QLabel(Form)
self.email_label.setGeometry(QtCore.QRect(10, 200, 201, 16))
self.email_label.setObjectName("email_label")
self.email_input = QtWidgets.QLineEdit(Form)
self.email_input.setGeometry(QtCore.QRect(10, 220, 201, 21))
self.email_input.setObjectName("email_input")
self.send = QtWidgets.QPushButton(Form)
self.send.setGeometry(QtCore.QRect(0, 290, 121, 51))
self.send.setObjectName("send")
self.pushButton_2 = QtWidgets.QPushButton(Form)
self.pushButton_2.setGeometry(QtCore.QRect(150, 290, 121, 51))
self.pushButton_2.setObjectName("pushButton_2")
self.retranslateUi(Form)
QtCore.QMetaObject.connectSlotsByName(Form)
def retranslateUi(self, Form):
_translate = QtCore.QCoreApplication.translate
Form.setWindowTitle(_translate("Form", "Form"))
self.name_label.setText(_translate("Form", "name"))
self.issue_label.setText(_translate("Form", "describe the issue:"))
self.email_label.setText(_translate("Form", "your email (not required):"))
self.send.setText(_translate("Form", "send"))
self.pushButton_2.setText(_translate("Form", "cancel"))
self.pushButton_2.clicked.connect(self.close)
def close(self):
sys.exit(0)
def main(argv):
try:
app = QtWidgets.QApplication(argv)
Form = QtWidgets.QWidget()
ui = Ui_Form()
ui.setupUi(Form)
Form.show()
except Exception as error:
print(error.__str__())
</code></pre>
|
<python><pyqt5><qwidget>
|
2023-03-24 21:26:32
| 1
| 391
|
ori raisfeld
|
75,838,200
| 11,141,816
|
What's the simple way to get the return value of a function passed to multiprocessing.Process without using too many functions?
|
<p>In this post <a href="https://stackoverflow.com/questions/10415028/how-to-get-the-return-value-of-a-function-passed-to-multiprocessing-process?answertab=scoredesc#tab-top">How to get the return value of a function passed to multiprocessing.Process?</a> there were many solutions to get a value from the multiprocessing. <a href="https://stackoverflow.com/a/10415215/11141816">vartec and Nico Schlömer</a> also mentioned the <a href="https://docs.python.org/3/library/multiprocessing.html#sharing-state-between-processes" rel="nofollow noreferrer">Sharing state between processes</a></p>
<pre><code>from multiprocessing import Process, Value, Array
def f(n, a):
n.value = 3.1415927
for i in range(len(a)):
a[i] = -a[i]
if __name__ == '__main__':
num = Value('d', 0.0)
arr = Array('i', range(10))
p = Process(target=f, args=(num, arr))
p.start()
p.join()
print(num.value)
print(arr[:])
</code></pre>
<p>However, the objects that can be stored in Value and Array seem to be limited; they cannot hold an arbitrary Python object. They also mentioned the Manager() class, but I'm not sure how they started the manager, since</p>
<pre><code>return_dict = manager.dict() # never had a statement
return_dict.start()
</code></pre>
<p>In practice, a process desired runs like,</p>
<pre><code>def function(Input):
    Output=computation(Input)
return Output;
p1=multiprocessing.Process(target=function,args=(input_1,))
p2=multiprocessing.Process(target=function,args=(input_2,))
p1.start()
p2.start()
p1.join()
p2.join()
</code></pre>
<p>or in a while loop. The returned objects</p>
<pre><code>output_1,output_2
</code></pre>
<p>may be some complicated objects from the other packages such as the sympy or numpy objects, etc. The main program should just get the raw object return in a list in the order of the processes being started.</p>
<pre><code>[output_1,output_2]
</code></pre>
<p>or with a simple label</p>
<pre><code>def function(Input,ix):
    Output=computation(Input)
return [ix,Output];
p1=multiprocessing.Process(target=function,args=(input_1,1,))
p2=multiprocessing.Process(target=function,args=(input_2,2,))
[[2,output_2],[1,output_1]]
</code></pre>
<p>I thought of defining a list globally and just appending each return value to the list. However, I worried that if p1 and p2 finished at the same time, they would try to append to the list at the same time, causing trouble in memory (could that happen?), or slowing down the algorithm. I also saw answers using Queue(). However, that method changes the function itself quite a lot, and function(Input) could no longer be called normally.</p>
<p>I saw an example with <a href="https://pythonprogramming.net/values-from-multiprocessing-intermediate-python-tutorial/" rel="nofollow noreferrer">pool</a>,</p>
<pre><code>from multiprocessing import Pool
def job(num):
return num * 2
if __name__ == '__main__':
p = Pool(processes=20)
data = p.map(job, [i for i in range(20)])
p.close()
print(data)
</code></pre>
<p>which was ridiculously simpler compared to the Process method. Does that mean Pool is superior? However, in this case the script is intended to use Process instead of Pool.</p>
<p>Is there a way to just run function() with a range of inputs, and then get the return values, without changing how function() was coded (i.e. function(1)=3.14159265...), with the Process class? What's the simple way to get the return value of a function passed to multiprocessing.Process without using too many other multiprocessing objects?</p>
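<p>For what it's worth, in those answers the manager is started simply by instantiating <code>Manager()</code> (often as a context manager); it is the manager, not the dict, that needs starting. A minimal sketch with a placeholder computation:</p>

```python
from multiprocessing import Manager, Process

def function(inp, ix, return_dict):
    # stand-in for the real computation(Input); results land in the shared dict
    return_dict[ix] = inp * 2

def run_all(inputs):
    with Manager() as manager:       # starts the manager server process
        return_dict = manager.dict()
        procs = [Process(target=function, args=(inp, ix, return_dict))
                 for ix, inp in enumerate(inputs)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        return dict(return_dict)     # copy out before the manager shuts down

if __name__ == "__main__":
    print(run_all([10, 20]))
```

<p>Each process writes under its own label, so the main process can read the results back regardless of which worker finished first.</p>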
|
<python><python-multiprocessing>
|
2023-03-24 21:20:54
| 1
| 593
|
ShoutOutAndCalculate
|
75,838,144
| 7,175,213
|
Create a graph in Python (with pygraphviz) when the number of nodes is huge
|
<p>I am trying, in Python, to create a graph from lists where I can see the merge and ramifications of my lists. I am sure the first item of the list is always the same for all lists.</p>
<p>For these three example lists:</p>
<pre><code>list1 = ["w", "a", "b", "c", "d"]
list2 = ["w", "a", "f", "c", "d"]
list3 = ["w", "a", "e", "f", "d"]
</code></pre>
<p>The result should be:</p>
<p><a href="https://i.sstatic.net/Qwl14.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qwl14.png" alt="enter image description here" /></a></p>
<p>Reading online I found <code>pygraphviz</code> and I am trying to use to do it by following the <a href="https://graphviz.readthedocs.io/en/stable/index.html" rel="nofollow noreferrer">documentation</a>.</p>
<pre><code>import pygraphviz as pgv
from datetime import datetime, timezone
def flatten(l): return flatten(l[0]) + (flatten(l[1:]) if len(l) > 1 else []) if type(l) is list else [l]
def nodes_to_edges(nodes):
edges = []
for i in range(len(nodes)-1):
edges.append((nodes[i], nodes[i+1]))
return edges
now = datetime.now(timezone.utc)
list1= ["w", "a", "b", "c", "d"]
list2 = ["w", "a", "f", "c", "d"]
list3= ["w", "a", "e", "f", "d"]
edges_list1 = nodes_to_edges(list1)
edges_list2 = nodes_to_edges(list2)
edges_list3 = nodes_to_edges(list3)
nodes = flatten(list1 +list2+list3)
edges = flatten(edges_list1 +edges_list2+edges_list3)
# create a new graph
G = pgv.AGraph(label=f'<<i>Generated {now.strftime("%b %d %Y %H:%M:%S")}</i>>',fontsize = 10, graph_type='digraph', directed=True)
for node in nodes:
G.add_node(node)
for edge in edges:
G.add_edge(edge[0], edge[1])
G.layout()
G.draw("file.png")
</code></pre>
<p><strong>Moreover, I cannot find a way to make it look like my drawing, with one node at the top and the others below, so the branching is visible.</strong> The image I get as a result is a bit messy (see below).</p>
<p><a href="https://i.sstatic.net/HDA1W.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HDA1W.png" alt="enter image description here" /></a></p>
<p>Even worse, if the list is big, one node ends up in front of another and the plot is not legible.</p>
|
<python><graph><pygraphviz>
|
2023-03-24 21:10:02
| 1
| 1,148
|
Catarina Nogueira
|
75,838,141
| 5,094,261
|
Stream large XML file directly from GridFS to xmltodict parsing
|
<p>I am using <a href="https://motor.readthedocs.io/en/stable/index.html" rel="nofollow noreferrer">Motor</a> for async MongoDB operations. I have a <a href="https://motor.readthedocs.io/en/stable/api-asyncio/asyncio_gridfs.html?highlight=gridfs" rel="nofollow noreferrer">gridfs</a> storage where I store large XML files (typically 30+ MB in size) in chunks of 8 MBs. I want to incrementally parse the XML file using <a href="https://github.com/martinblech/xmltodict" rel="nofollow noreferrer">xmltodict</a>.
Here is how my code looks.</p>
<pre class="lang-py prettyprint-override"><code>async def read_file(file_id):
gfs_out: AsyncIOMotorGridOut = await gfs_bucket.open_download_stream(file_id)
tmpfile = tempfile.SpooledTemporaryFile(mode="w+b")
while data := await gfs_out.readchunk():
tmpfile.write(data)
xmltodict.parse(tmpfile)
</code></pre>
<p>I am pulling all the chunks out one by one, storing them in a temporary file in memory, and then parsing the entire file through xmltodict. Ideally I would want to parse it incrementally, as I don't need the entire XML object from the get-go.</p>
<p>The documentation for xmltodict suggests that we can add custom handlers to parse a stream, like this example:</p>
<pre class="lang-py prettyprint-override"><code>>>> def handle_artist(_, artist):
... print(artist['name'])
... return True
>>>
>>> xmltodict.parse(GzipFile('discogs_artists.xml.gz'),
... item_depth=2, item_callback=handle_artist)
A Perfect Circle
Fantômas
King Crimson
Chris Potter
...
</code></pre>
<p>But the problem with this is that it expects a file-like object with a synchronous <code>read()</code> method, not a coroutine. Is there any way it can be achieved? Any help would be greatly appreciated.</p>
|
<python><mongodb><python-asyncio><gridfs><xmltodict>
|
2023-03-24 21:09:26
| 1
| 1,273
|
Shiladitya Bose
|
75,838,115
| 2,085,454
|
How to re-trigger Airflow pipeline within a DAG
|
<p>Our company's internal Airflow 2 platform has an issue: it sometimes shows "success" even when we get no output from the pipeline. To avoid this, we would like automated code that checks whether there is output after the Airflow pipeline finishes and, if not, re-runs the pipeline automatically.</p>
<p>Do you know how can we do that?</p>
|
<python><automation><airflow><airflow-2.x>
|
2023-03-24 21:06:27
| 1
| 4,104
|
Cherry Wu
|
75,837,897
| 9,291,575
|
Dask worker has different imports than main thread
|
<p>I have a dask delayed function that uses some options defined in another submodule. There's also a third module that modifies these options when imported.</p>
<p>If the imports happen after <code>__name__ == '__main__'</code> (in a notebook, for example), running the function in a distributed client ignores the modified options.</p>
<p>Is there a way to make sure the client worker have done the same "imports" as the main thread before running any computation ?</p>
<hr />
<p>Here's a MWE, it uses three python modules:</p>
<p><code>constants.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>N = 1
</code></pre>
<p><code>add.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import dask
import constants as c
@dask.delayed
def add(da):
return da + c.N
</code></pre>
<p><code>overhead.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import constants
constants.N = 4
</code></pre>
<p>Then if I run the following script, it works (output is <code>5</code>):</p>
<pre class="lang-py prettyprint-override"><code>from dask.distributed import Client
import dask
import add
import overhead
if __name__ == '__main__':
c = Client(n_workers=2, threads_per_worker=2)
print(dask.compute(add.add(1))[0])
</code></pre>
<p>But if we import the submodules <em>after</em> <code>if __name__ == '__main__'</code>, it fails (output is <code>2</code>):</p>
<pre class="lang-py prettyprint-override"><code>from dask.distributed import Client
import dask
if __name__ == '__main__':
import add
import overhead
c = Client(n_workers=2, threads_per_worker=2)
print(dask.compute(add.add(1)))
</code></pre>
<p>Another working solution is to trigger some code related to the <code>overhead</code> module, so simply:</p>
<pre class="lang-py prettyprint-override"><code>from dask.distributed import Client
import dask
if __name__ == '__main__':
import add
import overhead
c = Client(n_workers=2, threads_per_worker=2)
c.run(lambda: overhead)
print(dask.compute(add.add(1)))
</code></pre>
<p>This works, but it requires me to know which module should be "triggered". I would prefer a more generic solution.</p>
<hr />
<p>Of course, this is a simplified example. If it may help for context, my real-life issue is using <code>intake-esm</code> to read files (culprit function is <code>intake_esm.source._open_dataset</code>). I have another package that calls <code>intake_esm.utils.set_options</code> upon import. That option change is not respected when I run the workflow in a notebook but it works if I run it as a script (with all imports at the top of the file).</p>
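<p>A generic (unofficial) workaround sketch in that direction: collect the module names already imported in the main process and re-import them on every worker via <code>Client.run</code>, so import-time side effects like <code>overhead</code>'s are replayed there too:</p>

```python
import importlib
import sys

def replay_imports(client, module_names=None):
    """Unofficial workaround sketch (not a dask API): re-import modules on
    every worker so their import-time side effects run there too.  With no
    argument, replays every top-level module currently imported here."""
    if module_names is None:
        module_names = sorted(
            {name.split(".")[0] for name in sys.modules
             if not name.startswith("_")})

    def _import_all(names=tuple(module_names)):
        for name in names:
            try:
                importlib.import_module(name)
            except ImportError:
                pass  # e.g. helpers that only exist in the main process

    client.run(_import_all)
```

<p>Called as <code>replay_imports(c)</code> right after creating the <code>Client</code>, this would re-run <code>overhead</code>'s side effect on each worker; re-imports of modules a worker already has are cheap, since they hit the import cache.</p>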
|
<python><dask><dask-distributed>
|
2023-03-24 20:28:45
| 1
| 708
|
Aule Mahal
|
75,837,806
| 6,865,112
|
How to change the name of the column that is based on a Model in FastAPI?
|
<p>In FastAPI the database is generated automatically, and it was created based on the models I had at the time. After the database creation I changed a model's property name from "owner_id" to "user_id", but after re-running the API it did not update the database.</p>
<p>How can I trigger the database upgrade in FastAPI (with SQLAlchemy)? Is there a specific command I need to run?</p>
<p>I've tried generating a migration for that and it can work, but I think there is an easier way that does not rely on an Alembic migration.</p>
<p>I'd expect a CLI command to trigger the database upgrade after a model change...</p>
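<p>For reference, the usual CLI pair for this (a sketch, assuming Alembic is already initialised in the project; the revision message is arbitrary) would be:</p>

```shell
# Generate a migration by diffing the SQLAlchemy models against the DB,
# then apply it; SQLAlchemy's create_all never alters existing columns.
alembic revision --autogenerate -m "rename owner_id to user_id"
alembic upgrade head
```
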
|
<python><sqlalchemy><fastapi><alembic>
|
2023-03-24 20:14:48
| 1
| 749
|
Almeida Cavalcante
|
75,837,715
| 13,321,451
|
How to format labels in scientific notation for bar_label
|
<p>I am plotting data in a seaborn barplot and want to label each bar with a value from my pandas dataframe. I have the labeling part figured out (see code to replicate below), but I still want to convert the labels to scientific notation.</p>
<pre><code>import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
d = {'name': ['experiment1','experiment2'], 'reads': [15000,12000], 'positiveEvents': [40,60]}
df = pd.DataFrame(d)
df['proportianPositive'] = df['positiveEvents']/df['reads']
p = sns.barplot(data=df,x='name',y='positiveEvents', palette = 'colorblind', alpha =0.8)
p.bar_label(p.containers[0],labels=df.proportianPositive, padding = -50, rotation=90)
plt.show()
</code></pre>
<p>Result:</p>
<p><a href="https://i.sstatic.net/AqBe6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AqBe6.png" alt="output script above" /></a></p>
<p>How do I convert df.proportianPositive to scientific notation so that it will show up on my barplot?</p>
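<p>One sketch of a possible approach, reusing the frame from above: pre-format the values with Python's <code>e</code> presentation type, since <code>bar_label</code> accepts any sequence of strings (the <code>.2e</code> precision is an arbitrary choice):</p>

```python
import pandas as pd

d = {"name": ["experiment1", "experiment2"],
     "reads": [15000, 12000],
     "positiveEvents": [40, 60]}
df = pd.DataFrame(d)
df["proportianPositive"] = df["positiveEvents"] / df["reads"]

# Format each proportion in scientific notation up front; bar_label just
# draws whatever strings it is given.
labels = [f"{v:.2e}" for v in df["proportianPositive"]]
# p.bar_label(p.containers[0], labels=labels, padding=-50, rotation=90)
```

<p>The original <code>bar_label</code> call stays the same otherwise, only with <code>labels=labels</code>.</p>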
|
<python><pandas><matplotlib><seaborn><bar-chart>
|
2023-03-24 19:59:39
| 2
| 342
|
Oll
|
75,837,694
| 11,925,464
|
multiprocess loop (python)
|
<p>I have a dataframe for which I'm trying to use multiprocessing to use the available cores more efficiently for loops. There is an example <a href="https://stackoverflow.com/questions/74185743/multiprocessing-a-loop">here</a>; however, I can't work out how 'pool' is applied.</p>
<p>sample df code:</p>
<pre><code>df = pd.DataFrame({
'mac':['type_a','type_a','type_a','type_a','type_a','type_b','type_b','type_b','type_b','type_b'],
'con':['a','a','a','c','b','a','a','b','a','c'],
'result':[1,1,2,2,3,1,1,3,1,2],
})
MAX_NUMBER = 3
for j in range(MAX_NUMBER):
i = j + 1
aft = f"mc{i}_aft"
bef = f"mc{i}_bef"
holder1 = ['con', 'mac', 'result']
holder2 = ['con', 'mac']
df['add'] = (df.groupby(holder1).cumcount() + 1)
df[aft] = df['add'].loc[df.result == i]
df = df.groupby(holder2, group_keys=False).apply(lambda x: x.fillna(method='ffill').fillna(0))
df = df.drop(['add'], axis=1)
df.fillna(0, inplace=True)
df[aft] = df[aft].astype('int')
df[bef] = df.groupby(holder2)[aft].shift(1)
df.fillna(0, inplace=True)
df[bef] = df[bef].astype('int')
</code></pre>
<p>sample df:</p>
<pre>
βββββ¦βββββββββ¦ββββββ¦βββββββββ
β β mac β con β result β
β ββββ¬βββββββββ¬ββββββ¬βββββββββ£
β 0 β type_a β a β 1 β
β 1 β type_a β a β 1 β
β 2 β type_a β a β 2 β
β 3 β type_a β c β 2 β
β 4 β type_a β b β 3 β
β 5 β type_b β a β 1 β
β 6 β type_b β a β 1 β
β 7 β type_b β b β 3 β
β 8 β type_b β a β 1 β
β 9 β type_b β c β 2 β
βββββ©βββββββββ©ββββββ©βββββββββ
</pre>
<p>sample output:</p>
<pre>
βββββ¦βββββββββ¦ββββββ¦βββββββββ¦ββββββββββ¦ββββββββββ¦ββββββββββ¦ββββββββββ¦ββββββββββ¦ββββββββββ
β β mac β con β result β mc1_aft β mc1_bef β mc2_aft β mc2_bef β mc3_aft β mc3_bef β
β ββββ¬βββββββββ¬ββββββ¬βββββββββ¬ββββββββββ¬ββββββββββ¬ββββββββββ¬ββββββββββ¬ββββββββββ¬ββββββββββ£
β 0 β type_a β a β 1 β 1 β 0 β 0 β 0 β 0 β 0 β
β 1 β type_a β a β 1 β 2 β 1 β 0 β 0 β 0 β 0 β
β 2 β type_a β a β 2 β 2 β 2 β 1 β 0 β 0 β 0 β
β 3 β type_a β c β 2 β 0 β 0 β 1 β 0 β 0 β 0 β
β 4 β type_a β b β 3 β 0 β 0 β 0 β 0 β 1 β 0 β
β 5 β type_b β a β 1 β 1 β 0 β 0 β 0 β 0 β 0 β
β 6 β type_b β a β 1 β 2 β 1 β 0 β 0 β 0 β 0 β
β 7 β type_b β b β 3 β 0 β 0 β 0 β 0 β 1 β 0 β
β 8 β type_b β a β 1 β 3 β 2 β 0 β 0 β 0 β 0 β
β 9 β type_b β c β 2 β 0 β 0 β 1 β 0 β 0 β 0 β
βββββ©βββββββββ©ββββββ©βββββββββ©ββββββββββ©ββββββββββ©ββββββββββ©ββββββββββ©ββββββββββ©ββββββββββ
</pre>
<p>P.S. the actual df is much larger and involves more variables and loops, hence the attempt to fully utilize the available cores to save processing time. The loop is a groupby/cumcount of results before (bef) and after (aft) tests.</p>
<p>kindly advise</p>
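<p>Since each iteration <code>i</code> only reads the base columns, one way to use <code>Pool</code> here is to compute each <code>mc{i}</code> column pair in a worker and concatenate the results. A sketch (the ffill steps are rewritten with <code>groupby.ffill</code>, so it should be checked against the real logic):</p>

```python
from multiprocessing import Pool

import pandas as pd

def compute_cols(args):
    """Build the mc{i}_aft / mc{i}_bef pair for one result value i.
    Each i reads only the base columns, so iterations are independent."""
    df, i = args
    aft, bef = f"mc{i}_aft", f"mc{i}_bef"
    out = df[["mac", "con", "result"]].copy()
    out["add"] = out.groupby(["con", "mac", "result"]).cumcount() + 1
    # keep the count only where result == i, forward-fill inside each
    # (con, mac) group, and fill the rest with zeros
    out[aft] = (out["add"].where(out["result"] == i)
                .groupby([out["con"], out["mac"]]).ffill()
                .fillna(0).astype(int))
    out[bef] = (out.groupby(["con", "mac"])[aft].shift(1)
                .fillna(0).astype(int))
    return out[[aft, bef]]

if __name__ == "__main__":
    df = pd.DataFrame({
        "mac": ["type_a"] * 5 + ["type_b"] * 5,
        "con": ["a", "a", "a", "c", "b", "a", "a", "b", "a", "c"],
        "result": [1, 1, 2, 2, 3, 1, 1, 3, 1, 2],
    })
    with Pool() as pool:
        parts = pool.map(compute_cols, [(df, i) for i in range(1, 4)])
    df = pd.concat([df] + parts, axis=1)
    print(df)
```

<p>Note that pickling the frame to every worker has a cost, so this only pays off when the per-iteration work dominates the transfer time.</p>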
|
<python><pandas><multiprocessing>
|
2023-03-24 19:56:45
| 0
| 597
|
ManOnTheMoon
|
75,837,570
| 6,656,081
|
Why is the regex quantifier {n,} more greedy than + (in Python)?
|
<p>I tried to use regexes for finding max-length sequences formed from repeated doubled letters, like <code>AABB</code> in the string <code>xAAABBBBy</code>.</p>
<p>As described in the <a href="https://docs.python.org/3/library/re.html" rel="nofollow noreferrer">official documentation</a>:</p>
<blockquote>
<p>The <code>'*'</code>, <code>'+'</code>, and <code>'?'</code> quantifiers are all <em>greedy</em>; they match as much text as possible.</p>
</blockquote>
<p>When I use the quantifier <code>{n,}</code>, I get a full substring, but <code>+</code> returns only parts:</p>
<pre class="lang-python prettyprint-override"><code>import re
print(re.findall("((AA|BB){3,})", "xAAABBBBy"))
# [('AABBBB', 'BB')]
print(re.findall("((AA|BB)+)", "xAAABBBBy"))
# [('AA', 'AA'), ('BBBB', 'BB')]
</code></pre>
<p>Why is <code>{n,}</code> more greedy than <code>+</code>?</p>
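<p>For reference, inspecting the match start positions suggests the difference is not extra greed but the minimum-repetition requirement rejecting early starting points; a small demonstration:</p>

```python
import re

s = "xAAABBBBy"
# {3,} demands at least three repetitions.  A match attempt anchored at
# the first "AA" (index 1) collects only one repetition before "AB"
# breaks the run, so that attempt fails outright and the scan retries
# further right, succeeding at index 2 with AA + BB + BB.
m = re.search(r"(?:AA|BB){3,}", s)
assert (m.start(), m.group()) == (2, "AABBBB")

# "+" is satisfied by a single repetition, so the left-to-right scan
# commits to the early "AA" at index 1, leaving "BBBB" as its own match.
spans = [(mm.start(), mm.group()) for mm in re.finditer(r"(?:AA|BB)+", s)]
assert spans == [(1, "AA"), (4, "BBBB")]
```
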
|
<python><regex><regex-greedy>
|
2023-03-24 19:39:36
| 1
| 2,602
|
Anton Ganichev
|
75,837,501
| 5,032,387
|
How to change values in matrix to corresponding row value in vector based on condition
|
<p>I'm interested in whether it's possible to do the following with pure numpy.
Let's say I have a matrix a and a vector b. I want to fill whatever values meet the condition to the left of the equals sign with the value from the vector b corresponding to the row in matrix a.</p>
<pre><code>import numpy as np
a = np.arange(30).reshape(5,6)
a[3,3] = 2
a[4,0] = 1
a
array([[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17],
[18, 19, 20, 2, 22, 23],
[ 1, 25, 26, 27, 28, 29]])
b = np.arange(0,5)[:,None]
b
array([[0],
[1],
[2],
[3],
[4]])
a[a<3] = b
</code></pre>
<blockquote>
<p>TypeError: NumPy boolean array indexing assignment requires a 0 or 1-dimensional input, input has 2 dimensions</p>
</blockquote>
<p>I understand why this doesn't work and that I would need to reshape b to the same shape as a, and then subset it by a<3 first</p>
<pre><code>b_mat = np.full(a.shape, b)
a[a<3] = b_mat[a<3]
a
array([[ 0, 0, 0, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17],
[18, 19, 20, 3, 22, 23],
[ 4, 25, 26, 27, 28, 29]])
</code></pre>
<p>However, I'd like to know, is there a way to do this elegantly in numpy without making the 1-dimensional vector b into a matrix where all the rows repeat?</p>
<p>There is a way to do this in Pandas as solved in <a href="https://stackoverflow.com/questions/46766416/how-to-assign-dataframe-boolean-mask-series-make-it-row-wise-i-e-where">this</a> question.</p>
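<p>One pure-numpy sketch that avoids materialising the repeated matrix: <code>np.where</code> broadcasts the <code>(5, 1)</code> vector against the <code>(5, 6)</code> matrix (note it returns a new array rather than assigning in place):</p>

```python
import numpy as np

a = np.arange(30).reshape(5, 6)
a[3, 3] = 2
a[4, 0] = 1
b = np.arange(5)[:, None]  # shape (5, 1)

# Broadcasting stretches b across a's columns, so each row picks its own
# replacement value wherever the condition holds.
a = np.where(a < 3, b, a)
```
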
|
<python><numpy>
|
2023-03-24 19:30:15
| 2
| 3,080
|
matsuo_basho
|
75,837,422
| 4,391,249
|
Is there a Python container that acts like a dictionary but doesn't need both key and value?
|
<p>Say I have:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class Foo:
foo_id: int
# Other interesting fields.
def __hash__(self):
return self.foo_id.__hash__()
</code></pre>
<p>And I make a <code>foos_set = {Foo(i) for i in range(10)}</code>. I had always assumed that <code>set.remove</code> used the hash for constant-time lookup. So I thought it would be reasonable to think that <code>foos_set.remove(6)</code> should work. But actually, it raises a <code>KeyError</code>. You'd need to do <code>foos_set.remove(Foo(6))</code>. In fact, if there are more fields, you need to make sure that all of them match!</p>
<p>I suppose the right thing for me to do is just make a <code>foos_dict = {i: Foo(i) for i in range(10)}</code>. And I'd be happy to do that, but it just feels unnecessarily clunky so here I am asking if there's another container I don't know about.</p>
|
<python>
|
2023-03-24 19:16:03
| 1
| 3,347
|
Alexander Soare
|
75,837,305
| 11,050,535
|
Extract Embedded CSV/Excel File from PDF
|
<p>I have been trying to extract embedded CSV/Excel files from PDF files. I need to extract the embedded files from the PDF and store them in a separate folder, but I have not found a way to do this. I tried using PyPDF2 and PDFSharp, with no success.</p>
<p>Here is the code I have been trying to use. Any suggestions?</p>
<pre><code>import PyPDF2
def getAttachments(reader):
"""
Retrieves the file attachments of the PDF as a dictionary of file names
and the file data as a bytestring.
:return: dictionary of filenames and bytestrings
"""
catalog = reader.trailer["/Root"]
fileNames = catalog['/Names']['/EmbeddedFiles']['/Names']
attachments = {}
for f in fileNames:
if isinstance(f, str):
name = f
dataIndex = fileNames.index(f) + 1
fDict = fileNames[dataIndex].getObject()
fData = fDict['/EF']['/F'].getData()
attachments[name] = fData
return attachments
handler = open('YOURPDFPATH', 'rb')
reader = PyPDF2.PdfFileReader(handler)
dictionary = getAttachments(reader)
print(dictionary)
for fName, fData in dictionary.items():
with open(fName, 'wb') as outfile:
outfile.write(fData)
</code></pre>
|
<python><pypdf>
|
2023-03-24 18:58:59
| 1
| 605
|
Manz
|
75,837,242
| 8,713,442
|
Adding data from dictionary to RDD row
|
<p>I have a data frame and a dictionary as shown below:</p>
<p>I am converting this to an RDD, and then for each Row I need to copy the data from the dictionary result.</p>
<p>This is just an example; I have more than 15 keys in the dictionary (which comes as the result of some logic) that need to be copied to the dataframe.</p>
<p>Can someone please help and explain how we can do it dynamically with RDD?</p>
<pre><code> d =[{"curr_col1": '75757', "curr_col2": 'hello',"curr_col3": 79,"curr_col4": 'pb',"curr_col45": None,"E_N": None}
df = spark.createDataFrame(d)
dict= {'curr_col45':'55','E_N':'55'}
</code></pre>
<p>Output dataframe should be</p>
<pre><code>d =[{"curr_col1": '75757', "curr_col2": 'hello',"curr_col3": 79,"curr_col4": 'pb',"curr_col45": '55',"E_N": '55'}
</code></pre>
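<p>One dynamic sketch (the helper name is made up; <code>Row</code>/<code>asDict</code> in the comment are standard pyspark): treat each row as a plain dict and merge the override dict into it inside a <code>map</code>:</p>

```python
overrides = {"curr_col45": "55", "E_N": "55"}

def apply_overrides(record, overrides):
    """Merge the computed override values into a copy of one row's dict."""
    out = dict(record)
    out.update(overrides)
    return out

# With a real DataFrame this would be something like (untested sketch):
# new_df = df.rdd.map(
#     lambda r: Row(**apply_overrides(r.asDict(), overrides))).toDF()
row = {"curr_col1": "75757", "curr_col45": None, "E_N": None}
merged = apply_overrides(row, overrides)
```

<p>Because the overrides live in one dict, adding a sixteenth key changes nothing in the mapping code.</p>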
|
<python><apache-spark><pyspark><rdd>
|
2023-03-24 18:49:59
| 1
| 464
|
pbh
|
75,837,155
| 3,402,296
|
Parent class attribute does not persist in scope
|
<p>I have an abstract class</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC

import pandas as pd
class StandardClass(ABC):
_my_table = None
@classmethod
def __init__(cls):
if cls._my_table is not None:
return
cls._my_table = pd.read_sql(_my_table_query(), cls._connection)
# Here some abstract methods
</code></pre>
<p>I then have two classes that inherit from this abstract class</p>
<pre class="lang-py prettyprint-override"><code>class ClassFoo(StandardClass):
def __init__(self):
super().__init__()
class ClassBar(StandardClass):
def __init__(self):
super().__init__()
foo = ClassFoo() # Table sql read is performed here as expected
bar = ClassBar() # I would expect the table read to be skipped as it was already performed
# However the table read is performed again even though there should be
                 # only one instance of _my_table (as there should be only one parent class StandardClass)
</code></pre>
<p>Why do we witness this behaviour? Also, what should I do to obtain the desired behaviour?</p>
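<p>A stripped-down sketch (with made-up names) of the mechanism in question: inside a classmethod reached via <code>super().__init__()</code>, <code>cls</code> is bound to the concrete subclass making the call, so the assignment creates a new attribute on that subclass instead of updating the shared one:</p>

```python
class Base:
    _cache = None

    @classmethod
    def load(cls):
        # cls is the *subclass* doing the call, so the assignment below
        # creates a new attribute on that subclass; Base._cache is untouched.
        if cls._cache is None:
            cls._cache = "loaded"

class A(Base):
    pass

class B(Base):
    pass

A.load()
assert A._cache == "loaded"
assert Base._cache is None  # never assigned
assert B._cache is None     # still falls through to Base's None
```

<p>Assigning to <code>StandardClass._my_table</code> explicitly (or caching on the base class) would be one way to share the loaded table across subclasses.</p>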
|
<python><oop><inheritance>
|
2023-03-24 18:40:30
| 0
| 576
|
RDGuida
|
75,837,023
| 688,191
|
SQL code that runs on Azure via GUI fails silently via pyodbc
|
<p><strong>Background</strong>: I am building a complex data analysis tool, and part of the process is to run some pretty deep and complex SQL from python. I am using <code>pyodbc</code> to connect to a SQL server instance hosted on Azure. This code has a loop containing a lot of calculation and row creation.</p>
<p>When I execute the SQL code batch in a GUI tool, in this case SQLPro or MSSQL (which I believe uses the JDBC driver), it completes all expected 25 iterations and proceeds to the post-loop code. However, when I execute the same batch in python via <code>pyodbc</code> and <code>cursor.execute</code>, the loop only executes 5 times before completing, and the code afterwards does not execute (or not <em>fully</em>).</p>
<p>The SQL loop code looks like this:</p>
<pre><code>declare @loop_id int = 1
while (select count(*) from dbo.items_remaining) > 0
begin
...
delete from dbo.items_remaining where ...
insert into dbo.step_tracker select getdate(), @loop_id, 'loop complete'
select @loop_id = @loop_id + 1
end
</code></pre>
<p>I have investigated every difference and angle I can think of, without success. Ultimately, I need to make the code work when being called from python, but I would happily take any advice I can get for debugging. Thanks!</p>
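<p>Two pyodbc-specific angles that may be worth ruling out (hedged, since the Python side isn't shown): pyodbc leaves autocommit off by default, and a multi-statement batch can appear to stop early until each result set is drained with <code>nextset()</code>. A sketch with a hypothetical helper name:</p>

```python
def run_batch(cursor, connection, sql_batch):
    """Hypothetical helper.  With pyodbc, later statements in a batch can
    appear not to run until earlier result sets are drained via nextset(),
    and nothing persists without a commit when autocommit is off."""
    cursor.execute(sql_batch)
    while True:
        try:
            cursor.fetchall()   # drain this statement's result set, if any
        except Exception:
            pass                # statement produced only a row count
        if not cursor.nextset():
            break
    connection.commit()
```

<p>Adding <code>SET NOCOUNT ON</code> at the top of the batch can also cut down on the per-statement row-count results the driver has to step over.</p>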
|
<python><sql-server><azure>
|
2023-03-24 18:21:25
| 1
| 351
|
bengreene
|
75,836,912
| 1,139,286
|
Scipy installation error when installing openai['embeddings'] dependencies with pip
|
<p>When attempting to install openai['embeddings'] with pip, I receive an error when pip attempts to install the scipy dependency. I am trying to avoid using Conda, as I am much more familiar with pip. Please could you tell me how to fix this error?</p>
<p>I am running</p>
<ul>
<li>pyenv 2.3.14</li>
<li>Python 3.10.6</li>
<li>pipenv 2023.3.20</li>
</ul>
<p>on MacOSX.</p>
<p>The error output is below:</p>
<pre><code>Installing openai[embeddings]...
...
Collecting scipy
Downloading scipy-1.10.1.tar.gz (42.4 MB)
       ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 42.4/42.4 MB 15.5 MB/s eta 0:00:00
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'error'
  error: subprocess-exited-with-error
  × Preparing metadata (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [40 lines of output]
The Meson build system
Version: 1.0.1
Source dir:
/private/var/folders/tr/rsy8rmfd40zczm17pl8wyv5w0000gn/T/pip-install-lo2paia0/scipy_737b9929d1564228920272097a8b29c5
Build dir:
/private/var/folders/tr/rsy8rmfd40zczm17pl8wyv5w0000gn/T/pip-install-lo2paia0/scipy_737b9929d1564228920272097a8b29c5/.mesonpy-hg3s
h936/build
Build type: native build
Project name: SciPy
Project version: 1.10.1
C compiler for the host machine: cc (clang 12.0.5 "Apple clang version 12.0.5 (clang-1205.0.22.11)")
C linker for the host machine: cc ld64 650.9
C++ compiler for the host machine: c++ (clang 12.0.5 "Apple clang version 12.0.5 (clang-1205.0.22.11)")
C++ linker for the host machine: c++ ld64 650.9
Cython compiler for the host machine: cython (cython 0.29.33)
Host machine cpu family: aarch64
Host machine cpu: aarch64
Compiler for C supports arguments -Wno-unused-but-set-variable: NO
Compiler for C supports arguments -Wno-unused-function: YES
Compiler for C supports arguments -Wno-conversion: YES
Compiler for C supports arguments -Wno-misleading-indentation: YES
Compiler for C supports arguments -Wno-incompatible-pointer-types: YES
Library m found: YES
Fortran compiler for the host machine: gfortran (gcc 12.2.0 "GNU Fortran (Homebrew GCC 12.2.0) 12.2.0")
Fortran linker for the host machine: gfortran ld64 650.9
Compiler for Fortran supports arguments -Wno-conversion: YES
Checking if "-Wl,--version-script" : links: NO
Program cython found: YES
(/private/var/folders/tr/rsy8rmfd40zczm17pl8wyv5w0000gn/T/pip-build-env-qh0mtg4o/overlay/bin/cython)
Program python found: YES (/Users/tom/.local/share/virtualenvs/skills-9YAMSSbX/bin/python)
Found pkg-config: /opt/homebrew/bin/pkg-config (0.29.2)
Program pythran found: YES
(/private/var/folders/tr/rsy8rmfd40zczm17pl8wyv5w0000gn/T/pip-build-env-qh0mtg4o/overlay/bin/pythran)
Run-time dependency threads found: YES
Library npymath found: YES
Library npyrandom found: YES
Did not find CMake 'cmake'
Found CMake: NO
Run-time dependency openblas found: NO (tried pkgconfig, framework and cmake)
Run-time dependency openblas found: NO (tried pkgconfig and framework)
../../scipy/meson.build:134:0: ERROR: Dependency "OpenBLAS" not found, tried pkgconfig and framework
A full log can be found at
/private/var/folders/tr/rsy8rmfd40zczm17pl8wyv5w0000gn/T/pip-install-lo2paia0/scipy_737b9929d1564228920272097a8b29c5/.mesonpy-hg3s
h936/build/meson-logs/meson-log.txt
+ meson setup --prefix=/Users/tom/.pyenv/versions/3.10.6
/private/var/folders/tr/rsy8rmfd40zczm17pl8wyv5w0000gn/T/pip-install-lo2paia0/scipy_737b9929d1564228920272097a8b29c5
/private/var/folders/tr/rsy8rmfd40zczm17pl8wyv5w0000gn/T/pip-install-lo2paia0/scipy_737b9929d1564228920272097a8b29c5/.mesonpy-hg3s
h936/build
--native-file=/private/var/folders/tr/rsy8rmfd40zczm17pl8wyv5w0000gn/T/pip-install-lo2paia0/scipy_737b9929d1564228920272097a8b29c5
/.mesonpy-native-file.ini -Ddebug=false -Doptimization=2
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
</code></pre>
|
<python><pip><scipy><openai-api>
|
2023-03-24 18:05:28
| 1
| 691
|
Thomas Hopkins
|
75,836,828
| 7,975,785
|
Run tests of another module with pytest
|
<p>I wrote two modules, packageA and packageB. Both have their own battery of tests, but packageB depends on packageA, so I would like to run packageA's tests when I run packageB's.</p>
<p>I can use <code>pytest.main(['--pyargs', 'package_A.tests.tests_A'])</code> in packageB, and it seems to work. However, if there are conflicting options in <code>conftest.py</code>, it all breaks down.</p>
<p>Is there a solution?</p>
<p>Here is a (not) working example:</p>
<p>My folder structure:</p>
<pre><code>- python path
- packageA
- tests
- tests_A.py
- conftest.py
- packageB
- tests
- tests_B.py
- conftest.py
</code></pre>
<p><em>conftest.py</em> is the same in both folders:</p>
<pre><code>def pytest_addoption(parser):
parser.addoption(
"--any_option", action="store_true", default=False
)
</code></pre>
<p><em>tests_A.py</em> contains one test that fails (just to be sure that it runs):</p>
<pre><code>def test_package_A():
assert False
</code></pre>
<p><em>tests_B.py</em> calls the tests in package_A:</p>
<pre><code>import pytest
pytest.main(['--pyargs', 'package_A.tests.tests_A'])
</code></pre>
<p>But pytest does not like overwriting options:</p>
<blockquote>
<p>=========================== short test summary info ===========================</p>
<p>ERROR - ValueError: option names {'--any_option'} already added</p>
<p>!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!</p>
<p>============================== 1 error in 0.09s ===============================</p>
</blockquote>
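One possible workaround (a sketch, making an assumption about your setup: both packages register the exact same flag, so the second registration can safely be skipped) is to make the shared hook tolerant of re-registration, since pytest raises ValueError when the same option name is added twice:

```python
# Duplicate-tolerant conftest.py hook. Assumption: both packages register
# the identical "--any_option" flag, so ignoring the second attempt is safe.
def pytest_addoption(parser):
    try:
        parser.addoption("--any_option", action="store_true", default=False)
    except ValueError:
        # The option was already added by the other package's conftest.
        pass
```

With this guard in both conftest.py files, whichever conftest pytest loads first registers the option and the second registration is silently skipped.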
|
<python><unit-testing><pytest>
|
2023-03-24 17:54:15
| 1
| 1,576
|
Zep
|
75,836,799
| 12,596,824
|
Duplicating index of pandas dataframe and adjusting age column randomly from 0.5-1 years
|
<p>I have the following code where I am duplicating each row in the original dataframe anywhere from 1 to 3 times. With each duplication, though, I want to update the age to a random value 0.5 to 1 years away, always increasing from the last age with the same id. How can I do this?</p>
<pre><code>import numpy as np
import pandas as pd
np.random.seed(42)
d = {'ages': [11,94,30,64,57,19, np.NaN],
'sex': [2,2,2,2,1,1,1]}
df = (
pd
.DataFrame(d)
.dropna()
.reset_index(drop = True)
.assign(id_col = lambda x: x.index + 1)
.loc[lambda x: x.index.repeat(np.random.uniform(1, 4, size = len(x)))]
)
</code></pre>
<p>Expected output might look something like this:</p>
<pre><code> ages sex id_col
0 11.0 2 1
0 11.56 2 1
1 94.0 2 2
1 94.2 2 2
1 94.56 2 2
2 30.0 2 3
2 30.67 2 3
2 30.89 2 3
3 64.0 2 4
3 64.11 2 4
4 57.0 1 5
5 19.0 1 6
</code></pre>
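One vectorized way to sketch this (with two assumptions: the repeat counts are cast to int explicitly, which recent NumPy versions require, and the first duplicate of each id keeps the original age):

```python
import numpy as np
import pandas as pd

np.random.seed(42)
d = {'ages': [11, 94, 30, 64, 57, 19, np.nan],
     'sex': [2, 2, 2, 2, 1, 1, 1]}
df = (
    pd.DataFrame(d)
    .dropna()
    .reset_index(drop=True)
    .assign(id_col=lambda x: x.index + 1)
    # cast to int so the repeat counts are valid for Index.repeat
    .loc[lambda x: x.index.repeat(np.random.uniform(1, 4, size=len(x)).astype(int))]
)

# Draw a 0.5-1 year step for every row, zero out the first row of each id
# (so it keeps the original age), then cumulatively sum the steps per id.
steps = np.random.uniform(0.5, 1.0, size=len(df))
steps[df.groupby(level=0).cumcount().to_numpy() == 0] = 0.0
offsets = pd.Series(steps).groupby(df['id_col'].to_numpy()).cumsum().to_numpy()
df = df.assign(ages=df['ages'].to_numpy() + offsets)
```

Grouping by the raw numpy array of id values (rather than by label) avoids index-alignment problems caused by the duplicated index after the repeat.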
|
<python><pandas>
|
2023-03-24 17:51:07
| 4
| 1,937
|
Eisen
|
75,836,730
| 234,593
|
Paramiko fails connecting /w "Private key file is encrypted" when upgrading to 2.9.0+
|
<p>The following works fine using paramiko 2.8.1, but fails with any 2.9.0+:</p>
<pre class="lang-py prettyprint-override"><code> pkey = paramiko.RSAKey.from_private_key(pkey_str)
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.WarningPolicy())
ssh.connect(host, port=int(port), username=username, pkey=pkey) # paramiko.ssh_exception.PasswordRequiredException: Private key file is encrypted
return ssh.open_sftp()
</code></pre>
<p>Same SFTP host, same user, same key, etc. I've looked at <a href="https://www.paramiko.org/changelog.html#2.9.0" rel="nofollow noreferrer">paramiko's changelog for 2.9.0</a>, which says:</p>
<blockquote>
<p>Specifically, you need to specify <code>disabled_algorithms={'keys': ['rsa-sha2-256', 'rsa-sha2-512']}</code> in either <code>SSHClient</code> or <code>Transport</code>.</p>
</blockquote>
<p>But that doesn't change anything:</p>
<pre class="lang-py prettyprint-override"><code> # still: paramiko.ssh_exception.PasswordRequiredException: Private key file is encrypted
key_cfg = {'keys': ['rsa-sha2-256', 'rsa-sha2-512']}
ssh.connect(host, port=int(port), username=username, pkey=pkey, disabled_algorithms=key_cfg)
</code></pre>
|
<python><paramiko><ssh-keys>
|
2023-03-24 17:40:44
| 1
| 17,007
|
Kache
|
75,836,583
| 1,656,343
|
python asyncio parallel processing with a dynamic tasks queue
|
<p>I'm new to <code>asyncio</code>. Most <code>asyncio</code> code examples show parallel processing with a fixed number of tasks:</p>
<pre class="lang-py prettyprint-override"><code>tasks = [asyncio.ensure_future(download_one(url)) for url in urls]
await asyncio.gather(*tasks)
</code></pre>
<p>I need to download a large number of URLs. I'm currently using <code>aiohttp + asyncio</code> in the aforementioned way to download the files in batches of, let's say, 16 concurrent tasks.</p>
<p>The problem with this approach is that the <code>asyncio</code> operation blocks <em>(no pun intended)</em> until the entire batch has been completed.</p>
<p>How can I dynamically add a new task to the queue as soon as a task is finished?</p>
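One common pattern (sketched below with a simulated download standing in for the real aiohttp call) is to create all tasks up front but gate them with an asyncio.Semaphore: at most 16 run at once, and the next pending task starts the moment a slot frees up, so no batch boundary ever blocks:

```python
import asyncio

async def download_one(url: str) -> str:
    # Stand-in for the real aiohttp download (assumption: any awaitable I/O).
    await asyncio.sleep(0.001)
    return url

async def bounded_download(sem: asyncio.Semaphore, url: str) -> str:
    async with sem:  # at most `limit` downloads hold the semaphore at once
        return await download_one(url)

async def main(urls, limit: int = 16):
    sem = asyncio.Semaphore(limit)
    return await asyncio.gather(*(bounded_download(sem, u) for u in urls))

urls = [f"https://example.com/file/{i}" for i in range(100)]
results = asyncio.run(main(urls))
```

When URLs arrive dynamically rather than as a fixed list, an asyncio.Queue consumed by a fixed pool of worker tasks achieves the same effect.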
|
<python><asynchronous><python-asyncio><threadpool><aiohttp>
|
2023-03-24 17:20:39
| 2
| 10,192
|
masroore
|
75,836,118
| 5,881,882
|
PyTorch: Interpolate - Input and output must have the same number of spatial dimensions, but got input with spatial dimensions
|
<p>I am following a U-Net tutorial and I am currently stuck with some interpolation. My x comes as <code>[256, 16384]</code>, which is a batch of 256 with 1 channel where the 128x128 image is flattened to 16384.
Thus, I reshape x to <code>[256, 1, 128, 128]</code>. Then I go through some transformations, and my out prior to a reshape is [256, 1, 32, 32]. I reshape it to [256, 1, 32x32] and finally squeeze it to [256, 1296]. My printouts show me that everything up to then is correct. Finally, I would like to interpolate back to [256, 16384].</p>
<pre><code> def forward(self, x):
print(f'dim of x is {x.shape}')
src_dims = (x.shape[0], 1, 128, 128)
z = self.encoder(torch.reshape(x, src_dims))
out = self.decoder(z[::-1][0], z[::-1][1:])
out = self.head(out)
out = torch.reshape(out, (out.shape[0], 1, out.shape[2] * out.shape[3]))
out = torch.squeeze(out)
print(f'shape of OUT PRIOR interpolate is {out.shape}')
if self.retain_dim:
out = F.interpolate(out, (x.shape[0], x.shape[1]))
print(f'shape of OUT after squeeze is {out.shape}')
z = z[0]
z = self.head(z)
z = torch.squeeze(z)
return out, z
</code></pre>
<p>My print-outs before interpolate are:</p>
<blockquote>
<p>dim of x is torch.Size([256, 16384])
shape of OUT PRIOR interpolate is torch.Size([256, 1296])</p>
</blockquote>
<p>So now I assume that if I plug it into F.interpolate, it should interpolate a [256, 1296] tensor to a [256, 16384] tensor.</p>
<p>Nevertheless, I get this error:</p>
<pre><code>
ValueError: Input and output must have the same number of spatial dimensions,
but got input with spatial dimensions of [] and output size of (256, 16384).
Please provide input tensor in (N, C, d1, d2, ...,dK) format and output size
in (o1, o2, ...,oK) format.
</code></pre>
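For reference, F.interpolate treats every dimension after (N, C) as spatial, and size must name only those spatial dimensions. A squeezed [256, 1296] tensor has no spatial dimensions at all, which is what the error reports. A minimal sketch of upsampling it to [256, 16384] (assuming 1D linear interpolation is acceptable here) keeps an explicit channel axis:

```python
import torch
import torch.nn.functional as F

out = torch.randn(256, 1296)   # [N, L]: no channel/spatial split yet
x3d = out.unsqueeze(1)         # [256, 1, 1296]: (N, C, d1) format
# size covers only the spatial dims, so a single int here, not (256, 16384)
up = F.interpolate(x3d, size=16384, mode='linear', align_corners=False)
up = up.squeeze(1)             # back to [256, 16384]
```

The same idea applies before the squeeze in the forward pass: interpolate while the tensor is still in (N, C, d1) form, then squeeze.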
|
<python><pytorch>
|
2023-03-24 16:28:15
| 0
| 388
|
Alex
|
75,835,942
| 3,261,292
|
Python BeautifulSoup issue in extracting direct text in a given html tag
|
<p>I am trying to extract direct text in a given HTML tag. Simply, for <code><p> Hello! </p></code>, the direct text is <code>Hello!</code>. The code works well except with the case below.</p>
<pre><code>from bs4 import BeautifulSoup
soup = BeautifulSoup('<div> <i> </i> FF Services </div>', "html.parser")
for tag in soup.find_all():
direct_text = tag.find(string=True, recursive=False)
print(tag, ':', direct_text)
</code></pre>
<p>Output:</p>
<pre><code><div> <i> </i> FF Services </div> : 
<i> </i> : 
</code></pre>
<p>The first printed output should be <code><div> <i> </i> FF Services </div> : FF Services </code>, but it skips <code>FF Services</code>. I found that when I delete <code><i> </i></code> the code works fine.</p>
<p>What's the problem here?</p>
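What happens here: the div's direct text is split around the <i> element, and find(string=True, recursive=False) returns only the first direct child string, which is the whitespace before <i>. One way around this (a sketch) is to collect all direct child strings and strip the whitespace:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<div> <i> </i> FF Services </div>', 'html.parser')
for tag in soup.find_all():
    # find_all with recursive=False returns every direct child string,
    # including whitespace-only ones; join and strip to get the real text.
    direct_text = ''.join(tag.find_all(string=True, recursive=False)).strip()
    print(tag, ':', direct_text)
```

This prints 'FF Services' for the div and an empty string for the whitespace-only <i>.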
|
<python><html><beautifulsoup><text-extraction>
|
2023-03-24 16:09:32
| 2
| 5,527
|
Minions
|