Dataset schema (value ranges as shown by the dataset viewer):
- QuestionId: int64, 74.8M to 79.8M
- UserId: int64, 56 to 29.4M
- QuestionTitle: string, 15 to 150 characters
- QuestionBody: string, 40 to 40.3k characters
- Tags: string, 8 to 101 characters
- CreationDate: date string, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
- AnswerCount: int64, 0 to 44
- UserExpertiseLevel: int64, 301 to 888k
- UserDisplayName: string, 3 to 30 characters
78,408,640
86,137
Problem with Python f-string formatting of struct_time
<p>I have the following code:</p> <pre><code>oStat = os.stat(oFile)
print(time.strftime('%H:%M:%S', time.localtime(oStat.st_mtime)))
print(f&quot;{time.localtime(oStat.st_mtime):%H:%M:%S}&quot;)
</code></pre> <p>The first print statement works as expected; the f-string gives me:</p> <pre><code>    print(f&quot;{time.localtime(oStat.st_mtime):%H:%M:%S}&quot;)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: unsupported format string passed to time.struct_time.__format__
</code></pre> <p>The desired output should be a time, e.g.:</p> <p>14:10:02</p> <p>Why is this an error and how can I fix it?</p> <p>I've tried various combinations but none work.</p>
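A possible fix (my addition, not part of the question): `time.struct_time` does not implement `__format__`, so a format spec in an f-string fails on it. `datetime` objects do implement `__format__`, so converting the timestamp to a `datetime` makes the f-string spec work; alternatively, the explicit `strftime` call can simply be kept.

```python
import datetime
import time

ts = time.time()  # any POSIX timestamp, e.g. os.stat(...).st_mtime

# Option 1: format the struct_time explicitly with time.strftime
print(time.strftime('%H:%M:%S', time.localtime(ts)))

# Option 2: datetime implements __format__, so the f-string spec works
dt = datetime.datetime.fromtimestamp(ts)
print(f"{dt:%H:%M:%S}")
```

Both options read the same local time, so they print identical strings.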
<python><time><f-string>
2024-04-30 12:51:38
2
524
Hipponax43
78,408,251
2,192,488
PyQt5: How to download icon for QWebEngineView?
<p>I keep on struggling with something as simple as showing a <code>favicon.ico</code> with PyQt5's <code>QWebEngineView</code>.</p> <p>Both tracing and the testing server tell me that <code>pixmap</code> gets downloaded, but it simply does not show. Conversely, if I replace <code>pixmap</code> with a local filename, it shows.</p> <p>I checked similar questions here, but none of the <a href="https://stackoverflow.com/a/48921358/2192488">answers</a> seem to work. So, please, post only tested answers. Thanks!</p> <pre class="lang-py prettyprint-override"><code>import sys  # needed for sys.argv and sys.exit below

from PyQt5.Qt import *
from PyQt5.QtCore import QUrl
from PyQt5.QtGui import QIcon, QPixmap
from PyQt5.QtNetwork import QNetworkAccessManager, QNetworkRequest
from PyQt5.QtWebEngineWidgets import *
from PyQt5.QtWidgets import QApplication

url_web = 'https://hamwaves.com/swx/index.html'
url_ico = 'https://hamwaves.com/swx/favicon.ico'


class Browser(QWebEngineView):

    def __init__(self):
        QWebEngineView.__init__(self)
        self.nam = QNetworkAccessManager()
        self.nam.finished.connect(self.set_icon)
        self.nam.get(QNetworkRequest(QUrl(url_ico)))

    def set_icon(self, response):
        pixmap = QPixmap()
        pixmap.loadFromData(response.readAll(), format='ico')
        self.setWindowIcon(QIcon(pixmap))


app = QApplication(sys.argv)
web = Browser()
web.load(QUrl(url_web))
web.show()
sys.exit(app.exec_())
</code></pre>
<python><security><pyqt5><content-security-policy><qwebengineview>
2024-04-30 11:39:50
1
32,046
Serge Stroobandt
78,408,226
5,660,533
How to parse credit card expiration date format correctly using parser?
<p>I am using the parser from the dateutil Python package. It works fine for most dates, but in several cases it behaves unexpectedly. One example: for a date string such as <code>5/23</code>, I mean May of 2023, but the parser parses this as <code>2024-05-23 00:00:00</code>. As you can see, it took 23 as the day and 5 as the month (which is correct), and it filled in the year <code>2024</code>, I am guessing because that is the current year. How can I force my parser not to do this?</p> <p><strong>Things I have noticed</strong>: I know that if I give a 4-digit year, it does not use the default current year and the value is parsed correctly, like <code>06/2025</code>. I <a href="https://dateutil.readthedocs.io/en/stable/parser.html" rel="nofollow noreferrer">tried looking at the docs of the parser library</a> but did not find anything useful.</p> <p>Another weird thing I noticed: for dates like <code>5/86</code> it uses 1900, I am guessing because it is closer, so <code>5/86</code> is parsed as <code>1986-05-30 00:00:00</code>. Is there any option to stop this and only take values in the future? Because for credit cards we can't have expiration dates in the past. I know we won't have credit cards with expiration dates 80 years in the future, but my code specifically demands that we don't get a past date.</p> <p>My code:</p> <pre><code>from dateutil import parser
from datetime import datetime

def date_string_to_datetime_conversion(date_string: str):
    try:
        return parser.parse(date_string, dayfirst=True)
    except (ValueError, TypeError) as e:
        try:
            return parser.parse(date_string)
        except (ValueError, TypeError) as e:
            raise InvalidData(
                f&quot;The expiration_date is a required column in a bankcard and must be in a valid date format. &quot;
                f&quot;Additional info {e}&quot;
            )

date_string_list = ['05/23','6/2025','12/12','12/31','5/86','4/32']
# 2024-05-23 00:00:00
# 2025-06-30 00:00:00
# 2024-12-12 00:00:00
# 2024-12-31 00:00:00
# 1986-05-30 00:00:00
# 2032-04-30 00:00:00

for date_time in date_string_list:
    print(date_string_to_datetime_conversion(date_time))
</code></pre>
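A possible workaround (my own suggestion, not from the question): since the input format is known to be MM/YY or MM/YYYY, it may be safer to skip dateutil's guessing entirely and parse with an explicit split, then bump two-digit years forward a century whenever the result would land in the past. The `parse_expiry` helper below is illustrative, not part of any library.

```python
import calendar
from datetime import datetime

def parse_expiry(text, today=None):
    """Parse 'MM/YY' or 'MM/YYYY' as the last day of that month, never in the past."""
    today = today or datetime.now()
    month_s, year_s = text.split("/")
    month, year = int(month_s), int(year_s)
    if year < 100:                          # two-digit year: anchor to current century
        year += (today.year // 100) * 100
        if year < today.year:               # already past -> roll to the next century
            year += 100
    last_day = calendar.monthrange(year, month)[1]  # cards expire at month end
    return datetime(year, month, last_day)

print(parse_expiry("5/23", today=datetime(2022, 1, 1)))   # 2023-05-31 00:00:00
print(parse_expiry("5/86", today=datetime(2024, 4, 30)))  # 2086-05-31 00:00:00
```

This also fixes the `5/86 -> 1986` case, since a two-digit year is always pushed into the present or future century.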
<python><python-3.x><datetime><validation><parsing>
2024-04-30 11:35:05
1
740
Rohit Kumar
78,408,063
1,472,474
get current UTC posix timestamp in better resolution than seconds in Python
<p>I am trying to get a UTC posix timestamp in better resolution than seconds – milliseconds would be acceptable, but microseconds would be better. I need that as a synchronization for unrelated external counter/timer HW which runs in nanoseconds from 0 on powerup.</p> <p>Which means I want some &quot;absolute&quot; time to have a pair of <code>(my_absolute_utc_timestamp, some_counter_ns)</code> to be able to interpret subsequent counter/timer values.</p> <p>And I need <strong>at least</strong> milliseconds precision. I'd like it to be an <code>int</code> value, so I have no problems with floating point arithmetic precision loss.</p> <h2>What have I tried:</h2> <ol> <li><p><code>time.time_ns()</code></p> <ul> <li>I thought this was it, but it's local time.</li> </ul> </li> <li><p><code>time.mktime(time.gmtime())</code></p> <ul> <li>for some strange reason, this is one hour more than UTC time: <pre><code>&gt;&gt;&gt; time.mktime(time.gmtime()) - datetime.datetime.utcnow().timestamp()
3599.555135011673
</code></pre> </li> <li>and it has only seconds precision</li> </ul> </li> <li><p>I ended up with <code>int(datetime.datetime.utcnow().timestamp() * 1000000)</code> as &quot;utc_microseconds&quot;, which works, but:</p> <ul> <li>there may be problems with floating precision.</li> <li>it seems too complicated and I just don't like it.</li> </ul> </li> </ol> <h2>Question:</h2> <p>Is there any better way to get a microseconds or milliseconds UTC posix timestamp in Python? Using the Python standard library is preferred.</p> <p>I'm using Python 3.10.</p>
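A sketch of a likely answer (my addition, not from the question): `time.time_ns()` is not local time — it counts nanoseconds since the Unix epoch (1970-01-01 00:00:00 UTC), which is the same instant everywhere, so integer division gives an exact microsecond or millisecond POSIX timestamp. The one-hour offset in attempt 2 comes from `time.mktime` interpreting its argument as local time.

```python
import time
from datetime import datetime, timezone

# time.time_ns() counts nanoseconds since the Unix epoch (UTC-based);
# no timezone conversion and no floating point are involved.
utc_us = time.time_ns() // 1_000        # microseconds, exact int arithmetic
utc_ms = time.time_ns() // 1_000_000    # milliseconds

# Cross-check against an explicitly UTC-aware datetime:
ref_us = int(datetime.now(timezone.utc).timestamp() * 1_000_000)
print(abs(ref_us - utc_us) < 1_000_000)  # the two agree to well under a second
```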
<python><time><utc><unix-timestamp>
2024-04-30 11:03:41
1
5,587
Jan Spurny
78,407,966
4,111,177
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() when training CrossEncoder
<p>I'm (more or less) following the <a href="https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/cross-encoder/training_quora_duplicate_questions.py" rel="nofollow noreferrer">training_quora_duplicate_questions.py</a> example using my own data.</p> <p>As I understood from the <a href="https://www.sbert.net/examples/applications/cross-encoder/README.html" rel="nofollow noreferrer">cross encoder page</a>, and I quote:</p> <blockquote> <p>For binary tasks and tasks with continuous scores (like STS), we set num_labels=1. For classification tasks, we set it to the number of labels we have.</p> </blockquote> <p>As such I've written the model thusly:</p> <pre><code>model = CrossEncoder(&quot;distilroberta-base&quot;, num_labels=2)
</code></pre> <p>and the evaluator thusly:</p> <pre><code>evaluator = CEBinaryClassificationEvaluator.from_input_examples(dev_examples, name=&quot;Rooms-dev&quot;)
</code></pre> <p>Yet, when I run the code, as soon as it tries to evaluate for the first time it gives the following error:</p> <pre><code>ValueError: The truth value of an array with more than one element is ambiguous.
Use a.any() or a.all()
</code></pre> <p>I tried the same code before with num_labels=1 and then it worked, though it isn't (I think) quite what I'd like, since I want the model to predict either 0 or 1 and not a continuous score from 0 to 1.</p> <p>Any idea what might be causing this?</p> <p>Code and error message below:</p> <pre><code># extracting rooms &amp; labels into lists
rooms_1 = rooms_df[&quot;room_ner1&quot;].tolist()
rooms_2 = rooms_df[&quot;room_ner2&quot;].tolist()
labels = rooms_df[&quot;label&quot;].tolist()

# creating array of room-pairs
dataset = [
    [original, candidate, label]
    for original, candidate, label in zip(rooms_1, rooms_2, labels)
]

random.shuffle(dataset)

dev_sample_size = int(len(dataset) * 0.10)
dev_data = dataset[:dev_sample_size]
train_data = dataset[dev_sample_size:]

# preparing training dataset
train_examples = list()
n_examples = len(train_data)

# creating training dataset
for i in trange(n_examples):
    example = train_data[i]
    train_examples.append(InputExample(texts=[example[0], example[1]], label=example[2]))
    train_examples.append(InputExample(texts=[example[1], example[0]], label=example[2]))

# preparing test dataset
dev_examples = list()
d_examples = len(dev_data)

# creating test dataset
for i in trange(d_examples):
    example = dev_data[i]
    dev_examples.append(InputExample(texts=[example[0], example[1]], label=example[2]))

train_batch_size = 16
num_epochs = 4
model_save_path = &quot;output/training_rooms-&quot; + datetime.now().strftime(&quot;%Y-%m-%d_%H-%M-%S&quot;)

model = CrossEncoder(&quot;distilroberta-base&quot;, num_labels=2)  # , device=&quot;mps&quot;)

train_dataloader = DataLoader(
    train_examples, shuffle=True, batch_size=train_batch_size
)
evaluator = CEBinaryClassificationEvaluator.from_input_examples(dev_examples, name=&quot;Rooms-dev&quot;)

warmup_steps = math.ceil(len(train_dataloader) * num_epochs * 0.1)  # 10% train data for warm-up
logger.info(f&quot;Warmup-steps: {warmup_steps:_}&quot;)

model.fit(
    train_dataloader=train_dataloader,
    evaluator=evaluator,
    epochs=num_epochs,
    evaluation_steps=5_000,
    warmup_steps=warmup_steps,
    output_path=model_save_path
)
</code></pre> <pre><code>---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[67], line 1
----&gt; 1 model.fit(
      2     train_dataloader= train_dataloader,
      3     evaluator= evaluator,
      4     epochs= num_epochs,
      5     evaluation_steps= 5_000,
      6     warmup_steps= warmup_steps,
      7     output_path= model_save_path
      8 )

File ~/IronHack/GitHub/room-matching/.venv/lib/python3.11/site-packages/sentence_transformers/cross_encoder/CrossEncoder.py:275, in CrossEncoder.fit(self, train_dataloader, evaluator, epochs, loss_fct, activation_fct, scheduler, warmup_steps, optimizer_class, optimizer_params, weight_decay, evaluation_steps, output_path, save_best_model, max_grad_norm, use_amp, callback, show_progress_bar)
    272 training_steps += 1
    274 if evaluator is not None and evaluation_steps &gt; 0 and training_steps % evaluation_steps == 0:
--&gt; 275     self._eval_during_training(
    276         evaluator, output_path, save_best_model, epoch, training_steps, callback
    277     )
    279 self.model.zero_grad()
    280 self.model.train()

File ~/IronHack/GitHub/room-matching/.venv/lib/python3.11/site-packages/sentence_transformers/cross_encoder/CrossEncoder.py:450, in CrossEncoder._eval_during_training(self, evaluator, output_path, save_best_model, epoch, steps, callback)
    448 &quot;&quot;&quot;Runs evaluation during the training&quot;&quot;&quot;
    449 if evaluator is not None:
--&gt; 450     score = evaluator(self, output_path=output_path, epoch=epoch, steps=steps)
    451     if callback is not None:
    452         callback(score, epoch, steps)

File ~/IronHack/GitHub/room-matching/.venv/lib/python3.11/site-packages/sentence_transformers/cross_encoder/evaluation/CEBinaryClassificationEvaluator.py:81, in CEBinaryClassificationEvaluator.__call__(self, model, output_path, epoch, steps)
     76 logger.info(&quot;CEBinaryClassificationEvaluator: Evaluating the model on &quot; + self.name + &quot; dataset&quot; + out_txt)
     77 pred_scores = model.predict(
     78     self.sentence_pairs, convert_to_numpy=True, show_progress_bar=self.show_progress_bar
     79 )
---&gt; 81 acc, acc_threshold = BinaryClassificationEvaluator.find_best_acc_and_threshold(pred_scores, self.labels, True)
     82 f1, precision, recall, f1_threshold = BinaryClassificationEvaluator.find_best_f1_and_threshold(
     83     pred_scores, self.labels, True
     84 )
     85 ap = average_precision_score(self.labels, pred_scores)

File ~/IronHack/GitHub/room-matching/.venv/lib/python3.11/site-packages/sentence_transformers/evaluation/BinaryClassificationEvaluator.py:226, in BinaryClassificationEvaluator.find_best_acc_and_threshold(scores, labels, high_score_more_similar)
    223 assert len(scores) == len(labels)
    224 rows = list(zip(scores, labels))
--&gt; 226 rows = sorted(rows, key=lambda x: x[0], reverse=high_score_more_similar)
    228 max_acc = 0
    229 best_threshold = -1

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre> <p>Thank you in advance for any help.</p>
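The error can be reproduced in isolation (a sketch of the likely mechanism, my addition rather than something from the question): with `num_labels=2` the evaluator receives one score *array* per sentence pair instead of a scalar, and sorting those arrays triggers the ambiguous-truth-value error inside `find_best_acc_and_threshold`. The usual route for a binary task is to keep `num_labels=1` and threshold the continuous score.

```python
import numpy as np

# With num_labels=2, each prediction is a length-2 array (one score per class).
pred_scores = np.array([[0.1, 0.9], [0.8, 0.2]])
labels = [1, 0]

# The binary evaluator sorts (score, label) pairs; comparing two arrays
# yields an array of booleans, whose truth value is ambiguous:
rows = list(zip(pred_scores, labels))
try:
    sorted(rows, key=lambda x: x[0], reverse=True)
except ValueError as e:
    print(e)

# With num_labels=1 each score is a scalar, so the same sort works fine:
scalar_scores = [0.9, 0.2]
print(sorted(zip(scalar_scores, labels), key=lambda x: x[0], reverse=True))
```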
<python><sentence-transformers>
2024-04-30 10:45:52
1
2,263
duarte harris
78,407,951
5,368,122
Null conversion error - polars.exceptions.ComputeError - pandera(0.19.0b3) with polars
<p>Note: you can install the specific pandera version using</p> <blockquote> <p>pip install --pre 'pandera[polars]'</p> </blockquote> <p>We are trying a simple validation example using polars. We can't understand the problem or why it originates, but it throws a <strong>polars.exceptions.ComputeError</strong> exception when any of the validations fails and there is a null in the data.</p> <p>For example, in the code below, the dummy data contains an <strong>extract_date</strong> feature with a None. It runs fine if the <strong>case_id</strong> values are all int-convertible strings, but throws the exception if any of the case_id values is not int-convertible.</p> <p><strong>Here is the code:</strong></p> <pre><code>import pandera.polars as pa
import polars as pl
from datetime import date
import json

class CaseSchema(pa.DataFrameModel):
    case_id: int = pa.Field(nullable=False, unique=True, coerce=True)
    gdwh_portfolio_id: str = pa.Field(nullable=False, unique=True, coerce=True)
    extract_date: date = pa.Field(nullable=True, coerce=True)

    class Config:
        drop_invalid_rows = True

invalid_lf = pl.DataFrame({
    #&quot;case_id&quot;: [&quot;1&quot;, &quot;2&quot;, &quot;3&quot;],
    &quot;case_id&quot;: [&quot;1&quot;, &quot;2&quot;, &quot;abc&quot;],
    &quot;gdwh_portfolio_id&quot;: [&quot;d&quot;, &quot;e&quot;, &quot;f&quot;],
    &quot;extract_date&quot;: [date(2024,1,1), date(2024,1,2), None]
})

try:
    CaseSchema.validate(invalid_lf, lazy=True)
except pa.errors.SchemaErrors as e:
    print(json.dumps(e.message, indent=4))
</code></pre> <p>It gives: <strong>'failure_case' for 1 out of 1 values: [{&quot;abc&quot;,&quot;f&quot;,null}]</strong>. If you uncomment <code>&quot;case_id&quot;: [&quot;1&quot;, &quot;2&quot;, &quot;3&quot;]</code> and comment out <code>&quot;case_id&quot;: [&quot;1&quot;, &quot;2&quot;, &quot;abc&quot;]</code>, it runs fine.</p> <p>Not sure why it panics when there are nulls. If there are no nulls in the data it works fine.</p> <p>The trace we get is:</p> <pre><code>Traceback (most recent call last):
  File &quot;&lt;frozen runpy&gt;&quot;, line 198, in _run_module_as_main
  File &quot;&lt;frozen runpy&gt;&quot;, line 88, in _run_code
  File &quot;/mnt/batch/tasks/shared/LS_root/mounts/clusters/erehoba-acc-payments-req/code/Users/ourrehman/dna-payments-and-accounts/data_validation/test.py&quot;, line 22, in &lt;module&gt;
    CaseSchema.validate(invalid_lf, lazy=True)
  File &quot;/anaconda/envs/pandera-polars/lib/python3.11/site-packages/pandera/api/dataframe/model.py&quot;, line 289, in validate
    cls.to_schema().validate(
  File &quot;/anaconda/envs/pandera-polars/lib/python3.11/site-packages/pandera/api/polars/container.py&quot;, line 58, in validate
    output = self.get_backend(check_obj).validate(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/anaconda/envs/pandera-polars/lib/python3.11/site-packages/pandera/backends/polars/container.py&quot;, line 65, in validate
    check_obj = parser(check_obj, *args)
                ^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/anaconda/envs/pandera-polars/lib/python3.11/site-packages/pandera/backends/polars/container.py&quot;, line 398, in coerce_dtype
    check_obj = self._coerce_dtype_helper(check_obj, schema)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/anaconda/envs/pandera-polars/lib/python3.11/site-packages/pandera/backends/polars/container.py&quot;, line 486, in _coerce_dtype_helper
    raise SchemaErrors(
  File &quot;/anaconda/envs/pandera-polars/lib/python3.11/site-packages/pandera/errors.py&quot;, line 183, in __init__
    ).failure_cases_metadata(schema.name, schema_errors)
  File &quot;/anaconda/envs/pandera-polars/lib/python3.11/site-packages/pandera/backends/polars/base.py&quot;, line 173, in failure_cases_metadata
    ).cast(
  File &quot;/anaconda/envs/pandera-polars/lib/python3.11/site-packages/polars/dataframe/frame.py&quot;, line 6624, in cast
    return self.lazy().cast(dtypes, strict=strict).collect(_eager=True)
  File &quot;/anaconda/envs/pandera-polars/lib/python3.11/site-packages/polars/lazyframe/frame.py&quot;, line 1810, in collect
    return wrap_df(ldf.collect())
polars.exceptions.ComputeError: conversion from `struct[3]` to `str` failed in column 'failure_case' for 1 out of 1 values: [{&quot;abc&quot;,&quot;f&quot;,null}]
</code></pre> <h4>Expected behavior</h4> <p>It should work with columns that have nulls and are set <strong>nullable=True</strong>.</p> <h4>versions</h4> <p>pandera: 0.19.0b3<br> polars: 0.20.23<br> python: 3.11</p>
<python><python-polars><pandera>
2024-04-30 10:42:07
1
844
Obiii
78,407,858
13,942,929
How do I create a constructor that would receive different types of parameters?
<p>I have this Point class. I want it to be able to receive <code>double</code> and <code>SomeType</code> parameters.</p> <p><code>Point.pxd</code>:</p> <pre><code>from libcpp.memory cimport shared_ptr, weak_ptr, make_shared
from SomeType cimport _SomeType, SomeType

cdef extern from &quot;Point.h&quot;:
    cdef cppclass _Point:
        _Point(shared_ptr[double] x, shared_ptr[double] y)
        _Point(shared_ptr[double] x, shared_ptr[double] y, shared_ptr[double] z)
        _Point(shared_ptr[_SomeType] x, shared_ptr[_SomeType] y)
        _Point(shared_ptr[_SomeType] x, shared_ptr[_SomeType] y, shared_ptr[_SomeType] z)
        shared_ptr[_SomeType] get_x()
        shared_ptr[_SomeType] get_y()
        shared_ptr[_SomeType] get_z()

cdef class Point:
    cdef shared_ptr[_Point] c_point
</code></pre> <p><code>Point.pyx</code>:</p> <pre><code>from Point cimport *

cdef class Point:

    def __cinit__(self, SomeType x=SomeType(&quot;0&quot;, None), SomeType y=SomeType(&quot;0&quot;, None), SomeType z=SomeType(&quot;0&quot;, None)):
        self.c_point = make_shared[_Point](x.thisptr, y.thisptr, z.thisptr)

    def __dealloc__(self):
        self.c_point.reset()

    def get_x(self) -&gt; SomeType:
        cdef shared_ptr[_SomeType] result = self.c_point.get().get_x()
        cdef SomeType coord = SomeType(&quot;&quot;, None, make_with_pointer=True)
        coord.thisptr = result
        return coord

    def get_y(self) -&gt; SomeType:
        cdef shared_ptr[_SomeType] result = self.c_point.get().get_y()
        cdef SomeType coord = SomeType(&quot;&quot;, None, make_with_pointer=True)
        coord.thisptr = result
        return coord

    def get_z(self) -&gt; SomeType:
        cdef shared_ptr[_SomeType] result = self.c_point.get().get_z()
        cdef SomeType coord = SomeType(&quot;&quot;, None, make_with_pointer=True)
        coord.thisptr = result
        return coord

    property x:
        def __get__(self):
            return self.get_x()

    property y:
        def __get__(self):
            return self.get_y()

    property z:
        def __get__(self):
            return self.get_z()
</code></pre> <p>How should I write my <code>.pxd</code> and <code>.pyx</code> files so that my Point constructor can receive different types of parameters?</p>
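One common pattern (my addition, not from the question): Cython's `__cinit__` cannot be overloaded, so a single constructor usually dispatches on the argument types at runtime with `isinstance`, then calls the matching C++ overload. A plain-Python sketch of that dispatch, where the illustrative `Wrapped` class stands in for the asker's `SomeType`:

```python
class Wrapped:
    """Stand-in for the asker's SomeType: wraps a numeric string."""
    def __init__(self, text):
        self._text = text

    def as_float(self):
        return float(self._text)


class Point:
    """One constructor, dispatching on argument type at runtime."""
    def __init__(self, x=0.0, y=0.0, z=0.0):
        coords = []
        for value in (x, y, z):
            if isinstance(value, (int, float)):
                coords.append(float(value))      # the 'double' branch
            elif isinstance(value, Wrapped):
                coords.append(value.as_float())  # the 'SomeType' branch
            else:
                raise TypeError(f"unsupported coordinate type: {type(value).__name__}")
        self.x, self.y, self.z = coords


p1 = Point(1.5, 2.5)                      # plain doubles
p2 = Point(Wrapped("1.5"), Wrapped("2"))  # wrapper objects
print(p1.x, p2.y)  # 1.5 2.0
```

In the Cython version, each branch would construct the corresponding `shared_ptr[double]` or `shared_ptr[_SomeType]` arguments before calling `make_shared[_Point]`.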
<python><cython><cythonize>
2024-04-30 10:22:15
1
3,779
Punreach Rany
78,407,769
7,307,824
Counting Values of columns in all previous rows excluding current row
<p>I'm currently learning Pandas and am stuck with a problem.</p> <p>I have the following data:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

labels = ['date', 'name', 'opponent', 'gf', 'ga']

data = [
    ['2023-08-5',  'Liverpool', 'Man Utd',   5, 0],
    ['2023-08-10', 'Liverpool', 'Everton',   0, 0],
    ['2023-08-14', 'Liverpool', 'Tottenham', 3, 2],
    ['2023-08-18', 'Liverpool', 'Arsenal',   4, 4],
    ['2023-08-27', 'Liverpool', 'Man City',  0, 0],
]

df = pd.DataFrame(data, columns=labels)
</code></pre> <p>The games / rows are sorted by date. For each row / game I would like to sum the 'gf' (goals for) and 'ga' (goals against) values of the previous rows / games, excluding the current row and any after its date.</p> <p>So I would like the data to be like this:</p> <pre><code>labels = ['date', 'name', 'opponent', 'gf', 'ga', 'total_gf', 'total_ga']

data = [
    ['2023-08-5',  'Liverpool', 'Man Utd',   5, 0,  0, 0],
    ['2023-08-10', 'Liverpool', 'Everton',   0, 0,  5, 0],
    ['2023-08-14', 'Liverpool', 'Tottenham', 3, 2,  5, 0],
    ['2023-08-18', 'Liverpool', 'Arsenal',   4, 4,  8, 2],
    ['2023-08-27', 'Liverpool', 'Man City',  0, 0, 12, 6],
]
</code></pre> <p>I tried <code>expanding()</code>, but it seems to include the current row. <code>rolling</code> has a <code>closed='left'</code> parameter, but the others don't.</p> <p>Any help, tips, or links to similar solutions would be appreciated.</p>
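A sketch of one likely answer (my addition, not from the question): a cumulative sum shifted down one row gives the totals over all *previous* rows only, with the first game getting zero.

```python
import pandas as pd

df = pd.DataFrame(
    [
        ['2023-08-5',  'Liverpool', 'Man Utd',   5, 0],
        ['2023-08-10', 'Liverpool', 'Everton',   0, 0],
        ['2023-08-14', 'Liverpool', 'Tottenham', 3, 2],
        ['2023-08-18', 'Liverpool', 'Arsenal',   4, 4],
        ['2023-08-27', 'Liverpool', 'Man City',  0, 0],
    ],
    columns=['date', 'name', 'opponent', 'gf', 'ga'],
)

# cumsum() includes the current row; shifting by one row excludes it,
# and fill_value=0 gives the first game a zero running total.
totals = df[['gf', 'ga']].cumsum().shift(fill_value=0)
df['total_gf'] = totals['gf']
df['total_ga'] = totals['ga']

print(df['total_gf'].tolist())  # [0, 5, 5, 8, 12]
print(df['total_ga'].tolist())  # [0, 0, 0, 2, 6]
```

If the frame held several teams, the same idea would apply per group via `groupby('name')`.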
<python><pandas>
2024-04-30 10:07:25
2
568
Ewan
78,407,608
102,221
Why does defining an empty __new__(cls) method in a Python class prevent __init__ from running?
<p>In the following very primitive code, when a <code>__new__</code> method is defined, the <code>__init__</code> method for created objects is not called.</p> <pre><code>class NaiveSingleton:
    def __new__(cls):
        pass

    def __init__(self):
        print(&quot;NaiveSingleton init called&quot;)
</code></pre> <p>so the calls</p> <pre><code>ns1 = NaiveSingleton()
ns2 = NaiveSingleton()
</code></pre> <p>produce no <code>__init__</code> printout. Why?</p> <p>Second question: why does defining an empty <code>__new__</code> method make this class work as a singleton?</p>
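A sketch of the explanation (my addition): `__init__` only runs when `__new__` returns an instance of the class. A bare `pass` implicitly returns `None`, so `__init__` is skipped and every "instance" is the same `None` object, which is also why the class merely *looks* like a singleton. A working singleton returns a cached instance from `super().__new__`.

```python
class Naive:
    def __new__(cls):
        pass  # implicitly returns None -> __init__ is skipped

    def __init__(self):
        print("init called")  # never reached

obj = Naive()
print(obj is None)  # True: every "instance" is just None


class Singleton:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)  # a real instance -> __init__ runs
        return cls._instance

a = Singleton()
b = Singleton()
print(a is b)  # True
```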
<python><singleton>
2024-04-30 09:39:09
1
1,113
BaruchLi
78,407,603
10,309,712
How to make XGBoost work in a hierarchical classifier
<p>I am trying to make <code>XGBoost</code> work with the hierarchical classifier package available <a href="https://github.com/globality-corp/sklearn-hierarchical-classification/tree/develop" rel="nofollow noreferrer">here</a> (repo archived).</p> <p>I can confirm the module works fine with sklearn's random forest classifier (and other sklearn modules that I checked with). But I cannot get it work with <code>XGBoost</code>. I understand there's some modification needed for the hierarchical classifier to work, but I cannot figure out this modification.</p> <p>I give below, a <code>MWE</code> to reproduce the issue (assuming the library is installed via - <code>pip install sklearn-hierarchical-classification</code>):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from sklearn.model_selection import train_test_split from sklearn.datasets import load_digits from xgboost import XGBClassifier from sklearn.ensemble import RandomForestClassifier from sklearn_hierarchical_classification.classifier import HierarchicalClassifier from sklearn_hierarchical_classification.constants import ROOT from sklearn_hierarchical_classification.metrics import h_fbeta_score, multi_labeled #from sklearn_hierarchical_classification.tests.fixtures import make_digits_dataset </code></pre> <p>We want to build the following class hierarchy along with data from the handwritten digits dataset:</p> <pre><code> &lt;ROOT&gt; / \ A B / \ / \ 1 7 C 9 / \ 3 8 </code></pre> <p>Like so:</p> <pre class="lang-py prettyprint-override"><code>def make_digits_dataset(targets=None, as_str=True): &quot;&quot;&quot;Helper function: from sklearn_hierarchical_classification.tests.fixtures module &quot;&quot;&quot; X, y = load_digits(return_X_y=True) if targets: ix = np.isin(y, targets) X, y = X[np.where(ix)], y[np.where(ix)] if as_str: # Convert targets (classes) to strings y = y.astype(str) return X, y class_hierarchy = { ROOT: [&quot;A&quot;, &quot;B&quot;], &quot;A&quot;: [&quot;1&quot;, 
&quot;7&quot;], &quot;B&quot;: [&quot;C&quot;, &quot;9&quot;], &quot;C&quot;: [&quot;3&quot;, &quot;8&quot;], } </code></pre> <p>So that:</p> <pre class="lang-py prettyprint-override"><code>base1 = RandomForestClassifier() base2 = XGBClassifier() clf = HierarchicalClassifier( base_estimator=base1, class_hierarchy=class_hierarchy, ) X, y = make_digits_dataset(targets=[1, 7, 3, 8, 9], as_str=False, ) y = y.astype(str) X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=RANDOM_STATE, ) clf.fit(X_train, y_train) y_pred = clf.predict(X_test) with multi_labeled(y_test, y_pred, clf.graph_) as (y_test_, y_pred_, graph_): h_fbeta = h_fbeta_score( y_test_, y_pred_, graph_, ) print(&quot;h_fbeta_score: &quot;, h_fbeta) h_fbeta_score: 0.9690011481056257 </code></pre> <p>This works fine, but with <code>XGBClassifier</code> (<code>base2</code>) it raises the following error:</p> <pre><code>Traceback (most recent call last): File &quot;~/hierarchical-classification.py&quot;, line 62, in &lt;module&gt; clf.fit(X_train, y_train) File &quot;~/venv/lib/python3.10/site-packages/sklearn_hierarchical_classification/classifier.py&quot;, line 206, in fit self._recursive_train_local_classifiers(X, y, node_id=self.root, progress=progress) File &quot;~/venv/lib/python3.10/site-packages/sklearn_hierarchical_classification/classifier.py&quot;, line 384, in _recursive_train_local_classifiers self._train_local_classifier(X, y, node_id) File &quot;~/venv/lib/python3.10/site-packages/sklearn_hierarchical_classification/classifier.py&quot;, line 453, in _train_local_classifier clf.fit(X=X_, y=y_) File &quot;~/venv/lib/python3.10/site-packages/xgboost/core.py&quot;, line 620, in inner_f return func(**kwargs) File &quot;~/venv/lib/python3.10/site-packages/xgboost/sklearn.py&quot;, line 1438, in fit or not (self.classes_ == expected_classes).all() AttributeError: 'bool' object has no attribute 'all' </code></pre> <p>I understand this error has to do with this section of the
call to the <code>fit()</code> method in <code>xgboost.sklearn.py</code>:</p> <pre class="lang-py prettyprint-override"><code>1436 if ( 1437 self.classes_.shape != expected_classes.shape 1438 or not (self.classes_ == expected_classes).all() 1439 ): 1440 raise ValueError( 1441 f&quot;Invalid classes inferred from unique values of `y`. &quot; 1442 f&quot;Expected: {expected_classes}, got {self.classes_}&quot; 1443 ) </code></pre> <p>Expected value of <code>y</code>: <code>[0 1]</code>, but got <code>['A' 'B']</code> (internal nodes). There must be a way to modify the class <code>sklearn_hierarchical_classification.classifier.HierarchicalClassifier.py</code> so that it works fine with <code>xgboost</code>.</p> <p>What's the fix to this?</p>
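A common way around XGBoost's label check (it requires integer class labels 0..n-1, while the hierarchy trainer passes node labels such as 'A' and 'B') is to wrap the base estimator so labels are encoded before fit and decoded after predict. The sketch below is a minimal stdlib-only illustration of that pattern, not the package's actual API; the wrapper name is invented, and only fit/predict are covered (predict_proba would need the same treatment):

```python
class LabelEncodingWrapper:
    """Sketch: let an estimator with XGBoost-style label checks accept
    arbitrary (e.g. string) class labels by mapping them to 0..n-1."""

    def __init__(self, base_estimator):
        # base_estimator would be e.g. XGBClassifier() in practice
        self.base_estimator = base_estimator

    def fit(self, X, y):
        # Map each distinct label to a consecutive integer code.
        self.classes_ = sorted(set(y))
        self._to_int = {c: i for i, c in enumerate(self.classes_)}
        self.base_estimator.fit(X, [self._to_int[label] for label in y])
        return self

    def predict(self, X):
        # Decode integer predictions back to the original labels.
        return [self.classes_[int(i)] for i in self.base_estimator.predict(X)]
```

Passing `base_estimator=LabelEncodingWrapper(XGBClassifier())` to `HierarchicalClassifier` should then satisfy the check in `xgboost/sklearn.py`, assuming the hierarchical trainer only needs the methods sketched here.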
<python><machine-learning><scikit-learn><xgboost><hierarchical>
2024-04-30 09:37:37
1
4,093
arilwan
78,407,539
15,045,363
How to use a default module and inverter model when they are unknown in PVLib?
<p>Student doing his master's thesis in the field of photovoltaics here, so I'm new to this topic. I'm creating an algorithm to detect and classify anomalies in PV systems using only the produced AC power.</p> <p>In the 1st step of my algorithm, I want to simulate the maximum clear-sky AC power of a PV system for any time of the year.<br /> I know the number of modules, the Wp of each module, the tilt/azimuth, and all information about the location. However, I don't know the model of the modules/inverter.</p> <p><strong>Is it possible to specify a default module and inverter at the creation of a PVSystem?</strong><br /> For example, in PVGIS you only give the efficiency of the system, and not the model.</p>
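When the exact module and inverter are unknown, pvlib's PVWatts model is the usual fallback: as far as I recall from the pvlib docs, you can pass `module_parameters={'pdc0': ..., 'gamma_pdc': ...}` and `inverter_parameters={'pdc0': ...}` to `PVSystem` and run it through `ModelChain` without any database lookup (treat those parameter names as assumptions to verify). The DC side of PVWatts is simple enough to show directly; this sketch implements the published formula, not pvlib's internals:

```python
def pvwatts_dc(effective_irradiance, temp_cell, pdc0, gamma_pdc=-0.004):
    """PVWatts DC power.

    effective_irradiance: plane-of-array irradiance in W/m^2
    temp_cell:            cell temperature in degrees C
    pdc0:                 installed DC capacity (number of modules * Wp)
    gamma_pdc:            temperature coefficient in 1/degC (default is an
                          assumption, typical for crystalline silicon)
    """
    return pdc0 * (effective_irradiance / 1000.0) * (
        1.0 + gamma_pdc * (temp_cell - 25.0)
    )
```

At reference conditions (1000 W/m^2, 25 degrees C) this returns `pdc0` exactly, which is why only the total Wp of the array is needed, not a specific module model.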
<python><pvlib>
2024-04-30 09:25:09
1
865
Maxime Charrière
78,407,435
9,853,105
Python Certificate error during TLS handshake to gmail SMTP server
<p>I write a client program which tries to send mail to Gmail SMTP server by <a href="https://docs.python.org/3/library/smtplib.html" rel="nofollow noreferrer"><code>smtplib</code></a> and <a href="https://docs.python.org/3/library/ssl.html#ssl-contexts" rel="nofollow noreferrer"><code>ssl.SSLContext</code></a> objects, but get an error when loading a cert PEM file during TLS handshake :</p> <pre class="lang-py prettyprint-override"><code>import ssl, smtplib, email def send_email(subject, body, sender, recipients, password): mml = email.message.EmailMessage() mml.set_content(body) mml['From'] = sender mml['To'] = ', '.join(recipients) mml['Subject'] = subject sslctx = ssl.SSLContext(protocol=ssl.PROTOCOL_TLS_CLIENT) sslctx.verify_mode = ssl.CERT_REQUIRED assert sslctx.verify_mode == ssl.CERT_REQUIRED sslctx.load_verify_locations(cafile='crt.pem', capath='.') hdlr = smtplib.SMTP(host='smtp.gmail.com', port=587) # do not use SMTP_SSL hdlr.starttls(context=sslctx) hdlr.login('my-account-xxx@gmail.com', password) hdlr.sendmail(sender, recipients, mml.as_string()) print(&quot;[INFO] Message sent!&quot;) &gt; gmail_demo.send_email(subject='someone testing this site', body='there &lt;b&gt;you&lt;/b&gt; go', sender='system@yourproject.io', recipients=['another-person@gmail.com'], password='abcdef123456') &gt; &gt; Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; ssl.SSLError: [SSL] PEM lib (_ssl.c:3917) </code></pre> <p>From that error message, I found it is reported by <a href="https://github.com/python/cpython/blob/v3.12.0/Modules/_ssl.c#L3917" rel="nofollow noreferrer">the C extension module in cpython</a> , it seems that the method <code>SSLContext.load_cert_chain()</code> is implemented at here .</p> <p>My questions :</p> <ul> <li>Does <code>SSLContext.load_cert_chain(...)</code> require callers to provide valid <code>keyfile</code> argument ? 
In <a href="https://docs.python.org/3/library/ssl.html#ssl.SSLContext.load_cert_chain" rel="nofollow noreferrer">the doc</a> the argument <code>keyfile</code> is optional, however in C implementation , this error is likely from the return value of <code>SSL_CTX_use_PrivateKey_file(...)</code> .</li> <li>Is there anything incorrect in my cert file <code>crt.pem</code> ? The file contains a cert I downloaded from gmail SMTP server, by following the instructions in this <a href="https://support.google.com/a/answer/6180220?hl=en" rel="nofollow noreferrer">google workplace admin help</a>.</li> </ul> <p>About the environment : CPython 3.12.0</p> <p>Thanks</p>
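For a client connecting to Gmail it is usually unnecessary (and error-prone) to download the server certificate by hand: `ssl.create_default_context()` loads the system trust store and enables both certificate and hostname verification. The `[SSL] PEM lib` error typically means OpenSSL could not parse the file it was given (for example a certificate where a private key was expected, broken PEM framing, or `capath` pointing at a directory without hashed cert files). A minimal sketch; the Gmail host and port come from the question, the rest is standard library:

```python
import smtplib
import ssl

def make_client_context():
    # Loads the system CA bundle and turns on hostname checking plus
    # CERT_REQUIRED; no manually downloaded PEM file is needed.
    return ssl.create_default_context()

def send_via_gmail(message, sender, recipients, user, password):
    # Usage sketch only: needs real credentials and network access to run.
    with smtplib.SMTP("smtp.gmail.com", 587) as server:
        server.starttls(context=make_client_context())
        server.login(user, password)
        server.sendmail(sender, recipients, message)
```

If a custom CA really is required, `ssl.create_default_context(cafile=...)` keeps the secure defaults while swapping the trust anchor.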
<python><smtp><gmail><ssl-certificate><cpython>
2024-04-30 09:05:36
1
876
T.H.
78,407,327
2,566,198
Multiprocessing Python on NFS gets 'No such file or directory' from os.getcwd()
<p>I've been troubleshooting a test case on an EL9 environment which behaves differently in an EL7 environment.</p> <p>It's specifically:</p> <ul> <li>On NFS mounts.</li> <li>Occurs on our EL9 systems (5.14.0-362.18.1.el9_3.x86_64). Does not occur on any of our older CentOS 7 systems (3.10 kernel, various versions).</li> <li>In a subdirectory. (Doesn't happen in a 'top level' mount).</li> <li>The code frequently 'breaks' with <code>os.getcwd()</code> returning 'no such file or directory'.</li> </ul> <p>Reproduced on:</p> <ul> <li>5.14.0-427.el9.x86_64 (Alma)</li> <li>5.14.0-362.24.1.el9_3.x86_64 (Alma)</li> <li>5.14.0-362.18.1.el9_3.x86_64 (Alma)</li> <li>5.14.0-362.8.1.el9_3.x86_64 (RHEL)</li> </ul> <p>Example reproduction code:</p> <pre><code>#!/usr/bin/python3 -u import os import multiprocessing import time if __name__==&quot;__main__&quot;: multiprocessing.set_start_method(&quot;spawn&quot;) count = 0 while True: try: os.getcwd() pool = multiprocessing.Pool(10) pool.close() pool.terminate() count += 1 except Exception as e: print(f&quot;Failed after {count} iterations&quot;) print(e) break </code></pre> <p>I'm at something of a loss to understand quite what's going on here, and quite why it fails.</p> <p>It seems to be connected to <code>pool.terminate()</code>, as if you add even a short (0.05) sleep before that, the problem stops occurring (at least, it wasn't reproducible in 'sensible' amounts of time, where the above fails in &lt;10 iterations).</p> <p>And interestingly the <code>cwd</code> stays inconsistent:</p> <pre><code>build-2[myuser]:[~/python_race]$ ./simple.py Failed after 2 iterations [Errno 2] No such file or directory build-2[myuser]:[~/python_race]$ ./simple.py Traceback (most recent call last): File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 1362, in _path_importer_cache KeyError: '.'
During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;./simple.py&quot;, line 4, in &lt;module&gt; import multiprocessing File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1007, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 982, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 925, in _find_spec File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 1423, in find_spec File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 1392, in _get_spec File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 1364, in _path_importer_cache File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 1340, in _path_hooks File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 1610, in path_hook_for_FileFinder File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 1486, in __init__ FileNotFoundError: [Errno 2] No such file or directory </code></pre> <p>This seems to 'recover' with a chdir, or waiting for what I assume is an NFS cache expiry interval.</p> <p>We've picked this up due to some of our tests failing on the 'new version' on a more simplistic:</p> <pre><code>import multiprocessing multiprocessing.set_start_method(&quot;spawn&quot;) while True: multiprocessing.Pool(10) </code></pre> <p>But as mentioned - specific to NFS mounts <em>in subdirs</em> which we think might be because the <code>struct dentry</code> behaving a bit differently on root mounts.</p> <p>But I was wondering if anyone could shed some insight into what might be going wrong here? I'm at a loss to identify whether it might be fileserver, linux kernel, something within python, or... 
well, somewhere else entirely.</p> <p>With rpcdebug enabled we appear to get:</p> <pre><code>Apr 24 11:58:47 build-2 kernel: NFS: release(lib/libgcc_s.so.1) Apr 24 11:58:47 build-2 kernel: NFS: release(lib/libc.so.6) Apr 24 11:58:47 build-2 kernel: NFS: release(locale/locale-archive) Apr 24 11:58:47 build-2 kernel: NFS reply getattr: -512 Apr 24 11:58:47 build-2 kernel: nfs_revalidate_inode: (0:53/3834468196) getattr failed, error=-512 Apr 24 11:58:47 build-2 kernel: NFS: lookup(/python_race) Apr 24 11:58:47 build-2 kernel: NFS call lookup /python_race Apr 24 11:58:47 build-2 kernel: RPC: xs_tcp_send_request(168) = 0 Apr 24 11:58:47 build-2 kernel: NFS: release(bin/python3.11) </code></pre> <p>Edit:</p> <p>Have also found it does <em>not</em> occur if you add a <code>pool.join()</code> prior to the <code>terminate()</code></p>
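As the question's edit notes, adding `pool.join()` before `terminate()` avoids the failure. The sketch below shows that lifecycle ordering; it cannot settle whether the underlying race lives in the kernel NFS client or in CPython's process teardown, it only demonstrates the sequence reported to work:

```python
import multiprocessing

def pool_cycle(workers=2):
    # close() stops new work, join() waits for every worker process to
    # exit cleanly, so no child is killed while it still holds the NFS
    # working directory open; terminate() is then effectively a no-op.
    pool = multiprocessing.Pool(workers)
    try:
        result = pool.map(abs, [-1, 2, -3])
    finally:
        pool.close()
        pool.join()
        pool.terminate()
    return result
```

With this ordering, the `os.getcwd()` failures described above were reportedly no longer reproducible.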
<python><linux><nfs>
2024-04-30 08:46:19
1
53,584
Sobrique
78,407,270
22,538,132
Rendering depth from mesh and camera parameters
<p>I want to render depth from mesh file, and camera parameters, to do so I tried <code>RaycastingScene</code> from open3d with this <a href="https://github.com/rpapallas/diff-dope/blob/ros-example/diffdope_ros/meshes/hope/bbq_sauce.ply" rel="nofollow noreferrer">mesh file</a> as follows:</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3 import numpy as np import open3d as o3d import matplotlib.pyplot as plt def render_depth( intrins:o3d.core.Tensor, width:int, height:int, extrins:o3d.core.Tensor, tmesh:o3d.t.geometry.TriangleMesh )-&gt;np.ndarray: &quot;&quot;&quot; Render depth from mesh file Parameters ---------- intrins : o3d.core.Tensor Camera Intrinsics matrix K: 3x3 width : int image width height : int image height extrins : o3d.core.Tensor camera extrinsics matrix 4x4 tmesh : o3d.t.geometry.TriangleMesh TriangleMesh Returns ------- np.ndarray Rendred depth image &quot;&quot;&quot; scene = o3d.t.geometry.RaycastingScene() scene.add_triangles(tmesh) rays = scene.create_rays_pinhole( intrinsic_matrix=intrins, extrinsic_matrix=extrins, width_px=width, height_px=height ) ans = scene.cast_rays(rays) t_hit = ans[&quot;t_hit&quot;].numpy() / 1000.0 return t_hit if __name__==&quot;__main__&quot;: import os mesh_path = f&quot;{os.getenv('HOME')}/bbq_sauce.ply&quot; mesh = o3d.t.io.read_triangle_mesh(mesh_path) mesh.compute_vertex_normals() # camera_info[k].reshape(3, 3) intrins_ = np.array([ [606.9275512695312, 0.0, 321.9704895019531], [0.0, 606.3505859375, 243.5377197265625], [0.0, 0.0, 1.0] ]) width_ = 640 # camera_info.width height_ = 480 # camera_info.height # root2cam 4x4 extrens_ = np.eye(4) # intrins_t = o3d.core.Tensor(intrins_) extrins_t = o3d.core.Tensor(extrens_) rendered_depth = render_depth( intrins=intrins_t, width=width_, height=height_, extrins = extrins_t, tmesh=mesh ) plt.imshow(rendered_depth) plt.show() </code></pre> <p>but I'm getting a depth image which doesn't seem to be correct!</p> <p>Can you please tell me how can I 
fix that? thanks.</p> <p><a href="https://i.sstatic.net/KU2MgMGy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KU2MgMGy.png" alt="enter image description here" /></a></p>
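Two things commonly make such output look wrong: `cast_rays` returns `t_hit == inf` for rays that miss the mesh (which wrecks the colour scale in `imshow`), and `t_hit` is a distance in whatever units the mesh uses, so dividing by 1000 is only correct if the mesh really is in millimetres. A small NumPy sketch for masking the misses (the NaN choice is an assumption, made so matplotlib leaves missed pixels blank):

```python
import numpy as np

def clean_depth(t_hit, scale=1.0):
    # Replace the infinite "no hit" values so imshow autoscaling works,
    # and apply a unit conversion only if the mesh units require it.
    depth = np.asarray(t_hit, dtype=float) * scale
    depth[~np.isfinite(depth)] = np.nan
    return depth
```

It is also worth checking that the identity extrinsic actually places the camera outside the mesh; if the object sits at the world origin, the camera starts inside it and the rendered distances will look wrong regardless.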
<python><rendering><raycasting><scene><open3d>
2024-04-30 08:36:15
1
304
bhomaidan90
78,407,264
7,692,855
Python 3.10 cannot open shared object file librdkafka
<p>I am trying to upgrade a relatively simple Python app thats reads a Kafka Topic, processes and then calls the API.</p> <p>Currently it is running in a docker container based on Python 3.8 on Ubuntu 20.04 LTS. I am looking to upgrade to Python 3.10 on Ubuntu 22.04 LTS.</p> <p>The docker container builds without issues but there is an issue when running the app or the tests.</p> <pre><code>ImportError while importing test module '/code/fr24/tests/test_feed.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.10/importlib/__init__.py:126: in import_module return _bootstrap._gcd_import(name[level:], package, level) ... ... from confluent_kafka.cimpl import Consumer as _ConsumerImpl E ImportError: librdkafka.so.1: cannot open shared object file: No such file or directory </code></pre> <p>I added</p> <p><code>RUN apt-get install -y librdkafka-dev</code></p> <p>to the Ubuntu 22.04 Dockerfile. Then I got a different error</p> <pre><code>ImportError: libc.musl-x86_64.so.1: cannot open shared object file: No such file or directory </code></pre> <p>Which makes me think it's likely a path issue.</p> <p><code>docker-compose run fr24 python -c &quot;import sys; print('\n'.join(sys.path))&quot;</code></p> <p>returns</p> <pre><code>/usr/local/lib/python3.10/dist-packages/_pdbpp_path_hack /code/fr24 /code /usr/lib/python310.zip /usr/lib/python3.10 /usr/lib/python3.10/lib-dynload /usr/local/lib/python3.10/dist-packages /usr/lib/python3/dist-packages </code></pre> <p><code>docker-compose run fr24 env</code> returns among others</p> <pre><code>PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin PYTHONPATH=/code/fr24: </code></pre> <p>am I missing something simple here?</p>
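The second error (`libc.musl-x86_64.so.1`) suggests the installed `confluent-kafka` wheel was built against musl (Alpine) rather than Ubuntu's glibc; forcing a reinstall so pip picks a manylinux wheel (for example `pip install --force-reinstall confluent-kafka`) is worth trying, though the exact cause can vary. From inside the container, a quick stdlib check approximates whether the linker can resolve a library at all:

```python
import ctypes.util

def loader_can_see(libname):
    # find_library consults ldconfig/gcc-style search paths, roughly what
    # the dynamic loader uses; None means lib<libname>.so is not visible,
    # so `import confluent_kafka` has little chance of resolving it either.
    return ctypes.util.find_library(libname)

print("rdkafka ->", loader_can_see("rdkafka"))
```

If this prints `None` even after installing `librdkafka-dev`, running `ldconfig` in the Dockerfile (or checking `LD_LIBRARY_PATH`) is the next thing to look at.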
<python><python-3.x><ubuntu>
2024-04-30 08:35:27
1
1,472
user7692855
78,407,063
12,027,232
Decorator / functionality for inheriting an arbitrary number of classes
<p>I want to create functionality for a data class to be able to inherit 0..N mix-ins in Python whilst still returning the original data class object.</p> <p>I have a preliminarily working solution, albeit very ugly and not very Pythonic.</p> <pre><code>from typing import Any, Type from dataclasses import dataclass @dataclass class Customer: salary = 0 monthlyBonus = 0 latestClassification = &quot;&quot; class SalaryMixin: def get_salary(self: Any) -&gt; float: return self.salary class customerInformationMixin: def get_latest_classification(self: Any) -&gt; str: return self.latestClassification def create_customer(*mixins: Type) -&gt; Customer: customer = Customer() for mixin in mixins: mixin_methods = [func for func in dir(mixin) if callable(getattr(mixin, func)) and not func.startswith(&quot;__&quot;)] for method in mixin_methods: setattr(customer, method, getattr(mixin(), method)) return customer </code></pre> <p>Ultimately I would like a function <code>create_customer()</code> that will accept any number of mix-ins and inherit their functionality.</p>
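A more Pythonic route than copying bound methods onto an instance is to build a real subclass at runtime with `type()`, so the mixin methods are genuinely inherited and `isinstance` checks still see a Customer. This is a self-contained sketch with simplified field names (an illustration, not the question's exact dataclass):

```python
from dataclasses import dataclass

@dataclass
class Customer:
    salary: float = 0.0
    latest_classification: str = ""

class SalaryMixin:
    def get_salary(self) -> float:
        return self.salary

def create_customer(*mixins: type) -> Customer:
    # Build a subclass inheriting Customer plus every mixin, then
    # instantiate it: real inheritance instead of per-method setattr.
    cls = type("CustomerWithMixins", (Customer, *mixins), {})
    return cls()
```

Because the result is a subclass instance, new mixin methods, properties, and even overridden dunder methods all work, which the setattr approach cannot offer.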
<python><python-3.x><oop>
2024-04-30 07:57:59
1
410
JOKKINATOR
78,406,864
4,373,805
How to do groupby on multi index dataframe based on condition
<p>I have a multi-index DataFrame, and I want to combine rows per index based on certain conditions.</p> <pre><code>import pandas as pd # data data = { 'date': ['01/01/17', '02/01/17', '03/01/17', '01/01/17', '02/01/17', '03/01/17'], 'language': ['python', 'python', 'python', 'r', 'r', 'r'], 'ex_complete': [6, 5, 10, 8, 8, 8] } # Convert to DataFrame df = pd.DataFrame(data) # Convert DataFrame to JSON json_data = df.to_json(orient='records') # Convert JSON data back to DataFrame df_from_json = pd.read_json(json_data, orient='records') # Set date and language as multi-index df_from_json.set_index(['date', 'language'], inplace=True) df_from_json.sort_index(inplace= True) df_from_json </code></pre> <h4>1st Problem:</h4> <p>I want to combine the dates '01/01/17', '02/01/17' and rename as '1_2', this should give me 4 rows: 2 rows for '1_2' - (Python and R) and 2 rows for '03/01/17' (Python and R)</p> <h4>2nd Problem:</h4> <p>I want to combine Python and R rows and rename as Python_R, this should give 3 rows for 3 dates.</p> <p>Any guidance or pointer will be hugely appreciated.</p>
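Both problems fit the same pattern: rename the labels of one index level so the rows to be merged collide, then group by the index levels and aggregate. A sketch on the question's data; `sum()` is an assumption, swap in whatever aggregation fits:

```python
import pandas as pd

df = pd.DataFrame({
    "date": ["01/01/17", "02/01/17", "03/01/17"] * 2,
    "language": ["python"] * 3 + ["r"] * 3,
    "ex_complete": [6, 5, 10, 8, 8, 8],
}).set_index(["date", "language"]).sort_index()

# 1st problem: collapse two dates into the label '1_2', then re-aggregate
dates = (df.rename(index={"01/01/17": "1_2", "02/01/17": "1_2"}, level="date")
           .groupby(level=["date", "language"]).sum())

# 2nd problem: collapse the two languages into 'python_r' the same way
langs = (df.rename(index={"python": "python_r", "r": "python_r"}, level="language")
           .groupby(level=["date", "language"]).sum())
```

`rename(..., level=...)` touches only the named level of the MultiIndex, so the other level's labels survive intact into the groupby.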
<python><pandas><dataframe><data-wrangling>
2024-04-30 07:20:05
1
468
Ezio
78,406,847
14,132
sqlalchemy: (how) can I return a function expression from process_bind_param in a custom type?
<p>I have a custom type for interfacing with <a href="https://mariadb.com/kb/en/inet6/" rel="nofollow noreferrer">INET6</a> in MariaDB, and from <code>process_bind_param</code> I want to return an SQL expression like this: <code>CAST('2001::ff' AS INET6)</code></p> <p>I've tried this so far (slightly simplified):</p> <pre><code>import ipaddress from sqlalchemy import types from sqlalchemy.sql.expression import cast class Inet6(types.TypeDecorator): impl = types.BINARY cache_ok = True def get_col_spec(self, **kw): return 'INET6' def process_bind_param(self, value, dialect): if value is None: return None assert isinstance(value, ipaddress.IPv6Address) return cast(str(value), self) def process_result_value(self, value, dialect): if value is None: return None return ipaddress.IPv6Address(value) </code></pre> <p>However, when I use it in a query, I get this exception:</p> <pre><code>Traceback (most recent call last): File &quot;/home/mlenz/nnis/nnis-business-services/ip-inet6.py&quot;, line 19, in &lt;module&gt; for i in q.all(): ^^^^^^^ File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/orm/query.py&quot;, line 2693, in all return self._iter().all() # type: ignore ^^^^^^^^^^^^ File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/orm/query.py&quot;, line 2847, in _iter result: Union[ScalarResult[_T], Result[_T]] = self.session.execute( ^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/orm/session.py&quot;, line 2308, in execute return self._execute_internal( ^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/orm/session.py&quot;, line 2190, in _execute_internal result: Result[Any] = compile_state_cls.orm_execute_statement( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/orm/context.py&quot;, line 293, in orm_execute_statement result = conn.execute( ^^^^^^^^^^^^^ File 
&quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/engine/base.py&quot;, line 1416, in execute return meth( ^^^^^ File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/sql/elements.py&quot;, line 516, in _execute_on_connection return connection._execute_clauseelement( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/engine/base.py&quot;, line 1639, in _execute_clauseelement ret = self._execute_context( ^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/engine/base.py&quot;, line 1820, in _execute_context self._handle_dbapi_exception( File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/engine/base.py&quot;, line 2343, in _handle_dbapi_exception raise sqlalchemy_exception.with_traceback(exc_info[2]) from e File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/engine/base.py&quot;, line 1814, in _execute_context context = constructor( ^^^^^^^^^^^^ File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/engine/default.py&quot;, line 1487, in _init_compiled d_param = { ^ File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/engine/default.py&quot;, line 1488, in &lt;dictcomp&gt; key: flattened_processors[key](compiled_params[key]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/sql/type_api.py&quot;, line 2059, in process return fixed_impl_processor( ^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/sql/sqltypes.py&quot;, line 929, in process return DBAPIBinary(value) ^^^^^^^^^^^^^^^^^^ File &quot;/home/mlenz/venv/lib/python3.11/site-packages/pymysql/__init__.py&quot;, line 128, in Binary return bytes(x) ^^^^^^^^ File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/sql/operators.py&quot;, line 666, in __getitem__ return self.operate(getitem, index) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/sql/elements.py&quot;, line 1616, in operate return op(self.comparator, *other, **kwargs) # type: ignore[no-any-return] # noqa: E501 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/sql/operators.py&quot;, line 666, in __getitem__ return self.operate(getitem, index) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/sql/type_api.py&quot;, line 1733, in operate return super().operate(op, *other, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/sql/type_api.py&quot;, line 194, in operate return op_fn(self.expr, op, *other, **addtl_kw) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/sql/default_comparator.py&quot;, line 244, in _getitem_impl _unsupported_impl(expr, op, other, **kw) File &quot;/home/mlenz/venv/lib/python3.11/site-packages/sqlalchemy/sql/default_comparator.py&quot;, line 250, in _unsupported_impl raise NotImplementedError( sqlalchemy.exc.StatementError: (builtins.NotImplementedError) Operator 'getitem' is not supported on this expression [SQL: SELECT inet6_test.ip AS inet6_test_ip, inet6_test.id AS inet6_test_id, inet6_test.timestamp AS inet6_test_timestamp FROM inet6_test WHERE inet6_test.ip = %(ip_1)s LIMIT %(param_1)s] [parameters: [{}]] </code></pre> <p>Is this (returning a <code>CAST</code> from <code>process_bind_param</code>) supported at all? Or am I just doing it wrong?</p> <p>(The reason I want to produce a <code>CAST</code> is that because binary strings are far less readable in the emitted SQL/bind params, and for <code>&lt;</code> and <code>&gt;</code> comparisons, MariaDB needs explicit <code>CAST</code> statements when dealing string representations of IP addresses. 
If I simply return a binary <code>the_ip_address.packed</code> I get hard-to-debug errors where mariadb complains that the value isn't a valid INET6)</p> <p>Update: could it be that sqlalchemy tries to convert my CAST statement into a binary(), because the data type uses BINARY as the base type? If yes, can I tell it not to do that for bind params?</p> <h2>Second Update</h2> <p>To expand on the binary string issues, I have a table like this:</p> <pre><code>Create Table: CREATE TABLE `inet6_test` ( `id` int(11) NOT NULL AUTO_INCREMENT, `ip` inet6 NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=214872 DEFAULT CHARSET=utf8mb3 COLLATE=utf8mb3_unicode_ci </code></pre> <p>And when I run</p> <pre><code>address = IPv6Address('2001::ff') result = session.execute( text('INSERT INTO inet6_test SET ip=:ip'), {'ip': address.packed}, ) </code></pre> <p>I get</p> <pre><code>sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1292, &quot;Incorrect inet6 value: ' \\0001\\0000\\0000\\0000\\0000\\0000\\0000\\0000\\0000\\0000\\0000\\0000\\0000\\0000?' for column `pop`.`inet6_test`.`ip` at row 1&quot;) [SQL: INSERT INTO inet6_test SET ip=%(ip)s] [parameters: {'ip': b' \x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff'}] </code></pre> <p>but I'm pretty sure that <code>IPv6Address.packed</code> is the correct format, because when I run</p> <pre><code>result = session.execute( text('SELECT HEX(:param)'), {'param': address.packed}, ) for row in result: print(row[0]) </code></pre> <p>I get back <code>200100000000000000000000000000FF</code> which is correct, and <code>SELECT CAST(X'200100000000000000000000000000FF' AS INET6)</code> on the mariadb prompt gives me <code>2001::ff</code>, which shows that in general, mariadb understands the input, just not when passed through sqlalchemy.</p> <p>Involved versions:</p> <ul> <li>sqlalchemy==2.0.29</li> <li>pymysql==1.1.0</li> <li>mariadb 10.11.6</li> </ul>
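`process_bind_param` must return a plain Python value for the DBAPI driver; SQL-level wrapping of the placeholder is what `TypeDecorator.bind_expression` is for (see SQLAlchemy's "Applying SQL-level Bind/Result Processing" docs). A sketch; the separate `INET6` UserDefinedType and the string bind format are assumptions made so the CAST renders, and result-side decoding is omitted:

```python
from sqlalchemy import cast, column
from sqlalchemy.types import LargeBinary, TypeDecorator, UserDefinedType

class INET6(UserDefinedType):
    """Plain SQL-level type so CAST renders as CAST(... AS INET6)."""
    cache_ok = True

    def get_col_spec(self, **kw):
        return "INET6"

class Inet6(TypeDecorator):
    impl = LargeBinary
    cache_ok = True

    def bind_expression(self, bindvalue):
        # SQL-level wrapping belongs here, not in process_bind_param,
        # which must return a plain Python value for the driver.
        return cast(bindvalue, INET6())

    def process_bind_param(self, value, dialect):
        # Send the readable text form; MariaDB's CAST parses it.
        return None if value is None else str(value)
```

Compiling `column('ip', Inet6()) == '2001::ff'` should then render something like `ip = CAST(:ip_1 AS INET6)`, keeping bind params readable while giving MariaDB the explicit cast it needs for `<` and `>` comparisons.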
<python><sqlalchemy>
2024-04-30 07:15:54
0
12,742
moritz
78,406,689
14,105,638
Finding the maximum value from a Random Forest Regression model in python
<p>I've been using random forest regression to calculate the Return On Ad Spend (ROAS) when the user gives the upper bound. My model takes in three input variables: the cost of TV, Radio and Newspaper ads. However, to find the most optimal value, I need to use a for loop to iterate through every dollar, which is time-consuming. Is there a faster method to find the highest y value in my program?</p> <pre class="lang-py prettyprint-override"><code>def ROASPrediction(Q,TV,Radio,Newspaper): rec = &quot;Recommended Investment for Best Sales&quot; y_big = 0 x_b = 0 y_b = 0 z_b = 0 for x in range(TV // 2, TV): for y in range(Radio // 2, Radio): for z in range(Newspaper // 2, Newspaper): customer_features = np.array([x, y, z]) customer_features1 = customer_features.reshape(1, -1) # customer_features1 =pd.DataFrame(customer_features) model_fit1 = joblib.load('/content/drive/MyDrive/ROAS.joblib') y_future_pred = model_fit1.predict(customer_features1) print(&quot;y_future_pred&quot;, y_future_pred) if (y_future_pred[0] &gt;= y_big): y_big = y_future_pred[0] x_b = x y_b = y z_b = z # y_future_pred1= str(y_future_pred[0]) + &quot;M$&quot; # y_roas= y_future_pred[0]*1000000 / (TV+Radio+Newspaper) y_future_pred1 = str(y_big) + &quot;M$&quot; y_roas = y_big * 1000000 / (TV + Radio + Newspaper) x_b1 = str(x_b) y_b1 = str(y_b) z_b1 = str(z_b) y_roas1 = str(y_roas) + &quot;%&quot; return rec, x_b1, y_b1, z_b1, y_future_pred1, y_roas1 </code></pre> <p>And the following code is my Random Forest Model.</p> <pre><code>df = pd.read_csv('/Advertising.csv') df.head() x = df[['TV', 'Radio','Newspaper']] y = df[['Sales']] x_train, x_test, y_train, y_test = train_test_split (x, y, test_size=0.20 , random_state=41) rf_regressor = RandomForestRegressor(n_estimators=100, random_state=42) rf_regressor.fit(x_train, y_train) y_pred = rf_regressor.predict(x_test) </code></pre> <p>And <a href="https://www.kaggle.com/datasets/yasserh/advertising-sales-dataset" rel="nofollow noreferrer">this</a> is the 
csv file I'm using.</p> <p>Is there any way to make the ROASPrediction function more efficient so that it does not take like 5 minutes to compute just $30 of TV, Radio and Newspaper?</p>
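Most of the five minutes goes to `joblib.load` being called inside the innermost loop and to predicting one row at a time. Load the model once, enumerate the whole grid, and score it in a single batched call. A stdlib sketch of that restructuring, where `predict_batch` stands in for the real model's `predict` on an (n, 3) array:

```python
from itertools import product

def best_allocation(predict_batch, tv, radio, news):
    # Build every candidate (tv, radio, newspaper) spend once...
    grid = list(product(range(tv // 2, tv),
                        range(radio // 2, radio),
                        range(news // 2, news)))
    # ...and score all of them in one vectorised call instead of one
    # model load + one predict per combination.
    preds = predict_batch(grid)
    best = max(range(len(grid)), key=lambda i: preds[i])
    return grid[best], preds[best]
```

With the real model this becomes roughly `model = joblib.load(path)` done once before the search, then `preds = model.predict(np.array(grid))`; coarsening the grid (step sizes larger than one dollar) shrinks the search further.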
<python><machine-learning><regression><random-forest>
2024-04-30 06:42:36
0
583
l_b
78,406,673
1,019,455
SQLAlchemy filter on Query again got extra junk rows
<p>I have Django experience. Now I would like to try some DIY python packages to my mini project. I Here is my code.</p> <p><code>models.py</code></p> <pre class="lang-py prettyprint-override"><code>import enum from datetime import date, datetime from sqlalchemy import UniqueConstraint from sqlmodel import Field, SQLModel class StatusEnum(enum.Enum): &quot;&quot;&quot;Enum class of status field.&quot;&quot;&quot; pending = &quot;pending&quot; in_progress = &quot;in_progress&quot; completed = &quot;completed&quot; class TaskContent(SQLModel, table=True): &quot;&quot;&quot;Model class for TaskContent history.&quot;&quot;&quot; identifier: str = Field(primary_key=True) # For redo mechanism id: int = Field(primary_key=False) # For human use title: str = Field(nullable=True) description: str = Field(nullable=True) due_date: date = Field(default=None, nullable=True) status: StatusEnum = Field(default=StatusEnum.pending) is_deleted: bool = Field(default=False) # For redo mechanism created_by: int = Field(nullable=True, default=None, foreign_key=&quot;user.id&quot;) created_at: datetime = Field(default=datetime.now()) # For redo mechanism class CurrentTaskContent(SQLModel, table=True): &quot;&quot;&quot;Model class for current.&quot;&quot;&quot; # https://github.com/tiangolo/sqlmodel/issues/114 __table_args__ = (UniqueConstraint(&quot;identifier&quot;, &quot;id&quot;), ) identifier: str = Field(primary_key=True) # For redo mechanism id: int = Field(primary_key=False) # For human use created_by: int = Field(nullable=True, default=None, foreign_key=&quot;user.id&quot;) updated_by: int = Field(nullable=True, default=None, foreign_key=&quot;user.id&quot;) created_at: datetime = Field(default=datetime.now()) updated_at: datetime = Field(default=datetime.now()) class User(SQLModel, table=True): &quot;&quot;&quot;User model of this application.&quot;&quot;&quot; id: int = Field(primary_key=True) username: str = Field(nullable=False) </code></pre> <p><code>main.py</code></p> <pre 
class="lang-py prettyprint-override"><code>def get_queryset() -&gt; sqlalchemy.orm.query.Query: &quot;&quot;&quot;Get the queryset of tasks.&quot;&quot;&quot; with (Session(engine) as session): # List out available id. available_id_list = session.query(CurrentTaskContent).all() available_identifier_list = [i.identifier for i in available_id_list] # Get task and id queryset_tasks = session.query(TaskContent).filter( TaskContent.identifier.in_(available_identifier_list) ).subquery() # Get username and id username_query = session.query(User.username, User.id).subquery() # Join the queryset and username final_query = session.query(queryset_tasks, username_query).outerjoin( username_query, and_(queryset_tasks.c.created_by == username_query.c.id) ) return final_query </code></pre> <p>The <code>tasks_queryset.count()</code> is <code>5</code> which is correct. My database has only 5 rows. Then I try to filter again by <code>due_date</code> with the following.</p> <p><strong>Problem:</strong><br> <code>aa</code> and <code>bb</code> both of them shows me <code>25</code> which is not what I need.</p> <pre class="lang-py prettyprint-override"><code>aa = tasks_queryset.filter(tasks_queryset.subquery().c.due_date == due_date_instance) bb = tasks_queryset.filter(TaskContent.due_date == due_date_instance) </code></pre> <p><strong>Workaround Solution:</strong><br> Put <code>due_date</code> to the table query.</p> <pre class="lang-py prettyprint-override"><code> queryset_tasks = session.query(TaskContent).filter( TaskContent.identifier.in_(available_identifier_list), or_(TaskContent.due_date == _due_date, _due_date is None) ).subquery() </code></pre> <p><strong>Question:</strong><br> How to filter the <code>tasks_queryset</code> with given <code>due_date</code>?</p>
<python><sqlalchemy><fastapi>
2024-04-30 06:40:06
1
9,623
joe
78,406,627
1,138,158
Top2Vec model gets stuck on Colab
<p>I'm trying to implement Top2Vec on Colab. The following code is working fine with the dataset &quot;https://raw.githubusercontent.com/wjbmattingly/bap_sent_embedding/main/data/vol7.json&quot; available <a href="https://colab.research.google.com/github/wjbmattingly/python_for_dh/blob/main/04_topic_modeling/04_03_04_top2vec.ipynb#scrollTo=0614e5ba-bd50-4767-9074-4e142772c528" rel="nofollow noreferrer">here</a>.</p> <p>But when I use the dataset &quot;abcnews-date-text.csv&quot;, Colab runs endlessly. Any ideas on how to resolve this?</p> <pre><code># Extract the text data from the dataset documents = data['headline_text'].tolist() # Initialize Top2Vec model top2vec_model = Top2Vec(documents, embedding_model=&quot;distiluse-base-multilingual-cased&quot;) # Get the number of topics num_topics = 5 # You can adjust this number according to your preference # Get the top topics top_topics = top2vec_model.get_topics(num_topics) # Print the top topics for i, topic in enumerate(top_topics): print(f&quot;Topic {i+1}: {', '.join(topic)}&quot;) </code></pre>
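Most likely Colab isn't hung: `abcnews-date-text.csv` has over a million headlines, and embedding all of them with a transformer model on a CPU runtime can take many hours (a GPU runtime helps a lot). A quick sanity check is to fit on a random subsample first; the sketch below is stdlib only, and the list sizes are illustrative stand-ins for the real corpus:

```python
import random

random.seed(0)
# stand-in for data['headline_text'].tolist(); the real file is far larger
headlines = [f"headline number {i}" for i in range(200_000)]

# Verify the pipeline end-to-end on a manageable subset before a full run.
sample = random.sample(headlines, 20_000)
```

If the subsample finishes, the full corpus is a matter of compute time, not a hang. Also double-check the return value of `Top2Vec.get_topics`: per its docs it returns a tuple of arrays (topic words, word scores, topic numbers) rather than a flat list, so the printing loop may need unpacking.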
<python><topic-modeling><top2vec>
2024-04-30 06:28:37
0
423
PS Nayak
78,406,587
287,367
GeoDataFrame conversion to polars with from_pandas fails with ArrowTypeError: Did not pass numpy.dtype object
<p>I am trying to convert a GeoDataFrame to a polars DataFrame with <code>from_pandas</code> so that I can continue working against the polars API, but I receive an <em>ArrowTypeError: Did not pass numpy.dtype object</em> exception.</p> <p>The expected outcome would be a polars DataFrame with the <code>geometry</code> column being typed as <code>pl.Object</code>.</p> <p>I'm aware of <a href="https://github.com/geopolars/geopolars" rel="nofollow noreferrer">https://github.com/geopolars/geopolars</a> (alpha) and <a href="https://github.com/pola-rs/polars/issues/1830" rel="nofollow noreferrer">https://github.com/pola-rs/polars/issues/1830</a> and would be OK with the shapely objects just being represented as pl.Object for now.</p> <p>Here is a minimal example to demonstrate the problem:</p> <pre class="lang-py prettyprint-override"><code>## Minimal example displaying the issue import geopandas as gpd print(&quot;geopandas version: &quot;, gpd.__version__) import geodatasets print(&quot;geodatasets version: &quot;, geodatasets.__version__) import polars as pl print(&quot;polars version: &quot;, pl.__version__) gdf = gpd.GeoDataFrame.from_file(geodatasets.get_path(&quot;nybb&quot;)) print(&quot;\nOriginal GeoDataFrame&quot;) print(gdf.dtypes) print(gdf.head()) print(&quot;\nGeoDataFrame to Polars without geometry&quot;) print(pl.from_pandas(gdf.drop(&quot;geometry&quot;, axis=1)).head()) try: print(&quot;\nGeoDataFrame to Polars naiive&quot;) print(pl.from_pandas(gdf).head()) except Exception as e: print(e) try: print(&quot;\nGeoDataFrame to Polars with schema override&quot;) print(pl.from_pandas(gdf, schema_overrides={&quot;geometry&quot;: pl.Object}).head()) except Exception as e: print(e) # again to print stack trace pl.from_pandas(gdf).head() </code></pre> <p><strong>Output</strong></p> <pre><code>geopandas version: 0.14.4 geodatasets version: 2023.12.0 polars version: 0.20.23 Original GeoDataFrame BoroCode int64 BoroName object Shape_Leng float64 Shape_Area float64 geometry geometry
dtype: object BoroCode BoroName Shape_Leng Shape_Area \ 0 5 Staten Island 330470.010332 1.623820e+09 1 4 Queens 896344.047763 3.045213e+09 2 3 Brooklyn 741080.523166 1.937479e+09 3 1 Manhattan 359299.096471 6.364715e+08 4 2 Bronx 464392.991824 1.186925e+09 geometry 0 MULTIPOLYGON (((970217.022 145643.332, 970227.... 1 MULTIPOLYGON (((1029606.077 156073.814, 102957... 2 MULTIPOLYGON (((1021176.479 151374.797, 102100... 3 MULTIPOLYGON (((981219.056 188655.316, 980940.... 4 MULTIPOLYGON (((1012821.806 229228.265, 101278... GeoDataFrame to Polars without geometry shape: (5, 4) ┌──────────┬───────────────┬───────────────┬────────────┐ │ BoroCode ┆ BoroName ┆ Shape_Leng ┆ Shape_Area │ │ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ str ┆ f64 ┆ f64 │ ╞══════════╪═══════════════╪═══════════════╪════════════╡ │ 5 ┆ Staten Island ┆ 330470.010332 ┆ 1.6238e9 │ │ 4 ┆ Queens ┆ 896344.047763 ┆ 3.0452e9 │ │ 3 ┆ Brooklyn ┆ 741080.523166 ┆ 1.9375e9 │ │ 1 ┆ Manhattan ┆ 359299.096471 ┆ 6.3647e8 │ │ 2 ┆ Bronx ┆ 464392.991824 ┆ 1.1869e9 │ └──────────┴───────────────┴───────────────┴────────────┘ GeoDataFrame to Polars naiive Did not pass numpy.dtype object GeoDataFrame to Polars with schema override Did not pass numpy.dtype object </code></pre> <p><strong>Stack trace</strong> (is the same with and without <code>schema_overrides</code>)</p> <pre><code>--------------------------------------------------------------------------- ArrowTypeError Traceback (most recent call last) Cell In[59], line 27 24 print(e) 26 # again to print stack trace ---&gt; 27 pl.from_pandas(gdf).head() File c:\Users\...\polars\convert.py:571, in from_pandas(data, schema_overrides, rechunk, nan_to_null, include_index) 568 return wrap_s(pandas_to_pyseries(&quot;&quot;, data, nan_to_null=nan_to_null)) 569 elif isinstance(data, pd.DataFrame): 570 return wrap_df( --&gt; 571 pandas_to_pydf( 572 data, 573 schema_overrides=schema_overrides, 574 rechunk=rechunk, 575 nan_to_null=nan_to_null, 576 include_index=include_index, 577 ) 578 ) 
579 else: 580 msg = f&quot;expected pandas DataFrame or Series, got {type(data).__name__!r}&quot; File c:\Users\...\polars\_utils\construction\dataframe.py:1032, in pandas_to_pydf(data, schema, schema_overrides, strict, rechunk, nan_to_null, include_index) 1025 arrow_dict[str(idxcol)] = plc.pandas_series_to_arrow( 1026 data.index.get_level_values(idxcol), 1027 nan_to_null=nan_to_null, 1028 length=length, 1029 ) 1031 for col in data.columns: -&gt; 1032 arrow_dict[str(col)] = plc.pandas_series_to_arrow( 1033 data[col], nan_to_null=nan_to_null, length=length 1034 ) 1036 arrow_table = pa.table(arrow_dict) 1037 return arrow_to_pydf( 1038 arrow_table, 1039 schema=schema, (...) 1042 rechunk=rechunk, 1043 ) File c:\Users\...\polars\_utils\construction\other.py:97, in pandas_series_to_arrow(values, length, nan_to_null) 95 return pa.array(values, from_pandas=nan_to_null) 96 elif dtype: ---&gt; 97 return pa.array(values, from_pandas=nan_to_null) 98 else: 99 # Pandas Series is actually a Pandas DataFrame when the original DataFrame 100 # contains duplicated columns and a duplicated column is requested with df[&quot;a&quot;]. 101 msg = &quot;duplicate column names found: &quot; File c:\Users\...\pyarrow\array.pxi:323, in pyarrow.lib.array() File c:\Users\...\pyarrow\array.pxi:79, in pyarrow.lib._ndarray_to_array() File c:\Users\...\pyarrow\array.pxi:67, in pyarrow.lib._ndarray_to_type() File c:\Users\...\pyarrow\error.pxi:123, in pyarrow.lib.check_status() ArrowTypeError: Did not pass numpy.dtype object </code></pre>
<python><dataframe><geopandas><python-polars><pyarrow>
2024-04-30 06:20:34
2
857
Ahue
78,406,512
480,118
Pandas: multiindex group
<p>I have the following data:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd, numpy as np data = [['id', 'date', 'b_field', 'a_field'], ['XYX', '01/01/2024', 100, 101], ['XYX', '01/02/2024', 200, 201], ['ABC', '01/01/2024', 300, 301], ['ABC', '01/02/2024', 400, 401]] id_sort=['ABC', 'XYZ'] field_sort=['a_field', 'b_field'] df = pd.DataFrame(data[1:], columns=data[:1]) </code></pre> <p><img src="https://i.sstatic.net/fmdYCJ6t.png" alt="df" /></p> <p>I would like to effectively transform this such that if I were to serialize this to a csv or json array it would look like this:</p> <p><img src="https://i.sstatic.net/fzlAY1W6.png" alt="expected_output" /></p> <p>where:</p> <ol> <li>the ID is grouped at the top (not repeated for each column)</li> <li>sort by ID and then fields</li> <li>DATE label on the same column row as the field</li> </ol> <p>So far I've tried a few things. The closest I have come is the following:</p> <pre><code>df.set_index('date').reindex(pd.MultiIndex.from_product([id_sort, field_sort]),axis=1).reset_index() </code></pre> <p>Of course, as you can see there are several issues:</p> <ol> <li>no values</li> <li>the 'date' label is at the top level, not 2nd</li> <li>dates look like a tuple, not sure why</li> </ol> <p><img src="https://i.sstatic.net/z9ZNx5nm.png" alt="enter image description here" /></p> <p>EDIT: For #2, for shifting the date label down, it looks like I can do this (not sure if it's the best way):</p> <pre class="lang-py prettyprint-override"><code>df = df.rename(columns={'date': '', '':'date'}) </code></pre>
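A hedged sketch of one way to get the grouped two-row header with `pivot`. Two assumptions about the sample: I use 'XYZ' consistently (the original data mixes 'XYX' and 'XYZ'), and I build the frame with `columns=data[0]`, since `columns=data[:1]` passes a nested list and produces odd column labels:

```python
import pandas as pd

data = [['id', 'date', 'b_field', 'a_field'],
        ['XYZ', '01/01/2024', 100, 101],
        ['XYZ', '01/02/2024', 200, 201],
        ['ABC', '01/01/2024', 300, 301],
        ['ABC', '01/02/2024', 400, 401]]
df = pd.DataFrame(data[1:], columns=data[0])

# pivot yields columns (field, id); swaplevel puts the id on top, and
# sort_index orders both levels (ABC before XYZ, a_field before b_field).
out = (df.pivot(index='date', columns='id', values=['a_field', 'b_field'])
         .swaplevel(axis=1)
         .sort_index(axis=1))
```

Serializing with `out.to_csv()` should then place the id row above the field row, with the `date` index label emitted on its own header line below the column levels, which matches the layout in the screenshot.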
<python><pandas>
2024-04-30 06:01:17
3
6,184
mike01010
78,406,109
424,957
How to set xlim when save some frames to file by matplotlib in python?
<p>I use Python and matplotlib to create a bar chart race animation; the data is a time series that keeps updating every day. I want to save only the updated frames to the file. For example, there were 10460 frames until yesterday, and the video file took one hour to create. <a href="https://i.sstatic.net/V0XJfcnt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V0XJfcnt.png" alt="enter image description here" /></a></p> <p>Ten new frames were appended today. I used the code below to save just the newest frames to the file, but the x-axis is too short. How can I set the x-axis to the same maximum x as in the earlier frames?</p> <pre><code>anim = FuncAnimation(self.fig, self.anim_func, frames[10460:], init_func, interval=interval) </code></pre> <p><a href="https://i.sstatic.net/6TZ0omBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6TZ0omBM.png" alt="enter image description here" /></a></p>
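When only a slice of frames is rendered, the axes are autoscaled from just those frames. A sketch of the usual fix (assuming your `anim_func` clears the axes each frame, as bar chart race code typically does): compute the global maximum once across all data, and re-apply it with `ax.set_xlim` after every clear:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; render without opening a window
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
GLOBAL_XMAX = 10460  # max value across ALL frames, old and new, computed once

def anim_func(frame):
    ax.cla()                       # cla() resets the limits...
    ax.barh([0], [frame % 100])
    ax.set_xlim(0, GLOBAL_XMAX)    # ...so pin them again after every clear
    return []

anim = FuncAnimation(fig, anim_func, frames=range(10460, 10470), blit=False)
```

If your `anim_func` does not clear the axes, a single `self.ax.set_xlim(0, GLOBAL_XMAX)` in `init_func` should be enough.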
<python><matplotlib-animation>
2024-04-30 03:27:55
1
2,509
mikezang
78,406,108
10,964,685
Update dmc slider text style - plotly dash
<p>How can the ticks be manipulated using dash mantine components? I've got a slider below that alters the opacity of a bar graph. I know you can change the size and radius of the slider bar, but I want to change the font size and color of the mark labels beneath the slider.</p> <p>You could use the following CSS with <code>dcc.Slider</code>, but is there a similar way to control <code>dmc.Slider</code>?</p> <pre><code>.rc-slider-mark-text { font-size: 10px; color: blue; } .rc-slider-mark-text-active { font-size: 10px; color: red; } </code></pre> <p>I've tried changing the CSS file, to no avail. Also, altering the font size or color in the style parameter has no effect.</p> <pre><code>import dash from dash import dcc from dash import html from dash.dependencies import Input, Output import dash_bootstrap_components as dbc import dash_mantine_components as dmc import plotly.express as px import plotly.graph_objs as go import pandas as pd df = pd.DataFrame({ 'Fruit': ['Apple','Banana','Orange','Kiwi','Lemon'], 'Value': [1,2,4,8,6], }) external_stylesheets = [dbc.themes.SPACELAB, dbc.icons.BOOTSTRAP] app = dash.Dash(__name__, external_stylesheets = external_stylesheets) filter_box = html.Div(children=[ html.Div(children=[ dmc.Text(&quot;trans&quot;), dmc.Slider(id = 'bar_transp', min = 0, max = 1, step = 0.1, marks = [ {&quot;value&quot;: 0, &quot;label&quot;: &quot;0&quot;}, {&quot;value&quot;: 0.2, &quot;label&quot;: &quot;0.2&quot;}, {&quot;value&quot;: 0.4, &quot;label&quot;: &quot;0.4&quot;}, {&quot;value&quot;: 0.6, &quot;label&quot;: &quot;0.6&quot;}, {&quot;value&quot;: 0.8, &quot;label&quot;: &quot;0.8&quot;}, {&quot;value&quot;: 1, &quot;label&quot;: &quot;1&quot;}, ], value = 1, size = 'lg', style = {&quot;font-size&quot;: 2, &quot;color&quot;: &quot;white&quot;}, #doesn't work ), ], className = &quot;vstack&quot;, ) ]) app.layout = dbc.Container([ dbc.Row([ dbc.Col(html.Div(filter_box), ), dcc.Graph(id = 'type-chart'), ]) ], fluid = True) @app.callback(
Output('type-chart', 'figure'), [ Input('bar_transp', 'value'), ]) def chart(bar_transp): df_count = df.groupby(['Fruit'])['Value'].count().reset_index(name = 'counts') df_count = df_count type_fig = px.bar(x = df_count['Fruit'], y = df_count['counts'], color = df_count['Fruit'], opacity = bar_transp, ) return type_fig if __name__ == '__main__': app.run_server(debug = True) </code></pre>
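dmc sliders are Mantine components, so the equivalent hook is Mantine's own static class names rather than `rc-slider-*`. The selectors below are an assumption based on Mantine's `mantine-<Component>-<part>` naming convention; confirm the exact classes in your browser's inspector, since they can vary between dash-mantine-components versions:

```css
/* assumed Mantine class names -- verify with the browser inspector */
.mantine-Slider-markLabel {
    font-size: 10px;
    color: blue;
}
.mantine-Slider-mark {
    border-color: red;
}
```

Place this in a file under `assets/`, just like the `rc-slider` rules. The inline `style` prop only reaches the slider's root element, which is why it has no effect on the mark labels.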
<python><css><plotly><plotly-dash><mantine>
2024-04-30 03:27:48
1
392
jonboy
78,406,082
2,705,757
Playwright Python: expect(locator).to_be_visible() vs locator.wait_for()
<p>Is there a difference between <code>expect(locator).to_be_visible()</code> and <code>locator.wait_for()</code>? Should I prefer one over the other?</p> <p>I've looked up the <code>__doc__</code>s:</p> <ul> <li><a href="https://playwright.dev/python/docs/api/class-locator#locator-wait-for" rel="nofollow noreferrer"><code>wait_for</code></a>: Returns when element specified by locator satisfies the <code>state</code> option. (<code>state</code> defaults to <code>'visible'</code>).</li> <li><a href="https://playwright.dev/python/docs/api/class-locatorassertions#locator-assertions-to-be-visible" rel="nofollow noreferrer"><code>to_be_visible</code></a>: Ensures that Locator points to an attached and visible DOM node.</li> </ul> <p>Still, not very clear. Is it possible for a locator to be visible but not attached? If so, would they be equal if I know for sure that the located element is attached?</p>
<python><playwright><playwright-python>
2024-04-30 03:18:26
1
9,226
AXO
78,406,021
1,568,919
pandas series get value by index with fill value if the index does not exist
<pre><code>ts = pd.Series({'a' : 1, 'b' : 2}) ids = ['a','c'] # 'c' is not in the index # the result I want np.array([ts.get(k, np.nan) for k in ids]) </code></pre> <p>Is there a pandas native way to achieve this?</p>
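A native spelling is `Series.reindex`, which aligns the series to the requested labels and fills anything missing (NaN by default, or whatever `fill_value` you pass):

```python
import numpy as np
import pandas as pd

ts = pd.Series({'a': 1, 'b': 2})
ids = ['a', 'c']  # 'c' is not in the index

out = ts.reindex(ids)    # 'c' becomes NaN
arr = out.to_numpy()     # same result as the list comprehension

zero_filled = ts.reindex(ids, fill_value=0)   # 'c' becomes 0 instead
```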
<python><pandas>
2024-04-30 02:53:32
2
7,411
jf328
78,405,885
8,954,291
TQDM: Track loop inside ProcessPoolExecutor job
<p>I have some number of jobs that have been submitted to a <code>ProcessPoolExecutor</code>. I want to track the progress of each job with <code>tqdm</code>. However, when I run the code below, the progress bars constantly swap places. I believe this is because *handwavey asynchronicity explanation* but I don't know the method to get around that. I imagine I could theoretically pipeline the progress bar up to the parent process, but I'm not familiar enough with <code>concurrent</code> or <code>tqdm</code> to do that. What's the pythonic answer?</p> <p>MWE:</p> <pre class="lang-py prettyprint-override"><code>from tqdm import tqdm import time import concurrent import random from concurrent.futures import ProcessPoolExecutor def main(): pool = ProcessPoolExecutor() futures = [pool.submit(my_function,x) for x in range(10)] for future in concurrent.futures.as_completed(futures): print(future) def my_function(fn_number): for i in tqdm(range(500),desc=f&quot;{fn_number}_outer&quot;): for j in tqdm(range(500),desc=f&quot;{fn_number}_inner&quot;): # Sometimes the function skips to the outer loop. # This is foiling using tqdm.update() if random.random()&gt;0.9: break time.sleep(0.1) return fn_number if __name__=='__main__': main() </code></pre>
<python><python-multiprocessing><tqdm>
2024-04-30 01:52:16
0
1,351
Jakob Lovern
78,405,756
2,430,558
how to run a chi-square goodness of fit test on fitting discrete probability distributions in Python
<p>I am trying to test the fit of several probability distributions on my data and perform a Maximum Likelihood estimation and KS test on the fit of each probability distribution to my data. My code works for continuous probability distributions but not for discrete ones, because they do not have a <code>.fit</code> method. To fix the discrete distributions I am attempting to use a chi-square test, but I cannot get it to work.</p> <pre><code>import pandas as pd import numpy as np import scipy.stats as st from scipy.stats import kstest data = pd.read_csv(r'...\demo_data.csv') data = pd.DataFrame(data) #continuous variables continuous_results = [] for cvar in [&quot;cvar1&quot;, &quot;cva2&quot;, &quot;cvar3&quot;]: #probability distributions cdistributions = [ st.arcsine, st.alpha, st.beta, st.betaprime, st.bradford, st.burr, st.chi, st.chi2, st.cosine, st.dgamma, st.dweibull, st.expon, st.exponweib, st.exponpow, st.genlogistic, st.genpareto, st.genexpon, st.gengamma, st.genhyperbolic, st.geninvgauss, st.gennorm, st.f, st.gamma, st.invgamma, st.invgauss, st.invweibull, st.laplace, st.logistic, st.loggamma, st.loglaplace, st.loguniform, st.nakagami, st.norm, st.pareto, st.powernorm, st.powerlaw, st.rdist, st.semicircular, st.t, st.trapezoid, st.triang, st.tukeylambda, st.uniform, st.wald, st.weibull_max, st.weibull_min ] for distribution in cdistributions: try: #fit each probability distribution to each variable in the data pars = distribution.fit(data[cvar]) mle = distribution.nnlf(pars, data[cvar]) #perform ks test ks_result = kstest(data[cvar], distribution.cdf, args = pars) #create dictionary to store results for each variable/distribution result = { &quot;variable&quot;: cvar, &quot;distribution&quot;: distribution.name, &quot;type&quot;: &quot;continuous&quot;, &quot;mle&quot;: mle, &quot;ks_stat&quot;: ks_result.statistic, &quot;ks_pvalue&quot;: ks_result.pvalue } continuous_results.append(result) except Exception as e: # assign 0 for error mle = 0 </code></pre>
<p>this code isn't running a chi-square test or ks test and I am unsure how to fix it:</p> <pre><code>#discrete variables discrete_results = [] for var in [&quot;dvar1&quot;, &quot;dvar2&quot;]: #probability distributions distributions = [ st.bernoulli, st.betabinom, st.binom, st.boltzmann, st.poisson, st.geom, st.nbinom, st.hypergeom, st.zipf, st.zipfian, st.logser, st.randint, st.dlaplace ] for distribution in distributions: try: #fit each probability distribution to each variable in the data distfit = getattr(st, distribution.name) chisq_stat, pval = st.chisquare(data[var], f = distfit.pmf(range(len(data[var])))) #perform ks test ks_result = kstest(data[var], distribution.cdf, args = distfit.pmf(range(len(data[var])))) #create dictionary to store results for each variable/distribution result = { &quot;variable&quot;: var, &quot;distribution&quot;: distribution.name, &quot;type&quot;: &quot;continuous&quot;, &quot;chisq&quot;: chisq_stat, &quot;pvalue&quot;: pval, &quot;ks_stat&quot;: ks_result.statistic, &quot;ks_pvalue&quot;: ks_result.pvalue } discrete_results.append(result) except Exception as e: # assign 0 for error chisq_stat = 0 </code></pre> <p>printing e:</p> <pre><code>_parse_args() missing 1 required positional argument: 'p' _parse_args() missing 3 required positional arguments: 'n', 'a', and 'b' _parse_args() missing 2 required positional arguments: 'n' and 'p' _parse_args() missing 2 required positional arguments: 'lambda_' and 'N' _parse_args() missing 1 required positional argument: 'mu' _parse_args() missing 1 required positional argument: 'p' _parse_args() missing 2 required positional arguments: 'n' and 'p' _parse_args() missing 3 required positional arguments: 'M', 'n', and 'N' _parse_args() missing 1 required positional argument: 'a' _parse_args() missing 2 required positional arguments: 'a' and 'n' _parse_args() missing 1 required positional argument: 'p' _parse_args() missing 2 required positional arguments: 'low' and 'high' _parse_args() 
missing 1 required positional argument: 'a' _parse_args() missing 1 required positional argument: 'p' _parse_args() missing 3 required positional arguments: 'n', 'a', and 'b' _parse_args() missing 2 required positional arguments: 'n' and 'p' _parse_args() missing 2 required positional arguments: 'lambda_' and 'N' _parse_args() missing 1 required positional argument: 'mu' _parse_args() missing 1 required positional argument: 'p' _parse_args() missing 2 required positional arguments: 'n' and 'p' _parse_args() missing 3 required positional arguments: 'M', 'n', and 'N' _parse_args() missing 1 required positional argument: 'a' _parse_args() missing 2 required positional arguments: 'a' and 'n' _parse_args() missing 1 required positional argument: 'p' _parse_args() missing 2 required positional arguments: 'low' and 'high' _parse_args() missing 1 required positional argument: 'a' </code></pre>
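The `_parse_args()` errors come from calling `distfit.pmf(...)` on the distribution *class* without its shape parameters; discrete distributions need fitted parameters first. Since SciPy 1.9 there is `scipy.stats.fit`, which can estimate those parameters for discrete families (it requires explicit `bounds` for each shape parameter), and the expected chi-square counts then come from the fitted pmf applied to the *observed value counts*, not to the raw data column. A hedged Poisson-only sketch with synthetic data standing in for `data[var]`:

```python
import numpy as np
import scipy.stats as st

rng = np.random.default_rng(0)
data = rng.poisson(3.0, size=500)          # stand-in for data[var]

# scipy.stats.fit (SciPy >= 1.9) handles discrete distributions,
# which have no .fit method; each shape parameter needs a bound.
res = st.fit(st.poisson, data, bounds={'mu': (0, 20)})
mu_hat = res.params.mu

# Chi-square compares *counts*, not raw values. (Rare tail values should
# really be pooled into one bin to keep expected counts reasonable.)
values, observed = np.unique(data, return_counts=True)
expected = len(data) * st.poisson.pmf(values, mu_hat)
expected *= observed.sum() / expected.sum()     # chisquare needs equal sums
chi2_stat, pval = st.chisquare(observed, expected, ddof=1)  # 1 fitted param
```

Generalizing to the whole `distributions` list means carrying a per-distribution `bounds` dict, since each family has different shape parameters; those parameters are exactly the missing `'p'`, `'n'`, `'mu'`, ... arguments the traceback lists.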
<python><pandas><mle><probability-distribution>
2024-04-30 00:49:10
0
449
DrPaulVella
78,405,638
14,890,683
Pydantic Generic Basemodel Validation Error
<p>Pydantic <a href="https://docs.pydantic.dev/latest/concepts/models/#generic-models" rel="nofollow noreferrer">Generic BaseModels</a> fails unexpectedly when validating python instances but succeeds when validating the same data as a dictionary.</p> <p>Python: 3.11.8 Pydantic: 2.7.1</p> <p>Here is the Generic Model:</p> <pre class="lang-py prettyprint-override"><code>from typing import Generic, TypeVar from pydantic import BaseModel U = TypeVar(&quot;U&quot;) class M1(BaseModel, Generic[U]): a: list[U] b: list[U] class M2(BaseModel): a: str m1: M1[int] </code></pre> <p>It works when validating a raw dictionary:</p> <pre class="lang-py prettyprint-override"><code>m2 = M2.model_validate( { &quot;a&quot;: &quot;s&quot;, &quot;m1&quot;: { &quot;a&quot;: [1], &quot;b&quot;: [2] } } ) </code></pre> <p>However, it does not work when using python types:</p> <pre class="lang-py prettyprint-override"><code>m2 = M2( a=&quot;s&quot;, m1=M1( a=[1], b=[2] ) ) # fails :( </code></pre> <p>Interestingly enough, the following also fails:</p> <pre class="lang-py prettyprint-override"><code>m2 = M2.model_validate( dict( a=&quot;s&quot;, m1=M1( a=[1], b=[2] ) ) ) </code></pre>
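The dict works because pydantic builds the nested model itself as `M1[int]`, while `M1(a=[1], b=[2])` instantiates the *unparametrized* class, which pydantic 2 treats as a different model from `M1[int]` and rejects. Instantiating the parametrized class directly is one fix; the sketch below repeats the question's models so it is self-contained:

```python
from typing import Generic, TypeVar

from pydantic import BaseModel

U = TypeVar("U")

class M1(BaseModel, Generic[U]):
    a: list[U]
    b: list[U]

class M2(BaseModel):
    a: str
    m1: M1[int]

# Build the parametrized generic -- the exact type the m1 field declares.
m2 = M2(a="s", m1=M1[int](a=[1], b=[2]))
```

Alternatively, round-trip through a dict (`m1=M1(a=[1], b=[2]).model_dump()`), which re-validates the data instead of checking the instance's class.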
<python><pydantic>
2024-04-29 23:57:32
1
345
Oliver
78,405,637
18,346,591
Multiprocessing and connection pool error: cannot pickle 'psycopg2.extensions.connection' object
<p>I am trying to pass a db object that uses a connection pool in Postgres to another class object but I keep getting an error.</p> <pre><code>if __name__ == &quot;__main__&quot;: cores = calculate_number_of_cores() pretty_print(f&quot;Number of cores ==&gt; {cores}&quot;) db_manager = DatabaseManager() with ProcessPoolExecutor(cores) as executor: while True: if LOCAL_RUN: pretty_print(&quot;ALERT: Doing a local run of the automation with limited capabilities.&quot;) list_of_clients = db_manager.get_clients() print(list_of_clients) random.shuffle(list_of_clients) list_of_not_attempted_clients_domains = db_manager.get_not_attempted_domains_that_matches_random_client_tags() group_of_clients_handlers = {} pretty_print(list_of_not_attempted_clients_domains) # no matches if not list_of_not_attempted_clients_domains: sleep = 60 * 10 pretty_print(f'No matches found. Sleeping for {sleep}s') time.sleep(sleep) continue for client in list_of_clients: client_id = client[0] client_name = client[1] group_of_clients_handlers[client_id] = [ClientsHandler(db_manager), client_name] try: list(executor.map( partial(run, group_of_clients_handlers), list_of_not_attempted_clients_domains )) # for not_attempted_clients_domains in list_of_not_attempted_clients_domains: # futures = executor.submit(run(group_of_clients_handlers, not_attempted_clients_domains)) # for future in concurrent.futures.as_completed(futures): # pretty_print(&quot;Loading futures....&quot;) # print(future.result()) time.sleep(60) pretty_print(&quot;sleeping for 60secs&quot;) except Exception as err: pretty_print(err) </code></pre> <p>I am using a multiprocessing module in Python. My objective is to have ready connections that we use for every query to the database rather than having to always connect to the database from for every execution or process.</p> <p>It's a must I pass the <code>db_manager</code> to the <code>run</code> function as I don't want to have to create a new connection there. 
But because of this, I am getting the error: cannot pickle 'psycopg2.extensions.connection' object.</p> <p>I do not even know exactly what that means. The only thing I know is that when I submit the <code>run</code> function to the executor manually, the code works properly. That's the commented-out code.</p> <p>I do not know any other way to navigate this. I kinda know where the error is coming from but how to solve it so it doesn't interfere with my setup is such a pain.</p>
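The error means exactly what it says: `ProcessPoolExecutor` sends arguments to workers by pickling them, and an open socket-backed `psycopg2` connection cannot be pickled. Connections cannot cross process boundaries at all, so the usual pattern is one connection (or pool) *per worker process*, created via the executor's `initializer` hook rather than passed in. A stdlib-only sketch, with a string standing in for the real `psycopg2.connect` call and illustrative names throughout:

```python
from concurrent.futures import ProcessPoolExecutor

_worker_db = None  # each worker process gets its own copy of this global

def init_worker(dsn):
    """Runs once in every worker process; open the connection here."""
    global _worker_db
    # real code: _worker_db = psycopg2.connect(dsn)  (or a connection pool)
    _worker_db = f"connection({dsn})"

def run(domain):
    # uses the per-process connection instead of a pickled db_manager
    return (_worker_db, domain)

if __name__ == "__main__":
    with ProcessPoolExecutor(2, initializer=init_worker,
                             initargs=("dbname=clients",)) as ex:
        results = list(ex.map(run, ["a.com", "b.com"]))
```

Note the commented-out `executor.submit(run(...))` variant only "worked" because it called `run` in the parent process before submitting, so nothing was ever pickled.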
<python><postgresql><parallel-processing><multiprocessing><connection-pooling>
2024-04-29 23:57:30
1
662
Alexander Obidiegwu
78,405,479
12,240,037
ArcGIS REST Map Services Data Query using geometry in Python
<p>I am attempting to access ArcGIS Map Services and query a dataset to only return records that intersect the boundary of a custom geometry I created. I have queried and downloaded data before using this script, but never using geometry. Do i simply add my <code>geometry_json</code> to the <code>Where</code> clause?</p> <pre><code>#My geometry (a simply polygon) geometry_json = { &quot;geometryType&quot; : &quot;esriGeometryPolygon&quot;, &quot;spatialReference&quot; : { &quot;wkid&quot; : 4326, &quot;latestWkid&quot; : 4326 }, &quot;features&quot; : [{ &quot;geometry&quot;:{ &quot;curveRings&quot;:[ [[-123.77335021899995,49.353870065000081],{&quot;a&quot;:[[-123.77335021899995,49.353870065000081],[-123.88831213030559,49.338864996255367],0,1]}]]}}] } </code></pre> <p>The script to access and download the data</p> <pre><code>import arcpy # Import the arcpy module, which provides access to ArcGIS functionality. import urllib.request # Import the urllib.request module to work with URLs. import json # Import the json module for JSON data handling. # Setup arcpy.env.overwriteOutput = True baseURL = r&quot;https://gisp.dfo-mpo.gc.ca/arcgis/rest/services/FGP/MSDI_Dynamic_Current_Layer/MapServer/0&quot; # Base URL fields = &quot;*&quot; # Return all fields outdata = r&quot;out/path&quot; #output location # Get record extract limit urlstring = baseURL + &quot;?f=json&quot; # Construct the URL to request JSON metadata from the service. j = urllib.request.urlopen(urlstring) # Open the URL and get the response. js = json.load(j) # Load the JSON response into a Python dictionary. maxrc = int(js[&quot;maxRecordCount&quot;]) # Extract the maximum record count from the JSON response. print (&quot;Record extract limit: %s&quot; % maxrc) # Print the maximum record count. # Get object ids of features where = &quot;1=1&quot; # Define a where clause to retrieve all records. 
urlstring = baseURL + &quot;/query?where={}&amp;returnIdsOnly=true&amp;f=json&quot;.format(where) # Construct the URL to request object IDs. j = urllib.request.urlopen(urlstring) # Open the URL and get the response. js = json.load(j) # Load the JSON response into a Python dictionary. idfield = js[&quot;objectIdFieldName&quot;] # Extract the name of the object ID field. idlist = js[&quot;objectIds&quot;] # Extract the list of object IDs. idlist.sort() # Sort the list of object IDs. numrec = len(idlist) # Get the number of records. print (&quot;Number of target records: %s&quot; % numrec) # Print the number of records. # Gather features print (&quot;Gathering records...&quot;) # Print a message indicating feature gathering. fs = dict() # Create an empty dictionary to store features. for i in range(0, numrec, maxrc): # Loop over the object IDs in chunks based on the maximum record count. torec = i + (maxrc - 1) # Calculate the end index of the chunk. if torec &gt; numrec: # Adjust the end index if it exceeds the number of records. torec = numrec - 1 fromid = idlist[i] # Get the starting object ID of the chunk. toid = idlist[torec] # Get the ending object ID of the chunk. where = &quot;{} &gt;= {} and {} &lt;= {}&quot;.format(idfield, fromid, idfield, toid) # Define a where clause for the chunk. print (&quot; {}&quot;.format(where)) # Print the where clause. urlstring = baseURL + &quot;/query?where={}&amp;returnGeometry=true&amp;outFields={}&amp;f=json&quot;.format(where,fields) # Construct the URL to request features. fs[i] = arcpy.FeatureSet() # Create a new empty FeatureSet object. fs[i].load(urlstring) # Load features from the URL into the FeatureSet. # Save features print (&quot;Saving features...&quot;) # Print a message indicating feature saving. fslist = [] # Create an empty list to store FeatureSets. for key,value in fs.items(): # Iterate over the dictionary of FeatureSets. fslist.append(value) # Append each FeatureSet to the list. 
arcpy.Merge_management(fslist, outdata) # Merge all FeatureSets into a single feature class at the specified output location. print (&quot;Done!&quot;) # Print a message indicating completion. </code></pre>
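No: in the ArcGIS REST `query` operation, spatial filters are their own query parameters, not part of `where`. The endpoint also expects a bare geometry object (e.g. `rings` for a polygon), not the feature-set wrapper shown above, and `curveRings` (true curves) are only honored by services that support them, so densifying the curve into ordinary `rings` is safer. A stdlib-only sketch of building the request; the polygon coordinates are placeholders:

```python
import json
import urllib.parse

baseURL = ("https://gisp.dfo-mpo.gc.ca/arcgis/rest/services/FGP/"
           "MSDI_Dynamic_Current_Layer/MapServer/0")

polygon = {   # plain rings; the first point is repeated to close the ring
    "rings": [[[-123.89, 49.33], [-123.77, 49.35],
               [-123.77, 49.33], [-123.89, 49.33]]],
    "spatialReference": {"wkid": 4326},
}

params = urllib.parse.urlencode({
    "where": "1=1",
    "geometry": json.dumps(polygon),
    "geometryType": "esriGeometryPolygon",
    "inSR": 4326,
    "spatialRel": "esriSpatialRelIntersects",
    "returnIdsOnly": "true",
    "f": "json",
})
urlstring = baseURL + "/query?" + params
```

Append the same geometry parameters to the chunked feature queries as well, so the object-ID list and the downloaded features both honor the filter. For large geometries, send the parameters in a POST body instead of the query string to avoid URL length limits.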
<python><json><rest><arcgis><arcpy>
2024-04-29 22:47:25
1
327
seak23
78,405,474
1,115,716
Convert Word Docx to PDF headlessly?
<p>I'm currently using the <code>docx2pdf</code> Python library to convert some Word <code>docx</code> files to <code>pdf</code>.</p> <pre><code>from docx2pdf import convert docx_file = '/tmp/One_of_hundreds.docx' file_ext = os.path.splitext(docx_file)[1] if file_ext.lower() == '.docx': # we'll need to convert this to a PDF pdf_path = os.path.splitext(docx_file)[0] + '.pdf' convert(docx_file, pdf_path) </code></pre> <p>Unfortunately, this spins up MS-Word, which I think would be jarring for a user to see, especially if it happens multiple times (in the hundreds). Is there no way to suppress the Word UI (I'm on MacOS if that helps)? Additionally, this also spins up a dialog asking for permissions, which mercifully happens only once; is there no way to prevent that?</p>
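On macOS, `docx2pdf` drives Word itself through AppleScript, which is why the application appears and why macOS asks for an automation permission; there is no documented flag to suppress either. A common alternative is LibreOffice's fully headless converter. The sketch below only builds the command (it assumes LibreOffice is installed, and conversion fidelity can differ from Word's):

```python
import shutil
import subprocess

def libreoffice_pdf_cmd(docx_path: str, out_dir: str) -> list[str]:
    """Build a headless docx->pdf command for LibreOffice's soffice binary."""
    soffice = shutil.which("soffice") or \
        "/Applications/LibreOffice.app/Contents/MacOS/soffice"
    return [soffice, "--headless", "--convert-to", "pdf",
            "--outdir", out_dir, docx_path]

# for docx_file in docx_files:
#     subprocess.run(libreoffice_pdf_cmd(docx_file, "/tmp"), check=True)
```

Because `--outdir` is a directory, the output keeps the input's base name with a `.pdf` extension, which matches the `os.path.splitext` logic above.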
<python><docx2pdf>
2024-04-29 22:46:21
0
1,842
easythrees
78,405,421
5,639,469
Custom type for mongoengine field
<p>I am creating a DDD project in Python where domain entities are mapped 1:1 to MongoDB models.</p> <p>I have the following class:</p> <pre><code>class Experiment(Document, Entity): id: ExperimentId = ExperimentId.Field(db_field='_id', primary_key=True, required=True) name: ExperimentName = ExperimentName.Field(required=True) created_at: datetime.datetime = DateTimeField() def __init__(self, id: ExperimentId, name: ExperimentName): if not id: raise NoLabsException('Experiment id is empty', ErrorCodes.invalid_experiment_id) if not name: raise NoLabsException('Experiment name is empty', ErrorCodes.invalid_experiment_name) created_at = datetime.datetime.now(tz=datetime.timezone.utc) super().__init__(id=id, name=name, created_at=created_at) def set_name(self, name: ExperimentName): if not name: raise NoLabsException('Name cannot be empty', ErrorCodes.invalid_experiment_name) self.name = name </code></pre> <p>As you can see, id and name have their own field types (ExperimentId.Field is just inherited from UUIDField, for example). But when I set up the Experiment model, id and name become plain UUID and str (when I access experiment.name, for example), not the actual ExperimentId and ExperimentName that I set up before. How can I preserve the actual types?</p>
<python><mongoengine>
2024-04-29 22:21:53
0
330
JaktensTid
78,405,417
3,084,178
Changing the color scheme in a matplotlib animated gif
<p>I'm trying to create an animated gif that takes a Python graph in something like the plt.style.use('dark_background') style (i.e. black background with white text) and fades it to something like the default style (i.e. white background with black text).</p> <p><a href="https://i.sstatic.net/r76QU3kZ.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r76QU3kZ.gif" alt="animated gif" /></a></p> <p>The above is the result of running the below. You'll see that it doesn't quite work because the area around the plot area and legend area stubbornly remains white. I've tried numerous variations on the below, but can't figure it out.</p> <p>I'm also trying to get it not to loop. The repeat=False doesn't seem to do it. But that's a secondary issue. The main one is: how do I get the background of the figure to change its color during an animation?</p> <pre><code>import matplotlib.pyplot as plt import matplotlib.animation as animation import numpy as np import matplotlib as mpl # Data for the plot x1 = np.linspace(1,100) y1 = np.sin(x1) ##################################### # STEP 1: FADE TO WHITE - FAILED ##################################### # Create the figure and axis fig, ax = plt.subplots() # Plot the data line1, = ax.plot(x1, y1, label='sin') # Set the title and axis labels ax.set_title('Title') ax.set_xlabel('x axis') ax.set_ylabel('y axis') # Add a legend #ax.legend(loc='right') legend = ax.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.)
plt.subplots_adjust(right=0.79) # Function to update the plot for the animation def update1(frame): if frame&gt;=noFrames: frame=noFrames-1 startWhite = tuple([(noFrames-frame-1)/(noFrames-1)]*3) startBlack = tuple([frame/(noFrames-1)]*3) ax.cla() # Plot the data line1, = ax.plot(x1, y1, label='sin') # Set the title and axis labels ax.set_title('Title',color=startWhite) ax.set_xlabel('X Axis',color=startWhite) ax.set_ylabel('Y Axis',color=startWhite) # Add a legend #ax.legend(loc='right') legend = ax.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.) plt.subplots_adjust(right=0.79) fig.patch.set_color(None) fig.patch.set_facecolor(startBlack) ax.patch.set_facecolor(startBlack) fig.patch.set_edgecolor(startBlack) fig.patch.set_color(startBlack) plt.rcParams['axes.facecolor'] = startBlack plt.rcParams['axes.edgecolor'] = startBlack plt.rcParams['axes.labelcolor'] = startWhite plt.rcParams['axes.titlecolor'] = startWhite plt.rcParams['legend.facecolor'] = startBlack plt.rcParams['legend.edgecolor'] = startWhite plt.rcParams['legend.labelcolor'] = startWhite plt.rcParams['figure.facecolor'] = startBlack plt.rcParams['figure.edgecolor'] = startBlack plt.rcParams['xtick.color'] = startWhite plt.rcParams['ytick.color'] = startWhite plt.rcParams['text.color'] = startWhite fig.canvas.draw_idle() return fig, noFrames = 50 # Create the animation ani = animation.FuncAnimation(fig, update1, frames=range(noFrames*5), blit=False, repeat=False) ani.event_source.stop() #stop the looping # Save the animation as a GIF ani.save('TEST01_fade.gif', writer='pillow', fps=10) plt.close() </code></pre>
<python><matplotlib><animation>
2024-04-29 22:19:44
1
1,014
Dr Xorile
78,405,379
15,045,917
How to pass on values when converting a recursive function to an iterative one with stack and while loop
<p><strong>EDIT:</strong> My initial recursive code was incomplete, as I did not multiply by <code>f(l, r)</code> the sum of the two recursive branches.</p> <p>Given a function <code>f(l, r)</code> that computes something from the nodes <code>(l, r)</code> of a binary tree of height <code>max_height</code>, I want to compute a total value by adding values of the left and right node children and multiply the sum with the value of the parent node and pass that along the tree.</p> <p>I have a working recursive implementation of this, but now I want to eliminate the recursion with a while loop and stack structures. My problem is that I don't know how to &quot;pass on&quot; values during the while loop. That is, I don't know how I should replicate the behavior when I multiply the current value <code>f(l, r)</code> with the sum from the two recursion branches.</p> <p>I have included two code snippets: The first is the current recursive implementation and the second one is my attempt at the more iterative based approach. The latter code needs more work, and I have placed commented TODOs to indicate some of my questions.</p> <pre><code>def recursive_travel(l, r, cur_height, max_height):
    if cur_height == max_height - 1:
        return f(l, r) * (f(l + 1, r) + f(l, r + 1))
    return f(l, r) * (recursive_travel(l + 1, r, cur_height + 1, max_height)
                      + recursive_travel(l, r + 1, cur_height + 1, max_height))
</code></pre> <p>where the initial call will be <code>recursive_travel(0, 0, 0, max_height)</code>.</p> <p>Attempt at removing the recursion:</p> <pre><code>def iterative_travel(max_height):
    call_stack = [(0, 0, 0)]  # cur_height, l, r in that order
    handled_stack = []  # TODO: Maybe I need to have something like this, or maybe I need a double array to store computed values?

    # Precompute the value of r_c directly to an n x n table for fast access
    pre_f = [[f(l, r) for l in range(0, max_height + 1)] for r in range(0, max_height + 1)]

    while call_stack:
        cur_height, l, r = stack.pop()
        if max_height - 1 == cur_height:
            # TODO: Not sure how to pass on the computed values
            # TODO: Where I should put this value? In some table? In some stack?
            value = pre_f[l, r] * (pre_f[l + 1, r] + pre_f[l, r + 1])
            # TODO: Should I mark somewhere that the node (l, r) has been handled?
        elif handled_stack:
            # TODO: Not sure how to handle the computed values
            pass
        else:
            # TODO: Do I do something to the current l and r here?
            stack.append((current_depth + 1, l + 1, r))
            stack.append((current_depth + 1, l, r + 1))

    return 0  # TODO: Return the correct value
</code></pre>
<python><loops><recursion><return><binary-tree>
2024-04-29 22:06:52
2
393
Epsilon Away
78,405,285
14,674,494
Python parallelism and /dev/ttyS0 sharing
<p>I don't know how to make those two functions (which are called in the code below at the end) run in parallel. If I do multithreading or multiprocessing, I come across errors regarding multiple port access. If I use mutexes, I come across issues regarding deadlocks - I tried to wrap the contents of those infinite whiles with locks, but then everything stalled; then I tried to add mutex wrappers within the GSM class, but again, deadlocks, deadlocks and deadlocks. I have even tried to use asyncs, but no success, since I had troubles with the lack of await in the serial_digest function. I don't know how to overcome this issue. Note: I know that there is a lot of code, but I think that all of it is necessary and it is not too difficult to read, even if I am not a master at clean coding by far, especially regarding python.</p> <pre><code>import serial
from time import sleep
from gotify import Gotify

ctrlZ = chr(26)

gsm = {
    'check':'AT',
    'sig_quality':'AT+CSQ',
    'ready':'AT+CPIN?',
    #tech details
    'manufacturer':'AT+CGMI',
    'model':'AT+CGMM',
    'imsi':'AT+CIMI',
    'iccid':'AT+CCID',
    'net_reg_stat':'AT+CREG?',
    'scan_networks':'AT+COPS=?',
    'apn':'AT+CGDCONT?',
    'operator':'AT+COPS?',
    #tackling with GSM
    'check_sms':'AT+CMGF=1',
    'list_sms':'AT+CMGL=&quot;ALL&quot;',
    'dial_last':'ATDL',
    'hangup':'ATH',
    'find_number':'AT+CNUM',
    'answer':'ATA',
}

for key, value in gsm.items():
    gsm[key] = value + '\n'

class GSM:
    def __init__(self, port='/dev/ttyS0', baudrate=115200):
        self.ser = serial.Serial(port, baudrate)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.ser.close()

    def transceive(self, message):
        self.ser.write(message.encode())
        self.ser.read_until()

    def receive(self):
        return self.ser.read_until().decode()[:-2]

    def auto_answer(self, accept):
        self.transceive('ATS0=' + accept + '\n')

    def dial(self, number):
        self.transceive('ATD' + number + ';\n')

    def send_sms(self, number, text):
        self.transceive('AT+CMGS=' + '&quot;' + number + '&quot;' + '\n' + text + ctrlZ)

    def unlock_enable(self, status):
        if status:
            val = '1'
        else:
            val = '0'
        self.transceive('AT+CFUN=' + val + '\n')
        while True:
            answer = self.receive()
            if answer == '+CPIN: SIM PIN':
                print('Modem powered!')
            elif answer == 'OK':
                break

    def unlock_sim(self, pin):
        self.transceive('AT+CPIN=' + pin + '\n')
        while True:
            response = self.receive()
            if response == 'ERROR':
                raise Exception('Incorrect PIN!')
            elif response == 'OK':
                print('SIM activated!')
                break

    def wait(self, delay):
        self.transceive('WAIT=' + delay + '\n')

def init(g, pin):
    g.transceive(gsm['ready'])
    if g.receive() != '+CPIN: READY':
        g.unlock_enable(True)
        g.unlock_sim(pin)
        sleep(3)
    g.transceive(gsm['check_sms'])
    while True:
        response = g.receive()
        if response == 'ERROR':
            print('Cannot send SMS!')
            break
        elif response == 'OK':
            print('Ready for sending SMS!')
            break

def serial_spit():
    pin = '0000'
    number = '0744604883'
    with GSM() as g:
        cmd = input('If you would like to set the PIN press nothing but the return key.\n')
        if cmd == '':
            pin = input('Please, introduce the PIN.\n')
        init(g, pin)
        while True:
            cmd = input(f'''Introduce the command:
c - change the phone number (now it is {number} )
d - dial
h - hangup
m - message\n\n''')
            if cmd == 'c':
                number = input('Please, introduce the phone number you would like to use from now on.\n')
            elif cmd == 'd':
                g.dial(number)
            elif cmd == 'h':
                g.transceive(gsm['hangup'])
            elif cmd == 'm':
                txt = input('Please, introduce the next message.\n')
                if txt == '':
                    continue
                g.send_sms(number, txt)
            else:
                print('Command not recognized')

def serial_digest():
    last_dialing_number = 0
    gotify = Gotify(
        base_url='http://localhost:80',
        app_token='A_8APqsvE8C5Rom',
    )
    with GSM() as g:
        while True:
            call_status = ''
            data = g.receive()
            if data == 'RING':
                call_status='Ring'
                g.transceive('AT+CLCC\r\n')
                data = g.receive()
                if data.startswith('+CLCC'):
                    for x in data[len('+CLCC: '):-len('\r\n')].split(','):
                        if x.startswith('&quot;') and x.endswith('&quot;'):
                            last_dialing_number = x.strip('&quot;')
                    gotify.create_message(
                        message=call_status,
                        title=last_dialing_number,
                    )
                    print(last_dialing_number)
                    print(call_status)
                    print()
            elif data == 'NO CARRIER':
                call_status='Hang'
                gotify.create_message(
                    message=call_status,
                    title=last_dialing_number,
                )
                print(last_dialing_number)
                print(call_status)
                print()
            elif data.startswith('+CMT'):
                [number,_,date,hour] = [x.strip('&quot;') for x in data[len('+CMT: '):-len('\r\n')].split(',')]
                message = g.receive()
                message += '\n' + date + ' ' + hour
                title = number[2:]
                print(title)
                print(message)
                print()
                gotify.create_message(
                    message=message,
                    title=title,
                )

serial_spit()
serial_digest()
</code></pre>
<python><multithreading><serial-port><python-multithreading><gsm>
2024-04-29 21:35:43
0
408
pauk
78,405,251
6,606,057
Create dummy for missing values for variable in Python
<p>I have the following dataframe:</p> <pre class="lang-none prettyprint-override"><code>   a
1  3
2  2
3  Nan
4  3
5  Nan
</code></pre> <p>I need to recode this column so it looks like this:</p> <pre class="lang-none prettyprint-override"><code>   df_miss_a
1  0
2  0
3  1
4  0
5  1
</code></pre> <p>I've tried:</p> <pre><code>df_miss_a = np.where(df['a'] == 'Nan', 1, 0)
</code></pre> <p>and</p> <pre><code>df_miss_a = np.where(df['a'] == Nan, 1, 0)
</code></pre> <p>The above outputs only 0s.</p> <p>The format of the output is unimportant.</p>
<python><pandas><numpy><missing-data><recode>
2024-04-29 21:26:42
1
485
Englishman Bob
78,405,194
480,118
python multiprocessing: AttributeError: Can't get attribute <function_name> on module
<p>The following code:</p> <pre><code>import numpy as np, pandas as pd
import multiprocessing, itertools, timeit
from functools import partial

processes = 5 * multiprocessing.cpu_count()
print(f'processes: {processes}')
pool = multiprocessing.Pool(processes=processes)

def calc(x, y):
    return x+y

def calc_all():
    pairs = [[1,1], [2,2], [3,3]]
    results = pool.map(calc, pairs)
    print(results)

if __name__ == '__main__':
    calc_all()
</code></pre> <p>is returning:</p> <pre><code>  File &quot;/usr/local/lib/python3.12/multiprocessing/process.py&quot;, line 314, in _bootstrap
    self.run()
  File &quot;/usr/local/lib/python3.12/multiprocessing/process.py&quot;, line 108, in run
    self._target(*self._args, **self._kwargs)
  File &quot;/usr/local/lib/python3.12/multiprocessing/pool.py&quot;, line 114, in worker
    task = get()
           ^^^^^
  File &quot;/usr/local/lib/python3.12/multiprocessing/queues.py&quot;, line 389, in get
    return _ForkingPickler.loads(res)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: Can't get attribute 'calc' on &lt;module '__main__' from '/workspaces/calc.py'&gt;
</code></pre> <p>If I move just the <code>main</code> to a separate file and import <code>calc_all</code>, I still get the same error (different module name, of course).</p> <p>If I move <code>calc_all</code> and the <code>main</code> to another module and import <code>calc</code>, it works fine.</p> <p>I would like to understand why this is happening when both are top-level functions. Is there a better way to solve this problem rather than having to move part of the module to a separate file?</p>
<python><multiprocessing>
2024-04-29 21:07:31
2
6,184
mike01010
78,405,071
1,431,750
Update a Column of a Matrix where each MongoDB document is a Row
<p>I'm storing matrices in MongoDB where each document is a row of values. Example of two <em>small</em> matrices but assume there are <em>thousands</em> of rows and columns:</p> <pre class="lang-js prettyprint-override"><code>[
  { matrix_id: 123, row_id: 0, values: [1, 2, 3] },
  { matrix_id: 123, row_id: 1, values: [4, 5, 6] },
  { matrix_id: 789, row_id: 0, values: [20, 40, 60] },
  { matrix_id: 789, row_id: 1, values: [80, 100, 120] }
]
</code></pre> <p><sup>(It can't be in just one document with all the rows and columns because it would exceed the 16 MB size limit. If I use <a href="https://www.mongodb.com/docs/manual/core/gridfs/#when-to-use-gridfs" rel="nofollow noreferrer">GridFS</a> then I can't update single values, full rows, or full columns efficiently.)</sup></p> <p>Say I want to update the full 2nd column (index=1) of matrix 123 to have the values <code>[7890, 9876]</code> and get a result like this:</p> <pre><code>┌       ┐      ┌          ┐
│ 1 2 3 │  ─→  │ 1 7890 3 │
│ 4 5 6 │      │ 4 9876 6 │
└       ┘      └          ┘
</code></pre> <p>then this <code>updateOne()</code> query - to update a <em>single value</em> - is being reused to update full columns, so is executed for <em>each row document</em>. <a href="https://mongoplayground.net/p/kZrmGruZ-7-" rel="nofollow noreferrer">Mongo Playground</a>. I'm repeating this with different params for <code>row_id</code> &amp; <code>new_col_val</code>:</p> <pre class="lang-js prettyprint-override"><code>db.collection.update({
  matrix_id: 1,
  // index of the row being updated
  row_id: 0
},
[
  {
    $set: {
      values: {
        $let: {
          vars: {
            // index of the column being updated
            col_idx: 1,
            // value for that column being updated
            new_col_val: 7890
          },
          in: {
            $concatArrays: [
              { $slice: [&quot;$values&quot;, &quot;$$col_idx&quot;] },
              [&quot;$$new_col_val&quot;],
              { $slice: [&quot;$values&quot;, { $add: [&quot;$$col_idx&quot;, 1] }, { $size: &quot;$values&quot; }] }
            ]
          }
        }
      }
    }
  }
])
</code></pre> <p>To do all full-column updates, I put this in a function in the app's language and keep updating the row_id &amp; column value; but that's quite inefficient, including using <a href="https://www.mongodb.com/docs/manual/core/bulk-write-operations/" rel="nofollow noreferrer">Bulk Write ops</a>:</p> <pre class="lang-py prettyprint-override"><code>def update_column(col_idx: int, values: list[int]):
    # using find_one_and_update / update_one
    for row_idx, col_val in enumerate(values):
        update = update_action(col_idx, col_val)  # returns above query with col_idx &amp; col_val
        collection.update_one({&quot;matrix_id&quot;: 1, &quot;row_id&quot;: row_idx}, update)

# using bulk_write
requests = [
    UpdateOne({&quot;matrix_id&quot;: 1, &quot;row_id&quot;: row_idx}, update_action(col_idx, col_val))
    for row_idx, col_val in enumerate(values)
]
result = collection.bulk_write(requests, ordered=False)  # parallel execution
</code></pre> <p>Is there a better way with fewer total statements/queries or reduced total time? Or just something <em>cleaner</em>?</p> <p><sub>PS. I'm not actually doing this, was exploring another user's question.</sub></p>
<python><arrays><mongodb><matrix><mongodb-query>
2024-04-29 20:33:37
1
16,597
aneroid
78,404,611
6,645,617
Extract very large zip file on AWS S3 using Lambda Functions
<p>I'm trying to read a very large zip file in an S3 bucket and extract its data to another S3 bucket using the code below as a lambda function:</p> <pre><code>import json
import boto3
from io import BytesIO
import zipfile

def lambda_handler(event, context):
    s3_resource = boto3.resource('s3')
    source_bucket = 'bucket1'
    target_bucket = 'bucket2'

    for file in my_bucket.objects.all():
        if (str(file.key).endswith('.zip')):
            zip_obj = s3_resource.Object(bucket_name=source_bucket, key=file.key)
            buffer = BytesIO(zip_obj.get()[&quot;Body&quot;].read())
            z = zipfile.ZipFile(buffer)
            for filename in z.namelist():
                file_info = z.getinfo(filename)
                try:
                    response = s3_resource.meta.client.upload_fileobj(
                        z.open(filename),
                        Bucket=target_bucket,
                        Key=f'(unknown)'
                    )
                except Exception as e:
                    print(e)
        else:
            print(file.key + ' is not a zip file.')
</code></pre> <p>Now, the problem is this code reads the whole file into memory and I'm getting a MemoryError.</p> <p>I want to know if there is a more efficient way to do this, like reading the file in chunks?</p> <p>Thanks.</p>
<python><python-3.x><amazon-web-services><amazon-s3><aws-lambda>
2024-04-29 18:41:56
1
1,983
Ali AzG
78,404,495
10,430,394
Finding faces with skimage.measure.marching_cubes results in a weird offset based on meshgrid limits
<p>A colleague of mine recently tried to visualise strain moduli using matplotlib. If you do not know what that is: Here's a link if you are interested. <a href="https://www.degruyter.com/document/doi/10.1524/zkri.2011.1413/html" rel="nofollow noreferrer">https://www.degruyter.com/document/doi/10.1524/zkri.2011.1413/html</a></p> <p>It's a 3D visualisation of the anisotropy of the strain tensor for crystal structures when they are compressed under pressure.</p> <p>For our purposes it is simply an isosurface in 3D space. You compute values for a 3D meshgrid using a function and then the marching cubes algorithm finds vertex points that map out the isosurface at a given value (let's say: every vertex point that has an iso value of 0.4).</p> <p>Additionally, the marching_cubes algorithm finds all faces that belong to the given iso-surface. In the simplest case, there is no anisotropy and the result will be a perfect sphere.</p> <p>This is the code:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
from skimage import measure

def func(x, y, z):
    Z1111 = 1
    Z1133 = 1/3
    Z3333 = 1
    return (Z1111 * (x**4 + 2 * x**2 * y**2 + y**4)
            + 6 * Z1133 * (x**2 * z**2 + y**2 * z**2)
            + Z3333 * z**4
            )

a,b,c = -1,1,.1
X, Y, Z = np.mgrid[a:b:c, a:b:c, a:b:c]
vals = func(X,Y,Z)

iso_val = 0.4
verts, faces, murks, _ = measure.marching_cubes(vals, iso_val)

# shift_zero = np.array((a*10,a*10,a*10))
# verts = verts + shift_zero

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_trisurf(verts[:, 0], verts[:,1], faces, verts[:, 2], cmap='jet', lw=1)
ax.plot((0,0),(0,0),(-10,10),'r-',lw=3,zorder=100)
plt.show()
</code></pre> <p>Here's where the unusual behavior comes in:</p> <p>When you define the 3D meshgrid and you use points between -1,1 with an interval of 0.1 for all directions (x,y,z), then the sphere is not centered around (0,0,0), but around (10,10,10). If you uncomment the <code>shift_zero</code> and <code>verts = verts + shift_zero</code> lines, then the offset can be compensated.</p> <p>Is there something wrong with my script? Does the function for the isosurface include an intrinsic shift? Is this an issue with the implementation of marching_cubes? I could just compute the center point of the vertices using <code>verts.mean(axis=0)</code> and subtract that from the vertices, but that seems like a cop-out.</p>
<python><matplotlib><scikit-image><marching-cubes>
2024-04-29 18:14:53
1
534
J.Doe
78,404,304
9,540,764
py2app application is not working on other Macs, why?
<p>I have created an application using <code>py2app</code> and it's working perfectly on my Mac.<br> Now, I have to send it to some friends. When they download it, either from Wetransfer or Google Drive or whatever, it doesn't work.<br> It says <code>The app is damaged and can’t be opened. You should move it to the Trash.</code><br> As a double-check, I also downloaded it and on my Mac, it says the same thing.<br> The same app opened from the <code>dist</code> folder of the Python project and from the downloaded folder is working only in the first case.</p> <p>What is the problem?<br> I see there are other questions on the same topic, but there is no solution, or it's not working in my case, or they are too old.</p>
<python><python-3.x><macos><py2app>
2024-04-29 17:28:27
0
836
User
78,404,259
4,296,426
Share database cursor across multiple threads with Celery
<p>I have a single worker in celery that uses sqlalchemy and takes an input table (in one db) and an output table (in another db), and it retrieves and transfers the data from one to the other.</p> <p>I'd like to parallelize this across celery workers so that each worker is dealing with x/n-of-workers rows.</p> <p>I know I could probably refactor this to split up the start/end rows that each worker would deal with, but is there a way to have the celery workers share the proxy.fetchmany() calls until they're exhausted?</p> <p>Here is some example code of it working with a single worker:</p> <pre><code>logging.info(&quot;Connecting to Database&quot;)
connection_url = URL.create(&quot;mssql+pyodbc&quot;, query={&quot;odbc_connect&quot;: self.SOURCE_CONNECT_STRING})
source_eng = sqlalchemy.create_engine(connection_url)
source_connection = source_eng.connect()

# Select all rows
query = f&quot;SELECT * FROM {sourceTable} WITH (NOLOCK)&quot;
stmt = sqlalchemy.text(query)
proxy = source_connection.execution_options(stream_results=True).execute(stmt)
column_names = list(proxy.keys())
logging.info(column_names)

insert_stmt = f&quot;&quot;&quot;INSERT INTO {destTable} ({','.join(column_names)}) VALUES ({','.join(['?']*len(column_names))});&quot;&quot;&quot;
if ( hasIdentity ):
    insert_stmt = f&quot;&quot;&quot;
        SET IDENTITY_INSERT {destTable} ON;
        SET NOCOUNT ON;
        {insert_stmt}
        SET NOCOUNT OFF;
        SET IDENTITY_INSERT {destTable} OFF&quot;&quot;&quot;

completed = 0
while 'batch not empty':  # equivalent of 'while True'
    batch = proxy.fetchmany(self.INITIAL_BATCH_SIZE)
    if not batch:
        break

    # Strip whitespace from batch
    batch = [self._strip_tuple(i) for i in batch]

    logging.info(f&quot;Inserting {str(len(batch))} rows.&quot;)
    with pyodbc.connect(self.DEST_CONNECT_STRING) as conn:
        with conn.cursor() as cursor:
            cursor.fast_executemany = True
            # Send pyodbc call
            logging.info(insert_stmt)
            cursor.executemany(insert_stmt, list(batch))
            conn.commit()

    completed += len(batch)
    logging.info(f&quot;Completed inserting {completed} rows into {destTable}.&quot;)

proxy.close()
</code></pre>
<python><sqlalchemy><celery>
2024-04-29 17:19:02
0
1,682
Optimus
78,404,240
14,250,641
Most efficient way to conditionally sample my large df
<p>I have a large DF (~35 million rows) and I am trying to create a new df by randomly sampling two rows from each unique cluster ID (~1.8 million unique cluster IDs)-- one row must have a label 0 and one row must have a label 1 (sometimes there is only one label, so I must check first if both labels are present within the cluster). For reference, my dataset has 3 main cols: 'embeddings', 'cluster_ID', 'label'.</p> <p>I'm finding that this process takes way more time than I anticipated (more than 15 hours), and I want to know if there is a way to optimize my code.</p> <p>Please NOTE: My original df only has 2 labels: 1 and 0. I should end up with more than 1.8 million rows. I would like to sample 2 rows for each cluster (one label 1 and other is label 0). Some clusters have a mixed bag of rows with both labels-- but some only have rows with label 1. If the former is the case: I want 2 rows from that cluster (one for each label)-- if the latter is the case: I only want one row from that cluster. Therefore: I should end up with between 1.8-3.6 million rows.</p> <pre><code>import pandas as pd
import random

# Create an empty list to store selected rows
selected_rows = []

# Iterate over unique cluster IDs
for cluster_id in result_df_30['cluster_ID'].unique():
    # Filter the DataFrame for the current cluster ID
    df_cluster = result_df_30[result_df_30['cluster_ID'] == cluster_id]

    # Filter rows with label 0 and 1
    df_label_0 = df_cluster[df_cluster['label'] == 0]
    df_label_1 = df_cluster[df_cluster['label'] == 1]

    # Sample rows if they exist
    if not df_label_0.empty:
        sample_label_0 = df_label_0.sample(n=1, random_state=42)
        selected_rows.append(sample_label_0)
    if not df_label_1.empty:
        sample_label_1 = df_label_1.sample(n=1, random_state=42)
        selected_rows.append(sample_label_1)

# Concatenate the selected rows into a single DataFrame
selected_rows_df = pd.concat(selected_rows)
selected_rows_df
</code></pre>
<python><pandas><dataframe><numpy><sample>
2024-04-29 17:13:22
1
514
youtube
78,404,170
3,003,605
Logistic Regression Model - Prediction and ROC AUC
<p>I am building a Logistic Regression using statsmodels (statsmodels.api) and would like to understand how to get predictions for the test dataset. This is what I have so far:</p> <pre><code>x_train_data, x_test_data, y_train_data, y_test_data = train_test_split(X, df[target_var], test_size=0.3)

logit = sm.Logit(
    y_train_data,
    x_train_data
)

result = logit.fit()
result.summary()
</code></pre> <p>What is the best way to print the predictions for y_train_data and y_test_data for below? I am unsure which Regression metrics to use or to import in this case:</p> <pre><code>in_sample_pred = result.predict(x_train_data)
out_sample_pred = result.predict(x_test_data)
</code></pre> <p>Also, what's the best way to calculate ROC AUC score and plot it for this Logistic Regression model (through scikit-learn package)?</p> <p>Thanks</p>
<python><scikit-learn><logistic-regression><prediction><roc>
2024-04-29 16:59:04
1
385
user3003605
78,404,121
4,324,329
How to get the epoch value for any given Local Time with Time Zone & Daylight Saving Time in Python?
<p>I'm new to Python and I have a use case which deals with <code>datetime</code>. Unfortunately, the data is not in UTC, so I have to rely on Local Time, Time Zones &amp; Daylight Savings. I tried with <code>datetime</code> &amp; <code>pytz</code> to get the epoch value from <code>datetime</code>, but it's not working out.</p> <p>Approach 1: If I try creating a <code>datetime</code> object, localize it to Brussels time and use strftime to convert to epoch, I'm <em>getting</em> a valid response for the window outside Daylight Saving Time (DST). But for time within DST, I'm getting a response of <code>-1</code>.</p> <p>Approach 2: If I give the time zone details while creating the <code>datetime</code> object and try to convert that to epoch, it's giving the same value as that for IST.</p> <pre><code>from datetime import datetime
import pytz

print('localize')
print(pytz.timezone(&quot;Asia/Kolkata&quot;).localize(datetime(2024, 10, 27, 0,0,0)).strftime('%s'))
print(pytz.timezone(&quot;Europe/Brussels&quot;).localize(datetime(2024, 10, 27, 0,0,0)).strftime('%s'))
print(pytz.timezone(&quot;Europe/Brussels&quot;).localize(datetime(2024, 10, 27, 6,0,0)).strftime('%s'))

print('tzinfo')
print(datetime(2024, 10, 27, 0,0,0, tzinfo=pytz.timezone(&quot;Asia/Kolkata&quot;)).strftime('%s'))
print(datetime(2024, 10, 27, 0,0,0,tzinfo=pytz.timezone(&quot;Europe/Brussels&quot;)).strftime('%s'))
print(datetime(2024, 10, 27, 6,0,0, tzinfo=pytz.timezone(&quot;Europe/Brussels&quot;)).strftime('%s'))
</code></pre> <p>Output:</p> <pre><code>localize
1729967400
-1
1729989000
tzinfo
1729967400
1729967400
1729989000
</code></pre> <p>What's even more confusing is, if I print the datetime values without formatting, Approach 1 shows valid info, while Approach 2 shows non-rounded offset values as shown below.</p> <pre><code>print('datetime values showing appropriate offset')
print(pytz.timezone(&quot;Asia/Kolkata&quot;).localize(datetime(2024, 10, 27, 0, 0, 0)))
print(pytz.timezone(&quot;Europe/Brussels&quot;).localize(datetime(2024, 10, 27, 0, 0, 0)))
print(pytz.timezone(&quot;Europe/Brussels&quot;).localize(datetime(2024, 10, 27, 6, 0, 0)))

print('datetime values showing different offset')
print(datetime(2024, 10, 27, 0,0,0, tzinfo=pytz.timezone(&quot;Asia/Kolkata&quot;)))
print(datetime(2024, 10, 27, 0,0,0,tzinfo=pytz.timezone(&quot;Europe/Brussels&quot;)))
print(datetime(2024, 10, 27, 6,0,0, tzinfo=pytz.timezone(&quot;Europe/Brussels&quot;)))
</code></pre> <p>Output:</p> <pre><code>datetime values showing appropriate offset
2024-10-27 00:00:00+05:30
2024-10-27 00:00:00+02:00
2024-10-27 06:00:00+01:00
datetime values showing different offset
2024-10-27 00:00:00+05:53
2024-10-27 00:00:00+00:18
2024-10-27 06:00:00+00:18
</code></pre> <p>Which is better and why so? Not sure if I'm missing anything. Can anyone help with this?</p>
<python><datetime><timezone><dst><pytz>
2024-04-29 16:49:34
1
367
Sree Karthik S R
78,404,120
6,260,154
Having trouble with negative lookahead regex in python
<p>So, I have to match everything before the last open-close parenthesis and get that in group. And after the last open-close parenthesis try to get the value with some pattern, again in group.</p> <p>This is my sample example:</p> <pre><code>ID pqr () name:123. </code></pre> <p>And this is my regex:</p> <pre><code>^(?P&lt;JUNK&gt;.*?)(?!\(.\))(\(.*\))?\sname\:(?P&lt;name&gt;\d+)\.$ </code></pre> <p>And now with this I am getting <code>ID pqr </code> for <code>JUNK</code> key and <code>123</code> for <code>name</code> key which is perfect.</p> <p>And now, with this regex, it's working fine for below strings:</p> <ol> <li><code>ID pqr (a) () name:123.</code></li> <li><code>ID pqr (a) (b) () name:123.</code></li> <li><code>ID pqr (a) (b) () name:123.</code></li> <li><code>ID pqr (a) (b) (XX) name:123.</code></li> </ol> <p>in returns I am getting these outputs:</p> <ol> <li><code>{'JUNK': 'ID pqr ', 'name': '123'}</code></li> <li><code>{'JUNK': 'ID pqr (a) ', 'name': '123'}</code></li> <li><code>{'JUNK': 'ID pqr (a) (b) ', 'name': '123'}</code></li> <li><code>{'JUNK': 'ID pqr (a) (b) ', 'name': '123'}</code></li> </ol> <p>So far, it's working fine with above strings, but for the below ones I am having some trouble</p> <ol> <li><code>ID pqr (a) (b) (X) name:123.</code></li> <li><code>ID pqr (aa) (b) (X) name:123.</code></li> <li><code>ID pqr (a) (bb) (X) name:123.</code></li> </ol> <p>For these strings, I am getting output like this:</p> <ol> <li><code>{'JUNK': 'ID pqr (a) (b) (X)', 'name': '123'}</code></li> <li><code>{'JUNK': 'ID pqr ', 'name': '123'}</code></li> <li><code>{'JUNK': 'ID pqr (a) ', 'name': '123'}</code></li> </ol> <p>But basically I want like this:</p> <ol> <li><code>{'JUNK': 'ID pqr (a) (b) ', 'name': '123'}</code></li> <li><code>{'JUNK': 'ID pqr (aa) (b) ', 'name': '123'}</code></li> <li><code>{'JUNK': 'ID pqr (a) (bb) ', 'name': '123'}</code></li> </ol> <p>This is my regex101 attempt: <a href="https://regex101.com/r/AV8WlB/4" rel="nofollow 
noreferrer">https://regex101.com/r/AV8WlB/4</a></p> <p>Can anyone point out where I am going wrong?</p>
<python><regex>
2024-04-29 16:49:05
1
1,016
Tony Montana
78,404,086
9,639,680
Unexpected keyword argument 'constant_memory'
<p>I need to improve the performance of the Excel writer part of my code and referred to <a href="https://xlsxwriter.readthedocs.io/working_with_memory.html" rel="nofollow noreferrer">xlsxwriter</a> and <a href="https://pandas.pydata.org/docs/reference/api/pandas.ExcelWriter.html" rel="nofollow noreferrer">pandas</a> documentation.</p> <pre><code>import pandas as pd
...

writer = pd.ExcelWriter('result.xlsx',
                        engine='xlsxwriter',
                        engine_kwargs={&quot;constant_memory&quot;: True})
</code></pre> <p>I see a <code>TypeError: Workbook.__init__() got an unexpected keyword argument 'constant_memory'</code> runtime error after adding the <code>engine_kwargs</code>. Am I doing something wrong?</p>
<python><pandas><excel><xlsxwriter>
2024-04-29 16:40:36
1
824
nac001
78,404,048
1,403,546
VSCode - How to set the working directory when running a python .py script?
<p>I'm new to VSCode and I can't find anywhere the answer to an easy question:</p> <p><strong>how can I run a python script (let's say main.py) by creating configurations where I can set a specific working directory folder as well as arguments?</strong></p> <p>I see that by default the working directory, when running any file, is the workspace directory, but I'm not able to change it anyhow.</p> <p>When using the Python Debugger, I can easily do that with the &quot;cwd&quot; option inside a configuration of the launch.json, and I want to be able to do the same to just run the code (... and debugging without any breakpoint is not the same, e.g. when there's parallel computation).</p> <p>With PyCharm this is very easy, but not with VSCode perhaps?</p>
<python><visual-studio-code>
2024-04-29 16:34:23
2
1,759
user1403546
78,403,998
10,517,777
How do I reduce the time to read parquet files with pandas and futures
<p><strong>Python version:</strong> 3.11.8</p> <p><strong>Goal:</strong> Read 1.000 parquet files from Amazon bucket faster.</p> <p><strong>Approach:</strong> I am using the libraries boto3 and pandas to read the parquet files. I could retrieve the files sequentially. However, I want to reduce the time reading the parquet files by using a parallel execution with the library concurrent.futures. For that reason, I decided to implement the ProcessPoolExecutor, but I got stuck because I do not know how to create my project to implement it. This is my current attempt:</p> <p>I have two scripts. The first script will read the parquet files and the second script will transform or manipulate all the received data.</p> <p>script1.py</p> <pre><code>from boto3 import Session from pandas import DataFrame,concat, read_parquet from concurrent.futures import ProcessPoolExecutor from concurrent.futures import wait def retrieve_s3_bucket(bucket_name: str, access_key_id: str, secret_access_key: str, prefix: str): ### some code to retrieve objects from S3 return objects def read_parquet_file(object) -&gt; DataFrame: buffer = BytesIO() object.Object().download_fileobj(buffer) df_object = read_parquet(buffer) return df_object def get_data_from_bucket(objects, columns_to_delete: list, columns_to_format: list, restaurant_ids: list) -&gt; DataFrame: list_dfs = [] with ProcessPoolExecutor(2) as exe: # submit tasks and collect futures for object in objects: df_object = exe.submit(read_parquet_file(object),object) if df_object.empty: print(&quot;Empty&quot;) else: list_dfs.append(df_object) # wait for all tasks to complete print('Waiting for tasks to complete...') wait(list_dfs) data = concat(list_dfs,ignore_index=True) return data # protect the entry point if __name__ == '__main__': # start the process pool get_data_from_bucket() </code></pre> <p>script2.py:</p> <pre><code>from script1 import get_data_from_bucket, retrieve_s3_bucket def main(bucket_name: str, access_key_id: str, 
secret_access_key: str, prefix: str) -&gt; str: try: data ={} objects = retrieve_s3_bucket(bucket_name, access_key_id,secret_access_key,prefix) df = get_data_from_bucket(objects, columns_to_delete, columns_to_format, list_restaurants) ## some addtional operations to do with df except Exception as e: logging.error(str(e)) data['error_message'] = str(e) data['stack_trace'] = print_exc() finally: return dumps(data) </code></pre> <p>I am getting the following error during execution:</p> <blockquote> <p>An attempt has been made to start a new process before the current process has finished its bootstrapping phase</p> </blockquote> <p>How should I include the concurrent.futures in my project and to handle properly errors during the reading?. Forgive me if I am doing something wrong but I am pretty new with concept of parallelism.</p> <p><strong>UPDATE:</strong></p> <p>To reach my goal, I have a Virtual Machine with the following configuration:</p> <p>CPU 2.59 GHz Memory 32 GB Sockets: 1 Virtual processors: 8</p> <p>I do not know, if <code>concurrent.futures</code> will be a suitable approach here.</p>
<python><pandas><parquet><concurrent.futures>
2024-04-29 16:21:57
0
364
sergioMoreno
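Two hedged observations on the parquet question above: `exe.submit(read_parquet_file(object), object)` calls the function immediately (serially) and submits its return value, so the callable and its argument must be passed separately; and the bootstrapping error typically means the script that ultimately creates a `ProcessPoolExecutor` is not protected by an `if __name__ == '__main__'` guard. Since S3 downloads are I/O-bound, a `ThreadPoolExecutor` usually works better here anyway and sidesteps the guard issue entirely. A sketch, where `read_one` stands in for the question's `read_parquet_file`:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

from pandas import DataFrame, concat


def read_all(read_one, objects, max_workers=8):
    """Fetch and parse `objects` concurrently with `read_one` (sketch)."""
    dfs = []
    with ThreadPoolExecutor(max_workers=max_workers) as exe:
        # pass the callable and its argument separately --
        # exe.submit(read_one(obj)) would run read_one serially first
        futures = {exe.submit(read_one, obj): obj for obj in objects}
        for fut in as_completed(futures):
            try:
                df = fut.result()
            except Exception as e:
                print(f"read failed for {futures[fut]}: {e!r}")
                continue
            if not df.empty:
                dfs.append(df)
    return concat(dfs, ignore_index=True) if dfs else DataFrame()
```

Errors are handled per-object inside the loop, so one bad file no longer aborts the whole batch.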
78,403,835
4,129,091
ReadTheDocs sphinx build error: index.rst not found
<p>I'm trying to build and publish documentation on ReadTheDocs from a github repo.</p> <p>The build yields an error:</p> <pre><code>Running Sphinx v7.3.7 making output directory... done WARNING: html_static_path entry '_static' does not exist building [mo]: targets for 0 po files that are out of date writing output... building [html]: targets for 0 source files that are out of date updating environment: [new config] 0 added, 0 changed, 0 removed reading sources... Traceback (most recent call last): File &quot;/home/docs/checkouts/readthedocs.org/user_builds/betterpathlib/envs/latest/lib/python3.12/site-packages/sphinx/cmd/build.py&quot;, line 337, in build_main app.build(args.force_all, args.filenames) File &quot;/home/docs/checkouts/readthedocs.org/user_builds/betterpathlib/envs/latest/lib/python3.12/site-packages/sphinx/application.py&quot;, line 351, in build self.builder.build_update() File &quot;/home/docs/checkouts/readthedocs.org/user_builds/betterpathlib/envs/latest/lib/python3.12/site-packages/sphinx/builders/__init__.py&quot;, line 293, in build_update self.build(to_build, File &quot;/home/docs/checkouts/readthedocs.org/user_builds/betterpathlib/envs/latest/lib/python3.12/site-packages/sphinx/builders/__init__.py&quot;, line 313, in build updated_docnames = set(self.read()) ^^^^^^^^^^^ File &quot;/home/docs/checkouts/readthedocs.org/user_builds/betterpathlib/envs/latest/lib/python3.12/site-packages/sphinx/builders/__init__.py&quot;, line 422, in read raise SphinxError('root file %s not found' % sphinx.errors.SphinxError: root file /home/docs/checkouts/readthedocs.org/user_builds/betterpathlib/checkouts/latest/docs/index.rst not found Sphinx error: root file /home/docs/checkouts/readthedocs.org/user_builds/betterpathlib/checkouts/latest/docs/index.rst not found </code></pre> <p>I have the file <code>.readthedocs.yaml</code> at project root, with these lines:</p> <pre><code>... 
# Build documentation in the &quot;docs/&quot; directory with Sphinx sphinx: configuration: docs/conf.py </code></pre> <p>and in <code>docs/conf.py</code>:</p> <pre><code># -- Project information ----------------------------------------------------- # https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information project = 'betterpathlib' copyright = '2024, xx' author = 'xx' # -- General configuration --------------------------------------------------- # https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration extensions = [] templates_path = ['_templates'] exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store'] # -- Options for HTML output ------------------------------------------------- # https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output html_theme = 'alabaster' html_static_path = ['_static'] </code></pre>
<python><python-sphinx><read-the-docs>
2024-04-29 15:51:30
0
3,665
tsorn
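For the Read the Docs question above, the traceback says exactly what is missing: Sphinx's root document `docs/index.rst`. The configuration otherwise looks fine. A minimal placeholder (project name assumed from `conf.py`; `sphinx-quickstart docs` would generate a fuller version) might look like:

```rst
betterpathlib
=============

.. toctree::
   :maxdepth: 2
   :caption: Contents:
```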
78,403,641
17,795,398
`/` and `//` give different results for an exact multiple
<p>This issue is difficulting my algorithm implementation:</p> <pre><code>&gt;&gt;&gt; -19.9871/2.8553 -7.0 &gt;&gt;&gt; -19.9871//2.8553 -8.0 </code></pre> <p>The second division should return <code>-7.0</code> also. I guess this is something related to floating point errors, do you know another way to get the expected result? Using <code>numpy.floor(-19.9871/2.8553)</code> gives the expected result, but maybe there is a better solution...</p> <p>I add some cases here based on answers:</p> <pre><code>&gt;&gt;&gt; decimal.Decimal(8.5659)/decimal.Decimal(2.8553) Decimal('2.999999999999999533405376125') &gt;&gt;&gt; np.floor(decimal.Decimal(8.5659)/decimal.Decimal(2.8553)) 2 &gt;&gt;&gt; round(decimal.Decimal(8.5659)/decimal.Decimal(2.8553)) 3 # true value (//) </code></pre> <p>Here <code>round</code> does the work, but:</p> <pre><code>&gt;&gt;&gt; decimal.Decimal(12.8489)/decimal.Decimal(2.8553) Decimal('4.500017511294785017834689091') &gt;&gt;&gt; round(decimal.Decimal(12.8489)/decimal.Decimal(2.8553)) 5 &gt;&gt;&gt; np.floor(decimal.Decimal(12.8489)/decimal.Decimal(2.8553)) 4 # true value (//) </code></pre> <p>here <code>floor</code> does the work.</p> <p>What should I do?</p>
<python>
2024-04-29 15:08:25
1
472
Abel Gutiérrez
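One hedged approach to the `//` question above: round the float quotient only when it lies within a small tolerance of an integer (the quotient of exact multiples like `-19.9871 / 2.8553` lands essentially on an integer), and floor otherwise. The `eps` value is an assumption to tune for your data's precision:

```python
import math


def tol_floordiv(a, b, eps=1e-9):
    """Floor division that forgives tiny floating-point error near integers."""
    q = a / b
    nearest = round(q)
    # exact multiples land (almost) on an integer; snap to it
    if abs(q - nearest) < eps:
        return float(nearest)
    return float(math.floor(q))


print(tol_floordiv(-19.9871, 2.8553))  # -7.0
print(tol_floordiv(8.5659, 2.8553))    # 3.0
print(tol_floordiv(12.8489, 2.8553))   # 4.0
```

This handles both of the question's problem cases: near-integer quotients snap to the integer, while genuinely fractional quotients (like 4.50001...) still floor down.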
78,403,564
3,447,228
How to flag AI content in Teams botbuilder?
<p>According to the Microsoft Team's store validation guidelines, apps with AI-generated content need to flag which messages are AI generated.</p> <p>According to the image which can ben seen <a href="https://learn.microsoft.com/en-us/microsoftteams/platform/assets/images/submission/teams-ai-library-description-guideline.png" rel="nofollow noreferrer">here</a>, there seems to be a way this can be flagged within the UI of Teams. &quot;ai-generated content may be incorrect&quot;</p> <p>Using botbuilder-python, is anyone aware of how to flag this?</p>
<python><botframework><microsoft-teams>
2024-04-29 14:55:55
1
498
user3447228
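A hedged sketch for the Teams AI-label question above: Teams reads the "AI-generated" label from the message's `entities`, using a schema.org `Message` entity whose `additionalType` includes `AIGeneratedContent` (check the current Teams bot documentation for the exact shape). With botbuilder-python this would be set on the outgoing `Activity`'s `entities`; a plain-dict version of the entity:

```python
# entity shape assumed from the Teams "bot messages with AI-generated
# content" guidance; verify against current Microsoft docs
AI_LABEL_ENTITY = {
    "type": "https://schema.org/Message",
    "@type": "Message",
    "@context": "https://schema.org",
    "additionalType": ["AIGeneratedContent"],
}


def label_as_ai_generated(activity: dict) -> dict:
    """Append the AI-generated-content entity to an outgoing activity (sketch)."""
    activity.setdefault("entities", []).append(dict(AI_LABEL_ENTITY))
    return activity


msg = label_as_ai_generated({"type": "message", "text": "Hello from the bot"})
print(msg["entities"][0]["additionalType"])  # ['AIGeneratedContent']
```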
78,403,321
10,425,150
Generate matrix with diagonal fill pattern
<p><strong>Input Data:</strong></p> <pre><code>data = [15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1] </code></pre> <p><strong>Expected Output\need help with:</strong></p> <p>The output should look something like this:</p> <pre><code>[15] [14, 13] [12, 11, 10] [9, 8, 7, 6] [5, 4, 3, 2, 1] </code></pre> <p><strong>Current solution:</strong></p> <pre><code>end_value = 15 data = list(range(1,end_value+1)[::-1]) iterations = round(end_value**0.5)+1 last_index = [0]+[i*(i+1)+1 for i in range(iterations)] out = [data[last_index[i]:last_index[i+1]] for i in range(len(last_index)-1)] for p in out: print(p) </code></pre> <p><strong>Current Output:</strong></p> <pre><code>[15] [14, 13] [12, 11, 10, 9] [8, 7, 6, 5, 4, 3] [2, 1] </code></pre>
<python>
2024-04-29 14:14:41
3
1,051
Gооd_Mаn
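A hedged alternative for the diagonal-fill question above that sidesteps the index arithmetic entirely: keep slicing one more element each pass with `itertools.islice` until the data runs out (any leftover shorter tail simply becomes the last row):

```python
from itertools import islice


def triangle_rows(data):
    """Split `data` into rows of length 1, 2, 3, ... until exhausted."""
    it = iter(data)
    rows = []
    size = 1
    while True:
        row = list(islice(it, size))
        if not row:
            break
        rows.append(row)
        size += 1
    return rows


for row in triangle_rows(list(range(15, 0, -1))):
    print(row)
# [15]
# [14, 13]
# [12, 11, 10]
# [9, 8, 7, 6]
# [5, 4, 3, 2, 1]
```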
78,403,311
10,985,257
What is best practice for long-running sudo commands?
<p>I have build a function, which calls sudo commands from python, because I don't want to run every command as sudo (I am pulling in the process data from a repo, which I control, but this could be an angle of attack I wanted to mitigate):</p> <pre class="lang-py prettyprint-override"><code>streamer = SysStreamer def _logable_sudo(cmds: List[str], password: str) -&gt; List[str]: sudo_cmds = [&quot;sudo&quot;, &quot;-S&quot;] sudo_cmds.extend(cmds) streamer.stream_command( sudo_cmds, stdout_handler=self.streamer.stdout_handler, stdin=PIPE, password=password, ) return streamer.stdout_lines </code></pre> <p>Where <code>SysStreamer</code> is the following class:</p> <pre class="lang-py prettyprint-override"><code>class SysStreamer: def __init__(self) -&gt; None: self.flush_stdout_cache() self.logger = load_logger(__name__) def flush_stdout_cache(self): self.stdout_lines = [] def stream_command( self, args, *, stdout_handler=logging.info, check=True, text=True, stdin=None, stdout=PIPE, stderr=STDOUT, password=None, **kwargs, ): &quot;&quot;&quot;Mimic subprocess.run, while processing the command output in real time.&quot;&quot;&quot; with Popen(args, text=text, stdin=stdin, stdout=stdout, stderr=stderr, **kwargs) as process: if password: process.stdin.write(f&quot;{password}\n&quot;) process.stdin.flush() for line in process.stdout: stdout_handler(line[:-1]) retcode = process.poll() if check and retcode: raise CalledProcessError(retcode, f&quot;{process.args}:Check Logs!&quot;) return CompletedProcess(process.args, retcode) def stdout_handler(self, line): self.logger.info(line) self.stdout_lines.append(line) </code></pre> <p>The <code>SysStreamer</code> is just a workaround to have access to the return values of the process and also save them for other use cases.</p> <p>The Issue I have with this approach, that I need to gather the Password myself and having it in plaintext visible while debugging.</p> <p>So my first intention was to use the systems <code>sudo</code> 
call instead of requesting the password earlier before the <code>sudo</code> call and I think it would be the best approach. The only thing what is worrying me: What if I need to call a second <code>sudo</code> call later, and the sudo-timer has reseted?</p> <p>I had this issue with this specific workflow in the past and I needed to run the complete workflow again to verify that the system is in the actual needed state.</p> <p>So my final question: How should I handle <code>sudo</code> and help the process not to reset the sudo-timer, while my script is still running?</p>
<python><linux>
2024-04-29 14:12:10
0
1,066
MaKaNu
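One common pattern for the sudo question above (a sketch, not a security review): validate once up front with `sudo -v`, so the password never passes through Python at all, then refresh the cached timestamp from a background thread while the workflow runs. The command is parameterised here; in real use it would be `["sudo", "-nv"]` (non-interactive revalidate):

```python
import subprocess
import threading


class SudoKeepalive:
    """Re-run a validation command periodically so sudo's cached timestamp
    doesn't expire mid-workflow (sketch; keep `interval` well below the
    system's timestamp_timeout, which defaults to 15 minutes)."""

    def __init__(self, cmd=("sudo", "-nv"), interval=60.0):
        self.cmd = list(cmd)
        self.interval = interval
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.is_set():
            # -n: never prompt; if the timestamp already expired this
            # fails loudly instead of hanging on a password read
            subprocess.run(self.cmd, check=False)
            self._stop.wait(self.interval)

    def __enter__(self):
        self._thread.start()
        return self

    def __exit__(self, *exc):
        self._stop.set()
        self._thread.join()
```

Usage would be `with SudoKeepalive(): run_workflow()`, after an initial interactive `sudo -v` has cached the credentials.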
78,403,222
2,723,494
Application Insights does not show page views with either automatic instrumentation or SDK instrumentation
<p>I'm deploying a web app in Azure, on a linux machine, using python, deploying as code. Per docs, I should be able to enable automatic instrumentation of Application Insights<a href="https://i.sstatic.net/BBFVOnzu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BBFVOnzu.png" alt="enter image description here" /></a></p> <p>I have enabled the collection on the app itself, and I confirmed that it's pointing to the correct resource group: <a href="https://i.sstatic.net/pz1jw2Rf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pz1jw2Rf.png" alt="enter image description here" /></a></p> <p>But I see nothing in the Application Insights dasbhoard. In the application insights Logs I can see the odd trace and dependency exception. But I cannot see page views or visitors or the host of other metrics that should be automatically tracked.</p> <p>I tried to then implement tracking with code, using the SDK, but again, to no avail:</p> <pre><code>if ( not hasattr(trace.get_tracer_provider(), &quot;_is_sdk_configured&quot;) or not trace.get_tracer_provider()._is_sdk_configured ): trace.set_tracer_provider(TracerProvider()) otlp_exporter = OTLPSpanExporter( endpoint=&quot;https://eastus-8.in.applicationinsights.azure.com&quot;, # Correct Ingestion Endpoint headers={ &quot;x-ms-monitor-ingestion-key&quot;: &quot;xxxxxxxxxxxxxx&quot; }, # Using the instrumentation key as the ingestion key ) trace.get_tracer_provider().add_span_processor(BatchSpanProcessor(otlp_exporter)) </code></pre> <p><a href="https://i.sstatic.net/tk1NPFyf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tk1NPFyf.png" alt="enter image description here" /></a></p> <p>I SSHd into my site on Azure using CLI and I could manually run a script that printed a message to a trace logger, so I know the configurations are accurate. But I cannot understand why, under both automatic and manual implementation I cannot see page views or users.</p> <p>Thank you</p>
<python><azure-web-app-service><azure-application-insights>
2024-04-29 13:56:48
1
1,228
user2723494
78,403,169
12,094,039
No module named 'pyspark.resource' when running pyspark command
<p>I am trying to setup the Pyspark environment for the first time in my system. I followed all the instructions carefully while installing the Apache Spark. I am using Windows 11 system.</p> <p>When I run the <code>pyspark</code> cmd, I got this error,</p> <pre><code>Python 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)] on win32 Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. Traceback (most recent call last): File &quot;D:\SoftwareInstallations\spark-3.5.1\python\pyspark\shell.py&quot;, line 31, in &lt;module&gt; import pyspark File &quot;D:\SoftwareInstallations\spark-3.5.1\python\pyspark\__init__.py&quot;, line 59, in &lt;module&gt; from pyspark.rdd import RDD, RDDBarrier File &quot;D:\SoftwareInstallations\spark-3.5.1\python\pyspark\rdd.py&quot;, line 78, in &lt;module&gt; from pyspark.resource.requests import ExecutorResourceRequests, TaskResourceRequests ModuleNotFoundError: No module named 'pyspark.resource' </code></pre> <p>These are all the environment variables I have set,</p> <pre><code>HADOOP_HOME = D:\SoftwareInstallations\hadoop-winutils\hadoop-3.3.5 PYTHONPATH = %SPARK_HOME%\python;%SPARK_HOME%\python\lib\py4j-0.10.9.7-src.zip SPARK_HOME = D:\SoftwareInstallations\spark-3.5.1 </code></pre> <p>I also tried installing the <strong>pyspark</strong> again using <code>pip install pyspark</code>, but I still face this issue.</p>
<python><apache-spark><pyspark>
2024-04-29 13:47:31
1
411
Aravindan vaithialingam
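For the `pyspark.resource` question above, this error usually means Python is resolving a different (older or shadowed) pyspark than the Spark 3.5.1 installation — for example a leftover `pip install pyspark` competing with the `PYTHONPATH` entries. A quick hedged diagnostic to see which copy wins:

```python
import importlib.util
import sys


def resolve_module(name):
    """Return the file that `import name` would load, or None if absent."""
    spec = importlib.util.find_spec(name)
    return getattr(spec, "origin", None)


# which pyspark (if any) does this interpreter resolve first?
print("pyspark ->", resolve_module("pyspark"))

# and which sys.path entries could shadow it?
for p in sys.path:
    if "spark" in p.lower():
        print("spark-related path:", p)
```

If the printed path is not under `D:\SoftwareInstallations\spark-3.5.1\python`, removing the stray copy (or reordering `PYTHONPATH`) is the likely fix.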
78,403,017
2,123,706
Have a list of week numbers, how can I find the corresponding dates in Python?
<p>I have a list of weeks for 2024: <code>wks = [2,3,4,5,6,7...18]</code></p> <p>How can I extract the dates that correspond to each week, bearing in mind, that the week starts on a Tuesday?</p> <p>As Jan 1, 2024 was Monday, I have no data Jan 2, 2024 was Tuesday, and data starts generating, hence why I have no <code>wk 1</code></p> <p>Here, I would like <code>wk_starting = ['Jan 2, 2024', 'Jan 9, 2024,....,'Feb 7, 2024'....,'Apr 23, 2024']</code> and <code>wk_composed_of = ['Jan 2 - 8', 'Jan 9 - 16',...,'Feb 7-16'....,'Apr 23-29']</code></p>
<python><date-range>
2024-04-29 13:20:26
1
3,810
frank
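A hedged sketch for the week-numbers question above, assuming the convention the question implies: week *n* starts on the first Tuesday of the year plus *(n − 2)* weeks (so week 2 starts Tuesday, Jan 2, 2024) and runs seven days:

```python
from datetime import date, timedelta


def tuesday_week_ranges(year, wks):
    """Map week numbers to (start, end) dates for Tuesday-based weeks."""
    d = date(year, 1, 1)
    while d.weekday() != 1:      # 1 == Tuesday
        d += timedelta(days=1)
    ranges = []
    for wk in wks:
        start = d + timedelta(weeks=wk - 2)  # week 2 is the first data week
        ranges.append((start, start + timedelta(days=6)))
    return ranges


for start, end in tuesday_week_ranges(2024, [2, 3, 18]):
    print(f"{start:%b} {start.day}, {start.year} - {end:%b} {end.day}")
# Jan 2, 2024 - Jan 8
# Jan 9, 2024 - Jan 15
# Apr 23, 2024 - Apr 29
```

Note a seven-day Tuesday-to-Monday week ends six days after it starts, so "Jan 9 - 16" in the question would actually be "Jan 9 - 15".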
78,402,835
19,299,757
How to find elements using selenium from a UI that uses frames
<p>I am trying to automate a web UI that seems to be using frames. I am unable to locate some of the elements in that UI unlike other UI's that are developed without frame concept. For eg: I am trying to click on a dropdown field. When I inspectd the field using &quot;Firebug&quot;, I am see this html stack for this dropdown.</p> <pre><code>&lt;frame border=&quot;0&quot; marginheight=&quot;0&quot; marginwidth=&quot;5&quot; name=&quot;body&quot; noresize=&quot;&quot; src=&quot;Pub_Docs/index.htm&quot;&gt; &lt;body marginwidth=&quot;5&quot; marginheight=&quot;0&quot;&gt; &lt;form action=&quot;../dxnrequest.aspx?RequestName=TadiPost&quot; method=&quot;POST&quot;&gt; &lt;input type=&quot;hidden&quot; name=&quot;TadiServer&quot; value=&quot;TPASS&quot;/&gt; &lt;input type=&quot;hidden&quot; name=&quot;TadiUrl&quot; value=&quot;/testdx/curr_hols.call&quot;/&gt; &lt;div style=&quot;position: relative; left: 5px; top: 0px;&quot;&gt; &lt;table width=&quot;100%&quot; class=&quot;BKGND&quot;&gt; &lt;tbody&gt; &lt;tr&gt; &lt;tr&gt; &lt;td&gt; &lt;div style=&quot;position: relative; left: 4px; top: 1px;&quot;&gt; &lt;table width=&quot;100%&quot; class=&quot;PARAMS&quot;&gt; &lt;tbody&gt; &lt;tr&gt; &lt;td valign=&quot;TOP&quot;&gt; &lt;table class=&quot;CLEAR&quot;&gt; &lt;tbody&gt; &lt;tr&gt; &lt;td class=&quot;HEADER&quot;&gt;Currency:&lt;/td&gt; &lt;td&gt; &lt;**select name=&quot;in_CUR_CODE&quot; size=&quot;1&quot;&gt;** &lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;tr&gt; &lt;tr&gt; &lt;/tbody&gt; &lt;/table&gt; &lt;/td&gt; &lt;td valign=&quot;TOP&quot;&gt; &lt;/tr&gt; &lt;tr&gt; &lt;tr&gt; &lt;/tbody&gt; </code></pre> <p>Firebug is pointing to name=&quot;in_CUR_CODE for this dropdown.So I thought this would work.</p> <pre><code>ccy_dd_locator = &quot;//form[@method='POST']//select[@name='in_CUR_CODE']&quot; CCY_DD_LOCATOR_VALUE_XPATH = (By.XPATH, 'ccy_dd_locator_value') frame = WebDriverWait(self.driver, 5).until(EC.frame_to_be_available_and_switch_to_it((By.NAME, 'body'))) 
WebDriverWait(self.driver, 15).until(EC.presence_of_element_located(CCY_DD_LOCATOR_VALUE_XPATH)).click() </code></pre> <p>But its not locating the dropdown and I am getting &quot;Timeout&quot; exception. Is there a different way in selenium to locate elements from an UI that are using frames? This doesn't seem to work as with other UI's that are not using frames.</p> <p>Any help in understanding this is much appreciated.</p>
<python><selenium-webdriver>
2024-04-29 12:46:40
1
433
Ram
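One likely bug in the selenium snippet above: `CCY_DD_LOCATOR_VALUE_XPATH` is built from the literal string `'ccy_dd_locator_value'` rather than the `ccy_dd_locator` variable, so Selenium waits for an XPath that matches nothing and times out. A hedged sketch of the frame-then-locate flow (element names taken from the question's HTML; selenium imports deferred into the function so the constants stay importable without a browser):

```python
FRAME_NAME = "body"
CCY_DD_XPATH = "//select[@name='in_CUR_CODE']"


def select_currency(driver, currency, timeout=15):
    """Switch into the named frame, pick a currency, switch back out."""
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.support.ui import Select, WebDriverWait

    wait = WebDriverWait(driver, timeout)
    # elements inside a <frame>/<iframe> can't be located until you switch into it
    wait.until(EC.frame_to_be_available_and_switch_to_it((By.NAME, FRAME_NAME)))
    dropdown = wait.until(
        EC.presence_of_element_located((By.XPATH, CCY_DD_XPATH))
    )
    Select(dropdown).select_by_visible_text(currency)
    driver.switch_to.default_content()  # leave the frame when done
```

Switching back with `switch_to.default_content()` matters when later locators target elements outside the frame.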
78,402,507
2,475,195
Random Forest - optimize for AUC or F1 score
<p>I am using random forest in <code>sklearn</code>, and my dataset is fairly unbalanced (20% positive class, 80% other class). Is there a way to make it train (optimize) for some metric that takes this into consideration, like AUC score or F1-score? Are there any tricks that I can use to nudge it in this direction? So far, only approach I have thought of / tried is using different class weights.</p> <p>Alternatively, is there another implementation (or another model, e.g. xgboost) that would allow me such custom metric?</p>
<python><machine-learning><scikit-learn><random-forest>
2024-04-29 11:49:58
3
4,355
Baron Yugovich
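For the random-forest question above: the forest itself doesn't minimise a loss you can swap for AUC, but combining `class_weight` with a hyperparameter search scored on `roc_auc` (or `f1`) makes model selection optimise the metric you care about. A hedged sketch on synthetic 80/20 data standing in for the real dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# synthetic stand-in for the 80/20 imbalanced dataset
X, y = make_classification(
    n_samples=600, weights=[0.8, 0.2], flip_y=0.05, random_state=0
)

search = GridSearchCV(
    RandomForestClassifier(class_weight="balanced", random_state=0),
    param_grid={"n_estimators": [50, 100], "min_samples_leaf": [1, 5]},
    scoring="roc_auc",   # or "f1", "average_precision", ...
    cv=3,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

The same `scoring` parameter works with xgboost estimators wrapped in the sklearn API, so switching models doesn't change the pattern.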
78,402,469
7,556,091
Why does the python version not match the banner display in jupyterlab's console?
<p>In my shell, I create a new python 3.9.19 ipykernel environment:</p> <pre class="lang-bash prettyprint-override"><code>$ ~/local/python/3.9/bin/python3 -m ipykernel install --user --name test_3dot9 Installed kernelspec test_3dot9 in /home/user/.local/share/jupyter/kernels/test_3dot9 $ jupyter kernelspec list Available kernels: test_3dot9 /home/user/.local/share/jupyter/kernels/test_3dot9 python3 /home/user/local/python/3.12/share/jupyter/kernels/python3 </code></pre> <p>Then I open a console with the test_3dot9 kernel in web ui of jupyterlab.</p> <pre class="lang-bash prettyprint-override"><code>Python 3.9.19 (main, Apr 29 2024, 15:50:25) Type 'copyright', 'credits' or 'license' for more information IPython 8.18.1 -- An enhanced Interactive Python. Type '?' for help. [1] !python --version Python 3.12.3 </code></pre> <p>Why?</p>
<python><python-3.x><jupyter><jupyter-lab><python-venv>
2024-04-29 11:43:54
0
1,896
progquester
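For the JupyterLab question above: the banner comes from the kernel (the 3.9.19 interpreter), while `!python` is a shell escape that launches whatever `python` is first on the Jupyter server's `PATH` — here the 3.12 install. To ask the kernel itself, use `sys` (a small sketch):

```python
import sys

# the interpreter actually running this kernel / session
print(sys.executable)
kernel_version = "%d.%d.%d" % sys.version_info[:3]
print(kernel_version)
```

Run in the test_3dot9 console, this would print the 3.9 interpreter's path and `3.9.19`, regardless of what `!python --version` reports.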
78,402,392
2,853,298
How to avoid string truncation when passing Python print output to a Node.js application
<p>I am running a python script from node JS application as</p> <pre><code>const { spawnSync } = require('child_process'); const pythonProcess = await spawnSync(python3, [ scriptFilePath, 'methodNameToExecute', PathList.toString() ]); const result = pythonProcess.stdout?.toString()?.trim(); </code></pre> <p>The python script is running fine here and getting expected string output from python print method to here in result variable. But when result is too long i am getting truncated string result. Please guide me how to get full string even its too large string. Please guide if any limitation is there while passing string from python to node application.</p>
<python><node.js><python-3.x><child-process><spawn>
2024-04-29 11:30:49
0
313
Rajesh Kumar
78,402,347
5,368,122
pandera.errors.BackendNotFoundError with pandas DataFrame
<pre><code>pandera: 0.18.3 pandas: 2.2.2 python: 3.9/3.11 </code></pre> <p>Hi,</p> <p>I am unable to setup the pandera for pandas dataframe as it complains:</p> <pre><code>File &quot;/anaconda/envs/data_quality_env/lib/python3.9/site-packages/pandera/api/base/schema.py&quot;, line 96, in get_backend raise BackendNotFoundError( pandera.errors.BackendNotFoundError: Backend not found for backend, class: (&lt;class 'data_validation.schemas.case.CaseSchema'&gt;, &lt;class 'pandas.core.frame.DataFrame'&gt;). Looked up the following base classes: (&lt;class 'pandas.core.frame.DataFrame'&gt;, &lt;class 'pandas.core.generic.NDFrame'&gt;, &lt;class 'pandas.core.base.PandasObject'&gt;, &lt;class 'pandas.core.accessor.DirNamesMixin'&gt;, &lt;class 'pandas.core.indexing.IndexingMixin'&gt;, &lt;class 'pandas.core.arraylike.OpsMixin'&gt;, &lt;class 'object'&gt;) </code></pre> <p>My folder structure is:</p> <pre><code>project/ data_validation/ schema/ case.py validation/ validations.py pipeline.py </code></pre> <p><strong>case.py:</strong></p> <pre><code>import pandas as pd import pandera as pa class CaseSchema(pa.DataFrameSchema): case_id = pa.Column(pa.Int) </code></pre> <p><strong>validations.py</strong></p> <pre><code>import pandas as pd from data_validation.schemas.case import CaseSchema def validate_case_data(df: pd.DataFrame) -&gt; pd.DataFrame: &quot;&quot;&quot;Validate a DataFrame against the PersonSchema.&quot;&quot;&quot; schema = CaseSchema() return schema.validate(df) </code></pre> <p><strong>pipeline.py</strong></p> <pre><code>import pandas as pd from data_validation.validation.validations import validate_case_data def validate_df(df: pd.DataFrame) -&gt; pd.DataFrame: &quot;&quot;&quot;Process data, validating it against the PersonSchema.&quot;&quot;&quot; validated_df = validate_case_data(df) return validated_df df = pd.DataFrame({ &quot;case_id&quot;: [1, 2, 3] }) processed_df = validate_df(df) </code></pre>
<python><pandas><pandera>
2024-04-29 11:20:49
1
844
Obiii
78,402,246
22,054,564
Microsoft.Azure.WebJobs.Script: WorkerConfig for runtime: python not found
<p>I have Function App of Python v2 and docker based.</p> <p><strong>function_app.py</strong>:</p> <pre><code>import logging import time import azure.functions as func from selenium import webdriver import chromedriver_autoinstaller app = func.FunctionApp() @app.schedule(schedule=&quot;0 * * * * *&quot;, arg_name=&quot;myTimer&quot;, run_on_startup=True, use_monitor=False) def timer_trigger(myTimer: func.TimerRequest) -&gt; None: if myTimer.past_due: logging.info('The timer is past due!') chromedriver_autoinstaller.install() browser = webdriver.Chrome() browser.get('http://selenium.dev/') assert &quot;Selenium&quot; in browser.title time.sleep(5) browser.quit() logging.info('Python timer trigger function executed.') </code></pre> <p><strong>local.settings.json</strong>:</p> <pre><code>{ &quot;IsEncrypted&quot;: false, &quot;Values&quot;: { &quot;AzureWebJobsStorage&quot;: &quot;UseDevelopmentStorage=true&quot;, &quot;FUNCTIONS_WORKER_RUNTIME&quot;: &quot;python&quot;, &quot;AzureWebJobsFeatureFlags&quot;: &quot;EnableWorkerIndexing&quot; } } </code></pre> <p><strong>requirements.txt</strong>:</p> <pre><code>azure-functions chromedriver-autoinstaller </code></pre> <p><strong>Dockerfile</strong>:</p> <pre><code>FROM mcr.microsoft.com/azure-functions/python:4-python3.11 ENV AzureWebJobsScriptRoot=/home/site/wwwroot \ AzureFunctionsJobHost__Logging__Console__IsEnabled=true COPY requirements.txt / RUN pip install -r /requirements.txt COPY . 
/home/site/wwwroot </code></pre> <p><strong>error</strong>: <code> Microsoft.Azure.WebJobs.Script: WorkerConfig for runtime: python not found.</code></p> <p><strong>Errors in VS Code</strong>:</p> <p><a href="https://i.sstatic.net/JpTBcTQ2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpTBcTQ2.png" alt="enter image description here" /></a></p> <p>1st time deployment is successful but I didn't get the timer trigger function in the azure portal.</p> <p><strong>2nd time deployment:</strong></p> <pre><code>3:07:49 PM fa-pyv2d-ci-001: Triggering recycle (preview mode disabled). 3:07:50 PM fa-pyv2d-ci-001: Deployment successful. deployer = ms-azuretools-vscode deploymentPath = Functions App ZipDeploy. Extract zip. Remote build. 3:08:11 PM fa-pyv2d-ci-001: Syncing triggers... 3:08:14 PM fa-pyv2d-ci-001: Querying triggers... 3:08:19 PM fa-pyv2d-ci-001: No HTTP triggers found. </code></pre> <p><strong>3rd time deployment</strong>: - Failed</p> <pre><code>3:18:51 PM fa-pyv2d-ci-001: Zip package size: 2.83 kB 3:18:54 PM fa-pyv2d-ci-001: Fetching changes. 3:18:55 PM fa-pyv2d-ci-001: Cleaning up temp folders from previous zip deployments and extracting pushed zip file /tmp/zipdeploy/47acfecb-1a01-482a-a933-8a41bf22c5e4.zip (0.00 MB) to /tmp/zipdeploy/extracted 3:19:20 PM fa-pyv2d-ci-001: Deployment failed. </code></pre>
<python><azure><docker><azure-functions>
2024-04-29 11:01:21
1
837
VivekAnandChakravarthy
78,402,226
12,493,545
How to package software with parallel imports?
<p>How can I package the following minimal example in such a way that <code>bar.py</code> is still able to import <code>foo.py</code>. Preferably, without needing to change the python code, but by passing something like <code>-m trend.bar</code> into <code>pyinstaller</code>.</p> <h2>Minimal Example</h2> <p>Given the structure:</p> <pre><code>. ├── toast │   └── foo.py # VARIABLE = 42 └── trend └── bar.py </code></pre> <p>Where <code>bar.py</code> contains</p> <pre class="lang-py prettyprint-override"><code>from toast import foo print(foo.VARIABLE) </code></pre> <p>I would execute this like <code>python3 -m trend.bar</code> without issues.</p> <h2>What I have tried</h2> <ul> <li>I tried installing without giving any additional information regarding the structure. Import didn't work.</li> <li>I tried <code>pyinstaller --additional-hooks-dir=. trend/bar.py</code>. Import didn't work.</li> </ul>
<python><pyinstaller><python-packaging>
2024-04-29 10:55:07
0
1,133
Natan
78,402,207
5,489,190
Pip/python unable to find boost under Windows 11 in virtual env
<p>I'm trying to install <code>vina</code> package in virtual conda environment. Everything goes fine until <code>pip install vina</code>. This ends with following error:</p> <pre><code>error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─&gt; [63 lines of output] fatal: not a git repository (or any of the parent directories): .git Version found 1.2.5 (from __init__.py) running egg_info creating vina. Egg-info writing vina. Egg-info\PKG-INFO writing dependency_links to vina.egg-info\dependency_links.txt writing requirements to vina.egg-info\requires.txt writing top-level names to vina.egg-info\top_level.txt writing manifest file 'vina.egg-info\SOURCES.txt' Boost library is not installed in this conda environment. </code></pre> <p>But I have boost installed, I executed <code>mamba install -c conda-forge numpy swig boost-cpp sphinx sphinx_rtd_theme</code>. I also have boost_1_82 installed and added to system PATH. I was trying to list dependencies using <code>johnnydep vina</code> but it also filed with <code>Boost library location was not found!</code></p> <p>So my question is how to fix it? How can I set correctly reference to boost so my conda virtual env will find it?</p> <p><strong>Edit:</strong> It is solely Windows problem, the very same commands work perfectly fine under Ubuntu 22.04. So as a workaround I use WSL, but if you have any clue how to solve it natively under Windows, I will leave this question open.</p>
<python><boost><pip><conda><mamba>
2024-04-29 10:51:28
1
749
Karls
78,402,155
6,060,982
Inconsistencies in Time Arithmetic with Timezone-Aware DateTime Objects in Python
<p>Surprisingly, time arithmetic is not handled as expected with pythons timezone aware objects. For example consider this snippet that creates a timezone aware object at 2022-10-30 02:00.</p> <pre class="lang-py prettyprint-override"><code>from datetime import datetime, timezone, timedelta from zoneinfo import ZoneInfo zone = ZoneInfo(&quot;Europe/Madrid&quot;) HOUR = timedelta(hours=1) u0 = datetime(2022, 10, 30, 2, tzinfo=zone) </code></pre> <p>At 2:59 the clocks shifted back to 2:00, marking the ending of the daylight saving time period. This makes 2022-10-30 02:00 ambiguous. In 2022-10-30 the clocks showed 2:00 twice. Fist comes the DST instance <code>2022-10-30 02:00:00+02:00</code> followed by the winter time instance <code> 2022-10-30 02:00:00+01:00</code>, when the timezone shifted from CEST to CET. Python solves the ambiguity by selecting <code>u0</code> to be the first of the two instances, the one within the DST interval. This is verified by by printing out <code>u0</code> and its timezone name:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; print(u0) 2022-10-30 02:00:00+02:00 &gt;&gt;&gt; u0.tzname() 'CEST' </code></pre> <p>which is the central European daylight saving timezone.</p> <p>If we add one hour to <code>u0</code>, the passage to <code>CET</code>, i.e. the winter timezone for central Europe, is correctly detected.</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; u1 = u0 + HOUR &gt;&gt;&gt; u1.tzname() 'CET' </code></pre> <p>However, the time does not fold back to 2:00, as expected:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; print(u1) 2022-10-30 03:00:00+01:00 'CET' </code></pre> <p>So, with the addition of a 1h interval, it looks as if 2 hours have passed. One hour due to the wall time shifting from 2:00 to 3:00 and another one due to the change of the timezone that is shifted 1h towards UTC. 
Conversely one would expect <code>u1</code> to print out as <code>2022-10-30 02:00:00+01:00</code>. This 2 hour shift is verified by converting <code>u0</code> and <code>u1</code> to UTC:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; u0.astimezone(timezone.utc) datetime.datetime(2022, 10, 30, 0, 0, tzinfo=datetime.timezone.utc) &gt;&gt;&gt; u1.astimezone(timezone.utc) datetime.datetime(2022, 10, 30, 2, 0, tzinfo=datetime.timezone.utc) </code></pre> <p>To make things worse, the time interval between <code>u1</code> and <code>u0</code> is inconsistently calculated depending on the chosen timezone. On the one hand we have:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; u1 - u0 datetime.timedelta(seconds=3600) </code></pre> <p>which is equivalent to a 1h interval. On the other hand, if we do the same calculation in UTC:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; u1.astimezone(timezone.utc) - u0.astimezone(timezone.utc) datetime.timedelta(seconds=7200) </code></pre> <p>the calculated interval is <strong>2h</strong>.</p> <p>In conclusion, it appears that Python's handling of timedelta in timezone-aware datetimes emphasizes local clock times rather than consistent logical intervals, leading to potential discrepancies when crossing DST boundaries.</p> <p>In my opinion, this can be misleading, as the existence of the <code>zoneinfo</code> library gives the impression that these kind of problems have been solved.</p> <p>Does anyone know if this is a bug or expected behaviour? Has anyone else encountered this issue, and how do you manage these discrepancies in your applications? If this is expected behavior, perhaps Python documentation should provide clearer guidance or warnings about performing time arithmetic with timezone-aware objects.</p> <h2>Edit</h2> <p>I have verified the described behavior with python 3.11 and 3.12. Similar results are obtained for other time zones, e.g. 
&quot;Europe/Athens&quot;, but I have not performed an extensive check for all time zones.</p>
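What the question observes follows PEP 495 semantics: arithmetic and subtraction between datetimes that share the same `tzinfo` is wall-clock arithmetic, while converting to UTC first gives absolute elapsed time. A minimal sketch of the usual workaround, reusing the question's variables (the `u1_abs` name is mine):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

zone = ZoneInfo("Europe/Madrid")
HOUR = timedelta(hours=1)

# First (CEST) occurrence of the ambiguous wall time 2022-10-30 02:00
u0 = datetime(2022, 10, 30, 2, tzinfo=zone)

# Wall-clock arithmetic: same tzinfo, offsets are ignored -> 03:00 CET
u1_wall = u0 + HOUR

# Absolute-time arithmetic: convert to UTC, add, convert back
u1_abs = (u0.astimezone(timezone.utc) + HOUR).astimezone(zone)

print(u1_wall)  # 2022-10-30 03:00:00+01:00
print(u1_abs)   # 2022-10-30 02:00:00+01:00 (the folded, second 02:00)
```

Differences taken between the UTC-converted values then behave consistently: `u1_abs` is exactly one hour after `u0` in absolute terms.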
<python><datetime><timezone>
2024-04-29 10:42:13
1
700
zap
78,401,909
14,860,526
How to know if an object in a python module is imported or defined inside the module itself
<p>Let's assume I have two Python modules, a and b.</p> <p>a.py:</p> <pre><code>CONSTANT = 2 </code></pre> <p>b.py:</p> <pre><code>from a import CONSTANT CONSTANT2 = 4 </code></pre> <p>If I use</p> <pre><code>b.__dict__ </code></pre> <p>or</p> <pre><code>inspect.getmembers(b) </code></pre> <p>I get CONSTANT and CONSTANT2, but how do I get only CONSTANT (imported) and its parent module? Or how can I filter the results to get only CONSTANT and its parent module?</p> <p>What I want to have in the end is actually &quot;a&quot;, the modules from which b imports.</p> <pre><code>inspect.getmodule(CONSTANT) </code></pre> <p>doesn't work on integers or lists.</p> <p>I need this information to check that the dependencies are respected and that lower-level modules do not import from higher-level modules.</p>
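Values such as plain ints carry no link back to a defining module, so runtime inspection cannot answer this; parsing the module's source for import statements can. A sketch under the assumption that the module's source is available to `inspect.getsource`:

```python
import ast
import inspect

def imported_names(module):
    """Map each name brought into `module` via an import statement
    to the module it came from."""
    tree = ast.parse(inspect.getsource(module))
    found = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.ImportFrom):
            # from X import Y [as Z]
            for alias in node.names:
                found[alias.asname or alias.name] = node.module
        elif isinstance(node, ast.Import):
            # import X [as Z]
            for alias in node.names:
                found[alias.asname or alias.name] = alias.name
    return found
```

For the example above, `imported_names(b)` would return `{'CONSTANT': 'a'}`, which is enough to check that lower-level modules never import from higher-level ones.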
<python><python-import>
2024-04-29 10:01:20
1
642
Alberto B
78,401,894
15,498,094
ThreadPoolExecutor submit method thread creation error
<p>I'm running a stream data processor on AWS ECS (EC2) by polling for messages in an SQS queue.</p> <p>I use a ThreadPoolExecutor as shown below (some of the code is omitted for brevity):</p> <pre><code>def process_messages(self, messages): executor = ThreadPoolExecutor() success = 0 timed_out = False thread_future = executor.submit(self._process, messages) try: succeeded = thread_future.result(timeout=self._message_processing_timeout) # omitted: handling succeeded messages except TimeoutError as e: timed_out = True print(&quot;Timed out!&quot;) except Exception as e: print(f&quot;{repr(e)}&quot;) if timed_out: # omitted: kill process running this worker executor.shutdown(wait=False) return </code></pre> <p>I sometimes get the error <code>error: can't start new thread</code> from <code>executor.submit()</code>. After receiving this error, the worker stops processing messages and they start building up in my DLQ. I suspect that there is insufficient memory and that is preventing the thread from being created. Further, I think the lack of error handling here is causing the thread pool to be stuck in some state that prevents it from processing further messages. Please correct me if I'm wrong here.</p> <p>How do I gracefully handle this error? I am considering moving the <code>executor.submit()</code> call inside the try/except, but I don't know if that is the right way to deal with this.</p>
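The `error: can't start new thread` from `executor.submit()` is a `RuntimeError`, and `submit()` also raises `RuntimeError` once the executor has been shut down, which would leave the worker rejecting every later message. A hedged sketch of one way to handle it (class and method names here are illustrative, not from the original code): reuse one pool, wrap `submit` in the try block, and rebuild the pool when submission fails.

```python
from concurrent.futures import ThreadPoolExecutor

class MessageProcessor:
    def __init__(self, max_workers=4, timeout=30):
        self._timeout = timeout
        self._max_workers = max_workers
        self._executor = ThreadPoolExecutor(max_workers=max_workers)

    def _rebuild_pool(self):
        # Drop the broken or shut-down pool and start a fresh one.
        self._executor.shutdown(wait=False)
        self._executor = ThreadPoolExecutor(max_workers=self._max_workers)

    def process(self, fn, *args):
        try:
            future = self._executor.submit(fn, *args)
        except RuntimeError:
            # Thread creation failed or the pool was shut down:
            # rebuild and retry once instead of staying stuck.
            self._rebuild_pool()
            future = self._executor.submit(fn, *args)
        return future.result(timeout=self._timeout)
```

A bounded `max_workers` also caps thread (and therefore stack memory) usage, which is usually the root cause of `can't start new thread` on a constrained ECS task.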
<python><python-3.x><multithreading><amazon-ecs><python-multithreading>
2024-04-29 09:59:11
0
446
Vedank Pande
78,401,778
11,829,398
How to create a Pydantic type without a keyword argument?
<p>I'm working with a number of data types and often pass <code>data_type</code> as an argument to my functions/methods/classes. For type hinting, I've considered doing</p> <pre class="lang-py prettyprint-override"><code>from typing import Literal data_type: Literal[&quot;type1&quot;, &quot;type2&quot;, &quot;type3&quot;, &quot;type4&quot;] </code></pre> <p>But this seems brittle. If I need to add a new data type, I need to update the type hints <em>everywhere</em> and this isn't DRY. Is this even an issue?</p> <p>So, I created a <code>DataType</code> type in a very hacky way:</p> <pre class="lang-py prettyprint-override"><code>VALID_DATA_TYPES = {&quot;type1&quot;, &quot;type2&quot;, &quot;type3&quot;, &quot;type4&quot;} DataType = Literal[tuple(VALID_DATA_TYPES)] </code></pre> <p>I'm considering switching to Pydantic to get param validation and am doing it properly.</p> <p>But I don't like how you have to pass an argument name (<code>value</code> in this case) in Pydantic</p> <pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel class DataType(BaseModel): value: Literal[&quot;type1&quot;, &quot;type2&quot;, &quot;type3&quot;, &quot;type4&quot;] DataType(value=&quot;type1&quot;) # all good DataType(value=&quot;error&quot;) # ValidationError </code></pre> <p>Is it possible to use a Pydantic model like so?</p> <pre class="lang-py prettyprint-override"><code>DataType(&quot;type1&quot;) # all good DataType(&quot;error&quot;) # ValidationError </code></pre> <p>I find it odd that this functionality doesn't exist out of the box... perhaps I am just doing something that isn't common/necessary?</p> <p>Note: I do not want to use an Enum. I want to explore other options.</p>
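For a single source of truth plus runtime validation, the standard library alone may be enough: define the `Literal` once and derive the set of valid values from it with `typing.get_args`. A sketch (the `validate_data_type` helper is an illustrative name):

```python
from typing import Literal, get_args

DataType = Literal["type1", "type2", "type3", "type4"]
VALID_DATA_TYPES = frozenset(get_args(DataType))  # derived, always in sync

def validate_data_type(value: str) -> DataType:
    # Positional call, no keyword argument needed
    if value not in VALID_DATA_TYPES:
        raise ValueError(f"{value!r} is not one of {sorted(VALID_DATA_TYPES)}")
    return value  # type: ignore[return-value]
```

Adding a new data type then means editing only the `Literal` line. If Pydantic v2 is available, `TypeAdapter(DataType).validate_python("type1")` gives similar positional-call ergonomics without wrapping the value in a model field.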
<python><pydantic>
2024-04-29 09:38:48
2
1,438
codeananda
78,401,689
1,020,139
How can I combine two types in Python similar to using "&" in TypeScript?
<p>Say, I have two types <code>A</code> and <code>B</code>. How can I concatenate these types into a type that has all members of <code>A</code> and <code>B</code>?</p> <p>In TypeScript I would use the <code>&amp;</code> operator: <code>A &amp; B</code>:</p> <p><a href="https://ultimatecourses.com/blog/use-intersection-types-to-combine-types-in-typescript" rel="nofollow noreferrer">https://ultimatecourses.com/blog/use-intersection-types-to-combine-types-in-typescript</a></p> <p>How can I achieve the same in Python?</p>
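Python's typing system currently has no intersection operator; the closest idiom is a new `Protocol` that inherits from both parts. A sketch with assumed example members:

```python
from dataclasses import dataclass
from typing import Protocol, runtime_checkable

class HasName(Protocol):
    name: str

class HasAge(Protocol):
    age: int

@runtime_checkable
class HasNameAndAge(HasName, HasAge, Protocol):
    """Plays the role of `HasName & HasAge`."""

@dataclass
class Person:  # structurally satisfies both protocols
    name: str
    age: int

def describe(obj: HasNameAndAge) -> str:
    # Type checkers accept any object with both `name` and `age`
    return f"{obj.name} ({obj.age})"
```

This only works for structural (Protocol) types; for concrete classes, multiple inheritance (`class C(A, B)`) is the usual stand-in, and a true `Intersection` type has been discussed for the typing module but does not exist yet.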
<python>
2024-04-29 09:23:46
2
14,560
Shuzheng
78,401,662
15,045,917
Converting the following recursive binary tree traversal algorithm to an iterative one
<p>I would like to learn whether the following recursive code can be made more efficient by an iterative implementation, and if so, how such an implementation can be done. It has been some years since I have actively coded, so I have forgotten enough nomenclature that I don't think I can answer this question myself with the help of existing literature.</p> <p>Denote by <code>l</code> and <code>r</code> the number of left and right turns we have taken in a binary tree of height <code>n</code>. Suppose that <code>f(l, r)</code> is some real function that computes some value depending on <code>l</code> and <code>r</code> in constant time. Assume that <code>f</code> is not symmetric, i.e. not necessarily <code>f(l,r) = f(r,l)</code>. The recursive code is as follows (in Python 3):</p> <pre><code>def recursive_travel(l, r, cur_height, max_height): if cur_height == max_height: return f(l, r) * (f(l+1, r) + f(l, r+1)) return recursive_travel(l+1, r, cur_height+1, max_height) + recursive_travel(l, r+1, cur_height+1, max_height) </code></pre> <p>and the initial call should be <code>recursive_travel(0,0,0,n)</code>. An iterative implementation would be slightly faster without the need for recursive calls, but I don't know whether there is an elegant way to implement this. All help will be appreciated!</p>
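The recursion here can be removed entirely rather than simulated with a stack: every leaf is determined by its pair `(l, r)` with `l + r = n`, and exactly `comb(n, l)` root-to-leaf paths reach each pair, so the whole traversal collapses to a single O(n) loop. A sketch, with `f` passed as a parameter:

```python
from math import comb

def iterative_travel(n, f):
    # Each depth-n leaf with l left turns (and r = n - l right turns)
    # contributes f(l, r) * (f(l+1, r) + f(l, r+1)), and comb(n, l)
    # distinct paths through the tree reach that (l, r) pair.
    return sum(
        comb(n, l) * f(l, n - l) * (f(l + 1, n - l) + f(l, n - l + 1))
        for l in range(n + 1)
    )
```

This is exponentially faster than the O(2^n) traversal. An explicit-stack iteration would only be needed if a leaf's contribution depended on the particular path taken, not just on `(l, r)`.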
<python><algorithm><recursion><optimization><binary-tree>
2024-04-29 09:20:16
1
393
Epsilon Away
78,401,612
5,056,004
Handling pre-release/development/local version of python packages w.r.t PEP440
<p>I have a mono-repo where I have 2 Python packages, <strong>PackageA</strong> and <strong>PackageB</strong>, where PackageB depends on PackageA~=1.0.1 (1.0.1 is the latest published version of PackageA).</p> <p>When I develop in this mono-repo, I install these 2 packages in editable mode in a virtual environment, and as part of this install, a local version gets generated for each of these packages. There are 2 requirements for these generated versions, to ensure pip can properly resolve dependencies:</p> <ol> <li>a generated version should always be greater than or equal to the latest published version.</li> <li>a generated version should be less than the next version to be published.</li> </ol> <p>So for instance, given that I'm working in a branch called feature/foo, the version generated for PackageA would become: 1.0.1post+feature.foo This version satisfies requirement 1 above. This can be confirmed by running:</p> <pre><code>python -c &quot;from compatibleversion import check_version; assert check_version('1.0.1post+feature.foo', '&gt;=1.0.1')&quot; </code></pre> <p>It also satisfies requirement 2 above:</p> <pre><code>python -c &quot;from compatibleversion import check_version; assert check_version('1.0.1post+feature.foo', '&lt;1.0.2')&quot; </code></pre> <p>The problem starts when I push PackageA with this version to a feed. Given that the following versions of PackageA exist in my feed:</p> <pre><code>1.0.1post+feature.foo 1.0.1 </code></pre> <p>when I say <code>pip install PackageA==1.0.1</code>, pip installs the <strong>1.0.1post+feature.foo</strong> version of PackageA instead, which might potentially be a buggy version of PackageA, while at the same time fulfilling the requirement <strong>~=1.0.1</strong>.</p> <p>So my question is, how to make sure development versions of a package do not unintentionally get installed by pip, while at the same time fulfilling the abovementioned two requirements.</p> <p>PS. I have tried to find an answer to this question by reading <a href="https://packaging.python.org/en/latest/specifications/version-specifiers/#handling-of-pre-releases" rel="nofollow noreferrer">https://packaging.python.org/en/latest/specifications/version-specifiers/#handling-of-pre-releases</a> without luck, and as a workaround, have made a special feed (have called it test feed) for these development versions, and ensured that for production environments, pip does not have access to those test feeds.</p>
<python><version><python-packaging><monorepo>
2024-04-29 09:10:19
0
628
masih
78,401,299
16,723,655
How can adjust ylabel font location in matplotlib if they are attached?
<p><a href="https://i.sstatic.net/8MGw3wiT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8MGw3wiT.png" alt="enter image description here" /></a></p> <p>Here is my plot.</p> <p>I tried rotation, but the labels are still attached to the axis.</p> <p>How can I separate them from the axis and show them properly?</p> <p>However, I also have 2 more sets of y-axis tick labels.</p> <p>If I adjust them, I want to use the same format for all 4 sets of y-axis tick labels.</p> <p>Here is the code.</p> <pre><code>import matplotlib.pyplot as plt fig, ax1 = plt.subplots() ax1.set_ylim([0, 0.7]) ax2 = ax1.twinx() ax2.set_xticks([5, 15, 25, 35]) ax2.set_yticks([0.1156, 0.1904, 0.2062, 0.4879]) ax2.plot([5, 15, 25, 35], [0.1156, 0.1904, 0.2062, 0.4879], linewidth=2, color='black') ax2.plot(40, 0.7) ax2.set_yticklabels(['24', '40', '43', '103'], minor=False, fontsize=15, fontweight='bold') </code></pre>
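Without the data behind the screenshot this is partly guesswork, but tick labels sitting on top of the axis or the curve are usually pushed away with `tick_params(pad=...)`, which can be applied uniformly to all four sets of tick labels. A sketch based on the code above (the pad value of 10 points is an arbitrary choice, and the Agg backend is used only to keep the example non-interactive):

```python
import matplotlib
matplotlib.use("Agg")  # assumption: non-interactive backend for the example
import matplotlib.pyplot as plt

fig, ax1 = plt.subplots()
ax1.set_ylim([0, 0.7])
ax2 = ax1.twinx()
ax2.set_xticks([5, 15, 25, 35])
ax2.set_yticks([0.1156, 0.1904, 0.2062, 0.4879])
ax2.plot([5, 15, 25, 35], [0.1156, 0.1904, 0.2062, 0.4879],
         linewidth=2, color='black')
ax2.set_yticklabels(['24', '40', '43', '103'],
                    minor=False, fontsize=15, fontweight='bold')

# Push every tick label away from its axis, same pad on all four sides
for ax in (ax1, ax2):
    ax.tick_params(axis='y', pad=10)
    ax.tick_params(axis='x', pad=10)
fig.canvas.draw()
```

Larger `pad` values move the labels further out; `ax.yaxis.labelpad` does the same for an axis title rather than tick labels.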
<python><matplotlib>
2024-04-29 08:07:38
1
403
MCPMH
78,401,248
17,136,258
How to handle a nested JSON?
<p>I have a problem. I have a nested <code>JSON</code> file:</p> <pre class="lang-py prettyprint-override"><code>json_data = ''' { &quot;appVersion&quot;: &quot;&quot;, &quot;device&quot;: { &quot;model&quot;: &quot;&quot; }, &quot;bef&quot;: { &quot;catalog&quot;: &quot;&quot; }, &quot;data&quot;: [ { &quot;timestamp&quot;: &quot;&quot;, &quot;label&quot;: &quot;&quot;, &quot;category&quot;: &quot;&quot; } ] } </code></pre> <p>I would like to extract all data and if it is nested I would like it to be separated with a <code>_</code>. I have tried to normalise the nested JSON file. I use <code>json_normalise</code> for this. Unfortunately, the desired output is not what I want and need. Furthermore, I want that there can be any possible number of nested values, so I tried to solve it with a loop.</p> <p>How can I produce the desired output?</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import json json_data = ''' { &quot;appVersion&quot;: &quot;0.0.3&quot;, &quot;device&quot;: { &quot;model&quot;: &quot;Lenovo&quot; }, &quot;bef&quot;: { &quot;catalog&quot;: &quot;Manual&quot; }, &quot;data&quot;: [ { &quot;timestamp&quot;: &quot;2024-04-24 12:08:02.415077&quot;, &quot;label&quot;: &quot;zuf&quot;, &quot;category&quot;: &quot;50&quot; } ] } ''' parsed_json = json.loads(json_data) def extract_metadata(json_data): metadata = {} for key, value in json_data.items(): if isinstance(value, dict): for k, v in value.items(): metadata[f'{key}_{k}'] = v else: metadata[key] = value return metadata meta_data = extract_metadata(parsed_json) df_main = pd.json_normalize(parsed_json['data'], sep='_') df_meta = pd.DataFrame([meta_data]) df = pd.concat([df_main, df_meta], axis=1) print(df) </code></pre> <p>What I got</p> <pre class="lang-py prettyprint-override"><code> timestamp label category appVersion device_model \ 0 2024-04-24 12:08:02.415077 zuf 50 0.0.3 Lenovo bef_catalog data 0 Manual [{'timestamp': '2024-04-24 12:08:02.415077', '... 
</code></pre> <p>What I want</p> <pre class="lang-py prettyprint-override"><code>appVersion device_model bef_catalog data_timestamp data_label data_category 0.0.3 Lenovo Manual 2024-04-24 12:08:02.415 zuf 50 </code></pre>
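One stdlib-only alternative is to flatten the nested dict first and build the DataFrame afterwards; the decision to emit one row per record in `data` is an assumption based on the desired output:

```python
import json

def flatten(obj, parent="", sep="_"):
    """Flatten arbitrarily nested dicts into {'a_b_c': value} form."""
    out = {}
    for key, value in obj.items():
        name = f"{parent}{sep}{key}" if parent else key
        if isinstance(value, dict):
            out.update(flatten(value, name, sep))  # recurse, any depth
        else:
            out[name] = value
    return out

def to_rows(parsed, list_key="data"):
    """One output row per record in parsed[list_key]; metadata repeated."""
    meta = flatten({k: v for k, v in parsed.items() if k != list_key})
    return [{**meta, **flatten(rec, list_key)} for rec in parsed[list_key]]
```

`pd.DataFrame(to_rows(parsed_json))` then yields the columns appVersion, device_model, bef_catalog, data_timestamp, data_label and data_category, in that order.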
<python><json><pandas>
2024-04-29 07:59:04
3
560
Test
78,401,161
5,786,649
Hydra / OmegaConf: interpolation of config groups
<p>Is it possible to interpolate a config group? My intention is to interpolate a group's key from other keys, in this example: selecting a dataset's config group based on the dataset name and the dataset version (e.g., if the dataset comes from a competition, this might be one of the tasks of the competition). Using Hydra's structured configs, my idea is something like this:</p> <pre class="lang-py prettyprint-override"><code>@dataclass class MainConfig(DictConfig): dataset_name: str dataset_version: str dataset_group: str = &quot;${.dataset_name}_${.dataset_version}&quot; @dataclass class MyDatasetPart1Config(DictConfig): paths: dict[str, str] cs = ConfigStore.instance() cs.store(name=&quot;config&quot;, node=MainConfig) cs.store(group=&quot;dataset&quot;, name=&quot;mydataset_part1&quot;, node=MyDatasetPart1Config) def main(): ... if __name__ == &quot;__main__&quot;: ... </code></pre> <p>then be able to call the group like</p> <pre><code>&gt;&gt;&gt; python myapp.py dataset_name=mydataset version=part1 </code></pre> <p>hoping to achieve the same as if I used</p> <pre><code>&gt;&gt;&gt; python myapp.py dataset_group=mydataset_part1 </code></pre>
<python><fb-hydra><omegaconf>
2024-04-29 07:40:47
1
543
Lukas
78,400,895
16,525,263
How to find the max value in a column in pyspark dataframe
<p>I have a pyspark dataframe</p> <pre><code>deviceId timestamp 009eeb 2024-04-22 009eeb 2024-04-24 7c002v 2024-04-20 7c002v null 4fd556 null 4fd556 null </code></pre> <p>I need to get the max timestamp in the final dataframe and drop duplicates. I tried with the below code:</p> <pre><code>w = Window.partitionBy('deviceId') df_max = df.withColumn('max_col', F.max('timestamp').over(w))\ .where((F.col('timestamp') == F.col('max_col')) | (F.col('timestamp').isNull()))\ .dropDuplicates() </code></pre> <p>But with this code, I'm not getting what I need:</p> <pre><code>deviceId timestamp 009eeb 2024-04-24 7c002v null 4fd556 null 4fd556 null </code></pre> <p>I need to get the result as below</p> <pre><code>deviceId timestamp 009eeb 2024-04-24 7c002v 2024-04-20 4fd556 null </code></pre> <p>Please suggest any changes to be made.</p>
<python><apache-spark><pyspark>
2024-04-29 06:39:01
1
434
user175025
78,400,487
8,176,763
airflow 2.9 schedule DAG on the first weekday day of the month every month
<p>I would like to schedule a DAG on the first weekday of the month for every month. So, for example, this year it would have been scheduled at:</p> <pre><code>2024-01-01 2024-02-01 2024-03-01 2024-04-01 2024-05-01 2024-06-03 -----&gt; here the third because the first is a Saturday </code></pre> <p>I have had a look at <a href="https://airflow.apache.org/docs/apache-airflow/stable/authoring-and-scheduling/timetable.html" rel="nofollow noreferrer">https://airflow.apache.org/docs/apache-airflow/stable/authoring-and-scheduling/timetable.html</a>, but it seems quite complex to structure such a DAG. Is this the only way to go for that kind of scheduling?</p> <p>This is my logic to find the first working day:</p> <pre><code>import pendulum year = 2024 for month in range(1, 13): first_day_month = pendulum.DateTime(year=year, month=month, day=1) working_day = first_day_month if first_day_month.weekday() &gt;= 0 and first_day_month.weekday() &lt;= 4 else first_day_month.next(pendulum.MONDAY) print(working_day) </code></pre>
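Plain cron cannot express &quot;first weekday of the month&quot;, which is why the custom `Timetable` route exists; a lighter, commonly used workaround is to schedule the DAG daily for days 1-3 of the month (e.g. `0 0 1-3 * *`) and short-circuit unless the run date is the first weekday. The date logic itself needs only the standard library (function names are illustrative):

```python
from datetime import date, timedelta

def first_weekday_of_month(year, month):
    """First Monday-to-Friday day of the given month."""
    d = date(year, month, 1)
    while d.weekday() > 4:  # 5 = Saturday, 6 = Sunday
        d += timedelta(days=1)
    return d

def is_first_weekday(d):
    """Gate for a ShortCircuitOperator-style check on the run date."""
    return d == first_weekday_of_month(d.year, d.month)
```

The trade-off versus a custom `Timetable` is that two of the three scheduled runs per month are skipped rather than never created.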
<python><airflow>
2024-04-29 04:28:37
1
2,459
moth
78,400,480
10,499,034
How can I force plots to be the same size when the figsize setting does nothing in pyplot?
<p>I wrote the following code:</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt import random symetricArr=[[random.random() for i in range(100)] for j in range(100)] asymetricArr=[[random.random() for i in range(50)] for j in range(100)] plt.figure(figsize=(4,4)) plt.imshow(symetricArr, cmap='hot', interpolation='nearest') plt.show() plt.figure(figsize=(4,4)) plt.imshow(asymetricArr, cmap='hot', interpolation='nearest') plt.show() </code></pre> <p>Which should produce two figures that are the exact same size but the figsize property does not take for some reason. Instead it produces this:</p> <p><a href="https://i.sstatic.net/8lPyooTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8lPyooTK.png" alt="enter image description here" /></a></p> <p>How can I make this produce two plots that are the exact same size?</p>
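The `figsize` setting is actually honoured here: both figures are 4x4 inches. What differs is the axes box inside the figure, because `imshow` defaults to `aspect='equal'`, so the 100x50 array forces a half-width axes. Passing `aspect='auto'` lets both images stretch to fill identical axes. A sketch (the Agg backend is used only to keep the example non-interactive):

```python
import random
import matplotlib
matplotlib.use("Agg")  # assumption: non-interactive backend
import matplotlib.pyplot as plt

symetricArr = [[random.random() for i in range(100)] for j in range(100)]
asymetricArr = [[random.random() for i in range(50)] for j in range(100)]

figs = []
for arr in (symetricArr, asymetricArr):
    fig, ax = plt.subplots(figsize=(4, 4))
    # aspect='auto' stretches pixels so the axes box stays the same size
    ax.imshow(arr, cmap='hot', interpolation='nearest', aspect='auto')
    figs.append(fig)
```

If square pixels must be kept instead, the alternative is to shrink the figure (or use `Axes.set_box_aspect`) so the axes boxes match by construction.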
<python><matplotlib><imshow>
2024-04-29 04:27:08
0
792
Jamie
78,400,455
3,314,925
how to calculate common elements between list column rows
<p>Is it possible to calculate the number of common items in a list column, between a row and the previous row, in a method chain? My code below throws an error: 'TypeError: unhashable type: 'list''</p> <pre><code>import pandas as pd df = pd.DataFrame({ 'x':[1,2,3,4], 'list_column': [ ['apple', 'banana', 'cherry'], ['banana', 'cherry'], ['cherry', 'date', 'fig'], ['orange'] ] }) res = len(set(df.loc[1,'list_column']) &amp; set(df.loc[0,'list_column'])) res df=(df .assign( list_length=lambda x: x['list_column'].str.len(), nr_common=lambda x: (set(x['list_column']) &amp; set(x['list_column'].shift(1))).len() ) ) df </code></pre>
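The `assign` version fails because the lambda receives whole Series, so `set(x['list_column'])` tries to hash each list element of the Series rather than one row's items. Pairing each row with its shifted predecessor explicitly avoids that. A sketch (defaulting the first row's count to 0 is an assumption):

```python
import pandas as pd

df = pd.DataFrame({
    'x': [1, 2, 3, 4],
    'list_column': [
        ['apple', 'banana', 'cherry'],
        ['banana', 'cherry'],
        ['cherry', 'date', 'fig'],
        ['orange'],
    ],
})

prev = df['list_column'].shift(1)  # first entry becomes NaN
df = df.assign(
    list_length=df['list_column'].str.len(),
    nr_common=[
        # intersect each row with the previous row's list
        len(set(cur) & set(prv)) if isinstance(prv, list) else 0
        for cur, prv in zip(df['list_column'], prev)
    ],
)
```

The list comprehension works row-wise, which is what the set intersection needs; `zip(df['list_column'], prev)` is just an explicit spelling of &quot;current row, previous row&quot;.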
<python><pandas><dataframe><list>
2024-04-29 04:17:01
5
1,610
Zeus
78,400,440
15,625,738
How can I Insert data in PDF via python on the PDF form having input field ? I am using pypdf
<p>I am trying to automate the PDF form input process on my PDF form, which has input fields. I have extracted the form fields and data structures via the following method:</p> <pre><code>from pypdf import PdfWriter, PdfReader, generic import pypdf def extract_form_fields(pdf_path): with open(pdf_path, 'rb') as file: reader = PdfReader(file) fields = reader.get_fields() return fields pdf_path = './form.pdf' form_fields = extract_form_fields(pdf_path) </code></pre> <p>And I am getting field details:</p> <pre><code>Field Name: topmostSubform[0].Page1[0].T3[1] Field Attributes: {'/T': 'T3[1]', '/FT': '/Tx', '/Parent': {'/Kids': [IndirectObject(103, 0, 1808139782480), IndirectObject(432, 0, 1808139782480), IndirectObject(433, 0, 1808139782480), IndirectObject(104, 0, 1808139782480), IndirectObject(454, 0, 1808139782480), IndirectObject(455, 0, 1808139782480), IndirectObject(456, 0, 1808139782480), IndirectObject(457, 0, 1808139782480), IndirectObject(107, 0, 1808139782480), IndirectObject(108, 0, 1808139782480), IndirectObject(109, 0, 1808139782480), IndirectObject(113, 0, 1808139782480), IndirectObject(486, 0, 1808139782480), IndirectObject(115, 0, 1808139782480), IndirectObject(490, 0, 1808139782480), IndirectObject(491, 0, 1808139782480), IndirectObject(492, 0, 1808139782480), IndirectObject(116, 0, 1808139782480), IndirectObject(495, 0, 1808139782480), IndirectObject(117, 0, 1808139782480), IndirectObject(498, 0, 1808139782480), IndirectObject(499, 0, 1808139782480), IndirectObject(500, 0, 1808139782480), IndirectObject(501, 0, 1808139782480), IndirectObject(118, 0, 1808139782480), IndirectObject(504, 0, 1808139782480), IndirectObject(505, 0, 1808139782480)], '/Parent': IndirectObject(101, 0, 1808139782480), '/T': 'Page1[0]'}, '/TU': ' E-Mail</code></pre> <p>And I am updating the PDF with the following method:</p> <pre><code>def fill_pdf_field(pdf_path, field_name, field_value, output_path): with open(pdf_path, 'rb') as file: reader = PdfReader(file) writer = PdfWriter() for page_num in range(len(reader.pages)): page = reader.pages[page_num] if '/Annots' in page: for annot in page['/Annots']: if '/T' in annot.get_object() and annot['/T'] == field_name: annot.update({ generic.NameObject('/V'): generic.create_string_object(field_value) }) writer.add_page(page) with open(output_path, 'wb') as output_file: writer.write(output_file) pdf_path = './form.pdf' output_path = './updated_form.pdf' field_name = 'topmostSubform[0].Page1[0].T3[1]' field_value = 'filled value test for email' fill_pdf_field(pdf_path, field_name, field_value, output_path) </code></pre> <p>With <code>fill_pdf_field</code> a new PDF file is created, but the field value does not appear. When I check with the <code>extract_form_fields</code> method, the fields are no longer available. When I open the file in a browser, the form is readable. When I write the fields with any other writer again and check back, the fields are available again.</p> <p>How can I resolve this issue and save the filled form as a PDF?</p>
<python><pdf><pypdf><pymupdf><python-pdfreader>
2024-04-29 04:12:33
0
576
Madhav Dhungana
78,400,329
639,616
multiplot in for loop by importing only pandas
<p>Sometimes, <code>DataFrame.plot()</code> inside a <code>for</code> loop produces multiple charts.</p> <pre><code>import pandas as pd data = {'Str': ['A', 'A', 'B', 'B'], 'Num': [i for i in range(4)]} df = pd.DataFrame(data) for n in ['A', 'B']: df[df.Str == n].plot(kind='bar') </code></pre> <p><a href="https://i.sstatic.net/JlVqXO2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JlVqXO2C.png" alt="enter image description here" /></a></p> <p>But sometimes, it produces a single chart.</p> <pre><code>import pandas as pd data = {'C1': ['A', 'A', 'B', 'B'], 'C2': [i for i in range(4)], 'C3': [1,2,1,2]} df = pd.DataFrame(data) for n in [1,2]: df[df.C3 == n].groupby('C1').C2.sum().plot(kind='bar') </code></pre> <p><a href="https://i.sstatic.net/IYXtfsuW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IYXtfsuW.png" alt="enter image description here" /></a></p> <p>If <code>plt.show()</code> is added at the end of the loop in the previous code, it will produce multiple charts.</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt data = {'C1': ['A', 'A', 'B', 'B'], 'C2': [i for i in range(4)], 'C3': [1,2,1,2]} df = pd.DataFrame(data) for n in [1,2]: df[df.C3 == n].groupby('C1').C2.sum().plot(kind='bar') plt.show() </code></pre> <p><a href="https://i.sstatic.net/2fFd68cM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fFd68cM.png" alt="enter image description here" /></a></p> <p>I don't want to use <code>plt.show()</code>. Actually, I want to <code>import</code> only <code>pandas</code> and create multiple charts using a <code>for</code> loop.</p>
<python><pandas><matplotlib>
2024-04-29 03:16:37
2
12,836
wannik
78,400,266
24,758,287
Rolling Indexing in Polars?
<p>I'd like to ask around if anyone knows how to do rolling indexing in polars? I have personally tried a few solutions which did not work for me (I'll show them below):</p> <p><strong>What I'd like to do</strong>: <strong>Indexing the number of occurrences within the past X days by Name</strong> Example: Let's say I'd like to index occurrences within the past 2 days:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.from_repr(&quot;&quot;&quot; ┌─────────┬─────────────────────┬─────────┐ │ Name ┆ Date ┆ Counter │ │ --- ┆ --- ┆ --- │ │ str ┆ datetime[ns] ┆ i64 │ ╞═════════╪═════════════════════╪═════════╡ │ John ┆ 2023-01-01 00:00:00 ┆ 1 │ │ John ┆ 2023-01-01 00:00:00 ┆ 2 │ │ John ┆ 2023-01-01 00:00:00 ┆ 3 │ │ John ┆ 2023-01-01 00:00:00 ┆ 4 │ │ John ┆ 2023-01-02 00:00:00 ┆ 5 │ │ John ┆ 2023-01-02 00:00:00 ┆ 6 │ │ John ┆ 2023-01-02 00:00:00 ┆ 7 │ │ John ┆ 2023-01-02 00:00:00 ┆ 8 │ │ John ┆ 2023-01-03 00:00:00 ┆ 5 │ │ John ┆ 2023-01-03 00:00:00 ┆ 6 │ │ New Guy ┆ 2023-01-01 00:00:00 ┆ 1 │ └─────────┴─────────────────────┴─────────┘ &quot;&quot;&quot;) </code></pre> <blockquote> <p>In this case, the counter resets to &quot;1&quot; starting from the past X days (e.g. 
for 3 Jan 23, it starts &quot;1&quot; from 2 Jan 23), or if a new name is detected</p> </blockquote> <p><strong>What I've tried</strong>:</p> <pre class="lang-py prettyprint-override"><code>(df.rolling(index_column='Date', period='2d', group_by='Name') .agg((pl.col(&quot;Date&quot;).rank(method='ordinal')).alias(&quot;Counter&quot;)) ) </code></pre> <p>The above does not work because it outputs:</p> <pre><code>┌─────────┬─────────────────────┬──────────────────────────┐ │ Name ┆ Date ┆ Counter │ │ --- ┆ --- ┆ --- │ │ str ┆ datetime[ns] ┆ list[u32] │ ╞═════════╪═════════════════════╪══════════════════════════╡ │ John ┆ 2023-01-01 00:00:00 ┆ [1, 2, 3, 4] │ │ John ┆ 2023-01-01 00:00:00 ┆ [1, 2, 3, 4] │ │ John ┆ 2023-01-01 00:00:00 ┆ [1, 2, 3, 4] │ │ John ┆ 2023-01-01 00:00:00 ┆ [1, 2, 3, 4] │ │ John ┆ 2023-01-02 00:00:00 ┆ [1, 2, 3, 4, 5, 6, 7, 8] │ │ John ┆ 2023-01-02 00:00:00 ┆ [1, 2, 3, 4, 5, 6, 7, 8] │ │ John ┆ 2023-01-02 00:00:00 ┆ [1, 2, 3, 4, 5, 6, 7, 8] │ │ John ┆ 2023-01-02 00:00:00 ┆ [1, 2, 3, 4, 5, 6, 7, 8] │ │ John ┆ 2023-01-03 00:00:00 ┆ [1, 2, 3, 4, 5, 6] │ │ John ┆ 2023-01-03 00:00:00 ┆ [1, 2, 3, 4, 5, 6] │ │ New Guy ┆ 2023-01-01 00:00:00 ┆ [1] │ └─────────┴─────────────────────┴──────────────────────────┘ </code></pre> <pre class="lang-py prettyprint-override"><code>(df.with_columns(Mask=1) .with_columns(Counter=pl.col(&quot;Mask&quot;).rolling_sum_by(window_size='2d', by=&quot;Date&quot;)) ) </code></pre> <p>But it outputs:</p> <pre><code>┌─────────┬─────────────────────┬─────────┬──────┐ │ Name ┆ Date ┆ Counter ┆ mask │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ datetime[ns] ┆ i32 ┆ i32 │ ╞═════════╪═════════════════════╪═════════╪══════╡ │ John ┆ 2023-01-01 00:00:00 ┆ 5 ┆ 1 │ │ John ┆ 2023-01-01 00:00:00 ┆ 5 ┆ 1 │ │ John ┆ 2023-01-01 00:00:00 ┆ 5 ┆ 1 │ │ John ┆ 2023-01-01 00:00:00 ┆ 5 ┆ 1 │ │ John ┆ 2023-01-02 00:00:00 ┆ 9 ┆ 1 │ │ John ┆ 2023-01-02 00:00:00 ┆ 9 ┆ 1 │ │ John ┆ 2023-01-02 00:00:00 ┆ 9 ┆ 1 │ │ John ┆ 2023-01-02 00:00:00 ┆ 9 ┆ 1 │ │ John ┆ 
2023-01-03 00:00:00 ┆ 6 ┆ 1 │ │ John ┆ 2023-01-03 00:00:00 ┆ 6 ┆ 1 │ │ New Guy ┆ 2023-01-01 00:00:00 ┆ 5 ┆ 1 │ └─────────┴─────────────────────┴─────────┴──────┘ </code></pre> <p>And it also cannot handle &quot;New Guy&quot; correctly because rolling_sum cannot do group_by=[&quot;Name&quot;, &quot;Date&quot;]</p> <pre class="lang-py prettyprint-override"><code>df.with_columns(Counter = pl.col(&quot;Date&quot;).rank(method='ordinal').over(&quot;Name&quot;, &quot;Date&quot;) ) </code></pre> <p>The above code works correctly, but can only be used for indexing within the same day (i.e. period=&quot;1d&quot;)</p> <hr /> <h3>Additional Notes</h3> <p>I also did this in Excel, and also using a brute/raw method of using a &quot;for&quot;-loop. Both worked perfectly, however they struggled with huge amounts of data.</p> <p><strong>What I read:</strong> Some references to help in answers: (Most didn't work because they have fixed rolling window instead of a dynamic window by &quot;Date&quot;)</p> <ul> <li><p><a href="https://stackoverflow.com/questions/77633868/how-to-implement-rolling-rank-in-polars-version-0-19">https://stackoverflow.com/questions/77633868/how-to-implement-rolling-rank-in-polars-version-0-19</a></p> </li> <li><p><a href="https://github.com/pola-rs/polars/issues/4808" rel="nofollow noreferrer">https://github.com/pola-rs/polars/issues/4808</a></p> </li> <li><p><a href="https://stackoverflow.com/questions/77048094/how-to-do-rolling-grouped-by-day-by-hour-in-in-polars">How to do rolling() grouped by day by hour in Polars?</a></p> </li> <li><p><a href="https://stackoverflow.com/questions/76164821/how-to-groupby-and-rolling-in-polars">How to group_by and rolling in polars?</a></p> </li> <li><p><a href="https://docs.pola.rs/api/python/stable/reference/series/api/polars.Series.rank.html" rel="nofollow noreferrer">https://docs.pola.rs/api/python/stable/reference/series/api/polars.Series.rank.html</a></p> </li> <li><p><a 
href="https://docs.pola.rs/api/python/stable/reference/dataframe/api/polars.DataFrame.rolling.html#polars.DataFrame.rolling" rel="nofollow noreferrer">https://docs.pola.rs/api/python/stable/reference/dataframe/api/polars.DataFrame.rolling.html#polars.DataFrame.rolling</a></p> </li> </ul>
<python><dataframe><group-by><python-polars><data-wrangling>
2024-04-29 02:44:53
1
301
user24758287
78,400,254
13,530,621
SVM training taking too long
<p>I have a dataset with 41 features, out of which 4 are text features. I've been given &quot;Bag of Words&quot; numpy arrays (npz) for these four features, which I combined with the other numerical features to train an SVM model. There are a total of 100000 records and 41 features, 4 of which are vectorized as mentioned.</p> <p>This model has been training for 45 minutes now :). Is there a way to decrease training time? Is there anything wrong with how I pre-process the dataset (particularly combining the npz arrays and the existing numerical features)? Any other options that I could explore?</p> <pre><code>title_feature = load_npz('train_title_bow.npz') overview_feature = load_npz('train_overview_bow.npz') tagline_feature = load_npz('train_tagline_bow.npz') production_companies_feature = load_npz('train_production_companies_bow.npz') numerical_features = df_train[df_train.columns.difference(['title', 'overview', 'tagline', 'production_companies', 'rate_category', 'average_rate', 'original_language'])] text_features = np.hstack([title_feature.toarray(), overview_feature.toarray(), tagline_feature.toarray(), production_companies_feature.toarray()]) svm_X_train = np.hstack([numerical_features, text_features]) svm_y_train = df_train['rate_category'] svm_classifier = SVC(kernel='linear') # Linear kernel is used, you can choose other kernels too svm_classifier.fit(svm_X_train, svm_y_train) </code></pre>
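Two things likely dominate the 45 minutes: `toarray()` densifies the bag-of-words blocks into huge arrays, and `SVC(kernel='linear')` scales poorly beyond a few tens of thousands of rows. Keeping everything sparse with `scipy.sparse.hstack` and using `LinearSVC` (or `SGDClassifier`) is the usual remedy. A sketch on small synthetic stand-ins for the real inputs (shapes and names are assumptions):

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack, random as sparse_random
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-ins for the real data: 37 dense numeric columns + sparse BoW block
numerical_features = rng.normal(size=(500, 37))
text_features = sparse_random(500, 300, density=0.01,
                              format='csr', random_state=0)
y = rng.integers(0, 3, size=500)

# Stay sparse end to end: no .toarray(), no np.hstack
X = hstack([csr_matrix(numerical_features), text_features]).tocsr()

clf = LinearSVC()  # linear SVM solver that handles large sparse inputs
clf.fit(X, y)
pred = clf.predict(X)
```

Scaling the dense numeric columns (e.g. `StandardScaler`) before stacking also tends to speed up and stabilise linear SVM training.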
<python><machine-learning><svm>
2024-04-29 02:39:09
1
1,090
revmatcher
78,400,238
5,302,866
How to use mysql's CURRENT_TIMESTAMP from python?
<p>I want to use MySQL's CURRENT_TIMESTAMP function from Python code, but can't figure out how to pass this in my INSERT or UPDATE queries.</p> <pre><code> try: mycursor.execute( &quot;CREATE TABLE IF NOT EXISTS test (pkID bigint unsigned NOT NULL AUTO_INCREMENT , tsTest TIMESTAMP, PRIMARY KEY (pkID));&quot;) sqlInsert = &quot;INSERT INTO test (tsTest) VALUES (%s);&quot; valsInsert = ['CURRENT_TIMESTAMP', ] mycursor.execute(sqlInsert, valsInsert) except mysql.connector.Error as err: print(&quot;MySQL error: {}&quot;.format(err)) </code></pre> <p>I've tried 'CURRENT_TIMESTAMP', 'CURRENT_TIMESTAMP()' and even mysql.connector.Timestamp; MySQL just throws: Incorrect datetime value.</p> <p>Yes, I know I can use Python's <code>now()</code> function, but this might be problematic if the code is run on a different server than the DB. So, I'd prefer to use the MySQL function for all timestamps. TIA!</p>
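Parameter placeholders are for values only: whatever is passed through `%s` gets quoted, so the literal string 'CURRENT_TIMESTAMP' reaches MySQL as an invalid datetime. SQL expressions belong in the statement text itself, e.g. `INSERT INTO test (tsTest) VALUES (CURRENT_TIMESTAMP);` with no parameter at all (or a column default of CURRENT_TIMESTAMP). The same principle, illustrated with the stdlib sqlite3 driver so the snippet is runnable anywhere; the MySQL statement is analogous:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE IF NOT EXISTS test ("
    "pkID INTEGER PRIMARY KEY AUTOINCREMENT, tsTest TIMESTAMP)"
)
# The function call is part of the SQL text, not a bound parameter
con.execute("INSERT INTO test (tsTest) VALUES (CURRENT_TIMESTAMP)")
ts = con.execute("SELECT tsTest FROM test").fetchone()[0]
```

Mixing both styles also works: SQL functions inline, real values as parameters, e.g. `INSERT INTO t (ts, name) VALUES (CURRENT_TIMESTAMP, %s)`.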
<python><mysql><timestamp><sql-timestamp>
2024-04-29 02:29:19
1
1,205
MaybeWeAreAllRobots
78,400,038
13,752,965
Can't do voxel_down_sample with Tensor PointCloud that has a user-defined property
<p>I'm trying to do <code>voxel_down_sample</code> on an annotated point cloud. The annotations are simply a <code>uint8</code> label defining a particular class of object.</p> <p>If I omit the user defined property attached to each point, then the downsample works as expected.</p> <p>Here's an example:</p> <pre class="lang-py prettyprint-override"><code>import open3d as o3d device = o3d.core.Device(&quot;CPU:0&quot;) pcd = o3d.t.geometry.PointCloud(device) pcd.point.positions = o3d.core.Tensor([[0,0,0],[1,1,1],[2,2,2]], device=device) pcd.point.labels = o3d.core.Tensor([0,1,2], o3d.core.uint8, device=device) pcd_ds = pcd.voxel_down_sample(voxel_size=3) </code></pre> <p>If you comment out the <code>pcd.point.labels = ...</code> line, the downsample works, but if I try to include the labels, I get an error:</p> <pre class="lang-py prettyprint-override"><code>RuntimeError Traceback (most recent call last) Cell In[10], line 6 3 pcd.point.positions = o3d.core.Tensor([[0,0,0],[1,1,1],[2,2,2]], device=device) 4 pcd.point.labels = o3d.core.Tensor([0,1,2], o3d.core.uint8, device=device) ----&gt; 6 pcd_ds = pcd.voxel_down_sample(voxel_size=3) RuntimeError: [Open3D Error] (void open3d::core::kernel::BinaryEW(const open3d::core::Tensor&amp;, const open3d::core::Tensor&amp;, open3d::core::Tensor&amp;, open3d::core::kernel::BinaryEWOpCode)) /root/Open3D/cpp/open3d/core/kernel/BinaryEW.cpp:49: The broadcasted input shape [1, 1] does not match the output shape [1]. </code></pre> <p>Is it possible to accomplish voxel downsampling with a PointCloud including a user defined property?</p> <p>Versions: Python v3.11 open3d v0.18</p>
<python><open3d>
2024-04-29 00:42:32
1
703
tdpu
78,399,765
2,537,486
How to style size and color of matplotlib suptitle?
<p>This should be straightforward: I want to style the matplotlib <code>suptitle</code>.</p> <p><a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.suptitle.html" rel="nofollow noreferrer">Documentation</a> says that <code>fontproperties</code> is a dict of font properties that can be supplied as a parameter.</p> <p>So:</p> <pre><code>import matplotlib.pyplot as plt import matplotlib as mpl fig, ax = plt.subplots() ax.set_xlim(0, 1) fdct = {'color': 'r', 'fontsize': 32} # Generates error fig.suptitle('A title', fontproperties=fdct) # Works fig.suptitle('A title', fontsize=32) </code></pre> <p>The first command generates</p> <p><code>TypeError: FontProperties.__init__() got an unexpected keyword argument 'color'</code></p> <p>But why? I would like to style suptitle with a dict of font properties, like I do for other instances of <code>Text</code>.</p>
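`FontProperties` only models font selection (family, style, weight, size, and similar), which is why `'color'` is rejected. But `suptitle` forwards extra keyword arguments to the underlying `Text` object, which does accept `color`, so the dict-based workflow survives by unpacking it as keyword arguments. A sketch (headless backend added so it runs without a display):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for environments without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xlim(0, 1)

# suptitle(**kwargs) passes the keys straight to the Text object, which
# understands both 'color' and 'fontsize'.
fdct = {"color": "r", "fontsize": 32}
st = fig.suptitle("A title", **fdct)

print(st.get_color(), st.get_fontsize())
```

If only font-related keys are needed, `fontproperties=FontProperties(**fdct)` also works, but `color` is not a font property and must be passed separately.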
<python><matplotlib>
2024-04-28 22:01:16
3
1,749
germ
78,399,750
3,103,957
Usage of Async Context managers and Async Iterators in Async library in Python
<p>When we use async context managers and iterators in Python, I don't see any value in them, because in a context manager, when <code>__aenter__()</code> and <code>__aexit__()</code> are called, they become the only tasks available in the event loop. Similarly, when using async iterators, <code>__anext__()</code> is going to be the only task in the event loop.</p> <p>Basically, when we have more than one task available in the event loop, asyncio gives us a CPU advantage: when one task is blocked on an I/O or network operation, another task, if it is ready, can hold the CPU.</p> <p>But in the case of async context managers and iterators, the event loop always tends to have a single task.</p> <p>Is there any advantageous use case for them?</p>
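The premise that `__aenter__()` becomes the only task does not hold once several coroutines each use their own context manager: while one task awaits inside `__aenter__`, the loop is free to advance the others. A self-contained stdlib sketch showing three workers whose managers overlap:

```python
import asyncio
from contextlib import asynccontextmanager

log = []

@asynccontextmanager
async def connection(name):
    log.append(f"{name} opening")
    await asyncio.sleep(0.05)   # simulated I/O inside __aenter__
    log.append(f"{name} open")
    try:
        yield name
    finally:
        log.append(f"{name} closed")

async def worker(name):
    # While this task awaits inside the context manager, the event loop
    # runs the other workers -- the manager never blocks the loop.
    async with connection(name) as conn:
        await asyncio.sleep(0.05)
    return conn

async def main():
    return await asyncio.gather(worker("a"), worker("b"), worker("c"))

results = asyncio.run(main())
print(results)
print(log[:3])   # all three "opening" entries appear before any "open"
```

All three `__aenter__` calls start before any of them finishes, which is exactly the concurrency the async variants exist to preserve.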
<python><iterator><python-asyncio><contextmanager>
2024-04-28 21:49:56
1
878
user3103957
78,399,741
6,735,815
Trying to understand behavior of `_field_defaults` of `namedtuple` in Python 3.9
<p>I am new to Python's <code>namedtuple</code>, particularly, <code>_field_defaults</code>.</p> <p>According to Python <a href="https://docs.python.org/3.12/library/collections.html#collections.namedtuple" rel="nofollow noreferrer">3.12.3 official doc</a>,</p> <blockquote> <p>some <code>namedtuple._field_defaults</code> Dictionary mapping field names to default values.</p> </blockquote> <p>Would someone explain the odd behavior of the following code snippet? Apparently, the one provided by Copilot generates the error code listed below.</p> <p><a href="https://i.sstatic.net/vqNBjZo7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vqNBjZo7.png" alt="Copilot's suggestion" /></a></p> <p>Text-based code snippet from above.</p> <pre class="lang-py prettyprint-override"><code>from collections import namedtuple # Define the namedtuple Person = namedtuple('Person', 'name age') # Set default values Person._field_defaults['name'] = 'John' Person._field_defaults['age'] = 30 # Create a namedtuple with default values person1 = Person() # Create a namedtuple with a different name person2 = Person(name='Alice') print(person1) # Outputs: Person(name='John', age=30) print(person2) # Outputs: Person(name='Alice', age=30) </code></pre> <p>The following error was produced.</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; person1 = Person() Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; TypeError: &lt;lambda&gt;() missing 2 required positional arguments: 'name' and 'age' </code></pre> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; Person=namedtuple('Person', ['name', 'age'], defaults=['default_name', 0]) &gt;&gt;&gt; p = Person() &gt;&gt;&gt; p Person(name='default_name', age=0) &gt;&gt;&gt; p._field_defaults {'name': 'default_name', 'age': 0} &gt;&gt;&gt; p._field_defaults['age'] = 100 &gt;&gt;&gt; p = Person() &gt;&gt;&gt; p Person(name='default_name', age=0) &gt;&gt;&gt; </code></pre> 
<p>Observed from the code snippet above, <code>p._field_defaults['age'] = 100</code> changed the dictionary, but the default construction of <code>Person</code> was not updated.</p>
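The observed behaviour is expected: `_field_defaults` is a read-only report built at class creation time, while the values actually consulted at construction live in `Person.__new__.__defaults__`. A sketch of the working dict-free approach (the `__defaults__` reassignment at the end is a CPython implementation detail; recreating the class is the cleaner route):

```python
from collections import namedtuple

# Defaults are fixed via the `defaults` argument at class creation;
# _field_defaults merely reports them, so mutating that dict changes
# nothing about how Person() behaves.
Person = namedtuple("Person", ["name", "age"], defaults=["John", 30])

p1 = Person()
p2 = Person(name="Alice")
print(p1)   # Person(name='John', age=30)
print(p2)   # Person(name='Alice', age=30)

# The values actually consulted when Person() is called:
print(Person.__new__.__defaults__)   # ('John', 30)

# Changing defaults afterwards means replacing __new__.__defaults__:
Person.__new__.__defaults__ = ("John", 100)
print(Person())   # Person(name='John', age=100)
```

This is why the Copilot snippet fails twice over: `Person = namedtuple('Person', 'name age')` defines no defaults at all, and assigning into `_field_defaults` afterwards never reaches `__new__`.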
<python><python-3.9><namedtuple><github-copilot>
2024-04-28 21:47:42
1
463
Ku Zijie
78,399,709
7,421,086
limit context token on document retrieval chains
<h1>Code</h1> <pre class="lang-py prettyprint-override"><code>import os from qdrant_client import QdrantClient from langchain.vectorstores import Qdrant from langchain.chat_models import ChatOpenAI from langchain.embeddings import OpenAIEmbeddings from langchain.chains import create_retrieval_chain from langchain.schema import HumanMessage, SystemMessage, AIMessage from langchain.chains.combine_documents import create_stuff_documents_chain from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder from langchain_core.runnables import RunnablePassthrough from langchain.callbacks import get_openai_callback class ConversationChatManagement(): def _get_existing_vector_store(self, collection_identity): qdrant_host = os.environ.get('QDRANT_HOST', 'qdrant-host-set') new_client = QdrantClient(host=qdrant_host, port=6333) embeddings=OpenAIEmbeddings() return Qdrant( client=new_client, collection_name=collection_identity, embeddings=embeddings, ) def _create_prompt_template(self): SYSTEM_TEMPLATE = &quot;&quot;&quot; Answer the user's questions based on the below context and message history. If the context doesn't contain any relevant information to the question, don't make something up and just say &quot;I don't know&quot;: &lt;context&gt; {context} &lt;/context&gt; &quot;&quot;&quot; messages = [ ( &quot;system&quot;, SYSTEM_TEMPLATE, ), MessagesPlaceholder(variable_name=&quot;messages&quot;) ] return ChatPromptTemplate.from_messages(messages) def chat_conversation(self, message_content: str, collection_identity, target_conversation, llm_client: ChatOpenAI): # Basically, a list of `HumanMessage, SystemMessage, AIMessage` objects. # Representing the conversation history fetched from database.. 
history_messages = self._fetch_conversation_history(target_conversation) prompt_template = self._create_prompt_template() document_chain = create_stuff_documents_chain(llm_client, prompt_template) def parse_retriever_input(params: Dict): return params[&quot;messages&quot;][-1].content # Add vector store into document chain to become retrieval chain. existing_vector_store = self._get_existing_vector_store(collection_identity) retriever = existing_vector_store.as_retriever() retrieval_chain = RunnablePassthrough.assign( context=parse_retriever_input | retriever, ).assign( answer=document_chain, ) docs = retriever.invoke(message_content) messages = history_messages + [HumanMessage(content=message_content)] message_stream = retrieval_chain.stream({ &quot;context&quot;: docs, &quot;messages&quot;: messages }) </code></pre> <blockquote> <p>Code was mostly reference from <a href="https://python.langchain.com/docs/use_cases/chatbots/retrieval/#retrieval-chains" rel="nofollow noreferrer">here</a></p> </blockquote> <hr /> <h1>The Issue</h1> <p>For large documents, it will return an error about the maximum context length:</p> <pre><code>openai.BadRequestError: Error code: 400 - {'error': {'message': &quot;This model's maximum context length is 8192 tokens. However, your messages resulted in 8367 tokens. Please reduce the length of the messages.&quot;, 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} </code></pre> <blockquote> <p>I am using <code>langchain===0.1.9</code>, <code>openai===1.5.0</code> and <code>langchain-community===0.0.24</code>.<br> Currently, upgrading or downgrading is not a viable option since it might result in too many breaking changes in the project, which might not be resolved in time.</p> </blockquote> <hr /> <h1>Explored Solutions</h1> <h2>1. 
<a href="https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html#langchain-chains-combine-documents-reduce-reducedocumentschain" rel="nofollow noreferrer"><code>ReduceDocumentsChain</code></a></h2> <p>The best reference I saw was in <code>ReduceDocumentsChain</code>'s example <em>(link in the header)</em>.</p> <p>However, the problem is that the example code uses a ton of deprecated, non-working code that does not work in the version I am using:</p> <ul> <li><a href="https://api.python.langchain.com/en/latest/llms/langchain_community.llms.openai.OpenAI.html#langchain_community.llms.openai.OpenAI" rel="nofollow noreferrer"><code>OpenAI</code></a> was deprecated in <code>0.0.10</code>. When I attempted the example with my versions, this causes the codebase to break.</li> <li><a href="https://api.python.langchain.com/en/latest/chains/langchain.chains.llm.LLMChain.html#langchain.chains.llm.LLMChain" rel="nofollow noreferrer"><code>LLMChain</code></a> was deprecated in <code>0.1.17</code>; Attempting to replace the input object of <code>OpenAI</code> with <code>ChatOpenAI</code> causes it to break.</li> </ul> <blockquote> <p>Admittedly, even if this example works with my version, I am confused on how I would attach the message history into this as well...</p> </blockquote> <h2>2. <a href="https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain.html#langchain-chains-conversational-retrieval-base-conversationalretrievalchain" rel="nofollow noreferrer"><code>ConversationalRetrievalChain</code></a></h2> <p>Contains a <code>max_tokens_limit</code> which I thought I could use. But it's deprecated, and requires <code>question_generator: LLMChain</code>, which is also deprecated... 
<em>(Recall <code>LLMChain</code> doesn't work with <code>ChatOpenAI</code>)</em></p> <p><code>create_history_aware_retriever</code> with <a href="https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html#langchain-chains-retrieval-create-retrieval-chain" rel="nofollow noreferrer"><code>create_retrieval_chain</code></a>, as suggested by the documentation, doesn't seem to provide any sort of functionality/parameters to limit token length.</p> <hr /> <p>Admittedly, I am new to <code>LangChain</code> API.<br>Hence, I suspect might be a design issue as well... <em>(wrong chain/tools, or wrong prompts, etc)</em>.<Br>But I am unable to clearly identify what the problem/solution is.</p>
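Independent of the LangChain version, the overflow can be handled before the chain call by trimming the retrieved documents to a token budget. A minimal sketch with hypothetical helper names (these are not LangChain APIs) and a crude whitespace token count; for real OpenAI counts, swap in `tiktoken`:

```python
def count_tokens(text: str) -> int:
    # Crude stand-in: whitespace tokens. Replace with tiktoken's
    # encoding_for_model(...).encode for accurate OpenAI token counts.
    return len(text.split())

def trim_docs_to_budget(docs, budget: int):
    """Keep retrieved docs (highest-ranked first) until the budget is spent.

    `docs` is any iterable of objects with a `page_content` string, such as
    the Documents returned by retriever.invoke(). Hypothetical helper, not
    part of LangChain itself.
    """
    kept, used = [], 0
    for doc in docs:
        cost = count_tokens(doc.page_content)
        if used + cost > budget:
            break
        kept.append(doc)
        used += cost
    return kept

class _Doc:  # stand-in for langchain's Document
    def __init__(self, text):
        self.page_content = text

docs = [_Doc("a b c"), _Doc("d e"), _Doc("f g h i")]
print([d.page_content for d in trim_docs_to_budget(docs, budget=5)])
```

The trimmed list would then be what gets passed as `"context"` to `retrieval_chain.stream(...)`, reserving the remaining budget for the message history.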
<python><langchain><py-langchain>
2024-04-28 21:31:02
2
4,050
WQYeo
78,399,639
9,191,338
How to use io.BytesIO in Python to write to an existing buffer?
<p>According to <a href="https://stackoverflow.com/questions/53485708/how-the-write-read-and-getvalue-methods-of-python-io-bytesio-work">How the write(), read() and getvalue() methods of Python io.BytesIO work?</a> , it seems <code>io.BytesIO</code> will copy initial bytes provided.</p> <p>I have a buffer, and I want <code>pickle</code> to directly dump object in that buffer, without any copy. So I tried to use <code>f = io.BytesIO(buffer)</code>. However, after <code>f.write</code>, my <code>buffer</code> is not modified.</p> <p>Here is an example:</p> <pre class="lang-py prettyprint-override"><code>a = bytearray(1024) import io b = io.BytesIO(memoryview(a)) b.write(b&quot;1234&quot;) a[:4] # a does not change </code></pre> <p>What I want, is to make <code>io.BytesIO</code> directly writes to my buffer.</p> <p>My ultimate goal is:</p> <pre class="lang-py prettyprint-override"><code>a = bytearray(1024) import io, pickle with io.BytesIO(memoryview(a)) as f: obj = [1, 2, 3] pickle.dump(obj, f) # a should hold the pickled data of obj </code></pre>
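`BytesIO(initial_bytes)` always copies its input, and subsequent writes go to BytesIO's own internal buffer, so the original `bytearray` is never touched. One way to make `pickle` write directly into a preallocated buffer is a small duck-typed writer, since `pickle.dump` only requires an object with a `.write()` method. A sketch (the `BufferWriter` class is illustrative, with no growth handling):

```python
import pickle

class BufferWriter:
    """Minimal file-like object that writes into an existing bytearray
    in place. Illustrative sketch: raises instead of growing the buffer."""
    def __init__(self, buf: bytearray):
        self.buf = buf
        self.pos = 0
    def write(self, data) -> int:
        end = self.pos + len(data)
        if end > len(self.buf):
            raise ValueError("buffer too small")
        self.buf[self.pos:end] = data   # in-place write, no copy of buf
        self.pos = end
        return len(data)

a = bytearray(1024)
obj = [1, 2, 3]
w = BufferWriter(a)
pickle.dump(obj, w)   # pickle only needs .write(), so this works

# The pickled bytes now live directly in `a`:
print(pickle.loads(bytes(a[:w.pos])))
```

If the goal is only to avoid copying BytesIO's contents back out, `BytesIO.getbuffer()` returns a writable view without a copy; the class above is for when the target buffer already exists, as in the question.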
<python>
2024-04-28 21:00:27
1
2,492
youkaichao
78,399,582
1,203,670
An incorrect year shows up when plotting using pandas and matplotlib and the YearLocator
<p>I have the following code (see below), where I try to plot the monthly orders. It works quite well, however, I am rying to set the major ticks at every year, but I only end up with a year at 1970. I am not quite sure what I am missing and why it behaves like this. What am I doing wrong?</p> <pre><code>from pandas import Timestamp test_data = pd.Series({Timestamp('2016-10-31 00:00:00'): 1052, Timestamp('2016-11-30 00:00:00'): 942, Timestamp('2016-12-31 00:00:00'): 791, Timestamp('2017-01-31 00:00:00'): 982, Timestamp('2017-02-28 00:00:00'): 647, Timestamp('2017-03-31 00:00:00'): 966, Timestamp('2017-04-30 00:00:00'): 1289, Timestamp('2017-05-31 00:00:00'): 504, Timestamp('2017-06-30 00:00:00'): 496, Timestamp('2017-07-31 00:00:00'): 776, Timestamp('2017-08-31 00:00:00'): 869, Timestamp('2017-09-30 00:00:00'): 617, Timestamp('2017-10-31 00:00:00'): 1601, Timestamp('2017-11-30 00:00:00'): 1094, Timestamp('2017-12-31 00:00:00'): 1405, Timestamp('2018-01-31 00:00:00'): 1377, Timestamp('2018-02-28 00:00:00'): 921, Timestamp('2018-03-31 00:00:00'): 1229, Timestamp('2018-04-30 00:00:00'): 1374, Timestamp('2018-05-31 00:00:00'): 1045, Timestamp('2018-06-30 00:00:00'): 1222, Timestamp('2018-07-31 00:00:00'): 1119, Timestamp('2018-08-31 00:00:00'): 1610, Timestamp('2018-09-30 00:00:00'): 1525, Timestamp('2018-10-31 00:00:00'): 2160, Timestamp('2018-11-30 00:00:00'): 2414, Timestamp('2018-12-31 00:00:00'): 2004, Timestamp('2019-01-31 00:00:00'): 1751, Timestamp('2019-02-28 00:00:00'): 1069, Timestamp('2019-03-31 00:00:00'): 1524, Timestamp('2019-04-30 00:00:00'): 1758, Timestamp('2019-05-31 00:00:00'): 1568, Timestamp('2019-06-30 00:00:00'): 1221, Timestamp('2019-07-31 00:00:00'): 1097, Timestamp('2019-08-31 00:00:00'): 1671, Timestamp('2019-09-30 00:00:00'): 1212, Timestamp('2019-10-31 00:00:00'): 2468, Timestamp('2019-11-30 00:00:00'): 2591, Timestamp('2019-12-31 00:00:00'): 2516, Timestamp('2020-01-31 00:00:00'): 1842, Timestamp('2020-02-29 00:00:00'): 1704, 
Timestamp('2020-03-31 00:00:00'): 2314, Timestamp('2020-04-30 00:00:00'): 5300, Timestamp('2020-05-31 00:00:00'): 5499, Timestamp('2020-06-30 00:00:00'): 3815, Timestamp('2020-07-31 00:00:00'): 2368, Timestamp('2020-08-31 00:00:00'): 2844, Timestamp('2020-09-30 00:00:00'): 2269, Timestamp('2020-10-31 00:00:00'): 3138, Timestamp('2020-11-30 00:00:00'): 4584, Timestamp('2020-12-31 00:00:00'): 3674, Timestamp('2021-01-31 00:00:00'): 4831, Timestamp('2021-02-28 00:00:00'): 2978, Timestamp('2021-03-31 00:00:00'): 3318, Timestamp('2021-04-30 00:00:00'): 3477, Timestamp('2021-05-31 00:00:00'): 2601, Timestamp('2021-06-30 00:00:00'): 2134, Timestamp('2021-07-31 00:00:00'): 1709, Timestamp('2021-08-31 00:00:00'): 2663, Timestamp('2021-09-30 00:00:00'): 1877, Timestamp('2021-10-31 00:00:00'): 2210, Timestamp('2021-11-30 00:00:00'): 4441, Timestamp('2021-12-31 00:00:00'): 2782, Timestamp('2022-01-31 00:00:00'): 3666, Timestamp('2022-02-28 00:00:00'): 2546, Timestamp('2022-03-31 00:00:00'): 2207, Timestamp('2022-04-30 00:00:00'): 2881, Timestamp('2022-05-31 00:00:00'): 2682, Timestamp('2022-06-30 00:00:00'): 2550, Timestamp('2022-07-31 00:00:00'): 2362, Timestamp('2022-08-31 00:00:00'): 2834, Timestamp('2022-09-30 00:00:00'): 3012, Timestamp('2022-10-31 00:00:00'): 3425, Timestamp('2022-11-30 00:00:00'): 5092, Timestamp('2022-12-31 00:00:00'): 3289, Timestamp('2023-01-31 00:00:00'): 3719, Timestamp('2023-02-28 00:00:00'): 2788, Timestamp('2023-03-31 00:00:00'): 3499, Timestamp('2023-04-30 00:00:00'): 3493, Timestamp('2023-05-31 00:00:00'): 3402, Timestamp('2023-06-30 00:00:00'): 2828, Timestamp('2023-07-31 00:00:00'): 3525, Timestamp('2023-08-31 00:00:00'): 3739, Timestamp('2023-09-30 00:00:00'): 3278, Timestamp('2023-10-31 00:00:00'): 3548, Timestamp('2023-11-30 00:00:00'): 5150, Timestamp('2023-12-31 00:00:00'): 4719, Timestamp('2024-01-31 00:00:00'): 4679, Timestamp('2024-02-29 00:00:00'): 3222, Timestamp('2024-03-31 00:00:00'): 3374, Timestamp('2024-04-30 00:00:00'): 
1421}, name=&quot;Monthly orders&quot;) plot = test_data.plot(kind=&quot;bar&quot;, title=&quot;Monthly orders&quot;) plot.xaxis.set_major_locator(mdates.YearLocator()) # Locate years plot.xaxis.set_major_formatter(mdates.DateFormatter('%Y')) # Year format </code></pre> <p>Result from the code:</p> <p><a href="https://i.sstatic.net/tmvOlSyf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tmvOlSyf.png" alt="enter image description here" /></a></p>
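The 1970 label comes from the bar plot being categorical: pandas places bars at integer positions 0..N-1, and `mdates` interprets those small integers as days since the epoch. One way around it is to compute year tick positions from the index directly instead of using date locators. A sketch with a short stand-in series (the same lines apply to the full data):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Bars sit at positions 0..N-1, so date locators see tiny ordinals (all
# "1970"). Derive the tick positions from the DatetimeIndex instead.
idx = pd.date_range("2016-10-01", periods=30, freq="MS")
s = pd.Series(range(len(idx)), index=idx, name="Monthly orders")

ax = s.plot(kind="bar", title="Monthly orders")
years = s.index.year
first_of_year = [i for i in range(len(s)) if i == 0 or years[i] != years[i - 1]]
ax.set_xticks(first_of_year)
ax.set_xticklabels(years[first_of_year], rotation=0)
print(first_of_year)
```

Each year gets one tick at its first bar; everything else about the pandas bar plot stays as-is.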
<python><pandas><matplotlib>
2024-04-28 20:37:00
1
3,099
Snowflake
78,399,495
2,843,348
SqlAlchemy 2.0: Issues using `mapped_column()` with `Computed()` columns in ORM Table Definitions (not working on MappedAsDataclass)
<blockquote> <p><em>(Note: In line with Stack Overflow's encouragement to enrich the community by answering one's own questions, especially after investing significant effort in troubleshooting, I am providing an answer below. I eagerly welcome further comments, additional insights, or alternative approaches! I would also like to acknowledge the valuable assistance from F. Caselli on a SQLAlchemy forum, which helped me to narrow down the search for a solution.)</em></p> </blockquote> <p>Hello, I'm trying to create a <strong>COMPUTED</strong> column directly in the <strong>ORM definition</strong> using <strong><code>mapped_column()</code></strong> in SqlAlchemy 2.0, but I haven't been successful (I've checked Google and tried to find the answer in the documentation).</p> <p>I know that for the core model, it's enough to declare a table in the following way (<strong>Toy example</strong> where <code>&quot;area&quot;</code> is a COMPUTED column):</p> <pre class="lang-py prettyprint-override"><code>square = Table( &quot;square&quot;, metadata_obj, Column(&quot;id&quot;, Integer, primary_key=True), Column(&quot;side&quot;, Integer), Column(&quot;area&quot;, Integer, Computed(&quot;side * side&quot;)), ) </code></pre> <p>This is equivalent to the following <strong>SQL DDL</strong> statement (Postgresql &gt;12):</p> <pre class="lang-sql prettyprint-override"><code>CREATE TABLE square ( id SERIAL NOT NULL, side INTEGER, area INTEGER GENERATED ALWAYS AS (side * side) STORED, PRIMARY KEY (id) ) </code></pre> <p>However, when trying to do the equivalent in <strong>ORM</strong> using <code>Mapped[]</code> and <code>mapped_column()</code>, I can't seem to find the solution.</p> <p>I imagined something like the following might work, but it didn’t:</p> <pre class="lang-py prettyprint-override"><code>class Square(Base): ... 
area: Mapped[int] = mapped_column(Computed(&quot;side*side&quot;)) </code></pre> <p>Before that, I also tried other configurations, such as passing <code>Computed()</code> as <code>server_default=</code>, and using <code>FetchedValue</code>, but in all cases, I encountered errors.</p> <p>I also came to <code>@hybrid_property</code>, but I understand that it's not the same (since I need the computed column to be part of the table's DDL definition and <code>@hybrid_property</code> seems to leave the computation to be done during selects but not as DDL).</p> <p><strong>I would appreciate if someone could guide me on the correct use of <code>mapped_column()</code> with <code>Computed()</code>!</strong></p>
<python><sqlalchemy><orm><ddl>
2024-04-28 19:59:31
1
374
A.Sommerh
78,399,081
1,867,328
Insert a row in pandas dataframe
<p>I have the below pandas dataframe:</p> <pre><code>import pandas as pd dat = pd.DataFrame({'A': [1,2,3,5], 'B': [7,6,7,8]}) </code></pre> <p>Now I have a list:</p> <pre><code>List = [99, 88] </code></pre> <p>I want to insert this list as row number 2.</p> <p>Is there any method available to achieve this? I found there is a method called <code>insert</code>, however it only inserts a column at a designated place.</p>
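pandas has no positional row-insert counterpart to `DataFrame.insert`; the usual approach is to slice around the target position and concatenate. A sketch:

```python
import pandas as pd

dat = pd.DataFrame({"A": [1, 2, 3, 5], "B": [7, 6, 7, 8]})
row = [99, 88]

# Split the frame at the target position, wrap the list as a one-row
# frame with matching columns, and stitch the three pieces back together.
pos = 2
new = pd.concat(
    [dat.iloc[:pos], pd.DataFrame([row], columns=dat.columns), dat.iloc[pos:]],
    ignore_index=True,
)
print(new)
```

`ignore_index=True` renumbers the result 0..N; drop it if the original index labels should be preserved on the surrounding rows.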
<python><pandas>
2024-04-28 17:35:30
1
3,832
Bogaso
78,398,988
18,346,591
Multi-processing code not working in while loop
<p>Happy Sunday.</p> <p>I have this code that I want to run using the multi-processing module. But it doesn't just work for some reason.</p> <pre><code>with ProcessPoolExecutor() as executor: while True: if LOCAL_RUN: print(&quot;ALERT: Doing a local run of the automation with limited capabilities.&quot;) list_of_clients = db_manager.get_clients() random.shuffle(list_of_clients) list_of_not_attempted_clients_domains = db_manager.tags() group_of_clients_handlers = {} # no matches if not list_of_not_attempted_clients_domains: sleep = 60 * 10 pretty_print(f'No matches found. Sleeping for {sleep}s') time.sleep(sleep) continue for client in list_of_clients: client_id = client[0] client_name = client[1] group_of_clients_handlers[client_id] = [ClientsHandler(db_manager), client_name] # MULTI-PROCESSING CODE try: print('running function...') executor.map( partial(run, group_of_clients_handlers=group_of_clients_handlers), list_of_not_attempted_clients_domains ) except Exception as err: print(err) </code></pre> <p>Despite all my attempts to debug this, I have no idea why this doesn't work, although I feel it relates to processes taking time to start up or scheduling task etc but I am not certain.</p> <p>The while loop just keeps running and I see all the print statements like <code>running function...</code> but the run function never executes. The run function is a very large function with nested large functions.</p> <p>The except block doesn't print out any error either. Would love to hear what you think...</p>
<python><multithreading><oop><parallel-processing><multiprocessing>
2024-04-28 17:04:57
1
662
Alexander Obidiegwu
78,398,896
597,858
'TCP localhost:57318->localhost:34033 (CLOSE_WAIT)' and 'Too many open files'
<p>I have a python script which uses selenium for web-scraping on an ubuntu server. I am creating fresh driver object in a while loop in every iteration and quitting it after the use. The script runs fine for sometime. then it stops creating new driver, with an exception 'Too many open files'.</p> <p>Here is the script:</p> <pre><code>from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.chrome.service import Service from bs4 import BeautifulSoup import requests import os import logging from PyPDF2 import PdfReader from webdriver_manager.chrome import ChromeDriverManager from openai import OpenAI import undetected_chromedriver as uc import time import shutil import glob import logging import csv import traceback logging.basicConfig(format='%(asctime)s - %(name)s - %(process)d - %(threadName)s - %(levelname)s - %(message)s', level=logging.WARNING, handlers=[logging.FileHandler(&quot;/root/nse-announcements/logs-warnings.log&quot;), logging.StreamHandler()]) url = 'https://www.nseindia.com/companies-listing/corporate-filings-announcements' bot_token = '7069953058:AAGsJ-hihPjME' bot_chatID = os.getenv('TELEGRAM_BOT_CHAT_ID') openai_api_key = os.getenv('OPENAI_API_KEY') openAI_view = '' first_page_text = '' path_to_pdf = '' pdf_link = '' subject = '' pdf_link_file_path = '/root/nse-announcements/announcements_pdf_links.txt' temp_pdf = '/root/nse-announcements/temp.pdf' user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36' download_directory = &quot;/root/nse-announcements/pdfs-of-announcements&quot; csv_file_path = '/root/nse-announcements/nse_announcements.csv' csv_headers = ['Date', 'Symbol', 'Company Name', 'Subject', 'Details', 'OpenAI View', 'Broadcast Date/Time', 'PDF Link'] with requests.Session() as session: response = 
session.get(f'https://api.telegram.org/bot{bot_token}/sendMessage', params={'chat_id': bot_chatID, 'text': &quot;Hello Unix User! I'm now active and ready to fetch NSE announcements for you.&quot;}) def clear_tmp_directory(): tmp_path = '/tmp' tmp_dirs = glob.glob(f&quot;{tmp_path}/tmp*&quot; ) for directory in tmp_dirs: try: shutil.rmtree(directory) except Exception as e: pass def clean_scoped_directories(): tmp_path = '/tmp' scoped_dirs = glob.glob(f&quot;{tmp_path}/scoped_dir*&quot;) for directory in scoped_dirs: try: shutil.rmtree(directory) except Exception as e: pass # logging.debug(f&quot;Failed to remove {directory}: {str(e)}&quot;) chrome_items = glob.glob(f&quot;{tmp_path}/.com.google.Chrome.*&quot;) for item in chrome_items: try: if os.path.isdir(item): shutil.rmtree(item) else: os.remove(item) except Exception as e: # Check if the error is because the item is not a directory if &quot;Errno 20&quot; in str(e): try: os.remove(item) # Attempt to delete it as a file except Exception as e: pass # logging.debug(f&quot;Failed to remove {item} as a file: {e}&quot;) # else: # logging.debug(f&quot;Failed to remove {item}: {e}&quot;) def clean_text(text): return text.replace('&amp;', '').replace('&lt;', '').replace('&gt;', '').replace('&quot;', '').replace(&quot;'&quot;, &quot;&quot;) def send_telegram_message(data): message = f&quot;&lt;b&gt;Symbol:&lt;/b&gt; {data['Symbol']}\n&quot; \ f&quot;&lt;b&gt;{data['pref_alert']}&lt;/b&gt;\n&quot; \ f&quot;&lt;b&gt;Company Name:&lt;/b&gt; {data['Company Name']}\n&quot; \ f&quot;&lt;b&gt;Broadcast Date/Time:&lt;/b&gt; {data['Broadcast Date/Time']}\n&quot; \ f&quot;&lt;b&gt;Subject:&lt;/b&gt; {data['Subject']}\n&quot; \ f&quot;&lt;b&gt;Details:&lt;/b&gt; {data['Details']}\n&quot; \ f&quot;&lt;b&gt;OpenAI:&lt;/b&gt; {data['OpenAI']}\n&quot; \ f&quot;&lt;b&gt;PDF Link:&lt;/b&gt; {data['PDF Link']}&quot; url = f'https://api.telegram.org/bot{bot_token}/sendMessage' params = { 'chat_id': bot_chatID, 'parse_mode': 'HTML', 
'text': message } max_attempts = 30 # Maximum number of attempts to send the message attempt = 0 connect_timeout, read_timeout = 10, 30 # Timeouts delay_seconds = 1 # Initial delay for retries while attempt &lt; max_attempts: try: with requests.Session() as session: response = session.get(url, params=params, timeout=(connect_timeout, read_timeout)) if response.status_code == 200: return True elif response.status_code in {301, 302, 307, 308}: # Handle redirection, if needed url = response.headers['Location'] elif 400 &lt;= response.status_code &lt; 500: logging.warning(f'Client error: {response.status_code} - {response.text}') break # Stop retrying for client errors elif 500 &lt;= response.status_code &lt; 600: logging.warning(f'Server error: {response.status_code} - {response.text}') # Consider retrying or handling server-side errors else: logging.warning(f'Unhandled status code: {response.status_code}') except Exception as e: logging.warning(f'Exception in send_telegram_message: {str(e)}') time.sleep(delay_seconds) delay_seconds *= 2 # Exponential backoff finally: attempt += 1 # Ensure attempt is incremented return False def wait_for_download(path_to_pdf): timeout = 5 # Maximum time to wait for the download to appear (in seconds) start_time = time.time() while True: elapsed_time = time.time() - start_time if os.path.exists(path_to_pdf) and os.path.getsize(path_to_pdf) &gt; 0: return True if elapsed_time &gt; timeout: return False time.sleep(1) def process_pdf(pdf_link): global path_to_pdf, openAI_view driver = None try: chrome_options = uc.ChromeOptions() chrome_options.add_argument('--headless') chrome_options.add_argument(&quot;--no-sandbox&quot;) chrome_options.add_argument(&quot;--disable-dev-shm-usage&quot;) chrome_options.add_argument('log-level=3') chrome_options.add_argument(&quot;--disable-gpu&quot;) chrome_options.add_argument(&quot;--disable-webgl&quot;) chrome_options.add_experimental_option('prefs', { &quot;download.default_directory&quot;: 
download_directory, &quot;download.prompt_for_download&quot;: False, &quot;download.directory_upgrade&quot;: True, &quot;plugins.always_open_pdf_externally&quot;: True }) driver = uc.Chrome(service=Service(ChromeDriverManager().install()), options=chrome_options) time.sleep(2) driver.execute_cdp_cmd('Network.setUserAgentOverride', {'userAgent': user_agent}) pdf_file_name = pdf_link.split('/')[-1] path_to_pdf = os.path.join(download_directory, pdf_file_name) for attempt in range(10): # Retry attempts driver.get(pdf_link) if wait_for_download(path_to_pdf): with open(path_to_pdf, 'rb') as file: reader = PdfReader(file) first_page = reader.pages[0] first_page_text = first_page.extract_text() if first_page_text: openAI_view = 'first_page_text' return first_page_text else: logging.warning(f'PDF {pdf_link} first page contains image only content or no text.') openAI_view = 'image PDF' os.remove(path_to_pdf) # Move deletion outside of the try-except return False time.sleep(5) # Wait 5 seconds before the next retry logging.warning(f&quot;Failed to fetch PDF {pdf_link} after multiple attempts.&quot;) openAI_view = 'failed to fetch PDF' return False except Exception as e: logging.warning(f'Exception in process_pdf: {str(e)}') openAI_view = 'error in processing PDF' return False finally: if driver: driver.quit() def summarize_text(text): client = OpenAI(api_key=openai_api_key) if client.api_key is None: return &quot;API key not found in environment variables&quot; system_msg = 'You know what is most relevant and important for the investors of a company.' user_msg = f'Summarize important information (ignore addresses) in company announcement text: {text}' # system_msg = 'Identify and summarize key details from company announcements related specifically to fund raising through equity shares, warrants, or other securities. Focus on details such as the type of offering, amount being raised, type of investors, and any relevant approvals or voting outcomes. 
Ignore routine operational details and addresses.'
    # user_msg = f'Analyze this announcement text and summarize any sections related to fund raising through equity shares or warrants: {text}'
    attempts = 0
    max_attempts = 3  # Set the number of attempts to 3
    while attempts &lt; max_attempts:
        attempts += 1
        # Create a chat completion using the client
        chat_completion = client.chat.completions.create(
            messages=[
                {&quot;role&quot;: &quot;system&quot;, &quot;content&quot;: system_msg},
                {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: user_msg}
            ],
            model=&quot;gpt-3.5-turbo&quot;,
            temperature=0.3,
            max_tokens=150,
        )
        reason = chat_completion.choices[0].finish_reason
        # Handling for the 'stop' finish reason
        if reason == 'stop':
            return chat_completion.choices[0].message.content
        # Handling for the 'length' finish reason (output truncated due to max tokens)
        elif reason == 'length':
            return f&quot;Output truncated: {chat_completion.choices[0].message.content}&quot;
        # Handling for the 'function_call' finish reason (model initiated a function call)
        elif reason == 'function_call':
            return &quot;Function call initiated by the model.&quot;
        # Handling for the 'content_filter' finish reason (content omitted by filters)
        elif reason == 'content_filter':
            return &quot;Content filtered due to safety protocols.&quot;
        # Handling for the 'null' finish reason (response still in progress or incomplete)
        elif reason == 'null':
            return &quot;Response still in progress or incomplete.&quot;
        time.sleep(2)
    return &quot;Failed to summarize text after maximum attempts.&quot;


def write_to_csv(data):
    with open(csv_file_path, 'a', newline='') as csvfile:  # 'a' mode to append to the file
        writer = csv.DictWriter(csvfile, fieldnames=csv_headers)
        writer.writerow(data)


existing_links = set()

# Load existing links from file if it exists
try:
    with open(pdf_link_file_path, 'r') as file:
        existing_links = set(line.strip() for line in file)
except FileNotFoundError:
    pass

try:
    with open(csv_file_path, 'x', newline='') as csvfile:  # 'x' mode to create and fail if exists
        writer = csv.DictWriter(csvfile, fieldnames=csv_headers)
        writer.writeheader()
except FileExistsError:
    pass

avoidable_keywords = ['74(5)', 'GST authority', 'delisting']
avoidable_subjects_details = ['Analysts/Institutional', 'FDA Inspection']

while True:
    driver = None
    try:
        clean_scoped_directories()
        clear_tmp_directory()
        chrome_options = uc.ChromeOptions()
        chrome_options.add_argument('--headless')
        chrome_options.add_argument(&quot;--no-sandbox&quot;)
        chrome_options.add_argument(&quot;--disable-dev-shm-usage&quot;)
        chrome_options.add_argument('log-level=3')
        chrome_options.add_argument(&quot;--disable-gpu&quot;)
        chrome_options.add_argument(&quot;--disable-webgl&quot;)
        chrome_options.add_experimental_option('prefs', {
            &quot;download.default_directory&quot;: download_directory,
            &quot;download.prompt_for_download&quot;: False,
            &quot;download.directory_upgrade&quot;: True,
            &quot;plugins.always_open_pdf_externally&quot;: True
        })
        driver = uc.Chrome(service=Service(ChromeDriverManager().install()), options=chrome_options)
        time.sleep(2)
        driver.execute_cdp_cmd('Network.setUserAgentOverride', {'userAgent': user_agent})
        wait = WebDriverWait(driver, 20)
        driver.get(url)
        time.sleep(5)
        wait.until(EC.presence_of_element_located(
            (By.XPATH, '//table[@id=&quot;CFanncEquityTable&quot;]/tbody/tr[1]/td[6]')))
        html_source = driver.page_source
        soup = BeautifulSoup(html_source, 'html.parser')
        table = soup.find('table', {'id': 'CFanncEquityTable'})
        driver.quit()
        if not table or table.find_all('a', href=True) == []:
            continue
        for row in table.find_all('tr'):
            openAI_view = ''
            first_page_text = ''
            path_to_pdf = ''
            pdf_link = ''
            subject = ''
            pref_alert = ''
            cells = row.find_all('td')
            if len(cells) &gt; 4:
                pdf_link = cells[4].find('a', href=True)['href'] if cells[4].find('a', href=True) and cells[4].find('a', href=True)['href'].endswith('.pdf') else None
                subject = clean_text(cells[2].text.strip())
                details = clean_text(cells[3].text.strip())
                company_name = clean_text(cells[1].text.strip())
                broadcast_date_time = cells[5].text.strip().split(&quot;Exchange&quot;)[0].strip()
                symbol = cells[0].text.strip()
                original_symbol = symbol
                symbol = clean_text(symbol)
                if symbol != original_symbol:
                    symbol = symbol + &quot; (Edited)&quot;
                if pdf_link and pdf_link not in existing_links:
                    found_subject = None
                    for keyword in avoidable_subjects_details:
                        if keyword.lower() in subject.lower() or keyword.lower() in details.lower():
                            found_subject = keyword
                            break
                    if found_subject is not None:
                        logging.warning(f'************** SKIPPING PDF SINCE IT CONTAINS THE AVOIDABLE SUBJECT/DETAIL: {found_subject} ***************')
                        existing_links.add(pdf_link)
                        with open(pdf_link_file_path, 'a') as file:
                            file.write(pdf_link + '\n')
                        data = {
                            'Date': time.strftime(&quot;%Y-%m-%d&quot;),
                            'Symbol': symbol,
                            'Company Name': company_name,
                            'Subject': subject,
                            'Details': details,
                            'OpenAI View': 'SKIP',
                            'Broadcast Date/Time': broadcast_date_time,
                            'PDF Link': pdf_link
                        }
                        write_to_csv(data)
                        continue
                    first_page_text = process_pdf(pdf_link)
                    if first_page_text:
                        os.remove(path_to_pdf)
                        found_keyword = None
                        preferential_issue_keywords = ['preferential issue', 'equity shares', 'convertible warrants', 'raising funds']
                        fund_raising_found = 'preferential' in first_page_text.lower() and any(keyword in first_page_text.lower() for keyword in preferential_issue_keywords)
                        if fund_raising_found:
                            pref_alert = '*** PREFERENTIAL ISSUE ALERT ***'
                        else:
                            for keyword in avoidable_keywords:
                                if keyword.lower() in first_page_text.lower():
                                    found_keyword = keyword
                                    break
                        if found_keyword is not None:
                            logging.warning(f'************** SKIPPING PDF SINCE IT CONTAINS THE AVOIDABLE KEYWORD: {found_keyword} ***************')
                            existing_links.add(pdf_link)
                            with open(pdf_link_file_path, 'a') as file:
                                file.write(pdf_link + '\n')
                            data = {
                                'Date': time.strftime(&quot;%Y-%m-%d&quot;),
                                'Symbol': symbol,
                                'Company Name': company_name,
                                'Subject': subject,
                                'Details': details,
                                'OpenAI View': 'SKIP',
                                'Broadcast Date/Time': broadcast_date_time,
                                'PDF Link': pdf_link
                            }
                            write_to_csv(data)
                            continue
                        openAI_view = summarize_text(first_page_text)
                        data = {
                            'Symbol': symbol,
                            'Company Name': company_name,
                            'Subject': subject,
                            'Details': details,
                            'OpenAI': openAI_view,
                            'Broadcast Date/Time': broadcast_date_time,
                            'PDF Link': pdf_link
                        }
                        should_link_be_saved = send_telegram_message(data)
                        if should_link_be_saved:
                            existing_links.add(pdf_link)
                            with open(pdf_link_file_path, 'a') as file:
                                file.write(pdf_link + '\n')
                            data = {
                                'Date': time.strftime(&quot;%Y-%m-%d&quot;),
                                'Symbol': symbol,
                                'Company Name': company_name,
                                'Subject': subject,
                                'Details': details,
                                'OpenAI View': openAI_view,
                                'Broadcast Date/Time': broadcast_date_time,
                                'PDF Link': pdf_link
                            }
                            write_to_csv(data)
    except Exception as e:
        tb = traceback.format_exc()
        logging.warning(f'Exception in TRY BLOCK {str(e)}\nTraceback: {tb}')
        if 'too many open files' in str(e).lower():
            break
    finally:
        if driver:
            driver.quit()
</code></pre>
<p>I ran lsof and this is the output:</p>
<pre><code>root@nse-announcements:~# lsof -c python
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python 120968 root cwd DIR 253,1 4096 787286 /root/nse-announcements
python 120968 root rtd DIR 253,1 4096 2 /
python 120968 root txt REG 253,1 6752256 1967 /usr/bin/python3.11
python 120968 root mem REG 253,1 5026584 790814 /root/nse-announcements/venv/lib/python3.11/site-packages/pydantic_core/_pydantic_core.cpython-311-x86_64-linux-gnu.so
python 120968 root mem REG 253,1 141872 2769 /usr/lib/x86_64-linux-gnu/libgcc_s.so.1
python 120968 root mem REG 253,1 311112 6376 /usr/lib/python3.11/lib-dynload/_decimal.cpython-311-x86_64-linux-gnu.so
python 120968 root mem REG 253,1 264392 790605 /root/nse-announcements/venv/lib/python3.11/site-packages/charset_normalizer/md__mypyc.cpython-311-x86_64-linux-gnu.so
python 120968 root mem REG 253,1 74496 6361 /usr/lib/python3.11/lib-dynload/_asyncio.cpython-311-x86_64-linux-gnu.so
python 120968 root mem REG 253,1 143992 6371 /usr/lib/python3.11/lib-dynload/_ctypes.cpython-311-x86_64-linux-gnu.so
python 120968 root mem REG 253,1 5235544 3132 /usr/lib/x86_64-linux-gnu/libcrypto.so.3
python 120968 root mem REG 253,1 43560 2790 /usr/lib/x86_64-linux-gnu/libffi.so.8.1.2
python 120968 root mem REG 253,1 24336 6381 /usr/lib/python3.11/lib-dynload/_multiprocessing.cpython-311-x86_64-linux-gnu.so
python 120968 root mem REG 253,1 54488 6380 /usr/lib/python3.11/lib-dynload/_multibytecodec.cpython-311-x86_64-linux-gnu.so
python 120968 root mem REG 253,1 202904 3071 /usr/lib/x86_64-linux-gnu/liblzma.so.5.4.1
python 120968 root mem REG 253,1 14712 74280 /usr/lib/x86_64-linux-gnu/librt.so.1
python 120968 root mem REG 253,1 671960 3133 /usr/lib/x86_64-linux-gnu/libssl.so.3
python 120968 root mem REG 253,1 3052896 4166 /usr/lib/locale/locale-archive
python 120968 root mem REG 253,1 74848 3879 /usr/lib/x86_64-linux-gnu/libbz2.so.1.0.4
python 120968 root mem REG 253,1 2105184 71658 /usr/lib/x86_64-linux-gnu/libc.so.6
python 120968 root mem REG 253,1 14480 73974 /usr/lib/x86_64-linux-gnu/libpthread.so.0
python 120968 root mem REG 253,1 45080 6379 /usr/lib/python3.11/lib-dynload/_lzma.cpython-311-x86_64-linux-gnu.so
python 120968 root mem REG 253,1 32088 6362 /usr/lib/python3.11/lib-dynload/_bz2.cpython-311-x86_64-linux-gnu.so
python 120968 root mem REG 253,1 216944 6360 /usr/lib/python3.11/lib-dynload/_ssl.cpython-311-x86_64-linux-gnu.so
python 120968 root mem REG 253,1 63696 6359 /usr/lib/python3.11/lib-dynload/_hashlib.cpython-311-x86_64-linux-gnu.so
python 120968 root mem REG 253,1 14504 6369 /usr/lib/python3.11/lib-dynload/_contextvars.cpython-311-x86_64-linux-gnu.so
python 120968 root mem REG 253,1 35032 1885 /usr/lib/x86_64-linux-gnu/libuuid.so.1.3.0
python 120968 root DEL REG 0,26 1032 /dev/shm/sem.LNVy9P
python 120968 root mem REG 253,1 16064 790602 /root/nse-announcements/venv/lib/python3.11/site-packages/charset_normalizer/md.cpython-311-x86_64-linux-gnu.so
python 120968 root mem REG 253,1 14688 6392 /usr/lib/python3.11/lib-dynload/_uuid.cpython-311-x86_64-linux-gnu.so
python 120968 root mem REG 253,1 49128 6377 /usr/lib/python3.11/lib-dynload/_json.cpython-311-x86_64-linux-gnu.so
python 120968 root mem REG 253,1 23664 6383 /usr/lib/python3.11/lib-dynload/_queue.cpython-311-x86_64-linux-gnu.so
python 120968 root mem REG 253,1 14472 6391 /usr/lib/python3.11/lib-dynload/_typing.cpython-311-x86_64-linux-gnu.so
python 120968 root mem REG 253,1 357584 1908 /usr/lib/locale/C.utf8/LC_CTYPE
python 120968 root mem REG 253,1 174336 5421 /usr/lib/x86_64-linux-gnu/libexpat.so.1.8.10
python 120968 root mem REG 253,1 121200 3887 /usr/lib/x86_64-linux-gnu/libz.so.1.2.13
python 120968 root mem REG 253,1 957008 71661 /usr/lib/x86_64-linux-gnu/libm.so.6
python 120968 root mem REG 253,1 27028 75433 /usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache
python 120968 root mem REG 253,1 232568 71655 /usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2
python 120968 root 0r CHR 1,3 0t0 5 /dev/null
python 120968 root 1u unix 0xffff9ab807f43b80 0t0 724111 type=STREAM (CONNECTED)
python 120968 root 2u unix 0xffff9ab807f43b80 0t0 724111 type=STREAM (CONNECTED)
python 120968 root 3w REG 253,1 882 793138 /root/nse-announcements/logs-warnings.log
python 120968 root 4u IPv4 724137 0t0 TCP nse-announcements:35066-&gt;149.154.167.220:https (ESTABLISHED)
python 120968 root 5u IPv6 724255 0t0 TCP localhost:46688-&gt;localhost:60247 (CLOSE_WAIT)
python 120968 root 6u IPv6 725494 0t0 TCP localhost:52052-&gt;localhost:38957 (CLOSE_WAIT)
python 120968 root 13u IPv6 730702 0t0 TCP localhost:33460-&gt;localhost:60785 (ESTABLISHED)
python 120968 root 14w FIFO 0,14 0t0 728361 pipe
python 120968 root 19r FIFO 0,14 0t0 730626 pipe
root@nse-announcements:~#
</code></pre>
<p>Python is leaving TCP connections in the CLOSE_WAIT state, and I can't tell where in the script these connections are being created. What can be done here so that the script runs indefinitely, as it is supposed to?</p>
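<p>For context on what CLOSE_WAIT means here: a socket sits in CLOSE_WAIT when the remote end has closed the connection but this process has not yet called <code>close()</code> on its own descriptor. A minimal, generic sketch (illustrative only, not taken from the script above) showing how a context manager guarantees the descriptor is released:</p>

```python
# Generic sketch (assumed names, not from the script above): a socket wrapped
# in contextlib.closing is always closed when the block exits, so it cannot be
# left dangling in CLOSE_WAIT after the peer hangs up.
import socket
from contextlib import closing

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
with closing(s):
    assert s.fileno() >= 0  # descriptor is live inside the block

# CPython reports -1 for a closed socket's file descriptor
assert s.fileno() == -1
```

<p>The same idea applies to HTTP response objects and to making sure <code>driver.quit()</code> always runs in a <code>finally:</code> block, as the script already attempts.</p>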
<python><selenium-webdriver><tcp>
2024-04-28 16:33:02
1
10,020
KawaiKx
78,398,739
2,800,754
How to call a Perl script with arguments containing HTML tags from within a Python script using subprocess/shell?
<p>Within a <strong>Python script</strong> I have a <strong>function that looks like this</strong>:</p> <pre><code>def NotifierFunc(Mail, Ticket_Num, Open_DtTime):
    cmd = &quot;/home/path/MyPerlScript.pl --channel 'MyChannel' --userid 'itsme' --message 'Hello &lt;at&gt;&quot;+Mail+&quot;&lt;at&gt; : The ticket &lt;a href='`echo 'http://snview/'&quot;+Ticket_Num+&quot;&gt;`echo &quot;+Ticket_Num+&quot;`&lt;/a&gt; is not closed for more than 72 hours : Open since &quot;+Open_DtTime+&quot; IST. Please close asap.' &quot;
    print(cmd)
    subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
</code></pre> <p><strong>What it prints:</strong></p> <pre><code>/home/path/MyPerlScript.pl --channel 'MyChannel' --userid 'itsme' --message 'Hello &lt;at&gt;Abc@xyz.com&lt;at&gt; : The ticket &lt;a href='`echo 'http://snview/'INC23548&gt;`echo INC23548`&lt;/a&gt; is not closed for more than 72 hours : Open since 4-APR-2024 12:30:40 IST. Please close asap.'
</code></pre> <p>This command does not execute successfully in the shell, primarily because the message (everything passed after <code>--message</code>) is within single quotes — or at least that is what I think. When I put it in double quotes, the Python script throws a syntax error. Putting the whole <code>cmd</code> value in single quotes or a doc-string quote (<code>'''</code>), with the <code>--message</code> content in double quotes, results in a syntax error as well.</p> <p><strong>If I run the command below directly on the Linux command line (with the <code>--message</code> content within double quotes), it works fine:</strong></p> <pre><code>&gt; Mail=&quot;Abc@xyz.com&quot;
&gt; Ticket_Num=&quot;INC23548&quot;
&gt; Open_DtTime=&quot;14-APR-2024 12:30:40&quot;
&gt; /home/path/MyPerlScript.pl --channel 'MyChannel' --userid 'itsme' --message &quot;Hello &lt;at&gt;$Mail&lt;at&gt; : The ticket &lt;a href='`echo 'http://snview/'$Ticket_Num &gt;`echo $Ticket_Num`&lt;/a&gt; is not closed for more than 72 hours : Open since $Open_DtTime IST. Please close asap.&quot;
</code></pre> <p>Now my question is: how can I pass this whole <code>cmd</code> value to <code>subprocess</code> so that the content of <code>--message</code> is rendered correctly — or how can I put the <code>--message</code> content within double quotes and make it work?</p> <p>Please note that if I pass <code>cmd</code> as below, it works fine, but it does not have the hyperlink for the ticket number, and I need the hyperlink:</p> <pre><code>cmd = &quot;/home/path/MyPerlScript.pl --channel 'MyChannel' --userid 'itsme' --message 'Hello &lt;at&gt;&quot;+Mail+&quot;&lt;at&gt; : The ticket &quot;+Ticket_Num+&quot; is not closed for more than 72 hours : Open since &quot;+Open_DtTime+&quot; IST. Please close asap.' &quot;
</code></pre> <p>I have also tried putting the variables within <code>{}</code> and various quote combinations. Nothing works so far. I am using Python 3.10.</p>
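<p>For reference, the quoting problem usually disappears entirely if the command is handed to <code>subprocess</code> as an argument list rather than a single shell string — then no quote level around <code>--message</code> is needed at all. A hypothetical sketch (the backtick <code>echo</code> trick is dropped here, since without <code>shell=True</code> there is no shell to expand it):</p>

```python
# Hypothetical sketch: each argument is its own list element, so spaces,
# quotes, and HTML tags inside the message need no shell escaping at all.
import shlex

Mail = 'Abc@xyz.com'
Ticket_Num = 'INC23548'
Open_DtTime = '14-APR-2024 12:30:40'

argv = [
    '/home/path/MyPerlScript.pl',
    '--channel', 'MyChannel',
    '--userid', 'itsme',
    '--message',
    f"Hello &lt;at&gt;{Mail}&lt;at&gt; : The ticket "
    f"&lt;a href='http://snview/{Ticket_Num}'&gt;{Ticket_Num}&lt;/a&gt; "
    f"is not closed for more than 72 hours : Open since {Open_DtTime} IST.",
]
print(shlex.join(argv))  # the equivalent, correctly quoted shell command line
# subprocess.Popen(argv, stdout=subprocess.PIPE)  # no shell=True needed
```

<p>With a list, <code>subprocess</code> passes each element straight to the program's argv, so the Perl script receives the message verbatim.</p>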
<python><html><subprocess>
2024-04-28 15:40:30
2
1,012
instinct246
78,398,714
1,761,721
Playwright Python is not getting the response
<p>When I wait for a response from an API call that is made on a page, Playwright never gets the response. I've seen this in several attempts to wait for a response on different pages, but here is some example code that I hope narrows it down to an easy-to-replicate situation.</p> <p>The code times out:</p> <pre><code>from playwright.sync_api import Page
from playwright.sync_api import sync_playwright
import json

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()

    # Goto playwright homepage and press search box:
    page.goto(&quot;https://playwright.dev/&quot;)
    page.get_by_role('button', name='Search').click()

    # Catching response associated with filling searchfield:
    with page.expect_response(&quot;**.algolia.net/**&quot;) as response:
        # Fill a letter in searchbox to trigger the post-request:
        page.get_by_placeholder('Search docs').fill('A')

        # Printing the value of the response as a python json object:
        print(response.value.json())
        # Printing the value of the response as raw json:
        print(json.dumps(response.value.json()))
</code></pre> <p>Here is the error message that happens when the response times out:</p> <pre class="lang-none prettyprint-override"><code>Exception has occurred: InvalidStateError
invalid state
  File &quot;C:\Users\ronwa\Documents\playwright-test\pwright-search.py&quot;, line 23, in &lt;module&gt;
    print(response.value.json())
          ^^^^^^^^^^^^^^
playwright._impl._errors.TimeoutError: Timeout 30000ms exceeded while waiting for event &quot;response&quot;
=========================== logs ===========================
waiting for response **.algolia.net/**
============================================================

During handling of the above exception, another exception occurred:

  File &quot;C:\Users\ronwa\Documents\playwright-test\pwright-search.py&quot;, line 7, in &lt;module&gt;
    with sync_playwright() as p:
asyncio.exceptions.InvalidStateError: invalid state
</code></pre>
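<p>One detail worth keeping in mind: in the sync API, <code>expect_response</code> is an ordinary context manager, and the wait for the matching event is tied to the block's lifetime. A simplified stand-in (an illustrative assumption, not Playwright's real internals) shows the shape of that pattern:</p>

```python
# Simplified stand-in (not Playwright internals): an expect_*-style context
# manager that collects events and raises if nothing matched the pattern.
from contextlib import contextmanager

@contextmanager
def expect_event(events, needle):
    info = {}
    yield info                      # the triggering action runs in the block body
    hits = [e for e in events if needle in e]
    if not hits:                    # nothing matched: raise when the block closes
        raise TimeoutError(f'no event matching {needle!r}')
    info['value'] = hits[0]         # only populated once the block has exited

seen = ['https://abc.algolia.net/1/indexes/queries']
with expect_event(seen, 'algolia.net') as resp:
    pass                            # here Playwright would perform fill()/click()
assert resp['value'] == 'https://abc.algolia.net/1/indexes/queries'
```

<p>This is why <code>response.value</code> is normally read after the <code>with</code> block, and why the timeout error surfaces at whatever line is executing when the wait expires.</p>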
<python><web-scraping><playwright><playwright-python>
2024-04-28 15:34:12
1
449
Stephen Graham
78,398,712
14,195,139
ERROR: Package 'imageio' requires a different Python: 2.7.17 not in '>=3.5'
<p>I am doing Medical Image Analysis, for which I need the package <code>imageio</code> in a <code>jupyter notebook</code>. I am running this command:</p> <pre><code>!pip install imageio
</code></pre> <p>It gives me an error like this:</p> <pre><code>DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support pip 21.0 will remove support for this functionality.
Defaulting to user installation because normal site-packages is not writeable
Collecting imageio
  Using cached imageio-2.9.0.tar.gz (3.3 MB)
ERROR: Package 'imageio' requires a different Python: 2.7.17 not in '&gt;=3.5'
</code></pre> <p>My Jupyter <code>kernel</code> is set to <code>Python3</code>:</p> <pre><code>!python3 --version
Python 3.6.3 :: Anaconda, Inc.
</code></pre> <p>I tried</p> <pre><code>!pip3 install imageio
</code></pre> <p>which installed it correctly with this output:</p> <pre><code>Requirement already satisfied: imageio in /home/jayakumars/anaconda3/envs/fastai2/lib/python3.6/site-packages (2.15.0)
Requirement already satisfied: numpy in /home/jayakumars/anaconda3/envs/fastai2/lib/python3.6/site-packages (from imageio) (1.19.5)
Requirement already satisfied: pillow&gt;=8.3.2 in /home/jayakumars/anaconda3/envs/fastai2/lib/python3.6/site-packages (from imageio) (8.4.0)
</code></pre> <p>But when I <code>import</code> the <code>imageio</code> package</p> <pre><code>import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
import glob
import nibabel as nib
import cv2
import imageio
from tqdm.notebook import tqdm
from ipywidgets import *
from PIL import Image
from fastai.basics import *
from fastai.vision.all import *
from fastai.data.transforms import *
</code></pre> <p>it still gives me an <code>ImportError</code>:</p> <pre><code>ImportError                               Traceback (most recent call last)
&lt;ipython-input-40-9c1456a18eb5&gt; in &lt;module&gt;()
      6 import nibabel as nib
      7 import cv2
----&gt; 8 import imageio
      9 from tqdm.notebook import tqdm
     10 from ipywidgets import *

ImportError: No module named imageio
</code></pre> <p>So, how can I install it?</p> <p>I tried <code>pip3</code> and expected it to resolve the <code>imageio</code> <code>ImportError</code>.</p>
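<p>For background, the usual way to sidestep this <code>pip</code>-vs-kernel mismatch (a generic sketch, independent of this particular notebook) is to invoke pip through the interpreter the kernel is actually running, via <code>sys.executable</code>:</p>

```python
# Sketch: build the install command from the kernel's own interpreter, so pip
# cannot resolve to a stray Python 2.7 that happens to be first on PATH.
import sys

cmd = [sys.executable, '-m', 'pip', 'install', 'imageio']
print(cmd)
# In a notebook cell this is usually written as:
#   !{sys.executable} -m pip install imageio
# subprocess.check_call(cmd)  # uncomment to actually run the install
```

<p>Because <code>sys.executable</code> is the exact binary running the kernel, the package lands in the same <code>site-packages</code> the notebook imports from.</p>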
<python><jupyter-notebook><importerror><python-imageio>
2024-04-28 15:34:01
1
443
Jaykumar