Dataset columns: QuestionId (int64, 74.8M–79.8M), UserId (int64, 56–29.4M), QuestionTitle (string, 15–150 chars), QuestionBody (string, 40–40.3k chars), Tags (string, 8–101 chars), CreationDate (2022-12-10 09:42:47 – 2025-11-01 19:08:18), AnswerCount (int64, 0–44), UserExpertiseLevel (int64, 301–888k), UserDisplayName (string, 3–30 chars)
79,121,611
9,363,181
Unable to get the Postgres data in the right format via Kafka, the JDBC source connector and PySpark
<p>I have created a table in <code>Postgres</code>:</p> <pre><code>CREATE TABLE IF NOT EXISTS public.sample_a ( id text COLLATE pg_catalog.&quot;default&quot; NOT NULL, is_active boolean NOT NULL, is_deleted boolean NOT NULL, created_by integer NOT NULL, created_at timestamp with time zone NOT NULL, created_ip character varying(30) COLLATE pg_catalog.&quot;default&quot; NOT NULL, created_dept_id integer NOT NULL, updated_by integer, updated_at timestamp with time zone, updated_ip character varying(30) COLLATE pg_catalog.&quot;default&quot;, updated_dept_id integer, deleted_by integer, deleted_at timestamp with time zone, deleted_ip character varying(30) COLLATE pg_catalog.&quot;default&quot;, deleted_dept_id integer, sql_id bigint NOT NULL, ipa_no character varying(30) COLLATE pg_catalog.&quot;default&quot; NOT NULL, pe_id bigint NOT NULL, uid character varying(30) COLLATE pg_catalog.&quot;default&quot; NOT NULL, mr_no character varying(15) COLLATE pg_catalog.&quot;default&quot; NOT NULL, site_id integer NOT NULL, entered_date date NOT NULL, CONSTRAINT pk_patient_dilation PRIMARY KEY (id) ); </code></pre> <p>and I have inserted the data as below:</p> <pre><code>INSERT INTO sample_a (id, is_active, is_deleted, created_by, created_at, created_ip, created_dept_id, updated_by, updated_at, updated_ip, updated_dept_id, deleted_by, deleted_at, deleted_ip, deleted_dept_id, sql_id, ipa_no, pe_id, uid, mr_no, site_id, entered_date) VALUES ('00037167-0894-4373-9a56-44c49d2285c9', TRUE, FALSE, 70516, '2024-10-05 08:12:25.069941+00','10.160.0.76', 4, 70516, '2024-10-05 09:25:55.218961+00', '10.84.0.1',4,NULL, NULL, NULL, NULL, 0,0,165587147,'22516767','P5942023',1,'10/5/24'); </code></pre> <p>Now, I have created the <code>JDBC source connector</code> config as below:</p> <pre><code> { &quot;name&quot;: &quot;JdbcSourceConnectorConnector_0&quot;, &quot;config&quot;: { &quot;value.converter.schema.registry.url&quot;: &quot;http://schema-registry:8081&quot;, 
&quot;key.converter.schema.registry.url&quot;: &quot;http://schema-registry:8081&quot;, &quot;name&quot;: &quot;JdbcSourceConnectorConnector_0&quot;, &quot;connector.class&quot;: &quot;io.confluent.connect.jdbc.JdbcSourceConnector&quot;, &quot;key.converter&quot;: &quot;io.confluent.connect.avro.AvroConverter&quot;, &quot;value.converter&quot;: &quot;io.confluent.connect.avro.AvroConverter&quot;, &quot;connection.url&quot;: &quot;jdbc:postgresql://postgres:5432/&quot;, &quot;connection.user&quot;: &quot;postgres&quot;, &quot;connection.password&quot;: &quot;********&quot;, &quot;table.whitelist&quot;: &quot;sample_a&quot;, &quot;mode&quot;: &quot;bulk&quot; } } </code></pre> <p>So when the data is pushed from the DB to the Kafka Topic, I can see the data in the readable format in the Kafka Control Center tab. Since I am using <code>bulk</code> mode, the data is continuously being loaded.</p> <p>My Problem is when I fetch the data via <code>Pyspark</code>, it is not readable:</p> <pre><code>from pyspark.sql.session import SparkSession from pyspark.sql.functions import col spark = SparkSession \ .builder \ .appName(&quot;Kafka_Test&quot;) \ .config(&quot;spark.jars.packages&quot;, &quot;org.apache.spark:spark-sql-kafka-0-10_2.12:3.3.0&quot;) \ .getOrCreate() df = spark \ .readStream \ .format(&quot;kafka&quot;) \ .option(&quot;kafka.bootstrap.servers&quot;, &quot;localhost:9092&quot;) \ .option(&quot;subscribe&quot;, &quot;sample_a&quot;) \ .option(&quot;startingOffsets&quot;,&quot;latest&quot;) \ .load() df.selectExpr(&quot;cast(value as string) as value&quot;).writeStream.format(&quot;console&quot;).start() spark.streams.awaitAnyTermination() </code></pre> <p><strong>Output</strong>:</p> <pre><code>H00037167-0894-4373-9a56-44c49d2285c9?ڹ??d10.160.0.7??????d10.84.0.0????22516767P5942023¸ </code></pre> <p>So do I access the specific attributes? 
Do I need any deserializer class?</p> <p>TIA.</p> <p>As per <strong>Nimi's</strong> suggestion I was able to fetch the schema via the schema registry and tried to use the <code>from_avro</code> method but it gave me this error:</p> <pre><code>jsonSchema = {&quot;type&quot;:&quot;record&quot;,&quot;name&quot;:&quot;sample_a&quot;,&quot;fields&quot;:[{&quot;name&quot;:&quot;id&quot;,&quot;type&quot;:&quot;string&quot;},{&quot;name&quot;:&quot;is_active&quot;,&quot;type&quot;:&quot;boolean&quot;},{&quot;name&quot;:&quot;is_deleted&quot;,&quot;type&quot;:&quot;boolean&quot;},{&quot;name&quot;:&quot;created_by&quot;,&quot;type&quot;:&quot;int&quot;},{&quot;name&quot;:&quot;created_at&quot;,&quot;type&quot;:{&quot;type&quot;:&quot;long&quot;,&quot;connect.version&quot;:1,&quot;connect.name&quot;:&quot;org.apache.kafka.connect.data.Timestamp&quot;,&quot;logicalType&quot;:&quot;timestamp-millis&quot;}},{&quot;name&quot;:&quot;created_ip&quot;,&quot;type&quot;:&quot;string&quot;},{&quot;name&quot;:&quot;created_dept_id&quot;,&quot;type&quot;:&quot;int&quot;},{&quot;name&quot;:&quot;updated_by&quot;,&quot;type&quot;:[&quot;null&quot;,&quot;int&quot;],&quot;default&quot;:None},{&quot;name&quot;:&quot;updated_at&quot;,&quot;type&quot;:[&quot;null&quot;,{&quot;type&quot;:&quot;long&quot;,&quot;connect.version&quot;:1,&quot;connect.name&quot;:&quot;org.apache.kafka.connect.data.Timestamp&quot;,&quot;logicalType&quot;:&quot;timestamp-millis&quot;}],&quot;default&quot;:None},{&quot;name&quot;:&quot;updated_ip&quot;,&quot;type&quot;:[&quot;null&quot;,&quot;string&quot;],&quot;default&quot;:None},{&quot;name&quot;:&quot;updated_dept_id&quot;,&quot;type&quot;:[&quot;null&quot;,&quot;int&quot;],&quot;default&quot;:None},{&quot;name&quot;:&quot;deleted_by&quot;,&quot;type&quot;:[&quot;null&quot;,&quot;int&quot;],&quot;default&quot;:None},{&quot;name&quot;:&quot;deleted_at&quot;,&quot;type&quot;:[&quot;null&quot;,{&quot;type&quot;:&quot;long&quot;,&quot;connect.version&quot;:1,&quot;conn
ect.name&quot;:&quot;org.apache.kafka.connect.data.Timestamp&quot;,&quot;logicalType&quot;:&quot;timestamp-millis&quot;}],&quot;default&quot;:None},{&quot;name&quot;:&quot;deleted_ip&quot;,&quot;type&quot;:[&quot;null&quot;,&quot;string&quot;],&quot;default&quot;:None},{&quot;name&quot;:&quot;deleted_dept_id&quot;,&quot;type&quot;:[&quot;null&quot;,&quot;int&quot;],&quot;default&quot;:None},{&quot;name&quot;:&quot;sql_id&quot;,&quot;type&quot;:&quot;long&quot;},{&quot;name&quot;:&quot;ipa_no&quot;,&quot;type&quot;:&quot;string&quot;},{&quot;name&quot;:&quot;pe_id&quot;,&quot;type&quot;:&quot;long&quot;},{&quot;name&quot;:&quot;uid&quot;,&quot;type&quot;:&quot;string&quot;},{&quot;name&quot;:&quot;mr_no&quot;,&quot;type&quot;:&quot;string&quot;},{&quot;name&quot;:&quot;site_id&quot;,&quot;type&quot;:&quot;int&quot;},{&quot;name&quot;:&quot;entered_date&quot;,&quot;type&quot;:{&quot;type&quot;:&quot;int&quot;,&quot;connect.version&quot;:1,&quot;connect.name&quot;:&quot;org.apache.kafka.connect.data.Date&quot;,&quot;logicalType&quot;:&quot;date&quot;}}],&quot;connect.name&quot;: &quot;sample_a&quot;} df.select(from_avro(&quot;value&quot;, json.dumps(jsonSchema)).alias(&quot;sample_a&quot;)) \ .select(&quot;sample_a.*&quot;).writeStream.format(&quot;console&quot;).start() </code></pre> <p><strong>error:</strong>.</p> <pre><code> org.apache.spark.SparkException: Malformed records are detected in record parsing. Current parse Mode: FAILFAST. To process malformed records as null result, try setting the option 'mode' as 'PERMISSIVE'. 
at org.apache.spark.sql.avro.AvroDataToCatalyst.nullSafeEval(AvroDataToCatalyst.scala:113) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760) at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$1(WriteToDataSourceV2Exec.scala:435) at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1538) at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:480) at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:381) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:136) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: java.lang.ArrayIndexOutOfBoundsException: Index 70516 out of bounds for length 2 at org.apache.avro.io.parsing.Symbol$Alternative.getSymbol(Symbol.java:460) at org.apache.avro.io.ResolvingDecoder.readIndex(ResolvingDecoder.java:283) </code></pre>
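A note on the error above: `ArrayIndexOutOfBoundsException: Index 70516 out of bounds for length 2` is the classic symptom of feeding Confluent-framed Avro to Spark's `from_avro`, which expects a bare Avro body. The Connect `AvroConverter` prepends a 5-byte header (one magic byte `0x00` plus a 4-byte big-endian Schema Registry ID), so the decoder misreads the stream (here, the value of `created_by`, 70516, gets interpreted as a union branch index). The framing can be sketched in plain Python; the commented PySpark lines show the usual fix of stripping the first 5 bytes before `from_avro` (a sketch, not verified against a running cluster):

```python
import struct

def split_confluent_frame(raw: bytes):
    """Split a Confluent wire-format message into (schema_id, avro_payload).

    Layout: 1 magic byte (0x00) + 4-byte big-endian schema id + Avro body.
    """
    if len(raw) < 5 or raw[0] != 0:
        raise ValueError("not a Confluent wire-format message")
    (schema_id,) = struct.unpack(">I", raw[1:5])
    return schema_id, raw[5:]

# In the streaming job, the same stripping is typically done with a SQL
# expression before handing the bytes to from_avro:
#
#   from pyspark.sql.functions import expr
#   from pyspark.sql.avro.functions import from_avro
#   payload = df.withColumn("payload", expr("substring(value, 6, length(value) - 5)"))
#   payload.select(from_avro("payload", json.dumps(jsonSchema)).alias("sample_a"))
```

This also explains why the Control Center UI shows readable data (it consults the Schema Registry) while a raw `cast(value as string)` does not.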
<python><apache-spark><pyspark><apache-kafka><jdbc>
2024-10-24 11:06:25
2
645
RushHour
79,121,591
109,525
Equivalent of pandas.Series.dt.ceil in polars
<p>I'm trying to round timestamp to the next minutes in polars.</p> <p>For example:</p> <ul> <li><code>2023-01-01 10:05:00</code> should stay <code>2023-01-01 10:05:00</code></li> <li><code>2023-01-01 10:05:01</code> should be <code>2023-01-01 10:06:00</code></li> </ul> <p>This works in pandas with ceil:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl import datetime df = pl.DataFrame( {'timestamp' :[ datetime.datetime(2023, 1, 1, 10, 5, 0), datetime.datetime(2023, 1, 1, 10, 5, 30), datetime.datetime(2023, 1, 1, 10, 6, 0), datetime.datetime(2023, 1, 1, 10, 6, 1), ] } ) </code></pre> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: left;">timestamp</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">2023-01-01T10:05:00.000000</td> </tr> <tr> <td style="text-align: left;">2023-01-01T10:05:30.000000</td> </tr> <tr> <td style="text-align: left;">2023-01-01T10:06:00.000000</td> </tr> <tr> <td style="text-align: left;">2023-01-01T10:06:01.000000</td> </tr> </tbody> </table></div> <pre class="lang-py prettyprint-override"><code>df['timestamp'].to_pandas().dt.ceil('1min') </code></pre> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: left;">timestamp</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">2023-01-01T10:05:00.000000</td> </tr> <tr> <td style="text-align: left;">2023-01-01T10:06:00.000000</td> </tr> <tr> <td style="text-align: left;">2023-01-01T10:06:00.000000</td> </tr> <tr> <td style="text-align: left;">2023-01-01T10:07:00.000000</td> </tr> </tbody> </table></div> <p>The only way I found in polars is the following:</p> <pre class="lang-py prettyprint-override"><code>df.with_columns( pl.when(pl.col('timestamp').dt.truncate('1m') == pl.col('timestamp')) .then(pl.col('timestamp')) .otherwise(pl.col('timestamp').dt.truncate('1m') + datetime.timedelta(minutes=1)) ) </code></pre>
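The `when`/`truncate`/`otherwise` branch in the question can be collapsed into branch-free arithmetic: ceiling to a minute is the same as truncating after shifting by one microsecond less than a minute. Below is the idea in plain `datetime`; the same shift-then-truncate expression should carry over to polars as something like `(pl.col('timestamp') + pl.duration(seconds=59, microseconds=999_999)).dt.truncate('1m')` (an assumption, not checked against any particular polars version):

```python
from datetime import datetime, timedelta

# Shift by (1 minute - 1 microsecond), then truncate to the minute.
# Exact minutes stay put; anything past the minute rolls forward.
SHIFT = timedelta(minutes=1) - timedelta(microseconds=1)

def ceil_minute(ts: datetime) -> datetime:
    return (ts + SHIFT).replace(second=0, microsecond=0)
```

The trick generalizes: to ceil to any interval, add `interval - smallest_unit` and truncate to the interval.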
<python><timestamp><python-polars>
2024-10-24 11:01:38
1
14,153
0x26res
79,121,361
9,593,060
delta-rs writing with Pandas gets stuck for big dataset
<p>I am using <a href="https://github.com/delta-io/delta-rs" rel="nofollow noreferrer">delta-rs</a> to read and process some delta tables with Pandas.</p> <p>I made several experiments with the following pretty simple code:</p> <pre><code>from deltalake import write_deltalake, DeltaTable df = DeltaTable(s3_table_URI, storage_options={&quot;AWS_REGION&quot;: &quot;eu-west-1&quot;}).to_pandas() write_deltalake(s3_table_URI, df, mode=&quot;overwrite&quot;, schema_mode=&quot;overwrite&quot;) </code></pre> <p>I tried with two different tables:</p> <ol> <li>A quite big table, 500 million rows and 12 columns.</li> <li>A smaller table, 145000 rows and 5 columns.</li> </ol> <p>For the smaller table, the code works fine with no problems at all.</p> <p>However, for the bigger one, the code reads in memory the data successfully and I am able to process it with some transformations (that here I omitted) but the code gets stuck in writing. It looks like it doesn't really matter which transformations I am doing as the code gets stuck also with just read/writing operation with no intermediate transformations.</p> <p>I am using a m4.10xlarge with 160GB of ram and 40 cores single node on Databricks, I thought it could have been an OOM issue, but I still have plenty of memory available when it gets stuck in writing (more than 50GB). After 3 hours, the cluster is still there doing apparently nothing when the writing command is executed.</p> <p>The same code executed with Polars (that under the hood uses delta-rs) works perfectly, with no problems:</p> <pre><code>import polars as pl df = pl.read_delta(s3_table_URI, storage_options={&quot;AWS_REGION&quot;: &quot;eu-west-1&quot;}) df.write_delta(s3_table_URI, mode=&quot;overwrite&quot;, storage_options={&quot;AWS_REGION&quot;: &quot;eu-west-1&quot;}) </code></pre> <p>Anyone experienced a similar problem? 
It looks like some sort of bug related to the interaction between Pandas and delta-rs, as Polars works fine.</p> <p>I am forced to use Pandas because of some other dependencies I am not able to update inside the company I am working for.</p>
<python><pandas><python-polars><delta-lake><delta-rs>
2024-10-24 09:51:37
0
1,618
Mattia Surricchio
79,121,307
1,479,670
How to port a PyCharm project from Linux to Windows
<p>I have a Python application for the command line which I built using PyCharm. To run it, I call <code>source venv/bin/activate</code> and then start my application with <code>python my_app.py</code>.</p> <p>Now, to port this to Windows, I copied the whole project directory (Python files and venv) to my PyCharmProjects directory. When I then tried to run my application from within PyCharm, it didn't work because of missing packages, so I had to manually install all necessary packages again. Once the application ran in PyCharm, I wanted to test it on the command line, but there is no <code>activate.bat</code> anywhere in my <code>venv</code> directory, and there is also no <code>.venv</code> directory. I suppose I'll have similar issues when I go from Windows back to Linux again.</p> <p>What is the correct way to port a project from Linux to Windows (and vice versa)?</p>
<python><pycharm>
2024-10-24 09:37:45
1
1,355
user1479670
79,121,185
11,826,257
pip install pyspectra does not find existing numpy installation
<p>I want to analyze near-infrared (NIR) spectra in Python. My spectra are stored in the <a href="https://en.wikipedia.org/wiki/SPC_file_format" rel="nofollow noreferrer">spc file format</a>. So I need a tool that lets me import such files. &quot;Pyspectra&quot; seems to be a good module for this. However, I am unable to install it in a fresh virtual environment with Python 3.12.5 and pip 24.2 on a Windows 10 machine.</p> <p><code>pip install pyspectra</code> fails with an error message:<br /> &quot;<em>Getting requirements to build wheel did not run successfully</em>&quot;.</p> <p>The last line of the traceback states:<br /> <em>ModuleNotFoundError: No module named 'numpy'</em>.</p> <p>I installed numpy with <code>pip install numpy</code> and verified that it works with <code>import numpy as np</code>. No problem here. I also made sure that I am in the same virtual environment in which I wish to install pyspectra.</p> <p>But I still cannot import pyspectra. Pip continues to claim that it cannot find numpy.</p> <p>Could this be a dependency issue between 'pyspectra 0.0.1.2' and 'numpy 2.1.2'?</p> <hr /> <p>For reference: This is my code on the Windows command line</p> <pre><code># Create and activate a virtual environment C:\user\...\Desktop&gt;python -m venv venv C:\user\...\Desktop&gt;venv\scripts\activate # Import numpy (to make sure it is installed) (venv) C:\user\...\Desktop&gt;py -m pip install numpy # Import pyspectra (venv) C:\user\...\Desktop&gt;py -m pip install pyspectra Collecting pyspectra Using cached pyspectra-0.0.1.2-py3-none-any.whl.metadata (12 kB) Requirement already satisfied: numpy in c:\users\...\venv\lib\site-packages (from pyspectra) (2.1.2) Collecting pandas (from pyspectra) Using cached pandas-2.2.3-cp312-cp312-win_amd64.whl.metadata (19 kB) Collecting spc-spectra (from pyspectra) Using cached spc_spectra-0.4.0.tar.gz (8.6 kB) Installing build dependencies ... done Getting requirements to build wheel ... 
error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─&gt; [24 lines of output] Traceback (most recent call last): [...] File &quot;C:\Users\...\AppData\Local\Temp\pip-install-v_qa0rxl\spc-spectra_a351371d52ef45edbd585ef2843e5c1e\spc_spectra\spc.py&quot;, line 10, in &lt;module&gt; import numpy as np ModuleNotFoundError: No module named 'numpy' [end of output] </code></pre> <p>Edit: Rewrote question to make it more understandable.</p>
<python><import><modulenotfounderror>
2024-10-24 09:09:39
1
407
Staehlo
79,121,019
9,356,136
Regex negative lookahead with variable text before and after keywords
<p>I have a string which contains two keywords, keyword_1 and keyword_2, and an integer which follows on keyword_2.</p> <p>I want to extract the integer, but only if a particular keyword, keyword_neg, does not occur between keyword_1 and keyword_2.</p> <p>Additionally (and I believe this is where the problem is coming from), there may be more random text between all 3 keywords.</p> <p>I have tried a large number of different regex to do so, but with no success. For the first test string in the code snippet below, the regex returns the number 9, but should return nothing. To avoid making &quot;translation&quot; errors to a more abstract example, here are the original strings/regex with which I have been working:</p> <p>keyword_1 is &quot;ISIN ABC123&quot;</p> <p>keyword_2 is &quot;Fondets omløpshastighet er&quot;</p> <p>keyword_neg is &quot;ISIN&quot;</p> <pre><code>import re #The following string should return nothing, because between &quot;ISIN ABC123&quot; and &quot;Fondets omløpshastighet er&quot;, the word &quot;ISIN&quot; occurs str_that_should_return_nothing = &quot;ISIN ABC123 blabla ISIN DEF123 Fondets omløpshastighet er 9 blabla&quot; #The following string should return 5 because the word &quot;ISIN&quot; does not appear between &quot;ISIN ABC123&quot; and &quot;Fondets omløpshastighet er&quot; str_that_should_return_5 = &quot;ISIN ABC123 blabla Fondets omløpshastighet er 5 ISIN DEF123 Fondets omløpshastighet er 9 blabla&quot; pattern = 'ISIN ABC123.*?(?!ISIN).*?Fondets omløpshastighet er (\d)' #does NOT work match = re.findall(pattern, str_that_should_return_nothing ) print(&quot;str_1_f: &quot;, match) match = re.findall(pattern, str_that_should_return_5) print(&quot;str_2_t: &quot;, match) </code></pre> <p>Please help :)</p>
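The pattern fails because `.*?(?!ISIN).*?` applies the lookahead at a single position, which the engine can simply backtrack past. What is needed is a tempered greedy token, `(?:(?!ISIN).)*?`, which checks the lookahead before consuming *every* character between the two keywords. A sketch using the question's own strings:

```python
import re

# Tempered greedy token: each consumed character must not start "ISIN"
pattern = r"ISIN ABC123(?:(?!ISIN).)*?Fondets omløpshastighet er (\d)"

str_that_should_return_nothing = (
    "ISIN ABC123 blabla ISIN DEF123 Fondets omløpshastighet er 9 blabla"
)
str_that_should_return_5 = (
    "ISIN ABC123 blabla Fondets omløpshastighet er 5 "
    "ISIN DEF123 Fondets omløpshastighet er 9 blabla"
)
```

If the keywords can be separated by newlines, add `re.DOTALL` so `.` also matches line breaks.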
<python><regex>
2024-10-24 08:28:25
0
367
Spaniel
79,120,999
2,971,574
Use IOUtils.setByteArrayMaxOverride in PySpark
<p>I'm facing the error</p> <p>&quot;org.apache.poi.util.RecordFormatException: Tried to allocate an array of length 100,335,238, but the maximum length for this record type is 100,000,000.&quot;</p> <p>when trying to read an excel file using pyspark in databricks. That's my syntax:</p> <pre><code>spark.read.format(&quot;com.crealytics.spark.excel&quot;) \ .option(&quot;header&quot;, &quot;true&quot;) \ .schema(input_schema) \ .load(path) \ .display() </code></pre> <p>I just updated my excel maven package on my computer cluster to &quot;com.crealytics:spark-excel_2.12:3.5.1_0.20.4&quot; but that does not help either. My cluster is running on databricks runtime version 15.4 LTS.</p> <p>So, the error message indicates that my excel file (14 MB) is just a little too big to be read. My AI assistant (as well as my google research) tell me that one solution to the problem is to increase the &quot;maximum record length&quot; above 100,000,000. This can be done using the command <code>IOUtils.setByteArrayMaxOverride(200000000)</code>. But this syntax is Java syntax. So, the question is how to increase this size using pyspark/Python? Or maybe there is another (better) approach to deal with the problem?</p>
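Since `IOUtils.setByteArrayMaxOverride` is a static Java method, one avenue is to reach it from PySpark through the Py4J gateway on the driver, given that the POI jars pulled in by spark-excel are on the classpath. This is only a sketch: whether it takes effect depends on where the workbook is actually parsed (driver vs. executors), and some spark-excel versions are reported to expose a read option for the same limit, which would be the cleaner route if your version supports it (an assumption; check the spark-excel documentation for your release):

```python
# Sketch: call the static POI method on the driver JVM via Py4J.
# Assumes an active SparkSession named `spark` with POI on the classpath.
spark._jvm.org.apache.poi.util.IOUtils.setByteArrayMaxOverride(200_000_000)

df = (
    spark.read.format("com.crealytics.spark.excel")
    .option("header", "true")
    # hypothetical per-read alternative in newer spark-excel versions:
    # .option("maxByteArraySize", 200_000_000)
    .schema(input_schema)
    .load(path)
)
```

Note that `spark._jvm` is a private attribute, so this is a workaround rather than a supported API.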
<python><pyspark><databricks>
2024-10-24 08:20:55
1
555
the_economist
79,120,980
4,614,404
Roboflow device management: on which GPU is the model loaded?
<p>I am experimenting with the SDK of Roboflow, I haven't found documentation on device management. I have three questions:</p> <ol> <li>Is the model loaded to GPU by default?</li> <li>What happens when multiple GPUs are available?</li> <li>And how do I define in which GPU I want the model allocated?</li> </ol> <p><strong>EDIT:</strong></p> <p>Roboflow does not expose the device directly as Pytorch would:</p> <pre><code>rf = Roboflow(api_key=&quot;YOUR API KEY&quot;) project = rf.workspace().project(&quot;license-plate-recognition-rxg4e&quot;) model = project.version(&quot;4&quot;).model model = model.to(&quot;cuda:4&quot;) &gt; AttributeError: 'ObjectDetectionModel' object has no attribute 'to' </code></pre>
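One observation on the edit above: the object returned by `project.version("4").model` in the hosted SDK is a REST client rather than a local `torch.nn.Module`, which is why `.to("cuda:4")` raises `AttributeError`. When inference does run locally, a generic, framework-agnostic way to pin a process to one physical GPU is the `CUDA_VISIBLE_DEVICES` environment variable, set before any CUDA-using library initializes (a sketch under that assumption; it is not a Roboflow-specific API):

```python
import os

# Must be set before torch / onnxruntime / the inference library first
# touches CUDA. Inside this process, physical GPU 4 then appears as
# device index 0 to every CUDA-aware library.
os.environ["CUDA_VISIBLE_DEVICES"] = "4"
```

This answers the multi-GPU case too: libraries that default to "the first GPU" will use whichever device the variable exposes first.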
<python><pytorch><gpu><roboflow>
2024-10-24 08:15:48
1
2,024
Victor Zuanazzi
79,120,656
32,043
Pipenv installation fails in one directory but not in the other
<p>I have a strange problem with Python. I'm using pipenv to manage my virtual environment. I upgraded an environment from 3.9 to 3.11 and now <code>pipenv install</code> fails. The error is that the hashes do not match.</p> <p>As soon as I copy the <code>Pipfile</code> to another directory and try it there, it works smoothly.</p> <p>I deleted the <code>Pipfile.lock</code> in the original directory, I moved the whole project directory to another folder, I executed <code>pipenv update</code>, I removed the virtual environment manually; nothing helped.</p> <p>The only difference in the <code>Pipfile.lock</code> in the two directories is that the faulty one excludes <code>3.3</code> for <code>six</code> and <code>python-dateutil</code>:</p> <pre><code>&quot;markers&quot;: &quot;python_version &gt;= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3&quot; </code></pre> <p>vs.</p> <pre><code>&quot;markers&quot;: &quot;python_version &gt;= '2.7' and python_version not in '3.0, 3.1, 3.2&quot; </code></pre> <p>Removing that exclusion doesn't help either...</p> <p>My assumption is that different Python or pip versions are used, but I would not know why, as I'm executing all commands in the same session and as the same user.</p> <p>Here's the error from <code>pipenv install</code>:</p> <pre><code>\[pipenv.exceptions.InstallError\]: ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them. 
\[pipenv.exceptions.InstallError\]: certifi==2024.8.30 from [https://www.piwheels.org/simple/certifi/certifi-2024.8.30-py3-none-any.whl#sha256=3dffae5ce57c3934b457066e04f14270151dd908412a601a3abb554c5acff9d4](https://www.piwheels.org/simple/certifi/certifi-2024.8.30-py3-none-any.whl#sha256=3dffae5ce57c3934b457066e04f14270151dd908412a601a3abb554c5acff9d4 &quot;https://www.piwheels.org/simple/certifi/certifi-2024.8.30-py3-none-any.whl#sha256=3dffae5ce57c3934b457066e04f14270151dd908412a601a3abb554c5acff9d4&quot;) (from -r /tmp/pipenv-d3ppdela-requirements/pipenv-qz_5cdgh-hashed-reqs.txt (line 2)): \[pipenv.exceptions.InstallError\]: Expected sha256 922820b53db7a7257ffbda3f597266d435245903d80737e34f8a45ff3e3230d8 \[pipenv.exceptions.InstallError\]: Expected or bec941d2aa8195e248a60b31ff9f0558284cf01a52591ceda73ea9afffd69fd9 \[pipenv.exceptions.InstallError\]: Got 3dffae5ce57c3934b457066e04f14270151dd908412a601a3abb554c5acff9d4 </code></pre> <p>Python 3.11 on Raspberry Pi OS bookwork 64bit.</p>
<python><linux><pipenv><debian-bookworm>
2024-10-24 06:40:57
1
24,231
guerda
79,120,587
270,043
Optimization of PySpark code to do comparisons of rows
<p>I want to iteratively compare 2 sets of rows in a PySpark dataframe, and find the common values in another column. For example, I have the dataframe (<code>df</code>) below.</p> <pre><code>Column1 Column2 abc 111 def 666 def 111 tyu 777 abc 777 def 222 tyu 333 ewq 888 </code></pre> <p>The output I want is</p> <pre><code>abc,def,CommonRow &lt;-- because of 111 abc,ewq,NoCommonRow abc,tyu,CommonRow &lt;-- because of 777 def,ewq,NoCommonRow def,tyu,NoCommonRow ewq,tyu,NoCommonRow </code></pre> <p>The PySpark code that I'm currently using to do this is</p> <pre><code># &quot;value_list&quot; contains the unique list of values in Column 1 index = 0 for col1 in value_list: index += 1 df_col1 = df.filter(df.Column1 == col1) for col2 in value_list[index:]: df_col2 = df.filter(df.Column1 == col2) df_join = df_col1.join(df_col2, on=(df_col1.Column2 == df_col2.Column2), how=&quot;inner&quot;) if df_join.limit(1).count() == 0: # No common row print(col1,col2,&quot;NoCommonRow&quot;) else: print(col1,col2,&quot;CommonRow&quot;) </code></pre> <p>However, I found that this takes a very long time to run (<code>df</code> has millions of rows). Is there anyway to optimize it to run faster, or is there a better way to do the comparisons?</p>
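The nested loop launches O(n²) separate Spark joins, which dominates the runtime. The classification only needs each Column1 key's *set* of Column2 values; after one pass, every pair is a cheap set-intersection test. Here is that logic in plain Python with the question's sample data. In PySpark, the same idea collapses to a single inner self-join on `Column2` with the condition `a.Column1 < b.Column1`, a `distinct()` over the resulting pairs, and a final fill-in of all missing pairs as `NoCommonRow` (a sketch of the approach, not a benchmarked implementation):

```python
from itertools import combinations

rows = [("abc", 111), ("def", 666), ("def", 111), ("tyu", 777),
        ("abc", 777), ("def", 222), ("tyu", 333), ("ewq", 888)]

# One pass: Column1 -> set of Column2 values
values = {}
for key, val in rows:
    values.setdefault(key, set()).add(val)

# One intersection test per unordered pair of keys
result = {
    (a, b): "CommonRow" if values[a] & values[b] else "NoCommonRow"
    for a, b in combinations(sorted(values), 2)
}
```

The set-building step is linear in the number of rows, so only the (usually much smaller) number of distinct keys drives the quadratic part.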
<python><dataframe><pyspark><optimization>
2024-10-24 06:17:59
2
15,187
Rayne
79,120,579
6,783,666
How can a duration scalar from Django models be implemented for Strawberry GQL?
<p>How can Django duration model fields be exposed in a Strawberry GraphQL API? Is there a standard implementation for the ISO8601 duration format or does a custom scalar field need to be implemented?</p> <pre><code>from django.db import models class MyModel(models.Model): name = models.CharField(max_length=255) my_duration = models.DurationField() </code></pre>
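Strawberry lets you register a custom scalar with `strawberry.scalar`, wrapping a pair of (de)serialization helpers around `datetime.timedelta`. The helpers below handle a minimal `PT#H#M#S` subset of ISO 8601 (a real implementation would also cover days, weeks, and fractional seconds); the Strawberry wiring itself is shown as a commented sketch so the example stays self-contained and makes no claims about your installed strawberry version:

```python
import re
from datetime import timedelta

def duration_to_iso(td: timedelta) -> str:
    """Serialize a timedelta as a minimal ISO 8601 duration (whole seconds)."""
    total = int(td.total_seconds())
    hours, rem = divmod(total, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"PT{hours}H{minutes}M{seconds}S"

def iso_to_duration(value: str) -> timedelta:
    """Parse the PT#H#M#S subset of ISO 8601 back into a timedelta."""
    m = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?", value)
    if m is None:
        raise ValueError(f"unsupported ISO 8601 duration: {value!r}")
    h, mins, s = (int(g) if g else 0 for g in m.groups())
    return timedelta(hours=h, minutes=mins, seconds=s)

# Hedged wiring sketch (assumes strawberry is installed; not verified here):
#
#   import strawberry
#   from typing import NewType
#   Duration = strawberry.scalar(
#       NewType("Duration", timedelta),
#       serialize=duration_to_iso,
#       parse_value=iso_to_duration,
#   )
```

The resulting `Duration` type can then be used as the field type that maps to the model's `DurationField`.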
<python><strawberry-graphql>
2024-10-24 06:14:27
1
3,849
Moritz
79,120,568
698,182
Restrict CPU core usage on Linux in a Python program using the Piper TTS library
<p>For some reason, when I'm using <strong>Piper TTS</strong> to generate speech via <code>taskset -c 2,3 python myprogram.py</code>, the system does not restrict processing to the cores I specify.</p> <p>When I use another library, such as <code>llama-cpp-python</code>, <code>taskset</code> is able to restrict which cores execution occurs on. I also tried using the <code>psutil</code> Python library to set CPU affinity, but this did not work either.</p> <ul> <li>How or why would one Python program be able to bypass restrictions set by <code>taskset</code>?</li> <li>Why isn't the Piper TTS library using the cores set by <code>taskset</code>?</li> </ul>
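One working hypothesis (an assumption, not verified against Piper's internals): Piper runs inference through ONNX Runtime, whose thread pool can set its own affinity for worker threads, overriding what `taskset` applied to the main thread. Two things worth trying are re-applying affinity from inside the process and capping ONNX Runtime's intra-op thread count:

```python
import os

# Re-apply affinity from inside the process (Linux only). Unlike taskset,
# this runs in-process, though worker threads that already pinned
# themselves keep their own mask.
allowed = sorted(os.sched_getaffinity(0))
target = set(allowed[:2])  # pin to (up to) the first two allowed CPUs
os.sched_setaffinity(0, target)

# Hedged alternative: if Piper exposes its onnxruntime SessionOptions,
# capping threads limits CPU spread even without affinity, e.g.:
#   import onnxruntime as ort
#   opts = ort.SessionOptions()
#   opts.intra_op_num_threads = 2
```

`llama-cpp-python` manages its own pthread pool without touching affinity, which would explain why `taskset` works there but not with Piper.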
<python><linux><task>
2024-10-24 06:11:05
2
1,931
ekcrisp
79,120,446
461,212
Get a sum value from nested attributes in Python
<p>I need to code a SEARCH class, so that accessing its attributes by their nested representation(s).</p> <p><strong>Attributes and their relationship</strong>: the goal of the 'SEARCH' class is to obtain a serie of consecutive integer numbers 'n', selecting 1 or summing 2 of the predefined A, B, C, D constants, as follows:</p> <blockquote> <pre><code>A = 0 B = 1 C = 2 D = 4 with 'n' meeting these attributes relationship: n = (A or B) + (None or C or D), thus getting: n = 0..5 </code></pre> </blockquote> <p><strong>Expected Usage</strong>: the main constraint is to apply for nested attributes for acceding to the 'n' value series.</p> <pre><code># Expected output 'n' from the 'SEARCH' class nested attributes: print(SEARCH.A) # Output: 0 print(SEARCH.B) # Output: 1 print(SEARCH.A.C) # Output: 2 (= A + C = 0 + 2) print(SEARCH.A.D) # Output: 4 print(SEARCH.B.C) # Output: 3 print(SEARCH.B.D) # Output: 5 print(SEARCH.A.A) # Output: to drop an error and break print(SEARCH.B.B) # Output: to drop an error and break </code></pre> <p>I need this 'SEARCH' class to behave somehow similarly to an enumeration (See n=0..5). This is why I tried with <code>Enum</code> and <code>Flag</code> classes, as in my <a href="https://stackoverflow.com/q/79117582/461212">previous question</a> on this same topic. The <code>Enum</code> approach was wrong. 
I checked this <a href="https://stackoverflow.com/q/77291854/461212">nearest case</a> question.</p> <p><strong>Code step 1</strong>: My first piece of code (after the <code>Enum</code> tries), with some workarounds with regards to a real nested attribute sequence (C and D underscored) ; nevertheless, this first investigation already reflects what I am aiming:</p> <pre><code>class SEARCH: # class attributes A = 0 B = 1 C = 2 D = 4 class _A: @staticmethod def _C(): return SEARCH.A + SEARCH.C @staticmethod def _D(): return SEARCH.A + SEARCH.D class _B: @staticmethod def _C(): return SEARCH.B + SEARCH.C @staticmethod def _D(): return SEARCH.B + SEARCH.D def __getattr__(self, attr): # Raise error for invalid nested attributes if attr in ['A', 'B']: raise AttributeError(f&quot;Invalid nested attribute access: '{attr}'&quot;) # Usage try: # Valid cases print(SEARCH.A) # Expected Output: 0 print(SEARCH.B) # Expected Output: 1 print(SEARCH.C) # Expected Output: 2 print(SEARCH.D) # Expected Output: 4 print(SEARCH._A._C()) # Expected Output: 2 (0 + 2), but not using underscores print(SEARCH._A._D()) # Expected Output: 4 (0 + 4) print(SEARCH._B._C()) # Expected Output: 3 (1 + 2) print(SEARCH._B._D()) # Expected Output: 5 (1 + 4) # Invalid nested cases try: print(SEARCH.A.A()) # Should raise an error except AttributeError as e: print(f&quot;Error: {e}&quot;) try: print(SEARCH.B.A()) # Should raise an error except AttributeError as e: print(f&quot;Error: {e}&quot;) except AttributeError as e: print(f&quot;General Error: {e}&quot;) </code></pre> <p><strong>Code Step 2</strong>: my current investigations consist in using the <code>__new__</code> method, for creating a new instance of a class, to be called before <code>__init__</code>, where <code>__new__</code> is used to run the initialize() class method automatically whenever the class is used. I am still struggling with a few errors, since the type object 'A' has no attribute 'value'. 
As follows:</p> <pre><code>class SEARCH: A_value = 0 B_value = 1 C_value = 2 D_value = 4 class A: pass class B: pass def __new__(cls, *args, **kwargs): cls.initialize() # Automatically initialize when the class is loaded return super(SEARCH, cls).__new__(cls) @classmethod def initialize(cls): # Dynamically set the nested class values after SEARCH is fully defined cls.A.value = cls.A_value cls.A.C = type('C', (), {'value': cls.A_value + cls.C_value}) cls.A.D = type('D', (), {'value': cls.A_value + cls.D_value}) cls.B.value = cls.B_value cls.B.C = type('C', (), {'value': cls.B_value + cls.C_value}) cls.B.D = type('D', (), {'value': cls.B_value + cls.D_value}) # Testing try: # Valid cases print(SEARCH.A.value) # Expected Output: 0 print(SEARCH.B.value) # Expected Output: 1 print(SEARCH.A.C.value) # Expected Output: 2 (0 + 2) print(SEARCH.A.D.value) # Expected Output: 4 (0 + 4) print(SEARCH.B.C.value) # Expected Output: 3 (1 + 2) print(SEARCH.B.D.value) # Expected Output: 5 (1 + 4) # Invalid nested cases try: print(SEARCH.A.A) # Should raise an error except AttributeError as e: print(f&quot;Error: {e}&quot;) try: print(SEARCH.B.A) # Should raise an error except AttributeError as e: print(f&quot;Error: {e}&quot;) except AttributeError as e: print(f&quot;General Error: {e}&quot;) </code></pre> <p>I could not find yet what I'm doing wrong .</p> <p>Any hints or direction about: <strong>how to get a sum value from nested attributes in Python-3.x ?</strong></p>
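Both attempts above fight the class machinery; a lighter route is to make `A` and `B` instances of an `int` subclass whose `__getattr__` performs the single allowed level of chaining. Each node then prints and compares as its plain value (no `.value` needed), `SEARCH.A.C` builds the sum on the fly, and any other nested name raises `AttributeError`. A sketch:

```python
class _Node(int):
    """An int that additionally allows one level of .C / .D chaining."""

    def __new__(cls, value, suffixes=()):
        obj = super().__new__(cls, value)
        obj._suffixes = dict(suffixes)
        return obj

    def __getattr__(self, name):
        # Only called for names not found normally, i.e. the nested step.
        try:
            # Leaf nodes get no suffixes, so a third level also fails.
            return _Node(int(self) + self._suffixes[name])
        except KeyError:
            raise AttributeError(
                f"invalid nested attribute access: {name!r}"
            ) from None


class SEARCH:
    _second = {"C": 2, "D": 4}
    A = _Node(0, _second)
    B = _Node(1, _second)
    C = 2
    D = 4
```

Because `_Node` subclasses `int`, `print(SEARCH.B.D)` prints `5` directly, and the values behave like ordinary integers in arithmetic and comparisons.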
<python><python-3.x><nested-attributes>
2024-10-24 05:19:46
2
9,395
hornetbzz
79,120,040
1,277,239
Concatenate a few row vectors into a matrix
<p>Four 1X3 row vectors, trying to concatenate them into a 4X3 matrix. Here is my code that is not working:</p> <pre><code>ul = np.array([-320, 240, 1]) ur = np.array([320, 240, 1]) br = np.array([320, -240, 1]) bl = np.array([-320, -240, 1]) corners =np.concatenate( (ul,ur, br, bl), axis=1) </code></pre> <p>Desired output:</p> <p>-320, 240, 1</p> <p>320, 240, 1</p> <p>320, -240, 1</p> <p>-320, -240, 1</p> <p>Thanks.</p>
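The 1-D arrays here only have axis 0, so `np.concatenate(..., axis=1)` fails; stacking them as rows does what is wanted:

```python
import numpy as np

ul = np.array([-320, 240, 1])
ur = np.array([320, 240, 1])
br = np.array([320, -240, 1])
bl = np.array([-320, -240, 1])

# 1-D arrays have no axis 1; vstack treats each as a row of the result
corners = np.vstack((ul, ur, br, bl))   # shape (4, 3)
print(corners)
```

`np.stack((ul, ur, br, bl), axis=0)` or `np.array([ul, ur, br, bl])` would give the same 4x3 result.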
<python><arrays><matrix>
2024-10-24 00:02:20
1
2,912
Nick X Tsui
79,120,026
2,476,977
Removing whitespace within matplotlib plot with subplots
<h2>Problem</h2> <p>When generating the following plot, I am not able to remove the white frame that appears around each graph (note: colors were changed to make the problem more obviously visible).</p> <p><a href="https://i.sstatic.net/QsVgfFhn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QsVgfFhn.png" alt="problematic plot" /></a></p> <p>Is there any way to remove this unwanted effect?</p> <h2>Attempt so far</h2> <p>I've been looking for solutions, but haven't found anything that seems to work. I suspect that there's something annoying that Seaborn is doing. Here are the links that I found on SO, in case people want to know where I've looked.</p> <ul> <li><a href="https://stackoverflow.com/questions/49693560/removing-the-white-border-around-an-image-when-using-matplotlib-without-saving-t">Removing the white border around an image when using matplotlib without saving the image</a></li> <li><a href="https://stackoverflow.com/questions/59866037/how-to-remove-white-lines-when-plotting-multiple-subplots-in-matplotlib">How to remove white lines when plotting multiple subplots in Matplotlib?</a></li> <li><a href="https://stackoverflow.com/questions/37809697/remove-white-border-when-using-subplot-and-imshow-in-python-matplotlib">Remove white border when using subplot and imshow in python (Matplotlib)</a></li> <li><a href="https://stackoverflow.com/q/42850225/2476977">Eliminate white space between subplots in a matplotlib figure</a></li> </ul> <p>But none of these seem applicable. 
In particular, the whitespace is somehow within the plot rather than on its boundary.</p> <h2>Code to reproduce figure (MRE)</h2> <p>It might stretch the &quot;minimal&quot; part of MRE, but here's code that produces the above figure.</p> <pre><code>## imports import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns ##produce dataframes results = pd.DataFrame( {'cat_1': {0: -0.03200000000000003, 1: -0.040000000000000036, 2: -0.030400000000000205, 3: -0.024800000000000155, 4: 2.2880000000000003}, 'cat_2': {0: 0.13839999999999986, 1: 0.14880000000000004, 2: 0.14880000000000004, 3: 0.13759999999999994, 4: 7.3991999999999996}, 'cat_3': {0: 0.1719999999999997, 1: 0.15039999999999987, 2: 0.1823999999999999, 3: 0.16559999999999953, 4: 12.720000000000002}, 'cat_4': {0: 0.2967999999999993, 1: 0.6991999999999994, 2: 0.735199999999999, 3: 0.7607999999999997, 4: 23.2432}, 'label': {0: 'label_1', 1: 'label_2', 2: 'label_3', 3: 'label_4', 4: 'label_5'}} ) relative_results = pd.DataFrame( {'cat_1': {0: -2.123424021234242, 1: -2.6542800265428026, 2: -2.0172528201725415, 3: -1.6456536164565463, 4: 151.82481751824818}, 'cat_2': {0: 6.275505577219545, 1: 6.747075360478827, 2: 6.747075360478827, 3: 6.2392309785072975, 4: 335.5037634896164}, 'cat_3': {0: 6.359535606004574, 1: 5.56089625083191, 2: 6.744065665902532, 3: 6.122901722990442, 4: 470.30984249057167}, 'cat_4': {0: 5.825318940137375, 1: 13.723258096172705, 2: 14.429833169774268, 3: 14.932286555446508, 4: 456.1962708537782}, 'label': {0: 'label_1', 1: 'label_2', 2: 'label_3', 3: 'label_4', 4: 'label_5'}} ) baseline = pd.Series( {'cat_1': 1.507, 'cat_2': 2.2054, 'cat_3': 2.7046, 'cat_4': 5.095, 'label': 'label_0'} ) ## functions to manipulate second y-axis def get_new_bounds(ax1, baseline, col): a,b = ax1.get_ybound() return a*baseline[col]/100, b*baseline[col]/100 def get_new_ticks(ax1, baseline, col): ticks = ax1.get_yticks() return ticks*baseline[col] / 100 ## custom settings 
rc={&quot;axes.facecolor&quot;:&quot;#b3edff&quot;, &quot;figure.facecolor&quot;:&quot;#b3edff&quot;, # &quot;grid.color&quot;:&quot;#ffbb33&quot;, &quot;grid.color&quot;:&quot;#202020&quot;, &quot;grid.linewidth&quot;:0.3, &quot;lines.linewidth&quot;:0.7, &quot;lines.markersize&quot;:20, &quot;font.family&quot;:&quot;Calibri&quot;} sns.set_theme(font_scale = 1.25, rc = rc) ## plot figure cols = [f&quot;cat_{i}&quot; for i in range(1,5)] fig, axs = plt.subplots(2,2, figsize = (20,10), sharex = False, sharey=True) for ax, col in zip(axs.ravel(), cols): sns.pointplot( data = relative_results, x = &quot;label&quot;, y = col, ax = ax, markersize=7) ax.set_xlabel(&quot;&quot;) ax.set_ylabel(&quot;&quot;) ax.set_title(col) ax.axhline(0, ls='--', color = 'k') # note: getting rid of the second y-axis doesn't solve the problem for i, ax, col in zip(range(4), axs.ravel(), cols): if i &gt;= 2: ax.set_xticklabels([]) ax2 = ax.twinx() # if i%2 == 1: # ax2.set_ylabel(&quot;(ms) change in latency&quot;) ax2.set_yticks(get_new_ticks(ax, baseline, col)) ax2.set_ybound(*get_new_bounds(ax, baseline, col)) ax2.grid(False) fig.text(-1.3, 1, '(%) change', ha='center', va='center', rotation='vertical', transform=ax.transAxes, fontsize = &quot;xx-large&quot;) fig.text(1.15, 1, '(unit) change', ha='center', va='center', rotation='vertical', transform=ax.transAxes, fontsize = &quot;xx-large&quot;) fig.supxlabel(&quot;x-axis label&quot;, fontsize = &quot;xx-large&quot;) fig.suptitle(&quot;Title&quot;, fontsize = &quot;xx-large&quot;) plt.show() </code></pre>
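Without the live figure it is hard to say which artist paints the white band, but a quick diagnostic is to give every background patch its own loud color: whatever region stays white is drawn by something else (a twin axes, a text bounding box, etc.). A generic sketch of that technique, not a fix for this specific figure:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, axs = plt.subplots(2, 2, figsize=(8, 4))

# color every background patch differently; whichever area remains
# white is painted by some other artist and can then be hunted down
fig.patch.set_facecolor("red")
for ax in fig.axes:          # fig.axes also includes any twin axes
    ax.patch.set_facecolor("green")

fig.canvas.draw()
```

Saving with `fig.savefig(..., facecolor=fig.get_facecolor())` is also worth trying, since `savefig` historically used its own facecolor default rather than the figure's.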
<python><matplotlib><plot><seaborn>
2024-10-23 23:55:27
0
5,027
Ben Grossmann
79,119,670
480,118
postgres: select and return results before dropping table
<p>I have the following SQL, where I want to return the selected data</p> <pre><code>insert into test(id,name) select * from tmp_tbl on conflict(id) do update set name = EXCLUDED.name; with selected_data as (select * from tmp_tbl) select * from selected_data; drop table if exists tmp_tbl; </code></pre> <p>This is the Python code:</p> <pre><code>with psycopg.connect(self.connect_str, autocommit=True) as conn: with conn.cursor() as cur: cur.execute(sql_full) rows = cur.fetchall() colnames = [desc[0] for desc in cur.description] df_result = pd.DataFrame(rows, columns=colnames) return df_result </code></pre> <p>This produces the error:</p> <pre><code> File &quot;/usr/local/lib/python3.12/site-packages/psycopg/cursor.py&quot;, line 223, in fetchall self._check_result_for_fetch() File &quot;/usr/local/lib/python3.12/site-packages/psycopg/_cursor_base.py&quot;, line 588, in _check_result_for_fetch raise e.ProgrammingError(&quot;the last operation didn't produce a result&quot;) psycopg.ProgrammingError: the last operation didn't produce a result </code></pre> <p>I can execute that SQL directly in a SQL client (DBeaver) and it works, but the Python/psycopg code does not. Any help appreciated.</p>
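The script contains several statements, and when psycopg executes them as one operation the cursor ends up positioned on a result set that returned no rows (in psycopg 3 you can walk through the sets with `cur.nextset()`). The simpler fix is to execute the statements one by one and fetch immediately after the SELECT. The pattern is shown here with sqlite3 purely so it runs without a server; the psycopg 3 cursor calls have the same shape:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("create table tmp_tbl (id integer primary key, name text)")
cur.executemany("insert into tmp_tbl values (?, ?)", [(1, "a"), (2, "b")])

cur.execute("select * from tmp_tbl")
rows = cur.fetchall()                        # fetch while the SELECT is current
cur.execute("drop table if exists tmp_tbl")  # drop only after fetching
print(rows)
```

With PostgreSQL specifically, an `INSERT ... RETURNING *` would also let the first statement itself produce the rows, removing the extra SELECT.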
<python><sql><postgresql><psycopg2><psycopg3>
2024-10-23 20:47:49
1
6,184
mike01010
79,119,637
219,153
What is the equivalent of np.polyval in the new np.polynomial API?
<p>I can't find a direct answer in NumPy documentation. This snippet will populate <code>y</code> with polynomial <code>p</code> values on domain <code>x</code>:</p> <pre><code>p = [1, 2, 3] x = np.linspace(0, 1, 10) y = [np.polyval(p, i) for i in x] </code></pre> <p>What is the new API equivalent when <code>p = Polynomial(p)</code>?</p>
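In the new API the `Polynomial` object itself is callable, so evaluation is just `p(x)`; the only trap is that `Polynomial` stores coefficients lowest degree first, the reverse of `np.polyval`:

```python
import numpy as np
from numpy.polynomial import Polynomial

p = [1, 2, 3]                 # np.polyval order: highest degree first
x = np.linspace(0, 1, 10)
y_old = np.polyval(p, x)

# Polynomial wants lowest degree first, so reverse the coefficients;
# calling the instance evaluates it, vectorized over x
poly = Polynomial(p[::-1])
y_new = poly(x)

print(np.allclose(y_old, y_new))  # True
```

The functional equivalent also exists as `numpy.polynomial.polynomial.polyval(x, c)`, again with `c` in lowest-degree-first order.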
<python><numpy><polynomials>
2024-10-23 20:36:00
1
8,585
Paul Jurczak
79,119,610
1,488,821
Getting the module and class of currently executing classmethod in Python
<p>For code that exists in a module named <code>some.module</code> and looks like this:</p> <pre class="lang-py prettyprint-override"><code>class MyClass: @classmethod def method(cls): # do something pass </code></pre> <p>I'd like to know in the block marked as &quot;do something&quot; what the module name, the class name, and the method name are. In the above example, that would be:</p> <ul> <li>module <code>some.module</code></li> <li>class <code>MyClass</code></li> <li>method <code>method</code></li> </ul> <p>I know of <code>inspect.stack()</code> and I can get the function / method name from it, but I can't find the rest of the data.</p>
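Inside a classmethod the class object is already at hand as `cls`, so no stack walking is needed for the module and class; only the function name comes from the frame:

```python
import inspect


class MyClass:
    @classmethod
    def method(cls):
        module = cls.__module__                       # e.g. 'some.module'
        klass = cls.__qualname__                      # 'MyClass'
        meth = inspect.currentframe().f_code.co_name  # 'method'
        return module, klass, meth


print(MyClass.method())
```

One caveat: `cls` is the class the method was *called on*, so for `class Sub(MyClass)` calling `Sub.method()` would report `Sub`, not the defining class; if the defining class matters, the function's own `__qualname__` (e.g. via `MyClass.method.__func__.__qualname__`) carries it.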
<python><introspection>
2024-10-23 20:25:18
1
2,030
Ivan Voras
79,119,492
3,486,684
Bar chart with multiple bars using xOffset, when the x-axis is temporal?
<p>Here's a small example:</p> <pre class="lang-py prettyprint-override"><code>import altair as alt import polars as pl source = pl.DataFrame( { &quot;Category&quot;: list(&quot;AAABBBCCC&quot;), &quot;Value&quot;: [0.1, 0.6, 0.9, 0.7, 0.2, 1.1, 0.6, 0.1, 0.2], &quot;Date&quot;: [f&quot;2024-{m+1}-1&quot; for m in range(3)] * 3, } ).with_columns(pl.col(&quot;Date&quot;).str.to_date()) bars = alt.Chart(source).mark_bar().encode( x=alt.X(&quot;Date:T&quot;), xOffset=&quot;Category:N&quot;, y=&quot;Value:Q&quot;, color=&quot;Category:N&quot;, ) bars </code></pre> <p><a href="https://i.sstatic.net/AJZ0EHl8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJZ0EHl8.png" alt="example output" /></a></p> <p>If I set <code>x=&quot;Date:N&quot;</code>, then the example behaves as I'd like, but without the benefits of temporal formatting for the x-axis:</p> <p><a href="https://i.sstatic.net/tKOoe2yf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tKOoe2yf.png" alt="enter image description here" /></a></p> <p>Is there any way in which I can have <code>xOffset</code> work for the case where <code>x=&quot;Date:T&quot;</code>?</p>
<python><vega-lite><altair>
2024-10-23 19:42:53
1
4,654
bzm3r
79,119,386
7,346,633
Unsupported platform tag manylinux_2_31_riscv64
<p>When trying to upload a wheel distribution to PyPI, all other architectures were accepted, except for <code>manylinux_2_31_riscv64</code>. I could not find any documentation on how to submit packages targeting the riscv64 platform. Does anyone know what the tag name should be? <a href="https://i.sstatic.net/jde7TuFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jde7TuFd.png" alt="enter image description here" /></a></p>
<python><pypi><python-wheel>
2024-10-23 19:06:46
0
2,231
Hykilpikonna
79,119,356
2,971,574
Create a path structure that can be navigated like classes
<p>Assume there is the following path structure in a storage account that I've created myself:</p> <pre><code>basepath (e.g. /mount/basefolder/) ├── folder_a ├── folder_b ├── folder_c | ├── folder_c1 | ├── folder_c2 | └── a_file.txt | └── folder_d | ├── another_file.csv | ├── folder_d1 └── folder_d2 └── my_table.xlsx </code></pre> <p>I'd like to have a class (or something else) <code>Path</code> that returns the base path &quot;/mount/basefolder/&quot; when calling it. So, pasting <code>Path</code> to the console should return &quot;/mount/basefolder/&quot;. When writing <code>Path.</code> in my IDE and pressing Tab, the IDE should suggest the different options of the second hierarchy of folders/files: &quot;folder_a&quot;, &quot;folder_b&quot;, &quot;folder_c&quot; and &quot;folder_d&quot;. Calling <code>Path.folder_b</code> should return &quot;/mount/basefolder/folder_b/&quot;. When writing <code>Path.folder_c.</code> in the IDE it should suggest the options &quot;folder_c1&quot;, &quot;folder_c2&quot; and &quot;a_file.txt&quot; and so on.</p> <p>The idea is that I can easily navigate the paths that exist using dots and that calling it returns the string to the actual path in my storage account. But how do I build something like that? Of course, classes are required. I thought that StrEnums might help but I couldn't figure out how to set it up :(.</p>
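Because the tree lives on disk, the attributes do not need to be declared at all: a small wrapper whose `__getattr__` descends into child entries and whose `__dir__` feeds tab completion covers most of this. A sketch (the `Node` class is invented for illustration):

```python
import os


class Node:
    """Wraps a directory; attribute access descends into children.

    repr() of a node is the path itself, and __dir__ hands the IDE's
    tab completion the actual directory entries.
    """

    def __init__(self, path):
        self._path = str(path)

    def __getattr__(self, name):
        child = os.path.join(self._path, name)
        if os.path.exists(child):
            return Node(child)
        raise AttributeError(f"{self._path} has no entry {name!r}")

    def __dir__(self):
        return sorted(os.listdir(self._path))

    def __repr__(self):
        return self._path

    def __fspath__(self):  # lets open() and os functions accept a node
        return self._path
```

One limitation: entries whose names are not valid identifiers (such as `a_file.txt`, because of the dot) cannot be reached with dot syntax and need `getattr(node, "a_file.txt")` or a name-sanitizing scheme. Live REPL/IPython completion will pick up `__dir__`; static IDE completion generally will not, since the attributes only exist at runtime.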
<python>
2024-10-23 18:58:23
1
555
the_economist
79,119,271
19,549,205
How to Execute Parameterized Queries in GridDB Using the Python Client?
<p>I'm using the GridDB Python client to interact with my time-series data. I want to execute a parameterized query to prevent SQL injection and handle dynamic values efficiently. When I run the code I get this error:</p> <pre><code>[0] -1: Parameter index out of range </code></pre> <p>Here's the code I'm working with:</p> <pre><code>from griddb_python import griddb # Initialize GridStore factory factory = griddb.StoreFactory.get_instance() try: # Connect to the GridStore gridstore = factory.get_store( host='239.0.0.1', port=31999, cluster_name='defaultCluster', username='admin', password='admin' ) # Get the container container = gridstore.get_container(&quot;sensor_data&quot;) # Create a parameterized query query = container.query(&quot;SELECT * FROM sensor_data WHERE sensor_id = ?&quot;) # Set the parameter value query.set_parameter(1, 'sensor_123') # Execute the query rs = query.fetch() # Process the result set while rs.has_next(): data = rs.next() print(data) except griddb.GSException as e: for i in range(e.get_error_stack_size()): print(f&quot;[{i}] {e.get_error_code(i)}: {e.get_message(i)}&quot;) </code></pre>
<python><database><prepared-statement><griddb>
2024-10-23 18:35:36
1
314
PhantomPhreak1995
79,119,238
1,145,760
How to approach: csv objects specified on command line or in a file
<p>I would like to accept 2d points provided on the command line or read from a file. Example command line positional parameter input:</p> <pre><code>( 4,-1),(0, 0) , (3,3) </code></pre> <p>example CSV file</p> <pre><code>4, -1 0,0 3 ,3 </code></pre> <p>I.e. &quot;comma outside braces&quot; and &quot;newline&quot; are the two possible separators. I can see 3 roads ahead:</p> <ul> <li>re(not sure if even possible if number of provided points is unknown)</li> <li>some magic from python's legendary &quot;batteries&quot;(module <code>csv</code> seems a bit overengineered for such a simple use case, is that true?)</li> <li>the <code>C</code> way - feed the string to a state machine</li> <li>entirely separate handling for the two formats</li> <li>get rid of spaces, perhaps of '\n'(see comments).</li> </ul> <p>It's mostly a conceptual question so any criticism a la &quot;this is a dumb question&quot; or &quot;this is an AB question&quot; is welcome more than code.</p>
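Since both formats reduce to "pairs of numbers in order", a single regex that extracts the numbers and then pairs them up handles the command line and the CSV file with one code path, with no state machine and no per-format parsing. A sketch, assuming integer coordinates (floats would need a wider pattern such as `-?\d+(?:\.\d+)?`):

```python
import re


def parse_points(text):
    """Parse 2-D integer points from either '(x,y),(x,y)' CLI syntax
    or 'x,y'-per-line CSV text; whitespace anywhere is tolerated."""
    nums = re.findall(r"-?\d+", text)
    if len(nums) % 2:
        raise ValueError(f"odd number of coordinates in {text!r}")
    it = iter(int(n) for n in nums)
    return list(zip(it, it))   # pair up consecutive numbers


print(parse_points("( 4,-1),(0, 0) , (3,3)"))   # [(4, -1), (0, 0), (3, 3)]
print(parse_points("4, -1\n0,0\n3 ,3"))          # [(4, -1), (0, 0), (3, 3)]
```

The trade-off is that this is forgiving by design: it ignores the braces entirely, so malformed input like `(4,-1,(0,0)` still parses. If strict validation of the brace syntax matters, a per-format parser (or `csv` for the file case) is the safer road.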
<python><csv>
2024-10-23 18:26:26
2
9,246
Vorac
79,119,214
2,554,349
How can i understand what is running IronPython or Python?
<p>What command can I use inside my code to determine whether it is running under IronPython or regular Python?</p>
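The standard library answers this directly via `platform.python_implementation()`; IronPython can also be recognized by its `sys.platform` value:

```python
import platform
import sys

impl = platform.python_implementation()  # 'CPython', 'IronPython', 'PyPy', 'Jython'
print(impl)

# IronPython also reports sys.platform == 'cli', a useful fallback check
if impl == "IronPython" or sys.platform == "cli":
    print("running under IronPython")
else:
    print("running under", impl)
```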
<python><ironpython>
2024-10-23 18:21:53
2
392
Dmitry Dronov
79,118,920
184,379
How to test urllib3 retry adapter for connection errors
<p>my Retry class is</p> <pre><code>class RetryRequest: &quot;&quot;&quot; Module for Retry logic on requests API requests &quot;&quot;&quot; def __init__(self): # Retry Logic retry_server_errors = requests.adapters.Retry( total=20, connect=10, status_forcelist=list(range(500, 512)), status=10, backoff_factor=0.1, allowed_methods=['GET', 'PATCH', 'DELETE'], ) adapter = requests.adapters.HTTPAdapter(max_retries=retry_server_errors) s = requests.Session() s.mount(&quot;https://&quot;, adapter) s.mount(&quot;http://&quot;, adapter) self.session = s </code></pre> <p>my test is</p> <pre><code>@patch('requests.adapters.HTTPAdapter.send', side_effect=ConnectionError) def test_retry_on_connection_error(mock_send): &quot;&quot;&quot; Test to verify that the RetryRequest retries when a connection error occurs. &quot;&quot;&quot; retry_request = RetryRequest() request_session = retry_request.session # Make the request and expect a ConnectionError with pytest.raises(ConnectionError): request_session.get('https://example.com/test') # Ensure that the request was retried 10 times assert mock_send.call_count == 10, &quot;Expected 10 retries on connection error&quot; </code></pre> <p>but it only gets called once, so the retry for 10 on <code>connect</code> is not working</p>
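The retry loop lives *inside* urllib3's connection pool, which sits below `HTTPAdapter.send` — so patching `send` short-circuits the whole retry machinery after a single call. Raising the failure underneath the pool instead (for example from the connection constructor) lets the `Retry` logic run. A sketch; it reaches into urllib3's private `_new_conn` hook, so it may need adjusting across urllib3 versions:

```python
from unittest.mock import patch

import requests
from requests.adapters import HTTPAdapter, Retry
from urllib3.exceptions import ConnectTimeoutError

session = requests.Session()
session.mount("http://", HTTPAdapter(max_retries=Retry(total=3, connect=3,
                                                       backoff_factor=0)))

calls = {"n": 0}

def failing_new_conn(self, *args, **kwargs):
    # raising below the pool lets urllib3's Retry logic see the error
    calls["n"] += 1
    raise ConnectTimeoutError(self, "simulated connection failure")

with patch("urllib3.connection.HTTPConnection._new_conn", failing_new_conn):
    try:
        session.get("http://example.invalid/")
    except requests.exceptions.ConnectionError:
        pass

print(calls["n"])  # initial attempt + 3 retries
```

Counting the attempts this way (initial try plus `total` retries) is the assertion your test was after; libraries like `responses` or `respx` offer the same idea with less reliance on urllib3 internals.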
<python><python-requests><urllib3>
2024-10-23 16:47:24
1
17,352
Tjorriemorrie
79,118,853
12,076,197
Create new dataframe rows when column has comma delimited values
<p>Example dataframe:</p> <pre><code>name col1 col2 col3 bob bird 78 1000 alice cat 55 500,600,700 rob dog 333 20,30 </code></pre> <p>Desired Dataframe that adds rows when col3 has comma delimited string values:</p> <pre><code>name col1 col2 col3 bob bird 78 1000 alice cat 55 500 alice cat 55 600 alice cat 55 700 rob dog 333 20 rob dog 333 30 </code></pre> <p>Any suggestion is appreciated! thanks!</p>
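Splitting the delimited column into lists and then calling `DataFrame.explode` gives exactly this expansion, with the other columns repeated per element:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "name": ["bob", "alice", "rob"],
        "col1": ["bird", "cat", "dog"],
        "col2": [78, 55, 333],
        "col3": ["1000", "500,600,700", "20,30"],
    }
)

# split the delimited strings into lists, then give each element its own row
out = (
    df.assign(col3=df["col3"].str.split(","))
      .explode("col3")
      .reset_index(drop=True)
)
print(out)
```

If `col3` should end up numeric, append `.astype({"col3": int})` after the explode.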
<python><pandas><dataframe>
2024-10-23 16:32:50
2
641
dmd7
79,118,841
1,368,534
How can I migrate from Poetry to UV package manager?
<p>I'm planning to switch from poetry to the uv Python package manager, but I can't find any migration guides. Currently, I'm using <a href="https://python-poetry.org/" rel="noreferrer">Poetry</a> and already have a <em>pyproject.toml</em> file.</p> <p>What key(s) should be modified or added to migrate properly to uv?</p> <p>Here’s the current <em>pyproject.toml</em> structure:</p> <pre class="lang-ini prettyprint-override"><code>[tool.poetry] name = &quot;name&quot; version = &quot;1.6.0&quot; description = &quot;&quot; authors = [ &quot;...&quot;, ] maintainers = [ &quot;...&quot;, ] readme = &quot;README.md&quot; [tool.poetry.dependencies] python = &quot;^3.12&quot; fastapi = &quot;^0.115.2&quot; uvicorn = { version = &quot;^0.32.0&quot;, extras = [&quot;standard&quot;] } pydantic = &quot;^2.5.3&quot; pydantic-settings = &quot;^2&quot; [tool.poetry.group.dev.dependencies] pytest = &quot;^8.3.3&quot; flake8 = &quot;~7.1.1&quot; mypy = &quot;^1.12&quot; [tool.isort] profile = &quot;black&quot; multi_line_output = 3 [tool.mypy] strict = true ignore_missing_imports = true [tool.pytest.ini_options] filterwarnings = [ &quot;error&quot;, &quot;ignore::DeprecationWarning&quot;, &quot;ignore:.*unclosed.*:ResourceWarning&quot;, ] env = [ &quot;...=...&quot;, &quot;...=...&quot;, ] [tool.coverage.run] omit = [ &quot;...=...&quot;, &quot;...=...&quot;, ] [tool.coverage.report] exclude_lines = [ &quot;pragma: no cover&quot;, &quot;if TYPE_CHECKING&quot;, ] [build-system] requires = [&quot;poetry-core&gt;=1.0.0&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre> <p>Additionally, in the [build-system] section, I currently have <em>poetry-core</em>. Should this be replaced with something specific for uv during the migration?</p>
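uv reads standard PEP 621 metadata, so the migration is mostly rewriting the `[tool.poetry]` tables into a `[project]` table; the tool-specific sections for isort, mypy, pytest, and coverage can stay untouched. A rough sketch of the equivalent — the caret/tilde pins are translated by hand and uv's group syntax has evolved, so double-check against current uv docs:

```toml
[project]
name = "name"
version = "1.6.0"
description = ""
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
    "fastapi>=0.115.2,<0.116",
    "uvicorn[standard]>=0.32.0,<0.33",
    "pydantic>=2.5.3,<3",
    "pydantic-settings>=2,<3",
]

# uv understands PEP 735 dependency groups for dev-only tooling
[dependency-groups]
dev = [
    "pytest>=8.3.3,<9",
    "flake8>=7.1.1,<7.2",
    "mypy>=1.12,<2",
]

# uv is not a build backend; replace poetry-core with one such as hatchling
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
```

After rewriting, `uv lock` (and `uv sync`) regenerates the lock state; the old `poetry.lock` has no direct equivalent and can be deleted once `uv.lock` is committed.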
<python><python-poetry><uv>
2024-10-23 16:29:39
4
2,886
mdegis
79,118,795
7,800,760
Ollama python library not closing socket
<p>In my python code I am doing the following:</p> <pre><code>ollama_client = ollama.Client(host='http://localhost:11434') </code></pre> <p>and then with this I am calling my embedder function:</p> <pre><code>def get_embedding(text: str, ollama_client: ollama.Client) -&gt; List[float]: &quot;&quot;&quot; Get embedding from Ollama API using direct HTTP request with proper resource management. Args: text (str): The text to be embedded. Returns: List[float]: The embedding vector as a list of floats. &quot;&quot;&quot; try: response = ollama_client.embeddings(model=VECTORIZER_MODELNAME, prompt=text) return response['embedding'] except Exception as e: print(f&quot;Error getting embedding: {e}&quot;) # Return zero vector as fallback return [0.0] * VECTORIZER_DIMENSION </code></pre> <p>and in my code I am invoking this for many strings such as the following:</p> <pre><code>highlight_vector = get_embedding(highlight, ollama_client) </code></pre> <p>all works well but at the end of my program I get the following warning:</p> <pre><code>sys:1: ResourceWarning: unclosed &lt;socket.socket fd=15, family=30, type=1, proto=6, laddr=('::1', 50598, 0, 0), raddr=('::1', 11434, 0, 0)&gt; </code></pre> <p>Python 3.11.7, ollama library 0.3.3</p>
<python><ollama>
2024-10-23 16:16:19
0
1,231
Robert Alexander
79,118,703
3,225,420
How to format Graph API call to update Excel cell?
<p>I want to update an Excel cell value (hosted in SharePoint) using the Microsoft Graph API. Documentation is <a href="https://learn.microsoft.com/en-us/graph/api/range-update?view=graph-rest-1.0&amp;tabs=http" rel="nofollow noreferrer">here</a>.</p> <p>I have had success with other API calls (format updates, clearing cells) using a similar format as below, please help me understand what I'm doing wrong.</p> <p>In the code example I want to write &quot;41373&quot; to cell B:17 in sheet 'Plant_3'. The code example returns a <code>400</code> code error:</p> <pre><code>&quot;code&quot;: &quot;InvalidAuthenticationToken&quot;, &quot;message&quot;: &quot;Access token is empty.&quot;, </code></pre> <p>This message can be generic and mean the formatting of an API call is wrong rather than an authentication problem. Other Graph API calls work with the same <code>id</code>s and <code>keys</code> so I doubt it's an authentication issue.</p> <pre><code>def graph_config_params(): &quot;&quot;&quot; read authentication parameters from config file, return dict with results :return: (dict) configuration parameters &quot;&quot;&quot; try: # Read the config file config = configparser.ConfigParser() config.read('config.ini') # assign authorization tokens from config file to variables return_dict = {'client_id': config['entra_auth']['client_id'], 'client_secret': config['entra_auth']['client_secret'], 'tenant_id': config['entra_auth']['tenant_id'], # Get necessary configuration data for the SharePoint file 'site_id': config['site']['site_id'], 'document_library_id': config['site']['document_library_id'], 'doc_id': config['site']['doc_id'], 'drive_id': config['site']['drive_id']} return return_dict except Exception as e: log.exception(e) def set_up_ms_graph_authentication(graph_auth_args=graph_config_params()): &quot;&quot;&quot; Create headers with access token for Microsoft Graph API to authenticate :param graph_authentication_params: :return: (dict) headers with access token 
&quot;&quot;&quot; try: # Set up Microsoft Graph authentication msal_scope = ['https://graph.microsoft.com/.default'] msal_app = ConfidentialClientApplication( client_id=graph_auth_args['client_id'], authority=f&quot;https://login.microsoftonline.com/{graph_auth_args['tenant_id']}&quot;, client_credential=graph_auth_args['client_secret']) result = msal_app.acquire_token_silent(scopes=msal_scope, account=None) if not result: result = msal_app.acquire_token_for_client(scopes=msal_scope) if 'access_token' in result: access_token = result['access_token'] else: raise Exception(&quot;Failed to acquire token&quot;) # Prepare request headers with the acquired access token headers = {'Authorization': f'Bearer {access_token}'} # log.info(f'headers: {headers}') return headers except Exception as e: log.exception(e) sheet = 'Plant_3' # get variables that work in other Graph API calls config_args=graph_config_params() # get Authorization Bearer token, again works in other Graph API calls. my_headers = set_up_ms_graph_authentication() url= f&quot;https://graph.microsoft.com/v1.0/sites/{config_args['site_id']}/drives/{config_args['drive_id']}/items/{config_args['doc_id']}/workbook/worksheets/{sheet}/range/(address='B:17')&quot; response = requests.patch(url, headers=my_headers, json={'values': [['41373']] }) </code></pre> <p>Things I've tried:</p> <ul> <li>Switched keyword argument from <code>json</code> to <code>data</code>.</li> <li>Single brackets / no brackets around '41373', both in <code>json</code> and <code>data</code>.</li> <li>Verified permissions, removed <code>Files.ReadWrite.All</code> and got a <code>403</code> error.</li> <li>Switched 'patch' to <code>post</code> and tried all configurations mentioned above.</li> <li>Adding <code>valueType</code> argument: [['String']]</li> </ul>
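Two details in the URL stand out and either one can make Graph reject the request before it evaluates the token: `'B:17'` is not a valid single-cell address (that would be `'B17'`), and there is a stray slash between `range` and its parenthesised argument. A sketch of the corrected URL construction — the ids here are placeholders, and the actual send is left commented out:

```python
site_id, drive_id, doc_id = "SITE_ID", "DRIVE_ID", "DOC_ID"  # placeholders
sheet = "Plant_3"

# 'B17' (no colon) addresses a single cell, and the function-style path
# segment is range(address='B17') with no slash before the parenthesis
url = (
    f"https://graph.microsoft.com/v1.0/sites/{site_id}"
    f"/drives/{drive_id}/items/{doc_id}"
    f"/workbook/worksheets/{sheet}/range(address='B17')"
)
print(url)

# the PATCH body itself looks right: a 2-D array of values
# requests.patch(url, headers=my_headers, json={"values": [["41373"]]})
```

If a column-row span was actually intended, the address form would be a range like `'B17:B17'` or `'B1:B17'`, never `'B:17'`.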
<python><excel><azure><microsoft-graph-api><microsoft-graph-files>
2024-10-23 15:50:14
1
1,689
Python_Learner
79,118,646
12,415,855
OpenAI Assistant - working for one API_Key but not for another one?
<p>i have the following simple assistant using the gpt-4o model -</p> <pre><code>import os import sys from dotenv import load_dotenv from openai import OpenAI path = os.path.abspath(os.path.dirname(sys.argv[0])) fn = os.path.join(path, &quot;.env&quot;) load_dotenv(fn) CHATGPT_API_KEY = os.environ.get(&quot;CHATGPT_API_KEY&quot;) client = OpenAI(api_key = CHATGPT_API_KEY) fn = os.path.join(path, &quot;workText.txt&quot;) vector_store = client.beta.vector_stores.create(name=&quot;HTML-File&quot;) file_paths = [fn] file_streams = [open(path, &quot;rb&quot;) for path in file_paths] file_batch = client.beta.vector_stores.file_batches.upload_and_poll( vector_store_id=vector_store.id, files=file_streams ) print(file_batch.status) print(file_batch.file_counts) # exit() print(f&quot;Preparing assistant&quot;) assistant = client.beta.assistants.create( name=&quot;HTML Analyse Assistant&quot;, instructions=&quot;You are a machine learning researcher, answer questions about the provided text-file&quot;, model=&quot;gpt-4o&quot;, tools=[{&quot;type&quot;: &quot;file_search&quot;}], ) assistant = client.beta.assistants.update( assistant_id=assistant.id, tool_resources={&quot;file_search&quot;: {&quot;vector_store_ids&quot;: [vector_store.id]}}, ) question = &quot;&quot;&quot; What is the name of the location? 
&quot;&quot;&quot; print(f&quot;Preparing thread&quot;) thread = client.beta.threads.create() print(f&quot;Preparing question&quot;) results = client.beta.threads.messages.create( thread_id = thread.id, role = &quot;user&quot;, content = question ) print(f&quot;Running for answer&quot;) run = client.beta.threads.runs.create ( thread_id = thread.id, assistant_id = assistant.id ) while run.status not in [&quot;completed&quot;, &quot;failed&quot;]: run = client.beta.threads.runs.retrieve ( thread_id = thread.id, run_id = run.id ) if run.status == &quot;completed&quot;: results = client.beta.threads.messages.list( thread_id=thread.id ) # resultAnswer = results.data[0].content[0].text.value fn = os.path.join(path, &quot;result.txt&quot;) with open(fn, &quot;w&quot;, encoding=&quot;utf-8&quot;, errors=&quot;ignore&quot;) as f: lines = f.writelines(str(results)) </code></pre> <p>I have several openai API-Keys and it works fine for one of them with this result output:</p> <pre><code>SyncCursorPage[Message](data=[Message(id='msg_Z6iImlhDNFDmXoCszSl8hOrp', assistant_id='asst_eYsWBFev7jeZZC3MRyZ9PuJK', attachments=[], completed_at=None, content=[TextContentBlock(text=Text(annotations=[FileCitationAnnotation (end_index=117, file_citation=FileCitation(file_id='file-v3Y3HH9Kd0L2doxuoRVNU6kL'), start_index=99, text='【4:0†workText.txt】', type='file_citation')], value='The name of the location is &quot;Benediktinerabtei Gerleve&quot; which is located in Billerbeck, Deutschland【4:0†workText.txt】.'), type='text')], created_at=1729687317, incomplete_at=None, incomplete_details=None, metadata={}, object='thread.message', role='assistant', run_id='run_6KbPCSO8VqIP0bbqFjuSp4fV', status=None, thread_id='thread_aaRH59cvaPGh5L0J8WIviBBU'), Message(id='msg_GPAug3kAr7GSRSEuIPRUmYvc', assistant_id=None, ttachments=[], completed_at=None, content=[TextContentBlock(text=Text(annotations=[], value='\n\nWhat is the name of the location?\n\n'), type='text')], created_at=1729687313, 
incomplete_at=None, incomplete_details=None, metadata={}, object='thread.message', role='user', run_id=None, status=None, thread_id='thread_aaRH59cvaPGh5L0J8WIviBBU')], object='list', first_id='msg_Z6iImlhDNFDmXoCszSl8hOrp', last_id='msg_GPAug3kAr7GSRSEuIPRUmYvc', has_more=False) </code></pre> <p>But with another API-Key it seems not to work - i get no data output -</p> <pre><code>Message(id='msg_NCxFnnQPXL0gKhsxE2hJhYWh', assistant_id=None, attachments=[], completed_at=None, content=[TextContentBlock(text=Text(annotations=[], value='\nWhat is the name of the location?\n'), type='text')], created_at=1729696968, incomplete_at=None, incomplete_details=None, metadata={}, object='thread.message', role='user', run_id=None, status=None, thread_id='thread_puBTFtaoMvauMBO2D6Wc2ylr') </code></pre> <p>Why is this working with one API-Key but not with the other one...?</p>
<python><openai-api>
2024-10-23 15:32:36
2
1,515
Rapid1898
79,118,250
4,752,874
Python Unable to Map Dataframe Columns by Name to List of Objects
<p>I have a dataframe like the table shown below.</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>id</th> <th>shop</th> <th>var1</th> <th>var2</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>a</td> <td>a</td> <td>b</td> </tr> <tr> <td>2</td> <td>b</td> <td>b</td> <td>c</td> </tr> </tbody> </table></div> <p>I would like to populate a list of objects using just the id and shop columns; however, the columns in the table may not always be in the order shown, so I would like to reference them by name as opposed to by index, which is what the code below does. I've searched online but can't find a solution.</p> <pre><code>class Test: def __init__(self, id, shop): self.id = id self.shop = shop def test_list(df:pd.DataFrame)-&gt;list: return list(map(lambda x:Test(id=x[0],shop=x[1]),df.values.tolist()))  </code></pre>
<python><dataframe><list><object>
2024-10-23 13:57:34
1
349
CGarden
79,118,142
10,069,064
How to upgrade NumPy in Ubuntu?
<p>My OS is Ubuntu 22.04.4 LTS. The &quot;pip list&quot; command tells me that I have NumPy 1.21.5, but I need a newer version of NumPy.</p> <p>I am trying to install the NumPy package with the following command:</p> <pre><code>pip install --upgrade numpy==2.1.2 </code></pre> <p>But here is an error:</p> <pre><code>&lt;...&gt; Run-time dependency python found: NO (tried pkgconfig, pkgconfig and sysconfig) ../meson.build:41:12: ERROR: python dependency not found </code></pre> <p>I found that I must install the -dev package, so I did that:</p> <pre><code>sudo apt install python3-dev </code></pre> <p>...but the NumPy installation error is still there.</p> <p>What shall I do to upgrade NumPy?</p>
<python><numpy><ubuntu><meson-build>
2024-10-23 13:31:53
4
304
Arseniy
79,118,016
8,249,257
How to preserve data types when working with pandas and sklearn transformers?
<p>While working with a large sklearn <code>Pipeline</code> (fit using a <code>DataFrame</code>) I ran into an error that led back to a wrong data type in my input. The problem occurred on a single observation coming from an API that is supposed to interface the model in production. Missing information in a single line makes pandas (obviously) incapable of inferring the correct dtype, but I thought that my fit transformers would handle the conversions. Apparently, I am mistaken.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd from sklearn.impute import SimpleImputer X_tr = pd.DataFrame({&quot;A&quot;: [5, 8, None, 4], &quot;B&quot;: [3, None, 9.9, 12]}) print(X_tr.dtypes) #&gt;&gt; A float64 #&gt;&gt; B float64 x = pd.DataFrame({&quot;A&quot;: [10.1], &quot;B&quot;: [None]}) print(x.dtypes) #&gt;&gt; A float64 #&gt;&gt; B object </code></pre> <p>The above shows clearly that pandas infers the <code>float64</code> types for columns A and B in the training dataset, however (again obviously) for the single observation it doesn't know the dtype for column B so it assigns <code>object</code>. No issue so far.
But let's imagine a <code>SimpleImputer</code> somewhere within a <code>Pipeline</code> to replace the missing values:</p> <pre class="lang-py prettyprint-override"><code>imputer = SimpleImputer( fill_value=0, strategy=&quot;constant&quot;, missing_values=pd.NA ).set_output(transform=&quot;pandas&quot;) X_tr_im = imputer.fit_transform(X_tr) # training print(X_tr_im.dtypes) #&gt;&gt; A float64 #&gt;&gt; B float64 x_im = imputer.transform(x) print(x_im.dtypes) #&gt;&gt; A object #&gt;&gt; B object </code></pre> <p>The imputer does replace the <code>None</code> values with zeros in all cases, however, two things happened that I did not expect:</p> <ul> <li>Column B was NOT converted to the dtype that it was fit on</li> <li>Column A was converted to the unwanted dtype of <code>object</code></li> </ul> <p>This creates two unwanted non-numeric data types that lead to errors further down the pipeline. Even if it is not the task of transformers to preserve dtypes, in my case it would still be very helpful.</p> <p>Am I doing something fundamentally wrong? Are there any solutions available?</p>
<python><pandas><scikit-learn>
2024-10-23 13:01:28
2
478
Woodly0
79,117,556
8,832,008
multiprocessing queues losing reference when passed to worker process
<p>I am creating a multiprocessing Manager as a global variable in the main process and then create a Queue with this manager for each task I want to process. These Queue objects are then passed into a pool of workers, one queue per task.</p> <p>When these queues are accessed, something is losing its reference, and I can't figure out why:</p> <pre><code>Traceback (most recent call last): File &quot;/net/home/cmosig/miniconda3/envs/env_conda/lib/python3.12/multiprocessing/managers.py&quot;, line 209, in _handle_request result = func(c, *args, **kwds) ^^^^^^^^^^^^^^^^^^^^^^ File &quot;/net/home/cmosig/miniconda3/envs/env_conda/lib/python3.12/multiprocessing/managers.py&quot;, line 438, in incref raise ke File &quot;/net/home/cmosig/miniconda3/envs/env_conda/lib/python3.12/multiprocessing/managers.py&quot;, line 426, in incref self.id_to_refcount[ident] += 1 ~~~~~~~~~~~~~~~~~~~^^^^^^^ KeyError: '712cf1996780' </code></pre> <p>Minimal Example:</p> <pre><code>import multiprocessing as mp from joblib import Parallel, delayed GLOBAL_QUEUE_MANAGER = mp.Manager() def process(queue): queue.put(&quot;test&quot;) queue.close() return def job_generator(): for i in range(10): queue = GLOBAL_QUEUE_MANAGER.Queue() yield {&quot;queue&quot;: queue} Parallel(n_jobs=3, batch_size=1, backend=&quot;multiprocessing&quot;)(delayed(process)(**p) for p in job_generator()) </code></pre> <p>EDIT: What I want to avoid is creating all Queue objects for all tasks beforehand as there could be potentially thousands of tasks and I am worried about overhead.</p> <p>EDIT2: FWIW: worker processes can't create queues as they are daemonic.</p> <p>The complete implementation: <a href="https://github.com/cmosig/sentle/blob/cba4e40782d54df646f16dbaf4dadcc1f058a921/sentle/sentle.py#L622" rel="nofollow noreferrer">https://github.com/cmosig/sentle/blob/cba4e40782d54df646f16dbaf4dadcc1f058a921/sentle/sentle.py#L622</a></p>
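One pattern that appears to avoid this class of `KeyError` (my assumption: the proxy yielded by the generator can be garbage-collected in the parent before the worker increments its refcount) is to hold a parent-side reference to every queue until the pool finishes. A minimal sketch, using the stdlib `multiprocessing.Pool` instead of joblib to stay self-contained:

```python
import multiprocessing as mp

def process(queue):
    queue.put("test")
    return queue.get()

def main():
    manager = mp.Manager()
    keep_alive = []  # parent-side references to every queue proxy

    def jobs():
        for _ in range(3):
            q = manager.Queue()
            keep_alive.append(q)  # prevents premature GC of the proxy
            yield q

    with mp.Pool(2) as pool:
        results = pool.map(process, jobs())
    return results

if __name__ == "__main__":
    print(main())  # ['test', 'test', 'test']
```

The list can be pruned as tasks complete if thousands of queues are a memory concern; the key point is that each proxy must outlive the hand-off to its worker.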
<python><multiprocessing><joblib>
2024-10-23 10:38:17
0
1,334
cmosig
79,117,267
2,785,041
Show most recent python logging messages in case of an error
<p>I'm using python <a href="https://docs.python.org/3/library/logging.html" rel="nofollow noreferrer"><code>logging</code></a> to log messages. One module (let's call it <code>checker</code>) produces a lot of tracing information, which is usually omitted (by setting the log level to <code>ERROR</code>) as it otherwise produces too many messages.</p> <p>However, in some cases, an error occurs in another module <code>runner</code> that calls <code>checker</code>. In case of an error, I would like to see the most recent log messages from <code>checker</code>.</p> <p>How can I do this with python's <code>logging</code>?</p> <p><code>spdlog</code> (a C++ logging library), has <a href="https://github.com/gabime/spdlog?tab=readme-ov-file#backtrace-support" rel="nofollow noreferrer">backtrace support</a> that uses a ring buffer to accomplish this. Does something similar exist in python?</p>
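The stdlib has something close to spdlog's backtrace: `logging.handlers.MemoryHandler` buffers records and flushes them to a target handler only when a record at `flushLevel` (or above) arrives. One caveat: it also flushes when `capacity` is reached, so it is not a strict ring buffer. A minimal sketch:

```python
import logging
from logging.handlers import MemoryHandler

checker = logging.getLogger("checker")
checker.setLevel(logging.DEBUG)

console = logging.StreamHandler()

# Hold up to 100 records; dump them to the console only once a record of
# severity ERROR (or higher) shows up.
buffered = MemoryHandler(capacity=100, flushLevel=logging.ERROR, target=console)
checker.addHandler(buffered)

checker.debug("trace 1")  # held in the buffer, nothing printed yet
checker.debug("trace 2")
checker.error("boom")     # flushes: trace 1, trace 2 and boom all appear
```

For a true fixed-size ring buffer (old records silently dropped instead of flushed on overflow), you would need to subclass `MemoryHandler` and override `shouldFlush`/`emit`.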
<python><logging>
2024-10-23 09:25:13
1
3,434
morxa
79,117,238
9,653,275
Problem installing rgee in R studio on windows machine
<p>I am trying to install <code>rgee</code> in Rstudio and I'm hitting a problem at start. I install <code>rgee</code> and run <code>ee_install (py_env = &quot;rgee&quot;)</code> and I get the following errors. I follow the instructions and run <code>ee_clean_pyenv()</code>. It doesn't seem to do anything but I restart the session and run <code>ee_install(py_env = &quot;rgee&quot;)</code> again and it just repeats the same issue. I have tried running simply <code>ee_install()</code> as well but that causes the same issue.</p> <p>What is the best way to fix this issue?</p> <pre><code>&gt; library(rgee) &gt; ee_install(py_env = &quot;rgee&quot;) ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────── Python configuration used to create rgee ── python: C:/Users/Mike Williamson/Documents/.virtualenvs/r-reticulate/Scripts/python.exe libpython: C:/Users/Mike Williamson/AppData/Local/Programs/Python/Python39/python39.dll pythonhome: C:/Users/Mike Williamson/Documents/.virtualenvs/r-reticulate version: 3.9.6 (tags/v3.9.6:db3ff76, Jun 28 2021, 15:26:21) [MSC v.1929 64 bit (AMD64)] Architecture: 64bit numpy: C:/Users/Mike Williamson/Documents/.virtualenvs/r-reticulate/Lib/site-packages/numpy numpy_version: 2.0.2 ee: [NOT FOUND] ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── 1. Removing the previous Python Environment (rgee), if it exists ... rgee not found 2. Creating a Python Environment (rgee) Error in value[[3L]](cond) : An error occur when ee_install was creating the Python Environment. Run ee_clean_pyenv() and restart the R session, before trying again. </code></pre>
<python><r><windows><error-handling><rgee>
2024-10-23 09:17:41
0
541
mikejwilliamson
79,117,121
1,744,834
Polars get all possible categories as physical representation
<p>Given a DataFrame with categorical column:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame({ &quot;id&quot;: [&quot;a&quot;, &quot;a&quot;, &quot;a&quot;, &quot;b&quot;, &quot;b&quot;, &quot;b&quot;, &quot;b&quot;], &quot;value&quot;: [1,1,1,6,6,6,6], }) res = df.with_columns(bucket = pl.col.value.cut([1,3])) </code></pre> <pre><code>shape: (7, 3) ┌─────┬───────┬───────────┐ │ id ┆ value ┆ bucket │ │ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ cat │ ╞═════╪═══════╪═══════════╡ │ a ┆ 1 ┆ (-inf, 1] │ │ a ┆ 1 ┆ (-inf, 1] │ │ a ┆ 1 ┆ (-inf, 1] │ │ b ┆ 6 ┆ (3, inf] │ │ b ┆ 6 ┆ (3, inf] │ │ b ┆ 6 ┆ (3, inf] │ │ b ┆ 6 ┆ (3, inf] │ └─────┴───────┴───────────┘ </code></pre> <p>How do I get all potential values of the categorical column? I can get them as strings with <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.cat.get_categories.html" rel="nofollow noreferrer"><code>pl.Expr.cat.get_categories()</code></a> as strings?</p> <pre class="lang-py prettyprint-override"><code>res.select(pl.col.bucket.cat.get_categories()) </code></pre> <pre><code>shape: (3, 1) ┌───────────┐ │ bucket │ │ --- │ │ str │ ╞═══════════╡ │ (-inf, 1] │ │ (1, 3] │ │ (3, inf] │ └───────────┘ </code></pre> <p>I can also get all existing values in their physical representation with <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.to_physical.html" rel="nofollow noreferrer"><code>pl.Expr.to_physical()</code></a></p> <pre class="lang-py prettyprint-override"><code>res.select(pl.col.bucket.to_physical()) </code></pre> <pre><code>shape: (7, 1) ┌────────┐ │ bucket │ │ --- │ │ u32 │ ╞════════╡ │ 0 │ │ 0 │ │ 0 │ │ 2 │ │ 2 │ │ 2 │ │ 2 │ └────────┘ </code></pre> <p>But how I can get all potential values in their physical representation? 
I'd expect output like:</p> <pre><code>shape: (3, 1) ┌────────┐ │ bucket │ │ --- │ │ u32 │ ╞════════╡ │ 0 │ │ 1 │ │ 2 │ └────────┘ </code></pre> <p>Or should I just assume that it's always encoded as range of integers without gaps?</p>
<python><python-polars><categorical>
2024-10-23 08:46:30
1
118,326
roman
79,117,088
17,473,587
Discrepancy in Record Count Between Django ORM and Raw SQL Query
<p>I'm encountering an issue where the count of records returned by a Django ORM query does not match the count returned by a raw SQL query. Here is the relevant part of my Django view:</p> <pre><code>start_date = datetime(2024, 10, 19, 0, 0, 0) end_date = datetime(2024, 10, 19, 23, 59, 59) dbug = Reservations.objects.all().filter(updated_at__range=(start_date, end_date)) print(dbug.count()) </code></pre> <p>Above returns <strong>6529</strong></p> <p>The Django settings.py contains:</p> <pre><code>TIME_ZONE = 'Asia/Tehran' USE_TZ = False </code></pre> <p>I have tried the equivalent SQL query below:</p> <pre><code>SELECT COUNT(*) FROM &quot;consultant_reservations&quot; WHERE updated_at BETWEEN '2024-10-19 00:00:00' AND '2024-10-19 23:59:59'; count ------- 6540 (1 row) </code></pre> <hr /> <p>There is a discrepancy between the SQL query result run in the psql terminal (<strong>6540</strong>) and the Django ORM result (<strong>6529</strong>)</p> <hr /> <h2>Let me present an example:</h2> <p><strong>Running this SQL query:</strong></p> <pre><code>SELECT * FROM &quot;consultant_reservations&quot; WHERE updated_at BETWEEN '2024-10-19 00:00:00' AND '2024-10-19 23:59:59' LIMIT 4; </code></pre> <p><strong>Result:</strong></p> <pre><code> id | idd | voip_number | client_id | client_mobile | reserve_duration | status | reserve_timestamp | created_at | updated_at | consultant_id_id | mobile_id | created_by | updated_by -------+---------+-------------+-----------+---------------+------------------+----------+------------------------+------------------------+------------------------+------------------+-----------+------------+------------ 76407 | 2011050 | 2217 | 1101151 | 09355648120 | 3600 | reserved | 2024-10-19 19:30:00+00 | 2024-10-14 08:40:03+00 | 2024-10-19 20:28:01+00 | 5052 | 2395781 | 3445 | 0 1408 | 1958653 | 1119 | 754939 | 09142477905 | 3600 | reserved | 2024-10-19 05:30:00+00 | 2024-09-28 06:17:04+00 | 2024-10-19 06:28:01+00 | 3791 | 974986 | 87 | 0 
1514 | 1958759 | 2571 | 947805 | 09334143576 | 3600 | reserved | 2024-10-19 09:30:00+00 | 2024-09-28 06:34:05+00 | 2024-10-19 10:28:01+00 | 5374 | 1711586 | 3802 | 0 60371 | 1997347 | 2589 | 1070143 | 09033927800 | 3600 | reserved | 2024-10-19 12:30:00+00 | 2024-10-09 11:42:37+00 | 2024-10-19 13:28:02+00 | 5385 | 2279104 | 3814 | 0 (4 rows) </code></pre> <p><strong>Running this Django query:</strong></p> <pre><code>start_date = datetime(2024, 10, 19, 0, 0, 0) end_date = datetime(2024, 10, 19, 23, 59, 59) dbug = Reservations.objects.all().filter(updated_at__range=(start_date, end_date))[:4] data = list(dbug.values()) df = pd.DataFrame(data) print(df.head(4)) </code></pre> <p><strong>Result is:</strong></p> <pre><code> id idd consultant_id_id voip_number client_id client_mobile ... status reserve_timestamp created_at created_by updated_by updated_at 0 76407 2011050 5052 2217 1101151 09355648120 ... reserved 2024-10-19 23:00:00 2024-10-14 12:10:03 3445 0 2024-10-19 23:58:01 1 1408 1958653 3791 1119 754939 09142477905 ... reserved 2024-10-19 09:00:00 2024-09-28 09:47:04 87 0 2024-10-19 09:58:01 2 1514 1958759 5374 2571 947805 09334143576 ... reserved 2024-10-19 13:00:00 2024-09-28 10:04:05 3802 0 2024-10-19 13:58:01 3 60371 1997347 5385 2589 1070143 09033927800 ... reserved 2024-10-19 16:00:00 2024-10-09 15:12:37 3814 0 2024-10-19 16:58:02 [4 rows x 14 columns] </code></pre> <p>This illustrates the discrepancy between the SQL result and the ORM result.</p> <p>The consultant_reservations table corresponds to the Reservations model.</p> <p>Regards.</p>
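One likely source of the mismatch (my assumption, not confirmed by the question): with `USE_TZ = False` the ORM compares against naive Tehran-local boundaries, while the psql session compared the same literals in a different session time zone (the `+00` suffixes in the output suggest UTC). Asia/Tehran is UTC+03:30 year-round since 2022, so the two queries cover different 24-hour windows:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

tehran = ZoneInfo("Asia/Tehran")

# The naive local-day boundaries the ORM effectively uses...
start_local = datetime(2024, 10, 19, 0, 0, 0, tzinfo=tehran)
end_local = datetime(2024, 10, 19, 23, 59, 59, tzinfo=tehran)

# ...map to a UTC window shifted by 3.5 hours relative to the psql one:
print(start_local.astimezone(timezone.utc))  # 2024-10-18 20:30:00+00:00
print(end_local.astimezone(timezone.utc))    # 2024-10-19 20:29:59+00:00
```

Rows with `updated_at` between 20:30:00 and 23:59:59 UTC on the 19th fall inside the psql window but outside the ORM's, which would account for the 11-row difference. Comparing with `SET timezone = 'Asia/Tehran'` in psql, or enabling `USE_TZ`, should make both counts agree.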
<python><sql><django><postgresql><psql>
2024-10-23 08:35:50
0
360
parmer_110
79,116,936
4,483,043
how to add images in the markdown html gradio
<p>I am adding images to my Gradio header. I explored the documentation and found that it allows Markdown or HTML, so you can load images and apply CSS to style them. However, when I try it, the images don't load properly.</p> <pre><code>import gradio as gr gr.HTML(&quot;&lt;img src='image.png' alt='image One'&gt;&quot;) </code></pre>
<python><html><user-interface><chatbot><gradio>
2024-10-23 07:49:29
1
437
Farooq Zaman
79,116,922
1,587,132
why does fun.__code__.co_names not include all global variables?
<p>Here's some python code:</p> <pre><code>dict_de_en = {'we':'wir', 'love':'lieben', 'cake':'kuchen'} def fun1(sentence): return &quot; &quot;.join([dict_de_en[w] for w in sentence.split()]) def fun2(sentence): dd = dict_de_en return &quot; &quot;.join([dd[w] for w in sentence.split()]) print(&quot;f1:&quot;, fun1(&quot;we love cake&quot;)) # &quot;wir lieben kuchen&quot; print(&quot;f2:&quot;, fun2(&quot;we love cake&quot;)) # ditto print(&quot;n1&quot;, fun1.__code__.co_names) # ('join', 'split') print(&quot;n2&quot;, fun2.__code__.co_names) # ('dict_de_en', 'join', 'split') </code></pre> <p>why does <code>fun1</code> not contain <code>dict_de_en</code> in <code>.co_names</code>?<br /> It does work as expected for:</p> <pre><code>dict_de_en = {'we':'wir', 'love':'lieben', 'cake':'kuchen'} def fun3(word): return dict_de_en[word] def fun4(word): dd = dict_de_en return dd[word] print(&quot;f3:&quot;, fun1(&quot;we&quot;)) # &quot;wir&quot; print(&quot;f4:&quot;, fun2(&quot;we&quot;)) # ditto print(&quot;n3&quot;, fun3.__code__.co_names) # ('dict_de_en',) print(&quot;n4&quot;, fun4.__code__.co_names) # ('dict_de_en',) </code></pre> <p>The background is that I am running a Python MOOC with automated task scoring. It should find and warn informatively about the usage of global variables in the participant's solution functions. My setup generally works well so far, except for the situation in fun1.</p>
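This appears to happen because, on the Python versions where it occurs (list comprehensions before 3.12, generator expressions on all versions), the comprehension compiles to its own nested code object, and the global lookup is recorded in that nested object's `co_names` rather than the enclosing function's. For the automated scoring use case, a sketch that walks nested code objects recursively should catch both cases:

```python
import types

dict_de_en = {"we": "wir", "love": "lieben", "cake": "kuchen"}

def fun1(sentence):
    return " ".join([dict_de_en[w] for w in sentence.split()])

def global_names(code):
    """co_names of a code object plus those of every nested code object
    (comprehensions, generator expressions, lambdas, inner functions)."""
    names = set(code.co_names)
    for const in code.co_consts:
        if isinstance(const, types.CodeType):
            names |= global_names(const)
    return names

# Finds 'dict_de_en' regardless of where the compiler placed the lookup
print(sorted(global_names(fun1.__code__)))
```

Note that `co_names` also contains attribute and method names (`join`, `split` here), so you would still need to filter against the module's actual globals when reporting.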
<python>
2024-10-23 07:44:43
2
1,556
Berry Boessenkool
79,116,803
13,641,358
os.system multiple return values
<p>Is it possible to get multiple return values for an <code>os.system()</code> call?</p> <p>The following example always returns 0 even though the Python call returns an error code:</p> <pre class="lang-py prettyprint-override"><code>ret_val = os.system(f&quot;python3 -c 'import sys; sys.exit(1)' | tee test.log&quot;) </code></pre> <p>The goal is to get a list of return values, in this case:</p> <pre class="lang-py prettyprint-override"><code>ret_val == [1, 0] </code></pre>
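`os.system` only reports the shell's overall status (which, for a pipeline, is the last command's), so per-command exit codes need either bash's `PIPESTATUS` array or, staying in Python, one `subprocess.Popen` per pipeline stage. A sketch of the latter (assuming a Unix system with `tee` available):

```python
import subprocess
import sys

# First stage: the failing interpreter call, stdout feeding the pipe
p1 = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.exit(1)"],
    stdout=subprocess.PIPE,
)
# Second stage: tee, reading from the first stage
p2 = subprocess.Popen(
    ["tee", "test.log"],
    stdin=p1.stdout,
    stdout=subprocess.DEVNULL,
)
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits first

ret_val = [p1.wait(), p2.wait()]
print(ret_val)  # [1, 0]
```

This also sidesteps shell quoting entirely, since each stage is an argument list rather than a shell string.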
<python><python-3.x>
2024-10-23 07:23:31
1
663
ucczs
79,116,088
2,449,857
Handling the presence or absence of a token in EBNF / Lark
<p>I'm making early steps with the Lark library in Python and am really looking forward to replacing a lot of awful <code>if</code> statements with an EBNF parser..!</p> <p>The task here is interpreting the times written in railway timetables. <code>12:34</code> means the train will stop, <code>12/34</code> means the train is not expected to stop, <code>12d34</code> means the train may stop to set down passengers only... and there's at least four more variations like this. Times are to the half-minute: <code>12:34h</code> means <code>12:34:30</code> and it's that <code>h</code> I'm having trouble with.</p> <p>I have a dataclass and enum to store the parsed values:</p> <pre><code>from dataclasses import dataclass from enum import Enum, auto @dataclass(frozen=True) class TTime: class StopMode(Enum): STOPPING = auto() PASSING = auto() SET_DOWN = auto() # etc... hour: int minute: int second: int = 0 stopmode: StopMode = StopMode.PASSING </code></pre> <pre class="lang-py prettyprint-override"><code>from lark import Lark, Transformer class TreeToTTime(Transformer): def hour(self, n): return int(&quot;&quot;.join(n)) def minute(self, n): return int(&quot;&quot;.join(n)) def stopmode(self, n): (n,) = n return n def stopping(self, _): return TTime.StopMode.STOPPING def passing(self, _): return TTime.StopMode.PASSING def setdown(self, _): return TTime.StopMode.SET_DOWN def halfminute(self, _): return True def ttime(self, args): hour, stopmode, minute, halfminute = args second = 30 if halfminute else 0 return TTime(hour, minute, second, stopmode) </code></pre> <pre class="lang-py prettyprint-override"><code>ttime_parser = Lark( r&quot;&quot;&quot; ttime : hour stopmode minute halfminute? hour : (&quot;0&quot;..&quot;2&quot;)? 
DIGIT minute : (&quot;0&quot;..&quot;5&quot;) DIGIT stopmode : stopping | passing | setdown stopping : &quot;:&quot; passing : &quot;/&quot; setdown : &quot;d&quot; halfminute : &quot;h&quot; | &quot;H&quot; %import common.DIGIT %import common.WS %ignore WS &quot;&quot;&quot;, start=&quot;ttime&quot;, ) def parse_ttime(text): tree = ttime_parser.parse(text) return TreeToTTime().transform(tree) </code></pre> <pre><code>import pytest @pytest.mark.parametrize( &quot;text,expected&quot;, [ (&quot; 0:00&quot;, TTime(0, 0, 0, TTime.StopMode.STOPPING)), (&quot;13:20&quot;, TTime(13, 20, 0, TTime.StopMode.STOPPING)), (&quot;00/00&quot;, TTime(0, 0, 0, TTime.StopMode.PASSING)), (&quot;12d34&quot;, TTime(12, 34, 0, TTime.StopMode.SET_DOWN)), # (&quot;01:23h&quot;, TTime(1, 23, 30, TTime.StopMode.STOPPING)), ], ) def test_parse_times(text, expected): assert expected == parse_ttime(text) </code></pre> <p>The problem is the <code>args</code> passed to <code>ttime</code>. If <code>halfminute</code> is absent i.e. <code>12:34</code>, <code>args</code> has three values: <code>(12, StopMode.STOPPING, 34)</code>. If <code>halfminute</code> is present i.e. <code>12:34h</code> it's four values: <code>(12, StopMode.STOPPING, 34, True)</code>.</p> <p>How do I configure the transformer here to include the <em>absence</em> of a token? For <code>12:34</code> I'd like <code>args</code> to be <code>[12, StopMode.STOPPING, 34, False]</code> for example.</p> <p>I could write:</p> <pre class="lang-py prettyprint-override"><code>hour, stopmode, minute, halfminute, *_ = list(args) + [False] </code></pre> <p>but I feel this is not the right approach - it only works because the halfminute token is the final token for example.</p> <p>Any other pointers gratefully received - I feel I've made rapid progress this evening but there's still a lot I'll be missing!</p>
<python><ebnf><lark-parser>
2024-10-23 00:27:17
1
3,489
Jack Deeth
79,116,021
8,737,211
Python Package Optional Sub-package if Optional Extras Specified
<p>Say I have created a python project <code>foobar</code> with the following layout:</p> <pre><code>foobar ├── __init__.py ├── core/ ├── network/ └── plotting/ </code></pre> <ol> <li>I would like to have <code>pip install foobar</code> to install only the core part of the package, along with core dependencies:</li> </ol> <pre><code>foobar ├── __init__.py └── core/ </code></pre> <ol start="2"> <li>I would like to have <code>pip install foobar[plotting]</code> to install plotting capabilities and extra dependencies on-top of the core installation:</li> </ol> <pre><code>foobar ├── __init__.py ├── core/ └── plotting/ </code></pre> <ol start="3"> <li>I would like to have <code>pip install foobar[all]</code> to install everything:</li> </ol> <pre><code>foobar ├── __init__.py ├── core/ ├── network/ └── plotting/ </code></pre> <p>Dependency groups for the above scenarios seem simple enough to define in the <code>pyproject.toml</code> file:</p> <pre class="lang-ini prettyprint-override"><code>[project] name = &quot;foobar&quot; ... dependencies = [ &quot;foo&quot;, &quot;bar&quot;, ] [project.optional-dependencies] network = [ &quot;requests&quot;, ] plotting = [ &quot;PyQt6&quot;, &quot;pyqtgraph&quot;, ] all = [ &quot;foobar[network,plotting]&quot;, ] dev = [ &quot;foobar[all]&quot;, &quot;pytest&quot;, &quot;ruff&quot;, &quot;black&quot;, ] </code></pre> <p>But it's unclear to me whether the package contents can be installed as I intend through some kind of definition in the <code>pyproject.toml</code> file. How can this be accomplished?</p>
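As far as I know, extras cannot do this directly: they only add *dependencies*, while the set of files in the wheel is fixed at build time. The usual pattern is to split the optional sub-packages into separate distributions (for example, contributing to a `foobar` namespace package) and have each extra pull in the corresponding distribution. A hypothetical sketch of the core distribution's `pyproject.toml` (the `foobar-network`/`foobar-plotting` names are my invention):

```toml
# pyproject.toml of the core "foobar" distribution.
# Extras can only add dependencies, not files, so each optional
# sub-package ships as its own distribution that installs into the
# shared "foobar" namespace package.
[project]
name = "foobar"
dependencies = ["foo", "bar"]

[project.optional-dependencies]
network = ["foobar-network"]    # separate dist providing foobar.network
plotting = ["foobar-plotting"]  # separate dist providing foobar.plotting
all = ["foobar[network,plotting]"]
```

The core package then omits `foobar/__init__.py` as a regular package boundary (implicit namespace packages, PEP 420), so `pip install foobar[plotting]` results in `foobar.core` plus `foobar.plotting` coexisting under one import root.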
<python><pyproject.toml>
2024-10-22 23:26:46
0
494
Campbell McDiarmid
79,116,000
10,203,572
Querying class members variables with DuckDB
<p>I have a codebase with an API to pass a SQL query to run for execution, but that same API does not provide a way to pass a variable. Meaning I do not get to pass a variable in the same scope as where the query runs:</p> <pre><code>class SampleClientClass(BaseExecutionAPI): def __init__(self): self.conn = duckdb.DuckDBPyConnection() def _execute_queries(self): qs = self._yield_queries() for q in qs: self.conn.sql(q) def _yield_queries(self): ... </code></pre> <p>I have tried passing it as a class member instead (since I do not get to override or alter the <code>_execute_queries</code> method) but that does not seem to work:</p> <pre><code>class MyTableCreationClass(BaseExecutionAPI): def __init__(self, df: pd.DataFrame): self.conn = duckdb.DuckDBPyConnection() self.my_df = df def _execute_queries(self): qs = self._yield_queries() for q in qs: self.conn.sql(q) def _yield_queries(self): return [ f&quot;&quot;&quot; INSERT INTO my_table BY NAME SELECT * FROM \&quot;self.my_df\&quot; &quot;&quot;&quot; ] </code></pre> <p>Which gives me the error: <code>Table with name self.my_df does not exist!</code></p> <p>Is there a way to have DuckDB pick up class members at all, or does it only work with local variables?</p>
<python><pandas><duckdb>
2024-10-22 23:10:14
1
1,066
Layman
79,115,743
15,008,906
sqlalchemy self referential 1 to many declarative relationship
<p>I'm still pretty new to sqlalchemy so any guidance is appreciated. I have the following class that does a 1 to 1 relationship with itself and works just fine:</p> <pre><code>class Unit(Base): __tablename__ = 'unit' uic: Mapped[str] = mapped_column(String(10), primary_key=True) parent = relationship(&quot;Unit&quot;, remote_side=[uic], uselist=False) parentuic = Column(String, ForeignKey('unit.uic'), nullable=True) </code></pre> <p>So when I query the unit, I'll get &quot;unit.parentuic&quot; as an expected string value. And I'll get &quot;unit.parent&quot; as a unit object. Looks like it's working as I expected it. I'm able to create a bunch of units that has the same parentuic and when I query the parent, I get a list of all the child unit objects so I'm good to go.</p> <p>But I also want to add a 1 to Many relationship to the same uic. Basically, I'm keeping track of a specific group of parents so I added this:</p> <pre><code> # Keep track of a specific group of parent objects parent_group: Mapped[list[&quot;Unit&quot;]] = relationship(back_populates=&quot;parent_group_uic&quot;) parent_group_uic: Mapped[&quot;Unit&quot;] = relationship( &quot;Unit&quot;, remote_side=[uic], back_populates=&quot;parent_group&quot; ) </code></pre> <p>This gives me all kinds of weird behaviors so I'm doing this all wrong I think. When I try to add a parent group, I get &quot;maximum recursion depth exceeded&quot;. Or if I start just looking at only a parent, it brings along parent_groups with it even though I didn't create a parent group yet. So my relationship logic is all wrong I'm guessing.</p> <p>Any ideas?</p> <p>Thanks for any help you can provide. Jon</p> <pre><code>SQLAlchemy 2.0.36 fastapi 0.112.0 asyncpg 0.30.0 </code></pre> <p>Edit to add Additional information requested by “python_user”</p> <p>Let me give a better description of what I’m trying to achieve. Basically, I’m creating a hierarchical structure of a unit org structure. This structure should never change. 
So it would look like:</p> <pre><code>parent_unit | |__ __ child_unit1 __ __ child_unit2 | |__ __ sub_child_unit3 </code></pre> <p>So a child has a 1 to 1 relationship to its parent. But when I query the parent, it can return many children. So maybe I explained that wrong? The key here is that once the org structure is created, it technically will stay the same.</p> <p>In addition to the above relationship, I want to also create groups with these units that CAN be changed. For example, I want to create a group comprised of child_unit2, child_unit1, and sub_child_unit3 only. And arrange them in an order like:</p> <pre><code>child_unit2 | |__child_unit1 __ __ sub_child_unit3 </code></pre> <p>But I can also have another group where child_unit1 is used like:</p> <pre><code>new_unit | |__ child_unit1 </code></pre> <p>So that means with regards to the groups, a unit can have a 1 to many relationship with many parents (ie child_unit1 has a parent of new_unit and a parent of child_unit2). Does that make sense? So I want to use the Unit table to do all this since all the units are kept there and the org structure is created from it based on a parent column. But I also want to mix and match them into custom groups. And I just noticed that as I have the code now, if I create unit AA as the parent, BB as the child and CC as the child, and then query the parent unit, it gives me this output back:</p> <pre><code>&quot;parent_group&quot;: [ { &quot;uic&quot;: &quot;BB&quot;, }, { &quot;uic&quot;: &quot;CC&quot;, } ] </code></pre> <p>And I’m not sure why <code>parent_group</code> has the child data instead of <code>parent</code>. I didn’t add a single thing to parent_group yet.</p> <p>So when it comes to the org structure, the child can only have a single parent unit. But when it comes to groups, the child can have many different parent units.</p>
<python><postgresql><sqlalchemy>
2024-10-22 21:11:27
1
413
Jon Hayden
79,115,566
6,564,826
How to reenter to the container script execution?
<p>I have a container and a Python script that needs to run forever, for example:</p> <pre><code>import time if __name__ == &quot;__main__&quot;: n = 0 while True: print(f&quot;Script is running! I want to see this string ! N is {n}&quot;) n += 1 time.sleep(200) </code></pre> <p>So I:</p> <ol> <li>Enter the virtual machine using MobaXterm</li> <li>Enter the container with <code>docker exec -it -u root container_name bash</code></li> <li>Run the script inside the container with <code>python my_script.py</code></li> <li>The script starts working and I can see the string it prints to stdout...</li> <li> <blockquote> <p>Script is running! I want to see this string !</p> </blockquote> </li> </ol> <p>But the session was interrupted, while the script is still running. What can I do to re-enter the &quot;script&quot; and see the strings/logs it produces? How can I do it <strong>without interrupting</strong> the script? It needs to run forever inside the container.</p>
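A session started with `docker exec` cannot be re-attached once it drops, so one common approach (a sketch, assuming you can restart the script once; the log path is my own choice) is to detach the script from the session and send its output to a file, then "re-enter" by tailing that file from any new session:

```shell
# Inside the container: start detached from the exec session, appending
# stdout/stderr to a log file. nohup keeps it alive after the session dies.
nohup python my_script.py >> /tmp/my_script.log 2>&1 &

# Later, from the host (or a fresh exec session): follow the live output
# without touching the running process.
docker exec container_name tail -f /tmp/my_script.log
```

For a long-term setup it is usually cleaner to make the script the container's main process (PID 1) so that `docker logs -f container_name` shows its output directly.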
<python><docker><cmd><scripting><containers>
2024-10-22 19:58:36
1
361
ooolllooo
79,115,559
8,028,053
requests.get consistent "IncompleteRead" for url
<p>When using requests.get as follows, I'm consistently getting an IncompleteRead error:</p> <p><code>response = requests.get(&quot;https://files.rcsb.org/header/6TAV.pdb&quot;)</code></p> <p>Tried, but did not work:</p> <ul> <li>changing from https to http based on (<a href="https://stackoverflow.com/questions/14442222/how-to-handle-incompleteread-in-python">How to handle IncompleteRead: in python</a>) and still get IncompleteRead</li> <li>setting <code>stream=True</code>, still get IncompleteRead</li> </ul> <p>What works:</p> <ul> <li>accessing the URL directly</li> <li>subprocess + wget</li> </ul> <p>Is there a recommended way to successfully download this URL/handle the incomplete read within python?</p>
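A sketch of a retry-then-salvage helper (my own pattern, not an official requests recipe): retry the plain GET a few times, and as a last resort stream the body and keep whatever bytes arrive before the connection drops. requests surfaces `IncompleteRead` as either `ChunkedEncodingError` or `ConnectionError` depending on the transfer encoding, so both are caught:

```python
import requests
from requests.exceptions import ChunkedEncodingError, ConnectionError

def fetch_lenient(url, retries=3, timeout=30):
    last_exc = None
    for _ in range(retries):
        try:
            resp = requests.get(url, timeout=timeout)
            resp.raise_for_status()
            return resp.content
        except (ChunkedEncodingError, ConnectionError) as exc:
            last_exc = exc
    # Salvage: stream and keep the bytes received before the drop
    buf = bytearray()
    try:
        with requests.get(url, stream=True, timeout=timeout) as resp:
            for chunk in resp.iter_content(8192):
                buf.extend(chunk)
    except (ChunkedEncodingError, ConnectionError):
        pass
    if buf:
        return bytes(buf)
    raise last_exc

# data = fetch_lenient("https://files.rcsb.org/header/6TAV.pdb")
```

Since the header endpoint is plain text, a salvaged partial body is often still usable; whether that is acceptable depends on how complete the PDB header needs to be.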
<python><python-requests>
2024-10-22 19:52:58
1
1,341
fileyfood500
79,115,545
3,078,502
Unexpected output in Python REPL using VS Code on Windows
<p>I have VS Code set up with Python extension, but <em><strong>without</strong></em> Jupyter extension.</p> <p>VS Code lets you send Python code from the editor to what it calls the &quot;Native REPL&quot; (a Jupyter-type interface without full notebook capabilities) or the &quot;Terminal REPL&quot; (the good old Python <code>&gt;&gt;&gt;</code> prompt). See <a href="https://code.visualstudio.com/docs/python/run" rel="nofollow noreferrer">https://code.visualstudio.com/docs/python/run</a>.</p> <p>I prefer working in the Terminal REPL. I have my keyboard shortcuts configured to use Ctrl+Enter (instead of Shift+Enter) to send the selection to the terminal:</p> <pre><code>[ { &quot;key&quot;: &quot;ctrl+enter&quot;, &quot;command&quot;: &quot;python.execSelectionInTerminal&quot;, &quot;when&quot;: &quot;editorTextFocus &amp;&amp; !findInputFocussed &amp;&amp; !isCompositeNotebook &amp;&amp; !jupyter.ownsSelection &amp;&amp; !notebookEditorFocused &amp;&amp; !replaceInputFocussed &amp;&amp; editorLangId == 'python'&quot; }, { &quot;key&quot;: &quot;shift+enter&quot;, &quot;command&quot;: &quot;-python.execSelectionInTerminal&quot;, &quot;when&quot;: &quot;editorTextFocus &amp;&amp; !findInputFocussed &amp;&amp; !isCompositeNotebook &amp;&amp; !jupyter.ownsSelection &amp;&amp; !notebookEditorFocused &amp;&amp; !replaceInputFocussed &amp;&amp; editorLangId == 'python'&quot; } ] </code></pre> <p>On Linux, this works as expected.</p> <p>Editor:</p> <pre class="lang-py prettyprint-override"><code>print(&quot;Hello, World!&quot;) </code></pre> <p>What happens in terminal:</p> <pre><code>&gt;&gt;&gt; print(&quot;Hello, world!&quot;) Hello, world! </code></pre> <p>On Windows, the same actions (select line, hit Ctrl+Enter) produces the following:</p> <pre><code>&gt;&gt;&gt; KeyboardInterrupt &gt;&gt;&gt; print(&quot;Hello, world!&quot;) Hello, world! </code></pre> <p>This might be some harmless little visual noise, but it seems to be interfering with plotting. 
When I initialize a plot with:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt fig, ax = plt.subplots() </code></pre> <p><code>print(&quot;Hello, world!&quot;)</code> now generates the following:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; print(&quot;Hello, world!&quot;) Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 0, in &lt;module&gt; KeyboardInterrupt </code></pre> <p>Note that putting code directly into the terminal (instead of using VS Code's send-to-terminal feature) still functions normally.</p> <p>How can I fix the send-to-terminal feature?</p>
<python><visual-studio-code>
2024-10-22 19:46:21
1
609
Lee Hachadoorian
79,115,254
189,418
Raise exception in .map_elements()
<p><strong>Update:</strong> This was fixed by <a href="https://github.com/pola-rs/polars/pull/20417" rel="nofollow noreferrer">pull/20417</a> in Polars <a href="https://github.com/pola-rs/polars/releases/tag/py-1.18.0" rel="nofollow noreferrer">1.18.0</a></p> <hr /> <p>I'm using <code>.map_elements</code> to apply a complex Python function to every element of a polars series. This is a toy example:</p> <pre><code>import polars as pl df = pl.DataFrame({&quot;A&quot;: [1, 2, 3], &quot;B&quot;: [4, 5, 6]}) def sum_cols(row): return row[&quot;A&quot;] + row[&quot;B&quot;] df.with_columns( pl.struct(pl.all()) .map_elements(sum_cols, return_dtype=pl.Int32).alias(&quot;summed&quot;) ) shape: (3, 3) ┌─────┬─────┬────────┐ │ A ┆ B ┆ summed │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i32 │ ╞═════╪═════╪════════╡ │ 1 ┆ 4 ┆ 5 │ │ 2 ┆ 5 ┆ 7 │ │ 3 ┆ 6 ┆ 9 │ └─────┴─────┴────────┘ </code></pre> <p>However, when my function raises an exception, Polars silently uses Nulls as the output of the computation:</p> <pre><code>def sum_cols(row): raise Exception return row[&quot;A&quot;] + row[&quot;B&quot;] df.with_columns( pl.struct(pl.all()) .map_elements(sum_cols, return_dtype=pl.Int32).alias(&quot;summed&quot;) ) shape: (3, 3) ┌─────┬─────┬────────┐ │ A ┆ B ┆ summed │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i32 │ ╞═════╪═════╪════════╡ │ 1 ┆ 4 ┆ null │ │ 2 ┆ 5 ┆ null │ │ 3 ┆ 6 ┆ null │ └─────┴─────┴────────┘ </code></pre> <p>How can I make the Polars command fail when my function raises an exception?</p>
<python><python-polars>
2024-10-22 17:45:26
2
8,426
foglerit
79,115,170
4,212,870
Operation on all columns of a type in modern Polars
<p>I have a piece of code that works in Polars 0.20.19, but I don't know how to make it work in Polars 1.10.</p> <p>The working code (in Polars 0.20.19) is very similar to the following:</p> <pre><code>def format_all_string_fields_polars() -&gt; pl.Expr: return ( pl.when( (pl.col(pl.Utf8).str.strip().str.lengths() == 0) | # ERROR ON THIS LINE (pl.col(pl.Utf8) == &quot;NULL&quot;) ) .then(None) .otherwise(pl.col(pl.Utf8).str.strip()) .keep_name() ) df.with_columns(format_all_string_fields_polars()) </code></pre> <p>I have converted the <code>pl.Utf8</code> dtype to <code>pl.String</code>, but it keeps giving me the same error:</p> <blockquote> <p>AttributeError: 'ExprStringNameSpace' object has no attribute 'strip'</p> </blockquote> <p>The function is supposed to perform the When-Then operation on all string fields of the dataframe, in-place, but return all columns in the dataframe (including the non-string columns as well).</p> <p>How do I convert this function to a working piece of code in Polars 1.10?</p>
<python><dataframe><python-polars>
2024-10-22 17:17:04
2
1,298
Vinícius Queiroz
79,115,080
2,619,429
How to use ruff as fixer in vim with ALE
<p>I'm using ALE in Vim, and I want to add ruff as a fixer for Python.</p> <p>So, in .vimrc, I added:</p> <pre><code>let g:ale_fixers = { \ 'python': ['ruff'], \ 'javascript': ['eslint'], \ 'typescript': ['eslint', 'tsserver', 'typecheck'], \} </code></pre> <p>Then, when executing the ALEFix command in Vim, I get this error:</p> <blockquote> <p>117: Unknown function: ale#fixers#ruff#Fix</p> </blockquote> <p>Do you know how to integrate ruff as a fixer in Vim with ALE?</p>
<python><ruff><vim-ale>
2024-10-22 16:47:19
1
518
Raoul Debaze
79,114,976
3,641,004
How to call the ctypes function from bytes in Python?
<p>I have the disassembled bytes of a simple function</p> <pre><code>89 4C 24 08 mov dword ptr [sum],ecx while (sum&gt;=1) { 83 7C 24 08 01 cmp dword ptr [sum],1 7C 0C jl doNothing+17h (07FF636C61017h) sum--; 8B 44 24 08 mov eax,dword ptr [sum] FF C8 dec eax 89 44 24 08 mov dword ptr [sum],eax } EB ED jmp doNothing+4h (07FF636C61004h) } C3 ret </code></pre> <p>which is a bytes object in Python: <code>bytes.fromhex('89 4c 24 08 83 7c 24 08 01 7c 0c 8b 44 24 08 ff c8 89 44 24 08 eb ed c3 ')</code></p> <p>How can I call this machine code in Python using ctypes? I tried the code below, but it crashes.</p> <pre><code>import ctypes # raw disassembled bytes buf = bytes.fromhex('89 4c 24 08 83 7c 24 08 01 7c 0c 8b 44 24 08 ff c8 89 44 24 08 eb ed c3 ') # function type definition nothFn = ctypes.CFUNCTYPE(None, ctypes.c_int) # ctypes buffer codebuf = ctypes.create_string_buffer(buf) # raw buffer's address as the function cfunc = nothFn(ctypes.addressof(codebuf)) # call it then it crashes cfunc(ctypes.c_int(3)) </code></pre> <p>I also tried to use the address returned from <code>str(codebuf)</code>, but it also crashes.</p> <p>Questions:</p> <ol> <li>Is it due to a memory execution protection violation? If so, how can I make the allocated memory executable? Does it have to be in a dynamic library that gets loaded for execution?</li> <li>Will the same code run under both Windows and Linux if the CPU is the same x86_64 architecture? To avoid complication, let's suppose the function is simple and only operates on the input argument or stack memory.</li> </ol>
<python><cross-platform><ctypes><exploit>
2024-10-22 16:11:19
1
392
wanyancan
79,114,927
9,094,379
How can I replace direct references "@ file" on requirements file?
<p>I want to run some Python scripts from <a href="https://github.com/thuiar/MIntRec2.0/" rel="nofollow noreferrer">this repository</a>. The repository contains the instructions to run the scripts, including a <code>requirements.txt</code> file. However, the versions for some modules are not specified, but include direct references (to, as I understand, a local directory).</p> <p>This is an extract of the listed modules:</p> <pre><code>addict==2.4.0 apex==0.1 appdirs==1.4.4 audioread==3.0.0 av==10.0.0 blessed==1.20.0 brotlipy==0.7.0 certifi @ file:///croot/certifi_1671487769961/work/certifi cffi @ file:///croot/cffi_1670423208954/work charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work click==8.1.3 </code></pre> <p>When I use <code>pip</code> to install the modules from the requirements file I get this error: <code>ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/croot/certifi_1671487769961/work/certifi'</code>. As you note, <code>certifi</code>, <code>cffi</code> and <code>charset-normalizer</code> from the previous extract present the same sort of direct references instead of an explicit version.</p> <p><a href="https://stackoverflow.com/a/62589814/9094379">This answer</a> explains how to interpret direct references, when they are URLs, i.e. not as in the requirements file I want to work with.</p> <p>I would like to know how to manually replace such references (if there is a way to do so) or what workaround exists for this issue. I am using <code>pip</code> on <code>venv</code>.</p>
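One way to sketch the manual replacement: strip the `@ file:///…` direct references from each requirements line, leaving just the package name so pip resolves a version from PyPI instead (a sketch, not from the original repository — the sample lines below are taken from the extract above):

```python
import re

# Sample requirements lines containing local direct references:
requirements = """\
certifi @ file:///croot/certifi_1671487769961/work/certifi
cffi @ file:///croot/cffi_1670423208954/work
click==8.1.3
"""

# Remove " @ file:///..." suffixes; lines without them pass through unchanged.
cleaned = [re.sub(r"\s*@\s*file:///\S+", "", line)
           for line in requirements.splitlines()]
print(cleaned)
```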
<python><python-3.x><pip><requirements.txt>
2024-10-22 15:55:41
1
352
Galo Castillo
79,114,550
2,449,857
Is `mydict.get(x, x)` equivalent to `mydict.get(x) or x`?
<p>When using a dictionary to occasionally replace values, are <code>.get(x, x)</code> and <code>.get(x) or x</code> equivalent?</p> <p>For example:</p> <pre class="lang-py prettyprint-override"><code>def current_brand(brand_name): rebrands = { &quot;Marathon&quot;: &quot;Snickers&quot;, &quot;Opal Fruits&quot;: &quot;Starburst&quot;, &quot;Jif&quot;: &quot;Cif&quot;, &quot;Thomson&quot;: &quot;TUI&quot;, } return rebrands.get(brand_name, brand_name) # or return rebrands.get(brand_name) or brand_name # this is forbidden - cannot use `default` keyword argument here return rebrands.get(brand_name, default=brand_name) assert current_brand(&quot;Jif&quot;) == &quot;Cif&quot; assert current_brand(&quot;Boots&quot;) == &quot;Boots&quot; </code></pre> <p>I think <code>.get(x) or x</code> is clearer, but that's pretty much a matter of opinion, so I'm curious if there's a technical benefit or drawback to one or the other I've not spotted.</p> <p>Edit: sorry, to be clear, I'm assuming that the dictionary does not contain falsey values as that wouldn't make sense in this context (i.e. in the example above you're not recording that <code>&quot;Somerfield&quot;</code> rebranded as <code>&quot;&quot;</code>)</p>
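The assumption about falsy values is exactly where the two spellings diverge — a quick sketch (the `"Somerfield": ""` entry is hypothetical, added only to show the difference):

```python
rebrands = {"Jif": "Cif", "Somerfield": ""}  # hypothetical falsy value

# With a falsy value stored, the two spellings disagree:
assert rebrands.get("Somerfield", "Somerfield") == ""              # stored value wins
assert (rebrands.get("Somerfield") or "Somerfield") == "Somerfield"  # falls back

# For a genuinely missing key they agree:
assert rebrands.get("Boots", "Boots") == "Boots"
assert (rebrands.get("Boots") or "Boots") == "Boots"
```

So `.get(x, x)` returns whatever is stored, while `.get(x) or x` also overrides any falsy stored value.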
<python><python-3.x><dictionary>
2024-10-22 14:27:44
2
3,489
Jack Deeth
79,114,448
17,289,097
How to integrate and enable Pipeline in Open Web UI
<p>I want to integrate and enable the pipeline text_to_sql_pipeline.py in Open Web UI. The code for text_to_sql_pipeline.py is available at <code>https://github.com/open-webui/pipelines/blob/main/examples/pipelines/rag/text_to_sql_pipeline.py</code></p> <p>Could you please guide me through the step-by-step procedure to enable the pipeline in Open Web UI through a Docker container?</p> <p>Here is what I have done:</p> <ol> <li>Executed Open Web UI with Ollama using a Docker container</li> <li>Ran the pipeline container separately using</li> </ol> <p><code>docker run -d -p 9099:9099 --add-host=host.docker.internal:host-gateway -v pipelines:/app/pipelines --name pipelines --restart always ghcr.io/open-webui/pipelines:main</code></p> <ol start="3"> <li><p>Logged in to Open WebUI: in Open WebUI, navigated to the <code>Admin Panel &gt; Settings &gt; Connections &gt; OpenAI API section</code>. Set the API URL to <code>http://host.docker.internal:9099</code> and the API key to <code>0p3n-w3bu!</code>.</p> </li> <li><p>Then, in the pipeline section, I uploaded text_to_sql_pipeline.py from my local computer, but I could not see the pipeline activated in Open Web UI. It shows: <code>there are no Valves</code>.</p> </li> <li><p>I also tried specifying the GitHub URL of the pipeline, which is <code>https://github.com/open-webui/pipelines/blob/main/examples/pipelines/rag/text_to_sql_pipeline.py</code>, but this also does not work. <a href="https://i.sstatic.net/0luZrCYy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0luZrCYy.png" alt="enter image description here" /></a></p> </li> </ol> <p>Please provide a solution. Let me know if I am missing any steps or doing something wrong.</p>
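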
<python><ollama><py-postgresql>
2024-10-22 14:04:46
0
465
Urvesh
79,114,445
15,913,281
Filter the Earliest and Latest Date in Each Month
<p>Given a dataframe like the one below, how do I filter for the earliest and latest date in each month? Note the actual data runs to tens of thousands of rows.</p> <p>Input:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>Date</th> <th>Deg</th> </tr> </thead> <tbody> <tr> <td>02/01/1990</td> <td>1210.92</td> </tr> <tr> <td>13/01/1990</td> <td>1226.83</td> </tr> <tr> <td>14/01/1990</td> <td>1224.52</td> </tr> <tr> <td>15/01/1990</td> <td>1220.77</td> </tr> <tr> <td>08/02/1990</td> <td>1164.32</td> </tr> <tr> <td>09/02/1990</td> <td>1156.72</td> </tr> <tr> <td>12/02/1990</td> <td>1145.18</td> </tr> <tr> <td>13/02/1990</td> <td>1146.88</td> </tr> <tr> <td>24/02/1990</td> <td>1149.07</td> </tr> </tbody> </table></div> <p>Desired output:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>Date</th> <th>Deg</th> </tr> </thead> <tbody> <tr> <td>02/01/1990</td> <td>1210.92</td> </tr> <tr> <td>15/01/1990</td> <td>1220.77</td> </tr> <tr> <td>08/02/1990</td> <td>1164.32</td> </tr> <tr> <td>24/02/1990</td> <td>1149.07</td> </tr> </tbody> </table></div>
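One possible approach (a sketch assuming day-first date strings and the column names shown above): parse the dates, group by calendar month, and keep the rows at each group's `idxmin`/`idxmax`:

```python
import pandas as pd

df = pd.DataFrame({
    "Date": ["02/01/1990", "13/01/1990", "14/01/1990", "15/01/1990",
             "08/02/1990", "09/02/1990", "12/02/1990", "13/02/1990",
             "24/02/1990"],
    "Deg": [1210.92, 1226.83, 1224.52, 1220.77,
            1164.32, 1156.72, 1145.18, 1146.88, 1149.07],
})

# Parse day-first dates, group by month, keep the row indices holding
# each month's earliest and latest date.
dates = pd.to_datetime(df["Date"], format="%d/%m/%Y")
by_month = dates.groupby(dates.dt.to_period("M"))
keep = sorted(set(by_month.idxmin()) | set(by_month.idxmax()))
result = df.loc[keep].reset_index(drop=True)
print(result)
```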
<python><pandas><numpy>
2024-10-22 14:04:29
1
471
Robsmith
79,114,312
9,134,545
Airflow Mark as success/failure callback
<p>I'm playing a bit with the Airflow alerting mechanism, but I can't find anything on how to use a callback for when a task's/DAG's state is set manually.</p> <p>I'm primarily using the DockerOperator.</p> <p>I've tried the on_failure_callback and on_success_callback but they only get triggered when the task state is set by Airflow's scheduler.</p> <p>Am I missing something? Is there any way to have a callback for when a task's/DAG's state is set manually?</p>
<python><airflow>
2024-10-22 13:31:24
0
892
Fragan
79,114,216
1,082,349
PyArrow Dataset filtering not working with partitioned parquet files
<p>I save a pandas dataframe as follows:</p> <pre><code>import pyarrow as pa import pyarrow.parquet as pq table = pa.Table.from_pandas(my_df) pq.write_to_dataset(table, root_path=&quot;data/bfl&quot;, partition_cols=['pnr_group']) </code></pre> <p>I can find it stored in a partitioned directory structure like this:</p> <pre><code>data/bfl/pnr_group=0/319a1fb5557a342c1b55356ce5123123-0.parquet </code></pre> <p>When I read an individual parquet file directly using pq.read_table(), I can see the data. However, when trying to read it using PyArrow's Dataset API with filtering, I get empty results:</p> <pre><code>import pyarrow.dataset as ds import pyarrow as pa # This works - has data import pyarrow.parquet as pq file_path = 'data/bfl/pnr_group=0/319a1fb5557a342c1b55356ce5123123-0.parquet' table = pq.read_table(file_path) print(len(table)) # Shows rows # This finds the correct files but returns empty data dataset = ds.dataset( 'data/bfl', format='parquet', partitioning=ds.DirectoryPartitioning.discover(['pnr_group']) ) filter_expr = ds.field('pnr_group') == '0' filtered_dataset = dataset.filter(filter_expr) df = filtered_dataset.to_table().to_pandas() # Returns empty dataframe </code></pre> <p>The dataset schema shows 'pnr_group' as a string type, and dataset.files correctly lists all the parquet files. However, after filtering and converting to pandas, the resulting dataframe is empty. How can I correctly read and filter partitioned parquet files using PyArrow's Dataset API?</p>
<python><pandas><parquet><pyarrow>
2024-10-22 13:08:46
1
16,698
FooBar
79,114,033
8,547,986
what's the advantage of `NewType` over `TypeAlias`?
<p>Consider the following example:</p> <pre class="lang-py prettyprint-override"><code> UserId = NewType('UserId', int) ProductId = NewType('ProductId', int) </code></pre> <p>But I can also do, the following:</p> <pre class="lang-py prettyprint-override"><code>UserId: TypeAlias = int ProductId: TypeAlias = int </code></pre> <p>So why should I use <code>NewType</code> over <code>TypeAlias</code> or vice versa? Are they both interchangeable?</p>
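A sketch illustrating the core difference: `NewType` creates a distinct name that static checkers treat as incompatible with other `NewType`s of the same base, while at runtime it is just an identity function (the `get_user_name` helper below is hypothetical, for illustration only):

```python
from typing import NewType

UserId = NewType("UserId", int)
ProductId = NewType("ProductId", int)

def get_user_name(user_id: UserId) -> str:
    return f"user-{user_id}"

uid = UserId(42)

# At runtime NewType is just an identity function -- no wrapper class:
assert uid == 42
assert type(uid) is int

# The difference is purely static: mypy/pyright would flag
#   get_user_name(ProductId(42))
# as a type error, whereas with `UserId: TypeAlias = int` both names
# are interchangeable with int and with each other.
print(get_user_name(uid))
```

So they are not interchangeable: a `TypeAlias` is only a new spelling of the same type, while `NewType` asks the checker to keep the two apart.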
<python><python-typing>
2024-10-22 12:21:54
1
1,923
monte
79,113,894
1,152,500
Application logging in Executor/Worker using Azure Databricks python notebooks
<p>I am using <code>Azure Databricks</code> to build and run ETL pipelines. For development, I am using <code>Databricks notebooks (Python)</code>. My goal is to view the application logs via the Spark UI for code running on both the driver and the executors.</p> <p>Initially, I had trouble viewing executor logs, but as described here <a href="https://kb.databricks.com/clusters/set-executor-log-level.html" rel="nofollow noreferrer">https://kb.databricks.com/clusters/set-executor-log-level.html</a>, I am now able to view the application logs emitted by code running on <code>worker nodes</code> (<code>executors</code>), e.g. inside <code>forEach</code>/<code>forEachPartitions</code>.</p> <p>The link above says we need to set the log level on all executors. Does that mean we need to set the log level inside each method meant to run on worker nodes, like below? That would mean setting the logging level in every method, which I think is redundant and should be avoided.</p> <pre><code> def doSomething(): logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s') ## some operation df.forEach(lambda x: doSomething()) </code></pre> <blockquote> <p>To set the log level on all executors, you must set it inside the JVM on each worker.</p> </blockquote> <p>Is there a better way to do this that avoids setting the log level every time?</p>
<python><apache-spark><logging><azure-databricks><databricks-notebook>
2024-10-22 11:45:53
1
21,478
Anand
79,113,889
1,455,474
Identify the full path to xxx/library/bin in python
<p>I have an application that depends on intel-fortran-rt</p> <p>When installed via <code>pip install intel-fortran-rt==2021.3.0</code>, the intel fortran runtime dlls are copied into <code>xxx\Library\bin</code></p> <p>The problem is to identify <code>xxx</code> across different python versions, distributions and platforms.</p> <p>How does pip know where to put the files (I assume it is specified in a meta file, but could not find it) and how can I obtain the path from python</p>
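For what it's worth, files that a wheel ships under its data directory are installed relative to the active installation scheme's "data" path, which can be queried with the standard library's `sysconfig` module — a sketch (the `Library/bin` suffix is the conda-on-Windows convention from the question, not something `sysconfig` knows about):

```python
import os
import sysconfig

# The "data" entry of the active installation scheme is the prefix that
# wheel data files (like Library/bin DLLs on conda Windows) land under.
data_root = sysconfig.get_paths()["data"]
candidate = os.path.join(data_root, "Library", "bin")
print(data_root)
print(candidate)
```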
<python><python-wheel>
2024-10-22 11:43:23
1
623
Mads M Pedersen
79,113,866
1,028,133
Example of a name clash between enum names and mixin-class methods/attributes?
<p>The <a href="https://docs.python.org/3/howto/enum.html#enum-basic-tutorial" rel="nofollow noreferrer">Python Enum tutorial</a> recommends using upper case names for Enum members &quot;to help avoid issues with name clashes between mixin-class methods/attributes and enum names&quot;.</p> <p>What would be an example of such a name clash?</p>
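One concrete clash (a minimal sketch, not an example from the linked HOWTO): with a `str` mixin, a lower-case member name can shadow an inherited `str` method at the class level:

```python
from enum import Enum

class Label(str, Enum):
    TITLE = "Title"
    lower = "lower"   # lower-case member name clashes with str.lower

# Label.lower is now the enum member, shadowing the inherited
# str.lower method on the class -- the kind of collision that
# UPPER_CASE member names are meant to avoid.
print(Label.lower)
assert isinstance(Label.lower, Label)
```

Exactly how the shadowed name behaves on member instances has varied across Python versions, which is part of why the convention exists.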
<python><enums>
2024-10-22 11:37:00
2
744
the.real.gruycho
79,113,846
7,924,573
sphinx.ext.autodoc: ModuleNotFoundError for external packages
<p>I want to start documenting my flask-based application with Sphinx. Currently, I am trying to figure out how to use the autodoc extension. My local modules are all found, but the imports of external libraries do not work in Sphinx.</p> <p>I have all my requirements in a <code>requirements.txt</code> file, so I hope that Sphinx could extract the packages/modules from there. I am testing Sphinx on my local machine in an env. Therefore I have already added:</p> <pre class="lang-py prettyprint-override"><code>import os import sys sys.path.insert(0, os.path.abspath('../../')) sys.path.insert(0, os.path.abspath('~/envs')) </code></pre> <p>to <code>myproject/docs/source/conf.py</code>, which worked for my project-internal modules but no further.</p> <p>So, running <code>sphinx-build -M html docs/source/ docs/build/</code> causes this error:</p> <pre><code>WARNING: Failed to import myproject.errors. Possible hints: * ModuleNotFoundError: No module named 'flask' * KeyError: 'myproject' building [mo]: targets for 0 po files that are out of date writing output... building [html]: targets for 0 source files that are out of date updating environment: 0 added, 1 changed, 0 removed reading sources... [100%] api WARNING: autodoc: failed to import module 'errors' from module 'myproject'; the following exception was raised: No module named 'flask' [autodoc.import_object] </code></pre> <p>As an MWE I created <code>myproject/docs/source/api.rst</code>:</p> <pre class="lang-none prettyprint-override"><code>API === .. autosummary:: :toctree: generated .. automodule:: myproject.errors :members: :imported-members: </code></pre> <p>To double-check whether all of the dependencies are installed, I ran:</p> <pre class="lang-bash prettyprint-override"><code>(envs) user@machine:~/git/myproject$ pip freeze | grep -F &quot;Flask&quot; Flask==2.3.3 </code></pre> <p>To exclude any problems that could be <code>flask</code>-specific, I tried to mock it via <code>autodoc_mock_imports=['flask']</code>, but this shifts the problem to all other external modules one by one.</p>
<python><flask><python-sphinx><python-venv><autodoc>
2024-10-22 11:33:23
1
843
tschomacker
79,113,663
3,555,685
Getting request_before_redirect.url is None for POST requests in Locust Python
<p>Trying to use locust following the documentation, <a href="https://docs.locust.io/en/stable/quickstart.html" rel="noreferrer">https://docs.locust.io/en/stable/quickstart.html</a></p> <pre><code>from locust import HttpUser, task class TestUser(HttpUser): # @task(1) # def health(self): # self.client.get(&quot;/health&quot;) @task(2) def post_value(self): self.client.post( &quot;/some/route&quot;, headers={ &quot;key1&quot;: &quot;value1&quot; }, data={ &quot;data1&quot;: &quot;value1&quot;, }) </code></pre> <p>GET requests are working fine. POST requests are throwing the below error in locust clients.py.</p> <pre><code>File &quot;/opt/anaconda3/envs/email_bot/lib/python3.12/site-packages/locust/clients.py&quot;, line 286, in post return self.request(&quot;POST&quot;, url, data=data, json=json, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/anaconda3/envs/email_bot/lib/python3.12/site-packages/locust/clients.py&quot;, line 196, in request url = request_before_redirect.url # type: ignore ^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'url' </code></pre> <p>Tried setting <code>allow_redirects=True</code> but still getting the same error. Not able to identify the root issue. Any pointers will be helpful.</p>
<python><fastapi><load-testing><locust>
2024-10-22 10:44:22
2
362
Dinesh Babu Rengasamy
79,113,523
6,597,296
How to send messages to an XMPP server using only Twisted?
<p>I need to send messages to an XMPP server. Only send - I don't need to process replies, make a bot, or anything like that. When using the <code>xmpppy</code> module, it goes something like this:</p> <pre class="lang-py prettyprint-override"><code>from xmpp import Client from xmpp.protocol import JID, Message jabberid = 'user@server.com' password = 'secret' receiver = 'sender_id@conference.server.com' message = 'Hello world!' jid = JID(jabberid) connection = Client(server=jid.getDomain(), debug=None) connection.connect() connection.auth(user=jid.getNode(), password=password, resource=jid.getResource()) connection.send(Message(to=receiver, body=message)) </code></pre> <p>However, I need to do this using Twisted. Unfortunately, the documentation is mostly useless (seems machine-generated) and I have absolutely no idea what I am supposed to do. :-(</p> <p>Maybe something like</p> <pre class="lang-py prettyprint-override"><code>from twisted.words.protocols.jabber.jid import JID from twisted.words.protocols.jabber.client import XMPPAuthenticator jabberid = 'user@server.com' password = 'secret' jid = JID(jabberid) XMPPAuthenticator(jid, password) </code></pre> <p>but what then?</p>
<python><xmpp><twisted>
2024-10-22 10:11:33
2
578
bontchev
79,113,520
3,572,950
Is it safe and ok to divide mp3 file like that?
<p>I have an <code>mp3</code> file and want to divide it into several files (&quot;chunks&quot;). I came up with this code (I stole the idea from <a href="https://github.com/django/django/blob/main/django/core/files/base.py#L48" rel="nofollow noreferrer">django</a>):</p> <pre><code>from pathlib import Path class FileWrapper: def __init__(self, file) -&gt; None: self.file = file def chunks(self, chunk_size): chunk_size = chunk_size try: self.file.seek(0) except (AttributeError, Exception): pass while True: data = self.file.read(chunk_size) if not data: break yield data with open(&quot;./test.mp3&quot;, &quot;rb+&quot;) as src: wrapper_file = FileWrapper(src) for chunk_ind, chunk in enumerate(wrapper_file.chunks(chunk_size=100 * 1024)): out_file_path = Path(&quot;./&quot;, f&quot;test_{chunk_ind}.mp3&quot;) with open(out_file_path, &quot;wb+&quot;) as destination: destination.write(chunk) </code></pre> <p>And, you know, it's working okay — I can play the resulting &quot;chunks&quot; — but I'm worried that this approach might work sometimes yet occasionally &quot;break&quot; the resulting &quot;chunks&quot;. So, is this approach okay? Or do I need to dive deeper into how <code>mp3</code> files are made?</p>
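One property worth noting: byte-level splitting is lossless in the sense that concatenating the chunks always reproduces the original file exactly; whether each chunk is independently playable is a separate question about mp3 frame boundaries. A self-contained check of the round trip (using an in-memory stand-in for the mp3):

```python
import io

class FileWrapper:
    def __init__(self, file):
        self.file = file

    def chunks(self, chunk_size):
        try:
            self.file.seek(0)
        except (AttributeError, OSError):
            pass
        while True:
            data = self.file.read(chunk_size)
            if not data:
                break
            yield data

original = bytes(range(256)) * 1000  # in-memory stand-in for the mp3 bytes
pieces = list(FileWrapper(io.BytesIO(original)).chunks(chunk_size=100 * 1024))

# Concatenating the chunks reproduces the data byte-for-byte:
assert b"".join(pieces) == original
print(len(pieces))
```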
<python><mp3>
2024-10-22 10:10:59
1
1,438
Alexey
79,113,456
5,595,282
Pandas - FutureWarning in concat - how to fix or opt into new behavior
<p>I have code like the following where I split up a dataframe into different groups. The &quot;treatment&quot; group is where I might want to delete rows and/or modify rows; and for performance reasons I split it into away from a group of rows that should survive unchanged.</p> <p>It is guaranteed that all DFs have the same columns and dtypes (they all come from the original <code>df</code> parameter).</p> <p>At the end of the treatment, I want to concat them back to a single DF. Now, I do not know in advance if any of the DFs will be empty. (and if <code>df</code> is empty, all DFs will be empty (happens especially in testing) - usually though <code>df</code> has ~500k rows).</p> <p>See code:</p> <pre><code>def some_fn(df: pd.DataFrame) -&gt; pd.DataFrame: df_no_treatment, df_treatment = split_df(df) df_treatment = do_something_complex(df_treatment) assert (df.dtypes == df_treatment.dtypes).all() assert (df.dtypes == df_no_treatment.dtypes).all() result = pd.concat([df_no_treatment, df_treatment]).sort_index() assert (df.dtypes == result.dtypes).all() return result </code></pre> <p>Now, <code>concat</code> throws a <code>FutureWarning: The behavior of array concatenation with empty entries is deprecated. In a future version, this will no longer exclude empty items when determining the result dtype. To retain the old behavior, exclude the empty entries before the concat operation.</code></p> <p>Note the asserts in the code above, it seems to work as intended?</p> <p>How do I fix the warning or <strong>opt into the new behavior</strong>? I don't want any automatics with dtypes, they do match and concat should just concat and do not do anything else.</p> <p>I find code like</p> <pre><code> if df_no_treatment.empty: return df_treatment if df_treatment.empty: return df_no_treatment return pd.concat([df_no_treatment, df_treatment]).sort_index() </code></pre> <p>absolutely over the top for what was previously a simple concat. What am I missing?</p>
<python><pandas><dataframe><future-warning>
2024-10-22 09:52:37
1
372
Raubtier
79,113,437
11,714,087
Airflow dag, wait for certain period if triggered at a certain time
<p>I have an Airflow DAG (dag-A) that is triggered from another DAG. Sometimes this dag-A is triggered at 4 PM UTC (midnight EST), and when it gets triggered at midnight EST (4 PM UTC), I want it to wait for 30 minutes and then start running at 16:30 UTC.</p> <p>Generally it should run when it is triggered, but if triggered between 16:00 and 16:30 UTC it should wait until 16:30.</p> <p>I have done this using the sleep method, but I am trying to do the same using a TimeSensor, as that would be non-blocking while sleep is blocking.</p> <p>Here is my code, which leads me into an infinite loop, as it keeps waiting for 16:30 even when the <code>check_time</code> task returns false. I want it so that if the time is not between 16:00 and 16:30 it runs dq_task, but if it is between 16:00 and 16:30 it waits until 16:30 and then runs dq_task.</p> <pre class="lang-py prettyprint-override"><code>from airflow import DAG from airflow.operators.python import PythonOperator from airflow.sensors.time import TimeSensor from airflow.utils.dates import days_ago from datetime import datetime def check_time_and_delay(): now = datetime.now() return now.hour == 16 and now.minute &gt;= 0 and now.minute &lt;= 30 with DAG( 'delayed_dag', default_args={'start_date': days_ago(1)}, schedule_interval=None, ) as dag: check_time = PythonOperator( task_id='check_time', python_callable=check_time_and_delay ) wait_until_12_30 = TimeSensor( task_id='wait_until_12_30', target_time=datetime.strptime('16:30:00', '%H:%M:%S').time(), mode='reschedule', timeout=1800 ) run_dq_check = PythonOperator( task_id='run_dq_check', python_callable=lambda: print(&quot;Running DQ check&quot;) ) # Define the task flow check_time &gt;&gt; [wait_until_12_30, run_dq_check] wait_until_12_30 &gt;&gt; run_dq_check </code></pre> <p>The code that works but uses the sleep method:</p> <pre class="lang-py prettyprint-override"><code>def check_if_midnight_and_wait(execution_date): if execution_date.hour == 4 and execution_date.minute &lt; 30: # 4PM UTC is midnight in EST wait_time = timedelta(minutes=30 - execution_date.minute) print(f&quot;Waiting for {wait_time} minutes to start at 00:30 AM EST.&quot;) sleep(wait_time.total_seconds()) # task: wait_and_start = PythonOperator( task_id='wait_and_start', python_callable=check_if_midnight_and_wait, provide_context=True ) wait_and_start &gt;&gt; dq_check </code></pre>
<python><airflow>
2024-10-22 09:48:44
1
377
palamuGuy
79,113,288
4,505,998
Strange behaviour with ax.get_xlim and date axis with matplotlib
<p>I worked with temporal data and matplotlib before, so I know that in matplotlib, dates are represented as the number of <em>days</em> since epoch as a float, and that I can use <code>matplotlib.dates</code> to convert dates back and forth to floats.</p> <p>Nevertheless, I encountered a pretty strange problem when using an API. I want to create a function to <em>decorate</em> the plot with some temporal annotations, but I don't want to print the ones that are outside the plot. For example, if the plot goes back to 2020, I don't want to print an annotation in 2019. For that I used <code>ax.get_xlim()</code> and converted the limits to date with <code>mdates.num2date</code>.</p> <p>When I plot a DataFrame with a <code>DateTimeIndex</code> ranging from 2019 to 2024, I get the limits <code>(2599.0, 2860.0)</code>, which converted back to date is nowhere near 2019 or 2024.</p> <p>The code is the following:</p> <pre class="lang-py prettyprint-override"><code>pytrends = TrendReq() pytrends.build_payload(kw_list=['matplotlib', 'seaborn']) df = pytrends.interest_over_time() print(df.index) df.plot() print(plt.gca().get_xlim()) </code></pre> <p>Which results in:</p> <pre class="lang-py prettyprint-override"><code>DatetimeIndex(['2019-10-20', '2019-10-27', '2019-11-03', '2019-11-10', '2019-11-17', '2019-11-24', '2019-12-01', '2019-12-08', '2019-12-15', '2019-12-22', ... '2024-08-18', '2024-08-25', '2024-09-01', '2024-09-08', '2024-09-15', '2024-09-22', '2024-09-29', '2024-10-06', '2024-10-13', '2024-10-20'], dtype='datetime64[ns]', name='date', length=262, freq=None) (np.float64(2599.0), np.float64(2860.0)) </code></pre> <p><a href="https://i.sstatic.net/pBG1eMJf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBG1eMJf.png" alt="Sample graph" /></a></p> <p>I could not reproduce it with a simpler DataFrame.</p>
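One likely explanation (a hypothesis to check, not a confirmed diagnosis): for a regular `DatetimeIndex`, pandas' `.plot()` converts the x-axis to period ordinals in the inferred frequency — here weekly, i.e. roughly weeks since the epoch — rather than matplotlib's days-since-epoch floats, which is why `mdates.num2date` on those limits gives nonsense. A quick sanity check of the hypothesis:

```python
import pandas as pd

# Weeks since the Unix epoch -- what a weekly-frequency pandas plot
# uses as its x units (not matplotlib's days-since-epoch floats):
start = pd.Period("2019-10-20", freq="W")
end = pd.Period("2024-10-20", freq="W")
print(start.ordinal, end.ordinal)  # close to the observed (2599.0, 2860.0)
```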
<python><pandas><matplotlib>
2024-10-22 09:16:12
1
813
David Davó
79,113,170
948,655
Python Pydantic `TypeAdapter` validate with default for missing attribute
<p>Suppose I have:</p> <pre class="lang-py prettyprint-override"><code>class Thing(pydantic.BaseModel): one: int two: str three: bool adapter = pydantic.TypeAdapter(dict[str, Thing]) raw_data = {&quot;this&quot;: {&quot;one&quot;: 1, &quot;two&quot;: &quot;zwei&quot;}, &quot;that&quot;: {&quot;one&quot;: 111, &quot;two&quot;: &quot;dos&quot;}} </code></pre> <p>You can see that the <code>raw_data</code>, which I want to convert (validate) using <code>adapter</code>, contains two items that are missing the attribute <code>three</code>. I want to be able to still validate this, but give it a default value <code>True</code> for the missing attribute <code>three</code>. <em>However</em>, I <em>don't</em> want to do this <em>in the definition of class <code>Thing</code></em>, because in my use-case this <em>default</em> behaviour is <em>not an intrinsic part of the class <code>Thing</code></em>. I don't want users of this class to assume there's always a default for the attribute <code>three</code>; I only want to provide a default value <em>this time only, for this particular <code>raw_data</code> only</em>, because I'm expecting <em>this particular</em> <code>raw_data</code> to be missing <code>three</code> for some reason. (In particular, <a href="https://stackoverflow.com/questions/65798785/pydantic-set-attributes-with-a-default-function">this question</a> looks similar but does not solve my problem.) So, what I want is something like:</p> <pre class="lang-py prettyprint-override"><code>things_i_want = adapter.validate_python(raw_data, default={&quot;three&quot;: True}) </code></pre> <p>Or:</p> <pre class="lang-py prettyprint-override"><code>or_this = adapter.validate_python(raw_data, default=pydantic.Default(three=True)) </code></pre> <p>Is this possible at all in Pydantic?</p>
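One pragmatic workaround that keeps the default out of the model (a sketch, not a built-in Pydantic feature): merge the one-off defaults into the raw payload before validation, so `Thing` itself stays strict. Dict-unpacking order makes explicit payload values win over the injected defaults:

```python
raw_data = {"this": {"one": 1, "two": "zwei"}, "that": {"one": 111, "two": "dos"}}
defaults = {"three": True}

# Explicit values in the payload override the injected defaults; the
# merged dict could then be passed to adapter.validate_python(patched).
patched = {key: {**defaults, **item} for key, item in raw_data.items()}
print(patched)
```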
<python><pydantic>
2024-10-22 08:47:15
1
8,813
Ray
79,113,126
955,273
Is it possible to continue an EWM after appending a row?
<p>I have some quant finance code which does some analysis of stock prices.</p> <p>One thing I need to calculate is an EWMA.</p> <p>When doing research (ie: the historical &quot;batch&quot; world), I have a long list of stock prices, and I can use pandas <code>ewm(...).mean()</code> to calculate my EWMA over that list of prices.</p> <p>Later, when using the same code in production (ie: the live &quot;realtime&quot; world), I want to calculate my EWMA for only the latest stock price, without having to recalculate the entire history.</p> <p>Below is a walk-through of the problem I'm facing:</p> <p>Here is some code which simulates a stock price:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd def gbm(T, N, mu, sigma, S0): dt = float(T)/N t = np.linspace(0, T, N) W = np.random.standard_normal(size = N) W = np.cumsum(W)*np.sqrt(dt) X = (mu-0.5*sigma**2)*t + sigma*W S = S0*np.exp(X) return S dates = pd.date_range('2021-01-01', '2022-01-01') T = (dates.max()-dates.min()).days / 365 N = len(dates) mu = 0.03 sigma = 0.5 S0 = 100 df = pd.DataFrame(index=dates, data={'value': gbm(T, N, mu, sigma, S0)}) </code></pre> <p>I can calculate an EWMA over the data as follows:</p> <pre><code>df['ewma'] = df['value'].ewm(halflife=30).mean() df.plot() </code></pre> <p><a href="https://i.sstatic.net/DkRPva4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DkRPva4E.png" alt="plot" /></a></p> <p>The tail of my dataframe looks as follows:</p> <p><a href="https://i.sstatic.net/UDLhHxLE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UDLhHxLE.png" alt="tail" /></a></p> <p>As is well known, the formula for calculating the EWMA is the previous EWMA value decayed by <code>1-α</code> and the current observation scaled by <code>α</code>:</p> <p><a href="https://i.sstatic.net/CbAmS5vr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CbAmS5vr.png" alt="EMA formula" /></a></p> <p>Now we pretend we're in
production, and we've just received a single new price. I add a single new observation to my source data:</p> <pre class="lang-py prettyprint-override"><code>df_new = pd.DataFrame(index=[pd.Timestamp('2022-01-02')], data={'value':105}) df = pd.concat([df, df_new]) </code></pre> <p>The tail of my new dataframe looks as follows:</p> <p><a href="https://i.sstatic.net/cwj8NdEg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cwj8NdEg.png" alt="new tail" /></a></p> <p>I would like to now calculate the EWMA of the new value.</p> <p>Obviously I can just recalculate the entire history of the EWM over the entire dataframe:</p> <pre class="lang-py prettyprint-override"><code>df['ewma'] = df['value'].ewm(halflife=30).mean() </code></pre> <p>However, after benchmarking we have found this is a major bottleneck in our production pipeline.</p> <p>I know that I can manually calculate the EWMA for the new value by manually calculating <code>α</code> from my halflife, and then using the EWMA formula with the previous EWMA value:</p> <pre class="lang-py prettyprint-override"><code>df = df.tail(2).copy() halflife = 30 alpha = 1 - np.exp(-np.log(2)/halflife) df.iloc[-1]['ewma'] = (1-alpha)*df.iloc[-2]['ewma'] + alpha*df.iloc[-1]['value'] </code></pre> <p>However, this creates a disconnect from what happens in the batch historical world.</p> <p>My researchers write Python code which uses pandas <code>ewm(...).mean()</code>, and I want to take that code and productionise it, but <em>not have to recalculate the entire history of the EWMA for every single price update</em>.</p> <p>I would like to present the researchers with a single unified interface which works in the historical batch world and in the live realtime world.</p> <p>Is this possible with pandas?</p> <p>Can I provide some state to pandas so that it &quot;picks up where it left off&quot; for the new value?</p>
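One detail worth flagging: the manual recursion above only matches pandas when `adjust=False` is passed — the default `adjust=True` uses a different (normalised) weighting, so the incremental update would silently disagree with `ewm(...).mean()`. With that flag, the one-step update agrees exactly with a full recompute — a small sketch with made-up prices:

```python
import numpy as np
import pandas as pd

s = pd.Series([100.0, 101.5, 99.8, 102.2, 103.0])  # made-up prices
halflife = 30
alpha = 1 - np.exp(-np.log(2) / halflife)

# Full-history recompute; adjust=False makes ewm use the plain recursion
# y[t] = (1 - alpha) * y[t-1] + alpha * x[t] that the manual update assumes.
full = s.ewm(halflife=halflife, adjust=False).mean()

# Incremental update: previous EWMA plus one new observation.
prev = s.iloc[:-1].ewm(halflife=halflife, adjust=False).mean().iloc[-1]
incremental = (1 - alpha) * prev + alpha * s.iloc[-1]

print(incremental, full.iloc[-1])  # identical up to float rounding
```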
<python><pandas><dataframe>
2024-10-22 08:37:03
1
28,956
Steve Lorimer
79,112,916
732,546
Why the polyfit second coefficient is not zero?
<p>I have this class exercise and I have a question regarding the second coefficient given by the polyfit function.</p> <p>Python code:</p> <pre><code>import numpy X = [1,2,3,4,5] Y = [1,2,3,4,5] function = numpy.polyfit(X, Y, 1) display(function) </code></pre> <p>Image with the result of executing the code: <a href="https://i.sstatic.net/9vTOEhKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9vTOEhKN.png" alt="enter image description here" /></a></p>
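The behaviour can be reproduced without the screenshot: `polyfit` solves a least-squares system in floating-point arithmetic, so the intercept of a perfectly linear dataset comes back as a tiny residue rather than an exact zero. A minimal sketch:

```python
import numpy as np

X = [1, 2, 3, 4, 5]
Y = [1, 2, 3, 4, 5]

# Degree-1 fit: Y ≈ slope * X + intercept
slope, intercept = np.polyfit(X, Y, 1)

print(slope)      # approximately 1.0
print(intercept)  # a tiny floating-point residue, not exactly 0.0

# "Zero" here means zero up to floating-point precision
assert abs(slope - 1.0) < 1e-9
assert abs(intercept) < 1e-9
```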
<python>
2024-10-22 07:38:33
0
1,388
Álvaro
79,112,841
20,765,573
AZ CLI runs very slow after a while
<p>I'm experiencing inconsistent slow speeds with the Azure CLI (<code>az-cli</code>). Sometimes commands execute in less than 10 seconds, but at other times, the CLI seems to lock into a state of slowness (80+ seconds for a simple query).</p> <p>This usually occurs after I've been using it for a while or when I have multiple terminal windows open and I've used Az CLI in at least two of them.</p> <p>For example, when I run the following command:</p> <pre class="lang-bash prettyprint-override"><code>az --version --debug </code></pre> <p>I see the following output after 80+ seconds:</p> <pre class="lang-none prettyprint-override"><code>az --version --debug cli.knack.log: File logging enabled - writing logs to 'C:\Users\localuser\.azure\logs'. cli.knack.cli: Command arguments: ['--version', '--debug'] cli.knack.cli: __init__ debug log: Enable color in terminal. cli.knack.cli: Event: Cli.PreExecute [] urllib3.connectionpool: Starting new HTTPS connection (1): raw.githubusercontent.com:443 urllib3.connectionpool: https://raw.githubusercontent.com:443 &quot;HEAD / HTTP/1.1&quot; 301 0 cli.azure.cli.core.util: Connectivity check: 4.057321899999806 sec urllib3.connectionpool: Starting new HTTPS connection (1): raw.githubusercontent.com:443 urllib3.connectionpool: https://raw.githubusercontent.com:443 &quot;GET /Azure/azure-cli/main/src/azure-cli-core/setup.py HTTP/1.1&quot; 200 1350 urllib3.connectionpool: Starting new HTTPS connection (1): raw.githubusercontent.com:443 urllib3.connectionpool: https://raw.githubusercontent.com:443 &quot;GET /Azure/azure-cli/main/src/azure-cli-telemetry/setup.py HTTP/1.1&quot; 200 680 azure-cli 2.65.0 core 2.65.0 telemetry 1.1.0 Extensions: account 0.2.5 automation 1.0.0b1 azure-devops 1.0.1 init 0.1.0 next 0.1.3 Dependencies: msal 1.31.0 azure-mgmt-resource 23.1.1 Python location 'C:\Program Files\Microsoft SDKs\Azure\CLI2\python.exe' Extensions directory 'C:\Users\localuser\.azure\cliextensions' Python (Windows) 3.11.8 
(tags/v3.11.8:db85d51, Feb 6 2024, 22:03:32) [MSC v.1937 64 bit (AMD64)] Legal docs and information: aka.ms/AzureCliLegal Your CLI is up-to-date. cli.knack.cli: Event: Cli.PostExecute [&lt;function AzCliLogging.deinit_cmd_metadata_logging at 0x000002CAA0E1C360&gt;] cli.__main__: Command ran in 84.876 seconds (init: 0.211, invoke: 84.665) </code></pre> <p>Are these execution times normal? What might be causing this intermittent slowness?</p> <p><strong>Update: 2024-10-25</strong></p> <p>After reinstalling <code>az-cli</code>, the problem went away but it came back. What I've observed is that connecting and disconnecting the <strong>Ethernet cable</strong> solves the issue. But I don't have a permanent solution.</p>
<python><azure><azure-cli>
2024-10-22 07:13:53
1
305
Daniel M.
79,112,805
10,451,021
Unable to trigger Power Automate flow using python script
<p>I am trying to send a message on Teams from a Python script using Power Automate.</p> <pre><code>import requests # Import requests library import datetime # Get current time. now = datetime.datetime.now().strftime(&quot;%Y-%m-%d %H:%M:%S&quot;) # Triggering Power Automate Flow flow_url='https://***/triggers/manual/paths/invoke?api-version=2016-06-01' # Replace with actual URL from step 2 response=requests.post(flow_url,json={&quot;status&quot;:&quot;Script Completed&quot;,&quot;timestamp&quot;:now}) if response.status_code==200: print('PowerAutomate Flow triggered successfully') else: print(f'Failed to trigger PowerAutomate Flow: {response.status_code}') </code></pre> <p>Error:</p> <pre><code>Failed to trigger PowerAutomate Flow: 401 </code></pre> <p>Power Automate Flow:</p> <p><a href="https://i.sstatic.net/Wi8XSTew.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wi8XSTew.png" alt="enter image description here" /></a></p> <p>Full Flow:</p> <p><a href="https://i.sstatic.net/Bqde9Jzu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Bqde9Jzu.png" alt="enter image description here" /></a></p>
<python><microsoft-teams><power-automate>
2024-10-22 07:04:22
1
1,999
Salman
79,112,453
1,447,953
Pandas complex groupby using match criteria in another table
<p>I am having a hard time describing this issue in a general way that would make the question title useful. But here it is. I am trying to merge or group rows in a table based on ids in a column that are declared to belong to certain groups according to a different table. It is complicated further by the ids being composite objects spread across two columns. Here is a toy version of it and a solution that works, but is terribly inefficient:</p> <pre><code># Inputs: columns = [&quot;time&quot;, &quot;id1_list&quot;, &quot;id2_list&quot;, &quot;id3&quot;] data = [(1, (&quot;A&quot;, &quot;B&quot;), (1, 2), 1), (1, (&quot;A&quot;, &quot;B&quot;), (2, 3), 2), (1, (&quot;A&quot;, &quot;B&quot;), (4, 5), 3), (1, (&quot;A&quot;, &quot;B&quot;), (6, 7), 4), (1, (&quot;A&quot;, &quot;C&quot;), (1, 1), 5), (2, (&quot;A&quot;, &quot;B&quot;), (1, 3), 1), (2, (&quot;A&quot;, &quot;B&quot;), (2, 3), 2), (2, (&quot;A&quot;, &quot;B&quot;), (4, 3), 3), (2, (&quot;A&quot;, &quot;C&quot;), (1, 1), 4)] merge_cols = [&quot;time&quot;, &quot;id1&quot;, &quot;id2_lists&quot;] merge_data = [(1, &quot;A&quot;, ((1, 2), (3, 4))), (1, &quot;B&quot;, ((3, 5),)), (2, &quot;A&quot;, ((1, 2), (3, 4))), (2, &quot;B&quot;, ((3, 5),))] output_columns = [&quot;time&quot;, &quot;id3_lists&quot;] expected_output = [(1, ((1, 2, 3, 5), (4,))), (2, ((1, 2, 3, 4),))] </code></pre> <p>Inefficient solution:</p> <pre><code># Group by time df_g = pd.DataFrame(data, columns=columns).groupby(&quot;time&quot;) df_merge_data_g = pd.DataFrame(merge_data, columns=merge_cols).groupby(&quot;time&quot;) def match(g, id3_A, id3_B, df_merge_data_t): # print(df_LR_t) # Get id match info rowA = g.query(&quot;id3==@id3_A&quot;).iloc[0] rowB = g.query(&quot;id3==@id3_B&quot;).iloc[0] id1sA = rowA[&quot;id1_list&quot;] id1sB = rowB[&quot;id1_list&quot;] id2sA = rowA[&quot;id2_list&quot;] id2sB = rowB[&quot;id2_list&quot;] matched = False for id1_A, id2_A in zip(id1sA, id2sA): if matched: break for id1_B, id2_B in 
zip(id1sB, id2sB): if matched: break if id1_A==id1_B: # print(id1_A, id2_A, id1_B, id2_B) match_groups = df_merge_data_t.query(&quot;id1==@id1_A&quot;)[&quot;id2_lists&quot;].iloc[0] # print(match_groups) for match_g in match_groups: # print(match_g) # print(id2_A in match_g, id2_B in match_g) if id2_A in match_g and id2_B in match_g: matched = True break # print(&quot;matched:&quot;, matched) return matched def merge(data): for x in set(data): for y in set(data): if x == y: continue if not x.isdisjoint(y): data.remove(x) data.remove(y) data.add(x.union(y)) return merge(data) return data def get_match_groups(g): #print(g) df_merge_data_t = df_merge_data_g.get_group(g.name) # Form all pairings of items to be matched pairs = list(itertools.combinations(g.id3, 2)) # Check each pair for match matched_pairs = set(frozenset(pair) for pair in pairs if match(g, *pair, df_merge_data_t)) print(&quot;matched_pairs&quot;) print(matched_pairs) # Merge pairs with common elements to get connected groups of matches merged_matches = merge(matched_pairs) # Add back any items that weren't matched with anything as singleton groups unused = set(frozenset((id3,)) for id3 in set(g.id3) if not any(id3 in g for g in merged_matches)) merged_matches.update(unused) return merged_matches out = df_g.apply(get_match_groups, include_groups=False) </code></pre> <p>Output:</p> <pre><code>time 1 {(1, 2, 3, 5), (4)} 2 {(1, 2, 3, 4)} dtype: object </code></pre> <p>Expected output:</p> <pre><code>pd.DataFrame(expected_output, columns=output_columns)[&quot;id3_lists&quot;] 0 ((1, 2, 3, 5), (4,)) 1 ((1, 2, 3, 4)) Name: id3_lists, dtype: object </code></pre> <p>So in words: We have the table <code>data</code>. I want to form groups of <code>id3</code>, based on match criteria in <code>id1_list</code> and <code>id2_list</code>. Basically each row describes a composite object that has been created using other objects whose id information is listed in <code>id1_list</code> and <code>id2_list</code>. 
<code>id1_list</code> and <code>id2_list</code> are aligned, so that pairs created by <code>zip(id1_list, id2_list)</code> describe the contributing objects.</p> <p>Next, we have the <code>merge_data</code> table. This table describes equivalences between those contributing objects (the ones tagged by <code>(id1, id2)</code> pairs). So e.g. the first row of <code>merge_data</code>:</p> <pre><code>(1, &quot;A&quot;, ((1, 2), (3, 4))) </code></pre> <p>says that at time <code>1</code>, for objects with <code>id1=A</code>, the objects with <code>id2</code> in the set <code>(1, 2)</code> should be considered &quot;matches&quot;. Likewise for <code>(3, 4)</code>. These sets will always be disjoint, if that helps.</p> <p>So the goal is, for each time <code>t</code>, collect all the rows in <code>data</code> and <code>merge_data</code> for that time, then use <code>merge_data</code> to figure out which rows of <code>data</code> &quot;match&quot;. And then output a table of <code>id3</code> groups that are formed by those criteria at each time group, merging all matches into groups so long as any &quot;link&quot; exists.</p> <p>I am happy to restructure any of this data to make some sort of clever pandas merge operation do everything efficiently. Or maybe some graph algorithm. But so far all I came up with was the very slow brute force solution above.</p>
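The recursive `merge` step in the question (repeatedly unioning sets until a fixed point) is the classic connected-components problem and can be sketched with a union-find structure instead. The pair list below comes from the toy data's time 1; the class itself is a generic illustration, not a drop-in replacement for the full pipeline:

```python
from collections import defaultdict

class UnionFind:
    """Merge overlapping match groups without rebuilding sets recursively."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            # Path halving keeps later finds near O(1)
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

uf = UnionFind()
for a, b in [(1, 2), (2, 3), (3, 5)]:  # matched id3 pairs at time 1
    uf.union(a, b)
uf.find(4)  # touch unmatched ids so they show up as singleton groups

# Collect the components
groups = defaultdict(set)
for x in list(uf.parent):
    groups[uf.find(x)].add(x)

result = sorted(sorted(g) for g in groups.values())
print(result)  # [[1, 2, 3, 5], [4]]
```

This replaces the quadratic re-merging with near-linear grouping once the matched pairs are known; the expensive part that remains is producing the pairs themselves.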
<python><pandas><dataframe><group-by><pandas-merge>
2024-10-22 04:36:11
1
2,974
Ben Farmer
79,112,373
9,136,850
PYSAT How to apply Clausify on Equals object when using Equals Object on a CNF?
<p>I am using PySat library.</p> <p>I need to show a cardinality constraint has the same truth value as another literal. Here is my code,</p> <pre class="lang-py prettyprint-override"><code>from pysat.card import CardEnc, ITotalizer from pysat.formula import * cnf2 = ITotalizer(lits=[1,2,3], ubound=1, top_id=100).cnf print(cnf2.clauses) y = Atom(4) cnf3 = Equals(cnf2, y) cnf3.clausify() print(cnf3.clauses) </code></pre> <p>Output:</p> <blockquote> <p>[[-2, 101], [-1, 101], [-1, -2, 102], [-101, 103], [-102, 104], [-3, 103], [-3, -101, 104]]</p> </blockquote> <blockquote> <p>[[-12, 4], [12, -4]]</p> </blockquote> <p>But the Equals constraint completely ignores all the clauses defined in the original CNF and randomly adds a new literal named -12 and is using that. I don't know how to fix this issue.</p>
<python><sat>
2024-10-22 03:43:18
0
354
samarendra chandan bindu Dash
79,112,170
908,390
replit run button installs unwanted python packages that break my project
<p>Public replit:</p> <p><a href="https://replit.com/@mblakele/BugImportDecoupleHumps" rel="nofollow noreferrer">https://replit.com/@mblakele/BugImportDecoupleHumps</a></p> <p>When I run this replit, I see a poetry command that I haven't requested:</p> <pre><code>--&gt; poetry add decouple humps Using version ^0.0.7 for decouple Using version ^0.2.2 for humps Updating dependencies Resolving dependencies... Package operations: 2 installs, 0 updates, 0 removals • Installing decouple (0.0.7) • Installing humps (0.2.2) Writing lock file </code></pre> <p>I think this is a result of replit upm searching for <code>decouple</code> and <code>humps</code>, which my <code>pyproject.toml</code> already specifies:</p> <pre><code>[tool.poetry.dependencies] pyhumps = &quot;3.8.*&quot; python-decouple = &quot;^3.6.0&quot; </code></pre> <p>But because replit guesses anyway, and gets it wrong about these packages, every run fails with:</p> <pre><code>Traceback (most recent call last): File &quot;/home/runner/BugImportDecoupleHumps/main.py&quot;, line 2, in &lt;module&gt; from decouple import config ImportError: cannot import name 'config' from 'decouple' (/home/runner/BugImportDecoupleHumps/.pythonlibs/lib/python3.11/site-packages/decouple/__init__.py) </code></pre> <p>After each attempt, I have to remove extraneous lines from my TOML and also use pip or upm to remove the bogus packages. If I do this, running the code from shell works ok:</p> <pre><code>~/BugImportDecoupleHumps$ pip uninstall decouple humps Found existing installation: decouple 0.0.7 Uninstalling decouple-0.0.7: Would remove: /home/runner/BugImportDecoupleHumps/.pythonlibs/lib/python3.11/site-packages/decouple-0.0.7.dist-info/* /home/runner/BugImportDecoupleHumps/.pythonlibs/lib/python3.11/site-packages/decouple/* Proceed (Y/n)? 
Successfully uninstalled decouple-0.0.7 Found existing installation: humps 0.2.2 Uninstalling humps-0.2.2: Would remove: /home/runner/BugImportDecoupleHumps/.pythonlibs/lib/python3.11/site-packages/humps-0.2.2.dist-info/* /home/runner/BugImportDecoupleHumps/.pythonlibs/lib/python3.11/site-packages/humps/* /home/runner/BugImportDecoupleHumps/.pythonlibs/lib/python3.11/site-packages/tests/* Would not remove (might be manually added): /home/runner/BugImportDecoupleHumps/.pythonlibs/lib/python3.11/site-packages/humps/__init__.pyi /home/runner/BugImportDecoupleHumps/.pythonlibs/lib/python3.11/site-packages/humps/main.py /home/runner/BugImportDecoupleHumps/.pythonlibs/lib/python3.11/site-packages/humps/main.pyi /home/runner/BugImportDecoupleHumps/.pythonlibs/lib/python3.11/site-packages/humps/py.typed Proceed (Y/n)? Successfully uninstalled humps-0.2.2 ~/BugImportDecoupleHumps$ python main.py hello False </code></pre> <p>How can I fix this? Can I stop replit from guessing about these packages?</p> <p>Thanks in advance for any help!</p>
<python><python-poetry><replit>
2024-10-22 01:24:08
3
7,863
mblakele
79,112,091
20,591,261
How to highlight values per column in Polars
<p>I have a Polars DataFrame, and I want to highlight the top 3 values for each column using the <code>style</code> and loc features in Polars. I can achieve this for individual columns, but my current approach involves a lot of repetition, which is not scalable to many variables.</p> <pre><code>import polars as pl import polars.selectors as cs from great_tables import loc, style df = pl.DataFrame({ &quot;id&quot;: [1, 2, 3, 4, 5], &quot;variable1&quot;: [15, 25, 5, 10, 20], &quot;variable2&quot;: [40, 30, 50, 10, 20], &quot;variable3&quot;: [400, 100, 300, 200, 500] }) top3_var1 = pl.col(&quot;variable1&quot;).is_in(pl.col(&quot;variable1&quot;).top_k(3)) top3_var2 = pl.col(&quot;variable2&quot;).is_in(pl.col(&quot;variable2&quot;).top_k(3)) ( df .style .tab_style( style.text(weight=&quot;bold&quot;), loc.body(&quot;variable1&quot;, top3_var1) ) .tab_style( style.text(weight=&quot;bold&quot;), loc.body(&quot;variable2&quot;, top3_var2) ) ) </code></pre> <p>This works, but it's not scalable for many columns since I have to manually define <code>top3_var</code> for each column.</p> <p>I’ve tried using <code>pl.all().top_k(3)</code> to make the process more automatic:</p> <pre><code>( df .style .tab_style( style.text(weight=&quot;bold&quot;, ), loc.body(&quot;variable1&quot;, top3_var1) ) .tab_style( style.text(weight=&quot;bold&quot;), loc.body(&quot;variable2&quot;, top3_var2) ) ) </code></pre> <p>However, I’m not sure how to apply the style and loc methods to highlight only the top 3 values in each column individually without affecting the entire row.</p>
<python><dataframe><python-polars><great-tables>
2024-10-22 00:33:03
2
1,195
Simon
79,111,984
1,937,514
Properly add type annotations to a services container
<p>I am trying to add type annotations to a &quot;service container&quot; with relatively strict typing, but am struggling to define the type hints correctly.</p> <p>Basically, the services container is a glorified dictionary, where the key should be an ABC type, or a Protocol (i.e. the &quot;abstract class&quot;). The dictionary value is a generic <code>ServiceEntry[T]</code> where T is a subclass of the key (&quot;abstract class&quot;).</p> <p>I've included simplified implementation below, but the full code can be found in this gist: <a href="https://gist.github.com/NixonInnes/6b7cf851cd2715aafbf32c4fb54405ac" rel="nofollow noreferrer">https://gist.github.com/NixonInnes/6b7cf851cd2715aafbf32c4fb54405ac</a></p> <p>This is my service entry implementation:</p> <pre class="lang-py prettyprint-override"><code>class ServiceLife(Enum): SINGLETON = 1 TRANSIENT = 2 class ServiceEntry[T]: service_class: type[T] service_life: ServiceLife instance: T | None def __init__(self, service_class: type[T], service_life: ServiceLife): self.service_class = service_class self.service_life = service_life self.instance = None self.lock = threading.Lock() </code></pre> <p>And this is a simplified implementation of the service container:</p> <pre class="lang-py prettyprint-override"><code>class ServiceContainer(IServiceContainer): def __init__(self): ... 
self.__services: dict[type[Any], ServiceEntry[Any]] = {} def _set_service[T](self, key: type[T], value: ServiceEntry[T]) -&gt; None: with self.__lock: self.__services[key] = value def _get_service[T](self, key: type[T]) -&gt; ServiceEntry[T]: with self.__lock: return self.__services[key] @override def register[T]( self, abstract_class: type[T], service_class: type[T], service_life: ServiceLife = ServiceLife.TRANSIENT, ): assert issubclass( service_class, abstract_class ), f&quot;'{service_class.__name__}' does not implement '{abstract_class.__name__}'&quot; service = ServiceEntry[T](service_class, service_life) self._set_service(abstract_class, service) @override def get[T](self, abstract_class: type[T], **overrides: Any) -&gt; T: try: service = self._get_service(abstract_class) ... with service.lock: ... instance = self._create_instance(service, overrides) ... return instance def _create_instance[T](self, service_entry: ServiceEntry[T], overrides: dict[str, Any]) -&gt; T: signature = inspect.signature(service_entry.service_class.__init__) type_hints = get_type_hints(service_entry.service_class.__init__) kwargs = {} for name, param in signature.parameters.items(): ... return service_entry.service_class(**kwargs) </code></pre> <p>I would like to be able to satisfy type checkers so that when I do something like the below, the checker is able to infer the output service type and ideally detect an invalid registration (i.e. the <code>service_class</code> is not a subclass of the <code>abstract_class</code>).</p> <pre class="lang-py prettyprint-override"><code>class IFoo(ABC): ... class Foo(IFoo): pass @runtime_checkable class IBar(Protocol): ... class Bar(IBar): pass services = ServiceContainer() services.register(IFoo, Foo) services.register(IBar, Bar) foo_service = services.get(IFoo) bar_service = services.get(IBar) </code></pre> <p>Maybe I'm being too ambitious with the static typing system?</p>
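A stripped-down sketch of the typing pattern involved: because `T` appears in both the `abstract` parameter and the return annotation, checkers infer `get(IFoo)` as `IFoo` automatically. Catching an invalid registration statically is harder than it looks, though: with `register(abstract: type[T], concrete: type[T])`, a mismatched pair usually makes the checker unify `T` to a common base such as `object` rather than report an error, which is why the runtime `assert` remains the real guard. Names below (`SimpleContainer`, `IFoo`, `Foo`) are illustrative only:

```python
from abc import ABC
from typing import TypeVar

T = TypeVar("T")

class SimpleContainer:
    """Minimal sketch of the typing pattern, not the full container above."""

    def __init__(self) -> None:
        self._services: dict[type, type] = {}

    def register(self, abstract: type[T], concrete: type[T]) -> None:
        # Static checkers unify T to a common base of both arguments, so an
        # invalid pairing often degrades T to `object` instead of erroring;
        # the runtime assert is what actually enforces the relationship.
        assert issubclass(concrete, abstract), (
            f"{concrete.__name__} does not implement {abstract.__name__}"
        )
        self._services[abstract] = concrete

    def get(self, abstract: type[T]) -> T:
        # The stored value is plain `type` to the checker; returning it as T
        # is justified by the assert in register().
        return self._services[abstract]()

class IFoo(ABC): ...
class Foo(IFoo): ...

services = SimpleContainer()
services.register(IFoo, Foo)
foo = services.get(IFoo)  # checkers infer foo: IFoo
print(type(foo).__name__)  # Foo
```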
<python><python-typing>
2024-10-21 23:05:22
0
370
NixonInnes
79,111,951
3,049,987
Python Protocol using keyword-only arguments requires implementation to have different signature
<p>I'm on python 3.10. I'm using PyCharm's default type checker and MyPy. Here is the protocol I defined:</p> <pre class="lang-py prettyprint-override"><code>class OnSubscribeFunc(Protocol): def __call__(self, instrument: str, *, x: int) -&gt; AsyncGenerator: ... </code></pre> <p>When I create a method that implements it like this:</p> <pre class="lang-py prettyprint-override"><code>class A: async def subscribe(self, instrument: str, *, x: int): yield ... a: OnSubscribeFunc = A().subscribe # this apparently is where it gets it wrong </code></pre> <p>I get this warning: <code>Expected type 'OnSubscribeFunc', got '(instrument: str, Any, x: int) -&gt; AsyncGenerator' instead</code></p> <p>If I remove the <code>*</code> from my implementation however, the warning disappears. I would expect it to be the other way around because not having the <code>*</code> allows the implementation to have non-keyword-only arguments which might not be what I'm aiming for with my protocol.</p> <p>So for comparison - this implementation gives no warning:</p> <pre class="lang-py prettyprint-override"><code>class A: async def subscribe(self, instrument: str, x: int): yield ... </code></pre> <p>This does not make any sense to me. Why does it behave like this? Is this expected, or is it a bug in my type checker?</p>
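One half of the behaviour is expected under the typing rules: a keyword-or-positional parameter accepts every call a keyword-only one does, so an implementation without `*` is a valid subtype of a keyword-only protocol. The warning on the `*` version itself may well be checker-specific. A small sketch of the subtyping direction (standalone functions instead of the async methods, for brevity):

```python
from typing import AsyncGenerator, Protocol

class OnSubscribeFunc(Protocol):
    def __call__(self, instrument: str, *, x: int) -> AsyncGenerator: ...

class A:
    # x is keyword-or-positional here; every call the protocol allows,
    # e.g. subscribe("ES", x=1), is still valid, so this is a safe subtype
    async def subscribe(self, instrument: str, x: int):
        yield instrument, x

a: OnSubscribeFunc = A().subscribe

gen = a("ES", x=1)  # calls permitted by the protocol still work at runtime
print(type(gen).__name__)  # async_generator
```

The reverse direction is where narrowing would matter: a protocol with a *positional* parameter could not be satisfied by a keyword-only implementation, since `f("ES", 1)` would fail.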
<python><pycharm><python-typing>
2024-10-21 22:48:34
1
2,046
Tim Woocker
79,111,762
7,657,180
Equivalent vba for python requests
<p>I have the following Python code:</p> <pre><code>import requests url = 'https://moe-register.emis.gov.eg/account/authenticate' data = {'EmailAddress': '471666845@minia4.moe.edu.eg'} response = requests.post(url, data=data) print('Final URL After Redirection:', response.url) </code></pre> <p>I need the VBA equivalent of this code that prints the response URL. In Python, it prints something like <code>https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=e6ffeec9-aee3-480d-9d60-ddbdce181893&amp;response_type=id_token&amp;redirect_uri=https%3a%2f%2fmoe-register.emis.gov.eg%2faccount%2ftrust&amp;scope=openid%20email%20profile&amp;response_mode=form_post&amp;state=315db325-bdb7-4e8a-a7ac-0559970943a7&amp;nonce=018c05a4-73f5-459b-8626-5ae5d1b3b0f6&amp;login_hint=471666845@minia4.moe.edu.eg</code>, but in VBA I didn't get this. This is the VBA code:</p> <pre><code>Sub Test() Dim http As Object Dim url As String Dim email As String Dim status As Long Dim locationHeader As String Dim responseBody As String Dim maxRedirects As Integer Dim redirectCount As Integer url = &quot;https://moe-register.emis.gov.eg/account/authenticate&quot; email = &quot;471666845@minia4.moe.edu.eg&quot; Set http = CreateObject(&quot;WinHTTP.WinHTTPRequest.5.1&quot;) maxRedirects = 10 redirectCount = 0 Do http.Open &quot;POST&quot;, url, False http.setRequestHeader &quot;Content-Type&quot;, &quot;application/x-www-form-urlencoded&quot; http.send &quot;EmailAddress=&quot; &amp; email status = http.status If status &gt;= 300 And status &lt; 400 Then locationHeader = http.getResponseHeader(&quot;Location&quot;) url = locationHeader redirectCount = redirectCount + 1 Else responseBody = http.responseText Exit Do End If Loop While redirectCount &lt; maxRedirects MsgBox &quot;Final URL After Redirection: &quot; &amp; url If Len(responseBody) &gt; 1024 Then MsgBox &quot;Response Body (First 1024 characters): &quot; &amp; Left(responseBody, 1024) Else MsgBox &quot;Response Body: 
&quot; &amp; responseBody End If Set http = Nothing End Sub </code></pre>
<python><vba><python-requests>
2024-10-21 21:07:22
1
9,608
YasserKhalil
79,111,414
11,277,108
scrapy spider using seleniumbase middleware scraping 'chrome-extension' URLs that weren't requested
<p>I'm currently running a scrapy spider using a seleniumbase middleware and for some reason it is scraping <code>chrome-extension</code> URLs. I'm scraping the <code>https://www.atptour.com</code> website and at no point does my scraper request anything other than pages from that website.</p> <p>I've attached below my log of what's happening:</p> <pre><code>2024-10-21 17:43:47: [INFO] Spider opened 2024-10-21 17:43:47: [INFO] Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 2024-10-21 17:43:47: [INFO] Telnet console listening on 127.0.0.1:6027 2024-10-21 17:43:50: [DEBUG] Started executable: `/Users/philipjoss/miniconda3/envs/capra_production/lib/python3.11/site-packages/seleniumbase/drivers/uc_driver` in a child process with pid: 22177 using 0 to output -1 2024-10-21 17:43:51: [DEBUG] Crawled (200) &lt;GET https://www.atptour.com/en/-/tournaments/calendar/tour&gt; (referer: None) 2024-10-21 17:43:54: [DEBUG] Started executable: `/Users/philipjoss/miniconda3/envs/capra_production/lib/python3.11/site-packages/seleniumbase/drivers/uc_driver` in a child process with pid: 22180 using 0 to output -1 2024-10-21 17:43:55: [DEBUG] Crawled (200) &lt;GET https://www.atptour.com/en/-/tournaments/calendar/challenger&gt; (referer: None) 2024-10-21 17:43:55: [DEBUG] Crawled (200) &lt;GET chrome-extension://neajdppkdcdipfabeoofebfddakdcjhd/audio.html&gt; (referer: chrome-extension://neajdppkdcdipfabeoofebfddakdcjhd/audio.html) </code></pre> <p>There are two successful responses from web pages I have requested and then suddenly a <code>chrome-extension</code> URL appears. 
What's also weird is that the referer is listed as the same address which has never been requested previously.</p> <p>To make things more interesting I've run the code on another machine and it ran fine there with the same package versions: scrapy 2.11.2 and seleniumbase 4.28.5.</p> <p>This is the spider:</p> <pre><code>from scrapy import Request, Spider from scrapy.http.response.html import HtmlResponse class Production(Spider): name = &quot;atp_production&quot; start_urls = [ &quot;https://www.atptour.com/en/-/tournaments/calendar/tour&quot;, &quot;https://www.atptour.com/en/-/tournaments/calendar/challenger&quot;, ] def start_requests(self): for url in self.start_urls: yield Request( url=url, callback=self._parse_calendar, meta=dict(dont_redirect=True), ) def _parse_calendar(self, response: HtmlResponse): json_str = response.xpath(&quot;//body//text()&quot;).get() </code></pre> <p>And this is the middleware:</p> <pre><code>class SeleniumBase: @classmethod def from_crawler(cls, crawler: Crawler): middleware = cls(crawler.settings) crawler.signals.connect(middleware.spider_closed, signals.spider_closed) return middleware def __init__(self, settings: dict[str, Any]) -&gt; None: self.driver = sb.Driver( uc=settings.get(&quot;UNDETECTABLE&quot;, None), headless=settings.get(&quot;HEADLESS&quot;, None), user_data_dir=settings.get(&quot;USER_DATA_DIR&quot;, None), ) def spider_closed(self, *_) -&gt; None: self.driver.quit() def process_request(self, request: Request, spider: Spider) -&gt; HtmlResponse: self.driver.get(request.url) return HtmlResponse( self.driver.current_url, body=self.driver.page_source, encoding=&quot;utf-8&quot;, request=request, ) </code></pre> <p>Any ideas on what might be happening?</p> <hr /> <p><strong>Update:</strong></p> <p>scrapy seems to have gone completely haywire now. It's not sending responses to the correct callbacks for all the downstream parsing methods (that aren't in the MRE above) about 95% of the time. 
I can't really add the logic to the MRE above as it's very complex and SO will complain that I have too much code in my question. Suffice to say I've triple checked everything and besides - it all runs fine on my other machine so the references are definitely all correct.</p> <p>I've gone nuclear and reinstalled scrapy and seleniumbase but that hasn't solved the issue :(</p> <p><strong>Update 2:</strong></p> <p>So I've been poking around some more and a partial diagnosis of the issue is that the website is redirecting to a different address but the response from scrapy is listed as 200. I'm assuming this bypasses the <code>dont_redirect</code> instruction. Interestingly, simply re-creating the request and submitting it again <strong>always</strong> returns a non-redirected response. This allowed me to come up with a solution (I'll post it in a bit) that works for this case, but I'd still be interested if anyone had a better explanation for what might be going on!!!</p>
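Building on the partial diagnosis in Update 2 (the browser silently lands on a different URL while the middleware reports 200), the retry idea can be sketched as a small fetch helper that re-requests until `current_url` matches what was asked for. The class and the stub driver below are hypothetical stand-ins, not the seleniumbase API:

```python
class RedirectRetryFetcher:
    """Sketch: if the browser ends up on a different URL than requested,
    fetch again (hypothetical helper mirroring the middleware's driver use)."""

    def __init__(self, driver, max_retries=3):
        self.driver = driver
        self.max_retries = max_retries

    def fetch(self, url):
        for _ in range(self.max_retries):
            self.driver.get(url)
            if self.driver.current_url == url:
                return self.driver.page_source
        raise RuntimeError(f"still redirected after {self.max_retries} tries: {url}")

class StubDriver:
    """Stand-in for the real driver: redirects on the first request only."""

    def __init__(self):
        self.calls = 0
        self.current_url = ""
        self.page_source = ""

    def get(self, url):
        self.calls += 1
        if self.calls == 1:
            self.current_url = "chrome-extension://neajdppkdcdipfabeoofebfddakdcjhd/audio.html"
        else:
            self.current_url = url
            self.page_source = "<html>ok</html>"

fetcher = RedirectRetryFetcher(StubDriver())
print(fetcher.fetch("https://www.atptour.com/en/-/tournaments/calendar/tour"))
# <html>ok</html>
```

In the real middleware this check would sit inside `process_request`, comparing `self.driver.current_url` against `request.url` before building the `HtmlResponse`.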
<python><scrapy><seleniumbase>
2024-10-21 19:00:01
1
1,121
Jossy
79,111,113
1,038,501
Which Shapely predicate should be used to distinquish between these LinearRings
<p>In my project there are two use-cases. The red and blue shapes are not overlapping in the first case, but they do in the second.</p> <p><a href="https://i.sstatic.net/eA45LnMv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eA45LnMv.png" alt="case 1" /></a><br /> <strong>case #1</strong></p> <p><a href="https://i.sstatic.net/FrKa6TVo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FrKa6TVo.png" alt="case 2" /></a><br /> <strong>case #2</strong></p> <p>In my test code I tried various predicates from Shapely but could not find anyway to tell them apart.</p> <pre><code>from shapely import LinearRing # first use-case blue_sq = LinearRing([(1, 1), (1, 2), (2, 2), (2, 1), (1, 1)]) red_one = LinearRing([(0, 0), (0, 3), (3, 3), (3, 2), (1, 2), (1, 0), (0, 0)]) print(blue_sq.overlaps(red_one)) print(blue_sq.intersects(red_one)) print(blue_sq.crosses(red_one)) print(blue_sq.contains(red_one)) print(blue_sq.touches(red_one)) print(blue_sq.within(red_one)) # second use-case blue_ln = LinearRing([(2, 1), (2, 2), (7, 2), (7, 1), (2, 1)]) red_two = LinearRing([(1, 0), (1, 1), (2, 1), (2, 2), (3, 2), (3, 0), (1, 0)]) print(blue_ln.overlaps(red_two)) print(blue_ln.intersects(red_two)) print(blue_ln.crosses(red_two)) print(blue_ln.contains(red_two)) print(blue_ln.touches(red_two)) print(blue_ln.within(red_two)) </code></pre> <p>The results are summarised in the table below.</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>Shapely predicate</th> <th>case 1</th> <th>case 2</th> </tr> </thead> <tbody> <tr> <td>overlaps</td> <td>True</td> <td>True</td> </tr> <tr> <td>intersects</td> <td>True</td> <td>True</td> </tr> <tr> <td>crosses</td> <td>False</td> <td>False</td> </tr> <tr> <td>contains</td> <td>False</td> <td>False</td> </tr> <tr> <td>touches</td> <td>False</td> <td>False</td> </tr> <tr> <td>within</td> <td>False</td> <td>False</td> </tr> </tbody> </table></div> <p>Both cases appear to intersect and overlap, but neither 
case crosses, contains, touches or lies within the other. I feel sure that I am missing something obvious :-)</p>
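One thing worth noting is that a `LinearRing` is a 1-dimensional geometry: the predicates compare the boundary curves, not the areas they enclose, and in both cases the two rings share collinear segments, which is why `overlaps` is `True` for both. Building `Polygon`s from the same coordinates (reusing the coordinates from the question) makes the two cases distinguishable, if filled shapes match the intent:

```python
from shapely.geometry import Polygon

# Same coordinates as the question, but as filled Polygons rather than
# 1-dimensional LinearRings
blue_sq = Polygon([(1, 1), (1, 2), (2, 2), (2, 1)])
red_one = Polygon([(0, 0), (0, 3), (3, 3), (3, 2), (1, 2), (1, 0)])

blue_ln = Polygon([(2, 1), (2, 2), (7, 2), (7, 1)])
red_two = Polygon([(1, 0), (1, 1), (2, 1), (2, 2), (3, 2), (3, 0)])

# Case 1: shapes only share boundary, interiors are disjoint
print(blue_sq.overlaps(red_one), blue_sq.touches(red_one))  # False True

# Case 2: interiors genuinely intersect
print(blue_ln.overlaps(red_two), blue_ln.touches(red_two))  # True False
```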
<python><geometry><shapely>
2024-10-21 17:18:09
1
716
snow6oy
79,111,112
594,925
Executing code on class (metaclass instance) destruction in python
<p>We have some API that should be shut down (e.g. <code>api.shutdown()</code>) just once per python process and specific only to a particular class (e.g. <code>ControllerA</code>) from a hierarchy of controllers (e.g. <code>Controller</code> inherited by <code>ControllerA</code>, ..., <code>ControllerZ</code>). Can I add a &quot;destructor logic&quot; in python in some reasonable way when destroying the class itself and not just any of its instances? I understand that in Python classes are not explicitly destroyed in the same way as objects but rather garbage collected when there are no existing references to them but perhaps there is some way to achieve the above effect? What I want is to perform the <code>api.shutdown()</code> call once for all instances but not explicitly as it should not be done for instances of <code>ControllerB</code> ..., <code>ControllerZ</code>. Could using metaclasses or something like metaclass destructors achieve anything like it?</p>
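One sketch of this, assuming CPython: rather than relying on a `__del__` on the metaclass, the metaclass can register a `weakref.finalize` on the class object itself. The finalizer runs at most once per process, either when the class object is garbage-collected or at interpreter exit, which matches the once-per-process requirement; `api_shutdown` below is a stand-in for the real `api.shutdown()`:

```python
import gc
import weakref

shutdown_log = []

def api_shutdown(name):
    # Stand-in for the real api.shutdown() (hypothetical)
    shutdown_log.append(name)

class ShutdownMeta(type):
    def __new__(mcls, name, bases, namespace, **kwargs):
        cls = super().__new__(mcls, name, bases, namespace, **kwargs)
        # Only classes declaring the flag in their *own* body register a
        # finalizer, so ControllerB..Z are unaffected. The callback must not
        # hold a strong reference to cls, hence passing the name string.
        if namespace.get("needs_api_shutdown", False):
            weakref.finalize(cls, api_shutdown, name)
        return cls

class Controller(metaclass=ShutdownMeta):
    pass

class ControllerA(Controller):
    needs_api_shutdown = True

class ControllerB(Controller):
    pass

del ControllerA, ControllerB
gc.collect()  # class objects live in reference cycles; CPython frees them here
print(shutdown_log)  # ['ControllerA']
```

If the classes survive until the end of the process (the usual case), the finalizer instead fires during interpreter shutdown, still exactly once and only for the flagged class.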
<python><garbage-collection><metaclass>
2024-10-21 17:17:26
2
628
pevogam
79,111,108
5,527,374
Drawing rectangle in PNG file
<p>TLDR: My goal is simple. I have a PNG file. I want to draw a rectangle in it using Python and save it to a new file.</p> <p>I have a PNG file (attached to this post). All I want to do is draw a rectangle in the image using Python and save the image to a new file. Here's code that doesn't work:</p> <pre><code>import png org_path = './arragon.png' altered_path = './altered.png' f = open(org_path, 'rb') image = png.Reader(file=f) width, height, rows, metadata = image.read() for row in rows: for i in range(len(row)): row[i] = 255 writer = png.Writer( width=width, height=height, bitdepth=metadata['bitdepth'], greyscale=metadata['greyscale'], alpha=metadata['alpha'] ) writer.write(open(altered_path, 'wb'), rows) </code></pre> <p>That last line produces the following error:</p> <pre><code>Traceback (most recent call last): File &quot;/home/miko/tmp/alter-image/./edit.py&quot;, line 23, in &lt;module&gt; writer.write(open(altered_path, 'wb'), rows) File &quot;/home/miko/.local/lib/python3.10/site-packages/png.py&quot;, line 670, in write raise ProtocolError( png.ProtocolError: ProtocolError: rows supplied (0) does not match height (450) </code></pre> <p>Now, to break it down, I tried just copying the image object to a Writer, without any changes:</p> <pre><code>import png f = open(org_path, 'rb') image = png.Reader(file=f) width, height, rows, metadata = image.read() writer = png.Writer( width=width, height=height, bitdepth=metadata['bitdepth'], greyscale=metadata['greyscale'], alpha=metadata['alpha'] ) writer.write(open(altered_path, 'wb'), rows) </code></pre> <p>Then I get this message:</p> <pre><code>Traceback (most recent call last): File &quot;/home/miko/tmp/alter-image/./edit.py&quot;, line 23, in &lt;module&gt; writer.write(open(altered_path, 'wb'), rows) File &quot;/home/miko/.local/lib/python3.10/site-packages/png.py&quot;, line 668, in write nrows = self.write_passes(outfile, check_rows(rows)) File 
&quot;/home/miko/.local/lib/python3.10/site-packages/png.py&quot;, line 703, in write_passes return self.write_packed(outfile, rows) File &quot;/home/miko/.local/lib/python3.10/site-packages/png.py&quot;, line 738, in write_packed for i, row in enumerate(rows): File &quot;/home/miko/.local/lib/python3.10/site-packages/png.py&quot;, line 658, in check_rows raise ProtocolError( png.ProtocolError: ProtocolError: Expected 633 values but got 211 values, in row 0 </code></pre> <p>I can't make heads or tails of what's wrong. Can anybody please tell me how to do this?</p> <p><a href="https://i.sstatic.net/oSZZz5A4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oSZZz5A4.png" alt="arragon.png" /></a></p>
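The two tracebacks have separate causes, as far as I can tell: `rows` from `png.Reader.read()` is a one-shot generator (the first loop exhausts it, hence "rows supplied (0)"), and the source PNG is palette- or greyscale-encoded, so each row carries fewer values than an RGB writer expects. If the Pillow library is an option (a different library than pypng — an assumption, not the asker's setup), `ImageDraw` side-steps both issues; the demo below draws on an in-memory image so it runs without the attached file:

```python
from PIL import Image, ImageDraw

def draw_rectangle(src_path, dst_path, box, color=(255, 255, 255, 255)):
    """Open a PNG, draw a rectangle outline, and save to a new file."""
    im = Image.open(src_path).convert('RGBA')  # normalises palette images
    draw = ImageDraw.Draw(im)
    draw.rectangle(box, outline=color, width=2)
    im.save(dst_path)

# demo on an in-memory image (no file needed)
im = Image.new('RGBA', (100, 80), (0, 0, 0, 255))
ImageDraw.Draw(im).rectangle([(10, 10), (60, 40)],
                             outline=(255, 255, 255, 255), width=2)
print(im.getpixel((10, 10)))  # (255, 255, 255, 255) -- on the outline
```

With the question's paths this would be `draw_rectangle('./arragon.png', './altered.png', [(10, 10), (60, 40)])`.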
<python><png>
2024-10-21 17:15:32
2
925
tscheingeld
79,110,982
6,394,617
exported jupyter notebook has different syntax highlighting
<p>When I have this Python code in a Jupyter notebook:</p> <pre><code>df = pd.read_csv(&quot;data.csv&quot;, index_col=0) print(df.shape) </code></pre> <p>The <code>read_csv</code> and <code>shape</code> are blue, the <code>&quot;data.csv&quot;</code> is red, and the <code>0</code> and <code>print</code> are green.</p> <p>When I export the notebook to HTML, the <code>read_csv</code>, <code>shape</code>, and <code>print</code> no longer have syntax highlighting.</p> <p>Also, in my jupyter notebook:</p> <pre><code>for v in my_values: something </code></pre> <p>has both the <code>for</code> and the <code>in</code> as green, but when I export it to HTML, the <code>for</code> is green and the <code>in</code> is purple.</p> <p><em><strong>My question:</strong></em> is there some simple fix to this, to make the exported HTML the same as the notebook?</p> <p>I would like the highlighting to be the same, but I don't have time to code up a solution to this.</p> <p><em><strong>More Information</strong></em></p> <p>I tried exporting from Jupyter notebook and Jupyter Lab. Both gave the same results.</p> <p>I also installed pandoc so that I could export as LaTeX. That has the same issue.</p> <p>I inspected the HTML, and found that the highlighting is done with span tags and CSS styles. 
Here is the HTML for the two code chunks:</p> <pre><code>&lt;div class=&quot;highlight hl-ipython3&quot;&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;df&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;pd&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;read_csv&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;data.csv&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;index_col&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;print&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;df&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;shape&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;/pre&gt;&lt;/div&gt; </code></pre> <p>and</p> <pre><code>&lt;div class=&quot;highlight hl-ipython3&quot;&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;v&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;my_values&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;something&lt;/span&gt; &lt;/pre&gt;&lt;/div&gt; </code></pre> <p>I found the span tags in the CSS:</p> <pre><code>.highlight .s2 { color: var(--jp-mirror-editor-string-color) } /* Literal.String.Double */ .highlight .mi { color: var(--jp-mirror-editor-number-color) } /* Literal.Number.Integer */ .highlight .k { color: var(--jp-mirror-editor-keyword-color); font-weight: bold } /* Keyword */ .highlight .ow { color: var(--jp-mirror-editor-operator-color); font-weight: bold } /* Operator.Word */ </code></pre> <p>I did not find any highlighting 
for spans &quot;n&quot; or &quot;nb&quot;, which is presumably why they were not highlighted in the HTML document.</p> <p>I added one for &quot;nb&quot;, using the same color as for &quot;k&quot;. I also changed the color for &quot;ow&quot; to match the color for &quot;k&quot;. That solved half of the highlighting issues.</p> <p>However, I cannot just change the color for &quot;n&quot;, since some of the &quot;n&quot; are just user defined names (i.e. variables) whereas others are methods that I would like to be blue.</p>
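Since the missing colours come down to absent Pygments rules in the exported stylesheet, one workaround (a sketch, not an nbconvert feature — the colour variables are assumptions about the Jupyter theme) is to post-process the exported HTML and splice extra rules into its first `<style>` block:

```python
# Hypothetical post-processing: add Pygments rules for the 'nb' and 'ow'
# span classes so they get coloured like they do inside the notebook.
EXTRA_CSS = (
    ".highlight .nb { color: var(--jp-mirror-editor-keyword-color) }\n"
    ".highlight .ow { color: var(--jp-mirror-editor-keyword-color);"
    " font-weight: bold }\n"
)

def inject_css(html: str, css: str = EXTRA_CSS) -> str:
    # insert just before the first closing </style> tag
    return html.replace("</style>", css + "</style>", 1)

sample = "<style>.highlight .k { color: green }\n</style><body></body>"
patched = inject_css(sample)
print(".highlight .nb" in patched)  # True
```

This cannot help with bare `n` spans, though, for the reason given above: user variables and method names share that class.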
<python><jupyter-notebook><syntax-highlighting>
2024-10-21 16:42:00
1
913
Joe
79,110,973
1,818,059
Can I tell in python if my class is used in context?
<p>I am working on a small hobby project using Python. I wish to &quot;do it right&quot;, and follow common guidelines.</p> <p>Following an example package using SQLite, I can make things work fine by using a context manager.</p> <p>Example:</p> <pre><code>import mypackage as mp with mp.democlass() as dc: dc.dosomething('some_parameter') </code></pre> <p>This obviously calls the <code>__enter__</code> and <code>__exit__</code> methods.</p> <p>If I wish to package my work so that it can also be used without the <code>with</code> statement, for example:</p> <pre><code>dc = mp.democlass() dc.dosomething('some_parameter') </code></pre> <p>What would be the &quot;correct&quot; approach? Can I determine in the class whether it is being used in a <code>with</code> context, or do I need to write an initialization method identical to <code>__enter__</code>, for example to open a database connection?</p> <p>I hope the question makes sense. Maybe I need a boilerplate module which supports both ways.</p>
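A common pattern (a sketch, not the only "correct" answer) is to acquire the resource in `__init__`, have `__enter__` simply return `self`, and have `__exit__` delegate to a public `close()` that direct users call themselves; then the class never needs to detect whether it is inside a `with`:

```python
class DemoClass:
    """Usable both directly and via the with statement."""

    def __init__(self):
        self.closed = False          # stands in for opening a connection

    def dosomething(self, parameter):
        if self.closed:
            raise RuntimeError('already closed')
        return f'did {parameter}'

    def close(self):
        self.closed = True           # stands in for closing the connection

    def __enter__(self):
        return self                  # resource already acquired in __init__

    def __exit__(self, exc_type, exc, tb):
        self.close()
        return False                 # do not suppress exceptions

# context-managed use
with DemoClass() as dc:
    print(dc.dosomething('some_parameter'))  # did some_parameter

# plain use: the caller is responsible for close()
dc = DemoClass()
print(dc.dosomething('some_parameter'))      # did some_parameter
dc.close()
```

For classes that only need `close()`, `contextlib.closing()` adds the `with` support without writing `__enter__`/`__exit__` at all.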
<python>
2024-10-21 16:37:57
1
1,176
MyICQ
79,110,954
125,244
How to format printing an array in Python
<p>I have an array of 10 elements and I can print each of the elements on a new line, formatted 6.2f, with</p> <p><code>print(f'{myArray:6.2f}', sep=&quot;\n&quot;)</code></p> <p>But I would like to create a string containing what needs to be printed, add a few things, and print that string, like:</p> <pre><code> text = 'something ' + f'{myArray:6.2f}' + ' rest' print(text) </code></pre> <p>How can I make it so that each element of myArray ends up on its own line, formatted, inside text?</p>
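One way, sketched here with made-up sample values, is to format each element individually and join the pieces with newlines:

```python
my_array = [3.14159, 2.71828, 1.41421]   # stand-in for myArray

# format each element, then join the formatted strings with newlines
body = '\n'.join(f'{v:6.2f}' for v in my_array)
text = 'something\n' + body + '\nrest'
print(text)
```

This prints `something`, then each value right-aligned to width 6 with two decimals on its own line, then `rest`.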
<python><list><string-formatting>
2024-10-21 16:32:00
3
1,110
SoftwareTester
79,110,939
2,437,514
Break up a sparse 2D array or table into multiple subarrays or subtables
<p>I want to find a way to &quot;lasso around&quot; a bunch of contiguous/touching values in a sparse table, and output a set of new tables. If any values are &quot;touching&quot;, they should be part of a subarray together.</p> <p>For example: if I have the following sparse table/array:</p> <pre><code>[[0 0 0 1 1 0 0 0 1 1 1 1 0 0 0 0 0 0 0] [0 0 0 1 1 0 0 0 1 1 1 1 1 0 0 1 1 0 0] [0 0 0 0 0 0 0 0 1 1 1 0 0 0 1 1 1 1 0] [0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0]] </code></pre> <p>The algorithm should &quot;find&quot; subtables/subarrays. It would identify them like this:</p> <pre><code>[[0 0 0 1 1 0 0 0 2 2 2 2 0 0 0 0 0 0 0] [0 0 0 1 1 0 0 0 2 2 2 2 2 0 0 3 3 0 0] [0 0 0 0 0 0 0 0 2 2 2 0 0 0 3 3 3 3 0] [0 0 0 0 0 0 0 2 0 0 0 0 0 3 0 0 0 0 0]] </code></pre> <p>But the final output should be a series subarrays/subtables like this:</p> <pre><code>[[1 1] [1 1]] </code></pre> <pre><code>[[0 1 1 1 1 0] [0 1 1 1 1 1] [0 1 1 1 0 0] [1 0 0 0 0 0]] </code></pre> <pre><code>[[0 0 1 1 0] [0 1 1 1 1] [1 0 0 0 0]] </code></pre> <p>How can I do this in python? I've tried looking at sk-image and a few things seem to be similar to what I'm trying to do, but nothing I have seen seems to fit quite right.</p> <p>EDIT: it looks like <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.label.html#scipy.ndimage.label" rel="nofollow noreferrer"><code>scipy.ndimage.label</code></a> is extremely close to what I want to do, but it will break the corner-case values into their own separate arrays. So it's not quite right. EDIT: ah ha, the <code>structure</code> argument is what I am after. If I get time I will update my question with an answer.</p>
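As the edits note, `scipy.ndimage.label` with a 3x3 `structure` (8-connectivity) keeps corner-touching cells in one component; `find_objects` then yields each component's bounding box. A sketch using the example data:

```python
import numpy as np
from scipy import ndimage

a = np.array([
    [0,0,0,1,1,0,0,0,1,1,1,1,0,0,0,0,0,0,0],
    [0,0,0,1,1,0,0,0,1,1,1,1,1,0,0,1,1,0,0],
    [0,0,0,0,0,0,0,0,1,1,1,0,0,0,1,1,1,1,0],
    [0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0],
])

# a 3x3 structure means 8-connectivity: diagonal touches join a component
labels, n = ndimage.label(a, structure=np.ones((3, 3), dtype=int))

# slice out each component's bounding box, zeroing any other components
subarrays = [np.where(labels[s] == i + 1, a[s], 0)
             for i, s in enumerate(ndimage.find_objects(labels))]

print(n)             # 3
print(subarrays[0])  # [[1 1]
                     #  [1 1]]
```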
<python><pandas><numpy><scipy>
2024-10-21 16:28:00
1
45,611
Rick
79,110,884
5,218,153
Python Poetry - Pointing to envs.TOML Which is Invalid, Rather Than pyproject.TOML
<p>I've recently come back to a project which I'd left for a while. I am now attempting to add a new package via <code>poetry add xxx</code> but I receive an error -</p> <blockquote> <p>Invalid TOML file C:/Users/username/AppData/Local/pypoetry/Cache/virtualenvs/envs.toml: Unexpected character: ']' at line 5 col 1</p> </blockquote> <p>This is an actual TOML file that exists, but why is it pointing there all of a sudden? I have a pyproject.toml within my project root folder that has worked just fine up until now.</p> <p>Is it possible to fix my Poetry install so that it recognises the pyproject.toml?</p> <p><em>I should point out that I don't have a huge amount of experience with Poetry</em></p>
<python><python-poetry>
2024-10-21 16:13:02
1
642
Jamsandwich
79,110,878
5,094,589
I want to match 6 or fewer digits in a string, if there are "/" or "-" between them
<p>It should match:</p> <ul> <li>&quot;abc 12-34 def&quot; precisely &quot;12-34&quot;</li> <li>&quot;Phone number: 123/45&quot;, precisely &quot;123/45&quot;</li> <li>&quot;sequence: 12//-34&quot;, precisely &quot;12//-34&quot;</li> <li>&quot;My code is 1-2-3-4&quot;, precisely &quot;1-2-3-4&quot;</li> </ul> <p>It should <em>not</em> match:</p> <ul> <li><p>&quot;too many: 1234-567-89&quot;</p> </li> <li><p>&quot;too many; 1234-567&quot;</p> </li> </ul> <p>Here is what I have tried:</p> <pre><code>pattern = r'\d([\/-]\d){1,5}' </code></pre> <p>but didn't succeed</p>
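One approach, sketched below: match runs of digits joined by one or more `/` or `-` (so `12//-34` works), guard both ends so a longer run cannot partially match, and enforce the six-digit cap by counting digits after matching, since a regex quantifier cannot easily count digits across groups:

```python
import re

# digit runs joined by one or more '/' or '-' (so '12//-34' matches),
# with guards so a match never starts or stops inside a longer run
PATTERN = re.compile(r'(?<![\d/-])\d+(?:[/-]+\d+)+(?![\d/-])')

def find_codes(text, max_digits=6):
    """Return matches whose total digit count is at most max_digits."""
    return [m.group() for m in PATTERN.finditer(text)
            if len(re.sub(r'\D', '', m.group())) <= max_digits]

print(find_codes('abc 12-34 def'))          # ['12-34']
print(find_codes('sequence: 12//-34'))      # ['12//-34']
print(find_codes('My code is 1-2-3-4'))     # ['1-2-3-4']
print(find_codes('too many: 1234-567-89'))  # []
```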
<python><regex>
2024-10-21 16:10:24
1
1,106
Daniil Yefimov
79,110,798
865,169
Can I implement a custom insert method in a SQLAlchemy ORM-mapped class?
<p>I have an ORM-mapped class <code>Data</code> which includes a 'valid_until' (datetime) attribute I use for versioning of rows. This means that rows in the table can be duplicates with respect to some attributes, but the 'valid_until' attribute sets them apart, allowing me to identify which one is the most recent. The most recent such row has a future &quot;infinity&quot; 'valid_until' and all previous rows have the 'valid_until' set to when they were superseded.</p> <p>I can write a custom insert method in the class to have the side-effect that when I insert a new row, it updates the 'valid_until' attribute of any existing row that would be a duplicate in order to supersede it. E.g.:</p> <pre class="lang-py prettyprint-override"><code>data = Data() # ORM-mapped object data.insert() </code></pre> <p>Can I implement this in a way so that the custom insert behaviour is used when I call <code>sqlalchemy.orm.Session.add</code> and perhaps <code>sqlalchemy.orm.Session.bulk_save_objects</code>? I.e.:</p> <pre class="lang-py prettyprint-override"><code>session = sqlalchemy.orm.Session() session.add(data) session.commit() </code></pre> <p>This is so I would not have to make sure to use the custom <code>Data.insert</code> method, but could just rely on the usual mechanisms of SQLAlchemy. I have looked for, for example, an <code>__insert__</code> method in ORM-mapped classes to override, but I have not found anything like that.</p>
<python><sqlalchemy>
2024-10-21 15:47:56
0
1,372
Thomas Arildsen
79,110,759
4,451,315
Replace all values in array according to mapping
<p>Say I have:</p> <pre class="lang-py prettyprint-override"><code>import pyarrow as pa arr = pa.array([1, 3, 2, 2, 1, 3]) </code></pre> <p>I'd like to replace values according to <code>{1: 'one', 2: 'two', 3: 'three'}</code> and to end up with:</p> <pre><code>&lt;pyarrow.lib.LargeStringArray object at 0x7f8dd0b3c820&gt; [ &quot;one&quot;, &quot;three&quot;, &quot;two&quot;, &quot;two&quot;, &quot;one&quot;, &quot;three&quot; ] </code></pre> <p>I can do this by going via Polars:</p> <pre><code>In [19]: pl.from_arrow(arr).replace_strict({1: 'one', 2: 'two', 3: 'three'}, return_dtype=pl.String).to_arrow() Out[19]: &lt;pyarrow.lib.LargeStringArray object at 0x7f8dd0b3c820&gt; [ &quot;one&quot;, &quot;three&quot;, &quot;two&quot;, &quot;two&quot;, &quot;one&quot;, &quot;three&quot; ] </code></pre> <p>Is there a way to do it with just PyArrow?</p>
<python><pyarrow>
2024-10-21 15:38:33
1
11,062
ignoring_gravity
79,110,748
7,123,033
How to incrementally train a Face Recognition Model without retraining from scratch?
<p>I'm building a face recognition model. I've already trained a model using the images of two people (Cristiano Ronaldo and Lionel Messi). Now, I want to add more people (e.g., Maria Sharapova) to the model without retraining everything from scratch.</p> <p>Is there a way to train a model a new model using the new dataset? If so, how can I efficiently merge the new training data with the existing model?</p> <p>Here is my existing code</p> <pre class="lang-py prettyprint-override"><code>import torch import torchvision from torchvision import datasets, models, transforms import os import ssl ssl._create_default_https_context = ssl._create_unverified_context data_transforms = { 'train': transforms.Compose([ transforms.Resize((224, 224)), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'test': transforms.Compose([ transforms.Resize((224, 224)), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } data_dir = './new_dataset' image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'test']} dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4, shuffle=True) for x in ['train', 'test']} class_names = image_datasets['train'].classes model = models.resnet18(pretrained=True, progress=True) num_classes = len(class_names) model.fc = torch.nn.Linear(model.fc.in_features, num_classes) device = torch.device(&quot;cpu&quot;) model = model.to(device) criterion = torch.nn.CrossEntropyLoss() optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9) num_epochs = 10 for epoch in range(num_epochs): for inputs, labels in dataloaders['train']: inputs = inputs.to(device) labels = labels.to(device) optimizer.zero_grad() outputs = model(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() torch.save(model.state_dict(), 'model.pth') model.eval() correct = 0 total = 0 with torch.no_grad(): for 
inputs, labels in dataloaders['test']: inputs = inputs.to(device) labels = labels.to(device) outputs = model(inputs) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() accuracy = 100 * correct / total print(f&quot;Accuracy on the test set: {accuracy}%&quot;) </code></pre> <p>The folder, <code>./new_dataset</code> looks like this</p> <pre><code>new_dataset/ --test/ ----cristiano_ronaldo ----lione_messi --train/ ----cristiano_ronaldo ----lione_messi </code></pre>
<python><machine-learning><pytorch>
2024-10-21 15:34:20
0
321
Sammy
79,110,735
18,203,140
Filling outside a PolyLine in Leaflet
<p>I have a set of coordinates that gives a closed polygon. Using the <code>folium.PolyLine()</code> function in Python I can easily fill &quot;inside&quot; but not &quot;outside&quot;. The actual output is left, the desired one right in figure.</p> <p><a href="https://i.sstatic.net/LpM2NBdr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LpM2NBdr.png" alt="enter image description here" /></a></p> <p>The code used in Python (in order to obtain image on the left) is the following, where I decided to use <code>leafmap.folium.Map()</code> object.</p> <pre><code>import folium import leafmap.foliumap as leafmap m = leafmap.Map( layers_control=True, draw_control=False, measure_control=False, fullscreen_control=False, center=(map_center_x,map_center_y), zoom=z_start ) m.add_basemap('OpenTopoMap') folium.PolyLine( locations=coords, fill_color=&quot;#D3D3D3&quot;, color=&quot;#00B0F0&quot;, weight=line_weight, opacity=fill_opacity ).add_to(m) </code></pre> <p>I want to revert the area of filling, but I cannot find a way. Is there a way, using <code>folium.PolyLine()</code> to obtain the result on the right instead?</p>
<python><python-3.x><geopandas><folium>
2024-10-21 15:31:43
1
309
Stefø
79,110,663
11,233,365
Python: Package installed with meson cannot be found by pip
<p>I have been trying to solve an issue compiling a Python package for Windows, and part of that involves building <code>contourpy</code> in the UCRT64 environment on MSYS2. I have been able to successfully build the package in <code>virtualenv</code> using the following commands:</p> <pre class="lang-bash prettyprint-override"><code>$ python -m virtualenv ~/envs/contourpy $ source ~/envs/contourpy/bin/activate # Compile contourpy $ git clone https://github.com/contourpy/contourpy.git $ cd contourpy $ meson setup builddir $ ninja -C builddir $ meson install -C builddir --destdir /home/guest/envs/contourpy </code></pre> <p>These options allow me to build contourpy successfully. However when I try running <code>pip list</code>, it doesn't show up in this environment.</p> <p>When I try to run <code>pip install .</code> directly, I get the following error messages:</p> <pre class="lang-none prettyprint-override"><code> Found ninja.EXE-1.11.1.git.kitware.jobserver-1 at C:/msys64/tmp/pip-build-env-0ba4jhpg/normal/bin/ninja.EXE Visual Studio environment is needed to run Ninja. It is recommended to use Meson wrapper: C:/msys64/tmp/pip-build-env-0ba4jhpg/overlay/bin/meson compile -C . + meson compile --ninja-args=['-v'] </code></pre> <p>Is there a way I can get this package to perform the compilation while using <code>pip install .</code>?</p>
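A hedged sequence (package names assumed from MSYS2's UCRT64 naming; verify with `pacman -Ss meson`): pip's isolated build environment fetches its own `ninja`, which on Windows expects MSVC, so disabling build isolation makes pip reuse the MSYS2 tools that already worked for the manual build:

```shell
# install the UCRT64-native build tools (names assumed; check pacman -Ss)
pacman -S --needed mingw-w64-ucrt-x86_64-meson mingw-w64-ucrt-x86_64-ninja
python -m pip install meson-python

# build in-place with the environment's own tools
python -m pip install . --no-build-isolation
```

With `--no-build-isolation`, every build requirement from contourpy's `pyproject.toml` (e.g. `meson-python`, `pybind11`) must be installed beforehand.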
<python><pip><meson-build>
2024-10-21 15:08:44
1
301
TheEponymousProgrammer
79,110,285
3,224,483
Why doesn't dropdown appear when I attach DataValidation to the column?
<p>I want to create a spreadsheet that only allows two values in column A. Here is my attempt:</p> <pre><code>import openpyxl from openpyxl.worksheet.datavalidation import DataValidation book = openpyxl.Workbook() sheet = book.active for i in range(1, 11): sheet.cell(row=i, column=1).value = 'Acceptable' sheet.cell(row=i, column=2).value = 'Foo' sheet.cell(row=i, column=3).value = 'Fah' validator = DataValidation(type='list', formula1='Acceptable,Allowed,Permitted', allow_blank=True, showDropDown=False) validator.add('A1:A10') book.save('test.xlsx') </code></pre> <p>Actual output:</p> <p><a href="https://i.sstatic.net/fnVYH86t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fnVYH86t.png" alt="actual output in Excel, with no dropdown showing" /></a></p> <p>Desired output (manually done by adding the data validation in Excel):</p> <p><a href="https://i.sstatic.net/TZv5YuJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TZv5YuJj.png" alt="desired output in excel, with dropdown showing" /></a></p> <p>The <code>showDropDown=False</code> is not a typo. I tried True and False there, but according to <a href="https://stackoverflow.com/a/63844294/3224483">some sources</a>, showDropDown is a misnomer and showDropDown=False should make it appear.</p> <p>I tried adding the data validation to individual cells in a loop, but that didn't work either. Here's that attempt:</p> <pre><code>validator = DataValidation(type='list', formula1='Acceptable,Allowed,Permitted', allow_blank=True, showDropDown=False) for i in range(1, 4): sheet.cell(row=i, column=1).value = 'Acceptable' sheet.cell(row=i, column=2).value = 'Foo' sheet.cell(row=i, column=3).value = 'Fah' validator.add(sheet.cell(row=i, column=1)) </code></pre> <p>I also tried copying and pasting <a href="https://openpyxl.readthedocs.io/en/3.1/validation.html" rel="nofollow noreferrer">the example from openpyxl's documentation</a> and adding a <code>wb.save('test.xlsx')</code> to the end. 
They add the validator directly to the sheet. I think the expected result is that the validator is applied to all cells on that sheet, but instead the entire workbook becomes invalid. Windows prompts me to &quot;repair&quot; the book, and after that is done, none of the cells have validation.</p> <p>Using Python 3.11, openpyxl 3.1.5 (also tried 3.0.10).</p>
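Two differences from a working setup, to the best of my knowledge of openpyxl: the list formula must be wrapped in double quotes inside the string, and the `DataValidation` object must be registered on the worksheet with `add_data_validation()`; without that registration it is never written to the file. A sketch:

```python
import openpyxl
from openpyxl.worksheet.datavalidation import DataValidation

book = openpyxl.Workbook()
sheet = book.active
for i in range(1, 11):
    sheet.cell(row=i, column=1).value = 'Acceptable'

validator = DataValidation(
    type='list',
    formula1='"Acceptable,Allowed,Permitted"',  # note the inner quotes
    allow_blank=True,
)
sheet.add_data_validation(validator)  # this step was missing
validator.add('A1:A10')

book.save('test.xlsx')
```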
<python><openpyxl>
2024-10-21 13:32:45
1
3,659
Rainbolt
79,110,150
5,775,358
enum as option in function, also add string option
<p>When defining a function, there are several ways to limit input to a set of predefined options. I thought it would make sense to use <code>Enum</code> objects to achieve this. For example:</p> <pre class="lang-py prettyprint-override"><code>from enum import Enum, auto class ColorOptions(Enum): RED = auto() BLUE = auto() def color_something(color: ColorOptions): match color: case ColorOptions.RED: return 'rgb(1, 0, 0)' case ColorOptions.BLUE: return 'rgb(0, 0, 1)' # Example usage print(color_something(ColorOptions.BLUE)) # Output: 'rgb(0, 0, 1)' </code></pre> <p>However, the following call doesn’t work:</p> <pre class="lang-py prettyprint-override"><code>color_something('BLUE') </code></pre> <p>To address this, I modified the function to accept both <code>Enum</code> members and string representations, like so:</p> <pre class="lang-py prettyprint-override"><code>def color_something(color: ColorOptions | str): if isinstance(color, str): # If the string doesn't match an Enum member, let it raise an error color = ColorOptions[color.upper()] match color: case ColorOptions.RED: return 'rgb(1, 0, 0)' case ColorOptions.BLUE: return 'rgb(0, 0, 1)' # Example usage print(color_something('blue')) # Output: 'rgb(0, 0, 1)' </code></pre> <p>This works, but it feels inefficient to add this string-to-enum conversion logic to every function that uses an <code>Enum</code>. 
One option is to add a helper function, but I'm wondering if there's built-in Enum functionality that I might be missing.</p> <p>Is there a better way to handle predefined options in a function, where both <code>Enum</code> members and strings can be accepted without duplicating this logic everywhere?</p> <p><em>Edit</em> after @chepner, this limits the options in the IDE instead of <code>str</code> but then you have to define the options twice (in the Enum class and in the <code>Literal</code> part of the function).</p> <pre class="lang-py prettyprint-override"><code>from typing import Literal def color_something(color: ColorOptions | Literal['red', 'blue']): if isinstance(color, str): # If the string doesn't match an Enum member, let it raise an error color = ColorOptions[color.upper()] match color: case ColorOptions.RED: return 'rgb(1, 0, 0)' case ColorOptions.BLUE: return 'rgb(0, 0, 1)' # Example usage print(color_something('blue')) # Output: 'rgb(0, 0, 1)' </code></pre>
<python><function><enums>
2024-10-21 13:00:24
1
2,406
3dSpatialUser
79,109,962
1,821,692
asyncio server does not cancel the request even if aiohttp.ClientSession exceeds its timeout
<p>The final goal is to cancel request on server side if client exceeds its timeout.</p> <p>The code related to start the server:</p> <pre class="lang-py prettyprint-override"><code>def run_server_loop( routes: web.RouteTableDef, shutdown_state: ShutdownState, logger: Logger, *, port: int, periodic_callback: Callable[[], None] | None = None, shutdown_callback: Callable[[], None] | None = None, ) -&gt; None: loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) try: loop.run_until_complete( _run_server( routes, shutdown_state, logger, port=port, periodic_callback=periodic_callback, ) ) finally: if shutdown_callback is not None: shutdown_callback() logger.info('Server stopped') flush_logs() async def _run_server( routes: web.RouteTableDef, shutdown_state: ShutdownState, logger: Logger, *, port: int, periodic_callback: Callable[[], None] | None = None, ) -&gt; None: try: app = web.Application() app.add_routes(routes) runner = web.AppRunner( app, access_log_format=( '%a %t [%D μs] &quot;%r&quot; %{Content-Length}i %s ' '%b &quot;%{Referer}i&quot; &quot;%{User-Agent}i&quot;' ), ) await runner.setup() site = web.TCPSite(runner, port=port) await site.start() logger.info(f'Listening {site.name}') while not shutdown_state.is_shutdown_requested: await asyncio.sleep(0.1) if periodic_callback is not None: periodic_callback() await runner.cleanup() except: # noqa logger.critical('Unhandled exception', exc_info=True) raise </code></pre> <p>Here is my endpoint code:</p> <pre class="lang-py prettyprint-override"><code>@routes.get('/ping') async def handle_ping(_) -&gt; web.Response: try: import time import asyncio for i in range(10): await asyncio.sleep(1) return web.json_response( data=PingResult( service_name=service_name, version=SERVICE_VERSION, storage_path=str(storage_dir.path), daemon_pid=daemon.pid, daemon_status=str(daemon.status.value), ).dict() ) except asyncio.CancelledError as ce: print('Request was cancelled') return HTTPBadRequest(ErrorResult(error='Request 
was cancelled')) </code></pre> <p>Client code:</p> <pre><code>async def ping(timeout=10) -&gt; PingResult: async with aiohttp.ClientSession(timeout=ClientTimeout(total=timeout)) as session: async with session.get('http://localhost:5002/ping') as resp: body = await resp.json() return PingResult.parse_obj(body) </code></pre> <p>Models:</p> <pre class="lang-py prettyprint-override"><code>from aiohttp import web from pydantic import BaseModel class ErrorResult(TypedDict): error: str class HTTPBadRequest(web.HTTPBadRequest): def __init__(self, error: Mapping) -&gt; None: super().__init__(text=dumps(error), content_type='application/json') class PingResult(BaseModel): service_name: str version: str storage_path: str daemon_pid: int daemon_status: str </code></pre> <p>Even if I call <code>ping(timeout=2)</code> I can see that the request on the server wasn't cancelled. Likewise, if I call <code>curl http://localhost:5002/ping</code> and terminate the command in less than 2-3 seconds I get the same behavior (the server-side code runs to completion without any cancellation).</p> <p>It seems like I'm misunderstanding the whole idea of cancelling requests, but I can't figure out how to achieve my main goal.</p>
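Two things seem to be at play, as far as aiohttp's documented behaviour goes: since aiohttp 3.7 the server deliberately stops cancelling handlers when the peer disconnects, and opting back in is the server-side `handler_cancellation` switch (aiohttp >= 3.9); separately, the handler in the question catches `asyncio.CancelledError` and returns a response, which swallows the cancellation instead of re-raising it. A sketch:

```python
import asyncio

from aiohttp import web

async def handle_ping(request):
    try:
        await asyncio.sleep(10)   # stands in for the long-running work
    except asyncio.CancelledError:
        print('Request was cancelled')
        raise                     # re-raise; don't return a response here
    return web.json_response({'status': 'ok'})

app = web.Application()
app.add_routes([web.get('/ping', handle_ping)])

# opt back in to cancellation-on-disconnect (aiohttp >= 3.9)
runner = web.AppRunner(app, handler_cancellation=True)
```

With `web.run_app` the equivalent is `web.run_app(app, handler_cancellation=True)`. Note the client timeout itself does not cancel anything server-side; only the resulting connection close does, and only with this opt-in.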
<python><asynchronous><python-asyncio><aiohttp>
2024-10-21 12:04:10
1
3,047
feeeper
79,109,569
4,738,644
How to do composite from PIL using only OpenCV?
<p>Hello, I want to combine a green image mask (3 channels, with a squared figure only in the green channel as 255) with a 3-channel image using OpenCV in Python. I cannot use PIL.Image.composite since I don't have privileges to install it (not administrator).</p> <p>OpenCV has a function called addWeighted, which does something similar to PIL.Image.composite. However, since the mask is binary and the blend is applied to the whole image, the rest of the image becomes darker: for example, when the Alpha and Beta values are 0.5, Gamma makes the image whiter, but the original colors of the image get lost.</p> <p>To correct this I have to write into an empty array the original image outside the ROI, plus the ROI with the mix of the mask and image (addWeighted). For one image this is trivial, but for several images, for example 10T, this starts consuming more time.</p> <p>I also tried something like <a href="https://stackoverflow.com/questions/40895785/using-opencv-to-overlay-transparent-image-onto-another-image">this</a>; however, when I load an image using the IMREAD_UNCHANGED option, the alpha channel does not appear: a colored image is read as a 3-dimensional image and a grayscale/binary image is read as a 2-dimensional image.</p>
<python><opencv><image-processing><transparency><mask>
2024-10-21 10:12:33
0
421
Diego Alejandro Gómez Pardo