Column              Type     Min                    Max
QuestionId          int64    74.8M                  79.8M
UserId              int64    56                     29.4M
QuestionTitle       string   15 chars               150 chars
QuestionBody        string   40 chars               40.3k chars
Tags                string   8 chars                101 chars
CreationDate        date     2022-12-10 09:42:47    2025-11-01 19:08:18
AnswerCount         int64    0                      44
UserExpertiseLevel  int64    301                    888k
UserDisplayName     string   3 chars                30 chars
75,661,472
12,297,666
Reverse the order of columns without changing the column labels pandas dataframe
<p>I need to reverse the order of my pandas dataframe. But using the following code:</p> <pre><code>df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]}) df = df.iloc[:, ::-1] </code></pre> <p>also reverses the order of the column labels. How can I reverse only the data and maintain the column labels? I expect to get:</p> <pre><code> A B C 0 7 4 1 1 8 5 2 2 9 6 3 </code></pre>
<python><pandas>
2023-03-07 11:36:23
3
679
Murilo
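A minimal sketch of one way to get the expected output above (not part of the dataset row; standard pandas only): reverse the column order of the values, then re-attach the original labels with `set_axis`.

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]})

# reverse the *values* column-wise, then relabel with the original column names
df = df.iloc[:, ::-1].set_axis(df.columns, axis=1)
print(df)
```

Because `df.columns` is evaluated before the assignment, the reversed frame is relabeled with the untouched original order.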
75,661,356
130,964
Inserting a new root element in lxml
<p>I have an xml file (not necessarily html):</p> <pre><code>&lt;div&gt; &lt;p&gt;...&lt;/p&gt; &lt;/div&gt; </code></pre> <p>and I wish to insert a new <code>html</code> root element to give</p> <pre><code>&lt;html&gt; &lt;div&gt; &lt;p&gt;...&lt;/p&gt; &lt;/div&gt; &lt;/html&gt; </code></pre> <p>(I will add <code>head</code>, <code>body</code> later.) Is there a simple command to insert a new element <code>html</code> as new parent of <code>div</code> without creating a new tree and copying elements?</p>
<python><lxml>
2023-03-07 11:23:56
1
38,146
peter.murray.rust
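A short sketch of one answer to the lxml question above: `append` on an element *moves* it from its old position, so wrapping the existing root does not copy the subtree.

```python
from lxml import etree

div = etree.fromstring("<div><p>text</p></div>")

# appending an existing element moves it -- no new tree, no copying
html = etree.Element("html")
html.append(div)

print(etree.tostring(html).decode())
```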
75,661,313
272,023
How are class variables handled when methods in multiprocessing.Process access them?
<pre><code>class MyClass: def __init__(self, my_val): self.my_val = my_val def process_one(self): ... do something, maybe access self.my_val def do_work(self): sub_process = multiprocessing.Process(target=self.process_one) ... now do work and maybe access self.my_val sub_process.join() </code></pre> <p>How is the variable handled by the two different processes (the calling one and the one spawned)? Is the variable accessible across the processes? Is this safe if only one ever writes to the variable?</p>
<python>
2023-03-07 11:18:32
2
12,131
John
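A small experiment (not the asker's code) illustrating the usual answer: each process operates on its own copy of the instance, so a write in the child is invisible to the parent; real sharing needs `multiprocessing.Value`, a `Queue`, or similar.

```python
import multiprocessing

class MyClass:
    def __init__(self, my_val):
        self.my_val = my_val

    def process_one(self):
        # runs in the child process, mutating only the child's copy of self
        self.my_val = 999

    def do_work(self):
        sub = multiprocessing.Process(target=self.process_one)
        sub.start()
        sub.join()
        return self.my_val  # the parent's copy is untouched

if __name__ == "__main__":
    print(MyClass(1).do_work())
```

This holds under both the fork and spawn start methods; reading a value the parent wrote before the child started is safe, but the two copies diverge independently afterwards.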
75,661,181
2,218,086
Floating Point Accuracy Problems While Calculating Pi
<p>I'm trying to use random numbers to estimate the value of Pi. The <a href="https://theabbie.github.io/blog/estimate-pi-using-random-numbers.html" rel="nofollow noreferrer">example I started with</a> obviously has some problems, which is a good thing as it means I have to understand it so I can fix it.</p> <pre><code># Expected Result 3.141592653589793238 # from math import sqrt from random import random # number of random points N=100000 # number of points inside I=0 for i in range(N): #print(&quot;I=&quot;+str(I)) #Generate random point in 1x1 square x=random() y=random() # Is the point inside the circle? # r=sqrt(x**2 + y**2) #print(str(r)) #if r&lt;1: if (x*x + y*y) &lt;1: # Update 2 I+=1 # print(&quot;Pi=&quot; + str(4*I/N)) print(&quot;Pi=&quot; + str(4*I)/N)) # Update 3 </code></pre> <p>Expected Result</p> <pre><code>3.141592653589793238 </code></pre> <p>Actual Results</p> <ul> <li><p>100,000 Iterations</p> <pre><code> Pi=3.14736 Pi=3.14448 Pi=3.14424 </code></pre> </li> <li><p>1,000,000 Iterations</p> <pre><code> Pi=3.141496 Pi=3.141356 Pi=3.138 </code></pre> </li> </ul> <p>I would have expected the results to be more consistent. How can I add more precision to the calculations?</p> <p><strong>UPDATE</strong> <br> I modified the code to remove sqrt and ** as recommended in the comments. I cranked up the iterations to 4,000,000 but it didn’t make a lot of difference. It is still only accurate to about 2 decimal places. How can I add more precision to the calculation?
(That question was in the title but it was edited out)</p> <p>2,000,000 iterations</p> <pre><code>Pi=3.141342 Pi=3.141328 Pi=3.143074 Pi=3.139084 </code></pre> <p>4,000,000 iterations</p> <pre><code>Pi=3.142605 Pi=3.141509 Pi=3.140663 Pi=3.143194 </code></pre> <p><strong>UPDATE 2</strong> <br> I changed the final calculation to the following.</p> <pre><code>import decimal result=decimal.Decimal(4*I/N) print(&quot;Pi=&quot; + str(result)) </code></pre> <p>4,000,000 iterations with decimal.Decimal()</p> <pre><code>Pi=3.14080899999999996197175278211943805217742919921875 Pi=3.143496999999999985675458447076380252838134765625 Pi=3.14080899999999996197175278211943805217742919921875 Pi=3.141859999999999875086587053374387323856353759765625 </code></pre> <p>40,000,000 iterations with decimal.Decimal()</p> <pre><code>Pi=3.141386900000000093058361017028801143169403076171875 Pi=3.1414208999999999605279299430549144744873046875 Pi=3.1414591999999998961357050575315952301025390625 Pi=3.14168000000000002813749233609996736049652099609375 </code></pre> <p>This latest test looks like I have achieved an accuracy of 3 decimal places, but the results still look odd with big runs of 99999 or 00000. 
Any other ideas what is going on here?</p> <p>Lastly, a couple of larger iteration counts, but I think the increase in accuracy is diminishing.</p> <p>100,000,000 Iterations (yes 100 million)</p> <pre><code>Pi=3.141542439999999825062104719108901917934417724609375 Pi=3.141720879999999826992507223621942102909088134765625 Pi=3.141750519999999990972128216526471078395843505859375 Pi=3.14174343999999994281324688927270472049713134765625 Pi=3.14132400000000000517275111633352935314178466796875 </code></pre> <p>1,000,000,000 Iterations (1 billion took a while on my old laptop :))</p> <pre><code>Pi=3.141603959999999862162667341181077063083648681640625 </code></pre> <p><strong>Update 3</strong> <BR> I moved the brackets in the final calculation as described by @Michael Butscher; however, even with 1 billion iterations I only consistently obtained 3 decimal places accurately. Maybe I have hit some other limitation, perhaps the pseudo-randomness of Python random numbers? I'll call this exercise done and incorporate my findings into my actual project. OK, I couldn't resist: I did 10 billion as well. Surprisingly, it didn't make much difference.</p> <p>1,000,000 Iterations</p> <pre><code>Pi=3.142056 Pi=3.136428 Pi=3.1407 Pi=3.141612 </code></pre> <p>10,000,000 Iterations</p> <pre><code>Pi=3.142806 Pi=3.142266 Pi=3.141996 Pi=3.1422232 </code></pre> <p>100,000,000 Iterations</p> <pre><code>Pi=3.14151576 Pi=3.1417604 Pi=3.1415038 Pi=3.1413738 </code></pre> <p>1,000,000,000</p> <pre><code>Pi=3.141553108 Pi=3.1415895 Pi=3.141629112 </code></pre> <p>10,000,000,000 Iterations (10 Billion) Took about 12 hours to execute</p> <pre><code>Pi=3.1416011832 </code></pre>
<python>
2023-03-07 11:05:24
2
411
David P
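The precision wall in the question above is statistical, not floating-point: the Monte Carlo error shrinks like 1/sqrt(N), so roughly four correct digits require on the order of 10^8 samples, which matches the asker's tables. A condensed sketch of the corrected loop:

```python
from random import random

def estimate_pi(n):
    # count points of the unit square falling inside the quarter circle
    inside = sum(1 for _ in range(n) if random() ** 2 + random() ** 2 < 1)
    return 4 * inside / n

print(estimate_pi(1_000_000))
```

The standard error is about 4*sqrt(p*(1-p)/N) with p = pi/4, i.e. roughly 0.0016 at N = 10^6; `decimal.Decimal` cannot help because the noise is in the sampling, not the arithmetic.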
75,661,144
461,499
airflow PostgresOperator report number of inserts/updates/deletes
<p>I'm exploring replacing our home-built SQL file orchestration framework with Apache Airflow.</p> <p>We currently have extensive logging on execution time, history and number of records <code>INSERTED</code>/<code>UPDATED</code>/<code>DELETED</code>. The first two are supported by Airflow standard logging; however, I could not find a way to log the resulting counts of the operations.</p> <p>What would be the way to log these? Preferably by sql file? And how to make them visible in a nice graph?</p> <p>My simple example DAG looks like this:</p> <pre><code>with DAG( dag_id=&quot;postgres_operator_dag&quot;, start_date=datetime.datetime(2023, 2, 2), schedule_interval=None, catchup=False, ) as dag: proc_r= PostgresOperator(task_id='proc_r', postgres_conn_id='postgres_dbad2a', sql=['001-test.sql','002-test.sql']) proc_r </code></pre>
<python><postgresql><airflow>
2023-03-07 11:02:22
1
20,319
Rob Audenaerde
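The counts the asker wants are what DB-API cursors expose as `cursor.rowcount` after each statement. A sketch of that idea using sqlite3 as a stand-in (the function name and wiring are hypothetical; in Airflow one would run something like this from a `PythonOperator` or a custom operator rather than the stock `PostgresOperator`):

```python
import sqlite3

def run_sql_with_counts(conn, statements):
    """Execute statements in order, returning the affected-row count of each."""
    counts = []
    cur = conn.cursor()
    for sql in statements:
        cur.execute(sql)
        counts.append(cur.rowcount)  # -1 for statements without a row count (e.g. DDL)
    conn.commit()
    return counts
```

Logged counts like these could then be pushed to XCom or a metrics table to chart them.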
75,661,090
5,587,736
How to clean up a timed out request to prevent a memory leak?
<p>Structure: A FastAPI service wrapped in a docker container with a 1gb memory limit.</p> <p>User: A script that sends requests to this docker container.</p> <p>Problem: Whenever a request is sent to the service that reaches the timeout limit, a memory leak occurs, because the request is not actually cancelled. Rather, as far as I can see, the request is parked, the blocking call in my script (<code>requests.post</code>) throws an error, which I can catch in a try/except and continue with the next request. However, the memory from the failed request is not released in the docker container.</p> <p>FastAPI endpoint:</p> <pre><code>@app.post(&quot;/extract&quot;, response_model=ExtractionResponse) def extract(request: ExtractionRequest) -&gt; ExtractionResponse: result = app.state.model.inference(request.sample) return ExtractionResponse(sample=result) </code></pre> <p>&quot;User&quot; script:</p> <pre><code>for sample in tqdm(data): try: requests.post( url=f&quot;http://{args.ip}:{args.port}/api/v1/extract&quot;, data=json.dumps(sample), timeout=5.0 ) except requests.exceptions.ReadTimeout: print(sample_data) </code></pre> <p>Executed with <code>CMD [&quot;uvicorn&quot;, &quot;--host&quot;, &quot;0.0.0.0&quot;, &quot;--port&quot;, &quot;8080&quot;, &quot;main:app&quot;]</code> within the docker container.</p> <p>So let's assume for some sample that <code>app.state.model.inference</code> or any other part of the request handling hangs indefinitely, this means the request will be kept in memory, causing a leak, even if the user is no longer waiting for a response from this request.</p> <p>My question, therefore, is: How can I cancel the request if the blocking call waiting for it has moved on to other things?</p>
<python><docker><memory-leaks><fastapi>
2023-03-07 10:57:18
0
697
Kroshtan
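One generic pattern for the situation above: the endpoint stops holding the request by racing the blocking work against a server-side deadline. This is a sketch, not a full fix — the worker thread still runs to completion in the background, so truly releasing the memory needs cancellation support in the model call itself or a worker-process pool that can be killed.

```python
import asyncio

async def run_with_deadline(blocking_fn, timeout):
    # run the blocking call in a thread and stop *waiting* after `timeout`;
    # note the thread itself is NOT killed -- it finishes in the background
    loop = asyncio.get_running_loop()
    try:
        return await asyncio.wait_for(loop.run_in_executor(None, blocking_fn), timeout)
    except asyncio.TimeoutError:
        return None
```

Inside FastAPI this would sit in an `async def` endpoint that returns, say, a 504 when `None` comes back, keeping the server's own timeout shorter than the client's.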
75,661,029
17,082,611
Check whether a variable is instance of ResNet50
<p>I am checking whether</p> <pre><code>model = ResNet50(weights='imagenet', include_top=False, pooling=&quot;avg&quot;) </code></pre> <p>is an instance of</p> <pre><code>keras.applications.ResNet50 </code></pre> <p>What I have done is:</p> <pre><code>isinstance(model, ResNet50) </code></pre> <p>but unfortunately this raises the following exception:</p> <blockquote> <p>TypeError: isinstance() arg 2 must be a type, a tuple of types, or a union</p> </blockquote> <p>Moreover, I have tried:</p> <pre><code>isinstance(model, keras.applications.ResNet50()) </code></pre> <p>but, again, this raises the same exception.</p> <ul> <li>What am I missing?</li> </ul>
<python><keras><instance>
2023-03-07 10:50:12
1
481
tail
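The root cause of the error above is that `keras.applications.ResNet50` is a factory *function* that builds and returns a `keras.Model`; it is not a class, so it cannot be the second argument of `isinstance`. A library-free illustration (`Widget`/`make_widget` are hypothetical stand-ins; for Keras the usual checks are `isinstance(model, keras.Model)` or inspecting `model.name`):

```python
class Widget:
    pass

def make_widget():
    # a factory *function*, analogous to keras.applications.ResNet50
    return Widget()

w = make_widget()

# passing the factory to isinstance fails: arg 2 must be a type
try:
    isinstance(w, make_widget)
except TypeError:
    print("isinstance() rejects the factory function")

# check against the class of the returned object instead
print(isinstance(w, Widget))
```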
75,660,958
4,050,510
Why does NaN-comparisons warn only inside np.frompyfunc?
<p>If you compare <code>np.nan</code> to some other value <em>and you do it in a <code>np.frompyfunc</code></em>, it will raise a warning. Why is this? Is it a bug or a feature?</p> <pre class="lang-py prettyprint-override"><code>import numpy as np func = np.frompyfunc(lambda x: x&lt;0,nin=1,nout=1) print(func(1)) # False, no warning print(np.nan&lt;0) # False print(func(np.nan)) # False, and raises a warning </code></pre> <h5>Background and Motivation</h5> <p>I ran into this while debugging <a href="https://stackoverflow.com/questions/75656026/why-does-numpy-vectorize-give-a-warning-about-an-invalid-value-when-using-uncert">Why does numpy.vectorize give a warning about an invalid value when using uncertainties?</a> The root cause was hidden behind both <code>np.vectorize</code> and some other library code. So understanding that float comparison changes semantics in a <code>frompyfunc</code> was a bit hard. It comes across as a bug to me.</p> <p>In my own code, I like to use <code>np.seterr(all='raise')</code> and <code>with np.errstate(some_group='ignore'): ...</code> to make sure all warnings about float errors are handled correctly and warnings are ignored only where I know it is safe. Therefore, I would like to know why this compare-with-nan is only occasionally warning.</p>
<python><numpy><nan><numpy-ufunc>
2023-03-07 10:43:14
0
4,934
LudvigH
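One plausible reading of the behavior above (inferred from what the asker observes, not from documented API): the hardware comparison with NaN sets the floating-point "invalid" status flag in both cases, but only the ufunc machinery inspects the flags after the call, so only the `frompyfunc` path warns — and it can be silenced through the normal `np.errstate` mechanism:

```python
import numpy as np

func = np.frompyfunc(lambda x: x < 0, nin=1, nout=1)

print(np.nan < 0)        # False, no warning: nothing inspects the FP flags

with np.errstate(invalid="ignore"):
    print(func(np.nan))  # False, and the would-be invalid-value warning is suppressed
```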
75,660,861
11,894,831
Closing cursors and exceptions
<p>What is the &quot;correct&quot; way to deal with exceptions in a code block using a SQL cursor? Is it necessary/recommended to close the cursor and the database? If so, what is the correct way to do it?</p> <p>I usually do it like this:</p> <pre><code> try: my_cursor = my_database.cursor(buffered=True) my_sql_query = &quot;SELECT MY_FIELDS FROM MY_TABLE&quot; my_cursor.execute(my_sql_query ) my_selected_rows = my_cursor.fetchall() for rows in my_selected_rows : do_something my_cursor.close() my_database.close() except Exception as e: if my_cursor is not None: my_cursor.close() if my_database is not None: my_database.close() </code></pre> <p>In PyCharm, this code raises two warnings, &quot;Local variable might be referenced before assignment&quot;, for the lines <code>if my_cursor is not None</code> and <code>if my_database is not None</code>.</p>
<python><sql><database-cursor>
2023-03-07 10:36:47
2
475
8oris
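One tidy shape for the code above (illustrated with sqlite3 so it runs anywhere; psycopg2 and mysql connections/cursors work the same way through `contextlib.closing`): let context managers do the cleanup. The variables are never conditionally bound, which also removes the "referenced before assignment" warning.

```python
import sqlite3
from contextlib import closing

def fetch_all(db_path, query):
    # both objects are closed in reverse order, even if execute() raises
    with closing(sqlite3.connect(db_path)) as conn:
        with closing(conn.cursor()) as cur:
            cur.execute(query)
            return cur.fetchall()

print(fetch_all(":memory:", "SELECT 1, 2"))
```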
75,660,760
4,473,615
Flask TypeError: Expected bytes
<p>I'm trying to get data from a table and return the result as a response. I am facing this issue since the data is from a database. Below is the code:</p> <pre><code>cursor = connection.cursor() cursor.execute(&quot;&quot;&quot;select * from table&quot;&quot;&quot;) result = cursor.fetchall() for row in result: data = row connection.close() return Response( data, mimetype=&quot;text/csv&quot;, headers={ &quot;Content-disposition&quot;: &quot;attachment; filename=data.csv&quot; }, ) </code></pre> <p>While trying to get it, I am getting this error:</p> <pre><code>TypeError: Expected bytes </code></pre> <p>How can I resolve this?</p>
<python><flask>
2023-03-07 10:27:32
1
5,241
Jim Macaulay
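The error above comes from handing Flask's `Response` a row tuple instead of `str`/`bytes`; the rows need serializing first. A sketch of the CSV part only, using the stdlib (the Flask wiring stays as in the question, with this string passed as the response body):

```python
import csv
import io

def rows_to_csv(header, rows):
    # serialize DB rows into one CSV string that Response() can send
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

print(rows_to_csv(["id", "name"], [(1, "a"), (2, "b")]))
```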
75,660,756
14,752,392
proper way to import django base settings in production settings
<p>I have seen a lot of articles and videos about best practices for managing the Django settings file. Almost all of them talk about having a base settings <code>base_settings.py</code> and then environment settings, like <code>development_settings.py</code>, <code>production_settings.py</code> etc.</p> <p>Usually, in the dev and prod settings the base setting is imported into it like</p> <pre><code>from project.settings.base import * </code></pre> <p>I have also read that import <code>*</code> is a bad practice, so the question is <strong>WHAT IS THE RIGHT WAY TO IMPORT THE SETTINGS WITHOUT MESSING UP THE NAMESPACE</strong>.</p> <p>One of the warnings I get when running my test with pytest is the namespace warning, which I presume comes from the <code>*</code> import.</p> <p>See the warning below when I run pytest:</p> <pre><code>..\..\..\..\Miniconda3\envs\salary\Lib\site-packages\pkg_resources\__init__.py:2804 C:\Users\seven\Miniconda3\envs\salary\Lib\site-packages\pkg_resources\__init__.py:2804: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('ruamel')`. Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages declare_namespace(pkg) ..\..\..\..\Miniconda3\envs\salary\Lib\site-packages\pkg_resources\__init__.py:2804 ..\..\..\..\Miniconda3\envs\salary\Lib\site-packages\pkg_resources\__init__.py:2804 C:\Users\seven\Miniconda3\envs\salary\Lib\site-packages\pkg_resources\__init__.py:2804: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('zope')`. Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. 
See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages declare_namespace(pkg) ..\..\..\..\Miniconda3\envs\salary\Lib\site-packages\coreapi\codecs\download.py:5 C:\Users\seven\Miniconda3\envs\salary\Lib\site-packages\coreapi\codecs\download.py:5: DeprecationWarning: 'cgi' is deprecated and slated for removal in Python 3.13 import cgi -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html </code></pre> <p>I don't exactly know what is raising these warnings but it goes away when I remove the <code>*</code> by putting all the settings in one file which I don't want to do.</p>
<python><django><django-rest-framework><pytest><django-settings>
2023-03-07 10:27:09
0
918
se7en
75,660,728
12,783,363
How to allow macOS punctuations without using quotations or backslash in terminal but by using argparse or sys.argv?
<p>Currently I have the following code:</p> <p>bubble.py</p> <pre><code>import argparse parser = argparse.ArgumentParser(description=&quot;Create pixel art bubble speech image&quot;) parser.add_argument('text', type=str, nargs='+', help=&quot;Text inside the bubble speech&quot;) args = parser.parse_args() </code></pre> <p>Running the following commands works fine on Windows: <code>bubble.py hello world</code> <code>bubble.py hello?</code> <code>bubble.py hello :)</code></p> <p>But they cause varying errors on macOS: <code>zsh: no matches found hello?</code> <code>zsh: parse error near `)'</code></p> <p>As a disclaimer, I have little to no knowledge of the macOS terminal. How can I avoid the error and have the text, including punctuation, captured by argparse nargs?</p>
<python><macos><argparse>
2023-03-07 10:25:15
1
916
Jobo Fernandez
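Nothing can be fixed on the argparse side of the question above: zsh expands `?` and `(` before Python ever starts, so the fix is shell-side quoting (`bubble.py 'hello?' 'hello :)'`) or zsh's `noglob` prefix. A small sketch showing that argparse itself happily accepts the punctuation once it reaches `argv`:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('text', type=str, nargs='+')

# simulate what the shell passes in after quoting protects the arguments;
# argparse only ever sees this list and treats punctuation as plain text
args = parser.parse_args(["hello?", "hello", ":)"])
print(args.text)
```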
75,660,706
3,909,896
Replace column value substring with hash of substring in PySpark
<p>I have a dataframe with a column containing a description including customer ids which I need to replace with their <code>sha2</code> hashed version.</p> <p>Example: the column value <code>&quot;X customer 0013120109 in country AU.</code> should be turned into <code>&quot;X customer d8e824e6a2d5b32830c93ee0ca690ac6cb976cc51706b1a856cd1a95826bebd in country AU.</code></p> <p>MRE:</p> <pre><code>from pyspark.sql.dataframe import DataFrame from pyspark.sql.functions import col, sha2, regexp_replace, lit, concat from pyspark.sql.types import LongType, StringType, StructField, StructType data = [ [1, &quot;Sold device 11312.&quot;], [2, &quot;X customer 0013120109 in country AU.&quot;], [3, &quot;Y customer 0013140033 in country BR.&quot;], ] schema = StructType( [ StructField(name=&quot;Id&quot;, dataType=LongType()), StructField(name=&quot;Description&quot;, dataType=StringType()) ] ) df = spark.createDataFrame(data=data, schema=schema) </code></pre> <p>My attempted solution was to use <code>regexp_replace</code> in combination with <code>regexp_extract</code>, but it expects a concrete string as &quot;replacement&quot; value - while my replacement value would be dynamic.</p> <pre><code>df = ( df .withColumn(&quot;Description&quot;, regexp_replace(&quot;Description&quot;, r&quot;customer \d+&quot;, concat(lit(&quot;customer &quot;), sha2(regexp_extract( &quot;Description&quot;, r&quot;.* customer (\d+) .*&quot;, 1), 256 ) ) ) ) ) </code></pre> <p>PS: I really want to avoid UDFs since the transformation from JVM to Python and back is a huge performance degradation...</p>
<python><pyspark>
2023-03-07 10:23:00
2
3,013
Cribber
75,660,540
7,208,845
Python3 return multiple contextmanagers from a function to be used in a single with statement
<p>Given:</p> <pre><code>con = psycopg2.connect() with con, con.cursor() as c: c.execute() # some query inside here </code></pre> <p>According to the psycopg2 documentation <a href="https://www.psycopg.org/docs/usage.html#transactions-control" rel="nofollow noreferrer">https://www.psycopg.org/docs/usage.html#transactions-control</a>, the con object manages the transaction and takes care of commit and rollback of the db transaction. So both the <code>con</code> and <code>con.cursor()</code> are required in the <code>with</code> statement to properly manage commit/rollback</p> <p>Now I want repeat the <code>with</code> part of the code multiple times, to do multiple transactions, such as</p> <pre><code>con = psycopg2.connect() with con, con.cursor() as c: c.execute() # some query inside here with con, con.cursor() as c: c.execute() # another query inside here ... with con, con.cursor() as c: c.execute() # final query inside here </code></pre> <p>This works but this requires me to copy paste the <code>con, con.cursor()</code> part of the <code>with</code> statement for every <code>with</code> block.</p> <p>Now I was wondering if it is possible in python to create a function that returns something that I can pass directly to the <code>with</code> statement to reduce <code>con, con.cursor()</code> to <code>some_custom_function()</code></p> <p>Something along these lines:</p> <pre><code>con = psycopg2.connect() def cursor(): return con, con.cursor() # this doesn't work with cursor() as c: c.execute() # some query inside here with cursor() as c: c.execute() # another query inside here ... with cursor() as c: c.execute() # final query inside here </code></pre> <p>(You may be wondering why, but the <code>con.cursor()</code> method also takes arguments such as <code>cursor_factory=psycopg2.extras.RealDictCursor</code>. Then I would have to repeat those arguments with every <code>with</code> statement as well. 
But for simplicity of this example, I've left that out of the question.)</p>
<python><psycopg2>
2023-03-07 10:07:04
1
347
LinG
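Yes — the pair can be bundled with `contextlib.contextmanager`, yielding the cursor while entering/exiting both objects exactly like the original `with con, con.cursor() as c:` line. A sketch (the name `transaction_cursor` is made up; with psycopg2 you would pass the real connection plus any `cursor_factory` kwargs, and it relies only on the standard context-manager protocol):

```python
from contextlib import contextmanager

@contextmanager
def transaction_cursor(con, **cursor_kwargs):
    # enter the connection (transaction) and the cursor together;
    # both are exited in reverse order when the with-block ends
    with con, con.cursor(**cursor_kwargs) as c:
        yield c
```

Usage then collapses each block to `with transaction_cursor(con) as c: c.execute(...)`.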
75,660,485
14,594,208
How to remove possible suffix repetitions from a str column?
<p>Consider the following dataframe, where the suffix in a <code>str</code> column <strong>might</strong> be repeating itself:</p> <pre class="lang-py prettyprint-override"><code> Book 0 Book1.pdf 1 Book2.pdf.pdf 2 Book3.epub 3 Book4.mobi.mobi 4 Book5.epub.epub </code></pre> <p>Desired output (repeated suffixes removed where needed)</p> <pre class="lang-py prettyprint-override"><code> Book 0 Book1.pdf 1 Book2.pdf 2 Book3.epub 3 Book4.mobi 4 Book5.epub </code></pre> <p>I have tried splitting on the <code>.</code> character and then counting occurrences of the last item to check if there is duplication.</p> <p><em>I have used file paths only to illustrate my point! The contents of the column could be something other than paths!</em></p>
<python><pandas>
2023-03-07 10:02:20
1
1,066
theodosis
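A regex backreference handles the "maybe repeated" suffix above in one vectorized pass; a minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({"Book": ["Book1.pdf", "Book2.pdf.pdf", "Book3.epub",
                            "Book4.mobi.mobi", "Book5.epub.epub"]})

# (\.\w+) captures the final suffix; \1$ matches an immediate repeat of it
df["Book"] = df["Book"].str.replace(r"(\.\w+)\1$", r"\1", regex=True)
print(df["Book"].tolist())
```

Strings without a doubled suffix don't match the pattern and pass through unchanged.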
75,660,364
11,974,163
How does data conversion work between pyodbc and sql server?
<p>I'm building an automated script where I inject some data into sql server using pyodbc, with this line (basic example):</p> <pre><code>cursor.execute(sql_query, data) </code></pre> <p>Given that I've created/designed a sql server database and table locally, the data seems to be converted automatically - &quot;under the hood&quot; - into the required types that are specified in the table design when inserted into <code>db.table</code>.</p> <p>I'm wondering how this works and if it's safe not to create explicit code to convert data types? I found this official microsoft <a href="https://learn.microsoft.com/en-us/sql/machine-learning/python/python-libraries-and-data-types?view=sql-server-ver16" rel="nofollow noreferrer">docs</a> on it, but I don't think what is written there gives me clarity on sending data from python to sql server as it states:</p> <blockquote> <p>Python supports a limited number of data types in comparison to SQL Server. As a result, whenever you use data from SQL Server in Python scripts, SQL data might be implicitly converted to a compatible Python data type. However, often an exact conversion cannot be performed automatically and an error is returned.</p> </blockquote> <p>This seems like it's from <code>SQL Server -&gt; Python</code>, rather than <code>Python -&gt; SQL Server</code>.</p>
<python><sql-server><pyodbc>
2023-03-07 09:52:09
1
457
pragmatic learner
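In the Python → SQL Server direction the asker describes, pyodbc maps each bound parameter to an ODBC C type from its Python type (`int`, `str`, `float`, `datetime`, `bytes`, `None` → NULL), and the server then coerces to the column types, raising if no conversion exists. A driver-neutral demonstration with sqlite3, since the DB-API parameter-binding idea is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER, s TEXT, f REAL, b BLOB)")

# Python values are bound to the ? markers; the driver and database handle typing
conn.execute("INSERT INTO t VALUES (?, ?, ?, ?)", (1, "a", 2.5, b"\x00"))
row = conn.execute("SELECT n, s, f, b FROM t").fetchone()
print(row)
```

So explicit conversion code is usually unnecessary, but it is still wise for lossy cases (e.g. Python `int`s wider than the target column).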
75,660,251
6,832,201
Textual: UI is not updating after changing the value
<p>I am trying to build a simple TUI-based app using Python's <code>textual</code> package. I have one left panel where I want to display a list of items, and on the right panel I want to show details of the item selected in the left panel. I want to add items to the left panel using a keybinding provided by textual, but when I add a new item to the list it does not update the UI of the left panel to show the newly added item. I am following this [doc][1]</p> <p>[1]: <a href="https://textual.textualize.io/guide/reactivity/#__tabbed_2_1" rel="nofollow noreferrer">https://textual.textualize.io/guide/reactivity/#__tabbed_2_1</a> to add that feature, but it is not working as expected. I am not able to figure out what's wrong here, or whether my understanding of things is incorrect.</p> <p>Here is my full code:</p> <pre><code>import uuid from time import monotonic from textual.app import App, ComposeResult from textual.containers import Container from textual.reactive import reactive from textual.widget import Widget from textual.widgets import Button, Header, Footer, Static, ListView, ListItem, Label class LeftPanel(Widget): items = reactive([]) def compose(self) -&gt; ComposeResult: yield Static( &quot;All Request&quot;, expand=True, id=&quot;left_panel_header&quot; ) yield ListView( *self.items, initial_index=None, ) class RightPanel(Widget): &quot;&quot;&quot;A stopwatch widget.&quot;&quot;&quot; def compose(self) -&gt; ComposeResult: yield ListView( ListItem(Label(&quot;4&quot;)), ListItem(Label(&quot;5&quot;)), ListItem(Label(&quot;6&quot;)), initial_index=None, ) class DebugApp(App): &quot;&quot;&quot;A Textual app to manage stopwatches.&quot;&quot;&quot; CSS_PATH = &quot;main.css&quot; BINDINGS = [ (&quot;d&quot;, &quot;toggle_dark&quot;, &quot;Toggle dark mode&quot;), (&quot;a&quot;, &quot;add_item&quot;, &quot;Add new item&quot;), ] def compose(self) -&gt; ComposeResult: &quot;&quot;&quot;Called to add widgets to the app.&quot;&quot;&quot; yield 
Container(LeftPanel(id=&quot;my_list&quot;), id=&quot;left_panel&quot;) yield Container(RightPanel(), id=&quot;right_panel&quot;) yield Footer() def action_add_item(self): self.query_one(&quot;#my_list&quot;).items.append(ListItem(Label(str(uuid.uuid4()), classes=&quot;request_item&quot;))) self.dark = not self.dark # This works def action_toggle_dark(self) -&gt; None: &quot;&quot;&quot;An action to toggle dark mode.&quot;&quot;&quot; self.dark = not self.dark def render_ui(): app = DebugApp(watch_css=True) app.run() </code></pre>
<python><python-3.x><state><rich><textual>
2023-03-07 09:42:49
1
3,036
Ropali Munshi
75,660,214
12,883,297
Identify the day diff between 2 dates in a column and flag the pattern in pandas
<p>I have a dataframe</p> <pre><code>df_in = pd.DataFrame([[&quot;A&quot;,&quot;2023-02-04&quot;],[&quot;A&quot;,&quot;2023-02-05&quot;],[&quot;A&quot;,&quot;2023-02-06&quot;],[&quot;B&quot;,&quot;2023-02-06&quot;],[&quot;B&quot;,&quot;2023-02-13&quot;],[&quot;B&quot;,&quot;2023-02-20&quot;], [&quot;C&quot;,&quot;2023-02-07&quot;],[&quot;C&quot;,&quot;2023-02-10&quot;],[&quot;C&quot;,&quot;2023-02-12&quot;],[&quot;D&quot;,&quot;2023-02-14&quot;],[&quot;D&quot;,&quot;2023-02-17&quot;],[&quot;D&quot;,&quot;2023-02-20&quot;], [&quot;E&quot;,&quot;2023-02-18&quot;]],columns=[&quot;id&quot;,&quot;date&quot;]) </code></pre> <pre><code>id date A 2023-02-04 A 2023-02-05 A 2023-02-06 B 2023-02-06 B 2023-02-13 B 2023-02-20 C 2023-02-07 C 2023-02-10 C 2023-02-12 D 2023-02-14 D 2023-02-17 D 2023-02-20 E 2023-02-18 </code></pre> <p>I want to derive 2 new columns from the dataframe. The 1st column, <strong>day_difference</strong>, gives the day difference between that row and the previous row at id level. The 2nd column, <strong>Flag</strong>, tells the pattern at id level. For example, id <strong>A</strong> has daily data, so mention <strong>1_days_diff</strong>; id <strong>B</strong> has a 7-day diff pattern, so mention <strong>7_days_diff</strong>; id <strong>C</strong> has no pattern, so mention <strong>No_pattern</strong>. 
id <strong>E</strong> has only single row, mention <strong>Single_day</strong>.</p> <p><strong>Expected output:</strong></p> <pre><code>df_out = pd.DataFrame([[&quot;A&quot;,&quot;2023-02-04&quot;,&quot;_&quot;,&quot;1_days_diff&quot;],[&quot;A&quot;,&quot;2023-02-05&quot;,1,&quot;1_days_diff&quot;],[&quot;A&quot;,&quot;2023-02-06&quot;,1,&quot;1_days_diff&quot;], [&quot;B&quot;,&quot;2023-02-06&quot;,&quot;_&quot;,&quot;7_days_diff&quot;],[&quot;B&quot;,&quot;2023-02-13&quot;,7,&quot;7_days_diff&quot;],[&quot;B&quot;,&quot;2023-02-20&quot;,7,&quot;7_days_diff&quot;], [&quot;C&quot;,&quot;2023-02-07&quot;,&quot;_&quot;,&quot;No_pattern&quot;],[&quot;C&quot;,&quot;2023-02-10&quot;,3,&quot;No_pattern&quot;],[&quot;C&quot;,&quot;2023-02-12&quot;,2,&quot;No_pattern&quot;], [&quot;D&quot;,&quot;2023-02-14&quot;,&quot;_&quot;,&quot;3_days_diff&quot;],[&quot;D&quot;,&quot;2023-02-17&quot;,&quot;3&quot;,&quot;3_days_diff&quot;],[&quot;D&quot;,&quot;2023-02-20&quot;,&quot;3&quot;,&quot;3_days_diff&quot;], [&quot;E&quot;,&quot;2023-02-18&quot;,1,&quot;Single_day&quot;]],columns=[&quot;id&quot;,&quot;date&quot;,&quot;day_difference&quot;,&quot;Flag&quot;]) </code></pre> <pre><code>id date day_difference Flag A 2023-02-04 _ 1_days_diff A 2023-02-05 1 1_days_diff A 2023-02-06 1 1_days_diff B 2023-02-06 _ 7_days_diff B 2023-02-13 7 7_days_diff B 2023-02-20 7 7_days_diff C 2023-02-07 _ No_pattern C 2023-02-10 3 No_pattern C 2023-02-12 2 No_pattern D 2023-02-14 _ 3_days_diff D 2023-02-17 3 3_days_diff D 2023-02-20 3 3_days_diff E 2023-02-18 1 Single_day </code></pre> <p>How to do it in pandas?</p>
<python><python-3.x><pandas><dataframe><datetime>
2023-03-07 09:39:24
2
611
Chethan
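One sketch for the question above using groupby: `diff` yields the day gaps, and a per-group `transform` classifies them (column spellings follow the question; the expected "1" for single-row id E looks like a typo in the expected output, so this sketch leaves the diff empty and flags it `Single_day`):

```python
import pandas as pd

df_in = pd.DataFrame(
    [["A", "2023-02-04"], ["A", "2023-02-05"], ["A", "2023-02-06"],
     ["B", "2023-02-06"], ["B", "2023-02-13"], ["B", "2023-02-20"],
     ["C", "2023-02-07"], ["C", "2023-02-10"], ["C", "2023-02-12"],
     ["D", "2023-02-14"], ["D", "2023-02-17"], ["D", "2023-02-20"],
     ["E", "2023-02-18"]],
    columns=["id", "date"])

df_in["date"] = pd.to_datetime(df_in["date"])
df_in["day_difference"] = df_in.groupby("id")["date"].diff().dt.days

def flag(diffs):
    gaps = diffs.dropna()          # drop each id's leading NaN gap
    if gaps.empty:
        return "Single_day"
    if gaps.nunique() == 1:
        return f"{int(gaps.iloc[0])}_days_diff"
    return "No_pattern"

# transform broadcasts the per-group scalar back onto every row of the group
df_in["Flag"] = df_in.groupby("id")["day_difference"].transform(flag)
print(df_in)
```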
75,660,016
1,627,234
Multiply scipy sparse matrix with a 3d numpy array
<p>I have the following matrices</p> <pre class="lang-py prettyprint-override"><code>a = sp.random(150, 150) x = np.random.normal(0, 1, size=(150, 20)) </code></pre> <p>and I would basically like to implement the following formula</p> <p><img src="https://latex.codecogs.com/svg.image?%5Csigma_%7Bk%7D&amp;space;=&amp;space;%5Csum_%7Bij%7D%5E%7BN%7D&amp;space;A_%7Bij%7D&amp;space;(x_%7Bik%7D&amp;space;-&amp;space;x_%7Bjk%7D)%5E2" alt="" /></p> <p>I can calculate the inner difference like this</p> <pre class="lang-py prettyprint-override"><code>diff = (x[:, None, :] - x[None, :, :]) ** 2 diff.shape # -&gt; (150, 150, 20) a.shape # -&gt; (150, 150) </code></pre> <p>I would basically like to broadcast the element-wise multiplication between the scipy sparse matrix and each internal numpy array.</p> <p>If A was allowed to be dense, then I could simply do</p> <pre class="lang-py prettyprint-override"><code>np.einsum(&quot;ij,ijk-&gt;k&quot;, a.toarray(), (x[:, None, :] - x[None, :, :]) ** 2) </code></pre> <p>but A is sparse, and potentially huge, so this isn't an option. Of course, I could just reorder the axes and loop over the <code>diff</code> array with a for loop, but is there a faster way using numpy?</p> <p>As @hpaulj pointed out, the current solution also forms an array of shape <code>(150, 150, 20)</code>, which would also immediately lead to problems with memory, so this solution would not be okay either.</p>
<python><arrays><numpy><scipy><sparse-matrix>
2023-03-07 09:18:12
2
5,558
Pavlin
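Expanding the square in the formula above turns the sum into three sparse-friendly terms — with r_i = Σ_j A_ij and c_j = Σ_i A_ij, σ_k = Σ_i r_i x_ik² + Σ_j c_j x_jk² − 2 Σ_i x_ik (A x)_ik — so the (N, N, K) intermediate is never formed. A sketch:

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
a = sp.random(150, 150, density=0.05, random_state=0, format="csr")
x = rng.normal(0, 1, size=(150, 20))

x2 = x ** 2
row = np.asarray(a.sum(axis=1)).ravel()  # r_i = sum_j A_ij
col = np.asarray(a.sum(axis=0)).ravel()  # c_j = sum_i A_ij

# sigma_k = r @ x^2 + c @ x^2 - 2 * sum_i x_ik (A x)_ik
sigma = row @ x2 + col @ x2 - 2 * np.sum(x * (a @ x), axis=0)
print(sigma.shape)
```

Only one sparse-dense product `a @ x` of shape (N, K) is needed, so memory stays linear in N and K.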
75,660,014
5,722,932
Flask Celery - RabbitMQ Connection is not Working
<p>I have developed a Flask application and I want to send mail from it, so I chose a background process using Celery &amp; RabbitMQ. I got the below error while pulling the data.</p> <pre class="lang-py prettyprint-override"><code>celery=Celery( 'sequre_spacement_backend', broker = app.config['QUEUE_BROKER_URL'], include = ['sequre_spacement_backend.tasks.MeetingRoom.MeetingRoomSMSNotification'] ) </code></pre> <p>while running the <strong>celery worker</strong></p> <p><strong>Error</strong></p> <pre><code>[2023-03-07 13:00:56,534: ERROR/MainProcess] Received unregistered task of type 'tasks.MeetingRoom.MeetingRoomSMSNotification.queue_test'. The message has been ignored and discarded. Did you remember to import the module containing this task? Or maybe you're using relative imports? Please see http://docs.celeryq.org/en/latest/internals/protocol.html for more information. The full contents of the message body was: '[[], {}, {&quot;callbacks&quot;: null, &quot;errbacks&quot;: null, &quot;chain&quot;: null, &quot;chord&quot;: null}]' (77b) The full contents of the message headers: {'lang': 'py', 'task': 'tasks.MeetingRoom.MeetingRoomSMSNotification.queue_test', 'id': 'eacb73c8-4ce2-4c7d-9378-da36d221a465', 'shadow': None, 'eta': None, 'expires': None, 'group': None, 'group_index': None, 'retries': 0, 'timelimit': [None, None], 'root_id': 'eacb73c8-4ce2-4c7d-9378-da36d221a465', 'parent_id': None, 'argsrepr': '()', 'kwargsrepr': '{}', 'origin': 'gen137853@heptagon', 'ignore_result': False} The delivery info for this task is: {'consumer_tag': 'None4', 'delivery_tag': 2, 'redelivered': True, 'exchange': '', 'routing_key': 'celery'} Traceback (most recent call last): File &quot;/opt/lampp/htdocs/sequre_spacement_backend/venv/lib/python3.8/site-packages/celery/worker/consumer/consumer.py&quot;, line 591, in on_task_received strategy = strategies[type_] KeyError: 'tasks.MeetingRoom.MeetingRoomSMSNotification.queue_test' </code></pre> <a 
href="https://i.sstatic.net/4oEXy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4oEXy.png" alt="enter image description here" /></a></p>
<python><flask><rabbitmq><celery>
2023-03-07 09:18:07
1
7,393
Selvamani P
75,659,953
13,329,117
Python's argparse: using the same option multiple times, but putting those options in the same list
<p>In Python's argparse, using the same option multiple times puts those arguments into separate lists. But I want those arguments in one single list.</p> <p>The result I have got is:</p> <pre class="lang-py prettyprint-override"><code># only the input portion [ [input1, input2], [input3, input4, input5], [input6] ] </code></pre> <p>My Code:</p> <pre class="lang-py prettyprint-override"><code># myScript.py import argparse parser=argparse.ArgumentParser() parser.add_argument('-i', action='append', nargs='+') parser.add_argument('-o', action='append', nargs='*') args = parser.parse_args() </code></pre> <p>Executing the code:</p> <pre class="lang-bash prettyprint-override"><code>myScript.py -i input1 input2 -o output1 -i input3 input4 input5 -o output2 -i input6 </code></pre> <p>The result I want is:</p> <pre class="lang-py prettyprint-override"><code>[ input1, input2, input3, input4, input5, input6 ] </code></pre>
<python><argparse><multiple-arguments>
2023-03-07 09:11:20
1
535
Shezan
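A hedged sketch of one possible answer: since Python 3.8, argparse ships an `extend` action that accumulates the values of repeated options into one flat list, which matches the desired output.

```python
import argparse

# Sketch: the "extend" action (Python 3.8+) flattens repeated occurrences of
# an option into a single list, instead of nesting them the way "append" does.
parser = argparse.ArgumentParser()
parser.add_argument('-i', action='extend', nargs='+', default=[])
parser.add_argument('-o', action='extend', nargs='*', default=[])

args = parser.parse_args(
    '-i input1 input2 -o output1 -i input3 input4 input5 -o output2 -i input6'.split()
)
print(args.i)  # ['input1', 'input2', 'input3', 'input4', 'input5', 'input6']
```

On older Pythons, the same result can be had by flattening afterwards, e.g. `list(itertools.chain.from_iterable(args.i))`.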
75,659,888
16,971,617
Convert numpy array to binary array in numpy
<p>Say I have a 3D numpy array <code>(100*100*4)</code>. I would like to convert any vector that is not [255,255,255,255] to [0,0,0,255]</p> <pre class="lang-py prettyprint-override"><code># generate a numpy array np.random.seed(seed=777) s = np.random.randint(low=0, high = 255, size=(100, 100, 4)) print(s) </code></pre> <p>Currently this is my approach, but it seems very slow — is there a better way? Any help is appreciated.</p> <pre class="lang-py prettyprint-override"><code>def foo(x): y = np.full_like(x, 255) for iy, ix in np.ndindex(x.shape[0:2]): if not np.all(x[iy, ix] == 255): y[iy, ix] = np.array([0, 0, 0, 255]) return y </code></pre>
<python><numpy><optimization>
2023-03-07 09:05:24
5
539
user16971617
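A hedged sketch of a vectorized alternative: reduce the last axis with `np.all` to get a 2-D boolean mask, then assign `[0, 0, 0, 255]` to every masked pixel in one step, with no Python-level loop.

```python
import numpy as np

# Rebuild the example array; np.random.randint's high bound is exclusive, so
# no generated pixel is all-255 -- plant one so both branches are exercised.
np.random.seed(seed=777)
s = np.random.randint(low=0, high=255, size=(100, 100, 4))
s[0, 0] = [255, 255, 255, 255]

def foo_vectorized(x):
    y = np.full_like(x, 255)
    not_all_255 = ~np.all(x == 255, axis=-1)  # boolean mask, shape (100, 100)
    y[not_all_255] = [0, 0, 0, 255]           # one fancy-indexed assignment
    return y

out = foo_vectorized(s)
```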
75,659,832
12,242,085
How to add values from list as a row in DataFrame if values from list do not exist in DF with defined values in other columns in DF in Python Pandas?
<p>I have Pandas DataFrame in Python like below:</p> <p><strong>Example data:</strong></p> <pre><code>COL1 | COL2 | COL3 ------|------|------- var1 | xxx | 20 var2 | xxx | 10 var3 | yyy | 10 </code></pre> <p>And I have list like the follow: <code>list_1 = [&quot;var1&quot;, &quot;var5&quot;]</code></p> <p><strong>Requirements:</strong></p> <p>And I need to</p> <ol> <li>add to &quot;COL1&quot; in DataFrame values from <strong>list_1</strong> as row only if values from <code>list_1</code> do not exist in &quot;COL1&quot; in DataFrame</li> <li>In each added in &quot;COL1&quot; values from list I need to have values &quot;yyy&quot; in &quot;COL2&quot; and &quot;10&quot; in &quot;COL3&quot;</li> </ol> <p><strong>Desire output:</strong></p> <p>So, as a result I need something like below based on my example DataFrame and <code>list_1</code>:</p> <pre><code>COL1 | COL2 | COL3 ------|------|------- var1 | xxx | 20 var2 | xxx | 10 var3 | yyy | 10 var5 | yyy | 10 </code></pre> <p>How can I do that in Python Pandas ?</p>
<python><pandas><dataframe><list>
2023-03-07 08:59:44
2
2,350
dingaro
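A hedged sketch of one approach: compute which values of `list_1` are missing from `COL1`, build a small frame of new rows with the fixed `COL2`/`COL3` values, and `pd.concat` it onto the original.

```python
import pandas as pd

# Rebuild the example frame from the question.
df = pd.DataFrame({'COL1': ['var1', 'var2', 'var3'],
                   'COL2': ['xxx', 'xxx', 'yyy'],
                   'COL3': [20, 10, 10]})
list_1 = ['var1', 'var5']

# Only the values not already present in COL1 become new rows.
missing = [v for v in list_1 if v not in set(df['COL1'])]
if missing:
    new_rows = pd.DataFrame({'COL1': missing, 'COL2': 'yyy', 'COL3': 10})
    df = pd.concat([df, new_rows], ignore_index=True)
```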
75,659,800
5,181,219
PySpark: make DataFrame no longer accessible
<p>My goal is to write two functions <code>capture</code> and <code>release</code> which take a PySpark DataFrame as input and make it &quot;inaccessible&quot; to the user. The behavior I'm looking for is something like:</p> <pre class="lang-py prettyprint-override"><code>df = spark.read.csv(&quot;...&quot;) # or some other way or creating a DataFrame df.show() # this should work capture(df) df.show() # this should no longer work, but return an exception, ideally one I control df.count() # this should not work either release(df) df.count() # now this should work again </code></pre> <p>The method should ideally be independent of how the DataFrame was generated, e.g. approaches like &quot;renaming the underlying file path&quot; are not great (the program might also not have permissions to do so).</p> <p>It would be even better to <em>also</em> prevent the DataFrame from being used indirectly, e.g.:</p> <pre class="lang-py prettyprint-override"><code>df = spark.read.csv(&quot;...&quot;) # or some other way or creating a DataFrame df2 = df.withColumnRenamed(&quot;foo&quot;, &quot;bar&quot;) df2.show() # this should work capture(df) df2.show() # this should not work release(df) df2.show() # now this should work again </code></pre> <p>Is this possible? What's the cleanest way to get a behavior like this? I'm not looking for a solution that provides bulletproof security, just a way of warning the user if they're trying to do something that they likely didn't realize would cause issues.</p> <p>More context on the question: we're building a library to make it easy to write and run <a href="https://desfontain.es/privacy/friendly-intro-to-differential-privacy.html" rel="nofollow noreferrer">differentially private</a> pipelines on PySpark. For the privacy guarantees to hold, the private data must only be used once: as input to the differentially private program, and nowhere else. Using the private data in other places of the program (e.g. 
when defining hyperparameters) is a common class of pitfalls: users might do it because it's convenient, and fail to realize that this breaks the privacy guarantees. We're looking for ways of catching the most common examples of this pitfall, and making the private data inaccessible after initialization would be a useful mitigation.</p>
<python><apache-spark><pyspark>
2023-03-07 08:56:44
1
1,092
Ted
75,659,729
2,132,691
How do I specify a default value for one field in an SQLAlchemy engine INSERT statement?
<p>I use raw SQLAlchemy <code>engine.execute()</code> calls to perform INSERT statements, e.g.</p> <pre><code>engine = create_engine(&quot;mysql+pymysql://...&quot;) engine.execute(&quot;INSERT INTO table VALUES (%s, %s, %s, %s)&quot;, v1, v2, v3, v4) </code></pre> <p>Now let's imagine that the third column is defined as <code>varchar(10) NOT NULL DEFAULT 'foo'</code> and I want to specify that I want to use that default value.</p> <p>MySQL has the DEFAULT keyword for this, but how do I tell SQLAlchemy to generate the DEFAULT keyword for one of the variables?</p> <p>I imagine something like</p> <pre><code>engine.execute(&quot;INSERT INTO table VALUES (%s, %s, %s, %s)&quot;, v1, v2, default, v4) </code></pre> <p>But what do I need for <code>default</code> here? (Does SQLAlchemy even support that?)</p> <p>(I can't use <code>None</code> because that gets translated into NULL and violates the NOT NULL constraint on MySQL 8.)</p> <p>(I could omit the column like this</p> <pre><code>engine.execute(&quot;INSERT INTO table (col1, col2, col4) VALUES (%s, %s, %s)&quot;, v1, v2, v4) </code></pre> <p>but I find it undesirable to generate the list of column names, and I'd prefer to use the DEFAULT keyword.)</p>
<python><mysql><sqlalchemy>
2023-03-07 08:48:58
1
368
florian
75,659,699
4,576,519
How to accurately calculate high Lp norms in PyTorch
<p>I am using <code>torch.norm</code> to calculate <a href="https://en.wikipedia.org/wiki/Lp_space#Definition" rel="nofollow noreferrer">Lp norms</a> with relatively large values for <code>p</code> (in the range of 10-50). The vectors I do this for have relatively small values and I notice that the result incorrectly becomes <code>0</code>. In the example below, this already happens at <code>p=9</code>!</p> <pre class="lang-py prettyprint-override"><code>import torch lb = 1e-6 # Lower bound ub = 1e-5 # Upper bound # Construct vector v = torch.rand(100) * (ub - lb) + lb # Calculate Lp norms for p in range(1,20): print(p, torch.norm(v, p=p)) # It should approach the maximum value for p -&gt; inf print(torch.max(v)) </code></pre> <p>Is there a way to circumvent this issue? Or is it inherent to machine precision and its associated rounding errors? Ideally, I would like a solution that maintains the graph. But I am also interested in numerical approximations.</p>
<python><precision><linear-algebra><torch><magnitude>
2023-03-07 08:45:54
1
6,829
Thomas Wagenaar
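A hedged sketch of the usual fix: torch defaults to float32, where terms like `(1e-5)**9` underflow to zero. Factoring the maximum out of the sum keeps every ratio in `[0, 1]`, so nothing underflows: `||v||_p = m * (sum((|v_i|/m)**p))**(1/p)` with `m = max|v_i|`. The demo below uses NumPy for portability; the same elementwise operations in torch preserve the autograd graph.

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.random(100) * (1e-5 - 1e-6) + 1e-6  # same value range as the question

def stable_lp_norm(v, p):
    m = np.abs(v).max()
    # All ratios are <= 1, so ratio**p never underflows the whole sum;
    # at least one ratio equals exactly 1, so the sum is >= 1.
    return m * ((np.abs(v) / m) ** p).sum() ** (1.0 / p)

n50 = stable_lp_norm(v, 50)  # close to v.max(), not 0
```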
75,659,606
2,998,077
To get trend-line's equation (polynomial, order 2)
<p>A simple dataframe that I want to plot it with its trend-line (polynomial, order 2). However I got the equation obviously wrong:</p> <pre><code>y = 1.4x**2 + 6.6x + 0.9 </code></pre> <p>It shall be:</p> <pre><code>y = 0.22x2 - 1.45x + 11.867 # the &quot;2&quot; after x is square </code></pre> <p>How can I get the correct equation?</p> <pre><code>import matplotlib.pyplot as plot from scipy import stats import numpy as np data = [[&quot;2020-03-03&quot;,9.727273], [&quot;2020-03-04&quot;,9.800000], [&quot;2020-03-05&quot;,9.727273], [&quot;2020-03-06&quot;,10.818182], [&quot;2020-03-07&quot;,9.500000], [&quot;2020-03-08&quot;,10.909091], [&quot;2020-03-09&quot;,15.000000], [&quot;2020-03-10&quot;,14.333333], [&quot;2020-03-11&quot;,15.333333], [&quot;2020-03-12&quot;,16.000000], [&quot;2020-03-13&quot;,21.000000], [&quot;2020-03-14&quot;,28.833333]] fig, ax = plot.subplots() dates = [x[0] for x in data] usage = [x[1] for x in data] bestfit = stats.linregress(range(len(usage)),usage) equation = str(round(bestfit[0],1)) + &quot;x**2 + &quot; + str(round(bestfit[1],1)) + &quot;x + &quot; + str(round(bestfit[2],1)) ax.plot(range(len(usage)), usage) ax.plot(range(len(usage)), np.poly1d(np.polyfit(range(len(usage)), usage, 2))(range(len(usage))), '--',label=equation) plot.show() print (equation) </code></pre> <p><a href="https://i.sstatic.net/IupZq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IupZq.png" alt="enter image description here" /></a></p>
<python><pandas><equation><coefficients><trendline>
2023-03-07 08:36:00
1
9,496
Mark K
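A hedged sketch of the fix: `scipy.stats.linregress` only fits a straight line (slope, intercept, r-value, ...), so its outputs cannot label a quadratic trend. The order-2 coefficients are exactly what `np.polyfit(..., 2)` returns, highest power first — build the label from those instead.

```python
import numpy as np

usage = [9.727273, 9.800000, 9.727273, 10.818182, 9.500000, 10.909091,
         15.000000, 14.333333, 15.333333, 16.000000, 21.000000, 28.833333]
x = np.arange(len(usage))

a, b, c = np.polyfit(x, usage, 2)  # quadratic coefficients, highest power first
equation = f"y = {a:.2f}x^2 {b:+.2f}x {c:+.2f}"  # roughly 0.22x^2 - 1.45x + 11.87
```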
75,659,393
5,334,697
Python convert raw text to Yaml or Json
<p>Is there any way to convert raw YAML-formatted text into a YAML file or JSON? I'm getting the raw file data in a response as</p> <pre><code>image: artifactory.test.com/hello:v1 stages: - build build-job-example: stage: build script: - echo &quot;Building the 'Hello World' app&quot; - python3 --version only: - main </code></pre> <p>How do I convert it into YAML, so that I can edit the contents and push to GitLab? My aim is to replace a few values by key and commit.</p>
<python><python-3.x><gitlab>
2023-03-07 08:10:13
1
2,169
Aditya Malviya
75,658,955
7,559,069
BytesIO readline method returns bytes although there is no newline b"\n"
<p>I am using <strong>readline</strong> method from BytesIO API. Expecting to get empty bytes but the method always returns whatever the buffer contains.</p> <pre><code>b = io.BytesIO() b.write(b&quot;foo&quot;) # Note that there is not newline \n b.seek(0) # Move to beginning of buffer b.readline() </code></pre> <p>Returns:</p> <pre><code>b'foo' </code></pre> <p>Instead, I would expect:</p> <pre><code>b'' </code></pre>
<python><byte><buffer><readline><bytesio>
2023-03-07 07:16:37
0
495
Antman
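For reference, this is documented behaviour rather than a bug: `readline()` reads up to and including the next `b"\n"` *or* until EOF, whichever comes first, and only returns `b''` once the stream position is already at EOF.

```python
import io

b = io.BytesIO()
b.write(b"foo")
b.seek(0)  # move back to the beginning of the buffer

first = b.readline()   # no newline before EOF, so the rest of the buffer
second = b.readline()  # already at EOF, so only now is it empty
```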
75,658,724
597,858
How to calculate the rate of interest in compound interest problem using python
<p>I know how to calculate compound interest in Python using this function:</p> <pre><code>def compound_interest(principal, rate, time): # Calculates compound interest Amount = principal * (pow((1 + rate / 100), time)) CI = Amount - principal print(&quot;Compound interest is&quot;, CI) </code></pre> <p>I want to input the principal amount, final amount, and time to the function and have it output the rate. How do I do that?</p>
<python>
2023-03-07 06:44:14
1
10,020
KawaiKx
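A sketch of the algebra: rearranging `A = P * (1 + r/100)**t` for the rate gives `r = 100 * ((A/P)**(1/t) - 1)`.

```python
def interest_rate(principal, amount, time):
    # Inverse of: amount = principal * (1 + rate / 100) ** time
    return 100 * ((amount / principal) ** (1 / time) - 1)

# Round-trip check: 1000 at 5% for 3 years, then recover the 5%.
final = 1000 * (1 + 5 / 100) ** 3
rate = interest_rate(1000, final, 3)
```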
75,658,529
1,354,400
Polars cumulative sum over consecutive groups
<p>I have a DataFrame like so:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.from_repr(&quot;&quot;&quot; ┌────────────┬───────┬───────┐ │ Date ┆ Group ┆ Value │ │ --- ┆ --- ┆ --- │ │ date ┆ i64 ┆ i64 │ ╞════════════╪═══════╪═══════╡ │ 2020-01-01 ┆ 0 ┆ 5 │ │ 2020-01-02 ┆ 0 ┆ 8 │ │ 2020-01-03 ┆ 0 ┆ 9 │ │ 2020-01-01 ┆ 1 ┆ 5 │ │ 2020-01-02 ┆ 1 ┆ -1 │ │ 2020-01-03 ┆ 1 ┆ 2 │ │ 2020-01-01 ┆ 2 ┆ -2 │ │ 2020-01-02 ┆ 2 ┆ -1 │ │ 2020-01-03 ┆ 2 ┆ 7 │ └────────────┴───────┴───────┘ &quot;&quot;&quot;) </code></pre> <p>I want to do a cumulative sum grouped by &quot;Date&quot; in the order of the &quot;Group&quot; consecutively, something like:</p> <pre><code>| Date | Group | Value | |------------|-------|------------------| | 2020-01-01 | 0 | 5 | | 2020-01-02 | 0 | 8 | | 2020-01-03 | 0 | 9 | | 2020-01-01 | 1 | 10 (= 5 + 5) | | 2020-01-02 | 1 | 7 (= 8 - 1) | | 2020-01-03 | 1 | 11 (= 9 + 2) | | 2020-01-01 | 2 | 8 (= 5 + 5 - 2) | | 2020-01-02 | 2 | 6 (= 8 - 1 - 1) | | 2020-01-03 | 2 | 18 (= 9 + 2 + 7) | </code></pre> <p>The explanation for these values is as follows. Group 0 precedes group 1 and group 1 precedes group 2. For the values of group 0, we need not do anything, cumulative sum up to this group are just the original values. For the values of group 1, we accumulate the values of group 0 for each date. Similarly, for group 2, we accumulate the values of group 1 and group 0.</p> <p>What I have tried is to do this via a helper pivot table. I do it iteratively by looping over the Groups and doing a cumulative sum over a partial selection of the columns and adding that into a list of new values. 
Then, I replace these new values with into a column into the original DF.</p> <pre class="lang-py prettyprint-override"><code>ddf = df.pivot(on='Group', index='Date', values='Value') new_vals = [] for i in range(df['Group'].max() + 1): new_vals.extend( ddf.select([pl.col(f'{j}') for j in range(i+1)]) .sum_horizontal() .to_list() ) df.with_columns(pl.Series(new_vals).alias('CumSumValue')) </code></pre> <p>Is there a way to do this without loops or all this &quot;inelegance&quot;?</p>
<python><dataframe><window-functions><python-polars><cumulative-sum>
2023-03-07 06:09:01
1
902
Syafiq Kamarul Azman
75,658,389
346,977
Pandas/matplotlib newbie: aggregating time series data with differing indices?
<p>I'm getting to grips with pandas/matplotlib, and looking to aggregate multiple data series with (marginally) differing indices. For example:</p> <p><strong>Series 1</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>seconds_since_start</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>0.0</td> <td>35</td> </tr> <tr> <td>0.8</td> <td>41</td> </tr> <tr> <td>1.1</td> <td>48</td> </tr> </tbody> </table> </div> <p><strong>Series 2</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>seconds_since_start</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>0.0</td> <td>31</td> </tr> <tr> <td>0.7</td> <td>37</td> </tr> <tr> <td>1.1</td> <td>41</td> </tr> </tbody> </table> </div> <p>At present, I'm plotting both series as 2 separate line graphs. Ultimately, I'm looking to create a single line that shows, for any given x value, the mean y of both series. The values between specified points can be assumed to be linear.</p> <p>I assume this is a common task, but the ways I'm trying involve a lot more complexity than I suspect is necessary.</p> <p>In short: is there a straightforward way in plot the mean for series that have differing index values?</p> <p>Notes:</p> <ul> <li>While the only immediate need is graphing, ideally the aggregation would be calculated in pandas, not matplotlib</li> <li>The solution will aggregate &gt;100 different series, not just 2</li> </ul>
<python><pandas><matplotlib>
2023-03-07 05:40:14
2
12,635
PlankTon
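A hedged sketch in pandas: align every series on the union of all x values with `pd.concat`, fill the gaps with linear interpolation over the index (matching the "values between points are linear" assumption), then take the row-wise mean. The same `concat` call scales to >100 series.

```python
import pandas as pd

s1 = pd.Series([35, 41, 48], index=[0.0, 0.8, 1.1])
s2 = pd.Series([31, 37, 41], index=[0.0, 0.7, 1.1])

df = pd.concat([s1, s2], axis=1)     # union of the indices, NaN where missing
df = df.interpolate(method='index')  # linear in the x values, not row numbers
mean_line = df.mean(axis=1)          # one aggregated value per x
```

`mean_line` is an ordinary Series, so `mean_line.plot()` draws the single averaged line.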
75,658,340
21,346,793
Why does the program return an empty response
<p>This is my code in FLASK. I try to make validation form in flask library. also i try to check it with postman request. But it doesn't work, please help:</p> <pre><code>from flask import Flask from flask_wtf import FlaskForm from wtforms import StringField, IntegerField app = Flask(__name__) app.config['WTF_CSRF_ENABLED'] = False class RegistrationForm(FlaskForm): email = StringField() phone = IntegerField() name = StringField() address = StringField() index = IntegerField() comment = StringField() @app.route('/registration', methods=['POST']) def registration(): form = RegistrationForm() if form.validate_on_submit(): email, phone = form.email.data, form.phone.data return fr&quot;Successfully registered user {email} with phone +7{phone}&quot; return f'Invalid input, {form.errors}', 400 if __name__ == '__main__': app.run(debug=True) </code></pre> <p>I try to make POST with postman, but it return: Successfully registered user None with phone +7None.</p> <p>How can i fix that, please, help</p> <p>Try to rewrite code, try to debug, but i didn't find that exception</p>
<python><flask>
2023-03-07 05:32:01
1
400
Ubuty_programmist_7
75,658,318
1,039,860
using keys to navigate a QComboBox in a QDialog
<p>I have a QDialog with a QComboBox and a cancel button. The two options the user has is to either select an item from the QComboBox or hit the cancel button. I would like the user to have the option of navigating with keys instead of just the mouse. ESC would press the cancel button, moving the keys up and down move the selection in the QComboBox, and enter/return selects the item. This code supports either a QComboBox or a QListWidget (depending on the as_list parameter). Ultimately I would like this concept to work with both.</p> <p>Here is the code:</p> <pre><code>class OptionsDialog(QDialog): def __init__(self, title, selected, options, as_list=False): super().__init__() self.type = title self.options = options self.ignore_keys = ignore_keys self.combo_box = None self.list_box = None self.selected_option = None self.setModal(True) top_layout = QVBoxLayout() layout = QHBoxLayout() if as_list: self.list_box = QListWidget() layout.addWidget(self.list_box) i = 0 for option in self.options: self.list_box.insertItem(i, option) if option == selected: self.list_box.setCurrentRow(i) i += 1 self.list_box.setSelectionMode(QListWidget.SingleSelection) self.list_box.clicked.connect(self.list_box_changed) else: self.combo_box = QComboBox() layout.addWidget(self.combo_box) for option in self.options: self.combo_box.addItem(option) self.combo_box.currentIndexChanged.connect(self.combo_box_changed) top_layout.addLayout(layout) layout = QHBoxLayout() button = QPushButton('Cancel') layout.addWidget(button) button.clicked.connect(self.cancel_pressed) top_layout.addLayout(layout) self.setLayout(top_layout) def combo_box_changed(self, index): self.selected_option = self.combo_box.currentText() self.close() def list_box_changed(self): self.selected_option = self.list_box.currentItem().text() self.close() def cancel_pressed(self): self.selected_option = None self.close() </code></pre>
<python><qcombobox>
2023-03-07 05:27:45
1
1,116
jordanthompson
75,658,285
5,550,284
How to append strings to the column of a Dataframe from another array in Pandas?
<p>I have a Dataframe that looks like below</p> <pre><code> ip metric 0 10.10.20.9 0 1 10.10.1.25 0 2 10.1.13.45 0 3 10.1.100.101 0 4 10.1.100.11 0 5 10.11.2.100 0 6 10.1.2.151 0 7 10.1.2.184 0 8 10.1.20.185 0 </code></pre> <p>I want to append some strings to the <code>ip</code> column picked from an array like so</p> <pre><code>arr = [&quot;(0)&quot;, &quot;(1)&quot;, &quot;(2)&quot;, &quot;(3)&quot;, &quot;(4)&quot;, &quot;(5)&quot;, &quot;(6)&quot;, &quot;(7)&quot;, &quot;(8)&quot;] ip metric 0 10.10.20.9(0) 0 1 10.10.1.25(1) 0 2 10.1.13.45(2) 0 3 10.1.100.101(3) 0 4 10.1.100.11(4) 0 5 10.11.2.100(5) 0 6 10.1.2.151(6) 0 7 10.1.2.184(7) 0 8 10.1.20.185(8) 0 </code></pre> <p>You can see I took items from the array and added to the values of the <code>ip</code> column.</p> <p>Now I know how to add a string to the column of a Dataframe by doing something like below</p> <pre><code>df[&quot;ip&quot;] = df[&quot;ip&quot;].astype(str) + '%' </code></pre> <p>But I can't figure out how to add items from an array to the Dataframe column. Any idea how this can be done?</p>
<python><pandas>
2023-03-07 05:21:27
2
3,056
Souvik Ray
75,658,264
266,185
How to wait but not block thread when getting a resource in python?
<p>Suppose we need to create a database connection pool. The requirement is that when a client tries to get a connection and all existing connections are busy, it needs to wait up to 30 seconds before giving up, in the hope that some connection is released by another client. So the naive solution is</p> <pre><code>def get_connection(): if all_conn_are_busy: time.sleep(30) try to get connection again else: return conn </code></pre> <p>But since time.sleep(30) will block the thread, if 2 clients try to get a connection at the same time, it will block for 60 seconds. So is there any way to avoid blocking, but still wait for some time?</p>
<python><block>
2023-03-07 05:16:16
1
6,013
Daniel Wu
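A hedged sketch using `threading.Condition`: `wait_for()` releases the lock while waiting (so other clients are not blocked), honours a 30-second timeout, and wakes early the moment `notify()` reports a released connection — no fixed `time.sleep(30)`.

```python
import threading
import time

class Pool:
    def __init__(self, connections):
        self._free = list(connections)
        self._cond = threading.Condition()

    def get_connection(self, timeout=30):
        with self._cond:
            # wait_for releases the lock while sleeping and re-checks the
            # predicate on every notify(); returns False on timeout.
            if not self._cond.wait_for(lambda: self._free, timeout=timeout):
                raise TimeoutError('no connection became free')
            return self._free.pop()

    def release_connection(self, conn):
        with self._cond:
            self._free.append(conn)
            self._cond.notify()

pool = Pool(['conn-1'])
held = pool.get_connection()

# Return the connection from another thread shortly; the waiting
# get_connection() below wakes immediately instead of sleeping 30 s.
threading.Timer(0.1, pool.release_connection, args=[held]).start()
start = time.monotonic()
conn = pool.get_connection(timeout=30)
elapsed = time.monotonic() - start
```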
75,658,054
5,976,033
Azure Function timeout after 5mins even though `functionTimeout` is set to `00:10:00` in `host.json`
<p>I'm stumped here. I have an Azure Function with Python runtime, Consumption Plan, Timer Trigger.</p> <p>Every time the Function runs it timesout at 5mins even though the <code>host.json</code> is set for 10mins.</p> <p><code>host.json</code>:</p> <pre><code>{ &quot;version&quot;: &quot;2.0&quot;, &quot;functionTimeout&quot;: &quot;00:10:00&quot; } </code></pre> <p>Error:</p> <pre><code>Exception type Microsoft.Azure.WebJobs.Host.FunctionTimeoutException Exception message Timeout value of 00:05:00 was exceeded by function: Functions.daily_job </code></pre> <p>What am I missing here? Why is this occurring and how do I override it?</p> <hr /> <p><strong>EDIT 1: <code>functionTimeout</code> is definitely set in Azure.</strong></p> <p><a href="https://i.sstatic.net/Uyshm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Uyshm.png" alt="enter image description here" /></a></p> <p><strong>Yet it always throws a timeout error:</strong></p> <p><a href="https://i.sstatic.net/z3pBi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z3pBi.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/KxV0p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KxV0p.png" alt="enter image description here" /></a></p> <hr /> <p><strong>EDIT 2</strong>:</p> <p>I had two <code>host.json</code> files, one in the project root (the correct one, but one I was not editing!), and one in the function directory (the incorrect one I was editing). I removed the incorrect one and edited the correct file and of course, the timeout is working as intended.</p>
<python><azure-functions><timeout>
2023-03-07 04:30:05
1
4,456
SeaDude
75,658,016
8,032,151
How to configure setuptools_scm to always generate timestamp and git hash
<p>The <a href="https://github.com/pypa/setuptools_scm" rel="nofollow noreferrer"><code>setuptools_scm</code></a> package can by default generate 4 different version messages.</p> <pre><code>no distance and clean: {tag} distance and clean: {next_version}.dev{distance}+{scm letter}{revision hash} no distance and not clean: {tag}+dYYYYMMDD distance and not clean: {next_version}.dev{distance}+{scm letter}{revision hash}.dYYYYMMDD </code></pre> <p>In my use case, I don't want to use its version message. Instead, I want to use it to retrieve git hash and timestamp information.</p> <pre><code>from setuptools_scm import get_version my_version = get_version() </code></pre> <p>According to its documentation, there is a <code>get_version()</code> function. But if the current repo has no distance and is clean, it only generates <code>tag</code>, which is not enough.</p> <p>My question is how to configure the <code>get_version()</code> function so that it always includes the git hash and timestamp, so I can parse it and create my own version message.</p>
<python><pip><setuptools><setuptools-scm>
2023-03-07 04:20:00
1
761
Billy
75,658,015
10,200,497
Percent change of values that are not NaN
<p>This is my dataframe:</p> <pre><code>df = pd.DataFrame({'a': [10, 11, 20, 80, 1, 22], 'b':['x', np.nan, 'x', np.nan, np.nan, 'x']}) </code></pre> <p>And this is the output that I want:</p> <pre><code> a b c 0 10 x NaN 1 11 NaN NaN 2 20 x 100 3 80 NaN NaN 4 1 NaN NaN 5 22 x 10 </code></pre> <p>I want to create column <code>c</code>, which is the percent change of the values of column <code>a</code> whose rows are not <code>NaN</code> in <code>b</code>. For example, 100 in <code>c</code> is the result of the percent change from 10 to 20.</p> <p>I have tried to create a new dataframe by using <code>df.loc[df.b.notna(), 'a'].values</code> but I still cannot get the result that I want.</p>
<python><pandas>
2023-03-07 04:19:02
1
2,679
AmirX
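A hedged sketch continuing from `df.b.notna()`: select just those rows of `a`, call `pct_change()` on that sub-series, and write it back with `.loc` — the untouched rows of the new column stay NaN.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [10, 11, 20, 80, 1, 22],
                   'b': ['x', np.nan, 'x', np.nan, np.nan, 'x']})

mask = df['b'].notna()
# pct_change runs only over the consecutive non-NaN-b rows (10 -> 20 -> 22).
df.loc[mask, 'c'] = df.loc[mask, 'a'].pct_change() * 100
```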
75,658,000
8,609,411
How to refresh a dataset in Power BI Service which uses Python script connector as a source?
<p>I've a report which uses Python script connector as a source. Below is the example of the code.</p> <pre><code>import requests import json import pandas as pd authentication_url = &quot;https://name.api.yyyymanager.com/Authentication/AuthorizeUser&quot; credentials = { &quot;Username&quot;:&quot;Sha******ya&quot;, &quot;Password&quot;:&quot;*********&quot; } response = requests.post(authentication_url,json=credentials) token = str(response.content)[3:-2] headers = { 'Content-Type':'application/json', 'Accept':'application/json', 'X-RM12Api-ApiToken':token } # Property Insurance dataset_url = &quot;https://name.api.yyyymanager.com/ReportWriterReports/218/RunReportWriterReport&quot; response = requests.get(url=dataset_url,headers=headers) data = json.loads(response. Content) property_insurance = pd.DataFrame(data['Rows']) property_insurance = property_insurance[['name','Property Insurance Expiration']] </code></pre> <p>Using this script we get the data in power bi and based on it we create our visual.</p> <p>Now when I deploy it in Power BI Service and try to schedule the refresh of the dataset I get the below error at the dataset setting page.</p> <p><a href="https://i.sstatic.net/8Lj6b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Lj6b.png" alt="enter image description here" /></a></p>
<python><powerbi><powerbi-datasource>
2023-03-07 04:15:30
1
651
Shahab Haidar
75,657,926
18,148,705
Unable to iterate through a json
<p>I have a json object that looks something like this</p> <pre><code>[{'&quot;p&quot;': '{&quot;n&quot;:&quot;s&quot;,&quot;i&quot;:&quot;1&quot;}'},.....] </code></pre> <p>Just imagine multiple objects like this. I want to iterate through this and access n and i keys like a normal json but unfortunately, i am not able to figure it out.</p> <p>I tried this -</p> <pre><code>for i in a: for j in i: j.replace(&quot;'&quot;,&quot; &quot;) for k,v in j: print(k,v) </code></pre> <p>But getting this</p> <blockquote> <p>ValueError: not enough values to unpack (expected 2, got 1)</p> </blockquote> <p>Is there any way i can convert that weird json into normal json like below</p> <pre><code>[{&quot;p&quot;: {&quot;n&quot;:&quot;s&quot;,&quot;i&quot;:&quot;1&quot;}},.....] </code></pre> <p>Any help will be appreciated, Thank you.</p>
<python><json>
2023-03-07 03:59:47
2
335
user18148705
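A hedged sketch of one possible answer: the inner values are JSON *strings*, so the fix is `json.loads` on each value rather than string replacement (plus `.strip('"')` for the quoted keys).

```python
import json

a = [{'"p"': '{"n":"s","i":"1"}'}]  # the shape shown in the question

normalized = []
for obj in a:
    # Strip the stray quotes around the key and decode the nested JSON string.
    normalized.append({k.strip('"'): json.loads(v) for k, v in obj.items()})

for obj in normalized:
    inner = obj['p']
    n, i = inner['n'], inner['i']  # plain dict access now works
```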
75,657,924
17,473,587
What is or inside int in python?
<p>This function:</p> <pre><code>def posts(request): # Get start and end points start = int(request.GET.get(&quot;start&quot;) or 0) end = int(request.GET.get(&quot;end&quot;) or (start + 9)) # Generate list of posts data = [] for i in range(start, end + 1): data.append(f&quot;Post #{i}&quot;) # Artificially delay speed of response time.sleep(1) # Return list of posts return JsonResponse({ &quot;posts&quot;: data }) </code></pre> <p>in view.py, what is <code>or</code> in these two lines?</p> <pre><code>start = int(request.GET.get(&quot;start&quot;) or 0) end = int(request.GET.get(&quot;end&quot;) or (start + 9)) </code></pre>
<python>
2023-03-07 03:58:39
0
360
parmer_110
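For reference, `x or y` evaluates to `x` when `x` is truthy and to `y` otherwise — here it supplies a fallback when the query parameter is missing (`None`) or empty (`""`), both of which are falsy:

```python
# request.GET.get("start") returns None when the parameter is absent,
# so `None or 0` evaluates to 0; an empty string "" is falsy too.
start = int(None or 0)        # missing parameter    -> 0
end = int('' or (start + 9))  # empty parameter      -> 9
given = int('5' or 0)         # parameter supplied   -> 5
```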
75,657,917
1,223,946
How to define a script in the venv/bin dir with pyproject.toml (in hatch or any other wrapper)
<p>Im unsure about the <a href="https://packaging.python.org/en/latest/tutorials/packaging-projects/#creating-pyproject-toml" rel="nofollow noreferrer">new doc</a> on packaging with <a href="https://hatch.pypa.io/latest/" rel="nofollow noreferrer">hatch</a> and wonder if someone worked out how to define a script in a pip installable package. So in short I need to be able to direct <code>python -m build</code> to make a package with <code>open_foo_bar.py</code> as in the example, install into the <code>(virtual env)/bin</code> dir.</p> <p>my package looks like this (after a python -m build step that generated dist dir)</p> <pre><code>pypi_package/ ├── bin │ └── open_foo_bar.py ├── dist │ ├── foo-0.1.0-py3-none-any.whl │ └── foo-0.1.0.tar.gz ├── pyproject.toml ├── README.md └── test_pkg ├── foolib.py └── __init__.py </code></pre> <p>Im trying to get <code>bin/open_foo_bar.py</code> installed into the <code>$(virtual env)/bin</code> instead it installs it into the <code>site-packages/bin</code></p> <pre><code>./lib/python3.10/site-packages/bin/open_foo_bar.py </code></pre> <p>myproject.toml is</p> <pre><code>[build-system] requires = [&quot;hatchling&quot;] build-backend = &quot;hatchling.build&quot; [project] name = &quot;FOO&quot; version = &quot;0.1.0&quot; authors = [ { name=&quot;Mr Foo&quot;, email=&quot;foo@bar.com&quot; }, ] description = &quot;a Foo bar without drinks&quot; readme = &quot;README.md&quot; requires-python = &quot;&gt;=3.8&quot; classifiers = [ &quot;Programming Language :: Python :: 3&quot;, &quot;License :: OSI Approved :: MIT License&quot;, &quot;Operating System :: OS Independent&quot;, ] dependencies = [ 'requests' ] [project.urls] &quot;Homepage&quot; = &quot;http://footopia.s/howto_foo&quot; </code></pre> <p>This used to be easy by defining the <code>scripts</code> section in setup.py</p> <pre><code>setuptools.setup( ... scripts ['bin/script1'], ... ) </code></pre>
<python><pip><setuptools><hatch>
2023-03-07 03:56:16
1
2,176
Peter Moore
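A hedged sketch of the hatchling-era equivalent: rather than copying a raw file into `bin/`, declare a console-script entry point in `pyproject.toml`; pip then generates a launcher in the environment's `bin/` directory at install time. This assumes the logic of `open_foo_bar.py` is moved into a `main()` function importable from the package — the module path below is illustrative:

```toml
# Assumes test_pkg/open_foo_bar.py defines a main() function.
[project.scripts]
open-foo-bar = "test_pkg.open_foo_bar:main"
```

After rebuilding and reinstalling, an `open-foo-bar` executable should appear in `$(virtual env)/bin`, not in `site-packages/bin`.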
75,657,887
815,653
How to understand a line of dead code in a python function?
<p>The following code comes from Ply, python’s lexer and parser. I understand the first line is a raw string but I also feel that the first line of the code looks like dead code and will be discarded in execution. How could I understand that line of code?</p> <pre><code>def t_newline(t): r'\n+' t.lexer.lineno += t.value.count(&quot;\n&quot;) </code></pre>
<python><ply><dead-code>
2023-03-07 03:48:13
2
10,344
zell
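For reference, that "dead" line is the function's docstring: a string expression as the first statement of a `def` is evaluated and stored on the function object, and PLY introspects it through `__doc__` to use it as the token's regular expression.

```python
def t_newline(t):
    r'\n+'  # not dead code: stored as t_newline.__doc__, PLY's regex for this token
    t.lexer.lineno += t.value.count("\n")

pattern = t_newline.__doc__
```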
75,657,798
1,033,591
Chrome browser can't get csrf from cookie
<p>I copy the code from Django official site to get csrftoken from cookie.</p> <pre><code>function getCookie(name) { let cookieValue = null; if (document.cookie &amp;&amp; document.cookie !== '') { const cookies = document.cookie.split(';'); for (let i = 0; i &lt; cookies.length; i++) { const cookie = cookies[i].trim(); // Does this cookie string begin with the name we want? if (cookie.substring(0, name.length + 1) === (name + '=')) { cookieValue = decodeURIComponent(cookie.substring(name.length + 1)); break; } } } return cookieValue; } csrftoken = getCookie('csrftoken'); </code></pre> <p>It originally works well, but in some days it can't work for Chrome anymore. But it still works for Firefox.</p>
<javascript><python><django>
2023-03-07 03:27:44
0
2,147
Alston
75,657,731
15,299,206
How to update the string with priority order in the dictionary
<p>I have a list of dicts below</p> <pre><code>data = [ { 'Pencil': 'Green' }, { 'Pen': 'N/A' }, { 'Scale': 'Red' }, { 'Compass': 'N/A'}] </code></pre> <p>My priority order is below</p> <pre><code>priority_order = {'Red':4, 'Orange':3, 'Yellow':2, 'Green':1, 'Undefined': 0} </code></pre> <p>I have a <code>main</code> variable which has to be updated with the highest-priority value found in the <code>data</code> list of dicts; by default main is</p> <pre><code>main = 'Undefined' </code></pre> <p>Code is below</p> <pre><code>for each in data: for k,v in each.items(): if priority_order[v] &gt; priority_order[main]: main = priority_order[v] </code></pre> <p>I am getting a KeyError for this</p> <p>My expected output is 'Red', as <code>Scale</code> has 'Red'</p>
<python>
2023-03-07 03:15:32
3
488
sim
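A sketch of one possible fix (not an answer from the thread): the KeyError comes from values such as 'N/A' that are missing from priority_order, and the loop also stores the numeric rank instead of the colour name. Using .get() with a default and keeping v itself addresses both.

```python
data = [{'Pencil': 'Green'}, {'Pen': 'N/A'}, {'Scale': 'Red'}, {'Compass': 'N/A'}]
priority_order = {'Red': 4, 'Orange': 3, 'Yellow': 2, 'Green': 1, 'Undefined': 0}

main = 'Undefined'
for each in data:
    for k, v in each.items():
        # .get() treats unknown values such as 'N/A' as lowest priority,
        # and we keep the colour name itself rather than its numeric rank
        if priority_order.get(v, 0) > priority_order[main]:
            main = v

print(main)  # Red
```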
75,657,663
9,766,517
Obtaining decimal from 2-Byte hex
<p>I have a problem where we are given the barometric pressure (Hg/1000) as 2 Bytes. The data is from a serial readout and we are provided with the following information regarding that:</p> <ul> <li>8 data bits</li> <li>1 Start bit</li> <li>1 Stop bits</li> <li>No Parity</li> </ul> <p>I am trying to convert the bytes into valid pressure readings (between 20 and 32.5) in python, from the following example data:</p> <pre><code>1. ['0xf0', '0x73'] 2. ['0xef', '0x73'] 3. ['0xf1', '0x73'] 4. ['0xf4', '0x73'] 5. ['0xee', '0x73'] 6. ['0xec', '0x73'] </code></pre> <p>So far I have been able to get the value <code>351</code> for number 6 or <code>236,115</code> by converting to decimal and adding them although I'm not really sure where to go from here. I believe this is supposed to correlate to around <code>29.67Hg</code> but I am unsure.</p>
<python><hex><data-conversion>
2023-03-07 02:59:55
1
418
Ellis Thompson
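One plausible decoding (an assumption, not confirmed by the question): treat the two bytes as a little-endian unsigned integer in thousandths of inHg. That puts every sample inside the stated 20 to 32.5 range, and sample 6 lands near the expected 29.67.

```python
def to_pressure(pair):
    """Decode e.g. ['0xf0', '0x73'] assuming little-endian bytes, Hg/1000."""
    raw = bytes(int(b, 16) for b in pair)
    return int.from_bytes(raw, byteorder='little') / 1000

samples = [['0xf0', '0x73'], ['0xef', '0x73'], ['0xf1', '0x73'],
           ['0xf4', '0x73'], ['0xee', '0x73'], ['0xec', '0x73']]
print([to_pressure(s) for s in samples])
# [29.68, 29.679, 29.681, 29.684, 29.678, 29.676]
```

The start/stop/parity details describe the serial framing only; by the time the bytes reach Python they are plain 8-bit values, so only the byte order and scaling matter here.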
75,657,537
2,035,204
Pandas - alternatives to iterating over every row if what you need is the result of row-to-row operations, not just the "end result"
<p>I am trying to analyse rentals - meaning that theoretically, every user is at one point returning what he took out. I have a dataframe that looks like this:</p> <pre><code>| id | date | giver | taker | type | items | |----|-----------|-------|-------|--------|------------------------------------| | 1 | day1 1 am | userA | userB | loan | [4 item a, 3 item b, 15 item c] | | 2 | day1 2 pm | userZ | userG | loan | [13 item g, 31 item zxc, 5 item p] | | 3 | day1 3 pm | userB | userA | return | [4 item a, 3 item b, 15 item c] | | 4 | day1 9 pm | userL | userJ | loan | [3 item t, 3 item u, 6 item k] | | 5 | day2 3am | userK | userH | loan | [4 item a, 3 item b, 15 item c] | | 6 | day2 6 pm | userH | userK | return | [13 item g, 31 item zxc, 5 item p] | </code></pre> <p>What I am doing so far, is that I am just simply iterating over all of the rows, then iterating over all items in &quot;items&quot; of a row (yes, a nested for loop, kill me now), adding or subtracting (depending on whether user gives or takes) the separate items from the &quot;items&quot; column to a dict of dicts containing all users. So for example, after the trade with id 3, the &quot;master&quot; dict containing who has what out looks like this:</p> <p>{'userA': {'item a': 0, 'item b': 0, 'item c': 0},<br> 'userB': {'item a': 0, 'item b': 0, 'item c': 0},<br> 'userZ': {'item g': 13, 'item z': 31, 'item p': 5},<br> 'userG': {'item g': -13, 'item z': -31, 'item p': -5}} <br><br> The issue is that I also have a few helper dicts:<br> A dict which says how many given items were out at any given date - out[&quot;item b&quot;][&quot;day2 3 am&quot;] == 3<br> A dict which stores information about what is the last trade that doesn't have a &quot;0&quot; state after it for a given user for a given item, counting only the trades the user has taken a part in - lastzero[&quot;userA&quot;][&quot;item b&quot;] == 3</p> <p>And of course, because of the nested loop, the code is very slow. I'm just using iterrows to iterate over the rows and a normal for loop inside for the items in &quot;items&quot;. I know I can get a slight performance improvement if I use itertuples, but I'm looking for ideas as to whether anything can be done to not have to use the &quot;iterating over rows&quot; antipattern. What I can't get past is that for instance the dict which says how many given items were out at a given date HAS to be calculated sequentially, ie. row by row by row, I don't know how to replicate that in Pandas without just using iterrows.</p>
<python><pandas><dictionary><optimization><vectorization>
2023-03-07 02:26:05
0
697
Entman
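A sketch of one vectorised alternative, under the assumption that the "[4 item a, ...]" strings have already been parsed into one row per (trade, item) with a qty column (the column names below are hypothetical). A giver's outstanding balance rises and a taker's falls, and because a return is just a trade in the opposite direction, a single signed cumulative sum reproduces the sequential state without iterrows.

```python
import pandas as pd

# Hypothetical pre-parsed shape: one row per (trade, item); parsing the
# "[4 item a, ...]" strings into these rows is assumed done beforehand.
rows = pd.DataFrame({
    'date':  pd.to_datetime(['2023-01-01 01:00', '2023-01-01 14:00',
                             '2023-01-01 15:00', '2023-01-02 03:00']),
    'giver': ['userA', 'userZ', 'userB', 'userK'],
    'taker': ['userB', 'userG', 'userA', 'userH'],
    'item':  ['item a', 'item g', 'item a', 'item a'],
    'qty':   [4, 13, 4, 4],
})

# One signed row per side of each trade: giver +qty, taker -qty.
sides = pd.concat([
    rows.assign(user=rows['giver'], delta=rows['qty']),
    rows.assign(user=rows['taker'], delta=-rows['qty']),
]).sort_values('date', kind='stable')

# The sequential "state after each trade", computed without iterrows.
sides['balance'] = sides.groupby(['user', 'item'])['delta'].cumsum()
```

The per-date "items out" totals can be derived the same way by grouping on ('item', 'date') before the cumulative sum; the point is that cumsum gives row-by-row running state in vectorised form.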
75,657,297
10,339,757
Get aggregates from different Dataframe to current Dataframe with conditions
<p>I have a harvest dataframe and a weather dataframe. I want to get the number of days above a temp threshold for the previous x months before harvest for all blocks. Note the harvest dataframe includes multiple years and the id is not 1-1 between frames, ie 2 blocks in harvest df can share an ID that correspond to a location in the weather frame.</p> <p>My current (working) code is below, but it is VERY slow, on the order of minutes. I want to speed it up but unclear how.</p> <pre><code>def days_above_thresh(x, weather_df): return weather_df.loc[ (weather_df[&quot;id&quot;]==x.id) &amp; \ (weather_df[&quot;day&quot;]&gt;=x['harvest_date']-DateOffset(months=2)) &amp; \ (weather_df[&quot;day&quot;]&lt;=x['harvest_date']) &amp; \ (weather_df[&quot;temperature_max&quot;]&gt;30), &quot;temperature_max&quot;].count() harvest_df[&quot;days_above_30&quot;] = harvest_df.apply(days_above_thresh , args=(weather_df,), axis=1) </code></pre> <p>The dataframes would look something like this -</p> <pre class="lang-none prettyprint-override"><code>weather_df id day temperature_max 1 2020-01-01 30 1 2020-01-02 32 1 2020-01-03 28 1 2020-01-04 25 . . . 2 2020-01-01 10 2 2020-01-02 15 2 2020-01-03 17 2 2020-01-04 12 . . . harvest_df id farm_id harvest_date 1 87 2020-01-02 1 86 2020-01-03 2 13 2020-01-30 </code></pre>
<python><python-3.x><pandas><dataframe><pandas-merge>
2023-03-07 01:31:46
1
371
thefrollickingnerd
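A sketch of a merge-based alternative to the row-by-row apply (not the accepted answer): join harvest to weather once on id, filter the date window and the threshold in bulk, and count per original harvest row. The caveat is memory, since the merge materialises every (harvest row, weather day) pair per id.

```python
import pandas as pd

def days_above(harvest_df, weather_df, thresh=30, months=2):
    # one merge instead of scanning the weather frame once per harvest row
    merged = harvest_df.reset_index().merge(weather_df, on='id')
    in_window = (
        (merged['day'] <= merged['harvest_date'])
        & (merged['day'] >= merged['harvest_date'] - pd.DateOffset(months=months))
    )
    hot = merged[in_window & (merged['temperature_max'] > thresh)]
    # count per original harvest row (duplicate ids per block stay separate)
    counts = hot.groupby('index').size()
    return counts.reindex(harvest_df.index, fill_value=0)

weather_df = pd.DataFrame({
    'id': [1, 1, 1],
    'day': pd.to_datetime(['2020-01-01', '2020-01-02', '2020-01-03']),
    'temperature_max': [31, 35, 20],
})
harvest_df = pd.DataFrame({
    'id': [1],
    'harvest_date': pd.to_datetime(['2020-01-03']),
})
harvest_df['days_above_30'] = days_above(harvest_df, weather_df)
```

If the merged frame is too large to hold in memory, the same logic can be run per id group or in chunks of harvest rows.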
75,657,133
6,202,327
Sympy yields `TypeError: unsupported operand type(s) for *: 'interval' and 'complex'` for complex rational expression
<p>I have this sympy script:</p> <pre class="lang-py prettyprint-override"><code>from sympy import * from sympy.plotting import plot_implicit x, y = symbols('x y', real=True) alpha = sqrt(2) / 2 expr = 1 + ((1-alpha) * x + y*I) / (1 - alpha * (x + y*I))**2 expr = Eq(abs(expr), 1) p1 = plot_implicit(expr) </code></pre> <p>Which is trying to solve R(z) = 1 where:</p> <p>As far as I can tell the expression I am giving it is correct. Why am I getting incompatible types? Note that if I get rid of the denominator I do get a plot.</p> <p><a href="https://i.sstatic.net/NgOLR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NgOLR.png" alt="enter image description here" /></a></p> <p>Full error:</p> <pre><code>Traceback (most recent call last): File &quot;/home/makogan/Documents/University/Math521/A3/A5.py&quot;, line 49, in &lt;module&gt; p1 = plot_implicit(expr) File &quot;/home/makogan/.local/lib/python3.10/site-packages/sympy/plotting/plot_implicit.py&quot;, line 430, in plot_implicit p.show() File &quot;/home/makogan/.local/lib/python3.10/site-packages/sympy/plotting/plot.py&quot;, line 240, in show self._backend.show() File &quot;/home/makogan/.local/lib/python3.10/site-packages/sympy/plotting/plot.py&quot;, line 1533, in show self.process_series() File &quot;/home/makogan/.local/lib/python3.10/site-packages/sympy/plotting/plot.py&quot;, line 1530, in process_series self._process_series(series, ax, parent) File &quot;/home/makogan/.local/lib/python3.10/site-packages/sympy/plotting/plot.py&quot;, line 1398, in _process_series points = s.get_raster() File &quot;/home/makogan/.local/lib/python3.10/site-packages/sympy/plotting/plot_implicit.py&quot;, line 87, in get_raster func(xinterval, yinterval) File &quot;/home/makogan/.local/lib/python3.10/site-packages/sympy/plotting/experimental_lambdify.py&quot;, line 272, in __call__ return self.lambda_func(*args, **kwargs) File &quot;&lt;string&gt;&quot;, line 1, in &lt;lambda&gt; TypeError: unsupported operand type(s) for *: 'interval' and 'complex' </code></pre>
<python><math><sympy><numerical-methods>
2023-03-07 00:51:36
1
9,951
Makogan
75,656,915
2,924,334
re.sub a list of words, ignore case
<p>I am trying to add the html <code>&lt;b&gt;</code> element to a list of words in a sentence. After doing some search I got it almost working, except the ignore-case.</p> <pre><code>import re bolds = ['test', 'tested'] # I want to bold these words, ignoring-case text = &quot;Test lorem tested ipsum dolor sit amet test, consectetur TEST adipiscing elit test.&quot; pattern = r'\b(?:' + &quot;|&quot;.join(bolds) + r')\b' dict_repl = {k: f'&lt;b&gt;{k}&lt;/b&gt;' for k in bolds} text_bolded = re.sub(pattern, lambda m: dict_repl.get(m.group(), m.group()), text) print(text_bolded) </code></pre> <p>Output:</p> <p><code>Test lorem &lt;b&gt;tested&lt;/b&gt; ipsum dolor sit amet &lt;b&gt;test&lt;/b&gt;, consectetur TEST adipiscing elit &lt;b&gt;test&lt;/b&gt;.</code></p> <p>This output misses the <code>&lt;b&gt;</code> element for <code>Test</code> and <code>TEST</code>. In other words, I would like the output to be:</p> <p><code>&lt;b&gt;Test&lt;/b&gt; lorem &lt;b&gt;tested&lt;/b&gt; ipsum dolor sit amet &lt;b&gt;test&lt;/b&gt;, consectetur &lt;b&gt;TEST&lt;/b&gt; adipiscing elit &lt;b&gt;test&lt;/b&gt;.</code></p> <p>One hack is that I explicitly add the <code>capitalize</code> and <code>upper</code>, like so ...</p> <p><code>bolds = bolds + [b.capitalize() for b in bolds] + [b.upper() for b in bolds]</code></p> <p>But I am thinking there must be a better way to do this. Besides, the above hack will miss words like <code>tesT</code>, etc.</p> <p>Thank you!</p>
<python><python-re>
2023-03-07 00:05:10
1
587
tikka
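A sketch of a fix: pass re.IGNORECASE through the flags keyword (given positionally, the fourth argument of re.sub is count, not flags), and simply wrap whatever text actually matched, so the original case is preserved without any replacement dict.

```python
import re

bolds = ['test', 'tested']
text = ("Test lorem tested ipsum dolor sit amet test, "
        "consectetur TEST adipiscing elit test.")

pattern = r'\b(?:' + '|'.join(map(re.escape, bolds)) + r')\b'
# wrap whatever was actually matched, preserving its original casing
result = re.sub(pattern, lambda m: f'<b>{m.group()}</b>', text,
                flags=re.IGNORECASE)
print(result)
```

This also catches mixed-case forms like "tesT", which the capitalize/upper workaround misses.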
75,656,893
1,039,860
Trying to get cellEntered connection to work with QTableWidget (I want to trigger one or more callbacks)
<p>I am trying to get a callback when a cell is entered <strong>either by clicking or navigating</strong> into with arrow or tab keys (<strong>basically when it gets focus</strong>.) I have tried many of the different connections to no avail (either they flat out are never called or when they are, self.currentItem() returns None.) Once the cell has focus, I want to know what the row &amp; col are so that I can pop up an appropriate dialog for the user to select and then fill out the appropropriate cell.</p> <p>I've been unable to get this to work with my project, so I asked chatAI to generate an example. Turns out that AI isn't as good as we think it is ;-) Here is an example that should work but doesn't :-/ (handle_cell_focus_changed is never called) :</p> <pre><code>from PyQt5.QtWidgets import QApplication, QTableWidget, QTableWidgetItem import sys class TableWidget(QTableWidget): def __init__(self): super().__init__() self.init_ui() def init_ui(self): self.setRowCount(3) self.setColumnCount(3) for row in range(self.rowCount()): for column in range(self.columnCount()): item = QTableWidgetItem(f&quot;({row}, {column})&quot;) self.setItem(row, column, item) self.cellEntered.connect(self.handle_cell_focus_changed) def handle_cell_focus_changed(self, row, column): print(f&quot;Cell ({row}, {column}) has focus&quot;) if __name__ == '__main__': app = QApplication(sys.argv) table_widget = TableWidget() table_widget.show() sys.exit(app.exec_()) </code></pre> <p>self.itemEntered.connect(self.handleItemEntered) works great when moving the mouse around, but I need to know when the user navigates into the cell as well as when the user clicks on a cell.</p> <pre><code>self.cellClicked.connect(self.item_selection_changed) </code></pre> <p>works fine.</p>
<python><pyqt5><qtablewidget>
2023-03-06 23:59:16
0
1,116
jordanthompson
75,656,846
11,141,816
Is there a way for python to perform a matrix inversion at 500 decimal precision
<p>There's an algorithm sensitive to the precision of the output. In particular, the paper requires a real matrix inverse to be computed at 500-decimal precision. I wanted to write a script to check the result with Python. However, the largest float data type in numpy is np.float128, and the decimal package does not seem to have a matrix inverse function.</p> <p>I found the post <a href="https://stackoverflow.com/questions/32685280/matrix-inverse-with-decimal-type-numpy">Matrix inverse with Decimal type NumPy</a>, which passes Decimal values as objects into numpy's functions. But I saw comments on Stack Exchange that Python converts the object to a float(32) object because of the % operators.</p> <p>Is there a way for Python to perform a matrix inversion at 500-decimal precision?</p>
<python><numpy><matrix>
2023-03-06 23:48:52
1
593
ShoutOutAndCalculate
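One dependency-free sketch: do the inversion exactly with fractions.Fraction via Gauss-Jordan elimination, then render entries to any number of significant digits with decimal. (mpmath, with mp.dps = 500 and its matrix inverse, is another common route, assuming that package is available.)

```python
from decimal import Decimal, getcontext
from fractions import Fraction

def invert(matrix):
    """Exact inverse of a square matrix via Gauss-Jordan on Fractions.

    Illustrative sketch: raises StopIteration for singular matrices and
    does no pivot-size optimisation.
    """
    n = len(matrix)
    # augment [A | I] with exact rational entries
    a = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(matrix)]
    for col in range(n):
        # find a row with a nonzero entry in this column and swap it up
        pivot = next(r for r in range(col, n) if a[r][col] != 0)
        a[col], a[pivot] = a[pivot], a[col]
        inv_p = Fraction(1) / a[col][col]
        a[col] = [x * inv_p for x in a[col]]
        for r in range(n):
            if r != col and a[r][col]:
                factor = a[r][col]
                a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

inv = invert([[1, 2], [3, 4]])   # exact: [[-2, 1], [3/2, -1/2]]

# render any entry at high precision with the decimal module
getcontext().prec = 500
val = Decimal(inv[1][0].numerator) / Decimal(inv[1][0].denominator)
```

Because the arithmetic is exact rationals, there is no precision loss at all; the 500-digit rendering happens only at the final division.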
75,656,804
21,343,992
Create EC2 instance, start instance and run Linux command using Boto3
<p>I am trying to create an AWS EC2 instance, start it, execute a simple Linux command and print the output. However, I keep getting:</p> <blockquote> <p>botocore.errorfactory.InvalidInstanceId: An error occurred (InvalidInstanceId) when calling the SendCommand operation: Instances [[the_instance_id]] not in a valid state for account &lt;some_account&gt;</p> </blockquote> <p>At the moment I use this boto3 script to create the instance:</p> <pre><code>import boto3 ec2 = boto3.resource('ec2') instance = ec2.create_instances( ImageId='ami-0b828c1c5ac3f13ee', MinCount=1, MaxCount=1, InstanceType='t2.micro' ) print(instance) </code></pre> <p>In the AWS Console the 'status check' says '2/2 checks passed'. I then copy-paste the instance-id in to the below script to execute a Linux <code>echo</code> command:</p> <pre><code>import boto3 commands = [' echo &quot;hello world&quot;'] ssm_client = boto3.client('ssm') output = ssm_client.send_command( InstanceIds=[&lt;the_instance_id&gt;], DocumentName='AWS-RunShellScript', Parameters={ 'commands': commands } ) print(output) </code></pre> <p>However, I get:</p> <pre><code>botocore.errorfactory.InvalidInstanceId: An error occurred (InvalidInstanceId) when calling the SendCommand operation: Instances [[&lt;the instance id&gt;]] not in a valid state for account &lt;some account&gt; </code></pre>
<python><amazon-web-services><amazon-ec2><boto3>
2023-03-06 23:41:12
1
491
rare77
75,656,736
8,119,664
Is there a way to reshape a Pandas Series into bins based on time intervals and select one of them?
<p>So I have a timeseries stored on a Pandas Series:</p> <pre class="lang-py prettyprint-override"><code>data = pd.Series(data=[0,1,3,5], index=pd.to_timedelta([0, 15, 30, 45], unit='min')) </code></pre> <p>I wanted to group those data into 30 minute intervals, and then select all the data of the second interval. By looking at the docs, it can find the sum, mean or do other things with the values, but I can't find a way to return all the data as they are as a Pandas Series.</p> <p>So something like this, but obviously this syntax is wrong:</p> <pre class="lang-py prettyprint-override"><code>new_data.resample('30min').iloc[1] </code></pre> <p>So, if I have the Series like this:</p> <pre><code>0 days 00:00:00 0 0 days 00:15:00 1 0 days 00:30:00 3 0 days 00:45:00 5 </code></pre> <p>I'd like to get:</p> <pre><code>0 days 00:30:00 3 0 days 00:45:00 5 </code></pre>
<python><pandas><dataframe><numpy><time-series>
2023-03-06 23:26:52
1
480
CurlyError
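A sketch of one way to do this: a Resampler is iterable, yielding (bin label, sub-Series) pairs, so the second 30-minute bin can be taken directly without aggregating.

```python
import pandas as pd

data = pd.Series(data=[0, 1, 3, 5],
                 index=pd.to_timedelta([0, 15, 30, 45], unit='min'))

# resample() groups without aggregating until you ask it to;
# iterating it yields (bin label, sub-Series) pairs
bins = list(data.resample('30min'))
label, second = bins[1]          # the second 30-minute interval
print(second)
```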
75,656,533
1,701,545
Unable to load pandas in R reticulate due to missing GLIBCXX_3.4.29
<p>I'm trying to use <code>R</code>'s <code>reticulate</code> package for loading <code>python</code>'s <code>pandas</code> package in <code>R</code>.</p> <p>I have <code>python3.8</code>, and I installed <code>pandas</code> through <code>conda</code>. In <code>python</code> <code>pandas</code> imports fine but in <code>R</code>, after loading <code>reticulate</code> I get this error:</p> <pre><code>&gt; pd &lt;- import(&quot;pandas&quot;) Error in py_module_import(module, convert = convert) : ImportError: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /home/mnd/miniconda/lib/python3.8/site-packages/pandas/_libs/window/aggregations.cpython-38-x86_64-linux-gnu.so) </code></pre> <p>I read through <a href="https://stackoverflow.com/questions/65349875/where-can-i-find-glibcxx-3-4-29">this SO post</a> but my <code>/usr/lib/x86_64-linux-gnu/libstdc++.so.6</code> does not have <code>GLIBCXX_3.4.29</code> in it (it has <code>GLIBCXX_3.4</code> to <code>GLIBCXX_3.4.28</code>), and <code>GLIBCXX_3.4.29</code> cannot be found in <a href="https://ftp.gnu.org/gnu/glibc/" rel="nofollow noreferrer">this ftp</a>.</p> <p>I also tried following this <a href="https://stackoverflow.com/questions/67873498/error-while-importing-pandas-in-r-via-reticulate">SO post</a> by installing <code>pandas</code> from <code>R</code> with <code>reticulate</code>: <code>reticulate::py_install(&quot;pandas&quot;, force = TRUE)</code>, which completed, but the <code>pd &lt;- import(&quot;pandas&quot;)</code> command results with the same error above.</p> <p>Any idea how to solve this?</p>
<python><r><pandas><glibc><reticulate>
2023-03-06 22:47:54
2
6,330
user1701545
75,656,499
11,546,773
Dask/pandas apply function and return multiple rows
<p>I'm trying to return a dataframe from the dask <code>map_partitions</code> function. The example code I provided returns a 2 row dataframe in the function. However only 1 row is shown in the end result. Which is in this case only the column name row. I removed the column names in previous test examples but even then only 1 row is shown. I also have this exact same result with pandas only.</p> <p>How can I make this <code>map_partitions</code> function return multiple rows (or dataframe with multiple rows) to a new dask dataframe? A solution with dask delayed might even be better. I need to apply this function on every cell of the dataframe and the result should be a complete new dataframe (with more rows) based on every cell of the dataframe.</p> <p><strong>Current result</strong></p> <pre><code>Dask 0 0 1 2 3 ... 1 0 1 2 3 ... 2 0 1 2 3 ... 3 0 1 2 3 ... 4 0 1 2 3 ... </code></pre> <p><strong>Desired result:</strong></p> <pre><code>Dask 0 1 2 3 4 0 11.760715 14.591147 3.058529 19.868252 22.714292 1 10.601743 21.634348 17.443206 13.619830 13.574586 2 16.346402 2.80519 8.610979 11.656930 23.822052 3 3.100282 17.24039 10.871604 13.625602 22.695311 4 17.240093 23.069574 0.832129 22.055441 3.771150 5 22.676472 23.644936 10.721542 10.563838 17.297389 6 12.54929 0.988218 16.113930 19.572034 7.090997 7 11.76189 10.733782 3.819583 6.998412 14.439809 8 19.371690 5.172882 19.620361 3.148623 23.348465 9 5.924958 14.746566 9.069269 0.560508 15.120616 </code></pre> <p><strong>Example code</strong></p> <pre><code>import pandas as pd import dask.dataframe import numpy as np def myfunc(): data1 = np.random.uniform(low=0, high=25, size=(5,)) data2 = np.random.uniform(low=0, high=25, size=(5,)) # Just a example dataframe to show df = pd.DataFrame([data1, data2]) return df df = pd.DataFrame({ 'val1': [1, 2, 3, 4, 5], 'val2': [1, 2, 3, 4, 5] }) ddf = dask.dataframe.from_pandas(df, npartitions=2) output = ddf.map_partitions(lambda part: part.apply(lambda x: myfunc(), axis=1), meta=object).compute() print('\nDask\n',output) </code></pre>
<python><pandas><numpy><dask><dask-delayed>
2023-03-06 22:40:41
2
388
Sam
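A pandas-only sketch of the likely issue and fix: apply(..., axis=1) must produce exactly one object per input row, so each returned DataFrame gets squeezed into a single cell. Building one frame per row and concatenating lets the output grow rows; with dask the same function can then be handed to map_partitions (with an appropriate meta), since a partition function may legitimately return a different number of rows.

```python
import numpy as np
import pandas as pd

def myfunc(row):
    # stand-in for the real per-row computation: two output rows per input row
    return pd.DataFrame(np.random.uniform(0, 25, size=(2, 5)))

def expand(part):
    # one frame per row, concatenated: the result has more rows than the
    # input, which apply(axis=1) cannot express
    frames = [myfunc(row) for row in part.itertuples()]
    return pd.concat(frames, ignore_index=True)

df = pd.DataFrame({'val1': [1, 2, 3], 'val2': [1, 2, 3]})
out = expand(df)
print(out.shape)  # (6, 5)
```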
75,656,461
2,213,309
logging to stdout and to file
<p>I want some of my loggings to be printed to the terminal and some others exclusively printed to a file.</p> <pre><code>import logging as log filelog = log.getLogger ('file') filelog.addHandler ( log.FileHandler ('example.log') ) filelog.setLevel (log.DEBUG) log.getLogger().setLevel (log.DEBUG) log.debug ('print to terminal') filelog.debug ('print to file') </code></pre> <p>This prints both lines to the terminal (and also the second one to example.log). But I wanted only the first line to be printed to the terminal.</p> <p>Strangely, when I comment out the <code>log.debug</code> line the <code>filelog.debug</code> line does not print to the terminal anymore but only to the file.</p> <p>A possible solution would be initialize two separate loggers like</p> <pre><code>stdlog = log.getLogger ('stdout') filelog = log.getLogger ('file') filelog.addHandler ( log.FileHandler ('example.log') ) </code></pre> <p>but that's pretty annoying if you're using modules and have to import both loggers.</p>
<python><logging><python-logging>
2023-03-06 22:34:27
1
2,259
flappix
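A sketch of a fix: the named 'file' logger propagates its records up to the root logger, and the first log.debug(...) call implicitly gives the root a terminal handler, which is why both lines appear on screen. Setting propagate=False on each named logger keeps the two outputs separate.

```python
import logging
import sys

def make_loggers(logfile):
    filelog = logging.getLogger("file")
    filelog.setLevel(logging.DEBUG)
    filelog.addHandler(logging.FileHandler(logfile))
    filelog.propagate = False   # key line: don't bubble records up to root

    term = logging.getLogger("terminal")
    term.setLevel(logging.DEBUG)
    term.addHandler(logging.StreamHandler(sys.stderr))
    term.propagate = False
    return term, filelog

term, filelog = make_loggers("example.log")
term.debug("print to terminal")
filelog.debug("print to file")
```

Modules can then fetch the same loggers by name with logging.getLogger("file"), so nothing needs to be imported from the configuring module.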
75,656,255
13,138,364
Plot horizontal bars using seaborn.objects
<p>In the original API, <a href="https://seaborn.pydata.org/generated/seaborn.barplot.html" rel="nofollow noreferrer"><code>barplot</code></a> provided an <code>orient</code> parameter to swap bar orientation:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd, numpy as np, seaborn as sns df = pd.DataFrame(data=np.random.randint(10, size=(3, 5)), columns=[*&quot;abcde&quot;]) # a b c d e # 0 8 6 5 2 3 # 1 0 0 0 1 8 # 2 6 9 5 6 9 sns.barplot(data=df, orient=&quot;h&quot;) </code></pre> <p><a href="https://i.sstatic.net/Bn3oD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Bn3oD.png" width="220"></a></p> <p>In the <code>objects</code> API, it seems we should now <a href="https://seaborn.pydata.org/generated/seaborn.objects.Plot.add.html#seaborn.objects.Plot.add" rel="nofollow noreferrer"><code>add</code></a> the orientation to the mark layer, but I'm not sure how:</p> <pre class="lang-py prettyprint-override"><code>import seaborn.objects as so so.Plot(data=df).add(so.Bar(), so.Agg(), orient=&quot;h&quot;) # ValueError: No grouping variables are present in dataframe </code></pre> <p>What is the idiomatic way to change bar orientation with the new <code>objects</code> API?</p>
<python><seaborn><bar-chart><seaborn-objects>
2023-03-06 22:04:26
2
42,007
tdy
75,656,240
17,696,880
How to split and reorder the content inside the ((PERS)) tag by ' y ' or ' y)' using Python regular expressions?
<pre class="lang-py prettyprint-override"><code>import re input_text = &quot;((PERS) Marcos Sy y) ((PERS) Lucy) estuvieron ((VERB) jugando) sdds&quot; #example 1 input_text = &quot;ashsahghgsa ((PERS) María y Rosa ds) son alumnas de esa escuela y juegan juntas&quot; #example 2 input_text = re.sub( r&quot;\(\(PERS\)&quot; + r&quot;((?:\w\s*)+(?:\sy\s(?:\w\s*)+)+)(?=\s*y\s*(?:\)|\())&quot;, #lambda m: (f&quot;((PERS)){m[1]}) y&quot;), lambda m: (f&quot;((PERS)){m[1].replace(' y', ') y ((PERS)')}&quot;), input_text, re.IGNORECASE) print(input_text) # --&gt; output </code></pre> <p>I need to separate the content inside a <code>((PERS) )</code> tag if there is a <code>&quot; y &quot;</code> or a <code>&quot; y)&quot;</code> in between. So get the <code>&quot; y&quot;</code> or the <code>&quot; y &quot;</code> out of the <code>((PERS) )</code> tag and the rest of the content (in case it finds as is the case in <code>example 2</code>) left in another <code>((PERS) )</code> tag. I tried with <code>\s+y\s+?</code> and with <code>\s+y\s+</code></p> <p>To achieve the desired output, I tried with a regex to match all the names inside the <code>((PERS) )</code> tag that are separated by <code>&quot; y &quot;</code> or <code>&quot; y)&quot;</code>. For that I tried to use a positive lookahead to check for <code>&quot; y &quot;</code> or <code>&quot; y)&quot;</code> after each name, and then group all the names together. But this lookahead doesn't work well.</p> <p>So get this output for each of the examples respectively</p> <pre><code>&quot;((PERS) Marcos Sy) y ((PERS) Lucy) estuvieron ((VERB) jugando) sdds&quot; #for example 1 &quot;ashsahghgsa ((PERS) María) y ((PERS)Rosa ds) son alumnas de esa escuela y juegan juntas&quot; #for example 2 </code></pre> <p>This regex is for content that does or doesn't have to start with a capital letter <code>r&quot;([A-Z][\wí]+\s*)&quot;</code> although I think that in this case it would be better to simply use <code>r&quot;((?:\w\s*)+)&quot;</code> since the content is already encapsulated.</p>
<python><regex><split>
2023-03-06 22:02:46
3
875
Matt095
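A sketch of an alternative that sidesteps the lookahead entirely (note that spacing is normalised to "((PERS) name)", which differs slightly from the question's second expected string): capture the tag's content, move a lone trailing "y" outside, and split the rest on " y ".

```python
import re

def split_pers(text):
    def repl(m):
        inner = m.group(1).strip()
        trailing = ''
        if inner.endswith(' y'):             # a lone trailing "y" moves outside the tag
            inner, trailing = inner[:-2], ' y'
        parts = re.split(r'\s+y\s+', inner)  # split remaining names on " y "
        return ' y '.join(f'((PERS) {p})' for p in parts) + trailing
    return re.sub(r'\(\(PERS\)\s*([^()]*?)\s*\)', repl, text)

print(split_pers("((PERS) Marcos Sy y) ((PERS) Lucy) estuvieron ((VERB) jugando) sdds"))
# ((PERS) Marcos Sy) y ((PERS) Lucy) estuvieron ((VERB) jugando) sdds
```

Because the split only fires on a whitespace-delimited "y", names like "Sy" are untouched, and "y" outside any tag ("escuela y juegan") is never rewritten.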
75,656,086
9,392,446
transpose columns and create list in new column
<p>Very difficult question to fit into a title. Let me explain:</p> <p>let's say i have a df like this:</p> <pre><code>id 'state': 'texas' 'phone_type': 'iphone' 'email_domain': 'gmail' 111 1 0 1 222 0 1 1 123 0 1 0 234 1 0 0 432 0 0 1 </code></pre> <p>#code for df</p> <pre><code>df_test = pd.DataFrame(columns=['id' ,&quot;'state': 'texas'&quot; ,&quot;'phone_type': 'iphone'&quot; ,&quot;'email_domain': 'gmail'&quot; ] ,data=[ [111,1,0,1] ,[222,0,1,1] ,[123,0,1,0] ,[234,1,0,0] ,[432,0,0,1] ]) </code></pre> <p>how can i take the columns, transpose them to rows in a new df, and put the ids that = 1 in a list in a new column? and throw in one more column of the count of ids. Like this:</p> <pre><code>attr ids_list count_ids 'state': 'texas' [111,234] 2 'phone_type': 'iphone' [222,123] 2 'email_domain': 'gmail' [111,222,432] 3 </code></pre>
<python><pandas>
2023-03-06 21:38:50
3
693
max
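A sketch of one way to do this with melt: reshape to long form, keep only the 1 flags, then aggregate the ids per attribute.

```python
import pandas as pd

df_test = pd.DataFrame(
    columns=['id', "'state': 'texas'", "'phone_type': 'iphone'", "'email_domain': 'gmail'"],
    data=[[111, 1, 0, 1], [222, 0, 1, 1], [123, 0, 1, 0],
          [234, 1, 0, 0], [432, 0, 0, 1]])

# melt turns each flag column into rows, then we keep only the 1s
long_form = df_test.melt(id_vars='id', var_name='attr', value_name='flag')
hits = long_form[long_form['flag'] == 1]

res = hits.groupby('attr', sort=False)['id'].agg(list).reset_index(name='ids_list')
res['count_ids'] = res['ids_list'].str.len()
print(res)
```

sort=False keeps the attributes in the original column order rather than alphabetical order.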
75,656,074
12,469,912
How to efficiently generate unique random non-zero integers from specific spaces in a range?
<p>I want to generate 100 pairs of unique random non-zero integers from the range (-150, 151). But I want my code to generate from specific areas in (-150, 151). I have coded it as follows:</p> <pre><code>import random my_list = [] c = 0 while c &lt; 100: if c &lt; 5: # Both values are unique. first_num, second_num = random.sample(range(-5, 6), 2) # Exclusion of 0. while first_num == 0 or second_num == 0: first_num, second_num = random.sample(range(-5, 6), 2) c += 1 elif 5 &lt;= c &lt; 80: first_num, second_num = random.sample(range(-100, 101), 2) while first_num == 0 or second_num == 0: first_num, second_num = random.sample(range(-100, 101), 2) c += 1 else: first_num, second_num = random.sample(range(-150, 151), 2) while first_num == 0 or second_num == 0: first_num, second_num = random.sample(range(-150, 151), 2) c += 1 random_nums = (first_num, second_num) if random_nums not in my_list: my_list.append(random_nums) else: c -= 1 </code></pre> <p>Is there a more elegant or efficient alternative to what I have coded above?</p> <p><strong>UPDATE</strong></p> <p>After discussing with MatBailie in the comments, I am updating the requirements:</p> <p>If the whole selected range is (-150 to 151), then in the final <code>my_list</code>:</p> <ul> <li>(5,6), (6,5) are allowed but (5,6), (5,6) are not.</li> <li>If (-1, 1) is generated once from range (-5 to 6) then it should not be generated again from the bigger ranges such as (-100 to 101) or (-150 to 151)</li> <li>It is also good to have pairs such as (-1, 140) or (140, -1).</li> </ul> <p><strong>UPDATE</strong></p> <p>Benchmarking MatBailie, Alain T. and Samwise's answers on <a href="https://perfpy.com/" rel="nofollow noreferrer">perfpy</a> with 100 pairs, I got the following result:</p> <p><a href="https://i.sstatic.net/jNniY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jNniY.png" alt="enter image description here" /></a></p> <p>The same benchmark with generating 1000 pairs and the same ratio of areas, I got this:</p> <p><a href="https://i.sstatic.net/6yPvD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6yPvD.png" alt="enter image description here" /></a></p>
<python><python-3.x>
2023-03-06 21:36:46
3
599
plpm
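A sketch of a tidier variant: build each tier's zero-free pool once, draw ordered pairs with random.sample (which guarantees the two values differ), and deduplicate across all tiers through one shared set, matching the updated requirements.

```python
import random

def gen_pairs(spec, rng=random):
    """spec: list of (n_pairs, lo, hi) tiers; hi is exclusive, as in range().

    Pairs are ordered tuples, globally unique across tiers, with no zeros.
    """
    seen = set()
    result = []
    for n_pairs, lo, hi in spec:
        pool = [v for v in range(lo, hi) if v != 0]   # drop 0 once, up front
        while n_pairs:
            pair = tuple(rng.sample(pool, 2))         # two distinct values
            if pair not in seen:
                seen.add(pair)
                result.append(pair)
                n_pairs -= 1
    return result

pairs = gen_pairs([(5, -5, 6), (75, -100, 101), (20, -150, 151)])
```

The shared seen set is what enforces "(-1, 1) drawn once in the small tier never reappears in a bigger tier", while ordered tuples keep (5, 6) and (6, 5) both allowed.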
75,656,026
1,015,155
Why does numpy.vectorize give a warning about an invalid value when using uncertainties?
<p>With Python 3.10, numpy 1.23.5, and <a href="https://uncertainties-python-package.readthedocs.io/en/latest/" rel="nofollow noreferrer">uncertainties</a> 3.1.7 (on Linux; specifically using packages from conda-forge on Fedora 37), the following code:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from uncertainties.core import Variable x = np.array([1.0, 2.0]) y = np.array([np.nan, np.nan], dtype=float) func = np.vectorize(lambda x, y: Variable(x, y), otypes=[object]) func(x, y) </code></pre> <p>produces:</p> <pre><code>/lib/python3.10/site-packages/numpy/lib/function_base.py:2411: RuntimeWarning: invalid value encountered in &lt;lambda&gt; (vectorized) outputs = ufunc(*inputs) </code></pre> <p>Using <code>warnings.simplefilter(&quot;error&quot;)</code>, I get the following traceback:</p> <pre><code>Traceback (most recent call last): File &quot;/test.py&quot;, line 12, in &lt;module&gt; z = func(x, y) File &quot;/lib/python3.10/site-packages/numpy/lib/function_base.py&quot;, line 2328, in __call__ return self._vectorize_call(func=func, args=vargs) File &quot;/lib/python3.10/site-packages/numpy/lib/function_base.py&quot;, line 2411, in _vectorize_call outputs = ufunc(*inputs) RuntimeWarning: invalid value encountered in &lt;lambda&gt; (vectorized) </code></pre> <p>If I change the input to use all finite floats like this:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from uncertainties.core import Variable x = np.array([1.0, 2.0]) y = np.array([1.0, 1.0], dtype=float) func = np.vectorize(lambda x, y: Variable(x, y), otypes=[object]) func(x, y) </code></pre> <p>or change the code to use a custom class instead of <code>Variable</code> like this:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np class Variable: def __init__(self, x, y): self.x = x self.y = y x = np.array([1.0, 2.0]) y = np.array([np.nan, np.nan], dtype=float) func = np.vectorize(lambda x, y: Variable(x, y), otypes=[object]) func(x, y) </code></pre> <p>then no warning is issued.</p> <p>What is causing this warning from numpy? It comes from compiled code that pdb can not step into. I don't see anything in the uncertainties code for <code>Variable</code> that should error or otherwise be different from a standard Python class like my third example.</p> <p>I note that this code:</p> <pre><code>import numpy as np from uncertainties.core import Variable x = np.array([1.0, 2.0]) y = np.array([np.nan, np.nan], dtype=float) [Variable(ix, iy) for ix, iy in zip(x, y)] </code></pre> <p>produces no error, so there is not actually a problem with passing these arguments to <code>Variable</code>. It seems to be that numpy is examining something about the types or dimensions of the arguments to the vectorized function and detects something that does not match what it expects.</p> <p>Here I tried to provide a simple invocation of <code>numpy.vectorize</code>. The actual case where I encountered this was with <a href="https://github.com/lebigot/uncertainties/blob/804adccf3401aeacbcbae0d669f92131fcd02c03/uncertainties/unumpy/core.py#L291-L296" rel="nofollow noreferrer">uncertainties.unumpy.uarray</a> which uses <code>numpy.vectorize</code> similarly to my example.</p>
<python><numpy>
2023-03-06 21:31:02
1
1,123
ws_e_c421
75,656,004
16,547,860
Connect remote Hive server in VS Code
<p>I am learning Pyspark and Hive. Currently, I want to connect to Hive remote server from VS Code. I would like to access the table and do some ETL using pyspark and write the new table back to the HIVE server. I am using the windows operating system and python language. Any help, documentation, or links are greatly appreciated.</p> <p>The details of the hive connection is shown below.</p> <p><a href="https://i.sstatic.net/rNMwr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rNMwr.png" alt="enter image description here" /></a></p> <p>Thank you.</p>
<python><visual-studio-code><pyspark><hive><bigdata>
2023-03-06 21:26:47
1
312
Shiva
75,655,926
2,324,259
Pandas groupby apply (nested) slow
<p>I have a dataframe with 'category' and 'number' columns. I want to create a new column 'avg_of_largest_2_from_prev_5' which is calculated after grouping by 'category' and averaging highest 2 values from the previous 5 rows' number values, excluding the current row.</p> <pre><code>np.random.seed(123) n_rows = 10000 data = {'category': np.random.randint(1, 1000, n_rows), 'number': np.random.randint(1, n_rows, n_rows)} df = pd.DataFrame(data) %timeit df['avg_of_largest_2_from_prev_5'] = df.groupby('category')['number'].apply(lambda x: x.shift(1).rolling(5, min_periods=0).apply(lambda y: pd.Series(y).nlargest(2).mean())) df = df[df['category'] == df['category'].values[0]] df </code></pre> <p>out: 4.55 s ± 34.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)</p> <pre><code> category number avg_of_largest_2_from_prev_5 0 511 4179 NaN 392 511 2878 4179.0 1292 511 5834 3528.5 1350 511 1054 5006.5 1639 511 8673 5006.5 3145 511 8506 7253.5 4176 511 947 8589.5 4471 511 151 8589.5 4735 511 5326 8589.5 4965 511 4827 8589.5 5046 511 9792 6916.0 5316 511 3772 7559.0 5535 511 1095 7559.0 5722 511 5619 7559.0 5732 511 700 7705.5 6825 511 1156 7705.5 6877 511 7240 4695.5 8100 511 2381 6429.5 8398 511 2376 6429.5 </code></pre> <p>this operation takes 36 seconds with 10k rows and 1k categories. When I try this in 1m+ rows dataframe it takes around 8 minutes. I think there should be a faster way for what I'm trying to do, and I'd appreciate any suggestions.</p>
<python><pandas><dataframe>
2023-03-06 21:16:26
2
571
Emre
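One way to speed up the rolling computation in the question above, sketched on a tiny frame: pass <code>raw=True</code> so the inner function receives a NumPy array instead of building a <code>pd.Series</code> per window, and take the top-2 mean by sorting. <code>min_periods=1</code> replaces the original <code>min_periods=0</code>; under the assumption that <code>nlargest(2).mean()</code> on an empty/NaN window should stay NaN, the results match.

```python
import numpy as np
import pandas as pd

def top2_mean(window):
    # mean of the two largest non-NaN values in the window
    # (intended to mirror pd.Series(window).nlargest(2).mean())
    v = window[~np.isnan(window)]
    if v.size == 0:
        return np.nan
    return np.sort(v)[-2:].mean()

df = pd.DataFrame({"category": [1, 1, 1, 1, 2, 2],
                   "number":   [10, 20, 30, 40, 5, 15]})

# shift(1) excludes the current row; raw=True avoids per-window Series creation
df["avg_of_largest_2_from_prev_5"] = (
    df.groupby("category")["number"]
      .transform(lambda s: s.shift(1)
                            .rolling(5, min_periods=1)
                            .apply(top2_mean, raw=True))
)
print(df["avg_of_largest_2_from_prev_5"].tolist())
# [nan, 10.0, 15.0, 25.0, nan, 5.0]
```

The same nested-apply shape is kept, so the speedup comes purely from skipping the per-window `pd.Series` construction.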
75,655,848
6,107,054
Reference instance attribute in parameterized decorator
<p>I have a method in a class that leverages <code>cachetools</code>'s <code>ttl_cache</code>. I want to be able to create different instances of the class with different <code>ttl</code> values.</p> <p>The <code>ttl</code> value is specified as a parameter of <code>ttl_cache()</code>, but I am unable to reference instance attributes in the decorator's params.</p> <pre><code>class MyClass: def __init__(self, ttl): self.ttl = ttl @cachetools.func.ttl_cache(ttl=self.ttl) # doesn't work. `self` is not available def expensive_operation(self): # do expensive operation </code></pre> <p>How can I implement this?</p>
<python><caching><decorator><python-decorators>
2023-03-06 21:05:50
1
2,427
aberger
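Since decorator parameters are evaluated at class-definition time, before any instance exists, one common workaround is to apply the decorator per instance inside <code>__init__</code>. The sketch below uses a minimal stdlib stand-in for <code>cachetools.func.ttl_cache</code> so it runs without the library; with cachetools installed the same pattern would presumably be <code>self.expensive_operation = cachetools.func.ttl_cache(ttl=self.ttl)(self._expensive_operation)</code>.

```python
import functools
import time

def ttl_cache(ttl):
    # minimal stand-in for cachetools.func.ttl_cache: caches results
    # for `ttl` seconds, assuming hashable positional args
    def decorator(func):
        cache = {}
        @functools.wraps(func)
        def wrapper(*args):
            now = time.monotonic()
            if args in cache:
                value, stored = cache[args]
                if now - stored < ttl:
                    return value
            value = func(*args)
            cache[args] = (value, now)
            return value
        return wrapper
    return decorator

class MyClass:
    def __init__(self, ttl):
        self.ttl = ttl
        # decorate per instance, where self.ttl is available
        self.expensive_operation = ttl_cache(ttl=self.ttl)(self._expensive_operation)

    def _expensive_operation(self):
        self.calls = getattr(self, "calls", 0) + 1
        return self.calls

obj = MyClass(ttl=60)
print(obj.expensive_operation(), obj.expensive_operation())  # 1 1  (second call cached)
```

Each instance gets its own wrapped method and therefore its own cache and its own TTL.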
75,655,836
6,687,699
ResourceWarning: unclosed file <_io.BufferedReader name=
<p>I have used <code>with open</code> but no luck but I still get this error :</p> <pre><code> ResourceWarning: unclosed file &lt;_io.BufferedReader name='/home/idris/Documents/workspace/captiq/captiq/static/docs/CAPTIQ Datenschutzhinweise.pdf'&gt; attachment=customer_profile.get_attachments(), ResourceWarning: Enable tracemalloc to get the object allocation traceback </code></pre> <p>below is my function which the error points to :</p> <pre><code>def get_attachments(self): files = None cp = ( self.cooperation_partner.get_cooperation_partner() if self.cooperation_partner else None) # TODO: improve implementation: too many conditions if cp and cp.custom_attachments.exists(): files = [f.attachment for f in cp.custom_attachments.all()] elif cp and cp.pool.default_b2b_attachments.exists(): files = [ f.attachment for f in cp.pool.default_b2b_attachments.all()] else: files = self.get_default_attachments() if not files: path_one = finders.find( 'docs/file_1.pdf') path_two = finders.find('docs/file_2.pdf') with open(path_one, 'rb') as f1, open(path_two, 'rb') as f2: files = [f1, f2] attachments = [ { 'filename': os.path.basename(attachment.name), 'content': attachment.read(), 'mimetype': mimetypes.guess_type(attachment.name)[0] } for attachment in files] return attachments attachments = [ { 'filename': os.path.basename(attachment.name), 'content': attachment.read(), 'mimetype': mimetypes.guess_type(attachment.name)[0] } for attachment in files] return attachments </code></pre> <p>So the issue is here with this section :</p> <pre><code>with open(path_one, 'rb') as f1, open(path_two, 'rb') as f2: files = [f1, f2] attachments = [ { 'filename': os.path.basename(attachment.name), 'content': attachment.read(), 'mimetype': mimetypes.guess_type(attachment.name)[0] } for attachment in files] return attachments </code></pre>
<python><django>
2023-03-06 21:04:04
0
4,030
Lutaaya Huzaifah Idris
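The warning above usually means file objects are read lazily and never closed (Django <code>FieldFile</code> attachments opened by <code>.read()</code>, or the two <code>open()</code> handles escaping the <code>with</code> block via the returned dicts). A sketch of the fix: read the bytes while the file is open and return only plain data, never the handle. Paths and names below are illustrative.

```python
import mimetypes
import os
import tempfile

def build_attachment(path):
    # read the content while the file is open, so no handle escapes unclosed
    with open(path, "rb") as fh:
        content = fh.read()
    return {
        "filename": os.path.basename(path),
        "content": content,
        "mimetype": mimetypes.guess_type(path)[0],
    }

# demo with a throwaway file standing in for the bundled PDFs
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "example.pdf")
    with open(path, "wb") as fh:
        fh.write(b"%PDF-1.4 demo")
    att = build_attachment(path)

print(att["filename"], att["mimetype"])   # example.pdf application/pdf
```

For the queryset branches, the analogous fix is to call <code>attachment.close()</code> after <code>attachment.read()</code> (or use the storage file as a context manager) before building the dicts.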
75,655,693
14,368,631
Package C++ extension using setuptools along with a stub file
<p>So I have the following file structure:</p> <pre><code>project/ ├─ cpp_src/ │ ├─ src/ │ │ ├─ cpp source files │ ├─ test/ │ │ ├─ cpp test files │ ├─ CMakeLists.txt │ ├─ stub.pyi ├─ python_src/ │ ├─ ... ├─ build.py </code></pre> <p>In my <code>build.py</code> file, I am using setuptools to compile and package the C++ extension in <code>cpp_src</code> using a custom <code>build_ext</code> command. However, I can't seem to get this to include the stub file <code>stub.pyi</code>. How can I modify the setuptools command to do this? I'm not that concerned about the file structure, so if another <code>setup.py</code> file is needed in <code>cpp_src</code>, thats fine.</p> <p>I'm also using Poetry to manage the virtual environment if that helps. Moreover, if there is another build system which will make this easier, I'd be happy to use that.</p> <p>Thanks.</p> <p>EDIT: This is a reduced version of the <code>build.py</code> file (full repo here <a href="https://github.com/Aspect1103/Hades/tree/generation-rust" rel="nofollow noreferrer">https://github.com/Aspect1103/Hades/tree/generation-rust</a>):</p> <pre class="lang-py prettyprint-override"><code>import subprocess from pathlib import Path from setuptools import Extension, setup from setuptools.command.build_ext import build_ext class CMakeBuild(build_ext): def build_extension(self, ext: Extension) -&gt; None: # Determine where the extension should be transferred to after it has been # compiled current_dir = Path.cwd() build_dir = current_dir.joinpath(self.get_ext_fullpath(ext.name)).parent # Determine the profile to build the CMake extension with profile = &quot;Release&quot; # Make sure the build directory exists build_temp = Path(self.build_temp).joinpath(ext.name) if not build_temp.exists(): build_temp.mkdir(parents=True) # Compile and build the CMake extension subprocess.run( [ &quot;cmake&quot;, current_dir.joinpath(ext.sources[0]), f&quot;-DDO_TESTS=false&quot;, 
f&quot;-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_{profile.upper()}={build_dir}&quot;, ], cwd=build_temp, check=True, ) subprocess.run( [&quot;cmake&quot;, &quot;--build&quot;, &quot;.&quot;, f&quot;--config {profile}&quot;], cwd=build_temp, check=True ) def main(): setup( name=&quot;hades_extensions&quot;, script_args=[&quot;bdist_wheel&quot;], ext_modules=[Extension(&quot;hades_extensions&quot;, [&quot;cpp_src&quot;])], cmdclass={&quot;build_ext&quot;: CMakeBuild}, ) if __name__ == &quot;__main__&quot;: main() </code></pre>
<python><c++><setuptools><python-packaging><python-poetry>
2023-03-06 20:47:53
1
328
Aspect11
75,655,681
1,200,914
Filling a cloud 2D image into a continous map
<p>I have the following image:</p> <p><a href="https://i.sstatic.net/qlCjP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qlCjP.png" alt="cloudtocher2dinterpolator" /></a></p> <p>and I wish to obtain something close to (I didn't do it perfectly):</p> <p><a href="https://i.sstatic.net/C5Bcx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C5Bcx.png" alt="enter image description here" /></a></p> <p>How can I do this with python? My initial image is a 2D numpy array of 0 and 255 values.</p>
<python><opencv><python-imaging-library>
2023-03-06 20:46:32
2
3,052
Learning from masters
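One sketch of the dot-cloud-to-solid-region idea, here with <code>scipy.ndimage</code> (OpenCV's <code>morphologyEx</code> close would be analogous): dilate until neighbouring dots merge, fill enclosed holes, then erode back to roughly the original extent. The iteration count depends on the dot spacing in the real image and is an assumption here.

```python
import numpy as np
from scipy import ndimage

# sparse "cloud": white dots on black, like the 0/255 array in the question
img = np.zeros((9, 9), dtype=np.uint8)
img[2:7:2, 2:7:2] = 255

mask = img > 0
merged = ndimage.binary_dilation(mask, iterations=2)  # grow dots until neighbours touch
filled = ndimage.binary_fill_holes(merged)            # close any enclosed gaps
solid = ndimage.binary_erosion(filled, iterations=2)  # shrink back toward original extent
out = (solid * 255).astype(np.uint8)

print(out[4, 4], out[0, 0])   # 255 0  (interior filled, background untouched)
```

Tuning the dilation/erosion iterations (or a structuring element sized to the gap width) trades smoothness of the boundary against how large a gap gets bridged.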
75,655,561
8,194,364
selenium.common.exceptions.ElementClickInterceptedException error clicking on a toggle button on yahoo finance income statement data
<p>I am trying to webscrape Income Statement data from <a href="https://finance.yahoo.com/quote/AAPL/financials?p=AAPL" rel="nofollow noreferrer">https://finance.yahoo.com/quote/AAPL/financials?p=AAPL</a></p> <p>I am trying to simulate a click on Operating Expense to expand the row to see the Research and Development values like such: <a href="https://i.sstatic.net/VVIVz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VVIVz.png" alt="Research and Development rows" /></a></p> <p>This is my code so far:</p> <pre><code>def clickOperatingExpense(url): options = Options() options.add_argument('--headless') driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options) driver.get(url) # This is where I encounter a selenium.common.exceptions.ElementClickInterceptedException WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//*[@id=&quot;Col1-1-Financials-Proxy&quot;]/section/div[4]/div[1]/div[1]/div[2]/div[4]/div[1]/div[1]/div[1]/button'))).click() </code></pre> <p>Just for some more information, the full error is:</p> <pre><code>selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element &lt;button aria-label=&quot;Operating Expense&quot; class=&quot;P(0) M(0) Va(m) Bd(0) Fz(s) Mend(2px) tgglBtn&quot;&gt;...&lt;/button&gt; is not clickable at point (39, 592). Other element would receive the click: &lt;div tabindex=&quot;1&quot; class=&quot;lightbox-wrapper Ta(c) Pos(f) T(0) Start(0) H(100%) W(100%) Bgc($modalBackground) Ovy(a) Z(50) Op(1)&quot;&gt;...&lt;/div&gt; </code></pre> <p>The issue looks like a div tag would receive the click but I'm not sure why. How can I make the tag receive the click instead?</p>
<python><selenium-webdriver><xpath><css-selectors><webdriverwait>
2023-03-06 20:32:34
2
359
AJ Goudel
75,655,544
15,452,168
calculating values in python using formula for a dataframe
<p>I have an Excel sheet with the data below. I need to calculate 2 columns based on other column values, but I am getting NaN values.</p> <pre><code>import pandas as pd # Define the initial data data = {'statMonthName': ['Mar', 'Mar', 'Mar', 'Mar', 'Apr', 'Apr', 'Apr', 'Apr', 'Apr'], 'statWeek': [1, 2, 3, 4, 5, 6, 7, 8, 9], 'p_organic': [3646049.56, 3867696.284, 4051128.056, 4095508.5, 2778538.164, 2789640.51, 2736064.373, 3105200.772, 3112694.166], 'f_organic': [2289567, None, None, None, None, None, None, None, None], 'pct_diff': [None, None, None, None, None, None, None, None, None]} # Create a Pandas dataframe to store the data df = pd.DataFrame(data) # Print the dataframe print(df) </code></pre> <p>In Excel I am using the formula</p> <pre><code>f_organic = D2 =C2*(1+E1) which means f_organic of x week = p_organic of x week * (1+ pct_diff of (x-1) week) </code></pre> <p>and pct_diff is given as</p> <pre><code>pct_diff = =D2/C2-1 pct_diff = f_organic row2/ p_organic row 2 - 1 </code></pre> <p>When I try to achieve this via Python, I get only null values.</p> <p>My code is below:</p> <pre><code># fill null values in f_organic and pct_diff columns for i in range(1, len(df)): df.at[i, 'pct_diff'] = df.at[i, 'f_organic'] / df.at[i, 'p_organic'] - 1 df.at[i, 'f_organic'] = df.at[i, 'p_organic'] * (1 + df.at[i-1, 'pct_diff']) # print final dataframe df </code></pre> <p>Any help or suggestion is appreciated :)</p>
<python><pandas><dataframe><for-loop>
2023-03-06 20:30:17
1
570
sdave
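The NaN cascade in the question above comes from statement order: <code>pct_diff</code> is computed from <code>f_organic</code> before that row's <code>f_organic</code> exists. Mirroring the spreadsheet means filling <code>f_organic</code> first (from the previous row's <code>pct_diff</code>), then <code>pct_diff</code>, and seeding row 0's <code>pct_diff</code> from the one known value. Minimal reproduction with made-up numbers:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"p_organic": [100.0, 200.0, 300.0],
                   "f_organic": [110.0, np.nan, np.nan],
                   "pct_diff":  [np.nan, np.nan, np.nan]})

# seed the first pct_diff from the one known f_organic value (=D2/C2-1)
df.at[0, "pct_diff"] = df.at[0, "f_organic"] / df.at[0, "p_organic"] - 1

for i in range(1, len(df)):
    # f_organic first (it needs the PREVIOUS row's pct_diff), then pct_diff
    df.at[i, "f_organic"] = df.at[i, "p_organic"] * (1 + df.at[i - 1, "pct_diff"])
    df.at[i, "pct_diff"] = df.at[i, "f_organic"] / df.at[i, "p_organic"] - 1

print(df["f_organic"].round(2).tolist())   # [110.0, 220.0, 330.0]
```

With this recurrence, <code>pct_diff</code> simply propagates the initial ratio forward, which is exactly what the Excel formulas <code>=C2*(1+E1)</code> / <code>=D2/C2-1</code> do.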
75,655,531
3,261,292
Calling a module from different directories in python
<p>I have the following project dir structure:</p> <pre><code>Project ├── src | ├── eval.py | └──utils.py └── app.py </code></pre> <p>In <code>eval.py</code>, I import a function from utils:</p> <pre><code>from utils import clean_txt </code></pre> <p>and usually, I run my code like this:</p> <pre><code>python src/eval.py </code></pre> <p>I have another file (<code>app.py</code>) that will call <code>eval.py</code> to run the project. When I run it, I get this error:</p> <blockquote> <p>from utils import reading_jobs_for_id_api</p> </blockquote> <blockquote> <p>ModuleNotFoundError: No module named 'utils</p> </blockquote> <p>It's probably because <code>utils.py</code> is not seen in the <code>Project</code> directory. I found a solution to solve this by running the following in the parent directory. Is this the &quot;right&quot; way (when writing python packages) to solve the problem?</p> <pre><code>export PYTHONPATH=&quot;${PYTHONPATH}:$PWD&quot; </code></pre>
<python><directory>
2023-03-06 20:27:32
3
5,527
Minions
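Besides exporting <code>PYTHONPATH</code>, the usual packaging-friendly answer is absolute imports plus module-mode execution: give <code>src</code> an <code>__init__.py</code>, write <code>from src.utils import clean_txt</code> inside <code>eval.py</code>, and run <code>python -m src.eval</code> from the project root (then <code>app.py</code> can import <code>src.eval</code> directly). A runnable sketch that builds a throwaway copy of that layout; <code>clean_txt</code>'s body is invented for the demo:

```python
import pathlib
import sys
import tempfile

root = pathlib.Path(tempfile.mkdtemp())          # stands in for Project/
(root / "src").mkdir()
(root / "src" / "__init__.py").write_text("")    # makes src an importable package
(root / "src" / "utils.py").write_text(
    "def clean_txt(text):\n    return text.strip()\n"
)
# absolute import: resolves the same way from app.py and from `python -m src.eval`
(root / "src" / "eval.py").write_text("from src.utils import clean_txt\n")

sys.path.insert(0, str(root))   # what running from Project/ gives you implicitly
from src.eval import clean_txt  # app.py can now do exactly this

print(clean_txt("  hello "))    # hello
```

The `PYTHONPATH` export works too, but it has to be repeated per shell; the package layout bakes the resolution into the project itself.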
75,655,502
15,520,615
Extracting Dataverse CRM Data using Dataverse-to-delta Accelerator with Databricks
<p>I am using the Python code on Databricks located <a href="https://github.com/BlueprintTechnologies/blueprint-dataverse-to-delta-accelerator" rel="nofollow noreferrer">here</a> to extract CRM Data to SQL.Everything appeared to work fine when I executed the code on our CRM platform. However, I noticed that some fields appear to have vanished.</p> <p>I played around with the following code changing [0] to <a href="https://github.com/BlueprintTechnologies/blueprint-dataverse-to-delta-accelerator" rel="nofollow noreferrer">1</a> or [100]</p> <pre><code>data_schema = generate_schema(input_json=list_data[0]) </code></pre> <p>When I changed from [0] to <a href="https://github.com/BlueprintTechnologies/blueprint-dataverse-to-delta-accelerator" rel="nofollow noreferrer">1</a> more fields would appear, and if I changed to [199] even more fields would apppear. However, none of the values produced the total number of fields that should be in the table i.e over 300 fields.</p> <p>Therefore, could someone take a look at the code and let me know what exactly the code does:</p> <pre><code>data_schema = generate_schema(input_json=list_data[0]) </code></pre> <p>Also, what I would need to do get the full list of fields?</p>
<python><azure-databricks>
2023-03-06 20:24:24
1
3,011
Patterson
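<code>generate_schema(input_json=list_data[0])</code> infers the schema from a single record, so any field absent from that one record is dropped; changing the index just picks a record with a different subset of fields present, which matches the behaviour described above. One fix sketch (a hypothetical helper, not part of the accelerator): merge the keys of every record before generating the schema.

```python
def merge_record_keys(records):
    """Union of field names across every record, keeping a sample value per field."""
    merged = {}
    for record in records:
        for key, value in record.items():
            merged.setdefault(key, value)
    return merged

list_data = [
    {"accountid": "A1", "name": "Contoso"},
    {"accountid": "A2", "revenue": 1000},
    {"accountid": "A3", "name": "Fabrikam", "industry": "Retail"},
]

template = merge_record_keys(list_data)
print(sorted(template))   # ['accountid', 'industry', 'name', 'revenue']
# data_schema = generate_schema(input_json=template)  # then pass the merged record
```

With 300+ fields in the Dataverse table, scanning all (or a large sample of) records is what guarantees none are silently dropped.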
75,655,344
2,011,041
Django Rest Framework model assigns a str to the primary key and a number to my str field in sqlite
<p>I'm getting my database table to contain a string as the ID (and primary key) while the actual string field contains the ID number, so those two fields seem to be swapped and I can't figure out why.</p> <p>This is a basic Pokemon API for practise purposes, using Django 4.1.7, DRF 3.14 and sqlite as the DB. My models include: Card, Expansion and Type (fire, water, bug, ice, etc.)</p> <p><strong>Type</strong> is a tricky class, as it contains attributes that reference another Type object: <em>name, strong_against (Type), weak_against, resists, vulnerable_to</em>. Except for the type name (a string), all the other fields should reference another Type object (since one Pokemon type can be strong against other types, weak against other types, resists other types and is vulnerable to other types).</p> <p>Type model:</p> <pre><code>class Type(models.Model): &quot;&quot;&quot;Pokémon type&quot;&quot;&quot; name = models.CharField(max_length=100, unique=True, blank=False) strong_against = models.ManyToManyField('self', blank=True, symmetrical=False, related_name=&quot;strong_versus&quot;) weak_against = models.ManyToManyField('self', blank=True, symmetrical=False, related_name=&quot;weak_versus&quot;) resists = models.ManyToManyField('self', blank=True, symmetrical=False, related_name=&quot;resists&quot;) vulnerable = models.ManyToManyField('self', blank=True, symmetrical=False, related_name=&quot;vulnerable&quot;) def __init__(self, name, *args, **kwargs): super().__init__(*args, **kwargs) self.name = name def __str__(self): return f&quot;Type name: {self.name}, Strong against: {self.strong_against}, Weak against: {self.weak_against}, Resists: {self.resists}, Vulnerable to: {self.vulnerable}&quot; def __repr__(self): return f&quot;Type( ID: {self.pk}; NAME: {self.name})&quot; </code></pre> <p>When trying to populate the &quot;Type&quot; database table I get weird results, where the type name field actually seems to contain the primary key value and the ID field contains 
the actual name, when it should be the other way around. I tried populating just one object manually using Django's admin page and also tried populating many objects at once using bulk_create(), and the results are the same. When I use <code>Type.objects.all()</code> to print the table contents this is how they're printed:</p> <pre><code>&lt;QuerySet [Type( ID: Fire; NAME: 1)]&gt; </code></pre> <p>Also, when trying to run <code>Type.objects.all().delete()</code> to clear the table records, I get &quot;ValueError: Field 'id' expected a number but got 'Fire'&quot; (&quot;Fire&quot; is a type name, which seems to be inserted as the id instead of the actual name).</p> <p>I also tried manually adding an &quot;id&quot; field like this:</p> <pre><code>id = models.AutoField(primary_key=True) </code></pre> <p>Results are the same. Also tried removing the &quot;<em>unique</em>&quot; attribute of &quot;<code>name</code>&quot; in case that was somehow causing it to be used as the PK, but still same results.</p> <p>Finally, I just decided to comment out all fields except &quot;name&quot;, flush the DB, recreate it and test again:</p> <pre><code>class Type(models.Model): &quot;&quot;&quot;Pokémon type&quot;&quot;&quot; name = models.CharField(max_length=100, blank=False) </code></pre> <p>Again, same results. So even when the model is as simple as this, it still assigns 1 to the type name and then when I click on it to see details, I get &quot;Type with ID “Fire” doesn’t exist. Perhaps it was deleted?&quot;</p> <p>One more test I did was to change &quot;Type&quot; class name to &quot;PokemonType&quot; (just in case &quot;Type&quot; was colliding with some language or framework keyword). It didn't help.</p> <p>The weird thing is that my <strong>Expansion</strong> model works ok and data is stored correctly in the same db.</p>
<python><django><django-rest-framework>
2023-03-06 20:03:33
1
1,397
Floella
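The swap described above is almost certainly caused by the custom <code>__init__</code>: Django materialises rows by calling the model with field values positionally in declaration order (<code>id</code> first), so <code>def __init__(self, name, *args, **kwargs)</code> captures the <code>id</code> value as <code>name</code> and shifts everything else. Removing the override (or leaving the signature as <code>(self, *args, **kwargs)</code> and not reassigning fields) fixes it. A framework-free reproduction of the mechanism:

```python
class ModelBase:
    def __init__(self, *args):
        # stand-in for django.db.models.Model.__init__, which assigns
        # positional args to fields in declaration order: id first, then name
        for field, value in zip(["id", "name"], args):
            setattr(self, field, value)

class BrokenType(ModelBase):
    def __init__(self, name, *args, **kwargs):
        super().__init__(*args, **kwargs)   # "Fire" is the only positional left: id = "Fire"
        self.name = name                    # and the pk landed in name: name = 1

t = BrokenType(1, "Fire")   # how the ORM loads a row: Type(<id>, <name>)
print(t.id, t.name)         # Fire 1  -- exactly the swap from the question
```

This also explains why the bug survived commenting out every field except <code>name</code>: the override, not the field definitions, is what shifts the arguments.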
75,655,292
2,908,017
How to draw or create a Horizontal Divider in a Python FMX GUI App?
<p>How do I create a Horizontal Divider Line, as can be seen in the screenshot above the buttons:</p> <p><a href="https://i.sstatic.net/gG77e.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gG77e.png" alt="UI with three buttons, divider line, and two radio buttons" /></a></p> <p>How can this be accomplished in an <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">FMX GUI App for Python</a>?</p>
<python><user-interface><firemonkey><horizontal-line>
2023-03-06 19:58:11
1
4,263
Shaun Roselt
75,655,049
2,908,017
How do I set an active tab for TabControl in a Python FMX GUI App?
<p>I've made a <code>Form</code> with a <code>TabControl</code> and Four <code>TabItem</code> tabs on the <code>TabControl</code>. By default, the first tab is always the active tab:</p> <p><a href="https://i.sstatic.net/9YBAU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9YBAU.png" alt="Python GUI App with four tabs" /></a></p> <pre><code>self.TabControl1 = TabControl(self) self.TabControl1.Parent = self self.TabControl1.Align = &quot;Client&quot; self.TabItem1 = TabItem(self.TabControl1) self.TabItem1.Text = &quot;My first tab&quot; self.TabItem1.Parent = self.TabControl1 self.TabItem2 = TabItem(self.TabControl1) self.TabItem2.Text = &quot;My second tab&quot; self.TabItem2.Parent = self.TabControl1 self.TabItem3 = TabItem(self.TabControl1) self.TabItem3.Text = &quot;My third tab&quot; self.TabItem3.Parent = self.TabControl1 self.TabItem4 = TabItem(self.TabControl1) self.TabItem4.Text = &quot;My fourth tab&quot; self.TabItem4.Parent = self.TabControl1 </code></pre> <p>How do I set a different tab as the default tab? Like, let's say I want to set the third tab as active via code.</p>
<python><user-interface><tabs><firemonkey>
2023-03-06 19:29:46
1
4,263
Shaun Roselt
75,655,027
8,849,755
How to fit with scipy.optimize.curve_fit when numbers are not close to 1?
<p>I have a set of x,y points in which x lies between 0 and 100e-12 while y ranges between 0 and 1e9 and want to use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html" rel="nofollow noreferrer">scipy.optimize.curve_fit</a> to fit a model. It fails because of the numbers. If I rescale the data so that the numbers are closer to 1 (i.e. <code>x*=1e10</code> and <code>y*=1e-9</code>), then it works. So I know where the problem is and could eventually live with this solution. But would prefer to perform the fit in the original scale.</p> <p>Is it possible?</p> <p>I have seen <a href="https://stackoverflow.com/a/17238290/8849755">an answer</a> where it is suggested to use a <code>diag</code> argument but with this I get: <code>least_squares() got an unexpected keyword argument 'diag'</code>. I guess it was from an older version. Is there an analogous for the current version?</p> <p>Additional info: I am providing curve_fit with very reasonable <code>p0</code>.</p>
<python><scipy><curve-fitting>
2023-03-06 19:28:08
2
3,245
user171780
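One pattern for the scaling problem above: keep the data as-is at the interface, but wrap the fit so the optimizer works in O(1) units and convert the parameters back afterwards, with scale factors taken from the data ranges. (Depending on your SciPy version, forwarding <code>x_scale</code> to <code>least_squares</code> via <code>method="trf"</code> may also help — worth checking against your install.) Sketch with a linear model standing in for the real one:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.linspace(1e-12, 100e-12, 50)           # x ~ 0..100e-12, as in the question
true_a = 1e19
y = true_a * x + rng.normal(0.0, 1e6, x.size)  # y spans ~0..1e9

sx, sy = 1e-10, 1e-9                           # chosen so x/sx and y*sy are near 1

def model_scaled(x_s, a_s):
    return a_s * x_s                           # same model, expressed in scaled units

popt, _ = curve_fit(model_scaled, x / sx, y * sy, p0=[1.0])
a = popt[0] / (sx * sy)                        # map the parameter back to original units
print(f"{a:.2e}")                              # ~1.00e+19
```

The back-conversion depends on how each parameter enters the model (here y = a·x gives a = a_scaled / (sx·sy)), so a nonlinear model needs its own per-parameter mapping.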
75,655,016
11,676,466
Pytorch is it possible to chain model prediction with two dataloaders
<p>In my training schedule, the process of loading data with <code>__getitem__</code> takes quite a lot of time. It's an image-to-image problem (2x upscaling). I followed the classic approach and I get <code>input</code> images and <code>target</code> images from the same data loader. Prediction of my model also takes some time. The scenario which could help me to speed up the process of training would be to create two separate data loader, the first one <code>input_dataloader</code> for input images and the second one <code>target_dataloader</code> for target images, and use them as follow:</p> <ol> <li>Load only input image from <code>input_dataloader</code>.</li> <li>Predict the upscaled images.</li> <li>While the model is predicting the images load the target image from <code>target_dataloader</code>.</li> </ol> <p>In this scenario, I could overlap the expensive data loading with the computation of my model, quite similar to <a href="https://discuss.pytorch.org/t/should-we-set-non-blocking-to-true/38234" rel="nofollow noreferrer"><code>non_blocking</code></a> but on a data loader level.</p> <p>Are there any PyTorch modules that I could use to implement such behavior?</p>
<python><deep-learning><pytorch>
2023-03-06 19:26:51
0
372
XYZCODE123
75,654,890
6,296,919
how to merge tuple and dataframe data
<p>may be my questions is too basic but I am learning python. Let me know if you need more information.</p> <p>I have dataframe as below.</p> <pre><code> ID Model MVersion dId sGroup eName eValue 0 1 Main V15 40 GROUP 1 dNumber U220059090(C) 1 2 Main V15 40 GROUP 1 tDate 44901 2 3 Main V15 40 GROUP 2 dNumber U220059090(C) 3 4 Main V15 40 GROUP 2 tDate 44901 4 5 Main V15 40 None sCompany bp 5 6 Main V15 42 GROUP 1 dNumber U220059090(C) 6 7 Main V15 42 GROUP 1 tDate 44901 7 8 Main V15 42 GROUP 2 dNumber U220059090(C) 8 9 Main V15 42 GROUP 2 tDate 44901 9 10 Main V15 42 None sCompany bp 10 11 Main V15 44 None Sender sDummy 11 12 Main V15 44 None TradeDate Tdummy 12 13 Main V15 44 None Product Pdummy 13 14 Main V15 44 None seller seDummy </code></pre> <p>I needed to apply grouping on Model, MVersion, dId &amp; sGroup columns which I have done below.</p> <p>I am trying to get result as below into separate group the <strong>None</strong> sGroup should be part of Group1 and Group2 for each dId. some dId might have all sGroup as None. 
Also is that possible to add new column as Group_Id with Incremental values.</p> <pre><code> ID Model MVersion dId sGroup eName eValue Group_Id 0 1 Main V15 40 GROUP 1 dNumber U220059090(C) 1 1 2 Main V15 40 GROUP 1 tDate 44901 1 4 5 Main V15 40 None sCompany bp 1 ID Model MVersion dId sGroup eName eValue Group_Id 2 3 Main V15 40 GROUP 2 dNumber U220059090(C) 2 3 4 Main V15 40 GROUP 2 tDate 44901 2 4 5 Main V15 40 None sCompany bp 2 ID Model MVersion dId sGroup eName eValue Group_Id 5 6 Main V15 42 GROUP 1 dNumber U220059090(C) 3 6 7 Main V15 42 GROUP 1 tDate 44901 3 9 10 Main V15 42 None sCompany bp 3 ID Model MVersion dId sGroup eName eValue Group_Id 7 8 Main V15 42 GROUP 2 dNumber U220059090(C) 4 8 9 Main V15 42 GROUP 2 tDate 44901 4 9 10 Main V15 42 None sCompany bp 4 ID Model MVersion dId sGroup eName eValue Group_Id 10 11 Main V15 44 None Sender sDummy 5 11 12 Main V15 44 None TradeDate Tdummy 5 12 13 Main V15 44 None Product Pdummy 5 13 14 Main V15 44 None seller seDummy 5 </code></pre> <p>what I have tried is to filtered out all None to one dataframe and applied grouping on Model, MVersion, dId and SGroup. I am not sure how can I combined these two result into one. I don't know what is correct and efficient way to do this. 
any help is really appreciated.</p> <pre><code>import pandas as pd import numpy as np data = [ [1,'Main','V15', 40,'GROUP 1','dNumber','U220059090(C)'], [2,'Main','V15', 40,'GROUP 1','tDate','44901'], [3,'Main','V15', 40,'GROUP 2','dNumber','U220059090(C)'], [4,'Main','V15', 40,'GROUP 2','tDate','44901'], [5,'Main','V15', 40,None, 'sCompany','bp'], [6,'Main','V15', 42,'GROUP 1','dNumber','U220059090(C)'], [7,'Main','V15', 42,'GROUP 1','tDate','44901'], [8,'Main','V15', 42,'GROUP 2','dNumber','U220059090(C)'], [9,'Main','V15', 42,'GROUP 2','tDate','44901'], [10,'Main','V15', 42,None,'sCompany','bp'], [11,'Main','V15', 44,None,'Sender','sDummy'], [12,'Main','V15', 44,None,'TradeDate','Tdummy'], [13,'Main','V15', 44,None,'Product','Pdummy'], [14,'Main','V15', 44,None,'seller','seDummy'], [15,'Delivery','V15', 40,None,'delIncoTerm','FIP'], [16,'Delivery','V15', 40,None,'delWindow','44562'], ] df = pd.DataFrame(data, columns=['ID','Model','MVersion','dId','sGroup','eName','eValue']) print(df) print('\n') nullSectionGroup = df[df['sGroup'].isnull()] print('null sGroup') print('----------------') print(nullSectionGroup) print('\n') grpModel = df.groupby('Model') # 1) group by Model for model in grpModel: grpModelVersion = model[1].groupby('MVersion') # 2) group by MVersion for modelVersion in grpModelVersion: grpDocId = modelVersion[1].groupby('dId') # 3) group by dId for docId in grpDocId: #print('docId', docId) grpSG = docId[1].groupby('sGroup') # 4) group by sGroup for x in grpSG: #variable declarition model = x[1].Model.iloc[0] modelVersion = x[1].MVersion.iloc[0] docId = x[1].dId.iloc[0] sectionGroup = x[1].sGroup.iloc[0] #filtering dataframe of null section group based on x[1] values #print('****model :', model, '**mVersion :', mVersion, '**Doc_Id :', dId, '**sGroup :', sGroup) filtered_value = nullSectionGroup.loc[(nullSectionGroup['Model']==model)&amp;(nullSectionGroup['MVersion']==modelVersion)&amp;(nullSectionGroup['dId']==docId)] print('filtered_value =&gt; 
pandas.core.frame.DataFrame') print(filtered_value) print('grouped values =&gt; tuple') print(x) print('\n') </code></pre>
<python><python-3.x><pandas><dataframe><group-by>
2023-03-06 19:10:43
2
847
tt0206
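A sketch of the grouping logic asked for above, reduced to the relevant columns: per <code>dId</code>, pair each named <code>sGroup</code> with that <code>dId</code>'s <code>None</code> rows and stamp an incrementing <code>Group_Id</code>; a <code>dId</code> whose rows are all <code>None</code> becomes a group of its own. Column names follow the question; the loop structure replaces the nested tuple iteration.

```python
import pandas as pd

df = pd.DataFrame({
    "dId":    [40, 40, 40, 40, 40, 44, 44],
    "sGroup": ["GROUP 1", "GROUP 1", "GROUP 2", "GROUP 2", None, None, None],
    "eName":  ["dNumber", "tDate", "dNumber", "tDate", "sCompany", "Sender", "Product"],
})

pieces, gid = [], 0
for _, sub in df.groupby("dId"):
    shared = sub[sub["sGroup"].isna()]        # None rows belong to every group of this dId
    named = sub[sub["sGroup"].notna()]
    if named.empty:                           # dId with only None rows: its own group
        gid += 1
        pieces.append(shared.assign(Group_Id=gid))
        continue
    for _, part in named.groupby("sGroup"):
        gid += 1
        pieces.append(pd.concat([part, shared]).assign(Group_Id=gid))

result = pd.concat(pieces, ignore_index=True)
print(result["Group_Id"].tolist())   # [1, 1, 1, 2, 2, 2, 3, 3]
```

The extra grouping levels from the question (Model, MVersion) would just extend the `groupby` key to `["Model", "MVersion", "dId"]`; the per-group logic stays the same.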
75,654,809
12,733,629
Running VSCode interactive window on WSL with relative imports
<p>I'm trying to visualize plots using pyplot <code>imshow</code>. Windows' WSL does not work with GUIs so the advised way of doing so is via VSCode's interactive window. However, given my project structure,</p> <pre><code>$projectFolder ├── mycode │ ├── __init__.py │ ├── model.py │ ├── dataset.py │ ├── train.py │ └── predict.py └── vision └── references └── detection ├── utils.py ├── transforms.py ├── engine.py └── etc </code></pre> <p>the only way I can run my code is using the terminal (<code>python3 -m mycode.predict</code>), but then again nothing is plotted because of WSL. Here are my imports:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np import torch from dataset import MyDataset from model import get_instance_segmentation_model from PIL import Image from torchvision.transforms import functional as F from torchvision.utils import draw_bounding_boxes, draw_segmentation_masks from train import get_transform from vision.references.detection import transforms as T </code></pre> <p>I followed <a href="https://k0nze.dev/posts/python-relative-imports-vscode/" rel="nofollow noreferrer">this</a> exactly but still get <code>ModuleNotFoundError: No module named 'vision'</code> when trying to run via the VSCode 'play' button and/or as interactive window, even though autocomplete works and nothing is highlighted (&quot;No problems have been detected in the workspace&quot;). What am I doing wrong here? Here's my launch.json:</p> <pre><code>{ // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. 
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Python: Module&quot;, &quot;type&quot;: &quot;python&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;module&quot;: &quot;mycode&quot;, &quot;env&quot;: {&quot;PYTHONPATH&quot;: &quot;${workspaceFolder}/vision/&quot;} } ] } </code></pre>
<python><visual-studio-code><windows-subsystem-for-linux>
2023-03-06 19:01:59
1
327
Martim Passos
75,654,750
1,362,485
Get value and type of python variable similar to Jupyter behavior
<p>Assume you have a Jupyter notebook with one entry that has three lines:</p> <pre><code>x = 1 y = x + 1 y </code></pre> <p>The output will print '2'</p> <p>I want to do this inside my python code. If I have a variable <code>lines</code> and run exec:</p> <pre><code> lines = &quot;&quot;&quot;x = 1 y = x + 1 y&quot;&quot;&quot; exec(lines,globals(),locals()) </code></pre> <p>I will not get any result, because <code>exec</code> returns None. Is there a way to obtain the value and type of the expression in the last line, inside a python program?</p>
<python><jupyter-notebook><jupyter><jupyter-lab>
2023-03-06 18:55:03
1
1,207
ps0604
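Jupyter's behaviour (echo the value of a trailing expression) can be reproduced with the <code>ast</code> module: parse the source, split off the last statement if it is an expression, <code>exec</code> the rest, then <code>eval</code> the tail in the same namespace. Sketch:

```python
import ast

def exec_with_result(src, globs=None):
    """exec() a block; if the last statement is an expression, return its value."""
    globs = {} if globs is None else globs
    tree = ast.parse(src, mode="exec")
    last_expr = None
    if tree.body and isinstance(tree.body[-1], ast.Expr):
        last_expr = ast.Expression(tree.body.pop().value)   # detach trailing expression
    exec(compile(tree, "<cell>", "exec"), globs)
    if last_expr is not None:
        return eval(compile(last_expr, "<cell>", "eval"), globs)
    return None

lines = "x = 1\ny = x + 1\ny"
print(exec_with_result(lines))   # 2
```

The type comes along for free (`type(result)`), since the actual object is returned rather than its printed form.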
75,654,676
8,194,364
How to get when lambda was last modified using boto3 and python?
<p>I want to use boto3 in a python script to get when a lambda was last updated. When I navigate to the lambda dashboard, I am able to see the last updated time as seen in my attachment.<a href="https://i.sstatic.net/x4XvP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/x4XvP.png" alt="last updated text" /></a></p> <p>Can I get the last modified value using boto3?</p>
<python><aws-lambda><boto3>
2023-03-06 18:46:42
2
359
AJ Goudel
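The value shown in the console is exposed as <code>LastModified</code> in the function configuration, e.g. via <code>get_function</code>. A sketch of the call (not run here — it assumes configured AWS credentials) plus a testable parser for the ISO-8601 string Lambda returns:

```python
from datetime import datetime

def parse_last_modified(value):
    # Lambda's LastModified format, e.g. "2023-03-06T18:46:42.000+0000"
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%S.%f%z")

# the actual boto3 call would look like:
# import boto3
# cfg = boto3.client("lambda").get_function(FunctionName="my-fn")["Configuration"]
# lm = parse_last_modified(cfg["LastModified"])

lm = parse_last_modified("2023-03-06T18:46:42.000+0000")
print(lm.year, lm.hour)   # 2023 18
```

`get_function_configuration` returns the same `LastModified` field if the full function payload isn't needed.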
75,654,624
19,079,397
What is the fastest way to convert pyspark column into a python list?
<p>I have a large pyspark data frame but used a small data frame like the one below to test the performance. I know three ways of converting a pyspark column into a list, but none of them are as fast as how Spark jobs run. What is the best and fastest way of creating a Python list from a pyspark data frame column?</p> <pre><code>Dataframe:- from pyspark.sql import Row df = spark.createDataFrame([ Row(a=1, b=4., c='GFG1'), Row(a=2, b=8., c='GFG2'), Row(a=4, b=5., c='GFG3') ]) method 1:- list1=[] for row_iterator in df.collect(): list1.append(row_iterator['b']) output:- [4.0, 8.0, 5.0] time taken:- 0.20 seconds method2:- list2 = df.rdd.map(lambda x: x.b).collect() list2 output:- [4.0, 8.0, 5.0] time taken:- 0.30 seconds method3:- list3 = df.select(df.b).rdd.flatMap(lambda x: x).collect() list3 output:- [4.0, 8.0, 5.0] time taken:- 0.34 seconds </code></pre> <p>Though iterating over rows is fastest here, I am looking for a better solution that runs faster than this, because my pyspark dataframe is huge (10k rows of data). Is there a faster way of converting a pyspark column into a Python list?</p>
<python><list><apache-spark><pyspark><rdd>
2023-03-06 18:39:41
2
615
data en
75,654,608
226,473
pygsheets does not modify Date Format
<p>I'm trying to convert a date in a Google Spreadsheet column from <code>3/3/2023</code> to <code>Friday March 3, 2023</code> with pygsheets.</p> <p>The following code:</p> <pre><code>client = pygsheets.authorize(service_account_file=&quot;credentials.json&quot;) test_sheet = client.open(titles[0]) test_worksheets = test_sheet.worksheets() active = test_worksheets[0] model_cell = pygsheets.Cell(&quot;A1&quot;) model_cell.set_text_format(&quot;fontSize&quot;,18) model_cell.set_vertical_alignment(pygsheets.VerticalAlignment.MIDDLE) model_cell.set_number_format(pygsheets.FormatType.DATE, 'dddd+ mmmm yyy') pygsheets.DataRange('A2', 'A', worksheet=active).apply_format(model_cell) </code></pre> <p>successfully changes the fontSize and VerticalAlignment attributes but does not change the date format. What is wrong with the code?</p> <p>UPDATE: It seems there are two things at play here. First, my date format string isn't what I wanted and so it's possible that gsheets wasn't able to interpret it and just ignored. I'm skeptical that's the case but it's possible. The format string I need to use is <code>'dddd&quot;, &quot;mmmm&quot; &quot;d&quot;, &quot;yyyy'</code>.</p> <p>Second and more importantly, I noticed that after I move the date values from one column to another there is a single quote (or possibly a tick mark) at the start of the date string. I remove this quote and the format changes.</p> <p>Seems like pygsheets is adding a tick mark at the beginning of non-numerical dates (for example 3/3/2023) i'm guessing to preserve the original formatting. But when you call the batch updater it doesn't remove the tick mark.</p> <p>I'm not entirely sure how to get around this but at least I know what I need to get around now.</p>
<python><pygsheets>
2023-03-06 18:38:13
2
21,308
Ramy
75,654,562
591,939
How to normalise keywords extracted with Named Entity Recognition
<p>I'm trying to employ NER to extract keywords (tags) from job postings. This can be anything, such as <code>React, AWS, Team Building, Marketing</code>.</p> <p>After training a custom model in SpaCy I'm presented with a problem - extracted tags are not unified/normalized across all of the data.</p> <p>For example, if a job posting is about <code>frontend development</code>, NER can extract the keyword <code>frontend</code> in many ways (depending on the job description), for example: <code>Frontend</code>, <code>Front End</code>, <code>Front-End</code>, <code>front-end</code> and so on.</p> <p>Is there a reliable way to normalise/unify the extracted keywords? All the keywords go directly into the database and, with all the variants of each keyword, I would end up with too much noise.</p> <p>One way to tackle the problem would be to create mappings such as:</p> <pre><code>&quot;Frontend&quot;: [&quot;Front End&quot;, &quot;Front-End&quot;, &quot;front-end&quot;] </code></pre> <p>but that approach seems not too bright. Perhaps within SpaCy itself there's an option to normalise tags?</p>
<python><nlp><spacy><named-entity-recognition>
2023-03-06 18:32:14
2
11,904
Pono
75,654,359
271,811
Python won't load C-language DLL in Windows 11 (that loads in Windows 10)
<p>I'm running Python 3.8.5 on Windows 10 Pro, Version 10.0.19045 Build 19045. I have an application where I have moved part of the processing to a DLL, written in C using Microsoft Visual Studio C++ 2019. I'm loading my library DLL and calling functions within it with no problems.</p> <p>My colleagues have now started testing, and report the DLL is 'unable to load' under Windows 11. Any ideas why?</p> <p>I have been able to duplicate the bug with the following minimal application:</p> <p>test.py</p> <pre><code>import ctypes import _ctypes dll = ctypes.WinDLL(&quot;./test_dll.dll&quot;) dll.RunTest.restype = ctypes.c_int dll.RunTest.argtypes = [ctypes.c_int] print(dll.RunTest(0)) _ctypes.FreeLibrary(dll._handle) </code></pre> <p>test.h:</p> <pre><code>#ifndef TEST_H #define TEST_H #if defined(_MSC_VER) &amp;&amp; _MSC_VER&gt;=1020 #pragma once #endif #include &lt;stdint.h&gt; #include &lt;windows.h&gt; #if !defined(MYDLL) # if defined(TESTDLL_EXPORTS) # define MYDLL extern &quot;C&quot; __declspec(dllexport) # else # define MYDLL extern &quot;C&quot; __declspec(dllimport) # endif #endif #ifdef __cplusplus extern &quot;C&quot; { #endif MYDLL int WINAPI RunTest(int seed); #ifdef __cplusplus } #endif #endif </code></pre> <p>test.cpp:</p> <pre><code>#include &quot;pch.h&quot; #include &quot;test.h&quot; MYDLL int WINAPI RunTest(int seed) { return 42; } </code></pre> <p>This prints 42 on Windows 10 and is 'unable to load' on Windows 11</p> <p>Thanks in advance!</p>
<python><dll><windows-11>
2023-03-06 18:09:28
1
2,131
Max
75,654,354
14,088,919
Plotting 100% Stacked bar plot from many columns
<p>I've been trying to solve this for a while now and tried virtually all related answers with no luck.</p> <p>The original dataset I have looks like this:</p> <pre><code>| ID | Category | p0 | p1 | p2 | p3 | Target | | 01 | A | 0.92 | 0.01 | 0.05 | 0.02 | 0 | | 02 | A | 0.90 | 0.05 | 0.01 | 0.04 | 1 | | 03 | B | 0.88 | 0.08 | 0.03 | 0.01 | 0 | | 04 | B | 0.80 | 0.1 | 0.04 | 0.06 | 0 | | 05 | B | 0.90 | 0.01 | 0.03 | 0.06 | 3 | | 06 | C | 0.92 | 0.02 | 0.03 | 0.03 | 2 | | 07 | C | 0.78 | 0.12 | 0.04 | 0.06 | 1 | | 08 | C | 0.65 | 0.08 | 0.12 | 0.15 | 2 | | 09 | C | 0.81 | 0.02 | 0.07 | 0.10 | 3 | </code></pre> <p>where <code>target</code> is the real value and <code>p0</code>,<code>p1</code>,<code>p2</code>,<code>p3</code> are the probabilities of each class.</p> <p>My final goal is to make a stacked bar plot with <code>Category</code> on the x-axis and two groups of bars for each one, one with the <code>Target</code> percentages and one with the probabilities percentages. I have achieved the first part so far, which looks like this (in reality I have more targets and categories, which is why it looks a bit off). <a href="https://i.sstatic.net/rHYtY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rHYtY.png" alt="enter image description here" /></a></p> <p>Basically, for the second set of bars, I'm stuck on the 100% stack part. I can group the probabilities by Category with this code, but I can't figure out how to express it as a percentage of the row total</p> <pre><code>df[['p0','p1','p2','p3','category','count']].groupby('category').sum() </code></pre> <p>where <code>count</code> is just a column that equals 1 in all rows.</p> <p>Suggestions covering only the calculation, without the plot, are welcome too!</p>
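For the calculation part only, a sketch of the row-total normalisation: divide each grouped row by its own sum so every category adds up to 100. The numbers and the shortened column set (`p0`, `p1`) are made up for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    "category": ["A", "A", "B"],
    "p0": [0.9, 0.8, 0.7],
    "p1": [0.1, 0.2, 0.3],
})

grouped = df.groupby("category")[["p0", "p1"]].sum()
# Divide each row by its own total so every category sums to 100
pct = grouped.div(grouped.sum(axis=1), axis=0) * 100
print(pct)
```

`pct` can then be fed to a stacked bar call (e.g. `pct.plot.bar(stacked=True)`); the `count` helper column is not needed for this.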
<python><pandas><matplotlib><group-by><bar-chart>
2023-03-06 18:09:02
1
612
amestrian
75,654,328
2,758,414
Are my requests being processed asynchronously?
<p>I have a scraping script which requests about 30 URLs synchronously. It takes 95s to complete all requests.</p> <p>I've rewritten the script using asynchronous libraries <code>asyncio</code> and <code>aiohttp</code> in order to improve speed.</p> <p>Here are the performance statistics for 32 requests:</p> <ul> <li>synchronous requests total time = 95 seconds</li> <li>asynchronous requests total time = 60 seconds</li> <li>single, manual hit on the URL from the browser - about 1 second</li> </ul> <p>I think a speed improvement of only 50% is very poor, so I suspect that my requests are not really firing asynchronously (I'm new to <code>asyncio</code>).</p> <p>In fact, since the single request takes only 1 second and I make only 32 requests I was expecting my total asynchronous request time to be less than 1.5 seconds. 32 requests is very few so I assume that they all would start almost at the same time, and so waiting for the last to complete shouldn't take more than 1.5 seconds.</p> <p>I would appreciate any hints.</p> <pre class="lang-py prettyprint-override"><code># Single asynchronous request async def async_get_course(session, url_course): async with session.get(url_course) as res: response = await res.content.read() return response # Main coroutine async def example(courses, root): starting_time = time.time() actions = [] data = [] data2 = [] async with aiohttp.ClientSession() as session: for course in courses: url_course = f&quot;{root}{course['course_link']}&quot; data.append(url_course) actions.append(asyncio.ensure_future(async_get_course(session, url_course))) results = await asyncio.gather(*actions) for idx, res in enumerate(results): data2.append(get_info_from_course((courses[idx], data[idx], res))) total_time = time.time() - starting_time print('total_time', total_time) return data2 # Run the coroutine courses_final = asyncio.run(example(courses, root)) </code></pre>
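One way to check whether the `gather` pattern used above actually overlaps work is to time it against coroutines that only sleep. If the total is close to a single delay rather than the sum of all delays, the scheduling itself is concurrent, and the remaining 60s must come from the server, connection limits, or the synchronous `get_info_from_course` post-processing. Self-contained sketch (no network, no aiohttp):

```python
import asyncio
import time

async def fake_request(delay):
    # Stand-in for session.get(): only waits, no network
    await asyncio.sleep(delay)
    return delay

async def main(n, delay):
    start = time.perf_counter()
    results = await asyncio.gather(*(fake_request(delay) for _ in range(n)))
    return time.perf_counter() - start, results

elapsed, results = asyncio.run(main(10, 0.05))
# If the coroutines truly overlap, elapsed is close to one delay, not n * delay
print(f"{elapsed:.3f}s for {len(results)} tasks")
```

Running the same measurement with real `aiohttp` calls would show whether the bottleneck is the event loop or the server's response times.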
<python><asynchronous><concurrency><python-asyncio><aiohttp>
2023-03-06 18:06:08
1
2,747
LLaP
75,654,308
17,487,457
Python create directories named from list elements
<p>I have the following classifiers:</p> <pre><code>DT = sklearn.tree.DecisionTreeClassifier() LR = sklearn.linear_model.LogisticRegression() SVC = sklearn.svm.SVC() </code></pre> <p>So I created this list of classifiers:</p> <pre><code>classifiers = [DT, LR, SVC] </code></pre> <p>I want to create directories, one for each classifier in the list, so that I have these directories:</p> <pre><code>working_dir ├── DT ├── LR ├── SVC </code></pre> <p>I want to do this along these lines:</p> <pre><code>for clf in classifiers: # create a dir named clf (e.g. DT) if not exists </code></pre> <p>How can this be done?</p>
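One wrinkle: a bare list of estimator objects carries no record of the variable names `DT`, `LR`, `SVC`, so a dict that pairs each name with its object is the simplest route. A sketch (`DummyClf` stands in for the sklearn estimators, and a temporary directory stands in for the real `working_dir`):

```python
import os
import tempfile

class DummyClf:
    # Placeholder for DecisionTreeClassifier / LogisticRegression / SVC
    pass

classifiers = {"DT": DummyClf(), "LR": DummyClf(), "SVC": DummyClf()}

working_dir = tempfile.mkdtemp()  # stand-in for the real working directory
for name in classifiers:
    # exist_ok=True makes this a no-op if the directory already exists
    os.makedirs(os.path.join(working_dir, name), exist_ok=True)

print(sorted(os.listdir(working_dir)))  # ['DT', 'LR', 'SVC']
```

With the real sklearn objects, `type(clf).__name__` could also derive a name from a plain list, though it would give `SVC` but `DecisionTreeClassifier` rather than `DT`.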
<python><list><directory><python-os>
2023-03-06 18:04:35
1
305
Amina Umar
75,654,165
12,783,363
How to bypass argparse multi-word argument quotations?
<p>I have a bubble.py script with an argparse set-up as below. There is only one positional argument <code>'text'</code> and a few other optional arguments (omitted for simplicity).</p> <p>bubble.py</p> <pre><code>import argparse parser = argparse.ArgumentParser(description=&quot;Create pixel art bubble speech image&quot;) parser.add_argument('text', type=str, help=&quot;Text inside the bubble speech&quot;) args = parser.parse_args() </code></pre> <p>When I run the command <code>bubble.py hello</code> the program works just fine. When I run the command <code>bubble.py &quot;hello world&quot;</code> the program works fine as well. But I want to run the command <code>bubble.py hello world</code> (without the quotations). Is it possible? If so, how?</p> <p>I'm also wondering whether this would work on any operating system.</p>
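`nargs='+'` is the usual way to let a positional argument swallow every remaining word; the words arrive as a list and can be joined back into one string. A sketch (note `args.text` becomes a `list`, not a `str`, and words starting with `-` would still be parsed as options):

```python
import argparse

parser = argparse.ArgumentParser(description="Create pixel art bubble speech image")
# nargs='+' collects one or more remaining positional words into a list
parser.add_argument("text", nargs="+", help="Text inside the bubble speech")

args = parser.parse_args(["hello", "world"])  # simulates: bubble.py hello world
text = " ".join(args.text)
print(text)  # hello world
```

Since splitting arguments into words is done by the shell before Python sees them, this behaves the same on any operating system.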
<python><argparse>
2023-03-06 17:49:19
0
916
Jobo Fernandez
75,654,109
6,485,881
OverflowError when subtracting datetime columns in pandas
<p>I'm trying to check if the difference between two Timestamp columns in Pandas is greater than <code>n</code> seconds. I don't actually care about the difference. I just want to know if it's greater than <code>n</code> seconds, and I could also limit <code>n</code> to a range between, let's say, 1 to 60.</p> <p><strong>Sounds easy, right?</strong></p> <p><a href="https://stackoverflow.com/questions/22923775/calculate-time-difference-between-two-pandas-columns-in-hours-and-minutes">This question</a> has many valuable answers outlining how to do that.</p> <p><strong>The problem:</strong> For reasons outside of my control, the difference between the two timestamps may be <em>quite</em> large, and that's why I'm running into an integer overflow.</p> <p>Here's a <a href="https://stackoverflow.com/help/minimal-reproducible-example">MCVE</a>:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import pandas.testing dataframe = pd.DataFrame( { &quot;historic&quot;: [pd.Timestamp(&quot;1900-01-01T00:00:00+00:00&quot;)], &quot;futuristic&quot;: [pd.Timestamp(&quot;2200-01-01T00:00:00+00:00&quot;)], } ) # Goal: Figure out if the difference between # futuristic and historic is &gt; n seconds, i.e.: # futuristic - historic &gt; n number_of_seconds = 1 dataframe[&quot;diff_greater_n&quot;] = ( dataframe[&quot;futuristic&quot;] - dataframe[&quot;historic&quot;] ) / pd.Timedelta(seconds=1) &gt; number_of_seconds expected_dataframe = pd.DataFrame( { &quot;historic&quot;: [pd.Timestamp(&quot;1900-01-01T00:00:00+00:00&quot;)], &quot;futuristic&quot;: [pd.Timestamp(&quot;2200-01-01T00:00:00+00:00&quot;)], &quot;diff_greater_n&quot;: [True], } ) pandas.testing.assert_frame_equal(dataframe, expected_dataframe) </code></pre> <p><strong>Error</strong>:</p> <blockquote> <p>OverflowError: Overflow in int64 addition</p> </blockquote> <p>A bit more context:</p> <ul> <li>The timestamps need to have second precision, i.e. I don't care about any milliseconds</li> <li>This is one of multiple or-combined checks on the dataframe</li> <li>The dataframe may have a few million rows</li> <li>I'm quite happy that I get to finally ask about an Overflow error on stackoverflow</li> </ul>
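The overflow comes from the nanosecond-resolution subtraction itself: 300 years expressed in nanoseconds exceeds the int64 range. Since only second precision is needed, one workaround sketch (an assumption, not the only fix) is to floor both columns to whole epoch seconds *before* subtracting, so the difference stays far from the int64 limit:

```python
import pandas as pd

df = pd.DataFrame({
    "historic": [pd.Timestamp("1900-01-01T00:00:00+00:00")],
    "futuristic": [pd.Timestamp("2200-01-01T00:00:00+00:00")],
})

# Convert to epoch nanoseconds, then floor-divide to whole seconds
# before subtracting, avoiding the ns-resolution Timedelta entirely
historic_s = df["historic"].astype("int64") // 10**9
futuristic_s = df["futuristic"].astype("int64") // 10**9
df["diff_greater_n"] = (futuristic_s - historic_s) > 1

print(df["diff_greater_n"].iloc[0])  # True
```

The integer subtraction vectorises fine over millions of rows, and the boolean column can be or-combined with the other checks as before.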
<python><pandas><dataframe><integer-overflow>
2023-03-06 17:42:46
1
13,322
Maurice
75,654,014
10,620,003
Build two arrays from another array based on the values and a window size
<p>I have an array with a thousand rows and columns. The array contains values equal to 1, greater than 1, and less than 1. I want to build two arrays from it in the following way:</p> <p>The most important part is the values which are less than 1. Then, based on a window size (here 7), the value greater than 1 before the values less than 1 should change to 1, and all of the remaining values are zero. For example, if a row is <code>[1, 1, 1.2, 0.5, 1.9, 1, 1]</code>, the first array that I want is: <code>[0, 0, 1, 0,0,0,0]</code> and the second array that I want is related to the values greater than 1 after the values less than 1. For this example, I want <code>[0, 0, 0, 0, 1, 0,0]</code>.</p> <p>Here is a simple example. The array I have:</p> <pre><code>a = np.array([[1,1,1.01, 0.5, 0.5, 1.02, 1, 1,1,1.21, 0.5, 0.5, 1.22, 1.3], [1,1.4,1.01, 0.5, 0.5, 1.02, 1, 1,1,1.51, 0.5, 0.7, 1.22, 1]]) a= array([[1. , 1. , 1.01, 0.5 , 0.5 , 1.02, 1. , 1. , 1. , 1.21, 0.5 ,0.5 , 1.22, 1.3 ], [1. , 1.4 , 1.01, 0.5 , 0.5 , 1.02, 1. , 1. , 1. , 1.51, 0.5 ,0.7 , 1.22, 1. ]]) </code></pre> <p>The two arrays I want:</p> <pre><code>array([[0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], [0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]]) </code></pre> <p>and</p> <pre><code>array([[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1], [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1]]) </code></pre> <p>Could you please help me with this? Thank you</p>
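The window-size-7 refinement and some entries of the expected output are hard to reconcile, so this sketch covers only the unambiguous core under a stated assumption: mark the single >1 value immediately before / after each run of sub-1 values, on a tiny example of my own rather than the arrays above:

```python
import numpy as np

# Assumption: mark only the >1 value directly adjacent to each run of sub-1
# values, ignoring the window-size refinement from the question
a = np.array([[1.2, 0.5, 0.5, 1.3, 1.0]])
before = np.zeros(a.shape, dtype=int)
after = np.zeros(a.shape, dtype=int)
rows, cols = a.shape
for r in range(rows):
    for c in range(cols):
        if a[r, c] < 1:
            if c > 0 and a[r, c - 1] > 1:
                before[r, c - 1] = 1
            if c + 1 < cols and a[r, c + 1] > 1:
                after[r, c + 1] = 1

print(before.tolist())  # [[1, 0, 0, 0, 0]]
print(after.tolist())   # [[0, 0, 0, 1, 0]]
```

The loop is O(rows × cols) and could be vectorised with shifted boolean masks (`(a < 1)` against `np.roll`-style neighbours) once the window rule is pinned down.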
<python><numpy>
2023-03-06 17:32:16
1
730
Sadcow
75,653,925
3,657,417
Pandas check if any of previous n rows met criteria
<p>Say I have this data set.</p> <pre><code>| | color | down | top | | -- | ------ | ---- | --- | | 0 | | 1 | 5 | | 1 | | 2 | 5 | | 2 | blue | 7 | 11 | | 3 | | 5 | 8 | | 4 | | 9 | 10 | | 5 | | 9 | 10 | | 6 | orange | 5 | 9 | | 7 | | 4 | 7 | | 8 | | 5 | 10 | | 9 | | 5 | 6 | | 10 | | 3 | 7 | </code></pre> <p>I want to flag rows where any of the 3 previous rows are either blue or orange AND where the current top is between the down and top of that row.</p> <p>The outcome of this operation would be a dataset:</p> <pre><code>| | color | down | top | blue_condition | orange_condition | | -- | ------ | ---- | --- | -------------- | ---------------- | | 0 | | 1 | 5 | | | | 1 | | 2 | 5 | | | | 2 | blue | 7 | 11 | | | | 3 | | 5 | 8 | 1 | | | 4 | | 9 | 10 | 1 | | | 5 | | 9 | 10 | 1 | | | 6 | orange | 5 | 9 | | | | 7 | | 4 | 7 | | 1 | | 8 | | 5 | 10 | | | | 9 | | 5 | 6 | | 1 | | 10 | | 3 | 7 | | | </code></pre> <p>I have been fiddling around with combinations of <code>.tail()</code>, <code>.filter()</code> and <code>.assign()</code>. But I am a bit stuck to be honest.</p> <pre><code>df = pd.DataFrame({&quot;color&quot;: [None, None, 'blue', None, None, None, 'orange', None, None, None, None], 'down': [1, 2, 7, 5, 9, 9, 5, 4, 5, 5, 3], 'top': [5, 5, 11, 8, 10, 10, 9, 7, 10, 6, 7]}) # get latest 3 records df['blue_condition'] = df.tail(3) # assign using lambda df['blue_condition'].assign(blue_condition=lambda x: (x.tail(3).query(top &lt; top.tail(3)))) </code></pre> <p>I have looked into other questions but they don't take the current row as a reference, from what I can tell. <a href="https://stackoverflow.com/questions/74734980/pandas-return-true-if-condition-true-in-any-of-previous-n-rows">https://stackoverflow.com/questions/74734980/pandas-return-true-if-condition-true-in-any-of-previous-n-rows</a></p> <p><a href="https://stackoverflow.com/questions/56573008/compare-the-previous-n-rows-to-the-current-row-in-a-pandas-column">https://stackoverflow.com/questions/56573008/compare-the-previous-n-rows-to-the-current-row-in-a-pandas-column</a></p>
<python><pandas><dataframe>
2023-03-06 17:22:36
1
773
Florian
75,653,850
105,678
Python mypy type checking not working as expected
<p>I'm new to Python, and am a huge fan of static type checkers. I have some code that handles file uploads with the Bottle framework. See below.</p> <pre><code>def transcribe_upload(upload: FileUpload) -&gt; Alternative: audio:AudioSource = upload_source(upload) ... def upload_source(upload:FileUpload) -&gt; AudioSource: ... </code></pre> <p>I made a really simple mistake and passed <code>upload.file</code> (the file-like object) to <code>upload_source</code> instead of the entire <code>FileUpload</code> object.</p> <pre><code>def transcribe_upload(upload: FileUpload) -&gt; Alternative: audio:AudioSource = upload_source(upload.file) # This is incorrect! </code></pre> <p>The typechecker didn't catch it. In fact, it doesn't catch ANY incorrect parameter passing to <code>upload_source</code>:</p> <pre><code>def transcribe_upload(upload: FileUpload) -&gt; Alternative: audio:AudioSource = upload_source(4) # Why isn't mypy giving me an error? audio:AudioSource = upload_source(upload.asdf) # Why isn't mypy giving me an error? </code></pre> <p>What's going on? I tested some basic functions separately, and the typechecker caught it when I tried to pass a number to a function that wanted a string, so checking itself works. What am I missing here?</p> <p><strong>EDIT</strong></p> <p>@kojiro suggested that <code>FileUpload</code> is equivalent to <code>Any</code>. I think that may be correct. Here is the source for <a href="https://github.com/bottlepy/bottle/blob/master/bottle.py#L2730" rel="nofollow noreferrer"><code>FileUpload</code></a>. It's imported like this: <code>from bottle import FileUpload</code>.</p> <p>If that's the case, why is it letting me use <code>FileUpload</code> as if it were a type? (If I mess up the name, like <code>FileUpld</code>, it does give me an error.)</p> <p>More importantly, how do I get real types? I suppose the bottle authors have to add them?</p>
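If bottle ships no inline annotations and no stub files, mypy treats everything imported from it as `Any` — and a parameter typed `Any` silently accepts every argument and every attribute access, which matches all three non-errors above. A minimal demonstration of that behaviour, with nothing bottle-specific in it:

```python
from typing import Any

def upload_source(upload: Any) -> int:
    # With an Any-typed parameter, mypy accepts literally any argument
    # and any attribute access on it without complaint
    return 1

upload_source(4)           # no mypy error: Any matches everything
upload_source("nonsense")  # no mypy error either
```

Mypy's `--disallow-any-unimported` flag (and the stricter `--strict` family) can at least surface where `Any` leaks in from untyped imports; writing a small local `.pyi` stub for the parts of bottle you use is one way to get real checking.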
<python><types><mypy><bottle>
2023-03-06 17:15:29
1
16,149
Sean Clark Hess
75,653,720
9,182,743
Set plotly bargap to 0
<p>I am unable to set the plotly bargap to 0 (I am using datetime for the x-axis).</p> <p>Here is the code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np import plotly print (plotly.__version__) # Create a list of dates for the first day of each month in a year df = pd.DataFrame({ &quot;month&quot;: pd.date_range(start='2021-01-01', end='2022-08-01', freq='MS'), &quot;count&quot;: np.random.randint(0, 11, size=20) }) import plotly.express as px fig = px.bar(df, x='month', y='count') fig.update_layout(width=1200, height=400, bargap=0) fig.update_layout(xaxis_rangeslider_visible=False) fig.update_xaxes( dtick=&quot;M1&quot;, tickformat=&quot;%b\n%Y&quot;) fig.show() </code></pre> <p><a href="https://i.sstatic.net/i0baA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/i0baA.png" alt="enter image description here" /></a></p>
<python><plotly>
2023-03-06 17:01:08
1
1,168
Leo
75,653,538
2,782,382
Polars Row object to Dictionary
<p>I'm trying to iterate through the rows of a Polars DataFrame, where the iterator returns a dictionary for each row.</p> <p>The <a href="https://pola-rs.github.io/polars/py-polars/html/reference/dataframe/api/polars.DataFrame.iter_rows.html#polars.DataFrame.iter_rows" rel="nofollow noreferrer">documentation</a> indicates that iter_rows(named=True) is what I want. However, I get an &quot;AttributeError: 'DataFrame' object has no attribute 'iter_rows'&quot; error when I try to use it.</p> <p>iterrows(named=True) without the underscore does seem to mostly work, but it is returning a Polars Row object rather than a dictionary. I can access the values using either integer indexes or dot notation with the column name, but it's not clear how I can get the column name / key.</p> <pre><code>import polars as pl df = pl.DataFrame({ 'a': [1, 2, 3], 'b': [4, 5, 6] }) for row in df.iterrows(named=True): print(type(row)) print(row[0]) print(row.a) </code></pre> <p>Is it possible to either get the column name / key values from the Row object or otherwise change it to a Python dictionary?</p>
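The AttributeError suggests an installed polars version that predates the `iterrows` → `iter_rows` rename, so the docs and the installed API disagree. Until an upgrade, the same dictionaries can be built by zipping the column names with each row tuple. Pure-Python sketch with stand-ins for `df.columns` and the iterated tuples:

```python
# Stand-ins for df.columns and the tuples yielded by df.iterrows()
columns = ["a", "b"]
rows = [(1, 4), (2, 5), (3, 6)]

dict_rows = [dict(zip(columns, row)) for row in rows]
print(dict_rows[0])  # {'a': 1, 'b': 4}
```

With polars itself this would read `[dict(zip(df.columns, row)) for row in df.iterrows()]`; after upgrading, `df.iter_rows(named=True)` yields the dictionaries directly.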
<python><python-polars>
2023-03-06 16:44:26
1
1,353
Chris
75,653,530
7,058,823
Pandas transform table to occurrence
<p>Hi, I have a table similar to the one below:</p> <pre><code> Fruits America Europe Asia 0 Apple Good N/A Bad 1 Orange N/A Bad Good </code></pre> <p>and I would like to transform it into something like this:</p> <pre><code> Fruits Region Good Bad N/A 0 Apple America 1 0 0 1 Apple Europe 0 0 1 2 Apple Asia 0 1 0 3 Orange America 0 0 1 4 Orange Europe 0 1 0 5 Orange Asia 1 0 0 </code></pre> <p>I have tried the stack function but it doesn't work as expected.</p> <p>Thank you</p>
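One possible approach (a sketch, not the only way): `melt` the region columns into long form, then turn the ratings into indicator columns with `get_dummies`. Note the row order comes out grouped by region rather than by fruit, so a final `sort_values(["Fruits", "Region"])` may be wanted:

```python
import pandas as pd

df = pd.DataFrame({
    "Fruits": ["Apple", "Orange"],
    "America": ["Good", None],
    "Europe": [None, "Bad"],
    "Asia": ["Bad", "Good"],
})

# Wide -> long: one row per (fruit, region) pair
long = df.melt(id_vars="Fruits", var_name="Region", value_name="Rating")
long["Rating"] = long["Rating"].fillna("N/A")
# One 0/1 indicator column per rating value
result = pd.concat(
    [long[["Fruits", "Region"]], pd.get_dummies(long["Rating"]).astype(int)],
    axis=1,
)
print(result)
```

`pd.crosstab` on the long frame would give per-group counts instead, if occurrences ever need to be aggregated rather than listed row by row.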
<python><pandas>
2023-03-06 16:43:42
1
877
Platalea Minor
75,653,463
1,854,159
Storing H2o models/MOJO outside the file system
<p>I'm investigating the possibility of storing MOJOs in cloud storage blobs and/or a database. I have proof-of-concept code working that saves the MOJO to a file, then loads the file and stores it to the target (and vice-versa for loading), but I'd like to know if there's any way to skip the file step. I've looked into Python's BytesIO, but since the h2o MOJO APIs all require a file path I don't think I can use it.</p>
<python><machine-learning><h2o>
2023-03-06 16:37:33
1
565
8forty
75,653,432
2,908,017
How do I create a tabbed control in a Python FMX GUI App?
<p>I'm making a Python FMX GUI App and I basically want three tabs on it like this: <a href="https://i.sstatic.net/PTO7m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PTO7m.png" alt="Delphi App with tabs" /></a></p> <p>I tried doing this:</p> <pre><code>self.TabControl1 = TabControl(self) self.TabControl1.Parent = self self.TabControl1.Align = &quot;Client&quot; self.TabControl1.Margins.Top = 50 self.TabControl1.Margins.Bottom = 50 self.TabControl1.Margins.Left = 50 self.TabControl1.Margins.Right = 50 self.TabControl1.Tabs.Add(&quot;Tab1&quot;) self.TabControl1.Tabs.Add(&quot;Tab2&quot;) self.TabControl1.Tabs.Add(&quot;Tab3&quot;) </code></pre> <p>The <code>self.TabControl1.Tabs.Add()</code> call fails with <code>AttributeError: Error in getting property &quot;Tabs&quot;. Error: Unknown attribute</code>.</p> <p>What is the correct way to add tabs to the <code>TabControl</code> component?</p>
<python><user-interface><tabs><firemonkey>
2023-03-06 16:35:02
1
4,263
Shaun Roselt
75,653,409
13,132,728
Conditionals and Rounding with streamlit's st.number_input()
<p>I have two <a href="https://docs.streamlit.io/library/api-reference/widgets/st.number_input" rel="nofollow noreferrer">st.number_inputs()</a> that I am trying to use:</p> <pre><code>input1 = st.number_input(&quot;x&quot;, value=5.0, step=0.5, format=&quot;%0.1f&quot;) input2 = st.number_input(&quot;y&quot;, value=-110) </code></pre> <p>They work great, except for a few specific cases. As for input1, I’d like for it to always be rounded to the nearest half number. The step works, but when a float such as 5.4 is given, the number is not rounded and the step behaves as normal and it goes from 5.4, 5.9, etc. when I’d like the 5.4 to be rounded to 5.5 and behave like 5.5, 6.0, etc… Is there a way to always round any given number to the nearest 0.5?</p> <p>As for input2, I’d like to start counting in the opposite direction once the number gets above -100. i.e., instead of -102, -101, -100, -99, I’d like for it to go -102, -101, -100, +101, +102, … etc. Is it possible to add conditionals like this to an st.number_input()? Furthermore, if any number between -100 and +100 is given, I’d like to round it (similar to input1) to either -100 or +100, whichever is closer.</p> <p>So basically, my question boils down to two main topics: is it possible to round a number given to st.number_input() and add certain conditionals?</p> <p>I've tried using <code>.format()</code> but the number input is a string so that doesn't work. As for the rounding, is there a printf-style format for rounding to the nearest 0.5 that I can add to the format argument? No idea where to start with what I'm trying to achieve for <code>input2</code>, though.</p>
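Since `st.number_input` returns a plain float/int, both corrections can live in small helpers applied to the returned value (pushing the corrected value back *into* the widget would additionally need streamlit's `key`/session-state mechanism). A sketch, where snapping exactly-zero to +100 is my assumption for the tie case:

```python
def round_half(x):
    # Nearest 0.5; note Python rounds exact halves to even (5.25 -> 5.0, 5.75 -> 6.0)
    return round(x * 2) / 2

def snap_offset(x):
    # Assumption: values strictly between -100 and 100 jump to the nearer boundary
    if -100 < x < 100:
        return -100 if x < 0 else 100
    return x

print(round_half(5.4), snap_offset(-99), snap_offset(42))  # 5.5 -100 100
```

There is no printf-style format that rounds to 0.5 (formats only control displayed decimal places), so post-processing like this is the usual route.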
<python><formatting><streamlit>
2023-03-06 16:32:42
1
1,645
bismo