Dataset columns: QuestionId (int64, 74.8M–79.8M) · UserId (int64, 56–29.4M) · QuestionTitle (string, 15–150 chars) · QuestionBody (string, 40–40.3k chars) · Tags (string, 8–101 chars) · CreationDate (string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) · AnswerCount (int64, 0–44) · UserExpertiseLevel (int64, 301–888k) · UserDisplayName (string, 3–30 chars)
75,722,182
12,297,666
How to sum the values of a series into a dataframe based on a mask numpy array
<p>I have the following variables:</p> <pre><code>import pandas as pd
import numpy as np

y = pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9]], columns=['A', 'B', 'C'])
mask_y = np.array([[True, False, False], [False, False, True], [False, False, False]])
dist_y = pd.Series([0.222, 0.333, 0.444])
</code></pre> <p>I need to add the values of <code>dist_y</code> into <code>y</code>, using the <code>True</code> values of <code>mask_y</code> as the condition. I expect to get:</p> <pre><code>       A  B      C
0  1.222  2  3.000
1  4.000  5  6.333
2  7.000  8  9.000
</code></pre> <p>I have tried the following:</p> <pre><code>y.loc[:, mask_y] += dist_y.values[:, None]
</code></pre> <p>But this does not work. Any ideas how I can do this?</p>
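One vectorised approach that reproduces the expected output (a sketch, not necessarily the only way): broadcast the per-row values across the columns, zero them out where the mask is `False`, and add in a single step.

```python
import numpy as np
import pandas as pd

y = pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9]], columns=['A', 'B', 'C'])
mask_y = np.array([[True, False, False], [False, False, True], [False, False, False]])
dist_y = pd.Series([0.222, 0.333, 0.444])

# dist_y.values[:, None] has shape (3, 1), so it broadcasts row-wise against
# the (3, 3) mask; np.where keeps the increment only where the mask is True.
y += np.where(mask_y, dist_y.values[:, None], 0)
```

The original attempt fails because `y.loc[:, mask_y]` tries to use a 2-D mask as a column selector; adding a plain NumPy array of the frame's shape sidesteps the alignment problem entirely.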
<python><pandas><numpy>
2023-03-13 12:59:09
3
679
Murilo
75,721,741
2,131,621
Building an AMI using Packer with Python & Python modules (using pip) installed via powershell script
<p>Using Packer, I am trying to create a Windows AMI with Python and the cryptography module installed. Here is the installation command I'm using for Python:</p> <pre><code>Invoke-Expression &quot;python-3.6.8-amd64.exe /quiet InstallAllUsers=1 PrependPath=1 Include_test=0&quot;
</code></pre> <p>On its own that works fine. If I launch an EC2 instance from the resulting AMI, I can open PowerShell and execute <code>python --version</code> and it returns the Python version. This is to be expected since, according to the <a href="https://docs.python.org/3.8/using/windows.html#installing-without-ui" rel="nofollow noreferrer">Python documentation</a>, <code>PrependPath=1</code> will &quot;Add install and Scripts directories to PATH&quot;. In addition, however, I want to install the cryptography module, so I add the following to the install script:</p> <pre><code>Invoke-Expression &quot;python-3.6.8-amd64.exe /quiet InstallAllUsers=1 PrependPath=1 Include_test=0&quot;
pip install --upgrade pip
pip install cryptography
</code></pre> <p>Now Packer fails when it gets to the pip command, saying <code>The term 'pip' is not recognized as the name of a cmdlet, function, script file, or operable program.</code> (the <code>amazon-ebs.windows:</code> fragment in the logged message is Packer's builder-name prefix). I tried adding pip's location to the system path in several different ways but nothing helped. What <em>did</em> work (together with the addition to the system path) was adding a sleep after the Python install command; seemingly Packer/PowerShell doesn't wait for the Python installer to finish. So now my install script looks like this:</p> <pre><code>Invoke-Expression &quot;python-3.6.8-amd64.exe /quiet InstallAllUsers=1 PrependPath=1 Include_test=0&quot;
sleep 30
$env:Path += &quot;;C:\Program Files\Python36\Scripts\&quot;
pip install --upgrade pip
pip install cryptography
</code></pre> <p>Now Packer executes with no problem and creates the new AMI, but when I launch the resulting AMI and run <code>python --version</code> I get <code>'python' is not recognized as the name of a cmdlet, function, script file, or operable program.</code> Adding commands to the script to append the system path has not helped.</p> <p>Can anyone shed any light on this predicament?</p>
<python><amazon-web-services><powershell><amazon-ami><packer>
2023-03-13 12:14:56
1
535
Jay
75,721,736
3,561,433
Plot Angular Grid with filled cells based on Color Map
<p>I would like to do a transformation between two coordinate systems and also show filled cells accordingly, as shown below:</p> <p><a href="https://i.sstatic.net/sZfxr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sZfxr.png" alt="enter image description here" /></a></p> <p>I have been able to create the filled grid below the way I want, but I am not quite sure how to draw the angular transformed grid.</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

z_min = 10
z_max = 100
n_zcells = 10
z_array = np.linspace(z_min, z_max, n_zcells)

u_min = -9.5
u_max = 9.5
n_ucells = 15
u_array = np.linspace(u_min, u_max, n_ucells)

[u_grid, z_grid] = np.meshgrid(u_array, z_array)

# %% Represent Polar Grid in Cartesian Coordinates
f = 5   # Focal length in pixels
B = 10  # Baseline in cm

# u = fX/Z; Equation for Camera to Pixel Coordinates
x_grid = u_grid*z_grid/f

# %% Desired Grid Points
plt.figure()
plt.subplot(2,1,1)
plt.scatter(x_grid, z_grid)
plt.xlabel('Lateral Distance (cm)')
plt.ylabel('Depth (cm)')
plt.subplot(2,1,2)
plt.scatter(u_grid, z_grid)
plt.xlabel('Pixel Column')
plt.ylabel('Depth (cm)')
plt.show()

# %% Filled Grid Cell Colours
filled_colours = np.random.rand(n_zcells-1, n_ucells-1)  # No. of cells is 1 less than no. of grid points
plt.imshow(filled_colours, cmap=plt.get_cmap('gray'), origin='lower')
</code></pre> <p>Is there a proper way to fill the grid obtained in cartesian coordinates, which is angular, with the colours obtained for the rectangular grid? I tried looking at plt.fill() and plt.contour() but couldn't really get it right. Any help is appreciated.</p>
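One candidate worth noting: `pcolormesh` accepts full 2-D coordinate arrays, so each quad follows the warped (angular) grid rather than a rectilinear one. A minimal sketch, reusing the question's camera model (`f = 5`) and cell-colour array:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, just for this sketch
import matplotlib.pyplot as plt

z_array = np.linspace(10, 100, 10)
u_array = np.linspace(-9.5, 9.5, 15)
u_grid, z_grid = np.meshgrid(u_array, z_array)
x_grid = u_grid * z_grid / 5                  # same u = fX/Z model as above
filled_colours = np.random.rand(9, 14)        # one colour per cell

# With 2-D X and Y arrays, pcolormesh draws one quad per cell whose corners
# are the (possibly non-rectilinear) grid points, coloured by C.
mesh = plt.pcolormesh(x_grid, z_grid, filled_colours, cmap="gray", edgecolors="k")
```

The coordinate arrays are one larger than the colour array in each direction, exactly matching the "cells are one less than grid points" note in the question.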
<python><matplotlib><colors>
2023-03-13 12:14:24
1
522
Manish
75,721,623
8,176,763
Airflow cannot return io.StringIO from a simple DAG definition
<p>Given the following DAG definition:</p> <pre><code>import datetime
import io

import pendulum

from airflow.decorators import dag, task


@dag(
    dag_id=&quot;my_beauty&quot;,
    schedule_interval=&quot;0 0 * * *&quot;,
    start_date=pendulum.datetime(2023, 3, 13, tz=&quot;UTC&quot;),
    catchup=False,
    dagrun_timeout=datetime.timedelta(minutes=60),
)
def MyBeauty():
    @task
    def get_buffer():
        content = io.StringIO(&quot;some words here blabla&quot;)
        return content

    a = get_buffer()

dag = MyBeauty()
</code></pre> <p>Airflow errors out with:</p> <pre><code>Traceback (most recent call last):
  File &quot;/home/airflow/.local/bin/airflow&quot;, line 8, in &lt;module&gt;
    sys.exit(main())
  File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/__main__.py&quot;, line 39, in main
    args.func(args)
  File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/cli_parser.py&quot;, line 52, in command
    return func(*args, **kwargs)
  File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/cli.py&quot;, line 108, in wrapper
    return f(*args, **kwargs)
  File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/commands/task_command.py&quot;, line 575, in task_test
    ti.run(ignore_task_deps=True, ignore_ti_state=True, test_mode=True)
  File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py&quot;, line 75, in wrapper
    return func(*args, session=session, **kwargs)
  File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py&quot;, line 1670, in run
    mark_success=mark_success, test_mode=test_mode, job_id=job_id, pool=pool, session=session
  File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py&quot;, line 72, in wrapper
    return func(*args, **kwargs)
  File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py&quot;, line 1374, in _run_raw_task
    self._execute_task_with_callbacks(context, test_mode)
  File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py&quot;, line 1520, in _execute_task_with_callbacks
    result = self._execute_task(context, task_orig)
  File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py&quot;, line 1588, in _execute_task
    self.xcom_push(key=XCOM_RETURN_KEY, value=xcom_value, session=session)
  File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py&quot;, line 72, in wrapper
    return func(*args, **kwargs)
  File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py&quot;, line 2297, in xcom_push
    session=session,
  File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py&quot;, line 72, in wrapper
    return func(*args, **kwargs)
  File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/models/xcom.py&quot;, line 240, in set
    map_index=map_index,
  File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/models/xcom.py&quot;, line 627, in serialize_value
    return json.dumps(value, cls=XComEncoder).encode(&quot;UTF-8&quot;)
  File &quot;/usr/local/lib/python3.7/json/__init__.py&quot;, line 238, in dumps
    **kw).encode(obj)
  File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/json.py&quot;, line 176, in encode
    return super().encode(o)
  File &quot;/usr/local/lib/python3.7/json/encoder.py&quot;, line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File &quot;/usr/local/lib/python3.7/json/encoder.py&quot;, line 257, in iterencode
    return _iterencode(o, 0)
  File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/json.py&quot;, line 153, in default
    CLASSNAME: o.__module__ + &quot;.&quot; + o.__class__.__qualname__,
AttributeError: '_io.StringIO' object has no attribute '__module__'
</code></pre> <p>I'm using the official Airflow Docker image, and this is the setup info:</p> <pre><code>airflow@5d8cb6fd4afd:/opt/airflow/dags$ airflow info
Apache
Airflow version | 2.5.1
executor | CeleryExecutor
task_logging_handler | airflow.utils.log.file_task_handler.FileTaskHandler
sql_alchemy_conn | postgresql+psycopg2://airflow:airflow@postgres/airflow
dags_folder | /opt/airflow/dags
plugins_folder | /opt/airflow/plugins
base_log_folder | /opt/airflow/logs
remote_base_log_folder |

System info
OS | Linux
architecture | arm
uname | uname_result(system='Linux', node='5d8cb6fd4afd', release='5.15.49-linuxkit', version='#1 SMP PREEMPT Tue Sep 13 07:51:32 UTC 2022', machine='aarch64', processor='')
locale | ('en_US', 'UTF-8')
python_version | 3.7.16 (default, Jan 18 2023, 03:18:19) [GCC 10.2.1 20210110]
python_location | /usr/local/bin/python

Tools info
git | NOT AVAILABLE
ssh | OpenSSH_8.4p1 Debian-5+deb11u1, OpenSSL 1.1.1n 15 Mar 2022
kubectl | NOT AVAILABLE
gcloud | NOT AVAILABLE
cloud_sql_proxy | NOT AVAILABLE
mysql | NOT AVAILABLE
sqlite3 | 3.34.1 2021-01-20 14:10:07 10e20c0b43500cfb9bbc0eaa061c57514f715d87238f4d835880cd846b9ealt1
psql | psql (PostgreSQL) 15.1 (Debian 15.1-1.pgdg110+1)

Paths info
airflow_home | /opt/airflow
system_path | /home/airflow/.vscode-server/bin/5e805b79fcb6ba4c2d23712967df89a089da575b/bin/remote-cli:/home/airflow/.local/bin:/root/bin:/home/airflow/.local/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
python_path | /home/airflow/.local/bin:/usr/local/lib/python37.zip:/usr/local/lib/python3.7:/usr/local/lib/python3.7/lib-dynload:/home/airflow/.local/lib/python3.7/site-packages:/usr/local/lib/python3.7/site-packages:/opt/airflow/dags:/opt/airflow/config:/opt/airflow/plugins
airflow_on_path | True

Providers info
apache-airflow-providers-amazon | 7.1.0
apache-airflow-providers-celery | 3.1.0
apache-airflow-providers-cncf-kubernetes | 5.1.1
apache-airflow-providers-common-sql | 1.3.3
apache-airflow-providers-docker | 3.4.0
apache-airflow-providers-elasticsearch | 4.3.3
apache-airflow-providers-ftp | 3.3.0
apache-airflow-providers-google | 8.8.0
apache-airflow-providers-grpc | 3.1.0
apache-airflow-providers-hashicorp | 3.2.0
apache-airflow-providers-http | 4.1.1
apache-airflow-providers-imap | 3.1.1
apache-airflow-providers-microsoft-azure | 5.1.0
apache-airflow-providers-mysql | 4.0.0
apache-airflow-providers-odbc | 3.2.1
apache-airflow-providers-postgres | 5.4.0
apache-airflow-providers-redis | 3.1.0
apache-airflow-providers-sendgrid | 3.1.0
apache-airflow-providers-sftp | 4.2.1
apache-airflow-providers-slack | 7.2.0
apache-airflow-providers-sqlite | 3.3.1
apache-airflow-providers-ssh | 3.4.0
</code></pre>
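The traceback ends inside Airflow's XCom JSON encoder: task return values are pushed to XCom via `json.dumps`, and a `StringIO` object is not JSON-serialisable. A sketch of the workaround is to return the underlying text (a plain `str`, which serialises fine) and re-wrap it in a buffer wherever it is actually consumed:

```python
import io
import json

def get_buffer():
    # Return the text itself instead of an open StringIO handle; a str
    # survives the json.dumps round-trip that XCom performs.
    return "some words here blabla"

payload = get_buffer()
json.dumps(payload)              # mirrors what XCom's serializer does
content = io.StringIO(payload)   # the downstream task rebuilds the buffer
```

More generally, anything returned from a `@task` must be serialisable by the configured XCom backend; file-like objects, sockets, and similar handles are not.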
<python><airflow>
2023-03-13 12:02:54
1
2,459
moth
75,721,613
11,688,559
Access the parameter names of a Scipy distribution that has not yet been instantiated
<p>This question has been asked before <a href="https://stackoverflow.com/questions/47449991/how-to-programatically-get-parameter-names-and-values-in-scipy">here</a>. However, the solution no longer seems to work with the current version of Scipy.</p> <p>To review: I would like to know what parameters some arbitrary Scipy distribution requires, before instantiating it. The existing solution (which no longer works, at least for me) is as follows:</p> <pre><code>import inspect
import scipy.stats as st

def get_params(scipy_dist):
    return [param for param in inspect.signature(scipy_dist._pdf).parameters
            if param != 'x']
</code></pre> <p>Unfortunately, calling this on, say, <code>st.norm</code> or <code>st.expon</code> returns <code>['kwds']</code>.</p> <p>Is there an alternative way of obtaining the parameters?</p>
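One alternative that avoids inspecting `_pdf` altogether: continuous distributions expose their shape parameters through the `shapes` attribute (a comma-separated string, or `None` when there are none), and every distribution additionally accepts `loc` and `scale`. A sketch:

```python
import scipy.stats as st

def get_params(dist):
    # `shapes` is e.g. "a" for gamma, "df" for t, or None for norm/expon;
    # loc and scale are always available on top of the shape parameters.
    shapes = [] if dist.shapes is None else [s.strip() for s in dist.shapes.split(",")]
    return shapes + ["loc", "scale"]
```

This works on the frozen-distribution *factories* (`st.norm`, `st.gamma`, ...) without instantiating them, which is what the question asks for.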
<python><scipy>
2023-03-13 12:01:51
1
398
Dylan Solms
75,721,585
3,152,686
Response failing in httpx but not in requests
<p>I am making a POST request to a URL using the <strong>httpx</strong> library. However, I get a <em><strong>401 Unauthorized</strong></em> error with the request below:</p> <pre><code>cert = os.path.realpath('./certs/certificate.pem')
key = os.path.realpath('./certs/key.pem')
context = ssl.create_default_context()
context.load_cert_chain(certfile=cert, keyfile=key, password=os.getenv('PASSWORD', ''))

response = httpx.post(
    url=my_url,
    data={'client_id': os.getenv('USER', '')},
    verify=context
)
token = response.json()['access_token']
</code></pre> <p>In contrast, if I make the same request using the <strong>requests</strong> library, it succeeds and I get the response:</p> <pre><code>cert = os.path.realpath('./certs/certificate.pem')
key = os.path.realpath('./certs/key2.pem')
certificate = (cert, key)

response = requests.post(my_url,
                         cert=certificate,
                         auth=HTTPBasicAuth(os.getenv('USER', ''), os.getenv('PASSWORD', '')))
token = response.json()['access_token']
</code></pre> <p>What am I missing here?</p>
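One visible difference between the two calls: the working `requests` version sends HTTP Basic credentials via `HTTPBasicAuth`, while the httpx version only posts `client_id` as form data and sends no `Authorization` header at all, which would plausibly explain a 401. httpx accepts the same credentials as `auth=(user, password)` on `httpx.post()`. A stdlib-only sketch of the header that `HTTPBasicAuth` adds (the credential values here are placeholders):

```python
import base64

# HTTP Basic auth is just "Basic " + base64("user:password") in the
# Authorization header; this is what the requests call sends and the
# httpx call omits.
user, password = "user", "pass"
token = base64.b64encode(f"{user}:{password}".encode()).decode()
auth_header = {"Authorization": f"Basic {token}"}
```

Separately, note the two snippets load different key files (`key.pem` vs `key2.pem`), which may or may not be intentional.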
<python><post><python-requests><httpx>
2023-03-13 11:59:11
1
564
Vishnukk
75,721,548
13,971,651
No module named 'debug_toolbar.urls' : version 3.1.1 installed
<p>settings.py</p> <pre><code>INSTALLED_APPS = (
    'debug_toolbar',
)

MIDDLEWARE_CLASSES = (
    'django.middleware.common.CommonMiddleware',
    'debug_toolbar.middleware.DebugToolbarMiddleware',
)
</code></pre> <p>urls.py</p> <pre><code>if DEBUG:
    import debug_toolbar
    urlpatterns += url('__debug__/', include('debug_toolbar.urls'))
</code></pre> <p>requirements.txt</p> <pre class="lang-none prettyprint-override"><code>Django
django-debug-toolbar==3.1.1
</code></pre> <p>It's showing <code>No module named 'debug_toolbar.urls'</code> when testing on the server.</p>
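For reference, a sketch of the URLconf form the current toolbar docs use. It fixes two separate problems in the snippet above: `url()` was removed in Django 4, and `urlpatterns +=` needs a list, not a single pattern. Neither by itself explains the `No module named 'debug_toolbar.urls'` import error, which usually means the package is not importable in the server's environment, so it is also worth confirming `django-debug-toolbar` is actually installed there (e.g. `pip show django-debug-toolbar` on the server).

```python
# urls.py fragment (sketch): list on the right-hand side of +=,
# and path/include instead of the removed url()
from django.urls import include, path

if DEBUG:
    urlpatterns += [path("__debug__/", include("debug_toolbar.urls"))]
```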
<python><django><django-debug-toolbar>
2023-03-13 11:55:00
1
373
puranjan
75,721,540
9,472,066
PySpark - how to set up environment variables (not Spark config)?
<p>On production cluster I am running Spark via <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator" rel="nofollow noreferrer">Spark Operator</a>, which allows me to set environment variables for driver and executors, so I can access them like in my PySpark script:</p> <pre><code>db_table = os.getenv(&quot;DB_TABLE&quot;) </code></pre> <p>Now I want to do the same locally for testing. I am using <a href="https://hub.docker.com/r/bitnami/spark/" rel="nofollow noreferrer">Spark Docker</a> in local mode with 1 master and 1 worker in Docker Compose. Tests are running in a separate Docker container in the same Docker network.</p> <p>How can I do this? I tried:</p> <ol> <li>Adding <code>'--conf &quot;spark.executorEnv.DB_TABLE=table_name&quot;</code> to <code>spark-submit</code>. This is not detected in script.</li> <li>Adding <code>'--conf &quot;spark.driver.DB_TABLE=table_name&quot;</code> to <code>spark-submit</code>. This is not set as environment variable, but accessible in Spark Context as <code>spark.conf.get(&quot;spark.driver.DB_TABLE&quot;)</code>. This does not work for two reasons:</li> </ol> <ul> <li>I would need to modify script to handle local and production environments separately, which would results in messier and longer code</li> <li>I cannot set any Spark options for Spark Context, which I set with <code>.config(&quot;spark.jars&quot;, ADDITIONAL_JARS)</code>, so I need ADDITIONAL_JARS env var</li> </ul> <ol start="3"> <li>Setting env vars in running Spark Docker container from running tests Docker container with <code>os.system(f'docker exec -i docker-spark-master-1 /bin/bash -c &quot;export DB_TABLE=table_name&quot;')</code>. This does not work, since I do not have Docker installed inside tests Docker container, and this is an ugly hack anyway.</li> </ol> <p>So how can I set environment variables for PySpark so that they are available as regular environment variables?</p>
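For the local Docker Compose setup, one option is to set the variables on the containers themselves, since in local mode the driver (and, with the bitnami images, the worker-hosted executors) inherit the container environment. A sketch, where the service names and image are assumptions about the compose file:

```yaml
# docker-compose.yml fragment (sketch; service names are assumptions)
services:
  spark-master:
    image: bitnami/spark
    environment:
      - DB_TABLE=table_name      # visible to the driver via os.getenv
  spark-worker:
    image: bitnami/spark
    environment:
      - DB_TABLE=table_name      # executors inherit the worker's environment
```

This keeps the PySpark script identical in both environments: `os.getenv("DB_TABLE")` works locally and under the Spark Operator, with no Spark-conf fallback logic needed.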
<python><docker><apache-spark><pyspark>
2023-03-13 11:54:35
1
1,563
qalis
75,721,229
6,761,328
Only list-like objects are allowed to be passed to isin()
<p>I have a dropdown menu:</p> <pre><code>dcc.Dropdown(
    id=&quot;select&quot;,
    options = list(all_df['Device'].unique()),
    value = list(all_df['Device'].unique()[0])
)
dcc.Graph(id = 'shared', figure={} ),
</code></pre> <p>to enable a selection of a device (or devices) to plot later on:</p> <pre><code>@app.callback(
    Output(&quot;shared&quot;, &quot;figure&quot;),
    Input(&quot;select&quot;, &quot;value&quot;)
)
def update_signal_chart(select):
    df4 = all_df[all_df['Device'].isin(select)]
</code></pre> <p>(...)</p> <p>and whatever I try, it always results in</p> <pre><code>TypeError: only list-like objects are allowed to be passed to isin(), you passed a [str]
</code></pre> <p>or</p> <pre><code>TypeError: only list-like objects are allowed to be passed to isin(), you passed a nonetype
</code></pre> <p>or the like. I don't understand what I am doing wrong, or what I should change. The data frame looks like this:</p> <pre><code>all_df.head()
Out[63]:
                         time        V+  ...      I_fil Device
0  2022-09-27 11:56:22.733740  7.980062  ...   5.035886      A
1  2022-09-27 11:56:22.733940  7.982012  ...   5.032311      A
2  2022-09-27 11:56:22.734140  7.983312  ...   5.027761      A
3  2022-09-27 11:56:22.734340  7.983962  ...   5.022236      A
4  2022-09-27 11:56:22.734540  7.982987  ... 
5.016711 A </code></pre> <p>Here is the entire code of the dash:</p> <pre><code>all_df = pd.read_csv(&quot;out.csv&quot; #, parse_dates=['time'] , dayfirst=True , skiprows=4 , sep=&quot;,&quot; , decimal='.') df2 = all_df.melt(id_vars=['Device','time'], value_vars=['V+','V-','I_A', 'I_fil']) external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css'] app = dash.Dash(__name__, title='Dashboard', external_stylesheets=external_stylesheets) colors = { 'background': '#000000', 'text': '#f3ff00' } # Define the app app.layout = html.Div( children = [ html.Div([ html.H4('Dauertest'), html.Div(children = &quot;&quot;), # Draw graph dcc.Graph(id = 'General' , figure={} ), ]), dcc.Checklist( id=&quot;signals&quot;, options = ['V+', 'V-', 'I_A', 'I_fil'], value = ['V+', 'V-'], inline=True ), html.Br(), html.Br(), html.Br(), # New row html.Div([ html.Div([ dcc.Dropdown( id=&quot;select&quot;, options = list(all_df['Device'].unique()), value = list(all_df['Device'].unique()[0]) ), dcc.Graph(id = 'shared' , figure={} ), #], className='six columns'), ], className='twelve columns'), ], className='row') ]) @app.callback( Output(&quot;General&quot;, &quot;figure&quot;), Input(&quot;signals&quot;, &quot;value&quot;) ) def update_line_chart(signals): df3=df2[df2['variable'].isin(signals)] fig_general = px.scatter(df3 , x = &quot;time&quot; , y = 'value' , color = 'Device' , symbol = 'variable' , hover_name = &quot;Device&quot; , template = 'plotly_dark' #, marginal_y=&quot;rug&quot; ).update_layout( transition_duration = 500 , autosize = True , height = 700 ) return fig_general @app.callback( Output(&quot;shared&quot;, &quot;figure&quot;), Input(&quot;select&quot;, &quot;value&quot;) ) def update_signal_chart(select): # Subset data frame to choose between devices # df4 = all_df[all_df['Device'].isin(select)] df4 = all_df[all_df['Device'] == select] fig_shared = make_subplots(rows = 4 , cols = 1 , shared_xaxes=True , vertical_spacing=0.05 ) fig_shared.add_trace(go.Scattergl( x 
= all_df['time'] , y = all_df['V+'] , mode = &quot;markers&quot; , name=&quot;V+&quot; , connectgaps = False #, hoverinfo = '&quot;Device' , marker=dict( # color = 'LightSkyBlue' # , size = 20 # , line = dict( # color='MediumPurple' # , width=2 # ) symbol = &quot;circle&quot; ) ) , row=4 , col=1 ) fig_shared.add_trace(go.Scattergl( x = all_df['time'] , y = all_df['V-'] , name=&quot;V-&quot; , mode = &quot;markers&quot; , marker=dict( symbol = &quot;star&quot; ) ) , row=3 , col=1 ) fig_shared.add_trace(go.Scattergl( x = all_df['time'] , y = all_df['I_A'] , name=&quot;I_A&quot; , mode = &quot;markers&quot; , marker=dict( symbol = &quot;diamond&quot; ) ) , row=2 , col=1 ) fig_shared.add_trace(go.Scattergl( x = all_df['time'] , y = all_df['I_fil'] , name=&quot;I_fil&quot; , mode = &quot;markers&quot; , marker=dict( symbol = &quot;x&quot; ) ) , row=1 , col=1 ) fig_shared.update_layout( height= 1000 , width = 1900 , title_text = &quot;&quot; , template = 'plotly_dark' #, font=dict( # family = &quot;Courier New, monospace&quot; # , size = 18 # , color = &quot;RebeccaPurple&quot; # ) ) fig_shared.show() fig_shared['layout']['yaxis']['title'] = 'I_fil / A' fig_shared['layout']['yaxis2']['title'] = 'I_A / A' fig_shared['layout']['yaxis3']['title'] = 'V- / V' fig_shared['layout']['yaxis4']['title'] = 'V+ / V' return fig_shared # Run the app if __name__ == &quot;__main__&quot;: app.run_server( debug=False , port=8050 ) </code></pre> <p>and the data: <a href="https://filebin.net/pag2dnsixti4ce9a" rel="nofollow noreferrer">https://filebin.net/pag2dnsixti4ce9a</a></p>
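The two error messages point at the cause: a single-select `dcc.Dropdown` hands the callback a plain string (hence "you passed a [str]"), and before the first selection it passes `None` (hence "nonetype"). Normalising the value to a list before `isin()` handles both; a sketch, with a stand-in frame:

```python
import pandas as pd

all_df = pd.DataFrame({"Device": ["A", "A", "B"], "V+": [7.98, 7.98, 8.01]})

def filter_devices(df, select):
    # Dash passes a str for a single selection and None before the first
    # callback fire; make it a list so .isin() always gets what it expects.
    if select is None:
        select = []
    elif isinstance(select, str):
        select = [select]
    return df[df["Device"].isin(select)]
```

As an aside, `value = list(all_df['Device'].unique()[0])` calls `list()` on a single string, which splits a multi-character device name into individual letters; `[all_df['Device'].unique()[0]]` would be safer.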
<python><pandas><plotly-dash><isin>
2023-03-13 11:24:16
1
1,562
Ben
75,721,094
428,666
How to merge almost touching polygons
<p>How can I merge polygons that are almost touching, using Python? For example, given these polygons and some distance threshold T:</p> <p><a href="https://i.sstatic.net/SSFrI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SSFrI.png" alt="Input polygons" /></a></p> <p>The algorithm would produce something like this:</p> <p><a href="https://i.sstatic.net/vQCVN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vQCVN.png" alt="Result polygon" /></a></p> <p>So basically I need to remove any gaps smaller than T between the two polygons.</p> <p>The original vertices of the polygons should not be unnecessarily modified. So for example simply buffering both polygons and then taking a union of them is not good.</p> <p>A library that already implements this functionality would also be fine.</p> <p><strong>EDIT:</strong> As another example, <a href="https://pro.arcgis.com/en/pro-app/latest/tool-reference/cartography/aggregate-polygons.htm" rel="nofollow noreferrer">ArcGIS polygon aggregation</a> seems to do exactly this. However there isn't much hint about how it is implemented, and using ArcGIS is not an option.</p>
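A common approximation is the dilate-union-erode trick with Shapely: buffer each polygon outward by T/2, union, then buffer back inward by T/2, so only gaps narrower than T stay closed. Offered as a sketch rather than a full answer, because even with mitred joins vertices near the filled gap can shift slightly, which the question explicitly wants to avoid; a complete solution would snap the result back to the original outlines away from the joins.

```python
from shapely.geometry import box
from shapely.ops import unary_union

def merge_close(polygons, t):
    # Dilate by t/2 with mitred joins (join_style=2) to keep corners sharp,
    # union the grown shapes, then erode by t/2 to roughly restore size.
    grown = unary_union([p.buffer(t / 2, join_style=2) for p in polygons])
    return grown.buffer(-t / 2, join_style=2)

a = box(0, 0, 1, 1)
b = box(1.05, 0, 2, 1)           # 0.05-wide gap between the two squares
merged = merge_close([a, b], 0.2) # threshold 0.2 > gap, so they fuse
```

For the vertex-preserving variant, one direction is to intersect/difference the buffered result against the originals so only the gap-filling slivers come from the buffer.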
<python><geometry><shapely>
2023-03-13 11:10:38
4
825
anttikoo
75,720,913
6,609,896
collections.Counter: add zero-count elements
<p>I have this code to analyse the <code>node.properties</code> json dict for my data:</p> <pre><code>def summarise_all_model_properties(
    all_nodes: list
) -&gt; defaultdict[str, Counter[str]]:
    propdicts = defaultdict(Counter)
    for node in all_nodes:
        propdicts[node.model_type].update(
            k for k in node.properties.keys() if node.properties[k]
        )
    return propdicts
</code></pre> <p>This correctly counts every time a <code>node.properties[propertyName]</code> is not an empty/falsey value, giving a list of all non-blank properties and their counts, grouped by <code>node.model_type</code>.</p> <p>However, I would also like the <code>Counter</code> to report 0 for any node properties that exist but are empty. E.g. right now I have (in pseudocode):</p> <pre><code>foo.properties = {&quot;a&quot;: &quot;something!&quot;, &quot;b&quot;: '', &quot;c&quot;: 123}
bar.properties = {&quot;a&quot;: &quot;&quot;, &quot;b&quot;: '', &quot;c&quot;: &quot;something else!&quot;}
all_nodes = [foo, bar]

Counter({&quot;a&quot;: 1, &quot;c&quot;: 2})
</code></pre> <p>but I want to include <code>b</code> even though it's always empty in the data:</p> <pre><code>Counter({&quot;a&quot;: 1, &quot;b&quot;: 0, &quot;c&quot;: 2})
</code></pre> <h3>Update</h3> <p>This is because I have some code later that prints the properties and the count of non-blank usages to a table with 2 columns, and I do not know all the property names ahead of time (they come from reading a json file):</p> <pre><code>for model_type, counter in summarise_all_model_properties(all_nodes).items():
    for propname, count in counter.items():
        csv.writerow(model_type, propname, count)
</code></pre>
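One way to get this behaviour: `Counter.update` accepts a mapping, and updating with `{key: 0}` registers the key without changing its count. Seeding every seen key at 0 before counting the truthy ones gives the desired zero entries; a sketch using plain dicts in place of the node objects:

```python
from collections import Counter

nodes_properties = [
    {"a": "something!", "b": "", "c": 123},
    {"a": "", "b": "", "c": "something else!"},
]

counts = Counter()
for props in nodes_properties:
    counts.update(dict.fromkeys(props, 0))           # register every key at 0
    counts.update(k for k, v in props.items() if v)  # then count truthy values
```

In the question's function, the same two `update` calls would go inside the loop against `propdicts[node.model_type]`, so each model type's counter also lists its always-empty properties.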
<python><python-3.x><counter>
2023-03-13 10:53:26
2
5,625
Greedo
75,720,848
14,269,252
How to plot multiple categories on the same y-axis using Plotly Express?
<p>I wrote the following code for visualization. I have DATE and CODE, which I show as a scatter chart using Plotly, coloured by SOURCE. The x-axis shows the time; the y-axis shows CODE (a categorical variable).</p> <ol> <li>How can I show each source in its own section of the same y-axis? Currently they are all mixed on one y-axis.</li> <li>How can I show each of these categories on a separate chart, but with all the charts attached and stacked together?</li> </ol> <pre><code># a sample of data
    ID        DATE CODE SOURCE
0  P04  2016-08-08    f     m1
1  P04  2015-05-08    f     m1
2  P04  2010-07-20    v     m3
3  P04  2013-12-06    g     m4
4  P08  2018-03-01    h     m4
</code></pre> <pre><code>def char():
    color_discrete_map = {'df1': 'rgb(255,0,0)', 'df2': 'rgb(0,255,0)', 'df3': '#11FCE4',
                          'df4': '#9999FF', 'df5': '#606060', 'df6': '#CC6600'}
    fig = px.scatter(df, x='DATE', y='CODE', color='SOURCE',
                     width=800, height=500, color_discrete_map=color_discrete_map)
    fig.update_layout(xaxis_type='category')
    fig.update_layout(
        margin=dict(l=250, r=0, t=0, b=20),
    )
    fig.update_layout(xaxis=dict(tickformat=&quot;%y-%m&quot;))
    fig.update_xaxes(ticks=&quot;outside&quot;,
                     ticklabelmode=&quot;period&quot;,
                     tickcolor=&quot;black&quot;,
                     ticklen=10,
                     minor=dict(
                         ticklen=4,
                         dtick=7*24*60*60*1000,
                         tick0=&quot;2016-07-03&quot;,
                         griddash='dot',
                         gridcolor='white')
                     )
    # fig.update_layout(xaxis_tickformat = '%d %B (%a)&lt;br&gt;%Y')
    st.plotly_chart(fig)
</code></pre> <p>What I want (but as a scatter): <a href="https://i.sstatic.net/sRY5I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sRY5I.png" alt="enter image description here" /></a></p> <p>What I produce:</p> <p><a href="https://i.sstatic.net/joHHW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/joHHW.png" alt="enter image description here" /></a></p>
<python><plotly>
2023-03-13 10:47:48
0
450
user14269252
75,720,784
6,213,883
How to provide hive metastore information via spark-submit?
<p>Using Spark 3.1, I need to provide the Hive configuration via the <code>spark-submit</code> command (<strong>not</strong> inside the code).</p> <hr /> <p>Inside the code (which is not the solution I need), I can do the following, which works fine (I am able to list databases and select from tables; removing <code>enableHiveSupport</code> also works fine as long as the config is specified):</p> <pre><code>spark = SparkSession.builder.appName(&quot;redacted-sprak31&quot;) \
    .enableHiveSupport() \
    .config(&quot;spark.sql.warehouse.dir&quot;, &quot;hdfs://&quot; + hdfs_host + &quot;:8020/user/hive/warehouse&quot;) \
    .config(&quot;hive.metastore.uris&quot;, &quot;thrift://&quot; + hdfs_host + &quot;:9083&quot;) \
    .config(&quot;spark.sql.hive.metastore.jars.path&quot;, &quot;file:///spark_jars/var/hive_jars/*.jar&quot;) \
    .config(&quot;spark.sql.hive.metastore.version&quot;, &quot;MYVERSION&quot;) \
    .config(&quot;spark.sql.hive.metastore.jars&quot;, &quot;path&quot;) \
    .config(&quot;spark.sql.catalogImplementation&quot;, &quot;hive&quot;) \
    .getOrCreate()
</code></pre> <p>which is submitted like this:</p> <pre><code>spark-submit \
    --py-files={file} local://__main__.py
</code></pre> <hr /> <p>Instead, I use the <code>--conf</code> flag in the <code>spark-submit</code> command and remove all the <code>config</code> statements from the <code>__main__.py</code> file:</p> <pre><code>spark-submit \
    --conf spark.sql.warehouse.dir=&quot;hdfs://${hdfs_host}:8020/user/hive/warehouse&quot; \
    --conf hive.metastore.uris=&quot;thrift://${hdfs_host}:9083&quot; \
    --conf spark.sql.hive.metastore.jars.path=&quot;file:///spark_jars/var/hive_jars/*.jar&quot; \
    --conf spark.sql.hive.metastore.version=&quot;MYVERSION&quot; \
    --conf spark.sql.hive.metastore.jars=&quot;path&quot; \
    --conf spark.sql.catalogImplementation=&quot;hive&quot; \
    --py-files={file} local://__main__.py
</code></pre> <p>with, in <code>__main__.py</code>:</p> <pre><code>spark = SparkSession.builder.appName(&quot;redacted-sprak31&quot;) \
    .getOrCreate()
</code></pre> <p>This gives me the following error when executing the very same SQL statement (a simple <code>select * from DB.TABLE limit 10</code>):</p> <pre><code>Traceback (most recent call last):
  File &quot;/usr/local/lib/python3.7/runpy.py&quot;, line 193, in _run_module_as_main
    &quot;__main__&quot;, mod_spec)
  File &quot;/usr/local/lib/python3.7/runpy.py&quot;, line 85, in _run_code
    exec(code, run_globals)
  File &quot;/sandbox/__main__.py&quot;, line 12, in &lt;module&gt;
    df = spark.sql(&quot;select * from db.table limit 10&quot;)
  File &quot;/usr/local/lib/python3.7/site-packages/pyspark/sql/session.py&quot;, line 723, in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
  File &quot;/usr/local/lib/python3.7/site-packages/py4j/java_gateway.py&quot;, line 1305, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File &quot;/usr/local/lib/python3.7/site-packages/pyspark/sql/utils.py&quot;, line 117, in deco
    raise converted from None
pyspark.sql.utils.AnalysisException: Table or view not found: db.table; line 1 pos 14;
'GlobalLimit 10
+- 'LocalLimit 10
   +- 'Project [*]
      +- 'UnresolvedRelation [db, table], [], false
</code></pre> <ul> <li>Why do the parameters passed via <code>--conf</code> not trigger the same behavior as the in-code configuration?</li> <li>Consequently, what am I missing for Spark to behave as expected (i.e. connect correctly to the metastore)?</li> </ul>
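One likely culprit, offered as a sketch: `spark-submit` only forwards `--conf` keys that start with `spark.` (it logs a warning like `Warning: Ignoring non-Spark config property: hive.metastore.uris` for anything else), whereas `SparkSession.builder.config()` passes arbitrary keys straight through to the session. Hadoop/Hive properties can be forwarded from the command line by prefixing them with `spark.hadoop.`:

```shell
spark-submit \
    --conf spark.sql.warehouse.dir="hdfs://${hdfs_host}:8020/user/hive/warehouse" \
    --conf spark.hadoop.hive.metastore.uris="thrift://${hdfs_host}:9083" \
    --conf spark.sql.hive.metastore.jars.path="file:///spark_jars/var/hive_jars/*.jar" \
    --conf spark.sql.hive.metastore.version="MYVERSION" \
    --conf spark.sql.hive.metastore.jars="path" \
    --conf spark.sql.catalogImplementation="hive" \
    --py-files={file} local://__main__.py
```

The `spark.sql.*` keys were already being applied; only the bare `hive.metastore.uris` needed the `spark.hadoop.` prefix, which is why Spark silently fell back to a local metastore and could not find `db.table`.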
<python><apache-spark><spark3><apache-spark-3.0>
2023-03-13 10:42:24
0
3,040
Itération 122442
75,720,635
11,630,148
Remove %20 from URLs
<p>My URLs in Django contain <code>%20</code>. How can I change that? At first this was ok, but it suddenly changed at some point during my day. My views, models and urls are as follows:</p> <pre class="lang-py prettyprint-override"><code>class Rant(UUIDModel, TitleSlugDescriptionModel, TimeStampedModel, models.Model):
    categories = models.ManyToManyField(Category)

    def slugify_function(self, content):
        return content.replace(&quot;_&quot;, &quot;-&quot;).lower()

    def get_absolute_url(self):
        return reverse(&quot;rants:rant-detail&quot;, kwargs={&quot;title&quot;: self.title})

    def __str__(self):
        return self.title
</code></pre> <pre class="lang-py prettyprint-override"><code>class RantDetailView(DetailView):
    model = Rant
    slug_field = &quot;slug&quot;
    slug_url_kwarg = &quot;slug&quot;

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        return context
</code></pre> <pre class="lang-py prettyprint-override"><code>path(&quot;&lt;str:slug&gt;/&quot;, RantDetailView.as_view(), name=&quot;rant-detail&quot;),
path(&quot;category/&lt;slug:category&gt;/&quot;, rant_category, name=&quot;rant-categories&quot;),
</code></pre> <p>I have no idea why the link on my localhost is <code>localhost:8000/first%20rant%20on%20rantr/</code>. I'm expecting the URLs to contain hyphens, like <code>http://localhost:8000/first-rant-on-rantr/</code>. Where did I make a mistake?</p>
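The `%20`s are almost certainly the spaces in the raw title being percent-encoded: `get_absolute_url()` reverses with `self.title` ("first rant on rantr") instead of the already-hyphenated `slug` field that the URL pattern and the `DetailView` expect, so the likely fix is `reverse("rants:rant-detail", kwargs={"slug": self.slug})`. A tiny stdlib sketch of the difference, outside Django:

```python
# The raw title contains spaces, which end up percent-encoded in the URL;
# the slug (what slugify_function produces) is what should be reversed.
title = "first rant on rantr"
slug = title.replace(" ", "-").lower()   # mirrors the model's slugify_function
url = f"/{slug}/"
```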
<python><django><django-urls>
2023-03-13 10:28:10
1
664
Vicente Antonio G. Reyes
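The `%20` in the URL above is simply the percent-encoding of a space: `get_absolute_url` passes the raw title (`"first rant on rantr"`) to `reverse()` instead of the slug. A minimal stdlib sketch (no Django needed) showing why spaces surface as `%20` and what the slugified form looks like; the suggested `reverse()` call in the final comment is an assumption based on the URLconf shown:

```python
from urllib.parse import quote

# Percent-encoding turns each space into %20 -- exactly what shows up in the
# URL when the raw title is handed to reverse() instead of the slug.
title = "first rant on rantr"
print(quote(title))            # first%20rant%20on%20rantr

# A slug replaces spaces with hyphens before the URL is built:
slug = title.replace(" ", "-").lower()
print(slug)                    # first-rant-on-rantr

# In the model above, get_absolute_url would then pass the slug, e.g.:
# return reverse("rants:rant-detail", kwargs={"slug": self.slug})
```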
75,720,582
10,207,281
Concurrency settings of my RedisSpider don't perform as expected
<p>I wrote a spider with scrapy-redis and python3.7.<br /> I set <strong>CONCURRENT_REQUESTS</strong> to 10.<br /> Here are my settings for the spider:</p> <pre><code> custom_settings = { &quot;DOWNLOADER_MIDDLEWARES&quot;: { &quot;projects.middlewares.MediaFileCheckMiddleware&quot;: 300, }, &quot;DOWNLOAD_TIMEOUT&quot;: 300, &quot;CONCURRENT_REQUESTS&quot;: 10, &quot;SCHEDULER_PERSIST&quot;: True, &quot;SCHEDULER_FLUSH_ON_START&quot;: False, &quot;SCHEDULER_IDLE_BEFORE_CLOSE&quot;: 0, } </code></pre> <p>I expect it to handle 10 download requests at the same time, like multiple threads, with each thread starting a new request immediately once its previous one is done.</p> <p>The fact is, the spider got 10 requests from redis at one time and downloaded them, but it didn't get a new request until the previous 10 were all done.</p> <p>What I expect is:</p> <p>thread1 --job1--|-----job2-----|-job3-|--job4--|...<br /> thread2 ----job1----|----job2----|----job3----|...<br /> thread3 -job1-|-job2-|----job3----|--job4--|...</p> <p>What I saw is:</p> <p>thread1 --job1--      |-----job2-----|-job3-         |--job4--,...<br /> thread2 ----job1----|----job2----   |----job3----|...<br /> thread3 -job1-         |-job2-           |----job3---- |--job4--,...</p> <p>I don't understand why.</p> <p>The main code:</p> <pre><code>class FileHandler(RedisSpider): name = 'filehandler' redis_key = 'news:multimedias' file_path = r'/data/temp' img_header = Headers(content_type='img') size_limit = 50 * 1024 * 1024 custom_settings = { &quot;DOWNLOADER_MIDDLEWARES&quot;: { &quot;projects.dapoo.middlewares.MediaFileCheckMiddleware&quot;: 300, }, &quot;DOWNLOAD_TIMEOUT&quot;: 300, # &quot;DOWNLOAD_WARNSIZE&quot;: 0, &quot;CONCURRENT_REQUESTS&quot;: 10, &quot;SCHEDULER_PERSIST&quot;: True, &quot;SCHEDULER_FLUSH_ON_START&quot;: False, &quot;SCHEDULER_IDLE_BEFORE_CLOSE&quot;: 0, } def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def make_request_from_data(self, data): obj =
json.loads(data) suffix = obj.get('suffix') src = obj.get(&quot;src&quot;) err = obj.get('err') media_type = obj.get('media_type') if err: pass # logging else: return scrapy.Request( src, method=&quot;GET&quot;, headers=self.img_header(), dont_filter=True, meta={'file_name': str(int(datetime.datetime.now().timestamp()))}, callback=self.parse ) def parse(self, response): file_name = response.meta.get('file_name') file_path = os.path.join(self.file_path, file_name) with open(file_path, 'wb') as f: f.write(response.body) </code></pre>
<python><concurrency><scrapy>
2023-03-13 10:23:18
0
920
vassiliev
75,720,575
774,133
EOF in reading pickle in Jupyter notebook, but not in python interpreter
<p>I have a problem in opening a pickle in a Jupyter notebook. Unfortunately, I cannot share the data file, so I cannot provide a working example.</p> <p>For my data analysis I use a single conda environment <code>ai</code>. The same environment is used in a Jupyter notebook.</p> <p>I have a pickle that is 67M in size.</p> <ol> <li><p>If I launch <code>python</code> in a shell after activating the conda environment <code>ai</code>, I am able to load the pickle file with <code>df = pd.read_pickle(&quot;file.pickle&quot;)</code>. No problems at all.</p> </li> <li><p>If I read the same file in a Jupyter notebook using the same environment <code>ai</code>, I get an <code>EOF</code> error.</p> </li> </ol> <pre><code>--------------------------------------------------------------------------- EOFError Traceback (most recent call last) /tmp/ipykernel_8405/3858244629.py in ----&gt; 1 df = pd.read_pickle(&quot;file.pickle&quot;) ~/miniconda3/envs/ai/lib/python3.8/site-packages/pandas/io/pickle.py in read_pickle(filepath_or_buffer, compression, storage_options) 215 # RawIOBase, BufferedIOBase, TextIOBase, TextIOWrapper, mmap]&quot;; 216 # expected &quot;IO[bytes]&quot; --&gt; 217 return pickle.load(handles.handle) # type: ignore[arg-type] 218 except excs_to_catch: 219 # e.g. EOFError: Ran out of input </code></pre> <p>Why? Do I need to change some RAM usage limits in the notebook? I know my description is a little generic, but I cannot provide the data file.</p>
<python><jupyter-notebook>
2023-03-13 10:22:34
0
3,234
Antonio Sesto
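`pickle.load` raising `EOFError` means the stream ended before a complete object was read, typically because the file is truncated or empty. That suggests the notebook kernel is not reading the same `file.pickle` as the shell, for example because its working directory differs. A small diagnostic sketch to run in both environments and compare (the stand-in file created below just makes the example runnable on its own):

```python
import hashlib
import os
import pickle

# Stand-in file so the sketch is self-contained; in practice, point `path`
# at the real 67M pickle and run the same three prints in BOTH the shell
# interpreter and the notebook kernel.
path = "file.pickle"
with open(path, "wb") as f:
    pickle.dump({"demo": True}, f)

print("cwd :", os.getcwd())
print("size:", os.path.getsize(path))
with open(path, "rb") as fh:
    print("md5 :", hashlib.md5(fh.read()).hexdigest())
```

If the working directory, size, or checksum differ between the two environments, they are not loading the same file, which would explain the `EOFError` in only one of them.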
75,720,310
784,433
broadcasting for matrix multiplication
<p>Consider the following</p> <pre class="lang-py prettyprint-override"><code>np.random.seed(2) result = SC @ x </code></pre> <p>SC is <code>nn x nn</code> and x is <code>nn x ns</code>.</p> <p>Now consider that we have a 3D stack of SCs with shape <code>ns x nn x nn</code>.</p> <pre class="lang-py prettyprint-override"><code>ns = 4 nn = 2 SCs = np.random.rand(ns, nn, nn) x = np.random.rand(nn, ns) def matmul3d(a, b): ns, nn, nn = a.shape assert(b.shape == (nn, ns)) results = np.zeros((nn, ns)) for i in range(ns): results[:, i] = a[i, :, :] @ b[:, i] return results </code></pre> <pre><code>array([[0.385428 , 0.22932766, 0.36791082, 0.06029485], [0.68934311, 0.14157493, 0.75236553, 0.09049892]]) </code></pre> <p>If we simply use matrix multiplication, the diagonal holds the result:</p> <pre class="lang-py prettyprint-override"><code>results = a @ b array([[[0.385428 , 0.21717737, 0.38019609, 0.0372277 ], [0.68934311, 0.30008412, 0.65169432, 0.0858002 ]], [[0.52588409, 0.22932766, 0.4972909 , 0.06536792], [0.48764911, 0.14157493, 0.43837138, 0.07607813]], [[0.39071113, 0.1655206 , 0.36791082, 0.04962322], [0.79777992, 0.34153306, 0.75236553, 0.10054907]], [[0.37441129, 0.10004409, 0.33380446, 0.06029485], [0.5542946 , 0.14242876, 0.4923592 , 0.09049892]]]) </code></pre> <p>Is there any broadcasting approach for this that removes the loop?</p>
<python><array-broadcasting>
2023-03-13 09:55:44
1
1,237
Abolfazl
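The loop in `matmul3d` computes `results[j, i] = sum_k a[i, j, k] * b[k, i]`, which is a single contraction that `np.einsum` (or a batched matmul on a reshaped operand) can express without the Python loop. A sketch reproducing the loop result:

```python
import numpy as np

ns, nn = 4, 2
rng = np.random.default_rng(2)
a = rng.random((ns, nn, nn))
b = rng.random((nn, ns))

# Loop version from the question, for reference:
loop = np.zeros((nn, ns))
for i in range(ns):
    loop[:, i] = a[i] @ b[:, i]

# einsum: contract over k, keep (j, i) as the output axes.
vectorized = np.einsum('ijk,ki->ji', a, b)

# The same via broadcast matmul: stack the vectors as (ns, nn, 1).
also = (a @ b.T[:, :, None])[:, :, 0].T

assert np.allclose(loop, vectorized)
assert np.allclose(loop, also)
```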
75,720,210
429,476
Float storage precision in hardware only to 13 decimals
<p>There are many similar questions related to floating point, and I have read all of them that came up. Still, I am not able to grasp what I am doing wrong.</p> <p>The CPython reference interpreter (Python 3) on a 64-bit x86 machine stores floats as double precision: 8 bytes, that is 64 bits.</p> <p>CPython implements float using C double type. The C double type usually implements IEEE 754 double-precision binary float, which is also called binary64.</p> <p>From <a href="https://en.wikipedia.org/wiki/IEEE_754-1985" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/IEEE_754-1985</a>, this would mean that I would get 16 decimal digit precision</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Level</th> <th>Width</th> <th>Range at full precision</th> <th>Precision[a]</th> </tr> </thead> <tbody> <tr> <td>Double precision</td> <td>64 bits</td> <td>±2.23×10<sup>−308</sup> to ±1.80×10<sup>308</sup></td> <td>Approximately 16 decimal digits</td> </tr> </tbody> </table> </div> <p>But in the code below, I have two 16-digit floats which are different at the 16th decimal digit. Whether I use Decimal or float, Python on my machine (Python 3.10, x86, 64-bit, Linux) is able to handle only 13 decimal places.
What am I missing?</p> <pre><code>lat1 = -81.0016666666670072 # or float(-81.0016666666670072) lat2 = -81.0016666666670062 # or float(-81.0016666666670062) print(&quot;Out as string lat1=-81.0016666666670072 lat2= -81.0016666666670062&quot;) print(f&quot;Precision 16 lat1:.16f {lat1:.16f} lat2:.16f{lat2:.16f}&quot;) # Lets see how it is store in hardware print(f&quot;Stored in HW as lat1.hex() {lat1.hex()} lat2.hex() {lat2.hex()}&quot;) x = float.fromhex(lat1.hex()) y = float.fromhex(lat2.hex()) print(f&quot;Reconstructed from Hex lat1:.16f {x:.16f} lat2:.16f{y:.16f}&quot;) try: assert lat1 != lat2 except: # Assert false - means Python is telling lat1 == lat2 print(f&quot;Fail, {lat1} and {lat2} are really different at precision 16&quot;) # try with Decimal from decimal import * getcontext().prec = 16 try: assert Decimal(lat1).compare(Decimal(lat2)) except: # Assert false - means Python is telling both are same print(f&quot;Fail, Decimal(lat1) {Decimal(lat1):.16f} and Decimal(lat2) {Decimal(lat2):.16f} are really different at precision 16&quot;) print(&quot;Reducing precision to 14&quot;) lat1 = -81.00166666666711 lat2 = -81.00166666666710 print(f&quot;At precision 14-still equal lat1:.14f {lat1:.14f} lat2:.14f{lat2:.14f}&quot;) print(&quot;Reducing precision to 13&quot;) lat1 = -81.0016666666671 lat2 = -81.0016666666670 # Lets see string representation print(f&quot;At precision 13-Not equal lat1:.13f {lat1:.13f} lat2:.13f{lat2:.13f}&quot;) try: assert lat1 == lat2 except: # Assert false - means Python is telling lat1 != lat2, which is correct print(f&quot;Pass, {lat1} and {lat2} are different&quot;) </code></pre> <p>Ouput</p> <pre><code>Out as string lat1=-81.0016666666670072 lat2= -81.0016666666670062 Precision 16 lat1:.16f -81.0016666666670062 lat2:.16f-81.0016666666670062 Stored in HW as lat1.hex() -0x1.4401b4e81b500p+6 lat2.hex() -0x1.4401b4e81b500p+6 Reconstructed from Hex lat1:.16f -81.0016666666670062 lat2:.16f-81.0016666666670062 Fail, -81.001666666667 and 
-81.001666666667 are really different at precision 16 Fail, Decimal(lat1) -81.0016666666670062 and Decimal(lat2) -81.0016666666670062 are really different at precision 16 Reducing precision to 14 At precision 14-still equal lat1:.14f -81.00166666666711 lat2:.14f-81.00166666666711 Reducing precision to 13 At precision 13-Not equal lat1:.13f -81.0016666666671 lat2:.13f-81.0016666666670 Pass, -81.0016666666671 and -81.001666666667 are different </code></pre> <p>Note - These are geo-location and I know <a href="https://gis.stackexchange.com/a/8674/61719">I don't need 16-digit accuracy</a>. But still curious as to why I am only getting 13 decimal point precision</p> <p>Also even though I went through many answers, not able to make out why there is no way to finding out the number of digits of precision supported on an OS/hardware through code.</p>
<python><floating-point>
2023-03-13 09:45:28
2
6,310
Alex Punnen
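The "approximately 16 decimal digits" of binary64 are significant digits counted from the first digit, not digits after the decimal point. `-81.0016666666670072` has 18 significant digits, and near `|x| ≈ 81` adjacent doubles are about `1.4e-14` apart, which is exactly why only ~13 decimal places survive. A quick check, assuming Python ≥ 3.9 for `math.ulp`:

```python
import math
import sys

x = -81.0016666666670072

# Guaranteed significant decimal digits for a C double (NOT decimal places):
print(sys.float_info.dig)        # 15

# Spacing between adjacent doubles near 81: 2**(6-52), about 1.42e-14, so
# the 14th decimal place is the finest distinction representable here.
print(math.ulp(x))

# Hence two literals differing only at the 16th decimal place collapse to
# the very same double:
print(-81.0016666666670072 == -81.0016666666670062)   # True
```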
75,720,122
3,878,398
What is the correct way to pass a csv file using requests in python?
<p>If I wanted to push a csv file to an endpoint in Python, what is the correct way to do this?</p> <pre><code>with open(&quot;foo.csv&quot;) as f: endpoint = 'some-url' headers = {} r = requests.post(endpoint, ...., headers = headers) </code></pre> <p>What would be the next steps?</p>
<python><rest><python-requests>
2023-03-13 09:36:10
2
351
OctaveParango
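One common pattern is to send the file as `multipart/form-data` via the `files=` argument; note that the form-field name (`"file"` here) and the endpoint are assumptions that depend entirely on the receiving API. This sketch builds a `PreparedRequest` so the multipart body can be inspected before anything is sent over the network (it assumes the `requests` package is installed):

```python
import requests

# Create a small CSV so the example is self-contained.
with open("foo.csv", "w") as f:
    f.write("a,b\n1,2\n")

endpoint = "https://example.com/upload"   # hypothetical endpoint
with open("foo.csv", "rb") as f:          # open in binary mode for uploads
    req = requests.Request(
        "POST", endpoint,
        files={"file": ("foo.csv", f, "text/csv")},  # field name is API-specific
    )
    prepared = req.prepare()              # builds the multipart body without sending

print(prepared.headers["Content-Type"])   # multipart/form-data; boundary=...
# To actually send: r = requests.Session().send(prepared); r.raise_for_status()
```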
75,719,888
10,617,728
How to fix '/bin/sh: 1: mysql: not found' in cloud function
<p>I have a cloud function that restores a mysql dump into a cloud mysql db. When the function is executed, I get an error</p> <blockquote> <p>/bin/sh: 1: mysql: not found</p> </blockquote> <p>My <code>requirements.txt</code> has</p> <pre><code>google-cloud-storage mysql-client </code></pre> <p>My main.py is as follows</p> <pre><code>import functions_framework from google.cloud import storage import os import datetime import subprocess CURRENT_DATE = datetime.datetime.today().date() # Triggered by a change in a storage bucket @functions_framework.cloud_event def restore_database(cloud_event): &quot;&quot;&quot;Triggered by a change to a GCS bucket subfolder. It unzips the zipped sql dump file and restores it into cloud mysql using mysql command. Args: data (dict): The Cloud Functions event payload. context (google.cloud.functions.Context): Metadata of triggering event. &quot;&quot;&quot; data = cloud_event.data # Get the GCS bucket and file information from the event data bucket_name = data[&quot;bucket&quot;] file_name = f'dumps/xxxx-{CURRENT_DATE}.sql.gz' # Only process .gz files in a subfolder named 'dumps' if not file_name.endswith('.gz') or 'dumps/' not in file_name: return # Initialize GCS client storage_client = storage.Client() # Get the bucket and blob objects bucket = storage_client.bucket(bucket_name) blob = bucket.blob(file_name) # Download the file to a temporary location tmp_file_name = '/tmp/' + os.path.basename(file_name) blob.download_to_filename(tmp_file_name) # Get database credentials from environment variables db_host = os.environ[&quot;SQL_INSTANCE_NAME&quot;] db_name = os.environ[&quot;DATABASE_NAME&quot;] db_pass = os.environ[&quot;DATABASE_PASSWORD&quot;] db_port = os.environ[&quot;DATABASE_PORT&quot;] db_user = os.environ[&quot;DATABASE_USER&quot;] # Unzip the file cmd = f&quot;gzip -dk {tmp_file_name}&quot; subprocess.check_output(cmd, shell=True) # Get the name of the unzipped file unzipped_file_name = os.path.splitext(tmp_file_name)[0] # Run
the mysql command to restore the database from the unzipped dump file cmd = f&quot;mysql -u {db_user} -h {db_host} -P {db_port} -p'{db_pass}' {db_name} &lt; {unzipped_file_name}&quot; subprocess.check_output(cmd, shell=True) # Delete the temporary files os.remove(tmp_file_name) os.remove(unzipped_file_name) print(f&quot;Database {db_name} restored successfully from {file_name} in {bucket_name}&quot;) </code></pre> <p>How can I fix this issue? I've tried adding <code>mysql-server</code> or <code>mysql</code> to <code>requirements.txt</code>, but that doesn't solve the problem.</p> <p>I believe <code>mysql-client</code> doesn't install the mysql binary that's needed to run in the terminal using subprocess.</p>
<python><mysql><google-cloud-platform><google-cloud-functions>
2023-03-13 09:15:45
0
1,199
Shadow Walker
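The Cloud Functions runtime has no `mysql` CLI, and pip packages cannot install one; `requirements.txt` only pulls in Python packages. One alternative that avoids `subprocess` entirely is executing the dump through a pure-Python driver such as `pymysql` (an assumption, it would need to be added to `requirements.txt`). The naive `';'` split below breaks on semicolons inside string literals, so treat this as a sketch only:

```python
def split_statements(sql_text: str):
    """Very naive statement splitter: one statement per ';'.

    Sketch only -- it does not handle ';' inside quoted strings, comments,
    or DELIMITER blocks that real mysqldump output may contain.
    """
    return [s.strip() for s in sql_text.split(";") if s.strip()]

# Hypothetical usage with pymysql inside restore_database():
# import pymysql
# conn = pymysql.connect(host=db_host, user=db_user, password=db_pass,
#                        database=db_name, port=int(db_port))
# with conn, conn.cursor() as cur:
#     with open(unzipped_file_name) as dump:
#         for stmt in split_statements(dump.read()):
#             cur.execute(stmt)

print(split_statements("CREATE TABLE t (id INT); INSERT INTO t VALUES (1);"))
```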
75,719,852
3,099,733
Does PyInstaller support installing dependent packages via pip on first use?
<p>I am trying to use PyInstaller to distribute a Python app built with pywebview, tensorflow, etc. However, when I create a Python executable with PyInstaller, I find that the package is larger than I expected. I am not sure if it just packs everything into the installer, which leads to the large package size.</p> <p>So my question is, is it possible for PyInstaller to pack only the Python binary, and run <code>pip install</code> when it is run for the first time to reduce the size of the distributed package?</p> <p>I have tried using the <code>pip</code> module like below:</p> <pre class="lang-py prettyprint-override"><code>import sys, io buffer = io.StringIO() sys.stdout = sys.stderr = buffer import pip pip.main(['install', 'numpy']) import webview webview.create_window('Hello world', 'https://pywebview.flowrl.com/') webview.start() </code></pre> <p>And when I run the built demo.exe I get this error:</p> <pre><code>Traceback (most recent call last): File &quot;demo.py&quot;, line 7, in &lt;module&gt; File &quot;pip\__init__.py&quot;, line 13, in main File &quot;pip\_internal\utils\entrypoints.py&quot;, line 43, in _wrapper File &quot;pip\_internal\cli\main.py&quot;, line 68, in main File &quot;pip\_internal\commands\__init__.py&quot;, line 114, in create_command File &quot;importlib\__init__.py&quot;, line 127, in import_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1014, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 991, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 973, in _find_and_load_unlocked ModuleNotFoundError: No module named 'pip._internal.commands.install' </code></pre> <p>I don't have to stick to <code>pyinstaller</code>; I would like to try any tool that supports deferring the dependency installation to the first time the program runs.</p>
<python><pyinstaller><cx-freeze>
2023-03-13 09:11:00
0
1,959
link89
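PyInstaller has no built-in "install dependencies on first run" mode, but the general pattern can be sketched: keep the heavy packages out of the bundle, install them into a side directory on first use, and put that directory on `sys.path`. This is a sketch with a known caveat noted in the docstring; inside a PyInstaller one-file bundle `sys.executable` is the frozen app itself, not a Python interpreter, so the pip fallback would need a real interpreter shipped alongside or found on the machine:

```python
import importlib
import os
import subprocess
import sys

def ensure(package: str, target_dir: str):
    """Import `package`, installing it into `target_dir` on first use.

    Sketch only: in a PyInstaller one-file bundle, sys.executable is the
    frozen app rather than a Python interpreter, so the pip fallback needs
    a separate real interpreter to be available.
    """
    os.makedirs(target_dir, exist_ok=True)
    if target_dir not in sys.path:
        sys.path.insert(0, target_dir)   # make previously installed copies importable
    try:
        return importlib.import_module(package)
    except ImportError:
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", "--target", target_dir, package]
        )
        importlib.invalidate_caches()
        return importlib.import_module(package)

# Packages that are already importable never hit pip:
print(ensure("json", "./vendor").__name__)   # json
```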
75,719,826
9,524,065
Installing gluonnlp results in an ImportError undefined symbol _PyGen_Send
<p>I hope this is not an incredibly dumb issue, but googling did not produce any usable results. I installed <code>gluonnlp 0.10.0</code> using <code>pip</code> on an Ubuntu 22.04.1 LTS server under Python 3.10.6. When I try to import gluon, I get the following error:</p> <pre><code>Traceback (most recent call last): File &quot;[...]/lib/python3.10/site-packages/IPython/core/interactiveshell.py&quot;, line 3460, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File &quot;&lt;ipython-input-4-e68c3b59001c&gt;&quot;, line 1, in &lt;module&gt; import gluonnlp as nlp File &quot;[...]/lib/python3.10/site-packages/gluonnlp/__init__.py&quot;, line 25, in &lt;module&gt; from . import data File &quot;[...]/lib/python3.10/site-packages/gluonnlp/data/__init__.py&quot;, line 23, in &lt;module&gt; from . import (batchify, candidate_sampler, conll, corpora, dataloader, File &quot;[...]/lib/python3.10/site-packages/gluonnlp/data/corpora/__init__.py&quot;, line 21, in &lt;module&gt; from . import (google_billion_word, large_text_compression_benchmark, wikitext) File &quot;[...]/lib/python3.10/site-packages/gluonnlp/data/corpora/google_billion_word.py&quot;, line 34, in &lt;module&gt; from ...vocab import Vocab File &quot;[...]/lib/python3.10/site-packages/gluonnlp/vocab/__init__.py&quot;, line 21, in &lt;module&gt; from . 
import bert, elmo, subwords, vocab File &quot;[...]/lib/python3.10/site-packages/gluonnlp/vocab/bert.py&quot;, line 24, in &lt;module&gt; from ..data.transforms import SentencepieceTokenizer File &quot;[...]/lib/python3.10/site-packages/gluonnlp/data/transforms.py&quot;, line 48, in &lt;module&gt; from .fast_bert_tokenizer import is_control, is_punctuation, is_whitespace ImportError: [...]/lib/python3.10/site-packages/gluonnlp/data/fast_bert_tokenizer.cpython-310-x86_64-linux-gnu.so: undefined symbol: _PyGen_Send </code></pre> <p><code>pip list</code> returns the following modules, if this information is helpful:</p> <pre><code> Package Version ------------------------ ----------- asttokens 2.2.1 backcall 0.2.0 certifi 2022.12.7 charset-normalizer 3.0.1 click 8.1.3 contourpy 1.0.7 cvxpy 1.3.0 cycler 0.11.0 Cython 0.29.33 decorator 5.1.1 ecos 2.0.12 executing 1.2.0 fasttext 0.9.2 filelock 3.9.0 fonttools 4.38.0 fst-pso 1.8.1 FuzzyTM 2.0.5 gensim 4.3.0 germansentiment 1.1.0 gluonnlp 0.10.0 graphviz 0.8.4 h5py 3.8.0 hdbscan 0.8.29 huggingface-hub 0.12.1 idna 3.4 ipython 8.10.0 jedi 0.18.2 joblib 1.2.0 kaleido 0.2.1 kiwisolver 1.4.4 llvmlite 0.39.1 matplotlib 3.7.0 matplotlib-inline 0.1.6 miniful 0.0.6 mxnet 1.9.1 nltk 3.8.1 numba 0.56.4 numpy 1.23.5 nvidia-cublas-cu11 11.10.3.66 nvidia-cuda-nvrtc-cu11 11.7.99 nvidia-cuda-runtime-cu11 11.7.99 nvidia-cudnn-cu11 8.5.0.96 osqp 0.6.2.post8 packaging 23.0 pandas 1.5.3 parso 0.8.3 pexpect 4.8.0 pickleshare 0.7.5 Pillow 9.4.0 pip 23.0.1 plotly 5.13.0 polars 0.16.7 prompt-toolkit 3.0.36 ptyprocess 0.7.0 pure-eval 0.2.2 pybind11 2.10.3 pyFUME 0.2.25 Pygments 2.14.0 pynndescent 0.5.8 pyparsing 3.0.9 pyportfolioopt 1.5.4 python-dateutil 2.8.2 pytz 2022.7.1 PyYAML 6.0 qdldl 0.1.5.post3 regex 2022.10.31 requests 2.28.2 scikit-learn 1.2.1 scipy 1.10.1 scs 3.2.2 sentence-transformers 2.2.2 sentencepiece 0.1.97 setuptools 64.0.2 simpful 2.9.0 simple-elmo 0.9.1 six 1.16.0 sklearn 0.0.post1 smart-open 6.3.0 Snowball 0.5.2 stack-data 
0.6.2 tenacity 8.2.1 threadpoolctl 3.1.0 tokenizers 0.13.2 top2vec 1.0.28 torch 1.13.1 torchvision 0.14.1 tqdm 4.64.1 traitlets 5.9.0 transformers 4.26.1 typing 3.6.6 typing_extensions 4.5.0 umap-learn 0.5.3 urllib3 1.26.14 wcwidth 0.2.6 wheel 0.38.4 wordcloud 1.8.2.2 </code></pre> <p>I tried reinstalling <code>cython</code>, <code>gluonnlp</code> and <code>mxnet</code> and would like to avoid using another python version if possible. The <a href="https://nlp.gluon.ai/install/install-more.html" rel="nofollow noreferrer">documentation</a> at least states that <code>python 3.5+</code> should be supported. I would be grateful for any pointer you might have!</p> <h4>Edit</h4> <p>I also opened an <a href="https://github.com/dmlc/gluon-nlp/issues/1595" rel="nofollow noreferrer">issue</a> describing my problem in the gluon GitHub repo.</p>
<python><pip><cython><gluon><mxnet>
2023-03-13 09:07:32
1
1,810
Max Teflon
75,719,802
12,967,353
Test code and branch coverage simultanously with Pytest
<p>I am using pytest to test my Python code.</p> <p>To test for code coverage (C0 coverage) I run <code>pytest --cov</code> and I can specify my desired coverage in my <code>pyproject.toml</code> file like this:</p> <pre><code>[tool.coverage.report] fail_under = 95 </code></pre> <p>I get this result with a coverage of 96.30%:</p> <pre><code>---------- coverage: platform linux, python 3.8.13-final-0 ----------- Name Stmts Miss Cover ------------------------------------------------------------------------------------------------ ..................................... Required test coverage of 95.0% reached. Total coverage: 96.30% </code></pre> <p>To test for branch coverage (C1 coverage) I run <code>pytest --cov --cov-branch</code>. I get this result with a coverage of 95.44%:</p> <pre><code>---------- coverage: platform linux, python 3.8.13-final-0 ----------- Name Stmts Miss Branch BrPart Cover -------------------------------------------------------------------------------------------------------------- ..................................... Required test coverage of 95.0% reached. Total coverage: 95.44% </code></pre> <p>I get two different coverage values, so I am measuring two different coverage metrics. What I would like to do is be able to test for code coverage AND branch coverage with the same command, and also be able to specify two different required coverages.</p> <p>For now, all I can do is execute pytest two times, with two disadvantages:</p> <ol> <li>I have to run my tests 2 times, so it takes twice as long.</li> <li>I am limited to the same required coverage for both.</li> </ol>
<python><testing><pytest><coverage.py>
2023-03-13 09:04:40
1
809
Kins
75,719,800
9,399,492
Importing local module from an embedded installation of Python
<p>I'm trying to make a little program that sets up a portable python 3 with specific packages, with no admin rights required.</p> <p>(The point of this is to execute some specific scripts with more ease and in a way that could be distributed to other people easily)</p> <p>So I have downloaded the latest <a href="https://www.python.org/ftp/python/3.11.2/python-3.11.2-embed-amd64.zip" rel="nofollow noreferrer">Windows embeddable package</a> of python and installed pip on it, along with the modules I needed.</p> <p>But when executing the python script that I made this entire mess for, it threw me a <code>ModuleNotFoundError</code>. The module in question is a local file, in the same directory as the main script.</p> <p>Normally, when importing a module, python looks in the installed packages folder, but also in the directory in which the script that's being executed is located. This doesn't seem to be the case with an embedded python by default, though.</p> <p>I had to change some settings in a <code>._pth</code> file for python to even look into the installed packages folder, thanks to <a href="https://stackoverflow.com/a/69659472">another post</a>. But I couldn't find what to do to make it also look into the directory of the script it's executing, like regular python does.</p> <p>I hope someone has an idea to fix that, thanks in advance :)</p>
<python><python-3.x><module><python-embedding>
2023-03-13 09:04:37
0
1,344
RedStoneMatt
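With the embeddable distribution, `sys.path` is controlled entirely by the `python3xx._pth` file, and the launched script's own directory is not added the way a full installation does it. Once `import site` is uncommented in the `._pth` (which was already needed for pip), a `sitecustomize.py` placed next to the interpreter can restore the usual behavior. This is a sketch with an assumption worth checking on the target Python version: that `sys.argv` is already populated when `site` runs (the `getattr` guard below keeps it harmless if not):

```python
# sitecustomize.py -- drop this next to python.exe in the embeddable
# distribution; it is imported automatically once "import site" is enabled
# in the python3xx._pth file.
import os
import sys

def add_script_dir(argv=None, path=None):
    """Prepend the launched script's directory, mimicking a full install."""
    argv = getattr(sys, "argv", None) if argv is None else argv
    path = sys.path if path is None else path
    if argv and argv[0] not in ("", "-c", "-m"):
        script_dir = os.path.dirname(os.path.abspath(argv[0]))
        if script_dir not in path:
            path.insert(0, script_dir)
    return path

add_script_dir()
```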
75,719,726
18,221,164
Python Invoke module fails
<p>I am trying to understand how the <code>invoke</code> module works in Python. I have installed the package using <code>pip install</code> and I am trying to invoke a script in the following manner.</p> <p>I create a <code>tasks.py</code>, since <code>invoke</code> searches for it, and created a small task as shown below.</p> <p><code>tasks.py</code></p> <pre><code>from invoke import task @task(name = &quot;webopener&quot;) def openwebpage(c,url=None): if url: c.run(f&quot;start {url}&quot;) else: print(&quot;I need a URL!&quot;) </code></pre> <p>I go to the directory and call:</p> <pre><code>E:\Test&gt;invoke webopener 'invoke' is not recognized as an internal or external command, operable program or batch file. </code></pre> <p>But <code>invoke</code> seems to have been installed:</p> <pre><code>E:\Test&gt;pip install invoke Requirement already satisfied: invoke in e:\python\python39\lib\site-packages (1.7.3) </code></pre> <p>I read that we need to add it to Path. But what directory needs to be added? This does not happen when installing or using other packages. Any suggestions?</p>
<python><python-3.x><invoke>
2023-03-13 08:57:02
1
511
RCB
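"'invoke' is not recognized" means the console script lives in a directory that is not on PATH. The directory to add is the interpreter's Scripts folder (where pip places `invoke.exe`), which Python can report itself; alternatively, `python -m invoke` runs the same tool without any PATH change. Note that for a `pip install --user` the script lands in a per-user Scripts directory instead, so this is the default-install case:

```python
import sysconfig

# The console-script directory pip installs invoke.exe into (for a normal,
# non --user install); on the setup above this would be something like
# E:\python\python39\Scripts.
scripts_dir = sysconfig.get_path("scripts")
print(scripts_dir)

# PATH-independent alternative, run from the directory containing tasks.py:
#   python -m invoke webopener --url https://example.com
```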
75,719,665
320,437
In Python issubclass() unexpectedly complains about "Protocols with non-method members"
<p>I have tried the obvious way to check my protocol:</p> <pre class="lang-py prettyprint-override"><code>from typing import Any, Protocol, runtime_checkable @runtime_checkable class SupportsComparison(Protocol): def __eq__(self, other: Any) -&gt; bool: ... issubclass(int, SupportsComparison) </code></pre> <p>Unfortunately the <code>issubclass()</code> call ends with an exception (Python 3.10.6 in Ubuntu 22.04):</p> <pre class="lang-bash prettyprint-override"><code>$ python3.10 protocol_test.py Traceback (most recent call last): File &quot;protocol_test.py&quot;, line 8, in &lt;module&gt; issubclass(object, SupportsComparison) File &quot;/usr/lib/python3.10/abc.py&quot;, line 123, in __subclasscheck__ return _abc_subclasscheck(cls, subclass) File &quot;/usr/lib/python3.10/typing.py&quot;, line 1570, in _proto_hook raise TypeError(&quot;Protocols with non-method members&quot; TypeError: Protocols with non-method members don't support issubclass() </code></pre> <p>As you can see I added no non-method members to <code>SupportsComparison</code>. Is this a bug in the standard library?</p>
<python><python-3.x><protocols><python-typing>
2023-03-13 08:50:50
1
1,573
pabouk - Ukraine stay strong
75,719,399
5,387,794
Kafka consumer error: failed to deserialize - the unknown protocol id
<p>I am running Kafka locally, and sending a &quot;hello, world!&quot; message with kafka-console-producer.sh, using the following command:</p> <pre><code>kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092 &gt; hello, world! </code></pre> <p>I have a Kafka consumer running using the <a href="https://github.com/quixio/quix-streams" rel="nofollow noreferrer">Quix Streams</a> library. The code for the consumer is:</p> <pre><code>import quixstreams as qx import pandas as pd client = qx.KafkaStreamingClient('127.0.0.1:9092') topic_consumer = client.get_topic_consumer(&quot;quickstart-events&quot;, consumer_group=None) def on_stream_received_handler(stream_received: qx.StreamConsumer): stream_received.timeseries.on_dataframe_received = on_dataframe_received_handler def on_dataframe_received_handler(stream: qx.StreamConsumer, df: pd.DataFrame): print(df.to_string()) topic_consumer.on_stream_received = on_stream_received_handler print(&quot;Listening to streams. 
Press CTRL-C to exit.&quot;) qx.App.run() </code></pre> <p>When I send the &quot;hello, world&quot; message to Kafka, I get the following error from the Kafka consumer:</p> <pre><code>[23-03-13 16:00:43.516 (4) ERR] Exception while processing package System.Runtime.Serialization.SerializationException: Failed to deserialize - the unknown protocol id '104' at QuixStreams.Transport.Fw.Helpers.TransportPackageValueCodec.Deserialize(Byte[]) + 0x135 at QuixStreams.Transport.Fw.DeserializingModifier.Publish(Package, CancellationToken) + 0x1b0 at QuixStreams.Transport.TransportConsumer.&lt;&gt;c__DisplayClass5_2.&lt;.ctor&gt;b__8(Package p) + 0x54 at QuixStreams.Transport.Fw.ByteMergingModifier.&lt;Publish&gt;d__13.MoveNext() + 0x531 --- End of stack trace from previous location --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() + 0x32 at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task) + 0xe9 at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task) + 0x7b at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task) + 0x2b at System.Runtime.CompilerServices.TaskAwaiter.GetResult() + 0x1a at QuixStreams.Transport.Kafka.KafkaConsumer.&lt;AddMessage&gt;d__66.MoveNext() + 0x333 </code></pre>
<python><apache-kafka>
2023-03-13 08:17:37
1
5,429
kovac
75,719,357
2,348,684
Docker conda multistage image, error with conda-pack
<p>I am trying to create a multistage docker image for a python conda project using conda-pack with the following Dockerfile on MacOS:</p> <pre><code>### build stage FROM continuumio/miniconda3 AS build COPY environment.yaml . # Create the conda environment and install conda-pack RUN conda env create -f environment.yaml RUN conda install -c conda-forge conda-pack # Use conda-pack to create a standalone environment in /venv: RUN conda-pack -n test -o /tmp/env.tar &amp;&amp; \ mkdir /venv &amp;&amp; cd /venv &amp;&amp; tar xf /tmp/env.tar &amp;&amp; \ rm /tmp/env.tar RUN /venv/bin/conda-unpack ### runtime stage # Use the official Miniconda3 image as the base image FROM continuumio/miniconda3 # Copy /venv from the previous stage: COPY --from=build /venv /venv </code></pre> <p>And the following environment.yaml file:</p> <pre><code>name: test dependencies: - python=3.8.10 - pip=20.3 - pip: - setuptools==59.5.0 </code></pre> <p>As a result, I am getting the following error:</p> <pre><code> &gt; [build 5/6] RUN conda-pack -n test -o /tmp/env.tar &amp;&amp; mkdir /venv &amp;&amp; cd /venv &amp;&amp; tar xf /tmp/env.tar &amp;&amp; rm /tmp/env.tar: #9 2.287 Collecting packages... #9 2.287 CondaPackError: #9 2.287 Files managed by conda were found to have been deleted/overwritten in the #9 2.287 following packages: #9 2.287 #9 2.287 - setuptools 65.6.3: #9 2.287 lib/python3.8/site-packages/pkg_resources/_vendor/__pycache__/zipp.cpython-38.pyc #9 2.287 lib/python3.8/site-packages/pkg_resources/_vendor/importlib_resources/__init__.py #9 2.287 lib/python3.8/site-packages/pkg_resources/_vendor/importlib_resources/__pycache__/__init__.cpython-38.pyc #9 2.287 + 187 others #9 2.287 #9 2.287 This is usually due to `pip` uninstalling or clobbering conda managed files, #9 2.287 resulting in an inconsistent environment. 
Please check your environment for #9 2.287 conda/pip conflicts using `conda list`, and fix the environment by ensuring #9 2.287 only one version of each package is installed (conda preferred). </code></pre> <p>It looks like there is an issue with the version 59.5.0 of setuptools which I am trying to install with pip... I only get the error when using conda-pack, but at the same time it is the approach I found to reduce the image size.</p> <p>I have tried to reinstall conda (current conda version is 23.1.0) but I am still getting the error. How can I solve it? My goal is to be able to build a multistage docker image of my Conda project and use conda-pack to reduce the size of the Docker image.</p>
<python><docker><pip><conda><conda-pack>
2023-03-13 08:11:32
1
371
user2348684
75,719,141
9,124,096
Why does a Python request work, but the same request made with curl does not?
<p>I am trying to make a request from a saved session of my LinkedIn account to send a message however when I use the &quot;curl&quot; request, it does not work. Yet, when I use the python requests version it works right away. I am unsure of what is wrong with my &quot;curl&quot; request.</p> <h3>Python Code:</h3> <pre><code>import requests import json reqUrl = &quot;https://www.linkedin.com/voyager/api/messaging/conversations?action=create&quot; headersList = { &quot;Host&quot;: &quot;www.linkedin.com&quot;, &quot;User-Agent&quot;: &quot;Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/110.0&quot;, &quot;Accept&quot;: &quot;application/json&quot;, &quot;Accept-Language&quot;: &quot;en-US,en;q=0.5&quot;, &quot;Accept-Encoding&quot;: &quot;gzip, deflate, br&quot;, &quot;Referer&quot;: &quot;https://www.linkedin.com/search/results/people/?keywords=ali&amp;origin=SWITCH_SEARCH_VERTICAL&amp;sid=HnO&quot;, &quot;x-restli-protocol-version&quot;: &quot;2.0.0&quot;, &quot;X-LI-Lang&quot;: &quot;en_US&quot;, &quot;X-LI-Track&quot;: &quot;{&quot;clientVersion&quot;:&quot;1.12.9&quot;,&quot;mpVersion&quot;:&quot;1.12.9&quot;,&quot;osName&quot;:&quot;web&quot;,&quot;timezoneOffset&quot;:5,&quot;timezone&quot;:&quot;Asia/Karachi&quot;,&quot;deviceFormFactor&quot;:&quot;DESKTOP&quot;,&quot;mpName&quot;:&quot;voyager-web&quot;,&quot;displayDensity&quot;:1,&quot;displayWidth&quot;:1920,&quot;displayHeight&quot;:1080}&quot;, &quot;X-li-page-instance&quot;: &quot;urn:li:page:d_flagship3_search_srp_people;zn3gn3MTTl6ilgF+0KG8hA==&quot;, &quot;Csrf-Token&quot;: &quot;ajax:your session&quot;, &quot;Content-Type&quot;: &quot;text/plain;charset=UTF-8&quot;, &quot;Content-Length&quot;: &quot;489&quot;, &quot;Origin&quot;: &quot;https://www.linkedin.com&quot;, &quot;Sec-Fetch-Dest&quot;: &quot;empty&quot;, &quot;Sec-Fetch-Mode&quot;: &quot;cors&quot;, &quot;Sec-Fetch-Site&quot;: &quot;same-origin&quot;, &quot;TE&quot;: &quot;trailers&quot; } payload = json.dumps({ 
&quot;keyVersion&quot;: &quot;LEGACY_INBOX&quot;, &quot;conversationCreate&quot;: { &quot;eventCreate&quot;: { &quot;value&quot;: { &quot;com.linkedin.voyager.messaging.create.MessageCreate&quot;: { &quot;attributedBody&quot;: { &quot;text&quot;: &quot;message&quot;, &quot;attributes&quot;: [] }, &quot;attachments&quot;: [] } } }, &quot;recipients&quot;: [ &quot;ACoAACAAZBIBVTrxu7umYeGDknEvSHy6CbWpFGI&quot; ], &quot;subtype&quot;: &quot;MEMBER_TO_MEMBER&quot; } }) response = requests.request(&quot;POST&quot;, reqUrl, data=payload, headers=headersList) print(response.text) </code></pre> <p>The same curl version does not work. Strage</p> <pre><code>curl -X POST \ 'https://www.linkedin.com/voyager/api/messaging/conversations?action=create' \ --header 'Host: www.linkedin.com' \ --header 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/110.0' \ --header 'Accept: application/json' \ --header 'Accept-Language: en-US,en;q=0.5' \ --header 'Accept-Encoding: gzip, deflate, br' \ --header 'Referer: https://www.linkedin.com/search/results/people/?keywords=ali&amp;origin=SWITCH_SEARCH_VERTICAL&amp;sid=HnO' \ --header 'x-restli-protocol-version: 2.0.0' \ --header 'X-LI-Lang: en_US' \ --header 'X-LI-Track: {&quot;clientVersion&quot;:&quot;1.12.9&quot;,&quot;mpVersion&quot;:&quot;1.12.9&quot;,&quot;osName&quot;:&quot;web&quot;,&quot;timezoneOffset&quot;:5,&quot;timezone&quot;:&quot;Asia/Karachi&quot;,&quot;deviceFormFactor&quot;:&quot;DESKTOP&quot;,&quot;mpName&quot;:&quot;voyager-web&quot;,&quot;displayDensity&quot;:1,&quot;displayWidth&quot;:1920,&quot;displayHeight&quot;:1080}' \ --header 'X-li-page-instance: urn:li:page:d_flagship3_search_srp_people;zn3gn3MTTl6ilgF+0KG8hA==' \ --header 'Csrf-Token: ajax:your session' \ --header 'Content-Type: text/plain;charset=UTF-8' \ --header 'Content-Length: 489' \ --header 'Origin: https://www.linkedin.com' \ --header 'Sec-Fetch-Dest: empty' \ --header 'Sec-Fetch-Mode: cors' \ --header 'Sec-Fetch-Site: 
same-origin' \ --header 'TE: trailers' \ --data-raw '{ &quot;keyVersion&quot;: &quot;LEGACY_INBOX&quot;, &quot;conversationCreate&quot;: { &quot;eventCreate&quot;: { &quot;value&quot;: { &quot;com.linkedin.voyager.messaging.create.MessageCreate&quot;: { &quot;attributedBody&quot;: { &quot;text&quot;: &quot;message&quot;, &quot;attributes&quot;: [] }, &quot;attachments&quot;: [] } } }, &quot;recipients&quot;: [ &quot;ACoAACAAZBIBVTrxu7umYeGDknEvSHy6CbWpFGI&quot; ], &quot;subtype&quot;: &quot;MEMBER_TO_MEMBER&quot; } }' </code></pre> <p>What is the difference between both of the requests? Please provide an explanation for why one works and for why another does not work</p>
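A practical way to debug a requests-vs-curl mismatch is to build the request with `requests` but not send it, then diff the prepared headers and body against what curl transmits. One common source of divergence in commands like the one above is a hand-set `Content-Length` (here hardcoded to 489) that no longer matches the actual payload. A minimal offline sketch; the endpoint and headers are illustrative, no network call is made:

```python
import json
import requests

# Prepare the exact request requests would send, without sending it,
# so its headers and body can be compared line-by-line with the curl command.
payload = json.dumps({"keyVersion": "LEGACY_INBOX"})
req = requests.Request(
    "POST",
    "https://www.linkedin.com/voyager/api/messaging/conversations?action=create",
    headers={"Content-Type": "text/plain;charset=UTF-8"},
    data=payload,
)
prep = req.prepare()

print(prep.method, prep.url)
for name, value in prep.headers.items():
    print(f"{name}: {value}")
print(prep.body)
```

Note that `prepare()` computes `Content-Length` from the real body; if the curl command pins a different value, the server may reject or stall on the request.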
<python><http><curl>
2023-03-13 07:44:03
0
1,451
foragerDev
75,719,072
3,336,412
Dynamically set sql-default value with table-name in SQLModel
<p>I'm trying to create a base-class in SQLModel which looks like this:</p> <pre class="lang-py prettyprint-override"><code>class BaseModel(SQLModel): @declared_attr def __tablename__(cls) -&gt; str: return cls.__name__ guid: Optional[UUID] = Field(default=None, primary_key=True) class SequencedBaseModel(BaseModel): sequence_id: str = Field(sa_column=Column(VARCHAR(50), server_default=text(f&quot;SELECT '{TABLENAME}_' + convert(varchar(10), NEXT VALUE FOR dbo.sequence)&quot;))) </code></pre> <p>so I got a table like this:</p> <pre class="lang-py prettyprint-override"><code>class Project(SequencedBaseModel): ... </code></pre> <p>where alembic would generate a migration for a table <code>Project</code> with columns <code>guid</code> and <code>sequence_id</code>. The default-value for sequence-id is a sequence which is generated with</p> <pre class="lang-sql prettyprint-override"><code>SELECT '{TABLENAME}_' + convert(varchar(10), NEXT VALUE FOR dbo.sequence) </code></pre> <p>and should insert into project-table the values <code>Project_1</code>, <code>Project_2</code>, <code>...</code></p> <p>Any idea on how to set the tablename dynamically? I cannot use a constructor for setting the columns because alembic is ignoring them, I cannot access the <code>__tablename__()</code> function, or <code>cls</code>, because the columns are static...</p>
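One approach (a sketch, not verified against alembic autogenerate): declare `sequence_id` itself as a `@declared_attr`, so SQLAlchemy evaluates it once per concrete subclass and `cls` is available when the server default is built. Shown here in plain SQLAlchemy terms, since SQLModel wraps the same declarative machinery; the T-SQL default is carried over from the question as-is:

```python
from sqlalchemy import Column, Integer, String, text
from sqlalchemy.orm import declarative_base, declared_attr

Base = declarative_base()

class SequencedMixin:
    # Evaluated per concrete subclass, so cls.__name__ is the table name
    # under the "tablename == class name" convention used above.
    @declared_attr
    def sequence_id(cls):
        return Column(
            String(50),
            server_default=text(
                f"SELECT '{cls.__name__}_' + convert(varchar(10), "
                f"NEXT VALUE FOR dbo.sequence)"
            ),
        )

class Project(SequencedMixin, Base):
    __tablename__ = "Project"
    guid = Column(Integer, primary_key=True)
```

Whether alembic picks up this `server_default` in autogenerate may still require a manual tweak to the migration; treat that part as an assumption to verify.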
<python><sqlalchemy><alembic><sqlmodel>
2023-03-13 07:35:01
1
5,974
Matthias Burger
75,718,285
12,724,372
Pandas : join 2 tables and compare the fields having the same name except for suffix
<p>I have joined 2 tables having same fields on a key and using a suffix int and prod for each.</p> <p>`int</p> <pre><code>id measure1 measure2 123 45 23 134 09 12 </code></pre> <p>prod</p> <pre><code>id measure1 measure2 123 16 45 134 65 01` </code></pre> <pre><code>df=prod.merge(int,on='id',suffixes=['_prod','_int']) </code></pre> <p>joined table would look like this.</p> <p>`</p> <pre><code>id measure1_int measure2_int measure1_prod measure2_prod 123 45 23 16 45 134 09 12 65 01` </code></pre> <p>I want to create a measure , column deviation_{measure_name} like so</p> <p>`</p> <pre><code>id measure1_int measure2_int measure1_prod measure2_prod deviation_measure1 deviation_measure2 123 45 23 16 45 29 -22 134 09 12 65 01 -56 11` </code></pre> <p>this is what I have done, what would be the fastest way to acheive this?</p> <pre><code>def f(a,b): return a-b for i in li: df[str('deviation_'+i)]=df.apply(lambda x: f(x[str(i+'_int')],x[str(i+'_prod')]), axis=1) </code></pre>
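The row-wise `apply` can be replaced with whole-column arithmetic, which pandas executes in C and is typically orders of magnitude faster. A sketch reproducing the data above (the right-hand frame is renamed `int_df` to avoid shadowing the builtin `int`):

```python
import pandas as pd

int_df = pd.DataFrame({"id": [123, 134], "measure1": [45, 9], "measure2": [23, 12]})
prod = pd.DataFrame({"id": [123, 134], "measure1": [16, 65], "measure2": [45, 1]})

df = prod.merge(int_df, on="id", suffixes=["_prod", "_int"])

# Vectorized: subtract whole columns instead of applying a lambda per row.
for m in ["measure1", "measure2"]:
    df[f"deviation_{m}"] = df[f"{m}_int"] - df[f"{m}_prod"]
```

This matches the expected output in the question: `deviation_measure1` is `29, -56` and `deviation_measure2` is `-22, 11`.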
<python><pandas><dataframe>
2023-03-13 05:24:03
1
1,275
Devarshi Goswami
75,718,202
1,114,872
typechecking for compose function in python does not seem to work
<p>I am trying to annotate the type of a 'compose' function, and I can't get mypy to tell me I am wrong.</p> <p>In the code below, the definitions of first and eval work as expected, changing the return type to (say) int gives me an error.</p> <p>However, the definition for compose does not. I can change the return value of the resulting function to pretty much anything, and mypy does not notice it.</p> <p>Is is supposed to be like that? What can I do to get typechecking that checks is particular type?</p> <pre><code>from typing import Sequence, TypeVar, Callable T = TypeVar('T') S = TypeVar('S') H = TypeVar('H') def first(l: Sequence[T], h : Sequence[S]) -&gt; tuple[T,S]: return (l[0],h[0]) def eval(f : Callable[[T],S], arg : T) -&gt; S : return f(arg) def compose (f : Callable[[T],S], g : Callable[[H],T]) -&gt; Callable[[H],int]: #no complaints? def aux(x): a = g(x) b = f(a) return b return aux </code></pre>
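mypy stays silent because the inner `aux` is unannotated: its inferred type is `Callable[..., Any]`, and `Any` is compatible with any declared return type, so even `Callable[[H], int]` passes. Annotating `aux` gives mypy something concrete to check against the declared return type — with the sketch below, declaring the return as `Callable[[H], int]` would be flagged:

```python
from typing import Callable, TypeVar

T = TypeVar("T")
S = TypeVar("S")
H = TypeVar("H")

def compose(f: Callable[[T], S], g: Callable[[H], T]) -> Callable[[H], S]:
    # The annotations on aux are what restore type checking here.
    def aux(x: H) -> S:
        return f(g(x))
    return aux

# str ∘ len : str -> int -> str
to_str_len = compose(str, len)
```

At runtime `compose(str, len)("abc")` returns `"3"`, and mypy now verifies the pipeline end to end.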
<python><mypy>
2023-03-13 05:06:00
1
1,512
josinalvo
75,718,164
4,508,962
Python : cannot call static method on run-time retrieved type
<p>I have a base class:</p> <pre><code>class MyBaseClass: @abstractmethod def from_list(list_float: List[float]) -&gt; Self: pass @staticmethod @abstractmethod def fn() -&gt; int: pass </code></pre> <p>All children of that class implement <code>from_list</code> and static <code>fn</code>, since they are abstract.</p> <p>Now, I have another class 'MyOptions' :</p> <pre><code>class MyOptions: @abstractclass def related_to_type(): type(MyBaseClass) </code></pre> <p>All <code>MyOptions</code> children implement <code>related_to_type</code>, which returns a type that is a child of <code>MyBaseClass</code>. What I want is:</p> <p>Provided a <code>MyOptions</code> child (which is unknown until runtime), call <code>fn</code> on its related type, returned by <code>related_to_type()</code>. In other term, I want:</p> <pre><code>options: MyOptions = MyOptionChild() type_opt = options.related_to_type() # this should return a MyBaseClass child type result = type_opt.fn() # error here </code></pre> <p>The error is that <code>type_opt</code> contains an variable of type <code>&lt;class 'abc.ABCMeta'&gt;</code>, on which I cannot call my static function <code>fn</code>. PS : I cannot make MyOptions generic with a type argument because the related type is known at run-time only</p> <p>What can I do to call a static function on a variable containing a &quot;reflected class&quot; in Python ? Typically, in java, you would do:</p> <pre><code>Method method = myBaseClassChild.getMethod(&quot;fn&quot;); int result = method.invoke(null); </code></pre>
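Calling a static method on a class object retrieved at runtime needs no reflection machinery in Python — a class is itself a first-class object. The error in the question usually means `related_to_type` returned the metaclass (`type(MyBaseClass)` evaluates to `ABCMeta` itself) rather than a concrete subclass. A sketch under that assumption:

```python
from abc import ABC, abstractmethod

class MyBaseClass(ABC):
    @staticmethod
    @abstractmethod
    def fn() -> int: ...

class MyChild(MyBaseClass):
    @staticmethod
    def fn() -> int:
        return 42

class MyOptions(ABC):
    @abstractmethod
    def related_to_type(self) -> type[MyBaseClass]: ...

class MyOptionChild(MyOptions):
    def related_to_type(self) -> type[MyBaseClass]:
        # Return the class itself, NOT type(MyBaseClass) (which is ABCMeta).
        return MyChild

options: MyOptions = MyOptionChild()
type_opt = options.related_to_type()
result = type_opt.fn()  # static call on a runtime-retrieved class
```

This is the Python equivalent of the Java `getMethod`/`invoke` snippet: the class reference replaces the `Method` object entirely.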
<python><reflection><static>
2023-03-13 04:57:09
1
1,207
Jerem Lachkar
75,718,114
11,922,765
Python Identify US holidays in a timeseries dataframe
<p>I have a dataframe with years of data. I want to detect, assign True/False or 1/0 if it is a Holiday.</p> <p>My code:</p> <pre><code>df = pd.DataFrame(index=['2004-10-01', '2004-10-02', '2004-10-03', '2004-10-04', '2004-10-05', '2004-10-06', '2004-10-07', '2004-10-08', '2004-10-09', '2004-10-10', '2018-07-25', '2018-07-26', '2018-07-27', '2018-07-28', '2018-07-29', '2018-07-30', '2018-07-31', '2018-08-01', '2018-08-02', '2018-08-03']) import holidays # Detect US holidays hldys = holidays.country_holidays('US') # I have to apply this to each date in the dataframe index df['isHoliday?'] = df.index.map(lambda x: hldy.get(x)) </code></pre> <p>Present output:</p> <pre><code> isHoliday? 2004-10-01 None 2004-10-02 None 2004-10-03 None 2004-10-04 None 2004-10-05 None 2004-10-06 None 2004-10-07 None 2004-10-08 None 2004-10-09 None 2004-10-10 None 2018-07-25 None 2018-07-26 None 2018-07-27 None 2018-07-28 None 2018-07-29 None 2018-07-30 None 2018-07-31 None 2018-08-01 None 2018-08-02 None </code></pre> <p><strong>Update</strong> I found the solution</p> <pre><code>us_hldys = holidays.country_holidays('US') df['isHoliday?'] = df.index.to_series().apply(lambda x: x in us_hldys) isHoliday? 2004-10-01 False 2004-10-02 False 2004-10-03 False 2004-10-04 False 2004-10-05 False 2004-10-06 False 2004-10-07 False 2004-10-08 False 2004-10-09 False 2004-10-10 False 2018-07-25 False 2018-07-26 False 2018-07-27 False 2018-07-28 False 2018-07-29 False 2018-07-30 False 2018-07-31 False 2018-08-01 False 2018-08-02 False 2018-08-03 False </code></pre>
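As a cross-check that needs nothing beyond pandas itself, `pandas.tseries.holiday.USFederalHolidayCalendar` can flag US federal holidays — a narrower set than the `holidays` package, so results may differ slightly. A sketch:

```python
import pandas as pd
from pandas.tseries.holiday import USFederalHolidayCalendar

idx = pd.to_datetime(["2018-07-03", "2018-07-04", "2018-07-05"])
df = pd.DataFrame(index=idx)

cal = USFederalHolidayCalendar()
hols = cal.holidays(start=idx.min(), end=idx.max())

# normalize() drops the time-of-day so timestamped indices still match
# the midnight-anchored holiday dates.
df["isHoliday?"] = df.index.normalize().isin(hols)
```

Here only 2018-07-04 (Independence Day) comes back `True`.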
<python><pandas><dataframe><numpy><python-holidays>
2023-03-13 04:44:38
2
4,702
Mainland
75,717,900
188,331
FastText Unsupervised Training only detects small number of Chinese words
<p>I'm using FastText to unsupervised train the model (the <code>data.train</code> file is a text file that contains 50,000 lines/1.7 million Chinese characters):</p> <pre><code>import fasttext model = fasttext.train_unsupervised(input='data.train', model='cbow', dim=300, epoch=10, neg=10, minn=2, minCount=20) model.save_model('data.bin') </code></pre> <p>The output is as follows:</p> <pre><code>Read 0M words Number of words: 21 Number of labels: 0 Progress: 100.0% words/sec/thread: 254151 lr: 0.000000 avg.loss: 4.173459 ETA: 0h 0m 0s </code></pre> <p>And the word list is as follows:</p> <pre><code>print(model.words) ['&lt;/s&gt;', 'd', 'btw', '=', 'iphone', 'in', 'post', '/', '3', '-'] </code></pre> <p>It cannot even detect a Chinese character. However, FastText itself supports Chinese; word vectors are trained and available here: <a href="https://fasttext.cc/docs/en/crawl-vectors.html" rel="nofollow noreferrer">https://fasttext.cc/docs/en/crawl-vectors.html</a></p> <p>What did I miss?</p> <hr /> <p><strong>Sample Data</strong></p> <pre><code>我是一個小蘋果 你好嗎?今天天氣很好 我是一個小男孩 我有一部新 iphone </code></pre> <p>I expected that the unsupervised training can identify the words like:</p> <pre><code>天氣 男孩 蘋果 今天 iphone </code></pre> <p><em>which means &quot;weather&quot;, &quot;boy&quot;, &quot;apple&quot;, &quot;today&quot;.</em></p>
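fastText's unsupervised trainer splits tokens on whitespace, and written Chinese has no spaces, so every whole line becomes one rare "word" that `minCount=20` then discards — leaving only the Latin fragments (`iphone`, `btw`, …) in the vocabulary. The official Chinese vectors were trained on pre-segmented text. Pre-segmenting the corpus before training (e.g. with a segmenter such as jieba — an assumption here, not executed) fixes this. A minimal illustration of the whitespace issue:

```python
# Whitespace tokenization sees an unsegmented Chinese sentence as ONE token.
line = "我是一個小蘋果"
tokens = line.split()
print(tokens)  # ['我是一個小蘋果'] — a single "word" from fastText's point of view

# Hypothetical preprocessing step (jieba is an assumption, not run here):
# import jieba
# segmented = " ".join(jieba.cut(line))   # e.g. "我 是 一個 小 蘋果"
```

After writing the segmented lines back to `data.train`, the vocabulary should contain actual Chinese words.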
<python><fasttext>
2023-03-13 03:46:45
0
54,395
Raptor
75,717,821
940,158
Debugging python running unreasonably slow when adding numbers
<p>I'm working on an NLP and I got bitten by an unreasonably slow behaviour in Python while operating with small amounts of numbers.</p> <p>I have the following code:</p> <pre class="lang-py prettyprint-override"><code>import random, time from functools import reduce def trainPerceptron(perceptron, data): learningRate = 0.002 weights = perceptron['weights'] error = 0 for chunk in data: input = chunk['input'] output = chunk['output'] # 12x slower than equivalent JS sum_ = 0 for key in input: v = weights[key] sum_ += v # 20x slower than equivalent JS #sum_ = reduce(lambda acc, key: acc + weights[key], input) actualOutput = sum_ if sum_ &gt; 0 else 0 expectedOutput = 1 if output == perceptron['id'] else 0 currentError = expectedOutput - actualOutput if currentError: error += currentError ** 2 change = currentError * learningRate for key in input: weights[key] += change return error # Build mock data structure data = [{ 'input': random.sample(range(0, 5146), 10), 'output': 0 } for _ in range(11514)] perceptrons = [{ 'id': i, 'weights': [0.0] * 5147, } for i in range(60)] # simulate 60 perceptrons # Simulate NLU for i in range(151): # 150 iterations hrstart = time.perf_counter() for perceptron in perceptrons: trainPerceptron(perceptron, data) hrend = time.perf_counter() print(f'Epoch {i} - Time for training: {int((hrend - hrstart) * 1000)}ms') </code></pre> <p>Running it on my M1 MBP I get the following numbers.</p> <pre><code>Epoch 0 - Time for training: 199ms Epoch 1 - Time for training: 198ms Epoch 2 - Time for training: 199ms Epoch 3 - Time for training: 197ms Epoch 4 - Time for training: 199ms ... Epoch 146 - Time for training: 198ms Epoch 147 - Time for training: 200ms Epoch 148 - Time for training: 198ms Epoch 149 - Time for training: 198ms Epoch 150 - Time for training: 198ms </code></pre> <p>Each epoch is taking around 200ms, which is unreasonably slow given the small amount of numbers that are being processed. 
I profiled the code with <code>cProfile</code> in order to find out what is going on:</p> <pre><code> 655306 function calls (655274 primitive calls) in 59.972 seconds Ordered by: cumulative time ncalls tottime percall cumtime percall filename:lineno(function) 3/1 0.000 0.000 59.972 59.972 {built-in method builtins.exec} 1 0.005 0.005 59.972 59.972 poc-small.py:1(&lt;module&gt;) 9060 59.850 0.007 59.850 0.007 poc-small.py:4(trainPerceptron) 1 0.006 0.006 0.112 0.112 poc-small.py:34(&lt;listcomp&gt;) 11514 0.039 0.000 0.106 0.000 random.py:382(sample) 115232 0.034 0.000 0.047 0.000 random.py:235(_randbelow_with_getrandbits) 11548 0.002 0.000 0.012 0.000 {built-in method builtins.isinstance} 11514 0.002 0.000 0.010 0.000 &lt;frozen abc&gt;:117(__instancecheck__) 183616 0.010 0.000 0.010 0.000 {method 'getrandbits' of '_random.Random' objects} 11514 0.002 0.000 0.008 0.000 {built-in method _abc._abc_instancecheck} 11514 0.002 0.000 0.006 0.000 &lt;frozen abc&gt;:121(__subclasscheck__) 115140 0.005 0.000 0.005 0.000 {method 'add' of 'set' objects} 11514 0.003 0.000 0.004 0.000 {built-in method _abc._abc_subclasscheck} 115232 0.004 0.000 0.004 0.000 {method 'bit_length' of 'int' objects} 151 0.003 0.000 0.003 0.000 {built-in method builtins.print} </code></pre> <p>This wasn't too helpful, so I tried with <a href="https://github.com/pyutils/line_profiler" rel="nofollow noreferrer">line_profiler</a>:</p> <pre><code>Timer unit: 1e-06 s Total time: 55.2079 s File: poc-small.py Function: trainPerceptron at line 4 Line # Hits Time Per Hit % Time Line Contents ============================================================== 4 @profile 5 def trainPerceptron(perceptron, data): 6 1214 301.0 0.2 0.0 learningRate = 0.002 7 1214 255.0 0.2 0.0 weights = perceptron['weights'] 8 1214 114.0 0.1 0.0 error = 0 9 13973840 1742427.0 0.1 3.2 for chunk in data: 10 13973840 1655043.0 0.1 3.0 input = chunk['input'] 11 13973840 1487543.0 0.1 2.7 output = chunk['output'] 12 13 # 12x slower than 
equivalent JS 14 13973840 1210755.0 0.1 2.2 sum_ = 0 15 139738397 13821056.0 0.1 25.0 for key in input: 16 139738397 13794656.0 0.1 25.0 v = weights[key] 17 139738396 14942692.0 0.1 27.1 sum_ += v 18 19 # 20x slower than equivalent JS 20 #sum_ = reduce(lambda acc, key: acc + weights[key], input) 21 22 13973839 1618273.0 0.1 2.9 actualOutput = sum_ if sum_ &gt; 0 else 0 23 24 13973839 1689194.0 0.1 3.1 expectedOutput = 1 if output == perceptron['id'] else 0 25 13973839 1365346.0 0.1 2.5 currentError = expectedOutput - actualOutput 26 13732045 1211916.0 0.1 2.2 if currentError: 27 241794 38375.0 0.2 0.1 error += currentError ** 2 28 241794 25377.0 0.1 0.0 change = currentError * learningRate 29 2417940 271237.0 0.1 0.5 for key in input: 30 2417940 332890.0 0.1 0.6 weights[key] += change 31 32 1213 405.0 0.3 0.0 return error </code></pre> <p>This shows that these 3 lines (that are adding the numbers) are taking the entire runtime budget:</p> <pre class="lang-py prettyprint-override"><code>for key in input: v = weights[key] sum_ += v </code></pre> <p>I thought throwing numpy at the problem in order to speed it up, but <code>input</code> has a very small size, which means that the overhead of calling numpy will hurt the performance more than the gains obtained by making the math operations with it.</p> <p>Anyhow, adding numbers shouldn't be that slow, which makes me believe something weird is going on with Python. In order to confirm my theory, I ported the code to Javascript. This is the result:</p> <pre class="lang-js prettyprint-override"><code>function trainPerceptron(perceptron, data) { const learningRate = 0.002; const weights = perceptron['weights']; let error = 0; for (const chunk of data) { const input = chunk['input']; const output = chunk['output']; const sum = input.reduce((acc, key) =&gt; acc + weights[key], 0); const actualOutput = sum &gt; 0 ? sum : 0; const expectedOutput = output === perceptron['id'] ? 
1 : 0; const currentError = expectedOutput - actualOutput; if (currentError) { error += currentError ** 2; const change = currentError * learningRate; for (const key in input) { weights[key] += change; } } } return error; } // Build mock data structure const data = new Array(11514); for (let i = 0; i &lt; data.length; i++) { const inputSet = new Set(); while (inputSet.size &lt; 10) { inputSet.add(Math.floor(Math.random() * 5146)); } const input = Array.from(inputSet); data[i] = { input: input, output: 0 }; } const perceptrons = Array.from({ length: 60 }, (_, i) =&gt; ({ id: i, weights: Array.from({ length: 5147 }, () =&gt; 0.0), })); // simulate 60 perceptrons // Simulate NLU for (let i = 0; i &lt; 151; i++) { // 150 iterations const hrstart = performance.now(); for (const perceptron of perceptrons) { trainPerceptron(perceptron, data); } const hrend = performance.now(); console.log(`Epoch ${i} - Time for training: ${Math.floor(hrend - hrstart)}ms`); } </code></pre> <p>When I run the JS code I get the following numbers:</p> <pre><code>Epoch 0 - Time for training: 30ms Epoch 1 - Time for training: 18ms Epoch 2 - Time for training: 17ms Epoch 3 - Time for training: 17ms Epoch 4 - Time for training: 17ms ... Epoch 147 - Time for training: 17ms Epoch 148 - Time for training: 17ms Epoch 149 - Time for training: 17ms Epoch 150 - Time for training: 17ms </code></pre> <p>These numbers confirm my theory. Python is being unreasonably slow. Any idea why or what exactly is making it perform so poorly?</p> <p>Runtime details:</p> <p>MacOS Ventura 13.2.1 (22D68) Macbook Pro M1 Pro 32GB Python 3.11.0 (native Apple Silicon)</p>
<javascript><python><performance><nlp>
2023-03-13 03:25:44
0
15,217
alexandernst
75,717,777
5,405,813
Running a YOLOv8 model trained on a free GPU on a CPU for single-class detection
<p>As a newbie trying object detection for a single class, I will be using a free GPU. Can a YOLOv8 model trained on a free GPU be used on a CPU to detect a single class after I have downloaded it?</p>
<python><yolo>
2023-03-13 03:12:16
1
455
bipin_s
75,717,593
12,144,502
Reading Sudokus from text file and applying backtracking
<p>I have just begun with python, so excuse me if these are noob questions or if the questions are already answered. I am trying to read multiple sudoku puzzles and apply the algorithm I found online. The code utilizes a grid(<code>list[list[int]]</code>) setup. I have tried looking for a solution online, but all I found was that I should convert the .txt file to a JSON file and continue from there.</p> <p><strong>The <code>.txt</code> file looks like this</strong>:</p> <pre><code>AME: sudoku TYPE: SUD COMMENT: 5 sudokus SUDOKU 1 034000600 002600080 068300007 003001005 059060072 000520046 205906000 000408050 000007004 SUDOKU 2 000504000 000089020 785000000 002346008 040290050 860000000 030007042 400060805 608005030 SUDOKU 3 040000300 036072000 078060400 083000600 000300005 025008070 300004800 000000024 764089530 SUDOKU 4 000074900 000000046 079000300 600728009 980503000 037940050 200000000 008030060 060490023 SUDOKU 5 026030470 000200510 700900020 509000000 000050000 000000307 080009001 034006000 095070640 EOF </code></pre> <hr /> <p><strong>Code from online</strong>:</p> <pre><code>def is_valid(grid, r, c, k): not_in_row = k not in grid[r] not_in_column = k not in [grid[i][c] for i in range(9)] not_in_box = k not in [grid[i][j] for i in range(r//3*3, r//3*3+3) for j in range(c//3*3, c//3*3+3)] return not_in_row and not_in_column and not_in_box def solve(grid, r=0, c=0): if r == 9: return True elif c == 9: return solve(grid, r+1, 0) elif grid[r][c] != 0: return solve(grid, r, c+1) else: for k in range(1, 10): if is_valid(grid, r, c, k): grid[r][c] = k if solve(grid, r, c+1): return True grid[r][c] = 0 return False grid = [ [0, 0, 4, 0, 5, 0, 0, 0, 0], [9, 0, 0, 7, 3, 4, 6, 0, 0], [0, 0, 3, 0, 2, 1, 0, 4, 9], [0, 3, 5, 0, 9, 0, 4, 8, 0], [0, 9, 0, 0, 0, 0, 0, 3, 0], [0, 7, 6, 0, 1, 0, 9, 2, 0], [3, 1, 0, 9, 7, 0, 2, 0, 0], [0, 0, 9, 1, 8, 2, 0, 0, 3], [0, 0, 0, 0, 6, 0, 1, 0, 0] ] solve(grid) print(*grid, sep='\n') </code></pre> <p>Instead of the static grid that 
needs to be inputted, I would instead use the <code>.txt</code> file. I just couldn't find anything to help me solve the issue at hand. This might be because I am new to python or I need help understanding how to convert to JSON files.</p> <p><a href="https://stackoverflow.com/questions/54748879/how-to-convert-txt-file-into-json-using-python">How to convert txt file into json using python?</a> I have looked at this, but I am unsure if this is even the right approach.</p> <p>Any feedback is appreciated, or even a nod in the right direction is also helpful.</p>
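There is no need for a JSON conversion: the file can be parsed directly into the `list[list[int]]` grids the solver expects by scanning for the `SUDOKU` markers and collecting the nine digit rows that follow each one. A sketch assuming the format shown above:

```python
def parse_sudokus(text: str) -> list[list[list[int]]]:
    """Parse the puzzle file format above into a list of 9x9 grids."""
    grids: list[list[list[int]]] = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("SUDOKU"):
            grids.append([])  # start a new grid
        elif len(line) == 9 and line.isdigit() and grids:
            grids[-1].append([int(ch) for ch in line])
    return grids

sample = """NAME: sudoku
TYPE: SUD
SUDOKU 1
034000600
002600080
068300007
003001005
059060072
000520046
205906000
000408050
000007004
EOF"""

grids = parse_sudokus(sample)
# Each parsed grid can then be passed straight to solve(grid).
```

For a real file, `grids = parse_sudokus(open("sudokus.txt").read())` would feed every puzzle to the backtracking solver in turn.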
<python><json><algorithm><artificial-intelligence>
2023-03-13 02:25:08
2
400
zellez11
75,717,408
11,922,765
Python Dataframe isocalendar() Boolean condition not producing desired result when ISO year is different from Gregorian year
<p>I am surprised that my simple boolean condition was producing a complete year result when I wanted only the first week's data of that year only.</p> <p>My code:</p> <pre><code># Some sample data df1 = pd.DataFrame([1596., 1537., 1482., 1960., 1879., 1824.],index=['2007-01-01 00:00:00', '2007-01-01 01:00:00', '2007-01-01 02:00:00', '2007-12-31 21:00:00', '2007-12-31 22:00:00', '2007-12-31 23:00:00']) df1.index = pd.to_datetime(df1.index,format = '%Y-%m-%d %H:%M:%S') # Consider and plot only the 2007 year and first week result year_plot = 2007 year_data = df1[(df1.index.year==year_plot)&amp;(df1.index.isocalendar().week==1)] print(year_data) DAYTON_MW Datetime 2007-01-01 00:00:00 1596.0 2007-01-01 01:00:00 1537.0 2007-01-01 02:00:00 1482.0 2007-01-01 03:00:00 1422.0 2007-01-01 04:00:00 1402.0 ... ... 2007-12-31 19:00:00 2110.0 2007-12-31 20:00:00 2033.0 2007-12-31 21:00:00 1960.0 2007-12-31 22:00:00 1879.0 2007-12-31 23:00:00 1824.0 192 rows × 1 columns year_data.plot(figsize=(15, 5), title='Week Of Data') plt.show() </code></pre> <p><a href="https://i.sstatic.net/SidOZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SidOZ.png" alt="enter image description here" /></a></p> <p>I need your help to know where the problem is.</p> <p><strong>Update</strong>: The problem has been found. Meantime, @J_H also found the issue. 
I am surprised that it behaves like this, treating the last days of 2007 as week 1.</p> <p><a href="https://i.sstatic.net/knIIE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/knIIE.png" alt="enter image description here" /></a></p> <p><strong>Result</strong>: Based on the accepted answer, the solution is</p> <pre><code>df1[(df1.index.isocalendar().year==year_plot)&amp;(df1.index.isocalendar().week==1)]\ .plot(figsize=(15, 5), title='Week Of Data')# plt.savefig('oneweek.png') plt.show() </code></pre> <p><a href="https://i.sstatic.net/GdcXZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GdcXZ.png" alt="enter image description here" /></a></p>
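The behaviour comes from the ISO 8601 definition, not a pandas bug: 2007-12-31 is a Monday, so it opens ISO week 1 of ISO year 2008 even though its Gregorian year is 2007. Filtering on `index.year` (Gregorian) together with `isocalendar().week` (ISO) therefore mixes two calendars, which is why both ISO fields must come from `isocalendar()`. A quick standard-library check:

```python
from datetime import date

# ISO calendar tuple: (ISO year, ISO week, ISO weekday)
iso_year, iso_week, iso_weekday = date(2007, 12, 31).isocalendar()
print(iso_year, iso_week)  # 2008 1 — Gregorian year 2007, but ISO week 1 of 2008
```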
<python><pandas><dataframe><matplotlib>
2023-03-13 01:33:41
3
4,702
Mainland
75,717,293
6,676,101
What is a pythonic way to create a factory method?
<p>I have a class, and I would prefer that people call a factory method in order to create instance of the class instead of instantiating the class directly.</p> <p>One of a few different reasons to use a factory method is so that we can use <code>functools.wrap</code> to wrap un-decorated callable.</p> <pre class="lang-python prettyprint-override"><code>import inspect class Decorator: def __init__(self, kallable): self._kallable = kallable @classmethod def make_decorator(kls, kallable): wrapped_kallable = kls(kallable) return functools.wrap(wrapped_kallable, kallable) def __call__(self, *args, **kwargs): func_name = self._kallable.__name__ func_name = inspect.currentframe().f_code.co_name print(&quot;ENTERING FUNCTION &quot;, func_name) ret_val = self._kallable(*args, **kwargs) print(&quot;LEAVING FUNCTION&quot;, self._kallable) return ret_val </code></pre> <p>Printing <code>&quot;ENTERING FUNCTION &quot;</code> is kind of a silly thing to do, I wanted to show you a <em><strong>minimal</strong></em> reproducible example of what I am trying to do (create a decorator-class which uses <code>functools.wrap</code> and which is instantiated from a factory method)</p>
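A pythonic shape for this factory is a `@classmethod` plus `functools.update_wrapper`. Note there is no `functools.wrap`: `functools.wraps` is the decorator-factory form of `update_wrapper`, and since the thing being wrapped here is an instance rather than a function being defined, calling `update_wrapper` directly is the simpler fit. A sketch:

```python
import functools

class Decorator:
    def __init__(self, kallable):
        self._kallable = kallable

    @classmethod
    def make_decorator(cls, kallable):
        instance = cls(kallable)
        # Copies __name__, __doc__, __module__, ... onto the instance
        # and sets instance.__wrapped__ = kallable.
        functools.update_wrapper(instance, kallable)
        return instance

    def __call__(self, *args, **kwargs):
        print("ENTERING FUNCTION", self._kallable.__name__)
        ret_val = self._kallable(*args, **kwargs)
        print("LEAVING FUNCTION", self._kallable.__name__)
        return ret_val

@Decorator.make_decorator  # the factory itself works as the decorator
def add(a, b):
    """Add two numbers."""
    return a + b
```

Because `make_decorator` is a classmethod, subclasses of `Decorator` inherit the factory and produce instances of themselves, which is the usual reason to prefer it over calling `Decorator(...)` directly.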
<python><python-3.x><factory><factory-pattern>
2023-03-13 00:56:56
1
4,700
Toothpick Anemone
75,717,172
5,407,050
How to use one_hot on a pytorch_directml tensor on privateuseone device?
<p>Does anyone know how to use Torch_directml one_hot? It seems to fail, while one_hot on 'cpu' device works. See the following snippets:</p> <pre><code>import torch import torch_directml import torch.nn.functional as F a_tensor = torch.tensor([[ 4, 3, 17, 0, 0], [ 1, 0, 13, 13, 0], [ 0, 0, 0, 0, 0], [ 0, 0, 0, 0, 0]], device='privateuseone:0') one_hot_codified = F.one_hot(a_tensor, 22) </code></pre> <p>Fails with error <code>RuntimeError: DirectML scatter doesn't allow partially modified dimensions. Please update the dimension so that the indices and input only differ in the provided dimension.</code></p> <p>While</p> <pre><code>import torch import torch_directml import torch.nn.functional as F a_tensor = torch.tensor([[ 4, 3, 17, 0, 0], [ 1, 0, 13, 13, 0], [ 0, 0, 0, 0, 0], [ 0, 0, 0, 0, 0]], device='cpu') one_hot_codified = F.one_hot(a_tensor, 22) print(one_hot_codified_2) </code></pre> <p>returns as expected:</p> <pre><code>tensor([[[0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], [[0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], [[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], [[1, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]]) </code></pre> <p>Thank you so much.</p>
<python><tensorflow><pytorch>
2023-03-13 00:21:36
1
334
Daniel Lema
75,717,075
2,143,578
wbgapi: resolve country name
<p>I'd like to use the <code>wbgapi</code> to get a list of countries with GDP per capita for 2021, with &quot;normal&quot; country names.</p> <pre><code>import wbgapi as wb gdppercap=wb.data.DataFrame('NY.GDP.PCAP.CD', wb.region.members('WLD')) # how do I pass the coder function to the rename method here, so that it returns the &quot;WBG NAME&quot;? gdppercap['YR2021'].sort_values(ascending = False).rename(index = wb.economy.coder) </code></pre> <p>The coder function works like this:</p> <pre><code>wb.economy.coder(['ARG', 'Korea']) ORIGINAL NAME WBG NAME ISO_CODE ARG Argentina ARG Korea Korea, Rep. KOR </code></pre> <p>But I don't understand how to isolate the second column, which is what I'm after.</p>
<python><function><iterator>
2023-03-12 23:47:06
1
1,593
Zubo
75,717,066
11,922,765
Python dataframe first week with complete 7 days data
<p>I have data frame containing years of data. I want to plot the first week's data that has all seven days.</p> <p>My code:</p> <pre><code># Extract first week data from the dataframe first_week_index=df1.loc[(df1.index.isocalendar().week==1)].index DatetimeIndex(['2005-01-03 00:00:00', '2005-01-03 01:00:00', '2005-01-03 02:00:00', '2005-01-03 03:00:00', '2005-01-03 04:00:00', '2005-01-03 05:00:00', '2005-01-03 06:00:00', '2005-01-03 07:00:00', '2005-01-03 08:00:00', '2005-01-03 09:00:00', ... '2018-01-07 14:00:00', '2018-01-07 15:00:00', '2018-01-07 16:00:00', '2018-01-07 17:00:00', '2018-01-07 18:00:00', '2018-01-07 19:00:00', '2018-01-07 20:00:00', '2018-01-07 21:00:00', '2018-01-07 22:00:00', '2018-01-07 23:00:00'], dtype='datetime64[ns]', name='Datetime', length=2352, freq=None) # As you can see in the above result, I cannot plot the first year (2005) first week data as it starts with 3rd day. # It does not have the first two days of the first week. Hence, I should search in this index, # which year's first week has all seven days data # Only the first week with complete seven days of data first_week_dates = first_week_index.strftime('%Y-%m-%d').unique() first_week_dates = pd.to_datetime(first_week_dates,format='%Y-%m-%d') year_with_firstWeek_7days = first_week_dates.year.value_counts() </code></pre> <p>Present output:</p> <pre><code>print(year_with_firstWeek_7days) 2007 7 2018 7 2006 6 2008 6 2012 6 2013 6 2017 6 2005 5 2011 5 2014 5 2009 4 2010 4 2015 4 2016 4 Name: Datetime, dtype: int64 </code></pre> <p>Now I know that only year 2007 and 2018s first week have complete seven days of data. I will plot it. This is fine. But my approach seems long and lengthy. Can you suggest a better approach?</p>
<python><pandas><dataframe><numpy><datetime>
2023-03-12 23:43:07
2
4,702
Mainland
75,717,005
2,112,406
neovim cannot autocomplete numpy (omni completion not found)
<p>I'm trying to get autocomplete to work, but it doesn't work with numpy. I have a script with numpy imported:</p> <pre><code>import numpy as np </code></pre> <p>and I'm typing <code>np.</code>, and nothing shows up. Instead I get an error that says <code>Omni completion (^O^N^P) Pattern not found</code>. It does work for the <code>sys</code> module, however. Interestingly, typing <code>np</code> returns the options <code>np module [jedi]</code> and <code>numpy [B]</code>.</p> <p>All the answers I could find online are regarding the python version. But I can verify that neovim sees the right python.</p> <p>I'm using the <code>pyenv</code> python:</p> <pre><code>➜ ~ which python ~/.pyenv/shims/python ➜ ~ python --version Python 3.11.2 </code></pre> <p>Typing <code>:!which python</code> also returns the same path.</p> <p>I have the following plugins:</p> <pre><code>deoplete-jedi deoplete.nvim jedi-vim </code></pre> <p><code>:checkhealth</code> shows that the latest <code>pynvim</code> is installed.</p> <p>What am I missing?</p>
<python><numpy><autocomplete><neovim>
2023-03-12 23:26:06
1
3,203
sodiumnitrate
75,716,950
14,729,820
How to install specific python version required by TensorFlow?
<p>I have the following <strong>operating system</strong>:</p> <pre><code>NAME=&quot;Ubuntu&quot; VERSION=&quot;20.04.5 LTS (Focal Fossa)&quot; ID=ubuntu ID_LIKE=debian PRETTY_NAME=&quot;Ubuntu 20.04.5 LTS&quot; VERSION_ID=&quot;20.04&quot; HOME_URL=&quot;https://www.ubuntu.com/&quot; SUPPORT_URL=&quot;https://help.ubuntu.com/&quot; BUG_REPORT_URL=&quot;https://bugs.launchpad.net/ubuntu/&quot; PRIVACY_POLICY_URL=&quot;https://www.ubuntu.com/legal/terms-and-policies/privacy-policy&quot; VERSION_CODENAME=focal UBUNTU_CODENAME=focal </code></pre> <p><code>Python</code> version: <code>Python 3.8.10</code></p> <p>I want to install a specific <code>tensorflow</code> version --&gt; <code>tensorflow&gt;=1.13.1,&lt;1.14</code> because it is in the requirements file <a href="https://github.com/Mohammed20201991/TextRecognitionDataGeneratorHuMu23/blob/master/requirements-hw.txt" rel="nofollow noreferrer">HERE</a>; it seems to require an older Python version like <code>Python 3.6</code>.</p> <p>What I am trying: I created a venv and tried to run <code>python3.6 -m venv &quot;my_env_name&quot;</code>, which returned <code>bash: python3.6: command not found</code>. In the meantime, I am not able to use sudo on this remote server. What I expect is to <code>downgrade the Python version</code> to be able to install the requirements.</p>
<python><tensorflow><installation><command-line><operating-system>
2023-03-12 23:16:05
1
366
Mohammed
75,716,873
16,665,831
Pandas API on spark runs too slow according to pandas
<p>I am making transform on my dataframe. while the process takes just 3 seconds with pandas, when I use pyspark and Pandas API on spark it takes approximately 30 minutes, yes 30 minutes! my data is 10k rows. The following is my pandas approach;</p> <pre><code>def find_difference_between_two_datetime(time1, time2): return int((time2-time1).total_seconds()) processed_data = pd.DataFrame() for unique_ip in data.ip.unique(): session_ids = [] id = 1 id_prefix = str(unique_ip) + &quot;_&quot; session_ids.append(id_prefix + str(id)) ip_data = data[data.ip == unique_ip] timestamps= [time for time in ip_data.time] for item in zip(timestamps, timestamps[1:]): if find_difference_between_two_datetime(item[0], item[1]) &gt; 30: id +=1 session_ids.append(id_prefix + str(id)) ip_data[&quot;session_id&quot;] = session_ids processed_data = pd.concat([processed_data, ip_data]) processed_data = processed_data.reset_index(drop=True) processed_data </code></pre> <p>And the following is my pyspark - Pandas API on spark approach;</p> <pre><code>import pyspark.pandas as ps def find_difference_between_two_datetime_spark(time1, time2): return int((time2-time1)/ 1000000000) spark_processed_data = ps.DataFrame() for unique_ip in data.ip.unique().to_numpy(): session_ids = [] id = 1 id_prefix = str(unique_ip) + &quot;_&quot; session_ids.append(id_prefix + str(id)) ip_data = data[data.ip == unique_ip] timestamps= ip_data.time.to_numpy() for item in zip(timestamps, timestamps[1:]): if find_difference_between_two_datetime_spark(item[0], item[1]) &gt; 30: id +=1 session_ids.append(id_prefix + str(id)) ip_data[&quot;session_id&quot;] = session_ids spark_processed_data = ps.concat([spark_processed_data, ip_data]) spark_processed_data = spark_processed_data.reset_index(drop=True) spark_processed_data </code></pre> <p>What I am missing about spark environment, I think it is not normal to run this code too slowly?</p>
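Independently of Spark, the per-IP loop can usually be replaced by a vectorized groupby/diff/cumsum. A sketch on toy data (column names mirror the question; the 30-second threshold is the same as above):

```python
import pandas as pd

data = pd.DataFrame({
    "ip": ["a", "a", "a", "b", "b"],
    "time": pd.to_datetime([
        "2023-01-01 00:00:00", "2023-01-01 00:00:10", "2023-01-01 00:01:00",
        "2023-01-01 00:00:00", "2023-01-01 00:00:05",
    ]),
})

data = data.sort_values(["ip", "time"])
gap = data.groupby("ip")["time"].diff().dt.total_seconds()
# a new session starts whenever the gap to the previous hit exceeds 30 seconds
session_no = (gap > 30).astype(int).groupby(data["ip"]).cumsum() + 1
data["session_id"] = data["ip"] + "_" + session_no.astype(str)
print(data["session_id"].tolist())  # ['a_1', 'a_1', 'a_2', 'b_1', 'b_1']
```

Row-by-row loops are also a worst case for Spark specifically, since every iteration triggers distributed work on a dataset that fits in memory; a vectorized form like this ports naturally to a window-function approach.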
<python><pandas><pyspark><pyspark-pandas>
2023-03-12 22:55:53
1
309
Ugur Selim Ozen
75,716,861
7,259,176
How can I generate example versions that match a PEP 440 Version specifier in Python?
<p>I have a PEP 440 Version specifier, such as <code>&quot;&gt;=1.0,&lt;2.0&quot;</code>, that defines a range of valid versions for a Python package. I would like to generate a list of example versions that match this specifier.</p> <p>For example, if the specifier is <code>&quot;&gt;=1.0,&lt;2.0&quot;</code>, some valid example versions could be <code>1.0.1</code>, <code>1.5.3</code>, or <code>1.9.9</code>. Invalid example versions could be <code>0.9.5</code>, <code>2.0.0</code>, or <code>3.0.0</code>.</p> <p>What is the easiest way to generate such example versions in Python (3.10)? Should I use a regular expression, the packaging library, or some other method?</p> <p>My current approach is to <code>split</code> at commas and remove the <a href="https://peps.python.org/pep-0440/#version-specifiers" rel="nofollow noreferrer">comparison operators</a> from the version specifier and then <code>try</code> to create a <code>Version</code> from it.</p> <p>In the following example I allow invalid versions, because I don't mind them for my use case.</p> <pre class="lang-py prettyprint-override"><code>from packaging.version import Version, InvalidVersion comp_ops = [&quot;===&quot;, &quot;~=&quot;, &quot;&lt;=&quot;, &quot;&gt;=&quot;, &quot;!=&quot;, &quot;==&quot;, &quot;&lt;&quot;, &quot;&gt;&quot;] # order matters version_spec = &quot;&gt;=1.0,&lt;2.0&quot; versions = [] for v in version_spec.split(','): version = v for op in comp_ops: version = version.removeprefix(op) try: versions.append(Version(version)) except InvalidVersion: pass </code></pre>
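The packaging library (which the question already uses) can do the matching itself via `SpecifierSet`, so no operator-stripping is needed; a sketch that generates candidate versions and filters them:

```python
import itertools

from packaging.specifiers import SpecifierSet
from packaging.version import Version

spec = SpecifierSet(">=1.0,<2.0")
# enumerate a small grid of candidate versions and keep the matching ones
candidates = (Version(f"{a}.{b}.{c}")
              for a, b, c in itertools.product(range(4), repeat=3))
matching = [str(v) for v in candidates if v in spec]
print(matching[:3])  # ['1.0.0', '1.0.1', '1.0.2']
```

`SpecifierSet.filter(iterable)` performs the same filtering and additionally applies PEP 440's pre-release handling rules.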
<python><regex><version><packaging><pep>
2023-03-12 22:54:09
1
2,182
upe
75,716,668
3,943,868
How to sort list of strings based on alphabetical order?
<p>For example, I have a list of strings:</p> <pre><code>How to make pancakes from scratch How to change a tire on a car How to install a new showerhead How to change clothes </code></pre> <p>I want it to be sorted as in the new list:</p> <pre><code>How to change a tire on a car How to change clothes How to install a new showerhead How to make pancakes from scratch </code></pre> <p>In each of these strings, &quot;How to&quot; is the same, so it has to look at the next character, and then get the order 'c/i/m', then 'a/c', etc.</p> <pre><code>L = ['How to make pancakes from scratch', 'How to change a tire on a car', 'How to install a new showerhead', 'How to change clothes'] </code></pre> <p>I can sort one string in alphabetical order. But how do I extend the logic to a list of strings?</p>
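The built-in `sorted` already compares strings character by character, which yields exactly the order described; a sketch:

```python
L = ['How to make pancakes from scratch',
     'How to change a tire on a car',
     'How to install a new showerhead',
     'How to change clothes']

sorted_L = sorted(L)   # lexicographic, character by character
print(sorted_L[0])     # How to change a tire on a car
```

For a case-insensitive ordering, `sorted(L, key=str.lower)` works the same way.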
<python>
2023-03-12 22:09:05
1
7,909
marlon
75,716,634
2,697,954
Automatic1111 > ControlNet API sending 500 server error
<p>I am using Automatic1111 and it is working fine on my local machine and txt2img API is also working fine but when I try to use controlNet it always gives me the server 500 response, any help or guide to where should I look for solution?</p> <pre class="lang-py prettyprint-override"><code>import requests import base64 # Open the control image file in binary mode with open(&quot;poses\poses_base.png&quot;, &quot;rb&quot;) as f: # Read the image data image_data = f.read() # Encode the image data as base64 image_base64 = base64.b64encode(image_data) # Convert the base64 bytes to string image_string = image_base64.decode(&quot;utf-8&quot;) # Define the payload with the prompt and other parameters payload = { &quot;prompt&quot;: &quot;A PURPLE VEST&quot;, &quot;negative_prompt&quot;: &quot;&quot;, &quot;width&quot;: 512, &quot;height&quot;: 512, &quot;steps&quot;: 100, &quot;cfg&quot;: 10, &quot;sampler_index&quot;: &quot;DPM++ 2S a Karras&quot;, &quot;controlnet_units&quot;: [ { &quot;input_image&quot;: image_string, &quot;mask&quot;: '', &quot;module&quot;: &quot;none&quot;, &quot;model&quot;: &quot;control_sd15_canny [fef5e48e]&quot;, &quot;weight&quot;: 1.6, &quot;resize_mode&quot;: &quot;Scale to Fit (Inner Fit)&quot;, &quot;lowvram&quot;: False, &quot;processor_res&quot;: 512, &quot;threshold_a&quot;: 64, &quot;threshold_b&quot;: 64, &quot;guidance&quot;: 1, &quot;guidance_start&quot;: 0, &quot;guidance_end&quot;: 1, &quot;guessmode&quot;: True } ] } # Send the request to the API endpoint response = requests.post(url=f'http://127.0.0.1:7860/controlnet/txt2img', json=payload) # Print the response print(response) </code></pre>
<python><stable-diffusion>
2023-03-12 21:59:51
2
4,971
Imran Bughio
75,716,257
5,191,069
Why can I update the .data attribute of a pytorch tensor when the variable is outside the local namespace
<p>I'm able to access and update the .data attribute of a pytorch tensor when the variable is outside a functions namespace:</p> <pre><code>x = torch.zeros(5) def my_function(): x.data += torch.ones(5) my_function() print(x) # tensor([1., 1., 1., 1., 1.]) </code></pre> <p>When I (attempt to) update x in the regular fashion though (i.e. <code>x += y</code>), I get an error &quot;UnboundLocalError: local variable 'x' referenced before assignment&quot;. This is expected because x is outside of <code>my_function</code>'s namespace.</p> <pre><code>x = torch.zeros(5) def my_function(): x += torch.ones(5) # UnboundLocalError: local variable 'x' referenced before assignment my_function() </code></pre> <p>Why can I update <code>x</code> via .data but not with its regular += operator?</p>
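This is a Python scoping rule, not a PyTorch one: an augmented assignment to a bare name (`x += ...`) makes `x` local to the function, while attribute or element access (`x.data`, `x[0]`) only *reads* the outer name and then mutates the object it refers to. The same behavior, sketched with a plain list so it runs without torch:

```python
x = [0, 0, 0]

def mutate():
    x[0] += 1      # reads the global x, then mutates the list in place

def rebind():
    x += [1]       # assignment to the *name* x -> x becomes local -> error

mutate()
print(x)           # [1, 0, 0]

try:
    rebind()
except UnboundLocalError as exc:
    print("rebind failed:", exc)
```

Declaring `global x` inside the function (or, for tensors, using in-place operations such as `x.add_(...)`) is the usual fix.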
<python><pytorch>
2023-03-12 20:40:59
2
630
jstm
75,716,145
2,523,581
pivot wider but keep repeating the index
<p>I have a dataframe in pandas of the sort</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({ &quot;id&quot;: [1, 1, 1, 1, 2, 2, 2, 2], &quot;column&quot;: [&quot;a&quot;, &quot;b&quot;,&quot;a&quot;, &quot;b&quot;, &quot;a&quot;, &quot;b&quot;, &quot;a&quot;, &quot;b&quot;], &quot;value&quot;: [1, 7, 6, 5, 4, 3, 1, 7] }) </code></pre> <p>I want to generate a new dataframe where we have</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>id</th> <th>value_a</th> <th>value_b</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>1</td> <td>7</td> </tr> <tr> <td>1</td> <td>6</td> <td>5</td> </tr> <tr> <td>2</td> <td>4</td> <td>3</td> </tr> <tr> <td>2</td> <td>1</td> <td>7</td> </tr> </tbody> </table> </div> <p>I tried many things, pivot, pivot table and so on, but all solutions seem to need index column which gets unique and the values being aggregated in some way. I want keep repeating id and have the values in the original order they appeared.</p> <p>Thanks in advance!</p>
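A de-duplicating counter per `(id, column)` pair gives `pivot` the unique index it needs while preserving the original order; a sketch with the frame from the question:

```python
import pandas as pd

df = pd.DataFrame({
    "id": [1, 1, 1, 1, 2, 2, 2, 2],
    "column": ["a", "b", "a", "b", "a", "b", "a", "b"],
    "value": [1, 7, 6, 5, 4, 3, 1, 7],
})

# number the repeats within each (id, column) pair to get a unique index
df["row"] = df.groupby(["id", "column"]).cumcount()
wide = (df.pivot(index=["id", "row"], columns="column", values="value")
          .add_prefix("value_")
          .reset_index()
          .drop(columns="row"))
print(wide)
```

Because the pivot index is unique, no aggregation happens and the values stay in their original per-group order.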
<python><pandas><pivot><pivot-table>
2023-03-12 20:18:05
4
630
Nikola
75,716,069
12,501,684
Why doesn't this way of checking a type work?
<p>I am creating a Python object from its string representation:</p> <pre class="lang-py prettyprint-override"><code>obj = ast.literal_eval(obj_text) </code></pre> <p>Now, I want to make sure that the object has an appropriate type. In this case, I want it to be a list of strings. This can be done, <a href="https://stackoverflow.com/questions/18495098/python-check-if-an-object-is-a-list-of-strings">as explained here</a>. Still, I thought that maybe newer (3.10) Python versions will have an easier way to do that, so I tried this instead:</p> <pre class="lang-py prettyprint-override"><code>type_ok = (type(obj) == list[str]) </code></pre> <p>For some reason, this is always false. So I tried <code>is</code> instead:</p> <pre class="lang-py prettyprint-override"><code>type_ok = (type(obj) is list[str]) </code></pre> <p>But this also always evaluates to <code>False</code>. Why is this the case? Isn't <code>list[str]</code> the type of, say, <code>[&quot;a&quot;, &quot;b&quot;, &quot;c&quot;]</code>? Anyway it seems, like I'll have to use a <code>for</code> loop, if this doesn't work.</p>
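`type(obj)` is the runtime class `list` for every list, while `list[str]` is a `types.GenericAlias` intended for annotations, so the comparison can never be true; runtime checking still needs `isinstance` plus an element check. A sketch:

```python
obj = ["a", "b", "c"]

print(type(obj) == list[str])   # False: list[str] is an annotation object
print(type(obj) is list)        # True: the runtime type carries no item type

type_ok = isinstance(obj, list) and all(isinstance(s, str) for s in obj)
print(type_ok)                  # True
```

Generics are erased at runtime, which is also why `isinstance(obj, list[str])` raises a `TypeError` rather than checking the elements.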
<python><string><list><types><python-3.10>
2023-03-12 20:06:25
2
5,075
janekb04
75,715,755
2,778,405
.dt accessor fails, but python functions work
<p>For some reason the .dt accessor is having funny results compared to regular datetime functions in pandas. This screen shot is the best example:</p> <p><a href="https://i.sstatic.net/GoUDt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GoUDt.png" alt="enter image description here" /></a></p> <p>As you can see at exactly one date all dates start failing, however vanilla python will work just fine:</p> <p><a href="https://i.sstatic.net/qKCwY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qKCwY.png" alt="enter image description here" /></a></p> <p>Here's the code I'm trying.</p> <pre><code>import pandas as pd ## Add Year Index and Month Index # df_final['Year'] = df['Date'].dt.to_period('Y') # df_final['Month'] = df['Date'].dt.to_period('M') # df_final['Month'] = df['Date'].dt.month fn = lambda x: x.month r = df_final['Date'].apply(fn) </code></pre> <p>Data set is <a href="https://drive.google.com/file/d/1DdpTu9VqaZzRBwb51zp51FY9FpKf9DUq/view?usp=sharing" rel="nofollow noreferrer">here</a></p> <p>How is this happening? Its double weird because those DateTime values are created at the same time and inserted into both rows as a single variable.</p>
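A frequent cause of this symptom (an assumption, since the linked file isn't reproduced here) is that the `Date` column is object dtype, mixing real datetimes with strings or unparseable values, so `.dt` misbehaves past the first bad value while a plain `apply` keeps working on the good ones. Converting explicitly and inspecting what fails to parse usually pinpoints it:

```python
import pandas as pd

# hypothetical mixed column: real Timestamps plus strings
df = pd.DataFrame({"Date": [pd.Timestamp("2020-01-15"), "2020-02-15", "not a date"]})

dates = pd.to_datetime(df["Date"], errors="coerce")
bad = df.loc[dates.isna(), "Date"]        # the rows that silently broke .dt
print(bad.tolist())                        # ['not a date']
print(dates.dt.month.head(2).tolist())     # [1.0, 2.0] (float because of the NaT row)
```

Once the column is a proper `datetime64` dtype, `df['Date'].dt.month` and `df['Date'].apply(lambda x: x.month)` agree.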
<python><pandas><datetime>
2023-03-12 19:18:16
1
2,386
Jamie Marshall
75,715,657
925,913
Getting tox to use the Python version set by pyenv
<p>I can't seem to wrap my head around managing Python versions. When I run <code>tox</code>, I can immediately see that it's using Python 3.7.9:</p> <pre class="lang-bash prettyprint-override"><code>$ tox py39: commands[0]&gt; coverage run -m pytest ================================================================================== test session starts ================================================================================== platform darwin -- Python 3.7.9, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 -- /usr/local/bin/python3 </code></pre> <p>But it's configured to use 3.9:</p> <pre class="lang-bash prettyprint-override"><code>[tox] envlist = py39,manifest,check-formatting,lint skipsdist = True usedevelop = True indexserver = spotify = https://artifactory.spotify.net/artifactory/api/pypi/pypi/simple [testenv] basepython = python3.9 deps = :spotify:-r{toxinidir}/dev-requirements.txt commands = coverage run -m pytest {posargs} allowlist_externals = coverage [testenv:manifest] ; a safety check for source distributions basepython = python3.9 deps = check-manifest skip_install = true commands = check-manifest </code></pre> <p>Here's what I see with <code>which</code>:</p> <pre class="lang-bash prettyprint-override"><code>$ pyenv local 3.9.10 $ which python /Users/acheong/.pyenv/shims/python $ which python3 /Library/Frameworks/Python.framework/Versions/3.7/bin/python3 $ pyenv which python /Users/acheong/.pyenv/versions/3.9.10/bin/python </code></pre> <p><code>pytest</code> also uses the wrong version:</p> <pre class="lang-bash prettyprint-override"><code>$ pytest tests ================================================================================== test session starts ================================================================================== platform darwin -- Python 3.7.9, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 -- /Library/Frameworks/Python.framework/Versions/3.7/bin/python3 cachedir: .pytest_cache rootdir: /Users/acheong/src/spotify/protean/ezmode-cli, 
configfile: tox.ini, testpaths: tests plugins: mock-3.10.0, cov-2.10.0 </code></pre> <p>But in this case I learned I can do this:</p> <pre class="lang-bash prettyprint-override"><code>$ pyenv exec pytest tests ================================================================================== test session starts ================================================================================== platform darwin -- Python 3.9.10, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 -- /Users/acheong/.pyenv/versions/3.9.10/bin/python cachedir: .pytest_cache rootdir: /Users/acheong/src/spotify/protean/ezmode-cli, configfile: tox.ini, testpaths: tests plugins: mock-3.10.0, cov-2.10.0 </code></pre> <p>But when I try that with <code>tox</code>, I get an error:</p> <pre class="lang-bash prettyprint-override"><code>$ pyenv exec tox Traceback (most recent call last): File &quot;/Users/acheong/.pyenv/versions/3.9.10/bin/tox&quot;, line 8, in &lt;module&gt; sys.exit(run()) File &quot;/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/tox/run.py&quot;, line 19, in run result = main(sys.argv[1:] if args is None else args) File &quot;/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/tox/run.py&quot;, line 38, in main state = setup_state(args) File &quot;/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/tox/run.py&quot;, line 53, in setup_state options = get_options(*args) File &quot;/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/tox/config/cli/parse.py&quot;, line 38, in get_options guess_verbosity, log_handler, source = _get_base(args) File &quot;/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/tox/config/cli/parse.py&quot;, line 61, in _get_base MANAGER.load_plugins(source.path) File &quot;/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/tox/plugin/manager.py&quot;, line 90, in load_plugins self._register_plugins(inline) File 
&quot;/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/tox/plugin/manager.py&quot;, line 38, in _register_plugins self.manager.load_setuptools_entrypoints(NAME) File &quot;/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/pluggy/_manager.py&quot;, line 287, in load_setuptools_entrypoints plugin = ep.load() File &quot;/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/importlib/metadata.py&quot;, line 77, in load module = import_module(match.group('module')) File &quot;/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/importlib/__init__.py&quot;, line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1030, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1007, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 986, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 680, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 850, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 228, in _call_with_frames_removed File &quot;/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/tox_pyenv.py&quot;, line 48, in &lt;module&gt; from tox import hookimpl as tox_hookimpl ImportError: cannot import name 'hookimpl' from 'tox' (/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/tox/__init__.py) </code></pre> <p>I've tried a lot of things I found online but I'm afraid if anything I've only messed up my environment even more. What steps can I take to diagnose the problem and get <code>tox</code> to use my <code>pyenv</code> version?</p>
<python><pytest><virtualenv><pyenv><tox>
2023-03-12 19:01:19
4
30,423
Andrew Cheong
75,715,537
10,093,446
Do you replicate type hints of function within the docstring of the function? PEP Guidelines?
<h1>Background</h1> <p>Suppose that there is a function:</p> <pre class="lang-py prettyprint-override"><code>def do_product(a: int, b: int) -&gt; int: return a * b </code></pre> <p>And I start documenting my function:</p> <pre class="lang-py prettyprint-override"><code>def do_product(a: int, b: int) -&gt; int: &quot;&quot;&quot;This function gets two integers and returns the product :param a: first integer :param b: second integer :return: return the product of a and b&quot;&quot;&quot; return a * b </code></pre> <p>When I use Sphinx, I get the type hints imported into the API documentation by using the <a href="https://github.com/tox-dev/sphinx-autodoc-typehints" rel="nofollow noreferrer">sphinx-autodoc-typehints</a> extension of sphinx. The advantage of doing that is that I declare types only once when programming and cannot end up with discrepancies between the documentation and the function declaration. I am currently working on a PR where I was asked to declare the types within the docstring. This would then look like this:</p> <pre class="lang-py prettyprint-override"><code>def do_product(a: int, b: int) -&gt; int: &quot;&quot;&quot;This function gets two integers and returns the product :param int a: first integer :param int b: second integer :return int: return the product of a and b&quot;&quot;&quot; return a * b </code></pre> <p>This doesn't make sense, as it seems like replication of the declaration within the function.</p> <h1>Question:</h1> <ol> <li>When writing docstrings, do you need to define types?</li> <li>What are the best practices for documenting type hints in Python?</li> <li>Is there a PEP standard for this? I could not find an appropriate one.</li> </ol>
<python><python-sphinx><python-typing><docstring><pep>
2023-03-12 18:41:10
1
548
Pieter Geelen
75,715,491
19,491,471
pandas merge or join on multiple columns
<p>I am trying to join two tables based on two columns. Let's say we have:</p> <pre><code>df1 = key atTime c d e 1 3 4 a 100 23 8 3 g 230 ... df2 = xkey xatTime z 30 2 p 1 3 l ... </code></pre> <p>Now I want to merge/join these two dataframes in such a way that a row is merged only if both the key and the time match. In this example I want to achieve something like (a left join is done here):</p> <pre><code>resultDF = key atTime c d e z 1 3 4 a 100 l 23 8 3 g 230 nan </code></pre> <p>How can I achieve a merge in pandas based on two columns?</p> <p>Thanks</p>
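`merge` accepts lists for `left_on`/`right_on`, which covers the differing column names; a sketch with the question's sample data:

```python
import pandas as pd

df1 = pd.DataFrame({"key": [1, 23], "atTime": [3, 8],
                    "c": [4, 3], "d": ["a", "g"], "e": [100, 230]})
df2 = pd.DataFrame({"xkey": [30, 1], "xatTime": [2, 3], "z": ["p", "l"]})

# left join on the (key, time) pair; drop the duplicated right-hand keys
result = (df1.merge(df2, how="left",
                    left_on=["key", "atTime"],
                    right_on=["xkey", "xatTime"])
             .drop(columns=["xkey", "xatTime"]))
print(result)
```

If the right-hand columns were named identically, a single `on=["key", "atTime"]` would suffice.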
<python><pandas>
2023-03-12 18:35:45
0
327
Amin
75,715,460
17,082,611
NotFittedError (instance is not fitted yet) after invoked cross_validate
<p>This is my minimal reproducible example:</p> <pre><code>import numpy as np from sklearn.naive_bayes import GaussianNB from sklearn.model_selection import cross_validate x = np.array([ [1, 2], [3, 4], [5, 6], [6, 7] ]) y = [1, 0, 0, 1] model = GaussianNB() scores = cross_validate(model, x, y, cv=2, scoring=(&quot;accuracy&quot;)) model.predict([8,9]) </code></pre> <p>What I intended to do was instantiate a <a href="https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.GaussianNB.html" rel="nofollow noreferrer">Gaussian Naive Bayes Classifier</a> and use <a href="https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html" rel="nofollow noreferrer">sklearn.model_selection.cross_validate</a> to cross-validate my model (I am using <code>cross_validate</code> instead of <code>cross_val_score</code> since in my real project I need precision, recall and f1 as well).</p> <p>I have read in the doc that <code>cross_validate</code> does &quot;evaluate metric(s) by cross-validation and also record fit/score times.&quot;</p> <p>I expected that my <code>model</code> would have been fitted on the <code>x</code> (features), <code>y</code> (labels) data, but when I invoke <code>model.predict(.)</code> I get:</p> <blockquote> <p>sklearn.exceptions.NotFittedError: This GaussianNB instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator.</p> </blockquote> <p>Of course it tells me to invoke <code>model.fit(x,y)</code> before &quot;using the estimator&quot; (that is, before invoking <code>model.predict(.)</code>).</p> <p>Shouldn't the model have been fitted <code>cv=2</code> times when I invoke <code>cross_validate(...)</code>?</p>
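`cross_validate` fits *clones* of the estimator and leaves the passed-in object untouched, so a final `fit` (or `return_estimator=True` to retrieve the per-fold fitted clones) is still needed; also note that `predict` expects a 2-D array. A sketch with the example data:

```python
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.naive_bayes import GaussianNB

x = np.array([[1, 2], [3, 4], [5, 6], [6, 7]])
y = [1, 0, 0, 1]

model = GaussianNB()
scores = cross_validate(model, x, y, cv=2, scoring="accuracy")

model.fit(x, y)                    # cross_validate fit clones; fit the original now
pred = model.predict([[8, 9]])     # 2-D: one sample with two features
print(scores["test_score"], pred)
```

The cloning is deliberate: each fold must start from an unfitted estimator, otherwise folds would leak into each other.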
<python><machine-learning><scikit-learn><cross-validation>
2023-03-12 18:31:29
1
481
tail
75,715,426
2,592,835
pathfinding algorithm bug in python
<p>I have been using python's pathfinding module from pip but its not working it seems to find a path that goes through obstacles (which is unacceptable for my videogame).</p> <p>Here's the sample code:</p> <pre><code>from pathfinding.core.diagonal_movement import DiagonalMovement from pathfinding.core.grid import Grid from pathfinding.finder.a_star import AStarFinder import random blockpositions=[] def makegrid(x2,y2): grid=[] for x in range(x2): grid.append([]) for y in range(y2): if random.randint(1,5)==1: blockpositions.append((x,y)) grid[x].append(0) else: grid[x].append(1) return grid grid=makegrid(50,50) startpos=(0,0) endpos=(49,49) finder = AStarFinder(diagonal_movement=DiagonalMovement.always) grid2=Grid(matrix=grid) start = grid2.node(startpos[0],startpos[1]) end = grid2.node(endpos[0],endpos[1]) path, runs = finder.find_path(start, end, grid2) for x in path: if x in blockpositions: print(&quot;block in path!&quot;) print(path) </code></pre> <p>returns the following:</p> <pre><code>block in path! block in path! block in path! block in path! block in path! block in path! block in path! block in path! block in path! [(0, 0), (1, 1), (2, 1), (3, 2), (4, 3), (5, 4), (6, 5), (7, 6), (8, 7), (9, 8), (10, 9), (11, 9), (12, 10), (13, 11), (13, 12), (14, 13), (15, 14), (16, 15), (17, 16), (18, 17), (19, 18), (20, 19), (21, 20), (22, 21), (23, 22), (24, 23), (25, 23), (26, 24), (26, 25), (27, 26), (28, 27), (29, 28), (30, 29), (31, 30), (32, 31), (33, 31), (34, 32), (35, 33), (36, 34), (37, 35), (38, 36), (39, 37), (40, 38), (41, 39), (42, 40), (43, 41), (43, 42), (43, 43), (44, 44), (45, 45), (46, 46), (47, 47), (48, 48), (49, 49)] </code></pre> <p>I know the grid is random so theres always a chance it will get stuck but ive been running this algorithm in my videogame and get the same errors</p>
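One likely cause (an assumption; worth checking against the library's docs): `Grid(matrix=...)` reads the outer list as rows (y) and the inner list as columns (x), while `makegrid` above fills `grid[x][y]`, so the obstacles the finder avoids are the *transposed* cells and the `(x, y)` tuples in `blockpositions` don't describe the grid the finder sees. Building the matrix row-major keeps both coordinate systems in sync; a library-free sketch of the invariant that should hold:

```python
import random

random.seed(0)

def make_grid(width, height, block_prob=0.2):
    """Row-major matrix (matrix[y][x]) plus the blocked (x, y) positions."""
    blocked = set()
    matrix = [[1] * width for _ in range(height)]   # 1 = walkable
    for y in range(height):
        for x in range(width):
            if random.random() < block_prob:
                matrix[y][x] = 0                    # 0 = obstacle
                blocked.add((x, y))
    return matrix, blocked

matrix, blocked = make_grid(50, 50)
# node (x, y) is an obstacle exactly when matrix[y][x] == 0
print(all(matrix[y][x] == 0 for (x, y) in blocked))  # True
```

With the matrix built this way, a path returned as `(x, y)` pairs can be checked directly against `blocked` without any transposition.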
<python>
2023-03-12 18:26:44
0
1,627
willmac
75,715,376
1,595,865
Set the formatting style for logging - percent, format style / bracket or f-string
<p>Python has multiple string formatting styles for arguments:</p> <ul> <li><p>Old-style/percent/<code>%</code>: <code>&quot;The first number is %s&quot; % (1,)</code></p> </li> <li><p>New-style/bracket/<code>{</code>: <code>&quot;The first number is {}&quot;.format(1)</code></p> </li> <li><p>F-string: <code>one = 1; f&quot;the first number is {one}&quot;</code></p> </li> </ul> <p>I can format log messages manually at the message creation time: <code>log.error(&quot;the first number is {}&quot;.format(1))</code>. However this can lead to <a href="https://stackoverflow.com/questions/4148790/lazy-logger-message-string-evaluation?noredirect=1&amp;lq=1">performance problems</a> and <a href="https://medium.com/swlh/why-it-matters-how-you-log-in-python-1a1085851205" rel="nofollow noreferrer">cause bugs</a> when formatting complex objects.</p> <p>How do I configure logging to use the style I want in individual logs (e.g. <code>logging.error</code>)?</p>
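Two separate knobs are involved: `logging.Formatter(style=...)` picks the style for the *record layout*, while the arguments of individual log calls are always merged lazily with %-style. For per-call brace formatting, the logging cookbook's wrapper-object pattern defers `str.format` until (and unless) the record is actually rendered; a sketch:

```python
import logging

handler = logging.StreamHandler()
# {}-style controls the layout of the record, not the message arguments
handler.setFormatter(logging.Formatter("{levelname}:{message}", style="{"))
log = logging.getLogger("demo")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.error("the first number is %s", 1)   # lazy: only formatted if emitted

class BraceMessage:
    """Cookbook pattern: defer str.format until the record is rendered."""
    def __init__(self, fmt, *args):
        self.fmt, self.args = fmt, args
    def __str__(self):
        return self.fmt.format(*self.args)

log.error(BraceMessage("the first number is {}", 1))
```

Both calls stay lazy, so expensive `__str__`/`__format__` work is skipped for records below the configured level, which addresses the performance concern in the linked posts.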
<python><python-logging>
2023-03-12 18:21:35
2
23,522
loopbackbee
75,715,274
1,484,601
python singledispatch with several arguments
<p>for example:</p> <pre class="lang-py prettyprint-override"><code>@singledispatch def f( a: str, b: list | dict )-&gt;None: ... @f.register def flist( a: str, b: list )-&gt;None: print(&quot;flist:&quot;,type(b)) @f.register def fdict( a: str, b: dict )-&gt;None: print(&quot;fdict:&quot;,type(b)) a = &quot;---&quot; b = [1,2] f(a,b) b = {1:2} f(a,b) </code></pre> <p>My (apparently incorrect) understanding, is that this should print (I hope for obvious reasons):</p> <pre><code>flist: &lt;class 'list'&gt; fdict: &lt;class 'dict'&gt; </code></pre> <p>but in fact, this prints:</p> <pre><code>fdict: &lt;class 'list'&gt; fdict: &lt;class 'dict'&gt; </code></pre> <p>why does the first call to f redirect to 'fdict', despite 'b' being a list ?</p>
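`singledispatch` keys on the type of the *first* positional argument only; here both overloads annotate `a: str`, so both register for `str` and the later registration (`fdict`) wins for every call. Putting the varying argument first restores the intended behavior; a sketch:

```python
from functools import singledispatch

@singledispatch
def f(b, a: str) -> None:
    raise NotImplementedError(type(b))

@f.register
def _(b: list, a: str) -> None:   # registered for list via the annotation on b
    print("flist:", type(b))

@f.register
def _(b: dict, a: str) -> None:   # registered for dict
    print("fdict:", type(b))

f([1, 2], "---")   # flist: <class 'list'>
f({1: 2}, "---")   # fdict: <class 'dict'>
```

If the argument order must stay as in the question, a plain `isinstance` branch (or, from Python 3.11, union annotations in `register`) is the alternative; `singledispatch` itself cannot dispatch on a later argument.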
<python><functools><single-dispatch>
2023-03-12 18:05:43
0
4,521
Vince
75,714,883
7,256,554
How to test a FastAPI endpoint that uses lifespan function?
<p>Could someone tell me how I can test an endpoint that uses the new <a href="https://fastapi.tiangolo.com/advanced/events/" rel="noreferrer">lifespan</a> feature from FastAPI?</p> <p>I am trying to set up tests for my endpoints that use resources from the lifespan function, but the test failed since the dict I set up in the lifespan function is not passed to the TestClient as part of the FastAPI app.</p> <p>My API looks as follows.</p> <pre><code>from fastapi import FastAPI from contextlib import asynccontextmanager ml_model = {} @asynccontextmanager async def lifespan(app: FastAPI): predictor = Predictor(model_version) ml_model[&quot;predict&quot;] = predictor.predict_from_features yield # Clean up the ML models and release the resources ml_model.clear() app = FastAPI(lifespan=lifespan) @app.get(&quot;/prediction/&quot;) async def get_prediction(model_input: str): prediction = ml_model[&quot;predict&quot;](model_input) return prediction </code></pre> <p>And the test code for the <code>/prediction</code> endpoint looks as follows:</p> <pre><code>from fastapi.testclient import TestClient from app.main import app client = TestClient(app) def test_read_prediction(): model_input= &quot;test&quot; response = client.get(f&quot;/prediction/?model_input={model_input}&quot;) assert response.status_code == 200 </code></pre> <p>The test failed with an error message saying <code>KeyError: 'predict'</code>, which shows that the <code>ml_models</code> dict was not passed with the app object. I also tried using <code>app.state.ml_models = {}</code>, but that didn't work either. I would appreciate any help!</p>
<python><testing><fastapi>
2023-03-12 17:08:07
3
2,260
lux7
75,714,781
1,857,373
Fit Error OneVsOneClassifier, OneVsRestClassifier Logistic Regression for digits recognition multilabel classifier model fit error
<p><strong>Problem</strong></p> <p>New error on the problem:</p> <pre><code>UserWarning: X has feature names, but LogisticRegression was fitted without feature names warnings.warn(UserWarning: X has feature names, but LogisticRegression was fitted without feature names </code></pre> <p>The goal is to use RandomizedSearchCV to tune a parameter, then fit two models, one each for OneVsOneClassifier and OneVsRestClassifier, and then check the accuracy of each model using the tuned parameter from RandomizedSearchCV. I define two models to fit on the digit-recognition MNIST dataset for multi-label classification prediction.</p> <p>I set up hyper-parameter tuning using GridSearchCV, with a simple 'estimator__C': [0.1, 1, 100, 200] for LogisticRegression. For auditing, I print the computed grid parameters. I provide a scaled X_train object to the model and then run the fit.</p> <p>The problem occurred while running on a Kaggle GPU P100. When I execute ovr_grid_search.fit() &amp; ovo_grid_search.fit(), the run finishes but the next step errors. By adding verbose=1 and error_score=&quot;raise&quot; to the RandomizedSearchCV classifier, I determined that I needed to replace StandardScaler with MinMaxScaler.</p> <p><strong>Error</strong> The error is that LogisticRegression was fitted without feature names.</p> <pre><code>/Users/matthew/opt/anaconda3/lib/python3.9/site-packages/sklearn/base.py:443: UserWarning: X has feature names, but LogisticRegression was fitted without feature names </code></pre> <p><strong>Code</strong></p> <pre><code>from sklearn.linear_model import LogisticRegression from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier from sklearn.model_selection import GridSearchCV from sklearn.preprocessing import StandardScaler scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train.astype(np.float64)) ovr_model = OneVsRestClassifier(LogisticRegression()) ovo_model = OneVsOneClassifier(LogisticRegression()) param_grid = { 'estimator__C': [0.1, 1, 100, 200] } 
ovr_grid_param = RandomizedSearchCV(ovr_model, param_grid, cv=5, n_jobs=8) ovo_grid_param = RandomizedSearchCV(ovo_model, param_grid, cv=5, n_jobs=8) print(&quot;OneVsRestClassifier best params: &quot;, ovr_grid_param) print(&quot;OneVsOneClassifier best params: &quot;, ovo_grid_param) min_max_scaler = preprocessing.MinMaxScaler() X_train_scaled = min_max_scaler.fit_transform(X_train) ### below code is the problem area ** ovr_grid_param.fit(X_train_scaled, y_train) ovo_grid_param.fit(X_train_scaled, y_train) </code></pre> <p><strong>Data</strong> The digit recognition MNIST dataset. X_train scaled data</p> <pre><code>array([[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]]) </code></pre> <p>y_train data</p> <pre><code>33092 5 30563 2 17064 4 16679 9 30712 0 30177 0 11735 3 1785 8 4382 3 21702 7 37516 3 9476 6 4893 5 22117 0 12646 8 </code></pre> <p><strong>RandomisedSearch execution results</strong></p> <pre><code>OneVsRestClassifier best params: RandomizedSearchCV(cv=5, error_score='raise', estimator=OneVsRestClassifier(estimator=LogisticRegression()), n_jobs=3, param_distributions={'estimator__C': [0.1, 1, 100, 200], 'estimator__max_iter': [2500, 4500, 6500, 9500, 14000]}) OneVsOneClassifier best params: RandomizedSearchCV(cv=5, error_score='raise', estimator=OneVsOneClassifier(estimator=LogisticRegression()), n_jobs=3, param_distributions={'estimator__C': [0.1, 1, 100, 200], 'estimator__max_iter': [2500, 4500, 6500, 9500, 14000]}) </code></pre>
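The feature-names warning typically means the scaler was fitted on a DataFrame (which carries column names) while predictions later used a bare array, or vice versa. Putting the scaler inside a `Pipeline`, so it is fitted within each CV split, avoids both that mismatch and train/test leakage; a sketch on the built-in digits data (not the question's Kaggle file):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = load_digits(return_X_y=True)

pipe = Pipeline([
    ("scale", MinMaxScaler()),                                   # fitted per split
    ("ovr", OneVsRestClassifier(LogisticRegression(max_iter=500))),
])
# step-name prefixes route the parameter through the pipeline and the wrapper
search = RandomizedSearchCV(pipe, {"ovr__estimator__C": [0.1, 1, 10]},
                            n_iter=3, cv=3, random_state=0)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Because the scaler and classifier travel together, `search.predict` applies the same (name-free) transformation it was fitted with, and the warning disappears.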
<python><scikit-learn><logistic-regression><multilabel-classification><gridsearchcv>
2023-03-12 16:54:59
1
449
Data Science Analytics Manager
75,714,718
9,078,185
Keras Sequential: constrain weights based on other weights
<p>I am new to using TensorFlow, and am trying to make a model that essentially dictates how to apportion a budget across some number of options. Hence I need to constrain the weights on each variable not only to be positive but also so that all of the weights add up to the budget (e.g., 100% or similar).</p> <p>I know I can constrain the weights individually like this:</p> <pre><code>model.add(tf.keras.layers.Dense(1, input_shape=[1], activation='linear', kernel_constraint=tf.keras.constraints.min_max_norm(min_value=0.0, max_value=1.0))) </code></pre> <p>But can the sum of my weights be constrained as well?</p>
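<p>While waiting for an answer, I prototyped the projection itself outside Keras. This is only a plain-NumPy sketch of the &quot;clip to non-negative, rescale to the budget&quot; idea; I am assuming (unverified) that a custom <code>tf.keras.constraints.Constraint</code> subclass could apply the same operation to the kernel tensor in its <code>__call__</code>:</p>

```python
import numpy as np

def project_to_budget(w, budget=1.0):
    """Hypothetical helper: clip negatives, then rescale so the weights sum to `budget`."""
    w = np.clip(np.asarray(w, dtype=float), 0.0, None)
    total = w.sum()
    if total == 0.0:
        # Degenerate case: spread the budget evenly across the options.
        return np.full_like(w, budget / w.size)
    return w * (budget / total)

weights = project_to_budget([0.2, -0.1, 0.3])
print(weights)        # [0.4 0.  0.6]
print(weights.sum())  # 1.0
```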
<python><tensorflow><keras>
2023-03-12 16:46:08
0
1,063
Tom
75,714,609
17,158,703
Calculus of two 2D arrays on different dimensions
<p>I have two 2D arrays which I have to subtract, but not on the same dimension, and the arrays have different sizes. If I understood correctly there is something called broadcasting to do what I need, and I think this post (<a href="https://stackoverflow.com/questions/72616936/add-arrays-of-different-dimensions">Link</a>) shows everything needed to do it, but I have difficulties applying it to my problem, and moreover, there might be a more direct way than what I have thought of so far.</p> <h2>The Problem</h2> <p>Here is the general logic of the problem (illustrated below, apologies for black font if you are in dark mode): I've got two arrays, currently both 2D, but with different numbers of rows (m and n rows). In their current shape, they only match in columns, and I want to subtract every value of one array from every value of the other array column-wise.</p> <p><a href="https://i.sstatic.net/sHn2r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sHn2r.png" alt="enter image description here" /></a></p> <p>I don't know which functions and operations can and should be used to do this efficiently without looping. I currently have a loop in place to do this, but the data I'm dealing with here is so large that it takes over 12 minutes to complete.</p> <h2>My Idea</h2> <p>I thought I might be able to first (1) transpose the first array onto different dimensions and then (2) repeat it by the number of rows of the other array, in order to get a <code>m*n*500</code> array when concatenated.</p> <p><a href="https://i.sstatic.net/NGV7o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NGV7o.png" alt="enter image description here" /></a></p> <p>I then do the same for the second array, just for different dimensions, and repeat it by the number of what were previously the rows of the other array. 
Then when concatenated they should have the same size.</p> <p><a href="https://i.sstatic.net/TYTVH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TYTVH.png" alt="enter image description here" /></a></p> <p>Then, finally, I would be able to simply subtract the two array from one another without any dimension conflicts.</p> <p><a href="https://i.sstatic.net/Dttu8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dttu8.png" alt="enter image description here" /></a></p> <h2>Code Example</h2> <p>Code-wise there is really not much to show here. Currently I'm looping through the rows of one of the arrays and then transpose it before using it in subtraction.</p> <pre><code>self.C = [] # list to hold the resulting arrays for i in range(self.A.shape[0]): # loop over rows A = pd.DataFrame(self.A.iloc[i, :]).T C = pd.DataFrame(self.B.to_numpy() - A.to_numpy(), index=self.index) self.C.append((self.A.index[i], C)) </code></pre> <p>The result is not even a 3D array but a list of tuples containing the indices and 2D arrays.</p> <h2>The Ask</h2> <p>As already mentioned above, I don't know how to do this in numpy and I have difficulties applying answers from similar questions to my problem. I'd appreciate your help on this.</p>
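<p>For reference, this is my current understanding of the broadcasting approach, as a sketch with small made-up arrays (not my real data): inserting a new axis on each array lets NumPy pair every row of one with every row of the other without materializing any repeats first:</p>

```python
import numpy as np

A = np.arange(6).reshape(2, 3)    # m = 2 rows, 3 columns
B = np.arange(12).reshape(4, 3)   # n = 4 rows, same 3 columns

# (m, 1, cols) - (1, n, cols) broadcasts to (m, n, cols), i.e. C[i, j] = A[i] - B[j]
C = A[:, None, :] - B[None, :, :]
print(C.shape)  # (2, 4, 3)
```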
<python><numpy><numpy-ndarray>
2023-03-12 16:31:13
1
823
Dattel Klauber
75,714,478
648,045
Splitting text based on a delimiter in Python
<p>I have just started learning Python. I am working on the netflix_titles dataset downloaded from Kaggle. Some of the entries in the director column have multiple director names separated by commas, and I was trying to separate the director names using the split function.</p> <p>The following is one of the original values loaded from the file into the dataframe:</p> <blockquote> <p>s7 Movie My Little Pony: A New Generation, <strong>Robert Cullen, José Luis Ucha Vanessa Hudgens</strong>, ..</p> </blockquote> <p>I am using the following code to do the split:</p> <pre><code>def strip(x): x = x.strip().split(',') return x director_counts = df[&quot;director&quot;].apply(strip) </code></pre> <p>After the above code executes, the output is as follows:</p> <blockquote> <p>s7 [Robert Cullen, José Luis Ucha]</p> </blockquote> <p>The director name is not split on the comma, and I am also seeing the index (s7) returned from the function even though I passed just the director column to the function. Can anyone please tell me why it is behaving this way?</p> <p>Edit: Tried this as well:</p> <pre><code>director_counts = df['director'].str.split(',\s*') </code></pre> <p>Link to Colab: <a href="https://colab.research.google.com/drive/1OXJ9XKCBVg4-6W8Hiqfy4ZTkgz0IVqbR?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1OXJ9XKCBVg4-6W8Hiqfy4ZTkgz0IVqbR?usp=sharing</a></p>
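<p>To check my understanding of <code>apply</code> here, I made a tiny standalone example (made-up names, not the Kaggle file). It seems to show that the split does work and that the &quot;s7&quot; I'm seeing is just the Series index that pandas prints next to each value, not something returned by my function:</p>

```python
import pandas as pd

# Hypothetical mini version of the director column; the index mimics show ids.
s = pd.Series(["Robert Cullen, José Luis Ucha", "Vanessa Hudgens"],
              index=["s7", "s8"], name="director")

split = s.str.split(", ")
print(split)  # the index labels s7/s8 are printed next to each list value
```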
<python><pandas>
2023-03-12 16:10:52
2
4,953
logeeks
75,714,426
19,130,803
on/off the logging based on flag python
<p>I have implemented logging using Python's standard logging module (I referred to the official Python docs). I have created a separate logger.py file that returns a logger, which I import and use for logging in the other modules.</p> <p>Now I want functionality that will enable/disable (turn on/off) the logging. For this I am thinking of defining a boolean flag, but since I have many files, this approach forces me to write an <code>if condition</code> at every line where I log.</p> <p>Is there a better approach to achieve the same task?</p>
<python>
2023-03-12 16:04:20
1
962
winter
75,714,415
10,407,102
Displaying images blobs on Django html templates using SaS token
<p>I'm currently trying to get blob images displayed in the HTML of Django templates using a SAS token, but I don't see any online implementation examples. I currently have the SAS token but don't know how to use it in Django.</p> <p>I tried to manually add the token to the end of the URL:</p> <p>https://{account}.blob.core.windows.net/{container}/{path}.jpg/?sp=r&amp;st=2023-03-11T17:53:36Z&amp;se=2050-03-12T01:53:36Z&amp;sv=2021-12-02&amp;sr=c&amp;sig={****}</p>
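<p>In case it helps to show what I'm attempting: my current idea is to join the blob URL and the SAS query string in the view and pass the result into the template context, so the template only does <code>&lt;img src=&quot;{{ blob_url }}&quot;&gt;</code>. All names below are placeholders, not my real account or token:</p>

```python
# Hypothetical helper: join a blob URL and a SAS token into one usable URL.
def blob_url_with_sas(account, container, blob_path, sas_token):
    base = f"https://{account}.blob.core.windows.net/{container}/{blob_path}"
    return f"{base}?{sas_token.lstrip('?')}"

url = blob_url_with_sas("myaccount", "photos", "cats/1.jpg",
                        "sp=r&sv=2021-12-02&sig=XXXX")
print(url)
# https://myaccount.blob.core.windows.net/photos/cats/1.jpg?sp=r&sv=2021-12-02&sig=XXXX
```

<p>In the view this would be something like <code>context[&quot;blob_url&quot;] = blob_url_with_sas(...)</code>.</p>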
<python><django><azure-storage>
2023-03-12 16:02:50
1
799
Karam Qusai
75,714,362
1,797,139
PyCharm / VSCode - how to use global Python installation
<p>Recently did a clean install of Linux Mint on an old laptop.</p> <p>Tried writing a python script to use pyaudio. From inside PyCharm, I tried the usual method to add pyaudio using the project preferences screen, but that just kept &quot;loading&quot;. When I tried to install it via pip inside a terminal in PyCharm, I get an error that it cannot create the wheel.</p> <p>If I work 100% off the command line however, I am able to install pyaudio AND run the script without problems.</p> <p>So, how can I get PyCharm to use my &quot;default&quot; python as per my command line - I tried changing the interpreter to the /usr/sbin/python3 , but it still does not detect the already installed library for pyaudio. So currently I can edit the file in PyCharm, but can only run it from command line...</p>
<python><pycharm>
2023-03-12 15:55:17
1
1,341
Monty
75,714,294
333,403
dynamically determine height of widget in python-textual
<p>I develop a simple app using <a href="https://textual.textualize.io/" rel="nofollow noreferrer">Textual</a> framework. I have two widgets W1 and W2. W1 has a fixed height of <code>20</code> (lines). Now I want W2 to take up the rest of the vertical space. In css for a browser I would use <code>calc(100vh - 20)</code> but textual does not (yet) support this.</p> <p>How can I achieve this dynamic (i.e. viewport-height-dependent) layout?</p>
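<p>For reference, the workaround I'm currently experimenting with (not sure it's the idiomatic one): Textual's <code>fr</code> units appear to distribute the remaining space, so giving W2 <code>height: 1fr</code> should let it fill whatever the fixed-height W1 leaves over. The IDs below are placeholders for my widgets:</p>

```css
/* app.css -- #w1 / #w2 are hypothetical widget IDs */
#w1 {
    height: 20;
}
#w2 {
    height: 1fr;  /* take up the rest of the vertical space */
}
```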
<python><css><textual-framework>
2023-03-12 15:44:32
1
2,602
cknoll
75,714,164
2,998,077
How to show only the first and last tick labels on the x-axis with subplots
<p>To show only the first and last tick on the x-axis of both a line chart and a bar chart side-by-side, produced by matplotlib and pandas.GroupBy, I have the lines below.</p> <p>However only the bar chart shows what's wanted. The line chart has the latest month leftmost (it should be rightmost), and it is missing the x tick on the right.</p> <p>What went wrong and how can I correct it?</p> <pre><code>import matplotlib import matplotlib.pyplot as plt import pandas as pd from io import StringIO csvfile = StringIO( &quot;&quot;&quot; Name;Year - Month;Score;Upvote Mike;2023-01;884.22;5 Mike;2022-12;5472.81;36 Mike;2022-11;2017.59;15 Mike;2022-10;1845.23;14 Mike;2022-08;1984.32;15 Mike;2022-07;1033.33;8 Mike;2022-06;1587.64;24 Mike;2022-05;1019.93;20 Mike;2022-04;2359.3;45 Mike;2022-03;7478.72;140 &quot;&quot;&quot;) df = pd.read_csv(csvfile, sep = ';', engine='python') for group_name, sub_frame in df.groupby(&quot;Name&quot;): fig, axes = plt.subplots(nrows=1,ncols=2,figsize=(10,5)) sub_frame_sorted = sub_frame.sort_values('Year - Month') # sort the data-frame by a column sub_frame_sorted.plot(ax=axes[1], x=&quot;Year - Month&quot;, y=&quot;Score&quot;) sub_frame_sorted.plot(ax=axes[0], kind='bar', x=&quot;Year - Month&quot;, y=&quot;Upvote&quot;) axes[0].set_xticks([axes[0].get_xticks()[0], axes[0].get_xticks()[-1]]) axes[1].set_xticks([axes[1].get_xticks()[0], axes[1].get_xticks()[-1]]) plt.setp(axes[0].get_xticklabels(), rotation=0) plt.setp(axes[1].get_xticklabels(), rotation=0) plt.show() </code></pre> <p><a href="https://i.sstatic.net/KDhY9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KDhY9.png" alt="enter image description here" /></a></p>
<python><pandas><matplotlib>
2023-03-12 15:20:59
1
9,496
Mark K
75,714,139
8,176,763
csv.DictReader to skip first two rows of data
<p>I have a buffer that looks like this:</p> <pre><code>some_random_info another info here that i dont want to parser column1,column2,column3 a,b,c </code></pre> <p>I want to read this data using Python's built-in csv module, specifically the <code>DictReader</code> class, but the docs say:</p> <pre><code> class csv.DictReader(f, fieldnames=None, restkey=None, restval=None, dialect='excel', *args, **kwds) Create an object that operates like a regular reader but maps the information in each row to a dict whose keys are given by the optional fieldnames parameter. The fieldnames parameter is a sequence. If fieldnames is omitted, the values in the first row of file f will be used as the fieldnames. Regardless of how the fieldnames are determined, the dictionary preserves their original ordering. </code></pre> <p>I have tried this:</p> <pre><code>import io import csv buffer = io.StringIO(&quot;&quot;&quot; some_random_info another info here that i dont want to parser column1,column2,column3 a,b,c &quot;&quot;&quot;) reader = csv.DictReader(buffer,fieldnames=['column1','column2','column3']) for row in reader: print(row) </code></pre> <p>But it outputs this:</p> <pre><code>{'column1': 'some_random_info', 'column2': None, 'column3': None} {'column1': 'another info here that i dont want to parser', 'column2': None, 'column3': None} {'column1': 'column1', 'column2': 'column2', 'column3': 'column3'} {'column1': 'a', 'column2': 'b', 'column3': 'c '} </code></pre> <p>What I'm looking for is just <code>{'column1': 'a', 'column2': 'b', 'column3': 'c '}</code></p>
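<p>For reference, the workaround I'm currently considering: since <code>DictReader</code> just reads from wherever the file iterator currently is, I can consume the two junk lines first and let the header row be the first thing it sees:</p>

```python
import csv
import io

buffer = io.StringIO(
    "some_random_info\n"
    "another info here that i dont want to parser\n"
    "column1,column2,column3\n"
    "a,b,c\n"
)

# Advance past the first two lines before handing the buffer to DictReader.
for _ in range(2):
    next(buffer)

reader = csv.DictReader(buffer)
rows = list(reader)
print(rows)  # [{'column1': 'a', 'column2': 'b', 'column3': 'c'}]
```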
<python>
2023-03-12 15:17:40
1
2,459
moth
75,714,112
236,081
How do I configure pdm for a src-based Python repository?
<p>I have previously arranged a Python repository without a <code>src</code> folder, and got it running with:</p> <pre><code>pdm install --dev pdm run mymodule </code></pre> <p>I am failing to replicate the process in a repository <em>with</em> a <code>src</code> folder. How do I do it?</p> <p><strong>pyproject.toml</strong></p> <pre><code>[project] name = &quot;mymodule&quot; version = &quot;0.1.0&quot; description = &quot;Minimal Python repository with a src layout.&quot; requires-python = &quot;&gt;=3.10&quot; [build-system] requires = [&quot;pdm-pep517&gt;=1.0.0&quot;] build-backend = &quot;pdm.pep517.api&quot; [project.scripts] mymodule = &quot;cli:invoke&quot; </code></pre> <p><strong>src/mymodule/__init__.py</strong></p> <p>Empty file.</p> <p><strong>src/mymodule/cli.py</strong></p> <pre><code>def invoke(): print(&quot;Hello world!&quot;) if __name__ == &quot;__main__&quot;: invoke() </code></pre> <p>With the configuration above, I can <code>pdm install --dev</code> but <code>pdm run mymodule</code> fails with:</p> <pre><code>Traceback (most recent call last): File &quot;/home/user/Documents/mymodule/.venv/bin/mymodule&quot;, line 5, in &lt;module&gt; from cli import invoke ModuleNotFoundError: No module named 'cli' </code></pre>
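<p>For the record, my current hypothesis (untested) is that with the <code>src</code> layout the entry point needs the package-qualified module path, since <code>cli</code> now lives inside the <code>mymodule</code> package:</p>

```toml
# pyproject.toml -- hypothesized fix, not yet verified
[project.scripts]
mymodule = "mymodule.cli:invoke"
```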
<python><pyproject.toml><python-pdm>
2023-03-12 15:14:01
1
17,402
lofidevops
75,713,995
4,186,989
Find all yaml keys matching string and change their values using python
<p>I've been trying to...</p> <ol> <li>load yaml file</li> <li>find all keys named &quot;top&quot; in a yaml file</li> <li>get the values one by one, which is a percentage</li> <li>add 5 to the percentage number</li> <li>save the file</li> </ol> <p>So far I've written the following python:</p> <pre><code>import sys from pathlib import Path import ruamel.yaml import re in_file = Path('overview.yaml') out_file = Path('new_overview.yaml') yaml = ruamel.yaml.YAML() data = yaml.load(in_file) #Found this in some other thread def lookup(sk, d, path=[]): # lookup the values for key(s) sk return as list the tuple (path to the value, value) if isinstance(d, dict): for k, v in d.items(): if k == sk: yield (path + [k], v) for res in lookup(sk, v, path + [k]): yield res elif isinstance(d, list): for item in d: for res in lookup(sk, item, path + [item]): yield res #For each found result for path, value in lookup(&quot;top&quot;, data): #Remove the percentage sign cleanvalue = re.sub('%','', value) #Convert to fload and add 5 addvalue = float(cleanvalue) + 5 #Print old and new value print(cleanvalue, addvalue) #with open('husoversigt_changed.yaml', 'wb') as f: # yaml.dump(data, f) </code></pre> <p>So now it just prints the old value and the new value.</p> <p>But I'm really struggling to change the &quot;data&quot; variable using the path found and returned by lookup function.</p> <p>Can anyone help me?</p> <p>A little piece of the data are here:</p> <pre><code>type: custom:mod-card card: type: vertical-stack cards: - type: picture-elements elements: - type: state-icon entity: light.zigbee_bedroom_light tap_action: action: more-info style: top: 31% left: 74% - type: state-icon entity: binary_sensor.zigbee_mailbox_on_off tap_action: action: more-info style: top: 3% left: 42% - type: state-icon entity: group.alarm_sensors tap_action: action: more-info style: top: 20% left: 48% </code></pre>
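<p>The part I'm most stuck on is writing the value back at the returned path. My current attempt at a generic setter is below; it works on a toy dict, but note it assumes every path element is a dict key, which may not hold for the list entries the lookup helper yields:</p>

```python
from functools import reduce

def set_by_path(data, path, value):
    """Walk `path` (assumed to be all dict keys) and assign `value` to the last key."""
    parent = reduce(lambda node, key: node[key], path[:-1], data)
    parent[path[-1]] = value

doc = {"card": {"style": {"top": "31%", "left": "74%"}}}
set_by_path(doc, ["card", "style", "top"], "36.0%")
print(doc["card"]["style"]["top"])  # 36.0%
```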
<python><yaml><ruamel.yaml>
2023-03-12 14:53:52
1
425
danededane
75,713,993
17,973,259
Python pylint "unreachable code" warning
<p>I have this method in my game and I'm getting the &quot;<em>Unreachable code</em>&quot; warning from <em>pylint</em>.</p> <pre><code>def _check_buttons(self, mouse_pos): &quot;&quot;&quot;Check for buttons being clicked and act accordingly.&quot;&quot;&quot; buttons = { self.play_button.rect.collidepoint(mouse_pos): lambda: (self._reset_game(), setattr(self, 'show_difficulty', False), setattr(self, 'show_high_scores', False), setattr(self, 'show_game_modes', False)), self.quit_button.rect.collidepoint(mouse_pos): lambda: (pygame.quit(), sys.exit()), self.menu_button.rect.collidepoint(mouse_pos): self.run_menu, self.high_scores.rect.collidepoint(mouse_pos): lambda: setattr(self, 'show_high_scores', not self.show_high_scores), self.game_modes.rect.collidepoint(mouse_pos): lambda: setattr(self, 'show_game_modes', not self.show_game_modes), self.endless_button.rect.collidepoint(mouse_pos): lambda: (setattr(self.settings, 'endless', not self.settings.endless), setattr(self, 'show_game_modes', False)), self.easy.rect.collidepoint(mouse_pos): lambda: (setattr(self.settings, 'speedup_scale', 0.3), setattr(self, 'show_difficulty', False)), self.medium.rect.collidepoint(mouse_pos): lambda: (setattr(self.settings, 'speedup_scale', 0.5), setattr(self, 'show_difficulty', False)), self.hard.rect.collidepoint(mouse_pos): lambda: (setattr(self.settings, 'speedup_scale', 0.7), setattr(self, 'show_difficulty', False)), self.difficulty.rect.collidepoint(mouse_pos): lambda: setattr(self, 'show_difficulty', not self.show_difficulty) } for button_clicked, action in buttons.items(): if button_clicked and not self.stats.game_active: action() </code></pre> <p>The warning is in the for loop at the end, I thought it was because button_clicked is always True and I tried to swap it to: if not self.stats.game_active and button_clicked, but the warning is still there. Why?</p>
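<p>While poking at this I noticed something that may be a bigger problem than the warning itself: the dict is keyed by the <code>collidepoint()</code> results, so every key is just <code>True</code> or <code>False</code>, and Python silently keeps only the last entry for each duplicate key. A tiny demonstration:</p>

```python
# Duplicate keys in a dict literal: later entries overwrite earlier ones.
actions = {
    False: "play",
    True: "quit",
    False: "menu",  # overwrites the "play" entry
}
print(actions)       # {False: 'menu', True: 'quit'}
print(len(actions))  # 2
```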
<python><pylint>
2023-03-12 14:53:18
1
878
Alex
75,713,961
13,802,418
Python requests returns 403 even with headers
<p>I'm trying to get the content of a website, but my request returns a 403 error.</p> <p>After searching, I found the Network &gt; Headers section in the browser dev tools for copying headers to add to the GET request, and tried these headers.</p> <pre><code>from bs4 import BeautifulSoup as bs import requests url = &quot;https://clutch.co/us/agencies/digital-marketing&quot; HEADERS = {&quot;User-Agent&quot;: &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36&quot;} ### Also tried &quot;Referer&quot; , &quot;sec-ch-ua-platform&quot; and &quot;Origin&quot; headers but nothing changed. html = requests.get(url,headers=HEADERS) print(&quot;RESULT:&quot;,html) </code></pre> <p>But the result didn't change.</p>
<python><python-requests>
2023-03-12 14:47:47
1
505
320V
75,713,948
2,280,178
How can I use a mermaid Flowchart LR for describing a Python code line?
<p>I want to describe a Python code line in a Quarto document, for further explanation, with a mermaid flowchart LR diagram.</p> <p>It should look like this:</p> <p><a href="https://i.sstatic.net/qzDx4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qzDx4.png" alt="enter image description here" /></a></p> <p>The Python code line is this:</p> <pre><code>tfidf_text_vectorizer = TfidfVectorizer(stop_words=list(stopwords), min_df=5, max_df=0.7) </code></pre> <p>I try to split the code line in a mermaid code chunk:</p> <pre><code>```{mermaid} flowchart LR tfidf_text_vectorizer --&gt; TfidfVectorizer( TfidfVectorizer( --&gt; stop_words stop_words --&gt; =list(stopwords), =list(stopwords), --&gt; min_df=5, min_df=5, --&gt; max_df=0.7) ``` </code></pre> <p>But I receive an error in VS Code: undefined.</p> <p>Can I describe a code line with mermaid? If yes, how can I do this?</p> <p>PS: Are there any other solutions for describing a code line in a Quarto document?</p>
<python><diagram><mermaid>
2023-03-12 14:46:20
1
517
SebastianS
75,713,913
11,703,015
Group a DataFrame by months and an additional column
<p>I have the following DataFrame:</p> <pre><code>data={ 'date':['02/01/2023', '03/01/2023', '12/01/2023', '16/01/2023', '23/01/2023', '03/02/2023', '14/02/2023', '17/02/2023', '17/02/2023', '20/02/2023'], 'amount':[-2.6, -230.0, -9.32, -13.99, -12.99, -50.0, -5.84, -6.6, -11.95, -20.4], 'concept':['FOOD', 'REPAIR', 'HEALTH', 'NO CLASSIFIED', 'NO CLASSIFIED', 'REPAIR', 'FOOD', 'NO CLASSIFIED', 'FOOD', 'HEALTH'] } df = pd.DataFrame(data) </code></pre> <p>I need to group the information first by months and then by the concept of each item. I tried something like this:</p> <pre><code>df.groupby(['date','concept']).sum() </code></pre> <p>It works for an individual day, but I need the same grouped by entire months.</p> <p>I also tried converting <code>df.date</code> to datetime values: <code>df.date = pd.to_datetime(df.date,dayfirst=True)</code>, but I don't know how to indicate that the grouping should be by each entire month.</p> <p>The result I need would be something like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>date</th> <th>concept</th> <th>amount</th> </tr> </thead> <tbody> <tr> <td>Jan-23</td> <td>FOOD</td> <td>-2.6</td> </tr> <tr> <td></td> <td>HEALTH</td> <td>-9.32</td> </tr> <tr> <td></td> <td>NO CLASSIFIED</td> <td>-26.98</td> </tr> <tr> <td></td> <td>REPAIR</td> <td>-230</td> </tr> <tr> <td>Feb-23</td> <td>FOOD</td> <td>-17.79</td> </tr> <tr> <td></td> <td>HEALTH</td> <td>-20.4</td> </tr> <tr> <td></td> <td>NO CLASSIFIED</td> <td>-6.6</td> </tr> <tr> <td></td> <td>REPAIR</td> <td>-50</td> </tr> </tbody> </table> </div>
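<p>For completeness, this is the direction I was poking at after the <code>to_datetime</code> conversion (grouping on a monthly period), but I'm not sure it's the idiomatic way:</p>

```python
import pandas as pd

# Small subset of my data, just to test the grouping idea.
df = pd.DataFrame({
    "date": ["02/01/2023", "23/01/2023", "03/02/2023", "17/02/2023"],
    "amount": [-2.6, -12.99, -50.0, -6.6],
    "concept": ["FOOD", "NO CLASSIFIED", "REPAIR", "NO CLASSIFIED"],
})
df["date"] = pd.to_datetime(df["date"], dayfirst=True)

# Group by calendar month first, then by concept.
out = df.groupby([df["date"].dt.to_period("M"), "concept"])["amount"].sum()
print(out)
```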
<python><pandas><dataframe><group-by>
2023-03-12 14:41:15
1
516
nekovolta
75,713,821
295,155
Minimize ||Ax|| on the non negative region of the unit sphere: ||x|| = 1, x ≥ 0
<p>Having a non-square matrix A, I need to find the direction (versor) x ≥ 0 which is &quot;most perpendicular&quot; to all rows of A (in a least sum of squares sense).</p> <p>I tried with SVD: if A = U Σ V*, the &quot;most perpendicular&quot; vector is the last column in V. But it is not guaranteed to be ≥ 0, so I need to check other vectors in V, to see if I can combine them s.t. the result is ≥ 0. I couldn't find a simple method.</p> <p>I read the documentation of the linear least-square <code>scipy.optimize</code> functions (and <code>scipy.optimize.linprog</code>), but I don't know how to search on the unit sphere ||x|| = 1. I know I can use the generic <code>scipy.optimize.minimize</code>, but I was wondering if there exists a more adequate approach (adapted to the linear nature of Ax).</p> <p>I am looking for a solution in Python (numpy, scipy etc.). Optimized for speed. It should work for both tall and fat matrices A (overdetermined and underdetermined systems).</p> <p>Thank you for any help.</p>
<python><linear-algebra><linear-programming><scipy-optimize><svd>
2023-03-12 14:30:35
0
919
Amenhotep
75,713,804
9,003,381
On Plotly Python, my widgets don't work : the graphs don't remove when I uncheck the box
<p>I tried to use widgets by simply displaying a graph depending on checkboxes.</p> <p>It shows the number of births along the years, for 2 cities (Grenoble and Vizille).</p> <p>I want to display the evolution lines for 0, 1 or 2 cities depending on checkboxes.</p> <p>At the beginning it's ok : it displays 0 lines, and then 1 or 2 when we check the boxes.</p> <p>But if I uncheck a box, it doesn't remove a line.</p> <p>What have I forgotten in my code ?</p> <p>Here is a reproducible example :</p> <pre><code>import os import pandas as pd import plotly.graph_objects as go from ipywidgets import widgets # CSV from URL url = &quot;https://entrepot.metropolegrenoble.fr/opendata/200040715-MET/insee/etat_civil_200040715.csv&quot; data_pop = pd.read_csv(url) # Data prep noms_col = data_pop.columns data_pop.reset_index(inplace=True) data_pop.drop([&quot;code_postal&quot;],axis=1,inplace=True) data_pop.columns = noms_col # Using widgets use_Gre = widgets.Checkbox( description='Grenoble', value=False, ) use_Viz = widgets.Checkbox( description='Vizille', value=False, ) container_1 = widgets.HBox(children=[use_Gre, use_Viz]) g = go.FigureWidget(data=[{'type': 'scatter'}]) def validate1(): if use_Gre.value is True: return True else: return False def validate2(): if use_Viz.value is True: return True else: return False def response1(change): if validate1(): if use_Gre.value: trace01 = data_pop[data_pop['commune']==&quot;Grenoble&quot;].nombre_naissances x1=data_pop[data_pop['commune']==&quot;Grenoble&quot;].annee with g.batch_update(): g.add_scatter(y=trace01,name=&quot;Grenoble&quot;, x= x1) def response2(change): if validate2(): if use_Viz.value: trace02 = data_pop[data_pop['commune']==&quot;Vizille&quot;].nombre_naissances x2=data_pop[data_pop['commune']==&quot;Vizille&quot;].annee with g.batch_update(): g.add_scatter(y=trace02,name=&quot;Vizille&quot;, x= x2) use_Gre.observe(response1, names=&quot;value&quot;) use_Viz.observe(response2, names=&quot;value&quot;) 
widgets.VBox([container_1, g]) </code></pre>
<python><plotly><ipywidgets>
2023-03-12 14:27:57
2
319
Elise1369
75,713,782
9,795,265
Unable to Run container ( Path error for executable )
<p><strong>Dockerfile</strong></p> <pre><code>FROM python:3.12.0a6-bullseye RUN mkdir /app WORKDIR /app COPY . . RUN apt-get update &amp;&amp; apt-get upgrade -y RUN apt-get -y autoremove RUN apt install python3 -y \ &amp;&amp; apt install python3-pip -y \ &amp;&amp; apt install python3-venv -y ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 ENV PATH=&quot;$PATH:/app/venv/bin/python3&quot; RUN echo $PATH RUN pip install virtualenv RUN pip install cmake RUN virtualenv venv RUN venv/bin/pip install -r latestreq.txt EXPOSE 8001 RUN chmod +x venv/bin/python3 CMD [ &quot;venv/bin/python3&quot;,&quot;manage.py&quot;,&quot;runserver&quot;] </code></pre> <p><strong>Error I am getting</strong></p> <blockquote> <pre><code>docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: &quot;photoframe-container&quot;: executable file not found in $PATH: unknown. ERRO[0000] error waiting for container: </code></pre> </blockquote>
<python><docker><docker-compose><dockerfile><virtualenv>
2023-03-12 14:24:29
0
1,194
Atif Shafi
75,713,598
11,546,773
Numpy array to list of lists in polars dataframe
<p>I'm trying to save a dataframe with a 2D list in each cell to a parquet file. As example I created a polars dataframe with a 2D list. As can be seen in the table the dtype of both columns is <code>list[list[i64]]</code>.</p> <pre><code>┌─────────────────────┬─────────────────────┐ │ a ┆ b │ │ --- ┆ --- │ │ list[list[i64]] ┆ list[list[i64]] │ ╞═════════════════════╪═════════════════════╡ │ [[1], [2], ... [4]] ┆ [[1], [2], ... [4]] │ │ [[1], [2], ... [4]] ┆ [[1], [2], ... [4]] │ └─────────────────────┴─────────────────────┘ </code></pre> <p>In the code below I saved and read the dataframe to check whether it is indeed possible to write and read this dataframe to and from a parquet file.</p> <p>After this step I created a numpy array from the dataframe. This is where the problem starts. Converting back to a polars dataframe is still possible. Despite the fact that the dtype of both columns now an object is.</p> <pre><code>┌─────────────────────────────────────┬─────────────────────────────────────┐ │ a ┆ b │ │ --- ┆ --- │ │ object ┆ object │ ╞═════════════════════════════════════╪═════════════════════════════════════╡ │ [array([array([1], dtype=int64),... ┆ [array([array([1], dtype=int64),... │ │ [array([array([1], dtype=int64),... ┆ [array([array([1], dtype=int64),... │ └─────────────────────────────────────┴─────────────────────────────────────┘ </code></pre> <p>Now, when I try to write this dataframe to a parquet file the following error pops up: <code>Exception has occurred: PanicException cannot convert object to arrow</code>. Which is indeed true because the dtypes are now objects.</p> <p>I tried using <code>pl.from_numpy()</code> but this complains on reading 2D arrays. I also tried casting but casting from an object seems not possible. Creating the dataframe with the previous dtype does also not seem to work.</p> <p><strong>Question:</strong> How can I still write this dataframe to a parquet file? Preferably with dtype <code>list[list[i64]]</code>. 
I need to keep the 2D array structure.</p> <p>By just creating the desired result as a list I'm able to write a read but not when it is a numpy array.</p> <p><strong>Proof code:</strong></p> <pre><code>import polars as pl import numpy as np data = { &quot;a&quot;: [[[[1],[2],[3],[4]], [[1],[2],[3],[4]]], [[[1],[2],[3],[4]], [[1],[2],[3],[4]]]], &quot;b&quot;: [[[[1],[2],[3],[4]], [[1],[2],[3],[4]]], [[[1],[2],[3],[4]], [[1],[2],[3],[4]]]] } df = pl.DataFrame(data) df.write_parquet('test.parquet') read_df = pl.read_parquet('test.parquet') print(read_df) </code></pre> <p><strong>Proof result:</strong></p> <pre><code>┌─────────────────────────────────────┬─────────────────────────────────────┐ │ a ┆ b │ │ --- ┆ --- │ │ list[list[list[i64]]] ┆ list[list[list[i64]]] │ ╞═════════════════════════════════════╪═════════════════════════════════════╡ │ [[[1], [2], ... [4]], [[1], [2],... ┆ [[[1], [2], ... [4]], [[1], [2],... │ │ [[[1], [2], ... [4]], [[1], [2],... ┆ [[[1], [2], ... [4]], [[1], [2],... │ └─────────────────────────────────────┴─────────────────────────────────────┘ </code></pre> <p><strong>Sample code:</strong></p> <pre><code>import polars as pl import numpy as np data = { &quot;a&quot;: [[[1],[2],[3],[4]], [[1],[2],[3],[4]]], &quot;b&quot;: [[[1],[2],[3],[4]], [[1],[2],[3],[4]]] } df = pl.DataFrame(data) df.write_parquet('test.parquet') read_df = pl.read_parquet('test.parquet') print(read_df) arr = np.dstack([read_df, df]) # schema={'a': list[list[pl.Int32]], 'b': list[list[pl.Int32]]} combined = pl.DataFrame(arr.tolist(), schema=df.columns) print(combined) # combined.with_column(pl.col('a').cast(pl.List, strict=False).alias('a_list')) combined.write_parquet('test_result.parquet') </code></pre>
<python><dataframe><numpy><parquet><python-polars>
2023-03-12 13:55:08
1
388
Sam
75,713,427
13,132,728
How to merge rows that have multiple levels for specific columns in pandas
<h2>My data</h2> <p>I am working with the following data from the <a href="https://www.ncei.noaa.gov/" rel="nofollow noreferrer">National Centers for Environmental Information (NCEI)</a> - obtained simply by using pandas' <code>read_html()</code>.</p> <pre><code>df = pd.read_html('https://www.ncei.noaa.gov/access/monitoring/climate-at-a-glance/statewide/rankings/1/tavg/202302')[0] df.head() Period Value 1901-2000Mean Anomaly Rank(1895-2023) Warmest/CoolestSince Record 0 February 2023 1-Month 56.6°F(13.7°C) 48.0°F(8.9°C) 8.6°F(4.8°C) 126th Coolest Coolest since:2022 1895 1 February 2023 1-Month 56.6°F(13.7°C) 48.0°F(8.9°C) 8.6°F(4.8°C) 4th Warmest Warmest since:2018 2018 2 Jan–Feb 2023 2-Month 54.0°F(12.2°C) 46.5°F(8.1°C) 7.5°F(4.1°C) 127th Coolest Coolest since:2022 1978 3 Jan–Feb 2023 2-Month 54.0°F(12.2°C) 46.5°F(8.1°C) 7.5°F(4.1°C) 3rd Warmest Warmest since:2017 1950 </code></pre> <h2>My problem/desired output</h2> <p>The <code>Rank (YYYY-YYYY)</code> and <code>Warmest/CoolestSince</code> columns have two &quot;levels&quot; to them, meaning each row is replicated to account for each level. 
My desired output would be the following</p> <pre><code> Period Value 1901-2000Mean Anomaly CoolestRank(1895-2023) CoolestSince CoolestRecord WarmestRank(1895-2023)WarmestSince WarmestRecord 0 February 2023 1-Month 56.6°F(13.7°C) 48.0°F(8.9°C) 8.6°F(4.8°C) 126th Coolest Coolest since:2022 1895 4th Warmest Warmest since:2018 2018 1 Jan–Feb 2023 2-Month 54.0°F(12.2°C) 46.5°F(8.1°C) 7.5°F(4.1°C) 127th Coolest Coolest since:2022 1978 3rd Warmest Warmest since:2017 1950 </code></pre> <p>So basically, I'd like to combine all unique <code>Period</code> rows and add new columns to take care of the two columns with multiple levels.</p> <h2>What I have tried</h2> <p>My initial thought was something not very practical - what if I took the two <code>Rank (YYYY-YYYY)</code> and <code>Warmest/CoolestSince</code> values from each odd row (the second of each duplicated <code>Period</code>) and used those to populate two new columns, then dropped each odd row? Something like this could be done, but probably isn't very efficient, Pythonic, etc.</p> <p>Then I thought, since we have groups of unique <code>Period</code> values, that maybe I could do some sort of <code>groupby</code> magic? Since the warmth-related columns are always the lower level, I can do something like:</p> <pre><code>(df .groupby('Period') .agg( Value=('Value','first'), Anomaly=('Anomaly','first'), CoolestSince=('Warmest/CoolestSince','first'), CoolestRecord=('Record','first'), WarmestSince=('Warmest/CoolestSince','last'), WarmestRecord=('Record','last'), ) ) </code></pre> <p>to achieve my desired output. However, this type of aggregation doesn't work for assigning columns with non-alphabetic characters. 
I get an error when I try to add the other columns:</p> <pre><code>(df .groupby('Period') .agg( Value=('Value','first'), 1901-2000Mean=('1901-2000Mean','first'), Anomaly=('Anomaly','first'), CoolestRank(1895-2023)=('Rank(1895-2023)','first'), CoolestSince=('Warmest/CoolestSince','first'), CoolestRecord=('Record','first'), WarmestRank(1895-2023)=('Rank(1895-2023)','last'), WarmestSince=('Warmest/CoolestSince','last'), WarmestRecord=('Record','last'), ) ) SyntaxError: positional argument follows keyword argument </code></pre> <p>Is there a way to assign <code>groupby</code> columns via strings? I tried formatting it into a dictionary but I couldn't seem to get it to work. Is there an alternative solution that is more efficient? I am curious, as what I am on to seems to be a pretty solid solution.</p> <p>NOTE: Although not listed in the data preview of this question, occasionally there is a third level to a column for ties between years. These can be ignored, as I am just dropping these rows with <code>df.loc[~df.Value.str.contains('Ties')]</code>.</p> <p>Thanks!</p>
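The `SyntaxError` above comes from Python itself: keyword-argument names must be valid identifiers. But `.agg()` also accepts named aggregations supplied as a dict unpacked with `**`, which lifts that restriction. A minimal sketch on mock data (column names shortened for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "Period": ["February 2023"] * 2 + ["Jan-Feb 2023"] * 2,
    "Value": ["56.6°F"] * 2 + ["54.0°F"] * 2,
    "Rank(1895-2023)": ["126th Coolest", "4th Warmest",
                        "127th Coolest", "3rd Warmest"],
})

# keys of this dict may contain any characters -- they become column names
spec = {
    "Value": ("Value", "first"),
    "CoolestRank(1895-2023)": ("Rank(1895-2023)", "first"),
    "WarmestRank(1895-2023)": ("Rank(1895-2023)", "last"),
}
out = df.groupby("Period", sort=False).agg(**spec).reset_index()
```

This works because CPython allows arbitrary string keys when unpacking a dict into a `**kwargs` parameter, and pandas treats each key as the output column name.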
<python><pandas><data-cleaning><data-wrangling>
2023-03-12 13:26:47
1
1,645
bismo
75,713,161
1,739,325
Finetuning Vision Encoder Decoder Models with huggingface causes ValueError: expected sequence of length 11 at dim 2 (got 12)
<p>Input code that causes code failing:</p> <pre><code>from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer, ViTFeatureExtractor, AutoTokenizer from transformers import ViTImageProcessor, BertTokenizer, VisionEncoderDecoderModel, default_data_collator from datasets import load_dataset, DatasetDict encoder_checkpoint = &quot;google/vit-base-patch16-224-in21k&quot; decoder_checkpoint = &quot;bert-base-uncased&quot; model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained( encoder_checkpoint, decoder_checkpoint ) # set special tokens used for creating the decoder_input_ids from the labels model.config.decoder_start_token_id = tokenizer.bos_token_id model.config.pad_token_id = tokenizer.pad_token_id # make sure vocab size is set correctly model.config.vocab_size = model.config.decoder.vocab_size # set beam search parameters model.config.eos_token_id = tokenizer.sep_token_id model.config.max_length = 512 model.config.early_stopping = True model.config.no_repeat_ngram_size = 3 model.config.length_penalty = 2.0 model.config.num_beams = 4 model.decoder.resize_token_embeddings(len(tokenizer)) feature_extractor = ViTFeatureExtractor.from_pretrained(encoder_checkpoint) tokenizer = AutoTokenizer.from_pretrained(decoder_checkpoint) </code></pre> <p>Preparing dataset</p> <pre><code>dataset = load_dataset(&quot;svjack/pokemon-blip-captions-en-zh&quot;).remove_columns(&quot;zh_text&quot;) dataset = dataset.map(lambda example: {'pixel_values': feature_extractor(example['image'], return_tensors='pt').pixel_values}) dataset = dataset.remove_columns(&quot;image&quot;) dataset = dataset.map(lambda example: {'labels': tokenizer(example['en_text'], return_tensors='pt').input_ids }) dataset = dataset.remove_columns(&quot;en_text&quot;) &quot;&quot;&quot; dataset = DatasetDict({ train: Dataset({ features: ['pixel_values', 'labels'], num_rows: 833 }) &quot;&quot;&quot; train_testvalid = dataset[&quot;train&quot;].train_test_split(0.1) test_valid 
= train_testvalid['test'].train_test_split(0.5) train_test_valid_dataset = DatasetDict({ 'train': train_testvalid['train'], 'test': test_valid['test'], 'valid': test_valid['train']}) </code></pre> <p>Setting parameters:</p> <pre><code>for param in model.encoder.parameters(): param.requires_grad = False output_dir = &quot;./checkpoints&quot; training_args = Seq2SeqTrainingArguments( predict_with_generate=True, evaluation_strategy=&quot;steps&quot;, per_device_train_batch_size=8, per_device_eval_batch_size=8, overwrite_output_dir=True, fp16=True, run_name=&quot;first_run&quot;, load_best_model_at_end=True, output_dir=output_dir, logging_steps=2000, save_steps=2000, eval_steps=2000, ) </code></pre> <p>Trying to finetune models:</p> <pre><code>trainer = Seq2SeqTrainer( model=model, tokenizer=tokenizer, args=training_args, train_dataset=train_test_valid_dataset['train'], eval_dataset=train_test_valid_dataset['valid'], data_collator=default_data_collator, ) trainer.train() </code></pre> <p>Output error:</p> <pre><code>/usr/local/lib/python3.9/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1541 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size 1542 ) -&gt; 1543 return inner_training_loop( 1544 args=args, 1545 resume_from_checkpoint=resume_from_checkpoint, /usr/local/lib/python3.9/dist-packages/transformers/trainer.py in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 1763 1764 step = -1 -&gt; 1765 for step, inputs in enumerate(epoch_iterator): 1766 1767 # Skip past any already trained steps if resuming training /usr/local/lib/python3.9/dist-packages/torch/utils/data/dataloader.py in __next__(self) 626 # TODO(https://github.com/pytorch/pytorch/issues/76750) 627 self._reset() # type: ignore[call-arg] --&gt; 628 data = self._next_data() 629 self._num_yielded += 1 630 if self._dataset_kind == _DatasetKind.Iterable and \ 
/usr/local/lib/python3.9/dist-packages/torch/utils/data/dataloader.py in _next_data(self) 669 def _next_data(self): 670 index = self._next_index() # may raise StopIteration --&gt; 671 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 672 if self._pin_memory: 673 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) /usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index) 59 else: 60 data = self.dataset[possibly_batched_index] ---&gt; 61 return self.collate_fn(data) /usr/local/lib/python3.9/dist-packages/transformers/data/data_collator.py in default_data_collator(features, return_tensors) 68 69 if return_tensors == &quot;pt&quot;: ---&gt; 70 return torch_default_data_collator(features) 71 elif return_tensors == &quot;tf&quot;: 72 return tf_default_data_collator(features) /usr/local/lib/python3.9/dist-packages/transformers/data/data_collator.py in torch_default_data_collator(features) 134 batch[k] = torch.tensor(np.stack([f[k] for f in features])) 135 else: --&gt; 136 batch[k] = torch.tensor([f[k] for f in features]) 137 138 return batch ValueError: expected sequence of length 11 at dim 2 (got 12) </code></pre> <p>How to fix the code?</p>
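The traceback shows `torch_default_data_collator` failing to stack `labels` of different lengths (11 vs 12 tokens): the tokenizer was called per example with no padding, so the label lists are ragged. One fix is to pad in the `map` step, e.g. `tokenizer(example['en_text'], padding='max_length', max_length=..., truncation=True)`; another is a custom collator that pads per batch. The padding logic itself, sketched in plain Python (`-100` is the label id that the loss in transformers ignores):

```python
def pad_labels(batch_labels, pad_id=-100):
    """Pad ragged label sequences to the batch maximum so they can be
    stacked into one tensor (the step default_data_collator trips on)."""
    width = max(len(seq) for seq in batch_labels)
    return [seq + [pad_id] * (width - len(seq)) for seq in batch_labels]
```

A custom `data_collator` would apply this to the `labels` key before calling `torch.tensor`, while stacking `pixel_values` unchanged.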
<python><huggingface-transformers>
2023-03-12 12:37:08
1
5,851
Rocketq
75,713,078
7,483,509
sounddevice.PortAudioError: Error querying host API -9979
<p>I am trying to do python audio with <code>python-sounddevice</code> on a macOS 13.2.1 with M1 chip but I can't get it to work. I installed portaudio and libsndfile with brew, then created a conda environment with sounddevice and soundfile packages then ran:</p> <pre class="lang-bash prettyprint-override"><code>❯ python play_file.py some_audio_file.wav ||PaMacCore (AUHAL)|| AUHAL component not found.Traceback (most recent call last): File &quot;play_file.py&quot;, line 77, in &lt;module&gt; stream = sd.OutputStream( File &quot;/opt/homebrew/Caskroom/miniforge/base/envs/tvm/lib/python3.8/site-packages/sounddevice.py&quot;, line 1488, in __init__ _StreamBase.__init__(self, kind='output', wrap_callback='array', File &quot;/opt/homebrew/Caskroom/miniforge/base/envs/tvm/lib/python3.8/site-packages/sounddevice.py&quot;, line 892, in __init__ _check(_lib.Pa_OpenStream(self._ptr, iparameters, oparameters, File &quot;/opt/homebrew/Caskroom/miniforge/base/envs/tvm/lib/python3.8/site-packages/sounddevice.py&quot;, line 2736, in _check raise PortAudioError(errormsg, err, hosterror_info) sounddevice.PortAudioError: &lt;exception str() failed&gt; During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;play_file.py&quot;, line 85, in &lt;module&gt; parser.exit(type(e).__name__ + ': ' + str(e)) File &quot;/opt/homebrew/Caskroom/miniforge/base/envs/tvm/lib/python3.8/site-packages/sounddevice.py&quot;, line 2220, in __str__ hostname = query_hostapis(host_api)['name'] File &quot;/opt/homebrew/Caskroom/miniforge/base/envs/tvm/lib/python3.8/site-packages/sounddevice.py&quot;, line 640, in query_hostapis raise PortAudioError('Error querying host API {}'.format(index)) sounddevice.PortAudioError: Error querying host API -9979 </code></pre> <p>And play_file.py:</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3 &quot;&quot;&quot;Load an audio file into memory and play its contents. 
NumPy and the soundfile module (https://python-soundfile.readthedocs.io/) must be installed for this to work. This example program loads the whole file into memory before starting playback. To play very long files, you should use play_long_file.py instead. This example could simply be implemented like this:: import sounddevice as sd import soundfile as sf data, fs = sf.read('my-file.wav') sd.play(data, fs) sd.wait() ... but in this example we show a more low-level implementation using a callback stream. &quot;&quot;&quot; import argparse import threading import sounddevice as sd import soundfile as sf def int_or_str(text): &quot;&quot;&quot;Helper function for argument parsing.&quot;&quot;&quot; try: return int(text) except ValueError: return text parser = argparse.ArgumentParser(add_help=False) parser.add_argument( '-l', '--list-devices', action='store_true', help='show list of audio devices and exit') args, remaining = parser.parse_known_args() if args.list_devices: print(sd.query_devices()) parser.exit(0) parser = argparse.ArgumentParser( description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter, parents=[parser]) parser.add_argument( 'filename', metavar='FILENAME', help='audio file to be played back') parser.add_argument( '-d', '--device', type=int_or_str, help='output device (numeric ID or substring)') args = parser.parse_args(remaining) event = threading.Event() try: data, fs = sf.read(args.filename, always_2d=True) current_frame = 0 def callback(outdata, frames, time, status): global current_frame if status: print(status) chunksize = min(len(data) - current_frame, frames) outdata[:chunksize] = data[current_frame:current_frame + chunksize] if chunksize &lt; frames: outdata[chunksize:] = 0 raise sd.CallbackStop() current_frame += chunksize stream = sd.OutputStream( samplerate=fs, device=args.device, channels=data.shape[1], callback=callback, finished_callback=event.set) with stream: event.wait() # Wait until playback is finished except 
KeyboardInterrupt: parser.exit('\nInterrupted by user') except Exception as e: parser.exit(type(e).__name__ + ': ' + str(e)) </code></pre> <p>The closest case to mine that I found is <a href="https://stackoverflow.com/questions/71404292/oserror-errno-9999-unanticipated-host-error-pamaccore-auhal-auhal-co">this one</a> which hasn't been answered.</p>
<python><portaudio><apple-silicon><python-sounddevice><soundfile>
2023-03-12 12:24:30
1
1,109
Nick Skywalker
75,713,049
18,749,472
Stop Django cache clearing on server refresh
<p>In my Django project I want to stop the cache from clearing whenever the server is restarted. Such as when I edit my views.py file, the cache is cleared and I get logged out which makes editing any features exclusive to logged in users a pain.</p> <p><em>settings.py</em></p> <pre><code>SESSION_ENGINE = &quot;django.contrib.sessions.backends.cache&quot; </code></pre> <p>What configurations would stop cache clearing on server refresh?</p>
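The logouts come from `SESSION_ENGINE` pointing at the cache: with the default `LocMemCache`, session data lives in the server process's memory and vanishes whenever the dev server reloads. A sketch of the usual fix (assuming `django.contrib.sessions` and its migrations are in place):

```python
# settings.py
# Sessions written through the cache to the database: reads stay fast,
# but a server restart no longer logs everyone out.
SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"

# or purely database-backed (the Django default):
# SESSION_ENGINE = "django.contrib.sessions.backends.db"
```

With either backend, restarting the process (or editing `views.py` under autoreload) no longer clears sessions, because they are persisted outside the process.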
<python><django><session><caching><django-settings>
2023-03-12 12:19:13
3
639
logan_9997
75,713,039
13,194,245
Python script runs fine locally but get a JSONDecodeError when running in Github Actions
<p>I have the following code:</p> <pre><code>import requests import boto3 import os # Fetch data from API response = requests.get('https://api.example.com') data = response.json() # Upload data to S3 bucket s3_client = boto3.client('s3',aws_access_key_id=os.environ.get('AWS_ACCESS_KEY_ID'), aws_secret_access_key=os.environ.get('AWS_SECRET_ACCESS_KEY')) bucket_name = 'api-collector' folder_name = 'collection' object_name = 'test' s3_client.put_object(Body=str(data), Bucket=bucket_name, Key='{}/{}'.format(folder_name, object_name)) </code></pre> <p>The API request works fine on my local version however in Github Actions it fails with the following error:</p> <pre><code>Traceback (most recent call last): File &quot;/home/runner/work/api-collector/api-collector/folder/api.py&quot;, line 9, in &lt;module&gt; data = response.json() File &quot;/home/runner/.local/lib/python3.10/site-packages/requests/models.py&quot;, line 975, in json raise RequestsJSONDecodeError(e.msg, e.doc, e.pos) requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0) Error: Process completed with exit code 1. </code></pre> <p>this is what my .yaml file looks like:</p> <pre><code>name: api_to_s3 on: push: branches: - main jobs: upload: runs-on: ubuntu-latest steps: - name: Checkout code uses: actions/checkout@v2 - name: Install dependencies run: | pip install -r requirements.txt - name: api_to_s3 env: AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} run: | python folder/api.py </code></pre> <p>Anything i am missing here? Thanks!</p>
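`Expecting value: line 1 column 1 (char 0)` means the response body wasn't JSON at all — from a GitHub runner, an API often answers with an empty body or an HTML block page (geo/IP filtering, a key exported only locally, etc.). Surfacing the real reply before parsing makes the Actions log show which it is; a hedged sketch:

```python
def parse_api_response(resp):
    """Raise with status and a body preview instead of a bare JSONDecodeError."""
    ctype = resp.headers.get("Content-Type", "")
    if resp.status_code != 200 or "json" not in ctype:
        raise RuntimeError(
            f"API returned {resp.status_code} ({ctype!r}): {resp.text[:200]!r}"
        )
    return resp.json()
```

Used as `data = parse_api_response(requests.get('https://api.example.com', timeout=30))`, the exception text — status code plus the first 200 characters of the body — appears directly in the workflow log and usually identifies the culprit immediately.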
<python><python-requests><github-actions>
2023-03-12 12:16:14
0
1,812
SOK
75,712,871
39,242
How do I batch a request to Compute Engine using the Python Google Cloud Client Libraries?
<p>In the older Google API Client Libraries, you could batch a request to label many instances at once, using <code>googleapiclient</code>, <code>discovery.build(&quot;compute&quot;, &quot;v1&quot;).new_batch_http_request(...)</code></p> <p>With the new, <a href="https://github.com/googleapis/google-api-python-client#other-google-api-libraries" rel="nofollow noreferrer">recommended</a> Cloud Client Libraries, you can set labels on one instance with <a href="https://cloud.google.com/compute/docs/reference/rest/v1/instances/setLabels" rel="nofollow noreferrer">setLabels</a>, but I don't see a way to batch these requests.</p> <p>The <a href="https://cloud.google.com/compute/docs/api/how-tos/batch" rel="nofollow noreferrer">documentation</a> mentions the batching of requests, but only with a direct HTTPS call, and I would rather use a Python library. This documentation does refer to a Python library, but only the older Google API Client Libraries.</p> <p>I don't mind using the older Google API Client libraries, but, though they are <a href="https://github.com/googleapis/google-api-python-client#other-google-api-libraries" rel="nofollow noreferrer">officially supported</a>, they have had, since 2016 at least, severe bugs causing invocation failure. (This is in their SSL implementation, see <a href="https://github.com/googleapis/google-api-python-client/issues/218" rel="nofollow noreferrer">1</a>, <a href="https://github.com/googleapis/google-api-python-client/issues/1118" rel="nofollow noreferrer">2</a> and <a href="https://github.com/googleapis/google-api-python-client/search?q=socket+timeout&amp;type=issues" rel="nofollow noreferrer">more</a>). This makes them impossible to use.</p>
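The Cloud Client Libraries indeed expose no equivalent of `new_batch_http_request`, so the common substitute is client-side concurrency: issue the individual `setLabels` calls from a thread pool. A sketch with the per-instance call abstracted behind a callable, since the exact invocation (e.g. wrapping `compute_v1.InstancesClient.set_labels`) depends on your client version:

```python
from concurrent.futures import ThreadPoolExecutor

def label_instances(set_labels, instance_names, max_workers=10):
    """set_labels: a callable issuing one instances.setLabels request
    for the given instance name (hypothetical wrapper, supplied by you)."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map preserves input order and re-raises any per-call exception
        return list(pool.map(set_labels, instance_names))
```

Threads are appropriate here because each call is network-bound; `max_workers` doubles as a crude rate limiter against API quotas.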
<python><google-cloud-platform><google-compute-engine><google-client>
2023-03-12 11:41:21
1
19,885
Joshua Fox
75,712,777
10,967,961
Forming a symmetric matrix counting instances of being in same cluster
<p>I have a database that comprises cities divided into clusters for each year. In other words, I applied a community detection algorithm, based on modularity, to different databases containing cities in different years. The final database (a mock example) looks like this:</p> <pre><code>v1  city      cluster  year
0   &quot;city1&quot;  0        2000
1   &quot;city2&quot;  2        2000
2   &quot;city3&quot;  1        2000
3   &quot;city4&quot;  0        2000
4   &quot;city5&quot;  2        2000
0   &quot;city1&quot;  2        2001
1   &quot;city2&quot;  1        2001
2   &quot;city3&quot;  0        2001
3   &quot;city4&quot;  0        2001
4   &quot;city5&quot;  0        2001
0   &quot;city1&quot;  1        2002
1   &quot;city2&quot;  2        2002
2   &quot;city3&quot;  0        2002
3   &quot;city4&quot;  0        2002
4   &quot;city5&quot;  1        2002
</code></pre> <p>Now what I would like to do is count how many times a city ends up in the same cluster as another city each year. So in the mock example above I should end up with a 5×5 symmetric matrix whose rows and columns are cities, where each entry represents the number of times cities i and j are in the same cluster (regardless of which cluster) over all years:</p> <pre><code>       city1  city2  city3  city4  city5
city1      .      0      0      1      1
city2      0      .      0      0      1
city3      0      0      .      2      1
city4      1      0      2      .      1
city5      1      1      1      1      .
</code></pre> <p>I am working in Python, but a solution in MATLAB or R is fine too.</p> <p>Thank you</p>
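One way to build that matrix in Python: per year, form a city × cluster 0/1 indicator matrix M and sum the products M·Mᵀ across years; the diagonal then counts years per city and can be zeroed out. A sketch on the mock data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "city":    ["city1", "city2", "city3", "city4", "city5"] * 3,
    "cluster": [0, 2, 1, 0, 2,   2, 1, 0, 0, 0,   1, 2, 0, 0, 1],
    "year":    [2000] * 5 + [2001] * 5 + [2002] * 5,
})

cities = sorted(df["city"].unique())
mat = np.zeros((len(cities), len(cities)), dtype=int)
for _, g in df.groupby("year"):
    # rows = cities, columns = that year's clusters, entries 0/1
    m = (pd.crosstab(g["city"], g["cluster"])
           .reindex(index=cities, fill_value=0)
           .to_numpy())
    mat += m @ m.T          # 1 wherever two cities share a cluster this year
np.fill_diagonal(mat, 0)    # a city always shares a cluster with itself
co = pd.DataFrame(mat, index=cities, columns=cities)
```

On the mock data this reproduces the expected counts, e.g. `co.loc["city3", "city4"] == 2` (clusters shared in 2001 and 2002) and `co.loc["city1", "city4"] == 1`.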
<python><r><matrix><cluster-analysis>
2023-03-12 11:25:07
2
653
Lusian
75,712,609
7,848,740
Get value of clipboard from a Selenium Grid via Python
<p>I'm using Selenium for Python to connect to my Grid and command a browser. Now I'm trying to read the clipboard of the browser running on the grid and copy its contents into my Python code.</p> <p>I've tried <code>from tkinter import Tk</code> and <code>clipboard = Tk().clipboard_get()</code>, but that clearly reads the clipboard on my host, not the one on the Selenium Grid node.</p> <p>Is there a way to access it?</p>
<python><selenium-webdriver><selenium-grid>
2023-03-12 10:52:01
1
1,679
NicoCaldo
75,712,595
19,257,035
How to add concurrency to my asyncio based file downloader script without hitting server
<p>Below is my code to download a file as fast as possible using asyncio. I'm trying to implement multi-server, multi-chunk, multi-threaded downloads just like IDM and aria2.</p> <pre><code>import asyncio import os.path import shutil import aiofiles import aiohttp import lxml.html as htmlparser import cssselect import regex, json from tempfile import TemporaryDirectory domain = &quot;https://doma.com/&quot; url = 'https://doma.com/ust/xxxx' CONTENT_ID = regex.compile(r&quot;/ust/([^?#&amp;/]+)&quot;) def parts_generator(size, start=0, part_size=5 * 1024 ** 2): while size - start &gt; part_size: yield start, start + part_size start += part_size yield start, size async def main(): async def download(url, headers, save_path): async with session.get(url, headers=headers) as request: file = await aiofiles.open(save_path, 'wb') await file.write(await request.content.read()) async with aiohttp.ClientSession() as session: async with session.get(url) as first: cs = await first.text() csrf_token = htmlparser.fromstring(cs).cssselect(&quot;meta[name='csrf-token']&quot;)[0].get(&quot;content&quot;) content_id = CONTENT_ID.search(url).group(1) Headers={&quot;x-requested-with&quot;: &quot;XMLHttpRequest&quot;, &quot;x-csrf-token&quot;: csrf_token} async with session.post(domain + &quot;api/get&amp;user=xxx&amp;pass=yyy&quot;, headers=Headers, json={&quot;id&quot;: content_id}) as resp: res = json.loads(await resp.text()) re = res['result']['Original']['file'] async with session.get(re) as request: size = request.content_length tasks = [] file_parts = [] filename = 'File.mp4' tmp_dir = TemporaryDirectory(prefix=filename, dir=os.path.abspath('.')) for number, sizes in enumerate(parts_generator(size)): part_file_name = os.path.join(tmp_dir.name, f'(unknown).part{number}') file_parts.append(part_file_name) tasks.append(await download(re, {'Range': f'bytes={sizes[0]}-{sizes[1]}'}, part_file_name)) await asyncio.gather(*tasks) with open(filename, 'wb') as wfd: for f in
file_parts: with open(f, 'rb') as fd: shutil.copyfileobj(fd, wfd) asyncio.run(main()) </code></pre> <p>Is using asyncio better than a thread pool or multiprocessing?</p> <p><strong>My script still doesn't perform concurrent downloads.</strong></p> <p><strong>I need help adding concurrency and handling cases where the server responds with an empty payload because of excessive requests — using a retry loop that sleeps and retries in such cases.</strong></p> <p><strong>With IDM this 700 MB MP4 can be downloaded within a few minutes, and IDM can reach 3 Mbps for this video download on my network.</strong></p> <p><strong>Can someone help tweak my Python script to achieve the same speed and failsafe download as IDM? I'd also like to be able to play the file while it is still downloading.</strong></p>
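The reason the script above downloads sequentially is `tasks.append(await download(...))`: awaiting inside the loop finishes each part before the next one starts, so `asyncio.gather` later receives a list of already-completed results. Creating the coroutines *without* awaiting them and bounding them with a semaphore gives real concurrency, plus a retry/backoff path for empty responses. A sketch with the network call stubbed out (swap in the aiohttp range request):

```python
import asyncio

async def download_part(fetch, part_id, sem, max_retries=3):
    async with sem:                      # cap simultaneous connections
        for attempt in range(max_retries):
            data = await fetch(part_id)
            if data:                     # server may answer empty under load
                return data
            await asyncio.sleep(2 ** attempt)  # back off, then retry
        raise RuntimeError(f"part {part_id} failed after {max_retries} tries")

async def download_all(fetch, n_parts, max_conn=8):
    sem = asyncio.Semaphore(max_conn)
    # build coroutines WITHOUT awaiting them -- gather runs them concurrently
    parts = [download_part(fetch, i, sem) for i in range(n_parts)]
    return await asyncio.gather(*parts)

async def _fake_fetch(part_id):
    # hypothetical stand-in for session.get(url, headers={'Range': ...})
    await asyncio.sleep(0)
    return bytes([part_id]) * 3

chunks = asyncio.run(download_all(_fake_fetch, 4, max_conn=2))
```

`gather` preserves input order, so the returned chunks can be written to the part files (or concatenated) in sequence, which is also what makes play-while-downloading feasible for the already-completed prefix.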
<python><python-asyncio>
2023-03-12 10:49:34
2
367
Tathastu Pandya
75,712,503
7,624,179
Workaround to using pip in a virtual Python 2.7 32-bit environment
<p>I am currently using Anaconda to setup a virtual environment of python 2.7.1 32-Bit. My project require several packages to be downloaded. However, when I try</p> <pre><code>&gt;pip install numpy </code></pre> <p>It gives me the following error.</p> <pre><code>DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support pip 21.0 will remove support for this functionality. c:\python27\lib\site-packages\pip\_vendor\urllib3\util\ssl_.py:424: SNIMissingWarning: An HTTPS request has been made, but the SNI (Server Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings SNIMissingWarning, c:\python27\lib\site-packages\pip\_vendor\urllib3\util\ssl_.py:164: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecurePlatformWarning, </code></pre> <p>I have noticed that this only happens for the 32-bit installation of Python and not in the 64 bit version. 
However, the <a href="https://github.com/justinsalamon/audio_to_midi_melodia" rel="nofollow noreferrer">Melodia</a> library I am trying to use requires Python 2 32-bit.</p> <p>I have tried installing the packages via conda's default install feature, but the command fails even after adding the conda-forge channel.</p> <p>Is it possible to work around this issue? Is manually installing the packages (not via pip) a viable option?</p>
<python><python-2.7><pip><conda><32bit-64bit>
2023-03-12 10:29:12
0
859
Kenta Nomoto
75,712,497
1,036,903
PyPDF2:Creating EncodedStreamObject is not currently supported
<p>The following code tries to edit part of the text in a PDF file:</p> <pre><code>from PyPDF2 import PdfReader, PdfWriter replacements = [(&quot;Failed&quot;, &quot;Passed&quot;)] pdf = PdfReader(open(&quot;2.pdf&quot;, &quot;rb&quot;)) writer = PdfWriter() for page in pdf.pages: contents = page.get_contents().get_data() #print(contents) old contents for (a, b) in replacements: contents = contents.replace(str.encode(a), str.encode(b)) #print(contents) new contents which has 'Passed' as new value page.get_contents().set_data(str(contents)) #Issue occurs here writer.add_page(page) with open(&quot;2_modified.pdf&quot;, &quot;wb&quot;) as f: writer.write(f) </code></pre> <p>I keep getting the following error:</p> <blockquote> <p>Traceback (most recent call last): <br> File &quot;/pdf_editor.py&quot;, line 14, in &lt;module&gt; <br>     page.get_contents().set_data(str(contents)) #Issue occurs here <br> File &quot;/venv/lib/python3.9/site-packages/PyPDF2/generic/_data_structures.py&quot;, line 839, in set_data <br>     raise PdfReadError(&quot;Creating EncodedStreamObject is not currently supported&quot;) <br> PyPDF2.errors.PdfReadError: Creating EncodedStreamObject is not currently supported</p> </blockquote> <p>I tried the solutions mentioned <a href="https://stackoverflow.com/questions/72896483/pypdf2-encodedstreamobject-and-decodedstreamobject-issues">here</a>, which did not work, and also found <a href="https://github.com/py-pdf/pypdf/issues/656" rel="nofollow noreferrer">this</a> GitHub issue, which has a &quot;bug&quot; label but no further updates.</p> <p><strong>UPDATE:</strong> <br> I had tried the <a href="https://stackoverflow.com/questions/31703037/how-can-i-replace-text-in-a-pdf-using-python">library</a> suggested in the comments earlier but did not pursue it, for two reasons:</p> <ol> <li>It does not seem widely used</li> <li>I kept hitting one issue or another, the last being an 'apply_redact_annotations' error</li> </ol> <p>So I wanted to know of any other workaround or other good
libraries to achieve this</p>
<python><pdf><pdf-generation><wkhtmltopdf><pypdf>
2023-03-12 10:27:57
1
396
Vinod
75,712,458
2,355,176
Disable postgres index updation temporarily and update the indexes manually later for insert statement performance
<p>I have about 1300 CSV files with almost 40k rows in each file. I have written Python code to read a file and convert all 40k entries into a single insert statement to insert into a Postgres database.</p> <p>The pseudocode is as follows:</p> <pre><code>for file in tqdm(files, desc=&quot;Processing files&quot;): rows = ReadFile(file) # read all 40k rows from the file q = GenerateQuery(rows) # convert all rows into single bulk insert statement InsertData(q) # Execute the generated query to insert data </code></pre> <p>The code works, but there are performance issues. When I start with an empty table it runs at around 2 to 3 it/s, but after 15 to 20 files it slows to 10 to 12 s/it, and then performance drops exponentially with every 10 to 15 files processed; the per-iteration time keeps increasing until it reaches 40 to 50 s/it. Based on my understanding, I have developed the following hypothesis:</p> <p>Since the table is empty at the start, it is very easy to update the table's indexes, so the 40k-row bulk inserts take almost no time to update them; but as the table grows, it becomes harder and harder to update the indexes in a table with 10M+ records.</p> <p>My question is: can I temporarily disable index updates on the table, so that after the complete data dump I can then update the indexes manually by calling some query in Postgres — which, for now, I don't know if it really exists.</p>
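Postgres has no switch to pause index maintenance on a regular table; the standard workaround is the manual route: capture the index definitions from `pg_indexes` (`SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'mytable'`), drop them, bulk-load (ideally with `COPY` rather than multi-row `INSERT`), then recreate them once at the end. A sketch of the statement ordering, with table and index names hypothetical:

```python
def bulk_load_plan(index_defs, load_statements):
    """index_defs: (indexname, indexdef) pairs captured from pg_indexes
    before the load; returns SQL statements in safe execution order."""
    plan = [f"DROP INDEX IF EXISTS {name};" for name, _ in index_defs]
    plan += list(load_statements)                  # the 1300-file bulk load
    plan += [f"{ddl};" for _, ddl in index_defs]   # rebuild once, at the end
    return plan

plan = bulk_load_plan(
    [("idx_city", "CREATE INDEX idx_city ON mytable (city)")],
    ["COPY mytable FROM STDIN WITH (FORMAT csv);"],
)
```

Rebuilding each index once over the final 10M+ rows is far cheaper than maintaining it across ~1300 bulk inserts; keep the primary key in place if other sessions need the table during the load.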
<python><python-3.x><postgresql><csv>
2023-03-12 10:19:36
3
2,760
Zain Ul Abidin
75,712,275
20,589,631
Is there a way to compare a value against multiple possibilities without repetitive if statements?
<p>I have a function with an input that can be of multiple possibilities, and each one triggers a different function.</p> <p>The example code here works, but it's very bad and I don't want to use this sort of code:</p> <pre><code>def start(press_type): if press_type == &quot;keyboard&quot;: function1() if press_type == &quot;left click&quot;: function2() if press_type == &quot;right click&quot;: function3() if press_type == &quot;middle click&quot;: function4() </code></pre> <p>Is there any way to write the function without repeating those <code>if</code>/<code>else</code> statements?</p> <p>I tried to use a dictionary, but the result was difficult to work with, also, I have a couple other parameters with multiple set options, so this method would end up being convoluted eventually.</p>
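The idiomatic replacement is a dispatch dictionary mapping each input to its handler — functions are first-class values, so the chain of `if`s collapses into one lookup. The handler bodies below are hypothetical stand-ins for `function1`…`function4`:

```python
def function1(): return "typed"
def function2(): return "left"
def function3(): return "right"
def function4(): return "middle"

HANDLERS = {
    "keyboard": function1,
    "left click": function2,
    "right click": function3,
    "middle click": function4,
}

def start(press_type):
    try:
        handler = HANDLERS[press_type]   # one lookup replaces all the ifs
    except KeyError:
        raise ValueError(f"unknown press type: {press_type!r}") from None
    return handler()
```

A new press type becomes one dict entry instead of another `if` branch, and the same pattern scales to the other multi-option parameters mentioned (one dict per parameter). On Python 3.10+ a `match` statement is the other common option, though it still needs one arm per case.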
<python><if-statement>
2023-03-12 09:45:13
3
391
ori raisfeld
75,712,094
12,298,276
How to make UNIX-Python-executable work in VS Code via "select interpreter" on a Windows machine?
<p>I was forced to use <a href="https://learn.microsoft.com/en-us/windows/wsl/install" rel="nofollow noreferrer">WSL on Windows 10</a> to install the Python library <a href="https://pypi.org/project/Cartopy/" rel="nofollow noreferrer">Cartopy</a> on my Windows machine correctly. It is way easier in a Linux-distribution, so I chose Ubuntu 20.04 WSL2.</p> <p>As a consequence, I created a virtual Python environment via WSL in my project folder via <code>python -m venv linux-venv</code>. The problem is now that all binaries/executables were compiled for the WSL-distribution and are thus not selectable from <a href="https://code.visualstudio.com/docs/python/environments" rel="nofollow noreferrer">VS Code</a> started from within Windows 10.</p> <p>Consequently, I installed VS Code on WSL and started it from there. My hope was that I can then <a href="https://code.visualstudio.com/docs/python/environments#_manually-specify-an-interpreter" rel="nofollow noreferrer">manually select the right interpreter path</a> <code>linux-venv/bin/python</code> from my project's root directory. 
Unfortunately, it does not work either, even though I'm doing this from VS Code running on WSL:</p> <p><a href="https://i.sstatic.net/qehvx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qehvx.png" alt="select interpreter path" /></a></p> <p>Clicking on my desired interpreter <code>linux-venv/bin/python</code>, the following error message is displayed:</p> <p><a href="https://i.sstatic.net/TRPdX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TRPdX.png" alt="error message" /></a></p> <p>If I try to &quot;browse to my desired executable&quot; instead, they are not displayed since solely &quot;.exe&quot; - file extensions are allowed:</p> <p><a href="https://i.sstatic.net/MCMOR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MCMOR.png" alt="browse1" /></a></p> <p>Next, I can solely browse &quot;.exe&quot; - files, wherefore I cannot select the UNIX-compiled executables.</p> <p><a href="https://i.sstatic.net/H3OZn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H3OZn.png" alt="browse2" /></a></p> <hr /> <p><strong>Conclusion:</strong></p> <p><em>It does not seem possible at the moment to select a UNIX-compiled interpreter in VS Code properly on a Windows 10 - machine, not even using WSL.</em></p> <p>Since the aforementioned Python library <a href="https://pypi.org/project/Cartopy/" rel="nofollow noreferrer">Cartopy</a> solely works well on UNIX-systems, I need to install my virtual environment from WSL.</p> <p>In order to continue with my project, <strong>I would need to be able to select the UNIX-compiled interpreter</strong>, but I cannot.</p>
<python><visual-studio-code><windows-subsystem-for-linux><pythoninterpreter>
2023-03-12 09:10:45
1
4,731
Andreas L.
75,712,057
15,724,084
python extracting values from lists inside a list
<p>With Selenium Webdriver I search Google for an input value. Then I open the result links one by one and extract the email addresses from each URL.</p> <pre><code>[[], [], [], [], ['info@neurotechnology.com', 'support@neurotechnology.com', 'support@neurotechnology.com'], ['info@neurotechnology.com', 'support@neurotechnology.com', 'support@neurotechnology.com'], ['info@himalayakarya.com', 'info@himalayakarya.com'], ['info@himalayakarya.com', 'info@himalayakarya.com'], ['sales@obejor.com', 'sales@obejor.com'], ['sales@obejor.com', 'sales@obejor.com'], [], [], ['507c10c65fb14333bfa540a7cfb4b543@o417096.ingest.sentry.io'], ['507c10c65fb14333bfa540a7cfb4b543@o417096.ingest.sentry.io'], ['hello@paykobo.com'], ['hello@paykobo.com'], ['customercare@indiamart.com'], ['customercare@indiamart.com'], [], [], [], []]
[[], ['info@neurotechnology.com', 'support@neurotechnology.com', 'support@neurotechnology.com'], ['info@himalayakarya.com', 'info@himalayakarya.com'], ['sales@obejor.com', 'sales@obejor.com'], ['507c10c65fb14333bfa540a7cfb4b543@o417096.ingest.sentry.io'], ['hello@paykobo.com'], ['customercare@indiamart.com']]
</code></pre> <p>The result is a list of lists. I need to flatten it so the emails are <code>str</code> values inside a single list.</p> <pre><code>#open google
from selenium.webdriver.chrome.options import Options
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.chrome.service import Service as ChromeService
from selenium.webdriver.common.keys import Keys

chrome_options = Options()
chrome_options.headless = False
chrome_options.add_argument(&quot;start-maximized&quot;)
# options.add_experimental_option(&quot;detach&quot;, True)
chrome_options.add_argument(&quot;--no-sandbox&quot;)
chrome_options.add_experimental_option(&quot;excludeSwitches&quot;, [&quot;enable-automation&quot;])
chrome_options.add_experimental_option('excludeSwitches', ['enable-logging'])
chrome_options.add_experimental_option('useAutomationExtension', False)
chrome_options.add_argument('--disable-blink-features=AutomationControlled')
driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()), options=chrome_options)
driver.get('https://www.google.com/')

#paste - write name
#var_inp=input('Write the name to search:')
var_inp='dermalog lf10'

#search for image
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.NAME, &quot;q&quot;))).send_keys(var_inp+Keys.RETURN)

#find first 10 companies
res_lst=[]
res=WebDriverWait(driver,10).until(EC.presence_of_all_elements_located((By.TAG_NAME,'cite')))
print(len(res))
for r in res:
    res_lst.append(driver.execute_script(&quot;return arguments[0].firstChild.textContent;&quot;, r))
print(res_lst)

#take email addresses from company
import re
emails_lst=[]
for i in range(len(res_lst)):
    driver.get(res_lst[i])
    email_pattern = r&quot;[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,4}&quot;
    html = driver.page_source
    emails = re.findall(email_pattern, html)
    driver.implicitly_wait(5)
    print(emails)
    emails_lst.append(emails)
print(emails_lst)

no_duplicates=[x for n, x in enumerate(emails_lst) if x not in emails_lst[:n]]
print(no_duplicates)
driver.close()
#send email
</code></pre>
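One possible follow-up step (not from the original post — the example addresses here are made up): keep the post's consecutive-duplicate filter, then flatten the remaining list of lists with `itertools.chain` and deduplicate the individual addresses while preserving order.

```python
from itertools import chain

# Stand-in for the scraper's output: a list of lists of email strings.
emails_lst = [
    [],
    ['info@example.com', 'support@example.com'],
    ['info@example.com', 'support@example.com'],  # duplicate sublist
    ['sales@example.org'],
]

# Drop duplicate sublists exactly as the post already does.
no_duplicates = [x for n, x in enumerate(emails_lst) if x not in emails_lst[:n]]

# chain.from_iterable flattens; dict.fromkeys dedups while keeping order.
flat = list(dict.fromkeys(chain.from_iterable(no_duplicates)))
print(flat)  # ['info@example.com', 'support@example.com', 'sales@example.org']
```

`dict.fromkeys` is used instead of `set` so the final list keeps the order in which the addresses were first scraped.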
<python><list><selenium-webdriver>
2023-03-12 09:05:09
1
741
xlmaster
75,712,029
12,863,331
Capturing any character between two specified words including new lines
<p>I need to capture the title between the words TITLE and JOURNAL, excluding the case in which the captured string is <code>Direct Submission</code>.<br /> For instance, in the following text,</p> <pre><code>  TITLE     The Identification of Novel Diagnostic Marker Genes for the
            Detection of Beer Spoiling Pediococcus damnosus Strains Using
            the BlAst Diagnostic Gene findEr
  JOURNAL   PLoS One 11 (3), e0152747 (2016)
   PUBMED   27028007
  REMARK    Publication Status: Online-Only
REFERENCE   2  (bases 1 to 462)
  AUTHORS   Behr,J., Geissler,A.J. and Vogel,R.F.
  TITLE     Direct Submission
  JOURNAL   Submitted (04-AUG-2015) Technische Mikrobiologie, Technische
</code></pre> <p>the captured string needs to be only<br /> <code>'The Identification of Novel Diagnostic Marker Genes for the Detection of Beer Spoiling Pediococcus damnosus Strains Using the BlAst Diagnostic Gene findEr'</code>, with or without newline characters (preferably without).<br /> I tried applying regular expressions such as those offered <a href="https://stackoverflow.com/questions/159118/how-do-i-match-any-character-across-multiple-lines-in-a-regular-expression">here</a> and <a href="https://stackoverflow.com/questions/8303488/regex-to-match-any-character-including-new-lines">here</a>, but couldn't adapt them to my needs.<br /> Thanks.</p>
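One possible pattern (not from the original post): combine `re.DOTALL` so `.` crosses line breaks with a negative lookahead placed directly after `TITLE`, so entries whose title is `Direct Submission` are skipped entirely; whitespace is then collapsed to drop the newlines.

```python
import re

# Trimmed GenBank-style record from the question.
record = """  TITLE     The Identification of Novel Diagnostic Marker Genes for the
            Detection of Beer Spoiling Pediococcus damnosus Strains Using
            the BlAst Diagnostic Gene findEr
  JOURNAL   PLoS One 11 (3), e0152747 (2016)
   PUBMED   27028007
REFERENCE   2  (bases 1 to 462)
  AUTHORS   Behr,J., Geissler,A.J. and Vogel,R.F.
  TITLE     Direct Submission
  JOURNAL   Submitted (04-AUG-2015) Technische Mikrobiologie, Technische
"""

# Lookahead sits before \s+ is consumed, so it cannot be bypassed by
# backtracking; the lazy (.*?) stops at the first following JOURNAL.
pattern = r"TITLE(?!\s+Direct Submission)\s+(.*?)\s+JOURNAL"

# Collapse internal newlines/indentation into single spaces.
titles = [" ".join(m.split()) for m in re.findall(pattern, record, re.DOTALL)]
print(titles)
```

Note that writing the lookahead as `TITLE\s+(?!Direct Submission)` instead would be subtly wrong: the engine could backtrack `\s+` to consume fewer spaces and slip past the lookahead.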
<python><regex>
2023-03-12 08:58:12
1
304
random
75,711,922
4,865,723
Dataclasses and slots causing ValueError: 'b' in __slots__ conflicts with class variable
<p>I don't understand the error message and also couldn't find other SO questions and answers helping me to understand this. The MWE is tested with Python 3.9.2. I'm aware that there is a <code>slots=True</code> parameter in Pythons 3.10 dataclasses. But this isn't an option here.</p> <p>The error output:</p> <pre><code>Traceback (most recent call last): File &quot;/home/user/dc.py&quot;, line 6, in &lt;module&gt; class Foo: ValueError: 'b' in __slots__ conflicts with class variable </code></pre> <p>Why does that happen? I even don't understand the background of that error.</p> <pre><code>#!/usr/bin/env pyhton3 from dataclasses import dataclass, field @dataclass(init=False) class Foo: __slots__ = ('a', 'b') a: int b: list[str] = field(init=False) def __init__(self, a, ccc): self.a = a # b = _some_fancy_modifications(ccc) self.b = b if __name__ == '__main__': f = Foo(1, list('bar')) </code></pre> <p>The member <code>b</code> is not given as an argument of <code>__init__()</code> but computed based on the argument <code>ccc</code>. Because of that I think I need to write my own <code>__init__()</code> (<code>@dataclass(init=False)</code>) and the <code>b</code> member shouldn't be initialized by dataclass (<code>field(init=False)</code>). Maybe I misunderstood something here?</p>
<python><python-3.9><python-dataclasses><slots>
2023-03-12 08:34:35
2
12,450
buhtz
75,711,859
859,141
Django Group By Aggregation works until additional fields added
<p>I'd like to group the following track variations at the given circuit so that an index page shows only a single entry for each circuit. A subsequent circuit detail page will then show the various configurations.</p> <p><a href="https://i.sstatic.net/sm72E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sm72E.png" alt="enter image description here" /></a></p> <p>If I keep the query simple as below, it works.</p> <pre><code>track_listing = (Tracks.objects
                 .values('sku', 'track_name')
                 .annotate(variations=Count('sku'))
                 .order_by()
                 )
</code></pre> <p>However, adding other fields such as track_id or location breaks or changes the grouping.</p> <pre><code>track_listing = (Tracks.objects
                 .values('sku', 'track_name', 'track_id')
                 .annotate(variations=Count('sku'))
                 .order_by()
                 )
</code></pre> <p>Is there a way to keep the grouping while including other fields? The location is not unique to each row, and having one example of track_id allows me to retrieve a track image.</p>
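Django's <code>.values(...).annotate(...)</code> compiles to a SQL <code>GROUP BY</code> over exactly the <code>values()</code> columns, which is why adding <code>track_id</code> splits every layout into its own group. A plain-SQL sketch of the effect (table and data are made up), including one way out: aggregate <code>track_id</code> instead of grouping by it, so each circuit keeps one representative id.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tracks (track_id INTEGER, sku TEXT, track_name TEXT)")
con.executemany("INSERT INTO tracks VALUES (?, ?, ?)", [
    (1, "SIL", "Silverstone"),   # two layouts of the same circuit
    (2, "SIL", "Silverstone"),
    (3, "MON", "Monza"),
])

# Grouping by sku/track_name only: one row per circuit.
grouped = con.execute(
    "SELECT sku, track_name, COUNT(*) "
    "FROM tracks GROUP BY sku, track_name ORDER BY sku"
).fetchall()
print(grouped)  # [('MON', 'Monza', 1), ('SIL', 'Silverstone', 2)]

# Same grouping, plus one usable track_id per circuit via MIN().
with_id = con.execute(
    "SELECT sku, track_name, COUNT(*), MIN(track_id) "
    "FROM tracks GROUP BY sku, track_name ORDER BY sku"
).fetchall()
print(with_id)  # [('MON', 'Monza', 1, 3), ('SIL', 'Silverstone', 2, 1)]
```

In ORM terms this would be something like <code>.annotate(variations=Count('sku'), first_track_id=Min('track_id'))</code>; the annotation alias must differ from the model field name <code>track_id</code>.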
<python><django><django-models><django-queryset><django-orm>
2023-03-12 08:21:33
2
1,184
Byte Insight
75,711,757
5,870,471
FastAPI GET endpoint returns "405 method not allowed" response
<p>A <code>GET</code> endpoint in FastAPI returns the correct result, but returns <code>405 method not allowed</code> when <code>curl -I</code> is used. This happens with all the <code>GET</code> endpoints. As a result, the application works, but a load balancer's health check on the application fails.</p> <p>Any suggestions on what could be wrong?</p> <p><strong>code</strong></p> <pre><code>@app.get('/health')
async def health():
    &quot;&quot;&quot;
    Returns health status
    &quot;&quot;&quot;
    return JSONResponse({'status': 'ok'})
</code></pre> <p><strong>result</strong></p> <pre><code>curl http://172.xx.xx.xx:8080
</code></pre> <p><a href="https://i.sstatic.net/YHbiN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YHbiN.png" alt="" /></a></p> <p><strong>return header</strong></p> <pre><code>curl -I http://172.xx.xx.xx:8080
</code></pre> <p><a href="https://i.sstatic.net/qFSo0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qFSo0.png" alt="" /></a></p>
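A likely explanation: <code>curl -I</code> and many load-balancer health checks send a <code>HEAD</code> request, and FastAPI registers a route only for the methods you list, so a route declared with <code>@app.get</code> answers <code>HEAD</code> with 405. A hypothetical mini-router (not FastAPI's actual internals) illustrating the behaviour and the fix of registering both methods — in FastAPI itself that would mean adding <code>@app.head('/health')</code> alongside the GET decorator, or using <code>@app.api_route('/health', methods=['GET', 'HEAD'])</code>:

```python
routes = {}

def route(path, methods):
    """Register a handler for each (method, path) pair."""
    def register(fn):
        for method in methods:
            routes[(method, path)] = fn
        return fn
    return register

def dispatch(method, path):
    handler = routes.get((method, path))
    if handler is None:
        # 405 when the method isn't registered (a real router would
        # distinguish unknown paths with 404).
        return 405, 'Method Not Allowed'
    return 200, handler()

@route('/health', methods=['GET'])          # GET only, like @app.get
def health():
    return {'status': 'ok'}

print(dispatch('GET', '/health'))    # (200, {'status': 'ok'})
print(dispatch('HEAD', '/health'))   # (405, 'Method Not Allowed') -- curl -I

routes[('HEAD', '/health')] = health        # the fix: register HEAD too
print(dispatch('HEAD', '/health'))   # (200, {'status': 'ok'})
```

Alternatively, many load balancers can be configured to use GET for their health probe, which avoids touching the application.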
<python><rest><fastapi><http-status-code-405>
2023-03-12 07:55:38
2
548
toing