Dataset schema (column: type, min to max):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, 15 to 150 chars
QuestionBody: string, 40 to 40.3k chars
Tags: string, 8 to 101 chars
CreationDate: date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, 3 to 30 chars (nullable)
75,443,742
10,844,937
How to inspect the total number of celery workers?
<p>I start 10 <code>celery</code> workers with the following command.</p> <pre><code>celery -A worker.celery worker -l info -c 10 </code></pre> <p>I need to know the total number of active <code>celery</code> workers. If the total number of active workers is not greater than 10, we can handle the new task. If not, the new task has to wait until a worker finishes. Here is the code to check the total number of active workers.</p> <pre><code> import json import subprocess def get_celery_worker(): bash_command = &quot;celery -A worker inspect active -j&quot; process = subprocess.Popen(bash_command.split(), stdout=subprocess.PIPE) output, error = process.communicate() output_string = output.decode(&quot;utf-8&quot;) output_json = json.loads(output_string) number_of_celery_worker = 0 if len(list(output_json.values())[0]) == 0: pass else: for value in list(output_json.values())[0]: for v in value.values(): if v == 'run_task': # Here run_task is the worker name. number_of_celery_worker += 1 return int(number_of_celery_worker / 2) # Every task contains two run_task </code></pre> <p>I start 10 tasks one by one, every second. The subprocess gives me the worker totals: <code>0</code>, <code>0</code>, <code>1</code>, <code>1</code>, <code>2</code>, <code>3</code>, <code>4</code>, <code>5</code>, <code>6</code>, <code>6</code>.</p> <p>Does anyone have an idea how to implement this, or any other idea for counting workers?</p>
<python><subprocess><celery>
2023-02-14 05:15:29
2
783
haojie
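A sketch of the counting logic from the question above, separated from the subprocess call so it can be tested on its own. Note that Celery also exposes the same data in-process via `app.control.inspect().active()`, which avoids spawning a shell at all. The task name `run_task` and the divide-by-two are taken from the question, not from Celery itself:

```python
def count_active_workers(inspect_output, task_name='run_task'):
    # `inspect_output` mirrors the JSON from `celery -A worker inspect active -j`:
    # a dict of {worker_hostname: [ {task fields ...}, ... ]}.
    count = 0
    for tasks in inspect_output.values():
        for task in tasks:
            # Count entries where any field equals the task name.
            if any(v == task_name for v in task.values()):
                count += 1
    return count // 2  # every task shows up twice, per the question
```

Isolating the pure function also makes the off-by-one lag the asker observed easier to debug, since the JSON snapshot can be logged and replayed.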
75,443,581
18,758,062
stable_baselines3 callback on each step
<p>I am training a <code>stable_baselines3</code> <code>PPO</code> agent and want to perform some task on every step. To do this, I'm using a callback <code>CustomCallback</code> with an <code>_on_step</code> method defined.</p> <p>But it appears that <code>_on_step</code> is called only on every <code>PPO.n_steps</code>, so if the <code>n_steps</code> param is <code>1024</code>, then <code>CustomCallback._on_step</code> appears to be called only every <code>1024</code> steps.</p> <p>How can you do something on every single step, instead of on every <code>PPO.n_steps</code> steps?</p> <pre><code>from stable_baselines3 import PPO from stable_baselines3.common.env_util import make_vec_env from stable_baselines3.common.callbacks import BaseCallback class CustomCallback(BaseCallback): def __init__(self, freq, verbose=0): super().__init__(verbose) self.freq = freq def _on_step(self): if self.n_calls % self.freq == 0: print('do something') return True env = make_vec_env(&quot;CartPole-v1&quot;, n_envs=1) model = PPO(&quot;MlpPolicy&quot;, env, n_steps=1024) model.learn( total_timesteps=25000, callback=CustomCallback(freq=123), ) </code></pre>
<python><machine-learning><pytorch><reinforcement-learning><stable-baselines>
2023-02-14 04:50:50
0
1,623
gameveloster
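For reference, stable_baselines3 documents `_on_step` as being called after every environment step (it is `_on_rollout_end` that fires every `n_steps`), so with `n_envs=1` a `freq` of 1 should act on each step. A dependency-free sketch of the gating pattern, with `StepGate` as a hypothetical stand-in for the SB3 callback:

```python
class StepGate:
    """Minimal stand-in for the BaseCallback counting pattern:
    _on_step is invoked once per environment step, and n_calls
    gates how often the action fires."""

    def __init__(self, freq):
        self.freq = freq
        self.n_calls = 0
        self.fired = []  # records which steps we acted on

    def _on_step(self):
        self.n_calls += 1
        if self.n_calls % self.freq == 0:
            self.fired.append(self.n_calls)  # "do something" here
        return True

gate = StepGate(freq=1)  # freq=1 acts on every single step
for _ in range(5):
    gate._on_step()
```

If the real callback still appears to fire only every `n_steps`, it is worth checking whether the surrounding output is simply buffered or interleaved with rollout logging.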
75,443,467
102,401
How do you parse sections of text with Lark in Python
<p>I'm trying to figure out how to use the <a href="https://lark-parser.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">Lark Python Module</a> to parse a document that looks like this:</p> <pre><code>---&gt; TITLE Introduction ---&gt; CONTENT The quick Brown fox ---&gt; TEST Jumps over ---&gt; CONTENT The lazy dog </code></pre> <p>Each <code>---&gt;</code> marks the start of a section of a specific type that has some content that goes until the next <code>---&gt;</code> section starts.</p> <p>So far, I have this</p> <pre class="lang-py prettyprint-override"><code> from lark import Lark parser = Lark(r&quot;&quot;&quot; start: section* | line* section.1 : &quot;---&gt; &quot; SECTION_TITLE &quot;\n\n&quot; SECTION_TITLE.1 : &quot;TITLE&quot; | &quot;CONTENT&quot; | &quot;SOURCE&quot; | &quot;OUTPUT&quot; line.-1: ANY_LINE ANY_LINE.-1: /.+\n*/ &quot;&quot;&quot;, start='start') with open(&quot;src/index.mdx&quot;) as _in: print(parser.parse(_in.read())) </code></pre> <p>It parses the file, but everything shows up in <code>ANY_LINE</code> tokens instead of splitting out the section headers. I'm new to this type of parser and feel like I'm missing something obvious, but I haven't been able to figure it out.</p>
<python><lark-parser>
2023-02-14 04:30:03
1
25,593
Alan W. Smith
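In Lark, the usual culprit for the behaviour above is the low-priority `ANY_LINE` terminal still matching the `---> ` lines before the `section` rule applies; raising the marker's priority, or excluding lines starting with `--->` from `ANY_LINE`, is one route. If a full grammar is optional for this format, it is regular enough for a stdlib split; a hypothetical sketch:

```python
import re

def parse_sections(text):
    # Split on the '---> ' markers at line starts; each chunk is
    # 'SECTION_TITLE\n\n<content until the next marker>'.
    sections = []
    for chunk in re.split(r'^---> ', text, flags=re.MULTILINE):
        if not chunk.strip():
            continue  # leading empty chunk before the first marker
        title, _, body = chunk.partition('\n')
        sections.append((title.strip(), body.strip()))
    return sections
```

Each `(title, body)` pair can then be validated against the allowed section names (`TITLE`, `CONTENT`, etc.) in plain Python.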
75,443,406
7,766,024
How do I "push down" the current columns to form the first row, and create new columns to replace that one?
<p>I have a DataFrame that essentially has the first row that I want as the column row and I'd like to know how to set new columns and set that row as the first row.</p> <p>For example:</p> <pre><code>| 4 | 3 | dog | | --- | --- | --- | | 1 | 2 | cat | </code></pre> <p>I want to change that DataFrame to be:</p> <pre><code>| number_1 | number_2 | animal | | -------- | -------- | ------ | | 4 | 3 | dog | | 1 | 2 | cat | </code></pre> <p>What would be the best way to do this?</p>
<python><pandas><dataframe>
2023-02-14 04:17:31
1
3,460
Sean
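A sketch of one way to do this in pandas, using the question's example (the `concat` approach shown is one of several; shifting via `df.T.reset_index().T` is another):

```python
import pandas as pd

# Frame whose header is really the first data row.
df = pd.DataFrame([[1, 2, 'cat']], columns=[4, 3, 'dog'])

# Turn the current header into a data row, then install the new header.
header_row = pd.DataFrame([list(df.columns)], columns=df.columns)
fixed = pd.concat([header_row, df], ignore_index=True)
fixed.columns = ['number_1', 'number_2', 'animal']
```

If the file comes from `read_csv`, passing `header=None` and `names=[...]` up front avoids the problem entirely.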
75,443,405
14,661,648
Python psycopg2 Caching?
<p>Is it possible to store the executed <code>fetchall()</code> query result from psycopg2 in memory, so that I don't have to run it again on my database with a million entries?</p> <p>What's the best way to store the results locally on a machine so that Python can dissect the <code>list</code> at any time?</p>
<python><postgresql><psycopg2>
2023-02-14 04:17:29
1
1,067
Jiehfeng
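One common pattern, sketched with hypothetical names: run the query once, pickle the `fetchall()` result to disk, and reuse the file on later runs. For a million rows `pickle` is usually workable; a local SQLite copy or a columnar file format are alternatives when the data needs querying rather than just iterating:

```python
import pickle
from pathlib import Path

def load_or_fetch(cache_path, fetch):
    # Reuse a pickled result when present; otherwise run the query once
    # (fetch is e.g. `lambda: cur.fetchall()`) and save it for next time.
    path = Path(cache_path)
    if path.exists():
        return pickle.loads(path.read_bytes())
    rows = fetch()
    path.write_bytes(pickle.dumps(rows))
    return rows
```

Deleting the cache file is the invalidation strategy here, so this suits data that changes rarely or where staleness is acceptable.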
75,443,369
1,056,563
"RuntimeWarning: divide by zero encountered in log" in numpy.log even though small values were filtered out
<p>Given <code>samplex</code>:</p> <pre><code>In [22]: samplex Out[22]: array([0. , 0.00204082, 0.00408163, 0.00612245, 0.00816327, 0.01020408, 0.0122449 , 0.01428571, 0.01632653, 0.01836735, 0.02040816, 0.02244898, 0.0244898 , 0.02653061, 0.02857143, 0.03061224, 0.03265306, 0.03469388, 0.03673469, 0.03877551, 0.04081633, 0.04285714, 0.04489796, 0.04693878, 0.04897959, 0.05102041, 0.05306122, 0.05510204, 0.05714286, 0.05918367, 0.06122449, 0.06326531, 0.06530612, 0.06734694, 0.06938776, 0.07142857, 0.07346939, 0.0755102 , 0.07755102, 0.07959184, 0.08163265, 0.08367347, 0.08571429, 0.0877551 , 0.08979592, 0.09183673, 0.09387755, 0.09591837, 0.09795918, 0.1 ]) </code></pre> <p>I am using <code>numpy.where</code> to protect against <code>log(0)</code> using <code>np.where(samplex&gt;1e-8</code>:</p> <pre><code>import numpy as np np.where(samplex&gt;1e-8,np.log(samplex),0) </code></pre> <p>But that's not completely working - a warning is generated though <code>numpy</code> does complete the work anyways:</p> <pre><code>&lt;ipython-input-18-e5dde8c65402&gt;:1: RuntimeWarning: divide by zero encountered in log np.where(samplex&gt;1e-8,np.log(samplex),0) Out[18]: array([ 0. , -6.19440539, -5.50125821, -5.0957931 , -4.80811103, -4.58496748, -4.40264592, -4.24849524, -4.11496385, -3.99718081, -3.8918203 , -3.79651012, -3.70949874, -3.62945603, -3.55534806, -3.48635519, -3.42181667, -3.36119205, -3.30403363, -3.24996641, -3.19867312, -3.14988295, -3.10336294, -3.05891118, -3.01635156, -2.97552957, -2.93630885, -2.89856853, -2.86220088, -2.82710956, -2.79320801, -2.76041819, -2.72866949, -2.69789783, -2.66804487, -2.63905733, -2.61088645, -2.58348748, -2.55681923, -2.53084374, -2.50552594, -2.48083332, -2.45673577, -2.43320528, -2.41021576, -2.3877429 , -2.36576399, -2.34425779, -2.32320438, -2.30258509]) </code></pre> <p>So what is happening here? Is there a preferred pattern to protect against divide by 0's?</p>
<python><numpy>
2023-02-14 04:07:49
3
63,891
WestCoastProjects
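The warning in the question above is expected: `np.where` evaluates `np.log(samplex)` over the whole array *before* selecting, so `log(0)` still executes. NumPy ufuncs accept their own `where=` and `out=` arguments that skip the masked elements entirely; a small sketch:

```python
import numpy as np

samplex = np.array([0.0, 0.5, 1.0])

# The ufunc's own `where=` skips masked elements and leaves `out`
# untouched there, so log(0) is never computed and no warning fires.
safe_log = np.log(samplex, out=np.zeros_like(samplex), where=samplex > 1e-8)
```

Wrapping the `np.where` version in `np.errstate(divide='ignore')` also silences the warning, but the `where=`/`out=` form avoids the wasted computation as well.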
75,443,351
5,182,223
Python asyncio unable to run multiple tasks properly
<p>I have the following code snippet that I expect to run two async functions (<code>func1</code> and <code>func2</code>) concurrently, where:</p> <ul> <li><code>Worker</code> is an infinite loop that keeps fetching items from a global <code>asyncio.Queue</code> instance (whether the queue is empty or not) and simply printing some stuff, and <code>Worker.start()</code> is the method that starts that loop</li> </ul> <pre class="lang-py prettyprint-override"><code>worker1 = Worker() worker2 = Worker() worker3 = Worker() async def func1(): print(&quot;Running func1...&quot;) await asyncio.gather(worker1.start(), worker2.start(), worker3.start()) print(&quot;func1 done...&quot;) async def func2(): print(&quot;Running func2...&quot;) await asyncio.sleep(2) print(&quot;func2 done...&quot;) async def background_tasks(): asyncio.create_task(func1()) asyncio.create_task(func2()) if __name__ == '__main__': asyncio.run(background_tasks()) </code></pre> <p>I expect the two functions to run concurrently and to get some output similar to the below:</p> <pre><code>Running func1... Running func2... worker1 printing object worker2 printing object worker3 waiting worker2 printing object func2 done... worker1 printing object worker2 printing object ... (expecting no &quot;func1 done...&quot; because of the infinite loop) </code></pre> <p>But I actually get output like this:</p> <pre><code>Running func1... Running func2... Process finished with exit code 0 </code></pre> <p>It seems both functions started but never finished properly; even <code>func2</code> has no ending output. I am unable to find a solution to this and hope to get some help, thanks in advance!</p>
<python><python-asyncio>
2023-02-14 04:04:26
1
677
nonemaw
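The observed behaviour is expected: `asyncio.run(background_tasks())` exits as soon as `background_tasks` itself returns, and `create_task` only schedules work, so the still-pending tasks are torn down immediately. Awaiting the tasks (for example via `gather`) keeps the loop alive; a minimal sketch (for the question's infinite workers, this `gather` would simply never return, which is the desired behaviour):

```python
import asyncio

async def func2():
    # Stand-in for the question's second coroutine.
    await asyncio.sleep(0.01)
    return 'func2 done'

async def background_tasks():
    # create_task() only schedules the coroutine; awaiting the task
    # (here via gather) keeps the event loop alive until it finishes.
    task = asyncio.create_task(func2())
    return await asyncio.gather(task)

results = asyncio.run(background_tasks())
```

Fire-and-forget tasks additionally need a strong reference held somewhere, or they can be garbage-collected mid-flight.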
75,443,262
11,462,274
The zone attribute is specific to pytz's interface; please migrate to a new time zone provider
<p>I need to filter the lines where the date is today and not more than 2 hours ago according to local time (the code needs to be malleable as I travel to different timezones):</p> <pre class="lang-python prettyprint-override"><code> from tzlocal import get_localzone import pandas as pd import pytz df['DATA_HORA'] = df.apply(lambda x: datetime.strptime(f'{x[&quot;DATA&quot;]} {x[&quot;HORA&quot;]}', '%d/%m/%Y %H:%M'), axis=1) local_tz = get_localzone() local_tz = pytz.timezone(local_tz.zone) df_today = df[(df['DATA_HORA'].dt.tz_localize(local_tz) &gt;= (datetime.now(local_tz) - timedelta(hours=2))) &amp; (df['DATA_HORA'].dt.date == datetime.now(local_tz).date())] </code></pre> <p>I tried to understand how to do it, according to the specific documentation about it:</p> <p><a href="https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html#getting-a-time-zone-s-name" rel="nofollow noreferrer">https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html#getting-a-time-zone-s-name</a></p> <p>But I was not successful, how should I proceed to find my local timezone and not receive this warning anymore?</p>
<python><pandas><datetime><timezone><pytz>
2023-02-14 03:47:06
1
2,222
Digital Farmer
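The shim's warning means recent `tzlocal` already returns a `zoneinfo.ZoneInfo`-style object, which `datetime` (and pandas' `tz_localize`) accept directly, so the `pytz.timezone(local_tz.zone)` round-trip can simply be dropped. A sketch with a fixed zone standing in for `get_localzone()`:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# Hypothetical fixed zone; in the real code this would be the object
# returned by tzlocal.get_localzone(), used as-is with no pytz wrapper.
local_tz = ZoneInfo('America/Sao_Paulo')
now = datetime.now(local_tz)
```

If the zone's *name* is still needed somewhere, `str(local_tz)` (or `local_tz.key`) replaces the deprecated `.zone` attribute.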
75,443,109
9,331,134
pandas join on a column which contains a list - match any
<p>I have two dataframes. <br> I want to join on a column where one of the columns is a list; <br> the join should match if any value in the list matches.</p> <pre><code>df1 = | index | col_1 | | ----- | ----- | | 1 | 'a' | | 2 | 'b' | df2 = | index_2 | col_1 | | ------- | ----- | | A | ['a', 'c'] | | B | ['a', 'd', 'e'] | I am looking for something like df1.join(df2, on='col_1', type_=any, type='left') | index |col_1_x |index_2|col_1_y | | ----- |--------|_______| ----- | | 1 |'a' | A |['a', 'c'] | | 1 |'a' | B |['a', 'd', 'e']| </code></pre>
<python><pandas><dataframe><join>
2023-02-14 03:08:12
2
1,082
Kaushik J
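One way in pandas, using the question's frames: `explode` the list column so each element gets its own row, then do an ordinary equi-join, re-attaching the original lists afterwards if they are needed in the result:

```python
import pandas as pd

df1 = pd.DataFrame({'index': [1, 2], 'col_1': ['a', 'b']})
df2 = pd.DataFrame({'index_2': ['A', 'B'],
                    'col_1': [['a', 'c'], ['a', 'd', 'e']]})

# One row per list element, then a plain merge on the scalar column.
exploded = df2.explode('col_1')
merged = df1.merge(exploded, on='col_1', how='left')

# Re-attach the full lists via index_2 for the col_1_y-style output.
merged = merged.merge(df2.rename(columns={'col_1': 'col_1_list'}),
                      on='index_2', how='left')
```

Rows of `df1` with no matching element (here `'b'`) survive with NaN on the right side, matching left-join semantics.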
75,443,080
13,730,432
How can I fix this error while uploading a CSV to BigQuery?
<p>I am getting the below error while uploading a CSV to BigQuery using Python:</p> <p>google.api_core.exceptions.BadRequest: 400 Error while reading data, error message: Could not parse '80:00:00' as TIME for field global_time_for_first_response_goal (position 36) starting at location 11602908 with message 'Invalid time string &quot;80:00:00&quot;' File: gs://mybucket/mytickets/2023-02-1309:58:11:865588.csv</p> <pre><code> def upload_csv_bigquery_dataset(): # logging.info(&quot;&gt;&gt;&gt; Uploading CSV to Big Query&quot;) client = bigquery.Client() table_id = &quot;myproject-dev.tickets.ticket&quot; job_config = bigquery.LoadJobConfig( write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE, source_format = bigquery.SourceFormat.CSV, schema = [bigquery.table_schema], skip_leading_rows = 1, autodetect = True, allow_quoted_newlines = True ) uri = &quot;gs://mybucket/mytickets/2023-02-1309:58:11:865588.csv&quot; load_job = client.load_table_from_uri( uri, table_id, job_config=job_config ) # Make an API request. load_job.result() # Waits for the job to complete. destination_table = client.get_table(table_id) print(&quot;&gt;&gt;&gt; Loaded {} rows.&quot;.format(destination_table.num_rows)) </code></pre> <p>Can someone please suggest a fix or a workaround? I'm stuck on this.</p>
<python><csv><google-bigquery>
2023-02-14 03:02:09
1
701
xis10z
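The value '80:00:00' exceeds BigQuery's TIME range (a time of day tops out at 23:59:59.999999); a value like this is an elapsed duration, not a clock time, so one workaround is to load that column as INT64 seconds (or STRING) after converting. A hypothetical pre-processing helper:

```python
def duration_to_seconds(value):
    # '80:00:00' is an elapsed duration, which BigQuery's TIME type
    # rejects; store the column as integer seconds (or STRING) instead.
    hours, minutes, seconds = (int(part) for part in value.split(':'))
    return hours * 3600 + minutes * 60 + seconds
```

The column's schema entry would then declare INT64 rather than letting autodetect guess TIME from the HH:MM:SS shape.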
75,443,038
5,666,087
How do I modify TIFF physical resolution metadata
<p>I have several pyramidal, tiled TIFF images that were converted from a different format. The converter program wrote incorrect data to the XResolution and YResolution TIFF metadata. How can I modify these fields?</p> <pre><code>tiff.ResolutionUnit: 'centimeter' tiff.XResolution: '0.34703996762331574' tiff.YResolution: '0.34704136833246829' </code></pre> <p>Ideally I would like to use Python or a command-line tool.</p>
<python><tiff>
2023-02-14 02:52:47
1
19,599
jkr
75,442,872
4,780,574
Postgres/psycopg2 "execute_values": Which argument was not converted during string formatting?
<p>I am using <code>execute_values</code> to insert a list of lists of values into a postgres database using psycopg2. Sometimes I get &quot;not all arguments converted during string formatting&quot;, indicating that one of the values in one of the lists is not the expected data type (and also not NoneType). When it is a long list, it can be a pain to figure out which value in which list was causing the problem.</p> <ol> <li>Is there a way to get postgres/psycopg2 to tell me the specific 'argument which could not be converted'?</li> <li>If not, what is the most efficient way to look through the list of lists and find any incongruent data types per place in the list, excluding NoneTypes (which obviously are not equal to a value but also are not the cause of the error)?</li> </ol> <p>Please note that I am not asking for help with the specific set of values I am executing, but trying to find a general method to more quickly inspect the problem query so I can debug it.</p>
<python><postgresql><debugging><psycopg2><sqldatatypes>
2023-02-14 02:14:48
0
814
Stonecraft
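As far as I know the psycopg2 error does not identify the offending value (question 1), so for question 2 a pre-flight scan over the list of lists is the practical route. A hypothetical helper that reports every (row, column, value) whose type differs from the expected one, ignoring None as the question asks:

```python
def find_type_mismatches(rows, expected_types):
    # Scan a list of rows and collect (row_index, col_index, value) for
    # every value whose type is unexpected, skipping None entirely.
    mismatches = []
    for i, row in enumerate(rows):
        for j, (value, expected) in enumerate(zip(row, expected_types)):
            if value is not None and not isinstance(value, expected):
                mismatches.append((i, j, value))
    return mismatches
```

`expected_types` can be inferred from the first fully-populated row, or declared once per table; running the scan only inside the exception handler keeps the happy path fast.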
75,442,675
564,872
lxml fails to import, with error `symbol not found in flat namespace '_xsltDocDefaultLoader'`
<p>With the code:</p> <pre><code>from lxml.etree import HTML, XML </code></pre> <p>I get the traceback:</p> <pre><code>Traceback (most recent call last): File &quot;/Users/username/code/project/lxml-test.py&quot;, line 3, in &lt;module&gt; from lxml.etree import HTML, XML ImportError: dlopen(/Users/username/.virtualenvs/project-venv/lib/python3.11/site-packages/lxml/etree.cpython-311-darwin.so, 0x0002): symbol not found in flat namespace '_xsltDocDefaultLoader' </code></pre> <p>I'm on a mac m1 chip.</p> <p>I installed libxml2 and libxslt via brew.</p> <p>I'm running python 3.11 inside of a virtualenv.</p> <h3>What I've tried:</h3> <ul> <li>Uninstalling and re-installing lxml with pip, and tried several different versions. (4.7.1 &amp; 4.8.0 didn't compile. All of the 4.9.0,1,2 versions give me the above error)</li> <li>Installing libxml2 and libxslt via brew and then reinstalling python-lxml.</li> <li>Installing python-lxml via conda (<a href="https://stackoverflow.com/a/74596889/564872">as suggested here</a>)</li> </ul> <h4>EDIT:</h4> <p>I posted this bug in lxml's bug report forum, and was notified that this is a highly-duplicated bug report of <a href="https://bugs.launchpad.net/lxml/+bug/1913032" rel="nofollow noreferrer">Missing wheel for macos with M1 Edit</a></p>
<python><python-3.x><lxml><apple-m1>
2023-02-14 01:26:17
3
655
Civilian
75,442,651
3,225,420
Miniconda Not Using Environment Python
<p>I've searched for hours - if this is a duplicate question please be kind :)</p> <p>PC had Anaconda 4.x.x. Resolver took hours and I was stuck, could not upgrade Anaconda. Therefore I:</p> <ul> <li>Uninstalled Anaconda</li> <li>Downloaded Miniconda for Windows 10</li> <li>Created a new environment, installed pandas 1.5.2.</li> </ul> <p>The top of my python file looks like this:</p> <pre><code>print('Top of file') import pandas as pd </code></pre> <p>I activate my new environment and execute program. Prompt returns:</p> <blockquote> <p>Top of file</p> </blockquote> <blockquote> <p>File &quot;E:\MyPath\first_take.py&quot;, line 2, in import pandas as pd ModuleNotFoundError: No module named 'pandas'</p> </blockquote> <p>When I use <code>conda list</code> in the environment I see <code>pandas 1.5.2</code> in the results.</p> <p>This is where I've been chasing my tail for hours.</p> <p>I've tried:</p> <ul> <li>Setting PATH variable to include folders where python.exe and pip are located.</li> <li>Reinstalling Miniconda and letting it set my PATH variable.</li> <li>Recreating the environment and calling out a specific python version.</li> <li>Setting default program to open .py files as Python.</li> </ul> <p>UPDATE 1: I opened VS Code and had the same issue. However, I was able to change my Python interpreter to the one in the miniconda folder.</p> <p>Confirms my belief the interpreter being used from Anaconda prompt is not the one I think it is, and don't know how to change it.</p> <p>Update 2:</p> <pre><code>(minitab) C:\Users\Drew&gt;where conda C:\Users\Drew\miniconda3\condabin\conda.bat C:\Users\Drew\miniconda3\Library\bin\conda.bat C:\Users\Drew\miniconda3\Scripts\conda.exe (minitab) C:\Users\Drew&gt;where python C:\Users\Drew\miniconda3\envs\minitab\python.exe C:\Users\Drew\miniconda3\python.exe </code></pre> <p>Thank you for helping me.</p>
<python><anaconda><miniconda><anaconda3>
2023-02-14 01:21:18
0
1,689
Python_Learner
75,442,308
3,763,616
How to calculate the month start and end dates from a date in polars?
<p>Is there an efficient way to get the month end date from a date column? For example, if date = '2023-02-13', it should return '2023-02-28'; the beginning of the month would be great as well. Thanks!</p> <pre><code>df = pl.DataFrame({'DateColumn': ['2022-02-13']}) test_df = df.with_columns([ pl.col('DateColumn').str.strptime(pl.Date).cast(pl.Date) ] ) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ DateColumn β”‚ β”‚ --- β”‚ β”‚ date β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ 2022-02-13 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>Two new columns would be perfect.</p>
<python><date><python-polars>
2023-02-14 00:05:27
2
489
Drthm1456
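Recent polars versions expose `dt.month_start()` and `dt.month_end()` expressions for exactly this (worth checking against the installed version); the underlying date arithmetic, as a stdlib sketch:

```python
import calendar
from datetime import date

def month_bounds(d):
    # First and last day of the month containing `d`;
    # monthrange handles month lengths and leap years.
    last_day = calendar.monthrange(d.year, d.month)[1]
    return d.replace(day=1), d.replace(day=last_day)
```

In polars the same logic can be applied per-row with `map_elements` if the dedicated expressions are unavailable, at the cost of leaving the vectorized engine.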
75,442,268
70,157
Specify install directories, with PEP 517 installer
<p>How can I specify, to a PEP 517 conformant installer, the directories where libraries and scripts should be installed?</p> <h2>Deprecated Setuptools installer does it right</h2> <p>Using <code>python3 -m setup install --install-lib=/usr/share/foo/ --install-scripts=/usr/share/bar/</code>, I can specify the installation location for Python libraries and executable programs.</p> <pre><code>$ python3 -m setup install --help […] Options for 'install' command: […] --install-lib installation directory for all module distributions (overrides --install- purelib and --install-platlib) […] --install-scripts installation directory for Python scripts […] </code></pre> <p>This is good, when installing a self-contained application; the program files should end up in a location appropriate to the operating system, and the libraries should be installed in an application-private directory because they're not for general import by other Python programs.</p> <p>The Setuptools project has <a href="https://setuptools.pypa.io/en/stable/history.html#setup-install-deprecation-note" rel="nofollow noreferrer">deprecated the <code>setup install</code> feature</a>, so we need to find a replacement for this.</p> <h2>PEP-517 tools to do this?</h2> <p>The current simple <a href="https://pypa-build.readthedocs.io/en/latest/" rel="nofollow noreferrer">'build' tool</a> (β€œA simple, correct Python build frontend”) apparently does not have corresponding options to specify the installation directories that way.</p> <h2>Need a direct replacement for <code>--install-lib</code> and <code>--install-scripts</code></h2> <p>I want to migrate away from deprecated <code>setup install</code>, to a PEP-517 build system.</p> <p>But I need (for some applications) to be able to specify the install location of library modules, and of console scripts. Just like with the old <code>--install-lib</code> and <code>--install-scripts</code> options.</p> <p>How do I instruct 'build' to do this? 
Or, what other PEP-517 conformant installation tool lets me do this?</p>
<python><software-packaging>
2023-02-13 23:58:40
0
32,600
bignose
75,442,206
8,321,207
Finding mean/SD of a group of population and mean/SD of remaining population within a data frame
<p>I have a pandas data frame that looks like this:</p> <pre><code>id age weight group 1 12 45 [10-20] 1 18 110 [10-20] 1 25 25 [20-30] 1 29 85 [20-30] 1 32 49 [30-40] 1 31 70 [30-40] 1 37 39 [30-40] </code></pre> <p>I am looking for a data frame that would look like this (sd = standard deviation):</p> <pre><code> group group_mean_weight group_sd_weight rest_mean_weight rest_sd_weight [10-20] [20-30] [30-40] </code></pre> <p>Here the second and third columns are the mean and SD for that group. The fourth and fifth columns are the mean and SD for the rest of the groups combined.</p>
<python><pandas><dataframe><mean><standard-deviation>
2023-02-13 23:44:48
1
375
Kathan Vyas
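In pandas this is typically a `groupby` aggregation plus a complement computed per group; the arithmetic itself, as a dependency-free sketch (hypothetical helper; sample SD, matching pandas' default `ddof=1`):

```python
import statistics

def group_vs_rest(records):
    # records: iterable of (group, weight).  For each group, compute the
    # mean/SD inside the group and the mean/SD of everything else.
    out = {}
    for g in sorted({grp for grp, _ in records}):
        inside = [w for grp, w in records if grp == g]
        rest = [w for grp, w in records if grp != g]
        out[g] = {
            'group_mean_weight': statistics.mean(inside),
            'group_sd_weight': statistics.stdev(inside) if len(inside) > 1 else float('nan'),
            'rest_mean_weight': statistics.mean(rest),
            'rest_sd_weight': statistics.stdev(rest) if len(rest) > 1 else float('nan'),
        }
    return out
```

With a DataFrame, the "rest" side follows from overall sums: subtracting each group's sum and count from the grand totals avoids recomputing the complement per group.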
75,442,186
505,188
Pandas Mean and Merge Two DataFrames
<p>I have two dataframes that I need to get the means for, plus merge based on their original column names. An example is this:</p> <pre><code> df = pd.DataFrame({ 'sepal_length': [5.1, 4.9, 4.7, 4.6, 5.0, 5.4, 4.6, 5.0], 'sepal_width': [3.5, 3.0, 3.2, 3.1, 3.6, 3.9, 3.4, 3.4], 'petal_length': [1.4, 1.4, 1.3, 1.5, 1.4, 1.7, 1.4, 1.5], 'petal_width': [0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.3, 0.2] }) df2 = pd.DataFrame({ 'sepal_length': [0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.3, 0.2], 'sepal_width': [3.5, 3.0, 3.2, 3.1, 3.6, 3.9, 3.4, 3.4], 'petal_length': [1.4, 1.4, 1.3, 1.5, 1.4, 1.7, 1.4, 1.5], 'petal_width': [1.4, 1.4, 1.3, 1.5, 1.4, 1.7, 1.4, 1.5] }) </code></pre> <p>I get the means like this:</p> <pre><code>df_one=df.mean(axis=0).to_frame('Mean_One') df_two=df2.mean(axis=0).to_frame('Mean_Two') </code></pre> <p>The question is how to merge these two dataframes (df_one and df_two), since there is no column name for the original petal info (e.g., sepal_length, sepal_width, etc.). If there were, I could do this:</p> <pre><code> pd.merge(df_one, df_two, on='?') </code></pre> <p>Thanks for any help on this.</p>
<python><pandas><dataframe>
2023-02-13 23:41:13
1
712
Allen
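Since `to_frame` leaves the original column names in the *index* of both one-column frames, no `on=` key is needed: aligning on the index works. A small self-contained sketch (smaller frames than the question's, same shape of problem):

```python
import pandas as pd

df = pd.DataFrame({'a': [1.0, 3.0], 'b': [2.0, 4.0]})
df2 = pd.DataFrame({'a': [5.0, 7.0], 'b': [6.0, 8.0]})

df_one = df.mean(axis=0).to_frame('Mean_One')   # index: ['a', 'b']
df_two = df2.mean(axis=0).to_frame('Mean_Two')  # same index

# The former column names live in the index, so join on it directly.
merged = df_one.join(df_two)
# Equivalent: pd.merge(df_one, df_two, left_index=True, right_index=True)
```

`pd.concat([df_one, df_two], axis=1)` gives the same result when both frames share exactly the same index.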
75,442,020
10,659,353
Python list.index function with random probability on collision
<p>Let's say I have a list: <code>a = [1,1]</code>.</p> <p>If I call <code>a.index(1)</code> it will always return <code>0</code>.</p> <p>Is there any pythonic way to return <code>0</code> or <code>1</code> in equal probabilities?</p>
<python><arrays><random>
2023-02-13 23:09:12
1
381
Enzo Dtz
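There is no built-in flag on `list.index` for this (it is documented to return the *first* match), but collecting all matching positions and picking one with `random.choice` gives equal probabilities; a sketch:

```python
import random

def random_index(seq, value):
    # Gather every position holding `value`, then choose one uniformly.
    positions = [i for i, item in enumerate(seq) if item == value]
    return random.choice(positions)  # raises IndexError if value is absent
```

To mirror `list.index` exactly on a miss, the `IndexError` could be re-raised as `ValueError` instead.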
75,441,985
18,476,381
Python convert datetime in jinja template to update SQL table
<p>I have a jinja template below in which some of the values are of type datetime. I initially ran my template, but the generated SQL query would render the datetime values as the following:</p> <blockquote> <p>2023-02-13 20:56:13.112000+00:00</p> </blockquote> <p>The datetimes should instead be strings without the extra +00:00, like</p> <blockquote> <p>&quot;2023-02-13 20:56:13.112000&quot;</p> </blockquote> <p>I tried to add this check but got an error saying no test named datetime found.</p> <pre><code>{% elif val is datetime %} {{col}} = '{{val.strftime('%Y-%m-%d %H:%M:%S')}}' </code></pre> <p>Jinja template:</p> <pre><code>UPDATE {{database_name}}.{{table_name}} SET {% for col, val in zip(column_list, value_list) %} {% if val is string %} {{col}} = '{{val}}' {% elif val is datetime %} {{col}} = '{{val.strftime('%Y-%m-%d %H:%M:%S')}}' {% else %} {{col}} = {{val}} {% endif %} {% if not loop.last %},{% endif %} {% endfor %} WHERE 1= 1 {% for key,val in filters.items() %} {% if val is sequence and val is not string and val is not mapping %} AND {{key}} in ({% for i in val %}{% if i is string %}'{{i}}'{% else %}{{i}}{% endif %}{% if not loop.last %},{% endif %}{% endfor %}) {% elif val is string %} AND {{key}} = '{{val}}' {% elif val is number %} AND {{key}} = {{val}} {% elif val is boolean %} AND {{key}} = {{val}} {% else %} AND {{key}} = '{{ val }}' {% endif %} {% endfor %} </code></pre> <p>Any idea what the best way is to convert a datetime value in a jinja template for SQL insertion?</p>
<python><sql><datetime><jinja2>
2023-02-13 23:01:57
1
609
Masterstack8080
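The error arises because Jinja2 ships no built-in `datetime` test (`string`, `number`, `mapping`, etc. exist, but not `datetime`); registering a custom test on the environment makes the `val is datetime` branch work. A sketch (the environment setup here is hypothetical; adapt it to however the template is actually loaded):

```python
from datetime import datetime
from jinja2 import Environment

env = Environment()
# Jinja2 has no built-in "is datetime" test, so register one.
env.tests['datetime'] = lambda value: isinstance(value, datetime)

template = env.from_string(
    "{% if val is datetime %}{{ val.strftime('%Y-%m-%d %H:%M:%S') }}{% endif %}")
rendered = template.render(val=datetime(2023, 2, 13, 20, 56, 13))
```

That said, building SQL by string interpolation invites injection and quoting bugs; passing the datetimes as bound query parameters sidesteps the formatting question entirely.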
75,441,980
2,487,835
How to install imagemagick on android?
<p>I am writing Python code on Android using the Pydroid app. I am able to install pip packages, but ImageMagick, which is a requirement for image generation, is (as I understand it) a binary. I have access to a terminal emulator, but don't know a command that would work on Android. Please suggest what to do.</p>
<python><android><imagemagick><pydroid>
2023-02-13 23:00:51
0
3,020
Lex Podgorny
75,441,918
214,526
DataFrame from list of dicts with relative order of keys maintained in columns
<p>I have a list of dictionaries (row data) like below:</p> <pre class="lang-py prettyprint-override"><code>from typing import List, Dict, Any testDict: List[Dict[str, Any]] = list( ( {&quot;A&quot;: 0.1, &quot;B&quot;: 1, &quot;E&quot;: &quot;ABE&quot;}, {&quot;A&quot;: 0.11, &quot;B&quot;: 20, &quot;C&quot;: 0.2, &quot;E&quot;: &quot;ABCE&quot;}, {&quot;A&quot;: 0.11, &quot;B&quot;: 3, &quot;D&quot;: 33, &quot;E&quot;: &quot;ABDE&quot;}, {&quot;A&quot;: 0.13, &quot;B&quot;: 40, &quot;C&quot;: 0.5, &quot;D&quot;: 23, &quot;E&quot;: &quot;ABCDE&quot;}, ) ) </code></pre> <pre class="lang-py prettyprint-override"><code>testDict [{'A': 0.1, 'B': 1, 'E': 'ABE'}, {'A': 0.11, 'B': 20, 'C': 0.2, 'E': 'ABCE'}, {'A': 0.11, 'B': 3, 'D': 33, 'E': 'ABDE'}, {'A': 0.13, 'B': 40, 'C': 0.5, 'D': 23, 'E': 'ABCDE'}] </code></pre> <p>I want to convert this <code>testDict</code> to a pandas dataframe. So, I did this:</p> <pre class="lang-py prettyprint-override"><code>testDf: pd.DataFrame = pd.json_normalize(data=testDict, max_level=1) testDf A B E C D 0 0.10 1 ABE NaN NaN 1 0.11 20 ABCE 0.2 NaN 2 0.11 3 ABDE NaN 33.0 3 0.13 40 ABCDE 0.5 23.0 </code></pre> <p>However, I want the relative order of the keys to be maintained in the column names, like <code>[A, B, C, D, E]</code> (or <code>[A, B, D, C, E]</code> only if I don't have the last entry).</p> <p>I have 100K such rows, with 256 total keys, in the actual data. Is there any easy way to achieve this? Or do I need to merge these key names, merge-sort style, to build the column name order and use that?</p> <h2>Update 1:</h2> <p>I'm not looking for how to lexicographically sort the columns here. In each dict, the keys are already in a specific order. For any row, the relative order of the keys should remain the same. My sample list could be the following:</p> <pre class="lang-py prettyprint-override"><code>[{'XYZ': 0.1, 'ABC': 1, 'PQR': 'ABE'}, {'XYZ': 0.11, 'ABC': 20, 'KLM': 0.2, 'PQR': 'ABCE'}, {'XYZ': 0.11, 'ABC': 3, 'DEF': 33, 'PQR': 'ABDE'}, {'XYZ': 0.13, 'ABC': 40, 'KLM': 0.5, 'DEF': 23, 'PQR': 'ABCDE'}] </code></pre> <p>In this case, the final column order should be ['XYZ', 'ABC', 'KLM', 'DEF', 'PQR'].</p>
<python><pandas><dataframe>
2023-02-13 22:51:41
2
911
soumeng78
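Since each dict's keys are already in the desired relative order, the merge-sort-like idea from the question can be phrased as a topological sort of the "appears before" relation, which `graphlib` (stdlib, Python 3.9+) handles directly; the result can then be applied as `pd.json_normalize(rows)[order]`. A sketch with the question's sample:

```python
from graphlib import TopologicalSorter

def merged_column_order(dicts):
    # Treat "key a appears immediately before key b" in any dict as a
    # precedence edge and topologically sort the union of all keys
    # (dict keys preserve insertion order in Python 3.7+).  Raises
    # CycleError if two rows disagree on the order.
    sorter = TopologicalSorter()
    for d in dicts:
        keys = list(d)
        for k in keys:
            sorter.add(k)          # ensure lone keys are registered too
        for earlier, later in zip(keys, keys[1:]):
            sorter.add(later, earlier)
    return list(sorter.static_order())

rows = [
    {'XYZ': 0.1, 'ABC': 1, 'PQR': 'ABE'},
    {'XYZ': 0.11, 'ABC': 20, 'KLM': 0.2, 'PQR': 'ABCE'},
    {'XYZ': 0.11, 'ABC': 3, 'DEF': 33, 'PQR': 'ABDE'},
    {'XYZ': 0.13, 'ABC': 40, 'KLM': 0.5, 'DEF': 23, 'PQR': 'ABCDE'},
]
order = merged_column_order(rows)
```

When several keys are simultaneously unconstrained, `static_order` picks an arbitrary but stable order among them; with 256 keys and 100K rows this runs in one linear pass over the data.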
75,441,827
188,547
vectorized addition in numpy array
<p>How do I vectorize addition between columns in a numpy array? For example, what is the fastest way to implement something like:</p> <pre><code>import numpy ary = numpy.array([[1,2,3],[3,4,5],[5,6,7],[7,8,9],[9,10,11]]) for i in range(ary.shape[0]): ary[i,0] += ary[i,1] </code></pre>
<python><numpy>
2023-02-13 22:34:52
1
1,423
Brad
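Column slices of a NumPy array are views, so the loop in the question collapses to a single vectorized statement:

```python
import numpy as np

ary = np.array([[1, 2, 3], [3, 4, 5], [5, 6, 7], [7, 8, 9], [9, 10, 11]])

# ary[:, 0] is a view of the first column; in-place += adds the whole
# second column at once, with no Python-level loop.
ary[:, 0] += ary[:, 1]
```

For larger expressions over many columns, the same idea extends to broadcasting, e.g. summing a block of columns with `ary[:, 1:].sum(axis=1)`.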
75,441,786
4,688,190
Selenium Python Chrome: Extremely slow. Are cookies the problem?
<p>I read that Selenium Chrome can run faster if you use implicit waits, headless mode, ID and CSS selectors, etc. Before implementing those changes, I want to know whether cookies or caching could be slowing me down.</p> <p>Does Selenium store cookies and cache like a normal browser, or does it reload all assets every time it navigates to a new page on a website?</p> <p>If yes, then this would slow down the process of scraping websites with millions of identical profile pages, where the scripts and images are similar for each profile.</p> <p>If yes, is there a way to avoid this problem? I am interested in using cookies and cache during a session and then destroying them after the browser is closed.</p> <p>Edit, more details:</p> <pre><code>sel_options = {'proxy': {'https': pString}} prefs = {'download.default_directory' : dFolder} options.add_experimental_option('prefs', prefs) blocker = os.path.join( os.getcwd(), &quot;extension_iijehicfndmapfeoplkdpinnaicikehn&quot;) options.add_argument(f&quot;--load-extension={blocker}&quot;) wS = &quot;--window-size=&quot;+s1+&quot;,&quot;+s2 options.add_argument(wS) if headless == &quot;yes&quot;: options.add_argument(&quot;--headless&quot;); driver = uc.Chrome(seleniumwire_options=sel_options, options=options, use_subprocess=True, version_main=109) stealth(driver, languages=[&quot;en-US&quot;, &quot;en&quot;], vendor=&quot;Google Inc.&quot;, platform=&quot;Win32&quot;, webgl_vendor=&quot;Intel Inc.&quot;, renderer=&quot;Intel Iris OpenGL Engine&quot;, fix_hairline=True) driver.execute_cdp_cmd('Network.setUserAgentOverride', {&quot;userAgent&quot;: agent}) navigate(&quot;https://linkedin.com&quot;) </code></pre> <p>I don't think my proxy or extension is the culprit, because I have a similar automation app running with no speed issue.</p>
<python><selenium-webdriver><selenium-chromedriver>
2023-02-13 22:27:50
2
678
Ned Hulton
75,441,774
18,505,884
How to customise distance between tickers on the 2nd y-axis in BokehJS
<p>So I'm trying to create a plot with BokehJS. This plot needs 3 axes (left, right, bottom). The plot looks like this</p> <p><a href="https://i.sstatic.net/qGHzS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qGHzS.png" alt="Graph output" /></a></p> <p>As you can see, the right y-axis is pretty ugly; you can't read any data towards the bottom as the tickers are bunched up.</p> <p>This is how I'm creating my plot and lines</p> <pre class="lang-js prettyprint-override"><code>function zip(arr1, arr2, floatCast){ var out = {}; if (floatCast){ arr1.map( (val,idx)=&gt;{ out[parseFloat(val)] = arr2[idx]; } ); } else{ arr1.map( (val,idx)=&gt;{ out[val] = arr2[idx]; } ); } return out; } function createBokehPlot(PNVibe, staticPN, transX, transY){ //empty the previous plot $( &quot;#graph-div&quot; ).empty(); //Data Source for PN under vibration vs. offset frequency const source = new Bokeh.ColumnDataSource({ data: { x: Object.keys(PNVibe), y: Object.values(PNVibe) } }); //Data source for Static PN vs offset frequency const source1 = new Bokeh.ColumnDataSource({ data: { x: Object.keys(staticPN), y: Object.values(staticPN) } }); //Data source for Transmissibility line const source2 = new Bokeh.ColumnDataSource({ data: { x: transX, y: transY} }); //Set plot x/y ranges var max_offset = Math.max(Object.keys(PNVibe)); const xdr = new Bokeh.Range1d({ start: 1, end: 100000 }); const ydr = new Bokeh.Range1d({ start: -180, end: -50 }); const y2_range = new Bokeh.Range1d({start: 0.001, end: 10}); // make a plot with some tools const plot = Bokeh.Plotting.figure({ title: 'Example of random data', tools: &quot;pan,wheel_zoom,box_zoom,reset,save&quot;, toolbar_location: &quot;right&quot;, toolbar_sticky: false, height: 600, width: 700, outerWidth: 800, legend_location: &quot;top_left&quot;, x_range: xdr, y_range: ydr, x_axis_type:&quot;log&quot;, x_axis_label: &quot;Offset Frequency (Hz)&quot;, y_axis_type: &quot;linear&quot;, y_axis_label: &quot;Phase Noise (dBc/Hz)&quot;, extra_y_ranges: {y2_range}, major_label_standoff: 1 }); //Add the second y axis on the right const second_y_axis = new Bokeh.LogAxis({y_range_name:&quot;y2_range&quot;, axis_label:'Vibration Profile (g^2/Hz)', x_range: xdr, bounds:[0.0001, 10]}); second_y_axis.ticker = new Bokeh.FixedTicker({ticks: [0.0001, 0.001, 0.01, 0.1, 1, 10]}) plot.add_layout(second_y_axis, &quot;right&quot;); // add line for vibraiton phase noise plot.line({ field: &quot;x&quot; }, { field: &quot;y&quot; }, { source: source, line_width: 2, line_color: &quot;red&quot;, legend_label: &quot;Phase Noise under Vibrations&quot; }); //add line for static phase noise plot.line({ field: &quot;x&quot; }, { field: &quot;y&quot; }, { source: source1, line_width: 2, line_color: &quot;blue&quot;, legend_label: &quot;Static Phase Noise&quot; }); plot.line({ field: &quot;x&quot; }, { field: &quot;y&quot; }, { source: source2, line_width: 2, line_color: &quot;green&quot;, y_range_name:&quot;y2_range&quot;, legend_label: &quot;Transmissibillity&quot; }); // show the plot, appending it to the end of the current section Bokeh.Plotting.show(plot, &quot;#graph-div&quot;); return; } //Call function var PNVibe = zip([10, 100, 1000], [-95, -100, -105], false); var staticPN = zip([10, 100, 1000], [-90, -105, -110], false); var transX = [10, 100, 1000]; var transY = [0.0005, 0.003, 0.05]; createBokehPlot(PNVibe, staticPN, transX, transY); </code></pre> <p>My question is, how would I be able to make it so that the right y-axis displays better? Preferably I want each tick to be the same distance from the others (i.e. the space between 10^0 and 10^1 is the same as the space between 10^1 and 10^2)</p> <p>Thanks</p> <p>I also posted this on the Bokeh forums: <a href="https://discourse.bokeh.org/t/how-to-make-log-axis-on-2nd-y-axis-have-bigger-distances-between-ticks/10042" rel="nofollow noreferrer">Here</a></p>
<javascript><python><plot><bokeh><bokehjs>
2023-02-13 22:25:44
1
614
mrblue6
75,441,747
5,302,069
Function imported from file can't find a module - what's going on? (python)
<p>As the title says, I'm encountering an issue where a function imported from a file (<code>myfun_fromfile</code>) doesn't seem to have access to other imported modules, e.g. <code>cv2</code>.</p> <p>Solutions that work are 1) defining the function in the same script it's called in (<code>myfun_inline</code>) and 2) importing the module <em>inside</em> the function imported from file (<code>myfun_fromfile_containsimport</code>). I'd prefer to separate my function definitions and workflow, so 1 is not an ideal solution for me. And solution 2 seems... strange?</p> <p>What's going on here? And how can I import a function from file and have it call modules successfully?</p> <h3>Example code</h3> <p>file <strong>functions.py</strong> that contains two functions, <code>myfun_fromfile</code> and <code>myfun_fromfile_containsimport</code>:</p> <pre><code>#!/usr/bin/env python3 # -*- coding: utf-8 -*- def myfun_fromfile(path_to_image): img = cv2.imread(path_to_image, cv2.IMREAD_UNCHANGED) return img def myfun_fromfile_containsimport(path_to_image): # the only difference is &quot;import cv2&quot; import cv2 img = cv2.imread(path_to_image, cv2.IMREAD_UNCHANGED) return img </code></pre> <p>main script:</p> <pre><code># import module cv2 and two functions from functions.py import cv2 from functions import (myfun_fromfile, myfun_fromfile_containsimport) # define an inline function def myfun_inline(path_to_image): img = cv2.imread(path_to_image, cv2.IMREAD_UNCHANGED) return img # path_to_image = '../data/train/5fb9edb4-bb99-11e8-b2b9-ac1f6b6435d0_red.png' # compare from-file and inline directly img1 = myfun_fromfile(path_to_image) # this does not work, see error below img1 = myfun_inline(path_to_image) # this works, solution 1 img2 = myfun_fromfile_containsimport(path_to_image) # this works, solution 2 </code></pre> <p>Error message thrown by <code>myfun_fromfile</code></p> <pre><code>img1 = myfun_fromfile(path_to_image) Traceback (most recent call last): Cell In[385], line 1 
img1 = myfun_fromfile(path_to_image) File ~/path/to/functions.py:10 in myfun_fromfile img = cv2.imread(path_to_image, cv2.IMREAD_UNCHANGED) NameError: name 'cv2' is not defined </code></pre>
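The behaviour in the question follows from Python's scoping rules: a function resolves unqualified names in the globals of the module where it was *defined*, not where it is called, so `functions.py` needs its own `import cv2` at the top of the file (solution 2 just moves that import inside the function; top-of-file is the idiomatic place). A self-contained sketch of the same failure, using a hypothetical `helpers` module written to a temporary directory so the demo runs as one file:

```python
import importlib.util
import tempfile
import textwrap
from pathlib import Path

module_src = textwrap.dedent("""
    import math  # a module must import the names *it* uses

    def works(x):
        return math.floor(x / 2)

    def broken(x):
        # 'os' is never imported in this file, so calling this raises
        # NameError even if the *calling* module has imported os.
        return os.getcwd()
""")

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "helpers.py"
    path.write_text(module_src)
    spec = importlib.util.spec_from_file_location("helpers", path)
    helpers = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(helpers)

    import os  # imported here, in the caller -- invisible inside helpers.py

    print(helpers.works(5))  # functions see their own module's imports
    try:
        helpers.broken(1)
    except NameError as exc:
        print("NameError, as expected:", exc)
```

So the fix for the original code is simply to add `import cv2` at the top of `functions.py`.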
<python><opencv><import>
2023-02-13 22:21:29
1
499
R Greg Stacey
75,441,624
3,803,152
Can I get both an enum value and enum class name from a string in Python?
<p>Let's say I have the following Python enum class:</p> <pre class="lang-py prettyprint-override"><code>from enum import IntEnum, auto class FooEnum(IntEnum): foo = auto() bar = auto() baz = auto() </code></pre> <p>And I also have the strings <code>&quot;FooEnum&quot;</code> and <code>&quot;bar&quot;</code>. Both of the strings come from HTML select values, which are limited to basic types.</p> <p>Can I turn these strings directly into a <code>FooEnum.bar</code>?</p>
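One possible approach (a sketch, not the only option): since strings coming from an HTML form shouldn't be fed to `eval` or looked up blindly in `globals()`, keep an explicit registry of the enum classes you are willing to expose, then use the enum's subscript lookup (`FooEnum["bar"]`) to resolve the member:

```python
from enum import IntEnum, auto

class FooEnum(IntEnum):
    foo = auto()
    bar = auto()
    baz = auto()

# Explicit allow-list: only enums registered here are reachable from form input.
ENUM_REGISTRY = {cls.__name__: cls for cls in (FooEnum,)}

def resolve_member(class_name: str, member_name: str):
    enum_cls = ENUM_REGISTRY[class_name]  # KeyError for unknown class names
    return enum_cls[member_name]          # KeyError for unknown member names

member = resolve_member("FooEnum", "bar")
print(repr(member))
```

`getattr(sys.modules[__name__], class_name)` would also work, but it lets the client name any module-level object, so the explicit registry is the safer pattern for untrusted input.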
<python>
2023-02-13 22:01:16
1
6,985
TheSoundDefense
75,441,557
11,397,243
Regex with m flag in Perl vs. Python
<p>I'm trying to automatically translate some simple Perl code with a regex to Python, and I'm having an issue. Here is the Perl code:</p> <pre class="lang-perl prettyprint-override"><code>$stamp='[stamp]'; $message = &quot;message\n&quot;; $message =~ s/^/$stamp/gm; print &quot;$message&quot;; [stamp]message </code></pre> <p>Here is my Python equivalent:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import re &gt;&gt;&gt; re.sub(re.compile(&quot;^&quot;, re.M), &quot;[stamp]&quot;, &quot;message\n&quot;, count=0) '[stamp]message\n[stamp]' </code></pre> <p>Note the answer is different (it has an extra <code>[stamp]</code> at the end). How do I generate code that has the same behavior for the regex?</p>
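The extra `[stamp]` comes from one documented difference: with `re.M`, Python's `^` also matches at the position immediately after a trailing newline (the empty final "line"), while Perl's `/m` does not. One way to reproduce Perl's behaviour (a sketch; other lookaround variants exist) is to exclude the end-of-string position with `(?!\Z)`:

```python
import re

message = "message\n"

# Plain (?m)^ also stamps the empty position after the trailing newline.
assert re.sub(r"(?m)^", "[stamp]", message) == "[stamp]message\n[stamp]"

# Excluding the end-of-string position matches Perl's /m behaviour.
stamped = re.sub(r"(?m)^(?!\Z)", "[stamp]", message)
print(stamped)  # -> "[stamp]message\n"
```

A lookahead like `(?=.)` would also work for this input, but it skips empty *interior* lines, which Perl's `s/^/$stamp/gm` does stamp; `(?!\Z)` keeps those.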
<python><regex><multiline>
2023-02-13 21:52:53
3
633
snoopyjc
75,441,438
3,383,640
Detect Fluid Pathlines in Images
<p>I have several thousand images of fluid pathlines -- below is a simple example --</p> <p><a href="https://i.sstatic.net/vZe68.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vZe68.jpg" alt="image" /></a></p> <p>and I would like to automatically detect them: Length and position. For the position a defined point would be sufficient (e.g. left end). I don't need the full shape information.</p> <p>This is a pretty common task but I did not find a reliable method.</p> <p>How could I do this?</p> <p>My choice would be Python but it's no necessity as long as I can export the results.</p> <hr /> <p><strong>EDIT</strong></p> <p>This is a rough draft of what I'm searching for:</p> <p><a href="https://i.sstatic.net/A9I96.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A9I96.png" alt="example" /></a></p> <p>I need the length of the lines and e.g. the coordinates of the red dot.</p>
<python><fluid-dynamics>
2023-02-13 21:39:43
1
5,078
Suuuehgi
75,441,386
1,260,682
generating AST from existing python function
<p>I'm trying to use Python's <code>ast.parse</code> to generate the AST of a function I defined, but the <code>parse</code> function takes in a string as its parameter. How can I use it on a function / code object that is defined in the same file? For instance:</p> <pre><code>def foo(bar): print(bar) import ast ast.parse(foo) # this doesn't work </code></pre>
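Since `ast.parse` only accepts source text, the usual route is to recover the source first with `inspect.getsource` (a sketch, with the caveat that `getsource` needs the function to live in a real file -- it fails for code typed into a bare REPL):

```python
import ast
import inspect
import textwrap

def foo(bar):
    print(bar)

# dedent guards against IndentationError when foo is defined inside a class
source = textwrap.dedent(inspect.getsource(foo))
tree = ast.parse(source)
print(type(tree.body[0]).__name__)  # FunctionDef
```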
<python>
2023-02-13 21:33:04
0
6,230
JRR
75,441,359
6,211,470
create Self Referencing Table using peewee
<p>I am failing to find a way to create a self-referencing table using peewee. I am trying to create an entity similar to the one in <a href="https://www.codeproject.com/Tips/5255964/How-to-Create-and-Use-a-Self-referencing-Hierarchi" rel="nofollow noreferrer">this article</a>.</p> <p><a href="https://i.sstatic.net/u542Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u542Q.png" alt="enter image description here" /></a></p> <p>I have tried the solution <a href="https://stackoverflow.com/a/47894549/6211470">here</a>, but it doesn't seem to give me the results that I want.</p> <pre><code>class Customer(Model): name = TextField() class CustomerDepartment(Model): refid = ForeignKeyField(Customer, related_name='customer') id = ForeignKeyField(Customer, related_name='department') </code></pre>
<python><postgresql><orm><peewee>
2023-02-13 21:29:34
1
358
tendaitakas
75,441,343
6,100,445
Python and Starlette: running a long async task
<p>I have a simple experiment in the code snippet shown below. My goal is to have the browser client (via a WebSocket) kick off a long-running task on the server, but the server should service WebSocket messages from the client while the long-running task is running. Here's the workflow (&quot;OK&quot; means this step is working as-is in the snippet, while &quot;?&quot; means this is what I'm trying to figure out)...</p> <ul> <li>OK - Run the code</li> <li>OK - Launch a browser at 127.0.0.1</li> <li>OK - WebSocket connects</li> <li>OK - Click &quot;Send&quot; and the browser client generates a random number, sends it to the server, and the server echoes back the number</li> <li>OK - Click &quot;Begin&quot; and this invokes a long-running task on the server (5.0 seconds)</li> <li>? - During this 5sec (while the long-running task is running), I'd like to click &quot;Send&quot; and have the server immediately echo back the random number that was sent from the client while the long-running task continues to be concurrently executed in the event loop</li> </ul> <p>For that last bullet point, it is not working that way: rather, if you click &quot;Send&quot; while the long process is running, the long process finishes and <em>then</em> the numbers are echoed back. To me, this demonstrates that <code>await simulate_long_process(websocket)</code> is truly waiting for <code>simulate_long_process()</code> to complete -- makes sense. However, part of me was expecting that <code>await simulate_long_process(websocket)</code> would signal the event loop that it could go work on other tasks and therefore go back to the <code>while True</code> loop to service the next incoming messages. I was expecting this because <code>simulate_long_process()</code> is fully async (<code>async def</code>, <code>await websocket.send_text()</code>, and <code>await asyncio.sleep()</code>). The current behavior kinda makes sense but not what I want. 
So my question is, how can I achieve my goal of responding to incoming messages on the WebSocket while the long-running task is running? I am interested in two (or more) approaches:</p> <ol> <li>Spawning the long-running task in a different thread. For example, with <code>asyncio.to_thread()</code> or by stuffing a message into a separate queue that another thread is reading, which then executes the long-running task (e.g. like a producer/consumer queue). Furthermore, I can see how using those same queues, at the end of the long-running tasks, I could then send acknowledgment messages back to the Starlette/async thread and then back to the client over the WebSocket to tell them a task has completed.</li> <li>Somehow achieving this &quot;purely async&quot;? &quot;Purely async&quot; means mostly or entirely using features/methods from the <code>asyncio</code> package. This might delve into synchronous or blocking code, but here I'm thinking about things like: organizing my coroutines into a <code>TaskGroup()</code> object to get concurrent execution, using <code>call_soon()</code>, using <code>run_in_executor()</code>, etc. I'm really interested in hearing about this approach! But I'm skeptical since it may be convoluted. The spirit of this is mentioned here: <a href="https://stackoverflow.com/questions/35355849/long-running-tasks-with-async-server">Long-running tasks with async server</a></li> </ol> <p>I can certainly see the path to completion on approach (1). So I'm debating how &quot;pure async&quot; I try to go -- maybe Starlette (running in its own thread) is the only async portion of my entire app, and the rest of my (CPU-bound, blocking) app is on a different (synchronous) thread. Then, the Starlette async thread and the CPU-bound sync thread simply coordinate via a queue. This is where I'm headed but I'd like to hear some thoughts to see if a &quot;pure async&quot; approach could be reasonably implemented. 
Stated differently, if someone could refactor the code snippet below to work as intended (responding immediately to &quot;Send&quot; while the long-running task is running), using only or mostly methods from <code>asyncio</code> then that would be a good demonstration.</p> <pre><code>from starlette.applications import Starlette from starlette.responses import HTMLResponse from starlette.routing import Route, WebSocketRoute import uvicorn import asyncio index_str = &quot;&quot;&quot;&lt;!DOCTYPE HTML&gt; &lt;html&gt; &lt;head&gt; &lt;script type = &quot;text/javascript&quot;&gt; const websocket = new WebSocket(&quot;ws://127.0.0.1:80&quot;); window.addEventListener(&quot;DOMContentLoaded&quot;, () =&gt; { websocket.onmessage = ({ data }) =&gt; { console.log('Received: ' + data) document.body.innerHTML += data + &quot;&lt;br&gt;&quot;; }; }); &lt;/script&gt; &lt;/head&gt; &lt;body&gt; WebSocket Async Experiment&lt;br&gt; &lt;button onclick=&quot;websocket.send(Math.floor(Math.random()*10))&quot;&gt;Send&lt;/button&gt;&lt;br&gt; &lt;button onclick=&quot;websocket.send('begin')&quot;&gt;Begin&lt;/button&gt;&lt;br&gt; &lt;button onclick=&quot;websocket.send('close')&quot;&gt;Close&lt;/button&gt;&lt;br&gt; &lt;/body&gt; &lt;/html&gt; &quot;&quot;&quot; def homepage(request): return HTMLResponse(index_str) async def simulate_long_process(websocket): await websocket.send_text(f'Running long process...') await asyncio.sleep(5.0) async def websocket_endpoint(websocket): await websocket.accept() await websocket.send_text(f'Server connected') while True: msg = await websocket.receive_text() print(f'server received: {msg}') if msg == 'begin': await simulate_long_process(websocket) elif msg == 'close': await websocket.send_text('Server closed') break else: await websocket.send_text(f'Server received {msg} from client') await websocket.close() print('Server closed') if __name__ == '__main__': routes = [ Route('/', homepage), WebSocketRoute('/', websocket_endpoint) ] app = 
Starlette(debug=True, routes=routes) uvicorn.run(app, host='0.0.0.0', port=80) </code></pre>
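For the "purely async" variant: the snippet blocks because `await simulate_long_process(websocket)` suspends only `websocket_endpoint` itself and then *waits for the result* before looping again -- yielding to the event loop is not the same as moving on. Wrapping the coroutine in `asyncio.create_task(...)` instead schedules it concurrently and returns immediately, so the `while True` loop keeps servicing messages. A minimal sketch of the pattern, with the WebSocket replaced by a plain list of messages so it runs standalone:

```python
import asyncio

async def long_task(log):
    log.append("long start")
    await asyncio.sleep(0.2)          # stands in for the 5-second job
    log.append("long done")

async def endpoint(messages, log):
    background = set()
    for msg in messages:              # stands in for `while True: receive`
        if msg == "begin":
            task = asyncio.create_task(long_task(log))  # schedule, don't await
            background.add(task)                        # keep a strong reference
            task.add_done_callback(background.discard)
        else:
            log.append(f"echo {msg}")
        await asyncio.sleep(0.01)     # yield to the loop between messages
    await asyncio.gather(*background) # drain background work before closing

log = []
asyncio.run(endpoint(["begin", "7", "3"], log))
print(log)
```

The echoes land while the long task is still sleeping. Note this only helps while the long work is genuinely awaitable; a CPU-bound body would still block the loop, and then `asyncio.to_thread()` or the producer/consumer queue from approach (1) is the right tool.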
<python><python-3.x><python-asyncio><starlette>
2023-02-13 21:27:42
1
927
rob_7cc
75,441,334
14,790,056
How to add lines linking stacked bar plot categories
<p>I have a stacked bar chart for two variables.</p> <pre><code>ax = count[['new_category', &quot;Count %&quot;, &quot;Volume%&quot;]].set_index('new_category').T.plot.bar(stacked=True) plt.xticks(rotation = 360) plt.show() </code></pre> <p>I want to draw lines between the two bars that connect the corresponding category segments, as in the sketch below.</p> <p><a href="https://i.sstatic.net/edYDJ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/edYDJ.jpg" alt="enter image description here" /></a></p>
<python><matplotlib>
2023-02-13 21:25:52
1
654
Olive
75,441,263
5,759,359
redis.exceptions.ConnectionError: Error UNKNOWN while writing to socket. Connection lost
<p>For my python <code>(python 3.10)</code> based project, we were using aioredis <code>(aioredis 2.0.1)</code> to connect to the redis cache, and all of a sudden all the requests accessing the redis cache started failing. After analysis, we found that <a href="https://github.com/aio-libs/aioredis-py" rel="nofollow noreferrer">Aioredis is now in redis-py</a>. Following that, we removed aioredis and added redis <code>(redis 4.5.1)</code> as a dependency in the pipfile.</p> <p>I didn't add any extra code, just changed the imports from</p> <pre><code>import aioredis </code></pre> <p>to</p> <pre><code>from redis import asyncio as aioredis </code></pre> <p>But that didn't resolve the issue completely; now half of the requests are failing with the error below (in an hour, 145 requests succeeded while 79 failed).</p> <blockquote> <p>redis.exceptions.ConnectionError: Error UNKNOWN while writing to socket. Connection lost</p> </blockquote> <p>We use aioredis.Redis for the connection:</p> <pre><code>aioredis.Redis( host=redis_hostname, port=redis_port, db=db_name, password=redis_password, ssl=True, connection_pool=aioredis.ConnectionPool.from_url( f&quot;{redis_protocol}://:{redis_password}@{redis_hostname}:{redis_port}/{db_name}&quot;, connection_class=aioredis.Connection, max_connections=redis_pool_size, ) ) </code></pre> <p>Below is the error trace:</p> <blockquote> <p>Traceback (most recent call last): File /usr/local/lib/python3.10/site-packages/redis/asyncio/connection.py, line 788, in send_packed_command await self._writer.drain()</p> <p>File /usr/local/lib/python3.10/asyncio/streams.py, line 371, in drain await self._protocol._drain_helper()</p> <p>Traceback (most recent call last): File /usr/local/lib/python3.10/site-packages/ddtrace/contrib/asgi/middleware.py, line 173, in <strong>call</strong> return await self.app(scope, receive, wrapped_send)</p> <p>File /usr/local/lib/python3.10/asyncio/streams.py, line 167, in _drain_helper raise ConnectionResetError('Connection lost')ConnectionResetError: Connection lost The above exception was the direct cause of the following exception:</p> <p>.....</p> <p>File /usr/local/lib/python3.10/site-packages/redis/asyncio/client.py, line 487, in _send_command_parse_response await conn.send_command(*args)redis.exceptions.ConnectionError: Error UNKNOWN while writing to socket. Connection lost.</p> </blockquote>
<python><redis><redis-py><aioredis>
2023-02-13 21:16:27
0
477
Kashyap
75,441,242
17,696,880
How to split strings using a regex separator without removing part of the separator pattern from the start of the following string?
<pre class="lang-py prettyprint-override"><code>import re sentences_list = [&quot;El coche ((VERB) es) rojo, la bicicleta ((VERB)estΓ‘) allΓ­; el monopatΓ­n ((VERB)ha sido pintado) de color rojo, y el camiΓ³n tambiΓ©n ((VERB)funciona) con cargas pesadas&quot;, &quot;El Γ‘rbol ((VERB es)) grande, las hojas ((VERB)son) doradas y ((VERB)son) secas, los juegos del parque ((VERB)estan) algo oxidados y ((VERB)es) peligroso subirse a ellos&quot;] aux_list = [] for i_input_text in sentences_list: #separator_symbols = r'(?:(?:,|;|\.|\s+)\s*y\s+|,\s*|;\s*)' separator_symbols = r'(?:(?:,|;|\.|)\s*y\s+|,\s*|;\s*)(?:[A-Z]|l[oa]s|la|[eΓ©]l)' pattern = r&quot;\(\(VERB\)\s*\w+(?:\s+\w+)*\)&quot; # Separar la frase usando separator_symbols frases = re.split(separator_symbols, i_input_text) aux_frases_list = [] # Buscar el patrΓ³n en cada frase separada for i_frase in frases: verbos = re.findall(pattern, i_frase) if verbos: #print(f&quot;Frase: {i_frase}&quot;) #print(f&quot;Verbos encontrados: {verbos}&quot;) aux_frases_list.append(i_frase) aux_list = aux_list + aux_frases_list sentences_list = aux_list print(sentences_list) </code></pre> <p>How to make these separations without what is identified by <code>(?:[A-Z]|l[oa]s|la|[eΓ©]l)</code> be removed from the following string after the split?</p> <p>Using this code I am getting this wrong output:</p> <pre><code>['El coche ((VERB) es) rojo', ' bicicleta ((VERB)estΓ‘) allΓ­', ' monopatΓ­n ((VERB)ha sido pintado) de color rojo', ' camiΓ³n tambiΓ©n ((VERB)funciona) con cargas pesadas', ' hojas ((VERB)son) doradas y ((VERB)son) secas', ' juegos del parque ((VERB)estan) algo oxidados y ((VERB)es) peligroso subirse a ellos'] </code></pre> <p>It is curious that the sentence <code>&quot;El Γ‘rbol ((VERB es)) grande&quot;</code> directly dasappeared from the final list, although it should be</p> <p>Instead you should get this list of strings:</p> <pre><code>[&quot;El coche ((VERB) es) rojo&quot;, &quot;la bicicleta ((VERB)estΓ‘) allΓ­&quot;, 
&quot;el monopatΓ­n ((VERB)ha sido pintado) de color rojo&quot;, &quot;el camiΓ³n tambiΓ©n ((VERB)funciona) con cargas pesadas&quot;, &quot;El Γ‘rbol ((VERB es)) grande&quot;, &quot;las hojas ((VERB)son) doradas y ((VERB)son) secas&quot;, &quot;los juegos del parque ((VERB)estan) algo oxidados y ((VERB)es) peligroso subirse a ellos&quot;] </code></pre>
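A sketch of one fix: `re.split` removes everything the separator pattern *consumes*, so the trailing `(?:[A-Z]|l[oa]s|la|[eΓ©]l)` group swallows the first word of the next clause. Turning that group into a zero-width lookahead `(?=...)` keeps it as a condition without consuming it:

```python
import re

sentence = ("El coche ((VERB) es) rojo, la bicicleta ((VERB)estΓ‘) allΓ­; "
            "el monopatΓ­n ((VERB)ha sido pintado) de color rojo, "
            "y el camiΓ³n tambiΓ©n ((VERB)funciona) con cargas pesadas")

# Same separator symbols, but the following word is now only *looked at*,
# not consumed, so it stays attached to the next fragment.
separator = r'(?:(?:,|;|\.|)\s*y\s+|,\s*|;\s*)(?=[A-Z]|l[oa]s|la|[eΓ©]l)'

for part in re.split(separator, sentence):
    print(part)
```

The separately reported disappearance of `"El Γ‘rbol ((VERB es)) grande"` is a different issue: that sentence's marker is `((VERB es))` rather than `((VERB) es)`, so the verb-filter pattern `\(\(VERB\)\s*\w+...` never matches it and the `if verbos:` filter drops it regardless of how the split is done.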
<python><python-3.x><regex><list><regex-group>
2023-02-13 21:13:36
1
875
Matt095
75,441,104
7,331,538
Why does the RSI calculation with the Python ta library change depending on the starting position?
<p>I have a <code>DataFrame</code> and I want to calculate the RSI on the <code>Close</code> column with a window of <code>14</code> like so:</p> <pre><code>from ta.momentum import RSIIndicator import pandas as pd data = pd.read_csv() output = RSIIndicator(data.Close, 14).rsi() print(output.head(20)) </code></pre> <p>This works and I get the following RSI result:</p> <pre><code>0 NaN 1 NaN 2 NaN 3 NaN 4 NaN 5 NaN 6 NaN 7 NaN 8 NaN 9 NaN 10 NaN 11 NaN 12 NaN 13 30.565576 14 30.565576 15 30.565576 16 36.847817 17 53.471152 18 53.471152 19 59.140918 </code></pre> <p>But if I start the RSI at another arbitrary position, for example on <code>data.iloc[1:]</code>, I understand that since I shifted the position by 1, the 13th index is now going to be <code>NaN</code> and the RSI will start at the 14th. But why does this change the values?</p> <pre><code>t = RSIIndicator(data.Close.iloc[1:], 14).rsi() print(t.head(20)) 1 NaN 2 NaN 3 NaN 4 NaN 5 NaN 6 NaN 7 NaN 8 NaN 9 NaN 10 NaN 11 NaN 12 NaN 13 NaN 14 31.481498 15 31.481498 16 37.849374 17 54.534367 18 54.534367 19 60.171078 20 44.372719 </code></pre> <p>Shouldn't the RSI be the same value no matter where you start? The only thing that is needed is the previous 14 values, right? So why does the RSI change if the oldest 15th value is not there?</p> <p>This is important because I would like to calculate the RSI <strong>on the fly</strong>, meaning as data comes in, I would pass the previous 14 data points to the RSI function and get the next value. But it seems like I always need to pass the whole dataset from the beginning.</p>
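This is expected for RSI: the standard (Wilder) formulation smooths average gain and loss *recursively*, so every bar since the start of the series influences the current value -- `window=14` sets the smoothing weight, not a hard 14-bar lookback, and the effect of dropping the oldest bar only decays gradually. (The exact seeding the `ta` library uses may differ in detail, but the principle is the same.) A pure-Python sketch of Wilder smoothing that reproduces the effect:

```python
def wilder_rsi(closes, period=14):
    """Last RSI value of the series, using Wilder's recursive smoothing."""
    gains, losses = [], []
    for prev, curr in zip(closes, closes[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[:period]) / period   # seed: simple average
    avg_loss = sum(losses[:period]) / period
    for gain, loss in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + gain) / period
        avg_loss = (avg_loss * (period - 1) + loss) / period
    if avg_loss == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)

closes = [100, 102, 101, 105, 103, 108, 107, 110, 109, 111,
          108, 112, 115, 113, 116, 114, 118, 117, 120, 119, 121]

full = wilder_rsi(closes)
shifted = wilder_rsi(closes[1:])  # same data minus the oldest bar
print(full, shifted)              # the values differ: history matters
```

For the on-the-fly use case, the practical consequence is that you don't need the whole history on every tick -- you only need to carry the current `avg_gain` and `avg_loss` forward and apply one smoothing step per new bar.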
<python><technical-indicator>
2023-02-13 20:56:21
1
2,377
bcsta
75,441,003
4,045,121
How to correctly pivot a dataframe so the values of the first column are my new columns?
<p>I have a file with some random census data, in essence multiple lines of the following:</p> <pre><code>age=senior workclass=Self-emp-not-inc education=Bachelors edu_num=13 marital=Divorced occupation=Craft-repair relationship=Not-in-family race=White sex=Male gain=high loss=none hours=half-time country=United-States salary&gt;50K </code></pre> <p>I want to transform this into a csv that looks like this:</p> <pre><code>senior Self-emp-not-inc Bachelors ... &gt;50K </code></pre> <p>I created the following script that I was hoping would do what I want:</p> <pre><code> for i in range(df.shape[1]): temp_df = df.loc[i].str.split(&quot; &quot;, expand=True) temp_df = temp_df[0].str.split(&quot;=&quot;, expand=True) temp_df.columns = ['column_names', 'column_values'] temp_df = temp_df.reset_index(drop=True) temp_df = temp_df.pivot(index=temp_df.index, columns='column_names', values='column_values') </code></pre> <p>The last line though is throwing an error, specifically:</p> <pre><code>KeyError: 0 </code></pre> <p>How can I either fix my <code>pivot</code> or if this is not correct, what would be a better way to achieve what I want?</p>
<python><python-3.x><pandas>
2023-02-13 20:45:12
3
3,452
dearn44
75,440,987
7,762,646
How to increase Google Colab cell output width?
<p>I found a very similar <a href="https://stackoverflow.com/questions/21971449/how-do-i-increase-the-cell-width-of-the-jupyter-ipython-notebook-in-my-browser">question</a> but the solution did not work on Google colab. I would like to see better the text in my pandas dataframes columns. Right now the default width seems 50%.</p> <pre><code>from IPython.display import display, HTML display(HTML(&quot;&lt;style&gt;.container { width:100% !important; }&lt;/style&gt;&quot;)) display(HTML(&quot;&lt;style&gt;.output_result { max-width:100% !important; }&lt;/style&gt;&quot;)) </code></pre> <p>As an alternative, I found an extension called <a href="https://colab.research.google.com/notebooks/data_table.ipynb#scrollTo=3jcHW3nRJpaE" rel="nofollow noreferrer">Data Tables</a>. This may be the best solution available today for it but seems to only works for tables.</p> <pre><code>from google.colab import data_table data_table.enable_dataframe_formatter() </code></pre>
<python><html><css><pandas><google-colaboratory>
2023-02-13 20:44:02
1
1,541
G. Macia
75,440,942
15,515,166
Convert ByteString into numpy array of 1s and 0s
<p>I want to turn a bytestring, for example <code>b'\xed\x07b\x87S.\x866^\x84\x1e\x92\xbf\xc5\r\x8c'</code>, into a numpy array of 1s and 0s (i.e. the binary value of this bytestring as an array of binary digits).</p> <p>How would I go about doing this?</p> <p>I tried using <code>np.fromstring</code> and <code>np.frombuffer</code> but neither did what I wanted.</p>
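Assuming the desired output is the MSB-first bits of each byte, `np.frombuffer` was the right start -- it just needs `np.unpackbits` on top to expand each `uint8` into its 8 bits (a sketch; `unpackbits` accepts `bitorder='little'` if LSB-first ordering is wanted instead):

```python
import numpy as np

data = b'\xed\x07b\x87S.\x866^\x84\x1e\x92\xbf\xc5\r\x8c'

byte_view = np.frombuffer(data, dtype=np.uint8)  # 16 bytes, no copy
bits = np.unpackbits(byte_view)                  # MSB-first by default

print(bits[:8])   # [1 1 1 0 1 1 0 1]  -> 0xED, most significant bit first
print(bits.size)  # 128
```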
<python><numpy>
2023-02-13 20:39:12
2
1,153
Sam
75,440,935
4,688,190
Python Requests POST / Upload Image File
<p>How can I os.remove() this image after it has been uploaded? I believe that I need to close it somehow.</p> <pre><code>import requests, os imageFile = &quot;test.jpg&quot; myobj = {'key': 'key', 'submit':'yes'} up = {'fileToUpload':(imageFile, open(imageFile, 'rb'), 'multipart/form-data')} r = requests.post('https://website/uploadfile.php', files=up, data = myobj) sendTo = r.text os.remove(imageFile) </code></pre> <p>The above generates:</p> <pre><code>PermissionError: [WinError 32] The process cannot access the file because it is being used by another process </code></pre>
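The handle opened inline inside the `files` dict is never closed, which is what keeps Windows from deleting the file. A sketch of the fix: open the file in a `with` block so it is guaranteed closed before `os.remove` runs. (The network call is abstracted behind a `post` callable here so the example runs offline; in the real script it would be `requests.post(url, files=files, data=myobj)`.)

```python
import os
import tempfile

def upload_then_remove(path, post):
    """Upload `path`, making sure the handle is closed before deleting."""
    with open(path, "rb") as fh:  # closed automatically when the block exits
        files = {"fileToUpload": (os.path.basename(path), fh, "multipart/form-data")}
        response = post(files=files)
    os.remove(path)               # safe: no open handle remains
    return response

# Offline demonstration with a stand-in for requests.post:
tmp = tempfile.NamedTemporaryFile(suffix=".jpg", delete=False)
tmp.write(b"fake image bytes")
tmp.close()

result = upload_then_remove(tmp.name, post=lambda files: files["fileToUpload"][1].read())
print(result)                    # b'fake image bytes'
print(os.path.exists(tmp.name))  # False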
<python><post><python-requests>
2023-02-13 20:37:58
2
678
Ned Hulton
75,440,932
12,297,666
Convert a digit code into datetime format in a Pandas Dataframe
<p>I have a pandas dataframe that has a column with a 5 digit code that represents a day and a time. It works as follows:</p> <p><strong>1</strong> - The first three digits represent the day;</p> <p><strong>2</strong> - The last two digits represent a half-hour time slot within that day.</p> <p><strong>Example1:</strong> The first row has the code 19501, so the 195 represents the 1st of January of 2009 and the 01 part represents the time from 00:00:00 to 00:29:59;</p> <p><strong>Example2:</strong> In the second row I have the code 19502, which is the 1st of January of 2009 from 00:30:00 to 00:59:59;</p> <p><strong>Example3:</strong> Another example, 19711 would be the 3rd of January of 2009 from 05:00:00 to 05:29:59;</p> <p><strong>Example4:</strong> The last row is the code 73048, which represents the 20th of June of 2010 from 23:30:00 to 23:59:59.</p> <p>Any ideas on how I can convert this 5 digit code into a proper datetime format?</p>
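Working purely from the four examples (so the anchor `195 -> 1 January 2009` and the `01..48` half-hour slots are inferred, not documented), a sketch of a decoder:

```python
from datetime import datetime, timedelta

BASE_DAY_CODE = 195              # per Example1: 195xx -> 1 January 2009
BASE_DATE = datetime(2009, 1, 1)

def decode(code):
    """Return (interval_start, interval_end) for a 5-digit day/slot code."""
    day_code, slot = divmod(int(code), 100)          # e.g. 19501 -> (195, 1)
    start = (BASE_DATE
             + timedelta(days=day_code - BASE_DAY_CODE)
             + timedelta(minutes=30 * (slot - 1)))   # slot 1 -> 00:00:00
    end = start + timedelta(minutes=30, seconds=-1)  # ...through hh:mm:59
    return start, end

print(decode(19501))  # start 2009-01-01 00:00:00, end 2009-01-01 00:29:59
print(decode(73048))  # start 2010-06-20 23:30:00, end 2010-06-20 23:59:59
```

The same arithmetic should vectorise over the dataframe column with `pd.to_timedelta`, e.g. `BASE_DATE + pd.to_timedelta(df.code // 100 - 195, unit='D') + pd.to_timedelta((df.code % 100 - 1) * 30, unit='m')` (untested sketch, with `code` as a hypothetical column name).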
<python><pandas><datetime>
2023-02-13 20:37:55
2
679
Murilo
75,440,925
13,571,357
Python multiprocessing: sharing global read-only large data without reloading from disk for child processes
<p>Say I need to read from disk a large data and do some <strong>read-only</strong> work on it.</p> <p>I need to use multiprocessing, but to share it across processes using <code>multiprocessing.Manager()</code> or <code>Array()</code> is way too slow. Since my operation on this large data is read-only, according to <a href="https://stackoverflow.com/questions/19366259/multiprocessing-in-python-with-read-only-shared-memory">this answer</a>, I can declare this large data in the global scope, and then each child process has its own large data in the memory:</p> <pre class="lang-py prettyprint-override"><code># main.py import argparse import numpy as np import multiprocessing as mp import time parser = argparse.ArgumentParser() parser.add_argument('-p', '--path', type=str) args = parser.parse_args() print('loading data from disk... may take a long time...') global_large_data = np.load(args.path) def worker(row_id): # some stuff read-only to the global_large_data time.sleep(0.01) print(row_id, np.sum(global_large_data[row_id])) def main(): pool = mp.Pool(mp.cpu_count()) pool.map(worker, range(global_large_data.shape[0])) pool.close() pool.join() if __name__ == '__main__': main() </code></pre> <p>And in terminal,</p> <pre class="lang-bash prettyprint-override"><code>$ python3 main.py -p /path/to/large_data.npy </code></pre> <p>This is fast, and almost good to me. However, one shortcoming is that each child process needs to reload the large file from disk, and the loading process can waste a lot of time.</p> <p>Is there any way (e.g., wrapper) so that only the parent process loads the file from disk once, and then directly send the copy to each child process's memory?</p> <p>Note that my memory space is abundant -- many copies of this large data in memory is good. I just don't want to reload it from disk for many times.</p>
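One answer that matches the "load once in the parent" wish -- with the caveat that it relies on the POSIX `fork` start method (the default on Linux, unavailable on Windows and no longer the default on macOS): load the data into a module-level variable *before* creating the pool. Forked children then see it through copy-on-write memory pages, with no reload from disk and no pickling. A runnable sketch with the `np.load(...)` stood in by an in-memory list:

```python
import multiprocessing as mp

large_data = None  # loaded once in the parent, inherited by forked children

def worker(row_id):
    # With the fork start method, children share the parent's memory
    # copy-on-write: no reload from disk and no pickling of large_data.
    return row_id, sum(large_data[row_id])

def main():
    global large_data
    # Stand-in for the expensive np.load(...): do it ONCE, before forking.
    large_data = [[row + col for col in range(1000)] for row in range(8)]
    ctx = mp.get_context("fork")  # POSIX-only; the default on Linux
    with ctx.Pool(2) as pool:
        return dict(pool.map(worker, range(len(large_data))))

if __name__ == "__main__":
    print(main())
```

One nuance: Python's per-object reference counting can touch pages and defeat copy-on-write for large collections of small objects, but for a single big numpy array the data buffer itself is not touched element-by-element, so read-only access stays effectively shared.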
<python><memory><parallel-processing><multiprocessing><python-multiprocessing>
2023-02-13 20:36:42
1
731
graphitump
75,440,860
6,099,211
Comparing the result of an "in" expression with a bool in Python: why does 1 in [1] == True evaluate to False?
<pre class="lang-py prettyprint-override"><code>print(1 in [1] == True) # False Why?????????????????? print((1 in [1]) == True) # True OK print(1 in ([1] == True)) # TypeError: argument of type 'bool' is not iterable OK </code></pre> <p>So just what is happening here? How does Python interpret this statement?</p>
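This is Python's comparison chaining: `in` belongs to the comparison-operator family, so `1 in [1] == True` parses as the chain `(1 in [1]) and ([1] == True)` (with `[1]` evaluated only once), not as `(1 in [1]) == True`. Since `[1] == True` is `False`, the whole chain is `False`. A quick check:

```python
# a OP1 b OP2 c  is evaluated as  (a OP1 b) and (b OP2 c)
chained = 1 in [1] == True
expanded = (1 in [1]) and ([1] == True)
print(chained, expanded)  # False False
```

The third variant fails because the parentheses force `[1] == True` to be evaluated first, and `1 in False` then tries to iterate a bool.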
<python>
2023-02-13 20:28:07
0
1,200
Anton Ovsyannikov
75,440,753
4,856,302
Popular Python type checkers give a false negative with Any annotation
<p>I tested the following snippet with 4 common type checkers for Python and, to my surprise, none of them complained:</p> <pre class="lang-py prettyprint-override"><code>from typing import Any def length(s: str) -&gt; int: return len(s) def any_length(o: Any) -&gt; int: return length(o) if __name__ == &quot;__main__&quot;: print(any_length(1234)) </code></pre> <p>It's easy to predict that running this code will result in an exception:</p> <pre class="lang-none prettyprint-override"><code>TypeError: object of type 'int' has no len() </code></pre> <p><strong>mypy</strong>:</p> <pre class="lang-none prettyprint-override"><code>Success: no issues found in 1 source file </code></pre> <p><strong>pytype</strong>:</p> <pre class="lang-none prettyprint-override"><code>Success: no errors found </code></pre> <p><strong>pyright</strong>:</p> <pre class="lang-none prettyprint-override"><code>0 errors, 0 warnings, 0 informations Completed in 0.435sec </code></pre> <p><strong>pyre</strong>:</p> <pre class="lang-none prettyprint-override"><code>Ζ› No type errors found </code></pre> <p>I would expect at least a warning saying that <code>Any</code> is not a subtype of <code>str</code> and therefore application of <code>length: str -&gt; int</code> to an object of type <code>Any</code> is unsafe. Is there something about these particular types that makes it difficult for type checkers to consider this simple case? The problem of determining whether a concrete type is a subtype of another doesn't seem undecidable, but maybe I'm wrong here?</p>
<python><types><static-analysis>
2023-02-13 20:14:14
1
1,020
shooqie
75,440,456
2,088,886
Flask IIS Webapp Attempting to Get User IP Address
<p>I am attempting to deploy a Flask webapp onto IIS. First, I used standard suggestions (Flask, IIS, wfastcgi). This method allowed me to correctly see the IPs of users using <code>ip = request.environ.get('HTTP_X_REAL_IP', request.remote_addr)</code></p> <p>For various reasons detailed here: <a href="https://stackoverflow.com/questions/74944834/wfastcgi-500-error-in-flask-app-when-trying-to-plot">wfastcgi 500 error in flask app when trying to plot</a> , I was encouraged to stop using <code>wfastcgi</code>. So I followed instructions to configure Flask on IIS using <code>httpplatformhandler</code>. This is my config file:</p> <pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt; &lt;configuration&gt; &lt;system.webServer&gt; &lt;handlers&gt; &lt;add name=&quot;httpPlatformHandler&quot; path=&quot;*&quot; verb=&quot;*&quot; modules=&quot;httpPlatformHandler&quot; resourceType=&quot;Unspecified&quot; requireAccess=&quot;Script&quot; /&gt; &lt;/handlers&gt; &lt;httpPlatform stdoutLogEnabled=&quot;true&quot; stdoutLogFile=&quot;.\logs\python.log&quot; startupTimeLimit=&quot;20&quot; processPath=&quot;C:\ProgramData\Anaconda3\python.exe&quot; arguments=&quot;-m waitress --port %HTTP_PLATFORM_PORT% wsgi:application&quot;&gt; &lt;environmentVariables&gt; &lt;environmentVariable name=&quot;FLASK_APP&quot; value=&quot;C:\dcm_webapp\main.py&quot; /&gt; &lt;/environmentVariables&gt; &lt;/httpPlatform&gt; &lt;tracing&gt; &lt;traceFailedRequests&gt; &lt;add path=&quot;*&quot;&gt; &lt;traceAreas&gt; &lt;add provider=&quot;ASP&quot; verbosity=&quot;Verbose&quot; /&gt; &lt;add provider=&quot;ASPNET&quot; areas=&quot;Infrastructure,Module,Page,AppServices&quot; verbosity=&quot;Verbose&quot; /&gt; &lt;add provider=&quot;ISAPI Extension&quot; verbosity=&quot;Verbose&quot; /&gt; &lt;add provider=&quot;WWW Server&quot; areas=&quot;Authentication,Security,Filter,StaticFile,CGI,Compression,Cache,RequestNotifications,Module,FastCGI,WebSocket&quot; 
verbosity=&quot;Verbose&quot; /&gt; &lt;/traceAreas&gt; &lt;failureDefinitions statusCodes=&quot;400-500&quot; /&gt; &lt;/add&gt; &lt;/traceFailedRequests&gt; &lt;/tracing&gt; &lt;/system.webServer&gt; &lt;/configuration&gt; </code></pre> <p>Fortunately, this configuration solved the problem of random Python functions causing 500s on IIS, but now when I try to get the user's IP, I always just get localhost back.</p> <p>Is there a way to configure this so that using <code>request.environ.get('HTTP_X_REAL_IP', request.remote_addr)</code> gets me the user's IP instead of localhost?</p> <p>Thanks :)</p>
<python><request><waitress><httpplatformhandler>
2023-02-13 19:35:12
1
2,161
David Yang
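A note on the Flask-on-IIS question above: with httpPlatformHandler, IIS acts as a reverse proxy and talks to waitress over loopback, so `request.remote_addr` is the proxy, not the client. IIS (or ARR) has to be configured to forward the client address in a header such as `X-Forwarded-For`, and waitress should be told to trust that header (its `--trusted-proxy` / `--trusted-proxy-headers` options — an assumption about this particular setup). On the Flask side, a small helper can then fall back through the candidate headers; the header names below are the common ones, not something the question confirms:

```python
def client_ip(environ: dict) -> str:
    """Pick the client IP from proxy headers, falling back to REMOTE_ADDR.

    Assumes IIS/ARR was configured to add one of these headers;
    X-Forwarded-For may hold a comma-separated chain, client first.
    """
    for header in ("HTTP_X_FORWARDED_FOR", "HTTP_X_REAL_IP"):
        value = environ.get(header)
        if value:
            # take the left-most entry (the original client)
            return value.split(",")[0].strip()
    return environ.get("REMOTE_ADDR", "")

# inside a Flask view this would be: client_ip(request.environ)
```

The split on `,` matters because each proxy hop may append its own address to `X-Forwarded-For`.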
75,440,379
9,795,817
Unable to pip install old version of scikit-learn
<p>I need to use <code>scikit-learn==0.23.1</code> (or older) to make my project compatible with an AWS Lambda layer.</p> <p>As suggested by <a href="https://stackoverflow.com/questions/62248752/how-do-i-install-previous-version-of-scikit">this post</a>, I tried <code>pip3 install scikit-learn==0.23.1</code>, but it did not work.</p> <p>I then found <a href="https://github.com/scikit-learn/scikit-learn/issues/24604#issuecomment-1282104392" rel="nofollow noreferrer">an issue</a> that suggested <code>pip3 install -U scikit-learn==0.23.1 --no-cache-dir</code>, but it did not work either.</p> <p>I am using Python 3.11.1 on an M1 MacBook Air. These are the error logs from running <code>pip3 install scikit-learn==0.23.1</code>:</p> <pre><code>Collecting scikit-learn==0.23.1 Using cached scikit-learn-0.23.1.tar.gz (7.2 MB) Installing build dependencies ... error error: subprocess-exited-with-error Γ— pip subprocess to install build dependencies did not run successfully. β”‚ exit code: 1 ╰─&gt; [107 lines of output] Ignoring numpy: markers 'python_version == &quot;3.6&quot; and platform_system != &quot;AIX&quot; and platform_python_implementation == &quot;CPython&quot;' don't match your environment Ignoring numpy: markers 'python_version == &quot;3.6&quot; and platform_system != &quot;AIX&quot; and platform_python_implementation != &quot;CPython&quot;' don't match your environment Ignoring numpy: markers 'python_version == &quot;3.7&quot; and platform_system != &quot;AIX&quot;' don't match your environment Ignoring numpy: markers 'python_version == &quot;3.6&quot; and platform_system == &quot;AIX&quot;' don't match your environment Ignoring numpy: markers 'python_version == &quot;3.7&quot; and platform_system == &quot;AIX&quot;' don't match your environment Ignoring numpy: markers 'python_version &gt;= &quot;3.8&quot; and platform_system == &quot;AIX&quot;' don't match your environment Collecting setuptools Using cached setuptools-67.2.0-py3-none-any.whl (1.1 MB) 
Collecting wheel Using cached wheel-0.38.4-py3-none-any.whl (36 kB) Collecting Cython&gt;=0.28.5 Using cached Cython-0.29.33-py2.py3-none-any.whl (987 kB) Collecting numpy==1.17.3 Using cached numpy-1.17.3.zip (6.4 MB) Preparing metadata (setup.py): started Preparing metadata (setup.py): finished with status 'done' Collecting scipy&gt;=0.19.1 Using cached scipy-1.10.0-cp311-cp311-macosx_12_0_arm64.whl (28.7 MB) Using cached scipy-1.9.3-cp311-cp311-macosx_12_0_arm64.whl (28.4 MB) Using cached scipy-1.9.2-cp311-cp311-macosx_12_0_arm64.whl (28.4 MB) Using cached scipy-1.9.1.tar.gz (42.0 MB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'error' error: subprocess-exited-with-error Γ— Getting requirements to build wheel did not run successfully. β”‚ exit code: 1 ╰─&gt; [67 lines of output] The Meson build system Version: 0.62.2 Source dir: /private/var/folders/c2/v1x2pn511b79mxpbdj208p780000gn/T/pip-install-txosa8qz/scipy_734bcce5e1d549fab1cee5a52c8d7f28 Build dir: /private/var/folders/c2/v1x2pn511b79mxpbdj208p780000gn/T/pip-install-txosa8qz/scipy_734bcce5e1d549fab1cee5a52c8d7f28/.mesonpy-7pg6lokl/build Build type: native build Project name: SciPy Project version: 1.9.1 C compiler for the host machine: cc (clang 14.0.0 &quot;Apple clang version 14.0.0 (clang-1400.0.29.202)&quot;) C linker for the host machine: cc ld64 820.1 C++ compiler for the host machine: c++ (clang 14.0.0 &quot;Apple clang version 14.0.0 (clang-1400.0.29.202)&quot;) C++ linker for the host machine: c++ ld64 820.1 Host machine cpu family: aarch64 Host machine cpu: arm64 Compiler for C supports arguments -Wno-unused-but-set-variable: YES Library m found: YES ../../meson.build:41:0: ERROR: Unknown compiler(s): [['gfortran'], ['flang'], ['nvfortran'], ['pgfortran'], ['ifort'], ['g95']] The following exception(s) were encountered: Running &quot;gfortran 
--version&quot; gave &quot;[Errno 2] No such file or directory: 'gfortran'&quot; Running &quot;gfortran -V&quot; gave &quot;[Errno 2] No such file or directory: 'gfortran'&quot; Running &quot;flang --version&quot; gave &quot;[Errno 2] No such file or directory: 'flang'&quot; Running &quot;flang -V&quot; gave &quot;[Errno 2] No such file or directory: 'flang'&quot; Running &quot;nvfortran --version&quot; gave &quot;[Errno 2] No such file or directory: 'nvfortran'&quot; Running &quot;nvfortran -V&quot; gave &quot;[Errno 2] No such file or directory: 'nvfortran'&quot; Running &quot;pgfortran --version&quot; gave &quot;[Errno 2] No such file or directory: 'pgfortran'&quot; Running &quot;pgfortran -V&quot; gave &quot;[Errno 2] No such file or directory: 'pgfortran'&quot; Running &quot;ifort --version&quot; gave &quot;[Errno 2] No such file or directory: 'ifort'&quot; Running &quot;ifort -V&quot; gave &quot;[Errno 2] No such file or directory: 'ifort'&quot; Running &quot;g95 --version&quot; gave &quot;[Errno 2] No such file or directory: 'g95'&quot; Running &quot;g95 -V&quot; gave &quot;[Errno 2] No such file or directory: 'g95'&quot; A full log can be found at /private/var/folders/c2/v1x2pn511b79mxpbdj208p780000gn/T/pip-install-txosa8qz/scipy_734bcce5e1d549fab1cee5a52c8d7f28/.mesonpy-7pg6lokl/build/meson-logs/meson-log.txt + meson setup --native-file=/private/var/folders/c2/v1x2pn511b79mxpbdj208p780000gn/T/pip-install-txosa8qz/scipy_734bcce5e1d549fab1cee5a52c8d7f28/.mesonpy-native-file.ini -Ddebug=false -Doptimization=2 --prefix=/Library/Frameworks/Python.framework/Versions/3.11 /private/var/folders/c2/v1x2pn511b79mxpbdj208p780000gn/T/pip-install-txosa8qz/scipy_734bcce5e1d549fab1cee5a52c8d7f28 /private/var/folders/c2/v1x2pn511b79mxpbdj208p780000gn/T/pip-install-txosa8qz/scipy_734bcce5e1d549fab1cee5a52c8d7f28/.mesonpy-7pg6lokl/build Traceback (most recent call last): File 
&quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 353, in &lt;module&gt; main() File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 118, in get_requires_for_build_wheel return hook(config_settings) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/private/var/folders/c2/v1x2pn511b79mxpbdj208p780000gn/T/pip-build-env-u2xs3lnb/overlay/lib/python3.11/site-packages/mesonpy/__init__.py&quot;, line 969, in get_requires_for_build_wheel with _project(config_settings) as project: File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/contextlib.py&quot;, line 137, in __enter__ return next(self.gen) ^^^^^^^^^^^^^^ File &quot;/private/var/folders/c2/v1x2pn511b79mxpbdj208p780000gn/T/pip-build-env-u2xs3lnb/overlay/lib/python3.11/site-packages/mesonpy/__init__.py&quot;, line 948, in _project with Project.with_temp_working_dir( File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/contextlib.py&quot;, line 137, in __enter__ return next(self.gen) ^^^^^^^^^^^^^^ File &quot;/private/var/folders/c2/v1x2pn511b79mxpbdj208p780000gn/T/pip-build-env-u2xs3lnb/overlay/lib/python3.11/site-packages/mesonpy/__init__.py&quot;, line 777, in with_temp_working_dir yield cls(source_dir, tmpdir, build_dir) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/private/var/folders/c2/v1x2pn511b79mxpbdj208p780000gn/T/pip-build-env-u2xs3lnb/overlay/lib/python3.11/site-packages/mesonpy/__init__.py&quot;, line 682, in __init__ self._configure(reconfigure=bool(build_dir) and not native_file_mismatch) File 
&quot;/private/var/folders/c2/v1x2pn511b79mxpbdj208p780000gn/T/pip-build-env-u2xs3lnb/overlay/lib/python3.11/site-packages/mesonpy/__init__.py&quot;, line 713, in _configure self._meson( File &quot;/private/var/folders/c2/v1x2pn511b79mxpbdj208p780000gn/T/pip-build-env-u2xs3lnb/overlay/lib/python3.11/site-packages/mesonpy/__init__.py&quot;, line 696, in _meson return self._proc('meson', *args) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/private/var/folders/c2/v1x2pn511b79mxpbdj208p780000gn/T/pip-build-env-u2xs3lnb/overlay/lib/python3.11/site-packages/mesonpy/__init__.py&quot;, line 691, in _proc subprocess.check_call(list(args)) File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/subprocess.py&quot;, line 413, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['meson', 'setup', '--native-file=/private/var/folders/c2/v1x2pn511b79mxpbdj208p780000gn/T/pip-install-txosa8qz/scipy_734bcce5e1d549fab1cee5a52c8d7f28/.mesonpy-native-file.ini', '-Ddebug=false', '-Doptimization=2', '--prefix=/Library/Frameworks/Python.framework/Versions/3.11', '/private/var/folders/c2/v1x2pn511b79mxpbdj208p780000gn/T/pip-install-txosa8qz/scipy_734bcce5e1d549fab1cee5a52c8d7f28', '/private/var/folders/c2/v1x2pn511b79mxpbdj208p780000gn/T/pip-install-txosa8qz/scipy_734bcce5e1d549fab1cee5a52c8d7f28/.mesonpy-7pg6lokl/build']' returned non-zero exit status 1. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error Γ— Getting requirements to build wheel did not run successfully. β”‚ exit code: 1 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error Γ— pip subprocess to install build dependencies did not run successfully. β”‚ exit code: 1 ╰─&gt; See above for output. 
note: This error originates from a subprocess, and is likely not a problem with pip. </code></pre> <p>Any ideas on how to successfully install version 0.23.1 of scikit-learn?</p>
<python><scikit-learn><pip>
2023-02-13 19:22:48
1
6,421
Arturo Sbr
75,440,354
54,873
Why does pandas read_excel fail on an openpyxl error saying 'ReadOnlyWorksheet' object has no attribute 'defined_names'?
<p>This bug suddenly came up literally today after read_excel previously was working fine. Fails no matter which version of python3 I use - either 10 or 11.</p> <p>Do folks know the fix?</p> <pre><code> File &quot;/Users/aizenman/My Drive/code/daily_new_clients/code/run_daily_housekeeping.py&quot;, line 38, in &lt;module&gt; main() File &quot;/Users/aizenman/My Drive/code/daily_new_clients/code/run_daily_housekeeping.py&quot;, line 25, in main sb = diana.superbills.load_superbills_births(args.site, ath) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/aizenman/My Drive/code/daily_new_clients/code/diana/superbills.py&quot;, line 148, in load_superbills_births sb = pd.read_excel(SUPERBILLS_EXCEL, sheet_name=&quot;Births&quot;, parse_dates=[&quot;DOS&quot;, &quot;DOB&quot;]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/util/_decorators.py&quot;, line 211, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/util/_decorators.py&quot;, line 331, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_base.py&quot;, line 482, in read_excel io = ExcelFile(io, storage_options=storage_options, engine=engine) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_base.py&quot;, line 1695, in __init__ self._reader = self._engines[engine](self._io, storage_options=storage_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_openpyxl.py&quot;, line 557, in __init__ 
super().__init__(filepath_or_buffer, storage_options=storage_options) File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_base.py&quot;, line 545, in __init__ self.book = self.load_workbook(self.handles.handle) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_openpyxl.py&quot;, line 568, in load_workbook return load_workbook( ^^^^^^^^^^^^^^ File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openpyxl/reader/excel.py&quot;, line 346, in load_workbook reader.read() File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openpyxl/reader/excel.py&quot;, line 303, in read self.parser.assign_names() File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openpyxl/reader/workbook.py&quot;, line 109, in assign_names sheet.defined_names[name] = defn ^^^^^^^^^^^^^^^^^^^ AttributeError: 'ReadOnlyWorksheet' object has no attribute 'defined_names' </code></pre>
<python><pandas><openpyxl>
2023-02-13 19:20:26
4
10,076
YGA
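For the `defined_names` question above: the sudden breakage is consistent with a compatibility break between a freshly released openpyxl 3.1 (which reworked how defined names are stored) and the pandas Excel reader in older pandas releases — pip pulling the new openpyxl is what changed "over night". The usual remedies were pinning `openpyxl<3.1` or upgrading pandas once a compatible release was out. A minimal guard sketch; the exact boundary versions (pandas 1.5.3, openpyxl 3.1) are an assumption based on the release timeline, not something stated in the question:

```python
def versions_clash(pandas_version: str, openpyxl_version: str) -> bool:
    """Return True for the pairing assumed to trigger the
    'ReadOnlyWorksheet' object has no attribute 'defined_names' error:
    pandas older than 1.5.3 together with openpyxl 3.1 or newer."""
    pv = tuple(int(p) for p in pandas_version.split(".")[:3])
    ov = tuple(int(p) for p in openpyxl_version.split(".")[:2])
    return pv < (1, 5, 3) and ov >= (3, 1)
```

In practice the check would read the installed versions via `importlib.metadata.version("pandas")` and `importlib.metadata.version("openpyxl")`.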
75,440,187
19,369,310
Create a new column in Pandas based on count of other columns and a fixed specific value
<p>This is a continuation of another related question of mine: <a href="https://stackoverflow.com/questions/75439107/create-a-new-column-based-on-count-of-other-columns">Create a new column based on count of other columns</a></p> <p>I have a dataframe that looks like</p> <pre><code>col_1 col_2 col_3 6 A 1 2 A 1 5 B 1 3 C 1 5 C 2 3 B 2 6 A 1 6 A 0 2 B 3 2 C 3 5 A 3 5 B 1 </code></pre> <p>and I want to add a new column <code>col_new</code> that counts the number of rows with the same elements in <code>col_1</code> and <code>col_2</code>, excluding the row itself, such that the element in <code>col_3</code> is 1 (regardless of whether that row's own <code>col_3</code> value is <code>1</code> or not). So the desired output would look like</p> <pre><code>col_1 col_2 col_3 col_new 6 A 1 1 2 A 1 0 5 B 1 1 3 C 1 0 5 C 2 0 3 B 2 0 6 A 1 1 6 A 0 1 (even though ```col_3``` value is 0) 2 B 3 0 2 C 3 0 5 A 3 0 5 B 1 1 </code></pre> <p>What I have tried:</p> <p><code>df['col_new'] = df[df['col_3'] == 1].groupby(['col_1', 'col_2'])['col_2'].transform('count').sub(1)</code></p> <p>which shows the correct result for those rows with <code>col_3</code> value <code>1</code> but <code>NaN</code> for rows with <code>col_3</code> value <code>0</code> (like row 8).</p> <p>Thank you so much in advance.</p>
<python><python-3.x><pandas><dataframe>
2023-02-13 19:02:55
1
449
Apook
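A note on the question above: the attempted `transform` is close; the missing piece is broadcasting the per-group count of `col_3 == 1` rows back to every row (including the `col_3 == 0` ones) before subtracting 1. The counting logic itself, written with only the standard library so it is easy to check against the desired output:

```python
from collections import Counter

rows = [(6, "A", 1), (2, "A", 1), (5, "B", 1), (3, "C", 1),
        (5, "C", 2), (3, "B", 2), (6, "A", 1), (6, "A", 0),
        (2, "B", 3), (2, "C", 3), (5, "A", 3), (5, "B", 1)]

# number of rows per (col_1, col_2) pair that have col_3 == 1
ones = Counter((c1, c2) for c1, c2, c3 in rows if c3 == 1)

# each row gets that count minus one (never below zero), regardless of
# its own col_3 value -- this reproduces the desired output above
col_new = [max(ones[(c1, c2)] - 1, 0) for c1, c2, _ in rows]
```

An untested pandas sketch of the same idea: `df['col_new'] = df['col_3'].eq(1).groupby([df['col_1'], df['col_2']]).transform('sum').sub(1).clip(lower=0)`.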
75,440,166
11,922,765
Raspberry Pi No module named Plotly
<p>I have installed <code>plotly</code> on the <code>Raspberry Pi</code>. The objective is connecting to a remote MySql database and plot interactive time-series plots (that would update as the new data arrives into the MySql database). But I am running into <code>no module found</code> even after installing it. Looks like the <code>pip install plotly</code> is installing the package for python 2.7 (below screenshot) but I am using 3.3 (latest). How do I install the package for latest Python version.</p> <p>Screenshot shows, installation of the module and then importing this module into the Python script: <a href="https://i.sstatic.net/ogW5l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ogW5l.png" alt="enter image description here" /></a></p>
<python><raspberry-pi><plotly><raspberry-pi4>
2023-02-13 18:59:33
2
4,702
Mainland
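For the Raspberry Pi question above: on a Pi with both Python 2 and 3 installed, the bare `pip` command often belongs to 2.7 while the script runs under 3.x, so the package lands in the wrong `site-packages`. Running pip through the exact interpreter that will do the import (`python3 -m pip install plotly`, or `sys.executable` from inside a script) sidesteps the mismatch. A sketch of building that command:

```python
import sys

def pip_install_cmd(package: str) -> list:
    # run pip as a module of the interpreter that will import the package,
    # so the install cannot land in another Python's site-packages
    return [sys.executable, "-m", "pip", "install", package]

cmd = pip_install_cmd("plotly")
# pass `cmd` to subprocess.check_call(cmd) to actually install
```

The same trick works from a shell by replacing `sys.executable` with an explicit `python3`.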
75,439,970
5,212,614
In Jupyter Notebook -- No module named 'pandas'
<p>I'm working in a Windows 10 Enterprise environment. I have been using Jupyter Notebooks successfully for several months, and all of a sudden this morning I'm getting the following error message:</p> <pre><code>ModuleNotFoundError: No module named 'pandas' </code></pre> <p>So, I go to my Anaconda cmd prompt and type: <code>pip install pandas</code></p> <p>Then, I get this.</p> <pre><code>(base) C:\Users\RS&gt;pip install pandas Requirement already satisfied: pandas in c:\users\rs\anaconda3\lib\site-packages (1.4.4) Requirement already satisfied: python-dateutil&gt;=2.8.1 in c:\users\rs\anaconda3\lib\site-packages (from pandas) (2.8.2) Requirement already satisfied: pytz&gt;=2020.1 in c:\users\rs\anaconda3\lib\site-packages (from pandas) (2022.1) Requirement already satisfied: numpy&gt;=1.18.5 in c:\users\rs\anaconda3\lib\site-packages (from pandas) (1.21.5) Requirement already satisfied: six&gt;=1.5 in c:\users\rs\anaconda3\lib\site-packages (from python-dateutil&gt;=2.8.1-&gt;pandas) (1.16.0) </code></pre> <p>For some weird reason, I have to start the Jupyter Notebook like this '<code>python -m notebook</code>'. If I just enter '<code>jupyter notebook</code>' in the Anaconda command prompt, I get this.</p> <pre><code>(base) C:\Users\RShuell&gt;jupyter notebook Unable to create process using 'C:\Users\python.exe C:\Users\RS\jupyter-script.py notebook' </code></pre> <p>Again, everything was working perfectly fine for the last several months, and I'm not sure what changed over the weekend.</p>
<python><python-3.x><jupyter>
2023-02-13 18:37:19
0
20,492
ASH
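A note on the question above: "pip says already satisfied, the notebook says no module" almost always means the notebook kernel runs a different interpreter than the one pip installed into — consistent with `jupyter notebook` failing (its launcher points at a stale `C:\Users\python.exe` path) while `python -m notebook` works. The first diagnostic step is to check which interpreter the kernel actually uses, from inside a notebook cell:

```python
import sys

# run this in a notebook cell: if this path differs from the one pip
# reports (c:\users\rs\anaconda3), the kernel and pip disagree
kernel_interpreter = sys.executable

# installing via the kernel's own interpreter sidesteps the mismatch;
# in a notebook cell that would be:
#   !{sys.executable} -m pip install pandas
```

If the paths differ, reinstalling jupyter into the intended environment (or re-registering the kernel) is the usual fix.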
75,439,401
21,192,065
Child Class from MagicMock object has weird spec='str' and can't use or mock methods of the class
<p>When a class is created that derives from a MagicMock() object, it has an unwanted spec='str'. Does anyone know why this happens? Does anyone know any operations that could be done to the MagicMock() object in this case such that it doesn't have the spec='str', or can use methods of the class?</p> <pre><code>from unittest.mock import MagicMock a = MagicMock() class b(): @staticmethod def x(): return 1 class c(a): @staticmethod def x(): return 1 print(a) print(b) print(c) print(a.x()) print(b.x()) print(c.x()) </code></pre> <p>which returns</p> <pre><code>&lt;MagicMock id='140670188364408'&gt; &lt;class '__main__.b'&gt; &lt;MagicMock spec='str' id='140670220499320'&gt; &lt;MagicMock name='mock.x()' id='140670220574848'&gt; 1 Traceback (most recent call last): File &quot;/xyz/test.py&quot;, line 19, in &lt;module&gt; print(c.x()) File &quot;/xyz/lib/python3.7/unittest/mock.py&quot;, line 580, in __getattr__ raise AttributeError(&quot;Mock object has no attribute %r&quot; % name) AttributeError: Mock object has no attribute 'x' </code></pre> <p>Basically I need the AttributeError to not be there. Is there something I can do to 'a' such that c.x() is valid?</p> <p>Edit: the issue seems to be with _mock_add_spec in <a href="https://github.com/python/cpython/blob/3.11/Lib/unittest/mock.py" rel="nofollow noreferrer">mock.py</a>, but I'm still not sure how to fix this.</p>
<python><python-3.x><python-unittest><python-unittest.mock><magicmock>
2023-02-13 17:39:28
1
978
arrmansa
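A likely explanation for the question above: `class c(a):` uses the instance `a` as a base, so Python builds the class through `type(a)` — roughly `MagicMock('c', (a,), namespace)`. MagicMock's first positional parameter is `spec`, so the class *name* string `'c'` becomes the spec, hence `spec='str'` and the missing attributes. A sketch reproducing the effect, plus the simpler workaround of configuring the mock instead of subclassing it:

```python
from unittest.mock import MagicMock

# `class c(a):` is roughly MagicMock('c', (a,), ns): the name 'c' lands in
# MagicMock's first positional parameter, which is `spec`, so the mock is
# spec'd against the str type and rejects unknown attributes like x
spec_mock = MagicMock("c")

# workaround: attach the behaviour to the mock rather than deriving from it
a = MagicMock()
a.x = lambda: 1
```

Deriving a real class from a mock instance is rarely what you want; configuring attributes (or using `spec=SomeClass` deliberately) keeps the mock usable.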
75,439,361
10,620,003
Move the ytick vertically in plot python
<p>I have a histogram plot and I want to move the yticks vertically (0.2 cm lower than their current positions). I searched a lot and could not find anything that does exactly this. Could you please help me with that? I attached an image here that shows the new location of the y ticks. <a href="https://i.sstatic.net/01fXl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/01fXl.png" alt="enter image description here" /></a></p> <pre><code>import numpy as np import seaborn as sns import matplotlib.pyplot as plt VAL = [8, 4, 5, 20] objects = ['h', 'b', 'c', 'a'] y_pos = np.arange(len(objects)) cmap = plt.get_cmap('RdYlGn_r') norm = plt.Normalize(vmin=min(VAL), vmax=max(VAL)) ax = sns.barplot(x=VAL, y=objects, hue=VAL, palette='RdYlGn_r', dodge=False) plt.yticks(y_pos, objects) plt.show() </code></pre>
<python><matplotlib><seaborn>
2023-02-13 17:36:25
2
730
Sadcow
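One way to answer the question above is to shift only the tick *labels* (not the ticks or bars) by adding a fixed offset transform to each label. `ScaledTranslation` with `fig.dpi_scale_trans` works in inches, so 0.2 cm is `0.2 / 2.54`. A sketch on a plain matplotlib bar chart; seaborn draws into the same `Axes`, so the same loop should apply after `sns.barplot`:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; drop this line when showing plots
import matplotlib.pyplot as plt
import matplotlib.transforms as mtransforms

fig, ax = plt.subplots()
ax.barh([0, 1, 2, 3], [8, 4, 5, 20])
ax.set_yticks([0, 1, 2, 3])
ax.set_yticklabels(["h", "b", "c", "a"])

# shift every y tick label 0.2 cm straight down:
# dpi_scale_trans works in inches, so dy = -0.2 / 2.54
offset = mtransforms.ScaledTranslation(0, -0.2 / 2.54, fig.dpi_scale_trans)
for label in ax.get_yticklabels():
    label.set_transform(label.get_transform() + offset)

fig.canvas.draw()
```

Because the offset is expressed in physical units, it stays 0.2 cm regardless of figure size or axis limits.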
75,439,221
4,157,512
Django REST Framework: how to override the ModelViewSet get method
<p>I have a viewset like this:</p> <pre class="lang-py prettyprint-override"><code>class AccountViewSet(viewsets.ModelViewSet): &quot;&quot;&quot; A simple ViewSet for viewing and editing accounts. &quot;&quot;&quot; queryset = Account.objects.all() serializer_class = AccountSerializer permission_classes = [IsAccountAdminOrReadOnly] </code></pre> <p>How do I override the get method so that when I hit <code>/api/accounts/8</code> I can add some code before returning account 8?</p>
<python><django><django-rest-framework><get>
2023-02-13 17:22:52
3
3,835
BoumTAC
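For the DRF question above: in a `ModelViewSet`, a GET on `/api/accounts/8` is dispatched to `retrieve()` (the list endpoint goes to `list()`), so the usual place for pre-processing is an override that delegates to `super().retrieve(request, *args, **kwargs)`. The delegation pattern itself, shown with a stand-in base class so the sketch runs without Django installed (the stand-in's routing assumption mirrors DRF's, but is not DRF itself):

```python
class ModelViewSetStandIn:
    # stand-in for rest_framework.viewsets.ModelViewSet, which routes
    # GET /api/accounts/<pk>/ to retrieve()
    def retrieve(self, request, pk=None):
        return {"id": pk, "name": "account %s" % pk}

class AccountViewSet(ModelViewSetStandIn):
    def retrieve(self, request, pk=None):
        # custom code runs here, before the stock lookup/serialization
        self.last_requested_pk = pk
        return super().retrieve(request, pk=pk)

response = AccountViewSet().retrieve(request=None, pk=8)
```

With real DRF the override signature is `def retrieve(self, request, *args, **kwargs):` and the pk lives in `kwargs`.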
75,439,152
18,020,941
Django Admin select_related issue
<p>I'm trying to <code>select_related</code> for a bunch of assets in the Django admin. These assets are tied to a product, which in turn has a separate Company and Category model. These are both a foreign key from product, to the respective model. I defined the string method like so in the Product class:</p> <pre><code>def __str__(self): if self.company: return f&quot;{self.category.name} | {self.company.name} | {self.name}&quot; return f&quot;{self.category.name} | {self.name}&quot; </code></pre> <p>However, when I check debug toolbar in the Django admin, I see that a bunch of extra queries are made because of this.</p> <pre><code>products\models.py in __str__(53) return f&quot;{self.category.name} | {self.company.name} | {self.name}&quot; </code></pre> <p>I have the following <code>get_queryset</code> method defined in the ModelAdmin:</p> <pre><code>def get_queryset(self, request): qs = super().get_queryset(request).select_related( 'product', 'product__category', 'product__company') if request.user.is_superuser or request.user.is_staff: return qs return qs.filter(user=request.user) </code></pre> <p>Is there a fix for this? Am I doing it wrong?</p> <h2>Edit*</h2> <p>I have narrowed down the issue to Django admin's list filter. The <code>list_filter</code> allows filtering by product, which in turn displays it's <code>__str__</code> method.</p> <p>I have decided to just remove the product from the list filter. Would still be nice to have a fix provided for any future visitors.</p>
<python><django><django-models><django-views><django-queryset>
2023-02-13 17:16:47
1
1,925
nigel239
75,439,107
19,369,310
Create a new column based on count of other columns
<p>I have a dataframe in pandas that looks like</p> <pre><code>col_1 col_2 6 A 2 A 5 B 3 C 5 C 3 B 6 A 6 A 2 B 2 C 5 A 5 B </code></pre> <p>and I want to add a new column <code>col_new</code> that counts the number of rows with the same elements in <code>col_1</code> and <code>col_2</code>, excluding that row itself. So the desired output would look like</p> <pre><code>col_1 col_2 col_new 6 A 2 2 A 0 5 B 1 3 C 0 5 C 0 3 B 0 6 A 2 6 A 2 2 B 0 2 C 0 5 A 0 5 B 1 </code></pre> <p>Here's what I tried, but I am not sure if it's the right approach:</p> <p><code>df['col_new'] = df.groupby(['col_1', 'col_2']).count()</code></p> <p>But then I got the error: <code>TypeError: incompatible index of inserted column with frame index</code></p> <p>Thanks in advance.</p>
<python><python-3.x><pandas><dataframe>
2023-02-13 17:13:33
2
449
Apook
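A note on the question above: the `TypeError` comes from assigning an aggregated, one-row-per-group frame back to the original index. `transform('count')` keeps the original shape, so `df.groupby(['col_1', 'col_2'])['col_2'].transform('count') - 1` should slot straight into the new column. The same logic with only the standard library, checked against the desired output:

```python
from collections import Counter

rows = [(6, "A"), (2, "A"), (5, "B"), (3, "C"), (5, "C"), (3, "B"),
        (6, "A"), (6, "A"), (2, "B"), (2, "C"), (5, "A"), (5, "B")]

counts = Counter(rows)                       # occurrences of each (col_1, col_2)
col_new = [counts[row] - 1 for row in rows]  # minus one excludes the row itself
```

`transform` broadcasts the per-group count to every member row, which is exactly what the subtraction needs.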
75,439,020
1,483,263
pip install --pre without installing pre-releases for dependencies
<p>Say I have a package <code>A</code> on pypi with a pre-release version <code>1.0.0rc1</code>.</p> <p>Package <code>A</code> has package <code>B</code> as a dependency with version <code>B &gt;= 1.0.0</code>, but <code>B</code> also has a pre-release version <code>1.0.0rc1</code> that is incompatible with <code>A</code>.</p> <p>I want to be able to pip install the pre-release version of <code>A</code> while installing the regular (not-pre) release version of <code>B</code>.</p> <p><code>pip install --pre A</code> seems to install <code>A 1.0.0rc1</code> and <code>B 1.0.0rc1</code>.</p> <p>Are there any recommendations for dealing with this scenario? A few that I've considered</p> <ol> <li>pin the version of <code>B==1.0.0</code>, but this is a bit limiting and requires me to periodically update as new releases of <code>B</code> come out.</li> <li>tell users to <code>pip install A==1.0.0rc1</code>, which is slightly less convenient.</li> </ol> <p>What is the standard approach here? Thanks</p>
<python><pip><release>
2023-02-13 17:03:29
1
534
twhughes
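A note on the `--pre` question above: under PEP 440, pip only selects a pre-release when `--pre` is passed — which opts in *every* requirement in the resolve, including dependencies like `B` — or when a requirement's own version specifier explicitly mentions a pre-release. That makes option 2 the standard answer: `pip install "A>=1.0.0rc1"` (or `A==1.0.0rc1`) opts in pre-releases for `A` alone, leaving `B >= 1.0.0` to resolve to its stable release. A rough sketch of the pre-release classification (real parsing lives in the `packaging` library):

```python
import re

def is_prerelease(version: str) -> bool:
    # rough PEP 440 check: an a/b/rc segment at the end marks a pre-release;
    # this ignores .devN and post-releases for brevity
    return re.search(r"(?:a|b|rc)\d+$", version) is not None
```

Pinning `B==1.0.0` (option 1) also works but, as noted, needs manual maintenance; the explicit-specifier route pushes the opt-in to the one package that needs it.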
75,438,927
1,945,875
Tensorflow GO - how to port python querie
<p>I'm pretty new to the whole world of TF and co.</p> <p>I managed to create/train/predict a model in a Jupyter notebook using Python.</p> <p>For production, I'd like to use Golang, but I am unable to find a &quot;simple&quot; sample of how to do the prediction in Go.</p> <p>I'd like to have this piece of Python, for Go:</p> <pre><code>sample = { 'b': 200, 'c': 10, 'd': 1, } input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()} predictions = reloaded_model.predict(input_dict) prob = tf.nn.sigmoid(predictions[0]) </code></pre> <p>Does anyone have a good tutorial for <code>model.Session.Run</code> using github.com/galeone/tensorflow/tensorflow/go?</p> <p>Regards, Helmut</p>
<python><tensorflow><go>
2023-02-13 16:54:06
1
1,634
Helmut Januschka
75,438,855
6,930,340
How to tell mypy that I am explicitly testing for an incorrect type?
<p>Consider the following toy example:</p> <pre><code>import pytest def add(a: float) -&gt; float: if not isinstance(a, float): raise ValueError(&quot;a must be of type float&quot;) return 10 + a def test_add_wrong_type() -&gt; None: with pytest.raises(ValueError) as err: add(&quot;foo&quot;) # mypy is complaining here assert str(err.value) == &quot;a must be of type float&quot; </code></pre> <p><code>mypy</code> is complaining as follows:<br /> Argument 1 to &quot;add&quot; has incompatible type &quot;str&quot;; expected &quot;float&quot; [arg-type]</p> <p>Well, <code>mypy</code> is correct. However, in this case I passed in an incorrect type on purpose. How can I tell <code>mypy</code> to ignore this line?</p> <p>Put differently, what is a Pythonic way to test for an incorrect input type?</p>
<python><pytest><mypy>
2023-02-13 16:47:57
2
5,167
Andi
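For the mypy question above: the standard tool is a scoped `# type: ignore[arg-type]` comment on that one line — narrower than a bare `# type: ignore`, so any *other* error on the same line still surfaces. (`typing.cast` is an alternative, but an ignore comment states the intent more plainly for a deliberate misuse.) A stdlib-only version of the test so the sketch runs without pytest installed:

```python
def add(a: float) -> float:
    if not isinstance(a, float):
        raise ValueError("a must be of type float")
    return 10 + a

def test_add_wrong_type() -> None:
    try:
        add("foo")  # type: ignore[arg-type]  # wrong type on purpose
    except ValueError as err:
        assert str(err) == "a must be of type float"
    else:
        raise AssertionError("expected a ValueError")

test_add_wrong_type()
```

In the original pytest version, the same comment goes on the `add("foo")` line inside `pytest.raises`.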
75,438,839
11,501,370
Combine rows in pyspark dataframe to fill in empty columns
<p>I have the following pyspark dataframe</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Car</th> <th>Time</th> <th>Val1</th> <th>Val2</th> <th>Val 3</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>1</td> <td>None</td> <td>1.5</td> <td>None</td> </tr> <tr> <td>1</td> <td>1</td> <td>3.5</td> <td>None</td> <td>None</td> </tr> <tr> <td>1</td> <td>1</td> <td>None</td> <td>None</td> <td>3.4</td> </tr> <tr> <td>1</td> <td>2</td> <td>2.5</td> <td>None</td> <td>None</td> </tr> <tr> <td>1</td> <td>2</td> <td>None</td> <td>6.0</td> <td>None</td> </tr> <tr> <td>1</td> <td>2</td> <td>None</td> <td>None</td> <td>7.3</td> </tr> </tbody> </table> </div> <p>I want to fill in the gaps and combine these rows using the car/time column as a key of sorts. Specifically, if the car/time column for two (or more) rows is identical, then combine all the rows into one. It is guaranteed that only one of Val1/Val2/Val will be filled out for duplicate rows. You will never have a case where two rows have the same values in the car/time column, but different/not None values in another column. The resulting dataframe therefore should look like this.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Car</th> <th>Time</th> <th>Val1</th> <th>Val2</th> <th>Val3</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>1</td> <td>3.5</td> <td>1.5</td> <td>3.4</td> </tr> <tr> <td>1</td> <td>2</td> <td>2.5</td> <td>6.0</td> <td>7.3</td> </tr> </tbody> </table> </div> <p>Thanks in advance for your help</p>
<python><pyspark>
2023-02-13 16:46:45
1
369
DataScience99
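For the PySpark question above: this is a group-by with a null-skipping first, along the lines of `df.groupBy("Car", "Time").agg(F.first("Val1", ignorenulls=True).alias("Val1"), ...)` with `from pyspark.sql import functions as F`. The merge rule itself — per (Car, Time) key, the single non-null value per column wins — sketched in plain Python so it runs without a Spark session:

```python
rows = [
    {"Car": 1, "Time": 1, "Val1": None, "Val2": 1.5,  "Val3": None},
    {"Car": 1, "Time": 1, "Val1": 3.5,  "Val2": None, "Val3": None},
    {"Car": 1, "Time": 1, "Val1": None, "Val2": None, "Val3": 3.4},
    {"Car": 1, "Time": 2, "Val1": 2.5,  "Val2": None, "Val3": None},
    {"Car": 1, "Time": 2, "Val1": None, "Val2": 6.0,  "Val3": None},
    {"Car": 1, "Time": 2, "Val1": None, "Val2": None, "Val3": 7.3},
]

merged = {}
for row in rows:
    key = (row["Car"], row["Time"])
    target = merged.setdefault(key, dict.fromkeys(("Val1", "Val2", "Val3")))
    for col in target:
        if row[col] is not None:   # per the guarantee, at most one row fills it
            target[col] = row[col]
```

The stated guarantee (no two rows conflict on the same key and column) is what makes "first non-null" safe.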
75,438,549
5,896,319
How to merge multiple csv files?
<p>I have several CSV files that share the same first-column keys. For example:</p> <pre><code>csv-1.csv: Value,0 Currency,0 datetime,0 Receiver,0 Beneficiary,0 Flag,0 idx,0 csv-2.csv: Value,0 Currency,1 datetime,0 Receiver,0 Beneficiary,0 Flag,0 idx,0 </code></pre> <p>With these files (more than 2 files, by the way) I want to merge them and create something like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">left</th> <th style="text-align: center;">csv-1</th> <th style="text-align: right;">csv-2</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">Value</td> <td style="text-align: center;">0</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">Currency</td> <td style="text-align: center;">0</td> <td style="text-align: right;">1</td> </tr> <tr> <td style="text-align: left;">datetime</td> <td style="text-align: center;">0</td> <td style="text-align: right;">0</td> </tr> </tbody> </table> </div> <p>How can I create this function in Python?</p>
<python><pandas><csv>
2023-02-13 16:20:20
3
680
edche
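For the CSV-merge question above: with pandas this is roughly one `read_csv(f, header=None, index_col=0)` per file followed by `pd.concat(..., axis=1)` (untested sketch), but the merge is simple enough to do with the `csv` module alone. `io.StringIO` objects stand in for real `open("csv-1.csv")` handles here:

```python
import csv
import io

# stand-ins for open("csv-1.csv"), open("csv-2.csv"), ...
files = {
    "csv-1": io.StringIO("Value,0\nCurrency,0\ndatetime,0\n"),
    "csv-2": io.StringIO("Value,0\nCurrency,1\ndatetime,0\n"),
}

merged = {}                          # key -> {file name: value}
for name, handle in files.items():
    for key, value in csv.reader(handle):
        merged.setdefault(key, {})[name] = value

header = ["left"] + list(files)
table = [[key] + [cols.get(name, "") for name in files] for key, cols in merged.items()]
```

`cols.get(name, "")` leaves a blank cell when a key is missing from one of the files, so the files do not have to agree exactly.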
75,438,527
3,116,231
Webhook verification fails with Shopify triggering an Azure function
<p>I'm trying to create a webhook consumer:</p> <ul> <li>Shopify is posting the data</li> <li>an Azure function written in Python is triggered by HTTP</li> <li>the sample code below just logs the result of the webhook verification</li> </ul> <p>There's an <a href="https://learn.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-python?pivots=python-mode-decorators" rel="nofollow noreferrer">example</a> by Shopify, but as I'm not using Flask (like in <a href="https://stackoverflow.com/q/69326620/3116231">this</a> SO question), I modified the code a bit:</p> <pre><code>import azure.functions as func import logging import hmac import hashlib import base64 import json app = func.FunctionApp() @app.function_name(name=&quot;webhook-trigger&quot;) @app.route(route=&quot;webhook&quot;, auth_level=func.AuthLevel.ANONYMOUS) def webhook_process(req: func.HttpRequest) -&gt; func.HttpResponse: logging.info('Python HTTP trigger function is processing a request.') verified = verify(req) logging.info(&quot;Webhook verified: {}&quot;.format(verified)) return func.HttpResponse( &quot;This HTTP triggered function executed successfully.&quot;, status_code=200 ) def verify(req: func.HttpRequest) -&gt; bool: CLIENT_SECRET = 'xxxxx' header = req.headers data = req.get_json() encoded_data = json.dumps(data).encode('utf-8') digest = hmac.new(CLIENT_SECRET.encode('utf-8'), encoded_data, digestmod=hashlib.sha256).digest() computed_hmac = base64.b64encode(digest) logging.info(&quot;computed_hmac: {}&quot;.format(computed_hmac)) logging.info(&quot;header['X-Shopify-Hmac-Sha256'] encoded: {}&quot;.format(header['X-Shopify-Hmac-Sha256'].encode('utf-8'))) verified = hmac.compare_digest(computed_hmac, header['X-Shopify-Hmac-Sha256'].encode('utf-8')) return verified </code></pre> <p>The logging output: <a href="https://i.sstatic.net/czxtZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/czxtZ.png" alt="enter image description here" /></a></p> <p>I don't understand why the verification fails (possibly due to some encoding mistake on my part). Any ideas?</p>
<python><azure-functions><webhooks><hmac><shopify-api>
2023-02-13 16:18:35
1
1,704
Zin Yosrim
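A likely cause of the failed verification in the question above: `json.dumps(req.get_json())` re-serializes the payload, so the bytes being signed differ from what Shopify actually sent (whitespace and key order change the digest). A minimal sketch of verifying against the raw request body instead; the secret value here is a placeholder:

```python
import base64
import hashlib
import hmac

def verify_shopify_hmac(raw_body: bytes, header_hmac: str, secret: str) -> bool:
    """Compare Shopify's X-Shopify-Hmac-Sha256 header against a digest
    computed over the *raw* request bytes, not re-serialized JSON."""
    digest = hmac.new(secret.encode("utf-8"), raw_body, hashlib.sha256).digest()
    computed = base64.b64encode(digest).decode("utf-8")
    return hmac.compare_digest(computed, header_hmac)
```

In the Azure function this would be called as something like `verify_shopify_hmac(req.get_body(), req.headers["X-Shopify-Hmac-Sha256"], CLIENT_SECRET)`, since `req.get_body()` returns the untouched bytes.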
75,438,514
978,511
Split number into chunks according to rank
<p>This is a very practical task. I have a number of items that need to be distributed into several stores, according to store rank. So the higher the rank, the more items a store will get, so the store with rank 1 will get the most items and the store with rank 100 the least.</p> <p>So the inputs are:</p> <ul> <li>number of items to distribute</li> <li>stores, represented by their rank; there can be duplicates as well, and the ranks do not follow any pattern, for instance [1, 4, 40]</li> </ul> <p>Right now, I have the following:</p> <ul> <li>I invert the ranks, so that the smallest number becomes the biggest (sum(n...) - n)</li> <li>I take the percentage of each store: inverted_rank / sum(inverted_ranks)</li> <li>Multiply the amount I need to distribute by the percentage</li> </ul> <p>It works fine on some of the numbers, like:</p> <pre><code>[1, 4, 40] 1 = 147.2 4 = 137.1 40 = 16.7 </code></pre> <p>Clearly, the store with rank 40 gets the least, but with more stores it flattens:</p> <pre><code>[6,3,24,10, 25, 12, 14,35,40, 16,28,29,17,1,26,23,8,10] 1 = 17.7 3 = 17.5 6 = 17.4 8 = 17.3 10 = 17.2 10 = 17.2 12 = 17.1 14 = 16.9 16 = 16.8 17 = 16.8 23 = 16.5 24 = 16.4 25 = 16.4 26 = 16.3 28 = 16.2 29 = 16.1 35 = 15.8 40 = 15.5 </code></pre> <p>So the store with rank 40 gets only a fraction less than the top.
How can I make it a bit more &quot;curved&quot;?</p> <p>My Python code:</p> <pre><code>number = 301 ranks = [6,3,24,10, 25, 12, 14,35,40, 16,28,29,17,1,26,23,8,10] #ranks = [1,4,40] ranks.sort() total = sum(ranks) tmp = 0 reversed = [] for value in ranks: reversed.append(total - value) total = sum(reversed) print('Items per rank:') for i in range(len(ranks)): percent = reversed[i] / total print(str(ranks[i]) + ' = ' + &quot;{:10.1f}&quot;.format(percent * number)) tmp = tmp + percent * number print('') print('Total = ' + str(tmp)) </code></pre> <p>You may play with it here: <a href="https://trinket.io/python/a15c54b978" rel="nofollow noreferrer">https://trinket.io/python/a15c54b978</a> It would be perfect to have a math-based rather than a library solution, as I will need to translate it into Excel VBA.</p> <p><strong>Solution</strong></p> <p>I have also added a factor to control the distribution:</p> <pre><code>number = 301 ranks = [6,3,24,10, 25, 12, 14,35,40, 16,28,29,17,1,26,23,8,10] #ranks = [1,4,40] factor = 1 ranks.sort() total = sum(ranks) tmp = 0 max = max(ranks) + 1 reversed = [] for value in ranks: reversed_value = max - value reversed_value = pow(reversed_value, factor) reversed.append(reversed_value) total = sum(reversed) print('Items per rank:') for i in range(len(ranks)): percent = reversed[i] / total value = percent * number print(str(ranks[i]) + ' = ' + &quot;{:10.1f}&quot;.format(value)) tmp = tmp + value print('') print('Total = ' + str(tmp)) </code></pre>
<python><math>
2023-02-13 16:17:47
1
13,523
Andrey Marchuk
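The "curving" asked about above can be isolated into a small function: invert the rank against `max(ranks) + 1` (so the spread does not flatten as more stores are added) and raise the inverted rank to a power greater than 1. A sketch:

```python
def distribute(number, ranks, factor=2.0):
    """Split `number` across stores so that lower ranks get more items.
    The inverted rank is raised to `factor`; larger factors give a steeper curve."""
    top = max(ranks) + 1
    weights = [(top - r) ** factor for r in ranks]
    total = sum(weights)
    return [number * w / total for w in weights]

shares = distribute(301, [1, 4, 40], factor=2.0)
```

The formula uses only `max`, subtraction, and powers, so it translates directly to Excel VBA.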
75,438,456
15,481,917
Keras: color_mode = 'grayscale' does not convert to grayscale
<p>I am trying to read a dataset from directory <code>data</code> and I want the photos to be grayscaled.</p> <pre><code>data = tf.keras.utils.image_dataset_from_directory('data', shuffle=True, color_mode='grayscale') </code></pre> <p>When I print the results, the images are not grayscaled: <a href="https://i.sstatic.net/1X6pX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1X6pX.png" alt="enter image description here" /></a></p> <p>How can I grayscale an image with keras?</p>
<python><tensorflow><keras>
2023-02-13 16:12:45
1
584
Orl13
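On the grayscale question above: `color_mode='grayscale'` does produce single-channel arrays; if those arrays are then displayed with matplotlib's default colormap they still look colored, so passing `cmap='gray'` to `plt.imshow` is usually the missing piece. Independent of Keras, a luminance conversion can be sketched with NumPy alone (ITU-R BT.601 weights):

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Collapse an (..., 3) RGB array to one channel using BT.601 luma weights."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

img = np.zeros((2, 2, 3))
img[0, 0] = [1.0, 1.0, 1.0]   # one white pixel
gray = to_grayscale(img)       # shape (2, 2), single channel
```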
75,438,450
12,968,928
Skip to the next iteration if a warning is raised
<p>How can I skip the iteration if a warning is raised?</p> <p>Suppose I have the code below</p> <pre><code>import warnings # The function that might raise a warning def my_func(x): if x % 2 != 0: warnings.warn(&quot;This is a warning&quot;) return &quot;Problem&quot; else: return &quot;No Problem&quot; for i in range(10): try: # code that may raise a warning k = my_func(i) except Warning: # skip to the next iteration if a warning is raised continue # rest of the code print(i, &quot; : &quot;,k) # Only print this if warning was not raised in try:except </code></pre> <p>I would expect this to print only even numbers, as <code>my_func(i)</code> will raise a warning for odd numbers.</p> <p><strong>update:</strong></p> <p><code> my_func(i)</code> was used just for illustration purposes; the actual function I want to use might not have an obvious returned value that raises a warning.</p>
<python>
2023-02-13 16:12:16
2
1,511
Macosso
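The loop above never enters the `except Warning` branch because `warnings.warn` does not raise an exception by default; it only emits a message. Promoting warnings to errors inside the loop makes the pattern work, without needing any knowledge of the wrapped function. A sketch:

```python
import warnings

def my_func(x):
    if x % 2 != 0:
        warnings.warn("This is a warning")
        return "Problem"
    return "No Problem"

results = []
for i in range(10):
    try:
        with warnings.catch_warnings():
            warnings.simplefilter("error")   # promote any warning to an exception
            k = my_func(i)
    except Warning:
        continue                             # skip iterations that warned
    results.append(i)
```

After the loop, `results` holds only the even numbers; `catch_warnings` restores the original filter on exit, so the rest of the program is unaffected.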
75,438,358
8,553,795
Which metrics are printed (train or validation) when validation_split and validation_data is not specified in the keras model.fit function?
<p>I have a TF neural network and I am using the <code>tf.data</code> API to create the dataset using a generator. I am not passing <code>validation_split</code> and <code>validation_data</code> into the <code>model.fit()</code> function of keras.</p> <p>The default values for the above parameters are <code>0.0</code> and <code>None</code> respectively. So, I am not sure about the metrics (precision, recall, etc.) that get printed after <code>model.fit()</code>: are those training metrics or validation metrics? According to my understanding, those shouldn't be validation metrics as I am using the default values for the mentioned arguments.</p> <p>Here's what I am referring to -</p> <p><code>Epoch 1/50 10/10 [==============================] - 6119s 608s/step - loss: 0.6588 - accuracy: 5.4746e-06 - precision: 0.0095 - recall: 0.3080</code></p> <p><a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit" rel="nofollow noreferrer">Tensorflow doc</a> for <code>model.fit()</code></p>
<python><tensorflow><keras><tensorflow2.0><tensorflow-datasets>
2023-02-13 16:03:55
1
393
learnToCode
75,438,284
11,002,498
Why is the parallel version of my code slower than the serial one?
<p>I am trying to run a model multiple times. As a result it is time consuming. As a solution I try to make it parallel. However, it ends up to be slower. <strong>Parallel is 40 seconds</strong> while <strong>serial is 34 seconds</strong>.</p> <pre><code># !pip install --target=$nb_path transformers oracle = pipeline(model=&quot;deepset/roberta-base-squad2&quot;) question = 'When did the first extension of the Athens Tram take place?' print(data) print(&quot;Data size is: &quot;, len(data)) parallel = True if parallel == False: counter = 0 l = len(data) cr = [] for words in data: counter+=1 print(counter, &quot; out of &quot;, l) cr.append(oracle(question=question, context=words)) elif parallel == True: from multiprocessing import Process, Queue import multiprocessing no_CPU = multiprocessing.cpu_count() print(&quot;Number of cpu : &quot;, no_CPU) l = len(data) def answer_question(data, no_CPU, sub_no): cr_process = [] counter_process = 0 for words in data: counter_process+=1 l_data = len(data) # print(&quot;n is&quot;, no_CPU) # print(&quot;l is&quot;, l_data) print(counter_process, &quot; out of &quot;, l_data, &quot;in subprocess number&quot;, sub_no) cr_process.append(oracle(question=question, context=words)) # Q.put(cr_process) cr.append(cr_process) n = no_CPU # number of subprocesses m = l//n # number of data the n-1 first subprocesses will handle res = l % n # number of extra data samples the last subprocesses has # print(m) # print(res) procs = [] # instantiating process with arguments for x in range(n-1): # print(x*m) # print((x+1)*m) proc = Process(target=answer_question, args=(data[x*m:(x+1)*m],n, x+1,)) procs.append(proc) proc.start() proc = Process(target=answer_question, args=(data[(n-1)*m:n*m+res],n,n,)) procs.append(proc) proc.start() # complete the processes for proc in procs: proc.join() </code></pre> <p>A sample of the <code>data</code> variable can be found <a href="https://pastebin.pl/view/272b15ed" rel="nofollow noreferrer">here</a> (to not 
flood the question). The argument <code>parallel</code> switches between the serial and the parallel version. So my question is: why does this happen, and how do I make the parallel version faster? I use Google Colab, so it has 2 CPU cores available; at least that's what <code>multiprocessing.cpu_count()</code> says.</p>
<python><multiprocessing><google-colaboratory><python-multiprocessing><huggingface-transformers>
2023-02-13 15:57:00
1
464
Skapis9999
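One note on the question above: with only 2 cores, each `Process` also pays for re-importing the module (and, depending on the start method, re-loading the HuggingFace pipeline) plus pickling the data, which can easily outweigh the gain. Independent of that, the manual index arithmetic for splitting `data` is error-prone and can be replaced by a small helper; a sketch:

```python
def split_chunks(data, n):
    """Split `data` into n nearly equal contiguous chunks
    (the first chunks absorb the remainder)."""
    q, r = divmod(len(data), n)
    chunks, start = [], 0
    for i in range(n):
        size = q + (1 if i < r else 0)
        chunks.append(data[start:start + size])
        start += size
    return chunks

parts = split_chunks(list(range(10)), 3)
```

With chunks like these, `multiprocessing.Pool(n).map(worker, parts)` avoids the manual `Process` bookkeeping; whether it is actually faster still depends on the model-loading overhead per worker.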
75,438,217
10,266,106
NumPy Nearest Neighbor Line Fitting Across Moving Window
<p>I have two two-dimensional arrays loaded into NumPy, both of which are 80i x 80j in size. I'm looking to do a moving window polyfit calculation across these arrays, I've nailed down how to conduct the polyfit but am stuck on the specific moving window approach I'm looking to accomplish. I'm aiming for:</p> <p><strong>1)</strong> At each index of Array 1 (a1), code isolates all the values of the index &amp; its closest 8 neighbors into a separate 1D array, and repeats over the same window at Array 2 (a2)</p> <p><strong>2)</strong> With these two new arrays isolated, perform linear regression line-fitting using NumPy's polyfit in the following approach: <code>model = np.polyfit(a1slice, a2slice, 1)</code></p> <p><strong>3)</strong> Cast the resulting regression coefficient and intercept (example output doing this manually: <code>array([-0.02114911, 10.02127152])</code>) to the same index of two other arrays, where <code>model[0]</code> would be placed into the first new array and <code>model[1]</code> into the second new array at this index.</p> <p><strong>4)</strong> The code then moves sequentially to the next index and performs steps 1-3 again, or a1(i+1, j+0, etc.)</p> <p>I've provided a graphical example of what I'm trying achieve for two random index selections across Array 1 and the calculation across the index's eight nearest neighbors, should this make the desired result easier to understand:</p> <p><a href="https://i.sstatic.net/1fAP0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1fAP0.png" alt="Target Example One" /></a></p> <p><a href="https://i.sstatic.net/sDfFS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sDfFS.png" alt="Target Example Two" /></a></p>
<python><arrays><numpy><slice><numpy-ndarray>
2023-02-13 15:51:44
1
431
TornadoEric
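The moving-window fit described above can be done with plain loops over clipped 3x3 neighbourhoods; `np.polyfit(..., 1)` returns `(slope, intercept)`, which are written into two output arrays of the same shape. A sketch (edge cells simply use the smaller window that fits):

```python
import numpy as np

def windowed_fit(a1, a2):
    """For each cell, fit a line to the 3x3 neighbourhood (clipped at edges)
    of a1 vs a2; returns arrays of slopes and intercepts."""
    ni, nj = a1.shape
    slope = np.empty((ni, nj))
    intercept = np.empty((ni, nj))
    for i in range(ni):
        for j in range(nj):
            i0, i1 = max(i - 1, 0), min(i + 2, ni)
            j0, j1 = max(j - 1, 0), min(j + 2, nj)
            xs = a1[i0:i1, j0:j1].ravel()
            ys = a2[i0:i1, j0:j1].ravel()
            m, b = np.polyfit(xs, ys, 1)
            slope[i, j], intercept[i, j] = m, b
    return slope, intercept
```

For 80x80 arrays this is 6400 small fits, which is fast enough in practice; if it ever becomes a bottleneck, the normal-equation sums can be vectorised with stride tricks, but the loop version is easier to verify.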
75,438,152
2,623,317
How to convert time durations to numeric in polars?
<p>Is there any built-in function in <code>polars</code> or a better way to convert time durations to numeric by defining the time resolution (e.g.: days, hours, minutes)?</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame({ &quot;from&quot;: [&quot;2023-01-01&quot;, &quot;2023-01-02&quot;, &quot;2023-01-03&quot;], &quot;to&quot;: [&quot;2023-01-04&quot;, &quot;2023-01-05&quot;, &quot;2023-01-06&quot;], }) </code></pre> <p>My current approach:</p> <pre><code># Convert to date and calculate the time difference df = ( df.with_columns( pl.col(&quot;to&quot;, &quot;from&quot;).str.to_date().name.suffix(&quot;_date&quot;) ) .with_columns((pl.col(&quot;to_date&quot;) - pl.col(&quot;from_date&quot;)).alias(&quot;time_diff&quot;)) ) # Convert the time difference to int (in days) df = df.with_columns( ((pl.col(&quot;time_diff&quot;) / (24 * 60 * 60 * 1000)).cast(pl.Int8)).alias(&quot;time_diff_int&quot;) ) </code></pre> <p>Output:</p> <pre><code>shape: (3, 6) ┌────────────┬────────────┬────────────┬────────────┬──────────────┬───────────────┐ │ from ┆ to ┆ to_date ┆ from_date ┆ time_diff ┆ time_diff_int │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ date ┆ date ┆ duration[ms] ┆ i8 │ ╞════════════╪════════════╪════════════╪════════════╪══════════════╪═══════════════╡ │ 2023-01-01 ┆ 2023-01-04 ┆ 2023-01-04 ┆ 2023-01-01 ┆ 3d ┆ 3 │ │ 2023-01-02 ┆ 2023-01-05 ┆ 2023-01-05 ┆ 2023-01-02 ┆ 3d ┆ 3 │ │ 2023-01-03 ┆ 2023-01-06 ┆ 2023-01-06 ┆ 2023-01-03 ┆ 3d ┆ 3 │ └────────────┴────────────┴────────────┴────────────┴──────────────┴───────────────┘
</code></pre>
<python><dataframe><datetime><python-polars><duration>
2023-02-13 15:46:43
2
477
Guz
75,438,149
3,922,727
Azure http trigger functions throwing an error over imported modules from other folder
<p>We are building an http trigger using Azure functions.</p> <pre><code>import logging import azure.functions as func def main(req: func.HttpRequest) -&gt; func.HttpResponse: logging.info('Python HTTP trigger function processed a request.') name = req.params.get('name') if not name: try: req_body = req.get_json() except ValueError: pass else: name = req_body.get('name') if name: return func.HttpResponse(f&quot;Hello, {name}. This HTTP triggered function executed successfully.&quot;) else: return func.HttpResponse( &quot;This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.&quot;, status_code=200 ) </code></pre> <p>From the online platform, we were able to trigger it by adding a parameter and printing it on the screen.</p> <p>Once we try to import files from the src folder, which is at the same level as <code>HttpTrigger1</code>,</p> <pre><code>import src.main </code></pre> <p><a href="https://i.sstatic.net/mXs3T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mXs3T.png" alt="enter image description here" /></a></p> <p>We have the following error:</p> <blockquote> <p>Error in load_function. Sys Path: ['C:\Users\AliMehdy\AppData\Roaming\npm\node_modules\azure-functions-core-tools\bin\workers\python\3.10\WINDOWS\X64',</p> </blockquote> <p>The error points to <code>main.py</code>, where we have already loaded a library called <code>load_toml</code>.
It also complains about each import in main.py from the src folder.</p> <p><a href="https://i.sstatic.net/Ao7jZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ao7jZ.png" alt="enter image description here" /></a></p> <p>So it's throwing the same error for each module.</p> <p>We did check the <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python?tabs=asgi%2Capplication-level&amp;pivots=python-mode-configuration#import-behavior" rel="nofollow noreferrer">documentation</a>, specifically this part:</p> <blockquote> <p>When you're using absolute import syntax, the shared_code/ folder needs to contain an <code>__init__.py</code> file to mark it as a Python package.</p> </blockquote> <p>Still, we didn't know what to do. Any idea how to import other modules into an Azure http trigger function?</p>
<python><azure><azure-functions><azure-web-app-service><azure-http-trigger>
2023-02-13 15:46:33
1
5,012
alim1990
75,438,124
203,175
How to print several strings side-by-side and span multiple lines at a fixed output width
<p>I am trying to print out three long strings (same length), character by character, with a fixed output width of 60, which may be rendered like:</p> <pre><code>aaaaaaaaaaaaa bbbbbbbbbbbbb ccccccccccccc ---blank line--- aaaaaaaaaaaaa bbbbbbbbbbbbb ccccccccccccc ..... </code></pre> <p>I simplify the strings so that the first string is an arbitrarily long string containing &quot;a&quot;s, the second string contains many &quot;b&quot;s, etc. There can be as many blocks of lines shown above as needed. Within each block, the first line stands for string1, the second line stands for string2, etc. And since a fixed output width is required, the printing will continue in the next block of three lines (for example, str1 will continue at the first line of the second block if length&gt;60).</p> <p>My current code looks like:</p> <pre><code> for chunk in chunkstring(str1, 60): f.write(chunk) f.write('\n') for chunk in chunkstring(str2, 60): f.write(chunk) f.write('\n') for chunk in chunkstring(str3, 60): f.write(chunk) f.write('\n') </code></pre> <p>However, the result is not correct. It prints out all of str1 first, then str2, then str3:</p> <pre><code> aaaaaaaaaaaaa aaaaaaaaaaaaa aaaaaaaaaaaaa aaaa ---blank line--- bbbbbbbbbbbbb bbbbbbbbbbbbb bbbbbb ---blank line--- ccccccccccccc cccc ..... </code></pre> <p>Sorry if this is not explained clearly; please highlight any ambiguity so I can edit the description.</p>
<python><python-3.x>
2023-02-13 15:43:33
2
6,851
Kevin
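The interleaving asked for above comes from iterating over the chunk *offsets* in the outer loop and over the strings in the inner loop, instead of one full string at a time. A sketch:

```python
def interleave(strings, width=60):
    """Return blocks of text: one `width`-wide slice of each string,
    then a blank line, repeated until the longest string is consumed."""
    lines = []
    length = max(len(s) for s in strings)
    for start in range(0, length, width):
        for s in strings:
            lines.append(s[start:start + width])
        lines.append("")                # blank line between blocks
    return "\n".join(lines)

block = interleave(["a" * 70, "b" * 70, "c" * 70], width=60)
```

Writing to a file is then just `f.write(interleave([str1, str2, str3]))`.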
75,438,026
802,589
How to show play button in every code cell for jupyter lab
<p>In vscode and colab, there's a play button in every code cell in the notebook. <a href="https://stackoverflow.com/questions/54607200/remove-play-button-display-at-every-cell-line-of-jupyter-notebook">This issue</a> seems to suggest there should be a play button in <code>notebook</code> 5.6.0, but I'm on 5.6.1 (and using jupyterlab) and there's no play button next to the cells.</p> <p>The same issue seems to suggest CSS can be used to control the display of the button. Is that the only way? It would be good if there were a configuration somewhere, considering it should be a common element, and both vscode and colab have it by default.</p>
<python><jupyter-notebook><jupyter-lab>
2023-02-13 15:35:24
1
1,629
liang
75,437,933
10,613,037
When writing to excel, add padding on the top of dataframe with meta info
<p>I've got a dataframe e.g. <code>df = pd.DataFrame({'col1': [1,2]})</code></p> <pre><code> col1 0 1 1 2 </code></pre> <p>I want that when I do <code>df.to_excel('abc.xlsx')</code>, I get an output with a bit of padding on top of the entire dataframe, where I write some meta information in that space, i.e.</p> <p><a href="https://i.sstatic.net/AoRfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AoRfg.png" alt="enter image description here" /></a></p> <p>How can I do this best?</p>
<python><excel><pandas><dataframe>
2023-02-13 15:28:26
0
320
meg hidey
75,437,921
11,391,711
Computing statistical results for the test dataset in PyTorch - Python
<p>I'm creating a vanilla neural network with artificial datasets to learn <code>pytorch</code>. I'm currently looking how I can get the predictions for the test data set and obtain the statistical metrics including <code>mse</code>, <code>mae</code>, and <code>r2</code>. I was wondering if my calculations are correct. Also, is there any built-in function that could potentially give me all these results in <code>pytorch</code> as it happens in <code>sckit-learn</code>?</p> <p>Let's first upload libraries and then generate artificial training and test data.</p> <pre><code>import random import torch import pandas as pd import numpy as np from torch import nn from torch.utils.data import Dataset,DataLoader,TensorDataset from torchvision import datasets, transforms import math n_input, n_hidden, n_out= 5, 64, 1 #Create training and test datasets X_train = pd.DataFrame([[random.random() for i in range(n_input)] for j in range(1000)]) y_train = pd.DataFrame([[random.random() for i in range(n_out)] for j in range(1000)]) X_test = pd.DataFrame([[random.random() for i in range(n_input)] for j in range(50)]) y_test = pd.DataFrame([[random.random() for i in range(n_out)] for j in range(50)]) test_dataset = TensorDataset(torch.Tensor(X_test.to_numpy().astype(np.float32)), torch.Tensor((y_test).to_numpy().astype(np.float32))) testloader = DataLoader(test_dataset, batch_size= 32) #For training, use 32 as a batch size training_dataset = TensorDataset(torch.Tensor(X_train.to_numpy().astype(np.float32)), torch.Tensor((y_train).to_numpy().astype(np.float32))) dataloader = DataLoader(training_dataset, batch_size=32, shuffle=True) </code></pre> <p>Now, let us generate the model to train.</p> <pre><code>model = nn.Sequential(nn.Linear(n_input, n_hidden), nn.ReLU(), nn.Linear(n_hidden, n_out), nn.ReLU()) loss_function = nn.MSELoss() optimizer = torch.optim.Adam(model.parameters(), lr=0.001) </code></pre> <p>Next, I start the training process.</p> <pre><code>losses = [] epochs = 1000 for 
epoch in range(epochs+1): for times,(x_train,y_train) in enumerate(dataloader): y_pred = model(x_train) loss = loss_function(y_pred, y_train) model.zero_grad() loss.backward() optimizer.step() </code></pre> <p>Now, I'd like to get the predictions for the test dataset and get statistical results. This is the part where I need help and some guidance. Do these calculations seem correct? Is there something I might be doing wrong?</p> <pre><code>from torchmetrics import R2Score r2score = R2Score() running_mae = running_mse = running_r2 = 0 with torch.no_grad(): model.eval() for times,(x_test,y_test) in enumerate(testloader): y_pred = model(x_test) error = torch.abs(y_pred - y_test).sum().data squared_error=((y_pred - y_test)*(y_pred - y_test)).sum().data running_mae+=error running_mse+=squared_error running_r2+=r2score(y_pred, y_test) mse = math.sqrt(squared_error/ len(testloader)) mae = error / len(testloader) r2 = running_r2 / len(testloader) print(&quot;MSE:&quot;,mse, &quot;MAE:&quot;, mae, &quot;R2:&quot;, r2) </code></pre>
<python><machine-learning><pytorch><static-analysis>
2023-02-13 15:27:43
0
488
whitepanda
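Two issues in the metric code above, independent of PyTorch: the final `mse`/`mae` use only the *last* batch (`squared_error`, `error`) instead of the running totals, and both divide by the number of batches rather than the number of samples; averaging R² over batches is also biased. The usual pattern is to collect all predictions and compute the metrics once. A NumPy sketch of that pattern:

```python
import numpy as np

def regression_metrics(batches):
    """batches: iterable of (y_pred, y_true) 1-D arrays. Returns (mse, mae, r2)."""
    preds, trues = [], []
    for y_pred, y_true in batches:
        preds.append(np.asarray(y_pred, dtype=float))
        trues.append(np.asarray(y_true, dtype=float))
    y_pred = np.concatenate(preds)
    y_true = np.concatenate(trues)
    err = y_pred - y_true
    mse = float(np.mean(err ** 2))               # note: sqrt of this would be RMSE
    mae = float(np.mean(np.abs(err)))
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    return mse, mae, r2
```

In the PyTorch loop, the equivalent is appending `y_pred` and `y_test` tensors per batch and concatenating once at the end.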
75,437,910
2,487,835
Pydroid PIL does not display image on android
<p>I am writing code on my Android. I know, it's weird. But my notebook is being repaired :)</p> <p>I am trying to display an image generated by the pillow library. I'm doing this within the Pydroid app.</p> <p>Matplotlib charts are displaying okay. But not the pillow image.</p> <p>There is a question similar to mine that links this problem to ImageMagick not being installed. But it is not particular to Android. If this is also the case for me, please specify how to install it, since it is not a pip package. Here is my code</p> <pre><code> from PIL import Image img = Image.new( mode='RGB', size=(400, 240), color=(153,153,153) ) img.show() </code></pre>
<python><android><python-imaging-library><pydroid>
2023-02-13 15:26:35
1
3,020
Lex Podgorny
75,437,890
19,580,067
Send a email in Outlook for automation
<p>I tried to send an Outlook email as part of an automation, but the usual code does not seem to work. Not sure what I'm missing here.</p> <p>The code I tried is here</p> <pre><code>import win32com.client ol=win32com.client.Dispatch(&quot;outlook.application&quot;) olmailitem=0x0 # the olMailItem constant (0 = mail item) newmail=ol.CreateItem(olmailitem) </code></pre> <p>The error I'm facing here is</p> <p><a href="https://i.sstatic.net/0ZzKz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0ZzKz.png" alt="enter image description here" /></a></p> <p>I deleted the files under AppData\Local\Temp\gen_py\3.9 but still got the error. <a href="https://i.sstatic.net/VNPJZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VNPJZ.png" alt="enter image description here" /></a></p>
<python><outlook><pywin32><win32com><office-automation>
2023-02-13 15:25:19
2
359
Pravin
75,437,773
1,876,739
Pandas MultiIndex Lookup By Equality and Set Membership
<p>Given a pandas <code>Series</code>, or <code>Dataframe</code>, with a multiindex:</p> <pre><code>first_key = ['a', 'b', 'c'] second_key = [1, 2, 3] m_index = pd.MultiIndex.from_product([first_key, second_key], names=['first_key', 'second_key']) series_with_index = pd.Series(0.0, index=m_index) </code></pre> <p><strong>How can the MultiIndex be indexed to lookup an equality for the first level and an <code>isin</code> on the second index?</strong></p> <p>For example, how can all values where the first level is equal to <code>a</code> and the second level is in the set <code>{2, 3, 4}</code> be set to <code>1.0</code>?</p> <p>Thank you in advance for your consideration and response.</p>
<python><pandas>
2023-02-13 15:16:07
4
17,975
Ramón J Romero y Vigil
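The mixed lookup asked about above (equality on one level, set membership on another) can be expressed as a boolean mask built from `Index.get_level_values`, which also tolerates set members that are absent from the index (like the 4 here). A sketch:

```python
import pandas as pd

first_key = ["a", "b", "c"]
second_key = [1, 2, 3]
m_index = pd.MultiIndex.from_product([first_key, second_key],
                                     names=["first_key", "second_key"])
s = pd.Series(0.0, index=m_index)

# equality on the first level AND membership on the second level
mask = (s.index.get_level_values("first_key") == "a") & \
       s.index.get_level_values("second_key").isin({2, 3, 4})
s[mask] = 1.0
```

For labels known to exist, `s.loc[("a", [2, 3])] = 1.0` also works; the mask version is safer when the set may contain missing labels.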
75,437,730
3,507,584
Python - Categorise a single value yields error "Input array must be 1 dimensional"
<p>I am trying to categorise single float numbers, avoiding a list of <code>if</code> and <code>elif</code> statements, using <code>pd.cut</code>.</p> <p>Why do the 2 calls below yield the error <code>Input array must be 1 dimensional</code>?</p> <pre><code>import pandas as pd import numpy as np pd.cut(0.96,bins=[0,0.5,1,10],labels=['A','B','C']) pd.cut(np.array(0.96),bins=[0,0.95,1,10],labels=['A','B','C']) </code></pre>
<python><arrays><pandas>
2023-02-13 15:12:09
1
3,689
User981636
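Both calls above fail because `pd.cut` requires a 1-D array-like, and a bare float (or `np.array(0.96)`, which is 0-dimensional) is not one. Wrapping the scalar in a list and unwrapping the single result works; a sketch:

```python
import pandas as pd

def categorise(value, bins=(0, 0.5, 1, 10), labels=("A", "B", "C")):
    """pd.cut expects a 1-D array-like, so wrap the scalar and unwrap the result."""
    return pd.cut([value], bins=list(bins), labels=list(labels))[0]

cat = categorise(0.96)
```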
75,437,697
10,613,037
When writing to excel, make columns not at top level
<p>I've got a dataframe e.g. <code>df = pd.DataFrame({'col1': [1,2]})</code></p> <pre><code> col1 0 1 1 2 </code></pre> <p>I want that when I do <code>df.to_excel('abc.xlsx')</code>, I get an output with a bit of padding on top, e.g.</p> <p><a href="https://i.sstatic.net/3AIvw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3AIvw.png" alt="enter image description here" /></a></p> <p>So I decided to add empty rows to the top of the dataframe</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'col1': [1,2]}) empty_rows = pd.DataFrame(np.nan, index=[0,1], columns=df.columns) pd.concat([empty_rows, df]) </code></pre> <p>However, this leaves the column name, <code>col1</code>, at the top of the excel file when I write to the excel file.</p> <p><a href="https://i.sstatic.net/guM6B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/guM6B.png" alt="enter image description here" /></a></p> <p>How can I create the effect like in my first image?</p>
<python><excel><pandas><dataframe>
2023-02-13 15:08:46
0
320
meg hidey
75,437,666
9,758,017
Remove duplicate of a list via list matching in Python
<p>I have chosen a slightly different procedure to remove duplicates of a list. I want to keep a new list in parallel, in which each duplicate is added. Afterwards I check if the element is present in the &quot;newly created list&quot; in order to delete it.</p> <p>The code looks like this:</p> <pre><code># nums = [1,1,2] or [0,0,1,1,1,2,2,3,3,4] t = [] nums_new = nums for i in nums: if nums[i] not in t: t.append(nums[i]) else: nums_new.remove(nums[i]) nums = nums_new print(nums) </code></pre> <p>For the case when <code>nums = [1,1,2]</code> this works fine and returns <code>[1,2]</code>.</p> <p>However, for <code>nums = [0,0,1,1,1,2,2,3,3,4]</code> this case does not seem to work as I get the following output: <code>[0, 1, 2, 2, 3, 3, 4]</code>.</p> <p>Why is this? Can someone explain me the steps?</p>
<python>
2023-02-13 15:06:20
1
1,778
41 72 6c
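Two separate problems explain the surprising output above: `nums_new = nums` binds a second name to the *same* list (so `remove` mutates the list being iterated and elements get skipped), and `nums[i]` treats each *value* as an *index*. A sketch that keeps the seen-list idea but builds a fresh result instead of mutating:

```python
def dedupe(nums):
    seen = []
    result = []
    for value in nums:            # iterate over values, not nums[i]
        if value not in seen:
            seen.append(value)
            result.append(value)  # build a new list; never remove while iterating
    return result
```

Since Python 3.7, `list(dict.fromkeys(nums))` achieves the same order-preserving deduplication in one line.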
75,437,636
13,382,780
Filter for specific tags on Zendesk API using Zenpy on python
<p>Need some help. While trying to retrieve some tickets with Zenpy, I found that when I search for more than one tag, it fetches all the data related to either tag. Is there a way to filter only for the tickets that have both tags?</p> <p>An example of my script is:</p> <p><code>zenpy_client.search(tags = [&quot;tag1&quot;,&quot;tag2&quot;], created_between=[to_date, from_date], type='ticket', minus='negated')</code></p> <p>Could you help on that? Thanks</p>
<python><python-3.x><zendesk><zendesk-api>
2023-02-13 15:04:26
1
312
Tayzer Damasceno
75,437,326
3,719,167
python fnmatch exclude path with string
<p>I want to perform a check and allow access only to URLs matching specific patterns, excluding a few.</p> <p>Using the following check to match the allowed URLs</p> <pre class="lang-py prettyprint-override"><code>ALLOWED_URL = [ '/auth/*' ] </code></pre> <p>and using <code>fnmatch</code> to match the pattern</p> <pre><code>any(fnmatch(request.path, p) for p in settings.ALLOWED_URL) </code></pre> <p>This works for the following URLs</p> <ul> <li><code>/auth/login/</code></li> <li><code>/auth/signup/google/</code></li> <li><code>/auth/user-tracking/</code></li> </ul> <p>But I want to exclude <code>/auth/user-tracking/</code> from the URLs so the user cannot access it. So I modified the pattern as</p> <pre class="lang-py prettyprint-override"><code>MULTI_USER_EXCLUDE_PATH = [ '/auth/[!user-tracking/*]*' ] </code></pre> <p>But this now does not work for</p> <ul> <li><code>/auth/signup/google/</code></li> </ul>
<python><fnmatch>
2023-02-13 14:39:32
1
9,922
Anuj TBE
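The combined pattern above fails because in `fnmatch` a character class like `[!...]` negates a *single character*, not a substring, so `/auth/[!user-tracking/*]*` rejects any path whose first character after `/auth/` is one of those letters (which breaks `/auth/signup/...`). Keeping separate allow and exclude lists is simpler; a sketch:

```python
from fnmatch import fnmatch

ALLOWED_URLS = ["/auth/*"]
EXCLUDED_URLS = ["/auth/user-tracking/*"]   # '*' also matches the empty string

def is_allowed(path):
    """Allowed if the path matches an allow pattern and no exclude pattern."""
    if any(fnmatch(path, p) for p in EXCLUDED_URLS):
        return False
    return any(fnmatch(path, p) for p in ALLOWED_URLS)
```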
75,437,202
9,401,029
Python path remains unchanged even after config update and restart
<p>I am using a <code>mac</code> and my end goal is for <code>which python</code> to return</p> <pre><code>/usr/bin/python2.7 </code></pre> <p>The above path exists and it is an executable that works fine.</p> <p>At present <code>which python</code> incorrectly returns</p> <pre><code>/Library/Frameworks/Python.framework/Versions/2.7/bin/python </code></pre> <p><code>echo $PYTHON</code> returns <code>/usr/bin/python2.7</code> correctly thus issue is about fixing the result from <code>which</code> command.</p> <p>I am using <code>zsh</code>.</p> <p>I have already tried <code>source ~/.zshrc</code> and also restarted the laptop. Same outcome.</p> <p>This is what I get when I echo $PATH</p> <blockquote> <p>/Users/myname/.nvm/versions/node/v16.16.0/bin:/usr/bin:/Users/myname/.sdkman/candidates/java/current/bin:/Users/myname/.nvm/versions/node/v16.16.0/bin:/usr/bin/python2.7:/Library/Frameworks/Python.framework/Versions/2.7/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/myname/tools/apache-maven-3.8.1/bin:/Users/myname/tools/apache-maven-3.8.1/bin</p> </blockquote> <p>This is what I have in my <code>.zshrc</code> file.</p> <pre><code>export PYTHON=&quot;/usr/bin/python2.7&quot; export PATH=&quot;$PYTHON:$PATH&quot; </code></pre> <p>Full file for reference</p> <pre><code>export PYTHON=&quot;/usr/bin/python2.7&quot; export PATH=&quot;$PYTHON:$PATH&quot; # Believe only the above 2 lines is relevant. Leaving the rest in just in case. export M2_HOME=/Users/name/tools/apache-maven-3.8.1 export PATH=$PATH:$M2_HOME/bin export NODEJS_ORG_MIRROR=https://myserver-proxy export NVM_NODEJS_ORG_MIRROR=$NODEJS_ORG_MIRROR export NODE_ENV=development export NODE_CERTS=&quot;/Users/name/certs.pem&quot; export NVM_DIR=&quot;$HOME/.nvm&quot; [ -s &quot;$NVM_DIR/nvm.sh&quot; ] &amp;&amp; \. &quot;$NVM_DIR/nvm.sh&quot; # This loads nvm [ -s &quot;$NVM_DIR/bash_completion&quot; ] &amp;&amp; \. 
&quot;$NVM_DIR/bash_completion&quot; # This loads nvm bash_completion export ZSH=&quot;$HOME/.oh-my-zsh&quot; ZSH_THEME=&quot;robbyrussell&quot; plugins=(git) source $ZSH/oh-my-zsh.sh #THIS MUST BE AT THE END OF THE FILE FOR SDKMAN TO WORK!!! export SDKMAN_DIR=&quot;$HOME/.sdkman&quot; [[ -s &quot;$HOME/.sdkman/bin/sdkman-init.sh&quot; ]] &amp;&amp; source &quot;$HOME/.sdkman/bin/sdkman-init.sh&quot; export JETBRAINS_LICENSE_SERVER=&quot;http://jetbrains.something.com:443&quot; </code></pre> <p>What am I missing?</p> <p>UPDATE:</p> <pre><code>Command: ls -lah /usr/bin/python2.7 Result: ls: /usr/bin/python2.7: No such file or directory </code></pre> <p>There is no result, as shown above. But I can go to /usr/bin, and typing python2.7 there opens an interactive shell to write python code.</p> <p>This is the reason I am looking to fix the <code>which</code> result for python.</p> <p>I am installing a node project and getting the following error.</p> <p>Thus I am trying to fix the python path to get this installation to work.</p> <pre><code>gyp info using node-gyp@3.8.0 gyp info using node@14.18.2 | darwin | x64 gyp verb command rebuild [] gyp verb command clean [] gyp verb clean removing &quot;build&quot; directory gyp verb command configure [] gyp verb check python checking for Python executable &quot;/usr/bin/python2.7&quot; in the PATH gyp verb `which` failed Error: not found: /usr/bin/python2.7 gyp verb `which` failed at getNotFoundError (/Users/name/projects/fe/aa/node_modules/which/which.js:13:12) gyp ERR! configure error gyp ERR! stack Error: Can't find Python executable &quot;/usr/bin/python2.7&quot;, you can set the PYTHON env variable. </code></pre>
<python><zsh>
2023-02-13 14:29:39
1
1,836
karvai
75,437,159
6,372,859
Reshape pandas columns into numpy arrays
<p>I have a very long dataframe with many columns of the form <code>k1, p1, k2, p2,...,kn, pn</code> such as</p> <pre><code> k1 p1 k2 p2 k3 p3 ... -0.001870 0.000659 -0.005000 0.000795 -0.003889 0.000795 ... -0.002778 0.000556 0.000795 0.001667 0.000795 0.002778 ... </code></pre> <p>How could I build, in the most pythonic way, a numpy array with the k's and p's separated into arrays, like <code>[[[-0.001870,-0.005000,-0.003889,...],[0.000659,0.000795,0.000795...]],[[-0.002778, 0.000795, 0.000795...],[0.000556, 0.001667, 0.002778...]], ...[[...],[...]]]</code></p>
<python><arrays><pandas><numpy><reshape>
2023-02-13 14:26:11
2
583
Ernesto Lopez Fune
75,437,105
19,238,204
How to Automatically Convert Currency to USD and Label the Legend with the Index Name with Python, pandas, yfinance

<p>I want to label the legend of the index stocks with its name like &quot;Euronext 100 Index&quot; instead of &quot;^N100&quot; because it is easier to be read that way for me. How to modify the label on the legend for each chart?</p> <p>Second, I want to convert the price if it is in GBP it will be converted to USD, if it is in Euro it will be USD too, is there an automatic converter or should I put it manually?</p> <p>Thank You.</p> <p>This is the code:</p> <pre><code>from pandas_datareader import data as pdr import yfinance as yf yf.pdr_override() y_symbols = ['^FTSE', '^GDAXI', '^FCHI', '^STOXX50E','^N100', '^BFX'] from datetime import datetime startdate = datetime(2000,1,1) enddate = datetime(2023,1,31) data = pdr.get_data_yahoo(y_symbols, start=startdate, end=enddate) #print(data) #data['Close'].plot() plt.figure(figsize=(20,10)) plt.plot(data.index, data['Close'], label=data[&quot;Close&quot;].columns) plt.xlabel(&quot;Date&quot;) plt.ylabel(&quot;Price (in its own currency)&quot;) plt.title(&quot;Europe Indexes 1/1/00 - 1/1/23&quot;) plt.legend() plt.show() </code></pre> <p><a href="https://i.sstatic.net/uAOCm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uAOCm.png" alt="1" /></a></p>
<python><pandas><yfinance>
2023-02-13 14:21:21
1
435
Freya the Goddess
75,437,030
14,125,436
How to implement self-adaptive weights in a neural network in PyTorch
<p>I want to develop a Physics Informed Neural Network model in Pytorch. My network should be trained based on two losses: boundary condition (BC) and partial derivative equation (PDE). I am adding these two losses but the problem is that the BC is controlling the main loss, like the following figure:</p> <p><a href="https://i.sstatic.net/LpAZq.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LpAZq.jpg" alt="enter image description here" /></a></p> <p>This way I make asimple finite difference calculation for my 1D heat conduction:</p> <pre><code>import torch import torch.nn as nn import matplotlib.pyplot as plt import numpy as np from pyDOE import lhs ######### Finite difference solution # geometry: L = 1 # length of the rod # mesh: dx = 0.01 nx = int(L/dx) + 1 x = np.linspace(0, L, nx) # temporal grid: t_sim = 1 dt = 0.01 nt = int (t_sim/dt) # parametrization alpha = 0.14340344168260039 # IC t_ic = 4 # BC t_left = 5 # left side with 6 Β°C temperature t_right = 3 # right side with 4 Β°C temperature # Results T = np.ones(nx) * t_ic all_T = [] for i in range (0, nt): Tn = T.copy() T[1:-1] = Tn[1:-1] + dt/(dx++2) * alpha * (Tn[2:] - 2*Tn[1:-1] + Tn[0:-2]) T[0] = t_left T[-1] = t_right all_T.append(Tn) </code></pre> <p>Then,data is prepared for the PINN model through the next block of code:</p> <pre><code>x = torch.linspace(0, L, nx, dtype=torch.float32) t = torch.linspace(0, t_sim, nt, dtype=torch.float32) T, X = torch.meshgrid(t,x) Temps = np.concatenate (all_T).reshape(nt,nx) x_test = torch.hstack((X.transpose(1,0).flatten()[:,None], T.transpose(1,0).flatten()[:,None])) y_test = torch.from_numpy(Temps) # I suppose it is the ground truth lb = x_test[0] # lower boundary ub = x_test[-1] # upper boundary left_x = torch.hstack((X[:,0][:,None], T[:,0][:,None])) # x and t of left boundary left_y = torch.ones(left_x.shape[0], 1) * t_left # Temperature of left boundary left_y[0,0] = t_ic right_x = torch.hstack((X[:,-1][:,None], T[:,0][:,None])) # x and t of 
right boundary right_y = torch.ones(right_x.shape[0], 1) * t_right # Temperature of right boundary right_y[0,0] = t_ic bottom_x = torch.hstack((X[0,1:-1][:,None], T[0,1:-1][:,None])) # x and t of IC bottom_y = torch.ones(bottom_x.shape[0], 1) * t_ic # Temperature of IC No_BC = 1 # 50 percent of the BC data are used from training No_IC = 1 # 75 percent of the IC data are used from training idx_l = np.random.choice(left_x.shape[0], int (left_x.shape[0]*No_BC), replace=False) idx_r = np.random.choice(right_x.shape[0], int (right_x.shape[0]*No_BC), replace=False) idx_b = np.random.choice(bottom_x.shape[0], int (bottom_x.shape[0]*No_IC), replace=False) X_train_No = torch.vstack([left_x[idx_l,:], right_x[idx_r,:], bottom_x[idx_b,:]]) Y_train_No = torch.vstack([left_y[idx_l,:], right_y[idx_r,:], bottom_y[idx_b,:]]) N_f = 5000 X_train_Nf = lb + (ub-lb)*lhs(2,N_f) f_hat = torch.zeros(X_train_Nf.shape[0], 1, dtype=torch.float32) # zero array for loss of PDE </code></pre> <p>This is my script for PINN and I very much appreciate your help:</p> <pre><code>class FCN(nn.Module): ##Neural Network def __init__(self,layers): super().__init__() #call __init__ from parent class self.activation = nn.Tanh() self.loss_function = nn.MSELoss(reduction ='mean') 'Initialise neural network as a list using nn.Modulelist' self.linears = nn.ModuleList([nn.Linear(layers[i], layers[i+1]) for i in range(len(layers)-1)]) self.iter = 0 'Xavier Normal Initialization' for i in range(len(layers)-1): nn.init.xavier_normal_(self.linears[i].weight.data, gain=1.0) nn.init.zeros_(self.linears[i].bias.data) 'foward pass' def forward(self,x): if torch.is_tensor(x) != True: x = torch.from_numpy(x) a = x.float() for i in range(len(layers)-2): z = self.linears[i](a) a = self.activation(z) a = self.linears[-1](a) return a 'Loss Functions' #Loss BC def lossBC(self, x_BC, y_BC): loss_BC = self.loss_function(self.forward(x_BC),y_BC) return loss_BC.float() #Loss PDE def lossPDE(self,x_PDE): g = x_PDE.clone() 
g.requires_grad = True # Enable differentiation f = self.forward(g) f_x_t = torch.autograd.grad(f,g,torch.ones([g.shape[0],1]).to(device),retain_graph=True, create_graph=True)[0] #first derivative f_xx_tt = torch.autograd.grad(f_x_t,g,torch.ones(g.shape).to(device), create_graph=True)[0]#second derivative f_t = f_x_t[:,[1]] f_xx = f_xx_tt[:,[0]] f = f_t - alpha * f_xx return self.loss_function(f,f_hat).float() def loss(self,x_BC,y_BC,x_PDE): loss_bc = self.lossBC(x_BC.float(),y_BC.float()) loss_pde = self.lossPDE(x_PDE.float()) return loss_bc.float() + loss_pde.float() </code></pre> <p>And this is how I make the model, arrays representing losses and finally the plot:</p> <pre><code>layers = np.array([2, 50, 50, 50, 50, 50, 1]) PINN = FCN(layers) optimizer = torch.optim.Adam(PINN.parameters(), lr=0.001) def closure(): optimizer.zero_grad() loss_p = PINN.lossPDE(X_train_Nf) loss_p.backward() loss_b = PINN.lossBC(X_train_No, Y_train_No) loss_b.backward() return loss_b + loss_p total_l = np.array([]) BC_l = np.array([]) PDE_l = np.array([]) test_BC_l = np.array([]) for i in range(10000): loss = optimizer.step(closure) total_l = np.append(total_l, loss.cpu().detach().numpy()) PDE_l = np.append (PDE_l, PINN.lossPDE(X_train_Nf).cpu().detach().numpy()) BC_l = np.append(BC_l, PINN.lossBC(X_train_No, Y_train_No).cpu().detach().numpy()) with torch.no_grad(): test_loss = PINN.lossBC(X_test, Y_test.flatten().view(-1,1)) test_BC_l = np.append(test_BC_l, test_loss.cpu().detach().numpy()) import matplotlib.pyplot as plt fig,ax=plt.subplots(1,1, figsize=(9,9)) ax.plot(PDE_l, c = 'g', lw=2, label='PDE loss in train') ax.plot(BC_l, c = 'k', lw=2, label='BC loss in train') ax.plot(test_BC_l, c = 'r', lw=2, label='BC loss in test') ax.plot(total_l, c = 'b', lw=2, label='total loss in train') ax.set_xlabel('Epoch') ax.set_ylabel('Loss') plt.legend() plt.show() </code></pre>
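One common form of self-adaptive weighting (an editor-added sketch in the style of Kendall et al.'s uncertainty weighting, not code from the post; `AdaptiveLossWeights` is a made-up name) gives each loss term a trainable log-variance so the optimizer balances the BC and PDE terms itself:

```python
import torch
import torch.nn as nn

# Each loss term L_i gets a learnable parameter s_i; the total loss is
# sum(exp(-s_i) * L_i + s_i), so terms with large noise/scale are down-weighted
# automatically while the +s_i term keeps the weights from collapsing to zero.
class AdaptiveLossWeights(nn.Module):
    def __init__(self, n_losses=2):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.zeros(n_losses))

    def forward(self, losses):
        total = torch.zeros(())
        for i, loss_i in enumerate(losses):
            total = total + torch.exp(-self.log_sigma[i]) * loss_i + self.log_sigma[i]
        return total

weights = AdaptiveLossWeights(n_losses=2)
# Usage idea: total = weights([loss_bc, loss_pde])
```

If used with the training loop above, `weights.parameters()` would need to be passed to the optimizer alongside `PINN.parameters()` so the weights are actually learned.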
<python><machine-learning><deep-learning><pytorch><neural-network>
2023-02-13 14:14:52
1
1,081
Link_tester
75,436,689
7,882,899
How to make python multiple conditions into a single statement
<p>How to make the method below be returned in a single line?</p> <p>Prefer to the #Note ## remarks below.</p> <pre><code>def falsify(leftover): #Note ## Your code here (replace with a single line) ### def falsify(leftover): false = [] for num in leftover: if 30 &gt; num &gt; 20: false.append(num - 10) elif num &gt;= 30: false.append('1' + (str(num[1:]))) else: false.append(num) return false </code></pre> <p>I don't have any other idea except breaking into 2 methods.</p> <pre><code>leftover1 = [19.7, 20.0, 28.5, 30.0, 30.7] def process(leftover): false = [] for num in leftover: print('num:' , num) if 30 &gt; num &gt;= 20: false.append(num - 10) elif num &gt;= 30: # (str(num[1])) result = str(num) #print('result:' , result) false.append('1' + result[1:]) # else: false.append(num) return false def falsify(leftover): #Note ## Your code here (replace with a single line) ### return process(leftover) print('result', falsify(leftover1)) </code></pre> <p>Sample output as below</p> <pre><code>num: 19.7 num: 20.0 num: 28.5 num: 30.0 num: 30.7 result [19.7, 10.0, 18.5, '10.0', '10.7'] </code></pre>
<python><python-3.x><list><conditional-statements>
2023-02-13 13:44:11
2
337
Banana Tech
75,436,462
19,238,204
How to Add a Legend for a Specific Stock Chart using matplotlib?
<p>I have this code to plot 3 game corporation stocks. But I want to give legend thus I can know which chart is for EA or for Take Two or for Activision.</p> <pre><code>from pandas_datareader import data as pdr import yfinance as yf import matplotlib.pyplot as plt yf.pdr_override() y_symbols = ['EA', 'TTWO', 'ATVI'] from datetime import datetime startdate = datetime(2000,1,1) enddate = datetime(2023,1,31) data = pdr.get_data_yahoo(y_symbols, start=startdate, end=enddate) #print(data) #data['Close'].plot() plt.figure(figsize=(20,10)) plt.plot(data.index, data['Close']) plt.xlabel(&quot;Date&quot;) plt.ylabel(&quot;Price (in USD)&quot;) plt.title(&quot;Game Corporation Stock Price 1/1/00 - 1/1/23&quot;) plt.show() </code></pre> <p><a href="https://i.sstatic.net/pWBVm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pWBVm.png" alt="1" /></a></p> <p>Thank You.</p>
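One way to label each line (an editor-added sketch using synthetic data in place of the yfinance download) is to plot the columns one at a time with a ticker-to-name map, then call `legend()`:

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this also runs headless
import matplotlib.pyplot as plt

company_names = {"EA": "Electronic Arts", "TTWO": "Take-Two Interactive",
                 "ATVI": "Activision Blizzard"}

def plot_with_legend(close):
    # `close` plays the role of data["Close"]: one column per ticker.
    fig, ax = plt.subplots(figsize=(20, 10))
    for symbol in close.columns:
        ax.plot(close.index, close[symbol],
                label=company_names.get(symbol, symbol))
    ax.set_xlabel("Date")
    ax.set_ylabel("Price (in USD)")
    ax.legend()
    return ax

# Tiny synthetic frame standing in for data["Close"]:
demo = pd.DataFrame({"EA": [1, 2], "TTWO": [2, 3], "ATVI": [3, 4]},
                    index=pd.to_datetime(["2000-01-01", "2000-01-02"]))
ax = plot_with_legend(demo)
```

With the real data from the question, `plot_with_legend(data["Close"])` should produce the same chart with readable legend entries.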
<python><matplotlib>
2023-02-13 13:24:16
1
435
Freya the Goddess
75,436,252
5,641,924
Fast solution to get NaN and ignore None in numpy array
<p>I have an array like this:</p> <pre><code>array = np.random.randint(1, 100, 10000).astype(object) array[[1, 2, 6, 83, 102, 545]] = np.nan array[[3, 8, 70]] = None </code></pre> <p>Now, I want to find the indices of the <code>NaN</code> items and ignore the <code>None</code> ones. In this example, I want to get the <code>[1, 2, 6, 83, 102, 545]</code> indices. I can get the NaN indices with <code>np.equal</code> and <code>np.isnan</code>:</p> <pre><code>np.isnan(array.astype(float)) &amp; (~np.equal(array, None)) </code></pre> <p>I checked the performance of this solution with %timeit and got the following result:</p> <pre><code>243 Β΅s Β± 1.32 Β΅s per loop (mean Β± std. dev. of 7 runs, 1000 loops each) </code></pre> <p>Is there faster solution?</p>
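An alternative worth timing (an editor-added sketch; it may or may not beat the `astype(float)` version on a given array size) exploits the fact that NaN is the only value unequal to itself, while `None == None`:

```python
import numpy as np

array = np.array([1, np.nan, None, 5, np.nan], dtype=object)

# NaN compares unequal to itself; integers and None do not, so an
# elementwise self-comparison flags exactly the NaN slots.
mask = array != array
nan_indices = np.nonzero(mask)[0]
print(nan_indices)  # [1 4]
```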
<python><numpy>
2023-02-13 13:03:58
1
642
Mohammadreza Riahi
75,436,228
10,049,514
Efficiently setting and deleting array items with Redis JSON
<p>I'm using Redis OM for Python and my models look like below:</p> <pre class="lang-py prettyprint-override"><code>from typing import List from pydantic import BaseModel from redis_om import EmbeddedJsonModel, Field, JsonModel, Migrator class FeedItem(EmbeddedJsonModel): id: str = Field(index=True) s_score: str = Field() i_score: str = Field() factors: List[str] class Feed(JsonModel): user_id: str = Field(index=True, primary_key=True) feed_items: List[FeedItem] = Field(default=[]) </code></pre> <p>which will then result in a data structure like this:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;user_id&quot;: &quot;john_001&quot;, &quot;feed_items&quot;: [ { &quot;pk&quot;: &quot;01GS2N47G8WK2831GNHMGRVDJT&quot;, &quot;id&quot;: &quot;63e8c53825e41aca93229eac&quot;, &quot;s_score&quot;: &quot;0.5082375478927202&quot;, &quot;i_score&quot;: &quot;0.04626620037029417&quot;, &quot;factors&quot;: [&quot;2nd&quot;], }, { &quot;pk&quot;: &quot;01GS2N557FTV0SCK5TP2KENVAF&quot;, &quot;id&quot;: &quot;63e8c5d31e033af45abfb64d&quot;, &quot;s_score&quot;: &quot;0.7&quot;, &quot;i_score&quot;: &quot;0.37718576424604&quot;, &quot;factors&quot;: [&quot;2nd&quot;, &quot;computer&quot;, &quot;laptop&quot;], }, { &quot;pk&quot;: &quot;01GS2N63S6VM1HZ6RJVH6M1XQJ&quot;, &quot;id&quot;: &quot;63e8c743414c482153e332e6&quot;, &quot;s_score&quot;: &quot;0.5082375478927202&quot;, &quot;i_score&quot;: &quot;0.24141123225673727&quot;, &quot;factors&quot;: [&quot;2nd&quot;, &quot;thumbdrive&quot;, &quot;portables&quot;], }, ], } </code></pre> <p>This is going to be a feed of a user and if he has viewed the first item (with <code>&quot;pk&quot;: &quot;01GS2N47G8WK2831GNHMGRVDJT&quot;</code>), we will need to delete this item from his feed.</p> <p>Currently, what I am having to do is to find the key with <code>&quot;user_id&quot;: &quot;john_001&quot;</code>, retrieve the <code>feed_items</code> to a Python list and remove the item with that <code>pk</code>, then reassign the 
<code>feed_items</code> and save the item, like the following:</p> <pre class="lang-py prettyprint-override"><code>feed = Feed.find(Feed.user_id == &quot;john_001&quot;).first() feed_items = feed.feed_items new_feed_items = [i for i in feed_items if i.pk != &quot;01GS2N47G8WK2831GNHMGRVDJT&quot;] feed.feed_items = new_feed_items feed.save() </code></pre> <p>Is there any better way to do this? Right now the process takes quite long to complete (we have tens of thousands of user feeds, and there are several deletion processes like this every second).</p>
<python><redis><py-redis>
2023-02-13 13:01:29
0
1,071
knl
75,436,220
14,403,266
Sum the rows of a pandas dataframe grouping by the last n dates
<p>I have a pandas dataframe looking like this:</p> <pre><code>Ac |Type |Id |Date |Value |Pe | --------------------------------------------------- Debt |Other |DE |2017-12-31 |5 |12M | Debt |Other |DE |2018-03-31 |4 |12M | Debt |Other |DE |2018-06-30 |3 |12M | Debt |Other |DE |2018-09-30 |2 |12M | Debt |Other |DE |2018-12-31 |5 |12M | Debt |Other |DE |2019-03-31 |6 |12M | Debt |Other |DE |2019-06-30 |1 |12M | Debt |Other |DE |2019-09-30 |5 |12M | Debt |Other |DE |2019-12-31 |2 |12M | Debt |Other |DE |2020-03-31 |3 |12M | Debt |Other |DE |2019-06-30 |4 |12M | </code></pre> <p>And, grouping by year, I need to add the 4 previous values of the column 'Value' with respect to that year, having something like this:</p> <pre><code>Ac |Type |Id |Date |Value |Pe | --------------------------------------------------- Debt |Other |DE |2017-12-31 |5 |12M | Debt |Other |DE |2018-12-31 |4+3+2+5 |12M | Debt |Other |DE |2019-12-31 |6+1+5+2 |12M | Debt |Other |DE |2020-09-30 |5+2+3+4 |12M | </code></pre> <p>With the following conditions:</p> <ol> <li>if it is not possible to sum 3 previous dates because there are not 3 rows with previous dates, leave the one that is already there as in the case of the row with date 2017-12-31 in the example.</li> <li>If the previous rows are not all in the same year, add the value column of those rows and leave in the 'Date' column the last date. As in the case of the row with date 2020-09-30 in the example</li> </ol> <p>]Can you guys help me out?</p>
<python><pandas><datetime>
2023-02-13 13:00:34
1
337
Valeria Arango
75,436,154
10,437,727
VSCode Python debugger throwing ImportError ModuleNotFound
<p>I'm running tests for my project which contains several modules:</p> <pre><code>β”œβ”€β”€ compute_metrics_service β”‚ β”œβ”€β”€ tests β”‚ β”‚ β”œβ”€β”€ integration β”‚ β”œβ”€β”€ test_app.py β”‚ └── __init__.py β”‚ β”‚ └── unit β”‚ └── utils β”œβ”€β”€ data_quality β”‚ β”œβ”€β”€ helpers β”‚ └── tests β”‚ β”œβ”€β”€ integration β”‚ └── unit └── fetch_data_metrics β”œβ”€β”€ helpers └── tests β”œβ”€β”€ integration └── unit </code></pre> <p>For example, I'm trying to debug a test inside of <code>compute_metrics_service/tests/integration</code> with VSCode, but I'm getting the following error:</p> <pre><code>ImportError: 'test_app' module incorrectly imported from '/Users/xxxx/Documents/SOC/data-quality-tool/data_quality/tests/integration'. Expected '/Users/xxxx/Documents/SOC/data-quality-tool/compute_metrics_service/tests/integration'. Is this module globally installed? </code></pre> <p>I've set my PYTHONPATH the following way:</p> <pre class="lang-bash prettyprint-override"><code>export PYTHONPATH=&quot;$PWD/data_quality:$PWD/compute_metrics_service:$PWD/fetch_data_metrics&quot; </code></pre> <p><code>test_app.py</code> is simply a TestCase, similar to the following:</p> <pre class="lang-py prettyprint-override"><code>import unittest import os import logging from datetime import datetime from unittest import mock from unittest.mock import MagicMock, patch, mock_open from compute_metrics_service import app from compute_metrics_service.utils import Helpers class TestMainCompleteness(unittest.TestCase): ... </code></pre> <p>What am I missing?</p> <p>TIA!</p>
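The collision happens because pytest (which the VSCode debugger runs under the hood) imports both `test_app.py` files as the same top-level module `test_app` when the `tests` directories are not packages. Making each `tests` tree a package (or, on recent pytest, running with `--import-mode=importlib`) removes the ambiguity. A hedged sketch that adds any missing `__init__.py` files, demonstrated on a throwaway copy of the layout:

```python
from pathlib import Path
import tempfile

def add_init_files(project_root):
    """Touch an __init__.py in every */tests directory tree (editor sketch)."""
    created = []
    for tests_dir in project_root.glob("*/tests"):
        for d in [tests_dir] + [p for p in tests_dir.rglob("*") if p.is_dir()]:
            init = d / "__init__.py"
            if not init.exists():
                init.touch()
                created.append(init)
    return created

# Demo on a temporary tree mirroring the structure in the question:
root = Path(tempfile.mkdtemp())
(root / "compute_metrics_service" / "tests" / "integration").mkdir(parents=True)
(root / "data_quality" / "tests" / "integration").mkdir(parents=True)
created = add_init_files(root)
```

After that, the two files import as `compute_metrics_service.tests.integration.test_app` and `data_quality.tests.integration.test_app` — distinct module names.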
<python><visual-studio-code><vscode-debugger>
2023-02-13 12:53:34
0
1,760
Fares
75,436,081
5,257,286
String searches in a list when the list contains a string with a space
<p>I'd like to identify a word in a list; however, one of the strings has a space in between and is not recognized. My code:</p> <pre><code>res = [word for word in somestring if word not in myList] myList = [&quot;first&quot;, &quot;second&quot;, &quot;the third&quot;] </code></pre> <p>So when</p> <pre><code>somestring = &quot;test the third&quot; </code></pre> <p>is parsed, then <code>res=&quot;test the third&quot;</code> (it should be <code>&quot;test&quot;</code>).</p> <p>How can I handle string searches in a list if the list contains a string with a space?</p>
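One way to cope with multi-word entries (an editor-added sketch, not from the post) is to strip the known phrases out of the string first — longest first, so `"the third"` wins over any shorter overlapping entry — and only then split the remainder into words:

```python
myList = ["first", "second", "the third"]
somestring = "test the third"

# Remove each known phrase, longest first, then split what is left.
remaining = somestring
for phrase in sorted(myList, key=len, reverse=True):
    remaining = remaining.replace(phrase, " ")
res = remaining.split()
print(res)  # ['test']
```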
<python><python-3.x><string><list>
2023-02-13 12:46:56
2
1,192
pymat
75,435,961
15,485
Matplotlib imshow and secondary x and y axis
<p>Let's say I have a picture taken with a sensor where the pixel size is 1mm. I would like to show the image with <code>imshow</code>: the main axes should show the pixel while the secondary axes should show the mm.</p> <p><code>frassino.png</code> is the following picture</p> <p><a href="https://i.sstatic.net/eURq0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eURq0.png" alt="enter image description here" /></a></p> <pre><code>from matplotlib import pyplot as plt import cv2 import numpy as np a = cv2.imread('frassino.png') fig,ax = plt.subplots(1) ax.imshow(a,aspect='equal') ax.set_xlabel('pixel') ax.set_ylabel('pixel') ax.figure.savefig('1.png') </code></pre> <p><code>1.png</code> is the following picture, all is fine (I need the pixel to be square and so I add the argument <code>aspect='equal'</code>.</p> <p><a href="https://i.sstatic.net/5qMQB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5qMQB.png" alt="enter image description here" /></a></p> <p>Now I add a secondary y axis:</p> <pre><code>v2 = ax.twinx() v2.set_yticks(np.linspace(0,48,12)) v2.set_xlabel('mm') ax.figure.savefig('2.png') </code></pre> <p><code>2.png</code> is the following picture and I have two problems: first, the image is cropped and the upper part of the tree, like the foreground grass, is not visible; second, the <code>mm</code> label is truncated.</p> <p><a href="https://i.sstatic.net/fGvWo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fGvWo.png" alt="enter image description here" /></a></p> <p>Now I add the secondary x axis:</p> <pre><code>h2 = ax.twiny() h2.set_xticks(np.linspace(0,64,8)) h2.set_xlabel('mm') ax.figure.savefig('3.png') </code></pre> <p>The following picture is <code>3.png</code>, the <code>mm</code> label is there but the image is still cropped.</p> <p><a href="https://i.sstatic.net/598zb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/598zb.png" alt="enter image description here" /></a></p> <p>How can 
the crop be avoided?</p> <p>How can the y <code>mm</code> label be fixed?</p>
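Instead of `twinx`/`twiny` (which create independent axes and can disturb the layout), `secondary_xaxis`/`secondary_yaxis` share the parent's data limits — so nothing is cropped — and convert units through a pair of functions. An editor-added sketch with random data standing in for `frassino.png`, using `constrained_layout` so the `mm` labels are not truncated:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless-safe backend
import matplotlib.pyplot as plt

a = np.random.rand(48, 64)  # stand-in for the 64x48-pixel image

fig, ax = plt.subplots(constrained_layout=True)  # leaves room for all labels
ax.imshow(a, aspect="equal")
ax.set_xlabel("pixel")
ax.set_ylabel("pixel")

# With 1 mm per pixel the pixel->mm conversion is the identity, but any
# scale factor works: secondary axes take a (forward, inverse) function pair.
mm_per_px = 1.0
sec_y = ax.secondary_yaxis("right",
                           functions=(lambda px: px * mm_per_px,
                                      lambda mm: mm / mm_per_px))
sec_y.set_ylabel("mm")
sec_x = ax.secondary_xaxis("top",
                           functions=(lambda px: px * mm_per_px,
                                      lambda mm: mm / mm_per_px))
sec_x.set_xlabel("mm")
fig.savefig("with_mm_axes.png")
```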
<python><matplotlib><imshow>
2023-02-13 12:36:29
1
18,835
Alessandro Jacopson
75,435,956
1,436,800
An exception is being raised when testing websocket with POSTMAN
<p>I am implementing web sockets in my Django Project with channels library. When an object is created, name of that object should be sent to a consumer with group name test_consumer_group_1.</p> <pre><code>class MyClass(models.Model): name = models.CharField(max_length=128, unique=True) members = models.ManyToManyField(&quot;Employee&quot;) def save(self, *args, **kwargs): super().save(*args,**kwargs) channel_layer = get_channel_layer() data = {&quot;current_obj&quot;:self.name} async_to_sync(channel_layer.group_send)( &quot;test_consumer_group_1&quot;,{ 'type':'send_notification', 'value':json.dumps(data) } ) </code></pre> <p>This is the code of my consumer:</p> <pre><code>class TestConsumer(WebsocketConsumer): def connect(self): self.room_name=&quot;test_consumer&quot; self.room_group_name = &quot;test_consumer_group_1&quot; async_to_sync(self.channel_layer.group_add)( self.channel_name, self.room_group_name ) self.accept() print('connected..') self.send(text_data=json.dumps({'status':'connected'})) def recieve(self, text_data): print(text_data) def disconnect(self, *args, **kwargs): print('disconnected') def send_notification(self, event): print(&quot;send_notification called&quot;) print(event) </code></pre> <p>But it gives following error when testing the websocket API with POSTMAN:</p> <pre><code> raise TypeError(self.invalid_name_error.format(&quot;Group&quot;, name)) TypeError: Group name must be a valid unicode string with length &lt; 100 containing only ASCII alphanumerics, hyphens, underscores, or periods, not specific.be2251de4bb647c1988845bd460d6971!564c92a792634237bcdba63290554557 WebSocket DISCONNECT /ws/test/ [127.0.0.1:35480] </code></pre> <p>How to fix it?</p>
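The traceback shows a <em>channel</em> name (note the <code>!</code>) being used where a <em>group</em> name is expected: <code>channel_layer.group_add</code> takes <code>(group, channel)</code>, but the <code>connect</code> method above passes <code>(self.channel_name, self.room_group_name)</code> — swapped. Calling it as <code>group_add(self.room_group_name, self.channel_name)</code> should fix it. The sketch below (an editor-added approximation of channels' validation, not its actual code) shows why a channel name fails the group-name check:

```python
import re

# channels validates group names roughly like this: ASCII alphanumerics,
# hyphens, underscores and periods, with length under 100 characters.
GROUP_NAME_RE = re.compile(r"[a-zA-Z0-9\-_.]+")

def is_valid_group_name(name):
    return len(name) < 100 and GROUP_NAME_RE.fullmatch(name) is not None

print(is_valid_group_name("test_consumer_group_1"))  # True
print(is_valid_group_name("specific.abc!def"))       # False: "!" is not allowed
```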
<python><django><django-rest-framework><websocket><django-channels>
2023-02-13 12:35:45
1
315
Waleed Farrukh
75,435,780
478,213
Dynamic number of lambda and apply statements in Pandas
<p>I am trying to create a nested JSON block and came across this awesome solution <a href="https://stackoverflow.com/questions/61781186/pandas-grouping-by-multiple-columns-to-get-a-multi-nested-json">Pandas grouping by multiple columns to get a multi nested Json</a>:</p> <pre><code>test = [df.groupby('cat_a')\ .apply(lambda x: x.groupby('cat_b')\ .apply(lambda x: [x.groupby('cat_c') .apply(lambda x: x[['participants_actual','participants_registered']].to_dict('r') ).to_dict()] ).to_dict() ).to_dict()] import json json_res = list(map(json.dumps, test)) </code></pre> <p>This works well for my use case. However, as I cannot control the dataframe in all cases, there may be more than the three levels noted here.</p> <p>I could easily imagine getting the levels as follows:</p> <pre><code>for c in cols[:-2]: .... perform level grouping </code></pre> <p>However, as each of the lambda and apply functions feeds into the level above, I am not sure how I could write such a statement in a for loop.</p> <p>Is there a path to make this statement more dynamic?</p>
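A recursive helper (an editor-added sketch, slightly different in shape from the quoted one-liner) avoids hard-coding the nesting depth: it groups on the first remaining level and recurses on the rest, emitting records at the bottom.

```python
import pandas as pd

def nest(df, level_cols, value_cols):
    """Recursively build {level1: {level2: ... [records]}} for any depth."""
    if not level_cols:
        return df[value_cols].to_dict("records")
    return {key: nest(group, level_cols[1:], value_cols)
            for key, group in df.groupby(level_cols[0])}

# Toy data with two grouping levels; the same call works for three or more.
df = pd.DataFrame({
    "cat_a": ["x", "x", "y"],
    "cat_b": ["u", "v", "u"],
    "participants_actual": [1, 2, 3],
    "participants_registered": [10, 20, 30],
})
result = nest(df, ["cat_a", "cat_b"],
              ["participants_actual", "participants_registered"])
```

`json.dumps(result)` then serializes the whole nested structure, however many levels the dataframe has.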
<python><pandas>
2023-02-13 12:18:39
2
1,424
NickP
75,435,594
11,167,163
tag_configure does not apply background as expected to tagged row
<p>It seems that I have an issue with <code>tag_configure</code> : nor size &amp; color are taken into account :</p> <pre><code>import tkinter as tk from tkinter import ttk import pandas as pd class ScrollTree(ttk.Treeview): def __init__(self, master,ROW, *args, **kwargs): ttk.Treeview.__init__(self, master, *args, **kwargs) sb = tk.Scrollbar(master, orient='vertical', command=self.yview) # Create a scrollbar sb.grid(row=ROW, column=2, sticky='ns') self.config(yscrollcommand=sb.set) class MainWindow: def __init__(self, master): self.tree_frame = tk.Frame(master) self.tree_frame.pack() self.load_data() def _SQL_ToTable(self,DF,ROW): try: columns_name = list(DF.columns.values.tolist()) DF = list(DF.itertuples(index=False)) print(DF) columns = [f'{columns_name[i]}' for i in range(len(DF[0]))] # List of column names for all the columns of the data tree = ScrollTree(self.tree_frame,ROW, columns=columns, show='headings') tree.grid(row=ROW, column=1) for col in columns: tree.heading(col, text=col, anchor='center') # Add a heading in the given `col` as ID tree.column(col, anchor='center') # Properties for the column with `col` as ID for i in DF: if i[2]==22: Tag = &quot;QTTKO&quot; else: Tag = &quot;QTTOK&quot; tree.insert('', 'end', values=i,tags=(Tag,)) # Insert each tuple into the treeview tree.tag_configure('QTTKO', background='red') tree.tag_configure('QTTOK', background='blue',font=({'family': 'Courier', 'size': 300, 'weight': 'normal'})) except: print(&quot;error&quot;) def load_data(self): data={'Name':['Karan','Rohit','Sahil','Aryan'],'Age':[23,22,21,24],'AGEB':[23,22,21,24],'AGEC':[23,22,21,24]} df=pd.DataFrame(data) self._SQL_ToTable(df,0) if __name__ == &quot;__main__&quot;: root = tk.Tk() window = MainWindow(root) root.mainloop() </code></pre> <p>What am I doing wrong there ?</p> <p>Ouptut :</p> <p><a href="https://i.sstatic.net/K9U0f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K9U0f.png" alt="enter image description here" /></a></p> 
<p>playing with <code>size</code> does not even change the labels size .. And I see no colors on rows.</p> <pre><code>print(tk.TkVersion) --&gt; 8.6 sys.version --&gt; 3.9.7 (default, Sep 16 2021, 16:59:28) [MSC v.1916 64 bit (AMD64)] </code></pre> <p>Also I am working on Spyder</p>
<python><tkinter>
2023-02-13 11:59:10
0
4,464
TourEiffel
75,435,585
756,233
Using Python 3.9.1 and requests 2.25.1 a local connection to a Mongoose HTTP server takes 2 seconds
<p>I am writing a little REST API client using Python, Java and NodeJS. The server is written using the Mongoose HTTP server.</p> <p>With Java and NodeJS every request takes only milliseconds but with Python every request takes 2 seconds.</p> <p>I confirmed that this is not a requests problem by using urllib directly. This also takes 2 seconds per request.</p> <p>I also tried &quot;Connection&quot; &quot;Close&quot;, no change...</p> <p>Any ideas why the request takes 2 seconds with Python but not with Java and NodeJS?</p> <p>My code:</p> <pre><code>import json from urllib import request from datetime import datetime url = &quot;http://localhost:8080/api&quot; req = request.Request(url, method=&quot;POST&quot;) req.add_header('Content-Type', 'application/json') req.add_header(&quot;Connection&quot;, &quot;Close&quot;) myData = { &quot;schema&quot;: &quot;jsonCommand.org/v1&quot;, &quot;requestId&quot;: 1, &quot;api&quot;: &quot;admin&quot;, &quot;apiVersion&quot;: &quot;1.0&quot;, &quot;action&quot;: &quot;pingSession&quot; } data = json.dumps(myData) data = data.encode() for i in range(0, 10): now = datetime.now() print('Current DateTime:', now) with request.urlopen(req, data=data) as response: body = response.read() print(body) </code></pre>
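A fixed ~2 s per request against <code>localhost</code> is often an address-family fallback rather than anything in the HTTP stack: if <code>localhost</code> resolves to <code>::1</code> first and the server only listens on IPv4, Python's urllib pays the failed IPv6 attempt on every connection before falling back, while Java/Node may try IPv4 first. This is an editor-added diagnostic sketch, not a confirmed diagnosis for this setup:

```python
import socket
import time

# Compare how "localhost" and "127.0.0.1" resolve; if "localhost" lists an
# IPv6 address first and the server is IPv4-only, every connection attempt
# pays the IPv6 failure before falling back.
for host in ("localhost", "127.0.0.1"):
    t0 = time.perf_counter()
    infos = socket.getaddrinfo(host, 8080, proto=socket.IPPROTO_TCP)
    families = [family.name for family, *_ in infos]
    print(host, families, f"resolved in {time.perf_counter() - t0:.3f}s")

# If that is the culprit, the simplest fix is an IPv4 literal in the URL:
url = "http://127.0.0.1:8080/api"
```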
<python><python-requests><mongoose-web-server>
2023-02-13 11:58:24
1
11,321
Ray Hulha
75,435,484
248,959
Trying to install GDAL on Ubuntu 20: error in GDAL setup command: use_2to3 is invalid
<p>I'm trying to install GDAL using this command:</p> <p><code>pip install GDAL==3.0.4</code></p> <p>but I'm getting the error below:</p> <p><a href="https://i.sstatic.net/NcJG0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NcJG0.png" alt="enter image description here" /></a></p> <p>I'm using <code>pip install GDAL==3.0.4</code> instead of <code>pip install GDAL</code> (that would install the current latest version 3.6.2) because it looks like I have installed version <code>3.0.4</code> of <code>libgdal-dev</code>:</p> <p><a href="https://i.sstatic.net/4CRBh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4CRBh.png" alt="enter image description here" /></a></p> <p>I'm on Ubuntu 20.04.</p>
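The `use_2to3 is invalid` message comes from setuptools 58+, which removed `use_2to3`; old GDAL source distributions such as 3.0.4 still declare it. A common workaround (an editor-added setup sketch — the old GDAL sdist has no `pyproject.toml`, so its legacy `setup.py` build should pick up the environment's setuptools) is to pin setuptools below 58 in the build environment first:

```shell
pip install "setuptools<58" wheel
pip install GDAL==3.0.4
```

If pip still builds in an isolated environment, adding `--no-build-isolation` to the second command forces it to use the pinned setuptools.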
<python><pip><gdal>
2023-02-13 11:49:11
1
31,891
tirenweb
75,435,464
18,215,498
How to POST an image using Python requests to a FastAPI backend?
<p>I have my own simple API built with FastAPI. It's very simple -&gt; the <code>/predict</code> endpoint expects an image, and it returns some predictions from a neural network. I connected it with a JS frontend and it works very well.</p> <p>Backend snippet:</p> <pre class="lang-py prettyprint-override"><code>@app.post(&quot;/predict&quot;) async def create_upload_file(file: UploadFile): image = await file.read() result = my_net.predict(image) return {&quot;state&quot;: result} </code></pre> <p>Now I'm trying to write a simple script.py with the requests package to upload an image directly from my computer:</p> <pre class="lang-py prettyprint-override"><code>import requests import json url = 'https://MYAPI.com/predict' file = open('images/image.png', 'rb') response = requests.post(url, data=file) print(response) print(response.json()) </code></pre> <p>It returns:</p> <pre class="lang-bash prettyprint-override"><code>&lt;Response [422]&gt; {'detail': [{'loc': ['body', 'file'], 'msg': 'field required', 'type': 'value_error.missing'}]} </code></pre> <p>I probably have to wrap the image with some information, but I have no idea how. I tried 100 different configurations; I also tried to do it with an Insomnia binary file request and got the same error 422.</p>
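The 422 says the server could not find a form field named <code>file</code>: FastAPI's <code>file: UploadFile</code> parameter is populated from a multipart/form-data field with that exact name, so the upload has to go through requests' <code>files=</code> argument, not <code>data=</code>. An editor-added sketch (the URL and path are placeholders from the question):

```python
import requests

URL = "https://MYAPI.com/predict"  # placeholder endpoint from the question

def predict(path):
    # files={"file": ...} makes requests send a multipart body whose field
    # name "file" matches the UploadFile parameter name on the server.
    with open(path, "rb") as f:
        response = requests.post(URL, files={"file": (path, f, "image/png")})
    response.raise_for_status()
    return response.json()

# Inspecting a prepared request (no network needed) shows the field name
# that FastAPI's validation is looking for:
prepared = requests.Request(
    "POST", URL, files={"file": ("image.png", b"fake-bytes", "image/png")}
).prepare()
```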
<python><http><python-requests><fastapi>
2023-02-13 11:47:29
0
533
mcdominik
75,435,423
6,930,340
Create a pd.DataFrame with a minimum number of rows using hypothesis
<p>I'm using the <code>hypothesis</code> library and I would like to create a <code>pd.DataFrame</code> with three columns. Each column may contain integer values, either +1, 0, or -1. The values don't need to be unique. Also, I would like to get at least ten rows.</p> <p>With the following code, <code>hypothesis</code> seems to produce either empty dataframes or a dataframe with only one row.<br /> When adding <code>assume(len(signals) &gt; 10)</code> to the test, <code>hypothesis</code> is not able to find a suitable example.</p> <pre><code>from hypothesis import strategies as st from hypothesis.extra.pandas import columns, data_frames @given( signals=data_frames( columns=columns( [&quot;sec_1&quot;, &quot;sec_2&quot;, &quot;sec_3&quot;], elements=st.integers(min_value=-1, max_value=1), ) ) ) def test_length(signals: pd.DataFrame) -&gt; None: assume(len(signals) &gt; 10) assert len(signals) &gt; 10 </code></pre> <p>What am I doing wrong here?</p>
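One way to guarantee a minimum row count — instead of `assume`, which only rejects the (mostly small) frames hypothesis already generated — is to pass an explicit index strategy with a `min_size`. An editor-added sketch:

```python
import pandas as pd
from hypothesis import given, strategies as st
from hypothesis.extra.pandas import columns, data_frames, range_indexes

# data_frames defaults to a small (possibly empty) index; an explicit
# range_indexes(min_size=...) controls the row count directly.
signals_strategy = data_frames(
    columns=columns(
        ["sec_1", "sec_2", "sec_3"],
        elements=st.integers(min_value=-1, max_value=1),
    ),
    index=range_indexes(min_size=11),
)

@given(signals=signals_strategy)
def test_length(signals: pd.DataFrame) -> None:
    assert len(signals) > 10
```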
<python><python-hypothesis>
2023-02-13 11:43:17
1
5,167
Andi