75,246,480
10,549,044
Sample weights produce contradictory results for binary classification, with higher precision for the less-weighted class
<p>I am using sample_weights to give class 1 a higher weight in the loss function. However, if I do so, I find that class 0 is the one with higher precision, and the model tends to classify most instances as class 1 (the one I gave a higher weight). I expected the model to pay more attention to class 1, and that it would be the class with higher precision. To get higher precision on class 1, I have to weight class 0 instead, so the model now classifies most instances as class 0 but keeps class 1 very precise. Is this normal?</p>
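For context, this behaviour is the usual precision/recall trade-off: pushing a model to predict class 1 more often raises class-1 recall but tends to lower class-1 precision. A toy standard-library illustration (not from the question):

```python
# Toy illustration: a model that predicts class 1 more aggressively sweeps
# more true negatives into its positive predictions, lowering precision.
y_true = [0] * 90 + [1] * 10

# model A: predicts 1 for the true positives plus 5 false alarms
pred_a = [1 if t == 1 else 0 for t in y_true]
pred_a[:5] = [1] * 5  # 5 false positives

# model B (heavily class-1-weighted): predicts 1 for half of everything
pred_b = [1] * 50 + y_true[50:]

def precision(y_true, y_pred, cls):
    # fraction of predictions of `cls` that are actually `cls`
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == cls and t == cls)
    predicted = sum(1 for p in y_pred if p == cls)
    return tp / predicted if predicted else 0.0

print(precision(y_true, pred_a, 1))  # 10/15 ~ 0.67
print(precision(y_true, pred_b, 1))  # 10/60 ~ 0.17, precision drops
```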
<python><tensorflow><keras><deep-learning>
2023-01-26 13:01:22
0
410
ma7555
75,246,436
1,465,726
How to read files in parallel with tf.data.Dataset.from_generator?
<p>I have successfully created a TF dataset using <code>tf.data.Dataset.from_generator</code> that reads several binary files in sequence and converts them to strings. The code looks like this:</p> <pre><code>my_dataset = tf.data.Dataset.from_generator( lambda: _generate_data_points(fnames), tf.string) </code></pre> <p>The generator does something like this:</p> <pre><code>for fname in fnames: for data_point in read_binary_file(fname): yield data_point </code></pre> <p>However, I would like to read the files in parallel using <code>dataset.interleave</code>. Since this operates on an existing dataset, I first created one like this:</p> <pre><code>my_dataset = tf.data.Dataset.list_files(fnames) </code></pre> <p>I then changed my generator to operate on a single file and integrated it into dataset.interleave as follows:</p> <pre><code>my_dataset = my_dataset.interleave( lambda fname: _generate_data_points(fname), cycle_length=8) </code></pre> <p>It seems as if TF is now expecting my generator to operate on Tensors instead of regular types though, because I get the following error message:</p> <pre><code>TypeError: a bytes-like object is required, not 'str' </code></pre> <p>As far as I know, this usually means that we need to wrap with <code>tf.py_function</code>, but when I do that I get:</p> <pre><code>tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: Iterating over a symbolic `tf.Tensor` is not allowed: AutoGraph is disabled in this function. Try decorating it directly with @tf.function. </code></pre> <p>Here’s how I did the wrapping:</p> <pre><code>for data_point in tf.py_function( func=read_binary_file, inp=[file], Tout=tf.string): yield data_point </code></pre> <p>I found <a href="https://stackoverflow.com/questions/62068323/iterating-over-tf-tensor-is-not-allowed-autograph-is-disabled-in-this-function">this question</a> that gets the same error message, and they suggest using <code>tf.map_fn</code>. 
However, I am not applying my function to multiple arguments. Instead, I am returning multiple values by applying it to a single argument.</p> <p>Not sure where to go from here...</p>
<python><tensorflow><tensorflow-hub>
2023-01-26 12:55:52
0
570
niefpaarschoenen
75,246,372
13,061,414
How to save a Python ML model so that it can be run in C# Unity
<p>I have a python script that trains an ml model. I would like to then export this model so that it can be loaded and used in a Unity game written in C#.</p> <p>I know I can't serialize the model using pickle or scikit-learn's joblib because they are not supported by C#. So is there any way to save the model so that it can be deserialised and used in my Unity game?</p>
<python><c#><unity-game-engine><machine-learning>
2023-01-26 12:50:23
1
375
RishtarCode47
75,246,296
8,794,221
Unittest strategy for a function performing an experiment that includes some randomness
<p>What approach should I take to write a unittest for this function?</p> <p>Please note that:</p> <ul> <li>at each execution this function will generate a different list of results for the same input parameters (with a very high probability).</li> <li>the list might be empty at the end of the execution (if we have reached the maximum number of tries without finding a single result that is seen as <em>valid</em>)</li> <li><code>NUMBER_OF_RESULTS</code> and <code>MAX_TRIES</code> are <code>&gt; 0</code> and <code>MAX_TRIES</code> is way larger than <code>NUMBER_OF_RESULTS</code></li> </ul> <pre><code>def perform_experiment(some_parameters) -&gt; list: results = [] for i in range(MAX_TRIES): result_to_validate = random_attempt() if valid(result_to_validate): results.append(result_to_validate) if len(results) &gt;= NUMBER_OF_RESULTS: break return results </code></pre> <p>I was thinking of implementing the unittest in the following way:</p> <ol> <li>When the list of results is NOT empty, then I can simply go through all the elements and <code>assert</code> that each of them is valid, which isn't difficult to write.</li> <li>If the result list is empty, I would like to make sure that <code>perform_experiment</code> has run until <code>i</code> has reached <code>MAX_TRIES</code>; however, the variable <code>i</code> is not accessible outside of the function.</li> </ol> <p>I am not sure how I could test point 2 in a unittest. Should I change this into making sure that the function under test has run for at least a certain amount of time, instead of checking that <code>i</code> has reached the <code>MAX_TRIES</code> threshold? Is using a <code>seed</code> the only option here? What can be done if we can't use one? Or can we completely omit point <code>2.</code> from the unittest?</p>
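One way to test point 2 (a sketch with stand-in definitions, since the original module isn't shown) is to patch `random_attempt` and `valid` with `unittest.mock` and assert on the mock's `call_count`:

```python
import random
import unittest
from unittest import mock

MAX_TRIES = 100
NUMBER_OF_RESULTS = 5

# Stand-ins for the functions from the question.
def random_attempt():
    return random.random()

def valid(x):
    return x > 0.5

def perform_experiment():
    results = []
    for i in range(MAX_TRIES):
        result_to_validate = random_attempt()
        if valid(result_to_validate):
            results.append(result_to_validate)
        if len(results) >= NUMBER_OF_RESULTS:
            break
    return results

class TestPerformExperiment(unittest.TestCase):
    def test_empty_result_means_max_tries_exhausted(self):
        # Force every attempt to be invalid; the loop must then run exactly
        # MAX_TRIES times, which is observable via the mock's call_count
        # without needing access to the loop variable i.
        with mock.patch(__name__ + ".valid", return_value=False), \
             mock.patch(__name__ + ".random_attempt") as attempt:
            attempt.return_value = 0.0
            results = perform_experiment()
        self.assertEqual(results, [])
        self.assertEqual(attempt.call_count, MAX_TRIES)
```

This removes the randomness entirely for the "empty result" branch, so no seed is needed.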
<python><unit-testing>
2023-01-26 12:42:59
1
12,516
Allan
75,246,092
4,752,223
How to have an ipywidgets.ToggleButtons with multiple buttons using the same label
<p>By default, all the <code>options</code> for the buttons must be unique.</p> <pre><code>options=[&quot;label1&quot;,&quot;label2&quot;,...] </code></pre> <p>You can, however, provide a list of tuples like this:</p> <pre><code>options=[(&quot;label1&quot;,value1), (&quot;label2&quot;,value2),] </code></pre> <p>And that is accepted.</p> <p>However, if you provide this:</p> <pre><code>options=[(&quot;label1&quot;,value1), (&quot;label2&quot;,value2),(&quot;label1&quot;,value3)] </code></pre> <p>It displays well and, behind the scenes, it provides the correct <code>.value</code> and <code>.index</code>, but visually it selects the first &quot;label1&quot; when you click the second &quot;label1&quot;.</p> <p><a href="https://i.sstatic.net/QlqCw.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QlqCw.gif" alt="enter image description here" /></a></p> <p>How can I work around that limitation?</p> <p>Code:</p> <pre><code>import ipywidgets as widgets realoptions=&quot;One word in different location is possibly a different word&quot;.split() options=realoptions t=widgets.ToggleButtons( options=options, description='Choose:', disabled=False, button_style='', # 'success', 'info', 'warning', 'danger' or '' #tooltips=['Description of b1', 'Description of b2', ... ], # icons=['check'] * 3 ) t </code></pre>
<python><ipywidgets>
2023-01-26 12:24:37
1
2,928
Rub
75,246,007
7,800,760
Github Actions: create a pull request after formatting code with psf/black
<p>In my github formatting workflow I have the following step:</p> <pre><code>- name: Format with black uses: psf/black@stable id: action_black with: options: &quot;--verbose&quot; src: &quot;./src&quot; </code></pre> <p>after which I have copied from another action template a step which should open a pull request if black did format any of its target files:</p> <pre><code>- name: Create Pull Request if: steps.action_black.outputs.is_formatted == 'true' uses: peter-evans/create-pull-request@v3 with: token: ${{ secrets.GITHUB_TOKEN }} title: &quot;Format Python code with psf/black push&quot; commit-message: &quot;:art: Format Python code with psf/black&quot; body: | There appear to be some python formatting errors in ${{ github.sha }}. This pull request uses the [psf/black](https://github.com/psf/black) formatter to fix these issues. base: ${{ github.head_ref }} # Creates pull request onto pull request or commit branch branch: actions/black </code></pre> <p>but the if line, which I do not fully understand, is wrong and therefore the rest of this step is skipped.</p> <p>Can anyone please help me understand how to tie the official <strong>psf/black</strong> action to <strong>peter-evans/create-pull-request@v3</strong> in such a way that the pull request is activated <strong>ONLY</strong> if any file has been changed/formatted? Thanks.</p>
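A common workaround (a sketch; I have not verified which outputs, if any, the psf/black action exposes) is to let git detect whether black changed anything, and gate the PR step on that instead:

```yaml
- name: Format with black
  uses: psf/black@stable
  with:
    options: "--verbose"
    src: "./src"

- name: Detect changes            # don't rely on an output from the black action
  id: git-check
  run: |
    if [ -n "$(git status --porcelain)" ]; then
      echo "modified=true" >> "$GITHUB_OUTPUT"
    else
      echo "modified=false" >> "$GITHUB_OUTPUT"
    fi

- name: Create Pull Request
  if: steps.git-check.outputs.modified == 'true'
  uses: peter-evans/create-pull-request@v3
  # ... same `with:` block as in the question ...
```

`git status --porcelain` prints one line per modified file, so the output is `true` exactly when black reformatted something.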
<python><github-actions>
2023-01-26 12:15:35
1
1,231
Robert Alexander
75,245,944
2,038,360
create stacked plot of percentages
<p>I have a dataframe containing the percentage of people in my dataset stratified by Gender and Age.</p> <pre><code>df_test = pd.DataFrame(data=[['Male','16-24',10], ['Male','25-34',5], ['Male','35-44',2], ['Female','16-24',3], ['Female','25-34',60], ['Female','35-444',20], ], columns=['Gender','Age','Percentage']) </code></pre> <p>First I create a plot showing the percentages of Male and Female in the dataset:</p> <pre><code>df_test.groupby('Gender').sum().plot(kind='bar',rot=45) </code></pre> <p><a href="https://i.sstatic.net/3RlJu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3RlJu.png" alt="enter image description here" /></a></p> <p>Now I would like to add within each bar the percentage of people in the age ranges in a stacked kind of way... Could you help?</p>
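One possible approach (a sketch; the `'35-444'` typo in the question's data is written as `'35-44'` here): pivot `Age` into columns, then let pandas draw the stacked bars itself:

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

df_test = pd.DataFrame(
    data=[['Male', '16-24', 10], ['Male', '25-34', 5], ['Male', '35-44', 2],
          ['Female', '16-24', 3], ['Female', '25-34', 60], ['Female', '35-44', 20]],
    columns=['Gender', 'Age', 'Percentage'])

# one column per age range, one row per gender
pivoted = df_test.pivot(index='Gender', columns='Age', values='Percentage')

# stacked=True stacks the age-range segments inside each gender bar
ax = pivoted.plot(kind='bar', stacked=True, rot=45)
plt.tight_layout()
```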
<python><pandas><matplotlib><plot><seaborn>
2023-01-26 12:09:14
1
5,589
gabboshow
75,245,922
11,999,452
Manim: Is there a nice way to move and scale a bunch of mobjects?
<p>I'm using the manim module in Python to display some decision trees. First I want to show Tree_1 as it is in the code below. Then I want to scale it down and shift it to the left. Next I want Tree_2 to appear where Tree_1 is and then move to the upper right quadrant of the screen. Also the <code>PURE_RED</code> lines should move from tilted (as in Tree_1) to straight (as in Tree_2 in the code below). The same should then happen with Tree_3, just in the bottom right quadrant.</p> <p>Now I could do it by figuring out all the points and then hardcoding it. But I wanted to ask if there is a nicer way. Maybe one where I could define points in a local coordinate system and then just scale and move the whole tree.</p> <p>Also, I'm sorry if this is considered common knowledge, but I'm super new to manim.</p> <pre><code>from manim import * class Tree_1(Scene): def construct(self): line_1 = Line([0,3,0], [-6,0,0]) line_2 = Line([0,3,0], [0,0,0]) line_3 = Line([0,3,0], [6,0,0]) self.play( Create(line_1), Create(line_2), Create(line_3), ) line_1l = Line([-6, 0, 0], [-7,-3, 0]).set_color(PURE_GREEN) line_1r = Line([-6, 0, 0], [-5,-3, 0]).set_color(PURE_RED) line_2l = Line([ 0, 0, 0], [-1,-3, 0]).set_color(PURE_GREEN) line_2r = Line([ 0, 0, 0], [ 1,-3, 0]).set_color(PURE_RED) line_3l = Line([ 6, 0, 0], [ 5,-3, 0]).set_color(PURE_GREEN) line_3r = Line([ 6, 0, 0], [ 7,-3, 0]).set_color(PURE_RED) self.play( Create(line_1l), Create(line_1r), Create(line_2l), Create(line_2r), Create(line_3l), Create(line_3r), ) class Tree_2(Scene): def construct(self): line_1 = Line([0,3,0], [-6,0,0]) line_2 = Line([0,3,0], [0,0,0]) line_3 = Line([0,3,0], [6,0,0]) self.play( Create(line_1), Create(line_2), Create(line_3), ) line_4 = Line([-6, 0, 0], [-6,-3, 0]).set_color(PURE_RED) line_5 = Line([ 0, 0, 0], [-0,-3, 0]).set_color(PURE_RED) line_6 = Line([ 6, 0, 0], [ 6,-3, 0]).set_color(PURE_RED) self.play( Create(line_4), Create(line_5), Create(line_6), ) class Tree_3(Scene): def construct(self): line_1 = Line([0,3,0], [-6,0,0]) line_2 = Line([0,3,0], [0,0,0]) line_3 = Line([0,3,0], [6,0,0]) self.play( Create(line_1), Create(line_2), Create(line_3), ) line_4 = Line([-6, 0, 0], [-6,-3, 0]).set_color(PURE_GREEN) line_5 = Line([ 0, 0, 0], [-0,-3, 0]).set_color(PURE_GREEN) line_6 = Line([ 6, 0, 0], [ 6,-3, 0]).set_color(PURE_GREEN) self.play( Create(line_4), Create(line_5), Create(line_6), ) </code></pre>
<python><manim>
2023-01-26 12:07:52
1
400
Akut Luna
75,245,754
19,580,067
Tried to update database table values with Python's pyodbc, but the changes are not reflected
<p>I just created a new table in the database with empty columns in varchar(max) datatype. Tried to update the column values using pyodbc but the changes are not getting reflected in the database table.</p> <p>Any suggestions, what am I doing wrong here?</p> <p>My Code:</p> <pre><code>#Code to connect database with the notebook conn_str = pyodbc.connect( r'Driver=SQL Server;' r'Server=ALAP;' r'Database=master;' r'Trusted_Connection=yes;' ) cursor = conn_str.cursor() cursor.execute(&quot;UPDATE tbl_EMAIL_ENQUIRY SET fld_EMAIL_BODY = ? &quot;, 'Hello') conn_str.commit() </code></pre>
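A diagnostic sketch (using `sqlite3` as a stand-in for pyodbc, since the DB-API pattern is identical): check `cursor.rowcount` after the UPDATE and re-read the value, and double-check that the connection targets the database that actually holds the table; the question's connection string points at `master`:

```python
import sqlite3

# sqlite3 stands in for pyodbc here; the execute/commit pattern is the same.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tbl_EMAIL_ENQUIRY (fld_EMAIL_BODY TEXT)")
cur.execute("INSERT INTO tbl_EMAIL_ENQUIRY VALUES (NULL)")

cur.execute("UPDATE tbl_EMAIL_ENQUIRY SET fld_EMAIL_BODY = ?", ("Hello",))
updated = cur.rowcount          # 0 here would mean the UPDATE touched nothing
conn.commit()                   # without commit, other sessions never see the change

value = cur.execute("SELECT fld_EMAIL_BODY FROM tbl_EMAIL_ENQUIRY").fetchone()[0]
print(updated, value)
```

If `rowcount` is nonzero and the commit succeeds but SSMS still shows old data, the script is almost certainly updating a table in a different database than the one being inspected.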
<python><sql-server><ssms><pyodbc>
2023-01-26 11:54:20
1
359
Pravin
75,245,751
5,431,734
Why does `conda update conda` update a whole lot of other packages?
<p>I ran <code>conda update conda</code> and the prompt came back asking to download and update several other packages, like for example:</p> <ul> <li><code>pandas</code> from <code>1.4.3</code> to <code>1.5.3</code></li> <li><code>numba</code> from <code>0.55.2</code> to <code>0.56.4</code></li> <li><code>dask</code> from <code>2022.7</code> to <code>2023.1</code></li> </ul> <p>and lots of other packages, too many to mention. <code>Conda</code> itself, is currently at <code>22.9.0</code> and will be updated to <code>22.11.1</code>.</p> <p>I thought <code>conda update conda</code> updates conda, the package/environment manager, to the latest version. Why does it want to update individual packages in my env?</p> <p>Edit: The actual output, following Merv's comment, is <a href="https://www.dropbox.com/s/dw3i87wk6qfgunr/conda.log?dl=0" rel="nofollow noreferrer">here</a></p>
<python><anaconda><conda>
2023-01-26 11:53:50
1
3,725
Aenaon
75,245,748
9,102,437
Can't save a file in moviepy python
<p>I have seen a few questions about this here, but none of them solved the issue for me, so maybe my case is different in some way.</p> <p>I am trying to achieve a simple result: read a file and write it. Here is the code:</p> <pre class="lang-py prettyprint-override"><code>import os os.environ['FFMPEG_BINARY'] = '/usr/bin/ffmpeg' from moviepy.editor import VideoFileClip name = 'test.mp4' clip = VideoFileClip('./vids/'+name) clip.write_videofile('./vids/'+name, codec='libx264', fps=30) </code></pre> <p>This code comes up with an error:</p> <pre class="lang-bash prettyprint-override"><code>---&gt; 88 '-r', '%.02f' % fps, 89 '-an', '-i', '-' 90 ] 91 if audiofile is not None: 92 cmd.extend([ 93 '-i', audiofile, 94 '-acodec', 'copy' 95 ]) TypeError: must be real number, not NoneType </code></pre> <p>You may notice that I have set the environment variable for <code>ffmpeg</code> (I have also changed that in <code>configure_defaults.py</code>). This is because it was suggested in other questions. Also based on them I have run the following commands before running the code:</p> <pre><code>sudo apt -y update sudo apt -y install ffmpeg pip install decorator pip install moviepy --upgrade pip install ffmpeg --upgrade </code></pre> <p>I am using a <code>Debian GNU/Linux 10 (buster)</code> machine, and the versions of <code>moviepy</code> and <code>ffmpeg</code> are <code>1.0.3</code> and <code>4.1.10-0+deb10u1</code> respectively.</p> <p>Nothing seems to be helping to solve this. What am I missing here?</p>
<python><python-3.x><linux><ffmpeg><moviepy>
2023-01-26 11:53:34
1
772
user9102437
75,245,739
12,807,756
Python: merge a list of dicts into one based on a common key
<p>I have this list of dicts:</p> <pre><code>data = [{'org_id': 'AGO-cbgo', 'ws_name': 'finops_enricher-nonprod', 'ws_id': 'ws-CTvV7QysPeY4Gt1Q', 'current_run': None}, {'org_id': 'AGO-cbgo', 'ws_name': 'finops_enricher-prod', 'ws_id': 'ws-s4inidN9aDxELE4a', 'current_run': None}, {'org_id': 'AGO-cbgo', 'ws_name': 'finops_enricher-preprod', 'ws_id': 'ws-fvyKv7m4FRYf8v5o', 'current_run': None}, {'org_id': 'AGO-cbgo', 'ws_name': 's3_dlp-getd_sherlock-prod', 'ws_id': 'ws-XpzzptzGHL2YNjsL', 'current_run': None}, {'org_id': 'AGO-cbgo', 'ws_name': 's3_dlp-getd_sherlock-nonprod', 'ws_id': 'ws-dksk8nnXTjzLWmRn', 'current_run': 'run-osSNuCtt5ULHPBus'}, ] </code></pre> <p>I need to have this result:</p> <pre><code>result = {'org_id': 'AGO-cbgo', 'ws': [ {'ws_name': 'finops_enricher-nonprod', 'ws_id': 'ws-CTvV7QysPeY4Gt1Q', 'current_run': None}, {'ws_name': 'finops_enricher-preprod', 'ws_id': 'ws-fvyKv7m4FRYf8v5o', 'current_run': None}, {'ws_name': 's3_dlp-getd_sherlock-prod', 'ws_id': 'ws-XpzzptzGHL2YNjsL', 'current_run': None}, {'ws_name': 's3_dlp-getd_sherlock-nonprod', 'ws_id': 'ws-dksk8nnXTjzLWmRn', 'current_run': 'run-osSNuCtt5ULHPBus'} ] } </code></pre> <p>Any idea how to achieve this? I played around with collections and defaultdict, but without success.</p>
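One way to do this with `collections.defaultdict` (a sketch, shown on a shortened copy of the data; it produces one result dict per `org_id`):

```python
from collections import defaultdict

data = [
    {'org_id': 'AGO-cbgo', 'ws_name': 'finops_enricher-nonprod',
     'ws_id': 'ws-CTvV7QysPeY4Gt1Q', 'current_run': None},
    {'org_id': 'AGO-cbgo', 'ws_name': 'finops_enricher-prod',
     'ws_id': 'ws-s4inidN9aDxELE4a', 'current_run': None},
]

grouped = defaultdict(list)
for entry in data:
    entry = dict(entry)             # copy, so the input list stays untouched
    org_id = entry.pop('org_id')    # remove the grouping key from the inner dict
    grouped[org_id].append(entry)

# one result dict per org_id
results = [{'org_id': org_id, 'ws': ws} for org_id, ws in grouped.items()]
```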
<python><python-3.x><list><dictionary>
2023-01-26 11:52:28
2
759
WorkoutBuddy
75,245,691
11,622,712
How to calculate an accumulated value conditionally?
<p>This question is based on <a href="https://stackoverflow.com/questions/75240753/how-to-calculate-an-an-accumulated-value-conditionally">this thread</a>.</p> <p>I have the following dataframe:</p> <pre><code>diff_hours stage sensor 0 0 20 0 0 21 0 0 21 1 0 22 5 0 21 0 0 22 0 1 20 7 1 23 0 1 24 0 3 25 0 3 28 6 0 21 0 0 22 </code></pre> <p>I need to calculate an accumulated value of <code>diff_hours</code> while <code>stage</code> is growing. When stage drops to 0, the accumulated value <code>acc_hours</code> should restart at 0 even though <code>diff_hours</code> might not be equal to 0.</p> <p>The proposed solution is this one:</p> <pre><code>blocks = df['stage'].diff().lt(0).cumsum() df['acc_hours'] = df['diff_hours'].groupby(blocks).cumsum() </code></pre> <p>Output:</p> <pre><code> diff_hours stage sensor acc_hours 0 0 0 20 0 1 0 0 21 0 2 0 0 21 0 3 1 0 22 1 4 5 0 21 6 5 0 0 22 6 6 0 1 20 6 7 7 1 23 13 8 0 1 24 13 9 0 3 25 13 10 0 3 28 13 11 6 0 21 6 12 0 0 22 6 </code></pre> <p>In row 11 the value of <code>acc_hours</code> is equal to 6. I need it to restart at 0, because <code>stage</code> dropped from <code>3</code> back to <code>0</code> in row 11.</p> <p>The expected output:</p> <pre><code> diff_hours stage sensor acc_hours 0 0 0 20 0 1 0 0 21 0 2 0 0 21 0 3 1 0 22 1 4 5 0 21 6 5 0 0 22 6 6 0 1 20 6 7 7 1 23 13 8 0 1 24 13 9 0 3 25 13 10 0 3 28 13 11 6 0 21 0 12 0 0 22 0 </code></pre> <p>How can I implement this logic?</p>
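One possible refinement of the proposed solution (a sketch): zero out `diff_hours` on the row that opens each new block before accumulating, so the restart itself contributes 0:

```python
import pandas as pd

df = pd.DataFrame({
    "diff_hours": [0, 0, 0, 1, 5, 0, 0, 7, 0, 0, 0, 6, 0],
    "stage":      [0, 0, 0, 0, 0, 0, 1, 1, 1, 3, 3, 0, 0],
    "sensor":     [20, 21, 21, 22, 21, 22, 20, 23, 24, 25, 28, 21, 22],
})

# a new block starts whenever stage drops
blocks = df["stage"].diff().lt(0).cumsum()

# rows that open a new block (excluding the very first block)
starts = blocks.ne(blocks.shift()) & (blocks > 0)

# mask the opening row's diff_hours to 0, then accumulate per block
df["acc_hours"] = df["diff_hours"].mask(starts, 0).groupby(blocks).cumsum()
```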
<python><pandas>
2023-01-26 11:47:59
1
2,998
Fluxy
75,245,387
9,182,743
for each day, count unique users for day and day-1, grouped by third column
<p>I have a dataset of:</p> <ul> <li><strong>day</strong> -&gt; the day, as an integer, on which the user did an action (e.g. day = 1)</li> <li><strong>user_id</strong> -&gt; unique id identifying each user (e.g. user_id = 'a')</li> <li><strong>actions</strong> -&gt; type of action taken (e.g. action = 1)</li> </ul> <p><strong>Objective:</strong></p> <ul> <li>For day=n:</li> <li>For each action:</li> <li>For days = (n, n-1) (today &amp; yesterday)</li> <li>Count number of unique users that performed said action.</li> </ul> <p>With the dataset below, for example:</p> <p><strong>Q</strong>: &quot;How many users on day 2 did action 1 on day 2 and day 2-1?&quot;</p> <ul> <li>on day = 2.</li> <li>for action =1</li> <li>count of unique users for day = 2, day =1 -&gt; a,b,c = 3</li> </ul> <p><strong>My current solution</strong></p> <p>I have made a solution with two for loops. However, I think there is a better solution that I am missing, using groupby/apply/rolling, but I am unable to find a more concise one.</p> <p>Here is the full code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.DataFrame( { &quot;day&quot; : [ 0, 1, 1, 2, 2, 3 , 1, 2, 4, 4, 5], &quot;user_id&quot;: ['a','a','b','b','c', 'c', 'a', 'b', 'a', 'b', 'c'], &quot;actions&quot; : [ 1, 1 , 1, 1, 1, 1, 2, 2, 2, 2, 2] } ) # current solution with 2 for loops. unique_dictionary = {'action': [], 'day': [], 'unique_users_last_n_days': []} # store the results n_days = 1 # change the days previous you look at. for action in (list(df.actions.unique())): for day in (sorted(list(df.day.unique()))): mask_last_n_days = (day - df[&quot;day&quot;] &gt;=0) &amp; (day - df[&quot;day&quot;] &lt;= n_days) #only look at values that meet condition. mask_action = df['actions'] == action unique_users_last_n_days = df[(mask_action) &amp; (mask_last_n_days)][&quot;user_id&quot;].nunique() # get the unique users in the condition # store result in dictionary.
unique_dictionary['action'].append(action) unique_dictionary['day'].append(day) unique_dictionary['unique_users_last_n_days'].append(unique_users_last_n_days) df_unique_users_last_n_days = pd.DataFrame(unique_dictionary) print (df_unique_users_last_n_days) -OUT action day unique_users_last_n_days 0 1 0 1 1 1 1 2 2 1 2 3 3 1 3 2 4 1 4 1 5 1 5 0 6 2 0 0 7 2 1 1 8 2 2 2 9 2 3 1 10 2 4 2 11 2 5 3 </code></pre> <p>The solution should work with missing days in the day column.</p>
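A sketch of one more vectorized approach (an assumption, not the only way): build a day-by-user presence table per action, reindex over the full day range so missing days are handled, and use a rolling window to mark users seen in the last `n_days + 1` days:

```python
import pandas as pd

df = pd.DataFrame({
    "day":     [0, 1, 1, 2, 2, 3, 1, 2, 4, 4, 5],
    "user_id": ['a', 'a', 'b', 'b', 'c', 'c', 'a', 'b', 'a', 'b', 'c'],
    "actions": [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
})

n_days = 1
all_days = range(df["day"].min(), df["day"].max() + 1)

rows = []
for action, grp in df.groupby("actions"):
    # day x user presence table, with rows for days that had no activity
    present = (pd.crosstab(grp["day"], grp["user_id"])
                 .gt(0).astype(int)
                 .reindex(all_days, fill_value=0))
    # a user counts on day n if present on any of days n-n_days .. n
    seen = present.rolling(n_days + 1, min_periods=1).max()
    counts = seen.sum(axis=1).astype(int)
    rows.append(pd.DataFrame({"action": action,
                              "day": counts.index,
                              "unique_users_last_n_days": counts.values}))

out = pd.concat(rows, ignore_index=True)
```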
<python><pandas>
2023-01-26 11:18:17
1
1,168
Leo
75,245,222
20,311,786
Why aren't two `zip` objects equal if the underlying data is equal?
<p>Suppose we create two <code>zip</code>s from lists and tuples, and compare them, like so:</p> <pre><code>&gt;&gt;&gt; x1=[1,2,3] &gt;&gt;&gt; y1=[4,5,6] &gt;&gt;&gt; x2=(1,2,3) &gt;&gt;&gt; y2=(4,5,6) &gt;&gt;&gt; w1=zip(x1,y1) &gt;&gt;&gt; w2=zip(x2,y2) &gt;&gt;&gt; w1 == w2 False </code></pre> <p>But using <code>list</code> on each <code>zip</code> shows the same result:</p> <pre><code>&gt;&gt;&gt; list(w1) [(1, 4), (2, 5), (3, 6)] &gt;&gt;&gt; list(w2) [(1, 4), (2, 5), (3, 6)] </code></pre> <p>Why don't they compare equal, if the contents are equal?</p>
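A short demonstration of why: `zip` objects are iterators that do not define a content-based `__eq__`, so `==` falls back to identity comparison; materializing them first compares the contents:

```python
x1, y1 = [1, 2, 3], [4, 5, 6]
x2, y2 = (1, 2, 3), (4, 5, 6)

# Two distinct zip objects are never equal: `==` compares identity here.
assert zip(x1, y1) != zip(x2, y2)

# Materialize both sides to compare the contents instead.
assert list(zip(x1, y1)) == list(zip(x2, y2))

# Caveat: list() consumes the iterator, so a second list() yields [].
w = zip(x1, y1)
list(w)
assert list(w) == []
```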
<python><python-zip>
2023-01-26 11:00:38
1
646
newview
75,245,104
1,436,800
How to raise 404 as a status code in Serializer Validate function in DRF?
<p>I have written a validate() function inside my serializer. By default, Serializer Errors return 400 as a status code. But I want to return 404. I tried this:</p> <pre><code>class MySerializer(serializers.ModelSerializer): class Meta: model = models.MyClass fields = &quot;__all__&quot; def validate(self, data): current_user = self.context.get(&quot;request&quot;).user user = data.get(&quot;user&quot;) if user!=current_user: raise ValidationError({'detail': 'Not found.'}, code=404) return data </code></pre> <p>But it still returns 400 as a response in status code. How to do it?</p>
<python><django><django-rest-framework><django-serializer><django-validation>
2023-01-26 10:49:24
2
315
Waleed Farrukh
75,245,079
4,505,301
Python concurrent.futures: large number of inputs sits idle
<p>I am processing a large number of files (tens of millions) using Python's concurrent.futures. Issuing a small number of inputs works fine; however, when the input size increases, the processes just don't start. The code below executes only when the input size is small, e.g. 20_000.</p> <pre><code>import concurrent.futures import math def some_math(x): y = 3*x**2 + 5*x + 7 return math.log(y) inputs = range(1_000_000) results = [] with concurrent.futures.ProcessPoolExecutor() as executor: for result in executor.map(some_math, inputs): results.append(result) </code></pre> <p>I have tried to overcome this by submitting jobs in smaller batches as below:</p> <pre><code>import concurrent.futures import math def some_math(x): y = 3*x**2 + 5*x + 7 return math.log(y) up_to = 220_000 batch_size = 20_000 results = [] for idx in range(0, up_to, batch_size): low = idx high = min(low + batch_size, up_to) inputs = range(low, high) with concurrent.futures.ProcessPoolExecutor() as executor: for result in executor.map(some_math, inputs): results.append(result) </code></pre> <p>But again, it either does not start at all, or gets stuck after a few iterations of the outer for loop.</p> <p>My Python version is 3.10.7. What is the issue here?</p>
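The underlying cause can vary, but one commonly suggested mitigation (a sketch, not a guaranteed fix) is the `chunksize` parameter of `Executor.map`: with the default of 1, every input becomes its own inter-process task, which generates enormous scheduling overhead for millions of items; a larger chunk batches inputs per message:

```python
import concurrent.futures
import math

def some_math(x):
    y = 3 * x**2 + 5 * x + 7
    return math.log(y)

def run(n, chunksize=10_000):
    # chunksize batches inputs per inter-process message; the default of 1
    # creates one task per input, which can overwhelm the executor.
    with concurrent.futures.ProcessPoolExecutor() as executor:
        return list(executor.map(some_math, range(n), chunksize=chunksize))

if __name__ == "__main__":
    results = run(100_000)
    print(len(results))
```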
<python><python-3.x><multiprocessing><concurrent.futures>
2023-01-26 10:46:42
1
1,562
meliksahturker
75,244,704
9,872,147
Python reshape list that has no exact square root
<p>I'm trying to reshape a numpy array with a length of 155369 using <code>numpy.reshape</code> but since 155369 has no exact square root we round it down and the reshape function gives an error <code>ValueError: cannot reshape array of size 155369 into shape (394, 394)</code></p> <pre class="lang-py prettyprint-override"><code>size = int(numpy.sqrt(index)) reshaped = numpy.reshape(data[:index], (size, size)) </code></pre> <p>How can this array be reshaped correctly?</p>
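Two possible interpretations, sketched below: trim the array to the largest perfect square that fits, or pad it up to the next perfect square:

```python
import numpy as np

data = np.arange(155369)

# Option 1: trim to the largest perfect square, discarding the tail.
size = int(np.sqrt(data.size))               # 394
reshaped = data[:size * size].reshape(size, size)

# Option 2: zero-pad up to the next perfect square, keeping all values.
size_up = int(np.ceil(np.sqrt(data.size)))   # 395
padded = np.pad(data, (0, size_up**2 - data.size)).reshape(size_up, size_up)
```

Which option is right depends on whether dropping the last `155369 - 394**2 = 133` values is acceptable.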
<python><numpy>
2023-01-26 10:10:51
1
342
Dimo Dimchev
75,244,698
2,411,173
postgres query for overlapping integer time intervals
<p>I have a Postgres table of the form</p> <pre><code>from_secs, to_secs, value 10 20 1 12 50 2 .... </code></pre> <p>Now, at query time, I get a list of time intervals <code>[(from_secs1, to_secs1), (from_secs2, to_secs2), ...]</code> and I need to get all the rows whose <code>(from_secs, to_secs)</code> overlaps with at least one of the intervals in the list.</p> <p>How can I do that?</p> <p><strong>EXAMPLE:</strong></p> <p>Taking as an example the above table and an input list of <code>[(1, 11), (55, 100)]</code>:</p> <p>The query should return the first row of the table, as it is the only one that overlaps with at least one interval in the list.</p>
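One way to build such a query from Python (a sketch; the table name `my_table` and the psycopg2-style `%s` placeholders are assumptions). Two closed intervals `[a, b]` and `[f, t]` overlap iff `f <= b AND t >= a`, so one such condition per input interval, OR-ed together, does the job:

```python
def build_overlap_query(intervals):
    """Build a parameterised query returning rows whose
    [from_secs, to_secs] overlaps at least one (a, b) interval."""
    # intervals [a, b] and [from_secs, to_secs] overlap
    # iff from_secs <= b AND to_secs >= a
    condition = " OR ".join(
        ["(from_secs <= %s AND to_secs >= %s)"] * len(intervals))
    params = [bound for (a, b) in intervals for bound in (b, a)]
    sql = f"SELECT from_secs, to_secs, value FROM my_table WHERE {condition}"
    return sql, params

sql, params = build_overlap_query([(1, 11), (55, 100)])
# cursor.execute(sql, params)   # with e.g. psycopg2
```

For the example input, the row `(10, 20)` matches `(1, 11)` since `10 <= 11` and `20 >= 1`, while `(12, 50)` matches neither interval.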
<python><postgresql>
2023-01-26 10:10:26
1
17,747
Donbeo
75,244,664
11,240,107
How to close session after stopping script with Selenium Remote Webdriver?
<p>I have a script where I use <code>selenium</code> with the remote webdriver in order to use my personal endpoint rather than Chrome or Firefox.</p> <p>My problem is that when I quit my script with Ctrl + C while it's running, the <code>selenium</code> session is not closed automatically but continues to live until the timeout period is reached (which is 30 minutes for me). Also I know that in my script I can end the session with <code>driver.quit()</code></p> <p>What I would like is to close the session as soon as the script is stopped.</p>
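A sketch of one pattern for this (with a hypothetical stand-in driver class so the snippet is self-contained; with selenium you would register the real driver object). Ctrl + C raises `KeyboardInterrupt`, which unwinds the stack normally, so both `finally` blocks and `atexit` handlers run:

```python
import atexit

# Hypothetical stand-in; with selenium, `driver` would be the Remote webdriver
# and you would register the real method: atexit.register(driver.quit).
class FakeDriver:
    def __init__(self):
        self.closed = False

    def quit(self):
        self.closed = True

driver = FakeDriver()
atexit.register(driver.quit)   # runs on normal exit and on Ctrl + C

try:
    pass  # ... script body ...
finally:
    # belt and braces: also quit when the body finishes or raises
    driver.quit()
```

Calling `quit()` twice is harmless with selenium; the second call is a no-op on an already-closed session. Note neither mechanism fires if the process is killed with SIGKILL.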
<python><selenium><selenium-webdriver>
2023-01-26 10:07:17
0
485
Takamura
75,244,633
8,547,163
Run matlab code through subprocess module in python
<p>I have a MATLAB script which I would like to call and execute via <code>subprocess</code> from my Python code. To run the MATLAB code via the terminal I have</p> <p><code>matlab2021a -nodesktop -r &quot;addpath('path/to/dependencies'); matlab_file 1 1 0.5, exit&quot;</code></p> <p>To replicate the same in my Python script I do the following:</p> <pre><code>#myscript.py import subprocess subprocess.run([&quot;matlab2021a&quot;, &quot;-nodesktop&quot;, &quot;-r&quot; , &quot;addpath('/path/to/dependencies');&quot;, &quot;/path/to/matlab_file&quot;, &quot;1&quot; , &quot;1&quot; ,&quot;0.5&quot;, &quot;exit&quot;]) </code></pre> <p>With the above code I end up inside the MATLAB terminal <code>&gt;&gt;&gt;</code>, and I think my Python script doesn't call the MATLAB file.</p> <p>Can someone suggest the correct way to call and execute MATLAB code while adding a path for dependencies?</p>
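The likely fix (a sketch; `matlab2021a` and the paths are taken from the question and assumed correct): the entire `-r` payload must be a single list element, exactly as it appears quoted on the command line. Splitting it across list elements makes the pieces arrive as separate startup arguments rather than one MATLAB statement:

```python
import subprocess

# The whole -r payload is ONE argument, mirroring the quoted shell string.
script = "addpath('/path/to/dependencies'); matlab_file 1 1 0.5; exit"
cmd = ["matlab2021a", "-nodesktop", "-r", script]

# subprocess.run(cmd, check=True)   # uncomment on a machine with MATLAB
```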
<python><matlab><subprocess>
2023-01-26 10:05:00
0
559
newstudent
75,244,611
6,734,243
Is there a way to check if a path is absolute in jinja2?
<p>In the pydata-sphinx-theme we need to check whether a path is absolute before adding it to the template. Currently we use the following:</p> <pre><code>{% set image_light = image_light if image_light.startswith(&quot;http&quot;) else pathto('_static/' + image_light, 1) %} </code></pre> <p>It works, but fails to capture local files and many other absolute configurations. Is there a more elegant way to perform this check?</p>
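A sketch of a stricter predicate using only the standard library (how to name it and wire it into the theme is up to you; it could be registered as a custom Jinja test):

```python
from urllib.parse import urlparse
import posixpath

def is_absolute(url):
    """True for scheme-qualified URLs (http://, https://, ftp://, ...),
    protocol-relative //host/... URLs, and absolute POSIX paths."""
    parts = urlparse(url)
    return bool(parts.scheme) or bool(parts.netloc) or posixpath.isabs(parts.path)

# Exposed to templates as a custom test, it could be used as:
#   env.tests["absolute"] = is_absolute
#   {% if image_light is absolute %} ... {% endif %}
```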
<python><jinja2><python-sphinx>
2023-01-26 10:02:22
2
2,670
Pierrick Rambaud
75,244,607
660,311
VRP example in the docs gives different output when I run it
<p>On <a href="https://developers.google.com/optimization/routing/vrp" rel="nofollow noreferrer">https://developers.google.com/optimization/routing/vrp</a>, it says:</p> <blockquote> <p>The complete programs are shown in the next section. When you run the programs, they display the following output:</p> <p>Route for vehicle 0: 0 -&gt; 8 -&gt; 6 -&gt; 2 -&gt; 5 -&gt; 0 Distance of route: 1552m</p> <p>Route for vehicle 1: 0 -&gt; 7 -&gt; 1 -&gt; 4 -&gt; 3 -&gt; 0 Distance of route: 1552m</p> <p>Route for vehicle 2: 0 -&gt; 9 -&gt; 10 -&gt; 16 -&gt; 14 -&gt; 0 Distance of route: 1552m</p> <p>Route for vehicle 3: 0 -&gt; 12 -&gt; 11 -&gt; 15 -&gt; 13 -&gt; 0 Distance of route: 1552m</p> <p>Total distance of all routes: 6208m</p> </blockquote> <p>However, when I run the example given on that page:</p> <pre><code>&quot;&quot;&quot;Simple Vehicles Routing Problem (VRP). This is a sample using the routing library python wrapper to solve a VRP problem. A description of the problem can be found here: http://en.wikipedia.org/wiki/Vehicle_routing_problem. Distances are in meters. 
&quot;&quot;&quot;
from ortools.constraint_solver import routing_enums_pb2
from ortools.constraint_solver import pywrapcp


def create_data_model():
    &quot;&quot;&quot;Stores the data for the problem.&quot;&quot;&quot;
    data = {}
    data['distance_matrix'] = [
        [0, 548, 776, 696, 582, 274, 502, 194, 308, 194, 536, 502, 388, 354, 468, 776, 662],
        [548, 0, 684, 308, 194, 502, 730, 354, 696, 742, 1084, 594, 480, 674, 1016, 868, 1210],
        [776, 684, 0, 992, 878, 502, 274, 810, 468, 742, 400, 1278, 1164, 1130, 788, 1552, 754],
        [696, 308, 992, 0, 114, 650, 878, 502, 844, 890, 1232, 514, 628, 822, 1164, 560, 1358],
        [582, 194, 878, 114, 0, 536, 764, 388, 730, 776, 1118, 400, 514, 708, 1050, 674, 1244],
        [274, 502, 502, 650, 536, 0, 228, 308, 194, 240, 582, 776, 662, 628, 514, 1050, 708],
        [502, 730, 274, 878, 764, 228, 0, 536, 194, 468, 354, 1004, 890, 856, 514, 1278, 480],
        [194, 354, 810, 502, 388, 308, 536, 0, 342, 388, 730, 468, 354, 320, 662, 742, 856],
        [308, 696, 468, 844, 730, 194, 194, 342, 0, 274, 388, 810, 696, 662, 320, 1084, 514],
        [194, 742, 742, 890, 776, 240, 468, 388, 274, 0, 342, 536, 422, 388, 274, 810, 468],
        [536, 1084, 400, 1232, 1118, 582, 354, 730, 388, 342, 0, 878, 764, 730, 388, 1152, 354],
        [502, 594, 1278, 514, 400, 776, 1004, 468, 810, 536, 878, 0, 114, 308, 650, 274, 844],
        [388, 480, 1164, 628, 514, 662, 890, 354, 696, 422, 764, 114, 0, 194, 536, 388, 730],
        [354, 674, 1130, 822, 708, 628, 856, 320, 662, 388, 730, 308, 194, 0, 342, 422, 536],
        [468, 1016, 788, 1164, 1050, 514, 514, 662, 320, 274, 388, 650, 536, 342, 0, 764, 194],
        [776, 868, 1552, 560, 674, 1050, 1278, 742, 1084, 810, 1152, 274, 388, 422, 764, 0, 798],
        [662, 1210, 754, 1358, 1244, 708, 480, 856, 514, 468, 354, 844, 730, 536, 194, 798, 0],
    ]
    data['num_vehicles'] = 4
    data['depot'] = 0
    return data


def print_solution(data, manager, routing, solution):
    &quot;&quot;&quot;Prints solution on console.&quot;&quot;&quot;
    print(f'Objective: {solution.ObjectiveValue()}')
    max_route_distance = 0
    for vehicle_id in range(data['num_vehicles']):
        index = routing.Start(vehicle_id)
        plan_output = 'Route for vehicle {}:\n'.format(vehicle_id)
        route_distance = 0
        while not routing.IsEnd(index):
            plan_output += ' {} -&gt; '.format(manager.IndexToNode(index))
            previous_index = index
            index = solution.Value(routing.NextVar(index))
            route_distance += routing.GetArcCostForVehicle(
                previous_index, index, vehicle_id)
        plan_output += '{}\n'.format(manager.IndexToNode(index))
        plan_output += 'Distance of the route: {}m\n'.format(route_distance)
        print(plan_output)
        max_route_distance = max(route_distance, max_route_distance)
    print('Maximum of the route distances: {}m'.format(max_route_distance))


def main():
    &quot;&quot;&quot;Entry point of the program.&quot;&quot;&quot;
    # Instantiate the data problem.
    data = create_data_model()

    # Create the routing index manager.
    manager = pywrapcp.RoutingIndexManager(len(data['distance_matrix']),
                                           data['num_vehicles'], data['depot'])

    # Create Routing Model.
    routing = pywrapcp.RoutingModel(manager)

    # Create and register a transit callback.
    def distance_callback(from_index, to_index):
        &quot;&quot;&quot;Returns the distance between the two nodes.&quot;&quot;&quot;
        # Convert from routing variable Index to distance matrix NodeIndex.
        from_node = manager.IndexToNode(from_index)
        to_node = manager.IndexToNode(to_index)
        return data['distance_matrix'][from_node][to_node]

    transit_callback_index = routing.RegisterTransitCallback(distance_callback)

    # Define cost of each arc.
    routing.SetArcCostEvaluatorOfAllVehicles(transit_callback_index)

    # Add Distance constraint.
    dimension_name = 'Distance'
    routing.AddDimension(
        transit_callback_index,
        0,  # no slack
        3000,  # vehicle maximum travel distance
        True,  # start cumul to zero
        dimension_name)
    distance_dimension = routing.GetDimensionOrDie(dimension_name)
    distance_dimension.SetGlobalSpanCostCoefficient(100)

    # Setting first solution heuristic.
    search_parameters = pywrapcp.DefaultRoutingSearchParameters()
    search_parameters.first_solution_strategy = (
        routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)

    # Solve the problem.
    solution = routing.SolveWithParameters(search_parameters)

    # Print solution on console.
    if solution:
        print_solution(data, manager, routing, solution)
    else:
        print('No solution found !')


if __name__ == '__main__':
    main()
</code></pre> <p>I get the following output:</p> <blockquote> <p>Objective: 177500</p> <p>Route for vehicle 0: 0 -&gt; 9 -&gt; 10 -&gt; 2 -&gt; 6 -&gt; 5 -&gt; 0 Distance of the route: 1712m</p> <p>Route for vehicle 1: 0 -&gt; 16 -&gt; 14 -&gt; 8 -&gt; 0 Distance of the route: 1484m</p> <p>Route for vehicle 2: 0 -&gt; 7 -&gt; 1 -&gt; 4 -&gt; 3 -&gt; 0 Distance of the route: 1552m</p> <p>Route for vehicle 3: 0 -&gt; 13 -&gt; 15 -&gt; 11 -&gt; 12 -&gt; 0 Distance of the route: 1552m</p> <p>Maximum of the route distances: 1712m</p> </blockquote> <p>I'm running the latest version of the python ortools library (9.5) with Python 3.11.</p> <h3>So</h3> <p>Are the docs wrong?</p> <p>Or is the code in the latest release bugged (the solution it gives is worse than the docs, so it may be a regression)?</p> <p>Or is there something messed up with my local environment that is causing the difference? Is it happening for other people too?</p>
<python><or-tools>
2023-01-26 10:01:37
1
7,798
Spycho
75,244,542
1,684,315
Python Basemap pcolormesh ValueError
<p>I want to use Python <a href="/questions/tagged/basemap" class="post-tag" title="show questions tagged &#39;basemap&#39;" aria-label="show questions tagged &#39;basemap&#39;" rel="tag" aria-labelledby="basemap-container">basemap</a> and map an aggregated value of income in various cities. I have created a dictionary of cities and their respective incomes. When running the plotting code, I am getting an error message:</p> <blockquote> <p>ValueError: not enough values to unpack (expected 2, got 1)</p> </blockquote> <p>and I do not know what is wrong there. Here is my code:</p> <pre><code>%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from geopy.geocoders import Nominatim

j = {'Aach': 38.0, 'Aachen': 380.0, 'Aalen': 348.0, 'Aalen-Waldhausen': 10.0,
     'Aarbergen': 17.0, 'Abenberg': 2.0, 'Abstatt': 6.0}

lat = list()
lon = list()
con = list()
count = 0

geolocator = Nominatim(user_agent=&quot;geoapiExercises&quot;)
for i in j:
    #print(i)
    location = geolocator.geocode(i)
    try:
        print(&quot;Country Name: &quot;, location)
        loc_dict = location.raw
        k = loc_dict['display_name'].rsplit(',', 1)[1]
        #print(loc_dict[&quot;lon&quot;])
        lat.append(loc_dict[&quot;lat&quot;])
        lon.append(loc_dict[&quot;lon&quot;])
        count = int(j[i])
        con.append(count)
    except:
        continue

lat = np.asarray(lat)
lon = np.asarray(lon)
coun = np.asarray(con)

lon, lat = np.meshgrid(lon, lat)

fig = plt.figure(figsize=(10, 8))
m = Basemap(projection='lcc', resolution='c',
            width=8E6, height=8E6,
            lat_0=45, lon_0=8,)
m.shadedrelief(scale=0.5)
m.pcolormesh(lon, lat, coun, latlon=True, cmap='RdBu_r')
</code></pre> <p>Any idea what is wrong here?</p> <p>Many thanks in advance!</p>
<python><matplotlib-basemap>
2023-01-26 09:55:24
1
1,044
Andi Maier
75,244,528
4,616,326
How to sum over column in pandas dropping value of a given category if values of another are present?
<p>I downloaded food production data from <a href="https://www.fao.org/home/en" rel="nofollow noreferrer">FAOSTAT</a>. For a given year, production data for a certain foodstuff may be provided as an official value, an estimate or it may be of another category. However, the production values are all given in one column like this:</p> <pre><code>      Area  Y2017 Y2017flags
0   France     10   official
1      USA     11   estimate
2  Germany     12   official
3  Germany     10   estimate
</code></pre> <p>For some areas multiple production values are available, e.g. an estimate, an official value, and an unofficial value.</p> <p>I'd now like to sum over all values in the column <code>Y2017</code> but in a conditional way: If an official figure is available for a country, take that value, if not take the estimate, if not take the unofficial value, etc.</p> <p>Is there a way to do this without splitting the dataframe?</p>
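One possible approach (a sketch against the sample data above, not from the original post): rank the flags by preference, keep the best-ranked row per area, then sum.

```python
import pandas as pd

df = pd.DataFrame({
    "Area": ["France", "USA", "Germany", "Germany"],
    "Y2017": [10, 11, 12, 10],
    "Y2017flags": ["official", "estimate", "official", "estimate"],
})

# Rank the flags by preference, keep the best-ranked row per Area, then sum.
priority = {"official": 0, "estimate": 1, "unofficial": 2}
best = (df.assign(rank=df["Y2017flags"].map(priority))
          .sort_values("rank")
          .drop_duplicates("Area", keep="first"))
total = best["Y2017"].sum()  # Germany keeps its official 12, not the estimate
```

The sort is stable, so ties keep their original order; no split of the dataframe is needed.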
<python><pandas>
2023-01-26 09:53:46
1
705
Dahlai
75,244,523
8,771,201
Python compare images of, piece of, clothing (identification)
<p>As an example I have two pictures with a particular type of clothing of a certain brand. I can download a lot of different images of this same piece, and color, of clothing.</p> <p><a href="https://i.sstatic.net/6AoUC.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6AoUC.jpg" alt="enter image description here" /></a><a href="https://i.sstatic.net/Erx5N.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Erx5N.jpg" alt="enter image description here" /></a></p> <p>I want to create a model which can recognize the item based on a picture. I tried to do it using this example: <a href="https://www.tensorflow.org/tutorials/keras/classification" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/keras/classification</a>. This can recognize the type of clothing (e.g. shirt, shoe, trousers, etc.) but not a specific item and color. My goal is to have a model that can tell me that the person in my first picture is wearing the item of my second picture. As mentioned, I can upload a few variations of this same item to train my model, if that would be the best approach.</p> <p>I also tried to use <a href="https://pillow.readthedocs.io" rel="nofollow noreferrer">https://pillow.readthedocs.io</a>. This can do something with color recognition but does not solve my initial goal.</p>
<python><machine-learning><artificial-intelligence>
2023-01-26 09:53:21
3
1,191
hacking_mike
75,244,441
12,061,197
How to give a variable a possible value range plus one allowed value outside that range
<p>I'm using the <code>CP-SAT</code> model of Google <code>OR Tools</code> with Python, and I need to add a constraint as below: <code>x</code> must be in the range of <code>10</code> to <code>100</code>. Otherwise, it must be zero. How can I add such a constraint to the model?</p>
<python><or-tools>
2023-01-26 09:44:55
1
326
Prasad Darshana
75,244,437
11,479,825
Group by several columns and add values from last column to list
<p>I have this table:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>id</th> <th>type</th> <th>text</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>inv_num</td> <td>123</td> </tr> <tr> <td>1</td> <td>company</td> <td>ASD</td> </tr> <tr> <td>1</td> <td>item</td> <td>fruit</td> </tr> <tr> <td>1</td> <td>item</td> <td>vegetable</td> </tr> <tr> <td>2</td> <td>inv_num</td> <td>123</td> </tr> <tr> <td>2</td> <td>company</td> <td>FOO</td> </tr> <tr> <td>2</td> <td>item</td> <td>computer</td> </tr> <tr> <td>2</td> <td>item</td> <td>mouse</td> </tr> <tr> <td>2</td> <td>item</td> <td>headphones</td> </tr> </tbody> </table> </div> <p>I would like to group the same types in one row in a list format:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>id</th> <th>type</th> <th>text</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>inv_num</td> <td>123</td> </tr> <tr> <td>1</td> <td>company</td> <td>ASD</td> </tr> <tr> <td>1</td> <td>item</td> <td>['fruit', 'vegetable']</td> </tr> <tr> <td>2</td> <td>inv_num</td> <td>123</td> </tr> <tr> <td>2</td> <td>company</td> <td>FOO</td> </tr> <tr> <td>2</td> <td>item</td> <td>['computer', 'mouse', 'headphones']</td> </tr> </tbody> </table> </div> <p>Is it possible to do it using 'groupby'?</p>
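Yes, this is possible with <code>groupby</code>; one possible sketch (not from the original post), grouping on both columns and unwrapping single-element lists so scalar rows stay scalars:

```python
import pandas as pd

df = pd.DataFrame({
    "id":   [1, 1, 1, 1, 2, 2, 2, 2, 2],
    "type": ["inv_num", "company", "item", "item",
             "inv_num", "company", "item", "item", "item"],
    "text": ["123", "ASD", "fruit", "vegetable",
             "123", "FOO", "computer", "mouse", "headphones"],
})

# Collect every text value for the same (id, type) pair into one list,
# then unwrap single-element lists back into plain values.
out = (df.groupby(["id", "type"], sort=False)["text"]
         .agg(list)
         .apply(lambda v: v[0] if len(v) == 1 else v)
         .reset_index())
```

`sort=False` keeps the groups in their original order of appearance.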
<python><pandas>
2023-01-26 09:44:37
1
985
Yana
75,244,419
10,596,488
AWS ECS environment variable not available [Python]
<p>I am using AWS ECS with a Python framework, and in my task definition I have the option to add environment variables that will be available to the service (cluster).</p> <p>Here is where I added the env variables: <a href="https://i.sstatic.net/8cWRS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8cWRS.png" alt="enter image description here" /></a></p> <p>When I then try to print all the env variables in my service, I do not get access to these variables and I am not sure why. Here I printed all my env using environ:</p> <pre><code>for a in os.environ:
    print('Var: ', a, 'Value: ', os.getenv(a))
print(&quot;all done&quot;)
</code></pre> <p>Result: <a href="https://i.sstatic.net/R0iai.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R0iai.png" alt="enter image description here" /></a></p> <p><strong>DB_PORT</strong> or <strong>APP_KEY</strong> is not available in my service or python-code.</p>
<python><amazon-web-services><docker><environment-variables><amazon-ecs>
2023-01-26 09:42:59
1
353
Ali Durrani
75,244,379
8,739,916
Python Pandas reading CSV issue
<p>I am trying to read an unstructured CSV file without any header, using Pandas. The number of columns differs between rows and there is no clear upper limit for the number of columns. Right now it is 10 but it will increase to maybe 15.</p> <p>Example CSV file content:</p> <pre><code>a;b;c
a;b;c;d;e;;;f
a;;
a;b;c;d;e;f;g;h;;i
a;b;
....
</code></pre> <p>Here is how I read it using Python Pandas:</p> <pre><code>pd.DataFrame(pd.read_csv(path,
                         sep=&quot;;&quot;,
                         header=None,
                         usecols=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
                         names=['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'],
                         nrows=num_of_rows + 1))
</code></pre> <p>However this produces a <code>FutureWarning: Defining usecols with out of bounds indices is deprecated and will raise a ParserError in a future version.</code> warning message, and I don't want my code to stop working in the future because of this.</p> <p>My question is: is there a way to read such an unstructured CSV file using Pandas (or any other equivalently fast library) in a future-safe way?</p>
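One future-safe workaround (a sketch, not from the original post): split the lines yourself and let <code>pd.DataFrame</code> pad the ragged rows, so no column index is ever out of bounds.

```python
import pandas as pd
from io import StringIO

# StringIO stands in for the real file here.
raw = "a;b;c\na;b;c;d;e;;;f\na;;\na;b;c;d;e;f;g;h;;i\na;b;\n"

# Split each line ourselves; pd.DataFrame pads ragged rows with missing
# values automatically, so no usecols indices go "out of bounds".
rows = [line.split(";") for line in raw.splitlines()]
df = pd.DataFrame(rows)
df.columns = list("abcdefghij")[: df.shape[1]]
```

Because the column labels are sliced to the actual width, the same code keeps working when the file later grows to 15 columns (just extend the label list).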
<python><pandas><csv>
2023-01-26 09:38:35
1
867
JollyRoger
75,244,361
12,242,625
Count preceding and following rows >=10
<p>I have a spectrum and would like to identify the different channels. The channels can be distinguished from the edges between the channels by the level (<code>val</code>).</p> <p>I have created a simplified table that contains the column <code>val</code>. Now I want to calculate how many rows before and after contain values greater or equal <code>10</code>.</p> <p>Example:</p> <p><a href="https://i.sstatic.net/h6Wiv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h6Wiv.png" alt="example" /></a></p> <ul> <li> <ol> <li>row: <code>val = 10</code> , followed by <code>11, 10, 2, 3</code> -&gt; <code>next = 2</code> and <code>prev = 0</code></li> </ol> </li> <li> <ol start="2"> <li>row: <code>val = 11</code>, followed by <code>10, 2, 3</code>, before was <code>10</code> -&gt; <code>next = 1</code>, <code>prev = 1</code></li> </ol> </li> <li> <ol start="3"> <li>row: <code>val = 10</code>, followed by <code>2, 3</code>, before was <code>11, 10</code> -&gt; <code>next = 0</code>, <code>prev = 2</code></li> </ol> </li> <li> <ol start="4"> <li>row: <code>val = 2</code>, followed by <code>3</code>, before was <code>10, 11, 10</code> -&gt; <code>next = 0</code>, <code>prev = 3</code></li> </ol> </li> <li> <ol start="5"> <li>row: <code>val = 2</code>, before was <code>2, 10, 11, 10</code> -&gt; <code>prev = 0</code></li> </ol> </li> </ul> <p>In real-world data a channel contains about 20-30 measurement points, so I hope that there is a solution to solve this without creating 30 shifts in both directions. 
And also not all channels have the same width, so it should be dynamic.</p> <p>Happy for any help or hint, thank you very much.</p> <hr /> <h1>MWE</h1> <pre><code>import pandas as pd

df_test = pd.DataFrame({&quot;val&quot;: [10, 11, 10, 2, 3, 10, 10, 10, 4, 6, 11, 7, 10, 10, 11, 10, 11, 10]})
df_test[&quot;prev&quot;] = [0, 1, 2, 3, 0, 0, 1, 2, 3, 0, 0, 1, 0, 1, 2, 3, 4, 5]
df_test[&quot;next&quot;] = [2, 1, 0, 0, 3, 2, 1, 0, 0, 1, 0, 6, 5, 4, 3, 2, 1, 0]

+----+-------+--------+--------+
|    |   val |   prev |   next |
|----+-------+--------+--------|
|  0 |    10 |      0 |      2 |
|  1 |    11 |      1 |      1 |
|  2 |    10 |      2 |      0 |
|  3 |     2 |      3 |      0 |
|  4 |     3 |      0 |      3 |
|  5 |    10 |      0 |      2 |
|  6 |    10 |      1 |      1 |
|  7 |    10 |      2 |      0 |
|  8 |     4 |      3 |      0 |
|  9 |     6 |      0 |      1 |
| 10 |    11 |      0 |      0 |
| 11 |     7 |      1 |      6 |
| 12 |    10 |      0 |      5 |
| 13 |    10 |      1 |      4 |
| 14 |    11 |      2 |      3 |
| 15 |    10 |      3 |      2 |
| 16 |    11 |      4 |      1 |
| 17 |    10 |      5 |      0 |
+----+-------+--------+--------+
</code></pre>
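One possible run-length approach for the MWE in the question (a sketch, not from the original post): compute the length of each run of consecutive values &gt;= 10, then read it off one row above (for <code>prev</code>) and one row below on the reversed series (for <code>next</code>). No fixed number of shifts is needed, so it handles channels of any width.

```python
import pandas as pd

def run_lengths(mask):
    # Cumulative length of the current run of consecutive True values;
    # rows where mask is False stay 0.
    groups = (mask != mask.shift()).cumsum()
    return mask.astype(int).groupby(groups).cumsum()

df = pd.DataFrame({"val": [10, 11, 10, 2, 3, 10, 10, 10, 4, 6,
                           11, 7, 10, 10, 11, 10, 11, 10]})
m = df["val"].ge(10)

# prev: length of the >=10 run ending just above each row.
df["prev"] = run_lengths(m).shift(fill_value=0).astype(int)
# next: same idea on the reversed series, then flipped back.
df["next"] = run_lengths(m[::-1]).shift(fill_value=0)[::-1].astype(int)
```

This reproduces the <code>prev</code>/<code>next</code> columns of the MWE exactly.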
<python><pandas>
2023-01-26 09:37:30
2
3,304
Marco_CH
75,244,276
11,261,546
Import and import as in boost::python
<p>I'm working on a project that has most of its code in C++ and some in python.</p> <p>Is there a way to call <code>import xxx</code> and/or <code>import xxx as x</code> from C++?</p> <p>I would expect something like this:</p> <pre><code>auto other_mod = boost::python::import(&quot;the_other_module&quot;);

BOOST_PYTHON_MODULE(pystuff)
{
    boost::python::module_&lt;other_mod&gt;(&quot;wrapping_name&quot;);  // I just invented this
}
</code></pre> <p>And then in python be able to:</p> <pre><code>from pystuff import wrapping_name as wn
wn.someFunction()
</code></pre> <p>Notice that I DO NOT want to do this in python:</p> <pre><code>import pystuff
import the_other_module
</code></pre> <p>There are objects in <code>the_other_module</code> with similar goals and dependencies to the ones in <code>pystuff</code>, so I don't want the user to have one without the other.</p> <p>Also, I know I could take <strong>every object</strong> from <code>the_other_module</code> that I want to expose and wrap it, but I don't want to do that one by one.</p>
<python><c++><boost><bind>
2023-01-26 09:30:16
1
1,551
Ivan
75,244,142
221,270
Pandas replace value with other column value
<p>I have a table and I want to replace column values with other columns values based on a condition:</p> <p>Table:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>A</th> <th>B</th> <th>C</th> <th>D</th> <th>E</th> </tr> </thead> <tbody> <tr> <td>x</td> <td>1</td> <td>test</td> <td>fool</td> <td>bar</td> </tr> <tr> <td>y</td> <td>3</td> <td>test</td> <td>fool</td> <td>bar</td> </tr> </tbody> </table> </div> <p>If column C contains the word test -&gt; value should be replaced with content of column A</p> <p>If column D contains the word fool -&gt; value should be replaced with content of column B</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>A</th> <th>B</th> <th>C</th> <th>D</th> <th>E</th> </tr> </thead> <tbody> <tr> <td>x</td> <td>1</td> <td>x</td> <td>1</td> <td>bar</td> </tr> <tr> <td>y</td> <td>3</td> <td>y</td> <td>3</td> <td>bar</td> </tr> </tbody> </table> </div> <p>How can I create this table?</p>
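One possible way (a sketch against the sample table, not from the original post): <code>Series.mask</code> swaps in values from the other column wherever the condition holds.

```python
import pandas as pd

df = pd.DataFrame({"A": ["x", "y"], "B": [1, 3],
                   "C": ["test", "test"], "D": ["fool", "fool"],
                   "E": ["bar", "bar"]})

# mask() replaces values where the condition is True with the
# aligned values from the other column.
df["C"] = df["C"].mask(df["C"].str.contains("test"), df["A"])
df["D"] = df["D"].mask(df["D"].str.contains("fool"), df["B"])
```

Rows where the condition is False keep their original value, so column <code>E</code> and any non-matching cells are untouched.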
<python><pandas>
2023-01-26 09:13:53
1
2,520
honeymoon
75,243,901
3,922,727
Python converting cURL into POST request is returning an error of 414
<p>I am trying to convert a cURL request into a python POST request.</p> <p>Here is the cURL:</p> <pre><code>curl -X POST -F xls_file=@/path/to/form.xls https://api.ona.io/api/v1/forms
</code></pre> <p>And here is the python script that I've been working on:</p> <pre><code>with open(param_build_dir + 'panther_test.xls', 'rb') as form:
    # xls = form.read()
    response = requests.post(ona_post_base_api,
                             params={'xls_file': form},
                             headers=headers,
                             auth=HTTPBasicAuth(username, password))
    print(response.status_code)
    print(response.reason)
    print(response.text)
</code></pre> <p>When we uncomment the part of <code>xls = form.read()</code>, the following error occurred:</p> <blockquote> <p>Status code: 400</p> <p>Reason: Bad Request</p> <p>Text:</p> <p>{&quot;type&quot;:&quot;alert-error&quot;,&quot;text&quot;:&quot;[&quot;XLSForm not provided, expecting either of these params: 'xml_file', 'xls_file', 'xls_url', 'csv_url', 'dropbox_xls_url', 'text_xls_form', 'floip_file'&quot;]&quot;}</p> </blockquote> <p>Once commented, we receive the following error:</p> <blockquote> <p>Status Code: 414, Request-URI Too Large</p> </blockquote>
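A likely cause (a sketch, not from the original post): curl's <code>-F</code> sends a multipart/form-data body, which in <code>requests</code> is the <code>files=</code> argument. <code>params=</code> builds the query string instead, which is why the file contents blow past the URI length limit (414). The credentials and filename below are placeholders; <code>prepare()</code> is used only to inspect the request without sending it.

```python
import requests
from requests.auth import HTTPBasicAuth

# curl -F sends multipart/form-data; in requests that is `files=`,
# not `params=` (params go into the query string, hence the 414).
req = requests.Request(
    "POST",
    "https://api.ona.io/api/v1/forms",
    files={"xls_file": ("form.xls", b"...binary xls bytes...")},
    auth=HTTPBasicAuth("username", "password"),
)
prepared = req.prepare()  # inspect the request without sending it
```

In the real script this would be <code>requests.post(url, files={'xls_file': form}, auth=...)</code> with the open file object.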
<python><curl><python-requests>
2023-01-26 08:50:43
1
5,012
alim1990
75,243,888
19,642,884
Problem serving Django "polls" tutorial under lighttpd: 404 page not found
<p>I am following the Django polls tutorial, which is working 100% with the built-in development server (<code>python3 manage.py runserver</code>).</p> <p>I have set up lighttpd to serve django through UWSGI and that seems to be working fine but for one glitch: the URL passed to django seems to have been modified.</p> <p>My lighttpd configuration is basically this:</p> <pre><code>...
server.modules += (&quot;mod_scgi&quot;,&quot;mod_rewrite&quot;)

scgi.protocol = &quot;uwsgi&quot;
scgi.server = (
    &quot;/polls&quot; =&gt; ((
        &quot;host&quot; =&gt; &quot;localhost&quot;,
        &quot;port&quot; =&gt; 7000,
        &quot;check-local&quot; =&gt; &quot;disable&quot;,
    ))
)
</code></pre> <p>The Django tutorial mapping looks like:</p> <pre class="lang-py prettyprint-override"><code># tutorial1/urls.py
urlpatterns = [
    path('polls/', include('polls.urls')),
    path('admin/', admin.site.urls),
]

# polls/urls.py
app_name = 'polls'
urlpatterns = [
    path('', views.IndexView.as_view(), name='index'),
    path('&lt;int:pk&gt;/', views.DetailView.as_view(), name='detail'),
    path('&lt;int:pk&gt;/results/', views.ResultsView.as_view(), name='results'),
    path('&lt;int:question_id&gt;/vote/', views.vote, name='vote'),
]
</code></pre> <p>However when I hit <code>http://localhost:8080/polls/</code> in the address bar, it produces a 404 error.</p> <p><a href="https://i.sstatic.net/9Sx7O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9Sx7O.png" alt="enter image description here" /></a></p> <p>If I add an extra <code>/polls</code> to the URL then it works just fine.</p> <p><a href="https://i.sstatic.net/sahny.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sahny.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/uJ8eM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uJ8eM.png" alt="enter image description here" /></a></p> <p>My goal with this exercise is to be able to serve this app switching from and to both servers without needing to modify configuration files each time.</p> <p>What do I need to do on the <code>lighttpd.conf</code> side to make lighttpd interchangeable with Django's own internal dev server?</p> <p>I have tried to add the following <code>url.rewrite</code> rule but it completely messes up the URL handling.</p> <pre><code>url.rewrite = (
    &quot;^/polls/(.*)$&quot; =&gt; &quot;/polls/polls/$1&quot;
)
</code></pre> <p>With the URL rewrite, the first call <code>http://127.0.0.1/polls/</code> works fine but the links to the detail posts are generated as <code>http://127.0.0.1/polls/polls/1</code> and the resulting sub-url <code>polls/1</code> is not recognized by the <code>polls/urls.py</code> settings.</p> <p>Using the empty path in lighttpd server:</p> <pre><code>scgi.protocol = &quot;uwsgi&quot;
scgi.server = (
    &quot;&quot; =&gt; ((
        &quot;host&quot; =&gt; &quot;localhost&quot;,
        &quot;port&quot; =&gt; 7000,
        &quot;check-local&quot; =&gt; &quot;disable&quot;,
    ))
)
</code></pre> <p>Produces this very weird result: <a href="https://i.sstatic.net/kAsCP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kAsCP.png" alt="enter image description here" /></a></p> <p>The <code>Raised by: polls.views.DetailView</code> means that the URL is actually being parsed and forwarded correctly to the Polls Detail view, which is 100% the correct handler, but it is somehow raising a 404? WTF</p> <p>Thank you!</p>
<python><django><uwsgi><lighttpd>
2023-01-26 08:49:27
1
4,512
Henrique Bucher
75,243,582
15,913,281
Sorting Pandas Dataframe Based on Date/Time
<p>I have a dataframe created using an Access table. The Date column is formatted as an object and has the following syntax:</p> <p>2019-12-31 12:55:00</p> <p>I want to sort the dataframe on ascending date order so converted the column to datetime using:</p> <pre><code>df['Date'] = pd.to_datetime(df['Date']) </code></pre> <p>I then try to sort using the following but it doesn't work. The df looks exactly as it did before:</p> <pre><code>df.sort_values(by='Date') </code></pre> <p>An extract of the df is below. What am I doing wrong?</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Date</th> <th>RefTT</th> </tr> </thead> <tbody> <tr> <td>31/12/2019 12:55</td> <td>10107355</td> </tr> <tr> <td>31/12/2020 15:25</td> <td>36643074</td> </tr> <tr> <td>31/12/2020 14:15</td> <td>36868924</td> </tr> <tr> <td>08/01/2019 17:35</td> <td>24763287</td> </tr> <tr> <td>08/01/2019 19:45</td> <td>10073929</td> </tr> <tr> <td>08/01/2019 14:50</td> <td>24132711</td> </tr> <tr> <td>08/01/2019 18:40</td> <td>24576111</td> </tr> <tr> <td>08/01/2019 14:40</td> <td>12255761</td> </tr> <tr> <td>08/01/2019 13:30</td> <td>18443380</td> </tr> <tr> <td>31/12/2020 14:00</td> <td>25106015</td> </tr> <tr> <td>31/01/2019 15:10</td> <td>19629874</td> </tr> <tr> <td>31/01/2019 19:30</td> <td>14505238</td> </tr> <tr> <td>31/01/2020 15:40</td> <td>11930839</td> </tr> <tr> <td>31/01/2020 13:40</td> <td>8753735</td> </tr> <tr> <td>31/01/2020 14:45</td> <td>23523591</td> </tr> <tr> <td>31/01/2020 17:15</td> <td>27388541</td> </tr> <tr> <td>31/01/2021 15:10</td> <td>28575475</td> </tr> <tr> <td>28/02/2019 16:30</td> <td>21470086</td> </tr> <tr> <td>28/02/2019 16:55</td> <td>18136828</td> </tr> <tr> <td>28/02/2019 13:35</td> <td>11896776</td> </tr> <tr> <td>28/02/2019 13:50</td> <td>14708670</td> </tr> <tr> <td>28/02/2019 14:25</td> <td>16095243</td> </tr> <tr> <td>29/02/2020 15:00</td> <td>26641007</td> </tr> <tr> <td>29/02/2020 15:45</td> <td>24342002</td> </tr> <tr> 
<td>29/02/2020 13:37</td> <td>1532111</td> </tr> <tr> <td>29/02/2020 16:10</td> <td>26684160</td> </tr> <tr> <td>28/02/2021 16:40</td> <td>6320524</td> </tr> <tr> <td>28/02/2021 16:20</td> <td>27632002</td> </tr> <tr> <td>28/02/2021 15:10</td> <td>21104540</td> </tr> <tr> <td>31/03/2019 17:40</td> <td>6896639</td> </tr> <tr> <td>31/03/2021 15:10</td> <td>38036656</td> </tr> <tr> <td>31/03/2021 17:20</td> <td>21281524</td> </tr> <tr> <td>31/03/2021 15:00</td> <td>24986895</td> </tr> <tr> <td>31/03/2021 15:50</td> <td>26600969</td> </tr> </tbody> </table> </div>
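A sketch of the two likely issues (not from the original post): the strings are day-first, so <code>dayfirst=True</code> avoids month/day swaps, and <code>sort_values</code> returns a new frame that must be assigned back.

```python
import pandas as pd

df = pd.DataFrame({"Date": ["31/12/2019 12:55", "08/01/2019 17:35", "28/02/2021 16:40"],
                   "RefTT": [10107355, 24763287, 6320524]})

# dayfirst=True: these strings are day/month/year, not month/day/year.
df["Date"] = pd.to_datetime(df["Date"], dayfirst=True)

# sort_values returns a *new* frame; assign it back (or use inplace=True).
df = df.sort_values(by="Date")
```

Without the assignment, the original unsorted frame is kept, which matches the "looks exactly as it did before" symptom.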
<python><pandas>
2023-01-26 08:18:46
0
471
Robsmith
75,243,418
6,089,166
Keras, predict which player is the strongest (instead of player1 will win?)
<p>Hello, I have a dataset that lets me know if player 1 has won:</p> <pre><code>BMI, Temperature, Weight, Player1Win
33.6,17,50.0
26.6,19,31.0
23.3,16,32.1
28.1,20,21.0
43.1,17,33.1
</code></pre> <p>I can correctly predict if player 1 will win their game with my model:</p> <pre><code>import pandas as pd
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import plot_model
from matplotlib import pyplot as plt

df = pd.read_csv('winner.csv')

X = df.loc[:, df.columns != 'Player1Win']
Y = df.loc[:, 'Player1Win']

model = Sequential()
model.add(Dense(12, input_shape=(3,), activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, Y, epochs=100)

_, accuracy = model.evaluate(X, Y)
print('Accuracy: %.2f' % (accuracy * 100))
</code></pre> <p>But I would like to extend my model, considering all players with this new dataset:</p> <pre><code>BMI, Temperature, Weight, Player1, Player2, Winner
33.6,17,50,Bob,Joe,Bob
26.6,19,31,Nathan,Bob,Bob
23.3,16,32,Bob,Joe,Joe
28.1,20,21,Joe,Bob,Bob
43.1,17,33,Joe,Nathan,Nathan
</code></pre> <p>Rather than predicting if &quot;player 1 will win&quot;, I would like to know if &quot;Bob will win against Nathan&quot;. I would like to know what method to use to do this.</p> <p>I thought of adding 1 column per player, with a 0 or 1 if they won, but: 1) it would make a huge matrix, and 2) it would encode false information, because each match involves only 2 players, not all of them.</p> <p>It would be 3 columns like this:</p> <pre><code>bob,nathan,joe
0,1,0
1,0,0
1,0,0
0,0,1
1,0,0
</code></pre> <p>Another question: in my dataset, some parameters are more important than others. Is there a model capable of prioritizing certain variables? In my case, the opponent's name is the most important variable (more than temperature or weight).</p>
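One common encoding for this (a sketch, not from the original post): a signed one-hot vector per match, +1 for the player in the Player1 seat, -1 for the opponent, 0 for everyone not playing. Only the two participants are non-zero per row, which avoids the "false information" concern, and the resulting matrix can be fed to the same Keras model.

```python
import pandas as pd

df = pd.DataFrame({
    "Player1": ["Bob", "Nathan", "Bob", "Joe", "Joe"],
    "Player2": ["Joe", "Bob", "Joe", "Bob", "Nathan"],
    "Winner":  ["Bob", "Bob", "Joe", "Bob", "Nathan"],
})

# Signed one-hot encoding: +1 where the player is Player1, -1 where Player2,
# 0 when not playing -- only the two participants are non-zero per row.
p1 = pd.get_dummies(df["Player1"]).astype(int)
p2 = pd.get_dummies(df["Player2"]).astype(int)
features = p1.sub(p2, fill_value=0).astype(int)

# Target: 1 if the Player1-seat player won the match, 0 otherwise.
target = (df["Winner"] == df["Player1"]).astype(int)
```

To predict "Bob vs Nathan", build one row with Bob = +1 and Nathan = -1 and feed it to the trained model.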
<python><keras>
2023-01-26 07:56:30
1
389
sazearte
75,243,301
5,618,856
How to run a jupyter notebook cell in VS code with command line arguments (staying in the notebook context)
<p>Is there a way to run a jupyter notebook python cell with command line parameters while retaining the whole variable visibility of the notebook?</p> <p>What I tried:</p> <ul> <li>According to <a href="https://stackoverflow.com/questions/37534440/passing-command-line-arguments-to-argv-in-jupyter-ipython-notebook?answertab=modifieddesc#tab-top">this discussion</a> I prefixed my script with <code>%%python - --option1 value1 - -option2 value2 - -etc</code> but this only runs it as a normal python script. No variables visible, no notebook interaction.</li> <li><a href="https://github.com/nteract/papermill" rel="nofollow noreferrer">papermill</a> addresses this issue but relies on tags not available in VS code.</li> <li>exporting as a normal py script ... of course this works but is cumbersome in the development process</li> <li>hardcoding default values in argparse ... but again this is an ugly way as I keep altering code which shouldn't be changed when debugging.</li> </ul> <p>I'm aware of <a href="https://stackoverflow.com/questions/37534440/passing-command-line-arguments-to-argv-in-jupyter-ipython-notebook">this discussion</a> also not coming to a clean general solution.</p>
<python><jupyter-notebook>
2023-01-26 07:43:52
0
603
Fred
75,243,277
458,060
Read a file and generate a list of lists using list comprehensions
<p>I'd like to read a file with the following input:</p> <pre><code>10
20
30

50
60
70

80
90
100
</code></pre> <p>and generate the following output:</p> <pre><code>[['10', '20', '30'], ['50', '60', '70'] ... ]
</code></pre> <p>using list comprehensions and not for loops. Naturally the problem I'm facing is creating the nested list when a <code>\n</code> character is detected. Of course 'disclaimer' the code would probably be more readable with for loops!</p> <pre class="lang-py prettyprint-override"><code>with open('file.txt', 'r') as f:
    result = [line.strip() for line in f.readlines() if line != '\n']
    print(result)
</code></pre> <pre><code>['10', '20', '30', '50', '60', '70']  # not correct
</code></pre>
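One comprehension-only way (a sketch, not from the original post): <code>itertools.groupby</code> splits the lines into alternating runs of non-blank and blank lines, and the comprehension keeps only the non-blank runs. <code>StringIO</code> stands in for the open file here.

```python
from itertools import groupby
from io import StringIO

f = StringIO("10\n20\n30\n\n50\n60\n70\n\n80\n90\n100\n")

# groupby yields runs of lines that share the same key (blank / non-blank);
# keep only the non-blank runs, stripped of their newlines.
result = [[line.strip() for line in group]
          for is_data, group in groupby(f, key=lambda l: l.strip() != "")
          if is_data]
```

With a real file, replace <code>f</code> with <code>open('file.txt')</code> inside a <code>with</code> block; groupby consumes the file lazily, so no <code>readlines()</code> is needed.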
<python><list-comprehension>
2023-01-26 07:41:00
1
14,293
George Katsanos
75,243,224
16,971,617
checking if elements are in a set in numpy
<p>I am new to numpy so any help is appreciated. But I am curious how array handles set in python.</p> <p>This is my code but it doesn't work as expected. I am trying to filter elements that are not in my set.</p> <pre><code>new_mask = np.where(np.isin(mask, my_set), 1, 0)
</code></pre> <p>From my understanding, searching in a set is more efficient than in a list due to hashing. But I found this in the <a href="https://numpy.org/doc/stable/reference/generated/numpy.isin.html" rel="nofollow noreferrer">doc</a>. I am wondering why it doesn't work?</p> <blockquote> <p>Because of how array handles sets, the following does not work as expected:</p> <pre><code>&gt;&gt;&gt; test_set = {1, 2, 4, 8}
&gt;&gt;&gt; np.isin(element, test_set)
array([[False, False],
       [False, False]])
</code></pre> <p>Casting the set to a list gives the expected result:</p> <pre><code>&gt;&gt;&gt; np.isin(element, list(test_set))
array([[False,  True],
       [ True, False]])
</code></pre> </blockquote>
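As the quoted doc note says, the fix is to cast the set to a list before calling <code>np.isin</code>; a small sketch (sample values are made up, not from the original post):

```python
import numpy as np

mask = np.array([[1, 3],
                 [2, 5]])
my_set = {1, 2, 4, 8}

# np.isin converts its second argument to an array; a set becomes a single
# 0-d object rather than element values, so membership tests all fail.
# Casting to a list first gives NumPy a proper array of candidates.
new_mask = np.where(np.isin(mask, list(my_set)), 1, 0)
```

The set's hashing advantage does not apply here anyway: <code>np.isin</code> works on arrays, typically by sorting/searching, so converting to a list loses nothing.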
<python><numpy>
2023-01-26 07:34:17
0
539
user16971617
75,243,117
10,660,847
Dataplex API Tag Policies
<p>I'm exploring the Dataplex API with Python via the Google documentation. There is documentation to get Lakes, Zones, Assets, etc. I've explored it, but I didn't find any documentation related to Tag Policies; for example, I need to attach my Tag Template and add a Policy Tag to my BigQuery table via the API.</p> <p>Is it possible to attach a Tag Template and add a Policy Tag to a BigQuery table via the API?</p> <p>Here are the links that I've explored:</p> <p><a href="https://cloud.google.com/python/docs/reference/dataplex/latest/google.cloud.dataplex_v1.services.dataplex_service.DataplexServiceClient" rel="nofollow noreferrer">Dataplex API Service</a></p> <p><a href="https://cloud.google.com/python/docs/reference/dataplex/latest/google.cloud.dataplex_v1.services.metadata_service.MetadataServiceClient#google_cloud_dataplex_v1_services_metadata_service_MetadataServiceClient_list_entities" rel="nofollow noreferrer">Dataplex API Metadata Service</a></p> <p><a href="https://cloud.google.com/python/docs/reference/datacatalog/latest/google.cloud.datacatalog_v1.services.policy_tag_manager.PolicyTagManagerClient#google_cloud_datacatalog_v1_services_policy_tag_manager_PolicyTagManagerClient_update_policy_tag" rel="nofollow noreferrer">Data Catalog API</a></p>
<python><google-cloud-platform><google-data-catalog><google-dataplex>
2023-01-26 07:20:13
2
1,793
MADFROST
75,243,102
3,764,619
Pytest fixture for `airflow.operators.python.get_current_context` fails with `airflow.exceptions.AirflowException`
<p>I am trying to mock <code>airflow.operators.python.get_current_context</code> as follows:</p> <pre><code>@pytest.fixture
def _mock_get_current_context(mocker):
    mocker.patch(
        &quot;airflow.operators.python.get_current_context&quot;,
        return_value={},
    )
</code></pre> <p>This pattern works for all other functions I am mocking, for example <code>requests.get</code>, <code>airflow.operators.trigger_dagrun.TriggerDagRunOperator.execute</code> and <code>requests.Response.content</code>. However, when I call <code>get_current_context()</code> in a DAG task, I get the following error:</p> <pre><code>    if not _CURRENT_CONTEXT:
        raise AirflowException(
            &quot;Current context was requested but no context was found! &quot;
            &quot;Are you running within an airflow task?&quot;
        )
E   airflow.exceptions.AirflowException: Current context was requested but no context was found! Are you running within an airflow task?
</code></pre> <p>Indicating that the mocking did not work since the source code for <code>get_current_context()</code> looks like this:</p> <pre><code>def get_current_context() -&gt; Context:
    if not _CURRENT_CONTEXT:
        raise AirflowException(
            &quot;Current context was requested but no context was found! &quot;
            &quot;Are you running within an airflow task?&quot;
        )
    return _CURRENT_CONTEXT[-1]
</code></pre> <p>Any ideas what can have gone wrong?</p>
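A common cause (a sketch, not from the original post): the name must be patched where it is *looked up*. If the DAG file does <code>from airflow.operators.python import get_current_context</code>, it holds its own reference, so patching <code>airflow.operators.python.get_current_context</code> does not rebind the copy inside the DAG module; something like <code>mocker.patch("my_dag_module.get_current_context", return_value={})</code> would (module name hypothetical). A toy demonstration of the principle using only the standard library:

```python
import types
from unittest import mock

# Stand-in for a DAG module that imported the name at the top, i.e.
#   from airflow.operators.python import get_current_context
dag_module = types.ModuleType("my_dag")

def _real_get_current_context():
    raise RuntimeError("no context")

dag_module.get_current_context = _real_get_current_context
dag_module.my_task = lambda: dag_module.get_current_context()

# Patch the reference in the module that *uses* it, not where it is defined:
with mock.patch.object(dag_module, "get_current_context", return_value={}):
    assert dag_module.my_task() == {}  # task sees the mocked context
```

Outside the patch context the original function (and its exception) is restored.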
<python><mocking><airflow><pytest><fixtures>
2023-01-26 07:17:54
1
680
Casper Lindberg
75,243,027
17,275,588
How to search strings in Python using a wildcard that represents only a single word -- and not multiple words as well?
<p>fnmatch is pretty simple in Python -- however it will output &quot;True&quot; whether there is 1 word or 100 words between the words you've put the wildcard between.</p> <p>I'd like to be more narrow than this -- and be able to use some kind of wildcard searching library that lets me specify HOW MANY words I want to be wildcards.</p> <p>So if I used: &quot;the * cat&quot;, it would ONLY include single words like &quot;the ugly cat&quot; or &quot;the furry cat&quot;</p> <p>But if I used something like: &quot;the ** cat&quot;, it would include ONLY two words like &quot;the very ugly cat&quot; or &quot;the extremely furry cat&quot;</p> <p>Is there any python library that allows this kind of fine-tuned wildcard functionality?</p>
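One way without an extra library (a sketch, not from the original post): translate the word-wildcard pattern into a regex where <code>**</code> means exactly two words and <code>*</code> exactly one (assumes Python 3.7+, where <code>re.escape</code> leaves spaces alone).

```python
import re

def word_wildcard(pattern):
    # '**' -> exactly two words, '*' -> exactly one word.
    # Replace the double wildcard before the single one so that
    # '**' is not consumed as two separate '*' replacements.
    regex = re.escape(pattern).replace(r"\*\*", r"\w+ \w+").replace(r"\*", r"\w+")
    return re.compile(regex)

one = word_wildcard("the * cat")    # matches exactly one middle word
two = word_wildcard("the ** cat")   # matches exactly two middle words
```

Use <code>fullmatch</code> (or <code>search</code>, depending on the use case): <code>one.fullmatch("the ugly cat")</code> matches, while <code>one.fullmatch("the very ugly cat")</code> does not.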
<python><string><search><substring><wildcard>
2023-01-26 07:08:02
1
389
king_anton
75,242,731
5,052,365
is there a C# equivalent for Python's self-documenting expressions in f-strings?
<p>Since Python 3.8 it is possible to use <a href="https://docs.python.org/3/whatsnew/3.8.html#f-strings-support-for-self-documenting-expressions-and-debugging" rel="nofollow noreferrer">self-documenting expressions in f-strings</a> like this:</p> <pre><code>&gt;&gt;&gt; variable=5
&gt;&gt;&gt; print(f'{variable=}')
variable=5
</code></pre> <p>is there an equivalent feature in C#?</p>
<python><c#><string-interpolation>
2023-01-26 06:10:14
2
13,481
Adam.Er8
75,242,603
16,971,617
Skipping 0 when doing frequency count
<p>This line of code does the frequency count of my image which is 2D numpy array. But I would like to ignore 0, is there a simple way to skip it?</p> <pre class="lang-py prettyprint-override"><code>freq_count = dict(zip(*np.unique(img_arr.ravel(), return_counts=True))) for i in freq_count.keys(): # Do something </code></pre>
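One simple way, as a sketch: mask the value `0` out of the arrays returned by `np.unique` before building the dict, so the 0 bin never enters the loop at all:

```python
import numpy as np

img_arr = np.array([[0, 1, 1],
                    [2, 0, 2]])

vals, counts = np.unique(img_arr, return_counts=True)
keep = vals != 0                      # drop the 0 bin entirely
freq_count = {int(v): int(c) for v, c in zip(vals[keep], counts[keep])}
print(freq_count)  # {1: 2, 2: 2}
```

The `int(...)` casts are optional; they just give plain Python keys instead of NumPy scalars.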
<python><numpy>
2023-01-26 05:45:27
1
539
user16971617
75,242,539
17,696,880
Set a regex pattern to identify a repeating pattern of an enumeration, where the same match is repeated an unknown number of successive times
<pre class="lang-py prettyprint-override"><code>import re input_text = &quot;hjshhshs el principal, amplio, de gran importancia, y mΓ‘s costoso hotel de la zona costera. Es una sobrilla roja, bastante amplia y incluso cΓ³moda de llevar. Hay autos rΓ‘pidos, mΓ‘s costosos, y veloces. tambiΓ©n, hay otro tipo de autos menos costosos&quot; direct_subject_modifiers = r&quot;((?:\w+))&quot; modifier_connectors = r&quot;(?:(?:,\s*|)y|(?:,\s*|)y|,)\s*(?:(?:(?:a[ΓΊu]n|todav[Γ­i]a|incluso)\s+|)(?:de\s*gran|bastante|un\s*tanto|un\s*poco|)\s*(?:m[Γ‘a]s|menos)\s+|)&quot; regex = modifier_connectors + direct_subject_modifiers matches = re.finditer(regex, input_text, re.MULTILINE | re.IGNORECASE) input_text = re.sub(matches, lambda m: (f&quot;\(\(DESCRIP\){m[1]}\)&quot;), input_text, re.IGNORECASE) print(repr(input_text)) </code></pre> <p>How to build regex to detect a successive description of n elements that coincide in these 2 patterns <code>regex = modifier_connectors + direct_subject_modifiers</code> , repeating themselves an unknown number of times?</p> <p>The output after identifying the elements in the string, and placing them in parentheses, keep in mind that within the same string there can be more than one pattern that must be encapsulated between parentheses, in this example there are 3 of them.</p> <pre><code>&quot;hjshhshs el ((DESCRIP)principal, amplio, de gran importancia, y mΓ‘s costoso) hotel de la zona costera. Es una sobrilla ((DESCRIP)roja, bastante amplia y incluso cΓ³moda) de llevar. Hay autos ((DESCRIP)Hay autos rΓ‘pidos, mΓ‘s costosos, y veloces). tambiΓ©n, hay otro tipo de autos menos costosos&quot; </code></pre>
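The core idea for "the same match repeated an unknown number of times" is to wrap the composed unit (connector + modifier) in a non-capturing group, quantify that group, and substitute the whole run at once. A deliberately simplified sketch (the connector and modifier patterns below are toy stand-ins for the question's much richer ones, and the text is unaccented ASCII):

```python
import re

connector = r"(?:,\s*y\s+|,\s*|\s+y\s+)"       # simplified connector set
word = r"[a-z]+(?: [a-z]+)?"                   # modifier of one or two words
run = rf"{word}(?:{connector}{word}){{2,}}"    # unit repeated 2+ more times

text = "autos rapidos, mas costosos, y veloces. tambien hay"
out = re.sub(run, lambda m: f"((DESCRIP){m.group(0)})", text)
print(out)
# ((DESCRIP)autos rapidos, mas costosos, y veloces). tambien hay
```

The `{2,}` quantifier on the non-capturing group is what makes the enumeration length open-ended; the real solution would substitute the simplified `word` with the question's `direct_subject_modifiers` pattern.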
<python><python-3.x><regex><regex-group><regexp-replace>
2023-01-26 05:32:09
1
875
Matt095
75,242,488
19,980,284
Pandas find overlapping time intervals in one column based on same date in another column for different rows
<p>I have data that looks like this:</p> <pre class="lang-py prettyprint-override"><code> id Date Time assigned_pat_loc prior_pat_loc Activity 0 45546325 2/7/2011 4:29:38 EIAB^EIAB^6 NaN Admission 1 45546325 2/7/2011 5:18:22 8W^W844^A EIAB^EIAB^6 Observation 2 45546325 2/7/2011 5:18:22 8W^W844^A EIAB^EIAB^6 Transfer to 8W 3 45546325 2/7/2011 6:01:44 8W^W858^A 8W^W844^A Bed Movement 4 45546325 2/7/2011 7:20:44 8W^W844^A 8W^W858^A Bed Movement 5 45546325 2/9/2011 18:36:03 8W^W844^A NaN Discharge-Observation 6 45666555 3/8/2011 20:22:36 EIC^EIC^5 NaN Admission 7 45666555 3/9/2011 1:08:04 53^5314^A EIC^EIC^5 Admission 8 45666555 3/9/2011 1:08:04 53^5314^A EIC^EIC^5 Transfer to 53 9 45666555 3/9/2011 17:03:38 53^5336^A 53^5314^A Bed Movement </code></pre> <p>I need to find where there were multiple patients (identified with <code>id</code> column) are in the same room at the same time, the start and end times for those, the dates, and room number (<code>assigned_pat_loc</code>). <code>assigned_pat_loc</code> is the current patient location in the hospital, formatted as β€œunit^room^bed”.</p> <p>So far I've done the following:</p> <pre class="lang-py prettyprint-override"><code># Read in CSV file and remove bed number from patient location data = pd.read_csv('raw_data.csv') data['assigned_pat_loc'] = data['assigned_pat_loc'].str.replace(r&quot;([^^]+\^[^^]+).*&quot;, r&quot;\1&quot;, regex=True) # Convert Date column to datetime type patient_data['Date'] = pd.to_datetime(patient_data['Date']) # Sort dataframe by date patient_data.sort_values(by=['Date'], inplace = True) # Identify rows with duplicate room and date assignments, indicating multiple patients shared room same_room = patient_data.duplicated(subset = ['Date','assigned_pat_loc']) # Assign duplicates to new dataframe df_same_rooms = patient_data[same_room] # Remove duplicate patient ids but keep latest one no_dups = df_same_rooms.drop_duplicates(subset = ['id'], keep = 'last') # Group patients in the same rooms 
at the same times together df_shuf = pd.concat(group[1] for group in df_same_rooms.groupby(['Date', 'assigned_pat_loc'], sort=False)) </code></pre> <p>And then I'm stuck at this point:</p> <pre class="lang-py prettyprint-override"><code> id Date Time assigned_pat_loc prior_pat_loc Activity 599359 42963403 2009-01-01 12:32:25 11M^11MX 4LD^W463^A Transfer 296155 42963484 2009-01-01 16:41:55 11M^11MX EIC^EIC^2 Transfer 1373 42951976 2009-01-01 15:51:09 11M^11MX NaN Discharge 362126 42963293 2009-01-01 4:56:57 11M^11MX EIAB^EIAB^6 Transfer 362125 42963293 2009-01-01 4:56:57 11M^11MX EIAB^EIAB^6 Admission ... ... ... ... ... ... ... 268266 46381369 2011-09-09 18:57:31 54^54X 11M^1138^A Transfer 16209 46390230 2011-09-09 6:19:06 10M^1028 EIAB^EIAB^5 Admission 659699 46391825 2011-09-09 14:28:20 9W^W918 EIAB^EIAB^3 Transfer 659698 46391825 2011-09-09 14:28:20 9W^W918 EIAB^EIAB^3 Admission 268179 46391644 2011-09-09 17:48:53 64^6412 EIE^EIE^3 Admission </code></pre> <p>Where you can see different patients in the same room at the same time, but I don't know how to extract those intervals of overlap between two different rows for the same room and same times. And then to format it such that the <code>start time</code> and <code>end time</code> are related to the earlier and later times of the transpiring of a shared room between two patients. Below is the desired output.</p> <p><img src="https://i.ibb.co/wMT87hC/Screen-Shot-2023-01-26-at-12-18-29-AM.png" alt="image" /></p> <p>Where <code>r_id</code> is the <code>id</code> of the other patient sharing the same room and <code>length</code> is the number of hours that room was shared.</p>
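One way to get from per-event rows to the desired pairs, sketched on toy data (column names and the intermediate shape are assumptions): first collapse each patient's movement events into `(room, start, end)` stay intervals, then do a room-wise self-join and keep distinct patient pairs whose intervals overlap. The shared span is `max(start) .. min(end)`:

```python
import pandas as pd

# Toy per-stay intervals; in practice these come from collapsing the event log.
stays = pd.DataFrame({
    "id":    [1, 2, 3],
    "room":  ["8W^W844", "8W^W844", "8W^W844"],
    "start": pd.to_datetime(["2011-02-07 05:18", "2011-02-07 06:00",
                             "2011-02-09 19:00"]),
    "end":   pd.to_datetime(["2011-02-09 18:36", "2011-02-07 07:20",
                             "2011-02-10 02:00"]),
})

# Room-wise self-join; keep each unordered pair once via id < id_r.
pairs = stays.merge(stays, on="room", suffixes=("", "_r"))
pairs = pairs[pairs["id"] < pairs["id_r"]]
overlap = pairs[(pairs["start"] < pairs["end_r"]) &
                (pairs["start_r"] < pairs["end"])].copy()
overlap["shared_start"] = overlap[["start", "start_r"]].max(axis=1)
overlap["shared_end"] = overlap[["end", "end_r"]].min(axis=1)
overlap["length"] = ((overlap["shared_end"] - overlap["shared_start"])
                     .dt.total_seconds() / 3600).round(2)
print(overlap[["id", "id_r", "length"]].values.tolist())  # [[1, 2, 1.33]]
```

Patients 1 and 2 share room `8W^W844` from 06:00 to 07:20, i.e. about 1.33 hours; patient 3 arrives after patient 1 has left, so that pair is dropped.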
<python><pandas><dataframe><datetime>
2023-01-26 05:21:39
1
671
hulio_entredas
75,242,450
3,728,901
conda update conda: Collecting package metadata (current_repodata.json): failed
<p>Windows 11 x64, CMD</p> <pre><code>Microsoft Windows [Version 10.0.22621.1105] (c) Microsoft Corporation. All rights reserved. C:\Users\donhu&gt;conda update conda Collecting package metadata (current_repodata.json): failed CondaSSLError: OpenSSL appears to be unavailable on this machine. OpenSSL is required to download and install packages. Exception: HTTPSConnectionPool(host='repo.anaconda.com', port=443): Max retries exceeded with url: /pkgs/main/win-64/current_repodata.json (Caused by SSLError(&quot;Can't connect to HTTPS URL because the SSL module is not available.&quot;)) C:\Users\donhu&gt; </code></pre> <p><a href="https://i.sstatic.net/p72eU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p72eU.png" alt="enter image description here" /></a></p> <p>How to run command success?</p>
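This error usually means the OpenSSL DLLs that ship with Anaconda are not on `PATH` when `conda` runs from a plain `cmd` window instead of the "Anaconda Prompt". A configuration sketch of the common workaround (the install path is an assumption; adjust it to the actual Anaconda location):

```
:: Either run the command inside "Anaconda Prompt", or put the DLL
:: directory on PATH first in the current cmd session:
set PATH=%USERPROFILE%\anaconda3\Library\bin;%PATH%
conda update conda
```

If that directory does not contain `libcrypto-*.dll` / `libssl-*.dll`, the installation itself may be broken and reinstalling Anaconda is the likelier fix.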
<python><anaconda>
2023-01-26 05:11:57
1
53,313
Vy Do
75,242,307
19,980,284
Remove everything after second caret regex and apply to pandas dataframe column
<p>I have a dataframe with a column that looks like this:</p> <pre><code>0 EIAB^EIAB^6 1 8W^W844^A 2 8W^W844^A 3 8W^W858^A 4 8W^W844^A ... 826136 EIAB^EIAB^6 826137 SICU^6124^A 826138 SICU^6124^A 826139 SICU^6128^A 826140 SICU^6128^A </code></pre> <p>I just want to keep everything before the second caret, e.g.: <code>8W^W844</code>, what regex would I use in Python? Similarly <code>PACU^SPAC^06</code> would be <code>PACU^SPAC</code>. And to apply it to the whole column.</p> <p>I tried <code>r'[\\^].+$'</code> since I thought it would take the last caret and everything after, but it didn't work.</p>
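The attempted pattern fails because inside a character class `[\\^]` means "a literal backslash or a literal caret", and nothing ties the match to the *second* caret. One way (a sketch of the asker's own replace approach): capture everything up to the second caret, then drop the rest:

```python
import pandas as pd

s = pd.Series(["EIAB^EIAB^6", "8W^W844^A", "PACU^SPAC^06"])

# [^^]+ = "one or more non-caret chars"; capture field^field, discard the tail.
trimmed = s.str.replace(r"^([^^]+\^[^^]+).*$", r"\1", regex=True)
print(trimmed.tolist())  # ['EIAB^EIAB', '8W^W844', 'PACU^SPAC']
```

An equivalent non-regex alternative is `s.str.split("^").str[:2].str.join("^")`.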
<python><pandas><regex>
2023-01-26 04:39:34
2
671
hulio_entredas
75,242,212
10,487,667
Python how to open a csv file for reading that was modified in the same program?
<p>I have a <code>.csv</code> file with headers.</p> <p>I am trying to delete the header row and then open the same file for reading.</p> <p>But the first line read is still the header line. How do I delete the header line and start reading from the first line of data?</p> <p><strong>Code snippet -</strong></p> <pre><code># Sort the cleaned file on r2 df = pd.read_csv(cleaned_file + &quot;.csv&quot;, names=['r2','r5','r7','r12','r15','r70','r83']) sorted_df = df.sort_values(by=[&quot;r2&quot;], ascending=True) sorted_df.to_csv(cleaned_file_sorted_on_ts + '.csv', index=False) # Remove the header line from the cleaned_file_sorted_on_ts file cmd = &quot;tail -n +2 &quot; + cleaned_file_sorted_on_ts + &quot;.csv&quot; + &quot; &gt; tmp.csv &amp;&amp; mv tmp.csv &quot; + cleaned_file_sorted_on_ts + &quot;.csv&quot; print(cmd) proc = Popen(cmd, shell=True, stdout=PIPE) with open(cleaned_file_sorted_on_ts + &quot;.csv&quot;,&quot;r&quot;) as infile: first_line = infile.readline().strip('\n') print(&quot;First line in cleaned file = {}&quot;.format(first_line)) </code></pre> <p>The output I am getting is -</p> <pre><code>tail -n +2 /ghostcache/Run.multi.rollout/h2_lines_cleaned_sorted.csv &gt; tmp.csv &amp;&amp; mv tmp.csv /ghostcache/Run.multi.rollout/h2_lines_cleaned_sorted.csv First line in cleaned file = r2,r5,r7,r12,r15,r70,r83 Traceback (most recent call last): File &quot;process_r83.py&quot;, line 51, in &lt;module&gt; first_ts = int(float(first_line.split(',')[0])) ValueError: could not convert string to float: 'r2' </code></pre>
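Note that `Popen` returns immediately without waiting for `tail` to finish, so the old file is still in place when it is reopened; calling `proc.wait()` (or using `subprocess.run`) before the `open` would fix the race. But the shell step can be skipped entirely by writing the sorted CSV without a header in the first place. A sketch using an in-memory buffer as a stand-in for the file:

```python
import io
import pandas as pd

# In-memory stand-in for the sorted CSV (real code would use the file path).
df = pd.DataFrame({"r2": [3.5, 1.5], "r5": [4, 2]}).sort_values("r2")

buf = io.StringIO()
df.to_csv(buf, index=False, header=False)  # write data rows only, no header
buf.seek(0)
first_line = buf.readline().strip()
print(first_line)  # 1.5,2
```

With `header=False` the first line read back is already a data row, so no post-processing with `tail` is needed.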
<python><python-3.x>
2023-01-26 04:16:56
1
567
Ira
75,242,186
7,190,950
Python Selenium .value_of_css_property("background-color") returns black (0.0,0.0,0.0) for all tags
<p>I use selenium in my python code to loop through all elements in html page and retrieve their text color and background-color with <strong>element.value_of_css_property(&quot;color&quot;)</strong> and <strong>element.value_of_css_property(&quot;background-color&quot;)</strong> functions. It properly returns me the text color but the background color always returns me black (0.0,0.0,0.0).</p> <p>Below is my code.</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By # Initialize webdriver driver = webdriver.Chrome() # Navigate to webpage driver.get(&quot;https://www.example.com&quot;) # Find all elements on the page elements = driver.find_elements(By.XPATH, &quot;//*&quot;) # Iterate through elements and check color contrast for element in elements: color = element.value_of_css_property(&quot;color&quot;) bg_color = element.value_of_css_property(&quot;background-color&quot;) print(color) print(bg_color) </code></pre> <p>Any thoughts on why it always returns black as a background color and how can I fix it?</p>
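Most elements have no background color of their own: the computed value is `rgba(0, 0, 0, 0)`, which is fully *transparent* (alpha 0), not black — the page background simply shows through. A common workaround is to detect transparency and walk up to the parent (`element.find_element(By.XPATH, "..")`) until a non-transparent ancestor is found. A small helper for the alpha check, shown here without a live browser:

```python
def is_transparent(css_color):
    """True for colors like 'rgba(0, 0, 0, 0)' whose alpha channel is 0."""
    inner = css_color.strip().rstrip(")").split("(", 1)[1]
    parts = [p.strip() for p in inner.split(",")]
    return len(parts) == 4 and float(parts[3]) == 0.0

print(is_transparent("rgba(0, 0, 0, 0)"))     # True
print(is_transparent("rgba(255, 99, 0, 1)"))  # False
```

In the loop, only treat `value_of_css_property("background-color")` as the real background when `is_transparent(...)` is `False`; otherwise fall back to the ancestor's value.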
<python><selenium><selenium-webdriver>
2023-01-26 04:11:05
0
421
Arman Avetisyan
75,242,160
3,564,318
Create a dictionary of all unique keys in a column and store correlation co-efficients of other columns as associated values
<p>There is a dataset with three columns:</p> <ul> <li>Col 1 : Name_of_Village</li> <li>Col 2: Average_monthly_savings</li> <li>Col 3: networth_in_dollars</li> </ul> <p>So, I want to create a dictionary &quot;Vill_corr&quot; where the <strong>key values</strong> are the name of the villages and the associated values are the correlation co-effient between Col2 &amp; Col3 using <strong>Pandas</strong>.</p> <p>I am aware of methods of calculating the correlation co-efficients, but not sure how to store it against each Village name key,</p> <p><code>corr = df[&quot;Col2&quot;].corr(df[&quot;Col3&quot;])</code></p> <p>Please help.</p>
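One way, as a sketch on made-up numbers: iterate over `df.groupby(...)` directly, computing the per-group correlation into a dict comprehension keyed by village name:

```python
import pandas as pd

df = pd.DataFrame({
    "Name_of_Village": ["A", "A", "A", "B", "B", "B"],
    "Average_monthly_savings": [1, 2, 3, 1, 2, 3],
    "networth_in_dollars": [2, 4, 6, 3, 2, 1],
})

vill_corr = {
    village: g["Average_monthly_savings"].corr(g["networth_in_dollars"])
    for village, g in df.groupby("Name_of_Village")
}
print({k: round(v, 6) for k, v in vill_corr.items()})  # {'A': 1.0, 'B': -1.0}
```

Village A's savings and net worth are perfectly positively correlated and B's perfectly negatively, which the dict reflects.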
<python><pandas><correlation>
2023-01-26 04:04:02
1
1,716
Pragyaditya Das
75,242,071
5,800,086
No FileSystem for scheme "gs" Google Storage Connector in plain PySpark installation
<p>I have already looked at several similar questions - <a href="https://stackoverflow.com/questions/27782844/no-filesystem-for-scheme-gs-when-running-spark-job-locally">here</a>, <a href="https://stackoverflow.com/questions/55595263/how-to-fix-no-filesystem-for-scheme-gs-in-pyspark">here</a> and some other blog posts and Stack overflow questions.</p> <p>I have the below PySpark script and looking to read data from a GCS bucket</p> <pre><code>from pyspark.sql import SparkSession spark = SparkSession.builder\ .appName(&quot;GCSFilesRead&quot;)\ .getOrCreate() bucket_name=&quot;my-gcs-bucket&quot; path=f&quot;gs://{bucket_name}/path/to/file.csv&quot; df=spark.read.csv(path, header=True) print(df.head()) </code></pre> <p>which fails with the error -</p> <pre><code>py4j.protocol.Py4JJavaError: An error occurred while calling o29.csv. : org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme &quot;gs&quot; at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3443) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3466) at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574) </code></pre> <p>My environment setup <code>Dockerfile</code> is something like below:</p> <pre><code>FROM openjdk:11.0.11-jre-slim-buster # install a whole bunch of apt-get dev essential libraries (unixodbc-dev, libgdbm-dev...) # some other setup for other services # copy my repository, requirements file # install Python-3.9 and activate a venv RUN pip install pyspark==3.3.1 </code></pre> <p>There is no env variable like HADOOP_HOME, SPARK_HOME, PYSPARK_PYTHON etc. 
Just a plain installation of PySpark.</p> <p>I have tried to run -</p> <pre><code>spark = SparkSession.builder\ .appName(&quot;GCSFilesRead&quot;)\ .config(&quot;spark.jars.package&quot;, &quot;/path/to/jar/gcs-connector-hadoop3-2.2.10.jar&quot;) \ .getOrCreate() </code></pre> <p>or</p> <pre><code>spark = SparkSession.builder\ .appName(&quot;GCSFilesRead&quot;)\ .config(&quot;fs.gs.impl&quot;, &quot;com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem&quot;)\ .config(&quot;fs.AbstractFileSystem.gs.impl&quot;, &quot;com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS&quot;)\ .getOrCreate() </code></pre> <p>and some other solutions, but I am still getting the same error</p> <p>My question is -</p> <ol> <li><p><strong>in such a setup, what all do I need to do to get this script running?</strong> I have seen answers on updating pom files, core-site.xml file etc. but looks like simple pyspark installation does not come with those files</p> </li> <li><p><strong>how can I make jar installs/setup be a default spark setting in pyspark only installation?</strong> I hope to simply run this script - <code>python path/to/file.py</code> without passing any arguments with spark-submit, setting it in the sparksession.config etc. I know if we have a regular spark installation, we can add the default jars to spark-defaults.conf file, but looks like plain PySpark installation does not come with those file either</p> </li> </ol> <p>Thank you in advance!</p>
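Two details in the attempted configs look off, so a configuration sketch may help (the jar path is an assumption). `spark.jars.package` is not a recognized option — a local jar file goes under `spark.jars` (Maven coordinates would go under `spark.jars.packages`), and the connector jar should be the *shaded* build so its dependencies are bundled. Also, Hadoop filesystem keys set on the session builder need the `spark.hadoop.` prefix:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("GCSFilesRead")
    # Local jar file -> "spark.jars"; use the *shaded* connector jar.
    .config("spark.jars", "/path/to/gcs-connector-hadoop3-2.2.10-shaded.jar")
    # Hadoop FS keys need the "spark.hadoop." prefix when set here.
    .config("spark.hadoop.fs.gs.impl",
            "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
    .config("spark.hadoop.fs.AbstractFileSystem.gs.impl",
            "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS")
    .getOrCreate()
)
```

For a default that survives plain `python path/to/file.py` runs, a `spark-defaults.conf` with the same keys can be placed in a directory pointed to by the `SPARK_CONF_DIR` environment variable, even in a pip-only install.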
<python><apache-spark><pyspark><google-cloud-storage>
2023-01-26 03:35:50
1
407
kpython
75,242,037
8,076,768
Failed to import transformers.onnx.config
<blockquote> <p>Failed to import transformers.onnx.config due to DLL load failed while importing _imaging: The specified module could not be found.</p> </blockquote> <p><a href="https://i.sstatic.net/RUl2h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RUl2h.png" alt="enter image description here" /></a></p> <p>Already updated Pillow. <code>pip install --upgrade Pillow</code></p>
<python><dll><sentence-transformers>
2023-01-26 03:27:21
0
343
SaNa
75,242,002
2,975,438
How to read and process a file in Python that is too big for memory?
<p>I have a csv file that looks like:</p> <pre><code>1,2,0.2 1,3,0.4 2,1,0.5 2,3,0.8 3,1,0.1 3,2,0.6 </code></pre> <p>The first column corresponds to <code>user_a</code>, the second to <code>user_b</code>, and the third to <code>score</code>. For every <code>user_a</code>, I want to find the <code>user_b</code> value that maximizes the <code>score</code>. For this example the output should look like this (output in the form of a dictionary is preferable but not required):</p> <pre><code>1 3 0.4 2 3 0.8 3 2 0.6 </code></pre> <p>The problem is that the file is very big (millions of rows) and I am trying to find a way to do this without an out-of-memory error. Because of my environment setup I cannot use <strong>Pandas</strong>, Dask, or other packages with dataframes.</p> <p>I used the yield function to keep the memory needed for computation low, but I still get an out-of-memory error. Any advice on how to reduce memory consumption would be highly appreciated.</p>
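Since only the running maximum per `user_a` is needed, the file never has to be held in memory: a single pass with one small dict keyed by `user_a` suffices. A sketch using an in-memory string as a stand-in for the file (real code would iterate over `open(path, newline="")` instead):

```python
import csv
import io

data = "1,2,0.2\n1,3,0.4\n2,1,0.5\n2,3,0.8\n3,1,0.1\n3,2,0.6\n"

best = {}  # user_a -> (user_b, score); memory is O(distinct user_a), not O(rows)
for user_a, user_b, score in csv.reader(io.StringIO(data)):
    s = float(score)
    if user_a not in best or s > best[user_a][1]:
        best[user_a] = (user_b, s)

print(best)  # {'1': ('3', 0.4), '2': ('3', 0.8), '3': ('2', 0.6)}
```

`csv.reader` over a file object streams line by line, so memory stays proportional to the number of distinct `user_a` values rather than the row count.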
<python>
2023-01-26 03:19:51
2
1,298
illuminato
75,241,909
5,942,779
Parse unix time with pd.to_datetime and datetime.datetime.fromtimestamp
<p>I am trying to parse a Unix timestamp using <code>pd.to_datetime()</code> vs. <code>dt.datetime.fromtimestamp()</code>, but their outputs are different. Which one is correct?</p> <pre><code>import datetime as dt import pandas as pd ts = 1674853200000 print(pd.to_datetime(ts, unit='ms')) print(dt.datetime.fromtimestamp(ts / 1e3)) &gt;&gt; 2023-01-27 21:00:00 &gt;&gt; 2023-01-27 13:00:00 </code></pre>
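Both values describe the same instant: `pd.to_datetime` returns a timezone-naive datetime in UTC, while `datetime.fromtimestamp` converts to the machine's local timezone (here apparently UTC-8). Passing an explicit timezone makes the two agree:

```python
import datetime as dt
import pandas as pd

ts = 1674853200000

print(pd.to_datetime(ts, unit="ms"))  # 2023-01-27 21:00:00  (naive, UTC)
print(dt.datetime.fromtimestamp(ts / 1e3, tz=dt.timezone.utc))
# 2023-01-27 21:00:00+00:00
```

So neither is "wrong"; the difference is exactly the local UTC offset, and pinning the timezone removes the ambiguity.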
<python><pandas><datetime><timestamp>
2023-01-26 02:55:07
2
689
Scoodood
75,241,692
19,950,360
plotly dash layout update
<pre><code>app = Dash(__name__) app.layout = html.Div([ dcc.Location(id='url', refresh=False), html.Div(id='page-content') ]) @app.callback( Output('page-content', 'children'), Input('url', 'search') ) def url_check(search): project_key = re.search('project_key=(\w+)&amp;', search).group(1) if project_key == 'U9sD0DItDJ0479kiFPG8': layout = html.Div([ dcc.Dropdown(options=['bar', 'pie'], id='dropdown', multi=False, value='bar', placeholder='Select graph type'), html.Div(id='page-content'), ]) return layout else: layout = html.Div([ html.Div('failure') ]) if __name__ == '__main__': app.run_server(debug=True) </code></pre> <p>This is my code. When I receive a project_key in the URL and it matches my project_key, I want to show the dropdown and graph; if it does not match, I want to show a 404 error. How can I do that?</p>
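Two things likely trip this up: the callback reuses `id='page-content'` inside the layout it returns (Dash component ids must be unique), and the regex requires a trailing `&amp;` after the key, so it fails whenever `project_key` is the last query parameter. A sketch of a safer URL parser (the non-matching branch should also `return` a 404-style layout):

```python
import re

def project_from_search(search):
    # No trailing '&' in the pattern: the original regex fails when
    # project_key is the last query parameter, and .group(1) then
    # raises AttributeError on the None match.
    m = re.search(r"project_key=(\w+)", search or "")
    return m.group(1) if m else None

print(project_from_search("?project_key=U9sD0DItDJ0479kiFPG8"))  # U9sD0DItDJ0479kiFPG8
print(project_from_search(""))                                   # None
```

In the callback, a `None` return from this helper is the signal to return the 404 layout instead of the dropdown.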
<python><plotly><plotly-dash>
2023-01-26 02:04:11
1
315
lima
75,241,650
5,513,336
Using Pandas in Python: Splitting one column into three with possible blanks?
<p>Right now, I'm working with a csv file in which there is one column with a string.</p> <p>This file is 'animals.csv'.</p> <pre><code>Row,Animal 1,big green cat 2,small lizard 3,gigantic blue bunny </code></pre> <p>The strings are either two or three elements long.</p> <p>I'm practicing using pandas, with the <code>expand=True</code> option to separate the column into three. My ideal table would look like this:</p> <pre><code>Row,Size,Color,Animal 1,big,green,cat 2,small, ,lizard 3,gigantic,blue,bunny </code></pre> <p>But how can I deal with situations where one element is missing? In this example, &quot;small lizard&quot; has no color, but I still want to include it in the table. Here's the code I have so far.</p> <pre><code>import pandas as pd file = 'animals.csv' def copy_csv(file): filereader = pd.read_csv(file) filereader[['size', 'color', 'animal']] = filereader['Animal'].str.split(expand=True) filereader.to_csv('sorted' + 'animals.csv') copy_csv(file) </code></pre> <p>I end up with this error, which I know is happening because one of the strings (&quot;small lizard&quot; only has two elements.</p> <pre><code>ValueError: Columns must be same length as key </code></pre> <p>Any suggestions for how to solve this?</p> <p>Edit: I tried the suggestion below and tried this:</p> <pre><code>new = filereader['Animal'].str.split('\s', expand=True) </code></pre> <p>And get a little closer to the goal, but not quite:</p> <pre><code>Row,Size,Color,Animal 1,big,green,cat 2,small,lizard,None 3,gigantic,blue,bunny </code></pre> <p>Looks like I need to figure out a way to say &quot;if there are only two elements, the middle element should be None&quot;.</p>
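One way to handle the two-word rows, sketched below: split as before, then shift the animal name from the middle column into the last column wherever the third column came back `NaN` (this assumes at least one three-word row so the split produces three columns):

```python
import pandas as pd

df = pd.DataFrame({"Animal": ["big green cat", "small lizard",
                              "gigantic blue bunny"]})

parts = df["Animal"].str.split(expand=True)        # cols 0,1,2; 2-word rows get NaN
two_words = parts[2].isna()
parts.loc[two_words, 2] = parts.loc[two_words, 1]  # animal name -> last column
parts.loc[two_words, 1] = ""                       # blank color
df[["Size", "Color", "Animal"]] = parts.to_numpy()
print(df[["Size", "Color", "Animal"]].values.tolist())
# [['big', 'green', 'cat'], ['small', '', 'lizard'], ['gigantic', 'blue', 'bunny']]
```

Assigning via `to_numpy()` keeps the assignment positional, so the numeric column labels `0, 1, 2` from `split` never clash with the new names.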
<python><pandas><string><csv><split>
2023-01-26 01:54:17
0
2,124
Leia_Organa
75,241,599
18,183,907
how to save multi level dict per line?
<p>i have this dict</p> <pre><code>dd = { &quot;A&quot;: {&quot;a&quot;: {&quot;1&quot;: &quot;b&quot;, &quot;2&quot;: &quot;f&quot;}, &quot;z&quot;: [&quot;z&quot;, &quot;q&quot;]}, &quot;B&quot;: {&quot;b&quot;: {&quot;1&quot;: &quot;c&quot;, &quot;2&quot;: &quot;g&quot;}, &quot;z&quot;: [&quot;x&quot;, &quot;p&quot;]}, &quot;C&quot;: {&quot;c&quot;: {&quot;1&quot;: &quot;d&quot;, &quot;2&quot;: &quot;h&quot;}, &quot;z&quot;: [&quot;y&quot;, &quot;o&quot;]}, } </code></pre> <p>and i wanna have it formated in one line like this in a file i used</p> <pre><code>with open('file.json', 'w') as file: json.dump(dd, file, indent=1) # result { &quot;A&quot;: { &quot;a&quot;: { &quot;1&quot;: &quot;b&quot;, &quot;2&quot;: &quot;f&quot; }, &quot;z&quot;: [ &quot;z&quot;, &quot;q&quot; ] }, &quot;B&quot;: { &quot;b&quot;: { &quot;1&quot;: &quot;c&quot;, &quot;2&quot;: &quot;g&quot; }, &quot;z&quot;: [ &quot;x&quot;, &quot;p&quot; ] }, &quot;C&quot;: { &quot;c&quot;: { &quot;1&quot;: &quot;d&quot;, &quot;2&quot;: &quot;h&quot; }, &quot;z&quot;: [ &quot;y&quot;, &quot;o&quot; ] } } </code></pre> <p>i also tried but gave me string and list wrong</p> <pre><code>with open('file.json', 'w') as file: file.write('{\n' +',\n'.join(json.dumps(f&quot;{i}: {dd[i]}&quot;) for i in dd) +'\n}') # result { &quot;A: {'a': {'1': 'b', '2': 'f'}, 'z': ['z', 'q']}&quot;, &quot;B: {'b': {'1': 'c', '2': 'g'}, 'z': ['x', 'p']}&quot;, &quot;C: {'c': {'1': 'd', '2': 'h'}, 'z': ['y', 'o']}&quot; } </code></pre> <p>the result i wanna is</p> <pre><code> { &quot;A&quot;: {&quot;a&quot;: {&quot;1&quot;: &quot;b&quot;, &quot;2&quot;: &quot;f&quot;}, &quot;z&quot;: [&quot;z&quot;, &quot;q&quot;]}, &quot;B&quot;: {&quot;b&quot;: {&quot;1&quot;: &quot;c&quot;, &quot;2&quot;: &quot;g&quot;}, &quot;z&quot;: [&quot;x&quot;, &quot;p&quot;]}, &quot;C&quot;: {&quot;c&quot;: {&quot;1&quot;: &quot;d&quot;, &quot;2&quot;: &quot;h&quot;}, &quot;z&quot;: [&quot;y&quot;, &quot;o&quot;]}, } </code></pre> <p>how do i print 
the json content one line per dict while all inside is one line too?</p> <p>i plan to read it using <code>json.load</code></p>
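The second attempt fails because `json.dumps(f"...")` serializes the whole `key: value` string as one JSON string. One way that keeps everything valid JSON: dump each *value* separately (so quoting stays correct), then join one top-level key per line by hand:

```python
import json

dd = {
    "A": {"a": {"1": "b", "2": "f"}, "z": ["z", "q"]},
    "B": {"b": {"1": "c", "2": "g"}, "z": ["x", "p"]},
    "C": {"c": {"1": "d", "2": "h"}, "z": ["y", "o"]},
}

# json.dumps each value (keeps quoting valid), one top-level key per line.
body = ",\n".join(f" {json.dumps(k)}: {json.dumps(v)}" for k, v in dd.items())
text = "{\n" + body + "\n}"
print(text)
print(json.loads(text) == dd)  # True
```

Writing `text` to the file (`open('file.json', 'w').write(text)`) gives exactly the one-line-per-key layout, and `json.load` reads it back unchanged.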
<python><json>
2023-01-26 01:40:15
2
487
yvgwxgtyowvaiqndwo
75,241,502
13,860,719
How to connect two line segments without changing their directions in matplotlib?
<p>Say I have two line segments with different slopes, one has <code>x</code> ranging from -0.5 to -0.1, the other has <code>x</code> ranging from 0 to 0.5. I want to use matplotlib to plot the two line segments into one line by connecting the two line segments. This is my current code</p> <pre><code>import numpy as np from matplotlib import pyplot as plt x = np.arange(-0.5, 0.51, 0.1) y1 = np.linspace(2, 10.5, 5) y2 = np.linspace(11, 1, 6) plt.ylim([0, 14]) plt.plot(x, np.concatenate([y1,y2]), marker='o') plt.show() </code></pre> <p>This will produce the following plot <a href="https://i.sstatic.net/Ps3Cz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ps3Cz.png" alt="enter image description here" /></a></p> <p>However, what I want is to extend the two line segments until their intersection. The expected plot looks something like below <a href="https://i.sstatic.net/QCeyS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QCeyS.png" alt="enter image description here" /></a></p> <p>I want a general solution, which can generate such plot that connects any two given line segments <code>y1</code> and <code>y2</code>. Note that I only want to have 11 markers. Any suggestions?</p>
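A general recipe: take the line through the last two points of segment 1 and the line through the first two points of segment 2, solve the 2x2 linear system for their intersection, and plot segment 1 + the intersection point + segment 2 as one polyline (drawing the line without markers and scattering only the 11 data points keeps the marker count at 11). A sketch of the intersection step:

```python
import numpy as np

def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines through (p1, p2) and (p3, p4)."""
    a = np.array([[p2[0] - p1[0], -(p4[0] - p3[0])],
                  [p2[1] - p1[1], -(p4[1] - p3[1])]], dtype=float)
    b = np.array([p3[0] - p1[0], p3[1] - p1[1]], dtype=float)
    t = np.linalg.solve(a, b)[0]         # parameter along the first line
    return (float(p1[0] + t * (p2[0] - p1[0])),
            float(p1[1] + t * (p2[1] - p1[1])))

# y = x  and  y = 2 - x  meet at (1, 1)
print(line_intersection((0, 0), (1, 1), (0, 2), (1, 1)))  # (1.0, 1.0)
```

`np.linalg.solve` raises `LinAlgError` for parallel segments, which is a useful guard for the degenerate case.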
<python><python-3.x><numpy><matplotlib><plot>
2023-01-26 01:16:36
1
2,963
Shaun Han
75,241,494
363,843
Nesting Django QuerySets
<p>Is there a way to create a queryset that operates on a nested queryset?</p> <p>The simplest example I can think of to explain what I'm trying to accomplish is by demonstration.</p> <p>I would like to write code something like</p> <pre><code>SensorReading.objects.filter(reading=1).objects.filter(meter=1) </code></pre> <p>resulting in SQL looking like</p> <pre><code>SELECT * FROM ( SELECT * FROM SensorReading WHERE reading=1 ) WHERE sensor=1; </code></pre> <hr /> <p>More specifically I have a model representing readings from sensors</p> <pre><code>class SensorReading(models.Model): sensor=models.PositiveIntegerField() timestamp=models.DatetimeField() reading=models.IntegerField() </code></pre> <p>With this I am creating a queryset that annotates every sensor with the elapsed time since the previous reading in seconds</p> <pre><code>readings = ( SensorReading.objects.filter(**filters) .annotate( previous_read=Window( expression=window.Lead(&quot;timestamp&quot;), partition_by=[F(&quot;sensor&quot;),], order_by=[&quot;timestamp&quot;,], frame=RowRange(start=-1, end=0), ) ) .annotate(delta=Abs(Extract(F(&quot;timestamp&quot;) - F(&quot;previous_read&quot;), &quot;epoch&quot;))) ) </code></pre> <p>I now want to aggregate those per sensor to see the minimum and maximum elapsed time between readings from every sensor. I initially tried</p> <pre><code>readings.values(&quot;sensor&quot;).annotate(max=Max('delta'),min=Min('delta'))[0] </code></pre> <p>however, this fails because window values cannot be used inside the aggregate.</p> <p>Are there any methods or libraries to solve this without needing to resort to raw SQL? Or have I just overlooked a simpler solution to the problem?</p>
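When the ORM refuses to aggregate over a window annotation, one pragmatic fallback is to pull only `(sensor, timestamp)` pairs with `values_list` and compute the per-sensor min/max deltas in Python. A sketch on hardcoded rows standing in for the queryset result:

```python
from collections import defaultdict
from datetime import datetime

# Rows as they might come from readings.values_list("sensor", "timestamp")
rows = [
    (1, datetime(2023, 1, 1, 0, 0)),
    (1, datetime(2023, 1, 1, 0, 10)),
    (1, datetime(2023, 1, 1, 0, 40)),
    (2, datetime(2023, 1, 1, 0, 0)),
    (2, datetime(2023, 1, 1, 1, 0)),
]

per_sensor = defaultdict(list)
for sensor, ts in sorted(rows):          # sort by (sensor, timestamp)
    per_sensor[sensor].append(ts)

stats = {
    sensor: (min(deltas), max(deltas))
    for sensor, times in per_sensor.items()
    if (deltas := [(b - a).total_seconds() for a, b in zip(times, times[1:])])
}
print(stats)  # {1: (600.0, 1800.0), 2: (3600.0, 3600.0)}
```

This trades a database round-trip of raw rows for simplicity; for very large tables, a raw SQL subquery mirroring the nested `SELECT ... FROM (SELECT ...)` shape would keep the work in the database.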
<python><django><django-orm>
2023-01-26 01:14:42
1
374
Bjorn Harpe
75,241,484
2,313,307
Pandas: how to filter rows of a column to get alternating signs between every two consecutive cells?
<p>I have a data frame that looks like this</p> <pre><code>df = pd.DataFrame({&quot;a&quot;:[-0.1, -0.2, 0.2, -0.1, 0.1, 0.5, 0.6]}) df a 0 -0.1 1 -0.2 2 0.2 3 -0.1 4 0.1 5 0.5 6 0.6 </code></pre> <p>I would like to filter the rows of column <code>a</code> such that the signs of two consecutive cells are alternating.</p> <p>For example, if I want to start with a positive number in the first cell, then the filtered dataframe would look like this</p> <pre><code> a 2 0.2 3 -0.1 4 0.1 </code></pre> <p>Alternatively, if I want to start with a negative number, the dataframe would look like this</p> <pre><code> a 0 -0.1 2 0.2 3 -0.1 4 0.1 </code></pre> <p>How can I achieve this?</p> <p>Thanks!</p> <p><strong>UPDATE:</strong></p> <p>The solution I have so far for a dataframe that starts with a positive number is</p> <pre><code>df_long_first = pd.DataFrame(columns = ['a']) for i in range(len(df)): cur_val = df['a'].iloc[i] cur_sign = np.where(cur_val &gt; 0, &quot;p&quot;, &quot;n&quot;) if i == 0: df_long_first = df_long_first.append({&quot;a&quot; : cur_val}, ignore_index = True) else: last_sign = np.where(df_long_first['a'].iloc[-1] &gt; 0, &quot;p&quot;, &quot;n&quot;) if last_sign == cur_sign: continue else: df_long_first = df_long_first.append({&quot;a&quot; : cur_val}, ignore_index = True) df_long_first </code></pre> <p>I'm wondering if there is a more elegant way of doing this?</p>
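A more compact alternative to the append loop, sketched below: track only the sign required next, keep a row when its sign matches, then flip the requirement. It is still a single pass, but without growing a DataFrame row by row:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [-0.1, -0.2, 0.2, -0.1, 0.1, 0.5, 0.6]})

def alternating(df, start_positive=True):
    want = 1 if start_positive else -1   # sign required for the next kept row
    keep = []
    for idx, val in df["a"].items():
        if np.sign(val) == want:
            keep.append(idx)
            want = -want                 # flip the required sign
    return df.loc[keep]

print(alternating(df, True)["a"].tolist())   # [0.2, -0.1, 0.1]
print(alternating(df, False)["a"].tolist())  # [-0.1, 0.2, -0.1, 0.1]
```

Collecting index labels and slicing once with `df.loc[keep]` also preserves the original index, matching the desired outputs in the question.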
<python><pandas><lambda><group-by>
2023-01-26 01:12:36
1
1,419
finstats
75,241,470
9,386,819
Is there an easy way to see the earliest version of pandas that a particular function appeared in?
<p>I recently ran into an issue with calling <code>sns.scatterplot</code> where the seaborn version loaded into the backend of the platform I was using was too old. I discovered through a post here on stackoverflow that the function wasn't introduced until v.x.y.</p> <p>Is there a standard way to see a particular function's earliest appearance in its library? In my case, I'm looking to see when the <code>df.quantile()</code> method first appeared in pandas. Perhaps it was there from the earliest version. Idk. I don't see this info in the documentation. Perhaps I'm just missing it? Or does it not exist anywhere that's easily searchable?</p>
<python><pandas><documentation>
2023-01-26 01:08:31
0
414
NaiveBae
75,241,437
1,729,591
How can I execute multiple Selenium files written in Python asynchronously or in parallel
<p>I have looked online and in Stack Overflow but cannot find a solution to my problem.</p> <p>I need to execute multiple files written in Python using Selenium. I'd like to do this asynchronously or in parallel. Please note that my files are not necessarily tests.</p> <p>Ideally I'd like to accomplish the following:</p> <ol> <li>Execute my files that are stored in a single directory</li> <li>This can be done asynchronously or in parallel</li> <li>I cannot use Sauce Labs or similar web tools</li> </ol> <p>I have written files before using Robot Framework and have had success executing multiple files using Pabot but can't seem to find a similar solution for executing multiple files with a Python module. Is Selenium Grid a good approach for something like this?</p>
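One library-free approach, sketched with throwaway stand-in scripts: launch each file as its own interpreter process via `subprocess` and fan the launches out with `concurrent.futures`. Separate processes mean each Selenium file gets its own WebDriver instance, so they cannot interfere:

```python
import subprocess
import sys
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

# Stand-in scripts (real ones would drive Selenium); each runs in its own
# interpreter process.
script_dir = Path(tempfile.mkdtemp())
for i in range(3):
    (script_dir / f"job{i}.py").write_text(f"print('job {i} done')")

def run_script(path):
    result = subprocess.run([sys.executable, str(path)],
                            capture_output=True, text=True)
    return path.name, result.returncode

with ThreadPoolExecutor(max_workers=3) as pool:
    results = sorted(pool.map(run_script, sorted(script_dir.glob("*.py"))))

for name, code in results:
    print(name, code)  # job0.py 0 / job1.py 0 / job2.py 0
```

For real use, point `script_dir` at the directory of Selenium files and tune `max_workers` to how many browsers the machine can sustain; Selenium Grid only becomes necessary when spreading browsers across multiple machines.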
<python><selenium>
2023-01-26 00:59:11
0
541
Freddy
75,241,431
3,943,868
Should I use bracket or parenthesis in creating numpy array?
<p>For example, these two syntaxes seem to be the same:</p> <pre><code>In [3]: np.random.random((3,2)) Out[3]: array([[0.12612127, 0.81236009], [0.12289859, 0.89502736], [0.70360669, 0.51271339]]) In [4]: np.random.random([3,2]) Out[4]: array([[0.94723024, 0.55169203], [0.48919411, 0.22082705], [0.24072127, 0.88963255]]) </code></pre> <p>So is either one OK?</p>
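Both work: NumPy shape/size arguments accept any sequence of ints, and tuple vs. list makes no difference to the result (the tuple form is simply the documented convention). A quick check with a seeded generator:

```python
import numpy as np

# Same seed, tuple shape vs. list shape -> identical arrays.
rng1 = np.random.default_rng(42)
rng2 = np.random.default_rng(42)
a = rng1.random((3, 2))
b = rng2.random([3, 2])
print(a.shape == b.shape, np.array_equal(a, b))  # True True
```

The only practical caveat is readability: tuples signal "this is a shape" and match what NumPy's own docs use.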
<python><numpy>
2023-01-26 00:58:03
0
7,909
marlon
75,241,204
6,050,364
Why Onnxruntime runs 2-3x slower in C++ than Python?
<p>I have a code that runs 3 inference sessions one after the other. The problem that I am having is that it only runs at top performance in my Mac and the Windows VM (VMWare) that runs in my Mac. It takes between 58-68s to run my test set.</p> <p>When I ask someone else using windows (with similar hardware: Intel i7 6-8 cores) to test, it runs in 150s. If I ask the same person to run the inference using an equivalent python script, it runs 2-3x faster than that, on par with my original Mac machine.</p> <p>I have no idea what else to try. Here is the relevant part of the code:</p> <pre class="lang-cpp prettyprint-override"><code>#include &quot;onnxruntime-osx-universal2-1.13.1/include/onnxruntime_cxx_api.h&quot; // ... Ort::Env OrtEnv; Ort::Session objectNet{OrtEnv, objectModelBuffer.constData(), (size_t) objectModelBuffer.size(), Ort::SessionOptions{}}; // x3, one for each model std::vector&lt;uint16_t&gt; inputTensorValues; normalize(img, {aiPanoWidth, aiPanoHeight}, inputTensorValues); // convert the cv:Mat imp into std::vector&lt;uint16_t&gt; std::array&lt;int64_t, 4&gt; input_shape_{ 1, 3, aiPanoHeight, aiPanoWidth }; auto allocator_info = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault); Ort::Value input_tensor_ = Ort::Value::CreateTensor(allocator_info, inputTensorValues.data(), sizeof(uint16_t) * inputTensorValues.size(), input_shape_.data(), input_shape_.size(), ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16); const char* input_names[] = { &quot;images&quot; }; const char* output_names[] = { &quot;output&quot; }; std::vector&lt;Ort::Value&gt; ort_outputs = objectNet.Run(Ort::RunOptions{ nullptr }, input_names, &amp;input_tensor_, 1, output_names, 1); //... 
after this I read the output, but the step above is already 2-3x slower on C++ than Python </code></pre> <p>Some more details:</p> <ul> <li>The code above runs in the background in a worker thread (needed since GUI runs in the main thread)</li> <li>I am using float16 to reduce the footprint of the AI models</li> <li>I used the vanila onnxruntime dlls provided by Microsoft (v1.13.1)</li> <li>I compiled my code with both Mingw Gcc and VC++2022. The result is similar in both with a small advantage to VC++. I believe that other parts of my code runs faster, and not necessarily the inference.</li> <li>I don't want to run it in the GPU.</li> <li>I'm compiling with /arch:AVX /openmp -O2 and -lonnxruntime</li> </ul>
<python><c++><opencv><onnx><onnxruntime>
2023-01-26 00:14:58
1
2,687
Adriel Jr
75,241,193
12,224,591
Accessing Contents of "System.Reflection.Pointer"? (Python 3.10)
<p>I'm calling a set of C# functions from a Python 3.10 script via the <code>clr</code> module from the <code>PythonNet</code> library. One of those functions returns a pointer of type <code>System.Reflection.Pointer</code>, that points to an array of <code>float</code> values.</p> <p>I'm quite a bit confused as to how I'm exactly supposed to acquire or access the actual <code>float</code> array that the <code>System.Reflection.Pointer</code> variable is supposedly pointing to.</p> <p>In my Visual Studio 2022 IDE, I can see that the pointer variable has a few class functions, such as <code>ToString()</code> or <code>Unbox()</code>, however none of those give me the desired <code>float</code> array.</p> <p>What is the proper way of accessing the data pointed to by a <code>System.Reflection.Pointer</code> variable in Python?</p> <p>Thanks for reading my post, any guidance is appreciated.</p>
<python><pointers><python.net>
2023-01-26 00:13:05
1
705
Runsva
75,241,185
2,179,970
Why does Python logging throw an error when a handler name starts with `s`?
<h2>The Problem</h2> <p>When I run <code>main.py</code> below, it prints out <code>HELLO WORLD</code> (everything works). However, if I rename <code>console</code> in <code>LOGGING_CONFIG</code> to <code>s</code>, python throws this error: <code>AttributeError: 'ConvertingDict' object has no attribute 'handle'</code>. Why does changing a handler name cause this to happen and how can I fix it?</p> <h2>Background</h2> <p>I have an asyncio application that requires logging, but &quot;<a href="https://www.zopatista.com/python/2019/05/11/asyncio-logging/s" rel="nofollow noreferrer">the logging module uses blocking I/O when emitting records.</a>&quot; Python's <code>logging.handlers.QueueHandler</code> was built for this and I'm trying to implement the <code>QueueHandler</code> with <code>dictConfig</code>. I used the links in the references section at the bottom to put <code>main.py</code> together.</p> <h2>Code</h2> <p>This is <code>main.py</code>. Note that the filename <code>main.py</code> is important because <code>main.QueueListenerHandler</code> references it in <code>LOGGING_CONFIG</code>.</p> <pre class="lang-py prettyprint-override"><code># main.py import logging import logging.config import logging.handlers import queue import atexit # This function resolves issues when using `cfg://handlers.[name]` where # QueueListenerHandler complains that `cfg://handlers.[name]` isn't a handler. def _resolve_handlers(myhandlers): if not isinstance(myhandlers, logging.config.ConvertingList): return myhandlers # Indexing the list performs the evaluation. 
return [myhandlers[i] for i in range(len(myhandlers))] class QueueListenerHandler(logging.handlers.QueueHandler): def __init__( self, handlers, respect_handler_level=False, auto_run=True, queue=queue.Queue(-1), ): super().__init__(queue) handlers = _resolve_handlers(handlers) self._listener = logging.handlers.QueueListener( self.queue, *handlers, respect_handler_level=respect_handler_level ) if auto_run: self.start() atexit.register(self.stop) def start(self): self._listener.start() def stop(self): self._listener.stop() def emit(self, record): return super().emit(record) LOGGING_CONFIG = { &quot;version&quot;: 1, &quot;handlers&quot;: { &quot;console&quot;: { &quot;class&quot;: &quot;logging.StreamHandler&quot;, }, &quot;queue_listener&quot;: { &quot;class&quot;: &quot;main.QueueListenerHandler&quot;, &quot;handlers&quot;: [ &quot;cfg://handlers.console&quot; ], }, }, &quot;loggers&quot;: { &quot;server&quot;: { &quot;handlers&quot;: [&quot;queue_listener&quot;], &quot;level&quot;: &quot;DEBUG&quot;, &quot;propagate&quot;: False, }, }, } if __name__ == &quot;__main__&quot;: logging.config.dictConfig(LOGGING_CONFIG) logger = logging.getLogger(&quot;server&quot;) logger.debug(&quot;HELLO WORLD&quot;) </code></pre> <p>If I modify <code>LOGGING_CONFIG[&quot;handlers&quot;]</code> to this:</p> <pre class="lang-json prettyprint-override"><code>&quot;handlers&quot;: { &quot;s&quot;: { &quot;class&quot;: &quot;logging.StreamHandler&quot;, }, &quot;queue_listener&quot;: { &quot;class&quot;: &quot;main.QueueListenerHandler&quot;, &quot;handlers&quot;: [ &quot;cfg://handlers.s&quot; ], }, }, </code></pre> <p>python will throw this error:</p> <pre class="lang-bash prettyprint-override"><code>sh-3.2$ pyenv exec python main.py Exception in thread Thread-1 (_monitor): Traceback (most recent call last): File &quot;/Users/zion.perez/.pyenv/versions/3.10.6/lib/python3.10/threading.py&quot;, line 1016, in _bootstrap_inner self.run() File 
&quot;/Users/zion.perez/.pyenv/versions/3.10.6/lib/python3.10/threading.py&quot;, line 953, in run self._target(*self._args, **self._kwargs) File &quot;/Users/zion.perez/.pyenv/versions/3.10.6/lib/python3.10/logging/handlers.py&quot;, line 1548, in _monitor self.handle(record) File &quot;/Users/zion.perez/.pyenv/versions/3.10.6/lib/python3.10/logging/handlers.py&quot;, line 1529, in handle handler.handle(record) AttributeError: 'ConvertingDict' object has no attribute 'handle' </code></pre> <h2>Notes</h2> <ul> <li>Interestingly only <code>s</code> causes this issue. Any other letter works. If <code>s</code> is the first letter of the handler name (e.g. <code>sconsole</code>, <code>shandler</code>), Python will throw the exception above.</li> <li>Tested and confirmed the same behavior on MacOS with Python 3.11.1, 3.10.6, 3.9.16</li> <li>Tested on Ubuntu 22.04 with Python 3.10.6 and 3.11.0rc1</li> <li>Regarding the <code>_resolve_handlers</code> func, if the handler is <code>console</code> (does not start with <code>s</code>) the func returns <code>[&lt;StreamHandler &lt;stderr&gt; (NOTSET)&gt;]</code> and everything works. If the handler is <code>sconsole</code> (starts with <code>s</code>), the func returns <code>[{'class': 'logging.StreamHandler'}]</code>. For more background on <code>_resolve_handlers</code>, <a href="https://rob-blackbourn.medium.com/how-to-use-python-logging-queuehandler-with-dictconfig-1e8b1284e27a" rel="nofollow noreferrer">this article</a> explains why this function is needed.</li> </ul> <h2>References</h2> <ul> <li><a href="https://www.zopatista.com/python/2019/05/11/asyncio-logging/s" rel="nofollow noreferrer">https://www.zopatista.com/python/2019/05/11/asyncio-logging/s</a></li> <li><a href="https://rob-blackbourn.medium.com/how-to-use-python-logging-queuehandler-with-dictconfig-1e8b1284e27a" rel="nofollow noreferrer">https://rob-blackbourn.medium.com/how-to-use-python-logging-queuehandler-with-dictconfig-1e8b1284e27a</a></li> </ul>
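A side note that may help when debugging this: CPython's `DictConfigurator` instantiates handlers in sorted-name order (an observation from the stdlib's `logging/config.py`, not documented behavior). `console` sorts before `queue_listener`, so it is already a real handler when `cfg://handlers.console` is resolved; a name like `s` sorts after `queue_listener`, so the `cfg://` reference still sees the raw, unconverted config dict:

```python
# Sketch of the ordering hypothesis: a handler whose name sorts after
# "queue_listener" has not yet been converted from its config dict when
# cfg://handlers.<name> is resolved during queue_listener's construction.
works = sorted(["console", "queue_listener"])   # "console" is configured first
breaks = sorted(["s", "queue_listener"])        # "s" is configured last
```

This would also predict that any name sorting after `queue_listener` (not only names starting with `s`) triggers the error.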
<python><python-3.x><logging><python-logging>
2023-01-26 00:12:33
2
1,566
Zion
75,241,163
6,552,836
Apply a function to dataframe which includes previous row data
<p>I have an input dataframe for daily fruit spend which looks like this:</p> <p><code>spend_df</code></p> <pre class="lang-none prettyprint-override"><code>Date Apples Pears Grapes 01/01/22 10 47 0 02/01/22 0 22 3 03/01/22 11 0 3 ... </code></pre> <p>For each fruit, I need to apply a function using their respective parameters and inputs spends. The function includes the previous day and the current day spends, which is as follows:</p> <p><code>y = beta(1 - exp(-(theta*previous + current)/alpha))</code></p> <p><code>parameters_df</code></p> <pre class="lang-none prettyprint-override"><code>Parameter Apples Pears Grapes alpha 132 323 56 beta 424 31 33 theta 13 244 323 </code></pre> <p>My output data frame should look like this (may contain errors):</p> <p><code>profit_df</code></p> <pre class="lang-none prettyprint-override"><code>Date Apples Pears Grapes 01/01/22 30.93 4.19 0 02/01/22 265.63 31.00 1.72 03/01/22 33.90 30.99 32.99 ... </code></pre> <p>This is what I attempted:</p> <pre><code># First map parameters_df to spend_df merged_df = input_df.merge(parameters_df, on=['Apples','Pears','Grapes']) # Apply function to each row profit_df = merged_df.apply(lambda x: beta(1 - exp(-(theta*x[-1] + x)/alpha)) </code></pre>
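For reference, the formula can be vectorized per fruit with `shift()` supplying the previous day's spend. A minimal sketch using only the Apples column and its parameters from the table (day one has no previous spend, so it is treated as 0 here, which is an assumption):

```python
import numpy as np
import pandas as pd

spend = pd.DataFrame({"Apples": [10, 0, 11]})
alpha, beta, theta = 132.0, 424.0, 13.0   # Apples column of parameters_df

current = spend["Apples"]
previous = current.shift(1, fill_value=0)  # previous-day spend, 0 on day one
profit = beta * (1 - np.exp(-(theta * previous + current) / alpha))
```

The same loop over columns, looking up each fruit's parameters, avoids the merge entirely.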
<python><pandas><dataframe><apply><data-wrangling>
2023-01-26 00:09:26
2
439
star_it8293
75,241,115
13,494,917
Pandas: create a DataFrame from a CSV file with multiple delimiters
<p>I'm trying to create a pandas dataframe from a CSV that has multiple delimiters. The delimiter for the header(column names) of the CSV is a comma, the rest of the rows are TAB-delimited.</p> <p>I've tried doing things like this:</p> <pre class="lang-py prettyprint-override"><code>df = pd.read_csv('csvfile.csv', names=['Code', 'Name'], header=None, skiprows=1, sep='\t') </code></pre> <p>It's not a big deal for me to skip the header row since I know what the column names will be any way, but the above isn't working for me. Is there a way I can parse the header row differently than the rest of the data, or is it possible for me to skip the header row and just delimit by TAB?</p>
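For what it's worth, the skip-header-and-name-columns combination does work on a small synthetic file, which suggests the problem may be in the file itself (stray quoting, encoding, or extra columns); a sketch:

```python
import io
import pandas as pd

# Header row is comma-delimited, data rows are TAB-delimited.
raw = "Code,Name\nA1\tAlpha\nB2\tBeta\n"

df = pd.read_csv(io.StringIO(raw), sep="\t", skiprows=1, names=["Code", "Name"])
```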
<python><pandas><dataframe><csv>
2023-01-26 00:00:22
1
687
BlakeB9
75,241,086
14,514,276
Appending pandas dataframes in for loop
<p>The problem with the code below is that <code>df</code> is not appended by new DataFrame. When I <code>print</code> the shape it is still <code>(1,6)</code>. How can I fix it?</p> <pre><code>columns = ['name', 'precision', 'recall', 'gmean', 'f1', 'mse'] df_SMOTE = pd.DataFrame(columns=columns ) df_ENN = pd.DataFrame(columns=columns ) df_Ensemble = pd.DataFrame(columns=columns ) for name, model in zip(names, [rfc, knc, lr, svc, dtc, xgbc, cbc, lgbc]): for X, y, df in [(X_smote, y_smote, df_SMOTE), (X_enn, y_enn, df_ENN), (X_smote, y_smote, df_Ensemble)]: learner = Learner(model, X, y) learner() precision, recall, gmean, f1, mse = learner.get_metrics() df = pd.concat([df, pd.DataFrame({'name': [name], 'precision': [precision], 'recall': [recall], 'gmean': [gmean], 'f1': [f1], 'mse': [mse]})], ignore_index=True) print(df.shape) </code></pre>
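One likely culprit, assuming the metrics code itself is fine: `pd.concat` returns a new object, and assigning it to the loop variable `df` only rebinds that local name; `df_SMOTE` and friends are never modified. A framework-free sketch of the usual fix, collecting rows in plain lists and building each DataFrame once (the names and values are stand-ins, not the poster's real models):

```python
import pandas as pd

rows = {"SMOTE": [], "ENN": [], "Ensemble": []}   # one list of rows per frame
for name in ["rfc", "knc"]:                       # stand-ins for the models
    for key in rows:
        # stand-in metrics; the real code would call learner.get_metrics()
        rows[key].append({"name": name, "precision": 0.9, "recall": 0.8})

frames = {key: pd.DataFrame(r) for key, r in rows.items()}
```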
<python><pandas><dataframe>
2023-01-25 23:53:26
1
693
some nooby questions
75,241,084
17,274,113
WinError 2: The system cannot find the file specified: (Cannot set explicit path to .exe file)
<p>I am attempting to use the package WhiteboxTools. I have installed it to my python environment and imported it to my Jupyter notebook script. The whitebox documentation suggests explicitly setting the path to the executable file if it is not in the same folder as the script being written with the following:</p> <pre><code>wbt.set_whitebox_dir('/local/path/to/whitebox/binary/') </code></pre> <p>Another reason I believe this to be the problem is that I set the path to this file in my first code chunk: <code>wbt.set_whitebox_dir('C:\\Users\\maxduso.stu\\Anaconda3\\envs\\geo\\pkgs\\whitebox_tools-2.2.0-py39hf21820d_2\\Library\\bin\\whitebox_tools.exe')</code>, but the error arises in the next code chunk which calls one of the tools:</p> <pre><code>wbt.breach_depressions_least_cost( &quot;C:\\Users\\maxduso.stu\\Desktop\\FCOR_599\\project_work\\data\\tif_folder\\full_pa.tif&quot;, &quot;C:\\Users\\maxduso.stu\\Desktop\\FCOR_599\\project_work\\data\\tif_folder\\sa_breached_dem.tif&quot;, dist = 10, #maximum search distancefor breach paths in cells max_cost=None, min_dist=True, flat_increment=None, fill=True ) [WinError 267] The directory name is invalid: 'C:\\Users\\maxduso.stu\\Anaconda3\\envs\\geo\\pkgs\\whitebox_tools-2.2.0-py39hf21820d_2\\Library\\bin\\whitebox_tools.exe' </code></pre> <p><code>whitebox_tools</code> is one of two files I attempted, the other being <code>whitebox_gui</code>. These files were suggested to me when I searched for &quot;.exe&quot; in the whitebox tools folder within my environment folder. That said, they are type = Application and so that brings me to the question: why did windows autofill &quot;whitebox_tools.exe&quot; in the file search bar but find an application file? Also, considering I cannot find a file of type .exe, where do I go from here?</p> <p>Note: the tools also don't run without the path set.</p> <p>Thanks for reading. I hope the question was clear enough.</p>
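One hypothesis consistent with the error text: WinError 267 ("The directory name is invalid") is what Windows raises when a file path is used where a directory is expected, and `set_whitebox_dir()` is shown in the docs taking `'/local/path/to/whitebox/binary/'`, a folder. A sketch of deriving the folder from an executable path (the path below is illustrative, not the poster's):

```python
import os

exe_path = "C:/tools/whitebox/whitebox_tools.exe"  # illustrative path to the binary
wbt_dir = os.path.dirname(exe_path) + "/"          # pass the containing folder instead
```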
<python><exe><executable>
2023-01-25 23:53:07
0
429
Max Duso
75,241,034
6,060,841
How to make uploading multiple files with Webdav faster?
<p>Currently, I am developing a script that involves uploading using <a href="https://en.wikipedia.org/wiki/WebDAV" rel="nofollow noreferrer">WebDAV</a>. I am able to make (empty) directories and upload files fine. However, I have not been able to find a way to upload an entire directory or multiple files at once.</p> <p><strong>So, I have to upload a directory by making each individual parent directory and uploading each file one by one</strong>. The more files in a directory, the longer it takes. Using the script below, it takes me around 5 minutes to upload a 100 megabyte git project.</p> <p>Importantly, a small quantity of bigger files upload at a much faster speed than a large quantity of small files. Unfortunately, I can't decompress a file on the website I am uploading too, and I don't think most WebDAV applications support that either, or I would just upload a tarball.</p> <p>So I was wondering is there any way to upload multiple files faster using WebDAV?</p> <pre><code>#!/usr/bin/env python import os, sys, subprocess import webdav4.client mycon = webdav4.client.Client(&quot;https://example.com&quot;, \ auth=(&quot;username&quot;, &quot;password&quot;)) dicti = {} # Find directories within path dicti[&quot;dirs&quot;] = subprocess.run(\ ('find', '-type', 'd', '-print0'),\ capture_output=True, text=True).stdout.split('\0') # Find files within path dicti[&quot;files&quot;] = subprocess.run(\ ('find', '-type', 'f', '-print0'),\ capture_output=True, text=True).stdout.split('\0') for ky in ('dirs', 'files'): for i in dicti[ky]: if not mycon.exists('dest/' + i): if ky == &quot;dirs&quot;: print(mycon.mkdir('dest/' + i)) else: print(mycon.upload_file(i, 'dest/' + i)) </code></pre>
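Since each upload is a separate HTTP round trip, the usual first step is to overlap the requests with a thread pool (uploads are I/O-bound, so threads help despite the GIL). A framework-free sketch; `upload_one` stands in for `mycon.upload_file`, and whether webdav4's client is safe to share across threads should be verified before doing this with one shared client:

```python
from concurrent.futures import ThreadPoolExecutor

def upload_one(path):
    # stand-in for: mycon.upload_file(path, "dest/" + path)
    return path

files = ["a.txt", "b.txt", "c.txt"]
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(upload_one, files))   # up to 8 uploads in flight
```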
<python><performance><upload><webdav><multiple-file-upload>
2023-01-25 23:43:45
1
997
Maximilian Ballard
75,240,766
1,816,745
Problem converting an image for a 3-color e-ink display
<p>I am trying to process an image file into something that can be displayed on a Black/White/Red e-ink display, but I am running into a problem with the output resolution.</p> <p>Based on the example code for the display, it expects two arrays of bytes (one for Black/White, one for Red), each 15,000 bytes. The resolution of the e-ink display is 400x300.</p> <p>I'm using the following Python script to generate two BMP files: one for Black/White and one for Red. This is all working, but the file sizes are 360,000 bytes each, which won't fit in the ESP32 memory. The input image (a PNG file) is 195,316 bytes.</p> <p>The library I'm using has a function called <code>EPD_4IN2B_V2_Display(BLACKWHITEBUFFER, REDBUFFER);</code>, which wants the full image (one channel for BW, one for Red) to be in memory. But, with these image sizes, it won't fit on the ESP32. And, the example uses 15KB for each color channel (BW, R), so I feel like I'm missing something in the image processing necessary to make this work.</p> <p>Can anyone shed some light on what I'm missing? How would I update the Python image-processing script to account for this?</p> <p>I am using the <a href="https://www.waveshare.com/product/4.2inch-e-paper-b.htm" rel="nofollow noreferrer">Waveshare 4.2inch E-Ink</a> display and the <a href="https://www.waveshare.com/e-paper-esp32-driver-board.htm" rel="nofollow noreferrer">Waveshare ESP32 driver board</a>. 
A lot of the Python code is based on <a href="https://stackoverflow.com/questions/55988123/convert-full-color-image-to-three-color-image-for-e-ink-display">this StackOverflow post</a> but I can't seem to find the issue.</p> <p><a href="https://i.sstatic.net/DRjU3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DRjU3.png" alt="Input Image" /></a> <a href="https://i.sstatic.net/lSxlx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lSxlx.png" alt="BW" /></a> <a href="https://i.sstatic.net/ykjfu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ykjfu.png" alt="Red" /></a></p> <pre><code>import io import traceback from wand.image import Image as WandImage from PIL import Image # This function takes as input a filename for an image # It resizes the image into the dimensions supported by the ePaper Display # It then remaps the image into a tri-color scheme using a palette (affinity) # for remapping, and the Floyd Steinberg algorithm for dithering # It then splits the image into two component parts: # a white and black image (with the red pixels removed) # a white and red image (with the black pixels removed) # It then converts these into PIL Images and returns them # The PIL Images can be used by the ePaper library to display def getImagesToDisplay(filename): print(filename) red_image = None black_image = None try: with WandImage(filename=filename) as img: img.resize(400, 300) with WandImage() as palette: with WandImage(width = 1, height = 1, pseudo =&quot;xc:red&quot;) as red: palette.sequence.append(red) with WandImage(width = 1, height = 1, pseudo =&quot;xc:black&quot;) as black: palette.sequence.append(black) with WandImage(width = 1, height = 1, pseudo =&quot;xc:white&quot;) as white: palette.sequence.append(white) palette.concat() img.remap(affinity=palette, method='floyd_steinberg') red = img.clone() black = img.clone() red.opaque_paint(target='black', fill='white') black.opaque_paint(target='red', fill='white') red_image 
= Image.open(io.BytesIO(red.make_blob(&quot;bmp&quot;))) black_image = Image.open(io.BytesIO(black.make_blob(&quot;bmp&quot;))) red_bytes = io.BytesIO(red.make_blob(&quot;bmp&quot;)) black_bytes = io.BytesIO(black.make_blob(&quot;bmp&quot;)) except Exception as ex: print ('traceback.format_exc():\n%s',traceback.format_exc()) return (red_image, black_image, red_bytes, black_bytes) if __name__ == &quot;__main__&quot;: print(&quot;Running...&quot;) file_path = &quot;testimage-tree.png&quot; with open(file_path, &quot;rb&quot;) as f: image_data = f.read() red_image, black_image, red_bytes, black_bytes = getImagesToDisplay(file_path) print(&quot;bw: &quot;, red_bytes) print(&quot;red: &quot;, black_bytes) black_image.save(&quot;output/bw.bmp&quot;) red_image.save(&quot;output/red.bmp&quot;) print(&quot;BW file size:&quot;, len(black_image.tobytes())) print(&quot;Red file size:&quot;, len(red_image.tobytes())) </code></pre>
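The 15,000-byte figure is consistent with 1 bit per pixel: 400 Γ— 300 / 8 = 15,000. The BMPs being produced are still multiple bytes per pixel, which is why they come out at 360,000 bytes. A sketch of packing one channel down to a bilevel buffer with Pillow (which bit value means "ink" depends on the panel, so treat the polarity as an assumption to check against the display's docs):

```python
from PIL import Image

img = Image.new("RGB", (400, 300), "white")  # stand-in for the BW or red channel
packed = img.convert("1").tobytes()          # 1 bit per pixel, 8 pixels per byte
```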
<python><image-processing><esp32><dithering><e-ink>
2023-01-25 22:57:21
2
1,000
Mike Buss
75,240,753
11,622,712
How to calculate an accumulated value conditionally
<p>I have the following pandas dataframe:</p> <pre><code>diff_hours stage sensor 0 0 20 0 0 21 0 0 21 1 0 22 5 0 21 0 0 22 0 1 20 7 1 23 0 1 24 0 3 25 0 3 28 6 0 21 0 0 22 </code></pre> <p>I need to calculate an accumulated value of <code>diff_hours</code> while <code>stage</code> is growing. When <code>stage</code> drops to 0, the accumulated value of <code>diff_hours</code> should restart.</p> <p>This is the expected result:</p> <pre><code>acc_hours stage sensor 0 0 20 0 0 21 0 0 21 1 0 22 6 0 21 6 0 22 6 1 20 13 1 23 13 1 24 13 3 25 13 3 28 0 0 21 0 0 22 </code></pre>
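A related generic pattern (not a drop-in for the exact shifted output above): start a new group every time `stage` returns to 0 after being non-zero, then take a cumulative sum within each group:

```python
import pandas as pd

df = pd.DataFrame({"diff_hours": [0, 1, 5, 0, 7, 0, 6],
                   "stage":      [0, 0, 0, 1, 1, 0, 0]})

# A reset happens where stage is 0 but the previous stage was not.
reset = (df["stage"].eq(0) & df["stage"].shift(fill_value=0).ne(0)).cumsum()
df["acc_hours"] = df.groupby(reset)["diff_hours"].cumsum()
```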
<python><pandas>
2023-01-25 22:56:01
1
2,998
Fluxy
75,240,720
3,221,407
S3 Select Query JSON for nested value when keys are dynamic
<p>I have a JSON object in S3 which follows this structure:</p> <pre><code>&lt;code&gt; : { &lt;client&gt;: &lt;value&gt; } </code></pre> <p>For example,</p> <pre><code> { &quot;code_abc&quot;: { &quot;client_1&quot;: 1, &quot;client_2&quot;: 10 }, &quot;code_def&quot;: { &quot;client_2&quot;: 40, &quot;client_3&quot;: 50, &quot;client_5&quot;: 100 }, ... } </code></pre> <p>I am trying to retrieve the <strong>numerical value</strong> with an S3 Select query, where the &quot;code&quot; and the &quot;client&quot; are populated dynamically with each query.</p> <p>So far I have tried:</p> <pre><code>sql_exp = f&quot;SELECT * from s3object[*][*] s where s.{proc}.{client_name} IS NOT NULL&quot; sql_exp = f&quot;SELECT * from s3object s where s.{proc}[*].{client_name}[*] IS NOT NULL&quot; </code></pre> <p>as well as without the asterisk inside the square brackets, but nothing works, I get <code>ClientError: An error occurred (ParseUnexpectedToken) when calling the SelectObjectContent operation: Unexpected token found LITERAL:UNKNOWN at line 1, column X</code> (depending on the length of the query string)</p> <p>Within the function defining the object, I have:</p> <pre><code>resp = s3.select_object_content( Bucket=&lt;bucket&gt;, Key=&lt;filename&gt;, ExpressionType=&quot;SQL&quot;, Expression=sql_exp, InputSerialization={'JSON': {&quot;Type&quot;: &quot;Document&quot;}}, OutputSerialization={&quot;JSON&quot;: {}}, ) </code></pre> <p>Is there something off in the way I define the object serialization? How can I fix the query so I can retrieve the desired numerical value on the fly when I provide ”code” and β€œclient”?</p>
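One detail that may matter when interpolating names into the expression: S3 Select identifiers can be double-quoted, which keeps names with unusual characters from tripping the parser. This is a sketch of the string construction only, with illustrative values; the boto3 call is unchanged:

```python
proc, client_name = "code_abc", "client_1"   # illustrative dynamic values

# Double-quote each path step rather than splicing bare identifiers.
sql_exp = f'SELECT s."{proc}"."{client_name}" FROM s3object s'
```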
<python><json><amazon-s3><boto3><amazon-s3-select>
2023-01-25 22:51:35
1
432
nvergos
75,240,492
9,855,588
python: evaluate 2 values against each other if not None
<p>Is it possible to do an evaluation, but only if both values aren't <code>None</code>?</p> <pre><code>foo=a bar=a if foo==bar: pass </code></pre> <p>But I need it as long as foo and bar are not <code>None</code>. Basically if both values are None, don't do the evaluation?</p>
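A minimal sketch of guarding the comparison, using `is not None` (the idiomatic way to test for None, since `==` can be overridden by custom classes):

```python
def equal_if_present(foo, bar):
    # Compare only when both sides are actual values; otherwise skip.
    if foo is not None and bar is not None:
        return foo == bar
    return False
```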
<python><python-3.x>
2023-01-25 22:24:55
2
3,221
dataviews
75,240,465
9,287,587
How can pyomo & ipopt calculate different results each time
<p>I am using <code>pyomo</code> and <code>ipopt</code> to solve a nonlinear problem involving over 100 variables. I don't know enough math to understand what the options in ipopt mean.</p> <p>If my conditions are set relatively loosely, I find that every time my program is restarted, the results are far different from the last run, and even if they are <code>optimal</code>, only some of them are satisfactory to me. I've tried to make the constraints a little bit tighter, but that also reduces the probability that I'll get a valid result.</p> <p>So my approach is to loop <code>solver.solve</code> multiple times, storing each suitable result in a file and then constraining against it. If I put <code>model.cons = ConstraintList()</code> inside the loop, the program runs too slowly; if I just put <code>solver.solve(...)</code> in the loop, then as long as I don't restart the program, I get almost the same solution no matter how many times I loop.</p> <p>I am not sure whether this problem is about <code>pyomo</code> or <code>ipopt</code>. I hope someone can help me so that I can get a different solution in each cycle, thank you.</p> <p>Here is my code:</p> <pre><code># Creating the model is extremely slow, can't bear doing it in the loop model = ConcreteModel() model.x = Var(range(n),range(n),[0,1],within=NonNegativeIntegers) model.tolerance = Param(initialize=s_tolerance, mutable=True) # Some constraints model.cons = ConstraintList() for j in range(n): model.cons.add(sum(model.x[i,j,0] for i in range(n)) == data[j][3]) model.cons.add(sum(model.x[i,j,1] for i in range(n)) == data[j][4]) # Get almost the same solutions with this code while True: # Does not produce a random solution seed = random.uniform(-0.5, 0.5) model.tolerance.set_value(seed + s_tolerance) solver = SolverFactory('ipopt', keepfiles=False) results = solver.solve(model, tee=False, symbolic_solver_labels=False, options={'max_iter':2000}) if good_result(results): break </code></pre>
<python><pyomo><np><ipopt>
2023-01-25 22:22:16
0
405
scott
75,240,343
7,331,538
Scrapy request in an infinite loop until specific callback result
<p>I want to call N scrapy requests from <code>start_requests</code>. This value is dynamic since I want to loop through all pages in an API. I do not know the limit number of pages before hand. But I know that when I exceed the number of pages, the response of the API will be an empty json. I want to do something like:</p> <pre><code>url = &quot;https://example.com?page={}&quot; def start_requests(self): page = 0 while True: page += 1 yield scrapy.Request(url=url.format(page), callback=self.parse) def parse(self, response, **kwargs): data = json.loads(response.body) if 'key' in data: # parse and yield an item pass else: # do not yield an item and break while loop in start_requests </code></pre> <p>I do not know how to achieve this. Can I <code>return</code> a value from callback (instead of <code>yield</code>) when condition is met?</p>
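In Scrapy, the usual shape for this is to yield only page 1 from `start_requests` and have `parse` yield the request for page N+1 while data keeps coming back, so nothing ever needs to be returned from the callback to `start_requests`. A framework-free sketch of that stop condition, where `fetch` stands in for the HTTP layer:

```python
def fetch(page):
    # stand-in for the API call: pages 1-3 have data, page 4 returns empty JSON
    return {"key": page} if page <= 3 else {}

def crawl():
    page = 1
    while True:
        data = fetch(page)
        if "key" not in data:       # empty page: the chain stops itself
            return
        yield data
        page += 1

items = list(crawl())
```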
<python><while-loop><scrapy><request><scrapy-request>
2023-01-25 22:04:17
1
2,377
bcsta
75,240,106
6,077,239
How can I make polars cut method deal with null correctly?
<p>As of current, it seems like pl.cut cannot maintain order and handle missing value (null). For example, the following code fails, which means it cannot handle null.</p> <pre><code>import polars as pl s = pl.Series([1, 1, 4, 3, 5, 2, 2, None]) pl.cut(s, bins=[2, 4]) </code></pre> <p>Another example shows that its output will not maintain the original order of the series.</p> <pre><code>import polars as pl s = pl.Series([1, 1, 4, 3, 5, 2, 2]) pl.cut(s, bins=[2, 4]) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ ┆ break_point ┆ category β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ f64 ┆ cat β”‚ β•žβ•β•β•β•β•β•ͺ═════════════β•ͺ═════════════║ β”‚ 1.0 ┆ 2.0 ┆ (-inf, 2.0] β”‚ β”‚ 1.0 ┆ 2.0 ┆ (-inf, 2.0] β”‚ β”‚ 2.0 ┆ 2.0 ┆ (-inf, 2.0] β”‚ β”‚ 2.0 ┆ 2.0 ┆ (-inf, 2.0] β”‚ β”‚ 3.0 ┆ 4.0 ┆ (2.0, 4.0] β”‚ β”‚ 4.0 ┆ 4.0 ┆ (2.0, 4.0] β”‚ β”‚ 5.0 ┆ inf ┆ (4.0, inf] β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>But as of now, it looks like there is a workaround for the order problem (<a href="https://github.com/pola-rs/polars/issues/4286" rel="nofollow noreferrer">here</a>, and the referenced code/function is pasted at the bottom), so my question is <strong>how can I modify the function below to make it handle null correctly, i.e., just return null whenever it encounters null in the input series?</strong></p> <pre><code>from typing import Optional import polars as polars def cut( s: polars.internals.series.Series, bins: list[float], labels: Optional[list[str]] = None, break_point_label: str = &quot;break_point&quot;, category_label: str = &quot;category&quot;, maintain_order: bool = False, ) -&gt; polars.internals.frame.DataFrame: if maintain_order: _arg_sort = polars.Series(name=&quot;_arg_sort&quot;, values=s.argsort()) result = polars.cut(s, bins, labels, break_point_label, category_label) if maintain_order: result = ( result .select([ 
polars.all(), _arg_sort, ]) .sort('_arg_sort') .drop('_arg_sort') ) return result </code></pre>
<python><python-polars>
2023-01-25 21:34:18
1
1,153
lebesgue
75,240,070
2,210,825
How can custom errorbars be aligned on grouped bars?
<p>I have created a <code>sns.catplot</code> using seaborn. My goal is to obtain a barplot with error bars.</p> <p>I followed <a href="https://stackoverflow.com/questions/74540942/shift-error-bars-in-seaborn-barplot-with-two-categories">this</a> answer to error bars to my plot. However, I now find that my error bars, using the same <code>ax.errorbar</code> function no longer align to my bar plot.</p> <p>I appreciate any answers or comments as to why sorting my data frame has caused this issue.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import matplotlib import seaborn as sns data = {'Parameter': ['$ΞΌ_{max}$', '$ΞΌ_{max}$', '$ΞΌ_{max}$', '$ΞΌ_{max}$', '$ΞΌ_{max}$', '$m$', '$m$', '$m$', '$m$', '$m$', '$\\alpha_D$', '$\\alpha_D$', '$\\alpha_D$', '$\\alpha_D$', '$\\alpha_D$', '$N_{max}$', '$N_{max}$', '$N_{max}$', '$N_{max}$', '$N_{max}$', '$\\gamma_{cell}$', '$\\gamma_{cell}$', '$\\gamma_{cell}$', '$\\gamma_{cell}$', '$\\gamma_{cell}$', '$K_d$', '$K_d$', '$K_d$', '$K_d$', '$K_d$'], 'Output': ['POC', 'DOC', 'IC', 'Cells', 'Mean', 'POC', 'DOC', 'IC', 'Cells', 'Mean', 'POC', 'DOC', 'IC', 'Cells', 'Mean', 'POC', 'DOC', 'IC', 'Cells', 'Mean', 'POC', 'DOC', 'IC', 'Cells', 'Mean', 'POC', 'DOC', 'IC', 'Cells', 'Mean'], 'Total-effect': [0.9806103414992552, -7.054718234598588e-10, 0.1960778044402512, 0.2537531550865872, 0.3576103250801555, 0.1663846098641205, 1.0851909901687566, 0.2563681021056311, 0.0084168031549801, 0.3790901263233721, 0.0031054085922008, 0.0002724061050653, 0.1659030569337202, 0.2251452993113863, 0.0986065427355931, 0.0340237460462674, 0.3067235088110348, 0.3150260538485233, 0.3349234507482945, 0.24767418986353, 0.1938746960877987, -6.17103884336228e-07, 0.0041542186143554, 0.0032055759222461, 0.050308468380129, 0.0417496162986251, 2.328088857274425e-09, 0.9483137697398172, 0.9881583951740854, 0.4945554458851541], 'First-order': [0.7030107013984165, 2.266962154339895e-19, 0.0062233586910709, 0.001029343445717, 
0.1775658508838011, 0.0007896517048184, 0.7264368524472167, 0.0072701545157557, 0.0047752182357577, 0.1848179692258871, -2.123427373989929e-05, 2.395667282242805e-19, 0.0055179953736572, 0.0004377224837127, 0.0014836208959075, -1.509666411558862e-06, 6.068293373049956e-20, 0.0115237519530005, 0.0009532607225978, 0.0031188757522967, 0.0117401346791109, 3.482140934635793e-24, 0.0015109239301033, -2.9803014832201013e-08, 0.0033127572015498, 0.0015795893288074, 3.393882814623132e-17, 0.3451307225252993, 0.4106729024860886, 0.1893458035850488], 'Total Error': [0.0005752772018327, 1.3690325778564916e-09, 0.0033197127516203, 0.0042203628326116, 0.0020288385387743, 0.0007817126652407, 0.074645390474463, 0.0016832816591233, 0.0023529269720789, 0.0198658279427265, 0.0001233951911322, 0.0023340612253369, 0.0029383350061101, 0.003741247467092, 0.0022842597224178, 0.0005740976276596, 0.1017075201238418, 0.0016784578928217, 0.0037270295879161, 0.0269217763080598, 0.0009021103063017, 4.619682769520493e-07, 0.0005201826302926, 0.0005615428740041, 0.0004960744447188, 0.000910170372727, 1.0571905831111963e-09, 0.0029389557787801, 0.0054832440706334, 0.0023330928198327], 'First Error': [0.0024072925459877, 9.366089709991011e-20, 0.0002667351219131, 0.0002702376243862, 0.0007360663230718, 0.0002586411466273, 0.0409234887280223, 0.0005053286335856, 0.0003348751699561, 0.0105055834195478, 2.195881790893627e-05, 8.208495135059976e-20, 0.0001643584459509, 0.0002162523113349, 0.0001006423937987, 0.0001928274220008, 3.4836161809305005e-20, 0.0005126354796536, 0.0005972681850905, 0.0003256827716862, 0.0003252835339205, 5.013811598030501e-24, 3.247452070080876e-05, 8.972262407759052e-08, 8.946194431135658e-05, 0.0001221659592046, 2.8775799201024936e-18, 0.0033817071114312, 0.0058875798799757, 0.0023478632376529]} df = pd.DataFrame(data) # Picks outputs to show show_vars = [&quot;Mean&quot;] err_df = df.melt(id_vars=[&quot;Parameter&quot;, &quot;Output&quot;], value_vars=[&quot;Total 
Error&quot;, &quot;First Error&quot;], var_name=&quot;Error&quot;).sort_values(by=&quot;Parameter&quot;) df = df.melt(id_vars=[&quot;Parameter&quot;, &quot;Output&quot;], value_vars=[&quot;Total-effect&quot;, &quot;First-order&quot;], var_name=&quot;Sobol index&quot;, value_name=&quot;Value&quot;).sort_values(by=&quot;Parameter&quot;) # Plot grid = sns.catplot(data=df[df[&quot;Output&quot;].isin(show_vars)], x=&quot;Parameter&quot;, y=&quot;Value&quot;, col=&quot;Output&quot;, col_wrap=2, hue=&quot;Sobol index&quot;, kind=&quot;bar&quot;, aspect=1.8, legend_out=False) grid.set_titles(col_template=&quot;Sensitivity with respect to {col_name}&quot;) # Add error lines and values for ax, var in zip(grid.axes.ravel(), show_vars): # Value labels for i, c in enumerate(ax.containers): if type(c) == matplotlib.container.BarContainer: ax.bar_label(c, labels=[f'{v.get_height():.2f}' if v.get_height() &gt;= 0.01 else &quot;&lt;0.01&quot; for v in c], label_type='center') # Error bars ticklocs = ax.xaxis.get_majorticklocs() offset = ax.containers[0][0].get_width() / 2 ax.errorbar(x=np.append(ticklocs - offset, ticklocs + offset), y=df[df[&quot;Output&quot;] == var][&quot;Value&quot;], yerr=err_df[err_df[&quot;Output&quot;] == var][&quot;value&quot;], ecolor='black', linewidth=0, elinewidth=2, capsize=2) # Careful: array order matters # Change title for mean if var == &quot;Mean&quot;: ax.set_title(&quot;Average sensitivity across outputs&quot;) grid.tight_layout() </code></pre> <p>Output: <a href="https://i.sstatic.net/GQpx0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GQpx0.png" alt="enter image description here" /></a></p> <p>I did try to sort the select dataframes by doing:</p> <pre class="lang-py prettyprint-override"><code>y=df[df[&quot;Output&quot;] == var].sort_values(by=&quot;Parameter&quot;)[&quot;Value&quot;], yerr=err_df[err_df[&quot;Output&quot;] == var].sort_values(by=&quot;Parameter&quot;)[&quot;value&quot;] </code></pre> <p>This despite the fact 
that order in the data frame seems to be preserved across operations.</p>
<python><matplotlib><seaborn><errorbar><grouped-bar-chart>
2023-01-25 21:29:44
1
1,458
donkey
75,240,066
4,219,005
How to compute rolling average in pandas just for a specific date
<p>I have this example dataframe below. I created a function that does what I want: it computes a <code>Sales</code> rolling average (7 and 14 day windows) for each <code>Store</code> over the previous days and shifts it to the current date. How can I compute this <strong>only</strong> for a specific date, <code>2022-12-31</code>, for example? I have a lot of rows and I don't want to recalculate it each time I add a date.</p> <pre><code>import numpy as np import pandas as pd ex = pd.DataFrame({'Date':pd.date_range('2022-10-01', '2022-12-31'), 'Store': np.random.choice(2, len(pd.date_range('2022-10-01', '2022-12-31'))), 'Sales': np.random.choice(10000, len(pd.date_range('2022-10-01', '2022-12-31')))}) ex.sort_values(['Store','Date'], ascending=False, inplace=True) for days in [7, 14]: ex['Sales_mean_' + str(days) + '_days'] = ex.groupby('Store')[['Sales']].apply(lambda x: x.shift(-1).rolling(days).mean().shift(-days+1)) </code></pre>
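Since the window only ever looks at rows strictly before the target date, one hedged approach (a sketch, not the original function; it assumes one row per `Store` per `Date`) is to compute the statistic just for that date by slicing the preceding rows per store:

```python
import numpy as np
import pandas as pd

def rolling_mean_for_date(df, target_date, days):
    """Mean of Sales over the `days` rows immediately before target_date, per Store."""
    target_date = pd.Timestamp(target_date)
    result = {}
    for store, grp in df[df["Date"] < target_date].groupby("Store"):
        window = grp.sort_values("Date").tail(days)["Sales"]
        # mirror rolling(days): undefined until a full window exists
        result[store] = window.mean() if len(window) == days else np.nan
    return result
```

`rolling_mean_for_date(ex, '2022-12-31', 7)` would then give one value per store without recomputing anything for earlier dates.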
<python><pandas><dataframe><rolling-computation>
2023-01-25 21:29:20
1
570
RodiX
75,239,878
8,076,768
Visual studio terminal
<p>How to convert the following codes in VS Code terminal?</p> <pre><code>conda create -n cluster_topic_model python=3.7 -y conda activate cluster_topic_model </code></pre>
<python><visual-studio-code>
2023-01-25 21:08:13
1
343
SaNa
75,239,861
12,242,085
How to check whether all expected values are in the input DataFrame for pivot_table, and create and fill with 0 any values that do not exist, in Python Pandas?
<p>I have table in Python Pandas like below:</p> <p><strong>Input:</strong></p> <pre><code>df = pd.DataFrame() df[&quot;ID&quot;] = [111,222,333] df[&quot;TYPE&quot;] = [&quot;A&quot;, &quot;A&quot;, &quot;C&quot;] df[&quot;VAL_1&quot;] = [1,3,0] df[&quot;VAL_2&quot;] = [0,0,1] </code></pre> <p>df:</p> <pre><code>ID | TYPE | VAL_1 | VAL_2 -----|-------|-------|------- 111 | A | 1 | 0 222 | A | 3 | 0 333 | C | 0 | 1 </code></pre> <p>And I need to create pivot_table using code like below:</p> <pre><code>df_pivot = pd.pivot_table(df, values=['VAL_1', 'VAL_2'], index=['ID'], columns='TYPE', fill_value=0) df_pivot.columns = df_pivot.columns.get_level_values(1) + '_' + df_pivot.columns.get_level_values(0) df_pivot = df_pivot.reset_index() </code></pre> <p>df_pivot (result of above code):</p> <p><a href="https://i.sstatic.net/tj7Wl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tj7Wl.png" alt="enter image description here" /></a></p> <p><strong>Requirements:</strong></p> <ul> <li>Input df should have the following values in column &quot;TYPE&quot;: A, B, C.</li> <li>However, input df is a result of some query in SQL, so sometimes there could be lack of some values (A, B, C) in column &quot;TYPE&quot;</li> <li><em><strong>I need to check whether input df has all categories (A, B, C) in column &quot;TYPE&quot; if not in df_pivot create this category and fill by 0</strong></em></li> </ul> <p><strong>Output:</strong> And I need something like below:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>ID</th> <th>A_VAL_1</th> <th>C_VAL_1</th> <th>A_VAL_2</th> <th>C_VAL_2</th> <th>B_VAL_1</th> <th>B_VAL_2</th> </tr> </thead> <tbody> <tr> <td>111</td> <td>1</td> <td>0</td> <td>0</td> <td>0</td> <td><strong>0</strong></td> <td><strong>0</strong></td> </tr> <tr> <td>222</td> <td>3</td> <td>0</td> <td>0</td> <td>0</td> <td><strong>0</strong></td> <td><strong>0</strong></td> </tr> <tr> <td>333</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> 
<td><strong>0</strong></td> <td><strong>0</strong></td> </tr> </tbody> </table> </div> <p>As you can see value &quot;B&quot; was not in input df in column &quot;TYPE&quot;, so in df_pivot was created columns with &quot;B&quot; (B_VAL_1, B_VAL_2) filling by 0.</p> <p>How can I do that in Python Pandas ?</p>
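One way that sidesteps checking the input at all is to `reindex` the pivoted columns against the full expected `(value, TYPE)` grid, so missing categories appear as 0 columns. A sketch (the expected category list is hardcoded here):

```python
import pandas as pd

df = pd.DataFrame({"ID": [111, 222, 333],
                   "TYPE": ["A", "A", "C"],
                   "VAL_1": [1, 3, 0],
                   "VAL_2": [0, 0, 1]})

df_pivot = pd.pivot_table(df, values=["VAL_1", "VAL_2"], index=["ID"],
                          columns="TYPE", fill_value=0)

# Reindex to the full (value, TYPE) grid; absent TYPE categories become 0 columns.
expected_types = ["A", "B", "C"]
full_cols = pd.MultiIndex.from_product([["VAL_1", "VAL_2"], expected_types])
df_pivot = df_pivot.reindex(columns=full_cols, fill_value=0)

df_pivot.columns = (df_pivot.columns.get_level_values(1)
                    + "_" + df_pivot.columns.get_level_values(0))
df_pivot = df_pivot.reset_index()
```

This keeps the original pivot/flatten code intact and only adds the `reindex` step in between.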
<python><pandas><dataframe><pivot><pivot-table>
2023-01-25 21:05:58
2
2,350
dingaro
75,239,840
13,313,873
Fibonacci function can not recurse - NameError: name 'fib' is not defined
<p>When using an embedded interpreter</p> <pre><code>&gt;&gt;&gt; import IPython &gt;&gt;&gt; IPython.embed() In [1]: def fib(n): ...: if n &lt;= 1: ...: return n ...: else: ...: return fib(n-1) + fib(n-2) ...: In [2]: fib(2) --------------------------------------------------------------------------- NameError Traceback (most recent call last) Cell In[2], line 1 ----&gt; 1 fib(2) Cell In[1], line 5, in fib(n) 3 return n 4 else: ----&gt; 5 return fib(n-1) + fib(n-2) NameError: name 'fib' is not defined </code></pre> <p>Expected output:</p> <pre><code>In [2]: fib(2) Out[2]: 1 </code></pre> <p>It gets <code>NameError: name 'fib' is not defined</code>, but the name was defined just before. Why isn't the function functioning?</p>
<python><function><recursion><ipython><fibonacci>
2023-01-25 21:03:58
0
955
noob overflow
75,239,767
34,935
How to map Django TextChoices text to a choice?
<p>Suppose I have this code, inspired from <a href="https://docs.djangoproject.com/en/4.1/ref/models/fields/#enumeration-types" rel="nofollow noreferrer">the Django docs</a> about enumeration types:</p> <pre class="lang-py prettyprint-override"><code>class YearInSchool(models.TextChoices): FRESHMAN = 'FR', 'Freshman' SOPHOMORE = 'SO', 'Sophomore' JUNIOR = 'JR', 'Junior' SENIOR = 'SR', 'Senior' GRADUATE = 'GR', 'Graduate' </code></pre> <p>Now suppose I have the string &quot;Sophomore&quot;. How do I get from that to <code>YearInSchool.SOPHOMORE</code>?</p> <p>The only thing I can think of is a loop:</p> <pre class="lang-py prettyprint-override"><code>the_str = &quot;Sophomore&quot; val = None for val1, label in YearInSchool.choices: if label == the_str: val = YearInSchool(val1) break assert YearInSchool.SOPHOMORE == val </code></pre> <p>That seems awkward. Is there a better way?</p> <p><strong>EDIT</strong>: Thanks for the answers folks! I'll try them out. Just to provide more context, I am loading data from text files into a database, so the &quot;Sophomore&quot; is in a text file I've been provided that wasn't created by me. So, I'm stretching the use case for TextChoices, but it seemed a reasonable way to tie text file input to a DB field.</p>
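Since `YearInSchool.choices` is a list of `(value, label)` pairs, a reverse-lookup dict replaces the loop. A sketch with a hypothetical helper (`value_for_label` is not a Django API), demonstrated here on a plain list of pairs standing in for `.choices`:

```python
def value_for_label(choices, label):
    """Reverse-map a display label to its stored value; raises KeyError if absent."""
    return {lab: val for val, lab in choices}[label]

# With Django this would be:
#   YearInSchool(value_for_label(YearInSchool.choices, "Sophomore"))
choices = [("FR", "Freshman"), ("SO", "Sophomore"), ("JR", "Junior"),
           ("SR", "Senior"), ("GR", "Graduate")]
print(value_for_label(choices, "Sophomore"))  # "SO"
```

If many lookups are needed, the dict can be built once and reused instead of being rebuilt per call.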
<python><django><enums>
2023-01-25 20:56:35
2
21,683
dfrankow
75,239,708
9,511,844
Send message from bot to self with telethon
<p>I'm trying to send a message to myself using a Telegram bot I have created, and I got the values from <a href="https://my.telegram.org" rel="nofollow noreferrer">https://my.telegram.org</a> as well as the BotFather.</p> <pre class="lang-py prettyprint-override"><code>import telebot from telethon.sync import TelegramClient def telegram_setup(): api_id = 12345678 api_hash = 'X' token = 'X' phone = '+111' client = TelegramClient('session', api_id, api_hash) client.connect() if not client.is_user_authorized(): client.send_code_request(phone) client.sign_in(phone, input('Enter the code: ')) return client def send_message(client, message): try: entity = client.get_entity('username') client.send_message(entity, message, parse_mode='html') except Exception as e: print(e) client.disconnect() if __name__ == &quot;__main__&quot;: client = telegram_setup() message = &quot;test&quot; send_message(client, message) </code></pre> <p>The first time I ran this, it sent me a message asking for a code which I supplied. Running it again caused &quot;test&quot; to appear under &quot;Saved Messages&quot; in Telegram, rather than coming from my bot.</p> <p>Any idea what's causing this or how to resolve it?</p> <p>Thanks.</p> <p>EDIT: I realised I'm not using <code>token</code> anywhere, but I'm not sure where it goes.</p>
<python><telethon>
2023-01-25 20:49:54
1
551
J P
75,239,621
14,514,276
class variable doesn't update after __call__
<p>I have probably an easy question. Why if I run <code>get_predictions</code> method inside <code>print</code> it gives me <code>[]</code> and not a <code>[1,2,3]</code>? Value assignment should be done at object creation (<code>__call__</code>).</p> <pre><code>class Learner(): def __init__(self): self.predictions = [] def get_predictions(self): return self.predictions def __call__(self): self.predictions = [1,2,3] l = Learner() print(l.get_predictions()) </code></pre>
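For what it's worth, `__call__` does not run at object creation; that is `__init__`. `__call__` runs only when the *instance* itself is called. A minimal sketch of the distinction:

```python
class Learner:
    def __init__(self):
        self.predictions = []          # runs at Learner()

    def get_predictions(self):
        return self.predictions

    def __call__(self):
        self.predictions = [1, 2, 3]   # runs at l(), not at Learner()

l = Learner()
before = l.get_predictions()   # [] -- __call__ has not run yet
l()                            # invoking the instance triggers __call__
after = l.get_predictions()    # [1, 2, 3]
```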
<python><class><call>
2023-01-25 20:41:25
0
693
some nooby questions
75,239,620
2,985,796
Converting elements of 2D Tensor based on element mapping between old and new values?
<p>I have a 2D tensor that contains the indices into some other tensor.</p> <pre><code>old = torch.Tensor([ [1, 2, 12, 12], [0, 1, 12, 12], [3, 5, 12, 12], [7, 8, 12, 12], [6, 7, 12, 12], [9, 11, 12, 12]]) </code></pre> <p>I have another tensor that represents a mapping between elements in the <code>old</code> tensor to a <code>new</code> tensor</p> <pre><code>mapping = torch.Tensor([ [0, 0], [1, 6], [2, 1], [3, 6], [4, 2], [5, 6], [6, 3], [7, 6], [8, 4], [9, 6], [10, 5], [11, 6], [12, 6]]) </code></pre> <p>That is, the <code>[:, 0]</code> column of <code>mapping</code> represents the values found in <code>old</code>, and the <code>[:, 1]</code> column represents the values they should be converted to. Thus the desired output is this <code>new</code> tensor</p> <pre><code>new_or_desired = torch.Tensor([ [6, 1, 6, 6], [0, 6, 6, 6], [6, 6, 6, 6], [6, 4, 6, 6], [3, 6, 6, 6], [6, 6, 6, 6]]) </code></pre> <p>I have tried many iterations but my best idea yet for applying this mapping is</p> <pre><code>old[old == mapping[:, 0]] = mapping[:, 1] </code></pre> <p>But the shapes are obviously mis-matched. <strong>How can I apply the <code>mapping</code> to convert the <code>old</code> elements to the <code>new</code> element values?</strong> I think I should use <code>scatter_</code> but I can't quite figure out how to apply it correctly.</p>
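Because the first column of `mapping` is just `0..12` in order, row `k` of the second column already holds the replacement for old value `k`, so plain integer-array indexing does the whole conversion; in PyTorch this would be `new = mapping[old.long(), 1]`. The same idea sketched in NumPy with the data from the question:

```python
import numpy as np

old = np.array([[1, 2, 12, 12],
                [0, 1, 12, 12],
                [3, 5, 12, 12],
                [7, 8, 12, 12],
                [6, 7, 12, 12],
                [9, 11, 12, 12]])

# mapping[:, 0] is 0..12 in order, so mapping's second column works as a lookup table.
lookup = np.array([0, 6, 1, 6, 2, 6, 3, 6, 4, 6, 5, 6, 6])

new = lookup[old]   # fancy indexing: each element is replaced by lookup[element]
```

No `scatter_` needed; gather-style indexing is enough as long as the first mapping column is the identity `0..K`.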
<python><indexing><pytorch><tensor>
2023-01-25 20:41:21
1
7,178
KDecker
75,239,345
3,684,433
How do I use 1D gradients to compute a 2D Sobel in OpenCV with a different vector norm?
<p>OpenCV uses an implementation of a <a href="https://docs.opencv.org/4.x/d4/d86/group__imgproc__filter.html#gacea54f142e81b6758cb6f375ce782c8d" rel="nofollow noreferrer">Sobel operator defined here</a> (<a href="https://docs.opencv.org/4.x/d2/d2c/tutorial_sobel_derivatives.html" rel="nofollow noreferrer">details here</a>). In this implementation, the horizontal derivative is generated, then the vertical derivative is generated, then the gradient is computed as the L2 norm of the derivatives.</p> <p>Let's say I wanted to use the L1 norm instead. In order to prove this out, I take an image and try to get the same result from OpenCV's <code>Sobel()</code> that I get from manually calculating the L2 norm of the gradients:</p> <pre class="lang-py prettyprint-override"><code>import cv2 z_img = cv2.imread(&quot;.\\some_image.tif&quot;, cv2.IMREAD_UNCHANGED) z_px_rows = z_img.shape[0] z_px_cols = z_img.shape[1] print(f'Center pixel intensity (original): {z_img[z_px_rows // 2, z_px_cols // 2]}') gx = cv2.Sobel(z_img, cv2.CV_32F, 1, 0, ksize=13) print(f'Center pixel intensity (gx): {gx[z_px_rows // 2, z_px_cols // 2]}') gy = cv2.Sobel(z_img, cv2.CV_32F, 0, 1, ksize=13) print(f'Center pixel intensity (gy): {gy[z_px_rows // 2, z_px_cols // 2]}') mag, _ = cv2.cartToPolar(gx, gy) print(f'Center pixel intensity (homebrew sobel): {mag[z_px_rows // 2, z_px_cols // 2]}') native_sobel = cv2.Sobel(z_img, cv2.CV_32F, 1, 1, ksize=13) print(f'Center pixel intensity (native sobel): {native_sobel[z_px_rows // 2, z_px_cols // 2]}') </code></pre> <p>Here I'm using a 32-bit float image where the minimum is 0.0 and the maximum is around 600.0. The output of this is:</p> <pre><code>Center pixel intensity (original): 537.156982421875 Center pixel intensity (gx): -220087.90625 Center pixel intensity (gy): 350005.25 Center pixel intensity (homebrew sobel): 413451.78125 Center pixel intensity (native sobel): 16357.7548828125 </code></pre> <p>Obviously, something is way off. 
I would expect those last two values to be the same (not <em><strong>exactly</strong></em> the same, but definitely close). I tried normalizing the pixels in the image to the range [0, 1], which didn't help. I tried converting the images to 8-bit unsigned, which also didn't help. What have I misunderstood about the implementation that would account for this discrepancy?</p>
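One note: `Sobel(..., 1, 1, ...)` with both `dx=1` and `dy=1` asks for the mixed derivative (order 1 in x *and* order 1 in y) in a single pass, not a gradient magnitude, which is one reason it cannot match `cartToPolar(gx, gy)`. If the goal is simply an L1 norm of the two first derivatives, a hedged sketch operating on the `gx`/`gy` arrays already computed above:

```python
import numpy as np

def l1_gradient_magnitude(gx, gy):
    """|gx| + |gy| -- the L1 analogue of cartToPolar's L2 magnitude."""
    return np.abs(gx) + np.abs(gy)
```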
<python><opencv><derivative><sobel>
2023-01-25 20:12:13
3
447
maldata
75,239,326
19,094,667
Finding the positions of multiple objects in an image, but only when they are next to each other
<p>Finding the positions of multiple objects in an image, but only when they are next to each other. So I would like to use parameters to decide whether 2 or 3 are next to each other and then get the coordinates.</p> <p>At the moment I can only find one and it works. Code:</p> <pre><code>def diff(a, b): return sum((a - b) ** 2 for a, b in zip(a, b)) def g_c_o(c, _b): time.sleep(1) s_i_p = '' c_b_64 = _b.execute_script(&quot;return arguments[0].toDataURL('image/png').substring(21);&quot;, c) c_i = base64.b64decode(c_b_64) with open(r&quot;canvas.png&quot;, 'wb') as f: f.write(c_i) with open(&quot;files/important.pickle&quot;, &quot;rb&quot;) as f: d_r = pickle.load(f) if s_i_p == '' and d_r[10] is not None and d_r[10] != '': s_i_p = d_r[10] c_s_i = Image.open('canvas.png') i_s = c_s_i.size s_i = Image.open(&quot;findMe.png&quot;) w_s = s_i.size x0, y0 = w_s[0] // 2, w_s[1] // 2 p = s_i.getpixel((x0, y0))[:-1] b = (100, 0, 0) c = [] for x in range(i_s[0]): for y in range(i_s[1]): i_p_s = c_s_i.getpixel((x, y)) d = diff(i_p_s, p) if d &lt; b[0]: b = (d, x, y) x, y = b[1:] return [x, y] </code></pre> <p>And here the images, if this can help:</p> <p>Image to find:</p> <p><a href="https://i.sstatic.net/zNTol.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zNTol.png" alt="enter image description here" /></a></p> <p>Find in:</p> <p><a href="https://i.sstatic.net/AJiba.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJiba.png" alt="enter image description here" /></a></p> <p>And it works, i find this:</p> <p><a href="https://i.sstatic.net/b78jg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b78jg.png" alt="enter image description here" /></a></p> <p>Can anyone tell me if I can find 2 or 3 that are next to each other and coordinates of them, not just the one?</p>
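One way to extend the search from a single best pixel to several is to keep every pixel whose colour distance falls under a threshold and then group hits that sit close together. A sketch (the threshold and grouping radius are guesses to tune; `canvas` is anything exposing PIL's `size`/`getpixel` interface):

```python
def find_matches(canvas, target_rgb, max_dist=100, group_radius=10):
    """Return one (x, y) per cluster of nearby pixels matching target_rgb."""
    width, height = canvas.size
    hits = []
    for x in range(width):
        for y in range(height):
            pixel = canvas.getpixel((x, y))[:3]
            if sum((a - b) ** 2 for a, b in zip(pixel, target_rgb)) < max_dist:
                hits.append((x, y))
    # Greedy grouping: a hit starts a new group unless it is near an existing one.
    groups = []
    for x, y in hits:
        for gx, gy in groups:
            if abs(x - gx) <= group_radius and abs(y - gy) <= group_radius:
                break
        else:
            groups.append((x, y))
    return groups
```

`len(groups)` then tells you whether 2 or 3 objects were found, and each entry gives the coordinates of one of them.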
<python><python-imaging-library>
2023-01-25 20:10:02
0
517
Agan
75,239,229
5,722,359
How to change the default size/geometry of tkinter.filedialog.askdirectory()?
<p>How do I change the default size/geometry of <code>tkinter.filedialog.askdirectory()</code>? I find its default size small and would like to make it wider and taller upon activation.</p> <p>I am aware that:</p> <ol> <li><p>Its window size can be changed manually using the mouse pointer but that is not what I am after.</p> </li> <li><p><code>tkinter.filedialog.askdirectory()</code> is a <code>tkinter.filedialog.Directory</code> object which inherits from the <code>tkinter.commondialog.Dialog</code> base class. However, I have not yet figure out how to change this object size.</p> <pre><code># For the following classes and modules: # # options (all have default values): # # - defaultextension: added to filename if not explicitly given # # - filetypes: sequence of (label, pattern) tuples. the same pattern # may occur with several patterns. use &quot;*&quot; as pattern to indicate # all files. # # - initialdir: initial directory. preserved by dialog instance. # # - initialfile: initial file (ignored by the open dialog). preserved # by dialog instance. # # - parent: which window to place the dialog on top of # # - title: dialog title # # - multiple: if true user may select more than one file # # options for the directory chooser: # # - initialdir, parent, title: see above # # - mustexist: if true, user must pick an existing directory # def askdirectory (**options): &quot;Ask for a directory, and return the file name&quot; return Directory(**options).show() # the directory dialog has its own _fix routines. 
class Directory(commondialog.Dialog): &quot;Ask for a directory&quot; command = &quot;tk_chooseDirectory&quot; def _fixresult(self, widget, result): if result: # convert Tcl path objects to strings try: result = result.string except AttributeError: # it already is a string pass # keep directory until next time self.options[&quot;initialdir&quot;] = result self.directory = result # compatibility return result </code></pre> <p>I did an experiment trying out an option called <code>width</code>. The returned error message clearly showed it was invalid and only 4 options can be specified (i.e. <code>initialdir</code>, <code>mustexist</code>, <code>parent</code>, or <code>title</code>).</p> <pre><code> File &quot;/usr/lib/python3.10/tkinter/commondialog.py&quot;, line 45, in show s = master.tk.call(self.command, *master._options(self.options)) _tkinter.TclError: bad option &quot;-width&quot;: must be -initialdir, -mustexist, -parent, or -title </code></pre> </li> </ol> <p><a href="https://www.tcl.tk/man/tcl/TkCmd/chooseDirectory.html" rel="nofollow noreferrer">https://www.tcl.tk/man/tcl/TkCmd/chooseDirectory.html</a> tcl/tk documentation also does not provide such an option.</p>
<python><tkinter><tcl>
2023-01-25 19:59:59
0
8,499
Sun Bear
75,238,829
19,854,658
Iterating through multiple arguments in a function?
<p>How would I go about iterating through this function so that it tries all possible combinations where <strong>a</strong>, <strong>b</strong>, <strong>c</strong>, <strong>d</strong> are a range of numbers where:</p> <p><strong>a</strong> = 20 to 40, <strong>b</strong> = 80 to 100, <strong>c</strong> = 100 to 120, <strong>d</strong> = 120 to 140</p> <pre><code> def euler(a,b,c,d): my_dict = {'A1':[],'A2':[],'A3':[],'A4':[],'Number': []} y = a**5 + b**5 + c**5 + d**5 for n in range(140,161): if n**5 == y: my_dict['A1'].append(a) my_dict['A2'].append(b) my_dict['A3'].append(c) my_dict['A4'].append(d) my_dict[&quot;Number&quot;].append(n) return my_dict else: pass </code></pre> <p>Essentially I want to iterate through all combinations to find a match between <strong>a</strong> <strong>b</strong> <strong>c</strong> and <strong>d</strong>.</p> <p>Any thoughts? Thanks in advance!</p>
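The four nested ranges can be walked with `itertools.product`, and precomputing the fifth powers of the candidate `n` values turns the inner check into a dict lookup. A sketch (it returns matches as tuples rather than the dict structure above):

```python
from itertools import product

def euler_search():
    # n**5 -> n for every candidate right-hand side
    targets = {n ** 5: n for n in range(140, 161)}
    matches = []
    for a, b, c, d in product(range(20, 41), range(80, 101),
                              range(100, 121), range(120, 141)):
        y = a ** 5 + b ** 5 + c ** 5 + d ** 5
        if y in targets:
            matches.append((a, b, c, d, targets[y]))
    return matches
```

These ranges happen to contain the classic Lander-Parkin counterexample 27^5 + 84^5 + 110^5 + 133^5 = 144^5, so the search should not come back empty.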
<python>
2023-01-25 19:17:33
2
379
Jean-Paul Azzopardi
75,238,616
3,285,014
Code to generate many test case files and their contents
<p>I have hundreds of files that need to be grouped and modified. Instead, I thought it might be easier to somehow regenerate the files with correct data and formats.</p> <p>There are 3 modes; mode1, mode2 and mode3. Each mode contains 45 test cases, which are named as: (For Mode 1): mode1_test1.txt, mode1_test2.txt... (For Mode 2): mode2_test1.txt, mode2_test2.txt ...</p> <p>Let's look into mode1_test1.txt:</p> <pre><code>#Title: /mydrive/test/mode1_test1.txt #Author: Me #Description: We will test Mode1 using testcase 1 $init=bench ## Sourcing all common files: source ../common_mode1.txt ##Initiate test $init address 0x9876 data 0x1234 -type write $init address 0x8765 data 0x2344 -type write ## Test Result source ../expected_data/mode1_test1.txt quit </code></pre> <p>mode1_test2.txt</p> <pre><code>#Title: /mydrive/test/mode1_test2.txt #Author: Me #Description: We will test Mode1 using testcase 2 $init=bench ## Sourcing all common files: source ../common_mode1.txt ##Initiate test $init address 0x9876 data 0x1234 -type write $init address 0x8765 data 0x2344 -type write ## Test Result source ../expected_data/mode1_test2.txt quit </code></pre> <p>This is an example of the test case files and is functionally incorrect, which I am not too worried about.</p> <p>The code to generate these scripts needs to source the common files, initiate the testbench, read/write the same data and finally source the expected data file for checking.</p> <p>As I have mentioned, there are 45 test cases for each mode. What is the easiest way to generate these files using shell/python?</p> <p>One idea is to use the shell script shown in this <a href="https://stackoverflow.com/questions/13883767/script-to-generate-other-scripts">thread</a>.</p> <p>However, are there any other relatively simple ways to generate these files and their contents?</p> <p>Thanks in advance.</p>
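A short Python generator may be simpler than shell here. A sketch that writes all 3 x 45 = 135 files from a single template (the paths and fixed address/data lines are taken from the examples above; adjust per mode as needed):

```python
from pathlib import Path

MODES = ("mode1", "mode2", "mode3")
N_TESTS = 45

TEMPLATE = """#Title: /mydrive/test/{mode}_test{n}.txt
#Author: Me
#Description: We will test {mode_cap} using testcase {n}

$init=bench

## Sourcing all common files:
source ../common_{mode}.txt

##Initiate test
$init address 0x9876 data 0x1234 -type write
$init address 0x8765 data 0x2344 -type write

## Test Result
source ../expected_data/{mode}_test{n}.txt

quit
"""

def generate_tests(out_dir):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for mode in MODES:
        for n in range(1, N_TESTS + 1):
            body = TEMPLATE.format(mode=mode, mode_cap=mode.capitalize(), n=n)
            (out / f"{mode}_test{n}.txt").write_text(body)
```

If the per-test address/data payloads differ, the template call is the one place to inject them (e.g. from a CSV of test parameters).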
<python><shell>
2023-01-25 18:55:16
1
319
user3285014
75,238,557
13,460,543
How to insert a pre-initialized dataframe or several columns into another dataframe at a specified column position?
<p>Suppose we have the following dataframe.</p> <pre class="lang-none prettyprint-override"><code> col1 col2 col3 0 one two three 1 one two three 2 one two three 3 one two three 4 one two three </code></pre> <p>We seek to introduce 31 columns into this dataframe, each column representing a day in the month.</p> <p>Let's say we want to introduce it precisely between columns <code>col2</code> and <code>col3</code>.</p> <p>How do we achieve this?</p> <p>To make it simple, the introduced columns can be numbered from 1 to 31.</p> <p><strong>Starting source code</strong></p> <pre class="lang-py prettyprint-override"><code>import pandas as pd src = pd.DataFrame({'col1': ['one', 'one', 'one', 'one','one'], 'col2': ['two', 'two', 'two', 'two','two'], 'col3': ['three', 'three', 'three', 'three','three'], }) </code></pre>
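One sketch: build the 31 day columns as their own zero-filled frame, locate the insertion point with `get_loc`, and `concat` the column slices around it:

```python
import pandas as pd

src = pd.DataFrame({'col1': ['one'] * 5,
                    'col2': ['two'] * 5,
                    'col3': ['three'] * 5})

# Pre-initialized frame of day columns 1..31, aligned on src's index
days = pd.DataFrame(0, index=src.index, columns=range(1, 32))

pos = src.columns.get_loc('col3')          # insert just before col3
out = pd.concat([src.iloc[:, :pos], days, src.iloc[:, pos:]], axis=1)
```

The same pattern works for inserting any pre-built DataFrame; only the `pos` lookup changes.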
<python><pandas><dataframe><numpy><join>
2023-01-25 18:50:00
5
2,303
Laurent B.
75,238,504
4,935,567
How can I make ipython incognito for a session?
<p>Is there any way to force <em>IPython</em> not to save the current session's history to the history file? <a href="https://stackoverflow.com/questions/35093576/how-can-i-avoid-storing-a-command-in-ipython-history">A similar question</a> has been asked about not saving a single command, but it's not practical to do that individually for every command in a session.</p>
<python><ipython>
2023-01-25 18:45:16
2
2,618
Masked Man
75,238,435
4,565,128
How to evaluate Gaussian Process Latent Variable Model
<p>I am following a tutorial on Gaussian Process Latent Variable Model here is the link <a href="https://pyro.ai/examples/gplvm.html" rel="nofollow noreferrer">https://pyro.ai/examples/gplvm.html</a></p> <p>It is a dimension-reduction method. Now I want to evaluate the model and find the accuracy, confusion matrix is it possible to do so?</p>
<python><pyro><gaussian-process>
2023-01-25 18:39:35
1
465
Mitu Vinci
75,238,283
8,595,958
Asymmetric Swaps - minimising max/min difference in list through swaps
<p>Was doing some exercises in CodeChef and came across the <a href="https://www.codechef.com/problems/ARRSWAP?tab=statement" rel="nofollow noreferrer">Asymmetric Swaps</a> problem:</p> <blockquote> <h3>Problem</h3> <p>Chef has two arrays A and B of the same size N.</p> <p>In one operation, Chef can:</p> <ul> <li>Choose two integers i and j (1 ≤ i,j ≤ N) and swap the elements A<sub>i</sub> and B<sub>j</sub>.</li> </ul> <p>Chef came up with a task to find the minimum possible value of (A<sub>max</sub> - A<sub>min</sub>) after performing the swap operation any (possibly zero) number of times.</p> <p>Since Chef is busy, can you help him solve this task?</p> <p>Note that A<sub>max</sub> and A<sub>min</sub> denote the maximum and minimum elements of the array A respectively.</p> </blockquote> <p>I have tried the below logic for the solution. But the logic fails for some test cases, and I have no access to the failed test cases or to where exactly the below code failed to meet the required output.</p> <pre><code>T = int(input()) for _ in range(T): arraySize = int(input()) A = list(map(int, input().split())) B = list(map(int, input().split())) sortedList = sorted(A+B) minLower = sortedList[arraySize-1] - sortedList[0] # First half of the sortedList minUpper = sortedList[(arraySize*2)-1] - sortedList[arraySize] # Second half of the sortedList print(min(minLower,minUpper)) </code></pre> <p>I saw some submitted answers and didn't get the reasoning or logic behind them. Can someone point out what I am missing?</p>
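If a chain of swaps can place any N of the 2N values into A (each swap exchanges one element across the arrays, and repeated swaps can realise any selection of N values), then checking only the first and second halves of the sorted merge is not enough: the optimum is the tightest window of N consecutive values anywhere in the sorted list. A sketch of that idea (my reading of the task, so verify against the judge):

```python
def min_spread(A, B):
    """Minimum possible max(A) - min(A) after any number of A_i <-> B_j swaps."""
    n = len(A)
    s = sorted(A + B)
    # every window of n consecutive sorted values is an achievable array A
    return min(s[i + n - 1] - s[i] for i in range(n + 1))
```

For A = [1, 10], B = [4, 5] the half-window logic gives 3, while the middle window [4, 5] is achievable (swap 1 with 4, then 10 with 5) and gives 1.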
<python><data-structures>
2023-01-25 18:25:07
1
811
Mike
75,238,182
5,452,008
Best way to use match case for integers in Python
<p>What is the best way to check whether something is an integer in Python 3.10+, regardless if it is a native int, or numpy int8, int32, int64 etc. ?</p> <pre><code>import numpy as np def func(token): match token: case int(): print('integer') case str(): print('string') print('failed') func('a') &gt;&gt;&gt; string func(1) &gt;&gt;&gt; integer func(np.int64(1)) &gt;&gt;&gt; failed # this should return True </code></pre> <p>I found the answer, but someone closed the question...</p> <pre><code>import numpy as np def func(token): match token: case int() | np.int8() | np.int32() | np.int64() | np.uint8() | np.uint32() | np.uint64(): return 'integer' case np.int64(): return 'integer64' case str(): return 'string' return 'failed' </code></pre>
<python>
2023-01-25 18:16:25
0
9,295
Soerendip
75,237,939
15,283,859
Update values in dataframe based on dictionary and condition
<p>I have a dataframe and a dictionary that contains some of the columns of the dataframe and some values. I want to update the dataframe based on the dictionary values, and pick the higher value.</p> <pre><code>&gt;&gt;&gt; df1 a b c d e f 0 4 2 6 2 8 1 1 3 6 7 7 8 5 2 2 1 1 6 8 7 3 1 2 7 3 3 1 4 1 7 2 6 7 6 5 4 8 8 2 2 1 </code></pre> <p>and the dictionary is</p> <pre><code>compare = {'a':4, 'c':7, 'e':3} </code></pre> <p>So I want to check the values in columns ['a','c','e'] and replace with the value in the dictionary, if it is higher.</p> <p>What I have tried is this:</p> <pre><code>comp = pd.DataFrame(pd.Series(compare).reindex(df1.columns).fillna(0)).T df1[df1.columns] = df1.apply(lambda x: np.where(x&gt;comp, x, comp)[0] ,axis=1) </code></pre> <p>Expected Output:</p> <pre><code>&gt;&gt;&gt;df1 a b c d e f 0 4 2 7 2 8 1 1 4 6 7 7 8 5 2 4 1 7 6 8 7 3 4 2 7 3 3 1 4 4 7 7 6 7 6 5 4 8 8 2 3 1 </code></pre>
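`DataFrame.clip` with a Series aligned on columns does exactly this per-column floor, with no `apply` loop. A sketch on the data above:

```python
import pandas as pd

df1 = pd.DataFrame({'a': [4, 3, 2, 1, 1, 4],
                    'b': [2, 6, 1, 2, 7, 8],
                    'c': [6, 7, 1, 7, 2, 8],
                    'd': [2, 7, 6, 3, 6, 2],
                    'e': [8, 8, 8, 3, 7, 2],
                    'f': [1, 5, 7, 1, 6, 1]})
compare = {'a': 4, 'c': 7, 'e': 3}

cols = list(compare)
# lower= aligns the Series against the selected columns (axis=1)
df1[cols] = df1[cols].clip(lower=pd.Series(compare), axis=1)
```

Columns absent from `compare` (here `b`, `d`, `f`) are left untouched because only the selected columns are assigned back.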
<python><python-3.x><pandas><numpy><vectorization>
2023-01-25 17:53:25
2
895
Yolao_21