Dataset schema (column, dtype, observed min/max or string-length range):

QuestionId          int64    74.8M to 79.8M
UserId              int64    56 to 29.4M
QuestionTitle       string   15 to 150 chars
QuestionBody        string   40 to 40.3k chars
Tags                string   8 to 101 chars
CreationDate        date     2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount         int64    0 to 44
UserExpertiseLevel  int64    301 to 888k
UserDisplayName     string   3 to 30 chars
77,431,873
16,707,518
Averaging values in another dataframe prior to a date in first dataframe
<p>This, I admit, is a pretty specific example. I have two dataframes: the first has a date and a group:</p> <pre><code>Date Group 06/11/2023 A 05/11/2023 B 04/11/2023 A 03/11/2023 A 02/11/2023 B </code></pre> <p>The second has dates, groups and values:</p> <pre><code>Date Group Value 06/11/2023 A 5 05/11/2023 B 8 04/11/2023 A 12 03/11/2023 A 4 02/11/2023 B 9 02/11/2023 B 0 01/11/2023 A 6 01/11/2023 B 10 </code></pre> <p>I am looking to create an extra column in the first dataframe that simply looks at the backward average of the values for that group, but <em>before the date in question</em>.</p> <p>So:</p> <ul> <li><p>looking at the first row of the first table, Group A on 06/11/2023: the resultant average would be the average of the values for all prior dates for Group A, i.e. 12, 4, 6 = 7.33.</p> </li> <li><p>looking at the 2nd row of the first table, Group B on 05/11/2023, we'd have 9, 0, 10 = 6.33.</p> </li> </ul> <p>My resulting table would look like:</p> <pre><code>Date Group AvgGroupValue_PriorDate 06/11/2023 A 7.33 05/11/2023 B 6.33 04/11/2023 A 5 03/11/2023 A 6 02/11/2023 B 10 </code></pre> <p>I can see this would be a merge calculation, but I am struggling to understand how to do the &quot;average prior to date by group&quot; element.</p>
<python><pandas>
2023-11-06 14:26:47
3
341
Richard Dixon
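A minimal sketch of the merge-then-filter approach for the question above, using the posted sample data (frame and column names are illustrative): pair every row of the first frame with all same-group rows of the second, keep only strictly earlier dates, and average per original row.

```python
import pandas as pd

df1 = pd.DataFrame({
    'Date': pd.to_datetime(['06/11/2023', '05/11/2023', '04/11/2023',
                            '03/11/2023', '02/11/2023'], dayfirst=True),
    'Group': ['A', 'B', 'A', 'A', 'B'],
})
df2 = pd.DataFrame({
    'Date': pd.to_datetime(['06/11/2023', '05/11/2023', '04/11/2023',
                            '03/11/2023', '02/11/2023', '02/11/2023',
                            '01/11/2023', '01/11/2023'], dayfirst=True),
    'Group': ['A', 'B', 'A', 'A', 'B', 'B', 'A', 'B'],
    'Value': [5, 8, 12, 4, 9, 0, 6, 10],
})

# Pair every df1 row with all df2 rows of the same group, keep only the
# strictly earlier dates, then average per original df1 row.
merged = df1.reset_index().merge(df2, on='Group', suffixes=('', '_val'))
prior = merged[merged['Date_val'] < merged['Date']]
df1['AvgGroupValue_PriorDate'] = prior.groupby('index')['Value'].mean()
```

Note that this group-wise merge can grow quadratically on large frames; a per-group expanding mean or `merge_asof` may scale better.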
77,431,536
3,595,907
Zenoh subscriber not picking up payload
<p>Win 10 x64, Python 3.10</p> <p>I'm working from <a href="https://zenoh.io/docs/getting-started/first-app/" rel="nofollow noreferrer">Your first Zenoh app</a></p> <p>The publisher code,</p> <pre><code># z_sensor.py import zenoh, random, time random.seed() def read_temp(): return random.randint(15, 30) if __name__ == &quot;__main__&quot;: session = zenoh.open() key = 'myhome/kitchen/temp' pub = session.declare_publisher(key) while True: t = read_temp() buf = f&quot;{t}&quot; print(f&quot;Putting Data ('{key}': '{buf}')...&quot;) pub.put(buf) time.sleep(1) </code></pre> <p>The subscriber code,</p> <pre><code># z_subsciber.py import zenoh, time def listener(sample): print(f&quot;Received {sample.kind} ('{sample.key_expr}': '{sample.payload.decode('utf-8')}')&quot;) if __name__ == &quot;__main__&quot;: session = zenoh.open() sub = session.declare_subscriber('myhome/kitchen/temp', listener) time.sleep(60) </code></pre> <p>I start up 2 Anaconda command windows &amp; activate my Zenoh environment in both.</p> <p>I run the subscriber code in the first window followed by the publisher in the second window.</p> <p>The publisher window is pushing out the data &amp; printing to the window as expected,</p> <pre><code>Putting Data ('myhome/kitchen/temp': '28')... Putting Data ('myhome/kitchen/temp': '27')... Putting Data ('myhome/kitchen/temp': '25')... Putting Data ('myhome/kitchen/temp': '16')... Putting Data ('myhome/kitchen/temp': '19')... Putting Data ('myhome/kitchen/temp': '25')... Putting Data ('myhome/kitchen/temp': '28')... ..... </code></pre> <p>But the subscriber window does nothing. There are no print outs from the listener function.</p> <p>I installed using <code>pip</code> after installing the Rust toolchain.</p> <p>Anyone know what's going on here?</p>
<python><zenoh>
2023-11-06 13:44:31
1
3,687
DrBwts
77,431,491
11,189,280
TypeError: path should be path-like or io.BytesIO, not <class 'shiny.reactive._reactives.Value'>
<p>I am creating a simple shiny app for a dog breed classifier. The user would be able to upload a picture of a dog, and then have the shiny app display the predicted breed.</p> <p>A simplified version of the app code is as follows:</p> <pre><code>from shiny import App, Inputs, Outputs, Session, reactive, render, ui import keras import tensorflow as tf import numpy as np import pickle app_ui = ui.page_fluid( ui.input_file(&quot;dog_pic&quot;, &quot;Upload your dog picture!&quot;, accept=[&quot;.jpeg&quot;], multiple=False), ui.output_text(&quot;txt&quot;) ) def server(input, output, session): # PROCESSING IMAGE: @reactive.Calc def parsed_file(): image = tf.keras.preprocessing.image.load_img(input.dog_pic, target_size=(224, 224)) input_arr = tf.keras.preprocessing.image.img_to_array(image) input_arr = np.array([input_arr]) input_arr = input_arr.astype('float32') / 255. return input_arr # PARSING IMAGE THROUGH TO MODEL @output @render.text def txt(): model = tf.keras.models.load_model('model') dog_prediction = model.predict(parsed_file()) return f&quot;Predicted dog breed is {dog_prediction}!&quot; app = App(app_ui, server) </code></pre> <p>The error I am getting is: <a href="https://i.sstatic.net/vodeP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vodeP.png" alt="enter image description here" /></a></p> <p>What does this mean? Thanks!</p>
<python><py-shiny>
2023-11-06 13:35:28
3
465
Dieu94
77,431,444
4,628,557
pandas dataframe returning max value for a date when index isn't unique
<p>I've got a pretty simple dataframe and I want the max row for every date.</p> <pre><code> all_data = pd.DataFrame([{'Date': pd.to_datetime('1/1/2022'), 'Name': 'John', 'Score': 92}, {'Date': pd.to_datetime('1/1/2022'), 'Name': 'Mike', 'Score': 87}, {'Date': pd.to_datetime('1/1/2022'), 'Name': 'Sally', 'Score': 79}, {'Date': pd.to_datetime('1/2/2022'), 'Name': 'John', 'Score': 85}, {'Date': pd.to_datetime('1/2/2022'), 'Name': 'Mike', 'Score': 91}, {'Date': pd.to_datetime('1/2/2022'), 'Name': 'Sally', 'Score': 88}, {'Date': pd.to_datetime('1/3/2022'), 'Name': 'John', 'Score': 88}, {'Date': pd.to_datetime('1/3/2022'), 'Name': 'Mike', 'Score': 85}, {'Date': pd.to_datetime('1/3/2022'), 'Name': 'Sally', 'Score': 96}]) </code></pre> <p>This works great:</p> <pre><code>idx = all_data.groupby('Date')['Score'].idxmax() print(all_data.loc[idx]) Date Name Score 0 2022-01-01 John 92 4 2022-01-02 Mike 91 8 2022-01-03 Sally 96 </code></pre> <p>But when I set the index on the date column it doesn't work anymore since the index matches all the rows</p> <pre><code>all_data = all_data.set_index('Date') idx = all_data.groupby('Date')['Score'].idxmax() print(all_data.loc[idx]) Name Score Date 2022-01-01 John 92 2022-01-01 Mike 87 2022-01-01 Sally 79 2022-01-02 John 85 2022-01-02 Mike 91 2022-01-02 Sally 88 2022-01-03 John 88 2022-01-03 Mike 85 2022-01-03 Sally 96 </code></pre>
<python><pandas>
2023-11-06 13:27:39
1
1,463
frankd
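For reference, one way around the non-unique index (a sketch, not the only idiom): `idxmax` returns index *labels*, and with a non-unique index `.loc` on those labels fans out to every matching row. Comparing each row's score against the per-date maximum with `transform` avoids label-based lookup entirely.

```python
import pandas as pd

all_data = pd.DataFrame([
    {'Date': pd.to_datetime('1/1/2022'), 'Name': 'John',  'Score': 92},
    {'Date': pd.to_datetime('1/1/2022'), 'Name': 'Mike',  'Score': 87},
    {'Date': pd.to_datetime('1/1/2022'), 'Name': 'Sally', 'Score': 79},
    {'Date': pd.to_datetime('1/2/2022'), 'Name': 'John',  'Score': 85},
    {'Date': pd.to_datetime('1/2/2022'), 'Name': 'Mike',  'Score': 91},
    {'Date': pd.to_datetime('1/2/2022'), 'Name': 'Sally', 'Score': 88},
    {'Date': pd.to_datetime('1/3/2022'), 'Name': 'John',  'Score': 88},
    {'Date': pd.to_datetime('1/3/2022'), 'Name': 'Mike',  'Score': 85},
    {'Date': pd.to_datetime('1/3/2022'), 'Name': 'Sally', 'Score': 96},
]).set_index('Date')

# Boolean mask: keep each row whose Score equals its date-group maximum.
# groupby('Date') resolves against the index level of the same name.
best = all_data[all_data['Score'] == all_data.groupby('Date')['Score'].transform('max')]
```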
77,431,234
1,864,294
Lowest common ancestor for multiple vertices in networkx
<p>How can I compute the lowest common ancestor (LCA) for a directed graph in <code>networkx</code> for a subset of vertices?</p> <p>For example, for the graph</p> <pre><code>G = nx.DiGraph() G.add_edges_from([(1, 2), (1, 3), (3, 4), (3, 5)]) </code></pre> <p>vertex <code>3</code> is the LCA for the vertices <code>{4, 5}</code> and vertex <code>1</code> for the nodes <code>{3, 4, 5}</code>. In case it matters: all vertices are leaves.</p> <p><a href="https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.lowest_common_ancestors.lowest_common_ancestor.html#networkx.algorithms.lowest_common_ancestors.lowest_common_ancestor" rel="nofollow noreferrer"><code>nx.lowest_common_ancestor()</code></a> is not suitable since it requires a pair of vertices, but does not allow a set of vertices.</p> <p>Thanks!</p>
<python><networkx><lowest-common-ancestor>
2023-11-06 12:53:28
1
20,605
Michael Dorner
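One possible sketch for an arbitrary vertex set: intersect the ancestor sets and pick the common ancestor with no other common ancestor below it. Note the convention choice: including each node in its own ancestor set means `lca_of_set(G, {3, 4, 5})` returns `3` (a node counts as its own ancestor, which matches `nx.lowest_common_ancestor`'s behavior); dropping the `{n} |` term gives the strict-ancestor reading where the answer would be `1`.

```python
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([(1, 2), (1, 3), (3, 4), (3, 5)])

def lca_of_set(G, nodes):
    """LCA of an arbitrary vertex set in a DAG."""
    # Common ancestors = intersection of every node's ancestor set;
    # {n} is included so a node may count as its own ancestor.
    common = set.intersection(*({n} | nx.ancestors(G, n) for n in nodes))
    # The *lowest* one has no other common ancestor among its descendants.
    return next(n for n in common if not (nx.descendants(G, n) & common))
```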
77,431,211
212,063
How to define constant values of Python types defined in C++ extension with Py_LIMITED_API?
<p>I want to use <code>Py_LIMITED_API</code>. Thus, instead of defining my Python type statically, I define it dynamically, with specs and slots as follows.</p> <pre><code>struct MyType { PyObject_HEAD }; static void MyType_dealloc( MyType * self ) { PyObject_DEL( self ); } static PyObject * MyType_new( PyTypeObject * type, PyObject * args, PyObject * kwds ) { return (PyObject *)PyObject_NEW( MyType, type ); } static int MyType_init( MyType * self, PyObject * args, PyObject * kwds ) { return 0; } static PyType_Slot MyType_slots[] = { { Py_tp_doc, (void *)PyDoc_STR(&quot;MyType objects&quot;) }, { Py_tp_dealloc, (void *)&amp;MyType_dealloc }, { Py_tp_init, (void *)&amp;MyType_init }, { Py_tp_new, (void *)&amp;MyType_new }, { 0, NULL } }; static PyType_Spec MyType_spec = { &quot;MyType&quot;, sizeof(MyType), 0, Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, MyType_slots }; static PyTypeObject * MyTypePtr = NULL; static PyModuleDef MyModule = { PyModuleDef_HEAD_INIT, &quot;MyModule&quot;, &quot;My C++ Module.&quot;, -1, NULL, NULL, NULL, NULL, NULL }; PyMODINIT_FUNC PyInit_MyModule(void) { PyObject* m = PyModule_Create( &amp;MyModule ); if (m == NULL) return NULL; MyTypePtr = (PyTypeObject*)PyType_FromSpec( &amp;MyType_spec ); if ( MyTypePtr == NULL ) return NULL; if ( PyType_Ready( MyTypePtr ) &lt; 0 ) return NULL; PyDict_SetItemString( MyTypePtr-&gt;tp_dict, &quot;Normal&quot;, PyLong_FromLong( 0 ) ); PyDict_SetItemString( MyTypePtr-&gt;tp_dict, &quot;Custom&quot;, PyLong_FromLong( 15 ) ); Py_INCREF( MyTypePtr ); PyModule_AddObject( m, &quot;MyType&quot;, (PyObject *)MyTypePtr ); return m; } </code></pre> <p>With this definition of my module, I can write the following.</p> <pre><code>from MyModule import * print( MyType.Normal ) print( MyType.Custom ) </code></pre> <p>My problem is that with <code>Py_LIMITED_API</code>, I cannot use <code>tp_dict</code> to define the constant values <code>Normal</code> and <code>Custom</code>.</p> <p>I've tried to use 
<code>PyType_GetSlot</code>, but <code>Py_tp_dict</code> does not exist. (I guess I understand why: it's because it would allow modifications of existing types.)</p> <p>Now my question is: In the code above with specs and slots, how can I define the constants <code>Normal</code> and <code>Custom</code> of the type <code>MyType</code> without using <code>tp_dict</code>?</p>
<python><c++><c><python-c-api>
2023-11-06 12:49:17
1
37,637
Didier Trosset
77,431,210
7,929,036
Implementing IS-A cardinality in Flask SQLAlchemy and Marshamllow schemas
<p>Say I have a table called <code>User</code> which holds general information for all the users registered in the system:</p> <pre><code>class User(db.Model): __tablename__ = 'User' id = db.Column(db.Integer, primary_key=True) role = db.Column(db.String(20), nullable=False) email = db.Column(db.String(256), unique=True, nullable=False) password = db.Column(db.String(256), nullable=False) sign_up_date = db.Column(db.DateTime, nullable=False) archived = db.Column(db.Integer, nullable=False, default=0) </code></pre> <p>Then I have two sub-types of user, each containing all of the above columns, plus some type specific columns. For example:</p> <p><strong>Class Customer:</strong></p> <pre><code>class Customer(User): __tablename__ = &quot;Customer&quot; # USER is a CUSTOMER address = db.Column(db.String(256), nullable=False) name = db.Column(db.String(64), nullable=False) </code></pre> <p><strong>Class Driver:</strong></p> <pre><code>class Driver(User): __tablename__ = &quot;Driver&quot; # USER is a DRIVER first_name = db.Column(db.String(64), nullable=False) last_name = db.Column(db.String(64), nullable=False) profile_picture_path = db.Column(db.String(256), nullable=False) </code></pre> <p><strong>Class WarehouseManager:</strong></p> <pre><code>class WarehouseManager(User): __tablename__ = &quot;WarehouseManager&quot; # USER is a WAREHOUSE MANAGER first_name = db.Column(db.String(64), nullable=False) last_name = db.Column(db.String(64), nullable=False) profile_picture_path = db.Column(db.String(256), nullable=False) warehouse_name = db.Column(db.String(64), nullable=False) </code></pre> <p>and so on.</p> <p>The problem arises whenever I try to create this with <code>db.create_all()</code>, I get the following error:</p> <pre><code>sqlalchemy.exc.ArgumentError: Column 'first_name' on class WarehouseManager conflicts with existing column 'User.first_name'. If using Declarative, consider using the use_existing_column parameter of mapped_column() to resolve conflicts. 
</code></pre> <p>Also, I don't exactly have an idea of how I'd create schemas in Marshmallow for this.</p> <p>To be clear, I want the database table for Driver to contain the <code>user_id</code> and all the additional fields (first_name, last_name, profile_picture_path), but when I am interacting with the database via SQLAlchemy I want to be able to enter the <code>User</code> columns in the <code>Driver</code> table, i.e.:</p> <pre><code>driver = Driver(email=&quot;driver@gmail.com&quot;, first_name=&quot;Driver Name&quot;...) </code></pre> <p>instead of:</p> <pre><code>user = Users(email=&quot;driver@gmail.com&quot;...) driver = Driver(user_id = user.id, first_name=&quot;Driver Name&quot;...) </code></pre> <p>I see the latter as repetitive and bad practice in terms of using a quality ORM. A quality ORM should handle these common cases with ease. This is not the first time I have been disappointed by an ORM's limited capabilities, especially when handling temporary tables, materialized views, triggers and so on.</p> <p><strong>Edit:</strong> I have tried adding <code>use_existing_column</code> in the <code>db.Column</code> for <code>first_name</code> and <code>last_name</code>. I still got the same error.</p> <p><strong>Edit 2:</strong> <a href="https://t.ly/hgibN" rel="nofollow noreferrer">Here</a> is a sandbox which reproduces the problem.</p>
<python><sqlalchemy><flask-sqlalchemy>
2023-11-06 12:49:17
1
1,189
Dimitar
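For context, the usual cause of that conflict is that the subclasses declare a `__tablename__` but no primary key of their own, so declarative falls back to mapping them onto the parent's table, where same-named sibling columns (`first_name` on both `Driver` and `WarehouseManager`) collide. A minimal joined-table sketch in plain SQLAlchemy (names illustrative, Flask-SQLAlchemy wiring omitted):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'User'
    id = Column(Integer, primary_key=True)
    role = Column(String(20))
    email = Column(String(256), unique=True, nullable=False)
    # Discriminator wiring for polymorphic loading.
    __mapper_args__ = {'polymorphic_on': role, 'polymorphic_identity': 'user'}

class Driver(User):
    __tablename__ = 'Driver'
    # This FK primary key is what switches declarative from single-table to
    # joined-table inheritance; without it, subclass columns land on the
    # parent table and same-named sibling columns collide.
    id = Column(Integer, ForeignKey('User.id'), primary_key=True)
    first_name = Column(String(64))
    __mapper_args__ = {'polymorphic_identity': 'driver'}

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
with Session(engine) as session:
    # Parent columns go straight into the subclass constructor.
    session.add(Driver(email='driver@example.com', first_name='Driver Name'))
    session.commit()
    loaded = session.query(Driver).one()
    loaded_name = loaded.first_name
```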
77,431,083
511,436
How to add attributes to decorated functions when combining decorators?
<p>Two function decorators are defined. The target is to detect if 0, 1, or 2 decorators are applied to a function.</p> <p>Why does the below code return &quot;False&quot; for the 2nd decorator?</p> <pre><code>def decorator1(f): def wrapped(*args, **kwargs): f(*args, **kwargs) wrapped.dec1 = True return wrapped def decorator2(f): def wrapped(*args, **kwargs): f(*args, **kwargs) wrapped.dec2 = True return wrapped @decorator1 @decorator2 def myfunc(): print(f&quot;running myfunc&quot;) if __name__ == &quot;__main__&quot;: myfunc() print(f&quot;myfunc has decorator1: {getattr(myfunc, 'dec1', False)}&quot;) print(f&quot;myfunc has decorator2: {getattr(myfunc, 'dec2', False)}&quot;) </code></pre> <p>Result:</p> <pre class="lang-none prettyprint-override"><code>running myfunc myfunc has decorator1: True myfunc has decorator2: False </code></pre> <p>I am using Python 3.9.</p>
<python><python-3.x><python-decorators>
2023-11-06 12:26:12
2
1,844
Davy
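For reference, the inner decorator's marker lands on the inner wrapper, which the outer wrapper then hides. `functools.wraps` copies the wrapped function's `__dict__` outward, so markers set by inner decorators survive each wrapping layer; a sketch:

```python
import functools

def decorator1(f):
    @functools.wraps(f)  # copies f.__dict__ (and metadata) onto the wrapper
    def wrapped(*args, **kwargs):
        return f(*args, **kwargs)
    wrapped.dec1 = True
    return wrapped

def decorator2(f):
    @functools.wraps(f)
    def wrapped(*args, **kwargs):
        return f(*args, **kwargs)
    wrapped.dec2 = True
    return wrapped

@decorator1
@decorator2
def myfunc():
    return "running myfunc"

# decorator2 runs first and sets dec2 on its wrapper; decorator1's wraps()
# then copies that wrapper's __dict__ (including dec2) before setting dec1.
```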
77,431,020
3,322,273
Parse "multipart/form-data" payload while preserving order of parts in Python
<p>I am building an HTTP server that accepts multiple input images, while images can be input as image files or as buffers.</p> <p>For example, a request command may look as follows:</p> <pre><code>$ curl -F image_file=@/path/to/image1.jpg \ -F image_file=@/path/to/image2.png \ -F image_buffer=/path/to/image_buffer.bin -F image_buffer_shape=&quot;256,256,3&quot; \ -F image_file=@/path/to/image4.tif \ &lt;server URL&gt; </code></pre> <p>The above command will send an HTTP POST request of <code>multipart/form-data</code> type, which will contain the following 5 parts in this order: <code>image_file, image_file, image_buffer, image_buffer_shape, image_file</code> - which is the order in which they were sent by the client.</p> <p>The problem is that when using Python's built-in function <code>cgi.parse_multipart</code>, the result will be a dictionary of lists, which <strong>loses the order</strong> of inputs with different names. In my example, it will look as follows:</p> <pre><code>{'image_file' : [&lt;content of image1.jpg&gt;, &lt;content of image2.png&gt;, &lt;content of image4.tif&gt;], 'image_buffer' : [&lt;content of image_buffer.bin&gt;], 'image_buffer_shape' : ['256,256,3'] } </code></pre> <p>So in this example, there is no way to indicate that <code>image4.tif</code> was mentioned after <code>image_bufffer.bin</code>.</p> <p>What would be the most &quot;standard&quot; and elegant way to parse the multipart payload while having access to the original index of each input?</p> <h3>Edit</h3> <p>My question referred to the scenario of using Python's built-in <code>cgi</code> module to parse the headers and the multipart payload. 
However, the <code>cgi</code> module turned out to have been deprecated.</p> <p>For parsing the headers, there is no actual need to use the <code>cgi</code> module, since <code>self.headers</code> available in <code>BaseHTTPRequestHandler.do_POST</code> is of class <code>http.client.HTTPMessage</code>, which is a subclass of <code>email.message.Message</code>. This class provides standard methods to parse the headers (e.g., <code>self.headers.get_content_type()</code>, <code>self.headers.get_boundary()</code>, <code>self.headers.get(&quot;Content-Length&quot;)</code>).</p> <p>For parsing the multipart payload, users may refer to 3rd party libraries or parse the data manually. To parse manually, it is possible to split the payload by the boundary (need to add leading <code>--</code> string when splitting), removing first and last parts (before the first boundary and after the last boundary), and passing each part to <code>email.message_from_bytes</code>. The resulting object will allow you to read the part's header fields (e.g., <code>.get_param('name', header='content-disposition')</code>), and to get the payload using <code>.get_payload(decode = True)</code>.</p>
<python><post><multipart>
2023-11-06 12:15:27
1
12,360
SomethingSomething
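The manual approach described in the question's edit can be sketched as follows (boundary handling is simplified; a production parser must also handle boundaries quoted in the Content-Type header, nested multiparts, and boundary-like bytes inside part bodies):

```python
import email

def parse_multipart_ordered(body: bytes, boundary: str):
    """Parse a multipart payload into an ordered list of (name, bytes)."""
    delimiter = b'--' + boundary.encode()
    chunks = body.split(delimiter)[1:-1]  # drop preamble and closing '--'
    parts = []
    for raw in chunks:
        msg = email.message_from_bytes(raw.strip(b'\r\n'))
        name = msg.get_param('name', header='content-disposition')
        parts.append((name, msg.get_payload(decode=True)))
    return parts

# Tiny illustrative payload with a made-up boundary.
payload = (b'--XBOUND\r\n'
           b'Content-Disposition: form-data; name="image_file"\r\n\r\n'
           b'AAA\r\n'
           b'--XBOUND\r\n'
           b'Content-Disposition: form-data; name="image_buffer"\r\n\r\n'
           b'BBB\r\n'
           b'--XBOUND--\r\n')
```

Because the result is a list rather than a dict of lists, the original interleaving of `image_file` and `image_buffer` parts is preserved.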
77,430,937
12,242,085
How to divide a DataFrame into 2 separate datasets (70%/30%) based on unique combinations of values from 2 columns in Python Pandas?
<p>I have a DataFrame in Python Pandas like below:</p> <p><strong>Input data:</strong></p> <pre><code>df = pd.DataFrame({ 'id' : [999, 999, 999, 185, 185, 185, 44, 44, 44], 'target' : [1, 1, 1, 0, 0, 0, 1, 1, 1], 'event': ['2023-01-01', '2023-01-02', '2023-02-03', '2023-01-01', '2023-01-02', '2023-01-03', '2023-01-01', '2023-01-02', '2023-01-03'], 'survey': ['2023-02-02', '2023-02-02', '2023-02-02', '2023-03-10', '2023-03-10', '2023-03-10', '2023-04-22', '2023-04-22', '2023-04-22'], 'event1': [1, 6, 11, 16, np.nan, 22, 74, 109, 52], 'event2': [2, 7, np.nan, 17, 22, np.nan, np.nan, 10, 5], 'event3': [3, 8, 13, 18, 23, np.nan, 2, np.nan, 99], 'event4': [4, 9, np.nan, np.nan, np.nan, 11, 8, np.nan, np.nan], 'event5': [5, np.nan, 15, 20, 25, 1, 1, 3, np.nan] }) df = df.fillna(0) df </code></pre> <p><a href="https://i.sstatic.net/Twpkm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Twpkm.png" alt="enter image description here" /></a></p> <p><strong>Requirements:</strong></p> <p>My real dataset of course has many more rows, but I need to divide my dataset into 2 separate datasets (train and test) based on the following requirements:</p> <ol> <li>For the train dataset I need to take 70% of the unique combinations of the columns id, survey from my input dataset</li> <li>For the test dataset I need to take 30% of the unique combinations of the columns id, survey from my input dataset</li> <li>For each new dataset (train / test) I need to take all rows for each id, survey combination; be aware that each id, survey combination has the same number of rows in my input dataset <strong>(it cannot be the case that for some id, survey combination we take only 2 rows and for another we take 3; we always take all rows of a given id)</strong></li> </ol> <p><strong>Example of needed result (of course in real data the proportion should be 70% / 30% of unique id, survey combinations):</strong></p> <p><strong>train dataset:</strong></p> <p><a href="https://i.sstatic.net/HjI6o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HjI6o.png" alt="enter image description here" /></a></p> <p><strong>test dataset:</strong></p> <p><a href="https://i.sstatic.net/rBx2I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rBx2I.png" alt="enter image description here" /></a></p>
<python><pandas><dataframe>
2023-11-06 12:01:24
2
2,350
dingaro
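The requirements above can be sketched in two steps: sample the unique combinations first, then pull all of each sampled combination's rows (the toy frame below stands in for the question's data, with each `(id, survey)` combination owning a fixed block of rows):

```python
import pandas as pd

# Toy stand-in: four (id, survey) combinations, three rows each.
df = pd.DataFrame({
    'id':     [999] * 3 + [185] * 3 + [44] * 3 + [7] * 3,
    'survey': ['s1'] * 3 + ['s2'] * 3 + ['s3'] * 3 + ['s4'] * 3,
    'event1': range(12),
})

# 1. Sample 70% of the unique (id, survey) combinations ...
combos = df[['id', 'survey']].drop_duplicates()
train_combos = combos.sample(frac=0.7, random_state=42)
test_combos = combos.drop(train_combos.index)

# 2. ... then pull *all* rows belonging to each sampled combination,
# so a combination is never split across train and test.
train = df.merge(train_combos, on=['id', 'survey'])
test = df.merge(test_combos, on=['id', 'survey'])
```

`sklearn.model_selection.GroupShuffleSplit` offers the same guarantee if scikit-learn is available.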
77,430,840
22,466,650
How to identify duplicated values over multiple unsorted columns?
<p>Sorry for the title, but to explain my problem I made a small dataframe that looks like my real dataset:</p> <pre><code>df = pd.DataFrame({'user': ['u1', 'u2', 'u3', 'u4', 'u5'], 'date': ['1. August 2023', '5. January 2023', '2. May 2023', '13. July 2023', '1. January 2023'], 'task1': ['A', 'D', 'G', 'B', 'J'], 'task2': ['B', 'E', 'H', 'D', 'K'], 'task3': ['C', 'F', 'A', 'I', 'L']}) print(df) user date task1 task2 task3 0 u1 1. August 2023 A B C 1 u2 5. January 2023 D E F 2 u3 2. May 2023 G H A 3 u4 13. July 2023 B D I 4 u5 1. January 2023 J K L </code></pre> <p>I need to identify the users that got assigned tasks that already belong to others.</p> <p>I started making a first step with the code below, but it gives me an empty dataframe.</p> <pre><code>new_df = df.copy() new_df[&quot;date&quot;] = pd.to_datetime(new_df[&quot;date&quot;], format=&quot;%d. %B %Y&quot;) new_df = new_df.sort_values(&quot;date&quot;) new_df = new_df.loc[new_df[['task1', 'task2', 'task3']].duplicated()] </code></pre> <p>My expected output is this: <code>tasks_to_reassign = {'u4': ['task2'], 'u1': ['task1', 'task2']}</code>.</p> <p>Because the task2 (D) assigned to u4 on 13. July 2023 was already assigned to u2 on 5. January 2023. And for u1, task1 (A) and task2 (B) were assigned to him on 1. August 2023, while they were previously assigned to u3 on 2. May 2023 and to u4 on 13. July 2023.</p> <p>Do you know how to solve my problem?</p>
<python><pandas>
2023-11-06 11:44:03
1
1,085
VERBOSE
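The posted attempt compares whole rows of `['task1', 'task2', 'task3']`, which only matches when all three tasks repeat together. One sketch that checks individual task values is to melt to long format first, then mark every task value that already appeared at an earlier date:

```python
import pandas as pd

df = pd.DataFrame({'user': ['u1', 'u2', 'u3', 'u4', 'u5'],
                   'date': ['1. August 2023', '5. January 2023', '2. May 2023',
                            '13. July 2023', '1. January 2023'],
                   'task1': ['A', 'D', 'G', 'B', 'J'],
                   'task2': ['B', 'E', 'H', 'D', 'K'],
                   'task3': ['C', 'F', 'A', 'I', 'L']})

# Long format: one row per (user, slot, task).
long = df.melt(id_vars=['user', 'date'], var_name='slot', value_name='task')
long['date'] = pd.to_datetime(long['date'], format='%d. %B %Y')
long = long.sort_values('date', kind='mergesort')  # stable sort keeps slot order

# duplicated() marks each task value that already occurred at an earlier date.
dups = long[long.duplicated('task')]
tasks_to_reassign = dups.groupby('user')['slot'].apply(list).to_dict()
```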
77,430,797
7,648,650
Permission denied error when pasting multiple images to pptx in a loop
<p>I am pasting a bunch of images (50+) into an existing PowerPoint file, slide by slide. After a quite random number of slides (usually between 10 and 20) I get a permission error:</p> <blockquote> <p>PermissionError: [Errno 13] Permission denied: 'C:/Users/PowerPointFileName.pptx'</p> </blockquote> <p>My code looks something like this:</p> <pre><code>from pptx import Presentation import os import time def insert_into_ppt(FileName, slide_nr, img): prs = Presentation(FileName) slide = prs.slides[slide_nr] slide.shapes.add_picture(img, 2, 5) prs.save(FileName) time.sleep(1) images = os.listdir('picture folder') for i, img in enumerate(images): insert_into_ppt('PowerPointFileName.pptx', i, img) </code></pre> <p>At first I thought the repeated saving and reopening might be too fast, but even adding <code>time.sleep(5)</code> in there won't fix the issue. Do you have any idea why this keeps happening?</p>
<python><image><powerpoint><python-pptx>
2023-11-06 11:37:38
0
1,248
Quastiat
77,430,781
5,561,472
How can I enable retries for google cloud function?
<p>I created a google cloud function:</p> <pre><code>@scheduler_fn.on_schedule( schedule=&quot;00 06 1 * *&quot;, ) def getRates(event: scheduler_fn.ScheduledEvent) -&gt; None: asyncio.run(getRatesAsync()) </code></pre> <p>How can I enable retry for this function?</p> <p>According to <a href="https://firebase.google.com/docs/functions/retries" rel="nofollow noreferrer">https://firebase.google.com/docs/functions/retries</a> - I tried to configure it from the GCP Console. This document says to press the 'Edit' icon in the Eventarc trigger pane and edit your trigger's settings.</p> <p>I actually see this:</p> <p><a href="https://i.sstatic.net/xLi2K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xLi2K.png" alt="enter image description here" /></a></p> <p>I tried to press the Edit icon and see this:</p> <p><a href="https://i.sstatic.net/BDqaE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BDqaE.png" alt="enter image description here" /></a></p> <p>I don't understand - how can I edit trigger settings here?</p> <p>Then I tried to configure retries from function code. According to the guide - I need to call <code>functions.runWith({failurePolicy: true}).foo.onBar(myHandler);</code> for this.</p> <p>Now I just run <code>firebase deploy --only functions</code> from the firebase CLI and the function works.</p> <p>I don't understand - where should I add <code>runWith</code> here.</p> <p>I work with Flutter from VSCode.</p>
<python><google-cloud-functions>
2023-11-06 11:35:07
1
6,639
Andrey
77,430,743
22,538,132
Extract multiple ROIs from np array using index ranges without loops
<p>I have a numpy array of shape 640 x 480: <code>np.ones((640, 480))</code>, and I have min/max ranges for the rows and columns:</p> <pre><code>u_min=[497, 157, 493, 137, 567] v_min=[ 36, 46, 208, 412, 418] u_max=[502, 162, 498, 142, 572] v_max=[41, 51, 213, 417, 423] </code></pre> <p>I have tried:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np arr = np.ones((640, 480)) u_min= np.array([497, 157, 493, 137, 567], dtype=np.int32) u_max= np.array([502, 162, 498, 142, 572], dtype=np.int32) v_min= np.array([36, 46, 208, 412, 418], dtype=np.int32) v_max= np.array([41, 51, 213, 417, 423], dtype=np.int32) rois = arr[u_min:u_max, v_min:v_max] </code></pre> <p>but I'm getting an error:</p> <blockquote> <p><code>TypeError: only integer scalar arrays can be converted to a scalar index</code></p> </blockquote> <p>I want to slice the numpy array in a vectorized manner, without looping. Can you please tell me how I can do that? Thanks</p>
<python><arrays><numpy><slice>
2023-11-06 11:29:01
2
304
bhomaidan90
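A sketch of a vectorized extraction, assuming (as in the posted ranges) that every window has the same extent (`u_max - u_min == v_max - v_min == 5` for each ROI); ragged windows cannot be stacked into one array without padding or a loop. Broadcasting the window offsets against the start positions builds `(n_rois, size, size)` index grids that fancy-index the array in one shot:

```python
import numpy as np

arr = np.arange(640 * 480, dtype=np.float64).reshape(640, 480)
u_min = np.array([497, 157, 493, 137, 567])
v_min = np.array([36, 46, 208, 412, 418])
size = 5  # every window in the question spans exactly 5 rows / columns

# rows: (5, 5, 1), cols: (5, 1, 5); fancy indexing broadcasts them
# together to give one (n_rois, size, size) stack of windows.
rows = u_min[:, None, None] + np.arange(size)[None, :, None]
cols = v_min[:, None, None] + np.arange(size)[None, None, :]
rois = arr[rows, cols]
```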
77,430,493
1,447,207
Why does numpy transpose the result in some cases when indexing with a boolean mask
<p>I have some trouble understanding the behaviour of numpy when indexing a multi-dimensional array (d &gt;= 3) with a boolean mask along one dimension.</p> <p>The following example makes sense to me:</p> <pre><code>&gt; x = np.zeros((2, 2, 3)) &gt; mask = np.ones(3, dtype=bool) &gt; print(x[:,:,:].shape) (2, 2, 3) &gt; print(x[:,:,mask].shape) (2, 2, 3) </code></pre> <p>But this examples does not make sense to me:</p> <pre><code>&gt; x = np.zeros((2, 2, 3)) &gt; mask = np.ones(3, dtype=bool) &gt; print(x[0,:,:].shape) (2, 3) &gt; print(x[0,:,mask].shape) (3, 2) </code></pre> <p>It appears that the result is transposed in the second case of the last example. Why does this happen?</p>
<python><numpy>
2023-11-06 10:48:45
0
803
Tor
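This is NumPy's documented combined-indexing rule: the integer `0` and the boolean `mask` are both advanced indices, and when advanced indices are separated by a slice, the dimensions they produce are moved to the front of the result. A small demonstration:

```python
import numpy as np

x = np.zeros((2, 2, 3))
mask = np.ones(3, dtype=bool)

# `0` and `mask` are advanced indices separated by the slice `:`, so
# NumPy puts the broadcast advanced-index shape (3,) first, then the
# slice's (2,) -- hence the apparent transpose.
shape_combined = x[0, :, mask].shape   # (3, 2)

# Indexing in two steps keeps each operation unambiguous; no reorder.
shape_split = x[0][:, mask].shape      # (2, 3)
```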
77,430,283
2,146,803
How to make annotations static without removing them using PyPDF2
<p>I have many annotations in my PDF, all of which are editable, and I would like to make them non-editable. I couldn't figure out how to make them static; I used the code below, but it completely removed the annotations.</p> <pre><code>import PyPDF2 with open('drawing.pdf', 'rb') as pdf_obj: pdf = PyPDF2.PdfReader(pdf_obj) out = PyPDF2.PdfWriter() for page in pdf.pages: if page.annotations: page.annotations.clear() # This line is removing the annotations, here I would like to make it static out.add_page(page) with open('output.pdf', 'wb') as f: out.write(f) </code></pre> <p>I do not want to use pypdftk since it is OS dependent and <a href="https://stackoverflow.com/questions/27023043/generate-flattened-pdf-with-python">this</a> is all about form fields</p> <p><strong>UPDATE 1</strong></p> <p>To be more precise I am trying to flatten <code>indirect objects</code></p>
<python><pypdf>
2023-11-06 10:13:21
0
4,029
Prabhakaran
77,430,110
19,500,571
Rounding float to a base gives too many decimals
<p>I want to round a number to a given base. Take the following example, where I want to round to the nearest 0.01:</p> <pre><code>number = 12.123123123 base = 0.01 base * round(number/base) </code></pre> <p>I expect the result <code>12.12</code>, but I get <code>12.120000000000001</code>. Why is that? Do I need to process the output somehow?</p>
<python>
2023-11-06 09:46:16
2
469
TylerD
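For reference: `0.01` has no exact binary floating-point representation, so `base * round(number / base)` lands on the nearest representable double rather than exactly `12.12`. Rounding the final result once more, or doing the arithmetic in `decimal`, gives the expected value:

```python
from decimal import ROUND_HALF_UP, Decimal

number = 12.123123123
base = 0.01

# The scaled product picks up representation error: 12.120000000000001.
naive = base * round(number / base)

# Option 1: round the final float to the base's decimal places.
fixed = round(base * round(number / base), 2)

# Option 2: quantize in Decimal, which represents 0.01 exactly.
quantized = Decimal(str(number)).quantize(Decimal('0.01'),
                                          rounding=ROUND_HALF_UP)
```

Option 1 still yields a binary float (merely the one closest to 12.12); only `Decimal` carries an exact decimal value through further arithmetic.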
77,429,562
1,000,378
Leetcode answer print and answer not matching in Python
<p>I'm trying out leetcode for the first time <a href="https://leetcode.com/explore/featured/card/top-interview-questions-easy/92/array/727/" rel="nofollow noreferrer">here</a></p> <p>I don't understand what I'm supposed to do. If I run the code locally (sanity check really), everything works as expected. If I <code>print</code> results via leetcode, I get the expected result. However, <code>your answer</code> does not match and I have no idea why.</p> <p><a href="https://i.sstatic.net/deZpU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/deZpU.png" alt="enter image description here" /></a></p> <hr /> <p>I'm not asking about the solution. I know I don't need <code>k</code> for example. Returning <code>len(results)</code> would do.</p> <hr /> <p>What I don't understand is how leetcode works and what's expected in the return...</p> <hr /> <p>Notes:</p> <ul> <li>returning the results themselves fail. In that case, I got a compiler error.</li> <li>returning k, results failed too.</li> <li>the code in the screenshot is the closest I got to pass...</li> </ul>
<python>
2023-11-06 07:56:01
1
5,516
François Constant
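For context, the linked card exercise appears to be "Remove Duplicates from Sorted Array". LeetCode's judge ignores anything you print: it calls your method, reads the returned `k`, and then inspects the first `k` slots of the `nums` array you mutated in place, so building a separate `results` list never reaches the judge. A standard two-pointer sketch:

```python
def removeDuplicates(nums):
    # Write each not-yet-seen value into the next front slot; the judge
    # checks nums[:k] in place plus the returned k.
    k = 0
    for x in nums:
        if k == 0 or nums[k - 1] != x:
            nums[k] = x
            k += 1
    return k

nums = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4]
k = removeDuplicates(nums)
```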
77,429,272
5,132,860
Setting File Size Limits with Signed URLs in Google Cloud Storage v4
<p>I'm trying to upload a file to Google Cloud Storage using a signed URL and I want to restrict the file size during upload. Previously, with signed URL version 2, I could set the <code>conditions</code> parameter to enforce size restrictions, but when attempting to do the same using version 4, I receive an error stating that <code>conditions</code> is no longer a valid argument.</p> <p>Here's the error message I encountered:</p> <pre><code>TypeError: Blob.generate_signed_url() got an unexpected keyword argument 'conditions' </code></pre> <p>Below is a sample of my code that results in this error:</p> <pre class="lang-py prettyprint-override"><code># Sample code for generating a signed URL with size limitation from google.cloud import storage from django.conf import settings from datetime import timedelta class GcsHandler: def __init__(self): self.bucket_name = settings.GS_BUCKET_NAME self.storage_client = storage.Client() self.bucket = self.storage_client.bucket(self.bucket_name) def generate_signed_url(self, blob_name: str, content_type: str, file_size: int) -&gt; str: &quot;&quot;&quot; Generate a signed URL for a blob to upload a file with a specific size limit. :param blob_name: The path to the GCS file. :param content_type: The content type of the file. :param file_size: The exact file size limit for the upload. :return: The signed URL. 
&quot;&quot;&quot; blob = self.bucket.blob(blob_name) url = blob.generate_signed_url( version=&quot;v4&quot;, expiration=timedelta(minutes=60), method=&quot;PUT&quot;, content_type=content_type, conditions=[ [&quot;content-length-range&quot;, file_size, file_size] ] ) return url </code></pre> <p>After reviewing the implementation of <code>generate_signed_url</code> in <a href="https://github.com/googleapis/python-storage/blob/main/google/cloud/storage/bucket.py" rel="nofollow noreferrer"><code>google/cloud/storage/bucket.py</code></a>, I confirmed that the <code>conditions</code> parameter is indeed missing:</p> <pre class="lang-py prettyprint-override"><code># google/cloud/storage/bucket.py function definition for reference def generate_signed_url( self, expiration=None, api_access_endpoint=_API_ACCESS_ENDPOINT, method=&quot;GET&quot;, headers=None, query_parameters=None, client=None, credentials=None, version=None, virtual_hosted_style=False, bucket_bound_hostname=None, scheme=&quot;http&quot;, ): </code></pre> <p>I am looking for a way to limit file size using <code>generate_signed_url</code> version 4. If anyone knows a workaround or the correct method to impose such a limit, please advise.</p>
<python><google-cloud-storage>
2023-11-06 07:03:24
1
3,104
Nori
77,429,177
480,118
pandas: group or pivot table by a column horizontally rather than vertically
<p>I have got data that looks like this</p> <pre><code>data = [['01/01/2000', 'aaa', 101, 102], ['01/02/2000', 'aaa', 201, 202], ['01/01/2000', 'bbb', 301, 302], ['01/02/2000', 'bbb', 401, 402],] df = pd.DataFrame(data, columns=['date', 'id', 'val1', 'val2']) df date id val1 val2 01/01/2000 aaa 101 102 01/02/2000 aaa 201 202 01/01/2000 bbb 301 302 01/02/2000 bbb 401 402 </code></pre> <p>I would like this data to be transformed to look like this - where it's grouped horizontally by the id column</p> <pre><code> aaa bbb date val1 val2 val1 val2 01/01/2000 101 102 301 302 01/02/2000 201 202 401 402 </code></pre> <p>The closest I have gotten so far is: <code>df.set_index(['date', 'id']).unstack(level=1)</code>, but this does not quite do it:</p> <pre><code>val1 val2 id aaa bbb aaa bbb date 01/01/2000 101 301 102 302 01/02/2000 201 401 202 402 </code></pre>
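A minimal sketch of one way to get the asked-for layout (column names taken from the question): `unstack` as already tried, then swap the two column levels and sort so the `id` level comes first:

```python
import pandas as pd

data = [['01/01/2000', 'aaa', 101, 102],
        ['01/02/2000', 'aaa', 201, 202],
        ['01/01/2000', 'bbb', 301, 302],
        ['01/02/2000', 'bbb', 401, 402]]
df = pd.DataFrame(data, columns=['date', 'id', 'val1', 'val2'])

# unstack as before, then swap the (value, id) column levels to (id, value)
# and sort so the val1/val2 pair for each id sits together
out = df.set_index(['date', 'id']).unstack(level=1)
out = out.swaplevel(axis=1).sort_index(axis=1)
```

`sort_index(axis=1)` is what groups the `val1`/`val2` pairs under each id; without it the swapped levels keep the old interleaved column order.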
<python><pandas><numpy>
2023-11-06 06:42:31
3
6,184
mike01010
77,428,855
3,685,918
How do I customize xtick to include full date for first tick in matplotlib
<p>I am using matplotlib to draw a plot chart as shown below. The x-axis is dates. I would like to display daily data in yyyy-mm-dd format as m in monthly units. And I want the first date of each year to be displayed as yy.m, not m. The first date of the year may not be January. Depending on the data, it could be February or October.</p> <p>The example below is 2022-11-1. In this case, I would like to display it as 22.1.</p> <p>I tried my best to write the code below, but it's still not enough.</p> <p>I've seen similar questions and answers before, but I couldn't find an answer about labeling yy.m for the first date by year among randomly set dates. ​</p> <pre><code> import pandas as pd import io import matplotlib.pyplot as plt import matplotlib.dates as mdates import matplotlib.pyplot as plt import matplotlib.dates as mdates from matplotlib.ticker import FuncFormatter temp = u&quot;&quot;&quot; date,yield 2022-11-01,4.0419 \n 2022-11-02,4.1005 \n 2022-11-03,4.1469\n 2022-11-04,4.1584\n 2022-11-05,4.1584\n 2022-11-06,4.1584\n 2022-11-07,4.2135\n 2022-11-08,4.1234\n 2022-11-09,4.0923\n 2022-11-10,3.8125\n 2022-11-11,3.8125\n 2022-11-12,3.8125\n 2022-11-13,3.8125\n 2022-11-14,3.8536 \n 2022-11-15,3.7696\n 2022-11-16,3.6899\n 2022-11-17,3.7657\n 2022-11-18,3.8288\n 2022-11-19,3.8288\n 2022-11-20,3.8288\n 2022-11-21,3.8269\n 2022-11-22,3.7559\n 2022-11-23,3.6927\n 2022-11-24,3.6927\n 2022-11-25,3.6776\n 2022-11-26,3.6776 \n 2022-11-27,3.6776\n 2022-11-28,3.6812\n 2022-11-29,3.7441\n 2022-11-30,3.6054\n 2022-12-01,3.5048\n 2022-12-02,3.4862\n 2022-12-03,3.4862\n 2022-12-04,3.4862\n 2022-12-05,3.5736\n 2022-12-06,3.5314\n 2022-12-07,3.4169\n 2022-12-08,3.4819 \n 2022-12-09,3.5783\n 2022-12-10,3.5783\n 2022-12-11,3.5783\n 2022-12-12,3.6113\n 2022-12-13,3.5012\n 2022-12-14,3.4774\n 2022-12-15,3.4463\n 2022-12-16,3.4822\n 2022-12-17,3.4822\n 2022-12-18,3.4822\n 2022-12-19,3.5846\n 2022-12-20,3.6825 \n 2022-12-21,3.662 \n 2022-12-22,3.6786\n 2022-12-23,3.7472\n 
2022-12-24,3.7472\n 2022-12-25,3.7472\n 2022-12-26,3.7472\n 2022-12-27,3.8411\n 2022-12-28,3.8827\n 2022-12-29,3.8145\n 2022-12-30,3.8748\n 2022-12-31,3.8748\n 2023-01-01,3.8748 \n 2023-01-02,3.8748\n 2023-01-03,3.7389\n 2023-01-04,3.6827\n 2023-01-05,3.7181\n 2023-01-06,3.558\n 2023-01-07,3.558\n 2023-01-08,3.558\n 2023-01-09,3.5321\n 2023-01-10,3.6188\n 2023-01-11,3.5392\n 2023-01-12,3.44\n 2023-01-13,3.5035\n 2023-01-14,3.5035\n 2023-01-15,3.5035 \n 2023-01-16,3.5035\n 2023-01-17,3.5476\n 2023-01-18,3.3698\n 2023-01-19,3.3915\n 2023-01-20,3.4787\n 2023-01-21,3.4787\n 2023-01-22,3.4787\n 2023-01-23,3.5098\n 2023-01-24,3.4527\n 2023-01-25,3.4416\n 2023-01-26,3.4947\n 2023-01-27,3.5035\n 2023-01-28,3.5035\n 2023-01-29,3.5035 \n 2023-01-30,3.5366\n 2023-01-31,3.5069\n 2023-02-01,3.4166\n 2023-02-02,3.3927\n 2023-02-03,3.5246\n 2023-02-04,3.5246\n 2023-02-05,3.5246\n 2023-02-06,3.6399\n 2023-02-07,3.6735\n 2023-02-08,3.6098\n 2023-02-09,3.6579\n 2023-02-10,3.732 \n 2023-02-11,3.732\n 2023-02-12,3.732\n 2023-02-13,3.7016\n 2023-02-14,3.7435\n 2023-02-15,3.8049\n 2023-02-16,3.8608\n 2023-02-17,3.8148\n 2023-02-18,3.8148\n 2023-02-19,3.8148\n 2023-02-20,3.8148\n 2023-02-21,3.9525\n 2023-02-22,3.9156\n 2023-02-23,3.8768\n 2023-02-24,3.9432\n 2023-02-25,3.9432 \n 2023-02-26,3.9432\n 2023-02-27,3.9141\n 2023-02-28,3.92\n 2023-03-01,3.9925\n 2023-03-02,4.0556\n 2023-03-03,3.9517\n 2023-03-04,3.9517\n 2023-03-05,3.9517\n 2023-03-06,3.9577\n 2023-03-07,3.9637\n 2023-03-08,3.9913\n 2023-03-09,3.9032\n 2023-03-10,3.6987\n 2023-03-11,3.6987\n 2023-03-12,3.6987\n 2023-03-13,3.5732\n 2023-03-14,3.6892\n 2023-03-15,3.4548\n 2023-03-16,3.577 \n 2023-03-17,3.4286\n 2023-03-18,3.4286\n 2023-03-19,3.4286\n 2023-03-20,3.4847\n 2023-03-21,3.6094\n 2023-03-22,3.4341\n 2023-03-23,3.4266\n 2023-03-24,3.3762\n 2023-03-25,3.3762\n 2023-03-26,3.3762\n 2023-03-27,3.5299\n 2023-03-28,3.5696\n 2023-03-29,3.5639\n 2023-03-30,3.5488 \n 2023-03-31,3.4676\n 2023-04-01,3.4676\n 2023-04-02,3.4676\n 
2023-04-03,3.4114\n 2023-04-04,3.3387\n 2023-04-05,3.3108\n 2023-04-06,3.305\n 2023-04-07,3.3906\n 2023-04-08,3.3906\n 2023-04-09,3.3906\n 2023-04-10,3.4168\n 2023-04-11,3.4262\n 2023-04-12,3.3906\n 2023-04-13,3.4449 \n 2023-04-14,3.5128\n 2023-04-15,3.5128\n 2023-04-16,3.5128\n 2023-04-17,3.6004\n 2023-04-18,3.5756\n 2023-04-19,3.5908\n 2023-04-20,3.5318\n 2023-04-21,3.5718\n 2023-04-22,3.5718\n 2023-04-23,3.5718\n 2023-04-24,3.4901\n 2023-04-25,3.3996\n 2023-04-26,3.4485\n 2023-04-27,3.5204\n 2023-04-28,3.422 \n 2023-04-29,3.422\n 2023-04-30,3.422\n 2023-05-01,3.5681\n 2023-05-02,3.4239\n 2023-05-03,3.3356\n 2023-05-04,3.3787\n 2023-05-05,3.437\n 2023-05-06,3.437\n 2023-05-07,3.437\n 2023-05-08,3.5072\n 2023-05-09,3.5186\n 2023-05-10,3.4426\n 2023-05-11,3.3843\n 2023-05-12,3.4625\n 2023-05-13,3.4625 \n 2023-05-14,3.4625\n 2023-05-15,3.5019\n 2023-05-16,3.5339\n 2023-05-17,3.5641\n 2023-05-18,3.6457\n 2023-05-19,3.6726\n 2023-05-20,3.6726\n 2023-05-21,3.6726\n 2023-05-22,3.7148\n 2023-05-23,3.6919 \n 2023-05-24,3.7419\n 2023-05-25,3.8174\n 2023-05-26,3.7983\n 2023-05-27,3.7983\n 2023-05-28,3.7983\n 2023-05-29,3.7983\n 2023-05-30,3.6866\n 2023-05-31,3.6426\n 2023-06-01,3.595\n 2023-06-02,3.6907\n 2023-06-03,3.6907\n 2023-06-04,3.6907\n 2023-06-05,3.6831 &quot;&quot;&quot; temp = pd.read_csv(io.StringIO(temp), sep=&quot;,&quot;, parse_dates=False) temp.date = pd.to_datetime(temp.date) plt.plot(temp['date'], temp['yield']) def custom_date_format(x, pos): date = mdates.num2date(x) if date.month == 1 and date.day == 1: return date.strftime('%y.%m') else: return date.strftime('%m') ax = plt.gca() ax.xaxis.set_major_locator(mdates.MonthLocator()) date_form = FuncFormatter(custom_date_format) ax.xaxis.set_major_formatter(date_form) plt.grid(True) plt.xticks(rotation = 45) </code></pre> <p><a href="https://i.sstatic.net/mL47w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mL47w.png" alt="enter image description here" /></a></p>
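One workaround is to stop fighting the formatter and precompute the labels: fix the tick positions (month starts), then label each tick with its month number, switching to yy.m the first time each year appears among the ticks — which handles a series whose first month of a year is November or February. A sketch (assuming the first visible month of 2022-11 should read 22.11, i.e. the real month number):

```python
import matplotlib
matplotlib.use("Agg")           # headless backend; drop this line when showing plots
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import pandas as pd

def year_aware_labels(dates):
    """'m' for every month tick, but 'yy.m' the first time a year appears."""
    labels, seen = [], set()
    for d in dates:
        if d.year not in seen:
            seen.add(d.year)
            labels.append(f"{d.year % 100}.{d.month}")
        else:
            labels.append(str(d.month))
    return labels

dates = pd.date_range("2022-11-01", "2023-06-05", freq="D")
fig, ax = plt.subplots()
ax.plot(dates, range(len(dates)))

ticks = [d for d in dates if d.day == 1]    # month starts, like MonthLocator
ax.set_xticks(mdates.date2num(ticks))
ax.set_xticklabels(year_aware_labels(ticks), rotation=45)
```

Because the labels are computed once from the actual tick dates, no state needs to be smuggled into a `FuncFormatter` callback.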
<python><matplotlib>
2023-11-06 05:07:57
1
427
user3685918
77,428,847
4,898,202
Is there a way to use a conditional kernel in openCV that only changes pixels on an image if the condition is true?
<p>I want to use a kernel that performs a pixel operation based on a <em>conditional expression</em>.</p> <p>Let's say I have this grayscale image (6x6 resolution):</p> <p><a href="https://i.sstatic.net/lpbS7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lpbS7.png" alt="enter image description here" /></a></p> <p>and I use a 3x3 pixel kernel, how would I change the value of the centre kernel pixel (centre) <em><strong>IF AND ONLY IF</strong></em> the centre pixel is the local minimum or maximum within the 3x3 kernel?</p> <p>For example, say I wanted to set the centre kernel pixel to the <em><strong>average</strong></em> value of the surrounding 8 pixels, like this:</p> <p><a href="https://i.sstatic.net/uKO8t.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uKO8t.gif" alt=", then set it to the average value of all surrounding 8 pixels." /></a></p> <p>Is there a way to do this with <code>OpenCV</code>?</p> <p><strong>EDIT: another more detailed example GIF - 9 passes implementing my example:</strong></p> <p><a href="https://i.sstatic.net/HdBby.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HdBby.gif" alt="enter image description here" /></a></p> <p>This was produced in Excel using the following formula (note the relative cell references - they show the kernel shape of 3x3 around the focus 'picell'):</p> <pre><code>=IF(OR(C55=MIN(B54:D56),C55=MAX(B54:D56)),(SUM(B54:D56)-C55)/8,C55) </code></pre> <p>Here is the top left corner of the table with the source values for the first pass (these values control the cell colour):</p> <p><a href="https://i.sstatic.net/56oLz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/56oLz.png" alt="enter image description here" /></a></p> <p>This table refers to another source table. Each frame in the GIF is the next calculated colour table. There are 3 tables of formulae in between each image frame. 
Here is more context:</p> <p><a href="https://i.sstatic.net/LbpNF.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LbpNF.jpg" alt="enter image description here" /></a></p>
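OpenCV itself has no conditional-filter primitive, but since cv2 images are NumPy arrays, the Excel formula above can be expressed with array operations: build the 3x3 neighbourhood statistics, then select per pixel with `np.where`. A sketch (border handling by edge replication is an assumption; the spreadsheet only evaluates full windows):

```python
import numpy as np

def conditional_smooth(img):
    """Replace each pixel that is the min or max of its 3x3 window
    with the mean of its 8 neighbours; leave all other pixels alone."""
    p = np.pad(img.astype(float), 1, mode='edge')
    h, w = img.shape
    # the nine shifted views covering every 3x3 window
    windows = np.stack([p[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    nb_min = windows.min(axis=0)
    nb_max = windows.max(axis=0)
    nb_avg = (windows.sum(axis=0) - img) / 8.0   # mean of the 8 neighbours
    is_extremum = (img == nb_min) | (img == nb_max)
    return np.where(is_extremum, nb_avg, img)

img = np.array([[1, 1, 1],
                [1, 9, 1],
                [1, 1, 1]], dtype=float)
out = conditional_smooth(img)   # the central 9 is a local max, so it is averaged
```

With OpenCV available, the same per-pixel statistics come from `cv2.erode` (local min), `cv2.dilate` (local max) and `cv2.blur` (local mean) with a 3x3 kernel, which is faster on large images; the final `np.where` selection stays the same.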
<python><opencv><computer-vision><blur><gaussianblur>
2023-11-06 05:05:36
2
1,784
skeetastax
77,428,810
12,242,085
How to convert values like Decimal('0.303440') in list to float in Python Pandas?
<p>I have a list in Python like below:</p> <pre><code> my_list = [[Decimal('0.303440'), 0.0, 2.0, 8.0, 1.0], [109.0, 10.0, 0.0, 0.0, 3.0], [52.0, 5.0, 99.0, 0.0, Decimal('0.445378')]] </code></pre> <ul> <li><p>As you can see, most of the values are floats, but some values are like: <code>Decimal('0.303440')</code></p> </li> <li><p>How can I transform all values in my list like: Decimal('0.303440') to floats like: 0.303440?</p> </li> <li><p>Of course in my real list I have many more values like: <code>Decimal('0.303440')</code> and we have to use full automation because I can't write them out manually.</p> </li> </ul> <p>So, as a result I need something like below:</p> <pre><code> my_list = [[0.303440, 0.0, 2.0, 8.0, 1.0], [109.0, 10.0, 0.0, 0.0, 3.0], [52.0, 5.0, 99.0, 0.0, 0.445378]] </code></pre> <p>How can I do that in Python Pandas?</p>
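`float()` accepts a `Decimal` directly, so a nested comprehension is enough; pandas is only needed if the data is headed into a DataFrame anyway. A sketch:

```python
from decimal import Decimal
import pandas as pd

my_list = [[Decimal('0.303440'), 0.0, 2.0, 8.0, 1.0],
           [109.0, 10.0, 0.0, 0.0, 3.0],
           [52.0, 5.0, 99.0, 0.0, Decimal('0.445378')]]

# plain Python: convert every element, Decimal or not, to float
my_list = [[float(v) for v in row] for row in my_list]

# or via pandas, if the data ends up in a DataFrame anyway
df = pd.DataFrame(my_list).astype(float)
```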
<python><pandas><list><decimal>
2023-11-06 04:54:09
1
2,350
dingaro
77,428,642
12,242,085
How to divide a Data Frame into 2 separate datasets (70%/30% of unique ids), taking all rows for each id, in Python Pandas?
<p>I have a Data Frame in Python Pandas like below:</p> <p><strong>Input data:</strong></p> <pre><code>df = pd.DataFrame({ 'id' : [999, 999, 999, 185, 185, 185, 44, 44, 44], 'target' : [1, 1, 1, 0, 0, 0, 1, 1, 1], 'event_date': ['2023-01-01', '2023-01-02', '2023-01-03', '2023-01-01', '2023-01-02', '2023-01-03', '2023-01-01', '2023-01-02', '2023-01-03'], 'event1': [1, 6, 11, 16, np.nan, 22, 74, 109, 52], 'event2': [2, 7, np.nan, 17, 22, np.nan, np.nan, 10, 5], 'event3': [3, 8, 13, 18, 23, np.nan, 2, np.nan, 99], 'event4': [4, 9, np.nan, np.nan, np.nan, 11, 8, np.nan, np.nan], 'event5': [5, np.nan, 15, 20, 25, 1, 1, 3, np.nan] }) # Fill missing values with zeros df = df.fillna(0) df </code></pre> <p><a href="https://i.sstatic.net/tRA9x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tRA9x.png" alt="enter image description here" /></a></p> <p><strong>Requirements:</strong></p> <p>My real dataset of course has much more data, but I need to divide it into 2 separate datasets (train and test) based on the following requirements:</p> <ol> <li>For the train dataset I need to take 70% of the unique ids from my input dataset</li> <li>For the test dataset I need to take 30% of the unique ids from my input dataset</li> <li>For each new dataset I need to take all rows for each id; be aware that each id has the same number of rows in my input dataset <strong>(it cannot be the case that for some id we take only 2 rows and for another id we take 3; always take all rows of a given id)</strong></li> </ol> <p><strong>Example of needed result (of course in the real data the proportion should be 70%/30% of unique ids):</strong></p> <p><em><strong>train dataset:</strong></em></p> <pre><code>df = pd.DataFrame({ 'id' : [999, 999, 999, 185, 185, 185], 'target' : [1, 1, 1, 0, 0, 0], 'event_date': ['2023-01-01', '2023-01-02', '2023-01-03', '2023-01-01', '2023-01-02', '2023-01-03'], 'event1': [1, 6, 11, 16, np.nan, 22], 'event2': [2, 7, np.nan, 17, 22, np.nan], 'event3': [3, 8, 13, 18, 23, 
np.nan], 'event4': [4, 9, np.nan, np.nan, np.nan, 11], 'event5': [5, np.nan, 15, 20, 25, 1] }) df = df.fillna(0) df </code></pre> <p><em><strong>test dataset:</strong></em></p> <pre><code>df = pd.DataFrame({ 'id' : [44, 44, 44], 'target' : [1, 1, 1], 'event_date': ['2023-01-01', '2023-01-02', '2023-01-03'], 'event1': [74, 109, 52], 'event2': [ np.nan, 10, 5], 'event3': [2, np.nan, 99], 'event4': [8, np.nan, np.nan], 'event5': [1, 3, np.nan] }) # Fill missing values with zeros df = df.fillna(0) df </code></pre>
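A sketch of an id-level split: sample 70% of the unique ids, then select whole groups with `isin`, so every row of an id lands in exactly one part (illustrated on a small synthetic frame with 10 ids of 3 rows each):

```python
import numpy as np
import pandas as pd

def split_by_id(df, train_frac=0.7, seed=42):
    """Train/test split on unique ids; each id's rows stay together."""
    rng = np.random.default_rng(seed)
    ids = df['id'].unique()
    rng.shuffle(ids)
    n_train = int(round(len(ids) * train_frac))
    in_train = df['id'].isin(ids[:n_train])
    return df[in_train], df[~in_train]

demo = pd.DataFrame({'id': np.repeat(np.arange(10), 3),
                     'value': np.arange(30)})
train, test = split_by_id(demo)
```

If scikit-learn is already a dependency, `sklearn.model_selection.GroupShuffleSplit(test_size=0.3)` with `groups=df['id']` gives the same guarantee.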
<python><pandas><dataset>
2023-11-06 03:50:50
1
2,350
dingaro
77,428,515
369,792
Micropython remove cr/lf on pico w
<p>I'm pretty new to python and don't have a handle on the string functions yet. I want to have a simple TCP server I can enter commands into and have the network part working, but I can't figure out something simple like removing the CR/LF at the end :)</p> <p>Here's my simple code:</p> <pre class="lang-py prettyprint-override"><code>request = str(cl.readline()) print(&quot;Line:&quot;) print(request) request = request.replace('\r', '') print(request) request = request.replace('\n', '') print(request) request = request.strip() # this gives an error that decode doesn't exist # request = request.decode('UTF-8') #... print(&quot;Unknown request:&quot;) print(request) </code></pre> <p>But none of this seems to affect the string:</p> <pre><code>Line: b'on\r\n' b'on\r\n' b'on\r\n' Unknown request: b'on\r\n' </code></pre> <p>I thought I might have to use <code>decode('UTF-8')</code>, but I get an attribute error with that saying it doesn't exist.</p> <p><em><strong>Edit</strong></em></p> <p>UGH! Something I was doing was creating the string <code>&quot;b'on\\r\\n'&quot;</code>. Is there a way to differentiate between these two using <code>print()</code>?</p> <pre class="lang-py prettyprint-override"><code>s1 = b'on\r\n' s2 = &quot;b'on\r\n'&quot; print(s1) print(s2) </code></pre> <p>Both show as <code>b'on\r\n'</code>. I checked <code>len()</code> and it was 9, but I assumed that was being unicode with a null terminator so that messed me up as well.</p>
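The root cause is the first line of the server loop: `cl.readline()` returns bytes, and `str(b'on\r\n')` does not decode them — it produces the 9-character text of their repr, in which the backslashes and quotes are literal characters, so `replace('\r', '')` finds no real carriage return. Decode the bytes first (before any `str()` wrapping), and use `repr()` rather than `print()` to tell the two values apart. A sketch, run under CPython; the same str()-of-bytes trap exists in MicroPython:

```python
raw = b'on\r\n'                       # what cl.readline() returns: a bytes object

wrong = str(raw)                      # 9 chars: b, quote, o, n, backslash, r, backslash, n, quote
right = raw.decode('utf-8').strip()   # decode to text, then drop the CR/LF

print(repr(wrong))    # the backslashes show up as literal characters
print(repr(raw))      # a real bytes object with real \r\n at the end
print(right)          # on
```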
<python><micropython>
2023-11-06 02:58:40
1
29,301
Jason Goemaat
77,428,290
1,056,563
How to specify the timezone in a python timestamp literal?
<p>The following works:</p> <pre><code> datetime.strptime(&quot;2023-11-15 01:02:03.123456&quot;, &quot;%Y-%m-%d %H:%M:%S.%f&quot;) </code></pre> <blockquote> <p>Out[11]: datetime.datetime(2023, 11, 15, 1, 2, 3, 123456)</p> </blockquote> <p>But the following fails:</p> <pre><code> datetime.strptime(&quot;2023-11-15 01:02:03.123456&quot;, &quot;%Y-%m-%d %H:%M:%S.%f%Z&quot;) </code></pre> <blockquote> <p>ValueError: time data '2023-11-15 01:02:03.123456' does not match format '%Y-%m-%d %H:%M:%S.%f%Z'</p> </blockquote> <p>How should that timestamp literal be set up to specify <code>UTC-8</code>?</p> <p><em>Update</em> I also tried</p> <pre><code>&quot;2023-11-15 01:02:03.123456 -08:00&quot; </code></pre> <p>That did not work. I'm not sure what the syntax is here.</p>
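`%Z` matches a timezone *name* such as UTC or GMT; a numeric offset needs lowercase `%z`, and the literal must actually contain an offset. A sketch (`-0800` is UTC-8; since Python 3.7, `%z` also accepts the colon form):

```python
from datetime import datetime, timedelta, timezone

dt = datetime.strptime("2023-11-15 01:02:03.123456 -0800",
                       "%Y-%m-%d %H:%M:%S.%f %z")

# Python 3.7+: %z also matches an offset containing a colon
dt2 = datetime.strptime("2023-11-15 01:02:03.123456 -08:00",
                        "%Y-%m-%d %H:%M:%S.%f %z")
```

For ISO-8601-shaped strings, Python 3.7+ also offers `datetime.fromisoformat("2023-11-15 01:02:03.123456-08:00")`, which parses the offset with no format string at all.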
<python><timestamp>
2023-11-06 01:22:13
1
63,891
WestCoastProjects
77,428,218
116
Creating a protobuf factory for a dynamically generated message?
<p>Based on <a href="https://stackoverflow.com/questions/27654594/dynamically-create-a-new-protobuf-message">Dynamically create a new protobuf message</a>, I have this Python code which dynamically generates a protobuf containing two ints and a string. (Context: a dynamic data server will transfer query results via protobuf.)</p> <p>I can create the descriptor properly, and I believe the code for instantiating and populating the message class is correct.</p> <p>But, when trying to generate the factory</p> <pre><code># Step 5: Create the message class factory = message_factory.MessageFactory(pool) DynamicMessageClass = factory.GetPrototype(message_descriptor) &lt;-- error here </code></pre> <p>I receive this error.</p> <pre><code>/Users/mark/g/private/lessons/proto_dynamic/g2.py:40: UserWarning: MessageFactory class is deprecated. Please use GetMessageClass() instead of MessageFactory.GetPrototype. MessageFactory class will be removed after 2024. DynamicMessageClass = factory.GetPrototype(message_descriptor) Traceback (most recent call last): File &quot;/Users/mark/g/private/lessons/proto_dynamic/g2.py&quot;, line 40, in &lt;module&gt; DynamicMessageClass = factory.GetPrototype(message_descriptor) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/mark/miniconda3/lib/python3.11/site-packages/google/protobuf/message_factory.py&quot;, line 185, in GetPrototype return GetMessageClass(descriptor) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/mark/miniconda3/lib/python3.11/site-packages/google/protobuf/message_factory.py&quot;, line 70, in GetMessageClass concrete_class = getattr(descriptor, '_concrete_class', None) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: No message class registered for 'dynamic_package.DynamicMessage' </code></pre> <p>What do I need to do to generate my class from my descriptor?</p> <p>Full code, for reference:</p> <pre><code>from google.protobuf import descriptor_pb2, descriptor_pool, message_factory # Step 1: Create the 
DescriptorProto for the message message_descriptor_proto = descriptor_pb2.DescriptorProto() message_descriptor_proto.name = &quot;DynamicMessage&quot; message_descriptor_proto.field.add( name=&quot;first_int&quot;, number=1, type=descriptor_pb2.FieldDescriptorProto.TYPE_INT32, label=descriptor_pb2.FieldDescriptorProto.LABEL_OPTIONAL, ) message_descriptor_proto.field.add( name=&quot;second_int&quot;, number=2, type=descriptor_pb2.FieldDescriptorProto.TYPE_INT32, label=descriptor_pb2.FieldDescriptorProto.LABEL_OPTIONAL, ) message_descriptor_proto.field.add( name=&quot;a_string&quot;, number=3, type=descriptor_pb2.FieldDescriptorProto.TYPE_STRING, label=descriptor_pb2.FieldDescriptorProto.LABEL_OPTIONAL, ) # Step 2: Create a FileDescriptorProto including the DescriptorProto file_descriptor_proto = descriptor_pb2.FileDescriptorProto() file_descriptor_proto.name = &quot;dynamic_message.proto&quot; file_descriptor_proto.package = &quot;dynamic_package&quot; file_descriptor_proto.message_type.add().CopyFrom(message_descriptor_proto) # Step 3: Add the FileDescriptorProto to a DescriptorPool pool = descriptor_pool.DescriptorPool() file_descriptor = pool.Add(file_descriptor_proto) # Step 4: Retrieve the message descriptor by name from the DescriptorPool message_descriptor = pool.FindMessageTypeByName(&quot;dynamic_package.DynamicMessage&quot;) # Step 5: Create the message class factory = message_factory.MessageFactory(pool) DynamicMessageClass = factory.GetPrototype(message_descriptor) # Step 6: Instantiate the message class and populate with data dynamic_message = DynamicMessageClass() dynamic_message.first_int = 123 dynamic_message.second_int = 456 dynamic_message.a_string = &quot;Hello, World!&quot; # Serialize the message to a byte string serialized_data = dynamic_message.SerializeToString() # Show the serialized data print(serialized_data) </code></pre>
<python><protocol-buffers>
2023-11-06 00:56:21
1
305,996
Mark Harrison
77,428,177
5,032,387
Why do I need an __init__.py file to run pytest
<p>My <code>conftest.py</code> file imports a function from a local module that's used when creating fixtures:</p> <pre><code>from config import dummy_func </code></pre> <p>I then use these fixtures in conjunction with a Hypothesis strategy.</p> <p>The code hierarchy:</p> <pre><code>- src -config.py - tests -conftest.py other test files </code></pre> <p>When I run <code>pytest</code>, I get a <code>ModuleNotFoundError</code> for <code>config</code>. If I add an empty <code>__init__.py</code> file to the tests directory, the test will run fine. Why do we need it?</p> <p>Note I'm not asking why we need <code>__init__.py</code> in general as in this <a href="https://stackoverflow.com/questions/448271/what-is-init-py-for">post</a>, rather why we need it to get the <code>pytest</code> to work.</p>
<python><pytest>
2023-11-06 00:35:23
0
3,080
matsuo_basho
77,428,104
12,242,085
How to create new rows in a Data Frame based on values in 2 columns, while also managing values in other columns, in Python Pandas?
<p>I have Data Frame in Python Pandas like below:</p> <p><strong>Input data: (columns survey and event are object date type)</strong></p> <pre><code>data = [ (1, 2, 1, 0.33, '2023-10-10', '2023-09-25', 1, 20, 11), (1, 2, 1, 0.33, '2023-10-10', '2023-10-04', 0, 10, 10), (1, 5, 0, 0.58, '2023-05-05', '2023-05-01', 0, 10, None), (2, 8, 1, 0.45, '2023-02-13', '2023-02-10', 0, 25, 10), (2, 8, 1, 0.45, '2023-02-13', '2023-02-02', None, 12, None) ] df = pd.DataFrame(data, columns=['id', 'nps', 'target', 'score', 'survey', 'event', 'col1', 'col2', 'col3']) df </code></pre> <p><a href="https://i.sstatic.net/cHF4K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cHF4K.png" alt="enter image description here" /></a></p> <p><strong>Requirements:</strong></p> <p>And I need to create new rows for each combination of values in columns: id, survey. New rows have to:</p> <ol> <li>For each combination of values in: id, survey create dates in the event column that do not exist for the given combination of values in: id, survey <strong>going back 31 days from date in survey column</strong>, for example if the combination id = 1 and survey = '2023-05-05' I need to have date rows in the event column from 2023-05-04 to 2023-04-04, i.e. 
31 days backwards for that combination of: id, survey</li> <li>each new row need to has NaN values in columns: col1, col2, col3</li> <li>each new row need to has same values in columns: nps, target, score for each combination of values in: id, survey</li> </ol> <p><strong>Example output:</strong></p> <p>So, as a result I need to have something like below:</p> <pre><code>data = [ (1, 2, 1, 0.33, '2023-10-10', '2023-09-25', 1, 20, 11), (1, 2, 1, 0.33, '2023-10-10', '2023-10-04', 0, 10, 10), (1, 2, 1, 0.33, '2023-10-10', '2023-10-09', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-10-08', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-10-07', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-10-06', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-10-05', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-10-03', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-10-02', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-10-01', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-30', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-29', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-28', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-27', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-26', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-24', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-23', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-22', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-21', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-20', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-19', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-18', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-17', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-16', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-15', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-14', None, None, 
None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-13', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-12', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-11', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-10', None, None, None), (1, 2, 1, 0.33, '2023-10-10', '2023-09-09', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-05-01', 0, 10, None), (1, 5, 0, 0.58, '2023-05-05', '2023-05-04', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-05-03', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-05-02', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-30', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-29', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-28', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-27', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-26', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-25', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-24', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-23', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-22', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-21', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-20', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-19', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-18', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-17', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-16', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-15', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-14', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-13', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-12', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-11', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-10', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-09', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-08', 
None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-07', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-06', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-05', None, None, None), (1, 5, 0, 0.58, '2023-05-05', '2023-04-04', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-02-10', 0, 25, 10), (2, 8, 1, 0.45, '2023-02-13', '2023-02-02', None, 12, None), (2, 8, 1, 0.45, '2023-02-13', '2023-02-12', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-02-11', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-02-09', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-02-08', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-02-07', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-02-06', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-02-05', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-02-04', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-02-03', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-02-01', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-01-31', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-01-30', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-01-29', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-01-28', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-01-27', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-01-26', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-01-25', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-01-24', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-01-23', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-01-22', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-01-21', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-01-20', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-01-19', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-01-18', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-01-17', None, None, None), (2, 8, 1, 0.45, '2023-02-13', 
'2023-01-16', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-01-15', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-01-14', None, None, None), (2, 8, 1, 0.45, '2023-02-13', '2023-01-13', None, None, None), ] df = pd.DataFrame(data, columns=['id', 'nps', 'target', 'score', 'survey', 'event', 'col1', 'col2', 'col3']) df </code></pre> <p>How can I do that in Python Pandas ?</p>
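A sketch of one vectorised approach, built on the sample frame from the question: generate the full 31-day calendar per (id, survey) pair, anti-join it against the event dates that already exist, and concatenate. The calendar rows pick up NaN for col1-col3 automatically, while nps/target/score are carried along as key columns:

```python
import numpy as np
import pandas as pd

def add_missing_event_days(df, window=31):
    """Add a NaN-valued row for every missing event date in the
    `window` days strictly before each (id, survey) pair's survey date."""
    df = df.copy()
    df['survey'] = pd.to_datetime(df['survey'])
    df['event'] = pd.to_datetime(df['event'])

    keys = df[['id', 'nps', 'target', 'score', 'survey']].drop_duplicates()
    # full calendar: `window` dates per key, survey-1 day back to survey-window days
    cal = keys.loc[keys.index.repeat(window)].reset_index(drop=True)
    offsets = np.tile(np.arange(1, window + 1), len(keys))
    cal['event'] = cal['survey'] - pd.to_timedelta(offsets, unit='D')

    # anti-join: keep only calendar dates that are not already present
    existing = df[['id', 'survey', 'event']].drop_duplicates()
    cal = cal.merge(existing, on=['id', 'survey', 'event'],
                    how='left', indicator=True)
    cal = cal.loc[cal['_merge'] == 'left_only'].drop(columns='_merge')

    return pd.concat([df, cal], ignore_index=True)

data = [
    (1, 2, 1, 0.33, '2023-10-10', '2023-09-25', 1, 20, 11),
    (1, 2, 1, 0.33, '2023-10-10', '2023-10-04', 0, 10, 10),
    (1, 5, 0, 0.58, '2023-05-05', '2023-05-01', 0, 10, None),
    (2, 8, 1, 0.45, '2023-02-13', '2023-02-10', 0, 25, 10),
    (2, 8, 1, 0.45, '2023-02-13', '2023-02-02', None, 12, None),
]
df = pd.DataFrame(data, columns=['id', 'nps', 'target', 'score',
                                 'survey', 'event', 'col1', 'col2', 'col3'])
out = add_missing_event_days(df)
```

Each (id, survey) group ends up with exactly 31 rows here because every existing event already falls inside its window; `out.sort_values(['id', 'survey', 'event'])` afterwards if ordering matters.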
<python><pandas><dataframe><date>
2023-11-05 23:59:04
3
2,350
dingaro
77,427,892
1,914,781
Customize plotly legend lines to square
<p>I would like to change legend symbol from line to square. example code as below:</p> <pre><code>import pandas as pd import plotly.express as px import plotly.graph_objects as go def save_fig(fig,pngname): fig.write_image(pngname,format=&quot;png&quot;,width=600,height=400, scale=1) print(&quot;[[%s]]&quot;%pngname) return df = px.data.stocks() print(df.columns) df = df[['date','AMZN', 'AAPL']] fig = go.Figure() for name in df.columns: if name == 'date': continue trace = go.Scatter( x = df['date'], y = df[name], name=name, ) fig.add_trace(trace) save_fig(fig,&quot;./demo.png&quot;) </code></pre> <p>What's proper way to do that?</p> <p><a href="https://i.sstatic.net/BIEtw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BIEtw.png" alt="enter image description here" /></a></p>
<python><plotly>
2023-11-05 22:27:33
1
9,011
lucky1928
77,427,770
1,773,169
Longest Stable Subsequence - Recover solution from memo table
<p>I am working on a DP problem to find longest stable subsequence, and I'm currently stuck with recovering the solution.</p> <p>Here is the problem statement</p> <p><a href="https://i.sstatic.net/gIPzE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gIPzE.png" alt="enter image description here" /></a></p> <p>I have tried the below solution,</p> <pre><code>def computeLSS(a): T = {} # Initialize the memo table to empty dictionary # Now populate the entries for the base case n = len(a) for j in range(-1, n): T[(n, j)] = 0 # i = n and j # Now fill out the table : figure out the two nested for loops # It is important to also figure out the order in which you iterate the indices i and j # Use the recurrence structure itself as a guide: see for instance that T[(i,j)] will depend on T[(i+1, j)] # your code here for i in range(0, n + 1): for j in range(-1, n + 1): T[(i, j)] = 0 for i in range(n - 1, -1, -1): for j in range(n - 1 , -1, -1): aj = a[j] if 0 &lt;= j &lt; len(a) else None if aj != None and abs(a[i] - aj) &gt; 1: T[(i, j)] = T[(i + 1, j)] if aj == None or abs(a[i] - aj) &lt;= 1: T[(i, j)] = max(1 + T[(i + 1), i], T[(i + 1, j)]) for i in range(n-2, -1, -1): T[(i, -1)] = max(T[(i+1, -1)], T[(i+1, 0)], T[(i, 0)], 0) i = 0 j = -1 sol = [] while i &lt; n and j &lt; n: if abs(T[(i, j)] - T[(i+1, j)]) &gt; 1: sol.append(a[i]) j = i i = i + 1 return sol </code></pre> <p>But it fails for the below test cases,</p> <pre><code>a2 = [1, 2, 3, 4, 0, 1, -1, -2, -3, -4, 5, -5, -6] print(a2) sub2 = computeLSS(a2) print(f'sub2 = {sub2}') assert len(sub2) == 8 a3 = [0,2, 4, 6, 8, 10, 12] print(a3) sub3 = computeLSS(a3) print(f'sub3 = {sub3}') assert len(sub3) == 1 </code></pre> <p>I would really appreciate if someone can help me with some pointers like what might be the problem with my recovery code?</p>
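One pointer on the failing recovery: the test `abs(T[(i, j)] - T[(i+1, j)]) > 1` compares table *lengths*, not the stability condition on array values, so it can neither detect when taking `a[i]` was optimal nor enforce `|a[i] - a[j]| <= 1`. The standard pattern is to replay the recurrence: at each `i`, take `a[i]` exactly when it is compatible with the last chosen element and taking it achieves `T[(i, j)]`. A self-contained sketch of the table plus that recovery:

```python
def compute_lss(a):
    n = len(a)
    # T[(i, j)]: best stable length in a[i:] when the last chosen index is j
    # (j == -1 means nothing chosen yet)
    T = {(n, j): 0 for j in range(-1, n)}
    for i in range(n - 1, -1, -1):
        for j in range(-1, i):
            best = T[(i + 1, j)]                      # option 1: skip a[i]
            if j == -1 or abs(a[i] - a[j]) <= 1:
                best = max(best, 1 + T[(i + 1, i)])   # option 2: take a[i]
            T[(i, j)] = best
    # recovery: re-make the decision the table encodes at every index
    sol, j = [], -1
    for i in range(n):
        compatible = j == -1 or abs(a[i] - a[j]) <= 1
        if compatible and T[(i, j)] == 1 + T[(i + 1, i)]:
            sol.append(a[i])
            j = i
    return sol

a2 = [1, 2, 3, 4, 0, 1, -1, -2, -3, -4, 5, -5, -6]
a3 = [0, 2, 4, 6, 8, 10, 12]
```

Taking when `T[(i, j)] == 1 + T[(i + 1, i)]` is safe even on ties: it moves to state `(i + 1, i)` whose table value accounts for exactly the remaining optimum.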
<python><dynamic-programming><memoization>
2023-11-05 21:41:29
0
377
stack underflow
77,427,754
534,617
python decrement and possible deletion in one statement
<p>In python, I wonder if there is a combined statement for the following</p> <pre><code># dct = defaultdict(int) dct[k] -= 1 if dct[k] == 0: del dct[k] </code></pre> <p>Or any arbitrary lower bound, if below, delete the element.</p>
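There is no built-in single statement for this (`collections.Counter` keeps zero and negative counts rather than deleting them), so the usual answer is a small helper with the bound as a parameter. A sketch:

```python
from collections import defaultdict

def dec(dct, key, floor=0):
    """Decrement dct[key]; delete the key once the value falls to floor."""
    dct[key] -= 1
    if dct[key] <= floor:
        del dct[key]

dct = defaultdict(int, {'a': 2, 'b': 1})
dec(dct, 'a')   # 'a' -> 1, key kept
dec(dct, 'b')   # 'b' -> 0, key deleted
```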
<python><python-3.x>
2023-11-05 21:36:03
2
10,925
Qiang Li
77,427,592
15,531,189
kafka-python not receiving message on consumer side
<p>I have my kafka running on my docker. When I create a topic from CLI and start producer and consumer on CLI then kafka is able to transfer messages. but when I do same thing with python, then I don't receive any message on consumer side.</p> <p>My docker-compose file looks like</p> <pre><code>version: '3' services: zookeeper: image: zookeeper container_name: zookeeper ports: - &quot;2181:2181&quot; networks: - kafka-net kafka: image: wurstmeister/kafka container_name: kafka ports: - &quot;9092:9092&quot; environment: KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9092,OUTSIDE://localhost:9093 KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT KAFKA_LISTENERS: INSIDE://0.0.0.0:9092,OUTSIDE://0.0.0.0:9093 KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 KAFKA_CREATE_TOPICS: &quot;chatgpt:1:1&quot; networks: - kafka-net expose: - 9092 networks: kafka-net: driver: bridge </code></pre> <p>producer -</p> <pre><code>import json import time from kafka import KafkaProducer producer = KafkaProducer(bootstrap_servers=['localhost:9092'], value_serializer=lambda x: json.dumps(x).encode('utf-8')) producer.send(&quot;chatgpt&quot;, f&quot;Hello 1&quot;, f&quot;key1&quot;.encode(&quot;utf-8&quot;)) producer.flush() </code></pre> <p>consumer -</p> <pre><code>import json from kafka import KafkaConsumer consumer = KafkaConsumer(&quot;chatgpt&quot;, bootstrap_servers=['localhost:9092'], auto_offset_reset='earliest', enable_auto_commit=True, group_id='my-group', value_deserializer=lambda x: json.loads(x.decode('utf-8'))) consumer.subscribe(['chatgpt']) consumer.subscription() for msz in consumer: print(msz) </code></pre> <p>Can anyone suggest what I'm doing wrong?</p>
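A likely culprit, assuming the Python clients run on the host rather than in a container: `localhost:9092` reaches the INSIDE listener, whose advertised address is `kafka:9092` — a hostname only resolvable on the Docker network — so after bootstrapping, the client is redirected to an unreachable broker and silently stalls. The CLI tools work because they run inside the container, where `kafka` resolves. A sketch of the compose changes: publish the OUTSIDE listener (advertised as `localhost:9093`) and point the clients at it:

```yaml
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9093:9093"   # publish the OUTSIDE listener instead of 9092
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9092,OUTSIDE://localhost:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9092,OUTSIDE://0.0.0.0:9093
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
```

With that in place, both the producer and consumer would use `bootstrap_servers=['localhost:9093']`.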
<python><apache-kafka><kafka-consumer-api><kafka-python>
2023-11-05 20:38:04
0
343
SHIVAM SINGH
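One thing that stands out in the compose file: the OUTSIDE listener is advertised as `localhost:9093`, but only port 9092 is published to the host, so a Python client bootstrapping via `localhost:9092` reaches the INSIDE listener and is then told to talk to `kafka:9092`, which is unresolvable from the host. A likely fix (a sketch, not verified against this exact setup) is to publish the OUTSIDE port and point both clients at it:

```yaml
# docker-compose fragment: publish the OUTSIDE listener to the host
services:
  kafka:
    ports:
      - "9092:9092"
      - "9093:9093"   # Python clients on the host connect here
```

Then use `bootstrap_servers=['localhost:9093']` in both the producer and the consumer.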
77,427,567
14,358,734
Why am I getting TypeError: readlinestest() missing 1 required positional argument: 'input'
<p>Here's the code</p> <pre><code>def readlinestest(input): return len(input) if __name__ == '__main__': argtest = [&quot;&quot;, &quot;/n /n&quot;, &quot;/n /n /n&quot;] with concurrent.futures.ThreadPoolExecutor() as e: results = list(e.map(readlinestest(), argtest)) for x in results: print(x) </code></pre> <p>The expected output is something like</p> <pre><code>1 2 3 </code></pre> <p>And the full error message</p> <pre><code>Traceback (most recent call last): File &quot;HW9.py&quot;, line 17, in &lt;module&gt; results = list(e.map(readlinestest(), input=argtest)) TypeError: readlinestest() missing 1 required positional argument: 'input' </code></pre> <p>From what I know about .map(), everything is correctly written. And yet: the error message. So something must be off.</p>
<python>
2023-11-05 20:30:41
0
781
m. lekk
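`Executor.map` wants the function object itself; writing `readlinestest()` calls it with no arguments before `map` ever runs, which is exactly the missing-argument error. A working sketch:

```python
import concurrent.futures

def readlinestest(text):
    return len(text)

argtest = ["", "/n /n", "/n /n /n"]
with concurrent.futures.ThreadPoolExecutor() as e:
    # pass the function, not the result of calling it
    results = list(e.map(readlinestest, argtest))
print(results)  # [0, 5, 8]
```

(Note the sample strings contain `/n`, not newlines, so `len` returns their character counts.)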
77,427,370
14,072,498
How to update nested object in delta-rs
<p>Given the below code, I'm able to update root-level fields as e.g. &quot;age&quot;. But how to update fields inside <code>my_struct</code>?</p> <p>The attempt with <code>struct(&quot;a&quot;: &quot;1&quot;, &quot;b&quot;: &quot;2&quot;)</code> gives</p> <blockquote> <p>DeltaTable error: This feature is not implemented: Unsupported SQL json operator Colon</p> </blockquote> <p>I also tried <code>named_struct</code>, it gives</p> <blockquote> <p>Generic DeltaTable error: Error during planning: Invalid function 'named_struct'.</p> </blockquote> <p>I've been searching a couple of hours for docs/examples, but without luck so far.</p> <pre><code>import pandas as pd from deltalake import write_deltalake, DeltaTable df = pd.DataFrame.from_records([ {&quot;name&quot;: &quot;Alice&quot;, &quot;age&quot;: 25, &quot;gender&quot;: &quot;Female&quot;, &quot;my_struct&quot;: {&quot;a&quot;: 1, &quot;b&quot;: 2}}, {&quot;name&quot;: &quot;Bob&quot;, &quot;age&quot;: 30, &quot;gender&quot;: &quot;Male&quot;, &quot;my_struct&quot;: {&quot;a&quot;: 3, &quot;b&quot;: 4}} ]) write_deltalake('./db/my_table', df) dt = DeltaTable('./db/my_table') dt.update({&quot;age&quot;: &quot;42&quot;}) # this is working dt.update({&quot;my_struct&quot;: 'struct(&quot;a&quot;: &quot;1&quot;, &quot;b&quot;: &quot;2&quot;)'}) # not working print(dt.to_pandas()) </code></pre> <p>My pyproject.toml</p> <pre><code>[tool.poetry] name = &quot;delta-test&quot; [tool.poetry.dependencies] python = &quot;^3.10&quot; deltalake = &quot;^0.12.0&quot; pandas = &quot;1.5&quot; [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre>
<python><delta-lake><delta-rs>
2023-11-05 19:34:20
1
11,484
Roar S.
77,427,359
2,869,180
deep equality vs shallow equality in python?
<p>I read somewhere that shallow equality for reference types like tuples, lists and strings refers to comparing them to see if they are the same object.</p> <p>For strings and lists the axiom holds, but for tuples it seems wrong:</p> <pre><code>t1 = (1, 2) t2 = (1, 2) print(t1 is t2) print(t1 == t2) </code></pre> <p>This code outputs True and True.</p> <p>Can someone please explain why they are the same object?</p>
<python>
2023-11-05 19:31:29
0
1,060
bib
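`==` compares contents (deep/structural equality) while `is` compares identity. Two tuple literals with the same constant contents in one script are often folded into a single object by CPython's compiler — an implementation detail, not a language guarantee — which is why `t1 is t2` can print `True`. Tuples built at runtime typically remain distinct objects:

```python
t1 = (1, 2)
t2 = (1, 2)          # constant folding may reuse one object here

a = tuple([1, 2])    # built at runtime
b = tuple([1, 2])

print(t1 == t2, a == b)  # True True: same contents either way
print(a is b)            # usually False in CPython: two distinct objects
```

The lesson: never rely on `is` for value comparison; only `==` is guaranteed to reflect contents.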
77,427,358
19,694,624
PyCharm doesn't list Python 3.12 interpreter
<p>On my Pop OS system the base Python interpreter is 3.10, and I installed Python 3.12, it works perfectly fine:</p> <p><a href="https://i.sstatic.net/aVFR3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aVFR3.png" alt="img1" /></a></p> <p>With 3.12 I can create virtual environment, run Python files, so it works perfectly:</p> <p><a href="https://i.sstatic.net/WOwn2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WOwn2.png" alt="enter image description here" /></a></p> <p>But when I try to set it in PyCharm, it just doesn't see Python 3.12:</p> <p><a href="https://i.sstatic.net/HRi3t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HRi3t.png" alt="img2" /></a></p> <p>So it sees and recognizes <code>python</code>, <code>python3</code>, <code>python3.10</code> binaries, but not <code>python3.12</code>, despite python3.12 being there:</p> <p><a href="https://i.sstatic.net/lSs5p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lSs5p.png" alt="enter image description here" /></a></p> <p>Why is that? What should I do?</p>
<python><python-3.x><pycharm><jetbrains-ide><pythoninterpreter>
2023-11-05 19:31:21
1
303
syrok
77,427,221
16,707,518
Converting existence of any text in a pandas column to a number value
<p>I can't seem to find this specific question but hopefully it's an easy one. I have a pandas dataframe with a column with mixed numbers and text. I just want to convert any text instances in that column to the value 999. e.g. for column B, I'd like to perform this operation on, so:</p> <pre><code>A B 1 1 2 5 3 FDS 4 EWR 5 7 6 3 7 BB 8 C </code></pre> <p>becomes...</p> <pre><code>A B 1 1 2 5 3 999 4 999 5 7 6 3 7 999 8 999 </code></pre>
<python><pandas>
2023-11-05 18:45:12
2
341
Richard Dixon
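One compact way to do this (a sketch, assuming anything non-numeric should become 999) is `pd.to_numeric` with `errors="coerce"`, which turns unparseable entries into NaN, then fill those:

```python
import pandas as pd

df = pd.DataFrame({"A": range(1, 9),
                   "B": [1, 5, "FDS", "EWR", 7, 3, "BB", "C"]})

# anything that can't be parsed as a number becomes NaN, then 999
df["B"] = pd.to_numeric(df["B"], errors="coerce").fillna(999).astype(int)
print(df["B"].tolist())  # [1, 5, 999, 999, 7, 3, 999, 999]
```

Note this also catches numeric strings like `"7"` (kept as 7), which is usually what's wanted for a mixed column.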
77,427,019
5,551,539
Python script to generate letter does not read .csv well
<p>Maybe you can help me troubleshoot. I am running this code to read a <code>.csv</code> file and us a sample <code>.txt</code> letter to generate a <code>.txt</code> file for each row of the <code>.csv</code> file. I name each file after the <code>Institution</code> column of the <code>.csv</code> file.</p> <pre><code> import csv import os # Define the path to the directory where you want to save the files directory_path = r'C:\Users' # Set the current working directory to the specified path os.chdir(directory_path) # Define the CSV and sample text file names csv_file = &quot;sample.csv&quot; sample_text_file = &quot;sample_template.txt&quot; # Read the CSV file with open(csv_file, 'r') as csv_file: csv_reader = csv.DictReader(csv_file) data = list(csv_reader) # Read the sample text template with open(sample_text_file, 'r') as template_file: template_text = template_file.read() # Define the placeholders without delimiters placeholders = [&quot;Position&quot;, &quot;Platform&quot;, &quot;Institution&quot;, &quot;Optional Paragraph&quot;] # Process each row from the CSV for row in data: # Create a copy of the template text for this row modified_text = template_text # Replace placeholders with values from the CSV for placeholder in placeholders: value = row.get(placeholder, '') modified_text = modified_text.replace(f'&quot;{placeholder}&quot;', value) # Get the name for the output file from &quot;Institution&quot; column output_file_name = f&quot;{row['Institution']}.txt&quot; # Write the modified text to the output file with open(output_file_name, 'w') as output_file: output_file.write(modified_text) print(f&quot;File '{output_file_name}' has been created.&quot;) print(&quot;All files have been generated in the specified directory.&quot;) </code></pre> <p>The <code>.txt</code> file is written to be exported to latex and looks like this:</p> <pre><code>\begin{document} %---------------------------------------------------------------------------------------- % FIRST 
PAGE HEADER %---------------------------------------------------------------------------------------- \vspace{-1em} % Pull the rule closer to the logo \rule{\linewidth}{1pt} % Horizontal rule \bigskip\bigskip % Vertical whitespace %---------------------------------------------------------------------------------------- % YOUR NAME AND CONTACT INFORMATION %---------------------------------------------------------------------------------------- \hfill \begin{tabular}{l @{}} \today \bigskip\\ % Date NAME \\ INSTITUTION \\ A1 \\ % Address A2 \\ \end{tabular} Dear Members of the Recruiting Committee, \bigskip % Vertical whitespace %---------------------------------------------------------------------------------------- % LETTER CONTENT %---------------------------------------------------------------------------------------- I am writing to apply for the &quot;Position&quot; position that you have advertised on &quot;Platform&quot;. I believe the skills and competencies I hold would be of great value to the &quot;Institution&quot;. &quot;Optional Paragraph&quot; Thank you for your time and consideration. \bigskip % Vertical whitespace Sincerely yours, \vspace{50pt} % Vertical whitespace \includegraphics[width=0.2\textwidth]{signature.png} NAME \end{document} </code></pre> <p>And the sample <code>.csv</code> looks like this:</p> <pre><code>Position,Platform,Institution,Optional Paragraph Waiter,Facebook,Company A , Coder,Twitter,Company B,I am cool. </code></pre> <p><strong>My problem is that this gives me the following result for Company A:</strong></p> <pre><code>I am writing to apply for the position that you have advertised on Facebook. I believe the skills and competencies I hold would be of great value to the Company A . </code></pre> <p>As you can see, the <code>Position</code> column is not recognized by my code. I was wondering if you had thoughts on what is causing this?</p>
<python><csv><txt>
2023-11-05 17:44:22
1
301
Weierstraß Ramirez
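The replacement logic looks consistent with the template, so one common culprit when exactly the *first* CSV column fails is a UTF-8 byte-order mark: the first header then reads as `'\ufeffPosition'`, and `row.get('Position')` silently returns `''`. Opening the file with `encoding='utf-8-sig'` strips it. A self-contained sketch of the difference (the in-memory CSV is illustrative):

```python
import csv
import io

raw_bytes = "\ufeffPosition,Platform\nWaiter,Facebook\n".encode("utf-8")

# plain utf-8 keeps the BOM glued to the first header name
bad_row = next(csv.DictReader(io.StringIO(raw_bytes.decode("utf-8"))))
print(list(bad_row))         # ['\ufeffPosition', 'Platform']

# utf-8-sig strips the BOM, so the 'Position' key exists again
good_row = next(csv.DictReader(io.StringIO(raw_bytes.decode("utf-8-sig"))))
print(good_row["Position"])  # Waiter
```

In the original script that would mean `open(csv_file, 'r', encoding='utf-8-sig')`.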
77,426,587
3,622,727
Ignoring all default arguments python / parse_args
<p>All arguments to a python script are unknown. I am using this to parse them:</p> <pre><code>parser = argparse.ArgumentParser() parsed, unknown = parser.parse_known_args() for arg in unknown: if arg.startswith((&quot;-&quot;)): parser.add_argument(arg.split(' ')[0]) args = parser.parse_args() ids = {} names = {} for arg in vars(args): priv = getattr(args, arg) name = arg[0:7] ids[arg] = priv names[arg] = name </code></pre> <p>This works until an argument starts with &quot;-h&quot; which is interpreted as a known (?) or default (?) argument and the script breaks with</p> <p><code>error: argument -h/--help: ignored explicit argument ...</code></p> <p>How can I disable this to only use unknown arguments?</p>
<python><argparse>
2023-11-05 15:46:08
0
353
Døner M.
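The built-in `-h/--help` option is what intercepts arguments starting with `-h`; constructing the parser with `add_help=False` removes it, so such arguments flow through `parse_known_args` like any other unknown flag. A sketch (the example argv is mine):

```python
import argparse

argv = ["-hello", "world", "-foo", "bar"]

# add_help=False removes the default -h/--help, so "-hello" is no
# longer matched against it
parser = argparse.ArgumentParser(add_help=False)
parsed, unknown = parser.parse_known_args(argv)

for arg in unknown:
    if arg.startswith("-"):
        parser.add_argument(arg.split(" ")[0])

args = parser.parse_args(argv)
print(vars(args))  # {'hello': 'world', 'foo': 'bar'}
```

If help output is still wanted under another name, it can be re-added explicitly, e.g. `parser.add_argument('--usage', action='help')`.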
77,426,566
10,940,989
concurrent.futures.Executor.map: calculate function inputs lazily
<p>I have some code similar to the following:</p> <pre><code>def worker(l:list): ... really_big_list = [......] with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor: # Start the load operations futures = {executor.submit(worker, really_big_list[i:i+100]) for i in range(0, len(really_big_list), 10)} for future in concurrent.futures.as_completed(futures): data = future.result() </code></pre> <p>Note that the workers take slices of size 100 but the start index only goes up by 10 at a time. In this code, even though I am only using 5 workers, computing the inputs for every task up front needs roughly 10 times the size of <code>really_big_list</code>. This is simply too much memory for what I am doing. Is there a way to defer calculating <code>really_big_list[i:i+100]</code> until that worker is started?</p>
<python><multiprocessing>
2023-11-05 15:38:51
1
380
Anthony Poole
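Since threads share memory, one way to defer the slicing (a sketch, assuming the overlapping windows are intentional) is to hand each worker only its start index and let it build its own slice; then at most `max_workers` chunks exist at any moment. Note that `Executor.map` also submits every task up front, so it is the *arguments* that must be cheap — an index is, a slice is not:

```python
import concurrent.futures

really_big_list = list(range(1_000))

def worker(start):
    # the slice is built inside the worker, so only the chunks of the
    # currently running tasks are alive at any moment
    chunk = really_big_list[start:start + 100]
    return sum(chunk)

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    futures = [executor.submit(worker, i)
               for i in range(0, len(really_big_list), 10)]
    results = [f.result() for f in futures]
```

With `ProcessPoolExecutor` this trick needs the list to be reachable in each process (e.g. a module-level global or shared memory), since workers don't share the parent's heap.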
77,426,429
5,352,674
Form not valid after setting QuerySet of field in __init__
<p>A new <code>Workorder</code> is created by visiting the <code>Building</code> page and creating a Work Order. This in turn displays the <code>WorkOrderForm</code> on a new page. When viewing this form, I want to display only the equipment associated to a building, not all of the equipment.</p> <p>I have tried overriding the <code>__init__</code> of the form to set the <code>queryset</code> of the <code>MultipleChoiceField</code>, which works as intended in displaying the equipment associated to the building, however when submitting the form the <code>form.is_valid()</code> fail yet there are no errors displayed.</p> <p>I have 3 models <code>Building</code>, <code>WorkOrder</code> and <code>Equipment</code> as follows:</p> <p><strong>Building</strong></p> <pre><code>class BuildingDetail(models.Model): location_Code = models.CharField(primary_key=True, max_length=15, help_text='Enter a location code.', unique=True) civic_Address = models.CharField(max_length=100, help_text='Enter the civic address of the location.') current_Tenant = models.ForeignKey(Tenant, on_delete=models.CASCADE, help_text='Select a Tenant.') landlord = models.ForeignKey(Landlord, on_delete=models.CASCADE, help_text='Select a Landlord.') </code></pre> <p><strong>WorkOrder</strong></p> <pre><code>class WorkOrder(models.Model): id = models.CharField(primary_key=True, max_length=7, default=increment_workorder, verbose_name=&quot;ID&quot;) building_Code = models.ForeignKey(BuildingDetail, on_delete=models.CASCADE) issue_Date = models.DateField(blank=True, null=True, default=datetime.date.today()) completion_Date = models.DateField(blank=True, null=True) tenant_Issue = models.TextField(help_text=&quot;Enter the issue description.&quot;) scope_Of_Work = models.TextField(help_text=&quot;Enter the Scope of Work required.&quot;) contact_name = models.CharField(max_length=50, help_text='Enter the contact Name.', blank=True, default=&quot;&quot;) contact_number = models.CharField(max_length=11, 
help_text='Enter the contact Phone Number.', blank=True, default=&quot;&quot;) rebill_Invoice = models.TextField(max_length=50, help_text='Enter the Devco Rebill Invoice number.', blank=True, default=&quot;&quot;) rebill_Date = models.TextField(blank=True, null=True) tenant = models.ForeignKey(Tenant, on_delete=models.CASCADE, null=True) equipment = models.ManyToManyField(Equipment, blank=True, null=True, help_text='Select Equipment associated to this work order. Hold CTRL to select multiple.') </code></pre> <p><strong>Equipment</strong></p> <pre><code>class Equipment(models.Model): id = models.UUIDField(primary_key=True, default=uuid.uuid4, help_text='Unique ID for equipment.') name = models.CharField(max_length=150, help_text='Enter a name for this piece of equipment.') make = models.CharField(max_length=100, blank=True, help_text='Enter the Make of the item.') model = models.CharField(max_length=100, blank=True, help_text='Enter the model of the item.') serial = models.CharField(max_length=100, blank=True, help_text='Enter the serial number of the item.') cost = models.DecimalField(max_digits=15, decimal_places=2, help_text='Enter the cost before GST.', blank=True, default=0) building = models.ForeignKey(BuildingDetail, on_delete=models.CASCADE) </code></pre> <p><strong>Forms.py</strong></p> <pre><code>class WorkOrderForm(ModelForm): class Meta: model = WorkOrder fields = '__all__' widgets = { &quot;issue_Date&quot;: DatePickerInput(), &quot;completion_Date&quot;: DatePickerInput(), } def __init__(self, *args, **kwargs): building = kwargs.pop('building', None) super(WorkOrderForm, self).__init__(**kwargs) self.fields['equipment'].queryset = Equipment.objects.filter(building__location_Code=building) </code></pre> <p><strong>Views.py</strong></p> <pre><code>def CreateWorkOrder(request, building_code): if building_code == &quot;none&quot;: workOrderForm = WorkOrderForm() associatedCostsForm = AssociatedCostsForm() context = { 'workOrderForm': workOrderForm, 
'associatedCostsForm': associatedCostsForm, } else: building = BuildingDetail.objects.get(location_Code=building_code) equipment = Equipment.objects.filter(building=building) currentTenant = building.current_Tenant workOrderForm = WorkOrderForm(building=building_code, initial={ 'building_Code': building, 'tenant': currentTenant}) associatedCostsForm = AssociatedCostsForm() context = { 'workOrderForm': workOrderForm, 'associatedCostsForm': associatedCostsForm, 'building_code': building_code, 'building': building, } if request.method == &quot;POST&quot;: form = WorkOrderForm(request.POST) if 'submit-workorder' in request.POST: if form.is_valid(): form.clean() workorder = form.save(commit=False) workorder.save() added_workorder = WorkOrder.objects.get(id=workorder.id) building = added_workorder.building_Code print(added_workorder) print(&quot;You have submitted a Work Order.&quot;) return redirect('view_building', building) else: print(form.errors) return render(request, 'workorders\\workorder_form.html', context) </code></pre>
<python><django><django-forms>
2023-11-05 14:58:15
1
319
Declan Morgan
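Two things stand out (hedged, since this can't be run without the full project): the POST branch builds `WorkOrderForm(request.POST)` without the `building` kwarg, so the equipment queryset is empty; and the `__init__` override calls `super(WorkOrderForm, self).__init__(**kwargs)` without forwarding `*args`, so `request.POST` — passed positionally — is silently dropped and the form is unbound, which makes `is_valid()` fail with no field errors. The forwarding bug in isolation, with plain classes standing in for Django's:

```python
class Base:
    def __init__(self, *args, **kwargs):
        self.args = args          # stands in for ModelForm receiving `data`

class Child(Base):
    def __init__(self, *args, **kwargs):
        kwargs.pop("building", None)
        super().__init__(**kwargs)   # bug: *args is never forwarded

c = Child({"field": "value"}, building="B1")
print(c.args)  # () -- the positional POST data never reached Base
```

The fix is `super().__init__(*args, **kwargs)` after popping `building`, and passing `building=...` when constructing the form from `request.POST` as well.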
77,426,399
11,395,264
Why is Python leaving non-existent files in my package?
<p>For whatever reason, when I install my package <code>pie</code> with <code>pip install --break-system-packages --user git+https://github.com/zurgeg/pie.git@ed00801</code>, it fails due to a mysterious phantom file:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File &quot;/home/jg/.local/bin/pie&quot;, line 5, in &lt;module&gt; from pie.pie import cli File &quot;/home/jg/.local/lib/python3.11/site-packages/pie/pie.py&quot;, line 20, in &lt;module&gt; from .crusts.notdate.play.playdate_pie import playdate as crust_playdate File &quot;/home/jg/.local/lib/python3.11/site-packages/pie/crusts/__init__.py&quot;, line 7, in &lt;module&gt; globals()[submodule] = _import_module(&quot;.&quot; + submodule, __name__) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/lib/python3.11/importlib/__init__.py&quot;, line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/jg/.local/lib/python3.11/site-packages/pie/crusts/notdate/__init__.py&quot;, line 5, in &lt;module&gt; if not submodule.startswith(&quot;_&quot;): </code></pre> <p>Now, this file did exist at one point:</p> <pre class="lang-bash prettyprint-override"><code>git -P log --all --full-history --no-color -- src/pie/crusts/notdate/__init__.py | awk 'match($0, /commit/){ print substr($0, RSTART + 7, 7) }' aaef1b9 33210c5 bd3ded3 </code></pre> <p>That is, until I realized that the file was in the wrong place and moved it to its current position:</p> <pre class="lang-bash prettyprint-override"><code>git diff aaef1b9 aaef1b9~1 diff --git a/src/pie/crusts/__init__.py b/src/pie/crusts/notdate/__init__.py similarity index 58% rename from src/pie/crusts/__init__.py rename to src/pie/crusts/notdate/__init__.py [...] </code></pre> <p>Deleting the file didn't do anything, Python still thinks it's there, and clearly it is, even after a <code>pip uninstall pie</code>. 
Now, I can just <code>rm</code> the file, but that's a temporary solution, because if I recreate it in the repository, install, and then delete it, the file is kept. Bumping the version to <code>0.2.2</code> doesn't work either.</p> <p>I expect the file to just be deleted when I reinstall my package, but for whatever reason, it seems like <code>pip</code> isn't tracking that file as part of the package?</p> <p>UPDATE: It still seems to be failing with all of these commands using both a venv and the system install:</p> <ul> <li><code>pip install --no-cache-dir</code></li> <li><code>pip install --force</code></li> <li><code>pip install --force --no-cache-dir</code></li> </ul>
<python><pip>
2023-11-05 14:49:40
1
544
zurgeg
77,426,326
893,254
Numpy power with array as exponent
<p>I have Python code containing the following lines:</p> <pre><code>import math import numpy # Poisson model lambda_param = 3.0 x_model = numpy.linspace(0, 10, num=11, dtype=int) y_model = numpy.zeros(11) index = 0 for x in x_model: y_model[index] = numpy.power(lambda_param, x) / math.factorial(x) * numpy.exp(-lambda_param) index += 1 </code></pre> <p>I want to write this in a more Pythonic way - without using a for loop.</p> <p>I tried this:</p> <pre><code>y_model = numpy.power(lambda_param, x_model) / math.factorial(x_model) * numpy.exp(-lambda_param) </code></pre> <p>But this didn't seem to work, I would guess because <code>math.factorial</code> doesn't know how to operate on an array-like object.</p> <p>Is there a way to do this?</p>
<python><numpy><factorial><poisson>
2023-11-05 14:33:51
2
18,579
user2138149
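The guess is right: `math.factorial` accepts only a scalar. One loop-free-in-the-arithmetic sketch precomputes the factorials over the integer array (`scipy.special.factorial` would vectorize this directly, if SciPy is available):

```python
import math
import numpy as np

lambda_param = 3.0
x_model = np.arange(11)

# math.factorial is scalar-only, so build the factorial array once;
# the Poisson formula itself then stays fully vectorized
factorials = np.array([math.factorial(x) for x in x_model], dtype=float)
y_model = np.power(lambda_param, x_model) / factorials * np.exp(-lambda_param)
```

For large `x`, `scipy.stats.poisson.pmf(x_model, lambda_param)` is the numerically safer route, since `lambda**x / x!` overflows quickly.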
77,425,978
15,001,463
D-dimensional multi-index sums
<p>I understand how a summation in 1, 2, and 3 dimensions could be defined by using 1, 2, and 3 <code>for</code> loops respectively, but how could you make a general function that sums over a multi-index like the one defined below?</p> <p><a href="https://i.sstatic.net/1DZuj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1DZuj.png" alt="enter image description here" /></a></p> <p>Assume that <code>u</code> just samples a random integer for simplicity, i.e.,</p> <pre class="lang-py prettyprint-override"><code>from random import randint def u(a=0, b=100): return randint(a, b) </code></pre> <p>As an example, say the multi-index vector <code>js = [0, 0, 0]</code> and the upper limit vector <code>Ms = [5, 5, 5]</code>, this would imply that <code>d = 3</code> (i.e., a 3-dimensional multi-index), and the summation could be written simply as below:</p> <pre class="lang-py prettyprint-override"><code>mysum = 0 for j1 in range(Ms[0]): for j2 in range(Ms[1]): for j3 in range(Ms[2]): mysum += u() </code></pre> <p>but what would be a general way to write this for <code>d</code> dimensions (like what is shown in the image)?</p> <p>Edit: I know in this trivial example, one could simply do something like</p> <pre class="lang-py prettyprint-override"><code>from numpy import prod M = prod(Ms) mysum = sum([u() for i in range(M)]) </code></pre> <p>but again, I am interested specifically in a general way to do <code>d</code> nested <code>for</code> loops since the function <code>u</code> can be more complex and depend on the indices j_1, j_2, ..., j_d.</p>
<python><sum>
2023-11-05 12:53:56
2
714
Jared
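`itertools.product` generates exactly the index tuples that `d` nested loops would visit, for any `d`, so the whole sum collapses to one loop — a sketch:

```python
from itertools import product

def multi_sum(u, Ms):
    # product(range(M1), ..., range(Md)) walks the same (j1, ..., jd)
    # tuples as d nested for-loops
    return sum(u(*js) for js in product(*(range(M) for M in Ms)))

# u may depend on the indices j_1, ..., j_d:
total = multi_sum(lambda j1, j2, j3: j1 + j2 + j3, [5, 5, 5])
print(total)  # 750
```

Because `u` receives the unpacked index tuple, the same function works unchanged for any dimension `d = len(Ms)`.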
77,425,962
9,835,872
How to compose functions through purely using Python's standard library?
<p>Python's standard library is vast, and my intuition tells that there must be a way in it to accomplish this, but I just can't figure it out. This is purely for curiosity and learning purposes:</p> <p>I have two simple functions:</p> <pre class="lang-py prettyprint-override"><code>def increment(x): return x + 1 def double(x): return x * 2 </code></pre> <p>and I want to compose them into a new function <code>double_and_increment</code>. I could of course simply do that as such:</p> <pre><code>double_and_increment = lambda x: increment(double(x)) </code></pre> <p>but I could also do it in a more convoluted but perhaps more &quot;ergonomically scalable&quot; way:</p> <pre><code>import functools double_and_increment = functools.partial(functools.reduce, lambda acc, f: f(acc), [double, increment]) </code></pre> <p>Both of the above work fine:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; double_and_increment(1) 3 </code></pre> <p>Now, the question is, is there tooling in the standard library that would allow achieving the composition <strong>without any user-defined lambdas, regular functions, or classes.</strong></p> <p>The first intuition is to replace the <code>lambda acc, f: f(acc)</code> definition in the <a href="https://docs.python.org/3/library/functools.html#functools.reduce" rel="noreferrer"><code>functools.reduce</code></a> call with <a href="https://docs.python.org/3/library/operator.html#operator.call" rel="noreferrer"><code>operator.call</code></a>, but that unfortunately takes the arguments in the reverse order:</p> <pre><code>&gt;&gt;&gt; (lambda acc, f: f(acc))(1, str) # What we want to replace. &gt;&gt;&gt; '1' &gt;&gt;&gt; import operator &gt;&gt;&gt; operator.call(str, 1) # Incorrect argument order. 
&gt;&gt;&gt; '1' </code></pre> <p>I have a hunch that using <code>functools.reduce</code> is still the way to accomplish the composition, but for the life of me I can't figure out a way to get rid of the user-defined lambda.</p> <p>Few out-of-the-box methods that got me close:</p> <pre class="lang-py prettyprint-override"><code>import functools, operator # Curried form, can't figure out how to uncurry. functools.partial(operator.methodcaller, '__call__')(1)(str) # The arguments needs to be in the middle of the expression, which does not work. operator.call(*reversed(operator.attrgetter('args')(functools.partial(functools.partial, operator.call)(1, str)))) </code></pre> <p>Have looked through all the existing questions, but they are completely different and rely on using user-defined functions and/or lambdas.</p>
<python><functional-programming><standard-library><language-features><function-composition>
2023-11-05 12:48:12
3
24,819
ruohola
77,425,931
14,923,149
How to Parse Hierarchical Data and Format it into a TSV File in Python?
<p>I have a dataset with hierarchical information and KO numbers, and I'm looking to format this data into a TSV (Tab-Separated Values) file in Python, where the first column contains KO numbers, the second column contains descriptions, and the third column contains a hierarchy based on the nearest 'A' section in the input data. The hierarchy should include elements starting with 'A,' 'B,' and 'C' up to the nearest 'C' section. Also, if the same KO number appears under different hierarchies, those hierarchies should be separated by | within the same row. The input data is a file in .keg format. Input Data:</p> <pre><code>A09100 Metabolism B B 09101 Carbohydrate metabolism C 00010 Glycolysis / Gluconeogenesis [PATH:ko00010] D K00844 HK; hexokinase [EC:2.7.1.1] D K12407 GCK; glucokinase [EC:2.7.1.2] D K00001 E1.1.1.1, adh; alcohol dehydrogenase [EC:1.1.1.1] B 09103 Lipid metabolism C 00071 Fatty acid degradation [PATH:ko00071] D K00001 E1.1.1.1, adh; alcohol dehydrogenase [EC:1.1.1.1] A09120 Genetic Information Processing B B 09121 Transcription C 03020 RNA polymerase [PATH:ko03020] D K03043 rpoB; DNA-directed RNA polymerase subunit beta [EC:2.7.7.6] D K13797 rpoBC; DNA-directed RNA polymerase subunit beta-beta' [EC:2.7.7.6] </code></pre> <p>Expected Output:</p> <pre><code> KO metadata_KEGG_Description metadata_KEGG_Pathways K00844 HK; hexokinase [EC:2.7.1.1] Metabolism, Carbohydrate metabolism, Glycolysis / Gluconeogenesis K12407 GCK; glucokinase [EC:2.7.1.2] Metabolism, Carbohydrate metabolism, Glycolysis / Gluconeogenesis K00001 E1.1.1.1, adh; alcohol dehydrogenase [EC:1.1.1.1] Metabolism, Carbohydrate metabolism, Glycolysis / Gluconeogenesis|Metabolism, Lipid metabolism, Fatty acid degradation K03043 rpoB; DNA-directed RNA polymerase subunit beta [EC:2.7.7.6] Genetic Information Processing, Transcription, RNA polymerase K13797 rpoBC; DNA-directed RNA polymerase subunit beta-beta' [EC:2.7.7.6] Genetic Information Processing, Transcription, RNA polymerase </code></pre> <p>I would appreciate any
help or guidance on how to correctly process this data into a desired TSV file based on the provided hierarchical information. Thank you for your assistance!</p> <p>this is my code</p> <pre><code>data = &quot;&quot;&quot;A09100 Metabolism B B 09101 Carbohydrate metabolism C 00010 Glycolysis / Gluconeogenesis [PATH:ko00010] D K00844 HK; hexokinase [EC:2.7.1.1] D K12407 GCK; glucokinase [EC:2.7.1.2] D K00001 E1.1.1.1, adh; alcohol dehydrogenase [EC:1.1.1.1] B 09103 Lipid metabolism C 00071 Fatty acid degradation [PATH:ko00071] D K00001 E1.1.1.1, adh; alcohol dehydrogenase [EC:1.1.1.1] A09120 Genetic Information Processing B B 09121 Transcription C 03020 RNA polymerase [PATH:ko03020] D K03043 rpoB; DNA-directed RNA polymerase subunit beta [EC:2.7.7.6] D K13797 rpoBC; DNA-directed RNA polymerase subunit beta-beta' [EC:2.7.7.6]&quot;&quot;&quot; lines = data.split('\n') result = [] ko = None description = None hierarchy_names = [] for line in lines: parts = line.strip().split() if parts: if parts[0].startswith('A'): # Reset hierarchy for a new 'A' section hierarchy_names = [&quot; &quot;.join(parts[1:])] elif parts[0] == 'K': ko = parts[0] description = &quot; &quot;.join(parts[1:]) elif parts[0] == 'D' and len(parts) &gt;= 3: ko = parts[1] description = &quot; &quot;.join(parts[2:]) else: hierarchy_names.append(&quot; &quot;.join(parts[1:])) if ko and description: hierarchy_str = &quot;, &quot;.join(hierarchy_names) result.append([ko, description, hierarchy_str]) # Add the header row result.insert(0, [&quot;KO&quot;, &quot;metadata_KEGG_Description&quot;, &quot;metadata_KEGG_Pathways&quot;]) # Specify the filename for the TSV file tsv_filename = &quot;output_data.tsv&quot; with open(tsv_filename, 'w') as tsv_file: for row in result: tsv_file.write(&quot;\t&quot;.join(row) + &quot;\n&quot;) print(f&quot;Data saved to {tsv_filename}&quot;) </code></pre>
<python><for-loop><while-loop>
2023-11-05 12:34:18
1
504
Umar
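For reference, one way to restructure the parsing: track the current A/B/C names as state, strip the trailing `[PATH:...]` tag from C lines, and accumulate hierarchies per KO so repeats join with `|`. A sketch, tested only against the sample data above:

```python
import re

def parse_keg(text):
    a = b = c = None
    desc, paths, order = {}, {}, []
    for raw in text.splitlines():
        line = raw.strip()
        level, rest = line[:1], line[1:].strip()
        if not rest:
            continue                      # blank lines and bare "B" markers
        code, _, name = rest.partition(" ")
        name = name.strip()
        if level == "A":
            a = name
        elif level == "B":
            b = name
        elif level == "C":
            c = re.sub(r"\s*\[PATH:[^\]]*\]$", "", name)
        elif level == "D":
            ko = code
            hierarchy = f"{a}, {b}, {c}"
            if ko not in desc:
                desc[ko], paths[ko] = name, []
                order.append(ko)
            if hierarchy not in paths[ko]:
                paths[ko].append(hierarchy)
    return [[ko, desc[ko], "|".join(paths[ko])] for ko in order]
```

Keyed on the KO number, each entry keeps its first-seen description and collects every distinct A/B/C path, which is exactly the `|`-joined column in the expected output.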
77,425,921
2,804,160
Scientific notation in list form in Python
<p>I have a list <code>A</code>. I want to convert the elements of this list to scientific notation. I present the current and expected output.</p> <pre><code>A = [0.000129,0.000148] A_scientific = [f'{num:.2e}' for num in A] print(A_scientific) </code></pre> <p>The current output is</p> <pre><code>['1.29e-04', '1.48e-04'] </code></pre> <p>The expected output is</p> <pre><code>[1.29e-04, 1.48e-04] </code></pre>
<python><list><scientific-notation>
2023-11-05 12:32:00
1
930
Wiz123
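A Python float carries no display format: `1.29e-04` without quotes is not a distinct value, just the float `0.000129`, whose default repr is `0.000129`. So a list literally containing unquoted `1.29e-04` is impossible; the bracketed, quote-free form only exists as printed text:

```python
A = [0.000129, 0.000148]
A_scientific = [f"{num:.2e}" for num in A]

# build the display string yourself instead of printing the list,
# which would show each element's repr (with quotes)
print("[" + ", ".join(A_scientific) + "]")  # [1.29e-04, 1.48e-04]
```

If the goal is NumPy arrays rather than plain lists, `numpy.set_printoptions(formatter=...)` controls display without touching the stored values.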
77,425,826
9,398,584
create a directory name from a string in python
<p>What's the most idiomatic, clean way to convert a descriptive string to a portable directory name?</p> <p>Something like <code>description.replace(&quot; &quot;, &quot;_&quot;)</code> but that also removes / replaces punctuations and other whitespaces, and maybe other edge cases I haven't thought about.</p> <p>The mapping can be lossy (you don't need to be able to reproduce the original string), it just needs to be a reasonable approximation to the given description, and of course - if there's a standard implementation somewhere that's a big bonus</p> <p>Thanks!</p> <p>Example: <code>&quot;I'm thinking Avocado toast&quot; -&gt; &quot;im_thinking_avocado_toast&quot;</code></p>
<python><filesystems>
2023-11-05 12:05:00
4
431
Just Me
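A common stdlib-only pattern (loosely modeled on Django's `slugify`, which is the usual off-the-shelf choice if Django is available) normalizes to ASCII, drops punctuation, and collapses whitespace to underscores — a sketch:

```python
import re
import unicodedata

def slugify(text):
    # fold accents to ASCII, drop anything that isn't a word char,
    # space, or hyphen, then collapse runs of spaces/hyphens to "_"
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    text = re.sub(r"[^\w\s-]", "", text).strip().lower()
    return re.sub(r"[-\s]+", "_", text)

print(slugify("I'm thinking Avocado toast"))  # im_thinking_avocado_toast
```

This is lossy by design; for untrusted input you may also want to cap the length and reject empty results, since a name of only punctuation slugifies to `""`.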
77,425,791
17,795,398
Kivy `Logger` treats Python exception as warning in `PYTHON` mode
<p>I'm using this basic configuration for logging a Kivy app:</p> <pre><code>import logging logging.basicConfig( filename = &quot;log.log&quot;, filemode = &quot;w&quot;, encoding = &quot;utf-8&quot;, force = True, level = logging.INFO, format = &quot;%(asctime)s %(levelname) -8s %(message)s&quot;, datefmt = &quot;%Y-%m-%d %H:%M:%S&quot; ) from kivy.logger import Logger, LOG_LEVELS Logger.mode = &quot;PYTHON&quot; </code></pre> <p>If I raise an exception manually:</p> <pre><code>raise NameError(&quot;Bad name&quot;) </code></pre> <p>This is the log file:</p> <pre><code>2023-11-05 12:47:36 INFO deps: Successfully imported &quot;kivy_deps.angle&quot; 0.3.3 2023-11-05 12:47:36 INFO Logger: Record log in C:\Users\acgc9\.kivy\logs\kivy_23-11-05_75.txt 2023-11-05 12:47:36 INFO deps: Successfully imported &quot;kivy_deps.glew&quot; 0.3.1 2023-11-05 12:47:36 INFO deps: Successfully imported &quot;kivy_deps.sdl2&quot; 0.6.0 2023-11-05 12:47:36 INFO Kivy: v2.2.1 2023-11-05 12:47:36 INFO Kivy: Installed at &quot;C:\Users\acgc9\AppData\Local\Programs\Python\Python311\Lib\site-packages\kivy\__init__.py&quot; 2023-11-05 12:47:36 INFO Python: v3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)] 2023-11-05 12:47:36 INFO Python: Interpreter at &quot;C:\Users\acgc9\AppData\Local\Programs\Python\Python311\python.exe&quot; 2023-11-05 12:47:36 INFO Logger: Purge log fired. Processing... 2023-11-05 12:47:36 INFO Logger: Purge finished! 2023-11-05 12:47:36 WARNING stderr: Traceback (most recent call last): 2023-11-05 12:47:36 WARNING stderr: File &quot;C:\Users\acgc9\Desktop\test.py&quot;, line 15, in &lt;module&gt; 2023-11-05 12:47:36 WARNING stderr: raise NameError(&quot;Bad name&quot;) 2023-11-05 12:47:36 WARNING stderr: NameError: Bad name </code></pre> <p>So the exception is treated as warning and I don't want that, because the program is aborted.</p>
<python><logging><kivy>
2023-11-05 11:50:57
0
472
Abel Gutiérrez
77,425,781
12,214,867
`[0, 267, 270, 468]` describes a bbox, how do I get it from `[266.67, 0.0, 201.69, 269.58]`?
<p>I got a set of modified annotations for a bunch of COCO images. For example, <code>[0, 267, 270, 468]</code> and <code>[254, 250, 458, 454]</code> are 2 pieces from the set, describing two bboxes for the following image. (named 000000173350.jpg in the original dataset)</p> <p><a href="https://i.sstatic.net/dgxnX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dgxnX.png" alt="enter image description here" /></a></p> <p>Although they are not in the form of <code>[x, y, width, height]</code>, the following ones, which come from the original dataset, are:</p> <p><code>[266.67, 0.0, 201.69, 269.58]</code> and <code>[250.18, 254.3, 203.64, 203.64]</code></p> <p>With the original annotations I can draw bboxes easily:</p> <p><a href="https://i.sstatic.net/fXXWs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fXXWs.png" alt="enter image description here" /></a></p> <p>I can decode some part of the modified annotations, since the original ones could be rephrased as [267, 0, 202, 270] (ceiling) and [250, 254, 203, 203] (floor), and the xs and ys parts are swapped.</p> <p>However, I cannot work out the rest of the modified annotations. How do I get them from the original annotations?</p>
<python><computer-vision><bounding-box>
2023-11-05 11:49:41
2
1,097
JJJohn
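The numbers in the bbox question above are consistent with a `[y_min, x_min, y_max, x_max]` ordering (a common TensorFlow-style box layout) with each coordinate rounded to the nearest integer. A sketch of the conversion from COCO `[x, y, width, height]` — the function name is illustrative:

```python
def xywh_to_yxyx(box):
    """Convert a COCO [x, y, width, height] box to the rounded
    [y_min, x_min, y_max, x_max] form the modified annotations
    in the question appear to use."""
    x, y, w, h = box
    return [round(y), round(x), round(y + h), round(x + w)]

print(xywh_to_yxyx([266.67, 0.0, 201.69, 269.58]))  # [0, 267, 270, 468]
print(xywh_to_yxyx([250.18, 254.3, 203.64, 203.64]))  # [254, 250, 458, 454]
```

Both sample pairs from the question round-trip exactly, which supports (but does not prove) this reading of the modified format.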
77,425,750
14,271,847
RuntimeError: Number of consecutive failures exceeded the limit of 3 during Keras Tuner search
<p>I'm encountering a <code>RuntimeError</code> when using Keras Tuner to search for the best hyperparameters for my image segmentation model. The error indicates that the number of consecutive failures has exceeded the limit of 3. Below is the full error message:</p> <pre><code>Exception has occurred: RuntimeError Number of consecutive failures exceeded the limit of 3. Traceback (most recent call last): File &quot;C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras_tuner\src\engine\base_tuner.py&quot;, line 273, in _try_run_and_update_trial self._run_and_update_trial(trial, *fit_args, **fit_kwargs) File &quot;C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras_tuner\src\engine\base_tuner.py&quot;, line 238, in _run_and_update_trial results = self.run_trial(trial, *fit_args, **fit_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras_tuner\src\engine\tuner.py&quot;, line 314, in run_trial obj_value = self._build_and_fit_model(trial, *args, **copied_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras_tuner\src\engine\tuner.py&quot;, line 233, in _build_and_fit_model results = self.hypermodel.fit(hp, model, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras_tuner\src\engine\hypermodel.py&quot;, line 149, in fit return model.fit(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras\utils\traceback_utils.py&quot;, line 70, in error_handler raise e.with_traceback(filtered_tb) from None File &quot;C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras\engine\training.py&quot;, line 1697, in fit raise ValueError( ValueError: Unexpected result of `train_function` (Empty logs). 
Please use `Model.compile(..., run_eagerly=True)`, or `tf.config.run_functions_eagerly(True)` for more information of where went wrong, or file a issue/bug to `tf.keras`. File &quot;C:\AutomationEdge\Workflows\WF2\Classificacao_Documentos\Source\test.py&quot;, line 105, in &lt;module&gt; tuner.search(train_generator, RuntimeError: Number of consecutive failures exceeded the limit of 3. Traceback (most recent call last): File &quot;C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras_tuner\src\engine\base_tuner.py&quot;, line 273, in _try_run_and_update_trial self._run_and_update_trial(trial, *fit_args, **fit_kwargs) File &quot;C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras_tuner\src\engine\base_tuner.py&quot;, line 238, in _run_and_update_trial results = self.run_trial(trial, *fit_args, **fit_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras_tuner\src\engine\tuner.py&quot;, line 314, in run_trial obj_value = self._build_and_fit_model(trial, *args, **copied_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras_tuner\src\engine\tuner.py&quot;, line 233, in _build_and_fit_model results = self.hypermodel.fit(hp, model, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras_tuner\src\engine\hypermodel.py&quot;, line 149, in fit return model.fit(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras\utils\traceback_utils.py&quot;, line 70, in error_handler raise e.with_traceback(filtered_tb) from None File &quot;C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras\engine\training.py&quot;, line 1697, in fit raise ValueError( ValueError: Unexpected result of `train_function` (Empty logs). 
Please use `Model.compile(..., run_eagerly=True)`, or `tf.config.run_functions_eagerly(True)` for more information of where went wrong, or file a issue/bug to `tf.keras`. </code></pre> <p>This error occurs during the <code>.search()</code> method of Keras Tuner. Here is the relevant portion of my code:</p> <pre><code>tuner.search(train_generator, steps_per_epoch=len(X_train) // 16, validation_data=(X_test, y_test), epochs=50, callbacks=callbacks) </code></pre> <p>My images are resized to 128x128 pixels as required, and I've also addressed an earlier issue with train_test_split resulting in an empty training set. However, when I try to run the search method, I get the runtime error mentioned above.</p> <p>Notably, when I print out the shapes of the images and masks from the train_generator, it seems that my batch size is 1, which is not what I was expecting.</p> <p>Additionally, I've made sure that the model compiles and trains correctly outside of the Keras Tuner context.</p> <p>I'm seeking advice on what could be causing this issue and how to get more detailed error logs to help with troubleshooting. 
Suggestions on how to proceed or debug this error would be very helpful.</p> <p>Full code</p> <pre><code>import os import numpy as np from tensorflow import keras from tensorflow.keras.models import Model from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D, Input from tensorflow.keras.optimizers import Adam from tensorflow.keras.preprocessing.image import ImageDataGenerator from sklearn.model_selection import train_test_split import cv2 from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint from kerastuner import RandomSearch import matplotlib.pyplot as plt import tensorflow as tf tf.config.run_functions_eagerly(True) # Path to the directory with training images and edge masks train_images_dir = r'C:\AutomationEdge\nota_fiscal\Nova pasta\original' border_masks_dir = r'C:\AutomationEdge\nota_fiscal\Nova pasta\borda/' # Function to load images def load_images(directory): images = [] for filename in sorted(os.listdir(directory)): if filename.endswith(&quot;.jpg&quot;): # or .png if your images are in that format img = cv2.imread(os.path.join(directory, filename)) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # convert to RGB img = cv2.resize(img, (128, 128)) # resize images if necessary images.append(img) return np.array(images) # Loading the dataset train_images = load_images(train_images_dir) border_masks = load_images(border_masks_dir) border_masks = border_masks / 255.0 # Normalizing masks to [0, 1] # Splitting the dataset into training and testing X_train, X_test, y_train, y_test = train_test_split(train_images, border_masks, test_size=0.1) # Creating data generators with data augmentation for training data_gen_args = dict(rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.1, zoom_range=0.1, horizontal_flip=True, fill_mode='nearest') image_datagen = ImageDataGenerator(**data_gen_args) mask_datagen = ImageDataGenerator(**data_gen_args) # Provide the same seeds and keyword arguments to the flow of 
generators to ensure matching of images and their masks seed = 1 image_datagen.fit(X_train, augment=True, seed=seed) mask_datagen.fit(y_train, augment=True, seed=seed) image_generator = image_datagen.flow(X_train, batch_size=16, seed=seed) mask_generator = mask_datagen.flow(y_train, batch_size=16, seed=seed) # Combine generators to create a generator that provides images and their corresponding masks train_generator = zip(image_generator, mask_generator) callbacks = [ EarlyStopping(patience=10, verbose=1), ModelCheckpoint('model-best.h5', verbose=1, save_best_only=True, save_weights_only=True) ] # Function to create the model to be used by Keras Tuner def build_model(hp): inputs = Input(shape=(128, 128, 3)) conv1 = Conv2D( hp.Int('conv1_units', min_value=32, max_value=256, step=32), (3, 3), activation='relu', padding='same')(inputs) pool1 = MaxPooling2D((2, 2))(conv1) conv2 = Conv2D( hp.Int('conv2_units', min_value=32, max_value=256, step=32), (3, 3), activation='relu', padding='same')(pool1) up1 = UpSampling2D((2, 2))(conv2) outputs = Conv2D(1, (1, 1), activation='sigmoid')(up1) model = Model(inputs=[inputs], outputs=[outputs]) model.compile( optimizer=Adam( hp.Float('learning_rate', min_value=1e-4, max_value=1e-2, sampling='LOG')), loss='binary_crossentropy', metrics=['accuracy'] ) return model # Instantiating RandomSearch tuner = RandomSearch( build_model, objective='val_accuracy', max_trials=5, # Number of variations to be tested executions_per_trial=1, # Number of models to train for each variation directory='random_search', # Directory to store logs project_name='edge_detection' ) for imgs, masks in train_generator: print(imgs.shape, masks.shape) # Should be something like: (16, 128, 128, 3) (16, 128, 128, 1) break # This is just to test one batch # Running the search for the best hyperparameters tuner.search(train_generator, steps_per_epoch=len(X_train) // 16, validation_data=(X_test, y_test), epochs=50, callbacks=callbacks) </code></pre>
<python><tensorflow><keras><image-segmentation><keras-tuner>
2023-11-05 11:38:40
0
429
sysOut
77,425,682
4,832,499
What is the point of usedforsecurity?
<p>The parameter <code>usedforsecurity</code> was added to every hash function in hashlib in Python 3.9.</p> <blockquote> <p>Changed in version 3.9: All hashlib constructors take a keyword-only argument usedforsecurity with default value True. A false value allows the use of insecure and blocked hashing algorithms in restricted environments. False indicates that the hashing algorithm is not used in a security context, e.g. as a non-cryptographic one-way compression function.</p> </blockquote> <p>However, this provides <em>zero</em> guidance on</p> <ol> <li>When you should use <code>usedforsecurity</code></li> <li>When you should <em>not</em> use <code>usedforsecurity</code></li> <li>What &quot;restricted environments&quot; are</li> </ol> <p>And while I'm not a security researcher, I darn well know md5 is <em>not</em> secure in any sense of the word. Consequently, the name <code>usedforsecurity</code> boggles my mind in more ways than one.</p> <p>What is the point of <code>usedforsecurity</code>?</p>
<python><python-3.x><security><hash>
2023-11-05 11:14:07
2
21,560
Passer By
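For the `usedforsecurity` question above, a short illustration (requires Python ≥ 3.9): on FIPS-restricted OpenSSL builds, constructing blocked digests like MD5 can raise unless the call declares itself non-cryptographic, e.g. when hashing for cache keys or deduplication rather than for security.

```python
import hashlib

# On a FIPS-enabled OpenSSL build, hashlib.md5(b"...") may raise because
# MD5 is blocked for security use.  Declaring the call non-security lets
# it proceed -- e.g. for cache keys, checksums, or content dedup:
digest = hashlib.md5(b"abc", usedforsecurity=False).hexdigest()
print(digest)  # 900150983cd24fb0d6963f7d28e17f72
```

On ordinary (non-FIPS) builds the flag changes nothing observable, which is why it is easy to miss its purpose.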
77,425,573
330,457
How can I randomize existing byte array?
<p>I create an array of bytes.</p> <pre class="lang-py prettyprint-override"><code>array = bytearray(random.randint(1, 8192)) # Now, how can I randomize array's elements? </code></pre> <p>Now how can I randomize each elements of the <code>array</code>?</p> <p>Just like,</p> <pre class="lang-java prettyprint-override"><code>// with Java var array = new byte[ThreadLocalRandom.current().nextInt(1, 8192)]; ThreadLocalRandom.current().nextBytes(array); </code></pre>
<python><random>
2023-11-05 10:34:08
3
22,366
Jin Kwon
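A sketch of Python analogues to the Java `nextBytes()` snippet in the question above — `random.randbytes()` needs Python ≥ 3.9, while `os.urandom()` works everywhere:

```python
import os
import random

n = random.randint(1, 8192)

# random.randbytes() (Python >= 3.9) is the closest analogue of Java's
# ThreadLocalRandom.nextBytes(); os.urandom() draws from OS entropy.
array = bytearray(random.randbytes(n))
secure = bytearray(os.urandom(n))

# Or randomize an *existing* bytearray in place, element by element:
for i in range(len(array)):
    array[i] = random.getrandbits(8)
```

Note that allocating a fresh randomized buffer is usually simpler than filling an existing one, since `bytearray(n)` starts out zero-filled anyway.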
77,425,492
16,611,809
How can I get the number of Apple Silicon performance cores in Python?
<p>I want to determine the number of performance cores in a Python script (it's actually going to be a Python app frozen with PyInstaller, if that makes a difference).</p> <p>There are some ways to get the number of CPUs/cores like <code>os.cpu_count()</code>, <code>multiprocessing.cpu_count()</code> or <code>psutil.cpu_count()</code> (the latter allowing discrimination between physical and virtual cores). However, Apple Silicon CPUs are separated into performance and efficiency cores, which you can get with (e.g.) <code>sysctl hw.perflevel0.logicalcpu_max</code> for performance and <code>sysctl hw.perflevel1.logicalcpu_max</code> for efficiency cores.</p> <p>Is there any way to get this in Python besides running <code>sysctl</code> and get the shell output?</p>
<python><cpu><apple-silicon><sysctl>
2023-11-05 10:05:44
0
627
gernophil
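One shell-free approach to the Apple Silicon question above is calling `sysctlbyname` through `ctypes` — a sketch under the assumption that the documented C signature `sysctlbyname(name, oldp, oldlenp, newp, newlen)` is available via libc; the function name and the `None` fallback for non-macOS platforms are illustrative choices:

```python
import ctypes
import ctypes.util
import sys

def perf_core_count():
    """Number of performance cores on Apple Silicon, or None when the
    platform (or the sysctl key) doesn't provide it."""
    if sys.platform != "darwin":
        return None
    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    val = ctypes.c_int(0)
    size = ctypes.c_size_t(ctypes.sizeof(val))
    # int sysctlbyname(const char *name, void *oldp, size_t *oldlenp,
    #                  const void *newp, size_t newlen);
    err = libc.sysctlbyname(b"hw.perflevel0.logicalcpu_max",
                            ctypes.byref(val), ctypes.byref(size),
                            None, ctypes.c_size_t(0))
    return val.value if err == 0 else None

print(perf_core_count())
```

Swapping in `hw.perflevel1.logicalcpu_max` gives the efficiency-core count; on Intel Macs the key is absent and the sketch returns `None`.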
77,425,450
21,401,783
What does datetime.now().strftime("%j") do?
<p>To get the current year using the datetime module, you can use <code>datetime.now().strftime(&quot;%Y&quot;)</code>, which returns <code>2023</code> as expected.</p> <p>However, I found that another argument, <code>%j</code> can be passed instead of <code>%Y</code>. Doing this returns <code>309</code>.</p> <p>What does <code>%j</code> do?</p> <pre><code>from datetime import datetime print(datetime.now().strftime(&quot;%Y&quot;)) #Returns current year (2023) print(datetime.now().strftime(&quot;%j&quot;)) #Returns 309 </code></pre>
<python><datetime>
2023-11-05 09:49:35
1
1,364
Dinux
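Answering the `%j` question above: it is the zero-padded day of the year (001–366), so on 2023-11-05 it yields `309` — the 309th day of 2023. A quick check:

```python
from datetime import datetime

# %j is the zero-padded day of the year (001-366).
d = datetime(2023, 11, 5)
print(d.strftime("%j"))  # 309

# Sanity check: months Jan..Oct plus 5 days in November.
assert int(d.strftime("%j")) == 31 + 28 + 31 + 30 + 31 + 30 + 31 + 31 + 30 + 31 + 5
```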
77,425,362
2,642,351
What is the default timed out time for Popen.communicate?
<p>In Python I am using subprocess to call a long running command and <code>proc.communicate()</code> so as to be able to log/print the output in realtime as shown <a href="https://docs.python.org/3/library/subprocess.html#subprocess.Popen.communicate" rel="nofollow noreferrer">here</a>.</p> <p>However, I am getting a <code>fatal: Application timed out after 00:10:00.3431942</code> error message after about 10 minutes through the application run. Hence, I am here to check if <code>Popen.communicate</code> has a timeout and if yes, what is the default timeout that gets applied if it is not set manually?</p>
<python><subprocess><popen>
2023-11-05 09:19:16
0
5,918
Temp O'rary
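On the `Popen.communicate` question above: the default is `timeout=None`, i.e. no timeout at all — `communicate()` blocks until the process exits. A 10-minute `fatal: Application timed out` therefore comes from somewhere else (the invoked application, a CI runner, etc.), not from `subprocess`. A sketch of setting a timeout explicitly:

```python
import subprocess
import sys

proc = subprocess.Popen(
    [sys.executable, "-c", "print('hello')"],
    stdout=subprocess.PIPE,
)
try:
    # Default is timeout=None (block forever).  Passing a value raises
    # TimeoutExpired, which you can catch and handle yourself.
    out, _ = proc.communicate(timeout=30)
except subprocess.TimeoutExpired:
    proc.kill()
    out, _ = proc.communicate()
print(out.strip())  # b'hello'
```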
77,425,335
17,795,398
How to set python gettext translations to source language
<p>I have this piece of code for translations:</p> <pre><code>translations = gettext.translation( locale_language, localedir=&quot;gui/locale&quot;, languages=[locale_language] ) translations.install() </code></pre> <p>Where <code>locale_language</code> can be <code>&quot;en&quot;</code> (source language (<code>msgid</code>s)) or <code>&quot;es&quot;</code> (translated language (<code>msgstr</code>s))</p> <p>I need to have <code>en.mo</code> and <code>es.mo</code> to switch languages correctly, but since the source language is English, <code>en.mo</code> is not filled. I have to do it this way to run the previous code when the language changes.</p> <p>I was wondering if there is a better way to do this, i.e., without an empty <code>en.mo</code> (also the above code would require an <code>if</code> checking whether <code>locale_language == &quot;en&quot;</code>, but that is not the problem).</p>
<python><gettext>
2023-11-05 09:08:53
1
472
Abel Gutiérrez
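A common pattern for the gettext question above is `fallback=True`: when no catalog exists for the requested language, `gettext.translation()` then returns a `NullTranslations` object whose `gettext()` just echoes the msgid — the English source strings — so neither an empty `en.mo` nor an `if locale_language == "en"` branch is needed. The domain name `"messages"` below is a hypothetical choice:

```python
import gettext

locale_language = "en"  # source language; no en.mo exists on disk

# fallback=True returns NullTranslations when no .mo file is found, and
# NullTranslations.gettext() returns the msgid unchanged.
translations = gettext.translation(
    "messages",               # hypothetical domain name
    localedir="gui/locale",
    languages=[locale_language],
    fallback=True,
)
translations.install()
print(_("Hello"))  # Hello
```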
77,425,239
5,821,028
cuml K-means clustering slower when multiple instances are running
<p>I am having a performance problem when running multiple cuml.cluster.KMeans.fit_predict() concurrently on a single machine. There is enough memory on both the GPU and the host. When run in isolation, the function (max_silhouette_score) call takes approximately 1 second. However, when I run the 2x functions concurrently, each one takes around 5 seconds — resulting in an overall 5x slowdown.</p> <p>Here's the context of my usage:</p> <p>Environment: GPU RTX-3090 CUDA version 11.8 cuML version 23.08.00</p> <p>Dataset: The input is a pandas DataFrame with a shape of 3000x20, consisting entirely of numeric and normalized columns.</p> <p>Function: I am running my max_silhouette_score() function, which internally calls fit_predict() on the dataset for 18 times</p> <p>Code Snippet:</p> <pre><code>from cuml.cluster import KMeans from cuml.metrics.cluster import silhouette_samples,silhouette_score def max_silhouette_score(df): sil_scores = [] test_range = range(2,20) for k in test_range: kmeans = KMeans(n_clusters=k,n_init=10) predictions = kmeans.fit_predict(df) sil = silhouette_samples(df.values, predictions) sil_scores.append(float(np.mean(sil))) max_sil_idx = np.argmax(sil_scores) max_sil = sil_scores[max_sil_idx] max_sil_k = list(test_range)[max_sil_idx] return max_sil,max_sil_k </code></pre> <p>I've confirmed that the machine's resources are not maxed out during the execution. Does anyone have insights into why the concurrent execution is so much slower, or suggestions on how to keep the same performance while running multiple fit_predict() calls on the same machine?</p>
<python><k-means><cuml>
2023-11-05 08:34:02
0
1,125
Jihyun
77,425,149
14,459,677
Summing up in pandas based on Column conditions
<p>I have a dataframe (df1) which looks like this:</p> <pre><code>Date Name Category # 01/01/01 Vegetables A 15 01/01/01 Fruits A 10 01/01/01 Meat B 35 01/02/01 Vegetables A 7 01/03/01 Vegetables A 9 01/03/01 No Data No Data No Data </code></pre> <p>I want to create another dataframe (df2) that looks like this:</p> <pre><code>Date Classification # 01/01/01 A 25 01/01/01 B 35 01/02/01 A 7 01/03/01 A 9 01/03/01 No Data No Data </code></pre> <p>and another dataframe that looks like this:</p> <pre><code>Date Classification # 01/01/01 A 25 01/01/01 B 35 01/02/01 A 7 01/03/01 A 9 </code></pre> <p>which basically means classifying them by dates and category (df1) and the next dataframe (df2), classify by dates and category and excluding the &quot;No Data&quot;</p> <p>I made this:</p> <pre><code>length = len(df2) for i in range length: if df1 ['Category'] = &quot;A&quot;: df1['Date'].map(df.groupby('Date')['Category].sum()) </code></pre> <p>which doesn't give me anything as I believe this is still incomplete. Also, I have an error: <code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</code></p>
<python><pandas><group-by><sum>
2023-11-05 07:57:20
4
433
kiwi_kimchi
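A hedged sketch for the pandas question above — no loop needed. Coercing `#` to numeric turns the `No Data` placeholders into `NaN`; dropping those rows first and then grouping by date and category produces the second requested frame (keeping them would give the first):

```python
import pandas as pd

df1 = pd.DataFrame({
    "Date": ["01/01/01", "01/01/01", "01/01/01", "01/02/01", "01/03/01", "01/03/01"],
    "Name": ["Vegetables", "Fruits", "Meat", "Vegetables", "Vegetables", "No Data"],
    "Category": ["A", "A", "B", "A", "A", "No Data"],
    "#": [15, 10, 35, 7, 9, "No Data"],
})

# Coerce '#' to numbers ('No Data' -> NaN), drop placeholder rows,
# then sum per Date/Category.
clean = (df1.assign(**{"#": pd.to_numeric(df1["#"], errors="coerce")})
            .dropna(subset=["#"]))
df3 = (clean.groupby(["Date", "Category"], as_index=False, sort=False)["#"]
            .sum()
            .rename(columns={"Category": "Classification"}))
print(df3)
```

The earlier `ValueError: The truth value of a Series is ambiguous` comes from `if df1['Category'] = "A"` — besides the assignment typo, an `if` on a whole Series is ambiguous; vectorized grouping avoids the branch entirely.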
77,425,084
8,950,733
Why AutoKey soft faster than bare Python script?
<p>I just have moved to <strong>Ubuntu 22.04.3 LTS</strong> from <strong>Windows</strong>, where I was active user of the <strong>AutoHotKey</strong>.</p> <p>So, I have created <strong>.py</strong> script file with the following code:</p> <p><a href="https://i.sstatic.net/OhLqL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OhLqL.png" alt="enter image description here" /></a></p> <p>This script is to move between Google Chrome browser tabs. Just for script test reasons.</p> <p>So, when I tap keys - script is working, <strong>but I feel noticeable delay. About 0.5 second. It is not suitable for me.</strong></p> <p>Then, I have created same setup for AutoKey:</p> <p><a href="https://i.sstatic.net/I9txA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I9txA.png" alt="enter image description here" /></a></p> <p>It works perfect. Superfast, instant, like AutoHotKey in Windows.</p> <p>But actually I don't want to have this type &quot;gasket&quot; with AutoKey. I am good in Python, and it will be better for my to use simple Python without any additional software.</p> <p>So, my question: <strong>Why AutoKey much faster than bare Python script? Are there any ways to boost my bare Python script?</strong></p>
<python><ubuntu><keyboard-shortcuts><autokey>
2023-11-05 07:28:55
0
516
Mikhail
77,425,036
13,566,519
How can I determine the number of Python packages available for a specific package name in Python?
<p>Python packages vary in terms of their compatibility with different Python versions and platforms. For example, for &quot;cffi&quot; version 1.16.0 on PyPI, you can find approximately 51 distinct packages tailored for various Python versions and platforms such as Windows, macOS, and Linux. On the other hand, packages like 'coloredlogs' version 15.0.1 offer a single package that works across all platforms and different Python versions.</p> <p>Is there a way to determine the number of variations available for any Python package with a specific version?</p>
<python><python-3.x><pip><pypi><python-wheel>
2023-11-05 07:08:48
1
317
GreenMan
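One answer to the question above: PyPI's official JSON API exposes every distribution file for a release under the `urls` key, so counting those entries gives the number of build variants. A sketch — the live request is shown commented out so the counting logic itself stays network-free:

```python
def count_release_files(release_json):
    """Count the distribution files (wheels + sdist) listed for one
    release in PyPI's JSON API -- one entry per build variant."""
    return len(release_json.get("urls", []))

# Live usage (network required; this is the official PyPI JSON endpoint):
#   import json, urllib.request
#   with urllib.request.urlopen("https://pypi.org/pypi/cffi/1.16.0/json") as r:
#       data = json.load(r)
#   print(count_release_files(data))   # ~51 for cffi 1.16.0
fake = {"urls": [{"filename": "cffi-1.16.0-cp311-win_amd64.whl"},
                 {"filename": "cffi-1.16.0.tar.gz"}]}
print(count_release_files(fake))  # 2
```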
77,424,814
22,860,226
AWS SQS FIFO in Python - deleted messages are available even after correct VisbilityTimeout
<p>I created an FIFO queue. I send 10 msgs, receive them and delete them. But they are still available in queue and received again. I receive 10 msgs initially, then 5 , then 2 and then 0. &quot;delete_message&quot; response was also 200.</p> <p>Code :</p> <pre><code>import json import os import uuid import boto3 from dotenv import load_dotenv load_dotenv() sqs = boto3.client(&quot;sqs&quot;, os.getenv(&quot;AWS_REGION&quot;)) receive_queue = os.getenv(&quot;RECEIVE_QUEUE_URL&quot;) def receive(attempt_id, max_num_messages): response = sqs.receive_message( QueueUrl=receive_queue, ReceiveRequestAttemptId=attempt_id, MaxNumberOfMessages=max_num_messages, VisibilityTimeout=100, WaitTimeSeconds=20, ) if &quot;Messages&quot; not in response: return None, True messages = [message[&quot;Body&quot;] for message in response[&quot;Messages&quot;]] receipt_handles = [message[&quot;ReceiptHandle&quot;] for message in response[&quot;Messages&quot;]] print(f&quot;{len(messages)} msgs received&quot;) for receipt_handle in receipt_handles: sqs.delete_message(QueueUrl=receive_queue, ReceiptHandle=receipt_handle) receipt_handles.remove(receipt_handle) print(f&quot;{len(receipt_handles)} msgs deleted&quot;) return messages, False def send_to_queue(queue_url, data, message_group_id): response = sqs.send_message( QueueUrl=queue_url, MessageBody=json.dumps(data), MessageGroupId=message_group_id, ) return response # 10 msgs created for i in range(10): sent_response = send_to_queue( receive_queue, {&quot;key&quot;: i}, message_group_id=os.getenv(&quot;MESSAGE_GROUP_ID&quot;) ) # 10 msgs received receive(str(uuid.uuid4()), 10) # Now still 5 msgs are &quot;inflight&quot; and received again </code></pre> <p>Queue configuration</p> <p><a href="https://i.sstatic.net/vcGZj.png" rel="nofollow noreferrer">queue</a></p> <p>I checked prev SO answers which asked me to change non-zero Timeout which I have set.</p>
<python><amazon-web-services><amazon-sqs>
2023-11-05 05:27:37
1
411
JTX
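The "10, then 5, then 2, then 0" pattern in the SQS question above is consistent with a plain-Python bug rather than an SQS visibility issue: `receipt_handles.remove(receipt_handle)` inside the `for` loop mutates the list being iterated, so the loop skips every other element and only half the handles get a `delete_message` call per pass. A minimal demonstration and fix:

```python
handles = [f"handle-{i}" for i in range(10)]

# Buggy pattern from the question: removing from the list while
# iterating it shifts the remaining items left, so the loop skips
# every other element -- only half the messages get deleted.
buggy = handles.copy()
for h in buggy:
    buggy.remove(h)
print(len(buggy))  # 5

# Fix: iterate over a snapshot (or simply don't .remove() at all and
# let delete_message be the loop's only side effect).
fixed = handles.copy()
for h in list(fixed):
    fixed.remove(h)
print(len(fixed))  # 0
```

The undeleted half becomes visible again after the `VisibilityTimeout`, which matches the observed re-receives.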
77,424,793
18,756,733
Multiprocessing Function Runs Forever
<p>I have a list of URLs : daily_article_urls. For each URL I want to run the following webscraping code:</p> <pre><code>daily_data=[] def scrape_article(i,article_url): try: article_html=requests.get(article_url).content article_soup=BeautifulSoup(article_html, 'html.parser') article_id=article_soup.select('span[class*=&quot;sc-6e54cb25-12 GRQvX&quot;]') article_title=article_soup.select_one('div[class=&quot;screen-section&quot;] div[class*=&quot;sc-&quot;] h2[class*=&quot;jvaJJV&quot;]') date_published=article_soup.select('span[class*=&quot;sc-6e54cb25-12 GRQvX&quot;]') appatment_price=article_soup.select_one('aside[class=&quot;detail-page-aside&quot;] h1[id=&quot;price&quot;]') appatment_address=article_soup.select_one('div[class=&quot;screen-section&quot;] a[id=&quot;address&quot;]') apparment_details_1=set([i.get_text(separator='|') if i else None for i in article_soup.select('div[id=&quot;details_desc&quot;] div[class*=&quot;sc-5fa917ee-0&quot;] div')]) apparment_details_2=set([i.get_text(separator='|') if i else None for i in article_soup.select('div[id=&quot;details_desc&quot;] div[class*=&quot;sc-1b705347-0&quot;] div')]) article_id=article_id[-1].text if article_id else None article_title=article_title.text if article_title else None date_published=date_published[-2].text if date_published else None appatment_price=appatment_price.text if appatment_price else None appatment_address=appatment_address.text if appatment_address else None aparment_data_dict={'Article ID':article_id,'Article URL':article_url,'Title':article_title,'Date Published':date_published,'Price':appatment_price,'Address':appatment_address,'Details_1':apparment_details_1,'Details_2':apparment_details_2} daily_data.append(aparment_data_dict) #return aparment_data_dict except: pass #return None </code></pre> <p>webscraping code alone works perfectly, but only few CPU cores are utilized and it takes a lot of time for my 12 CPU core i7 to complete the task. 
I want to utilize all 12 cores, so I am running the following code:</p> <pre><code>import multiprocessing if __name__ == &quot;__main__&quot;: daily_article_urls = daily_article_urls# Define your list of article URLs with multiprocessing.Pool(processes=12) as pool: pool.map(scrape_article, enumerate(daily_article_urls)) pool.close() pool.join() </code></pre> <p>The code runs forever. How can I modify the code, so I can utilize all CPU cores?</p> <p>The same multiprocessing code does not work with this simplier calcualtion function as well:</p> <pre><code>import multiprocessing import math def calculate(number): try: calculated_number = math.sqrt((number**2 + 5*number**3 + 1022)) return calculated_number except Exception as e: return None if __name__ == &quot;__main__&quot;: numbers = list(range(10)) # Define your list of article URLs with multiprocessing.Pool(processes=12) as pool: results = pool.map(calculate, numbers) for result in results: print(result) </code></pre>
<python><beautifulsoup><multiprocessing>
2023-11-05 05:16:05
1
426
beridzeg45
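Two likely problems in the multiprocessing question above: appends to the module-level `daily_data` happen in worker processes and never reach the parent (each process has its own copy), and `pool.map` passes each `enumerate` item as a *single* `(i, url)` tuple to a function expecting two arguments (`starmap` would be needed). Since `requests` calls are I/O-bound anyway, threads usually give the speedup with none of the pickling pitfalls — a hedged sketch with a stubbed-out worker (the real scraping body from the question would go where the comment is):

```python
from concurrent.futures import ThreadPoolExecutor

def scrape_article(pair):
    i, url = pair
    # ... the requests/BeautifulSoup work from the question goes here ...
    return {"index": i, "url": url}   # return a result, don't append to a global

urls = ["https://example.com/a", "https://example.com/b"]  # hypothetical

# I/O-bound work scales with threads; results come back as return values.
with ThreadPoolExecutor(max_workers=12) as pool:
    daily_data = list(pool.map(scrape_article, enumerate(urls)))
print(len(daily_data))  # 2
```

If CPU-bound `multiprocessing` is genuinely needed, the same return-values-not-globals shape applies, and the worker must be importable (defined in a module, not a notebook cell) for spawn-based platforms like Windows.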
77,424,774
10,200,497
Finding the last row that meets conditions of a mask
<p>This is my dataframe:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'a': [20, 21, 333, 444], 'b': [20, 20, 20, 20]}) </code></pre> <p>I want to create column <code>c</code> by using this mask:</p> <pre class="lang-py prettyprint-override"><code>mask = (df.a &gt;= df.b) </code></pre> <p>And I want to get the last row that meets this condition and create column <code>c</code>. The output that I want looks like this:</p> <pre class="lang-none prettyprint-override"><code> a b c 0 20 20 NaN 1 21 20 NaN 2 333 20 NaN 3 444 20 x </code></pre> <p>I tried the code below but it didn't work:</p> <pre class="lang-py prettyprint-override"><code>df.loc[mask.cumsum().gt(1) &amp; mask, 'c'] = 'x' </code></pre>
<python><pandas><indexing><conditional-statements><duplicates>
2023-11-05 05:11:20
4
2,679
AmirX
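For the last-matching-row question above, indexing the mask by itself yields only the rows where it holds, and `.index[-1]` picks the last of them — a one-line sketch:

```python
import pandas as pd

df = pd.DataFrame({"a": [20, 21, 333, 444], "b": [20, 20, 20, 20]})
mask = df.a >= df.b

# Label only the LAST row where the mask holds; guard against an
# all-False mask before indexing.
if mask.any():
    df.loc[mask[mask].index[-1], "c"] = "x"
print(df)  # only row 3 gets 'x'; the rest of column c stays NaN
```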
77,424,767
992,421
RDD creating adjacency list
<p>I am trying to create adj list with input</p> <pre><code>RDD=[&quot;2\t{'3': 1}&quot;, &quot;3\t{'2': 2}&quot;, &quot;4\t{'1': 1, '2': 1}&quot;, &quot;5\t{'4': 3, '2': 1, '6': 1}&quot;, &quot;6\t{'2': 1, '5': 2}&quot;, &quot;7\t{'2': 1, '5': 1}&quot;, &quot;8\t{'2': 1, '5': 1}&quot;, &quot;9\t{'2': 1, '5': 1}&quot;, &quot;10\t{'5': 1}&quot;, &quot;11\t{'5': 2}&quot;] </code></pre> <p>expectation is adj list should creates a new record for any dangling nodes and sets it edges (neighbors) to be a empty initializes a rank of 1/N for each node I am writing a spark job to read in the raw data but i need to initialize an adjacency list representation with a record for each node (including dangling nodes). Returns: RDD - a pair RDD of (node_id , (score, edges)) I am not fully there yet but I am trying to do the following.</p> <pre><code>adj_list = () ad = sc.broadcast(adj_list) # write any helper functions here def parse(line): node, edges = line.split('\t') print(f'node: {node} edges: {edges}') for key, value in ast.literal_eval(edges): yield (node, key, value) RDD = dataRDD.flatMap(parse) \ .reduceByKey(lambda x, y: (x[1]+y[1], x[0],y[0])) </code></pre> <ol> <li>I am getting error ValueError: not enough values to unpack (expected 2, got 1) when I do collect. I looked around and dont have a clue on what’s happening.</li> <li>am I going in the right direction? Idea is to add the scores then at the end divide it by total node count</li> <li>by the way i need to return rdd with the expected return data Appreciate guiding me in the logic. Thanks</li> </ol>
<python><list><apache-spark><pyspark>
2023-11-05 05:07:09
1
850
Ram
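On the RDD question above, the `ValueError: not enough values to unpack (expected 2, got 1)` comes from iterating the dict directly: `for key, value in ast.literal_eval(edges)` yields bare keys (single strings), not pairs — `.items()` is required. A plain-Python sketch of the corrected parser, emitting the `(node, (neighbor, weight))` shape the Spark plumbing can then reduce (the pair layout is one reasonable choice, not the only one):

```python
import ast

def parse(line):
    node, edges = line.split("\t")
    # .items() yields (key, value) pairs; iterating the dict itself
    # yields bare keys, which caused the unpack error.
    for key, value in ast.literal_eval(edges).items():
        yield (node, (key, value))

print(list(parse("5\t{'4': 3, '2': 1, '6': 1}")))
# [('5', ('4', 3)), ('5', ('2', 1)), ('5', ('6', 1))]
```

With pair-shaped records, the subsequent `reduceByKey` can merge neighbor lists per node, and dangling nodes can be added afterwards with a union against the set of all neighbor ids.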
77,424,645
5,246,226
Building PyTorch as a submodule of larger project with Bazel results in invalid file paths
<p>I have a project I need to build using Bazel, and I need the PyTorch libraries for C++. After a lot of trial and error, I've resorted to including PyTorch's source as a submodule. This is because I need to edit the BUILD files and need to specify the dependencies again, since Bazel doesn't read external dependency WORKSPACE files.</p> <p>The PyTorch build has <a href="https://github.com/pytorch/pytorch/blob/64f326097be8ac66ff057365f3bed2d64c697563/build.bzl#L126-L131" rel="nofollow noreferrer">this particular python command</a> to generate code. This script has <a href="https://github.com/pytorch/pytorch/blob/674c104d122797644805b07151850e1dffb10fa2/tools/setup_helpers/generate_code.py#L204-L205" rel="nofollow noreferrer">two hard-coded paths</a> inside, checked with <code>os.path.isfile</code>, that I believe are only valid if one executes the PyTorch build from the top of its repo. However, since I'm using PyTorch as a submodule inside my project and running Bazel's build commands from the top of <em>my</em> repo, I believe this is causing these paths to not exist. Since this is hard-coded in the source, it feels like I would need to modify the PyTorch submodule itself, but I'd rather not do that. Is there another way?</p>
<python><build><bazel>
2023-11-05 03:52:56
0
759
Victor M
77,424,632
130,208
How to mock.patch a class method which is a generator/has yield
<p>How to patch a generator class method. For e.g. in class below, how would we patch get_changed_diff_patch method?</p> <pre><code>class PassiveJsonMixin(JsonMixin): &quot;&quot;&quot; passive items that do not have id/key &quot;&quot;&quot; def __init__(self, *args, **kwargs): JsonMixin.__init__(self, *args, **kwargs) # self.build_json() pass def get_changed_diff_patch(self, parent_hidden=False): print (&quot;From PassiveJsonMixin: get_changed_diff_patch&quot;) return yield </code></pre> <p>The usual approach</p> <pre><code>patch.object(mymod.PassiveJsonMixin, 'get_changed_diff_patch',wrapper(mymod.PassiveJsonMixin.get_changed_diff_patch) ) </code></pre> <p>doesn't seem to work.</p>
<python><generator><patch><yield><python-unittest.mock>
2023-11-05 03:40:59
1
2,065
Kabira K
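Two ways to patch the generator method from the question above, sketched against a trimmed-down class: replace it with another generator function, or use a mock whose *return value* is an iterator (a generator method's call returns an iterable, so that is what the replacement must produce):

```python
from unittest import mock

class PassiveJsonMixin:
    def get_changed_diff_patch(self, parent_hidden=False):
        print("From PassiveJsonMixin: get_changed_diff_patch")
        return
        yield

# Option 1: swap in another generator function; as a class attribute it
# binds like a normal method.
def fake_patch(self, parent_hidden=False):
    yield "patched"

with mock.patch.object(PassiveJsonMixin, "get_changed_diff_patch", fake_patch):
    assert list(PassiveJsonMixin().get_changed_diff_patch()) == ["patched"]

# Option 2: a MagicMock whose call returns an iterator explicitly.
with mock.patch.object(PassiveJsonMixin, "get_changed_diff_patch",
                       return_value=iter(["patched"])):
    assert list(PassiveJsonMixin().get_changed_diff_patch()) == ["patched"]
```

Note that in option 2 the single `iter([...])` is exhausted after one call; use `side_effect` with a fresh generator per call if the method is invoked repeatedly.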
77,424,417
8,593,689
sqlite3 error "ValueError: parameters are of unsupported type"
<p>I have an sqlite3 table that I created like this:</p> <pre class="lang-sql prettyprint-override"><code>CREATE TABLE packages ( id INTEGER PRIMARY KEY, name TEXT UNIQUE NOT NULL, description TEXT, price DECIMAL(5,2), enabled INTEGER DEFAULT 1 ); </code></pre> <p>I am trying to query the table as below:</p> <pre class="lang-py prettyprint-override"><code>import sqlite3 from flask import current_app, g def connection(): if 'db' not in g or not isinstance(g.db, sqlite3.Connection): g.db = sqlite3.connect(current_app.config.get('DBPATH'), detect_types=sqlite3.PARSE_DECLTYPES) g.db.row_factory = sqlite3.Row return g.db packageid = 6 cur = connection().cursor() cur.execute(&quot;&quot;&quot; SELECT id FROM packages WHERE id = :packageid &quot;&quot;&quot;, {'packageid', packageid}) rec = cur.fetchone() # Omitted logic to handle rec being None or not None </code></pre> <p>When I run the code above, it fails with <code>ValueError: parameters are of unsupported type</code> for line 16.</p> <p>Why do I get this error? <code>id</code> is an <code>INTEGER</code> in sqlite3, and <code>packageid</code> is an <code>int</code> in python. I don't see why these two types are incompatible.</p>
<python><flask><sqlite3-python>
2023-11-05 01:28:07
1
5,032
Shane Bishop
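The sqlite3 error in the question above is a one-character typo: `{'packageid', packageid}` is a **set** literal (comma instead of colon), and `execute()` only accepts a sequence or a mapping as parameters — hence `ValueError: parameters are of unsupported type`. A minimal in-memory reproduction of the fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE packages (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO packages (id, name) VALUES (6, 'basic')")

packageid = 6
cur = conn.cursor()

# {'packageid', packageid}  -- a set  -- raises the ValueError.
# {'packageid': packageid} -- a dict -- is what named placeholders need.
cur.execute("SELECT id FROM packages WHERE id = :packageid",
            {"packageid": packageid})
print(cur.fetchone()[0])  # 6
```

The INTEGER/int types were never the problem; the parameters container itself was the unsupported type.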
77,424,238
359,219
Fixing my Python package and module structure so it works with pytest and VS Code
<p>I have this directory structure:</p> <pre class="lang-none prettyprint-override"><code>memsim/ __init__.py utils.py tests/ __init__.py utils_test.py </code></pre> <p>In utils.py:</p> <pre><code>def flatten_free_segments(free_segments): </code></pre> <p>In utils_test.py:</p> <pre><code>from utils import flatten_free_segments </code></pre> <p>Now:</p> <pre class="lang-none prettyprint-override"><code>$ python -m unittest discover --&gt; finds no tests $ python -m unittest tests/utils_test.py --&gt; works </code></pre> <p>But in VS Code with Pylance:</p> <p><a href="https://i.sstatic.net/eC9Hr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eC9Hr.png" alt="enter image description here" /></a></p> <p>And in VS Code debug:</p> <p><a href="https://i.sstatic.net/qJXrr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qJXrr.png" alt="enter image description here" /></a></p> <h3>Question</h3> <ol> <li>Is my basic directory structure and import statement <em>valid</em> and <em>the right way</em>?</li> <li>How do I get pylance and the VS Code test runner to accept that?</li> </ol>
<python>
2023-11-04 23:50:22
0
11,040
pitosalas
77,424,233
15,456,681
How to wrap an external function in numba such that the resulting function is cacheable?
<p>I would like to wrap an external function with numba, but require that the resulting function is able to be cached with <code>njit(cache=True)</code> like I can do with the numba implementation (I'm just using <code>dgesv</code> as an example):</p> <pre class="lang-py prettyprint-override"><code>import numba as nb import numpy as np @nb.njit(cache=True) def dgesv_numba(A, b): return np.linalg.solve(A, b) </code></pre> <p>I've tried with <a href="https://docs.python.org/3/library/ctypes.html" rel="nofollow noreferrer">ctypes</a>:</p> <pre class="lang-py prettyprint-override"><code>import ctypes as ct from ctypes.util import find_library from numba import types from numba.core import cgutils from numba.extending import intrinsic @intrinsic def ptr_from_val(typingctx, data): # from https://stackoverflow.com/questions/51541302/how-to-wrap-a-cffi-function-in-numba-taking-pointers def impl(context, builder, signature, args): ptr = cgutils.alloca_once_value(builder, args[0]) return ptr sig = types.CPointer(data)(data) return sig, impl ptr_int = ct.POINTER(ct.c_int) ptr_double = ct.POINTER(ct.c_double) argtypes = [ ptr_int, # n ptr_int, # nrhs ptr_double, # a ptr_int, # lda ptr_int, # ipiv ptr_double, # b ptr_int, # ldb ptr_int, # info ] lapack_ctypes = ct.CDLL(find_library(&quot;lapack&quot;)) _dgesv_ctypes = lapack_ctypes.dgesv_ _dgesv_ctypes.argtypes = argtypes _dgesv_ctypes.restype = None # Or get it from scipy # addr = nb.extending.get_cython_function_address( # &quot;scipy.linalg.cython_lapack&quot;, &quot;dgesv&quot; # ) # functype = ct.CFUNCTYPE(None, *argtypes) # _dgesv_ctypes = functype(addr) @nb.njit(cache=True) def args(A, b): if b.ndim == 1: _b = b[:, None] # .reshape(-1, 1) # change to reshape numba &lt; 0.57 nrhs = np.int32(1) else: _b = b.T.copy() # Dunno? is there a better way to do this? 
nrhs = np.int32(b.shape[1]) n = np.int32(A.shape[0]) info = np.int32(0) ipiv = np.zeros((n,), dtype=np.int32) return _b, n, nrhs, ipiv, info @nb.njit(cache=True) def dgesv_ctypes(A, b): b, n, nrhs, ipiv, info = args(A, b) _dgesv_ctypes( ptr_from_val(n), ptr_from_val(nrhs), A.T.copy().ctypes, # Dunno? is there a better way to do this? ptr_from_val(n), ipiv.ctypes, b.ctypes, ptr_from_val(n), ptr_from_val(info), ) if info: raise Exception(&quot;something went wrong&quot;) return b.T </code></pre> <p>and with <a href="https://cffi.readthedocs.io/en/stable/" rel="nofollow noreferrer">cffi</a>:</p> <pre class="lang-py prettyprint-override"><code>import cffi ffi = cffi.FFI() ffi.cdef( &quot;&quot;&quot; void dgesv_(int *n, int *nrhs, double *a, int *lda, int *ipiv, double *b, int *ldb, int *info); &quot;&quot;&quot; ) lapack_cffi = ffi.dlopen(find_library(&quot;lapack&quot;)) _dgesv_cffi = lapack_cffi.dgesv_ @nb.njit(cache=True) def dgesv_cffi(A, b): b, n, nrhs, ipiv, info = args(A, b) _dgesv_cffi( ptr_from_val(n), ptr_from_val(nrhs), ffi.from_buffer(A.T.copy()), ptr_from_val(n), ffi.from_buffer(ipiv), ffi.from_buffer(b), ptr_from_val(n), ptr_from_val(info), ) if info: raise Exception(&quot;something went wrong&quot;) return b.T </code></pre> <p>but in both cases I get a warning that the function cannot be cached as I have used ctypes pointers:</p> <pre><code>/var/folders/v7/vq2l7f812yd450mn3wwmrhtc0000gn/T/ipykernel_4390/2568069903.py:79: NumbaWarning: Cannot cache compiled function &quot;dgesv_ctypes&quot; as it uses dynamic globals (such as ctypes pointers and large global arrays) @nb.njit(cache=True) /var/folders/v7/vq2l7f812yd450mn3wwmrhtc0000gn/T/ipykernel_4390/2568069903.py:97: NumbaWarning: Cannot cache compiled function &quot;dgesv_cffi&quot; as it uses dynamic globals (such as ctypes pointers and large global arrays) @nb.njit(cache=True) </code></pre> <p>I have managed to do it with <a 
href="https://numba.pydata.org/numba-doc/latest/reference/types.html#wrapper-address-protocol-wap" rel="nofollow noreferrer">WAP</a>:</p> <pre class="lang-py prettyprint-override"><code>class Dgesv(nb.types.WrapperAddressProtocol): def __wrapper_address__(self): return ct.cast(lapack_ctypes.dgesv_, ct.c_voidp).value def signature(self): return nb.types.void( nb.types.CPointer(nb.int32), # n nb.types.CPointer(nb.int32), # nrhs nb.types.CPointer(nb.float64), # a nb.types.CPointer(nb.int32), # lda nb.types.CPointer(nb.int32), # ipiv nb.types.CPointer(nb.float64), # b nb.types.CPointer(nb.int32), # ldb nb.types.CPointer(nb.int32), # info ) @nb.njit(cache=True) def dgesv_wap(f, A, b): b, n, nrhs, ipiv, info = args(A, b) f( ptr_from_val(n), ptr_from_val(nrhs), A.T.copy().ctypes, ptr_from_val(n), ipiv.ctypes, b.ctypes, ptr_from_val(n), ptr_from_val(info), ) if info: raise Exception(&quot;something went wrong&quot;) return b.T </code></pre> <p>but the resulting function is significantly slower than the other methods and it isn't really what I want as you have to pass the function as an argument for the caching to work:</p> <pre class="lang-py prettyprint-override"><code>rng = np.random.default_rng() for i in range(3, 5): N = i A = rng.random((N, N)) x = rng.random((N, 1000)) b = A @ x _ctypes = dgesv_ctypes(A.copy(), b.copy()) _cffi = dgesv_cffi(A.copy(), b.copy()) _wap = dgesv_wap(Dgesv(), A.copy(), b.copy()) _numba = dgesv_numba(A, b) assert np.allclose(_ctypes, _numba) assert np.allclose(_cffi, _numba) assert np.allclose(_wap, _numba) assert np.allclose(x, _numba) print(&quot;all good&quot;) %timeit dgesv_ctypes(A.copy(), b.copy()) %timeit dgesv_cffi(A.copy(), b.copy()) %timeit dgesv_wap(Dgesv(), A.copy(), b.copy()) %timeit dgesv_numba(A, b) </code></pre> <p>Output:</p> <pre><code>all good 56.5 µs ± 1.62 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each) 55.8 µs ± 1.36 µs per loop (mean ± std. dev. 
of 7 runs, 10,000 loops each) 89.6 µs ± 2.57 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each) 59.7 µs ± 894 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) </code></pre> <p>so how do I do it whilst retaining the performance of the other implementations?</p>
<python><numba>
2023-11-04 23:47:11
1
3,592
Nin17
77,424,156
414,104
Logger not logging info but only error, though every setting in the logging flow is correct?
<p>I'm trying to log in Python but I'm not seeing logs for anything lower than error. I've checked every part of the logging flow, from logger to the appropriate level, handlers, and filters at logger and handler level.</p> <p>I don't have a small reproducible example, since it works as expected when I pare it down, but a screenshot and relevant code are below, along with the PR. What am I missing?</p> <p><a href="https://docs.python.org/3/howto/logging.html#logging-flow" rel="nofollow noreferrer">https://docs.python.org/3/howto/logging.html#logging-flow</a></p> <p><a href="https://github.com/aivillage/llm_verification/pull/27/files" rel="nofollow noreferrer">https://github.com/aivillage/llm_verification/pull/27/files</a></p> <pre class="lang-py prettyprint-override"><code>import logging class DebugConsoleHandler(logging.Handler): def emit(self, record): print(f&quot;Custom Handler: {record.levelname} - {record.msg}&quot;) log.addHandler(DebugConsoleHandler()) log.error(&quot;Initialized llm_route in verifications&quot;) log.error(f&quot;Logger name {log.name}&quot;) log.error(&quot;Logging Level %s&quot;, log.level) log.setLevel(logging.NOTSET) log.error(&quot;Logging Level %s&quot;, log.level) log.error(&quot;Logging Handlers %s&quot;, log.handlers) log.error(&quot;Logging filters %s&quot;, log.filters) for handler in log.handlers: log.error(handler.filters) # # record = logging.LogRecord(&quot;test&quot;, level=20, msg=&quot;test&quot;) # handler.emit(&quot;Test Message&quot;) log.info( f'Received text generation request from user &quot;{get_current_user().name}&quot; ' f'for challenge ID &quot;{request.json[&quot;challenge_id&quot;]}&quot;' ) log.error( f'Received text generation request from user &quot;{get_current_user().name}&quot; ' f'for challenge ID &quot;{request.json[&quot;challenge_id&quot;]}&quot;' ) </code></pre> <p><a href="https://i.sstatic.net/5wDEm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5wDEm.png" alt="enter image description here" /></a></p>
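Not necessarily the cause in this particular setup, but for reference, the most common way INFO records vanish while ERROR survives is the logger's *effective* level: NOTSET on a non-root logger means "defer to my ancestors", and the root logger defaults to WARNING, so the record is dropped by the logger itself before any handler (custom or otherwise) is called. A stdlib-only sketch:

```python
import logging

log = logging.getLogger("effective_level_demo")

# NOTSET (0) on a non-root logger means "inherit from my ancestors";
# the root logger defaults to WARNING, so INFO is filtered out before
# any handler ever sees the record.
log.setLevel(logging.NOTSET)
print(log.getEffectiveLevel() == logging.WARNING)  # True
print(log.isEnabledFor(logging.INFO))              # False

log.setLevel(logging.INFO)                         # now INFO gets through
print(log.isEnabledFor(logging.INFO))              # True
```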
<python><logging>
2023-11-04 23:16:28
1
3,595
canyon289
77,424,139
3,949,631
Indexing numpy array with an array of indices
<p>I have 2 <code>numpy</code> arrays: <code>X</code> with shape <code>(..., 12, 3)</code> and <code>Y</code> with shape <code>(..., 12)</code>, where the <code>...</code> is the same in both cases. I get an array of indices using:</p> <pre><code>idx = np.argmin(Y, axis=-1) </code></pre> <p>which I afterwards want to use to get the corresponding elements from the arrays <code>X</code> and <code>Y</code>. However, I run into issues due to the extra dimensions. For <code>Y</code>, I can do e.g.</p> <pre><code>Y[np.arange(Y.shape[0]), idx] </code></pre> <p>however, this requires me to know the shape of <code>...</code>, which will not always be the case. How do I do this in a better way?</p>
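One shape-agnostic option (a sketch, not necessarily the only approach) is `np.take_along_axis`, which broadcasts the index array against the data, so the leading `...` dimensions never need to be spelled out:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((4, 5, 12, 3))   # (..., 12, 3) with arbitrary leading dims
Y = rng.random((4, 5, 12))      # (..., 12)

idx = np.argmin(Y, axis=-1)     # shape (4, 5), i.e. the leading dims

# Pick the minimising entry of Y along its last axis:
y_min = np.take_along_axis(Y, idx[..., None], axis=-1)[..., 0]           # (4, 5)
# Pick the matching row of X along its second-to-last axis:
x_min = np.take_along_axis(X, idx[..., None, None], axis=-2)[..., 0, :]  # (4, 5, 3)
```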
<python><numpy>
2023-11-04 23:11:04
1
497
John
77,424,126
22,466,650
How to shift the values of a list to the left as much as possible?
<p>Let's consider these two lists:</p> <pre><code>list1 = ['N/A', 'a', 'b', 'c'] list2 = [None, None, 'd', None, 'e'] </code></pre> <p>1-) I need to shift their items to the left as long as there is a target value at the beginning, but 2-) we should ignore any targets that occur in the middle, and finally 3-) we pad the end with a fill value.</p> <p>In a parallel universe, the function I need would be a method of Python's list: <code>list.lstrip(target_value, fill_value)</code></p> <p>I tried to make it real with the code below, but it doesn't give the expected output for <code>list2</code>:</p> <pre><code>def lstrip(a_list, target_value=None, fill_value='Invalid'): if a_list[0] != target_value: return a_list else: result = [] for element in a_list: if element != target_value: result.append(element) return result + [fill_value]*(len(a_list) - len(result)) </code></pre> <p>The current outputs are:</p> <pre><code>print(lstrip(list1, 'N/A')) ['a', 'b', 'c', 'Invalid'] print(lstrip(list2)) ['d', 'e', 'Invalid', 'Invalid', 'Invalid'] </code></pre> <p>The second output should be <code>['d', None, 'e', 'Invalid', 'Invalid']</code>.</p> <p>Do you guys have an idea how to fix my code?</p>
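For what it's worth, a sketch of one possible fix: only the *leading* run of target values should be dropped (e.g. with `itertools.dropwhile`), so that targets in the middle survive, and then the end is padded with the fill value:

```python
from itertools import dropwhile

def lstrip(a_list, target_value=None, fill_value='Invalid'):
    # dropwhile removes elements only while the predicate holds at the
    # *start*; the first non-target element stops the stripping, so any
    # later target values are kept untouched.
    kept = list(dropwhile(lambda x: x == target_value, a_list))
    return kept + [fill_value] * (len(a_list) - len(kept))

print(lstrip(['N/A', 'a', 'b', 'c'], 'N/A'))  # ['a', 'b', 'c', 'Invalid']
print(lstrip([None, None, 'd', None, 'e']))   # ['d', None, 'e', 'Invalid', 'Invalid']
```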
<python><list>
2023-11-04 23:03:31
2
1,085
VERBOSE
77,423,938
3,498,864
Order elements in a matrix based on another matrix
<p>I'd like to order an arbitrary number of elements into an arbitrarily shaped matrix (<code>matrix_a</code>) based on values in a binary matrix (<code>element_map</code>) that describes attributes of the elements. <code>matrix_b</code> defines which <code>elements</code> can be adjacent to each other in <code>matrix_a</code>. &quot;Adjacent&quot; in this case includes diagonal. A concrete, but toy example:</p> <pre><code>import numpy as np #This is a just a mask of the final matrix, to show the shape. #The center position (5) is adjacent to all other positions whereas #position (1) is adjacent to 4, 2 and 5, etc. matrix_a = np.array([[1, 4, 7], [2, 5, 8], [3, 6, 9]]) elements = range(0, 10) #each 'row' corresponds to an element #each 'column' corresponds to an attribute of the various elements #here, each element has 5 attributes, which are always 0 or 1 element_map = np.array([[0, 0, 1, 0, 1], #element 1 [1, 1, 1, 0, 1], #element 2 [1, 0, 0, 1, 0], #element 3 [1, 0, 1, 0, 0], #etc. [1, 1, 1, 0, 0], [1, 1, 0, 1, 0], [1, 0, 1, 0, 1], [1, 1, 0, 0, 0], [1, 1, 0, 0, 0]]) #element 9 </code></pre> <p>The desired output is that the nine <code>elements</code> are placed in <code>matrix_a</code>, each only appearing once, based on an adjacency rule for which <code>matrix_b</code> is needed. 
The rule is that for any <code>element</code> placed in <code>matrix_a</code>, the <code>element</code>'s value for attribute <code>x</code> (can be any attribute) must be zero, and the value of all adjacent (adjacent in <code>matrix_a</code>) <code>elements</code> must be 1 for attribute <code>x</code>.</p> <p>Assume also that we have a function that can accept a matrix and coordinates and return all the adjacent values:</p> <pre><code>def adj(m, x, y): ''' find adjacent values to coordinates x,y in a matrix, m''' if x == 0: if y == 0: r = m[0:2,0:2] # x==0, y==0 else: r = m[0:2,y-1:y+2] # x==0, y!=0 elif y == 0: # x != 0, y == 0 r = m[x-1:x+2,0:2] else: #any other position r = m[(x-1):(x+2),(y-1):(y+2)] return [i for i in r.flatten().tolist() if i != m[x,y]] #example: q = np.arange(1,97).reshape(12,8).T array([[ 1, 9, 17, 25, 33, 41, 49, 57, 65, 73, 81, 89], [ 2, 10, 18, 26, 34, 42, 50, 58, 66, 74, 82, 90], [ 3, 11, 19, 27, 35, 43, 51, 59, 67, 75, 83, 91], [ 4, 12, 20, 28, 36, 44, 52, 60, 68, 76, 84, 92], [ 5, 13, 21, 29, 37, 45, 53, 61, 69, 77, 85, 93], [ 6, 14, 22, 30, 38, 46, 54, 62, 70, 78, 86, 94], [ 7, 15, 23, 31, 39, 47, 55, 63, 71, 79, 87, 95], [ 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96]]) adj(q, 5, 5) [37, 45, 53, 38, 54, 39, 47, 55] </code></pre> <p>As a side note, the possible search space for an 8x12 matrix and 96 elements is 10**149 = 96!</p> <p>Maybe this would be a job for linear/integer programming, but I'm not sure how to formulate the adjacency constraints</p>
<python><optimization><sparse-matrix><linear-programming><integer-programming>
2023-11-04 21:49:47
1
3,719
Ryan
77,423,868
1,734,357
Failing to use bound_contextvars with complex logger configuration because of missing structlog.contextvars.merge_contextvars processor
<p>This</p> <pre><code>import structlog from structlog.contextvars import bound_contextvars if __name__==&quot;__main__&quot;: log = structlog.getLogger(__name__) with bound_contextvars(frame_id=&quot;ADA&quot;, command_id=&quot;123&quot;): log.info(&quot;rabo&quot;) </code></pre> <p>logs <code>2023-11-04 21:17:15 [info ] rabo command_id=123 frame_id=ADA</code></p> <p>This minimal example of what we actually use in production:</p> <pre><code>import os from pathlib import Path import logging import logging.config import structlog from structlog.contextvars import bound_contextvars LOG_DIR: Path = Path(&quot;.&quot;) LOG_CONFIG_FILE: Path = Path(&quot;logging.conf&quot;) STRUCTLOG_PROCESSORS = [ structlog.stdlib.filter_by_level, structlog.stdlib.add_logger_name, structlog.stdlib.add_log_level, structlog.processors.CallsiteParameterAdder( [ structlog.processors.CallsiteParameter.THREAD_NAME, structlog.processors.CallsiteParameter.PROCESS_NAME, structlog.processors.CallsiteParameter.PATHNAME, ] ), structlog.stdlib.PositionalArgumentsFormatter(), structlog.processors.TimeStamper(fmt=&quot;ISO&quot;, key=&quot;@timestamp&quot;), structlog.processors.UnicodeDecoder(), structlog.processors.ExceptionRenderer(), structlog.processors.StackInfoRenderer(), structlog.processors.EventRenamer(&quot;message&quot;), structlog.processors.JSONRenderer(), ] DEFAULT_EVENT_FIELDS = { &quot;appVersion&quot;: &quot;unset&quot;, &quot;supervisor_process_name&quot;: &quot;not_set&quot;, &quot;hostname&quot;: None, ## This causes log line &quot;frame_id&quot;: null, &quot;command_id&quot;: null, &quot;message&quot;: &quot;rabo&quot; # &quot;frame_id&quot;: None, # &quot;command_id&quot;: None, } def _configure_logging(processors) -&gt; None: LOG_DIR.mkdir(parents=True, exist_ok=True) os.environ[&quot;LOG_DIR&quot;] = str(LOG_DIR) logging.config.fileConfig(LOG_CONFIG_FILE, disable_existing_loggers=False) structlog.configure( processors=processors, wrapper_class=structlog.stdlib.BoundLogger, 
logger_factory=structlog.stdlib.LoggerFactory(), cache_logger_on_first_use=True, ) def add_custom_fields(logger, method_name, event_dict): # pylint: disable=unused-argument return {**event_dict, **DEFAULT_EVENT_FIELDS} _configure_logging([add_custom_fields] + STRUCTLOG_PROCESSORS) if __name__==&quot;__main__&quot;: log = structlog.getLogger(__name__) with bound_contextvars(frame_id=&quot;ADA&quot;, command_id=&quot;123&quot;): log.info(&quot;rabo&quot;) </code></pre> <p>with <code>logging.conf</code></p> <pre><code>[loggers] keys=root [handlers] keys=consoleHandler,consoleHandlerErr [formatters] keys=Formatter [logger_root] level=DEBUG handlers=consoleHandler [handler_consoleHandler] class=StreamHandler level=DEBUG formatter=Formatter args=(sys.stdout,) [handler_consoleHandlerErr] class=StreamHandler level=DEBUG formatter=Formatter args=(sys.stderr,) [formatter_Formatter] format=%(message)s </code></pre> <p>logs <code>{&quot;appVersion&quot;: &quot;unset&quot;, &quot;supervisor_process_name&quot;: &quot;not_set&quot;, &quot;hostname&quot;: null, &quot;logger&quot;: &quot;__main__&quot;, &quot;level&quot;: &quot;info&quot;, &quot;thread_name&quot;: &quot;MainThread&quot;, &quot;process_name&quot;: &quot;n/a&quot;, &quot;pathname&quot;: &quot;/Users/dario.figueira/Library/Application Support/JetBrains/PyCharm2023.1/scratches/bound_contextvars.py&quot;, &quot;@timestamp&quot;: &quot;2023-11-04T21:19:17.083187Z&quot;, &quot;message&quot;: &quot;rabo&quot;}</code></p> <p>What's missing so that the bound context vars show up in the log lines? Adding <code>frame_id</code> and <code>command_id</code> to <code>DEFAULT_EVENT_FIELDS</code> was not enough</p> <p><strong>EDIT:</strong> Adding <code>structlog.contextvars.merge_contextvars,</code> after <code>STRUCTLOG_PROCESSORS = [</code> fixes it</p>
<python><logging><structlog>
2023-11-04 21:22:06
1
1,493
WurmD
77,423,844
5,594,008
Wagtail CheckConstraint, column live does not exist
<p>I have the following model:</p> <pre><code>from wagtail.models import Page class Article(Page): price = PositiveIntegerField( null=True, blank=True, ) class Meta: constraints = [ CheckConstraint(check=Q(price__gte=100) &amp; Q(live=True), name=&quot;min price for published&quot;), CheckConstraint(check=Q(price__lte=350000) &amp; Q(live=True), name=&quot;max price for published&quot;), ] </code></pre> <p>After running makemigrations everything is fine. But when I try to run the migration, I get the error <code>django.db.utils.ProgrammingError: column &quot;live&quot; does not exist</code></p> <p>How can that be? The <strong>live</strong> column is part of <code>Page</code>. How can this be fixed?</p>
<python><django><wagtail>
2023-11-04 21:12:58
1
2,352
Headmaster
77,423,773
15,433,308
Reusing Spark SQL code in an on demand web service
<p>I have a process written in pyspark that runs some complex logic on large batches of data and is run periodically. There is also high demand to enable users to run the logic on demand on small subsets of data according to a query and get results fast. It would be really expensive and hard to maintain another copy of the logic in another framework like pandas, so I am trying to instead run the logical parts of the process in local mode (<code>master=local[n]</code>) packaged in a flask service that the users can then send requests to. The service queries a database with the input data according to the user's request, runs the logical parts of the process and then returns the result to the user.</p> <p>It works, the problem is it is too slow. I have managed to narrow down the bad performance to two main bottlenecks:</p> <p>The first one is two python udfs that take a long time to run.</p> <p>The other one is that it takes about 10 seconds to run all of the spark sql code excluding the action for each request, and 10 more seconds from running the action to the job actually starting to run and appearing in the spark ui. From observing the logs I suspect it's the code generation but I am unsure.</p> <p>My questions are:</p> <ol> <li>Is there any way to reduce the amount of time it takes to run the spark sql code of the job up to the action? 
I have tried to avoid recomputing the dataframe by, for example, writing the raw data to a csv as an input, running the action, and then for the next request overwriting the csv and running the action again on the same dataframe, hoping to get results for the new data, but it didn't seem to work.</li> <li>Is there any way to reduce the delay from running the action to the job starting?</li> <li>Are there any better alternatives to repurpose the spark code other than what I am doing right now?</li> <li>What general optimizations can I do to make the jobs run faster in local mode?</li> </ol> <p>I know there are solutions like Apache Livy and Spark Connect that allow you to send requests to a remote Spark cluster, thus saving the hassle of your app being the driver. This may help for handling requests from many users; however, it doesn't seem logical to me that these products would solve the 20 seconds of overhead launching each job.</p> <p>Thanks</p>
<python><apache-spark><pyspark><apache-spark-sql>
2023-11-04 20:47:50
0
492
krezno
77,423,724
2,809,176
Langchain custom agent prompt for ollama mistral 7b-instruct does not call a tool, directly gives a wrong final answer
<p>I followed this langchain custom agent <a href="https://python.langchain.com/docs/modules/agents/how_to/custom_llm_agent" rel="nofollow noreferrer">guide</a> but with local <code>ollama mistral:7b-instruct</code> model. <strong>The problem is that model hallucinates observations and gives a wrong final answer on the first call.</strong> In my case it was a word length without actually calling a tool.</p> <p>How can I prevent it from giving the final answer before calling a tool?</p> <p>Used the original prompt</p> <pre><code># Set up the base template template = &quot;&quot;&quot;Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools: {tools} Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [{tool_names}] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! Remember to speak as a pirate when giving your final answer. Use lots of &quot;Arg&quot;s Question: {input} {agent_scratchpad}&quot;&quot;&quot; </code></pre> <p><strong>Update:</strong> To make a problem clear I changed the prompt, this is a prompt and a model response. I changed a world length task to a task of calculating some made up thing - &quot;weegle&quot;.</p> <pre><code>[INST]Answer the following questions as best you can. You have access to the following tools:\n\nget_word_weegle: get_word_weegle(word: str) -&gt; int - Returns the amount of a weegle. Use the following format: Question: the input question you must answer Thought: you should always think about what to do\nAction: the action to take, should be one of [get_word_length]\nAction Input: the input to the action Observation: the result of the action ... 
(this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer\nFinal Answer: the final answer to the original input question (can be skipped if you are not sure about the answer) Begin! If you can use a tool, use it before the final answer. Do not make up an observation before calling an action! Question: How many weegles in the word astronomia?\n[/INST] --------------MODEL OUTPUT ------------------- Question: &quot;How many weegles are in the word astronomia?&quot; Thought: To get the number of weegles in the word, we need to use a tool that can calculate it. Action: get_word_weegle(astronomia) Action Input: None (The word &quot;astronomia&quot; is directly passed as input.) Observation: 4 Thought: We got the number of weegles in the word astronomia, which is 4. Final Answer: There are 4 weegles in the word astronomia. </code></pre> <p>As you can see, it gives the final answer directly instead of taking an action.</p>
<python><langchain><large-language-model>
2023-11-04 20:33:47
0
1,335
user2809176
77,423,657
21,305,238
Constructing large character ranges with pyparsing.srange is extremely slow
<p>I have some of these constructions in my code:</p> <pre class="lang-py prettyprint-override"><code>from pyparsing import Char, srange _non_ascii = Char(srange('[\x80-\U0010FFFF]')) </code></pre> <p>The generation of the ranges is extremely slow, taking 6-8 seconds (<a href="https://meta.stackexchange.com/a/19514">huh?</a>) even with Python 3.12 on a relatively decent machine.</p> <p>Why is this happening and what should I replace those with?</p>
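If it helps as a diagnosis (hedged, since pyparsing internals vary by version): `srange` returns a plain string containing every character in the range — over a million characters here — and that materialisation is what dominates the construction time. Keeping the range symbolic avoids the blow-up, e.g. with `pyparsing.Regex('[\x80-\U0010FFFF]')` instead of `Char(srange(...))`. The same idea, shown with only the standard library:

```python
import re

# The character class stays symbolic -- no million-character string is
# ever built. In a plain (non-raw) string, Python expands \x80 and
# \U0010FFFF before re sees them, so the pattern is a two-endpoint range.
non_ascii = re.compile('[\x80-\U0010FFFF]')

print(bool(non_ascii.fullmatch('é')))   # True  (U+00E9 is in range)
print(non_ascii.match('a'))             # None  (plain ASCII is not)
```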
<python><performance><pyparsing>
2023-11-04 20:09:40
1
12,143
InSync
77,423,567
7,133,942
How to create multiple combined csv files out of multi-dimensional numpy arrays
<p>I have 8 numpy arrays of the dimensionalty (number_of_buildings, number_of_weeks) which in my example is (2, 20). They look like that:</p> <pre><code>import numpy as np result_multiple_buildings_simulationObjective_costs_Euro_OPT = np.array([ [181.1, 172.59, 170.02, 173.83, 181.21, 160.95, 182.24, 186.1, 168.4, 153.3, 173.47, 175.36, 175.05, 171.96, 182.94, 163.36, 178.0, 174.27, 174.02, 177.51], [109.0, 107.78, 106.47, 104.33, 113.67, 98.19, 112.45, 114.94, 102.31, 96.45, 107.67, 107.0, 105.11, 103.22, 116.03, 101.46, 106.54, 106.89, 108.24, 108.84] ]) result_multiple_buildings_negativeScore_PhysicalLimit_OPT = np.array([ [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 7.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0] ]) result_multiple_buildings_simulationObjective_costs_Euro_CC = np.array([ [226.29, 217.3, 214.17, 219.22, 226.2, 204.82, 226.94, 232.28, 213.46, 197.07, 218.78, 221.09, 219.95, 216.0, 228.37, 207.02, 223.6, 219.31, 219.67, 222.41], [148.25, 146.07, 145.98, 143.81, 151.92, 135.71, 151.11, 154.02, 141.29, 136.91, 146.03, 146.73, 145.5, 144.33, 154.31, 143.15, 144.81, 146.57, 146.92, 146.29] ]) result_multiple_buildings_negativeScore_PhysicalLimit_CC = np.array([ [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0] ]) result_multiple_buildings_simulationObjective_costs_Euro_PSC = np.array([ [217.38, 206.57, 203.58, 209.52, 215.38, 192.08, 216.79, 221.89, 201.99, 183.88, 207.59, 211.56, 208.42, 205.82, 218.36, 197.26, 212.61, 210.27, 209.26, 212.08], [137.93, 135.81, 134.24, 133.09, 140.28, 125.58, 139.31, 142.51, 130.13, 123.97, 134.83, 135.38, 133.45, 132.09, 143.84, 130.38, 132.86, 135.51, 135.75, 135.66] ]) result_multiple_buildings_negativeScore_total_overall_PSC = np.array([ [0.0, 0.0, 2.0, 
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] ]) result_multiple_buildings_simulationObjective_costs_Euro_ANN = np.array([ [226.29, 217.3, 214.17, 219.22, 226.2, 204.82, 226.94, 232.28, 213.46, 197.07, 218.78, 221.09, 219.95, 216.0, 228.37, 207.02, 223.6, 219.31, 219.67, 222.41], [148.25, 146.07, 145.98, 143.81, 151.92, 135.71, 151.11, 154.02, 141.29, 136.91, 146.03, 146.73, 145.5, 144.33, 154.31, 0.0, 0.0, 0.0, 0.0, 0.0] ]) result_multiple_buildings_negativeScore_PhysicalLimit_ANN = np.array([ [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.0, 0.0, 0.0, 0.0, 0.0] ]) </code></pre> <p>What I want is to have a csv file for each building, that lists all values of the week for that specific building of those 8 arrays. So it should in total have 9 columns: &quot;Week (index)&quot;, &quot;Opt_cost&quot;, &quot;Opt_neg&quot;, &quot;CC_cost&quot;, &quot;CC_neg&quot;, &quot;PSC_cost&quot;, &quot;PSC_neg&quot;, &quot;ANN_cost&quot;, &quot;ANN_neg&quot;. The rows should be the weeks, so in my case 20 entries. 
Further, there should be an additional row which calculates the average of all 8 columns excluding the week index (rounded, to the first decimal value).</p> <p>So in my case there should be 2 csv files with the name &quot;Building_1.csv&quot; and &quot;Building_2.csv&quot; but actually the number of buildings should be variable:</p> <p>I tried the following code but it did not work:</p> <pre><code>datasets = [ (&quot;OPT&quot;, result_multiple_buildings_simulationObjective_costs_Euro_OPT, result_multiple_buildings_negativeScore_PhysicalLimit_OPT), (&quot;CC&quot;, result_multiple_buildings_simulationObjective_costs_Euro_CC, result_multiple_buildings_negativeScore_PhysicalLimit_CC), (&quot;PSC&quot;, result_multiple_buildings_simulationObjective_costs_Euro_PSC, result_multiple_buildings_negativeScore_PhysicalLimit_PSC), (&quot;ANN&quot;, result_multiple_buildings_simulationObjective_costs_Euro_ML, result_multiple_buildings_negativeScore_PhysicalLimit_ML) ] # Iterate through datasets and create CSV files for each building for label, cost_array, neg_array in datasets: for building_index in range(cost_array.shape[0]): # Create a new CSV file for each building csv_file_name = f&quot;{folderPath_WholeSimulation}/Results_HH{building_index + 1}.csv&quot; # Combine data of the arrays and calculate averages data = np.column_stack((cost_array[building_index, :], neg_array[building_index, :])) week_index = np.arange(1, data.shape[0] + 1).reshape(-1, 1) data = np.column_stack((data, week_index)) avg_data = np.mean(data, axis=0).reshape(1, -1) # column descriptions column_descriptions = [f&quot;{label}&quot;, f&quot;{label}_neg&quot;, f&quot;CC&quot;, f&quot;CC_neg&quot;, f&quot;PSC&quot;, f&quot;PSC_neg&quot;, f&quot;ANN&quot;, f&quot;ANN_neg&quot;, &quot;Week Index&quot;] # Write data and averages to the CSV file with open(csv_file_name, 'w') as csv_file: csv_file.write(&quot;;&quot;.join(column_descriptions) + &quot;\n&quot;) # Write data rows for row in data: 
csv_file.write(&quot;;&quot;.join(map(str, row)) + &quot;\n&quot;) # Calculate average row avg_row = np.append(avg_data, [&quot;Average&quot;]) csv_file.write(&quot;;&quot;.join(map(str, avg_row)) + &quot;\n&quot;) </code></pre> <p>Do you have any idea how I can do that?</p>
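As a rough illustration of the shape of a working approach (using a deliberately tiny, hypothetical two-metric input rather than all 8 arrays): stack the per-building columns, write one file per building, and append a rounded-average row without a week index. The `StringIO` buffer stands in for the real `Building_{n}.csv` file:

```python
import csv
import io
import numpy as np

# Hypothetical miniature inputs: 2 buildings x 3 weeks, one cost/neg pair.
opt_cost = np.array([[181.1, 172.59, 170.02],
                     [109.0, 107.78, 106.47]])
opt_neg = np.array([[1.0, 0.0, 0.0],
                    [0.0, 3.0, 0.0]])

csv_text = {}
for b in range(opt_cost.shape[0]):          # one file per building
    cols = {"Opt_cost": opt_cost[b], "Opt_neg": opt_neg[b]}
    buf = io.StringIO()                     # stand-in for open(f"Building_{b+1}.csv", "w")
    writer = csv.writer(buf, delimiter=";")
    writer.writerow(["Week"] + list(cols))
    for week in range(opt_cost.shape[1]):
        writer.writerow([week + 1] + [cols[name][week] for name in cols])
    # Extra row: each column's mean, rounded to one decimal place.
    writer.writerow(["Average"] + [round(float(v.mean()), 1) for v in cols.values()])
    csv_text[f"Building_{b + 1}"] = buf.getvalue()

print(csv_text["Building_1"])
```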
<python><arrays><numpy><csv>
2023-11-04 19:41:35
1
902
PeterBe
77,423,535
4,515,341
Fill pandas column forward iteratively, but without using iteration?
<p>I have a pandas data frame with a column where a condition is met based on other elements in the data frame (not shown). Additionally, I have a column that extends the validity one row further with the following rule:</p> <p><strong>If a valid row is followed directly by ExtendsValid, that row is also valid, even if the underlying valid condition doesn't apply. Continue filling valid forward as long as ExtendsValid is 1</strong></p> <p>I have illustrated the result in column &quot;FinalValid&quot; (the desired result. It doesn't have to be a new column; I could also fill Valid forward). Note that rows 8 and 9 in the example also become valid. Also note that row 13 does NOT result in FinalValid, because you need a preceding valid row. The preceding valid row can be Valid or an extended valid row.</p> <p>So far, when I had a problem like that, I used a cumbersome multi-step process:</p> <ol> <li>Create a new column for when &quot;Valid&quot; or &quot;ExtendsValid&quot; is true</li> <li>Create a new column marking the start point for each &quot;sub-series&quot; (a consecutive set of ones)</li> <li>Number each sub-series</li> <li>fillna using &quot;group by&quot; for each sub-series</li> </ol> <p>I can provide sample code, but I am really looking for a totally different, more efficient approach, which of course must be non-iterating as well.</p> <p>Any ideas would be welcome.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>#</th> <th>Valid</th> <th>ExtendsValid</th> <th>FinalValid</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>2</td> <td>1</td> <td>0</td> <td>1</td> </tr> <tr> <td>3</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>4</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>5</td> <td>1</td> <td>0</td> <td>1</td> </tr> <tr> <td>6</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>7</td> <td>1</td> <td>0</td> <td>1</td> </tr> <tr> <td>8</td> <td>0</td> <td>1</td> <td>1</td> </tr> <tr> <td>9</td>
<td>0</td> <td>1</td> <td>1</td> </tr> <tr> <td>10</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>11</td> <td>1</td> <td>0</td> <td>1</td> </tr> <tr> <td>12</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>13</td> <td>0</td> <td>1</td> <td>0</td> </tr> <tr> <td>14</td> <td>0</td> <td>0</td> <td>0</td> </tr> </tbody> </table> </div>
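Editorial note: one vectorized sketch of the rule described above (not part of the original post; column names follow the example) numbers the consecutive runs of candidate rows and propagates validity within each run:

```python
import pandas as pd

# the table from the question (rows 1-14)
df = pd.DataFrame({
    "Valid":        [0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0],
    "ExtendsValid": [0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0],
})

# candidate rows: either valid themselves or trying to extend something
chain = (df["Valid"] == 1) | (df["ExtendsValid"] == 1)

# each consecutive run of candidate rows gets its own group id
group = (~chain).cumsum()

# within a run, rows are FinalValid from the first Valid row onward;
# an ExtendsValid row with no preceding Valid in its run (row 13) stays 0
final = chain & df.groupby(group)["Valid"].cummax().astype(bool)
df["FinalValid"] = final.astype(int)
```

This avoids iteration entirely: the run numbering replaces steps 2-3 of the multi-step process, and the grouped `cummax` replaces the grouped fill.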
<python><pandas><dataframe><fillna>
2023-11-04 19:30:16
2
332
DISC-O
77,423,000
893,254
Does Python support an optimization using the Levenberg Marquardt algorithm for general optimization problems?
<p>It has been a while since I have tackled any non-linear function fitting (optimization) problems. Previously I have been familiar with them in the context of C++ or C libraries. The algorithm which I am most familiar with is the Levenberg Marquardt algorithm, which is a damped gradient descent algorithm. It typically provides the ability to either specify function derivatives analytically, or to calculate them numerically.</p> <p>I am now working on an optimization problem using Python. It seems that Python provides support for optimization using the Levenberg Marquardt algorithm via the scipy package <code>optimize.least_squares</code>. The way the interface is formulated, this function expects to receive an array of residuals. A residual is (data - model). It appears to <em>minimize</em> the square sum of the residuals, multiplied by a factor of 1/2.</p> <p>Unless I have misunderstood something, this makes the API quite specific to least-squares fitting problems, and it cannot be used in a more general way such as for finding <em>maximum</em> values rather than minimum values. (This is due to the fact that the residuals are squared. So, even if the residuals consist of a single scalar value, we cannot find a maximum by optimizing a minimum of the function multiplied by -1, which would be the usual approach.)</p> <p>This directs me towards two possible lines of thought:</p> <ol> <li>I have misunderstood the documentation, or more likely, not found the function/API I am looking for.</li> <li>The Levenberg Marquardt algorithm is not exactly the right tool for the job. I thought that LM is considered to be a general purpose, robust, gradient descent optimizer. It might be that there is something more appropriate to use, and that I have overlooked the reason as to why some other gradient descent algorithm would be more appropriate to use here.</li> </ol> <p>For context, I am trying to optimize (maximize) a Likelihood, or Log-Likelihood, function.
I have some statistical data which I am trying to model with non-linear curve/model fitting optimization methods.</p> <p>My question is, does Python/Scipy provide an interface to a Levenberg Marquardt algorithm implementation which can be used for such purposes? If not, then why not, and what should I look to use as an alternative?</p>
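Editorial note: scipy's LM entry points (`least_squares(method='lm')`, and `curve_fit` which wraps it) are indeed tied to the residual/least-squares structure. For maximizing a general log-likelihood, the usual route is a general-purpose minimizer applied to the negated objective. A hedged sketch (the normal model and the data here are purely illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# illustrative data: samples whose mean and sigma we recover by maximum likelihood
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.5, size=1000)

def neg_log_likelihood(params):
    # negative log-likelihood of a normal model (additive constants dropped);
    # parameterized by log-sigma so sigma stays positive during the search
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(((data - mu) / sigma) ** 2) + data.size * log_sigma

# maximizing the likelihood == minimizing its negation
res = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="BFGS")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

The residual-squaring that blocks maximization in `least_squares` simply does not apply here: `minimize` treats the objective as an arbitrary scalar function.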
<python><optimization><scipy><levenberg-marquardt>
2023-11-04 16:56:32
1
18,579
user2138149
77,422,945
5,330,527
No module found [my_app] with Apache and mod_wsgi
<p>This is my directory structure, in <code>/var/www</code>:</p> <pre><code>team_django --team_db ---- __init__.py -----settings.py -----wsgi.py </code></pre> <p>My wsgi.py:</p> <pre><code>import os from django.core.wsgi import get_wsgi_application os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'team_db.settings') application = get_wsgi_application() </code></pre> <p>My <code>/etc/apache2/mods-available/wsgi.load</code>:</p> <pre><code>LoadModule wsgi_module &quot;/usr/local/py_ptracker/lib/python3.11/site-packages/mod_wsgi/server/mod_wsgi-py311.cpython-311-x86_64-linux-gnu.so&quot; WSGIPythonHome &quot;/usr/local/py_ptracker&quot; </code></pre> <p>My <code>/etc/apache2/sites-available/vhost.conf</code>:</p> <pre><code>&lt;VirtualHost *:80&gt; ServerName mydomain WSGIScriptAlias / /var/www/team_django/team_db/wsgi.py &lt;Directory /var/www/team_django/team_db&gt; &lt;Files wsgi.py&gt; Require all granted &lt;/Files&gt; &lt;/Directory&gt; &lt;/VirtualHost&gt; </code></pre> <p><code>mod_wsgi</code> is enabled, as is the vhost configuration, and everything gets loaded fine. I get this in the <code>error.log</code> though:</p> <pre><code>ModuleNotFoundError: No module named 'team_db' [wsgi:error] [pid 225277] mod_wsgi (pid=225277): Failed to exec Python script file '/var/www/team_django/team_db/wsgi.py'., referer: ... [wsgi:error] [pid 225277] mod_wsgi (pid=225277): Exception occurred processing WSGI script '/var/www/team_django/team_db/wsgi.py'., referer: .... </code></pre> <p>Why isn't the <code>os.environ.setdefault</code> finding <code>team_db</code>? I've checked all permissions and I also set everything to 777, changing owners and group to www-data or root and all that; to no avail. I can't understand why it's not finding <code>team_db</code>. The server runs correctly with Django's runserver and also with the mod_wsgi-express runserver.</p> <p>The mod_wsgi has been built with the same version of Python, within the virtual environment.</p>
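Editorial note: a common cause of this exact error (an assumption, not verified against this setup) is that the project root `/var/www/team_django` — the directory that *contains* the `team_db` package — is never put on `sys.path` for the mod_wsgi interpreter. One sketch of a fix, using standard mod_wsgi directives in the vhost:

```apache
# sketch only: run the app in a mod_wsgi daemon process whose python-path
# includes the directory that CONTAINS the team_db package
WSGIDaemonProcess team_django python-home=/usr/local/py_ptracker python-path=/var/www/team_django
WSGIProcessGroup team_django
WSGIScriptAlias / /var/www/team_django/team_db/wsgi.py
```

The equivalent in-code fix is a `sys.path.insert(0, '/var/www/team_django')` at the top of `wsgi.py`, before `os.environ.setdefault` and the Django import.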
<python><django><mod-wsgi><django-wsgi>
2023-11-04 16:41:04
1
786
HBMCS
77,422,926
8,519,830
Sum of numpy array based on elements in another numpy array?
<p>I have 2 arrays:</p> <pre><code>arr1 =np.array([10, 20, 15, 0, 45, 100]) arr2 =np.array([0, 3, 1, 6, 0, 0]) </code></pre> <p>I want to get the sum of those elements from arr1 whose corresponding entries in arr2 are 0. The sum would be 10+45+100=155 in the example. How can I achieve this easily?</p>
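Editorial note: a one-line boolean-mask sketch for the example above:

```python
import numpy as np

arr1 = np.array([10, 20, 15, 0, 45, 100])
arr2 = np.array([0, 3, 1, 6, 0, 0])

# boolean mask: True where arr2 is 0; indexing arr1 with it keeps 10, 45, 100
total = arr1[arr2 == 0].sum()
```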
<python><numpy>
2023-11-04 16:35:42
1
585
monok
77,422,820
18,445,352
Postgres query hung when I ran a simple select query
<p>I'm using the pandas <code>read_sql()</code> method to read tables from my PostgreSQL database. The query is as simple as <code>SELECT * FROM my_table</code>. Yesterday, one of those queries failed, and I had to restart the Jupyter notebook after two or three minutes. Initially, I believed the issue was specific to the Jupyter or Pandas library. However, the problem persisted when I attempted the query within a Python file as well. Also, running the query directly in the database manager (DBeaver) did not help.</p> <p>I'm not going to try to find the cause of this problem because I know there are many things that might contribute to it happening. My main goal is to find a way to stop the query if it exceeds the time limit and eliminate any risk of application crashes. Right now, I'm calling <code>read_sql()</code> from inside a try/except block, but since the problem is with the database engine, that obviously doesn't help. I can run the query inside a thread and stop the thread if it takes longer than a specified time. Below is some sample code:</p> <pre><code>import pandas as pd import threading import time def read_sql_with_timeout(query, engine, timeout=10): def read_sql_thread(): data = pd.read_sql(query, engine) return data thread = threading.Thread(target=read_sql_thread) thread.start() thread.join(timeout=timeout) if thread.is_alive(): thread.kill() raise TimeoutError(&quot;pd.read_sql() call timed out&quot;) return thread.result() engine = sqlalchemy.create_engine('postgresql+psycopg2://localhost:5432/my_database') data = read_sql_with_timeout('SELECT * FROM my_table', engine, timeout=10) </code></pre> <p>Since I do a lot of queries in my code, I don't think the above code will perform well. Another way is to use <code>asyncpg</code>, which is a PostgreSQL asynchronous library.
The sample code is as follows:</p> <pre><code>import asyncpg import pandas as pd pool = await asyncpg.create_pool('postgresql://localhost:5432/my_database') data = pd.read_sql('SELECT * FROM my_table', pool) # OR connection = await asyncpg.connect('postgresql://localhost:5432/my_database') data = pd.read_sql('SELECT * FROM my_table', connection) await connection.close() </code></pre> <p>My question is, is there any guarantee that I will be able to handle any situation using that library? Also, I would like to know if there is a better way to avoid similar problems?</p>
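Editorial note: `threading.Thread` has neither `.kill()` nor `.result()`, so the first sample would fail as written. A working client-side sketch uses `concurrent.futures`, where a future gives both the result and the timeout; the more robust companion is PostgreSQL's server-side `statement_timeout` (settable per connection, e.g. `connect_args={"options": "-c statement_timeout=10000"}` with SQLAlchemy/psycopg2), which actually cancels the query on the server:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as QueryTimeout

def run_with_timeout(fn, timeout, *args, **kwargs):
    # run fn in a worker thread and stop WAITING after `timeout` seconds;
    # note the thread itself cannot be killed and may keep running, which is
    # why a server-side statement_timeout is still needed as a backstop
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn, *args, **kwargs)
    try:
        return future.result(timeout=timeout)
    finally:
        pool.shutdown(wait=False)

# usage sketch: run_with_timeout(pd.read_sql, 10, "SELECT * FROM my_table", engine)
```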
<python><sql><pandas><postgresql>
2023-11-04 16:07:35
1
346
Babak
77,422,605
14,939,441
Program for an upright and inverted triangle pattern with one For loop and if-else statement
<p>I am new to Python and wanted to create a program which prints this star pattern using one <code>for</code> loop and an if-else statement:</p> <pre><code>* ** *** **** ***** **** *** ** * </code></pre> <p>What I've done so far:</p> <pre><code>stars = &quot;*&quot; for i in range (0,10): if i &lt;5: print(stars) stars = stars + &quot;*&quot; # else: </code></pre> <p>Output:</p> <pre><code>* ** *** **** ***** </code></pre> <p>Not sure what to put in the else statement?</p> <p>I've tried looking online but all solutions I've seen use two <code>for</code> loops, which makes sense. Is it possible to use one loop?</p>
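Editorial note: one sketch of the single-loop approach — the else branch simply mirrors the count back down. String repetition (`"*" * i`) replaces the accumulating `stars` variable:

```python
lines = []
for i in range(1, 10):
    if i <= 5:
        lines.append("*" * i)          # ascending half: 1..5 stars
    else:
        lines.append("*" * (10 - i))   # descending half: 4..1 stars
print("\n".join(lines))
```

The key observation is that row `i` of the descending half has `10 - i` stars, so no second loop is needed.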
<python>
2023-11-04 15:14:43
2
329
Thandi
77,422,491
8,937,353
Change pixels of a 2 bit image; How can I extend 3-color palette image to have a 4th palette entry?
<p>I have a 2-bit image, which has pixels as 0,1 and 2. Where 0 is the background and 1 and 2 are labels. I want to create one more label by changing some pixels to 3. I tried imageJ fill with a colour picker, but it's only letting me choose from existing labels. I tried using Python as well as Matlab but everything is reading the image as 8-bit and saving as 8-bit. Is there any way to do it using any program or code?</p> <pre><code>import cv2 import numpy as np img = cv2.imread(r&quot;image_1.png&quot;) # cv2.imshow('image',img) box =[20,1796,2492,1915] img[box[1]:box[1]+box[3],box[0]:box[0]+box[2]] = 3 cv2.imwrite(r&quot;image_1_label.png&quot;,img) </code></pre>
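Editorial note (an assumption about the cause): `cv2.imread` without flags returns a 3-channel 8-bit BGR array, which is why the labels look "8-bit" and the write-back loses the label structure; loading single-channel (e.g. `cv2.imread(path, cv2.IMREAD_GRAYSCALE)` or `cv2.IMREAD_UNCHANGED`) keeps a plain integer label map, and adding a new label is then just assigning a new value. A self-contained numpy sketch (the array and box coordinates are illustrative stand-ins for the real image):

```python
import numpy as np

# stand-in for the label image: 0 = background, 1 and 2 = existing labels;
# with OpenCV you would load it single-channel, e.g.
#   img = cv2.imread("image_1.png", cv2.IMREAD_GRAYSCALE)
labels = np.zeros((20, 20), dtype=np.uint8)
labels[2:6, 2:6] = 1
labels[8:12, 8:12] = 2

# creating a new label is just assigning a new integer value to a region;
# values 0..3 all fit comfortably in the 8-bit array
x, y, w, h = 1, 14, 10, 4          # hypothetical box (x, y, width, height)
labels[y:y + h, x:x + w] = 3
```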
<python><opencv><image-processing><python-imaging-library>
2023-11-04 14:46:10
0
302
A.k.
77,422,410
3,231,250
manipulate the element before finding sum of higher elements in the row
<p>I have asked about finding the sum of higher elements in the row/column and got a really good <a href="https://stackoverflow.com/questions/77418144/for-each-element-of-2d-array-sum-higher-elements-in-the-row-and-column">answer</a>. However, this approach does not allow me to manipulate the current element.</p> <p>My input array is something like this:</p> <pre><code>array([[-1, 7, -2, 1, 4], [ 6, 3, -3, 5, 1]]) </code></pre> <p>Basically, I would like to have an output matrix which shows me for each element how many values are higher in the given row and column, like this:</p> <pre><code>array([[3, 0, 4, 2, 1], [0, 2, 4, 1, 3]], dtype=int64) </code></pre> <p>The <code>scipy</code> <code>rankdata</code> function really works well here. (Thanks to @<a href="https://stackoverflow.com/users/13386979/tom">Tom</a>)</p> <p>The tricky part: since this matrix is a correlation matrix and scores are between -1 and 1,<br /> <strong>I would like to add one middle step (a normalization factor) before counting higher values:</strong></p> <p>If the element is negative, add +3 to that element and then count how many values are higher.<br /> If the element is positive, subtract 3 from that element and then count how many values are higher in the row.</p> <p>e.g.:</p> <p>the first element of the row is negative, so we add +3 and the row would become<br /> <code>2 7 -2 1 4</code> -&gt; the count of values higher than that element is 2<br /> the second element of the row is positive, so we subtract 3 and the row would become<br /> <code>-1 4 -2 1 4</code> -&gt; the count of values higher than that element is 0</p> <p>...</p> <p>so we do this normalization for each element, and the row-wise desired output would be:</p> <pre><code>2 0 2 3 1 1 3 4 2 3 </code></pre> <p>I don't want to use a loop for that, because the matrix is <code>11kx12k</code> and it would take too much time.
If I use <code>rankdata</code> with a <code>lambda</code>, then instead of transforming each element individually, it adds and subtracts across all row values at the same time, which is not what I want.</p> <pre><code>corr = np.array([[-1, 7, -2, 1, 4], [ 6, 3, -3, 5, 1]]) def element_wise_bigger_than(x, axis): return x.shape[axis] - rankdata(x, method='max', axis=axis) ld = lambda t: t + 3 if t&lt;0 else t-3 f = np.vectorize(ld) element_wise_bigger_than(f(corr), 1) </code></pre>
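Editorial note: one vectorized sketch of the row-wise part — shift each element, then compare the shifted value against the *original* other values in its row via broadcasting:

```python
import numpy as np

corr = np.array([[-1, 7, -2, 1, 4],
                 [ 6, 3, -3, 5, 1]])

# shift each element individually: +3 if negative, -3 otherwise
shifted = corr + np.where(corr < 0, 3, -3)

# cmp[i, j, k] answers: is the ORIGINAL value corr[i, k] higher than the
# SHIFTED element shifted[i, j]?
cmp = corr[:, None, :] > shifted[:, :, None]

# an element does not compete with the value it replaced
idx = np.arange(corr.shape[1])
cmp[:, idx, idx] = False

counts = cmp.sum(axis=2)
```

The broadcast needs O(rows × cols²) memory, so for an 11k×12k matrix it would have to run in row blocks; alternatively, sort each row once and use `np.searchsorted` on the shifted values, which keeps it O(rows × cols × log cols).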
<python><pandas><numpy><matrix><scipy>
2023-11-04 14:23:06
1
1,120
Yasir
77,422,093
10,940,989
Measure total memory usage of python process that processing pool
<p>I'm running a Python script that uses a <code>concurrent.futures.ProcessPoolExecutor</code>. I would like to get an idea of what the <em>peak</em> memory usage is, including all the child processes.</p> <p>There are obviously many Linux tools for measuring memory usage, but I'd like to know if there's anything that correctly aggregates child processes and accounts for copy-on-write.</p>
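Editorial note: on Unix the standard library's `resource` module exposes peak-RSS counters, with an important caveat: the children counter is the maximum over *waited-for* children, not a sum across concurrently-alive workers, so it under-reports a pool's combined footprint (and says nothing about copy-on-write sharing). A sketch:

```python
import resource

# peak resident set size of this process (kilobytes on Linux, bytes on macOS)
peak_self = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

# maximum over children that have already been wait()ed on -- i.e. the
# largest single worker, NOT the sum of simultaneously-alive workers
peak_children = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
```

For a true aggregate that respects sharing, the usual Linux-side approach is sampling `Pss` from `/proc/<pid>/smaps_rollup` across the process tree while the pool runs.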
<python><linux><multiprocessing>
2023-11-04 12:56:23
1
380
Anthony Poole
77,422,013
1,668,622
How to get rid of 'incompatible type "Callable[[NamedArg(int, 'val')], ..'?
<p>I'm trying to forward and/or store <code>Callable</code>s while providing as much type hinting as possible. Currently I'm struggling with a <code>mypy</code> error that seems simple, but I somehow cannot solve it. Imagine this little snippet containing a collection of <code>Callable</code>s:</p> <pre><code>from collections.abc import Callable, MutableSequence functions: MutableSequence[Callable[[int], None]] = [] def foo(val: int) -&gt; None: print(val) functions.append(foo) </code></pre> <p>While somewhat senseless, it works and <code>mypy --strict</code> gives me zero issues.</p> <p>Now I want to make <code>foo</code> accept only named parameters:</p> <pre><code>def foo(*, val: int) -&gt; None: </code></pre> <p>and <code>mypy</code> gives me</p> <pre><code>./foo.py:11: error: Argument 1 to &quot;append&quot; of &quot;MutableSequence&quot; has incompatible type &quot;Callable[[NamedArg(int, 'val')], None]&quot;; expected &quot;Callable[[int], None]&quot; [arg-type] </code></pre> <p>.. which sounds plausible, but is there a way to get around this? <code>NamedArg</code> can't be imported via <code>typing</code> or <code>typing_extensions</code>, but only via <code>mypy_extensions</code>, which feels strange to have it as a dependency. But even when I accept it, it gives me a class, rather than some <code>Generic</code>.</p> <p>How can I solve this without losing type hinting for the named arguments?</p> <p>Btw, in a real-life-project I'm forwarding kw-args, so allowing the list in the snippet above to only accept a kw-arg with one (generic) argument name like <code>MutableSequence[Callable[[NamedArg(int, 'val')], None]]</code> would be fine.</p>
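Editorial note: one standard-library way to keep keyword-only hints without depending on `mypy_extensions` is a callable `typing.Protocol` (a sketch; the protocol name is invented):

```python
from collections.abc import MutableSequence
from typing import Protocol

class TakesVal(Protocol):
    # a callable protocol: structurally matches anything invocable as f(val=...)
    def __call__(self, *, val: int) -> None: ...

functions: MutableSequence[TakesVal] = []

def foo(*, val: int) -> None:
    print(val)

functions.append(foo)  # accepted, including under mypy --strict
```

Because `Callable[...]` can only express positional parameters, a `Protocol` with an explicit `__call__` signature is the usual escape hatch for named or keyword-only arguments, and it also works for the kwargs-forwarding case.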
<python><python-3.x><mypy><python-typing><keyword-argument>
2023-11-04 12:37:05
1
9,958
frans
77,421,996
16,383,578
Efficient way to implement Minimax AI for Tic Tac Toe orders 3, 4, 5?
<p>I want to create an artificial intelligence that plays Tic Tac Toe better than any human.</p> <p>I have already written a completely working GUI Tic Tac Toe game with AI players, but all those AI players are bad. So I have created another AI using <a href="https://stackoverflow.com/questions/77417037/why-does-utilizing-a-simple-strategy-of-tic-tac-toe-lower-the-ais-win-rate">reinforcement learning</a>; it works and is guaranteed not to lose more than 99% of the time, but it doesn't win as often.</p> <p>I want to do better, so I want to use the Minimax algorithm to implement the AI. I have found an <a href="https://www.geeksforgeeks.org/finding-optimal-move-in-tic-tac-toe-using-minimax-algorithm-in-game-theory" rel="nofollow noreferrer">example</a> online, but the code quality is very poor. I am able to reimplement it more efficiently, but I don't know the most efficient way to implement it.</p> <p>I want the AI for 3x3, 4x4 and 5x5 Tic Tac Toe games. These games end when there is a line (row, column or diagonal) that is completely filled with the same player's pieces, or the whole board is filled.</p> <p>There are 3 possible states each cell can be in, on an order n board there are n<sup>2</sup> cells, so for an order n game of Tic Tac Toe there are a total of 3<sup>n<sup>2</sup></sup> possible states, regardless of validity and without merging reflections and rotations. For order 3 there are 19,683 possibilities, which is quite small; for order 4 there are 43,046,721 possibilities, that is more than 43 million, a big number, but bruteforceable. For order 5 that is 847,288,609,443, more than 847 billion, it is a huge number and not bruteforceable.</p> <p>I have checked all possible order 3 Tic Tac Toe states, and systematically enumerated all possible order 3 Tic Tac Toe states if a given player moves first.
There are 5478 states reachable if one player moves first, and a total of 8533 states reachable via normal gameplay.</p> <p>I want to know, what are more efficient ways to check whether a given state is legal, whether a state is a game over state, and whether a state can let one player win in one move.</p> <p>A state is legal if it can be reached via normal gameplay, a state is a game over state if there are no empty spots in some line and all pieces on the line are the same, and a state can let one player win in one move if there is exactly one empty spot in a line and all other pieces on it are the same. When the game is one move away from ending I want the AI to consider only such gaps.</p> <p>A state can be reached via normal gameplay if the absolute difference between the counts of pieces is less than or equal to 1, because players take turns alternately; one player cannot move in two consecutive turns. It must also satisfy that there are no two winners, because the game ends when there is a winner.
And it must also satisfy that the loser has made no more moves than the winner, because otherwise the loser would have made a move after there was already a winner, which is illegal.</p> <p>I have solved the aforementioned problems for 3x3 Tic Tac Toe using loops and sets, implementing the logic I mentioned, but I don't think they are very efficient.</p> <p>(A board is represented by a flat iterable, empty spots are represented as <code>&quot; &quot;</code>, and players <code>&quot;O&quot;, &quot;X&quot;</code>, the cell with index <code>i</code> corresponds to <code>*divmod(i, 3)</code> on the square board)</p> <pre><code>from typing import List, Optional, Tuple LINES = ( (0, 3, 1), (3, 6, 1), (6, 9, 1), (0, 7, 3), (1, 8, 3), (2, 9, 3), (0, 9, 4), (2, 7, 2), ) def is_valid(board: str) -&gt; bool: winners = set() winner = None for start, stop, step in LINES: line = board[start:stop:step] if len(set(line)) == 1 and (winner := line[0]) in {&quot;O&quot;, &quot;X&quot;}: winners.add(winner) return ( len(winners) &lt;= 1 and abs(board.count(&quot;O&quot;) - board.count(&quot;X&quot;)) &lt;= 1 and (not winner or board.count(winner) &gt;= board.count(&quot;OX&quot;.replace(winner, &quot;&quot;))) ) def is_playable(board: str) -&gt; bool: for start, stop, step in LINES: line = board[start:stop:step] if len(set(line)) == 1 and line[0] in {&quot;O&quot;, &quot;X&quot;}: return False return &quot; &quot; in board and abs(board.count(&quot;O&quot;) - board.count(&quot;X&quot;)) &lt;= 1 def check_state(board: str) -&gt; Tuple[bool, Optional[str]]: for start, stop, step in LINES: line = board[start:stop:step] if len(set(line)) == 1 and (winner := line[0]) in {&quot;O&quot;, &quot;X&quot;}: return True, winner return &quot; &quot; not in board, None def find_gaps(board: str, piece: str) -&gt; List[int]: gaps = [] for start, end, step in LINES: line = board[start:end:step] if line.count(piece) == 2 and &quot; &quot; in line: gaps.append(start + line.index(&quot; &quot;) * step) return gaps </code></pre> <p>What are more efficient ways to achieve
the above tasks?</p> <hr /> <h2>Update</h2> <p>I did some tests:</p> <pre><code>In [14]: board = &quot;OXOX &quot; In [15]: %timeit board[0:3:1] 109 ns ± 0.806 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each) In [16]: %timeit (board[0], board[1], board[2]) 124 ns ± 1.09 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each) In [17]: FILLED = {('X', 'X', 'X'), ('O', 'O', 'O')} In [18]: %timeit (board[0], board[1], board[2]) in FILLED 162 ns ± 1.39 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each) In [19]: %timeit len(set(board[0:3:1])) == 1 348 ns ± 10.6 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) In [20]: %timeit tuple(board[0:3:1]) in FILLED 301 ns ± 8.03 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) In [21]: new_board = [1 if s == 'O' else (-1 if s == 'X' else 0) for s in board] In [22]: new_board Out[22]: [1, -1, 1, -1, 0, 0, 0, 0, 0] In [23]: %timeit sum(new_board[0:3:1]) 269 ns ± 9.68 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) In [24]: %timeit tuple(board[i] for i in (0, 1, 2)) 699 ns ± 13 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) In [25]: %timeit a, b, c = (0, 1, 2); (board[a], board[b], board[c]) 146 ns ± 1.69 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each) </code></pre> <p>The most efficient way I can find is to store the indices of cells of rows, columns and diagonals, loop through the collection of indices tuples, and unpack the tuple to use each element to retrieve the cell of the board and build a tuple for that line.</p> <p>Then whether a line is full can be checked by checking if the tuple is <code>('O',)*n</code> or <code>('X',)*n</code>. Whether there is a gap can be checked by membership checking, there are only n possibilities for a given player where there is a single line that is exactly filled with n-1 pieces of the player and the other spot is empty. 
So two collections, one for each player, can be used to find such gaps. Using a dictionary results in faster membership checking.</p> <p>This is the best I can do but I don't know if it is super efficient.</p>
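Editorial note: a common further speed-up for this kind of per-line test (a sketch, not from the original post) is a bitboard encoding: one 9-bit mask per player, bit i = cell i, so a filled line is a single mask comparison and a winning gap is a one-bit remainder:

```python
# bit i of a mask corresponds to cell i of the flat board
LINES = (0b111000000, 0b000111000, 0b000000111,   # the three rows
         0b100100100, 0b010010010, 0b001001001,   # the three columns
         0b100010001, 0b001010100)                # the two diagonals

def has_win(mask: int) -> bool:
    # a player wins iff they hold every bit of some line
    return any(mask & line == line for line in LINES)

def winning_moves(mask: int, occupied: int) -> list:
    # empty cells that would complete a line for the player holding `mask`;
    # `occupied` is the union of both players' masks
    moves = []
    for line in LINES:
        gap = line & ~mask                       # line cells the player lacks
        if bin(gap).count("1") == 1 and not (gap & occupied):
            moves.append(gap.bit_length() - 1)   # cell index of the gap
    return moves
```

Legality checks become popcount comparisons on the two masks, and the whole state fits in 18 bits, which also makes memoizing minimax results over all 3<sup>9</sup> states cheap.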
<python><algorithm><tic-tac-toe>
2023-11-04 12:30:05
2
3,930
Ξένη Γήινος
77,421,991
9,565,342
GStreamer: properly send EOS event on ctrl+c
<p>I can successfully record video from videotestsrc via the following command:</p> <pre><code>gst-launch-1.0 -e videotestsrc ! videoconvert ! x264enc ! mp4mux ! filesink location=output.mp4 </code></pre> <p>When I run the command above and press ctrl+c, I get a playable <code>output.mp4</code> file.</p> <p>I want to replicate this behavior via Python Gst bindings. However, I'm unable to correctly handle ctrl+c behavior</p> <pre><code>import gi from time import sleep gi.require_version(&quot;Gst&quot;, &quot;1.0&quot;) from gi.repository import Gst, GLib # noqa def bus_call(bus, message, loop): t = message.type if t == Gst.MessageType.EOS: print(&quot;EOS&quot;) loop.quit() elif t == Gst.MessageType.ERROR: err, debug = message.parse_error() loop.quit() return True def main(): Gst.init(None) loop = GLib.MainLoop() pipeline = Gst.ElementFactory.make(&quot;pipeline&quot;, None) src = Gst.ElementFactory.make(&quot;videotestsrc&quot;, None) convert = Gst.ElementFactory.make(&quot;videoconvert&quot;, None) encode = Gst.ElementFactory.make(&quot;x264enc&quot;) mux = Gst.ElementFactory.make(&quot;mp4mux&quot;) sink = Gst.ElementFactory.make(&quot;filesink&quot;, None) sink.set_property(&quot;location&quot;, &quot;output.mp4&quot;) pipeline.add(src, convert, encode, mux, sink) src.link(convert) convert.link(encode) encode.link(mux) mux.link(sink) bus = pipeline.get_bus() bus.add_signal_watch() bus.connect(&quot;message&quot;, bus_call, loop) ret = pipeline.set_state(Gst.State.PLAYING) if ret == Gst.StateChangeReturn.FAILURE: pipeline.set_state(Gst.State.NULL) return try: loop.run() except KeyboardInterrupt: pipeline.send_event(Gst.Event.new_eos()) finally: pipeline.set_state(Gst.State.NULL) loop.quit() if __name__ == &quot;__main__&quot;: main() </code></pre> <p>Whenever I run the program and press ctrl+c, I get an unplayable <code>output.mp4</code>. How should I properly handle the ctrl+c event in my Python code?</p>
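Editorial note (an assumption about the cause, not verified against this setup): after `send_event(Gst.Event.new_eos())` the code falls straight through to `finally` and sets the pipeline to `NULL`, so the EOS never reaches `mp4mux`/`filesink` and the MP4 index (moov atom) is never written — exactly what `gst-launch-1.0 -e` waits for. One sketch of a fix is to re-enter the main loop after sending EOS and let `bus_call` quit it once the EOS message arrives on the bus:

```python
# sketch: let the EOS travel through the muxer before tearing the pipeline down
try:
    loop.run()
except KeyboardInterrupt:
    pipeline.send_event(Gst.Event.new_eos())
    loop.run()  # keep spinning; bus_call() calls loop.quit() on the EOS message
finally:
    pipeline.set_state(Gst.State.NULL)
```

A cleaner variant is registering SIGINT with `GLib.unix_signal_add` so the interrupt is handled inside the loop instead of as a `KeyboardInterrupt`.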
<python><gstreamer>
2023-11-04 12:28:13
1
1,156
NShiny
77,421,972
4,948,889
Finding ord of a red heart giving errors Python
<p>I am trying to print the ord of a red heart in Python.<br /> I have checked <a href="https://stackoverflow.com/questions/64161291/how-to-print-red-heart-in-python-3">How to print red heart in python 3</a> but it is not related to the <code>ord</code> function.</p> <p>When I tried <code>print(ord(❤️))</code> I received the following error:</p> <pre><code> File &quot;testord.py&quot;, line 1 print(ord(❤️)) ^ SyntaxError: invalid character '❤' (U+2764) </code></pre> <p>I am not sure what to do to get the <code>ord</code> of the red heart. I tried searching for the ord value online and got <code>9829</code>. But when I try to print it or send it to Messenger, it gives a white or colorless sort of heart.</p> <p>Can anyone tell me what to do in this situation?</p>
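Editorial note: two separate things are happening here. The `SyntaxError` is because the character must be inside a string literal, and `9829` is a *different* character (U+2665 BLACK HEART SUIT, the colorless one) — the red emoji heart is usually U+2764 followed by the variation selector U+FE0F, i.e. *two* code points, so `ord` on the whole thing fails even when quoted. A sketch:

```python
heart = "\u2764\ufe0f"   # what "❤️" usually is: U+2764 plus variation selector U+FE0F

print(len(heart))        # 2 -- two code points, so ord(heart) would raise TypeError
print(ord(heart[0]))     # 10084, the ord of U+2764 HEAVY BLACK HEART
print(ord("\u2665"))     # 9829 is U+2665 BLACK HEART SUIT -- the colorless heart

# the original SyntaxError: the character was not quoted;
# print(ord("❤")) works, print(ord(❤)) is not valid Python syntax
print(ord("\u2764"))     # 10084
```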
<python><unicode>
2023-11-04 12:21:06
1
7,303
Jaffer Wilson
77,421,613
16,108,271
Cannot run python Flask app cPanel using PhusionPassenger
<p>I am unable to run my flask app in cpanel</p> <p>My <code>passenger_wsgi.py</code> is</p> <pre class="lang-py prettyprint-override"><code>import os import sys from app import app application = app sys.path.insert(0, os.path.dirname(__file__)) </code></pre> <p>and my <code>app.py</code> is</p> <pre class="lang-py prettyprint-override"><code>from flask import Flask, render_template app = Flask(__name__, template_folder=&quot;static&quot;) # &quot;index.html&quot; is in static folder @app.route('/') def home(): return render_template('index.html') if __name__ == '__main__': # Even tried removing this two lines but no change app.run(debug=True) </code></pre> <p>Python version used 3.11.4</p> <p>Flask version used 3.0.0</p> <p>It also works locally.</p> <p>In the error section I am getting</p> <p><code>[ E 2023-11-04 03:08:30.0399 3259818/T47 age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /home/tyj6prmi0i7p/repositories/axel: The application process exited prematurely.</code></p> <p>I even ensured all the folder permissions are 0755 and file permissions are 0644.</p> <p>This is shown when I open the website.</p> <p><a href="https://i.sstatic.net/RJGh1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RJGh1.png" alt="enter image description here" /></a></p>
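Editorial note (an assumption about the cause): in the posted `passenger_wsgi.py` the `sys.path.insert` runs *after* `from app import app`, so it can never help that import; if Passenger's working directory is not the app directory, the import fails before the path is fixed. A sketch of the reordered file:

```python
# passenger_wsgi.py -- sketch: extend sys.path BEFORE importing the app
import os
import sys

sys.path.insert(0, os.path.dirname(__file__))

from app import app as application  # Passenger looks for a name `application`
```

If it still exits prematurely, cPanel's Passenger log (and a temporary `print` to `sys.stderr` at the top of this file) usually reveals which import is actually failing.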
<python><flask><cpanel><passenger>
2023-11-04 10:21:28
0
551
Timsib Adnap
77,421,350
1,709,475
sql query in python script not returning list of database tables
<p>I have the following code that aims to return, from the local system, the databases, the version of PostgreSQL, the size of the databases, the tables in each database (not beginning with pg_) and write all this data to a .csv file. Environment: PostgreSQL: v14.6, Python: v3.11, O/S: Win10</p> <pre><code>#------------------------------------------------------------------------------ import psycopg2 import sys import csv ## global variables database = 'postgres' db_table = 'results' db_host = 'localhost' db_user = 'postgres' db_password = 'postgres' csv_file = &quot;postgresData.csv&quot; # Connection parameters param_dic = { &quot;host&quot; : db_host, &quot;database&quot; : database, &quot;user&quot; : db_user, &quot;password&quot; : db_password } def connect(params_dic): conn = None try: # connect to the PostgreSQL server print('Connecting to the PostgreSQL database...') conn = psycopg2.connect(**params_dic) except (Exception, psycopg2.DatabaseError) as error: print(error) sys.exit(1) ## print the name of the connected database, using dict key (database).
print(&quot;\u001b[32m\tConnecting to the database: \u001b[0m\t{}&quot;.format(params_dic[&quot;database&quot;])) cursor = conn.cursor() # Get the PostgreSQL version cursor.execute(&quot;SELECT version();&quot;) postgres_version = cursor.fetchone()[0] ## print(postgres_version) # Get a list of databases cursor.execute(&quot;SELECT datname FROM pg_database;&quot;) databases = [row[0] for row in cursor.fetchall()] # for da in databases: # print(da) try: with open(&quot;database_info.csv&quot;, &quot;w&quot;, newline=&quot;&quot;) as csv_file: csv_writer = csv.writer(csv_file) csv_writer.writerow([&quot;Database Name&quot;, &quot;Table Name&quot;, &quot;Database Size&quot;, &quot;PostgreSQL Version&quot;]) # Fetch database names cursor.execute(&quot;SELECT datname FROM pg_database;&quot;) databases = [row[0] for row in cursor.fetchall()] for database in databases: try: cursor.execute(f&quot;SELECT pg_size_pretty(pg_database_size('{database}'));&quot;) database_size = cursor.fetchone()[0] # Fetch user-created base table names in the database cursor.execute(f&quot;&quot;&quot; SELECT relname FROM pg_class WHERE relkind = 'r' AND relnamespace = (SELECT oid FROM pg_namespace WHERE nspname = 'public') AND relname NOT LIKE 'pg_%'; &quot;&quot;&quot;) tables = [row[0] for row in cursor.fetchall()] for table in tables: csv_writer.writerow([database, table, database_size, postgres_version]) except Exception as inner_error: print(f&quot;Error fetching data for database '{database}': {inner_error}&quot;) print(&quot;Data has been written to 'database_info.csv'.&quot;) except (Exception, psycopg2.DatabaseError) as error: print(&quot;Error: &quot;, error) cursor.close() conn.close() print(&quot;Database information has been written to 'database_info.csv'.&quot;) ## I do not know why the following line is changing colour in Spyder IDE return conn def main(): conn = connect(param_dic) conn.close() if __name__ == &quot;__main__&quot;: main() </code></pre> <p>Two observations: 
Observation 1:</p> <pre><code>cursor.execute(f&quot;&quot;&quot; SELECT relname FROM pg_class WHERE relkind = 'r' AND relnamespace = (SELECT oid FROM pg_namespace WHERE nspname = 'public') AND relname NOT LIKE 'pg_%'; &quot;&quot;&quot;) </code></pre> <p>This is producing a warning which I cannot correct:</p> <pre><code>F-string is missing placeholders. </code></pre> <p>I would like to know why this is happening and how to correct the warning, please.</p> <p>Observation 2:</p> <pre><code>SELECT relname FROM pg_class WHERE relkind = 'r' AND relnamespace = (SELECT oid FROM pg_namespace WHERE nspname = 'public') AND relname NOT LIKE 'pg_%'; </code></pre> <p>This SQL is taken directly from the code segment above, and from the psql prompt it successfully returns the list of expected tables. Why is this SQL not working in the Python script?</p>
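Editorial note on both observations. Observation 1: the linter warning only means the `f` prefix is doing nothing, because the string contains no `{...}` placeholders — dropping the `f` yields an identical string. Observation 2 (an assumption about the cause): `pg_class` only describes the database the connection is attached to, and the script keeps one cursor on `postgres` for every loop iteration, so it lists the tables of `postgres` (usually none) each time; a fresh connection per database would be needed. A sketch:

```python
# Observation 1: a plain string is equivalent when there is nothing to interpolate
TABLES_SQL = """
    SELECT relname FROM pg_class
    WHERE relkind = 'r'
      AND relnamespace = (SELECT oid FROM pg_namespace WHERE nspname = 'public')
      AND relname NOT LIKE 'pg_%';
"""
assert "{" not in TABLES_SQL  # nothing for an f-string to fill in

# Observation 2 (sketch): the table query must run on a connection to EACH
# database in turn, since pg_class is per-database, e.g.
#   conn = psycopg2.connect(host=db_host, dbname=database,
#                           user=db_user, password=db_password)
```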
<python><sql><database><postgresql>
2023-11-04 08:54:20
1
326
Tommy Gibbons
77,421,184
12,930,735
Rolling mean with most updated record from another column
<p>I have a dataframe with two datetime columns, one for the execution date (&quot;execution_time&quot;) and another for predictions (&quot;valid_time&quot;). For each row, execution_time &lt;= valid_time, and for each valid_time I may have several execution_times.</p> <p>What I want to compute is a rolling mean by execution_time such that for every row I get the mean over the previous n hours in valid_time. BUT, when there are not enough hours in the given execution_time, I want to take the missing hours from the other execution_times, always taking the highest execution_time that is smaller than or equal to the one in the given row. (I hope the example makes this clearer.)</p> <p>Given this dataframe:</p> <pre><code>import datetime

import polars as pl

df = pl.DataFrame({
    'execution_time': [datetime.datetime(2023, 1, 1, 0)] * 10
                      + [datetime.datetime(2023, 1, 1, 6)] * 6,
    'valid_time': [datetime.datetime(2023, 1, 1, h) for h in range(10)]
                  + [datetime.datetime(2023, 1, 1, h) for h in range(6, 12)],
    'value': range(16),
})
</code></pre> <pre><code>execution_time       valid_time           value
2023-01-01 00:00:00  2023-01-01 00:00:00  0
2023-01-01 00:00:00  2023-01-01 01:00:00  1
2023-01-01 00:00:00  2023-01-01 02:00:00  2
2023-01-01 00:00:00  2023-01-01 03:00:00  3
2023-01-01 00:00:00  2023-01-01 04:00:00  4
2023-01-01 00:00:00  2023-01-01 05:00:00  5
2023-01-01 00:00:00  2023-01-01 06:00:00  6
2023-01-01 00:00:00  2023-01-01 07:00:00  7
2023-01-01 00:00:00  2023-01-01 08:00:00  8
2023-01-01 00:00:00  2023-01-01 09:00:00  9
2023-01-01 06:00:00  2023-01-01 06:00:00  10
2023-01-01 06:00:00  2023-01-01 07:00:00  11
2023-01-01 06:00:00  2023-01-01 08:00:00  12
2023-01-01 06:00:00  2023-01-01 09:00:00  13
2023-01-01 06:00:00  2023-01-01 10:00:00  14
2023-01-01 06:00:00  2023-01-01 11:00:00  15
</code></pre> <p>If I were to compute the mean with n=2, the desired result would be:</p> <pre><code>2023-01-01 00:00:00  2023-01-01 00:00:00  0   None
2023-01-01 00:00:00  2023-01-01 01:00:00  1   mean(1,0)
2023-01-01 00:00:00  2023-01-01 02:00:00  2   mean(2,1)
2023-01-01 00:00:00  2023-01-01 03:00:00  3   mean(3,2)
2023-01-01 00:00:00  2023-01-01 04:00:00  4   mean(4,3)
2023-01-01 00:00:00  2023-01-01 05:00:00  5   mean(5,4)
2023-01-01 00:00:00  2023-01-01 06:00:00  6   mean(6,5)
2023-01-01 00:00:00  2023-01-01 07:00:00  7   mean(7,6)
2023-01-01 00:00:00  2023-01-01 08:00:00  8   mean(8,7)
2023-01-01 00:00:00  2023-01-01 09:00:00  9   mean(9,8)
2023-01-01 06:00:00  2023-01-01 06:00:00  10  mean(10,5)
2023-01-01 06:00:00  2023-01-01 07:00:00  11  mean(11,10)
2023-01-01 06:00:00  2023-01-01 08:00:00  12  mean(12,11)
2023-01-01 06:00:00  2023-01-01 09:00:00  13  mean(13,12)
2023-01-01 06:00:00  2023-01-01 10:00:00  14  mean(14,13)
2023-01-01 06:00:00  2023-01-01 11:00:00  15  mean(15,14)
</code></pre> <p>The important points are:</p> <ul> <li>For the execution_time 2023-01-01 00:00:00 the mean is computed only from rows with that execution_time, but for the valid_time 2023-01-01 00:00:00 there are not enough previous valid_times, so the result is Null. For the same execution_time and valid_times greater than or equal to 2023-01-01 06:00:00, the values are taken from this execution_time and <strong>not</strong> from the next one.</li> <li>For the execution_time 2023-01-01 06:00:00 the edge case is the valid_time 2023-01-01 06:00:00. In this case, the extra value for computing the mean is taken from the highest previous execution_time (here 2023-01-01 00:00:00). For the rest of the valid_times, values are taken from the execution_time 2023-01-01 06:00:00 itself.</li> <li>The execution_time frequency is 6h, so for n greater than 6 more execution_times are involved.</li> </ul> <p>I have tried with rolling, rolling_mean, etc., but I can't achieve the desired result. Any ideas?</p>
<python><group-by><python-polars><rolling-computation>
2023-11-04 07:53:14
2
679
SergioGM
77,421,115
4,281,353
Python Ray - explanation of await queue.get_async(block=True, timeout=None)
<p><a href="https://docs.ray.io/en/latest/ray-core/api/doc/ray.util.queue.Queue.get_async.html#ray.util.queue.Queue.get_async" rel="nofollow noreferrer">Ray.util.queue.Queue.get_async</a> has a <code>block</code> argument. Why does an async method block? In my understanding, async means returning without blocking.</p> <p>I looked for an explanation in the user guide but have not found a clue so far.</p>
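<p>To make my mental model concrete, here is the analogy I currently have in mind, using the standard library's <code>asyncio.Queue</code> (this is my own guess about the semantics, not something the Ray docs state): <code>block</code> would choose between failing fast and awaiting, while &quot;async&quot; would only mean the wait suspends the coroutine rather than the whole thread. Is that the right reading?</p>

```python
import asyncio


async def demo() -> None:
    q: asyncio.Queue = asyncio.Queue()

    # Analogy for block=False: fail fast when the queue is empty.
    try:
        q.get_nowait()
    except asyncio.QueueEmpty:
        print("empty -> raised immediately")

    q.put_nowait(42)

    # Analogy for block=True: this coroutine is suspended until an item
    # arrives, but the event loop (and the OS thread) keeps running.
    item = await q.get()
    print("got", item)


asyncio.run(demo())
```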
<python><asynchronous><ray>
2023-11-04 07:24:03
1
22,964
mon
77,421,053
20,088,885
Blank Page After Odoo Login
<p>Why do I get a blank page after the Odoo login? It was working yesterday, but when I try to open it now, I only get an empty page. At first, I thought it was just my directory layout or that the <code>odoo.conf</code> was not properly configured.</p> <p><a href="https://i.sstatic.net/PR3gl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PR3gl.png" alt="enter image description here" /></a></p> <p>This is my <code>odoo.conf</code>; I pasted the full directory paths.</p> <pre><code>[options]
admin_passwd = $pbkdf2-sha512$25000$dU5JyZlz7j3nXGsNIeTc.w$cTmFgsasmArJX.s8xSfdb1uoGHLNbpmQVONRbTii2yFUbSUkZ9uCQ6vJpDgfheC9gDSgF6FVYGIYORq1kEZi8w
db_host = localhost
db_port = 5432
db_user = openpg
db_password = openpgpwd
addons_path = odoo , odoo\addons , customadds , C:\Users\user\OneDrive\Desktop\odoo\odoov16\server\addons , C:\Users\emman\OneDrive\Desktop\odoo\odoov16\server\customadds , customadds\estate
http_port = 8100
</code></pre> <p>I don't see any error in my terminal.</p> <pre><code>2023-11-04 06:45:39,479 20804 INFO ? odoo: database: openpg@localhost:5432
2023-11-04 06:45:39,908 20804 INFO ? odoo.addons.base.models.ir_actions_report: You need Wkhtmltopdf to print a pdf version of the reports.
2023-11-04 06:45:40,388 20804 INFO odoov16 odoo.modules.loading: loading 1 modules...
2023-11-04 06:45:40,401 20804 INFO odoov16 odoo.modules.loading: 1 modules loaded in 0.01s, 0 queries (+0 extra)
2023-11-04 06:45:40,454 20804 INFO odoov16 odoo.modules.loading: updating modules list
2023-11-04 06:45:40,460 20804 INFO odoov16 odoo.addons.base.models.ir_module: ALLOW access to module.update_list on [] to user __system__ #1 via n/a
2023-11-04 06:45:41,256 20804 INFO ? odoo.service.server: HTTP service (werkzeug) running on DESKTOP-C4ONJDV:8100
2023-11-04 06:45:41,425 20804 WARNING odoov16 odoo.modules.module: Missing `license` key in manifest for 'estate', defaulting to LGPL-3
2023-11-04 06:45:43,401 20804 INFO odoov16 odoo.addons.base.models.ir_module: ALLOW access to module.button_upgrade on ['Real Estate'] to user __system__ #1 via n/a
2023-11-04 06:45:43,402 20804 INFO odoov16 odoo.addons.base.models.ir_module: ALLOW access to module.update_list on ['Real Estate'] to user __system__ #1 via n/a
2023-11-04 06:45:45,281 20804 INFO odoov16 odoo.addons.base.models.ir_module: ALLOW access to module.button_install on [] to user __system__ #1 via n/a
2023-11-04 06:45:45,363 20804 INFO odoov16 odoo.modules.loading: loading 66 modules...
2023-11-04 06:45:45,363 20804 INFO odoov16 odoo.modules.loading: Loading module estate (2/66)
2023-11-04 06:45:45,493 20804 INFO odoov16 odoo.modules.registry: module estate: creating or updating database tables
2023-11-04 06:45:45,717 20804 INFO odoov16 odoo.modules.loading: loading estate/security/ir.model.access.csv
2023-11-04 06:45:45,737 20804 INFO odoov16 odoo.modules.loading: loading estate/view/estate_menus.xml
2023-11-04 06:45:45,754 20804 INFO odoov16 odoo.modules.loading: loading estate/view/estate_property_views.xml
2023-11-04 06:45:45,819 20804 INFO odoov16 odoo.modules.loading: Module estate loaded in 0.46s, 102 queries (+102 other)
2023-11-04 06:45:47,262 20804 INFO odoov16 odoo.modules.loading: 66 modules loaded in 1.90s, 102 queries (+102 extra)
2023-11-04 06:45:47,851 20804 INFO odoov16 odoo.modules.loading: Model test.model is declared but cannot be loaded! (Perhaps a module was partially removed or renamed)
2023-11-04 06:45:47,858 20804 INFO odoov16 odoo.modules.registry: verifying fields for every extended model
2023-11-04 06:45:48,376 20804 INFO odoov16 odoo.modules.loading: Modules loaded.
2023-11-04 06:45:48,386 20804 INFO odoov16 odoo.modules.registry: Registry loaded in 8.053s
</code></pre> <p>But when I open the console in my browser, I get an error 500.</p> <p><a href="https://i.sstatic.net/inJ5K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/inJ5K.png" alt="enter image description here" /></a></p> <p>I even opened pgAdmin 4, but it still doesn't work.</p>
<python><odoo>
2023-11-04 06:59:02
1
785
Stykgwar
77,421,030
6,660,373
How to generate the UML diagram from the python code
<p>I have this code <a href="https://github.com/ninjakx/Low-Level-Design/tree/main/LLD(Python)/ShoppingCart" rel="nofollow noreferrer">repo</a>.</p> <p>I created a manual UML diagram, which looks like this: <a href="https://i.sstatic.net/h8DHw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h8DHw.png" alt="enter image description here" /></a></p> <p>I am trying to auto-generate the UML via <a href="https://pylint.readthedocs.io/en/latest/pyreverse.html" rel="nofollow noreferrer">pyreverse</a>:</p> <p><code>pyreverse -o png -p ShoppingCart ./mainService.py</code></p> <blockquote> <p>Format png is not supported natively. Pyreverse will try to generate it using Graphviz...</p> </blockquote> <p>Unfortunately, it gives me a blank diagram. What can I do to get the project's classes into the diagram?</p> <p>This is the file structure:</p> <pre><code>.
├── Entity
│   ├── Apple.py
│   ├── Buy1Get1FreeApple.py
│   ├── Buy3OnPriceOf2Orange.py
│   ├── Offer.py
│   ├── Orange.py
│   ├── Product.py
│   └── ShoppingCart.py
├── Enum
│   └── ProductType.py
└── mainService.py
</code></pre>
<python><uml><class-diagram><pyreverse>
2023-11-04 06:50:29
1
13,379
Pygirl