76,195,211
48,956
How to enforce in-order-of-receival processing of async message handlers?
<p>Suppose we have a message handler. Messages might be received from a TCP socket and dispatched to a handler:</p> <pre class="lang-py prettyprint-override"><code>async def on_recv(msg):
    await handle(msg)

while True:
    msg = await socket.recv()
    await on_recv(msg)
</code></pre> <p>This is problematic because <code>handle</code> may take some time, or may send another message and wait on the socket for a response, and become deadlocked (<code>on_recv</code> has not returned!). To prevent this, we may make the handler a task so that it does not block us from receiving the next message:</p> <pre class="lang-py prettyprint-override"><code>async def on_recv(msg):
    asyncio.Task(handle(msg))
</code></pre> <p>However, here I believe it's true that we lose guarantees about the order of processing (specifically, the order in which the tasks start). Suppose I wanted to make sure that each handler starts its execution in the order of message receipt. How might I do that? My goal here is to introduce more deterministic behavior (it's OK that this isn't guaranteed once the handler hits IO boundaries).</p> <p>In trying to find a solution, it feels like a finger puzzle. Everything I try either leaves messages potentially never starting, or involves a scheduler which itself has the same issue as the original dispatcher (ceding order to an <code>asyncio.Task</code>).</p>
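One possible answer (a sketch, not from the question itself): have a single dispatcher drain an `asyncio.Queue` and, for each message, refuse to dequeue the next one until the current handler signals (via an `Event`) that it has started running. Handlers then start strictly in receive order but are free to run concurrently after their first await. All names here are illustrative:

```python
import asyncio

async def handle(msg, log):
    log.append(msg)            # record that this handler has started
    await asyncio.sleep(0.01)  # simulate IO; ordering past this point is free

async def dispatcher(queue, log):
    while True:
        msg = await queue.get()
        if msg is None:        # sentinel: stop dispatching
            return
        started = asyncio.Event()

        async def run(m, evt=started):
            evt.set()          # signal: this handler has begun executing
            await handle(m, log)

        asyncio.create_task(run(msg))
        await started.wait()   # don't dispatch the next message until then

async def main():
    queue = asyncio.Queue()
    log = []
    for i in range(5):
        queue.put_nowait(i)
    queue.put_nowait(None)
    await dispatcher(queue, log)
    await asyncio.sleep(0.05)  # let in-flight handlers finish
    return log

order = asyncio.run(main())
print(order)  # [0, 1, 2, 3, 4] -- handlers started in receive order
```

The `Event` makes the start order deterministic by construction, rather than relying on the event loop's FIFO scheduling of freshly created tasks, which is an implementation detail.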
<python><python-asyncio>
2023-05-07 16:56:55
1
15,918
user48956
76,195,132
13,142,245
Python dictionary filter only returning first valid occurrence
<p>I'm implementing filter functionality for a dictionary.</p> <pre class="lang-py prettyprint-override"><code>class Filter:
    def __init__(self, obj):
        self.obj = obj

    def isin(self, key, lst):
        self.obj = {k:[v[i] for i in range(len(self.obj)+1) if self.obj[key][i] in lst] for k,v in self.obj.items()}

    def gt_x(self,key,value):
        self.obj = {k:[v[i] for i in range(len(self.obj)+1) if self.obj[key][i] &gt; value] for k,v in self.obj.items()}
</code></pre> <p>The interesting thing I've noted is that it only returns the first occurrence which meets the filter requirements.</p> <pre class="lang-py prettyprint-override"><code>d = {'a':[1,1,2,2,3,3], 'b':[3,3,2,2,1,1,]}
f = Filter(d)
f.isin(key='a',lst=[2,3])
f.obj
&gt;&gt;&gt; {'a': [2], 'b': [2]}
# desired: {'a': [2,2,3,3], 'b': [2,2,1,1]}
</code></pre> <p>And</p> <pre class="lang-py prettyprint-override"><code>d = {'a':[1,1,2,2,3,3], 'b':[3,3,2,2,1,1,]}
f = Filter(d)
f.gt_x('a', value=1)
f.obj
&gt;&gt;&gt; {'a': [2], 'b': [2]}
# desired: {'a': [2,2,3,3], 'b': [2,2,1,1]}
</code></pre> <p>I know that in practice I could get this functionality from Pandas or other data libraries. But this is more of a curiosity now than anything else. Why are these class methods only returning the first valid occurrence?</p>
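The likely culprit: `len(self.obj)` is the number of dictionary *keys* (2), not the length of the value lists (6), so `range(len(self.obj)+1)` only inspects indices 0–2 of each list. Iterating over the key column itself avoids hard-coding any length; a sketch of a corrected version:

```python
class Filter:
    def __init__(self, obj):
        self.obj = obj

    def isin(self, key, lst):
        # positions i where the key column matches, across ALL elements
        keep = [i for i, x in enumerate(self.obj[key]) if x in lst]
        self.obj = {k: [v[i] for i in keep] for k, v in self.obj.items()}

    def gt_x(self, key, value):
        keep = [i for i, x in enumerate(self.obj[key]) if x > value]
        self.obj = {k: [v[i] for i in keep] for k, v in self.obj.items()}

f = Filter({'a': [1, 1, 2, 2, 3, 3], 'b': [3, 3, 2, 2, 1, 1]})
f.isin(key='a', lst=[2, 3])
print(f.obj)  # {'a': [2, 2, 3, 3], 'b': [2, 2, 1, 1]}
```

Computing `keep` once also avoids re-reading `self.obj[key]` mid-rebuild, which the original comprehension does while `self.obj` is being replaced.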
<python><filter><hashmap>
2023-05-07 16:42:54
1
1,238
jbuddy_13
76,195,047
753,707
Get quotients and remainder from division of floats
<p>What is the Pythonic approach to getting the quotient and remainder from division of <strong>floats</strong>? Take 8 divided by 3, which is 2.66666666666.</p> <p>I want:</p> <pre><code>aPythonOperator(8,3) = 2.0, 0.66666666666
</code></pre> <p>but I cannot find a clean way of doing this without the remainder being rounded. For completeness, the Python division operators are:</p> <pre><code>import math

a = 8.0
b = 3.0
print(f'Division:{a/b}')
print(f'Floored division: {a//b}')
print(f'Modulo: {a%b}')
print(f'Remainder: {math.remainder(a,b)}') # rounds to int and yields -1 (?)
print(f'Divmod: {divmod(a, b)}')
</code></pre> <p>which gives:</p> <pre><code>Division:2.6666666666666665
Floored division: 2.0
Modulo: 2.0
Remainder: -1.0
</code></pre> <p>These don't help, but I expected the <code>divmod</code> function to work. However, <code>divmod</code> floors the quotient and returns the modulo, not the fractional part:</p> <pre><code>print(f'Divmod: {divmod(a, b)}')
Divmod: (2.0, 2.0)
</code></pre> <p>I can get the behaviour I want by creating my own version of <code>divmod</code>:</p> <pre><code>def hackDivmod(a,b): #HACK
    Div = (a/b)
    floredDiv = (a//b)
    return floredDiv, Div-floredDiv
</code></pre> <p>This works as desired:</p> <pre><code>HackDivmod: (2.0, 0.6666666666666665)
</code></pre> <p>But is there a better way of doing this using built-in functions or operators?</p>
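One built-in that matches the desired output is `math.modf`, which splits a float into its fractional and integral parts (note it returns the fractional part *first*):

```python
import math

# math.modf(x) -> (fractional part, integral part), both as floats
frac, whole = math.modf(8 / 3)
print(whole, frac)  # 2.0 0.6666666666666665
```

One caveat worth checking against your use case: for negative inputs `modf` truncates toward zero (`modf(-8/3)` gives `(-0.666..., -2.0)`), whereas `//` floors, so the two conventions disagree for negative numbers.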
<python>
2023-05-07 16:24:43
3
454
DUFF
76,194,893
7,437,143
How to overwrite a print statement line, after printing text below it?
<h2>Context</h2> <p>I have difficulty finding the right Stack Overflow question for this purpose: I would like to update a printed line in Python with a value that is computed after something else is printed. The <code>something else</code> should stay below the line that is updated.</p> <p>For example:</p> <pre class="lang-py prettyprint-override"><code>first=&quot;hello&quot;
third=&quot;&quot;
print(f'{first} \r{third}', end=&quot; &quot;, flush=True)
second=&quot;somethingelse&quot;
print(second)
if second == &quot;somethingelse&quot;:
    third=&quot;world&quot;
else:
    third=&quot;Universe&quot;
</code></pre> <p>Should print:</p> <pre><code>hello
something.
</code></pre> <p>which is then changed to:</p> <pre><code>hello world
something
</code></pre> <p>Or it should print:</p> <pre><code>hello
something.
</code></pre> <p>which is then changed to:</p> <pre><code>hello universe
something
</code></pre> <p>It is important that:</p> <ul> <li>hello is printed before something is printed.</li> <li>world/universe is printed on the same line as hello.</li> <li>something is below the hello universe/world line.</li> <li>something can consist of an arbitrary number of printed lines.</li> </ul> <h2>XY-problem</h2> <p>I modified the <a href="https://github.com/navdeep-G/showme/blob/master/showme/core.py" rel="nofollow noreferrer">showme</a> pip package to show the runtimes of functions. I would like the function name to be printed, followed by the output of the function being run, and still print the runtime behind/on the same line as the function name (to reduce the CLI clutter).</p> <h2>Question</h2> <p>How do I update a line with a value that is computed after other stuff is printed below that line?</p> <h2>Related Questions:</h2> <p><a href="https://stackoverflow.com/questions/3249524/print-in-one-line-dynamically">Print in one line dynamically</a></p> <p><a href="https://stackoverflow.com/questions/13094994/how-can-i-print-something-and-then-call-a-function-with-a-print-on-the-same-lin">How can I print something, and then call a function with a print on the same line?</a></p> <p><a href="https://stackoverflow.com/questions/5290994/remove-and-replace-printed-items">Remove and Replace Printed items</a></p> <p><a href="https://stackoverflow.com/questions/5419389/how-to-overwrite-the-previous-print-to-stdout">How to overwrite the previous print to stdout?</a></p>
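`\r` only returns to the start of the *current* line; to reach a line that already has output below it you need cursor movement. One sketch uses ANSI escape sequences (supported by most terminals, but not by plain pipes or log files; libraries like `curses` or `rich` are the heavier alternatives). The helper name is illustrative:

```python
import sys

def rewrite_line_above(n_up, new_text):
    """Rewrite the line `n_up` rows above the cursor, then come back.

    ESC[{n}A moves the cursor up, ESC[K clears to end of line,
    ESC[{n}B moves back down.
    """
    seq = f"\x1b[{n_up}A\r\x1b[K{new_text}\x1b[{n_up}B\r"
    sys.stdout.write(seq)
    sys.stdout.flush()
    return seq  # returned so the caller can inspect what was emitted

print("hello")
print("something")  # arbitrary output below the line to be updated
seq = rewrite_line_above(2, "hello world")  # 'hello' becomes 'hello world'
```

If "something" can be an arbitrary number of lines, the caller must count the lines printed in between so it knows how far up to move.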
<python><printing>
2023-05-07 15:46:26
2
2,887
a.t.
76,194,847
14,183,155
Numpy reshape behaves differently on square matrices
<p>I have the following data:</p> <pre class="lang-py prettyprint-override"><code>weights = [
    np.array([np.array([1, 2, 3]), np.array([1, 2, 3])]),
    np.array([np.array([1, 2, 3]), np.array([1, 2, 3])]),
    np.array([np.array([1, 2, 3]), np.array([1, 2, 3])]),
    np.array([np.array([1, 2, 3]), np.array([1, 2, 3])]),
    np.array([np.array([1, 2, 3]), np.array([1, 2, 3])]),
]
len_models = len(weights)
len_layers = len(weights[0])
</code></pre> <p>If I try to reshape the data as such: <code>x = np.reshape(weights, (len_layers, len_models))</code>, I'm getting an error:</p> <blockquote> <p>cannot reshape array of size 30 into shape (2,5)</p> </blockquote> <p>However, if the weights have the following values:</p> <pre class="lang-py prettyprint-override"><code>weights = [
    np.array([np.array([1, 2, 3]), np.array([1, 2, 3, 4])]),
    np.array([np.array([1, 2, 3]), np.array([1, 2, 3, 4])]),
    np.array([np.array([1, 2, 3]), np.array([1, 2, 3, 4])]),
    np.array([np.array([1, 2, 3]), np.array([1, 2, 3, 4])]),
    np.array([np.array([1, 2, 3]), np.array([1, 2, 3, 4])]),
]
</code></pre> <p>The reshape operation works.</p> <p>Why is that?</p>
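A likely explanation (a sketch of the mechanism, not from the question): with equal-length inner arrays, NumPy descends all the way and builds a regular numeric array of shape `(5, 2, 3)` with 30 elements, which cannot be reshaped to `(2, 5)`. With ragged inner arrays it has to stop descending at depth 2 and stores object references instead, giving a `(5, 2)` array of 10 elements, which reshapes fine:

```python
import numpy as np

# regular case: one homogeneous block of numbers
regular = [np.array([[1, 2, 3], [1, 2, 3]]) for _ in range(5)]
arr = np.array(regular)
print(arr.shape, arr.size)  # (5, 2, 3) 30 -> can't be reshaped to (2, 5)

# ragged case: rows of unequal length force an array *of objects*
ragged = [
    np.array([np.array([1, 2, 3]), np.array([1, 2, 3, 4])], dtype=object)
    for _ in range(5)
]
obj_arr = np.array(ragged)
print(obj_arr.shape, obj_arr.dtype)               # (5, 2) object -> size 10
print(np.reshape(obj_arr, (2, 5)).shape)          # (2, 5) works
```

So the "working" case is arguably the more surprising one: the reshape succeeds only because each length-3/length-4 pair is being treated as two opaque objects rather than as numbers.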
<python><numpy>
2023-05-07 15:37:19
1
2,340
Vivere
76,194,609
4,575,197
If the name exists in the Excel file, check if a folder for that name exists; if true, copy the folder
<p>I've been working on this for days but no luck. I have some folders inside another folder, and I need to check whether the names of these folders appear in an Excel sheet. If so, I want to copy the files to another folder. It seemed an easy job, but I'm kind of new to Python, so I'm lost.</p> <p>An example of the list of names in the Excel file:</p> <pre><code>[mike nicolai Tesla, Thomas Edison, Edie Morphy, Josef mike Tesla Johanssen]
</code></pre> <p>An example of the directories; the overall schema is R:\this folder\Folder\Subfolder\lastName, first Name</p> <pre><code>R:\this folder\docs\GERMANY\Tesla Johanssen, Josef mike
R:\this folder\docs\GERMANY\Morphy, Edie
R:\this folder\docs\France\nicolai Tesla, Josef mike
</code></pre> <p>I have made these functions in order to copy or extract the names accordingly. Due to the middle names, I had to check whether the function returns more than just a first and last name. I have two functions for that purpose: <em>sort_Names(_name)</em> &amp; <em>extract_Names_From_Folders(folder_path)</em></p> <pre><code>import os
from glob import glob
import shutil

def extract_Names_From_Folders(folder_path):
    folder_Name=folder_path.split(sep=&quot;\\&quot;)
    folder_Name=folder_Name[-1::]
    if len(''.join(map(str, folder_Name)).split(' '))&gt;2:
        _lName=''.join(map(str, folder_Name)).split(&quot; &quot;)[0]
        _mName=''.join(map(str, folder_Name)).split(&quot; &quot;)[1]
        _fName=''.join(map(str, folder_Name)).split(&quot; &quot;)[2]
        return _fName,_mName,_lName
    elif len(''.join(map(str, folder_Name)).split(' '))&gt;1:
        _lName=''.join(map(str, folder_Name)).split(&quot; &quot;)[0]
        _fName=''.join(map(str, folder_Name)).split(&quot; &quot;)[1]
        return _fName,_lName
    elif len(''.join(map(str, folder_Name)).split(' '))&gt;0:
        _fName=''.join(map(str, folder_Name)).split(&quot; &quot;)[0]
        return _fName

def sort_Names(_name):
    if len(''.join(map(str, _name)).split(&quot; &quot;))==4:
        excel_fName2=''.join(map(str, _name)).split(&quot; &quot;)[3]
        excel_fName1=''.join(map(str, _name)).split(&quot; &quot;)[2]
        excel_mName= ''.join(map(str, _name)).split(&quot; &quot;)[0]
        excel_lName=''.join(map(str, _name)).split(&quot; &quot;)[1]
        return excel_fName1,excel_fName2,excel_mName,excel_lName
    elif len(''.join(map(str, _name)).split(&quot; &quot;))==3:
        excel_fName=''.join(map(str, _name)).split(&quot; &quot;)[2]
        excel_mName= ''.join(map(str, _name)).split(&quot; &quot;)[0]
        excel_lName=''.join(map(str, _name)).split(&quot; &quot;)[1]
        return excel_fName,excel_mName,excel_lName
    elif len(''.join(map(str, _name)).split(&quot; &quot;))==2:
        excel_fName=''.join(map(str, _name)).split(&quot; &quot;)[1]
        excel_lName=''.join(map(str, _name)).split(&quot; &quot;)[0]
        return excel_fName,excel_lName
    elif len(''.join(map(str, _name)).split(&quot; &quot;))==1:
        excel_fName=''.join(map(str, _name)).split(&quot; &quot;)[0]
        return excel_fName
    else:
        return [0]

def copy_the_file(folder_name,_name,df):
    index = _name.index(folder_name)
    df.at[_name, &quot;User Request Form Created?\n[Yes/No]&quot;] = &quot;Yes&quot;
    # Copy the subfolder to the Proof folder
    proof_folder_path = r&quot;r:\blabla\Proof&quot;
    shutil.copytree(folder_name, os.path.join(proof_folder_path, _name), dirs_exist_ok=True)
</code></pre> <p>Here's the main part. This code doesn't currently work and I have no idea <em>why</em>. It got complex so fast. I'm pretty sure it can be done in an easier and more Pythonic way. But I have no clue how.</p> <pre><code>import pandas as pd
from glob import glob

# Set up file paths
excel_file_path = r&quot;k:\blabla\path.xlsx&quot;
folder_path = r&quot;r:\blabla\path\*\*&quot;

df = pd.read_excel(excel_file_path, header=1)

# Get the list of names from the &quot;Name&quot; column
names = df[&quot;Name&quot;].tolist()

for folder_name in glob(folder_path, recursive = True):
    # Check if function returns 3 or 2 variables. A function returns a Tuple
    mName=''
    tuple_=extract_Names_From_Folders(folder_name)
    if len(tuple_) == 3:
        fName = tuple_[0] if tuple_ else None
        mName= tuple_[1] if len(tuple_) &gt; 1 else None
        lName= tuple_[2] if len(tuple_) &gt; 2 else None
    elif len(tuple_) == 2:
        fName = tuple_[0] if tuple_ else None
        lName= tuple_[1] if len(tuple_) &gt; 1 else None
    elif len(tuple_) == 1:
        fName = tuple_[0] if tuple_ else None
    #convert list to string then split by comma
    # print(''.join(map(str, folder_Name)).split(',')[0])
    for _name in names:
        if len(_name)&gt;2:
            tuple_=sort_Names(_name)
            if len(tuple_) == 4:
                excel_fName2=tuple_[1] if tuple_ else None
                excel_fName1=tuple_[0] if tuple_ else None
                excel_mName= tuple_[2] if tuple_ else None
                excel_lName=tuple_[3] if tuple_ else None
                print(excel_fName1+' '+excel_fName2,excel_mName+' '+excel_lName)
                # Check if the subfolder matches a name in the Excel file
                if excel_fName1+' '+excel_fName2 == fName and excel_mName+&quot; &quot;+excel_lName == lName:
                    copy_the_file(folder_name,_name,df)
            if len(tuple_) == 3:
                excel_fName=tuple_[0] if tuple_ else None
                excel_mName= tuple_[1] if tuple_ else None
                excel_lName=tuple_[2] if tuple_ else None
                print(excel_fName+' '+excel_mName,excel_lName)
                if excel_fName == fName and excel_mName+&quot; &quot;+excel_lName == lName:
                    copy_the_file(folder_name,_name,df)
            elif len(tuple_) == 2:
                excel_fName = tuple_[0] if tuple_ else None
                excel_lName = tuple_[1] if len(tuple_) &gt; 1 else None
                print(excel_fName +' '+excel_lName)
                if excel_fName==fName and excel_lName == lName:
                    copy_the_file(folder_name,_name,df)
            elif len(tuple_) == 1:
                fName = tuple_[0] if tuple_ else None
</code></pre>
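A much simpler approach (a sketch, with hypothetical helper names): instead of disassembling names into first/middle/last slots, normalize both the Excel names and the folder basenames into an order- and case-insensitive key of name tokens, then compare keys. The comma order and the number of middle names then stop mattering:

```python
def name_key(raw):
    """Order- and case-insensitive key for a person's name.

    'Tesla Johanssen, Josef mike' and 'Josef mike Tesla Johanssen'
    both map to the same key.
    """
    tokens = raw.replace(',', ' ').split()
    return tuple(sorted(t.lower() for t in tokens))

def folders_to_copy(folder_paths, excel_names):
    wanted = {name_key(n) for n in excel_names}
    # the paths come from a Windows share, so split on the backslash
    return [p for p in folder_paths
            if name_key(p.rsplit('\\', 1)[-1]) in wanted]

names = ['mike nicolai Tesla', 'Thomas Edison', 'Edie Morphy',
         'Josef mike Tesla Johanssen']
folders = [r'R:\this folder\docs\GERMANY\Tesla Johanssen, Josef mike',
           r'R:\this folder\docs\GERMANY\Morphy, Edie',
           r'R:\this folder\docs\France\nicolai Tesla, Josef mike']
matched = folders_to_copy(folders, names)
print(matched)  # the two GERMANY folders match; the France one does not
```

Each matched folder can then be handed to `shutil.copytree` as in the question. Note this treats a name as a bag of tokens, so two different people with the same tokens in a different order would collide.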
<python><pandas><excel><algorithm><directory>
2023-05-07 14:45:21
1
10,490
Mostafa Bouzari
76,194,563
18,904,265
UnboundLocalError at ConnectionError for API call with requests
<p>I am trying to write myself a function to handle my calls to an API, which returns a JSON. This is my function at the moment:</p> <pre class="lang-py prettyprint-override"><code>import requests

def get_data(url:str, headers:str, timeout_seconds=5) -&gt; str:
    &quot;&quot;&quot;Return JSON file from API call.&quot;&quot;&quot;
    try:
        response = requests.get(
            url,
            headers=headers,
            timeout=timeout_seconds
        )
        response.raise_for_status()
    except requests.exceptions.HTTPError as errh:
        print(&quot;Http Error:&quot;, errh)
    except requests.exceptions.ConnectionError as errc:
        print(&quot;Error Connecting:&quot;, errc)
    except requests.exceptions.Timeout as errt:
        print(&quot;Timeout Error:&quot;, errt)
    except requests.exceptions.RequestException as err:
        print(&quot;Unspecified error:&quot;, err)
    return response.json()
</code></pre> <p>This works fine when the request is successful, but when trying to simulate an error by disconnecting from the internet, I get this error additionally to the ConnectionError. Is this something I'm not handling correctly in my function?</p> <pre><code>UnboundLocalError: local variable 'response' referenced before assignment
</code></pre>
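The cause: `response` is only bound if `requests.get` returns. When it raises, the matching `except` block prints the error and execution falls through to `return response.json()`, where the name was never assigned. One fix is to return (or re-raise) inside each handler. A sketch of the pattern with a stand-in `fetch` callable in place of `requests.get` (the same shape applies with requests' exception classes):

```python
def get_data(fetch):
    """Return the fetched data, or None if the call failed."""
    try:
        response = fetch()
    except OSError as err:   # stand-in for requests.exceptions.*
        print("Error:", err)
        return None          # response was never bound: bail out here
    return response

def ok():
    return {"status": "ok"}

def broken():
    raise OSError("no network")

print(get_data(ok))      # {'status': 'ok'}
print(get_data(broken))  # None (after printing the error)
```

Returning a sentinel like `None` forces every caller to check for failure; re-raising after logging is the usual alternative when the caller should decide how to recover.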
<python>
2023-05-07 14:33:59
2
465
Jan
76,194,506
20,568,970
Why is collide_mask significantly slower than collide_rect in pygame?
<p>I have this piece of code here to add collision to my pygame game:</p> <pre class="lang-py prettyprint-override"><code>collidable_sprites = self.tiles[&quot;ground&quot;].sprites.sprites() + self.tiles[&quot;fg_machines&quot;].sprites.sprites()
for sprite in collidable_sprites:
    if sprite.rect.colliderect(player.rect):
</code></pre> <p>and this is working smoothly at 60 fps (my fps for clock.tick), but of course that collision detection is very rough unless your game is just rectangles, which is not true in my case. So I switched to mask collision:</p> <pre class="lang-py prettyprint-override"><code>collidable_sprites = self.tiles[&quot;ground&quot;].sprites.sprites() + self.tiles[&quot;fg_machines&quot;].sprites.sprites()
for sprite in collidable_sprites:
    if pygame.sprite.collide_mask(player, sprite):
</code></pre> <p>which works better, but drops my fps to 20. I understand that this is far harder to calculate, but is this really the way to do it, given the 3x frame drop?</p>
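Two standard mitigations (a sketch, not from the question): pygame's `collide_mask` uses a sprite's `.mask` attribute if present and otherwise rebuilds the mask from the surface on every call, so precomputing `self.mask = pygame.mask.from_surface(self.image)` once per sprite saves a lot; and a cheap rect test as a first pass means the expensive mask test only runs on near misses. Assuming pygame 2 (for `Mask(..., fill=True)`):

```python
import pygame

def masks_collide(rect_a, mask_a, rect_b, mask_b):
    # phase 1: cheap bounding-box rejection skips most sprites
    if not rect_a.colliderect(rect_b):
        return False
    # phase 2: pixel-perfect mask test only for the few near misses
    offset = (rect_b.x - rect_a.x, rect_b.y - rect_a.y)
    return mask_a.overlap(mask_b, offset) is not None

# two fully solid 10x10 masks, no display needed
a_rect = pygame.Rect(0, 0, 10, 10)
b_rect = pygame.Rect(5, 5, 10, 10)
a_mask = pygame.mask.Mask((10, 10), fill=True)
b_mask = pygame.mask.Mask((10, 10), fill=True)

hit = masks_collide(a_rect, a_mask, b_rect, b_mask)                 # overlap
miss = masks_collide(a_rect, a_mask, pygame.Rect(99, 99, 10, 10), b_mask)
print(hit, miss)  # True False
```

Spatial partitioning (only testing tiles near the player instead of concatenating every collidable sprite each frame) is the other common lever if precomputed masks alone don't recover the frame rate.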
<python><performance><pygame>
2023-05-07 14:22:35
1
334
nope
76,194,364
12,894,926
How to build a self-contained Python venv?
<p>Is it possible to build a self-contained Python virtual environment? By self-contained, I mean that all files needed to execute the program are there, <strong>including Python.</strong></p> <p>I'm using Poetry to manage <code>venv</code>s. Looking at <code>venv</code>s I created using <code>poetry install</code>, I see that the dependencies are actually copied, but Python is symlinked.</p> <p>For example:</p> <pre><code>&gt;&gt; ls -lah my-venv/bin/
python -&gt; /Users/my-user/.pyenv/versions/3.11.2/bin/python
</code></pre> <p>Also, I tried the <code>virtualenvs.options.always-copy</code> <a href="https://python-poetry.org/docs/configuration/#virtualenvsoptionsalways-copy" rel="nofollow noreferrer">Poetry config</a>, which translates to the <code>--always-copy</code> <a href="https://virtualenv.pypa.io/en/latest/cli_interface.html#always-copy" rel="nofollow noreferrer">virtualenv option</a>, but it didn't copy Python.</p> <p>Since an expected answer would be &quot;use containers&quot;, I say in advance that it's not an option for the given use case.</p> <p><strong>This question aims at a solution where an &quot;all-in&quot; directory/file can be uploaded to a Linux server and just run without depending on any system-installed software.</strong></p>
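For the narrow "copy instead of symlink" part, the standard library's `venv.EnvBuilder` accepts `symlinks=False`; a sketch (the temp-dir target is illustrative):

```python
import os
import tempfile
import venv

# Build a venv whose bin/python is a *copy*, not a symlink.
target = os.path.join(tempfile.mkdtemp(), "bundled-venv")
venv.EnvBuilder(symlinks=False, with_pip=False).create(target)

py = os.path.join(target, "bin", "python")  # "Scripts" on Windows
print(os.path.islink(py))  # False: the interpreter binary was copied
```

An important caveat for the stated goal: copying the `python` binary still does not make the directory self-contained, because the interpreter locates its standard library (and possibly a shared `libpython`) relative to the original installation. For a genuinely relocatable "upload and run" bundle, tools built for that purpose (PyInstaller, shiv/pex, or a relocatable build such as python-build-standalone) are the usual routes.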
<python><virtualenv><python-packaging><python-venv><python-poetry>
2023-05-07 13:57:45
3
1,579
YFl
76,194,314
2,647,447
Need to click text-entry area before entering text successfully in Selenium (Python)
<p>I need to enter text in a text-entry area. My command is:</p> <pre><code>self.driver.find_element_by_xpath(f&quot;{self.plateId_input}&quot;).send_keys(plateId)
</code></pre> <p>but I have found that this command does not work on its own. I have to issue the same command twice: the first time clicks the text area so that it is ready, then issuing the command again enters the text successfully. My question is: is there another way to perform this task? Issuing the command twice does not seem like good practice.</p>
<python><selenium-webdriver>
2023-05-07 13:46:23
1
449
PChao
76,194,228
2,032,998
In a polars dataframe, filter a column of type list by another column of type list
<p>I have a <a href="/questions/tagged/polars" class="s-tag post-tag" title="show questions tagged &#39;polars&#39;" aria-label="show questions tagged &#39;polars&#39;" rel="tag" aria-labelledby="tag-polars-tooltip-container" data-tag-menu-origin="Unknown">polars</a> dataframe as below:</p> <p>Example input:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl

df = pl.select(user_id=1, items=[1, 2, 3, 4], popular_items=[3, 4, 5, 6])
</code></pre> <pre><code>β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ user_id ┆ items        ┆ popular_items β”‚
β”‚ ---     ┆ ---          ┆ ---           β”‚
β”‚ i64     ┆ list[i64]    ┆ list[i64]     β”‚
β•žβ•β•β•β•β•β•β•β•β•β•ͺ══════════════β•ͺ═══════════════║
β”‚ 1       ┆ [1, 2, 3, 4] ┆ [3, 4, 5, 6]  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
</code></pre> <p>I want to filter the <code>popular_items</code> column by removing any items that are in the <code>items</code> column for each <code>user_id</code>.</p> <p>I have been trying to get it to work but have been unsuccessful due to various issues. In all likelihood, I am probably overcomplicating things.</p> <p>The expected output should be as follows:</p> <pre class="lang-py prettyprint-override"><code>β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ user_id ┆ items        ┆ popular_items ┆ suggested β”‚
β”‚ ---     ┆ ---          ┆ ---           ┆ ---       β”‚
β”‚ i64     ┆ list[i64]    ┆ list[i64]     ┆ list[i64] β”‚
β•žβ•β•β•β•β•β•β•β•β•β•ͺ══════════════β•ͺ═══════════════β•ͺ═══════════║
β”‚ 1       ┆ [1, 2, 3, 4] ┆ [3, 4, 5, 6]  ┆ [5, 6]    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
</code></pre> <p>It seems like the solution should be simple, but it has escaped me after some time trying different things.</p> <p>Any help would be greatly appreciated!</p>
<python><dataframe><python-polars>
2023-05-07 13:29:54
1
433
D_usv
76,193,930
1,581,090
How to call a method in a running python code from another code?
<p>I am running a small Python program <code>background.py</code> as follows:</p> <pre><code>import time

class Background:
    def __init__(self):
        self.q = 0

    def run(self):
        while True:
            print(f&quot;test, q={self.q}&quot;)
            time.sleep(5)

    def setq(self, x):
        self.q += x
        print(f&quot;New value q={self.q}&quot;)

if __name__ == &quot;__main__&quot;:
    bg = Background()
    bg.run()
</code></pre> <p>and I want to start a small Python program in another terminal that accesses this running code and executes the method <code>setq</code>, so that the value in the running code is changed. What is the easiest way to do this?</p>
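One lightweight option among several (sockets, files, signals): the stdlib's `multiprocessing.connection` gives you a small message channel between two processes. A sketch where the running program serves a `Listener` on a background thread, and the "other terminal" connects with a `Client` (the address and authkey are illustrative; here both halves run in one process to show the flow):

```python
import threading
import time
from multiprocessing.connection import Client, Listener

class Background:
    def __init__(self, address=('localhost', 0)):
        self.q = 0
        # listen for values sent from other processes; port 0 = OS-assigned
        self._listener = Listener(address, authkey=b'secret')
        self.address = self._listener.address  # the actual (host, port)
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        while True:
            with self._listener.accept() as conn:
                self.setq(conn.recv())

    def setq(self, x):
        self.q += x

def send_value(address, x):
    """This part would run in the other terminal's script."""
    with Client(address, authkey=b'secret') as conn:
        conn.send(x)

bg = Background()
send_value(bg.address, 5)
time.sleep(0.2)  # give the listener thread a moment to process
print(bg.q)      # 5
```

In the real setup the server would use a fixed, agreed-upon port (or a Unix socket path) so the second script knows where to connect.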
<python>
2023-05-07 12:28:48
0
45,023
Alex
76,193,921
1,779,532
`object` class does not have all magic methods, such as __add__(), so how are magic methods supported by inheriting the `object` class?
<p>In Python, all classes inherit the 'object' class. I found that the 'object' class does not have all magic methods.</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; dir(object)
['__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__']
</code></pre> <p><code>object</code> class has some binary operations such as <code>__eq__</code>, <code>__lt__</code>, <code>__gt__</code>, but some magic methods, such as <code>__add__</code> and <code>__mul__</code>, are not in the list. Although <code>object</code> class does not have <code>__add__</code> and <code>__mul__</code>, we can use the magic method as follows:</p> <pre class="lang-py prettyprint-override"><code>class Num(object):
    def __init__(self, n):
        self.n = n

    def __eq__(self, other):
        return (self.n&gt;=other.n) and (self.n&lt;=other.n)

    def __add__(self, other):
        return self.n + other.n

a = Num(10)
b = Num(20)
print(a==b) # False
print(a+b) # 30
</code></pre> <p>How can we use the magic methods, such as <code>__add__</code>, which are not defined in <code>object</code> class? I am wondering if it is not related to inheriting <code>object</code> class.</p> <p>Thanks in advance.</p>
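The short answer: operators are not inherited from `object` at all; they work by special-method *lookup*. `a + b` makes the interpreter look up `__add__` on `type(a)` (and `__radd__` on `type(b)` as a fallback), and raises `TypeError` if neither type defines one. `object` only defines the protocols that every object supports (identity comparison, hashing, `repr`, ...). A small demonstration:

```python
# object itself has no __add__ ...
print(hasattr(object, '__add__'))   # False

# ... so plain objects cannot be added:
try:
    object() + object()
except TypeError as e:
    print(e)  # unsupported operand type(s) for +: 'object' and 'object'

# int's __add__ lives in int's own namespace, not object's:
print('__add__' in vars(int))       # True
print('__add__' in vars(object))    # False

class Num:
    def __init__(self, n):
        self.n = n
    def __add__(self, other):       # now type(a) provides __add__
        return self.n + other.n

print(Num(10) + Num(20))            # 30
```

So defining `__add__` on `Num` works not because `object` forwards anything, but because the `+` operator's lookup finds the method on `Num` itself.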
<python>
2023-05-07 12:26:11
0
2,544
Park
76,193,710
5,561,144
Plotting polar function using matplotlib
<p>I'm trying to plot this function using matplotlib.</p> <p><a href="https://i.sstatic.net/yR0wj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yR0wj.png" alt="Desmos version" /></a></p> <p>As you can see in the Desmos app, the equation correctly plots the function as a circle, but when I try to port it to Python, I get this instead:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt

def fungsi_r4(theta, theta0, r0, a):
    return r0 * np.cos(theta - theta0) + np.sqrt((a ** 2) - (r0 ** 2) * (np.sin(theta - theta0) ** 2))

theta = np.linspace(0, 2 * np.pi, 100)
r = fungsi_r4(theta, 2.4, 5.1, 2.6)

ax = plt.subplot(projection='polar')
ax.plot(theta, r)
</code></pre> <p><a href="https://i.sstatic.net/OZXD7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OZXD7.png" alt="Python version" /></a></p> <p>My feeling tells me it has something to do with the negative values returned from the function, but I don't know what to do with it.</p>
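A sketch of what is likely going on: with r0 = 5.1 larger than a = 2.6 the circle does not enclose the pole, so the square root is real only for a limited range of angles, and each valid angle has *two* radii (the Β± branches of the quadratic in r). The single `+` branch plus matplotlib's literal handling of negative radii produce the distorted shape. Computing both branches and masking invalid or negative radii as NaN (NaNs break the plotted line) recovers the circle:

```python
import numpy as np

theta0, r0, a = 2.4, 5.1, 2.6
theta = np.linspace(0, 2 * np.pi, 1000)
delta = theta - theta0

with np.errstate(invalid='ignore'):      # sqrt of negatives -> nan, silently
    root = np.sqrt(a**2 - r0**2 * np.sin(delta)**2)

r_plus = r0 * np.cos(delta) + root       # outer branch of the circle
r_minus = r0 * np.cos(delta) - root      # inner branch (needed to close it)

# drop the spurious negative-radius solutions (near delta = pi)
r_plus[r_plus < 0] = np.nan
r_minus[r_minus < 0] = np.nan

# plotting sketch, same axes as the question:
# ax = plt.subplot(projection='polar')
# ax.plot(theta, r_plus)
# ax.plot(theta, r_minus)
```

Desmos implicitly handles the multi-valued relation; in NumPy/matplotlib the two branches have to be drawn explicitly.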
<python><numpy><matplotlib><polar-coordinates>
2023-05-07 11:41:59
1
799
Fahmi Noor Fiqri
76,193,660
2,302,262
In python, can I return a child instance when instantiating its parent?
<p>I have a zoo with animals, represented by objects. Historically, only the <code>Animal</code> class existed, with animal objects being created with e.g. <code>x = Animal('Bello')</code>, and typechecking done with <code>isinstance(x, Animal)</code>.</p> <p>Recently, it has become important to distinguish between species. <code>Animal</code> has been made an ABC, and all animal objects are now instances of its subclasses such as <code>Dog</code> and <code>Cat</code>.</p> <p>This change allows me to create an animal object directly from one of the subclasses, e.g. with <code>dog1 = Dog('Bello')</code> in the code below. This is cheap, and I can use it as long as I know what kind of animal I'm dealing with. Typechecking <code>isinstance(dog1, Animal)</code> still works as before.</p> <p>However, for usability and backwards compatibility, I also want to be able to call <code>dog2 = Animal('Bello')</code>, have <em>it</em> (from the input value) determine the species, and return a <code>Dog</code> instance, even if this is computationally more expensive.</p> <p><strong>I need help with the second method.</strong></p> <p>Here is my code:</p> <pre class="lang-py prettyprint-override"><code>class Animal:
    def __new__(cls, name):
        if cls is not Animal:  # avoiding recursion
            return super().__new__(cls)
        # Return one of the subclasses
        if name.lower() in ['bello', 'fido', 'bandit']:  # expensive tests
            name = name.title()  # expensive data correction
            return Dog(name)
        elif name.lower() in ['tiger', 'milo', 'felix']:
            # ...

    name = property(lambda self: self._name)
    present = lambda self: print(f&quot;{self.name}, a {self.__class__.__name__}&quot;)
    # ... and (many) other methods that must be inherited

class Dog(Animal):
    def __init__(self, name):
        self._name = f&quot;Mr. {name}&quot;  # cheap data correction
    # ... and (few) other dog-specific methods

class Cat(Animal):
    def __init__(self, name):
        self._name = f&quot;Dutchess {name}&quot;  # cheap data correction
    # ... and (few) other cat-specific methods

dog1 = Dog(&quot;Bello&quot;)
dog1.present()  # as expected, prints 'Mr. Bello, a Dog'.

dog2 = Animal(&quot;BELLO&quot;)
dog2.present()  # unexpectedly, prints 'Mr. BELLO, a Dog'. Should be same.
</code></pre> <p>Remarks:</p> <ul> <li><p><strong>In my use-case, the second creation method is by far the more important one.</strong></p> </li> <li><p>What I want to achieve is that calling <code>Animal</code> returns a subclass, <code>Dog</code> in this case, initialized with manipulated arguments (<code>name</code>, in this case).</p> </li> <li><p>So, I'm looking for a way to keep the basic structure of the code above, where the parent class can be called, but just always returns a child instance.</p> </li> <li><p>Of course, this is a contrived example ;)</p> </li> </ul> <p>Many thanks, let me know if more information is helpful.</p> <hr /> <h1>Suboptimal solutions</h1> <h3>Factory function</h3> <pre class="lang-py prettyprint-override"><code>def create_animal(name) -&gt; Animal:
    # Return one of the subclasses
    if name.lower() in ['bello', 'fido', 'bandit']:
        name = name.title()
        return Dog(name)
    elif name.lower() in ['tiger', 'milo', 'felix']:
        # ...

class Animal:
    name = property(lambda self: self._name)
    present = lambda self: print(f&quot;{self.name}, a {self.__class__.__name__}&quot;)
    # ... and (many) other methods that must be inherited

class Dog(Animal):
    # ...
</code></pre> <p>This breaks backward compatibility by no longer allowing <strong>the creation</strong> of animals with an <code>Animal()</code> call. Typechecking is still possible.</p> <p>I prefer the symmetry of being able to call a specific species, with <code>Dog()</code>, or use the more general <code>Animal()</code>, in the exact same way, which does not exist here.</p> <h3>Factory function, alternative</h3> <p>Same as previous, but change the name of the <code>Animal</code> class to <code>AnimalBase</code>, and the name of the <code>create_animal</code> function to <code>Animal</code>.</p> <p>This fixes the previous problem, but breaks backward compatibility by no longer allowing <strong>typechecking</strong> with <code>isinstance(dog1, Animal)</code>.</p>
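A sketch of why the name comes out wrong and one way to fix it: when `__new__` returns an instance of (a subclass of) the class being called, Python automatically calls `type(obj).__init__(obj, *args)` with the *original* arguments. So `return Dog('Bello')` builds a correctly-named dog, and then `Dog.__init__` runs a second time with `'BELLO'` and overwrites it. One fix is to let `__new__` do only the dispatch, returning an uninitialised instance, and move the normalisation into the subclass `__init__` so that direct and dispatched construction agree:

```python
class Animal:
    def __new__(cls, name):
        if cls is not Animal:          # direct subclass call: no dispatch
            return super().__new__(cls)
        # dispatch only; Python will call type(obj).__init__(obj, name)
        # on the returned instance with the ORIGINAL name
        if name.lower() in ('bello', 'fido', 'bandit'):
            return super().__new__(Dog)
        return super().__new__(Cat)

    name = property(lambda self: self._name)

    def present(self):
        print(f"{self.name}, a {self.__class__.__name__}")

class Dog(Animal):
    def __init__(self, name):
        # normalise here, so Dog(...) and Animal(...) produce the same result
        self._name = f"Mr. {name.title()}"

class Cat(Animal):
    def __init__(self, name):
        self._name = f"Dutchess {name.title()}"

Dog("Bello").present()     # Mr. Bello, a Dog
Animal("BELLO").present()  # Mr. Bello, a Dog  (same now)
```

The trade-off: the correction now runs on every construction path, so if it is genuinely expensive (as in the question's comments), `__init__` would need a guard (e.g. skip work if `_name` is already set by `__new__`) rather than repeating it.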
<python><class>
2023-05-07 11:33:41
4
2,294
ElRudi
76,193,644
14,555,505
Numpy variable slice size (possibly zero)
<p>Let's say I've got some time series data:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt

np.random.seed(42)
x = np.linspace(0, 10, num=100)
time_series = np.sin(x) + np.random.random(100)
plt.plot(x, time_series)
</code></pre> <p><a href="https://i.sstatic.net/VQoya.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VQoya.png" alt="sin curve with some small amount of randomness at each time step" /></a></p> <p>If I want to 'delay' the time series by some amount, I can do this:</p> <pre class="lang-py prettyprint-override"><code>delay = 10
x_delayed = x[delay:]
time_series_delayed = time_series[:-delay]
plt.plot(x, time_series, label='original')
plt.plot(x_delayed, time_series_delayed, label='delayed')
plt.legend()
</code></pre> <p><a href="https://i.sstatic.net/rlLVR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rlLVR.png" alt="Same as previous, but with another orange time series that is the original time series shifted to the right by 10 time steps" /></a></p> <p>This is all well and good, but I want to keep the code clean while still allowing <code>delay</code> to be zero. As it stands, I get an error because the slice <code>my_arr[:-0]</code> just evaluates to <code>my_arr[:0]</code>, which will always be the empty slice instead of the full array.</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; time_series[:-0]
array([], dtype=float64)
</code></pre> <p>This means that if I want to encode the idea that a delay of zero is identical to the original array, I have to special-case every single use of the slice. This is tedious and error-prone:</p> <pre><code># Make 3 plots, for negative, zero, and positive delays
for delay in (0, 5, -5):
    if delay &gt; 0:
        x_delayed = x[delay:]
        time_series_delayed = time_series[:-delay]
    elif delay &lt; 0:
        # Negative delay is the complement of positive delay
        x_delayed = x[:delay]
        time_series_delayed = time_series[-delay:]
    else:
        # Zero delay just copies the array
        x_delayed = x[:]
        time_series_delayed = time_series[:]
    # Add the delayed time series to the plot
    plt.plot(
        x_delayed,
        time_series_delayed,
        label=f'delay={delay}',
        # change the alpha to make things less cluttered
        alpha=1 if delay == 0 else 0.3
    )
plt.legend()
</code></pre> <p><a href="https://i.sstatic.net/FDVga.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FDVga.png" alt="Now there are 3 time series: the original, one which is shifted left by 5 time steps, and one which is shifted right by 5 time steps" /></a></p> <p>I've had a look at the numpy slicing object and <a href="https://numpy.org/doc/stable/reference/generated/numpy.s_.html" rel="nofollow noreferrer"><code>np.s_</code></a>, but I can't seem to figure it out.</p> <p>Is there a neat/pythonic way of encoding the idea that a delay of zero is the original array?</p>
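One common idiom (a sketch, with an illustrative helper name): a slice end of `None` means "to the end", and `-delay or None` evaluates to `None` exactly when `delay == 0`, so the zero case stops being special for non-negative delays; negative delays still need their own branch:

```python
import numpy as np

def shift_views(x, delay):
    """Return (x_delayed, x_truncated) views for any integer delay."""
    if delay >= 0:
        # `-0 or None` -> None, so delay=0 yields the full array
        return x[delay:], x[: -delay or None]
    return x[:delay], x[-delay:]

x = np.arange(10)

a, b = shift_views(x, 0)
print(len(a), len(b))  # 10 10  -- full arrays, no special-casing

a, b = shift_views(x, 3)
print(a[0], len(b))    # 3 7
```

The `or None` trick relies on `0` being falsy; spelling the same thing as `x[: len(x) - delay]` avoids the cleverness at the cost of no longer being a pure slice expression.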
<python><numpy><indexing><slice>
2023-05-07 11:30:54
2
1,163
beyarkay
76,193,476
12,487,076
How to deal with OverflowError when you modify exif data for images?
<p>Inside a loop the code successfully modifies exif data for jpeg images,</p> <pre><code>... with open(f&quot;{old_source_path}{item.path}&quot;, &quot;rb&quot;) as im_file: imexif = Image2(im_file) # Description imexif.copyright = &quot;Copyright 2023 somebody. All Rights Reserved.&quot; # GPS imexif.gps_latitude = decdeg2dms(item.latitude) imexif.gps_longitude = decdeg2dms(item.longitude) imexif.gps_altitude = item.altitude ... </code></pre> <p>but sometimes there are errors that I cannot seem to catch with a try block:</p> <pre><code> ...\backend\venv\Lib\site-packages\plum\buffer.py&quot;, line 53, in unpack_and_dump raise UnpackError(dump, exc) from exc plum.exceptions.UnpackError: +--------+------------+-------+-------+------------------------+ | Offset | Access | Value | Bytes | Format | +--------+------------+-------+-------+------------------------+ | | | | | TiffHeader (Structure) | | 0 | byte_order | 22166 | 56 96 | tiff_byte_order | +--------+------------+-------+-------+------------------------+ ValueError occurred during unpack operation: 22166 is not a valid TiffByteOrder </code></pre> <p>The plum package raises more such errors, such as OverflowError. How can I deal with these?</p>
<python>
2023-05-07 10:54:41
1
303
ML Rozendale
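For the exif question above: since plum surfaces several unrelated exception types (UnpackError, OverflowError, ValueError), one way to keep the loop alive is to wrap each file's processing in a broad try/except and collect the failures. A sketch with hypothetical names (`modify_exif` stands in for the exif-editing code in the question):

```python
def process_all(paths, modify_exif):
    """Run modify_exif on every path, collecting failures instead of
    letting a single malformed file abort the whole loop."""
    failures = []
    for path in paths:
        try:
            modify_exif(path)
        except Exception as exc:  # plum raises several unrelated types
            failures.append((path, repr(exc)))
    return failures

def fake_modify(path):
    # Hypothetical stand-in that fails on one file, the way a file
    # with a corrupt TIFF header would
    if path.endswith("bad.jpg"):
        raise OverflowError("value too large for TIFF field")

failed = process_all(["a.jpg", "bad.jpg", "c.jpg"], fake_modify)
```

Catching bare `Exception` is usually discouraged, but for third-party parsers that raise many unrelated types on corrupt input, logging the failure per file and moving on is a pragmatic compromise.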
76,193,385
4,304,610
How to find highlighted text in pdf with python (or differentiate it from its unhighlighted counterpart)
<p>I have several pdf files containing multiple choice questions where the choices are formatted as a table, and the answers are formatted the same way but with the correct answer highlighted.</p> <p>I want to create a pdf or txt file with only the questions and a separate pdf or txt file with only the answers in order (like 1-D, 2-C, 3-A etc.)</p> <p>Background and details: As each question starts with the word &quot;Question&quot;, it is relatively straightforward to extract the questions, and as each answer-revealing page has at least B and C choices it is also straightforward to find where the answer-revealing page is, but I couldn't find how to tell whether a specific piece of text is highlighted.</p> <p>My backup plan is to either do it manually by extracting the answer-revealing pages in order into a separate pdf file, or to convert the choices tables on the question page and on the answer-revealing page to images and see if I can detect gray highlights in the images, but this might mean more suffering than doing it manually.</p> <p>The two tables (the choices for the question and the answer-revealing page with choices) are always on different pages and almost always directly subsequent if the question fits on one page (a question not fitting on one page is rare).</p> <p><a href="https://i.sstatic.net/oIPwO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oIPwO.png" alt="enter image description here" /></a></p> <pre><code>import glob from pypdf import PdfReader, PdfWriter flList = glob.glob('samplepath') writer = PdfWriter() questionString = '' for fl in flList: reader = PdfReader(fl) print(fl) for i_page,page in enumerate( reader.pages): txPag = page.extract_text() if &quot;Question&quot; in txPag: questionString += txPag elif (&quot;\nB&quot; in txPag) &amp; (&quot;\nC&quot; in txPag): #TODO: Answer should be here, but how to extract the highlighted choice and store independently None </code></pre>
<python><pypdf>
2023-05-07 10:32:51
0
313
Uğur Dinç
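For the highlighted-text question above: the "highlight" shown is a gray fill behind a table cell, not a PDF highlight annotation, so pypdf's text extraction cannot see it. One possible approach (a sketch using PyMuPDF instead of pypdf; the gray-fill threshold and the file name are assumptions) is to inspect the page's vector drawings and collect the text lying inside gray-filled rectangles:

```python
import fitz  # PyMuPDF

def gray_filled_text(page):
    """Return the text inside gray-filled rectangles on a page."""
    hits = []
    for drawing in page.get_drawings():
        fill = drawing.get("fill")
        # Assumed heuristic: a gray fill has equal r, g, b components
        # that are darker than the white page background
        if fill and len(set(round(c, 2) for c in fill)) == 1 and fill[0] < 0.9:
            text = page.get_text("text", clip=drawing["rect"]).strip()
            if text:
                hits.append(text)
    return hits

doc = fitz.open("questions.pdf")  # hypothetical file name
for page in doc:
    print(gray_filled_text(page))
```

The exact fill values depend on how the source PDFs were produced, so printing the fill colors of one known answer page first is a sensible way to calibrate the threshold.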
76,193,289
4,002,633
Django Many2Many query to find all `things` in a group of `categories`
<p>Given these Django models:</p> <pre class="lang-py prettyprint-override"><code>from django.db import models class Thing(models.model): name = models.CharField('Name of the Thing') class Category(models.model): name = models.CharField('Name of the Category') things = models.ManyToManyField(Thing, verbose_name='Things', related_name='categories') </code></pre> <p>Note that all the categories a Thing is in can be found by:</p> <pre><code>thing = Thing.objects.get(id=1) # for example cats = thing.categories.all() # A QuerySet </code></pre> <p>I'm really struggling to build a query set that returns all Things in all of a given set of Categories.</p> <p>Let's say we have 5 categories, with IDs 1, 2, 3, 4, 5.</p> <p>And say I have a subset of categories:</p> <pre><code>my_cats = Category.objects.filter(id__in=[2,3]) </code></pre> <p>I want to find all Things that are in say categories, 2 AND 3.</p> <p>I can find all Things in category 2 OR 3 easily enough. For example this:</p> <pre><code>Thing.objects.filter(categories__in=[2,3]) </code></pre> <p>seems to return just that, Things in category 2 OR 3.</p> <p>And something like:</p> <pre><code>Thing.objects.filter(Q(categories=2)|Q(categories=3)) </code></pre> <p>also, but this returns nothing:</p> <pre><code>Thing.objects.filter(Q(categories=2)&amp;Q(categories=3)) </code></pre> <p>I might envisage something like:</p> <pre><code>Thing.objects.filter(categories__contains=[2,3]) </code></pre> <p>but of course that's a dream as <code>contains</code> operates on strings not ManyToMany sets.</p> <p>Is there a standard trick here I'm missing?</p> <p>I spun up a sandbox here to test and demonstrate:</p> <p><a href="https://codesandbox.io/p/sandbox/django-m2m-test-cizmud" rel="nofollow noreferrer">https://codesandbox.io/p/sandbox/django-m2m-test-cizmud</a></p> <p>It implements this simple pair of models and populates the database with a small set of things and categories and tests the queries, here's the latest state of it:</p> 
<pre class="lang-py prettyprint-override"><code>print(&quot;Database contains:&quot;) for thing in Thing.objects.all(): print( f&quot;\t{thing.name} in categorties {[c.id for c in thing.categories.all()]}&quot;) print() # This works fine. Prints: # Cat1 OR Cat2: ['Thing 1', 'Thing 5', 'Thing 4'] things = Thing.objects.filter( Q(categories=1) | Q(categories=2)).distinct() print(f&quot;Cat1 OR Cat2: {[t.name for t in things]}&quot;) # We would love this to return Thing4 and thing5 # The two things in the test data set that are in # Category 2 and in Category 3. # But this does not work. It prints: # Cat2 AND Cat3: [] # because # What does yield ['Thing 4', 'Thing 5']? print(&quot;\nAiming to to get: ['Thing 4', 'Thing 5']&quot;) things = Thing.objects.filter( Q(categories=2) &amp; Q(categories=3)).distinct() print(f&quot;Try 1: Cat2 AND Cat3: {[t.name for t in things]}&quot;) # This also fails, producing an OR not AND things = Thing.objects.filter(categories__in=[2, 3]).distinct() print(f&quot;Try 2: Cat2 AND Cat3: {[t.name for t in things]}&quot;) # Also fails things = Thing.objects.filter(categories__in=[2, 3])\ .filter(categories=2).distinct() print(f&quot;Try 3: Cat2 AND Cat3: {[t.name for t in things]}&quot;) # Also fails things = Thing.objects.filter(categories__in=[2, 3], categories=2)\ .distinct() print(f&quot;Try 4: Cat2 AND Cat3: {[t.name for t in things]}&quot;) </code></pre> <p>and it's output:</p> <pre><code>Database contains: Thing 1 in categorties [1, 2] Thing 2 in categorties [3, 4] Thing 3 in categorties [5] Thing 4 in categorties [2, 3] Thing 5 in categorties [1, 2, 3] Cat1 OR Cat2: ['Thing 1', 'Thing 5', 'Thing 4'] Aiming to to get: ['Thing 4', 'Thing 5'] Try 1: Cat2 AND Cat3: [] Try 2: Cat2 AND Cat3: ['Thing 1', 'Thing 4', 'Thing 5', 'Thing 2'] Try 3: Cat2 AND Cat3: ['Thing 1', 'Thing 4', 'Thing 5'] Try 4: Cat2 AND Cat3: ['Thing 1', 'Thing 4', 'Thing 5'] </code></pre> <p>I guess if I can work it out in SQL, we can write us a custom lookup:</p> 
<p><a href="https://docs.djangoproject.com/en/4.2/howto/custom-lookups/" rel="nofollow noreferrer">https://docs.djangoproject.com/en/4.2/howto/custom-lookups/</a></p> <p>But why do I think this must already have been written? How could this be such a unique and new use case?</p>
<python><django><django-queryset><contains><m2m>
2023-05-07 10:13:44
1
2,192
Bernd Wechner
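For the M2M question above: `Q(categories=2) & Q(categories=3)` puts both conditions on a *single* join over the relation table, which no single row can satisfy, hence the empty result. The standard idioms (a sketch against the models above, not run here) either chain `.filter()` calls, each of which adds its own join, or count distinct matches:

```python
from django.db.models import Count

# Each chained .filter() on a multi-valued relation adds a separate
# join, so this keeps Things that are in category 2 AND in category 3:
things = Thing.objects.filter(categories=2).filter(categories=3).distinct()

# Equivalent counting form, convenient for an arbitrary list of ids;
# filtering before annotating restricts which categories get counted:
cat_ids = [2, 3]
things = (Thing.objects.filter(categories__in=cat_ids)
          .annotate(n=Count('categories', distinct=True))
          .filter(n=len(cat_ids)))
```

The chained-filter behaviour is documented under "Spanning multi-valued relationships" in the Django queryset docs, which is why no custom lookup is needed for this case.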
76,193,177
5,760,497
TypeError: '>=' not supported between instances of 'list' and 'dict'
<p>I wanted to create a permutation test in Python. I have a dictionary of observed values, denoted as &quot;a&quot;, and a list of randomly generated values, denoted as &quot;b&quot;:</p> <pre><code> a = {0: 4,1: 3.0,2: 4.0,3: 2.0,4: 3.0,5: 3.0,6: 3.0,7: 3.0,8: 3.0,9: 3.0,10: 2.0 ,11: 3.0,12: 2.0, 13: 4,14: 2.0,15: 3.0,16: 2.0,17: 2.0,18: 3.0} b=[{0: 10,1: 11,2: 10,3: 10.0,4: 10,5: 9,6: 9,7: 10,8: 10,9: 10,10: 9,11: 9.9,12: 9,13: 10,14: 10, 15: 10, 16: 10, 17: 10,18: 10}, {0: 9,1: 9,2: 9,3: 9.0,4: 10,5: 9,6: 9.0,7: 8, 8: 9,9: 10, 10: 9.4,11: 8.44,12: 9.0,13: 8.75,14: 9.4,15: 9.4,16: 8.14,17: 8.88,18: 9.85}] </code></pre> <p>The key-value pair here is that the key represents the index of the data points, while the value represents any count data; for example's sake, say the number of connections.</p> <p>I wanted to determine a probability under the hypothesis that the observed values in &quot;a&quot; are also random, by which I mean &quot;the connections generated in my observed data are random&quot;, using:</p> <pre><code>p_value = (np.sum(b&gt;= a) + np.sum(b &lt;= a)) / 2 </code></pre> <p>I tried using <code>operator.itemgetter</code>:</p> <pre><code> from operator import itemgetter c = sorted(b,key=itemgetter(0)) p_value = (np.sum(c&gt;= a) + np.sum(c &lt;= a)) / 2 </code></pre> <p>which gives an error: <code>TypeError: '&gt;=' not supported between instances of 'list' and 'dict'</code>. I then tried using a for loop:</p> <pre><code>for i in range(len(c)): p_value = (np.sum( c[i]&gt;= a) + np.sum(c[i] &lt;= a)) / 2 </code></pre> <p>which gives an error:</p> <pre><code>TypeError: '&gt;=' not supported between instances of 'dict' and 'dict' </code></pre> <p>What am I missing here?</p>
<python><numpy><dictionary><scipy><operators>
2023-05-07 09:50:58
0
518
Gerard
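The comparisons above fail because `dict` objects (and lists of them) do not define `>=` at all. Extracting the values in a fixed key order into NumPy arrays first makes the elementwise formula work; a small sketch with stand-in data (the normalisation by the number of comparisons is an assumption on my part — the original formula divides only by 2, which cannot yield a probability):

```python
import numpy as np

# Small stand-ins for the dictionaries in the question
a = {0: 4.0, 1: 3.0, 2: 2.0}
b = [{0: 5.0, 1: 1.0, 2: 2.0}, {0: 3.0, 1: 3.0, 2: 4.0}]

# Dictionaries don't support ordering comparisons, so pull the values
# out in a fixed key order first.
keys = sorted(a)
obs = np.array([a[k] for k in keys])                # shape (n,)
rand = np.array([[d[k] for k in keys] for d in b])  # shape (n_perm, n)

# Elementwise comparisons now broadcast obs against each permutation:
p_value = (np.sum(rand >= obs) + np.sum(rand <= obs)) / (2 * rand.size)
```

With the arrays in hand, `rand >= obs` compares value 0 of every permutation against observed value 0, and so on, which appears to be the intent of the original formula.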
76,193,111
7,895,542
How can I speed up this combinatorial problem?
<p>I have the following problem: I want to calculate something that can be considered a distance between team trajectories in a 5 (v 5) player game, in order to eventually do clustering on these trajectories.</p> <p>I have a working setup. However, it is extremely slow to calculate the distances/clusters for any appreciable number of trajectories.</p> <p>The setup and the steps I am currently taking are as follows.</p> <p>The game map can be segmented into a finite number of &quot;tiles&quot; as shown here: <a href="https://i.sstatic.net/q5oYp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q5oYp.png" alt="enter image description here" /></a>. The distances from each tile to every other tile are precomputed and stored in a json file with the following structure:</p> <p>area_distance_matrix[map_name][area1id][area2id][dist_type(euclidean,graph,geodesic)] This step takes a bit but is not the bottleneck here, as it does not scale with the number of trajectories I want to look at.</p> <p>So now if I want to know the distance between any two players, I just look at which area they are in (also precomputed at this stage) and grab their distance from this loaded json file.</p> <p>Next, to determine the distance between any two teams of 5 players (at a specific time) I do the following: I determine every possible assignment of players from the one team to players from the other team (so for 3 players each: ((1,1),(2,2),(3,3)), ((1,2),(2,1),(3,3)) and so on). Then I calculate the average distance of the pairs for each assignment and keep the smallest one.</p> <pre class="lang-py prettyprint-override"><code>def position_state_distance( map_name: str, position_array_1: npt.NDArray, position_array_2: npt.NDArray, distance_type: DistanceType = &quot;geodesic&quot;, ) -&gt; float: &quot;&quot;&quot;Calculates a distance between two game states based on player positions. 
Args: map_name (string): Map to search position_array_1 (numpy array): Numpy array with shape (2|1, 5, 1) with the first index indicating the team, the second the player and the third the tile. position_array_2 (numpy array): Numpy array with shape (2|1, 5, 1) with the first index indicating the team, the second the player and the third the tile. distance_type (string, optional): String indicating how the distance between two player positions should be calculated. Options are &quot;geodesic&quot;, &quot;graph&quot; and &quot;euclidean&quot;. Defaults to 'geodesic' Returns: A float representing the distance between these two game states &quot;&quot;&quot; position_array_1, position_array_2 = _check_arguments_position_distance( map_name, position_array_1, position_array_2, distance_type ) pos_distance: float = 0 # Get the minimum mapping distance for each side separately for team in range(position_array_1.shape[0]): side_distance = float(&quot;inf&quot;) # Generate all possible mappings between players from array1 and array2. 
# Map player1 from array1 to player1 from array2 and # player2's to each other or match player1's with player2's and so on for mapping in itertools.permutations( range(position_array_1.shape[1]), position_array_2.shape[1] ): cur_dist: float = 0 for player2, player1 in enumerate(mapping): this_dist = 0 area1 = position_array_1[team][player1][0] area2 = position_array_2[team][player2][0] this_dist = min( AREA_DIST_MATRIX[map_name][str(area1)][str(area2)][distance_type], AREA_DIST_MATRIX[map_name][str(area2)][str(area1)][distance_type], ) if this_dist == float(&quot;inf&quot;): this_dist = sys.maxsize / 6 cur_dist += this_dist / len(mapping) side_distance = min(side_distance, cur_dist) pos_distance += side_distance / position_array_1.shape[0] return pos_distance </code></pre> <p>Now for the trajectory distance i can either average/sum the team distances at each point in time or do dynamic time warping (which is n^2 in the number of time steps).</p> <p>So effectively:</p> <pre class="lang-py prettyprint-override"><code>def area_trajectory_distance( trajectory_array_1: np.ndarray, trajectory_array_2: np.ndarray, ) -&gt; float: length = max(len(trajectory_array_1), len(trajectory_array_2)) dist = 0 for time in range(length): dist += position_state_distance( position_array_1=trajectory_array_1[time] if time in range(len(trajectory_array_1)) else trajectory_array_1[-1], position_array_2=trajectory_array_2[time] if time in range(len(trajectory_array_2)) else trajectory_array_2[-1], dist_matrix=dist_matrix, ) return dist / length </code></pre> <p>And then finally to do the clustering i take all of the (thousands) of trajectories i want to cluster and precompute a distance matrix for them to give to the clustering algorithm.</p> <pre><code>def get_traj_matrix_area( precompute_array: np.ndarray, dist_matrix: dict, dtw: bool ) -&gt; np.ndarray: &quot;&quot;&quot;Precompute the distance matrix for all trajectories of areas Args: precompute_array: Numpy array of trajectories for 
which to compute the distance matrix Returns: Numpy array of distances between trajectories &quot;&quot;&quot; precomputed = np.zeros((len(precompute_array), len(precompute_array))) for i in range(len(precompute_array)): for j in range(i + 1, len(precompute_array)): precomputed[i][j] = area_trajectory_distance( precompute_array[i], precompute_array[j] ) precomputed += precomputed.T return precomputed </code></pre> <p>To then finally do</p> <pre><code>kmed = KMedoids(n_clusters=n_cluster, metric=&quot;precomputed&quot;).fit( precomputed ) </code></pre> <p>And now my question which parts i could improve to reduce the time it takes for the whole procedure and how to best do that.</p> <p>So far i have tried using numba and that has given me a bit of a performance boost but it doesnt play nice with itertools so in the end it helped a bit but wasnt enough.</p> <pre><code> 2511861870 function calls (2460436306 primitive calls) in 1281.971 seconds Ordered by: internal time ncalls tottime percall cumtime percall filename:lineno(function) 66365512 132.039 0.000 567.527 0.000 typeddict.py:176(__getitem__) 4995000 113.294 0.000 1198.658 0.000 nav_utils.py:648(fast_token_state_distance) 66364936 100.695 0.000 183.750 0.000 typedlist.py:365(__getitem__) 66365512 90.456 0.000 194.496 0.000 typeddict.py:37(_getitem) 37169228/18182557 50.793 0.000 287.301 0.000 iterables.py:1278(multiset_permutations) 16755452/8377726 48.454 0.000 105.217 0.000 sorting.py:10(default_sort_key) 66366790 47.594 0.000 215.832 0.000 dispatcher.py:724(typeof_pyval) 66364936 42.223 0.000 58.534 0.000 typedlist.py:90(_getitem) 36438158/14252492 40.854 0.000 212.264 0.000 sorting.py:203(ordered) 165915291 40.810 0.000 40.810 0.000 serialize.py:30(_numba_unpickle) 66368504/66368412 37.567 0.000 99.990 0.000 functools.py:904(wrapper) 66368411 37.499 0.000 163.191 0.000 typeof.py:27(typeof) 66368504 31.599 0.000 52.594 0.000 functools.py:818(dispatch) 33183044 27.201 0.000 79.542 0.000 
typeddict.py:77(_from_meminfo_ptr) 25370390 26.705 0.000 26.711 0.000 {built-in method builtins.sum} 33183069 23.493 0.000 40.857 0.000 typeddict.py:107(__init__) 8377726 22.370 0.000 40.692 0.000 sorting.py:180(_nodes) 12353052 18.566 0.000 29.054 0.000 sympify.py:102(sympify) 66377100 16.215 0.000 16.233 0.000 weakref.py:414(__getitem__) 132734582 16.106 0.000 16.106 0.000 typeddict.py:160(_numba_type_) 50274265 15.941 0.000 21.666 0.000 &lt;frozen importlib._bootstrap&gt;:405(parent) </code></pre>
<python><performance><combinatorics>
2023-05-07 09:32:41
1
360
J.N.
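For the question above, the inner `itertools.permutations` loop is exactly a minimum-cost bipartite matching, which `scipy.optimize.linear_sum_assignment` (the Hungarian method) solves in O(n³) instead of O(n!). A standalone sketch with a made-up cost matrix rather than the real tile distances:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def min_mean_matching(cost):
    """Minimum mean pairwise distance over all player assignments.

    cost[i, j] is the distance between player i of team A and player j
    of team B; the Hungarian method replaces the permutations loop
    (5! = 120 candidates per timestep) with a single polynomial solve.
    """
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])
best = min_mean_matching(cost)
```

For further gains, loading the JSON distance file into a dense NumPy array indexed by area id (instead of typed dicts keyed by strings) would remove the `typeddict.__getitem__` calls that dominate the profile, and the symmetric trajectory-distance matrix could then be filled with vectorised calls per pair.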
76,193,025
1,889,213
How to get pipreqs to skip files that fail to run
<p>Many python files in my current project do not run / have syntax errors.</p> <p>How can I get pipreqs to keep generating the requirements.txt, skipping those failures?</p>
<python>
2023-05-07 09:11:51
1
1,517
user3180
76,192,974
1,008,728
Shuffling sequences in a list
<p>I have a list containing sequences of various lengths. The pattern of a sequence is as follows:</p> <p><code>x_k, y_k, ..., x_k_i, y_k_i, ... z_k</code></p> <p>For example, a list having 4 sequences with lengths: 3, 3, 5, and 7 is as follows:</p> <pre><code>input_list = ['x_1', 'y_1', 'z_1', 'x_2', 'y_2', 'z_2', 'x_3_1', 'y_3_1', 'x_3_2', 'y_3_2', 'z_3', 'x_4_1', 'y_4_1', 'x_4_2', 'y_4_2', 'x_4_3', 'y_4_3', 'z_4'] </code></pre> <p>I need to shuffle the list, such that the order of sequences is shuffled, but the entries within a sequence are not shuffled.</p> <p>For example, a candidate output would be as follows:</p> <pre><code>shuffled_list = ['x_3_1', 'y_3_1', 'x_3_2', 'y_3_2', 'z_3', 'x_1', 'y_1', 'z_1', 'x_4_1', 'y_4_1', 'x_4_2', 'y_4_2', 'x_4_3', 'y_4_3', 'z_4', 'x_2', 'y_2', 'z_2'] </code></pre> <p>One way to achieve this would be by saving each sequence as a separate list, and then having a nested list represent all the sequences. Then, one by one randomly removing a list (i.e., a sequence) from the nested list and appending the removed list's elements to the final shuffled list.</p> <p>Is there a more efficient way to achieve the same?</p>
<python>
2023-05-07 08:56:55
3
1,360
Saad
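A sketch of the grouping idea from the question above, using the `z_k` element as the sequence terminator (an assumption based on the stated pattern) and `random.shuffle` on the list of chunks rather than repeated random removals:

```python
import random

def shuffle_sequences(items):
    """Shuffle the order of sequences while keeping each one intact.

    Per the pattern in the question, an element starting with 'z'
    terminates a sequence, so the list can be split on that marker.
    """
    chunks, current = [], []
    for item in items:
        current.append(item)
        if item.startswith('z'):
            chunks.append(current)
            current = []
    if current:                 # trailing items without a closing z_k
        chunks.append(current)
    random.shuffle(chunks)
    return [x for chunk in chunks for x in chunk]

input_list = ['x_1', 'y_1', 'z_1', 'x_2', 'y_2', 'z_2',
              'x_3_1', 'y_3_1', 'x_3_2', 'y_3_2', 'z_3',
              'x_4_1', 'y_4_1', 'x_4_2', 'y_4_2', 'x_4_3', 'y_4_3', 'z_4']
shuffled = shuffle_sequences(input_list)
```

This is O(n) overall; the nested-list approach described in the question is equally valid, and shuffling the chunk list in place merely avoids the repeated random removals.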
76,192,935
1,581,090
How to run a permanent background process within the python-flask framework?
<p>I am about to code a simple browser-based game to which players can connect via browser, using Python and Flask. Now, I want this game to keep running even when no player is connected to the backend, as things keep happening in the game. That can be handled by a separate process running and accessing a database, and updating entries in the database when some event happens.</p> <p>Is there then a simple way to incorporate that into the Flask application I am currently running as</p> <pre><code>import sys import json from flask import Flask, request, render_template app = Flask(__name__) from mf import gameplay game = gameplay.Game() @app.route('/') def index(): return render_template('mainmap.html') @app.route('/api', methods=['POST']) def api(): data_json = request.data.decode() data = json.loads(data_json) return game(data_json) if __name__ == '__main__': app.run('0.0.0.0', 5000, threaded=True) </code></pre> <p>And is there a way to &quot;notify&quot; the background process when a player is (re)connecting to the game, so that the background process &quot;knows&quot; a new player has connected and performs some specific actions?</p>
<python><flask>
2023-05-07 08:44:38
1
45,023
Alex
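One lightweight pattern for the question above (a stdlib-only sketch; a real deployment with several workers would be better served by a separate process plus the database, or a task queue) is a daemon thread running the game loop, with a `Queue` that request handlers use to notify it of (re)connecting players:

```python
import queue
import threading
import time

events = queue.Queue()   # web handlers push "player connected" events here
seen = []                # stands in for reacting to the event / game state

def game_loop(stop):
    """Permanent background loop: ticks the world and drains events."""
    while not stop.is_set():
        try:
            player = events.get(timeout=0.05)
            seen.append(player)      # react to a (re)connecting player
        except queue.Empty:
            pass                     # no event: just advance the game

stop = threading.Event()
threading.Thread(target=game_loop, args=(stop,), daemon=True).start()

# A Flask route would notify the loop with:  events.put(player_id)
events.put("alice")
time.sleep(0.5)
stop.set()
```

In the Flask app from the question, starting the thread once at module import and calling `events.put(...)` inside the `/api` route would be enough for a single-process deployment; `player_id` here is a hypothetical identifier the route would extract from the request.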
76,192,496
8,697,724
Openai /v1/completions vs. /v1/chat/completions end points
<pre><code>class OpenaiClassifier(): def __init__(self, api_keys): openai.api_key = api_keys['Openai'] def get_ratings(self, review): prompt = f&quot;Rate the following review as an integer from 1 to 5, where 1 is the worst and 5 is the best: \&quot;{review}\&quot;&quot; response = openai.Completion.create( engine=&quot;text-davinci-003&quot;, prompt=prompt, n=1, max_tokens=5, temperature=0.5, top_p=1 ) try: rating = int(response.choices[0].text.strip()) return rating except ValueError: return None </code></pre> <p>I wonder what's the main difference between /v1/completions and /v1/chat/completions endpoints, and how I can do text classification using these models: gpt-4, gpt-4-0314, gpt-4-32k, gpt-4-32k-0314, gpt-3.5-turbo, gpt-3.5-turbo-0301</p>
<python><openai-api>
2023-05-07 06:29:58
7
473
David Zayn
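In short, for the question above: `/v1/completions` takes one free-form prompt string and serves the older completion models (such as text-davinci-003), while `/v1/chat/completions` takes a list of role-tagged messages and is the endpoint behind gpt-3.5-turbo and the gpt-4 family, so classification with those models goes through the chat endpoint. A hedged sketch of the same rating classifier in the pre-1.0 `openai` library style used in the question (prompt wording is my own):

```python
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # or "gpt-4", "gpt-4-32k", dated variants
    messages=[
        {"role": "system",
         "content": "You rate reviews from 1 (worst) to 5 (best). "
                    "Reply with the integer only."},
        {"role": "user", "content": f'Review: "{review}"'},
    ],
    max_tokens=5,
    temperature=0,
)
rating = int(response.choices[0].message.content.strip())
```

Note the two structural differences from the completion call in the question: the system/user message list replaces the single `prompt`, and the answer comes back under `choices[0].message.content` instead of `choices[0].text`.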
76,192,459
3,391,549
How to convert pandas data frame to Huggingface Dataset grouped by column value?
<p>I have the following data frame <code>df</code></p> <pre><code>import pandas as pd from datasets import Dataset data = [[1, 'Jack', 'A'], [1, 'Jamie', 'A'], [1, 'Mo', 'B'], [1, 'Tammy', 'A'], [2, 'JJ', 'A'], [2, 'Perry', 'C']] df = pd.DataFrame(data, columns=['id', 'name', 'class']) &gt; df id name class 0 1 Jack A 1 1 Jamie A 2 1 Mo B 3 1 Tammy A 4 2 JJ A 5 2 Perry C </code></pre> <p>I would like to convert it to a Dataset object that has 2 rows, one per <code>id</code>. The desired output is</p> <pre><code>&gt; myDataset Dataset({ features: ['id', 'name', 'class'], num_rows: 2 }) </code></pre> <p>where</p> <pre><code>&gt; myDataset[0:2] {'id': ['1', '2'], 'name': [['Jack', 'Jamie', 'Mo', 'Tammy'],['JJ', 'Perry']], 'class': [['A', 'A', 'B', 'A'], ['A', 'C']]} </code></pre> <p>Based on the documentation <a href="https://huggingface.co/docs/datasets/main/en/loading#inmemory-data" rel="nofollow noreferrer">here</a>, I tried the following but that gave me a Dataset with 6 rows, instead of one with 2 rows grouped by the column <code>id</code></p> <pre><code>myDataset = Dataset.from_pandas(df) &gt; myDataset Dataset({ features: ['id', 'name', 'class'], num_rows: 6 }) &gt; myDataset[0:2] {'id': [1, 1], 'name': ['Jack', 'Jamie'], 'class': ['A', 'A']} </code></pre>
<python><pandas><huggingface>
2023-05-07 06:17:41
1
9,883
Adrian
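One approach to the question above (the groupby step is plain pandas; the final line assumes the usual `Dataset.from_pandas` and is left commented since it needs the `datasets` package): collapse the frame to one row per `id` first, aggregating the other columns into lists, and only then convert:

```python
import pandas as pd

data = [[1, 'Jack', 'A'], [1, 'Jamie', 'A'], [1, 'Mo', 'B'],
        [1, 'Tammy', 'A'], [2, 'JJ', 'A'], [2, 'Perry', 'C']]
df = pd.DataFrame(data, columns=['id', 'name', 'class'])

# One row per id, with the remaining columns gathered into lists
grouped = (df.groupby('id', as_index=False)
             .agg({'name': list, 'class': list}))

# Hand the collapsed frame to datasets as usual:
# from datasets import Dataset
# myDataset = Dataset.from_pandas(grouped)   # num_rows: 2
```

Note that `id` stays an integer column this way; the string ids shown in the desired output would need an explicit `grouped['id'] = grouped['id'].astype(str)` if that exact form matters.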
76,192,357
3,391,549
How to convert a pandas data frame to a sliceable dictionary
<p>I have a pandas data frame <code>df</code> that looks like this</p> <pre><code>import pandas as pd data = [[1, 'Jack', 'A'], [1, 'Jamie', 'A'], [1, 'Mo', 'B'], [1, 'Tammy', 'A'], [2, 'JJ', 'A'], [2, 'Perry', 'C']] df = pd.DataFrame(data, columns=['id', 'name', 'class']) &gt; df id name class 0 1 Jack A 1 1 Jamie A 2 1 Mo B 3 1 Tammy A 4 2 JJ A 5 2 Perry C </code></pre> <p>I want to convert this to a dictionary <code>mydict</code> where</p> <pre><code>&gt; mydict[0] {'id': '1', 'name': ['Jack', 'Jamie', 'Mo', 'Tammy'], 'class': ['A', 'A', 'B', 'A']} &gt; mydict[1] {'id': '2', 'name': ['JJ', 'Perry'], 'class': ['A', 'C']} </code></pre> <p>and</p> <pre><code>&gt; mydict[0:2] {'id': ['1', '2'], 'name': [['Jack', 'Jamie', 'Mo', 'Tammy'],['JJ', 'Perry']], 'class': [['A', 'A', 'B', 'A'], ['A', 'C']]} </code></pre> <p>I tried <code>mydict = df.to_dict()</code> but that didn't seem to work as intended.</p>
<python><pandas>
2023-05-07 05:42:45
2
9,883
Adrian
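For the question above, the same group-then-aggregate step gets close to both desired access patterns: a grouped DataFrame already supports integer indexing and slicing, and `to_dict` converts either view. A sketch (note the ids come out as integers, not the strings shown in the question's desired output):

```python
import pandas as pd

data = [[1, 'Jack', 'A'], [1, 'Jamie', 'A'], [1, 'Mo', 'B'],
        [1, 'Tammy', 'A'], [2, 'JJ', 'A'], [2, 'Perry', 'C']]
df = pd.DataFrame(data, columns=['id', 'name', 'class'])

# Collapse to one row per id, with the other columns as lists
grouped = (df.groupby('id', as_index=False)
             .agg({'name': list, 'class': list}))

single = grouped.iloc[0].to_dict()          # one group as a plain dict
window = grouped.iloc[0:2].to_dict('list')  # a slice as dict-of-lists
```

`df.to_dict()` alone cannot do this because it never regroups rows; the grouping has to happen on the frame before the conversion. Wrapping `grouped` in a small class with `__getitem__` dispatching to `iloc` would make the `mydict[0]` / `mydict[0:2]` syntax literal, if that is required.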
76,192,245
3,511,656
Trying to build a jacobian matrix using the multiprocessing library in python - how to share a matrix variable across multiple processes?
<p>I was trying to construct the jacobian matrix in python using central differences. I want to use the multiple cores that I have access to in order to build this matrix. I tried to use the Process library. I wasn't sure if this or the pool library would let me utilize more cores to complete this task. Suggestions on what would be better would be greatly appreciated!</p> <p>Anyhow, my thought process was that I would define an empty NxN matrix, and then have the processes fill in every column of this matrix using multiprocessing. It looks to me like the output is the zero matrix (what I instantiated it to be originally), but when I look at every process execution, the column of the matrix is correct.</p> <pre><code>jacobian after process 1 [[2. 0.] [4. 0.]] jacobian after process 2 [[0. 1.] [0. 4.]] final jacobian [[0. 0.] [0. 0.]] p=Process(target=jac_part, args=(i,x,h,Jac)) </code></pre> <p>It seems to me that the &quot;Jac&quot; I am passing is not being treated as an object that is being edited. Does anyone have any ideas how I can maintain the state as the different processes work on the matrix? Below is bare-bones code - any suggestions would greatly be appreciated.</p> <p>Thank you</p> <pre><code>import numpy as np import multiprocessing from multiprocessing import Process def Jacobian(f, h = 1e-6): return lambda x: jacobian_pool(f, x, h = h) def jacobian_pool(f, x, h): #pool = Pool(multiprocessing.cpu_count()-1) Jac = np.zeros((len(x),len(x))) processes=[] for i in range(len(x)): p=Process(target=jac_part, args=(i,x,h,Jac)) processes.append(p) for p in processes: p.start() p.join() return Jac def jac_part(i,x,h,Jac): x_plus = x.copy() x_minus = x.copy() x_plus[i] += h x_minus[i] -= h f_plus = f(x_plus) f_minus = f(x_minus) Jac[:, i] = (f_plus - f_minus) / (2 * h) print(Jac) def f(x): return np.array([x[0] ** 2 + x[1], 2 * x[0] ** 2 + x[1] ** 2]) x = np.array([1., 2.]) jf = Jacobian(f) print(jf(x)) </code></pre>
<python><matrix><multiprocessing>
2023-05-07 04:52:39
1
1,133
Eigenvalue
76,192,202
806,160
How can I solve the ANTLR Runtime version error in PySpark?
<p>I want to run code with pyspark that reads a csv file. The code is:</p> <pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession spark = (SparkSession.builder.appName(&quot;4-1&quot;) .getOrCreate()) csv_file = &quot;departuredelays.csv&quot; schema = &quot;`date` STRING, `delay` INT, `distance` INT, `origin` STRING, `destination` STRING&quot; df = spark.read.csv(csv_file, schema, inferSchema=True, header=True) </code></pre> <p>but when I run it, this error occurs:</p> <pre><code>An error occurred while calling o28.schema. : java.lang.ExceptionInInitializerError at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:107) at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parseTableSchema(ParseDriver.scala:76) at org.apache.spark.sql.types.StructType$.fromDDL(StructType.scala:543) at org.apache.spark.sql.DataFrameReader.schema(DataFrameReader.scala:92) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182) at py4j.ClientServerConnection.run(ClientServerConnection.java:106) at java.base/java.lang.Thread.run(Thread.java:833) Caused by: java.lang.UnsupportedOperationException: java.io.InvalidClassException: org.antlr.v4.runtime.atn.ATN; Could not deserialize ATN with version 3 (expected 4). 
at org.antlr.v4.runtime.atn.ATNDeserializer.deserialize(ATNDeserializer.java:56) at org.antlr.v4.runtime.atn.ATNDeserializer.deserialize(ATNDeserializer.java:48) at org.apache.spark.sql.catalyst.parser.SqlBaseLexer.&lt;clinit&gt;(SqlBaseLexer.java:1603) ... 16 more Caused by: java.io.InvalidClassException: org.antlr.v4.runtime.atn.ATN; Could not deserialize ATN with version 3 (expected 4). ... 19 more File &quot;E:\spark-3.3.2-bin-hadoop3\python\lib\py4j-0.10.9.5-src.zip\py4j\protocol.py&quot;, line 326, in get_return_value File &quot;E:\spark-3.3.2-bin-hadoop3\python\pyspark\sql\utils.py&quot;, line 190, in deco return f(*a, **kw) ^^^^^^^^^^^ File &quot;E:\spark-3.3.2-bin-hadoop3\python\lib\py4j-0.10.9.5-src.zip\py4j\java_gateway.py&quot;, line 1321, in __call__ File &quot;E:\spark-3.3.2-bin-hadoop3\python\pyspark\sql\readwriter.py&quot;, line 118, in schema self._jreader = self._jreader.schema(schema) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;E:\spark-3.3.2-bin-hadoop3\python\pyspark\sql\readwriter.py&quot;, line 50, in _set_opts self.schema(schema) # type: ignore[attr-defined] ^^^^^^^^^^^^^^^^^^^ File &quot;E:\spark-3.3.2-bin-hadoop3\python\pyspark\sql\readwriter.py&quot;, line 496, in csv self._set_opts( File &quot;F:1.py&quot;, line 14, in &lt;module&gt; df = spark.read.csv(csv_file, schema, inferSchema=True, header=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ py4j.protocol.Py4JJavaError: An error occurred while calling o28.schema. 
: java.lang.ExceptionInInitializerError at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:107) at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parseTableSchema(ParseDriver.scala:76) at org.apache.spark.sql.types.StructType$.fromDDL(StructType.scala:543) at org.apache.spark.sql.DataFrameReader.schema(DataFrameReader.scala:92) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182) at py4j.ClientServerConnection.run(ClientServerConnection.java:106) at java.base/java.lang.Thread.run(Thread.java:833) Caused by: java.lang.UnsupportedOperationException: java.io.InvalidClassException: org.antlr.v4.runtime.atn.ATN; Could not deserialize ATN with version 3 (expected 4). at org.antlr.v4.runtime.atn.ATNDeserializer.deserialize(ATNDeserializer.java:56) at org.antlr.v4.runtime.atn.ATNDeserializer.deserialize(ATNDeserializer.java:48) at org.apache.spark.sql.catalyst.parser.SqlBaseLexer.&lt;clinit&gt;(SqlBaseLexer.java:1603) ... 16 more Caused by: java.io.InvalidClassException: org.antlr.v4.runtime.atn.ATN; Could not deserialize ATN with version 3 (expected 4). ... 19 more </code></pre> <p>and</p> <pre><code>To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 
ANTLR Tool version 4.8 used for code generation does not match the current runtime version 4.12.0 ANTLR Runtime version 4.8 used for parser compilation does not match the current runtime version 4.12.0 </code></pre> <p>How can I solve this problem?</p>
<python><pyspark><runtime-error><antlr4>
2023-05-07 04:32:26
0
1,423
Tavakoli
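The message in the question above means an ANTLR 4.12 runtime jar on the classpath is shadowing the 4.8 runtime that Spark 3.3's generated SQL parser expects, so the real fix is finding and removing or downgrading the conflicting jar. One hedged workaround for this particular call (it sidesteps the DDL-string parsing visible in the trace at `StructType$.fromDDL`, but does not cure the underlying conflict, and Spark may still hit its parser elsewhere) is to build the schema programmatically:

```python
from pyspark.sql.types import IntegerType, StringType, StructField, StructType

# An explicit StructType never goes through the ANTLR-based DDL parser
schema = StructType([
    StructField('date', StringType()),
    StructField('delay', IntegerType()),
    StructField('distance', IntegerType()),
    StructField('origin', StringType()),
    StructField('destination', StringType()),
])

# inferSchema is dropped: it is redundant once a schema is supplied
df = spark.read.csv(csv_file, schema=schema, header=True)
```

This assumes the `spark` session and `csv_file` variables from the question's own snippet.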
76,192,108
2,475,195
Cursor in jupyter notebook is a green rectangle
<p>I think I may have inadvertently hit some keyboard shortcut on my laptop. Each time I click inside a cell, the cursor is a green rectangle, and it seems to behave in a command mode. If I enter <code>a</code> or <code>i</code> it goes back to normal (edit mode). Which is what I want pretty much all the time, so my experience is very annoying now, each time I click inside a cell, I have to press one of those keys before being able to do anything. Restarting the jupyter server didn't help. Anyone has an idea how I can restore defaults, i.e. go back to defaulting to edit mode? (sorry, no screenshot because this is at my job and I am not allowed to share a screenshot. I am attaching a cursor that looks similar. Basically, instead of a simple vertical line, it's a solid green rectangle)</p> <p><a href="https://i.sstatic.net/2oyCt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2oyCt.png" alt="enter image description here" /></a></p>
<python><jupyter-notebook><jupyter>
2023-05-07 03:44:46
1
4,355
Baron Yugovich
76,192,068
1,603,480
Retrieving Continuation Token in Azure DevOps Python API v7.0+ for Paginated Results
<p>In earlier versions of the Azure DevOps Python API (until 6.0.0b4), when making a request on some items (e.g. WorkItems, Test Suites, ...), you had a response object with a value and a <code>continuation_token</code> that you could use to make a new request and continue parsing.</p> <p>With more recent versions (in particular 7.0), you now get a list returned (but with the limit of size imposed by the API).</p> <p>For example, the function of the API have changed from:</p> <pre><code>def get_test_suites_for_plan(self, project, plan_id, expand=None, continuation_token=None, as_tree_view=None): ... :rtype: :class:`&lt;GetTestSuitesForPlanResponseValue&gt;` </code></pre> <p>to</p> <pre><code>def get_test_suites_for_plan(self, project, plan_id, expand=None, continuation_token=None, as_tree_view=None): ... :rtype: :class:`&lt;[TestSuite]&gt; &lt;azure.devops.v6_0.test_plan.models.[TestSuite]&gt;` </code></pre> <p>I am trying to retrieve the continuation token to continue parsing the other results, but it is not being returned in the response.</p> <p>Here is an example of the code I was using in older versions:</p> <pre class="lang-py prettyprint-override"><code>resp = client.get_test_suites_for_plan(project, my_plan_id) suites = resp.value while resp.continuation_token: resp = client.get_test_suites_for_plan(project, my_plan_id) suites += resp.value </code></pre> <p>How can I retrieve the continuation token to continue parsing the other results with a more recent version (7.0 and above)?</p> <p><em>Note: I have already opened an issue in the GitHub repo of Azure DevOps Python API: <a href="https://github.com/microsoft/azure-devops-python-api/issues/461" rel="nofollow noreferrer">https://github.com/microsoft/azure-devops-python-api/issues/461</a></em></p>
<python><azure-devops><pagination>
2023-05-07 03:25:01
1
13,204
Jean-Francois T.
76,192,051
8,936,561
Why a GeneratorExit instance is replaced with another one inside an inner coroutine?
<p>I'm developing my own async-library, and have been facing an issue regarding <code>coroutine.throw()</code>. Take a look at the code below. This code works as I expect.</p> <pre class="lang-py prettyprint-override"><code>import types import pytest @types.coroutine def sleep_forever(): yield lambda __: None async def async_fn(): try: await sleep_forever() except BaseException as e: print(id(e), 'caught') if e.args and (e.args[0] == 'Hello'): return pytest.fail() async def wrapper(): return await async_fn() @pytest.mark.parametrize('exc_cls', (Exception, BaseException, GeneratorExit)) def test_exc(exc_cls): print(&quot;\n&quot;, exc_cls) coro = async_fn() coro.send(None) e = exc_cls('Hello') print(id(e), 'about to be thrown') try: coro.throw(e) except StopIteration: pass if __name__ == '__main__': pytest.main(['-s', __file__]) </code></pre> <pre><code> &lt;class 'Exception'&gt; 139665311893360 about to be thrown 139665311893360 caught . &lt;class 'BaseException'&gt; 139665311894960 about to be thrown 139665311894960 caught . &lt;class 'GeneratorExit'&gt; 139665312313152 about to be thrown 139665312313152 caught . </code></pre> <p>All the tests succeeded, meaning <code>async_fn()</code> successfully received the <code>'Hello'</code> in each test case. You can also see the exception instance at the <code>coro.throw(e)</code> line and the one at the <code>except BaseException as e:</code> line are identical as thier ids are the same.</p> <p>However things went wrong when I wrapped the <code>async_fn()</code> in another async function. (Replacing the <code>coro = async_fn()</code> line with <code>coro = wrapper()</code>)</p> <pre><code> &lt;class 'Exception'&gt; 140647138076928 about to be thrown 140647138076928 caught . &lt;class 'BaseException'&gt; 140647138078608 about to be thrown 140647138078608 caught . 
&lt;class 'GeneratorExit'&gt; 140647138525488 about to be thrown 140647138078288 caught F ==================== FAILURES ===================== _______ test_exc[GeneratorExit] _______ exc_cls = &lt;class 'GeneratorExit'&gt; @pytest.mark.parametrize('exc_cls', (Exception, BaseException, GeneratorExit)) def test_exc(exc_cls): print(&quot;\n&quot;, exc_cls) coro = wrapper() coro.send(None) e = exc_cls('Hello') print(id(e), 'about to be thrown') try: &gt; coro.throw(e) test_exc.py:32: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_exc.py:21: in wrapper return await async_fn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ async def async_fn(): try: await sleep_forever() except BaseException as e: print(id(e), 'caught') if e.args and (e.args[0] == 'Hello'): return &gt; pytest.fail() E Failed test_exc.py:17: Failed </code></pre> <p>As you can see, the test failed and the exception instances weren't identical when <code>exc_cls</code> is <code>GeneratorExit</code>.</p> <p>I feel like I should not directly instantiate a <code>GeneratorExit</code>. It might always have to be instantiated through <code>coroutine.close()</code>. In other words, there is no way to pass a <code>GeneratorExit</code> instance a value, and thus, there is no way to make the tests succeed.</p> <p>I'm not 100% sure it is impossible. Is there a way to make it work? (I don't need the exception instances to be identical. I just want to pass the <code>'Hello'</code> to <code>async_fn()</code>.)</p> <h3>Environment</h3> <ul> <li>CPython 3.9.7</li> <li>pytest 7.3.1</li> </ul>
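For reference, the same instance replacement can be reproduced with plain generators: per PEP 342/380 semantics, throwing `GeneratorExit` into a delegating generator makes it call `close()` on the inner one, and `close()` raises a fresh, argument-less `GeneratorExit`. A minimal stdlib sketch (not the exact coroutine setup above):

```python
caught = []

def inner():
    try:
        yield
    except GeneratorExit as e:
        # close() raised a *new* GeneratorExit here; the 'Hello' args are gone
        caught.append(e.args)

def outer():
    # delegation point, analogous to `return await async_fn()`
    yield from inner()

g = outer()
next(g)
try:
    g.throw(GeneratorExit("Hello"))
except GeneratorExit:
    pass  # the original instance propagates out of the *outer* generator

print(caught)  # [()] : the inner frame never saw GeneratorExit('Hello')
```

This suggests the replacement happens in the delegation machinery itself, so passing a payload through a thrown `GeneratorExit` is not reliable; an out-of-band channel (for example, an attribute the inner coroutine checks) avoids the problem.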
<python><python-3.x><async-await>
2023-05-07 03:17:38
0
988
Nattōsai Mitō
76,191,862
395,857
How can I fine-tune mBART-50 for machine translation in the transformers Python library so that it learns a new word?
<p>I try to fine-tune mBART-50 (<a href="https://arxiv.org/pdf/2008.00401" rel="nofollow noreferrer">paper</a>, <a href="https://huggingface.co/facebook/mbart-large-50" rel="nofollow noreferrer">pre-trained model on Hugging Face</a>) for machine translation in the transformers Python library. To test the fine-tuning, I am trying to simply teach mBART-50 a new word that I made up.</p> <p>I use the following code. Over 95% of the code is from the <a href="https://huggingface.co/docs/transformers/model_doc/mbart#training-of-mbart50" rel="nofollow noreferrer">Hugging Face documentation</a>:</p> <pre><code>from transformers import MBartForConditionalGeneration, MBart50TokenizerFast print('Model loading started') model = MBartForConditionalGeneration.from_pretrained(&quot;facebook/mbart-large-50&quot;) tokenizer = MBart50TokenizerFast.from_pretrained(&quot;facebook/mbart-large-50&quot;, src_lang=&quot;fr_XX&quot;, tgt_lang=&quot;en_XX&quot;) print('Model loading done') src_text = &quot; billozarion &quot; tgt_text = &quot; plorization &quot; model_inputs = tokenizer(src_text, return_tensors=&quot;pt&quot;) with tokenizer.as_target_tokenizer(): labels = tokenizer(tgt_text, return_tensors=&quot;pt&quot;).input_ids print('Fine-tuning started') for i in range(1000): #pass model(**model_inputs, labels=labels) # forward pass print('Fine-tuning ended') # Testing whether the model learned the new word. Translate French to English tokenizer = MBart50TokenizerFast.from_pretrained(&quot;facebook/mbart-large-50-many-to-many-mmt&quot;) tokenizer.src_lang = &quot;fr_XX&quot; article_fr = src_text encoded_fr = tokenizer(article_fr, return_tensors=&quot;pt&quot;) generated_tokens = model.generate(**encoded_fr, forced_bos_token_id=tokenizer.lang_code_to_id[&quot;en_XX&quot;]) translation = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) print(translation) </code></pre> <p>However, the new word wasn't learned. 
The output is &quot;billozarion&quot; instead of &quot;plorization&quot;. Why?</p> <p>I'm strictly following the Hugging Face documentation, unless I missed something. The <code># forward pass</code> does make me concerned, as one would need a backward pass to update the gradients. Maybe this means that the documentation is incorrect, however I can't test that hypothesis as I don't know how to add the backward pass.</p> <hr /> <p>Environment that I used to run the code: Ubuntu 20.04.5 LTS with an NVIDIA A100 40GB GPU (I also tested with an NVIDIA T4 Tensor Core GPU) and CUDA 12.0 with the following conda environment:</p> <pre><code>conda create --name mbart-python39 python=3.9 conda activate mbart-python39 pip install transformers==4.28.1 pip install chardet==5.1.0 pip install sentencepiece==0.1.99 pip install protobuf==3.20 </code></pre>
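A likely culprit is that the loop above only runs forward passes: nothing computes gradients or updates weights, so the model cannot learn anything. In the transformers/PyTorch setting that would mean something like `loss = model(**model_inputs, labels=labels).loss`, then `loss.backward()` and an optimizer step (treat the exact names as the usual PyTorch API, to be checked against your versions). A toy, framework-free illustration of why the two extra steps matter:

```python
# Gradient descent on loss = (w - target)**2.
# The "forward pass" alone computes the loss but never moves w;
# learning only happens with the gradient + update steps.
w = 5.0          # stand-in for a model weight
target = 3.0
lr = 0.1

for _ in range(1000):
    loss = (w - target) ** 2     # forward pass (what the quoted loop does)
    grad = 2 * (w - target)      # backward pass ("loss.backward()")
    w -= lr * grad               # update ("optimizer.step()")

print(round(w, 6))  # 3.0: the weight actually moved toward the target
```

Dropping the last two lines leaves `w` at 5.0 forever, which mirrors the model still outputting the untouched "billozarion".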
<python><huggingface-transformers><pre-trained-model><machine-translation><fine-tuning>
2023-05-07 01:29:35
1
84,585
Franck Dernoncourt
76,191,788
2,150,411
pydantic - (de)serialize list to dict
<p><strong>With:</strong> Pydantic, (de)serializing to/from JSON</p> <p><em>Goal:</em> deserialize a List of objects, to a dictionary, lifting a property on the child object, as the key in the dictionary for the serialized form, and of course back again when deserializing.</p> <p><strong>Example:</strong></p> <p>I have a class model</p> <pre class="lang-py prettyprint-override"><code>import uuid from typing import List from pydantic import BaseModel class Recipe(BaseModel): id: uuid.UUID name: str description: str class CountryDetails(BaseModel): name: str recipes: List[Recipe] </code></pre> <p>I am after the JSON to look like</p> <pre class="lang-json prettyprint-override"><code>{ &quot;name&quot;: &quot;australia&quot;, &quot;recipes&quot;: { &quot;a1acd620-0e87-4cbe-8d15-a4a3aff00cc3&quot;: { &quot;name&quot;: &quot;shrimp on the barbie&quot;, &quot;description&quot;: &quot;classic prawn on the BBQ&quot; }, &quot;175f2a58-33c4-4886-920f-cb073101f104&quot;: { &quot;name&quot;: &quot;vegemite toast&quot;, &quot;description&quot;: &quot;toast + vegemite; nice&quot; } } } </code></pre> <p>How do I go about this ? (my google-foo is failing me today)</p>
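Pydantic's hooks aside (custom validators and a `dict()`/`json()` override are the usual tools, but their spelling differs between pydantic versions, so treat any specific hook name as something to verify), the lift/unlift transform itself is plain dictionary work. A stdlib sketch with illustrative data:

```python
def lift(recipes, key="id"):
    """List of recipe dicts -> {id: recipe-without-id} mapping."""
    return {r[key]: {k: v for k, v in r.items() if k != key} for r in recipes}

def unlift(mapping, key="id"):
    """Inverse transform: {id: recipe-without-id} -> list of recipe dicts."""
    return [{key: rid, **rest} for rid, rest in mapping.items()]

recipes = [
    {"id": "a1acd620", "name": "shrimp on the barbie",
     "description": "classic prawn on the BBQ"},
    {"id": "175f2a58", "name": "vegemite toast",
     "description": "toast + vegemite; nice"},
]

lifted = lift(recipes)
print(lifted["175f2a58"]["name"])   # vegemite toast
print(unlift(lifted) == recipes)    # True: the transform round-trips
```

Wiring `lift` into serialization and `unlift` into a pre-validation step gives the desired JSON shape while keeping the `List[Recipe]` model internally.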
<python><serialization><deserialization><pydantic>
2023-05-07 00:44:03
1
501
Ramon
76,191,661
8,134,164
Module 'swagger_client' has no attribute 'CustomSpeechTranscriptionsApi'
<p>I'm using Microsoft's official GitHub repository to set up <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch/python" rel="nofollow noreferrer">Azure Speech Services Batch Transcription API from Python</a></p> <p>I receive an error from line number 142 in main.py which states</p> <pre><code>AttributeError: module 'swagger_client' has no attribute 'CustomSpeechTranscriptionsApi' </code></pre> <p>I suppose this has to do with the recent changes made to the swagger_client. How to fix this issue any thoughts?</p> <p>I already followed the below threads and noticed a few others also reported the same issue but nothing helped,<br> <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/477" rel="nofollow noreferrer">Thread 1</a>, <a href="https://github.com/MicrosoftDocs/azure-docs/issues/50995" rel="nofollow noreferrer">Thread 2</a></p> <p>To debug I gave <code>print(dir(swagger_client))</code> before it throws the above error and here is the output of the print statement</p> <pre><code>['ApiClient', 'ApiSpeechtotextV30DatasetsLocalesGet200ApplicationJsonResponse', 'ApiSpeechtotextV30EndpointsLocalesGet200ApplicationJsonResponse', 'ApiSpeechtotextV30EvaluationsLocalesGet200ApplicationJsonResponse', 'ApiSpeechtotextV30ModelsLocalesGet200ApplicationJsonResponse', 'ApiSpeechtotextV30ProjectsLocalesGet200ApplicationJsonResponse', 'ApiSpeechtotextV30TranscriptionsLocalesGet200ApplicationJsonResponse', 'Component', 'Configuration', 'Dataset', 'DatasetProperties', 'DatasetUpdate', 'DefaultApi', 'Endpoint', 'EndpointLinks', 'EndpointProperties', 'EndpointPropertiesUpdate', 'EndpointUpdate', 'EntityError', 'EntityReference', 'Error', 'ErrorContent', 'ErrorDetail', 'Evaluation', 'EvaluationProperties', 'EvaluationUpdate', 'File', 'FileLinks', 'FileProperties', 'HealthStatus', 'InnerError', 'InnerErrorV2', 'InternalModel', 'Links', 'ManagementModel', 'ManagementModelArray', 'ManagementModelProperties', 
'Model', 'ModelCopy', 'ModelDeprecationDates', 'ModelFile', 'ModelLinks', 'ModelManifest', 'ModelProperties', 'ModelUpdate', 'PaginatedDatasets', 'PaginatedEndpoints', 'PaginatedEvaluations', 'PaginatedFiles', 'PaginatedModels', 'PaginatedProjects', 'PaginatedTranscriptions', 'PaginatedWebHooks', 'Project', 'ProjectLinks', 'ProjectProperties', 'ProjectUpdate', 'Transcription', 'TranscriptionProperties', 'TranscriptionUpdate', 'WebHook', 'WebHookEvents', 'WebHookEvents1', 'WebHookLinks', 'WebHookProperties', 'WebHookPropertiesUpdate', 'WebHookUpdate', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'absolute_import', 'api', 'api_client', 'configuration', 'models', 'rest'] </code></pre>
<python><swagger><azure-cognitive-services><azure-speech>
2023-05-06 23:48:49
1
4,372
Indrajith Ekanayake
76,191,616
7,662,164
Jit a JAX function that select a function from a dictionary
<p>I have a JAX function that, given the order and the index, selects a polynomial from a pre-defined dictionary, as follows:</p> <pre><code>poly_dict = { (0, 0): lambda x, y, z: 1., (1, 0): lambda x, y, z: x, (1, 1): lambda x, y, z: y, (1, 2): lambda x, y, z: z, (2, 0): lambda x, y, z: x*x, (2, 1): lambda x, y, z: y*y, (2, 2): lambda x, y, z: z*z, (2, 3): lambda x, y, z: x*y, (2, 4): lambda x, y, z: y*z, (2, 5): lambda x, y, z: z*x } def poly_func(order: int, index: int): try: return poly_dict[(order, index)] except KeyError: print(&quot;(order, index) must be a key in poly_dict!&quot;) return </code></pre> <p>Now I want to jit <code>poly_func()</code>, but it gives an error <code>TypeError: unhashable type: 'DynamicJaxprTracer'</code> Moreover, if I just do</p> <pre><code>def poly_func(order: int, index: int): return poly_dict[(order, index)] </code></pre> <p>it still gives the same error. Is there a way to resolve this issue?</p>
<python><dictionary><conditional-statements><jit><jax>
2023-05-06 23:31:10
1
335
Jingyang Wang
76,191,444
2,256,031
How to optimize n async concurrent calls made in Kotlin
<p>I have the following code in Kotlin and curious if there is a way to further optimize it compared to the Python implementation that I think is somewhat equivalent?</p> <p>Kotlin version <code>1.8.20-release-327 (JRE 17.0.7+7</code></p> <pre><code>import java.net.http.HttpClient import java.net.http.HttpRequest import java.net.http.HttpResponse import java.net.URI import kotlin.system.measureTimeMillis import kotlinx.coroutines.* fun main() = runBlocking { val timeElapsed = measureTimeMillis { val deferreds: List&lt;Deferred&lt;String&gt;&gt; = (1..7).map { async { val client = HttpClient.newBuilder().build(); val request = HttpRequest.newBuilder() .uri(URI.create(&quot;URL_WITH_INCREASING_PATH_PARAMETER/$it&quot;)) .build(); val response = client.send(request, HttpResponse.BodyHandlers.ofString()); response.body() } } val data = deferreds.awaitAll() println(&quot;$data&quot;) } println(&quot;time elapsed $timeElapsed&quot;) } // time elapsed 2126 </code></pre> <p>Python version <code>3.11.1</code></p> <pre class="lang-py prettyprint-override"><code>import time import aiohttp import asyncio start_time = time.time() async def get_data(session, url): async with session.get(url) as response: data = await response.json() return data async def main(): async with aiohttp.ClientSession() as session: tasks = [] for number in range(1, 7): url = f'URL_WITH_INCREASING_PATH_PARAMETER/{number}' tasks.append(asyncio.ensure_future(get_data(session, url))) data_list = await asyncio.gather(*tasks) print(data_list) asyncio.run(main()) print('time elapsed %s' % (time.time() - start_time)) # time elapsed 0.7383 </code></pre>
<python><kotlin><asynchronous><async-await><kotlin-coroutines>
2023-05-06 22:27:09
1
1,785
Petesta
76,190,962
4,322
Running a script after creation of instance in GCP Managed Instance Group
<p>I'm using the distributed compute framework Bacalhau[0]. The pattern for setting up a cluster is the following:</p> <pre><code>$ curl -sL https://get.bacalhau.org/install.sh | bash [...output...] $ bacalhau serve To connect another node to this private one, run the following command in your shell: bacalhau serve --node-type compute --private-internal-ipfs --peer /ip4/10.158.0.2/tcp/1235/p2p/QmeEoVj8wyxMxhcUSr6p7EK1Dcie7PvNeXCVQny15Htb1W --ipfs-swarm-addr /ip4/10.158.0.2/tcp/46199/p2p/QmVPFmHmruuuAcEmsGRapB6yDDaPxhf2huqa9PhPVEHK8F </code></pre> <p>(doing this in a production friendly format involves using systemd - I have excluded it here).</p> <p>What I'd like to do is have a Google managed instance group that watches a Cloud Pub/Sub (not covered here) to create a new instance when the signal is made. The problem is that the peering string is only known after the first instance starts. My initial thought is that I would start one instance, capture the output, and write it to a common location which everything could read from.</p> <p>I've thought about the following patterns:</p> <ol> <li>Create an instance template that checks a central endpoint (KV store?) for this information</li> <li>Create an instance template that reads from a GCS bucket for this information</li> <li>Something else?</li> </ol> <p>I've read this piece[1] about leader election using GCS, but can I force GCS as the locking mechanism? Or do I need to use a whole library[2]? Or is there another solution? I can use any managed service on GCP to accomplish this.</p> <p>My preference would NOT be to use golang, but to use a non-compiled language (e.g. 
Python) to accomplish this.</p> <p>[0] <a href="https://docs.bacalhau.org/quick-start-pvt-cluster" rel="nofollow noreferrer">https://docs.bacalhau.org/quick-start-pvt-cluster</a></p> <p>[1] <a href="https://cloud.google.com/blog/topics/developers-practitioners/implementing-leader-election-google-cloud-storage" rel="nofollow noreferrer">https://cloud.google.com/blog/topics/developers-practitioners/implementing-leader-election-google-cloud-storage</a></p> <p>[2] <a href="https://pkg.go.dev/github.com/hashicorp/vault/physical/gcs" rel="nofollow noreferrer">https://pkg.go.dev/github.com/hashicorp/vault/physical/gcs</a></p>
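Whatever the transport, the core primitive here is an atomic create-if-absent write: GCS exposes it through generation preconditions (`ifGenerationMatch=0`, the mechanism behind the leader-election article in [1]), so a whole locking library is not strictly required. A local stdlib sketch of the same idea, with illustrative names:

```python
import os
import tempfile

def try_become_leader(lock_path, my_id):
    """First caller wins: atomic create-if-absent, like ifGenerationMatch=0."""
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # another instance already wrote the peering info
    with os.fdopen(fd, "w") as f:
        f.write(my_id)  # e.g. the `bacalhau serve --peer ...` string
    return True

lock = os.path.join(tempfile.mkdtemp(), "leader.lock")
print(try_become_leader(lock, "node-1"))  # True:  first instance becomes leader
print(try_become_leader(lock, "node-2"))  # False: later instances read instead
```

In the managed-instance-group version, the instance that wins the conditional write publishes its peering string into the object; every other instance polls and reads it on startup.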
<python><go><google-cloud-platform><google-cloud-storage>
2023-05-06 20:21:46
2
7,148
aronchick
76,190,916
1,953,475
Attrition Calculation Performance
<p>TLDR:</p> <ol> <li>how to speed up the SQL calculation for attrition given it is O(n^2)</li> <li>how to distribute the Python implementation for attrition given it maintains a counter (cumulative leavers and cumulative joiners)</li> <li>any other ideas? (influxdb? kinetica? )</li> </ol> <p>I need to calculate <a href="https://www.gartner.com/en/human-resources/glossary/attrition#:%7E:text=Attrition%20is%20the%20departure%20of,%2C%20termination%2C%20death%20or%20retirement." rel="nofollow noreferrer">employee attrition</a> for every company, every month using a big table that looks like this. Startdate and enddate are the first day of the month to keep things simple, as a monthly attrition rate is sufficient.</p> <pre><code>peopleid | companyid | startdate | enddate Joe | MSFT | 2010-01-01 | 2015-02-01 ... </code></pre> <p>I have a SQL solution (pseudo code below) that is taking a very long time, mostly due to the cross join between the calendar table and the employment history, which is big.</p> <p>For example, the history table has 10B rows, and 50 years span 600 months in total. 
The cross join will yield 10e9 * 6e2 = 6e12, 6 trillion, which is very expensive.</p> <pre><code>with cal as ( select yearmonth from calendar ), people_left ( select cal.yearmonth, eh.companyid, count(distinct eh.peopleid) as peopleid_left_count from cal inner join employment_history eh on cal.yearmonth = eh.enddate_yearmonth group by cal.yearmonth, eh.companyid ), people_avg ( select cal.yearmonth, eh.companyid, count(distinct eh.peopleid) as peopleid_count from cal inner join employment_history eh on cal.yearmonth &gt;= eh.startdate_yearmonth and cal.yearmonth &lt;= eh.enddate_yearmonth group by cal.yearmonth, eh.companyid ) select n.yearmonth, n.companyid, n.peopleid_left_count / d.peopleid_count as attrition -- division by zero from people_left n inner join people_avg d on n.yearmonth = d.yearmonth and n.companyid = d.companyid; </code></pre> <p>I also implemented a Python solution; here are the key steps:</p> <ol> <li>restructure the employment history table as an employment events table ( previously one job with a start and end date now got split into two records, with one for joining and the other for leaving, indexed by month for quick lookup O(1) )</li> <li>maintain one big lookup dictionary O(1) for each company as the counters for cumulative joiners, and cumulative leavers</li> <li>start from month 1 and go through each month one by one till the end</li> <li>within each month, find all the joining and leaving events, update the counters for that company that month, and calculate the churn using that month's leavers divided by the difference between cumulative joiners and cumulative leavers</li> </ol> <p>This is significantly faster than the SQL implementation, but is it even possible to distribute the computing to speed it up? The accumulation step is dependent on all previous snapshots, so I cannot wrap my head around how to think in a mapreduce way.</p>
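A minimal single-process sketch of steps 1-4 above (toy data, months as integers). One observation relevant to distribution: the running counters are a per-company prefix sum over months, so companies, not months, are the natural partition key; each company's timeline can be swept independently on a different worker:

```python
from collections import defaultdict

# toy employment history: (person, company, start_month, end_month)
history = [
    ("joe", "MSFT", 0, 3),
    ("ann", "MSFT", 0, 5),
    ("bob", "MSFT", 2, 3),
]

# step 1: split each job into one join event and one leave event, keyed by month
joins = defaultdict(lambda: defaultdict(int))
leaves = defaultdict(lambda: defaultdict(int))
for _, company, start, end in history:
    joins[start][company] += 1
    leaves[end][company] += 1

# step 2: running counters per company
cum_join = defaultdict(int)
cum_leave = defaultdict(int)
attrition = {}

# steps 3-4: sweep the months in order, updating counters as we go
for month in range(6):
    for company, n in joins[month].items():
        cum_join[company] += n
    for company, n in leaves[month].items():
        headcount = cum_join[company] - cum_leave[company]  # before departures
        attrition[(month, company)] = n / headcount
        cum_leave[company] += n

print(attrition[(3, "MSFT")])  # 2 leavers out of 3 on payroll
```

Sharding `history` by company and running this sweep per shard parallelizes the whole computation without breaking the cumulative dependency.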
<python><sql><pandas><pyspark><time-series>
2023-05-06 20:10:19
1
19,728
B.Mr.W.
76,190,897
4,966,317
How to split a column of defined strings written without spaces in pandas, e.g. appleorange to apple orange?
<p>I am trying to write a code in python that splits a column value in a pandas dataframe. This column will have values like <code>appleorangemango</code> that I want to split to <code>apple orange mango</code>. I will have a large set of unique words that I will split with respect to them.</p> <p>Assume that I have this dataframe called <code>unique_fruits</code>:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">unique_fruits</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">mango</td> </tr> <tr> <td style="text-align: center;">apple</td> </tr> <tr> <td style="text-align: center;">orange</td> </tr> <tr> <td style="text-align: center;">apricot</td> </tr> <tr> <td style="text-align: center;">peach</td> </tr> </tbody> </table> </div> <p>And another dataframe of fruits that are written without spaces called <code>my_fruits</code>:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">my_fruits</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">mango</td> </tr> <tr> <td style="text-align: center;"></td> </tr> <tr> <td style="text-align: center;">apricotapple</td> </tr> <tr> <td style="text-align: center;">orangemango peach</td> </tr> <tr> <td style="text-align: center;">banana</td> </tr> </tbody> </table> </div> <p>Please note that <code>banana</code> is not in <code>unique_fruits</code> dataframe. Also, sometimes the column can contain spaces as in <code>orangemango peach</code>. Finally, the column can be a single fruit or blank as in the first and second rows of <code>my_fruits</code>.</p> <p>I am planning to read an excel file and save it to a dataframe. Then, try to find out patterns that I can split based on them. If I found something new, then I will get a list of unknown words. 
I will manually add the new split versions of the unknown words and then repeat until I feel that everything is perfect or almost perfect.</p> <p>An example of unknown words is <code>bananastrawberry</code>. Both <code>banana</code> and <code>strawberry</code> are new unknown words that I will add to the <code>unique_fruits</code> dataframe and then re-run the code.</p> <p>If I have <code>pine</code>, <code>apple</code> and <code>pineapple</code> added to <code>unique_fruits</code>, then I prefer to have it as <code>pineapple</code>. I will split only if I don't have <code>pineapple</code> in <code>unique_fruits</code>.</p>
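For the splitting step itself (dataframes aside: this would be applied per cell), a greedy longest-match pass over a sorted vocabulary handles the pineapple-versus-pine+apple preference and collects unknown chunks for the manual-review loop. A stdlib sketch; note that greedy matching can mis-split pathological overlaps, which this workflow would surface as unknowns:

```python
def split_concatenated(text, vocab):
    """Greedy longest-match split; returns (matched_words, unknown_chunks)."""
    ordered = sorted(vocab, key=len, reverse=True)  # "pineapple" before "pine"
    matched, unknown = [], []
    for token in text.split():  # spaces already separate some fruits
        i = 0
        while i < len(token):
            for word in ordered:
                if token.startswith(word, i):
                    matched.append(word)
                    i += len(word)
                    break
            else:
                unknown.append(token[i:])  # collect for manual review
                break
    return matched, unknown

unique_fruits = ["mango", "apple", "orange", "apricot", "peach",
                 "pine", "pineapple"]
print(split_concatenated("orangemango peach", unique_fruits))
# (['orange', 'mango', 'peach'], [])
print(split_concatenated("bananastrawberry", unique_fruits))
# ([], ['bananastrawberry'])
```

Applied over the `my_fruits` column, the collected unknowns become the list of words to add to `unique_fruits` before re-running.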
<python><python-3.x><pandas>
2023-05-06 20:06:12
1
2,643
Ambitions
76,190,870
10,564,566
Failing to patch a Python function
<p>I have a problem with mocking functions outside classes in unittest. Let's say that my project looks like this:</p> <p><a href="https://i.sstatic.net/WxQwk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WxQwk.png" alt="enter image description here" /></a></p> <p>file a.py:</p> <pre><code>class A: def func(self): raise NotImplemented def func2(): raise NotImplemented </code></pre> <p>file test_a.py:</p> <pre><code>from unittest.mock import patch from code.a import A, func2 def test_good(): with patch( 'code.a.A.func', ): a_obj = A() a_obj.func() # patching is working fine, no error def test_bad(): with patch( 'code.a.func2', ): func2() # getting Value error -&gt; patching isn't working </code></pre> <p>When I am mocking a method inside the class everything is OK, but when I try to do it with a function outside a class it does not patch it. I would be grateful for your help. I am using Python 3.8.</p>
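The usual explanation for this pattern: `patch('code.a.func2')` does replace the attribute on the module, but the test file grabbed `func2` with `from code.a import func2`, so its local name still points at the original function; the mocked-method case works because `A.func` is always looked up through the class. A self-contained sketch with a synthetic module standing in for `code.a`:

```python
import sys
import types
from unittest import mock

# build a stand-in module so the example does not need a real package
mod = types.ModuleType("mylib")
def func2():
    raise ValueError("real func2")
mod.func2 = func2
sys.modules["mylib"] = mod

from mylib import func2 as imported_func2  # like `from code.a import func2`

with mock.patch("mylib.func2"):
    import mylib
    mylib.func2()              # looked up through the module: patched, no error
    try:
        imported_func2()       # the from-import still holds the original
        hit_original = False
    except ValueError:
        hit_original = True

print(hit_original)  # True: patch where the name is *used*, or call via module
```

So the fix is either `import code.a` and call `code.a.func2()` in the test, or patch the name in the module where it is actually looked up.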
<python><python-3.x><unit-testing><mocking><python-unittest>
2023-05-06 20:01:45
0
539
Matmozaur
76,190,787
6,567,212
How to write a pandas dataframe to kdb+ partition table on disk
<p>I need to use kdb+ in Python, so tried <a href="https://github.com/exxeleron/qPython" rel="nofollow noreferrer">qpython</a> 2.0 to write a Pandas dataframe like this</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'sym':['abc','def','ghi'],'price':[10.1,10.2,10.3]}) with qconnection.QConnection(host = 'localhost', port = 5001, pandas = True) as q: q.sendSync('{t::x}',df) </code></pre> <p>and query:</p> <pre class="lang-py prettyprint-override"><code>with qconnection.QConnection(host = 'localhost', port = 5001, pandas = True) as q: df1 = q.sendSync('select from t') </code></pre> <p>It looks working, but the table is saved in memory. How to write the Pandas dataframe directly into a partitioned table on disk?</p>
<python><pandas><database><dataframe><kdb+>
2023-05-06 19:41:29
1
329
ningl
76,190,747
595,305
How can I install pyenv-win Python versions outside C:\Users?
<p>My C: drive is always too full, some useful stuff but 80% execrable, unremovable Microsoft bloatware. I always install new stuff on another drive.</p> <p>I've tried <a href="https://github.com/pyenv-win/pyenv-win" rel="nofollow noreferrer">this installation page</a> for installing pyenv-win.</p> <p>Even if I set the pyenv installation directory itself somewhere off C: drive, I find that <code>pyenv install [version]</code> installs that version under C:\Users... Python version 3.10.5 (for example) takes up 200 MB.</p> <p>I went <code>pyenv install --help</code> but could see no options for installing Python versions somewhere else. Is it possible?</p>
<python><windows-10><pyenv><installation-path>
2023-05-06 19:32:19
2
16,076
mike rodent
76,190,703
21,784,274
Is there any way to protect Django source code from being revealed after giving server access to the client?
<p>After deploying a Django project and giving server access to the client, is there any way to protect the source code from being revealed?</p> <p>Maybe something like separating the servers that hold the database and the source code?<br /> Or having some lock or something on the Docker container of the Django project?</p>
<python><django><database><docker><source-code-protection>
2023-05-06 19:21:57
0
947
Mohawo
76,190,531
13,055,818
__radd__ operation with numpy number results in a __getitem__ loop
<p>I am implementing a class that handles basic binary operations such as addition, multiplication, subtraction and division with their respective variants (reversed, in-place). I encountered an unexpected behaviour that I am trying to understand. Unfortunately, even by looking at numpy's implementation of <code>__add__</code> of <code>unsignedinteger</code>, I cannot.</p> <p>To reproduce this behavior you can simply run this code:</p> <pre><code>import numpy as np class test: def __init__(self): self.a = 1 def __len__(self): return 1 def __getitem__(self, index): print(&quot;getitem&quot;) return self.a def __radd__(self, other): return self.a + other a = test() b = np.uint8(1) + a </code></pre> <p>It will result in a <code>__getitem__</code> loop. Of course my actual code works a bit differently but still faces the exact same issue. I also tried to use the Python debugger in order to better understand the behavior of the operation that actually was called. What I wish to do is mainly, when I run this code:</p> <pre><code>b = np.uint8(10) + test() </code></pre> <p>To actually execute the <code>__radd__</code> operation. I understand it is happening because <code>numpy.unsignedinteger.__add__</code> is executed. Is there any pythonic way to prevent it or, even better, to fix it?</p>
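Part of the behaviour reproduces without numpy: because `test` defines `__len__` plus a `__getitem__` that never raises `IndexError`, anything falling back to the legacy sequence protocol will iterate it forever, and numpy treats such sequence-likes elementwise instead of deferring to `__radd__`. (On the numpy side, the documented opt-outs to look at are `__array_ufunc__ = None` and `__array_priority__`; verify against your numpy version.) A stdlib-only sketch of the runaway iteration:

```python
from itertools import islice

class Test:
    def __len__(self):
        return 1
    def __getitem__(self, index):
        return 1  # never raises IndexError, so iteration never terminates

# iter() falls back to the legacy __getitem__ protocol (indices 0, 1, 2, ...)
seq = iter(Test())
print(list(islice(seq, 5)))  # [1, 1, 1, 1, 1] ... and it would go on forever
```

Raising `IndexError` once the index passes the declared length makes the object a well-behaved sequence again.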
<python><numpy>
2023-05-06 18:39:15
1
519
Maxime Debarbat
76,190,481
6,224,975
Remove the observations which are more than the i'th duplicated observation in pandas
<p>Say I have a dataframe like</p> <pre class="lang-py prettyprint-override"><code>a b c 1 2 3 1 2 3 . . </code></pre> <p>and I want to allow, say, 100 duplicated values of <code>a</code> and <code>b</code> pairs, i.e. say there are 200 pairs of <code>a=1</code> and <code>b=2</code>, then I want to keep 100 of those.</p> <p>I cannot use <code>duplicated</code> on a <code>GroupBy</code> dataframe, thus I'm rather lost on how to solve this.</p>
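In pandas the usual spellings are `df.groupby(['a', 'b']).head(100)` or the mask `df.groupby(['a', 'b']).cumcount() < 100`, both of which keep the first 100 rows per pair. The underlying idea, a per-key counter maintained while scanning, in a stdlib sketch:

```python
from collections import defaultdict

rows = [(1, 2, 3)] * 5 + [(4, 5, 6)] * 2  # five (a=1, b=2) rows, two (4, 5)
limit = 3  # stand-in for the 100 in the question

seen = defaultdict(int)
kept = []
for a, b, c in rows:
    if seen[(a, b)] < limit:  # keep only the first `limit` rows per (a, b)
        kept.append((a, b, c))
        seen[(a, b)] += 1

print(kept)  # three (1, 2, 3) rows survive; both (4, 5, 6) rows are kept
```

`cumcount()` is exactly this `seen` counter computed vectorized per group, which is why the boolean-mask form needs no loop.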
<python><pandas>
2023-05-06 18:26:58
4
5,544
CutePoison
76,190,373
740,371
Asyncio.create_task() not executed immediately in a callback function
<p>My code uses <code>FastAPI</code>, <code>WebSocket</code> and <code>Asyncio</code>, and I need to send messages through websockets from a callback function to give a real-time progress value to the user (5%, 10%, 15% and so on).</p> <p>But all the messages from the callback function are sent after the main function is finished. This function can't be awaited as I have no control on it.</p> <p><strong>I expect to have this:</strong></p> <pre class="lang-text prettyprint-override"><code>β€’ the_main_process(callback) starts β€’ callback() sends &quot;5%&quot; when done β€’ callback() sends &quot;10%&quot; when done β€’ callback() sends &quot;15%&quot; when done β€’ ... β€’ the_main_process(callback) ends and sends 'Finished&quot; </code></pre> <p><strong>But I have this:</strong></p> <pre class="lang-text prettyprint-override"><code>β€’ the_main_process(callback) starts β€’ the_main_process(callback) ends and sends 'Finished&quot; β€’ callback() sends &quot;5%&quot; immediately after β€’ callback() sends &quot;10%&quot; immediately after β€’ callback() sends &quot;15%&quot; immediately after β€’ ... </code></pre> <p>Here is a simplified version of the scripts:</p> <p><strong>app.py</strong></p> <pre class="lang-python prettyprint-override"><code>from fastapi import FastAPI, WebSocket from otherscript app = FastAPI() @app.websocket(&quot;/ws&quot;) async def websocket_endpoint(websocket: WebSocket): await websocket.accept() # Pass websocket reference to otherscript.py otherscript.websocket = websocket while True: # Run my_function() on otherscript.py result = otherscript.my_function() </code></pre> <p><strong>otherscript.py</strong></p> <pre><code>import asyncio # Init placeholder for websocket reference websocket = None # This callback gives the progression of the_main_process() as it is called every n steps def callback(): body = { &quot;value&quot;: &quot;5%&quot;, &quot;foo&quot;: &quot;bar&quot;, ... 
} # I use asyncio without await because the_main_process() can't be awaited, so callback() can't be asynced too # The callback messages all arrive after the end of the_main_process() task = asyncio.create_task(websocket.send_json(body)) def my_function(): # I have no control on this function with callback. It's a torch multithreading function and coroutine that can't be awaited result = the_main_process(arg1, arg2, ..., callback) # This message arrives before all the callback messages task = asyncio.create_task(websocket.send_text(&quot;Finished&quot;)) return result </code></pre> <p>I suppose that all the newly created tasks in the callback function are added to the asyncio queue after the main progress function task, so they are executed after.</p> <p>Any idea? Maybe my code structure is not correct?</p>
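That supposition matches how the event loop works: `create_task()` only queues the task, and nothing queued can run while the blocking, non-awaitable function holds the loop's only thread; all the tasks run at the next suspension point, after "Finished". A stdlib-only reproduction (names are stand-ins for the real websocket calls):

```python
import asyncio

order = []

async def send(value):
    order.append(value)  # stand-in for websocket.send_json(...)

def the_main_process(callback):
    # stand-in for the blocking torch function: it only *schedules* tasks,
    # and none of them can run while it occupies the event loop's thread
    for pct in ("5%", "10%", "15%"):
        callback(pct)
    order.append("Finished")

async def main():
    loop = asyncio.get_running_loop()
    the_main_process(lambda v: loop.create_task(send(v)))
    await asyncio.sleep(0)  # first suspension point: the queued tasks run here

asyncio.run(main())
print(order)  # ['Finished', '5%', '10%', '15%']
```

One common fix is to move the blocking call off the loop with `await loop.run_in_executor(None, ...)` and have the callback hand messages back via `asyncio.run_coroutine_threadsafe(websocket.send_json(body), loop)`, so the loop stays free to deliver progress in real time.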
<python><websocket><fastapi>
2023-05-06 18:04:04
0
1,640
Yann Masoch
76,190,265
10,754,437
Using custom Python CDK lib built from Typescript source using jsii and jsii-pacmak
<p>I managed to generate 2 files defining a Python lib built from a Typescript code base as described in the AWS workshop:</p> <p><a href="https://catalog.us-east-1.prod.workshops.aws/workshops/d93fec4c-fb0f-4813-ac90-758cb5527f2f/en-US/walkthrough/typescript/sample/target-construct/build-and-package" rel="nofollow noreferrer">https://catalog.us-east-1.prod.workshops.aws/workshops/d93fec4c-fb0f-4813-ac90-758cb5527f2f/en-US/walkthrough/typescript/sample/target-construct/build-and-package</a></p> <p>This generates two files for the Python lib.</p> <ul> <li>personal.mypackage-0.1.0.tar.gz</li> <li>personal.mypackage-0.1.0-py3-none-any.whl</li> </ul> <p>How would I use the generated lib in a Python project after launching one with the command <code>cdk init --language=python</code>?</p> <p>I am confused about:</p> <ul> <li>Where should I place the packages?</li> <li>Should I extract the packaged files?</li> </ul> <p>I tried searching the internet; however, the only example I found was using Typescript constructs built using jsii and jsii-pacmak in a C# project.</p>
<python><python-3.x><aws-cdk><aws-cdk-typescript>
2023-05-06 17:37:54
1
2,597
SecretIndividual
76,190,124
12,466,687
How to get struct datatype similar to value_counts() return in polars?
<p>I am trying to combine <strong>two columns</strong> row wise and get something similar to what <code>value_counts()</code> returns in <code>polars</code>.</p> <p>I have tried to use the <code>to_struct</code> but that didn't give the result that I expected.</p> <p>data:</p> <pre><code>import polars as pl df = pl.DataFrame({ &quot;tags&quot;: [&quot;c&quot;, &quot;a&quot;, &quot;b&quot;, &quot;d&quot;], &quot;vals&quot;:[4,3,1,1] }) df.to_struct </code></pre> <pre><code>&lt;bound method DataFrame.to_struct of shape: (4, 2) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ tags ┆ vals β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•ͺ══════║ β”‚ c ┆ 4 β”‚ β”‚ a ┆ 3 β”‚ β”‚ b ┆ 1 β”‚ β”‚ d ┆ 1 β”‚ </code></pre> <p>Expected Result:</p> <pre><code>β”‚ struct[2] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ {&quot;c&quot;,4} β”‚ β”œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”€ β”‚ {&quot;a&quot;,3} β”‚ β”œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”€ β”‚ {&quot;b&quot;,1} β”‚ β”œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”€ β”‚ {&quot;d&quot;,1} β”‚ </code></pre> <p>Not sure how can I do that as I have already tried to convert into <code>struct</code> datatype</p>
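Two hedged observations: the output shows `<bound method DataFrame.to_struct ...>` because `to_struct` was referenced without being called (calling it, e.g. `df.to_struct('struct')`, may already yield the desired struct Series), and the expression-level route in polars is usually `pl.struct` inside a `select`; check both against your polars version. The row-wise pairing a struct column represents can be shown in plain Python:

```python
# Stand-ins for the two polars columns from the question.
tags = ["c", "a", "b", "d"]
vals = [4, 3, 1, 1]

# A struct column holds one field-name -> value record per row,
# built by zipping the columns together row-wise.
structs = [{"tags": t, "vals": v} for t, v in zip(tags, vals)]
```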
<python><dataframe><python-polars>
2023-05-06 17:04:31
1
2,357
ViSa
76,190,115
9,835,749
Fastest way to compute n-gram overlap matrix in Python
<p>I have a large corpus of documents that I want to merge if they have a notable n-gram overlap (in my case, I'm considering bigrams). Consider the list of sets:</p> <pre><code>corpus = [{'example', 'bigram'}, {'another', 'example'}, {'some', 'outlier'}] </code></pre> <p>I have the following code to compute the similarity matrix:</p> <pre><code>import numpy as np sim_matrix = np.zeros((len(corpus), len(corpus))) for i in range(len(corpus)): for j in range(i+1, len(corpus)): sim_matrix[i][j] = get_ngram_overlap(corpus[i], corpus[j]) np.fill_diagonal(sim_matrix, 1) </code></pre> <p>Where the <code>get_ngram_overlap</code> function is defined as follows:</p> <pre><code>def get_ngram_overlap(ngrams_s1, ngrams_s2): if min(len(ngrams_s1), len(ngrams_s2)) == 0: return 0 common_ngrams = ngrams_s1 &amp; ngrams_s2 return len(common_ngrams)/min(len(ngrams_s1), len(ngrams_s2)) </code></pre> <p>The result is an MxM matrix, where M is the number of documents (sets of n-grams) in my corpus</p> <pre><code>print(sim_matrix) &gt;&gt; [[1. 0.5 0.] [0. 1. 0.] [0. 0. 1.]] </code></pre> <p>The problem is, when M grows large, the code is really slow and computationally expensive, because of the need to compare all unique pairs. <strong>Are there ways to compute it more efficiently?</strong> Like using an optimized distance matrix computation with a custom similarity metric that would work for sets of strings. Any help is appreciated!</p>
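One standard way to cut the cost is an inverted index from each n-gram to the sets containing it, so only pairs that actually share an n-gram are ever scored; fully disjoint pairs are never touched. A plain-Python sketch over the sample corpus (NumPy omitted to keep it self-contained):

```python
from collections import defaultdict

def overlap_matrix(corpus):
    n = len(corpus)
    sim = [[0.0] * n for _ in range(n)]
    for i in range(n):
        sim[i][i] = 1.0
    # Inverted index: n-gram -> indices of the sets that contain it.
    index = defaultdict(list)
    for i, ngrams in enumerate(corpus):
        for g in ngrams:
            index[g].append(i)
    # Count shared n-grams only for pairs that co-occur under some n-gram.
    common = defaultdict(int)
    for docs in index.values():
        for a in range(len(docs)):
            for b in range(a + 1, len(docs)):
                common[(docs[a], docs[b])] += 1
    for (i, j), c in common.items():
        denom = min(len(corpus[i]), len(corpus[j]))
        if denom:
            sim[i][j] = c / denom
    return sim

corpus = [{'example', 'bigram'}, {'another', 'example'}, {'some', 'outlier'}]
sim = overlap_matrix(corpus)
```

For corpora where a few n-grams are extremely frequent, those index buckets dominate the cost, so capping or down-weighting very common n-grams is a common follow-up.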
<python><python-3.x><numpy><set><n-gram>
2023-05-06 17:03:17
2
332
errno98
76,190,079
2,386,605
Expose FastAPI on two different ports
<p>I want to scrape some metrics from my FastAPI server. However, I do not want to show it to the public in contrast to the remaining API.</p> <p>Hence, I resolved to create a second server and tried to run (inspired by this <a href="https://stackoverflow.com/questions/69641363/how-to-run-fastapi-app-on-multiple-ports">here</a>):</p> <pre class="lang-py prettyprint-override"><code>app = FastAPI() app2 = FastAPI() app2.mount(&quot;/metrics&quot;, metrics_app) </code></pre> <p>and then set it up with docker-compose like this</p> <pre><code>version: '3.8' services: scraiber_gpt: build: context: ./src dockerfile: Dockerfile.test command: uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000 &amp; uvicorn app.main:app2 --reload --workers 1 --host 0.0.0.0 --port 8001 volumes: - ./src:/usr/src/app ports: - 8004:8000 - 8005:8001 </code></pre> <p>However, I cannot see anything from app2.</p> <p>Is there some way to make it work? πŸ™‚</p>
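One likely culprit, offered as a guess: when `command:` is given as a single string, Compose tokenizes it but does not run it through a shell, so the ampersand is handed to uvicorn as a literal argument and the second server never starts. Wrapping both servers in `sh -c` (module paths kept from the question) would look like:

```yaml
# Compose tokenizes a string command without a shell, so "&" needs sh -c.
command: sh -c "uvicorn app.main:app --host 0.0.0.0 --port 8000 & uvicorn app.main:app2 --host 0.0.0.0 --port 8001"
ports:
  - 8004:8000             # public API
  - 127.0.0.1:8005:8001   # metrics reachable only from the host
```

Binding the second port to 127.0.0.1 (or not publishing it at all) keeps `/metrics` off the public interface; running one uvicorn process per app in separate services is a common alternative to backgrounding with the ampersand.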
<python><docker><docker-compose><fastapi>
2023-05-06 16:52:13
1
879
tobias
76,189,975
9,008,300
Fine-tuning an already trained XGBoost classification model
<p>I have trained an XGBoost classification model for sentiment analysis of product reviews. However, there are certain cases where the model predictions are not as expected. For example, when I input the review &quot;The delivery was a bit late but the product was awesome&quot;, the model classifies it as a negative review (0), but I want to fine-tune the model on that exact case to say the review is positive (1).</p> <p>Is there a way to fine-tune the already trained XGBoost model by adding specific data points like this? What would be the best approach to achieve this without retraining the whole model from scratch?</p> <p>I've tried the following function:</p> <pre class="lang-py prettyprint-override"><code># Fine tune the model def fine_tune(model, inp, output, word2vec): model.fit( np.array([word2vec.get_mean_vector(tokenize( inp ))]), np.array([output]) ) return model </code></pre> <p>However, when I run it it retrains the whole model on that single data point I provide it with.</p> <p>Any guidance or suggestions would be greatly appreciated. Thank you!</p>
<python><xgboost><xgbclassifier>
2023-05-06 16:27:58
1
422
Chris
76,189,964
275,552
Regex pattern isn't matching in Python but works in regex101
<p>Given text that looks like this:</p> <pre><code>146.204.224.152 - feest6811 [21/Jun/2019:15:45:24 -0700] &quot;POST /incentivize HTTP/1.1&quot; 302 4622 197.109.77.178 - kertzmann3129 [21/Jun/2019:15:45:25 -0700] &quot;DELETE /virtual/solutions/target/web+services HTTP/2.0&quot; 203 26554 156.127.178.177 - okuneva5222 [21/Jun/2019:15:45:27 -0700] &quot;DELETE /interactive/transparent/niches/revolutionize HTTP/1.1&quot; 416 14701 100.32.205.59 - ortiz8891 [21/Jun/2019:15:45:28 -0700] &quot;PATCH /architectures HTTP/1.0&quot; 204 6048 </code></pre> <p>For each line I want to match the IP, username, timestamp and request type/path. Trying to use this pattern:</p> <p><code>([0-9\.]+)[- ]+([A-Za-z0-9\-]+)\s\[([0-9]{2}\/[A-Za-z]+\/[0-9]{4}:[0-9]{2}:[0-9]{2}:[0-9]{2} -[0-9]{4})\] \&quot;[A-Z]+ [A-Za-z0-9\/\- \.+]+\&quot;</code></p> <p>When I test this on regex101.com it works just fine, but using <code>re.findall()</code> gives me 0 matches. The problem seems to be at <code>-[0-9]{4}</code>(about 3/4ths the way through the string, meant to match -0700). If I remove everything starting with the hyphen character, it starts finding matches. Why would python have an issue with this?</p>
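For what it's worth, the posted pattern does match all four sample lines under Python's `re`, which suggests the live data or the pasted pattern contains a look-alike character; a Unicode minus or non-breaking hyphen in place of the ASCII `-` before `0700` is a common copy-paste artifact. A quick sanity check over the sample (pattern copied verbatim, split only for readability):

```python
import re

log = (
    '146.204.224.152 - feest6811 [21/Jun/2019:15:45:24 -0700] "POST /incentivize HTTP/1.1" 302 4622\n'
    '197.109.77.178 - kertzmann3129 [21/Jun/2019:15:45:25 -0700] "DELETE /virtual/solutions/target/web+services HTTP/2.0" 203 26554\n'
    '156.127.178.177 - okuneva5222 [21/Jun/2019:15:45:27 -0700] "DELETE /interactive/transparent/niches/revolutionize HTTP/1.1" 416 14701\n'
    '100.32.205.59 - ortiz8891 [21/Jun/2019:15:45:28 -0700] "PATCH /architectures HTTP/1.0" 204 6048\n'
)

pattern = (
    r'([0-9\.]+)[- ]+([A-Za-z0-9\-]+)\s'
    r'\[([0-9]{2}\/[A-Za-z]+\/[0-9]{4}:[0-9]{2}:[0-9]{2}:[0-9]{2} -[0-9]{4})\] '
    r'\"[A-Z]+ [A-Za-z0-9\/\- \.+]+\"'
)

matches = re.findall(pattern, log)
# If the real file yields zero matches, inspect the raw line for
# non-ASCII look-alikes, e.g. any(ord(c) > 127 for c in raw_line).
```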
<python><regex>
2023-05-06 16:25:32
1
16,225
herpderp
76,189,924
6,071,128
String manipulation - for each value in a dataframe column, extract a custom string of text comprised between two separators
<p>I have an imported dataframe which includes mainly text values under the column <code>'full_name'</code> . The values look typically like this: <code>'Insulation, glass wool, water repellent, kraft paper lining, 2.25 Km2/W, 90 mm, 12 kg/m3, 1.08 kg/m2, IBR (Isover)'</code></p> <p>Now I would like to extract from these certain values (physical properties of building materials) by utilising the measure unit as a search string, for instance <code>'Km2/W'</code> for the sake of this example. Then I would like the text comprised between two commas separators before and after the search string to be copied in a separate column, where the values can ultimately be converted to <code>numerical</code>.</p> <p>I asked this question to ChatGPT and it returned the following code: This code splits the text column by commas, removes any leading or trailing whitespace, selects the second column, extracts the substring that contains the search string and any characters after it, and then splits that substring by commas and selects the first part.</p> <pre><code># Extract the text between two comma separators filtered_df['extracted_text'] = filtered_df['text'].str.split(',', expand=True).apply(lambda x: x.str.strip()).iloc[:, 1].str.extract(f'({search_string}.*)')[0].str.split(',').str[0] </code></pre> <p>However the resulting column - in this example <code>filtered_df['extracted_text']</code>, is full of <code>NaN</code>. What do you think is going wrong here?</p>
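One reason the chained expression yields NaN: `.iloc[:, 1]` pins the search to the second comma-separated field, but the field carrying the unit can sit anywhere in the string. Setting pandas aside, the per-value extraction can be sketched in plain Python (the sample text and unit are taken from the question):

```python
import re

full_name = ('Insulation, glass wool, water repellent, kraft paper lining, '
             '2.25 Km2/W, 90 mm, 12 kg/m3, 1.08 kg/m2, IBR (Isover)')

def extract_value(text, unit):
    # Find the comma-separated field that ends with the unit and pull
    # out its leading number; returns None when the unit is absent.
    for field in (f.strip() for f in text.split(',')):
        if field.endswith(unit):
            m = re.match(r'([\d.]+)', field)
            if m:
                return float(m.group(1))
    return None
```

Assuming the dataframe is as described, this could then be applied with something like `filtered_df['full_name'].apply(lambda s: extract_value(s, 'Km2/W'))`.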
<python><pandas><string><text>
2023-05-06 16:17:02
1
1,093
Andreuccio
76,189,836
12,129,443
How to interpret the xaxis values of a density histogram when there's an offset
<p>I have a list of 100000 values, all are around 299800 as shown below.</p> <pre><code>values = [299850.3 299856.4 299855.9 ... 299843.7 299847.2 299860.9] </code></pre> <p>When I plot a Probability Density Function (PDF) of these values using <code>matplotlib.pyplot.hist</code>, I am getting the plot whose X-axis and Y-Axis values are confusing. I expect the X-axis values to be binned around 299800 and Y-axis values be the probability of those binned values happening. How to interpret this graph values correctly, appreciate any comments.</p> <pre><code>plt.hist(values,density=True, bins=30) plt.xlabel(&quot;Values&quot;) plt.ylabel('PDF') plt.show() </code></pre> <p><a href="https://i.sstatic.net/3NvAN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3NvAN.png" alt="enter image description here" /></a></p>
<python><matplotlib><histogram>
2023-05-06 15:57:35
0
668
Srinivas
76,189,830
10,973,108
Docker compose is loading files that it should ignore?
<p>Why does my container have files/folders that should be ignored?</p> <p>example:</p> <pre class="lang-bash prettyprint-override"><code>➜ ~ docker exec -it athena-app-container /bin/bash root@db2ecfd0967b:/project# ls __pycache__ app extensions should_ignore.txt api constants serializers utils root@db2ecfd0967b:/project# </code></pre> <p>In the above example, the <code>__pycache__</code> folder and <code>should_ignore.txt</code> file should not be in the container, right? Or are these files loaded via the <code>volume</code> in <code>compose.yaml</code>?</p> <p><code>.dockerignore</code></p> <pre class="lang-bash prettyprint-override"><code>.env __pycache__ should_ignore.txt </code></pre> <p>I've tried some things but nothing works... I'm really stuck on it.</p> <p>This is my actual project structure:</p> <p><a href="https://i.sstatic.net/vSWNn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vSWNn.png" alt="project-structure" /></a></p> <p><code>Dockerfile</code></p> <pre class="lang-bash prettyprint-override"><code>FROM python:3.11.3-slim-buster RUN apt-get update &amp;&amp; apt-get install \ --no-install-recommends -qq -y \ build-essential \ gcc \ g++ \ libpoppler-cpp-dev \ poppler-utils \ pkg-config \ libpq-dev WORKDIR /project COPY ./pylintrc /project/pylintrc COPY ./requirements.txt /project/requirements.txt COPY ./requirements-dev.txt /project/requirements-dev.txt RUN pip install --upgrade pip RUN pip install -r /project/requirements.txt -r /project/requirements-dev.txt COPY ./alembic.ini /project/alembic.ini COPY ./alembic /project/alembic COPY ./athena /project/athena </code></pre> <p><code>compose.yaml</code></p> <pre class="lang-bash prettyprint-override"><code>services: database: image: postgres:12 restart: always # command: postgres -c 'max_connections=300' container_name: athena-database-container environment: POSTGRES_DB: postgres POSTGRES_USER: postgres POSTGRES_PASSWORD: postgres volumes: - postgres:/var/lib/postgresql/data ports: - &quot;7999:5432&quot; alembic:
build: . container_name: athena-alembic-container env_file: compose.env depends_on: - database image: athena-project command: bash -c &quot; sleep 1 &amp;&amp;\ alembic upgrade head&quot; volumes_from: - database profiles: - migrate api: build: . container_name: athena-api-container # restart: on-failure env_file: compose.env depends_on: - alembic image: athena-project ports: - &quot;8000:8000&quot; volumes: - ./athena:/project command: bash -c &quot; sleep 3 &amp;&amp;\ uvicorn api.main:api --reload --workers 1 --host 0.0.0.0 --port 8000&quot; app: build: . container_name: athena-app-container # restart: on-failure env_file: compose.env depends_on: - api image: athena-project ports: - &quot;8001:8001&quot; volumes: - ./athena:/project command: bash -c &quot; sleep 4 &amp;&amp;\ uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8001&quot; volumes: postgres: </code></pre>
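A hedged diagnosis: `.dockerignore` only filters the build context; it has no effect on bind mounts. Both `api` and `app` mount `./athena` over `/project` (`- ./athena:/project`), so whatever exists on the host, including `__pycache__` and `should_ignore.txt`, appears in the container regardless of what was COPYed into the image. One option is to mount a narrower path, or drop the mount where live reload is not needed:

```yaml
# Bind mounts bypass .dockerignore entirely; mount only what the
# reloader needs instead of the whole source tree.
app:
  volumes:
    - ./athena/app:/project/app
```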
<python><bash><docker><docker-compose><devops>
2023-05-06 15:56:25
0
348
Daniel Bailo
76,189,815
12,724,372
AWS lambda throwing import error because of URLLIB
<p>I'm running a Python script on AWS Lambda and it's throwing the following error.</p> <pre><code> { &quot;errorMessage&quot;: &quot;Unable to import module 'app': urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with OpenSSL 1.0.2k-fips 26 Jan 2017. See: https://github.com/urllib3/urllib3/issues/2168&quot;, &quot;errorType&quot;: &quot;Runtime.ImportModuleError&quot;, &quot;stackTrace&quot;: [] } </code></pre> <p>It was running perfectly an hour ago, and even though I have made no deployments, it seems to be failing.</p> <p>My Python version is 3.7, and I'm only using urllib to parse and unquote URLs, namely</p> <pre><code>from urllib.parse import urlparse </code></pre> <p>and</p> <pre><code>from urllib.parse import unquote </code></pre> <p>As mentioned in the GitHub URL, I could upgrade my Python version, but doing so would break other things. Are there any alternative libraries I can use to get the same result?</p> <p>From the GitHub link, urllib3 no longer supports OpenSSL &lt; 1.1.1, but somehow in some of our higher environments the same script is running without issues.</p>
<python><amazon-web-services><urllib>
2023-05-06 15:53:48
11
1,275
Devarshi Goswami
76,189,806
8,750,051
How does inferSchema for PySpark really work?
<p>Reading the documentation <a href="https://spark.apache.org/docs/latest/sql-data-sources-csv.html#data-source-option" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/sql-data-sources-csv.html#data-source-option</a>, it is very clear that inferSchema &quot;Infers the input schema automatically from data&quot;; however, my code fails to infer the data types. I have even tried enforceSchema, but nothing worked. <a href="https://i.sstatic.net/3r1jp.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3r1jp.jpg" alt="enter image description here" /></a> In Excel, I notice all the data is of &quot;General&quot; type; is that the reason behind the hiccup?</p>
<python><pyspark><etl><data-processing>
2023-05-06 15:52:08
0
422
RoiMinuit
76,189,808
4,180,276
Making a binary package for my open source project
<p>I currently have the following open source project, <a href="https://github.com/nadermx/backgroundremover" rel="nofollow noreferrer">https://github.com/nadermx/backgroundremover</a>. I followed the tutorial <a href="https://betterscientificsoftware.github.io/python-for-hpc/tutorials/python-pypi-packaging/" rel="nofollow noreferrer">https://betterscientificsoftware.github.io/python-for-hpc/tutorials/python-pypi-packaging/</a> and everything seems fine. People can install it via <code>pip install backgroundremover</code>.</p> <p>The issue arises because someone asked me to offer it as a binary. I went ahead and uploaded a wheel, <a href="https://github.com/nadermx/backgroundremover/tree/main/dist" rel="nofollow noreferrer">https://github.com/nadermx/backgroundremover/tree/main/dist</a>, but I am unsure what else I have to modify to offer an install option like <code>backgroundremover-binary</code>.</p>
<linux><python><pip>
2023-05-06 15:50:32
0
2,954
nadermx
76,189,742
8,236,050
Integrate Google Colab GPUs with docker
<p>I am trying to build a Reinforcement Learning tool, but I have no access to a GPU nor can I afford one. Due to this, I was thinking of using the GPU resources provided freely, for example, by Google Colab. Nevertheless, my project cannot be executed as a Python notebook, because it is a very big project consisting of several dependencies and modules. Because of that, I'd like to know if there is any way to integrate Google Colab with docker so that a docker image that uses the GPUs from Google Colab can be built in order to use that GPU when training my model. Any other way of integrating free GPUs in my docker container would be valid as well.</p>
<python><docker><gpu><google-colaboratory>
2023-05-06 15:38:49
0
513
pepito
76,189,730
8,281,276
Flask uploaded file size is not correct
<p>Why is the byte length of the file uploaded to Flask shorter than the file's actual size? I'm seeing the same behavior for several images.</p> <pre><code>from flask import Flask, request app = Flask(__name__) @app.route('/', methods=['GET', 'POST']) def home(): if request.method == 'POST': f = request.files['file'].read() print(len(f)) # 45825 </code></pre> <p>However,</p> <pre><code>f = skimage.io.imread('file.jpg') len(f.tobytes()) # 1102080 </code></pre>
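The two numbers measure different things: `len(file.read())` is the size of the compressed JPEG as stored, while `skimage.io.imread` decodes it into raw pixels, so `tobytes()` reports height x width x channels bytes (1102080 is consistent with, say, a 640x574 RGB image, though the exact shape is a guess). The effect is just compression, sketched here with the stdlib `zlib` instead of a JPEG codec:

```python
import zlib

# 1,000,000 "pixels" of highly repetitive raw data...
raw = bytes(1_000_000)
# ...compress to a much smaller payload, as JPEG does for image files.
compressed = zlib.compress(raw)

# len(compressed) plays the role of file.read()'s length;
# len(raw) plays the role of the decoded array's tobytes() length.
```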
<python><flask>
2023-05-06 15:35:32
1
1,676
Wt.N
76,189,618
12,845,199
Transforming kg, ml, l proportions into g proportions
<p>I have the following pd.Series</p> <pre><code>s = pd.Series(['3.95/kg', '3.30/kg', '3.49/kg', '3.96/g', '8.49/kg', '3.19/kg', '0.0154/g', '8.98/kg', '6.35/kg', '5.79/kg', '3.79/kg', '6.59/kg', '3.50/kg', '3.85/kg', '3.55/kg', '5.59/kg', '5.98/kg', '0.0152/g', '5.99/kg', '3.20/gr', '8.99/kg', '16.90/kg', '4.29/kg', '0.0128/g', '5.29/kg', '3.39/kg', '6.29/kg', '4.59/kg', '28.90/kg', '4.69/kg', '0.0389/gr', '0.0099/ml', '0.0608/g',]) </code></pre> <p>What I want to do is transform every proportion into a single unit; for example, when it is in kg, transform the proportion into grams. The same goes for ml and l proportions.</p> <p>So, for example, we have 3.95/kg in the string. I want the same proportion but in grams, meaning we have to divide it by 1000 so it becomes 0.00395/g. If the string contains ml, it remains the same. If it contains l, it is divided by 1000.</p> <p>Wanted result</p> <pre><code>pd.Series(['0.00395/g','0.00330/g','0.00349/g','3.96/g']) # And so on </code></pre> <p>Any ideas on how to do this treatment using pandas?</p>
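A plain-Python sketch of the per-value conversion; the unit spellings (`kg`, `g`, `gr`, `ml`) come from the sample, and the `l` case follows the description even though it does not appear in the data:

```python
def to_grams(entry):
    # "3.95/kg" -> (0.00395, "g"); kg and l scale down by 1000,
    # while g/gr and ml are already in the base unit.
    amount_str, unit = entry.split('/')
    amount = float(amount_str)
    if unit == 'kg':
        amount /= 1000
        unit = 'g'
    elif unit == 'l':
        amount /= 1000
        unit = 'ml'
    elif unit == 'gr':
        unit = 'g'
    return amount, unit
```

With pandas available, this could be applied as, e.g., `s.apply(lambda e: '{:g}/{}'.format(*to_grams(e)))`.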
<python><pandas>
2023-05-06 15:10:23
1
1,628
INGl0R1AM0R1
76,189,260
1,185,254
Flask render_template does not update the page
<p>I am trying to upload a file, make some calculations to generate an image and then to display it. I came up with the following, but the page does not refresh after the <code>redirect</code>. Even though I get the correct codes:</p> <pre><code>&quot;POST /result HTTP/1.1&quot; 302 - &quot;GET /display_image/census_2009b.png HTTP/1.1&quot; 200 - </code></pre> <p><a href="https://i.sstatic.net/G5eRC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/G5eRC.png" alt="Not refreshed page" /></a></p> <pre><code>from flask import Flask, request, render_template, send_file, redirect, url_for app = Flask(__name__) @app.route('/') def index(): return render_template('index.html') @app.route('/result', methods=['GET', 'POST']) def upload_file(): file = request.files['file'] ### generating and saving an image to `filename`... return redirect(url_for('display_image', filename=filename)) @app.route('/display_image/&lt;filename&gt;', methods=['GET', 'POST']) def display_image(filename): image_url = url_for('static', filename=filename) return render_template('display_image.html', image_url=image_url) if __name__ == '__main__': app.run(host='0.0.0.0', debug=True) </code></pre> <p>with <code>index.html</code>:</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;UTF-8&quot;&gt; &lt;script src=&quot;https://unpkg.com/dropzone@5/dist/min/dropzone.min.js&quot;&gt;&lt;/script&gt; &lt;link rel=&quot;stylesheet&quot; href=&quot;https://unpkg.com/dropzone@5/dist/min/dropzone.min.css&quot; type=&quot;text/css&quot; /&gt; &lt;/head&gt; &lt;body&gt; &lt;form action=&quot;/result&quot; method=&quot;get&quot; class=&quot;dropzone&quot; id=&quot;my-dropzone&quot;&gt;&lt;/form&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>and <code>display_image.html</code>:</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;Uploaded Image&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;img src=&quot;{{ image_url }}&quot; 
alt=&quot;Uploaded image&quot;&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>What am I missing?</p>
<python><flask><http-redirect><request><response>
2023-05-06 13:53:22
1
11,449
alex
76,189,129
10,970,202
pyspark dataframe to tfrecords not working
<p>pyspark 3.2.0</p> <p>I've downloaded the spark-tensorflow-connector.jar file from <a href="http://spark.apache.org/third-party-projects.html" rel="nofollow noreferrer">http://spark.apache.org/third-party-projects.html</a>.</p> <p>After creating the Spark session with the jar file added:</p> <pre><code>from pyspark.sql import SparkSession spark = SparkSession.builder\ .appName('stc-test')\ .config('spark.jars', 'spark-tensorflow-connector-1.0.0-s_2.11.jar')\ .getOrCreate() </code></pre> <p>Then, when I try to write it into tfrecords following the example from the tensorflow-spark-connector documentation</p> <pre><code>train_pdf.write.format('tfrecords').option('writeLocality', 'local').save(&quot;/tfrecords&quot;) </code></pre> <p>I get the following error: <code>Py4JJavaError: An error occurred while calling o152.save. : java.lang.ClassNotFoundException: Failed to find data source: tfrecords. Please find packages at http://spark.apache.org/third-party-projects.html Caused by: java.lang.ClassNotFoundException: tfrecords.DefaultSource</code></p> <p>Can someone help me understand why? I am new to configuring pyspark, so I am not even sure if the library is even added to the SparkSession.</p>
<python><apache-spark><pyspark>
2023-05-06 13:23:50
1
5,008
haneulkim
76,188,882
10,164,750
comparing all the rows with in a group pyspark dataframe
<p>I have dataframe as below, where I need to compare the row values of column <code>first_nm</code> and <code>sur_nm</code> within a group based on <code>company</code>. Based on the matching I would assign a value to <code>status</code> column in the output.</p> <pre><code>+--------+--------+----------------+--------------+ | company| id| first_nm| sur_nm| +--------+--------+----------------+--------------+ |SYNTHE01|SYNTHE02| JAMES| FOWLER| |SYNTHE01|SYNTHE03| MONICA| FOWLER| |SYNTHE01|SYNTHE04| GEORGE| FOWLER| |SYNTHE08|SYNTHE05| JAMES| FIWLER| |SYNTHE08|SYNTHE06| JAMES| FUWLER| |SYNTHE08|SYNTHE07| JAMES| FAWLER| |SYNTHE08|SYNTHE08| JAMES| FEWLER| |SYNTHE11|SYNTHE12| JAMES| FOWLER| |SYNTHE11|SYNTHE11| JAMES| FOWLER| |SYNTHE09|SYNTHE0X| Null| Null| |SYNTHE09|SYNTHE0Y| Null| Null| |SYNTHE09|SYNTHE0Z| Null| Null| +--------+--------+----------------+--------------+ </code></pre> <p>For eg.</p> <p>If both <code>first_nm</code> and <code>sur_nm</code> of all rows matches in a particular <code>company</code> - <code>status</code> is 0.</p> <p>If only <code>first_nm</code> matches in a <code>company</code> group - <code>status</code> is 1.</p> <p>If only <code>sur_nm</code> matches in a <code>company</code> group - <code>status</code> is 2.</p> <p>If nothing matches or null values - <code>status</code> is 99.</p> <p>The output dataframe is as below:</p> <pre><code>+--------+--------+----------------+--------------+-------+ | company| id| first_nm| sur_nm| status| +--------+--------+----------------+--------------+-------+ |SYNTHE01|SYNTHE02| JAMES| FOWLER| 2| |SYNTHE01|SYNTHE03| MONICA| FOWLER| 2| |SYNTHE01|SYNTHE04| GEORGE| FOWLER| 2| |SYNTHE08|SYNTHE05| JAMES| FIWLER| 1| |SYNTHE08|SYNTHE06| JAMES| FUWLER| 1| |SYNTHE08|SYNTHE07| JAMES| FAWLER| 1| |SYNTHE08|SYNTHE08| JAMES| FEWLER| 1| |SYNTHE11|SYNTHE12| JAMES| FOWLER| 0| |SYNTHE11|SYNTHE11| JAMES| FOWLER| 0| |SYNTHE09|SYNTHE0X| Null| Null| 99| |SYNTHE09|SYNTHE0Y| Null| Null| 99| |SYNTHE09|SYNTHE0Z| Null| Null| 99| 
+--------+--------+----------------+--------------+-------+ </code></pre> <p>How can we handle this kind of comparison of row values within a group? Please guide.</p> <p>Thank you</p>
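In PySpark this maps onto window aggregates (e.g. counting distinct `first_nm`/`sur_nm` over a `Window.partitionBy('company')`); without a Spark session at hand, the decision logic itself can be sketched in plain Python over a subset of the sample rows:

```python
from collections import defaultdict

rows = [
    ("SYNTHE01", "JAMES", "FOWLER"), ("SYNTHE01", "MONICA", "FOWLER"),
    ("SYNTHE01", "GEORGE", "FOWLER"), ("SYNTHE08", "JAMES", "FIWLER"),
    ("SYNTHE08", "JAMES", "FUWLER"), ("SYNTHE11", "JAMES", "FOWLER"),
    ("SYNTHE11", "JAMES", "FOWLER"), ("SYNTHE09", None, None),
]

groups = defaultdict(list)
for company, first, sur in rows:
    groups[company].append((first, sur))

def status(members):
    firsts = {f for f, _ in members}
    surs = {s for _, s in members}
    # Nulls anywhere in the group mean no reliable match: 99.
    if None in firsts or None in surs:
        return 99
    first_match, sur_match = len(firsts) == 1, len(surs) == 1
    if first_match and sur_match:
        return 0
    if first_match:
        return 1
    if sur_match:
        return 2
    return 99

statuses = {c: status(m) for c, m in groups.items()}
```

The same counts come out of Spark as `F.countDistinct` (or `size(collect_set(...))`) per company, joined back or computed over a window, followed by a `when/otherwise` chain implementing `status`.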
<python><dataframe><apache-spark><pyspark>
2023-05-06 12:26:38
2
331
SDS
76,188,656
21,725,164
Can't run python3 in command line
<p>I have installed Python 3.11 on my Windows machine from the official website, and when I try to run it in the command line by typing &quot;python3&quot;, it doesn't run Python; instead it opens the Microsoft Store for me to download Python!</p> <p>I tried reinstalling it, but that didn't fix it. Did I miss anything before installing or during the installation process?</p>
<python><python-3.11>
2023-05-06 11:38:13
1
608
Hr.Panahi
76,188,468
7,476,542
Retrieve the return value of a function used in threading.Timer method
<p>I have a function <code>check_is_window_open</code> which checks whether a Windows application is open or not. If the application is open, it returns 1; otherwise 0.</p> <p>Here is the function:</p> <pre><code>from pywinauto import Application,keyboard def check_is_window_open(): try: app = Application(backend='uia').connect(title='Some Application') window = app['Some Application'] return 1 except: return 0 </code></pre> <p>This application window takes several minutes to open, so I want to check every 10 seconds whether the window is open. If it is open, I will do some automation; otherwise I will keep checking.</p> <pre><code>from threading import Timer def my_automation(): Timer(10,check_is_window_open).start() #if the return of check_is_window_open is 0 then I will keep checking #else I will cancel the timer and perform my automation. </code></pre> <p>Now the question is: how do I retrieve the return value of <code>check_is_window_open</code>?</p>
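`threading.Timer` calls the function once and discards its return value, so the usual pattern is to have the polled function deposit its result in a `queue.Queue` (or set an `Event`) and re-arm the timer until it succeeds. A sketch with a stand-in for `check_is_window_open` and a short interval so it finishes quickly:

```python
import threading
import queue

def poll(check, result_q, interval=0.01):
    # Re-arm the timer until check() reports success, then hand the
    # value back through the queue instead of relying on a return.
    value = check()
    if value:
        result_q.put(value)
    else:
        t = threading.Timer(interval, poll, args=(check, result_q, interval))
        t.daemon = True
        t.start()

calls = {"n": 0}

def fake_check():
    # Stand-in for check_is_window_open: the window "opens" on poll 3.
    calls["n"] += 1
    return 1 if calls["n"] >= 3 else 0

results = queue.Queue()
poll(fake_check, results)
```

In the real code, the interval would be 10 and the main flow would block on `results.get()` (or poll with `get_nowait`) before starting the automation.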
<python><multithreading><timer><pywinauto>
2023-05-06 10:52:40
1
303
Sayandip Ghatak
76,188,369
4,305,436
Running Pytest test cases in transaction isolation in a FastAPI setup
<p>I have a FastAPI application, with MySQL and <code>asyncio</code>.</p> <p>I have been trying to integrate some test cases with my application, with the ability to rollback the changes after every test case, so that all test cases can run in isolation.</p> <p>This is how my controller is set up, with a DB dependency getting injected.</p> <pre><code>from sqlalchemy.ext.asyncio import create_async_engine async def get_db_connection_dependency(): engine = create_async_engine(&quot;mysql+aiomysql://root:root@mysql8:3306/user_db&quot;) connection = engine.connect() return connection class UserController: async def create_user( self, request: Request, connection: AsyncConnection = Depends(get_db_connection_dependency) ) -&gt; JSONResponse: # START TRANSACTION await connection.__aenter__() transaction = connection.begin() await transaction.__aenter__() try: do_stuff() except: await transaction.rollback() else: await transaction.commit() finally: await connection.close() # END TRANSACTION return JSONResponse(status_code=201) </code></pre> <p>I have a test case written using Pytest like so</p> <pre><code>import pytest app = FastAPI() @pytest.fixture() def client(): with TestClient(app=app) as c: yield c class TestUserCreation: CREATE_USER_URL = &quot;/users/create&quot; def test_create_user(self, client): response = client.post(self.CREATE_USER_URL, json={&quot;name&quot;: &quot;John&quot;}) assert response.status_code == 201 </code></pre> <p>This test case works and persists the newly created user in the DB, but like I said earlier, I want to rollback the changes automatically once the test case finishes.</p> <p>I have checked a few resources online, but none of them were helpful.</p> <ol> <li><p><a href="https://aalvarez.me/posts/python-transactional-tests-using-sqlalchemy-pytest-and-factory-boy/" rel="nofollow noreferrer">This link</a> talks about using factory objects, but I can't use factory objects here because my controller requires the DB connection as a 
dependency. Plus, the controller itself is updating the DB, and not a &quot;mocked&quot; factory object.</p> </li> <li><p>I then searched for ways to inject the dependency manually. This was in the hopes that if I can create a connection manually BEFORE calling the API in my test case and inject it as the required dependency, then I can also forcefully rollback the transaction AFTER the API finishes.</p> <ul> <li>So, I came across <a href="https://github.com/tiangolo/fastapi/issues/2894" rel="nofollow noreferrer">this</a>, which talks about a way to get a dependency to use outside of a controller, but not how to inject it into the controller manually.</li> </ul> </li> <li><p>The <a href="https://fastapi.tiangolo.com/advanced/testing-database/" rel="nofollow noreferrer">official FastAPI docs</a> aren't very exhaustive on how to rollback persisted data in a DB-related test case.</p> </li> </ol> <p>The only way I can think of is to not inject the DB connection as a dependency into the controller, but attach it to the Starlette request object in the request middleware. And then in the response middleware, depending on an env var (<code>test</code> vs <code>prod</code>), I can ALWAYS rollback if the var is <code>test</code>.</p> <p>But this seems over-engineering to me for a very fundamental requirement of a robust testing suite.</p> <p>Is there any readily-available, built-in way to do this in FastAPI? Or is there any other library or package available that can do it for me?</p> <p>If Pytest isn't the best suited framework for this, I'm more than happy to change it to something more suitable.</p> <p>Appreciate any help I can get. Thank you!</p>
<python><mysql><sqlalchemy><python-asyncio><fastapi>
2023-05-06 10:23:17
1
796
Sidharth Samant
76,188,348
7,553,746
Why is my JSON data with null causing an exception instead of returning 0?
<p>I have an API which returns null when there are no results; otherwise it usually has a count as an int. In this minimal example with a similar data structure, if either value is null, I'd hope to return 0.</p> <p>I thought that adding <code>{}</code> usually allows traversing deeper into the data, but I get the following:</p> <blockquote> <p>Traceback (most recent call last): File &quot;/home/Documents/projects/data-reviews/test_code.py&quot;, line 52, in total_reviews = data.get('data', {}).get('node', {}).get('reviews', {}).get('aggregates', {}).get('count', {}) AttributeError: 'NoneType' object has no attribute 'get'</p> </blockquote> <p>I'm not sure what to do here; is try/except the only option?</p>
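The `.get('aggregates', {})` default never kicks in here: the key exists and its value is `None`, and `dict.get` only falls back when the key is missing. Treating `None` like a missing dict at each step avoids the `try/except`; a sketch over the sample payload:

```python
import json

resp = """{
  "data": {
    "node": {
      "myReviews": {"totalReviews": 0},
      "allReviews": {"aggregates": null}
    }
  }
}"""

data = json.loads(resp)

def dig(obj, *keys, default=0):
    # Walk nested dicts; a missing key and a null value both end the
    # walk, so the default comes back instead of an AttributeError.
    for key in keys:
        if not isinstance(obj, dict):
            return default
        obj = obj.get(key)
    return default if obj is None else obj
```

The inline equivalent is `(… or {}).get(...)` at each step, e.g. `((data.get('data') or {}).get('node') or {})`, which turns any `None` back into an empty dict before the next `.get`.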
<python><json>
2023-05-06 10:16:48
2
3,326
Johnny John Boy
76,188,316
3,783,002
Is `PYTHONPATH` really an environment variable?
<p>The <a href="https://docs.python.org/3/library/sys.html#sys.path" rel="noreferrer">docs</a> for <code>sys.path</code> state the following:</p> <blockquote> <p>A list of strings that specifies the search path for modules. Initialized from the environment variable PYTHONPATH, plus an installation-dependent default.</p> </blockquote> <p>So my understanding here is that <code>PYTHONPATH</code> is an <em>environment variable</em>. Environment variables can be printed out in Powershell using the following command:</p> <pre><code>PS&gt; echo $ENV:VARIABLENAME </code></pre> <p>However when I do <code>$ENV:PYTHONPATH</code> I get no output. If I try to access <code>PYTHONPATH</code> from a python terminal, I get a <code>KeyError</code>:</p> <pre><code>&gt;&gt;&gt; import os &gt;&gt;&gt; os.environ[&quot;PYTHONPATH&quot;] Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;C:\Python310\lib\os.py&quot;, line 679, in __getitem__ raise KeyError(key) from None KeyError: 'PYTHONPATH' </code></pre> <p>However, I know <code>PYTHONPATH</code> is defined <em>somewhere</em>, because its value does appear when I use <code>sys.path</code>:</p> <pre><code>&gt;&gt;&gt; import sys &gt;&gt;&gt; sys.path ['', 'C:\\Python310\\python310.zip', 'C:\\Python310\\DLLs', 'C:\\Python310\\lib', 'C:\\Python310', 'C:\\Users\\aa\\AppData\\Roaming\\Python\\Python310\\site-packages', 'C:\\Python310\\lib\\site-packages', 'C:\\Python310\\lib\\site-packages\\scons-4.4.0-py3.10.egg', 'C:\\Python310\\lib\\site-packages\\colorama-0.3.2-py3.10.egg', 'C:\\Python310\\lib\\site-packages\\win32', 'C:\\Python310\\lib\\site-packages\\win32\\lib', 'C:\\Python310\\lib\\site-packages\\Pythonwin'] </code></pre> <p>If <code>PYTHONPATH</code> is truly an environment variable, why can't I access it using either Powershell or <code>os</code> in my Python interpreter?</p>
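A quick stdlib experiment illustrating the distinction the question runs into: `PYTHONPATH` is only present in the environment if something actually set it; when it *is* set, the interpreter folds its entries into `sys.path` at startup. The temp directory here is just a stand-in path:

```python
import os
import subprocess
import sys
import tempfile

# If nothing set PYTHONPATH, os.environ["PYTHONPATH"] raises KeyError
# even though sys.path was still initialized from installation defaults.
extra = tempfile.mkdtemp()
env = dict(os.environ, PYTHONPATH=extra)  # set it for a child interpreter

child = (
    "import os, sys;"
    "print(os.environ['PYTHONPATH']);"
    "print(os.environ['PYTHONPATH'] in sys.path)"
)
out = subprocess.run([sys.executable, "-c", child],
                     env=env, capture_output=True, text=True)
lines = out.stdout.splitlines()
print(lines[0] == extra)  # the child process sees the variable
print(lines[1])           # whether the entry appears in sys.path (normally True)
```

So the absence of `$ENV:PYTHONPATH` in Powershell simply means it was never set; the entries seen in `sys.path` come from the installation-dependent defaults, not from `PYTHONPATH`.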
<python><environment-variables><pythonpath>
2023-05-06 10:09:37
1
6,067
user32882
76,188,195
6,630,397
Cleanest way of changing dtype of several DataFrame columns
<p>I have four columns of type <code>object</code> in a Pandas (2.0.1) <a href="https://pandas.pydata.org/docs/reference/frame.html" rel="nofollow noreferrer">DataFrame</a> which I want to convert to <code>int</code>.</p> <p>Applying the following method:</p> <pre><code>cols = ['x1','x2','y1','y2'] df[cols] = df[cols].apply(pd.to_numeric) # The same message is raised when trying to cast a single column: df['x1'] = pd.to_numeric(df['x1']) # The same message is also raised when using .astype(): df[cols] = df[cols].astype(int) </code></pre> <p>as described here: <a href="https://stackoverflow.com/a/28648923/6630397">https://stackoverflow.com/a/28648923/6630397</a> raises the message:</p> <pre><code>/tmp/ipykernel_87959/2834796204.py:1: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df[cols] = df[cols].apply(pd.to_numeric) </code></pre> <p>How can I <em>properly</em> (and rapidly) cast my four columns from <code>object</code> to <code>int</code>?</p>
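One hedged reading of that warning: it is about the frame being a slice of another frame, not about the cast itself. A minimal sketch reproducing and silencing it (the column names mirror the question; the `other` column and the slicing step are invented to trigger the situation):

```python
import pandas as pd

df = pd.DataFrame({"x1": ["1", "2"], "x2": ["3", "4"],
                   "y1": ["5", "6"], "y2": ["7", "8"],
                   "other": ["a", "b"]})

# Typical trigger: the frame being mutated was produced by slicing
# another frame. Taking an explicit copy makes ownership unambiguous,
# so the subsequent assignment no longer warns.
view = df[df["other"] == "a"]   # a slice of df
clean = view.copy()             # own the data

cols = ["x1", "x2", "y1", "y2"]
clean[cols] = clean[cols].astype("int64")   # or .apply(pd.to_numeric)
print(all(str(t) == "int64" for t in clean.dtypes[cols]))  # True
```

If the frame was not sliced from anything, `df[cols] = df[cols].astype("int64")` alone is already the idiomatic multi-column cast.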
<python><pandas><dataframe><dtype>
2023-05-06 09:43:08
3
8,371
swiss_knight
76,188,133
7,025,033
Python, numpy performance inconsistency
<p>I am creating a cellular automaton, and I was doing a performance test on the code.</p> <p>I did it like this:</p> <pre><code>while True: time_1 = perf_counter_ns() numpy.fnc1() time_2 = perf_counter_ns() numpy.fnc2() time_3 = perf_counter_ns() numpy.fnc3() time_4 = perf_counter_ns() numpy.fnc4() time_5 = perf_counter_ns() numpy.fnc5() time_6 = perf_counter_ns() numpy.fnc6() time_7 = perf_counter_ns() diff_1 = time_2 - time_1 diff_2 = time_3 - time_2 diff_3 = time_4 - time_3 diff_4 = time_5 - time_4 diff_5 = time_6 - time_5 diff_6 = time_7 - time_6 print(results) </code></pre> <p>However, I found that the runtimes are somewhat inconsistent. I did some longer tests, for about 8 hours.</p> <p>It seems the performance is &quot;jumping around&quot;: for an hour it runs quicker, then slower, and within that hour there are smaller sections when it is quicker, then slower...</p> <p>What is even more disturbing, the length of the slower-quicker periods is increasing over time. So I am pretty sure it is not a regular system process.</p> <p>The tests were conducted on Debian 11 with an AMD Ryzen 5 2600 Six-Core Processor.</p> <p>The GUI was running, but no browser, etc. I monitored the overall processor usage and most of the cores did nothing.</p> <p>Note that it is a cellular automaton, and it evolved during the test, so the input data is <em>not</em> the same; it changes constantly.</p> <p>However, I don't think that the time it takes to add up arrays would differ based on the numbers in the arrays...</p> <p>Also, if the performance change were data-driven, it would be very unlikely to see random data producing about the same runtimes for a whole minute...</p> <p><strong>Question</strong>: What am I seeing??? What causes it???</p> <p>My hunch is that maybe it has something to do with <code>python</code> thread scheduling, but I don't know if <code>numpy</code> uses threads, and I only have 1 thread in my code...</p> <p>There is at least a 2x performance difference between the &quot;slower&quot; and the &quot;quicker&quot; state on average, so it would be very beneficial to make the code stick in the &quot;quicker&quot; state.</p> <p>I included 2 images:</p> <p>One of the images is the value of <code>diff_3</code>, cycle-after-cycle, zoomed in more and more.</p> <p>The other image is <code>diff_1</code>, <code>diff_2</code>, ... <code>diff_6</code>, all in 1 image. At this scale it is difficult to see the details, but <code>diff_3</code> and <code>diff_5</code> are somewhat comparable. As you can see, the &quot;quicker&quot; and &quot;slower&quot; periods match up, but not exactly.</p> <p>On the images there are about 6.5 million cycles.</p> <p><a href="https://i.sstatic.net/w979G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w979G.png" alt="Runtime of a single function" /></a></p> <p><a href="https://i.sstatic.net/Dx6Aj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dx6Aj.png" alt="Runtime of each function" /></a></p>
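As a side note, a repeated-measurement harness that reports a distribution per function, rather than one diff per cycle, makes it easier to separate data-driven drift from scheduler or allocator noise. A minimal stdlib sketch (the benchmarked callable here is a placeholder for the real step functions):

```python
import statistics
import time

def bench(fn, *args, repeats=200):
    """Collect per-call timings so slow/fast regimes show up as a
    spread between the median and the upper percentiles, instead of
    being averaged away into a single number."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter_ns()
        fn(*args)
        samples.append(time.perf_counter_ns() - t0)
    samples.sort()
    return {
        "median": statistics.median(samples),
        "p90": samples[int(0.9 * len(samples))],
        "max": samples[-1],
    }

stats = bench(sum, range(1000))  # placeholder workload
print(sorted(stats))  # ['max', 'median', 'p90']
```

A large gap between median and p90 that persists across runs is a hint of a bimodal (slow/fast) regime rather than ordinary jitter; CPU frequency scaling and core migration are common culprits worth ruling out (e.g. by pinning the process and the governor).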
<python><numpy><performance>
2023-05-06 09:26:03
1
1,230
Zoltan K.
76,187,997
7,195,332
Azure QueueClient Python can't use DefaultAzureCredential
<p>I'm using <em>DefaultAzureCredential()</em> from <em>azure.identity</em> to get a credential and use it to establish a TableServiceClient (azure.data.tables) connection. It works. If I try to do the same for QueueClient (azure.storage.queue), I get the following error. As far as I understand the documentation, it should be possible to use DefaultAzureCredential() for that.</p> <pre><code>[2023-05-06T08:40:40.331Z] Response status: 403 Response headers: 'Content-Length': '279' 'Content-Type': 'application/xml' 'Server': 'Windows-Azure-Queue/1.0 Microsoft-HTTPAPI/2.0' 'x-ms-request-id': '30a41a1b-a003-0004-71f6-7f25a6000000' 'x-ms-client-request-id': 'adedb50b-ebe9-11ed-b983-001a7dda7113' 'x-ms-version': 'REDACTED' 'x-ms-error-code': 'REDACTED' 'Date': 'Sat, 06 May 2023 08:40:39 GMT' </code></pre> <p>I'm connecting in the following way. If I switch the credential for a storage account key, it works.</p> <pre><code>credential = DefaultAzureCredential() queue_service_client = QueueClient( account_url = os.environ[&quot;STORAGE_ENDPOINT_QUEUE&quot;], credential=credential, queue_name = &quot;smsnotification&quot; ) def pushNotifyToQueune(queue_service_client, playerId): logging.info(f&quot;Push notify to queune&quot;) try: response = queue_service_client.send_message(&quot;m&quot;) logging.info(f&quot;Print response form queune {response}&quot;) except: logging.info(f&quot;Something goes wrong when pusing notify to queune&quot;) #TODO use ErrorName </code></pre> <p>I tried using the Storage Account Key and it works. DefaultAzureCredential is also working for TableServiceClient but not for Queue.</p>
<python><azure><azure-functions><azure-sdk-python>
2023-05-06 08:54:01
1
333
Caishen
76,187,968
8,281,276
Feed input as test case for a source code in string
<p>I'm making a simple quiz app that asks a user to write source code. For example, a user is asked to &quot;Given n values, add them and print it&quot;. Input is</p> <pre class="lang-none prettyprint-override"><code>N a[0], a[1], ..., a[N-1] </code></pre> <p>and output is</p> <pre class="lang-none prettyprint-override"><code>s </code></pre> <p>For example, an input is</p> <pre class="lang-none prettyprint-override"><code>3 1 3 5 </code></pre> <p>and an output is</p> <pre class="lang-none prettyprint-override"><code>9 </code></pre> <p>A user should write</p> <pre><code>N = int(input()) print(sum(int(x) for x in input().split())) </code></pre> <p>The source code arrives as a string because the user answers in an HTML input and submits it to a server.</p> <p>Given the above context, how would you test source code, given as a string, which contains <code>input</code> and <code>print</code>?</p> <p>My work-in-progress code is</p> <pre class="lang-py prettyprint-override"><code># source should be assigned with the payload from the HTTP POST request source = &quot;N = int(input())\nprint(sum(int(x) for x in input().split()))&quot; # How to feed an input and validate an output? assertEqual(correct_answer, exec(source)) </code></pre> <p>I was thinking of using <code>exec</code>, but I'm not sure how I can feed input for <code>input()</code> programmatically.</p> <p>At least, you can redirect stdout for <code>print</code>.</p>
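One possible approach, sketched with stdlib only: swap `sys.stdin` for a `StringIO` so `input()` reads the fed test case, and capture the printed output with `contextlib.redirect_stdout`. The function name is illustrative, and a real quiz server would additionally need sandboxing and timeouts (e.g. running submissions in a subprocess):

```python
import contextlib
import io
import sys

def run_submission(source, stdin_text):
    """Execute submitted source with a fed stdin and captured stdout.

    Caution: exec gives the code full interpreter access; this is a
    sketch of the I/O plumbing only, not a sandbox.
    """
    stdout = io.StringIO()
    old_stdin = sys.stdin
    sys.stdin = io.StringIO(stdin_text)   # input() reads from sys.stdin
    try:
        with contextlib.redirect_stdout(stdout):
            exec(source, {})              # fresh globals per submission
    finally:
        sys.stdin = old_stdin
    return stdout.getvalue().strip()

source = "N = int(input())\nprint(sum(int(x) for x in input().split()))"
print(run_submission(source, "3\n1 3 5\n"))  # 9
```

The grader then compares the returned string against the expected output, e.g. `assertEqual("9", run_submission(source, "3\n1 3 5\n"))`.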
<python>
2023-05-06 08:47:02
1
1,676
Wt.N
76,187,889
4,692,635
Why does regular expression think chr(181) and chr(956) are the same when ignoring case?
<h1>1. What I encountered</h1> <p>The following code:</p> <pre class="lang-py prettyprint-override"><code>import re txt = 'μ' lst = [ chr(181), chr(956) ] for c in lst: print(c) reObj = re.compile(c, re.IGNORECASE | re.VERBOSE ) print(list(re.finditer(reObj, txt))) </code></pre> <p>displays:</p> <pre><code>µ [&lt;re.Match object; span=(0, 1), match='μ'&gt;] μ [&lt;re.Match object; span=(0, 1), match='μ'&gt;] </code></pre> <p>which means <code>chr(181) == chr(956)</code> by using <code>IGNORECASE</code>.</p> <p>I can understand the upper/lower case of non-English letters:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; 'a'.upper() 'A' </code></pre> <h1>2. My questions</h1> <h2>2.1 How to get the upper/lower case of non-English letters</h2> <p>since I found:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; chr(181).upper() 'Μ' </code></pre> <p>which I supposed would give me the character <code>chr(956)</code>, but:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; chr(956).upper() 'Μ' &gt;&gt;&gt; chr(956).upper() == chr(181).upper() True &gt;&gt;&gt; chr(181).lower() 'µ' &gt;&gt;&gt; chr(956).lower() 'μ' &gt;&gt;&gt; chr(181).lower() == chr(956).lower() False </code></pre> <h2>2.2 Are there detailed documents on such <code>upper/lower case</code> mappings, for example, a table?</h2> <p>Thanks</p>
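For reference, case-insensitive matching in `re` works via Unicode case *folding* rather than `lower()`/`upper()`, which is consistent with the behaviour above: U+00B5 MICRO SIGN case-folds to U+03BC GREEK SMALL LETTER MU per the Unicode CaseFolding table. A quick check:

```python
import re

micro, mu = chr(181), chr(956)  # U+00B5 MICRO SIGN, U+03BC GREEK SMALL MU

# casefold() applies full Unicode case folding, the relation IGNORECASE
# is built on; lower() leaves U+00B5 unchanged, so the two differ there.
print(micro.casefold() == mu.casefold())              # True
print(micro.lower() == mu.lower())                    # False
print(bool(re.fullmatch(micro, mu, re.IGNORECASE)))   # True
```

So `str.casefold()` (not `lower()`) is the method that mirrors what `re.IGNORECASE` considers "the same letter"; the authoritative table is `CaseFolding.txt` in the Unicode Character Database.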
<python><regex><unicode><python-re>
2023-05-06 08:26:10
0
567
oyster
76,187,525
3,234,803
Type function that returns a function wrapper
<p>I'm trying to annotate the return type of the function below:</p> <pre class="lang-py prettyprint-override"><code>from concurrent.futures import Executor from typing import Callable from typing_extensions import Concatenate, ParamSpec, TypeVar _P = ParamSpec(&quot;_P&quot;) _R = TypeVar(&quot;_R&quot;) def make_run_in_executor(executor: Executor): # What to put here? def run_in_executor( func: Callable[_P, _R], *args: _P.args, **kwargs: _P.kwargs ) -&gt; _R: return executor.submit(func, *args, **kwargs).result() return run_in_executor </code></pre> <p>I tried <code>Callable[Concatenate[Callable[_P, _R], _P], _R]</code>, but got the following error from Pylance:</p> <pre><code>Expression of type &quot;(func: (**_P@run_in_executor) -&gt; _R@run_in_executor, *args: _P.args, **kwargs: _P.kwargs) -&gt; _R@run_in_executor&quot; cannot be assigned to return type &quot;((**_P@make_run_in_executor) -&gt; _R@make_run_in_executor, **_P@make_run_in_executor) -&gt; _R@make_run_in_executor&quot; Β Β Type &quot;(func: (**_P@run_in_executor) -&gt; _R@run_in_executor, *args: _P.args, **kwargs: _P.kwargs) -&gt; _R@run_in_executor&quot; cannot be assigned to type &quot;((**_P@make_run_in_executor) -&gt; _R@make_run_in_executor, **_P@make_run_in_executor) -&gt; _R@make_run_in_executor&quot; </code></pre> <p>Did I do it right and this is just a bug in Pylance? Or is there another way to type this?</p>
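One commonly suggested workaround, sketched here as an assumption rather than a guarantee that every checker accepts it: a callback `Protocol` whose `__call__` is generic per invocation, since a plain `Callable` cannot bind the same `ParamSpec` at two nesting levels the way `Concatenate` would require. Names are illustrative:

```python
from concurrent.futures import Executor, ThreadPoolExecutor
from typing import Callable, Protocol, TypeVar

try:
    from typing import ParamSpec       # Python 3.10+
except ImportError:                    # pragma: no cover
    from typing_extensions import ParamSpec

_P = ParamSpec("_P")
_R = TypeVar("_R")

class _Runner(Protocol):
    # _P and _R are rebound on every call, which is what we want:
    # each invocation can take a differently-typed func.
    def __call__(self, func: Callable[_P, _R], /,
                 *args: _P.args, **kwargs: _P.kwargs) -> _R: ...

def make_run_in_executor(executor: Executor) -> _Runner:
    def run_in_executor(func: Callable[_P, _R], /,
                        *args: _P.args, **kwargs: _P.kwargs) -> _R:
        return executor.submit(func, *args, **kwargs).result()
    return run_in_executor

ex = ThreadPoolExecutor(max_workers=1)
run = make_run_in_executor(ex)
result = run(pow, 2, 10)
ex.shutdown()
print(result)  # 1024
```

At runtime nothing changes; the Protocol only gives the type checker a name for "a function generic over `_P` and `_R`" that can appear as a return annotation.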
<python>
2023-05-06 06:53:42
1
6,718
Zizheng Tai
76,187,493
16,783,860
SOLVED Why is np.empty == np.zeros True, and np.empty == np.empty False
<p>New to Python and maybe misunderstanding some fundamentals here, but having trouble making sense of the following.</p> <pre><code>print(numpy.empty(3) == numpy.zeros(3)) #Result [True True True] </code></pre> <pre><code>print(numpy.empty(3) == numpy.empty(3)) #Result [False False False] </code></pre> <p>My original assumption was that the .empty() array, when the comparison is called, is initialized like .zeros(). But if that were the case, the latter result wouldn't make sense.</p>
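For context, a hedged sketch of the distinction: `np.zeros` guarantees its contents, `np.empty` does not. `empty` only allocates, so any apparent zeros are whatever bytes the allocator happens to hand back (often a recycled or freshly zeroed buffer, which is why results can *look* stable without being guaranteed):

```python
import numpy as np

a = np.zeros(3)
print(a.tolist())   # [0.0, 0.0, 0.0] -- guaranteed by np.zeros

b = np.empty(3)     # contents are undefined: do not read before writing
b[:] = 1.0          # always initialize an empty array before use
print(b.tolist())   # [1.0, 1.0, 1.0]
```

So both comparisons in the question are comparisons against arbitrary memory; neither result is something code should rely on.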
<python><numpy>
2023-05-06 06:42:39
3
383
kenta_desu
76,187,402
16,770,405
Implementing Kosaraju's algorithm to detect a cycle in a list of edges
<p>I'm going off the pseudocode in this <a href="https://en.wikipedia.org/wiki/Kosaraju%27s_algorithm" rel="nofollow noreferrer">link</a> but I can't seem to get it working to detect the SCCs or any cycles. Trying to detect cycles in a list of edges. Any help is appreciated.</p> <pre class="lang-py prettyprint-override"><code>class Solution: def canFinish(self, numCourses: int, prerequisites: List[List[int]]) -&gt; bool: if len(prerequisites) == 0: return True adj = defaultdict(list) inv = defaultdict(list) for i, j in prerequisites: if i ==j: return False adj[j].append(i) inv[i].append(j) topo = [] visited = set() def toposort(head): if head in visited: return visited.add(head) for neighbor in adj[head]: toposort(head) topo.append(head) for i in range(numCourses): toposort(i) assigned = set() scc = defaultdict(list) def kos(u, root): if u in assigned: return assigned.add(u) scc[root].append(u) for neighbor in inv[u]: kos(neighbor, root) for i in reversed(topo): kos(i, i) print(scc) return max(len(v) for v in scc.values()) &lt;= 1 </code></pre>
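For comparison, here is a minimal iterative rendering of the same Wikipedia pseudocode, independent of the LeetCode wrapper (the function name and graph encoding are illustrative). Diffing its traversal against the recursive version above may help isolate where they diverge:

```python
from collections import defaultdict

def has_cycle(n, edges):
    """Kosaraju: a directed graph has a cycle iff some SCC has size > 1,
    or a vertex has a self-loop."""
    adj, inv = defaultdict(list), defaultdict(list)
    for u, v in edges:
        if u == v:
            return True              # self-loop
        adj[u].append(v)
        inv[v].append(u)

    # Pass 1: finish-time order on the original graph (iterative DFS).
    visited, order = set(), []
    for s in range(n):
        if s in visited:
            continue
        visited.add(s)
        stack = [(s, iter(adj[s]))]
        while stack:
            node, it = stack[-1]
            for nxt in it:
                if nxt not in visited:
                    visited.add(nxt)
                    stack.append((nxt, iter(adj[nxt])))
                    break
            else:                    # children exhausted: node finishes
                order.append(node)
                stack.pop()

    # Pass 2: sweep reversed finish order on the transposed graph.
    comp = {}
    for root in reversed(order):
        if root in comp:
            continue
        comp[root] = root
        stack, size = [root], 1
        while stack:
            u = stack.pop()
            for w in inv[u]:
                if w not in comp:
                    comp[w] = root
                    size += 1
                    stack.append(w)
        if size > 1:                 # non-trivial SCC => cycle
            return True
    return False

print(has_cycle(3, [(0, 1), (1, 2)]))          # False
print(has_cycle(3, [(0, 1), (1, 2), (2, 0)]))  # True
```

The iterative form also sidesteps Python's recursion limit, which the recursive version can hit on large course graphs.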
<python><directed-acyclic-graphs><directed-graph>
2023-05-06 06:08:16
1
323
Kevin Jiang
76,187,395
8,283,848
Aggregate sum of values from Django JSONField
<p>Consider I have a simple model setup with <strong><code>JSONField</code></strong></p> <pre class="lang-py prettyprint-override"><code>from django.db import models import random def simple_json_callable(): return {&quot;amount&quot;: random.randint(1, 100)} def nested_json_callable(): data = { &quot;l1&quot;: { &quot;l2&quot;: { &quot;l3&quot;: { &quot;amount&quot;: random.randint(1, 100) } } } } return data class Foo(models.Model): simple_json = models.JSONField(default=simple_json_callable) nested_json = models.JSONField(default=nested_json_callable) </code></pre> <p>I want to get the sum of <em><strong><code>amount</code></strong></em> key from both <code>simple_json</code> and <code>nested_json</code> fields.</p> <p>I tried the following queries</p> <h3>Case 1: Annotate and then aggregate</h3> <pre class="lang-py prettyprint-override"><code> result = Foo.objects.annotate( simple_json_amount=Cast('simple_json__amount', output_field=models.IntegerField()), nested_json_amount=Cast('nested_json__l1__l2__l3__amount', output_field=models.IntegerField()), ).aggregate( simple_json_total=models.Sum('simple_json_amount'), nested_json_total=models.Sum('nested_json_amount'), ) </code></pre> <h3>Case 2: Aggregate</h3> <pre class="lang-py prettyprint-override"><code>result = Foo.objects.aggregate( simple_json_total=models.Sum(Cast('simple_json__amount', output_field=models.IntegerField())), nested_json_total=models.Sum(Cast('nested_json__l1__l2__l3__amount', output_field=models.IntegerField())), ) </code></pre> <p>In both cases, I got the error</p> <pre><code>django.db.utils.DataError: cannot cast jsonb object to type integer </code></pre> <h3>Question</h3> <p>What is the proper way to aggregate sum of values from a <code>JSONField</code> in Django?</p> <hr /> <h3>Version</h3> <ul> <li>Django 3.1.X</li> <li>Python 3.9.X</li> </ul>
<python><django><django-orm><django-jsonfield><jsonfield>
2023-05-06 06:06:32
1
89,380
JPG
76,187,256
8,237,910
ImportError: urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with LibreSSL 2.8.3
<p>After <code>pip install openai</code>, when I try to <code>import openai</code>, it shows this error:</p> <blockquote> <p>the 'ssl' module of urllib3 is compile with LibreSSL not OpenSSL</p> </blockquote> <p>I just followed a tutorial on a project about using API of OpenAI. But when I get to the first step which is the install and import OpenAI, I got stuck. And I tried to find the solution for this error but I found nothing.</p> <p>Here is the message after I try to import OpenAI:</p> <pre class="lang-none prettyprint-override"><code>Python 3.9.6 (default, Mar 10 2023, 20:16:38) [Clang 14.0.3 (clang-1403.0.22.14.1)] on darwin Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import openai Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/Users/yule/Library/Python/3.9/lib/python/site-packages/openai/__init__.py&quot;, line 19, in &lt;module&gt; from openai.api_resources import ( File &quot;/Users/mic/Library/Python/3.9/lib/python/site-packages/openai/api_resources/__init__.py&quot;, line 1, in &lt;module&gt; from openai.api_resources.audio import Audio # noqa: F401 File &quot;/Users/mic/Library/Python/3.9/lib/python/site-packages/openai/api_resources/audio.py&quot;, line 4, in &lt;module&gt; from openai import api_requestor, util File &quot;/Users/mic/Library/Python/3.9/lib/python/site-packages/openai/api_requestor.py&quot;, line 22, in &lt;module&gt; import requests File &quot;/Users/mic/Library/Python/3.9/lib/python/site-packages/requests/__init__.py&quot;, line 43, in &lt;module&gt; import urllib3 File &quot;/Users/mic/Library/Python/3.9/lib/python/site-packages/urllib3/__init__.py&quot;, line 38, in &lt;module&gt; raise ImportError( ImportError: urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with LibreSSL 2.8.3. 
See: https://github.com/urllib3/urllib3/issues/2168 </code></pre> <p>I tried to <code>--upgrade</code> the <code>urllib3</code>, but it is still not working. The result is:</p> <pre class="lang-none prettyprint-override"><code>pip3 install --upgrade urllib3 Defaulting to user installation because normal site-packages is not writeable Requirement already satisfied: urllib3 in ./Library/Python/3.9/lib/python/site-packages (2.0.2) </code></pre>
<python><openai-api><urllib3>
2023-05-06 05:11:42
21
1,923
maimai
76,187,161
3,681,795
QR Code with extra payload via steganography
<p>I'm trying to use the approach described on <a href="https://ieeexplore.ieee.org/document/8985346" rel="nofollow noreferrer">this paper</a> to &quot;watermark&quot; a QR Code with a special payload.</p> <p>The whole flow seems to be correct, but I'm having some trouble saving the payload as bytes to be xor'd against the QR Code</p> <pre class="lang-py prettyprint-override"><code>import qrcode from PIL import Image, ImageChops qr = qrcode.QRCode( version=5, error_correction=qrcode.constants.ERROR_CORRECT_H, box_size=10, border=2, ) msg = &quot;25.61795.?000001?.907363.02&quot; sct = &quot;secret message test&quot; def save_qrcode(data, path): qr.add_data(data) qr.make(fit=True) img = qr.make_image(fill_color=&quot;black&quot;, back_color=&quot;white&quot;) img.save(path) save_qrcode(msg, &quot;out.png&quot;) save_qrcode(sct, &quot;out2.png&quot;) pure_qr_code = Image.open(&quot;out.png&quot;) encoded_data_as_img = Image.new(mode=pure_qr_code.mode, size=pure_qr_code.size) encoded_data_pre_xor = [ord(e) for e in sct] print(encoded_data_pre_xor) # Encoding encoded_data_as_img.putdata(encoded_data_pre_xor) encoded_data_as_img.save(&quot;out2.png&quot;) encoded_data_as_img = Image.open(&quot;out2.png&quot;) result = ImageChops.logical_xor(pure_qr_code, encoded_data_as_img) result.save(&quot;result.png&quot;) # Decoding result = Image.open(&quot;result.png&quot;) result2 = ImageChops.logical_xor(result, pure_qr_code) result2.save(&quot;result2.png&quot;) img_data_as_bytes = Image.open(&quot;result2.png&quot;).getdata() encoded_data_after_xor = [] i = 0 while img_data_as_bytes[i]: encoded_data_after_xor.append(img_data_as_bytes[i]) i += 1 print(encoded_data_after_xor) </code></pre> <p>This gives me the following output:</p> <pre class="lang-py prettyprint-override"><code>[115, 101, 99, 114, 101, 116, 32, 109, 101, 115, 115, 97, 103, 101, 32, 116, 101, 115, 116] [255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255] </code></pre> 
<p>But trying to diff <code>result2.png</code> and <code>out2.png</code> returns no diff. This means the problem happens in the message-to-image saving step:</p> <pre class="lang-py prettyprint-override"><code>encoded_data_as_img.putdata(encoded_data_pre_xor) </code></pre>
<python><python-imaging-library><qr-code><steganography>
2023-05-06 04:40:36
2
382
Victor
76,187,052
281,873
Altair: How to use an aggregate that adds a facet (e.g., an extra column with the aggregate)?
<p>Aggregate and joinaggregate don't seem to be <em>quite</em> what I want here. I would like to add another facet to my plot that is an aggregation. I want to start with something like:</p> <pre><code>import altair as alt import pandas as pd data = { &quot;Dataset&quot;: [&quot;A&quot;, &quot;B&quot;, &quot;A&quot;, &quot;B&quot;], &quot;Variant&quot;: [&quot;X&quot;, &quot;X&quot;, &quot;Y&quot;, &quot;Y&quot;], &quot;Val&quot;: [1, 4, 7, 3], } df = pd.DataFrame(data) alt.Chart(df, mark=&quot;bar&quot;).encode(x=&quot;Variant:N&quot;, y=&quot;Val:Q&quot;, column=&quot;Dataset:N&quot;) </code></pre> <p><a href="https://i.sstatic.net/QJzmw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QJzmw.png" alt="Base data, no aggregation" /></a></p> <p>and then aggregate across all columns to generate a mean value, which will hopefully look something like:</p> <p><a href="https://i.sstatic.net/mppdS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mppdS.png" alt="Base data + aggregation" /></a></p> <p>I can do this in Pandas. I'd like to do it in Altair if possible.</p> <p>Bonus points: Aggregate with the geometric mean (not the arithmetic mean)?</p>
<python><altair>
2023-05-06 03:49:56
2
385
jowens
76,187,045
10,464,107
Django - if statement in template not working
<p>I have a Django site using the following models:</p> <pre><code>class User_Posts(models.Model): message = models.TextField(max_length=CONFIG['MODEL_SETTINGS']['User_Posts']['message_length']) post_date = models.DateTimeField(auto_now_add=True) character = models.ForeignKey(Characters, related_name='user_posts', on_delete=models.CASCADE) created_by = models.ForeignKey(User, related_name='user_posts', on_delete=models.CASCADE) non_bot = models.CharField(max_length = 10, default = 'user') class Bot_Replies(models.Model): message = models.TextField(max_length=CONFIG['MODEL_SETTINGS']['Bot_Replies']['message_length']) character = models.ForeignKey(Characters, related_name='bot_posts', on_delete=models.CASCADE) post_date = models.DateTimeField(auto_now_add=True) created_by = models.ForeignKey(User, related_name='user_origin', on_delete=models.CASCADE) non_bot = models.CharField(max_length = 10, default = 'bot') </code></pre> <p>in my view, i am passing:</p> <pre><code> user_posts = User_Posts.objects.filter(created_by = request.user, character__pk = pk) bot_posts = Bot_Replies.objects.filter(created_by = request.user, character__pk = pk) posts_list = sorted( chain(user_posts, bot_posts), key = lambda x: x.post_date, reverse = False )[-10:] </code></pre> <p>and passing 'posts_list' to the template as 'posts'. My template is looping:</p> <pre><code> {% for post in posts %} &lt;div class=&quot;card mb-2&quot;&gt; &lt;div class=&quot;card-body p-3&quot;&gt; &lt;div class=&quot;row&quot;&gt; {% if post.non_bot == 'user' %} &lt;div class=&quot;col-2&quot;&gt; {{ post.created_by }} &lt;/div&gt; {% else %} &lt;div class=&quot;col-2&quot;&gt; {{ post.created_by }} &lt;/div&gt; {% endif %} &lt;div class=&quot;col-2&quot;&gt; {{ post.message }} &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; {% endfor %} </code></pre> <p>but the if statement never works, it always passes to the &quot;else&quot; clause. 
If I print out the <code>non_bot</code> value on my screen, it shows the correct bot/non-bot values, but for some reason I cannot access them in the template. All other values I display show up correctly. Any ideas why? Thank you! (As an FYI, I initially had this as a boolean field, which also didn't work.)</p>
<python><django><django-models><django-views>
2023-05-06 03:47:06
0
347
DeathbyGreen
76,187,007
6,063,706
Weld mocap body to point of other body mujoco
<p>I am trying to weld a mocap body to the tip of a finger in mujoco. I am using the following code:</p> <pre><code> &lt;equality&gt; &lt;weld body1=&quot;mocap_test&quot; body2=&quot;finger1_distal&quot; solimp=&quot;0.9 0.95 0.001&quot; solref=&quot;0.02 1&quot; anchor=&quot;0.0 0.1035 0.0&quot;&gt;&lt;/weld&gt; &lt;/equality&gt; </code></pre> <p>However, the mocap keeps getting welded to the base of the finger not the tip (see image. Currently the base of the distal body is going to the mocap body but I would like the tip (blue point) to go there). How do I weld it to the tip in mujoco? I thought I could use the anchor command but that seems to be doing nothing. Thanks for the help</p> <p><a href="https://i.sstatic.net/MEZXU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MEZXU.png" alt="enter image description here" /></a></p> <p>For reference, the full finger xml is:</p> <pre><code>&lt;body name=&quot;finger1_proximal&quot; childclass=&quot;prox_link&quot; pos=&quot;0.26 0.0 0.0&quot; euler=&quot;1.57079632679 3.14159265359 -1.57079632679&quot;&gt; &lt;geom name=&quot;finger1_proximal&quot; rgba=&quot;0.0 01 0.0 1&quot;&gt;&lt;/geom&gt; &lt;body name=&quot;finger1_middle&quot; childclass=&quot;middle_link&quot; pos=&quot;0 0.0655 0&quot; euler=&quot;0 0 0&quot;&gt; &lt;inertial pos=&quot;0 0 0&quot; mass=&quot;0.09930&quot; fullinertia=&quot;0.00030191767 0.00001861456 0.00030797470 0.00000203190 0.00000000003 0.00000000011&quot;&gt;&lt;/inertial&gt; &lt;joint name=&quot;finger1_middle_joint&quot;&gt;&lt;/joint&gt; &lt;geom name=&quot;finger1_middle&quot;&gt;&lt;/geom&gt; &lt;body name=&quot;finger1_distal&quot; childclass=&quot;dist_link&quot; pos=&quot;0 0.069 0&quot; euler=&quot;0 3.14159265359 0&quot;&gt; &lt;inertial pos=&quot;0 0 0&quot; mass=&quot;0.09268&quot; fullinertia=&quot;0.00053149692 0.00001518193 0.00053424309 -0.00000000003 0.00000000000 0.00000474504&quot;&gt;&lt;/inertial&gt; &lt;joint 
name=&quot;finger1_dist_joint&quot;&gt;&lt;/joint&gt; &lt;geom name=&quot;finger1_distal_bracket&quot; mesh=&quot;distal_bracket&quot; user=&quot;0.0285&quot;&gt;&lt;/geom&gt; &lt;geom name=&quot;finger1_distal&quot; mesh=&quot;distal_tip&quot; pos=&quot;0 0.0285 0&quot; user=&quot;0.075&quot;&gt;&lt;/geom&gt; &lt;site name=&quot;finger1_tip&quot; pos=&quot;0 0.1035 0&quot;&gt;&lt;/site&gt; &lt;/body&gt; &lt;/body&gt; &lt;/body&gt; </code></pre>
<python><mujoco>
2023-05-06 03:31:39
0
1,035
Tob
76,186,981
3,826,733
manually test azure-cosmos sdk exceptions in python
<p>How do I force-test exceptions in the code below? In it, I am creating a record inside Azure Cosmos DB, and I want to manually test all types of failure scenarios.</p> <pre><code>def resolve_createUser(_, info, user): print(user) try: result = container.create_item({ &quot;id&quot;: user[&quot;userDisplayName&quot;]+&quot;-&quot;+user[&quot;userPhoneNumber&quot;], &quot;partitionKey&quot;: &quot;user&quot;, &quot;userPhoneNumber&quot;: user[&quot;userPhoneNumber&quot;], &quot;userDisplayName&quot;: user[&quot;userDisplayName&quot;] }) except AzureMissingResourceHttpError as e: if e.status_code == 200: return { &quot;status&quot;: e.status_code, &quot;error&quot;: &quot;&quot;, &quot;user&quot;: result } else: return { &quot;status&quot;: e.status_code, &quot;error&quot;: e.message, &quot;user&quot;: None } else: return { &quot;status&quot;: 200, &quot;error&quot;: &quot;&quot;, &quot;user&quot;: result } </code></pre>
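One general pattern, sketched with stdlib `unittest.mock` and a stand-in exception class so it runs standalone: give a mocked container's `create_item` a `side_effect` to force each failure path. Against the real SDK you would patch the module-level container (e.g. `mock.patch("yourmodule.container")`) and raise `azure.cosmos.exceptions.CosmosHttpResponseError` instead; the handler below is a simplified stand-in for the resolver in the question:

```python
from unittest import mock

class FakeCosmosError(Exception):
    """Stand-in for the SDK's HTTP error (status_code/message attrs)."""
    def __init__(self, status_code, message):
        self.status_code = status_code
        self.message = message

def create_user(container, user):
    # simplified version of resolve_createUser's try/except shape
    try:
        return {"status": 200, "error": "", "user": container.create_item(user)}
    except FakeCosmosError as e:
        return {"status": e.status_code, "error": e.message, "user": None}

container = mock.Mock()
container.create_item.side_effect = FakeCosmosError(409, "Conflict")
result = create_user(container, {"id": "bob-123"})
print(result["status"], result["error"])  # 409 Conflict
```

Each test case then sets a different `side_effect` (409 conflict, 403 forbidden, a timeout, etc.) and asserts on the returned dict, with no live Cosmos DB involved.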
<python>
2023-05-06 03:21:52
0
3,842
Sumchans
76,186,944
3,789,200
Can't import a method in a file in Python
<p>I'm creating a project with the following format.</p> <p><a href="https://i.sstatic.net/6zMwA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6zMwA.png" alt="enter image description here" /></a></p> <p>In data_transformation.py the following import is made.</p> <pre><code>from src.utills import save_object </code></pre> <p>The main function is in data_ingestion.py. It calls data_transformation.py, and data_transformation.py calls the save_object method as mentioned above.</p> <p><a href="https://i.sstatic.net/Aq7uj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Aq7uj.png" alt="enter image description here" /></a></p> <p>Above is save_object. But I'm unable to import this method correctly; I always get the error below.</p> <p><a href="https://i.sstatic.net/epaua.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/epaua.png" alt="enter image description here" /></a></p> <p>Can someone kindly let me know what mistake I have made here?</p>
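For illustration, an import like `from src.utills import save_object` resolves against `sys.path`, so the directory that *contains* `src` must be on it; that is why running the script from the project root (or with `python -m package.module`) usually fixes this class of error. A self-contained reconstruction of the layout using a temp directory (file contents are stand-ins):

```python
import importlib
import pathlib
import sys
import tempfile

# Rebuild the project shape:  <root>/src/utills.py
root = pathlib.Path(tempfile.mkdtemp())
(root / "src").mkdir()
(root / "src" / "__init__.py").write_text("")
(root / "src" / "utills.py").write_text(
    "def save_object():\n    return 'saved'\n"
)

sys.path.insert(0, str(root))        # what running from the root gives you
sys.modules.pop("src", None)         # avoid clashes with any other 'src'
sys.modules.pop("src.utills", None)
importlib.invalidate_caches()

utills = importlib.import_module("src.utills")  # i.e. `from src.utills import ...`
print(utills.save_object())  # saved
```

Equivalently, keeping an `__init__.py` in `src` and launching from the directory above it (rather than from inside `src/components`) makes the original import line work unchanged.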
<python><import>
2023-05-06 03:09:25
1
1,186
user3789200
76,186,763
9,371,999
Value error using scikit-learn transformers
<p>I am having trouble with a piece of code I am writing. Specifically a pipeline. The data is a simple numerical dataframe (firewall logs) which is split into X_train and X_test in the usual way. After splitting, I devised a pipeline. This pipeline has 3 steps:</p> <ol> <li>ColumnTransformer(...some stuff going on here...)</li> <li>PCA(n=10)</li> <li>RandomForestClassifier()</li> </ol> <p>Then, I run the pipeline through a GridSearchCV(), fit() the grid search itself, and then fit the pipeline with the best parameters. The problem appears when I try to transform the test set with the fitted pipeline:</p> <p>The pipeline I am using to transform the testing data is as follows:</p> <pre><code>test_pipe_transform = Pipeline( steps = [ ('preprocessor', final_pipe.named_steps['preprocessor']), ('scaler' , final_pipe.named_steps['PCA']), ]) </code></pre> <p>I make this pipeline specifically to transform the test set using the fitted steps from the main pipeline. It seems that I cannot transform my testing data with the fitted pipeline. The error shown is:</p> <pre><code>self._check_n_features(X, reset=False) File &quot;C:\Users\............\lib\site-packages\sklearn\base.py&quot;, line 359, in _check_n_features raise ValueError( ValueError: X has 10 features, but ColumnTransformer is expecting 11 features as input. </code></pre> <p>What is happening here? 
Can somebody give me a hint on what can be going wrong?</p> <p>The complete code below:</p> <pre><code>import pandas as pd import numpy as np import warnings warnings.filterwarnings('ignore') # import dependencies import pandas as pd from typing import Any, List, Tuple from sklearn.ensemble import RandomForestClassifier from sklearn.pipeline import Pipeline from sklearn.compose import ColumnTransformer from sklearn.impute import SimpleImputer from sklearn.preprocessing import ( OrdinalEncoder, MinMaxScaler, PowerTransformer, FunctionTransformer, ) # Classifier from sklearn.metrics import classification_report from sklearn.decomposition import PCA from sklearn.feature_selection import SelectPercentile, chi2 from sklearn.model_selection import ( GridSearchCV, train_test_split, RandomizedSearchCV) def get_categorical_columns(df): categorical_cols = df.select_dtypes(include=['object', 'category']).columns.tolist() return categorical_cols def get_numerical_columns(df): numerical_cols = [] for col in df.columns: if pd.api.types.is_numeric_dtype(df[col]): numerical_cols.append(col) return numerical_cols if __name__ == '__main__': data = pd.read_csv(filepath_or_buffer='DATA\log2.csv') X = data.drop(['Action'], axis=1) y = data[&quot;Action&quot;] X_train, X_test,\ y_train, y_test = train_test_split(\ X, y, shuffle = False, stratify = None, test_size = 0.5, random_state = 0) categorical_features = get_categorical_columns(data) numeric_features = get_numerical_columns(data) ####### BLOCK FOR NUMERIC INPUTER OF MISSING VALUES ######## numeric_inputer = \ Pipeline( steps = [ (&quot;imputer&quot;, SimpleImputer(strategy = &quot;median&quot;)), #(&quot;scaler&quot; , StandardScaler()) ]) ########## BLOCK FOR CATEGORIAL INPUTER OF MISSING VALUES ## categorical_inputer = \ Pipeline( steps = [ ('imputer', SimpleImputer( strategy = 'constant', fill_value = 'missing')), ('label_encoder', OrdinalEncoder()), #(&quot;selector&quot;, SelectPercentile(chi2, percentile = 50)), ]) 
############# BLOCK FOR SCALING pkts_received ############## def log_transform(x): return np.log10(x+10) logtransformer = FunctionTransformer(log_transform ,validate = True) scaler = PowerTransformer(method='yeo-johnson', standardize = True) scaler_2 = MinMaxScaler() pipe_pkt_received = \ Pipeline( steps = [ ('log1_transform' , logtransformer), ('scaler' , scaler ), ('min_max_scaler' , scaler_2 ), ]) ##################### PREPROCESSOR ######################## ############################################################ ## Applying Column transformer pipelines ################ preprocessor = ColumnTransformer( transformers = [ (&quot;Droping_Bytes_Received&quot; , &quot;drop&quot; , [&quot;Bytes Received&quot;] ), (&quot;Droping_Bytes&quot; , &quot;drop&quot; , [&quot;Bytes&quot;] ), (&quot;Droping_Packets&quot; , &quot;drop&quot; , [&quot;Packets&quot;] ), (&quot;num&quot; , numeric_inputer , numeric_features ), (&quot;pkt_received_scaling&quot; , pipe_pkt_received , [&quot;pkts_received&quot;] ), #(&quot;cat&quot; , categorical_inputer, categorical_features), ], remainder = 'passthrough', ) ############################################################ ############################################################ ##################### FINAL PIPELINE ###################### ############################################################ step_1 = (&quot;preprocessor&quot;, preprocessor) step_2 = (&quot;PCA&quot; , PCA(n_components = 10)) step_3 = (&quot;RNDF_clf&quot; , RandomForestClassifier()) final_pipe = \ Pipeline( steps = [ step_1, step_2, step_3, ]) param_grid = {&quot;PCA__n_components&quot; : [5, 10],} grid_search = GridSearchCV( estimator = final_pipe , param_grid = param_grid , cv = 3 , n_jobs = -1 , verbose = 2 ,) grid_search.fit(X_train, y_train) # use best parameters to transform test data best_params = grid_search.best_params_ final_pipe.set_params(**best_params) final_pipe.fit(X_train, y_train) test_pipe_transform = Pipeline( steps = [ ('preprocessor', 
final_pipe.named_steps['preprocessor']), ('scaler' , final_pipe.named_steps['PCA']), ]) X_test_transformed = test_pipe_transform.transform(X_test) # evaluate model on test data using multiple metrics y_pred = final_pipe.predict(X_test_transformed) report = classification_report(y_test, y_pred) </code></pre>
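A minimal, self-contained sketch of the usual pattern, on synthetic data (the column names and shapes here are made up, not the firewall-log schema): the fitted <code>best_estimator_</code> is the whole pipeline, so calling <code>predict()</code> on the raw test set runs every fitted transform step internally, and no separate transform-only pipeline is needed.

```python
import numpy as np
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(100, 11)),
                 columns=[f"f{i}" for i in range(11)])
y = rng.integers(0, 2, size=100)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("PCA", PCA(n_components=5)),
    ("RNDF_clf", RandomForestClassifier(random_state=0)),
])
grid = GridSearchCV(pipe, {"PCA__n_components": [3, 5]}, cv=3)
grid.fit(X_train, y_train)

# best_estimator_ is the refitted pipeline: predict() applies every fitted
# transform step to the *raw* test frame, so no manual re-transform is needed.
y_pred = grid.best_estimator_.predict(X_test)
```

Passing an already-transformed frame back into the full pipeline makes it transform the data twice, which is one way to end up with a feature-count mismatch like the one in the traceback.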
<python><scikit-learn><scikit-learn-pipeline>
2023-05-06 01:42:42
1
529
GEBRU
76,186,738
1,290,170
How to use Barrier to limit number of simultaneous threads?
<p>I am using multithreading in python to send HTTP requests, but if I send too many simultaneously, my requests get blocked.</p> <p>So I would like to send the requests 5 by 5. This is what I tried to do, but it doesn't seem to work. Where in that code am I supposed to use the barrier, and what code is missing?</p> <pre><code>b = threading.Barrier(5) threads = [] def send_request(arg1,arg2): # send HTTP request here for task in tasks: thread = Thread(target=send_request,args=(arg1,arg2)) thread.start() threads.append(thread) for t in threads: t.join() </code></pre>
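If the real goal is just to cap how many requests run at once, a <code>ThreadPoolExecutor</code> with <code>max_workers=5</code> is a simpler tool than a <code>Barrier</code>. A sketch with a placeholder standing in for the real HTTP call:

```python
from concurrent.futures import ThreadPoolExecutor

def send_request(arg1, arg2):
    # placeholder for the real HTTP call
    return (arg1, arg2)

tasks = range(12)
# The pool keeps at most 5 send_request calls in flight at any moment;
# the remaining tasks wait in the pool's internal queue.
with ThreadPoolExecutor(max_workers=5) as pool:
    futures = [pool.submit(send_request, t, t) for t in tasks]
    results = [f.result() for f in futures]
```

A <code>Barrier(5)</code> instead makes 5 threads wait for *each other* at a synchronization point, which is a different behavior from limiting concurrency.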
<python><multithreading>
2023-05-06 01:32:39
1
1,567
alexx0186
76,186,506
2,478,485
Creating python virtualenv is failing
<p>Creating <code>python3.10</code> <code>virtualenv</code> is failing</p> <pre class="lang-py prettyprint-override"><code>$python -v -m virtualenv --python /usr/bin/python3.10 py310 # /home/test/.local/lib/python3.7/site-packages/virtualenv/activation/python/__pycache__/__init__.cpython-37.pyc matches /home/test/.local/lib/python3.7/site-packages/virtualenv/activation/python/__init__.py # code object from '/home/test/.local/lib/python3.7/site-packages/virtualenv/activation/python/__pycache__/__init__.cpython-37.pyc' import 'virtualenv.activation.python' # &lt;_frozen_importlib_external.SourceFileLoader object at 0x7fc0a0849810&gt; # /home/test/.local/lib/python3.7/site-packages/virtualenv/activation/xonsh/__pycache__/__init__.cpython-37.pyc matches /home/test/.local/lib/python3.7/site-packages/virtualenv/activation/xonsh/__init__.py # code object from '/home/test/.local/lib/python3.7/site-packages/virtualenv/activation/xonsh/__pycache__/__init__.cpython-37.pyc' import 'virtualenv.activation.xonsh' # &lt;_frozen_importlib_external.SourceFileLoader object at 0x7fc0a0849d90&gt; import 'virtualenv.activation' # &lt;_frozen_importlib_external.SourceFileLoader object at 0x7fc0a0aa3c90&gt; KeyError: 'scripts' </code></pre>
<python><virtualenv>
2023-05-05 23:45:39
2
3,355
Lava Sangeetham
76,186,486
1,290,170
How can I create delays in multithreading for HTTP requests?
<p>I am using multithreading in python to send HTTP requests, but if I send too many simultaneously, my requests get blocked.</p> <p>So I would like to send the requests 5 by 5. This is what I tried to do, but it doesn't seem to work.</p> <pre><code>threads = [] for task in tasks: thread = Thread(target=send_request,args=(arg1,arg2)) thread.start() threads.append(thread) total = len(threads) for x in range(0, total, 5): for t in threads[min(total,x):min(total,(x+5))]: t.join() </code></pre> <p>Basically, I want all 5 threads of <code>send_request()</code> to complete before I send another 5 requests. What should I do instead?</p>
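A sketch of the batch-by-batch version, assuming the intent is that at most five threads exist at a time: the threads must be created and started *inside* the batch loop, not all up front (placeholder work stands in for the real HTTP call):

```python
from threading import Thread, Lock

results, lock = [], Lock()

def send_request(arg1, arg2):
    # placeholder for the real HTTP call
    with lock:
        results.append((arg1, arg2))

tasks = list(range(12))
for i in range(0, len(tasks), 5):
    batch = []
    for task in tasks[i:i + 5]:
        t = Thread(target=send_request, args=(task, task))
        t.start()          # start only this batch (at most 5 threads alive)
        batch.append(t)
    for t in batch:
        t.join()           # wait for the whole batch before starting the next
```

In the original code every thread is started immediately, so all requests are already in flight before any <code>join()</code> runs; joining in groups of five afterwards does not limit concurrency.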
<python><multithreading>
2023-05-05 23:42:24
1
1,567
alexx0186
76,186,459
5,091,720
Pandas 12 month Rolling average
<p>I am trying to get a 12 month rolling average and having a hard time with it.</p> <p>This is the example data and the code.</p> <pre><code>data = { 'Att' : ['red','red','red','red','tan','tan','tan','tan','cyan','cyan','cyan','cyan','cyan','red','red','red','red','tan','tan','tan','tan','cyan','cyan','red'], 'loc' : ['A','A', 'B','B', 'A','A', 'B','B','A','A', 'B','B','A','A', 'B','B','A','A', 'B','B','A','A', 'B','B'], 'Date' : ['3/14/2022 10:05','3/14/2022 10:05','4/20/2022 12:18','5/4/2022 7:42','6/1/2022 12:45','3/1/2022 11:46','8/2/2022 10:30','8/2/2022 10:30','9/22/2022 13:08','10/20/2022 13:46','11/8/2022 9:00','12/12/2022 9:58','1/19/2022 9:43','5/25/2022 8:57','6/22/2022 9:18','7/20/2022 13:13','8/25/2022 8:35','9/22/2022 9:35','4/5/2022 10:42','10/20/2022 10:32','11/16/2022 8:43','12/21/2022 8:36','1/14/2022 9:30','2/18/2022 11:26'], 'Result' : [28,19,23,31,40,63,71,41,55,59,28,40,3.9,12,15,13,18,15,36,16,15,26,13,16], } df = pd.DataFrame(data) df['Date'] = pd.to_datetime(df['Date']) df.sort_values([ 'loc','Date',], inplace=True) df.to_clipboard(index=False) df # %% for ana in list(df['Att'].unique()): dfa = df[df['Att']==ana] dfa.sort_values(['Date', 'loc'], inplace=True) # I tried another option of pivot_table here but this removes some values that are in the same month. dfa = dfa.pivot(values='Result',index='Date',columns='loc' ).reset_index() dfa.set_index(keys='Date', inplace=True) dfa = dfa.rolling(12,min_periods=1).mean() dfa['Att'] = ana print(dfa) </code></pre> <p>Below is another option I tried this removes some values that are in the same month. 
If I change to <code>aggfunc='mean'</code>, this creates a calculation error by averaging averages.</p> <pre><code> dfa['Date'] = dfa['Date'].dt.to_period('M').dt.to_timestamp('M') dfa = dfa.pivot_table(values='Result',index='Date',columns='loc', aggfunc='first').reset_index() </code></pre> <p>I would like to do something like <code>'12M'</code> (see below) but this raises an error.</p> <pre><code>dfa = dfa.rolling('12M',min_periods=1).mean() </code></pre>
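A hedged sketch of time-based rolling on a small made-up series: <code>rolling('12M')</code> fails because calendar months are not a fixed frequency, but a fixed offset such as <code>'365D'</code> is accepted once the index is a sorted <code>DatetimeIndex</code>:

```python
import pandas as pd

s = pd.Series(
    [28, 19, 23, 31, 40, 63],
    index=pd.to_datetime(
        ["2022-01-15", "2022-03-14", "2022-04-20",
         "2022-05-04", "2022-06-01", "2022-08-02"]
    ),
)
# A time-based window requires a sorted DatetimeIndex and a fixed frequency:
# 'M' varies in length (28-31 days), so '12M' raises, while '365D' works.
rolled = s.sort_index().rolling("365D", min_periods=1).mean()
```

Each window here covers the 365 days ending at that row's timestamp, so rows in the same month keep their own values instead of being collapsed by a pivot.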
<python><pandas><dataframe>
2023-05-05 23:32:43
1
2,363
Shane S
76,186,389
1,663,528
IDNA Encode Adding Apostrophes and letter B?
<p>I am using the IDNA library to encode/decode unicode domain names, but when I encode a domain name, it adds apostrophes on either side of the string and prepends the letter b?</p> <p>For example:</p> <pre><code>import idna print(idna.encode('español.com')) </code></pre> <p>Output: <code>b'xn--espaol-zwa.com'</code></p> <p>Expected output: <code>xn--espaol-zwa.com</code></p> <p>I feel like I'm missing something really obvious but not sure how to get to the bottom of this.</p> <p>My expected output is reinforced by the fact that if I <em>decode</em> it:</p> <pre><code>print(idna.decode('xn--espaol-zwa.com')) </code></pre> <p>I get the original domain: <code>español.com</code></p>
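The apostrophes and the <code>b</code> are not part of the name: <code>idna.encode()</code> returns a <code>bytes</code> object, and <code>b'...'</code> is just how Python displays bytes. Decoding the bytes to <code>str</code> gives the expected text (sketched here with a literal so the snippet runs without the idna package installed):

```python
# The bytes that idna.encode('español.com') returns, written as a literal.
encoded = b"xn--espaol-zwa.com"

# .decode() converts bytes -> str; the b'' wrapper disappears.
# Punycode output is pure ASCII, so "ascii" is a safe codec here.
text = encoded.decode("ascii")
print(text)  # xn--espaol-zwa.com
```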
<python><punycode>
2023-05-05 23:11:18
1
8,579
Mr Fett
76,186,356
1,302,606
chromedriver suddenly slow (scraping with python, selenium)
<p>Have a python script running some scrapers using selenium and chromedriver.</p> <p>Have been scraping the same sites for a few years now with no issues. Starting last night, the same sites have started to load EXTREMELY slowly when loaded through chromedriver, though loading on my regular un-automated browser is totally fine. I've tried uninstalling and reinstalling chromedriver, upgrading, restarting, etc. to no avail. This has happened across two completely separate sites, both became slow starting last night. I am not blocked from the sites, but they load much slower than anything else.</p> <p>It almost feels like a memory allocation issue, as even javascript and scrolling performs much slower than it used to. But I changed no code, and the issue arose even without an update to chromedriver (used to be fast on 112, but then became slow last night while still on version 112).</p> <p>Using Selenium 4.2.0, and ChromeDriver 113.0.5672.63, though I was on version 112 yesterday and still seeing the error.</p> <p>Does anyone know if there was a widespread change or something I'm not aware of?</p>
<python><selenium-webdriver><selenium-chromedriver><undetected-chromedriver>
2023-05-05 23:00:53
4
425
snn
76,186,339
396,014
Can I create a pandas dataframe that contains 2d arrays instead of lists?
<p>I have accelerometer/gyro data (series of x,y,z values for acceleration, angle, and angular velocity) that I am trying to process to produce charts. I have the charts part worked out, but I currently process them one parameter at a time and would like to do all of them in one run. The data look like this (series truncated for space):</p> <pre><code>import numpy as np accel = [[-0.7437,0.1118,-0.6367], [-0.7471,0.1162,-0.6338], [-0.7437,0.1216,-0.6255]] angle = [[169.4366,49.4714,56.9421], [169.3762,49.5374,56.8433], [169.2828,49.6582,56.7059]] avelo = [[-0.5493,-0.9766,1.4038], [0,-1.4038,0.7935], [0.061,-1.0986,0.2441]] </code></pre> <p>I would like to add these to a dataframe so I can retrieve them by iterating over the names of the parameters. However, all the examples I've been able to find for dataframes are for creating and accessing named lists (like age,height,weight for a number of people). Even searching &quot;create pandas dataframe from multiple numpy arrays&quot; isn't yielding what I'm looking for. In Perl this would be easy to do as a hash of arrays. I had similar bad luck searching for a &quot;dictionary of arrays&quot;.</p> <p>Is what I want to do even possible in Python? If so could someone please point me to a resource or give an example of how to put those arrays above into a dataframe (or dictionary) and get the series for a parameter back out?</p>
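A sketch of the Perl hash-of-arrays analogue: a plain dict mapping parameter names to 2-D NumPy arrays (the names and truncated values below come from the question; nothing pandas-specific is required):

```python
import numpy as np

# Each array is (periods x 3): one row per sample, columns are x, y, z.
accel = np.array([[-0.7437, 0.1118, -0.6367],
                  [-0.7471, 0.1162, -0.6338]])
angle = np.array([[169.4366, 49.4714, 56.9421],
                  [169.3762, 49.5374, 56.8433]])

# Dict keyed by parameter name: the direct analogue of a Perl hash of arrays.
params = {"accel": accel, "angle": angle}

# Iterate over parameter names and pull out a per-axis series of each,
# e.g. the mean of the z column (arr[:, 2]) for every parameter.
z_means = {name: arr[:, 2].mean() for name, arr in params.items()}
```

Slicing <code>arr[:, 0]</code>, <code>arr[:, 1]</code>, <code>arr[:, 2]</code> recovers the x, y, and z series for charting, one parameter at a time inside a single loop.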
<python><pandas><dataframe><numpy-ndarray>
2023-05-05 22:55:27
1
1,001
Steve
76,186,321
18,533,248
Python: How to check if adding/subtracting two 8 bit integers causes a carry or a borrow?
<p>My question is as the title says: How to check if adding/subtracting two 8 bit integers causes a carry or a borrow?</p> <p>Thanks!</p>
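A hedged sketch for unsigned 8-bit values (0 to 255): a carry means the true sum exceeds 0xFF, a borrow means the subtrahend is larger than the minuend, and masking with <code>&amp; 0xFF</code> yields the wrapped 8-bit result:

```python
def add8(a, b):
    total = a + b
    carry = total > 0xFF            # carry out of the top bit
    return total & 0xFF, carry      # wrapped 8-bit result, carry flag

def sub8(a, b):
    borrow = a < b                  # borrow needed when b is larger than a
    return (a - b) & 0xFF, borrow   # wrapped 8-bit result, borrow flag
```

For example, 200 + 100 = 300, which exceeds 255, so the result wraps to 44 with the carry flag set; 5 - 10 borrows and wraps to 251.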
<python><binary><integer><addition>
2023-05-05 22:49:04
1
501
kamkow1
76,186,317
2,302,262
Bypass expensive initialisation when called internally
<p>Let's say I have a class with an init function that does many expensive checks on the input data, before storing it as an attribute:</p> <pre class="lang-py prettyprint-override"><code> def expensive_function_to_verify_data_is_ok(data) -&gt; bool: (...) class MyClass: def __init__(self, data): if not expensive_function_to_verify_data_is_ok(data): raise ValueError(&quot;Your data cannot be used to create a MyClass object!&quot;) self._data = data ... </code></pre> <p>Now, objects are sometimes instantiated by an internal process, in which case the checks are not necessary, because I can be sure the data is OK.</p> <p><strong>Is there a way to change the class definition, to allow for this? I.e., to bypass the checks in this case?</strong></p> <p>The signature of the <code>__init__</code> method should ideally remain unchanged.</p> <p>I thought about adding a class method for internal calls, but that will eventually still need to call <code>__init__()</code>, and therefore does not work.</p>
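One common pattern, sketched with a stand-in check rather than the real one: <code>cls.__new__(cls)</code> allocates an instance without running <code>__init__</code>, so an internal classmethod can set the attribute directly while <code>__init__</code>'s signature stays unchanged:

```python
def expensive_check(data):
    # Stand-in for expensive_function_to_verify_data_is_ok
    return isinstance(data, str)

class MyClass:
    def __init__(self, data):
        if not expensive_check(data):
            raise ValueError("Your data cannot be used to create a MyClass object!")
        self._data = data

    @classmethod
    def _from_trusted(cls, data):
        # cls.__new__(cls) allocates the instance without calling __init__,
        # so the expensive check never runs for trusted internal callers.
        obj = cls.__new__(cls)
        obj._data = data
        return obj
```

External callers keep using <code>MyClass(data)</code> with full validation; internal code calls <code>MyClass._from_trusted(data)</code> and skips it.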
<python><class>
2023-05-05 22:47:44
1
2,294
ElRudi
76,186,239
6,846,071
create a new column that compares rows in another column if it's different plus 1
<p>I have a column named Customer and am trying to create a new column Line ID, which starts at 1 and increases by one whenever the customer is different from the previous customer. I'm using a pandas dataframe.</p> <p>Example:</p> <pre><code> Customer | Line ID Pepsi | 1 Pepsi | 1 Lego | 2 Mcdonald's | 3 Mcdonald's | 3 </code></pre> <p>I can use a for loop for this, but is there an easier way? I would like to keep the code short but I don't know if it's possible.</p>
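A loop-free sketch of the usual idiom: compare each customer with the previous row via <code>shift()</code>, then <code>cumsum()</code> the change points to get a 1-based running id:

```python
import pandas as pd

df = pd.DataFrame(
    {"Customer": ["Pepsi", "Pepsi", "Lego", "Mcdonald's", "Mcdonald's"]}
)
# True wherever the customer differs from the previous row (the first row
# compares against NaN, so it is True as well); cumsum turns those change
# points into a 1-based running group id.
df["Line ID"] = df["Customer"].ne(df["Customer"].shift()).cumsum()
```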
<python><pandas>
2023-05-05 22:25:39
1
395
PiCubed
76,186,132
4,045,275
Group and sum a numpy array based on the position of the columns in another array
<h3>What my data looks like</h3> <p>I run certain numerical simulations in numba.</p> <p>The output is a set of numpy arrays; each array represents a metric, and each array has shape <code>(periods x items)</code>.</p> <p>E.g. <code>metric_1[p,i]</code> tells me the value of metric_1 at time <code>p</code> for item <code>i</code>.</p> <p>Each item belongs to a certain category - let's say red and green just for the sake of an example. The one-dimensional array <code>categories</code> tells me exactly this - e.g. <code>categories[0]='a'</code> means the first item belongs to category a. Conceptually, this is like having a pandas multi-index &quot;flattened&quot; into another array.</p> <h3>What I am trying to do</h3> <ol> <li><p>I want to group by category, and create the array <code>metric_1_grouped</code> of dimensions <code>(periods x categories)</code>, etc.</p> </li> <li><p>I want to create one dataframe per category, and one with the sum of all categories, where each row is a period and each column is a metric</p> </li> </ol> <p>The problem itself is fairly banal, but my question is <strong>what is a good way to do this as efficiently as possible, since I have to do this many times?</strong> A typical case would be:</p> <ul> <li>300 periods</li> <li>12 metrics</li> <li>500,000 items</li> <li>6 categories</li> </ul> <h3>Why I think this question is not a duplicate</h3> <p>I am aware of a few questions asking if there is a groupby equivalent in numpy, e.g. <a href="https://stackoverflow.com/questions/38013778/is-there-any-numpy-group-by-function">Is there any numpy group by function?</a> but those are different because they all group by an element of the array itself. 
That is <strong>not</strong> what I am trying to do - I need to group, yes, but not by any element of the array itself, but rather by matching the column numbers to another array.</p> <p>There are some questions which mention summing based on positions but, if I have understood them correctly, they do not resemble my case, e.g. <a href="https://stackoverflow.com/questions/59005406/map-numpy-array-and-sum-values-on-positions-in-another-array">Map numpy array and sum values on positions in another array</a> and <a href="https://stackoverflow.com/questions/42310614/sum-array-with-condition-in-another-array-with-numpy">sum array with condition in another array with numpy</a></p> <h3>Potential solutions</h3> <ul> <li>pandas dataframes with multi-index - but I'm afraid it might be much slower</li> <li>itertools groupby? I admit I am not very familiar</li> </ul> <h3>What I have tried</h3> <p>The code I have below works, but is inelegant and kind of clunky. I am hoping there is a better / more elegant / faster version:</p> <pre><code>import numpy as np import pandas as pd num_periods = 300 num_items = 1000 # Let's suppose for simplicity that the data has already been sorted by category categories = np.empty(num_items, dtype=object) categories[0:100]='a' categories[100:300]='b' categories[300:600]='c' categories[600:]='d' rng = np.random.default_rng(seed=42) #setting a seed for reproducibility metric_1 = rng.normal(0,1,(num_periods,num_items)) metric_2 = rng.uniform(0,1,(num_periods,num_items)) unique_categories = np.unique(categories) num_categories=len(unique_categories) where_to_split = np.unique(categories, return_index=True)[1][1:] # The second item of the tuple returned by np.unique is an array with the # indices of the categories (which, remember, we had already sorted - this is # a requirement), # so it will be: [0, 100, 300, 600] # so where_to_split is an array which is [100, 300, 600] metric_1_list = np.split(metric_1, where_to_split, axis=1) metric_1_by_category = np.zeros((num_periods, num_categories)) for i in range(len(metric_1_list)): metric_1_by_category[:,i] = metric_1_list[i].sum(axis=1) metric_2_list = np.split(metric_2, where_to_split, axis=1) metric_2_by_category = np.zeros((num_periods, num_categories)) for i in range(len(metric_2_list)): metric_2_by_category[:,i] = metric_2_list[i].sum(axis=1) # we now create a dictionary of dataframes # df_by_cat['a'] will be the dataframe for category a, etc df_by_cat = {} for my_count, my_val in enumerate(unique_categories): df_by_cat[my_val] = pd.DataFrame(index = np.arange(0,num_periods), columns=['metric 1','metric 2']) df_by_cat[my_val]['metric 1'] = metric_1_by_category[:,my_count] df_by_cat[my_val]['metric 2'] = metric_2_by_category[:,my_count] </code></pre>
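A hedged alternative sketch that avoids splitting and looping: build a one-hot items-by-categories matrix from integer category codes and group-sum all columns with a single matrix product (tiny made-up sizes here, and the codes need not be sorted):

```python
import numpy as np

num_periods, num_items = 4, 10
rng = np.random.default_rng(42)
metric_1 = rng.normal(size=(num_periods, num_items))

# Integer category code per item; string labels can be converted with
# np.unique(categories, return_inverse=True)[1].
codes = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2, 2])
num_categories = codes.max() + 1

# one_hot[i, c] == 1 iff item i belongs to category c; a single matrix
# product then sums every column into its category.
one_hot = np.zeros((num_items, num_categories))
one_hot[np.arange(num_items), codes] = 1.0
metric_1_by_category = metric_1 @ one_hot   # shape (periods, categories)
```

The same <code>one_hot</code> matrix can be reused for every metric, so the per-metric work reduces to one matrix multiplication each.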
<python><pandas><numpy><group-by>
2023-05-05 21:58:12
2
9,100
Pythonista anonymous
76,185,856
2,471,211
AWS Wrangler - Pandas read_sql to S3 in a limited memory environment
<p>I am looking for a way to extract data from a database and push that data into a parquet dataset in S3 in an environment with limited memory. If I proceed like this:</p> <pre><code>with someDB.connect() as connect: df = pd.read_sql(&quot;SELECT * FROM table&quot;, connect) wr.s3.to_parquet(df, dataset=True, path=&quot;s3://flo-bucket/&quot;) </code></pre> <p>The Pandas data frame (df) is fully loaded in memory and then pushed to S3 by wrangler. So if the data frame is too big, the operation fails. I would like to chunk the data frame and pass those chunks to a process (does not have to be wrangler) that would progressively send them to S3 in parquet format. Is this possible? I found examples using an IO buffer for a CSV file but I don't think it's possible with parquet.</p>
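A sketch of the chunked pattern, using an in-memory SQLite table as a stand-in for the real database: passing <code>chunksize</code> to <code>pd.read_sql</code> returns an iterator, so only one chunk is in memory at a time. Each chunk could then be written with <code>wr.s3.to_parquet(..., dataset=True)</code>, which appends new parquet files under the dataset path; that call is left as a comment here since it needs AWS access:

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")          # stand-in for the real database
conn.execute("CREATE TABLE t (a INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

chunk_sizes = []
# chunksize makes read_sql yield DataFrames of at most 4 rows each,
# so the full result set is never materialized at once.
for chunk in pd.read_sql("SELECT * FROM t", conn, chunksize=4):
    # wr.s3.to_parquet(chunk, dataset=True, path="s3://flo-bucket/")
    chunk_sizes.append(len(chunk))
```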
<python><pandas><amazon-s3><memory-management><parquet>
2023-05-05 20:51:00
1
485
Flo
76,185,767
3,072,174
How to print out the result of the last statement of an arbitrary Python script (not in interactive shell)?
<p>In an interactive Python shell, I can get the result of the last statement <a href="https://stackoverflow.com/questions/200020/get-last-result-in-interactive-python-shell">by using an underscore</a>:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; x = 1 &gt;&gt;&gt; y = 2 &gt;&gt;&gt; x+y 3 &gt;&gt;&gt; print(_) 3 </code></pre> <p>How to make this code work outside of an interactive shell, like from a Python script?</p> <p>For example, I have code.py :</p> <pre class="lang-py prettyprint-override"><code>x = 1 y = 2 x+y </code></pre> <p>I want to run</p> <pre><code>python show_the_latest.py code.py </code></pre> <p>and get <code>3</code> as the output:</p> <pre class="lang-bash prettyprint-override"><code>$ python show_the_latest.py code.py 3 </code></pre> <p>Limitations:</p> <ul> <li>The script is given by a user. I don't know the last statement.</li> <li>Parsing the script is possible. But it should be a proper parsing with building an AST, not a hack.</li> </ul>
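A sketch of the AST approach: parse the script, and if the last top-level node is an expression statement (<code>ast.Expr</code>), wrap its value in a <code>print()</code> call before compiling and executing (the source string stands in for reading <code>code.py</code> from disk):

```python
import ast

source = "x = 1\ny = 2\nx+y\n"   # stand-in for the contents of code.py
tree = ast.parse(source)

last = tree.body[-1]
if isinstance(last, ast.Expr):
    # Replace the bare trailing expression with print(<expression>).
    tree.body[-1] = ast.Expr(
        ast.Call(ast.Name("print", ast.Load()), [last.value], [])
    )
    ast.fix_missing_locations(tree)

ns = {}
exec(compile(tree, "<code.py>", "exec"), ns)   # prints 3
```

Because this is a real parse rather than string surgery, scripts whose last statement is an assignment, a loop, or a function definition are left untouched.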
<python>
2023-05-05 20:36:21
1
1,577
Dmitry Petrov