Dataset columns (name, dtype, range):
- QuestionId, int64, 74.8M to 79.8M
- UserId, int64, 56 to 29.4M
- QuestionTitle, string, length 15 to 150
- QuestionBody, string, length 40 to 40.3k
- Tags, string, length 8 to 101
- CreationDate, date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
- AnswerCount, int64, 0 to 44
- UserExpertiseLevel, int64, 301 to 888k
- UserDisplayName, string, length 3 to 30
75,313,590
1,112,097
Modifying pandas DataFrame columns based on conditionals
<p>I am trying to modify the values in columns of a pandas DataFrame based on conditionals. This answer: <a href="https://stackoverflow.com/a/50779719/1112097">https://stackoverflow.com/a/50779719/1112097</a> is close, but the conditionals used are too simple for my use case, which uses a dictionary of lists in the conditional</p> <p>Consider a Dataframe of individuals and their location:</p> <pre class="lang-py prettyprint-override"><code>owners = pd.DataFrame([['John', 'North'], ['Sara', 'South'], ['Seth', 'East'], ['June', 'West']], columns=['Who','Location']) owners </code></pre> <p>output:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>Who</th> <th>Location</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>John</td> <td>North</td> </tr> <tr> <td>1</td> <td>Sara</td> <td>South</td> </tr> <tr> <td>2</td> <td>Seth</td> <td>East</td> </tr> <tr> <td>3</td> <td>June</td> <td>West</td> </tr> </tbody> </table> </div> <p>The dictionary contains lists of locations where a type of pet can go:</p> <pre class="lang-py prettyprint-override"><code>pets = { 'Cats': ['North', 'South'], 'Dogs': ['East', 'North'], 'Birds': ['South', 'East']} pets </code></pre> <p>output: {'Cats': ['North', 'South'], 'Dogs': ['East', 'North'], 'Birds': ['South', 'East']}</p> <p>I need to add a column in the owners DateFrame for each pet type that says yes or no based on the presence of the location in the dictionary lists</p> <p>In this example, the final table should look like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>Who</th> <th>Location</th> <th>Cats</th> <th>Dogs</th> <th>Birds</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>John</td> <td>North</td> <td>Yes</td> <td>Yes</td> <td>No</td> </tr> <tr> <td>1</td> <td>Sara</td> <td>South</td> <td>Yes</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>2</td> <td>Seth</td> <td>East</td> <td>No</td> <td>Yes</td> <td>Yes</td> </tr> <tr> <td>3</td> <td>June</td> 
<td>West</td> <td>No</td> <td>No</td> <td>No</td> </tr> </tbody> </table> </div> <p>This fails</p> <pre class="lang-py prettyprint-override"><code>for pet in pets: owners[pet] = 'Yes' if owners['Location'] in pets[pet] else 'No' </code></pre> <p>With the following error: <code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</code></p> <p>I understand that the error comes from the fact that <code>owners['Location']</code> is a series not an individual value in a row, but I don't know the proper way to apply this kind of conditional across the rows of a DataFrame.</p>
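One possible fix (a sketch, not from the original post): `Series.isin` performs the membership test row by row, and `map` converts the resulting booleans into the Yes/No labels, so no ambiguous truth value of a whole Series ever arises.

```python
import pandas as pd

owners = pd.DataFrame([['John', 'North'], ['Sara', 'South'],
                       ['Seth', 'East'], ['June', 'West']],
                      columns=['Who', 'Location'])
pets = {'Cats': ['North', 'South'],
        'Dogs': ['East', 'North'],
        'Birds': ['South', 'East']}

# Series.isin evaluates membership per row, avoiding the ambiguous
# truth value that a plain `if series in list` raises.
for pet, locations in pets.items():
    owners[pet] = owners['Location'].isin(locations).map({True: 'Yes', False: 'No'})
```

Alternatively, `numpy.where(owners['Location'].isin(locations), 'Yes', 'No')` expresses the same idea.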
<python><pandas><dataframe>
2023-02-01 16:54:57
5
2,704
Andrew Staroscik
75,313,574
2,403,819
Automate the update of packages in pyproject.toml from virtualenv or pip-tools
<p>I am trying to update my Python CI environment and am working on package management right now. I have several reasons not to use Poetry; however, one nice feature of Poetry is that it automatically updates the <code>pyproject.toml</code> file. I know that pip-tools can create a <code>requirements.txt</code> file from the <code>pyproject.toml</code> file; however, is there any feature within <code>virtualenv</code> or <code>pip-tools</code> that will enable an automatic update of the <code>pyproject.toml</code> file when you install a package with pip into your virtual environment?</p>
<python><pip><python-packaging><pyproject.toml><pip-tools>
2023-02-01 16:54:05
1
1,829
Jon
75,313,520
935,376
How to bring PyTorch datasets into a pandas DataFrame
<p>I have seen a lot of code on how to convert pandas data to a PyTorch dataset. However, I haven't found or been able to figure out how to do the reverse, i.e. load a PyTorch dataset into a pandas DataFrame. I want to load AG News into pandas. Can you please help? Thanks.</p> <p><code>from torchtext.datasets import AG_NEWS</code></p>
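Since a torchtext dataset is just an iterable of `(label, text)` tuples, `pd.DataFrame` can consume it directly. A sketch with stand-in rows so it runs without downloading AG News; with torchtext you would pass `AG_NEWS(split='train')` instead of the hand-made list:

```python
import pandas as pd

# Stand-in for the torchtext iterable; the real call would be:
#   from torchtext.datasets import AG_NEWS
#   train_iter = AG_NEWS(split='train')
train_iter = [(3, "Wall St. Bears Claw Back Into the Black"),
              (4, "Oil and Economy Cloud Stocks' Outlook")]

# pd.DataFrame accepts any iterable of tuples; name the columns yourself.
df = pd.DataFrame(train_iter, columns=["label", "text"])
```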
<python><pandas><pytorch><torchtext>
2023-02-01 16:49:55
1
2,064
Zenvega
75,313,508
4,506,929
How can I type a wider range of LaTeX characters in the IPython REPL
<p>This question is specific about the IPython REPL, not the Jupyter notebook.</p> <p>Currently according to the <a href="https://ipython.readthedocs.io/en/stable/api/generated/IPython.core.completer.html#forward-latex-unicode-completion" rel="nofollow noreferrer">docs on forward latex completion</a>:</p> <blockquote> <p>Only valid Python identifiers will complete</p> </blockquote> <p>Which means that, by default, I can type β by typing <code>\beta</code> and hitting tab, but I can't type ∇ by the same method.</p> <p>Is there a way to type other unicode characters like ∇ in the IPython REPL? Maybe with an extension?</p>
<python><jupyter-notebook><ipython>
2023-02-01 16:49:07
1
3,547
TomCho
75,313,493
2,398,593
Cannot install awesomeversion==22.9.0 because these package versions have conflicting dependencies
<p>I've never done any Python so I'm not familiar with the package versions and dependencies system overall. I'm trying to run this repo <a href="https://github.com/Maaxion/homeassistant2influxdb" rel="nofollow noreferrer">https://github.com/Maaxion/homeassistant2influxdb</a></p> <p>For this, I want to use Docker. So once I've cloned the repo, I've added this Dockerfile at the root and followed what was explained in the readme to the best I could:</p> <pre><code>FROM ubuntu:18.04 RUN apt update -y RUN apt install python3 python3.7-dev python3-venv python3-pip git -y WORKDIR /home COPY . . RUN git clone --depth=1 https://github.com/home-assistant/core.git home-assistant-core RUN python3 -m venv .venv RUN . .venv/bin/activate RUN python3 -m pip install --upgrade --force pip RUN pip3 install -r home-assistant-core/requirements.txt RUN pip3 install -r requirements.txt </code></pre> <p>It goes fine until it tries to install with pip3 with that line: <code>pip3 install -r home-assistant-core/requirements.txt</code> and I get:</p> <pre><code>Collecting atomicwrites-homeassistant==1.4.1 Downloading atomicwrites_homeassistant-1.4.1-py2.py3-none-any.whl (7.1 kB) ERROR: Cannot install awesomeversion==22.9.0 because these package versions have conflicting dependencies. The conflict is caused by: The user requested awesomeversion==22.9.0 The user requested (constraint) awesomeversion==22.9.0 To fix this you could try to: loosen the range of package versions you've specified remove package versions to allow pip attempt to solve the dependency conflict ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies </code></pre> <p>I'm really not sure how to solve this despite taking a look at the link above...</p> <p>Is it something to do with pip3? Have I missed something in the Dockerfile? How can I solve that issue? 
I've been looking online but there doesn't seem to be a silver-bullet answer for this kind of issue.</p> <p>Could anyone provide some guidance? Thanks!</p>
<python><pip>
2023-02-01 16:47:50
2
23,968
maxime1992
75,313,457
4,831,435
OpenAI API: openai.api_key = os.getenv() not working
<p>I am just trying some simple functions in Python with OpenAI APIs but running into an error:</p> <p>I have a valid API secret key which I am using.</p> <p>Code:</p> <pre><code>&gt;&gt;&gt; import os &gt;&gt;&gt; import openai &gt;&gt;&gt; openai.api_key = os.getenv(&quot;I have placed the key here&quot;) &gt;&gt;&gt; response = openai.Completion.create(model=&quot;text-davinci-003&quot;, prompt=&quot;Say this is a test&quot;, temperature=0, max_tokens=7) </code></pre> <p><a href="https://i.sstatic.net/zCgm4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zCgm4.png" alt="Simple test" /></a></p>
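The screenshot is not reproduced here, but a frequent cause of this exact snippet failing (an assumption about the root cause, not confirmed by the post) is that `os.getenv` expects the *name* of an environment variable, not the secret key itself:

```python
import os

# os.getenv looks up an environment variable by NAME. Passing the secret
# string looks for a variable literally named "sk-...", which does not
# exist, so the API key silently ends up as None.
os.environ["OPENAI_API_KEY"] = "sk-placeholder"  # normally exported in the shell
right = os.getenv("OPENAI_API_KEY")   # the variable name goes in
wrong = os.getenv("sk-placeholder")   # the key itself finds nothing
```

If the key really is hard-coded, assign it directly (`openai.api_key = "sk-..."`) rather than routing it through `os.getenv`.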
<python><openai-api><chatgpt-api><gpt-3><gpt-4>
2023-02-01 16:44:32
3
9,549
Ranadip Dutta
75,313,424
4,751,273
Cannot run the code of this repository - NETL-Automatic-Topic-Labelling-
<p>I am trying to run this <a href="https://github.com/sb1992/NETL-Automatic-Topic-Labelling-" rel="nofollow noreferrer">code</a> - Automatic Labelling of Topics with Neural Embeddings</p> <p>The problem is that they did not mention what versions they used for the libraries and tools they used. Sadly, not even which Python version they have used.</p> <p>I have started by trying to run the pre-trained models, I have followed their instructions but I got the following error, please see the following screenshot:</p> <p><a href="https://i.sstatic.net/Zm6Lq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Zm6Lq.png" alt="enter image description here" /></a></p> <p>I am using Python 3.11.1, and I used pip to get gensim.</p> <p>I really need help with this one, please help me!</p>
<python><nlp><gensim><word-embedding><python-embedding>
2023-02-01 16:41:51
1
1,064
ziMtyth
75,313,204
1,710,392
Correct way to append to a string in Python
<p>I've read this <a href="https://stackoverflow.com/a/4435752/1710392">reply</a> which explains that CPython has an optimization to do an in-place append without copy when appending to a string using <code>a = a + b</code> or <code>a += b</code>. I've also read this PEP8 recommendation:</p> <blockquote> <p>Code should be written in a way that does not disadvantage other implementations of Python (PyPy, Jython, IronPython, Cython, Psyco, and such). For example, do not rely on CPython’s efficient implementation of in-place string concatenation for statements in the form a += b or a = a + b. This optimization is fragile even in CPython (it only works for some types) and isn’t present at all in implementations that don’t use refcounting. In performance sensitive parts of the library, the ''.join() form should be used instead. This will ensure that concatenation occurs in linear time across various implementations.</p> </blockquote> <p>So if I understand correctly, instead of doing <code>a += b + c</code> in order to trigger this CPython optimization which does the replacement in-place, the proper way is to call <code>a = ''.join([a, b, c])</code> ?</p> <p>But then why is this form with <code>join</code> significantly slower than the form in <code>+=</code> in this example (In loop1 I'm using <code>a = a + b + c</code> on purpose in order to not trigger the CPython optimization)?</p> <pre><code>import os import time if __name__ == &quot;__main__&quot;: start_time = time.time() print(&quot;begin: %s &quot; % (start_time)) s = &quot;&quot; for i in range(100000): s = s + str(i) + '3' time1 = time.time() print(&quot;end loop1: %s &quot; % (time1 - start_time)) s2 = &quot;&quot; for i in range(100000): s2 += str(i) + '3' time2 = time.time() print(&quot;end loop2: %s &quot; % (time2 - time1)) s3 = &quot;&quot; for i in range(100000): s3 = ''.join([s3, str(i), '3']) time3 = time.time() print(&quot;end loop3: %s &quot; % (time3 - time2)) </code></pre> <p>The results show 
<code>join</code> is significantly slower in this case:</p> <pre><code>~/testdir$ python --version Python 3.10.6 ~/testdir$ python concatenate.py begin: 1675268345.0761461 end loop1: 3.9019 end loop2: 0.0260 end loop3: 0.9289 </code></pre> <p>Is my version with <code>join</code> wrong?</p>
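For reference, the pattern PEP 8 recommends is a single `join` over accumulated parts, not a `join` per iteration. Calling `''.join([s3, ...])` inside the loop copies the whole accumulated string every time, which is quadratic overall; the list-accumulation form below is linear (a sketch):

```python
def build_linear(n):
    # Accumulate the pieces and join ONCE: total work is linear
    # in the length of the final string.
    parts = []
    for i in range(n):
        parts.append(str(i))
        parts.append('3')
    return ''.join(parts)

def build_quadratic(n):
    # join per iteration re-copies the whole accumulated string each
    # time; this is what loop3 in the question does.
    s = ''
    for i in range(n):
        s = ''.join([s, str(i), '3'])
    return s
```

So the `join` version in loop3 is not "wrong", it is just joining at the wrong granularity.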
<python>
2023-02-01 16:22:57
2
5,078
Étienne
75,312,974
5,754,828
List index out of range (Fractional Knapsack)
<p>I'm attempting to implement a greedy algorithm for the fractional knapsack problem for a coursera assignment. The course insists on not giving the inputs that cause the problem. Below is my code of the solution and the code I used for stress testing.</p> <pre><code>def optimal_value(capacity, weights, values): value = 0. while capacity &gt; 0 and len(values) &gt; 0: maxval = 0 # I need values per weight prices = [val / wei for val, wei in zip(values, weights)] for i in range(len(prices)): if prices[i] &gt; maxval: maxval = prices[i] best_item = i # put as much of the best item into the pack as we can and update the value added = min(capacity, weights[best_item]) value += (added / weights[best_item]) * values[best_item] capacity += -added #print(f&quot;prices are {prices}, values are {values}, best item is {best_item}&quot;) #print(f&quot;value is {value}, added is {added}, capacity is {capacity}&quot;) if added == weights[best_item]: # we used up the best option values = [val for i, val in enumerate(values) if i != best_item] weights = [weight for i, weight in enumerate( weights) if i != best_item] return value </code></pre> <pre><code>import numpy as np def create_knapsack_test_case(): capacity = np.random.randint(0, 2e6) n = np.random.randint(1, 1000) weights = np.random.randint(1, 2e6, n) values = np.random.randint(0, 2e6, n) return capacity, weights, values for i in range(20000): try: inputs = create_knapsack_test_case() opt = optimal_value(*inputs) except: print(f&quot;faulty inputs are {inputs}&quot;) </code></pre> <p>I'm missing an edge case and have no idea what it could be. Please help.</p>
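One plausible culprit (my guess, not confirmed by the post): `values` are drawn from `randint(0, 2e6)`, so every remaining item can have value 0. Then no price exceeds `maxval = 0`, `best_item` keeps its stale index from an earlier iteration, and indexing the now-shorter lists raises the reported "list index out of range". A sort-once sketch that sidesteps the problem:

```python
def optimal_value(capacity, weights, values):
    # Greedy fractional knapsack: take items in decreasing order of
    # value density. Sorting once up front avoids rescanning prices
    # and the stale-best_item trap when all remaining values are zero.
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)
    value = 0.0
    for val, wei in items:
        if capacity <= 0:
            break
        taken = min(capacity, wei)       # take as much as fits
        value += taken * val / wei
        capacity -= taken
    return value
```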
<python><greedy>
2023-02-01 16:04:47
0
397
Erol Can Akbaba
75,312,873
19,003,861
Django form with m2m relationship not saving
<p>I have a form where I want <code>request.user</code> to populate as little as possible and rely on the views to populate other fields automatically.</p> <p>As a result, some of these fields are not rendered on the form.</p> <p>The code in my view seems to work fine for the <code>FK relationship</code>, but some reason the <code>m2m</code> is failing.</p> <p>It's probably the first time I am trying to save a form with m2m and I am probably missing something.</p> <p>At the moment the error I get with the current code is <code>'VoucherForm' object has no attribute 'user'</code>.</p> <p>If I remove <code>voucherform.user.add(userprofile)</code>from the views the form will save, but will not add the user.</p> <p><strong>model</strong></p> <pre><code>class UserProfile(models.Model): user = models.OneToOneField(User, null=True, on_delete=models.CASCADE) class Voucher(models.Model): user = models.ManyToManyField(User, blank=True) venue = models.ForeignKey(Venue, blank=True, null=True, related_name=&quot;vouchervenues&quot;, on_delete=models.CASCADE) title = models.TextField('voucher title', blank=True) terms = models.TextField('terms &amp; conditions', blank=True) </code></pre> <p><strong>form</strong></p> <pre><code>class VoucherForm(ModelForm): class Meta: model = Voucher fields = ('title','terms') labels ={ 'title': '', 'terms': '', } widgets = { 'title': forms.TextInput(attrs={'class':'form-control', 'placeholder':'Enter title'}), 'terms': forms.TextInput(attrs={'class':'form-control', 'placeholder':'Enter terms'}), } </code></pre> <p><strong>views</strong></p> <pre><code>def add_voucher(request, userprofile_id): url = request.META.get('HTTP_REFERER') venue = UserProfile.objects.filter(user=request.user).values('venue') userprofile = UserProfile.objects.get(id=userprofile_id) submitted = False if request.method ==&quot;POST&quot;: voucherform = VoucherForm(request.POST) if voucherform.is_valid(): data = voucherform.save(commit=False) data.user_id = userprofile.id 
data.venue_id = venue data.save() voucherform.save_m2m() voucherform.user.add(userprofile) return HttpResponseRedirect(url) else: voucherform = VoucherForm if 'submitted' in request.GET: submitted=True return redirect('venue-loyalty-card',{'submitted':submitted,'userprofile':userprofile}) </code></pre>
<python><django><django-views><django-forms><django-queryset>
2023-02-01 15:58:29
1
415
PhilM
75,312,786
11,725,056
How to change EVERY child tag (of a specific kind) to a different one using BeautifulSoup
<p>In the Given <code>HTML</code> below:</p> <pre><code>given = &quot;&quot;&quot;&lt;html&gt; &lt;body&gt; Free Text: Above &lt;ul&gt; &lt;li&gt; data 1 &lt;/li&gt; &lt;li&gt; &lt;ul&gt; &lt;li&gt; &lt;ol start = &quot;321&quot;&gt; &lt;li&gt; sub-sub list 1 &lt;ol&gt; &lt;li&gt; sub sub sub list &lt;/li&gt; &lt;/ol&gt; &lt;/li&gt; &lt;li&gt; sub-sub list 2 &lt;/li&gt; &lt;/ol&gt; &lt;/li&gt; &lt;li&gt; sub list 2 &lt;/li&gt; &lt;li&gt; sub list 3 &lt;/li&gt; &lt;/ul&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt; list type paragraph &lt;/p&gt; data 3 &lt;/li&gt; &lt;/ul&gt; Free Text: Middle &lt;ul&gt; &lt;li&gt; Second UL list &lt;/li&gt; &lt;li&gt; Second List part 2 &lt;/li&gt; &lt;/ul&gt; Free Text : Below &lt;/body&gt; &lt;/html&gt;&quot;&quot;&quot; </code></pre> <p>Now I want to ask:</p> <p><strong>How can I change the Children <code>&lt;li&gt;</code> tags whose <code>ANY</code> of the parent is <li> to something else, say <code>&lt;SOME&gt;</code></strong> (please don't ask why would I want to and I won't be able to render it. 
I have reasons)</p> <p>In a nutshell, I want my above code to look like:</p> <pre><code>result = &quot;&quot;&quot;&lt;html&gt; &lt;body&gt; Free Text: Above &lt;ul&gt; &lt;li&gt; data 1 &lt;/li&gt; &lt;li&gt; &lt;ul&gt; &lt;SOME&gt; &lt;ol start = &quot;321&quot;&gt; &lt;SOME&gt; sub-sub list 1 &lt;ol&gt; &lt;SOME&gt; sub sub sub list &lt;/SOME&gt; &lt;/ol&gt; &lt;/SOME&gt; &lt;SOME&gt; sub-sub list 2 &lt;/SOME&gt; &lt;/ol&gt; &lt;/SOME&gt; &lt;SOME&gt; sub list 2 &lt;/SOME&gt; &lt;SOME&gt; sub list 3 &lt;/SOME&gt; &lt;/ul&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt; list type paragraph &lt;/p&gt;data 3 &lt;/li&gt; &lt;/ul&gt; Free Text: Middle &lt;ul&gt; &lt;li&gt; Second UL list &lt;/li&gt; &lt;li&gt; Second List part 2 &lt;/li&gt; &lt;/ul&gt; Free Text : Below &lt;/body&gt; &lt;/html&gt;&quot;&quot;&quot; </code></pre> <p>I tried (with and without <code>tag.decompose</code>:</p> <pre><code> soup = BeautifulSoup(given, 'html.parser') for tag in soup.find_all(['li']): if tag.find_parents(&quot;li&quot;): new_tag = soup.new_tag(&quot;SOME&quot;) new_tag.string = tag.text tag.replace_with(new_tag) result = str(soup) </code></pre> <p>but it doesn't seem to work on <code>depth &gt; 1</code> such as inner tags like <code>sub-sub list</code> etc</p>
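The `replace_with` approach flattens nested markup because `new_tag.string = tag.text` discards every child tag, which is why depth &gt; 1 breaks. Renaming the matched tag in place keeps the subtree intact (a sketch on a reduced input):

```python
from bs4 import BeautifulSoup

html = "<ul><li>top<ul><li>inner<ol><li>deep</li></ol></li></ul></li></ul>"
soup = BeautifulSoup(html, "html.parser")

# Renaming via tag.name preserves all descendants, unlike replacing the
# tag with a fresh one whose .string is the flattened text.
for tag in soup.find_all("li"):
    if tag.find_parents("li"):
        tag.name = "SOME"
```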
<python><html><python-3.x><beautifulsoup><replace>
2023-02-01 15:51:40
1
4,292
Deshwal
75,312,706
12,469,912
Find all combinations of positive integers in increasing order that add up to a given positive number n
<p>How to write a function that takes <code>n</code> (where n &gt; 0) and returns the list of all combinations of positive integers that sum to <code>n</code>? This is a common question on the web. And there are different answers provided such as <a href="https://www.geeksforgeeks.org/find-all-combinations-that-adds-upto-given-number-2/" rel="nofollow noreferrer">1</a>, <a href="https://www.educative.io/m/find-all-sum-combinations" rel="nofollow noreferrer">2</a> and <a href="https://stackoverflow.com/questions/62344469/find-all-combinations-of-n-positive-numbers-adding-up-to-k-in-python">3</a>. However, in the answers provided, they use two functions to solve the problem. I want to do it with only one single function. Therefore, I coded as follows:</p> <pre><code>def all_combinations_sum_to_n(n): from itertools import combinations_with_replacement combinations_list = [] if n &lt; 1: return combinations_list l = [i for i in range(1, n + 1)] for i in range(1, n + 1): combinations_list = combinations_list + (list(combinations_with_replacement(l, i))) result = [list(i) for i in combinations_list if sum(i) == n] result.sort() return result </code></pre> <p>If I pass 20 to my function which is <code>all_combinations_sum_to_n(20)</code>, the OS of my machine kills the process as it is very costly. I think the space complexity of my function is O(n*n!). How do I modify my code so that I don't have to create any other function and yet my single function has an improved time or space complexity? I don't think it is possible by using itertools.combinations_with_replacement.</p> <p><strong>UPDATE</strong></p> <p>All answers provided by Barmar, ShadowRanger and pts are great. As I was looking for an efficient answer in terms of both memory and runtime, I used <a href="https://perfpy.com" rel="nofollow noreferrer">https://perfpy.com</a> and selected python 3.8 to compare the answers. 
I used six different values of <code>n</code> and in all cases, ShadowRanger's solution had the highest score. Therefore, I chose ShadowRanger's answer as the best one. The scores were as follows:</p> <p><a href="https://i.sstatic.net/0s6eE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0s6eE.png" alt="enter image description here" /></a></p>
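For completeness, a single recursive function (my own sketch, separate from the linked answers) enumerates the partitions directly instead of filtering `combinations_with_replacement`, so it never materializes non-solutions:

```python
def all_combinations_sum_to_n(n, start=1):
    # Partitions of n with every part >= start, so each partition
    # comes out in nondecreasing order and none is generated twice.
    if n == 0:
        return [[]]
    result = []
    for first in range(start, n + 1):
        for rest in all_combinations_sum_to_n(n - first, first):
            result.append([first] + rest)
    return result
```

`all_combinations_sum_to_n(20)` returns the 627 partitions of 20 near-instantly, with no filtering step.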
<python><python-3.x>
2023-02-01 15:44:06
3
599
plpm
75,312,537
9,472,066
SQLAlchemy - is a BigInteger Identity column possible in the ORM?
<p>I want to create BigInteger Identity column in SQLAlchemy ORM. <a href="https://docs.sqlalchemy.org/en/20/dialects/postgresql.html" rel="noreferrer">Documentation</a> does not have any example of either ORM Identity or BigInteger Identity.</p> <ol> <li>Is this possible at all? I don't see any parameter for Identity type that would allow specifying inner integer type</li> <li>How to do this? Do I have to create custom type and pass it inside <code>Mapping[]</code> brackets?</li> </ol>
<python><postgresql><sqlalchemy>
2023-02-01 15:31:40
1
1,563
qalis
75,312,508
1,468,810
Create a DataFrame of combinations with an ID with pandas
<p>I'm running into a wall in terms of how to do this with Pandas. Given a dataframe (df1) with an ID column, and a separate dataframe (df2), how can I combine the two to make a third dataframe that preserves the ID column with all the possible combinations it could have?</p> <p>df1</p> <pre><code>ID name.x 1 a 2 b 3 c </code></pre> <p>df2</p> <pre><code>name.y l m </code></pre> <p>dataframe creation:</p> <pre><code>df1 = pd.DataFrame({'ID':[1,2,3],'name.x':['a','b','c']}) df2 = pd.DataFrame({'name.y':['l','m']}) </code></pre> <p>combined df</p> <pre><code>ID name.x name.y 1 a l 1 a m 2 b l 2 b m 3 c l 3 c m </code></pre>
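The combined table is a cross join, which pandas (1.2+) supports directly through `merge`:

```python
import pandas as pd

df1 = pd.DataFrame({'ID': [1, 2, 3], 'name.x': ['a', 'b', 'c']})
df2 = pd.DataFrame({'name.y': ['l', 'm']})

# how='cross' pairs every row of df1 with every row of df2,
# carrying ID and name.x through unchanged.
combined = df1.merge(df2, how='cross')
```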
<python><pandas><dataframe>
2023-02-01 15:29:13
1
721
Benjamin
75,312,431
4,183,498
Python - test that a function was called within a context
<p>Say I have a context manager and a function which has to be called within this context.</p> <pre><code>with my_context_manager(): my_function() </code></pre> <p>Is it possible to test that <code>my_function</code> was called in the context of <code>my_context_manager</code>?</p>
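One approach (a sketch, with a homemade context manager standing in for the real one) is to record the ordering of `__enter__`, the call, and `__exit__` in a shared list; the call landing between enter and exit is exactly the property under test:

```python
from contextlib import contextmanager

calls = []

@contextmanager
def my_context_manager():
    calls.append("enter")
    try:
        yield
    finally:
        calls.append("exit")

def my_function():
    calls.append("call")

with my_context_manager():
    my_function()
```

In a real test suite you would patch the context manager and the function with mocks attached to one parent mock and assert on `parent.mock_calls`, which captures the same ordering.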
<python><testing><contextmanager>
2023-02-01 15:23:36
0
10,009
Dušan Maďar
75,312,384
9,756,752
Using APScheduler with Textual in Python
<p>i need a scheduler in a Textual application to periodically query an external data source. As a test i've tried to use APscheduler to call a <code>tick()</code> function every second.</p> <p>However nothing happens although the scheduler should be started.</p> <p>What is going on and how to debug this?</p> <pre class="lang-py prettyprint-override"><code>from textual.app import App, ComposeResult from textual.containers import Horizontal, Vertical from textual.widgets import * from apscheduler.schedulers.background import BackgroundScheduler class HeaderApp(App): def __init__(self, *args, **kwargs): self.sched = BackgroundScheduler() self.sched.add_job(self.tick,'interval', seconds=1) self.sched.start() super(HeaderApp, self).__init__(*args, **kwargs) def compose(self) -&gt; ComposeResult: yield Header() yield TextLog() def tick(self): text_log = self.query_one(TextLog) text_log.write(&quot;tick&quot;) def on_mount(self): text_log = self.query_one(TextLog) text_log.write(self.sched.running) if __name__ == &quot;__main__&quot;: app = HeaderApp() app.run() </code></pre>
<python><rich>
2023-02-01 15:19:28
1
705
Marvin Noll
75,312,333
4,682,492
Truncated result when using Paramiko invoke_shell
<p>I am using Paramiko <code>invoke_shell</code> to pull results of the <code>top</code> command from a remote system. But I am getting truncated lines when looking at the results.</p> <p>Code as follows :</p> <pre><code>channel = token.invoke_shell() channel.send ('terminal length 0\n') time.sleep(1) resp = channel.recv(9999) output = resp.decode('ascii') channel.send('top -n 1\n') time.sleep(1) resp = channel.recv(9999) output = resp.decode('ascii') result = (''.join(output)) return (result) </code></pre> <p>The result is as follows (note <code>cn_node+</code> is not the complete name, it is longer):</p> <pre class="lang-none prettyprint-override"><code> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 28639 root 20 0 17.0g 38912 28336 S 111.1 0.1 1101:49 cn-node+ 29889 root 20 0 3379668 16532 13428 S 94.4 0.1 991:39.71 Flare </code></pre> <p>If directly ssh'ed into the system and running the command, the result is :</p> <pre class="lang-none prettyprint-override"><code> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 28639 root 20 0 17.0g 38912 28336 S 116.7 0.1 1186:17 cn-node-cnfp 29889 root 20 0 3379668 16796 13428 S 94.4 0.1 1067:53 Flare </code></pre> <p>Wondering how to get the whole line and not the truncated line (get <code>cn-node-cnfp</code> instead of <code>cn-node+</code>).</p> <p>Thanks!</p>
<python><ssh><paramiko>
2023-02-01 15:15:59
1
396
Dan G
75,312,320
4,958,693
How to create a model from a SELECT query/CTE in peewee?
<p>Let's say I have a table called flights and I want an aggregation on it like:</p> <pre><code>SELECT plane, count(flightId) as num from flights; </code></pre> <p>And let's say I don't want to create a permanent view. Can I use an SQL query or its results as the source for a Model?</p>
<python><peewee>
2023-02-01 15:14:41
1
373
Dennis Beier
75,312,141
11,092,636
Type-hinting problem with mypy when iterating through a list of arguments
<p>Here is an MRE:</p> <pre class="lang-py prettyprint-override"><code>def test(a: int | tuple[int, int]): print(a) if __name__ == &quot;__main__&quot;: for b in [1, (1, 2)]: test(b) </code></pre> <p>mypy outputs: <code>7: error: Argument 1 to &quot;test&quot; has incompatible type &quot;object&quot;; expected &quot;Union[int, Tuple[int, int]]&quot; [arg-type]</code></p> <p>But I'm passing to the function <code>test</code> an <code>integer</code> and a <code>tuple</code> of <code>two integers</code>, which is why I don't understand the error.</p> <p>I'm using:</p> <pre><code>platform win32 -- Python 3.11.1, pytest-7.2.0, pluggy-1.0.0 plugins: anyio-3.6.2, mypy-0.10.3 </code></pre>
<python><mypy>
2023-02-01 15:01:30
1
720
FluidMechanics Potential Flows
75,312,036
954,698
imageio get_data skips frames
<p>For a viewer, I am trying to randomly access frame in an mp4 file. Unfortunately, depending on where I start off, I am getting different frames from the same index, in my case for any frame after frame 123:</p> <pre><code>import imageio import hashlib from tqdm import tqdm reader = imageio.get_reader(&quot;360_0011.MP4&quot;) reader2 = imageio.get_reader(&quot;360_0011.MP4&quot;) # Build up a hash library hashes = dict() for i_fr, img in enumerate(tqdm(reader)): hashes[i_fr] = hashlib.md5(img).hexdigest() # Query frame 123 after frame 0 fr_idx = 123 reader2.get_data(0) fr_hash = hashlib.md5(reader2.get_data(fr_idx)).hexdigest() print(fr_hash == hashes[fr_idx]) print(fr_hash in hashes.values()) list(hashes.values()).index(fr_hash) &gt;&gt; True &gt;&gt; True &gt;&gt; 123 # Query frame 124 after frame 0 fr_idx = 124 reader2.get_data(0) fr_hash = hashlib.md5(reader2.get_data(fr_idx)).hexdigest() print(fr_hash == hashes[fr_idx]) print(fr_hash in hashes.values()) list(hashes.values()).index(fr_hash) &gt;&gt; False &gt;&gt; True &gt;&gt; 125 # Query frame 125 after frame 0 fr_idx = 125 reader2.get_data(0) fr_hash = hashlib.md5(reader2.get_data(fr_idx)).hexdigest() print(fr_hash == hashes[fr_idx]) print(fr_hash in hashes.values()) list(hashes.values()).index(fr_hash) &gt;&gt; False &gt;&gt; True &gt;&gt; 126 # Query frame 124 fr_idx = 124 fr_hash = hashlib.md5(reader2.get_data(fr_idx)).hexdigest() print(fr_hash == hashes[fr_idx]) print(fr_hash in hashes.values()) list(hashes.values()).index(fr_hash) &gt;&gt; False &gt;&gt; True &gt;&gt; 125 # Query frame 123 fr_idx = 123 fr_hash = hashlib.md5(reader2.get_data(fr_idx)).hexdigest() print(fr_hash == hashes[fr_idx]) print(fr_hash in hashes.values()) list(hashes.values()).index(fr_hash) &gt;&gt; True &gt;&gt; True &gt;&gt; 123 # Query frame 124 fr_idx = 124 fr_hash = hashlib.md5(reader2.get_data(fr_idx)).hexdigest() print(fr_hash == hashes[fr_idx]) print(fr_hash in hashes.values()) list(hashes.values()).index(fr_hash) 
&gt;&gt; True &gt;&gt; True &gt;&gt; 124 </code></pre> <p>It seems like as long as I am querying in series it works for all subsequent frames (I checked multiple ranges):</p> <pre><code>reader2.get_data(0) for i in range(1130,1140): fr_hash = hashlib.md5(reader2.get_data(i)).hexdigest() print(f&quot;{i} - {fr_hash == hashes[i]} - {list(hashes.values()).index(fr_hash)}&quot;) ​ &gt;&gt; 1130 - False - 1131 &gt;&gt; 1131 - True - 1131 &gt;&gt; 1132 - True - 1132 &gt;&gt; 1133 - True - 1133 &gt;&gt; 1134 - True - 1134 &gt;&gt; 1135 - True - 1135 &gt;&gt; 1136 - True - 1136 &gt;&gt; 1137 - True - 1137 &gt;&gt; 1138 - True - 1138 &gt;&gt; 1139 - True - 1139 </code></pre> <p>Is this somehow intended/expected? is this a bug? Is there a way to work around it except for always querying 2 frames?</p>
<python><video><mp4><python-imageio>
2023-02-01 14:53:43
0
356
mcandril
75,311,843
7,800,760
Python poetry: adding the --allow-prereleases flag to an already installed package
<p>By mistake I added <strong>black</strong> to my poetry environment along with all other dependencies without the <strong>--allow-prereleases</strong> flag.</p> <p>If I try to do it now I get:</p> <pre><code>(rssita-py3.10) (base) bob@Roberts-Mac-mini rssita % poetry add --group dev black --allow-prereleases** The following packages are already present in the pyproject.toml and will be skipped: • black If you want to update it to the latest compatible version, you can use `poetry update package`. If you prefer to upgrade it to the latest available version, you can use `poetry add package@latest`. Nothing to add. </code></pre> <p>How would I be able to add that option to only black either via command line poetry commands or editing the <strong>pyproject.toml</strong> file manually? Here's the current version:</p> <pre><code>[tool.poetry] name = &quot;rssita&quot; version = &quot;0.1.0&quot; description = &quot;&quot; authors = [&quot;Robert Alexander &lt;gogonegro@gmail.com&gt;&quot;] readme = &quot;README.md&quot; packages = [{include = &quot;rssita&quot;, from = &quot;src&quot;}] [tool.poetry.dependencies] python = &quot;^3.10&quot; stanza = &quot;^1.4.2&quot; feedparser = &quot;^6.0.10&quot; [tool.poetry.group.dev.dependencies] pytest-cov = &quot;^4.0.0&quot; pre-commit = &quot;^3.0.2&quot; flake8 = &quot;^6.0.0&quot; mypy = &quot;^0.991&quot; isort = &quot;^5.12.0&quot; black = &quot;^22.12.0&quot; requests = &quot;^2.28.2&quot; types-requests = &quot;^2.28.11.8&quot; pylint = &quot;^2.16.0&quot; [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; [tool.isort] multi_line_output = 3 include_trailing_comma = true force_grid_wrap = 0 use_parentheses = true line_length = 79 [tool.black] line-length = 79 target-version = ['py310'] include = '.pyi?$' exclude = ''' ( /( .eggs # exclude a few common directories in the | .git # root of the project | .hg | .mypy_cache | .tox | .venv | _build | buck-out | build | dist 
)/ | foo.py # also separately exclude a file named foo.py in # the root of the project ) ''' </code></pre>
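One way (a sketch of the syntax; adjust the version to your file) is to edit the `black` entry in `pyproject.toml` by hand into Poetry's inline-table form, which is where per-dependency flags such as `allow-prereleases` live:

```toml
[tool.poetry.group.dev.dependencies]
black = { version = "^22.12.0", allow-prereleases = true }
```

Afterwards, `poetry lock` and `poetry install` apply the change; no re-add of the package is needed.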
<python><python-poetry>
2023-02-01 14:39:52
1
1,231
Robert Alexander
75,311,760
11,267,783
Matplotlib performance on pan
<p>I'm working on a PyQt GUI to plot 2D/3D data. Matplotlib is attractive because of all its available features. However, it does not seem well suited to huge amounts of data (for instance 10000x10000). I read that the GTKAgg backend might be efficient for this, but it did not help in practice.</p> <p>Can you help me improve the performance of Matplotlib when panning or zooming in the figure?</p> <pre><code>import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np mpl.use('GTK3Agg') fig, ax = plt.subplots() n = 10000 Z = np.random.randint(10,size=(n,n)) ax.imshow(Z) plt.show() </code></pre>
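Every pan or zoom makes `imshow` resample the full array, so one common mitigation (a workaround, not a Matplotlib fix) is to decimate the data to roughly screen resolution before plotting; libraries such as pyqtgraph are often suggested when full interactivity over huge arrays is required:

```python
import numpy as np

n = 4000  # reduced from 10000 so the sketch stays light on memory
Z = np.random.randint(10, size=(n, n)).astype(np.uint8)  # uint8, not int64

# Decimate to ~1000x1000 before handing the data to ax.imshow(Z_small):
# each redraw then touches about 1e6 cells instead of n*n. A screen can
# only show ~1-2k pixels per axis anyway, so little detail is lost.
step = max(1, n // 1000)
Z_small = Z[::step, ::step]
```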
<python><matplotlib>
2023-02-01 14:33:16
0
322
Mo0nKizz
75,311,712
9,950,503
Extract json data from an array in PySpark
<p>I am trying to flatten and extract only one value (time) from the JSON file and its array (records), and store it in a new column (date). Then I want to get the max date from that column. However, it seems I only get the time values from the first <code>records</code> array, and not from all the others (there are more; the <code>newRecord</code> string points to the next records array).</p> <p>Could you give me any hints on how to adjust my solution so that I get all time values from every records array?</p> <p>Thank you in advance!</p> <p>My JSON file format:</p> <pre><code>root |-- newRecord: string (nullable = true) |-- records: array (nullable = true) | |-- element: struct (containsNull = true) | | |-- val1: long (nullable = true) | | |-- val2: long (nullable = true) | | |-- time: string (nullable = true) </code></pre> <p>My solution as of now:</p> <pre><code>import pyspark.sql.functions as F df = spark.read.json(&quot;....json&quot;) df.select( &quot;newRecord&quot;, F.transform('records', lambda x: F.to_timestamp(x['time'])).alias('date')).agg(F.max('date')).first() </code></pre> <p>Input:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;records&quot;: [ { &quot;val1&quot;: 37.99711, &quot;val2&quot;: 231.7571, &quot;time&quot;: &quot;2023-01-17T14:39:09.207Z&quot; }, { &quot;val1&quot;: 37.99711, &quot;val2&quot;: 231.7571, &quot;time&quot;: &quot;2023-01-16T14:37:09.207Z&quot; } ] } </code></pre> <p>Output:</p> <p>Latest (max) date over all records.</p>
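The intended logic can be sketched in plain Python, independent of the Spark API (a sketch only; the real job would keep this inside PySpark): collect the <code>time</code> strings of every <code>records</code> array across all rows, then take the maximum.

```python
from datetime import datetime

# stand-in for the parsed rows of the JSON file: several rows,
# each carrying its own "records" array
rows = [
    {"records": [{"time": "2023-01-17T14:39:09.207Z"},
                 {"time": "2023-01-16T14:37:09.207Z"}]},
    {"records": [{"time": "2023-01-18T10:00:00.000Z"}]},
]

# flatten every records array, not just the first one
all_times = [r["time"] for row in rows for r in row["records"]]
latest = max(datetime.strptime(t, "%Y-%m-%dT%H:%M:%S.%fZ") for t in all_times)
print(latest.isoformat())  # 2023-01-18T10:00:00
```

In Spark terms the flattening step corresponds to exploding the array column before aggregating, rather than aggregating over the array-typed column itself.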
<python><json><apache-spark><pyspark>
2023-02-01 14:28:59
0
341
saraherceg
75,311,496
10,689,857
Pip freeze separate test vs non test
<p>I have a python project for which I want to create:</p> <ul> <li>requirements.txt -&gt; with the modules needed for running the code but not the tests.</li> <li>requirements_dev.txt -&gt; with just the extra modules needed for the tests.</li> </ul> <p>I will reuse those in the install_requires and extras_require fields of the setup.py.</p> <p>If I do a <code>pip freeze &gt; requirements.txt</code>, everything goes into that one file. How can I separate the test dependencies from the non-test ones?</p>
<python>
2023-02-01 14:13:17
0
854
Javi Torre
75,311,480
13,361,176
argparse - two optional arguments must be provided both or none
<p>I am writing a script that accepts multiple arguments, some are optional but should be provided together, how can I achieve this behaviour?</p> <p>Ex: arg1 is a required argument</p> <p>opt-arg-1 and opt-arg-2 should be provided together or none at all</p> <pre><code>python my_script.py --arg1 val1 # Works fine python my_script.py --arg1 val1 --opt-arg-1 other_val # Error both opt-arg-1 and opt-arg-2 should be provided python my_script.py --arg1 val1 --opt-arg-2 other_val # Error both opt-arg-1 and opt-arg-2 should be provided python my_script.py --arg1 val1 --opt-arg-1 other_val --opt-arg-2 other_val_2 # Works fine </code></pre>
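A sketch of one way to get this behaviour (assuming a post-parse check is acceptable, since argparse's built-in groups do not express "both or none" directly): parse normally, then call <code>parser.error</code> when exactly one of the pair was supplied.

```python
import argparse

def parse(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("--arg1", required=True)
    parser.add_argument("--opt-arg-1")
    parser.add_argument("--opt-arg-2")
    args = parser.parse_args(argv)
    # "both or none": fail when exactly one of the pair is present
    if (args.opt_arg_1 is None) != (args.opt_arg_2 is None):
        parser.error("--opt-arg-1 and --opt-arg-2 must be provided together")
    return args

print(parse(["--arg1", "val1"]).arg1)  # val1
```

`parser.error` prints the usage message and exits with status 2, matching the behaviour the four example invocations describe.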
<python><arguments><argparse>
2023-02-01 14:12:11
0
560
arnino
75,311,368
544,542
Python: removing black pixels from an image where the text to be extracted is black
<p>I have the following code to extract text from an image</p> <pre class="lang-py prettyprint-override"><code>img = cv2.imread('download.jpg') text = pytesseract.image_to_string(img, lang='lets', config='--psm 6 ') solution = re.sub('[^0-9]','', text) </code></pre> <p>However using an image like below where it says <code>1981</code>, the actual text that gets pulled back is <code>5139011</code></p> <p><a href="https://i.sstatic.net/OfNmE.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OfNmE.jpg" alt="enter image description here" /></a></p> <p>Any suggestions?</p>
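One preprocessing idea (a sketch only; the threshold would need tuning against the real image, and whether it helps depends on what the dark noise looks like): keep only near-black pixels and paint everything else white before handing the image to Tesseract.

```python
import numpy as np

def keep_dark_text(gray, thresh=100):
    # keep only near-black pixels (assumed to be the text),
    # paint everything else white
    out = np.full_like(gray, 255)
    out[gray < thresh] = 0
    return out

# tiny grayscale stand-in for the real image
img = np.array([[10, 200], [90, 150]], dtype=np.uint8)
print(keep_dark_text(img).tolist())  # [[0, 255], [0, 255]]
```

The cleaned array could then be passed to `pytesseract.image_to_string` in place of the raw `img`.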
<python><python-3.x><tesseract><python-tesseract>
2023-02-01 14:04:25
1
3,797
pee2pee
75,311,365
5,680,504
How to get column index which is matching with specific value in Pandas?
<p>I have the following dataframe as below.</p> <pre><code> 0 1 2 3 4 5 6 7 True False False False False False False False [1 rows * 8 columns] </code></pre> <p>As you can see, there is one <code>True</code> value, in the first column.</p> <p>Therefore, I want to get the index <code>0</code>, since that is the column holding the <code>True</code> element. In another case, where <code>True</code> is in the column with index 4, I would like to get <code>4</code>, as in the dataframe below.</p> <pre><code> 0 1 2 3 4 5 6 7 False False False False True False False False [1 rows * 8 columns] </code></pre> <p>I tried to google it but failed to find what I want. Note that the columns have no designated names in this case.</p> <p>Look forward to your help.</p> <p>Thanks.</p>
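A sketch of one way to do this (assuming the row contains at least one <code>True</code>; with all <code>False</code>, <code>idxmax</code> would still return the first label):

```python
import pandas as pd

df = pd.DataFrame([[False, False, False, False, True, False, False, False]])
# idxmax returns the label of the first maximum; for booleans that is
# the first column containing True
col = df.iloc[0].idxmax()
print(col)  # 4
```

For a multi-row frame, `df.idxmax(axis=1)` would give one such column label per row.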
<python><pandas>
2023-02-01 14:04:12
6
1,329
sclee1
75,311,342
3,047,977
Why does this Python Try not catch an exception when the return value is assigned to a variable?
<p>I've got a try block like so:</p> <pre><code>try: result = api.call_to_api(arg1, ar2) except: ValueError </code></pre> <p>It gets called via a unit test where api.call_to_api() is patched as follows:</p> <pre><code>@patch('api.call_to_api') def myTest(self, mock): mock.side_effect = [DEFAULT, ValueError] # test logic </code></pre> <p>In the test it does not catch the error. Rather, <code>result</code> simply gets assigned a value.</p> <p>However, when I change the try block to the following, it does catch the error:</p> <pre><code>try: result = api.call_to_api(arg1, ar2) api.call_to_api(arg1, ar2) except: ValueError </code></pre> <p>When api.call_to_api() gets its own line, there is no issue with the test. It successfully catches the error I have mocked and sends it down the exception path. When I inspect the values, result still receives the '' value, and the try is triggered on the bare line without the assignment.</p> <p>Why does this Python try not catch an exception when the return value is assigned to a variable?</p>
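As a side note, <code>except: ValueError</code> is a bare <code>except</code> followed by a no-op expression, which may be adding to the confusion here. For contrast, a minimal sketch (not the original test setup) of the conventional form, where the exception class goes in the <code>except</code> clause itself:

```python
def call_to_api():
    # stand-in for the mocked API call
    raise ValueError("boom")

try:
    result = call_to_api()
except ValueError:  # the class belongs in the except clause, not the body
    result = "fallback"
print(result)  # fallback
```

Note the bare form still catches every exception type; the difference is purely that naming the class restricts what is caught and makes the intent explicit.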
<python><unit-testing><mocking><try-catch>
2023-02-01 14:02:53
1
316
Brad B.
75,311,321
3,801,865
AWS Lambda function can't connect to and query SQLite DB file from S3
<p>My goal is to upload a SQLite database file to AWS S3, use AWS Lambda &amp; python (<code>sqlite3</code>) to connect to the database, query it, and return some of its data.</p> <p>Having uploaded the database file to S3, I wrote a python script (which is to become the Lambda function) that <em>successfully</em> downloads the database from S3, connects to it with <code>sqlite3</code>, and returns some query results.</p> <p>The issue is that when I take the <em>exact same code</em> and put it in AWS Lambda, I get the following error:</p> <pre><code>{ &quot;errorMessage&quot;: &quot;malformed database schema (message_idx_undelivered_one_to_one_imessage) - near \&quot;where\&quot;: syntax error&quot;, &quot;errorType&quot;: &quot;DatabaseError&quot;, &quot;stackTrace&quot;: [ &quot; File \&quot;/var/task/lambda_function.py\&quot;, line 11, in lambda_handler\n print(cursor.execute(\&quot;select mycolumn from message limit 1\&quot;).fetchone())\n&quot; ] } </code></pre> <p>Here is the python script/Lambda function:</p> <pre><code>import sqlite3 import boto3 def lambda_handler(event, context): s3 = boto3.resource(&quot;s3&quot;) bucket = s3.Bucket(&quot;my-bucket&quot;) bucket.download_file(&quot;my_database.db&quot;, &quot;/tmp/my_database.db&quot;) conn = sqlite3.connect(&quot;/tmp/my_database.db&quot;) cursor = conn.cursor() print(cursor.execute(&quot;select mycolumn from message limit 1&quot;).fetchone()) </code></pre> <p>Running this file locally works correctly. I have confirmed that both my local environment and Lambda are using these versions of the following:</p> <pre><code>boto3: 1.20.32 python: 3.9.13 sqlite3: 2.6.0 </code></pre> <p><a href="https://pastebin.com/NCrWitsX" rel="nofollow noreferrer">Here</a> is the schema of the database I'm trying to read.</p> <p>Why is Lambda producing this error while my local environment isn't? How can I go about getting python to connect to the database within the Lambda function?</p>
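One thing worth checking (an assumption about the cause, since the Lambda runtime cannot be inspected here): the error complains about a schema object with a <code>WHERE</code> clause, and partial indexes require SQLite 3.8.0 or newer. The <code>sqlite3</code> module version (2.6.0) can match across environments while the bundled SQLite C library version differs, and it is the library version that determines which schema features can be parsed:

```python
import sqlite3

# the bundled SQLite C library version determines which schema features
# (e.g. partial indexes, introduced in SQLite 3.8.0) the runtime can read
print(sqlite3.sqlite_version)
print(sqlite3.sqlite_version_info >= (3, 8, 0))
```

Printing both values locally and inside the Lambda handler would confirm or rule out a library-version mismatch.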
<python><sqlite><amazon-s3><aws-lambda>
2023-02-01 14:01:47
1
1,022
Josh Clark
75,311,210
7,714,681
ImportError: cannot import name 'prod' from 'math' (unknown location)
<p>When I try to run</p> <p>either:</p> <pre><code>from math import prod </code></pre> <p>or:</p> <pre><code>import math arr = (1,2,3) math.prod(arr) </code></pre> <p>I get the following error message:</p> <pre><code>ImportError: cannot import name 'prod' from 'math' (unknown location) </code></pre> <p>or:</p> <pre><code>AttributeError: module 'math' has no attribute 'prod' </code></pre> <p>Meanwhile, there is an article about the usage of this function: <a href="https://www.geeksforgeeks.org/python-math-prod-method/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/python-math-prod-method/</a>.</p> <p>(How) can I use the <code>prod</code> function from <code>math</code>?</p>
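<code>math.prod</code> only exists in Python 3.8 and later, so this error usually means an older interpreter is running (or a local file named <code>math.py</code> is shadowing the standard module — the "unknown location" in the ImportError hints at that). A fallback sketch for older interpreters:

```python
from functools import reduce
from operator import mul

def prod(iterable, start=1):
    # equivalent of math.prod (added in Python 3.8) for older versions
    return reduce(mul, iterable, start)

print(prod((1, 2, 3)))  # 6
```

Checking `import sys; print(sys.version)` and making sure no file called `math.py` sits next to the script would narrow down which of the two causes applies.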
<python><math><importerror>
2023-02-01 13:53:04
0
1,752
Emil
75,311,138
4,458,718
cdk SageMaker.CfnModel outputs empty containers property when passed model package arn
<p>When I try to pass a model package to cdk CfnModel, the synthesized template contains an empty dictionary. The input validation requires an array of objects, yet whatever I pass does not end up in the template output. What is the issue here?</p> <p>Given the input below, I expect an output with the containers property set to {&quot;ModelPackageName&quot;: model_package_arn}, but it is an array holding an empty dictionary (see below).</p> <p>Input</p> <pre><code>class Model(Stack): def __init__(self, scope: Construct, construct_id: str, **kwargs) -&gt; None: super().__init__(scope, construct_id, **kwargs) model_package_arn = 'arn:aws:sagemaker:us-east-2:{ACCOUNT}:model-package/xgboost-abalone2023-01-30-15-28-52/1' model = sagemaker.CfnModel( self, &quot;MLInference&quot;, execution_role_arn=my_role, model_name=&quot;my-model&quot;, containers=[{&quot;ModelPackageName&quot;: model_package_arn}] #&lt;--- Passing an array of dictionaries as required </code></pre> <p>Output:</p> <pre><code>&quot;Resources&quot;: { &quot;MLInference&quot;: { &quot;Type&quot;: &quot;AWS::SageMaker::Model&quot;, &quot;Properties&quot;: { &quot;ExecutionRoleArn&quot;: &quot;arn:aws:iam::243788878595:role/service-role/A2ISageMaker-ExecutionRole-20221116T114907&quot;, &quot;Containers&quot;: [ {} #&lt;----------- this should be [{&quot;ModelPackageName&quot;: model_package_arn}] from the cdk stack ], &quot;ModelName&quot;: &quot;my-model7&quot; }, &quot;Metadata&quot;: { &quot;aws:cdk:path&quot;: &quot;AdverseEventsStack/MLInference&quot; } }} </code></pre>
<python><amazon-web-services><aws-cdk><amazon-sagemaker>
2023-02-01 13:47:51
1
1,931
L Xandor
75,311,131
11,177,720
Why is yield "blocking" all other functionality?
<p>I decided to finally learn <code>yield</code> to make generators and came across this weird thing:</p> <p>If I run this code, it prints out <code>&lt;generator ...&gt;</code>. It doesn't print <code>num</code> or stop execution at <code>3.1</code>...</p> <p>If I remove the for-yield lines, then it works correctly.</p> <pre class="lang-py prettyprint-override"><code>def gen(num = 1): print(num) if num == 1: return 3 if num == 2: return 3.1 for i in range(num): yield i print(gen(2)) </code></pre> <p>Why does this happen?</p> <p>I'm using Python 3.8.12</p>
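What happens here can be shown with a minimal illustration (a reduced sketch, not the original function): because the body contains <code>yield</code>, calling <code>gen(2)</code> only builds a generator object and runs nothing; the <code>return 3.1</code> executes on the first <code>next()</code> and becomes the value carried by <code>StopIteration</code>.

```python
def gen(num=1):
    if num == 2:
        return 3.1  # in a generator, this becomes StopIteration(3.1)
    yield num

g = gen(2)            # no print, no return: just a generator object
try:
    next(g)           # running the body immediately hits the return
except StopIteration as exc:
    print(exc.value)  # 3.1
```

So `print(gen(2))` showing `<generator ...>` is exactly the documented behaviour: the presence of `yield` anywhere in the body turns the whole function into a generator function.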
<python><python-3.x><yield>
2023-02-01 13:47:13
0
1,865
12944qwerty
75,311,099
5,553,962
"EOFError: Ran out of input" when packaging a Python script with PyInstaller
<p>I'm developing an application for Windows operating systems written in Python 3.8 and which makes use of the nnunet library (<a href="https://pypi.org/project/nnunet/" rel="nofollow noreferrer">https://pypi.org/project/nnunet/</a>) which uses multiprocessing. I have tested the script and it works correctly.</p> <p>Now I'm trying to package everything with pyinstaller v5.7.0. The creation of the .exe is successful but when I run it I get the following error:</p> <pre><code>Traceback (most recent call last): File &quot;main.py&quot;, line 344, in &lt;module&gt; File &quot;nnunet\inference\predict.py&quot;, line 694, in predict_from_folder File &quot;nnunet\inference\predict.py&quot;, line 496, in predict_cases_fastest File &quot;nnunet\inference\predict.py&quot;, line 123, in preprocess_multithreaded File &quot;multiprocess\process.py&quot;, line 121, in start File &quot;multiprocess\context.py&quot;, line 224, in _Popen File &quot;multiprocess\context.py&quot;, line 327, in _Popen File &quot;multiprocess\popen_spawn_win32.py&quot;, line 93, in __init__ File &quot;multiprocess\reduction.py&quot;, line 70, in dump File &quot;dill\_dill.py&quot;, line 394, in dump File &quot;pickle.py&quot;, line 487, in dump File &quot;dill\_dill.py&quot;, line 388, in save File &quot;pickle.py&quot;, line 603, in save File &quot;pickle.py&quot;, line 717, in save_reduce File &quot;dill\_dill.py&quot;, line 388, in save File &quot;pickle.py&quot;, line 560, in save File &quot;dill\_dill.py&quot;, line 1186, in save_module_dict File &quot;pickle.py&quot;, line 971, in save_dict Traceback (most recent call last): File &quot;main.py&quot;, line 341, in &lt;module&gt; File &quot;pickle.py&quot;, line 997, in _batch_setitems File &quot;D:\MyProject\venv\Lib\site-packages\PyInstaller\hooks\rthooks\pyi_rth_multiprocessing.py&quot;, line 49, in _freeze_support File &quot;dill\_dill.py&quot;, line 388, in save spawn.spawn_main(**kwds) File &quot;pickle.py&quot;, line 560, in save File 
&quot;pickle.py&quot;, line 901, in save_tuple File &quot;dill\_dill.py&quot;, line 388, in save File &quot;multiprocessing\spawn.py&quot;, line 116, in spawn_main File &quot;pickle.py&quot;, line 560, in save File &quot;multiprocessing\spawn.py&quot;, line 126, in _main File &quot;dill\_dill.py&quot;, line 1427, in save_instancemethod0 EOFError: Ran out of input [588] Failed to ex File &quot;pickle.py&quot;, line 692, in save_reduce ecute script 'main' d File &quot;dill\_dill.py&quot;, line 388, in save ue to unhandled File &quot;pickle.py&quot;, line 560, in save exception! File &quot;pickle.py&quot;, line 886, in save_tuple File &quot;dill\_dill.py&quot;, line 388, in save File &quot;pickle.py&quot;, line 603, in save File &quot;pickle.py&quot;, line 717, in save_reduce File &quot;dill\_dill.py&quot;, line 388, in save File &quot;pickle.py&quot;, line 560, in save File &quot;dill\_dill.py&quot;, line 1186, in save_module_dict File &quot;pickle.py&quot;, line 971, in save_dict File &quot;pickle.py&quot;, line 997, in _batch_setitems File &quot;dill\_dill.py&quot;, line 388, in save File &quot;pickle.py&quot;, line 603, in save File &quot;pickle.py&quot;, line 687, in save_reduce File &quot;dill\_dill.py&quot;, line 388, in save File &quot;pickle.py&quot;, line 560, in save File &quot;dill\_dill.py&quot;, line 1698, in save_type File &quot;dill\_dill.py&quot;, line 1070, in _save_with_postproc File &quot;pickle.py&quot;, line 692, in save_reduce File &quot;dill\_dill.py&quot;, line 388, in save File &quot;pickle.py&quot;, line 560, in save File &quot;pickle.py&quot;, line 901, in save_tuple File &quot;dill\_dill.py&quot;, line 388, in save File &quot;pickle.py&quot;, line 560, in save File &quot;pickle.py&quot;, line 886, in save_tuple File &quot;dill\_dill.py&quot;, line 388, in save File &quot;pickle.py&quot;, line 560, in save File &quot;dill\_dill.py&quot;, line 1698, in save_type File &quot;dill\_dill.py&quot;, line 1084, in _save_with_postproc File 
&quot;pickle.py&quot;, line 997, in _batch_setitems File &quot;dill\_dill.py&quot;, line 388, in save File &quot;pickle.py&quot;, line 603, in save File &quot;pickle.py&quot;, line 717, in save_reduce File &quot;dill\_dill.py&quot;, line 388, in save File &quot;pickle.py&quot;, line 560, in save File &quot;dill\_dill.py&quot;, line 1186, in save_module_dict File &quot;pickle.py&quot;, line 971, in save_dict File &quot;pickle.py&quot;, line 997, in _batch_setitems File &quot;dill\_dill.py&quot;, line 388, in save File &quot;pickle.py&quot;, line 603, in save File &quot;pickle.py&quot;, line 717, in save_reduce File &quot;dill\_dill.py&quot;, line 388, in save File &quot;pickle.py&quot;, line 560, in save File &quot;dill\_dill.py&quot;, line 1186, in save_module_dict File &quot;pickle.py&quot;, line 971, in save_dict File &quot;pickle.py&quot;, line 997, in _batch_setitems File &quot;dill\_dill.py&quot;, line 388, in save File &quot;pickle.py&quot;, line 578, in save File &quot;PyInstaller\loader\pyimod01_archive.py&quot;, line 76, in __getattr__ AssertionError [4392] Failed to execute script 'main' due to unhandled exception! </code></pre> <p>Below is the code of my python script:</p> <pre><code>#============================== # main.py #============================== from multiprocessing import freeze_support from nnunet.inference.predict import predict_from_folder if __name__ == &quot;__main__&quot;: freeze_support() ... predict_from_folder(...) ... 
</code></pre> <p>Below is the code of the nnunet library that triggers the error:</p> <pre><code>#============================== # nnunet\inference\predict.py #============================== def preprocess_multithreaded(trainer, list_of_lists, output_files, num_processes=2, segs_from_prev_stage=None): if segs_from_prev_stage is None: segs_from_prev_stage = [None] * len(list_of_lists) num_processes = min(len(list_of_lists), num_processes) classes = list(range(1, trainer.num_classes)) assert isinstance(trainer, nnUNetTrainer) q = Queue(1) processes = [] for i in range(num_processes): pr = Process( target=preprocess_save_to_queue, args=( trainer.preprocess_patient, q, list_of_lists[i::num_processes], output_files[i::num_processes], segs_from_prev_stage[i::num_processes], classes, trainer.plans['transpose_forward'] ) ) pr.start() ## &lt;------------ The error is generated here!!!!!!!!!!!!! processes.append(pr) try: end_ctr = 0 while end_ctr != num_processes: item = q.get() if item == &quot;end&quot;: end_ctr += 1 continue else: yield item finally: for p in processes: if p.is_alive(): p.terminate() p.join() q.close() def predict_cases_fastest(...): ... pool = Pool(num_threads_nifti_save) ... preprocessing = preprocess_multithreaded( trainer, list_of_lists, cleaned_output_files, num_threads_preprocessing, segs_from_prev_stage ) ... pool.starmap_async(...) ... pool.close() pool.join() def predict_from_folder(...): ... return predict_cases_fastest(...) if __name__ == &quot;__main__&quot;: ... 
</code></pre> <h1>Edit 03-02-2023</h1> <p>I have created a public project with which it is possible to reproduce the reported problem: <a href="https://gitlab.com/carlopoletto/nnunet_pyinstaller_problem" rel="nofollow noreferrer">https://gitlab.com/carlopoletto/nnunet_pyinstaller_problem</a></p> <p>In the <code>./scripts</code> folder there are some scripts to install everything and run the tests:</p> <ul> <li><code>./scripts/install</code>: dependency installation</li> <li><code>./scripts/dist</code>: creating the executable with pyinstaller</li> <li><code>./scripts/run_py</code>: running the python script (NB: this script automatically delete the <code>./temp</code> folder and recreate it by copying the contents of <code>./data</code>)</li> <li><code>./scripts/run_exe</code>: running the executable created with <code>./scripts/dist</code> (NB: this script automatically delete the <code>./temp</code> folder and recreate it by copying the contents of <code>./data</code>)</li> </ul> <p>The problem appears to be internal to the <code>nnunet</code> library. I don't know if this problem can be solved by properly configuring <code>pyinstaller</code>.</p>
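A small, self-contained illustration of one failure mode involved here (an assumption about the root cause, not a confirmed diagnosis of the nnunet/PyInstaller interaction): spawn-based multiprocessing pickles the <code>Process</code> target and its arguments, and anything that cannot be pickled by reference fails when the child starts — inside a frozen app, objects that normally resolve fine can stop resolving.

```python
import pickle

# plain data round-trips through pickle without trouble
print(pickle.loads(pickle.dumps({"num_processes": 2})))

# objects without an importable name (e.g. a lambda) cannot be pickled;
# in a frozen app, even normally importable objects can fail the lookup
try:
    pickle.dumps(lambda x: x)
    outcome = "pickled"
except Exception:
    outcome = "failed to pickle"
print(outcome)  # failed to pickle
```

If that is the mechanism at play, checking what `preprocess_save_to_queue` and its bound arguments drag into the pickle (here, `trainer.preprocess_patient` and the `dill`-based reduction visible in the traceback) would be the next step.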
<python><python-3.x><multiprocessing><pyinstaller><eoferror>
2023-02-01 13:44:51
0
729
El_Merendero
75,311,057
12,248,220
1D histogram from 4 column txt dataset in python?
<p>I have a text file with 4 columns, the first 3 are the x, y and z coordinates of one datapoint, and the 4th column is the value of the datapoint at that x, y, z set of coordinates.</p> <p>For example:</p> <pre><code>0 1 2 10000 0 1 3 20000 0 2 1 30000 1 0 0 40000 1 1 1 50000 </code></pre> <p>I want to make a plot having as the horizontal axis the x-coordinate-value and as the vertical axis the TOTAL value at that location (x-coordinate-value). This basically is a marginalized histogram across y and z of the .txt dataset above.</p> <p>For example, for the above dataset, I would have 2 points in my plot: <code>(0, 60000)</code> and <code>(1, 90000)</code> where the first number represents the x-coordinate value and the second number represents the value.</p> <p>I have tried to read about the <code>np.histogramdd</code> function, but when I feed in my .txt dataset, it outputs a 4 dimensional tensor. I then sum across its 2nd and 3rd axes (Matlab notation) to obtain a 2D tensor. This has shape <code>(10, 10)</code>.</p> <p>How could I obtain the <code>(0, 60000)</code> and <code>(1, 90000)</code> from above?</p> <p>Thank you!</p>
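The marginalisation can be done without a 4D histogram (a sketch assuming the x coordinates are non-negative integers, as in the example): <code>np.bincount</code> with weights sums the 4th column per x value in one call.

```python
import numpy as np

# rows of (x, y, z, value), e.g. the result of np.loadtxt("data.txt")
data = np.array([
    [0, 1, 2, 10000],
    [0, 1, 3, 20000],
    [0, 2, 1, 30000],
    [1, 0, 0, 40000],
    [1, 1, 1, 50000],
], dtype=float)

x = data[:, 0].astype(int)
# bincount sums the weights into one bin per integer x value,
# which marginalises over y and z implicitly
totals = np.bincount(x, weights=data[:, 3])
print(totals.tolist())  # [60000.0, 90000.0]
```

The pairs `(i, totals[i])` are exactly the `(0, 60000)` and `(1, 90000)` points, ready for `plt.plot` or `plt.bar`. For non-integer x, `np.histogram(x, weights=data[:, 3])` would play the same role with explicit bins.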
<python><arrays><numpy><data-science><histogram>
2023-02-01 13:41:15
1
576
velenos14
75,311,034
5,114,342
Bokeh Server not finding images
<p>I'm trying to achieve the seemingly simple thing of showing a picture in a Bokeh Server.<br /> [It works just fine for me when I do this in a Jupyter Notebook, by the way.]</p> <p>My file layout is</p> <pre><code>logo.png server_folder/ main.py logo.png static/ logo.png </code></pre> <p>With three logo files, just to cover all possible spots.</p> <p>The code I tried so far are</p> <pre><code>p = Div(text=&quot;&lt;img src=\&quot;logo.png\&quot;&gt;&quot;) curdoc().add_root(p) </code></pre> <p>and</p> <pre><code>p = figure(x_range=(0,400), y_range=(0,400), width=400, height=400) p.image_url(url=[&quot;logo.png&quot;], x=0, y=400, w=400, h=400) curdoc().add_root(p) </code></pre> <p>It does not find the logo:</p> <pre><code>(base) PS C:\some_path&gt; bokeh serve --show server_folder 2023-02-01 14:55:07,357 Starting Bokeh server version 2.4.3 (running on Tornado 6.1) 2023-02-01 14:55:07,357 User authentication hooks NOT provided (default user enabled) 2023-02-01 14:55:07,362 Bokeh app running at: http://localhost:5006/server_folder 2023-02-01 14:55:07,362 Starting Bokeh server with process id: 8048 2023-02-01 14:55:07,957 WebSocket connection opened 2023-02-01 14:55:07,957 ServerConnection created 2023-02-01 14:55:07,962 404 GET /favicon.ico (127.0.0.1) 0.00ms 2023-02-01 14:55:07,992 404 GET /logo.png (127.0.0.1) 0.00ms 2023-02-01 14:55:08,007 404 GET /logo.png (127.0.0.1) 0.00ms </code></pre> <p><a href="https://i.sstatic.net/RwYoV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RwYoV.png" alt="empty plot" /></a></p> <p>Everything except the image is rendered just fine. In my actual full code, I have all sorts of other elements, and they work as they should.<br /> Also note that my full code reads data from tabular files, and this also works just fine, that data is visualized in graphs as expected. 
It's only the images that won't work.</p> <hr /> <p>I also tried out <code>file://logo.png</code>, <code>file://c:/some_path/logo.png</code>, <code>http://localhost:5006/logo.png</code>, <code>c:/some_path/logo.png</code>.</p> <hr /> <p>Given the code in one answer in <a href="https://stackoverflow.com/questions/34646270/how-do-i-work-with-images-in-bokeh-python/39540721#39540721">How do I work with images in Bokeh (Python)</a> (which is similar to <a href="https://discourse.bokeh.org/t/how-to-load-image-in-bokeh-app/1317?page=2" rel="nofollow noreferrer">https://discourse.bokeh.org/t/how-to-load-image-in-bokeh-app/1317?page=2</a>) I tried</p> <pre><code>logo = &quot;logo.png&quot; logo_src = ColumnDataSource(dict(url = [logo])) p = figure(plot_width = 500, plot_height = 500, title=&quot;&quot;) p.image_url(url='url', x=0.05, y = 0.85, h=0.7, w=0.9, source=logo_src) curdoc().add_root(p) </code></pre> <p>which did not work either.</p> <p>Is this perhaps about settings I have to change rather than about the code?</p> <p>I work on Windows, if that matters. My Bokeh version is 2.4.3 and I'm using Python 3.9.13. I'm executing from within the Anaconda Powershell Prompt of anaconda 1.11.0.</p>
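One likely cause worth trying (an assumption based on how Bokeh serves app assets, not verified against this exact setup): a directory-style app serves the files in its <code>static/</code> subfolder under <code>/&lt;app_name&gt;/static/</code>, not at the site root, so the URL would be built like this:

```python
app_name = "server_folder"
# assumed URL layout: files in server_folder/static/ are served by the
# Bokeh server under /<app_name>/static/, not under /
url = f"/{app_name}/static/logo.png"
print(url)  # /server_folder/static/logo.png
```

With that layout, `Div(text=f'<img src="{url}">')` should resolve, and the 404 lines in the log (`GET /logo.png`) would be explained by the root-relative path used so far.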
<python><bokeh>
2023-02-01 13:39:59
1
3,912
Aziuth
75,311,023
3,575,623
Print rows of df as columns
<p>I have a data frame that looks like this:</p> <pre><code>category ID values A foo ABCDEF A baz GHIJKL B bar MNOPQR B biff STUVWX C bop YZABCD </code></pre> <p>All of the <code>values</code> are the same length.</p> <p>I'd like to print the <code>ID</code> and <code>values</code> columns into a csv file, but printing the <code>values</code> as columns, so the file would like this:</p> <pre><code>foo bar baz biff bop A G M S Y B H N T Z C I O U A D J P V B E K Q W C F L R X D </code></pre> <p>The only method I can see is to create a numpy array with the correct lengths, fill it iteratively with the <code>values</code> column, then turn that into a df and use pandas print to csv method, with <code>ID</code> as the header. I know how to do that, but it just seems terribly inefficient.</p> <p>Does anyone have a better / faster method?</p>
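A sketch without the intermediate numpy array (assuming all <code>values</code> strings really are the same length, as stated): build a dict of character lists keyed by <code>ID</code> and let the DataFrame constructor do the transposition.

```python
import pandas as pd

df = pd.DataFrame({
    "ID": ["foo", "baz", "bar", "biff", "bop"],
    "values": ["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZABCD"],
})

# one output column per ID, one row per character of its values string
out = pd.DataFrame({i: list(v) for i, v in zip(df["ID"], df["values"])})
print(out["foo"].tolist())  # ['A', 'B', 'C', 'D', 'E', 'F']
```

`out.to_csv("out.csv", index=False)` then writes the file with the IDs as the header row; reordering `out` columns first would control the column order in the output.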
<python><pandas>
2023-02-01 13:39:00
7
507
Whitehot
75,310,934
6,027,879
python iterate yaml and filter result
<p>I have this yaml file</p> <pre><code>data: - name: acme_aws1 source: aws path: acme/acme_aws1.zip - name: acme_gke1 source: gke path: acme/acme_gke1.zip - name: acme_oci source: oci path: acme/acme_oci1.zip - name: acme_aws2 source: aws path: acme/acme_aws2.zip - name: acme_gke2 source: gke path: acme/acme_gke2.zip - name: acme_oci2 source: oci path: acme/acme_oci2.zip </code></pre> <p>I want to filter the entries whose <code>source</code> is <code>gke</code> and, in a for loop, assign each <code>path</code> value to a variable. Can anyone share how to do this in Python, using PyYAML as the import module?</p>
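The filtering step can be sketched on the already-parsed structure (<code>yaml.safe_load</code> on the file above would produce exactly this kind of plain list/dict data, so no YAML-specific machinery is needed for the filter itself):

```python
# parsed = yaml.safe_load(open("file.yaml")) would yield this structure;
# shown inline here to keep the sketch self-contained
parsed = {"data": [
    {"name": "acme_aws1", "source": "aws", "path": "acme/acme_aws1.zip"},
    {"name": "acme_gke1", "source": "gke", "path": "acme/acme_gke1.zip"},
    {"name": "acme_gke2", "source": "gke", "path": "acme/acme_gke2.zip"},
]}

# keep only entries whose source is gke, then loop over their paths
gke_paths = [d["path"] for d in parsed["data"] if d["source"] == "gke"]
for path in gke_paths:
    print(path)
```

Each iteration binds one matching `path` to the loop variable, which is the "assign the value of path to a variable" part of the question.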
<python><yaml><pyyaml>
2023-02-01 13:31:16
2
406
hare krshn
75,310,911
403,425
Can Django's URL tag show the URL of the RedirectView
<p>Let's say that in my urls.py I have a url like this:</p> <pre><code>path(&quot;support/&quot;, RedirectView.as_view(url=&quot;http://www.example.com&quot;), name=&quot;support&quot;), </code></pre> <p>And in one of my templates I use the url tag:</p> <pre><code>{% url &quot;support&quot; %} </code></pre> <p>This of course outputs <code>/support/</code> as expected. But what if I want it to output <code>http://www.example.com</code> instead? Is that at all possible? Skip the redirect basically.</p> <p>So <code>&lt;a href=&quot;{% url &quot;support&quot; %}&quot;&gt;Link&lt;/a&gt;</code> would output <code>&lt;a href=&quot;http://www.example.com&quot;&gt;Link&lt;/a&gt;</code>.</p>
<python><django><http-redirect><django-views><django-urls>
2023-02-01 13:30:04
1
5,828
Kevin Renskers
75,310,835
11,829,398
Can you modify an object's field every time another field is modified?
<p>I have a dataclass that looks like this</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass, field @dataclass class Data: name: str | None = None file_friendly_name: str | None = field(default=None, init=False) def __post_init__(self): # If name, automatically create the file_friendly_name if self.name: self.file_friendly_name = &quot;&quot;.join( i for i in self.name if i not in &quot;/:*?&lt;&gt;|&quot; ) </code></pre> <p>If user passes <code>name</code> on instantiation, <code>file_friendly_name</code> is automatically created.</p> <p>Is there a way to do it so that every time <code>name</code> is updated/changed, <code>file_friendly_name</code> also changes?</p> <p>e.g.</p> <pre class="lang-py prettyprint-override"><code>data = Data() data.name = 'foo/bar' print(data.file_friendly_name) # want: 'foobar' data = Data(name='foo/bar') data.name = 'new?name' print(data.file_friendly_name) # want: 'newname' </code></pre> <p>Update based on answers:</p> <ol> <li>I've tried setting <code>_name: str</code> and creating <code>name</code> using getters/setters. But I don't like how when you do <code>print(Data())</code> it shows <code>_name</code> as an attribute. I'd like that not to happen.</li> <li>I like setting <code>file_friendly_name</code> as a property. But then you can't see that as an attribute when you do <code>print(Data())</code>. This is less of an issue but still not ideal.</li> </ol> <p>Can it just show <code>name</code> and <code>file_friendly_name</code> as attributes when doing <code>print(Data())</code>?</p>
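One direction is to make <code>file_friendly_name</code> a read-only property, so it is recomputed from <code>name</code> on every access (a sketch that drops <code>@dataclass</code> for a plain class; keeping dataclass semantics and the attribute-style <code>repr</code> shown in the update would need extra work with descriptors or a custom <code>__repr__</code>):

```python
class Data:
    def __init__(self, name=None):
        self.name = name

    @property
    def file_friendly_name(self):
        # derived on every read, so it always tracks the current name
        if self.name is None:
            return None
        return "".join(c for c in self.name if c not in "/:*?<>|")

data = Data(name="foo/bar")
print(data.file_friendly_name)  # foobar
data.name = "new?name"
print(data.file_friendly_name)  # newname
```

Since the property derives its value on demand, there is no stored field to keep in sync, which side-steps the update problem entirely.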
<python><python-dataclasses><python-class>
2023-02-01 13:23:40
3
1,438
codeananda
75,310,791
19,580,067
Extract the values, if the part of string in one column matches with part of string in another column
<p>I have 2 data frames with 2 columns each. Now, I need to match the values of column A of <strong>df1</strong> with column C of <strong>df2</strong> and get only the matched values.</p> <p>For example:</p> <p><strong>df1</strong></p> <pre><code> A B ABC PVT Ltd 1FWE23 Auxil Solutions 22354 Cambridge 32684 Stacking Ltd 45368 Ciscovt Ltd 46485 Samsung Ltd 45346 Nokia Ltd 58446 </code></pre> <p><strong>df2</strong></p> <pre><code> C D BTD AAVV Auxil Company ASDC Cambridge Univers DECVD The Stacking Pvt DVVCA Ciscovt brand VDKMN The Samsung Mobile VDAVV The Nokia Mobile VFAD </code></pre> <p>I tried converting column C of <strong>df2</strong> into a list and comparing it with column A of <strong>df1</strong>, but I'm not sure how to extract the values when they only match partially between the columns.</p> <p>The code I tried:</p> <pre><code>dd= (df2['C'].str.upper()).unique().tolist() df1['New'] = (df1['A'].str.upper()).apply(lambda x: ''.join([part for part in dd if part in x])) </code></pre> <p>The expected output should be:</p> <pre><code> A B New ABC PVT Ltd 1FWE23 Auxil Solutions 22354 Auxil Company Cambridge 32684 Cambridge Univers Stacking Ltd 45368 The Stacking Pvt Ciscovt Ltd 46485 Ciscovt brand Samsung Ltd 45346 The Samsung Mobile Nokia Ltd 58446 The Nokia Mobile </code></pre>
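A sketch under a strong assumption about what "partial match" means here (that the first word of <code>A</code>, e.g. <code>Auxil</code> or <code>Stacking</code>, occurs as a word inside the matching <code>C</code> entry — which is consistent with every row of the expected output): instead of joining whole strings as in the attempt above, look the first word up against the words of each candidate.

```python
import pandas as pd

df1 = pd.DataFrame({
    "A": ["ABC PVT Ltd", "Auxil Solutions", "Cambridge", "Stacking Ltd"],
    "B": ["1FWE23", "22354", "32684", "45368"],
})
df2 = pd.DataFrame({
    "C": ["Auxil Company", "Cambridge Univers", "The Stacking Pvt"],
})

def match(a, candidates):
    # assumed rule: the first word of A occurs (case-insensitively)
    # among the words of C
    key = a.split()[0].upper()
    for c in candidates:
        if key in c.upper().split():
            return c
    return ""

df1["New"] = df1["A"].apply(lambda a: match(a, df2["C"]))
print(df1["New"].tolist())
# ['', 'Auxil Company', 'Cambridge Univers', 'The Stacking Pvt']
```

If the real matching rule is different (e.g. any shared word, or a fuzzy score), only the body of `match` needs to change.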
<python><string><dataframe><lambda>
2023-02-01 13:20:16
1
359
Pravin
75,310,708
11,558,143
how to accelerate the numpy for-loop, for coloring point-cloud by its intensity
<p>I want to color a point cloud by its intensity.</p> <p>Currently I use the following for-loop to apply <a href="https://stackoverflow.com/questions/20792445/calculate-rgb-value-for-a-range-of-values-to-create-heat-map">a colormap function</a> to the intensity (4th dimension) of the points:</p> <pre><code>import numpy as np points = np.random.random([128*1200,4]) points_colors = np.zeros([points.shape[0], 3]) for idx, p_c in enumerate(points[:, 3]): points_colors[idx, :] = color_map(p_c) points_colors /= 255.0 </code></pre> <p>an example of a color mapping function:</p> <pre><code>def color_map( value, minimum=0, maximum=255): minimum, maximum = float(minimum), float(maximum) ratio = 2 * (value-minimum) / (maximum - minimum) b = int(max(0, 255*(1 - ratio))) r = int(max(0, 255*(ratio - 1))) g = 255 - b - r return r, g, b </code></pre> <p>Coloring the point cloud this way consumes much more time than directly using Open3D's original colormap (i.e. coloring by the points' x,y,z position).</p> <p>How could I accelerate the process of color-mapping point clouds by their intensity?</p> <p><em>Other solutions that do not convert the xyzi point cloud to an xyzrgb point cloud are also welcome.</em></p> <hr /> <p>Ps. the color_map I am actually using is a bit more complicated but has the same output:</p> <pre><code>def rainbow_color_map( val, minval=0, maxval=256, normalize=False, colors=[(1, 1, 255), (1, 255, 1), (255, 1, 1)] * 10, ): i_f = float(val - minval) / float(maxval - minval) * (len(colors) - 1) i, f = int(i_f // 1), i_f % 1 # Split into whole &amp; fractional parts. (r1, g1, b1), (r2, g2, b2) = colors[i], colors[i + 1] if normalize: return ( (r1 + f * (r2 - r1)) / maxval, (g1 + f * (g2 - g1)) / maxval, (b1 + f * (b2 - b1)) / maxval, ) else: return r1 + f * (r2 - r1), g1 + f * (g2 - g1), b1 + f * (b2 - b1) </code></pre>
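A vectorised sketch of the simple <code>color_map</code> above (the rainbow version would vectorise along similar lines, indexing the colour table with integer arrays): the per-point Python loop is replaced by whole-array NumPy operations, which is typically orders of magnitude faster for 128*1200 points.

```python
import numpy as np

def color_map_vec(values, minimum=0.0, maximum=255.0):
    # same arithmetic as the scalar color_map, applied to the whole
    # intensity column at once
    ratio = 2.0 * (values - minimum) / (maximum - minimum)
    b = np.clip(255 * (1 - ratio), 0, None).astype(int)
    r = np.clip(255 * (ratio - 1), 0, None).astype(int)
    g = 255 - b - r
    return np.stack([r, g, b], axis=1)

vals = np.array([0.0, 127.5, 255.0])
print(color_map_vec(vals).tolist())  # [[0, 0, 255], [0, 255, 0], [255, 0, 0]]
```

The loop then collapses to `points_colors = color_map_vec(points[:, 3]) / 255.0` — though note that `minimum`/`maximum` should match the actual intensity range of the data, which is an assumption to verify.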
<python><numpy><colors><open3d>
2023-02-01 13:13:43
1
1,462
zheyuanWang
75,310,668
10,693,596
PyScript: is it possible to execute an uploaded Python script?
<p>Is it possible to have a user upload an arbitrary Python script (that uses built-ins only, no external packages) and then execute it using PyScript?</p>
<python><pyscript>
2023-02-01 13:10:40
1
16,692
SultanOrazbayev
75,310,650
16,813,096
How to get font path from font name [python]?
<p>My aim is to get the font path from their common font name and then use them with <code>PIL.ImageFont</code>.</p> <p>I got the names of all installed fonts by using <code>tkinter.font.families()</code>, but I want to get the full path of each font so that I can use them with <code>PIL.ImageFont</code>. Is there any other way to use the common font name with <code>ImageFont.truetype()</code> method?</p>
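A stdlib-only sketch of the scanning approach: walk the usual font directories (the directory lists below are an assumption and vary per system) and index font files by their file stem. Note the stem only approximates the family name tkinter reports; if matplotlib is available, `matplotlib.font_manager.findfont(name)` does a more faithful name-to-path lookup.

```python
import os
import sys

def default_font_dirs():
    """Usual system font directories (an approximation, varies per setup)."""
    if sys.platform == "win32":
        return [os.path.join(os.environ.get("WINDIR", r"C:\Windows"), "Fonts")]
    if sys.platform == "darwin":
        return ["/System/Library/Fonts", "/Library/Fonts",
                os.path.expanduser("~/Library/Fonts")]
    return ["/usr/share/fonts", "/usr/local/share/fonts",
            os.path.expanduser("~/.fonts")]

def installed_font_paths(dirs=None):
    """Map lowercase font-file stems (e.g. 'arial') to their full paths."""
    fonts = {}
    for d in dirs if dirs is not None else default_font_dirs():
        for root, _sub, files in os.walk(d):
            for f in files:
                if f.lower().endswith((".ttf", ".otf", ".ttc")):
                    fonts.setdefault(os.path.splitext(f)[0].lower(),
                                     os.path.join(root, f))
    return fonts
```

A resolved path can then be handed to Pillow, e.g. `ImageFont.truetype(installed_font_paths().get("arial"), 24)` (the font name here is hypothetical and depends on what is installed).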
<python><python-3.x><tkinter><python-imaging-library>
2023-02-01 13:09:09
2
582
Akascape
75,310,636
4,331,885
Poetry Install crashes because excluded dependency cannot be found
<p>One of our repositories relies on another first-party one. Because we're in the middle of a migration from (a privately hosted) gitlab to azure, some of our dependencies aren't available in gitlab, which is where the problem comes up.</p> <p>Our pyproject.toml file has this <a href="https://python-poetry.org/docs/1.2/managing-dependencies/" rel="nofollow noreferrer">poetry group</a>:</p> <pre><code># pyproject.toml [tool.poetry.group.cli.dependencies] cli-parser = { path = &quot;../cli-parser&quot; } </code></pre> <p>In the Gitlab-CI, this cannot resolve. Therefore, we want to run the pipelines without this dependency. There is no code being run that actually relies on this library, nor files being imported. Therefore, we factored it out into a separate <a href="https://python-poetry.org/docs/1.2/managing-dependencies/" rel="nofollow noreferrer">poetry group</a>. In the gitlab-ci, that looks like this:</p> <pre class="lang-yaml prettyprint-override"><code># .gitlab-ci.yml install-poetry-requirements: stage: install script: - /opt/poetry/bin/poetry --version - /opt/poetry/bin/poetry install --without cli --sync </code></pre> <p>As visible, poetry is instructed to omit the cli dependency group. However, it still crashes on it:</p> <pre><code># Gitlab CI logs $ /opt/poetry/bin/poetry --version Poetry (version 1.2.2) $ /opt/poetry/bin/poetry install --without cli --sync Directory ../cli-parser does not exist </code></pre> <p>If I comment out the cli-parser line in pyproject.toml, it will install successfully (and the pipeline passes), but we cannot do that because we need it in production.</p> <p>I can't find another way to tell poetry to omit this library. Is there something I missed, or is there a workaround?</p>
<python><python-3.x><gitlab><gitlab-ci><python-poetry>
2023-02-01 13:08:00
1
1,438
Gloweye
75,310,578
1,990,524
Pandas pivot table: how to create new variables
<p>I am working with pandas in python.</p> <p>I have the following data frame:</p> <pre><code>ID | Time | X_mean | Y_mean | status 1 1 0.1 0.6 0 1 2 0.2 0.7 0 1 3 0.3 0.8 0 2 1 0.6 0.3 1 2 2 0.2 0.5 1 2 3 0.3 0.6 1 . . . . . . . . . . . . . . . </code></pre> <p>I would like to create the following dataframe:</p> <pre><code>ID | X_mean_1 | X_mean_2 | X_mean_3 | Y_mean_1 | Y_mean_2 | Y_mean_3 | status 1 . . . . . . 2 . . . . . . </code></pre> <p>I tried to use the pivot command in various different forms, but nothing works. In stata I would just use the following command:</p> <pre><code>reshape wide X_mean Y_mean, i(ID) j(Time) </code></pre> <p>Is there a way to do the same in pandas?</p>
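A sketch of the equivalent of Stata's `reshape wide` with `DataFrame.pivot` (the flattened column names follow the layout asked for above; `status` is assumed constant per ID, as in the example):

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 1, 1, 2, 2, 2],
    "Time": [1, 2, 3, 1, 2, 3],
    "X_mean": [0.1, 0.2, 0.3, 0.6, 0.2, 0.3],
    "Y_mean": [0.6, 0.7, 0.8, 0.3, 0.5, 0.6],
    "status": [0, 0, 0, 1, 1, 1],
})

wide = df.pivot(index="ID", columns="Time", values=["X_mean", "Y_mean"])
# Flatten the (variable, Time) MultiIndex into X_mean_1, ..., Y_mean_3.
wide.columns = [f"{var}_{t}" for var, t in wide.columns]
# status is constant per ID, so take the first value per group.
wide["status"] = df.groupby("ID")["status"].first()
wide = wide.reset_index()
print(wide)
```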
<python><pandas><dataframe>
2023-02-01 13:02:32
2
1,709
jjuser19jj
75,310,547
4,495,790
Conditional binary replacement in Pandas column with NaNs
<p>I have the following Pandas data frame in Python:</p> <pre><code>col1 ---- A B NaN A A NaN NaN B C </code></pre> <p>I would like to replace the values so that all <code>A</code> remain <code>A</code>, all other values (<code>B, C</code> in this example) are replaced with <code>D</code>, and <code>NaN</code> remain unchanged. What is the appropriate way to do it? So that the required output is:</p> <pre><code>col1 ---- A D NaN A A NaN NaN D D </code></pre> <p>I have tried these so far:</p> <p><code>df[&quot;col1&quot;] = np.where(df[&quot;col1&quot;] == &quot;A&quot;, &quot;A&quot;, &quot;D&quot;)</code>, but this changed <code>NaN</code>s to <code>D</code> as well.</p> <p><code>df[&quot;col1&quot;].replace([&quot;A&quot;, &quot;B&quot;, &quot;C&quot;], [&quot;A&quot;, &quot;D&quot;, &quot;D&quot;])</code> seems better, but in my real scenario there are far more non-<code>A</code> values that I want to change to <code>D</code>, so exhaustive enumeration is problematic.</p>
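One way to sketch this without enumerating the non-`A` values is `Series.where`, which keeps a value where the condition is True and substitutes elsewhere; making the condition True for NaN rows leaves them untouched:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"col1": ["A", "B", np.nan, "A", "A",
                            np.nan, np.nan, "B", "C"]})

# Keep A and NaN as-is; every other value becomes "D".
df["col1"] = df["col1"].where(df["col1"].isna() | (df["col1"] == "A"), "D")
print(df)
```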
<python><pandas>
2023-02-01 12:59:53
2
459
Fredrik
75,310,472
6,224,790
Can't satisfy Django configuration for AUTH_USER_MODEL setting
<p>I use Django Rest Framework and have an AbstractUser that I have inherited to create my user :</p> <p>file : Project/NovalixExpedition/models/NovalixUserModel</p> <pre class="lang-py prettyprint-override"><code>from NovalixBackend.models.AddressModel import Address from NovalixBackend.models.RoleModel import Role from django.contrib.auth.models import AbstractUser class NovalixUser(AbstractUser): id_novalix_user = models.AutoField(primary_key=True) first_name = models.CharField(max_length=100, blank=False, default='') last_name = models.CharField(max_length=100, blank=False, default='') email = models.CharField(max_length=100, blank=False, default='') phone_number = models.IntegerField(blank=False, default='') deleted = models.BooleanField(default=False) id_role = models.ManyToManyField(Role) current_address = models.ForeignKey(Address, on_delete=models.DO_NOTHING) class Meta: db_table = 'novalix_user' </code></pre> <p>It's located in a folder like so :</p> <pre><code>Project |- manage.py | |- NovalixBackend | |- models | |- __init__.py | |- NovalixUserModel | |- settings.py </code></pre> <p>And I'm attempting to set it as my base model for Django's integrated authentication by setting the variable <code>AUTH_USER_MODEL</code>.</p> <h2>Step 1 - Naïve approach</h2> <p>However when I simply try in settings.py :</p> <pre class="lang-py prettyprint-override"><code>AUTH_USER_MODEL = 'NovalixBackend.models.NovalixUserModel.NovalixUser' </code></pre> <p>Result : <code>AUTH_USER_MODEL must be of the form 'app_label.model_name</code></p> <h2>Step 2 - Follow the error indication</h2> <p>So I try to follow the recommendation :</p> <pre class="lang-py prettyprint-override"><code>AUTH_USER_MODEL = 'NovalixBackend.NovalixUser' </code></pre> <p>Result : <code>django.core.exceptions.ImproperlyConfigured: AUTH_USER_MODEL refers to model 'NovalixBackend.NovalixUser' that has not been installed</code></p> <p>Of course, since there's no module installed with this module name /
path.</p> <h2>Step 3 - Follow the next Django indication</h2> <p>I can try to install it then :</p> <pre class="lang-py prettyprint-override"><code>INSTALLED_APPS = [ 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'rest_framework', 'rest_framework.authtoken', 'corsheaders', 'NovalixBackend.models', 'django.contrib.admin', 'drf_yasg', 'NovalixBackend.NovalixUser' ] AUTH_USER_MODEL = 'NovalixBackend.NovalixUser' </code></pre> <p>Result : <code>ImportError: Module 'NovalixBackend' does not contain a 'NovalixUser' class.</code></p> <p>Again, logical, since no module exists in my code that would fulfill this module description.</p> <h2>Step 4.1 - I can try to fill in the correct path and try to get the module in AUTH_USER_MODEL</h2> <pre class="lang-py prettyprint-override"><code>INSTALLED_APPS = [ 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'rest_framework', 'rest_framework.authtoken', 'corsheaders', 'NovalixBackend.models', 'django.contrib.admin', 'drf_yasg', 'NovalixBackend.models.NovalixUserModel.NovalixUser' ] AUTH_USER_MODEL = 'NovalixBackend.NovalixUser' </code></pre> <p>Result : <code>django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.</code></p> <p>Yes, I read about that in some other issues; it's because I'm in settings, trying to configure stuff, and I call a specific model right away.</p> <p>I doubt this configuration would have worked anyway. 
So I can try something else :</p> <h2>Step 4.2 - I will make NovalixBackend.NovalixUser available as its own import in the module</h2> <p>So I add an <code>__init__.py</code> in the Project folder, here :</p> <pre><code>Project |- manage.py | |- NovalixExpedition |- __init__.py &lt;---- HERE |- models | |- __init__.py | |- NovalixUserModel | |- settings.py </code></pre> <p>and inside I write :</p> <pre class="lang-py prettyprint-override"><code>from .models.NovalixUserModel import NovalixUser </code></pre> <p>without changing the settings.py (same as 4.1)</p> <p>Result : <code>django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.</code></p> <p>Same as before.</p> <h2>Step 4.3 - I'll try to make the models simpler</h2> <p>So I add a <code>models.py</code> in the Project folder, here :</p> <pre><code>Project |- manage.py | |- NovalixExpedition |- models.py &lt;---- HERE |- models | |- __init__.py | |- NovalixUserModel | |- settings.py </code></pre> <p>and inside the new <code>models.py</code> file, I copy/paste the NovalixUser class, so <code>Project/NovalixExpedition/models/NovalixUserModel</code> and <code>Project/NovalixExpedition/models.py</code> have the exact same content. The file settings.py still has the line <code>AUTH_USER_MODEL = &quot;NovalixBackend.NovalixUser&quot;</code></p> <p>Result : <code>django.core.exceptions.ImproperlyConfigured: AUTH_USER_MODEL refers to model 'NovalixBackend.NovalixUser' that has not been installed</code></p> <h2>Problem</h2> <p>So I can't use a relative path for AUTH_USER_MODEL, but I can't import it as a module for my project either. I'm then utterly stuck. 
And nothing online points to any resolution of this redundant cycle.</p> <p>Note : The behaviour doesn't change by extending 'User' instead.</p> <h2>Annex :</h2> <p>Complete errors : <code>django.core.exceptions.ImproperlyConfigured: AUTH_USER_MODEL refers to model 'NovalixBackend.NovalixUser' that has not been installed</code></p> <p>is</p> <pre><code>Exception in thread django-main-thread: Traceback (most recent call last): File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\apps\config.py&quot;, line 268, in get_model return self.models[model_name.lower()] KeyError: 'novalixuser' During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\contrib\auth\__init__.py&quot;, line 160, in get_user_model return django_apps.get_model(settings.AUTH_USER_MODEL, require_ready=False) File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\apps\registry.py&quot;, line 211, in get_model return app_config.get_model(model_name, require_ready=require_ready) File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\apps\config.py&quot;, line 270, in get_model raise LookupError( LookupError: App 'NovalixBackend' doesn't have a 'NovalixUser' model. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;C:\Users\aweisser\Miniconda3\envs\NovalixBoilerplate\lib\threading.py&quot;, line 973, in _bootstrap_inner self.run() File &quot;C:\Users\aweisser\Miniconda3\envs\NovalixBoilerplate\lib\threading.py&quot;, line 910, in run self._target(*self._args, **self._kwargs) File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\utils\autoreload.py&quot;, line 64, in wrapper fn(*args, **kwargs) File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\core\management\commands\runserver.py&quot;, line 110, in inner_run autoreload.raise_last_exception() File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\utils\autoreload.py&quot;, line 87, in raise_last_exception raise _exception[1] File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\core\management\__init__.py&quot;, line 375, in execute autoreload.check_errors(django.setup)() File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\utils\autoreload.py&quot;, line 64, in wrapper fn(*args, **kwargs) File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\__init__.py&quot;, line 24, in setup apps.populate(settings.INSTALLED_APPS) File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\apps\registry.py&quot;, line 122, in populate app_config.ready() File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\contrib\auth\apps.py&quot;, line 23, in ready last_login_field = getattr(get_user_model(), 'last_login', None) File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\contrib\auth\__init__.py&quot;, line 164, in get_user_model raise ImproperlyConfigured( django.core.exceptions.ImproperlyConfigured: AUTH_USER_MODEL refers to model 'NovalixBackend.NovalixUser' that has not been 
installed </code></pre> <p><code>AUTH_USER_MODEL must be of the form 'app_label.model_name</code> is</p> <pre><code>Exception in thread django-main-thread: Traceback (most recent call last): File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\db\models\utils.py&quot;, line 15, in make_model_tuple app_label, model_name = model.split(&quot;.&quot;) ValueError: too many values to unpack (expected 2) During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;C:\Users\aweisser\Miniconda3\envs\NovalixBoilerplate\lib\threading.py&quot;, line 973, in _bootstrap_inner self.run() File &quot;C:\Users\aweisser\Miniconda3\envs\NovalixBoilerplate\lib\threading.py&quot;, line 910, in run self._target(*self._args, **self._kwargs) File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\utils\autoreload.py&quot;, line 64, in wrapper fn(*args, **kwargs) File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\core\management\commands\runserver.py&quot;, line 110, in inner_run autoreload.raise_last_exception() File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\utils\autoreload.py&quot;, line 87, in raise_last_exception raise _exception[1] File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\core\management\__init__.py&quot;, line 375, in execute autoreload.check_errors(django.setup)() File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\utils\autoreload.py&quot;, line 64, in wrapper fn(*args, **kwargs) File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\__init__.py&quot;, line 24, in setup apps.populate(settings.INSTALLED_APPS) File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\apps\registry.py&quot;, line 114, in populate app_config.import_models() File 
&quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\apps\config.py&quot;, line 301, in import_models self.models_module = import_module(models_module_name) File &quot;C:\Users\aweisser\Miniconda3\envs\NovalixBoilerplate\lib\importlib\__init__.py&quot;, line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1030, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1007, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 986, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 680, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 850, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 228, in _call_with_frames_removed File &quot;C:\Users\aweisser\Miniconda3\envs\NovalixBoilerplate\lib\site-packages\rest_framework\authtoken\models.py&quot;, line 9, in &lt;module&gt; class Token(models.Model): File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\db\models\base.py&quot;, line 161, in __new__ new_class.add_to_class(obj_name, obj) File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\db\models\base.py&quot;, line 326, in add_to_class value.contribute_to_class(cls, name) File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\db\models\fields\related.py&quot;, line 747, in contribute_to_class super().contribute_to_class(cls, name, private_only=private_only, **kwargs) File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\db\models\fields\related.py&quot;, line 318, in contribute_to_class lazy_related_operation(resolve_related_class, cls, self.remote_field.model, field=self) File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\db\models\fields\related.py&quot;, line 80, in 
lazy_related_operation return apps.lazy_model_operation(partial(function, **kwargs), *model_keys) File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\db\models\fields\related.py&quot;, line 78, in &lt;genexpr&gt; model_keys = (make_model_tuple(m) for m in models) File &quot;C:\Users\aweisser\AppData\Roaming\Python\Python39\site-packages\django\db\models\utils.py&quot;, line 22, in make_model_tuple raise ValueError( ValueError: Invalid model reference 'NovalixBackend.models.NovalixUser'. String model references must be of the form 'app_label.ModelName'. </code></pre>
<python><django-models><django-rest-framework><django-authentication><django-apps>
2023-02-01 12:53:19
2
373
Pazka
75,310,448
9,552,161
Why does np.split create a copy when passing into an existing array?
<p>I am new to Python and trying to understand the behaviour of views and copies, so apologies if this is an obvious question! In the example below, I use <code>np.split()</code> to split an array <code>x</code>. When I pass np.split into a new object (either <code>x1</code>, a list of 3 1D arrays or <code>x2, x3, x4</code>, three separate 1D arrays) the objects are views of <code>x</code>, as expected:</p> <pre><code>import numpy as np x = np.arange(1, 10) # create an array of length 9 x1 = np.split(x, (3,6)) # split array into 1 new object (list of arrays) print(x1[0].base) # each array in list is a view x2, x3, x4 = np.split(x, (3, 6)) # split array into 3 new objects print(x2.base) # these objects are views </code></pre> <p>However if I create an empty (3,3) array <code>x5</code> and pass np.split into each row of this array (I know this is a silly thing to do, I'm just trying to figure out how splitting works), a copy is created:</p> <pre><code>x5 = np.empty((3,3), dtype = np.int32) # create an uninitialised array x5[0], x5[1], x5[2] = np.split(x, (3, 6)) # split x into each row of x5 print(x5.base) # this object is a COPY </code></pre> <p>I thought perhaps that the slicing of x5 was causing a copy to be made, but if I slice <code>x2, x3, x4</code> they are still views:</p> <pre><code>x2[:], x3[:], x4[:] = np.split(x, (3, 6)) # split array into 3 existing objects using indexing print(x2.base) # these objects are views </code></pre> <p>I haven't managed to find an explanation for this in any explanations of views and copies or np.split - what am I missing?</p>
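A small experiment illustrating my reading of the semantics: tuple unpacking only rebinds names (the pieces stay views), whereas `x5[0] = ...` calls `x5.__setitem__`, which copies values into `x5`'s own buffer (and `x5.base` is `None` because `x5` owns its memory, not because it is a "copy" object):

```python
import numpy as np

x = np.arange(1, 10)

# Tuple unpacking only rebinds names, so each piece is still a view of x.
a, b, c = np.split(x, (3, 6))
assert a.base is x

# Assigning into rows of an existing array copies the values into x5's
# own buffer; no view of x is created.
x5 = np.empty((3, 3), dtype=x.dtype)
x5[0], x5[1], x5[2] = np.split(x, (3, 6))
assert x5.base is None          # x5 owns its memory
x5[0, 0] = 99
assert x[0] == 1                # x is unaffected

# a[:] = ... also copies element-wise, but a was already a view of x,
# so writing through it changes x.
a[:] = [7, 8, 9]
assert x[0] == 7
```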
<python><numpy>
2023-02-01 12:50:30
2
321
Lucy Wheeler
75,310,340
19,290,081
How to round decimals when writing a dataframe in Streamlit
<pre class="lang-py prettyprint-override"><code>import streamlit as st import pandas as pd data = {'Days': [&quot;Sunday&quot;, &quot;Wednesday&quot;, &quot;Friday&quot;], 'Predictions': [433.11, 97.9, 153.65]} df = pd.DataFrame(data) st.write(df) </code></pre> <p>Streamlit writes the dataframe with four decimals by default, but I expected two. With <code>print()</code> it produces the expected output, but when <code>st.write()</code> is used it produces the output below:</p> <pre><code> Days | Predictions -------------|------------- 0 Sunday | 433.1100 | 1 Wednesday| 97.9000 | 2 Friday | 153.6500 | </code></pre> <p>I tried:</p> <pre><code>df.round(2) </code></pre> <p>but it didn't help.</p> <p>Desired output format:</p> <pre><code> Days | Predictions -------------|------------- 0 Sunday | 433.11 | 1 Wednesday| 97.90 | 2 Friday | 153.65 | </code></pre>
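A sketch of why `df.round(2)` cannot help: it changes the stored values (and `97.9` is still displayed with Streamlit's default float formatting), not how the widget renders floats. Pinning the display by formatting to strings works with plain `st.write`; the Styler variant in the comment is an alternative I have not exercised here:

```python
import pandas as pd

data = {"Days": ["Sunday", "Wednesday", "Friday"],
        "Predictions": [433.11, 97.9, 153.65]}
df = pd.DataFrame(data)

# Render the column as fixed two-decimal strings; st.write shows them verbatim.
display_df = df.assign(Predictions=df["Predictions"].map("{:.2f}".format))
print(display_df)

# In the app: st.write(display_df)
# Alternatively, keep the floats and format at render time with a Styler
# (requires jinja2): st.dataframe(df.style.format({"Predictions": "{:.2f}"}))
```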
<python><pandas><dataframe><streamlit>
2023-02-01 12:41:13
1
5,771
Jamiu S.
75,310,294
7,800,760
GitHub Actions: poetry installs black but the CI workflow does not find it
<p>I am setting up a python code quality workflow locally (pre-commit) and on Github Actions (GHA). Environment is managed with poetry.</p> <p>While the local precommit works fine, the remote GHA workflow fails, saying it does not find black, while looking at the workflow logs it seems it was installed just fine. Workflow was largely copied from this great writeup: <a href="https://jacobian.org/til/github-actions-poetry/" rel="nofollow noreferrer">https://jacobian.org/til/github-actions-poetry/</a></p> <p>Where am I making a mistake? Here are the relevant files:</p> <p><strong>codequal.yml</strong></p> <pre><code>name: Python QA on: push: branches: [ &quot;main&quot; ] pull_request: branches: [ &quot;main&quot; ] permissions: contents: read jobs: pylint: runs-on: ubuntu-latest name: Python QA steps: - name: Check out uses: actions/checkout@v3 - name: Set up Python 3.10 uses: actions/setup-python@v4 with: python-version: &quot;3.10&quot; # Cache the installation of Poetry itself, e.g. the next step. This prevents the workflow # from installing Poetry every time, which can be slow. Note the use of the Poetry version # number in the cache key, and the &quot;-0&quot; suffix: this allows you to invalidate the cache # manually if/when you want to upgrade Poetry, or if something goes wrong. This could be # mildly cleaner by using an environment variable, but I don't really care. - name: cache poetry install uses: actions/cache@v3 with: path: ~/.local key: poetry-1.3.2 # Install Poetry. You could do this manually, or there are several actions that do this. # `snok/install-poetry` seems to be minimal yet complete, and really just calls out to # Poetry's default install script, which feels correct. I pin the Poetry version here # because Poetry does occasionally change APIs between versions and I don't want my # actions to break if it does. 
# # The key configuration value here is `virtualenvs-in-project: true`: this creates the # venv as a `.venv` in your testing directory, which allows the next step to easily # cache it. - uses: snok/install-poetry@v1.3.3 with: version: 1.3.2 virtualenvs-create: true virtualenvs-in-project: true installer-parallel: true # Cache your dependencies (i.e. all the stuff in your `pyproject.toml`). Note the cache # key: if you're using multiple Python versions, or multiple OSes, you'd need to include # them in the cache key. I'm not, so it can be simple and just depend on the poetry.lock. - name: cache deps id: cache-deps uses: actions/cache@v3 with: path: .venv key: pydeps-${{ hashFiles('**/poetry.lock') }} # Install dependencies. `--no-root` means &quot;install all dependencies but not the project # itself&quot;, which is what you want to avoid caching _your_ code. The `if` statement # ensures this only runs on a cache miss. - run: poetry install --no-interaction --no-root if: steps.cache-deps.outputs.cache-hit != 'true' # Now install _your_ project. This isn't necessary for many types of projects -- particularly # things like Django apps don't need this. But it's a good idea since it fully-exercises the # pyproject.toml and makes that if you add things like console-scripts at some point that # they'll be installed and working. 
- run: poetry install --no-interaction ################################################################ # Now finally run your code quality tools ################################################################ - name: Format with black run: | black 'src' - name: Lint with pylint run: | pylint --fail-under=7.0 --recursive=y --enable=W 'src' </code></pre> <p><strong>Relevant section of GitHub Action logging of codequal.yml</strong></p> <pre><code>Run poetry install --no-interaction --no-root Creating virtualenv rssita in /home/runner/work/rssita/rssita/.venv Installing dependencies from lock file Package operations: 49 installs, 1 update, 0 removals • Updating setuptools (65.6.3 -&gt; 67.0.0) • Installing attrs (22.2.0) • Installing certifi (2022.12.7) • Installing charset-normalizer (3.0.1) • Installing distlib (0.3.6) • Installing exceptiongroup (1.1.0) • Installing filelock (3.9.0) • Installing idna (3.4) • Installing iniconfig (2.0.0) • Installing nvidia-cublas-cu11 (11.10.3.66) • Installing nvidia-cuda-nvrtc-cu11 (11.7.99) • Installing nvidia-cuda-runtime-cu11 (11.7.99) • Installing nvidia-cudnn-cu11 (8.5.0.96) • Installing packaging (23.0) • Installing platformdirs (2.6.2) • Installing pluggy (1.0.0) • Installing tomli (2.0.1) • Installing typing-extensions (4.4.0) • Installing urllib3 (1.26.14) • Installing cfgv (3.3.1) • Installing click (8.1.3) • Installing coverage (7.1.0) • Installing identify (2.5.17) • Installing emoji (2.2.0) • Installing mccabe (0.7.0) • Installing mypy-extensions (0.4.3) • Installing nodeenv (1.7.0) • Installing numpy (1.24.1) • Installing pathspec (0.11.0) • Installing protobuf (4.21.12) • Installing pycodestyle (2.10.0) • Installing pyflakes (3.0.1) • Installing pytest (7.2.1) • Installing pyyaml (6.0) • Installing requests (2.28.2) • Installing sgmllib3k (1.0.0) • Installing six (1.16.0) • Installing torch (1.13.1) • Installing tqdm (4.64.1) • Installing types-urllib3 (1.26.25.4) • Installing virtualenv (20.17.1) • Installing 
black (22.12.0) • Installing feedparser (6.0.10) • Installing flake8 (6.0.0) • Installing isort (5.12.0) • Installing mypy (0.991) • Installing pre-commit (3.0.2) • Installing pytest-cov (4.0.0) • Installing stanza (1.4.2) • Installing types-requests (2.28.11.8) 1s Run poetry install --no-interaction Installing dependencies from lock file No dependencies to install or update Installing the current project: rssita (0.1.0) 0s Run black 'src' /home/runner/work/_temp/0fc25aa8-4903-45ae-9d8e-9c11f60dca11.sh: line 1: black: command not found Error: Process completed with exit code 127. </code></pre> <p><strong>Project tree on local:</strong> (base) bob@Roberts-Mac-mini rssita % tree</p> <pre><code>. ├── README.md ├── __pycache__ │ └── test_feeds.cpython-310-pytest-7.2.1.pyc ├── poetry.lock ├── pyproject.toml ├── setup.cfg ├── src │ └── rssita │ ├── __init__.py │ ├── __pycache__ │ │ ├── __init__.cpython-310-pytest-7.2.1.pyc │ │ ├── __init__.cpython-310.pyc │ │ ├── feeds.cpython-310-pytest-7.2.1.pyc │ │ ├── feeds.cpython-310.pyc │ │ ├── rssita.cpython-310-pytest-7.2.1.pyc │ │ └── termcolors.cpython-310-pytest-7.2.1.pyc │ ├── feeds.py │ ├── rssita.py │ └── termcolors.py └── tests ├── __init__.py ├── __pycache__ │ ├── __init__.cpython-310.pyc │ └── test_feeds.cpython-310-pytest-7.2.1.pyc └── test_feeds.py </code></pre>
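For reference: with `virtualenvs-in-project: true` everything is installed into the project-local `.venv`, which is not on the runner's `PATH`, so a bare `black 'src'` step cannot find the executable. A hedged sketch of the fix (untested against this exact workflow) is to invoke the tools through `poetry run`; `--check` is a suggested addition so black fails the job instead of silently reformatting:

```yaml
      - name: Format with black
        run: |
          poetry run black --check src
      - name: Lint with pylint
        run: |
          poetry run pylint --fail-under=7.0 --recursive=y --enable=W src
```

Activating the venv first (`source .venv/bin/activate`) should work equally well.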
<python><continuous-integration><github-actions><python-poetry>
2023-02-01 12:37:26
1
1,231
Robert Alexander
75,310,213
1,472,474
Making certain matches impossible in assignment problem (Hungarian algorithm)
<h3>Motivation:</h3> <p>I'm using scipy's python implementation of hungarian algorithm (<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linear_sum_assignment.html#scipy-optimize-linear-sum-assignment" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linear_sum_assignment.html#scipy-optimize-linear-sum-assignment</a>) for matching two sets of time events. So far so good, but I have a time difference limit (let's say 0.1s) for matching these events and I don't want any matches which would have time difference above this limit.</p> <h3>An example without any time difference limit:</h3> <pre><code>from dataclasses import dataclass from scipy.optimize import linear_sum_assignment @dataclass(frozen=True) class Event: t: float expected = [Event(0), Event(1), Event(2)] detected = [ Event(1), Event(2), Event(3)] # shifted to show # time matching def tdiff(e1: Event, e2: Event) -&gt; float: return abs(e1.t - e2.t) cost_matrix = [ [ tdiff(e1, e2) for e2 in detected ] for e1 in expected ] print('cost matrix:') for row in cost_matrix: print('[', ', '.join(map(str, row)), ']') row_idx, col_idx = linear_sum_assignment(cost_matrix) for i, j in zip(row_idx, col_idx): exp_ev = expected[i] det_ev = detected[j] td = cost_matrix[i][j] print('matches %s and %s, tdiff: %.3f' % (exp_ev, det_ev, td)) </code></pre> <p>This prints:</p> <pre><code>cost matrix: [ 1, 2, 3 ] [ 0, 1, 2 ] [ 1, 0, 1 ] matches Event(t=0) and Event(t=1), tdiff: 1.000 matches Event(t=1) and Event(t=2), tdiff: 1.000 matches Event(t=2) and Event(t=3), tdiff: 1.000 </code></pre> <p>This is clearly not what I want, because all matches are above the limit (0.1s).</p> <h3>What have I tried:</h3> <p>My solution is to artificially increase cost of &quot;unwanted&quot; matching in the cost matrix, and then I filter matches to return only &quot;wanted&quot; matches:</p> <pre><code>from dataclasses import dataclass from scipy.optimize import 
linear_sum_assignment @dataclass(frozen=True) class Event: t: float TDIFF_MAX = 0.1 expected = [Event(0), Event(1), Event(2)] detected = [ Event(1), Event(2), Event(3)] # shifted to show # time matching def tdiff(e1: Event, e2: Event) -&gt; float: return abs(e1.t - e2.t) orig_cost_matrix = [ [ tdiff(e1, e2) for e2 in detected ] for e1 in expected ] orig_max = max(map(max, orig_cost_matrix)) print('original cost matrix:') for row in orig_cost_matrix: print('[', ', '.join(map(str, row)), ']') cost_matrix = [ [ x if x &lt;= TDIFF_MAX else orig_max + 1 for x in row ] for row in orig_cost_matrix ] print('modified cost matrix:') for row in cost_matrix: print('[', ', '.join(map(str, row)), ']') row_idx, col_idx = linear_sum_assignment(cost_matrix) for i, j in zip(row_idx, col_idx): exp_ev = expected[i] det_ev = detected[j] td = cost_matrix[i][j] if td &lt;= TDIFF_MAX: print('matches %s and %s, tdiff: %.3f' % (exp_ev, det_ev, td)) </code></pre> <p>Output:</p> <pre><code>original cost matrix: [ 1, 2, 3 ] [ 0, 1, 2 ] [ 1, 0, 1 ] modified cost matrix: [ 4, 4, 4 ] [ 0, 4, 4 ] [ 4, 0, 4 ] matches Event(t=1) and Event(t=1), tdiff: 0.000 matches Event(t=2) and Event(t=2), tdiff: 0.000 </code></pre> <p>This works fine, but I don't like this solution for 2 reasons:</p> <ol> <li>it looks <em>ugly and is harder to understand</em></li> <li>in a &quot;real&quot; application the <code>expected</code> and <code>detected</code> sets can be quite large, and since the time difference limit is rather small, most of the matches are &quot;unwanted&quot;, which makes me think this is rather <em>inefficient and wastes a lot of CPU</em>.</li> </ol> <h3>Question:</h3> <p>Is there a better way to &quot;disable&quot; some possible matches (not necessarily using the Hungarian algorithm), which would work better in cases with large data sets where each <code>Event</code> has mostly only 1, 2 or 3 possible matches?</p> <p>I'm mostly interested in algorithms, so feel free to post solutions in other languages than 
python or even in pseudocode.</p>
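For what it's worth, recent SciPy versions (around 1.4 and later, if I remember the changelog correctly — worth verifying against the installed version) let you mark forbidden pairs directly with `np.inf` in the cost matrix, avoiding the sentinel-cost bookkeeping; for very sparse candidate sets, `scipy.sparse.csgraph.min_weight_full_bipartite_matching` on a sparse cost matrix may scale better:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Forbidden pairs are marked with np.inf instead of a large finite cost.
cost = np.array([
    [0.02, 0.30, np.inf],
    [np.inf, 0.05, 0.20],
    [0.40, np.inf, 0.01],
])
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)))

# Caveat: if some event has no feasible partner at all, SciPy raises
# ValueError instead of returning a partial matching, so a guard is
# still needed when events may be unmatchable (as with TDIFF_MAX = 0.1
# in the example above).
try:
    linear_sum_assignment(np.array([[np.inf, np.inf], [0.1, 0.2]]))
except ValueError:
    print("no complete feasible assignment")
```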
<python><scipy><hungarian-algorithm><assignment-problem>
2023-02-01 12:29:33
0
5,587
Jan Spurny
75,310,173
5,784,323
AttributeError: 'OptionEngine' object has no attribute 'execute'
<p>SQLAlchemy v2.0.0 works in a different way - they have changed some of the API.</p> <p>Following investigation I found a solution. My code was simply:</p> <pre><code>s_settings_df = pd.read_sql_query(query, engine_cloud) </code></pre> <p>The error was as in the title: &quot;AttributeError: 'OptionEngine' object has no attribute 'execute'&quot;.</p> <p>I will answer my own post below.</p> <p>I tried using various versions but did not like the idea of getting locked into historic components.</p>
<python><pandas><sqlalchemy>
2023-02-01 12:26:39
1
1,421
NAJ
75,310,169
7,209,497
Problem during Download of pdf file using Python
<p>From <a href="https://research.un.org/en/docs/ga/quick/regular/76" rel="nofollow noreferrer">https://research.un.org/en/docs/ga/quick/regular/76</a> I intend to download the first resolution (A/RES/76/307), which has the link (<a href="https://undocs.org/en/A/RES/76/307" rel="nofollow noreferrer">https://undocs.org/en/A/RES/76/307</a>) and which then is transformed to <a href="https://documents-dds-ny.un.org/doc/UNDOC/GEN/N22/587/47/PDF/N2258747.pdf?OpenElement" rel="nofollow noreferrer">https://documents-dds-ny.un.org/doc/UNDOC/GEN/N22/587/47/PDF/N2258747.pdf?OpenElement</a> , when clicked on.</p> <p>I use the standard code to download pdfs:</p> <pre><code>import requests url = &quot;https://undocs.org/en/A/RES/76/307&quot; response = requests.get(url) print(response.status_code) print(response.content) with open(&quot;document.pdf&quot;, &quot;wb&quot;) as f: f.write(response.content) </code></pre> <p>While the status_code indicates everything is okay (200), the content simply is:</p> <pre><code>b'\n&lt;head&gt;\n&lt;/head&gt;\n&lt;body text=&quot;#000000&quot;&gt;\n&lt;META HTTP-EQUIV=&quot;refresh&quot; CONTENT=&quot;1; URL=/tmp/1286884.54627991.html&quot;&gt;\n&lt;/body&gt;\n&lt;/html&gt;\n' </code></pre> <p>, which is evidently not the actual content of the document. A pdf file is saved, but it is much too small and I cannot open it with Document viewer (&quot;File type HTML document (text/html) is not supported&quot;).</p> <p>How can I download that pdf file using python?</p>
<python><pdf><download>
2023-02-01 12:26:29
2
314
Jan
75,310,143
1,901,071
Polars Adding Days to a date
<p>I am using Polars in Python to try and add thirty days to a date. I run the code and get no errors, but also no new dates. Can anyone see my mistake?</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame( {&quot;start_date&quot;: [&quot;2020-01-02&quot;, &quot;2020-01-03&quot;, &quot;2020-01-04&quot;]}) df = df.with_columns( pl.col(&quot;start_date&quot;).str.to_date() ) # Generate the days above and below df = df.with_columns( pl.col(&quot;start_date&quot;) + pl.duration(days=30).alias(&quot;date_plus_delta&quot;) ) df = df.with_columns( pl.col(&quot;start_date&quot;) + pl.duration(days=-30).alias(&quot;date_minus_delta&quot;) ) print(df) </code></pre> <pre><code>shape: (3, 1) ┌────────────┐ │ start_date │ │ --- │ │ date │ ╞════════════╡ │ 2020-01-02 │ │ 2020-01-03 │ │ 2020-01-04 │ └────────────┘ </code></pre> <p><strong>Quick References</strong></p> <p><strong>The Manual:</strong> <a href="https://docs.pola.rs/user-guide/transformations/time-series/parsing/" rel="nofollow noreferrer">https://docs.pola.rs/user-guide/transformations/time-series/parsing/</a></p> <p><strong>strftime formats:</strong> <a href="https://docs.rs/chrono/latest/chrono/format/strftime/index.html" rel="nofollow noreferrer">https://docs.rs/chrono/latest/chrono/format/strftime/index.html</a></p> <p><strong>SO Answer from a previous Post:</strong> <a href="https://stackoverflow.com/questions/74800989/how-to-add-a-duration-to-datetime-in-python-polars">How to add a duration to datetime in Python polars</a></p>
<python><datetime><duration><timedelta><python-polars>
2023-02-01 12:24:19
1
2,946
John Smith
75,310,122
6,218,173
NameError, name is not defined
<p>I wrote some simple code to check if a list is sorted or not. I have two questions: First, my result is wrong. I guess the issue is with the following line. Which one is correct?</p> <pre><code>sorted_list = mylist[:].sort() sorted_list = list(mylist[:]).sort() </code></pre> <p>As a second question, when I try to print(sorted_list), I get a NameError saying sorted_list is not defined, which is weird, because I've already defined it in the second line. Would you please help me understand why I'm getting this error?</p> <pre><code>def is_sorted(mylist): sorted_list = mylist[:].sort() # mylist.sort() if sorted_list == mylist: return True else: return False print(is_sorted(['Aajid', 'Bafiee', 'Hello'])) print(sorted_list) </code></pre> <p>Output:</p> <pre><code>False Traceback (most recent call last): File &quot;e:\NectCloud\Python\is_sorted.py&quot;, line 11, in &lt;module&gt; print(sorted_list) ^^^^^^^^^^^ NameError: name 'sorted_list' is not defined </code></pre> <p>Thanks</p>
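Both candidate lines share the same problem: `list.sort()` sorts in place and returns `None`, so `sorted_list` is `None` either way and the comparison always fails. The builtin `sorted()` returns a new list instead:

```python
def is_sorted(mylist):
    # sorted() returns a NEW sorted list; list.sort() sorts in place
    # and returns None, which is why the original comparison failed.
    return sorted(mylist) == mylist

print(is_sorted(['Aajid', 'Bafiee', 'Hello']))  # True
print(is_sorted(['b', 'a']))                    # False
```

As for the NameError: `sorted_list` is a local variable inside the function, so it does not exist at module level; return it from the function (or compute it outside) if you need it after the call.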
<python><python-3.x><list><nameerror>
2023-02-01 12:22:45
1
491
Majid
75,310,082
3,335,108
Accessing Exchange inbox using exchangelib from Python with oauth
<p>Since Microsoft has now dropped support for accessing an Exchange mailbox using basic authentication, I've had to upgrade some code to use oauth based access to the mailbox instead.</p> <p>I've setup an Azure AD app following these docs:</p> <ul> <li><a href="https://learn.microsoft.com/en-us/exchange/client-developer/legacy-protocols/how-to-authenticate-an-imap-pop-smtp-application-by-using-oauth" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/exchange/client-developer/legacy-protocols/how-to-authenticate-an-imap-pop-smtp-application-by-using-oauth</a></li> </ul> <p>Now I'd like to access emails from the mailbox using <a href="https://pypi.org/project/exchangelib/" rel="nofollow noreferrer">exchangelib</a> and <code>exchangelib.OAuth2Credentials</code></p> <p>Here's the code I'm using:</p> <pre class="lang-py prettyprint-override"><code>from exchangelib import Account, OAuth2Credentials AZURE_CLIENT_ID = &quot;MY_CLIENT_ID&quot; AZURE_TENANT_ID = &quot;MY_TENANT_ID&quot; AZURE_CLIENT_CREDENTIAL = &quot;MY_CLIENT_CREDENTIAL&quot; MY_EMAIL = &quot;me@example.com&quot; credentials = OAuth2Credentials( client_id=AZURE_CLIENT_ID, client_secret=AZURE_CLIENT_CREDENTIAL, tenant_id=AZURE_TENANT_ID ) account = Account(primary_smtp_address=MY_EMAIL, credentials=credentials, autodiscover=True) for item in account.inbox.all().order_by('-datetime_received')[:100]: print(item.subject, item.sender, item.datetime_received) </code></pre> <p>However I'm getting an error saying <code>The SMTP address has no mailbox associated with it</code>:</p> <pre><code>Traceback (most recent call last): File &quot;exchangelib_test.py&quot;, line 19, in &lt;module&gt; account = Account(primary_smtp_address=MY_EMAIL, credentials=credentials, autodiscover=True) File &quot;/home/me/.local/lib/python3.6/site-packages/exchangelib/account.py&quot;, line 119, in __init__ email=primary_smtp_address, credentials=credentials, auth_type=auth_type, retry_policy=retry_policy File 
&quot;/home/me/.local/lib/python3.6/site-packages/exchangelib/autodiscover/discovery.py&quot;, line 114, in discover ad_response = self._quick(protocol=ad_protocol) File &quot;/home/me/.local/lib/python3.6/site-packages/exchangelib/autodiscover/discovery.py&quot;, line 201, in _quick return self._step_5(ad=ad) File &quot;/home/me/.local/lib/python3.6/site-packages/exchangelib/autodiscover/discovery.py&quot;, line 531, in _step_5 ad.raise_errors() File &quot;/home/me/.local/lib/python3.6/site-packages/exchangelib/autodiscover/properties.py&quot;, line 313, in raise_errors raise ErrorNonExistentMailbox('The SMTP address has no mailbox associated with it') exchangelib.errors.ErrorNonExistentMailbox: The SMTP address has no mailbox associated with it </code></pre> <p>Does anyone have an idea what might be going wrong here? The SMTP address definitely does exist and is an Exchange mailbox which I can log in to and view via Outlook.</p>
<python><oauth><smtp><exchangewebservices><exchangelib>
2023-02-01 12:18:07
2
380
Benjamin Gorman
75,310,047
5,881,804
python multithreading: how to ensure a thread is waiting and not currently joining?
<p>I have a thread A which permanently listens for events. When an event for a particular resource R1 arrives, it starts thread B and passes the job to B for processing. Thread A then continues to listen, while B waits for a job, receives the job from thread A and processes it. Additional events for resources R1 are also sent to thread B (placed in a queue for thread B). Events for resources R2, R3, and so on are treated similarly, a new thread is started for each unique resource, ie. thread C for R2, thread D for R3, and so on. The nature of the events is peaky for a particular resource, followed by long periods of nothing, hence thread A starts thread B and when B is finished with the job, it waits for another job from A and if no job arrives, it joins. Because thread B may still be waiting after completing a job from a previous event, thread A checks if B is alive before passing it to the current job (it places it in a queue). If it is still alive, A just passes B the job, if it is not it starts thread B, again and then passes it the job. To ensure serialization of events for a particular resource, only one thread for each resource is started (otherwise this would be trivial, just start a new thread for every event)</p> <p>Now, here is the problem: there is a small but finite time when thread B has just timed out waiting for a job and will join, but has not joined, yet. If thread A checks if thread B is alive during that short time, thread A will see that B is alive and send it a job, but B will not process it because it is no longer awaiting jobs - instead it is in the process of joining. Hence the job is not processed. This can be simulated by inserting a sleep statement as the last line of code in thread B.</p> <p>How can I ensure that when thread A checks that thread B is alive and is waiting for a job, and not currently joining? I have considered using a lock but acquiring a lock also takes time, even if that time is very small.</p>
<python><multithreading><python-multithreading>
2023-02-01 12:15:25
2
732
Blindfreddy
75,309,924
1,417,053
Python Pathlib Strange Behaviour
<p>I can find all the files in the subfolders of a folder path with specific filetypes in this way:</p> <pre><code>list(Path(folder_path).glob('**/*.[jpg][jpeg][png]*')) </code></pre> <p>But if I change the code to try and find other filetypes (like <code>jfif</code> or <code>bmp</code>), with some filetypes the code works and with the others, it can't find the filepaths:</p> <pre><code>list(Path(folder_path).glob('**/*.[jpg][jpeg][jfif][png]*')) </code></pre> <p>Why isn't this code working properly?</p>
<python><glob><pathlib>
2023-02-01 12:04:48
2
2,620
Cypher
75,309,711
13,019,353
Calling python script from bash script and getting it's return value
<p>I've got a bash script doing some tasks, but I need to manipulate a string obtained from configuration (for simplification, in this test it's hardcoded). This manipulation can be done easily in python but is not simple in bash, so I've written a script in python doing these tasks and returning a string (or ideally an array of strings). I'm calling this python script in my bash script. Both scripts are in the same directory and this directory is added to environment variables. I'm testing it on Ubuntu 22.04. My python script below:</p> <pre><code>#!/usr/bin/python def Get(input: str) -&gt; list: #Doing tasks - arr is an output array return ' '.join(arr) #or ideally return arr </code></pre> <p>My bash script used to call the above python script</p> <pre><code>#!/bin/bash ARR=(&quot;$(python -c &quot;from test import Get; Get('val1, val2,val3')&quot;)&quot;) echo $ARR for ELEMENT in &quot;${ARR[@]}&quot;; do echo &quot;$ELEMENT&quot; done </code></pre> <p>When I added a print to the python script for test purposes I got proper results, so the python script works correctly. But in the bash script I simply got an empty line. I've also tried something like this: <code>ARR=(&quot;$(python -c &quot;from test import Get; RES=Get('val1, val2,val3')&quot;)&quot;)</code> and then iterated over RES, and got the same response. It seems like the bash script cannot handle the data returned by python. How can I rewrite these scripts to properly get the python script's response in bash? Is it possible to get the whole array or only the string?</p>
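The core issue is likely that bash can only capture what a process writes to stdout; a Python `return` value never leaves the interpreter. A sketch of `test.py` that prints the joined result, so command substitution has something to capture:

```python
# test.py -- bash captures a process's stdout, never a Python return value
import sys

def Get(raw: str) -> list:
    """Split a comma-separated string into trimmed parts."""
    return [part.strip() for part in raw.split(",")]

if __name__ == "__main__":
    # print() is what makes the result visible to $(...) in bash
    print(" ".join(Get(sys.argv[1] if len(sys.argv) > 1 else "")))
```

On the bash side, `ARR=($(python -c "from test import Get; print(' '.join(Get('val1, val2,val3')))"))` without inner quotes around `$(...)` lets word splitting create one array element per part; the original quoted form `ARR=("$(...)")` always produces a single-element array.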
<python><bash>
2023-02-01 11:46:20
2
377
Aenye_Cerbin
75,309,660
3,812,928
Python: why is mock not called in this instance of loop?
<p>I have the following test code:</p> <pre class="lang-py prettyprint-override"><code>@pytest.mark.parametrize( &quot;any_recordings,any_transcriptions,returns_recording,duration,exception&quot;, [ ( True, True, True, 60, None ), ( True, True, False, 121, RecordingLengthException, ), ( True, True, False, 1, RecordingLengthException, ), ( False, False, False, None, timer.Timeout, ), ( True, False, False, 0, NotTranscribedException, ), ], ) async def test_10_mock_wait_for_recording_result( twilio_mock_testing: Twilio, any_recordings: bool, any_transcriptions: bool, returns_recording: bool, duration: int, exception: Optional[Exception], ): with mock.patch(&quot;twilio.rest.api.v2010.account.recording.transcription.TranscriptionInstance&quot;) as transcription_instance: transcription_instance.transcription_text = &quot;Hello there.&quot; with mock.patch(&quot;twilio.rest.api.v2010.account.recording.RecordingInstance&quot;) as recording_instance: recording_instance.call_sid = &quot;54321&quot; recording_instance.date_created = datetime.datetime.now() recording_instance.price = &quot;1.25&quot; recording_instance.source = &quot;whatever&quot; recording_instance.transcriptions.list.return_value = [transcription_instance] if any_transcriptions else [] recording_instance.duration = duration recording_instance.uri = &quot;http://possu.xyz&quot; recording_instance.status = CallStatusFinal.COMPLETED.value twilio_mock_testing.client.recordings.list.return_value = [recording_instance] if any_recordings else [] on_transcribed: Callable[[Call, Recording], Any] = mock.Mock(return_value=None) with mock.patch.object(timer, 'Timer') as t: t.loop = mock.MagicMock(return_value=True) with pytest.raises(exception) if exception is not None else nullcontext(): recording = twilio_mock_testing.wait_for_recording_result( call=Call( sid=&quot;54321&quot;, direction=&quot;abc&quot;, to=&quot;1234567890&quot;, from_=&quot;9876543210&quot;, status=CallStatusRunning.IN_PROGRESS.value, 
start_time=datetime.datetime.now(), end_time=None, duration=None, price=None, caller_name=None, ), raise_on_timeout=True, on_transcribed=on_transcribed ) twilio_mock_testing.client.recordings.list.assert_called() if returns_recording: assert recording.status == CallStatusFinal.COMPLETED.value on_transcribed.assert_called_once() else: on_transcribed.assert_not_called() t.loop.assert_called() </code></pre> <p>Inside <code>wait_for_recording_result</code> it uses a library's <code>timer.Timer.loop</code> method (which is being mocked):</p> <pre class="lang-py prettyprint-override"><code> def wait_for_recording_result( self, *, call: Call, raise_on_timeout=False, on_transcribed: Callable[[Call, Recording], Any] = None ): recording_timer = timer.Timer(timeout=60, sleep=5, what=&quot;call recording&quot;) recording = None try: while recording_timer.loop(raise_timeout=raise_on_timeout): recordings = list(self.recordings(call_sid=call.sid)) if any(recordings): recording = recordings[0] if ( recording.duration &lt;= 2 or recording.duration &gt;= 120 ): # See https://www.twilio.com/docs/api/errors/13617 raise RecordingLengthException(recording) if any(recording.transcriptions): break except timer.Timeout: ...
</code></pre> <p>All the test cases pass, except for the first test case on <code>t.loop.assert_called()</code> and the fourth test case seems to wait for the 60 second timeout (even though loop is mocked):</p> <pre><code>tests/integrations/test_twilio.py::test_10_mock_wait_for_recording_result[True-True-True-60-None] FAILED tests/integrations/test_twilio.py::test_10_mock_wait_for_recording_result[True-True-False-121-RecordingLengthException] PASSED tests/integrations/test_twilio.py::test_10_mock_wait_for_recording_result[True-True-False-1-RecordingLengthException] PASSED tests/integrations/test_twilio.py::test_10_mock_wait_for_recording_result[False-False-False-None-Timeout] PASSED tests/integrations/test_twilio.py::test_10_mock_wait_for_recording_result[True-False-False-0-NotTranscribedException] PASSED FAILED tests/integrations/test_twilio.py::test_10_mock_wait_for_recording_result[True-True-True-60-None] - AssertionError: Expected 'loop' to have been called. </code></pre> <p>How is it that the mock is called in every other circumstance, yet it still waits for 60 seconds on the 4th case, while claiming the mock was not called in the first case?</p> <p>The only thing the <strong>init</strong> does of timer.Timer that I can think may cause it is <code>self._event = threading.Event()</code>, but this I can't see used anywhere except inside the <code>loop</code> method, meaning it should not be called as it is mocked.</p> <p>I should also note that <code>timer.Timeout</code> is a custom exception, from the same <code>timer</code> package, inheriting from <code>Exception</code> so it isn't part of built-in python.</p> <p><code>twilio_mock_testing</code> merely creates a mock for <code>client</code> property of this <code>Twilio</code> package and not of any of the methods such as <code>wait_for_recording_result</code>...</p> <pre class="lang-py prettyprint-override"><code>with mock.patch(&quot;twilio.rest.Client&quot;): return Twilio(sid=&quot;Hello&quot;, 
token=&quot;olleH&quot;) </code></pre> <p>I am incredibly confused by this...</p>
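One thing worth checking (an educated guess, illustrated with a toy `Timer` rather than the real package): when you patch a class, the object bound by `as t` is the *class* mock; instances created inside the code under test are `t.return_value`. So `t.loop` is an attribute of the class mock that the production code never touches, and setting `t.loop = MagicMock(return_value=True)` does not affect the instance's `loop` either, which would also explain why the real 60-second wait still happens:

```python
import sys
from unittest import mock

class Timer:
    """Toy stand-in for timer.Timer -- not the actual package."""
    def loop(self):
        return False

def code_under_test():
    t = Timer()   # when Timer is patched, this call yields .return_value
    return t.loop()

with mock.patch.object(sys.modules[__name__], "Timer") as timer_cls:
    timer_cls.return_value.loop.return_value = True
    print(code_under_test())                     # True
    timer_cls.loop.assert_not_called()           # attribute on the CLASS mock
    timer_cls.return_value.loop.assert_called()  # method on the INSTANCE mock
```

Under this theory, asserting on `t.return_value.loop` (and configuring its `return_value`) instead of `t.loop` should make the first case pass and stop the fourth case from sleeping.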
<python><unit-testing>
2023-02-01 11:41:37
1
665
NotoriousPyro
75,309,589
186,202
Django Admin - How to prefill add inline forms values sent through the querystring?
<p>I am able to prefill a form using query-string parameters in Django Admin.</p> <p>Let's say I have the following models:</p> <pre class="lang-py prettyprint-override"><code>class Book(models.Model): id = models.Autofield(primary_key=True) author = models.ForeignKey(Author, on_delete=models.CASCADE) name = models.Charfield(max_length=200) class Author(models.Model): id = models.Autofield(primary_key=True) name = models.Charfield(max_length=200) </code></pre> <p>If I go to <code>/admin/library/author/add/?name=J.+K.+Rowling</code> the author's name will be properly prefilled.</p> <p>However if I add InlineForms like that:</p> <pre class="lang-py prettyprint-override"><code>class BookInline(StackedInline): model = Book extra = 0 class AuthorAdmin(ModelAdmin): inlines = [BookInline] admin.site.register(Author, AuthorAdmin) </code></pre> <p>I don't seem to be able to prefill books.</p> <p>I tried: <code>/admin/library/author/add/?name=J.+K.+Rowling&amp;books-TOTAL_FORMS=1&amp;books-0-name=Harry+Potter+and+the+Philosopher's+Stone</code></p> <p>The author form is prefilled, but the first book form is not prefilled. Do you know how one manages that?</p>
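As far as I know the admin only feeds GET parameters into the *parent* form's initial data, so the inline prefixes are ignored. One possible route is to parse the prefixed parameters yourself and hand them to the inline formset as `initial` (by overriding the inline's `get_formset` and the formset's `__init__`). The helper below is hypothetical and framework-free; it only shows the parsing half:

```python
from urllib.parse import parse_qsl

def inline_initial(querystring: str, prefix: str) -> list:
    """Hypothetical helper: collect ?books-0-name=...&books-1-name=...
    parameters into the `initial` list a formset expects. Not part of
    Django itself."""
    rows = {}
    for key, value in parse_qsl(querystring):
        head, _, rest = key.partition("-")
        idx, _, field = rest.partition("-")
        if head == prefix and idx.isdigit() and field:
            rows.setdefault(int(idx), {})[field] = value
    return [rows[i] for i in sorted(rows)]

print(inline_initial("name=J.+K.+Rowling&books-0-name=Harry+Potter", "books"))
# [{'name': 'Harry Potter'}]
```

The resulting list would then be injected as the formset's `initial` (together with a matching `extra`); that wiring is version-specific, so treat this as a starting point rather than a drop-in solution.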
<python><django><django-admin><inline-formset>
2023-02-01 11:35:25
1
18,222
Natim
75,309,530
11,629,296
Python : Replace two for loops with the fastest way to sum the elements
<p>I have a list of 5 elements (which could be 50,000); now I want to sum all the combinations from the same list and create a dataframe from the results, so I am writing the following code:</p> <pre><code>x = list(range(1,5)) t=[] for i in x: for j in x: t.append((i,j,i+j)) df=pd.DataFrame(t) </code></pre> <p>The above code is generating the correct results but takes too long to execute when I have more elements in the list. Looking for the fastest way to do the same thing.</p>
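Since the third column is just i + j, the whole cross product can be built without a Python loop; NumPy broadcasting materialises every pair and its sum at C speed:

```python
import numpy as np
import pandas as pd

x = np.arange(1, 5)                      # stand-in for the real element list
i, j = np.meshgrid(x, x, indexing="ij")  # every (i, j) combination at once
df = pd.DataFrame({"i": i.ravel(), "j": j.ravel(), "sum": (i + j).ravel()})
print(len(df))  # 16
```

A caveat worth stating: with 50,000 elements the cross product has 2.5 billion rows, so at that scale the size of the output table itself becomes the bottleneck no matter how the pairs are generated.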
<python><pandas><dataframe><for-loop><python-itertools>
2023-02-01 11:29:51
2
2,189
Kallol
75,309,443
7,541,421
Curve fitting with three unknowns Python
<p>I have a data set where from one <em>x</em> I can obtain the exponential function <code>f(x) = a*np.exp(-b*(x-c))</code>, defined in Python like this:</p> <pre><code>def func(x, a, b, c): return a*np.exp(-b*(x-c)) f = func(x, a, b, c) a = 1.6 b = 0.02 </code></pre> <p><em>a</em>, <em>b</em>, <em>c</em> are all known in this case.</p> <p>However, after an algebraic solution, where I need to partition this function into three different members, I need to obtain a solution for this function:</p> <p><code>g(x) = f1*a1*np.exp(-b1*(x-c)) + f2*a2*np.exp(-b2*(x-c)) + f3*a3*np.exp(-b3*(x-c))</code>.</p> <p><em>a1</em>, <em>a2</em>, <em>a3</em>, <em>f1</em>, <em>f2</em>, <em>f3</em> and <em>c</em> are all known; what I need to do is fit <em>g(x)</em> to <em>f(x)</em> in such a way as to obtain <strong>b1</strong>, <strong>b2</strong> and <strong>b3</strong>, using curve_fit or any kind of adequate fit for this kind of problem.</p> <p><em>f1</em>, <em>f2</em>, <em>f3</em> represent fractions and their sum is equal to 1.</p> <p>My question is: how do I define the <em>g(x)</em> function in order to obtain a solution for <strong>b1</strong>, <strong>b2</strong> and <strong>b3</strong>?</p> <p>For clarity and testing purposes, I also attach possible values to tackle this problem, but note that <em>f1</em>, <em>f2</em>, <em>f3</em> and <em>a1</em>, <em>a2</em>, <em>a3</em> should be part of the inputs, as they change from one point to another. I have an array, but am simplifying the calculation in order to make it clear what to obtain:</p> <pre><code>x = np.arange(300., 701., 5.) f1=0.3 f2=0.5 f3=0.2 c = 350. a1=1.82 a2=7.32 a3=1.52 </code></pre>
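One standard pattern is to freeze the known quantities in a closure (or `functools.partial`) so that `curve_fit` only sees the free parameters b1, b2, b3. A sketch with the sample values; note that an exact fit need not exist here, since g(c) = f1*a1 + f2*a2 + f3*a3 is fixed whatever the b's are, so `curve_fit` returns the least-squares best:

```python
import numpy as np
from scipy.optimize import curve_fit

x = np.arange(300., 701., 5.)
a, b, c = 1.6, 0.02, 350.
f = a * np.exp(-b * (x - c))          # the target curve

f1, f2, f3 = 0.3, 0.5, 0.2
a1, a2, a3 = 1.82, 7.32, 1.52

def make_g(f1, f2, f3, a1, a2, a3, c):
    # Freeze everything known; only b1, b2, b3 remain free for curve_fit
    def g(x, b1, b2, b3):
        return (f1 * a1 * np.exp(-b1 * (x - c))
                + f2 * a2 * np.exp(-b2 * (x - c))
                + f3 * a3 * np.exp(-b3 * (x - c)))
    return g

g = make_g(f1, f2, f3, a1, a2, a3, c)
# Bounds keep the exponents from overflowing during the search
(b1, b2, b3), _ = curve_fit(g, x, f, p0=[0.01, 0.02, 0.03], bounds=(0.0, 1.0))
print(b1, b2, b3)
```

Distinct starting values in `p0` help, because identical b's make the three terms (and the Jacobian columns) proportional to one another.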
<python><scipy><curve-fitting>
2023-02-01 11:22:10
1
788
PEBKAC
75,309,422
357,313
Slice MultiIndex by multiple tuples
<p>I have a DataFrame with multiple index levels. I define some subset by selecting multiple combinations of all levels but the last. Then I want to slice the original DataFrame with that subset, but I cannot find how. Best is to look at a simple example:</p> <pre><code>In [1]: import pandas as pd In [2]: df = pd.DataFrame({'a': ['A', 'B', 'A', 'B'], 'b': ['X', 'X', 'X', 'Y'], ...: 'c': ['S', 'T', 'T', 'T'], 'd': [1, 2, 3, 1]}).set_index(['a', 'b', 'c']) In [3]: print(df.to_string()) d a b c A X S 1 B X T 2 A X T 3 B Y T 1 In [4]: sel = df.index.droplevel('c')[df.d == 1] # Some selection on multiple index levels. In [5]: print(sel) MultiIndex([('A', 'X'), ('B', 'Y')], names=['a', 'b']) </code></pre> <p>Now I would like all rows from <code>df</code> where (<code>a</code>, <code>b</code>) in <code>sel</code>, in this case all but the second row. I tried <code>.loc</code>, <code>.xs</code> and more.</p> <p>I'm sure I can manipulate the index (drop level <code>c</code>, select, then add level <code>c</code> again), but that feels like a workaround. The same goes for an inner join. I must be overlooking some method...?</p>
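One method that avoids the drop/re-add workaround: `MultiIndex.isin` accepts an iterable of tuples, so the (a, b) prefix of every row can be tested against `sel` directly and used as a boolean mask:

```python
import pandas as pd

df = pd.DataFrame({'a': ['A', 'B', 'A', 'B'], 'b': ['X', 'X', 'X', 'Y'],
                   'c': ['S', 'T', 'T', 'T'],
                   'd': [1, 2, 3, 1]}).set_index(['a', 'b', 'c'])
sel = df.index.droplevel('c')[df.d == 1]

# Compare the (a, b) part of every row against the selected tuples
result = df[df.index.droplevel('c').isin(sel)]
print(result['d'].tolist())  # [1, 3, 1]
```

This keeps level `c` intact in the result and works for any number of selected combinations.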
<python><pandas><slice><multi-index>
2023-02-01 11:20:44
1
8,135
Michel de Ruiter
75,309,417
902,485
The distribution of p-values is not uniform when applying t-test to random coin flips from Python's random.randint(0,1)
<p>Theoretically, p-values <a href="https://stats.stackexchange.com/questions/10613/why-are-p-values-uniformly-distributed-under-the-null-hypothesis">are uniformly distributed under the null hypothesis</a>.</p> <p>Therefore, I would expect p-values from G-test or Chi-square test to test equal proportions to provide uniformly distributed p-values when I apply it to some random coin flip simulations using Python's <code>random.randint(0,1)</code>, which should be an unbiased random coin, i.e., a Bernoulli(0.5).</p> <p>Likewise, in case n*p is sufficiently large, the assumptions behind a t-test become reasonable, and we would expect a t-test to give uniformly distributed p-values too.</p> <p>However, that is not what I empirically see.</p> <p>I plot a histogram of p-values from repeated experiments with sample size 20k, using the following snippet:</p> <pre class="lang-py prettyprint-override"><code>from scipy import stats from matplotlib import pyplot as plt ps = [] for i in range(5000): heads = [random.randint(0,1) for _ in range(20000)] tails = [1-x for x in heads] p = stats.ttest_ind(heads, tails).pvalue ps.append(p) plt.hist(ps, 100) </code></pre> <p>This results in the following distribution of p-values, which seems to give p-values close to 0 much more often than expected. Note that this is not due to the approximations of the t-test, as I find similar distributions of p-values when I plug in a Chi-square or G-test.</p> <p><a href="https://i.sstatic.net/ffq9y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ffq9y.png" alt="enter image description here" /></a></p> <p>Am I running into a situation here where Python's pseudorandom number generator (<a href="https://github.com/schmouk/PyRandLib/tree/master/PyRandLib" rel="nofollow noreferrer">which are based on Mersenne Twister algorithm</a>) simply do not have sufficiently good statistical properties and are simply not random enough? Or is there something else that I am missing here?</p>
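A plausible explanation that has nothing to do with the Mersenne Twister: `tails` is defined as `1 - heads`, so the two samples are perfectly anti-correlated, while `ttest_ind` assumes *independent* samples; the variance estimate behind the test statistic is then wrong, and uniformity of the p-values is no longer guaranteed. Comparing two independently generated coins behaves as theory predicts; a small sketch:

```python
import random
from scipy import stats

random.seed(0)

# Draw two INDEPENDENT coins instead of heads vs. (1 - heads)
ps = []
for _ in range(200):
    heads = [random.randint(0, 1) for _ in range(2000)]
    other = [random.randint(0, 1) for _ in range(2000)]
    ps.append(stats.ttest_ind(heads, other).pvalue)

# Under uniformity, about 5% of p-values should fall below 0.05
frac_small = sum(p < 0.05 for p in ps) / len(ps)
print(frac_small)
```

The same reasoning applies to the Chi-square and G-tests: the "tails" column carries no information beyond the "heads" column, so the effective sample is half what the tests assume.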
<python><random><statistics><p-value><bernoulli-probability>
2023-02-01 11:19:59
2
841
Niek Tax
75,309,406
20,646,427
How to connect 2 many to many fields in Django
<p>I have 2 many-to-many fields in my models and I want to connect them to each other. I mean, if I connect a user in the admin with a CounterParty, I can't see that in the CounterParty admin.</p> <p>How can I do that?</p> <p>When I try to do that, it shows up in only 1 model.</p> <p>models.py</p> <pre><code>class CustomUser(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE, verbose_name='Пользователь') user_counter = models.ManyToManyField('CounterParty', blank=True, verbose_name='Контрагенты пользователя') def __str__(self): return f'{self.user}' class CounterParty(models.Model): GUID = models.UUIDField(default=uuid.uuid4, editable=True, unique=True) name = models.CharField(max_length=150, verbose_name='Наименование') customer = models.BooleanField(default=False, verbose_name='Заказчик') contractor = models.BooleanField(default=False, verbose_name='Подрядчик') counter_user = models.ManyToManyField(User, blank=True, related_name='counter_user', verbose_name='Пользователи контрагента') class Meta: verbose_name = 'Контрагент' verbose_name_plural = 'Контрагенты' def __str__(self): return </code></pre> <p>admin.py</p> <pre><code>from django.contrib import admin from .models import CustomUser, CounterParty, ObjectList, SectionList from authentication.models import User from authentication.admin import UserAdmin class CustomUserInLine(admin.StackedInline): model = CustomUser can_delete = False verbose_name_plural = 'Пользователи' class CustomUserAdmin(UserAdmin): inlines = (CustomUserInLine,) @admin.register(CounterParty) class CounterPartyAdmin(admin.ModelAdmin): pass admin.site.unregister(User) admin.site.register(User, CustomUserAdmin) </code></pre> <p><a href="https://i.sstatic.net/NKfzF.png" rel="nofollow noreferrer">user admin</a></p> <p><a href="https://i.sstatic.net/TiicH.png" rel="nofollow noreferrer">counter party admin</a></p>
<python><django>
2023-02-01 11:18:34
1
524
Zesshi
75,309,351
15,724,084
python tkinter cannot assign stringvar() set
<p>I wanted to create module-based tkinter classes. I want the <code>var_text</code> variable to be able to print the text given to it on the <code>Label</code>.</p> <pre><code>from tkinter import * class Maincl(Tk): def __init__(self,xwidth,yheight): super().__init__() self.geometry(f'{xwidth}x{yheight}') self.var_text=StringVar() Labelcl(self,'darkblue',30,3).pack() Labelcl(self, 'white', 30, 3,self.var_text.set('hello tkinter')).pack() self.mainloop() class Labelcl(Label): def __init__(self,master,bg_color,width,height,text_variable=None): super().__init__(master) self.master=master self.configure(bg=bg_color,width=width,height=height,textvariable=text_variable) app=Maincl('500','300') </code></pre> <p>As a matter of fact, for testing purposes I assigned (set) var_text to &quot;hello tkinter&quot;, but I cannot see the text when the code is run.</p>
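The likely culprit is that `StringVar.set()` returns `None`, so `Labelcl` is constructed with `textvariable=None`. A display-free stand-in class (so this runs without a Tk window) demonstrates the trap:

```python
class FakeStringVar:
    """Display-free stand-in for tkinter.StringVar, to show the trap."""
    def __init__(self):
        self._value = ""
    def set(self, value):
        self._value = value   # like tkinter's set(), this returns None
    def get(self):
        return self._value

var = FakeStringVar()
result = var.set("hello tkinter")
print(result)     # None -- this is what Labelcl received as text_variable
print(var.get())  # hello tkinter
```

The fix in `Maincl` is to call `self.var_text.set('hello tkinter')` on its own line first, then pass the variable itself: `Labelcl(self, 'white', 30, 3, self.var_text).pack()`.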
<python><tkinter>
2023-02-01 11:13:43
2
741
xlmaster
75,309,237
21,124,805
read_sql_query() throws "'OptionEngine' object has no attribute 'execute'" with SQLAlchemy 2.0.0
<p>First of all, I'm a totally new guy in the dev world. I'm currently taking courses in AI / Data Science, and one of my assignments is to use a SQL database to make predictions using Prophet, then use these predictions to build a Power BI report. But currently I'm stuck with the Python code; I'm not a developer originally, so I have no clue where the problem is:</p> <pre class="lang-py prettyprint-override"><code>import sqlalchemy from sqlalchemy import create_engine import pandas as pd from prophet import Prophet import pymysql engine = create_engine(&quot;mysql+pymysql://root:Password@localhost:3306/data&quot;) query = &quot;SELECT Cle_Produit, Date_Facturation, SUM(Quantite) AS Total_Quantite FROM ventes GROUP BY Cle_Produit, Date_Facturation&quot; df = pd.read_sql_query(query, engine) df = df.pivot(index='Date_Facturation', columns='Cle_Produit', values='Total_Quantite') df = df.reset_index() df.rename(columns={'Date_Facturation': 'ds', 'Total_Quantite': 'y'}, inplace=True) m = Prophet() m.fit(df) future = m.make_future_dataframe(periods=365) forecast = m.predict(future) forecast[['ds', 'yhat']].to_csv('forecast.csv', index=False) </code></pre> <p>It returns this message:</p> <blockquote> <p>Importing plotly failed. Interactive plots will not work.
Traceback (most recent call last): File &quot;f:\Backup\Cours\Cours\Explo Data\app3.py&quot;, line 9, in df = pd.read_sql_query(query, engine) File &quot;F:\Programmes\Anaconda\envs\myenv\lib\site-packages\pandas\io\sql.py&quot;, line 397, in read_sql_query return pandas_sql.read_query( File &quot;F:\Programmes\Anaconda\envs\myenv\lib\site-packages\pandas\io\sql.py&quot;, line 1560, in read_query result = self.execute(*args) File &quot;F:\Programmes\Anaconda\envs\myenv\lib\site-packages\pandas\io\sql.py&quot;, line 1405, in execute return self.connectable.execution_options().execute(*args, **kwargs) AttributeError: 'OptionEngine' object has no attribute 'execute'</p> </blockquote> <p>Please, can somebody help me?</p> <p>I want this python script to create a csv file with the prediction from prophet. I want Prophet to use the table ventes from the DB data, and it should use the column <code>Cle_Produit</code>, <code>Quantite</code> and <code>Date_Facturation</code></p>
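This error shows up when an older pandas (1.x) meets SQLAlchemy 2.0, whose engines no longer have `.execute()`. Upgrading pandas to 2.0+ resolves it, as does pinning `sqlalchemy<2.0`; a third workaround is to pass an open connection and a `text()` clause instead of the engine and raw string. A sketch against an in-memory SQLite database (the table contents are made up for illustration):

```python
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # in-memory stand-in for the MySQL DB
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE ventes (Cle_Produit TEXT, Quantite INTEGER)"))
    conn.execute(text("INSERT INTO ventes VALUES ('A', 2), ('A', 3)"))

with engine.connect() as conn:
    # Passing the CONNECTION (not the engine) plus text() avoids the
    # code path that called engine.execute()
    df = pd.read_sql_query(
        sql=text("SELECT Cle_Produit, SUM(Quantite) AS Total "
                 "FROM ventes GROUP BY Cle_Produit"),
        con=conn,
    )
print(df["Total"].tolist())  # [5]
```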
<python><pandas><sqlalchemy>
2023-02-01 11:02:34
8
541
Elu
75,309,138
8,849,071
Why mypy is not throwing an error if interface and implementation has a different argument name
<p>Today I discovered this bug in our code base (simplified example):</p> <pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod class Interface(ABC): @abstractmethod def method(self, variable: str) -&gt; str: pass class Implementation(Interface): def method(self, variable_changed: str) -&gt; str: return &quot;A&quot; Implementation().method(&quot;d&quot;) # works Implementation().method(variable=&quot;d&quot;) # error Implementation().method(variable_changed=&quot;d&quot;) # works </code></pre> <p>Here we have a class implementing an interface. That's all good, but the implementation changes the name of the first argument of the method. If I run mypy, I would expect to get an error, because the implementation is not following the contract defined by the Interface. To my surprise, this is not the case at all.</p> <p>I'm not sure if this is intended or not, but I would like to detect these kinds of mismatches as soon as possible. Any idea how to fix mypy or how to enable this kind of detection in case it's not an error?</p>
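I can't point to a mypy flag that reports the rename itself, but you can make the interface immune to it: declare the parameter positional-only with `/` (Python 3.8+). Then no caller can ever pass it by keyword, so a rename in an implementation can no longer break anything (this changes the contract rather than detecting the mismatch):

```python
from abc import ABC, abstractmethod

class Interface(ABC):
    @abstractmethod
    def method(self, variable: str, /) -> str:
        """`/` makes `variable` positional-only: callers can never use
        its name, so implementations are free to rename it."""

class Implementation(Interface):
    def method(self, variable_changed: str, /) -> str:
        return "A"

print(Implementation().method("d"))  # A
```

With `/` in place, `method(variable="d")` raises `TypeError` for every implementation alike, so the keyword-call inconsistency from the question cannot arise.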
<python><mypy><python-typing>
2023-02-01 10:54:18
1
2,163
Antonio Gamiz Delgado
75,309,052
14,824,108
Calculating pairwise distances between entries in a `torch.tensor`
<p>I'm trying to implement a manifold alignment type of loss illustrated <a href="https://towardsdatascience.com/manifold-alignment-c67fc3fc1a1c" rel="nofollow noreferrer">here</a>.</p> <p>Given a tensor representing a batch of embeddings of shape <code>(L,N)</code> for example with L=256:</p> <pre><code>tensor([[ 0.0178, 0.0004, -0.0217, ..., -0.0724, 0.0698, -0.0180], [ 0.0160, 0.0002, -0.0217, ..., -0.0725, 0.0655, -0.0207], [ 0.0155, -0.0010, -0.0153, ..., -0.0750, 0.0688, -0.0253], ..., [ 0.0130, -0.0113, -0.0078, ..., -0.0805, 0.0634, -0.0241], [ 0.0120, -0.0047, -0.0135, ..., -0.0846, 0.0722, -0.0230], [ 0.0120, -0.0048, -0.0142, ..., -0.0843, 0.0734, -0.0246]], grad_fn=&lt;AddmmBackward0&gt;) </code></pre> <p>I want to compute all the pairwise distances between the row entries. Resulting in a <code>(L, L)</code> shaped output.</p> <p>I've tried with <code>torch.nn.PairwiseDistance</code> but it is not clear to me if it is useful for what I'm looking for.</p>
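For this exact shape of problem PyTorch has `torch.cdist(emb, emb)`, which returns the (L, L) matrix directly; `torch.nn.PairwiseDistance` only computes element-wise distances between two equal-length batches, not all pairs. The identity behind it, shown here in NumPy so it runs anywhere, is ||x − y||² = ||x||² + ||y||² − 2·x·y:

```python
import numpy as np

def pairwise_distances(emb: np.ndarray) -> np.ndarray:
    """All-pairs Euclidean distances between rows of emb, via broadcasting."""
    sq = (emb ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * emb @ emb.T
    return np.sqrt(np.maximum(d2, 0.0))  # clamp tiny negatives from rounding

emb = np.random.rand(256, 8)
print(pairwise_distances(emb).shape)  # (256, 256)
```

The result is differentiable when written with tensor ops, so the same expression (or simply `torch.cdist`) works inside a loss with `grad_fn` intact.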
<python><machine-learning><pytorch><pairwise-distance>
2023-02-01 10:47:38
1
676
James Arten
75,308,996
1,096,660
How to prevent multiple queries by inlines in Django admin?
<p>I fought with this all day yesterday, but can't really find a way out. I've got three models where A is related to C through B. The A-admin inlines B which has the FK to C.</p> <p>The admin inlines are causing queries per item which is incredibly slow. Making the offending fields raw_id_only does help (readonly even further), but that is not what I'm trying to do.</p> <p>I read in a few bug tickets, that this intended behaviour, because they don't want to keep the queryset for the choices in memory. But I do want that. Just I'm not sure where to start. I'd like to create a dict of all C (just id and <strong>str</strong> actually) in the A-admin once and then pass it to each of the inline admins forms instead of it causing another query. How could I pull this off?</p> <p>Here is the ticket that describes the behaviour in detail: <a href="https://code.djangoproject.com/ticket/31295" rel="nofollow noreferrer">https://code.djangoproject.com/ticket/31295</a></p> <p>Thank You!</p>
<python><django><django-admin><python-3.8><django-database>
2023-02-01 10:43:04
0
2,629
JasonTS
75,308,944
1,901,071
Polars Case Statement
<p>I am trying to pick up the package <a href="https://www.pola.rs/" rel="nofollow noreferrer">polars</a> from Python. I come from an R background so appreciate this might be an incredibly easy question.</p> <p>I want to implement a case statement where if any of the conditions below are true, it will flag it to 1 otherwise it will be 0. My new column will be called 'my_new_column_flag'</p> <p>I am however getting the error message</p> <blockquote> <p>TypeError: invalid input for `col`. Expected `str` or `DataType`, got 'int'.</p> </blockquote> <pre class="lang-py prettyprint-override"><code>import polars as pl import numpy as np np.random.seed(12) df = pl.DataFrame( { &quot;nrs&quot;: [1, 2, 3, None, 5], &quot;names&quot;: [&quot;foo&quot;, &quot;ham&quot;, &quot;spam&quot;, &quot;egg&quot;, None], &quot;random&quot;: np.random.rand(5), &quot;groups&quot;: [&quot;A&quot;, &quot;A&quot;, &quot;B&quot;, &quot;C&quot;, &quot;B&quot;], } ) print(df) df.with_columns( pl.when(pl.col('nrs') == 1).then(pl.col(1)) .when(pl.col('names') == 'ham').then(pl.col(1)) .when(pl.col('random') == 0.014575).then(pl.col(1)) .otherwise(pl.col(0)) .alias('my_new_column_flag') ) </code></pre> <p>Can anyone help?</p>
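The error comes from `then(pl.col(1))` and `otherwise(pl.col(0))`: `pl.col` selects a column by *name*, so it rejects an integer. For a literal value, polars provides `pl.lit(1)` (plain `1` also works in `then`/`otherwise`). As a standalone illustration of the same flag logic, here is the equivalent with NumPy's `select` (used so the sketch runs without polars installed):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "nrs": [1, 2, 3, None, 5],
    "names": ["foo", "ham", "spam", "egg", None],
    "random": [0.1, 0.2, 0.3, 0.4, 0.5],
})

# each condition maps to the literal 1; everything else falls through to 0
conditions = [df["nrs"] == 1, df["names"] == "ham", df["random"] == 0.014575]
df["my_new_column_flag"] = np.select(conditions, [1, 1, 1], default=0)
print(df["my_new_column_flag"].tolist())  # [1, 1, 0, 0, 0]
```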
<python><dataframe><python-polars>
2023-02-01 10:38:37
1
2,946
John Smith
75,308,928
4,865,723
Create a MultiIndex with set_index() transforms 0 and 1 into booleans
<p>I use <code>DataFrame.set_index()</code> to transform two columns into a <code>MultiIndex</code>. The problem is that values with <code>0</code> and <code>1</code> are transformed into booleans <code>False</code> and <code>True</code>.</p> <p>This is the initial table. Please see the values in <code>idx2</code>.</p> <pre><code>| | idx1 | idx2 | val | |---:|:-------|:-------|------:| | 0 | A | | 1 | | 1 | B | False | 2 | | 2 | B | True | 3 | | 3 | C | 0 | 4 | | 4 | C | 1 | 5 | | 5 | C | 2 | 6 | | 6 | C | 3 | 7 | </code></pre> <p>After doing <code>df.set_index(['idx1', 'idx2'])</code> the table looks like this. Look at the 4th and 5th rows and see that the integers are transformed into booleans.</p> <pre><code>| | val | |:-------------|------:| | ('A', '') | 1 | | ('B', False) | 2 | | ('B', True) | 3 | | ('C', False) | 4 | &lt;&lt;&lt;&lt; should be ('C', 0) | ('C', True) | 5 | &lt;&lt;&lt;&lt; should be ('C', 1) | ('C', 2) | 6 | | ('C', 3) | 7 | </code></pre> <p>This happens with pandas version <code>1.5.3</code>.</p> <p>The question is why this happens, and whether there is a way to prevent it.</p> <p>Here is a full MWE:</p> <pre><code>#!/usr/bin/env python3 import pandas df = pandas.DataFrame({ 'idx1': list('ABBCCCC'), 'idx2': ['', False, True, 0, 1, 2, 3], 'val': range(1, 8) }) print(df.to_markdown()) df = df.set_index(['idx1', 'idx2']) print(df.to_markdown()) </code></pre>
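One workaround, assuming the mixed-type column is only needed as a label: force `idx2` to a uniform type before indexing, so pandas has no reason to unify `0`/`1` with `False`/`True` — a sketch:

```python
import pandas as pd

df = pd.DataFrame({
    "idx1": list("ABBCCCC"),
    "idx2": ["", False, True, 0, 1, 2, 3],
    "val": range(1, 8),
})

# map every label to its repr so 0, 1, False and True stay distinguishable
df["idx2"] = df["idx2"].map(repr)
out = df.set_index(["idx1", "idx2"])
print(out.index.tolist())
```

The labels become strings (`'0'`, `'False'`, …), which keeps them distinct at the cost of losing the original dtypes.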
<python><pandas>
2023-02-01 10:37:03
2
12,450
buhtz
75,308,894
262,667
How to type hint a method that returns a subclass?
<p>I have the setup</p> <pre class="lang-py prettyprint-override"><code>import typing class A: def get_b(self) -&gt; B: # &lt;- ERROR: B is not defined yet return b # &lt;- and instance of B here class B(A): # &lt;- subclass of A, defined after A is defined pass </code></pre> <p>Is there a clean way to type hint the method <code>get_b</code>?</p> <p><strong>Edit</strong>: thanks to the wonderful answers, this problem is not really addressed in the documentation, but in some <a href="https://peps.python.org/" rel="nofollow noreferrer">PEPs</a>, one of them is <a href="https://peps.python.org/pep-0484/#the-problem-of-forward-declarations" rel="nofollow noreferrer">the problem of forward declarations</a></p>
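Two standard options here: quote the annotation (a forward reference), or put `from __future__ import annotations` at the top of the module so every annotation is evaluated lazily. A sketch of the quoted form:

```python
class A:
    def get_b(self) -> "B":  # forward reference: the quoted name is resolved later
        return B()


class B(A):  # subclass of A, defined after A
    pass
```

At runtime the string is only resolved when something (e.g. `typing.get_type_hints`) actually inspects the annotation, by which point `B` exists.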
<python><oop><python-typing>
2023-02-01 10:34:50
1
49,544
Olivier Verdier
75,308,879
10,590,609
How can I pass setup-code and processing-code to another process?
<p>I have a generic multiprocessing worker class that takes items from a queue to be processed. A user of the worker class would need to pass a function that processes each item. However, some processing functions need setup-code.</p> <p>The current implementation uses a generator function that has to be correctly implemented by the user to correctly perform the setup code only once, process the items from the queue, and handle the StopIteration exception raised when the worker finishes normally.</p> <p>Can a more straightforward and reliable method be used to separate the setup code from the processing code and handle the exceptions raised by the worker?</p> <p>Here is what I have:</p> <pre class="lang-py prettyprint-override"><code>import multiprocessing as mp import typing P = typing.Callable[[], typing.Generator[None, None, None]] Q: typing.TypeAlias = &quot;mp.Queue&quot; class Worker(mp.Process): def __init__(self, queue: Q, processor: P): mp.Process.__init__(self) self.queue = queue self.processor = processor def run(self): processor = self.processor() next(processor) # start the processor while True: item = self.queue.get() processor.send(item) if item is None: break class WorkerPool: def __init__(self, n_workers: int, processor_generator: P, queue: Q): self.workers = [Worker(queue, processor_generator) for _ in range(n_workers)] self.queue = queue def __enter__(self): for worker in self.workers: worker.start() def signal_end(self): for _ in self.workers: self.queue.put(None) def terminate(self): for worker in self.workers: worker.terminate() def __exit__(self, exc_type, exc_val, exc_tb): if exc_type is None: self.signal_end() self.join() return True self.terminate() return False def join(self): for worker in self.workers: worker.join() class GeneratorWorkerManager: def __init__( self, item_generator: typing.Generator, processor_generator: P, n_workers: int ) -&gt; None: queue: Q = mp.Queue() with WorkerPool(n_workers, processor_generator, queue): for item 
in item_generator: queue.put(item) </code></pre> <p>A user of the <code>GeneratorWorkerManager</code> class could do the following:</p> <pre class="lang-py prettyprint-override"><code>def processor(): # All sorts of setup code possible, including a with-statement. item = yield while item is not None: # process item print(item) item = yield return items = range(10) GeneratorWorkerManager(items, processor, 1) </code></pre> <p>Where the worker would print 0 to 9. However, this relies on the user implementing the processor function correctly. The worker also raises a <code>StopIteration</code> exception when it finishes normally.</p> <p>Is there a better way to use setup code and processing code in the same context?</p>
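One simpler contract, sketched below outside any multiprocessing machinery: instead of a generator driven with `send`, let the user pass a *factory* that runs the setup code once and returns a plain per-item callable. The worker's `run` would call the factory once and then invoke the returned function for each queue item, so there is no priming `next()` and no `StopIteration` to manage (the names here are illustrative, not part of the original code):

```python
from typing import Callable, Iterable, List


def make_processor() -> Callable[[int], int]:
    # setup code runs exactly once (could open files, enter context managers, ...)
    offset = 10

    # per-item processing is an ordinary function closing over the setup state
    def process(item: int) -> int:
        return item + offset

    return process


def run_worker(factory: Callable[[], Callable[[int], int]], items: Iterable[int]) -> List[int]:
    process = factory()  # setup happens here, once per worker
    return [process(i) for i in items]


print(run_worker(make_processor, range(3)))  # [10, 11, 12]
```

Setup needing guaranteed teardown can still use a `with` statement inside the factory's owner loop, or the factory can return an `(enter, process, exit)` triple; the key point is that item handling becomes a plain call the worker can wrap in try/except.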
<python><multithreading><multiprocessing>
2023-02-01 10:33:34
1
332
Izaak Cornelis
75,308,818
2,876,079
Pandas to_sql ignores foreign key constrains when appending to sqlite table?
<p>I use the pandas method <code>to_sql</code> to append a DataFrame to some sqlite table. The sqlite table has a foreign key constraints on a column <code>id_region</code> that pandas should consider. The available values for <code>id_region</code> are 1, 2.</p> <p>If the DataFrame contains a non-existing <code>id_region</code> value 3, I would expect <code>to_sql</code> to throw an exception.</p> <p>However, the data is written to the database without exception and the foreign key constraint is ignored.</p> <p>If I manually change the value in the sqlite database using Navicat, for example to 1 and then back to 3, I get the expected error.</p> <p>=&gt; The foreign key constraint in sqlite seems to work but not when inserting the data.</p> <p>=&gt; How can I tell pandas to consider the foreign key constraint?</p> <p>Example code to reproduce the issue:</p> <pre><code>import sqlite3 import pandas as pd file_path = 'demo.sqlite' id_region = pd.DataFrame([ {'id': 1, 'label': 'foo'}, {'id': 2, 'label': 'baa'}, ]) id_region.set_index(['id'], inplace=True) data = pd.DataFrame([ {'id': 1, 'id_region': 3, 'value': 1} ]) data.set_index(['id'], inplace=True) create_data_table_query = 'CREATE TABLE `data` (' +\ 'id integer PRIMARY KEY NOT NULL, ' +\ 'id_region integer NOT Null, ' +\ 'value real NOT NULL, ' + \ 'FOREIGN KEY (id_region) REFERENCES id_region(id)' + \ ')' with sqlite3.connect(file_path) as connection: id_region.to_sql('id_region', connection, index_label='id') cursor = connection.cursor() cursor.execute(create_data_table_query ) data.to_sql('data', connection, index_label='id', if_exists='append') </code></pre> <p>Tables created by the above code:</p> <p><strong>id_region</strong>:</p> <p><a href="https://i.sstatic.net/QIYpF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QIYpF.png" alt="enter image description here" /></a></p> <p><strong>data</strong>, referencing id_region:</p> <p><a href="https://i.sstatic.net/e8jf6.png" rel="nofollow 
noreferrer"><img src="https://i.sstatic.net/e8jf6.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/qETSN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qETSN.png" alt="enter image description here" /></a></p>
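This is most likely not a pandas issue at all: Python's `sqlite3` opens connections with foreign-key enforcement *disabled* (SQLite's historical default), so every insert — including those issued by `to_sql` — skips the check. Enabling it per connection with a pragma makes the violation raise, as this standalone sketch shows:

```python
import sqlite3

connection = sqlite3.connect(":memory:")
connection.execute("PRAGMA foreign_keys = ON")  # enforcement is off by default per connection

connection.execute("CREATE TABLE id_region (id INTEGER PRIMARY KEY, label TEXT)")
connection.execute(
    "CREATE TABLE data (id INTEGER PRIMARY KEY, id_region INTEGER NOT NULL, "
    "value REAL NOT NULL, FOREIGN KEY (id_region) REFERENCES id_region(id))"
)
connection.executemany("INSERT INTO id_region VALUES (?, ?)", [(1, "foo"), (2, "baa")])

try:
    connection.execute("INSERT INTO data VALUES (1, 3, 1.0)")  # id_region 3 does not exist
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```

Executing the same `PRAGMA foreign_keys = ON` on the connection handed to `to_sql` should make the appended rows subject to the constraint as well.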
<python><pandas><sqlite><foreign-keys>
2023-02-01 10:29:01
1
12,756
Stefan
75,308,553
6,672,815
Implementing decorators in terms of closures with pyo3
<p>As a learning exercise I'm trying to implement a parameterised decorator function in pyo3 using closures. The pyo3 documentation contains an example of a (non-parameterised) decorator implemented as a class with a <code>__call__</code> method, and I've built on this and created a parameterised decorator using an outer class with a <code>__call__</code> method that returns an inner class with a <code>__call__</code> that invokes the target function, and it works. But as a learning exercise (I could do with improving my understanding of lifetimes especially) I wanted to try to implement the same thing in terms of closures. (NB I've previously done this with lambdas in C++)</p> <p>So my non-parameterised decorator, after some experimenting and fighting with the compiler, looks like this:</p> <pre class="lang-rust prettyprint-override"><code>#[pyfunction] pub fn exectime(py: Python, wraps: PyObject) -&gt; PyResult&lt;&amp;PyCFunction&gt; { PyCFunction::new_closure( py, None, None, move |args: &amp;PyTuple, kwargs: Option&lt;&amp;PyDict&gt;| -&gt; PyResult&lt;PyObject&gt; { Python::with_gil(|py| { let now = Instant::now(); let ret = wraps.call(py, args, kwargs); println!(&quot;elapsed (ms): {}&quot;, now.elapsed().as_millis()); ret }) } ) } </code></pre> <p>Note I needed to wrap the captured <code>py</code> in a <code>Python::with_gil</code> to make it work. 
Trying to extend this to a nested decorator I came up with:</p> <pre class="lang-rust prettyprint-override"><code>#[pyfunction] pub fn average_exectime(py: Python, n: usize) -&gt; PyResult&lt;&amp;PyCFunction&gt; { let f = move |args: &amp;PyTuple, _kwargs: Option&lt;&amp;PyDict&gt;| -&gt; PyResult&lt;&amp;PyCFunction&gt; { Python::with_gil(|py| { let wraps: PyObject = args.get_item(0)?.into(); let g = move |args: &amp;PyTuple, kwargs: Option&lt;&amp;PyDict&gt;| -&gt; PyResult&lt;PyObject&gt; { Python::with_gil(|py| { let now = Instant::now(); for _ in 0..n-1 { wraps.call(py, args, kwargs); } let ret = wraps.call(py, args, kwargs); println!(&quot;elapsed (ms): {}&quot;, now.elapsed().as_millis()); ret }) }; PyCFunction::new_closure(py, None, None, g) }) }; PyCFunction::new_closure(py, None, None, f) } </code></pre> <p>for which the compiler tells me:</p> <pre><code>error: lifetime may not live long enough ] 44/45: poetry-rust-integration --&gt; src/decorator.rs:48:13 | 35 | Python::with_gil(|py| { | --- return type of closure is Result&lt;&amp;'2 pyo3::types::PyCFunction, pyo3::PyErr&gt; | | | has type `pyo3::Python&lt;'1&gt;` ... 48 | PyCFunction::new_closure(py, None, None, g) | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ returning this value requires that `'1` must outlive `'2` Building [=========================&gt; ] 44/45: poetry-rust-integration error: aborting due to previous error </code></pre> <p>I've tried all sorts of lifetime parameters including enclosing lifetimes to no avail, I just end up with more errors. I guess I don't understand why the compiler thinks the inner lifetime must outlive the other? Isn't it sufficient to tell the complier they both have the same lifetime? And if so, how to achieve this?</p>
<python><rust><decorator><pyo3>
2023-02-01 10:10:24
2
830
virgesmith
75,308,496
10,134,422
How do I run uvicorn in a docker container that exposes the port?
<p>I am developing a FastAPI app inside a Docker container on Windows/Ubuntu (code below). When I test the app outside the container by running <em>python -m uvicorn app:app --reload</em> in the terminal and then navigating to <em>127.0.0.1:8000/home</em> everything works fine:</p> <pre><code>{ Data: &quot;Test&quot; } </code></pre> <p>However, when I <em>docker-compose up</em> I can neither run <em>python -m uvicorn app:app --reload</em> in the container (due to the port already being used), nor see anything returned in the browser. I have tried 127.0.0.1:8000/home, host.docker.internal:8000/home and localhost:8000/home and I always receive:</p> <pre><code>{ detail: &quot;Not Found&quot; } </code></pre> <p>What step am I missing?</p> <p>Dockerfile:</p> <pre><code>FROM python:3.8-slim EXPOSE 8000 ENV PYTHONDONTWRITEBYTECODE=1 ENV PYTHONUNBUFFERED=1 COPY requirements.txt . RUN python -m pip install -r requirements.txt WORKDIR /app COPY . /app RUN adduser -u nnnn --disabled-password --gecos &quot;&quot; appuser &amp;&amp; chown -R appuser /app USER appuser CMD [&quot;gunicorn&quot;, &quot;--bind&quot;, &quot;0.0.0.0:8000&quot;, &quot;-k&quot;, &quot;uvicorn.workers.UvicornWorker&quot;, &quot;app:app&quot;] </code></pre> <p>Docker-compose:</p> <pre><code>version: '3.9' services: fastapitest: image: fastapitest build: context: . dockerfile: ./Dockerfile ports: - 8000:8000 extra_hosts: - &quot;host.docker.internal:host-gateway&quot; </code></pre> <p>app.py:</p> <pre><code>from fastapi import FastAPI app = FastAPI() @app.get(&quot;/home&quot;) # must be one line above the function for the route def home(): return {&quot;Data&quot;: &quot;Test&quot;} if __name__ == '__main__': import uvicorn uvicorn.run(app, host=&quot;127.0.0.1&quot;, port=8000) </code></pre>
<python><docker><fastapi><uvicorn>
2023-02-01 10:07:00
1
460
Sanchez333
75,308,373
275,195
How do I keep the first entry from consecutive entries in a DataFrame?
<p>With the following DataFrame:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import random random.seed(3) df = pd.DataFrame( data=[random.sample([&quot;A&quot;,&quot;B&quot;],1) for i in range(6)], columns=[&quot;category&quot;] ) </code></pre> <p>We get:</p> <p><a href="https://i.sstatic.net/1goef.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1goef.png" alt="enter image description here" /></a></p> <p><strong>How do I get only the first row for each consecutive category group?</strong></p> <blockquote> <p>Note: the data can contain an arbitrary number of repeats - I only want the first of each consecutive group.</p> </blockquote> <p>Expected would be:</p> <pre><code> category 0 A 2 B 4 A </code></pre> <p>I hoped that the <code>sort</code> flag from <code>groupby()</code> would solve this, but it nevertheless treats all occurences of category as a group - not consecutive ones:</p> <pre class="lang-py prettyprint-override"><code>df.groupby(&quot;category&quot;).head(1) </code></pre> <p><a href="https://i.sstatic.net/H5rDP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H5rDP.png" alt="enter image description here" /></a></p> <p>As I am learning pandas and my <code>DataFrame</code> can become very large I'm searching for a pandas native solution and not iterating over the array or <code>DataFrame</code>.</p> <p>While the answers from <a href="https://stackoverflow.com/q/32683492/275195">Make Pandas groupby act similarly to itertools groupby</a> can be applied here, the posed question is different. As such I would leave this question open so it's easier to find an answer.</p>
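The vectorised pandas idiom for this, in the spirit of the linked groupby answers: compare each row with its predecessor via `shift`, which marks exactly the first row of every consecutive run — a sketch:

```python
import pandas as pd

df = pd.DataFrame({"category": ["A", "A", "B", "B", "A", "A"]})

# a run starts wherever the value differs from the previous row
first_of_run = df[df["category"] != df["category"].shift()]
print(first_of_run)
#   category
# 0        A
# 2        B
# 4        A
```

The first row always qualifies because `shift()` yields `NaN` there, and `NaN` compares unequal to everything. No Python-level iteration is involved, so it scales to large frames.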
<python><pandas><dataframe><group-by>
2023-02-01 09:56:37
1
2,338
Pascal
75,308,316
7,012,917
Pandas DataFrame.corr() doesn't give same results as Series.corr()
<p>I have two timeseries</p> <pre><code>jpm = pd.read_csv(...) # JPM GBI Global All Traded msci = pd.read_csv(...) # MSCI WORLD U$ </code></pre> <p>Together in a DataFrame they look like</p> <pre><code>df = jpm.merge(msci, how='outer', on='Date', sort=True) df </code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>Date</th> <th>JPM GBI Global All Traded</th> <th>MSCI WORLD U$</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>1970-01-01</td> <td>NaN</td> <td>100.0</td> </tr> <tr> <td>1</td> <td>1970-01-02</td> <td>NaN</td> <td>100.0</td> </tr> <tr> <td>2</td> <td>1970-01-05</td> <td>NaN</td> <td>100.0</td> </tr> <tr> <td>3</td> <td>1970-01-06</td> <td>NaN</td> <td>100.0</td> </tr> <tr> <td>4</td> <td>1970-01-07</td> <td>NaN</td> <td>100.670</td> </tr> <tr> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <td>13838</td> <td>2023-01-17</td> <td>492.3360</td> <td>2736.452</td> </tr> <tr> <td>13839</td> <td>2023-01-18</td> <td>496.4402</td> <td>2713.537</td> </tr> <tr> <td>13840</td> <td>2023-01-19</td> <td>494.9905</td> <td>2685.317</td> </tr> <tr> <td>13841</td> <td>2023-01-20</td> <td>492.3206</td> <td>2725.396</td> </tr> <tr> <td>13842</td> <td>2023-01-23</td> <td>491.5816</td> <td>2754.961</td> </tr> </tbody> </table> </div> <p>I want to compute the correlation between the two timeseries.</p> <p><strong>Using DataFrame.corr()</strong>:</p> <pre><code>corr1 = df.corr(method='pearson', min_periods=1, numeric_only=True) corr1 </code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>JPM GBI Global All Traded</th> <th>MSCI WORLD U$</th> </tr> </thead> <tbody> <tr> <td><strong>JPM GBI Global All Traded</strong></td> <td>1.000000</td> <td>0.849705</td> </tr> <tr> <td><strong>MSCI WORLD U$</strong></td> <td>0.849705</td> <td>1.000000</td> </tr> </tbody> </table> </div> <p>Correlation is <code>0.849705</code></p> <p><strong>Using Series.corr()</strong>:</p> <pre><code>s1 = jpm['JPM 
GBI Global All Traded']#.dropna() s2 = msci['MSCI WORLD U$']#.dropna() corr2 = s1.corr(s2, method='pearson', min_periods=1) corr2 </code></pre> <p>Correlation is <code>0.904641</code></p> <hr /> <p>As you can see the two correlations don't match even though they should. And I've also tried applying the .dropna() function manually but it makes no difference.</p> <p>And according to the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.corr.html" rel="nofollow noreferrer">pandas.DataFrame.corr</a> documentation:</p> <blockquote> <p>Compute pairwise correlation of columns, excluding NA/null values.</p> </blockquote> <p>Is it a bug with my code or with pandas?</p>
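This is likely not a bug but an alignment difference: `Series.corr` first aligns the two series on their *index labels* and drops non-overlapping rows. Here `s1` and `s2` come straight from the two source frames, so they are paired by their original row numbers — not by `Date` as in the merged frame — and a different (and differently sized) set of row pairs enters the calculation. A minimal sketch of the alignment effect (toy numbers, not the question's data):

```python
import pandas as pd

s1 = pd.Series([1.0, 2.0, 30.0], index=[0, 1, 2])
s2 = pd.Series([10.0, 20.0, 30.0], index=[1, 2, 3])

# only labels 1 and 2 overlap, so corr is computed from two pairs: (2, 10) and (30, 20)
print(s1.corr(s2))  # 1.0 — any two upward-sloping points correlate perfectly
```

Setting `Date` as the index on both series (or slicing both columns out of the merged `df`) before calling `Series.corr` should make the two numbers agree.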
<python><pandas><dataframe>
2023-02-01 09:51:10
1
1,080
Nermin
75,308,311
6,054,404
Getting the shortest path geometry using python networkx
<p>How do I get the path as a geometry from networkx?</p> <p>Below is my worked example which returns the <code>nx.shortest_path_length</code></p> <pre><code>import osmnx as ox import networkx as nx from shapely.geometry import Point def get_network(centre_point, dist): G = ox.graph_from_point(centre_point, dist=dist, network_type='walk', simplify=False) G = ox.add_edge_speeds(G) G = ox.add_edge_travel_times(G) nodes, edges = ox.graph_to_gdfs(G, nodes=True, edges=True) return G, nodes, edges point = (50.864595387190924, -2.153190840083006) G, nodes, edges = get_network(centre_point=point, dist=1000) a = nodes.iloc[4].name b = nodes.iloc[20].name nx.shortest_path_length(G, a, b, weight='length', method='dijkstra') </code></pre> <p>This gives the shortest path length as <code>223.964</code>, how do I get the actual path geometry of this shortest path? The paths are in <code>edges</code> but how do I extract the correct ones for this path?</p>
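`shortest_path_length` only returns the cost; `nx.shortest_path` returns the ordered node list, and zipping it against itself yields the consecutive node pairs — exactly the keys needed to look the geometries up in `edges` (for an osmnx multigraph the edge index also carries a key, typically `0`). A minimal sketch of extracting the edge pairs on a toy graph:

```python
import networkx as nx

G = nx.Graph()
G.add_edge("a", "b", length=1.0)
G.add_edge("b", "c", length=1.0)
G.add_edge("a", "c", length=5.0)  # direct edge is longer than the detour

route = nx.shortest_path(G, "a", "c", weight="length")
route_edges = list(zip(route[:-1], route[1:]))  # consecutive node pairs along the path
print(route, route_edges)
```

With the question's `edges` GeoDataFrame (indexed by `u, v, key`), selecting `edges.loc[(u, v, 0), "geometry"]` for each pair — or a similar lookup — should recover the path geometry; the `0` key is an assumption for simplified single-edge pairs.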
<python><networkx><geopandas>
2023-02-01 09:50:47
1
1,993
Spatial Digger
75,308,263
4,614,499
Prevent Custom Renderer to render Exceptions in Django Rest Framework
<p>I have written a custom Renderer that allows rendering nested json data as flat Excel files.</p> <pre><code>class ExcelRenderer(renderers.BaseRenderer): &quot;&quot;&quot;Custom renderer for exporting XLS files&quot;&quot;&quot; media_type = &quot;application/xlsx&quot; format = &quot;excel&quot; render_style = 'binary' def render(self, data, accepted_media_type=None, renderer_context=None): query_params = renderer_context[&quot;request&quot;].query_params query_params = &quot;_&quot;.join([f&quot;{v.split('T')[0]}&quot; for k,v in query_params.items()]) &quot;&quot;&quot; Render `data` into XLSX workbook, returning a workbook. &quot;&quot;&quot; try: df = json_into_flat_dataframe(data) except Exception as e: logger.error(e, exc_info=True) df = pd.DataFrame() if len(df.index) == 0: raise rest_exceptions.APIException(&quot;No data!&quot;) output = io.BytesIO() # Use a temp filename to keep pandas happy. writer = pd.ExcelWriter(output, engine='xlsxwriter') # Write the data frame to the StringIO object. df.to_excel(writer, sheet_name='Sheet1', index=False) writer.save() data = output.getvalue() #Enable direct download. renderer_context['response'][&quot;content-disposition&quot;] = f&quot;attachment; filename=vodepro_export_{query_params}.xlsx&quot; return data </code></pre> <p>This works as expected as long as there is no exception (e.g. <code>ValidationError</code>) in the process before rendering. When a ValidationError is raised, my custom renderer tries to render it (and fails, since this was never the designed behaviour of the renderer).</p> <p>I would like to implement a renderer that would not try to parse ValidationErrors (or any other exceptions, in fact) into Excel, but would fall back to normal DRF behaviour (the JSON or BrowsableAPI renderer) in such cases.</p>
<python><excel><pandas><django-rest-framework><renderer>
2023-02-01 09:47:03
1
2,879
Marjan Moderc
75,308,137
10,545,426
Serialize and Deserialize a custom tensorflow model which can be loaded back to a python object
<p>I have a requirement of saving my custom tensorflow model which has a mix of python objects and tf tensors included within. I want something that can help serialize this custom tensorflow python class. Something like <code>pickle.dumps(custom_tf_python_class)</code> so that I can just load it like <code>custom_tf_python_class = pickle.load(fp)</code>. I have tried <code>pickle</code>, <code>dill</code>, and <code>cloudpickle</code>, but I always get the <code>TypeError: can't pickle _thread.RLock objects</code>.</p> <p>I am not, necessarily, limited to pickle, any module/library that can help me serialize this object will do? Maybe I need to implement a custom <code>__reduce__</code> function but I am not sure how to work with that?</p> <p>I am using <code>tensorflow==2.11.0</code></p> <p>My sample code</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf class MyModule(tf.Module): def __init__(self, input_dim, output_dim): self.input_shape = input_dim self.dense = tf.keras.layers.Dense(output_dim, input_shape=(input_dim,)) @property def input_shape(self): return self._input_shape @input_shape.setter def input_shape(self, value): self._input_shape = value @property def name(self): return &quot;Something&quot; def __repr__(self): return f&quot;{self.name}-{super().__repr__()}&quot; def __call__(self, inputs): return self.dense(inputs) class MyModuleWithSubModule(tf.Module): def __init__(self): self.sub_module = MyModule(input_dim=2, output_dim=1) def __call__(self, inputs): return self.sub_module(inputs) @property def name(self): return &quot;SomethingElse&quot; def __repr__(self): return f&quot;{self.name}-{super().__repr__()}&quot; main_module = MyModuleWithSubModule() </code></pre>
<python><tensorflow><pickle>
2023-02-01 09:36:21
0
399
Stalin Thomas
75,307,956
14,649,310
Can the GNU gettext module in Python fetch translations from cloud or from another container?
<p>This is more of an structural architectural question. I am using Python <a href="https://phrase.com/blog/posts/translate-python-gnu-gettext/" rel="nofollow noreferrer">gettext module</a> to fetch translations. But the way it works, as described in the link as well, the translations have to be in the same repo and pod with the Python code so that the gettext module can find them in the local directory when the application is running.</p> <p>The issue with this is that we have to update and redeploy the repo every time the translations change. Would it be possible for gettext module to fetch the translation from cloud storage or even better maybe from a different pod which is deployed independently from the application? Any suggestions on how I can separate the code from the translations?</p>
<python><docker><localization><gettext>
2023-02-01 09:21:04
1
4,999
KZiovas
75,307,944
3,923,078
Force data relocation between workers on Dask?
<p>I am working with Dask Arrays. The data is read from parquet files as Dask DataFrame and I need to transform the data into column-wise arrays to accelerate certain computation. However, after rechunking, the data is located in a single worker and not able to scale to the entire cluster.</p> <p>I read from other <a href="https://stackoverflow.com/questions/57013717/forced-or-explicit-data-rebalancing-with-dask-distributed">question</a> that Dask does not invoke rebalancing when the data size is fairly small. Nonetheless I need to perform sorting so forcing data relocation should be desirable. I did not find a suitable method to move data between workers and can anyone suggest?</p> <p><a href="https://i.sstatic.net/3NDbn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3NDbn.png" alt="Rechunking" /></a> <a href="https://i.sstatic.net/g5rM3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g5rM3.png" alt="Data is located in a single worker" /></a></p> <p>Note: the same <a href="https://stackoverflow.com/questions/57013717/forced-or-explicit-data-rebalancing-with-dask-distributed">answer</a> shows it is viable to store the data as NPY files and read back. But it is a bit clumsy as the data is already in memory and all you need is just relocation.</p>
<python><dask>
2023-02-01 09:19:58
0
597
Fontaine007
75,307,905
6,854,595
Python typing for a metaclass Singleton
<p>I have a Python (3.8) metaclass for a singleton as seen <a href="https://stackoverflow.com/a/6798042/6854595">here</a></p> <p>I've tried to add typings like so:</p> <pre class="lang-py prettyprint-override"><code>from typing import Dict, Any, TypeVar, Type _T = TypeVar(&quot;_T&quot;, bound=&quot;Singleton&quot;) class Singleton(type): _instances: Dict[Any, _T] = {} def __call__(cls: Type[_T], *args: Any, **kwargs: Any) -&gt; _T: if cls not in cls._instances: cls._instances[cls] = super().__call__(*args, **kwargs) return cls._instances[cls] </code></pre> <p>In the line:</p> <pre><code>_instances: Dict[Any, _T] = {} </code></pre> <p>MyPy warns:</p> <p><code>Mypy: Type variable &quot;utils.singleton._T&quot; is unbound</code></p> <p>I've tried different iterations of this to no avail; it's very hard for me to figure out how to type this dict.</p> <p>Further, the line:</p> <pre class="lang-py prettyprint-override"><code>def __call__(cls: Type[_T], *args: Any, **kwargs: Any) -&gt; _T: </code></pre> <p>Produces:</p> <p><code>Mypy: The erased type of self &quot;Type[golf_ml.utils.singleton.Singleton]&quot; is not a supertype of its class &quot;golf_ml.utils.singleton.Singleton&quot;</code></p> <p>How could I correctly type this?</p>
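The TypeVar is unbound because a class-level attribute annotation is not a generic context, and `cls` in a metaclass's `__call__` is the *instance* of the metaclass, not a `Type[_T]`. One pragmatic typing that mypy accepts, at the cost of some precision inside the metaclass body (a sketch, not the only possible annotation):

```python
from typing import Any, Dict


class Singleton(type):
    # keyed by the class object itself; Any sidesteps the unbound-TypeVar problem
    _instances: Dict[type, Any] = {}

    def __call__(cls, *args: Any, **kwargs: Any) -> Any:
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]


class Config(metaclass=Singleton):
    pass
```

Callers still get precise types in practice, because `Config()` is inferred as `Config` from the class itself; only the metaclass internals are loosely typed.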
<python><mypy><python-typing><metaclass>
2023-02-01 09:16:30
2
540
alexcs
75,307,868
10,967,961
Multiprocessing uses python 2.7 even if I am in a python 3 env
<p>I am trying to run a code (confronting pairwise strings in a dict basically) using concurrent.futures. The code looks as follows (where masterprova is a dict of 100 elements):</p> <pre><code>import concurrent.futures from collections import defaultdict from itertools import combinations #import fuzz def process_chunk(masterprova_chunk, d, threshold): for (i,ii),(j,jj) in combinations(masterprova_chunk, 2): if jj not in d[i] and fuzz.partial_ratio(ii,jj) &gt;= threshold: d[i].add(jj) return d threshold = 90 d = defaultdict(set) for (i,ii) in master.items(): d[i].add(ii) # Split masterprova into chunks for parallel processing num_workers = 4 # number of parallel processes to use chunk_size = len(masterprova)//num_workers masterprova_chunks = [list(masterprova.items())[i:i + chunk_size] for i in range(0, len(masterprova), chunk_size)] </code></pre> <p>everything is good until here...then when I try putting the chunks together calling concurrent.futures as follows:</p> <pre><code>with concurrent.futures.ProcessPoolExecutor(max_workers=num_workers) as executor: results = [executor.submit(process_chunk, masterprova_chunk, d, threshold) for masterprova_chunk in masterprova_chunks] for f in concurrent.futures.as_completed(results): d.update(f.result()) </code></pre> <p>an error occurs referring to multiprocessing in python 2.7. The fact is that I am using python 3. and I have already tried to uninstall multiprocessing and re-install it with pip3 directly in the notebook but it did not work. 
The error looks as follows:</p> <pre><code>Traceback (most recent call last): File ~/opt/anaconda3/envs/single_auth_paper/lib/python3.10/site-packages/IPython/core/interactiveshell.py:3397 in run_code exec(code_obj, self.user_global_ns, self.user_ns) Input In [53] in &lt;cell line: 1&gt; with concurrent.futures.ProcessPoolExecutor(max_workers=num_workers) as executor: File ~/opt/anaconda3/envs/single_auth_paper/lib/python3.10/concurrent/futures/__init__.py:44 in __getattr__ from .process import ProcessPoolExecutor as pe File ~/opt/anaconda3/envs/single_auth_paper/lib/python3.10/concurrent/futures/process.py:51 in &lt;module&gt; import multiprocessing as mp File /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/__init__.py:64 in &lt;module&gt; from multiprocessing.process import Process, current_process, active_children File /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/process.py:271 except SystemExit, e: ^ SyntaxError: multiple exception types must be parenthesized </code></pre> <p>How can I fix it? How can multiprocessing be used with python 3. rather than python 2.7 in a jupyter notebook that uses python 3. specifically?</p> <p>Thanks</p>
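The traceback shows the real problem: `import multiprocessing` is resolving to `/Library/Frameworks/Python.framework/Versions/2.7/...`, which can only happen if that directory is on `sys.path` — typically via a stale `PYTHONPATH` — so this is an environment issue, not a package to reinstall (`multiprocessing` is stdlib in Python 3). A diagnostic sketch; the filtering line is a stop-gap, and the durable fix is cleaning `PYTHONPATH` in the shell profile or the Jupyter kernel spec:

```python
import os
import sys

print(os.environ.get("PYTHONPATH"))  # look for Python 2.7 paths leaking in
print("suspect entries:", [p for p in sys.path if "2.7" in p])

# stop-gap: drop the stale entries before importing multiprocessing
sys.path = [p for p in sys.path if "2.7" not in p]
import multiprocessing
print(multiprocessing.__file__)  # should now point into the Python 3 stdlib
```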
<python><python-3.x><python-2.7><multiprocessing>
2023-02-01 09:14:06
0
653
Lusian
75,307,857
19,950,360
I want to make cloud run log when come to input
<p>When I receive input, such as a URL, I want to log it in Cloud Run.</p> <p>I want to see my input with Python.</p>
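A minimal sketch of one common approach (assuming a standard Cloud Run setup): anything the container writes to stdout or stderr is captured by Cloud Logging, and a single-line JSON object is ingested as a structured log entry, so logging the incoming URL can be as simple as:

```python
import json
import sys

def log(message, severity="INFO", **fields):
    # Cloud Run forwards stdout/stderr to Cloud Logging; one JSON
    # object per line is parsed into a structured log entry.
    entry = {"severity": severity, "message": message, **fields}
    print(json.dumps(entry), file=sys.stdout, flush=True)

# e.g. inside a request handler, with `url` taken from the request:
log("received input", url="https://example.com/some/path")
```

The <code>log</code> helper and the example URL are illustrative, not from the question; in a Flask or FastAPI handler you would pass the real request URL.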
<python><google-cloud-platform><google-cloud-run>
2023-02-01 09:13:27
1
315
lima
75,307,814
5,684,405
Error: Python packaging tool 'setuptools' not found
<p>I have a Poetry project that is not using setuptools:</p> <pre><code>[tool.poetry.dependencies]
python = &quot;&gt;=3.9,&lt;3.11&quot;
opencv-python = &quot;^4.7.0.68&quot;
tensorflow-macos = &quot;^2.11.0&quot;
tensorflow-metal = &quot;^0.7.0&quot;
</code></pre> <p>but I keep getting this error in PyCharm. Command from the screenshot:</p> <pre><code>/Users/mc/Library/Caches/pypoetry/virtualenvs/besafe-_8yAv-v6-py3.9/bin/Python /Users/mc/Library/Application Support/JetBrains/Toolbox/apps/PyCharm-P/ch-0/223.8214.51/PyCharm.app/Contents/plugins/python/helpers/packaging_tool.py list
</code></pre> <p>It just pops up without any action on my side. It seems like PyCharm is doing some execution under the hood, but I do not know what it is.</p> <p>I do not understand how I am supposed to fix this.</p> <p><a href="https://i.sstatic.net/2TSGq.png" rel="noreferrer"><img src="https://i.sstatic.net/2TSGq.png" alt="enter image description here" /></a></p>
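PyCharm's helper script relies on <code>setuptools</code>/<code>pip</code> being importable inside the interpreter it inspects, and Poetry-created virtualenvs may omit <code>setuptools</code>. One workaround (a sketch, not a confirmed fix for this exact setup) is to make it available in the environment, e.g. <code>poetry run pip install setuptools</code>, or to declare it as a dev dependency:

```toml
# pyproject.toml -- hypothetical addition; the group name is illustrative
[tool.poetry.group.dev.dependencies]
setuptools = "*"
```

After <code>poetry install</code>, re-running PyCharm's package listing should then find the tool it is looking for.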
<python><pycharm><setuptools><python-poetry>
2023-02-01 09:09:21
4
2,969
mCs
75,307,709
18,744,117
Uvicorn + Quart yielding error 413 on post request
<p>I'm trying to set up a simple HTTP server to receive (potentially large) JSON objects.</p> <pre><code>from quart import Quart, request
import uvicorn

app = Quart(__name__)

@app.route('/', methods=['POST'])
async def hello():
    json = await request.json
    return ('', 204)

config = uvicorn.Config(
    app,
    host=&quot;127.0.0.1&quot;,
    port=8000,
    ws_max_size=2**64,
    h11_max_incomplete_event_size=2**64,
)
server = uvicorn.Server(config)
server.run()
</code></pre> <p>However, when I test this using the following:</p> <pre><code>import requests
requests.post(&quot;http://127.0.0.1:8000/&quot;, json={&quot;test&quot;: &quot;0&quot; * (2**32)})
</code></pre> <p>I get <code>&quot;POST / HTTP/1.1&quot; 413 Request Entity Too Large</code></p> <p>But I can't seem to figure out where this is coming from.</p>
<python><httpserver><uvicorn><quart><http-status-code-413>
2023-02-01 08:59:42
1
683
Sam Coutteau
75,307,652
13,762,083
Attribute error when trying to import scipy.interpolate
<p>I am trying to import <code>scipy.interpolate</code> using the following code:</p> <pre><code>from scipy.interpolate import interp1d </code></pre> <p>but I get the following error:</p> <pre><code>AttributeError: scipy.spatial.qhull is deprecated and has no attribute __pyx_capi__. Try looking in scipy.spatial instead. </code></pre> <p>How should I fix this?</p>
<python><scipy><python-import>
2023-02-01 08:54:31
0
409
ranky123
75,307,639
8,124,392
Autoencoder outputs pixelated colorful predictions
<p>I am working on an autoencoder to remove motion blur from images. I am using a small dataset of 1816 blurry and 1816 sharp images.</p> <p>This is my autoencoder with 6 layers:</p> <pre><code>physical_devices = tf.config.list_physical_devices('GPU') if physical_devices: tf.config.experimental.set_memory_growth(physical_devices[0], True) seed = 21 random.seed = seed np.random.seed = seed # Paths to the good images and the corresponding motion blurred images good_frames = '/mnt/share/Datasets/BLUR_small/BLUR/sharp' bad_frames = '/mnt/share/Datasets/BLUR_small/BLUR/motion_blurred' # Network Parameters dims = 128 input_shape = (dims, dims, 3) batch_size = 32 kernel_size = 3 latent_dim = 256 # Below is a custom data loader. def load_image(file, target_size): image = tf.keras.preprocessing.image.load_img(file, target_size=target_size) image = tf.keras.preprocessing.image.img_to_array(image).astype('float32') / 255 return image clean_frames = [] blurry_frames = [] extensions = ['.jpg', 'jpeg', '.png'] for file in tqdm(sorted(os.listdir(good_frames))): if any(extension in file for extension in extensions): file_path = os.path.join(good_frames, file) clean_frames.append(load_image(file_path, (dims,dims))) clean_frames = np.array(clean_frames) for file in tqdm(sorted(os.listdir(bad_frames))): if any(extension in file for extension in extensions): file_path = os.path.join(bad_frames, file) blurry_frames.append(load_image(file_path, (dims,dims))) blurry_frames = np.array(blurry_frames) with tf.device('GPU:0'): inputs = Input(shape = input_shape, name = 'encoder_input') x = inputs # Layers of the encoder x = Conv2D(filters=64, kernel_size=kernel_size, strides=2, activation='relu', padding='same')(x) x = Conv2D(filters=128, kernel_size=kernel_size, strides=2, activation='relu', padding='same')(x) x = Conv2D(filters=256, kernel_size=kernel_size, strides=2, activation='relu', padding='same')(x) shape = K.int_shape(x) x = Flatten()(x) latent = Dense(latent_dim, 
name='latent_vector')(x) encoder = Model(inputs, latent, name='encoder') latent_inputs = Input(shape=(latent_dim,), name='decoder_input') x = Dense(shape[1]*shape[2]*shape[3])(latent_inputs) x = Reshape((shape[1], shape[2], shape[3]))(x) x = Conv2DTranspose(filters=256,kernel_size=kernel_size, strides=2, activation='relu', padding='same')(x) x = Conv2DTranspose(filters=128,kernel_size=kernel_size, strides=2, activation='relu', padding='same')(x) x = Conv2DTranspose(filters=64,kernel_size=kernel_size, strides=2, activation='relu', padding='same')(x) outputs = Conv2DTranspose(filters=3, kernel_size=kernel_size, activation='sigmoid', padding='same', name='decoder_output')(x) decoder = Model(latent_inputs, outputs, name='decoder') autoencoder = Model(inputs, decoder(encoder(inputs)), name='autoencoder') autoencoder.compile(loss='mse', optimizer='adam',metrics=[&quot;acc&quot;]) # Automated Learning Rate reducer lr_reducer = ReduceLROnPlateau(factor=np.sqrt(0.1), cooldown=0, patience=5, verbose=1, min_lr=0.5e-6) callbacks = [lr_reducer] # Define the data generator data_gen = ImageDataGenerator(rotation_range=20, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True) # Create the flow method to apply data augmentation train_gen = data_gen.flow(blurry_frames, clean_frames, batch_size=batch_size) # Begins training history = autoencoder.fit(train_gen, epochs=100, validation_data=(blurry_frames, clean_frames), batch_size=batch_size, callbacks=callbacks) </code></pre> <p>And this is how I'm running predictions:</p> <pre><code>path = '/mnt/share/Datasets/kaggle_motion_blur/motion_blurred/' images_list = os.listdir(path) model = load_model('trained_BLUR_Small_5L_autoencoder.h5') counter = 0 for image in images_list: image = Image.open(os.path.join(path, image)) image = image.resize((128, 128)) image_array = img_to_array(image) image_array = np.expand_dims(image_array, axis=0) prediction = model.predict(image_array) prediction = 
np.squeeze(prediction, axis=0) prediction = prediction * 255.0 prediction = prediction.astype(np.uint8) prediction = prediction.reshape(128, 128, 3) img = array_to_img(prediction, scale=False) # Plotting the input and predicted images side by side f, axarr = plt.subplots(1, 2) axarr[0].imshow(image) axarr[1].imshow(img) plt.savefig('./output/combined_image_{}.png'.format(counter)) img.save('./output/saved_image_{}.png'.format(counter), format=&quot;PNG&quot;) counter+= 1 </code></pre> <p><a href="https://i.sstatic.net/gmG7y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gmG7y.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/6dScf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6dScf.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/5Xf82.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5Xf82.png" alt="enter image description here" /></a></p> <p>This is the JSON output of the training epochs, loss, accuracy and learning rates:</p> <pre><code>{ &quot;loss&quot;: { &quot;0&quot;: 0.0559637062, &quot;1&quot;: 0.0473968722, &quot;2&quot;: 0.0444460958, &quot;3&quot;: 0.041732043, &quot;4&quot;: 0.0403001383, &quot;5&quot;: 0.0384916142, &quot;6&quot;: 0.0371751711, &quot;7&quot;: 0.0357408971, &quot;8&quot;: 0.0342390947, &quot;9&quot;: 0.0336149298, &quot;10&quot;: 0.0324862674, &quot;11&quot;: 0.0320564546, &quot;12&quot;: 0.0311057456, &quot;13&quot;: 0.0306660384, &quot;14&quot;: 0.0306252893, &quot;15&quot;: 0.0300185885, &quot;16&quot;: 0.0297512896, &quot;17&quot;: 0.0291082468, &quot;18&quot;: 0.0286234375, &quot;19&quot;: 0.0283738524, &quot;20&quot;: 0.0281269737, &quot;21&quot;: 0.0278169811, &quot;22&quot;: 0.0271670725, &quot;23&quot;: 0.027053671, &quot;24&quot;: 0.0268216059, &quot;25&quot;: 0.0265594069, &quot;26&quot;: 0.0262372922, &quot;27&quot;: 0.0260510016, &quot;28&quot;: 0.025675552, &quot;29&quot;: 0.0256246217, &quot;30&quot;: 0.0249390211, 
&quot;31&quot;: 0.0247799307, &quot;32&quot;: 0.0243325885, &quot;33&quot;: 0.0240763351, &quot;34&quot;: 0.0239928197, &quot;35&quot;: 0.0239428878, &quot;36&quot;: 0.0235468745, &quot;37&quot;: 0.0233788602, &quot;38&quot;: 0.0230851565, &quot;39&quot;: 0.0227795113, &quot;40&quot;: 0.0226637572, &quot;41&quot;: 0.0221564285, &quot;42&quot;: 0.0219694152, &quot;43&quot;: 0.0221202765, &quot;44&quot;: 0.0215503499, &quot;45&quot;: 0.0214253683, &quot;46&quot;: 0.0210911259, &quot;47&quot;: 0.0210189149, &quot;48&quot;: 0.0211602859, &quot;49&quot;: 0.0209334362, &quot;50&quot;: 0.0206862725, &quot;51&quot;: 0.0202936586, &quot;52&quot;: 0.020195011, &quot;53&quot;: 0.0203813333, &quot;54&quot;: 0.0199731477, &quot;55&quot;: 0.0199599732, &quot;56&quot;: 0.0198422913, &quot;57&quot;: 0.0200028252, &quot;58&quot;: 0.020069683, &quot;59&quot;: 0.0193502475, &quot;60&quot;: 0.0193256326, &quot;61&quot;: 0.0192854889, &quot;62&quot;: 0.0191295687, &quot;63&quot;: 0.0188199598, &quot;64&quot;: 0.0187446959, &quot;65&quot;: 0.0185292251, &quot;66&quot;: 0.0184431728, &quot;67&quot;: 0.0184016339, &quot;68&quot;: 0.0186514556, &quot;69&quot;: 0.0182920825, &quot;70&quot;: 0.0180315617, &quot;71&quot;: 0.0181470234, &quot;72&quot;: 0.0180766061, &quot;73&quot;: 0.0179452486, &quot;74&quot;: 0.0177276377, &quot;75&quot;: 0.0177405551, &quot;76&quot;: 0.0177125093, &quot;77&quot;: 0.017679546, &quot;78&quot;: 0.0177526604, &quot;79&quot;: 0.0168209467, &quot;80&quot;: 0.016371103, &quot;81&quot;: 0.0162614379, &quot;82&quot;: 0.0160754975, &quot;83&quot;: 0.0160399396, &quot;84&quot;: 0.0158573128, &quot;85&quot;: 0.0158148855, &quot;86&quot;: 0.0157786086, &quot;87&quot;: 0.0156954341, &quot;88&quot;: 0.0155929513, &quot;89&quot;: 0.015328614, &quot;90&quot;: 0.0152604291, &quot;91&quot;: 0.0153146992, &quot;92&quot;: 0.015211042, &quot;93&quot;: 0.0151338875, &quot;94&quot;: 0.0151586076, &quot;95&quot;: 0.0150965769, &quot;96&quot;: 0.0151056759, &quot;97&quot;: 
0.0150074158, &quot;98&quot;: 0.0149844224, &quot;99&quot;: 0.0150296595 }, &quot;acc&quot;: { &quot;0&quot;: 0.4606803954, &quot;1&quot;: 0.467546761, &quot;2&quot;: 0.4655869901, &quot;3&quot;: 0.4671278894, &quot;4&quot;: 0.4661996067, &quot;5&quot;: 0.4693737924, &quot;6&quot;: 0.4923563302, &quot;7&quot;: 0.5473389626, &quot;8&quot;: 0.6008113027, &quot;9&quot;: 0.6279739141, &quot;10&quot;: 0.6372382045, &quot;11&quot;: 0.6473849416, &quot;12&quot;: 0.650372386, &quot;13&quot;: 0.6581180692, &quot;14&quot;: 0.6512622237, &quot;15&quot;: 0.6611989737, &quot;16&quot;: 0.6612861753, &quot;17&quot;: 0.6657290459, &quot;18&quot;: 0.6735184789, &quot;19&quot;: 0.6756489277, &quot;20&quot;: 0.6758747697, &quot;21&quot;: 0.6802487969, &quot;22&quot;: 0.6829707623, &quot;23&quot;: 0.6869801283, &quot;24&quot;: 0.6898216009, &quot;25&quot;: 0.6890658736, &quot;26&quot;: 0.6886766553, &quot;27&quot;: 0.6913455725, &quot;28&quot;: 0.6947268248, &quot;29&quot;: 0.6963167787, &quot;30&quot;: 0.6970059276, &quot;31&quot;: 0.6994771361, &quot;32&quot;: 0.7033333778, &quot;33&quot;: 0.701754868, &quot;34&quot;: 0.7039641738, &quot;35&quot;: 0.7053239346, &quot;36&quot;: 0.7035363913, &quot;37&quot;: 0.7069551945, &quot;38&quot;: 0.7062308192, &quot;39&quot;: 0.7106642723, &quot;40&quot;: 0.7096898556, &quot;41&quot;: 0.7116783261, &quot;42&quot;: 0.7139574289, &quot;43&quot;: 0.7117571831, &quot;44&quot;: 0.7149085999, &quot;45&quot;: 0.7143892646, &quot;46&quot;: 0.7177314162, &quot;47&quot;: 0.7178704143, &quot;48&quot;: 0.7152007222, &quot;49&quot;: 0.7162288427, &quot;50&quot;: 0.7176439762, &quot;51&quot;: 0.7200078368, &quot;52&quot;: 0.7232849598, &quot;53&quot;: 0.7205632329, &quot;54&quot;: 0.7218416929, &quot;55&quot;: 0.7249743342, &quot;56&quot;: 0.7220694423, &quot;57&quot;: 0.7221901417, &quot;58&quot;: 0.7248255014, &quot;59&quot;: 0.7284849882, &quot;60&quot;: 0.7274199128, &quot;61&quot;: 0.7269214392, &quot;62&quot;: 0.728218019, &quot;63&quot;: 
0.7291227579, &quot;64&quot;: 0.7305186987, &quot;65&quot;: 0.7319476008, &quot;66&quot;: 0.7326906323, &quot;67&quot;: 0.732211709, &quot;68&quot;: 0.7313355207, &quot;69&quot;: 0.7331151366, &quot;70&quot;: 0.7346365452, &quot;71&quot;: 0.7359617949, &quot;72&quot;: 0.7359285951, &quot;73&quot;: 0.7339394689, &quot;74&quot;: 0.7341305614, &quot;75&quot;: 0.7364740372, &quot;76&quot;: 0.7368541956, &quot;77&quot;: 0.7347199321, &quot;78&quot;: 0.7379792929, &quot;79&quot;: 0.7438659072, &quot;80&quot;: 0.7457298636, &quot;81&quot;: 0.7466909289, &quot;82&quot;: 0.7475773692, &quot;83&quot;: 0.747853756, &quot;84&quot;: 0.7487729788, &quot;85&quot;: 0.7476714253, &quot;86&quot;: 0.7501927614, &quot;87&quot;: 0.7497540116, &quot;88&quot;: 0.7511804104, &quot;89&quot;: 0.752410531, &quot;90&quot;: 0.7530171275, &quot;91&quot;: 0.7530553937, &quot;92&quot;: 0.7526908517, &quot;93&quot;: 0.7528964281, &quot;94&quot;: 0.7526913285, &quot;95&quot;: 0.753929615, &quot;96&quot;: 0.7543320656, &quot;97&quot;: 0.7543010116, &quot;98&quot;: 0.7544336319, &quot;99&quot;: 0.7536581159 }, &quot;val_loss&quot;: { &quot;0&quot;: 0.0491532497, &quot;1&quot;: 0.0428819433, &quot;2&quot;: 0.0403589457, &quot;3&quot;: 0.0374303386, &quot;4&quot;: 0.0374171548, &quot;5&quot;: 0.0354377925, &quot;6&quot;: 0.0335176662, &quot;7&quot;: 0.0325156339, &quot;8&quot;: 0.0312758237, &quot;9&quot;: 0.0298724882, &quot;10&quot;: 0.0290645473, &quot;11&quot;: 0.0287973378, &quot;12&quot;: 0.0283008274, &quot;13&quot;: 0.0279917233, &quot;14&quot;: 0.0277446155, &quot;15&quot;: 0.0267075617, &quot;16&quot;: 0.0269869734, &quot;17&quot;: 0.0260658357, &quot;18&quot;: 0.0261003803, &quot;19&quot;: 0.0256356988, &quot;20&quot;: 0.024952108, &quot;21&quot;: 0.0248472486, &quot;22&quot;: 0.0243210737, &quot;23&quot;: 0.024381537, &quot;24&quot;: 0.0239129663, &quot;25&quot;: 0.0230953004, &quot;26&quot;: 0.0232715681, &quot;27&quot;: 0.0230326615, &quot;28&quot;: 0.0231741108, &quot;29&quot;: 
0.0226652324, &quot;30&quot;: 0.0221607611, &quot;31&quot;: 0.0218544509, &quot;32&quot;: 0.0217084009, &quot;33&quot;: 0.0216371976, &quot;34&quot;: 0.0214716587, &quot;35&quot;: 0.0209531747, &quot;36&quot;: 0.0207797866, &quot;37&quot;: 0.0206740964, &quot;38&quot;: 0.0203931443, &quot;39&quot;: 0.0200728513, &quot;40&quot;: 0.0201499071, &quot;41&quot;: 0.0196091868, &quot;42&quot;: 0.0195282493, &quot;43&quot;: 0.0196591243, &quot;44&quot;: 0.0192415901, &quot;45&quot;: 0.0187332761, &quot;46&quot;: 0.018796768, &quot;47&quot;: 0.0186291263, &quot;48&quot;: 0.0188825149, &quot;49&quot;: 0.0184612796, &quot;50&quot;: 0.018214304, &quot;51&quot;: 0.018066233, &quot;52&quot;: 0.0180678181, &quot;53&quot;: 0.0183129609, &quot;54&quot;: 0.0177096091, &quot;55&quot;: 0.017653849, &quot;56&quot;: 0.0177150834, &quot;57&quot;: 0.0176380556, &quot;58&quot;: 0.0175104905, &quot;59&quot;: 0.018088758, &quot;60&quot;: 0.0175425448, &quot;61&quot;: 0.0171913933, &quot;62&quot;: 0.0168936346, &quot;63&quot;: 0.0169047564, &quot;64&quot;: 0.0168141816, &quot;65&quot;: 0.0170337521, &quot;66&quot;: 0.016725529, &quot;67&quot;: 0.0169604756, &quot;68&quot;: 0.0165361054, &quot;69&quot;: 0.0165681373, &quot;70&quot;: 0.0163681172, &quot;71&quot;: 0.0164794382, &quot;72&quot;: 0.0163597036, &quot;73&quot;: 0.0161324181, &quot;74&quot;: 0.0161470957, &quot;75&quot;: 0.0162882674, &quot;76&quot;: 0.0163499508, &quot;77&quot;: 0.0161468163, &quot;78&quot;: 0.0160854217, &quot;79&quot;: 0.0153935291, &quot;80&quot;: 0.0151251443, &quot;81&quot;: 0.0148611758, &quot;82&quot;: 0.0150120985, &quot;83&quot;: 0.0146467723, &quot;84&quot;: 0.0146741169, &quot;85&quot;: 0.0146548087, &quot;86&quot;: 0.0147513337, &quot;87&quot;: 0.0145894634, &quot;88&quot;: 0.0145941339, &quot;89&quot;: 0.0144529464, &quot;90&quot;: 0.0142841199, &quot;91&quot;: 0.0142779676, &quot;92&quot;: 0.0142102633, &quot;93&quot;: 0.0142730884, &quot;94&quot;: 0.014272172, &quot;95&quot;: 0.014208761, 
&quot;96&quot;: 0.0141810579, &quot;97&quot;: 0.0141390217, &quot;98&quot;: 0.0140905399, &quot;99&quot;: 0.0141332904 }, &quot;val_acc&quot;: { &quot;0&quot;: 0.4686006606, &quot;1&quot;: 0.4674130976, &quot;2&quot;: 0.4529431164, &quot;3&quot;: 0.4659602046, &quot;4&quot;: 0.4662201703, &quot;5&quot;: 0.4551965594, &quot;6&quot;: 0.5195012093, &quot;7&quot;: 0.5835120082, &quot;8&quot;: 0.6048460603, &quot;9&quot;: 0.6395056844, &quot;10&quot;: 0.6545616388, &quot;11&quot;: 0.662117064, &quot;12&quot;: 0.6643206477, &quot;13&quot;: 0.6685555577, &quot;14&quot;: 0.6260975599, &quot;15&quot;: 0.6835178137, &quot;16&quot;: 0.6557799578, &quot;17&quot;: 0.6855746508, &quot;18&quot;: 0.6705876589, &quot;19&quot;: 0.6919317245, &quot;20&quot;: 0.6861786842, &quot;21&quot;: 0.6914568543, &quot;22&quot;: 0.6953245401, &quot;23&quot;: 0.690965116, &quot;24&quot;: 0.7035027146, &quot;25&quot;: 0.705488801, &quot;26&quot;: 0.7045339942, &quot;27&quot;: 0.7091979384, &quot;28&quot;: 0.7046523094, &quot;29&quot;: 0.7073927522, &quot;30&quot;: 0.7071937323, &quot;31&quot;: 0.7159055471, &quot;32&quot;: 0.7098062038, &quot;33&quot;: 0.7090770006, &quot;34&quot;: 0.7168362737, &quot;35&quot;: 0.7215492129, &quot;36&quot;: 0.7177282572, &quot;37&quot;: 0.7153795362, &quot;38&quot;: 0.7185223699, &quot;39&quot;: 0.7179871202, &quot;40&quot;: 0.7176033854, &quot;41&quot;: 0.7234653234, &quot;42&quot;: 0.713513732, &quot;43&quot;: 0.7200869322, &quot;44&quot;: 0.7262759805, &quot;45&quot;: 0.7287439704, &quot;46&quot;: 0.7239111066, &quot;47&quot;: 0.7238330245, &quot;48&quot;: 0.7275136709, &quot;49&quot;: 0.7328174114, &quot;50&quot;: 0.7293641567, &quot;51&quot;: 0.7332755923, &quot;52&quot;: 0.7260206938, &quot;53&quot;: 0.7235223651, &quot;54&quot;: 0.7341402769, &quot;55&quot;: 0.7222154737, &quot;56&quot;: 0.7359579802, &quot;57&quot;: 0.7379191518, &quot;58&quot;: 0.7348037958, &quot;59&quot;: 0.7349512577, &quot;60&quot;: 0.7326335907, &quot;61&quot;: 0.73770684, 
&quot;62&quot;: 0.7337531447, &quot;63&quot;: 0.7356146574, &quot;64&quot;: 0.736933589, &quot;65&quot;: 0.7397321463, &quot;66&quot;: 0.7403889894, &quot;67&quot;: 0.7353423238, &quot;68&quot;: 0.7442782521, &quot;69&quot;: 0.7409216762, &quot;70&quot;: 0.7454639077, &quot;71&quot;: 0.7416924238, &quot;72&quot;: 0.7463977337, &quot;73&quot;: 0.7433106303, &quot;74&quot;: 0.7427838445, &quot;75&quot;: 0.739207983, &quot;76&quot;: 0.7410522699, &quot;77&quot;: 0.7474275231, &quot;78&quot;: 0.7450863123, &quot;79&quot;: 0.7468729615, &quot;80&quot;: 0.7513384223, &quot;81&quot;: 0.7542303801, &quot;82&quot;: 0.7525380254, &quot;83&quot;: 0.7562230825, &quot;84&quot;: 0.7542062998, &quot;85&quot;: 0.7541561723, &quot;86&quot;: 0.7559394836, &quot;87&quot;: 0.7560024261, &quot;88&quot;: 0.7567353249, &quot;89&quot;: 0.7569227815, &quot;90&quot;: 0.7569333911, &quot;91&quot;: 0.7582103014, &quot;92&quot;: 0.7579286695, &quot;93&quot;: 0.7579852939, &quot;94&quot;: 0.7573221326, &quot;95&quot;: 0.7593886256, &quot;96&quot;: 0.7584165335, &quot;97&quot;: 0.7586974502, &quot;98&quot;: 0.7584565282, &quot;99&quot;: 0.7585714459 }, &quot;lr&quot;: { &quot;0&quot;: 0.001, &quot;1&quot;: 0.001, &quot;2&quot;: 0.001, &quot;3&quot;: 0.001, &quot;4&quot;: 0.001, &quot;5&quot;: 0.001, &quot;6&quot;: 0.001, &quot;7&quot;: 0.001, &quot;8&quot;: 0.001, &quot;9&quot;: 0.001, &quot;10&quot;: 0.001, &quot;11&quot;: 0.001, &quot;12&quot;: 0.001, &quot;13&quot;: 0.001, &quot;14&quot;: 0.001, &quot;15&quot;: 0.001, &quot;16&quot;: 0.001, &quot;17&quot;: 0.001, &quot;18&quot;: 0.001, &quot;19&quot;: 0.001, &quot;20&quot;: 0.001, &quot;21&quot;: 0.001, &quot;22&quot;: 0.001, &quot;23&quot;: 0.001, &quot;24&quot;: 0.001, &quot;25&quot;: 0.001, &quot;26&quot;: 0.001, &quot;27&quot;: 0.001, &quot;28&quot;: 0.001, &quot;29&quot;: 0.001, &quot;30&quot;: 0.001, &quot;31&quot;: 0.001, &quot;32&quot;: 0.001, &quot;33&quot;: 0.001, &quot;34&quot;: 0.001, &quot;35&quot;: 0.001, &quot;36&quot;: 0.001, 
&quot;37&quot;: 0.001, &quot;38&quot;: 0.001, &quot;39&quot;: 0.001, &quot;40&quot;: 0.001, &quot;41&quot;: 0.001, &quot;42&quot;: 0.001, &quot;43&quot;: 0.001, &quot;44&quot;: 0.001, &quot;45&quot;: 0.001, &quot;46&quot;: 0.001, &quot;47&quot;: 0.001, &quot;48&quot;: 0.001, &quot;49&quot;: 0.001, &quot;50&quot;: 0.001, &quot;51&quot;: 0.001, &quot;52&quot;: 0.001, &quot;53&quot;: 0.001, &quot;54&quot;: 0.001, &quot;55&quot;: 0.001, &quot;56&quot;: 0.001, &quot;57&quot;: 0.001, &quot;58&quot;: 0.001, &quot;59&quot;: 0.001, &quot;60&quot;: 0.001, &quot;61&quot;: 0.001, &quot;62&quot;: 0.001, &quot;63&quot;: 0.001, &quot;64&quot;: 0.001, &quot;65&quot;: 0.001, &quot;66&quot;: 0.001, &quot;67&quot;: 0.001, &quot;68&quot;: 0.001, &quot;69&quot;: 0.001, &quot;70&quot;: 0.001, &quot;71&quot;: 0.001, &quot;72&quot;: 0.001, &quot;73&quot;: 0.001, &quot;74&quot;: 0.001, &quot;75&quot;: 0.001, &quot;76&quot;: 0.001, &quot;77&quot;: 0.001, &quot;78&quot;: 0.001, &quot;79&quot;: 0.0003162278, &quot;80&quot;: 0.0003162278, &quot;81&quot;: 0.0003162278, &quot;82&quot;: 0.0003162278, &quot;83&quot;: 0.0003162278, &quot;84&quot;: 0.0003162278, &quot;85&quot;: 0.0003162278, &quot;86&quot;: 0.0003162278, &quot;87&quot;: 0.0003162278, &quot;88&quot;: 0.0003162278, &quot;89&quot;: 0.0001, &quot;90&quot;: 0.0001, &quot;91&quot;: 0.0001, &quot;92&quot;: 0.0001, &quot;93&quot;: 0.0001, &quot;94&quot;: 0.0001, &quot;95&quot;: 0.0001, &quot;96&quot;: 0.0000316228, &quot;97&quot;: 0.0000316228, &quot;98&quot;: 0.0000316228, &quot;99&quot;: 0.0000316228 } } </code></pre> <p>When I change my architecture to this:</p> <pre><code>inputs = Input(shape = input_shape, name = 'encoder_input') x = inputs # Layers of the encoder x = Conv2D(filters=64, kernel_size=kernel_size, strides=2, activation='relu', padding=padding)(x) x = Conv2D(filters=128, kernel_size=kernel_size, strides=2, activation='relu', padding=padding)(x) x = Conv2D(filters=256, kernel_size=kernel_size, strides=2, activation='relu', 
padding=padding)(x) # Additional layers x = Conv2D(filters=512, kernel_size=kernel_size, strides=2, activation='relu', padding=padding)(x) x = Conv2D(filters=1024, kernel_size=kernel_size, strides=2, activation='relu', padding=padding)(x) shape = K.int_shape(x) x = Flatten()(x) latent = Dense(latent_dim, name='latent_vector')(x) encoder = Model(inputs, latent, name='encoder') latent_inputs = Input(shape=(latent_dim,), name='decoder_input') x = Dense(shape[1]*shape[2]*shape[3])(latent_inputs) x = Reshape((shape[1], shape[2], shape[3]))(x) # Layers of the dencoder # Additional layers x = Conv2DTranspose(filters=1024,kernel_size=kernel_size, strides=2, activation='relu', padding=padding)(x) x = Conv2DTranspose(filters=512,kernel_size=kernel_size, strides=2, activation='relu', padding=padding)(x) x = Conv2DTranspose(filters=256,kernel_size=kernel_size, strides=2, activation='relu', padding=padding)(x) x = Conv2DTranspose(filters=128,kernel_size=kernel_size, strides=2, activation='relu', padding=padding)(x) x = Conv2DTranspose(filters=64,kernel_size=kernel_size, strides=2, activation='relu', padding=padding)(x) outputs = Conv2DTranspose(filters=3, kernel_size=kernel_size, activation='sigmoid', padding=padding, name='decoder_output')(x) </code></pre> <p>And change the dimensions to 1024x1024x3, these are my results:</p> <p><a href="https://i.sstatic.net/U4FQ2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U4FQ2.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/W8kZS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W8kZS.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/1LnuX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1LnuX.png" alt="enter image description here" /></a></p>
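One detail worth double-checking in the code above (an observation, not a confirmed diagnosis): the training loader scales images to [0, 1] with <code>/ 255</code>, while the prediction loop feeds <code>img_to_array</code> output (0–255 floats) straight into <code>model.predict</code>. Mismatched input scaling between training and inference is a classic source of garish, pixelated reconstructions. A small sketch of a preprocessing step that mirrors the training pipeline:

```python
import numpy as np

def preprocess(image_array):
    # Mirror the training pipeline: float32 in [0, 1], plus a batch axis,
    # so predict() sees the same input distribution it was trained on.
    scaled = image_array.astype("float32") / 255.0
    return np.expand_dims(scaled, axis=0)

batch = preprocess(np.full((128, 128, 3), 255, dtype=np.uint8))
print(batch.shape, batch.max())
```

Applying this before <code>model.predict(image_array)</code> (instead of passing the raw array) would make inference consistent with the <code>load_image</code> helper used for training.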
<python><tensorflow><machine-learning><computer-vision><autoencoder>
2023-02-01 08:53:33
0
3,203
mchd
75,307,597
10,062,025
Response cannot post in python Requests
<p>I am trying to scrape all items from this website: <a href="https://www.sayurbox.com/category/vegetables-1-a0d03d59/sub-category/all" rel="nofollow noreferrer">https://www.sayurbox.com/category/vegetables-1-a0d03d59/sub-category/all</a></p> <p>Here's my code</p> <pre><code>#get all libraries import requests import json import pandas as pd import re from datetime import date today = str(date.today()) slugcat=&quot;vegetables-1-a0d03d59&quot; url=&quot;https://www.sayurbox.com/graphql/v1?deduplicate=1&quot; headers = { 'authorization': str(authoId)} payload=[{&quot;operationName&quot;:&quot;getCartItemCount&quot;, &quot;variables&quot;:{&quot;deliveryConfigId&quot;:DCId}, &quot;query&quot;:&quot;query getCartItemCount($deliveryConfigId: ID!) {\n cart(deliveryConfigId: $deliveryConfigId) {\n id\n count\n __typename\n }\n}&quot;}, {&quot;operationName&quot;:&quot;getProducts&quot;, &quot;variables&quot;:{&quot;deliveryConfigId&quot;:DCId, &quot;sortBy&quot;:&quot;related_product&quot;, &quot;isInstantDelivery&quot;:False, &quot;slug&quot;:slugcat, &quot;first&quot;:12, &quot;abTestFeatures&quot;:[]}, &quot;query&quot;:&quot;query getProducts($deliveryConfigId: ID!, $sortBy: CatalogueSortType!, $slug: String!, $after: String, $first: Int, $isInstantDelivery: Boolean, $abTestFeatures: [String!]) {\n productsByCategoryOrSubcategoryAndDeliveryConfig(\n deliveryConfigId: $deliveryConfigId\n sortBy: $sortBy\n slug: $slug\n after: $after\n first: $first\n isInstantDelivery: $isInstantDelivery\n abTestFeatures: $abTestFeatures\n ) {\n edges {\n node {\n ...ProductInfoFragment\n __typename\n }\n __typename\n }\n pageInfo {\n hasNextPage\n endCursor\n __typename\n }\n productBuilder\n __typename\n }\n}\n\nfragment ProductInfoFragment on Product {\n id\n uuid\n deliveryConfigId\n displayName\n priceRanges\n priceMin\n priceMax\n actualPriceMin\n actualPriceMax\n slug\n label\n isInstant\n isInstantOnly\n nextDayAvailability\n heroImage\n promo\n discount\n isDiscount\n 
variantType\n imageIds\n isStockAvailable\n defaultVariantSkuCode\n quantitySoldFormatted\n promotion {\n quota\n isShown\n campaignId\n __typename\n }\n productVariants {\n productVariant {\n id\n skuCode\n variantName\n maxQty\n isDiscount\n stockAvailable\n promotion {\n quota\n campaignId\n isShown\n __typename\n }\n __typename\n }\n pageInfo {\n hasPreviousPage\n hasNextPage\n __typename\n }\n __typename\n }\n __typename\n}&quot;}, {&quot;operationName&quot;:&quot;getProducts&quot;, &quot;variables&quot;:{&quot;deliveryConfigId&quot;:DCId, &quot;sortBy&quot;:&quot;related_product&quot;, &quot;isInstantDelivery&quot;:False, &quot;slug&quot;:slugcat, &quot;first&quot;:12, &quot;abTestFeatures&quot;:[], &quot;after&quot;:&quot;YXJyYXljb25uZWN0aW9uOjEx&quot;}, &quot;query&quot;:&quot;query getProducts($deliveryConfigId: ID!, $sortBy: CatalogueSortType!, $slug: String!, $after: String, $first: Int, $isInstantDelivery: Boolean, $abTestFeatures: [String!]) {\n productsByCategoryOrSubcategoryAndDeliveryConfig(\n deliveryConfigId: $deliveryConfigId\n sortBy: $sortBy\n slug: $slug\n after: $after\n first: $first\n isInstantDelivery: $isInstantDelivery\n abTestFeatures: $abTestFeatures\n ) {\n edges {\n node {\n ...ProductInfoFragment\n __typename\n }\n __typename\n }\n pageInfo {\n hasNextPage\n endCursor\n __typename\n }\n productBuilder\n __typename\n }\n}\n\nfragment ProductInfoFragment on Product {\n id\n uuid\n deliveryConfigId\n displayName\n priceRanges\n priceMin\n priceMax\n actualPriceMin\n actualPriceMax\n slug\n label\n isInstant\n isInstantOnly\n nextDayAvailability\n heroImage\n promo\n discount\n isDiscount\n variantType\n imageIds\n isStockAvailable\n defaultVariantSkuCode\n quantitySoldFormatted\n promotion {\n quota\n isShown\n campaignId\n __typename\n }\n productVariants {\n productVariant {\n id\n skuCode\n variantName\n maxQty\n isDiscount\n stockAvailable\n promotion {\n quota\n campaignId\n isShown\n __typename\n }\n __typename\n }\n pageInfo {\n 
hasPreviousPage\n hasNextPage\n __typename\n }\n __typename\n }\n __typename\n}&quot;}]
payload = json.dumps(payload)
response = requests.post(url, headers=headers, json=payload)
</code></pre> <p>I got the authoId and deliverycodeID from the inspect-element source and pasted them in. However, it is not returning a 200. I am guessing the way I put the payload together is wrong. Can anyone help me with that?</p> <p>I also tried passing just the first &quot;getProducts&quot; operation only, and I don't know if that will work.</p>
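One thing that stands out in the snippet: <code>payload = json.dumps(payload)</code> turns the list into a string, and <code>requests</code> then serializes that string <em>again</em> when it is passed through <code>json=</code>, so the server receives a JSON string containing JSON instead of a JSON array. This double-encoding can be observed without any network call by preparing the request (the URL and payload here are placeholders):

```python
import json
import requests

payload = [{"operationName": "getCartItemCount", "variables": {}}]

# json= on an already-dumped string: the body becomes a quoted JSON string.
double = requests.Request("POST", "http://example.com", json=json.dumps(payload)).prepare()
# json= on the Python object itself: the body is the intended JSON array.
single = requests.Request("POST", "http://example.com", json=payload).prepare()

print(double.body[:10])
print(single.body[:10])
```

Passing the list directly — <code>requests.post(url, headers=headers, json=payload)</code> with no prior <code>json.dumps</code> — should produce the body the endpoint expects; whether the server then returns 200 also depends on the auth headers being valid.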
<python><python-requests>
2023-02-01 08:50:36
0
333
Hal
75,307,544
2,277,833
Performing an assertion in a test written with pytest that should not have occurred
<p>Below is the full test code where each assertion is executed. This is unintuitive for me for one reason. If the value of the variable k is None then the function t throws an exception, and thus the code after calling t should not be executed and the exception should be caught by the context manager. However, this does not happen and I do not know why. Not that it bothers me, it's even fantastic that it executes this way, but I'd like to know why.</p> <pre class="lang-py prettyprint-override"><code>from contextlib import nullcontext as does_not_raise import pytest def t(k): if k: return k else: raise ValueError(&quot;Value&quot;) @pytest.mark.parametrize(&quot;k, cntxt&quot;, [(None, pytest.raises(ValueError)), (&quot;Value&quot;, does_not_raise())]) def test_t(k, cntxt): with cntxt as ex: kk = t(k) if k: assert kk == k assert ex is None else: assert kk is None assert str(ex.value) == &quot;Value&quot; </code></pre>
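For reference, a context manager that suppresses an exception does make the interpreter skip the remainder of the <code>with</code> body and resume <em>after</em> the block. That behaviour can be demonstrated with only the standard library, where <code>contextlib.suppress</code> plays the role that <code>pytest.raises</code> plays in the test above:

```python
from contextlib import suppress

executed = []
with suppress(ValueError):
    executed.append("before raise")
    raise ValueError("Value")
    executed.append("after raise")  # unreachable: the block is abandoned
executed.append("after the with block")

print(executed)
```

Only statements before the raise and statements after the <code>with</code> block run; anything between the raise and the end of the block is skipped.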
<python><python-3.x><exception><pytest>
2023-02-01 08:46:22
2
364
Draqun
75,307,494
13,846,577
How to run the command "ps -eo pid,user,ppid,cmd,%mem,%cpu --sort=-%cpu | head" through Python
<p>I want to run the command <code>ps -eo pid,user,ppid,cmd,%mem,%cpu --sort=-%cpu | head</code> and capture its output from a Python script.</p> <p>I tried doing it with the subprocess library, but I am not able to find a solution.</p>
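Because the command contains a <code>|</code> pipe, the simplest route is to let a shell interpret it via <code>shell=True</code> (acceptable for a fixed, trusted command string like this one). A sketch:

```python
import shutil
import subprocess

def run_pipeline(cmd: str) -> str:
    # shell=True hands the whole string to /bin/sh, which handles the pipe;
    # check=True raises CalledProcessError on a non-zero exit status.
    result = subprocess.run(cmd, shell=True, capture_output=True,
                            text=True, check=True)
    return result.stdout

# Guarded so the sketch also runs on systems where procps is not installed.
if shutil.which("ps"):
    print(run_pipeline("ps -eo pid,user,ppid,cmd,%mem,%cpu --sort=-%cpu | head"))
```

If you want to avoid <code>shell=True</code>, the alternative is two <code>subprocess.Popen</code> objects with the first one's stdout connected to the second one's stdin.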
<python><linux>
2023-02-01 08:40:54
1
509
Yasharth Dubey
75,307,473
3,172,556
How to read a SQLite database file using polars package in Python
<p>I want to read a SQLite database file (database.sqlite) using the <code>polars</code> package. I tried the following, unsuccessfully:</p> <pre><code>import sqlite3
import polars as pl

conn = sqlite3.connect('database.sqlite')
df = pl.read_sql(&quot;SELECT * from table_name&quot;, conn)
print(df)
</code></pre> <p>I get the following error:</p> <pre><code>AttributeError: 'sqlite3.Connection' object has no attribute 'split'
</code></pre> <p>Any suggestions?</p>
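The error arises because polars' SQL reader expects a connection <em>URI string</em> (which it hands to connectorx, hence the attempt to <code>split</code> it), not a <code>sqlite3.Connection</code> object. Below is a sketch of the URI form, using a throwaway database file; the exact reader name varies by polars version (<code>read_sql</code> in older releases, <code>read_database_uri</code> in newer ones), so treat the commented call as an assumption:

```python
import os
import sqlite3
import tempfile

# Build a tiny SQLite file so there is something for the URI to point at.
path = os.path.join(tempfile.mkdtemp(), "database.sqlite")
with sqlite3.connect(path) as conn:
    conn.execute("CREATE TABLE table_name (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO table_name VALUES (1, 'a'), (2, 'b')")

# polars wants a URI string, e.g. "sqlite:///path/to/database.sqlite":
uri = f"sqlite://{path}"  # path is absolute, so this yields sqlite:///...
# df = pl.read_database_uri("SELECT * FROM table_name", uri)
print(uri)
```

Passing the URI string in place of the <code>sqlite3.Connection</code> object should avoid the <code>AttributeError</code>.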
<python><sqlite><python-polars>
2023-02-01 08:39:07
3
788
canon-ball
75,307,309
7,376,511
Subclass overriding attribute with subtype of parent's attribute
<pre class="lang-py prettyprint-override"><code>class B:
    pass

class InheritsB1(B):
    pass

class InheritsB2(B):
    pass

class A:
    prop: list[B]

class InheritsA1(A):
    prop: list[InheritsB1]

class InheritsA2(A):
    prop: list[InheritsB2]
</code></pre> <p>With this code mypy raises <code>Incompatible types in assignment (expression has type &quot;List[InheritsB2]&quot;, base class &quot;A&quot; defined the type as &quot;List[B]&quot;)</code>.</p> <p>How can I make this work?</p> <p><code>InheritsB1</code> is a subclass of <code>B</code>, so <code>list[InheritsB1]</code> is always a list of <code>B</code>. How can I tell mypy that it's not incompatible? Or, how can I tell mypy that the <code>prop</code> in <code>A</code> is &quot;list of <code>B</code> or any specific subclass of <code>B</code>&quot;?</p> <p>I understand the issue here: <a href="https://stackoverflow.com/questions/50305636/mypy-trouble-with-inheritance-of-objects-in-lists">mypy trouble with inheritance of objects in lists</a>. But in this case I want the <code>prop</code> object to be a list of specific instances (<code>B</code> or any subclass of <code>B</code>). I know it will never be mixed, as in it will always be <code>list[B]</code> or <code>list[SubclassOfB1]</code> or <code>list[SubclassOfB2]</code>, never <code>list[SubclassOfB1 | SubclassOfB2]</code>. How can I do this?</p>
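One way to express "<code>prop</code> is a list of <code>B</code> or of one specific subclass of <code>B</code>" that mypy accepts is to make <code>A</code> generic over a type variable bounded by <code>B</code>. This is a sketch of that approach (it changes <code>A</code>'s public shape, so each subclass must name its element type):

```python
from typing import Generic, TypeVar

class B: ...
class InheritsB1(B): ...
class InheritsB2(B): ...

# T may be B or any subclass of B -- but one fixed type per subclass of A.
T = TypeVar("T", bound=B)

class A(Generic[T]):
    prop: list[T]

class InheritsA1(A[InheritsB1]):
    pass

class InheritsA2(A[InheritsB2]):
    pass
```

mypy then checks <code>InheritsA1().prop</code> as <code>list[InheritsB1]</code>, and the "never mixed" guarantee is encoded in the type: each subclass commits to exactly one element type.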
<python><python-typing><mypy>
2023-02-01 08:23:12
1
797
Some Guy
75,306,929
952,257
Python groupby: Custom analysis of each group
<p>I have an excel file containing rank data. The columns are Id, date, and rank. I want to find the average time it takes to move from one rank to another. For this purpose, I want to group my dataframe by ID, then sort by time, and then for each pair of consecutive entries, calculate a triplet (rankA, rankB, timeDiff)</p> <p>For example for the following data</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">id</th> <th style="text-align: center;">date</th> <th style="text-align: center;">rank</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">2009</td> <td style="text-align: center;">l1</td> </tr> <tr> <td style="text-align: center;">2</td> <td style="text-align: center;">2008</td> <td style="text-align: center;">l2</td> </tr> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">2010</td> <td style="text-align: center;">l2</td> </tr> <tr> <td style="text-align: center;">2</td> <td style="text-align: center;">2011</td> <td style="text-align: center;">l3</td> </tr> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">2012</td> <td style="text-align: center;">l3</td> </tr> </tbody> </table> </div> <p>I want to get the triplets (1,2,1), (2,3,3), (2,3,2) corresponding to the rank changes of employee 1 from level 1 to 2, then of employee 2 from level 2 to 3, then of employee 1 from level 2 to 3. How can this be done?</p>
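A hedged sketch of one way to do this, using the column names from the example table (the numeric level is parsed out of the <code>l</code> prefix, an assumption about the rank format):

```python
import pandas as pd

# The example data from the question.
df = pd.DataFrame(
    {"id":   [1, 2, 1, 2, 1],
     "date": [2009, 2008, 2010, 2011, 2012],
     "rank": ["l1", "l2", "l2", "l3", "l3"]}
)

# Parse the numeric level, sort each employee's history by time, then pair
# consecutive rows per employee with groupby + shift/diff.
df["level"] = df["rank"].str.lstrip("l").astype(int)
df = df.sort_values(["id", "date"])
grp = df.groupby("id")
df["prev_level"] = grp["level"].shift()
df["time_diff"] = grp["date"].diff()

# The first row of each employee has no predecessor; drop it.
transitions = df.dropna(subset=["prev_level"])
triplets = list(zip(
    transitions["prev_level"].astype(int).tolist(),
    transitions["level"].tolist(),
    transitions["time_diff"].astype(int).tolist(),
))
print(triplets)  # [(1, 2, 1), (2, 3, 2), (2, 3, 3)]
```

The triplets come out grouped by employee rather than in the order listed in the question, but the set is the same; averaging <code>time_diff</code> per <code>(prev_level, level)</code> pair would then be a second <code>groupby</code>.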
<python><group-by>
2023-02-01 07:39:53
2
1,069
Shaharg
75,306,914
16,950,492
How can I merge different Db Models into one?
<p>I have an old Django project, which I started when I was a beginner. So far it has worked, but due to some code refactoring I would like to do, I want to change the original database models. Basically, I originally made many different models, one for each user type.</p> <p>old models:</p> <pre><code>class CustomUser(AbstractUser):
    user_type_data = (
        ('admin', 'Admin'),
        ('instructor', 'Instructor'),
        ('student', 'Student'),
        ('renter', 'Renter'),
    )
    user_type = models.CharField(
        max_length=20, choices=user_type_data, default=1)


class Admin(models.Model):
    user = models.OneToOneField(CustomUser, on_delete=models.CASCADE)
    first_name = models.CharField(max_length=200)
    last_name = models.CharField(max_length=200)
    date_of_birth = models.DateField(null=True, blank=True)
    fiscal_code = models.CharField(max_length=50, null=True, blank=True)
    phone = models.CharField(max_length=50, null=True, blank=True)
    picture = models.ImageField(
        blank=True, null=True, default='default.png')
    address = models.CharField(max_length=100, blank=True, null=True)
    cap = models.CharField(max_length=10, blank=True, null=True)
    city = models.CharField(max_length=100, blank=True, null=True)
    province = models.CharField(
        max_length=100, choices=PROVINCE_CHOICES, blank=True, null=True)
    country = CountryField(blank=True, null=True)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)
    is_active = models.BooleanField(default=True)

    def __str__(self):
        return self.user.username

    class Meta:
        ordering = ['last_name']


class Instructor(models.Model):
    user = models.OneToOneField(CustomUser, on_delete=models.CASCADE)
    first_name = models.CharField(max_length=200)
    last_name = models.CharField(max_length=200)
    date_of_birth = models.DateField(null=True, blank=True)
    fiscal_code = models.CharField(max_length=50, null=True, blank=True)
    phone = models.CharField(max_length=50, null=True, blank=True)
    picture = models.ImageField(
        blank=True, null=True, default='default.png')
    address = models.CharField(max_length=100, null=True, blank=True)
    cap = models.CharField(max_length=10, null=True, blank=True)
    city = models.CharField(max_length=100, null=True, blank=True)
    province = models.CharField(
        max_length=100, choices=PROVINCE_CHOICES, blank=True, null=True)
    country = CountryField(blank=True, null=True)
    is_active = models.BooleanField(default=True)
    flight_wage = models.FloatField(null=True, blank=True)
    theory_wage = models.FloatField(null=True, blank=True)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    def __str__(self):
        return self.user.last_name + ' ' + self.user.first_name

    class Meta:
        ordering = ['last_name']
</code></pre> <p>I posted just the first two user types, but you get the picture: a lot of redundant code. What I would like to achieve is something like this:</p> <p>new models.py:</p> <pre><code>class CustomUser(AbstractUser):
    user_type_data = (
        ('admin', 'Admin'),
        ('instructor', 'Instructor'),
        ('student', 'Student'),
        ('renter', 'Renter'),
    )
    user_type = models.CharField(
        max_length=20, choices=user_type_data, default=1)
    first_name = models.CharField(max_length=200)
    last_name = models.CharField(max_length=200)
    date_of_birth = models.DateField(null=True, blank=True)
    fiscal_code = models.CharField(max_length=50, null=True, blank=True)
    phone = models.CharField(max_length=50, null=True, blank=True)
    picture = models.ImageField(
        blank=True, null=True, default='default.png')
    address = models.CharField(max_length=100, null=True, blank=True)
    cap = models.CharField(max_length=10, null=True, blank=True)
    city = models.CharField(max_length=100, null=True, blank=True)
    province = models.CharField(
        max_length=100, choices=PROVINCE_CHOICES, blank=True, null=True)
    country = CountryField(blank=True, null=True)
    is_active = models.BooleanField(default=True)
    flight_wage = models.FloatField(null=True, blank=True)
    theory_wage = models.FloatField(null=True, blank=True)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    def __str__(self):
        return self.last_name + ' ' + self.first_name

    class Meta:
        ordering = ['last_name']
</code></pre> <p>My question is: is it possible to adapt my old projects (I already have two customers using the old project) to this brand-new database model?</p>
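One common route is a data migration: add the new fields to <code>CustomUser</code>, copy each profile row's values onto its user, then drop the profile models in a later migration. A hedged sketch follows — the app label <code>'myapp'</code> and the <code>RunPython</code> wiring are assumptions, not the author's code; field names follow the question. The field-copying rule is factored into a plain helper so it can be exercised without Django:

```python
# Fields shared by the old profile models and the new CustomUser.
PROFILE_FIELDS = [
    'first_name', 'last_name', 'date_of_birth', 'fiscal_code', 'phone',
    'address', 'cap', 'city', 'province', 'is_active',
]

def merge_profile_fields(user_fields, profile_fields):
    """Return user_fields updated with any non-None profile values."""
    merged = dict(user_fields)
    for name in PROFILE_FIELDS:
        value = profile_fields.get(name)
        if value is not None:
            merged[name] = value
    return merged

def copy_profiles_to_users(apps, schema_editor):
    # Use historical models, as Django data migrations require; the app
    # label 'myapp' is hypothetical. Hook this up via
    # migrations.RunPython(copy_profiles_to_users).
    for model_name in ('Admin', 'Instructor', 'Student', 'Renter'):
        Profile = apps.get_model('myapp', model_name)
        for profile in Profile.objects.select_related('user'):
            user = profile.user
            for name in PROFILE_FIELDS:
                value = getattr(profile, name, None)
                if value is not None:
                    setattr(user, name, value)
            user.save()

if __name__ == '__main__':
    # The pure helper can be checked without a Django project:
    user = {'first_name': '', 'phone': None}
    profile = {'first_name': 'Giorgio', 'phone': '555-1234'}
    print(merge_profile_fields(user, profile))
```

Splitting the change into three migrations (add fields, copy data, remove profile models) keeps each step reversible, which matters when two customer databases already hold live data.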
<python><django><django-models><django-inheritance>
2023-02-01 07:38:20
2
418
Giorgio Scarso
75,306,803
12,883,179
subset fail on np.meshgrid generated dataframe
<p>I generate a lon/lat dataframe like this:</p> <pre><code>a = np.arange(89.7664, 89.7789, 1e-4)
b = np.arange(20.6897, 20.7050, 1e-4)
temp_arr = np.array(np.meshgrid(a, b)).T.reshape(-1, 2)
np_df = pd.DataFrame(temp_arr, columns=['lon', 'lat'])
</code></pre> <p>and it creates the dataframe I want:</p> <p><a href="https://i.sstatic.net/fuSgK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fuSgK.png" alt="enter image description here" /></a></p> <p>When I try to subset the first lon value,</p> <pre><code>len(np_df[np_df['lon']==89.7664])
</code></pre> <p>it returns 153. But when I try to subset one of the last lon values,</p> <pre><code>len(np_df[np_df['lon']==89.7788])
</code></pre> <p>it returns 0.</p> <p>I wonder what is wrong here. Thank you.</p>
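A sketch of what is likely going on: <code>np.arange</code> with a float step accumulates rounding error, so later grid values are no longer bit-for-bit equal to the literal you type, and <code>==</code> finds nothing. Comparing with a tolerance recovers the rows; the <code>atol</code> below is an assumption, chosen well below the 1e-4 grid step so only one lon value can match:

```python
import numpy as np
import pandas as pd

a = np.arange(89.7664, 89.7789, 1e-4)
b = np.arange(20.6897, 20.7050, 1e-4)
temp_arr = np.array(np.meshgrid(a, b)).T.reshape(-1, 2)
np_df = pd.DataFrame(temp_arr, columns=['lon', 'lat'])

# Exact equality fails for the accumulated value near the end of the grid...
exact = len(np_df[np_df['lon'] == 89.7788])

# ...but a tolerance comparison (atol far below the 1e-4 step) matches the
# single intended lon value across all 153 lat rows.
close = len(np_df[np.isclose(np_df['lon'], 89.7788, rtol=0.0, atol=1e-6)])
print(exact, close)
```

Rounding the column first, e.g. <code>np_df['lon'].round(4) == 89.7788</code>, is another common fix, since the rounded doubles coincide with the typed literals at 4 decimal places.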
<python><pandas><dataframe><numpy>
2023-02-01 07:24:16
1
492
d_frEak