Dataset schema (column: type, min–max observed):
- QuestionId: int64, 74.8M–79.8M
- UserId: int64, 56–29.4M
- QuestionTitle: string, 15–150 chars
- QuestionBody: string, 40–40.3k chars
- Tags: string, 8–101 chars
- CreationDate: string date, 2022-12-10 09:42:47 – 2025-11-01 19:08:18
- AnswerCount: int64, 0–44
- UserExpertiseLevel: int64, 301–888k
- UserDisplayName: string, 3–30 chars
75,202,407
11,015,558
Creating a date range in python-polars with the last days of the months?
<p>How do I create a date range in Polars (Python API) with only the last days of the months?</p> <p>This is the code I have:</p> <pre><code>pl.date_range(datetime(2022,5,5), datetime(2022,8,10), &quot;1mo&quot;, name=&quot;dtrange&quot;) </code></pre> <p>The result is: <code>'2022-05-05', '2022-06-05', '2022-07-05', '2022-08-05'</code></p> <p>I would like to get: <code>'2022-05-31', '2022-06-30', '2022-07-31'</code></p> <p>I know this is possible with Pandas with:</p> <pre><code>pd.date_range(start=datetime(2022,5,5), end=datetime(2022,8,10), freq='M') </code></pre>
<python><datetime><date-range><python-polars>
2023-01-22 17:20:02
3
1,994
Luca
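A possible approach for the question above. Newer polars releases expose a `dt.month_end()` expression, but that is version-dependent; the sketch below is a dependency-free stdlib version that produces exactly the month-end sequence the asker wants:

```python
from datetime import date
import calendar

def month_ends(start: date, end: date) -> list[date]:
    """Return the last day of every month whose end falls within [start, end]."""
    result = []
    y, m = start.year, start.month
    while True:
        # calendar.monthrange returns (weekday_of_first_day, days_in_month)
        last = date(y, m, calendar.monthrange(y, m)[1])
        if last > end:
            break
        if last >= start:
            result.append(last)
        m += 1
        if m == 13:
            y, m = y + 1, 1
    return result

print(month_ends(date(2022, 5, 5), date(2022, 8, 10)))
# three dates: 2022-05-31, 2022-06-30, 2022-07-31
```

The result can be fed into a polars Series directly if needed.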
75,202,296
6,076,861
Read multiple parquet files to pandas with select columns where select columns exist
<p>When running the below I hit an error because some of the files are missing the required columns.</p> <pre class="lang-py prettyprint-override"><code> li = [] for filename in parquet_filtered_list: df = pd.read_parquet(filename, columns = list_key_cols_aggregates ) li.append(df) df_raw_2021_to_2022 = pd.concat(li, axis=0, ignore_index=False) del li </code></pre> <p>How do I skip a file if it is missing the required columns?</p>
<python><pandas><pyarrow>
2023-01-22 17:04:24
1
2,045
mapping dom
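One possible shape of a fix (a sketch, not a verified answer): wrap the per-file read in a guard and skip files that fail. The `read` parameter is injected purely so the skipping logic can be tested without real parquet files; in practice it would be `pd.read_parquet`. The exact exception raised for a missing column depends on the engine, so the guard is deliberately broad here:

```python
import pandas as pd

def read_matching(filenames, required, read=pd.read_parquet):
    """Concatenate only the files that contain all required columns.

    `read` is injected so the logic is testable without parquet files;
    by default it is pd.read_parquet.
    """
    frames = []
    for name in filenames:
        try:
            frames.append(read(name, columns=required))
        except Exception:
            # the exact exception type depends on the parquet engine
            # (pyarrow raises when a requested column is absent) -- skip
            continue
    return pd.concat(frames, axis=0, ignore_index=False)
```

An alternative, if pyarrow is available, would be to inspect each file's schema up front before reading it.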
75,202,174
4,451,315
Which directive to use to parse Z in time string?
<p>If I use <code>fromisoformat</code> (in Python 3.11), then <code>Z</code> is parsed as <code>UTC</code>:</p> <pre class="lang-py prettyprint-override"><code>In [15]: dt.datetime.fromisoformat('2020-01-01T03:04:05Z') Out[15]: datetime.datetime(2020, 1, 1, 3, 4, 5, tzinfo=datetime.timezone.utc) </code></pre> <p>But how can I parse it if I want to pass the format explicitly?</p> <p>I tried <code>'%Y-%m-%dT%H:%M:%S%Z'</code>, but it errors:</p> <pre class="lang-py prettyprint-override"><code>In [16]: dt.datetime.strptime('2020-01-01T03:04:05Z', '%Y-%m-%dT%H:%M:%S%Z') --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[16], line 1 ----&gt; 1 dt.datetime.strptime('2020-01-01T03:04:05Z', '%Y-%m-%dT%H:%M:%S%Z') File /usr/lib/python3.11/_strptime.py:568, in _strptime_datetime(cls, data_string, format) 565 def _strptime_datetime(cls, data_string, format=&quot;%a %b %d %H:%M:%S %Y&quot;): 566 &quot;&quot;&quot;Return a class cls instance based on the input string and the 567 format string.&quot;&quot;&quot; --&gt; 568 tt, fraction, gmtoff_fraction = _strptime(data_string, format) 569 tzname, gmtoff = tt[-2:] 570 args = tt[:6] + (fraction,) File /usr/lib/python3.11/_strptime.py:349, in _strptime(data_string, format) 347 found = format_regex.match(data_string) 348 if not found: --&gt; 349 raise ValueError(&quot;time data %r does not match format %r&quot; % 350 (data_string, format)) 351 if len(data_string) != found.end(): 352 raise ValueError(&quot;unconverted data remains: %s&quot; % 353 data_string[found.end():]) ValueError: time data '2020-01-01T03:04:05Z' does not match format '%Y-%m-%dT%H:%M:%S%Z' </code></pre> <p>I know I can get a result with '%Y-%m-%dT%H:%M:%SZ', but that doesn't actually parse the <code>Z</code> as a timezone (the result is naive):</p> <pre class="lang-py prettyprint-override"><code>In [17]: dt.datetime.strptime('2020-01-01T03:04:05Z', '%Y-%m-%dT%H:%M:%SZ') Out[17]: datetime.datetime(2020, 1, 1, 3, 4, 5) </code></pre>
<python><datetime><utc><iso8601>
2023-01-22 16:46:06
1
11,062
ignoring_gravity
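The directive the asker is looking for is lowercase `%z`: since Python 3.7 it matches a literal `Z` as well as numeric offsets, and produces an aware datetime:

```python
import datetime as dt

# %z (lowercase) matches numeric offsets and, since Python 3.7, a literal "Z",
# yielding an aware datetime in UTC -- unlike a literal Z in the format string,
# which matches the character but discards the timezone information.
parsed = dt.datetime.strptime('2020-01-01T03:04:05Z', '%Y-%m-%dT%H:%M:%S%z')
print(parsed.tzinfo)  # UTC
```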
75,202,027
2,447,609
Rclone+Python - Retain the file permissions during the backup to the S3 bucket
<p>I'm using rclone with Python. I want to preserve the file permissions during the file transfer and then restore them afterwards. What is the best way to implement this?</p> <p>I don't think &quot;rclone mount&quot; is the best solution for our transfer. Please advise.</p>
<python><rclone>
2023-01-22 16:24:46
1
365
suresh goud
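S3 objects have no POSIX permission bits, so one workable approach (an assumption on my part, not an rclone feature) is to snapshot permissions into a manifest file before upload, back the manifest up alongside the data, and re-apply it after restore:

```python
import json
import os
import stat
from pathlib import Path

def save_permissions(root: str, manifest: str) -> None:
    """Walk `root` and record each entry's permission bits in a JSON manifest."""
    perms = {}
    for path in Path(root).rglob('*'):
        rel = str(path.relative_to(root))
        # S_IMODE strips the file-type bits, keeping only rwx/setuid/sticky
        perms[rel] = stat.S_IMODE(path.stat().st_mode)
    Path(manifest).write_text(json.dumps(perms))

def restore_permissions(root: str, manifest: str) -> None:
    """Re-apply permission bits recorded by save_permissions."""
    perms = json.loads(Path(manifest).read_text())
    for rel, mode in perms.items():
        target = Path(root) / rel
        if target.exists():
            os.chmod(target, mode)
```

The manifest would be uploaded with the rest of the data and read back on restore.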
75,201,985
12,350,966
pandas groupby with value_counts(normalize=True) return dataframe instead of series?
<p>Not the data I am using, but the best reproducible example I can think of:</p> <pre><code> test = pd.util.testing.makeMixedDataFrame() grouped = test.groupby(['C', 'B'])['A'].value_counts(normalize=True) </code></pre> <p>gives a series:</p> <pre><code>C B A foo1 0.0 0.0 1.0 foo2 1.0 1.0 1.0 foo3 0.0 2.0 1.0 foo4 1.0 3.0 1.0 foo5 0.0 4.0 1.0 Name: A, dtype: float64 </code></pre> <p>I want to have a dataframe with 4 columns (C, B, A, and the other unnamed), but have not been able to get this.</p> <p>I tried <code>reset_index()</code> on <code>grouped</code>, but got <code>*** ValueError: cannot insert A, already exists</code></p>
<python><pandas>
2023-01-22 16:20:14
0
740
curious
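The ValueError comes from the result Series being named <code>A</code> while <code>A</code> is also an index level; renaming before <code>reset_index</code> resolves the clash. A sketch with an equivalent hand-built frame (since `pd.util.testing.makeMixedDataFrame` is deprecated in newer pandas):

```python
import pandas as pd

df = pd.DataFrame({
    'A': [0.0, 1.0, 2.0, 3.0, 4.0],
    'B': [0.0, 1.0, 0.0, 1.0, 0.0],
    'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'],
})

grouped = df.groupby(['C', 'B'])['A'].value_counts(normalize=True)

# The series may be named 'A', and 'A' is also an index level; rename first
# so reset_index can materialize all levels as columns without a clash.
result = grouped.rename('proportion').reset_index()
print(result.columns.tolist())  # ['C', 'B', 'A', 'proportion']
```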
75,201,928
1,714,385
ImportError: cannot import name '_log_class_usage' from 'torchtext.utils'
<p>I recently updated pytorch and torchtext from anaconda, in order to run <a href="https://pytorch.org/tutorials/beginner/torchtext_translation_tutorial.html" rel="nofollow noreferrer">this tutorial</a> from the torchtext website, but it seems that my installation of torchtext has been broken. Every time I try <code>import torchtext</code> I get the following error:</p> <pre><code>ImportError: cannot import name '_log_class_usage' from 'torchtext.utils' (c:\Users\ferdi\anaconda3\lib\site-packages\torchtext\utils.py) </code></pre> <p>Uninstalling torchtext then reinstalling with <code>conda install -c pytorch torchtext</code> does not help. Does anybody have an idea for how to solve it? My torch version is 1.13.0.</p>
<python><pytorch><torchtext>
2023-01-22 16:12:41
1
4,417
Ferdinando Randisi
75,201,847
3,507,584
Plotly make marker overlay add_trace
<p>I have the following Scatterternary plot below. Whenever I <code>add_trace</code>, the marker remains under it (so you cannot even hover it). How can I make the marker circle above the red area? [In implementation, I will have several areas and the marker may move around]</p> <p>I tried adding <code>fig.update_ternaries(aaxis_layer=&quot;above traces&quot;,baxis_layer=&quot;above traces&quot;, caxis_layer=&quot;above traces&quot;)</code> as shown in the <a href="https://plotly.com/python/reference/layout/ternary/" rel="nofollow noreferrer">documentation</a> without success. There is also another explanation for the <a href="https://community.plotly.com/t/boxplot-trace-behind-area-traces/33663" rel="nofollow noreferrer">boxplots with the same issue</a> but I don't know how to implement it in this case.</p> <p><a href="https://i.sstatic.net/eHN9O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eHN9O.png" alt="Example" /></a></p> <pre><code>import plotly.graph_objects as go fig = go.Figure(go.Scatterternary({ 'mode': 'markers', 'a': [0.3],'b': [0.5], 'c': [0.6], 'marker': {'color': 'AliceBlue','size': 14,'line': {'width': 2} },})) fig.update_layout({ 'ternary': { 'sum': 100, 'aaxis': {'nticks':1, 'ticks':&quot;&quot;}, 'baxis': {'nticks':1}, 'caxis': {'nticks':1} }}) fig.add_trace(go.Scatterternary(name='RedArea',a=[0.1,0.1,0.6],b=[0.7,0.4,0.5],c=[0.2,0.6,0.8],mode='lines',opacity=0.35,fill='toself', fillcolor='red')) fig.update_traces( hovertemplate = &quot;&lt;b&gt;CatA: %{a:.0f}&lt;br&gt;CatB: %{b:.0f}&lt;br&gt;CatC: %{c:.0f}&lt;extra&gt;&lt;/extra&gt;&quot;) fig.show() </code></pre>
<python><scatter-plot><plotly>
2023-01-22 16:00:54
1
3,689
User981636
75,201,806
7,168,098
change indentation in VS code for python functions
<p>I am using VS Code to write Python code.</p> <p>When writing functions I get: <a href="https://i.sstatic.net/pcZQU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pcZQU.png" alt="enter image description here" /></a></p> <p>What I would like to have when I hit return after every variable of the method is: <a href="https://i.sstatic.net/ixPCZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ixPCZ.png" alt="enter image description here" /></a></p> <p>But after hitting return after the first argument the next line starts just under &quot;def&quot;.</p> <p>After looking for solutions on the internet I read somewhere that adding this to settings.json would solve it:</p> <pre><code>&quot;editor.autoIndent&quot;: true, &quot;editor.indentAfterOpenBracket&quot;: &quot;control&quot; }</code></pre> <p>But this is not the case and the behavior remains the same.</p> <p>How and what should be added in settings.json to get this behavior?</p>
<python><visual-studio-code>
2023-01-22 15:55:26
3
3,553
JFerro
75,201,798
2,881,414
Separating elements from a list in Python depending on a condition
<p>I have a list of elements and want to <strong>separate</strong> the elements of the list by a certain condition.</p> <p>A simple example is a list of numbers and I want to separate the odd from the even ones. For that I could use the <code>filter</code> builtin like so:</p> <pre class="lang-py prettyprint-override"><code>def is_even(x): # ... l = [0, 1, 2, 3, 4, 5, 6] even = list(filter(is_even, l)) odd = list(filter(lambda x: not is_even(x), l)) </code></pre> <p>That is a bit error prone if the condition is more complex, because I repeat the predicate in both <code>filter</code> calls. Is there a more elegant way to achieve this?</p>
<python><functional-programming>
2023-01-22 15:53:56
4
17,530
Bastian Venthur
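A single-pass partition evaluates the predicate exactly once per element and avoids the duplication entirely (the itertools recipes section of the stdlib docs offers an iterator-based `partition` built on `tee` and `filterfalse` as an alternative):

```python
def partition(pred, iterable):
    """Split `iterable` into (matches, non_matches) in one pass,
    evaluating the predicate once per element."""
    matches, non_matches = [], []
    for item in iterable:
        (matches if pred(item) else non_matches).append(item)
    return matches, non_matches

def is_even(x):
    return x % 2 == 0

even, odd = partition(is_even, [0, 1, 2, 3, 4, 5, 6])
print(even, odd)  # [0, 2, 4, 6] [1, 3, 5]
```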
75,201,698
16,698,040
urlencode without quote python
<p>I wish to use Python's <code>urllib.parse.urlencode()</code> method to convert a dict to URL-like params; however, I do not want any quoting of characters, just the <code>?&lt;...&gt;=&lt;...&gt;&amp;&lt;...&gt;=&lt;...&gt;</code> logic.</p> <p>Is that possible?</p>
<python><url>
2023-01-22 15:39:51
1
475
Stack Overflow
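Yes: `urlencode` accepts a `quote_via` callable (used for both keys and values), so an identity function disables percent-encoding while keeping the `k=v&k=v` assembly:

```python
from urllib.parse import urlencode

params = {'name': 'john smith', 'path': 'a/b'}

# quote_via receives (string, safe, encoding, errors); returning the string
# unchanged disables percent-encoding for both keys and values.
raw = urlencode(params, quote_via=lambda string, *args: string)
print(raw)  # name=john smith&path=a/b
```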
75,201,645
11,405,004
How to count the number of unique values per group over the last n days
<p>I have the pandas dataframe below:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>groupId</th> <th>date</th> <th>value</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>2023-01-01</td> <td>A</td> </tr> <tr> <td>1</td> <td>2023-01-05</td> <td>B</td> </tr> <tr> <td>1</td> <td>2023-01-17</td> <td>C</td> </tr> <tr> <td>2</td> <td>2023-01-01</td> <td>A</td> </tr> <tr> <td>2</td> <td>2023-01-20</td> <td>B</td> </tr> <tr> <td>3</td> <td>2023-01-01</td> <td>A</td> </tr> <tr> <td>3</td> <td>2023-01-10</td> <td>B</td> </tr> <tr> <td>3</td> <td>2023-01-12</td> <td>C</td> </tr> </tbody> </table> </div> <p>I would like to do a groupby and count the number of unique values for each <code>groupId</code> but only looking at the last n=14 days, relatively to the <code>date</code> of the row.</p> <p>What I would like as a result is something like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>groupId</th> <th>date</th> <th>value</th> <th>newColumn</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>2023-01-01</td> <td>A</td> <td>1</td> </tr> <tr> <td>1</td> <td>2023-01-05</td> <td>B</td> <td>2</td> </tr> <tr> <td>1</td> <td>2023-01-17</td> <td>C</td> <td>2</td> </tr> <tr> <td>2</td> <td>2023-01-01</td> <td>A</td> <td>1</td> </tr> <tr> <td>2</td> <td>2023-01-20</td> <td>B</td> <td>1</td> </tr> <tr> <td>3</td> <td>2023-01-01</td> <td>A</td> <td>1</td> </tr> <tr> <td>3</td> <td>2023-01-10</td> <td>B</td> <td>2</td> </tr> <tr> <td>3</td> <td>2023-01-12</td> <td>C</td> <td>3</td> </tr> </tbody> </table> </div> <p>I did try using a <code>groupby(...).rolling('14d').nunique()</code> and while the <code>rolling</code> function works on numeric fields to count and compute the mean, etc ... 
it doesn't work when used with <code>nunique</code> on string fields to count the number of unique string/object values.</p> <p>You can use the code below to generate the dataframe.</p> <pre><code>pd.DataFrame( { 'groupId': [1, 1, 1, 2, 2, 3, 3, 3], 'date': ['2023-01-01', '2023-01-05', '2023-01-17', '2023-01-01', '2023-01-20', '2023-01-01', '2023-01-10', '2023-01-12'], #YYYY-MM-DD 'value': ['A', 'B', 'C', 'A', 'B', 'A', 'B', 'C'], 'newColumn': [1, 2, 2, 1, 1, 1, 2, 3] } ) </code></pre> <p>Do you have an idea on how to solve this, even if not using the <code>rolling</code> function? That'd be much appreciated!</p>
<python><pandas><dataframe>
2023-01-22 15:32:38
2
386
confused_pandas
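One workaround sketch: `rolling(...).apply` only handles numeric data, so factorize the strings into integer codes first; the number of unique codes in a window equals the number of unique strings:

```python
import pandas as pd

df = pd.DataFrame({
    'groupId': [1, 1, 1, 2, 2, 3, 3, 3],
    'date': ['2023-01-01', '2023-01-05', '2023-01-17', '2023-01-01',
             '2023-01-20', '2023-01-01', '2023-01-10', '2023-01-12'],
    'value': ['A', 'B', 'C', 'A', 'B', 'A', 'B', 'C'],
})
df['date'] = pd.to_datetime(df['date'])

# map strings to integer codes so the time-based rolling window can apply
df['code'] = pd.factorize(df['value'])[0]

counts = (
    df.set_index('date')
      .groupby('groupId')['code']
      .rolling('14D')
      .apply(lambda window: window.nunique())
      .astype(int)
)

# rows are already sorted by (groupId, date), so positions line up
df['newColumn'] = counts.to_numpy()
print(df['newColumn'].tolist())  # [1, 2, 2, 1, 1, 1, 2, 3]
```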
75,201,622
817,630
What are the correct mypy hints for this generic classmethod?
<p>I have a series of classes that looks like this</p> <pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod from typing import TypeVar T = TypeVar(&quot;T&quot;, bound=&quot;A&quot;) U = TypeVar(&quot;U&quot;, bound=&quot;ThirdPartyClass&quot;) class ThirdPartyClass: &quot;&quot;&quot; This is from a third-party library and I don't control the implementation. &quot;&quot;&quot; @classmethod def create(cls: type[U]) -&gt; U: return cls() class A(ABC): @classmethod @abstractmethod def f(cls: type[T]) -&gt; T: pass class B(ThirdPartyClass, A): @classmethod def f(cls) -&gt; T: return cls.create() </code></pre> <p>When I run mypy on this module, I get two errors for the last two lines</p> <blockquote> <p>error: A function returning TypeVar should receive at least one argument containing the same Typevar</p> <p>error: Incompatible return value type (got &quot;B&quot;, expected &quot;T&quot;)</p> </blockquote> <p>In my mind, neither of these are valid.</p> <p>For the first one, <code>B.f</code> <em>does</em> receive an argument containing the Typevar—it receives a <code>type[B]</code> and since <code>B</code> inherits from <code>A</code>, and <code>T</code> is bound by <code>A</code>, <code>type[B]</code> is valid here.</p> <p>Similarly for the second one, the return type of <code>B</code> should be fine because <code>B</code> inherits from <code>A</code>, and <code>A</code> is the bound for <code>T</code>.</p> <p>What types should I be using here to prevent mypy from failing?</p>
<python><generics><types><mypy>
2023-01-22 15:30:03
0
5,912
Kris Harper
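One way that should satisfy mypy (a sketch; on Python 3.11+ `typing.Self` is the cleaner option): don't reuse the module-level TypeVar in the concrete override, since a TypeVar in a return position with no TypeVar'd argument is exactly what mypy complains about. Annotate the override with the concrete class instead:

```python
from abc import ABC, abstractmethod
from typing import TypeVar

U = TypeVar("U", bound="ThirdPartyClass")
T = TypeVar("T", bound="A")

class ThirdPartyClass:
    @classmethod
    def create(cls: type[U]) -> U:
        return cls()

class A(ABC):
    @classmethod
    @abstractmethod
    def f(cls: type[T]) -> T: ...

class B(ThirdPartyClass, A):
    @classmethod
    def f(cls) -> "B":
        # returning the concrete class directly avoids binding the
        # free TypeVar T in the override
        return cls.create()
```

At runtime the behavior is unchanged; only the annotations differ.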
75,201,521
13,943,207
Panel is overlapping and has a wrong ratio in mplfinance plot
<p>I'm trying to plot a subplot but there are two problems. <br> #1 The <code>panel_ratio</code> setting <code>(6,1)</code> is unnoticed. <br> #2 The y axis of the top panel juts down and overlaps the y axis of the bottom panel, so that the bars are trimmed in the top panel</p> <p>What is wrong with the code?</p> <pre><code>import pandas as pd import numpy as np from matplotlib.animation import FuncAnimation import mplfinance as mpf times = pd.date_range(start='2022-01-01', periods=50, freq='ms') def get_rsi(df, rsi_period): chg = df['close'].diff(1) gain = chg.mask(chg&lt;0,0) loss = chg.mask(chg&gt;0,0) avg_gain = gain.ewm(com=rsi_period-1, min_periods=rsi_period).mean() avg_loss = loss.ewm(com=rsi_period-1, min_periods=rsi_period).mean() rs = abs(avg_gain/avg_loss) rsi = 100 - (100/(1+rs)) return rsi df = pd.DataFrame(np.random.randint(3000, 3100, (50, 1)), columns=['open']) df['high'] = df.open+5 df['low'] = df.open-2 df['close'] = df.open df['rsi14'] = get_rsi(df, 14) df.set_index(times, inplace=True) lows_peaks = df.low.nsmallest(5).index fig = mpf.figure(style=&quot;charles&quot;,figsize=(7,8)) ax1 = fig.add_subplot(1,1,1) ax2 = fig.add_subplot(2,1,2) ap0 = [ mpf.make_addplot(df['rsi14'],color='g', ax=ax2, ylim=(10,90), panel=1) ] mpf.plot(df, ax=ax1, ylim=(2999,3104), addplot=ap0, panel_ratios=(6,1)) mpf.show() </code></pre> <p><a href="https://i.sstatic.net/8yFLK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8yFLK.png" alt="enter image description here" /></a></p>
<python><matplotlib><mplfinance>
2023-01-22 15:13:38
1
552
stanvooz
75,201,514
1,417,053
Sending Pickled Objects Using Pickle5's Out-of-Band Buffers Over the Network
<p>I've been using PyArrow to do my Serialization/Deserialization of custom made objects and have been searching for ways to use Pickle's new <code>protocol 5</code> to replace PyArrow as discussed <a href="https://github.com/apache/arrow/issues/11239" rel="nofollow noreferrer">here</a>.</p> <p>I want to send the serialized data of Pickle's new out-of-band buffer using a ZMQ socket to another ZMQ server. Here's an example code:</p> <p>Server:</p> <pre><code>import numpy as np class Person: def __init__(self, Thumbnail: np.ndarray = None): if Thumbnail is not None: self.Thumbnail: np.ndarray = Thumbnail else: self.Thumbnail: np.ndarray = np.random.rand(256, 256, 3) def create_socket(ip, port, send_timeout=20000, receive_timeout=None): &quot;&quot;&quot; :param ip: Server IP :param port: Server Port :param send_timeout: Send Timeout in MilliSeconds :param receive_timeout: Receive Timeout in MilliSeconds &quot;&quot;&quot; import zmq try: print('Creating Socket @ tcp://%s:%s' % (ip, port)) context = zmq.Context() socket = context.socket(zmq.REP) socket.bind(&quot;tcp://*:%s&quot; % port) if send_timeout is not None: socket.setsockopt(zmq.SNDTIMEO, send_timeout) if receive_timeout is not None: socket.setsockopt(zmq.RCVTIMEO, receive_timeout) # Don't Linger Pending Messages As Soon As The Socket Is Closed socket.setsockopt(zmq.LINGER, 0) # Messages Will Be Queued ONLY For Completed Socket Connections socket.setsockopt(zmq.IMMEDIATE, 1) return socket except Exception as e: print(&quot;\nCouldn't Create Socket on tcp://%s:%s =&gt; %s\n&quot; % (ip, port, e)) raise e def connect_to_socket(ip, port, send_timeout=None, receive_timeout=None): import zmq try: print(&quot;Connecting to the Socket @ tcp://%s:%s&quot; % (ip, port)) context = zmq.Context() socket = context.socket(zmq.REQ) socket.connect(&quot;tcp://%s:%s&quot; % (ip, port)) if send_timeout is not None: socket.setsockopt(zmq.SNDTIMEO, send_timeout) if receive_timeout is not None: socket.setsockopt(zmq.RCVTIMEO, 
receive_timeout) return socket except Exception as e: print(&quot;Couldn't Connect to Socket @ tcp://%s:%s&quot;, ip, port, e) raise e server_socket = create_socket('localhost', 20000) while True: data = server_socket.recv_multipart() print(data) server_socket.send_string(&quot;OK!&quot;) </code></pre> <p>Client:</p> <pre><code>import pickle import numpy as np class Person: def __init__(self, Thumbnail: np.ndarray = None): if Thumbnail is not None: self.Thumbnail: np.ndarray = Thumbnail else: self.Thumbnail: np.ndarray = np.random.rand(256, 256, 3) def create_socket(ip, port, send_timeout=20000, receive_timeout=None): &quot;&quot;&quot; :param ip: Server IP :param port: Server Port :param send_timeout: Send Timeout in MilliSeconds :param receive_timeout: Receive Timeout in MilliSeconds &quot;&quot;&quot; import zmq try: print('Creating Socket @ tcp://%s:%s' % (ip, port)) context = zmq.Context() socket = context.socket(zmq.REP) socket.bind(&quot;tcp://*:%s&quot; % port) if send_timeout is not None: socket.setsockopt(zmq.SNDTIMEO, send_timeout) if receive_timeout is not None: socket.setsockopt(zmq.RCVTIMEO, receive_timeout) # Don't Linger Pending Messages As Soon As The Socket Is Closed socket.setsockopt(zmq.LINGER, 0) # Messages Will Be Queued ONLY For Completed Socket Connections socket.setsockopt(zmq.IMMEDIATE, 1) return socket except Exception as e: print(&quot;\nCouldn't Create Socket on tcp://%s:%s =&gt; %s\n&quot; % (ip, port, e)) raise e def connect_to_socket(ip, port, send_timeout=None, receive_timeout=None): import zmq try: print(&quot;Connecting to the Socket @ tcp://%s:%s&quot; % (ip, port)) context = zmq.Context() socket = context.socket(zmq.REQ) socket.connect(&quot;tcp://%s:%s&quot; % (ip, port)) if send_timeout is not None: socket.setsockopt(zmq.SNDTIMEO, send_timeout) if receive_timeout is not None: socket.setsockopt(zmq.RCVTIMEO, receive_timeout) return socket except Exception as e: print(&quot;Couldn't Connect to Socket @ tcp://%s:%s&quot;, ip, 
port, e) raise e PERSONS = [Person() for i in range(100)] buffers = [] persons_pickled = pickle.dumps(PERSONS, protocol=5, buffer_callback=buffers.append) socket_client = connect_to_socket('localhost', 20000) while True: socket_client.send(persons_pickled) reply = socket_client.recv() print(reply) </code></pre> <p>I don't know how to send the two serialized objects (<code>persons_pickled</code> and <code>buffers</code>) without having to do any unnecessary memory copies. I don't want to have to send them as two separate <code>socket.send()</code> calls because that can introduce new problems down the line.</p> <p>How is this possible?</p>
<python><serialization><deserialization><pickle><zeromq>
2023-01-22 15:12:39
1
2,620
Cypher
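A minimal round-trip sketch of the framing the asker needs, with `bytearray` payloads standing in for the numpy thumbnails to stay dependency-free: frame 0 is the pickle stream, the remaining frames are the raw out-of-band buffers. Each `PickleBuffer` exposes the buffer protocol, so with pyzmq the same list could go through one `socket.send_multipart(frames, copy=False)` call (the zmq part is an assumption here, not executed):

```python
import pickle

# Objects whose __reduce_ex__ emits PickleBuffer (numpy arrays do this
# natively under protocol 5) are serialized out-of-band when a
# buffer_callback is supplied; list.append returns None, which signals
# "keep this buffer out-of-band".
payloads = [pickle.PickleBuffer(bytearray(b'x' * 32)) for _ in range(3)]

buffers = []
head = pickle.dumps(payloads, protocol=5, buffer_callback=buffers.append)

# one multipart message: pickle stream first, then the raw buffers;
# with pyzmq, roughly: socket.send_multipart(frames, copy=False)
frames = [head] + [b.raw() for b in buffers]

# receiving side: first frame is the pickle, the rest are the buffers
restored = pickle.loads(frames[0], buffers=frames[1:])
```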
75,201,456
3,817,518
Python Dynamic Programming Problem - ( 2 dimension recursion stuck in infinite loop )
<p>In the book &quot;A Practical Guide to Quantitative Finance Interview&quot;, there is a question called Dynamic Card Game (Section 5.3, Dynamic Programming).</p> <p>The solution according to the book is basically the following:</p> <p><code>E[f(b,r)] = max(b−r,(b/(b+r))∗E[f(b−1,r)]+(r/(b+r))∗E[f(b,r−1)])</code></p> <p>with the following boundary conditions:</p> <p><code>f(0,r)=0, f(b,0)=b</code></p> <p>I tried implementing it in Python as follows:</p> <pre><code>def f(b,r): if b == 0: return 0 elif r == 0: return b else: var = (b/(b+r)) * f(b-1, r) + (r/(b+r)) * f(b, r-1) return max( b-r, var ) print(&quot;The solution is&quot;) print(f(26,26)) </code></pre> <p>But, for some reason, the above code gets stuck in what looks like an infinite loop and the program does not return anything for large inputs such as <code>f(26,26)</code>.</p> <p>It works fine for smaller numbers. For example, <code>f(5,5)</code> would return <code>1.11904</code> immediately.</p> <p>Can anyone explain what I am doing wrong here in the code?</p>
<python><recursion><dynamic-programming><infinite-recursion>
2023-01-22 15:04:00
3
1,986
nyan314sn
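The recursion isn't actually infinite; without memoization the number of calls grows exponentially in `b + r` because the same subproblems are recomputed over and over. Caching the results makes the work proportional to `b * r`:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(b, r):
    """Expected value: stop now (b - r) or draw another card."""
    if b == 0:
        return 0
    if r == 0:
        return b
    draw = (b / (b + r)) * f(b - 1, r) + (r / (b + r)) * f(b, r - 1)
    return max(b - r, draw)

print(f(26, 26))  # returns immediately with memoization
```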
75,201,404
7,446,003
How to make sequential signup pages with Django allauth?
<p>I currently have a single page signup form implemented with allauth</p> <pre><code>from django.contrib.auth.models import AbstractUser class User(AbstractUser): email = models.EmailField(_('Professional email address'), unique=True) username = models.CharField(_(&quot;User Name&quot;), blank=False, max_length=255, unique=True) first_name = models.CharField(_(&quot;First Name&quot;), null=True, max_length=255, default='') last_name = models.CharField(_(&quot;Last Name&quot;), null=True, max_length=255, default='') country = CountryField(_(&quot;Country of Practice&quot;), blank_label='(Country of Practice)', blank = False, default='GB') terms = models.BooleanField(verbose_name=_('I have read and agree to the terms and conditions'), default=False) def get_absolute_url(self): return reverse( &quot;users:detail&quot;, kwargs={&quot;username&quot;: self.username} ) objects = UserManager() </code></pre> <p>And this is the forms.py</p> <pre><code>class UserCreationForm(forms.UserCreationForm): error_message = forms.UserCreationForm.error_messages.update( {&quot;duplicate_username&quot;: _(&quot;This username has already been taken.&quot;)} ) username = CharField(label='User Name', widget=TextInput(attrs={'placeholder': 'User Name'})) class Meta(forms.UserCreationForm.Meta): model = User fields = ['username', 'email', 'first_name', 'last_name', 'password1', 'password2', 'terms'] field_order = ['username', 'email', 'first_name', 'last_name', 'password1', 'password2', 'terms'] def clean_terms(self): is_filled = self.cleaned_data['terms'] if not is_filled: raise forms.ValidationError('This field is required') return is_filled def clean_username(self): username = self.cleaned_data[&quot;username&quot;] if self.instance.username == username: return username try: User._default_manager.get(username=username) except User.DoesNotExist: return username raise ValidationError( self.error_messages[&quot;duplicate_username&quot;] ) </code></pre> <p>I would like however for the first 
sign up page to have a ‘next’ button at the bottom, and then there would be a second page where the user inputs separate details (the data input here might vary based on the inputs on the first page). The Django ‘form tools’ form wizard seems well suited to this, but I can’t work out how to integrate it with allauth.</p> <p>Any suggestions much appreciated</p>
<python><django>
2023-01-22 14:57:03
0
422
RobMcC
75,201,226
7,575,552
Replacing a Keras layer in a pretrained model with another layer
<p>I am using Keras with Tensorflow version 2.7 as backend. I am referring to the stackoverflow post at <a href="https://stackoverflow.com/questions/45306433/removing-then-inserting-a-new-middle-layer-in-a-keras-model/45309508#45309508">Removing then Inserting a New Middle Layer in a Keras Model</a>. I aim to instantiate an Imagenet-pretrained VGG16 model and replace every MaxPooling2D layer by the AveragePooling2D layer:</p> <pre><code>import os from tensorflow import keras from tensorflow.keras import backend as K from tensorflow.keras.layers import * from tensorflow.keras import applications from tensorflow.keras.models import Model model_input = (224,224,3) model = applications.VGG16(include_top=False, weights='imagenet', input_shape=model_input) model.summary() for layer in tuple(model.layers): layer_type = type(layer).__name__ if layer.__name__ == 'MaxPooling2D': pool_name = layer.name + &quot;_averagepooling2d&quot; pool = AveragePooling2D() if layer_type == &quot;MaxPooling2D&quot; else pool(name=pool_name) model.add(pool) model.summary() </code></pre> <p>I get the following error:</p> <pre><code> File &quot;C:\Users\AppData\Local\Temp\2/ipykernel_26864/1200445239.py&quot;, line 15, in &lt;module&gt; if layer.__name__ == 'MaxPooling2D': AttributeError: 'InputLayer' object has no attribute '__name__' </code></pre> <p>Also, I am not sure if this is the right way to replace the MaxPooling layers with the AveragePooling layer in all types of pretrained models including those with skip connections and dense blocks. Requesting code correction in this regard.</p>
<python><tensorflow><keras>
2023-01-22 14:29:37
2
1,189
shiva
75,201,174
4,576,519
How to create a recurrent connection between two Dense layers in Keras?
<p>Before starting, there is a <a href="https://stackoverflow.com/questions/63899389/how-to-create-a-recurrent-connection-between-2-layers-in-tensorflow-keras">very similar question</a>, but it was asked 2 years ago and I would like to do this <em>without</em> creating a custom class. Essentially, I have the simple feedforward neural network:</p> <img src="https://i.sstatic.net/JG0ZX.png" width="300" /> <p>Which can be created with:</p> <pre><code>import keras inputs = keras.layers.Input(1,) hidden1 = keras.layers.Dense(10, name='hidden1')(inputs) hidden2 = keras.layers.Dense(10, name='hidden2')(hidden1) outputs = keras.layers.Dense(1, name='outputs')(hidden2) model = keras.Model(inputs, outputs) </code></pre> <p>But I would like:</p> <img src="https://i.sstatic.net/9CI5U.png" width="410" /> <p>I know that I can concatenate the <code>inputs</code> and <code>hidden2</code> layer outputs like:</p> <pre><code>concat_layer = keras.layers.Concatenate()([inputs, outputs]) </code></pre> <p>but how can I replace the inputs of the <code>hidden1</code> layer?</p>
<python><tensorflow><keras><recurrence>
2023-01-22 14:23:19
0
6,829
Thomas Wagenaar
75,201,041
2,748,513
How to print matching json element value from nested json string
<p>My JSON file has a list of nested dicts. I need to print only the username if <code>type==Developer-Verified</code> and its <code>value==1</code>. I managed to print just the approvals list, but was unable to go further.</p> <pre><code>$ cat myjson_file | python3.6 -c &quot;import sys, json; approvals=json.load(sys.stdin)['currentPatchSet']['approvals']; print(json.dumps(approvals, indent=4))&quot; [ { &quot;type&quot;: &quot;Developer-Verified&quot;, &quot;description&quot;: &quot;Developer-Verified&quot;, &quot;value&quot;: &quot;1&quot;, &quot;grantedOn&quot;: 1581451370, &quot;by&quot;: { &quot;name&quot;: &quot;Donald Snifer&quot;, &quot;email&quot;: &quot;dsnifer@gmail.com&quot;, &quot;username&quot;: &quot;dsnifer&quot; } }, { &quot;type&quot;: &quot;Code-Review&quot;, &quot;description&quot;: &quot;Code-Review&quot;, &quot;value&quot;: &quot;2&quot;, &quot;grantedOn&quot;: 1581623684, &quot;by&quot;: { &quot;name&quot;: &quot;Brandon Welch&quot;, &quot;email&quot;: &quot;bwelch@gmail.com&quot;, &quot;username&quot;: &quot;bwelch&quot; } }, { &quot;type&quot;: &quot;Developer-Verified&quot;, &quot;description&quot;: &quot;Developer-Verified&quot;, &quot;value&quot;: &quot;1&quot;, &quot;grantedOn&quot;: 1581451370, &quot;by&quot;: { &quot;name&quot;: &quot;Hamlin Damer&quot;, &quot;email&quot;: &quot;hdamer@gmail.com&quot;, &quot;username&quot;: &quot;hdamer&quot; } } ] $ </code></pre> <p>I need to print just <code>dsnifer hdamer</code></p> <p>I tried to go further with the below and other approaches, and I keep failing: <code>python3.6 -c &quot;import sys, json; approvals=json.load(sys.stdin)['currentPatchSet']['approvals']; print( k for k,v in approvals[0].items())&quot;</code></p>
<python><python-3.6>
2023-01-22 14:04:37
1
3,221
rodee
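A list comprehension over the approvals does it; the same expression can be dropped into the asker's `python3.6 -c "…"` one-liner in place of the `print(...)` call. Shown here with the approvals inlined (abridged to the relevant keys) so it runs standalone:

```python
import json

doc = """[
  {"type": "Developer-Verified", "value": "1",
   "by": {"name": "Donald Snifer", "username": "dsnifer"}},
  {"type": "Code-Review", "value": "2",
   "by": {"name": "Brandon Welch", "username": "bwelch"}},
  {"type": "Developer-Verified", "value": "1",
   "by": {"name": "Hamlin Damer", "username": "hdamer"}}
]"""

approvals = json.loads(doc)

# keep only Developer-Verified approvals with value "1" (values are strings)
names = [a['by']['username']
         for a in approvals
         if a['type'] == 'Developer-Verified' and a['value'] == '1']
print(' '.join(names))  # dsnifer hdamer
```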
75,201,003
3,672,883
How can I reference a variable from one section in another section in pyproject.toml?
<p>Hello, I am building a pyproject.toml file and I have the following two sections:</p> <pre><code>[tool.poetry] version = &quot;0.1.0&quot; [tool.commitizen] version = &quot;0.1.0&quot; </code></pre> <p>As you can see, poetry uses the version in its section and commitizen in its section. The question is: how can I set only one version and share it between sections?</p> <p>Thanks</p>
<python><python-poetry><pyproject.toml><commitizen>
2023-01-22 13:59:15
2
5,342
Tlaloc-ES
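TOML itself has no variable references, so the value cannot be shared inside the file. Commitizen, however, can be pointed at poetry's version instead of duplicating it (a sketch; `version_provider = "poetry"` assumes a commitizen 3.x release, and `version_files` is the older bump-time rewrite mechanism):

```toml
[tool.poetry]
version = "0.1.0"

[tool.commitizen]
# read the current version from [tool.poetry] instead of duplicating it
version_provider = "poetry"
# alternatively, keep commitizen's own version and have `cz bump`
# rewrite poetry's copy as well:
# version = "0.1.0"
# version_files = ["pyproject.toml:^version"]
```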
75,200,983
4,432,671
How to implement Mathematica's MixedRadix in NumPy?
<p>Mathematica has a built-in function <a href="https://reference.wolfram.com/language/ref/MixedRadix.html" rel="nofollow noreferrer">MixedRadix</a> which maps an integer to a list of digits in a mixed radix numerical system.</p> <p>Here's my Python version of the same:</p> <pre><code>def mixed_radix(num, bases): digits = [] for base in bases[::-1]: num, digit = divmod(num, base) digits.append(digit) return digits[::-1] </code></pre> <p>Is there an idiomatic/built-in way of doing this in NumPy?</p>
<python><numpy><wolfram-mathematica>
2023-01-22 13:56:05
1
3,737
xpqz
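There is a built-in that does exactly this digit expansion: `np.unravel_index` converts a flat index into per-axis indices for a given shape, which is mixed-radix conversion with the shape as the bases (the value must satisfy `num < prod(bases)`):

```python
import numpy as np

def mixed_radix(num, bases):
    # reference pure-Python version from the question
    digits = []
    for base in bases[::-1]:
        num, digit = divmod(num, base)
        digits.append(digit)
    return digits[::-1]

# np.unravel_index treats its shape argument as a mixed-radix system
print(np.unravel_index(10, (2, 3, 4)))   # (0, 2, 2)
print(mixed_radix(10, [2, 3, 4]))        # [0, 2, 2]
```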
75,200,875
1,436,800
How to write permissions in a viewset with conditional statements in DRF?
<p>I have a viewset written in DRF:</p> <pre><code>class MyViewSet(ModelViewSet): serializer_class = MySerializer queryset = models.MyClass.objects.all() def get_serializer_class(self): permission = self.request.user.permission if permission=='owner' or permission=='admin': return self.serializer_class else: return OtherSerializer def perform_create(self, serializer): permission = self.request.user.permission if permission=='owner' or permission=='admin': serializer.save() else: employee = models.Employee.objects.get(user=self.request.user) serializer.save(employee=employee) </code></pre> <p>Here, I am using the following statements in both get_serializer_class and perform_create which looks like a repetitive code:</p> <pre><code>permission = self.request.user.permission if permission=='owner' or permission=='admin': </code></pre> <p>Is there any way to write it once and then use it as a permission_class somehow?</p>
<python><django><django-rest-framework><django-views><django-permissions>
2023-01-22 13:40:20
1
315
Waleed Farrukh
75,200,874
14,952,975
How to display text in a more readable format in Python?
<p>I'm very new to Python,</p> <p>and I have <strong>diff</strong> text like this:</p> <pre class="lang-xml prettyprint-override"><code> &lt;revision&gt; - &lt;count&gt;22&lt;/count&gt; + &lt;count&gt;33&lt;/count&gt; &lt;/revision&gt; </code></pre> <p>This is a config-file diff.</p> <p>Everywhere you see <strong>+</strong>, a line was <code>added</code>,</p> <p>and everywhere you see <strong>-</strong>, something was <code>removed</code>.</p> <p>It is like git.</p> <p><strong>The problem</strong> is that the above text is not readable.</p> <p>I want something like this:</p> <pre><code>revision-&gt;count : 22 (old) 33 (new) </code></pre> <p><strong>So</strong>, how can I start doing that?</p> <p>Do I need to use a library?</p>
<python>
2023-01-22 13:40:17
1
1,658
morteza mortezaie
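One possible starting point for the diff question above, sketched with only the standard library. It assumes each `-` line is immediately followed by its matching `+` line and that values sit in simple `<tag>value</tag>` elements; context lines maintain a stack of enclosing tags so the output can show a `revision->count` style path.

```python
# Minimal sketch: pair removed/added lines from a git-style diff of simple
# XML and print them as "path : old (old) new (new)".
import re

diff_text = """\
 <revision>
- <count>22</count>
+ <count>33</count>
 </revision>"""

def summarize(diff):
    context = []        # stack of enclosing element names from context lines
    pairs = []
    pending_old = None  # the last "-" line, waiting for its "+" partner
    for line in diff.splitlines():
        marker, body = line[:1], line[1:].strip()
        tag = re.match(r"<(/?)(\w+)>$", body)     # bare open/close tag
        if marker == " " and tag:
            if tag.group(1):
                context.pop()                     # closing tag: leave scope
            else:
                context.append(tag.group(2))      # opening tag: enter scope
            continue
        val = re.match(r"<(\w+)>(.*?)</\1>$", body)  # <tag>value</tag>
        if not val:
            continue
        if marker == "-":
            pending_old = val
        elif marker == "+" and pending_old is not None:
            path = "->".join(context + [val.group(1)])
            pairs.append(f"{path} : {pending_old.group(2)} (old) {val.group(2)} (new)")
            pending_old = None
    return pairs

print(summarize(diff_text))   # ['revision->count : 22 (old) 33 (new)']
```

For real-world diffs a proper diff parser or XML library would be more robust, but the question's small example does not need one.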
75,200,764
19,694,624
Can't run Chrome in headless mode using Selenium
<p>So here's my code first:</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.chrome.options import Options import time from fake_useragent import UserAgent import random ua = UserAgent() options = Options() chrome_options = webdriver.ChromeOptions() chrome_options.add_argument('--blink-settings=imagesEnabled=false') chrome_options.add_argument('--headless') chrome_options.add_argument(f'user-agent={ua.random}') driver = webdriver.Chrome(options=options, chrome_options=chrome_options) driver.maximize_window() url = &quot;https://magiceden.io/marketplace/hasuki&quot; driver.get(url) element = driver.find_element(By.CSS_SELECTOR, &quot;#content &gt; div.tw-w-full.tw-py-0.sm\:tw-mt-0 &gt; div.tw-flex.tw-relative &gt; div.tw-flex-auto.tw-max-w-full.tw-pt-0 &gt; div.tw-flex.tw-items-center.md\:tw-justify-between.tw-gap-2.md\:tw-gap-4.md\:tw-sticky.tw-top-\[133px\].tw-bg-gray-100.tw-z-10.tw-flex-wrap.tw-p-5 &gt; div.tw-flex.tw-flex-grow.tw-justify-center.tw-gap-x-2 &gt; button &gt; span:nth-child(4)&quot;) print(f&quot;The current instant sell price is {element.text}&quot;) </code></pre> <p>When I run it, I get weird long error, that ends with:</p> <pre><code>Backtrace: (No symbol) [0x00806643] (No symbol) [0x0079BE21] (No symbol) [0x0069DA9D] (No symbol) [0x006D1342] (No symbol) [0x006D147B] (No symbol) [0x00708DC2] (No symbol) [0x006EFDC4] (No symbol) [0x00706B09] (No symbol) [0x006EFB76] (No symbol) [0x006C49C1] (No symbol) [0x006C5E5D] GetHandleVerifier [0x00A7A142+2497106] GetHandleVerifier [0x00AA85D3+2686691] GetHandleVerifier [0x00AABB9C+2700460] GetHandleVerifier [0x008B3B10+635936] (No symbol) [0x007A4A1F] (No symbol) [0x007AA418] (No symbol) [0x007AA505] (No symbol) [0x007B508B] BaseThreadInitThunk [0x75EB00F9+25] RtlGetAppContainerNamedObjectPath [0x77A27BBE+286] RtlGetAppContainerNamedObjectPath [0x77A27B8E+238] </code></pre> <p>BUT if I comment out 
&quot;chrome_options.add_argument('--headless')&quot;, my code works perfectly fine. What's the issue here? I suppose the problem is that website doesn't let me use headless mode, how can I solve this?</p> <p>I want my program to run in headless mode, but I get restricted either by the website or chrome browser.</p>
<python><selenium><automation><screen-scraping><google-chrome-headless>
2023-01-22 13:22:17
1
303
syrok
75,200,651
11,564,487
VS Code IntelliSense inside R magic
<p>Suppose that one is using an <code>ipynb</code> notebook with <code>R</code> magic cells. Can <code>IntelliSense</code> work for the <code>R</code> code inside the R magic cells?</p> <p>I have extensively searched the web but found nothing so far.</p>
<python><r><visual-studio-code>
2023-01-22 13:05:11
1
27,045
PaulS
75,200,579
9,406,165
How to create an abstract subclass of SQLAlchemy's Table(Base) class
<p>I am using SQLAlchemy to create tables in my project. I have a requirement where all these tables should have some specific attributes and functions. I want to create a structure such that all tables inherit from an abstract class which includes these attributes and functions.</p> <p>Here's an example of what I want to achieve:</p> <pre class="lang-py prettyprint-override"><code>Base = declarative_base() # pseudo class Table(ABC, Base): # like @abstractattribute some_attribtue = list() @staticmethod def some_func(self): pass class Users(Table): __tablename__ = &quot;users&quot; user_id = Column(Integer, primary_key=True) username = Column(String, nullable=False) some_attribute = list() @staticmethod def some_func(): do_something() </code></pre> <p>By doing this, I hope that I can use these classes in something like:</p> <pre class="lang-py prettyprint-override"><code>Base.metadata.create_all(engine) </code></pre> <p>while also being able to call:</p> <pre class="lang-py prettyprint-override"><code>Users.some_func() </code></pre> <p>I understand that this code wouldn't work as is, due to issues like having ABC and Base at the same time, not having <code>@abstractattribute</code>, and needing to add <code>__tablename__</code> and a Primary-Key Column to the class Table.</p> <p>I am thinking of using a decorator to achieve this, but I am not sure how to implement it correctly. This is the outline of my idea:</p> <pre class="lang-py prettyprint-override"><code>class Table(ABC): some_attribute=None @staticmethod def some_func(self): pass # create decorator def sql_table(): def decorator(abstract_class): class SQLTable(Base): # How do I name the class correctly? __tablename__ = abstract_class.__dict__[&quot;__tablename__&quot;] some_attribute = abstract_class.__dict__[&quot;some_attribute&quot;] for name, obj in abstract_class.__dict__.items(): if isinstance(obj, Column): locals()[name] = obj # How do I get the some_func function? 
@sql_table class Users(Table): __tablename__ = &quot;users&quot; user_id = Column(Integer, primary_key=True) username = Column(String, nullable=False) some_attribute = &quot;some_val&quot; @staticmethod def some_func(): do_something() </code></pre> <p>Any help or suggestions on how to implement this (not necessarily with decorators) would be greatly appreciated.</p>
<python><sqlalchemy>
2023-01-22 12:52:36
1
507
CodingTil
75,200,487
16,169,533
Stripe payment do something when payment is successfull Django
<p>I have an app about posting an advertises and by default i made an expiration date for</p> <p>every advertise (30 days) now i wanna use stripe to extend the expiration date.</p> <p>what i have so far is the checkout but i want when the payment i success i update the database.</p> <p>my checkout view :</p> <pre><code> class StripeCheckoutView(APIView): def post(self, request, *args, **kwargs): adv_id = self.kwargs[&quot;pk&quot;] adv = Advertise.objects.get(id = adv_id) try: adv = Advertise.objects.get(id = adv_id) checkout_session = stripe.checkout.Session.create( line_items=[ { 'price_data': { 'currency':'usd', 'unit_amount': 50 * 100, 'product_data':{ 'name':adv.description, } }, 'quantity': 1, }, ], metadata={ &quot;product_id&quot;:adv.id }, mode='payment', success_url=settings.SITE_URL + '?success=true', cancel_url=settings.SITE_URL + '?canceled=true', ) return redirect(checkout_session.url) except Exception as e: return Response({'msg':'something went wrong while creating stripe session','error':str(e)}, status=500) </code></pre> <p>Models:</p> <pre><code> # Create your models here. class Advertise(models.Model): owner = models.ForeignKey(User, on_delete=models.CASCADE, related_name=&quot;advertise&quot;) category = models.CharField(max_length= 200, choices = CATEGORY) location = models.CharField(max_length= 200, choices = LOCATIONS) description = models.TextField(max_length=600) price = models.FloatField(max_length=100) expiration_date = models.DateField(default = Expire_date, blank=True, null=True) created_at = models.DateTimeField(auto_now_add=True, blank=True, null=True) updated_at = models.DateTimeField(auto_now=True, blank=True, null=True) #is_active class Meta: ordering = ['created_at'] def __str__(self): return self.category </code></pre> <p>So what i want to do is check if the payment is successful nad if so i extend the expiration_date of this adv.</p> <p>Thanks in advance.</p>
<python><django><django-rest-framework><stripe-payments>
2023-01-22 12:41:37
1
424
Yussef Raouf Abdelmisih
75,200,316
14,282,714
ModuleNotFoundError: No module named 'nbformat'
<p>I would like to run python in a <code>Quarto</code> document. I followed the <a href="https://quarto.org/docs/computations/python.html" rel="nofollow noreferrer">docs</a> about installing and using python in Quarto, but the error stays. Here is some reproducible code:</p> <pre><code>--- title: &quot;matplotlib demo&quot; format: html: code-fold: true jupyter: python3 --- For a demonstration of a line plot on a polar axis, see @fig-polar. ```{python} #| label: fig-polar #| fig-cap: &quot;A line plot on a polar axis&quot; import numpy as np import matplotlib.pyplot as plt r = np.arange(0, 2, 0.01) theta = 2 * np.pi * r fig, ax = plt.subplots( subplot_kw = {'projection': 'polar'} ) ax.plot(theta, r) ax.set_rticks([0.5, 1, 1.5, 2]) ax.grid(True) plt.show() ``` </code></pre> <p>Error output:</p> <pre><code>Starting python3 kernel...Traceback (most recent call last): File &quot;/Applications/RStudio.app/Contents/Resources/app/quarto/share/jupyter/jupyter.py&quot;, line 21, in &lt;module&gt; from notebook import notebook_execute, RestartKernel File &quot;/Applications/RStudio.app/Contents/Resources/app/quarto/share/jupyter/notebook.py&quot;, line 16, in &lt;module&gt; import nbformat ModuleNotFoundError: No module named 'nbformat' </code></pre> <p>I also checked with Quarto if Jupyter is installed in the terminal like this:</p> <pre><code>quarto check jupyter </code></pre> <p>Output:</p> <pre><code>[✓] Checking Python 3 installation....OK Version: 3.7.11 (Conda) Path: /Users/quinten/Library/r-miniconda/envs/r-reticulate/bin/python Jupyter: 4.12.0 Kernels: julia-1.8, python3 [✓] Checking Jupyter engine render....OK </code></pre> <p>Which seems to be OK. 
So I was wondering if anyone knows how to fix this error?</p> <hr /> <p><strong>Edit: output conda info --envs</strong></p> <p>Output of conda info:</p> <pre><code># conda environments: # /Users/quinten/.julia/conda/3 /Users/quinten/Library/r-miniconda /Users/quinten/Library/r-miniconda/envs/r-reticulate /Users/quinten/Library/rminiconda/general /Users/quinten/opt/anaconda3 base * /Users/quinten/opt/miniconda3 </code></pre> <hr /> <p><strong>Edit: conda install Jupyter</strong></p> <p>Jupyter was installed with conda (thanks to @shafee); now when I check with quarto whether Jupyter exists, I get the following output:</p> <pre><code>quarto check jupyter [✓] Checking Python 3 installation....OK Version: 3.7.11 (Conda) Path: /Users/quinten/Library/r-miniconda/envs/r-reticulate/bin/python Jupyter: 4.11.1 Kernels: julia-1.8, python3 (/) Checking Jupyter engine render....Unable to load extension: pydevd_plugins.extensions.types.pydevd_plugin_pandas_types Unable to load extension: pydevd_plugins.extensions.types.pydevd_plugin_pandas_types [✓] Checking Jupyter engine render....OK </code></pre>
<python><jupyter><quarto>
2023-01-22 12:14:29
1
42,724
Quinten
75,200,206
19,776,016
How to make .shape return (4, 1) for a numpy array with one row?
<p>For arrays with only one row, <code>x_data.shape</code> returns <code>(4,)</code> or <code>(5,)</code>; is it possible to modify it to return <code>(4,1)</code>? Sometimes when I pass a one-dimensional matrix to my function it runs into an error because <code>m = x_data.shape[1]</code> is not defined.</p>
<python><numpy>
2023-01-22 11:58:15
2
339
Gaff
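A short sketch of the usual workaround for the shape question above: give the array an explicit second dimension with `reshape(-1, 1)` (or `x[:, None]`) before code that reads `shape[1]`. Note that `np.atleast_2d` would instead produce shape `(1, 4)`, a single row rather than a single column.

```python
# Turn a 1-D array of shape (4,) into an explicit column of shape (4, 1).
import numpy as np

x = np.array([1, 2, 3, 4])     # x.shape == (4,)
col = x.reshape(-1, 1)         # -1 infers the row count; one column
print(col.shape)               # (4, 1)

m = col.shape[1]               # now shape[1] is defined
print(m)                       # 1
```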
75,200,179
16,154,762
Extracting values from a special string type dataframe column
<p>I have a string-type pandas dataframe column:</p> <pre><code>{&quot;min&quot;:[0,1,0.1,0,0,0], &quot;max&quot;:[0,1,0.4,0,0,0]} </code></pre> <p>df:</p> <pre><code>ID min_max_config 1 {&quot;min&quot;:[0,1,0.1,0,0,0], &quot;max&quot;:[0,1,0.4,0,0,0]} 2 {&quot;min&quot;:[0,1,0.1,0,0,0], &quot;max&quot;:[0,1,0.5,0,0,0]} 3 {&quot;min&quot;:[0,1,0.6,0,0,0], &quot;max&quot;:[0,1,0.7,0,0,0]} 4 {&quot;min&quot;:[0,1,0.8,0,0,0], &quot;max&quot;:[0,1,0.2,0,0,0]} </code></pre> <p>I want to make separate columns out of the sum of the values of min and max:</p> <p>output_df:</p> <pre><code>ID. min. max min_max_config 1. 1.1 1.4 {&quot;min&quot;:[0,1,0.1,0,0,0], &quot;max&quot;:[0,1,0.4,0,0,0]} 2. 1.1 1.5 {&quot;min&quot;:[0,1,0.1,0,0,0], &quot;max&quot;:[0,1,0.5,0,0,0]} 3. 1.6 1.7 {&quot;min&quot;:[0,1,0.6,0,0,0], &quot;max&quot;:[0,1,0.7,0,0,0]} 4. 1.8 1.2 {&quot;min&quot;:[0,1,0.8,0,0,0], &quot;max&quot;:[0,1,0.2,0,0,0]} </code></pre> <p>How to achieve this output</p>
<python><pandas><dataframe>
2023-01-22 11:54:31
1
439
genz_on_code
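The heart of the task above, sketched with only the standard library: each cell is a JSON string, so `json.loads` turns it into a dict whose `min`/`max` lists can be summed. With pandas, the same function would be applied per row via `df["min_max_config"].apply(...)` to build the two new columns.

```python
# Parse one min_max_config cell and sum its "min" and "max" lists.
import json

cell = '{"min":[0,1,0.1,0,0,0], "max":[0,1,0.4,0,0,0]}'

def sums(config_str):
    cfg = json.loads(config_str)
    return sum(cfg["min"]), sum(cfg["max"])

mn, mx = sums(cell)
print(mn, mx)   # 1.1 1.4
```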
75,200,177
7,084,115
Python MQTT - Understanding the Scenario
<p>I have the following requirements to implement.</p> <p>The application should offer simple REST API interface taking inputs for add operation and returning calculation result. The application should offer Mqtt interface</p> <ol> <li><p>Subscribe for topic e.g. calculator/add and accept payload e.g. {“number1”:10, “number2”:20}</p> </li> <li><p>Publish result calculator/add/result e.g. { “final_result”:30}</p> </li> </ol> <p>Above points are a bit ambiguous. To my understanding of the above points;</p> <p>So in simply, I should write an <code>/add</code> post endpoint with query parameters and get those two numbers and publish them to the <code>add</code> endpoint. And in the same endpoint, I should write my subscription logic to the same <code>add</code> topic and get the two numbers and add them and then again publish it to the <code>result</code> topic and again subscribe to the <code>result</code> topic and return the result value as a json output?</p> <p>I came up with the following python snippet:</p> <pre><code>from fastapi import FastAPI, Query import json import paho.mqtt.client as mqtt import logging import os # Configuring the logging level logging.basicConfig(level=os.environ.get(&quot;LOGLEVEL&quot;, &quot;INFO&quot;), format='%(levelname)s - %(asctime)s - %(message)s', datefmt='%m-%d %H:%M:%S',) # Creating an instance of FastAPI with a title and a description app = FastAPI(title=&quot;My Sweet Calculator&quot;, description=&quot;A simple service that taking inputs for add operation and returning calculation resul&quot;) # Defining the broker, port, topic and client id to use for the MQTT connection BROKER = 'mqtt.eclipseprojects.io' PORT = 1883 PUBLISHER_TOPIC = &quot;calculator/add/RESULT&quot; SUBSCRIBER_TOPIC = &quot;calculator/add/RESULT&quot; CLIENT_ID = 'CALCULATER_RESULTS' # Defining a callback function that gets triggered when the MQTT client connects to the broker def on_connect(client, userdata, flags, rc): if rc == 0: 
logging.info(&quot;Connected to MQTT Broker!&quot;) else: logging.error(&quot;Failed to connect, return code %d\n&quot;, rc) # Creating an instance of the MQTT client with the defined client id and connecting it to the broker using the defined port mqtt_client = mqtt.Client(CLIENT_ID) mqtt_client.on_connect = on_connect mqtt_client.connect(BROKER, PORT) # endpoint '/add' that accepts two query params 'a' and 'b' and publishing the result to the topic 'calculator' @app.get(&quot;/add&quot;) async def add(a: int = Query(..., gt=0), b: int = Query(..., gt=0)): result = a + b mqtt_client.publish(PUBLISHER_TOPIC, json.dumps({&quot;a&quot;: a, &quot;b&quot;: b})) logging.info(&quot;PUBLISHED values of \'a\' and \'b\'&quot; + &quot; to Topic &quot; + PUBLISHER_TOPIC) return {&quot;result&quot;: result} # Defining a callback function that gets triggered when a message is received on the topic 'calculator'. def on_message(client, userdata, message): payload = json.loads(message.payload) logging.info(&quot;SUBSCRIBED Result: &quot; + str(payload)) # Assign the callback function to the MQTT client, subscribing to the topic 'calculator' and starting the MQTT client loop mqtt_client.loop_start() mqtt_client.subscribe(SUBSCRIBER_TOPIC) mqtt_client.on_message = on_message </code></pre> <p>Can someone help me understand if I have understood this requirement properly and the implementation is right?</p>
<python><mqtt><fastapi>
2023-01-22 11:54:28
0
4,101
Jananath Banuka
75,199,880
5,929,910
How to handle mainloop and serve_forever socket server together in if __name__ == '__main__'
<p>I am using Tkinter which has a QRcode generate button. I want to create a QRcode based on the provided URL and if I click the QRcode generate button then it will generate a QRcode and the URL will be active forever. The code I tried so far.</p> <pre><code>generate_button = tk.Button(my_w,font=22,text='Generate QR code', command=lambda:my_generate()) generate_button.place(relx=0.2, rely=0.5, anchor=CENTER) qrcode_label=tk.Label(my_w) qrcode_label.place(relx=0.6, rely=0.5, anchor=CENTER) link ='http://192.x.x.x:8010' PORT = 8010 def my_generate(): global my_img my_qr = pyqrcode.create(link) my_qr = my_qr.xbm(scale=10) my_img=tk.BitmapImage(data=my_qr) qrcode_label.config(image=my_img) </code></pre> <p>So far everything is cool. Now if I try to activate the server beside the main Tkinter window, seems both two loops are going to conflict and the application gets crashed.</p> <pre><code>if __name__ == '__main__': Handler = http.server.SimpleHTTPRequestHandler httpd = socketserver.TCPServer((&quot;&quot;, PORT), Handler) print(&quot;serving at port&quot;, PORT) httpd.serve_forever() my_w.mainloop() </code></pre> <p>Tried some ways but nothing helps me so far.</p>
<python><sockets><tkinter>
2023-01-22 11:03:34
0
3,137
mhhabib
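A sketch of the usual fix for the question above: run `serve_forever` on a daemon thread so the main thread stays free for `my_w.mainloop()`. The Tkinter part is omitted here so the snippet is self-contained; port 0 asks the OS for any free port.

```python
# Serve HTTP in a background daemon thread while the main thread continues
# (in the real app, the main thread would run the Tkinter mainloop).
import http.server
import socketserver
import threading
import urllib.request

handler = http.server.SimpleHTTPRequestHandler
httpd = socketserver.TCPServer(("127.0.0.1", 0), handler)   # port 0 = any free port
port = httpd.server_address[1]

server_thread = threading.Thread(target=httpd.serve_forever, daemon=True)
server_thread.start()          # main thread is now free for my_w.mainloop()

# quick check that the server answers while the main thread keeps running
status = urllib.request.urlopen(f"http://127.0.0.1:{port}/").status
httpd.shutdown()
httpd.server_close()
print(status)                  # 200
```

Because the thread is a daemon, it dies with the process when the Tkinter window is closed.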
75,199,703
16,589,029
Django Rest Framework fails on setting a new context in the serializer
<p>Django time:</p> <p>I am facing an issue with providing a context to the serializer:</p> <pre class="lang-py prettyprint-override"><code>class CommentSerializer(serializers.ModelSerializer): likes = CustomUserSerializer(many=True,source='likes.all') class Meta: fields = 'likes', model = models.Comment def get_user_like(self,obj): for i in obj.likes.all(): if self.context['user'] in i.values(): return self.context['user'] </code></pre> <p>in the view:</p> <pre class="lang-py prettyprint-override"><code>class CommentView(viewsets.ModelViewSet): serializer_class = serializer.CommentSerializer def get_serializer_context(self): #adding request.user as an extra context context = super(CommentView,self).get_serializer_context() context.update({'user':self.request.user}) return context </code></pre> <p>as you can see, i have overridded <code>get_serializer_context</code> to add user as a context</p> <p>however, in the serializer side, i am getting <code>KeyError:'user'</code> means the key does not exist, any idea how to set a context?</p>
<python><django><django-rest-framework>
2023-01-22 10:31:46
1
766
Ghazi
75,199,675
4,858,867
Use the remove() function to delete elements containing a specific substring from a list
<p>I was trying to delete some elements of a list which contain a specific substring. For this reason, I tried to use <a href="https://docs.python.org/3.9/library/array.html?highlight=remove#array.array.remove" rel="nofollow noreferrer">remove()</a> to solve my problem.</p> <p>However, remove() did not solve my problem, and I even found out that the for-loop didn't loop through every element. Does anyone have any idea why? Based on the Python documentation, I know remove() only deletes the first occurrence of the element.</p> <p>The following is my example code and its output:</p> <pre class="lang-py prettyprint-override"><code>l = ['x-1', 'x-2', 'x-3', 'x-4', 'x-5', 'x-6', 'x-7', 'x-8', 'x-9', 'x-10', 'x-11'] for i in l: print(i) if 'x-1' in i: l.remove(i) print('l after removed element: ', l) print(l) </code></pre> <p>Output</p> <pre class="lang-none prettyprint-override"><code>x-1 l after removed element: ['x-2', 'x-3', 'x-4', 'x-5', 'x-6', 'x-7', 'x-8', 'x-9', 'x-10', 'x-11'] x-3 x-4 x-5 x-6 x-7 x-8 x-9 x-10 l after removed element: ['x-2', 'x-3', 'x-4', 'x-5', 'x-6', 'x-7', 'x-8', 'x-9', 'x-11'] ['x-2', 'x-3', 'x-4', 'x-5', 'x-6', 'x-7', 'x-8', 'x-9', 'x-11'] </code></pre>
<python><arrays>
2023-01-22 10:26:57
1
383
Kai-Chun Lin
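A short sketch of what goes wrong in the loop above, and the usual fix: `remove()` shifts the remaining items left while the for-loop's internal index keeps advancing, so the element right after each removed one is skipped; separately, `'x-1' in i` also matches `'x-10'` and `'x-11'` by substring. Filtering into a new list avoids both surprises.

```python
# Build a new list instead of mutating the one being iterated.
l = ['x-1', 'x-2', 'x-3', 'x-4', 'x-5', 'x-6',
     'x-7', 'x-8', 'x-9', 'x-10', 'x-11']

# keeps only items that do NOT contain the substring 'x-1'
kept = [item for item in l if 'x-1' not in item]
print(kept)   # ['x-2', 'x-3', 'x-4', 'x-5', 'x-6', 'x-7', 'x-8', 'x-9']
```

If an exact match rather than a substring match is wanted, the condition would be `item != 'x-1'` instead.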
75,199,603
5,091,507
Simplify scientific notation/offset in matplotlib axis
<p>I am trying to draw a plot with large data using matplotlib. matplotlib simplifies the numbers on the y-axis which is expected. However, there are too many useless leading zeros that I can not find a way to remove.</p> <p>Here's a reproducible example:</p> <pre><code>import matplotlib.pyplot as plt y = [i for i in range(10400000000000000, 10400750000000000,500000000000)] x = [i for i in range(len(y))] plt.plot(x,y) plt.show() </code></pre> <p>What I get looks like this:</p> <p><a href="https://i.sstatic.net/WngRO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WngRO.png" alt="enter image description here" /></a></p> <p>I know how to disable useOffset but I want the top of the y-axis to keep the scientific notation and just say <code>1e11+1.04e16</code> instead of <code>1e11+1.0400000000e16</code>. Any ideas on how to solve this?</p>
<python><matplotlib><scientific-notation>
2023-01-22 10:12:18
1
1,047
A.A.
75,199,354
302,378
Serve string containing HTML as webpage
<p>This code serves a webpage (using micropython on a microcontroller). I don't think the use of <a href="https://github.com/miguelgrinberg/microdot/blob/main/src/microdot.py" rel="nofollow noreferrer">microdot.py</a> (a Flask-like library) is important for the question, but I apologize that it is not reproducible without that and a microcontroller connected to a wi-fi network.</p> <pre><code>from microdot import Microdot myhtml = &quot;&quot;&quot; &lt;!DOCTYPE html&gt; &lt;html&gt; &lt;p&gt;LED is {state}&lt;/p&gt; &lt;/body&gt; &lt;/html&gt; &quot;&quot;&quot; app = Microdot() @app.route('/') def index(request): state = 'OFF' return str( myhtml.format(state=state) ) if __name__ == '__main__': app.run(debug=True) </code></pre> <p>The browser interprets what it gets as text rather than HTML, so on the browser it looks like the following instead of displaying &quot;LED is OFF&quot; without the HTML tags.</p> <blockquote> <pre><code> &lt;!DOCTYPE html&gt; &lt;html&gt; &lt;p&gt;LED is OFF&lt;/p&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> </blockquote> <p>I imagine a simple adjustment to my string formatting commands should fix this? Omitting the <code>str</code> call doesn't help.</p>
<python><webserver><micropython>
2023-01-22 09:24:28
1
2,601
Alex Holcombe
75,199,166
4,865,723
pandas.cut() with NA values causing "boolean value of NA is ambiguous"
<p>I would like to understand why this code raises a <code>TypeError</code>.</p> <pre><code>import pandas pandas.cut(x=[1, 2, pandas.NA, 4, 5, 6, 7], bins=3) </code></pre> <p>The full error:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/home/user/.local/lib/python3.9/site-packages/pandas/core/reshape/tile.py&quot;, line 293, in cut fac, bins = _bins_to_cuts( File &quot;/home/user/.local/lib/python3.9/site-packages/pandas/core/reshape/tile.py&quot;, line 428, in _bins_to_cuts ids = ensure_platform_int(bins.searchsorted(x, side=side)) File &quot;pandas/_libs/missing.pyx&quot;, line 382, in pandas._libs.missing.NAType.__bool__ TypeError: boolean value of NA is ambiguous </code></pre> <p>Of course the values contain missing (pandas.NA) values. But looking <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html" rel="nofollow noreferrer">into the docs</a>, in the section <em>Notes</em>, it says:</p> <blockquote> <p>Any NA values will be NA in the result. Out of bounds values will be NA in the resulting Series or Categorical object.</p> </blockquote> <p>In my understanding of the docs this shouldn't throw an error.</p>
<python><pandas>
2023-01-22 08:49:56
2
12,450
buhtz
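A sketch of one workaround for the `pandas.cut` question above: the `TypeError` comes from `pandas.NA` sitting inside a plain Python list, while float `NaN` works fine and stays `NA` in the result exactly as the Notes section promises. Converting to float (e.g. `np.nan`, or `Series.astype(float)` for a nullable series) sidesteps the error; whether that is acceptable depends on whether the nullable dtype must be preserved.

```python
# pd.cut handles float NaN; the missing value stays NA in the result.
import numpy as np
import pandas as pd

values = [1, 2, np.nan, 4, 5, 6, 7]   # np.nan instead of pandas.NA
binned = pd.cut(values, bins=3)
print(binned.isna().sum())            # 1 -> the NA stayed NA, per the docs
```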
75,199,032
1,335,492
Use a static object in a LibreOffice python script?
<p>I've got a LibreOffice python script that uses serial IO. On my systems, opening a serial port is a very slow process (around 1 second), so I'd like to keep the serial port open, and just send stuff as required.</p> <p>But LibreOffice python apparently reloads the python framework every time a call is made. Unlike most python implementations, where the process is persistent, and un-enclosed code in a module is run once, when the module is imported.</p> <p>Is there a way in LibreOffice python to persist objects between calls?</p> <pre><code>SerialObject=None def return_global(): return str(SerialObject) #always returns &quot;None&quot; def init_serial_object(): SerialObject=True </code></pre>
<python><libreoffice>
2023-01-22 08:19:58
1
2,697
david
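A sketch of the lazy-singleton pattern the question above is reaching for: keep the expensive object in a module-level cache and create it only on first use. Whether LibreOffice's scripting provider actually keeps the module alive between macro calls is an assumption here, and `SerialPort` is a stand-in for the real serial handle. Note also that the snippet in the question assigns a local variable; making `init_serial_object` work as written would need `global SerialObject`.

```python
# Hypothetical sketch: open the slow resource once and reuse it.
_cache = {}

class SerialPort:                      # stand-in for the slow-to-open port
    instances_opened = 0
    def __init__(self):
        SerialPort.instances_opened += 1

def get_serial_object():
    if "port" not in _cache:           # opened only on the first call
        _cache["port"] = SerialPort()
    return _cache["port"]

a = get_serial_object()
b = get_serial_object()
print(a is b, SerialPort.instances_opened)   # True 1
```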
75,199,021
17,696,880
How to accumulate the modifications made on a string in each one of the iterations of a for loop?
<pre class="lang-py prettyprint-override"><code>import re input_text = &quot;Acá festejaremos mi cumpleaños. Yo ya sabía que los naipes estaban abajo de su manga.&quot; #example 1 list_all_adverbs_of_place = [&quot;aquí&quot;, &quot;aqui&quot;, &quot;acá&quot; , &quot;aca&quot;, &quot;abajo&quot;, &quot;bajo&quot;, &quot;alrededor&quot;, &quot;al rededor&quot;] place_reference = r&quot;((?i:\w\s*)+)?&quot; #capturing group for an alphanumeric string with upper or lower case for place_adverb in list_all_adverbs_of_place: pattern = r&quot;(&quot; + place_adverb + r&quot;)\s+(?i:del|de)\s+&quot; + place_reference + r&quot;\s*(?:[.\n;,]|$)&quot; input_text = re.sub(pattern, lambda m: f&quot;((PL_ADVB='{m[2] or ''}'){m[1]})&quot;, input_text, re.IGNORECASE) print(repr(input_text)) # --&gt; OUTPUT </code></pre> <p>How to make the <code>input_text</code> variable not be reset in each iteration of the for loop, so that the changes made by the <code>re.sub()</code> function in one iteration are kept for the following iterations of the loop?</p> <pre class="lang-py prettyprint-override"><code>&quot;((PL_ADVB='')Acá) festejaremos mi cumpleaños. Yo ya sabía que los naipes estaban ((PL_ADVB='su manga')abajo).&quot; #for example 1 </code></pre> <p><a href="https://i.sstatic.net/vmMc6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vmMc6.png" alt="enter image description here" /></a></p>
<python><python-3.x><string><loops><for-loop>
2023-01-22 08:18:23
1
875
Matt095
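Two observations about the loop above, sketched below. First, because `input_text` is reassigned inside the loop, the changes do accumulate across iterations; the variable is not reset. Second, there is a hidden bug: `re.sub`'s fourth positional argument is `count`, not `flags`, so `re.sub(pattern, repl, text, re.IGNORECASE)` silently means `count=2` (the integer value of `re.IGNORECASE`) with case-sensitive matching. Passing `flags=` by keyword fixes it.

```python
# Demonstrate the count-vs-flags pitfall in re.sub.
import re

text = "ABC abc abc abc"
wrong = re.sub("abc", "x", text, re.IGNORECASE)         # count=2, case-sensitive
right = re.sub("abc", "x", text, flags=re.IGNORECASE)   # flags passed correctly
print(wrong)   # 'ABC x x abc'  -> only 2 replacements, 'ABC' untouched
print(right)   # 'x x x x'      -> all 4 matches replaced, case-insensitively
```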
75,198,932
16,610,577
How to call an async function from the main thread without waiting for the function to finish (utilizing aioconsole)?
<p><strong>I'm trying to call an async function containing an await function from the main thread without halting computation in the main thread.</strong></p> <p>I've looked into similar questions employing a variety of solutions two of which I have demonstrated below, however, none of which seem to work with my current setup involving <code>aioconsole</code> and <code>asyncio</code>.</p> <p>Here is a simplified version of what I currently have implemented:</p> <pre class="lang-py prettyprint-override"><code>import aioconsole import asyncio async def async_input(): line = await aioconsole.ainput(&quot;&gt;&gt;&gt; &quot;) print(&quot;You typed: &quot; + line) if __name__ == &quot;__main__&quot;: asyncio.run(async_input()) print(&quot;This text should print instantly. It should not wait for you to type something in the console.&quot;) while True: # Do stuff here pass </code></pre> <blockquote> <p>I have also tried replacing <code>asyncio.run(async_input())</code> with the code below but that hangs the program even after entering console input.</p> <pre class="lang-py prettyprint-override"><code>loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) loop.create_task(async_input()) loop.run_forever() </code></pre> </blockquote> <p>What's currently happening:</p> <ol> <li>Function is created ✓</li> <li>Function is called from the main thread ✓</li> <li>The program stops and awaits the completion of the async function. ✗</li> <li>The processing that should be performed in parallel with the async function is performed sequentially. ✗</li> </ol> <p>Output</p> <pre><code>&gt;&gt;&gt; </code></pre> <p><strong>What should happen:</strong></p> <ol> <li>Function is created ✓</li> <li>Function is called from the main thread ✓</li> <li>The program continues while the async function runs in the background. ✓</li> <li>The processing following is performed in parallel with the async function. 
✓</li> </ol> <p>Expected output</p> <pre><code>&gt;&gt;&gt; This text should print instantly. It should not wait for you to type something in the console. </code></pre> <p>Python version: 3.10.8 <br> Operating System: MacOS Ventura</p>
<python><asynchronous><async-await><python-asyncio>
2023-01-22 07:54:51
2
342
Leon
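A minimal sketch of the pattern the question above needs: `asyncio.run()` always blocks until its coroutine finishes, so the whole program (including the "do stuff" loop) has to live inside one top-level coroutine, where `asyncio.create_task` starts the input reader in the background and the main coroutine continues at its next `await`. A busy `while True: pass` would still starve the loop; the other work must be cooperative (e.g. `await asyncio.sleep(...)`). `asyncio.sleep` stands in for `aioconsole.ainput` here so the snippet is self-contained.

```python
# Background task runs concurrently with the "main" coroutine inside one loop.
import asyncio

order = []

async def background():
    await asyncio.sleep(0.05)          # stands in for awaiting console input
    order.append("background done")

async def main():
    task = asyncio.create_task(background())   # starts it, does NOT block
    order.append("main continues instantly")
    await asyncio.sleep(0)             # yield so the task gets scheduled
    # ... other cooperative work would go here ...
    await task                         # eventually wait for the input task

asyncio.run(main())
print(order)   # ['main continues instantly', 'background done']
```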
75,198,911
7,624,196
Usage of nested protocol (member of protocol is also a protocol)
<p>Consider a Python protocol attribute which is also annotated with a protocol. I found in that case, both mypy and Pyright report an error even when my custom datatype follows the nested protocol. For example in the code below <code>Outer</code> follows the <code>HasHasA</code> protocol in that it has <code>hasa: HasA</code> because <code>Inner</code> follows <code>HasA</code> protocol.</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass from typing import Protocol class HasA(Protocol): a: int class HasHasA(Protocol): hasa: HasA @dataclass class Inner: a: int @dataclass class Outer: hasa: Inner def func(b: HasHasA): ... o = Outer(Inner(0)) func(o) </code></pre> <p>However, mypy shows the following error.</p> <pre><code>nested_protocol.py:22: error: Argument 1 to &quot;func&quot; has incompatible type &quot;Outer&quot;; expected &quot;HasHasA&quot; [arg-type] nested_protocol.py:22: note: Following member(s) of &quot;Outer&quot; have conflicts: nested_protocol.py:22: note: hasa: expected &quot;HasA&quot;, got &quot;Inner&quot; </code></pre> <p>What's wrong with my code?</p>
<python><mypy><python-typing><pyright>
2023-01-22 07:51:58
1
1,623
HiroIshida
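The code in the question is not wrong so much as stricter than intended: a plain (mutable) protocol attribute must match invariantly, because a caller holding a `HasHasA` could assign any `HasA` to `.hasa`, while `Outer` declares it as `Inner`. Declaring the member read-only with `@property` makes it covariant, and both mypy and Pyright then accept the example. A sketch:

```python
# Read-only protocol member: Outer.hasa (an Inner) now satisfies HasHasA.
from dataclasses import dataclass
from typing import Protocol

class HasA(Protocol):
    a: int

class HasHasA(Protocol):
    @property
    def hasa(self) -> HasA: ...   # read-only -> covariant

@dataclass
class Inner:
    a: int

@dataclass
class Outer:
    hasa: Inner

def func(b: HasHasA) -> int:
    return b.hasa.a

print(func(Outer(Inner(0))))   # 0, and the type checkers no longer complain
```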
75,198,634
13,194,245
How to add a root inside the main root of XML in Python
<p>I have the following xml which I have exported using the following:</p> <pre><code>df.to_xml('test.xml', index=False, row_name='instance', root_name='file') </code></pre> <p>which produces an xml file like:</p> <pre><code>&lt;file&gt; &lt;instance&gt; &lt;ID&gt;1&lt;/ID&gt; &lt;name&gt;John&lt;/name&gt; &lt;age&gt;32&lt;/age&gt; &lt;city&gt;London&lt;/city&gt; &lt;/instance&gt; .... &lt;/file&gt; </code></pre> <p>How can I add an extra root (<code>&lt;NAMES&gt;</code>) underneath <code>&lt;file&gt;</code> so my output is as below:</p> <pre><code>&lt;file&gt; &lt;NAMES&gt; &lt;instance&gt; &lt;ID&gt;1&lt;/ID&gt; &lt;name&gt;John&lt;/name&gt; &lt;age&gt;32&lt;/age&gt; &lt;city&gt;London&lt;/city&gt; &lt;/instance&gt; .... &lt;/NAMES&gt; &lt;/file&gt; </code></pre>
<python><xml>
2023-01-22 06:41:51
3
1,812
SOK
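Since `to_xml` has no built-in option for an intermediate wrapper element, one post-processing sketch with the standard library: re-parent every `<instance>` under a new `<NAMES>` element. A shortened inline string stands in for the file here; with the real file one would use `ET.parse('test.xml')` and `tree.write(...)` instead.

```python
# Wrap all children of <file> in a new <NAMES> element.
import xml.etree.ElementTree as ET

xml_in = "<file><instance><ID>1</ID><name>John</name></instance></file>"

root = ET.fromstring(xml_in)
names = ET.Element("NAMES")
for instance in list(root):        # list() so we can mutate root safely
    root.remove(instance)
    names.append(instance)
root.append(names)

print(ET.tostring(root, encoding="unicode"))
# <file><NAMES><instance><ID>1</ID><name>John</name></instance></NAMES></file>
```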
75,198,404
18,125,194
Determine the duration of an event
<p>I have a dataframe with a list of events, a column for an indicator for a criterion, and a column for a timestamp.</p> <p>For each event, if the indicator is true, I want to see if the event lasted more than one period, and for how long.</p> <p>In terms of an expected output, I have provided an example below. For the duration column, A is true for only one time period so it will be coded as 1. Then, A is False for the next period, so it will code that as 0. Then, A is true for 2 time periods, so the duration is two, the next entry can be coded as 0 since I am only interested in the first entry, and so on.</p> <pre><code> id target time duration 0 A True 2023-01-22 11:00:00 1 3 A False 2023-01-22 11:05:00 0 6 A True 2023-01-22 11:10:00 2 9 A True 2023-01-22 11:15:00 0 12 A False 2023-01-22 11:20:00 0 </code></pre> <p>But I have no idea how to do this.</p> <p>A sample dataframe is included below</p> <pre><code>import pandas as pd time_test = pd.DataFrame({'id':[ 'A','B','C','A','B','C', 'A','B','C','A','B','C', 'A','B','C','A','B','C'], 'target':[ 'True','True','True','False','True','True', 'True','False','True','True','True','True', 'False','True','False','True','False','True'], 'time':[ '11:00','11:00','11:00','11:05','11:05','11:05', '11:10','11:10','11:10','11:15','11:15','11:15', '11:20','11:20','11:20','11:25','11:25','11:25']}) time_test =time_test.sort_values(['id','time']) time_test['time'] =pd.to_datetime(time_test['time']) time_test </code></pre> <p>EDIT: I need to provide some clarification about the expected output</p> <p>Let's take group B as an example. An event occurs for B at 11:00, indicated by the &quot;True&quot; under target. At 11:05, the event is still occurring so duration should be 2 for the row <code>1 B True 2023-01-22 11:00:00 </code> . I am not interested in the row following so that can coded as 0. 
So in a sense 0 would represent both &quot;already accounted for&quot; and the absence of an event.</p> <p>At 11:10 that event is not occurring so the summation would re-set.</p> <p>At 11:15 another event is occurring, and at 11:20 that event is still going, so the value for the first row should be 2.</p> <p>In the end, the values for B should be 2,0,0,2,0,0.</p> <p>I can see why this method would be confusing but I hope my explanation makes sense. My data is in 5-minute chunks so I figured I could just count the number of chunks to see how long an event lasted for, instead of using a start and end time to calculate the elapsed time (but maybe that would be easier?)</p>
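One way to produce the expected `duration` column, sketched with run-length encoding via `shift`/`cumsum`, assuming rows are sorted by `id` and `time` and that `target` holds the strings `'True'`/`'False'` as in the sample; the helper name is invented for illustration:

```python
import pandas as pd

def add_duration(df):
    df = df.sort_values(['id', 'time']).copy()
    is_true = df['target'].astype(str).eq('True')
    # A new run starts whenever target changes within an id (the first row
    # of each id also counts, because the group-wise shift yields NaN there).
    new_run = is_true.ne(is_true.groupby(df['id']).shift())
    run_id = new_run.cumsum()
    run_len = is_true.groupby(run_id).transform('size')
    first_of_run = ~run_id.duplicated()
    df['duration'] = 0
    # Only the first row of a True run gets the run length; the rest stay 0.
    df.loc[first_of_run & is_true, 'duration'] = run_len[first_of_run & is_true]
    return df
```

Counting 5-minute rows this way matches the description in the question; if the data ever had gaps, the start/end-timestamp approach would be safer.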
<python><pandas><datetime>
2023-01-22 05:39:12
1
395
Rebecca James
75,198,369
14,154,784
django_bootstrap5 not formatting anything
<p>I'm trying to get basic bootstrap formatting working in a django app, and installed <a href="https://github.com/zostera/django-bootstrap5" rel="nofollow noreferrer">django_bootstrap5</a> to do so. No formatting, however, is getting applied to any of the pages.</p> <p>Here's the various pages:</p> <p>base.html:</p> <pre><code>&lt;!DOCTYPE html&gt; {% load django_bootstrap5 %} &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;UTF-8&quot;&gt; &lt;title&gt; {% block title %} {% endblock %} &lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;div class=&quot;container&quot;&gt; {% block body %} {% endblock %} &lt;/div&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>I extend this in a simple index page:</p> <pre><code>&lt;!DOCTYPE html&gt; {% extends 'base.html' %} {% load django_bootstrap5 %} {% block title %} Home {% endblock %} {% block body %} &lt;h1&gt;Hello World&lt;/h1&gt; {% endblock %} </code></pre> <p>Hello World, however, is not showing up in a container.</p> <p>This is also failing on a form page:</p> <pre><code>&lt;!DOCTYPE html&gt; {% extends 'base.html' %} {% load django_bootstrap5 %} {% block body %} &lt;div class=&quot;container&quot;&gt; &lt;h1&gt;Sign Up&lt;/h1&gt; &lt;form method=&quot;POST&quot;&gt; {% csrf_token %} {% bootstrap_form form %} &lt;input type=&quot;submit&quot; value=&quot;Sign Up&quot; class=&quot;btn btn-default&quot;&gt; &lt;/form&gt; &lt;/div&gt; {% endblock %} </code></pre> <p>The form is neither in a bootstrap container, nor does it have any styling at all. What am I missing here? Do you need to also load the bootstrap files by cdn or download them and add them to static when using <code>django_bootstrap5</code>? That makes things work, but it seems like it defeats the purpose of installing via pip. Thank you.</p>
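For readers of this question: as far as the django-bootstrap5 documentation goes, the package does not inject Bootstrap's CSS/JS automatically; it ships `{% bootstrap_css %}` and `{% bootstrap_javascript %}` template tags that render the asset links for you, so no manual CDN or static-file setup is needed. A sketch of `base.html` using those tags (verify the tag names against your installed version):

```html
<!DOCTYPE html>
{% load django_bootstrap5 %}
<html lang="en">
<head>
    <meta charset="UTF-8">
    {% bootstrap_css %}
    <title>{% block title %}{% endblock %}</title>
</head>
<body>
    <div class="container">
        {% block body %}{% endblock %}
    </div>
    {% bootstrap_javascript %}
</body>
</html>
```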
<python><django><django-bootstrap3><django-bootstrap4><django-bootstrap5>
2023-01-22 05:27:32
1
2,725
BLimitless
75,198,338
9,532,692
Using pass in the if else statement of python List Comprehension
<p>I am trying to get a grasp of list comprehension and ran into a problem that throws a syntax error.</p> <p>Here I'm trying to get a list of odd numbers:</p> <pre><code>ll = [] for each in l: if each%2 == 1: ll.append(each) else: pass ll &gt;&gt;&gt; [1, 3, 5] </code></pre> <p>Using list comprehension, however, this throws a syntax error at <code>pass</code>:</p> <pre><code>l = [1,2,3,4,5] [each if each%2==1 else pass for each in l] &gt;&gt;&gt; [each if each%2==1 else pass for each in l] ^ &gt;&gt;&gt; SyntaxError: invalid syntax </code></pre> <p>If I were to replace pass with something like 0, it would work and return [1,0,3,0,5] without throwing an error. Could someone explain why I can't use pass here?</p>
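A short note for readers: `pass` is a statement, and a comprehension may only contain expressions, so every iteration must yield a value. The `if ... else` form before the `for` is a conditional expression and always produces something; to skip elements instead, move the condition into the filter clause after the `for`:

```python
l = [1, 2, 3, 4, 5]

# Filter clause: iterations that fail the test are skipped entirely,
# so no `else` branch (and no `pass`) is needed.
odds = [each for each in l if each % 2 == 1]
print(odds)  # [1, 3, 5]
```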
<python><list-comprehension>
2023-01-22 05:20:42
1
724
user9532692
75,198,319
8,422,170
AttributeError: tensorflow has no attribute io
<p>I am trying to train a seq2seq model using simpletransformers library. While using the Seq2Seq model, I am constantly getting this error</p> <pre><code>import tensorflow as tf from simpletransformers.seq2seq import Seq2SeqModel, Seq2SeqArgs import logging model_args = { &quot;reprocess_input_data&quot;: True, &quot;overwrite_output_dir&quot;: True, &quot;max_seq_length&quot;: 50, &quot;train_batch_size&quot;: 16, &quot;num_train_epochs&quot;: 3, &quot;save_eval_checkpoints&quot;: False, &quot;save_model_every_epoch&quot;: False, &quot;evaluate_generated_text&quot;: True, &quot;evaluate_during_training_verbose&quot;: True, &quot;use_multiprocessing&quot;: False, &quot;max_length&quot;: 50, &quot;manual_seed&quot;: 42,} logging.basicConfig(level=logging.INFO) transformers_logger = logging.getLogger(&quot;transformers&quot;) transformers_logger.setLevel(logging.WARNING) model = Seq2SeqModel( &quot;roberta&quot;, &quot;roberta-base&quot;, &quot;bert-base-uncased&quot;,from_tf=True, args=model_args,use_cuda=False) AttributeError: module 'tensorflow' has no attribute 'io' </code></pre> <p>tensorflow version == 2.11.0 python == 3.8.3</p>
<python><python-3.x><tensorflow><attributeerror><simpletransformers>
2023-01-22 05:16:42
0
1,939
Mehul Gupta
75,198,237
13,307,245
Stream a .zst compressed file line by line
<p>I am trying to sift through a big database that is compressed in a .zst. I am aware that I can simply just decompress it and then work on the resulting file, but that uses up a lot of space on my ssd and takes 2+ hours so I would like to avoid that if possible.</p> <p>Often when I work with large files I would stream it line by line with code like</p> <pre><code>with open(filename) as f: for line in f.readlines(): do_something(line) </code></pre> <p>I know gzip has this</p> <pre><code>with gzip.open(filename,'rt') as f: for line in f: do_something(line) </code></pre> <p>but it doesn't seem to work with .zst, so I am wondering if there are any libraries that can decompress and stream the decompressed data in a similar way. For example:</p> <pre><code>with zstlib.open(filename) as f: for line in f.zstreadlines(): do_something(line) </code></pre>
<python><python-3.x><archive><zstd>
2023-01-22 04:50:30
1
579
SimonUnderwood
75,198,235
726,773
Why does the key received by `__getitem__` become `0`?
<p>I was implementing a <code>__getitem__</code> method for a class and found that <code>obj[key]</code> worked as expected, but <code>key in obj</code> always transformed <code>key</code> into <code>0</code>:</p> <pre class="lang-py prettyprint-override"><code>class Mapper: def __getitem__(self, key): print(f'Retrieving {key!r}') if key == 'a': return 1 else: raise KeyError('This only contains a') </code></pre> <pre><code>&gt;&gt;&gt; mapper['a'] Retrieving 'a' 1 </code></pre> <pre><code>&gt;&gt;&gt; 'a' in mapper Retrieving 0 Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;&lt;stdin&gt;&quot;, line 7, in __getitem__ KeyError: 'This only contains a' </code></pre> <p>I didn't find a <code>__hasitem__</code> method, so I thought the <code>in</code> check worked by just calling <code>__getitem__</code> and checking if it throws a <code>KeyError</code>. I couldn't figure out how the key gets transformed into an integer, of all things!</p> <p>I couldn't find an answer here, so I started writing this question. I figured out the answer before I posted, but in the interest of saving other people some time, I'll post my question and solution.</p>
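For readers landing here: when a class defines no `__contains__` (and no `__iter__`), the `in` operator falls back to the legacy sequence iteration protocol, calling `__getitem__(0)`, `__getitem__(1)`, and so on until `IndexError`, which is why the key shows up as `0`. Defining `__contains__` restores mapping-style membership tests; a sketch:

```python
class Mapper:
    def __getitem__(self, key):
        if key == 'a':
            return 1
        raise KeyError('This only contains a')

    def __contains__(self, key):
        # Without this method, `key in mapper` would call
        # __getitem__(0), __getitem__(1), ... expecting IndexError.
        try:
            self[key]
        except KeyError:
            return False
        return True
```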
<python>
2023-01-22 04:50:01
1
631
Nick S
75,198,183
14,673,832
Unexpected output in nested list comprehension in Python
<p>I have a nested list comprehension, when I print the output, it gives me generator object, I was expecting a tuple.</p> <pre><code>vector = [[1,2],[2,3],[3,4]] res = (x for y in vector for x in y if x%2 == 0) print(res) </code></pre> <p>I thought since I have small bracket for res assignment, I thought the result will be tuple, but it gives me a generator object.</p> <p>But when I use a traditional approach, it gives me a list.</p> <pre><code>lst = [] for y in vector: print(y) for x in y: print(x) if x%2 == 0: lst.append(x) print(lst) </code></pre> <p>It looks clear here that it gives a list there is no confusion here in the second one, but the first one is little confusing as why it gives a generator object.</p>
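A short clarification for readers: parentheses around a comprehension create a generator expression; there is no tuple-comprehension syntax in Python. To materialise the values, pass the generator to `tuple()` or `list()`:

```python
vector = [[1, 2], [2, 3], [3, 4]]

# (x for ...) builds a lazy generator; tuple() consumes it eagerly.
res = tuple(x for y in vector for x in y if x % 2 == 0)
print(res)  # (2, 2, 4)
```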
<python><list><list-comprehension><generator>
2023-01-22 04:33:36
1
1,074
Reactoo
75,197,726
597,858
Download pdfs and join them using python
<p>I have a list, named links_to_announcement, of urls for different pdfs.</p> <p>How do I download them and join them together? My code generated a corrupt pdf which doesn't open in pdf reader at all.</p> <pre><code>with open('joined_pdfs.pdf', 'wb') as f: for l in links_to_announcement: response = requests.get(l) f.write(response.content) </code></pre>
<python>
2023-01-22 01:45:12
1
10,020
KawaiKx
75,197,685
1,070,480
Why can't I pass the default_factory argument in collections.defaultdict as a keyword argument?
<p>If I first do</p> <pre class="lang-python prettyprint-override"><code>from collections import defaultdict </code></pre> <p>then doing</p> <pre class="lang-python prettyprint-override"><code>defaultdict(lambda: &quot;Default value&quot;)[7] </code></pre> <p>yields <code>'Default value'</code>. However, instead doing</p> <pre class="lang-python prettyprint-override"><code>defaultdict(default_factory=lambda: &quot;Default value&quot;)[7] </code></pre> <p>results in</p> <pre class="lang-none prettyprint-override"><code>KeyError Traceback (most recent call last) Cell In [28], line 1 ----&gt; 1 defaultdict(default_factory=lambda: &quot;Default value&quot;)[7] </code></pre> <p>but according to the <a href="https://docs.python.org/3/library/collections.html#collections.defaultdict" rel="nofollow noreferrer"><code>defaultdict</code> documentation</a>, the first argument is <code>default_factory</code>, so the two statements should be equivalent. So why do they behave differently?</p>
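For readers: in CPython, `defaultdict` accepts the factory only as the first positional argument; any keyword arguments are forwarded to dict initialisation, exactly as `dict(a=1)` creates a key `'a'`. So `default_factory=` does not set the factory at all; it inserts an ordinary key named `'default_factory'`. A quick demonstration:

```python
from collections import defaultdict

d = defaultdict(default_factory=int)
# The factory was never set ...
print(d.default_factory)        # None
# ... the keyword became an ordinary dictionary entry instead.
print('default_factory' in d)   # True

# Passing the factory positionally behaves as documented:
ok = defaultdict(int)
print(ok[7])                    # 0
```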
<python><default-value><keyword-argument><defaultdict>
2023-01-22 01:21:40
0
3,998
HelloGoodbye
75,197,614
15,239,717
How can I get Total Deposit of Customers
<p>I am working on a Django project with 2 Models; Customer and Deposit and I want to display a list of Customers with their names, deposit dates, account number, and Total Deposited within the year, so how do I do it the right way? See what I have tried, but I couldn't get the Customers' name in my Django Templates.</p> <p>Models:</p> <pre><code>class Customer(models.Model): surname = models.CharField(max_length=10, null=True) othernames = models.CharField(max_length=20, null=True) account_number = models.CharField(max_length=10, null=True) address = models.CharField(max_length=50, null=True) phone = models.CharField(max_length=11, null=True) date = models.DateTimeField(auto_now_add=True, null=True) #Get the url path of the view def get_absolute_url(self): return reverse('customer_create', args=[self.id]) #Making Sure Django Display the name of our Models as it is without Pluralizing class Meta: verbose_name_plural = 'Customer' # def __str__(self): return f'{self.surname} {self.othernames} - {self.account_number}' class Deposit(models.Model): customer = models.ForeignKey(Customer, on_delete=models.CASCADE, null=True) acct = models.CharField(max_length=6, null=True) staff = models.ForeignKey(User, on_delete=models.CASCADE, null=True) deposit_amount = models.PositiveIntegerField(null=True) date = models.DateTimeField(auto_now_add=True) def get_absolute_url(self): return reverse('create_account', args=[self.id]) def __str__(self): return f'{self.customer} Deposited {self.deposit_amount} by {self.staff.username}' </code></pre> <p>Here is my view code:</p> <pre><code>def customer_list(request): #Get Current Date current_date = datetime.now().date() #Get Current Month Name from Calendar current_month_name = calendar.month_name[date.today().month] group_deposits = Deposit.objects.filter(date__year=current_date.year).order_by('acct') grouped_customer_deposit = group_deposits.values('acct').annotate(total=Sum('deposit_amount')).order_by() context = {
'customers':grouped_customer_deposit,} </code></pre> <p>Here is how I tried to display the result in Django Template:</p> <pre><code>{% for deposit in customers %} &lt;tr&gt; &lt;td&gt;{{ forloop.counter }}&lt;/td&gt; &lt;td&gt;{{ deposit.acct }}&lt;/td&gt; &lt;td&gt;{{ deposit.customer.surname }}&lt;/td&gt; &lt;td&gt;{{ deposit.total }}&lt;/td&gt; &lt;td&gt;{{ customer.deposit.date }}&lt;/td&gt; &lt;th scope=&quot;row&quot;&gt;&lt;a class=&quot;btn btn-info btn-sm&quot; href=&quot; &quot;&gt;Deposit&lt;/a&gt;&lt;/th&gt; &lt;/tr&gt; {% endfor %} </code></pre> <p>Someone should graciously help with the most efficient way of getting the Total Deposit for each customer with their names, account number, and date deposited.</p>
<python><django>
2023-01-22 00:55:03
1
323
apollos
75,197,608
597,858
Extracting links in a sequence from a table in a webpage using Selenium in Python
<p>I want to extract links of pdfs from this <a href="https://www.bseindia.com/stock-share-price/sanghi-industries-ltd/sanghiind/526521/corp-announcements/" rel="nofollow noreferrer">page</a> using Selenium in python</p> <p>I managed to extract the entire table that contains the rows and the links to the pdfs.</p> <pre><code>driver.get(company_link) announcement_link = driver.find_element(By.XPATH, '//*[@id=&quot;heading1&quot;]/h1/a').get_attribute('href') driver.get(announcement_link) table = driver.find_element(By.XPATH, '//*[@id=&quot;lblann&quot;]/table/tbody/tr[4]/td') </code></pre> <p>I am looking for a shortest possible method to create a list of all pdf links in a sequence. How do I do that?</p>
<python><selenium>
2023-01-22 00:52:11
1
10,020
KawaiKx
75,197,222
20,959,773
Get outer-HTML of element on click
<p>I'm making a project for finding XPaths, and I need the fastest and easiest way for the user to select, in a webpage, the element whose XPath should be found. Selection ideally needs to be made with just a click, which needs to return the value of the outerHTML of that element, so I can process it against the full HTML of the page to find any indicator.</p> <p>For now, I'm stuck double-clicking the element, pressing inspect element and copying, all manually, which is not good. I know how to automate in Selenium, but I haven't found a way to automate this process.</p> <p>Any suggestion, idea or preferably answer would be greatly appreciated! Thanks</p>
<javascript><python><html><selenium><xpath>
2023-01-21 23:07:59
1
347
RifloSnake
75,197,211
16,319,191
Import multiple sas files in Python and then row bind
<p>I have over 20 SAS (sas7bdat) files all with same columns I want to read in Python. I need an iterative process to read all the files and rbind into one big df. This is what I have so far, but it throws an error saying no objects to concatenate.</p> <pre><code>import pyreadstat import glob import os path = r'C:\\Users\myfolder' # or unix / linux / mac path all_files = glob.glob(os.path.join(path , &quot;/*.sas7bdat&quot;)) li = [] for filename in all_files: reader = pyreadstat.read_file_in_chunks(pyreadstat.read_sas7bdat, filename, chunksize= 10000, usecols=cols) for df, meta in reader: li.append(df) frame = pd.concat(li, axis=0) </code></pre> <p>I found this answer to read in csv files helpful: <a href="https://stackoverflow.com/questions/20906474/import-multiple-csv-files-into-pandas-and-concatenate-into-one-dataframe">Import multiple CSV files into pandas and concatenate into one DataFrame</a></p>
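A likely culprit here, worth checking before anything SAS-specific: `os.path.join(path, "/*.sas7bdat")` passes a second component with a leading slash, and `os.path.join` discards everything before an absolute component, so `all_files` comes back empty and `pd.concat` has nothing to concatenate. A sketch of the fix (the helper name is invented):

```python
import glob
import os

def list_sas_files(folder):
    # No leading slash on the pattern component, or os.path.join throws
    # `folder` away: os.path.join('/data', '/*.sas7bdat') == '/*.sas7bdat'
    return glob.glob(os.path.join(folder, '*.sas7bdat'))
```

Guarding with `if not all_files: raise FileNotFoundError(...)` before the concat also turns the cryptic "No objects to concatenate" into an actionable message.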
<python><dataframe><sas><rbind>
2023-01-21 23:05:44
1
392
AAA
75,196,668
13,285,779
What has to be an input shape for a CNN using MFCC?
<p>I am doing some classification on audio data using MFCC. I have extracted MFCCs from a few audio samples and I would like to pass them to a CNN, however I have hard time understanding what <code>input_shape</code> parameter I should provide to my model for training and classifications.</p> <p>The training set <code>train_x</code> has the following shape: <code>(213, 1723, 39)</code>. Hence 213 samples, each of those are 1723 by 39.</p> <p>That's how my model looks like:</p> <pre><code>model = Sequential() model.add(Conv2D(64,[2,2],data_format='channels_last',activation='sigmoid',input_shape=(?, ?))) model.add(Conv2D(128,[2,2],data_format='channels_last',activation='sigmoid')) model.add(Flatten(data_format='channels_last')) model.add(Dropout(0.2)) model.add(Dense(64,activation='sigmoid')) model.add(Dense(32,activation='sigmoid')) model.add(Dropout(0.2)) model.add(Dense(4,activation='sigmoid')) model.compile(loss='categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy']) history = model.fit(train_x, train_y, validation_data=(test_x, test_y), epochs=2, batch_size=16) </code></pre> <p>What should be the input shape for the first layer? What does it make with the batch size? What are general rules for understanding how to define an input shape?</p>
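For readers: with `channels_last`, `Conv2D` expects each sample shaped `(height, width, channels)`, and `input_shape` excludes the batch dimension (the batch size is supplied separately via `batch_size` in `model.fit`). With 213 samples of 1723-by-39 MFCC frames, that means adding a singleton channel axis and passing `input_shape=(1723, 39, 1)`. A NumPy-only sketch of the reshape; the Keras model itself is as in the question:

```python
import numpy as np

train_x = np.zeros((213, 1723, 39))   # stand-in for the MFCC batch

# Append a channel axis: (batch, height, width) -> (batch, height, width, 1)
train_x = train_x[..., np.newaxis]
print(train_x.shape)                  # (213, 1723, 39, 1)

# The model then takes the per-sample shape, without the batch dimension:
input_shape = train_x.shape[1:]       # (1723, 39, 1)
```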
<python><tensorflow><keras><conv-neural-network><mfcc>
2023-01-21 21:24:26
0
1,359
CuriousPan
75,196,539
13,441,462
Pyzbar does not recognize CODE-128 barcode
<p>I am trying to read text encoded in barcode - I am using <code>pyzbar</code> like this:</p> <pre><code>from pyzbar import pyzbar import cv2 img = cv2.imread(&quot;example/path&quot;) barcodes = pyzbar.decode(img, symbols=[pyzbar.ZBarSymbol.CODE128]) print(barcodes) </code></pre> <p>It normally works, but in the last batch of barcodes that I have received, <code>pyzbar</code> cannot read them - output of <code>pyzbar.decode</code> is <code>[]</code>. There is one example:</p> <p><a href="https://i.sstatic.net/UxaDp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UxaDp.png" alt="enter image description here" /></a></p> <p>I have tried to put it into <a href="https://www.onlinebarcodereader.com/" rel="nofollow noreferrer">online decoder</a> and it decodes it just fine (it also says the barcode type is CODE-128). Anybody knows, how can I read it in Python, please?</p>
<python><opencv><image-processing><barcode><zbar>
2023-01-21 21:01:24
1
409
Foreen
75,196,513
17,311,709
Django Form unexpected keyword argument
<p>I have a form which contains a ChoiceField and I need to populate it from a view, so I'm trying to use the kwargs inside the __init__ function like this:</p> <pre class="lang-py prettyprint-override"><code>class SelectionFournisseur(forms.Form): def __init__(self,*args, **kwargs): super(SelectionFournisseur, self).__init__(*args, **kwargs) self.fields['Fournisseur'].choices = kwargs.pop(&quot;choixF&quot;,None) Fournisseur = forms.ChoiceField(choices = ()) </code></pre> <p>My view:</p> <pre><code>formF = SelectionFournisseur(choixF=choices) </code></pre> <p>but I get the error <code>BaseForm.__init__() got an unexpected keyword argument 'choixF'</code></p>
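The custom keyword has to be removed from `kwargs` before calling `super().__init__`, otherwise Django's `BaseForm` receives `choixF` and rejects it. In the form that means `choix = kwargs.pop('choixF', None)` as the first line of `__init__`, then the super call, then `self.fields['Fournisseur'].choices = choix`. A framework-free sketch of the pattern (the base class here merely stands in for Django's `BaseForm`):

```python
class BaseForm:
    # Stand-in: rejects unknown keywords, like Django's BaseForm does.
    def __init__(self, **kwargs):
        if kwargs:
            raise TypeError(
                "BaseForm.__init__() got an unexpected keyword argument "
                f"{next(iter(kwargs))!r}")

class SelectionFournisseur(BaseForm):
    def __init__(self, **kwargs):
        choix = kwargs.pop('choixF', None)   # strip it BEFORE super() runs
        super().__init__(**kwargs)
        # In the Django form: self.fields['Fournisseur'].choices = choix
        self.choices = choix
```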
<python><django>
2023-01-21 20:57:39
1
635
Rafik Bouloudene
75,196,453
4,974,431
Regex : split on '.' but not in substrings like "J.K. Rowling"
<p>I am looking for names of books and authors in a bunch of texts, like:</p> <pre><code>my_text = &quot;&quot;&quot; My favorites books of all time are: Harry potter by J. K. Rowling, Dune (first book) by Frank Herbert; and Le Petit Prince by Antoine de Saint Exupery (I read it many times). That's it by the way. &quot;&quot;&quot; </code></pre> <p>Right now I am using the following code to split the text on separators like this:</p> <pre><code>pattern = r&quot; *(.+) by ((?: ?\w+)+)&quot; matches = re.findall(pattern, my_text) res = [] for match in matches: res.append((match[0], match[1])) print(res) # [('Harry potter', 'J'), ('K. Rowling, Dune (first book)', 'Frank Herbert'), ('and Le Petit Prince', 'Antoine de Saint Exupery '), (&quot;I read it many times). That's it&quot;, 'the way')] </code></pre> <p>Even if there are false positive (like 'that's it by the way') my main problem is with authors that are cut when written as initials, which is pretty common.</p> <p>I can't figure out how to allow initials like &quot;J. K. Rowling&quot; (or the same without space before / after dot like &quot;J.K.Rowling&quot;)</p>
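One incremental fix, offered as a sketch (it still shares the original's false positives like "by the way"): let name tokens contain dots by matching `[\w.]+` instead of `\w+`, which keeps initials such as `J. K. Rowling` (and the spaceless `J.K.Rowling`) inside the author group. A leading `[,;]?\s*` also keeps separators out of the title capture:

```python
import re

my_text = """
My favorites books of all time are:
Harry potter by J. K. Rowling, Dune (first book) by Frank Herbert;
and Le Petit Prince by Antoine de Saint Exupery (I read it many times).
That's it by the way.
"""

# [\w.]+ lets each name token carry trailing dots, so initials survive.
pattern = r"[,;]?\s*(.+?) by ((?:[\w.]+ ?)+)"
matches = [(title.strip(), author.strip())
           for title, author in re.findall(pattern, my_text)]
print(matches[0])  # ('Harry potter', 'J. K. Rowling')
```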
<python><regex>
2023-01-21 20:47:37
1
1,624
Vincent
75,196,422
9,855,588
pytest parameterization indirectly when using unittest framework
<p>I'm using the python unittest and pytest frameworks together. I ran into a case where I have a fixture that generates signed header tokens, which I'm trying to mock locally. So I want to create a fixture that I can pass a payload to when the test runs. It doesn't seem like I can do it with pytest and unittest together, however.</p> <pre><code>@pytest.fixture def gen_headers(value): return &quot;hey bud, you got a nice header!&quot; + value[&quot;fake_head&quot;] class Tests(unittest.TestCase): @pytest.mark.parametrize(&quot;gen_headers&quot;, [{&quot;fake_head&quot;: &quot;hi :)&quot;}], indirect=True) def test_func(self, gen_headers): do_ya_thing = api.request(&quot;get&quot;, headers=gen_headers) ... </code></pre> <p>The test fails with <code>TypeError: ... missing 1 required positional argument: ...</code></p> <p>I tried using the parameterized python package, but I don't think it supports passing a custom input to the fixture at run time (test time), which is what I'm looking to do, since the thing being returned in <code>gen_headers</code> may change.</p>
<python><python-3.x><pytest><python-unittest>
2023-01-21 20:41:58
1
3,221
dataviews
75,196,263
10,380,766
Calculating a rolling weighted sum without a for loop in python/numpy
<p>I recently asked the question: <a href="https://stackoverflow.com/questions/75192220/numpy-convolve-method-has-slight-variance-between-equivalent-for-loop-method-for">NumPy convolve method has slight variance between equivalent for loop method for Volume Weighted Average Price</a></p> <p>Trying to use <code>np.convolve</code> was significantly faster than a standard for loop to calculate a rolling VWAP metric, but was providing an incorrect calculation as it was leaving off the last item in the array.</p> <p>Is there a methodology to do a rolling weighted sum without a for loop?</p> <h1>What I've tried:</h1> <h2>Using standard <code>for</code> loop (slow)</h2> <pre class="lang-py prettyprint-override"><code>def calc_vwap_1(price, volume, period_lookback): &quot;&quot;&quot; Calculates the volume-weighted average price (VWAP) for a given period of time. The VWAP is calculated by taking the sum of the product of each price and volume over a given period, and dividing by the sum of the volume over that period. Parameters: price (numpy.ndarray): A list or array of prices. volume (numpy.ndarray): A list or array of volumes, corresponding to the prices. period_lookback (int): The number of days to look back when calculating VWAP. Returns: numpy.ndarray: An array of VWAP values, one for each day in the input period. 
&quot;&quot;&quot; vwap = np.zeros(len(price)) for i in range(period_lookback, len(price)): lb = i - period_lookback # lower bound ub = i + 1 # upper bound volume_sum = volume[lb:ub].sum() if volume_sum &gt; 0: vwap[i] = (price[lb:ub] * volume[lb:ub]).sum() / volume_sum else: vwap[i] = np.nan return vwap </code></pre> <h2>Using <code>np.convolve</code> in <code>same</code> mode</h2> <pre><code>def calc_vwap_2(price, volume, period_lookback): price_volume = price * volume # Use convolve to get the rolling sum of product of price and volume price_volume_conv = np.convolve(price_volume, np.ones(period_lookback), mode='same')[period_lookback-1:] # Use convolve to get the rolling sum of volume volume_conv = np.convolve(volume, np.ones(period_lookback), mode='same')[period_lookback-1:] # Create a mask to check if the volume sum is greater than 0 mask = volume_conv &gt; 0 # Initialize the vwap array vwap = np.zeros(len(price)) # Use the mask to check if volume sum is greater than zero, if it is, proceed with the division and store the result in vwap array, otherwise store NaN vwap[period_lookback-1:] = np.where(mask, price_volume_conv / volume_conv, np.nan) return vwap </code></pre> <h2>Using <code>np.convolve</code> in <code>valid</code> mode</h2> <pre><code>def calc_vwap_3(price, volume, period_lookback): # Calculate product of price and volume price_volume = price * volume # Use convolve to get the rolling sum of product of price and volume and volume array price_volume_conv = np.convolve(price_volume, np.ones(period_lookback), mode='valid') # Use convolve to get the rolling sum of volume volume_conv = np.convolve(volume, np.ones(period_lookback), mode='valid') # Create a mask to check if the volume sum is greater than 0 mask = volume_conv &gt; 0 # Initialize the vwap array vwap = np.zeros(len(price)) # Use the mask to check if volume sum is greater than zero, if it is, proceed with the division and store the result in vwap array, otherwise store NaN 
vwap[period_lookback-1:] = np.where(mask, price_volume_conv / volume_conv, np.nan) return vwap </code></pre> <h2>Using <code>np.cumsum</code> (sorry mom) with slicing</h2> <pre><code>def calc_vwap_4(price, volume, period_lookback): price_volume = price * volume # Use cumsum to get the rolling sum of product of price and volume price_volume_cumsum = np.cumsum(price_volume)[period_lookback-1:] # Use cumsum to get the rolling sum of volume volume_cumsum = np.cumsum(volume)[period_lookback-1:] # Create a mask to check if the volume sum is greater than 0 mask = volume_cumsum &gt; 0 # Initialize the vwap array vwap = np.zeros(len(price)) # Use the mask to check if volume sum is greater than zero, if it is, proceed with the division and store the result in vwap array, otherwise store NaN vwap[period_lookback-1:] = np.where(mask, price_volume_cumsum / volume_cumsum, np.nan) return vwap </code></pre> <h2>Using <code>np.reduceat</code></h2> <pre><code>def calc_vwap_5(price, volume, period_lookback): price_volume = price * volume # Use reduceat to get the rolling sum of product of price and volume price_volume_cumsum = np.add.reduceat(price_volume, np.arange(0, len(price), period_lookback))[period_lookback-1:] # Use reduceat to get the rolling sum of volume volume_cumsum = np.add.reduceat(volume, np.arange(0, len(price), period_lookback))[period_lookback-1:] # Create a mask to check if the volume sum is greater than 0 mask = volume_cumsum &gt; 0 # Initialize the vwap array vwap = np.zeros(len(price)) # Use the mask to check if volume sum is greater than zero, if it is, proceed with the division and store the result in vwap array, otherwise store NaN vwap[period_lookback-1:] = np.where(mask, price_volume_cumsum / volume_cumsum, np.nan) return vwap </code></pre> <h2>Using <code>np.lib.stride_tricks.as_strided</code></h2> <pre><code>def calc_vwap_6(price, volume, period_lookback): price_volume = price * volume price_volume_strided = np.lib.stride_tricks.as_strided(price_volume, 
shape=(len(price)-period_lookback+1, period_lookback), strides=(price_volume.strides[0], price_volume.strides[0])) volume_strided = np.lib.stride_tricks.as_strided(volume, shape=(len(price)-period_lookback+1, period_lookback), strides=(volume.strides[0], volume.strides[0])) price_volume_sum = price_volume_strided.sum(axis=1) volume_sum = volume_strided.sum(axis=1) mask = volume_sum &gt; 0 vwap = np.zeros(len(price)) vwap[period_lookback-1:] = np.where(mask, price_volume_sum / volume_sum, np.nan) return vwap </code></pre> <h2>Test Data</h2> <pre><code>import numpy as np price = np.random.random(10000) volume = np.random.random(10000) print(calc_vwap(price, volume, 100)) print() print(calc_vwap_1(price, volume, 100)) print() print(calc_vwap_2(price, volume, 100)) print() print(calc_vwap_3(price, volume, 100)) print() print(calc_vwap_4(price, volume, 100)) print() print(calc_vwap_5(price, volume, 100)) print() print(calc_vwap_6(price, volume, 100)) print() </code></pre> <h2>Results</h2> <pre><code>vwap_1 -&gt; [0. 0. 0. ... 0.47375965 0.47762679 0.48448903] # CORRECT CALCULATION vwap_2 -&gt; [0. 0. 0. ... 0.53108759 0.51933363 0.51360848] vwap_3 -&gt; [0. 0. 0. ... 0.49834202 0.4984141 0.49845759] vwap_4 -&gt; [0. 0. 0. ... 0.49834202 0.4984141 0.49845759] vwap_5 -&gt; [0. 0. 0. ... 0.48040529 0.48040529 0.48040529] vwap_6 -&gt; [0. 0. 0. ... 0.47027032 0.48009596 0.48040529] </code></pre>
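One loop-free formulation that should reproduce `calc_vwap_1` exactly, offered as a sketch: the reason most of the attempts above drift is the window size, since the loop's slice `[i - period_lookback : i + 1]` covers `period_lookback + 1` elements, not `period_lookback`. With that window, the rolling sums fall out of cumulative-sum differences:

```python
import numpy as np

def calc_vwap_cumsum(price, volume, period_lookback):
    # The loop version sums over [i - period_lookback, i] inclusive,
    # i.e. a window of period_lookback + 1 elements.
    w = period_lookback + 1
    pv_c = np.concatenate(([0.0], np.cumsum(price * volume)))
    v_c = np.concatenate(([0.0], np.cumsum(volume)))
    pv_sum = pv_c[w:] - pv_c[:-w]      # rolling sum of price * volume
    v_sum = v_c[w:] - v_c[:-w]         # rolling sum of volume
    vwap = np.zeros(len(price))
    # Divide only where volume is positive; NaN elsewhere, as in the loop.
    vwap[period_lookback:] = np.divide(
        pv_sum, v_sum, out=np.full_like(pv_sum, np.nan), where=v_sum > 0)
    return vwap
```

The same off-by-one fix applies to `np.lib.stride_tricks.sliding_window_view(pv, period_lookback + 1).sum(axis=1)`, which gives identical sums without the manual stride arithmetic. Note that the cumulative-sum trick trades a little floating-point accuracy for speed on very long arrays.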
<python><numpy>
2023-01-21 20:14:52
1
1,020
Hofbr
75,196,143
15,461,255
Seaborn error with kde plot: The following variable cannot be assigned with wide-form data `hue`
<p>I have a pandas dataframe <code>df</code> with two columns (<code>type</code> and <code>IR</code>) as this one:</p> <pre><code> type IR 0 a 0.1 1 b 0.3 2 b 0.2 3 c 0.8 4 c 0.5 ... </code></pre> <p>I want to plot three distributions (one for each <code>type</code>) with the values of the IR so, I write:</p> <pre><code>sns.displot(df, kind=&quot;kde&quot;, hue='type', rug=True) </code></pre> <p>but I get this error: <code>The following variable cannot be assigned with wide-form data 'hue'</code></p> <p>Any idea?</p> <hr /> <p>EDIT:</p> <p>My real dataframe looks like</p> <pre><code>pd.DataFrame({&quot;type&quot;: [&quot;IR orig&quot;, &quot;IR orig&quot;, &quot;IR orig&quot;, &quot;IR trans&quot;, &quot;IR trans&quot;, &quot;IR trans&quot;, &quot;IR perm&quot;, &quot;IR perm&quot;, &quot;IR perm&quot;, &quot;IR perm&quot;, &quot;IR perm&quot;], &quot;IR&quot;: [1.41, 1.42, 1.32, 0.0, 0.44, 0.0, 1.41, 1.31, 1.41, 1.37, 1.34] }) </code></pre> <p>but with <code>sns.displot(df, x='IR', kind=&quot;kde&quot;, hue='type', rug=True)</code> I got <code>ValueError: cannot reindex on an axis with duplicate labels</code></p>
<python><pandas><seaborn><kernel-density>
2023-01-21 19:57:09
1
350
Palinuro
75,196,023
1,483,288
Pandas, merge multiple dummy variables into one column by name
<p>I have a datafile with one VALUE column and multiple dummy variables representing TYPES. I have copied a short example below. I need the average of each type (which I can get) with a column with the named type (which I don't seem to be able to get). Pointers would be welcome.</p> <pre><code>import pandas as pd data = {'salary' : [50000, 45000, 55000, 40000, 35000, 45000, 30000,25000,35000], 'manager': [1,1,1,0,0,0,0,0,0], 'foreman': [0,0,0,1,1,1,0,0,0], 'worker': [0,0,0,0,0,0,1,1,1]} df = pd.DataFrame(data=data) df </code></pre> <p>This is my input data.</p> <pre><code>salary manager foreman worker 0 50000 1 0 0 1 45000 1 0 0 2 55000 1 0 0 3 40000 0 1 0 4 35000 0 1 0 5 45000 0 1 0 6 30000 0 0 1 7 25000 0 0 1 8 35000 0 0 1 </code></pre> <p>I can get the average, like this, but not consolidate the three dummy vars into one categorical column:</p> <pre><code>print(df.groupby(['manager','foreman','worker']).mean().reset_index()) manager foreman worker salary 0 0 0 1 30000 1 0 1 0 40000 2 1 0 0 50000 </code></pre> <p>I would like to have something that looks like this:</p> <pre><code>need = {'salary' : [50000, 45000, 55000, 40000, 35000, 45000, 30000,25000,35000], 'type': ['manager','manager','manager','foreman','foreman','foreman','worker','worker','worker']} df2 = pd.DataFrame(data=need) df2 salary type 0 50000 manager 1 45000 manager 2 55000 manager 3 40000 foreman 4 35000 foreman 5 45000 foreman 6 30000 worker 7 25000 worker 8 35000 worker </code></pre> <p>I can do this simple example by hand. The result looks like this, which is ultimately where I will end up:</p> <pre><code>pay = {'type' : ['manager','foreman','worker'], 'avg_pay': [50000,40000,30000]} df1 = pd.DataFrame(data=pay) df1 type avg_pay 0 manager 50000 1 foreman 40000 2 worker 30000 </code></pre> <p>Can't seem to find any documentation on how to &quot;undummy&quot; variables. How do I do this?</p>
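One way to "undummy", sketched under the assumption that each row has exactly one indicator set: `idxmax(axis=1)` returns the name of the column holding the 1, which is the categorical label being asked for, and the average then comes from an ordinary groupby:

```python
import pandas as pd

data = {'salary':  [50000, 45000, 55000, 40000, 35000, 45000, 30000, 25000, 35000],
        'manager': [1, 1, 1, 0, 0, 0, 0, 0, 0],
        'foreman': [0, 0, 0, 1, 1, 1, 0, 0, 0],
        'worker':  [0, 0, 0, 0, 0, 0, 1, 1, 1]}
df = pd.DataFrame(data)

# idxmax along the columns picks the dummy column that is 1 in each row.
df['type'] = df[['manager', 'foreman', 'worker']].idxmax(axis=1)

# sort=False keeps first-appearance order: manager, foreman, worker.
avg = (df.groupby('type', sort=False)['salary']
         .mean()
         .reset_index(name='avg_pay'))
print(avg)
```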
<python><pandas><dataframe><aggregate><dummy-variable>
2023-01-21 19:35:46
3
773
ccc31807
75,196,015
3,837,788
TypeError raised even if the variable seems to be of the right type
<p>Considering the following code portion:</p> <pre><code>txt = 'Some text, dude!' with open('to_load', mode='wb') as f: raw = txt.encode('ascii') print(raw, file=f) </code></pre> <p>Could anybody please explain why a <code>TypeError: a bytes-like object is required, not 'str'</code> exception is raised inside <code>print</code>, even if the <code>raw</code> variable seems to be of binary data type?</p> <pre><code>(Pdb) type(raw) &lt;class 'bytes'&gt; </code></pre> <p>Even using <code>bytes</code>, the exception is still raised:</p> <pre><code>raw = bytes(txt.encode('ascii')) </code></pre>
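A note for readers: the type of `raw` is fine; the failing call is `print`. `print()` is a text operation: it converts every argument with `str()` (turning the bytes object into the string `"b'Some text, dude!'"`) and then calls `f.write()` with that string, which a binary-mode handle rejects. Write the bytes directly instead:

```python
txt = 'Some text, dude!'

# print() writes *text* (it str()-ifies its arguments), so it cannot
# target a binary-mode file; write the bytes object directly.
with open('to_load', mode='wb') as f:
    f.write(txt.encode('ascii'))
```

Alternatively, keep `print(txt, file=f)` and open the file in text mode (`'w'`).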
<python><python-3.x><string><file><binary>
2023-01-21 19:34:19
0
566
rudicangiotti
75,195,927
6,367,971
Using glob recursion to get sub directories and files containing CSVs
<p>I am trying to concat multiple CSVs that live in subfolders of my parent directory.</p> <pre><code>/ParentDirectory │ │ ├───SubFolder 1 │ test1.csv │ ├───SubFolder 2 │ test2.csv │ ├───SubFolder 3 │ test3.csv │ test4.csv │ ├───SubFolder 4 │ test5.csv </code></pre> <p>When I do</p> <pre><code>import pandas as pd import glob files = glob.glob('/ParentDirectory/*.csv', recursive=True) df = pd.concat([pd.read_csv(fp) for fp in files], ignore_index=True) </code></pre> <p>I get <code>ValueError: No objects to concatenate</code>.</p> <p>But if I select a specific sub folder, it works:</p> <pre><code>files = glob.glob('/ParentDirectory/SubFolder 3/*.csv', recursive=True) </code></pre> <p>How come <code>glob</code> isn't able to go down a directory and get the CSVs within each folder of the parent directory?</p>
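A sketch illustrating why the top-level pattern finds nothing: `recursive=True` only takes effect when the pattern contains `**`, so `/ParentDirectory/*.csv` never descends into the subfolders. A self-contained demo with a throwaway tree:

```python
import csv
import glob
import os
import tempfile

# Build a throwaway tree mimicking ParentDirectory/SubFolder N/testN.csv
parent = tempfile.mkdtemp()
for sub, name in [('SubFolder 1', 'test1.csv'),
                  ('SubFolder 3', 'test3.csv'),
                  ('SubFolder 3', 'test4.csv')]:
    os.makedirs(os.path.join(parent, sub), exist_ok=True)
    with open(os.path.join(parent, sub, name), 'w', newline='') as f:
        csv.writer(f).writerows([['a', 'b'], [1, 2]])

# '*.csv' alone only matches files directly inside the parent;
# the '**' component is what makes recursive=True actually recurse
files = sorted(glob.glob(os.path.join(parent, '**', '*.csv'), recursive=True))
```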
<python><pandas><glob>
2023-01-21 19:17:44
2
978
user53526356
75,195,914
8,059,615
Python GRPC 13 Internal Error when trying to yield response
<p>When I print the response, everything seems to be correct, and the type is also correct.</p> <pre><code>Assertion: True Response type: &lt;class 'scrape_pb2.ScrapeResponse'&gt; </code></pre> <p>But on postman I get &quot;13 INTERNAL&quot; With no additional information:</p> <p><a href="https://i.sstatic.net/ZSgKA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZSgKA.png" alt="Error Screenshot" /></a></p> <p>I can't figure out what the issue is, and I can't find out how to log or print the error from the server side.</p> <p>Relevant proto parts:</p> <pre><code>syntax = &quot;proto3&quot;; service ScrapeService { rpc ScrapeSearch(ScrapeRequest) returns (stream ScrapeResponse) {}; } message ScrapeRequest { string url = 1; string keyword = 2; } message ScrapeResponse { oneof result { ScrapeSearchProgress search_progress = 1; ScrapeProductsProgress products_progress = 2; FoundProducts found_products = 3; } } message ScrapeSearchProgress { int32 page = 1; int32 total_products = 2; repeated string product_links = 3; } </code></pre> <p>scraper.py</p> <pre><code>def get_all_search_products(search_url: str, class_keyword: str): search_driver = webdriver.Firefox(options=options, service=service) search_driver.maximize_window() search_driver.get(search_url) # scrape first page product_links = scrape_search(driver=search_driver, class_keyword=class_keyword) page = 1 search_progress = ScrapeSearchProgress(page=page, total_products=len(product_links), product_links=[]) search_progress.product_links[:] = product_links # scrape next pages while go_to_next_page(search_driver): page += 1 print(f'Scraping page=&gt;{page}') product_links.extend(scrape_search(driver=search_driver, class_keyword=class_keyword)) print(f'Number of products scraped=&gt;{len(product_links)}') search_progress.product_links.extend(product_links) # TODO: remove this line if page == 6: break search_progress_response = ScrapeResponse(search_progress=search_progress) yield 
search_progress_response </code></pre> <p>Server:</p> <pre><code>class ScrapeService(ScrapeService): def ScrapeSearch(self, request, context): print(f&quot;Request received: {request}&quot;) scrape_responses = get_all_search_products(search_url=request.url, class_keyword=request.keyword) for response in scrape_responses: print(f&quot;Assertion: {response.HasField('search_progress')}&quot;) print(f&quot;Response type: {type(response)}&quot;) yield response </code></pre>
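One way to surface the server-side traceback hiding behind a bare `13 INTERNAL` (a framework-agnostic sketch, not gRPC-specific): wrap the response generator so any exception raised while producing responses is printed server-side before being re-raised.

```python
import traceback
from functools import wraps

def log_stream_errors(gen_func):
    """Wrap a streaming-response generator so exceptions raised while
    producing items are printed (server-side) before propagating; gRPC
    otherwise swallows them into a generic INTERNAL status."""
    @wraps(gen_func)
    def wrapper(*args, **kwargs):
        try:
            yield from gen_func(*args, **kwargs)
        except Exception:
            traceback.print_exc()   # full server-side traceback
            raise                   # still let the framework report failure
    return wrapper

@log_stream_errors
def demo_stream():
    yield "ok-1"
    raise ValueError("boom inside the scraper")
```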
<python><grpc><grpc-python>
2023-01-21 19:14:20
1
405
Yousef
75,195,748
1,436,800
AttributeError Exception: Serializer has no attribute request in DRF
<p>I have written the following code in a serializer, where I am validating data:</p> <pre><code>class MySerializer(serializers.ModelSerializer): class Meta: model = models.MyClass fields = &quot;__all__&quot; def validate(self, data): role = data[&quot;role&quot;] roles = models.Role.objects.filter( --&gt;(exception) organization=self.request.user.organization ) if role not in roles: raise serializers.ValidationError(&quot;Invalid role selected&quot;) return data </code></pre> <p>But I am getting the following exception:</p> <p>'MySerializer' object has no attribute 'request'. It is raised at the marked line. I want to access the current user in the validate function. How can I do that?</p>
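DRF serializers expose the view's request as `self.context['request']` (generic views populate the context automatically), not as `self.request`, which is what the AttributeError points at. A minimal stand-in sketch of the pattern with no Django dependency; `organization_roles` is a hypothetical stand-in for the `Role.objects.filter(...)` queryset:

```python
from types import SimpleNamespace

class MySerializer:
    """Stand-in for a DRF ModelSerializer: DRF stores the request in
    self.context['request'], so self.request does not exist."""
    def __init__(self, context=None):
        self.context = context or {}

    def validate(self, data):
        request = self.context.get('request')
        if request is None:
            raise ValueError('serializer was built without a request in context')
        # hypothetical stand-in for Role.objects.filter(organization=...)
        roles = request.user.organization_roles
        if data['role'] not in roles:
            raise ValueError('Invalid role selected')
        return data

fake_request = SimpleNamespace(
    user=SimpleNamespace(organization_roles={'admin', 'editor'}))
serializer = MySerializer(context={'request': fake_request})
```

When instantiating the serializer manually in a view, pass `context={'request': request}` so the lookup works.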
<python><django><django-rest-framework><django-views><django-serializer>
2023-01-21 18:46:38
1
315
Waleed Farrukh
75,195,738
9,102,437
How to update a python library without restarting ipynb kernel?
<p>Unfortunately, there is no better way to do this, so please don't ask why I can't do it the normal way, it is too long to explain :). The issue is that one of the packages demands numpy&lt;=1.21, but the one installed is 1.23.4. In the notebook I run <code>!pip install numpy==1.21</code>, which solves the issue, BUT pip tells you that to see the changes you have to restart the notebook (which I cannot do). I think that is because the notebook runs in a virtual environment and numpy is installed outside of it. I have tried many things like <code>%%reboot</code> or <code>importlib.reload(np)</code>, but the output of</p> <pre><code>import numpy as np print(np.__version__) </code></pre> <p>is strictly 1.23.4. Maybe there is a way to overcome this?</p>
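`importlib.reload()` re-executes a module's (possibly rewritten) source in place, which is the closest thing to picking up a new `pip install` without a restart; for a heavyweight package it only helps if nothing else still holds references to the old objects, which is why pip recommends restarting. A self-contained demo with a throwaway stand-in module (the version strings are just the ones from the question):

```python
import importlib
import pathlib
import sys
import tempfile

# Create a fake package whose source we will rewrite, as pip would
moddir = tempfile.mkdtemp()
modfile = pathlib.Path(moddir) / '_fake_numpy_demo.py'
modfile.write_text("__version__ = '1.23.4'\n")

sys.path.insert(0, moddir)
import _fake_numpy_demo as fake_np

before = fake_np.__version__
modfile.write_text("__version__ = '1.21'\n")  # "pip" rewrote the package
importlib.invalidate_caches()                 # drop stale finder caches
importlib.reload(fake_np)                     # re-execute the new source
after = fake_np.__version__
```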
<python><python-3.x><linux><numpy><sys>
2023-01-21 18:45:27
4
772
user9102437
75,195,622
18,023,322
Download video with yt-dlp using format id
<p>How can I download a specific format without using options like &quot;best video&quot;, using the format ID... example: 139, see the <a href="https://i.sstatic.net/7MlOf.jpg" rel="noreferrer">picture</a></p> <pre class="lang-none prettyprint-override"><code>❯ yt-dlp --list-formats https://www.youtube.com/watch?v=BaW_jenozKc [youtube] Extracting URL: https://www.youtube.com/watch?v=BaW_jenozKc [youtube] BaW_jenozKc: Downloading webpage [youtube] BaW_jenozKc: Downloading android player API JSON [info] Available formats for BaW_jenozKc: ID EXT RESOLUTION FPS CH │ FILESIZE TBR PROTO │ VCODEC VBR ACODEC ABR ASR MORE INFO ───────────────────────────────────────────────────────────────────────────────────────────────────────────── 139 m4a audio only 2 │ 58.59KiB 48k https │ audio only mp4a.40.5 48k 22k low, m4a_dash 249 webm audio only 2 │ 58.17KiB 48k https │ audio only opus 48k 48k low, webm_dash 250 webm audio only 2 │ 76.07KiB 63k https │ audio only opus 63k 48k low, webm_dash 140 m4a audio only 2 │ 154.06KiB 128k https │ audio only mp4a.40.2 128k 44k medium, m4a_dash 251 webm audio only 2 │ 138.96KiB 116k https │ audio only opus 116k 48k medium, webm_dash 17 3gp 176x144 12 1 │ 55.79KiB 45k https │ mp4v.20.3 45k mp4a.40.2 0k 22k 144p 160 mp4 256x144 15 │ 135.08KiB 113k https │ avc1.4d400c 113k video only 144p, mp4_dash 278 webm 256x144 30 │ 52.22KiB 44k https │ vp9 44k video only 144p, webm_dash 133 mp4 426x240 30 │ 294.27KiB 246k https │ avc1.4d4015 246k video only 240p, mp4_dash 242 webm 426x240 30 │ 33.27KiB 28k https │ vp9 28k video only 240p, webm_dash 134 mp4 640x360 30 │ 349.59KiB 292k https │ avc1.4d401e 292k video only 360p, mp4_dash 18 mp4 640x360 30 2 │ ~525.60KiB 420k https │ avc1.42001E 420k mp4a.40.2 0k 44k 360p 243 webm 640x360 30 │ 75.55KiB 63k https │ vp9 63k video only 360p, webm_dash 135 mp4 854x480 30 │ 849.41KiB 710k https │ avc1.4d401f 710k video only 480p, mp4_dash 244 webm 854x480 30 │ 165.49KiB 138k https │ vp9 138k video only 480p, 
webm_dash 22 mp4 1280x720 30 2 │ ~ 1.82MiB 1493k https │ avc1.64001F 1493k mp4a.40.2 0k 44k 720p 136 mp4 1280x720 30 │ 1.60MiB 1366k https │ avc1.4d401f 1366k video only 720p, mp4_dash 247 webm 1280x720 30 │ 504.68KiB 420k https │ vp9 420k video only 720p, webm_dash 137 mp4 1920x1080 30 │ 2.11MiB 1803k https │ avc1.640028 1803k video only 1080p, mp4_dash 248 webm 1920x1080 30 │ 965.31KiB 804k https │ vp9 804k video only 1080p, webm_dash </code></pre> <p>I tried using the format url, but it didn't work</p>
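The ID column from `--list-formats` can be passed straight to `-f`/`--format` (a usage sketch; the URL is the one from the listing above):

```shell
# Download only the format whose ID is 139 (the m4a audio-only row above)
yt-dlp -f 139 "https://www.youtube.com/watch?v=BaW_jenozKc"

# IDs can also be combined, e.g. mux video format 137 with audio format 140
yt-dlp -f 137+140 "https://www.youtube.com/watch?v=BaW_jenozKc"
```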
<python><yt-dlp>
2023-01-21 18:26:44
2
605
LegzDev
75,195,481
149,818
Z3 optimize by index not a value
<p>With great respect to the answer by @alias here (<a href="https://stackoverflow.com/a/69655123/149818">Find minimum sum</a>), I would like to solve a similar puzzle. There are 4 agents and 4 types of work. Each agent does each work at some price (see the <code>initial</code> matrix in the code). I need to find the optimal allocation of agents to works. The following code is almost a copy-paste from the mentioned answer:</p> <pre><code>initial = ( # Row - agent, Column - work (7, 7, 3, 6), (4, 9, 5, 4), (5, 5, 4, 5), (6, 4, 7, 2) ) opt = Optimize() agent = [Int(f&quot;a_{i}&quot;) for i, _ in enumerate(initial)] opt.add(And(*(a != b for a, b in itertools.combinations(agent, 2)))) for w, row in zip(agent, initial): opt.add(Or(*[w == val for val in row])) minTotal = Int(&quot;minTotal&quot;) opt.add(minTotal == sum(agent)) opt.minimize(minTotal) print(opt.check()) print(opt.model()) </code></pre> <p>The mathematically correct answer, <code>[a_2 = 4, a_1 = 5, a_3 = 2, a_0 = 3, minTotal = 14]</code>, does not work for me, because I need the index of each agent's work instead. So my question is: how do I rework the code to optimize by indexes instead of values? I've tried to leverage the <code>Array</code> type but have no idea how to minimize multiple sums.</p>
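A brute-force cross-check of the index-based formulation (pure Python, not Z3): `assignment[i]` is the *index* of the work given to agent `i`, and the optimum matches the `minTotal = 14` the model already finds. In Z3 the same idea can be expressed by summing `If(a[i] == j, costs[i][j], 0)` over distinct `Int` index variables.

```python
from itertools import permutations

# Row i = agent i, column j = price agent i charges for work j
costs = [[7, 7, 3, 6],
         [4, 9, 5, 4],
         [5, 5, 4, 5],
         [6, 4, 7, 2]]

def best_assignment(costs):
    """Exhaustively search work-index permutations; p[i] is the work
    index assigned to agent i (fine for n=4, factorial in general)."""
    n = len(costs)
    return min(permutations(range(n)),
               key=lambda p: sum(costs[i][p[i]] for i in range(n)))

assignment = best_assignment(costs)
total = sum(costs[i][w] for i, w in enumerate(assignment))
```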
<python><dynamic-programming><z3><z3py>
2023-01-21 18:04:27
1
23,762
Dewfy
75,195,469
16,567,918
Django multi-language does not load the custom-translated files when changing the user language but only works when LANGUAGE_CODE explicitly set
<p>When I change the user's language via the URL or by calling <code>translation.activate(lang_code)</code>, only the default texts are translated and the custom translations that I created are not loaded. But if I set <code>LANGUAGE_CODE</code> to the target language in the settings file, the translation shows without any issue. I want the custom translations that I created to show when the user changes language.</p> <p>This is my model:</p> <pre><code>from django.db import models from django.utils.translation import gettext as _ class Test(models.Model): test = models.CharField( max_length=100, verbose_name=_(&quot;test text&quot;), ) </code></pre> <p>my admin:</p> <pre><code>from django.contrib import admin from lng.models import Test @admin.register(Test) class TestModelAdmin(admin.ModelAdmin): ... </code></pre> <p>my settings:</p> <pre><code>MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', &quot;django.middleware.locale.LocaleMiddleware&quot;, # Here ! 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] LANGUAGE_CODE = 'en' TIME_ZONE = 'UTC' USE_I18N = True USE_L10N = True USE_TZ = True LOCALE_PATHS = [ BASE_DIR / 'locale/', ] </code></pre> <p>my urls:</p> <pre><code>from django.urls import path, include from django.conf.urls.i18n import i18n_patterns urlpatterns = [ path('i18n/', include('django.conf.urls.i18n')), ] translatable_urls = [ path('admin/', admin.site.urls), ] urlpatterns += i18n_patterns(*translatable_urls) </code></pre> <p>my project structure:</p> <pre><code>.
├── db.sqlite3 ├── lng │   ├── admin.py │   ├── apps.py │   ├── __init__.py │   ├── migrations │   │   ├── 0001_initial.py │   │   └── __init__.py │   ├── models.py │   ├── tests.py │   └── views.py ├── locale │   ├── en │   │   └── LC_MESSAGES │   │   ├── django.mo │   │   └── django.po │   └── fa │   └── LC_MESSAGES │   ├── django.mo │   └── django.po ├── manage.py └── Test ├── asgi.py ├── __init__.py ├── settings.py ├── urls.py └── wsgi.py 8 directories, 19 files </code></pre> <p>my fa po file:</p> <pre><code>#: lng/models.py:6 msgid &quot;test text&quot; msgstr &quot;متن تستی&quot; </code></pre> <p><a href="https://i.sstatic.net/4TknT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4TknT.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/vTCBV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vTCBV.png" alt="enter image description here" /></a></p>
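A likely cause (an assumption based on the snippets above, not stated in the post): the model imports `gettext`, which Django evaluates once, at import time, under whatever language is active when `models.py` is first loaded; `gettext_lazy` defers the lookup until the string is actually displayed. A minimal stand-in sketch of the difference, with a hypothetical in-memory `translations` table instead of Django's machinery:

```python
# Stand-in demonstrating why gettext() in a model definition freezes the
# language: the eager call resolves immediately, the lazy one at display time.
translations = {'en': {'test text': 'test text'},
                'fa': {'test text': 'متن تستی'}}
active_language = 'en'

def gettext(msg):                 # eager, like django...translation.gettext
    return translations[active_language][msg]

def gettext_lazy(msg):            # lazy, like django...translation.gettext_lazy
    return lambda: translations[active_language][msg]

eager = gettext('test text')      # resolved now, while 'en' is active
lazy = gettext_lazy('test text')  # resolved later

active_language = 'fa'            # the user switches language via the URL
```

In `models.py` the corresponding fix would be `from django.utils.translation import gettext_lazy as _`.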
<python><python-3.x><django><internationalization><django-i18n>
2023-01-21 18:02:27
1
445
Nova
75,195,325
10,694,247
How to monitor AWS Fsx ONTAP Filesystem Usage using lambda function and cloudwatch metrics
<p>I am first time trying to work on the <code>aws lambda</code> function to get the monitoring for <code>FSxN</code> IE <code>FSx</code> for ONTAP Storage in AWS.</p> <p>Here I want to get the <code>StorageCapacity</code> and <code>StorageTier</code> in order to achieve total <code>filesystemUsed</code> percentage for monitoring.</p> <p>I have tried the code below after a lot of search and trial but got an error.</p> <h1>Code tried:</h1> <pre><code>import json import boto3 from datetime import datetime def lambda_handler(event, context): fsx = boto3.client('fsx') filesystems = fsx.describe_file_systems() table = [] for filesystem in filesystems.get('FileSystems'): status = filesystem.get('Lifecycle') filesystem_id = filesystem.get('FileSystemId') table.append(filesystem_id) cloudwatch = boto3.client('cloudwatch') result = [] for filesystem_id in table: current_time = datetime.utcnow().isoformat() response = cloudwatch.get_metric_data( MetricDataQueries=[ { 'Id': 'm1', 'MetricStat': { 'Metric': { 'Namespace': 'AWS/FSx', 'MetricName': 'StorageCapacity', 'Dimensions': [ { 'Name': 'FileSystemId', 'Value': filesystem_id }, { 'Name': 'StorageTier', 'Value': 'SSD' }, { 'Name': 'DataType', 'Value': 'All' } ] }, 'Period': 60, 'Stat': 'Sum' }, 'ReturnData': True }, { 'Id': 'm2', 'MetricStat': { 'Metric': { 'Namespace': 'AWS/FSx', 'MetricName': 'StorageUsed', 'Dimensions': [ { 'Name': 'FileSystemId', 'Value': filesystem_id }, { 'Name': 'StorageTier', 'Value': 'SSD' }, { 'Name': 'DataType', 'Value': 'All' } ] }, 'Period': 60, 'Stat': 'Sum' }, 'ReturnData': True } ], StartTime='2023-01-20T00:01:00Z', EndTime='2023-01-20T00:02:00Z' ) storage_capacity = response['MetricDataResults'][0]['Values'][0] storage_used = response['MetricDataResults'][1]['Values'][0] result.append({'filesystem_id': filesystem_id,'storage_capacity': storage_capacity, 'storage_used': storage_used}) return result </code></pre> <h1>Error after execution</h1> <pre><code>Response { &quot;errorMessage&quot;: 
&quot;list index out of range&quot;, &quot;errorType&quot;: &quot;IndexError&quot;, &quot;requestId&quot;: &quot;a09573f2-87ea-4464-afc0-8b196f669415&quot;, &quot;stackTrace&quot;: [ &quot; File \&quot;/var/task/lambda_function.py\&quot;, line 75, in lambda_handler\n storage_capacity = response['MetricDataResults'][0]['Values'][0]\n&quot; ] } </code></pre> <h1>MetricDataResults for metric(<code>m1</code>) and metric(<code>m2</code>)</h1> <pre><code>{'MetricDataResults': [ {'Id': 'm1', 'Label': 'StorageCapacity', 'Timestamps': [datetime.datetime(2023, 1, 20, 0, 1, tzinfo=tzlocal())], 'Values': [925308932096.0], 'StatusCode': 'Complete'}, {'Id': 'm2', 'Label': 'StorageUsed', 'Timestamps': [datetime.datetime(2023, 1, 20, 0, 1, tzinfo=tzlocal())], 'Values': [2439143424.0], 'StatusCode': 'Complete'}], 'Messages': [], 'ResponseMetadata': {'RequestId': '479a53b2-b5f5-46c0-b79d-278d803df94b', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '479a53b2-b5f5-46c0-b79d-278d803df94b', 'content-type': 'text/xml', 'content-length': '923', 'date': 'Sat, 21 Jan 2023 18:06:35 GMT'}, 'RetryAttempts': 0}} {'MetricDataResults': [ {'Id': 'm1', 'Label': 'StorageCapacity', 'Timestamps': [datetime.datetime(2023, 1, 20, 0, 1, tzinfo=tzlocal())], 'Values': [925308932096.0], 'StatusCode': 'Complete'}, {'Id': 'm2', 'Label': 'StorageUsed', 'Timestamps': [datetime.datetime(2023, 1, 20, 0, 1, tzinfo=tzlocal())], 'Values': [2593112064.0], 'StatusCode': 'Complete'}], 'Messages': [], 'ResponseMetadata': {'RequestId': 'db9ad0a4-0a24-4f1d-be60-55bde63fd49b', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': 'db9ad0a4-0a24-4f1d-be60-55bde63fd49b', 'content-type': 'text/xml', 'content-length': '923', 'date': 'Sat, 21 Jan 2023 18:06:35 GMT'}, 'RetryAttempts': 0}} </code></pre> <p>Please help/hint something to get into the solution.</p>
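The `IndexError` means one of the `'Values'` lists came back empty: CloudWatch returns `'Values': []` (still with `StatusCode: 'Complete'`) whenever no datapoint matches the requested dimensions or time window. A small guard sketch:

```python
def first_value(metric_result, default=None):
    """Safely pull the first datapoint out of one GetMetricData result entry.

    CloudWatch returns 'Values': [] when no datapoint matches the
    dimensions/period, and indexing that empty list is exactly what raises
    'list index out of range'.
    """
    values = metric_result.get('Values') or []
    return values[0] if values else default

# shaped like the MetricDataResults entries shown above
ok = {'Id': 'm1', 'Label': 'StorageCapacity', 'Values': [925308932096.0]}
empty = {'Id': 'm2', 'Label': 'StorageUsed', 'Values': []}
```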
<python><amazon-web-services><aws-lambda>
2023-01-21 17:40:56
1
488
user2023
75,195,218
3,259,222
Inherit as required only some fields from parent pandera SchemaModel
<p>I have Input and Output pandera SchemaModels and the Output inherits the Input which accurately represents that all attributes of the Input schema are in the scope of the Output schema.</p> <p>What I want to avoid is inheriting all attributes as required (non-Optional) as they are rightly coming from the Input schema. Instead I want to preserve them as required for the Input schema but define which of them remain required for the Output schema while the other inherited attributes become optional.</p> <p>This pydantic <a href="https://stackoverflow.com/questions/61948723/how-to-extend-a-pydantic-object-and-change-some-fields-type">question</a> is similar and has solution for defining <code>__init_subclass__</code> method in the parent class. However, this doesn't work out of the box for pandera classes and I'm not sure if it is even implementable or the right approach.</p> <pre><code>import pandera as pa from typing import Optional from pandera.typing import Index, DataFrame, Series, Category class InputSchema(pa.SchemaModel): reporting_date: Series[pa.DateTime] = pa.Field(coerce=True) def __init_subclass__(cls, optional_fields=None, **kwargs): super().__init_subclass__(**kwargs) if optional_fields: for field in optional_fields: cls.__fields__[field].outer_type_ = Optional cls.__fields__[field].required = False class OutputSchema(InputSchema, optional_fields=['reporting_date']): test: Series[str] = pa.Field() @pa.check_types def func(inputs: DataFrame[InputSchema]) -&gt; DataFrame[OutputSchema]: inputs = inputs.drop(columns=['reporting_date']) inputs['test'] = 'a' return inputs data = pd.DataFrame({'reporting_date': ['2023-01-11', '2023-01-12']}) func(data) </code></pre> <p>Error:</p> <pre><code>---&gt; 18 class OutputSchema(InputSchema, optional_fields=['reporting_date']): KeyError: 'reporting_date' </code></pre> <p>Edit:</p> <p>Desired outcome to be able to set which fields from the inherited schema remain required while the remaining become optional:</p> 
<pre><code>class InputSchema(pa.SchemaModel): reporting_date: Series[pa.DateTime] = pa.Field(coerce=True) other_field: Series[str] = pa.Field() class OutputSchema(InputSchema, required=['reporting_date']): test: Series[str] = pa.Field() </code></pre> <p>The resulting <code>OutputSchema</code> has <code>reporting_date</code> and <code>test</code> as required, while <code>other_field</code> becomes optional.</p>
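A plain-Python sketch of the desired `required=[...]` hook. Pandera does not keep pydantic-style `cls.__fields__` populated the way the copied snippet assumes (hence the `KeyError`), so this only demonstrates the `__init_subclass__` mechanism over class annotations; mapping it onto pandera would mean rebuilding the affected `pa.Field(...)` descriptors instead.

```python
class Schema:
    _required = ()

    def __init_subclass__(cls, required=None, **kwargs):
        super().__init_subclass__(**kwargs)
        # collect annotations over the MRO so inherited fields are included
        declared = {}
        for klass in reversed(cls.__mro__):
            declared.update(getattr(klass, '__annotations__', {}))
        # no keyword: everything stays required; otherwise only the listed ones
        cls._required = tuple(declared) if required is None else tuple(required)

class InputSchema(Schema):
    reporting_date: str
    other_field: str

class OutputSchema(InputSchema, required=['reporting_date']):
    test: str
```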
<python><pydantic><pandera>
2023-01-21 17:23:32
2
431
Konstantin
75,195,149
15,229,310
python display type of raised exception
<p>I define my own exception</p> <pre><code>class MyException(Exception): pass </code></pre> <p>somewhere deep in the project folder structure, i.e. project\subfolder_1\subfolder_2\etc...\exceptions.py, and then use it via <code>from project.subproject.subfolder_1.subfolder_2.\etc ... .exceptions import MyException as MyException</code> and raise it as <code>raise MyException('bad stuff happened')</code>.</p> <p>It is then displayed in the output as</p> <blockquote> <p>project.subproject.subfolder_1.subfolder_2.etc... .exceptions.MyException: bad stuff happened</p> </blockquote> <p>Can I somehow get rid of the full namespace? Since it is imported with 'as' anyway and referred to in code only as MyException, I'd like to display just</p> <blockquote> <p>MyException: bad stuff happened</p> </blockquote> <p>as with the built-in exceptions.</p>
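One way to get the bare name (a sketch; note that hiding the module path can make debugging harder): traceback rendering omits the module qualifier when the exception class's `__module__` is `'__main__'` or `'builtins'`, so overriding it on the class drops the dotted path.

```python
import traceback

class MyException(Exception):
    pass

# traceback printing qualifies an exception with its module unless that
# module is '__main__' or 'builtins'; overriding it yields the bare name
MyException.__module__ = 'builtins'

rendered = traceback.format_exception_only(
    MyException, MyException('bad stuff happened'))[0]
```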
<python>
2023-01-21 17:13:58
1
349
stam
75,195,111
6,952,996
How to keep track of data structures for class instances in Python
<p>I am using VSCode to write Python code.<br /> I have a lot of different structures for data that is sent and received from other functions and stored in variables.<br /> I tend to resort to put an example of the structure of the variable as a comment in the code, to help with quickly providing information when writing additonal code or refactoring. This helps, but its also easy to forget to update the comments when the structure change.</p> <p>To help with this I started using the <code>@dataclass</code> decorator to define classes of the different structures. This helps as I could then keep example of the structure of the data as a docstring in the class.</p> <p>For example</p> <pre class="lang-py prettyprint-override"><code>@dataclass class NameAge: &quot;&quot;&quot; [ { &quot;name&quot;: &quot;Peter&quot;, &quot;age&quot;: &quot;22&quot;, } ] &quot;&quot;&quot; name: str age: str </code></pre> <p>I can then create an instance of the class by doing</p> <pre class="lang-py prettyprint-override"><code>name_age_obj = NameAge(name=&quot;Peter&quot;, age=&quot;22&quot;) </code></pre> <p>When I do mouse-over the &quot;NameAge&quot; part of the line above in VSCode I get a nice help text from the docstring that looks like this</p> <p><a href="https://i.sstatic.net/9IcUP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9IcUP.png" alt="Docstring" /></a></p> <p>But to get to the actual question and problem, after I have created a variable called <code>name_age_obj</code> which is an instance of the class <code>NameAge</code> it is not possible to do mouse-over on <code>name_age_obj</code> to see the docstring and class help. It only shows as this:</p> <p><a href="https://i.sstatic.net/oNG2G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oNG2G.png" alt="enter image description here" /></a></p> <p>Is it possible to get the mouse-over popup to show the class help? 
This would make it much easier and quicker to understand how different variables are structured.</p>
<python><visual-studio-code>
2023-01-21 17:07:43
0
937
Johnathan
75,195,106
14,584,978
Write python polars lazy_frame to csv gzip archive after collect()
<p>What is the best way to write a gzip-compressed CSV in python-polars?</p> <p>This is my current implementation:</p> <pre><code>import polars as pl import gzip # create a lazy frame (a plain DataFrame has no .collect()) df = pl.DataFrame({ &quot;foo&quot;: [1, 2, 3, 4, 5], &quot;bar&quot;: [6, 7, 8, 9, 10], &quot;ham&quot;: [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;d&quot;, &quot;e&quot;] }).lazy() # collect the lazy frame to memory and write to a gzip file file_path = 'compressed_dataframe.gz' with gzip.open(file_path, 'wb') as f: df.collect().write_csv(f) </code></pre>
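A sketch of the same pattern with stdlib `csv`, used here so the example is self-contained: `gzip.open(..., 'wt')` yields a text-mode file object that any CSV writer can target, so no uncompressed intermediate file is needed. The question's polars code should be able to use the identical file object in `df.collect().write_csv(f)`, assuming the installed polars accepts file-like objects (an assumption; current versions do).

```python
import csv
import gzip
import os
import tempfile

rows = [{'foo': 1, 'bar': 6, 'ham': 'a'},
        {'foo': 2, 'bar': 7, 'ham': 'b'}]

path = os.path.join(tempfile.mkdtemp(), 'compressed_dataframe.csv.gz')

# 'wt' gives a text-mode, gzip-compressing file object
with gzip.open(path, 'wt', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)

# round-trip to prove the archive holds a valid CSV
with gzip.open(path, 'rt', newline='') as f:
    restored = list(csv.DictReader(f))
```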
<python><gzip><lazy-evaluation><python-polars>
2023-01-21 17:06:33
2
374
Isaacnfairplay
75,195,030
6,792,327
Gemini API: Supplied value is not a valid DateTime
<p>I am attempting to call an endpoint provided by the <a href="https://docs.gemini.com/rest-api/#fx-rate" rel="nofollow noreferrer">Gemini REST API to extract FX rates</a>, but it keeps throwing this error message:</p> <blockquote> <p>{'result': 'error', 'reason': 'Bad Request', 'message': &quot;Supplied value '1495127793000' is not a valid DateTime&quot;}</p> </blockquote> <p>Code:</p> <pre><code>base_url = &quot;https://api.gemini.com&quot; fx_url = base_url + '/v2/fxrate/gbpusd/1495127793000' fxpayload = { &quot;request&quot;: &quot;/v2/fxrate&quot;, } fxheaders = create_headers(fxpayload) response = requests.get(fx_url, headers=fxheaders) print(response.json()) </code></pre> <p>Note that for that endpoint, the docs state that the <a href="https://docs.gemini.com/rest-api/#timestamps" rel="nofollow noreferrer">timestamp</a> path parameter should be of <code>timestamp</code> type, which the docs describe as &quot;whole number (milliseconds)&quot;. I believe I have provided it in the right format, but it still throws errors.</p>
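A small sanity-check sketch for the value being sent (the cause of the 400 is an assumption here, not confirmed by the docs): 1495127793000 ms corresponds to 2017-05-18, so if the endpoint only serves a limited history, a fresher millisecond timestamp may be needed rather than a differently formatted one.

```python
from datetime import datetime, timezone

def to_epoch_ms(dt):
    """Milliseconds since the Unix epoch, the 'whole number (milliseconds)'
    shape the Gemini docs describe; naive datetimes are assumed UTC."""
    return int(dt.replace(tzinfo=dt.tzinfo or timezone.utc).timestamp() * 1000)

# build the path component from a known datetime instead of a hard-coded value
ts = to_epoch_ms(datetime(2022, 1, 1, tzinfo=timezone.utc))
fx_url = f'https://api.gemini.com/v2/fxrate/gbpusd/{ts}'
```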
<python>
2023-01-21 16:55:00
1
2,947
Koh
75,194,890
1,096,777
Why are PIL and its Image module capitalized?
<p><a href="https://peps.python.org/pep-0008/#package-and-module-names" rel="nofollow noreferrer">PEP8 standard is for modules to be lower-case</a></p> <p>PIL being a top-level module in all caps isn't so bad, but to <a href="https://pillow.readthedocs.io/en/stable/reference/Image.html" rel="nofollow noreferrer">name a module Image</a> and then have a class in that module called Image seems unnecessarily confusing.</p> <p>Why was it done this way?</p>
<python><python-imaging-library><pep8>
2023-01-21 16:36:08
1
2,556
mavix
75,194,852
7,091,646
count of sublists containing item for all items
<p>I'm looking for a more efficient/pythonic way of doing this.</p> <pre><code>l = [[0,0],[1,0],[4,5,1],[2,3,5],[0,4]] set_l = set([i for sl in l for i in sl]) sublists_containing_item_count = [sum([1 for x in l if i in x]) for i in set_l] count_dict = dict(zip(set_l,sublists_containing_item_count)) count_dict {0: 3, 1: 2, 2: 1, 3: 1, 4: 2, 5: 2} </code></pre>
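A single-pass version with `collections.Counter`, deduplicating each sublist first so an item counts once per sublist that contains it (rather than once per occurrence):

```python
from collections import Counter

l = [[0, 0], [1, 0], [4, 5, 1], [2, 3, 5], [0, 4]]

# set(sl) collapses duplicates within a sublist, so [0, 0] contributes
# a single count for 0; one pass instead of a scan per distinct item
count_dict = dict(Counter(i for sl in l for i in set(sl)))
```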
<python><counter>
2023-01-21 16:30:50
6
1,399
Eric
75,194,840
4,321,525
How to loop over all elements of different sorted lists in sorted order?
<p>I have several sorted lists. How can I loop over all elements in sorted order efficiently and elegantly? In my real-life problem, those lists contain elements that are directly comparable and sortable but are different and require different treatment.</p> <p>I prefer to retain my lists, which is why I copy them manually. If that is missing from a one-liner solution like a library function, I will gladly use that and copy the lists beforehand.<br /> This code does what I want but is neither efficient nor elegant.</p> <pre><code>from random import randint a: list = [] b: list = [] c: list = [] list_of_lists: list = [a, b, c] for i in range(10): l = randint(0, 2) list_of_lists[l].append(i) print(a, b, c) a_copy = a.copy() b_copy = b.copy() c_copy = c.copy() # print the elements of the lists in sorted order x = a_copy.pop(0) y = b_copy.pop(0) z = c_copy.pop(0) while (x and x != 1000) or \ (y and y != 1000) or \ (z and z != 1000): if x != 1000 and x &lt; y and x &lt; z: print(x) if a_copy and a_copy[0]: x = a_copy.pop(0) else: x = 1000 elif y != 1000 and y &lt; x and y &lt; z: print(y) if b_copy and b_copy[0]: y = b_copy.pop(0) else: y = 1000 elif z != 1000 and z &lt; x and z &lt; y: print(z) if c_copy and c_copy[0]: z = c_copy.pop(0) else: z = 1000 </code></pre>
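The stdlib already covers this: `heapq.merge` lazily merges any number of pre-sorted iterables without copying or mutating them, and tagging each stream preserves the "different treatment per list" requirement:

```python
import heapq

a, b, c = [0, 3, 7], [1, 4], [2, 5, 6, 8, 9]

# heapq.merge consumes the inputs through fresh iterators, so the
# original lists survive untouched; no copies or pop(0) bookkeeping
merged = list(heapq.merge(a, b, c))

# when elements need type-specific handling, tag each stream with its origin
tagged = heapq.merge(((x, 'a') for x in a),
                     ((x, 'b') for x in b),
                     ((x, 'c') for x in c))
```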
<python><list><algorithm><loops>
2023-01-21 16:29:30
3
405
Andreas Schuldei
75,194,729
4,688,705
Is this a multi-threading race condition problem?
<p>In a python3 tkinter project, I am trying to read a continuous stream of data from a serial port (just an arduino sending a millisecond value over USB).</p> <p>The code which reads serial data runs in a separate thread so as to disconnect it from the GUI loop.</p> <p>I need to be able to connect and disconnect from the serial port, which is done from the GUI.</p> <p>Everything works up until I disconnect from the serial port, when I get the following error.</p> <p>I was expecting that once the serialConnection.close() function is called in the main code, the serialStream function would just run 'pass' (line 14) until the connection is opened again -- the error suggests it is still running line 12.</p> <p>Is this a race condition error, I wonder, and how do I fix it?</p> <pre class="lang-none prettyprint-override"><code>Exception in thread Thread-1 (serialStream): Traceback (most recent call last): File &quot;/usr/lib/python3.10/threading.py&quot;, line 1016, in _bootstrap_inner self.run() File &quot;/usr/lib/python3.10/threading.py&quot;, line 953, in run self._target(*self._args, **self._kwargs) File &quot;/home/soon/Python/mwe_threaded_serial.py&quot;, line 12, in serialStream rawReading = str(serialConnection.readline()) File &quot;/home/soon/.local/lib/python3.10/site-packages/serial/serialposix.py&quot;, line 575, in read buf = os.read(self.fd, size - len(read)) TypeError: 'NoneType' object cannot be interpreted as an integer </code></pre> <p>This is from a minimal working example, which looks like this:</p> <pre class="lang-py prettyprint-override"><code>import serial import threading import time # Change to correct serial port on your system serialPort = &quot;/dev/ttyACM0&quot; serialConnection = serial.Serial() def serialStream(): while True: if (serialConnection.is_open): rawReading = str(serialConnection.readline()) print(rawReading) else: pass def connect_serial(): global serialConnection, serialPort # In case serial connection is already open if (serialConnection.is_open): serialConnection.close() time.sleep(1) serialConnection = serial.Serial( port=serialPort,\ baudrate=9600,\ parity=serial.PARITY_NONE,\ stopbits=serial.STOPBITS_ONE,\ bytesize=serial.EIGHTBITS,\ timeout=1) time.sleep(1) if (not serialConnection.is_open): print(&quot;Connection failed&quot;) else: print(&quot;Connection established&quot;) thread = threading.Thread(target=serialStream) thread.daemon = True thread.start() connect_serial() time.sleep(5) serialConnection.close() time.sleep(5) connect_serial() </code></pre> <p>If anyone needs an example arduino code to send a milli reading:</p> <pre class="lang-c prettyprint-override"><code>void setup() { // initialize serial communication at 9600 bits per second: Serial.begin(9600); } void loop() { Serial.println(millis()); delay(1); // delay in between reads for stability } </code></pre>
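Yes, this is a check-then-act race: the reader thread can pass the `is_open` test and block inside `readline()` at the exact moment the main thread calls `close()`, leaving pyserial reading from a dead descriptor (hence the `NoneType` error from `os.read(self.fd, ...)`). A common fix is to signal the reader with a `threading.Event` and join it before closing the port. A sketch with a stand-in `read_line` callable instead of a real port:

```python
import threading

stop_streaming = threading.Event()

def serial_stream(read_line):
    # read_line stands in for serialConnection.readline(); polling an Event
    # lets the main thread stop the reader *before* closing the port,
    # instead of racing is_open against close()
    while not stop_streaming.is_set():
        line = read_line()
        if line:
            pass  # process the reading here

reader = threading.Thread(target=serial_stream,
                          args=(lambda: b'123\r\n',), daemon=True)
reader.start()

# disconnect sequence: signal, wait for the reader to exit, then close the port
stop_streaming.set()
reader.join(timeout=2)
```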
<python><python-multithreading><pyserial>
2023-01-21 16:12:17
0
529
Søren ONeill
75,194,622
922,588
Is there a problem with Pipes in Python multiprocessing on macOS?
<p>I'm encountering some strange behavior with <code>Pipe</code> in Python multiprocessing on my Mac (Intel, Monterey). I've tried the following code in 3.7 and 3.11 and in both cases, not all the tasks are executed.</p> <pre><code>def _mp_job(nth, child): print(&quot;Nth is&quot;, nth) if __name__ == &quot;__main__&quot;: from multiprocessing import Pool, Pipe, set_start_method, log_to_stderr import logging, time set_start_method(&quot;spawn&quot;) logger = log_to_stderr() logger.setLevel(logging.DEBUG) with Pool(processes = 10) as mp_pool: jobs = [] for i in range(20): parent, child = Pipe() # child = None r = mp_pool.apply_async(_mp_job, args = (i, child)) jobs.append(r) while jobs: new_jobs = [] for job in jobs: if not job.ready(): new_jobs.append(job) jobs = new_jobs print(&quot;%d jobs remaining&quot; % len(jobs)) time.sleep(1) </code></pre> <p>I know exactly what's going on, but I don't know why.</p> <p>[<strong>EDITED</strong>: my explanation for what was happening was quite unclear on my first pass, as reflected in the comments, so I've cleaned it up. Thanks for your patience.]</p> <p>If I run this code on my macOS Monterey machine, it will loop forever, reporting that some number of jobs are remaining. The logging information reveals that the child processes are failing; you'll see a number of lines like this:</p> <pre><code>[DEBUG/SpawnPoolWorker-10] worker got EOFError or OSError -- exiting </code></pre> <p>What's happening is that when the child worker dequeues a job and tries to unpickle the argument list, it encounters <code>ConnectionRefusedError</code> when unpickling the child connection side of the <code>Pipe</code> in the arguments (I know these details not because of the output of the function above, but because I inserted a traceback printout at the point in the Python multiprocessing library where the worker reports encountering the <code>OSError</code>). 
At that point the worker fails, having removed the job from the work queue but not having completed it. That's why I have <code># child = None</code> in there; if I uncomment that, everything works fine.</p> <p>My first suspicion is that this is a bug in Python on macOS (I haven't tested this on other platforms, but it makes no sense to me that something this basic would have been missed unless it's a platform-specific error). I don't understand why the child process would get <code>ConnectionRefusedError</code>, since the <code>Pipe</code> establishes a socket pair and you shouldn't be able to get <code>ConnectionRefusedError</code> in that case, as far as I understand.</p> <p>This seems more likely to happen the more processes I have in the pool. If I have 2, it seems to work reliably. But 4 or more seem to cause a problem; I have a six-core computer, so I don't think that's part of what's happening.</p> <p>Does anyone have any insight into this? Am I doing something obviously wrong?</p>
<python><python-multiprocessing>
2023-01-21 15:57:44
0
415
Sam Bayer
75,194,618
15,229,310
Python exception - add custom message as very last line
<p>Any exception-raising code, e.g.</p> <pre><code> l= [1,2,3] index = 4 l[index] </code></pre> <p>produces standard exception output to the console:</p> <p><em>trace</em></p> <p><em>IndexError: list index out of range</em></p> <p><em>Process finished with exit code 1</em></p> <p>I'd like to add a custom message at the bottom:</p> <p><em>trace</em></p> <p><em>IndexError: list index out of range</em></p> <p><strong>My own message here (i.e. better explaining what happened)</strong></p> <p><em>Process finished with exit code 1</em></p> <p>I tried numerous options but I'm not able to get the desired output. I'd be fine with raising a new exception from the original one, as in</p> <pre><code> try: l= [1,2,3] index = 4 l[index] except Exception as e: MyException = (create new exception that on raise produce only single line message) raise MyException from e </code></pre> <p>however the new exception displays lots of noise, like its own trace, and I only seem to be able to get rid of that via <code>sys.tracebacklimit = 0</code>, which hides the original exception trace as well - not desired.</p> <p>Nor was I able to override the original error's __ str __ and/or __ repr __ to inject my message at the end (note that adding a new entry into error.args won't do the trick, since at best it does something like</p> <p><em>IndexError: My own message, rest of original output (i.e. error.args)</em></p> <p>which is not what I want. I want to show, as the very last line (likely the first thing the user will read), a simple business message, and only whoever is interested can go into the details above it. The real use case: a workflow manager runs steps (code outside of my control), and I want to display any exception exactly as thrown by the step, only appending some text to the bottom (i.e. &quot;the above exception occurred while the workflow manager was trying to run flow X, step Y&quot;)</p>
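For what it's worth, Python 3.11+ has `BaseException.add_note()`, which renders extra text after the exception message, i.e. as the last lines of the output, while leaving the original traceback untouched. A sketch (the message text is a placeholder):

```python
import traceback

def run_step():
    try:
        l = [1, 2, 3]
        return l[4]
    except Exception as e:
        # add_note() (Python 3.11+) appends text that is rendered
        # after the "IndexError: ..." line in the traceback output
        if hasattr(e, "add_note"):
            e.add_note("My own message here (better explained what happened)")
        raise

try:
    run_step()
except Exception as e:
    rendered = "".join(traceback.format_exception(type(e), e, e.__traceback__))
    print(rendered)
```

On 3.11+ the printed traceback ends with the note; on older versions the `hasattr` guard simply leaves the traceback unchanged.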
<python>
2023-01-21 15:57:26
0
349
stam
75,194,469
2,255,491
Django namespacing still produces collision
<p>I have two apps using same names in my django project. After configuring namespacing, I still get collision. For example when I visit <code>localhost:8000/nt/</code>, I get template from the other app. (localhost:8000/se/ points to the right template).</p> <p>I must have missed something. Here is the code:</p> <p><em>dj_config/urls.py</em></p> <pre class="lang-py prettyprint-override"><code>urlpatterns = [ path(&quot;se/&quot;, include(&quot;simplevent.urls&quot;, namespace=&quot;se&quot;)), path(&quot;nt/&quot;, include(&quot;nexttrain.urls&quot;, namespace=&quot;nt&quot;)), # ... ] </code></pre> <p><em>dj_apps/simplevent/urls.py</em></p> <pre class="lang-py prettyprint-override"><code>from . import views app_name = &quot;simplevent&quot; urlpatterns = [ path(route=&quot;&quot;, view=views.Landing.as_view(), name=&quot;landing&quot;) ] </code></pre> <p><em>dj_apps/nexttrain/urls.py</em></p> <pre class="lang-py prettyprint-override"><code>from django.urls import path from . import views app_name = &quot;nexttrain&quot; urlpatterns = [ path(route=&quot;&quot;, view=views.Landing.as_view(), name=&quot;landing&quot;), ] </code></pre> <p><em>dj_config/settings.py</em></p> <pre><code>INSTALLED_APPS = [ &quot;dj_apps.simplevent.apps.SimpleventConfig&quot;, &quot;dj_apps.nexttrain.apps.NexttrainConfig&quot;, # ... ] TEMPLATES = [ { # .... &quot;DIRS&quot;: [], &quot;APP_DIRS&quot;: True, } </code></pre> <p>Both views will have the same code:</p> <pre class="lang-py prettyprint-override"><code>class Landing(TemplateView): template_name = &quot;landing.html&quot; </code></pre> <p>Templates are located in:</p> <ul> <li>dj_apps/simplevent/templates/landing.html</li> <li>dj_apps/nexttrain/templates/landing.html</li> </ul> <p>Note that reversing order of apps in INSTALLED_APPS will reverse the problem (<code>/se</code> will point to nexttrain app).</p>
<python><django><django-urls>
2023-01-21 15:32:30
1
11,222
David Dahan
75,194,431
19,130,803
Dash dcc.upload component for large file
<p>I am developing a Dash application that has a file upload feature. The files are large (minimum around 100MB), so to support that I have set <code>max_size=-1</code> (no file size limit). Below is the code:</p> <pre><code>dcc.Upload( id=&quot;upload_dataset&quot;, children=html.Div( [ &quot;Drag and Drop or &quot;, html.A( &quot;Select File&quot;, style={ &quot;font-weight&quot;: &quot;bold&quot;, }, title=&quot;Click to select file.&quot;, ), ] ), multiple=False, max_size=-1, ) </code></pre> <p>The uploaded files are saved on the server side. This <code>dcc.Upload</code> component has an attribute <code>contents</code> which holds the entire data as a base64-encoded string. While <strong>browsing</strong> I came to know that before sending the data to the server, this <code>contents</code> is also <strong>stored in web browser memory</strong>.</p> <p><strong>Problem:</strong> for small files, storing <code>contents</code> in web browser memory may be fine, but since my files are large, the browser may crash and the app may freeze.</p> <p>Is there any way to bypass this default behavior? I would like to send the file in chunks or as a stream.</p> <p>How can I achieve this in Dash, using the <code>dcc.Upload</code> component or any other way?</p>
<python><plotly-dash>
2023-01-21 15:25:57
1
962
winter
75,194,186
4,764,604
Password unregistered when saving a new user in a Django with a custom user model
<p>When I try to register a user, adapting what I learnt from <a href="https://youtu.be/Ae7nc1EGv-A?t=2029" rel="nofollow noreferrer">Building a Custom User Model with Extended Fields youtube tutorial</a>, I can't login afterwards despite providing the same password. However I can log in for any I created through the command line. Here is the <code>views.py</code> part that deal with user registration:</p> <p><code>views.py</code></p> <pre><code>from django.shortcuts import render, redirect, get_object_or_404 from django.contrib.auth.forms import UserCreationForm, AuthenticationForm from django.contrib.auth import get_user_model from django.conf import settings User = settings.AUTH_USER_MODEL from django.db import IntegrityError from django.contrib.auth import login, logout, authenticate from .forms import TodoForm from .models import Todo from django.utils import timezone from django.contrib.auth.decorators import login_required def home(request): return render(request, 'todo/home.html') def signupuser(request): if request.method == 'GET': return render(request, 'todo/signupuser.html', {'form':UserCreationForm()}) else: if request.POST['password1'] == request.POST['password2']: try: db = get_user_model() user = db.objects.create_user(request.POST['email'], request.POST['username'], request.POST['firstname'], request.POST['company'], request.POST['mobile_number'], password=request.POST['password1']) user.save() login(request, user) print(&quot;after login&quot;, request, user, request.POST['password1']) return redirect('currenttodos') def loginuser(request): if request.method == 'GET': return render(request, 'todo/loginuser.html', {'form':AuthenticationForm()}) else: user = authenticate(request, username=request.POST['username'], password=request.POST['password']) print(&quot;in login: &quot;, request.POST['username'], request.POST['password']) if user is None: return render(request, 'todo/loginuser.html', {'form':AuthenticationForm(), 'error':'Username and 
password did not match'}) else: login(request, user) return redirect('currenttodos') </code></pre> <p>The user is still registered but I can't login to my website login page with the password I provided on the sign up page. Even if the user was created.</p> <h1>Example</h1> <p>I tested with: <code>james@gmail.com</code> and the password <code>James.1234</code>. It created the user:</p> <p><a href="https://i.sstatic.net/j9FM5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j9FM5.png" alt="enter image description here" /></a></p> <p>As you can see, a password is registered.</p> <p>But I was sent back to the login page, where I tried to login again but the password didn't match:</p> <p><a href="https://i.sstatic.net/lh0Lg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lh0Lg.png" alt="enter image description here" /></a></p> <p>And not to <code>currenttodos</code>. Here are the logs:</p> <pre><code>after login &lt;WSGIRequest: POST '/signup/'&gt; james@gmail.com James.1234 [21/Jan/2023 20:04:16] &quot;POST /signup/ HTTP/1.1&quot; 302 0 [21/Jan/2023 20:04:16] &quot;GET /current/ HTTP/1.1&quot; 302 0 [21/Jan/2023 20:04:16] &quot;GET /login?next=/current/ HTTP/1.1&quot; 301 0 [21/Jan/2023 20:04:16] &quot;GET /login/?next=/current/ HTTP/1.1&quot; 200 3314 in login: james@gmail.com James.1234 [21/Jan/2023 20:04:41] &quot;POST /login/?next=/current/ HTTP/1.1&quot; 200 3468 </code></pre> <p>I can't even log in for anybody but the user I created through the command line ... 
What could be the reason?</p> <p>Here is my custom user model:</p> <p><code>models.py</code>:</p> <pre><code>from django.db import models from django.contrib.auth.models import AbstractBaseUser, PermissionsMixin, BaseUserManager class CustomAccountManager(BaseUserManager): def create_superuser(self, email, user_name, first_name, password, **other_fields): other_fields.setdefault('is_staff', True) other_fields.setdefault('is_superuser', True) other_fields.setdefault('is_active', True) if other_fields.get('is_staff') is not True: raise ValueError( 'Superuser must be assigned to is_staff=True.') if other_fields.get('is_superuser') is not True: raise ValueError( 'Superuser must be assigned to is_superuser=True.') return self.create_user(email, user_name, first_name, password, **other_fields) def create_user(self, email, user_name, first_name, company, mobile_number, password, **other_fields): if not email: raise ValueError(('You must provide an email address')) email = self.normalize_email(email) user = self.model(email=email, user_name=user_name, first_name=first_name, company=company, mobile_number=mobile_number, password=password, **other_fields) user.set_password(password) user.save() return user class Newuser(AbstractBaseUser, PermissionsMixin): email = models.EmailField(('email address'), unique=True) user_name = models.CharField(max_length=150, unique=True) first_name = models.CharField(max_length=150, blank=True) mobile_number = models.CharField(max_length=10) company = models.CharField(max_length=5) is_staff = models.BooleanField(default=False) is_active = models.BooleanField(default=False) objects = CustomAccountManager() USERNAME_FIELD = 'email' REQUIRED_FIELDS = ['user_name', 'first_name', 'mobile_number'] def __str__(self): return self.user_name </code></pre> <p><code>settings.py</code></p> <pre><code>AUTH_USER_MODEL = 'todo.NewUser' </code></pre>
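One likely culprit worth checking (an assumption, since the authentication backend isn't shown): the model declares `is_active = models.BooleanField(default=False)`, and since Django 1.10 the default `ModelBackend` refuses to authenticate inactive users. Users created through the signup view never get `is_active=True`, while `createsuperuser` sets it, which would explain why only command-line users can log in. A Django-free sketch of that mechanism, with made-up `FakeUser`/`fake_authenticate` names:

```python
# Stand-in for Django's ModelBackend.authenticate(): it calls
# user_can_authenticate(), which returns False when is_active is False.
class FakeUser:
    def __init__(self, password, is_active=False):  # default mirrors the question's model
        self.password = password
        self.is_active = is_active

def fake_authenticate(user, password):
    if user.password != password:
        return None
    return user if user.is_active else None  # inactive users never authenticate

signup_user = FakeUser("James.1234")         # signup view: is_active stays False
cli_user = FakeUser("pass", is_active=True)  # createsuperuser sets is_active=True

print(fake_authenticate(signup_user, "James.1234"))      # None -> "did not match"
print(fake_authenticate(cli_user, "pass") is cli_user)   # True
```

If this is the cause, setting `other_fields.setdefault('is_active', True)` in `create_user` (or activating users via an email-confirmation step) should fix the login.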
<python><python-3.x><django><authentication><model-view-controller>
2023-01-21 14:48:27
0
3,396
Revolucion for Monica
75,194,160
166,442
Flask route doesn't capture parts containing urlencoded slash
<p>I have defined a Flask route as follows:</p> <pre><code>@app.route('/upload/from_url/&lt;url:string&gt;') def upload_from_url(url: str) -&gt; str: return {'url': url} </code></pre> <p>I also tried the route with <code>&lt;url&gt;</code> only. In any case, the route doesn't match if the captured part contains a URL-encoded slash. For example, this fails:</p> <pre><code>https://example.com/upload/from_url/https%3A%2F%2Fgithub.com </code></pre> <p>Whereas this one works:</p> <pre><code>https://example.com/upload/from_url/github.com </code></pre> <p>I am confused as to why Flask would decode the URL before attempting to match, as that seems to defeat the purpose of the encoding.</p> <p>What's the right way to approach this?</p>
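Werkzeug decodes percent-escapes before route matching, and the default string converter never matches a slash, so `%2F` inside a path segment can't survive to the view. One robust workaround (a sketch, not the only option: `<path:url>` converters or double-encoding schemes also exist) is to pass the target URL as a query parameter, where the encoding is preserved until `request.args` decodes it:

```python
from urllib.parse import quote
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/upload/from_url')
def upload_from_url():
    # called as: /upload/from_url?url=https%3A%2F%2Fgithub.com
    return jsonify({'url': request.args.get('url', '')})

with app.test_client() as client:
    resp = client.get('/upload/from_url?url=' + quote('https://github.com', safe=''))
    result = resp.get_json()
    print(result)  # {'url': 'https://github.com'}
```

The slash arrives intact because query-string decoding happens after routing, not during it.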
<python><flask>
2023-01-21 14:43:59
1
6,244
knipknap
75,194,130
14,098,258
how to make a scrollable legend in bokeh
<p>I make <code>bokeh</code> line plots from <code>pandas DataFrames</code> with many columns. I have already managed to hide individual lines by clicking on the corresponding legend entry. However, the legend is too big and does not fit in the graph. I have tried placing the legend outside of the graph, still does not work.</p> <p>Is there a way to make the legend of a <code>bokeh</code> plot scrollable?</p> <p>I know, I could increase the size of the whole figure, but than it does not fit on my screen. And I also don't want to make the font of the legend entries smaller.</p> <p><strong>EXAMPLE:</strong></p> <pre><code>from bokeh.io import show from bokeh.layouts import gridplot from bokeh.plotting import figure from bokeh.palettes import Category20_20 import pandas as pd import numpy as np data1 = {str(key): [ np.random.randint(0, 10) for i in range(100) ] for key in range(20) } data1=pd.DataFrame(data1) data2 = {str(key): [ np.random.randint(5, 15) for i in range(100) ] for key in range(20) } data2=pd.DataFrame(data2) graph1 = figure( title='Top graph' ) graph2 = figure( title='Bottom Graph', x_range=graph1.x_range, ) for col, color in zip(data1.columns, Category20_20): graph1.line( data1.index, data1[col], legend_label=col, color=color, ) for col, color in zip(data2.columns, Category20_20): graph2.line( data2.index, data2[col], legend_label=col, color=color, ) overplot = gridplot( [[graph1, None], [graph2, None]], width=1000, height=400, ) graph1.add_layout(graph1.legend[0], 'right') graph1.legend.click_policy = 'hide' graph2.legend.click_policy = 'hide' show(overplot) </code></pre>
<python><pandas><dataframe><plot><bokeh>
2023-01-21 14:40:34
0
383
Andre
75,194,122
15,877,202
How can I recreate a class object for testing, tried debugging and printing relevant items but can't get it?
<p>I have the following class:</p> <pre><code>class MessageContext: def __init__(self, raw_packet, packet_header, message_header, message_index): self.raw_packet = raw_packet self.pkthdr = packet_header self.msghdr = message_header self.msgidx = message_index self.msg_seqno = packet_header.seqno + message_index </code></pre> <p>And a function that creates objects using the above class:</p> <pre><code>def parsers(data): ... context = MessageContext(None, PacketAdapter(), msghdr, 0) self.on_message(rawmsg, context) </code></pre> <p>I am trying to recreate <code>context</code>, and when i set a <code>breakpoint</code> just after it and print <code>context</code>, I get:</p> <pre><code>&lt;exchanges.protocols.blahblah.MessageContext object at 0x7337211520&gt; </code></pre> <p>I have left out quite a bit of code as it is very long, but if any more information is needed I am happy to provide of course.</p> <p>Here is what I get when I print the arguments of <code>MessageContext</code>:</p> <p>print(PacketAdapter()) -&gt; <code>&lt;exchanges.blahblah.PacketAdapter object at 0x7f60929e1820&gt;</code></p> <hr /> <p>Following the comments below, the <code>PacketAdapter()</code> class looks like this:</p> <pre><code>class PacketAdapter: def __init__(self): self.seqno = 0 </code></pre>
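For what it's worth, the hex output (`<... MessageContext object at 0x...>`) is just Python's default `repr`, not a sign that recreation failed. Recreating the object for a test only needs the two small classes from the question plus a stand-in message header; a sketch (the `msghdr` dict is a made-up placeholder, since the real header type isn't shown, and `__repr__` is added only to make the printout readable):

```python
class PacketAdapter:
    def __init__(self):
        self.seqno = 0

class MessageContext:
    def __init__(self, raw_packet, packet_header, message_header, message_index):
        self.raw_packet = raw_packet
        self.pkthdr = packet_header
        self.msghdr = message_header
        self.msgidx = message_index
        self.msg_seqno = packet_header.seqno + message_index

    def __repr__(self):
        # readable output instead of the default "<... object at 0x...>"
        return (f"MessageContext(raw_packet={self.raw_packet!r}, "
                f"msghdr={self.msghdr!r}, msgidx={self.msgidx}, "
                f"msg_seqno={self.msg_seqno})")

context = MessageContext(None, PacketAdapter(), {"msg_type": "test"}, 0)
print(context)
```

With `PacketAdapter().seqno == 0` and `message_index = 0`, the recreated object's `msg_seqno` is 0, matching what `parsers()` would build.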
<python><class>
2023-01-21 14:39:07
0
566
Patrick_Chong
75,194,035
8,761,554
Logistic Regression in seaborn does not show the line
<p>I have a simple dataframe with a continuous and a categorical column:</p> <pre><code> Unemployment Rate AcceptsCash E020 0.058080 0 E021 0.049818 0 E022 0.037112 1 E023 0.051215 0 E024 0.051215 0 E025 0.065413 0 E026 0.071571 0 E027 0.060130 0 E029 0.035013 1 </code></pre> <p>However, when I try to display a logistic regression plot using this seaborn code:</p> <pre><code>sns.lmplot(x=&quot;Unemployment Rate&quot;, y=&quot;AcceptsCash&quot;, data=final_data_copy, logistic=True, fit_reg = True) </code></pre> <p>I get the warning below, and furthermore the plot only contains scatter points but not the regression line. Any tips on what could be wrong?</p> <pre><code>RuntimeWarning: All-NaN slice encountered def _nanquantile_ureduce_func(a, q, axis=None, out=None, overwrite_input=False, </code></pre> <p><a href="https://i.sstatic.net/iiDrw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iiDrw.png" alt="enter image description here" /></a></p>
<python><numpy><seaborn><lmplot>
2023-01-21 14:26:34
0
341
Sam333
75,194,030
7,267,480
loading number of CSVs to one dataset for neural network training for signal transformation y1(x) -> y2(x)
<p>I am trying to build a neural network for signal transformation, <code>y1(x) -&gt; y2(x)</code>, using TensorFlow and Keras as the main instruments.</p> <p><strong>I want to prepare a training dataset from files which are stored separately, and my question for now is how to do it in an efficient way.</strong></p> <p><strong>DATA</strong> I have a lot of data - more than one million files; each CSV file represents a case and contains the following columns:</p> <blockquote> <p>y1 | x_val | Z1 | Z2 | ... | y2</p> </blockquote> <p>where Z1 | Z2 are additional columns that can be used as features for training the NN in the future.</p> <p><strong>So the main idea is to train the NN using <code>y1</code> and <code>x_val</code> as inputs to predict <code>y2</code> as the output.</strong></p> <p>I use the following <em>code to build a dataset from separate files</em>:</p> <pre><code># Define the path to the input and output data input_path = 'data/marked/' # Get the list of input files input_files = os.listdir(input_path) # lists for dataset handling input_data = [] output_data = [] for i, file in enumerate(input_files): # Read the input data and output from the CSV file df = pd.read_csv(input_path + file) # Select the needed columns for the input data # Reshape the input data to fit the neural network input input_data_from_csv = df[['y1','x_val']].values.T input_data.append(input_data_from_csv) # Append the output data to the list output_data.append(df['y2'].values) # Convert the lists to numpy arrays X = np.array(input_data) y = np.array(output_data) print(type(X)) print(type(y)) print(X.shape) print(y.shape) print(&quot;Memory size of numpy array X in bytes:&quot;, X.size * X.itemsize) print(&quot;Memory size of numpy array y in bytes:&quot;, y.size * y.itemsize) # splitting the dataset X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42) # print(X_train) </code></pre> <p>It works fine if I work with a small amount of data - e.g.
1000 files is fine.</p> <p>I understand that if I have a large number of files in my data folder, the data will not fit in the notebook's memory. How do I handle that?</p> <p>Does TensorFlow have special dataset objects that can be loaded into memory in slices and are optimized for loading, saving, and working with large datasets?</p> <p>E.g., how can I do the same thing I am doing with simple lists, but using TensorFlow functionality?</p> <p>I can see the official documentation (e.g. <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/data/Dataset</a>) but it seems unclear to me.</p> <p>Thanks in advance.</p>
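TensorFlow's `tf.data` API is built for exactly this: `tf.data.Dataset.from_generator` (and the CSV helpers under `tf.data`) stream files lazily instead of materializing everything as NumPy arrays first. As a framework-free sketch of the idea, a generator that loads one CSV case at a time, which could then be wrapped with `from_generator` (column names taken from the question; everything else is an assumption):

```python
import csv
import os

def iter_cases(folder):
    """Yield ((y1, x_val), y2) one file at a time, so memory use
    stays bounded by the largest single CSV, not the whole folder."""
    for name in sorted(os.listdir(folder)):
        if not name.endswith(".csv"):
            continue
        with open(os.path.join(folder, name), newline="") as f:
            rows = list(csv.DictReader(f))
        y1 = [float(r["y1"]) for r in rows]
        x_val = [float(r["x_val"]) for r in rows]
        y2 = [float(r["y2"]) for r in rows]
        yield (y1, x_val), y2

# With TensorFlow installed, this generator would feed something like:
# tf.data.Dataset.from_generator(lambda: iter_cases("data/marked/"), ...)
# followed by .batch(...).prefetch(tf.data.AUTOTUNE)
```

The `tf.data` wrapper then handles shuffling, batching, and prefetching without ever holding the full million-file dataset in memory.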
<python><tensorflow><neural-network>
2023-01-21 14:25:57
0
496
twistfire
75,193,965
17,124,619
RandomForestClassifier throwing estimator error
<p>I am attempting to build a stacking classifier using multiple combinations of available models, however, when I have a RandomForestClassifier the loop throws an error. Here is what I have attempted:</p> <blockquote> <p>'RandomForestClassifier' object has no attribute 'estimators_'. Did you mean: 'estimator_'?</p> </blockquote> <pre><code>import numpy as np from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier, StackingClassifier from sklearn.metrics import accuracy_score from sklearn.datasets import load_breast_cancer from sklearn.model_selection import train_test_split data = load_breast_cancer() X, y = data.data, data.target X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) RF = RandomForestClassifier(n_estimators=500, random_state=1250, criterion='entropy', max_depth=2, min_impurity_decrease=0.5) RF1 = RandomForestClassifier(n_estimators=500, random_state=1250, criterion='entropy', max_depth=2, min_impurity_decrease=0.2, oob_score=True) ABC = AdaBoostClassifier(random_state=1250) GBC = GradientBoostingClassifier(random_state=1250) stackModels = [RF, RF1, GBC, ABC] from itertools import combinations classifier_combinations = [ list(np.array(stackModels)[list(x)]) for x in list(combinations(range(len(stackModels)), 2))] Stackresults = {'estimators': [],'final_estimaor': [], 'accuracy': []} for list_class in classifier_combinations: for classify in stackModels: CLASS = StackingClassifier(estimators = list_class, final_estimator=classify) CLASS.fit(X_train, y_train) ypred = CLASS.predict(X_test) accuracy = accuracy_score(y_test, ypred) Stackresults['accuracy'].append(accuracy) Stackresults['estimators'].append(list_class) Stackresults['final_estimator'].append(classify) </code></pre> <p>FULL TRACEBACK:</p> <pre><code> /var/folders/dr/9wh_z8y10fl79chj86pq7knc0000gn/T/ipykernel_7755/3533362225.py in &lt;module&gt; 24 for classify in stackModels: 25 CLASS = StackingClassifier(estimators = 
list_class, final_estimator=classify) ---&gt; 26 CLASS.fit(X_train, y_train) 27 ypred = CLASS.predict(X_test) 28 accuracy = accuracy_score(y_test, ypred) ~/opt/anaconda3/lib/python3.9/site-packages/sklearn/ensemble/_stacking.py in fit(self, X, y, sample_weight) 486 self._le = LabelEncoder().fit(y) 487 self.classes_ = self._le.classes_ --&gt; 488 return super().fit(X, self._le.transform(y), sample_weight) 489 490 @if_delegate_has_method(delegate=&quot;final_estimator_&quot;) ~/opt/anaconda3/lib/python3.9/site-packages/sklearn/ensemble/_stacking.py in fit(self, X, y, sample_weight) 148 # all_estimators contains all estimators, the one to be fitted and the 149 # 'drop' string. --&gt; 150 names, all_estimators = self._validate_estimators() 151 self._validate_final_estimator() 152 ~/opt/anaconda3/lib/python3.9/site-packages/sklearn/ensemble/_base.py in _validate_estimators(self) 245 &quot; of (string, estimator) tuples.&quot; ... --&gt; 188 return iter(self.estimators_) 189 190 AttributeError: 'RandomForestClassifier' object has no attribute 'estimators_' </code></pre>
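The traceback points at `_validate_estimators`: `StackingClassifier` expects `estimators` to be a list of `(name, estimator)` tuples, and when handed bare estimator objects it ends up iterating an unfitted `RandomForestClassifier`, hence the missing `estimators_` attribute. A reduced sketch of the fix (the names `"rf"`/`"gbc"` and the smaller `n_estimators` are arbitrary choices for brevity):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# (name, estimator) tuples -- the part the original loop was missing
named_models = [
    ("rf", RandomForestClassifier(n_estimators=50, random_state=1250)),
    ("gbc", GradientBoostingClassifier(random_state=1250)),
]

clf = StackingClassifier(estimators=named_models,
                         final_estimator=LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
print(score)
```

The same `combinations(...)` loop from the question works unchanged if it builds lists of these `(name, estimator)` tuples instead of lists of bare models.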
<python><machine-learning><scikit-learn>
2023-01-21 14:13:35
0
309
Emil11
75,193,847
2,929,914
Can I use python's mouse library with Spyder 5.2.2 @ Anaconda 2.3.1
<p>I'm trying to learn how to control the mouse with Python, learning from:</p> <p><a href="https://www.thepythoncode.com/article/control-mouse-python" rel="nofollow noreferrer">How to Control your Mouse in Python</a></p> <p>My IDE is Spyder (version 5.2.2) and I'm running it from Anaconda (version 2.3.1).</p> <p>When I try to execute:</p> <blockquote> <p>conda install mouse</p> </blockquote> <p>I get the following error:</p> <blockquote> <p>The following packages are not available from current channels: - mouse</p> </blockquote> <p>(Full error description below).</p> <p>I'm new to Anaconda/Spyder/Python, so I'm sorry if this is a newbie question, but can I somehow use the mouse library in my environment? If yes, what's the catch? If no, what's the alternative?</p> <p>Thank you.</p> <p>Full error description after running &quot;conda install mouse&quot;:</p> <pre><code>Collecting package metadata (current_repodata.json): ...working... done Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve. Collecting package metadata (repodata.json): ...working... done Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve. Note: you may need to restart the kernel to use updated packages. PackagesNotFoundError: The following packages are not available from current channels: mouse Current channels: https://repo.anaconda.com/pkgs/main/win-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/r/win-64 https://repo.anaconda.com/pkgs/r/noarch https://repo.anaconda.com/pkgs/msys2/win-64 https://repo.anaconda.com/pkgs/msys2/noarch To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org and use the search bar at the top of the page. </code></pre>
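For what it's worth, the `mouse` package is published on PyPI but not in the default Anaconda channels, which is what the `PackagesNotFoundError` is saying. A common workaround (a sketch; worth double-checking against the package's own docs) is to install it with `pip` from the Anaconda Prompt, inside the environment Spyder uses:

```shell
# run in the Anaconda Prompt, with the target environment activated
pip install mouse
```

After restarting the Spyder kernel, `import mouse` should then resolve.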
<python><anaconda><spyder>
2023-01-21 13:54:24
1
705
Danilo Setton
75,193,665
2,179,795
Correcting pandas index "duplicate" behavior
<p>I am trying to create a new dataframe using the structure of an existing index (the source of this is an Excel file, so the empty spaces are important) and some new data. Typically I would create the index and assign it data values by constructing a new pd.DataFrame() object.</p> <p>However, I am encountering odd behavior when creating the indices as None (again, needed as placeholders). Is there a workaround for this? Or am I approaching this incorrectly? Thanks</p> <pre><code>import pandas as pd import numpy as np #index of final_df - note the None values in index are needed #because the final output will be written to a excel file with those #rows blank test_index = ['row1', 'row2', None, None, 'rowN'] #some dummy source data - think of this as new sales data coming in weekly test_update_df = pd.DataFrame(index=['row1', np.NaN, 'row2', np.NaN, 'rowN'], data=[1,np.NaN, 64,np.NaN, 643.78]) test_update_df.columns = ['1/6/23'] #create the final df #BUT here is where the problems lie final_df = pd.DataFrame(index=test_index, data=test_update_df) &quot;&quot;&quot; ValueError: cannot reindex from a duplicate axis I believe we are getting this due to the None values in the index? &quot;&quot;&quot; </code></pre>
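The `ValueError` does come from the repeated `NaN` labels: `pd.DataFrame(data=...)` aligns by index, and alignment is undefined when the *source* index contains duplicates. One workaround (a sketch) is to drop the `NaN` rows from the source first and then `reindex` onto the Excel-shaped index, since a reindex *target* may contain repeated placeholders even though the source must be unique:

```python
import numpy as np
import pandas as pd

test_index = ['row1', 'row2', None, None, 'rowN']

test_update_df = pd.DataFrame(
    index=['row1', np.nan, 'row2', np.nan, 'rowN'],
    data=[1, np.nan, 64, np.nan, 643.78],
    columns=['1/6/23'],
)

# keep only the labelled rows, then lay them out on the target index;
# the None placeholders become all-NaN rows (blank lines in the Excel output)
final_df = test_update_df[test_update_df.index.notna()].reindex(test_index)
print(final_df)
```

The labelled values land on their matching rows regardless of the order they appeared in the weekly source frame, and the placeholder rows stay empty.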
<python><pandas>
2023-01-21 13:23:03
2
1,247
Merv Merzoug
75,193,628
9,493,965
Cannot assign "<django.contrib.auth.models.AnonymousUser object at 0x7f81fe558fa0>": "Post.author" must be a "UserData" instance
<p>I'm trying to write simple tests for some endpoints but the second one keeps failing. Here's the test.py</p> <pre><code>from rest_framework.test import APITestCase, APIRequestFactory, APIClient from rest_framework import status from django.urls import reverse from .views import PostViewSet from django.contrib.auth import get_user_model User = get_user_model() class PostListCreateTestCase(APITestCase): def setUp(self): self.factory = APIRequestFactory() self.view = PostViewSet.as_view({&quot;get&quot;: &quot;list&quot;, &quot;post&quot;: &quot;create&quot;}) self.url = reverse(&quot;post_list&quot;) self.user = User.objects.create_user( email=&quot;testuser@gmail.com&quot;, name=&quot;testuser&quot; ) self.user.set_password(&quot;pass&quot;) self.user.save() def test_list_posts(self): request = self.factory.get(self.url) response = self.view(request) self.assertEqual(response.status_code, status.HTTP_200_OK) def test_create_post(self): print(User) print(self.user) client = APIClient() login = client.login(email=&quot;testuser@gmail.com&quot;, password=&quot;pass&quot;) self.assertTrue(login) sample_post = { &quot;title&quot;: &quot;sample title&quot;, &quot;body&quot;: &quot;sample body&quot;, } request = self.factory.post(self.url, sample_post) request.user = self.user print(isinstance(request.user, get_user_model())) response = self.view(request) self.assertEqual(response.status_code, status.HTTP_201_CREATED) </code></pre> <p>And here's the view:</p> <pre><code>class PostViewSet(viewsets.ModelViewSet): serializer_class = PostSerializer queryset = Post.objects.all() def get_queryset(self): posts = Post.objects.all() return posts def get_object(self): post = get_object_or_404(self.get_queryset(), pk=self.kwargs[&quot;pk&quot;]) self.check_object_permissions(self.request, post) return post def create(self, request, *args, **kwargs): try: post = Post.objects.create( title=request.data.get(&quot;title&quot;), body=request.data.get(&quot;body&quot;), 
author=request.user, ) post = PostSerializer(post) return Response(post.data, status=status.HTTP_201_CREATED) except Exception as ex: print(str(ex)) return Response(str(ex), status=status.HTTP_400_BAD_REQUEST) def list(self, request, *args, **kwargs): posts = self.get_queryset() serializer = self.get_serializer(posts, many=True) return Response( data=dict(posts=serializer.data, total=len(serializer.data)), status=status.HTTP_200_OK, ) </code></pre> <p>And here's what I get:</p> <blockquote> <p>Found 2 test(s). Creating test database for alias 'default'... System check identified no issues (0 silenced). &lt;class 'account.models.UserData'&gt; testuser True Cannot assign &quot;&lt;django.contrib.auth.models.AnonymousUser object at 0x7f81fe558fa0&gt;&quot;: &quot;Post.author&quot; must be a &quot;UserData&quot; instance. F. ====================================================================== FAIL: test_create_post (posts.tests.PostListCreateTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File &quot;/home/amr/Snakat/social_network/posts/tests.py&quot;, line 41, in test_create_post self.assertEqual(response.status_code, status.HTTP_201_CREATED) AssertionError: 400 != 201</p> <p>---------------------------------------------------------------------- Ran 2 tests in 0.887s</p> <p>FAILED (failures=1) Destroying test database for alias 'default'...</p> </blockquote>
<python><django><django-rest-framework>
2023-01-21 13:17:44
0
425
Raskolnikov
75,193,298
6,463,651
Jupyter Notebook kernel dies when i import model from Huggingface
<p>Here is the code where I am loading a Hugging Face pre-trained model, but my kernel dies. The model size on the description page is only 458 MB. Why is it failing?</p> <pre><code>from transformers import TextClassificationPipeline, AutoModelForSequenceClassification, AutoTokenizer model_name = &quot;ElKulako/cryptobert&quot; tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True) model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels = 3) pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, max_length=64, truncation=True, padding = 'max_length') # post_1 &amp; post_3 = bullish, post_2 = bearish post_1 = &quot; see y'all tomorrow and can't wait to see stock in the morning, i wonder what price it is going to be at. 😎🐂🤠💯😴,It is looking good go for it and flash by that 45k. &quot; df_posts = [post_1, post_2, post_3] preds = pipe(df_posts) print(preds) </code></pre> <p><a href="https://i.sstatic.net/QgFwx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QgFwx.png" alt="The model description" /></a></p>
<python><jupyter-notebook><nlp><huggingface-transformers>
2023-01-21 12:27:00
0
603
Shubh
75,193,297
1,512,250
Create the same ids for the same names in different dataframes in pandas
<p>I have a dataset with unique names. Another dataset contains several rows with the same names as in the first dataset.</p> <p>I want to create a column with unique ids in the first dataset, and another column in the second dataset with the same ids corresponding to the matching names in the first dataset.</p> <p>For example:</p> <p>Dataframe 1:</p> <pre><code> player_id Name 1 John Dosh 2 Michael Deesh 3 Julia Roberts </code></pre> <p>Dataframe 2:</p> <pre><code>player_id Name 1 John Dosh 1 John Dosh 2 Michael Deesh 2 Michael Deesh 2 Michael Deesh 3 Julia Roberts 3 Julia Roberts </code></pre> <p>I want to use both data frames to run deep feature synthesis using featuretools, to be able to do something like this:</p> <pre><code>entity_set = ft.EntitySet(&quot;basketball_players&quot;) entity_set.add_dataframe(dataframe_name=&quot;players_set&quot;, dataframe=players_set, index='name' ) entity_set.add_dataframe(dataframe_name=&quot;season_stats&quot;, dataframe=season_stats, index='season_stats_id' ) entity_set.add_relationship(&quot;players_set&quot;, &quot;player_id&quot;, &quot;season_stats&quot;, &quot;player_id&quot;) </code></pre>
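A sketch of one way to build the id columns with pandas before handing both frames to featuretools (frame and column names follow the question; the 1..N id scheme is an assumption): assign sequential ids in the unique-names frame, then `map` the resulting lookup onto the second frame so every occurrence of a name reuses the same id.

```python
import pandas as pd

players_set = pd.DataFrame({"Name": ["John Dosh", "Michael Deesh", "Julia Roberts"]})
season_stats = pd.DataFrame({"Name": ["John Dosh", "John Dosh",
                                      "Michael Deesh", "Michael Deesh", "Michael Deesh",
                                      "Julia Roberts", "Julia Roberts"]})

# ids 1..N in order of appearance in the unique-names frame
players_set["player_id"] = range(1, len(players_set) + 1)

# reuse the same id for every occurrence of the name
lookup = players_set.set_index("Name")["player_id"]
season_stats["player_id"] = season_stats["Name"].map(lookup)
print(season_stats)
```

Names in `season_stats` that are missing from `players_set` would map to `NaN`, which is a useful sanity check before `add_relationship`.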
<python><pandas>
2023-01-21 12:26:35
1
3,149
Rikki Tikki Tavi
75,192,900
2,651,073
How to get max of counts for groupby (most frequent items)
<p>I have a dataframe. I want to group the rows by some columns, form a count column, and then get the max of the counts for each group and attach it to the dataframe as a new column.</p> <p>I tried:</p> <pre><code> df[&quot;max_pred&quot;] = df.groupby(['fid','prefix','pred_text1'], sort=False)[&quot;pred_text1&quot;].transform(&quot;max&quot;) </code></pre> <p>However, it lists the value of <code>pred_text1</code> with the most repeats, but I want the number of repetitions itself.</p> <p>For example:</p> <pre><code>A B C a d b a d b a d b a d a a d a b b c b b c b b d </code></pre> <p>If I group the rows by A and B, then count C, get the max count for each group, and store that in a new column F, I expect:</p> <pre><code>A B F E a d 3 b a d 3 b a d 3 b a d 3 b a d 3 b b b 2 c b b 2 c b b 2 c </code></pre> <p>E shows the most frequent item, whose frequency is given in F.</p>
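One way to get both columns, shown as a sketch on the example data: `value_counts()` within each group is sorted by count descending, so its first entry yields the count (F) and the item itself (E):

```python
import pandas as pd

df = pd.DataFrame({
    "A": ["a"] * 5 + ["b"] * 3,
    "B": ["d"] * 5 + ["b"] * 3,
    "C": ["b", "b", "b", "a", "a", "c", "c", "d"],
})

grouped = df.groupby(["A", "B"])["C"]
df["F"] = grouped.transform(lambda s: s.value_counts().iat[0])    # max count per group
df["E"] = grouped.transform(lambda s: s.value_counts().index[0])  # most frequent item
print(df)
```

On the real data this presumably means grouping by `['fid', 'prefix']` (not including `pred_text1` in the keys) and transforming `'pred_text1'` the same way.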
<python><pandas><dataframe><group-by><max>
2023-01-21 11:16:43
2
9,816
Ahmad
75,192,895
6,478,085
Replacing sub-string occurrences with elements of a given list
<p>Suppose I have a string that has the same sub-string repeated multiple times and I want to replace each occurrence with a different element from a list.</p> <p>For example, consider this scenario:</p> <pre class="lang-py prettyprint-override"><code>pattern = &quot;_____&quot; # repeated pattern s = &quot;a(_____), b(_____), c(_____)&quot; r = [0,1,2] # elements to insert </code></pre> <p>The goal is to obtain a string of the form:</p> <pre class="lang-py prettyprint-override"><code>s = &quot;a(_000_), b(_001_), c(_002_)&quot; </code></pre> <p>The number of occurrences is known, and the list <code>r</code> has the same length as the number of occurrences (3 in the previous example) and contains increasing integers starting from 0.</p> <p>I've come up with this solution:</p> <pre class="lang-py prettyprint-override"><code>import re pattern = &quot;_____&quot; s = &quot;a(_____), b(_____), c(_____)&quot; l = [m.start() for m in re.finditer(pattern, s)] i = 0 for el in l: s = s[:el] + f&quot;_{str(i).zfill(5 - 2)}_&quot; + s[el + 5:] i += 1 print(s) </code></pre> <p>Output: <code>a(_000_), b(_001_), c(_002_)</code></p> <p>This solves my problem, but it seems a bit cumbersome to me, especially the <code>for</code>-loop. Is there a better way, maybe more &quot;pythonic&quot; (intended as concise, possibly elegant, whatever that means), to solve the task?</p>
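One more concise approach (a sketch, not part of the original post): `re.sub` accepts a callable as the replacement, and that callable is invoked once per match, so each occurrence can consume the next value from a counter without any manual slicing.

```python
import re
from itertools import count

pattern = "_____"
s = "a(_____), b(_____), c(_____)"

counter = count()  # yields 0, 1, 2, ...
# The lambda runs once per match of the pattern, so every occurrence
# is replaced with the next zero-padded number from the counter.
result = re.sub(re.escape(pattern), lambda m: f"_{next(counter):03d}_", s)
print(result)  # a(_000_), b(_001_), c(_002_)
```

`re.escape` guards against metacharacters in the pattern; with an arbitrary list `r` instead of a counter, `iter(r)` and `next(it)` in the lambda work the same way.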
<python><regex><python-re>
2023-01-21 11:15:11
3
2,662
damianodamiano
75,192,471
20,920,790
Can't run Python visual element in Power BI
<p>I'm trying to create a hist plot in Power BI.<br /> I have Anaconda and MS Visual Studio Code installed.<br /> Screenshots with my settings:<br /> <img src="https://i.sstatic.net/dziR6.png" alt="Power BI setting" /> <img src="https://i.sstatic.net/KqHDI.png" alt="Python home folder" /><br /> I'm trying to make a hist plot from a simple table with 1 column.</p> <p>The following code to create a dataframe and remove duplicated rows is always executed and acts as a preamble for your script:</p> <pre class="lang-py prettyprint-override"><code>dataset = pandas.DataFrame(reg_min_ses_dt) dataset = dataset.drop_duplicates() import pandas as pd import seaborn as sns import matplotlib.pyplot as plt sns.histplot(data=dataset['reg_min_ses_dt']) plt.show() </code></pre> <p>But I get this error:<br /> <img src="https://i.sstatic.net/Bhp47.png" alt="Error text 1" /> <img src="https://i.sstatic.net/gUpnm.png" alt="Error text 2" /></p> <p>I think I just haven't set up some Python extension or something else.</p> <p>I just want to make a Python visual like this.<br /> <img src="https://i.sstatic.net/QmlNB.png" alt="Expected result" /></p>
<python><powerbi>
2023-01-21 09:55:56
1
402
John Doe