Columns (min – max):
QuestionId (int64): 74.8M – 79.8M
UserId (int64): 56 – 29.4M
QuestionTitle (string): 15 – 150 chars
QuestionBody (string): 40 – 40.3k chars
Tags (string): 8 – 101 chars
CreationDate (date): 2022-12-10 09:42:47 – 2025-11-01 19:08:18
AnswerCount (int64): 0 – 44
UserExpertiseLevel (int64): 301 – 888k
UserDisplayName (string, nullable): 3 – 30 chars
75,366,511
5,722,359
Why does a function to customise ttk.Style() not work?
<p>I can't figure out the cause of why function <code>customise_ttk_widgets_style(ss)</code> is causing the <code>ttk.Button</code> and <code>ttk.Labels</code> to not appear correctly. Commenting out line <code># s = customise_ttk_style(s)</code> avoids the wrong widget appearances but I don't understand why this is so. Can you explain the cause and remedy of the issue? Thanks.</p> <pre><code>import tkinter as tk import tkinter.ttk as ttk BG = '#3b3b39' FG = 'white' DFONT = ('URW Gothic L', '10', 'Normal') def customise_ttk_widgets_style(ss): # All Widgets ss.configure(&quot;.&quot;, font=DFONT, background=BG, foreground=FG, cap=tk.ROUND, join=tk.ROUND) # main ttk.Frame ss.configure('Main.TFrame', background='pink') # For debugging # Default ttk.Button ss.configure('TButton', padding=5, relief=tk.FLAT) ss.map('TButton', foreground=[(&quot;disabled&quot;, 'grey'), ('pressed', 'red'), ('active', 'yellow')], background=[('disabled', '#646a67'), ('pressed', '!focus', BG), ('active', '#535553')], relief=[('pressed', 'sunken'), ('!pressed', 'raised')], ) return ss if __name__ == &quot;__main__&quot;: root = tk.Tk() root['background'] = BG s = ttk.Style() s = customise_ttk_widgets_style(s) # This function is causing problem button = ttk.Button(root, text=&quot;ttk.Button (!disabled)&quot;) dbutton = ttk.Button(root, text=&quot;ttk.Button (disabled)&quot;) dbutton.state(['!disabled', 'disabled']) label = ttk.Label(root, text=&quot;ttk.Label (!disabled)&quot;) dlabel = ttk.Label(root, text=&quot;ttk.Label (disabled)&quot;) dlabel.state(['!disabled', 'disabled']) button.grid(row=0, column=0) dbutton.grid(row=0, column=1) label.grid(row=1, column=0) dlabel.grid(row=1, column=1) root.mainloop() </code></pre> <p>Correct appearance:</p> <p><a href="https://i.sstatic.net/bjAeD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bjAeD.png" alt="correct" /></a></p> <p>Wrong appearance: <a href="https://i.sstatic.net/AHVCv.png" rel="nofollow noreferrer"><img 
src="https://i.sstatic.net/AHVCv.png" alt="Wrong1" /></a> <a href="https://i.sstatic.net/z0qX0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z0qX0.png" alt="wrong2" /></a></p>
<python><tkinter><ttk>
2023-02-06 20:56:55
1
8,499
Sun Bear
75,366,396
5,613,367
Permission denied: calling a shell script from Python in a Jenkins job
<p>Trying to provide the minimal amount of information necessary here, so I've left a lot out. Lots of similar questions around, but the most common answer (use <code>chmod +x</code>) isn't working for me.</p> <p>I have a Python script and a shell script that sit next to each other in a GitHub Enterprise repository:</p> <p><a href="https://i.sstatic.net/9A7It.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9A7It.png" alt="enter image description here" /></a></p> <p>Next, in Jenkins I check the code in this repository out. The two key steps in my Jenkinsfile are like so:</p> <pre><code>dir (&quot;$WORK/python&quot;) { sh &quot;chmod +x test.sh&quot; sh &quot;python3 foo.py -t '${AUTH}'&quot; } </code></pre> <p>Here, <code>$WORK</code> is the location on the Jenkins node that the code checks out to, and <code>python</code> (yes, poorly named) is the folder in the repository that the Python and shell script live in. Now, <code>foo.py</code> calls the shell script in this way:</p> <pre><code>try: cmd = f'test.sh {repo_name}' subprocess.Popen(cmd.split()) except Exception as e: print(f'Error during repository scan: {e}') </code></pre> <p>Here, <code>repo_name</code> is just an argument that I define above this snippet, that I'm asking the shell script to do something with. When I run the job in Jenkins, it technically executes without error, but the exception branch above does run:</p> <pre><code>11:37:24 Error during repository scan - [Errno 13] Permission denied: 'test.sh' </code></pre> <p>I wanted to be sure that the <code>chmod</code> in the Jenkinsfile was running, so I opened a terminal to the machine that the code checked out to and found that the execute permissions were indeed correctly set:</p> <pre><code>-rw-r--r-- 1 adm domain users 4106 Feb 6 14:24 foo.py -rwxr-xr-x 1 adm domain users 619 Feb 6 14:37 test.sh </code></pre> <p>I've gone around on this most of the day. What the heck is going on?</p>
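One thing worth isolating from the setup above: `subprocess.Popen('test.sh …'.split())` passes a bare name with no slash, so resolution does not necessarily hit the sibling file the question intends. A minimal sketch, assuming a hypothetical helper name, that removes both variables at once: build an absolute path to the script and re-assert the execute bit in-process rather than relying on an earlier `chmod` step.

```python
import os
import stat
import subprocess

def run_sibling_script(script_path, *args):
    """Hypothetical helper: run a shell script by absolute path.

    Using an absolute path avoids PATH lookup surprises, and setting the
    user-execute bit here means we do not depend on a chmod having run in
    a separate pipeline step.
    """
    script_path = os.path.abspath(script_path)
    mode = os.stat(script_path).st_mode
    os.chmod(script_path, mode | stat.S_IXUSR)  # ensure it is executable
    # Pass the absolute path explicitly; a name without a slash would be
    # resolved against PATH, not the current directory.
    return subprocess.run([script_path, *args], capture_output=True, text=True)
```

This is a sketch of the general approach, not a claim about the root cause in the Jenkins environment described above.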
<python><bash><github><jenkins>
2023-02-06 20:44:55
1
898
Pat Jones
75,366,372
5,818,059
Why is `summary_col` ignoring the `info_dict` parameter?
<p>I need to run some linear regressions and output <code>Latex</code> code with <code>statsmodels</code> in <code>Python</code>. I am using the <code>summary_col</code> function to achieve that. However, there is either a bug or a misunderstanding from my side. Please see the following code:</p> <pre><code>import numpy as np import statsmodels.api as sm from statsmodels.iolib.summary2 import summary_col np.random.seed(123) nsample = 100 x = np.linspace(0, 10, 100) X = np.column_stack((x, x ** 2)) beta = np.array([1, 0.1, 10]) e = np.random.normal(size=nsample) X = sm.add_constant(X) y1 = np.dot(X, beta) + e y2 = np.dot(X, beta) + 2 * e model1 = sm.OLS(y1, X).fit() model2 = sm.OLS(y2, X).fit() </code></pre> <p>Now, to have a table with the two models side by side:</p> <pre><code>out_table = summary_col( [model1, model2], stars=True, float_format='%.2f', info_dict={ 'N':lambda x: &quot;{0:d}&quot;.format(int(x.nobs)), 'R2':lambda x: &quot;{:.2f}&quot;.format(x.rsquared) } ) </code></pre> <p>Hence I'd expect a table providing the number of observations and the $R^2$ <em>only</em> since I am explicit about the <code>info_dict</code> argument. The result I get however is the following:</p> <pre><code>============================== y I y II ------------------------------ const 0.81** 0.63 (0.34) (0.67) x1 0.22 0.35 (0.16) (0.31) x2 9.99*** 9.98*** (0.02) (0.03) R-squared 1.00 1.00 R-squared Adj. 1.00 1.00 N 100 100 R2 1.00 1.00 ============================== Standard errors in parentheses. * p&lt;.1, ** p&lt;.05, ***p&lt;.01 </code></pre> <p>Please notice how there are two extra rows with the normal r-squared and the adjusted one. My desired behavior would be:</p> <pre><code>============================== y I y II ------------------------------ const 0.81** 0.63 (0.34) (0.67) x1 0.22 0.35 (0.16) (0.31) x2 9.99*** 9.98*** (0.02) (0.03) N 100 100 R2 1.00 1.00 ============================== Standard errors in parentheses. 
* p&lt;.1, ** p&lt;.05, ***p&lt;.01 </code></pre> <p>The documentation is not the best: <a href="https://tedboy.github.io/statsmodels_doc/generated/statsmodels.iolib.summary2.summary_col.html" rel="nofollow noreferrer">https://tedboy.github.io/statsmodels_doc/generated/statsmodels.iolib.summary2.summary_col.html</a></p> <p>Any ideas on how to display only the information requested by the <code>info_dict</code> argument?</p>
<python><python-3.x><latex><linear-regression><statsmodels>
2023-02-06 20:42:11
1
815
Raul Guarini Riva
75,366,367
4,542,117
numpy polyfit alternative for speed
<p>I am using numpy's polyfit numerous times to run calculations and get the slope between two datasets. However, the speed at which it performs these calculations is not fast enough for what is desired.</p> <p>Two things to note about the calculations:</p> <ol> <li><p>The value of x in the call numpy.polyfit(x,y,n) will always be the same, and</p> </li> <li><p>The value of n is 1, so it is a linear regression.</p> </li> </ol> <p>I know there are many different alternatives, including numpy.polynomial.polynomial.polyfit(x,y,n), but they seem to provide the same slow performance. I have had little luck getting np.linalg to work properly. Therefore, I am wondering what might be an alternative to speed up the calculations?</p>
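Since x is fixed and the degree is 1, the least-squares slope reduces to dot products against a centered copy of x that can be precomputed once. A sketch (hypothetical function name) of that factoring:

```python
import numpy as np

def make_slope_fn(x):
    """Precompute the x-dependent parts of a degree-1 least-squares fit.

    slope = sum((x - mean(x)) * y) / sum((x - mean(x))**2)
    The y-mean term drops out because the centered x sums to zero, so each
    new y costs a single dot product.
    """
    dx = np.asarray(x, dtype=float)
    dx = dx - dx.mean()
    denom = dx @ dx
    def slope(y):
        return (dx @ np.asarray(y, dtype=float)) / denom
    return slope
```

The result should agree with `np.polyfit(x, y, 1)[0]` to floating-point precision, while skipping the Vandermonde setup and `lstsq` call that `polyfit` repeats on every invocation.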
<python><numpy>
2023-02-06 20:41:43
1
374
Miss_Orchid
75,366,330
1,990,200
Match statement case int 1 matches with boolean value True
<pre><code>a = True match a: case False: print(&quot;It is false&quot;) case 2: print('It is 2 true.') case 1: print('It is 1 true.') &gt;&gt;&gt; It is 1 true. </code></pre> <p>Why does the program execute the last case statement?</p>
<python><match><python-3.10><structural-pattern-matching>
2023-02-06 20:37:27
0
786
sp1rs
75,366,064
10,321,768
Should I use Integer to represent currency?
<p>I know that Decimal or a custom class is <a href="https://stackoverflow.com/questions/1406737/what-class-to-use-for-money-representation">generally the preferred way of representing currency</a>, but I am asking if it can also be achieved using integers. If not, I would like to know why.</p> <p>I know we should never use float to represent currency, because of floating-point precision issues:</p> <pre class="lang-py prettyprint-override"><code>burger = 1.3 amount = 3 total = burger * amount # 3.9000000000000004 </code></pre> <p>Python has the Decimal module that solves the issue:</p> <pre class="lang-py prettyprint-override"><code>from decimal import Decimal burger = Decimal('1.3') amount = 3 total = burger * amount # Decimal('3.9') print(total) # 3.9 </code></pre> <p>But there is also the option to store the values and do the math operations using integers. If we need to show the value to a human, we just divide by 100 to show the representation as currency:</p> <pre class="lang-py prettyprint-override"><code>burger = 130 amount = 3 total = burger * amount # 390 print(total / 100) # 3.9 </code></pre> <p>Using integers seems much simpler, but would the integer solution work in any situation involving currency representation? Are there any trade-offs when using integers to represent currency?</p>
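A sketch of the main trade-off hinted at above (the `percent_of` helper and its half-up rounding rule are illustrative assumptions, not from the question): sums and integer multiples of cents are exact, but any division or percentage forces an explicit rounding policy.

```python
# Integer cents are exact for sums and integer multiples...
burger = 130             # 1.30 represented as 130 cents
total = burger * 3
assert total == 390      # exactly 3.90, no float drift

# ...but any division forces an explicit policy. Splitting 1.00 three
# ways cannot be done evenly in whole cents:
share, leftover = divmod(100, 3)
assert (share, leftover) == (33, 1)   # someone must get the extra cent

def percent_of(cents, pct):
    """Hypothetical helper: pct% of an amount in cents, rounding half up."""
    q, r = divmod(cents * pct, 100)
    return q + (1 if r >= 50 else 0)
```

For example, `percent_of(105, 10)` is 11 cents under half-up rounding, where a different rounding rule (banker's rounding, truncation) would give a different answer; this is the kind of decision Decimal makes configurable and raw integers leave entirely to you.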
<python><python-3.x><floating-point><integer>
2023-02-06 20:07:08
1
736
Eduardo Matsuoka
75,366,046
3,654,588
Cython: Overriding the `__cinit__` function with different parameters and signature
<p>I am interested in subclassing an existing Cython class (we'll call it <code>A</code>), which has say the following <code>__cinit__(self, int a, int b, *argv)</code> function signature.</p> <p>My new class <code>B</code> would have the following <code>__cinit__(self, int a, int c, *argv)</code>, where <code>b</code> is no longer required, or used.</p> <p>I want something along the lines of:</p> <pre><code>cdef class A: cdef int a cdef int b def __cinit__(self, int a, int b, *argv): self.a = a self.b = b cdef class B(A): cdef double c def __cinit__(self, int a, double c, *argv): self.a = a self.c = c </code></pre> <p>Is there a way to do this?</p>
<python><cython>
2023-02-06 20:05:07
1
1,302
ajl123
75,366,015
15,956,657
Python loop.sock_accept only accepts one connection
<p>My process is using <code>asyncio</code> and <code>socket</code> to communicate with other processes. It handles one client process connection perfectly, but when a second client tries to connect, it waits forever at the client's <code>connect_ex</code> method and the server's <code>sock_accept</code> method.</p> <p>The flow is this: Process #1 spawns a worker process which creates a socket server. Process #1 connects and communicates with worker process. When/if process #1 dies, process #2 tries to connect with worker process. Process #2 can't connect.</p> <pre><code># process #1 class Communicator: def __init__(self, ..., port): ... self.port = port self.loop = asyncio.get_event_loop() async def connect(self): self.psocket = socket.socket(socket.AF_INET, (socket.SOCK_STREAM | socket.SOCK_NONBLOCK)) ex = 1 while ex: ex = self.psocket.connect_ex(('localhost', self.port)) self.psocket.setblocking(False) self.listen_task = self.loop.create_task(self.listen()) self.emit_task = self.loop.create_task(self.emit()) print('Connected on port', self.port) async def listen(self): ... async def emit(self): ... </code></pre> <pre><code># worker process class Communicator(threading.Thread): def __init__(self, ..., port): ... 
self.port = port super().__init__() def set_event_loop(self): try: self.loop = asyncio.get_event_loop() except RuntimeError: self.loop = asyncio.new_event_loop() asyncio.set_event_loop(self.loop) def run(self): self.set_event_loop() self.psocket = socket.socket(socket.AF_INET, (socket.SOCK_STREAM | socket.SOCK_NONBLOCK)) self.psocket.bind(('localhost', self.port)) self.psocket.listen() self.psocket.setblocking(False) self.accept_task = self.loop.create_task(self.accept()) pending = asyncio.all_tasks(loop=self.loop) self.loop.run_until_complete(asyncio.gather(*pending)) async def accept(self): while True: connection, addr = await self.loop.sock_accept(self.psocket) self.tasks.append({ 'listener': self.loop.create_task(self.listen(connection)), 'emitter': self.loop.create_task(self.emit(connection)), }) async def listen(self, connection): ... async def emit(self, connection): ... </code></pre> <p>I know how to do this with threading, I only want to use asynchronous methods for handling multiple client connections.</p> <p>Also <code>connect_ex</code> blocks when the second process tries to connect. The first process runs through the <code>connect_ex</code> loop many times before connecting.</p> <p>What's causing <code>connect_ex</code> to block and <code>sock_accept</code> to wait forever?</p>
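For reference, a minimal self-contained sketch (hypothetical names; no claim about the root cause in the code above) of a `loop.sock_accept` loop that keeps accepting clients. It creates the socket with a plain `SOCK_STREAM` type and calls `setblocking(False)` afterwards, rather than OR-ing `SOCK_NONBLOCK` into the type argument.

```python
import asyncio
import socket

async def accept_loop(server_sock, connections):
    # Each iteration awaits one client, records it, and immediately loops
    # back to await the next one, so multiple clients can be accepted.
    loop = asyncio.get_running_loop()
    while True:
        conn, _addr = await loop.sock_accept(server_sock)
        connections.append(conn)

async def demo():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    srv.listen()
    srv.setblocking(False)       # set after creation, not via type flags
    port = srv.getsockname()[1]
    conns = []
    task = asyncio.create_task(accept_loop(srv, conns))
    # Two clients connect one after the other; both should be accepted.
    clients = [await asyncio.open_connection("127.0.0.1", port) for _ in range(2)]
    await asyncio.sleep(0.1)
    task.cancel()
    for _r, w in clients:
        w.close()
    for c in conns:
        c.close()
    srv.close()
    return len(conns)
```

Running `asyncio.run(demo())` accepts both clients, which at least demonstrates that a single `sock_accept` loop handles multiple connections when nothing else blocks the event loop.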
<python><sockets><python-asyncio>
2023-02-06 20:01:34
0
363
alvrm
75,365,818
6,357,916
Group list of 4-strings into list of pairs
<p>I have the following list of strings:</p> <pre><code>['word1 word2 word3 word4', 'word5 word6 word7 word8'] </code></pre> <p>(I have shown only two strings, but there can be many.) I want to create a new list which should look like this:</p> <pre><code>['word1 word2', 'word3 word4', 'word5 word6', 'word7 word8'] </code></pre> <p>I tried the following:</p> <pre><code>lines = ['word1 word2 word3 word4', 'word5 word6 word7 word8'] [[word1 + ' ' + word2, word3 + ' ' + word4] for line in lines for word1, word2, word3, word4 in line.split()] </code></pre> <p>But it gives the following error:</p> <pre><code>ValueError: too many values to unpack (expected 4) </code></pre> <p>How do I do this in the most Pythonic way?</p>
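The unpack error happens because iterating over `line.split()` yields one word at a time, not 4-tuples. A sketch of the transformation described above: split each line into words, then stitch adjacent words back together two at a time.

```python
lines = ['word1 word2 word3 word4', 'word5 word6 word7 word8']

pairs = []
for line in lines:
    words = line.split()
    # Take the words two at a time and rejoin each pair with a space.
    pairs.extend(' '.join(words[i:i + 2]) for i in range(0, len(words), 2))

# pairs == ['word1 word2', 'word3 word4', 'word5 word6', 'word7 word8']
```

The same stepping-by-two slicing works for lines of any even word count, not just four.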
<python><python-3.x>
2023-02-06 19:39:16
4
3,029
MsA
75,365,737
1,276,506
When fine-tuning a pre-trained Model, how does tensorflow know that the base_model has been changed?
<p>Ng's Convolutional Neural Network class's Week 2 Lab on using Transfer Learning with MobileNetV2 (summary: <a href="https://github.com/EhabR98/Transfer-Learning-with-MobileNetV2" rel="nofollow noreferrer">https://github.com/EhabR98/Transfer-Learning-with-MobileNetV2</a>) and an additional tutorial (<a href="https://blog.roboflow.com/how-to-train-mobilenetv2-on-a-custom-dataset/" rel="nofollow noreferrer">https://blog.roboflow.com/how-to-train-mobilenetv2-on-a-custom-dataset/</a>) both begin like this:</p> <pre><code>base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE, include_top=False, weights='imagenet') base_model.trainable = False </code></pre> <p>They then proceed to add pooling layer(s), a Dropout layer and a Dense 1-unit layer to the end, apply a BinaryCrossentropy loss and some kind of optimizer, then train it on some custom data that has been inputted. Let's call this custom model &quot;<code>model2</code>&quot; as Ng's lab does.</p> <p>Here's what the Coursera class model looks like; it's important to include here because the variable <code>base_model</code> is called in two different closures throughout the Coursera lab (<em>previous to this</em> it was called outside of a method, as <code>base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE, include_top=True, weights='imagenet'); base_model.trainable= False</code>)</p> <pre><code>def alpaca_model(image_shape=IMG_SIZE, data_augmentation=data_augmenter()): input_shape = image_shape + (3,) base_model = tf.keras.applications.MobileNetV2(input_shape=input_shape, include_top=False, weights='imagenet') base_model.trainable = False inputs = tf.keras.Input(shape=input_shape) x = data_augmentation(inputs) x = preprocess_input(x) x = base_model(x, training=False) x = tfl.GlobalAveragePooling2D()(x) x = tfl.Dropout(0.2)(x) prediction_layer = tfl.Dense(1) outputs = prediction_layer(x) model = tf.keras.Model(inputs, outputs) return model model2 = alpaca_model() base_learning_rate
= 0.001 initial_epochs = 5 model2.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=base_learning_rate), loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), metrics=[&quot;accuracy&quot;]) history = model2.fit(train_dataset, validation_data=validation_dataset, epochs=initial_epochs) </code></pre> <p>This performs OK, getting as much as 80% accuracy.</p> <p>Fine tuning -- Now in both the course lab and the tutorial, they then proceed to &quot;unfreeze&quot; some of the last layers of the internal network so that they can be trained, like so:</p> <pre><code>fine_tune_at = 120 base_model = model2.layers[4] #totally separate question, but I would love to hear in comments, what this does exactly. It is difficult to Google this. base_model.trainable = True print(&quot;#/layers in base model: &quot;, len(base_model.layers)) for layer in base_model.layers[:fine_tune_at]: layer.trainable = False loss_function = tf.keras.losses.BinaryCrossentropy(from_logits=True) optimizer = tf.keras.optimizers.Adam(learning_rate=base_learning_rate*0.1) metrics = ['accuracy'] fine_tune_epochs = 5 total_epochs = initial_epochs + fine_tune_epochs </code></pre> <p><em><strong>Up until this point, I'm satisfied, I can clearly see what is going on, but then:</strong></em></p> <pre><code>model2.compile(loss=loss_function,optimizer=optimizer,metrics=metrics) history_fine = model2.fit(train_dataset, epochs=total_epochs, initial_epoch=history.epoch[-1], validation_data = validation_dataset) </code></pre> <p>This leads to a marked improvement in results, which confused me: I was very much expecting <code>base_model</code> to get passed in somehow.
I didn’t imagine that altering some other variable that was never passed in would come into play.</p> <p><em><strong>So given all of that context, the question is:</strong></em> How is altering the <code>base_model</code> affecting <code>model2</code>?</p> <p>If the above example from the Coursera lab is as confusing to you as it is to me, the example shown on <a href="https://blog.roboflow.com/how-to-train-mobilenetv2-on-a-custom-dataset/" rel="nofollow noreferrer">https://blog.roboflow.com/how-to-train-mobilenetv2-on-a-custom-dataset/</a> as mentioned above is much simpler and contains much less ambiguity, as <code>base_model</code> is defined only once. Regardless, the same dynamic applies and I'm equally confused by both. Thanks again for your time.</p>
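The puzzle described above can be reproduced without TensorFlow at all: it is ordinary Python object identity. `model2.layers[4]` returns a reference to the same layer object the model holds, not a copy, so flipping its attribute mutates the model in place. A plain-Python sketch (the `Layer` and `Model` classes here are hypothetical stand-ins, not Keras):

```python
class Layer:
    """Stand-in for a Keras layer: just a mutable flag."""
    def __init__(self):
        self.trainable = True

class Model:
    """Stand-in for a Keras model holding references to its layers."""
    def __init__(self, layers):
        self.layers = layers

model2 = Model([Layer() for _ in range(5)])
base_model = model2.layers[4]   # a reference to the SAME object, not a copy
base_model.trainable = False    # mutates the object the model also holds
```

After these two lines, `model2.layers[4].trainable` is `False` even though `model2` was never "told" anything: both names point at one object, which is the same aliasing at work when mutating `base_model` changes what `model2` trains.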
<python><tensorflow><keras><transfer-learning>
2023-02-06 19:28:40
2
39,963
boulder_ruby
75,365,729
554,421
Pytest plugin ignoring hooks for test run
<p>I am trying to add a plugin to eavesdrop on my pytest runs. from what I can find in code samples and documentation, this &quot;should&quot; work (emphasis on should)</p> <pre><code>&quot;&quot;&quot; to test scripted pytest control &quot;&quot;&quot; import sys import pytest class MyPytestPlugin: &quot;&quot;&quot; Hooks to get info from pytest &quot;&quot;&quot; def __init__(self) -&gt; None: pass # pylint: disable=unused-argument def pytest_collection_modifyitems(self, config, items): print(&quot;collection done.&quot;) def pytest_sessionstart(self, session): print(&quot;*&quot; * 10) print(&quot;session started&quot;, session) print(&quot;*&quot; * 10) def pytest_sessionfinish(self, session, exitstatus): print(&quot;-&quot; * 10) print(&quot;session finished&quot;, session) def pytest_runtest_logstart(self, nodeid, location): print(&quot;&lt;&lt;&lt;&lt;&lt; log start &gt;&gt;&gt;&gt;&gt;&gt;&quot;, nodeid) def pytest_runtest_logfinish(self, nodeid, location): print(&quot;&lt;&lt;&lt;&lt;&lt; log finish &gt;&gt;&gt;&gt;&gt;&gt;&quot;, nodeid) def pytest_runtest_protocol(self, item, nextitem): print(&quot;&lt;&lt;&lt;&lt;&lt; protocol &gt;&gt;&gt;&gt;&gt;&gt;&quot;) def pytest_collectreport(self, report): print(&quot;&lt;&lt;&lt;&lt;&lt; collect report &gt;&gt;&gt;&gt;&gt;&gt;&quot;) def pytest_report_teststatus(self, report): print(&quot;&lt;&lt;&lt;&lt;&lt; test status &gt;&gt;&gt;&gt;&gt;&gt;&quot;) def pytest_runtest_setup(self, item): print(&quot;&lt;&lt;&lt;&lt;&lt; setup &gt;&gt;&gt;&gt;&gt;&gt;&quot;) def pytest_runtest_call(self, item): print(&quot;&lt;&lt;&lt;&lt;&lt; runtest call &gt;&gt;&gt;&gt;&gt;&gt;&quot;) def pytest_runtest_teardown(self, item): print(&quot;&lt;&lt;&lt;&lt;&lt; runtest teardown &gt;&gt;&gt;&gt;&gt;&gt;&quot;) my_plugin = MyPytestPlugin() sys.exit( pytest.main( [&quot;-s&quot;, &quot;sample_test.py&quot;], plugins=[my_plugin], ) ) </code></pre> <p>however, when I run this, only the session (start/finish) and the log 
(start/finish) print their message. nothing else (for example, the runtest setup, call, teardown etc.) seems to be called. has anyone seen this before?</p> <p>here is the output</p> <pre><code>python sample_test_runner.py Test session starts (platform: linux, Python 3.9.6, pytest 7.2.0, pytest-sugar 0.9.5) cachedir: .pytest_cache metadata: {'Python': '3.9.6', 'Platform': 'Linux-5.10.124-linuxkit-x86_64-with-glibc2.28', 'Packages': {'pytest': '7.2.0', 'py': '1.11.0', 'pluggy': '1.0.0'}, 'Plugins': {'cov': '4.0.0', 'xdist': '3.0.2', 'metadata': '2.0.2', 'aiohttp': '0.3.0', 'asyncio': '0.16.0', 'html': '3.1.1', 'sugar': '0.9.5', 'email': '0.3', 'forked': '1.4.0', 'timeout': '2.1.0'}} rootdir: /src/*******, configfile: tox.ini plugins: cov-4.0.0, xdist-3.0.2, metadata-2.0.2, aiohttp-0.3.0, asyncio-0.16.0, html-3.1.1, sugar-0.9.5, email-0.3, forked-1.4.0, timeout-2.1.0 timeout: 7200.0s timeout method: signal timeout func_only: False ********** session started &lt;Session xxxxxxxx exitstatus=&lt;ExitCode.OK: 0&gt; testsfailed=0 testscollected=0&gt; ********** [gw0] Python 3.9.6 (default, Aug 17 2021, 02:38:04) -- [GCC 8.3.0] [gw1] Python 3.9.6 (default, Aug 17 2021, 02:38:04) -- [GCC 8.3.0] [gw2] Python 3.9.6 (default, Aug 17 2021, 02:38:04) -- [GCC 8.3.0] gw0 [5] / gw1 [5] / gw2 [5] scheduling tests via LoadScheduling &lt;&lt;&lt;&lt;&lt; log start &gt;&gt;&gt;&gt;&gt;&gt; sample_test.py::test_addon[t1] &lt;&lt;&lt;&lt;&lt; log start &gt;&gt;&gt;&gt;&gt;&gt; sample_test.py::test_addon[t2] &lt;&lt;&lt;&lt;&lt; log start &gt;&gt;&gt;&gt;&gt;&gt; sample_test.py::test_addon[t3] [gw0] PASSED sample_test.py sample_test.py::test_addon[t2] βœ“ 20% β–ˆβ–ˆ [gw1] PASSED sample_test.py &lt;&lt;&lt;&lt;&lt; log finish &gt;&gt;&gt;&gt;&gt;&gt; sample_test.py::test_addon[t2] sample_test.py::test_addon[t1] βœ“ 60% β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ sample_test.py::test_addon[t3] βœ“ 40% β–ˆβ–ˆβ–ˆβ–ˆ [gw2] PASSED sample_test.py &lt;&lt;&lt;&lt;&lt; log finish &gt;&gt;&gt;&gt;&gt;&gt; 
sample_test.py::test_addon[t3] &lt;&lt;&lt;&lt;&lt; log finish &gt;&gt;&gt;&gt;&gt;&gt; sample_test.py::test_addon[t1] &lt;&lt;&lt;&lt;&lt; log start &gt;&gt;&gt;&gt;&gt;&gt; sample_test.py::test_addon[t4] sample_test.py::test_addon[t5] βœ“ 80% β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ [gw1] PASSED sample_test.py &lt;&lt;&lt;&lt;&lt; log finish &gt;&gt;&gt;&gt;&gt;&gt; sample_test.py::test_addon[t5] sample_test.py::test_addon[t4] βœ“ 100% β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ [gw0] PASSED sample_test.py &lt;&lt;&lt;&lt;&lt; log finish &gt;&gt;&gt;&gt;&gt;&gt; sample_test.py::test_addon[t4] ---------- session finished &lt;Session xxxxxxxxx exitstatus=0 testsfailed=0 testscollected=5&gt; 0 0 5 Results (0.70s): 5 passed </code></pre> <p>for the curious, here is my test file:</p> <pre><code>import pytest @pytest.mark.parametrize( &quot;a,b,expected&quot;, [ pytest.param(1, 1, 2, id=&quot;t1&quot;), pytest.param(2, 2, 4, id=&quot;t2&quot;), pytest.param(2, 1, 3, id=&quot;t3&quot;), pytest.param(2, 3, 5, id=&quot;t4&quot;), pytest.param(3, 4, 7, id=&quot;t5&quot;), ], ) def test_addon(a, b, expected): print(f&quot;{a}, {b}, {expected}&quot;) assert a + b == expected </code></pre> <p><strong>Update</strong>: Thanks to @Teejay, it is true that in cases like these we should wrap the hook. 
However, even if we change all the above hooks to generators, like below, we still won't get any output except from the session and log start and finish hooks:</p> <pre><code> @pytest.hookimpl(hookwrapper=True) def pytest_runtest_protocol(self, item, nextitem): print(&quot;&lt;&lt;&lt;&lt;&lt; protocol &gt;&gt;&gt;&gt;&gt;&gt;&quot;) outcome = yield # outcome.excinfo may be None or a (cls, val, tb) tuple print(outcome.get_result()) </code></pre> <p>Nor will anything change if these methods are made static:</p> <pre><code> @pytest.hookimpl(hookwrapper=True) @staticmethod def pytest_runtest_protocol(item, nextitem): print(&quot;&lt;&lt;&lt;&lt;&lt; protocol &gt;&gt;&gt;&gt;&gt;&gt;&quot;) outcome = yield # outcome.excinfo may be None or a (cls, val, tb) tuple print(outcome.get_result()) </code></pre>
<python><plugins><pytest><hook>
2023-02-06 19:27:13
0
867
Reza
75,365,695
5,592,430
Update plot with new data instead of making a new plot in Jupyter notebook
<p>I have an issue and I hope you can help me resolve it. I need to create an interactive plot with a dropdown widget where I can select and plot the data of interest. I am doing it the following way:</p> <pre><code>import plotly.graph_objects as go import ipywidgets as widgets import pandas as pd df = pd.DataFrame( {'ticker' : ['a', 'b', 'c', 'a', 'b', 'c', 'a', 'b', 'c'], 'timestamp' : [1,1,1,2,2,2,3,3,3], 'val' : [10,11,12,21,22,23, 100, 200, 300]}) ddwn = widgets.Dropdown(options = df.ticker.unique()) display(ddwn) def on_change(change): if change['type'] == 'change' and change['name'] == 'value': d = df[df['ticker'] == change['new']].sort_values('timestamp') fig = go.Figure(data=go.Scatter(x=list(d.timestamp), y=list(d.val), name=change['new'])) fig.show() ddwn.observe(on_change) </code></pre> <p>The problem is that a new figure is added below the previous one, instead of the current figure being cleared. What I really want is to update the figure. I tried to use the answer from <a href="https://stackoverflow.com/questions/42998009/clear-matplotlib-figure-in-jupyter-python-notebook">Clear MatPlotLib figure in Jupyter Python notebook</a> but it didn't help me.</p> <p>P.S. I have a lot of tickers, so I don't want to create a dict for every ticker and use it.</p>
<python><matplotlib><jupyter-notebook><plotly>
2023-02-06 19:22:26
2
981
Roman Kazmin
75,365,612
7,331,538
python string parsing issues when saving sql commands to file
<p>I have dicts of data I am looping through to form <code>INSERT</code> commands for postgresql, where the parent keys of the dicts are the column names and the parent values are the column values:</p> <pre><code>data = { 'id': 45, 'col1': &quot;foo's&quot;, 'col2': {'dict_key': 5} } columns = ', '.join(data.keys()) # replace single quote with double to form a json type for psql column data['col2'] = str(data['col2']).replace(&quot;'&quot;, '&quot;') with open(&quot;file.sql&quot;, &quot;w&quot;) as f: command = &quot;INSERT INTO table1({}) VALUES {};&quot; f.write(command.format(columns, tuple(data.values()))) </code></pre> <p>The problem is that the output of this is not formatted correctly for sql to execute. This is the output of the above:</p> <p><code>INSERT INTO table1(id, col1, col2) VALUES (45, &quot;foo's&quot;, '{&quot;dict_key&quot;:5}');</code></p> <p>The json field is formatted correctly with the single quotes around the value. But <code>col1</code> keeps the double quotes <strong>if the string in <code>col1</code> contains a single quote</strong>. This is a problem because postgresql requires single quotes to identify <code>TEXT</code> input.</p> <p>Is there a better way to parse data into <code>psql</code> insert commands?</p>
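When the statements are executed directly against the database, parameterized queries are the safer tool; but for writing a standalone `.sql` file each value has to be rendered as a literal explicitly: single-quote strings (doubling embedded single quotes) and serialize dicts to JSON first. A sketch with a hypothetical `sql_literal` helper, rather than relying on Python's `tuple()` repr:

```python
import json

def sql_literal(value):
    """Hypothetical helper: render a Python value as a PostgreSQL literal.

    Strings are single-quoted with embedded single quotes doubled; dicts
    are serialized to JSON first, then quoted like any other string.
    """
    if isinstance(value, bool):
        return 'TRUE' if value else 'FALSE'
    if isinstance(value, (int, float)):
        return str(value)
    if isinstance(value, dict):
        value = json.dumps(value)
    return "'" + str(value).replace("'", "''") + "'"

data = {'id': 45, 'col1': "foo's", 'col2': {'dict_key': 5}}
command = "INSERT INTO table1({}) VALUES ({});".format(
    ', '.join(data),
    ', '.join(sql_literal(v) for v in data.values()))
```

Here `command` comes out with single-quoted text and JSON values throughout, avoiding the double quotes that the `tuple()` repr produces for strings containing an apostrophe.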
<python><sql-insert><psql>
2023-02-06 19:11:54
2
2,377
bcsta
75,365,463
17,047,177
Editorconfig for python docstring
<p>I am using the <code>pycharm</code> IDE and I would like to automatize the indentation of the python <code>docstrings</code> with the <code>.editorconfig</code> file. I get to control almost everything with the following configuration (just in case it is useful):</p> <pre><code>[{*.py,*.pyw}] ij_python_align_collections_and_comprehensions = true ij_python_align_multiline_imports = true ij_python_align_multiline_parameters = true ij_python_align_multiline_parameters_in_calls = true ij_python_blank_line_at_file_end = true ij_python_blank_lines_after_imports = 1 ij_python_blank_lines_after_local_imports = 0 ij_python_blank_lines_around_class = 1 ij_python_blank_lines_around_method = 1 ij_python_blank_lines_around_top_level_classes_functions = 2 ij_python_blank_lines_before_first_method = 0 ij_python_call_parameters_new_line_after_left_paren = false ij_python_call_parameters_right_paren_on_new_line = false ij_python_call_parameters_wrap = normal ij_python_dict_alignment = 0 ij_python_dict_new_line_after_left_brace = false ij_python_dict_new_line_before_right_brace = false ij_python_dict_wrapping = 1 ij_python_from_import_new_line_after_left_parenthesis = false ij_python_from_import_new_line_before_right_parenthesis = false ij_python_from_import_parentheses_force_if_multiline = false ij_python_from_import_trailing_comma_if_multiline = false ij_python_from_import_wrapping = 1 ij_python_hang_closing_brackets = false ij_python_keep_blank_lines_in_code = 1 ij_python_keep_blank_lines_in_declarations = 1 ij_python_keep_indents_on_empty_lines = false ij_python_keep_line_breaks = true ij_python_method_parameters_new_line_after_left_paren = false ij_python_method_parameters_right_paren_on_new_line = false ij_python_method_parameters_wrap = normal ij_python_new_line_after_colon = false ij_python_new_line_after_colon_multi_clause = true ij_python_optimize_imports_always_split_from_imports = false ij_python_optimize_imports_case_insensitive_order = false 
ij_python_optimize_imports_join_from_imports_with_same_source = false ij_python_optimize_imports_sort_by_type_first = true ij_python_optimize_imports_sort_imports = true ij_python_optimize_imports_sort_names_in_from_imports = false ij_python_space_after_comma = true ij_python_space_after_number_sign = true ij_python_space_after_py_colon = true ij_python_space_before_backslash = true ij_python_space_before_comma = false ij_python_space_before_for_semicolon = false ij_python_space_before_lbracket = false ij_python_space_before_method_call_parentheses = false ij_python_space_before_method_parentheses = false ij_python_space_before_number_sign = true ij_python_space_before_py_colon = false ij_python_space_within_empty_method_call_parentheses = false ij_python_space_within_empty_method_parentheses = false ij_python_spaces_around_additive_operators = true ij_python_spaces_around_assignment_operators = true ij_python_spaces_around_bitwise_operators = true ij_python_spaces_around_eq_in_keyword_argument = false ij_python_spaces_around_eq_in_named_parameter = false ij_python_spaces_around_equality_operators = true ij_python_spaces_around_multiplicative_operators = true ij_python_spaces_around_power_operator = true ij_python_spaces_around_relational_operators = true ij_python_spaces_around_shift_operators = true ij_python_spaces_within_braces = false ij_python_spaces_within_brackets = false ij_python_spaces_within_method_call_parentheses = false ij_python_spaces_within_method_parentheses = false ij_python_use_continuation_indent_for_arguments = false ij_python_use_continuation_indent_for_collection_and_comprehensions = false ij_python_use_continuation_indent_for_parameters = true ij_python_wrap_long_lines = false </code></pre> <p>It would be rather useful to manage all the indentation config from the same file. That is, adding the docstring configuration to the .editorconfig. 
Does anyone know if it is possible to control the <code>docstring</code> style by using .editorconfig?</p>
<python><pycharm><documentation><docstring><editorconfig>
2023-02-06 18:55:12
0
1,069
A.Casanova
75,365,406
15,239,717
How can I get a Field Id from a Related Django Model
<p>I am working on a Django project where and I want to get an ID of a Related model with a OneToOne attributed so I can edit the profile of the user with his related Profile but all I get in return is <strong>Field 'id' expected a number but got 'GANDE1'</strong>.</p> <p>Here are my Models:</p> <pre><code>class Profile(models.Model): customer = models.OneToOneField(User, on_delete=models.CASCADE, null = True) surname = models.CharField(max_length=20, null=True) othernames = models.CharField(max_length=40, null=True) gender = models.CharField(max_length=6, choices=GENDER, blank=True, null=True) address = models.CharField(max_length=200, null=True) phone = models.CharField(max_length=11, null=True) image = models.ImageField(default='avatar.jpg', blank=False, null=False, upload_to ='profile_images', ) #Method to save Image def save(self, *args, **kwargs): super().save(*args, **kwargs) img = Image.open(self.image.path) #Check for Image Height and Width then resize it then save if img.height &gt; 200 or img.width &gt; 150: output_size = (150, 250) img.thumbnail(output_size) img.save(self.image.path) def __str__(self): return f'{self.customer.username}-Profile' class Account(models.Model): customer = models.OneToOneField(User, on_delete=models.CASCADE, null=True) account_number = models.CharField(max_length=10, null=True) date = models.DateTimeField(auto_now_add=True, null=True) def __str__(self): return f' {self.customer} - Account No: {self.account_number}' </code></pre> <p>Here is my Views:</p> <pre><code>def create_account(request): #Search Customer if searchForm.is_valid(): #Value of search form value = searchForm.cleaned_data['value'] #Filter Customer by Surname, Othernames , Account Number using Q Objects user_filter = Q(customer__exact = value) | Q(account_number__exact = value) #Apply the Customer Object Filter list_customers = Account.objects.filter(user_filter) else: list_customers = Account.objects.all() context = { 'customers':paged_list_customers, } return 
render(request, 'dashboard/customers.html', context) </code></pre> <p>Here is how I displayed list of accounts in my Template:</p> <pre><code>{% for customer in customers %} &lt;tr&gt; &lt;td&gt;{{ forloop.counter }}&lt;/td&gt; &lt;td&gt;{{ customer.account_number }}&lt;/td&gt; {% if customer.customer.profile.surname == None %} &lt;td&gt; &lt;a class=&quot;btn btn-danger&quot; href=&quot; {% url 'update-customer' customer.customer.id %} &quot;&gt;Click to Enter Customer Personal Details.&lt;/a&gt; &lt;/td&gt; {% else %} &lt;td&gt;{{ customer.customer.profile.surname }} {{ customer.customer.profile.othernames }}&lt;/td&gt; &lt;td&gt;{{ customer.customer.profile.phone }}&lt;/td&gt; &lt;td&gt;&lt;a class=&quot;btn btn-success btn-sm&quot; href=&quot;{% url 'account-statement' customer.id %}&quot;&gt;Statement&lt;/a&gt;&lt;/td&gt; &lt;td&gt;&lt;a class=&quot;btn btn-danger btn-sm&quot; href=&quot;{% url 'dashboard-witdrawal' customer.id %}&quot;&gt;Withdraw&lt;/a&gt;&lt;/td&gt; &lt;th scope=&quot;row&quot;&gt;&lt;a class=&quot;btn btn-success btn-sm&quot; href=&quot;{% url 'create-deposit' customer.id %}&quot;&gt;Deposit&lt;/a&gt;&lt;/th&gt; {% endif %} &lt;/tr&gt; {% endfor %} </code></pre> <p>Here is my Customer Update View where I am having issues:</p> <pre><code>def update_customer_profile(request, pk): #get logged in user user = request.user #check if logged in user is staff try: customer_user = User.objects.get(id=pk) except User.DoesNotExist: return redirect('user-register') else: count_users = User.objects.count() #Get the Customer User's Profile from the User above user_profile = Profile.objects.get(customer=customer_user.username) </code></pre> <p>Please, understand that I want to get the ID of a User and MATCH it with the one in his profile record so I can be able to edit his profile record. And also note that the customer profile is automatically created using signals upon user registration.</p>
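The traceback is consistent with handing the ORM a username string where it expects a <code>User</code> instance (or its integer id): <code>customer</code> is a <code>OneToOneField</code>, so <code>Profile.objects.get(customer=customer_user.username)</code> makes Django try to coerce <code>'GANDE1'</code> into the related id. A minimal sketch of the likely fix, assuming the models above (this only runs inside a configured Django project):

```python
# Pass the User instance itself rather than its username string:
user_profile = Profile.objects.get(customer=customer_user)

# Equivalent, via the reverse accessor that OneToOneField creates:
user_profile = customer_user.profile
```

If filtering by username is really what's wanted, spell the lookup out across the relation instead: <code>Profile.objects.get(customer__username=value)</code>.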
<python><django><django-models><django-views><django-templates>
2023-02-06 18:46:50
1
323
apollos
75,365,231
12,226,377
Append multiple Excel sheets and create an identifier column using pandas
<p>I am in a situation where I would like append multiple excel sheets coming from a single workbook on top of each other and build an identifier column. The identifier column will be built via extracting a word(within brackets of a column) from the column header, essentially creating a new column and storing that extracted information in it. Here is an example: My excel workbook has two sheets , &quot;Sheet1&quot; and &quot;Sheet2&quot; and their header looks like this:</p> <p>Sheet1:</p> <pre><code>a b c d(Connect) 1 2 3 4 11 22 33 44 </code></pre> <p>Sheet2:</p> <pre><code>a b c d(Connect2) 5 6 7 8 </code></pre> <p>What I want is to append these two sheets together in a way that the resultant dataframe should like following:</p> <pre><code>identifier a b c d Connect1 1 2 3 4 Connect1 11 22 33 44 Connect2 5 6 7 8 </code></pre> <p>The idea is that the identifier should be placed corresponding to each and every row when we are appending the sheets on top of each other.</p> <p>How do I achieve this?</p>
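One way to sketch this with pandas — the sheet contents below are hypothetical stand-ins; with a real workbook, <code>pd.read_excel(path, sheet_name=None)</code> returns the same kind of dict of DataFrames. The bracketed word is pulled out of the header with a regex, the header is trimmed to its bare name, and an identifier column is inserted before concatenating:

```python
import re
import pandas as pd

# Hypothetical stand-ins for the workbook's sheets; with a real file,
# pd.read_excel(path, sheet_name=None) returns this same kind of dict.
sheets = {
    "Sheet1": pd.DataFrame({"a": [1, 11], "b": [2, 22], "c": [3, 33], "d(Connect1)": [4, 44]}),
    "Sheet2": pd.DataFrame({"a": [5], "b": [6], "c": [7], "d(Connect2)": [8]}),
}

frames = []
for name, df in sheets.items():
    # find the header that carries a bracketed identifier, e.g. "d(Connect1)"
    tagged = next(c for c in df.columns if re.search(r"\((.+)\)$", c))
    identifier = re.search(r"\((.+)\)$", tagged).group(1)
    df = df.rename(columns={tagged: tagged.split("(", 1)[0]})
    df.insert(0, "identifier", identifier)  # one value broadcast to every row
    frames.append(df)

result = pd.concat(frames, ignore_index=True)
print(result)
```
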
<python><pandas><merge><append>
2023-02-06 18:28:05
1
807
Django0602
75,365,158
8,968,801
Django: Unable to Apply Function View Decorator to Class Based View
<p>I'm migrating from regular function based views, to class based views. One of the things that I couldn't migrate were the decorators I used. The decorator in question checks if the credentials of the current user are valid and then executes the decorated function:</p> <pre class="lang-py prettyprint-override"><code>def custom_auth(function): @wraps(function) def wrap(request, *args, **kwargs): # Logic for validating if user has correct credentials # Fetches the user that accessed the function user_object = User.objects.get(username=request_username) # Try to execute the decorated function. If it fails, redirect # to previous page and show an error popup try: return function(request, user=user_object, *args, **kwargs) except: # Logic for displaying the popup </code></pre> <p>Previously I could just decorate my function by doing</p> <pre class="lang-py prettyprint-override"><code>@custom_auth def view(request, *args, **kwargs): # View logic </code></pre> <p>However, when I try to apply it to my class based view in the same way, I get an error saying <code>__init__() takes 1 positional argument but 2 were given: user='username', view='cbvview'</code></p> <pre class="lang-py prettyprint-override"><code>@custom_auth class CBV(View): def get(self, request, *args, **kwargs): # Get request logic </code></pre> <p>I know that this is not the way you are supposed to apply the decorator, so I tried with different approaches. Either adding the decorator to <code>urls.py</code>, adding the <code>@method_decorator(custom_auth, name=&quot;dispatch&quot;)</code> or simply overriding the dispatch method, but none of them work. All of them give me the same error.</p> <p>What could be the issue? Maybe I should use a custom mixin instead?</p>
<python><django><django-views><django-queryset><django-class-based-views>
2023-02-06 18:20:26
2
823
Eddysanoli
75,365,126
12,014,637
The copy() method in Python does not work properly
<p>I have a pandas dataframe that I would like to make a duplicate of and do some operations on the duplicated version without affecting the original one. I use &quot;.copy()&quot; method but for some reason it doesn't work! Here is my code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np x = np.array([1,2]) df = pd.DataFrame({'A': [x, x, x], 'B': [4, 5, 6]}) duplicate = df.copy() duplicate['A'].values[0][[0,1]] = 0 print(duplicate) print(df) A B 0 [0, 0] 4 1 [0, 0] 5 2 [0, 0] 6 A B 0 [0, 0] 4 1 [0, 0] 5 2 [0, 0] 6 </code></pre> <p>As you can see &quot;df&quot; (the original dataset) gets affected as well. Does anyone know why, and how this should be done correctly?</p>
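This is shallow-vs-deep semantics rather than <code>copy()</code> misbehaving: <code>DataFrame.copy()</code> duplicates the container, but every cell in <code>A</code> holds a reference to the same numpy array, and pandas never clones those Python objects recursively (even <code>copy.deepcopy</code> is typically delegated to <code>copy(deep=True)</code>, so it shows the same sharing). A sketch of one way out — clone the inner arrays yourself:

```python
import numpy as np
import pandas as pd

x = np.array([1, 2])
df = pd.DataFrame({"A": [x, x, x], "B": [4, 5, 6]})

# copy() duplicates the frame, but each "A" cell is a *reference* to the
# same numpy array; pandas does not clone the referenced objects.
duplicate = df.copy()
duplicate["A"] = [arr.copy() for arr in duplicate["A"]]  # clone the inner arrays

duplicate["A"].iloc[0][[0, 1]] = 0  # mutate only the duplicate's array
print(df["A"].iloc[0])         # [1 2] -- original untouched
print(duplicate["A"].iloc[0])  # [0 0]
```
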
<python><pandas>
2023-02-06 18:16:31
1
618
Amin Shn
75,365,121
6,929,467
FastAPI - Pydantic Query Parameter with Enum
<h3>Problem</h3> <p>I am unable to use Pydantic model as a query parameter and to support things like enum, it doesn't work. You can do a hacky way to use Pydantic for query parameters with <code>Depends</code> but it doesn't work as well as it should.</p> <p>This is a very big requirement in our company and I'm guessing that a lot of other companies use the query parameters as well.</p> <p><a href="https://i.sstatic.net/6Dvfd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6Dvfd.png" alt="enter image description here" /></a></p> <h3>Looking for solution like this</h3> <p><a href="https://i.sstatic.net/nNN0R.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nNN0R.png" alt="enter image description here" /></a></p> <h4>Additional</h4> <p>I would like to use Pydantic models as a Query Parameter natively without the <code>Depends</code> does anyone have an idea how to do it?</p>
<python><fastapi><pydantic>
2023-02-06 18:15:51
0
2,720
innicoder
75,365,105
6,423,456
Generic vs Specific MyPy types of functions
<p>I remember reading, or hearing somewhere that for a function, input types should be as generic as possible (<code>Iterable</code> over <code>list</code>), but return types should be as specific as possible.</p> <p>Is this written down somewhere official that I can reference when this comes up in team discussions? Or am I crazy and this isn't actually a guideline?</p>
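I can't point to one canonical PEP sentence for it, but the guideline is essentially the robustness principle applied to typing — be liberal in what you accept, conservative in what you produce — and a tiny example shows why it helps callers on both ends:

```python
from typing import Iterable

def doubled(numbers: Iterable[int]) -> list[int]:
    # Accepting Iterable (generic) lets callers pass a list, tuple, set,
    # or generator; returning list (specific) lets callers index, slice,
    # and re-iterate the result without defensive conversions.
    return [n * 2 for n in numbers]

print(doubled([1, 2, 3]))          # a list works
print(doubled((1, 2, 3)))          # so does a tuple
print(doubled(n for n in (1, 2)))  # and a generator
```

Had the parameter been annotated <code>list[int]</code>, the generator call would be a type error for no behavioral reason; had the return been annotated <code>Iterable[int]</code>, callers could not index it without a cast.
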
<python><mypy><typing>
2023-02-06 18:13:37
1
2,774
John
75,365,066
2,130,515
How to avoid click intercept while sending text to email field using Selenium in headless mode
<p>I want to connect <a href="https://app.wordtune.com/account/login?product=write&amp;platform=editor&amp;afterAuthRedirect=%2Feditor" rel="nofollow noreferrer">website</a>. I write the following code:</p> <pre><code>from time import sleep from fake_useragent import UserAgent from selenium.webdriver.support.ui import WebDriverWait as W from selenium.webdriver.support import expected_conditions as E from selenium import webdriver options = webdriver.ChromeOptions() options.add_argument(&quot;--start-maximized&quot;) options.add_argument('--no-sandbox') options.add_argument('--headless') user_agent = UserAgent().random options.add_argument(f'user-agent={self.user_agent}') options.add_argument('--disable-infobars') options.add_argument('--disable-dev-shm-usage') driver = webdriver.Chrome(driver_path, options=self.options) wait_time = 10 wait_variable = W(self.driver, self.wait_time) driver.get(&quot;https://app.wordtune.com/account/login?product=write&amp;platform=editor&amp;afterAuthRedirect=%2Feditor&quot;) sleep(5) email_holder = wait_variable.until(E.presence_of_element_located((By.ID, 'email-label'))) # this does not work # I tried to focus on, click on but nothing is working. # it looks that another element receive the click # email_holder.click() email_holder.send_keys(&quot;email&quot;) </code></pre> <p>My question is how to focus and send text to email_holder ?</p>
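One thing that stands out: the id <code>email-label</code> suggests the <code>&lt;label&gt;</code> element, and labels are not focusable or typable — the click then lands on (or is intercepted by) whatever sits on top of the actual input. A hedged sketch that targets the input itself and waits for clickability (the CSS selector is an assumption — inspect the real input's attributes on the page):

```python
# hypothetical locator -- verify against the page's actual markup
from selenium.webdriver.common.by import By

email_input = wait_variable.until(
    E.element_to_be_clickable((By.CSS_SELECTOR, "input[type='email']"))
)
email_input.click()
email_input.send_keys("email@example.com")
```

<code>element_to_be_clickable</code> also covers the "another element would receive the click" case better than <code>presence_of_element_located</code>, which is satisfied while the element is still covered.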
<python><selenium><xpath><css-selectors><webdriverwait>
2023-02-06 18:09:46
2
1,790
LearnToGrow
75,365,019
15,317,733
Automatically set outline and fill color based on GeoJSON properties with geopandas
<p>I am making a program that retrieves GeoJSON data from past convective outlooks from the Storm Prediction Center (SPC) and plot it using geopandas. With my current code, it is able to plot outlooks correctly onto a map. However, the coloring isn't right. I noticed that the GeoJSON returned by the SPC included outline and fill coloring data for the categories - (in <code>properties</code> field)</p> <pre><code>{&quot;type&quot;: &quot;FeatureCollection&quot;, &quot;features&quot;: [{&quot;type&quot;: &quot;Feature&quot;, &quot;geometry&quot;: {&quot;type&quot;: &quot;MultiPolygon&quot;, &quot;coordinates&quot;: ...}, &quot;properties&quot;: {&quot;DN&quot;: 2, &quot;VALID&quot;: &quot;202109010100&quot;, &quot;EXPIRE&quot;: &quot;202109011200&quot;, &quot;ISSUE&quot;: &quot;202109010042&quot;, &quot;LABEL&quot;: &quot;TSTM&quot;, &quot;LABEL2&quot;: &quot;General Thunderstorms Risk&quot;, &quot;stroke&quot;: &quot;#55BB55&quot;, &quot;fill&quot;: &quot;#C1E9C1&quot;}}, {&quot;type&quot;: &quot;Feature&quot;, &quot;geometry&quot;: {&quot;type&quot;: &quot;MultiPolygon&quot;, &quot;coordinates&quot;: ...}, &quot;properties&quot;: {&quot;DN&quot;: 3, &quot;VALID&quot;: &quot;202109010100&quot;, &quot;EXPIRE&quot;: &quot;202109011200&quot;, &quot;ISSUE&quot;: &quot;202109010042&quot;, &quot;LABEL&quot;: &quot;MRGL&quot;, &quot;LABEL2&quot;: &quot;Marginal Risk&quot;, &quot;stroke&quot;: &quot;#005500&quot;, &quot;fill&quot;: &quot;#66A366&quot;}}, {&quot;type&quot;: &quot;Feature&quot;, &quot;geometry&quot;: {&quot;type&quot;: &quot;MultiPolygon&quot;, &quot;coordinates&quot;: ...}, &quot;properties&quot;: {&quot;DN&quot;: 4, &quot;VALID&quot;: &quot;202109010100&quot;, &quot;EXPIRE&quot;: &quot;202109011200&quot;, &quot;ISSUE&quot;: &quot;202109010042&quot;, &quot;LABEL&quot;: &quot;SLGT&quot;, &quot;LABEL2&quot;: &quot;Slight Risk&quot;, &quot;stroke&quot;: &quot;#DDAA00&quot;, &quot;fill&quot;: &quot;#FFE066&quot;}}]} </code></pre> <p>Is it possible to use the 
<code>stroke</code> and <code>fill</code> data in <code>properties</code> to automatically color every <code>MultiPolygon</code>?</p> <p>My current code is below (assume that all packages are imported)</p> <pre><code>outlook = &quot;https://www.spc.noaa.gov/products/outlook/archive/2021/day1otlk_20210901_0100_cat.lyr.geojson&quot; world = geopandas.read_file( geopandas.datasets.get_path('naturalearth_lowres') ) df = geopandas.read_file(outlook) ax = world.plot(color='white', edgecolor='#333333',linewidth=0.3) print(type(df)) s = geopandas.GeoDataFrame(df) s.plot(ax=ax,markersize=0.7,figsize=(1000,1000)) ax.set_xlim(-140, -70) # focus on continental US ax.set_ylim(25, 50) # focus on continental US plt.savefig('outlook.jpg', dpi=360) # save as outlook.jpg </code></pre> <p>I tried looking in the geopandas documentation but it didn't state how to use fields in geojson to color polygons.</p>
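geopandas does not read styling from GeoJSON <code>properties</code> automatically, but after <code>read_file</code> those properties become ordinary columns on the GeoDataFrame, so they can be fed back in as per-row colors. A sketch, assuming every feature carries both keys (it cannot run outside an environment with geopandas and the fetched file):

```python
# 'fill' and 'stroke' arrive as regular columns after geopandas.read_file
s.plot(ax=ax, color=s["fill"], edgecolor=s["stroke"], linewidth=0.7)
```

Matplotlib accepts list-likes of hex strings for <code>color</code>/<code>edgecolor</code>, so each <code>MultiPolygon</code> gets the fill and outline its own feature declared.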
<python><matplotlib><geojson><geopandas>
2023-02-06 18:06:53
1
394
Jellyfish
75,364,971
6,342,337
Issues compiling python3.10.9
<p>I have installed openssl version 1.1.1 on RHEL 7.9 and I am running the following commands</p> <pre><code>./configure --with-openssl=/opt/python/ssl --with-openssl-rpath=auto </code></pre> <pre><code>make -j8 </code></pre> <p>My gcc version is:</p> <pre><code>gcc --version gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44) Copyright (C) 2015 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. </code></pre> <p>I am seeing this output.</p> <pre><code>./python -E -S -m sysconfig --generate-posix-vars ;\ if test $? -ne 0 ; then \ echo &quot;generate-posix-vars failed&quot; ; \ rm -f ./pybuilddir.txt ; \ exit 1 ; \ fi Could not import runpy module Traceback (most recent call last): File &quot;/tmp/Python-3.10.9/Lib/runpy.py&quot;, line 15, in &lt;module&gt; import importlib.util File &quot;/tmp/Python-3.10.9/Lib/importlib/util.py&quot;, line 14, in &lt;module&gt; from contextlib import contextmanager File &quot;/tmp/Python-3.10.9/Lib/contextlib.py&quot;, line 4, in &lt;module&gt; import _collections_abc SystemError: &lt;built-in function compile&gt; returned NULL without setting an exception generate-posix-vars failed </code></pre>
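The <code>SystemError: &lt;built-in function compile&gt; returned NULL without setting an exception</code> during the bootstrap step is sometimes reported when building recent CPython with very old compilers, and GCC 4.8.5 is far below what 3.10 is routinely tested against. As a hedged suggestion — package names come from Software Collections and may differ under your subscription — try rebuilding from a clean tree under a newer toolchain:

```shell
# RHEL 7 offers newer GCCs via Software Collections (exact setup varies)
sudo yum install devtoolset-9   # assumption: the RHSCL repo is enabled
scl enable devtoolset-9 bash    # shell with the newer gcc on PATH

make distclean                  # drop objects built by the old gcc
./configure --with-openssl=/opt/python/ssl --with-openssl-rpath=auto
make -j8
```

If the error persists under the new compiler, that rules the toolchain out and points back at the OpenSSL or source tree setup.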
<python><cpython><rhel7><python-3.10>
2023-02-06 18:02:35
0
1,837
ScipioAfricanus
75,364,869
5,556,711
Can a Python module be multiple files?
<p>For years, I've known that the very definition of a Python module is as a separate file. In fact, even the <a href="https://docs.python.org/3/tutorial/modules.html" rel="nofollow noreferrer">official documentation</a> states that &quot;a module is a file containing Python definitions and statements&quot;. Yet, this <a href="https://python-packaging-tutorial.readthedocs.io/en/latest/setup_py.html" rel="nofollow noreferrer">online tutorial</a> from people who seem pretty knowledgeable states that &quot;a module usually corresponds to a single file&quot;. Where does the &quot;usually&quot; come from? Can a Python module consist of multiple files?</p>
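The "usually" exists because of packages: a package is itself a module object whose code is spread across several files (and namespace packages need not even have an <code>__init__.py</code>). A small self-contained sketch, building a throwaway package on disk, makes the point:

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway *package* on disk: one module whose code spans two files.
tmp = tempfile.mkdtemp()
pkg_dir = os.path.join(tmp, "mypkg")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("from .helper import greet\n")
with open(os.path.join(pkg_dir, "helper.py"), "w") as f:
    f.write("def greet():\n    return 'hello'\n")

sys.path.insert(0, tmp)
mypkg = importlib.import_module("mypkg")
print(type(mypkg).__name__)  # a package *is* a module object: 'module'
print(mypkg.greet())         # defined in a second file, yet one module
```
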
<python><module><package>
2023-02-06 17:52:50
3
706
David Cian
75,364,863
15,450,772
Folders included in the tar.gz, not in the wheel, setuptools build
<p>The automatic discovery of <code>setuptools.build_meta</code> includes top-level folders into the tarball that shouldn't be included.</p> <p>We were trying to build a <a href="https://gitlab.com/octopus-code/postopus" rel="noreferrer">python package</a> with <code>python3 -m build</code>. Our project has a <a href="https://setuptools.pypa.io/en/latest/userguide/package_discovery.html#src-layout" rel="noreferrer">src-layout</a> and we are using <code>setuptools</code> as the backend for the build. According to the documentation, the <a href="https://setuptools.pypa.io/en/latest/userguide/package_discovery.html#automatic-discovery" rel="noreferrer">automatic discovery</a> should figure out the project structure and build the tarball and the wheel accordingly.</p> <p>For the wheel build it works as expected, i.e. only the files under <code>src/...</code> are included on the package. On the other hand, in the tarball build also top-level folders that are on the same hierarchical level as <code>src</code> are included. For example, the <code>tests</code> and the <code>docs</code> folder, which shouldn't be included. Interestingly, only a few files from the <code>docs</code> were included, not all of them.</p> <p>We tried to explicitly exclude the <code>tests</code> and <code>docs</code> folder in the <code>pyproject.toml</code>, the <code>setup.cfg</code> and the <code>MANIFEST.in</code>, following the respective documentations, but none of them helped.</p> <p>We configured the build backend in the <code>pyproject.toml</code> like so:</p> <pre class="lang-ini prettyprint-override"><code>[build-system] requires = [&quot;setuptools&quot;] build-backend = &quot;setuptools.build_meta&quot; ... </code></pre> <p>We don't have a <code>setup.py</code> and our <code>setup.cfg</code> only contains <code>flake8</code> specifications.</p> <p>We are using <code>setuptools-67.1.0</code>, we tried python versions 3.8, 3.9 and 3.10. 
We tried it with an Ubuntu 20.04 and a Debian 11 running conda 3.8.13.</p> <p>The tarball and the wheel can be seen <a href="https://pypi.org/project/postopus/#files" rel="noreferrer">here</a>.</p>
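While tracking down the root cause, note that the sdist file list is computed from setuptools' defaults plus <code>MANIFEST.in</code>, which is consulted only for the tarball — the wheel is built from the package data proper, which is why it already looks right. An explicit <code>prune</code> is usually honored even when pyproject/setup.cfg excludes are not (a workaround sketch; exact behavior can vary across setuptools versions):

```
# MANIFEST.in at the project root
prune tests
prune docs
```

The <code>check-manifest</code> tool can also diff what would land in the sdist against version control, which helps identify which rule pulled the stray files in.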
<python><python-3.x><setuptools>
2023-02-06 17:52:13
1
451
KevinYanesG
75,364,827
12,458,212
Pickling/Unpickling vs Reading/Writing from txt - Memory Efficiency
<p>Recently I've been interested in optimizing data processing for a project where I send many API requests. Instead of keeping the responses in memory (i.e., appending them to a list), I write the returned data to a pickle file. It's my understanding that this is more efficient when considering memory limitations. However, when I load the data back in, I noticed a significant amount of memory being used up. After some research, it looks like the unpickled object is stored in memory and takes up a considerable amount of space. So instead of pickling/unpickling, would writing to a JSON/txt file be more efficient? I would really appreciate some technical guidance on this.</p>
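Whichever format is chosen, the load step rebuilds Python objects in memory; the bigger lever is usually *streaming* rather than loading everything at once. <code>pickle.load</code> materializes the whole object graph in one go, whereas a line-per-record text format (JSON Lines) lets each record be processed and discarded. A small sketch:

```python
import json
import os
import pickle
import tempfile

records = [{"id": i, "value": i * i} for i in range(5)]

# Pickle rebuilds the whole object graph in memory on load...
pkl = os.path.join(tempfile.mkdtemp(), "data.pkl")
with open(pkl, "wb") as f:
    pickle.dump(records, f)

# ...whereas JSON Lines can be read one record at a time, so peak
# memory is bounded by a single record, not the whole dataset.
jl = pkl.replace(".pkl", ".jsonl")
with open(jl, "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

def stream(path):
    with open(path) as f:
        for line in f:
            yield json.loads(line)

print(sum(rec["value"] for rec in stream(jl)))  # 0+1+4+9+16 = 30
```
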
<python><memory><pickle>
2023-02-06 17:48:36
0
695
chicagobeast12
75,364,758
12,076,197
Split a bottom level column tuple into another level
<p>My current dataframe has two levels. I'm looking to add a third by splitting tuples, which are the names of the columns. See example:</p> <p>Original DF:</p> <pre><code>Category (A,Cat) (B,Dog) (B,Bird) (B,Frog) (HH,Lion) (HH,Tiger) 48 28 585 4 233 44 11 434 23 854 32 10 </code></pre> <p>Desired DF: &quot;Category&quot; is top level. Letter (A,B,HH) is the second level. Then the animal is the bottom level of the dataframe</p> <pre><code>Category A B B B HH HH Cat Dog Bird Frog Lion Tiger 48 28 585 4 233 44 11 434 23 854 32 10 </code></pre> <p>I don't have much experience with working with Multi-index columns in dataframes. Any suggestions is appreciated.</p>
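Assuming the tuple-like headers are strings such as <code>"(A,Cat)"</code> under a top-level <code>"Category"</code> label (a hypothetical reconstruction of the frame), <code>pd.MultiIndex.from_tuples</code> can split each bottom label into two levels of its own:

```python
import pandas as pd

# Hypothetical reconstruction: "Category" on top, "(Letter,Animal)" strings below.
df = pd.DataFrame(
    [[48, 28, 585, 4, 233, 44], [11, 434, 23, 854, 32, 10]],
    columns=pd.MultiIndex.from_tuples(
        [("Category", c) for c in
         ["(A,Cat)", "(B,Dog)", "(B,Bird)", "(B,Frog)", "(HH,Lion)", "(HH,Tiger)"]]
    ),
)

# Split each bottom-level "(Letter,Animal)" string into two levels,
# giving three levels: Category / Letter / Animal.
df.columns = pd.MultiIndex.from_tuples(
    [(top, *bottom.strip("()").split(",")) for top, bottom in df.columns]
)
print(df[("Category", "B", "Dog")].tolist())  # [28, 434]
```
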
<python><dataframe>
2023-02-06 17:41:17
1
641
dmd7
75,364,642
14,790,056
I want to groupby and drop groups if the shape is 3 and none of the values from a column contains zero
<p>I want to groupby and drop groups if it satisfies two conditions (the shape is 3 and column A doesn't contain zeros).</p> <p>My df</p> <pre><code>ID value A 3 A 2 A 0 B 1 B 1 C 3 C 3 C 4 D 0 D 5 D 5 E 6 E 7 E 7 F 3 F 2 </code></pre> <p>my desired df would be</p> <pre><code>ID value A 3 A 2 A 0 B 1 B 1 D 0 D 5 D 5 F 3 F 2 </code></pre>
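<code>groupby(...).filter</code> takes a per-group predicate and keeps the rows of every group for which it returns True; here a group survives unless it has exactly three rows and no zero. A sketch on the data above:

```python
import pandas as pd

df = pd.DataFrame({
    "ID": list("AAABBCCCDDDEEEFF"),
    "value": [3, 2, 0, 1, 1, 3, 3, 4, 0, 5, 5, 6, 7, 7, 3, 2],
})

# Drop a group only when BOTH conditions hold: exactly 3 rows
# AND none of its `value` entries is zero.
out = df.groupby("ID").filter(lambda g: len(g) != 3 or (g["value"] == 0).any())
print(out)  # keeps A, B, D, F; drops C and E
```
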
<python><pandas><dataframe>
2023-02-06 17:28:57
1
654
Olive
75,364,539
2,890,093
How to run Python to recognize module hierarchy for Airflow DAGs?
<p>A have several DAGs of similar structure and I wanted to use advice described in Airflow docs as <a href="https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/modules_management.html" rel="nofollow noreferrer">Modules Management</a>:</p> <blockquote> <p>This is an example structure that you might have in your dags folder:</p> <pre><code>&lt;DIRECTORY ON PYTHONPATH&gt; | .airflowignore -- only needed in ``dags`` folder, see below | -- my_company | __init__.py | common_package | | __init__.py | | common_module.py | | subpackage | | __init__.py | | subpackaged_util_module.py | | my_custom_dags | __init__.py | my_dag1.py | my_dag2.py | base_dag.py </code></pre> <p>In the case above, these are the ways you could import the python files:</p> <pre><code>from my_company.common_package.common_module import SomeClass from my_company.common_package.subpackage.subpackaged_util_module import AnotherClass from my_company.my_custom_dags.base_dag import BaseDag </code></pre> </blockquote> <p>That works fine in Airflow.</p> <p>However I used to validate my DAGs locally by running (also as advised by a piece of documentation - <a href="https://airflow.apache.org/docs/apache-airflow/stable/best-practices.html#dag-loader-test" rel="nofollow noreferrer">DAG Loader Test</a>):</p> <pre><code>python my_company/my_custom_dags/my_dag1.py </code></pre> <p>When using the imports, it complains:</p> <pre><code>Traceback (most recent call last): File &quot;/[...]/my_company/my_custom_dags/my_dag1.py&quot;, line 1, in &lt;module&gt; from my_company.common_package.common_module import SomeClass ModuleNotFoundError: No module named 'my_company' </code></pre> <p>How should I run it so that it understands the context and reckognizes the package?</p>
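Running <code>python my_company/my_custom_dags/my_dag1.py</code> puts the script's own directory (<code>my_custom_dags/</code>) on <code>sys.path</code>, not the project root, so the <code>my_company</code> package is invisible. Two hedged ways to run the loader test with the right context (paths are placeholders):

```
# from the <DIRECTORY ON PYTHONPATH> that contains my_company/:
python -m my_company.my_custom_dags.my_dag1

# or keep the file-path invocation and point PYTHONPATH at that root:
PYTHONPATH=/path/to/project_root python my_company/my_custom_dags/my_dag1.py
```

The <code>-m</code> form executes the file as a module inside its package, so the absolute imports resolve exactly as they do under Airflow.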
<python><airflow><python-import>
2023-02-06 17:18:51
1
10,783
Kombajn zboΕΌowy
75,364,359
3,045,351
Using Selenium to run JavaScript and return the results
<p>I am trying to use Python Selenium to grab the data payload populating this <a href="https://www.route.org.uk/q42022-dashboard" rel="nofollow noreferrer">page</a>. Data seems to being fed in using Javascript from some Tableau API's. I'm not really a Javascript guy, but I've had a look in Chrometools and have found the following .js files listed as resources:</p> <p><a href="https://public.tableau.com/vizql/v_202242301300809/javascripts/jquery.min.js" rel="nofollow noreferrer">https://public.tableau.com/vizql/v_202242301300809/javascripts/jquery.min.js</a> <a href="https://public.tableau.com/vizql/v_202242301300809/javascripts/ViewerBootstrap.js" rel="nofollow noreferrer">https://public.tableau.com/vizql/v_202242301300809/javascripts/ViewerBootstrap.js</a> <a href="https://public.tableau.com/vizql/v_202242301300809/javascripts/jquery.min.js" rel="nofollow noreferrer">https://public.tableau.com/vizql/v_202242301300809/javascripts/jquery.min.js</a> <a href="https://public.tableau.com/vizql/v_202242301300809/javascripts/mscorlib.min.js" rel="nofollow noreferrer">https://public.tableau.com/vizql/v_202242301300809/javascripts/mscorlib.min.js</a> <a href="https://public.tableau.com/vizql/v_202242301300809/javascripts/jsstrings_en.js" rel="nofollow noreferrer">https://public.tableau.com/vizql/v_202242301300809/javascripts/jsstrings_en.js</a> <a href="https://public.tableau.com/vizql/v_202242301300809/javascripts/messages.en_US.js" rel="nofollow noreferrer">https://public.tableau.com/vizql/v_202242301300809/javascripts/messages.en_US.js</a> <a href="https://public.tableau.com/vizql/v_202242301300809/javascripts/formatters-and-parsers.en_US.js" rel="nofollow noreferrer">https://public.tableau.com/vizql/v_202242301300809/javascripts/formatters-and-parsers.en_US.js</a> <a href="https://public.tableau.com/vizql/v_202242301300809/javascripts/vqlweb.js" rel="nofollow noreferrer">https://public.tableau.com/vizql/v_202242301300809/javascripts/vqlweb.js</a> <a 
href="https://public.tableau.com/vizql/v_202242301300809/javascripts/require.min.js" rel="nofollow noreferrer">https://public.tableau.com/vizql/v_202242301300809/javascripts/require.min.js</a></p> <p>I have tried the below code with a few of the different .js files and all I keep getting back is None.</p> <pre><code>import sys from selenium import webdriver import chromedriver_autoinstaller from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.desired_capabilities import DesiredCapabilities sys.path.append(&quot;G:\\Python36\\mypath&quot;) chromedriver_autoinstaller.install() chrome_options = Options() chrome_options.add_argument(&quot;--headless&quot;) d = DesiredCapabilities.CHROME #d['goog:loggingPrefs'] = { 'browser':'ALL' } driver = webdriver.Chrome(chrome_options=chrome_options) drv = driver.get(&quot;https://www.route.org.uk/q42022-dashboard&quot;) print(driver.execute_script(&quot;return //public.tableau.com/vizql/v_202242301300809/javascripts/ViewerBootstrap.js&quot;)) driver.quit() </code></pre> <p>...at this stage I don't know exactly what I should be getting back in terms of a result, but something that resembles the content of the rendered page would be a good start...</p> <p>Thanks</p>
<python><python-3.x><selenium>
2023-02-06 17:01:04
1
4,190
gdogg371
75,364,346
4,453,566
Using Custom Function with multiple parameters and partitioning/grouping by a column
<p>Is it possible to do something like the following in python polars:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl import statsmodels.api as sm lowess = sm.nonparametric.lowess df = pl.DataFrame([pl.Series('x', ['a', 'a', 'a', 'b','b', 'b']), pl.Series('y', [1, 2, 3, 1, 2, 3]), pl.Series('z', [.2, .3, .5, .1, .3, .7])] ) df.with_columns( pl.struct('z', 'y').map_batches(lambda cols: pl.DataFrame(lowess(cols['z'], cols['y'], frac = .1))) .over('x') ) </code></pre> <pre><code># ComputeError: TypeError: cannot select elements using Sequence with elements of type 'str' </code></pre> <p>I want to group by one or more columns and then apply a function with more than 1 argument.</p>
<python><python-polars>
2023-02-06 16:59:40
1
1,364
troh
75,364,299
12,097,553
go.Scatterpolar: trying to render a radar graph with varying line colors not working
<p>I am trying to build a radar chart where each line is of different color.</p> <p>I feel like I have followed the doc closely and I am now facing an error I can't seem solve, especially because NO ERROR is output!</p> <p>here is some dummy data I am working with :</p> <pre><code>r = [52,36,85] theta = [&quot;analytique&quot;, &quot;analogique&quot;, &quot;affectif&quot;] colors = [&quot;blue&quot;, &quot;red&quot;,&quot;yellow&quot;] </code></pre> <p>Here is what I have for my graph:</p> <pre><code>for i in range(len(theta)): fig_reception.add_trace(go.Scatterpolar( mode='lines+text', theta=[theta[i]], r=[r[i]], , line_color=text_colors[i], fillcolor='#d3d3d3', marker=dict(color=text_colors), )) fig_reception.update_layout(autosize=False, height=305, polar=dict(radialaxis = dict(range=[0,100],visible = False), angularaxis=dict(rotation=180,direction=&quot;clockwise&quot;) ) ) fig_reception.update_layout( template=None, polar = dict(bgcolor = &quot;rgba(255, 255, 255, 0.2)&quot;),) fig_reception.update_layout( font=dict( size=16, color=&quot;black&quot;, family=&quot;Courier New, monospace&quot;, ), title=&quot;RΓ©ception&quot;, title_font_family=&quot;Courier New, monospace&quot;, showlegend=False ) </code></pre> <p>what's strange its that when I hover each line, a frame with the right color and value shows up.</p> <p>Here is a picture <a href="https://i.sstatic.net/Nn0G7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Nn0G7.png" alt="enter image description here" /></a></p>
<python><plotly><plotly-dash>
2023-02-06 16:54:17
1
1,005
Murcielago
75,364,197
6,668,031
Adding multiple users to a Yammer group via PowerShell or Python
<p>Looking for a way through which I can add multiple users to a Yammer group.</p> <p><a href="https://www.c-sharpcorner.com/blogs/how-to-add-user-to-yammer-group-using-powershell" rel="nofollow noreferrer">I tried this</a>, but it doesn't work.</p> <p>Are there any PowerShell scripts or an option in the UI to add multiple users? I also tried to separate the users via comma delimiters, but this doesn't work either.</p> <p>Have also tried adding an AAD group, but that didn't work either (the group doesn't appear in the dropdown).</p> <p>Below is the PS script:</p> <pre><code># Input Parameters $developerToken = &quot;999-test&quot; $uri=&quot;https://www.yammer.com/api/v1/group_memberships.json&quot; $headers = @{ Authorization=(&quot;Bearer &quot; + $developerToken) } $body=@{group_id=&quot;groupid&quot;;email=&quot;test@test.com&quot;} # Invoke Web Request - Add user to Group $webRequest = Invoke-WebRequest -Uri $uri -Method POST -Headers $headers -Body $body </code></pre>
<python><python-3.x><powershell><yammer><yammer-api>
2023-02-06 16:45:36
0
580
Joseph
75,364,155
4,115,123
Ansible + Python - Programmatically provide ansible-vault the vault password
<p>I followed the tutorial provided here: <a href="https://stackoverflow.com/questions/64705336/editing-ansible-vault-file-from-a-playbook">Editing Ansible vault file from a playbook</a> to create the ability to programmatically update my ansible vaults.</p> <p>Let's say, though, this is part of a much larger pipeline, where it is unreasonable to expect the end user to sit around waiting for the 3+ <code>ansible-vault</code> vault password prompts.</p> <p>Is there a way to programmatically provide an <code>ansible-vault</code> call with the <code>vault password</code> such that it is automatically entered when it is required?</p> <p>Ideally, the user would enter the vault password once at run time as part of the larger script, and whenever the script reaches a step that requires <code>ansible-vault</code>, the password would be supplied automatically so the user doesn't have to keep coming back to enter it.</p>
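<code>ansible-vault</code> (and <code>ansible-playbook</code>) can read the password non-interactively via <code>--vault-password-file</code>, the <code>ANSIBLE_VAULT_PASSWORD_FILE</code> environment variable, or <code>vault_password_file</code> in <code>ansible.cfg</code>, so the larger script can prompt once and feed every later call. A sketch, assuming a POSIX shell wrapper:

```
#!/bin/sh
# prompt once at the top of the pipeline
printf 'Vault password: '
stty -echo; read VAULT_PASS; stty echo; echo

# stash it where only this user can read it, hand the file to every call
PASSFILE="$(mktemp)"
chmod 600 "$PASSFILE"
printf '%s' "$VAULT_PASS" > "$PASSFILE"

ansible-vault edit secrets.yml --vault-password-file "$PASSFILE"
ansible-playbook site.yml --vault-password-file "$PASSFILE"

rm -f "$PASSFILE"
```

The password file may also be an executable script that prints the password to stdout, and newer Ansible offers the same via <code>--vault-id label@/path/to/file</code>.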
<python><ansible><ansible-vault>
2023-02-06 16:40:56
2
1,057
Jibril
75,364,133
653,770
How do I replace a string-value in a specific column using method chaining?
<p>I have a pandas data frame, where some string values are &quot;NA&quot;. I want to replace these values in a specific column (i.e. the 'strCol' in the example below) using method chaining.</p> <p>How do I do this? (I googled quite a bit without success even though this should be easy?! ...)</p> <p>Here is a minimal example:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.DataFrame({'A':[1,2,3,4], 'B':['val1','val2','NA','val3']}) df = ( df .rename(columns={'A':'intCol', 'B':'strCol'}) # method chain example operation 1 .astype({'intCol':float}) # method chain example operation 2 # .where(df['strCol']=='NA', pd.NA) # how to replace the sting 'NA' here? this does not work ... ) df </code></pre>
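One chain-friendly option is <code>assign</code> with a lambda: the lambda receives the frame as it exists at that point in the chain — referencing the outer <code>df</code> fails because <code>df</code> still has column <code>B</code>, not <code>strCol</code>, when the chain is built. A sketch:

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3, 4], "B": ["val1", "val2", "NA", "val3"]})

out = (
    df
    .rename(columns={"A": "intCol", "B": "strCol"})
    .astype({"intCol": float})
    # assign re-binds the column; the lambda sees the frame *at this
    # point in the chain*, so 'strCol' already exists here
    .assign(strCol=lambda d: d["strCol"].replace("NA", pd.NA))
)
print(out)
```

If a boolean condition is preferred, <code>mask</code> is the chain-friendly inverse of <code>where</code>: <code>.assign(strCol=lambda d: d["strCol"].mask(d["strCol"] == "NA", pd.NA))</code>. Note that <code>where</code> *keeps* values where the condition is True, which is the opposite of what the commented-out line intends.
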
<python><pandas><method-chaining>
2023-02-06 16:39:10
3
1,292
packoman
75,364,117
1,842,491
How can I narrow dataclass annotations (i.e., how can I update type hints after handling default None in post_init)?
<p>I have a dataclass that can take a keyword value, or, if no value is specified, infer a value from other attributes.</p> <pre><code>import dataclasses @dataclasses.dataclass class RelatedValues: primary: float _: dataclasses.KW_ONLY secondary: float | None = None def __post_init__(self): if self.secondary is None: self.secondary = self.primary </code></pre> <p>This code works, but it leaves me stuck with <code>float | None</code> as the type hint for <code>.secondary</code> even though <code>.secondary</code> <em>cannot possibly</em> be <code>None</code> after <code>__post_init__</code>.</p> <p><code>cast</code>-ing <code>self.secondary</code> in <code>__post_init__</code> doesn't work. This does:</p> <pre><code>import uuid NULL_FLOAT = float(int(uuid.uuid4())) @dataclasses.dataclass class RelatedValues: primary: float _: dataclasses.KW_ONLY secondary: float = NULL_FLOAT def __post_init__(self): if self.secondary == NULL_FLOAT: self.secondary = self.primary </code></pre> <p>But it feels distinctly non-Pythonic.</p> <p>This also works:</p> <pre><code>@dataclasses.dataclass class RelatedValues: primary: float _: dataclasses.KW_ONLY _secondary: float | None = None def __post_init__(self): if self._secondary is None: self.secondary = self.primary else: self.secondary = self._secondary </code></pre> <p>or this:</p> <pre><code>@dataclasses.dataclass class RelatedValues: primary: float _: dataclasses.KW_ONLY _secondary: float | None = None @property def secondary(self) -&gt; float: if self._secondary is None: return self.primary else: return self._secondary </code></pre> <p>But the latter two are just mangling my kwargs for the sake of type narrowing, which kind of feels wrong.</p> <p>What am I missing?</p>
<python><mypy>
2023-02-06 16:37:36
1
1,509
Shay
75,364,099
12,596,824
Subtracting asctime from time module to get hours minutes and seconds in python
<p>How can I subtract end - start to get hours, minutes and seconds of time completion in Python?</p> <p>I have some pseudocode here; I want to convert the print statement to what I said above.</p> <pre><code>start = time.asctime(time.localtime(time.time())) &lt; some code here&gt; end = time.asctime(time.localtime(time.time())) print(end - start) </code></pre>
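A sketch of the idea: `asctime` returns strings, which cannot be subtracted, so keep the raw `time.time()` floats (which support subtraction) and format only at the end:

```python
import time

start = time.time()            # seconds since the epoch, as a float
# ... some code here ...
end = time.time()

elapsed = end - start          # plain float subtraction
hours, rem = divmod(int(elapsed), 3600)
minutes, seconds = divmod(rem, 60)
print(f"{hours:02d}:{minutes:02d}:{seconds:02d}")
```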
<python>
2023-02-06 16:35:36
1
1,937
Eisen
75,364,001
7,800,760
Python: forcing imported string interpolation for SPARQL (not SQL)
<p>I use f-strings to build SPARQL queries with a variable and they work well as follows:</p> <pre><code>for label in longnames: sparqlquery = f&quot;&quot;&quot; PREFIX osr:&lt;http://dati.senato.it/osr/&gt; SELECT * WHERE {{ ?uri a osr:Senatore. ?uri rdfs:label &quot;{label}&quot;^^xsd:string. }} &quot;&quot;&quot; </code></pre> <p>which would run N queries with each having a different label value coming from a longnames list[str].</p> <p>I would like to externalize these SPARQL queries to a separate file and therefore tried to create a <strong>sparqlqueries.py file</strong> as follows:</p> <pre><code>DBP_LANGS = &quot;&quot;&quot; PREFIX dbo: &lt;http://dbpedia.org/ontology/&gt; PREFIX sdo: &lt;https://schema.org/&gt; CONSTRUCT { ?lang a sdo:Language ; sdo:alternateName ?iso6391Code . } WHERE { ?lang a dbo:Language ; dbo:iso6391Code ?iso6391Code . FILTER (STRLEN(?iso6391Code)=2) # to filter out non-valid values } LIMIT {QUERY_LIMIT} &quot;&quot;&quot; </code></pre> <p>(edit: as noted in the comments that is not an f-string but a simple string literal. Could not use the f-string as the QUERY_LIMIT variable is unknown in that module)</p> <p>and in my new <strong>sample.py</strong> code I tried:</p> <pre><code>from SPARQLWrapper import SPARQLWrapper from sparqlqueries import DBP_LANGS sparql = SPARQLWrapper(&quot;http://dbpedia.org/sparql&quot;) QUERY_LIMIT = 5 print(DBP_LANGS) # just to debug sparql.setQuery(DBP_LANGS) </code></pre> <p>but this SPARQL query fails and as the previous print shows, the QUERY_LIMIT is not interpolated at all in the f-string:</p> <pre><code>PREFIX dbo: &lt;http://dbpedia.org/ontology/&gt; PREFIX sdo: &lt;https://schema.org/&gt; CONSTRUCT { ?lang a sdo:Language ; sdo:alternateName ?iso6391Code . } WHERE { ?lang a dbo:Language ; dbo:iso6391Code ?iso6391Code . 
FILTER (STRLEN(?iso6391Code)=2) # to filter out non-valid values } LIMIT {QUERY_LIMIT} </code></pre> <p>and therefore understandably the query fails.</p> <p>Is there a way to &quot;force&quot; the f-string to be re-interpolated in the current code?</p>
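One alternative sketch (an assumption on my part that deferred interpolation is acceptable): keep the externalized query as a `string.Template`, which, unlike `str.format`, does not require doubling every literal SPARQL brace, and substitute at call time once `QUERY_LIMIT` is known:

```python
from string import Template

# sparqlqueries.py: a template, not an f-string; SPARQL braces stay untouched
DBP_LANGS = Template("""
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX sdo: <https://schema.org/>
CONSTRUCT { ?lang a sdo:Language ; sdo:alternateName ?iso6391Code . }
WHERE {
  ?lang a dbo:Language ; dbo:iso6391Code ?iso6391Code .
  FILTER (STRLEN(?iso6391Code)=2)
}
LIMIT $QUERY_LIMIT
""")

# sample.py: interpolate at the point where the limit is known
query = DBP_LANGS.substitute(QUERY_LIMIT=5)
```

`DBP_LANGS.format(QUERY_LIMIT=5)` on a plain string would work too, but then every literal `{` and `}` in the SPARQL text would have to be written as `{{` and `}}`.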
<python><python-module><f-string>
2023-02-06 16:26:32
0
1,231
Robert Alexander
75,363,784
15,452,168
Pivot data in case of multiple values
<p>I have a pandas timeseries dataframe df with columns <code>date, week, week_start_date, country, campaign_name, active</code>. For some dates we have information on multiple campaigns.</p> <p>For example:</p> <pre><code>data = [[&quot;2023.01.02&quot;, 1, &quot;2023.01.01&quot;, &quot;BR&quot;, &quot;SALE-1&quot;, 1], [&quot;2023.01.02&quot;, 1, &quot;2023.01.01&quot;, &quot;BR&quot;, &quot;SALE-2&quot;, 1], [&quot;2023.01.02&quot;, 1, &quot;2023.01.01&quot;, &quot;NL&quot;, &quot;SALE-1&quot;, 1], [&quot;2023.01.02&quot;, 1, &quot;2023.01.01&quot;, &quot;DE&quot;, &quot;SALE-1&quot;, 1]] df = pd.DataFrame(data, columns=[&quot;date&quot;, &quot;week&quot;, &quot;week_start_date&quot;, &quot;country&quot;, &quot;campaign_name&quot;, &quot;active&quot;]) date week week_start_date country campaign_name active 2023.01.02 1 2023.01.01 BR SALE-1 1 2023.01.02 1 2023.01.01 BR SALE-2 1 2023.01.02 1 2023.01.01 NL SALE-1 1 2023.01.02 1 2023.01.01 DE SALE-1 1 </code></pre> <p>I don't mind having separate date/country time-series combinations, but when the same country has two campaigns on a date I would like to pivot them:</p> <pre><code>date week week_start_date country campaign_name active campaign_name_n active_n total_active 2023.01.02 1 2023.01.01 BR SALE-1 1 SALE-2 1 2 2023.01.02 1 2023.01.01 NL SALE-1 1 NaN NaN 1 2023.01.02 1 2023.01.01 DE SALE-1 1 NaN NaN 1 </code></pre> <p>Now <code>campaign_name_n, active_n</code> could be any number, based on the values we find while running the loop.</p> <p>I am trying to use</p> <pre><code>import pandas as pd # Load your data into a pandas DataFrame df = pd.read_csv(&quot;data.csv&quot;) # Group the data by date, week, week_start_date, country, and days_active grouped = df.groupby([&quot;date&quot;, &quot;week&quot;, &quot;week_start_date&quot;, &quot;country&quot;, &quot;days_active&quot;]) # Create a dictionary to store the campaign names for each group campaign_names = {} # Iterate through the groups for name, group in grouped: # Check if there are multiple entries for a particular date if len(group) &gt; 1: # Create new columns for the campaign names for i, row in enumerate(group.itertuples()): campaign_name = &quot;campaign_name_{}&quot;.format(i + 1) campaign_names[row.Index] = campaign_name df.at[row.Index, campaign_name] = row.campaign_name # Add the campaign name columns to the DataFrame for index, campaign_name in campaign_names.items(): df.at[index, &quot;campaign_name&quot;] = campaign_name # Drop the original campaign_name column df = df.drop(columns=[&quot;campaign_name&quot;]) # Save the grouped and modified data to a new file df.to_csv(&quot;grouped_data.csv&quot;, index=False) </code></pre> <p>but I am getting all the campaigns pivoted, which is not intended. It would be great if someone could help here. Thank you!</p>
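A loop-free sketch of the reshape described above (not the original code; the `_n` suffixes follow the naming in the desired output, and `cumcount` numbers duplicate campaigns within each date/country group):

```python
import pandas as pd

data = [["2023.01.02", 1, "2023.01.01", "BR", "SALE-1", 1],
        ["2023.01.02", 1, "2023.01.01", "BR", "SALE-2", 1],
        ["2023.01.02", 1, "2023.01.01", "NL", "SALE-1", 1],
        ["2023.01.02", 1, "2023.01.01", "DE", "SALE-1", 1]]
df = pd.DataFrame(data, columns=["date", "week", "week_start_date",
                                 "country", "campaign_name", "active"])

keys = ["date", "week", "week_start_date", "country"]
df["n"] = df.groupby(keys).cumcount()          # 0, 1, ... within each group

# one column pair per campaign number found in the data
wide = df.pivot(index=keys, columns="n", values=["campaign_name", "active"])
wide.columns = [f"{col}_{i}" if i else col for col, i in wide.columns]
wide["total_active"] = wide.filter(like="active").sum(axis=1)
wide = wide.reset_index()
```

Countries with only one campaign get `NaN` in the extra columns, and the number of `campaign_name_n`/`active_n` pairs grows automatically with the largest group.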
<python><pandas><dataframe><pivot><data-cleaning>
2023-02-06 16:06:07
1
570
sdave
75,363,774
3,734,059
How to read top-level requirements (as from requirements.in) from setup.py and write back pinned requirements (as in requirements.txt)?
<p>I have a package with a <code>setup.py</code> file and want to use <code>pip-tools</code> to pin my dependencies for production.</p> <p>Let's say my <code>setup.py</code> looks as follows:</p> <pre><code>#!/usr/bin/env python import pathlib from setuptools import setup, find_packages setup( author=&quot;Foo&quot;, description=&quot;My package&quot;, install_requires=[&quot;package1==1.0&quot;, &quot;package2==2.0&quot;], extras_require={ &quot;top_level&quot;: [&quot;package1&quot;, &quot;package2&quot;], }, version=&quot;0.1.0&quot;, ) </code></pre> <p>How could I here track my top level requirements within a <code>setup.py</code> and write them back to the same file within the section <code>install_requires</code>? Would I just <code>pip-compile</code> from <code>setup.py</code> into a <code>requirements.txt</code> and read the contents from this file into <code>install_requires</code>?</p>
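For what it's worth, `pip-compile` can read `setup.py` directly, so one workflow sketch (assuming pip-tools is installed) keeps loose constraints in `install_requires` and writes the pins to `requirements.txt` instead of back into `setup.py`:

```shell
# pin everything declared in install_requires to requirements.txt
pip-compile setup.py --output-file requirements.txt

# or pin only the "top_level" extra declared in extras_require
pip-compile setup.py --extra top_level --output-file requirements.txt
```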
<python><pip-tools>
2023-02-06 16:05:22
2
6,977
Cord Kaldemeyer
75,363,758
15,991,297
Finding Last n Groups of Rows in Dataframe
<p>I have a large dataframe (1m+ rows) that contains test data. A snapshot of &quot;Events&quot; was taken at various times and up to three rows were added to the dataframe per snapshot. Eg, in the extract below the first snapshot for Event At223 was taken at 18/03/2016 18:10:45, the second at 21/03/2016 10:14:28, etc.</p> <p>I want to filter the dataframe so that it returns only the last n snapshots per Ref. Refs are unique whereas Events may be duplicated.</p> <p>I'm new to Pandas but have tried various combinations of sort_values, groupby and tail but cannot get the desired result. Eg:</p> <pre><code>df = df.sort_values(['Ref', 'Time']).groupby(['Time', 'Ref', 'TestId']).tail(3) </code></pre> <p>Can anyone suggest how to do it? In the desired result example below n = 3 so it shows the last three snapshots per Ref.</p> <p>Extract:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">Time</th> <th style="text-align: left;">Ref</th> <th style="text-align: left;">Event</th> <th style="text-align: left;">EndTime</th> <th style="text-align: left;">TestId</th> <th style="text-align: left;">TestNames</th> <th style="text-align: left;">Result</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">18/03/2016 18:10:45</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">28212</td> <td style="text-align: left;">One</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">18/03/2016 18:10:45</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">466299</td> <td style="text-align: left;">Two</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">18/03/2016 18:10:45</td> <td style="text-align: left;">1.123717985</td> <td
style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">58805</td> <td style="text-align: left;">Three</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">21/03/2016 10:14:28</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">28212</td> <td style="text-align: left;">One</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">21/03/2016 10:14:28</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">466299</td> <td style="text-align: left;">Two</td> <td style="text-align: left;">4</td> </tr> <tr> <td style="text-align: left;">21/03/2016 10:14:28</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">58805</td> <td style="text-align: left;">Three</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">21/03/2016 12:44:34</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">28212</td> <td style="text-align: left;">One</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">21/03/2016 12:44:34</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">466299</td> <td style="text-align: left;">Two</td> <td style="text-align: left;">4.5</td> </tr> <tr> <td style="text-align: left;">21/03/2016 12:44:34</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td 
style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">58805</td> <td style="text-align: left;">Three</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:05:16</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">28212</td> <td style="text-align: left;">One</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:05:16</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">466299</td> <td style="text-align: left;">Two</td> <td style="text-align: left;">4.5</td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:05:16</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">58805</td> <td style="text-align: left;">Three</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:14:22</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">28212</td> <td style="text-align: left;">One</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:14:22</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">466299</td> <td style="text-align: left;">Two</td> <td style="text-align: left;">4.5</td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:14:22</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td 
style="text-align: left;">58805</td> <td style="text-align: left;">Three</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">01/04/2016 10:37:43</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">28212</td> <td style="text-align: left;">One</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">01/04/2016 10:37:43</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">466299</td> <td style="text-align: left;">Two</td> <td style="text-align: left;">4.5</td> </tr> <tr> <td style="text-align: left;">01/04/2016 10:37:43</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">58805</td> <td style="text-align: left;">Three</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">18/03/2016 18:12:12</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">28214</td> <td style="text-align: left;">Eight</td> <td style="text-align: left;">7</td> </tr> <tr> <td style="text-align: left;">18/03/2016 18:12:12</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">1212772</td> <td style="text-align: left;">Nine</td> <td style="text-align: left;">1.58</td> </tr> <tr> <td style="text-align: left;">18/03/2016 18:12:12</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">58805</td> <td 
style="text-align: left;">Ten</td> <td style="text-align: left;">4.4</td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:03:48</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">28214</td> <td style="text-align: left;">Eight</td> <td style="text-align: left;">7.2</td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:03:48</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">1212772</td> <td style="text-align: left;">Nine</td> <td style="text-align: left;">1.58</td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:03:48</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">58805</td> <td style="text-align: left;">Ten</td> <td style="text-align: left;">4.4</td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:19:15</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">28214</td> <td style="text-align: left;">Eight</td> <td style="text-align: left;">7.2</td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:19:15</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">1212772</td> <td style="text-align: left;">Nine</td> <td style="text-align: left;">1.58</td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:19:15</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">58805</td> <td style="text-align: left;">Ten</td> <td 
style="text-align: left;">4.5</td> </tr> <tr> <td style="text-align: left;">01/04/2016 12:48:13</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">28214</td> <td style="text-align: left;">Eight</td> <td style="text-align: left;">7.2</td> </tr> <tr> <td style="text-align: left;">01/04/2016 12:48:13</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">1212772</td> <td style="text-align: left;">Nine</td> <td style="text-align: left;">1.59</td> </tr> <tr> <td style="text-align: left;">01/04/2016 12:48:13</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">58805</td> <td style="text-align: left;">Ten</td> <td style="text-align: left;">4.5</td> </tr> </tbody> </table> </div> <p>Desired result:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">Time</th> <th style="text-align: left;">Ref</th> <th style="text-align: left;">Event</th> <th style="text-align: left;">EndTime</th> <th style="text-align: left;">TestId</th> <th style="text-align: left;">TestNames</th> <th style="text-align: left;">Result</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">21/03/2016 13:05:16</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">28212</td> <td style="text-align: left;">One</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:05:16</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: 
left;">466299</td> <td style="text-align: left;">Two</td> <td style="text-align: left;">4.5</td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:05:16</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">58805</td> <td style="text-align: left;">Three</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:14:22</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">28212</td> <td style="text-align: left;">One</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:14:22</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">466299</td> <td style="text-align: left;">Two</td> <td style="text-align: left;">4.5</td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:14:22</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">58805</td> <td style="text-align: left;">Three</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">01/04/2016 10:37:43</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">28212</td> <td style="text-align: left;">One</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">01/04/2016 10:37:43</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">466299</td> <td style="text-align: 
left;">Two</td> <td style="text-align: left;">4.5</td> </tr> <tr> <td style="text-align: left;">01/04/2016 10:37:43</td> <td style="text-align: left;">1.123717985</td> <td style="text-align: left;">At223</td> <td style="text-align: left;">01/04/2016 16:00</td> <td style="text-align: left;">58805</td> <td style="text-align: left;">Three</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:03:48</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">28214</td> <td style="text-align: left;">Eight</td> <td style="text-align: left;">7.2</td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:03:48</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">1212772</td> <td style="text-align: left;">Nine</td> <td style="text-align: left;">1.58</td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:03:48</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">58805</td> <td style="text-align: left;">Ten</td> <td style="text-align: left;">4.4</td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:19:15</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">28214</td> <td style="text-align: left;">Eight</td> <td style="text-align: left;">7.2</td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:19:15</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">1212772</td> <td style="text-align: left;">Nine</td> <td style="text-align: 
left;">1.58</td> </tr> <tr> <td style="text-align: left;">21/03/2016 13:19:15</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">58805</td> <td style="text-align: left;">Ten</td> <td style="text-align: left;">4.5</td> </tr> <tr> <td style="text-align: left;">01/04/2016 12:48:13</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">28214</td> <td style="text-align: left;">Eight</td> <td style="text-align: left;">7.2</td> </tr> <tr> <td style="text-align: left;">01/04/2016 12:48:13</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">1212772</td> <td style="text-align: left;">Nine</td> <td style="text-align: left;">1.59</td> </tr> <tr> <td style="text-align: left;">01/04/2016 12:48:13</td> <td style="text-align: left;">1.123719512</td> <td style="text-align: left;">Br12</td> <td style="text-align: left;">03/04/2016 16:00</td> <td style="text-align: left;">58805</td> <td style="text-align: left;">Ten</td> <td style="text-align: left;">4.5</td> </tr> </tbody> </table> </div>
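One way that avoids looping (a sketch on a toy frame, assuming `Time` has been parsed to datetimes, e.g. with `pd.to_datetime(df['Time'], dayfirst=True)`): dense-rank the snapshot times within each `Ref` from newest to oldest and keep ranks up to n, so all rows belonging to a snapshot stay together:

```python
import pandas as pd

# toy frame: two Refs, several snapshot Times each, two test rows per snapshot
df = pd.DataFrame({
    "Ref":  [1.1] * 6 + [2.2] * 4,
    "Time": pd.to_datetime([
        "2016-03-18 18:10", "2016-03-18 18:10",
        "2016-03-21 10:14", "2016-03-21 10:14",
        "2016-03-21 12:44", "2016-03-21 12:44",
        "2016-04-01 10:37", "2016-04-01 10:37",
        "2016-04-01 12:48", "2016-04-01 12:48",
    ]),
    "TestId": [1, 2] * 5,
})

n = 2
# dense rank of each row's Time within its Ref, newest snapshot = rank 1;
# keeping rank <= n keeps every row of the last n snapshots
last_n = df[df.groupby("Ref")["Time"].rank(method="dense", ascending=False) <= n]
```

`method="dense"` is what makes the three rows sharing one snapshot time count as a single snapshot rather than three.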
<python><pandas>
2023-02-06 16:03:40
3
500
James
75,363,555
19,854,658
Calculating probability distribution of an image?
<p>I want to find the probability distribution of two images so I can calculate KL Divergence.</p> <p>I'm trying to figure out what probability distribution means in this sense. I've converted my images to grayscale, flattened them to a 1d array and plotted them as a histogram with bins = 256:</p> <pre><code>imageone = imgGray.flatten() # array([0.64991451, 0.65775765, 0.66560078, ..., imagetwo = imgGray2.flatten() plt.hist(imageone, bins=256, label = 'image one') plt.hist(imagetwo, bins=256, alpha = 0.5, label = 'image two') plt.legend(loc='upper left') </code></pre> <p>My next step is to call the ks_2samp function from scipy to calculate the divergence, but I'm unclear what arguments to use.</p> <p>A previous answer explained that we should &quot;take the histogram of the image(in gray scale) and than divide the histogram values by the total number of pixels in the image. This will result in the probability to find a gray value in the image.&quot;</p> <p>Ref: <a href="https://stackoverflow.com/questions/39928250/can-kullback-leibler-be-applied-to-compare-two-images">Can Kullback-Leibler be applied to compare two images?</a></p> <p>But what do we mean by &quot;take the histogram values&quot;? How do I 'take' these values?</p> <p>I might be overcomplicating things, but I'm confused by this.</p>
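"Taking the histogram values" just means the per-bin pixel counts; dividing each count by the total number of pixels turns the histogram into a probability distribution. A stdlib-only sketch on tiny 8-level "images" (real grayscale floats in [0, 1] would first be quantized, e.g. `int(v * 255)`, and `np.histogram` would do the counting faster):

```python
from collections import Counter
from math import log

def histogram_probs(pixels, bins=256):
    """Per-bin count divided by total pixel count = P(gray value)."""
    counts = Counter(pixels)
    total = len(pixels)
    return [counts.get(b, 0) / total for b in range(bins)]

def kl_divergence(p, q, eps=1e-12):
    # eps keeps empty bins in q from dividing by zero
    return sum(pi * log(pi / (qi + eps)) for pi, qi in zip(p, q) if pi > 0)

# toy 8-level "images": flattened integer gray values in 0..7
img_one = [0, 0, 1, 2, 2, 2, 3, 7]
img_two = [0, 1, 1, 2, 3, 3, 7, 7]

p = histogram_probs(img_one, bins=8)
q = histogram_probs(img_two, bins=8)
d = kl_divergence(p, q)
```

Each of `p` and `q` sums to 1, which is exactly the "probability distribution" that KL divergence expects.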
<python><math><image-processing>
2023-02-06 15:48:39
1
379
Jean-Paul Azzopardi
75,363,548
3,885,446
yolov8 predict show=True - Image kills notebook kernel
<p>I am testing yolov8 prediction using the following code:</p> <pre><code>from ultralytics import YOLO # Load a model model = YOLO(&quot;yolov8n.pt&quot;) # Use the model model.predict(source= &quot;bus.jpg&quot;,show=True) # predict on an image </code></pre> <p>This works perfectly in the Spyder IDE and the resulting image can be closed by clicking the top right-hand corner in the usual way.</p> <p>Using the same code in a Jupyter notebook also works, but the image cannot be closed. Clicking on the image gives the popup message that Python is not responding, offering the choice of closing the program or waiting. As expected, closing the program kills the kernel. Is there a way round this?</p>
<python><yolov5>
2023-02-06 15:48:14
1
575
Alan Johnstone
75,363,440
7,437,143
Plotly-Dash how to add an identifier to an Annotation?
<h2>Context</h2> <p>Suppose one has multiple sets of annotations, which are merged into 1 large <code>List</code> of annotations in the <code>fig.update_layout</code> of the <code>go.Figure</code> object in plotly/Dash. Since the annotations are created at different places, it may be somewhat tedious to keep track of the indices based on list <code>index</code>. So I thought, if I add an identifier to the annotation, I am sure I am updating the right annotation each time. Especially as the annotations may contain duplicate properties or possibly be complete duplicates (without identifier).</p> <h2>MWE</h2> <p>A trivial MWE is included:</p> <pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go import numpy as np t = np.linspace(0, 4*np.pi, 50) t2 = np.pi * np.arange(5) fig = go.Figure(go.Scatter(x=t, y=np.sin(t), mode='lines')) fig.add_trace(go.Scatter(x=t2, y=np.sin(t2), mode='markers')) first_annotations=[ go.layout.Annotation( x=point, y=np.sin(point), xref=&quot;x&quot;, yref=&quot;y&quot;, text=&quot;dict Text&quot;, align='center', showarrow=False, yanchor='bottom', textangle=90) for point in t2] second_annotations=[ go.layout.Annotation( x=point, y=np.cos(point), xref=&quot;x&quot;, yref=&quot;y&quot;, text=&quot;Other dict Text&quot;, align='center', showarrow=False, yanchor='bottom', textangle=90) for point in t2] first_annotations.extend(second_annotations) fig.update_layout(annotations=first_annotations ) fig.show() </code></pre> <h2>Output</h2> <p>Example with 2 sets of annotations: <a href="https://i.sstatic.net/rNvbU.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rNvbU.jpg" alt="enter image description here" /></a></p> <h2>Question</h2> <p>How can one add an identifier to an Annotation object in plotly dash?</p> <h2>Approach</h2> <p>I looked through <a href="https://plotly.com/python/reference/layout/annotations/" rel="nofollow noreferrer">the documentation</a> of 
<code>plotly.graph_objs.layout.Annotation:</code> however, I did not find an &quot;identifier&quot; (like) property.</p>
<python><plotly><annotations><plotly-dash><super>
2023-02-06 15:38:41
1
2,887
a.t.
75,363,115
112,871
Stacking to 100% with `seaborn.objects`
<p>I'm trying to make a plot with bars or areas rescaled to 100% with the new <code>seaborn.objects</code> interface and I can't seem to get <code>so.Norm()</code> to work, with or without <code>by</code>...</p> <p>Here's what I've got so far:</p> <pre class="lang-py prettyprint-override"><code>import seaborn as sns import seaborn.objects as so tips = sns.load_dataset(&quot;tips&quot;) # bars ( so.Plot(tips, x=&quot;day&quot;, y=&quot;total_bill&quot;, color=&quot;time&quot;) .add(so.Bar(), so.Agg(&quot;sum&quot;), so.Norm(func=&quot;sum&quot;), so.Stack()) ) #areas ( so.Plot(tips, x=&quot;size&quot;, y=&quot;total_bill&quot;, color=&quot;time&quot;) .add(so.Area(), so.Agg(&quot;sum&quot;), so.Norm(func=&quot;sum&quot;), so.Stack()) ) </code></pre>
<python><plot><seaborn><visualization><seaborn-objects>
2023-02-06 15:11:43
1
27,660
nicolaskruchten
75,363,020
5,539,674
Split string into functions on parentheses, but not subfunctions
<p>I am cleaning a data set that consists of concatenated function-call strings that look like this: <code>&quot;hello(data=x, capitalize = True)there()my(x = x)dear(x, 6L, ...)friend(x = c(1, 2, 3))&quot;</code>. The goal is to split such a string into separate list elements, so that every function stands on its own.</p> <p>So far I can split all functions that do not contain a subfunction (such as <code>&quot;c(1,2,3)&quot;</code>) using regex:</p> <pre><code>import re s=&quot;hello(data=x, capitalize = True)there()my(x = x)dear(x, 6L, ...)&quot; t = re.findall(r&quot;\w+\(.*?\)&quot;, s) ['hello(data=x, capitalize = True)', 'there()', 'my(x = x)', 'dear(x, 6L, ...)'] </code></pre> <p>However, I am stuck when a subfunction is included inside a function call such as <code>friend(x = c(1, 2, 3))</code>, where the function is then split in half at the subfunction instead of being preserved.</p> <p>Is it possible to leave functions that contain other functions as substrings intact using regex?</p>
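A sketch that handles one level of nesting with the stdlib `re` module (arbitrary-depth nesting is beyond regular expressions and needs a small parser, or the third-party `regex` module's recursion support via `(?R)`):

```python
import re

s = ("hello(data=x, capitalize = True)there()my(x = x)"
     "dear(x, 6L, ...)friend(x = c(1, 2, 3))")

# arguments are runs without parentheses, optionally interrupted by a
# parenthesised group that itself contains no parentheses (one nesting level)
pattern = r"\w+\((?:[^()]*\([^()]*\))*[^()]*\)"
calls = re.findall(pattern, s)
```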
<python><regex>
2023-02-06 15:04:00
3
315
O René
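The question above asks for top-level splitting of nested calls. The stdlib `re` engine cannot count balanced parentheses (the third-party `regex` package's recursive patterns can); a small depth counter avoids the extra dependency. A sketch — the helper name `split_calls` is invented here, and it assumes each call's name directly precedes its opening parenthesis, as in the question's data:

```python
def split_calls(s):
    """Split a string of concatenated calls at top-level closing parens."""
    calls, start, depth = [], 0, 0
    for i, ch in enumerate(s):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth == 0:               # end of a top-level call
                calls.append(s[start:i + 1])
                start = i + 1
    return calls

s = "hello(data=x, capitalize = True)there()my(x = x)dear(x, 6L, ...)friend(x = c(1, 2, 3))"
print(split_calls(s))
```

Nested calls like `c(1, 2, 3)` only raise and lower the depth without ever reaching zero, so `friend(...)` survives intact.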
75,363,019
19,198,552
How to position the insertion cursor under the mouse pointer, after creating a tkinter text widget by an event?
<p>I want the user to be able to edit a canvas-text item. As the canvas-text item has less functionality than the text-widget, I want to use the text-widget for editing. So when the editing is started, by mouse double click event, I open a new canvas-window item over the canvas-text item and put a text-widget in it. Then I insert the text of the canvas-text item into the text-widget. Of course the insertion cursor of the text-widget is now positioned at the end of the text-widget. But I want it to be positioned at the location, where the mouse double click happened. How can I do this?</p> <p>This is my code:</p> <pre><code>import tkinter as tk def edit_text(event): coords = canvas.bbox(canvas_text) text_ref = tk.Text(root, font=(&quot;Courier&quot;, 10)) canvas_window = canvas.create_window(coords[0], coords[1], window=text_ref, anchor=&quot;nw&quot;) text_ref.bind(&quot;&lt;Escape&gt;&quot;, lambda event: store_edits(text_ref, canvas_window)) text_ref.insert(&quot;1.0&quot;, canvas.itemcget(canvas_text, &quot;text&quot;)) text_ref.focus_set() def store_edits(text_ref, canvas_window): canvas.itemconfig(canvas_text, text=text_ref.get(&quot;1.0&quot;, &quot;end&quot;)) canvas.delete(canvas_window) del text_ref root = tk.Tk() canvas = tk.Canvas(root) canvas.grid() canvas_text = canvas.create_text(100, 100, text=&quot;aaa\n456\n123\n123\n456\n123\nbbb\n&quot;, font=(&quot;Courier&quot;, 10)) canvas.tag_bind(canvas_text, &quot;&lt;Double-Button-1&gt;&quot;, edit_text) root.mainloop() </code></pre> <p>I ask, because I believe I am not the first one having this problem.</p>
<python><tkinter><mouseevent><text-widget>
2023-02-06 15:03:55
2
729
Matthias Schweikart
75,362,838
1,831,520
Sending slash on send_keys on Selenium Chrome Driver on Linux
<p>I have been trying to send a URL link to a text area. I am using python3 and Chrome version 109.0.5414.119. It worked fine on my local OSX machine, but when I tried to automate it on a Linux machine it started behaving weirdly, so I have a feeling it could be a Chrome Driver-related issue.</p> <p>When I send a URL like this:</p> <pre><code>l.send_keys(&quot;https://google.com&quot;) </code></pre> <p>it becomes:</p> <pre><code>/google.comhttps: </code></pre> <p>Then I tried to debug this behavior by sending the following:</p> <pre><code>&gt;&gt;&gt; l.send_keys(&quot;/&quot;) # /| &gt;&gt;&gt; l.send_keys(&quot;/&quot;) # |/ </code></pre> <p>So the cursor position moves ahead of the line for the second <code>/</code>. I was not expecting this. Can you shed light on how to solve this?</p>
<python><python-3.x><selenium><selenium-chromedriver><automated-tests>
2023-02-06 14:47:12
1
7,590
sadaf2605
75,362,834
9,861,647
Python Pandas transpose Date Range of Values
<p>I have this Python data frame</p> <pre><code>year_month Type ID Values1 Values2 Values3 ... 2022-01 A 1 1 0 0 2022-02 A 1 3 4 6 2022-03 A 1 5 9 10 2022-01 B 2 5 9 10 2022-02 B 2 4 2 1 .... ... ... ... </code></pre> <p>I want to transpose my results like this. How can I do this with Python?</p> <pre><code> ID Type Values 2022-01 2022-02 2022-03 ... 1 A Values1 1 3 5 1 A Values2 0 4 9 1 A Values3 0 6 10 2 B Values1 5 4 0 2 B Values2 9 2 0 2 B Values3 10 1 0 ... </code></pre>
<python><pandas>
2023-02-06 14:47:05
1
1,065
Simon GIS
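The reshape asked for above is a melt of the `Values*` columns followed by a pivot of `year_month` into columns. A pandas sketch, with the data copied from the question's example:

```python
import pandas as pd

df = pd.DataFrame({
    "year_month": ["2022-01", "2022-02", "2022-03", "2022-01", "2022-02"],
    "Type": ["A", "A", "A", "B", "B"],
    "ID": [1, 1, 1, 2, 2],
    "Values1": [1, 3, 5, 5, 4],
    "Values2": [0, 4, 9, 9, 2],
    "Values3": [0, 6, 10, 10, 1],
})

# Melt the Values* columns long, then pivot year_month out into columns;
# fill_value=0 supplies the zeros for months a group never saw.
long = df.melt(id_vars=["year_month", "Type", "ID"], var_name="Values")
wide = (long.pivot_table(index=["ID", "Type", "Values"],
                         columns="year_month", values="value", fill_value=0)
            .reset_index())
wide.columns.name = None
print(wide)
```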
75,362,809
9,290,590
How to disable the Matplotlib navigation toolbar in a particular axis?
<p>I have a figure with different plots on several axes. Some of those axes do not play well with some of the navigation toolbar actions. In particular, the shortcuts to go back to the home view and the ones to go to the previous and next views.</p> <p>Is there a way to disable those shortcuts only for those axes? For example, in one of the two in the figure from the example below.</p> <pre><code>import matplotlib.pyplot as plt # Example data for two plots x1 = [1, 2, 3, 4] y1 = [10, 20, 25, 30] x2 = [2, 3, 4, 5] y2 = [5, 15, 20, 25] # Create figure and axes objects fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5)) # Plot data on the first axis ax1.plot(x1, y1) ax1.set_title(&quot;First Plot&quot;) # Plot data on the second axis ax2.plot(x2, y2) ax2.set_title(&quot;Second Plot&quot;) # Show plot plt.show() </code></pre> <hr /> <p>Edit 1:</p> <p>The following method will successfully disable the pan and zoom tools from the GUI toolbox in the target axis.</p> <pre><code>ax2.set_navigate(False) </code></pre> <p>However, the home, forward, and back buttons remain active. Is there a trick to disable also those buttons in the target axis?</p>
<python><matplotlib><user-interface><widget><interactive>
2023-02-06 14:45:06
3
419
Stefano
75,362,799
9,182,743
Pandas: i)groupby col_a ii) sort by col_b iii)sort df by col_b min value for each group
<p>given a dataframe:</p> <pre><code> col_a col_b 0 b 2022-01-01 1 a 2022-01-02 2 c 2022-10-03 3 b 2022-10-01 4 a 2022-10-03 5 c 2022-10-02 </code></pre> <p>I want to:</p> <ul> <li>groupby col_a</li> <li>within groups, values are sorted by col_b</li> <li>the groups are then sorted by order of min of col_b</li> </ul> <p>so the first row should correspond to the col_a group that has had the first value in col_b.</p> <p>desired output:</p> <pre class="lang-py prettyprint-override"><code> col_a col_b 0 b 2022-01-01 # b has first min value col_b --&gt; at the start of df 1 b 2022-10-01 # the next value is the sorted next value of group b. 2 a 2022-01-02 # a has second min value col_b --&gt; second in df order 3 a 2022-10-03 4 c 2022-10-03 5 c 2022-10-02 </code></pre> <p>I am able to group by col_a, and within group sort by col_b, but I am not able to then order the df by the min value of col_b for each group</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.DataFrame({ &quot;col_a&quot;: ['b', 'a', 'c', 'b', 'a', 'c'], 'col_b' : pd.to_datetime([&quot;2022/01/01&quot;,&quot;2022/01/02&quot;,&quot;2022/10/03&quot;,&quot;2022/10/01&quot;, &quot;2022/10/03&quot;,&quot;2022/10/02&quot;]) }) desired_df = pd.DataFrame({ &quot;col_a&quot;: ['b', 'b', 'a', 'a', 'c', 'c'], 'col_b' : pd.to_datetime([&quot;2022/01/01&quot;, &quot;2022/10/01&quot;,&quot;2022/01/02&quot;, &quot;2022/10/03&quot;,&quot;2022/10/03&quot;, &quot;2022/10/02&quot;]) }) print (df.groupby('col_a').apply(lambda x: x.sort_values('col_b'))) # this is not working </code></pre>
<python><pandas>
2023-02-06 14:44:03
1
1,168
Leo
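One way to get "groups ordered by their minimum `col_b`, rows sorted inside each group" is to materialize each group's minimum as an explicit sort key. A sketch (note: the question's `desired_df` lists the `c` rows as 10-03 before 10-02, which contradicts its own within-group sort rule; this sketch sorts within groups):

```python
import pandas as pd

df = pd.DataFrame({
    "col_a": ["b", "a", "c", "b", "a", "c"],
    "col_b": pd.to_datetime(["2022/01/01", "2022/01/02", "2022/10/03",
                             "2022/10/01", "2022/10/03", "2022/10/02"]),
})

# Attach each group's minimum, then sort by (group min, group, value):
# groups appear in order of their earliest date, rows stay sorted inside.
out = (df.assign(grp_min=df.groupby("col_a")["col_b"].transform("min"))
         .sort_values(["grp_min", "col_a", "col_b"])
         .drop(columns="grp_min")
         .reset_index(drop=True))
print(out)
```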
75,362,419
1,325,133
What order does Pytest execute the tests
<p>Can anyone confirm the order in which pytest executes the tests? For basic tests (no fixture parameterization), it appears to perform the tests based on the sequential naming order of the test modules and then the order of test functions within the module. However, what is the order when you start to add in parametrization for fixtures (including indirect parametrization)? So far, I've been unable to find any clear details online about this.</p> <p>Thanks</p>
<python><pytest>
2023-02-06 14:10:55
0
16,889
felix001
75,362,373
2,244,766
How to invalidate a toxenv
<p>I have a tox project that processes some protobuf in the install-deps phase and outputs some <code>*pb.py</code> codecs (custom script executed as the <code>install_command</code> option in the config). When I'm updating my workspace (and the protobuf files are updated), I would like to somehow mark the toxenv as invalid - so that it would get recreated <strong>without needing to pass the <code>-r, --recreate</code> flags</strong> to some later <code>tox</code> call. I could add such an action to the script that does the env update. Any idea on how to do it? I'm using an older tox, 3.14.</p>
<python><protocol-buffers><python-packaging><tox><protobuf-python>
2023-02-06 14:06:53
1
4,035
murison
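Not stated in the question, but a common workaround: tox 3.x recreates an env whose directory is missing, so the workspace-update script can simply delete `.tox/<envname>` to invalidate it. A Python sketch of that idea — the env name `py38` is an assumption, and the `mkdir` only stands in for an already-existing stale env:

```python
import shutil
from pathlib import Path

# Invalidate one tox env so the next plain `tox` run recreates it,
# without passing -r/--recreate.  "py38" is a placeholder env name.
env_dir = Path(".tox/py38")

env_dir.mkdir(parents=True, exist_ok=True)  # stand-in for a stale env
if env_dir.exists():
    shutil.rmtree(env_dir)                  # next `tox` rebuilds it
print(f"removed {env_dir}; the next plain `tox` run recreates it")
```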
75,362,342
3,760,875
Python entry_point in virtual environment not working
<p>I have a virtual environment where I am developing a Python package. The folder tree is the following:</p> <pre><code>working-folder |-setup.py |-src |-my_package |-__init__.py |-my_subpackage |-__init__.py |-main.py </code></pre> <p><code>main.py</code> contains a function <code>my_main</code> that ideally, I would want to run as a bash command.</p> <p>I am using <code>setuptools</code> and the <code>setup</code> function contains the following line of code</p> <pre><code>setup( ... entry_point={ &quot;console_scripts&quot;: [ &quot;my-command = src.my_package.my_subpackage.main:my_main&quot;, ] }, ... ) </code></pre> <p>When I run <code>pip install .</code> the package gets correctly installed in the virtual environment. However, when running <code>my-command</code> on the shell, the command does not exist.</p> <p>Am I missing some configuration to correctly generate the entry point?</p>
<python><package><virtualenv><setuptools><entry-point>
2023-02-06 14:04:40
1
2,569
aretor
75,362,059
2,727,655
Convert Tensorflow model to PyTorch model - model isn't learning
<p>I'm trying to port a tensorflow neural network to pytorch, as an exercise to familiarize myself with both / their nuances. This is the tensorflow network I'm porting to pytorch:</p> <pre><code>import pandas as pd import tensorflow as tf from tensorflow.keras.preprocessing import sequence from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout, Activation from tensorflow.keras.layers import Embedding from tensorflow.keras.layers import Conv1D, GlobalMaxPooling1D from tensorflow.keras.datasets import imdb (x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=5000) x_train = sequence.pad_sequences(x_train, maxlen=400, padding=&quot;post&quot;) x_test = sequence.pad_sequences(x_test, maxlen=400, padding=&quot;post&quot;) model = Sequential() model.add(Embedding(5000, 50, input_length=400)) model.add(Dropout(0.2)) model.add(Conv1D(250, 3, padding='valid',activation='relu',strides=1)) model.add(GlobalMaxPooling1D()) model.add(Dense(250)) model.add(Dropout(0.2)) model.add(Activation('relu')) model.add(Dense(1)) model.add(Activation('sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.summary() h2 = model.fit(x_train, y_train, batch_size=32, epochs=10, validation_data=(x_test, y_test)) </code></pre> <p>The shapes of each layer is shown below:</p> <pre><code>Model: &quot;sequential&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding (Embedding) (None, 400, 50) 250000 dropout (Dropout) (None, 400, 50) 0 conv1d (Conv1D) (None, 398, 250) 37750 global_max_pooling1d (Globa (None, 250) 0 lMaxPooling1D) dense (Dense) (None, 250) 62750 dropout_1 (Dropout) (None, 250) 0 activation (Activation) (None, 250) 0 dense_1 (Dense) (None, 1) 251 activation_1 (Activation) (None, 1) 0 ================================================================= Total params: 
350,751 Trainable params: 350,751 Non-trainable params: 0 </code></pre> <p>And the output of the tensorflow model is:</p> <pre><code>Epoch 1/10 loss: 0.4043 - accuracy: 0.8021 - val_loss: 0.2764 - val_accuracy: 0.8854 Epoch 2/10 loss: 0.2332 - accuracy: 0.9052 - val_loss: 0.2690 - val_accuracy: 0.8888 Epoch 3/10 loss: 0.1598 - accuracy: 0.9389 - val_loss: 0.2948 - val_accuracy: 0.8832 Epoch 4/10 loss: 0.1112 - accuracy: 0.9600 - val_loss: 0.3015 - val_accuracy: 0.8906 Epoch 5/10 loss: 0.0810 - accuracy: 0.9700 - val_loss: 0.3057 - val_accuracy: 0.8868 Epoch 6/10 loss: 0.0537 - accuracy: 0.9811 - val_loss: 0.4055 - val_accuracy: 0.8868 Epoch 7/10 loss: 0.0408 - accuracy: 0.9860 - val_loss: 0.4083 - val_accuracy: 0.8852 Epoch 8/10 loss: 0.0411 - accuracy: 0.9845 - val_loss: 0.4789 - val_accuracy: 0.8789 Epoch 9/10 loss: 0.0380 - accuracy: 0.9862 - val_loss: 0.4828 - val_accuracy: 0.8827 Epoch 10/10 loss: 0.0329 - accuracy: 0.9879 - val_loss: 0.4999 - val_accuracy: 0.8825 </code></pre> <p>Here's what I have in my PyTorch port over:</p> <pre><code>from torch.utils.data import DataLoader from torch.utils.data import Dataset import torch from tqdm import tqdm import torch.nn.functional as F from sklearn.metrics import accuracy_score class CustomDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.y) def __getitem__(self, idx): return self.x[idx], self.y[idx] train_dataloader = DataLoader(CustomDataset(torch.Tensor(x_train), torch.Tensor(y_train)), batch_size=32, shuffle=True) test_dataloader = DataLoader(CustomDataset(torch.Tensor(x_test), torch.Tensor(y_test)), batch_size=32, shuffle=True) class MyModel(torch.nn.Module): def __init__(self, vocab_size=5000, input_len=400, embedding_dims=50, kernel_size=3, filters=250, hidden_dims=250): super(MyModel, self).__init__() self.embedding_dims = embedding_dims self.input_len = input_len self.embedding = torch.nn.Embedding(num_embeddings=vocab_size, embedding_dim=embedding_dims) 
self.dropout1 = torch.nn.Dropout(p=0.2) self.conv1d = torch.nn.Conv1d(in_channels=embedding_dims, out_channels=filters, kernel_size=kernel_size, padding=(0,), stride=1) self.pool = torch.nn.AdaptiveMaxPool1d(1) self.linear1 = torch.nn.Linear(in_features=hidden_dims, out_features=hidden_dims) self.dropout2 = torch.nn.Dropout(p=0.2) self.activation = torch.nn.ReLU() self.output = torch.nn.Linear(in_features=hidden_dims, out_features=1) self.activation2 = torch.nn.Sigmoid() def forward(self, x): x = self.dropout1(self.embedding(x.type(torch.LongTensor))) x = self.conv1d(x.view(-1, self.embedding_dims, self.input_len)) x = self.pool(x) x = self.activation(self.dropout2(self.linear1(x.view(-1,x.size()[1])))) x = self.activation2(self.output(x)) return x class FitTorchModel(): def __init__(self, model, num_epochs=10, steps_per_epoch=782): self.model = model self.epochs = num_epochs self.steps_per_epoch = steps_per_epoch def fit(self, train_dataloader, test_dataloader): opt = torch.optim.Adam(self.model.parameters(), lr=0.001) crit = torch.nn.BCELoss(reduction = &quot;mean&quot;) history_df = pd.DataFrame(columns = [&quot;Loss&quot;, &quot;Accuracy&quot;, &quot;Val_Loss&quot;, &quot;Val_Acc&quot;]) for epoch in range(self.epochs): self.model.train() print(f&quot;Epoch {epoch}&quot;) epoch_loss = 0 epoch_acc = 0 it = iter(train_dataloader) for step in tqdm(range(self.steps_per_epoch)): opt.zero_grad() x, y = next(it) y_pred = self.model(x).view(-1) loss = crit(y_pred, y) epoch_loss += loss.item() epoch_acc += accuracy_score(y==1, y_pred &gt; 0.5) loss.backward() opt.step() val_loss, val_acc = self.predict_proba(test_dataloader, crit) df = pd.DataFrame({&quot;Loss&quot;: epoch_loss/(step+1), &quot;Accuracy&quot;: epoch_acc/(step+1), &quot;Val_Loss&quot;: val_loss, &quot;Val_Acc&quot;: val_acc}, index=[0]) history_df = pd.concat((history_df, df), ignore_index=True) return history_df def predict_proba(self, test_dataloader, crit): self.model.eval() val_loss = 0 val_acc = 0 it 
= iter(test_dataloader) with torch.no_grad(): for step in tqdm(range(self.steps_per_epoch)): x,y = next(it) y_pred = self.model(x).view(-1) batch_loss = crit(y_pred, y) val_loss += batch_loss.item() val_acc += accuracy_score(y==1, y_pred &gt; 0.5) return val_loss/(step+1), val_acc/(step+1) ftm = FitTorchModel(model=MyModel(), num_epochs=10, steps_per_epoch=782) history_df = ftm.fit(train_dataloader, test_dataloader) </code></pre> <p>The shape of each layer is:</p> <pre><code>After embedding layer: torch.Size([32, 400, 50]) After dropout1 layer: torch.Size([32, 400, 50]) After convolution1d layer: torch.Size([32, 250, 398]) After maxpooling layer: torch.Size([32, 250, 1]) After linear1 layer: torch.Size([32, 250]) After dropout2 layer: torch.Size([32, 250]) After activation layer: torch.Size([32, 250]) After output layer: torch.Size([32, 1]) After activation2 layer: torch.Size([32, 1]) </code></pre> <p>The output of the pytorch model training is:</p> <pre><code> Loss Accuracy Val_Loss Val_Acc 0 0.697899 0.505874 0.692495 0.511629 1 0.693063 0.503477 0.693186 0.503637 2 0.693190 0.496044 0.693149 0.499201 3 0.693181 0.501359 0.693082 0.502038 4 0.693169 0.503237 0.693234 0.495964 5 0.693177 0.500240 0.693154 0.500679 6 0.693069 0.507473 0.693258 0.498881 7 0.693948 0.500320 0.693145 0.501598 8 0.693196 0.499640 0.693164 0.496324 9 0.693170 0.500759 0.693140 0.501918 </code></pre> <p>Couple things: the accuracy hovers around guessing (this is a binary classification task), no matter how many epochs have passed. Secondly, the training loss barely improves. I set the learning rate to the default learning rate described by <a href="https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam" rel="nofollow noreferrer">tensorflow's Adam Optimizer docs</a>. What else am I missing here? I had some trouble with the input / output dimensions for the various layers - did I mess those up at all?</p>
<python><tensorflow><keras><deep-learning><pytorch>
2023-02-06 13:40:37
2
554
lrthistlethwaite
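A likely culprit in the forward pass above is `x.view(-1, self.embedding_dims, self.input_len)`: `view`/`reshape` reinterprets the buffer and scrambles which embedding value belongs to which token, whereas `Conv1d`'s expected (batch, channels, length) layout calls for an axis swap, `x.permute(0, 2, 1)` in torch. The same distinction shown in NumPy terms (used here only to keep the sketch dependency-light):

```python
import numpy as np

# Stand-in for the embedding output: (batch, seq_len, embedding_dims)
x = np.arange(24).reshape(2, 3, 4)

# What the posted code does: reinterpret the buffer as (batch, emb, seq).
# The shape is right, but token/channel values land in the wrong slots.
scrambled = x.reshape(2, 4, 3)

# What Conv1d needs: swap the last two axes (torch: x.permute(0, 2, 1)).
channels_first = x.transpose(0, 2, 1)

# Channel c of token t must equal the original x[batch, t, c]:
print(np.array_equal(channels_first[0, :, 1], x[0, 1, :]))  # matches
print(np.array_equal(scrambled, channels_first))            # differs
```

With scrambled inputs the convolution sees noise, which fits the symptom of loss pinned near 0.693 (chance for binary cross-entropy).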
75,361,952
7,980,206
Reassign multiple columns' values from other rows in a pandas dataframe
<p>I have a <code>dataframe</code> where CY (current year) refers to 2022, PY (previous year) refers to 2021, and PPY (prior to previous year) refers to 2020. I want to collect this information in a single row per id. The input dataframe looks like -</p> <pre><code>id Year Jan_CY Feb_CY Jan_PY Feb_PY Jan_PPY Feb_PPY 1 2022 1 2 0 0 0 0 1 2021 0 0 3 4 0 0 1 2020 0 0 0 0 5 6 2 2022 0 0 0 0 0 0 2 2021 0 0 7 8 0 0 2 2020 0 0 0 0 9 10 </code></pre> <p>The desired output dataframe looks like</p> <pre><code> id Year Jan_CY Feb_CY Jan_PY Feb_PY Jan_PPY Feb_PPY 0 1 2022 1 2 3 4 5 6 1 2 2022 0 0 7 8 9 10 </code></pre> <p>I tried the code below:</p> <pre><code>def get_previous_values(row): cols = row.columns py_cols = [i for i in cols if i.endswith(&quot;_PY&quot;)] ppy_cols = [j for j in cols if j.endswith(&quot;_PPY&quot;)] row[py_cols].mask((df['clnt_orgn_id'] == clnt_orgn_id) &amp; (df['SMRY_YR_NO'] == 2021), df[py_cols]) return row </code></pre> <p>but couldn't solve it.</p>
<python><pandas>
2023-02-06 13:32:15
2
717
ggupta
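Because each `*_CY`/`*_PY`/`*_PPY` block above is zero outside its own year's row, collapsing to one row per id reduces to aggregating each value column and keeping the latest Year. A sketch with the question's example data — assuming that zero-filled pattern holds (if values could overlap, swap `"sum"` for `"max"` or a masked pick):

```python
import pandas as pd

df = pd.DataFrame({
    "id":      [1, 1, 1, 2, 2, 2],
    "Year":    [2022, 2021, 2020, 2022, 2021, 2020],
    "Jan_CY":  [1, 0, 0, 0, 0, 0],
    "Feb_CY":  [2, 0, 0, 0, 0, 0],
    "Jan_PY":  [0, 3, 0, 0, 7, 0],
    "Feb_PY":  [0, 4, 0, 0, 8, 0],
    "Jan_PPY": [0, 0, 5, 0, 0, 9],
    "Feb_PPY": [0, 0, 6, 0, 0, 10],
})

value_cols = [c for c in df.columns if c not in ("id", "Year")]
# One row per id: the max Year plus the per-column sums (only one row per
# id contributes a non-zero value to each column).
out = (df.groupby("id", as_index=False)
         .agg({"Year": "max", **{c: "sum" for c in value_cols}}))
print(out)
```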
75,361,902
11,829,398
Is it ok to reference self.var in a parent class with the expectation it will be implemented in the child class?
<p>I have some classes that look like this.</p> <pre class="lang-py prettyprint-override"><code>class Parent: def f(self, x): # Is this bad practice? return x * self.number class Child(Parent): def __init__(self, number): # We create number in Child. self.number = number child = Child(2) child.f(3) # 6 - this runs </code></pre> <p>It seems to be going against <em>'explicit is better than implicit'</em> to define <code>self.number</code> in <code>Parent</code> without any indication that you must override it in <code>Child</code>. But it does run.</p> <p>What's the best way to handle this? I could define it in <code>Parent</code>'s <code>__init__</code> but the user will only need to refer to <code>Child</code> and I don't want to duplicate the params passed to <code>Parent</code> and <code>Child</code>.</p>
<python><oop><python-class>
2023-02-06 13:26:29
0
1,438
codeananda
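One explicit alternative to the implicit contract above is an abstract property on the parent: forgetting to provide `number` in a subclass then fails loudly at instantiation instead of with an `AttributeError` at call time. A sketch:

```python
from abc import ABC, abstractmethod

class Parent(ABC):
    @property
    @abstractmethod
    def number(self) -> int:
        """Subclasses must provide this."""

    def f(self, x):
        return x * self.number

class Child(Parent):
    def __init__(self, number):
        self._number = number

    @property
    def number(self):
        return self._number

print(Child(2).f(3))  # 6
```

A lighter-weight option is a bare annotation `number: int` in `Parent`'s class body, which documents the expectation without enforcing it.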
75,361,896
5,675,325
Authentication using GitHub is not using the primary email
<p>I recently integrated GitHub authentication into my Django website and noticed that <a href="https://python-social-auth.readthedocs.io/en/latest/configuration/django.html" rel="nofollow noreferrer">Python Social Auth</a> is registering users with a non-primary email address.</p> <p>How can that behaviour be modified?</p>
<python><django><authentication><github><python-social-auth>
2023-02-06 13:26:06
1
15,859
Tiago Peres
75,361,826
7,008,628
How to set the state of a past TaskInstance to success and continue the pipeline inside an Airflow DAG?
<p>I'm trying to make a DAG with params that can be triggered with a dag_id/task_id. The goal of this DAG is to set the state of the last executed task to &quot;success&quot; and to continue the pipeline from this point.</p> <p>exemple of pipeline:</p> <p><a href="https://i.sstatic.net/HZc7q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HZc7q.png" alt="DAG pipeline exemple" /></a></p> <p>In my dag I want to be able to set &quot;run_that&quot; to success and automatically run &quot;run_them&quot; as a result of the new state change.</p> <p>Here is what I did from now:</p> <pre class="lang-py prettyprint-override"><code>import airflow from airflow.models import DagRun, TaskInstance, DagBag from airflow.operators.dagrun_operator import TriggerDagRunOperator from airflow.utils.trigger_rule import TriggerRule from airflow.utils.state import State from airflow.operators.python_operator import PythonOperator import pendulum from wrapper import ScriptLauncher, handleErrorSlack, handleErrorMail from datetime import timedelta, datetime default_args = { 'owner': 'tozzi', 'depends_on_past': False, 'start_date': pendulum.datetime(2022, 12, 19, tz='Europe/Paris'), 'retries': 0, 'retry_delay': timedelta(minutes=5), 'xcom_push': True, 'catchup': False, 'params': { 'dag_id': 'my_dag', 'task_id': 'run_that', } } def last_exec(dag_id, task_id, session): task_instances = ( session.query(TaskInstance) .filter(TaskInstance.dag_id == dag_id, TaskInstance.task_id == task_id) .all() ) task_instances.sort(key=lambda x: x.execution_date, reverse=True) if task_instances: return task_instances[0] return None def set_last_task_success(**kwargs): dag_id = kwargs['dag_id'] task_id = kwargs['task_id'] session = airflow.settings.Session() task_instance = last_exec(dag_id, task_id, session) if (task_instance is not None): task_instance.state = 'success' # task_instance = TaskInstance(task_id=task_id, execution_date=last_task_instance.execution_date) 
task_instance.run(session=session, ignore_ti_state=True, ignore_task_deps=True) session.commit() session.close() doc_md=f&quot;&quot;&quot;## Set the given task_id to success of the given dag_id&quot;&quot;&quot; # launched remotely launcher = ScriptLauncher(default_args, &quot;@once&quot;, 'set_task_to_success', ['airflow'], doc_md) dag = launcher.dag; set_to_success = PythonOperator( task_id='set_to_success', provide_context=True, python_callable=set_last_task_success, dag=dag, op_kwargs={ 'dag_id': '{{ params.dag_id }}', 'task_id': '{{ params.task_id }}', } ) </code></pre> <p>The <code>task_instance.run(...)</code> call fail here with this error : &quot;AttributeError: 'TaskInstance' object has no attribute 'task'&quot;, the state change is correctly working tho. What should I change so it rerun the &quot;run_them&quot; task when I change the state of the &quot;run_that&quot; task?</p>
<python><airflow>
2023-02-06 13:18:16
2
1,689
Nicolas Menettrier
75,361,712
18,749,472
Python datetime.date.today() not formatting inside sqlite3
<p>In my database query, which is executed with the <code>sqlite3</code> module, I insert a new row of data which includes a date field.</p> <p>The problem is when getting today's date with <code>datetime.date.today().strftime('%Y-%m-%d')</code> which outputs <code>'2023-02-06'</code> (expected output), it changes inside the database to <code>'2015'</code>. Why does this happen?</p> <p>This is a Django project so that is where I created the model for the database.</p> <p><em>models.py</em></p> <pre><code>class User(models.Model): ... date_joined = models.DateField('%Y-%m-%d') ... </code></pre> <p><em>database.py</em></p> <pre><code>def add_user(self, email, password): date = datetime.date.today().strftime('%Y-%m-%d') self.cursor.execute(f&quot;&quot;&quot; INSERT INTO App_user ('username','email','password', 'email_preference', 'region', 'date_joined') VALUES ('{username}', '{email}', '{password}', 'All', 'None', {date}) &quot;&quot;&quot;) self.con.commit() </code></pre>
<python><sql><django><sqlite><datetime>
2023-02-06 13:07:00
1
639
logan_9997
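The `'2015'` above is not a formatting bug: with `{date}` interpolated unquoted, SQLite receives the bare expression `2023-02-06` and evaluates it as integer subtraction (2023 - 2 - 6 = 2015). Placeholder binding fixes this (and closes the SQL-injection hole of f-string queries). A minimal stdlib reproduction, with an invented one-column table:

```python
import sqlite3
import datetime

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE app_user (date_joined)")
date = datetime.date.today().strftime("%Y-%m-%d")

# Unquoted f-string interpolation: SQLite sees 2023-02-06 as arithmetic.
con.execute(f"INSERT INTO app_user VALUES ({date})")

# Placeholder binding: the value arrives as the intended string.
con.execute("INSERT INTO app_user VALUES (?)", (date,))

rows = [row[0] for row in con.execute("SELECT date_joined FROM app_user")]
print(rows)  # e.g. [2015, '2023-02-06'] when run on 2023-02-06
```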
75,361,554
3,575,623
Combine Binned barplot with lineplot
<p>I'd like to represent two datasets on the same plot, one as a line and one as a binned barplot. I can do each individually:</p> <pre><code>tobar = pd.melt(pd.DataFrame(np.random.randn(1000).cumsum())) tobar[&quot;bins&quot;] = pd.qcut(tobar.index, 20) bp = sns.barplot(data=tobar, x=&quot;bins&quot;, y=&quot;value&quot;) </code></pre> <p><a href="https://i.sstatic.net/EmjfW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EmjfW.png" alt="barplot by itself" /></a></p> <pre><code>toline = pd.melt(pd.DataFrame(np.random.randn(1000).cumsum())) lp = sns.lineplot(data=toline, x=toline.index, y=&quot;value&quot;) </code></pre> <p><a href="https://i.sstatic.net/KyePx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KyePx.png" alt="lineplot by itself" /></a></p> <p>But when I try to combine them, of course the x axis gets messed up:</p> <pre><code>fig, ax = plt.subplots() ax2 = ax.twinx() bp = sns.barplot(data=tobar, x=&quot;bins&quot;, y=&quot;value&quot;, ax=ax) lp = sns.lineplot(data=toline, x=toline.index, y=&quot;value&quot;, ax=ax2) bp.set(xlabel=None) </code></pre> <p><a href="https://i.sstatic.net/E4ye9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E4ye9.png" alt="failed attempt at combining them" /></a></p> <p>I also can't seem to get rid of the bin labels.</p> <p>How can I get these two datasets on one plot?</p>
<python><matplotlib><seaborn><bar-chart><line-plot>
2023-02-06 12:52:20
1
507
Whitehot
75,361,535
5,901,870
Convert PySpark data frame to dictionary after grouping the elements in the column as key
<p>I have the below PySpark data frame:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>ID</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>value-1</td> </tr> <tr> <td>1</td> <td>value-2</td> </tr> <tr> <td>1</td> <td>value-3</td> </tr> <tr> <td>2</td> <td>value-1</td> </tr> <tr> <td>2</td> <td>value-2</td> </tr> </tbody> </table> </div> <p>I want to convert it into a dictionary:</p> <pre class="lang-py prettyprint-override"><code>dict1 = {'1':['value-1','value-2','value-3'], '2':['value-1','value-2']} </code></pre> <p>I was able to do it (I wrote an answer below) but I need a much simpler and more efficient way without converting the data frame to Pandas.</p>
<python><pandas><dataframe><pyspark>
2023-02-06 12:50:27
4
400
Mikesama
75,361,487
13,877,952
GroupBy and save each Occurrence in Columns
<p>I have the following problem :</p> <pre><code>df Key1 Key2 Value1 Value2 FixedValue A A 12 32 15 A A 40 25 15 A A 13 12 15 A A 80 100 15 B A 0 1 20 B A 0 12 20 A B 50 50 40 B B 7 8 30 </code></pre> <p>What I want is to create a new Dataframe, with only one line for each (Key1, Key2) couple, but creating new columns to keep the different values taken by Value1 and Value2 (see Output Example to understand better). FixedValue directly depends to (Key1, Key2) so won't change in time. I'd like to limit to a certain number of new columns created, so my output doesn't explode</p> <pre><code>Output wanted if I limit number of &quot;new column by Value&quot; to 3 : Key1 Key2 Value1_1 Value1_2 Value1_3 Value2_1 Value2_2 Value2_3 FixedValue A A 12 40 13 32 25 12 15 B A 0 0 1 12 20 A B 50 50 40 B B 7 8 30 </code></pre> <p>I don't mind the type of the blank going to non-existant values (they can be NaN, '', ... whatever)</p> <p>Thanks in advance for your help</p>
<python><pandas><group-by>
2023-02-06 12:45:58
1
564
Adept
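A common pattern for the reshape above is to number occurrences within each key pair with `cumcount`, drop anything past the column limit, and pivot the occurrence number out into columns. A pandas sketch with the question's data (missing slots come back as NaN, which the question allows):

```python
import pandas as pd

df = pd.DataFrame({
    "Key1":   list("AAAABBAB"),
    "Key2":   list("AAAAAABB"),
    "Value1": [12, 40, 13, 80, 0, 0, 50, 7],
    "Value2": [32, 25, 12, 100, 1, 12, 50, 8],
    "FixedValue": [15, 15, 15, 15, 20, 20, 40, 30],
})
MAX_COLS = 3  # cap on new columns per Value to keep the output bounded

# Number occurrences within each (Key1, Key2) group, keep the first few,
# then pivot those occurrence numbers into columns.
df["n"] = df.groupby(["Key1", "Key2"]).cumcount() + 1
wide = (df[df["n"] <= MAX_COLS]
        .pivot_table(index=["Key1", "Key2", "FixedValue"], columns="n",
                     values=["Value1", "Value2"], aggfunc="first"))
wide.columns = [f"{col}_{n}" for col, n in wide.columns]
wide = wide.reset_index()
print(wide)
```

Indexing on `FixedValue` alongside the keys works because it is constant per (Key1, Key2), so it survives the pivot without aggregation.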
75,361,467
6,071,697
Making a django model field readonly
<p>I am creating a django DB model and I want one of the fields to be readonly. When creating a new object I want to set it, but later if someone tries to update the object, it should raise an error. How do I achieve that?</p> <p>I tried the following but I was still able to update the objects.</p> <pre><code>from django.db import models as django_db_models class BalanceHoldAmounts(django_db_models.Model): read_only_field = django_db_models.DecimalField(editable=False) </code></pre> <p>Thank you</p>
<python><django>
2023-02-06 12:44:12
1
622
Epic
75,361,423
10,270,590
How to use a python list as a global variable within @task.external_python?
<h2>GOAL:</h2> <ul> <li>Have a python list as a global variable between tasks.</li> <li>Currently it crashes at the 1st task.</li> <li>1.) I am trying to have a simple python list that is carried from one task to the next, appending a few string values to it in task 2. So the goal is to have one shared list.</li> <li>2.) Even if one task fails it should just move on and not care (obviously marking the task as failed)</li> </ul> <h2>SETUP:</h2> <ul> <li>I am on Airflow 2.4.1</li> <li>I use Airflow Docker and built a python environment that I have used many times and that just works fine.</li> </ul> <h2>MY CODE:</h2> <pre><code>from __future__ import annotations import logging import os import shutil import sys import tempfile import time from pprint import pprint import pendulum from airflow import DAG from airflow.decorators import task log = logging.getLogger(__name__) PYTHON = sys.executable BASE_DIR = tempfile.gettempdir() my_default_args = { 'owner': 'me', 'email': ['some_email@some_email.com'], 'email_on_failure': True, 'email_on_retry': False, 'write_successes': [], } with DAG( dag_id='my_dag_id', schedule='9 9 * * *', start_date=pendulum.datetime(2022, 1, 1, tz=&quot;UTC&quot;), catchup=False, default_args=my_default_args, tags=['a', 'b'], ) as dag: @task.external_python(task_id=&quot;one&quot;, python='/opt/airflow/venv1/bin/python3') def first(**kwargs): task_id=&quot;one&quot; write_successes = kwargs.get('write_successes', []) print(write_successes) write_successes.append(99) print(write_successes) @task.external_python(task_id=&quot;two&quot;, python='/opt/airflow/venv1/bin/python3') def second(**kwargs): write_successes = kwargs.get('write_successes', []) print(write_successes) write_successes.append(101) print(write_successes) one = first() two = second() one &gt;&gt; two </code></pre> <h2>ERROR:</h2> <pre><code>*** Reading local file: /opt/airflow/logs/dag_id=test_global_variable/run_id=scheduled__2023-02-05T09:09:00+00:00/task_id=one/attempt=1.log [2023-02-06,
12:24:43 GMT] {taskinstance.py:1165} INFO - Dependencies all met for &lt;TaskInstance: test_global_variable.one scheduled__2023-02-05T09:09:00+00:00 [queued]&gt; [2023-02-06, 12:24:43 GMT] {taskinstance.py:1165} INFO - Dependencies all met for &lt;TaskInstance: test_global_variable.one scheduled__2023-02-05T09:09:00+00:00 [queued]&gt; [2023-02-06, 12:24:43 GMT] {taskinstance.py:1362} INFO - -------------------------------------------------------------------------------- [2023-02-06, 12:24:43 GMT] {taskinstance.py:1363} INFO - Starting attempt 1 of 1 [2023-02-06, 12:24:43 GMT] {taskinstance.py:1364} INFO - -------------------------------------------------------------------------------- [2023-02-06, 12:24:43 GMT] {taskinstance.py:1383} INFO - Executing &lt;Task(_PythonExternalDecoratedOperator): one&gt; on 2023-02-05 09:09:00+00:00 [2023-02-06, 12:24:43 GMT] {standard_task_runner.py:54} INFO - Started process 239657 to run task [2023-02-06, 12:24:43 GMT] {standard_task_runner.py:82} INFO - Running: ['airflow', 'tasks', 'run', 'test_global_variable', 'one', 'scheduled__2023-02-05T09:09:00+00:00', '--job-id', '72751', '--raw', '--subdir', 'DAGS_FOLDER/test_global_variable.py', '--cfg-path', '/tmp/tmpxldmrzpp'] [2023-02-06, 12:24:43 GMT] {standard_task_runner.py:83} INFO - Job 72751: Subtask one [2023-02-06, 12:24:43 GMT] {dagbag.py:525} INFO - Filling up the DagBag from /opt/airflow/dags/test_global_variable.py [2023-02-06, 12:24:43 GMT] {task_command.py:384} INFO - Running &lt;TaskInstance: test_global_variable.one scheduled__2023-02-05T09:09:00+00:00 [running]&gt; on host 4851b30aa5cf [2023-02-06, 12:24:43 GMT] {taskinstance.py:1590} INFO - Exporting the following env vars: AIRFLOW_CTX_DAG_OWNER=me AIRFLOW_CTX_DAG_ID=test_global_variable AIRFLOW_CTX_TASK_ID=one AIRFLOW_CTX_EXECUTION_DATE=2023-02-05T09:09:00+00:00 AIRFLOW_CTX_TRY_NUMBER=1 AIRFLOW_CTX_DAG_RUN_ID=scheduled__2023-02-05T09:09:00+00:00 [2023-02-06, 12:24:44 GMT] {warnings.py:109} WARNING - 
/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/context.py:204: AirflowContextDeprecationWarning: Accessing 'execution_date' from the template is deprecated and will be removed in a future version. Please use 'data_interval_start' or 'logical_date' instead. warnings.warn(_create_deprecation_warning(key, self._deprecation_replacements[key])) [2023-02-06, 12:24:44 GMT] {warnings.py:109} WARNING - /home/airflow/.local/lib/python3.8/site-packages/airflow/utils/context.py:204: AirflowContextDeprecationWarning: Accessing 'next_ds' from the template is deprecated and will be removed in a future version. Please use '{{ data_interval_end | ds }}' instead. warnings.warn(_create_deprecation_warning(key, self._deprecation_replacements[key])) [2023-02-06, 12:24:44 GMT] {warnings.py:109} WARNING - /home/airflow/.local/lib/python3.8/site-packages/airflow/utils/context.py:204: AirflowContextDeprecationWarning: Accessing 'next_ds_nodash' from the template is deprecated and will be removed in a future version. Please use '{{ data_interval_end | ds_nodash }}' instead. warnings.warn(_create_deprecation_warning(key, self._deprecation_replacements[key])) [2023-02-06, 12:24:44 GMT] {warnings.py:109} WARNING - /home/airflow/.local/lib/python3.8/site-packages/airflow/utils/context.py:204: AirflowContextDeprecationWarning: Accessing 'next_execution_date' from the template is deprecated and will be removed in a future version. Please use 'data_interval_end' instead. warnings.warn(_create_deprecation_warning(key, self._deprecation_replacements[key])) [2023-02-06, 12:24:44 GMT] {warnings.py:109} WARNING - /home/airflow/.local/lib/python3.8/site-packages/airflow/utils/context.py:204: AirflowContextDeprecationWarning: Accessing 'prev_ds' from the template is deprecated and will be removed in a future version. 
warnings.warn(_create_deprecation_warning(key, self._deprecation_replacements[key])) [2023-02-06, 12:24:44 GMT] {warnings.py:109} WARNING - /home/airflow/.local/lib/python3.8/site-packages/airflow/utils/context.py:204: AirflowContextDeprecationWarning: Accessing 'prev_ds_nodash' from the template is deprecated and will be removed in a future version. warnings.warn(_create_deprecation_warning(key, self._deprecation_replacements[key])) [2023-02-06, 12:24:44 GMT] {warnings.py:109} WARNING - /home/airflow/.local/lib/python3.8/site-packages/airflow/utils/context.py:204: AirflowContextDeprecationWarning: Accessing 'prev_execution_date' from the template is deprecated and will be removed in a future version. warnings.warn(_create_deprecation_warning(key, self._deprecation_replacements[key])) [2023-02-06, 12:24:44 GMT] {warnings.py:109} WARNING - /home/airflow/.local/lib/python3.8/site-packages/airflow/utils/context.py:204: AirflowContextDeprecationWarning: Accessing 'prev_execution_date_success' from the template is deprecated and will be removed in a future version. Please use 'prev_data_interval_start_success' instead. warnings.warn(_create_deprecation_warning(key, self._deprecation_replacements[key])) [2023-02-06, 12:24:44 GMT] {warnings.py:109} WARNING - /home/airflow/.local/lib/python3.8/site-packages/airflow/utils/context.py:204: AirflowContextDeprecationWarning: Accessing 'tomorrow_ds' from the template is deprecated and will be removed in a future version. warnings.warn(_create_deprecation_warning(key, self._deprecation_replacements[key])) [2023-02-06, 12:24:44 GMT] {warnings.py:109} WARNING - /home/airflow/.local/lib/python3.8/site-packages/airflow/utils/context.py:204: AirflowContextDeprecationWarning: Accessing 'tomorrow_ds_nodash' from the template is deprecated and will be removed in a future version. 
warnings.warn(_create_deprecation_warning(key, self._deprecation_replacements[key])) [2023-02-06, 12:24:44 GMT] {warnings.py:109} WARNING - /home/airflow/.local/lib/python3.8/site-packages/airflow/utils/context.py:204: AirflowContextDeprecationWarning: Accessing 'yesterday_ds' from the template is deprecated and will be removed in a future version. warnings.warn(_create_deprecation_warning(key, self._deprecation_replacements[key])) [2023-02-06, 12:24:44 GMT] {warnings.py:109} WARNING - /home/airflow/.local/lib/python3.8/site-packages/airflow/utils/context.py:204: AirflowContextDeprecationWarning: Accessing 'yesterday_ds_nodash' from the template is deprecated and will be removed in a future version. warnings.warn(_create_deprecation_warning(key, self._deprecation_replacements[key])) [2023-02-06, 12:24:44 GMT] {taskinstance.py:1851} ERROR - Task failed with exception Traceback (most recent call last): File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/decorators/base.py&quot;, line 188, in execute return_value = super().execute(context) File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py&quot;, line 370, in execute return super().execute(context=serializable_context) File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py&quot;, line 175, in execute return_value = self.execute_callable() File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py&quot;, line 678, in execute_callable return self._execute_python_callable_in_subprocess(python_path, tmp_path) File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py&quot;, line 411, in _execute_python_callable_in_subprocess self._write_args(input_path) File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py&quot;, line 381, in _write_args file.write_bytes(self.pickling_library.dumps({'args': self.op_args, 'kwargs': self.op_kwargs})) 
_pickle.PicklingError: Can't pickle &lt;function first at 0x7f80ff76e4c0&gt;: it's not the same object as unusual_prefix_6cc7442bed7c02593e3a29524b0e65329d9f59da_test_global_variable.first [2023-02-06, 12:24:44 GMT] {taskinstance.py:1401} INFO - Marking task as FAILED. dag_id=test_global_variable, task_id=one, execution_date=20230205T090900, start_date=20230206T122443, end_date=20230206T122444 [2023-02-06, 12:24:44 GMT] {standard_task_runner.py:102} ERROR - Failed to execute job 72751 for task one (Can't pickle &lt;function first at 0x7f80ff76e4c0&gt;: it's not the same object as unusual_prefix_6cc7442bed7c02593e3a29524b0e65329d9f59da_test_global_variable.first; 239657) [2023-02-06, 12:24:44 GMT] {local_task_job.py:164} INFO - Task exited with return code 1 [2023-02-06, 12:24:44 GMT] {local_task_job.py:273} INFO - 0 downstream tasks scheduled from follow-on schedule check </code></pre> <h2>I have tried to fix it based on the following posts:</h2> <ul> <li><p>I have tried global Python variables, which did not work at all.</p> </li> <li><p><a href="https://stackoverflow.com/questions/58792721/global-variables-in-airflow">Global variables in Airflow</a> - I have separate &quot;task.external_python&quot; tasks, which makes it impossible to use the approach from that post.</p> </li> <li><p>Mine is not a class issue - <a href="https://stackoverflow.com/questions/61705029/list-as-global-variable-inside-a-class-in-python">List as global variable inside a class in Python</a></p> </li> <li><p>Might be interesting, but I have a separate Python venv for each task - <a href="https://stackoverflow.com/a/58804409/10270590">https://stackoverflow.com/a/58804409/10270590</a></p> </li> <li><p>I could not get Airflow XCOM working.</p> </li> <li><p>@TJaniF -&gt; (I retried this a 2nd time and then it worked, but on the 1st run with the same code I got the following results:) I tried the following code: the long top bar is marked as failed, but a single square below it is marked as success, and then there was no square below that square at all. I don't understand this.</p> </li> </ul> <pre><code>from airflow.decorators import dag, task
from pendulum import datetime

@dag(
    dag_id='test_global_variable',
    start_date=datetime(2022,12,10),
    schedule=None,
    catchup=False,)
def write_var():

    @task.external_python(task_id=&quot;task_1&quot;, python='/opt/airflow/venv1/bin/python3')
    def add_to_list(my_list):
        print(my_list)
        my_list.append(19)
        return my_list

    @task.external_python(task_id=&quot;task_2&quot;, python='/opt/airflow/venv1/bin/python3')
    def add_to_list_2(my_list):
        print(my_list)
        my_list.append(42)
        return my_list

    add_to_list_2(add_to_list([23, 5, 8]))

write_var()
</code></pre> <p>Log from the successful task:</p> <pre><code>[2023-02-06, 15:36:52 GMT] {taskinstance.py:1165} INFO - Dependencies all met for &lt;TaskInstance: test_global_variable.task_1 manual__2023-02-06T15:36:51.225176+00:00 [queued]&gt; [2023-02-06, 15:36:52 GMT] {taskinstance.py:1165} INFO - Dependencies all met for &lt;TaskInstance: test_global_variable.task_1 manual__2023-02-06T15:36:51.225176+00:00 [queued]&gt; [2023-02-06, 15:36:52 GMT] {taskinstance.py:1362} INFO - -------------------------------------------------------------------------------- [2023-02-06, 15:36:52 GMT] {taskinstance.py:1363} INFO - Starting attempt 1 of 1 [2023-02-06, 15:36:52 GMT] {taskinstance.py:1364} INFO - -------------------------------------------------------------------------------- [2023-02-06, 15:36:52 GMT] {taskinstance.py:1383} INFO - Executing &lt;Task(_PythonExternalDecoratedOperator): task_1&gt; on 2023-02-06 15:36:51.225176+00:00 [2023-02-06, 15:36:52 GMT] {standard_task_runner.py:54} INFO - Started process 249785 to run task [2023-02-06, 15:36:52 GMT] {standard_task_runner.py:82} INFO - Running: ['airflow', 'tasks', 'run', 'test_global_variable', 'task_1', 'manual__2023-02-06T15:36:51.225176+00:00', '--job-id', '72908', '--raw', '--subdir', 'DAGS_FOLDER/test_global_variable.py', '--cfg-path',
'/tmp/tmpuw6bfiif'] [2023-02-06, 15:36:52 GMT] {standard_task_runner.py:83} INFO - Job 72908: Subtask task_1 [2023-02-06, 15:36:52 GMT] {dagbag.py:525} INFO - Filling up the DagBag from /opt/airflow/dags/test_global_variable.py [2023-02-06, 15:36:52 GMT] {taskmixin.py:205} WARNING - Dependency &lt;Task(_PythonExternalDecoratedOperator): task_1&gt;, task_2 already registered for DAG: test_global_variable [2023-02-06, 15:36:52 GMT] {taskmixin.py:205} WARNING - Dependency &lt;Task(_PythonExternalDecoratedOperator): task_2&gt;, task_1 already registered for DAG: test_global_variable [2023-02-06, 15:36:52 GMT] {taskmixin.py:205} WARNING - Dependency &lt;Task(_PythonExternalDecoratedOperator): task_1&gt;, task_2 already registered for DAG: test_global_variable [2023-02-06, 15:36:52 GMT] {taskmixin.py:205} WARNING - Dependency &lt;Task(_PythonExternalDecoratedOperator): task_2&gt;, task_1 already registered for DAG: test_global_variable [2023-02-06, 15:36:52 GMT] {taskmixin.py:205} WARNING - Dependency &lt;Task(_PythonExternalDecoratedOperator): task_1&gt;, task_2 already registered for DAG: test_global_variable [2023-02-06, 15:36:52 GMT] {taskmixin.py:205} WARNING - Dependency &lt;Task(_PythonExternalDecoratedOperator): task_2&gt;, task_1 already registered for DAG: test_global_variable [2023-02-06, 15:36:52 GMT] {taskmixin.py:205} WARNING - Dependency &lt;Task(_PythonExternalDecoratedOperator): task_1&gt;, task_2 already registered for DAG: test_global_variable [2023-02-06, 15:36:52 GMT] {taskmixin.py:205} WARNING - Dependency &lt;Task(_PythonExternalDecoratedOperator): task_2&gt;, task_1 already registered for DAG: test_global_variable [2023-02-06, 15:36:52 GMT] {taskmixin.py:205} WARNING - Dependency &lt;Task(_PythonExternalDecoratedOperator): task_1&gt;, task_2 already registered for DAG: test_global_variable [2023-02-06, 15:36:52 GMT] {taskmixin.py:205} WARNING - Dependency &lt;Task(_PythonExternalDecoratedOperator): task_2&gt;, task_1 already registered for DAG: 
test_global_variable [2023-02-06, 15:36:52 GMT] {taskmixin.py:205} WARNING - Dependency &lt;Task(_PythonExternalDecoratedOperator): task_1&gt;, task_2 already registered for DAG: test_global_variable [2023-02-06, 15:36:52 GMT] {taskmixin.py:205} WARNING - Dependency &lt;Task(_PythonExternalDecoratedOperator): task_2&gt;, task_1 already registered for DAG: test_global_variable [2023-02-06, 15:36:52 GMT] {taskmixin.py:205} WARNING - Dependency &lt;Task(_PythonExternalDecoratedOperator): task_1&gt;, task_2 already registered for DAG: test_global_variable [2023-02-06, 15:36:52 GMT] {taskmixin.py:205} WARNING - Dependency &lt;Task(_PythonExternalDecoratedOperator): task_2&gt;, task_1 already registered for DAG: test_global_variable [2023-02-06, 15:36:52 GMT] {task_command.py:384} INFO - Running &lt;TaskInstance: test_global_variable.task_1 manual__2023-02-06T15:36:51.225176+00:00 [running]&gt; on host 4851b30aa5cf [2023-02-06, 15:36:52 GMT] {taskinstance.py:1590} INFO - Exporting the following env vars: AIRFLOW_CTX_DAG_OWNER=airflow AIRFLOW_CTX_DAG_ID=test_global_variable AIRFLOW_CTX_TASK_ID=task_1 AIRFLOW_CTX_EXECUTION_DATE=2023-02-06T15:36:51.225176+00:00 AIRFLOW_CTX_TRY_NUMBER=1 AIRFLOW_CTX_DAG_RUN_ID=manual__2023-02-06T15:36:51.225176+00:00 [2023-02-06, 15:36:53 GMT] {process_utils.py:179} INFO - Executing cmd: /opt/airflow/venv1/bin/python3 /tmp/tmd35abbbcv/script.py /tmp/tmd35abbbcv/script.in /tmp/tmd35abbbcv/script.out /tmp/tmd35abbbcv/string_args.txt [2023-02-06, 15:36:53 GMT] {process_utils.py:183} INFO - Output: [2023-02-06, 15:36:54 GMT] {process_utils.py:187} INFO - [23, 5, 8] [2023-02-06, 15:36:54 GMT] {python.py:177} INFO - Done. Returned value was: [23, 5, 8, 19] [2023-02-06, 15:36:54 GMT] {taskinstance.py:1401} INFO - Marking task as SUCCESS. 
dag_id=test_global_variable, task_id=task_1, execution_date=20230206T153651, start_date=20230206T153652, end_date=20230206T153654 [2023-02-06, 15:36:54 GMT] {local_task_job.py:164} INFO - Task exited with return code 0 [2023-02-06, 15:36:54 GMT] {local_task_job.py:273} INFO - 1 downstream tasks scheduled from follow-on schedule check </code></pre> <h2>Screenshot:</h2> <p><a href="https://i.sstatic.net/uCvSz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uCvSz.png" alt="Issue" /></a></p>
<python><python-3.x><airflow><directed-acyclic-graphs><airflow-2.x>
2023-02-06 12:39:18
1
3,146
sogu
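Each `@task.external_python` task runs in its own interpreter process, so an in-memory global list can never be shared between tasks; data has to flow through XCom, i.e. task return values and arguments, which is exactly the pattern the second DAG above uses. Stripped of Airflow, that data flow looks like this (an illustration of the pattern, not Airflow code):

```python
# Plain-Python sketch of the XCom pattern: each "task" receives the list
# as an argument and returns an updated copy instead of mutating a global.
def add_to_list(my_list):
    my_list = list(my_list)  # each task works on its own deserialized copy
    my_list.append(19)
    return my_list

def add_to_list_2(my_list):
    my_list = list(my_list)
    my_list.append(42)
    return my_list

# In Airflow, chaining the decorated tasks like this wires the return
# value of the first task into the second one via XCom.
result = add_to_list_2(add_to_list([23, 5, 8]))
print(result)  # [23, 5, 8, 19, 42]
```

The `PicklingError` in the first DAG is consistent with the operator trying to serialize the full task context pulled in through `**kwargs`; passing only explicit, picklable arguments (as in the second DAG) avoids it.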
75,361,419
5,860,375
Python TypeHint Optional dataclass as None inside dataclass results in mypy error
<p>I am having a hard time wrapping my head around type hints for dataclasses. I have a dataclass Sudoku that creates another dataclass Grid with the method create_grid. The Grid class needs the size variable of the Sudoku in order to initialize correctly. Therefore I type-hint the Grid attribute as Grid | None and set it to None, as I cannot use the default_factory of the field method if I have attributes to pass.</p> <pre><code>from dataclasses import dataclass

from grid import Grid

@dataclass
class Sudoku:
    size: int
    grid: Grid | None = None

    def create_grid(self) -&gt; None:
        &quot;&quot;&quot;Create the grid&quot;&quot;&quot;
        self.grid = Grid(size=self.size)

    def load_grid_values_from_string(self, grid_string: str) -&gt; None:
        &quot;&quot;&quot;Load sudoku from a string&quot;&quot;&quot;
        self.create_grid()
        self.grid.load_grid_values(grid_string)
</code></pre> <p>However, mypy does not like this. My Grid dataclass has the method load_grid_values implemented, and the code runs fine, but mypy still gives this error message:</p> <pre><code>Item &quot;None&quot; of &quot;Optional[Grid]&quot; has no attribute &quot;load_grid_values&quot; [union-attr]
</code></pre> <p>How do I type hint this correctly? I used <code>__post_init__</code> before and passed the self.size, but this gives the same error message.</p>
<python><type-hinting><mypy><python-dataclasses>
2023-02-06 12:38:56
0
801
ConSod
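mypy is right that `self.grid` can still be `None` at the call site; a common way to narrow the type is to have `create_grid` return the grid, so the caller holds a non-Optional reference. A sketch with a stand-in `Grid` (the real `grid` module is not shown in the question, so this `Grid` is a minimal assumption):

```python
from __future__ import annotations

from dataclasses import dataclass

@dataclass
class Grid:  # stand-in for the real grid.Grid
    size: int
    values: str = ""

    def load_grid_values(self, s: str) -> None:
        self.values = s

@dataclass
class Sudoku:
    size: int
    grid: Grid | None = None

    def create_grid(self) -> Grid:
        """Create the grid and return it, so callers get a non-Optional reference."""
        self.grid = Grid(size=self.size)
        return self.grid

    def load_grid_values_from_string(self, grid_string: str) -> None:
        grid = self.create_grid()  # local variable is typed Grid, not Grid | None
        grid.load_grid_values(grid_string)

s = Sudoku(size=9)
s.load_grid_values_from_string("123456789")
```

The alternative is to keep `create_grid` as-is and write `assert self.grid is not None` after calling it, which also narrows `Optional[Grid]` to `Grid` for mypy.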
75,361,308
14,744,714
Configuration error at The Humanitarian Data Exchange (hdx API python)
<p>I'm trying to get data from the resource <code>novel-coronavirus-2019-ncov-cases</code> on the Humanitarian Data Exchange site. Previously everything was fine; then I updated the library to the latest version, <code>hdx-python-api==5.9.7</code>, and I get the following error:</p> <pre><code>Traceback (most recent call last):
  File &quot;/home/user/dashboard/scripts/jhu.py&quot;, line 31, in &lt;module&gt;
    Configuration.create(hdx_site=&quot;prod&quot;, user_agent=&quot;A_Quick_Example&quot;, hdx_read_only=True)
  File &quot;/home/user/anaconda3/lib/python3.8/site-packages/hdx/api/configuration.py&quot;, line 647, in create
    return cls._create(
  File &quot;/home/user/anaconda3/lib/python3.8/site-packages/hdx/api/configuration.py&quot;, line 607, in _create
    cls._configuration.setup_session_remoteckan(remoteckan, **kwargs)
  File &quot;/home/user/anaconda3/lib/python3.8/site-packages/hdx/api/configuration.py&quot;, line 471, in setup_session_remoteckan
    self._session, user_agent = self.create_session_user_agent(
  File &quot;/home/user/anaconda3/lib/python3.8/site-packages/hdx/api/configuration.py&quot;, line 436, in create_session_user_agent
    session = get_session(
  File &quot;/home/user/anaconda3/lib/python3.8/site-packages/hdx/utilities/session.py&quot;, line 173, in get_session
    retries = Retry(
TypeError: __init__() got an unexpected keyword argument 'allowed_methods'
</code></pre> <p>This clearly points to a configuration error. I only need to download the data, so I'm using the read configuration example that was given in the official documentation.</p> <p>Example code:</p> <pre><code>from hdx.api.configuration import Configuration
from hdx.data.dataset import Dataset

Configuration.create(hdx_site='prod', user_agent='A_Quick_Example', hdx_read_only=True)

def save(direct):
    datasets = Dataset.read_from_hdx('novel-coronavirus-2019-ncov-cases')
    print(datasets.get_date_of_dataset())
    resources = Dataset.get_all_resources(datasets)
    for res in resources:
        url, path = res.download(folder=direct)
        print('Resource URL %s downloaded to %s' % (url, path))
</code></pre> <p>Can you help me solve this error?</p>
<python><configuration><hdx>
2023-02-06 12:27:06
1
717
kostya ivanov
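The `TypeError` means the installed `urllib3` predates version 1.26, where the `Retry` keyword `method_whitelist` was renamed to `allowed_methods`; upgrading `urllib3` in the environment (e.g. `pip install -U urllib3`) is the direct fix. As an illustration of version-tolerant calling code, the keyword can also be chosen by inspecting the constructor signature (`DummyRetry` below is a stand-in for a pre-1.26 `Retry`, not the real urllib3 class):

```python
import inspect

def make_retry(retry_cls, total=5, methods=("GET", "POST")):
    """Build a Retry-like object under whichever keyword name this
    version supports: 'allowed_methods' (urllib3 >= 1.26) or the
    older 'method_whitelist'."""
    params = inspect.signature(retry_cls.__init__).parameters
    key = "allowed_methods" if "allowed_methods" in params else "method_whitelist"
    return retry_cls(total=total, **{key: methods})

class DummyRetry:  # stands in for a pre-1.26 urllib3.util.retry.Retry
    def __init__(self, total=0, method_whitelist=None):
        self.total = total
        self.methods = method_whitelist

r = make_retry(DummyRetry)
print(r.methods)  # ('GET', 'POST')
```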
75,361,276
10,569,922
Reshape dataframe from long to wide
<p>My df:</p> <pre><code>d = {'project_id': [19,20,19,20,19,20],
     'task_id': [11,22,11,22,11,22],
     &quot;task&quot;: [&quot;task_1&quot;,&quot;task_1&quot;,&quot;task_1&quot;,&quot;task_1&quot;,&quot;task_1&quot;,&quot;task_1&quot;],
     &quot;username&quot;: [&quot;tom&quot;,&quot;jery&quot;,&quot;tom&quot;,&quot;jery&quot;,&quot;tom&quot;,&quot;jery&quot;],
     &quot;image_id&quot;:[101,202,303,404,505,606],
     &quot;frame&quot;:[0,0,9,8,11,11],
     &quot;label&quot;:['foo','foo','bar','xyz','bar','bar']}
df = pd.DataFrame(data=d)
</code></pre> <p>My df is in long format; it contains some duplicates and only <code>image_id</code> is unique. I am trying to pivot my df with <code>pd.pivot</code> and <code>pd.merge</code> to reshape it from long to wide format by <code>username</code>.<br /> My code:</p> <pre><code>pd.pivot(df, index=['task','frame','image_id'], columns = 'username', values='label')
</code></pre> <p>My output:<br /> <a href="https://i.sstatic.net/2081j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2081j.png" alt="actual" /></a></p> <p>What I expected (or want to reach):<br /> <a href="https://i.sstatic.net/UF8xB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UF8xB.png" alt="expected" /></a></p> <p>So, as you see, I don't really need <code>image_id</code> in my output; just a summary of which label each user used per frame.</p>
<python><pandas><dataframe><reshape>
2023-02-06 12:23:34
1
521
TeoK
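Because `image_id` is unique per row, keeping it in the pivot index prevents the rows from merging into one row per frame. Dropping it and letting `pivot_table` collapse duplicates is one way to get the expected wide table (a sketch using the sample `df` from the question):

```python
import pandas as pd

d = {'project_id': [19, 20, 19, 20, 19, 20],
     'task_id': [11, 22, 11, 22, 11, 22],
     'task': ['task_1'] * 6,
     'username': ['tom', 'jery', 'tom', 'jery', 'tom', 'jery'],
     'image_id': [101, 202, 303, 404, 505, 606],
     'frame': [0, 0, 9, 8, 11, 11],
     'label': ['foo', 'foo', 'bar', 'xyz', 'bar', 'bar']}
df = pd.DataFrame(data=d)

# Pivot on (task, frame) only; aggfunc='first' collapses duplicate
# (task, frame, username) combinations instead of raising.
wide = df.pivot_table(index=['task', 'frame'], columns='username',
                      values='label', aggfunc='first')
print(wide)
```

Frames where only one user tagged anything (e.g. frame 8 or 9 here) come out with `NaN` in the other user's column.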
75,361,253
11,611,246
Translate Matlab PLS Regression code to Python
<hr /> <p><strong>Matlab</strong></p> <p>I have some PLSR regression code in Matlab that I need to translate to Python. The Matlab code is as follows:</p> <pre><code>% PLSR with 15 latent variables
[~,~,~,~,~,MSEcv] = plsregress(X,y,15,&quot;cv&quot;,5,&quot;MCReps&quot;,100);
rRMSE = 100*sqrt(MSEcv(2,:))/(max(y)-min(y));

% Find optimal number of latent variables
minl = 1;
maxl = 15;
nlv = find(rRMSE(minl+1:maxl+1)==min(rRMSE(minl+1:maxl+1)))-1+minl;
nlv = nlv(1);

% Fit with optimal number of latent variables
[XL,yl,XS,YS,beta,PCTVAR,MSE,stats] = plsregress(X,y,nlv);
</code></pre> <p>(PLS regression with a maximum of 15 latent variables, cross-validation with 1/5 of the values and 100 Monte-Carlo repetitions.)</p> <hr /> <p><strong>Python</strong></p> <p>I tried to implement the code in Python as follows:</p> <pre><code>import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_validate
from sklearn.model_selection import ShuffleSplit

# Maximum number of latent variables
mlv = 15
# Cross-validation
cv = 5
# Monte-Carlo repetitions
mcr = 100

# 1...mlv to fit models with various number of latent variables
try_latent_vars = np.arange(1, mlv)

##----------------------------------------------------------------------------|
# Define function to fit PLS model
def optimise_pls_cv(X_vals, y_vals, n_comp, crossval, mcreps):
    '''Fit PLS regression model using cross-validation.'''
    # Define PLS object
    pls = PLSRegression(n_components = n_comp)
    # Cross-validation fit
    cv_split = ShuffleSplit(n_splits = mcreps, test_size = 1/crossval, random_state = 0)
    cvs = cross_validate(pls, X_vals, y_vals, cv = cv_split,
                         scoring = [&quot;r2&quot;, &quot;neg_mean_squared_error&quot;])
    mean_r2_error = np.mean(cvs[&quot;test_r2&quot;])
    test_mse = -np.mean(cvs[&quot;test_neg_mean_squared_error&quot;])
    return pls, mean_r2_error, test_mse

##----------------------------------------------------------------------------|
# Fit PLS model
# Empty lists to store R^2 and mean squared error values
r2s = []
mses = []
for n_comp in try_latent_vars:
    model, r2, mse = optimise_pls_cv(X_vals = X.T,
                                     y_vals = y,
                                     n_comp = n_comp,
                                     crossval = cv,
                                     mcreps = mcr)
    r2s.append(r2)
    mses.append(mse)

index_max_r2s = np.argmax(r2s)
lv = try_latent_vars[index_max_r2s]

##----------------------------------------------------------------------------|
## Fit model with optimized number of components
model, r2, mse = optimise_pls_cv(X_vals = X.T,
                                 y_vals = y,
                                 n_comp = lv,
                                 crossval = cv,
                                 mcreps = mcr)
metrics = {&quot;R2&quot; : r2, &quot;MSE&quot; : mse}
metrics_str = &quot;R2: %0.4f, MSE: %0.4f&quot; % (r2, mse)
</code></pre> <p>In both code snippets, <code>X</code> and <code>y</code> are a set of spectral information and some variable to predict from the spectra, respectively.</p> <hr /> <p><strong>The problem</strong></p> <p>Unfortunately, I get vastly different results from these two versions (the R^2 values for the Matlab version are all 0.5 or higher, while the Python version ends up with R^2 values close to zero or negative). Why is that, and how can I solve this issue?</p> <hr /> <p><strong>Example data</strong></p> <p><a href="https://www.dropbox.com/sh/4unik5hdsvfnrd7/AAAA-JinQPfzLLYOfyLSQHAZa?dl=0" rel="nofollow noreferrer">Here</a> is some example data as .txt files in Dropbox. Read it as:</p> <pre><code>import pandas as pd
X = pd.read_table(&quot;/path/to/saved/files/X.txt&quot;, header = None)
y = pd.read_table(&quot;/path/to/saved/files/y.txt&quot;, header = None)
</code></pre>
<python><matlab><scikit-learn><pls>
2023-02-06 12:20:36
1
1,215
Manuel Popp
75,361,162
12,436,050
Fetching triples using SPARQL query from turtle file
<p>I am new to SPARQL and am currently struggling to fetch triples from a turtle file.</p> <pre><code>### https://ontology/1001
&lt;https://ontology/1001&gt; rdf:type owl:Class ;
    rdfs:subClassOf &lt;https://ontology/748&gt; ;
    &lt;http://www.geneontology.org/formats/oboInOwl#hasExactSynonym&gt; &quot;Injury, neuronal&quot; , &quot;Neurotrauma&quot; ;
    rdfs:label &quot;Nervous system injury&quot; .

### https://ontology/10021
&lt;https://ontology/10021&gt; rdf:type owl:Class ;
    rdfs:subClassOf &lt;https://ontology/2034&gt; ;
    rdfs:label &quot;C3 glomerulopathy&quot; .
</code></pre> <p>I am trying to extract all classes with their superclasses, labels and synonyms. The query which I am running is below.</p> <pre><code>query_id = &quot;&quot;&quot;
prefix oboInOwl: &lt;http://www.geneontology.org/formats/oboInOwl#&gt;
prefix obo: &lt;http://purl.obolibrary.org/obo/&gt;
prefix rdfs: &lt;http://www.w3.org/2000/01/rdf-schema#&gt;
SELECT distinct ?cid ?label ?class ?synonyms
WHERE {
    ?cid rdfs:label ?label .
    ?cid rdfs:subClassOf ?class .
    ?cid oboInOwl:hasExactSynonym ?synonyms .
}
&quot;&quot;&quot;
</code></pre> <p>However, this query filters out the triples where 'hasExactSynonym' doesn't exist.</p> <p>Following is the output:</p> <pre><code>cid      label                  class   synonyms
1001     Nervous system injury  748     Injury, neuronal , Neurotrauma
</code></pre> <p>The expected output is:</p> <pre><code>cid      label                  class   synonyms
1001     Nervous system injury  748     Injury, neuronal , Neurotrauma
10021    C3 glomerulopathy      2034
</code></pre>
<python><sparql><rdflib><turtle-rdf>
2023-02-06 12:11:38
1
1,495
rshar
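A SPARQL basic graph pattern requires every triple pattern to match, so classes without a `hasExactSynonym` triple are dropped from the results. Wrapping that pattern in `OPTIONAL` keeps those classes and leaves `?synonyms` unbound (in rdflib, the unbound variable comes back as `None`):

```sparql
prefix oboInOwl: <http://www.geneontology.org/formats/oboInOwl#>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT DISTINCT ?cid ?label ?class ?synonyms
WHERE {
    ?cid rdfs:label ?label .
    ?cid rdfs:subClassOf ?class .
    OPTIONAL { ?cid oboInOwl:hasExactSynonym ?synonyms . }
}
```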
75,361,137
19,770,795
Bind inner class instance to an instance of its outer class
<h2>Code first</h2> <p>The goal is to design <code>OuterBase</code> such that the following passes:</p> <pre class="lang-py prettyprint-override"><code>class Outer(OuterBase):
    def __init__(self, foo: str) -&gt; None:
        self.foo = foo

    class Inner:
        outer: Outer

        def get_foo(self) -&gt; str:
            return self.outer.foo


inner = Outer(&quot;bar&quot;).Inner()
assert inner.get_foo() == &quot;bar&quot;
</code></pre> <p>My question is closely related to this: <a href="https://stackoverflow.com/q/2024566/19770795">How to access outer class from an inner class?</a></p> <p>But it is decisively different in one relevant nuance. That question is about how to access an outer <strong>class</strong> from inside the inner class. This question is about access to a specific <strong>instance</strong> of the outer class.</p> <h2>Question</h2> <p>Given an <code>Outer</code> and an <code>Inner</code> class, where the latter is defined in the body of the former, and given an <strong>instance</strong> of <code>Outer</code>, can we pass <strong>that instance</strong> to the <code>Inner</code> constructor so as to bind the <code>Inner</code> instance to that <code>Outer</code> instance?</p> <p>So if we did <code>outer = Outer()</code> and then <code>inner = outer.Inner()</code>, there would then be a reference to <code>outer</code> in an attribute of <code>inner</code>.</p> <h2>Secondary requirements</h2> <h4>1) Simplest possible usage (minimal boilerplate)</h4> <p>It would be ideal if the entire logic facilitating this binding of the instances were &quot;hidden&quot; in the <code>Outer</code> class.</p> <p>Then there would be some <code>OuterBase</code> class that a user could inherit from, and all he would have to do is define the <code>Inner</code> class (with the agreed-upon fixed name) and expect its <code>outer</code> attribute (also agreed upon) to hold a reference to an instance of the outer class.</p> <p>Solutions involving <em>decoration</em> of the inner class or explicitly passing it a metaclass or defining a special <code>__init__</code> method and so on would be considered sub-optimal.</p> <h4>2) Type safety (to the greatest degree possible)</h4> <p>The code (both of the implementation and the usage) should ideally pass <code>mypy --strict</code> checks and obfuscate dynamic typing as little as possible.</p> <h2>Hypothetical real-life use case</h2> <p>Say the inner class is used as a settings container for instances of the outer class, similar to how <a href="https://docs.pydantic.dev/" rel="nofollow noreferrer">Pydantic</a> is designed. In Pydantic (possibly for various reasons) the inner <code>Config</code> class is a class-wide configuration, i.e. it applies to <em>all</em> instances of the model (outer class). With a setup like the one I am asking about here, the usage of Pydantic models would remain unchanged, but now deviations in configuration would be possible on an <em>instance</em> level by binding a specific <code>Config</code> instance to a specific model instance.</p>
<python><inheritance><inner-classes>
2023-02-06 12:09:53
1
19,997
Daniel Fainberg
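One way to satisfy the "hidden in the base class" requirement is to have `OuterBase.__init_subclass__` wrap the nested `Inner` class in a descriptor, so that accessing `.Inner` on an *instance* returns a subclass with `outer` pre-bound. A sketch (the names `OuterBase`/`Inner`/`outer` follow the convention from the question; full `mypy --strict` cleanliness would still need some extra casting around the dynamic subclass):

```python
class _BoundInner:
    """Descriptor: accessing instance.Inner yields Inner with `outer` pre-bound."""

    def __init__(self, inner_cls: type) -> None:
        self.inner_cls = inner_cls

    def __get__(self, obj, objtype=None):
        if obj is None:                 # accessed on the class: plain Inner
            return self.inner_cls

        outer_instance = obj

        class Bound(self.inner_cls):    # type: ignore[misc]
            def __init__(self, *args, **kwargs):
                self.outer = outer_instance
                super().__init__(*args, **kwargs)

        Bound.__name__ = self.inner_cls.__name__
        return Bound

class OuterBase:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        inner = cls.__dict__.get("Inner")
        if isinstance(inner, type):
            setattr(cls, "Inner", _BoundInner(inner))

# The usage from the question:
class Outer(OuterBase):
    def __init__(self, foo: str) -> None:
        self.foo = foo

    class Inner:
        outer: "Outer"

        def get_foo(self) -> str:
            return self.outer.foo

inner = Outer("bar").Inner()
print(inner.get_foo())  # bar
```

Each `Outer` instance hands out its own `Inner` subclass, so two outer instances bind their inner objects independently.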
75,361,128
18,096,205
I want to download all the images of my boards on pinterest
<p>I am now trying to download all the images on the board using Selenium.</p> <p>However, I am unable to log in. How should I click on this element and enter my ID and password?</p> <pre><code>&lt;div data-test-id=&quot;login-button&quot;&gt;
</code></pre>
<python><selenium><web-scraping><selenium-chromedriver>
2023-02-06 12:08:59
1
349
Tdayo
75,361,122
6,751,456
python add a list of variables as positional arguments for functions
<p>I have a list of variables that need to be passed to a couple of functions that take the same number of arguments.</p> <pre><code>a = 1
b = 'hello'
c = 1.9

args = (a, b, c)

op = func_a(args) if &lt;cond1&gt; else func_b(args)

def func_a(a, b, c):
    ...

def func_b(a, b, c):
    ...
</code></pre> <p>But when I send this as a tuple, the whole tuple is assigned to argument <code>a</code>, and the function still expects arguments <code>b</code> and <code>c</code> as well.</p> <p>How can I pass this tuple as <code>a</code>, <code>b</code>, and <code>c</code> respectively?</p>
<python><django><function><arguments>
2023-02-06 12:08:25
1
4,161
Azima
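The `*` operator unpacks the tuple back into separate positional arguments at the call site (a runnable sketch; the function bodies here are placeholders, since the originals are elided):

```python
def func_a(a, b, c):
    return ('A', a, b, c)

def func_b(a, b, c):
    return ('B', a, b, c)

a, b, c = 1, 'hello', 1.9
args = (a, b, c)

cond1 = True  # stand-in for the question's <cond1>
# *args spreads the tuple across the three positional parameters.
op = func_a(*args) if cond1 else func_b(*args)
print(op)  # ('A', 1, 'hello', 1.9)
```

The same idea works for keyword arguments with a dict and `**kwargs`.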
75,361,106
18,582,529
Best way to add run parallel proccesses in Python
<p>I have a telegram bot in aiogram and I need to run a few tasks that check Google Sheets in the background. Here is my current code:</p> <pre><code>async def on_startup(_):
    asyncio.create_task(loop1())
    asyncio.create_task(loop2())
    asyncio.create_task(loop3())

if __name__ == '__main__':
    executor.start_polling(dp, skip_updates=True, on_startup=on_startup)
</code></pre> <p>The code in each function looks like this:</p> <pre><code>async def loop1():
    while True:
        # Iterating over data
        time.sleep(PAUSE)
</code></pre> <p>The code runs on a single-threaded VPS, so the loops eventually end up &quot;pausing&quot; and waiting for each other instead of running in parallel. Is there a better way to run these background tasks?</p>
<python><concurrency><parallel-processing><python-asyncio>
2023-02-06 12:06:55
0
663
lisa.smith
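`time.sleep()` blocks the entire event-loop thread, so the three tasks end up running one after another; `await asyncio.sleep()` suspends only the current task and lets the others proceed. A minimal sketch without aiogram (the bot polling is omitted; the `worker` shape mirrors the `loop1`/`loop2`/`loop3` functions above):

```python
import asyncio

async def worker(name, pause, runs, log):
    for _ in range(runs):
        log.append(name)                 # stand-in for "iterating over data"
        await asyncio.sleep(pause)       # yields to the loop; time.sleep() would not

async def main():
    log = []
    # Equivalent of the three create_task() calls: run the loops concurrently.
    await asyncio.gather(
        worker("loop1", 0.01, 3, log),
        worker("loop2", 0.01, 3, log),
        worker("loop3", 0.01, 3, log),
    )
    return log

log = asyncio.run(main())
print(log)
```

If the body of a loop does blocking work (e.g. a synchronous Google Sheets client), wrapping that call in `await asyncio.to_thread(...)` keeps the event loop responsive.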
75,361,034
13,667,627
How to interpolate missing years within pd.groupby()
<p><strong>Problem:</strong></p> <p>I have a dataframe that contains entries with 5-year time intervals. I need to group entries by the 'id' column and interpolate values between the first and last item in each group. I understand that it has to be some combination of groupby(), set_index() and interpolate(), but I am unable to make it work for the whole input dataframe.</p> <p>Sample df:</p> <pre><code>import pandas as pd

data = {
    'id': ['a', 'b', 'a', 'b'],
    'year': [2005, 2005, 2010, 2010],
    'val': [0, 0, 100, 100],
}

df = pd.DataFrame.from_dict(data)
</code></pre> <p>Example input df:</p> <pre><code>_    id   year   val
0    a    2005   0
1    a    2010   100
2    b    2005   0
3    b    2010   100
</code></pre> <p>Expected output df:</p> <pre><code>_    id   year   val   type
0    a    2005   0     original
1    a    2006   20    interpolated
2    a    2007   40    interpolated
3    a    2008   60    interpolated
4    a    2009   80    interpolated
5    a    2010   100   original
6    b    2005   0     original
7    b    2006   20    interpolated
8    b    2007   40    interpolated
9    b    2008   60    interpolated
10   b    2009   80    interpolated
11   b    2010   100   original
</code></pre> <p>'type' is not necessary; it's just for illustration purposes.</p> <p><strong>Question:</strong></p> <p>How can I add the missing years to the groupby() view and interpolate() their corresponding values?</p> <p>Thank you!</p>
<python><python-3.x><pandas><group-by><interpolation>
2023-02-06 11:59:26
2
1,562
Geom
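Reindexing each group onto its full year range inserts the missing years as NaN rows, which `interpolate()` then fills linearly. A sketch with the sample `df` from the question (an explicit loop over groups is used for clarity; `groupby(...).apply` would work too):

```python
import pandas as pd

df = pd.DataFrame({
    'id': ['a', 'b', 'a', 'b'],
    'year': [2005, 2005, 2010, 2010],
    'val': [0, 0, 100, 100],
})

pieces = []
for key, g in df.groupby('id'):
    g = g.set_index('year').sort_index()
    full_years = pd.Index(range(g.index.min(), g.index.max() + 1), name='year')
    g = g.reindex(full_years)                 # missing years appear as NaN rows
    g['type'] = g['val'].notna().map({True: 'original', False: 'interpolated'})
    g['val'] = g['val'].interpolate()         # linear fill between known values
    g['id'] = key
    pieces.append(g.reset_index())

out = pd.concat(pieces, ignore_index=True)
print(out)
```

Computing the `type` column before interpolating is what lets the original rows be told apart from the filled ones.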
75,361,001
1,502,991
Pandas slowness with dataframe size increased size
<p>I want to remove all URLs from a column. The column has string format. My dataframe has two columns: <code>str_val[str], str_length[int]</code>. I am using the following code:</p> <pre><code>t1 = time.time()
reg_exp_val = r&quot;((?:https?://|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/)(?:[^\s()&lt;&gt;]+|\(([^\s()&lt;&gt;]+|(\([^\s()&lt;&gt;]+\)))*\))+)&quot;
df_mdr_pd['str_val1'] = df_mdr_pd.str_val.str.replace(reg_exp_val, r'')
print(time.time()-t1)
</code></pre> <p>When I run the code for <code>10000</code> instances, it finishes in <code>0.6</code> seconds. For <code>100000</code> instances the execution just gets stuck. I tried using <code>.loc[i, i+10000]</code> and running it in a <code>for</code> loop, but it did not help either.</p>
<python><pandas><dataframe>
2023-02-06 11:56:55
1
515
Maria
75,360,972
20,731,770
How to make a button with a command with the parameter/attribute of its name with tkinter?
<p>How to make a button with a command with the parameter/attribute of its name with tkinter?</p> <pre class="lang-py prettyprint-override"><code>from tkinter import * def openfile(name): print(name) for i in l: if num == 2: num = 0 y = y+100 x = 100 print(y) print(x) print(num) bt = Button(second_frame, text=i, command=lambda: openfile(i)).grid(row=y, column=x, pady=10, padx=10) num = num + 1 x = x + 100 print(num) </code></pre> <p>I want to set the button command to be openfile with the button's name as an argument. I don't know how to get the button's name on the same line.</p> <p>NOTE: The button name differs for each button</p>
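In case it is the classic late-binding pitfall: a plain `lambda: openfile(i)` captures the variable `i`, not its current value, so every button would end up printing the last name in the loop. Binding the value as a default argument fixes it; in the snippet above that would be `command=lambda name=i: openfile(name)`. A display-free sketch of the difference:

```python
def openfile(name):
    return name

names = ["a.txt", "b.txt", "c.txt"]  # stand-ins for the real button names

late_bound = [lambda: openfile(name) for name in names]       # all capture the variable
early_bound = [lambda n=name: openfile(n) for name in names]  # each binds the current value

print([f() for f in late_bound])   # every callback sees the final loop value
print([f() for f in early_bound])  # each callback keeps its own value
```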
<python><tkinter><tkinter-button>
2023-02-06 11:54:22
1
590
Adam Basha
75,360,942
2,607,447
unittest - skip folder/do not run tests inside folder automatically
<p>I have two folders with tests in this project:</p> <pre><code>/tests/... /tests_manual/... </code></pre> <p>I want <code>unittest</code> to skip the <code>/tests_manual/...</code> folder, or just discover tests in <code>/tests/...</code></p> <p>Tests inside <code>/tests_manual/...</code> should only be run manually this way:</p> <pre><code>python -m unittest tests_manual/test_something.py </code></pre> <p>How can I do that?</p> <p>I don't want to use <code>@unittest.skip</code> decorators as then <code>python -m unittest tests_manual/test_something.py</code> wouldn't work at all.</p>
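For reference, `python -m unittest discover -s tests` limits discovery to one start directory, which may already be enough here. A self-contained sketch that builds a throwaway project just to show `tests_manual/` being left alone by discovery:

```python
import pathlib
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    (root / "tests").mkdir()
    (root / "tests" / "__init__.py").write_text("")
    (root / "tests" / "test_ok.py").write_text(
        "import unittest\n"
        "class T(unittest.TestCase):\n"
        "    def test_a(self):\n"
        "        pass\n"
    )
    # this file would explode if discovery ever reached it
    (root / "tests_manual").mkdir()
    (root / "tests_manual" / "test_manual.py").write_text(
        "raise RuntimeError('should never be imported by discovery')\n"
    )
    result = subprocess.run(
        [sys.executable, "-m", "unittest", "discover", "-s", "tests"],
        cwd=root, capture_output=True, text=True,
    )

print(result.stderr.strip().splitlines()[-1])
```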
<python><unit-testing><python-unittest>
2023-02-06 11:51:35
0
18,885
Milano
75,360,898
14,808,637
Vectors Concatenation
<p>Suppose I have three vectors A, B, C</p> <blockquote> <pre><code>A vector size of 256 B vector size of 256 C vector size of 256 </code></pre> </blockquote> <p>Now I want to do concatenation in the following way:</p> <blockquote> <pre><code>AB= vector size will be 512 AC = vector size will be 512 BC = vector size will be 512 </code></pre> </blockquote> <p><strong>However,</strong> I need to restrict all the concatenated vectors to <em>256</em>, like:</p> <blockquote> <pre><code>AB= vector size will be 256 AC = vector size will be 256 BC = vector size will be 256 </code></pre> </blockquote> <p>One way is to take the mean of each two values of the two vectors like <code>A first index value</code> and <code>B first index value</code>, <code>A second index value</code> and <code>B second index value</code> ... etc. Similarly, in the concatenation of other vectors.</p> <p><strong>How I implement this:</strong></p> <pre><code>x # torch.Size([32, 3, 256]) # 32 is Batch size, 3 is vector A, vector B, vector C and 256 is each vector dimension def my_fun(self, x): iter = x.shape[0] counter = 0 new_x = torch.zeros((10, x.shape[1]), dtype=torch.float32, device=torch.device('cuda')) for i in range(0, x.shape[0] - 1): iter -= 1 for j in range(0, iter): mean = (x[i, :] + x[i+j, :])/2 new_x[counter, :] = torch.unsqueeze(mean, 0) counter += 1 final_T = torch.cat((x, new_x), dim=0) return final_T ref = torch.zeros((x.shape[0], 15, x.shape[2]), dtype=torch.float32, device=torch.device('cuda')) for i in range (x.shape[0]): ref[i, :, :] = self.my_fun(x[i, :, :]) </code></pre> <p>But this implementation is computationally expensive. One reason is I am iterating <strong>batch-wise</strong> which makes it computationally expensive. Is there any efficient way to implement this task?</p>
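One vectorised alternative to the double loop, sketched with NumPy so it is self-contained; `triu_indices` exists in PyTorch too (`torch.triu_indices`), so the same three lines translate directly and stay on the GPU:

```python
import numpy as np

batch, n, dim = 32, 3, 256
x = np.random.rand(batch, n, dim).astype(np.float32)

# all unordered pairs (i, j) with i < j: (0,1), (0,2), (1,2) when n == 3
i_idx, j_idx = np.triu_indices(n, k=1)

# one broadcasted operation replaces the nested per-batch loops:
# pair_means[:, p] is the elementwise mean of vectors i_idx[p] and j_idx[p]
pair_means = (x[:, i_idx, :] + x[:, j_idx, :]) / 2      # (batch, 3, dim)

final = np.concatenate([x, pair_means], axis=1)         # (batch, 6, dim)
print(final.shape)
```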
<python><numpy><pytorch><torch>
2023-02-06 11:46:18
1
774
Ahmad
75,360,882
6,929,467
InfluxDB: How to deal with missing data?
<h4>Question Description</h4> <p>We perform a lot of timeseries queries. These queries sometimes run into issues: they are usually performed through an API (Python) and sometimes fail completely due to missing data.</p> <p>Because of this, we are not sure where to educate ourselves and find the answer to this specific question: how to deal with missing data in our timeseries <em>(InfluxDB)</em> database.</p> <h4>Example</h4> <p>To describe the problem with an example:</p> <p>We have some timeseries data; let's say we measure the temperature of a room. We have many rooms, and sometimes sensors die or stop working for a week or two before we replace them, so in that timeframe the data is missing.</p> <p>Now certain calculations fail. Let's say we want to calculate the average temperature for each day; this will fail because on some days we have no measurement input from the sensors.</p> <p>One approach that we thought of is to just interpolate the data for those days: use the last and the first available values and fill the missing days with them.</p> <p>This has many downsides, the major one being that interpolated data is fake and you can't trust it; for our more serious processes we would prefer not to store fake (interpolated) data.</p> <p>We are wondering what the possible alternatives are, and where we can find resources to educate ourselves on this topic.</p>
<python><time-series><influxdb>
2023-02-06 11:44:14
1
2,720
innicoder
75,360,686
15,178,267
Django: byte indices must be integers or slices, not str error
<p>I am trying to get some data from a payload using a webhook. In the response that I sent alongside, I added the metadata that should return with the payload, and this is how I did it (<em>code below</em>). If you take a close look at the keys, there is one called <code>meta</code>; now I am trying to get some of the data that is in the meta from my webhook view like this:</p> <p><strong>views.py webhook</strong></p> <pre><code>@csrf_exempt @require_http_methods(['POST', 'GET']) def webhook(request): payload = request.body order_id = payload['meta']['order_id'] print(order_id) return HttpResponse(&quot;testing...&quot;) </code></pre> <p><strong>Sending the response to the payment gateway</strong></p> <pre><code>@csrf_exempt def process_payment(request, name, email, amount, order_id): auth_token = &quot;FLWSECK_TEST-9efb9ee17d8afe40c6c890294a1163de-X&quot; hed = {'Authorization': 'Bearer ' + auth_token} data = { &quot;tx_ref&quot;:''+str(math.floor(1000000 + random.random()*9000000)), &quot;amount&quot;:amount, &quot;currency&quot;:&quot;USD&quot;, &quot;order_id&quot;:order_id, &quot;redirect_url&quot;:f&quot;http://localhost:3000/service-detail/booking/confirmation/{order_id}&quot;, &quot;payment_options&quot;:&quot;card&quot;, &quot;meta&quot;:{ &quot;order_id&quot;:order_id, &quot;consumer_id&quot;:23, &quot;consumer_mac&quot;:&quot;92a3-912ba-1192a&quot; }, &quot;customer&quot;:{ &quot;email&quot;:email, &quot;name&quot;:name, &quot;order_id&quot;:order_id }, &quot;customizations&quot;:{ &quot;title&quot;:&quot;ForecastFaceoff&quot;, &quot;description&quot;:&quot;Leading Political Betting Platform&quot;, &quot;logo&quot;:&quot;https://i.im.ge/2022/08/03m/FELzix.stridearn-high-quality-logo-circle.jpg&quot; } } url = ' https://api.flutterwave.com/v3/payments' response = requests.post(url, json=data, headers=hed) response=response.json() link=response['data']['link'] return redirect(link) </code></pre> <p>payload</p> <pre><code>{ &quot;event&quot;:
&quot;charge.completed&quot;, &quot;data&quot;: { &quot;id&quot;: 4136234873, &quot;tx_ref&quot;: &quot;6473247093&quot;, &quot;flw_ref&quot;: &quot;FLW-MOCK-b33e86ab2342316fec664110e8eb842a3c2f956&quot;, &quot;device_fingerprint&quot;: &quot;df38c8854324598c54e16feacc65348a5e446&quot;, &quot;amount&quot;: 152, &quot;currency&quot;: &quot;USD&quot;, &quot;charged_amount&quot;: 152, &quot;app_fee&quot;: 5.78, &quot;merchant_fee&quot;: 0, &quot;processor_response&quot;: &quot;Approved. Successful&quot;, &quot;auth_model&quot;: &quot;VBVSECUREertrCODE&quot;, &quot;ip&quot;: &quot;52.209.154.143&quot;, &quot;narration&quot;: &quot;CARD Transaction &quot;, &quot;status&quot;: &quot;successful&quot;, &quot;payment_type&quot;: &quot;card&quot;, &quot;created_at&quot;: &quot;2023-02-06T11:19:45.000Z&quot;, &quot;account_id&quot;: 685622, &quot;customer&quot;: { &quot;id&quot;: 1970338, &quot;name&quot;: &quot;destiny &quot;, &quot;phone_number&quot;: null, &quot;email&quot;: &quot;******@gmail.com&quot;, &quot;created_at&quot;: &quot;2023-02-06T11:19:45.000Z&quot; }, &quot;card&quot;: { &quot;first_6digits&quot;: &quot;653188&quot;, &quot;last_4digits&quot;: &quot;2340&quot;, &quot;issuer&quot;: &quot;MASTERCARD CREDIT&quot;, &quot;country&quot;: &quot;NG&quot;, &quot;type&quot;: &quot;MASTERCARD&quot;, &quot;expiry&quot;: &quot;49/32&quot; } }, &quot;event.type&quot;: &quot;CARD_TRANSACTION&quot; } </code></pre> <p>The metadata is not even showing in the payload, must it show up there in the payload before I can grab the data that is sent with it?</p>
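A note on the error itself: `request.body` is a raw `bytes` object, so `payload['meta']` indexes bytes with a string, which is exactly the exception reported. Parsing the body first avoids it; a minimal sketch (the byte string below stands in for `request.body`, and depending on the gateway the `meta` block may arrive nested under `data` rather than at the top level):

```python
import json

raw = b'{"event": "charge.completed", "meta": {"order_id": 17}}'  # stand-in for request.body

payload = json.loads(raw)                        # bytes -> dict
order_id = payload.get("meta", {}).get("order_id")
print(order_id)
```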
<python><django><django-rest-framework><django-views><django-queryset>
2023-02-06 11:25:29
2
851
Destiny Franks
75,360,667
3,767,820
How to use FilteredRelation with OuterRef?
<p>I'm trying to use Django ORM to generate a queryset and I can't find how to use an <code>OuterRef</code> in the joining condition with a <code>FilteredRelation</code>.</p> <p><strong>What I have in Django</strong></p> <p><em>Main queryset</em></p> <pre class="lang-py prettyprint-override"><code>queryset = LineOutlier.objects.filter(report=self.kwargs['report_pk'], report__apn__customer__cen_id=self.kwargs['customer_cen_id']) \ .select_related('category__traffic') \ .select_related('category__frequency') \ .select_related('category__stability') \ .prefetch_related('category__traffic__labels') \ .prefetch_related('category__frequency__labels') \ .prefetch_related('category__stability__labels') \ .annotate(history=subquery) </code></pre> <p><em>The subquery</em></p> <pre class="lang-py prettyprint-override"><code>subquery = ArraySubquery( LineOutlierReport.objects .filter((Q(lineoutlier__imsi=OuterRef('imsi')) | Q(lineoutlier__isnull=True)) &amp; Q(id__in=last_5_reports_ids)) .values(json=JSONObject( severity='lineoutlier__severity', report_id='id', report_start_date='start_date', report_end_date='end_date' ) ) ) </code></pre> <p>The request can be executed, but the SQL generated is not exactly what I want :</p> <p><strong>SQL Generated</strong></p> <pre class="lang-sql prettyprint-override"><code>SELECT &quot;mlformalima_lineoutlier&quot;.&quot;id&quot;, &quot;mlformalima_lineoutlier&quot;.&quot;imsi&quot;, ARRAY( SELECT JSONB_BUILD_OBJECT('severity', V1.&quot;severity&quot;, 'report_id', V0.&quot;id&quot;, 'report_start_date', V0.&quot;start_date&quot;, 'report_end_date', V0.&quot;end_date&quot;) AS &quot;json&quot; FROM &quot;mlformalima_lineoutlierreport&quot; V0 LEFT OUTER JOIN &quot;mlformalima_lineoutlier&quot; V1 ON (V0.&quot;id&quot; = V1.&quot;report_id&quot;) WHERE ((V1.&quot;imsi&quot; = (&quot;mlformalima_lineoutlier&quot;.&quot;imsi&quot;) OR V1.&quot;id&quot; IS NULL) AND V0.&quot;id&quot; IN (SELECT DISTINCT ON (U0.&quot;id&quot;) U0.&quot;id&quot; 
FROM &quot;mlformalima_lineoutlierreport&quot; U0 WHERE U0.&quot;apn_id&quot; = 2 ORDER BY U0.&quot;id&quot; ASC, U0.&quot;end_date&quot; DESC LIMIT 5)) ) AS &quot;history&quot;, FROM &quot;mlformalima_lineoutlier&quot; </code></pre> <p>The problem here is that the <code>OuterRef</code> condition <code>(V1.&quot;imsi&quot; = (&quot;mlformalima_lineoutlier&quot;.&quot;imsi&quot;))</code> is done on the <code>WHERE</code> statement, and I want it to be on the <code>JOIN</code> statement</p> <p><strong>What I want in SQL</strong></p> <pre class="lang-sql prettyprint-override"><code>SELECT &quot;mlformalima_lineoutlier&quot;.&quot;id&quot;, &quot;mlformalima_lineoutlier&quot;.&quot;imsi&quot;, ARRAY( SELECT JSONB_BUILD_OBJECT('severity', V1.&quot;severity&quot;, 'report_id', V0.&quot;id&quot;, 'report_start_date', V0.&quot;start_date&quot;, 'report_end_date', V0.&quot;end_date&quot;) AS &quot;json&quot; FROM &quot;mlformalima_lineoutlierreport&quot; V0 LEFT OUTER JOIN &quot;mlformalima_lineoutlier&quot; V1 ON (V0.&quot;id&quot; = V1.&quot;report_id&quot; AND ((V1.&quot;id&quot; IS NULL) OR V1.&quot;imsi&quot; = (&quot;mlformalima_lineoutlier&quot;.&quot;imsi&quot;))) WHERE V0.&quot;id&quot; IN (SELECT DISTINCT ON (U0.&quot;id&quot;) U0.&quot;id&quot; FROM &quot;mlformalima_lineoutlierreport&quot; U0 WHERE U0.&quot;apn_id&quot; = 2 ORDER BY U0.&quot;id&quot; ASC, U0.&quot;end_date&quot; DESC LIMIT 5)) ) AS &quot;history&quot;, FROM &quot;mlformalima_lineoutlier&quot; </code></pre> <p><strong>What I tried in Django</strong></p> <p>I tried to use the <code>FilteredRelation</code> to change the JOIN condition, but I can't seem to use it in combination with an <code>OuterRef</code></p> <pre class="lang-py prettyprint-override"><code>subquery = ArraySubquery( LineOutlierReport.objects .annotate(filtered_relation=FilteredRelation('lineoutlier', condition=Q(lineoutlier__imsi=OuterRef('imsi')) | Q(lineoutlier__isnull=True))) .filter(Q(id__in=last_5_reports_ids)) 
.values(json=JSONObject( severity='filtered_relation__severity', report_id='id', report_start_date='start_date', report_end_date='end_date' ) ) ) </code></pre> <p>I can't execute this query because of the following error:</p> <p><code>ValueError: This queryset contains a reference to an outer query and may only be used in a subquery.</code></p> <p>How can I modify my query to make it work?</p>
<python><django><postgresql><django-orm>
2023-02-06 11:23:45
1
712
Xavier FRANCOIS
75,360,619
1,479,619
Python async socket - request trimmed (received partially)
<p>I have written a Python async socket server, basically listens on port and responds to it. However, sometimes I observe that the request is sent partially e.g. the Content-Length indicates that the length is X but actual request received is kinda trimmed halfway. What could go wrong here?</p> <pre><code>SERVER_HOST = '127.0.0.1' SERVER_PORT = 12345 server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) server_socket.setblocking(False) server_socket.bind((SERVER_HOST, SERVER_PORT)) server_socket.listen(5) inputs = [server_socket] outputs = [] messages = {} while True: for i in inputs: if i.fileno() &lt; 0: inputs.remove(i) for i in outputs: if i.fileno() &lt; 0: outputs.remove(i) readable, writable, exceptional = select.select(inputs, outputs, inputs, 1) if not (readable or writable or exceptional): continue for s in readable: if s is server_socket: connection, client_address = s.accept() connection.setblocking(0) inputs.append(connection) messages[connection] = queue.Queue() else: data = s.recv(self.params[&quot;buffer_size&quot;]) if data: messages[s].put(data) if s not in outputs: outputs.append(s) else: # Interpret empty result as closed connection if s in outputs: outputs.remove(s) inputs.remove(s) s.close() del messages[s] # Handle outputs for s in writable: try: next_msg = messages[s].get_nowait() # handle next_msg here handle(next_msg) except Exception as ex: print('Error: ', s.getpeername(), ': ', ex) outputs.remove(s) else: s.sendall(response.encode(&quot;utf-8&quot;)) outputs.remove(s) s.shutdown(socket.SHUT_WR) s.close() # close socket server_socket.close() </code></pre> <p>And based on the comments, the TCP is stream based, I changed a little bit on recv e.g. 
read all until no more.</p> <pre><code>for s in readable: if s is server_socket: # A &quot;readable&quot; server socket is ready to accept a connection connection, client_address = s.accept() print('new connection from', client_address) connection.setblocking(0) inputs.append(connection) # Give the connection a queue for data we want to send messages[connection] = queue.Queue() else: _data = s.recv(self.params[&quot;buffer_size&quot;]) if _data: data = _data while _data: print('received &quot;%s&quot; from %s' % (_data, s.getpeername())) try: _data = s.recv(self.params[&quot;buffer_size&quot;]) if _data: data += _data except: break # Add output channel for response if s not in outputs: outputs.append(s) messages[s].put(data) else: # Interpret empty result as closed connection print('closing', client_address, 'after reading no data') # Stop listening for input on the connection if s in outputs: outputs.remove(s) inputs.remove(s) s.close() # Remove message queue del messages[s] </code></pre> <p>But, still, there is a chance this will not work.</p>
<python><sockets>
2023-02-06 11:19:18
0
6,101
justyy
75,360,577
6,224,557
Create numpy array from panda daataframe inside a For loop
<p>Let's say that I have the following dataframe:</p> <pre><code>data = {&quot;Names&quot;: [&quot;Ray&quot;, &quot;John&quot;, &quot;Mole&quot;, &quot;Smith&quot;, &quot;Jay&quot;, &quot;Marc&quot;, &quot;Tom&quot;, &quot;Rick&quot;], &quot;Sports&quot;: [&quot;Soccer&quot;, &quot;Judo&quot;, &quot;Tenis&quot;, &quot;Judo&quot;, &quot;Tenis&quot;,&quot;Soccer&quot;,&quot;Judo&quot;,&quot;Tenis&quot;]} </code></pre> <p>I want to have a for loop such that for each unique Sport I am able to retrieve a numpy array containing the Names of people playing that sport. In pseudo code that can be explained as</p> <pre><code>for unique sport in sports: nArray= numpy array of names of people practicing sport --------- Do something with nArray ------- </code></pre>
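A sketch of the pseudo-code with `groupby`: iterating over the groupby object yields each unique sport together with its sub-frame, and `.to_numpy()` gives the names as a NumPy array:

```python
import pandas as pd

data = {"Names": ["Ray", "John", "Mole", "Smith", "Jay", "Marc", "Tom", "Rick"],
        "Sports": ["Soccer", "Judo", "Tenis", "Judo", "Tenis", "Soccer", "Judo", "Tenis"]}
df = pd.DataFrame.from_dict(data)

for sport, group in df.groupby("Sports"):
    n_array = group["Names"].to_numpy()   # numpy array of names for this sport
    print(sport, n_array)                 # --------- do something with n_array ---------
```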
<python><pandas><numpy>
2023-02-06 11:14:31
3
305
Rolando Azevedo
75,360,558
7,800,760
Pylint disagrees with VSCode and python in imports
<p>I cannot find the way to properly set this up so that both pylint <strong>and</strong> the execution of the code (within VSCode or from the command line) would work.</p> <p>There are some similar questions but none seems to apply to my project structure with a src directory under which there will be multiple packages. Here's the simplified project structure:</p> <pre><code>. β”œβ”€β”€ README.md β”œβ”€β”€ src β”‚Β Β  β”œβ”€β”€ rssita β”‚Β Β  β”‚Β Β  β”œβ”€β”€ __init__.py β”‚Β Β  β”‚Β Β  β”œβ”€β”€ feeds.py β”‚Β Β  β”‚Β Β  β”œβ”€β”€ rssita.py β”‚Β Β  β”‚Β Β  └── termcolors.py β”‚Β Β  └── zanotherpackage β”‚Β Β  β”œβ”€β”€ __init__.py β”‚Β Β  └── anothermodule.py └── tests β”œβ”€β”€ __init__.py └── test_feeds.py </code></pre> <p>From what I understand <strong>rssita</strong> is one of my packages (because of the <strong>init</strong>.py file) with some modules under it, amongst which the <strong>rssita.py</strong> file contains the following imports:</p> <pre><code>from feeds import RSS_FEEDS from termcolors import PC </code></pre> <p>The <strong>rssita.py</strong> code as shown above runs well both from within <strong>VSCode</strong> and from the command line with <strong>python</strong> ( python src/rssita/rssita.py ) from the project root, but at the same time <strong>pylint</strong> (both from within VSCode and from the command line (<em>pylint src</em> or <em>pylint src/rssita</em>)) flags the two imports as not found.</p> <p>If I modify the code as follows:</p> <pre><code>from rssita.feeds import RSS_FEEDS from rssita.termcolors import PC </code></pre> <p>pylint will then be happy but the code will not run anymore, since it would not find the imports.</p> <p>What's the cleanest fix for this?</p>
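In case it helps, two commonly suggested directions (both are assumptions about intent, not a definitive fix): keep the absolute imports (`from rssita.feeds import RSS_FEEDS`) and run the program as a module (`python -m rssita.rssita` after an editable install), or keep the bare imports and tell pylint where that source root lives, for example via a `.pylintrc` (the file name and hook below are illustrative):

```ini
[MASTER]
; make pylint resolve `from feeds import ...` the same way that running
; `python src/rssita/rssita.py` does (the script's directory is on sys.path)
init-hook='import sys; sys.path.append("src/rssita")'
```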
<python><pylint>
2023-02-06 11:11:56
1
1,231
Robert Alexander
75,360,489
20,646,427
How to connect multiple chained dropdown with django-forms-dynamic
<p>I have a form with counterparty, object and sections. I connected them to each other with the django-forms-dynamic package, but object is not connected to sections.</p> <p>Counterparty is connected to the object form, but sections are not connected to object. How can I fix that?</p> <p>I guess I'm wrong in the 2 functions in forms.py, section_choices and initial_sections: they're not connected to objects, but I don't know how to fix that.</p> <p>forms.py</p> <pre><code> class WorkLogForm(DynamicFormMixin, forms.ModelForm): def object_choices(form): contractor_counter = form['contractor_counter'].value() object_query = ObjectList.objects.filter(contractor_guid__in=[contractor_counter]) return object_query def initial_object(form): contractor_counter = form['contractor_counter'].value() object_query = ObjectList.objects.filter(contractor_guid__in=[contractor_counter]) return object_query.first() def section_choices(form): contractor_object = form['contractor_object'].value() section_query = SectionList.objects.filter(object=contractor_object) return section_query def initial_sections(form): contractor_object = form['contractor_object'].value() section_query = SectionList.objects.filter(object=contractor_object) return section_query.first() contractor_counter = forms.ModelChoiceField( label='ΠšΠΎΠ½Ρ‚Ρ€Π°Π³Π΅Π½Ρ‚', queryset=CounterParty.objects.none(), initial=CounterParty.objects.first(), empty_label='', ) contractor_object = DynamicField( forms.ModelChoiceField, label='ΠžΠ±ΡŠΠ΅ΠΊΡ‚', queryset=object_choices, initial=initial_object, ) contractor_section = DynamicField( forms.ModelMultipleChoiceField, label='Π Π°Π·Π΄Π΅Π»', queryset=section_choices, initial=initial_sections, ) </code></pre> <p>views.py</p> <pre><code>@login_required def create_work_log(request): if request.method == 'POST': form = WorkLogForm(request.POST, user=request.user) if form.is_valid(): work_log = form.save(commit=False) work_log.author = request.user work_log = form.save() messages.success(request, 
'Π”Π°Π½Π½Ρ‹Π΅ занСсСны ΡƒΡΠΏΠ΅ΡˆΠ½ΠΎ', {'work_log': work_log}) return redirect('create_worklog') else: messages.error(request, 'Ошибка Π²Π°Π»ΠΈΠ΄Π°Ρ†ΠΈΠΈ') return redirect('create_worklog') form = WorkLogForm(user=request.user, initial=initial) return render(request, 'contractor/create_work_log.html', {'form': form}) def contractor_object(request): form = WorkLogForm(request.GET, user=request.user) return HttpResponse(form['contractor_object']) def contractor_section(request): form = WorkLogForm(request.GET, user=request.user) return HttpResponse(form['contractor_section']) </code></pre>
<python><django>
2023-02-06 11:04:14
1
524
Zesshi
75,360,485
7,241,287
How to install Python or R in an apptainer?
<p>I am new to Docker and am using Apptainer for this.</p> <p>The def file is <strong><code>firstApp.def</code></strong>:</p> <pre><code>Bootstrap: docker From: ubuntu:22.04 %environment export LC_ALL=C </code></pre> <p>Then I built it as follows, and I want it to be writable (I hope I am not so naive), so I can install some packages later:</p> <pre><code>apptainer build --sandbox --fakeroot firstApp.sif firstApp.def </code></pre> <p>Now I do not know how to install Python 3 (preferably 3.8 or later).</p> <p>I tried to add the following command lines to the def file:</p> <pre><code>%post apt-get -y install update apt-get -y install python3.8 </code></pre> <p>It raises these errors even without &quot;<code>apt-get -y install python3.8</code>&quot;:</p> <pre><code>Reading package lists... Done Building dependency tree... Done Reading state information... Done E: Unable to locate package update FATAL: While performing build: while running engine: exit status 100 </code></pre>
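For what it's worth, the apt error looks unrelated to Apptainer itself: `apt-get -y install update` asks apt to install a package literally named `update` (hence "Unable to locate package update"), while refreshing the package index is `apt-get update`. A sketch of a corrected definition file (the exact package names are an assumption):

```
Bootstrap: docker
From: ubuntu:22.04

%post
    apt-get update -y                       # refresh the package index first
    apt-get install -y python3 python3-pip  # then install the actual packages

%environment
    export LC_ALL=C
```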
<python><docker><apptainer>
2023-02-06 11:03:48
1
730
m.i.cosacak
75,360,284
11,409,379
Request for data that generates chart always empty
<p>I am trying to scrape data that generates a chart on a website using Python's requests module.</p> <p>My code currently looks like this:</p> <pre class="lang-py prettyprint-override"><code># load modules import os import json import requests as r # url to send the call to postURL = &lt;insert website&gt; # utilize get to pull cookie data cookie_intel = r.get(postURL, verify = False) # get cookies search_cookies = cookie_intel.cookies #### Request Information #### # API request data post_data = &lt;insert request json&gt; # header information headers = {&quot;user-agent&quot;:&quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36&quot;} # results results_post = r.post(postURL, data = post_data, cookies = search_cookies, headers = headers, verify = False) # result print(results_post.json()) </code></pre> <p>As a quick summary, I first loaded the site and then inspected it; from there I identified the URL for the request in the network tab and then checked the required request data in the payload tab. Then I took the user-agent from the request headers tab.</p> <p>The request itself works; however, the response is always empty. I have tried altering all sorts of inputs but without success. I would highly appreciate any sort of tips that would help me to solve this issue. Thank you in advance!</p>
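One frequent cause of an empty response in this situation: the browser sent the payload as JSON (`Content-Type: application/json` in the request headers), while `requests.post(..., data=...)` form-encodes the dict, so the server parses nothing. Passing `json=` instead serializes the dict and sets the header for you. A sketch that only prepares the requests (nothing is actually sent), to show the difference:

```python
import requests

post_data = {"query": "chart", "page": "1"}  # illustrative payload

# data= form-encodes the payload; json= serializes it as JSON
form_req = requests.Request("POST", "https://example.com/api", data=post_data).prepare()
json_req = requests.Request("POST", "https://example.com/api", json=post_data).prepare()

print(form_req.headers["Content-Type"])  # form-encoded body
print(json_req.headers["Content-Type"])  # JSON body
```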
<python><web-scraping><python-requests>
2023-02-06 10:46:21
1
1,816
fabla
75,360,256
5,349,291
Producing data for a cumulative distribution plot in bigquery
<p>I'd like to produce a cumulative plot that shows, for a given value, the percentage of data that is less than or equal to that value. In python / matplotlib / pandas, I can do this with the quantile function provided by pandas (and I guess numpy too):</p> <pre><code>import numpy as np import matplotlib.pyplot as plt def plot_quantile(series, start=0., end=0.99): y = np.linspace(start,end, 500) x = series.quantile(y) plt.plot(x, y*100) plt.xlabel(&quot;Duration in minutes&quot;) plt.ylabel(&quot;% of drives less than x minutes&quot;) plt.grid() plot_quantile(df.duration) </code></pre> <p>In this case I'm plotting the distribution of taxi ride duration from <a href="https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page" rel="nofollow noreferrer">the NYC taxi dataset</a>.</p> <p><a href="https://i.sstatic.net/VsX9Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VsX9Z.png" alt="enter image description here" /></a></p> <p>I'd like to produce similar data with an SQL query to bigquery. I'm pretty close with the following query:</p> <pre><code> select approx_quantiles(duration, 100) as duration_quantile from base_table </code></pre> <p>This gives me 101 data points, starting at the minimum value and ending at the max value. Now I have two problems:</p> <ul> <li>I have no idea how the values correspond to the quantile (e.g. which value is the P50?) - I'll need to generate those numbers too for the plot, as you see in the python code.</li> <li>I don't seem to have a way to truncate it near the top - as the max value is likely to be a very large outlier that makes my plot hard to read.</li> </ul>
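On the first question: `APPROX_QUANTILES(x, 100)` returns 101 values where the element at 0-based offset i approximates the i-th percentile, so offset 50 is the P50. Unnesting the array `WITH OFFSET` pairs each value with its percentile, and filtering on the offset trims the outlier at the top. A hedged sketch (column and table names taken from the question, otherwise untested):

```sql
SELECT
  off AS percentile,            -- the row with off = 50 is the P50
  q   AS duration
FROM (
  SELECT APPROX_QUANTILES(duration, 100) AS qs
  FROM base_table
), UNNEST(qs) AS q WITH OFFSET AS off
WHERE off <= 99                 -- drop offset 100 (the max value, likely an outlier)
ORDER BY off
```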
<python><sql><pandas><google-bigquery>
2023-02-06 10:43:57
1
2,074
mchristos
75,360,210
11,016,652
TensorFlow classifiying integers for divisibility by 3 does not work
<p>I want to learn TensorFlow. I found a code which classifies integers for divisibility by 2 <a href="https://stackoverflow.com/a/54225144/11016652">here</a>. It works very well and accuracy is 100%. I only had to add an import command for numpy at the very beginning.</p> <p>Now I wanted to change it to classify for divisibility by 3 instead of 2. I changed one single line. I changed</p> <pre class="lang-py prettyprint-override"><code>Y.append( to_categorical(v%2, 2) ) </code></pre> <p>to</p> <pre class="lang-py prettyprint-override"><code>Y.append( to_categorical(0 if v%3 == 0 else 1, 2) ) </code></pre> <p>But now it no longer works. It always predicts 1 and the accuracy is 0.67. How can that be as I didn't change the style of the code? I only changed the classification function. I tried to use different loss functions, add an hidden layer and also different activation functions. Nothing helped. I want to know why the code no longer works after applying this little change. Here is my code:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from keras.models import Sequential from keras.layers import Dense from keras.utils import to_categorical # Helper function to convert a number # to its fixed width binary representation def conv(x): a = format(x, '032b') l = list(str(a)) l = np.array(list(map(int, l))) return l # input data data = [conv(i) for i in range(100000)] X = np.array(data) Y= list() # empty list of results for v in range(100000): Y.append( to_categorical(0 if v%3 == 0 else 1, 2) ) Y = np.array(Y) # we need np.array # Sequential is a fully connected network model = Sequential() # 32 inputs and 1 neuron in the first layer (hidden layer) model.add(Dense(1, input_dim=32, activation='relu')) # 2 output layer model.add(Dense(2, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) # epochs is the number of times to retrain over the same data set # batch_size is how may elements to 
process in parallel at one go model.fit(X, Y, epochs=5, batch_size=100, verbose=1) weights, biases = model.layers[0].get_weights() print(&quot;weights&quot;,weights.size, weights, &quot;biases&quot;, biases) model.summary() </code></pre> <p>I have also read <a href="https://stackoverflow.com/questions/41488279/neural-network-always-predicts-the-same-class">Neural network always predicts the same class</a> but nothing helped.</p>
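One hedged explanation for the asymmetry, separate from any tuning: divisibility by 2 is determined by the last bit alone, a linearly separable function of the 32 inputs that even a single hidden unit can express, whereas n % 3 depends on a signed sum over all bits (2^k mod 3 alternates between 1 and 2, and 2 is congruent to -1), which the one-neuron bottleneck cannot represent no matter how long it trains. A small check of that arithmetic fact:

```python
import numpy as np

def bits(n):
    # same fixed-width binary encoding as conv() in the question, MSB first
    return np.array(list(map(int, format(n, "032b"))))

# weight for the bit worth 2**k is (-1)**k, i.e. 2**k mod 3 read as +1 / -1
signs = np.array([(-1) ** i for i in range(32)])[::-1]

for n in (12345, 99, 100, 2**20 + 7):
    assert n % 2 == bits(n)[-1]            # parity: the last bit is enough
    assert n % 3 == (bits(n) @ signs) % 3  # mod 3: needs every bit, signed
print("ok")
```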
<python><tensorflow><tensorflow2.0>
2023-02-06 10:40:10
1
2,424
zomega
75,360,160
9,113,910
pip: cannot install dependencies from wheel built by poetry if pyproject.toml contains dependencies from file
<p>Im using <code>poetry</code> in my project to manage dependencies, so my <code>pyproject.toml</code> contains all my dependencies. My goal is is to build a wheel from the current project and install them in another <code>virtualenv</code> using <code>pip</code>. But I'm facing an error:</p> <pre><code>pip._vendor.pkg_resources.RequirementParseError: Invalid URL: artifacts/pp-0.358.0.tar.gz </code></pre> <p>I guess that the problem may be in the relative path, but I don't know how to fix it, because when installing dependencies, this package is taken from the tar.gz file located in this path.</p> <p>Here is my <code>pyproject.toml</code>:</p> <pre><code>[tool.poetry] name = &quot;package_name&quot; version = &quot;0.0.0&quot; description = &quot;&quot; readme = &quot;README.md&quot; packages = [ {include = &quot;package1&quot;}, {include = &quot;package2&quot;}, {include = &quot;package3&quot;}, ] [tool.poetry.dependencies] python = &quot;~3.8&quot; pp = {file = &quot;artifacts/pp-0.358.0.tar.gz&quot;} ...some other deps </code></pre> <p>To build a wheel I'm using the following command:</p> <pre><code>poetry build --format wheel </code></pre> <p>Then I'm trying to install the wheel in another virtualenv with following command:</p> <pre><code>pip3 install /Users/av/Projects/package_name/dist/package_name-0.0.0-py3-none-any.whl </code></pre> <p>Is there any way to fix it somehow? If I comment the <code>pp</code> line in dependencies installation then it works well.</p>
<python><pip><python-packaging><python-poetry><python-wheel>
2023-02-06 10:35:50
0
517
Andrej Vilenskij
75,360,093
2,924,453
derive path from adjacency matrix with numpy operations
<p>I need to derive a path from an adjacency matrix in a fast way (I have 40000 points).</p> <p>If a is the adjacency matrix:</p> <pre><code> a = array([[0., 0., 1., 0., 1.], [0., 0., 1., 1., 0.], [1., 1., 0., 0., 0.], [0., 1., 0., 0., 1.], [1., 0., 0., 1., 0.]]) </code></pre> <p>then I want to get:</p> <pre><code>path(a) = [0, 2, 1, 3, 4] </code></pre> <p>For now, I am using a while loop to get the path, but it is slow:</p> <pre><code> def create_path_from_joins(joins): # not assuming the path is connected i = 0 path = [i] elems = np.where(joins[i] == 1) elems = elems[0].tolist() join_to = set(elems) - set(path) while len(join_to) &gt; 0: # choose the one that is not already in the path elem = list(join_to)[0] path.append(elem) i = elem elems = np.where(np.array(joins[i]) == 1) elems = elems[0].tolist() join_to = set(elems) - set(path) return path </code></pre> <p>So I wanted to know if this can be done with matrix operations somehow in order to make it faster.</p> <p>Thanks.</p>
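A sketch of one way to cut the per-step cost: precompute every node's neighbour list once with `np.flatnonzero` (one vectorised pass over the matrix), then walk the path remembering only the previous node, so the loop body does no `np.where` call and no set difference. The helper name is illustrative:

```python
import numpy as np

a = np.array([[0., 0., 1., 0., 1.],
              [0., 0., 1., 1., 0.],
              [1., 1., 0., 0., 0.],
              [0., 1., 0., 0., 1.],
              [1., 0., 0., 1., 0.]])

def path_from_adjacency(a, start=0):
    # neighbour lists computed once; each node on a path has at most two
    nbrs = [np.flatnonzero(row) for row in a]
    path = [start]
    prev, cur = -1, start
    for _ in range(len(a) - 1):
        nxt = [v for v in nbrs[cur] if v != prev]
        if not nxt:          # dead end: open path that started mid-chain
            break
        prev, cur = cur, nxt[0]
        path.append(cur)
    return path

print(path_from_adjacency(a))
```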
<python><numpy><graph-theory>
2023-02-06 10:29:23
2
3,801
GabyLP
75,360,070
11,311,927
Can you return a __str__ representation of an object within the __repr__ method at recursion level 1?
<p>I am looking for a way to implement a custom <code>__repr__</code> function in Python, that handles recursion Pythonically, while maintaining readability.</p> <p>I have two classes (Foo and Bar) that point to each other in their attributes. Of course, simply putting e.g.:</p> <pre><code>class Foo: def __init__(self, bar): self.bar = bar def __repr__(self): return f'Foo({self.bar})' @property def bar(self): return self._bar @bar.setter def bar(self, bar_instance): if bar_instance is not None: bar_instance._foo = self self._bar = bar_instance class Bar: def __init__(self, foo=None): self.foo = None def __repr__(self): return f'Bar({self.foo})' @property def foo(self): return self._foo @foo.setter def foo(self, foo_instance): if foo_instance is not None: foo_instance._bar = self self._foo = foo_instance bar = Bar() foo = Foo(bar) </code></pre> <p>Would result in a recursion error. Luckily, there's the <code>@recursive_repr()</code> from <a href="https://docs.python.org/3/library/reprlib.html" rel="nofollow noreferrer">reprlib</a> to the rescue. See my implementation below.</p> <pre><code>from reprlib import recursive_repr class Foo: def __init__(self, bar): self.bar = bar @recursive_repr() def __repr__(self): return f'Foo({self.bar})' @property def bar(self): return self._bar @bar.setter def bar(self, bar_instance): if bar_instance is not None: bar_instance._foo = self self._bar = bar_instance class Bar: def __init__(self, foo=None): self.foo = None @recursive_repr() def __repr__(self): return f'Bar({self.foo})' @property def foo(self): return self._foo @foo.setter def foo(self, foo_instance): if foo_instance is not None: foo_instance._bar = self self._foo = foo_instance bar = Bar() foo = Foo(bar) </code></pre> <p>However, the custom implementation with <code>fillvalue='...'</code> doesn't improve readability much, in my opinion. Also, this representation of e.g. 
the <code>foo</code> instance - although I know this <a href="https://stackoverflow.com/questions/15661063/what-is-the-relationship-between-repr-and-eval-and-what-is-the-main-purpo">isn't required</a> - doesn't make it very reproducible, necessarily.</p> <pre><code>&gt;&gt;&gt; bar = Bar() &gt;&gt;&gt; foo = Foo(bar) &gt;&gt;&gt; foo ... Foo(Bar(...)) </code></pre> <p>I am looking for an implementation where I can print a string representation of the <code>foo</code> instance, instead of printing the triple dots. I would want an output something like this: <code>Foo(Bar(my_foo_object))</code>, in which I'd be able to define <code>my_foo_object</code> as the return value of the <code>Foo</code> <code>__str__</code> method.</p> <p>Although in the above example this might not make a ton of sense, in my non-MWE it would provide a more intuitive perspective on the objects and their values.</p> <p>In brief: is it possible to return a <code>__str__</code> representation of an object within an object's <code>recursive_repr</code> <em>at recursion level 1</em>?</p>
<python><string><oop><recursion><repr>
2023-02-06 10:26:53
1
315
Sam
75,359,981
6,224,975
How does pickle.load work on class objects that have been modified and saved?
<p>Say I have four files <code>main_class.py</code>, <code>help_class.py</code>, <code>train.py</code> and <code>predict.py</code>.</p> <p><code>main_class.py</code> consists of</p> <pre class="lang-py prettyprint-override"><code>from help_class import HC class MC: hc = HC() def fit(self): self.hc.fit() def predict(self): return self.hc.predict() </code></pre> <p>and <code>help_class</code> has</p> <pre class="lang-py prettyprint-override"><code>class HC: instances = None def fit(self): self.instances = &quot;fitted&quot; def predict(self): return self.instances </code></pre> <p>If I then in <code>train.py</code> train an <code>MC</code> instance, pickle it, and then load it in <code>predict.py</code>, then <code>mc.predict()</code> now returns <code>None</code> and not <code>&quot;fitted&quot;</code>.</p> <pre class="lang-py prettyprint-override"><code>#train.py from main_class import MC import pickle mc = MC() mc.predict() #None mc.fit() mc.predict() #&quot;fitted&quot; with open(&quot;./mc_fitted.pkl&quot;,&quot;wb&quot;) as f: pickle.dump(mc,f) </code></pre> <pre class="lang-py prettyprint-override"><code># predict.py import pickle with open(&quot;./mc_fitted.pkl&quot;,&quot;rb&quot;) as f: mc = pickle.load(f) mc.predict() # None </code></pre> <p>I think I've heard somewhere that <code>pickle</code> does not actually save the objects per se, but rather creates a &quot;blueprint&quot; to reconstruct the data. Thus it might make sense that the <code>instances</code> attribute on <code>hc</code> is overwritten if we, somewhere, create a new instance of <code>HC</code>, since <code>instances</code> is a class attribute and not an instance attribute, but I'm not 100% sure when/where this happens. 
I hope that someone could shed some light on exactly <em>why</em> or <em>when</em> this happens when using <code>pickle</code>.</p> <p>Note, I'm aware that this can be fixed just by moving <code>hc = HC()</code> inside an <code>__init__</code> call, so this is not a question about &quot;how to fix this issue&quot;. I'm also aware of the difference between having attributes in <code>__init__</code> and in the class constructor, but I might be short on some <code>pickle</code> knowledge here.</p>
<python><class><pickle>
2023-02-06 10:17:44
0
5,544
CutePoison
75,359,622
10,437,110
df.apply(hurst_function) gave TypeError: must be real number, not tuple in Python
<p>I have a DataFrame column that contains the ratio of some numbers. On that column, I want to apply the hurst function using the df.apply() method.</p> <p>I don't know if the error is with <code>df.apply</code> or with the <code>hurst_function</code>. Consider the code, which calculates the hurst exponent on a column using the df.apply method:</p> <pre><code>import hurst def hurst_function(df_col_slice): display(df_col_slice) return hurst.compute_Hc(df_col_slice) def func(df_col): results = round(df_col.rolling(101).apply(hurst_function)[100:],1) return results func(df_col) </code></pre> <p>I get the error:</p> <pre><code>Input In [73], in func(df_col) ---&gt; 32 results = round(df_col.rolling(101).apply(hurst_function)[100:],1) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\window\rolling.py:1843, in Rolling.apply(self, func, raw, engine, engine_kwargs, args, kwargs) 1822 @doc( 1823 template_header, 1824 create_section_header(&quot;Parameters&quot;), (...) 
1841 kwargs: dict[str, Any] | None = None, 1842 ): -&gt; 1843 return super().apply( 1844 func, 1845 raw=raw, 1846 engine=engine, 1847 engine_kwargs=engine_kwargs, 1848 args=args, 1849 kwargs=kwargs, 1850 ) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\window\rolling.py:1315, in RollingAndExpandingMixin.apply(self, func, raw, engine, engine_kwargs, args, kwargs) 1312 else: 1313 raise ValueError(&quot;engine must be either 'numba' or 'cython'&quot;) -&gt; 1315 return self._apply( 1316 apply_func, 1317 numba_cache_key=numba_cache_key, 1318 numba_args=numba_args, 1319 ) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\window\rolling.py:590, in BaseWindow._apply(self, func, name, numba_cache_key, numba_args, **kwargs) 587 return result 589 if self.method == &quot;single&quot;: --&gt; 590 return self._apply_blockwise(homogeneous_func, name) 591 else: 592 return self._apply_tablewise(homogeneous_func, name) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\window\rolling.py:442, in BaseWindow._apply_blockwise(self, homogeneous_func, name) 437 &quot;&quot;&quot; 438 Apply the given function to the DataFrame broken down into homogeneous 439 sub-frames. 
440 &quot;&quot;&quot; 441 if self._selected_obj.ndim == 1: --&gt; 442 return self._apply_series(homogeneous_func, name) 444 obj = self._create_data(self._selected_obj) 445 if name == &quot;count&quot;: 446 # GH 12541: Special case for count where we support date-like types File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\window\rolling.py:431, in BaseWindow._apply_series(self, homogeneous_func, name) 428 except (TypeError, NotImplementedError) as err: 429 raise DataError(&quot;No numeric types to aggregate&quot;) from err --&gt; 431 result = homogeneous_func(values) 432 return obj._constructor(result, index=obj.index, name=obj.name) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\window\rolling.py:582, in BaseWindow._apply.&lt;locals&gt;.homogeneous_func(values) 579 return func(x, start, end, min_periods, *numba_args) 581 with np.errstate(all=&quot;ignore&quot;): --&gt; 582 result = calc(values) 584 if numba_cache_key is not None: 585 NUMBA_FUNC_CACHE[numba_cache_key] = func File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\window\rolling.py:579, in BaseWindow._apply.&lt;locals&gt;.homogeneous_func.&lt;locals&gt;.calc(x) 571 start, end = window_indexer.get_window_bounds( 572 num_values=len(x), 573 min_periods=min_periods, 574 center=self.center, 575 closed=self.closed, 576 ) 577 self._check_window_bounds(start, end, len(x)) --&gt; 579 return func(x, start, end, min_periods, *numba_args) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\window\rolling.py:1342, in RollingAndExpandingMixin._generate_cython_apply_func.&lt;locals&gt;.apply_func(values, begin, end, min_periods, raw) 1339 if not raw: 1340 # GH 45912 1341 values = Series(values, index=self._on) -&gt; 1342 return window_func(values, begin, end, min_periods) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\_libs\window\aggregations.pyx:1315, in 
pandas._libs.window.aggregations.roll_apply() TypeError: must be real number, not tuple </code></pre> <p>What can I do to solve this?</p> <p>Edit: <code>display(df_col_slice)</code> is giving the following output:</p> <pre><code>0 0.282043 1 0.103355 2 0.537766 3 0.491976 4 0.535050 ... 96 0.022696 97 0.438995 98 -0.131486 99 0.248250 100 1.246463 Length: 101, dtype: float64 </code></pre>
<python><dataframe><math><apply>
2023-02-06 09:44:04
1
397
Ash
75,359,523
12,928,363
Django REST Framework JSON API shows an empty relationships object when using relations.HyperlinkedRelatedField from rest_framework_json_api
<p>I'm creating the REST API for a space conjunction report; I want the conjunctions to be children of each report.</p> <p>My models:</p> <pre><code>from django.db import models from django.utils import timezone class Report(models.Model): class Meta: managed = False db_table = 'report' ordering = ['-id'] predict_start = models.DateTimeField(null=True) predict_end = models.DateTimeField(null=True) process_duration = models.IntegerField(default=0, null=True) create_conjunction_date = models.DateTimeField(null=True) ephe_filename = models.CharField(max_length=100, null=True) class Conjunction(models.Model): class Meta: managed = False db_table = 'conjunction' ordering = ['-conjunction_id'] conjunction_id = models.IntegerField(primary_key=True) tca = models.DateTimeField(max_length=3, null=True) missdt = models.FloatField(null=True) probability = models.FloatField(null=True) prob_method = models.CharField(max_length=45, null=True) norad = models.OneToOneField(SatelliteCategory, to_field='norad_cat_id', db_column='norad', null=True, on_delete=models.DO_NOTHING) doy = models.FloatField(null=True) ephe_id = models.IntegerField(null=True) pri_obj = models.IntegerField(null=True) sec_obj = models.IntegerField(null=True) report = models.ForeignKey(Report, related_name='conjunctions', null=True, on_delete=models.DO_NOTHING) probability_foster = models.FloatField(null=True) probability_patera = models.FloatField(null=True) probability_alfano = models.FloatField(null=True) probability_chan = models.FloatField(null=True) </code></pre> <p>My serializers:</p> <pre><code>class ConjunctionSerializer(serializers.ModelSerializer): class Meta: model = Conjunction fields = '__all__' class ReportSerializer(serializers.ModelSerializer): conjunctions = relations.ResourceRelatedField(many=True, read_only=True) class Meta: model = Report fields = '__all__' </code></pre> <p>My views:</p> <pre><code>from rest_framework import permissions from rest_framework_json_api.views import viewsets from 
.serializers import ReportSerializer, ConjunctionSerializer from .models import Report, Conjunction class ReportViewSet(viewsets.ModelViewSet): queryset = Report.objects.all() serializer_class = ReportSerializer permission_classes = [permissions.AllowAny] class ConjunctionViewSet(viewsets.ModelViewSet): queryset = Conjunction.objects.all() serializer_class = ConjunctionSerializer permission_classes = [permissions.AllowAny] </code></pre> <p>My urls.py</p> <pre><code>from django.contrib import admin from django.urls import include, path from rest_framework import routers from api.views import UserViewSet, GroupViewSet from shared.views import ReportViewSet, SatelliteCategoryViewSet, ConjunctionViewSet, ReportSentViewSet router = routers.DefaultRouter() router.register(r'users', UserViewSet) router.register(r'groups', GroupViewSet) router.register(r'reports', ReportViewSet) router.register(r'report_sent', ReportSentViewSet) router.register(r'satellite_category', SatelliteCategoryViewSet) router.register(r'conjunctions', ConjunctionViewSet) # Wire up our API using automatic URL routing. # Additionally, we include login URLs for the browsable API. 
urlpatterns = [ path('admin/', admin.site.urls), path('api/', include(router.urls)), path('api-auth/', include('rest_framework.urls', namespace='rest_framework')) ] </code></pre> <p>When I use <code>ResourceRelatedField</code>, the JSON output looks like this:</p> <pre><code>{ &quot;links&quot;: { &quot;first&quot;: &quot;http://127.0.0.1:8000/api/reports/?page%5Bnumber%5D=1&quot;, &quot;last&quot;: &quot;http://127.0.0.1:8000/api/reports/?page%5Bnumber%5D=84&quot;, &quot;next&quot;: &quot;http://127.0.0.1:8000/api/reports/?page%5Bnumber%5D=2&quot;, &quot;prev&quot;: null }, &quot;data&quot;: [ { &quot;type&quot;: &quot;Report&quot;, &quot;id&quot;: &quot;838&quot;, &quot;attributes&quot;: { &quot;predict_start&quot;: &quot;2023-01-26T12:00:00Z&quot;, &quot;predict_end&quot;: &quot;2023-02-02T12:00:00Z&quot;, &quot;process_duration&quot;: 752, &quot;create_conjunction_date&quot;: &quot;2023-01-26T14:52:45Z&quot;, &quot;ephe_filename&quot;: &quot;Filename.txt&quot; }, &quot;relationships&quot;: { &quot;conjunctions&quot;: { &quot;meta&quot;: { &quot;count&quot;: 107 }, &quot;data&quot;: [ { &quot;type&quot;: &quot;Conjunction&quot;, &quot;id&quot;: &quot;78728&quot; }, # ... more data ... 
{ &quot;type&quot;: &quot;Conjunction&quot;, &quot;id&quot;: &quot;78622&quot; } ] } } } ], &quot;meta&quot;: { &quot;pagination&quot;: { &quot;page&quot;: 1, &quot;pages&quot;: 84, &quot;count&quot;: 838 } } } </code></pre> <p>But when I use <code>HyperlinkedRelatedField</code>, it gives an empty <code>conjunctions</code> object:</p> <pre><code> { &quot;links&quot;: { &quot;first&quot;: &quot;http://127.0.0.1:8000/api/reports/?page%5Bnumber%5D=1&quot;, &quot;last&quot;: &quot;http://127.0.0.1:8000/api/reports/?page%5Bnumber%5D=84&quot;, &quot;next&quot;: &quot;http://127.0.0.1:8000/api/reports/?page%5Bnumber%5D=2&quot;, &quot;prev&quot;: null }, &quot;data&quot;: [ { &quot;type&quot;: &quot;Report&quot;, &quot;id&quot;: &quot;838&quot;, &quot;attributes&quot;: { &quot;predict_start&quot;: &quot;2023-01-26T12:00:00Z&quot;, &quot;predict_end&quot;: &quot;2023-02-02T12:00:00Z&quot;, &quot;process_duration&quot;: 752, &quot;create_conjunction_date&quot;: &quot;2023-01-26T14:52:45Z&quot;, &quot;ephe_filename&quot;: &quot;Filename.txt&quot; }, &quot;relationships&quot;: { &quot;conjunctions&quot;: {} } }, ], &quot;meta&quot;: { &quot;pagination&quot;: { &quot;page&quot;: 1, &quot;pages&quot;: 84, &quot;count&quot;: 838 } } } </code></pre> <p>This is what I expect:</p> <pre><code> { &quot;links&quot;: { &quot;first&quot;: &quot;http://127.0.0.1:8000/api/reports/?page%5Bnumber%5D=1&quot;, &quot;last&quot;: &quot;http://127.0.0.1:8000/api/reports/?page%5Bnumber%5D=84&quot;, &quot;next&quot;: &quot;http://127.0.0.1:8000/api/reports/?page%5Bnumber%5D=2&quot;, &quot;prev&quot;: null }, &quot;data&quot;: [ { &quot;type&quot;: &quot;Report&quot;, &quot;id&quot;: &quot;838&quot;, &quot;attributes&quot;: { &quot;predict_start&quot;: &quot;2023-01-26T12:00:00Z&quot;, &quot;predict_end&quot;: &quot;2023-02-02T12:00:00Z&quot;, &quot;process_duration&quot;: 752, &quot;create_conjunction_date&quot;: &quot;2023-01-26T14:52:45Z&quot;, &quot;ephe_filename&quot;: 
&quot;Filename.txt&quot; }, &quot;relationships&quot;: { &quot;conjunctions&quot;: { # Any links or something. &quot;data&quot;: [{ &quot;type&quot;: &quot;Conjunction&quot;, &quot;id&quot;: &quot;1&quot;, },{ &quot;type&quot;: &quot;Conjunction&quot;, &quot;id&quot;: &quot;2&quot;, }], &quot;links&quot;: { &quot;self&quot;: &quot;http://localhost:8000/api/reports/838/relationships/conjunctions/&quot;, &quot;related&quot;: &quot;http://localhost:8000/api/reports/838/conjunctions/&quot; } } } }, ], &quot;meta&quot;: { &quot;pagination&quot;: { &quot;page&quot;: 1, &quot;pages&quot;: 84, &quot;count&quot;: 838 } } } </code></pre>
<python><json><django><django-rest-framework><json-api>
2023-02-06 09:34:09
1
377
Paweenwat Maneechai
75,359,439
4,534,466
Tensorflow 2.10.0 not detecting GPU
<p>I've created a conda environment and installed tensorflow as such:</p> <pre><code>conda create -n foo python=3.10 conda activate foo conda install mamba mamba install tensorflow -c conda-forge mamba install cudnn cudatoolkit </code></pre> <p>This installed TensorFlow 2.10.0. I've installed CUDA 11.2 and cuDNN 8.1, and then try to run the following:</p> <pre><code>import tensorflow as tf print(f&quot;GPUs available: {tf.config.list_physical_devices('GPU')}&quot;) </code></pre> <p>but it just returns an empty list. I have a 3060ti that I want to use for my ML projects but TensorFlow is not detecting it. I found similar questions to mine, like <a href="https://stackoverflow.com/questions/65381073/tensorflow-gpu-not-using-gpu-with-cuda-cudnn">this</a>, <a href="https://stackoverflow.com/questions/41402409/tensorflow-doesnt-seem-to-see-my-gpu">this</a> and <a href="https://stackoverflow.com/questions/68117724/tensorflow-gpu-not-detecting-gpu">this</a> but they use the old version of TensorFlow, which would install <code>tensorflow-gpu</code> and is no longer supported. How can I fix this, or even attempt to troubleshoot it.</p> <p>I'm using a Windows 10 machine</p> <p>Output of <code>nvidia-smi</code>:</p> <pre><code>+-----------------------------------------------------------------------------+ | NVIDIA-SMI 528.24 Driver Version: 528.24 CUDA Version: 12.0 | |-------------------------------+----------------------+----------------------+ | GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 NVIDIA GeForce ... 
WDDM | 00000000:09:00.0 On | N/A | | 30% 43C P8 16W / 200W | 809MiB / 8192MiB | 3% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N/A N/A 7176 C+G ...perience\NVIDIA Share.exe N/A | | 0 N/A N/A 9240 C+G C:\Windows\explorer.exe N/A | | 0 N/A N/A 12936 C+G ...cw5n1h2txyewy\LockApp.exe N/A | | 0 N/A N/A 13652 C+G ...e\PhoneExperienceHost.exe N/A | | 0 N/A N/A 14020 C+G ...2txyewy\TextInputHost.exe N/A | | 0 N/A N/A 14888 C+G ...ser\Application\brave.exe N/A | | 0 N/A N/A 15112 C+G ...5n1h2txyewy\SearchApp.exe N/A | | 0 N/A N/A 16516 C+G ...oft OneDrive\OneDrive.exe N/A | | 0 N/A N/A 18296 C+G ...aming\Spotify\Spotify.exe N/A | | 0 N/A N/A 18624 C+G ...in7x64\steamwebhelper.exe N/A | | 0 N/A N/A 18672 C+G ...\app-1.0.9010\Discord.exe N/A | | 0 N/A N/A 18828 C+G ...lPanel\SystemSettings.exe N/A | | 0 N/A N/A 19284 C+G ...Central\Razer Central.exe N/A | | 0 N/A N/A 20020 C+G ...arp.BrowserSubprocess.exe N/A | | 0 N/A N/A 22912 C+G ...8wekyb3d8bbwe\Cortana.exe N/A | | 0 N/A N/A 24848 C+G ...ontend\Docker Desktop.exe N/A | | 0 N/A N/A 25804 C+G ...y\ShellExperienceHost.exe N/A | | 0 N/A N/A 27064 C+G ...8bbwe\WindowsTerminal.exe N/A | +-----------------------------------------------------------------------------+ </code></pre> <p>Output of <code>nvcc -V</code>:</p> <pre><code>Copyright (c) 2005-2021 NVIDIA Corporation Built on Sun_Feb_14_22:08:44_Pacific_Standard_Time_2021 Cuda compilation tools, release 11.2, V11.2.152 Build cuda_11.2.r11.2/compiler.29618528_0 </code></pre> <p>I ran a dummy code as such:</p> <pre><code>import tensorflow as tf import numpy as np def make_nn(): model = tf.keras.models.Sequential() model.add(tf.keras.layers.Dense(1, input_shape=(1,))) 
model.compile(loss='mean_squared_error', optimizer='sgd') return model def dataset(): x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9]) y = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9]) return tf.data.Dataset.from_tensor_slices((x, y)).batch(1) def main(): model = make_nn() model.fit(dataset(), epochs=1, steps_per_epoch=9) if __name__ == '__main__': print(f&quot;GPUs available: {tf.config.list_physical_devices('GPU')}&quot;) print(f&quot;Built with cuda: {tf.test.is_built_with_cuda()}&quot;) main() </code></pre> <p>and it gave me the following log:</p> <pre><code>GPUs available: [] Built with cuda: False 2023-02-06 09:47:32.744450: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2023-02-06 09:47:32.779280: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance. </code></pre> <p>Looks like it's using a CPU build</p>
<python><tensorflow><gpu>
2023-02-06 09:25:50
3
1,530
JoΓ£o Areias
75,359,362
3,247,006
Can't I set a number with a decimal part to "MinMoneyValidator()" and "MaxMoneyValidator()" in "MoneyField()" with Django-money?
<p>I use <a href="https://github.com/django-money/django-money" rel="nofollow noreferrer">Django-money</a>, then I set <code>0.00</code> and <code>999.99</code> to <code>MinMoneyValidator()</code> and <code>MaxMoneyValidator()</code> respectively in <code>MoneyField()</code> as shown below:</p> <pre class="lang-py prettyprint-override"><code># &quot;models.py&quot; from django.db import models from djmoney.models.fields import MoneyField from djmoney.models.validators import MinMoneyValidator, MaxMoneyValidator class Product(models.Model): name = models.CharField(max_length=50) price = MoneyField( # Here max_digits=5, decimal_places=2, default=0, default_currency='USD', validators=[ # Here # Here MinMoneyValidator(0.00), MaxMoneyValidator(999.99), ] ) </code></pre> <p>Then, I tried to add a product as shown below:</p> <p><a href="https://i.sstatic.net/HdnHB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HdnHB.png" alt="enter image description here" /></a></p> <p>But, I got the error below:</p> <blockquote> <p>TypeError: 'float' object is not subscriptable</p> </blockquote> <p>So, I set <code>0</code> and <code>999</code> to <code>MinMoneyValidator()</code> and <code>MaxMoneyValidator()</code> respectively in <code>MoneyField()</code> as shown below, then the error above was solved:</p> <pre class="lang-py prettyprint-override"><code># &quot;models.py&quot; from django.db import models from djmoney.models.fields import MoneyField from djmoney.models.validators import MinMoneyValidator, MaxMoneyValidator class Product(models.Model): name = models.CharField(max_length=50) price = MoneyField( max_digits=5, decimal_places=2, default=0, default_currency='USD', validators=[ # Here # Here MinMoneyValidator(0), MaxMoneyValidator(999), ] ) </code></pre> <p>Actually, I can set <code>0.00</code> and <code>999.99</code> to <a href="https://docs.djangoproject.com/en/4.1/ref/validators/#minvaluevalidator" rel="nofollow noreferrer">MinValueValidator()</a> and <a 
href="https://docs.djangoproject.com/en/4.1/ref/validators/#maxvaluevalidator" rel="nofollow noreferrer">MaxValueValidator()</a> respectively in <a href="https://docs.djangoproject.com/en/4.1/ref/models/fields/#decimalfield" rel="nofollow noreferrer">models.DecimalField()</a> without any errors as shown below:</p> <pre class="lang-py prettyprint-override"><code># &quot;models.py&quot; from django.db import models from django.core.validators import MinValueValidator, MaxValueValidator class Product(models.Model): name = models.CharField(max_length=50) price = models.DecimalField( # Here max_digits=5, decimal_places=2, default=0, validators=[ # Here # Here MinValueValidator(0.00), MaxValueValidator(999.99) ], ) </code></pre> <p>So, can't I set the number with the decimal part to <code>MinMoneyValidator()</code> and <code>MaxMoneyValidator()</code> in <code>MoneyField()</code>?</p>
<python><django><django-models><django-money>
2023-02-06 09:17:25
1
42,516
Super Kai - Kazuya Ito
75,359,078
8,723,790
How to validate if a string field is all uppercase without custom validators
<p>In pydantic, is there a way to validate if all letters in a string field are uppercase without a custom validator?</p> <p>With the following I can turn the input string into an all-uppercase string, but what I want is to validate the input so that no string with lowercase letters is allowed.</p> <pre><code>from pydantic import BaseModel, constr class FooSchema(BaseModel): foo: constr(to_upper=True) </code></pre> <p>and</p> <pre><code>foo_obj = FooSchema.parse_raw('{&quot;foo&quot;: &quot;abc&quot;}') print(foo_obj.foo) # result: &quot;ABC&quot; </code></pre> <p>Any idea?</p>
<python><pydantic>
2023-02-06 08:47:56
1
301
Paul Chuang
75,359,058
755,640
lxml .text returns None when string contains tags
<p>I am traversing a complex XML file with millions of TU nodes and extracting strings from <code>&lt;seg&gt;</code> elements. Whenever a <code>&lt;seg&gt;</code> element contains serialized tags, I get a <code>None</code> object instead of a string.</p> <p>Code that returns <code>None</code>:</p> <pre><code>source_segment = ET.parse(file).getroot().find('body').findall('tu')[0].findall('tuv')[0].find('seg').text </code></pre> <p>Sample content of a <code>&lt;seg&gt;</code> element that causes the issue:</p> <pre><code>&lt;seg&gt;&lt;bpt i=&quot;1&quot; type=&quot;14&quot; x=&quot;1&quot; /&gt;Coded glass plate&lt;ept i=&quot;1&quot; /&gt;&lt;ph x=&quot;4&quot; type=&quot;33&quot; /&gt;&lt;/seg&gt; </code></pre> <p>Expected value of string variable <code>source_segment</code>:</p> <pre><code>&lt;bpt i=&quot;1&quot; type=&quot;14&quot; x=&quot;1&quot; /&gt;Coded glass plate&lt;ept i=&quot;1&quot; /&gt;&lt;ph x=&quot;4&quot; type=&quot;33&quot; /&gt; </code></pre> <p>I can't serialize <code>ET.parse(file).getroot().find('body').findall('tu')[0].findall('tuv')[0].find('seg').text</code> because it is a <code>None</code> object. 
If I serialize only the part <code>ET.parse(file).getroot().find('body').findall('tu')[0].findall('tuv')[0].find('seg')</code>, I get this:</p> <pre><code>b'&lt;seg&gt;&lt;bpt i=&quot;1&quot; type=&quot;14&quot; x=&quot;1&quot; /&gt;Coded glass plate&lt;ept i=&quot;1&quot; /&gt;&lt;ph x=&quot;4&quot; type=&quot;33&quot; /&gt;&lt;/seg&gt;\n ' </code></pre> <p>Sample XML content:</p> <pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt; &lt;tmx version=&quot;1.4&quot;&gt; &lt;header creationtool=&quot;XXXXXXXX&quot; creationtoolversion=&quot;100&quot; o-tmf=&quot;XXXXXXXX&quot; datatype=&quot;xml&quot; segtype=&quot;sentence&quot; adminlang=&quot;en-GB&quot; srclang=&quot;en-GB&quot; creationdate=&quot;XXXXXXXX&quot; creationid=&quot;XXXXXXXX&quot;&gt; &lt;prop type=&quot;x-Note:SingleString&quot;&gt;&lt;/prop&gt; &lt;prop type=&quot;x-Recognizers&quot;&gt;RecognizeAll&lt;/prop&gt; &lt;prop type=&quot;x-IncludesContextContent&quot;&gt;True&lt;/prop&gt; &lt;prop type=&quot;x-TMName&quot;&gt;XXXXXXXX&lt;/prop&gt; &lt;prop type=&quot;x-TokenizerFlags&quot;&gt;DefaultFlags&lt;/prop&gt; &lt;prop type=&quot;x-WordCountFlags&quot;&gt;DefaultFlags&lt;/prop&gt; &lt;/header&gt; &lt;body&gt; &lt;tu creationdate=&quot;XXXXXXXX&quot; creationid=&quot;XXXXXXXX&quot; changedate=&quot;XXXXXXXX&quot; changeid=&quot;XXXXXXXX&quot; lastusagedate=&quot;XXXXXXXX&quot; usagecount=&quot;1&quot;&gt; &lt;prop type=&quot;x-LastUsedBy&quot;&gt;XXXXXXXX&lt;/prop&gt; &lt;prop type=&quot;x-Context&quot;&gt;0, 0&lt;/prop&gt; &lt;prop type=&quot;x-Origin&quot;&gt;TM&lt;/prop&gt; &lt;prop type=&quot;x-ConfirmationLevel&quot;&gt;Translated&lt;/prop&gt; &lt;prop type=&quot;x-StructureContext:MultipleString&quot;&gt;sdl:cdata&lt;/prop&gt; &lt;prop type=&quot;x-Note:SingleString&quot;&gt;XXXXXXXX&lt;/prop&gt; &lt;tuv xml:lang=&quot;en-GB&quot;&gt; &lt;seg&gt;&lt;bpt i=&quot;1&quot; type=&quot;14&quot; x=&quot;1&quot; /&gt;Coded glass plate&lt;ept i=&quot;1&quot; /&gt;&lt;ph 
x=&quot;4&quot; type=&quot;33&quot; /&gt;&lt;/seg&gt; &lt;/tuv&gt; &lt;tuv xml:lang=&quot;lt-LT&quot;&gt; &lt;seg&gt;&lt;bpt i=&quot;1&quot; type=&quot;14&quot; x=&quot;1&quot; /&gt;YYYYYYYYYYYYY&lt;ept i=&quot;1&quot; /&gt;&lt;ph x=&quot;4&quot; type=&quot;33&quot; /&gt;&lt;/seg&gt; &lt;/tuv&gt; &lt;/tu&gt; &lt;/body&gt; &lt;/tmx&gt; </code></pre> <p>How do I extract the string from the <code>&lt;seg&gt;</code> element when it contains serialized tags?</p>
<python><xml><lxml>
2023-02-06 08:45:22
2
1,213
wilkas