Dataset columns: QuestionId (int64, 74.8M-79.8M), UserId (int64, 56-29.4M), QuestionTitle (string, 15-150 chars), QuestionBody (string, 40-40.3k chars), Tags (string, 8-101 chars), CreationDate (string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18), AnswerCount (int64, 0-44), UserExpertiseLevel (int64, 301-888k), UserDisplayName (string, 3-30 chars, nullable)
76,566,196
22,009,322
Animate grouped scatter points in matplotlib
<p>How can I animate the points in positions 1,2,3 so they run simultaneously (in parallel) and each has its corresponding color?</p> <p>Now the points are drawn in series and just in one color.</p> <p>The current output is:</p> <p><a href="https://i.sstatic.net/hGiZ7.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hGiZ7.gif" alt="enter image description here" /></a></p> <p>Example of the code:</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import numpy as np from matplotlib.animation import FuncAnimation df = pd.DataFrame() cf = 1 while cf &lt; 4: df = pd.concat([df, pd.DataFrame( { &quot;Track&quot;: f'Track {cf}', &quot;Position&quot;: np.random.randint(low=0+cf, high=1+cf, size=10), &quot;Timeline&quot;: np.linspace(1, 10, 10, dtype=int) } )]) cf = cf + 1 df = df.reset_index(drop=True) print(df) # plot: fig, ax = plt.subplots() # Point coordinates: y = df['Position'] x = df['Timeline'] # Labels with axes: ax.set_xlabel('Timeline') ax.set_ylabel('Position') ax.invert_yaxis() xi = list(np.unique(x)) yi = list(np.unique(y)) ax.set_xticks(xi) ax.set_yticks(yi) # Colors: colors = {'Track 1': 'tab:red', 'Track 2': 'tab:blue', 'Track 3': 'blue'} # Drawing points according to positions: frames = len(df) points = plt.scatter(x, y, s=45, c=df['Track'].map(colors), zorder=2) def animate(i): points.set_offsets((x[i], y[i])) return points, anim = FuncAnimation(fig, animate, frames=frames, interval=200, repeat=True) plt.show() plt.close() anim.save('test.gif', writer='pillow') </code></pre> <p>I tried to create a &quot;points&quot; variable for each type of scatter and add them to the &quot;animate&quot; function, but it didn't work.</p>
<python><matplotlib><matplotlib-animation>
2023-06-27 14:59:45
1
333
muted_buddy
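A common fix (a sketch, not from the original thread) is to keep one scatter artist per track and update all of them in every frame, so the three tracks advance in parallel, each with its own color. The `tab:green` color for Track 3 and the axis limits are assumptions made to keep the tracks visually distinct:

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Rebuild the example data: three tracks, ten timeline steps each
df = pd.concat(
    [
        pd.DataFrame(
            {
                "Track": f"Track {cf}",
                "Position": np.random.randint(low=cf, high=cf + 1, size=10),
                "Timeline": np.linspace(1, 10, 10, dtype=int),
            }
        )
        for cf in range(1, 4)
    ],
    ignore_index=True,
)

colors = {"Track 1": "tab:red", "Track 2": "tab:blue", "Track 3": "tab:green"}

fig, ax = plt.subplots()
ax.set_xlabel("Timeline")
ax.set_ylabel("Position")
ax.set_xlim(0, 11)
ax.set_ylim(4, 0)  # inverted y axis, as in the question

# One scatter artist per track, each with its own color
groups = {name: g for name, g in df.groupby("Track")}
artists = {
    name: ax.scatter([], [], s=45, color=colors[name], zorder=2)
    for name in groups
}

def animate(i):
    # Frame i reveals the first i+1 points of every track simultaneously
    for name, g in groups.items():
        artists[name].set_offsets(g[["Timeline", "Position"]].values[: i + 1])
    return tuple(artists.values())

anim = FuncAnimation(fig, animate, frames=10, interval=200, repeat=True)
```

Saving with `anim.save('test.gif', writer='pillow')` should then animate all three colored tracks in parallel; the key difference from the question's code is that `animate` updates every artist, not a single shared one.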
76,566,162
11,065,874
How to model nested objects with pydantic
<p>I have this small pydantic code:</p> <pre><code>from pydantic import BaseModel class B: a: int class A(BaseModel): b: dict[str, B] d = { &quot;b&quot;: { &quot;us&quot;: {&quot;a&quot;: 1}, &quot;uk&quot;: {&quot;a&quot;: 2} } } m = A.parse_obj(d) print(m.dict()) </code></pre> <p>When I run it, it shows</p> <pre><code>RuntimeError: no validator found for &lt;class '__main__.B'&gt;, see `arbitrary_types_allowed` in Config </code></pre> <p>It works if I rewrite the model as below:</p> <pre><code>from pydantic import BaseModel class A(BaseModel): b: dict[str, dict[str, int]] d = { &quot;b&quot;: { &quot;us&quot;: {&quot;a&quot;: 1}, &quot;uk&quot;: {&quot;a&quot;: 2} } } m = A.parse_obj(d) print(m.dict()) </code></pre> <p>But I want to define pydantic models for the nested objects as well.</p> <p>How can I do this?</p>
<python><fastapi><pydantic>
2023-06-27 14:55:31
1
2,555
Amin Ba
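The error comes from `B` being a plain class: pydantic can only validate nested fields that are themselves models. A minimal sketch of the fix (should work on both pydantic v1 and v2, assuming Python >= 3.9 for the `dict[str, B]` annotation):

```python
from pydantic import BaseModel

class B(BaseModel):  # the fix: B must also subclass BaseModel
    a: int

class A(BaseModel):
    b: dict[str, B]

d = {"b": {"us": {"a": 1}, "uk": {"a": 2}}}

# Constructing via keyword arguments validates the nested dicts into B instances
# and avoids the v1-only parse_obj / v2-only model_validate split.
m = A(**d)
```

Each value of `m.b` is then a fully validated `B` instance rather than a raw dict.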
76,566,088
16,383,578
Algorithm to discretize overlapping ranges
<p>I wrote a script to deal with overlapping intervals, I took several attempts at this before, I won't go into the details here, you can see <a href="https://codereview.stackexchange.com/q/284913/234107">this</a> to get more context. This version is the smartest I came up with, it is the most efficient, and after rigorous testing I have determined that its output is completely correct.</p> <p>The problem can be simplified as this, given a huge number of triples <code>(a, b, c)</code>, in each triple <code>a</code> and <code>b</code> are integers and <code>b &gt;= a</code>, each triple represents a range <code>range(a, b + 1)</code>, and <code>c</code> is the data associated with the range. <code>(a, b, c)</code> is equivalent to <code>{range(a, b + 1): c}</code>.</p> <p>There are a lot of overlaps in the ranges, and adjacent ranges that share data.</p> <p>Suppose there are two such ranges, call them <code>(s0, e0, d0)</code> and <code>(s1, e1, d1)</code> respectively, one of the following 3 undesirable situations might happen:</p> <ul> <li><p><code>s0 &lt;= s1 &lt;= e1 &lt; e0</code>, the second range is completely contained within the first, and <code>d0 == d1</code></p> </li> <li><p>the second range is a sub range of the first, like above, but <code>d0 != d1</code></p> </li> <li><p><code>e0 == s1 - 1 and d0 == d1</code></p> </li> </ul> <p>The following situations never happen:</p> <ul> <li><code>e0 == s1</code> and <code>s0 &lt;= s1 and e0 &lt;= e1</code></li> </ul> <p>The ranges either are completely discrete, or one is a complete subset of the other, there are no partial intersections.</p> <p>Now I want to process the ranges such that there are no overlaps and all adjacent ranges share different data, this requires the following operations for each of the above 3 situations:</p> <ul> <li><p>ignore the subrange, do nothing, because they share the same data</p> </li> <li><p>split the parent range, delete the overlapping portion of the parent range, and overwrite 
the overlapping range with the data of the sub range.</p> </li> <li><p>ignore the second range, and set <code>e0</code> to <code>e1</code>, so that the two ranges are combined.</p> </li> </ul> <p>The result should contain no overlapping ranges and no extra ranges, each range has only one datum and it is always inherited from the newer, smaller range.</p> <p>The input is sorted in such a way that can be obtained in Python by using <code>lambda x: (x[0], -x[1])</code> as key. And if the input is not sorted I will sort it in such a way before processing it.</p> <p>This is a small example:</p> <p>Data:</p> <pre><code>[(0, 10, 'A'), (0, 1, 'B'), (2, 5, 'C'), (3, 4, 'C'), (6, 7, 'C'), (8, 8, 'D')] </code></pre> <p>Meaning of the data:</p> <pre><code>[ {0: 'A', 1: 'A', 2: 'A', 3: 'A', 4: 'A', 5: 'A', 6: 'A', 7: 'A', 8: 'A', 9: 'A', 10: 'A'}, {0: 'B', 1: 'B'}, {2: 'C', 3: 'C', 4: 'C', 5: 'C'}, {3: 'C', 4: 'C'}, {6: 'C', 7: 'C'}, {8: 'D'} ] </code></pre> <p>My intended process is equivalent to simply processing them in order and update the dictionary according to the current item one by one.</p> <p>Processed data:</p> <pre><code>{0: 'B', 1: 'B', 2: 'C', 3: 'C', 4: 'C', 5: 'C', 6: 'C', 7: 'C', 8: 'D', 9: 'A', 10: 'A'} </code></pre> <p>Output:</p> <pre><code>[(0, 1, 'B'), (2, 7, 'C'), (8, 8, 'D'), (9, 10, 'A')] </code></pre> <p>I need to do this as efficiently as possible, because I literally have millions of entries to process and that is not an exaggeration. I tried to use <a href="https://codereview.stackexchange.com/a/285124/234107"><code>intervaltree</code></a> but it is way too inefficient for my purposes:</p> <pre><code>In [3]: %timeit merge_rows([[0, 10, 'A'], [0, 1, 'B'], [2, 5, 'C'], [3, 4, 'C'], [6, 7, 'C'],[8, 8, 'D']]) 499 ยตs ยฑ 14.5 ยตs per loop (mean ยฑ std. dev. 
of 7 runs, 1,000 loops each) </code></pre> <hr /> <h2><em><strong>The following is my code</strong></em></h2> <pre class="lang-py prettyprint-override"><code>from bisect import bisect_left class Interval: instances = [] def __init__(self, start, end, data): self.start = start self.end = end self.data = data self.ends = [end] self.children = [] Interval.instances.append(self) def variate(self, start, data): return not self.start &lt;= start &lt;= self.end or data != self.data def distinct(self, start, end, data): if self.end == start - 1 and self.data == data: self.ends[-1] = self.end = end return False return True def fill(self, start): if self.start &lt; start: before = start - 1 self.ends.insert(-1, before) self.children.append(Interval(self.start, before, self.data)) def merge_worker(self, start, end, data): self.fill(start) add = self.start &lt; start if not add: add = not self.children or self.children[-1].distinct(start, end, data) if add: self.children.append(Interval(start, end, data)) self.ends.insert(-1, end) self.start = end + 1 def merge(self, start, end, data): if not (self.variate(start, data) and self.distinct(start, end, data)): return if self.start &lt;= start: self.merge_worker(start, end, data) else: self.children[bisect_left(self.ends, start)].merge(start, end, data) def discretize(seq): Interval.instances.clear() current = -1e309 for start, end, data in seq: if start &gt; current + 1: top = Interval(start, end, data) current = end else: top.merge(start, end, data) return sorted(((i.start, i.end, i.data) for i in Interval.instances)) </code></pre> <p>And it is much more efficient.</p> <pre><code>In [305]: %timeit discretize([[0, 10, 'A'], [0, 1, 'B'], [2, 5, 'C'], [3, 4, 'C'], [6, 7, 'C'],[8, 8, 'D']]) 11 ยตs ยฑ 85.4 ns per loop (mean ยฑ std. dev. 
of 7 runs, 100,000 loops each) </code></pre> <p>I used the following code to test it:</p> <p><a href="https://drive.google.com/file/d/1PIgF8L2-HGLCCZdQ4fM2HVZp3HKKe6Bi/view?usp=sharing" rel="nofollow noreferrer">check.py (Google drive)</a></p> <p>English isn't my mother tongue and I am not really good with words, so I won't describe my method; the code isn't hard to understand.</p> <p>My question is, how can I dynamically calculate the insertion index of each new interval so that I can insert the ranges in order directly, thus eliminating the need to sort the result? The result must be in order and sorting is really expensive.</p> <p>I can attach an index to each instance when it is created, but so far I only know how to use consecutive integers as the index, just incrementing by one each time an instance is created; I don't know how to dynamically calculate the index required to keep the instances sorted.</p> <p>How can I calculate the index required? Or is there some better way?</p>
<python><python-3.x><algorithm>
2023-06-27 14:48:58
1
3,930
Ξένη Γήινος
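The "process in order, overwrite a dict" semantics described above can be written as a compact reference implementation. This sketch is far too slow for millions of wide ranges, but it serves as a correctness oracle for faster versions:

```python
def discretize_reference(triples):
    """Oracle for the problem: expand each (start, end, data) triple to a
    point -> data dict in input order (later triples overwrite earlier ones),
    then compress maximal runs of consecutive points sharing one datum."""
    points = {}
    for start, end, data in triples:
        for p in range(start, end + 1):
            points[p] = data

    out = []
    for p in sorted(points):
        if out and out[-1][1] == p - 1 and out[-1][2] == points[p]:
            out[-1][1] = p  # extend the current run by one point
        else:
            out.append([p, p, points[p]])  # start a new run
    return [tuple(run) for run in out]
```

On the example from the question this reproduces the expected output directly, so any optimized implementation can be diffed against it.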
76,565,992
5,969,463
Error in Postgres to_regclass Treats Table Name as Column
<p>I had the following query in Python (SQLAlchemy) code.</p> <pre><code>r = list(conn.execute(f&quot;SELECT 'OK' FROM {table} LIMIT 1&quot;))[0][0] </code></pre> <p>A case occurred where the table was truncated (it exists, but has no data in it). This line would error in this (edge) case. Hence, I am trying to change it to</p> <pre><code> r = list( conn.execute( f&quot;SELECT case when(SELECT to_regclass({table}) is not NULL) &quot; f&quot;then 'OK' else 'FAIL' end;&quot; ) )[0][0] </code></pre> <p>However, I am getting the following error</p> <pre><code> File &quot;/opt/app-root/lib64/python3.9/site-packages/sqlalchemy/engine/default.py&quot;, line 608, in do_execute cursor.execute(statement, parameters) sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedColumn) column &quot;&lt;table&gt;&quot; does not exist LINE 1: SELECT case when(SELECT to_regclass(&lt;table&gt;) is no... </code></pre> <p>Essentially, I am trying to make sure that the query still works and returns fail when the table is empty. How can I do that and/or why am I getting the error?</p>
<python><postgresql><sqlalchemy>
2023-06-27 14:37:53
0
5,891
MadPhysicist
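The `UndefinedColumn` error arises because `{table}` is interpolated bare into the SQL, so Postgres parses it as a column reference; `to_regclass` expects a text value. A hedged sketch using a bound parameter (the table name and the `text()` construct are illustrative, assuming SQLAlchemy 1.4+):

```python
from sqlalchemy import text

table = "my_schema.my_table"  # hypothetical table name

# Bare f-string interpolation produces: to_regclass(my_schema.my_table)
# which Postgres parses as a column reference -> UndefinedColumn.
broken = f"SELECT to_regclass({table})"

# Passing the name as a bound *value* makes it the text argument
# to_regclass expects, and also avoids SQL injection:
stmt = text("SELECT to_regclass(:tname) IS NOT NULL")
# r = conn.execute(stmt, {"tname": table}).scalar()  # True if the table exists
```

Note also that the original `SELECT 'OK' FROM {table} LIMIT 1` returns zero rows for an empty table, so indexing `[0][0]` raises; `to_regclass` only checks existence, which is what makes it the right tool for the empty-table edge case.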
76,565,982
15,452,168
Analysing app data in bigquery from google play store on partitioned tables
<p>I was able to connect my Google Play Store data to BigQuery via the transfer service. Now I have a lot of data about my app.</p> <p>I have already done the <code>sentiment analysis and topic modelling</code> on it.</p> <p>But it still has data about ratings, installs, and crashes (app version, device, os version, carrier, country, language and so on).</p> <p>I would like to merge these different tables that are available in the screenshot below and analyze the data.</p> <p><a href="https://i.sstatic.net/6RrYC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6RrYC.png" alt="enter image description here" /></a></p> <p>To begin with, I would like to analyse, for example, <code>p_Crashes_device_PS</code> which has the columns</p> <pre><code>**Field name** **Type** Date DATE Package_Name STRING Device STRING Daily_Crashes INTEGER Daily_ANRs INTEGER </code></pre> <pre><code>SELECT EXTRACT(YEAR FROM Date) AS Year, EXTRACT(MONTH FROM Date) AS Month, REGEXP_EXTRACT(Device, r'^(\w+)') AS Device, SUM(Daily_Crashes) AS Total_Crashes, SUM(Daily_ANRs) AS Total_ANRs FROM `abc.google_playstore.p_Crashes_device_PS` GROUP BY Year, Month, Device ORDER BY Year, Month, Device </code></pre> <p>Now I can see that one device had most of the crashes, but what I am missing is how to connect this data with the other tables to be sure of what went wrong and in which country/language/os/app/carrier etc.</p> <p>To my surprise, I could not find a single article that helps me understand these Play Store dataset tables and analyse this data in BigQuery/Python.</p> <p>Am I doing something wrong? There are articles on Google app data analysis, but they involve comparing different apps etc., not a single app and its specific data.</p> <p>I would be very happy if someone can push me in the right direction where I can find this analysis. Thank you</p>
<python><sql><google-bigquery><google-play-developer-api>
2023-06-27 14:37:22
1
570
sdave
76,565,954
20,620,999
"Regex pattern did not match", even though they should - pytest
<p>I am trying to match the string in this error:</p> <pre class="lang-py prettyprint-override"><code>for warning in args: if not isinstance(warning, str): with StaticvarExceptionHandler(): raise TypeError(f&quot;Configure.suppress() only takes string arguments. Current type: {type(warning)}&quot;) </code></pre> <p>with the one in this pytest test:</p> <pre class="lang-py prettyprint-override"><code>with pytest.raises( TypeError, match = &quot;Configure.suppress() only takes string arguments. Current type: .*&quot; ): Configure.suppress('ComplicatedTypeWarning', 1) </code></pre> <br /> <p>It should match, but I am getting this error:</p> <pre><code>E AssertionError: Regex pattern did not match. E Regex: 'Configure.suppress() only takes string arguments. Current type: .*' E Input: &quot;Configure.suppress() only takes string arguments. Current type: &lt;class 'int'&gt;&quot; </code></pre> <p>Keep in mind that for all other tests that I use <code>.*</code>, everything works fine.</p> <p>I am very new to Regex, so I apologise if this is actually a really stupid question. Also, I've seen many many questions like these, but they were all in Java and not the same case as mine, so I couldn't find a solution. Feel free to flag this as a duplicate and link a matching question, though.</p>
<python><python-3.x><regex><exception><pytest>
2023-06-27 14:33:05
1
382
AbdelRahman
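`pytest.raises(match=...)` performs a regex *search*, and `(`, `)` are regex metacharacters: `suppress()` in the pattern is parsed as an empty group, so the literal parentheses in the message never match. A small stdlib-only sketch demonstrating the failure and the `re.escape` fix:

```python
import re

message = "Configure.suppress() only takes string arguments. Current type: <class 'int'>"

# The literal pattern fails: "()" is an empty group, so the regex effectively
# looks for "Configure.suppress only takes ..." with no parentheses at all.
pattern = "Configure.suppress() only takes string arguments. Current type: .*"
assert re.search(pattern, message) is None

# Escaping the literal prefix makes the parentheses (and dots) match verbatim:
escaped = re.escape("Configure.suppress() only takes string arguments. Current type: ") + ".*"
assert re.search(escaped, message) is not None
```

In the test itself, the same idea would read `pytest.raises(TypeError, match=re.escape("Configure.suppress() only takes string arguments. Current type: "))`; the trailing `.*` is unnecessary because `match` already searches rather than full-matches.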
76,565,872
13,799,627
How to calculate number of Trues per row in polars
<p>In Pandas, calculating the number of <code>True</code>s can easily be done by the <code>.sum()</code> function on either a column (axis=0) or row (axis=1).</p> <p>However, in polars, this only seems to work on individual columns:</p> <p>Input:</p> <pre class="lang-py prettyprint-override"><code>s = pl.DataFrame({&quot;a&quot;: [True, False, True], &quot;b&quot;:[True, True, False]}) print(s) # Number of Trues in each column (This works) print(s.sum(axis=0)) # Number of Trues in each row (This does not work) print(s.sum(axis=1)) </code></pre> <p>Output:</p> <pre><code>shape: (3, 2) ┌───────┬───────┐ │ a ┆ b │ │ --- ┆ --- │ │ bool ┆ bool │ ╞═══════╪═══════╡ │ true ┆ true │ │ false ┆ true │ │ true ┆ false │ └───────┴───────┘ shape: (1, 2) ┌─────┬─────┐ │ a ┆ b │ │ --- ┆ --- │ │ u32 ┆ u32 │ ╞═════╪═════╡ │ 2 ┆ 2 │ └─────┴─────┘ --------------------------------------------------------------------------- PanicException Traceback (most recent call last) Cell In[125], line 5 2 print(s) 3 print(s.sum(axis=0)) ----&gt; 5 s.sum(axis=1) File c:\Users\xxxx\.venv\lib\site-packages\polars\dataframe\frame.py:7006, in DataFrame.sum(self, axis, null_strategy) 7004 return self._from_pydf(self._df.sum()) 7005 if axis == 1: -&gt; 7006 return wrap_s(self._df.hsum(null_strategy)) 7007 raise ValueError(&quot;Axis should be 0 or 1.&quot;) PanicException: `add` operation not supported for dtype `bool` </code></pre> <p>How can I achieve the calculation over <code>axis=1</code>?<br /> For non-boolean values this works, but not for boolean values.</p> <p>(My polars version is 0.16.18)</p> <p>Thanks.</p>
<python><python-polars>
2023-06-27 14:24:30
1
535
Crysers
76,565,723
6,414,179
Creating 3 color dithering image
<p>Hello, I am trying to convert a full-color image into a 3-color dithered image, i.e. to convert this image</p> <p><a href="https://i.sstatic.net/vCGrJ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vCGrJ.jpg" alt="enter image description here" /></a></p> <p>to this one</p> <p><a href="https://i.sstatic.net/8g45U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8g45U.png" alt="enter image description here" /></a></p> <p>I don't have a clue how to do this, but I assume that I first need to reduce the source image to a tri-color image (black - white - red) and then apply dithering to it. However, I am not sure how to convert a fully colored image into 3 colors only.</p>
<python><image-processing><dithering>
2023-06-27 14:07:19
1
362
mohamed elsabagh
76,565,609
1,542,011
Python, numba, class with field of its own type
<p>I'm trying to use numba's <code>jitclass</code> on a class that has a field of its own type.</p> <p>The below code does not work because <code>Foo</code> is not defined.</p> <pre><code>from numba.experimental import jitclass @jitclass class Foo: a: int pred: 'Foo' def __init__(self, pred: 'Foo'): self.a = 1 self.pred = pred if __name__ == &quot;__main__&quot;: x = Foo(None) </code></pre> <p>Replacing <code>Foo</code> with object also does not work. Additionally, I need to be able to pass None in one instance.</p> <p>Is there a way to make this work?</p> <p>The only other idea I have is to store <code>pred</code> in an external dictionary.</p>
<python><numba>
2023-06-27 13:55:10
1
1,490
Christian
76,565,563
19,251,893
Get the char* value with python gdb
<p>With GDB and Python I tried to get the char* value in the <code>x1</code> register:</p> <pre><code>python a= gdb.execute(&quot;x/s $x1&quot;, to_string=True) print(a) end </code></pre> <p>But I got <code>0xbb4aaa: &quot;SomeString&quot;</code>.</p> <p>I want to get only the <code>SomeString</code>, without the address.</p> <p>How can I do that directly with Python GDB (without regex/split)?</p>
<python><debugging><gdb><gdb-python>
2023-06-27 13:50:17
1
345
python3.789
76,565,469
2,532,203
Execute Cython function in PHP via FFI
<p>So this is firmly in mad scientist territory, but I need to use the <a href="https://www.nltk.org/howto/wordnet.html" rel="nofollow noreferrer"><code>wordnet</code></a> utility from the Python NLTK package in a PHP application.<br /> I cannot shell out to a Python process for performance reasons -- this specific part of the code is going to get called a lot. I also cannot realistically port the code to PHP without serious effort, and there is no comparable library available.</p> <p>Therefore, I'm trying to compile a wrapper function (1) exposing the wordnet reader to a shared C library using <a href="https://cython.org" rel="nofollow noreferrer">Cython</a>(2), which I can load in PHP using its <a href="https://cython.org" rel="nofollow noreferrer">FFI</a> interface(3). In theory, this should make it possible to invoke the Python function directly from PHP code.<br /> The code compiles fine, but I stumble over dependency errors when invoking the library:</p> <pre><code>PHP Fatal error: Uncaught FFI\ParserException: Undefined C type &quot;PyObject&quot; at line 2 in /build/test.php:5 Stack trace: #0 /build/test.php(5): FFI::cdef() #1 {main} thrown in /build/test.php on line 5 </code></pre> <p>How can I fix this?</p> <hr /> <ol> <li>The wrapper script <code>wordnet.pyx</code>: <pre class="lang-py prettyprint-override"><code>from nltk.corpus import wordnet def synonyms(word: str) -&gt; list[str]: return wordnet.synonyms(word) </code></pre> </li> <li>The <code>setup.py</code> file: <pre class="lang-py prettyprint-override"><code>from distutils.core import setup from Cython.Build import cythonize setup( ext_modules=cythonize( 'wordnet.pyx', compiler_directives={'language_level': '3'} ) ) </code></pre> The C header file <code>wordnet.h</code>: <pre class="lang-c prettyprint-override"><code>#include &lt;Python.h&gt; PyObject* wordnet(PyObject* self, PyObject* args); </code></pre> </li> <li>The PHP test script: <pre class="lang-php prettyprint-override"><code>&lt;?php 
$ffi = FFI::cdef( file_get_contents(&quot;/build/wordnet.h&quot;), &quot;/build/wordnet_wrapper.so&quot; ); $result = $ffi-&gt;wordnet(&quot;dog&quot;); </code></pre> </li> </ol> <p>The compilation steps:</p> <pre class="lang-bash prettyprint-override"><code>python setup.py build_ext --inplace gcc -Wall \ -pthread \ -fwrapv \ -O2 \ -fPIC \ -fno-strict-aliasing \ -shared -o wordnet_wrapper.so \ wordnet.c \ -I /usr/local/include/python3.11 \ -L /usr/local/lib/python3.11 \ -L /usr/local/lib \ -lpython3.11 </code></pre>
<python><php><c><cython><ffi>
2023-06-27 13:38:20
0
1,501
Moritz Friedrich
76,565,458
11,001,493
How to create new column with header name as value based on specific condition of multiple columns?
<p>Imagine I have a dataframe like this:</p> <pre><code>import numpy as np import pandas as pd df = pd.DataFrame({&quot;ID&quot;:[1, 1, 2, 3, 3, 4], &quot;CLASS A&quot;:[np.nan, 1, np.nan, np.nan, np.nan, np.nan], &quot;CLASS B&quot;:[np.nan, np.nan, np.nan, np.nan, np.nan, np.nan], &quot;CLASS C&quot;:[np.nan, np.nan, np.nan, 1, np.nan, np.nan], &quot;DEPTH&quot;:[12, 31, 45, 66, 32, 46]}) df Out[4]: ID CLASS A CLASS B CLASS C DEPTH 0 1 NaN NaN NaN 12 1 1 1.0 NaN NaN 31 2 2 NaN NaN NaN 45 3 3 NaN NaN 1.0 66 4 3 NaN NaN NaN 32 5 4 NaN NaN NaN 46 </code></pre> <p>I would like to create a new column (df[&quot;FILTER&quot;]) with the name of the column as value where I have the number 1, but only looking to the columns (CLASS A, CLASS B and CLASS C). So my final dataframe should look like this:</p> <pre><code> ID CLASS A CLASS B CLASS C DEPTH FILTER 0 1 NaN NaN NaN 12 NaN 1 1 1.0 NaN NaN 31 CLASS A 2 2 NaN NaN NaN 45 NaN 3 3 NaN NaN 1.0 66 CLASS C 4 3 NaN NaN NaN 32 NaN 5 4 NaN NaN NaN 46 NaN </code></pre> <p>Anyone could help me?</p>
<python><pandas><dataframe>
2023-06-27 13:37:10
2
702
user026
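One way to sketch this: build a boolean mask over the three class columns, take `idxmax` along each row to get the first column name holding a 1, and blank out the rows that contain no 1 at all:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 1, 2, 3, 3, 4],
    "CLASS A": [np.nan, 1, np.nan, np.nan, np.nan, np.nan],
    "CLASS B": [np.nan] * 6,
    "CLASS C": [np.nan, np.nan, np.nan, 1, np.nan, np.nan],
    "DEPTH": [12, 31, 45, 66, 32, 46],
})

class_cols = ["CLASS A", "CLASS B", "CLASS C"]
mask = df[class_cols].eq(1)
# idxmax gives the column name of the first True per row; rows that are
# all-False would wrongly report the first column, so .where() resets
# them to NaN.
df["FILTER"] = mask.idxmax(axis=1).where(mask.any(axis=1))
```

Restricting `class_cols` to the CLASS columns keeps ID and DEPTH out of the comparison, as the question requires.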
76,565,279
11,688,559
How to obtain contents of previous versions of google sheets using Python and Google APIs
<p>I have various google sheets for which I need to retrieve the historic versions of. These historic sheets pertain to the status of products and will be appended to a Google Big Query table. As such, it is important that I be able to access the actual contents of these old sheets and not just their metadata.</p> <p>I have attempted this problem with the Python code below. In this code, I have been able to setup a service with the proper credentials. I am then able to get historic versions in the variable <code>revisions</code> which is a list of dictionaries that look like this</p> <pre><code>{'id': '15104', 'mimeType': 'application/vnd.google-apps.spreadsheet', 'kind': 'drive#revision', 'modifiedTime': '2023-06-27T12:41:52.305Z'} </code></pre> <p>This is where I then get stuck. I am not able to download or retrieve the content of this historic version of the file. I typically get an error that complains about only being able to download binary files:</p> <pre><code>HttpError: &lt;HttpError 403 when requesting https://www.googleapis.com/drive/v3/files/1D1pkeTUDoGZnlHHQh0AiRvFAippyX4OYRWR4XNx3leU/revisions/15098?alt=media returned &quot;Only files with binary content can be downloaded. Use Export with Docs Editors files.&quot;. Details: &quot;[{'message': 'Only files with binary content can be downloaded. Use Export with Docs Editors files.', 'domain': 'global', 'reason': 'fileNotDownloadable', 'location': 'alt', 'locationType': 'parameter'}]&quot;&gt; </code></pre> <p>Please help me to understand how to access the contents of the historic files. I am also aware that it might not be possible. If so, please do let me know about such limitations. 
Thank you for your time.</p> <pre><code>import os.path from google.auth.transport.requests import Request from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow from googleapiclient.discovery import build SCOPES = [ 'https://www.googleapis.com/auth/drive', 'https://www.googleapis.com/auth/drive.file', 'https://www.googleapis.com/auth/spreadsheets', ] def login(): creds = None # The file token.json stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists('token.json'): creds = Credentials.from_authorized_user_file('token.json', SCOPES) # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( 'bom_files.json', SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('token.json', 'w') as token: token.write(creds.to_json()) service = build('drive', 'v3', credentials=creds) # Call the Drive v3 API return service def get_sheet_revisions(sheet_id,service): revisions = service.revisions().list(fileId=sheet_id).execute().get('revisions') revised_file_contents = [] # contents of revised files for revision in revisions: request = service.revisions().get_media(fileId=sheet_id, revisionId=revision['id']) file_contents = request.execute() # Do something with the file like save it. # For now, lets append it to a list revised_file_contents.append(file_contents) return revised_file_contents if __name__ == '__main__': service = login() historic_sheets = get_sheet_revisions(sheet_id,service) </code></pre> <h2>EDIT</h2> <p>I have also tried the following. It actually downloads something but it is an unreadable mess. Google sheets cannot even open the xlsx file that it creates. 
On a positive note, it does give a url request code of 200.</p> <pre><code>import os.path import gspread from google.auth.transport.requests import Request from google.oauth2.credentials import Credentials import requests SCOPES = ['https://www.googleapis.com/auth/drive', 'https://www.googleapis.com/auth/spreadsheets'] def login(): creds = None if os.path.exists('token.json'): creds = Credentials.from_authorized_user_file('token.json', SCOPES) if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: creds = Credentials.from_service_account_file('bom_files.json', scopes=SCOPES) with open('token.json', 'w') as token: token.write(creds.to_json()) return creds def export_sheet_revision(sheet_id, revision_id, export_format): creds = login() client = gspread.authorize(creds) sheet = client.open_by_key(sheet_id) url = f&quot;https://docs.google.com/spreadsheets/export?id={sheet_id}&amp;revision={revision_id}&amp;exportFormat={export_format}&quot; return sheet, url def download_file(url, output_path): response = requests.get(url) with open(output_path, 'wb') as file: file.write(response.content) if __name__ == '__main__': sheet_id = '1D1pkeTUDoGZnlHHQh0AiRvFAippyX4OYRWR4XNx3leU' sheet_id = '1wl7kLGLAgCnFB0dn7JYubO-ZwnK5-s-4Rxq-mQtRRC8' # simpler sheet revision_id = '15098' export_format = 'xlsx' sheet, download_url = export_sheet_revision(sheet_id, revision_id, export_format) worksheets = sheet.worksheets() for worksheet in worksheets: worksheet_title = worksheet.title worksheet_url = download_url + f'&amp;gid={worksheet.id}' output_path = f'output_{worksheet_title}.xlsx' # Specify the desired output file path for each worksheet download_file(worksheet_url, output_path) print(f&quot;Worksheet '{worksheet_title}' downloaded to: {output_path}&quot;) </code></pre>
<python><google-sheets><google-api><google-drive-api><google-api-python-client>
2023-06-27 13:15:22
1
398
Dylan Solms
76,564,995
214,864
How to create bar chart with geomean, mean, max and min from data using Plotnine
<p>How to create bar chart with gmean, mean, max and min stats for each category. For the data below,</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>X</th> <th>B</th> <th>Y</th> </tr> </thead> <tbody> <tr> <td>A1</td> <td>b1</td> <td>4</td> </tr> <tr> <td>A1</td> <td>b2</td> <td>2</td> </tr> <tr> <td>A1</td> <td>b3</td> <td>3</td> </tr> <tr> <td>A1</td> <td>b4</td> <td>8</td> </tr> <tr> <td>A2</td> <td>b1</td> <td>7</td> </tr> <tr> <td>A2</td> <td>c1</td> <td>10</td> </tr> <tr> <td>A2</td> <td>c2</td> <td>8</td> </tr> <tr> <td>A2</td> <td>b3</td> <td>7</td> </tr> <tr> <td>A3</td> <td>b4</td> <td>10</td> </tr> <tr> <td>A3</td> <td>b5</td> <td>9</td> </tr> <tr> <td>A3</td> <td>b1</td> <td>4</td> </tr> <tr> <td>A3</td> <td>b3</td> <td>1</td> </tr> </tbody> </table> </div> <p>The chart should look like, <a href="https://i.sstatic.net/ZcOCB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZcOCB.png" alt="enter image description here" /></a></p>
<python><python-ggplot><plotnine>
2023-06-27 12:44:12
1
1,470
kanna
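A sketch of the aggregation step, computing the geometric mean with plain numpy to avoid a scipy dependency, and producing a tidy frame that plotnine can draw as a dodged bar chart:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "X": ["A1"] * 4 + ["A2"] * 4 + ["A3"] * 4,
    "B": ["b1", "b2", "b3", "b4", "b1", "c1", "c2", "b3", "b4", "b5", "b1", "b3"],
    "Y": [4, 2, 3, 8, 7, 10, 8, 7, 10, 9, 4, 1],
})

def gmean(s):
    # geometric mean = exp(mean(log(values)))
    return float(np.exp(np.log(s).mean()))

stats = (
    df.groupby("X")["Y"]
      .agg(geomean=gmean, mean="mean", max="max", min="min")
      .reset_index()
      .melt(id_vars="X", var_name="stat", value_name="value")
)
```

With plotnine, something like `ggplot(stats, aes('X', 'value', fill='stat')) + geom_col(position='dodge')` should then render the grouped bars shown in the target image (untested sketch).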
76,564,964
15,991,222
How to select specific values in a pandas data frame?
<p>I am working on a pandas data frame in which some rows have one number and others have more than one number. I need to create a label column that copies the value from rows containing one specific number, while rows with more than one value should be assigned zero. This is an example:</p> <p><a href="https://i.sstatic.net/ishhR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ishhR.png" alt="enter image description here" /></a></p> <p>I tried the following code, but it does not work here.</p> <pre><code>df[&quot;label&quot;] = df[[&quot;Column1&quot;, &quot;Column2&quot;, &quot;Column3&quot;, &quot; Column4&quot;, &quot;Column5&quot;]].max(axis=1) </code></pre> <p>Can anyone suggest a way to solve this?</p> <p>Reproducible input:</p> <pre><code>df = pd.DataFrame({'A': [1, 0, 0, 0, 0, 1, 0, 0, 0], 'B': [2, 2, 0, 0, 2, 0, 0, 0, 4], 'C': [0, 0, 3, 3, 0, 0, 0, 3, 2], 'D': [0, 0, 0, 4, 4, 0, 4, 0, 0], 'E': [0, 0, 0, 0, 0, 0, 0, 0, 5]}) </code></pre>
<python><pandas>
2023-06-27 12:39:19
1
328
rayan
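Assuming "one specific number" means rows with exactly one non-zero entry, a sketch: count the non-zero cells per row and keep the row sum (which equals the single value in that case) only where the count is 1:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 0, 0, 0, 0, 1, 0, 0, 0],
                   'B': [2, 2, 0, 0, 2, 0, 0, 0, 4],
                   'C': [0, 0, 3, 3, 0, 0, 0, 3, 2],
                   'D': [0, 0, 0, 4, 4, 0, 4, 0, 0],
                   'E': [0, 0, 0, 0, 0, 0, 0, 0, 5]})

nonzero = df.ne(0)
# When exactly one column is non-zero, the row sum equals that single value;
# all other rows get 0.
df["label"] = np.where(nonzero.sum(axis=1) == 1, df.sum(axis=1), 0)
```

This also explains why `.max(axis=1)` alone fails: it returns the largest value for multi-value rows instead of zeroing them out.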
76,564,898
4,373,898
Dataflow streaming error with Workflow failed error message
<p>When running my dataflow job, it fails very early (no data processing, no worker seems to be started) with a single error message:</p> <blockquote> <p>Workflow failed.</p> </blockquote> <ul> <li>I am trying to run a streaming Dataflow job based on a custom flex template.</li> <li>I developed it in Python, and am able to run it in &quot;batch&quot; mode during unit testing.</li> <li>I already checked the quotas, but everything seems right; the compute engine CPU is only at 22%.</li> <li>I have another predefined dataflow job running with the same service account, so I suppose it's not a permission error.</li> </ul> <p>Do you have any clue on where to investigate? What can cause such an abrupt failure?</p> <p>Kind regards.</p> <p>Below are the full logs of the job. First, everything seems to start well:</p> <pre><code>INFO 2023-06-27T10:59:43.625252Z Status: Downloaded newer image for europe-west1-docker.pkg.dev/[...] INFO 2023-06-27T10:59:45.668036Z Created new fluentd log writer for: /var/log/dataflow/template_launcher/runner-json.log INFO 2023-06-27T10:59:45.669085Z Started template launcher. INFO 2023-06-27T10:59:45.669291Z Initialize Python template. 
INFO 2023-06-27T10:59:45.669317Z Falling back to using template-container args from metadata: template-container-args INFO 2023-06-27T10:59:45.673562Z Validating metadata template-container-args: {&quot;consoleLogsLocation&quot;:&quot;gs://[...]/staging/template_launches/[...]/console_logs&quot;,&quot;environment&quot;:{&quot;region&quot;:&quot;europe-west1&quot;,&quot;serviceAccountEmail&quot;:&quot;***@developer.gserviceaccount.com&quot;,&quot;stagingLocation&quot;:&quot;***/staging&quot;,&quot;tempLocation&quot;:&quot;***/tmp&quot;},&quot;jobId&quot;:&quot;[...]&quot;,&quot;jobName&quot;:&quot;***&quot;,&quot;jobObjectLocation&quot;:&quot;***/staging/template_launches/[...]/job_object&quot;,&quot;operationResultLocation&quot;:&quot;***/staging/template_launches/[...]/operation_result&quot;,&quot;parameters&quot;:{&quot;app-log-level&quot;:&quot;DEBUG&quot;,&quot;error&quot;:&quot;gs://***/error/?format=text&quot;,&quot;input&quot;:&quot;pubsub:///projects/***/topics/***&quot;,&quot;log-level&quot;:&quot;DEBUG&quot;,&quot;max_num_workers&quot;:&quot;2&quot;,&quot;metrics&quot;:&quot;gsecrets:///***://***/output/?format=parquet&quot;,&quot;staging_location&quot;:&quot;***/staging&quot;,&quot;temp_location&quot;:&quot;***/tmp&quot;,&quot;window-interval-sec&quot;:&quot;1800&quot;},&quot;projectId&quot;:&quot;***&quot;} INFO 2023-06-27T10:59:45.674097Z Extracting operation result location. INFO 2023-06-27T10:59:45.674131Z Operation result location: ***/staging/template_launches/[...]/operation_result INFO 2023-06-27T10:59:45.674157Z Extracting console log location. INFO 2023-06-27T10:59:45.674177Z Console logs location: ***/staging/template_launches/[...]/console_logs INFO 2023-06-27T10:59:45.674201Z Extracting Python command specs. INFO 2023-06-27T10:59:45.674896Z Generating launch args. 
INFO 2023-06-27T10:59:45.675141Z Overriding staging_location with value: ***/staging (previous value: ***/staging) INFO 2023-06-27T10:59:45.675170Z Overriding temp_location with value: ***/tmp (previous value: ***/tmp) INFO 2023-06-27T10:59:45.675204Z Validating ExpectedFeatures. INFO 2023-06-27T10:59:45.675222Z Launching Python template. INFO 2023-06-27T10:59:45.675260Z Using launch args: [/template/save_to_parquet.py --requirements_file=/template/requirements.txt --job_name=*** --metrics=gsecrets:///*** --input=pubsub:///projects/***/topics/*** --window-interval-sec=1800 --runner=DataflowRunner --template_location=***/staging/template_launches/[...]/job_object --region=europe-west1 --log-level=DEBUG --output=gs://***/output/?format=parquet --error=gs://***/error/?format=text --project=*** --service_account_email=***@developer.gserviceaccount.com --temp_location=***/tmp --max_num_workers=2 --app-log-level=DEBUG --staging_location=***/staging --num_workers=1] INFO 2023-06-27T10:59:45.675339Z Executing: python /template/save_to_parquet.py --requirements_file=/template/requirements.txt --job_name=*** --metrics=gsecrets:///*** --input=pubsub:///projects/***/topics/*** --window-interval-sec=1800 --runner=DataflowRunner --template_location=***/staging/template_launches/[...]/job_object --region=europe-west1 --log-level=DEBUG --output=gs://***/output/?format=parquet --error=gs://***/error/?format=text --project=*** --service_account_email=***@developer.gserviceaccount.com --temp_location=***/tmp --max_num_workers=2 --app-log-level=DEBUG --staging_location=***/staging --num_workers=1 INFO 2023-06-27T10:59:48.025088Z /usr/local/lib/python3.10/site-packages/apache_beam/io/fileio.py:593: BeamDeprecationWarning: options is deprecated since First stable release. 
References to &lt;pipeline&gt;.options will not be supported INFO 2023-06-27T10:59:48.025219Z p.options.view_as(GoogleCloudOptions).temp_location or INFO 2023-06-27T10:59:48.027368Z INFO:apache_beam.io.fileio:Added temporary directory ***/tmp/.temp2d4e5f71-bb17-4ad1-a501-0b6ea8838124 INFO 2023-06-27T10:59:49.167115Z INFO:apache_beam.runners.portability.stager:Executing command: ['/usr/local/bin/python', '-m', 'pip', 'download', '--dest', '/tmp/dataflow-requirements-cache', '-r', '/tmp/tmp3kt4l_pk/tmp_requirements.txt', '--exists-action', 'i', '--no-deps', '--implementation', 'cp', '--abi', 'cp310', '--platform', 'manylinux2014_x86_64'] INFO 2023-06-27T10:59:54.127463Z INFO:apache_beam.runners.portability.stager:Downloading source distribution of the SDK from PyPi INFO 2023-06-27T10:59:54.127739Z INFO:apache_beam.runners.portability.stager:Executing command: ['/usr/local/bin/python', '-m', 'pip', 'download', '--dest', '/tmp/tmpr07g31e_', 'apache-beam==2.48.0', '--no-deps', '--no-binary', ':all:'] INFO 2023-06-27T11:00:14.273559Z [notice] A new release of pip is available: 23.0.1 -&gt; 23.1.2 INFO 2023-06-27T11:00:14.273666Z [notice] To update, run: pip install --upgrade pip INFO 2023-06-27T11:00:14.542496Z INFO:apache_beam.runners.portability.stager:Staging SDK sources from PyPI: dataflow_python_sdk.tar INFO 2023-06-27T11:00:14.543974Z INFO:apache_beam.runners.portability.stager:Downloading binary distribution of the SDK from PyPi INFO 2023-06-27T11:00:14.544215Z INFO:apache_beam.runners.portability.stager:Executing command: ['/usr/local/bin/python', '-m', 'pip', 'download', '--dest', '/tmp/tmpr07g31e_', 'apache-beam==2.48.0', '--no-deps', '--only-binary', ':all:', '--python-version', '310', '--implementation', 'cp', '--abi', 'cp310', '--platform', 'manylinux2014_x86_64'] INFO 2023-06-27T11:00:15.867609Z [notice] A new release of pip is available: 23.0.1 -&gt; 23.1.2 INFO 2023-06-27T11:00:15.867700Z [notice] To update, run: pip install --upgrade pip INFO 
2023-06-27T11:00:16.093499Z INFO:apache_beam.runners.portability.stager:Staging binary distribution of the SDK from PyPI: apache_beam-2.48.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl INFO 2023-06-27T11:00:16.202858Z INFO:apache_beam.runners.dataflow.dataflow_runner:Pipeline has additional dependencies to be installed in SDK worker container, consider using the SDK container image pre-building workflow to avoid repetitive installations. Learn more on https://cloud.google.com/dataflow/docs/guides/using-custom-containers#prebuild INFO 2023-06-27T11:00:16.212242Z INFO:root:Default Python SDK image for environment is apache/beam_python3.10_sdk:2.48.0 INFO 2023-06-27T11:00:16.212622Z INFO:root:Using provided Python SDK container image: gcr.io/cloud-dataflow/v1beta3/beam_python3.10_sdk:2.48.0 INFO 2023-06-27T11:00:16.212759Z INFO:root:Python SDK container image set to &quot;gcr.io/cloud-dataflow/v1beta3/beam_python3.10_sdk:2.48.0&quot; for Docker environment INFO 2023-06-27T11:00:16.781664Z INFO:apache_beam.internal.gcp.auth:Setting socket default timeout to 60 seconds. INFO 2023-06-27T11:00:16.782239Z INFO:apache_beam.internal.gcp.auth:socket default timeout is 60.0 seconds. INFO 2023-06-27T11:00:16.805635Z INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to ***/staging/***.1687863616.380928/requirements.txt... INFO 2023-06-27T11:00:16.926113Z INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to ***/staging/***.1687863616.380928/requirements.txt in 0 seconds. INFO 2023-06-27T11:00:16.926750Z INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to ***/staging/***.1687863616.380928/pickled_main_session... INFO 2023-06-27T11:00:17.024959Z INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to ***/staging/***.1687863616.380928/pickled_main_session in 0 seconds. 
INFO 2023-06-27T11:00:17.025549Z INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to ***/staging/***.1687863616.380928/dataflow_python_sdk.tar... INFO 2023-06-27T11:00:17.265944Z INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to ***/staging/***.1687863616.380928/dataflow_python_sdk.tar in 0 seconds. INFO 2023-06-27T11:00:17.266521Z INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to ***/staging/***.1687863616.380928/apache_beam-2.48.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl... INFO 2023-06-27T11:00:18.169724Z INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to ***/staging/***.1687863616.380928/apache_beam-2.48.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl in 0 seconds. INFO 2023-06-27T11:00:18.170524Z INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to ***/staging/***.1687863616.380928/pipeline.pb... INFO 2023-06-27T11:00:18.238244Z INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to ***/staging/***.1687863616.380928/pipeline.pb in 0 seconds. INFO 2023-06-27T11:00:18.745066Z INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://***/staging/template_launches/***/job_object... INFO 2023-06-27T11:00:18.912294Z INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://***/staging/template_launches/***/job_object in 0 seconds. INFO 2023-06-27T11:00:18.912678Z INFO:apache_beam.runners.dataflow.internal.apiclient:A template was just created at location gs://***/staging/template_launches/***/job_object INFO 2023-06-27T11:00:19.627536Z Template launch successful. INFO 2023-06-27T11:00:19.627637Z Uploading console logs to GCS location: gs://***/staging/template_launches/***/console_logs INFO 2023-06-27T11:00:19.710081Z Successfully uploaded console logs to GCS. 
INFO 2023-06-27T11:00:19.710187Z FLEX_TEMPLATES_TAIL_CMD_TIMEOUT_IN_SECS to fetch is not set in env. Using default value: 5 INFO 2023-06-27T11:00:19.710220Z FLEX_TEMPLATES_NUM_LOG_LINES to fetch is not set in env. Using default value: 100 INFO 2023-06-27T11:00:24.712683Z Uploading result file to GCS location: gs://***/staging/template_launches/***/operation_result INFO 2023-06-27T11:00:25.148819Z py options not set in envsetup file not set in envextra package not set in env INFO 2023-06-27T11:00:25.242819Z Credentials not provided. Not logging out of Docker private registry. INFO 2023-06-27T11:00:34.833338709Z Shutting down the Sandbox, launcher-2023062703574013425371552899353440 of type GCE Instance, used for launching. INFO 2023-06-27T11:00:38.466634757Z Worker configuration: n1-standard-2 in europe-west1-d. </code></pre> <p>Then some debug from beam which try to optimize my workflow:</p> <pre><code>DEBUG 2023-06-27T11:00:40.754549761Z Expanding SplittableParDo operations into optimizable parts. DEBUG 2023-06-27T11:00:40.791375973Z Expanding CollectionToSingleton operations into optimizable parts. DEBUG 2023-06-27T11:00:40.861882114Z Expanding CoGroupByKey operations into optimizable parts. DEBUG 2023-06-27T11:00:40.890661436Z Combiner lifting skipped for step [...]: GroupByKey not followed by a combiner. [...] DEBUG 2023-06-27T11:00:41.256597037Z Expanding SplittableProcessKeyed operations into optimizable parts. DEBUG 2023-06-27T11:00:41.290339380Z Expanding GroupByKey operations into streaming Read/Write steps DEBUG 2023-06-27T11:00:41.614488065Z Lifting ValueCombiningMappingFns into MergeBucketsMappingFns DEBUG 2023-06-27T11:00:41.969795862Z Annotating graph with Autotuner information. 
DEBUG 2023-06-27T11:00:42.025294160Z Fusing adjacent ParDo, Read, Write, and Flatten operations DEBUG 2023-06-27T11:00:42.063715064Z Unzipping flatten ref_AppliedPTransform_Save-data-to-gs-***-output-format-parquet-W_34 for input ref_AppliedPTransform_Save-data-to-gs-***-output-format-parquet-W_30.written_files DEBUG 2023-06-27T11:00:42.089804965Z Fusing unzipped copy of Save data to gs://***/output/?format=parquet/WriteToPandas/Map(&lt;lambda at fileio.py:627&gt;), through flatten Save data to gs://***/output/?format=parquet/WriteToPandas/Flatten, into producer Save data to gs://***/output/?format=parquet/WriteToPandas/ParDo(_WriteUnshardedRecordsFn)/ParDo(_WriteUnshardedRecordsFn) DEBUG 2023-06-27T11:00:42.123688209Z Unzipping flatten ref_AppliedPTransform_Save-data-to-gs-***-output-format-parquet-W_34-u123 for input ref_AppliedPTransform_Save-data-to-gs-***-output-format-parquet-W_35.None-c121 DEBUG 2023-06-27T11:00:42.155708287Z Fusing unzipped copy of Save data to gs://***/output/?format=parquet/WriteToPandas/GroupTempFilesByDestination/WriteStream, through flatten Save data to gs://***/output/?format=parquet/WriteToPandas/Flatten/Unzipped-1, into producer Save data to gs://***/output/?format=parquet/WriteToPandas/Map(&lt;lambda at fileio.py:627&gt;) DEBUG 2023-06-27T11:00:42.188468945Z Fusing consumer [...] [...] </code></pre> <p>And finally the error:</p> <pre><code>DEBUG 2023-06-27T11:00:44.262397762Z Workflow config is missing a default resource spec. DEBUG 2023-06-27T11:00:44.294972535Z Adding StepResource setup and teardown to workflow graph. INFO 2023-06-27T11:00:44.331156772Z Running job using Streaming Engine ERROR 2023-06-27T11:00:44.365334445Z Workflow failed. DEBUG 2023-06-27T11:00:44.397489706Z Cleaning up. INFO 2023-06-27T11:00:44.447547675Z Worker pool stopped. INFO 2023-06-27T11:00:59.262390276Z Sandbox, launcher-2023062703574013425371552899353440, stopped. </code></pre>
<python><google-cloud-dataflow><apache-beam>
2023-06-27 12:31:27
0
3,169
AlexisBRENON
76,564,857
2,876,079
How to improve readabilty and maintainability of @patch and MagicMock statements (avoid long names and String identification)?
<p>In my test code I have a lot of boilerplate like the &quot;Magic&quot; and &quot;return_&quot; prefixes. I also have lengthy strings to identify the paths of the functions to mock. The strings are not automatically replaced during refactoring and I would prefer to directly use the imported functions.</p> <p>Example code:</p> <pre><code>from mock import patch, MagicMock from pytest import raises @patch( 'foo.baa.qux.long_module_name.calculation.energy_intensity.intensity_table', MagicMock(return_value='mocked_result_table'), ) </code></pre> <p>Instead I would prefer:</p> <pre><code>from better_test_module import patch, Mock, raises from foo.baa.qux.long_module_name.calculation import energy_intensity @patch( energy_intensity.intensity_table, Mock('mocked_result_table'), ) </code></pre> <p>or</p> <pre><code>@patch( energy_intensity.intensity_table, 'mocked_result_table', ) </code></pre> <p>I post my corresponding custom implementation as an answer below.</p> <p>If you have other suggestions, please let me know. I am wondering why the proposed solution is not the default. I do not want to reinvent the wheel. Therefore, if there is an already existing library I could use, please let me know.</p> <p>Related:</p> <p><a href="https://stackoverflow.com/questions/17181687/mock-vs-magicmock">Mock vs MagicMock</a></p> <p><a href="https://stackoverflow.com/questions/67580353/how-to-override-getitem-on-a-magicmock-subclass">How to override __getitem__ on a MagicMock subclass</a></p>
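Not part of the original question, but a minimal sketch of one standard-library route toward the desired style: `unittest.mock.patch.object` takes the imported module object plus an attribute name, so only the attribute itself remains a string. The `energy_intensity` object below is a hypothetical stand-in for the question's real module.

```python
from types import SimpleNamespace
from unittest.mock import patch

# Hypothetical stand-in for an imported module such as
# `foo.baa.qux.long_module_name.calculation.energy_intensity`.
energy_intensity = SimpleNamespace(intensity_table=lambda: "real_table")

# patch.object(target, "attribute", return_value=...) replaces the attribute
# with a MagicMock for the duration of the block. Refactoring tools can track
# the module import; only the attribute name stays a string.
with patch.object(energy_intensity, "intensity_table",
                  return_value="mocked_result_table"):
    assert energy_intensity.intensity_table() == "mocked_result_table"

# Outside the context manager the original attribute is restored.
assert energy_intensity.intensity_table() == "real_table"
```

The same call works as a decorator, which gets close to the second form the question asks for without a custom library.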
<python><mocking><pytest><magicmock>
2023-06-27 12:27:11
1
12,756
Stefan
76,564,766
12,691,626
Export NumPy array as NetCDF4 file
<p>I'm trying to export a simple 2D NumPy array as a <code>.nc</code> (NetCDF) file. I'm trying to follow <a href="https://stackoverflow.com/questions/55956270/convert-a-numpy-dataset-to-netcdf">this</a> example without success.</p> <pre><code>import numpy as np import netCDF4 as nc # Array example x = np.random.random((16,16)) # Create NetCDF4 dataset with nc.Dataset('data/2d_array.nc', 'w', 'NETCDF4') as ncfile: ncfile.createDimension('rows', x.shape[0]) ncfile.createDimension('cols', x.shape[1]) data_var = ncfile.createVariable('data', 'int32', ('rows', 'cols')) data_var = x </code></pre> <p>However, when I import the array again, it is all zero:</p> <pre><code># Read file with nc.Dataset('data/2d_array.nc', 'r') as ncfile: array = ncfile.variables['data'][:] </code></pre> <p>What am I doing wrong?</p>
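A likely culprit (an inference, not confirmed in the question): the last line `data_var = x` rebinds the Python name instead of writing into the netCDF variable; with the netCDF4 API the write is spelled with slice assignment, `data_var[:] = x`. A dependency-free sketch of the same name-rebinding pitfall, using a plain list as a stand-in for the on-disk variable:

```python
# The netCDF4 variable behaves like a mutable buffer: plain `name = value`
# rebinds the name, while `name[:] = value` writes into the object.
buffer = [0, 0, 0, 0]          # stand-in for the on-disk variable
data_var = buffer

data_var = [1, 2, 3, 4]        # rebinds `data_var`; `buffer` is untouched
assert buffer == [0, 0, 0, 0]

data_var = buffer
data_var[:] = [1, 2, 3, 4]     # slice assignment mutates the shared object
assert buffer == [1, 2, 3, 4]
```

The file reads back as all zeros because the created variable was never written to, only shadowed.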
<python><numpy><netcdf><netcdf4>
2023-06-27 12:16:29
1
327
sermomon
76,564,724
22,070,773
Emscripten compiling Python error: Linking globals named 'hypot': symbol multiply defined
<p>I am trying to build Python with emscripten and link it to my main program, but during the linking stage I'm getting:</p> <p><code>error: Linking globals named 'hypot': symbol multiply defined!</code></p> <p>which seems to say that there are multiple definitions of the global function <code>hypot</code>. I have <code>grep</code>ped hypot and only found one definition, in pymath.c.</p> <p>How do I fix this problem?</p>
<python><linker><emscripten>
2023-06-27 12:10:42
1
451
got here
76,564,528
7,745,011
how to efficiently query a MySQL by a list of ids with repeating ids?
<p>If I have a list of (sometimes) repeating ids and I want to query a table based on this list, is there a better way to do it, other than the following example:</p> <pre><code>def get_sensors_by_ids(self, ids: List[int]) -&gt; List[SensorModel]: query = ( &quot;SELECT * FROM sensor &quot; &quot;WHERE id = %s&quot; ) sensor_models = [] with self.connection.cursor(dictionary=True, buffered=True) as cursor: for id in ids: cursor.execute(query, (id,)) result = cursor.fetchone() sensor_models.append( SensorDatabaseModel.parse_obj(result).to_sensor_model() ) return sensor_models </code></pre> <p>In the case above I want to return exactly one <code>SensorModel</code> instance per given <code>ids</code> item. <code>ids</code> could have the same value for every item, sometimes repeating values, or all unique values, e.g.</p> <pre><code>ids = [1, 1, 1, 1, 1, 1, 1, 1, ... ] ids = [1, 1, 1, 1, 2, 2, 2, 2, ... ] ids = [1, 2, 3, 4, 5, 6, 7, 8, ... ] ids = [1, 22, 313, 4, 65, 16, 347, 228, ... ] </code></pre> <p>The code above works and I don't expect a huge amount of data in the tables, but I still don't like the solution above where everything is done in separate queries.</p>
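One common alternative is to fetch each distinct id once with a single <code>IN</code> clause and then map the rows back, so repeated ids still yield one result per input item. The sketch below uses the standard-library <code>sqlite3</code> as a stand-in for mysql-connector (an assumption for illustration; MySQL uses <code>%s</code> placeholders where sqlite3 uses <code>?</code>), with a hypothetical two-column table:

```python
import sqlite3

# In-memory table standing in for the real `sensor` table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensor (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO sensor VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c")])

ids = [1, 1, 2, 3, 1]

# One round trip: query only the distinct ids...
unique_ids = list(set(ids))
placeholders = ", ".join("?" for _ in unique_ids)
rows = conn.execute(
    f"SELECT id, name FROM sensor WHERE id IN ({placeholders})", unique_ids
).fetchall()

# ...then expand back to one row per input item, duplicates and order intact.
by_id = {row[0]: row for row in rows}
result = [by_id[i] for i in ids]
assert [r[1] for r in result] == ["a", "a", "b", "c", "a"]
```

This keeps the "exactly one model per `ids` item" contract while issuing a single query instead of `len(ids)` queries.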
<python><mysql><mysql-connector>
2023-06-27 11:46:17
0
2,980
Roland Deschain
76,564,427
2,107,030
python3.9 quit unexpectedly (segmentation fault) on Monterey M1
<p>Today I was running a python script from terminal and I can't anymore:</p> <pre><code>zsh: segmentation fault python3.9 script_name.py </code></pre> <p>The window with more details says:</p> <pre><code>------------------------------------- Translated Report (Full Report Below) ------------------------------------- Process: python3.9 [1420] Path: /Users/USER/*/python3.9 Identifier: python3.9 Version: ??? Code Type: ARM-64 (Native) Parent Process: zsh [1404] Responsible: Terminal [586] User ID: 501 Date/Time: 2023-06-27 13:21:57.6610 +0200 OS Version: macOS 12.6.1 (21G217) Report Version: 12 Anonymous UUID: C37BD5BC-29E8-9550-85BB-8E2EC972B657 Time Awake Since Boot: 4100 seconds System Integrity Protection: enabled Crashed Thread: 0 Dispatch queue: com.apple.main-thread Exception Type: EXC_BAD_ACCESS (SIGSEGV) Exception Codes: KERN_INVALID_ADDRESS at 0xfffffffe00000375 Exception Codes: 0x0000000000000001, 0xfffffffe00000375 Exception Note: EXC_CORPSE_NOTIFY Termination Reason: Namespace SIGNAL, Code 11 Segmentation fault: 11 Terminating Process: exc handler [1420] VM Region Info: 0xfffffffe00000375 is not in any region. 
Bytes after previous region: 18446638511466480502 REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL MALLOC_NANO (reserved) 600018000000-600020000000 [128.0M] rw-/rwx SM=NUL ...(unallocated) ---&gt; UNUSED SPACE AT END Thread 0 Crashed:: Dispatch queue: com.apple.main-thread 0 python3.9 0x102b19b94 PyBuffer_Release + 24 1 CoreGraphics 0x1925aad48 data_release_info + 40 2 CoreGraphics 0x1925aad48 data_release_info + 40 3 CoreGraphics 0x192564f5c data_provider_finalize + 60 4 CoreGraphics 0x192531e6c data_provider_retain_count + 96 5 CoreFoundation 0x18d13f134 _CFRelease + 1264 6 CoreGraphics 0x19258c210 image_finalize + 100 7 CoreFoundation 0x18d13ed2c _CFRelease + 232 8 CoreGraphics 0x1925cdf74 CG::DisplayListResourceImage::~DisplayListResourceImage() + 56 9 CoreGraphics 0x1925cdf24 CG::DisplayListResourceImage::~DisplayListResourceImage() + 16 10 CoreGraphics 0x19278a994 std::__1::__shared_weak_count::__release_shared() + 84 11 CoreGraphics 0x1928ecff4 std::__1::__tree&lt;std::__1::shared_ptr&lt;CG::DisplayListResourceImage&gt;, CG::CompareResourceImage, std::__1::allocator&lt;std::__1::shared_ptr&lt;CG::DisplayListResourceImage&gt; &gt; &gt;::destroy(std::__1::__tree_node&lt;std::__1::shared_ptr&lt;CG::DisplayListResourceImage&gt;, void*&gt;*) + 60 12 CoreGraphics 0x1925cda94 CG::DisplayList::~DisplayList() + 304 13 CoreFoundation 0x18d13ed2c _CFRelease + 232 14 CoreFoundation 0x18d002c10 __RELEASE_OBJECTS_IN_THE_ARRAY__ + 116 15 CoreFoundation 0x18d002b48 -[__NSArrayM dealloc] + 276 16 CoreGraphics 0x1925cb888 CG::DisplayListRecorder::~DisplayListRecorder() + 60 17 CoreGraphics 0x1925cb834 CG::DisplayListRecorder::~DisplayListRecorder() + 16 18 CoreGraphics 0x192556008 CGContextDelegateFinalize + 60 19 CoreFoundation 0x18d13ecfc _CFRelease + 184 20 CoreGraphics 0x192555f70 context_reclaim + 56 21 CoreFoundation 0x18d13ecfc _CFRelease + 184 22 AppKit 0x18fd26d90 -[NSCGSContext _invalidate] + 52 23 AppKit 0x18fd26d30 -[NSCGSContext dealloc] + 36 24 
AppKit 0x18fd26cfc -[NSBitmapGraphicsContext dealloc] + 68 25 libobjc.A.dylib 0x18ce252b4 AutoreleasePoolPage::releaseUntil(objc_object**) + 196 26 libobjc.A.dylib 0x18ce21b34 objc_autoreleasePoolPop + 212 27 QuartzCore 0x19404e6a0 CA::Context::commit_transaction(CA::Transaction*, double, double*) + 580 28 QuartzCore 0x193ee34cc CA::Transaction::commit() + 704 29 AppKit 0x18fd3569c __62+[CATransaction(NSCATransaction) NS_setFlushesWithDisplayLink]_block_invoke + 304 30 AppKit 0x19049a758 ___NSRunLoopObserverCreateWithHandler_block_invoke + 64 31 CoreFoundation 0x18d0641a4 __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 36 32 CoreFoundation 0x18d063ff4 __CFRunLoopDoObservers + 592 33 CoreFoundation 0x18d063528 __CFRunLoopRun + 772 34 CoreFoundation 0x18d062a84 CFRunLoopRunSpecific + 600 35 HIToolbox 0x195ca6338 RunCurrentEventLoopInMode + 292 36 HIToolbox 0x195ca5fc4 ReceiveNextEventCommon + 324 37 HIToolbox 0x195ca5e68 _BlockUntilNextEventMatchingListInModeWithFilter + 72 38 AppKit 0x18fbca51c _DPSNextEvent + 860 39 AppKit 0x18fbc8e14 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1328 40 AppKit 0x18fbbafe0 -[NSApplication run] + 596 41 _macosx.cpython-39-darwin.so 0x1048a4bac show + 184 42 python3.9 0x102b992f8 cfunction_vectorcall_NOARGS + 96 43 python3.9 0x102c62874 call_function + 128 44 python3.9 0x102c5ba24 _PyEval_EvalFrameDefault + 23340 45 python3.9 0x102b3cea8 function_code_fastcall + 168 46 python3.9 0x102c62874 call_function + 128 47 python3.9 0x102c5ba24 _PyEval_EvalFrameDefault + 23340 48 python3.9 0x102c54938 _PyEval_EvalCode + 496 49 python3.9 0x102b3cdc4 _PyFunction_Vectorcall + 192 50 python3.9 0x102c56790 _PyEval_EvalFrameDefault + 2200 51 python3.9 0x102c54938 _PyEval_EvalCode + 496 52 python3.9 0x102b3cdc4 _PyFunction_Vectorcall + 192 53 python3.9 0x102b40210 method_vectorcall + 388 54 python3.9 0x102c56790 _PyEval_EvalFrameDefault + 2200 55 python3.9 0x102c54938 _PyEval_EvalCode + 496 56 
python3.9 0x102b3cdc4 _PyFunction_Vectorcall + 192 57 python3.9 0x102c62874 call_function + 128 58 python3.9 0x102c5ba24 _PyEval_EvalFrameDefault + 23340 59 python3.9 0x102c54938 _PyEval_EvalCode + 496 60 python3.9 0x102cb8948 pyrun_file + 312 61 python3.9 0x102cb80ac pyrun_simple_file + 372 62 python3.9 0x102cb7ed8 PyRun_SimpleFileExFlags + 120 63 python3.9 0x102cdeff8 pymain_run_file + 300 64 python3.9 0x102cde3b4 pymain_run_python + 360 65 python3.9 0x102cde1f4 Py_RunMain + 40 66 python3.9 0x102cdfb74 pymain_main + 72 67 python3.9 0x102ad9fd0 main + 56 68 dyld 0x1030ed08c start + 520 Thread 1: 0 libsystem_kernel.dylib 0x18cf5a9b8 swtch_pri + 8 1 libsystem_pthread.dylib 0x18cf95108 cthread_yield + 20 2 libomp.dylib 0x104195450 kmp_flag_64&lt;false, true&gt;::wait(kmp_info*, int, void*) + 1672 3 libomp.dylib 0x104190560 __kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 184 4 libomp.dylib 0x1041940e8 __kmp_fork_barrier(int, int) + 628 5 libomp.dylib 0x104170e14 __kmp_launch_thread + 340 6 libomp.dylib 0x1041af00c __kmp_launch_worker(void*) + 280 7 libsystem_pthread.dylib 0x18cf9826c _pthread_start + 148 8 libsystem_pthread.dylib 0x18cf9308c thread_start + 8 Thread 2: 0 libomp.dylib 0x10419549c kmp_flag_64&lt;false, true&gt;::wait(kmp_info*, int, void*) + 1748 1 libomp.dylib 0x104195328 kmp_flag_64&lt;false, true&gt;::wait(kmp_info*, int, void*) + 1376 2 libomp.dylib 0x104190560 __kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 184 3 libomp.dylib 0x1041940e8 __kmp_fork_barrier(int, int) + 628 4 libomp.dylib 0x104170e14 __kmp_launch_thread + 340 5 libomp.dylib 0x1041af00c __kmp_launch_worker(void*) + 280 6 libsystem_pthread.dylib 0x18cf9826c _pthread_start + 148 7 libsystem_pthread.dylib 0x18cf9308c thread_start + 8 Thread 3: 0 libsystem_kernel.dylib 0x18cf5a9b8 swtch_pri + 8 1 libsystem_pthread.dylib 0x18cf95108 cthread_yield + 20 2 libomp.dylib 0x104195450 kmp_flag_64&lt;false, true&gt;::wait(kmp_info*, int, 
void*) + 1672 3 libomp.dylib 0x104190560 __kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 184 4 libomp.dylib 0x1041940e8 __kmp_fork_barrier(int, int) + 628 5 libomp.dylib 0x104170e14 __kmp_launch_thread + 340 6 libomp.dylib 0x1041af00c __kmp_launch_worker(void*) + 280 7 libsystem_pthread.dylib 0x18cf9826c _pthread_start + 148 8 libsystem_pthread.dylib 0x18cf9308c thread_start + 8 Thread 4: 0 libsystem_kernel.dylib 0x18cf5a9b8 swtch_pri + 8 1 libsystem_pthread.dylib 0x18cf95108 cthread_yield + 20 2 libomp.dylib 0x104195450 kmp_flag_64&lt;false, true&gt;::wait(kmp_info*, int, void*) + 1672 3 libomp.dylib 0x104190560 __kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 184 4 libomp.dylib 0x1041940e8 __kmp_fork_barrier(int, int) + 628 5 libomp.dylib 0x104170e14 __kmp_launch_thread + 340 6 libomp.dylib 0x1041af00c __kmp_launch_worker(void*) + 280 7 libsystem_pthread.dylib 0x18cf9826c _pthread_start + 148 8 libsystem_pthread.dylib 0x18cf9308c thread_start + 8 Thread 5: 0 libsystem_kernel.dylib 0x18cf5a9b8 swtch_pri + 8 1 libsystem_pthread.dylib 0x18cf95108 cthread_yield + 20 2 libomp.dylib 0x104195450 kmp_flag_64&lt;false, true&gt;::wait(kmp_info*, int, void*) + 1672 3 libomp.dylib 0x104190560 __kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 184 4 libomp.dylib 0x1041940e8 __kmp_fork_barrier(int, int) + 628 5 libomp.dylib 0x104170e14 __kmp_launch_thread + 340 6 libomp.dylib 0x1041af00c __kmp_launch_worker(void*) + 280 7 libsystem_pthread.dylib 0x18cf9826c _pthread_start + 148 8 libsystem_pthread.dylib 0x18cf9308c thread_start + 8 Thread 6: 0 libsystem_kernel.dylib 0x18cf5a9b8 swtch_pri + 8 1 libsystem_pthread.dylib 0x18cf95108 cthread_yield + 20 2 libomp.dylib 0x104195450 kmp_flag_64&lt;false, true&gt;::wait(kmp_info*, int, void*) + 1672 3 libomp.dylib 0x104190560 __kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 184 4 libomp.dylib 0x1041940e8 __kmp_fork_barrier(int, 
int) + 628 5 libomp.dylib 0x104170e14 __kmp_launch_thread + 340 6 libomp.dylib 0x1041af00c __kmp_launch_worker(void*) + 280 7 libsystem_pthread.dylib 0x18cf9826c _pthread_start + 148 8 libsystem_pthread.dylib 0x18cf9308c thread_start + 8 Thread 7: 0 libsystem_kernel.dylib 0x18cf5a9b8 swtch_pri + 8 1 libsystem_pthread.dylib 0x18cf95108 cthread_yield + 20 2 libomp.dylib 0x104195450 kmp_flag_64&lt;false, true&gt;::wait(kmp_info*, int, void*) + 1672 3 libomp.dylib 0x104190560 __kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 184 4 libomp.dylib 0x1041940e8 __kmp_fork_barrier(int, int) + 628 5 libomp.dylib 0x104170e14 __kmp_launch_thread + 340 6 libomp.dylib 0x1041af00c __kmp_launch_worker(void*) + 280 7 libsystem_pthread.dylib 0x18cf9826c _pthread_start + 148 8 libsystem_pthread.dylib 0x18cf9308c thread_start + 8 Thread 8: 0 libsystem_kernel.dylib 0x18cf5a9b8 swtch_pri + 8 1 libsystem_pthread.dylib 0x18cf95108 cthread_yield + 20 2 libomp.dylib 0x104195450 kmp_flag_64&lt;false, true&gt;::wait(kmp_info*, int, void*) + 1672 3 libomp.dylib 0x104190560 __kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 184 4 libomp.dylib 0x1041940e8 __kmp_fork_barrier(int, int) + 628 5 libomp.dylib 0x104170e14 __kmp_launch_thread + 340 6 libomp.dylib 0x1041af00c __kmp_launch_worker(void*) + 280 7 libsystem_pthread.dylib 0x18cf9826c _pthread_start + 148 8 libsystem_pthread.dylib 0x18cf9308c thread_start + 8 Thread 9: 0 libomp.dylib 0x104174510 kmp_flag_native&lt;unsigned long long, (flag_type)1, true&gt;::notdone_check() + 0 1 libomp.dylib 0x104195328 kmp_flag_64&lt;false, true&gt;::wait(kmp_info*, int, void*) + 1376 2 libomp.dylib 0x104190560 __kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 184 3 libomp.dylib 0x1041940e8 __kmp_fork_barrier(int, int) + 628 4 libomp.dylib 0x104170e14 __kmp_launch_thread + 340 5 libomp.dylib 0x1041af00c __kmp_launch_worker(void*) + 280 6 libsystem_pthread.dylib 0x18cf9826c 
_pthread_start + 148 7 libsystem_pthread.dylib 0x18cf9308c thread_start + 8 Thread 10: 0 libsystem_pthread.dylib 0x18cf93078 start_wqthread + 0 Thread 11: 0 libsystem_pthread.dylib 0x18cf93078 start_wqthread + 0 Thread 12: 0 libsystem_pthread.dylib 0x18cf93078 start_wqthread + 0 Thread 0 crashed with ARM Thread State (64-bit): x0: 0x000000016d3264f0 x1: 0x0000000148000000 x2: 0x0000000000d6d800 x3: 0x000000018cda704c x4: 0x0000000000000000 x5: 0x0000000000000004 x6: 0x0000000000000000 x7: 0x0000000193f217fc x8: 0x00000001048a2ea8 x9: 0x00000000ffffffff x10: 0x00000000ff000000 x11: 0x007ffffffffffff8 x12: 0x000000000000016e x13: 0x00000000abb50144 x14: 0x00000000abd50800 x15: 0x00000000000002a1 x16: 0x0000000102b19b7c x17: 0x00000001e59f5c20 x18: 0x0000000000000000 x19: 0xfffffffe0000036d x20: 0x000000016d3264f0 x21: 0xffffffffffffffff x22: 0x00000001e31c4000 x23: 0xffffffffff810380 x24: 0x00000001e31bce80 x25: 0x0000000100000000 x26: 0xffffffff00000000 x27: 0x000000013dc6e678 x28: 0x00000001e3339000 fp: 0x000000016d327670 lr: 0x00000001925aad48 sp: 0x000000016d327660 pc: 0x0000000102b19b94 cpsr: 0x60001000 far: 0xfffffffe00000375 esr: 0x92000004 (Data Abort) byte read Translation fault Binary Images: 0x102ad4000 - 0x102dfffff python3.9 (*) &lt;f9348a2c-0432-3f11-98a7-f9481ff7e623&gt; /Users/USER/*/python3.9 0x192526000 - 0x192b33fff com.apple.CoreGraphics (2.0) &lt;790514ae-01b1-3cd7-a6cb-75224f9d6c56&gt; /System/Library/Frameworks/CoreGraphics.framework/Versions/A/CoreGraphics 0x18cfe0000 - 0x18d526fff com.apple.CoreFoundation (6.9) &lt;fc3c193d-0cdb-3569-9f0e-bd2507ca1dbb&gt; /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x18fb89000 - 0x190a41fff com.apple.AppKit (6.9) &lt;5ece5db5-a167-3ab1-a1cf-af442beecea6&gt; /System/Library/Frameworks/AppKit.framework/Versions/C/AppKit 0x18ce17000 - 0x18ce54fff libobjc.A.dylib (*) &lt;ec96f0fa-6341-3e1d-be54-49b544e17f7d&gt; /usr/lib/libobjc.A.dylib 0x193ee1000 - 0x19420cfff 
com.apple.QuartzCore (1.11) &lt;239ccbf7-85b5-3a4c-bf04-d50cf65feaea&gt; /System/Library/Frameworks/QuartzCore.framework/Versions/A/QuartzCore 0x195c74000 - 0x195fa7fff com.apple.HIToolbox (2.1.1) &lt;aaf900bd-bfb6-3af0-a8d3-e24bbe1d57f5&gt; /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HIToolbox.framework/Versions/A/HIToolbox 0x10489c000 - 0x1048a7fff _macosx.cpython-39-darwin.so (*) &lt;ab6da419-f098-30ae-959f-b44482854f72&gt; /Users/USER/*/_macosx.cpython-39-darwin.so 0x1030e8000 - 0x103147fff dyld (*) &lt;24d09537-e51b-350e-b59e-181c9d94d291&gt; /usr/lib/dyld 0x18cf59000 - 0x18cf90fff libsystem_kernel.dylib (*) &lt;dbf55fdd-2b9b-3701-93b6-7a3ce359bd0e&gt; /usr/lib/system/libsystem_kernel.dylib 0x18cf91000 - 0x18cf9dfff libsystem_pthread.dylib (*) &lt;63c4eef9-69a5-38b1-996e-8d31b66a051d&gt; /usr/lib/system/libsystem_pthread.dylib 0x104150000 - 0x1041d7fff libomp.dylib (*) &lt;f53b1e01-af16-30fc-8690-f7b131eb6ce5&gt; /Users/USER/*/libomp.dylib 0x0 - 0xffffffffffffffff ??? (*) &lt;00000000-0000-0000-0000-000000000000&gt; ??? 
External Modification Summary: Calls made by other processes targeting this process: task_for_pid: 0 thread_create: 0 thread_set_state: 0 Calls made by this process: task_for_pid: 0 thread_create: 0 thread_set_state: 0 Calls made by all processes on this machine: task_for_pid: 0 thread_create: 0 thread_set_state: 0 VM Region Summary: ReadOnly portion of Libraries: Total=824.3M resident=0K(0%) swapped_out_or_unallocated=824.3M(100%) Writable regions: Total=2.1G written=0K(0%) resident=0K(0%) swapped_out=0K(0%) unallocated=2.1G(100%) VIRTUAL REGION REGION TYPE SIZE COUNT (non-coalesced) =========== ======= ======= Accelerate framework 128K 1 Activity Tracing 256K 1 CG image 272K 8 ColorSync 576K 25 CoreAnimation 13.6M 7 CoreGraphics 16K 1 CoreUI image data 512K 5 Foundation 16K 1 Kernel Alloc Once 32K 1 MALLOC 312.2M 58 MALLOC guard page 192K 9 MALLOC_MEDIUM (reserved) 960.0M 8 reserved VM address space (unallocated) MALLOC_NANO (reserved) 384.0M 1 reserved VM address space (unallocated) STACK GUARD 208K 13 Stack 89.7M 13 VM_ALLOCATE 137.8M 173 VM_ALLOCATE (reserved) 256.0M 2 reserved VM address space (unallocated) __AUTH 1804K 151 __AUTH_CONST 9.8M 293 __CTF 756 1 __DATA 11.5M 371 __DATA_CONST 11.5M 380 __DATA_DIRTY 593K 100 __FONT_DATA 4K 1 __LINKEDIT 583.0M 90 __OBJC_CONST 1263K 128 __OBJC_RO 83.0M 1 __OBJC_RW 3168K 1 __TEXT 241.3M 394 __UNICODE 592K 1 dyld private memory 1024K 1 mapped file 67.7M 15 shared memory 848K 14 =========== ======= ======= TOTAL 3.1G 2269 TOTAL, minus reserved VM space 1.5G 2269 </code></pre> <p>I don't remember I have installed any software or changed any path since the last time the same script was launched fine. I've looked around and some similar issues have to do with a $pythonpath (not defined in my case). It strikes me that the full report says:</p> <pre><code>Version: ??? </code></pre> <p>What can I do?</p>
<python><segmentation-fault><python-3.9><macos-monterey>
2023-06-27 11:32:12
1
2,166
Py-ser
76,564,355
2,400,267
Module name binding by relative import in __init__.py
<p>I wrote a package containing two submodules:</p> <pre><code>pkg/__init__.py pkg/foo.py pkg/bar.py </code></pre> <p>I put the following code in <code>__init__.py</code> and also in <code>bar.py</code>.</p> <pre class="lang-py prettyprint-override"><code>from . import foo as f foo print(&quot;Hello!&quot;) </code></pre> <p>While importing <code>pkg</code> succeeds, <code>pkg.bar</code> doesn't:</p> <pre><code>&gt;&gt;&gt; import pkg Hello! &gt;&gt;&gt; import pkg.bar Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;C:\Users\fumito\Documents\pkg\bar.py&quot;, line 2, in &lt;module&gt; foo NameError: name 'foo' is not defined </code></pre> <p>Why is <code>foo</code> defined in the <code>pkg</code> namespace but not in <code>pkg.bar</code>?</p>
<python>
2023-06-27 11:23:31
2
334
fumito
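A runnable sketch of the behaviour asked about above (package rebuilt in a temporary directory; file contents trimmed to the essentials). `from . import foo as f` imports `pkg.foo` and, as a side effect, the import system also sets the attribute `foo` on the `pkg` module object. Because `__init__.py` executes with `pkg`'s own namespace as its globals, the bare name `foo` resolves there; `bar.py` has its own separate globals, so the same line raises `NameError`:

```python
import os
import subprocess
import sys
import tempfile

# Recreate the package on disk and run the failing import in a child interpreter.
with tempfile.TemporaryDirectory() as root:
    pkg_dir = os.path.join(root, "pkg")
    os.makedirs(pkg_dir)
    with open(os.path.join(pkg_dir, "foo.py"), "w") as fh:
        fh.write("")
    # __init__.py runs *as* the pkg module: when the import system sets
    # pkg.foo, that attribute lands in __init__'s globals too.
    with open(os.path.join(pkg_dir, "__init__.py"), "w") as fh:
        fh.write("from . import foo as f\nfoo\n")
    # bar.py has its own globals; foo was set on pkg, not on pkg.bar.
    with open(os.path.join(pkg_dir, "bar.py"), "w") as fh:
        fh.write("from . import foo as f\n"
                 "try:\n    foo\nexcept NameError:\n    print('NameError in bar')\n")
    proc = subprocess.run([sys.executable, "-c", "import pkg.bar"],
                          cwd=root, capture_output=True, text=True)

print(proc.stdout.strip())  # NameError in bar
```

The fix for `bar.py` is simply to use the name it actually bound (`f`), or `import pkg.foo` / `from . import foo` without the rename.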
76,564,144
7,791,963
Why is my Flask app running on port 5000 even though I specified another containerPort in deployment.yaml file?
<p>I have a python app</p> <pre><code>from flask import Flask, jsonify app = Flask(__name__) @app.route(&quot;/&quot;) def index(): return jsonify({&quot;hey&quot;:&quot;test6&quot;}) </code></pre> <p>And a deployment.yaml file</p> <pre><code> apiVersion: apps/v1 kind: Deployment metadata: name: mqttdemob namespace: default spec: replicas: 1 selector: matchLabels: app: mqttdemob strategy: {} template: metadata: labels: app: mqttdemob spec: containers: - image: ****/fluxdemo:main-584db7b6-1687862454 # {&quot;$imagepolicy&quot;: &quot;flux-system:mqttdemob&quot;} name: app ports: - containerPort: 6000 </code></pre> <p>And a service.yaml file</p> <pre><code> apiVersion: v1 kind: Service metadata: name: mqttdemob spec: selector: app: mqttdemob ports: - protocol: TCP port: 6000 targetPort: 6000 nodePort: 30400 externalIPs: - 1.2.4.122 type: NodePort </code></pre> <p>When I deploy using these files I would want the Flask app to run on port 6000 which then would be forwarded to 30400.</p> <p>But when I run</p> <p><code>kubectl exec &lt;pod name&gt; -- netstat -tulpn</code></p> <p>It outputs</p> <pre><code> Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 1/python </code></pre> <p>Such that it is not using port 6000 but port 5000.</p> <p>What am I doing wrong and how can I make sure that Flask will use port 6000 instead of 5000?</p>
<python><kubernetes><flask>
2023-06-27 10:55:07
1
697
Kspr
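For what it's worth: `containerPort` in a Deployment is purely informational; Kubernetes never tells the process which port to bind. Flask's development server listens on 5000 unless the code says otherwise (`app.run(host="0.0.0.0", port=6000)`) or the CLI does (`flask run --port 6000`, or the `FLASK_RUN_PORT` environment variable that `flask run` consults). A minimal, Flask-free sketch of wiring the port through an env var set in the Deployment (the `env:` entry is an assumption, not in the original manifest):

```python
import os

def resolve_port(default=5000):
    # Mirrors Flask's behaviour: bind to `default` unless told otherwise.
    # FLASK_RUN_PORT is the variable the `flask run` CLI reads.
    return int(os.environ.get("FLASK_RUN_PORT", default))

# In deployment.yaml this value would come from an `env:` entry on the
# container, keeping it in lockstep with containerPort.
os.environ["FLASK_RUN_PORT"] = "6000"
port = resolve_port()
print(port)  # 6000
```

In the app itself this becomes `app.run(host="0.0.0.0", port=port)` — the `0.0.0.0` host is also needed so the server is reachable from outside the pod.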
76,563,834
1,845,672
How to add data to a tkinter canvas item?
<p>I managed to create an infrared picture using a canvas filled with little rectangle items in a 32 x 24 array. Each rectangle has a color to visually create a heat map. The colors give some impression of warm and cold spots.</p> <p>Now I want to see the exact temperature of the pixel I am hovering over with the mouse. Binding mouse events is not that hard. However, the mouse event only gives me (x,y) screen coordinates.</p> <p>With a normal TkInter widget w, I can attach custom data like this:</p> <pre><code>w.temperature = 20.3 </code></pre> <p>However, with a TkInter Canvas item, such as obtained with <code>canvas.create_rectangle</code>, this is not possible, as the item is just an id, an integer. I can add so-called 'tags' to such items, but this does not seem to be useful in the event callback.</p> <p>Using the (x,y) coordinates to lookup the temperatures is hard, partly because first I have to map (x,y) to the pixel blocks, each of which is 20 x 20 pixels.</p> <p>Is there really no pythonic way to attach some custom data to each Canvas Item, and retrieve that in the event callback?</p> <p>Or, now that I am writing this question, is there perhaps some funky syntax to bind the callback to each canvas rectangle item with an extra argument? 
With javascript react jsx I have seen such things, which was not simple but in a way still elegant, something along the lines of:</p> <pre><code>canvas.create_rectangle......bind(lambda event: mycallback(event, 20.3)) </code></pre> <p>I have very little experience with lambda kind of stuff in Python, and am thinking of it just right now, so I have not yet tried it.</p> <hr /> <p>update:</p> <p>The lambda idea is nice, but does not work for me, as the <code>bind</code> is called on the Canvas, not on the rectangle items.</p> <p>There seems no other way than to process the x,y and get the pixel temperature from the original input data :-(</p> <p>However, getting the pixel nr from the pixel coordinates is simple:</p> <pre><code>x = int(event.x / blocksize) </code></pre> <p>and storing the temperatures in a 2-dimensional dictionary is also simple, e.g.:</p> <pre><code>dict_t[x, y] = 20.3 </code></pre> <p>Problem solved, no stupid tag needed</p>
<python><tkinter><canvas><lambda><tags>
2023-06-27 10:13:12
0
5,302
Roland Kwee
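A follow-up note on the update above: `Canvas.tag_bind` (unlike plain `bind`) does accept an individual item id or tag, so the lambda-with-default-argument idea works per rectangle, e.g. `canvas.tag_bind(rect, '<Enter>', lambda e, t=20.3: show(t))`. Since tkinter needs a display and cannot run headlessly here, below is a display-free sketch of the other workable approach — a plain dict keyed by the integer id that `create_rectangle` returns (the stub stands in for the real canvas call):

```python
from itertools import count

item_data = {}   # canvas item id -> custom data (e.g. temperature)
_ids = count(1)

def create_rectangle_stub():
    # Stand-in for canvas.create_rectangle(...), which returns an int id.
    return next(_ids)

def on_hover(item_id):
    # In a real handler the id comes from canvas.find_withtag("current").
    return item_data[item_id]

rect = create_rectangle_stub()
item_data[rect] = 20.3   # "attach" the temperature to the canvas item
print(on_hover(rect))    # 20.3
```

Either route avoids mapping screen coordinates back to pixel blocks by hand.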
76,563,741
6,251,742
Can we use Structural Pattern Matching for type narrowing?
<h1>Can we use Structural Pattern Matching for type narrowing</h1> <p>Using <a href="https://peps.python.org/pep-0622/#sequence-patterns" rel="nofollow noreferrer">PEP 622 โ€“ Structural Pattern Matching</a>, I want to exclude the possibility of <code>egg</code> to be sent to <code>multiple</code> as a <code>Mapping</code>, i.e. perform type narrowing using Structural Pattern Matching.</p> <p>From documentation:</p> <blockquote> <p>The _ wildcard can be starred to match sequences of varying lengths. For example:</p> <ul> <li>[*_] matches a sequence of any length.</li> </ul> </blockquote> <pre class="lang-py prettyprint-override"><code> # for egg as a typing.Sequence[typing.Mapping[str, typing.Any]] | typing.Mapping[str, typing.Any] match egg: case [*_]: print(multiple(egg)) case _: print(single(egg)) </code></pre> <p>How I understand SPM, for <code>case [*_]</code>, <code>egg</code> should be considered as a Sequence, and in <code>case _</code>, any other type but a Sequence.</p> <p>But, when I run <code>mypy</code> on the my project, I have this error:</p> <pre><code>file.py:19: error: Argument 1 to &quot;multiple&quot; has incompatible type &quot;Union[Sequence[Mapping[str, Any]], Mapping[str, Any]]&quot;; expected &quot;Sequence[Mapping[str, Any]]&quot; [arg-type] file.py:21: error: Argument 1 to &quot;single&quot; has incompatible type &quot;Union[Sequence[Mapping[str, Any]], Mapping[str, Any]]&quot;; expected &quot;Mapping[str, Any]&quot; [arg-type] Found 2 errors in 1 file (checked 1 source file) </code></pre> <p>You will find the minimal reproducible example used to generate the <code>mypy</code> error:</p> <pre class="lang-py prettyprint-override"><code>import typing SingleExecuteParams = typing.Mapping[str, typing.Any] MultiExecuteParams = typing.Sequence[SingleExecuteParams] AnyExecuteParams = MultiExecuteParams | SingleExecuteParams def multiple(spam: MultiExecuteParams) -&gt; int: return 1 def single(spam: SingleExecuteParams) -&gt; int: return 1 def 
main(egg: AnyExecuteParams) -&gt; None: match egg: case [*_]: print(multiple(egg)) case _: print(single(egg)) </code></pre> <h1>I tried:</h1> <h2><code>case {}:</code></h2> <p>Using <code>case {}:</code> to match a Mapping, not using <code>{**_}</code>:</p> <blockquote> <p>The subject must be an instance of collections.abc.Mapping. Extra keys in the subject are ignored even if **rest is not present. This is different from sequence pattern, where extra items will cause a match to fail. But mappings are actually different from sequences: they have natural structural sub-typing behavior, i.e., passing a dictionary with extra keys somewhere will likely just work.</p> <p>For this reason, **_ is invalid in mapping patterns; it would always be a no-op that could be removed without consequence.</p> </blockquote> <p>This has no effect.</p> <h2><code>case ... if ...</code>:</h2> <p>Using the <code>case ... if ...</code> syntax do narrow types, but in this case but what's the point as type narrowing should occurs with <code>[*_]</code>.</p> <pre class="lang-py prettyprint-override"><code> case _ if isinstance(egg, typing.Sequence): </code></pre> <h1>if statement type narrowing</h1> <p>Using if statements, the following code won't raise any <code>mypy</code> error:</p> <pre class="lang-py prettyprint-override"><code>def main(egg: AnyExecuteParams) -&gt; None: if isinstance(egg, collections.abc.Sequence): print(multiple(egg)) else: print(single(egg)) </code></pre> <p>But from my understanding of how SPM works, it should be equivalent.</p> <p><a href="https://peps.python.org/pep-0622/#match-semantics" rel="nofollow noreferrer">Match semantics</a></p> <blockquote> <p>Essentially this is equivalent to a chain of if ... elif ... else statements.</p> </blockquote>
<python><python-typing><mypy>
2023-06-27 10:01:46
0
4,033
Dorian Turba
76,563,657
2,133,561
Python/Audio Classification - Split audio file based on repetition
<p>I'm creating a audio classification model for animal sounds. It's a hobby project, just to get myself familiarized with the techniques. The thing that I'm struggling with is the duration differences of my audio clips and how I should cut them into similar duration lengths. It is not so much on the how (because I found many examples on how to split the audio files) but my question is about the duration itself.</p> <p>My files have some silences but mainly also a lot of repetitive sounds as the dataset is mainly insects. And the insect, like a cricket will make a similar sound, repetitive sound, for a long time. So my idea was: if there is a way to detect repetitions in audio files, use that to split the audio file. And then see what the duration is of the longest clip, and use that as a duration to cut split all the audio files.</p> <p>But maybe I'm thinking about it all wrong. Does anybody have any suggestions or nice literature for me?</p>
<python><audio><deep-learning><audioclip>
2023-06-27 09:53:01
1
331
user2133561
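One possible starting point for the repetition idea above (a rough sketch assuming numpy; real insect recordings would first need an amplitude envelope, band-pass filtering, and noise handling): estimate the repetition period from the autocorrelation of the signal envelope, then split the file on multiples of that period so each clip covers the same number of call repetitions.

```python
import numpy as np

def repetition_period(signal, sr, min_lag_s=0.05):
    # The autocorrelation of the mean-removed envelope peaks at the period
    # of a repetitive call; small lags are skipped (trivial self-similarity).
    env = np.abs(signal) - np.abs(signal).mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    min_lag = int(min_lag_s * sr)
    return (np.argmax(ac[min_lag:]) + min_lag) / sr

# Synthetic "chirp train": 5 short pulses per second for 2 seconds.
sr = 1000
t = np.arange(0, 2, 1 / sr)
pulse_train = (np.sin(2 * np.pi * 5 * t) > 0.9).astype(float)
period = repetition_period(pulse_train, sr)
print(period)  # close to 0.2 s
```

Slicing the waveform into windows of, say, 4–8 periods then gives clips of comparable content even when raw file durations differ.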
76,563,480
15,452,168
Emoji count and analysis using python pandas
<p>I am working on a sentiment analysis topic and there are a lot of comments with emojis.</p> <p>I would like to know if my code is correct or is there a way to optimize it as well?</p> <p><strong>Code to do smiley count</strong></p> <pre><code>import pandas as pd import regex as re import emoji # Assuming your DataFrame is called 'df' and the column with comments is 'Document' comments = df['Document'] # Initialize an empty dictionary to store smiley counts and types smiley_data = {'Smiley': [], 'Count': [], 'Type': []} # Define a regular expression pattern to match smileys pattern = r'([\U0001F600-\U0001F64F\U0001F300-\U0001F5FF\U0001F680-\U0001F6FF\U0001F1E0-\U0001F1FF])' # Iterate over the comments for comment in comments: # Extract smileys and their types from the comment smileys = re.findall(pattern, comment) # Increment the count and store the smileys and their types for smiley in smileys: if smiley in smiley_data['Smiley']: index = smiley_data['Smiley'].index(smiley) smiley_data['Count'][index] += 1 else: smiley_data['Smiley'].append(smiley) smiley_data['Count'].append(1) smiley_data['Type'].append(emoji.demojize(smiley)) # Create a DataFrame from the smiley data smiley_df = pd.DataFrame(smiley_data) # Sort the DataFrame by count in descending order smiley_df = smiley_df.sort_values(by='Count', ascending=False) # Print the smiley data smiley_df </code></pre> <p>I am majorly not sure if my below code block is getting all the smileys</p> <pre><code># Define a regular expression pattern to match smileys pattern = r'([\U0001F600-\U0001F64F\U0001F300-\U0001F5FF\U0001F680-\U0001F6FF\U0001F1E0-\U0001F1FF])' </code></pre> <p>would like to know what can I do with this analysis. something else on top of it - some charts maybe?</p> <p>I am also sharing a test dataset that will generate similar smiley counts as those available in my real data. Please note that the test dataset only has known smileys if there is something else. 
it won't be there like in a real dataset.</p> <p><strong>Test Dataset</strong></p> <pre><code>import random import pandas as pd smileys = ['👍', '👌', '😍', '🏻', '😊', '🙂', '👎', '😃', '🏼', '💩'] # Additional smileys to complete the required count additional_smileys = ['😄', '😎', '🤩', '😘', '🤗', '😆', '😉', '😋', '😇', '🥳', '🙌', '🎉', '🔥', '🥰', '🤪', '😜', '🤓', '😚', '🤭', '🤫', '😌', '🥱', '🥶', '🤮', '🤡', '😑', '😴', '🙄', '😮', '🤥', '😢', '🤐', '🙈', '🙊', '👽', '🤖', '🦄', '🐼', '🐵', '🦁', '🐸', '🦉'] # Combine the required smileys and additional smileys all_smileys = smileys + additional_smileys # Set a random seed for reproducibility random.seed(42) # Generate a single review def generate_review(with_smiley=False): review = &quot;This movie&quot; if with_smiley: review += &quot; &quot; + random.choice(all_smileys) review += &quot; is &quot; review += random.choice([&quot;amazing&quot;, &quot;excellent&quot;, &quot;fantastic&quot;, &quot;brilliant&quot;, &quot;great&quot;, &quot;good&quot;, &quot;okay&quot;, &quot;average&quot;, &quot;mediocre&quot;, &quot;disappointing&quot;, &quot;terrible&quot;, &quot;awful&quot;, &quot;horrible&quot;]) review += random.choice([&quot;!&quot;, &quot;!!&quot;, &quot;!!!&quot;, &quot;.&quot;, &quot;..&quot;, &quot;...&quot;]) + &quot; &quot; review += random.choice([&quot;Highly recommended&quot;, &quot;Definitely worth watching&quot;, &quot;A must-see&quot;, &quot;I loved it&quot;, &quot;Not worth your time&quot;, &quot;Skip it&quot;]) + random.choice([&quot;!&quot;, &quot;!!&quot;, &quot;!!!&quot;]) return review # Generate the random dataset def generate_dataset(): dataset = [] review_count = 5000 # Generate reviews with top smileys for smiley, count, _ in top_smileys: while count &gt; 0: review = generate_review(with_smiley=True) if smiley in review: dataset.append(review) count -= 1 # Generate reviews with
additional smileys additional_smileys_count = len(additional_smileys) additional_smileys_per_review = review_count - len(dataset) additional_smileys_per_review = min(additional_smileys_per_review, additional_smileys_count) for _ in range(additional_smileys_per_review): review = generate_review(with_smiley=True) dataset.append(review) # Generate reviews without smileys while len(dataset) &lt; review_count: review = generate_review() dataset.append(review) # Shuffle the dataset random.shuffle(dataset) return dataset # List of top smileys and their counts top_smileys = [ ('👍', 331, ':thumbs_up:'), ('👌', 50, ':OK_hand:'), ('😍', 41, ':smiling_face_with_heart-eyes:'), ('🏻', 38, ':light_skin_tone:'), ('😊', 35, ':smiling_face_with_smiling_eyes:'), ('🙂', 14, ':slightly_smiling_face:'), ('👎', 12, ':thumbs_down:'), ('😃', 12, ':grinning_face_with_big_eyes:'), ('🏼', 10, ':medium-light_skin_tone:'), ('💩', 10, ':pile_of_poo:') ] # Generate the dataset dataset = generate_dataset() # Create a data frame with 'Document' column df = pd.DataFrame({'Document': dataset}) # Display the DataFrame df </code></pre> <p>Thank you in advance!</p>
<python><pandas><emoji><sentiment-analysis><emoticons>
2023-06-27 09:30:34
2
570
sdave
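On the two questions in the post above: the loop's manual list bookkeeping can be replaced by a `collections.Counter`, and the character class can be widened — the original pattern misses, for example, the Supplemental Symbols block (U+1F900 and up, where 🤗-style emoji live) and classic symbols such as ❤ (U+2764). A sketch (ranges are indicative, not exhaustive; a library like `emoji` covers the full set):

```python
import re
from collections import Counter

import pandas as pd

df = pd.DataFrame({"Document": ["Nice 👍👍", "Love it 😍❤", "meh 💩"]})

# Wider (still not exhaustive) emoji character class.
pattern = re.compile(
    "[\U0001F300-\U0001FAFF"   # pictographs, emoticons, transport, supplemental
    "\U00002600-\U000027BF"    # misc symbols & dingbats, e.g. ❤ (U+2764)
    "\U0001F1E6-\U0001F1FF]"   # regional indicators (flags)
)

# One pass over all comments; Counter handles the "seen before" logic.
counts = Counter(ch for text in df["Document"] for ch in pattern.findall(text))
smiley_df = pd.DataFrame(sorted(counts.items(), key=lambda kv: -kv[1]),
                         columns=["Smiley", "Count"])
print(smiley_df)
```

For charts, `smiley_df.head(10).plot.bar(x="Smiley", y="Count")` is a natural next step; note that skin-tone modifiers (🏻, 🏼) are really parts of multi-codepoint emoji, so counting them separately may or may not be what you want.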
76,563,319
1,206,998
How to execute maven test in a (unix/git) bash in intellij on windows?
<p>I have this relatively strange maven project that I work on under intellij, where most submodules are java, but one module contains shell and python script (maven packages those with the maven-assembly-plugin). I manage to configure maven to execute python unitest when executing <code>mvn test</code> and it works when executing it on git-bash. But it doesn't work in intellij when clicking on lifecycle&gt;test in the maven tool window.</p> <p>The problem seems to be that it executes it in a powershell. We develop under windows, but the project is deployed on linux, so the shell and python scripts are expected to work on linux. I found configuration in intellij to use git-bash in the terminal tool of intellij (<a href="https://stackoverflow.com/a/29347211/1206998">for example here</a>), but nothing to use it (or another linux-like shell) to execute maven test.</p> <p>=&gt; Is there any such configuration?</p> <p>Note: I am using intellij community edition 2021.3.2. I could probably get a new CE or an ultimate edition if that solves my issue (but I prefere to avoid changing/updating if not necessary)</p>
<python><windows><maven><intellij-idea><git-bash>
2023-06-27 09:09:25
1
15,829
Juh_
76,563,290
1,021,819
How can I detect a list of lists, of tuples or of non-string sequences in general?
<p>I have a <code>list</code>, and I would like to boolean test whether the <code>list</code> contains any of: <code>list</code>s or <code>tuple</code>s or non-string <code>sequence</code>s or <code>list</code>-like objects.</p> <p>I would like the test to return <code>False</code> if the <code>list</code> is of <code>str</code>s or any other unspecified iterable.</p> <p>How can I evaluate this efficiently (lazily) in python?</p> <p>Should I, for example, use <code>isinstance(x, collections.Sequence)</code> together with a list comprehension condition?</p> <p>Or could I collapse the elements of the <code>list</code> and test for residuals?</p> <p>Here's how it goes:</p> <p>{<code>a</code>, <code>b</code>, <code>c</code>, <code>d</code>, <code>e</code>, <code>f</code>} are {<code>str</code>s, <code>int</code>s, <code>UUID</code>s, ...}</p> <pre class="lang-py prettyprint-override"><code>x1 = [a, b, c, d, e, f] x2 = [(a, b),(c, d),(e, f)] x3 = [[a, b],[c, d],[e, f]] result = dict(x1=False, x2=True, x3=True) def func(y): ... return z for x in [x1, x2, x3]: assert func(x) == result[x], &quot;Function needs an answer!&quot; </code></pre> <p>What is a good way to populate <code>func</code>?</p> <p>PS: Here is an example of a relevant list-like object (type <code>OmegaConf.ListConfig</code>):</p> <p><a href="https://github.com/omry/omegaconf/blob/a68b149b8e1a9e9a0cabc83e8691df8c6620909a/omegaconf/listconfig.py#L42C1-L43C1" rel="nofollow noreferrer">https://github.com/omry/omegaconf/blob/a68b149b8e1a9e9a0cabc83e8691df8c6620909a/omegaconf/listconfig.py#L42C1-L43C1</a></p>
<python><list><tuples><sequence>
2023-06-27 09:05:46
2
8,527
jtlz2
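One way to populate `func` for the question above — lazy via `any` over a generator, excluding `str`/`bytes` explicitly. Note it must be `collections.abc.Sequence`: the `collections.Sequence` alias mentioned in the question was removed in Python 3.10. Third-party list-likes count as long as they subclass or register with the `Sequence` ABC, which `ListConfig` appears to do via `MutableSequence`:

```python
from collections.abc import Sequence

def func(y):
    # True as soon as one element is a non-string sequence;
    # the generator keeps the scan lazy.
    return any(
        isinstance(el, Sequence) and not isinstance(el, (str, bytes))
        for el in y
    )

x1 = ["a", 1, "c"]
x2 = [("a", "b"), ("c", "d"), ("e", "f")]
x3 = [["a", "b"], ["c", "d"], ["e", "f"]]
print([func(x) for x in (x1, x2, x3)])  # [False, True, True]
```

Arbitrary iterables such as generators or sets are not `Sequence`s, so they correctly yield `False` here.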
76,563,207
9,488,023
Creating a new column in a Pandas DataFrame based on the previous quarter and the same ID in another DataFrame
<p>What I have is two very large datasets that I want to combine, but before I do, I want to make sure that the same columns with correct values are found in both. One of them is missing a column titled 'file', which should be based on values found in this column in the other dataframe and values found in a list. My code looks something like this:</p> <pre><code>import pandas as pd quarters = ['2021q1', '2021q2', '2021q3', '2021q4', '2022q1', '2022q2', '2022q3', '2022q4', '2023q1', '2023q2'] df_test = pd.DataFrame(data=None, columns=['file', 'quarter', 'ident']) df_test.file = ['file_1', 'file_1_v2', 'file_2_old', 'file_2_new', 'file_3'] df_test.quarter = ['2022q4', '2023q2', '2022q2', '2022q3', '2023q1'] df_test.ident = [1, 1, 2, 2, 3] df_test2 = pd.DataFrame(data=None, columns = ['quarter', 'ident']) df_test2.quarter = ['2023q1', '2022q4', '2023q2'] df_test2.ident = [1, 2, 3] </code></pre> <p>In the second dataframe df_test2, I want to insert a column 'file' with values defined by the 'file' values in df_test for the quarter before the one shown in df_test2 for the same id-number 'ident'. So, for example, the 'ident' = 1 has quarter '2022q4' and '2023q2' in df_test and '2023q1' in df_test2. This means that I want the 'file' column to read 'file_1' in df_test2 since this was the file name for the previous quarter, and not 'file_1_v2'. The end result should be a column in df_test2 that reads:</p> <pre><code>['file_1', 'file_2_new', 'file_3'] </code></pre> <p>My idea is to look for the same id-number in both dataframes, compare the 'quarter' value in df_test2 with the previous quarter value in df_test and set the file name to be the same, but I'm not sure how to do this. Any help is really appreciated, thanks!</p>
<python><pandas><dataframe><compare>
2023-06-27 08:55:39
1
423
Marcus K.
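One way to express "latest file from a strictly earlier quarter, per ident" is `pandas.merge_asof` over each quarter's position in the `quarters` list. A sketch with the example data (the frames are built directly from dicts, since assigning length-5 lists to an empty DataFrame raises a length error):

```python
import pandas as pd

quarters = ['2021q1', '2021q2', '2021q3', '2021q4', '2022q1',
            '2022q2', '2022q3', '2022q4', '2023q1', '2023q2']
order = {q: i for i, q in enumerate(quarters)}

df_test = pd.DataFrame({
    'file': ['file_1', 'file_1_v2', 'file_2_old', 'file_2_new', 'file_3'],
    'quarter': ['2022q4', '2023q2', '2022q2', '2022q3', '2023q1'],
    'ident': [1, 1, 2, 2, 3],
})
df_test2 = pd.DataFrame({'quarter': ['2023q1', '2022q4', '2023q2'],
                         'ident': [1, 2, 3]})

# Subtracting 1 from the left key makes the backward match *strictly*
# earlier; merge_asof requires both sides sorted on the key.
left = (df_test2.assign(q=df_test2['quarter'].map(order) - 1,
                        row=range(len(df_test2)))
                .sort_values('q'))
right = df_test.assign(q=df_test['quarter'].map(order)).sort_values('q')
merged = pd.merge_asof(left, right[['q', 'ident', 'file']],
                       on='q', by='ident', direction='backward')
df_test2['file'] = merged.sort_values('row')['file'].to_list()
print(df_test2['file'].to_list())  # ['file_1', 'file_2_new', 'file_3']
```

On the very large real data this stays vectorized, which a per-row lookup loop would not.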
76,563,194
451,878
selenium chromedriver crash on server (session deleted because of page crash)
<p>I've a small script using selenium running in <em>Docker container</em> :</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys import os try: use_service = True path_driver = os.getcwd() + &quot;/drivers/chromedriver&quot; options = webdriver.ChromeOptions() options.add_argument(&quot;--headless&quot;) options.add_argument(&quot;--disable-gpu&quot;) options.add_argument(&quot;--no-sandbox&quot;) if use_service: print(&quot;---&gt; use Service()&quot;) from selenium.webdriver.chrome.service import Service service = Service(path_driver) driver = webdriver.Chrome(service=service, options=options) else: driver = webdriver.Chrome(path_driver,options=options) driver.get(my_url) driver.find_element(By.ID, &quot;login&quot;).send_keys(&quot;bar&quot;) driver.find_element(By.ID, &quot;pass&quot;).send_keys(&quot;foo&quot;) driver.find_element(By.NAME, &quot;OK&quot;).send_keys(Keys.RETURN) urls = { &quot;04&quot;: &quot;https://xxx.xx/xxx/xxx&quot;,&quot;05&quot;: &quot;https://xxx.xx/xxx/xxx&quot;,} for dept, url in urls.items(): driver.get(url) print(len(driver.page_source)) driver.close() driver.quit() except Exception as e: print(&quot;ERROR &quot;, str(e)) </code></pre> <p>On the server, it's working, but some times in the container, it crash afte on this line &quot;driver.find_element(By.NAME, &quot;OK&quot;).send_keys(Keys.RETURN)&quot; :</p> <pre><code>---&gt; use Service() ERROR Message: unknown error: session deleted because of page crash from unknown error: cannot determine loading status from tab crashed (Session info: headless chrome=111.0.5563.64) Stacktrace: #0 0x556474bfed93 &lt;unknown&gt; #1 0x5564749cd15d &lt;unknown&gt; #2 0x5564749b908c &lt;unknown&gt; #3 0x5564749b8531 &lt;unknown&gt; #4 0x5564749b75bb &lt;unknown&gt; #5 0x5564749b7404 &lt;unknown&gt; #6 0x5564749b5ee2 &lt;unknown&gt; #7 0x5564749b6682 &lt;unknown&gt; #8 0x5564749c3b4f &lt;unknown&gt; #9 
0x5564749c47a2 &lt;unknown&gt; #10 0x5564749d4ba0 &lt;unknown&gt; #11 0x5564749d8e40 &lt;unknown&gt; #12 0x5564749b6b43 &lt;unknown&gt; #13 0x5564749d47dc &lt;unknown&gt; #14 0x556474a4583a &lt;unknown&gt; #15 0x556474a2d353 &lt;unknown&gt; #16 0x5564749fce40 &lt;unknown&gt; #17 0x5564749fe038 &lt;unknown&gt; #18 0x556474c528be &lt;unknown&gt; #19 0x556474c568f0 &lt;unknown&gt; #20 0x556474c36f90 &lt;unknown&gt; #21 0x556474c57b7d &lt;unknown&gt; #22 0x556474c28578 &lt;unknown&gt; #23 0x556474c7c348 &lt;unknown&gt; #24 0x556474c7c4d6 &lt;unknown&gt; #25 0x556474c96341 &lt;unknown&gt; #26 0x7f92ba72ffa3 start_thread </code></pre> <p>I've try on the same server not using docker, simply install selenium, and it's working.</p> <p>Informations :</p> <p>Docker : Linux 40b5ec9c18f9 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 11:43:13 UTC 2023 x86_64 GNU/Linux</p> <p>System : Linux valabre-prod23 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 11:43:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux</p> <p>$ google-chrome --version Google Chrome 111.0.5563.64</p> <p>$ chromedriver --version ChromeDriver 110.0.5481.77 (65ed616c6e8ee3fe0ad64fe83796c020644d42af-refs/branch-heads/5481@{#839})</p> <p>Can you help me please ? Thanks</p> <p>F.</p>
<python><docker><selenium-webdriver>
2023-06-27 08:53:58
1
1,481
James
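Two things stand out in the report above, for what it's worth: "session deleted because of page crash" inside Docker is very often Chrome exhausting the container's default 64 MB `/dev/shm` (fix: run the container with `--shm-size=2g`, or pass `--disable-dev-shm-usage` to Chrome), and the listed versions are mismatched — Chrome 111 with ChromeDriver 110, whereas the major versions must match. A Docker-free sketch of the option set (flag list only; no browser is started here):

```python
def docker_chrome_args(headless=True):
    # Flags commonly needed when running Chrome inside a container.
    args = [
        "--no-sandbox",
        "--disable-gpu",
        "--disable-dev-shm-usage",  # write shared memory to /tmp, not /dev/shm
    ]
    if headless:
        args.append("--headless")
    return args

args = docker_chrome_args()
print(args)
```

In the original script each entry would go through `options.add_argument(...)` before constructing the driver.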
76,562,763
13,242,312
_pickle.PicklingError: Can't pickle <function foo at 0x7f13530775b0>: attribute lookup foo on __main__ failed
<p>I have a wierd problem with multiprocessing when trying to implement a communication api, when I try to store a function in a Manager.dict() I get the error that's in the title, after some research I found that you can only pickle top-level functions (with no nested functions), but the function I'm trying to pickle is a top-level function so I don't really understand what is happening. I came up with this minimal reproductible example and I managed to get the same error:</p> <pre class="lang-py prettyprint-override"><code>import multiprocessing as mp from typing import Callable, Any from uuid import uuid4 from time import sleep def defaultInit(self): pass def defaultLoop(self): pass class Event(): def __init__(self, name: str, callback: Callable[..., Any]): super().__init__() self._id = uuid4() self._name = name self._trigger = False self._callback = callback self._args = None def _updateEvents(self, events): for i, event in enumerate(events[self._name]): if event._id == self._id: tmp = events[self._name] tmp[i] = self events[self._name] = tmp break def execIfTriggered(self, owner) -&gt; None: if self._trigger: self._trigger = False self._updateEvents(owner.events) self._callback(owner, *self._args) def trigger(self, owner, *args) -&gt; None: self._args = args self._trigger = True self._updateEvents(owner.events) return self class Target(): def __init__(self, events): self._init = defaultInit self._loop = defaultLoop self._events = events def event(self, callback): name = callback.__name__ return self.on(name, callback) def on(self, eventName, callback): if eventName not in self._events: self._events[eventName] = [Event(eventName, callback)] else: self._events[eventName] = self._events[eventName] + [Event(eventName, callback)] return callback def emit(self, event, *args): for ev in self._events[event]: ev.trigger(self, *args) def init(self, callback): self._init = callback def loop(self, callback): self._loop = callback def processEvents(self): for events in 
self._events.values(): for event in events: event.execIfTriggered(self) def run(target): target._init() while True: target.processEvents() target._loop() if __name__ == '__main__': mgr = mp.Manager() t = Target(mgr.dict()) proc = mp.Process(target=run, args=(t,)) @t.init def fooInit(target): print('fooInit') @t.event def foo(target): print('fooEvent') proc.start() sleep(1) t.emit('foo') sleep(1) proc.join() </code></pre> <p>Full error output:</p> <pre><code>Traceback (most recent call last): File &quot;/home/lsuardi/smc/moscau/Acquisition_python/test.py&quot;, line 88, in &lt;module&gt; def foo(target): File &quot;/home/lsuardi/smc/moscau/Acquisition_python/test.py&quot;, line 47, in event return self.on(name, callback) File &quot;/home/lsuardi/smc/moscau/Acquisition_python/test.py&quot;, line 51, in on self._events[eventName] = [Event(eventName, callback)] File &quot;&lt;string&gt;&quot;, line 2, in __setitem__ File &quot;/home/lsuardi/miniconda3/envs/SMC-lsuardi/lib/python3.10/multiprocessing/managers.py&quot;, line 817, in _callmethod conn.send((self._id, methodname, args, kwds)) File &quot;/home/lsuardi/miniconda3/envs/SMC-lsuardi/lib/python3.10/multiprocessing/connection.py&quot;, line 206, in send self._send_bytes(_ForkingPickler.dumps(obj)) File &quot;/home/lsuardi/miniconda3/envs/SMC-lsuardi/lib/python3.10/multiprocessing/reduction.py&quot;, line 51, in dumps cls(buf, protocol).dump(obj) _pickle.PicklingError: Can't pickle &lt;function foo at 0x7f13530775b0&gt;: attribute lookup foo on __main__ failed </code></pre> <p>A thing I noticed that is even weirder is that when I use the event function in a normal way instead of the decorator way, it outputs a different error message:</p> <pre class="lang-py prettyprint-override"><code># if instead of this @t.event def foo(target): print('fooEvent') # I do this def foo(target): print('fooEvent') t.event(foo) </code></pre> <p>I get this error message in that case:</p> <pre><code>Traceback (most recent call last): 
File &quot;/home/lsuardi/smc/moscau/Acquisition_python/test.py&quot;, line 89, in &lt;module&gt; t.event(foo) File &quot;/home/lsuardi/smc/moscau/Acquisition_python/test.py&quot;, line 47, in event return self.on(name, callback) File &quot;/home/lsuardi/smc/moscau/Acquisition_python/test.py&quot;, line 51, in on self._events[eventName] = [Event(eventName, callback)] File &quot;&lt;string&gt;&quot;, line 2, in __setitem__ File &quot;/home/lsuardi/miniconda3/envs/SMC-lsuardi/lib/python3.10/multiprocessing/managers.py&quot;, line 833, in _callmethod raise convert_to_error(kind, result) multiprocessing.managers.RemoteError: --------------------------------------------------------------------------- Traceback (most recent call last): File &quot;/home/lsuardi/miniconda3/envs/SMC-lsuardi/lib/python3.10/multiprocessing/managers.py&quot;, line 253, in serve_client request = recv() File &quot;/home/lsuardi/miniconda3/envs/SMC-lsuardi/lib/python3.10/multiprocessing/connection.py&quot;, line 251, in recv return _ForkingPickler.loads(buf.getbuffer()) AttributeError: Can't get attribute 'foo' on &lt;module '__main__' from '/home/lsuardi/smc/moscau/Acquisition_python/test.py'&gt; --------------------------------------------------------------------------- </code></pre> <p>Which is roughly similar but not the same. Could someone explain me what is happening and how to prevent that ? The error seems to come from the fact that I'm using a Manager.dict() but I don't know how to share the functions between the parent and child process otherwise.</p>
<python><multiprocessing><pickle>
2023-06-27 07:57:44
1
1,463
Fayeure
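On the two tracebacks above: the first occurs because the decorator runs (and the Manager proxy pickles the callback) before the name `foo` is even bound in `__main__`; the second because the manager's server process cannot look `foo` up in *its* `__main__` when unpickling. A sketch of one workaround — share only picklable data (event names and flags) through the Manager, and keep the callables in each process's own module-level registry (single-process stand-in shown; names are illustrative):

```python
CALLBACKS = {}  # per-process registry: name -> callable, never pickled

def register(name, func):
    CALLBACKS[name] = func

def process_events(shared_events):
    # shared_events holds only strings and booleans -- always picklable.
    for name in list(shared_events):
        if shared_events[name]:
            shared_events[name] = False
            CALLBACKS[name]()

events = {"foo": True}                        # stand-in for mgr.dict()
fired = []
register("foo", lambda: fired.append("fooEvent"))
process_events(events)
print(fired)  # ['fooEvent']
```

The child process would call `register(...)` for the same names after start-up, so no function object ever crosses the process boundary.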
76,562,752
988,279
DateTime is NULL when writing to database
<p>I've a csv file on an ftp, and want to store the timestamp and the number of lines (of this file) to the database. Unfortunately the datetime it is always set to NULL.</p> <pre><code>... class Foo(Base): __tablename__ = 'Foo' lines = Column(&quot;lines&quot;, Integer, primary_key=True) timestamp_of_csv_file = Column(&quot;timestamp_of_csv_file&quot;, DateTime) def __init__(self, lines, timestamp): self.lines = lines self.timestamp = timestamp remote_file = sftp.open(remote_file_name) file_date = sftp.stat(remote_file_name).st_mtime dt_m = datetime.fromtimestamp(file_date) data = remote_file.read() count_lines = data.count(b'\r\n') session.add(Foo(dt_m, count_lines)) session.commit() </code></pre> <p>This is the corresponding entry in the postgres log:</p> <pre><code>postgres@my_db(0)2023-06-27 09:24:11 CEST LOG: statement: INSERT INTO &quot;Foo&quot; (timestamp_of_csv_file, lines) VALUES NULL, 12329) </code></pre> <p>Breakpoint in the &quot;init&quot;:</p> <pre><code>-&gt; self.timestamp = timestamp (Pdb) timestamp datetime.datetime(2023, 6, 26, 20, 35, 14) </code></pre> <p>What am I doing wrong?</p>
<python><postgresql><sqlalchemy>
2023-06-27 07:56:56
1
522
saromba
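Two small slips combine in the snippet above: `__init__` assigns `self.timestamp`, but the mapped attribute is `timestamp_of_csv_file`, so SQLAlchemy never receives a value for that column (hence NULL); and `Foo(dt_m, count_lines)` passes the datetime into the `lines` parameter. A database-free sketch of the corrected constructor (keyword arguments make the swap impossible):

```python
from datetime import datetime

class Foo:
    # Stand-in for the declarative model; the point is that the attribute
    # assigned in __init__ must match the mapped Column attribute name.
    def __init__(self, lines, timestamp_of_csv_file):
        self.lines = lines
        self.timestamp_of_csv_file = timestamp_of_csv_file

dt_m = datetime(2023, 6, 26, 20, 35, 14)
row = Foo(lines=12329, timestamp_of_csv_file=dt_m)
print(row.lines, row.timestamp_of_csv_file)
```

Alternatively, dropping the custom `__init__` entirely and using the keyword-only constructor that SQLAlchemy's declarative base generates (`Foo(lines=..., timestamp_of_csv_file=...)`) avoids both problems at once.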
76,562,575
2,473,382
Pythonic way to know in which parent an attribute was defined
<p>If I have this tree of classes:</p> <pre class="lang-py prettyprint-override"><code>@dataclass class Parent(): p: int = 42 @dataclass class Child(Parent): c: int = 0 obj = Child() </code></pre> <p>From the object <code>obj</code>, is there an easy way to know that <code>p</code> was defined in <code>Parent</code>, and <code>c</code> in <code>Child</code>?</p> <p>I am looping via <code>__mro__</code> from the top, getting the list of attributes there, and going lower in the hierarchy I get the new attributes not yet defined. It works, but I wonder if there is a more pythonic way.</p> <p><strong>Edit: Context</strong></p> <p>It is about autogenerated classes, describing real world electricity networks.</p> <p>Image having these 3 classes, defining a tree:</p> <ul> <li><code>Equipment</code> class defining <code>name</code></li> <li><code>ConductingEquipment</code> defining <code>resistance</code></li> <li><code>Cable</code> defining <code>material</code>.</li> </ul> <p>There will of course be more equipment types and more conducting equipment types. There is one standard exchange format for this, which requires giving the fully qualified attribute name, so <code>Equipment.name</code> even for a <code>Cable</code>.</p> <p>So in this admittedly weird context, this is something I do need to do.</p>
<python><python-dataclasses>
2023-06-27 07:31:29
2
3,081
Guillaume
76,562,560
12,371,863
Multiple Time Series Cross Validation - overlapping train test sets
<p>I have a dataset containing sales data for different products over time. The dataset includes a &quot;Time&quot; column representing the date and a &quot;Product&quot; column specifying the product ID. As multiple products can be sold on the same date, the &quot;Time&quot; column does not have unique values.</p> <p>I am trying to perform cross-validation on this time series data to train an ML model using the expanding window approach. The expanding window approach involves iteratively increasing the size of the training set while keeping the test set as the most recent observations.</p> <p>I implemented a custom cross-validation class ExpandingWindowCV based on sklearn.model_selection.BaseCrossValidator. The class takes parameters for the minimum training set size, shuffle option, and random state. The _iter_test_indices method is responsible for generating the test indices based on the expanding window approach.</p> <p>However, when I applied this custom cross-validation with RandomizedSearchCV for hyperparameter tuning, I noticed that the resulting train and test sets overlapped, which is incorrect for time series data</p> <p>I attempted to debug the issue by printing the train and test dates and indices within the _iter_test_indices method. 
Surprisingly, the printed output showed that the train and test sets were correctly split without any overlap.</p> <pre><code>class ExpandingWindowCV(BaseCrossValidator): def __init__(self, min_train_size=1, shuffle=False, random_state=None): super().__init__() self.min_train_size = min_train_size self.shuffle = shuffle self.random_state = random_state def _iter_test_indices(self, X=None, y=None, groups=None): unique_dates = np.unique(X['Time'].sort_values()) # Assuming 'Time' is the column name n_dates = len(unique_dates) print(f&quot;X \n {X[['Time', 'Product']]}&quot;) if self.shuffle: rng = check_random_state(self.random_state) rng.shuffle(unique_dates) for i in range(self.min_train_size, n_dates): train_dates = unique_dates[:i] test_dates = unique_dates[i:] train_indices = np.where(X['Time'].isin(train_dates))[0] test_indices = np.where(X['Time'].isin(test_dates) &amp; ~X['Time'].isin(train_dates))[0] #X.loc[X.Time.isin(test_dates), :].index print(f&quot;train_dates {train_dates}&quot;) print(f&quot;test_dates {test_dates}&quot;) print(f&quot;train_indices \n {train_indices}&quot;) print(f&quot;test_indices \n {test_indices}&quot;) print(&quot;train set&quot;, X.loc[X.Time.isin(train_dates), ['Time', 'Product']]) print(&quot;test set&quot;, X.loc[X.Time.isin(test_dates),['Time', 'Product']]) yield test_indices def get_n_splits(self, X=None, y=None, groups=None): unique_dates = np.unique(X['Time']) n_dates = len(unique_dates) return n_dates - self.min_train_size </code></pre> <p>To further investigate, I printed the train and test set dates outside of the custom cross-validation loop using the RandomizedSearchCV object. 
However, I observed that the train and test sets still had overlapping dates, which contradicted the earlier prints.</p> <pre><code>tscv = ExpandingWindowCV(min_train_size=5, shuffle=False, random_state=42) search = RandomizedSearchCV(pipeline, search_space, cv=tscv, scoring=custom_scorer, error_score='raise', n_jobs=2, n_iter=1, refit=True, verbose=0) for train, test in search.cv.split(X): print('TRAIN: ', train, ' TEST: ', test) print(f&quot; Train: index= \n{train} \n values= \n{X.loc[X.index.isin(train),['Time', 'Submodel']].sort_values('Time')}&quot;) print(f&quot; Test: index= \n{test} \n values= \n{X.loc[X.index.isin(test),['Time', 'Submodel']].sort_values('Time')}&quot;) print(f&quot; Train min= \n{X.loc[X.index.isin(train), 'Time'].min()}&quot;) print(f&quot; Train max= \n{X.loc[X.index.isin(train), 'Time'].max()}&quot;) print(f&quot; Test min= \n{X.loc[X.index.isin(test), 'Time'].min()}&quot;) print(f&quot; Test max= \n{X.loc[X.index.isin(test), 'Time'].max()}&quot;) </code></pre> <p>I suspect there might be an inconsistency or issue when passing the cross-validation object to RandomizedSearchCV or during the hyperparameter tuning process.</p> <p>I would appreciate any insights or suggestions to resolve this issue and properly perform cross-validation on my time series data.</p>
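One detail worth checking in situations like this: `np.where(...)[0]` yields *positional* indices, so any inspection afterwards must use `iloc`-style positional indexing. The question's debugging code uses `X.index.isin(train)` and `.loc`, which treats the positions as index labels; when the DataFrame's index is not a clean 0..n-1 range, that mismatch alone produces apparent train/test overlap. A stripped-down sketch (my own, not the asker's class) of an expanding-window split over non-unique dates:

```python
import numpy as np

def expanding_window_splits(dates, min_train_size=1):
    """Yield (train_idx, test_idx) POSITIONAL indices over non-unique dates."""
    dates = np.asarray(dates)
    unique_dates = np.unique(dates)          # sorted unique dates
    for i in range(min_train_size, len(unique_dates)):
        train = np.flatnonzero(np.isin(dates, unique_dates[:i]))
        test = np.flatnonzero(np.isin(dates, unique_dates[i:]))
        yield train, test

# Two products sold on the same dates, so dates repeat per row:
dates = np.array([1, 1, 2, 2, 3, 3])
for train, test in expanding_window_splits(dates, min_train_size=1):
    # Inspect with positional indexing (df.iloc[train]), never df.loc[train].
    assert not set(train) & set(test)        # folds never overlap positionally
```

If the folds look clean here but overlap when printed with `.loc`/`.isin(index)`, the label-vs-position confusion is the likely culprit rather than the splitter itself.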
<python><scikit-learn><time-series><cross-validation>
2023-06-27 07:29:56
0
411
devcloud
76,562,451
11,052,072
Multithreading in GCP Compute Engine
<p>I have a Python job that runs on 16 threads. If I execute it on my local machine it works just fine. On a Compute Engine VM instance, however, it shows a strange behaviour: most of the threads complete, but 1-3 of them (depending on the run) just hang at a random moment.</p> <p>Those threads don't share data with each other, so I excluded a deadlock kind of problem or other code bugs.</p> <p>What I noticed is that if I run <code>lscpu</code> on the VM, this is the output:</p> <pre><code>Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Thread(s) per core: 2 Core(s) per socket: 4 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 79 Model name: Intel(R) Xeon(R) CPU @ 2.20GHz Stepping: 0 CPU MHz: 2200.232 BogoMIPS: 4400.46 Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 56320K NUMA node0 CPU(s): 0-7 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities </code></pre> <p>Since the machine is configured with 8 vCPUs (4 cores with 2 threads per core), is it possible that it just doesn't support running a multithreading script with 16 threads? That's quite confusing for me, considering that on a normal machine multithreading in Python is not related to any hardware configuration. Am I on the right track? Do I need to upgrade the instance, or is this unrelated?</p> <p>Thanks in advance for any hint...</p>
<python><multithreading><google-cloud-platform>
2023-06-27 07:16:22
0
553
Liutprand
76,562,382
1,014,217
Inserting data as vectors from SQL Database to Pinecone
<p>I have a profiles table in SQL with around 50 columns, and only 244 rows. I have created a view with only 2 columns, ID and content and in content I concatenated all data from other columns in a format like this: FirstName: John. LastName: Smith. Age: 70, Likes: Gardening, Painting. Dislikes: Soccer.</p> <p>Then I created the following code to index all contents from the view into pinecone, and it works so far. However I noticed something strange.</p> <ol> <li>There are over 2000 vectors and still not finished, the first iterations were really fast, but now each iteration is taking over 18 seconds to finish and it says it will take over 40 minutes to finish upserting. (but for 244 rows only?)</li> </ol> <p>What am I doing wrong? or is it normal?</p> <pre><code> pinecone.init( api_key=PINECONE_API_KEY, # find at app.pinecone.io environment=PINECONE_ENV # next to api key in console ) import streamlit as st st.title('Work in progress') embed = OpenAIEmbeddings(deployment=OPENAI_EMBEDDING_DEPLOYMENT_NAME, model=OPENAI_EMBEDDING_MODEL_NAME, chunk_size=1) cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER='+DATABASE_SERVER+'.database.windows.net;DATABASE='+DATABASE_DB+';UID='+DATABASE_USERNAME+';PWD='+ DATABASE_PASSWORD) query = &quot;SELECT * from views.vwprofiles2;&quot; df = pd.read_sql(query, cnxn) index = pinecone.Index(&quot;default&quot;) batch_limit = 100 texts = [] metadatas = [] text_splitter = RecursiveCharacterTextSplitter( chunk_size=400, chunk_overlap=20, length_function=tiktoken_len, separators=[&quot;\n\n&quot;, &quot;\n&quot;, &quot; &quot;, &quot;&quot;] ) for _, record in stqdm(df.iterrows(), total=len(df)): # First get metadata fields for this record metadata = { 'IdentityId': str(record['IdentityId']) } # Now we create chunks from the record text record_texts = text_splitter.split_text(record['content']) # Create individual metadata dicts for each chunk record_metadatas = [{ &quot;chunk&quot;: j, &quot;text&quot;: text, 
**metadata } for j, text in enumerate(record_texts)] # Append these to the current batches texts.extend(record_texts) metadatas.extend(record_metadatas) # If we have reached the batch_limit, we can add texts if len(texts) &gt;= batch_limit: ids = [str(uuid4()) for _ in range(len(texts))] embeds = embed.embed_documents(texts) index.upsert(vectors=zip(ids, embeds, metadatas)) texts = [] metadatas = [] if len(texts) &gt; 0: ids = [str(uuid4()) for _ in range(len(texts))] embeds = embed.embed_documents(texts) index.upsert(vectors=zip(ids, embeds, metadatas)) </code></pre>
<python><pandas><openai-api><tqdm><pinecone>
2023-06-27 07:07:51
1
34,314
Luis Valencia
76,562,318
8,458,083
How to create a Python executable with nix flake
<p>I have a directory with two files, <em>main.py</em> and <em>flake.nix</em>.</p> <p><em>flake.nix</em>:</p> <pre><code>{ description = &quot;virtual environment with python and streamlit&quot;; inputs.nixpkgs.url = &quot;github:NixOS/nixpkgs/nixos-unstable&quot;; inputs.flake-utils.url = &quot;github:numtide/flake-utils&quot;; outputs = { self, nixpkgs, flake-utils }: flake-utils.lib.eachDefaultSystem (system: let pkgs = nixpkgs.legacyPackages.${system}; python = pkgs.python311; python_packages = python.withPackages(ps: with ps;[ ipython matplotlib pandas ]); myDevTools = [ python_packages ]; in { devShells.default = pkgs.mkShell { buildInputs = myDevTools; }; }); } </code></pre> <p>What should I use in <em>flake.nix</em> in order to create an executable that is equivalent to the following?</p> <pre><code> python main.py </code></pre> <p>I want to create the executable with the command</p> <pre><code> nix build </code></pre> <p>in the shell (it means with the necessary packages installed)</p> <p>I have added apps in my file:</p> <pre><code>{ description = &quot;virtual environment with python and streamlit&quot;; inputs.nixpkgs.url = &quot;github:NixOS/nixpkgs/nixos-unstable&quot;; inputs.flake-utils.url = &quot;github:numtide/flake-utils&quot;; outputs = { self, nixpkgs, flake-utils }: flake-utils.lib.eachDefaultSystem (system: let pkgs = nixpkgs.legacyPackages.${system}; python = pkgs.python311; f = ps: with ps;[ ipython matplotlib pandas ]; pip_python_packages = python.withPackages(f); myDevTools = [ pip_python_packages pkgs.streamlit ]; outputName = builtins.attrNames self.outputs self.outputs; in { devShells.default = pkgs.mkShell { buildInputs = myDevTools; }; packages.default = pkgs.poetry2nix.mkPoetryApplication { projectDir = self; }; apps.default = { program = &quot;${python}/bin/python&quot;; args = [ &quot;main.py&quot; ]; src = &quot;./.&quot;; type = &quot;app&quot;; }; }); } </code></pre> <p>But nix runs only to open the python interpreter.</p> 
<p>Furthermore, I enter this line</p> <pre><code> nix/store/cxsw4x1189ppmsydhwsmssr0x65nygj7-python3-3.11.4/bin/python ./main.py </code></pre> <p>in my shell</p> <p>because ${python} = cxsw4x1189ppmsydhwsmssr0x65nygj7-python3-3.11.4/bin/python</p> <p>And it didn't work:</p> <blockquote> <p>ModuleNotFoundError: No module named 'pandas'</p> </blockquote> <p>I changed my file like this:</p> <pre><code>{ description = &quot;virtual environment with python and streamlit&quot;; inputs.nixpkgs.url = &quot;github:NixOS/nixpkgs/nixos-unstable&quot;; inputs.flake-utils.url = &quot;github:numtide/flake-utils&quot;; outputs = { self, nixpkgs, flake-utils }: flake-utils.lib.eachDefaultSystem (system: let pkgs = nixpkgs.legacyPackages.${system}; python = pkgs.python311; f = ps: with ps;[ ipython matplotlib pandas ]; pip_python_packages = python.withPackages(f); myDevTools = [ pip_python_packages pkgs.streamlit ]; outputName = builtins.attrNames self.outputs self.outputs; in { devShells.default = pkgs.mkShell { buildInputs = myDevTools; }; packages.default = pkgs.poetry2nix.mkPoetryApplication { projectDir = self; }; apps.default = flake-utils.lib.mkApp { program = &quot;${pip_python_packages}/bin/python&quot;; args = [ &quot;${self}/main.py&quot; ]; }; }); } </code></pre> <p>But I have this error now after running <code>nix run</code>:</p> <blockquote> <p>error: โ€ฆ while evaluating the attribute 'pkgs.buildPythonPackage'</p> <pre><code> at /nix/store/s1z7nb9n6r5n0r34fabp6yybwkbr8mjk-source/pkgs/development/interpreters/python/passthrufun.nix:87:5: 86| withPackages = import ./with-packages.nix { inherit buildEnv pythonPackages;}; 87| pkgs = pythonPackages; | ^ 88| interpreter = &quot;${self}/bin/${executable}&quot;; โ€ฆ while calling the 'mapAttrs' builtin at /nix/store/s1z7nb9n6r5n0r34fabp6yybwkbr8mjk-source/pkgs/development/interpreters/python/passthrufun.nix:31:8: 30| value; 31| in lib.mapAttrs func items; | ^ 32| in ensurePythonModules (callPackage (stack trace truncated; 
use '--show-trace' to show the full trace) error: getting status of '/nix/store/c274sjvcr30c80429v9kpn4n2q0ic6n9-source/poetry.lock': No </code></pre> <p>such file or directory</p> </blockquote>
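One common pattern (a sketch only, not tested against this flake — `run-main` is a name I made up) is to make the default package a small wrapper script via `pkgs.writeShellApplication`, so `nix build` and `nix run` both produce an executable that runs `main.py` with the `withPackages` environment on PATH, without involving poetry2nix at all:

```nix
packages.default = pkgs.writeShellApplication {
  name = "run-main";
  runtimeInputs = [ pip_python_packages ];   # the python.withPackages env from above
  text = ''
    python ${self}/main.py "$@"
  '';
};
```

With that in place, `apps.default` can simply point at `"${self.packages.${system}.default}/bin/run-main"`, and the earlier `ModuleNotFoundError` should go away because the wrapped interpreter carries its site-packages.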
<python><nix><nix-flake>
2023-06-27 06:58:30
1
2,017
Pierre-olivier Gendraud
76,561,685
2,625,090
Get result from cancelled asyncio.gather tasks
<p>I need to get some values from an async generator concurrently, and I also need to handle <code>KeyboardInterrupt</code> correctly so that whenever the application is quit I have a chance to write to disk whatever was retrieved at the moment of the exception. I came up with the following code:</p> <pre><code>import asyncio async def some_task(): try: yield 1 yield 2 yield 3 await asyncio.sleep(10000) except asyncio.CancelledError: pass async def run_tasks(): async def _task(gen): return [value async for value in gen] try: result = await asyncio.gather(_task(some_task())) return result except asyncio.CancelledError: # Do something here to get the result from the gathered tasks ... async def amain(): value = await run_tasks() print(&quot;Returned&quot;, value) if __name__ == '__main__': asyncio.run(amain()) </code></pre> <p>I tried using <code>asyncio.shield</code> but <code>asyncio.gather</code> still throws an exception instead of returning the value from <code>_task</code></p>
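A sketch of one workaround (my own pattern, not an official API): instead of trying to pull a result out of a cancelled `gather`, collect values into a list that outlives the task, so partial results survive cancellation.

```python
import asyncio

async def some_task():
    yield 1
    yield 2
    yield 3
    await asyncio.sleep(10000)

async def run_tasks():
    results = []                      # outlives the task, so it survives cancellation
    async def consume(gen):
        async for value in gen:
            results.append(value)
    task = asyncio.create_task(consume(some_task()))
    await asyncio.sleep(0.05)         # stand-in for "run until interrupted"
    task.cancel()
    try:
        await task                    # let the cancellation settle
    except asyncio.CancelledError:
        pass
    return results

print(asyncio.run(run_tasks()))       # [1, 2, 3]
```

The same shape works with several consumers and one shared (or per-consumer) results list; on `KeyboardInterrupt` the lists hold whatever was retrieved up to that point and can be flushed to disk.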
<python><python-asyncio>
2023-06-27 04:45:21
1
9,205
arielnmz
76,561,466
3,654,588
Fast vectorized way of computing a pairwise delta kernel in Python
<p>I am interested in computing a 0-1 distance matrix where it is basically a &quot;Dirac delta&quot; kernel.</p> <p>The requirements would be the following:</p> <pre><code>def delta_kernel(X, Y): &quot;&quot;&quot; X : ndarray of shape (n_samples_x, n_features_x) Y : ndarray of shape (n_samples_y, n_features_y) Returns kernel_arr : ndarray of shape (n_samples_x, n_samples_y) &quot;&quot;&quot; # compute pairwise kernel, where the kernel_arr[i, j] == 1 if # X[i, :] == Y[j, :] </code></pre> <p>I am interested in a way to do this efficiently. Preferably the operation is vectorized, so that this can all be done with numpy very fast. Is this possible?</p> <p>Thanks!</p>
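Assuming `n_features_x == n_features_y` (required for row equality to make sense), broadcasting gives a fully vectorized sketch:

```python
import numpy as np

def delta_kernel(X, Y):
    # Compare every row of X against every row of Y in one shot:
    # (n_x, 1, d) == (1, n_y, d) broadcasts to (n_x, n_y, d); two rows
    # match only if ALL features are equal.
    return (X[:, None, :] == Y[None, :, :]).all(axis=2).astype(int)

X = np.array([[1, 2], [3, 4]])
Y = np.array([[3, 4], [1, 2], [5, 6]])
print(delta_kernel(X, Y))
# [[0 1 0]
#  [1 0 0]]
```

One caveat: the intermediate comparison array has shape `(n_x, n_y, d)`, so for very large inputs it may be worth chunking over rows of `X` to bound memory.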
<python><numpy>
2023-06-27 03:40:01
0
1,302
ajl123
76,561,374
6,703,592
dataframe flatten array value column to columns
<pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'Data': [np.array([[1,1], [2,2], [3,1], [10,1]]), np.array([[11,1],[12,1],[13,1],[20,1]]), np.array([[11,1],[13,1],[14,1],[20,1]])], 'label' : ['a', 'a', 'b']}) print (df) </code></pre> <p>I want to flatten the array-valued column 'Data' into 2*4 = 8 columns, say 'col_1',...,'col_8'. The arrays in every cell of 'Data' are guaranteed to share the same shape, but please treat that shape as dynamic and do not hard-code the specific number.</p>
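One sketch: since every cell holds an array of the same shape, `np.stack` turns the column into a single 3-D block that can be reshaped row-wise and wrapped back into columns (the `col_i` names here follow the question's convention).

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Data': [np.array([[1, 1], [2, 2], [3, 1], [10, 1]]),
                            np.array([[11, 1], [12, 1], [13, 1], [20, 1]]),
                            np.array([[11, 1], [13, 1], [14, 1], [20, 1]])],
                   'label': ['a', 'a', 'b']})

# Stack the per-row arrays into shape (n_rows, 4, 2), then flatten each row.
flat = np.stack(df['Data'].to_numpy()).reshape(len(df), -1)
cols = [f'col_{i + 1}' for i in range(flat.shape[1])]   # dynamic, not hard-coded
out = df.drop(columns='Data').join(pd.DataFrame(flat, columns=cols, index=df.index))
print(out.shape)   # (3, 9): 'label' plus col_1..col_8
```

`reshape(len(df), -1)` derives the column count from the data, so the same code works for any per-cell array shape as long as it is consistent across rows.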
<python><dataframe>
2023-06-27 03:10:56
2
1,136
user6703592
76,561,337
838,117
Efficiently loading list of parquet files with python pandas
<p>I am trying to load a large number of parquet files in python pandas and noticed a notable performance difference between two different approaches. Specifically</p> <pre><code>pd.read_parquet(&quot;/path/to/directory/&quot;) </code></pre> <p>is more than twice as fast as something like:</p> <pre><code>filelist = glob.glob(&quot;/path/to/directory/*&quot;) pd.concat([pd.read_parquet(i) for i in filelist]) </code></pre> <p>The reasons for wanting to use the 2nd approach include pre-filtering the parquet files to be loaded, or loading from multiple directories (that contain parquet files with the same format, etc.).</p> <p>Any tips / guidance appreciated - basically looking to understand how to make the 2nd approach as performant as the first (and/or understanding what kind of magic might be making the 1st approach faster).</p>
<python><pandas><parquet>
2023-06-27 03:00:12
1
915
djmac
76,561,290
11,277,108
Calculate the mean across rows excluding min and max values?
<p>Given the following dataframe:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>col_a</th> <th>col_b</th> <th>col_c</th> <th>col_d</th> </tr> </thead> <tbody> <tr> <td>-100</td> <td>2</td> <td>4</td> <td>100</td> </tr> <tr> <td>-200</td> <td>4</td> <td>8</td> <td>200</td> </tr> </tbody> </table> </div> <p>I'd like to calculate the mean across rows excluding the minimum and maximum values:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>col_a</th> <th>col_b</th> <th>col_c</th> <th>col_d</th> <th>mean_exc_min_max</th> </tr> </thead> <tbody> <tr> <td>-100</td> <td>2</td> <td>4</td> <td>100</td> <td>3</td> </tr> <tr> <td>-200</td> <td>4</td> <td>8</td> <td>200</td> <td>6</td> </tr> </tbody> </table> </div> <p>I know I can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.mean.html" rel="nofollow noreferrer"><code>mean</code></a> via <code>mean(axis=1)</code> but I can't figure out how to exclude the min and max values. The datasets I'm dealing with are pretty large so ideally I'm looking for a solution that's vectorised.</p> <p>Any ideas?</p> <p>Code to create the dataframe:</p> <pre><code>data = { &quot;col_a&quot;: [-100, -200], &quot;col_b&quot;: [2, 4], &quot;col_c&quot;: [4, 8], &quot;col_d&quot;: [100, 200], } df = pd.DataFrame(data) </code></pre>
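A fully vectorized sketch: subtract the row minimum and maximum from the row sum and divide by the remaining count. This drops one occurrence of each extreme per row, which matches the example table above.

```python
import pandas as pd

data = {"col_a": [-100, -200], "col_b": [2, 4],
        "col_c": [4, 8], "col_d": [100, 200]}
df = pd.DataFrame(data)

n = df.shape[1]
# Row sum minus the row extremes, averaged over the n-2 remaining values.
df["mean_exc_min_max"] = (df.sum(axis=1) - df.min(axis=1) - df.max(axis=1)) / (n - 2)
print(df["mean_exc_min_max"].tolist())   # [3.0, 6.0]
```

Everything here is a column-wise reduction, so it stays fast on large frames; note that if the minimum or maximum value is duplicated within a row, only one copy of each is excluded.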
<python><pandas>
2023-06-27 02:41:35
2
1,121
Jossy
76,561,222
6,703,592
dataframe assign a array value to a cell
<p>I want to assign the value ('Data') of the first row of each sub-group ('label') to its members. But the value is of array (list) type:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'Data': [np.array([1, 2, 3, 10]), np.array([11,12,13,20]), np.array([11,13,14,20])], 'label' : ['a', 'a', 'b']}) df['first_member'] = df.groupby('label')['Data'].transform(lambda x: x.iloc[0]) print (df) </code></pre> <p>I receive an error: <code>ValueError: Length of values (4) does not match length of index (2)</code>. I think the issue comes from assigning an array value.</p>
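`transform` expects one scalar per row, so an array value gets misread as multiple values. A sketch that sidesteps it: build a label-to-first-array mapping from the groupby, then expand it back per row (every row of a group then shares the same array object).

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Data': [np.array([1, 2, 3, 10]),
                            np.array([11, 12, 13, 20]),
                            np.array([11, 13, 14, 20])],
                   'label': ['a', 'a', 'b']})

# First 'Data' value of each label group, kept as whole arrays.
first = {label: group.iloc[0] for label, group in df.groupby('label')['Data']}
df['first_member'] = [first[lab] for lab in df['label']]
print(df['first_member'].iloc[1])
```

Assigning a list of arrays produces an object-dtype column, which is what array-valued cells have to be in pandas anyway.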
<python><dataframe>
2023-06-27 02:16:41
2
1,136
user6703592
76,561,175
8,644,706
Referencing function input variables from other input variables
<p>I want to make a function like the one below:</p> <pre><code>def test_fn(var1: int = 1, var2: int = var1+1): return var1, var2 </code></pre> <p>(the function above simply does not work; it raises a <code>NameError</code>)</p> <p>Is there any smart way of computing the default value of <code>var2</code> from <code>var1</code> without adding or running other lines before this function?</p> <p>Thanks in advance :D</p>
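Default values are evaluated once at definition time, before `var1` exists as a usable name, so a default cannot reference a sibling parameter directly. The idiomatic sketch is a sentinel default resolved inside the body:

```python
def test_fn(var1: int = 1, var2=None):
    # Defaults can't reference sibling parameters, so resolve at call time.
    if var2 is None:
        var2 = var1 + 1
    return var1, var2

print(test_fn())        # (1, 2)
print(test_fn(5))       # (5, 6)
print(test_fn(5, 10))   # (5, 10)
```

If `None` is itself a legitimate value for `var2`, a private module-level sentinel object (`_MISSING = object()`) plays the same role.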
<python>
2023-06-27 02:01:00
0
765
Kevin Choi
76,561,103
14,250,641
creating side-by-side violin plots in seaborn
<p>I am trying to create side-by-side violin plots using Matplotlib and Seaborn, but I'm having trouble getting them to display properly (I kept finding that they would be superimposed). I want to compare the average Phast scores for different element types in my dataset. Currently, my code displays the violin plots as separate subplots.</p> <pre><code># Create a new figure and subplots with 1 row and 2 columns fig, axes = plt.subplots(1, 2, figsize=(12, 6)) # First violin plot sns.violinplot( y=df['avg_phast_score'], x=df['main_category'], ax=axes[0] # Assign the first subplot to the first column ) # Set title and labels for the first subplot axes[0].set_title('element regions') axes[0].set_xlabel('Element Type') axes[0].set_ylabel('Average Phast Score') axes[0].set_ylim([-.1, .2]) # Set the y-axis limits for the first subplot axes[0].set_xticklabels(axes[0].get_xticklabels(), rotation=90) # Rotate x-axis labels # Second violin plot sns.violinplot( y=df['control_score'], x=df['main_category'], ax=axes[1] # Assign the second subplot to the second column ) # Set title and labels for the second subplot axes[1].set_title('control') axes[1].set_xlabel('Element Type') axes[1].set_ylabel('Average Phast Score') axes[1].set_ylim([-.1, .2]) # Set the y-axis limits for the second subplot axes[1].set_xticklabels(axes[1].get_xticklabels(), rotation=90) # Rotate x-axis labels # Adjust the spacing between the subplots plt.tight_layout() # Display the plot plt.show() </code></pre>
<python><matplotlib><plot><seaborn>
2023-06-27 01:30:12
1
514
youtube
76,560,987
6,085,595
Extract all functions called in a python script
<p>I have a package and a script that calls the functions in the package. The package itself is relatively large and serves multiple purposes, and I am trying to extract the functions used in the package that is just enough for running a specific script.</p> <p>So my question is: for a given script, is it possible to extract all functions in the package that are actually used in this script? For example, suppose I have a package with the following structure:</p> <pre><code>mypackage - __init__.py - mymodule.py - func1 - func2 </code></pre> <p>Then I have another script that calls <code>func1</code> from <code>mypackage</code> as well as some other functions such as <code>numpy</code> in the python environment. Is it possible to know that <code>func1</code> is used by this script without modifying code under <code>mypackage</code>, and maybe even copy these functions to a standalone file?</p>
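For the static half of this, the stdlib `ast` module can list every name that appears in call position (a sketch; it will not see dynamic calls made via `getattr`, and it does not by itself tell you which module each name came from):

```python
import ast

def called_names(source):
    """Return the set of function names called anywhere in `source`."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Name):         # func1(...)
                names.add(func.id)
            elif isinstance(func, ast.Attribute):  # np.zeros(...)
                names.add(func.attr)
    return names

script = "from mypackage.mymodule import func1\nimport numpy as np\nfunc1(np.zeros(3))"
print(sorted(called_names(script)))   # ['func1', 'zeros']
```

Cross-referencing those names against the package's import statements (also visible in the AST as `ast.Import` / `ast.ImportFrom` nodes) narrows the set down to functions actually defined in `mypackage`, which can then be copied out.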
<python>
2023-06-27 00:50:45
1
2,577
DiveIntoML
76,560,891
2,137,570
AppleScript - do shell python3 error module not found but installed
<p>So I'm trying to run a Python script, which runs fine in the terminal, but when I run it from AppleScript it gives me a script error.</p> <pre><code>do shell script &quot;cd /Users/me/Desktop/folder1; python testRun.py&quot; </code></pre> <p>Error:</p> <pre><code>Traceback (most recent call last): File &quot;/Users/me/Desktop/folder1/run1.py&quot;, line 2, in &lt;module&gt; import myutils File &quot;/Users/me/Desktop/folder1/run1.py&quot;, line 14, in &lt;module&gt; import requests ModuleNotFoundError: No module named 'requests' </code></pre> <ul> <li>myutils - is my python script</li> <li>requests - is a python module</li> </ul> <p>Running <code>pip3 list</code> shows requests:</p> <pre><code>requests 2.27.1 </code></pre> <p>Not sure what I'm doing wrong. Why is it working in the terminal but not in AppleScript?</p> <p>After adding the path:</p> <p>@simpleApp. Thanks for the direction. Running <code>which python3</code> in the terminal I got:</p> <pre><code>/Library/Frameworks/Python.framework/Versions/3.10/bin/python3. using that path. 
</code></pre> <p>Script editor error:</p> <pre><code> I did this do shell script (&quot;export PATH=\&quot;/Library/Frameworks/Python.framework/Versions/3.10/bin:$PATH\&quot;;cd \&quot;/Users/me/Desktop/folder1/&quot;;testRun.py&quot;) Error output Traceback (most recent call last): File \&quot;/Users/me/Desktop/folder1/testRun.py\&quot;, line 41, in &lt;module&gt; driver = myutils.get_csv_report(dl_path, WEBREPORTS_URL, auth) File \&quot;/Users/me/Desktop/folder1/myutils.py\&quot;, line 199, in get_csv_report driver = webdriver.Firefox(firefox_profile=profile, options=options) File \&quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/selenium/webdriver/firefox/webdriver.py\&quot;, line 164, in __init__ self.service.start() File \&quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/selenium/webdriver/common/service.py\&quot;, line 81, in start raise WebDriverException( selenium.common.exceptions.WebDriverException: Message: 'geckodriver' executable needs to be in PATH. </code></pre>
<python><applescript>
2023-06-27 00:05:45
1
5,998
Lacer
76,560,698
7,535,975
Python 3.10 QuTip: AttributeError: can't set attribute 'format'
<p>I'm trying to setup a simple <code>docker</code> container to make my code portable. Following is the docker container setup that I start with</p> <pre><code>docker run -it --name qutip_portble python:3.10.9-slim bash </code></pre> <p>Once the docker container starts, I install some packages as follows</p> <pre><code>pip install qutip pip install matplotlib </code></pre> <p>Both of these install succesfully without any error. However, when I try to run the following import in python</p> <pre><code>import qutip </code></pre> <p>I get the following error</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/usr/local/lib/python3.10/site-packages/qutip/__init__.py&quot;, line 106, in &lt;module&gt; from qutip.qobj import * File &quot;/usr/local/lib/python3.10/site-packages/qutip/qobj.py&quot;, line 2526, in &lt;module&gt; import qutip.superop_reps as sr File &quot;/usr/local/lib/python3.10/site-packages/qutip/superop_reps.py&quot;, line 74, in &lt;module&gt; _SINGLE_QUBIT_PAULI_BASIS = (identity(2), sigmax(), sigmay(), sigmaz()) File &quot;/usr/local/lib/python3.10/site-packages/qutip/operators.py&quot;, line 508, in identity return qeye(dims) File &quot;/usr/local/lib/python3.10/site-packages/qutip/operators.py&quot;, line 488, in qeye return Qobj(fast_identity(size), File &quot;/usr/local/lib/python3.10/site-packages/qutip/fastsparse.py&quot;, line 389, in fast_identity return fast_csr_matrix((data,ind,ptr),shape=(N,N)) File &quot;/usr/local/lib/python3.10/site-packages/qutip/fastsparse.py&quot;, line 55, in __init__ self.format = 'csr' AttributeError: can't set attribute 'format' </code></pre> <p>I get this error at the <code>import</code> stage before even writing any code of mine. Therefore, I'm assuming this is because of some setup issue. Any help will be appreciated.</p>
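This traceback matches what is, to my knowledge, a known incompatibility between older QuTiP releases and SciPy ≥ 1.11, where `format` on sparse matrices became a read-only property so QuTiP's `fast_csr_matrix` can no longer assign `self.format = 'csr'`. A hedged sketch of two ways out inside the container (version numbers are approximate and worth verifying against the QuTiP changelog):

```shell
# Option 1: pin SciPy below 1.11 so the existing QuTiP keeps working
pip install 'scipy<1.11'

# Option 2: upgrade QuTiP to a release patched for the new SciPy
pip install --upgrade qutip
```

Either line belongs in the Dockerfile's install step so the container stays reproducible.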
<python><python-3.x><qutip>
2023-06-26 22:57:07
1
377
F Baig
76,560,693
14,293,020
Numpy use an array to slice the indices of another array
<p><strong>Context:</strong> I have a 2D array <code>A</code> that I would like to modify at specific indices, given by the arrays <code>a1</code> and <code>a2</code>. I could use a <code>for loop</code> but I want to optimize this problem.</p> <p><strong>Problem:</strong> The way I modify my array needs to use <code>a2</code> as a slice: <code>A[a1, a2:] = 1000</code>. But I can't manage to get past <code>TypeError: only integer scalar arrays can be converted to a scalar index</code>. How could I do that value replacement of <code>A</code> faster than with loops ?</p> <p><strong>Example:</strong></p> <pre><code>import numpy as np # Initialize array A = np.zeros((10,10),int) # Create two arrays of indices a1 = np.array([1,5,6], dtype = int) a2 = np.array([4,6,2], dtype = int) # As a for loop for i in range(a1.shape[0]): A[a1[i], a2[i]:] = 10 # What I tried (doesn't work) A[a1, a2:] A Out[452]: array([[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 0, 10, 10, 10, 10, 10, 10], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 0, 0, 0, 10, 10, 10, 10], [ 0, 0, 10, 10, 10, 10, 10, 10, 10, 10], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) </code></pre>
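A sketch of one vectorized route: turn the ragged per-row slices into an explicit boolean mask over columns, then scatter with the resulting (row, col) index pairs — since a Python `slice` cannot be built from an array, the mask stands in for `a2[i]:`.

```python
import numpy as np

A = np.zeros((10, 10), int)
a1 = np.array([1, 5, 6])
a2 = np.array([4, 6, 2])

# mask[i, j] is True where column j falls inside row a1[i]'s slice a2[i]:
mask = np.arange(A.shape[1]) >= a2[:, None]          # shape (3, 10)
rows, cols = np.nonzero(mask)                        # positions inside the mask
A[a1[rows], cols] = 10                               # scatter in one shot

# Same result as the loop version:
B = np.zeros((10, 10), int)
for i in range(a1.shape[0]):
    B[a1[i], a2[i]:] = 10
print(np.array_equal(A, B))   # True
```

The mask costs `len(a1) * n_cols` booleans, which is usually negligible next to the win of replacing the Python-level loop with one fancy-indexed assignment.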
<python><arrays><numpy><indices>
2023-06-26 22:55:53
1
721
Nihilum
76,560,689
4,180,276
Python script to pull a table from url and output it in a csv
<p>I am trying to get the table from this website, <a href="https://caniwin.com/poker/omahahilopreALL.php" rel="nofollow noreferrer">https://caniwin.com/poker/omahahilopreALL.php</a></p> <p>What I want to do is write a python script that will take that data, and put it in a csv so I can sort by <code>WinHi %</code></p> <p>The script I currently have so far does this</p> <pre><code>import requests import csv from bs4 import BeautifulSoup # Fetch the HTML content from the website url = 'https://caniwin.com/poker/omahahilopreALL.php' response = requests.get(url) html_content = response.text # Parse the HTML soup = BeautifulSoup(html_content, 'html.parser') # Find the table table = soup.find('table') print(table) </code></pre> <p>This prints the table fine. The issue is that, since I am using LibreOffice, when I tried to parse this and put it into a comma-separated file, it looked janky or did not work.</p> <p>For example, this script does not output it in a manner in which I can sort by the value I want</p> <pre><code>import requests import csv from bs4 import BeautifulSoup # Fetch the HTML content from the website url = 'https://caniwin.com/poker/omahahilopreALL.php' response = requests.get(url) html_content = response.text # Parse the HTML soup = BeautifulSoup(html_content, 'html.parser') # Find the table table = soup.find('table') # Extract table data table_data = [] for row in table.find_all('tr'): row_data = [] for cell in row.find_all(['td']): row_data.append(cell.text.strip()) table_data.append(row_data) # Output table to CSV file filename = 'output.csv' with open(filename, 'w', newline='') as csvfile: writer = csv.writer(csvfile) writer.writerows(table_data) print(f&quot;Table data has been saved to {filename}&quot;)</code></pre>
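If the spreadsheet step is the pain point, the sort can happen in Python before the CSV is written (a sketch on made-up rows — the real header row and the position of the `WinHi %` column must be checked against the scraped table):

```python
import csv
import io

# Hypothetical scraped rows: a header plus data, with the percentage as text.
table_data = [["Hand", "WinHi %"],
              ["A A 2 3", "45.2%"],
              ["K K 4 5", "52.1%"],
              ["Q Q 6 7", "48.7%"]]

header, body = table_data[0], table_data[1:]
# Strip the '%' and sort numerically, highest win rate first.
body.sort(key=lambda row: float(row[1].rstrip('%')), reverse=True)

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerows([header] + body)
print(buf.getvalue().splitlines()[1])   # K K 4 5,52.1%
```

Writing to a real file instead of `io.StringIO` is a one-line swap; the key point is converting the percentage strings to floats so the sort is numeric rather than lexicographic.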
<python><csv><beautifulsoup>
2023-06-26 22:55:06
1
2,954
nadermx
76,560,684
210,920
Memory Leak Passing Numpy to C on Linux
<p>I've been experiencing a memory leak when calling some C code I wrote from Python 3.10.6 on Ubuntu 22.04.1 LTS.</p> <p><a href="https://github.com/seung-lab/mapbuffer/blob/main/mapbufferaccel.c#L95-L110" rel="nofollow noreferrer">https://github.com/seung-lab/mapbuffer/blob/main/mapbufferaccel.c#L95-L110</a></p> <p>When used in isolation, there is no memory leak, but the following code appears to leak:</p> <p><a href="https://github.com/seung-lab/igneous/blob/master/igneous/tasks/mesh/multires.py#L390-L406" rel="nofollow noreferrer">https://github.com/seung-lab/igneous/blob/master/igneous/tasks/mesh/multires.py#L390-L406</a></p> <p>When I replace the C call with a python implementation, the leak disappears:</p> <p><a href="https://github.com/seung-lab/mapbuffer/blob/main/mapbuffer/mapbuffer.py#L197" rel="nofollow noreferrer">https://github.com/seung-lab/mapbuffer/blob/main/mapbuffer/mapbuffer.py#L197</a></p> <p>This only happens on Linux, on MacOS there is no leak. I tried calling both the python and C versions at the same time and using the python result while deleting elements from the C version. I got the C version only returning -1 and it seems simply calling the C version causes the leak on Linux.</p> <p>Any ideas what's going on? Is it some part of glibc that is causing a problem? Did I forget to call some cleanup function in the Python C API?</p> <p>Here is the non-leaking Python implementation of that function:</p> <pre><code> def eytzinger_search(target, arr): mid = 0 N = len(arr) while mid &lt; N: if arr[mid] == target: return mid mid = mid * 2 + 1 + int(arr[mid] &lt; target) return -1 k = eytzinger_search(np.uint64(label), index[:, 0]) </code></pre> <p>Thanks so much!</p>
<python><c><memory-leaks><python-c-api>
2023-06-26 22:53:15
1
9,438
SapphireSun
76,560,608
3,501,622
Python - Requests Failed to establish a new connection BCB
<p>I am trying to download this excel file <a href="https://www.bcb.gov.br/content/estatisticas/Documents/Tabelas_especiais/BalPagM.xlsx" rel="nofollow noreferrer">https://www.bcb.gov.br/content/estatisticas/Documents/Tabelas_especiais/BalPagM.xlsx</a> using the requests library.</p> <p>When I open it in Google Chrome, it shows the following headers:</p> <pre><code>Request URL: https://www.bcb.gov.br/content/estatisticas/Documents/Tabelas_especiais/BalPagM.xlsx
Request Method: GET
Status Code: 200
Referrer Policy: strict-origin-when-cross-origin
:Authority: www.bcb.gov.br
:Method: GET
:Path: /content/estatisticas/Documents/Tabelas_especiais/BalPagM.xlsx
:Scheme: https
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7
Accept-Encoding: gzip, deflate, br
Accept-Language: pt-BR,pt;q=0.9,en-US;q=0.8,en;q=0.7
Cache-Control: no-cache
Pragma: no-cache
Sec-Ch-Ua: &quot;Not.A/Brand&quot;;v=&quot;8&quot;, &quot;Chromium&quot;;v=&quot;114&quot;, &quot;Google Chrome&quot;;v=&quot;114&quot;
Sec-Ch-Ua-Mobile: ?0
Sec-Ch-Ua-Platform: &quot;Windows&quot;
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36
</code></pre> <p>When I run it from my Python code, I keep receiving &quot;Failed to establish a new connection&quot;.</p> <pre><code>import requests

headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'pt-BR,pt;q=0.9,en-US;q=0.8,en;q=0.7',
    'Cache-Control': 'no-cache',
    'Pragma': 'no-cache',
    'Sec-Ch-Ua': '&quot;Not.A/Brand&quot;;v=&quot;8&quot;, &quot;Chromium&quot;;v=&quot;114&quot;, &quot;Google Chrome&quot;;v=&quot;114&quot;',
    'Sec-Ch-Ua-Mobile': '?0',
    'Sec-Ch-Ua-Platform': '&quot;Windows&quot;',
    'Sec-Fetch-Dest': 'document',
    'Sec-Fetch-Mode': 'navigate',
    'Sec-Fetch-Site': 'none',
    'Sec-Fetch-User': '?1',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36'
}

url = 'https://www.bcb.gov.br/content/estatisticas/Documents/Tabelas_especiais/BalPagM.xlsx'
response = requests.get(url, headers=headers)
</code></pre> <p>To me, it seems that I am correctly setting all the headers. What am I missing here?</p>
<python><python-requests>
2023-06-26 22:32:49
2
671
Daniel
76,560,486
11,238,061
Is there a way to populate a 2d array from time series data - Python
<p>Apologies for the poor formatting, on mobile.</p> <p>I have a large time-series dataset that has 4 columns.</p> <p><code>Time A B C</code></p> <p>I want to populate a new 2d array from this data, where:</p> <p>The number of rows is given by the number of unique values for A (x-axis).</p> <p>The number of columns is given by the number of unique values for B (y-axis).</p> <p>Assuming ascending order for the column and row values, the values in each 'cell' should be the corresponding values for C (z-axis).</p> <p>Where no corresponding value exists, the 'cell' should be <code>None</code></p> <p>Where more than one corresponding value exists, the cell should contain the average value.</p> <p>For context, I want to use this data to plot a contour chart using Plotly but need to minimise the amount of interpolation that's inherent to these chart types; None values would show blank. I understand that I'll need to also define a corresponding 2d array for both A (x-axis) &amp; B (y-axis).</p> <p>Is there a python function to allow this to be done easily without manually stating the number of unique values?</p> <p><strong>UPDATE - Resolved</strong></p> <p>I ended up using pivot_table and aggfunc to set what happens to the duplicates, like this:</p> <pre><code>pivot_table = pd.pivot_table(df, values='C', index='B', columns='A', aggfunc=np.mean)
z_values = pivot_table.values
x_values = pivot_table.columns.values
y_values = pivot_table.index.values

self.fig.add_trace(go.Contour(x=x_values, y=y_values, z=z_values, connectgaps=False))
</code></pre>
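<p>A minimal, self-contained illustration of that behaviour (toy numbers of my own, not the real data — note that missing combinations come out as <code>NaN</code> rather than <code>None</code>, which Plotly also treats as gaps):</p>

```python
import numpy as np
import pandas as pd

# Toy Time/A/B/C data: (A=1, B=1) occurs twice, (A=2, B=2) never
df = pd.DataFrame({
    "A": [1, 1, 1, 2],
    "B": [1, 1, 2, 1],
    "C": [10.0, 20.0, 30.0, 40.0],
})

# Duplicates are averaged by aggfunc; absent combinations become NaN
pivot = pd.pivot_table(df, values="C", index="B", columns="A", aggfunc="mean")
print(pivot)

z_values = pivot.values
x_values = pivot.columns.values
y_values = pivot.index.values
```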
<python><plotly>
2023-06-26 22:04:58
1
419
Iceberg_Slim
76,560,457
1,607,849
Configuring a devcontainer for a Poetry-managed Python project and VS Code plugins
<p>I'm having a surprisingly rough time today configuring a GitHub Codespace to create an environment for developing a Poetry-managed Python library compatible with the VS Code Python plugin.</p> <p>The main problem I've wrestled with: Poetry can be configured to install a virtual environment into a local <code>.venv</code> directory. The VS Code Python plugin seems to expect the virtual environment to live in <code>.env</code>. If I try <code>&quot;customizations&quot;: { &quot;vscode&quot;: { &quot;settings&quot;: { &quot;python.envFile&quot;: &quot;${workspaceFolder}/.venv&quot; }}}</code>, I see the amended setting appear in the <code>Remote [Codespaces]</code> tab, but the Python plugin seems to ignore it.</p> <p>Can anyone suggest a complete <code>.devcontainer.json</code> that wires a Codespace with the VS Code Python plugin for development of a Poetry-managed Python library?</p>
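<p>For reference, the rough shape I've been experimenting with is below. Everything in it is an assumption on my part rather than a verified working config — in particular, my understanding is that <code>python.envFile</code> points at a file of environment variables, not at a virtual environment, so <code>python.defaultInterpreterPath</code> may be the setting that actually matters here:</p>

```json
{
  "name": "poetry-lib",
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  "postCreateCommand": "poetry config virtualenvs.in-project true && poetry install",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"],
      "settings": {
        "python.defaultInterpreterPath": "${workspaceFolder}/.venv/bin/python"
      }
    }
  }
}
```

<p>This assumes Poetry is available in the base image; if it isn't, something like <code>pipx install poetry</code> would need to run before the <code>poetry install</code> in <code>postCreateCommand</code>.</p>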
<python><python-poetry><codespaces>
2023-06-26 21:55:13
1
1,097
Steven Clontz
76,560,452
12,436,050
Convert xml to rdf format in python
<p>I am new to reading xml file and fetching information from it. Below is the (subset of) xml file I have.</p> <pre><code>&lt;icd_10_v2019&gt; &lt;item type=&quot;chapter&quot;&gt; &lt;name&gt;I&lt;/name&gt; &lt;description&gt;Certain infectious and parasitic diseases&lt;/description&gt; &lt;item type=&quot;block&quot;&gt; &lt;name&gt;A00-A09&lt;/name&gt; &lt;description&gt;Intestinal infectious diseases&lt;/description&gt; &lt;item type=&quot;category&quot;&gt; &lt;name&gt;A00&lt;/name&gt; &lt;description&gt;Cholera&lt;/description&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A00.0&lt;/name&gt; &lt;description&gt;Cholera due to Vibrio cholerae 01, biovar cholerae&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A00.1&lt;/name&gt; &lt;description&gt;Cholera due to Vibrio cholerae 01, biovar eltor&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A00.9&lt;/name&gt; &lt;description&gt;Cholera, unspecified&lt;/description&gt; &lt;/item&gt; &lt;/item&gt; &lt;item type=&quot;category&quot;&gt; &lt;name&gt;A01&lt;/name&gt; &lt;description&gt;Typhoid and paratyphoid fevers&lt;/description&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A01.0&lt;/name&gt; &lt;description&gt;Typhoid fever&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A01.1&lt;/name&gt; &lt;description&gt;Paratyphoid fever A&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A01.2&lt;/name&gt; &lt;description&gt;Paratyphoid fever B&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A01.3&lt;/name&gt; &lt;description&gt;Paratyphoid fever C&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A01.4&lt;/name&gt; &lt;description&gt;Paratyphoid fever, unspecified&lt;/description&gt; &lt;/item&gt; &lt;/item&gt; &lt;item type=&quot;category&quot;&gt; 
&lt;name&gt;A02&lt;/name&gt; &lt;description&gt;Other salmonella infections&lt;/description&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A02.0&lt;/name&gt; &lt;description&gt;Salmonella enteritis&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A02.1&lt;/name&gt; &lt;description&gt;Salmonella sepsis&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A02.2&lt;/name&gt; &lt;description&gt;Localized salmonella infections&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A02.8&lt;/name&gt; &lt;description&gt;Other specified salmonella infections&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A02.9&lt;/name&gt; &lt;description&gt;Salmonella infection, unspecified&lt;/description&gt; &lt;/item&gt; &lt;/item&gt; &lt;item type=&quot;category&quot;&gt; &lt;name&gt;A03&lt;/name&gt; &lt;description&gt;Shigellosis&lt;/description&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A03.0&lt;/name&gt; &lt;description&gt;Shigellosis due to Shigella dysenteriae&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A03.1&lt;/name&gt; &lt;description&gt;Shigellosis due to Shigella flexneri&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A03.2&lt;/name&gt; &lt;description&gt;Shigellosis due to Shigella boydii&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A03.3&lt;/name&gt; &lt;description&gt;Shigellosis due to Shigella sonnei&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A03.8&lt;/name&gt; &lt;description&gt;Other shigellosis&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A03.9&lt;/name&gt; &lt;description&gt;Shigellosis, unspecified&lt;/description&gt; &lt;/item&gt; &lt;/item&gt; &lt;item 
type=&quot;category&quot;&gt; &lt;name&gt;A04&lt;/name&gt; &lt;description&gt;Other bacterial intestinal infections&lt;/description&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A04.0&lt;/name&gt; &lt;description&gt;Enteropathogenic Escherichia coli infection&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A04.1&lt;/name&gt; &lt;description&gt;Enterotoxigenic Escherichia coli infection&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A04.2&lt;/name&gt; &lt;description&gt;Enteroinvasive Escherichia coli infection&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A04.3&lt;/name&gt; &lt;description&gt;Enterohaemorrhagic Escherichia coli infection&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A04.4&lt;/name&gt; &lt;description&gt;Other intestinal Escherichia coli infections&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A04.5&lt;/name&gt; &lt;description&gt;Campylobacter enteritis&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A04.6&lt;/name&gt; &lt;description&gt;Enteritis due to Yersinia enterocolitica&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A04.7&lt;/name&gt; &lt;description&gt;Enterocolitis due to Clostridium difficile&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A04.8&lt;/name&gt; &lt;description&gt;Other specified bacterial intestinal infections&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A04.9&lt;/name&gt; &lt;description&gt;Bacterial intestinal infection, unspecified&lt;/description&gt; &lt;/item&gt; &lt;/item&gt; &lt;item type=&quot;category&quot;&gt; &lt;name&gt;A05&lt;/name&gt; &lt;description&gt;Other bacterial foodborne intoxications, not elsewhere 
classified&lt;/description&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A05.0&lt;/name&gt; &lt;description&gt;Foodborne staphylococcal intoxication&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A05.1&lt;/name&gt; &lt;description&gt;Botulism&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A05.2&lt;/name&gt; &lt;description&gt;Foodborne Clostridium perfringens [Clostridium welchii] intoxication&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A05.3&lt;/name&gt; &lt;description&gt;Foodborne Vibrio parahaemolyticus intoxication&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A05.4&lt;/name&gt; &lt;description&gt;Foodborne Bacillus cereus intoxication&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A05.8&lt;/name&gt; &lt;description&gt;Other specified bacterial foodborne intoxications&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A05.9&lt;/name&gt; &lt;description&gt;Bacterial foodborne intoxication, unspecified&lt;/description&gt; &lt;/item&gt; &lt;/item&gt; &lt;item type=&quot;category&quot;&gt; &lt;name&gt;A06&lt;/name&gt; &lt;description&gt;Amoebiasis&lt;/description&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A06.0&lt;/name&gt; &lt;description&gt;Acute amoebic dysentery&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A06.1&lt;/name&gt; &lt;description&gt;Chronic intestinal amoebiasis&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A06.2&lt;/name&gt; &lt;description&gt;Amoebic nondysenteric colitis&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A06.3&lt;/name&gt; &lt;description&gt;Amoeboma of intestine&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; 
&lt;name&gt;A06.4&lt;/name&gt; &lt;description&gt;Amoebic liver abscess&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A06.5&lt;/name&gt; &lt;description&gt;Amoebic lung abscess&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A06.6&lt;/name&gt; &lt;description&gt;Amoebic brain abscess&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A06.7&lt;/name&gt; &lt;description&gt;Cutaneous amoebiasis&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A06.8&lt;/name&gt; &lt;description&gt;Amoebic infection of other sites&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A06.9&lt;/name&gt; &lt;description&gt;Amoebiasis, unspecified&lt;/description&gt; &lt;/item&gt; &lt;/item&gt; &lt;item type=&quot;category&quot;&gt; &lt;name&gt;A07&lt;/name&gt; &lt;description&gt;Other protozoal intestinal diseases&lt;/description&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A07.0&lt;/name&gt; &lt;description&gt;Balantidiasis&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A07.1&lt;/name&gt; &lt;description&gt;Giardiasis [lambliasis]&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A07.2&lt;/name&gt; &lt;description&gt;Cryptosporidiosis&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A07.3&lt;/name&gt; &lt;description&gt;Isosporiasis&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A07.8&lt;/name&gt; &lt;description&gt;Other specified protozoal intestinal diseases&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A07.9&lt;/name&gt; &lt;description&gt;Protozoal intestinal disease, unspecified&lt;/description&gt; &lt;/item&gt; &lt;/item&gt; &lt;item type=&quot;category&quot;&gt; 
&lt;name&gt;A08&lt;/name&gt; &lt;description&gt;Viral and other specified intestinal infections&lt;/description&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A08.0&lt;/name&gt; &lt;description&gt;Rotaviral enteritis&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A08.1&lt;/name&gt; &lt;description&gt;Acute gastroenteropathy due to Norovirus&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A08.2&lt;/name&gt; &lt;description&gt;Adenoviral enteritis&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A08.3&lt;/name&gt; &lt;description&gt;Other viral enteritis&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A08.4&lt;/name&gt; &lt;description&gt;Viral intestinal infection, unspecified&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A08.5&lt;/name&gt; &lt;description&gt;Other specified intestinal infections&lt;/description&gt; &lt;/item&gt; &lt;/item&gt; &lt;item type=&quot;category&quot;&gt; &lt;name&gt;A09&lt;/name&gt; &lt;description&gt;Other gastroenteritis and colitis of infectious and unspecified origin&lt;/description&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A09.0&lt;/name&gt; &lt;description&gt;Other and unspecified gastroenteritis and colitis of infectious origin&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A09.9&lt;/name&gt; &lt;description&gt;Gastroenteritis and colitis of unspecified origin&lt;/description&gt; &lt;/item&gt; &lt;/item&gt; &lt;/item&gt; &lt;item type=&quot;block&quot;&gt; &lt;name&gt;A15-A19&lt;/name&gt; &lt;description&gt;Tuberculosis&lt;/description&gt; &lt;item type=&quot;category&quot;&gt; &lt;name&gt;A15&lt;/name&gt; &lt;description&gt;Respiratory tuberculosis, bacteriologically and histologically confirmed&lt;/description&gt; &lt;item type=&quot;subcategory&quot;&gt; 
&lt;name&gt;A15.0&lt;/name&gt; &lt;description&gt;Tuberculosis of lung, confirmed by sputum microscopy with or without culture&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A15.1&lt;/name&gt; &lt;description&gt;Tuberculosis of lung, confirmed by culture only&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A15.2&lt;/name&gt; &lt;description&gt;Tuberculosis of lung, confirmed histologically&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A15.3&lt;/name&gt; &lt;description&gt;Tuberculosis of lung, confirmed by unspecified means&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A15.4&lt;/name&gt; &lt;description&gt;Tuberculosis of intrathoracic lymph nodes, confirmed bacteriologically and histologically&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A15.5&lt;/name&gt; &lt;description&gt;Tuberculosis of larynx, trachea and bronchus, confirmed bacteriologically and histologically&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A15.6&lt;/name&gt; &lt;description&gt;Tuberculous pleurisy, confirmed bacteriologically and histologically&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A15.7&lt;/name&gt; &lt;description&gt;Primary respiratory tuberculosis, confirmed bacteriologically and histologically&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A15.8&lt;/name&gt; &lt;description&gt;Other respiratory tuberculosis, confirmed bacteriologically and histologically&lt;/description&gt; &lt;/item&gt; &lt;item type=&quot;subcategory&quot;&gt; &lt;name&gt;A15.9&lt;/name&gt; &lt;description&gt;Respiratory tuberculosis unspecified, confirmed bacteriologically and histologically&lt;/description&gt; &lt;/item&gt; &lt;/item&gt; &lt;/icd_10_v2019&gt; </code></pre> <p>The 
hierarchy I am expecting is below:</p> <pre><code>Certain infectious and parasitic diseases (I)
    Intestinal infectious diseases (A00-A09)
        Cholera (A00)
            Cholera due to Vibrio cholerae 01, biovar cholerae (A00.0)
            Cholera due to Vibrio cholerae 01, biovar eltor (A00.1)
            Cholera, unspecified (A00.9)
        Typhoid and paratyphoid fevers (A01)
            Typhoid fever (A01.0)
..... (so on ....)
</code></pre> <p>In the end, I would like to save this to a graph format (rdf). How can I achieve this? Any help is highly appreciated.</p> <p>I tried the below code so far.</p> <pre><code>import xml.etree.ElementTree as ET
import rdflib
from rdflib import Graph, Namespace, URIRef, Literal

graph = Graph()
ICD_NS = Namespace(&quot;http://example.com/icd/&quot;)

# Load the XML data
tree = ET.parse('icd10_v19.xml')
root = tree.getroot()

# Create an rdf graph
graph = Graph()

def process_element(element, parent_uri):
    print(element)
    if element.find(&quot;name&quot;).text is not None:
        element_uri = parent_uri + element.find(&quot;name&quot;).text
        print(element_uri)
        graph.add((element_uri, ICD_NS['name'], Literal(element.find('name').text)))
        graph.add((element_uri, ICD_NS['description'], Literal(element.find('description').text)))
        graph.add((element_uri, ICD_NS['type'], Literal(element.attrib['type'])))

    # Recursively process child elements
    for child in element.findall(&quot;item&quot;):
        process_element(child, element_uri + '/')

# Start processing from the root element
process_element(root, ICD_NS[''])

# Serialize the graph to RDF/XML format
rdf_data = graph.serialize(format='xml')

# Save the RDF/XML data to a file
with open(&quot;icd10_graph.rdf&quot;, &quot;wb&quot;) as f:
    print(f)
    f.write(rdf_data)
</code></pre>
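<p>For what it's worth, the recursion itself can be sanity-checked without rdflib by printing the hierarchy with ElementTree alone. One thing this made obvious to me: the root <code>icd_10_v2019</code> element has no <code>&lt;name&gt;</code> child, so anything that calls <code>.find(&quot;name&quot;).text</code> on it will fail — only the nested <code>&lt;item&gt;</code> elements carry names (sketch with a trimmed-down copy of the data):</p>

```python
import xml.etree.ElementTree as ET

# Tiny stand-in for the ICD file
xml = """<icd_10_v2019>
  <item type="chapter"><name>I</name><description>Certain infectious and parasitic diseases</description>
    <item type="block"><name>A00-A09</name><description>Intestinal infectious diseases</description>
      <item type="category"><name>A00</name><description>Cholera</description></item>
    </item>
  </item>
</icd_10_v2019>"""

root = ET.fromstring(xml)

def walk(element, depth=0):
    # Only recurse into <item> children, so the nameless root is never read
    lines = []
    for item in element.findall("item"):
        name = item.findtext("name")
        desc = item.findtext("description")
        lines.append("    " * depth + f"{desc} ({name})")
        lines.extend(walk(item, depth + 1))
    return lines

print("\n".join(walk(root)))
```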
<python><xml><rdf>
2023-06-26 21:53:37
1
1,495
rshar
76,560,449
489,088
How can I read an in-memory HDF byte array back into a pandas DataFrame without writing a file?
<p>I can convert a pandas dataframe to an HDF byte array like so:</p> <pre><code>data = ...  # a pandas dataframe

with pd.HDFStore(
    &quot;in-memory-save-file&quot;,
    mode=&quot;w&quot;,
    driver=&quot;H5FD_CORE&quot;,
    driver_core_backing_store=0,
) as store:
    store.put(&quot;data&quot;, data, format=&quot;table&quot;)
    bytes_to_write = store._handle.get_file_image()
</code></pre> <p><code>bytes_to_write</code> now contains the DataFrame.</p> <p>I need to convert this buffer back to a DataFrame.</p> <p>The <code>pandas.read_hdf</code> method accepts a <code>pandas.HDFStore</code>, but this class seems like it can only be constructed using a file path, not a buffer.</p> <p>But how can I convert this byte array back to a DataFrame?</p> <p>I'm on Python 3.9.</p> <p>Any pointers are greatly appreciated!</p>
<python><pandas><dataframe><hdf>
2023-06-26 21:52:48
0
6,306
Edy Bourne
76,560,346
6,525,082
calling pcolor results in error with shading
<p>While trying to use <code>matplotlib.pyplot.pcolor</code> to make a 2D plot on one of the axes (as shown below), I get an error.</p> <pre class="lang-py prettyprint-override"><code>f, ax = plt.subplots(1, len(cs_values)+1, figsize=(11,4), dpi=200)
im0 = ax[0].pcolor(X, Y, Q[:-1, :-1], shading='auto')
</code></pre> <p>The error I am getting is:</p> <pre class="lang-py prettyprint-override"><code>/path/to/python3.7/site-packages/matplotlib/artist.py&quot;, line 970, in _update_property
    .format(type(self).__name__, k))
AttributeError: 'PolyCollection' object has no property 'shading'
</code></pre> <p>I am using Matplotlib 3.1.3 on Python 3.7. What is causing the error and how can I get rid of it?</p>
<python><matplotlib>
2023-06-26 21:29:56
0
1,436
wander95
76,560,332
7,481,334
Same component works when defined through @component but it fails when created with create_component_from_func
<p>I have a <code>Docker</code> container in <code>gcloud</code> with all the code I need (multiple files, etc.). I'm able to run it when I defined the component using the <code>@component</code> decorator. I define the function and I set up the <code>base_image</code>. The component does a few things and it loads code from the container as expected.</p> <p>However, It fails when I create the component through a function. First, I create the component with the <code>create_component_from_func</code> function (same function, and I define the same container). Then I use it.</p> <p>Then I create a pipeline with those 2 components (in the example they are just 2 components disconnected). I would expect the same result from both components, but the 2nd fails. It cannot import the functions. I did some prints and checks (even reading the code itself) and it is there. Everything looks exactly the same. I thought those were analogous approaches, and I guess they are not.</p> <p>Any idea of the differences and why it works with <code>@component</code> but not for <code>create_component_from_func</code></p> <p>This is obviously not the code (I would need to provide a container, etc.) 
but you can get an idea of what I'm doing:</p> <pre class="lang-py prettyprint-override"><code>from kfp.components import create_component_from_func
from kfp.v2.dsl import component

def add_values_from_func(a:int, b:int) -&gt; int:
    # I can do stuff and print
    from container_file import transformer_function
    return transformer_function(a) + transformer_function(b)

@component(base_image=&quot;my_docker_image_in_gcloud&quot;)
def add_values(a:int, b:int) -&gt; int:
    # I can do stuff and print
    from container_file import transformer_function
    return transformer_function(a) + transformer_function(b)

# PIPELINE CODE

### This component works
add_values_task = (
    add_values(a=1, b=2)
)

### This component errors out saying that it cannot find the transformer_function
add_values_from_func_component = create_component_from_func(
    func=add_values_from_func,
    base_image=&quot;my_docker_image_in_gcloud&quot;,
)

add_values_from_func_component_task = (
    add_values_from_func_component(a=1, b=2)
)
</code></pre>
<python><gcloud><google-cloud-vertex-ai><kubeflow><kubeflow-pipelines>
2023-06-26 21:25:38
2
401
100tifiko
76,560,327
3,778,175
Passing parameters to Hypercorn when Quart is used in development mode
<p>When running, say:</p> <pre><code>export QUART_APP=hello-world:app
quart run
</code></pre> <p>As far as I understand, you actually have three logging mechanisms in place: one set up by Hypercorn, another set up by Quart itself, and another one set by the asyncio module. In my case, in addition to the aforementioned, I have my own application-level logging mechanism.</p> <p>I'd like all the logs to be consistent in terms of how they appear (e.g. format etc.). How can I pass a log format to Hypercorn when using &quot;quart run&quot;? The only way I have seen this done is by creating a configuration file, but I am not sure if Hypercorn will even load it when it is spawned by Quart.</p> <p>I can always get the loggers directly by using logging.getLogger('hypercorn.access') and logging.getLogger('hypercorn.error') and forcefully set any attributes after the fact, but that's a bit ugly.</p> <p>Thanks in advance!</p>
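<p>To be concrete, the &quot;after the fact&quot; workaround I mentioned looks roughly like this (the <code>hypercorn.*</code> and <code>quart.app</code> logger names are the ones I believe these libraries use — I'd still prefer a supported way to pass the format in):</p>

```python
import logging
from logging.config import dictConfig

LOG_FORMAT = "%(asctime)s [%(levelname)s] %(name)s: %(message)s"

# One shared formatter/handler for Hypercorn, Quart, and my own loggers
dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {"default": {"format": LOG_FORMAT}},
    "handlers": {
        "console": {"class": "logging.StreamHandler", "formatter": "default"},
    },
    "loggers": {
        "hypercorn.error": {"handlers": ["console"], "level": "INFO", "propagate": False},
        "hypercorn.access": {"handlers": ["console"], "level": "INFO", "propagate": False},
        "quart.app": {"handlers": ["console"], "level": "INFO", "propagate": False},
    },
})
```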
<python><python-asyncio><python-logging><quart><hypercorn>
2023-06-26 21:24:30
1
461
seidnerj
76,560,288
14,501,168
capturing escape sequences from the output of a shell in python
<p>I'm trying to create a Linux terminal in Python, but I know there are going to be escape sequences sent by the shell that I will need to interpret. How can I capture the escape sequences and the control codes from the output of the shell and know their positions? As an example, this: <code>b'\x1b]0alaa@alaa-hplaptop15bs1xx:~/terminal\x07'</code></p>
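<p>As a starting point, I have been experimenting with a regex over the raw bytes that matches the two most common shapes — OSC sequences (<code>ESC ]</code> … <code>BEL</code>, like the window-title example above) and CSI sequences (<code>ESC [</code> … final byte) — and reports where each one sits (a sketch; it does not cover every sequence type):</p>

```python
import re

ESCAPE_RE = re.compile(
    rb"\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)"  # OSC: ESC ] ... BEL (or ESC \ terminator)
    rb"|\x1b\[[0-9;?]*[@-~]"               # CSI: ESC [ params final-byte
)

data = b"\x1b]0;alaa@host:~/terminal\x07hello \x1b[31mred\x1b[0m"

# Each match carries its byte offsets, so the sequences can be located exactly
for m in ESCAPE_RE.finditer(data):
    print(m.start(), m.end(), m.group())

# Stripping the matches leaves only the printable text
clean = ESCAPE_RE.sub(b"", data)
print(clean)  # b'hello red'
```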
<python><shell><terminal><ansi-escape>
2023-06-26 21:16:05
0
323
th3plus
76,560,192
8,378,817
Read .txt files containing text into a pandas dataframe with a single column
<p>I am trying to read many .txt files, which are text files extracted from pdf articles. I want to create a dataframe from each .txt file, where each row will have all the text from the article. However, I am running into the issue that, upon reading using pd.read_csv(), the parser splits the text into multiple columns. Basically, I am trying to get an article dataframe of shape (n,1) where n is the number of rows containing text per article.</p> <p>Below is an example of the result I am getting:</p> <pre><code>pd.read_csv('file.txt', delimiter=None, header=None, skip_blank_lines=True, on_bad_lines='warn')
</code></pre> <p><a href="https://i.sstatic.net/9L3Bf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9L3Bf.png" alt="enter image description here" /></a></p> <p>There will be thousands of .txt files to load into the Dataframe. What can be the best way to deal with this? Thank you</p> <p>A portion of a .txt file:</p> <p>Text1:</p> <p>The tissue samples of the shoots (a mix of leaves and stems) and fibrous rootlets were immediately analysed as fresh materials, fixed in the NaH2PO4- Na2HPO4 fixative solution containing 2.5% glutaraldehyde, or frozen in liquid nitrogen and then stored at −80 °C, depending on experimental requirements. The fibrous rootlets fixed in the fixative solution were taken and stained for 30 min in 1% sarranine followed by rinsing with sterilized deionized water. Then, they were stained for 5 min in aniline blue and sequentially rinsed once with 95% ethanol and twice with anhydrous ethanol, and sequentially soaked for 5 min in the solution composed of both anhydrous ethanol and dimethylbenzene as 1(v):1(v) and for 5 min in dimethylbenzene.</p> <p>Text2:</p> <p>Although ar- tificial lighting can extend growth duration in controlled environments, such as greenhouses and indoor growth facili- ties, we focus here on field-grown crops, given the limited feasibility with which many staple crops may be grown in controlled environments.
The proportion of St used for pho- tosynthesis in plants could potentially increase through in- troduction of different pigments and their associated proteins in plants (see โ€œOpportunitiesโ€ below). Of the effi- ciency terms, which are genetically determined, ep and ei were largely targeted for improvement during the Green Revolution and are approaching the theoretical limit in most crops (Evans, 1993; Hay, 1995; Sinclair, 1998); therefore, these hold little further potential for improving Yp in major crop species.</p>
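<p>For clarity, the shape I'm after can be produced without the CSV machinery at all — just split the text into lines and build a one-column frame (sketch with a made-up two-paragraph stand-in for an article):</p>

```python
import pandas as pd

# Stand-in for the contents of one extracted .txt article
text = (
    "The tissue samples of the shoots were immediately analysed.\n"
    "\n"
    "Although artificial lighting can extend growth duration, we focus on field-grown crops.\n"
)

# Keep each non-blank line as one row in a single 'text' column,
# so commas in the prose can never split it into extra columns.
lines = [ln for ln in text.splitlines() if ln.strip()]
df = pd.DataFrame({"text": lines})
print(df.shape)  # (2, 1)
```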
<python><pandas><dataframe>
2023-06-26 20:55:28
3
365
stackword_0
76,560,108
587,021
Fully qualified imports from `foo/setup.py`, `foo/bar/__init__.py` to get `import foo.bar`?
<p>Given this directory structure:</p> <pre><code>/tmp/foo
|-- bar
|   `-- __init__.py
`-- setup.py
</code></pre> <p>…and this <code>setup.py</code>:</p> <pre class="lang-py prettyprint-override"><code>from setuptools import find_packages, setup

package_name = 'foo'
module_name = 'bar'

setup(
    name=package_name,
    packages=find_packages(),
    # Attempts:
    #
    # `package_dir` omitted entirely
    # package_dir={'': module_name},
    # package_dir={package_name: module_name},
    python_requires=&quot;&gt;=3.6&quot;,
)
</code></pre> <p>But no matter what I do, I can't get <code>import foo.bar</code> to work. Some of the attempts get <code>import foo</code> to work.</p> <p>How do I get fully-qualified imports to work?</p>
<python><python-import><setuptools><python-module><python-packaging>
2023-06-26 20:38:25
0
13,982
A T
76,559,999
12,845,199
Divide first row by others, polars
<p>I have the following polars dataframe</p> <pre><code>pl.DataFrame({'pme-site-paginas': [118,54,24,21,20,20,18],'pme-midias-novo-empresas':[348,149,67,50,47,46,39,],'pme-site-claro-mais':[8,0,0,0,0,0,0]})
</code></pre> <p>So, given the above dataframe, I want to grab the first row and divide all the other rows by it, so that I get percentages with the first value treated as 100%. This is really similar to the following question about pandas; I would appreciate some help on how I can do this in polars. <a href="https://stackoverflow.com/questions/12007406/divide-dataframe-by-first-row">Divide DataFrame by first row</a></p>
<python><python-polars>
2023-06-26 20:18:54
0
1,628
INGl0R1AM0R1
76,559,939
7,447,976
Getting error when using clientside_callback via JavaScript in Dash - Python
<p>I've recently asked a question about how to use <code>clientside_callback</code> (see <a href="https://stackoverflow.com/questions/76474217/how-to-avoid-page-refresh-using-clients-callback-when-using-the-click-feature-in">this</a>) and am practicing it on my own dashboard application. In my dashboard, I have a map, a drop down menu, and a button. The user selects states on the map with a click, and can also select them from the drop down menu. However, there is ALL option in the drop down menu as well. As for the button, it clears the user's selection.</p> <p>My application works with the regular Dash callbacks, but my goal is to use <code>clientside_callback</code> to speed up the process. However, I receive multiple errors with my code due to the Java Script part about which I have no experience.</p> <p>That's why I'd appreciate if someone could assist me.</p> <pre><code>import random, json import dash from dash import dcc, html, Dash, callback, Output, Input, State import dash_leaflet as dl import geopandas as gpd from dash import dash_table #https://gist.github.com/incubated-geek-cc/5da3adbb2a1602abd8cf18d91016d451?short_path=2de7e44 us_states_gdf = gpd.read_file(&quot;us_states.geojson&quot;) us_states_geojson = json.loads(us_states_gdf.to_json()) options = [{'label': 'Select all', 'value': 'ALL'}, {'label': 'AK', 'value': 'AK'}, {'label': 'AL', 'value': 'AL'}, {'label': 'AR', 'value': 'AR'}, {'label': 'AZ', 'value': 'AZ'}, {'label': 'CA', 'value': 'CA'}, {'label': 'CO', 'value': 'CO'}, {'label': 'CT', 'value': 'CT'}, {'label': 'DE', 'value': 'DE'}, {'label': 'FL', 'value': 'FL'}, {'label': 'GA', 'value': 'GA'}, {'label': 'HI', 'value': 'HI'}, {'label': 'IA', 'value': 'IA'}, {'label': 'ID', 'value': 'ID'}, {'label': 'IL', 'value': 'IL'}, {'label': 'IN', 'value': 'IN'}, {'label': 'KS', 'value': 'KS'}, {'label': 'KY', 'value': 'KY'}, {'label': 'LA', 'value': 'LA'}, {'label': 'MA', 'value': 'MA'}, {'label': 'MD', 'value': 'MD'}, {'label': 'ME', 'value': 'ME'}, 
{'label': 'MI', 'value': 'MI'}, {'label': 'MN', 'value': 'MN'}, {'label': 'MO', 'value': 'MO'}, {'label': 'MS', 'value': 'MS'}, {'label': 'MT', 'value': 'MT'}, {'label': 'NC', 'value': 'NC'}, {'label': 'ND', 'value': 'ND'}, {'label': 'NE', 'value': 'NE'}, {'label': 'NH', 'value': 'NH'}, {'label': 'NJ', 'value': 'NJ'}, {'label': 'NM', 'value': 'NM'}, {'label': 'NV', 'value': 'NV'}, {'label': 'NY', 'value': 'NY'}, {'label': 'OH', 'value': 'OH'}, {'label': 'OK', 'value': 'OK'}, {'label': 'OR', 'value': 'OR'}, {'label': 'PA', 'value': 'PA'}, {'label': 'RI', 'value': 'RI'}, {'label': 'SC', 'value': 'SC'}, {'label': 'SD', 'value': 'SD'}, {'label': 'TN', 'value': 'TN'}, {'label': 'TX', 'value': 'TX'}, {'label': 'UT', 'value': 'UT'}, {'label': 'VA', 'value': 'VA'}, {'label': 'VT', 'value': 'VT'}, {'label': 'WA', 'value': 'WA'}, {'label': 'WI', 'value': 'WI'}, {'label': 'WV', 'value': 'WV'}, {'label': 'WY', 'value': 'WY'}] state_abbreviations = {'Alabama': 'AL', 'Alaska': 'AK', 'Arizona': 'AZ', 'Arkansas': 'AR', 'California': 'CA', 'Colorado': 'CO', 'Connecticut': 'CT', 'Delaware': 'DE', 'Florida': 'FL', 'Georgia': 'GA', 'Hawaii': 'HI', 'Idaho': 'ID', 'Illinois': 'IL', 'Indiana': 'IN', 'Iowa': 'IA', 'Kansas': 'KS', 'Kentucky': 'KY', 'Louisiana': 'LA', 'Maine': 'ME', 'Maryland': 'MD', 'Massachusetts': 'MA', 'Michigan': 'MI', 'Minnesota': 'MN', 'Mississippi': 'MS', 'Missouri': 'MO', 'Montana': 'MT', 'Nebraska': 'NE', 'Nevada': 'NV', 'New Hampshire': 'NH', 'New Jersey': 'NJ', 'New Mexico': 'NM', 'New York': 'NY', 'North Carolina': 'NC', 'North Dakota': 'ND', 'Ohio': 'OH', 'Oklahoma': 'OK', 'Oregon': 'OR', 'Pennsylvania': 'PA', 'Rhode Island': 'RI', 'South Carolina': 'SC', 'South Dakota': 'SD', 'Tennessee': 'TN', 'Texas': 'TX', 'Utah': 'UT', 'Vermont': 'VT', 'Virginia': 'VA', 'Washington': 'WA', 'West Virginia': 'WV', 'Wisconsin': 'WI', 'Wyoming': 'WY'} states = list(state_abbreviations.values()) app = Dash(__name__) app.layout = html.Div([ #Store lists to use in the callback 
dcc.Store(id='options-store', data=json.dumps(options)), dcc.Store(id='states-store', data=json.dumps(states)), dcc.Store(id='states-abbrevations', data=json.dumps(state_abbreviations)), #US Map here dl.Map([ dl.TileLayer(url=&quot;http://tile.stamen.com/toner-lite/{z}/{x}/{y}.png&quot;), dl.GeoJSON(data=us_states_geojson, id=&quot;state-layer&quot;)], style={'width': '100%', 'height': '250px'}, id=&quot;map&quot;, center=[39.8283, -98.5795], ), #Drop down menu here html.Div(className='row', children=[ dcc.Dropdown( id='state-dropdown', options=[{'label': 'Select all', 'value': 'ALL'}] + [{'label': state, 'value': state} for state in states], value=[], multi=True, placeholder='States' )]), html.Div(className='one columns', children=[ html.Button( 'Clear', id='clear-button', n_clicks=0, className='my-button' ), ]), ]) @app.callback( Output('state-dropdown', 'value', allow_duplicate=True), [Input('clear-button', 'n_clicks')], prevent_initial_call=True ) def clear_tab(user_click): if user_click: return [] else: raise dash.exceptions.PreventUpdate app.clientside_callback( &quot;&quot;&quot; function(click_feature, selected_states, defaults_options, states, states_abbreviations) { let options = defaults_options let select_all_selected = selected_states.includes('ALL'); let list_states; if (select_all_selected) { options = [{'label': 'Select All', 'value': 'ALL'}]; selected_states = states; list_states = 'ALL'; } else { list_states = selected_states; if (click_feature &amp;&amp; dash.callback_context.triggered[0]['prop_id'].split('.')[0] == 'state-layer') { let state_name = state_abbreviations[click_feature[&quot;properties&quot;][&quot;NAME&quot;]]; if (!selected_states.includes(state_name)) { selected_states.push(state_name); list_states = selected_states; } } } return [options, list_states]; } &quot;&quot;&quot;, Output('state-dropdown', 'options'), Output('state-dropdown', 'value'), Input('state-layer', 'click_feature'), Input('state-dropdown', 'value'), 
State('options-store', 'data'), State('states-store', 'data'), State('states-abbrevations', 'data'), prevent_initial_call=True ) if __name__ == '__main__': app.run_server(debug=True) </code></pre>
<javascript><python><callback><plotly-dash><dashboard>
2023-06-26 20:10:19
1
662
sergey_208
76,559,909
2,055,938
Python erroneous logic 2
<p>I am new to Python and hope that someone can point me in the right direction about the logical issues I have with my code.</p> <p>The program runs fine and behaves, as far as I can understand, as I expect. But when the killswitch button is pressed (and I can see in the output that the button is pressed and that the killswitch variable is set to True), the threads continue their work without stopping. When the button is pressed I want the kill_processes method to run. Why don't the threads die gracefully so that the program ends as I wish? What have I missed here?</p> <pre><code># Source: rplcd.readthedocs.io/en/stable/getting_started.html # https://www.circuitbasics.com/raspberry-pi-lcd-set-up-and-programming-in-python/ # LCD R Pi # RS GPIO 26 (pin 37) # E GPIO 19 (35) # DB4 GPIO 13 (33) # DB5 GPIO 6 (31) # DB6 GPIO 5 (29) # DB7 GPIO 11 / SCLK (23) # WPSE311/DHT11 GPIO 24 (18) # Relay Fog GPIO 21 (40) # Relay Oxy GPIO 20 (38) # Relay LED GPIO 16 (36) # Button Killsw. GPIO 12 (32) # Imports--- [ToDo: How to import settings from a configuration file?] import adafruit_dht, board, digitalio, threading import adafruit_character_lcd.character_lcd as characterlcd import RPi.GPIO as GPIO from time import sleep, perf_counter from gpiozero import CPUTemperature from datetime import datetime # Compatible with all versions of RPI as of Jan. 2019 # v1 - v3B+ lcd_rs = digitalio.DigitalInOut(board.D26) lcd_en = digitalio.DigitalInOut(board.D19) lcd_d4 = digitalio.DigitalInOut(board.D13) lcd_d5 = digitalio.DigitalInOut(board.D6) lcd_d6 = digitalio.DigitalInOut(board.D5) lcd_d7 = digitalio.DigitalInOut(board.D11) # Define LCD column and row size for 16x2 LCD.
lcd_columns = 16 lcd_rows = 2 # Initialise the lcd class--- lcd = characterlcd.Character_LCD_Mono(lcd_rs, lcd_en, lcd_d4, lcd_d5, lcd_d6, lcd_d7, lcd_columns, lcd_rows) # Init sensors--- dhtDevice_nutrient_mist = adafruit_dht.DHT11(board.D24, use_pulseio=False) #dhtDevice_xx = adafruit_dht.DHT22(board.Dxx, use_pulseio=False) #dhtDevice = adafruit_dht.DHT11(board.D24, use_pulseio=False) # Define relays relay_fogger = 21 #digitalio.DigitalInOut(board.D21) #- Why does this not work? relay_oxygen = 20 #digitalio.DigitalInOut(board.D20) #- Why does this not work? relay_led = 16 #digitalio.DigitalInOut(board.D16) #- Why does this not work? # Init relays--- GPIO.setwarnings(False) GPIO.setup(relay_fogger, GPIO.OUT) GPIO.setup(relay_oxygen, GPIO.OUT) GPIO.setup(relay_led, GPIO.OUT) # Define liquid nutrient temperature probe liquid_nutrients_probe = 16 #digitalio.DigitalInOut(board.D16) - Why does this not work? # Define the killswitch push button GPIO.setup(12, GPIO.IN, pull_up_down=GPIO.PUD_DOWN) # Global variables--- killswitch = False # Fogger bucket vars temp_nutrient_solution = 0 temp_nutrient_mist = 0 humidity_nutrient_mist = 0 fogger_on_seconds = 2700 #45 min fogger_off_seconds = 900 #15 min sleep_fogger = False # Grow bucket vars temp_roots = 0 humidity_roots = 0 # Oxygen bucket vars sleep_oxygen = False # Raspberry Pi internal temperature rpi_internal_temp = 0 # Methods--- def get_temp_nutrient_solution(killswitch): # Temp for the nutrient solution.
READY for TEST &quot;&quot;&quot;Measure the temperature of the nutrient solution where the ultrasonic fogger is.&quot;&quot;&quot; while not killswitch: global temp_nutrient_solution temp_nutrient_solution = 22 #lcd.message = datetime.now().strftime('%b %d %H:%M:%S\n') #lcd.message = &quot;Dummy temp liquid solution:/n {:.1f}C&quot;.format(temp_nutrient_solution) # For development process print( &quot;T: {:.1f} C / {:.1f} F&quot;.format( temp_nutrient_solution, c2f(temp_nutrient_solution) ) ) sleep(1) def get_temp_humidity_nutrient_mist(killswitch): # Temp and humidity of the mist. READY for TEST &quot;&quot;&quot;Measure the temperature and humidity of the nutrient mist where the ultrasonic fogger is.&quot;&quot;&quot; while not killswitch: try: # Update global temp value and humidity once per second global temp_nutrient_mist global humidity_nutrient_mist temp_nutrient_mist = dhtDevice_nutrient_mist.temperature humidity_nutrient_mist = dhtDevice_nutrient_mist.humidity # For development process print( &quot;T: {:.1f} C / {:.1f} F Humidity: {}% &quot;.format( temp_nutrient_mist, c2f(temp_nutrient_mist), humidity_nutrient_mist ) ) except RuntimeError as error: # Errors happen fairly often, DHT's are hard to read, just keep going print(error.args[0]) sleep(1) # sleep(1) for DHT11 and sleep(2) for DHT22 pass except Exception as error: dhtDevice.exit() kill_processes() # Improve this so it shows which DHT device got the error raise error sleep(1) def relay_fogger_control(killswitch, sleep_fogger): # Fogger on or off &quot;&quot;&quot;Fogger on for 45 min and off for 15. Perpetual mode unless kill_processes() is activated&quot;&quot;&quot; while not killswitch or sleep_fogger: GPIO.output(relay_fogger, GPIO.HIGH) sleep(1) #sleep(fogger_on_seconds) GPIO.output(relay_fogger, GPIO.LOW) sleep(1) #sleep(fogger_off_seconds) def relay_heatLED_control(killswitch): # Heat-lamp LED on or off &quot;&quot;&quot;Heat LED controller.
When is it too hot for the crops? Sleep interval? Perpetual mode unless kill_processes() is activated&quot;&quot;&quot; while not killswitch: GPIO.output(relay_led, GPIO.HIGH) sleep(3) #sleep(fogger_on_seconds) GPIO.output(relay_led, GPIO.LOW) sleep(3) #sleep(fogger_off_seconds) def relay_oxygen_control(killswitch, sleep_oxygen): # Oxygen machine on or off &quot;&quot;&quot;Oxygen maker. Perpetual mode unless kill_processes() is activated&quot;&quot;&quot; while not killswitch or sleep_oxygen: GPIO.output(relay_oxygen, GPIO.HIGH) sleep(5) #sleep(fogger_on_seconds) GPIO.output(relay_oxygen, GPIO.LOW) #sleep(fogger_off_seconds) sleep(5) def kill_processes(): # Kill all processes &quot;&quot;&quot;ToDo: A button must be pressed which gracefully kills all processes preparing for shutdown.&quot;&quot;&quot; # Power off machines GPIO.output(relay_fogger, GPIO.LOW) GPIO.output(relay_led, GPIO.LOW) GPIO.output(relay_oxygen, GPIO.LOW) # Joined the threads / stop the threads after killswitch is true t1.join() t2.join() t3.join() t4.join() t5.join() #t6.join() reset_clear_lcd() # Stop message and GPIO clearing lcd.message = 'Full stop.\r\nSafe to remove.' GPIO.cleanup() def reset_clear_lcd(): # Reset and clear the LCD &quot;&quot;&quot;Move cursor to (0,0) and clear the screen&quot;&quot;&quot; lcd.home() lcd.clear() def get_rpi_temp(): # Read the R Pi internal temperature &quot;&quot;&quot;Read the Raspberry Pi CPU temperature&quot;&quot;&quot; global rpi_internal_temp cpu = CPUTemperature() rpi_internal_temp = cpu.temperature def c2f(temperature_c): &quot;&quot;&quot;Convert Celsius to Fahrenheit&quot;&quot;&quot; return temperature_c * (9 / 5) + 32 def lcd_display_data_controller(killswitch): # LCD display data controller &quot;&quot;&quot;Display various measurements and data on the small LCD. Switch every four seconds.&quot;&quot;&quot; while not killswitch: reset_clear_lcd() # Raspberry Pi internal temperature lcd.message = ( &quot;R Pi (int.
temp): \n{:.1f}C/{:.1f}F &quot;.format( rpi_internal_temp, c2f(rpi_internal_temp) ) ) sleep(5) reset_clear_lcd() # Nutrient liquid temperature lcd.message = ( &quot;F1: {:.1f}C/{:.1f}F &quot;.format( temp_nutrient_solution, c2f(temp_nutrient_solution) ) ) sleep(5) reset_clear_lcd() # Nutrient mist temperature and humidity lcd.message = ( &quot;F2: {:.1f}C/{:.1f}F \nHumidity: {}% &quot;.format( temp_nutrient_mist, c2f(temp_nutrient_mist), humidity_nutrient_mist ) ) sleep(5) reset_clear_lcd() # Root temperature and humidity lcd.message = ( &quot;R1: {:.1f}C/{:.1f}F \nHumidity: {}% &quot;.format( temp_roots, c2f(temp_roots), humidity_roots ) ) sleep(5) reset_clear_lcd() def button_callback(channel): global killswitch print(&quot;Button was pushed!&quot;) killswitch = True # Init the button GPIO.add_event_detect(12, GPIO.RISING, callback=button_callback) # Create the threads #tx = threading.Thread(target=xx, args=(killswitch,sleep_fogger,)) t1 = threading.Thread(target=get_temp_nutrient_solution, args=(killswitch,)) t2 = threading.Thread(target=get_temp_humidity_nutrient_mist, args=(killswitch,)) t3 = threading.Thread(target=relay_fogger_control, args=(killswitch,sleep_fogger,)) t4 = threading.Thread(target=lcd_display_data_controller, args=(killswitch,)) t5 = threading.Thread(target=get_rpi_temp) #t6 = threading.Thread(target=killswitch_button) # Start the threads t1.start() t2.start() t3.start() t4.start() t5.start() #t6.start() # Code main process--- while not killswitch: sleep(1) # Graceful exit kill_processes() </code></pre>
<python><multithreading><raspberry-pi3>
2023-06-26 20:03:30
1
517
Emperor 2052
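A likely culprit in the question above: `killswitch` is a plain `bool`, and `threading.Thread(target=..., args=(killswitch,))` passes the *value* that existed at thread-creation time. `button_callback` rebinds the global name, but each worker keeps testing its own stale copy forever. The usual fix is a shared, mutable flag such as `threading.Event`. A minimal, runnable sketch (the worker loop is a stand-in for the sensor/relay loops):

```python
import threading
import time

stop = threading.Event()  # one shared, mutable stop flag

def worker(name):
    # Re-check the *shared* Event on every pass; stop.wait() sleeps,
    # but returns early the moment another thread calls stop.set().
    while not stop.is_set():
        stop.wait(0.05)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()

time.sleep(0.1)
stop.set()            # this is what button_callback would do
for t in threads:
    t.join(timeout=2)

print(all(not t.is_alive() for t in threads))  # True
```

Applied to the original script, that means replacing `killswitch = False` with an `Event`, looping on `while not killswitch.is_set():` in each worker, and calling `killswitch.set()` in `button_callback`; `kill_processes()` can then `join()` the threads safely.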
76,559,598
14,073,111
Create conditional Column in dataframes based on 3 other column values
<p>Let's say I have a dataframe like this:</p> <pre><code> RING CLLI CIRCUIT SITE 0 N100 M200 Circuit1 M200 1 N100 M200 Circuit2 M200 2 N100 M201 Circuit3 M201 3 N101 M200 23 Circuit1 M200 4 N101 M300 Circuit1 M300 5 N101 M304 XK Circuit11 M304 6 N101 M147 Circuit10 M147 7 N102 M304E5 Circuit11 M304 8 N102 M874 Circuit114 M874 9 N102 M874 Circuit113 M874 10 N102 M874 Circuit112 M874 11 N104 M643 Circuit2 M643 12 N104 M643 Circuit234 M643 13 N104 M304 Circuit11 M304 </code></pre> <p>I want to add a conditional Column (NeigbourCLLI) based on the values from the other columns. Basically what I want is to check:</p> <pre><code>1. Check if the Circuit from CIRCUIT column is the same a) If yes, check if the value from RING column is different 1. If yes, check if the SITES column are a match a) If yes, add this to NeigbourCLLI (with comma separated if there are more </code></pre> <p>For example, Circuit1 is present in rows 1, 4, and 5. This condition is met only for rows 1 and 5: there the Circuit is the same, the Ring is different, and the Sites are equal. Circuit11 is present in rows 6, 8 and 14, and the condition is met in all of them, and so on...
And the final dataframe would look like this:</p> <pre><code> RING CLLI CIRCUIT SITE NeigbourCLLI 0 N100 M200 Circuit1 M200 M200 23 # This one is taken from row N101 M200 23 Circuit1 M200 NaN 1 N100 M200 Circuit2 M200 NaN 2 N100 M201 Circuit3 M201 NaN 3 N101 M200 23 Circuit1 M200 M200 # This one is taken from row N100 M200 Circuit1 M200 M200 4 N101 M300 Circuit1 M300 NaN 5 N101 M304 XK Circuit11 M304 M304E5, M304 # This one is taken from rows N102 M304E5 Circuit11 M304 and N104 M304 Circuit11 M304 6 N101 M147 Circuit10 M147 NaN 7 N102 M304E5 Circuit11 M304 M304 XK, M304 # This one is taken from rows N101 M304 XK Circuit11 M304 and N104 M304 Circuit11 M304 8 N102 M874 Circuit114 M874 NaN 9 N102 M874 Circuit113 M874 NaN 10 N102 M874 Circuit112 M874 NaN 11 N104 M643 Circuit2 M643 NaN 12 N104 M643 Circuit234 M643 NaN 13 N104 M304 Circuit11 M304 M304 XK, M304E5 # This one is taken from rows N101 M304 XK Circuit11 M304 and N102 M304E5 Circuit11 M304 </code></pre> <p>I have tried something like this, but it is not very efficient for a dataframe with more than 500k rows:</p> <pre><code>grouped = df.groupby('CIRCUIT').apply(lambda x: x['RING'].nunique() &gt; 1) def find_neighboring_cllis(row): circuit, ring, site = row['CIRCUIT'], row['RING'], row['SITE'] if grouped[circuit]: neighbors = df[(df['CIRCUIT'] == circuit) &amp; (df['RING'] != ring) &amp; (df['SITE'] == site)]['CLLI'].unique() if neighbors.size &gt; 0: return ', '.join(neighbors) return np.nan df['NeigbourCLLI'] = df.apply(find_neighboring_cllis, axis=1) </code></pre> <p>Is there a more &quot;pandas&quot; way to handle this?</p>
<python><python-3.x><pandas><dataframe>
2023-06-26 19:11:32
1
631
user14073111
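For the question above, one way to avoid scanning the whole frame once per row is a self-merge on (CIRCUIT, SITE): only rows sharing both columns can ever be neighbours, so the quadratic work is confined to each small group. A hedged sketch on a hand-copied subset of the sample data (the column spelling `NeigbourCLLI` is kept as in the question):

```python
import pandas as pd

# Small subset of the question's data, transcribed by hand
df = pd.DataFrame({
    "RING":    ["N100", "N101", "N101", "N101", "N102", "N104"],
    "CLLI":    ["M200", "M200 23", "M300", "M304 XK", "M304E5", "M304"],
    "CIRCUIT": ["Circuit1", "Circuit1", "Circuit1", "Circuit11", "Circuit11", "Circuit11"],
    "SITE":    ["M200", "M200", "M300", "M304", "M304", "M304"],
})

# Self-merge on (CIRCUIT, SITE): candidate neighbour pairs only,
# then keep pairs whose RING differs and join the partner CLLIs per row.
m = df.reset_index().merge(df, on=["CIRCUIT", "SITE"], suffixes=("", "_nb"))
m = m[m["RING"] != m["RING_nb"]]
nb = m.groupby("index")["CLLI_nb"].agg(lambda s: ", ".join(pd.unique(s)))
df["NeigbourCLLI"] = df.index.map(nb)  # rows with no partner become NaN

print(df.loc[0, "NeigbourCLLI"])  # M200 23
print(df.loc[3, "NeigbourCLLI"])  # M304E5, M304
```

The merge can still blow up if one (CIRCUIT, SITE) group is huge, but for many small groups it is far cheaper than the per-row `df.apply` filter.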
76,559,527
13,200,217
How can I install package in pip such that older packages will not break?
<p>I have a project where I am trying to limit how much I update packages (for stability). For example, I have an older version of <code>scipy</code>. This is specified in my requirements.txt file with a strict <code>==a.b.c</code> version.</p> <p>Since <code>scipy</code> at the specified version depends on certain (older) versions of some packages (e.g. <code>libgcc</code>), it broke when I installed some other packages that required newer versions of <code>libgcc</code> (specifically <a href="https://pypi.org/project/stripy/" rel="nofollow noreferrer"><code>stripy</code></a>). I fixed my environment by uninstalling <code>stripy</code> and <code>scipy</code>, then reinstalling <code>scipy</code> only. I assume what happened was that pip tried to get the newest version of <code>stripy</code>, while <code>scipy</code> was locked in place.</p> <p>The issue is, I still want to use <code>stripy</code>. Along with other packages, that might need newer dependencies for their newest available version. How can I tell pip that I don't need the newest version available, I just need one that's compatible with what I have in my current environment?</p>
<python><pip>
2023-06-26 19:01:05
0
353
Andrei Miculiศ›ฤƒ
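One standard answer to the question above is a pip constraints file: constraints cap the versions that any later `pip install` may choose, without themselves forcing anything to be installed. A hedged sketch of the workflow (shown as a command/config fragment; if no `stripy` release is compatible with the pins, pip errors out instead of silently upgrading a pinned dependency):

```shell
# Capture the current, working environment as constraints:
pip freeze > constraints.txt

# Later installs must respect those pins; pip picks the newest
# stripy that is compatible with them, or fails loudly:
pip install -c constraints.txt stripy
```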
76,559,456
825,227
Incorporate KeyboardInterrupt in Python script for API
<p>I have a python script that connects to an Interactive Brokers API and retrieves security contract information as below:</p> <pre><code>from ibapi.client import * from ibapi.wrapper import * import time class TestApp(EClient, EWrapper): def __init__(self): EClient.__init__(self, self) def contractDetails(self, reqId: int, contractDetails: ContractDetails): print(f&quot;contract details: {contractDetails}&quot;) def contractDetailsEnd(self, reqId: int): print(&quot;End of contractDetails&quot;) self.disconnect() def main(): try: app = TestApp() app.connect(&quot;127.0.0.1&quot;, 7496, 1000) myContract = Contract() # myContract.conId = &quot;503837634&quot; myContract.symbol = &quot;/ES&quot; myContract.exchange = &quot;CME&quot; myContract.secType = &quot;FUT&quot; myContract.currency = &quot;USD&quot; myContract.localSymbol = 'ESZ3' #myContract.primaryExchange = &quot;ISLAND&quot; time.sleep(3) app.reqContractDetails(1, myContract) app.run() except KeyboardInterrupt: print('Session completed') app.disconnect() if __name__ == &quot;__main__&quot;: main() </code></pre> <p>In cases where I need to break the process (e.g., the program doesn't end naturally because a contract with specified details isn't found), I've set it up using a try/catch and <code>KeyboardInterrupt</code> exception. 
This doesn't seem to get picked up so killing the process also kills the established connection and I'm forced to log in again manually.</p> <p>I've considered using the <code>signal</code> module but there doesn't seem to be a natural way to apply <a href="https://stackoverflow.com/questions/4205317/capture-keyboardinterrupt-in-python-without-try-except">code snippets</a> I've found to allow me to close the connection (<code>app.disconnect()</code>) before exiting.</p> <p>For example, I can implement a version of the linked to answer, as below, but there's no way to pass my app instance to be disconnected by the handler before exiting.</p> <pre><code>def handler(sig, frame): print('Exception detected...session completed') sys.exit(0) def main(): app = TestApp() app.connect(&quot;127.0.0.1&quot;, 7496, 1000) myContract = Contract() # myContract.conId = &quot;503837634&quot; myContract.symbol = '/ES' myContract.exchange = 'CME' myContract.secType = &quot;FUT&quot; myContract.currency = &quot;USD&quot; myContract.localSymbol = 'ESZ3' #myContract.primaryExchange = &quot;ISLAND&quot; time.sleep(3) app.reqContractDetails(1, myContract) app.run() signal.signal(signal.SIGINT, handler) signal.pause() app.disconnect() if __name__ == &quot;__main__&quot;: main() </code></pre> <p>Can anyone shed some light on how to accomplish this?</p>
<python><python-3.x><exception><keyboard-events>
2023-06-26 18:49:35
2
1,702
Chris
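For the question above, the handler can reach the app instance if it is defined as a closure (for example inside `main()`, after `app` is created), so nothing needs to be passed to `signal.signal`. A runnable sketch with a dummy stand-in for the IB client (`DummyApp` and its attributes are illustrative, not part of ibapi):

```python
import signal

class DummyApp:
    """Stand-in for TestApp; only the bits the handler needs."""
    def __init__(self):
        self.connected = True
    def disconnect(self):
        self.connected = False

app = DummyApp()

def handler(sig, frame):
    # Closure over `app`: the handler sees the instance directly.
    app.disconnect()
    raise SystemExit(0)

signal.signal(signal.SIGINT, handler)

# Simulate pressing Ctrl-C in this process:
try:
    signal.raise_signal(signal.SIGINT)
except SystemExit:
    pass

print(app.connected)  # False: the connection was closed before exiting
```

In the real script the same shape applies: create `app`, define `handler` right below it, register it with `signal.signal(signal.SIGINT, handler)` *before* `app.run()`. Wrapping `app.run()` in `try/finally` with `app.disconnect()` in the `finally` block is another common variant.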
76,559,449
8,162,211
Why does my QPixmap image appear more than once in my QMainWindow?
<p>I have very limited knowledge of PyQt5 but was given a script that I'm having some problems altering. At startup the script is designed to show a frame upon which an image appears, together with a menubar.</p> <p>The main part of the script at the bottom of the file is this:</p> <pre><code>def main(): global window app = QApplication([]) window = MainMenu() window.show() sys.exit(app.exec_()) if __name__ == '__main__': main() </code></pre> <p><code>MainMenu</code> itself is a class, defined earlier, which starts out as follows:</p> <pre><code>class MainMenu(QMainWindow): def __init__(self): super(MainMenu, self).__init__() self.window() layout = self.frame_layout() self.create_menubar() def window(self): screen_width, screen_height = pyautogui.size() self.setGeometry(0, 0, screen_width - 25, screen_height - 180) self.showMaximized() startup_background(self) </code></pre> <p>...and so on.</p> <p><code>startup_background</code> in the last line above is a function that is designed to create the initial background using a specified .png file. I tried to create this function using the following in the <code>startup_background</code> function:</p> <pre><code>pixmap = QPixmap(&quot;background.png&quot;).scaled(window.width(), window.height(), QtCore.Qt.KeepAspectRatio) palette = QPalette() palette.setBrush(QPalette.Background, QBrush(pixmap)) window.setPalette(palette) </code></pre> <p>The problem is that the background image, which is a simple multiple time series plot, essentially appears more than once, as shown by the &quot;chopped off&quot; right portion of the image below:</p> <p><a href="https://i.sstatic.net/DKJBr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DKJBr.png" alt="enter image description here" /></a></p> <p>I also tried using a label, replacing the palette lines above with a label:</p> <pre><code>label = QLabel(window) label.setPixmap(pixmap) </code></pre> <p>This produced nothing.</p>
<python><pyqt5>
2023-06-26 18:48:36
0
1,263
fishbacp
76,559,277
19,504,610
How to type `PyObject* const*` in Cython
<p>Let's take this as an example: <a href="https://docs.python.org/3/c-api/structures.html#c._PyCFunctionFastWithKeywords" rel="nofollow noreferrer">reference</a></p> <p>The definition of the type <code>_PyCFunctionFastWithKeywords</code> is:</p> <pre><code>PyObject *_PyCFunctionFastWithKeywords(PyObject *self, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames); </code></pre> <p>How do I type <code>PyObject *const *</code> in Cython?</p> <p>Doing this does not work:</p> <pre><code>ctypedef PyObject *(*_PyCFunctionFastWithKeywords)(PyObject *, PyObject *const *, Py_ssize_t, PyObject *) </code></pre>
<python><cython><python-extensions>
2023-06-26 18:16:59
0
831
Jim
76,559,257
16,383,578
Theoretically can the Ackermann function be optimized?
<p>I am wondering if there can be a version of Ackermann function with better time complexity than the standard variation.</p> <p>This is not a homework and I am just curious. I know the Ackermann function doesn't have any practical use besides as a performance benchmark, because of the deep recursion. I know the numbers grow very large very quickly, and I am not interested in computing it.</p> <p>Even though I use Python 3 and the integers won't overflow, I do have finite time, but I have implemented a version of it myself according to the definition found on <a href="https://en.wikipedia.org/wiki/Ackermann_function" rel="noreferrer">Wikipedia</a>, and computed the output for extremely small values, just to make sure the output is correct.</p> <p><a href="https://i.sstatic.net/vsPES.jpg" rel="noreferrer"><img src="https://i.sstatic.net/vsPES.jpg" alt="enter image description here" /></a></p> <pre><code>def A(m, n): if not m: return n + 1 return A(m - 1, A(m, n - 1)) if n else A(m - 1, 1) </code></pre> <p>The above code is a direct translation of the image, and is extremely slow, I don't know how it can be optimized, is it impossible to optimize it?</p> <p>One thing I can think of is to memoize it, but the recursion runs backwards, each time the function is recursively called the arguments were not encountered before, each successive function call the arguments decrease rather than increase, therefore each return value of the function needs to be calculated, memoization doesn't help when you call the function with different arguments the first time.</p> <p>Memoization can only help if you call it with the same arguments again, it won't compute the results and will retrieve cached result instead, but if you call the function with any input with (m, n) &gt;= (4, 2) it will crash the interpreter regardless.</p> <p>I also implemented another version according to this <a href="https://stackoverflow.com/a/20411205/16383578">answer</a>:</p> <pre><code>def ack(x, y): for i 
in range(x, 0, -1): y = ack(i, y - 1) if y else 1 return y + 1 </code></pre> <p>But it is actually slower:</p> <pre><code>In [2]: %timeit A(3, 4) 1.3 ms ยฑ 9.75 ยตs per loop (mean ยฑ std. dev. of 7 runs, 1,000 loops each) In [3]: %timeit ack(3, 4) 2 ms ยฑ 59.9 ยตs per loop (mean ยฑ std. dev. of 7 runs, 1,000 loops each) </code></pre> <p>Theoretically can Ackermann function be optimized? If not, can it be definitely proven that its time complexity cannot decrease?</p> <hr /> <p>I have just tested <code>A(3, 9)</code> and <code>A(4, 1)</code> will crash the interpreter, and the performance of the two functions for <code>A(3, 8)</code>:</p> <pre><code>In [2]: %timeit A(3, 8) 432 ms ยฑ 4.63 ms per loop (mean ยฑ std. dev. of 7 runs, 1 loop each) In [3]: %timeit ack(3, 8) 588 ms ยฑ 10.4 ms per loop (mean ยฑ std. dev. of 7 runs, 1 loop each) </code></pre> <hr /> <p>I did some more experiments:</p> <pre><code>from collections import Counter from functools import cache c = Counter() def A1(m, n): c[(m, n)] += 1 if not m: return n + 1 return A(m - 1, A(m, n - 1)) if n else A(m - 1, 1) def test(m, n): c.clear() A1(m, n) return c </code></pre> <p>The arguments indeed repeat.</p> <p>But surprisingly caching doesn't help at all:</p> <pre><code>In [9]: %timeit Ackermann = cache(A); Ackermann(3, 4) 1.3 ms ยฑ 10.1 ยตs per loop (mean ยฑ std. dev. of 7 runs, 1,000 loops each) </code></pre> <p>Caching only helps when the function is called with the same arguments again, as explained:</p> <pre><code>In [14]: %timeit Ackermann(3, 2) 101 ns ยฑ 0.47 ns per loop (mean ยฑ std. dev. of 7 runs, 10,000,000 loops each) </code></pre> <p>I have tested it with different arguments numerous times, and it always gives the same efficiency boost (which is none).</p>
<python><python-3.x><algorithm><ackermann>
2023-06-26 18:13:11
7
3,930
ฮžฮญฮฝฮท ฮ“ฮฎฮนฮฝฮฟฯ‚
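On the memoization point in the question above: `cache(A)` wraps `A` after the fact, but the recursive calls inside `A`'s body still go to the undecorated `A`, so the cache is only ever consulted for the top-level call. That is why no speed-up was observed. Decorating at definition time does help, and for m &le; 3 the recursion can be skipped entirely using the well-known closed forms A(1,n)=n+2, A(2,n)=2n+3, A(3,n)=2^(n+3)-3. A sketch:

```python
from functools import cache

@cache
def ack(m, n):
    # Closed forms for small m collapse the deep recursion entirely.
    if m == 0:
        return n + 1
    if m == 1:
        return n + 2
    if m == 2:
        return 2 * n + 3
    if m == 3:
        return 2 ** (n + 3) - 3
    # For m >= 4, recurse; the @cache decorator is consulted here
    # because `ack` now names the cached wrapper.
    return ack(m - 1, ack(m, n - 1)) if n else ack(m - 1, 1)

# Reference: the direct definition, for cross-checking small values
def A(m, n):
    if not m:
        return n + 1
    return A(m - 1, A(m, n - 1)) if n else A(m - 1, 1)

print(ack(3, 4))  # 125, instantly
print(all(ack(m, n) == A(m, n) for m in range(4) for n in range(6)))  # True
print(ack(4, 1))  # 65533, infeasible for the naive recursion
```

This does not change the asymptotics of the function itself (A(4, 2) still has about 19,000 digits, and values grow beyond any primitive recursive bound), but it removes the exponential call-tree for every input whose m is small.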
76,558,860
15,593,152
How to improve the time of execution of this for loop?
<p>I have a DF called &quot;zone&quot; with <code>x</code> and <code>y</code> columns as integers, that can be interpreted as the position of points. I need to compute the number of first and second neighbors, and I have written this:</p> <pre><code>import numpy as np import pandas as pd data = np.random.randint(1000,6000,size=(600000,2)) zone = pd.DataFrame(data, columns=['x', 'y']).drop_duplicates() a=[] for i,row in zone.iterrows(): x = row.x y = row.y num_1st_neigh = len(zone[(zone.x&gt;=(x-1))&amp;(zone.x&lt;=(x+1))&amp;(zone.y&gt;=(y-1))&amp;(zone.y&lt;=(y+1))])-1 num_2nd_neigh = (len(zone[(zone.x&gt;=(x-2))&amp;(zone.x&lt;=(x+2))&amp;(zone.y&gt;=(y-2))&amp;(zone.y&lt;=(y+2))])-1)\ -(num_1st_neigh) a.append([i,num_1st_neigh,num_2nd_neigh]) a = pd.DataFrame(a, columns = ['index','num_1st_neigh','num_2nd_neigh']) zzz = zone.reset_index().merge(a,on='index') </code></pre> <p>This works, but it takes 15 s on 3K points; I have 1M points and it's still running after 2 h. Any ideas on how I can improve the execution speed?</p> <p>I have read that iterrows is very slow, but I don't know how else I could do it.</p> <p>EDIT: I also tried the same thing using SQL, but the execution time is also &gt;2 h and the query times out:</p> <pre><code>SELECT t0.x, t0.y, count_if(greatest(abs(t0.x-t1.x), abs(t0.y-t1.y)) = 1) num_1_neighbors, count_if(greatest(abs(t0.x-t1.x), abs(t0.y-t1.y)) = 2) num_2_neighbors FROM &quot;table&quot; t0 left join &quot;table&quot; t1 on t1.x between t0.x -2 and t0.x + 2 and t1.y between t0.y -2 and t0.y + 2 and ( t1.x &lt;&gt; t0.x or t1.y &lt;&gt; t0.y ) group by 1,2 </code></pre> <p>Any ideas using either SQL or pandas are very welcome.</p>
<python><sql><pandas><count><nearest-neighbor>
2023-06-26 17:10:34
2
397
ElTitoFranki
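For the neighbor counts in the question above, the per-row DataFrame filter is what costs O(n) per point. Since coordinates are integers and duplicates are dropped, a set of (x, y) tuples turns each point into at most 8 + 16 membership tests, O(n) overall. A hedged sketch (smaller random data for illustration; `scipy.spatial.cKDTree` would be another option):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
zone = pd.DataFrame(rng.integers(0, 30, size=(500, 2)),
                    columns=["x", "y"]).drop_duplicates()

# One hash-set lookup per candidate offset, instead of filtering
# the whole frame once per row.
pts = set(zip(zone["x"].tolist(), zone["y"].tolist()))

def count_ring(x, y, r):
    # Points at Chebyshev distance exactly r (matches the question's
    # "box minus inner box" definition of 1st and 2nd neighbors).
    return sum(
        (x + dx, y + dy) in pts
        for dx in range(-r, r + 1)
        for dy in range(-r, r + 1)
        if max(abs(dx), abs(dy)) == r
    )

zone["num_1st_neigh"] = [count_ring(x, y, 1) for x, y in zip(zone["x"], zone["y"])]
zone["num_2nd_neigh"] = [count_ring(x, y, 2) for x, y in zip(zone["x"], zone["y"])]
```

On 1M points this is roughly 24M set lookups, which should finish in seconds rather than hours.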
76,558,853
2,441,776
ctypes opens a DLL, calling a function returns AttributeError: function not found, but function exists in the DLL
<p>I'm trying to use a third-party DLL in my code and I have to call a function from that DLL using Python. According to documentation, the DLL has got just 4 functions, and none of those can be called from my Python code. I've tried both x86 and x64 DLLs (with the corresponding Python distribution). I tried Python 3.7, 3.9, 3.10 and 3.11, all those cannot call any of the functions.</p> <pre><code>from ctypes import cdll sdk = cdll.LoadLibrary(&quot;./Library.dll&quot;) get_data = sdk.sdk_get_data print(get_data) </code></pre> <p>This returns an error:</p> <pre><code> File &quot;D:\sandbox\ะตัƒั‹ะต.py&quot;, line 7, in &lt;module&gt; get_data = sdk.sdk_get_data File &quot;C:\Python39-64\lib\ctypes\__init__.py&quot;, line 387, in __getattr__ func = self.__getitem__(name) File &quot;C:\Python39-64\lib\ctypes\__init__.py&quot;, line 392, in __getitem__ func = self._FuncPtr((name_or_ordinal, self)) AttributeError: function 'sdk_get_data' not found </code></pre> <p>I've used lucasg's <a href="https://github.com/lucasg/Dependencies" rel="nofollow noreferrer">Dependencies</a> tool to make sure the function documented exists in the DLL and here is the output:</p> <pre><code>1 (0x0001), (0x), int __cdecl sdk_get_data(char * __ptr64,int,unsigned short * __ptr64,unsigned char * __ptr64), 0x00001b30, Microsoft </code></pre> <p>How do I call functions from this DLL? Perhaps writing a wrapper in C++ is necessary?</p>
<python><dll><ctypes>
2023-06-26 17:09:23
1
375
Megan Caithlyn
76,558,805
12,568,761
What's the deal with np.memmap
<p>I recently found <a href="https://discuss.pytorch.org/t/quickly-loading-raw-binary-files/797" rel="nofollow noreferrer">this post on the pytorch messageboard</a> which suggested using <a href="https://numpy.org/doc/stable/reference/generated/numpy.memmap.html" rel="nofollow noreferrer">np.memmap</a> to load in large .bin files. I am more accustomed to using <a href="https://numpy.org/doc/stable/reference/generated/numpy.fromfile.html" rel="nofollow noreferrer">np.fromfile</a> to do this sort of thing (in fact the documentation says it's a &quot;highly efficient way of reading binary data with a known data-type&quot;).<br /> So I tried a time comparison between the two functions:</p> <pre><code>def read1(fname_bin): return np.memmap(fname_bin, dtype=np.float32, mode='r+').__array__() </code></pre> <p>versus</p> <pre><code>def read2(fname_bin): return np.fromfile(fname_bin, dtype=np.float32) </code></pre> <p>I then tried loading in 1,000 arrays with around 400,000 elements - the time comparison is below:</p> <pre><code>| method  | mean (ms) | median (ms) | std (ms) |
| :-----: | :-------: | :---------: | :------: |
| read1() | 0.6364    | 0.6204      | 0.0566   |
| read2() | 0.9435    | 0.7607      | 0.3729   |
</code></pre> <p>Which is great, and the CPU usage is lower as well. But elsewhere I've read that maybe this operation is creating a separate file somewhere?<br /> Maybe the problem is that I don't really understand the difference between a memory map and reading a file in from disk.<br /> My main question is - can I safely use np.memmap to load in large .bin files as numpy arrays?</p>
<python><numpy><memory>
2023-06-26 17:01:57
1
550
tiberius
76,558,656
5,942,100
Create a new group by taking unique characters before a colon, and create aggregations on the output using Pandas
<p>I wish to group by unique characters before the first colon and sum</p> <p><strong>Data</strong></p> <pre><code>Box FALSE TRUE DDD8:0Y:1C611:100 1 2 DDD8:0Y:1C711:107 2 1 DDD8:0Y:1C711:109 3 5 AAS0:1T:1F500A:001 1 4 AAS0:1T:1F500A:002 2 2 AAS0:1T:1F500A:005 0 3 AAS0:1T:1F500A:005 2 3 </code></pre> <p><strong>Desired</strong></p> <pre><code> Box FALSE TRUE DDD8 6 8 AAS0 5 12 </code></pre> <p><strong>Doing</strong></p> <p>I am using str.split(':') in conjunction with groupby</p> <pre><code>df['Box'] = df['Box'].str.split(':').str[0] df.groupby('Box').sum() </code></pre> <p>However the final output labeling is not being produced. Any suggestion is appreciated.</p>
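The grouping described above can be sketched like this (column names taken from the sample data; `as_index=False` is what keeps the `Box` label as a regular column in the output):

```python
import pandas as pd

df = pd.DataFrame({
    "Box": ["DDD8:0Y:1C611:100", "DDD8:0Y:1C711:107", "AAS0:1T:1F500A:001"],
    "FALSE": [1, 2, 1],
    "TRUE": [2, 1, 4],
})

# Keep only the prefix before the first colon, then aggregate on it.
out = (
    df.assign(Box=df["Box"].str.split(":").str[0])
      .groupby("Box", sort=False, as_index=False)
      .sum()
)
```

Without `as_index=False`, the `Box` values move into the index, which is the usual reason the labeling seems to disappear.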
<python><pandas><numpy>
2023-06-26 16:39:40
1
4,428
Lynn
76,558,507
4,494,781
Issue with handling ImportError
<p>I have a piece of code that has 2 fallback options in case the preferred solution does not work. The reason for this is that the preferred version of the algorithm depends on OpenSSL being compiled with support for RIPEMD160, which is no longer the case by default as of OpenSSL 3.0, and the second preferred option depends on an additional package that I didn't put in my requirements, and the final fallback option is a slow pure-python implementation.</p> <p>This is a minimal piece of code to illustrate the issue:</p> <pre><code>import hashlib assert issubclass(ModuleNotFoundError, ImportError) def sha256(public_key: bytes) -&gt; bytes: return bytes(hashlib.sha256(public_key).digest()) def hash_160(public_key: bytes) -&gt; bytes: sha256_hash = sha256(public_key) try: md = hashlib.new(&quot;ripemd160&quot;) md.update(sha256_hash) return md.digest() except ValueError: from Crypto.Hash import RIPEMD160 md = RIPEMD160.new() md.update(sha256_hash) return md.digest() except ImportError: # from . 
import ripemd # md = ripemd.new(sha256_hash) # return md.digest() return b&quot;&quot; print(hash_160(b&quot;very test&quot;)) </code></pre> <p>The error message is exactly the one I expect, yet the second <code>except</code> clause fails to suppress the <code>ModuleNotFoundError</code>:</p> <pre><code>Traceback (most recent call last): File &quot;/usr/lib/python3.10/hashlib.py&quot;, line 160, in __hash_new return _hashlib.new(name, data, **kwargs) ValueError: [digital envelope routines] unsupported During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/home/pierre/dev/test_package/./package1/tests/module1.py&quot;, line 14, in hash_160 md = hashlib.new(&quot;ripemd160&quot;) File &quot;/usr/lib/python3.10/hashlib.py&quot;, line 166, in __hash_new return __get_builtin_constructor(name)(data) File &quot;/usr/lib/python3.10/hashlib.py&quot;, line 123, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type ripemd160 During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/home/pierre/dev/test_package/./package1/tests/module1.py&quot;, line 31, in &lt;module&gt; print(hash_160(b&quot;very test&quot;)) File &quot;/home/pierre/dev/test_package/./package1/tests/module1.py&quot;, line 18, in hash_160 from Crypto.Hash import RIPEMD160 ModuleNotFoundError: No module named 'Crypto' </code></pre> <p>What am I missing here?</p>
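For illustration, here is the rule at work in a stripped-down form (no hashing, just the control flow): an exception raised *inside* an `except` block is never caught by the remaining sibling `except` clauses of the same `try`; each fallback needs its own nested `try`.

```python
def fallback_chain():
    try:
        raise ValueError("preferred option unavailable")   # stand-in for hashlib failing
    except ValueError:
        try:
            # stand-in for `from Crypto.Hash import RIPEMD160` failing
            raise ModuleNotFoundError("No module named 'Crypto'")
        except ImportError:  # ModuleNotFoundError is a subclass of ImportError
            return "pure-python fallback"

result = fallback_chain()
```

The sibling `except ImportError` in the question only guards exceptions raised by the `try` body itself, not by the other handlers.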
<python><error-handling>
2023-06-26 16:17:55
1
1,105
PiRK
76,558,490
1,608,327
Why can all() not accept an async generator?
<p>Fairly new to async things in Python and trying to understand it more deeply and ran into this seeming inconsistency I'm trying to understand.</p> <p>Given this setup:</p> <pre><code>async def return_true(): # In my actual use case this executes a DB query that returns True/False return True async def async_range(count): # In my actual use case this is an iterator as the result of a DB query that may stream results for i in range(count): yield(i) await asyncio.sleep(0.0) </code></pre> <p>When I run this:</p> <pre><code>any(await return_true() async for i in async_range(10)) </code></pre> <p>I get this error:</p> <pre><code>TypeError: 'async_generator' object is not iterable </code></pre> <p>When I change it to <code>any([await return_true() async for i in async_range(10)])</code> it runs without issue but in my non-toy example means I have to wait for all the DB queries to return when I might not actually care about all of them since the first one to return True means the <code>any()</code> will return True.</p> <p><strong>So, my question is, is this expected?</strong> Is this just something that Python hasn't gotten around to implementing in an async compatible method yet (are there plans to do that?) or is there a separate library I should be using that implements these built-ins in async compatible ways?</p> <p>P.S. I did find <a href="https://stackoverflow.com/a/39407084/1608327">this answer</a> that seems to implement the behavior I'm looking for, but does so in a much more verbose way.</p>
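For what it's worth, this is expected: the sync built-ins only accept sync iterables, and the standard library has no async `any()` (third-party libraries such as `aiostream` or `asyncstdlib` provide one). A hand-rolled short-circuiting version is only a few lines — a sketch, assuming nothing beyond the question's setup:

```python
import asyncio

async def aany(aiterable):
    # Async counterpart of any(): stops consuming as soon as a truthy
    # item appears, so later "queries" are never awaited.
    async for item in aiterable:
        if item:
            return True
    return False

async def async_range(count):
    for i in range(count):
        yield i
        await asyncio.sleep(0)

async def main():
    # Async generator expressions are allowed as arguments here.
    return await aany(i == 3 async for i in async_range(10))

result = asyncio.run(main())
```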
<python><python-asyncio>
2023-06-26 16:16:03
1
816
Kenny Loveall
76,558,443
10,962,766
Column remains empty when using .map() with dictionary in pandas dataframe
<p>I have a dictionary with <code>key:value</code> pairs that I am reading from Github as a raw text file.</p> <p>Dict-structure:</p> <pre><code>{&quot;Absetzung&quot;:50, &quot;Amtsantritt&quot;:42, &quot;Amtseinfรผhrung&quot;:41, &quot;Aufenthalt&quot;:20, &quot;Aufnahme&quot;:20, &quot;Aufschwรถrung&quot;:20} </code></pre> <p>Column names in dataframe:</p> <pre><code>column_names = [&quot;factoid_ID&quot;, &quot;pers_ID&quot;, &quot;pers_name&quot;, &quot;alternative_names&quot;, &quot;event_type&quot;, &quot;event_after-date&quot;, &quot;event_before-date&quot;, &quot;event_start&quot;, &quot;event_end&quot;, &quot;event_date&quot;, &quot;pers_title&quot;, &quot;pers_function&quot;, &quot;place_name&quot;, &quot;inst_name&quot;, &quot;rel_pers&quot;, &quot;source_quotations&quot;, &quot;additional_info&quot;, &quot;comment&quot;, &quot;info_dump&quot;, &quot;source_combined&quot;, &quot;event_value&quot;, &quot;source&quot;, &quot;source_site&quot;] </code></pre> <p>After converting them to valid dictionaries, I can access the data by calling a key. But I cannot use the <code>.map()</code> function for dataframes.</p> <p>The dataframe column to which I want to assign the dictionary values remains empty. 
Perhaps an issue is that the key comes from a different column.</p> <p>Here is my code:</p> <pre><code># importing the module import requests import ast master = &quot;https://raw.githubusercontent.com/###/###/main/###/Event_value_dict.txt&quot; req = requests.get(master) req = req.text print(req) # reconstructing the data as a dictionary event_value_dict = ast.literal_eval(req) print(type(event_value_dict)) # add event values from dict to data frame try: test = event_value_dict[&quot;Aufschwรถrung&quot;] # random test if valid dict print(&quot;Value for chosen key: &quot;, test) except: print(&quot;Invalid dict structure!&quot;) df3['event_value'] = df3['event_type'].map(event_value_dict) # optional: na_action='ignore' df3.sort_values(by =['event_after-date','event_start','event_before-date', 'event_end', 'event_value']) display(df3) </code></pre>
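For what it's worth, `.map()` itself happily takes keys from a different column — an all-empty (NaN) result almost always means the keys don't match *exactly*. A sketch with a made-up frame (stray whitespace is a common culprit in data pulled from files, so the column is normalised before mapping):

```python
import pandas as pd

event_value_dict = {"Absetzung": 50, "Amtsantritt": 42, "Aufnahme": 20}

df3 = pd.DataFrame({"event_type": ["Absetzung", "Aufnahme ", "Amtsantritt"]})

# Strip whitespace before mapping; any still-unmatched key comes back as NaN.
df3["event_value"] = df3["event_type"].str.strip().map(event_value_dict)
```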
<python><pandas><dictionary>
2023-06-26 16:08:41
1
498
OnceUponATime
76,558,315
9,571,463
Remove and Understand two Particular Substrings from a RegEx in Python
<p>I have a string like the following:</p> <pre><code>s: str = 'ENGBNAmphitheatre Pkwy\x1e\x1eNASY%am|fi|te|&quot;a|tre;ENGY*&quot;{m|fI|%Ti|t@r;ENGY&quot;{m|fI|%Ti|t@r *&quot;pArk|%we;NASY%am|fi|te|&quot;a|tre &quot;par|kwei' </code></pre> <p>I want to extract a string between two substring patterns using compound Regex (without using two separate replace commands).</p> <p>Below is an explanation of which strings I'd like to replace with blanks, thus leaving the middle section which is what I want to extract.</p> <pre><code># Replace leading 'ENGBN' with '' (blank) pat1: str = &quot;ENGBN&quot; # Replace this specific substring (that has special characters such as \x1e) with '' (blank) pat2: str = &quot;&quot;&quot;\x1e\x1eNASY%am|fi|te|&quot;a|tre;ENGY*&quot;{m|fI|%Ti|t@r;ENGY&quot;{m|fI|%Ti|t@r *&quot;pArk|%we;NASY%am|fi|te|&quot;a|tre &quot;par|kwei&quot;&quot;&quot; </code></pre> <p>Desired Result:</p> <pre><code>result: str = &quot;Amphitheatre Pkwy&quot; </code></pre> <p>I'm trying to learn more about Regex in general, so an explanation of what the proposed single pattern is doing is also desired.</p>
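One way to express both removals as a single pattern is alternation (`|`): `^ENGBN` anchors the literal prefix at the start, and `\x1e.*$` matches from the first RS control character to the end (`re.S` lets `.` also cross any embedded newlines). A sketch on a shortened version of the string:

```python
import re

s = 'ENGBNAmphitheatre Pkwy\x1e\x1eNASY%am|fi|te|"a|tre;ENGY*"{m|fI|%Ti|t@r'

# ^ENGBN   -> the literal prefix, anchored at the start of the string
# \x1e.*$  -> the first \x1e control character and everything after it
result = re.sub(r'^ENGBN|\x1e.*$', '', s, flags=re.S)
```

`re.sub` replaces every non-overlapping match of either alternative with the empty string, leaving only the middle section.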
<python><regex>
2023-06-26 15:53:21
2
1,767
Coldchain9
76,558,298
16,922,748
Extracting multiple info from string to create new columns
<p>I have the following dataframe</p> <pre><code>df = pd.DataFrame({'info':{0:'1cr1:782906:F:He:1:Ho1:0:Ho2:0', 1:'5cr1:782946:G:He:1:Ho1:0:Ho2:0'}}) </code></pre> <p>that looks like this</p> <pre><code> info 0 1cr1:782906:F:He:1:Ho1:0:Ho2:0 1 5cr1:782946:G:He:1:Ho1:0:Ho2:0 </code></pre> <p>I'd like to extract the info between the 1st &amp; 2nd colon, 4th &amp; 5th , 6th and 7th and info after the 7th colon and append to new columns</p> <p>The result should look like this.</p> <pre><code> info A B C D 0 1cr1:782906:F:He:1:Ho1:0:Ho2:0 782906 1 0 0 1 5cr1:782946:G:He:1:Ho1:0:Ho2:0 782946 1 0 0 </code></pre> <p>I believe the following should give me the 1st three columns but I'm getting an expected a 1D array error and I'm unsure how to account for the 4th column in the regular expression</p> <pre><code>df['A','B','C'] = df['info'].str.extract(r'(:(\d*):)', expand=True) </code></pre> <p><a href="https://regex101.com/r/83aj0l/1" rel="nofollow noreferrer">https://regex101.com/r/83aj0l/1</a></p>
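Reading the desired output, the four targets look like fields 2, 5, 7 and the final field of the colon-separated string. `str.extract` maps one capture group to one column, so a four-group pattern assigned to a list of columns avoids the 1-D error (the attempt's outer parentheses created a second, unwanted group):

```python
import pandas as pd

df = pd.DataFrame({'info': ['1cr1:782906:F:He:1:Ho1:0:Ho2:0',
                            '5cr1:782946:G:He:1:Ho1:0:Ho2:0']})

# One bare [^:]* per skipped field, one (...) per wanted column.
pat = r'^[^:]*:([^:]*):[^:]*:[^:]*:([^:]*):[^:]*:([^:]*):[^:]*:(.*)$'
df[['A', 'B', 'C', 'D']] = df['info'].str.extract(pat)
```

Note that `str.extract` returns strings; cast with `.astype(int)` afterwards if numeric columns are needed.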
<python><pandas><regex><dataframe>
2023-06-26 15:50:50
3
315
newbzzs
76,558,205
3,247,006
What is `@stringfilter` in Django?
<p>I sometimes see <code>@stringfilter</code> with <a href="https://docs.djangoproject.com/en/4.2/howto/custom-template-tags/#django.template.Library.filter" rel="nofollow noreferrer">@register.filter</a>.</p> <p>So, I created <code>test</code> filter with <code>@stringfilter</code> as shown below:</p> <pre class="lang-py prettyprint-override"><code># &quot;templatetags/custom_tags.py&quot; from django.template import Library from django.template.defaultfilters import stringfilter register = Library() @register.filter(name=&quot;test&quot;) @stringfilter # Here def test_filter(num1, num2): return </code></pre> <p>But, it accepted <code>int</code> type values without error as shown below:</p> <pre><code># &quot;templates/index.html&quot; {% load custom_tags %} {{ 3|test:7 }} # Here </code></pre> <p>I thought that <code>@stringfilter</code> only accepts <code>str</code> type values giving error for the other types.</p> <p>So, what is <code>@stringfilter</code> in Django?</p>
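`@stringfilter` does not *reject* non-string values — it silently converts the filter's first argument to `str` before your function runs (the real decorator in `django.template.defaultfilters` additionally takes care of Django's safe-string marking). That is why passing `3` works without error. A simplified, framework-free sketch of the idea:

```python
def stringfilter(func):
    # Simplified: coerce the first argument to str, pass the rest through.
    def wrapper(value, *args, **kwargs):
        return func(str(value), *args, **kwargs)
    return wrapper

@stringfilter
def shout(value):
    return value.upper()   # safe: value is always a str here

a = shout("abc")
b = shout(3)   # the int 3 arrives inside shout() as the string "3"
```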
<python><django><django-templates><frontend><django-template-filters>
2023-06-26 15:38:24
1
42,516
Super Kai - Kazuya Ito
76,558,192
12,415,855
Get email from website-link which only can be seen when its manually clicked?
<p>i would like to get the email-address from this site: <a href="https://irglobal.com/advisor/angus-forsyth" rel="nofollow noreferrer">https://irglobal.com/advisor/angus-forsyth</a></p> <p>I tried it with the following code:</p> <pre><code>import time import os from bs4 import BeautifulSoup from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.ui import WebDriverWait from webdriver_manager.chrome import ChromeDriverManager if __name__ == '__main__': WAIT = 1 print(f&quot;Checking Browser driver...&quot;) os.environ['WDM_LOG'] = '0' options = Options() options.add_argument(&quot;start-maximized&quot;) options.add_experimental_option(&quot;prefs&quot;, {&quot;profile.default_content_setting_values.notifications&quot;: 1}) options.add_experimental_option(&quot;excludeSwitches&quot;, [&quot;enable-automation&quot;]) options.add_experimental_option('excludeSwitches', ['enable-logging']) options.add_experimental_option('useAutomationExtension', False) options.add_argument('--disable-blink-features=AutomationControlled') srv=Service(ChromeDriverManager().install()) driver = webdriver.Chrome (service=srv, options=options) waitWD = WebDriverWait (driver, 10) link = &quot;https://irglobal.com/advisor/angus-forsyth&quot; print(f&quot;Working for {link}&quot;) driver.get (link) time.sleep(WAIT) soup = BeautifulSoup (driver.page_source, 'lxml') tmp = soup.find(&quot;a&quot;, {&quot;class&quot;: &quot;btn email&quot;}) print(tmp.prettify()) driver.quit() </code></pre> <p>But i canยดt see any email in this html-tag:</p> <pre><code>(selenium) C:\DEV\Fiverr\TRY\saschanielsen&gt;python tmp2.py Checking Browser driver... 
Working for https://irglobal.com/advisor/angus-forsyth &lt;a class=&quot;btn email&quot; data-id=&quot;103548&quot; href=&quot;#&quot;&gt; &lt;svg aria-hidden=&quot;true&quot; class=&quot;svg-inline--fa fa-envelope&quot; data-fa-i2svg=&quot;&quot; data-icon=&quot;envelope&quot; data-prefix=&quot;fas&quot; focusable=&quot;false&quot; role=&quot;img&quot; viewbox=&quot;0 0 512 512&quot; xmlns=&quot;http://www.w3.org/2000/svg&quot;&gt; &lt;path d=&quot;M48 64C21.5 64 0 85.5 0 112c0 15.1 7.1 29.3 19.2 38.4L236.8 313.6c11.4 8.5 27 8.5 38.4 0L492.8 150.4c12.1-9.1 19.2-23.3 19.2-38.4c0-26.5-21.5-48-48-48H48zM0 176V384c0 35.3 28.7 64 64 64H448c35.3 0 64-28.7 64-64V176L294.4 339.2c-22.8 17.1-54 17.1-76.8 0L0 176z&quot; fill=&quot;currentColor&quot;&gt; &lt;/path&gt; &lt;/svg&gt; &lt;!-- &lt;i class=&quot;fas fa-envelope&quot;&gt;&lt;/i&gt; --&gt; &lt;/a&gt; </code></pre> <p>When i click on the button manually on the site:</p> <p><a href="https://i.sstatic.net/noa4B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/noa4B.png" alt="enter image description here" /></a></p> <p>i can see the email-address in the opened email-program:</p> <p><a href="https://i.sstatic.net/KBFJC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KBFJC.png" alt="enter image description here" /></a></p> <p>How can i get this email-address?</p> <p>This should now only work for the specific link: <a href="https://irglobal.com/advisor/angus-forsyth" rel="nofollow noreferrer">https://irglobal.com/advisor/angus-forsyth</a></p> <p>This should also work for any person on this site - so i need the information which is behind this mail-icon: <a href="https://irglobal.com/advisor/ns-shastri/" rel="nofollow noreferrer">https://irglobal.com/advisor/ns-shastri/</a> <a href="https://irglobal.com/advisor/adriana-posada/" rel="nofollow noreferrer">https://irglobal.com/advisor/adriana-posada/</a> etc.</p>
<python><selenium-webdriver><beautifulsoup>
2023-06-26 15:36:37
1
1,515
Rapid1898
76,558,177
12,827,931
Numpy tile for complex transformation
<p>Suppose there's an array</p> <pre><code>arr = np.array([[1,2,3], [4,5,6]]) </code></pre> <p>My goal is to repeat <code>N = 2</code> times the elements that are on <code>axis = 0</code>, so the desired output is</p> <pre><code>array([[[1., 4.], [1., 4.]], [[2., 5.], [2., 5.]], [[3., 6.], [3., 6.]]]) </code></pre> <p>I've tried <code>np.ones((2,2))*arr.T[:,:,None]</code>, but the output differs</p> <pre><code>array([[[1., 1.], [4., 4.]], [[2., 2.], [5., 5.]], [[3., 3.], [6., 6.]]]) </code></pre> <p>Is there an easy way to fix that? I guess it's a matter of transposition, but I'm not sure how to achieve that.</p> <p><strong>EDIT:</strong> I've found the answer. It is:</p> <pre><code>np.transpose(np.ones((2,2))*arr.T[:,:,None], axes = tuple([0, 2, 1])) </code></pre>
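For reference, the same result can be produced in one step without the multiply-then-transpose trick — add a length-1 middle axis to `arr.T` and repeat along it (a sketch; `np.broadcast_to` would give a read-only view instead of a copy):

```python
import numpy as np

arr = np.array([[1, 2, 3],
                [4, 5, 6]])
N = 2

# arr.T has shape (3, 2); insert a middle axis -> (3, 1, 2); repeat it N times.
out = np.repeat(arr.T[:, None, :], N, axis=1)
```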
<python><arrays><numpy>
2023-06-26 15:34:34
2
447
thesecond
76,557,990
9,211,026
Separate Flask app users using the same function
<p>I'm creating an app that allows users to get weather events sent directly to their access control/intrusion system. It works so far, but I need to allow multiple people with different system locations to request data based on their location. I have a function that takes the address, requests the weather alerts and then sends them via API to the system without a problem. How do I allow that function to be accessed and run by multiple users without causing an issue? If I use something like Flask-login, will it allow separate users to use this function at the same time?</p>
<python><flask><flask-login>
2023-06-26 15:11:48
0
2,152
cmrussell
76,557,975
4,186,430
select a subkey in a dictionary in Python
<p>I'm working with a dictionary with multiple keys, like the following:</p> <pre><code>import pandas as pd dict = {'Mouses': {'White': {'Small': 1, 'Big': 2}, 'Black': {'Small': 3, 'Big': 4} }, 'Cats': {'White': {'Small': 5, 'Big': 6}, 'Black': {'Small': 7, 'Big': 8} }, 'Dogs': {'White': {'Small': 9, 'Big': 1}, 'Black': {'Small': 2, 'Big': 3} } } </code></pre> <p>If I want to get the number of <code>'Big'-'White'-'Cats'</code>, it's simple. But what if I want all the <code>'White'-'Cats'</code>? Or All the <code>'Dogs'</code>? Is there a way to easily access all values within a given subkey? A sort of query of subkeys with a sum over all values. Thanks in advance.</p>
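A small recursive helper covers every level at once — index down to the subtree you care about, then sum all leaves below it (a sketch; the dictionary is renamed `animals` because `dict` shadows the built-in type):

```python
animals = {
    'Cats': {'White': {'Small': 5, 'Big': 6},
             'Black': {'Small': 7, 'Big': 8}},
    'Dogs': {'White': {'Small': 9, 'Big': 1},
             'Black': {'Small': 2, 'Big': 3}},
}

def total(node):
    # Sum every leaf value below a dict (or return the leaf itself).
    if isinstance(node, dict):
        return sum(total(v) for v in node.values())
    return node

white_cats = total(animals['Cats']['White'])  # all the White Cats
all_dogs = total(animals['Dogs'])             # all the Dogs
```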
<python><dictionary><sum><key>
2023-06-26 15:09:59
1
641
urgeo
76,557,910
992,687
How to explode dataframe after groupby aggregation?
<p>A typical dataframe after a groupby might look like:</p> <pre><code>import polars as pl pl.DataFrame( [ pl.Series(&quot;chromosome&quot;, ['chr1', 'chr1', 'chr1', 'chr1'], dtype=pl.Utf8), pl.Series(&quot;starts&quot;, [10, 1, 4, 7], dtype=pl.Int64), pl.Series(&quot;ends&quot;, [11, 4, 5, 8], dtype=pl.Int64), pl.Series(&quot;starts_right&quot;, [[6, 8], [0, 5], [5, 0], [6, 5]], dtype=pl.List(pl.Int64)), pl.Series(&quot;ends_right&quot;, [[10, 9], [2, 7], [7, 2], [10, 7]], dtype=pl.List(pl.Int64)), ] ) shape: (4, 5) โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” โ”‚ chromosome โ”† starts โ”† ends โ”† starts_right โ”† ends_right โ”‚ โ”‚ --- โ”† --- โ”† --- โ”† --- โ”† --- โ”‚ โ”‚ str โ”† i64 โ”† i64 โ”† list[i64] โ”† list[i64] โ”‚ โ•žโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ก โ”‚ chr1 โ”† 10 โ”† 11 โ”† [6, 8] โ”† [10, 9] โ”‚ โ”‚ chr1 โ”† 1 โ”† 4 โ”† [0, 5] โ”† [2, 7] โ”‚ โ”‚ chr1 โ”† 4 โ”† 5 โ”† [5, 0] โ”† [7, 2] โ”‚ โ”‚ chr1 โ”† 7 โ”† 8 โ”† [6, 5] โ”† [10, 7] โ”‚ โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ </code></pre> <p>How do I explode the data frame in the least expensive way? So that each scalar entry is repeated twice and each item in each list is listed once as a scalar value, on its own row. 
I guess this operation is so common it should be built in somehow.</p> <p>I.e.</p> <pre><code>โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” โ”‚ chromosome โ”† starts โ”† ends โ”† starts_right โ”† ends_right โ”‚ โ”‚ --- โ”† --- โ”† --- โ”† --- โ”† --- โ”‚ โ”‚ str โ”† i64 โ”† i64 โ”† i64 โ”† i64 โ”‚ โ•žโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ก โ”‚ chr1 โ”† 10 โ”† 11 โ”† 6 โ”† 10 โ”‚ โ”‚ chr1 โ”† 10 โ”† 11 โ”† 8 โ”† 9 โ”‚ โ”‚ chr1 โ”† 1 โ”† 4 โ”† 0 โ”† 2 โ”‚ โ”‚ chr1 โ”† 1 โ”† 4 โ”† 5 โ”† 7 โ”‚ ... </code></pre>
<python><dataframe><python-polars>
2023-06-26 15:03:16
1
32,459
The Unfun Cat
76,557,842
1,713,513
Auditwheel ELF alignment error on shared library
<p>I have a Cython project that is pulling in FFTW. When I build my project with</p> <p><code>python setup.py build_ext --inplace</code></p> <p>and then import the generated .so from python it runs just fine. But when I try to build a wheel with</p> <p><code>auditwheel repair --plat manylinux2014_x86_64 dist/&lt;package&gt;.whl -w wheelhouse/</code></p> <p>and upload the resultant .whl in wheelhouse/ to our hosted pypi the resulting wheel doesn't work after installation. When I try to import it I get the error that:</p> <p><code>ImportError: libfftw3-&lt;sha&gt;.so.3.3.2: ELF load command address/offset not properly aligned</code></p> <p>Inspecting the whl it is neat that <code>auditwheel</code> seems to copy the FFTW .so library into it (I believe a stripped version). That <em>would</em> be great, if it worked.</p> <p>Note that this is on the same machine, it's just the difference between using the built .so versus creating the .whl and uploading it and then pip installing it.</p> <p>Am I doing something wrong? Are there options on <code>auditwheel</code> I should be using?</p>
<python><cython><fftw><python-manylinux>
2023-06-26 14:54:34
1
376
Eric C.
76,557,773
4,451,315
Interpolate based on datetimes
<p>In pandas, I can interpolate based on a datetimes like this:</p> <pre class="lang-py prettyprint-override"><code>df1 = pd.DataFrame( { &quot;ts&quot;: [ datetime(2020, 1, 1), datetime(2020, 1, 3, 0, 0, 12), datetime(2020, 1, 3, 0, 1, 35), datetime(2020, 1, 4), ], &quot;value&quot;: [1, np.nan, np.nan, 3], } ) df1.set_index('ts').interpolate(method='index') </code></pre> <p>Outputs:</p> <pre class="lang-py prettyprint-override"><code> value ts 2020-01-01 00:00:00 1.000000 2020-01-03 00:00:12 2.333426 2020-01-03 00:01:35 2.334066 2020-01-04 00:00:00 3.000000 </code></pre> <p>Is there a similar method in polars? Say, starting with</p> <pre class="lang-py prettyprint-override"><code>df1 = pl.DataFrame( { &quot;ts&quot;: [ datetime(2020, 1, 1), datetime(2020, 1, 3, 0, 0, 12), datetime(2020, 1, 3, 0, 1, 35), datetime(2020, 1, 4), ], &quot;value&quot;: [1, None, None, 3], } ) </code></pre> <pre class="lang-py prettyprint-override"><code>shape: (4, 2) โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” โ”‚ ts โ”† value โ”‚ โ”‚ --- โ”† --- โ”‚ โ”‚ datetime[ฮผs] โ”† i64 โ”‚ โ•žโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•โ•โ•ก โ”‚ 2020-01-01 00:00:00 โ”† 1 โ”‚ โ”‚ 2020-01-03 00:00:12 โ”† null โ”‚ โ”‚ 2020-01-03 00:01:35 โ”† null โ”‚ โ”‚ 2020-01-04 00:00:00 โ”† 3 โ”‚ โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ </code></pre> <p>EDIT: I've updated the example to make it a bit more &quot;irregular&quot;, so that <code>upsample</code> can't be used as a solution and to make it clear that we need something more generic</p>
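Polars' plain `interpolate()` is positional, so one portable route is to reproduce the time-weighted arithmetic behind pandas' `method='index'` with `np.interp` over the epoch timestamps — a sketch of the underlying computation, whose result can then be put back into a polars column:

```python
from datetime import datetime
import numpy as np

ts = np.array([datetime(2020, 1, 1),
               datetime(2020, 1, 3, 0, 0, 12),
               datetime(2020, 1, 3, 0, 1, 35),
               datetime(2020, 1, 4)], dtype="datetime64[us]").astype(np.int64)
value = np.array([1.0, np.nan, np.nan, 3.0])

# Interpolate linearly in *time*, using only the rows with known values.
known = ~np.isnan(value)
filled = np.interp(ts, ts[known], value[known])
```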
<python><interpolation><python-polars>
2023-06-26 14:45:50
3
11,062
ignoring_gravity
76,557,596
11,313,496
PySpark: Create new rows (explode) based on a number in a column and multiple conditions
<p>I have a dataframe with a few columns, a unique ID, a month, and a split. I need to explode the dataframe and create new rows for each unique combination of id, month, and split. The number to explode has already been calculated and is stored in the column, <code>bad_call_dist</code>. For example, if <code>ID</code> is <code>12345</code>, <code>month</code> is <code>Jan</code>, <code>split</code> is <code>'A'</code>, and <code>bad_call_dist</code> is <code>6</code>, I need to have a total of 6 rows. This process must repeat for each unique combination.</p> <p>I have code that works for small datasets, however I need to apply it to a much, much larger dataframe and it times out each time. What the code below does is extract a single-row dataframe from the original data with a temporary range column representing how many rows must exist for a unique col combination. I then use <code>explode()</code> to generate the new rows and union that into a master dataframe. I'm looking for assistance to optimize the code and speed up processing times while producing the same outcome:</p> <pre><code># unique ID-month-split combinations for the data idMonthSplits = call_data.select('id', 'month', 'split').distinct().collect() # set the schema to all cols except the bad call flag, which is set to 1 in the loop master_explode = spark.createDataFrame([], schema=call_data.select([col for col in call_data.columns if col != 'bad_call_flag']).schema) # loop for ims in idMonthSplits: id = ims ['id'] month = ims ['month'] split = ims ['split'] # explode the df one row per n, where n is the value in bad_call_dist. 
explode_df = exploded.filter((exploded['id'] == id) &amp; (exploded['month'] == month) &amp; (exploded['split'] == split))\ .withColumn('bad_call_flag', F.lit(1)) try: # extract the value that represents the number of rows to explode expVal = explode_df.select(F.first(F.col(&quot;bad_call_dist&quot;)).cast(&quot;int&quot;)).first()[0] # range that is used by explode() to convert single row to multiple rows explode_df = explode_df.withColumn( 'range', F.array( [F.lit(i) for i in range(expVal + 1)] ) ) # explode the df, then drop cols no longer needed for union explode_df = explode_df.withColumn('explode', F.explode(F.col('range')))\ .drop(*['explode', 'range', 'bad_call_dist']) # union to master df master_explode = master_explode.unionAll(explode_df) # if the explode value is 0, no need to expand rows. This triggers to avoid an error. except: continue </code></pre>
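The per-combination loop plus `unionAll` is what kills performance — every iteration grows the query plan. The core fix is to build the repeat count from the column itself in one vectorised pass over the whole frame; in PySpark that is a single `withColumn` with `F.explode(F.sequence(F.lit(1), F.col('bad_call_dist')))` instead of any Python loop. The same row-multiplication idea, shown with pandas so the sketch is runnable stand-alone:

```python
import pandas as pd

df = pd.DataFrame({
    "id":            [12345, 67890],
    "month":         ["Jan", "Feb"],
    "split":         ["A", "B"],
    "bad_call_dist": [3, 2],
})

# Repeat each row bad_call_dist times in one shot -- no per-group loop.
out = (
    df.loc[df.index.repeat(df["bad_call_dist"])]
      .drop(columns="bad_call_dist")
      .assign(bad_call_flag=1)
      .reset_index(drop=True)
)
```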
<python><apache-spark><pyspark><row>
2023-06-26 14:22:57
1
316
Whitewater
76,557,574
2,312,801
pip dependency conflict_ cannot install tweetnlp
<p>I tried to install <code>tweetnlp</code> package from <a href="https://pypi.org/project/tweetnlp/" rel="nofollow noreferrer">here</a> to analyze emotions of some social media text. However, I get the following error when I run <code>pip install tweetnlp</code>:</p> <blockquote> <p>ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. tensorflow 2.11.0 requires protobuf&lt;3.20,&gt;=3.9.2, but you have protobuf 4.23.3 which is incompatible. tensorboard 2.11.2 requires protobuf&lt;4,&gt;=3.9.2, but you have protobuf 4.23.3 which is incompatible.</p> </blockquote> <p>I used a solution from <a href="https://stackoverflow.com/questions/72441758/typeerror-descriptors-cannot-not-be-created-directly">a similar post</a> (using <code>pip install protobuf==3.20.*</code> command), but I got the following error:</p> <blockquote> <p>ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. tensorboardx 2.6.1 requires protobuf&gt;=4.22.3, but you have protobuf 3.20.3 which is incompatible. tensorflow 2.11.0 requires protobuf&lt;3.20,&gt;=3.9.2, but you have protobuf 3.20.3 which is incompatible.</p> </blockquote> <p>What can I try next?</p>
<python><macos><tensorflow><nlp><emotion>
2023-06-26 14:20:15
1
2,459
mOna
76,557,519
14,956,120
Is it safe to assume that using a QEventLoop is a correct way of creating qt5-compatible coroutines in python?
<p>I'm using a custom QEventLoop instance in my code to simulate the <code>QDialog.exec_()</code> function. That is, the ability to pause the python script at some point without freezing the GUI, and then, at some other point of time after the user manually interacts with the GUI, the program resumes its execution right after the <code>QEventLoop.exec_()</code> call, by calling <code>QEventLoop.quit()</code>. This is the exact behaviour of what a coroutine should look like.</p> <p>To illustrate the example, here's a MRE of what I'm doing:</p> <pre><code>from PySide2.QtWidgets import QApplication, QWidget, QVBoxLayout, QRadioButton, QButtonGroup, QDialogButtonBox from PySide2.QtCore import Qt, QTimer, QEventLoop recursion = 5 def laterOn(): # Request settings from user: dialog = SettingsForm() # Simulate a coroutine. # - Python interpreter is paused on this line. # - Other widgets code will still execute as they are connected from Qt5 side. dialog.exec_() # After the eventloop quits, the python interpreter will execute from where # it was paused: # Using the dialog results: if (dialog.result): # We can use the user's response somehow. In this simple example, I'm just # printing text on the console. 
print('SELECTED OPTION WAS: ', dialog.group.checkedButton().text()) class SettingsForm(QWidget): def __init__(self): super().__init__() vbox = QVBoxLayout() self.setLayout(vbox) self.eventLoop = QEventLoop() self.result = False a = QRadioButton('A option') b = QRadioButton('B option') c = QRadioButton('C option') self.group = QButtonGroup() self.group.addButton(a) self.group.addButton(b) self.group.addButton(c) bbox = QDialogButtonBox() bbox.addButton('Save', QDialogButtonBox.AcceptRole) bbox.addButton('Cancel', QDialogButtonBox.RejectRole) bbox.accepted.connect(self.accept) bbox.rejected.connect(self.reject) vbox.addWidget(a) vbox.addWidget(b) vbox.addWidget(c) vbox.addWidget(bbox) global recursion recursion -= 1 if (recursion &gt; 0): QTimer.singleShot(0, laterOn) def accept(self): self.close() self.eventLoop.quit() self.result = True def reject(self): self.close() self.eventLoop.quit() self.result = False def exec_(self): self.setWindowModality(Qt.ApplicationModal) self.show() self.eventLoop.exec_() ### app = QApplication() # Initialize widgets, main interface, etc... mwin = QWidget() mwin.show() QTimer.singleShot(0, laterOn) app.exec_() </code></pre> <p>In this code, the <code>recursion</code> variable control how many times different instances of QEventLoop are crated, and how many times its <code>.exec_()</code> method is called, halting the python interpreter without freezing the other widgets.</p> <hr /> <p>It can be seen that the <code>QEventLoop.exec_()</code> acts just like a <code>yield</code> keyword from a python generator function. Is it correct to assume that <code>yield</code> is used every time <code>QEventLoop.exec()</code> is called? Or it's not something related to a coroutine at all, and another thing is happening at the background? (I don't know if there's a way to see the PySide2 source code, so that's why I'm asking.)</p>
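No `yield` is involved: `QEventLoop.exec_()` blocks the current Python frame with an ordinary (C-level) call and spins a *nested* event loop on the same call stack, whereas a coroutine/generator suspends its own frame and returns control to its caller. The contrast can be seen with a plain generator (illustrative only — no Qt here):

```python
def dialog_flow():
    # A generator *suspends this frame* at yield; the caller keeps running
    # and later resumes it. QEventLoop.exec_() instead stays on the stack,
    # re-entering event dispatch until quit() is called.
    answer = yield "waiting for user"
    yield f"user chose {answer}"

flow = dialog_flow()
state = next(flow)               # runs up to the first yield, then pauses
result = flow.send("B option")   # resumes exactly where it paused
```

So the pattern in the question behaves *like* a coroutine from the user's point of view, but it is implemented with nested (re-entrant) event loops, the same mechanism `QDialog.exec_()` uses.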
<python><pyqt5><yield><pyside2><qeventloop>
2023-06-26 14:14:18
1
1,039
Carl HR
76,557,447
2,006,674
Running python script from crontab on OSX?
<p>I do have experience with crontab, but this is first time using it on OSX 10.15.<br /> Have this strange error:</p> <pre><code> Fatal Python error: init_import_site: Failed to import the site module Python runtime state: initialized Traceback (most recent call last): File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1176, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1147, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 690, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 980, in exec_module File &quot;&lt;frozen site&gt;&quot;, line 616, in &lt;module&gt; File &quot;&lt;frozen site&gt;&quot;, line 599, in main File &quot;&lt;frozen site&gt;&quot;, line 517, in venv PermissionError: [Errno 1] Operation not permitted: '/Users/macbook/code/script/venv_3-11-4/pyvenv.cfg' </code></pre> <p>file pyvenv.cfg exist</p> <pre><code>-rw-r--r-- 1 macbook staff 307 Jun 26 11:50 venv_3-11-4/pyvenv.cfg </code></pre> <p>Any ideas why ?</p>
<python><macos><cron>
2023-06-26 14:05:34
1
7,392
WebOrCode
76,557,085
2,251,058
Csv reader misinterprets quotes
<p>When I try reading my string with csv reader the output I get converts the json string to <code>'{filter&quot;:&quot;freeinternet&quot;', 'region:&quot;178307&quot;}&quot;'</code> which needs to stay</p> <p><code>&quot;{\&quot;filter\&quot;:\&quot;freeinternet\&quot;,\&quot;region\&quot;:\&quot;178307\&quot;}&quot;</code></p> <p>This is what I've tried. I've even tried adding quotechar, escapechar, and trying different versions, but it results in incorrect results</p> <pre><code>import csv from io import StringIO s = u&quot;&quot;&quot;url,forgeresponsetype,identifiers,metadata,partitionkey,sortkey,expirationdate,lastmodifieddate,redirectkey,siteid,locale,type,update_date https://www.expedia.com/Seattle-Hotels.d178307.Travel-Guide-Hotels,MOVED_PERMANENTLY,&quot;{\&quot;filter\&quot;:\&quot;freeinternet\&quot;,\&quot;region\&quot;:\&quot;178307\&quot;}&quot;,,HOTEL_DESTINATION_THEME.494647,1.en_US.filter:freeinternet.region:178307,1696746399,2023-06-18T06:26:40.521Z,,1,en_US,HOTEL_DESTINATION_THEME,21/06/23 https://www.expedia.com/Seattle-Hotels.d178307.Travel-Guide-Hotels,MOVED_PERMANENTLY,&quot;{\&quot;filter\&quot;:\&quot;freeinternet\&quot;,\&quot;region\&quot;:\&quot;178307\&quot;}&quot;,,HOTEL_DESTINATION_THEME.494647,1.en_US.filter:freeinternet.region:178307,1696746399,2023-06-18T06:26:40.521Z,,1,en_US,HOTEL_DESTINATION_THEME,21/06/23 https://www.expedia.com/Seattle-Hotels.d178307.Travel-Guide-Hotels,MOVED_PERMANENTLY,&quot;{\&quot;filter\&quot;:\&quot;freeinternet\&quot;,\&quot;region\&quot;:\&quot;178307\&quot;}&quot;,,HOTEL_DESTINATION_THEME.494647,1.en_US.filter:freeinternet.region:178307,1696746399,2023-06-18T06:26:40.521Z,,1,en_US,HOTEL_DESTINATION_THEME,21/06/23&quot;&quot;&quot; f = StringIO(s) reader = csv.reader(f, delimiter=',', escapechar = '\\') xd = [row for row in reader] </code></pre> <p>Any help is appreciated</p>
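One likely culprit is the string literal itself: in a normal (non-raw) triple-quoted string, `\"` is just `"`, so the backslashes never reach the csv parser and the quoting is genuinely broken by the time it is read. With the backslashes actually present (a raw string here, as they would be when reading from a real file), `escapechar='\\'` parses the JSON field intact:

```python
import csv
import json
from io import StringIO

# A raw string keeps the backslashes, like data read from disk would.
data = r'url,type,"{\"filter\":\"freeinternet\",\"region\":\"178307\"}",done'

row = next(csv.reader(StringIO(data), escapechar='\\'))
payload = json.loads(row[2])   # the field survives as valid JSON
```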
<python><python-3.x><csv>
2023-06-26 13:19:28
2
3,287
Akshay Hazari
76,557,066
2,163,392
Cannot run TensorFlow on GPU in Docker (although it seems to be installed outside of it)
<p>Here is my situation</p> <p>I downloaded the tensorflow/tensorflow:latest-gpu image. In order to run it, I run the following command to start the docker image:</p> <pre><code>docker run -it --rm \ --ipc=host \ --gpus all \ --volume=&quot;/tmp/.X11-unix:/tmp/.X11-unix:rw&quot; \ --volume=&quot;$(pwd)/://mydir:rw&quot; \ --workdir=&quot;/mydir/&quot; \ tensorflow/tensorflow:latest-gpu bash -c 'bash' </code></pre> <p>However, whenever I run the following Python commands:</p> <pre><code>&gt;&gt; import tensorflow as tf &gt;&gt; print(&quot;Num GPUs Available: &quot;, len(tf.config.list_physical_devices('GPU'))) </code></pre> <p>Here is the output I have (inside the Docker):</p> <pre><code>2023-06-26 13:10:46.768093: E tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:266] failed call to cuInit: CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE: forward compatibility was attempted on non supported HW 2023-06-26 13:10:46.768177: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:168] retrieving CUDA diagnostic information for host: 466cc7912253 2023-06-26 13:10:46.768189: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:175] hostname: 466cc7912253 2023-06-26 13:10:46.768314: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:199] libcuda reported version is: NOT_FOUND: was unable to find libcuda.so DSO loaded into this program 2023-06-26 13:10:46.768355: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:203] kernel reported version is: 515.65.1 Num GPUs Available: 0 </code></pre> <p>Here is what I have outside and INSIDE docker:</p> <pre><code>## nvidia-smi +-----------------------------------------------------------------------------+ | NVIDIA-SMI 515.65.01 Driver Version: 515.65.01 CUDA Version: 11.7 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. 
ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 NVIDIA GeForce ... Off | 00000000:04:00.0 Off | N/A | | 0% 37C P0 58W / 250W | 0MiB / 11264MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 1 NVIDIA GeForce ... Off | 00000000:05:00.0 Off | N/A | | 0% 35C P0 59W / 250W | 0MiB / 11264MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 2 NVIDIA GeForce ... Off | 00000000:84:00.0 Off | N/A | | 0% 38C P0 60W / 250W | 0MiB / 11264MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 3 NVIDIA GeForce ... Off | 00000000:85:00.0 Off | N/A | | 0% 32C P0 59W / 250W | 0MiB / 11264MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 4 NVIDIA GeForce ... Off | 00000000:88:00.0 Off | N/A | | 0% 26C P0 58W / 250W | 0MiB / 11264MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 5 NVIDIA GeForce ... 
Off | 00000000:89:00.0 Off | N/A | | 0% 28C P0 57W / 250W | 0MiB / 11264MiB | 1% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ </code></pre> <p>I also can see that there is a CUDA version installed:</p> <pre><code>## nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2020 NVIDIA Corporation Built on Wed_Jul_22_19:09:09_PDT_2020 Cuda compilation tools, release 11.0, V11.0.221 Build cuda_11.0_bu.TC445_37.28845127_0 </code></pre> <p>So what can I do to make my docker image see where is my CUDA and make my code run in the GPU?</p>
<python><docker><tensorflow>
2023-06-26 13:17:15
2
2,799
mad
76,556,935
22,009,322
GIF image keeps looping even with repeat=False parameter in FuncAnimation using Pillow writer
<p>I've added a dummy &quot;init&quot; function since I've read somewhere that it might be the cause, but it didn't help and I still can't figure out why it is constantly looping. The &quot;repeat_delay&quot; parameter is not working either, and there are no errors thrown in the console.</p> <p>Example of the code:</p> <pre><code>import numpy as np from matplotlib import pyplot as plt from matplotlib.animation import FuncAnimation import matplotlib.patches as patches fig, ax = plt.subplots() ax.set_ylim(-20, 20) ax.set_xlim(-20, 20) circle_size = 400 point_anim1 = ax.scatter(10, 10, s=circle_size, c='red') point_anim2 = ax.scatter(-10, -4, s=circle_size, c='blue') arrow = patches.Arrow(2, 2, 2, 2) ax.add_patch(arrow) frames = 4 def init(): return arrow, point_anim1, point_anim2 def animate(i): global point_anim1, point_anim2 point_anim1.remove() point_anim2.remove() if i % 2: point_anim1 = ax.scatter(10, 10, s=circle_size * i, c='red') point_anim2 = ax.scatter(-10, -4, s=circle_size * i, c='blue') arrow.set_visible(False) else: point_anim1 = ax.scatter(10, 10, s=circle_size, c='red') point_anim2 = ax.scatter(-10, -4, s=circle_size, c='blue') arrow.set_visible(True) return arrow, point_anim1, point_anim2 anim = FuncAnimation(fig, animate, init_func=init, frames=frames, interval=600, repeat=False#, repeat_delay=5000 ) plt.show() plt.close() anim.save('test.gif', writer='pillow') </code></pre>
<python><matplotlib><gif><matplotlib-animation>
2023-06-26 12:59:57
0
333
muted_buddy
76,556,703
5,359,846
How to authenticate using Windows Authentication in Playwright
<p>I need to automate a test that uses Windows Authentication.</p> <p>I know that the prompt that opens up is not part of the HTML page, yet why is my code not working?</p> <pre><code>login_page.click_iwa() sleep(5) self.page.keyboard.type('UserName') sleep(5) self.page.keyboard.press('Tab') self.page.keyboard.type('Password') </code></pre>
<python><playwright><playwright-python>
2023-06-26 12:30:52
2
1,838
Tal Angel
76,556,544
1,869,361
Python struct reverse unpack
<p>I'm trying to unpack the structure shown below in such a way that I get '0123456789':</p> <pre><code>b&quot;\x10\x32\x54\x76\x98&quot; </code></pre> <p>Maybe there is another way than writing my own function to do so?</p> <p>Thanks</p>
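That byte string looks like nibble-swapped packed BCD (two decimal digits per byte, low nibble first — the layout GSM uses for phone numbers), and struct has no format code for nibbles. A short sketch that avoids struct entirely, using bytes.hex() and a pair swap:

```python
data = b"\x10\x32\x54\x76\x98"

# bytes.hex() yields the nibbles in written order: '1032547698';
# swapping each pair of hex digits restores the decimal string.
h = data.hex()
digits = "".join(h[i + 1] + h[i] for i in range(0, len(h), 2))
print(digits)  # -> 0123456789
```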
<python><struct>
2023-06-26 12:09:44
3
317
admfotad
76,556,489
3,788,808
Difference between two imbalanced DataFrames' columns in PySpark
<p>I have a follow-up question on top of this thread: <a href="https://stackoverflow.com/questions/38359666/difference-between-two-dataframes-columns-in-pyspark">Difference between two DataFrames columns in pyspark</a></p> <p>This time, I am looking for a way to find the difference in values, in columns of two <strong>SUBSET</strong> DataFrames. For example:</p> <pre><code>from pyspark import SparkContext from pyspark.sql import SQLContext sc = SparkContext() sql_context = SQLContext(sc) df_a = sql_context.createDataFrame([(1,&quot;a&quot;, 3), (2,&quot;b&quot;, 5), (3,&quot;c&quot;, 7)], [&quot;id&quot;,&quot;name&quot;, &quot;age&quot;]) df_b = sql_context.createDataFrame([(&quot;a&quot;, 3), (&quot;b&quot;, 10), (&quot;c&quot;, 13)], [&quot;name&quot;, &quot;age&quot;]) </code></pre> <p>DataFrame A:</p> <pre><code>+--+----+---+ |id|name|age| +--+----+---+ |1 | a| 3| |2 | b| 5| |3 | c| 7| +--+----+---+ </code></pre> <p>DataFrame B:</p> <pre><code>+----+---+ |name|age| +----+---+ | a| 3| | b| 10| | c| 13| +----+---+ </code></pre> <p>I plan to use subtract to get this dataset:</p> <pre><code>+--+----+---+ |id|name|age| +--+----+---+ |2 | b| 5| |3 | c| 7| +--+----+---+ </code></pre> <p>However, it seems subtract does not support</p> <ul> <li>using a subset of columns for the comparison and returning the full dataset</li> </ul> <p>Is there any other way that I can compare two imbalanced datasets and return the id, or is a join a must for the comparison?</p>
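For reference, the usual PySpark tool for "rows of df_a with no match in df_b on a subset of columns" is a left anti join — `df_a.join(df_b, on=['name', 'age'], how='left_anti')` — which keeps all of df_a's columns, including id. The same idea sketched in pandas, so it runs without a Spark session (the data mirrors the example above):

```python
import pandas as pd

df_a = pd.DataFrame({"id": [1, 2, 3], "name": ["a", "b", "c"], "age": [3, 5, 7]})
df_b = pd.DataFrame({"name": ["a", "b", "c"], "age": [3, 10, 13]})

# Left join with an indicator column, then keep the rows that only
# exist in df_a: the pandas equivalent of PySpark's how="left_anti".
merged = df_a.merge(df_b, on=["name", "age"], how="left", indicator=True)
diff = merged[merged["_merge"] == "left_only"][["id", "name", "age"]]
print(diff)  # rows with id 2 and 3
```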
<python><pyspark>
2023-06-26 12:02:19
1
775
Question-er XDD
76,556,441
14,777,704
How to suppress callback exceptions in dash 2.10.2 as suppress_callback_exceptions=True is not working?
<p>I am a beginner in Dash. I have built a dashboard app using Dash 2.10.2, with callback functions as well as background callback functions (@dash.callback(...)).</p> <p>When I load the page, it intermittently throws errors like &quot;Invalid Value - Unable to Update Figure&quot;, but on refreshing it, the outputs are displayed correctly and the error disappears on its own.</p> <p>To prevent these intermittent errors I have attempted the two ways below, but my efforts are futile.</p> <p>Attempt 1-</p> <pre><code>app=Dash() app.config.suppress_callback_exceptions=True </code></pre> <p>Attempt 2-</p> <pre><code>app=Dash(suppress_callback_exceptions=True) </code></pre> <p>But neither is working. Can anyone please suggest how to prevent these intermittent exceptions in my callbacks and background callbacks, so that the app works smoothly for my stakeholders when they use it?</p> <p>Edit - Adding code snippet</p> <pre><code>df=pd.read_csv(....) dropdown=dcc.Dropdown(id='dropdown',options=subjects,value=subjects[0]) dropdown2=dcc.Dropdown(id='dropdown2',options=another,value=another[0]) app.layout=html.Div(children=[ dropdown2, dropdown, dcc.Store(id='intermediate-value'), dcc.Graph(id='graph1'), dcc.Graph(id='graph2'), dcc.Graph(id='graph3') ] ) @dash.callback( Output(component_id='graph2',component_property='figure'), Input(component_id='dropdown2',component_property='value'), manager=background_callback_manager ) def f2(selection): # intermittent error occurring at updating this figure # on starting app #some code and processing return fig2 @app.callback( Output('intermediate-value','data'), Input(component_id='dropdown',component_property='value') ) def heavyProcessing(selection): filtered=df[df[x]==selection] #heavy processing return processed.to_json(date_format='iso',orient='split') @app.callback( Output(component_id='graph1',component_property='figure'), Input('intermediate-value','data') ) def f1(data): #some code return fig1 @app.callback( Output(component_id='graph3',component_property='figure'), Input('intermediate-value','data') ) def f3(data): # intermittent error occurring at updating this figure on #starting app #some code return fig3 </code></pre> <p>The intermittent error generally occurs at fig2 (first callback) and fig3 (last callback).</p>
<python><plotly-dash>
2023-06-26 11:55:19
1
375
MVKXXX