Dataset columns (name, dtype, observed min and max):
- QuestionId: int64, 74.8M to 79.8M
- UserId: int64, 56 to 29.4M
- QuestionTitle: string, 15 to 150 characters
- QuestionBody: string, 40 to 40.3k characters
- Tags: string, 8 to 101 characters
- CreationDate: string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
- AnswerCount: int64, 0 to 44
- UserExpertiseLevel: int64, 301 to 888k
- UserDisplayName: string, 3 to 30 characters, nullable
75,835,908
21,420,742
How to filter on dates n days from today using Python
<p>I am fairly new to Python and have not used datetime/timedelta much. I am trying to look at a dataset from the past 90 days. I have seen how to get today's date, but have no clue how to use <code>date.today()</code> in a filter.</p> <p>You can assume the data looks like this:</p> <pre><code> ID Date Employed? 1 12/11/2021 Y 2 2/12/2022 Y 3 3/02/2023 Y </code></pre> <p>And I would like to filter it down to only the rows from the 90 days prior to today.</p> <pre><code> ID Date Employed? 3 3/02/2023 Y </code></pre> <p>Any suggestions?</p>
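A sketch of one way to do this with pandas, using the sample rows from the question (the cutoff is pinned to the question's posting date so the result is reproducible; in real use you would take today's date instead):

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 2, 3],
    "Date": ["12/11/2021", "2/12/2022", "3/02/2023"],
    "Employed?": ["Y", "Y", "Y"],
})
df["Date"] = pd.to_datetime(df["Date"], format="%m/%d/%Y")

today = pd.Timestamp("2023-03-24")  # in real use: pd.Timestamp.today().normalize()
cutoff = today - pd.Timedelta(days=90)

# keep only rows from the last 90 days
recent = df[df["Date"] >= cutoff]
```

Boolean indexing with a `Timestamp` comparison is all that is needed once the column is a real datetime dtype.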
<python><python-3.x><dataframe><datetime><timedelta>
2023-03-24 16:05:37
1
473
Coding_Nubie
75,835,720
1,120,622
Python property getters and setters not working
<p>I am attempting to use property-based getters and setters in Python and am getting strange problems. Consider the following code:</p> <pre><code>#!/usr/bin/env python class anyThing(object): def __init__(self): self._xyz = 'something' @property def XYZ(self): return self._xyz @XYZ.setter def XYZ(self, value): self._xyz = value t = anyThing() # Construct an instance print(f'Type of t.XYZ is {type(t.XYZ)}') print(f'Invoking getter {t.XYZ()}') t.XYZ('anotherThing') # Invoke instance setter </code></pre> <p>The system is interpreting the getter method as a string rather than a method. I get the following specific results:</p> <pre><code>(PR) jonathan@jfgdev:/PR$ ./testProperty.py Type of t.XYZ is &lt;class 'str'&gt; Traceback (most recent call last): File &quot;/mnt/ProgrammingRenaissance/./testProperty.py&quot;, line 17, in &lt;module&gt; print(f'Invoking getter {t.XYZ()}') TypeError: 'str' object is not callable </code></pre> <p>Can anyone see what I am doing wrong? I got the problem on both Python 3.10.6 and Python 3.10.10. If you comment out the invocation of the getter, the same problem shows up when invoking the setter. If I turn the methods into old-fashioned getters and setters, they work, so the problem seems to be associated with properties only.</p>
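A property replaces call syntax with attribute syntax: `t.XYZ` already runs the getter and returns the stored string, so `t.XYZ()` then tries to call that string, and `t.XYZ('anotherThing')` fails the same way. The setter is invoked by plain assignment. A corrected sketch:

```python
class AnyThing:
    """Same property pattern as in the question (class renamed per PEP 8)."""
    def __init__(self):
        self._xyz = 'something'

    @property
    def XYZ(self):
        return self._xyz

    @XYZ.setter
    def XYZ(self, value):
        self._xyz = value

t = AnyThing()
print(t.XYZ)            # getter: plain attribute access, no parentheses
t.XYZ = 'anotherThing'  # setter: plain assignment, not a call
print(t.XYZ)
```

The "old-fashioned" versions work precisely because they are ordinary methods and therefore callable; the descriptor protocol behind `@property` intercepts the attribute access itself.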
<python><getter>
2023-03-24 15:49:15
0
2,927
Jonathan
75,835,580
5,213,451
How to configure MyPy for Polars API Extension?
<p>I'm writing data manipulation code around <code>polars</code> Series and DataFrames. To make my life easier and the code more readable, I use the <a href="https://pola-rs.github.io/polars/py-polars/html/reference/api.html" rel="nofollow noreferrer">Polars API extensions</a>:</p> <pre class="lang-py prettyprint-override"><code>@pl.api.register_dataframe_namespace(&quot;split&quot;) class SplitFrame: def __init__(self, df: pl.DataFrame): self._df = df def by_alternate_rows(self) -&gt; list[pl.DataFrame]: df = self._df.with_row_count(name=&quot;n&quot;) return [ df.filter((pl.col(&quot;n&quot;) % 2) == 0).drop(&quot;n&quot;), df.filter((pl.col(&quot;n&quot;) % 2) != 0).drop(&quot;n&quot;), ] pl.DataFrame( data=[&quot;aaa&quot;, &quot;bbb&quot;, &quot;ccc&quot;, &quot;ddd&quot;, &quot;eee&quot;, &quot;fff&quot;], columns=[(&quot;txt&quot;, pl.Utf8)], ).split.by_alternate_rows() # [β”Œβ”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β” # β”‚ txt β”‚ β”‚ txt β”‚ # β”‚ --- β”‚ β”‚ --- β”‚ # β”‚ str β”‚ β”‚ str β”‚ # β•žβ•β•β•β•β•β•‘ β•žβ•β•β•β•β•β•‘ # β”‚ aaa β”‚ β”‚ bbb β”‚ # β”‚ ccc β”‚ β”‚ ddd β”‚ # β”‚ eee β”‚ β”‚ fff β”‚ # β””β”€β”€β”€β”€β”€β”˜, β””β”€β”€β”€β”€β”€β”˜] </code></pre> <p>My problem is that my codebase is checked by MyPy, which doesn't like them one bit:</p> <pre class="lang-bash prettyprint-override"><code>error: &quot;DataFrame&quot; has no attribute &quot;split&quot; [attr-defined] </code></pre> <p>At the moment I can think of two workarounds, none of which is satisfying:</p> <ol> <li><code># type: ignore</code> comments<br /> It is what I'm doing at the moment, but they need to be everywhere and ruin a little bit the codebase.</li> <li>Stub files<br /> I didn't explore this avenue but I know one can provide stubfiles for a library. 
My issue here is that both Polars and my extensions are actively developed so I don't want to &quot;freeze&quot; their signatures, as I think maintaining these stubs will be a real burden.</li> </ol> <p>Essentially I would like to tell MyPy <em>&quot;Please assume polars.DataFrame has a <code>split</code> attribute of type <code>SplitFrame</code>.&quot;</em> That's it. Any suggestions on how to do this or something serving the same purpose would be appreciated.</p>
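One low-friction middle ground between scattered ignores and full stub files is a small typed accessor function that concentrates the single `# type: ignore` in one place, so every call site is fully typed. A polars-free sketch of the pattern (the `SplitFrame` here is a stand-in for the registered namespace class, and list slicing stands in for the frame splitting):

```python
from typing import Any, List

class SplitFrame:
    """Stand-in for the registered namespace class from the question."""
    def __init__(self, df: Any) -> None:
        self._df = df

    def by_alternate_rows(self) -> List[Any]:
        return [self._df[::2], self._df[1::2]]

def split_ns(df: Any) -> SplitFrame:
    # With a real polars DataFrame this body would be:
    #     return df.split  # type: ignore[attr-defined]
    # i.e. one centralized ignore instead of one per call site.
    return SplitFrame(df)

halves = split_ns(list("abcdef")).by_alternate_rows()
```

Callers then write `split_ns(df).by_alternate_rows()`, which MyPy checks against the real `SplitFrame` signatures with no further annotations needed.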
<python><mypy><python-polars>
2023-03-24 15:35:12
2
1,000
Thrastylon
75,835,550
5,168,463
Parsing NLTK Chart diagram
<p>I am trying to do parsing using context free grammar which is returning multiple parse trees. I can visualize these parse trees one by one using the code below:</p> <pre><code>grammar = nltk.CFG.fromstring(&quot;&quot;&quot; S -&gt; VP NP PUN | NP VP PUN | NP PUN | VP PUN NP -&gt; ADP NP | NOUN | ADJ NP | ADV ADJ NP |ADJ | CONJ NP | NOUN NP | DET NP |ADV VP -&gt; VERB | ADV VP | ADV | VERB VP | VERB NP ADP -&gt; 'as' | 'in' | 'along' | 'with' | 'of' | 'a' NOUN -&gt; 'three' | 'parts' | 'milk' | 'oak' | 'body' | 'center' | 'stage' | 'notes' | 'date' | 'rice' | 'structure' | 'a' | 'hint' | 'blossom' | 'candy' | 'chocolate' | 'lemon' | 'thyme' | 'espresso' | 'cacao' | 'tart' | 'honeysuckle' | 'citrus' | 'apricot' | 'finish' | 'paste' | 'grapefruit' | 'cherry' | 'mouthfeel' | 'spice' | 'cup' | 'vanilla' | 'narcissus' | 'savory-tart' | 'aroma' | 'leads' | 'zest' | 'herb' | 'florals' | 'tones' | 'peppercorn' | 'intimations' | 'nib' PUN -&gt; '.' | ',' | ';' | '(' | ')' DET -&gt; 'the' | 'all' | 'a' ADV -&gt; 'as' | 'deeply' | 'long' | 'all' | 'gently' | 'crisply' | 'richly' | 'delicately' | 'sweetly' VERB -&gt; 'evaluated' | 'take' | 'notes' | 'follow' | 'roasted' | 'dried' | 'fruit' | 'finish' | 'drying' | 'leads' | 'intensify' X -&gt; 'a' CONJ -&gt; 'and' PRT -&gt; 'in' ADJ -&gt; 'rich' | 'dark' | 'black' | 'short' | 'small' | 'sweet' | 'white' | 'floral' | 'silky' | 'pink' | 'thyme-like' | 'syrupy' | 'roasted' | 'dried' | 'floral-toned' | 'chocolaty' | 'sweet-toned' | 'cocoa-toned' | 'plush' | 'floral-driven' | 'herb-toned' | 'delicate' | 'resonant' | 'flavor-saturated' &quot;&quot;&quot;) statement = nltk.word_tokenize(&quot;Crisply sweet cocoa-toned Lemon blossom roasted cacao nib date rice candy white peppercorn in aroma and cup.&quot;) statement = [i.lower().strip() for i in statement] rd_parser = nltk.RecursiveDescentParser(grammar) for pos, tree in enumerate(rd_parser.parse(statement)): tree.draw() </code></pre> <p>But the problem is that it generates the 
tree diagrams one by one, and the program blocks until I close all of the chart windows manually. Is there a way to render all the trees into a single image and display it in my Jupyter notebook at once?</p>
<python><nlp><treeview><nltk><text-parsing>
2023-03-24 15:32:12
0
515
DumbCoder
75,835,421
3,734,059
How to get start and end datetime indices of groups of consecutive values in pandas, including repeated values?
<p>There are many answers based on numerical indices but I am looking for a solution that works with a <a href="https://pandas.pydata.org/docs/reference/api/pandas.DatetimeIndex.html" rel="nofollow noreferrer">DateTimeIndex</a> and got really stuck here. The closest answer I found with a numerical index is <a href="https://stackoverflow.com/questions/65238399/how-to-get-start-and-end-indices-of-consecutive-groups-of-data-in-pandas">this one</a> but does not work for my example.</p> <p>I want to get the group start and end as <code>DateTime</code> for groups of <code>n</code> consecutive values in a DataFrame column.</p> <p>Sample data:</p> <pre><code>import pandas as pd index = pd.date_range( start=pd.Timestamp(&quot;2023-03-20 12:00:00+0000&quot;, tz=&quot;UTC&quot;), end=pd.Timestamp(&quot;2023-03-20 15:00:00+0000&quot;, tz=&quot;UTC&quot;), freq=&quot;15Min&quot;, ) data = { &quot;values_including_constant_groups&quot;: [ 2.0, 1.0, 1.0, 3.0, 3.0, 3.0, 4.0, 4.0, 4.0, 2.0, 3.0, 3.0, 1.0, ], } df = pd.DataFrame( index=index, data=data, ) print(df) </code></pre> <p>yields:</p> <pre><code> values_including_constant_groups 2023-03-20 12:00:00+00:00 2.0 2023-03-20 12:15:00+00:00 1.0 2023-03-20 12:30:00+00:00 1.0 2023-03-20 12:45:00+00:00 3.0 2023-03-20 13:00:00+00:00 3.0 2023-03-20 13:15:00+00:00 3.0 2023-03-20 13:30:00+00:00 4.0 2023-03-20 13:45:00+00:00 4.0 2023-03-20 14:00:00+00:00 4.0 2023-03-20 14:15:00+00:00 2.0 2023-03-20 14:30:00+00:00 3.0 2023-03-20 14:45:00+00:00 3.0 2023-03-20 15:00:00+00:00 1.0 </code></pre> <p>Desired output (I would be flexible here but this would be my first idea):</p> <pre><code> values_including_constant_groups group_start group_end 2023-03-20 12:00:00+00:00 2.0 NaN NaN 2023-03-20 12:15:00+00:00 1.0 True False 2023-03-20 12:30:00+00:00 1.0 False True 2023-03-20 12:45:00+00:00 3.0 True False 2023-03-20 13:00:00+00:00 3.0 False False 2023-03-20 13:15:00+00:00 3.0 False True 2023-03-20 13:30:00+00:00 4.0 True False 2023-03-20 
13:45:00+00:00 4.0 False False 2023-03-20 14:00:00+00:00 4.0 False True 2023-03-20 14:15:00+00:00 2.0 NaN NaN 2023-03-20 14:30:00+00:00 3.0 True False 2023-03-20 14:45:00+00:00 3.0 False True 2023-03-20 15:00:00+00:00 1.0 NaN NaN </code></pre> <p>So only groups of <code>n&gt;=2</code> should be considered here and &quot;single&quot; values excluded. Moreover, repeated groups should be included.</p> <p>Any hints are very welcome!</p>
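A sketch of one approach on the sample frame: label each run of equal values with a cumulative sum over value changes, keep only runs of length at least 2, and mark run boundaries by comparing the run id with its shifted self. `where` leaves singletons as NaN, matching the desired layout:

```python
import pandas as pd

index = pd.date_range(
    start=pd.Timestamp("2023-03-20 12:00:00+0000", tz="UTC"),
    end=pd.Timestamp("2023-03-20 15:00:00+0000", tz="UTC"),
    freq="15min",
)
vals = pd.Series(
    [2.0, 1.0, 1.0, 3.0, 3.0, 3.0, 4.0, 4.0, 4.0, 2.0, 3.0, 3.0, 1.0],
    index=index,
    name="values_including_constant_groups",
)
df = vals.to_frame()

grp = vals.ne(vals.shift()).cumsum()          # id for each run of equal values
in_group = grp.map(grp.value_counts()) >= 2   # keep runs of length >= 2 only

df["group_start"] = grp.ne(grp.shift()).where(in_group)   # NaN outside groups
df["group_end"] = grp.ne(grp.shift(-1)).where(in_group)
```

Because the run id is computed on the values and the index stays a `DatetimeIndex` throughout, `df[df["group_start"] == True].index` gives the group start timestamps directly.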
<python><pandas>
2023-03-24 15:18:25
1
6,977
Cord Kaldemeyer
75,835,357
13,039,962
Calculate consecutive quarterly cumulative values from daily data
<p>I have this df:</p> <pre><code> DATE PP 0 1964-01-01 1.0 1 1964-01-02 0.0 2 1964-01-03 0.0 3 1964-01-04 0.0 4 1964-01-05 0.0 ... ... 21788 2023-03-18 NaN 21789 2023-03-19 NaN 21790 2023-03-20 NaN 21791 2023-03-21 NaN 21792 2023-03-22 NaN </code></pre> <p>I want to calculate the consecutive quarterly cumulative values of PP from daily values.</p> <p>Expected result:</p> <pre><code>cum_start cum_end PP 1964-12 1965-02 25 1965-01 1965-03 30 1965-02 1965-04 12 1965-03 1965-05 11 1965-04 1965-06 25 ... ... 2022-9 2022-11 14 2022-10 2022-12 22 2022-11 2023-01 15 2022-12 2023-02 21 2023-01 2023-03 40 </code></pre> <p>I tried this code but it has a problem:</p> <pre><code>trimestres = df.set_index('DATE').rolling('90D')['PP'].sum() trimestres = trimestres.groupby([trimestres.index.year, trimestres.index.quarter]).tail(3) trimestres = trimestres.reset_index().rename(columns={'DATE': 'cum_start'}) trimestres['cum_end'] = trimestres['cum_start'] + pd.DateOffset(months=3) </code></pre> <p>Do you have any possible solution?</p> <p>Thanks in advance.</p>
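The expected table is a rolling 3-calendar-month window ending at each month, so resampling the daily values to monthly sums and then rolling over 3 months avoids the fixed-90-day approximation. A sketch on synthetic data (PP = 1.0 every day, so the window sums are just day counts; `cum_end` is labeled with the first day of the window's last month):

```python
import pandas as pd

# synthetic daily data standing in for the real df
rng = pd.date_range("2022-01-01", "2022-12-31", freq="D")
df = pd.DataFrame({"DATE": rng, "PP": 1.0})

monthly = df.set_index("DATE")["PP"].resample("MS").sum()  # calendar-month sums
rolled = monthly.rolling(3).sum().dropna()                 # 3 consecutive months

out = pd.DataFrame({
    "cum_start": rolled.index - pd.DateOffset(months=2),
    "cum_end": rolled.index,
    "PP": rolled.to_numpy(),
})
```

With real data containing NaN tails, `resample(...).sum()` treats missing days as 0 by default; pass `min_count=1` to keep incomplete months as NaN instead.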
<python><pandas>
2023-03-24 15:12:22
1
523
Javier
75,835,249
10,192,593
Nested for loop to create summary table of mean and SD in Python
<p>I have a numpy array of shape (2,2,1000) representing income-groups, age-groups and a sample of 1000 observations for each group.</p> <p>I am trying to use a for-loop to calculate a summary table.</p> <p>I would like to receive a table of the following sort:</p> <p><a href="https://i.sstatic.net/ofaCW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ofaCW.png" alt="enter image description here" /></a></p> <p>And this is my code:</p> <pre><code>import numpy as np import pandas as pd elasticity = np.random.rand(2,2,1000) print(elasticity.shape) income = ['i0','i1'] age_gr= ['&lt;=18','&gt;18'] data=[] df = pd.DataFrame() for i in range(len(age_gr)): row=[] for j in range((len(income))): row.append(np.mean(elasticity[i,j,:])) row.append(np.std(elasticity[i,j,:])) data.append(row) data df = pd.DataFrame(data) df </code></pre> <p>But there is something wrong with my for-loop and I keep getting 4x4 table of the mean and standard deviation instead of the 2x4 that I would like. What am I doing wrong?</p>
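The loop shown produces 2 rows of 4 values per run, so a 4x4 result usually means `data` still held the rows from an earlier notebook run; re-creating `data` right next to the loop avoids that. Note also that the question describes the array as (income, age, samples) while the loop indexes `elasticity[i, j, :]` with `i` over age groups. A sketch with the accumulator reset and the axis order made explicit (the axis assignment is an assumption from the question's description):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
elasticity = rng.random((2, 2, 1000))  # assumed (income, age, samples)

income = ["i0", "i1"]
age_gr = ["<=18", ">18"]

data = []  # re-created on every run, so notebook reruns cannot accumulate rows
for i in range(len(age_gr)):
    row = []
    for j in range(len(income)):
        samples = elasticity[j, i, :]   # axis 0 = income, axis 1 = age
        row.append(samples.mean())
        row.append(samples.std())
    data.append(row)

cols = pd.MultiIndex.from_product([income, ["mean", "sd"]])
summary = pd.DataFrame(data, index=age_gr, columns=cols)
```

The MultiIndex columns label which statistic belongs to which income group, mirroring the table in the screenshot.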
<python><numpy><numpy-ndarray>
2023-03-24 15:02:37
2
564
Stata_user
75,835,248
1,175,081
Mypy can't find obvious type mismatch when using namespace-packages and subfolders with the same name
<p>The problem is as simple as can be if put in one file:</p> <pre><code>class A(): a = 1 class B(): b = 2 def test(x: A): return x def testit(): b = B() test(b) </code></pre> <p>And mypy finds the issue here with <code>mypy test_folder --check-untyped-defs</code></p> <p>But if those functions / classes are put in various files mypy doesn't detect the problem. It doesn't even warn that it can't piece information together which IMO is the biggest problem.</p> <p>I spent some time to create a minimal reproducible version:</p> <p>The file tree looks like this:</p> <pre><code>src mb2 base.py steps init step.py impl.py jmeter step.py test test.py </code></pre> <p>The content of the 5 involved files is:</p> <p>base.py</p> <pre><code>class BaseCfg: pass </code></pre> <p>test.py</p> <pre><code>from mb2.steps.jmeter.step import JCfg from mb2.steps.init.impl import get_resource_costs class TestCollectMetrics: def test_get_resource_costs(self): cfg = JCfg() get_resource_costs(cfg) # &lt;- Wrong type should be detected </code></pre> <p>init/impl.py</p> <pre><code>from .step import Cfg def get_resource_costs(config: Cfg): pass </code></pre> <p>init/step.py</p> <pre><code>from mb2.base import BaseCfg class Cfg(BaseCfg): pass </code></pre> <p>jmeter/step.py</p> <pre><code>from mb2.base import BaseCfg class JCfg(BaseCfg): pass </code></pre> <p>The test is passing an object of type <code>JCfg</code> to the method <code>get_resource_costs</code>. While the method specifies it needs type <code>Cfg</code>. This doens't get detected by mypy.</p> <p>It only runs with <code>--namespace-packages</code> as otherwise the duplicate <code>step.py</code> files become a problem. 
So the command to reproduce is:</p> <pre><code>mypy src --check-untyped-defs --namespace-packages </code></pre> <p>This returns <code>Success: no issues found in 5 source files</code>.</p> <p>Putting <code>reveal_type(cfg)</code> and <code>reveal_type(get_resource_costs)</code> into test.py returns <code>Revealed type is &quot;Any&quot;</code>.</p> <p>Introducing local errors in any of the 5 files gets them detected, so mypy is checking all files.</p> <p>I assume it is connected to namespace packages not being able to resolve paths and silently failing. Is there any way to fix this?</p>
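When mypy cannot map a file under `src` to a fully qualified module (the usual failure mode with namespace packages plus an src layout), the import silently resolves to `Any`, which matches the `reveal_type` output. A mypy.ini sketch using real mypy options (the `src` path is taken from the question; treat the exact combination as a starting point to verify):

```ini
[mypy]
mypy_path = src
explicit_package_bases = True
namespace_packages = True
check_untyped_defs = True
; surface imports that silently become Any instead of hiding them:
disallow_any_unimported = True
```

Alternatively, adding `__init__.py` files turns the folders into regular packages, so the two `step.py` modules get distinct qualified names and `--namespace-packages` is no longer needed. Either way, `disallow_any_unimported` would have turned this silent failure into an error.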
<python><mypy>
2023-03-24 15:02:15
1
14,967
David Schumann
75,835,230
7,240,233
How to put elements of a pandas index as values of a dictionary?
<p>I'm struggling with an error in pandas which is driving me crazy.</p> <p>I want to build a dictionary which extracts some data from a pandas df:</p> <pre><code>Index_col col1 P1 F1-R1 P2 F1-R1 P3 F1-R1 P4 F1-R1 P5 F1-R2 P6 F1-R2 P7 F1-R2 P8 F1-R2 (etc) </code></pre> <p>Would give:</p> <pre><code>{'F1-R1': ['P1', 'P2', 'P3', 'P4'], 'F1-R2': ['P5', 'P6', 'P7', 'P8']} </code></pre> <p>However, the following code:</p> <pre><code>dic = dict.fromkeys(df.col1.unique(), []) for idx, row in df.iterrows(): dic[row[&quot;col1&quot;]].append(idx) </code></pre> <p>produces:</p> <pre><code>{'F1-R1': ['P1', 'P2', 'P3', 'P4', 'P5', 'P6', 'P7', 'P8'], 'F1-R2': ['P1', 'P2', 'P3', 'P4', 'P5', 'P6', 'P7', 'P8']} </code></pre> <p>I can't figure out why :/. Does someone have an answer (or another way to proceed)?</p>
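The culprit is `dict.fromkeys(keys, [])`: every key receives the same single list object, so every `append` lands in that one shared list. A sketch that sidesteps the mutation entirely via `groupby` (rebuilding the sample frame):

```python
import pandas as pd

df = pd.DataFrame(
    {"col1": ["F1-R1"] * 4 + ["F1-R2"] * 4},
    index=[f"P{i}" for i in range(1, 9)],
)

# dict.fromkeys(keys, []) shares ONE list across all keys, hence the bug.
# groupby(...).groups maps each col1 value to the index labels in that group:
dic = {k: list(v) for k, v in df.groupby("col1").groups.items()}
```

If the loop is kept instead, `dict.fromkeys` must be replaced by something that creates a fresh list per key, e.g. `collections.defaultdict(list)` or `{k: [] for k in df.col1.unique()}`.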
<python><pandas>
2023-03-24 15:00:27
3
721
Micawber
75,835,140
11,160,898
Python load .pem public key
<p>I have an ERCA pem file and need to load it in as a public key, which is then used to decrypt some data.</p> <pre><code>------------ BEGIN ERCA PK --------------- /UVDIAD//wHpgHY6REqVJQqVh4LR1UrPwyPSXzlGuBbpL8+dMrQqJhPRo2O05DUyoCZoYynI lmPMwAH3J4IGtqtlrShxhIpoD2pX2P2h14LJtYEpA+pbZuKpvh2FvdD9rnakYIjXGmF2sfap hBkQBCTcVtCEaqPIQ5DTUXoPEZLe3/dAkkzbpwAAAAAAAQAB ------------ END ERCA PK --------------- </code></pre> <p>Now I basically do this</p> <pre><code> file = open(&quot;.\dddfiles\EC_PK\EC_PK.pem&quot;, 'rb') b = file.read() file.close() rsa.PublicKey.load_pkcs1_openssl_der(b) </code></pre> <p>But it gives me the errors:</p> <pre><code>File &quot;C:\Users\Oscar\Documents\QanALL_CMD_Tools\src\ddd_parser.py&quot;, line 97, in &lt;module&gt; _7601_.validate() File &quot;C:\Users\Oscar\Documents\QanALL_CMD_Tools\src\parser7601.py&quot;, line 89, in validate rsa.PublicKey.load_pkcs1_openssl_der(b) File &quot;C:\Users\Oscar\AppData\Roaming\Python\Python311\site-packages\rsa\key.py&quot;, line 375, in load_pkcs1_openssl_der (keyinfo, _) = decoder.decode(keyfile, asn1Spec=OpenSSLPubKey()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Oscar\AppData\Roaming\Python\Python311\site-packages\pyasn1\codec\ber\decoder.py&quot;, line 1618, in __call__ raise error.PyAsn1Error( pyasn1.error.PyAsn1Error: &lt;TagSet object, tags 0:32:13&gt; not in asn1Spec: &lt;OpenSSLPubKey schema object, tagSet=&lt;TagSet object, tags 0:32:16&gt;, subtypeSpec=&lt;ConstraintsIntersection object&gt;, componentType=&lt;NamedTypes object, types &lt;NamedType object, type header=&lt;PubKeyHeader schema object, tagSet=&lt;TagSet object, tags 0:32:16&gt;, subtypeSpec=&lt;ConstraintsIntersection object&gt;, componentType=&lt;NamedTypes object, types &lt;NamedType object, type oid=&lt;ObjectIdentifier schema object, tagSet &lt;TagSet object, tags 0:0:6&gt;&gt;&gt;, &lt;NamedType object, type parameters=&lt;Null schema object, tagSet &lt;TagSet object, tags 0:0:5&gt;, subtypeSpec 
&lt;ConstraintsIntersection object, consts &lt;SingleValueConstraint object, consts b''&gt;&gt;, encoding iso-8859-1&gt;&gt;&gt;, sizeSpec=&lt;ConstraintsIntersection object&gt;&gt;&gt;, &lt;NamedType object, type key=&lt;OctetString schema object, tagSet &lt;TagSet object, tags 0:0:3&gt;, encoding iso-8859-1&gt;&gt;&gt;, sizeSpec=&lt;ConstraintsIntersection object&gt;&gt; </code></pre> <p>For the life of me, I have tried a lot and can't get it to work. Does anyone have any idea? I also have a BIN and a txt file; I tried both, no difference.</p> <p>To give some context: I'm trying to write code to validate signatures from tachograph files. Each chapter has certificates which are used to create a signature of the data. If I understand the documentation correctly, I need to unwrap the certificate of the chapter using the European public key, after which I have a new public key from the manufacturer which I can then use to verify the signature of the whole chapter. But I could be wrong about all this, as the documentation seems to be written by Albert Einstein and I'm far from his intellect.</p>
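<code>load_pkcs1_openssl_der</code> expects DER-encoded ASN.1, and the pyasn1 error is saying the bytes do not match any ASN.1 structure. Assuming the file follows the Annex 1B ERCA public-key layout of an 8-byte key identifier, a 128-byte RSA modulus, and an 8-byte public exponent (an assumption worth checking against the tachograph specification), the 144 decoded bytes can be sliced by hand:

```python
import base64

# base64 body between the BEGIN/END ERCA PK markers, from the question
pem_body = (
    "/UVDIAD//wHpgHY6REqVJQqVh4LR1UrPwyPSXzlGuBbpL8+dMrQqJhPRo2O05DUyoCZoYynI"
    "lmPMwAH3J4IGtqtlrShxhIpoD2pX2P2h14LJtYEpA+pbZuKpvh2FvdD9rnakYIjXGmF2sfap"
    "hBkQBCTcVtCEaqPIQ5DTUXoPEZLe3/dAkkzbpwAAAAAAAQAB"
)
raw = base64.b64decode(pem_body)

# assumed layout (tachograph CSM, not ASN.1/DER):
# 8-byte Key Identifier | 128-byte modulus n | 8-byte exponent e
kid, n_bytes, e_bytes = raw[:8], raw[8:136], raw[136:144]
n = int.from_bytes(n_bytes, "big")
e = int.from_bytes(e_bytes, "big")
```

The resulting pair can then be turned into e.g. `rsa.PublicKey(n, e)`. Note that the Annex 1B scheme is a signature-with-message-recovery scheme, so verification may require raw modular exponentiation of the signature block rather than `rsa.verify`.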
<python><encryption><rsa><public-key-encryption>
2023-03-24 14:51:33
2
305
Oscar K
75,835,020
898,042
How to print list with comma separators without spaces?
<p>I need to print list with commas without spaces and enclosed with 2 curly brackets. My code:</p> <pre><code>x = [1,3,6,7] x = set(x) print('{', *x, &quot;}&quot;, sep=',') </code></pre> <p>This prints:</p> <pre><code>{,1,3,6,7,} </code></pre> <p>I need it to be like this:</p> <pre><code>{1,3,6,7} </code></pre>
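Building the string with `str.join` and concatenating the braces avoids the extra separators that `print(..., sep=',')` inserts around the literal braces. A sketch (sets are unordered, so `sorted` is used to make the output deterministic):

```python
x = [1, 3, 6, 7]
s = "{" + ",".join(str(v) for v in sorted(set(x))) + "}"
print(s)
```

An f-string variant is `f"{{{','.join(map(str, sorted(set(x))))}}}"`, where the doubled braces are literal `{` and `}`.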
<python>
2023-03-24 14:40:26
5
24,573
ERJAN
75,834,927
7,376,511
isinstance with string representing types
<pre><code>isinstance(&quot;my string&quot;, &quot;str | int&quot;) isinstance(&quot;my string&quot;, &quot;list&quot;) </code></pre> <p>Is there a way to check the type of a variable (<code>&quot;my string&quot;</code> in this case) based on a valid type which is stored as a string?</p> <p>I don't need to import the type, I know it's builtin (int / str / float / etc).</p> <p>The naive approach would be to call <code>eval()</code>, but that's of course unsafe. Is there a safe way to do this other than using regex/string splitting to extract individual types and matching them to their evaluated types in a huge if/else block?</p>
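A safe alternative to `eval` is resolving each name in the string against the `builtins` module and rejecting anything that is not a type, then passing the resulting tuple to `isinstance`. A sketch:

```python
import builtins

def check_type(value, spec: str) -> bool:
    """Check value against a type expression like "str | int" (builtins only)."""
    types = []
    for name in (part.strip() for part in spec.split("|")):
        t = getattr(builtins, name, None)
        if not isinstance(t, type):
            raise ValueError(f"unknown builtin type: {name!r}")
        types.append(t)
    return isinstance(value, tuple(types))
```

Because lookup goes through `getattr` on one known module, no arbitrary code can run, and non-type names like `print` or anything dotted are rejected explicitly.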
<python><eval><isinstance>
2023-03-24 14:32:13
3
797
Some Guy
75,834,913
1,621,736
How to cache response in google app engine
<p>We have some APIs which return exact same response to all calls based on some data which we change from time to time. We have a memcached layer where we have stored the response and have to do a fetch from memcached whenever the calls are received in API. Since there is high frequency of calls to API we would have ideally liked to keep the response cached for 1 minute at API level only (since we do not need immediate invalidation).</p> <p>Is there some way to store small response data at api/instance level so that we don't need to call the memcached for every request?</p>
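Since the response is identical for all callers and a 1-minute staleness window is acceptable, a per-instance in-process cache works: each App Engine instance keeps its own copy and refreshes from memcached (or the source) at most once per TTL. A stdlib sketch of such a TTL wrapper (names are illustrative; instances being recycled simply means a cold first call):

```python
import functools
import time

def ttl_cache(seconds: float):
    """Cache a zero-argument callable's result in-process for `seconds`."""
    def decorator(fn):
        state = {"value": None, "expires": 0.0}

        @functools.wraps(fn)
        def wrapper():
            now = time.monotonic()
            if now >= state["expires"]:
                state["value"] = fn()       # e.g. the memcached fetch
                state["expires"] = now + seconds
            return state["value"]
        return wrapper
    return decorator

calls = {"n": 0}

@ttl_cache(60)
def get_api_response():
    calls["n"] += 1                         # counts actual backend fetches
    return {"data": "expensive-to-build response"}
```

Within one TTL window every request is served from instance memory; only the first call per minute (per instance) hits memcached.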
<python><google-app-engine><google-cloud-platform>
2023-03-24 14:29:55
1
797
puneet
75,834,891
737,971
Apply a generic function to an all-to-all couple of tensors
<p>I have a question <a href="https://stackoverflow.com/questions/67741628/apply-a-function-over-all-combination-of-tensor-rows-in-pytorch">similar to this one</a>, using a <strong>generic function</strong> of the kind:</p> <pre><code>def f(a, b): return Something </code></pre> <p>This something can be a scalar, or a tensor. The application of <code>f</code> should be all-to-all from the rows of two matrices <strong>that are possibly aliased</strong> like:</p> <pre><code>def apply_very_slow(source1, source2): s = torch.tensor(...) # compute the right dimensions for i in range(k): for j in range(k): p = f(source1[i], source2[j]) s[i,j] = p </code></pre> <p>Of course this is immensely slow. I could be calling <code>apply_very_slow</code> with the same argument <code>apply_very_slow(M, M)</code> or different tensors <code>apply_very_slow(M, N)</code>, but I have no guarantee that in this case <code>M</code> and <code>N</code> point at the same data.</p> <p>Moreover, I'd like the results to be autograd-friendly. No problem if <code>f</code> must be converted to a class. In the questions I've linked they use <code>torch.nn.functional.kl_div</code>, but I need a solution to be generic.</p> <p>Any hints?</p>
<python><pytorch>
2023-03-24 14:28:29
1
2,499
senseiwa
75,834,673
19,980,284
Pandas convert values with commas to new value
<p>I have a column in a pandas df that looks like this:</p> <pre><code> specialty 0 1 1 2,5 2 2 3 6 4 missing 5 1 6 3 7 1,3,4 8 4 9 1 </code></pre> <p>And I'd like to convert all the entries containing more than one value to <code>7</code>, and convert all &quot;missing&quot; to 6. I know for the latter I can do <code>df['specialty'].replace({'missing':6})</code>, but I'm not sure how to convert the multi-value entries to <code>7</code>. The output would look like:</p> <pre><code> specialty 0 1 1 7 2 2 3 6 4 6 5 1 6 3 7 7 8 4 9 1 </code></pre> <p>I've tried <code>df['specialty'].str.contains(',') = 7</code> but that gives:</p> <pre><code>SyntaxError: cannot assign to function call </code></pre>
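`str.contains(',')` returns a boolean mask, which cannot be assigned to (that is what the `SyntaxError` means), but it can drive `Series.mask`. A sketch on the sample column, kept as strings since the column holds text like `'2,5'`:

```python
import pandas as pd

df = pd.DataFrame({
    "specialty": ["1", "2,5", "2", "6", "missing", "1", "3", "1,3,4", "4", "1"],
})

out = (df["specialty"]
       .mask(df["specialty"].str.contains(","), "7")   # multi-value entries -> 7
       .replace({"missing": "6"}))                     # missing -> 6
```

Append `.astype(int)` at the end if a numeric column is wanted once all entries are single values.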
<python><pandas><replace>
2023-03-24 14:07:58
3
671
hulio_entredas
75,834,651
21,420,742
Conditionally fill a column using multiple fields using Python
<p>I have a dataset where I am looking to see if an ID has switched Jobs as well as Departments through their history and if it happened at least once then create a new column [Transition] that is all 'Yes' or 'No'. Below is the code I have and what it shows:</p> <pre><code>df[Transition] = np.where((df['Job_Code'] != ['Adjusted_Code'] &amp; (df['Department'] != ['Adjusted Department']),'Yes','No') </code></pre> <p>Output:</p> <pre><code>ID Job_Code Adjusted_Code Job_Title Adjusted_Job Department Adjusted_Department Transition 1 327 362 Associate Manager Sales Sales No 1 327 362 Associate Manager Sales Sales No 1 362 362 Associate Manager Sales Sales No 2 358 455 Consult Advisor HR Finance No 2 455 455 Advisor Advisor Finance Finance Yes 2 455 455 Advisor Advisor Finance Finance Yes 3 215 381 Manager Director Tech Sales No 3 215 381 Manager Director Tech Sales No 3 215 381 Manager Director Tech Sales No 3 381 381 Manager Director Sales Sales Yes </code></pre> <p>What I want is group by the ID and if Yes is a value at least once then change [Transition] to Yes for all values by ID.</p>
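The shown snippet compares columns against literal lists (`['Adjusted_Code']`) instead of the columns themselves, and the per-ID "at least once" part needs a grouped `any`. A sketch on a reduced version of the data (column names assumed to use underscores, as in the output header):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 1, 2, 2],
    "Job_Code": [327, 362, 358, 455],
    "Adjusted_Code": [362, 362, 455, 455],
    "Department": ["Sales", "Sales", "HR", "Finance"],
    "Adjusted_Department": ["Sales", "Sales", "Finance", "Finance"],
})

# row-level: both job code AND department differ from their adjusted values
changed = (df["Job_Code"] != df["Adjusted_Code"]) & \
          (df["Department"] != df["Adjusted_Department"])

# ID-level: if any row of an ID changed, mark every row of that ID
df["Transition"] = np.where(changed.groupby(df["ID"]).transform("any"), "Yes", "No")
```

`transform("any")` broadcasts the per-group result back to every row, which is exactly the "Yes for all values by ID" requirement.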
<python><python-3.x><pandas><dataframe><numpy>
2023-03-24 14:05:40
3
473
Coding_Nubie
75,834,409
17,158,703
Array Averages for "Hour of the Date in Year" in Python
<p>I have two arrays:</p> <ol> <li>a 3D numpy array with shape (1, 87648, 100) and dtype float64</li> <li>a 1D array with shape (87648,) and type pandas DatetimeIndex</li> </ol> <p>the values of the 3D array along the axis=1 correspond to the hourly sequence datetimes in the 1D array. Total duration is 10 years with 2 leap years (i.e. 8760 * 8 + 8784 * 2 = 87648). There is no daylight saving so every day has exactly 24 corresponding values.</p> <p>I would like to calculate the average for the hour of the year across the 10 years worth of data. Meaning, across the 10 years, I want to average all hour 0 of the 1st of Jan, all hour 1 of 1st of Jan, ..., such that I have 8784 averages at the end, each being the average over 10 data points except for the 24 hours of Feb 29th, those would be the average over 2 data points each.</p> <p>Just to clarify more precisely, the desired outcome is a 3D array with shape (1, 8748, 100) and dtype float64.</p> <p>Let the 3D array be called &quot;volume&quot; and the 1D datetime array &quot;datetime_array&quot;, my incomplete last attempt was going in this direction, but I'm really puzzled with this problem:</p> <pre><code>hour_of_year = np.array([dt.hour + (dt.dayofyear - 1) * 24 for dt in datetime_array]) volume_by_hour = np.reshape(volume, (volume.shape[0], volume.shape[1] / 24, volume.shape[2], 24)) profile = np.array([np.mean(group, axis=0) for i, group in np.ndenumerate(volume)]).reshape(???) </code></pre> <p>The problem here in the first line already is that it doesn't distinguish between the dates. So the hour 1417 to 1440 in a regular year corresponds to 1st March, whereas that is 29th Feb in a leap year.</p> <p>If the leap year distinction makes it significantly more complicated, it is not that important and can be neglected.</p>
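A (month, day, hour) key distinguishes Feb 29 from Mar 1, unlike `dayofyear`, and grouping on it yields the 8784 averages directly without any reshape gymnastics. A sketch on toy data where every value equals its hour of day, so the averages are predictable (one data column instead of 100 to keep it light; the years are an assumed 2013-2022 span containing two leap years):

```python
import numpy as np
import pandas as pd

# 10 years of hourly stamps, 2013-2022 (leap years 2016 and 2020): 87648 hours
datetime_array = pd.date_range("2013-01-01", periods=87648, freq="h")

# toy volume, shape (1, 87648, 1): value = hour of day
volume = datetime_array.hour.to_numpy(dtype=float)[None, :, None]

# (month, day, hour) key keeps Feb 29 separate from Mar 1
key = datetime_array.month * 10000 + datetime_array.day * 100 + datetime_array.hour

# group the time axis and average; keys sort chronologically by construction
profile = pd.DataFrame(volume[0]).groupby(np.asarray(key)).mean().to_numpy()[None, ...]
```

The Feb 29 groups average only 2 samples while all others average 10, which is handled automatically by `groupby`; the desired shape is therefore (1, 8784, 100) with the real 100-column data.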
<python><numpy><datetime><multidimensional-array>
2023-03-24 13:42:47
1
823
Dattel Klauber
75,834,362
19,980,284
Pandas convert pairs of values into new values
<p>I have a pandas column that looks like this:</p> <pre><code> gender 0 1 1 2 2 1 3 1 4 3 5 2 6 1 7 4 8 5 9 1 </code></pre> <p>I want all the values of 1 to stay 1, 2 or 5 to become 2, and 3 or 4 to become 3. The desired output would then be</p> <pre><code> gender 0 1 1 2 2 1 3 1 4 3 5 2 6 1 7 3 8 2 9 1 </code></pre> <p>Would I just use if statements for this?</p>
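Rather than chained if statements, `Series.map` with an explicit value mapping expresses this in one step. A sketch on the sample column:

```python
import pandas as pd

s = pd.Series([1, 2, 1, 1, 3, 2, 1, 4, 5, 1], name="gender")

# 1 -> 1, {2, 5} -> 2, {3, 4} -> 3
out = s.map({1: 1, 2: 2, 5: 2, 3: 3, 4: 3})
```

`Series.replace` with the same dict also works and leaves unmapped values untouched, whereas `map` turns them into NaN, which can be a useful sanity check for unexpected codes.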
<python><pandas>
2023-03-24 13:38:54
1
671
hulio_entredas
75,834,302
423,839
Subscribe mechanism in telegram bot
<p>I'm trying to build a bot that basically has two threads. One thread is doing something, generating data and another thread is managing telegram bot. User can subscribe on telegram to receive updates from the first thread. This first thread calls the second one when it generates data and that data is sent out to all the users who subscribed.</p> <pre><code>import threading import asyncio from config import cfg from telegram import Update from telegram.ext import filters, ApplicationBuilder, ContextTypes, CommandHandler, MessageHandler import os subscribed = set() async def subscribe(update: Update, context: ContextTypes.DEFAULT_TYPE): subscribed.add(update.effective_chat.id) await context.bot.send_message( chat_id=update.effective_chat.id, text=&quot;You have subscribed&quot; ) class TelegramService(): def __init__(self, control) -&gt; None: self.control = control #asyncio.run(self.main()) self.thread = threading.Thread(target=self.mainM) self.thread.start() def mainM(self): self.loop = asyncio.new_event_loop() asyncio.set_event_loop(self.loop) self.application = ApplicationBuilder().token(cfg.telegram_TOKEN).build() self.application.add_handler(CommandHandler('subscribe', subscribe)) self.application.run_polling() def sendOut(self, msg): asyncio.ensure_future(self.sendAsync(msg), loop=self.loop) async def sendAsync(self, msg): async with self.application.bot: for chat_id in subscribed: await self.application.bot.send_message(text = msg, chat_id=chat_id) </code></pre> <p>This works more-or-less, I receive updates in Telegram and can also send stuff (like mentioned <code>/subscribe</code>, but I receive an erorr in the output:</p> <p>It is just too big to mention here, but it start with this:</p> <pre><code> Error while getting Updates: httpx.ReadError: Task exception was never retrieved future: &lt;Task finished coro=&lt;TelegramService.sendAsync() done, defined at .\communication\telegramService.py:83&gt; exception=NetworkError('httpx.ReadError: ')&gt; 
Traceback (most recent call last): File &quot;d:\Development\.Projects.Chast\ViGrabber2\.venv\lib\site-packages\httpcore\_exceptions.py&quot;, line 10, in map_exceptions yield File &quot;d:\Development\.Projects.Chast\ViGrabber2\.venv\lib\site-packages\httpcore\backends\asyncio.py&quot;, line 34, in read return await self._stream.receive(max_bytes=max_bytes) File &quot;d:\Development\.Projects.Chast\ViGrabber2\.venv\lib\site-packages\anyio\streams\tls.py&quot;, line 195, in receive data = await self._call_sslobject_method(self._ssl_object.read, max_bytes) File &quot;d:\Development\.Projects.Chast\ViGrabber2\.venv\lib\site-packages\anyio\streams\tls.py&quot;, line 137, in _call_sslobject_method data = await self.transport_stream.receive() File &quot;d:\Development\.Projects.Chast\ViGrabber2\.venv\lib\site-packages\anyio\_backends\_asyncio.py&quot;, line 1272, in receive raise ClosedResourceError from None anyio.ClosedResourceError During handling of the above exception, another exception occurred: .... </code></pre> <p>There goes two more <code>httpcore.ReadError</code>, two <code>httpx.ReadError</code>, one <code>anyio.ClosedResourceError</code>, two <code>httpcore.ReadError</code> and ends with <code>RuntimeError: This HTTPXRequest is not initialized!</code> followed by <code> telegram.error.NetworkError: Unknown error in HTTP implementation: RuntimeError('This HTTPXRequest is not initialized!')</code> all of these are connected by <code>The above exception was the direct cause of the following exception: </code></p> <p>I've tried to increase timeouts but I think it is not the problem. I guess I am doing something very wrong with <code>asyncio</code></p>
<python><telegram><telegram-bot><python-telegram-bot>
2023-03-24 13:32:58
1
8,482
Archeg
75,834,182
6,303,377
Safe way to update a mysql table entry with a dictionary in python (safeguarding against SQL injection)
<p>I am writing a helper function for my webapp that updates the database based on some information that is fetched from a foreign API (not user entered). I have the following code, but it was flagged as 'unsafe' by the <a href="https://bandit.readthedocs.io/en/latest/start.html" rel="nofollow noreferrer">Bandit</a> python package.</p> <p>Ideally I could write a function in a way that the columns to be updated are hardcoded, but I think it should also be possible dynamically.</p> <p>Is this a safe way (no possibility for sql injections) to update a table?</p> <pre><code>import mysql.connector as database def update_message_by_uid(uid: str, update_dict: dict) -&gt; None: # Fetches the previous entry from the database using the unique identifier message_info_dict = get_message_by_uid(uid) # check that all the keys of the update dict are also in the original dict assert set(update_dict.keys()) &lt;= set( message_info_dict.keys() ), &quot;Some of the keys in the dictionary passed are not valid database columns&quot; # We update the entry for all entries in the dictionary containing the updates statement = 'UPDATE messages SET {} WHERE uid = %s'.format(&quot;, &quot;.join('{}=%s'.format(k) for k in update_dict)) # Concatenates the values of the dict with the unique identifier to pass it to the execution method as one variable data = list(update_dict.values()) + [uid] cursor.execute(statement, data) </code></pre>
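The overall pattern of interpolating only identifiers and parameterizing all values is sound; the two fragile points are the `assert` (stripped when Python runs with `-O`) and deriving the whitelist from a fetched row rather than a fixed set. A sketch with a hardcoded column whitelist and a real exception (column names here are hypothetical):

```python
ALLOWED_COLUMNS = {"subject", "body", "read_flag"}   # hypothetical whitelist

def build_update_statement(update_dict: dict):
    """Interpolate only whitelisted identifiers; values stay parameterized."""
    unknown = set(update_dict) - ALLOWED_COLUMNS
    if unknown:
        # raise instead of assert: asserts vanish under `python -O`
        raise ValueError(f"not valid database columns: {sorted(unknown)}")
    assignments = ", ".join(f"`{col}` = %s" for col in update_dict)
    return (f"UPDATE messages SET {assignments} WHERE uid = %s",
            list(update_dict.values()))

stmt, params = build_update_statement({"subject": "hi", "read_flag": 1})
```

Since every interpolated identifier is checked against a closed set before touching the SQL string, no attacker-controlled text can reach the statement; the values themselves still go through the driver's parameter binding via `cursor.execute(stmt, params)`.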
<python><mysql><sql><sql-injection><mysql-connector>
2023-03-24 13:22:46
1
1,789
Dominique Paul
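A hedged sketch of how the question above is usually resolved: keep the only dynamic SQL text restricted to a static whitelist of column names, and pass every value as a placeholder. `ALLOWED_COLUMNS` and `build_update` are illustrative names, not an existing API; the whitelist would be adapted to the real `messages` schema.

```python
# Hypothetical helper: column names may only come from a hardcoded whitelist,
# so the only dynamic SQL text is known-safe; all values stay as %s params.
ALLOWED_COLUMNS = {"subject", "body", "status"}

def build_update(table: str, update_dict: dict, uid: str):
    bad = set(update_dict) - ALLOWED_COLUMNS
    if bad:
        raise ValueError(f"refusing to update unknown columns: {bad}")
    assignments = ", ".join(f"`{col}`=%s" for col in update_dict)
    statement = f"UPDATE `{table}` SET {assignments} WHERE uid = %s"
    params = list(update_dict.values()) + [uid]
    return statement, params

stmt, params = build_update("messages", {"status": "read"}, "abc-123")
print(stmt)    # UPDATE `messages` SET `status`=%s WHERE uid = %s
print(params)  # ['read', 'abc-123']
```

Compared with checking keys against the previously fetched row, a static whitelist makes it obvious to both Bandit and a reviewer that no user-controlled text can reach the SQL string; the values themselves still go through the driver's parameter binding.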
75,834,092
18,018,869
Expand and transform dataframe. Compare each row with all other rows
<p>Please compare Bob's byte array with the byte arrays of all the other people. Do this with every person.</p> <pre class="lang-py prettyprint-override"><code>columns = [&quot;pasta&quot;, &quot;potatoes&quot;, &quot;rice&quot;] data = [[1, 0, 1], [0, 1, 1], [1, 1, 1]] index = [&quot;tom&quot;, &quot;jenny&quot;, &quot;bob&quot;] df = pd.DataFrame(data=data, columns=columns, index=index) # output # pasta potatoes rice # tom 1 0 1 # jenny 0 1 1 # bob 1 1 1 </code></pre> <p>Explanation of data: 1 = likes food of columnname // 0 doesn't like food of columnname.</p> <p>I want to compare every person's bytearray to the byte array of all the other persons. 1 if they differ; 0 if they don't differ.</p> <p>So wished output would look like</p> <pre class="lang-py prettyprint-override"><code> pasta potatoes rice tom jenny 1 1 0 tom bob 0 1 0 jenny tom 1 1 0 jenny bob 1 0 0 bob tom 0 1 0 bob jenny 1 0 0 </code></pre> <p>I know that byte-array of &quot;bob-jenny&quot; is the same as &quot;jenny-bob&quot; but I need it that way. I don't care if it is going to be a dataframe with multiindex or if the persons are in two distinct columns. Thank you!</p>
<python><pandas><numpy>
2023-03-24 13:11:48
4
1,976
Tarquinius
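One way to get the pairwise table asked for above without an explicit double loop is NumPy broadcasting: comparing the array against itself with two inserted axes yields a person-by-person-by-food difference cube. A sketch using the question's data:

```python
import numpy as np
import pandas as pd

columns = ["pasta", "potatoes", "rice"]
data = [[1, 0, 1], [0, 1, 1], [1, 1, 1]]
index = ["tom", "jenny", "bob"]
df = pd.DataFrame(data=data, columns=columns, index=index)

a = df.to_numpy()
# diff[i, j, k] == 1 where person i and person j disagree about food k
diff = (a[:, None, :] != a[None, :, :]).astype(int)

# All ordered pairs with i != j, so both (tom, jenny) and (jenny, tom) appear
pairs = [(i, j) for i in range(len(index)) for j in range(len(index)) if i != j]
out = pd.DataFrame(
    [diff[i, j] for i, j in pairs],
    index=pd.MultiIndex.from_tuples([(index[i], index[j]) for i, j in pairs]),
    columns=columns,
)
print(out)
```

For n people this builds all n*(n-1) ordered rows, matching the requested output where each pair appears in both directions.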
75,833,973
7,951,901
How do I change the value of a field using odoo migration manager?
<p>I created a migration folder with a pre-migration file in odoo to change the value of a field in odoo, but the migration seems not to be triggered.</p> <p>I created a pre-migration file like this:</p> <pre><code>import logging from odoo import SUPERUSER_ID, api from openupgradelib import openupgrade logger = logging.getLogger(__name__) @openupgrade.migrate() def migrate(cr, version): env = api.Environment(cr, SUPERUSER_ID, {}) logger.info(&quot;-----Renaming Project Templates-----&quot;) if planning_project := env[&quot;project.project&quot;].search( [(&quot;name&quot;, &quot;=&quot;, &quot;Planning&quot;)], limit=1 ): planning_project.write({&quot;name&quot;: &quot;Location Qualification&quot;}) logger.info(&quot;----completed------&quot;) </code></pre> <p>My file structure for the migration is:</p> <pre><code>migrations 15.0.1.0.0 pre-migration.py </code></pre> <p>But this seems to have no effect when I update the module.</p>
<python><odoo><odoo-15>
2023-03-24 12:59:31
1
405
A.Sasori
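A hedged guess at the cause above, since the manifest is not shown: Odoo runs `migrations/<version>/pre-*.py` scripts (the name `pre-migration.py` matches that pattern) only during a module *upgrade* in which the manifest version crosses the folder's version. If `__manifest__.py` still declares the previously installed version, updating the module finds no version change and never invokes the script. An illustrative sketch:

```python
# __manifest__.py (illustrative sketch, not the actual manifest)
{
    "name": "my_module",          # hypothetical module name
    # Must be raised to (at least) the migration folder's version, here
    # 15.0.1.0.0 -- migration scripts only run while upgrading a module
    # whose installed version is older than this.
    "version": "15.0.1.0.0",
    "depends": ["project"],
}
```

Checking `ir_module_module.latest_version` in the database against the manifest version is one way to confirm whether an upgrade is actually being detected.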
75,833,935
8,699,450
Is there any way to specify a typehinted Pytorch DataLoader?
<p>Say I have the following:</p> <pre class="lang-py prettyprint-override"><code>class SomeClass: def some_function(dataloader: DataLoader): for idx, batch in enumerate(dataloader): ... do something with batch ... </code></pre> <p>I would like to type the dataloader such that I can show through function parameter typehinting what format I expect <code>batch</code> to have. For example, I would like to have <code>batch</code> be of type <code>Tuple[Tensor, Tensor]</code> or I would like to type it <code>Tuple[Tensor, Tensor, CustomObject]</code>. Is there any way in which I can specify this?</p> <p>I thought that maybe it would be possible through an AbstractClass inheriting DataLoader and then somehow specifying a type, but I'm not sure how that would look.</p>
<python><pytorch><python-typing><pytorch-dataloader>
2023-03-24 12:55:47
2
380
Robin van Hoorn
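In recent PyTorch releases `torch.utils.data.DataLoader` is declared as `Generic[T_co]`, so (assuming such a version) the annotation can likely be written directly as `DataLoader[Tuple[Tensor, Tensor]]`. The torch-free sketch below uses a minimal stand-in class to show the pattern without requiring torch to be installed:

```python
from typing import Generic, Iterator, List, Tuple, TypeVar

T_co = TypeVar("T_co", covariant=True)

class FakeDataLoader(Generic[T_co]):
    """Stand-in for torch.utils.data.DataLoader, which recent torch
    versions also declare as Generic[T_co]."""

    def __init__(self, batches: List[T_co]) -> None:
        self._batches = batches

    def __iter__(self) -> Iterator[T_co]:
        return iter(self._batches)

def some_function(dataloader: FakeDataLoader[Tuple[int, int]]) -> int:
    # The parameter annotation tells the checker each batch is an (int, int) tuple
    return sum(a + b for a, b in dataloader)

total = some_function(FakeDataLoader([(1, 2), (3, 4)]))
print(total)  # 10
```

If an older torch version rejects the subscript at runtime, quoting the annotation (or `from __future__ import annotations`) keeps it evaluation-free while type checkers still see the batch type.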
75,833,861
305,904
Gateway Timeout when performing REST API CALL via Python3.9
<p>I am trying to query <a href="https://www.nseindia.com/get-quotes/equity?symbol=EQUITASBNK" rel="nofollow noreferrer">this site</a> for historical price volume data, but it seems my GET queries are timing out. How can I set up my requests to bypass this problem?</p> <p>The way the code is set up is:</p> <ol> <li>The landing page of the ticker that I want to download historical data for (here EQUITASBNK), is first queried</li> <li>I extract the cookie for the response I receive</li> <li>I use this cookie and modified parameters to (try to extract) the historical data</li> </ol> <p>Step 1 -&gt; Status 200<br /> Step 3 -&gt; Code hangs/Stuck waiting for response</p> <p>This is my code:</p> <pre><code>class NSE(Exchange): def __init__(self): self.url_landing = &quot;https://www.nseindia.com/get-quotes/equity?&quot; self.url_quotes =&quot;https://www.nseindia.com/api/historical/cm/equity?&quot; def fetchbulkprices(self, ticker, fromdate, todate): sys.stderr.write(&quot;Querying Ticker = {} fromdate = {} todate {} \n&quot;.format(ticker, fromdate, todate)) headers = { &quot;authority&quot;: &quot;www.nseindia.com&quot;, &quot;method&quot;: &quot;GET&quot;, &quot;path&quot;: &quot;/api/historical/cm/equity?symbol=&quot; + ticker + &quot;&amp;series = [%22EQ%22]&amp;from=&quot; + fromdate + &quot;&amp;to=&quot;+ todate+ &quot;&amp;csv=true&quot;, &quot;scheme&quot;: &quot;https&quot;, &quot;accept&quot;: &quot;*/*&quot;, &quot;accept-Encoding&quot;: &quot;gzip, deflate, br&quot;, &quot;accept-Language&quot;: &quot;en-GB,en-US;q=0.9,en;q=0.8&quot;, &quot;referer&quot;: &quot;https://www.nseindia.com/get-quotes/equity?symbol=&quot;+ticker, &quot;sec-ch-ua&quot;: &quot;Google Chrome&quot; + &quot;;&quot; + &quot;v=&quot;&quot;111&quot;&quot;, &quot;&quot;Not(A:Brand&quot;&quot;&quot; + &quot;;&quot; + &quot;v=&quot;&quot;8&quot;&quot;&quot; + &quot;,&quot;&quot;Chromium&quot;&quot;&quot;, &quot;sec-ch-ua-mobile&quot; : &quot;?0&quot;, &quot;sec-ch-ua-platform&quot; : 
&quot;Windows&quot;, &quot;sec-fetch-dest&quot;: &quot;empty&quot;, &quot;sec-fetch-mode&quot;: &quot;cors&quot;, &quot;sec-fetch-site&quot;: &quot;same-origin&quot;, &quot;user-agent&quot;: &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36&quot;, &quot;x -requested-with&quot;: &quot;XMLHttpRequest&quot; } session = requests.Session() params = {&quot;symbol&quot;: ticker} response = requests.get(self.url_landing, params=params, headers=headers) cookies = response.cookies params = {&quot;symbol&quot;: ticker, &quot;series&quot;: &quot;[%22EQ%22]&quot;, &quot;fromDate&quot;: from date,&quot;toDate&quot;: todate, &quot;csv&quot;: True} response = session.get(self.url_quotes, params=params, headers=headers, cookies=cookies) if response.status_code == 200: sys.stderr.write(&quot;Queried successfully&quot;) </code></pre> <p>A few sample queries can be (Symbol, From Date, To Date):</p> <ol> <li>AAVAS, 18-09-2020, 23-01-2021</li> <li>EQUITASBNK, 18-09-2020, 23-01-2021</li> <li>MASTEK, 18-09-2020, 23-01-2021</li> </ol>
<python><json><python-3.x><web-scraping><python-requests>
2023-03-24 12:49:34
1
873
Soham
75,833,833
1,284,927
Python type hinting with Visual Studio Code & Flake8 plugin (without "missing" Annotations)
<p>I have a bigger Python project and want to introduce type hinting step-by-step (I would like to suppress the &quot;Missing type annotation&quot; messages for now) to give me (coming from the strongly typed world) more confidence with Python. But I'm struggling to setup Flake8 with some meaningful type hinting in VS Code.</p> <p>I expected some mechanism to do lint-time type checking when calling functions, for example, but it seems the Flake8 Annotations plugin doesn't seem to care at all. Do I have a misconception about what type hinting is all about?</p> <p>Also type annotations (ANN###) won't show up in VS Code at all. How can I setup VS Code to squiggle annotated code as warning (yellow)?</p> <p>Python VS Code extension is installed of course and I experimented with various <code>python.linting.flake8Args</code> in <code>settings.json</code>. I also installed the Python Type Hint extension, which does nice auto-completes but doesn't affect linting.</p> <p>Anybody here who is successfully working on projects with type hinting in VS Code?</p>
<python><visual-studio-code><python-typing><flake8>
2023-03-24 12:46:32
1
3,075
thomiel
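Partly a misconception, yes: flake8 and its plugins (including flake8-annotations, the source of the ANN codes) are purely syntactic linters. They can report that an annotation is *missing*, but they never check that a call matches the annotated types; lint-time type checking belongs to a real type checker such as mypy or Pyright (Pylance in VS Code, via `python.analysis.typeCheckingMode`). For gradual adoption, one hedged sketch is to keep flake8-annotations installed but ignore the codes you are not ready for (the exact ANN codes to list depend on which checks you want):

```ini
; setup.cfg / tox.ini / .flake8 -- sketch: silence only the
; "missing annotation" codes from flake8-annotations during adoption
[flake8]
extend-ignore = ANN001, ANN101, ANN201
```

For the squiggles, flake8 diagnostics surface as warnings once the linter is enabled in the Python extension (`python.linting.flake8Enabled: true` in older extension versions, or the dedicated Flake8 extension in newer ones).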
75,833,826
12,760,550
Search for LOV columns in dataframe and replace with codes using other dataframe
<p>Imagine I have a dirty dataframe of employees with their ID and Contract-related information per country.</p> <p>Some columns of this dataframe are LOV columns (depending on the country, some columns are LOV for just one country, others some or all of them) and some LOV columns are mandatory and some are not (that is just used to understand if a blank value is accepted or not).</p> <p>We would need to check, using another mapping dataframe:</p> <ul> <li>if the values provided exist in the mapping dataframe and,</li> <li>if so, replace the value provided with the corresponding code on the dataframe.</li> </ul> <p>If the value provided is not on the list, create a new column on the main dataframe named &quot;Errors&quot; where it says the name of the column it errored (if more than 1 column, maybe save the name in a list on that column).</p> <p>So from this dataframe:</p> <pre><code>ID Country Contract Type 1 CZ Permanent BOFF 1 ES Fixed-term . 2 CZ Contractor Front-Office 3 PT Permanent 4 PT 2022-01-01 Employee 4 PT Fixed-term Office 4 ES Employee 5 SK Permanent Employee </code></pre> <p>And using this mapping:</p> <pre><code>Country Field Values Code Mandatory CZ Contract Permanent PE Yes CZ Contract Fixed-term FX Yes CZ Contract Contractor CT Yes ES Contract Permanent PERMA No SK Contract Permanent PER-01 Yes SK Contract Fixed-term FIX-01 Yes ES Type Office OFF Yes CZ Type Back-Office BOFF Yes CZ Type Front-Office FOFF Yes PT Type Employee EMP No PT Type Front-Office FRONT No </code></pre> <p>Would result in this dataframe:</p> <pre><code>ID Country Contract Type Errors 1 CZ PE BOFF ['Type'] 1 ES Fixed-term . ['Contract','Type'] 2 CZ CT FOFF 3 PT Permanent 4 PT 2022-01-01 FRONT ['Type'] 4 PT Fixed-term Office ['Type'] 4 ES Employee ['Contract','Type'] 5 SK PER-01 Employee </code></pre> <p>Thank you so much for the support!</p>
<python><pandas><dataframe><group-by><lines-of-code>
2023-03-24 12:45:35
1
619
Paulo Cortez
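The expected output in the question above has a few ambiguities (e.g. how blanks and the Mandatory flag interact), so the sketch below implements only the core rules stated in the bullets: a column counts as an LOV for a country only if that (Country, Field) pair appears in the mapping; known values are replaced by their code; provided-but-unknown values are recorded in Errors. `apply_lov` is an illustrative helper name, and mandatory/blank handling is left as an extension.

```python
import pandas as pd

def apply_lov(df, mapping, lov_cols):
    """Illustrative helper: replace LOV values with codes, collect errors."""
    df = df.copy()
    errors = [[] for _ in range(len(df))]
    for col in lov_cols:
        m = mapping[mapping["Field"] == col]
        # (Country, Value) -> Code lookup for this field
        lookup = dict(zip(zip(m["Country"], m["Values"]), m["Code"]))
        lov_countries = set(m["Country"])
        col_pos = df.columns.get_loc(col)
        for pos, (country, value) in enumerate(zip(df["Country"], df[col])):
            if country not in lov_countries:
                continue  # not an LOV column for this country: keep as-is
            if (country, value) in lookup:
                df.iloc[pos, col_pos] = lookup[(country, value)]
            elif value:  # value provided but not in the list of values
                errors[pos].append(col)
    df["Errors"] = [e if e else "" for e in errors]
    return df
```

A minimal usage example: for a CZ row with Contract "Permanent" the value becomes "PE", while an ES row with Contract "Fixed-term" (absent from the ES list) keeps its value and gets `['Contract']` in Errors.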
75,833,728
17,487,457
Pandas equivalent of SQL LAG()
<p>I have this <code>dataframe</code>:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame( {'id': [10, 10, 10, 12, 12, 12, 12, 13, 13, 13], 'session_id': [1, 3, 9, 1, 3, 5, 7, 1, 3, 5], 'start_time': [5866, 6810, 8689, 8802, 8910, 9013, 9055, 9157, 9654, 9665], 'end_time': [6808, 8653, 8722, 8881, 9001, 9049, 9062, 9651, 9659, 9725] }) df.head() id session_id start_time end_time 0 10 1 5866 6808 1 10 3 6810 8653 2 10 9 8689 8722 3 12 1 8802 8881 4 12 3 8910 9001 </code></pre> <p>I want a new column stay_time, to store the duration a user stays after current session, before the start of new session.</p> <p>Required:</p> <pre class="lang-py prettyprint-override"><code> id session_id start_time end_time stay_time 0 10 1 5866 6808 0 1 10 3 6810 8653 2 2 10 9 8689 8722 36 3 12 1 8802 8881 0 4 12 3 8910 9001 29 5 12 5 9013 9049 12 6 12 7 9055 9062 6 7 13 1 9157 9651 0 8 13 3 9654 9659 3 9 13 5 9665 9725 6 </code></pre> <p>In <code>SQL</code>, this is equivalent to:</p> <pre class="lang-sql prettyprint-override"><code># assuming participants is the table select p.*, start_time - lag(end_time, 1, start_time) over(partition by id order by session_id) stay_time from participants p </code></pre>
<python><pandas><dataframe>
2023-03-24 12:35:17
2
305
Amina Umar
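The pandas equivalent of `lag(end_time, 1, start_time) over (partition by id order by session_id)` is a `groupby(...).shift(1)`, with SQL's third (default) argument becoming a `fillna`. A sketch on the question's data:

```python
import pandas as pd

df = pd.DataFrame(
    {'id': [10, 10, 10, 12, 12, 12, 12, 13, 13, 13],
     'session_id': [1, 3, 9, 1, 3, 5, 7, 1, 3, 5],
     'start_time': [5866, 6810, 8689, 8802, 8910, 9013, 9055, 9157, 9654, 9665],
     'end_time': [6808, 8653, 8722, 8881, 9001, 9049, 9062, 9651, 9659, 9725]})

# ORDER BY session_id within each id partition, then LAG == shift(1)
df = df.sort_values(['id', 'session_id'])
prev_end = df.groupby('id')['end_time'].shift(1)

# LAG's default argument (start_time) becomes fillna on the shifted column
df['stay_time'] = (df['start_time'] - prev_end.fillna(df['start_time'])).astype(int)
print(df)
```

The explicit `sort_values` mirrors the `order by` in the window clause; `groupby` supplies the `partition by`.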
75,833,721
762,688
Unpacking an Optional value
<p>I have an <code>Optional[str]</code> that I know <code>is not None</code> and am passing this into a method that requires a <code>str</code> argument.</p> <p>MyPy complains that the types do not match (<code>Optional[str]</code> vs <code>str</code>). I'm trying to write a method to unpack the wrapped value in optional and would like it to be generic so I don't need to write one for each final datatype.</p> <p>I can't seem to find what the type of T is so I may do the appropriate cast. Below is the incorrect code:</p> <pre><code>from typing import TypeVar, Generic, Optional, cast T = TypeVar('T') def unpack_optional(cls, opt: Optional[Generic[T]]) -&gt; T: if opt is None: raise ValueError(&quot;Cannot unpack None into T&quot;) packed_value = cast(T, opt) return packed_value </code></pre> <p>Is it possible to do this in Python?</p>
<python><mypy>
2023-03-24 12:34:39
1
759
sean.net
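The snippet above fails because `Generic[T]` is not a type usable inside `Optional[...]`; parameterizing with the TypeVar directly is enough, and mypy then binds `T` per call, so no `cast` is needed. A minimal sketch:

```python
from typing import Optional, TypeVar

T = TypeVar("T")

def unpack_optional(opt: Optional[T]) -> T:
    """Narrow Optional[T] to T at runtime; mypy infers T from the argument."""
    if opt is None:
        raise ValueError("cannot unpack None")
    return opt

maybe_name: Optional[str] = "alice"
name: str = unpack_optional(maybe_name)  # mypy sees str here, no cast needed
print(name)
```

Note that mypy also narrows `Optional[str]` to `str` automatically after an inline `if value is not None:` check or an `assert value is not None`, which is often enough without a helper.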
75,833,518
4,179,570
Accessing child orderable attribute in page model template
<p>The <a href="https://docs.wagtail.org/en/stable/getting_started/tutorial.html#parents-and-children" rel="nofollow noreferrer">wagtail documentation</a> outlines how to iterate over an orderable property in a page template, however, if I try the same from the parent page, whilst iterating on the children, I am unable to access the children's page orderable prop</p> <p>Ex:</p> <pre><code>{% for post in page.get_children %} &lt;div class=&quot;max-w-sm bg-white border border-gray-200 rounded-lg shadow dark:bg-gray-800 dark:border-gray-700&quot;&gt; &lt;a href=&quot;{% pageurl post %}&quot;&gt; {% for item in post.gallery_images.all %} {% image item.image fill-320x240 as img %} &lt;img alt=&quot;{{ item.caption }}&quot; class=&quot;block h-full w-full rounded-lg object-cover object-center&quot; src=&quot;{{ img.url }}&quot; /&gt; &lt;img class=&quot;rounded-t-lg&quot; src=&quot;{{ img.url }}&quot; alt=&quot;{{ img.url }}&quot; class=&quot;carousel-img&quot;/&gt; &lt;p class=&quot;mb-3 font-normal text-gray-700 dark:text-gray-400&quot;&gt;{{ img.url }}&lt;/p&gt; {% endfor %} &lt;/a&gt; &lt;div class=&quot;p-5&quot;&gt; &lt;a href=&quot;{% pageurl post %}&quot;&gt; &lt;h5 class=&quot;mb-2 text-2xl font-bold tracking-tight text-gray-900 dark:text-white&quot;&gt;{{ post.title }}&lt;/h5&gt; &lt;/a&gt; &lt;p class=&quot;mb-3 font-normal text-gray-700 dark:text-gray-400&quot;&gt;Here are the biggest enterprise technology acquisitions of 2021 so far, in reverse chronological order.&lt;/p&gt; &lt;a href=&quot;#&quot; class=&quot;inline-flex items-center px-3 py-2 text-sm font-medium text-center text-white bg-blue-700 rounded-lg hover:bg-blue-800 focus:ring-4 focus:outline-none focus:ring-blue-300 dark:bg-blue-600 dark:hover:bg-blue-700 dark:focus:ring-blue-800&quot;&gt; Read more &lt;svg aria-hidden=&quot;true&quot; class=&quot;w-4 h-4 ml-2 -mr-1&quot; fill=&quot;currentColor&quot; viewBox=&quot;0 0 20 20&quot; xmlns=&quot;http://www.w3.org/2000/svg&quot;&gt;&lt;path 
fill-rule=&quot;evenodd&quot; d=&quot;M10.293 3.293a1 1 0 011.414 0l6 6a1 1 0 010 1.414l-6 6a1 1 0 01-1.414-1.414L14.586 11H3a1 1 0 110-2h11.586l-4.293-4.293a1 1 0 010-1.414z&quot; clip-rule=&quot;evenodd&quot;&gt;&lt;/path&gt;&lt;/svg&gt; &lt;/a&gt; &lt;/div&gt; &lt;/div&gt; {% endfor %} &lt;/div&gt; </code></pre> <p>I'm basically trying to access the img url's to display the children's images from the parent (index page)</p>
<python><jinja2><wagtail>
2023-03-24 12:13:55
0
2,423
glls
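A hedged guess at the cause above: `page.get_children` yields base `Page` objects, which do not expose fields defined on the child page model (such as `gallery_images`). Calling `.specific` on the queryset (or on each post) returns the concrete subclass instances, after which the orderable relation resolves as in the tutorial:

```html
{# Sketch: page.get_children returns base Page objects; .specific upcasts #}
{# them to the concrete model that defines gallery_images.                #}
{% for post in page.get_children.specific %}
    {% for item in post.gallery_images.all %}
        {% image item.image fill-320x240 as img %}
        <img src="{{ img.url }}" alt="{{ item.caption }}" class="rounded-t-lg" />
    {% endfor %}
{% endfor %}
```

The `{% image %}` tag assumes `{% load wagtailimages_tags %}` at the top of the template, as in the tutorial page this is based on.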
75,833,516
5,506,400
Debugging Python, stay in current stack trace (not cycle after each command)
<p>Say I am debugging some Python code (for interest's sake, a Django application) that has a <code>breakpoint()</code> in it. While stepping through the code, another request may come in and hit the same breakpoint. Now there are multiple concurrent debugger instances, and sometimes more than 2.</p> <p>After each debugger command it cycles to the next debugging instance.</p> <p>So say one session runs <code>x=1</code>. Then you say <code>pp x</code>, and x will be undefined because it switched to another debugging session. This makes debugging nearly impossible, since it cycles through an unknown number of instances.</p> <p>How do I make the debugger never switch away from the current debugging session, but instead wait until it is done (e.g. via 'continue')?</p>
<python><pdb>
2023-03-24 12:13:39
0
2,550
run_the_race
75,833,473
5,588,347
Extract data from pdf in table format to excel/csv - Amazon textract
<p>Today, I'm trying to extract table from pdf files into an excel using Amazon Textract! Initially I thought this is going to be very simple because it was till I was working on it with Java sdk's. But now I'm stuck. I don't want to use lambda, I don't want to use S3 bucket to upload the files.</p> <p><strong>What I need and tried</strong>: extracting entire table from multiple pdf files into excel.</p> <p>I don't want to read pdf into a text file and than write logic to fill the excel, I can do this in pure c#.</p> <p>This is not about extracting data from table in key-value pair. This I have already tried: <a href="https://github.com/kinjalpgor/AWS-Textract-Key-Value-Pair-Demo-CSharp/blob/master/CSharpAWSTextract/Program.cs" rel="nofollow noreferrer">Key-Value Pair demo</a>. With this, I'm able to get data from images and pdf's in a key-value format. But but but, after going through a lot of documentations I got to know, <code>AnalyzeDocumentRequest</code> works only with single page images/pdf's and not with pdf's containing multiple pages.</p> <p>StartDocumentTextDetection I tried but again this has S3 bucket as a necessary parameter I guess and SNS, SQS, etc. Please correct me if I'm wrong.</p> <p>So, <strong>Where I'm stuck</strong>:</p> <ul> <li>I have lots of solution on google in Python and Java like:</li> </ul> <p><a href="https://stackoverflow.com/questions/66679210/export-all-table-data-from-pdf-to-excel-using-amazon-textract">Export all table data from PDF to Excel using Amazon textract</a></p> <p><a href="https://stackoverflow.com/questions/69034908/amazon-textract-without-using-amazon-s3">Amazon Textract without using Amazon S3</a></p> <p><a href="https://stackoverflow.com/questions/59038306/how-to-use-the-amazon-textract-with-pdf-files">How to use the Amazon Textract with PDF files</a> - again python and got to know something new about boto which I'm not sure about. Lol!</p> <ul> <li>I want to implement this in C#.Net. 
I'm not getting proper documentation on this.</li> <li>Obviously, I have gone through <a href="https://github.com/aws-samples/amazon-textract-code-samples" rel="nofollow noreferrer">this</a> but that's not what I want.</li> <li>Not necessarily but even if the solution is without usage of S3 bucket that would be more great.</li> </ul> <p>It would be really great if anyone can help me with this. Thanks in advance!</p>
<python><java><c#><amazon-textract><pdf-parsing>
2023-03-24 12:08:26
0
958
StackUseR
75,833,456
673,018
Highlight dataframe having NaN (matplotlib) while writing to the PDF file(PdfPages)?
<p>I'm trying to perform two things:</p> <ol> <li>Highlight 'NaN' values with red color for the dataframe.</li> <li>Add the dataframe to the PDF file.</li> </ol> <p>I'm able to display the <code>dataframe</code> successfully in the PDF pages, however <code>NaN</code> values are not reflected with the red color inside the PDF.</p> <p>I have tried following code:</p> <pre><code> df.style.highlight_null('red') with PdfPages('stale_curve_report.pdf') as pdf: fig, ax = plt.subplots() ax.axis('off') ax.table(cellText=df.values, colLabels=df.columns, rowLabels=df.index, loc='center',colWidths=[0.12] * 15) pdf.savefig(fig) plt.close(fig) </code></pre> <p>I have tried few other stuffs using <code>seaborn</code> also:</p> <pre><code>sns.heatmap(df.isna(), cmap=['red', 'white', 'white']) </code></pre> <p>I think, I need an option inside the <code>ax.table</code> to highlight the dataframe.</p>
<python><pandas><matplotlib><pdfpages>
2023-03-24 12:06:26
1
13,094
Mandar Pande
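`df.style.highlight_null` only affects the Styler's HTML (and Excel) rendering; it never reaches `ax.table`. The table call accepts an explicit `cellColours` grid instead, which can be derived from `df.isna()`. A hedged sketch with a tiny illustrative frame:

```python
import os

import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from matplotlib.backends.backend_pdf import PdfPages

df = pd.DataFrame({"a": [1.0, np.nan], "b": [np.nan, 2.0]})

# One colour string per cell, same shape as the table body
cell_colours = np.where(df.isna(), "red", "white")

with PdfPages("stale_curve_report.pdf") as pdf:
    fig, ax = plt.subplots()
    ax.axis("off")
    ax.table(cellText=df.fillna("NaN").astype(str).values,
             cellColours=cell_colours,
             colLabels=df.columns,
             rowLabels=df.index.astype(str),
             loc="center")
    pdf.savefig(fig)
    plt.close(fig)

pdf_size = os.path.getsize("stale_curve_report.pdf")
```

The `fillna("NaN")` keeps the missing cells visible as text while the red background marks them; adjust `colWidths` as in the original call if the real frame has 15 columns.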
75,833,328
1,473,517
Why do these two different ways to sum a 2d array have such different performance?
<p>Consider the following two ways of summing all the values in a 2d numpy array.</p> <pre><code>import numpy as np from numba import njit a = np.random.rand(2, 5000) @njit(fastmath=True, cache=True) def sum_array_slow(arr): s = 0 for i in range(arr.shape[0]): for j in range(arr.shape[1]): s += arr[i, j] return s @njit(fastmath=True, cache=True) def sum_array_fast(arr): s = 0 for i in range(arr.shape[1]): s += arr[0, i] for i in range(arr.shape[1]): s += arr[1, i] return s </code></pre> <p>Looking at the nested loop in sum_array_slow it seems it should be performing exactly the same operations in the same order as sum_array_fast. However:</p> <pre><code>In [46]: %timeit sum_array_slow(a) 7.7 Β΅s Β± 374 ns per loop (mean Β± std. dev. of 7 runs, 100,000 loops each) In [47]: %timeit sum_array_fast(a) 951 ns Β± 2.63 ns per loop (mean Β± std. dev. of 7 runs, 1,000,000 loops each) </code></pre> <p>Why is the sum_array_fast function 8 times faster than sum_array_slow when it seems it would be performing the same computations in the same order?</p>
<python><numpy><numba>
2023-03-24 11:54:17
1
21,513
Simd
75,833,309
241,552
Bazel: install python using bazel
<p>I am working on a Bazel build MVP, and one of the steps in this process is to run our build in CI. However, CI builds are run in docker containers that do not contain much. Previously I had to add gcc to the container to build python that is used by Bazel itself, but now when I try to build a python library with <code>py_wheel</code>, I get the error <code>/usr/bin/env: 'python3': No such file or directory</code>. On the one hand, I do realise that I can just add the required python to the container, but that does not seem to be hermetic, and then the python version will have to be configured in the repo where the Docker images for our containers are kept, not within the project itself, which adds another layer of complexity.</p> <p>Is there a way to install the required python from Bazel itself? Or do I have to install it in docker?</p> <p><strong>UPD</strong> My initial approach was as per the documentation and what Brian Silverman described below. I also tried other approaches (<a href="https://www.anthonyvardaro.com/blog/hermetic-python-toolchain-with-bazel" rel="nofollow noreferrer">https://www.anthonyvardaro.com/blog/hermetic-python-toolchain-with-bazel</a>, <a href="https://thundergolfer.com/bazel/python/2021/06/25/a-basic-python-bazel-toolchain/" rel="nofollow noreferrer">https://thundergolfer.com/bazel/python/2021/06/25/a-basic-python-bazel-toolchain/</a>), but to no avail. Each of these approaches does download python, but when I run my python binaries or the <code>py_wheel</code> rule, the scripts look for <code>/usr/bin/env python</code>, which cannot be found.</p>
<python><bazel>
2023-03-24 11:52:31
1
9,790
Ibolit
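Hermetic interpreters can come from Bazel itself: recent `rules_python` releases can download a standalone CPython (from the python-build-standalone project) and register it as the toolchain, so the container needs no system Python. A sketch of the WORKSPACE wiring, with the `http_archive` url/sha256 deliberately left out since they depend on the pinned release:

```starlark
# WORKSPACE (sketch) -- assumes a pinned rules_python release fetched via
# http_archive; take the url and sha256 from that release's notes
load("@rules_python//python:repositories.bzl", "python_register_toolchains")

python_register_toolchains(
    name = "python3_10",
    python_version = "3.10",
)
```

One caveat, hedged: tools whose stub scripts hardcode `#!/usr/bin/env python3` (as some older `py_wheel` tooling did) can still escape the hermetic toolchain, so upgrading `rules_python`, or additionally providing `python3` on PATH in the image, may still be necessary.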
75,833,267
15,673,412
python - count number of windows containing only 0s
<p>Let's suppose I have a list or array <code>ts</code> of length N:</p> <pre><code>ts = [1, 6, 2, 9, 0, 0, 0, 0, 0, 3, 6, 4, 0, 0, 5, 2, 3] </code></pre> <p>Let's suppose <code>k=4</code>.</p> <p>I need to find the number of times that a sequence of <code>k</code> or more consecutive zeros is present in the array. In this case it would be 1 (when a 0-sequence is found, the search must continue starting from the next non-zero item).</p> <p>Right now I achieved this task with a brute-force, C-like method.</p> <p>Any idea for a pythonic solution?</p>
<python><arrays>
2023-03-24 11:48:08
3
480
Sala
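A common pythonic shape for the question above is `itertools.groupby`, which splits the list into maximal runs of equal values; counting the zero-runs of length at least k then needs no index bookkeeping:

```python
from itertools import groupby

def count_zero_windows(ts, k):
    """Count maximal runs of consecutive zeros whose length is >= k."""
    return sum(
        1
        for value, run in groupby(ts)
        if value == 0 and sum(1 for _ in run) >= k
    )

ts = [1, 6, 2, 9, 0, 0, 0, 0, 0, 3, 6, 4, 0, 0, 5, 2, 3]
print(count_zero_windows(ts, 4))  # 1: only the run of five zeros qualifies
```

Because `groupby` yields *maximal* runs, the "continue from the next non-zero item" requirement is satisfied automatically: the five consecutive zeros count once, not as overlapping windows.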
75,833,221
2,836,172
Code generated from Postman does not work - body format in request is actually different
<p>I built a REST API in Flask and tested it with Postman. Now I want to get a script which I can modify myself. Therefore I generated the script for Python and, well, it doesn't work.</p> <p>In Postman I send a key-value pair under the form-data tab. It works, request parser can find the string and proceed using it.</p> <p>The generated code looks like this, and I think it makes total sense (I removed the URL):</p> <pre><code>import requests import json url = &quot;...&quot; payload = {'apikey': '1234567891' } headers = { 'Content-Type': 'application/json' } response = requests.request(&quot;POST&quot;, url, headers=headers, data=payload) print(response.text) </code></pre> <p>To actually show the format, I added a log handler which logs every request I send to Flask:</p> <pre><code>@app.before_request def log_request_info(): app.logger.info('Headers: %s', request.headers) app.logger.info('Body: %s', request.get_data()) </code></pre> <p>When I send a request with the Postman GUI, this is the logged request:</p> <pre><code>Body: b'----------------------------222811375002090327791056\r\nContent-Disposition: form-data; name=&quot;apikey&quot;\r\n\r\n1234567891\r\n----------------------------222811375002090327791056--\r\n' </code></pre> <p>This is the request I see using the script:</p> <pre><code>Body: b'apikey=1234567891' </code></pre> <p>Where does the difference come from, and why don't both versions work? When adding the apikey to the reqparser, I didn't even specify the location of the data (just the name, required=True, helptext and custom validator).</p>
<python><postman>
2023-03-24 11:42:23
0
1,522
Standard
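The difference in the logs above comes from Postman's form-data tab sending `multipart/form-data`, while the generated script's `data=payload` sends `application/x-www-form-urlencoded`; a likely culprit for the failure (hedged, since the server-side error is not shown) is the `Content-Type: application/json` header the generated script also sets, which makes the server try to parse the urlencoded body as JSON. The offline sketch below shows the two encodings `requests` produces; no request is actually sent:

```python
import requests

url = "http://example.com/api"  # placeholder URL; nothing is sent

# What the generated script does: data= encodes as x-www-form-urlencoded
urlencoded = requests.Request("POST", url, data={"apikey": "1234567891"}).prepare()

# What Postman's form-data tab does: multipart/form-data. A (None, value)
# tuple in files= sends a plain field without a filename.
multipart = requests.Request("POST", url, files={"apikey": (None, "1234567891")}).prepare()

print(urlencoded.headers["Content-Type"])  # application/x-www-form-urlencoded
print(urlencoded.body)                     # apikey=1234567891
print(multipart.headers["Content-Type"])   # multipart/form-data; boundary=...
```

Dropping the hand-written `Content-Type` header and letting `requests` set it from `data=` or `files=` is usually the simplest fix.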
75,833,101
3,393,192
Calculation of world coordinates of camera from chessboard image
<p>I have a stationary camera and want to get the real world coordinates and rotation from my camera (relative to the pattern). I placed a checkerboard pattern in the FoV of the camera and took some images.</p> <p>Some might say, this question already has an answer on Stackoverflow, but none of them work. So this is NO duplicate. I have looked into a few posts on StackOverflow, e.g. (<a href="https://stackoverflow.com/questions/49335644/estimate-world-position-of-a-real-camera">Estimate world position of a real camera</a>, <a href="https://stackoverflow.com/questions/18637494/camera-position-in-world-coordinate-from-cvsolvepnp?rq=1">Camera position in world coordinate from cv::solvePnP</a> and some more) and the documentation at OpenCV.</p> <p>As one of the posts above has an accepted answer, I followed that post and implemented the code. I don't have the camera setup anymore, so I can't check if the results are correct, because I don't have ground truth values.</p> <p>Without ground truth values, my idea to check if the results are correct is to calculate the 3D position of the camera and project that 3D position back into the image. The projected camera position should be precisely in the middle of the image (in a perfect world). However, that projected point is often not even close to the center of the image.</p> <p>From my understanding the math behind the scenario/idea should work. However, here is my code, and it does not work, even though, I am pretty sure it should work. 
What am I missing?</p> <pre><code>import cv2 import numpy as np class CameraCalibrator: def __init__(self, folder, rows, cols, square_size = 0.079, singlePnP = True, mtx = None, dist = None): self.folder = folder self.mtx = mtx self.dist = dist self.rows = rows self.cols = cols self.square_size = square_size self.objp = None self.axis = None self.singlePnP = singlePnP self.generate_board() def generate_board(self): # Generate the 3D points of the intersections of the chessboard pattern objp = np.zeros((self.rows * self.cols, 3), np.float32) objp[:, :2] = np.mgrid[0:self.rows, 0:self.cols].T.reshape(-1, 2) self.objp = objp * self.square_size # Generate the axis vectors self.axis = np.float32([[self.square_size, 0, 0], [0, self.square_size, 0], [0, 0, -self.square_size]]).reshape(-1, 3) def estimate_pose(self, image_names): # Loop over all images for image_name in image_names: # Extract chessboard corners img = cv2.imread(self.folder + image_name) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) found_corners, corners = cv2.findChessboardCorners(gray, (self.rows, self.cols), None) if found_corners: # Refine corners criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001) corners = cv2.cornerSubPix(gray, corners, (self.rows, self.cols), (-1, -1), criteria=criteria) # Use solve PnP to determine the rotation and translation between camera and 3D object ret, rvec, tvec = cv2.solvePnP(self.objp, corners, self.mtx, self.dist) # Project the axis into the image imgpts, jac = cv2.projectPoints(2 * self.axis, rvec, tvec, self.mtx, self.dist) # Draw the axes img = self.draw_axes(img, corners, imgpts) # Calculate camera position. 
Following: https://stackoverflow.com/questions/18637494/camera-position-in-world-coordinate-from-cvsolvepnp?rq=1 rotM = cv2.Rodrigues(rvec)[0] cameraPosition = -np.matrix(rotM).T * np.matrix(tvec) imgpts, jac = cv2.projectPoints(cameraPosition, rvec, tvec, self.mtx, self.dist) # Draw a circle in the center of the image (just as a reference) and draw a line from the top left intersection to the projected camera position img = self.draw(img, corners[0], imgpts[0]) cv2.imshow('img', img) k = cv2.waitKey(0) &amp; 0xFF def draw_axes(self, img, corners, imgpts): # Extract the first corner (the top left) corner = tuple(corners[0].ravel()) corner = (int(corner[0]), int(corner[1])) # Color format is BGR color = [(0, 0, 255), (0, 255, 0), (255, 0, 0)] # Iterate over the points for i in range(len(imgpts)): tmp = tuple(imgpts[i].ravel()) tmp = (int(tmp[0]), int(tmp[1])) img = cv2.line(img, corner, tmp, color[i], 5) return img def draw(self, img, corners, imgpts): corner = tuple(corners[0].ravel()) corner = (int(corner[0]), int(corner[1])) for i in range(len(imgpts)): tmp = tuple(imgpts[i].ravel()) tmp = (int(tmp[0]), int(tmp[1])) img = cv2.line(img, corner, tmp, (255, 255, 0), 5) cv2.circle(img, (int(img.shape[1] / 2), int(img.shape[0] / 2)), 1, (255, 255, 255), 10) return img def calibrate_camera(self, images): # Prepare points criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001) objp = np.zeros((self.rows * self.cols, 3), np.float32) objp[:, :2] = np.mgrid[0:self.rows, 0:self.cols].T.reshape(-1, 2) objp = objp * self.square_size objpoints = [] # 3d point in real world space imgpoints = [] # 2d points in image plane for img_name in images: full_name = self.folder + img_name img = cv2.imread(full_name) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) found_corners, corners = cv2.findChessboardCorners(gray, (self.rows, self.cols), None) if found_corners: objpoints.append(objp) corners2 = cv2.cornerSubPix(gray,corners, (11, 11), (-1, -1), criteria) 
imgpoints.append(corners2) # Draw and display the corners cv2.drawChessboardCorners(img, (8,6), corners2, found_corners) cv2.imshow('img', img) k = cv2.waitKey(0) &amp; 0xFF mtx = np.array([367.47894432, 0.0, 249.3915073, 0.0, 367.39795727, 205.2466732, 0.0, 0.0, 1.0]).reshape((3, 3)) dist = np.array([0.10653164, -0.33399435, -0.00111262, -0.00186027, 0.15269198]) ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], mtx, dist, flags=cv2.CALIB_USE_INTRINSIC_GUESS) for img_name in images: full_name = self.folder + img_name img = cv2.imread(full_name) cv2.imshow('img', img) k = cv2.waitKey(0) &amp; 0xFF # undistort dst = cv2.undistort(img, mtx, dist, None, mtx) cv2.imshow('img', dst) k = cv2.waitKey(0) &amp; 0xFF def main(): image_names = [&quot;frame0100.png&quot;, &quot;frame1234.png&quot;, &quot;frame1345.png&quot;, &quot;frame1426.png&quot;, &quot;frame1777.png&quot;, &quot;frame2860.png&quot;, &quot;frame2879.png&quot;, &quot;frame3000.png&quot;] folder = &quot;/home/X/folder_name/&quot; size = 0.079 # Unit is 'meter' # Precalibrated camera information mtx = np.array([370.68093, 0.0, 250.27853, 0.0, 370.65810, 206.94584, 0.0, 0.0, 1.0]).reshape((3, 3)) dist = np.array([0.11336, -0.35520, 0.00076, -0.00117, 0.18745]) # Number of checkerboard squares per row/col rows = 8 cols = 6 camera_calibrator = CameraCalibrator(folder, rows=rows, cols=cols, square_size=size, mtx=mtx, dist=dist) # The following line is only for camera calibration #camera_calibrator.calibrate_camera(image_names) camera_calibrator.estimate_pose(image_names) if __name__ == '__main__': main() </code></pre> <p>The source code should run when copy-pasted. Only 'folder' (and 'image_names') have to be adjusted. The images that I use for testing purposes can be found here: <a href="https://easyupload.io/uwg9uu" rel="nofollow noreferrer">checkerboards.zip</a></p> <p>I've run out of ideas, thanks in advance.</p>
<python><opencv><pose-estimation><opencv-solvepnp>
2023-03-24 11:29:38
0
497
Sheradil
75,833,059
16,030,430
Pandas extend DataFrame with Zeros
<p>I have the following DataFrame, whose index starts at 1.10 and increments by 0.05:</p> <pre><code> A B C D E 1.10 3 2 2 1 0 1.15 1 2 0 1 0 1.20 0 0 0 1 -1 1.25 1 1 3 -5 2 1.30 -3 4 2 6 0 ... </code></pre> <p>I want to extend this DataFrame with zeros so that the result looks like this:</p> <pre><code> A B C D E 0 0 0 0 0 0 0.05 0 0 0 0 0 0.10 0 0 0 0 0 ... 1.10 3 2 2 1 0 1.15 1 2 0 1 0 1.20 0 0 0 1 -1 1.25 1 1 3 -5 2 1.30 -3 4 2 6 0 ... </code></pre> <p>So I want the index to start at 0 and increment by 0.05 until the original start index of 1.10 is reached. How can I do this?</p>
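One way to get this (a sketch, assuming pandas/NumPy; the integer step count avoids floating-point drift when building the new index):

```python
import numpy as np
import pandas as pd

# Sample frame mirroring the question
df = pd.DataFrame(
    {"A": [3, 1, 0, 1, -3], "B": [2, 2, 0, 1, 4], "C": [2, 0, 0, 3, 2],
     "D": [1, 1, 1, -5, 6], "E": [0, 0, -1, 2, 0]},
    index=[1.10, 1.15, 1.20, 1.25, 1.30],
)

# Count the 0.05 steps as integers, then scale and round, so every label
# matches the existing float index exactly; reindex fills new rows with 0.
n_steps = int(round(df.index.max() / 0.05))
full_index = np.round(np.arange(n_steps + 1) * 0.05, 2)
out = df.reindex(full_index, fill_value=0)
```

`reindex` keeps the original rows wherever the labels match and inserts zero-filled rows everywhere else.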
<python><pandas><dataframe>
2023-03-24 11:26:02
1
720
Dalon
75,832,973
6,110,160
Why aliasing dataclass(frozen=True) fails mypy check?
<p>Is it possible to alias <code>dataclass(frozen=True)</code> decorator? I tried this:</p> <pre class="lang-py prettyprint-override"><code>import dataclasses key = dataclasses.dataclass(frozen=True) @key class A: a: str A(a=&quot;a&quot;) </code></pre> <p>And I get the following mypy error:</p> <pre><code>&gt; mypy alias.py alias.py:9: error: Unexpected keyword argument &quot;a&quot; for &quot;A&quot; [call-arg] Found 1 error in 1 file (checked 1 source file) </code></pre>
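A possible workaround (a sketch; `dataclass_transform` from PEP 681 lives in `typing` since Python 3.11, and the fallback shim below exists only so the snippet runs on older interpreters — mypy sees the real decorator either way):

```python
from dataclasses import dataclass

try:  # typing.dataclass_transform exists from Python 3.11 (PEP 681)
    from typing import dataclass_transform
except ImportError:  # runtime no-op fallback for older interpreters
    def dataclass_transform(**_kwargs):
        def decorator(func):
            return func
        return decorator


@dataclass_transform(frozen_default=True)
def key(cls):
    # Behaves like dataclasses.dataclass(frozen=True), but the decorator
    # itself now carries the type information type checkers need.
    return dataclass(frozen=True)(cls)


@key
class A:
    a: str
```

The point is that a plain alias is opaque to mypy, whereas a decorator marked with `@dataclass_transform` tells the checker "this behaves like `dataclass`".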
<python><mypy><python-dataclasses>
2023-03-24 11:16:38
0
1,852
psarka
75,832,814
15,593,152
Pandas, combination of values and check if any of the combinations is in a list
<p>I have a <code>df1</code> with some <code>item_id</code>'s and some values for each item (called &quot;nodes&quot;):</p> <pre><code>df1 = pd.DataFrame({'item_id':['1','1','1','2','2','2','3','3'],'nodes':['a','b','c','d','a','e','f','g']}) </code></pre> <p>and a <code>df2</code> that is a list of &quot;vectors&quot; where each row is a tuple of nodes (that can be in <code>df1</code>, but some of them aren't):</p> <pre><code>df2=pd.DataFrame({'vectors':[('a','b'),('b','c'),('d','f'),('e','b')]}) </code></pre> <p>I need to count the number of different <code>item_id</code>'s in <code>df1</code> that have at least one vector in <code>df2</code>, given the fact that a vector can be constructed from all possible combiantions of nodes for that item.</p> <p>For example, <code>item_id = 1</code> have the nodes <code>[a,b,c]</code>, so these vectors can be formed: <code>[(a,b),(a,c),(b,a),(b,c),(c,a),(c,b)]</code>. Since the vectors <code>(a,b)</code> and <code>(b,c)</code> exist in <code>df2</code>, then I should count <code>item_id = 1</code>. However, I should not count <code>item_id = 2</code> since from all the vectors that can be formed from the combination of its nodes, none of them is in <code>df2</code>.</p> <p>I don't know how can I achieve that. I can obtain a list of all possible combinations of nodes to form the different vectors for the first <code>item_id</code> in <code>df1</code>, using:</p> <pre><code>from itertools import product nodes_fa=df1[df1.item_id==&quot;1&quot;].nodes.to_list() vectors_fa = pd.DataFrame(product(nodes_fa,nodes_fa),columns=['u','v'],dtype='str') vectors_fa['vector'] = vectors_fa[[&quot;u&quot;, &quot;v&quot;]].agg(tuple, axis=1) vectors_fa = vectors_fa[['vector']] display(vectors_fa) </code></pre> <p>but I don't know how to expand this to all the <code>item_id</code>'s, nor how to check if any value in this list is in <code>df2</code> inside a loop.</p> <p>Any help would be much appreciated.</p>
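One way the check could be done across all `item_id`s (a sketch using the question's own sample data; `itertools.permutations` enumerates the ordered node pairs per item, which is exactly the set of possible 2-element vectors):

```python
from itertools import permutations

import pandas as pd

df1 = pd.DataFrame({'item_id': ['1', '1', '1', '2', '2', '2', '3', '3'],
                    'nodes':   ['a', 'b', 'c', 'd', 'a', 'e', 'f', 'g']})
df2 = pd.DataFrame({'vectors': [('a', 'b'), ('b', 'c'), ('d', 'f'), ('e', 'b')]})

# Set membership is O(1), so checking each candidate pair is cheap
vector_set = set(df2['vectors'])

def has_vector(nodes):
    # All ordered pairs of distinct nodes for this item
    return any(pair in vector_set for pair in permutations(nodes, 2))

matching = df1.groupby('item_id')['nodes'].apply(lambda s: has_vector(s.tolist()))
count = int(matching.sum())  # -> 1 (only item_id '1' qualifies)
```

`permutations` generates pairs lazily and `any` stops at the first hit, so items with many nodes do not always pay the full quadratic cost.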
<python><pandas><dataframe><combinations>
2023-03-24 10:59:05
2
397
ElTitoFranki
75,832,731
9,104,884
Parallel while loop with unknown number of calls
<p>I have written a function that does some calculations, and that everytime it is called it returns a different result, since it uses a different seed for the random number generator. In general, I want to run this functions many times, in order to obtain many samples.</p> <p>I have managed to use <code>multiprocessing</code> to run this function in parallel, using, for example, 4 processes, until the desired number of runs <code>n_runs</code> is reached. Here is a minimal working example (note that the function <code>flip_coin()</code> is just an example function that uses a <code>rng</code>, in reality I am using a more complex function):</p> <pre><code>import multiprocessing as mp import random, sys def flip_coin(n): # initialise the random number generator seed = random.randrange(sys.maxsize) rng = random.Random(seed) # do stuff and obtain result if rng.random()&gt;0.5: res = 1 else: res = 0 return res, seed # total number of runs n_runs = 100 # initialise parallel pool pool = mp.Pool(processes = 4) # initialise empty lists for results results, seeds = [], [] for result in pool.map(flip_coin, range(n_runs)): # save result and the seed that generated that result results.append(result[0]) seeds.append(result[1]) # close parallel pool pool.close(); pool.join() </code></pre> <p>Now, instead of fixing the <code>n_runs</code> a priori, I would like to fix a different condition that is met only after an unknown number of calls to the function. For example, I would like to fix the number of <code>1</code>'s returned by the function. 
Without using <code>multiprocessing</code>, I would do something like this:</p> <pre><code># desired number of ones n_ones = 10 # counter to keep track of the ones counter = 0 # empty list for seeds seeds = [] while counter &lt; n_ones: result = flip_coin(1) # if we got a 1, increase counter and save seed if result[0] == 1: counter += 1 seeds.append(result[1]) </code></pre> <p>The question is: how do I parallelise such a <code>while</code> loop?</p>
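One hedged way to parallelise this (a sketch: rather than a literal parallel `while`, schedule bounded batches of work and stop once enough 1's have arrived — it overshoots by at most one batch; `collect_ones` and the `os.urandom` seeding are illustrative choices, not the only option):

```python
import multiprocessing as mp
import os
import random


def flip_coin(_):
    # Seed from os.urandom so forked workers do not inherit identical
    # random state from the parent process.
    seed = int.from_bytes(os.urandom(8), "big")
    rng = random.Random(seed)
    return (1 if rng.random() > 0.5 else 0), seed


def collect_ones(n_ones, batch_size=16, processes=2):
    seeds = []
    with mp.Pool(processes=processes) as pool:
        while len(seeds) < n_ones:
            # Bounded batch: simple, deterministic termination
            for result, seed in pool.map(flip_coin, range(batch_size)):
                if result == 1:
                    seeds.append(seed)
    return seeds[:n_ones]
```

The trade-off versus streaming results with `imap_unordered` is a little wasted work at the end of the last batch, in exchange for not having to cancel an unbounded stream of tasks.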
<python><while-loop><parallel-processing><multiprocessing><python-multiprocessing>
2023-03-24 10:50:59
1
1,523
Tropilio
75,832,713
10,229,386
Stable Baselines 3 support for Farama Gymnasium
<p>I am building an environment in the maintained fork of <code>gym</code>: <code>Gymnasium</code> by Farama. In my <code>gym environment</code>, I state that the <code>action_space = gym.spaces.Discrete(5)</code> and the <code>observation_space = gym.spaces.MultiBinary(25)</code>. Running the environment with the agent-environment loop suggested on the <a href="https://www.gymlibrary.dev/content/basic_usage/" rel="nofollow noreferrer">Gym Basic Usage</a> website runs with no problems: I registered the environment and it is simply callable by <code>gym.make()</code>.</p> <p>However, I want to now train a reinforcement learning agent on this environment. Now I have come across <code>Stable Baselines3</code>, which makes a DQN agent implementation fairly easy. However, it does not seem to support the new <code>Gymnasium</code>. Namely:</p> <pre class="lang-py prettyprint-override"><code>import gymnasium as gym from stable_baselines3.ppo.policies import MlpPolicy from stable_baselines3 import DQN env = gym.make(&quot;myEnv&quot;) model = DQN(MlpPolicy, env, verbose=1) </code></pre> <p><em>Yes I know, &quot;myEnv&quot; is not reproducible, but the environment itself is too large (along with the structure of the file system), but that is not the point of this question</em></p> <p>This code produces an error:</p> <pre class="lang-py prettyprint-override"><code>AssertionError: The algorithm only supports (&lt;class 'gym.spaces.discrete.Discrete',) as action spaces but Discrete(5) was provided </code></pre> <p>My question is the following: does Stable Baselines3 support <code>Gymnasium</code>?</p> <p>I have tried to instead use <code>gym.spaces</code> in order to define the <code>action_space</code> and <code>observation_space</code>, such that</p> <pre class="lang-py prettyprint-override"><code>from gym.spaces import Discrete, MultiBinary action_space = Discrete(5) observation_space = MultiBinary(25) </code></pre> <p>but along with this, I have to rewrite a large portion of
the environment to support the old <code>gym</code> package. I wonder whether there is a better solution than that.</p>
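For illustration, the API gap between the two packages can be bridged by hand (a hypothetical minimal adapter; newer Stable Baselines3 releases and the `shimmy` package provide maintained conversions, so this is only a sketch of what actually changes between the two APIs):

```python
class GymToGymnasiumAdapter:
    """Hypothetical adapter: exposes a Gymnasium-style API (5-tuple step,
    reset returning (obs, info)) on top of an old-style gym env
    (4-tuple step, reset returning only obs)."""

    def __init__(self, env):
        self.env = env
        self.action_space = env.action_space
        self.observation_space = env.observation_space

    def reset(self, *, seed=None, options=None):
        # Old gym reset returns only the observation; Gymnasium adds info
        return self.env.reset(), {}

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        # gym's single `done` flag splits into (terminated, truncated)
        truncated = bool(info.get("TimeLimit.truncated", False))
        terminated = bool(done) and not truncated
        return obs, reward, terminated, truncated, info
```

The assertion error in the question stems from exactly this split: the two packages define distinct `Discrete` classes, so an `isinstance` check against one rejects the other.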
<python><reinforcement-learning><openai-gym><stable-baselines>
2023-03-24 10:49:24
2
1,103
Lexpj
75,832,701
6,160,923
How to make a static tree view in odoo with fixed rows where first column contains some hard coded value and 2nd and 3rd column is editable
<p>How can I make a static tree view in Odoo with a fixed number of rows, where the first column contains some hard-coded value and the second column is editable? When the user clicks the create-new-record button, a tree view should appear whose first column holds a predefined value; the user only enters data in the other columns and saves the table with its 3 columns.</p>
<python><xml><odoo>
2023-03-24 10:48:40
1
672
Atiq Baqi
75,832,685
4,997,239
Import "dash" could not be resolved Pylance
<p>I'm using python on Mac with VSCode and have set up a virtual environment which is definitely being used by the program. Imports like <code>requests</code>, <code>pandas</code> etc are found no problem but <code>dash</code> cannot be found and <code>matplotlib</code> cannot be found. I installed the libraries using <code>pip install dash</code> in the terminal which installed successfully. Also <code>pip list</code> shows both libraries as present. The environment was also activated with <code>source venv/bin/activate</code>.</p> <p>Things I tried.</p> <ol> <li>uninstalling and reinstalling dash in terminal.</li> <li>restarting vscode.</li> <li><code>pip install dash --upgrade</code></li> <li><code>pip3 install dash</code></li> </ol>
<python><visual-studio-code><pylance>
2023-03-24 10:46:39
1
12,448
Hasen
75,832,639
3,203,845
How to split a string to an array of filtered integers?
<p>I have a <code>DF</code> column which is a long strings with comma separated values, like:</p> <ul> <li><code>2000,2001,2002:a,2003=b,2004,100,101,500,20</code></li> <li><code>101,102,20</code></li> </ul> <p>What I want to do is to create a new <code>Array&lt;Int&gt;</code> column out of it where:</p> <ol> <li>only values starting with 2 are included</li> <li>when a value has additional delimiter then only the first part will be returned (e.g. 2002)</li> <li>some specific values will be excluded (let's say value = 20)</li> <li>if the array is empty it should be filled with a default value (let's say [199])</li> </ol> <p>So basically the 2 test strings should be returned as:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">stringColumn</th> <th style="text-align: left;">arrayColumn</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">2000,2001,2002:a,2003=b,2004,100,101,500,20</td> <td style="text-align: left;">[2000,2001,2002,2003,2004]</td> </tr> <tr> <td style="text-align: left;">101,102,20</td> <td style="text-align: left;">[199]</td> </tr> </tbody> </table> </div>
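The filtering rules can be sketched as a plain Python function (the `parse_values` name is illustrative; in Spark this logic could be wrapped in a UDF returning `ArrayType(IntegerType())`, or rebuilt from `split`/`filter`/`transform` column expressions):

```python
EXCLUDED = {20}      # rule 3: specific values to drop
DEFAULT = [199]      # rule 4: fallback when nothing survives

def parse_values(s):
    out = []
    for token in s.split(","):
        # Rule 2: keep only the part before a secondary delimiter (":" or "=")
        head = token.split(":")[0].split("=")[0].strip()
        if head.isdigit():
            value = int(head)
            # Rule 1: keep values starting with "2", minus the exclusions
            if head.startswith("2") and value not in EXCLUDED:
                out.append(value)
    return out if out else list(DEFAULT)
```

Both sample rows from the question round-trip through it as expected.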
<python><amazon-web-services><apache-spark><pyspark><aws-glue>
2023-03-24 10:41:51
2
407
1131
75,832,624
4,162,689
Is there a DRY way to get n Cartesian Products in Python?
<p>According to <a href="https://stackoverflow.com/a/72490086/4162689">72489951</a>, if one wants to generate the product of a list, this function does the job.</p> <pre><code>def generate_prods(): from itertools import product in_list = ['a', 'b', 'c'] out_prods = list(product(in_list, in_list)) return out_prods </code></pre> <p>This returns</p> <pre><code>[('a', 'a'), ('a', 'b'), ('a', 'c'), ('b', 'a'), ('b', 'b'), ('b', 'c'), ('c', 'a'), ('c', 'b'), ('c', 'c')] </code></pre> <p>This is fine unless you want to add another character. Doing so means you need to hardcode another use of the same list.</p> <pre><code>def generate_prods(): from itertools import product in_list = ['a', 'b', 'c'] out_prods = list(product(in_list, in_list, in_list)) return out_prods </code></pre> <p>Clearly, this is not DRY, so how do you generate the Cartesian Product for length <em>n</em> of the same list?</p>
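The DRY fix here is `itertools.product`'s `repeat` argument (sketch):

```python
from itertools import product

def generate_prods(in_list, n):
    # repeat=n takes the n-fold Cartesian product of the same iterable,
    # so adding a dimension no longer means repeating the argument
    return list(product(in_list, repeat=n))
```

`generate_prods(['a', 'b', 'c'], 2)` reproduces the 9-tuple output from the question, and bumping `n` to 3 gives the 27 triples without touching the call signature.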
<python>
2023-03-24 10:40:43
1
972
James Geddes
75,832,590
15,637,435
Website section not visible when using Selenium over Playwright
<h2>Problem</h2> <p>I have a scraper that scrapes product information. One thing that I would like to scrape is the CO2 emission and the compensation price of the product. Since this information is available in a specific section I have to click on a button and therefore use a browser automation tool.</p> <p>I have now created the scraper with Playwright which is able to scrape the information in the sustainability section. I would now like to refactor the playwright part to selenium.</p> <p>However, when I do this, somehow the product pages displayed with the driver in selenium do not include the sustainability section and make it not possible to scrape the information (see the normal website of product page and website accessed with selenium driver).</p> <p>Why does this happen and how can I scrape the sustainability section using selenium?</p> <h2>Code</h2> <p><strong>Scraper with Playwright part</strong></p> <pre><code>import requests import time import random import pandas as pd from typing import List from bs4 import BeautifulSoup from playwright.sync_api import Playwright, sync_playwright, TimeoutError as PlaywrightTimeoutError from product_scraper.port.sources import Scraper from product_scraper.domain import ProductItem from dataclasses import asdict class DayDealScraper(Scraper): def __init__(self, url): self.url = url self.urls = self._get_product_links(self.url) def get_product_info_df(self): &quot;&quot;&quot; Return pd.DataFrame with product information from deals of the day. 
&quot;&quot;&quot; product_info_df = self._get_product_info() print(product_info_df) return product_info_df def _get_product_links(self, url: str) -&gt; List[str]: &quot;&quot;&quot; Get href of products on url-page &quot;&quot;&quot; urls = [] r = requests.get(url) soup = BeautifulSoup(r.content, 'lxml') articles = soup.find_all('article') for article in articles: try: href = article.find('a', class_='sc-qlvix8-0 dgECEw')['href'] urls.append(f&quot;https://www.digitec.ch{href}&quot;) except TypeError: continue return urls def _get_product_info(self): &quot;&quot;&quot; Scrape product info of every subpage &quot;&quot;&quot; urls = self._get_product_links(self.url) products = [] for url in urls: print(url) r = requests.get(url) soup = BeautifulSoup(r.content, 'lxml') name = soup.find('h1', class_='sc-12r9jwk-0 hcjJEJ').text price = float(soup.find('div', class_='sc-18ppxou-1 gwNBaL').text.split('.')[0]) # Narrow down navigation section to get category navigation = soup.find('ol', class_='sc-4cfuhz-2 ipoVcw') navigation_parts = navigation.find_all('li', class_='sc-4cfuhz-3 iIgemP') category = [subcategory.text for subcategory in navigation_parts][-2] time.sleep(random.randint(4, 6)) # Use Playwright to scrape emission information try: with sync_playwright() as pw: agent = 'userMozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) ' \ 'Chrome/83.0.4103.116 Safari/537.36' browser = pw.chromium.launch(headless=True) context = browser.new_context(user_agent=agent) page = context.new_page() page.goto(url) # Find weight under product Specifications &gt; Show more page.locator(&quot;[data-test=\&quot;showMoreButton-specifications\&quot;]&quot;).click() weight = page.text_content(&quot;td:text(\&quot;Weight\&quot;) + td&quot;).split(&quot;\xa0&quot;) weight = &quot; &quot;.join(weight) # Find sustainability section and open it page.locator(&quot;[data-test=\&quot;sustainability\&quot;]&quot;).click() compensation_price = 
page.get_by_role(&quot;row&quot;, name=&quot;Compensation amount&quot;).text_content() compensation_price = compensation_price.split(&quot;CHF &quot;)[1].replace(&quot;’&quot;, &quot;&quot;) compensation_price = float(compensation_price) emission = page.get_by_role(&quot;row&quot;, name=&quot;COβ‚‚-Emission&quot;).text_content() emission = emission.split(&quot;Emission&quot;)[1].split(&quot;kg&quot;)[0].replace(&quot;’&quot;, &quot;&quot;) emission = float(emission) context.close() browser.close() except PlaywrightTimeoutError: print(f&quot;{url} has no sustainability section&quot;) continue product = ProductItem(name=name, price=price, category=category, weight=weight, emission=emission, compensation_price=compensation_price) products.append(asdict(product)) print(asdict(product)) products_df = pd.DataFrame(products) return products_df if __name__ == '__main__': url = 'https://www.digitec.ch/en/daily-deal' day_deals = DayDealScraper(url) day_deals.get_product_info_df() </code></pre> <p><strong>Scraper with Selenium part</strong></p> <pre><code>import requests import time import random import pandas as pd from typing import List from bs4 import BeautifulSoup from selenium import webdriver from selenium.webdriver.firefox.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.common.exceptions import TimeoutException from product_scraper.port.sources import Scraper from product_scraper.domain import ProductItem from dataclasses import asdict class DayDealScraper(Scraper): def __init__(self, url): self.url = url self.urls = self._get_product_links(self.url) def get_product_info_df(self): &quot;&quot;&quot; Return pd.DataFrame with product information from deals of the day. 
&quot;&quot;&quot; product_info_df = self._get_product_info() print(product_info_df) return product_info_df def _get_product_links(self, url: str) -&gt; List[str]: &quot;&quot;&quot; Get href of products on url-page &quot;&quot;&quot; urls = [] r = requests.get(url) soup = BeautifulSoup(r.content, 'lxml') articles = soup.find_all('article') for article in articles: try: href = article.find('a', class_='sc-qlvix8-0 dgECEw')['href'] urls.append(f&quot;https://www.digitec.ch{href}&quot;) except TypeError: continue return urls def _get_product_info(self): &quot;&quot;&quot; Scrape product info of every subpage &quot;&quot;&quot; urls = self._get_product_links(self.url) products = [] for url in urls: print(url) r = requests.get(url) soup = BeautifulSoup(r.content, 'lxml') name = soup.find('h1', class_='sc-12r9jwk-0 hcjJEJ').text price = float(soup.find('div', class_='sc-18ppxou-1 gwNBaL').text.split('.')[0]) # Narrow down navigation section to get category navigation = soup.find('ol', class_='sc-4cfuhz-2 ipoVcw') navigation_parts = navigation.find_all('li', class_='sc-4cfuhz-3 iIgemP') category = [subcategory.text for subcategory in navigation_parts][-2] # Use Selenium to scrape emission information options = Options() # Set user agent user_agent = 'userMozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) ' \ 'Chrome/83.0.4103.116 Safari/537.36' options.add_argument(f'user-agent={user_agent}') #options.add_argument('-headless') # Launch the browser driver = webdriver.Firefox(options=options) # Navigate to the URL driver.get(url) time.sleep(random.randint(4, 6)) try: # Find weight under product Specifications &gt; Show more show_more_button = WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.CSS_SELECTOR, '[data-test=&quot;showMoreButton-specifications&quot;]'))) show_more_button.click() weight = WebDriverWait(driver, 10).until(EC.presence_of_element_located( (By.XPATH, 
'//td[text()=&quot;Weight&quot;]/following-sibling::td'))).text.strip() # Find sustainability section and open it sustainability_section = WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.CSS_SELECTOR, '[data-test=&quot;sustainability&quot;]'))) sustainability_section.click() compensation_price = WebDriverWait(driver, 10).until(EC.presence_of_element_located( (By.XPATH, '//tr[./th[text()=&quot;Compensation amount&quot;]]/td'))).text.strip() compensation_price = compensation_price.split(&quot;CHF &quot;)[1].replace(&quot;’&quot;, &quot;&quot;) compensation_price = float(compensation_price) emission = WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.XPATH, '//tr[./th[text()=&quot;COβ‚‚-Emission&quot;]]/td'))).text.strip() emission = emission.split(&quot;Emission&quot;)[1].split(&quot;kg&quot;)[0].replace(&quot;’&quot;, &quot;&quot;) emission = float(emission) except TimeoutException: print(f&quot;{url} has no sustainability section&quot;) continue finally: driver.close() product = ProductItem(name=name, price=price, category=category, weight=weight, emission=emission, compensation_price=compensation_price) products.append(asdict(product)) print(asdict(product)) products_df = pd.DataFrame(products) return products_df if __name__ == '__main__': url = 'https://www.digitec.ch/en/daily-deal' day_deals = DayDealScraper(url) day_deals.get_product_info_df() </code></pre> <p><strong>ProductItem code (dataclass) - product_scraper.domain</strong></p> <pre><code>from dataclasses import dataclass @dataclass class ProductItem(): &quot;&quot;&quot; Product in daily deals. &quot;&quot;&quot; name: str price: float category: str weight: str emission: float compensation_price: float </code></pre> <p><strong>Scraper abstract class - product_scraper.port.sources</strong></p> <pre><code>from abc import ABC, abstractmethod class Scraper(ABC): @abstractmethod def _get_product_links(self, url): pass </code></pre>
<python><selenium-webdriver><web-scraping><playwright>
2023-03-24 10:38:09
1
396
Elodin
75,832,557
1,160,393
Initial pagerank precomputed values with networkx
<p>I'm trying to run an experiment where I have PageRank values and a directed graph built. I have a graph in the shape of a star (many surrounding nodes that point to a central node).</p> <p>All those surrounding nodes already have a precomputed PageRank value, and I want to check how the central node's PageRank value is affected by the surrounding ones.</p> <p>Is there a way to perform this with networkx? I've tried building the graph with weights (using the weights to store the precomputed PageRank values) but in the end, it does not look like the central node's value changes much.</p>
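A point worth noting: an initial-value dict (`nstart`) only sets the starting guess of the power iteration, so the converged values are unchanged, which would explain why the central value barely moves. A sketch of biasing the result instead through the `personalization` (teleport) vector, on a toy star graph with illustrative names:

```python
import networkx as nx

# Star graph: three leaves all pointing at the central node
G = nx.DiGraph()
center = "c"
leaves = ["l1", "l2", "l3"]
G.add_edges_from((leaf, center) for leaf in leaves)

# Precomputed scores for the surrounding nodes (made up for the sketch)
precomputed = {"l1": 0.5, "l2": 0.3, "l3": 0.2}

# personalization changes the fixed point itself: random jumps now land
# on the leaves in proportion to their precomputed scores, so those
# scores propagate into the central node's rank.
pr = nx.pagerank(G, personalization=precomputed)
```

With this setup the leaf ranks stay ordered like the precomputed scores, and the center accumulates the mass the leaves forward to it.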
<python><networkx><pagerank>
2023-03-24 10:35:43
1
876
Ed.
75,832,518
5,664,889
Github X-RateLimit-Remaining is not zero and "limit exceeded" error is thrown
<p>I am experiencing this issue when in response headers I see X-RateLimit-Remaining is set to a value other than zero, but an exception is raised with &quot;Max retries exceeded (Caused by ResponseError('too many 403 error responses'))&quot;</p>
<python><github><python-requests><github-api>
2023-03-24 10:30:39
0
449
qurat
75,832,471
13,935,315
How to prevent long type hints in python when using third party classes, abstract classes and factory class
<p>I am trying to create a python package which works with secret managers. Depending on which cloud the user is working (azure, gcp or aws) it should create the required secret manager of that particular cloud provider. For azure this is keyvaul and for gcp this is secret manager. To accomodate this I have created a function that can detect in which cloud the user is working. This will be sent to a Factory class that will create the corresponding secret manager, i.e., keyvault or secret manager etc. I have also created an Abstract class _SecretManager that will be used elsewhere in my code to provide a common interface to fetch secrets.</p> <p>The problem lies within the type hinting the abstract method &quot;_get_secret_manager&quot;. To make it work correctly when using mypy I have to add the following type hint: &quot;Union[Keyvault, SecretManagerServiceClient]&quot;. But how can I extend this easily when we have say, 100 different cloud providers without having to write each possible return type for &quot;_get_secret_manager&quot;?</p> <pre class="lang-py prettyprint-override"><code>from __future__ import annotations from abc import ABC, abstractmethod from google.cloud.secretmanager import SecretManagerServiceClient from azureml.core import Keyvault class _SecretManager(ABC): &quot;&quot;&quot;ABC Class for Cloud Secret Managers like Keyvault or GCP Secret Manager&quot;&quot;&quot; @abstractmethod def __init__(self) -&gt; None: self.manager = self._get_secret_manager() @abstractmethod def _get_secret_manager(self): pass @abstractmethod def _get_secret(self, name: str): pass class _AzureKeyVault(_SecretManager): &quot;&quot;&quot;_SecretManager Subclass for getting secrets from azure keyvault&quot;&quot;&quot; def __init__(self) -&gt; None: self.manager = self._get_secret_manager() def _get_secret_manager(self): manager = Keyvault() return manager def _get_secret(self, name: str) -&gt; str: return self.manager.get_secret(name) class 
_GCPSecretManager(_SecretManager): &quot;&quot;&quot;_SecretManager Subclass for getting secrets from azure keyvault&quot;&quot;&quot; def __init__(self) -&gt; None: self.manager = self._get_secret_manager() def _get_secret_manager(self): manager = SecretManagerServiceClient() return manager def _get_secret(self, name: str) -&gt; str: return self.manager.fetch_secret(name) #is actually a different method but for the sake of simpicity used this class _SecretManagerFactory: &quot;&quot;&quot;Factory for creating _SecretManager&quot;&quot;&quot; def __init__(self) -&gt; None: self._managers = {&quot;azure&quot;: _AzureKeyVault} def create_secret_manager(self, cloud_provider, **kwargs) -&gt; _SecretManager: try: manager = self._managers[cloud_provider] except KeyError: raise ValueError(&quot;This manager does not exist&quot;) return manager(**kwargs) </code></pre> <p>I have tried the following:</p> <pre class="lang-py prettyprint-override"><code>class _SecretManager(ABC): &quot;&quot;&quot;ABC Class for Cloud Secret Managers like Keyvault or GCP Secret Manager&quot;&quot;&quot; @abstractmethod def __init__(self) -&gt; None: self.manager = self._get_secret_manager() @abstractmethod def _get_secret_manager(self) -&gt; _SecretManager: pass @abstractmethod def _get_secret(self, name: str): pass </code></pre> <p>but with this approach mypy will throw an error: &quot;_SecretManager&quot; has no attribute &quot;get_secret&quot;; maybe &quot;_get_secret&quot;? [attr-defined]. Which is kind of logical.</p>
<python>
2023-03-24 10:26:16
1
331
Jens
75,832,445
6,068,731
Matplotlib imshow with x values log-spaced but y values lin-spaced
<h1>Task</h1> <p>I have a grid of values <code>zs</code> relating to some <code>xs</code> values that are log-spaced, and <code>ys</code> values that are lin-spaced. I tried to make it work as follows but the plot looks weird. The key problem is that the x-values are in the log-space and the y values are not.</p> <p>I tried using <code>ax.set_xscale('log')</code> but doesn't seem to do the job. I am not even sure if this is possible, since it's just a grid.</p> <h1>Minimal Working Example</h1> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt from numpy.random import default_rng # x values are log-spaced, y values are lin-spaced xs = np.geomspace(0.001, 1.0, num=20, endpoint=True) ys = np.linspace(0.0, 1.0, num=20, endpoint=False) # Generate z-values randomly for the MWE zs = np.zeros((20, 20)) rng = default_rng(seed=1234) for x_ix in range(len(xs)): for y_ix in range(len(ys)): zs[x_ix, y_ix] = rng.normal() fig, ax = plt.figure(figsize=(18, 5)) im = ax.imshow(zs, origin='lower') ax.set_xticks(xs[::4]) ax.set_xticklabels([&quot;{:.2f}&quot;.format(x) for x in xs[::4]], fontsize=8) ax.set_yticks(ys[::4]) ax.set_yticklabels([&quot;{:.1f}&quot;.format(y) for y in ys[::4]], fontsize=8) ax.cax.colorbar(im) ax.cax.toggle_label(True) plt.show() </code></pre> <p><a href="https://i.sstatic.net/yW6Exm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yW6Exm.png" alt="enter image description here" /></a></p> <h1>Alternative - <code>pcolor</code></h1> <p>Following the suggestion, I have coded it as follows.</p> <pre class="lang-py prettyprint-override"><code>x_grid, y_grid = np.meshgrid(xs, ys) fig, ax = plt.subplots(figsize=(18, 5)) im = ax.pcolor(x_grid, y_grid, zs) ax.set_xscale('log') fig.colorbar(im) plt.show() </code></pre> <p>This works well, only problem is that in practice I have 5 plots in a row and the colorbar is taking the space off the final plot. 
Basically I am actually using <code>plt.subplots(ncols=5, figsize=(18, 5))</code> but then when I add the colorbar, the final plot is less wide than the others because it contains the colorbar</p>
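One way to keep the five subplots equally wide (a sketch: attaching the single colorbar to *all* axes makes it borrow space from the whole row instead of shrinking only the last subplot; data are randomly generated as in the MWE):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

xs = np.geomspace(0.001, 1.0, num=20, endpoint=True)
ys = np.linspace(0.0, 1.0, num=20, endpoint=False)
zs = np.random.default_rng(1234).normal(size=(20, 20))
x_grid, y_grid = np.meshgrid(xs, ys)

fig, axes = plt.subplots(ncols=5, figsize=(18, 5), constrained_layout=True)
for ax in axes:
    im = ax.pcolormesh(x_grid, y_grid, zs, shading="auto")
    ax.set_xscale("log")

# Passing the full list of axes spreads the colorbar's space cost
# evenly across the row instead of taking it all from the last plot.
fig.colorbar(im, ax=axes.ravel().tolist())
```

`constrained_layout` then handles the spacing automatically, which is usually tidier than reserving a manual colorbar axis with `fig.add_axes`.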
<python><matplotlib>
2023-03-24 10:23:38
0
728
Physics_Student
75,832,379
12,255,379
reliable scale aware and lightweight object tracking on real life video stream
<p>I'm trying to make a Python script which tracks people on a live video stream. My script uses the pose detector from MediaPipe solutions and the face-recognition module, which does some dlib magic under the hood.</p> <p>Currently I locate faces, check for overlaps with the body bounding box, and group them accordingly; however, the face-recognition module is slow, and I'm trying to work my way around it. In particular, I try to only detect faces every 500th frame, and in between use the mean-shift algorithm provided by opencv-python to adjust the position of each face.</p> <p>This is not very reliable, even after I introduced a threshold on dx and dy which, when exceeded, forces face recognition to take over again. My guess is that lighting and scale cause problems, so I am looking for an alternative solution.</p>
<python><opencv>
2023-03-24 10:16:28
0
769
Nikolai Savulkin
75,832,256
1,818,713
How to get output from Fiona instead of fiona.model object
<p>I'm following the examples in <a href="https://fiona.readthedocs.io/en/stable/manual.html#records" rel="nofollow noreferrer">the docs</a> but using Virginia's <a href="https://gismaps.vdem.virginia.gov/download/BaseMapData/SHP/VirginiaParcel.shp.zip" rel="nofollow noreferrer">parcel shp file</a>. Warning: it's about 1GB zipped and 1.8GB unzipped.</p> <p>I have very simply</p> <pre><code>fiava = fiona.open(&quot;VirginiaParcel.shp/VirginiaParcel.shp&quot;, layer='VirginiaParcel') </code></pre> <p>from which I can do <code>fiava.schema</code> to get</p> <pre><code># {'properties': {'FIPS': 'str:8', # 'LOCALITY': 'str:64', # 'PARCELID': 'str:64', # 'PTM_ID': 'str:64', # 'LASTUPDATE': 'date', # 'VGIN_QPID': 'str:50'}, # 'geometry': 'Polygon'} </code></pre> <p>So far so good</p> <p>but when I do</p> <pre><code>fiava[0] ## I get a Feature object, not the data ## &lt;fiona.model.Feature at 0x7f2fd582aa10&gt; </code></pre> <p>In the docs it shows this output</p> <pre><code>{'geometry': {'coordinates': [[(-4.663611, 51.158333), (-4.669168, 51.159439), (-4.673334, 51.161385), (-4.674445, 51.165276), (-4.67139, 51.185272), (-4.669445, 51.193054), (-4.665556, 51.195), (-4.65889, 51.195), (-4.656389, 51.192215), (-4.646389, 51.164444), (-4.646945, 51.160828), (-4.651668, 51.159439), (-4.663611, 51.158333)]], 'type': 'Polygon'}, 'id': '1', 'properties': OrderedDict([('CAT', 232.0), ('FIPS_CNTRY', 'UK'), ('CNTRY_NAME', 'United Kingdom'), ('AREA', 244820.0), ('POP_CNTRY', 60270708.0)]), 'type': 'Feature'} </code></pre> <p>If I use the schema for the specific keys then I can get data one value at a time but this is not optimal</p> <pre><code>fiava[0]['properties']['FIPS'] ## 51149 </code></pre> <p>Even if I do <code>fiava[0].items()</code> then it's just an <code>ItemsView</code></p> <p>What am I missing?</p>
<python><shapefile><fiona>
2023-03-24 10:03:08
1
19,938
Dean MacGregor
75,832,196
673,600
Editing a pandas dataframe, and I simply don't know why
<p>I'm perplexed that I cannot seem to update a row in pandas. I have tried creating a copy (and a deep copy), since I only need to change the copy.</p> <pre><code>df_company = df.copy(deep=True) df_company = df_company[df_company[&quot;Market&quot;]==company_name] df_company = df_company[( df_company[&quot;Date&quot;] &lt; pd.to_datetime(date_start, dayfirst=True))] df_company.iloc[j][&quot;Quantity&quot;] += int(remainder_shares) </code></pre> <p>The last row never updates. This is the warning generated:</p> <pre><code>&lt;ipython-input-10-9d795087e048&gt;:56: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame </code></pre> <p><code>j</code> is used because there is a loop: <code>for j in range(0, i)</code>, so how can I access that row by index, then by column, and update it?</p>
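The usual fix is to write through a single indexer instead of the chained `iloc[j][...]`, which assigns to a temporary row object (a sketch with toy data mirroring the question's names):

```python
import pandas as pd

df = pd.DataFrame({"Market": ["X", "X", "Y"],
                   "Quantity": [10, 20, 30]})

# .copy() after the boolean filter makes df_company an independent frame,
# silencing SettingWithCopyWarning for later writes
df_company = df[df["Market"] == "X"].copy()

j = 0
remainder_shares = 5
# Single indexer: one positional (row, column) lookup, one write
col = df_company.columns.get_loc("Quantity")
df_company.iloc[j, col] += int(remainder_shares)
```

`df_company.iloc[j]["Quantity"] += ...` first materialises row `j` as a new Series, then mutates that throwaway Series, so the DataFrame never changes; `iloc[j, col]` writes directly into the frame.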
<python><pandas>
2023-03-24 09:56:57
1
6,026
disruptive
75,832,049
6,760,948
timeout if the condition is not met
<p>My requirement is to monitor a Kafka topic for messages, filter or search for a keyword - <code>state</code> - in the arriving messages, and print active brands and non-active brands (Disabled &amp; Booted) separately. This has to be done within a timeout of 15 minutes, or else break out of the script reporting a timeout.</p> <p>Below is my code and a sample message from the Kafka topic. I'm using confluent-kafka.</p> <p>My problems: I'm unsure where to add the timeout logic - should it go in <code>get_msg_from_kafka()</code> or in <code>status_check()</code>? Secondly, ideally all the messages arriving within the 15 minutes should be processed, right? How should that be handled?</p> <p>Code Snippet:</p> <pre><code>START_TIME = datetime.now()
lap_list = ['Apple', 'Acer']
TIMEOUT = 15

# Function that polls the kafka topic and retrieves messages..
def get_msg_from_kafka(args):
    pass

# Function that has to validate whether the keyword is present, and also has to capture
# which laptops are in Active state and which aren't..
def check_state():
    success = True
    msg_status = get_msg_from_kafka(args)  # This will return a JSON output as posted below.
    if bool(msg_status) == False:
        return False
    for val in msg_status['Brand']:
        if val['name'] in lap_list and val['state'] != &quot;Active&quot;:
            success = False
            print(f&quot;Status of {val['name']}: {val['Brand']}&quot;)
    return success

# Is it possible to check how long the validation took, i.e. to check the laptops are in active state?
def status_check():
    stop_time = START_TIME + timedelta(minutes=TIMEOUT)
    while True:
        success = check_state()
        if success:
            print(&quot;All the Laptops are in Active State..&quot;)
            break
        current_time = datetime.now()
        if current_time &gt; stop_time:
            print(&quot;Exceeded the timeout ...&quot;)
            break

if __name__ == '__main__':
    status_check()
</code></pre> <p><code>msg_status</code> looks like below (message format from Kafka)</p> <pre><code>msg_status = {
    &quot;case&quot;: &quot;2nmi&quot;,
    &quot;id&quot;: &quot;6.00c&quot;,
    &quot;key&quot;: &quot;subN&quot;,
    &quot;Brand&quot;: [
        {
            &quot;state&quot;: &quot;Active&quot;,
            &quot;name&quot;: &quot;Apple&quot;,
            &quot;date&quot;: &quot;2021-01-20T08:35:33.382532&quot;,
            &quot;Loc&quot;: &quot;GA&quot;,
        },
        {
            &quot;state&quot;: &quot;Disabled&quot;,
            &quot;name&quot;: &quot;HP&quot;,
            &quot;date&quot;: &quot;2018-01-09T08:25:90.382&quot;,
            &quot;Loc&quot;: &quot;TX&quot;,
        },
        {
            &quot;state&quot;: &quot;Active&quot;,
            &quot;name&quot;: &quot;Acer&quot;,
            &quot;date&quot;: &quot;2022-01-2T8:35:03.5432&quot;,
            &quot;Loc&quot;: &quot;IO&quot;
        },
        {
            &quot;state&quot;: &quot;Booted&quot;,
            &quot;name&quot;: &quot;Toshiba&quot;,
            &quot;date&quot;: &quot;2023-09-29T9:5:03.3298&quot;,
            &quot;Loc&quot;: &quot;TX&quot;
        }
    ],
    &quot;DHL_ID&quot;: &quot;a3288ec45c82&quot;
}
</code></pre>
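One common way to place the timeout is in the polling function itself: compute a deadline once, then poll until either the check passes or the deadline is crossed. A minimal sketch with a stand-in for the Kafka-backed check (the stand-in function and the short intervals are illustrative only, not from the question):

```python
import time

calls = {"n": 0}

def check_state():
    # Stand-in for the real Kafka-backed validation; succeeds on the third poll.
    calls["n"] += 1
    return calls["n"] >= 3

def status_check(timeout_s=2.0, poll_interval=0.01):
    # A monotonic clock avoids surprises if the wall clock jumps mid-run.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check_state():
            return True          # all laptops active
        time.sleep(poll_interval)
    return False                 # timed out

result = status_check()
print(result)  # True
```

Every message polled before the deadline still gets processed inside `check_state()`; only the outer loop is bounded by time.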
<python><python-3.x><confluent-kafka-python>
2023-03-24 09:40:40
1
563
voltas
75,831,843
520,556
How to fill pandas column (variable) using two conditions at once
<p>The following code works fine, but I failed to make it a one-liner. Is it possible?</p> <pre><code>dat.loc[dat['A'] == '', 'A'] = 'no'
dat.loc[dat['A'].isnull(), 'A'] = 'no'
</code></pre> <p>(It is strange, though, that after merging two dataframes A contains both empty strings and nulls. 🀷🏻)</p>
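Assuming standard pandas boolean-mask semantics, the two conditions can be OR-ed inside a single `.loc` call, with each side parenthesised so the bitwise `|` binds correctly (toy data made up for the demo):

```python
import numpy as np
import pandas as pd

dat = pd.DataFrame({'A': ['', None, 'yes', np.nan]})

# One pass: rows where A is an empty string OR null both become 'no'.
dat.loc[(dat['A'] == '') | (dat['A'].isnull()), 'A'] = 'no'

print(dat['A'].tolist())  # ['no', 'no', 'yes', 'no']
```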
<python><pandas>
2023-03-24 09:16:36
2
1,598
striatum
75,831,723
2,859,206
Pandas group events close together by date, then test if other values are equal
<p>The problem: group together events that occur close to each other in time and that also have another variable equal. For example, given the date of disease onset and an address, find disease outbreaks that occur at the same location within a specified timeframe of each other. Large pandas dataframe (300K rows). Example data:</p> <pre><code>df = pd.DataFrame(
    [
        ['2020-01-01 10:00', '1', 'A'],
        ['2020-01-01 10:01', '2', 'A'],
        ['2020-01-01 10:02', '3a', 'A'],
        ['2020-01-01 10:02', '3b', 'A'],
        ['2020-01-02 10:03', '4', 'B'],
        ['2020-01-02 10:50', '5', 'B'],
        ['2020-01-02 10:54', '6', 'B'],
        ['2020-01-02 10:55', '7', 'B'],
    ],
    columns=['event_time', 'event_id', 'Address']
)
</code></pre> <p>The output should have rows with the first and last event date, a list of the events and the address</p> <pre><code>   event_time_start     event_time_end       events_and_related_event_id_list  Address
0  2020-01-01 10:00:00  2020-01-01 10:02:00  [1, 2, 3a]                        A
6  2020-01-01 10:54:00  2020-01-01 10:55:00  [6, 7]                            B
</code></pre> <p>EDITED - to clarify - SOLUTION</p> <p>The solution by jezrael to match dates within a specified number of days before or after a date is based on <a href="https://stackoverflow.com/questions/64102074/python-pandas-group-close-events-together-based-on-a-window?noredirect=1&amp;lq=1">a similar approach from another thread</a>, but includes a groupby for the Address. This first step works perfectly without modification on the real data. It is not changed below, except to name some of the values for clarity.</p> <p>The second step did not work because, unlike the example data, the real data contained non-continuous and non-sequential events. This required: sorting the first output by Address and event_time; different logic for the boolean series that groups event_times together (m/timeGroup_bool); and removal of the bool series as a df filter for the Groupby.agg.</p> <p>Here is the full solution with modifications and clarifications based on jezrael's simply awesome response (the <a href="https://stackoverflow.com/questions/17657720/python-list-comprehension-double-for">f1 lambda, which collects all values from the grouped lists, is best explained here</a>):</p> <pre><code>df = pd.DataFrame(
    [
        ['1', 'A', '2020-01-01 10:00'],
        ['2', 'B', '2020-01-01 10:01'],
        ['3', 'A', '2020-01-01 10:01'],
        ['4', 'C', '2020-01-01 10:02'],
        ['5', 'D', '2020-01-01 10:03'],
        ['6', 'A', '2020-01-01 10:03'],
        ['7', 'E', '2020-01-01 10:03'],
        ['8', 'A', '2020-01-01 10:07'],
        ['9', 'A', '2020-01-01 10:09'],
        ['10', 'A', '2020-01-01 10:11'],
        ['11', 'F', '2020-01-01 10:54'],
        ['12', 'G', '2020-01-01 10:55'],
        ['13', 'F', '2020-01-01 10:56'],
    ],
    columns=['id', 'Address', 'event_time']
)

df = df.sort_values(by=[&quot;Address&quot;, &quot;event_time&quot;])
df['event_time'] = pd.to_datetime(df['event_time'])

## group by address and surrounding time
timeDiff = pd.Timedelta(&quot;2m&quot;)  # time span between related events

def idsNearDates(mDf):
    f = lambda colName, val: mDf.loc[mDf['event_time'].between(val - timeDiff, val + timeDiff), 'id'].drop(colName).tolist()
    mDf['relatedIds'] = [f(colName, value) for colName, value in mDf['event_time'].items()]
    return mDf

df_1stStep = df.groupby('Address').apply(idsNearDates).sort_values(by=[&quot;Address&quot;, 'event_time'])

## aggregate the initial output into a single row per related events
# mark where event times are too far apart
timeGroup_bool = ~(df_1stStep['event_time'].between(df_1stStep['event_time'].shift(1) - timeDiff,
                                                    df_1stStep['event_time'].shift(1) + timeDiff))

# create a single list from all grouped lists
f1 = lambda x: list(dict.fromkeys([value for idList in x for value in idList]))

df_2ndstep = (df_1stStep.groupby([(timeGroup_bool).cumsum(), 'Address'])
                        .agg(Date_first=('event_time', 'min'),
                             Date_last=('event_time', 'max'),
                             Ids=('relatedIds', f1))
                        .droplevel(0)
                        .reset_index())

# get rid of rows with empty lists
df_2ndstep = df_2ndstep[df_2ndstep['Ids'].str.len() &gt; 0]
</code></pre>
<python><pandas><lambda>
2023-03-24 09:03:59
2
2,490
DrWhat
75,831,457
13,217,286
How do I filter rows of strings that contain any value from a list in Polars
<p>If you had a list of values and a Polars dataframe with a column of text, and you wanted to filter to only the rows containing items from the list, how would you write that?</p> <pre class="lang-py prettyprint-override"><code>import polars as pl

a_list = ['a', 'b', 'c']

df = pl.DataFrame({
    'col1': [
        'I am just a string',
        'one more, but without the letters',
        'we want, a, b, c,',
        'Nothing here'
    ]
})
</code></pre> <pre><code>shape: (4, 1)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ col1                            β”‚
β”‚ ---                             β”‚
β”‚ str                             β”‚
β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•‘
β”‚ I am just a string              β”‚
β”‚ one more, but without the lett… β”‚
β”‚ we want, a, b, c,               β”‚
β”‚ Nothing here                    β”‚  # no 'a', 'b', or 'c'
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
</code></pre> <p>Expected output:</p> <pre class="lang-py prettyprint-override"><code>shape: (3, 1)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ col1                              β”‚
β”‚ ---                               β”‚
β”‚ str                               β”‚
β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•‘
β”‚ I am just a string                β”‚
β”‚ one more, but without the letter… β”‚
β”‚ we want, a, b, c,                 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
</code></pre> <p>I assume it'd have something combining/using <code>.is_in(a_list)</code> and <code>.str.contains()</code>, but I haven't been able to make it work.</p>
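The usual approach is to join the list into a single regex alternation and hand that to `str.contains`. The pattern-building and matching step can be sketched in plain Python (the polars call at the end is the assumed usage, shown only as a comment):

```python
import re

a_list = ['a', 'b', 'c']
rows = [
    'I am just a string',
    'one more, but without the letters',
    'we want, a, b, c,',
    'Nothing here',
]

# One alternation pattern; re.escape guards against regex metacharacters
# sneaking in from the list items.
pattern = "|".join(re.escape(s) for s in a_list)

kept = [row for row in rows if re.search(pattern, row)]
print(kept)  # first three rows survive; 'Nothing here' has no a/b/c

# Assumed polars usage with the same pattern:
#   df.filter(pl.col('col1').str.contains(pattern))
```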
<python><regex><dataframe><contains><python-polars>
2023-03-24 08:31:58
2
320
Thomas
75,831,415
2,730,554
pycharm not detecting changes to code when re-running function
<p>I have just moved from using Spyder to PyCharm and so far think it's great. I have one issue though. When I run a simple function it runs as expected. The problem comes when I make a small change to the function, for example adding an extra column to a dataframe that the function returns: the change is not reflected when I re-run the code. If I close and re-open PyCharm then the change is reflected. Am I missing something?</p> <p><strong>Update</strong></p> <p>I run the function within the Python console in PyCharm. So,</p> <pre><code>from some_directory.some_file import some_function
output = some_function()
</code></pre> <p>I then make a change to the function, Ctrl+S the file, and run the line below,</p> <pre><code>output = some_function()
</code></pre> <p>But it does not reflect the changes I made.</p>
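At the stdlib level, the console's `import` binds a module object that stays alive between runs, so later edits to the file are invisible until the module is reloaded. A self-contained sketch of that behaviour (module name and contents are invented for the demo):

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True  # keep the demo from caching stale bytecode

# Build a throwaway module on disk.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "some_file.py").write_text("def some_function():\n    return 'old'\n")
sys.path.insert(0, str(tmp))

import some_file
print(some_file.some_function())  # 'old'

# Edit the file, then reload instead of re-importing: a plain `import`
# would silently reuse the already-loaded module object.
(tmp / "some_file.py").write_text("def some_function():\n    return 'new'\n")
importlib.reload(some_file)
print(some_file.some_function())  # 'new'
```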
<python><pycharm>
2023-03-24 08:27:27
1
6,738
mHelpMe
75,831,265
13,994,829
How to improve bigquery client query response time in FastAPI?
<p>I have developed an API with <code>FastAPI</code>.</p> <p>This API will:</p> <blockquote> <ol> <li><strong>POST</strong> with <code>user_ip</code> in <code>client.py</code></li> <li>Use the <em>BigQuery API Client</em> to <code>query</code> the data table in <code>v1_router.py</code></li> <li>If <code>user_ip</code> exists in the data table, return its <code>group</code></li> </ol> </blockquote> <p>The response time is about <code>5 sec</code> on my computer.</p> <p>I have used the <code>time</code> module to analyze where it goes:</p> <blockquote> <p><code>3 sec</code>: instantiating <code>bigquery.Client</code> in <code>v1_router.py</code></p> <p><code>2 sec</code>: running <code>client.query</code> and <code>res.to_dataframe().to_dict(orient='records')</code></p> </blockquote> <p>Can someone suggest how to improve the response time?</p> <p>Thanks!</p> <h1>Project Folder Structure</h1> <pre><code>β”‚  client.py
β”‚  main.py
β”‚
β”œβ”€routers
β”‚  β”‚  v1_router.py
β”‚  β”‚  __init__.py
</code></pre> <h1>Data Table</h1> <pre><code>user_ip    group
a12345     1, 2
b12345     1, 3
c12345     3
d12345     2
e12345     3, 1, 2
f12345     1
g12345     3
h12345     1
i12345     1
j12345     2
k12345     3
</code></pre> <h1>FastAPI</h1> <h3>main.py</h3> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI
import uvicorn
from routers import v1_router

app = FastAPI()
app.include_router(v1_router)

if __name__ == &quot;__main__&quot;:
    uvicorn.run(app=&quot;main:app&quot;, host=&quot;0.0.0.0&quot;, port=8000, reload=True)
</code></pre> <h3>/routers/v1_router.py</h3> <pre class="lang-py prettyprint-override"><code>from fastapi import APIRouter
from google.cloud import bigquery
from pydantic import BaseModel

v1_router = APIRouter(
    prefix=&quot;/v1&quot;,
)

client = bigquery.Client()

class User(BaseModel):
    ip: str

@v1_router.post(&quot;&quot;)
async def get_recommend(user: User):
    user_ip = user.ip
    QUERY = (f&quot;&quot;&quot;
        SELECT group
        FROM dataset.datatable
        WHERE user_ip = '{user_ip}'
    &quot;&quot;&quot;)
    query_job = client.query(QUERY)
    res = query_job.result()
    res = res.to_dataframe().to_dict(orient='records')
    return res
</code></pre> <h1>client.py</h1> <pre class="lang-py prettyprint-override"><code>import requests

host = &quot;http://localhost:8000/&quot;
url = f&quot;{host}v1/&quot;
user = {&quot;ip&quot;: &quot;f12345&quot;}

res = requests.post(url, json=user)
if res.status_code == 200:
    print(res.json())
else:
    print(&quot;Error: &quot;, res.text)
</code></pre>
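If many requests repeat the same `user_ip`, one hedged option is to memoise the lookup in the API process so repeat hits skip the BigQuery round trip entirely. A stand-in sketch (the slow function simulates the query; cache sizing and invalidation are deliberately left out):

```python
import time
from functools import lru_cache

def slow_lookup(user_ip):
    # Stand-in for client.query(...).result(): pretend it costs 50 ms.
    time.sleep(0.05)
    return (("group", "1, 2"),) if user_ip == "a12345" else ()

@lru_cache(maxsize=1024)
def cached_lookup(user_ip):
    # lru_cache needs hashable arguments and benefits from an immutable
    # return value, hence the tuple-of-tuples shape.
    return slow_lookup(user_ip)

t0 = time.perf_counter(); first = cached_lookup("a12345"); cold = time.perf_counter() - t0
t0 = time.perf_counter(); second = cached_lookup("a12345"); warm = time.perf_counter() - t0

print(cold > warm)  # True: the second call never touches slow_lookup
```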
<python><google-bigquery><fastapi>
2023-03-24 08:09:03
0
545
Xiang
75,831,189
6,866,762
ROS2 Topic on WSL Can not Find All Available Topics
<p>I have ROS2 Galactic installed on <code>Ubuntu 20.04</code> inside <strong>WSL</strong> on my <strong>Windows 11</strong> machine. I have a <code>Turtlebot4</code> which is functioning without issues. My Windows PC is connected to the same network as the TurtleBot4, but whenever I run <code>ros2 topic list</code>, the topics published by my TurtleBot are not showing up. I can see only two topics:</p> <pre><code>/parameter_events
/rosout
</code></pre> <p>I checked the following as well:</p> <blockquote> <ol> <li>Both my PC and the TurtleBot are connected to the same wifi network.</li> <li>I can ssh into the Turtlebot4 RPI without errors, so it's working fine.</li> <li>I tried connecting to the bot from a PC that runs Ubuntu natively. It works fine there. Unfortunately, changing my default OS (Windows) is not an option for now.</li> </ol> </blockquote> <p>Any suggestions on this would be appreciated.</p>
<python><windows-subsystem-for-linux><ros><ubuntu-20.04><ros2>
2023-03-24 08:00:07
0
519
tahsin314
75,830,683
6,197,439
Right-align (pad left) index column in Pandas?
<p>Consider the following simple example:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame({'text': [&quot;foo&quot;, &quot;bar&quot;, &quot;baz&quot;, &quot;qux&quot;, &quot;quux&quot;, &quot;corge&quot;, &quot;grault&quot;,
                            &quot;garply&quot;, &quot;waldo&quot;, &quot;fred&quot;, &quot;plugh&quot;, &quot;xyzzy&quot;, &quot;thud&quot;]})
print(df)
</code></pre> <p>This prints:</p> <pre class="lang-none prettyprint-override"><code>$ python3 test.py
      text
0      foo
1      bar
2      baz
3      qux
4     quux
5    corge
6   grault
7   garply
8    waldo
9     fred
10   plugh
11   xyzzy
12    thud
</code></pre> <p>Note that the autogenerated index column is printed left-aligned (padded with spaces to the right).</p> <p>How can I make the printout of the autogenerated index column right-aligned (padded with spaces to the left); that is:</p> <pre class="lang-none prettyprint-override"><code>      text
 0     foo
 1     bar
 2     baz
 3     qux
 4    quux
 5   corge
 6  grault
 7  garply
 8   waldo
 9    fred
10   plugh
11   xyzzy
12    thud
</code></pre>
<python><pandas><stdout>
2023-03-24 06:42:22
1
5,938
sdbbs
75,830,645
2,065,083
Parse a dynamic xml file recursively in python
<p>I am trying to read a dynamic xml file recursively. Here is my code:</p> <pre><code>import xml.etree.ElementTree as ET

class XmlNavigator:
    def __init__(self, xml=None):
        self.tree = ET.ElementTree(ET.fromstring(xml))
        self.root = self.tree.getroot()

    def parse_xml(self, node):
        for elem in self.root.iter():
            if elem.tag == f&quot;{node}&quot;:
                return elem

if __name__ == &quot;__main__&quot;:
    xml_string = &quot;&lt;note&gt; &lt;to&gt;Tove&lt;/to&gt; &lt;from&gt;Jani&lt;/from&gt; &lt;heading&gt;Reminder&lt;/heading&gt; &lt;body&gt;Don't forget me this weekend!&lt;/body&gt; &lt;/note&gt; &quot;
    xmlObj = XmlNavigator(xml_string)
</code></pre> <p>Here my <code>xml_string</code> is dynamic: it can change and have different items and tags.</p> <p>Now I want to print the items as follows, in a recursive manner:</p> <pre><code>&lt;note&gt; &lt;to&gt;Tove&lt;/to&gt; &lt;from&gt;Jani&lt;/from&gt; &lt;heading&gt;Reminder&lt;/heading&gt; &lt;body&gt;Don't forget me this weekend!&lt;/body&gt; &lt;/note&gt;
&lt;to&gt;Tove&lt;/to&gt;
Tove
&lt;from&gt;Jani&lt;/from&gt;
Jani
&lt;heading&gt;Reminder&lt;/heading&gt;
Reminder
&lt;body&gt;Don't forget me this weekend!&lt;/body&gt;
Don't forget me this weekend!
</code></pre> <p>How can I achieve this?</p>
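A recursive walk over `ElementTree` nodes can produce roughly that output: serialize the element itself, then print its text, then recurse into the children. A sketch (whitespace handling simplified):

```python
import xml.etree.ElementTree as ET

xml_string = ("<note> <to>Tove</to> <from>Jani</from> <heading>Reminder</heading> "
              "<body>Don't forget me this weekend!</body> </note>")

collected = []

def walk(elem):
    # Serialize the element (tostring also includes its tail, hence strip),
    # then its own text, then recurse into each child.
    collected.append(ET.tostring(elem, encoding="unicode").strip())
    if elem.text and elem.text.strip():
        collected.append(elem.text.strip())
    for child in elem:
        walk(child)

root = ET.fromstring(xml_string)
walk(root)
print("\n".join(collected))
```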
<python><xml>
2023-03-24 06:36:19
1
21,515
Learner
75,830,592
5,371,582
type annotation when I know to be in a particular case
<p>I have a function that reads a JSON file and returns either a dictionary or a list, depending on the contents of the file. Here is the function:</p> <pre><code>from typing import Union
import json

def read_json_file(filename) -&gt; Union[dict, list]:
    with open(filename) as f:
        return json.load(f)
</code></pre> <p>I'm using this function to read a JSON file that I know contains a list, like this:</p> <pre><code>def get_name_list() -&gt; list:
    return read_json_file(&quot;name_list.json&quot;)   # &lt;-- problematic line
</code></pre> <p>My issue is that the static type checker (pyright, for the record) complains that <code>read_json_file</code> could return a dict while the function <code>get_name_list</code> has to return a list.</p> <p>QUESTION: how do I say &quot;I know that this specific call to <code>read_json_file</code> will return a list&quot;?</p> <h2>Unsatisfactory solution 1</h2> <p>I've tried a couple of solutions along the lines of:</p> <pre><code>def read_json_file(filename, ensure_type) -&gt; Union[dict, list]:
    with open(filename) as f:
        answer = json.load(f)
    hack: ensure_type = answer
    return hack

def get_name_list() -&gt; list:
    return read_json_file(&quot;name_list.json&quot;, ensure_type=list)
</code></pre> <p>It does not work.</p> <h2>Unsatisfactory solution 2</h2> <p>ChatGPT recommended me this kind of solution:</p> <pre><code>from typing import TypeVar, Union, List, Dict, Type
import json

T = TypeVar('T', List, Dict)

def read_json(filename, ensure_type: Type[T] = str) -&gt; T:
    with open(filename) as f:
        answer = json.load(f)
    if ensure_type != str:
        assert(isinstance(answer, ensure_type))
    answer: T = answer
    return answer

def get_list() -&gt; List:
    return read_json(&quot;name_list.json&quot;, ensure_type=list)

def get_json(filename):
    return read_json(filename)

foo = get_list()
print(foo)

bar = get_json(&quot;bonjour.json&quot;)
print(bar)
</code></pre> <p>It does the job, BUT the return type of <code>read_json_file</code> is no longer readable. I prefer to keep <code>-&gt;Union[list,dict]</code>, which is more explicit.</p> <h2>Unsatisfactory solution 3</h2> <p>I can do this:</p> <pre><code>def read_list():
    answer: list = read_json_file(&quot;name_list.json&quot;)  #type:ignore
    return answer
</code></pre> <p>But it is not quite satisfactory.</p>
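The usual stdlib answer for "I know this particular call returns a list" is `typing.cast`: it is purely an annotation for the type checker and does nothing at runtime, so the readable `Union[dict, list]` return annotation stays untouched. A sketch (a temp file stands in for `name_list.json`):

```python
import json
import tempfile
from typing import Union, cast

def read_json_file(filename) -> Union[dict, list]:
    with open(filename) as f:
        return json.load(f)

# Stand-in for name_list.json.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(["alice", "bob"], f)
    path = f.name

def get_name_list() -> list:
    # cast() tells the checker "trust me, it's a list"; at runtime it
    # returns its argument unchanged, with no hidden conversion or check.
    return cast(list, read_json_file(path))

print(get_name_list())  # ['alice', 'bob']
```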
<python><python-typing>
2023-03-24 06:26:01
1
705
Laurent Claessens
75,830,048
1,021,306
Python - ModuleNotFoundError thrown when calling a function that imports another module in the same directory
<p>I have the following Python file structure.</p> <pre><code>-- main.py
-- scripts/
   -- __init__.py
   -- helper_functions.py
   -- mytest.py
</code></pre> <p><code>mytest.py</code> has the following function.</p> <pre><code>def test():
    print(&quot;test&quot;)
</code></pre> <p><code>helper_functions.py</code> imports the mytest module and uses the test function:</p> <pre><code>import mytest

def hello():
    mytest.test()

if __name__==&quot;__main__&quot;:
    mytest.test()
</code></pre> <p><code>__init__.py</code> imports the hello function from the helper_functions.py module so that main.py outside the scripts folder can use the hello function directly.</p> <pre><code>from scripts.helper_functions import hello
</code></pre> <p><code>main.py</code> tries to call the hello function directly:</p> <pre><code>from scripts import hello

if __name__==&quot;__main__&quot;:
    hello()
</code></pre> <p>When I tried to run <code>main.py</code>, the following error was thrown:</p> <pre><code>Traceback (most recent call last):
  File &quot;\main.py&quot;, line 1, in &lt;module&gt;
    from scripts import hello
  File &quot;\scripts\__init__.py&quot;, line 1, in &lt;module&gt;
    from scripts.helper_functions import hello
  File &quot;\scripts\helper_functions.py&quot;, line 8, in &lt;module&gt;
    import mytest
ModuleNotFoundError: No module named 'mytest'
</code></pre> <p>It seems the mytest module couldn't be imported into helper_functions.py. How can this be fixed?</p>
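Inside a package, `import mytest` is an absolute import resolved against `sys.path`, not against the `scripts/` directory, which is why it fails when the program starts from `main.py`. Switching to a relative import (`from . import mytest`) resolves it against the package. A self-contained reconstruction of the layout using a temporary directory:

```python
import pathlib
import sys
import tempfile

root = pathlib.Path(tempfile.mkdtemp())
pkg = root / "scripts"
pkg.mkdir()

(pkg / "mytest.py").write_text("def test():\n    return 'test'\n")
# Relative import: resolve mytest against this package, not sys.path.
(pkg / "helper_functions.py").write_text(
    "from . import mytest\n"
    "\n"
    "def hello():\n"
    "    return mytest.test()\n"
)
(pkg / "__init__.py").write_text("from scripts.helper_functions import hello\n")

sys.path.insert(0, str(root))
from scripts import hello

print(hello())  # 'test'
```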
<python>
2023-03-24 04:31:10
2
3,579
alextc
75,829,938
19,293,506
TypeError: only size-1 arrays (I am trying to print most similar words)
<h1>How to return similar words?</h1> <p>I have the Python code below; it should simply print an array of words similar to a given word, e.g. &quot;japan&quot;.</p> <h2>code:</h2> <pre class="lang-py prettyprint-override"><code>import spacy
import numpy as np

# Load the larger pre-trained English model
nlp = spacy.load(&quot;en_core_web_md&quot;)

# Get the vector for &quot;japan&quot;
word_vec = nlp(&quot;japan&quot;).vector

# Reshape the word_vec array to have a second dimension of size 1
word_vec = np.reshape(word_vec, (1, -1))

# Find the 3 most similar words to &quot;japan&quot;
most_similar = nlp.vocab.vectors.most_similar(word_vec, n=3)

# Extract the words from the most_similar list
similar_words = [nlp.vocab.strings[similar[0]] for similar in most_similar]

print(similar_words)
</code></pre> <h2>Error</h2> <pre><code>user@users-MacBook training % python3 main.py
Traceback (most recent call last):
  File &quot;/user/project/training/main.py&quot;, line 17, in &lt;module&gt;
    similar_words = [nlp.vocab.strings[similar[0]] for similar in most_similar]
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/user/project/training/main.py&quot;, line 17, in &lt;listcomp&gt;
    similar_words = [nlp.vocab.strings[similar[0]] for similar in most_similar]
                     ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
  File &quot;spacy/strings.pyx&quot;, line 156, in spacy.strings.StringStore.__getitem__
TypeError: only size-1 arrays can be converted to Python scalars
</code></pre> <hr /> <p>When I print <code>most_similar</code> I get this:</p> <pre><code>(array([[ 6726951824429389069, 10815251124955538419,  4763767604223487472]],
       dtype=uint64),
 array([[10959, 11621, 16082]], dtype=int32),
 array([[1.    , 0.6241, 0.575 ]], dtype=float32))
</code></pre> <p>But I cannot convert this to words.</p> <p>How can I fix it?</p>
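The error comes from iterating over the wrong axis: `most_similar` is a `(keys, rows, scores)` tuple whose first element is a 2-D array, so looping over the tuple yields whole arrays, not scalar keys. The required slicing can be shown with plain nested lists standing in for the arrays (the hash-to-word mapping is invented for the demo):

```python
# Stand-ins for the three (1, 3)-shaped arrays spacy returned.
keys = [[6726951824429389069, 10815251124955538419, 4763767604223487472]]
rows = [[10959, 11621, 16082]]
scores = [[1.0, 0.6241, 0.575]]
most_similar = (keys, rows, scores)

# Stand-in for nlp.vocab.strings; the actual words here are made up.
strings = {
    6726951824429389069: "japan",
    10815251124955538419: "japanese",
    4763767604223487472: "nippon",
}

# Take the keys array, drop the leading batch axis with [0], then look up
# each scalar key -- i.e. nlp.vocab.strings[key] in the real code.
similar_words = [strings[key] for key in most_similar[0][0]]
print(similar_words)
```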
<python><numpy><nlp><spacy>
2023-03-24 04:03:45
1
631
kawa
75,829,937
11,901,732
Sort list by dictionary in Python
<p>I want to sort a list by a dictionary, as shown below:</p> <p>The list:</p> <pre><code>L = ['Jack', 'King', '9', 'King', '10']
</code></pre> <p>The dictionary:</p> <pre><code>D = {0:'Ace', 1:'King', 2:'Queen', 3:'Jack', 4:'10', 5:'9'}
</code></pre> <p>I want to sort L based on the keys of dictionary D, such that the output would be:</p> <p><code>[ 'King', 'King', 'Jack', '10', '9' ]</code></p> <p>I tried:</p> <pre><code>sorted(L, key = D.get)
</code></pre> <p>but got this error:</p> <pre><code>---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
in
----&gt; 3 sorted(L, key = D.get)

TypeError: '&lt;' not supported between instances of 'NoneType' and 'NoneType'
</code></pre> <p>Why did I get this error? Aren't the dictionary keys integers?</p> <hr /> <p>Update:</p> <p>Similar question: <a href="https://stackoverflow.com/q/483666/11901732">Reverse / invert a dictionary mapping</a></p>
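As a worked illustration of what goes wrong: `D.get` looks up the list elements as keys, but the card names are D's values, so every lookup returns None, hence comparing `NoneType` with `NoneType`. Inverting the dictionary first gives a usable key function:

```python
L = ['Jack', 'King', '9', 'King', '10']
D = {0: 'Ace', 1: 'King', 2: 'Queen', 3: 'Jack', 4: '10', 5: '9'}

# Map each card name back to its rank, then sort by that rank.
rank = {name: order for order, name in D.items()}
result = sorted(L, key=rank.get)
print(result)  # ['King', 'King', 'Jack', '10', '9']
```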
<python><list><dictionary><sorting>
2023-03-24 04:02:49
2
5,315
nilsinelabore
75,829,747
12,931,358
Setting CUDA_VISIBLE_DEVICES just has no effect even though I put it before pytorch
<p>I have tried many methods to set the cuda device to 1, or to 0,1. However, most of them didn't work; I followed some solutions but they did not work either...</p> <pre><code># '''cmd: CUDA_VISIBLE_DEVICES=1, python mytest.py'''  # still shows 0
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'  # cannot work
# export 'CUDA_VISIBLE_DEVICES'='1'  # how to use this in detail?

if __name__ == &quot;__main__&quot;:
    import torch
    print(torch.cuda.current_device())  # shows 0, the os.environ method does not work

    torch.cuda.set_device(0)
    print(torch.cuda.current_device())  # shows 0, it works

    torch.cuda.set_device(1)
    print(torch.cuda.current_device())  # shows 1, it works

    torch.cuda.set_device('cuda:0')
    print(torch.cuda.current_device())  # shows 0, it works

    torch.cuda.set_device('cuda:1')
    print(torch.cuda.current_device())  # shows 1, it works

    torch.cuda.set_device('cuda:0,1')   # Error: why does it show Invalid device string: 'cuda:0,1'?
    print(torch.cuda.current_device())

    device = torch.device(&quot;cuda:1&quot;)
    print(torch.cuda.current_device())  # shows 0, it does not work
</code></pre> <p>The only method that works is <code>torch.cuda.set_device()</code>. However, when I want to use 2 GPUs, like set_device('cuda:0,1'), it still shows an error, because it cannot accept multiple devices? If so, how can I set multiple devices?</p> <p>P.S. my environment is a Linux server; nvidia-smi shows:</p> <pre><code>CUDA Version: 10.2
| GPU  Name        Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
|===============================+=======================+======================|
|   0  Tesla T4             Off | 00000000:3B:00.0  Off |                    0 |
| N/A   68C    P0    62W /  70W |      0MiB / 15109MiB |     26%      Default |
+-------------------------------+-----------------------+----------------------+
|   1  Tesla T4             Off | 00000000:AF:00.0  Off |                    0 |
| N/A   40C    P0    16W /  70W |      0MiB / 15109MiB |      6%      Default |
+-------------------------------+-----------------------+----------------------+
</code></pre>
<python><pytorch><cuda>
2023-03-24 03:21:41
2
2,077
4daJKong
75,829,670
308,827
Finding sets of files matching the same substring
<p>I have a set of files with the following type of name:</p> <pre><code>soil_moisture_2015090T043000_sm_global.tif
soil_moisture_2015090T133000_sm_global.tif
soil_moisture_2015090T163000_sm_global.tif
soil_moisture_2015091T073000_sm_global.tif
soil_moisture_2015091T223000_sm_global.tif
soil_moisture_2015092T013000_sm_global.tif
</code></pre> <p>The first three files share the date <code>2015090</code>, the next two share <code>2015091</code>, and the last has <code>2015092</code>. How can I find all sets of files in the directory that share the same date in the name? I know how to do this for a fixed substring match, but I am not sure how to do it when the substring itself is changing.</p> <pre><code>import glob

for file in glob.glob('/dir/soil_moisture_*_sm_global.tif'):
    print(file)
</code></pre>
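One approach is to capture the changing date with a regex group and collect the names into a dict keyed by that group. A sketch using the filenames from the question (in real use the list would come from `glob.glob`):

```python
import re
from collections import defaultdict

# Stand-in for glob.glob('/dir/soil_moisture_*_sm_global.tif').
files = [
    'soil_moisture_2015090T043000_sm_global.tif',
    'soil_moisture_2015090T133000_sm_global.tif',
    'soil_moisture_2015090T163000_sm_global.tif',
    'soil_moisture_2015091T073000_sm_global.tif',
    'soil_moisture_2015091T223000_sm_global.tif',
    'soil_moisture_2015092T013000_sm_global.tif',
]

pattern = re.compile(r'soil_moisture_(\d{7})T\d{6}_sm_global\.tif$')

groups = defaultdict(list)
for name in files:
    m = pattern.search(name)
    if m:
        groups[m.group(1)].append(name)  # key on the 7-digit date

print(sorted(groups))          # ['2015090', '2015091', '2015092']
print(len(groups['2015090']))  # 3
```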
<python>
2023-03-24 03:01:22
1
22,341
user308827
75,829,412
1,857,373
PCA(n_components=9) digits Value Error Shape passed values (n, 9), indices imply (n, 10)
<p><strong>Problem</strong></p> <p>Working on digits data (0-9): I finished coding OneVsOne and OneVsRest classifiers for binary classification used in multi-class classification. The classifier code is working for machine learning.</p> <p>Now I am ready to add PCA dimension reduction and experiment with the results to set up new runs for the OneVsOne and OneVsRest multi-class classifiers.</p> <p>I am attempting to run PCA(n_components=9) on the training model, which works. The range of the digits data is 0-9, hence the choice of PCA(n_components=9). The value error reports that the shape of the passed values is (16800, 9), while the indices imply (16800, 10).</p> <p>How can I make this PCA(n_components=9) successfully work with 0-9, essentially matching the shape (16800, 9) with the indices?</p> <p><strong>Shape of Data</strong></p> <pre><code>train.shape : (16800, 783)
X.shape: (16800, 783)
y (response): (16800,)
</code></pre> <p><strong>Code</strong></p> <pre><code>principalComponents = model_pca.fit_transform(X)

df_pca = pd.DataFrame(data = principalComponents,
                      columns = ['pc0', 'pc1', 'pc2', 'pc3', 'pc4', 'pc5', 'pc6', 'pc7', 'pc8', 'pc9'])

df_pca['targets'] = y_target
</code></pre> <p><strong>Value Error trace</strong></p> <pre><code>ValueError: Shape of passed values is (16800, 9), indices imply (16800, 10)

ValueError                                Traceback (most recent call last)
Cell In[142], line 3
      1 principalComponents = model_pca.fit_transform(X)
----&gt; 3 df_pca = pd.DataFrame(data = principalComponents
      4                       , columns = ['pc0', 'pc1', 'pc2', 'pc3', 'pc4', 'pc5', 'pc6', 'pc7', 'pc8', 'pc9'])
      6 df_pca['targets'] = y_target

File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/frame.py:694, in DataFrame.__init__(self, data, index, columns, dtype, copy)
    684     mgr = dict_to_mgr(
    685         # error: Item &quot;ndarray&quot; of &quot;Union[ndarray, Series, Index]&quot; has no
    686         # attribute &quot;name&quot;
   (...)
    691         typ=manager,
    692     )
    693 else:
--&gt; 694     mgr = ndarray_to_mgr(
    695         data,
    696         index,
    697         columns,
    698         dtype=dtype,
    699         copy=copy,
    700         typ=manager,
    701     )
    703 # For data is list-like, or Iterable (will consume into list)
    704 elif is_list_like(data):

File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/internals/construction.py:351, in ndarray_to_mgr(values, index, columns, dtype, copy, typ)
    346 # _prep_ndarray ensures that values.ndim == 2 at this point
    347 index, columns = _get_axes(
    348     values.shape[0], values.shape[1], index=index, columns=columns
    349 )
--&gt; 351 _check_values_indices_shape_match(values, index, columns)
    353 if typ == &quot;array&quot;:
    355     if issubclass(values.dtype.type, str):

File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/internals/construction.py:422, in _check_values_indices_shape_match(values, index, columns)
    420 passed = values.shape
    421 implied = (len(index), len(columns))
--&gt; 422 raise ValueError(f&quot;Shape of passed values is {passed}, indices imply {implied}&quot;)
</code></pre>
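For reference, with `n_components=9` the transform returns 9 columns, but ten names ('pc0' through 'pc9') are passed, which is exactly the shape/indices mismatch in the trace. Deriving the column names from the array's own shape keeps the two in sync (a zeros array stands in for the real PCA output):

```python
import numpy as np
import pandas as pd

# Stand-in for model_pca.fit_transform(X) with n_components=9.
principalComponents = np.zeros((16800, 9))

# One name per actual component: pc0 .. pc8, never more, never fewer.
cols = [f'pc{i}' for i in range(principalComponents.shape[1])]
df_pca = pd.DataFrame(principalComponents, columns=cols)

print(df_pca.shape)  # (16800, 9)
print(cols[-1])      # pc8
```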
<python><pandas><machine-learning><scikit-learn><pca>
2023-03-24 01:51:42
1
449
Data Science Analytics Manager
75,829,393
1,270,076
Install brew in another folder
<p>I want to install python 3.8.6 on an M1 mac but that version has no arm compilation. Looking at this <a href="https://stackoverflow.com/questions/71862398/install-python-3-6-on-mac-m1">answer</a>, I could install x86 brew but I already have a lot of other dependencies installed with my current brew version so I don't want to screw that up.</p> <p>Is there a way to install brew to another folder similar to a venv in python?</p>
<python><python-3.x><homebrew><zsh><apple-m1>
2023-03-24 01:46:55
0
344
JuanKB1024
75,829,289
9,926,472
Variable length nested for loops with different function calls at each level
<p>Although some variants of this question have been asked, I could not find what I was looking for.</p> <p>Let us say I have a variable-length vector of instances of different classes: [objA, objB, objC], each of which has a method called <code>call_method()</code> that returns a particular value <code>v</code> and a boolean <code>s</code>. Each <code>call_method()</code> is in fact like a generator: it returns a new <code>v</code> and a new <code>s</code> on every call. They are actually wrappers around generators that set <code>s</code> to True if the generator returned an object, else False.</p> <p>I would like to have a structure like:</p> <pre><code>while True:
    v1, s1 = objA.call_method()
    if s1 == False:
        break  # It is a failure
    while True:
        v2, s2 = objB.call_method()
        if s2 == False:
            break  # break out of the middle loop and call objA again for a new v1, s1
        while True:
            v3, s3 = objC.call_method()
            if s3 == True:
                return [v1, v2, v3]
            if s3 == False:
                break  # i.e. call objB again.
</code></pre> <p>As you can see here, I do not need a <code>while True</code> for the third object, since I will exit the loop whether true or false. (Just added it here for completeness.) Only if the outer loop succeeds (i.e. s = True) does it enter the inner loop, and so on.</p> <p>What would be the best way to implement this for an arbitrary list of [objK]?</p>
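Since every level has the same shape (poll until success, back out on failure), the fixed nesting can be replaced by one recursive backtracking function over the object list. A sketch with toy objects whose `call_method()` yields each value once and then signals failure (the toy class and data are made up for the demo):

```python
class Toy:
    """Stand-in for objA/objB/objC: returns each value once, then fails."""
    def __init__(self, values):
        self._it = iter(values)

    def call_method(self):
        try:
            return next(self._it), True
        except StopIteration:
            return None, False

def solve(objs, depth=0, picked=None):
    picked = [] if picked is None else picked
    if depth == len(objs):
        return list(picked)          # innermost level succeeded
    while True:
        v, s = objs[depth].call_method()
        if not s:
            return None              # back out: the level above advances
        picked.append(v)
        found = solve(objs, depth + 1, picked)
        if found is not None:
            return found
        picked.pop()                 # deeper level failed; try our next value

objs = [Toy([1, 2]), Toy(['x']), Toy(['ok'])]
result = solve(objs)
print(result)  # [1, 'x', 'ok']
```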
<python><loops><nested>
2023-03-24 01:21:46
1
587
OlorinIstari
75,829,014
2,158,231
Bazel error: no such package: The repository could not be resolved: Repository is not defined and referenced by
<p>I am trying to run a simple python binary with bazel using <code>bazel run projects/my-python-app/...</code>. But when I run it I get the error:</p> <pre><code>ERROR: /private/var/tmp/_bazel_justin/84cef48b5ae183d272bc73733d19e8e1/external/python_deps_pypi__flask/BUILD.bazel:11:11: no such package '@python_deps_pypi__itsdangerous//': The repository '@python_deps_pypi__itsdangerous' could not be resolved: Repository '@python_deps_pypi__itsdangerous' is not defined and referenced by '@python_deps_pypi__flask//:pkg' ERROR: Analysis of target '//projects/my-python-app:main' failed; build aborted: </code></pre> <p>In my <code>WORKSPACE</code> I have:</p> <pre><code>load(&quot;@bazel_tools//tools/build_defs/repo:http.bzl&quot;, &quot;http_archive&quot;) rules_python_version = &quot;740825b7f74930c62f44af95c9a4c1bd428d2c53&quot; # Latest @ 2021-06-23 http_archive( name = &quot;rules_python&quot;, # Bazel will print the proper value to add here during the first build. # sha256 = &quot;FIXME&quot;, strip_prefix = &quot;rules_python-{}&quot;.format(rules_python_version), url = &quot;https://github.com/bazelbuild/rules_python/archive/{}.zip&quot;.format(rules_python_version), ) load(&quot;@rules_python//python:pip.bzl&quot;, &quot;pip_parse&quot;) # Create a central repo that knows about the dependencies needed from # requirements_lock.txt. pip_parse( name = &quot;python_deps&quot;, requirements_lock = &quot;//third_party:requirements_lock.txt&quot;, ) # Load the starlark macro which will define your dependencies. load(&quot;@python_deps//:requirements.bzl&quot;, &quot;install_deps&quot;) # Call it to define repos for your requirements. 
install_deps() </code></pre> <p>In my <code>requirements_lock.txt</code> I have:</p> <pre><code>Flask==2.0.2 </code></pre> <p>In my <code>BUILD</code> I have:</p> <pre><code>load(&quot;@python_deps//:requirements.bzl&quot;, &quot;requirement&quot;) py_binary( name = &quot;main&quot;, srcs = [&quot;main.py&quot;], deps = [ &quot;//projects/calculator:calculator&quot;, requirement(&quot;Flask&quot;), ], ) </code></pre> <p>Any ideas what's going on here? I was following the README in <a href="https://github.com/bazelbuild/rules_python" rel="nofollow noreferrer">https://github.com/bazelbuild/rules_python</a></p>
<python><bazel><bazel-rules>
2023-03-24 00:09:13
2
1,828
jlcv
75,829,013
250,161
Python Venv in a Docker running as root
<p>I am editing the question because you did not understand the problem. And the problem still exists.</p> <p>How do I install, via pip3, modules into a Docker container when those modules are not installed on the system? When I do that, it says it is a managed environment and does not proceed, <strong>BUT</strong> there is no Debian package that installs that module. So how do I get the modules on there?</p> <p>###############################################################</p> <p>I will start off by saying I dabble in Python, but Python has all the things I need. I deploy my scripts in containers and run them from there.</p> <p>So recently, in a Dockerfile based on debian:bookworm, I get an error while installing modules using pip. All the articles say I have to install the modules via apt. Fine, but some of the modules I need do not have a package. So I am screwed. They say I should use a venv, but I do not know how that works or how it applies to a container.</p> <p>My container is using the root user.</p> <p>So how do I use a venv inside a container so that the entrypoint can use it?</p> <p>Thanks</p>
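For context, a common pattern (this is a sketch, and the package name and script path are placeholders, not taken from the question) is to create the venv at image build time and put its `bin` directory first on `PATH`, so every later `pip`/`python` call, including the `ENTRYPOINT`, uses the venv. Running as root does not change this:

```dockerfile
FROM debian:bookworm

# python3-venv provides the venv module on Debian
RUN apt-get update && apt-get install -y --no-install-recommends python3-venv \
    && rm -rf /var/lib/apt/lists/*

# create the virtual environment once, at build time
RUN python3 -m venv /opt/venv

# with the venv first on PATH, `pip` and `python` below resolve to the
# venv copies, sidestepping the externally-managed-environment error
# on the system Python
ENV PATH="/opt/venv/bin:$PATH"

# installs go into /opt/venv, not the system site-packages
# (placeholder package name)
RUN pip install --no-cache-dir some-module-without-a-deb-package

# placeholder script path; it runs with the venv's interpreter
ENTRYPOINT ["python", "/app/your_script.py"]
```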
<python><docker><pip><python-venv>
2023-03-24 00:08:57
0
1,362
Bodger
75,828,973
6,683,107
mypy: Missing type parameters for generic type "Connection" [type-arg] when type hinting pymysql.connections.Connection
<p>mypy is saying &quot;Missing type parameters for generic type &quot;Connection&quot; [type-arg]&quot; when I type hint these pymysql Connection types:</p> <pre><code>import typing from dataclasses import dataclass import pymysql from loguru import logger @dataclass class ConnectionDetails: &quot;&quot;&quot;Database connection details container class.&quot;&quot;&quot; host: str port: int database: str username: str password: str def connect(connection_details: ConnectionDetails) -&gt; pymysql.connections.Connection: &quot;&quot;&quot;Attempt to connect to a database.&quot;&quot;&quot; try: return pymysql.connect( host=connection_details.host, port=connection_details.port, database=connection_details.database, user=connection_details.username, password=connection_details.password, cursorclass=pymysql.cursors.DictCursor, ) except pymysql.OperationalError as exc: logger.error(f&quot;Connection failed: {str(exc.args[1])}&quot;) raise def select(sql: str, connection: pymysql.connections.Connection) -&gt; list[dict[str, typing.Any]] | None: &quot;&quot;&quot;Select data from the database connection.&quot;&quot;&quot; with connection: with connection.cursor() as cursor: try: # Execute the query cursor.execute(sql) except pymysql.err.OperationalError as exc: logger.error(f&quot;SQL error: {str(exc.args[1])}&quot;) return None try: # Fetch the result result = cursor.fetchall() if result: return list(result) return None except pymysql.err.ProgrammingError as exc: logger.error(f&quot;SQL error: {str(exc.args[1])}&quot;) return None </code></pre> <p>Inspection of the return type of the <code>connect</code> function suggests <code>Connection[Cursor]</code>, but I can't subscript <code>Connection</code>. I haven't been able to find much help in the <a href="https://github.com/python/typeshed/tree/main/stubs/PyMySQL/pymysql" rel="nofollow noreferrer">typeshed</a> either.</p> <p>Does anyone know how I might make mypy happy here?</p>
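In the typeshed stubs, `Connection` is generic over its cursor class, so mypy wants a type parameter, e.g. `pymysql.connections.Connection[DictCursor]` to match `cursorclass=pymysql.cursors.DictCursor`. If subscripting the real runtime class ever fails, `from __future__ import annotations` (or quoting the annotation) avoids evaluating it. The snippet below demonstrates that pattern with stand-in classes rather than pymysql itself, so it runs without the library installed; it is a sketch of the mechanism, not pymysql's actual class hierarchy:

```python
from __future__ import annotations  # annotations become strings, never evaluated at runtime

from typing import Generic, TypeVar

CursorT = TypeVar("CursorT")


class Cursor: ...
class DictCursor(Cursor): ...


class Connection(Generic[CursorT]):
    """Stand-in for pymysql.connections.Connection, which the typeshed
    stubs declare as generic over the cursor class."""


def connect() -> Connection[DictCursor]:
    # Thanks to the __future__ import, the subscripted annotation above
    # is only a string at runtime, so it would not matter even if the
    # real class lacked __class_getitem__.
    return Connection()


conn = connect()
```

With the real library, the same shape would be `def connect(...) -> pymysql.connections.Connection[pymysql.cursors.DictCursor]:` under `from __future__ import annotations`.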
<python><mypy><pymysql>
2023-03-24 00:00:52
0
840
bearoplane
75,828,902
6,875,778
How do I get the index of the array for a job in a batch job that was started using a map distribution in a step function?
<p>I have a script in batch job that gets the index of the batch array using the environment variable like this:</p> <pre><code>import os # get index of array int_idx_array = int(os.environ['AWS_BATCH_JOB_ARRAY_INDEX']) print(f'Array index: {int_idx_array}') </code></pre> <p>This works perfectly unless I run the batch job in a step function using a map distribution in which I get an error because that environment variable does not exist.</p> <p>My step function is able to start the necessary number of batch jobs but they all fail due to this exception.</p> <p>Here is my definition:</p> <pre><code>{ &quot;Comment&quot;: &quot;Shared Jobs State Machine&quot;, &quot;StartAt&quot;: &quot;GetListOfFeatures&quot;, &quot;States&quot;: { &quot;GetListOfFeatures&quot;: { &quot;Type&quot;: &quot;Task&quot;, &quot;Resource&quot;: &quot;arn:aws:lambda:&lt;REGION&gt;:&lt;ACCOUNT&gt;:function:lambda-list-starting-feats-function-ae&quot;, &quot;Next&quot;: &quot;Tuning&quot; }, &quot;Tuning&quot;: { &quot;Type&quot;: &quot;Task&quot;, &quot;Resource&quot;: &quot;arn:aws:states:::batch:submitJob.sync&quot;, &quot;Parameters&quot;: { &quot;JobQueue&quot;: &quot;arn:aws:batch:&lt;REGION&gt;:&lt;ACCOUNT&gt;:job-queue/queue-batch-tuning-ae-image-1&quot;, &quot;JobDefinition&quot;: &quot;arn:aws:batch:&lt;REGION&gt;:&lt;ACCOUNT&gt;:job-definition/job-def-batch-tuning-ae-image-1:1&quot;, &quot;JobName&quot;: &quot;job-name-batch-tuning-ae-image-1&quot;, &quot;ArrayProperties&quot;: { &quot;Size&quot;: 10 } }, &quot;Next&quot;: &quot;ConcatTuning&quot; }, &quot;ConcatTuning&quot;: { &quot;Type&quot;: &quot;Task&quot;, &quot;Resource&quot;: &quot;arn:aws:states:::batch:submitJob.sync&quot;, &quot;Parameters&quot;: { &quot;JobQueue&quot;: &quot;arn:aws:batch:&lt;REGION&gt;:&lt;ACCOUNT&gt;:job-queue/queue-batch-concattuning-ae-image-1&quot;, &quot;JobDefinition&quot;: &quot;arn:aws:batch:&lt;REGION&gt;:&lt;ACCOUNT&gt;:job-definition/job-def-batch-concattuning-ae-image-1:1&quot;, 
&quot;JobName&quot;: &quot;job-name-batch-concattuning-ae-image-1&quot;, &quot;ArrayProperties&quot;: { &quot;Size&quot;: 10 } }, &quot;Next&quot;: &quot;Map&quot; }, &quot;Map&quot;: { &quot;Type&quot;: &quot;Map&quot;, &quot;ItemProcessor&quot;: { &quot;ProcessorConfig&quot;: { &quot;Mode&quot;: &quot;DISTRIBUTED&quot;, &quot;ExecutionType&quot;: &quot;STANDARD&quot; }, &quot;StartAt&quot;: &quot;SensitivityAnalysis&quot;, &quot;States&quot;: { &quot;SensitivityAnalysis&quot;: { &quot;Type&quot;: &quot;Task&quot;, &quot;Resource&quot;: &quot;arn:aws:states:::batch:submitJob.sync&quot;, &quot;Parameters&quot;: { &quot;JobDefinition&quot;: &quot;arn:aws:batch:&lt;REGION&gt;:&lt;ACCOUNT&gt;:job-definition/job-def-batch-sensitivity-ae-image-5:1&quot;, &quot;JobQueue&quot;: &quot;arn:aws:batch:&lt;REGION&gt;:&lt;ACCOUNT&gt;:job-queue/queue-batch-sensitivity-ae-image-5&quot;, &quot;JobName&quot;: &quot;job-name-batch-sensitivity-ae-image-1&quot; }, &quot;End&quot;: true } } }, &quot;End&quot;: true, &quot;Label&quot;: &quot;Map&quot;, &quot;MaxConcurrency&quot;: 1000, &quot;ItemReader&quot;: { &quot;Resource&quot;: &quot;arn:aws:states:::s3:getObject&quot;, &quot;ReaderConfig&quot;: { &quot;InputType&quot;: &quot;CSV&quot;, &quot;CSVHeaderLocation&quot;: &quot;FIRST_ROW&quot; }, &quot;Parameters&quot;: { &quot;Bucket&quot;: &quot;20230321-step-functions-poc&quot;, &quot;Key&quot;: &quot;ae/01_list_of_starting_features/df_cols_in_model.csv&quot; } } } } } </code></pre> <p>I am reading a .csv file from s3 with the number of rows being equal to the number of batch jobs that need to be ran. How do I access the index information from this .csv file?</p> <p>I also have it saved as a json with the keys being a string index (i.e., &quot;0&quot;). Is it easier to use this?</p>
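Each job submitted from inside the Map state is a plain (non-array) submission, so `AWS_BATCH_JOB_ARRAY_INDEX` is simply never set there. One hedged workaround is to pass the item index (or the CSV row itself) into the job, for example via `ContainerOverrides.Environment` or `Parameters` in the `SubmitJob` call, and read the environment defensively. The variable name `ITEM_INDEX` below is an assumption for illustration, not something AWS sets:

```python
import os
import sys


def get_job_index() -> int:
    """Return the work-item index for this job.

    Prefers AWS_BATCH_JOB_ARRAY_INDEX (set only for array jobs); falls
    back to a custom variable such as ITEM_INDEX that a Step Functions
    Map state could inject via ContainerOverrides.Environment, then to
    a command-line argument, and finally to 0.
    """
    for var in ("AWS_BATCH_JOB_ARRAY_INDEX", "ITEM_INDEX"):
        value = os.environ.get(var)
        if value is not None:
            return int(value)
    if len(sys.argv) > 1:
        return int(sys.argv[1])
    return 0
```

With the JSON variant keyed by string indices ("0", "1", ...), the same index value can be used directly as the lookup key.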
<python><amazon-web-services><aws-lambda><aws-step-functions><aws-batch>
2023-03-23 23:45:52
1
1,283
Aaron England
75,828,899
11,922,765
Python: how to count the items of a nested list of lists containing strings
<p>I have a nested list of lists. I want to know the total number of items in the main list and the sublists. Then, accordingly, I want to choose a value for each sublist or item.</p> <p>My code:</p> <pre><code>y = ['ab',['bc','cd'],'ef'] print([len(y[i]) for i in range(len(y))]) alpha=0.5 plot_alpha = [alpha for i in range(len(y)) if len(y[i])&gt;1 else 0.5] print(plot_alpha) </code></pre> <p>My present answer:</p> <pre><code>[2, 2, 2] [0.5, 0.5, 0.5] </code></pre> <p>Expected answer:</p> <pre><code>[1, 2, 1] [1, 0.5, 1] </code></pre>
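If the intended rule is "a bare string counts as one item, a sublist counts as its length" (that reading is an assumption, but it matches the expected answer), an `isinstance` check gives exactly the expected output, since `len('ab')` is 2 while the string should count as a single item:

```python
y = ['ab', ['bc', 'cd'], 'ef']
alpha = 0.5

# a sublist contributes its length; a plain string counts as one item
counts = [len(item) if isinstance(item, list) else 1 for item in y]

# reduced alpha only for the grouped (sublist) entries
plot_alpha = [alpha if isinstance(item, list) else 1 for item in y]

print(counts)      # [1, 2, 1]
print(plot_alpha)  # [1, 0.5, 1]
```

Note also that `[x for i in ... if cond else y]` is a syntax error; the conditional expression must come before the `for`, as above.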
<python><arrays><list><numpy><arraylist>
2023-03-23 23:45:02
1
4,702
Mainland
75,828,787
6,676,101
How do you compute a regular expression which is the intersection of two other regular expressions?
<p>Consider the following two regular expressions:</p> <blockquote> <ol> <li><code>[ab][0123]</code></li> <li><code>[bc][2345]</code></li> </ol> </blockquote> <p>What function will output the intersection of the two inputs without using <code>?=</code></p> <p>The following is not an acceptable output for our applications:</p> <pre class="lang-python prettyprint-override"><code>(?=[ab][0123])(?=[bc][2345]) </code></pre> <p>For my example, the correct output would be <code>[b][23]</code></p> <p>feel free to assume that we only use character classes and quantifiers with no negative look-behinds or fancy things.</p>
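For the restricted grammar described (a plain sequence of character classes, no quantifiers, no lookarounds), the intersection can be computed positionally: parse each pattern into a list of character sets, intersect position by position, and re-emit the classes. A sketch under exactly that assumption:

```python
import re


def intersect_classes(p1: str, p2: str) -> str:
    """Intersect two regexes that are plain sequences of character classes.

    A bare character like 'a' is treated as the one-element class [a].
    Only works for the restricted grammar assumed here: no quantifiers,
    ranges, negation, or lookarounds, and both patterns must have the
    same number of positions."""

    def parse(pattern):
        # each token is either a [....] class or a single literal character
        return [set(tok[1:-1]) if tok.startswith('[') else {tok}
                for tok in re.findall(r'\[[^\]]*\]|.', pattern)]

    c1, c2 = parse(p1), parse(p2)
    if len(c1) != len(c2):
        raise ValueError("patterns must have the same number of positions")

    out = []
    for a, b in zip(c1, c2):
        common = a & b
        if not common:
            raise ValueError("intersection is empty at some position")
        out.append('[' + ''.join(sorted(common)) + ']')
    return ''.join(out)


print(intersect_classes('[ab][0123]', '[bc][2345]'))  # [b][23]
```

Extending this to quantifiers would require full product-automaton intersection of the two regular languages, which is a much larger job.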
<python><string>
2023-03-23 23:21:41
1
4,700
Toothpick Anemone
75,828,625
525,865
'NoneType' object is not subscriptable: bs4 task fails permanently
<p><strong>update:</strong> tried the scripts of Driftr95 .. in google-colab - and got some questions - the scripts failed - and was not succesful - queston. at the beginning of the scripts i have noticed that some lines are commendted out. why is this so. i will try to investigate more - meanwhile thanks alot..-. awesome.</p> <p>two ideas that come up to mind: a. the whole site of a result-page contains even more data: see</p> <p>see the results of one (of 700 ) pages:</p> <p>the digital innovation hub: <strong>4PDIH - Public Private People Partnership Digital Innovation Hub</strong></p> <p><a href="https://s3platform.jrc.ec.europa.eu/digital-innovation-hubs-tool/-/dih/17265/view?_eu_europa_ec_jrc_dih_web_DihWebPortlet_backUrl=%2Fdigital-innovation-hubs-tool" rel="nofollow noreferrer">https://s3platform.jrc.ec.europa.eu/digital-innovation-hubs-tool/-/dih/17265/view?_eu_europa_ec_jrc_dih_web_DihWebPortlet_backUrl=%2Fdigital-innovation-hubs-tool</a></p> <p>the dataset with the categories:</p> <pre><code>Hub Information Description Contact Data Organisation Technologies Market and Services Service Examples Funding Customers Partners </code></pre> <p><a href="https://i.sstatic.net/DEX4K.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DEX4K.jpg" alt="enter image description here" /></a></p> <p>second idea: with the awesme-scripts:</p> <pre><code>NameError(&quot;name 'pd' is not defined&quot;) from https://s3platform.jrc.ec.europa.eu/digital-innovation-hubs-tool?_eu_europa_ec_jrc_dih_web_DihWebPortlet_cur=1 --------------------------------------------------------------------------- NameError Traceback (most recent call last) &lt;ipython-input-3-c1f39e3c2547&gt; in &lt;module&gt; 11 12 # pd.concat(dfList).to_csv(output_fp, index=False) ## save without page numbers ---&gt; 13 df=pd.concat(dfList, keys=list(range(1,len(dfList)+1)),names=['from_pg','pgi']) 14 df.reset_index().drop('pgi',axis='columns').to_csv(output_fp, index=False) NameError: name 'pd' is 
not defined </code></pre> <p>and besides that - in the next trial</p> <pre><code>NameError Traceback (most recent call last) &lt;ipython-input-5-538670405002&gt; in &lt;module&gt; 1 # df = pd.concat(dfList..... ----&gt; 2 orig_cols = list(df.columns) 3 for ocn in orig_cols: 4 if any(vals:=[cv for cv,*_ in df[ocn]]): df[ocn[0]] = vals 5 if any(links:=[c[1] for c in df[ocn]]): df[ocn[0].split()[0]+' Links'] = links NameError: name 'df' is not defined </code></pre> <p>and the next trail</p> <pre><code>NameError Traceback (most recent call last) &lt;ipython-input-1-4a00208c3fe6&gt; in &lt;module&gt; 10 pg_num += 1 11 if isinstance(max_pg, int) and pg_num&gt;max_pg: break ---&gt; 12 pgSoup = BeautifulSoup((pgReq:=requests.get(next_link)).content, 'lxml') 13 rows = pgSoup.select('tr:has(td[data-ecl-table-header])') 14 all_rows += [{'from_pg': pg_num, **get_row_dict(r)} for r in rows] NameError: name 'BeautifulSoup' is not defined </code></pre> <p>end of the update:</p> <p>the <strong>full story:</strong></p> <p>I am currently trying to learn Beautiful Soup (BS4), starting with fetching data.</p> <p>With a scraper that should work with beautiful soup and scrapes the dataset of <a href="https://s3platform.jrc.ec.europa.eu/digital-innovation-hubs-tool" rel="nofollow noreferrer">this</a> page and puts the data into a csv format or uses pandas. If I run this in Google colab - I am facing some weird issues. 
See below:</p> <pre><code>import requests from bs4 import BeautifulSoup import pandas as pd # Make a request to the webpage url = 'https://s3platform.jrc.ec.europa.eu/digital-innovation-hubs-tool' response = requests.get(url) # Parse the HTML content with Beautiful Soup soup = BeautifulSoup(response.content, 'html.parser') # Find the table with the data table = soup.find('table') # Extract the table headers headers = [] for th in table.find_all('th'): headers.append(th.text.strip()) # Extract the table rows rows = [] for tr in table.find_all('tr')[1:]: row = [] for td in tr.find_all('td'): row.append(td.text.strip()) rows.append(row) # Find the total number of pages num_pages = soup.find('input', {'id': 'paginationPagesNum'})['value'] # Loop through each page and extract the data for page in range(2, int(num_pages) + 1): # Make a request to the next page page_url = f'{url}?page={page}' page_response = requests.get(page_url) # Parse the HTML content with Beautiful Soup page_soup = BeautifulSoup(page_response.content, 'html.parser') # Find the table with the data page_table = page_soup.find('table') # Extract the table rows for tr in page_table.find_all('tr')[1:]: row = [] for td in tr.find_all('td'): row.append(td.text.strip()) rows.append(row) # Create a Pandas DataFrame with the data df = pd.DataFrame(rows, columns=headers) # Save the DataFrame to a CSV file df.to_csv('digital-innovation-hubs.csv', index=False) </code></pre> <p>see what i am getting back - if i run this in google-colab</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-1-f87e37f02fde&gt; in &lt;module&gt; 27 28 # Find the total number of pages ---&gt; 29 num_pages = soup.find('input', {'id': 'paginationPagesNum'})['value'] 30 31 # Loop through each page and extract the data TypeError: 'NoneType' object is not subscriptable </code></pre> <p>update: see what is gotten back:</p> <p><a 
href="https://i.sstatic.net/qbKXY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qbKXY.png" alt="enter image description here" /></a></p> <p>Due to the help of riley.johnson3 I found out that the pagination wrapper should be fixed.</p> <ul> <li>Awesome, many thanks for the quick help and the explanation - I have gathered a set of data - it's a sample. I have to find out now how to get the full set of data - all the 700 records, with all the data. I guess that we are almost there. Again, many thanks for your outstanding help. This is great and appreciated a lot... ;)</li> </ul>
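Whatever the right selector turns out to be, the original `TypeError` just means `soup.find('input', {'id': 'paginationPagesNum'})` returned `None` (nothing matched), and subscripting `None['value']` then fails. A defensive pattern, shown here without bs4 since `None` is exactly what `find()` hands back on a miss, is to check before subscripting:

```python
# stand-in for: pagination = soup.find('input', {'id': 'paginationPagesNum'})
pagination = None  # what BeautifulSoup's find() returns when nothing matches

if pagination is not None:
    num_pages = int(pagination['value'])
else:
    # fall back to a single page instead of crashing; the real fix is
    # a selector that actually matches an element on the page
    num_pages = 1

print(num_pages)  # 1
```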
<python><pandas><csv><web-scraping><beautifulsoup>
2023-03-23 22:50:49
2
1,223
zero
75,828,595
15,257,122
How do I precompile Python code on a Mac?
<p>I am using a Mac M2 with Ventura, and the answers I have found on other webpages / ChatGPT / Bard tell me I can do either:</p> <ol> <li>Run this as a Python program, using <code>python3 do_it_foo.py</code></li> </ol> <pre class="lang-py prettyprint-override"><code>import py_compile py_compile.compile('abc.py') </code></pre> <ol start="2"> <li>Or, in Bash, do</li> </ol> <pre class="lang-bash prettyprint-override"><code>python3 -m py_compile abc.py </code></pre> <p>and an <code>abc.pyc</code> should be created. And then just run <code>python3 abc.pyc</code>.</p> <p>But neither of the methods above produces a <code>.pyc</code> file, and running it shows:</p> <pre class="lang-bash prettyprint-override"><code>$ python3 abc.pyc /Library/Developer/CommandLineTools/usr/bin/python3: can't open file '/Users/peterpeter/try/TryHowLong/abc.pyc': [Errno 2] No such file or directory </code></pre> <p>How can it be done?</p>
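The compilation very likely did succeed: by default `py_compile` writes the bytecode to `__pycache__/abc.cpython-XY.pyc` next to the source, not to `abc.pyc`, so the expected file never appears in the working directory. The `cfile=` argument pins the output path. A self-contained sketch (it creates its own throwaway module named `demo.py` rather than `abc.py`):

```python
import os
import py_compile
import tempfile

# create a throwaway module to compile (stands in for abc.py)
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "demo.py")
with open(src, "w") as f:
    f.write("print('hello')\n")

# default location: __pycache__/demo.cpython-XY.pyc next to the source
default_pyc = py_compile.compile(src)
print(default_pyc)  # e.g. .../__pycache__/demo.cpython-311.pyc (version-dependent)

# explicit location: exactly the demo.pyc the shell command was expected to make
explicit_pyc = py_compile.compile(src, cfile=os.path.join(workdir, "demo.pyc"))
print(explicit_pyc)
```

`python3 demo.pyc` then runs the explicit file. As an aside, naming a module `abc.py` shadows the standard-library `abc` module, which is worth avoiding regardless.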
<python><python-3.x>
2023-03-23 22:45:44
0
787
Stefanie Gauss
75,828,524
525,865
Iterating over a set of data - fetching data from a site with Beautiful Soup and pandas, and storing it subsequently
<p>I am currently working on a script to scrape the data from the given website using Beautiful Soup and store it in CSV format: the site 'https://schulen.bildung-rp.de' - a governmental site that lists schools.</p> <p><strong>update:</strong> see a result-page here.</p> <p><a href="https://i.sstatic.net/EAkNT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EAkNT.png" alt="enter image description here" /></a></p> <p>with the following dataset:</p> <pre><code>Kurzbezeichnung: GY Ludwigshafen Carl-Bosch Schulnummer: 50360 Anschrift: Carl-Bosch-Gymnasium Ludwigshafen Jägerstr. 9 67059 Ludwigshafen am Rhein Telefon: (0621)504430810 Telefax: (0621)504430898 E-Mail: cbg(at)cbglu.de Internet: http://www.cbglu.de Träger: Stadtverwaltung Ludwigshafen am Rhein letzte Änderung: 15 Nov 2012 14:39:15 von 50360 </code></pre> <p>I am looking for the following things (categories) in the dataset:</p> <pre><code>Kurzbezeichnung: Schulnummer: Anschrift: Telefon: Telefax: E-Mail: Internet: Träger: </code></pre> <p>Subsequently I have to correct the code.</p> <p>Here's an example code snippet that I am using as a starting point:</p> <p>What is aimed for: this code should extract data from the table on the above-given website and store it in both a CSV file (schools.csv) and a pandas dataframe (schools_pandas.csv). Note that the table on the website contains seven columns, i.e. fieldnames = ['Kurzbezeichnung','Schulnummer','Anschrift', 'Telefon', 'Telefax', 'E-Mail','Internet', 'Träger']. ... but</p>
<pre><code>import requests from bs4 import BeautifulSoup import csv import pandas as pd # send a GET request to the target URL url = 'https://schulen.bildung-rp.de' response = requests.get(url) # parse the HTML content using Beautiful Soup soup = BeautifulSoup(response.content, 'html.parser') # find the table element containing the desired data table = soup.find('table') # extract the data from the table and store it in a list of dictionaries data = [] for row in table.find_all('tr'): cols = row.find_all('td') if len(cols) == 7: # extract the desired columns and store them in a dictionary Kurzbezeichnung = cols[0].text.strip() Anschrift = cols[1].text.strip() Telefon = cols[2].text.strip() Telefax = cols[3].text.strip() E_Mail = cols[4].text.strip() Internet = cols[5].text.strip() Traeger = cols[6].text.strip() data.append({'Kurzbezeichnung': Kurzbezeichnung, 'Anschrift': Anschrift, 'Telefon': Telefon, 'Telefax': Telefax, 'E-Mail': E_Mail, 'Internet': Internet, 'Träger': Traeger}) # store the data in a CSV file with open('schools.csv', 'w', newline='') as csvfile: fieldnames = ['Kurzbezeichnung','Schulnummer','Anschrift', 'Telefon', 'Telefax', 'E-Mail','Internet', 'Träger'] writer = csv.DictWriter(csvfile, fieldnames=fieldnames) writer.writeheader() for item in data: writer.writerow(item) # alternatively, load the data into a pandas dataframe df = pd.DataFrame(data) df.to_csv('schools_pandas.csv', index=False) </code></pre> <p>It does not store the data, but I think the table structure is correctly &quot;reworked and treated&quot; in the code.</p> <p>I think the code is pretty complete, but I may need to modify it accordingly a bit...</p>
<python><pandas><csv><beautifulsoup>
2023-03-23 22:32:05
2
1,223
zero
75,828,373
2,256,085
How to rotate a stacked area plot
<p>I'd like to re-orient a stacked-area plot such that the stacked areas are stacked horizontally rather than vertically (from a viewer's perspective). So here is what a typical stacked-area plot looks like:</p> <pre><code># imports import matplotlib.pyplot as plt from matplotlib.transforms import Affine2D from matplotlib.collections import PathCollection # Create data x=range(1,6) y1=[1,4,6,8,9] y2=[2,2,7,10,12] y3=[2,8,5,10,6] # Basic stacked area chart. ax = plt.gca() ax.stackplot(x,y1, y2, y3, labels=['A','B','C']) ax.legend(loc='upper left') plt.show() </code></pre> <p>Next, I'd like to take that entire plot and rotate it 90 degrees, which I attempted to do using the accepted answer <a href="https://stackoverflow.com/questions/43892973/transform-entire-axes-or-scatter-plot-in-matplotlib/43915452#43915452">found here</a>. However, running the code that follows, the contents of the plot are seemingly lost? Is there a better way to draw a &quot;rotated stacked-area&quot; plot?</p> <p>Here is the code I attempted to use:</p> <pre><code># Attempt to re-orient plot ax = plt.gca() ax.stackplot(x,y1, y2, y3, labels=['A','B','C']) ax.legend(loc='upper left') r = Affine2D().rotate_deg(90) for x in ax.images + ax.lines + ax.collections: trans = x.get_transform() x.set_transform(r+trans) if isinstance(x, PathCollection): transoff = x.get_offset_transform() x._transOffset = r+transoff old = ax.axis() ax.axis(old[2:4] + old[0:2]) plt.show() </code></pre> <p>If it is possible, I'd also like to apply <code>plt.gca().invert_yaxis()</code> after rotating in order to reverse the values plotted on the y-axis (formerly the x-axis).</p>
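Rather than rotating a finished Axes (which, as seen, fights the transform machinery), it is usually simpler to draw the stack horizontally in the first place: compute the cumulative sums yourself and draw each layer with `ax.fill_betweenx`, putting the former x-values on the vertical axis. The matplotlib call is left as a comment so the sketch stays dependency-free; `ax.invert_yaxis()` afterwards reverses the new y-axis as desired:

```python
x = list(range(1, 6))
series = [
    [1, 4, 6, 8, 9],    # y1
    [2, 2, 7, 10, 12],  # y2
    [2, 8, 5, 10, 6],   # y3
]

# running totals: bounds[k] is the left edge of layer k, bounds[k+1] its right edge
bounds = [[0] * len(series[0])]
for s in series:
    bounds.append([b + v for b, v in zip(bounds[-1], s)])

# with matplotlib this becomes, for each layer k:
#     ax.fill_betweenx(x, bounds[k], bounds[k + 1], label=...)
# followed by ax.invert_yaxis() if the former x-axis should run downwards
for left, right in zip(bounds, bounds[1:]):
    print(left, right)
```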
<python><matplotlib>
2023-03-23 22:05:29
1
469
user2256085
75,828,370
21,420,742
Conditionally Fill a column with a single value in Python
<p>I have a dataset where I am looking to see if someone has left their job title to start a new job. The way I have decided to represent this is: I have taken the column 'Job' and made a new column named 'Latest_Job', which is propagated to all the rows of their history. I compare the two to see when or if a change has occurred. The issue I am experiencing is that I want to populate a new column 'Switch Jobs' with 'Yes' or 'No' depending on whether the person switched or not. Here is an example of what I have.</p> <pre><code>ID Job Latest_Job Switch Jobs 1 Sales Sales No 1 Sales Sales No 2 Tech Advisor Yes 2 Tech Advisor Yes 2 Advisor Advisor No 2 Advisor Advisor No 3 Sales Manager Yes 3 Manager Manager No 3 Manager Manager No </code></pre> <p>The problem I am having is that I would like to see a 'Yes' in the 'Switch Jobs' column for every row of an ID if there was a change for that ID, like this:</p> <pre><code>ID Job Latest_Job Switch Jobs 1 Sales Sales No 1 Sales Sales No 2 Tech Advisor Yes 2 Tech Advisor Yes 2 Advisor Advisor Yes 2 Advisor Advisor Yes 3 Sales Manager Yes 3 Manager Manager Yes 3 Manager Manager Yes </code></pre> <p>The code I tried for changing the values was this:</p> <pre><code>if df['Switch Jobs'] == 'Yes': df['Switch Jobs'].groupby('ID')['Switch Jobs'].replace('No', 'Yes') </code></pre> <p>This line, however, threw a <strong>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</strong> Any suggestions to fix this?</p>
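The ValueError comes from putting a whole Series into a plain `if`; element-wise comparison plus a grouped transform avoids boolean evaluation entirely. One pandas sketch (the sample frame from the question is reconstructed inline):

```python
import pandas as pd

df = pd.DataFrame({
    "ID":  [1, 1, 2, 2, 2, 2, 3, 3, 3],
    "Job": ["Sales", "Sales", "Tech", "Tech", "Advisor", "Advisor",
            "Sales", "Manager", "Manager"],
    "Latest_Job": ["Sales", "Sales", "Advisor", "Advisor", "Advisor",
                   "Advisor", "Manager", "Manager", "Manager"],
})

# element-wise: did this row's job differ from the latest job?
changed = df["Job"].ne(df["Latest_Job"])

# broadcast "any change within this ID" back onto every row of that ID
df["Switch Jobs"] = (changed.groupby(df["ID"]).transform("any")
                     .map({True: "Yes", False: "No"}))

print(df)
```

`transform("any")` keeps the original row alignment, which is what lets a single per-ID answer fill every row of that ID.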
<python><python-3.x><pandas><dataframe><numpy>
2023-03-23 22:04:10
2
473
Coding_Nubie
75,828,345
993,812
Aggregating Columns Returns 'Column not iterable' Error
<p>I'm trying to get percentiles for some columns in a spark dataframe, but the following code returns a &quot;Column not iterable&quot; error.</p> <pre><code>import pyspark.sql.functions as F exprs = {x: F.percentile_approx(x, 0.25) for x in df.columns} df.agg(exprs).show() </code></pre> <p>Any suggestions as to why?</p>
<python><pyspark>
2023-03-23 22:00:57
1
555
John
75,828,279
1,653,273
Polars conversion error going from Python to Rust with pyo3
<p>Related to the solution proposed in <a href="https://stackoverflow.com/questions/75827086/convert-pandas-dataframe-with-dictionary-objects-into-polars-dataframe-with-obje/75827315#75827315">this other question</a>.</p> <p>This code creates a polars dataframe of python dictionaries. In Python land, everything is fine. The code fails when we <code>extract</code> to Rust land.</p> <pre class="lang-rust prettyprint-override"><code>use pyo3::prelude::*; use pyo3_polars::*; let code = r#&quot; import polars as pl import pandas as pd df = pd.DataFrame({ &quot;the_column&quot;: [{ &quot;key&quot; : 123 }, { &quot;foo&quot; : 456 }, { &quot;bar&quot; : 789 }]}) schema = pl.from_pandas(df.head(1)).schema | {&quot;the_column&quot;: pl.Object} result = pl.DataFrame(df.to_dict(&quot;records&quot;), schema=schema) &quot;#; Python::with_gil(|py| { let globals = py.import(&quot;builtins&quot;).unwrap().dict(); py.run(code, Some(globals), None).unwrap(); let result = globals.get_item(&quot;result&quot;).unwrap(); let PyDataFrame(_df) = result.extract().unwrap(); }); </code></pre> <p>The error is:</p> <pre><code>thread '&lt;unnamed&gt;' panicked at 'called `Result::unwrap()` on an `Err` value: InvalidOperation(Owned(&quot;Cannot create polars series from FixedSizeBinary(8) type&quot;))' </code></pre> <p>Polars is in <code>Cargo.toml</code> with:</p> <pre><code>polars = { version = &quot;~0.27.2&quot;, features = [&quot;object&quot;, &quot;dtype-full&quot;] } </code></pre>
<python><dataframe><rust><rust-polars><pyo3>
2023-03-23 21:51:01
0
801
GrantS
75,828,264
5,224,239
Remove palindromic rows based on two columns
<p>I have the following data frame:</p> <pre><code>Data_Frame &lt;- data.frame ( A = c(&quot;a&quot;, &quot;c&quot;, &quot;b&quot;, &quot;e&quot;, &quot;g&quot;, &quot;d&quot;, &quot;f&quot;, &quot;h&quot;), B = c(&quot;b&quot;, &quot;d&quot;, &quot;a&quot;, &quot;f&quot;, &quot;h&quot;, &quot;c&quot;, &quot;e&quot;, &quot;g&quot;), value = c(&quot;0.3&quot;, &quot;0.2&quot;, &quot;0.1&quot;, &quot;0.1&quot;, &quot;0.5&quot;, &quot;0.7&quot;, &quot;0.8&quot;, &quot;0.1&quot;), effect = c(&quot;123&quot;, &quot;345&quot;, &quot;123&quot;, &quot;444&quot;, &quot;123&quot;, &quot;345&quot;, &quot;444&quot;, &quot;123&quot;) ) </code></pre> <p>I want to find rows where the values in columns <code>A</code> and <code>B</code> are palindromic and the value of <code>effect</code> is equal. For example, in the provided data frame, rows 1 and 3, and rows 2 and 6, meet this condition. Then, from each pair of palindromic rows, I want to retain the row with the lowest value in the &quot;value&quot; column.</p> <p>The output should look like this:</p> <pre><code>Data_Frame &lt;- data.frame ( A = c(&quot;c&quot;, &quot;b&quot;, &quot;e&quot;, &quot;h&quot;), B = c(&quot;d&quot;, &quot;a&quot;, &quot;f&quot;, &quot;g&quot;), value = c(&quot;0.2&quot;, &quot;0.1&quot;, &quot;0.1&quot;, &quot;0.1&quot;), effect = c(&quot;345&quot;, &quot;123&quot;, &quot;444&quot;, &quot;123&quot;) ) </code></pre> <p>The <code>levels(Data_Frame$A)</code> and <code>levels(Data_Frame$B)</code> are not equal, and <code>as.character()</code> does not solve my problem.</p> <p>I appreciate any hints in R or Python!</p>
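One pandas sketch (keeping rows that have no palindromic partner is an assumption here; they simply form singleton groups): build an order-independent key from A and B so that (a, b) and (b, a) collapse together, then keep the minimum-value row within each (pair, effect) group:

```python
import pandas as pd

df = pd.DataFrame({
    "A": ["a", "c", "b", "e", "g", "d", "f", "h"],
    "B": ["b", "d", "a", "f", "h", "c", "e", "g"],
    "value": [0.3, 0.2, 0.1, 0.1, 0.5, 0.7, 0.8, 0.1],
    "effect": ["123", "345", "123", "444", "123", "345", "444", "123"],
})

# order-independent pair key: "a-b" for both (a, b) and (b, a)
key = df[["A", "B"]].apply(lambda r: "-".join(sorted(r)), axis=1)

# within each (pair, effect) group, keep the index of the smallest value;
# unpaired rows are singleton groups and survive unchanged
idx = df.groupby([key, df["effect"]])["value"].idxmin()
result = df.loc[sorted(idx)].reset_index(drop=True)

print(result)
```

The values are numeric here; if they arrive as strings (as in the R snippet), convert with `df["value"] = df["value"].astype(float)` first so the minimum is numeric, not lexicographic.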
<python><r><pandas>
2023-03-23 21:49:02
1
447
RJF
75,828,200
1,717,414
pip install <.whl file> installs dist_info but doesn't actually install the package
<p>I'm trying change my package over from using <code>setup.py</code> to using <code>pyproject.toml</code> and <code>setup.cfg</code>.</p> <p>My <code>setup.cfg</code> is roughly as follows:</p> <pre><code>[metadata] name = our_name version = 0.1.1 author = me [options] install_requires = networkx &gt;= 3.0 </code></pre> <p>My code is all under <code>src</code>, and I'm relying on the automatic <code>src-layout</code> discovery mechanism.</p> <p>When I run</p> <pre><code>python -m build </code></pre> <p>it generates a proper wheel file (<code>dist/our_name-0.1.1-py3-none-any.whl</code>). When I unpack that wheel file, everything is in there properly - it seems to have discovered all my code correctly.</p> <p>However, when I install it (<code>pip install dist/our_name*.whl</code>), only the distribution info directory shows up in <code>site-packages</code>. Because the dist info is there, <code>pip list</code> shows it as being there, but because the actual module directory is missing, I can't import anything from my module elsewhere, or do anything with it.</p> <p>Any clue what I'm doing wrong?</p> <p>Addendum: Per sinoroc's comment: <code>pyproject.toml</code>:</p> <pre><code>[build-system] requires = [&quot;setuptools&quot;] build-backend = &quot;setuptools.build_meta&quot; </code></pre> <p>project structure:</p> <pre><code>README.md requirements.txt src __init__.py module_1 __init__.py foo.py module_2 __init__.py bar.py </code></pre> <p>Wheel contents:</p> <pre><code>__init.py__ our_name-0.1.1.dist-info METADATA RECORD top_level.txt WHEEL module_1 __init__.py foo.py module_2 __init__.py bar.py </code></pre>
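One hedged guess from the tree shown: the stray `src/__init__.py` makes setuptools treat `src` itself as a package during automatic discovery, which matches the wheel listing above (an `__init__.py` and the modules sit at the wheel root instead of under a package directory). Deleting `src/__init__.py` is worth trying first; alternatively, discovery can be spelled out explicitly in `setup.cfg`. A sketch, with the package names assumed from the tree:

```ini
[options]
package_dir =
    = src
packages = find:
install_requires =
    networkx >= 3.0

[options.packages.find]
where = src
```

With this (and no `src/__init__.py`), the wheel should contain `module_1/` and `module_2/` as top-level packages, and they land in `site-packages` on install.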
<python><pip><setuptools><python-packaging><python-wheel>
2023-03-23 21:37:53
1
533
Nathan Kronenfeld
75,828,115
4,042,278
How to run sklearn.preprocessing.OrdinalEncoder on several columns?
<p>this code raise error:</p> <pre><code>import pandas as pd from sklearn.compose import ColumnTransformer from sklearn.pipeline import Pipeline from sklearn.preprocessing import OrdinalEncoder # Define categorical columns and mapping dictionary categorical_cols = ['color', 'shape', 'size'] mapping = {'red': 0, 'green': 1, 'blue': 2, 'circle': 0, 'square': 1, 'triangle': 2, 'small': 0, 'medium': 1, 'large': 2} cols = ['color','size'] # Define ColumnTransformer to preprocess categorical columns preprocessor = ColumnTransformer( transformers=[ ('orlEncdr_with_map', Pipeline(steps=[('orlEnc_with_map', OrdinalEncoder(categories=[list(mapping.keys())], dtype=int))]), cols), ]) # Load sample data data = pd.DataFrame({'color': ['red', 'green', 'blue', 'red'], 'shape': ['circle', 'square', 'triangle', 'triangle'], 'size': ['small', 'medium', 'large', 'medium']}) # Apply preprocessor to data preprocessed_data = preprocessor.fit_transform(data) # View preprocessed data print(preprocessed_data) </code></pre> <p>Error:</p> <pre><code>ValueError Traceback (most recent call last) ~\AppData\Local\Temp\ipykernel_38148\1089712396.py in &lt;module&gt; 18 19 # Apply preprocessor to data ---&gt; 20 preprocessed_data = preprocessor.fit_transform(data) 21 22 # View preprocessed data ~\Anaconda3\lib\site-packages\sklearn\compose\_column_transformer.py in fit_transform(self, X, y) 673 self._validate_remainder(X) 674 --&gt; 675 result = self._fit_transform(X, y, _fit_transform_one) 676 677 if not result: ~\Anaconda3\lib\site-packages\sklearn\compose\_column_transformer.py in _fit_transform(self, X, y, func, fitted, column_as_strings) 604 ) 605 try: --&gt; 606 return Parallel(n_jobs=self.n_jobs)( 607 delayed(func)( 608 transformer=clone(trans) if not fitted else trans, ~\Anaconda3\lib\site-packages\joblib\parallel.py in __call__(self, iterable) 1046 # remaining jobs. 
1047 self._iterating = False -&gt; 1048 if self.dispatch_one_batch(iterator): 1049 self._iterating = self._original_iterator is not None 1050 ~\Anaconda3\lib\site-packages\joblib\parallel.py in dispatch_one_batch(self, iterator) 862 return False 863 else: --&gt; 864 self._dispatch(tasks) 865 return True 866 ~\Anaconda3\lib\site-packages\joblib\parallel.py in _dispatch(self, batch) 780 with self._lock: 781 job_idx = len(self._jobs) --&gt; 782 job = self._backend.apply_async(batch, callback=cb) 783 # A job can complete so quickly than its callback is 784 # called before we get here, causing self._jobs to ~\Anaconda3\lib\site-packages\joblib\_parallel_backends.py in apply_async(self, func, callback) 206 def apply_async(self, func, callback=None): 207 &quot;&quot;&quot;Schedule a func to be run&quot;&quot;&quot; --&gt; 208 result = ImmediateResult(func) 209 if callback: 210 callback(result) ~\Anaconda3\lib\site-packages\joblib\_parallel_backends.py in __init__(self, batch) 570 # Don't delay the application, to avoid keeping the input 571 # arguments in memory --&gt; 572 self.results = batch() 573 574 def get(self): ~\Anaconda3\lib\site-packages\joblib\parallel.py in __call__(self) 261 # change the default number of processes to -1 262 with parallel_backend(self._backend, n_jobs=self._n_jobs): --&gt; 263 return [func(*args, **kwargs) 264 for func, args, kwargs in self.items] 265 ~\Anaconda3\lib\site-packages\joblib\parallel.py in &lt;listcomp&gt;(.0) 261 # change the default number of processes to -1 262 with parallel_backend(self._backend, n_jobs=self._n_jobs): --&gt; 263 return [func(*args, **kwargs) 264 for func, args, kwargs in self.items] 265 ~\Anaconda3\lib\site-packages\sklearn\utils\fixes.py in __call__(self, *args, **kwargs) 214 def __call__(self, *args, **kwargs): 215 with config_context(**self.config): --&gt; 216 return self.function(*args, **kwargs) 217 218 ~\Anaconda3\lib\site-packages\sklearn\pipeline.py in _fit_transform_one(transformer, X, y, weight, 
message_clsname, message, **fit_params) 891 with _print_elapsed_time(message_clsname, message): 892 if hasattr(transformer, &quot;fit_transform&quot;): --&gt; 893 res = transformer.fit_transform(X, y, **fit_params) 894 else: 895 res = transformer.fit(X, y, **fit_params).transform(X) ~\Anaconda3\lib\site-packages\sklearn\pipeline.py in fit_transform(self, X, y, **fit_params) 432 fit_params_last_step = fit_params_steps[self.steps[-1][0]] 433 if hasattr(last_step, &quot;fit_transform&quot;): --&gt; 434 return last_step.fit_transform(Xt, y, **fit_params_last_step) 435 else: 436 return last_step.fit(Xt, y, **fit_params_last_step).transform(Xt) ~\Anaconda3\lib\site-packages\sklearn\base.py in fit_transform(self, X, y, **fit_params) 850 if y is None: 851 # fit method of arity 1 (unsupervised transformation) --&gt; 852 return self.fit(X, **fit_params).transform(X) 853 else: 854 # fit method of arity 2 (supervised transformation) ~\Anaconda3\lib\site-packages\sklearn\preprocessing\_encoders.py in fit(self, X, y) 884 885 # `_fit` will only raise an error when `self.handle_unknown=&quot;error&quot;` --&gt; 886 self._fit(X, handle_unknown=self.handle_unknown, force_all_finite=&quot;allow-nan&quot;) 887 888 if self.handle_unknown == &quot;use_encoded_value&quot;: ~\Anaconda3\lib\site-packages\sklearn\preprocessing\_encoders.py in _fit(self, X, handle_unknown, force_all_finite) 82 if self.categories != &quot;auto&quot;: 83 if len(self.categories) != n_features: ---&gt; 84 raise ValueError( 85 &quot;Shape mismatch: if categories is an array,&quot; 86 &quot; it has to be of shape (n_features,).&quot; ValueError: Shape mismatch: if categories is an array, it has to be of shape (n_features,). </code></pre> <p>if you change it in this way it works: <code>cols = ['size']</code></p> <p>How can I change it to works for several columns?</p>
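A note on the shape mismatch: `OrdinalEncoder(categories=...)` expects one list of categories *per encoded column*, not a single flat mapping shared by all columns. A hedged sketch of one possible fix, reusing the question's own data and splitting the mapping into per-column ordered lists:

```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

data = pd.DataFrame({'color': ['red', 'green', 'blue', 'red'],
                     'shape': ['circle', 'square', 'triangle', 'triangle'],
                     'size': ['small', 'medium', 'large', 'medium']})

cols = ['color', 'size']
# One ordered category list per encoded column, not one flat mapping:
categories = [['red', 'green', 'blue'],       # for 'color'
              ['small', 'medium', 'large']]   # for 'size'

enc = OrdinalEncoder(categories=categories, dtype=int)
encoded = enc.fit_transform(data[cols])
print(encoded)
```

The position of each value in its column's list becomes its code, which reproduces the intent of the original `mapping` dict.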
<python><scikit-learn><pipeline>
2023-03-23 21:24:07
2
1,390
parvij
75,828,068
781,938
Does scikit-learn have a method for calculating an "all errors curve" for a binary classifier (TP, FP, TN, FN)?
<p>i'm plotting ROC curves and precision-recall curves to evaluate various classification models for a problem i'm working on. i've noticed that scikit-learn has some nice convenience functions for computing such curves:</p> <ul> <li><a href="https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html" rel="nofollow noreferrer">sklearn.metrics.roc_curve</a> for FPR and TPR</li> <li><a href="https://scikit-learn.org/stable/modules/generated/sklearn.metrics.det_curve.html" rel="nofollow noreferrer">sklearn.metrics.det_curve</a> for FPR and FNR</li> <li><a href="https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_curve.html" rel="nofollow noreferrer">sklearn.metrics.precision_recall_curve</a> for precision and recall</li> </ul> <p>Is there a method (maybe hidden) that calculates all of these in one call? Or maybe that returns counts of TP, TN, FP, and FN (from which one could compute arbitrary metrics) and the associated thresholds?</p> <p>for example,</p> <pre><code>fp, tp, fn, tn, thresholds = sklearn.metrics.errors_curve(y_true, y_score) </code></pre> <p>I could in theory compute precision and recall from the ROC curve (TPR and FPR), because I know the true counts of positives and negatives in my data. But I'd like to use a library to do this so I don't have to worry about the math.</p>
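As far as I can tell there is no single public `errors_curve` helper, but all four counts can be derived from `roc_curve` plus the class totals, since `tpr = tp / P` and `fpr = fp / N`. A sketch of that derivation:

```python
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
P = y_true.sum()          # total positives
N = len(y_true) - P      # total negatives

tp = tpr * P
fp = fpr * N
fn = P - tp
tn = N - fp
```

From these arrays, precision is `tp / (tp + fp)` and recall is `tp / P` at each threshold (guard against division by zero at the first, most-restrictive threshold, where `tp + fp` can be 0).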
<python><machine-learning><scikit-learn><statistics>
2023-03-23 21:17:27
1
6,130
william_grisaitis
75,828,046
3,605,534
How to convert a dataset column of numbers into a NumPy 2D array in Python?
<p>I would like to ask for help. I have a dataset which has a column called pixels (I loaded with Pandas):</p> <p><a href="https://i.sstatic.net/W3LL7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W3LL7.png" alt="enter image description here" /></a></p> <p>I would like to train this image classification dataset. So, I used split function inside apply pandas function</p> <pre><code>dfemotrain['pic']=dfemotrain['pixels'].apply(lambda x: np.array(x.split()).reshape(48, 48)) </code></pre> <p>Later on, I used train_test_split from sklearn</p> <pre><code>X = dfemotrain['pic'].values y = dfemotrain['emotion'].values X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, stratify=y, random_state=42) print(X_train.shape, X_val.shape, y_train.shape, y_val.shape) </code></pre> <p>I got (21750,) (7250,) (21750,) (7250,) and I am not sure how to convert to a numpy array n_samplesx48x48 to input to a Deep Learning model</p> <p><a href="https://i.sstatic.net/5Asx8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5Asx8.png" alt="enter image description here" /></a></p> <p>Please any suggestion to solve this issue. Thanks in advance</p>
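Two things seem to be going on here: `X` is a 1-D object array of per-row arrays (hence the `(21750,)` shape), which `np.stack` can turn into `(n_samples, 48, 48)`; and `np.array(x.split())` produces an array of *strings*, so an explicit dtype is needed before feeding a model. A toy-sized sketch (2x2 "images" standing in for 48x48):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the pixels column (2x2 "images" instead of 48x48)
df = pd.DataFrame({'pixels': ['0 1 2 3', '4 5 6 7']})
df['pic'] = df['pixels'].apply(
    lambda x: np.array(x.split(), dtype='float32').reshape(2, 2))

# Stack the object array of 2-D arrays into one 3-D array
X = np.stack(df['pic'].values)
print(X.shape, X.dtype)   # (2, 2, 2) float32
```

Applied after `train_test_split`, `np.stack(X_train)` should give the `(n_samples, 48, 48)` input a deep-learning framework expects.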
<python><pandas><numpy>
2023-03-23 21:14:11
0
945
GSandro_Strongs
75,828,008
594,376
Docs about boolean scalars in indexing of numpy array
<p>The NumPy's <a href="https://numpy.org/doc/stable/user/basics.indexing.html" rel="nofollow noreferrer">array indexing</a> documentation seems to contain no mention of array indexing Of the type <code>x[True]</code>, or <code>x[False]</code>.</p> <p>Empirically, using <code>True</code> inserts a new dimension of size 1, while using <code>False</code> inserts a new dimension of size 0.</p> <p>The behavior of <code>x[True, True]</code> only inserts one new axis of size 1 instead of two.</p> <p>This behavior is not consistent with boolean indexing, and it's not consistent with treating boolean scalars as integers.</p> <p>Looking for an explanation of the observed behavior, and hopefully a rational. Thanks much!</p>
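The behavior described is (briefly) covered under "Boolean array indexing": a scalar `True`/`False` acts as a 0-d boolean mask, which consumes zero dimensions of the indexed array and prepends one new axis whose length is the number of `True` values in the mask (1 or 0). A quick check of that reading:

```python
import numpy as np

x = np.arange(6).reshape(2, 3)

# A 0-d boolean index adds a new leading axis whose length is the
# number of True elements in the mask: 1 for True, 0 for False.
print(x[True].shape)    # (1, 2, 3)
print(x[False].shape)   # (0, 2, 3)
```

This is deliberately *not* the integer interpretation (`x[1]`/`x[0]`), which is why booleans behave differently from scalars in indexing than they do in arithmetic.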
<python><numpy><numpy-ndarray>
2023-03-23 21:09:13
1
5,954
Sasha
75,827,857
15,257,122
What is the reason a Python3 loop is taking so much longer than Node.js?
<p>First of all, some readers are negating it as a valid question. But if my goal is to check, if I have an algorithm that is O(nΒ²) and <code>n</code> is 10,000 or 100,000, then what kind of minimum running time should I expect, then the loop down below is totally valid.</p> <p>I first wrote a JavaScript test:</p> <pre class="lang-js prettyprint-override"><code>const n = 10000; const n2 = n * n; let a = 0; for (let i = 0; i &lt; n2; i++) { a += 3.1; a -= 1.01; a -= 2.0001; } console.log(a); </code></pre> <p>and ran it:</p> <pre><code>$ time node try.js node try.js 0.31s user 0.01s system 98% cpu 0.321 total </code></pre> <p>so it took 0.32 seconds to finish.</p> <p>But then I tried it in Python3, on a MacBook Air M2:</p> <pre class="lang-py prettyprint-override"><code>n = 10000 n2 = n * n a = 0 for i in range(n2): a += 3.1 a -= 1.01 a -= 2.0001 print(a) </code></pre> <p>and it look 9.88 seconds:</p> <pre><code>$ time python3 try.py python3 try.py 9.88s user 0.04s system 99% cpu 9.948 total </code></pre> <p>I don't quite get it how come the JavaScript is 30 times faster than the Python code. I would have used <code>xrange()</code> in Python 2 but Python 3 doesn't have it any more and it seems we use <code>range()</code> and it won't generate a huge array because it is a generator.</p> <p>Did I do something wrong or could I make it run faster (more like less than 1 second)?</p>
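The usual explanation for the gap is that V8 JIT-compiles the hot loop to machine code while CPython interprets every bytecode of every iteration, so per-iteration overhead dominates; the standard remedies are NumPy, numba, or PyPy. For this particular micro-benchmark the loop body is loop-invariant, so it even collapses to a closed form (note this changes the floating-point accumulation order, so the result can differ slightly from the loop's):

```python
n2 = 10_000 * 10_000
delta = 3.1 - 1.01 - 2.0001   # net change per iteration, hoisted out of the loop
a = delta * n2                # closed form; rounding differs slightly from the loop
print(a)
```

This is a property of the benchmark, not a general speedup recipe; for real O(nΒ²) work the honest answer is to move the inner loop into compiled code.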
<javascript><python><python-3.x>
2023-03-23 20:48:34
1
787
Stefanie Gauss
75,827,840
11,572,712
Streamlit - AssertionError: Cannot pair 2 captions with 1 images
<p>I have the following code:</p> <pre><code>import pm4py import pandas as pd import streamlit as st import graphviz st.set_page_config(page_title=&quot;Process Mining dashboard&quot;, page_icon=&quot;:chart_with_downwards_trend:&quot;, layout=&quot;wide&quot;) file_path = r'path/to/file.csv' event_log = pd.read_csv(file_path, sep=';') df = pm4py.format_dataframe(event_log, case_id = 'case_id', activity_key = 'activity', timestamp_key = 'timestamp', timest_format = '%Y-%m-%d %H:%M:%S%z') pm_image = st.container() bpmn_model = pm4py.discover_bpmn_inductive(df) bpmn = 'bpmn.png' pm4py.save_vis_bpmn(bpmn_model, bpmn) pn, im, fm = pm4py.discover_petri_net_inductive(df) petri = 'petri.png' pm4py.save_vis_petri_net(pn, im, fm, petri) images = { 'bpmn': bpmn, 'petri': petri } img = st.sidebar.selectbox(&quot;Select the visualization of the process.&quot;, list(images.keys())) with pm_image: st.image(images[img], caption=list(images.keys())) </code></pre> <p>But when running this I get the following error:</p> <pre><code>AssertionError: Cannot pair 2 captions with 1 images. </code></pre> <p>What I would like to have is a dropdown menu which is linked to <code>st.image</code> and shows the image which is selected in the dropdown menu. The dropdown is correctly displayed but not the image I select (instead, the error is raised).</p> <p>How should I change my code?</p>
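The error comes from `caption=list(images.keys())`: a single image is passed with *two* captions, and Streamlit pairs captions with images one-to-one. Passing the selected key as the caption, i.e. `st.image(images[img], caption=img)`, should fix it. A stdlib-only sketch of the pairing rule (the `pair_captions` helper is hypothetical, mimicking Streamlit's internal check so this runs outside a Streamlit app):

```python
def pair_captions(image, captions):
    """Hypothetical stand-in for Streamlit's internal caption/image check."""
    images = [image] if isinstance(image, str) else list(image)
    if captions is None or isinstance(captions, str):
        captions = [captions] * len(images)
    assert len(captions) == len(images), \
        f"Cannot pair {len(captions)} captions with {len(images)} images."
    return list(zip(images, captions))

images = {'bpmn': 'bpmn.png', 'petri': 'petri.png'}
img = 'bpmn'   # what the selectbox would return

# Buggy call: one image, two captions -> the AssertionError from the question
try:
    pair_captions(images[img], list(images.keys()))
except AssertionError as e:
    print(e)

# Fixed call, mirroring st.image(images[img], caption=img)
print(pair_captions(images[img], img))
```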
<python><error-handling><streamlit>
2023-03-23 20:46:09
1
1,508
Tobitor
75,827,778
2,510,104
Trying to calculate total memory usage of multiprocess Python code on Linux

<p>I'm stuck with a memory usage calculation problem for a Python code I have which is using multiprocessing. The code runs on Linux.</p> <p>Here is the gist of the problem: I have a parent process which forks off bunch of child processes. The address space of the parent process is pretty much readonly from child process point of view (not explicitly marked as Read-Only). When calculating total memory usage, if I don't include children processes I'm underestimating total usage. If I'm including them, I'm double-counting the read-only memory which is pretty sizable. I'm getting RSS value of the processes involved. Not sure how to cleanly estimate memory usage for this code.</p> <p>Here is a simple code which shows the issue:</p> <pre><code>import multiprocessing as mp import psutil import resource import time large_arr = [] for i in range(100000000): large_arr.append(i) def run(): print(&quot;Starting!&quot;) # sleeping to get the RSS size for the process time.sleep(5) sum_val = 0 small_local_arr = [] for i in range(1000): small_local_arr.append(i) sum_val += small_local_arr[-1] for elem in large_arr: sum_val += elem print(&quot;Useless work is done!&quot;) process = mp.Process(target=run, name=&quot;random_process&quot;) process.start() def print_mem_usage(): rusage_self = resource.getrusage(resource.RUSAGE_SELF) rusage_children = resource.getrusage(resource.RUSAGE_CHILDREN) process_usage = psutil.Process(process.pid).memory_info().rss / 1000 print(f&quot;Self {rusage_self.ru_maxrss} KB, Children {rusage_children.ru_maxrss} KB, Process {process_usage} KB&quot;) for i in range(10): time.sleep(1) print_mem_usage() </code></pre> <p>Here is the output that gets printed when running it:</p> <blockquote> <p>Self 3969424 KB, Children 3196 KB, Process 4062810.112 KB</p> </blockquote> <p>While true memory use of child process should be much much smaller than the parent process, it's reporting the size to be very close to the memory consumption of the parent process. 
htop shows the total size of this code to be around 4GB vs. if relying on this code, total size would be 8GB. So I'm effectively double-counting parent's process address space size.</p>
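One way out of the double counting: RSS charges every shared copy-on-write page in full to every process that maps it. On Linux, PSS (proportional set size, read from `/proc/<pid>/smaps`) splits each shared page evenly among the processes mapping it, so summing PSS over the parent and its children gives a sensible tree total; psutil exposes it via `memory_full_info()`. A Linux-only sketch (reading smaps is noticeably slower than reading RSS):

```python
import os
import psutil

def tree_pss_kb(pid=None):
    """Sum PSS (bytes -> KB) over a process and all its descendants. Linux only."""
    parent = psutil.Process(pid or os.getpid())
    total = 0
    for proc in [parent] + parent.children(recursive=True):
        try:
            total += proc.memory_full_info().pss
        except psutil.NoSuchProcess:
            pass  # child exited while we were iterating
    return total / 1000

print(f"Tree PSS: {tree_pss_kb()} KB")
```

With PSS, the read-only pages the children inherit from the parent are counted once in aggregate rather than once per process, which should land much closer to the htop figure.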
<python><linux><memory><multiprocessing>
2023-03-23 20:36:53
0
541
Amir
75,827,769
15,452,168
Change iterrows() to a vectorized merge for large dataframes
<p>I have 2 data frames, df1 and df2.</p> <p>Based on the condition in <code>df1</code> that <code>day_of_week == 7</code> we have to match 2 other column values, <code>(statWeek and statMonth)</code> if the condition matches then we have to replace <code>as_cost_perf</code> from df2 with <code>cost_eu</code> from df1. in other places we simply keep as_cost_perf as it is.</p> <p>Below is my code block with iterrows()</p> <p>in case, I have a huge dataframe it will be time consuming, can someone please help me optimize this snippet?</p> <pre><code>import pandas as pd # create df1 data1 = {'day_of_week': [7, 7, 6], 'statWeek': [1, 2, 3], 'statMonth': [1, 1, 1], 'cost_eu': [957940.0, 942553.0, 1177088.0]} df1 = pd.DataFrame(data1) # create df2 data2 = {'statWeek': [1, 2, 3, 4, 1, 2, 3], 'statMonth': [1, 1, 1, 1, 2, 2, 2], 'as_cost_perf': [344560.0, 334580.0, 334523.0, 556760.0, 124660.0, 124660.0, 763660.0]} df2 = pd.DataFrame(data2) # identify rows in df1 where day_of_week == 7 mask = df1['day_of_week'] == 7 # update df2 with cost_eu from df1 where there is a match for i, row in df1[mask].iterrows(): matching_rows = df2[(df2['statWeek'] == row['statWeek']) &amp; (df2['statMonth'] == row['statMonth'])] if not matching_rows.empty: df2.loc[matching_rows.index, 'as_cost_perf'] = row['cost_eu'] # print the updated df2 df2 </code></pre> <p>Thanks in advance!</p>
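A vectorized alternative: left-merge the filtered slice of `df1` onto `df2` on the two key columns, then use `fillna` to keep the old `as_cost_perf` wherever there was no match. A sketch with the question's data (this assumes `(statWeek, statMonth)` is unique within the filtered slice, so the merge preserves `df2`'s row order and length):

```python
import pandas as pd

df1 = pd.DataFrame({'day_of_week': [7, 7, 6], 'statWeek': [1, 2, 3],
                    'statMonth': [1, 1, 1],
                    'cost_eu': [957940.0, 942553.0, 1177088.0]})
df2 = pd.DataFrame({'statWeek': [1, 2, 3, 4, 1, 2, 3],
                    'statMonth': [1, 1, 1, 1, 2, 2, 2],
                    'as_cost_perf': [344560.0, 334580.0, 334523.0, 556760.0,
                                     124660.0, 124660.0, 763660.0]})

keys = ['statWeek', 'statMonth']
sub = df1.loc[df1['day_of_week'] == 7, keys + ['cost_eu']]

# Left-merge keeps df2's rows; cost_eu is NaN where there is no match
merged = df2.merge(sub, on=keys, how='left')
df2['as_cost_perf'] = merged['cost_eu'].fillna(df2['as_cost_perf'])
print(df2)
```

If the keys could repeat in `sub`, deduplicate it first (e.g. `sub.drop_duplicates(keys)`), otherwise the merge would add rows.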
<python><pandas><dataframe><lambda><data-wrangling>
2023-03-23 20:35:06
3
570
sdave
75,827,592
12,436,050
PermissionError: [Errno 13] Permission denied: while writing output to a file in Python3.7
<p>I am trying to write &amp; append json data retrieved through the API calls to a file. 'missed_loc_id' is a dataframe of single column with identifiers. Below is the python code.</p> <pre><code>df_loc_missed = pd.DataFrame( columns=['location_id', 'city', 'country']) json_arr = [] url_loc_missed = &quot;https://europa.eu/v1/locations{id}&quot; headers = {'Accept': 'application/json'} for row in missed_loc_id.itertuples(index=True, name='Pandas'): id = row.location_id url_loc_missed = f&quot;https://europa.eu/v1/locations/{id}&quot; miss_loc_response_api = requests.get(url_loc_missed, auth=('hghgh', 'ghghgh$'), headers = headers) miss_loc_api_json_resp = miss_loc_response_api.json() json_arr.append(miss_loc_api_json_resp) with open(&quot;missed_location.json&quot;, &quot;a&quot;, encoding='utf-8') as myfile: json.dump(json_arr, myfile, ensure_ascii=False, indent=4) myfile.close() </code></pre> <p>However, after few successful writing and appending the data to the outfile, I get following error.</p> <pre><code>PermissionError: [Errno 13] Permission denied: 'missed_location.json' </code></pre> <p>Any help is highly appreciated to resolve this issue.</p>
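Two observations on the loop itself, independent of the permission error: reopening the file in append mode on every iteration dumps the entire (growing) `json_arr` again, producing many concatenated JSON documents rather than one valid array; and on Windows, Errno 13 on a reopen typically means another handle (an editor, indexer, or antivirus scanning the freshly written file) still holds it. Collecting everything and writing once after the loop avoids both. A sketch with stand-in API responses (note also that `myfile.close()` inside a `with` block is redundant; the context manager closes the file):

```python
import json
import os
import tempfile

# Stand-ins for the per-location API responses collected in the loop
json_arr = [{'location_id': 1, 'city': 'Paris'},
            {'location_id': 2, 'city': 'Rome'}]

path = os.path.join(tempfile.gettempdir(), "missed_location.json")

# Open once, in "w" mode, after all responses are collected
with open(path, "w", encoding="utf-8") as myfile:
    json.dump(json_arr, myfile, ensure_ascii=False, indent=4)

with open(path, encoding="utf-8") as myfile:
    print(json.load(myfile))
```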
<python><json>
2023-03-23 20:10:43
0
1,495
rshar
75,827,522
2,809,834
Merging multiple pandas columns together
<p>I want to convert the following data frame:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>2020</th> <th>2021</th> <th>2022</th> </tr> </thead> <tbody> <tr> <td>10</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>NaN</td> <td>5</td> <td>NaN</td> </tr> <tr> <td>12</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>NaN</td> <td>NaN</td> <td>15</td> </tr> </tbody> </table> </div> <p>to the following:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Merged</th> </tr> </thead> <tbody> <tr> <td>10</td> </tr> <tr> <td>5</td> </tr> <tr> <td>12</td> </tr> <tr> <td>15</td> </tr> </tbody> </table> </div> <p>Some considerations:</p> <ul> <li>There can be multiple columns (more than the three shown 2020, 2021, 2022)</li> <li>Only 1 column will ever have a value, the rest have NaNs</li> </ul> <p>What is a clean efficient way to do this type of column merge?</p>
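Since each row has exactly one non-NaN value, back-filling across columns pushes that value into the first column, regardless of how many columns there are. A sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'2020': [10, np.nan, 12, np.nan],
                   '2021': [np.nan, 5, np.nan, np.nan],
                   '2022': [np.nan, np.nan, np.nan, 15]})

# Back-fill row-wise, then take the first column
merged = df.bfill(axis=1).iloc[:, 0].rename('Merged')
print(merged.tolist())
```

One caveat: NaN forces the columns to float dtype, so if integers are required, append `.astype(int)` (safe here because every row is guaranteed a value).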
<python><pandas>
2023-03-23 20:00:47
1
374
Ossz
75,827,508
9,937,874
Better way to iterate over a list?
<p>I am using <code>fitz</code> to scrape PDFs. I use the following code to generate a list of words located in the pdf.</p> <pre><code>import fitz doc = fitz.open(&quot;very_important_document.pdf&quot;) for page in doc.pages(0, 1, 1): # reads just the first page wlist = page.get_text(&quot;words&quot;) doc.close() </code></pre> <p>This returns a list of tuples of the following form:</p> <pre><code>(x0, y0, x1, y1, &quot;lines in the block&quot;, block_no, block_type) </code></pre> <p>Some text I need to pull out comes after multiple lines, for example:</p> <pre><code>wlist = [(x0, y0, x1, y1, &quot;sample&quot;, block_no, block_type), (x0, y0, x1, y1, &quot;population&quot;, block_no, block_type), (x0, y0, x1, y1, &quot;1,451,990&quot;, block_no, block_type), (x0, y0, x1, y1, &quot;target&quot;, block_no, block_type), (x0, y0, x1, y1, &quot;population&quot;, block_no, block_type) (x0, y0, x1, y1, &quot;5,678&quot;, block_no, block_type) </code></pre> <p>If I want to pull out sample_population and target_population I can just increase my count by 1 since it always comes on the line after the indicator text:</p> <pre><code>sample_pop = 0 target_pop = 0 for i in range(len(wlist)): if (wlist[i][4] == &quot;sample&quot;) &amp; (wlist[i+1][4] == &quot;population&quot;): sample_pop = wlist[i+2][4] if (wlist[i][4] == &quot;target&quot;) &amp; (wlist[i+1][4] == &quot;population&quot;: target_pop = wlist[i+2][4] </code></pre> <p>This code works but I end up getting an <code>IndexError('list index out of range')</code> as this checks every item in the list of extracted words. Any cleaner/more pythonic way to parse this list?</p>
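The `IndexError` comes from reading `wlist[i+1]` and `wlist[i+2]` near the end of the list. Zipping the list against itself with offsets stops two elements early automatically, so no bounds check is needed. A sketch with dummy coordinate/block fields standing in for the real fitz tuples:

```python
def value_after_pair(wlist, first, second):
    """Return the word two positions after an adjacent (first, second) pair."""
    for a, b, c in zip(wlist, wlist[1:], wlist[2:]):
        if a[4] == first and b[4] == second:
            return c[4]
    return None

# Dummy coordinates/block fields in place of the real fitz output
wlist = [(0, 0, 0, 0, w, 0, 0) for w in
         ['sample', 'population', '1,451,990',
          'target', 'population', '5,678']]

print(value_after_pair(wlist, 'sample', 'population'))
print(value_after_pair(wlist, 'target', 'population'))
```

Returning `None` when no pair is found also replaces the `0` sentinels, which could otherwise be mistaken for real values.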
<python><string><list><iteration>
2023-03-23 19:59:26
2
644
magladde
75,827,502
3,788
How to get Pydantic model types?
<p>Consider the following model:</p> <pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel class Cirle(BaseModel): radius: int pi = 3.14 </code></pre> <p>If I run the following code, I can see the fields of this model:</p> <pre class="lang-py prettyprint-override"><code>print(Circle.__fields__) # Output: { 'radius': ModelField(name='radius', type=int, required=True), 'pi': ModelField(name='pi', type=float, required=False, default=3.14) } </code></pre> <p>The question is: how can I get the type (or inferred type hint) from the <code>ModelField</code> type? These all give errors:</p> <pre class="lang-py prettyprint-override"><code>Circle.__fields__['pi'].type # AttributeError: 'ModelField' object has no attribute 'type' Circle.__fields__['pi'].__dict__ # AttributeError: 'ModelField' object has no attribute '__dict__' type(Circle.__fields__['pi']) # &lt;class 'pydantic.fields.ModelField'&gt; import typing typing.get_type_hints(Circle.__fields__['pi']) # TypeError: ModelField(name='pi', type=float, required=False, default=3.14) # is not a module, class, method, or function. typing.get_type_hints(Circle) # This does not include the &quot;pi&quot; field because it has no type hints # {'radius': &lt;class 'int'&gt;} </code></pre> <p>I don't even see where the <code>ModelField</code> type is defined in <a href="https://github.com/pydantic/pydantic/search?q=ModelField" rel="noreferrer">github</a>.</p> <p>How can I iterate over the fields of a pydantic model and find the types?</p>
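The `AttributeError` in pydantic v1 is a trailing-underscore issue: `ModelField` stores the type under `type_` and `outer_type_`, not `type`. In pydantic v2 the fields live on `model_fields` as `FieldInfo` objects with an `.annotation`. A version-guarded sketch (with `pi` explicitly annotated, since v2 rejects a bare `pi = 3.14`):

```python
import pydantic
from pydantic import BaseModel

class Circle(BaseModel):
    radius: int
    pi: float = 3.14   # annotated explicitly; v2 rejects a bare `pi = 3.14`

if pydantic.VERSION.startswith("1."):
    # v1: ModelField keeps the type under `type_` / `outer_type_`
    types = {name: f.outer_type_ for name, f in Circle.__fields__.items()}
else:
    # v2: FieldInfo keeps it under `annotation`
    types = {name: f.annotation for name, f in Circle.model_fields.items()}

print(types)
```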
<python><pydantic>
2023-03-23 19:58:56
2
19,469
poundifdef
75,827,415
11,025,049
Is it possible to mock SimpleUploadedFile?
<p>I have a function like this:</p> <pre class="lang-py prettyprint-override"><code>def my_func(): [...] with open(full_path, &quot;rb&quot;) as file: image.image = SimpleUploadedFile( name=image.external_url, content=file.read(), content_type=&quot;image/jpeg&quot;, ) image.save() print(f&quot;{file_name} - Saved!&quot;) </code></pre> <p>I would like mock the &quot;SimpleUploadedFile&quot; for a test (calling print...).</p> <p>Is it possible?</p> <p>A little bit of context: the function download and upload files. Maybe the test is not necessary...</p>
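It is possible: `unittest.mock.patch` the name in the module where it is *used* (e.g. `"myapp.services.SimpleUploadedFile"`), not where Django defines it, since `from ... import SimpleUploadedFile` binds the name into the importing module. A self-contained sketch using a throwaway stand-in module so it runs without Django (the module name and `my_func` body are hypothetical):

```python
import sys
import types
from unittest import mock

# Throwaway module standing in for e.g. `myapp.services`, which would do
# `from django.core.files.uploadedfile import SimpleUploadedFile`.
services = types.ModuleType("services_demo")
exec(
    "class SimpleUploadedFile:\n"
    "    def __init__(self, name, content, content_type):\n"
    "        self.name = name\n"
    "\n"
    "def my_func(path):\n"
    "    return SimpleUploadedFile(name=path, content=b'', content_type='image/jpeg')\n",
    services.__dict__,
)
sys.modules["services_demo"] = services

# Patch the name where it is looked up at call time:
with mock.patch("services_demo.SimpleUploadedFile") as fake_upload:
    fake_upload.return_value = "FAKE"
    result = services.my_func("x.jpg")

print(result)
fake_upload.assert_called_once()
```

Whether the test is worth writing is a judgment call; mocking at least lets the download/upload flow be exercised without touching real files.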
<python><django><mocking>
2023-03-23 19:46:45
1
625
Joey Fran