Column              Type            Min / shortest         Max / longest
QuestionId          int64           74.8M                  79.8M
UserId              int64           56                     29.4M
QuestionTitle       string          15 chars               150 chars
QuestionBody        string          40 chars               40.3k chars
Tags                string          8 chars                101 chars
CreationDate        string (date)   2022-12-10 09:42:47    2025-11-01 19:08:18
AnswerCount         int64           0                      44
UserExpertiseLevel  int64           301                    888k
UserDisplayName     string          3 chars                30 chars (nullable)
75,699,783
1,745,291
Is there a function in python like Path.relative_to() supporting '../' if the path is not a subpath of the other?
<p><code>Path.relative_to</code> is great for suppressing the common prefix of two paths... But if the first path is not a subdirectory/subfile of the second, like:</p> <pre><code>Path('/tmp/random/path').relative_to(Path('/tmp/random/path/etc')) </code></pre> <p>then Python raises:</p> <pre><code>ValueError: '/tmp/random/path' is not in the subpath of '/tmp/random/path/etc' OR one path is relative and the other is absolute. </code></pre> <p>Is there a function that would return <code>Path('..')</code>?</p>
<python><path>
2023-03-10 18:27:23
2
3,937
hl037_
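A possible workaround (a sketch, not from the question itself): `os.path.relpath` accepts paths that are not subpaths of each other and emits `..` components as needed.

```python
import os
from pathlib import Path

# os.path.relpath walks up with ".." when the target is not a subpath
rel = os.path.relpath('/tmp/random/path', start='/tmp/random/path/etc')
print(Path(rel))  # -> ..
```

On Python 3.12+ pathlib supports this natively: `Path('/tmp/random/path').relative_to('/tmp/random/path/etc', walk_up=True)` returns `Path('..')`.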
75,699,625
9,782,619
Dataframe shift by duration but round(/ceil) to the nearest time
<p>I have a DataFrame with a monotonic ts column, like:</p> <pre><code>ts | value1 | value2 11 9 x 19 10 x 26 x x 29 x x 32 x x </code></pre> <p>For each <code>value1</code>, I want to find the <code>value2</code> at the timepoint just <code>x seconds</code> ago.</p> <p>An example:</p> <pre><code>x seconds = 10 e.g. 1: 32 - 10 = 22, the first ts past 22 is 26, so I want the value2 at ts 26 e.g. 2: 29 - 10 = 19, line 2's ts is exactly 19, so I want the value2 at ts 19 </code></pre> <p>So I wrote something like this:</p> <pre><code>x = 10 i1 = 0 row_num = df.shape[0] for i in range(row_num): cur_row = df.iloc[i] cur_t = cur_row.ts cur_v = cur_row.value1 delay_t = cur_t - x while i1 &lt; row_num and df.iloc[i1].ts &lt; delay_t: i1 += 1 delay_v = df.iloc[i1].value2 df.iloc[i,3] = cur_v - delay_v </code></pre> <p>It looks like <code>O(n)</code> complexity, but it is very slow. So I want to know:</p> <ol> <li>why it's slow and how to improve it; a vectorized implementation would be better</li> <li>if you can solve the first question, can you improve it to find the <code>value2</code> closest to <code>ts - x</code> instead of the first one past it? For example:</li> </ol> <pre><code>32 - 10 = 22, abs(22-19) = 3, abs(22-26) = 4, so I want the value2 at ts 19 </code></pre> <p>That would be much better.</p>
<python><pandas><dataframe>
2023-03-10 18:06:33
1
635
YNX
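One vectorized approach (a sketch, not from the question): `pandas.merge_asof` does exactly this kind of nearest-key lookup in linear time. `direction='forward'` matches the first ts at or past `ts - x`, and `direction='nearest'` answers the follow-up. The value2 numbers below are made-up stand-ins for the question's `x` placeholders.

```python
import pandas as pd

# the question's ts column, with hypothetical value2 numbers
df = pd.DataFrame({"ts": [11, 19, 26, 29, 32], "value2": [100, 200, 300, 400, 500]})
x = 10

left = df[["ts"]].assign(target=df["ts"] - x)
right = df.rename(columns={"ts": "ts2"})[["ts2", "value2"]]

# first row whose ts >= ts - x ("the first ts past it")
fwd = pd.merge_asof(left, right, left_on="target", right_on="ts2", direction="forward")
# closest row to ts - x (part 2 of the question)
near = pd.merge_asof(left, right, left_on="target", right_on="ts2", direction="nearest")
print(fwd["value2"].tolist())  # -> [100, 100, 200, 200, 300]
```

Both keys must be sorted, which holds here because ts is monotonic.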
75,699,585
15,363,250
Joining 2 dataframes and collecting only the unique values in pyspark
<p>I have two dataframes: the first is called questions_version1 and the second questions_version2.</p> <pre><code>questions_version1 +-----------+----------+-----------+--------------+ | question |answer |version | total_answers +-----------+----------+-----------+--------------+ |eye color | blue | 1 | 15000 | |eye color | brown | 1 | 32000 | |eye color | green | 1 | 5000 | |hair color | brown | 1 | 47000 | |hair color | blonde | 1 | 3000 | |hair color | white | 1 | 2000 | +-----------+----------+-----------+--------------+ </code></pre> <pre><code>questions_version2 +-----------+----------+-----------+--------------+ | question |answer |version | total_answers +-----------+----------+-----------+--------------+ |eye color | blue | 1 | 15000 | |eye color | green | 1 | 5000 | |eye color | hazel | 2 | 9000 | |hair color | brown | 1 | 47000 | |hair color | white | 1 | 2000 | |hair color | red | 2 | 500 | +-----------+----------+-----------+--------------+ </code></pre> <p>How do I join both so I can get all the values in questions_version2 that don't exist in questions_version1, without duplicating the values that exist in both?</p> <p>The final result would be something like this:</p> <pre><code>questions_1_and_2_merged +-----------+----------+-----------+--------------+ | question |answer |version | total_answers +-----------+----------+-----------+--------------+ |eye color | blue | 1 | 15000 | |eye color | brown | 1 | 32000 | |eye color | green | 1 | 5000 | |eye color | hazel | 2 | 9000 | |hair color | brown | 1 | 47000 | |hair color | blonde | 1 | 3000 | |hair color | white | 1 | 2000 | |hair color | red | 2 | 500 | +-----------+----------+-----------+--------------+ </code></pre> <p>Could you all help me with this please? Thanks!</p>
<python><dataframe><pyspark>
2023-03-10 18:01:38
3
450
Marcos Dias
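A sketch of one common approach (my assumption, not confirmed by the question): since rows that appear in both frames are fully identical, a plain union followed by a distinct is enough — in PySpark, `questions_version1.unionByName(questions_version2).distinct()`. The same set semantics, illustrated with pandas so the snippet runs without a Spark session:

```python
import pandas as pd

# miniature stand-ins for questions_version1 / questions_version2
q1 = pd.DataFrame({"question": ["eye color"] * 3,
                   "answer": ["blue", "brown", "green"],
                   "version": [1, 1, 1],
                   "total_answers": [15000, 32000, 5000]})
q2 = pd.DataFrame({"question": ["eye color"] * 3,
                   "answer": ["blue", "green", "hazel"],
                   "version": [1, 1, 2],
                   "total_answers": [15000, 5000, 9000]})

# union both frames, then keep each fully-identical row once
merged = (pd.concat([q1, q2])
            .drop_duplicates()
            .sort_values(["question", "answer"])
            .reset_index(drop=True))
```

If the two frames could disagree on `total_answers` for the same question/answer pair, you would instead de-duplicate on the key columns only (in Spark, `dropDuplicates(["question", "answer"])`).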
75,699,538
13,309,379
Cupy Code Optimization: How to speed up nested for loops
<p>I would like to optimize the python code between the 2 <code>perf_counter</code> functions. By using <code>cupy</code> I already obtained substantial improvement compared to <code>numpy</code>. I was asking myself if there is some reordering or vectorization that I am missing. The main constraint is that I should not be building any tensor(ndarray) that is bigger than dim^4, since this is part of some memory optimization of a bigger project, for which a method that scales with dim^5 is already known and better performing.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import cupy as cp from time import perf_counter dim = 60 T_final = cp.random.rand(dim,dim,dim,dim) T_start = cp.random.rand(dim,dim,dim,dim) U = cp.random.rand(dim,dim,dim) cp.cuda.Stream.null.synchronize() start = perf_counter() for xf in range(0,dim): for yu in range(0,dim): B = cp.einsum(&quot;ij,kim-&gt;kmj&quot;,U[:,:,yu],T_start[xf,:,:,:]) for xb in range(0,dim): C = cp.einsum(&quot;kmj,kjs-&gt;ms&quot;,B,T_start[:,xb,:,:]) T_final[xf,xb,yu,:]=cp.einsum(&quot;msc,ms&quot;,U,C) cp.cuda.Stream.null.synchronize() print(perf_counter()-start) </code></pre>
<python><multidimensional-array><cupy><numpy-einsum>
2023-03-10 17:57:37
1
712
Indiano
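One possible restructuring (a sketch — numpy stands in for cupy here, and whether the chosen contraction path respects the dim^4 memory cap depends on the path einsum picks): the three nested `einsum` calls collapse into a single contraction, which also removes the Python-level loops.

```python
import numpy as np

dim = 4  # small so the check runs quickly; the question uses 60
rng = np.random.default_rng(0)
T_start = rng.random((dim, dim, dim, dim))
U = rng.random((dim, dim, dim))

# reference: the question's triple loop (numpy stand-in for cupy)
ref = np.empty((dim, dim, dim, dim))
for xf in range(dim):
    for yu in range(dim):
        B = np.einsum("ij,kim->kmj", U[:, :, yu], T_start[xf, :, :, :])
        for xb in range(dim):
            C = np.einsum("kmj,kjs->ms", B, T_start[:, xb, :, :])
            ref[xf, xb, yu, :] = np.einsum("msc,ms", U, C)

# the same contraction in one call; optimize=True lets einsum pick a pairwise path
out = np.einsum("iju,fkim,kbjs,msc->fbuc", U, T_start, T_start, U, optimize=True)
```

`np.einsum_path` (or `cp.einsum` with an explicit path) reports the intermediate sizes, so the memory constraint can be checked before committing to this route.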
75,699,462
14,697,000
Is there a better way of displaying a csv file that was made from a Multi-index dataframe?
<p>I was working on a project for calculating the different speeds of different algorithms with different datas and I wanted to save this data in a csv format and eventually view it in Excel.</p> <p>I did this in the following lines of code:</p> <pre><code>df=DataFrame(dictionary_container,index=indexer) df=df.transpose() df.to_csv(&quot;sorted_results.csv&quot;) print(df) </code></pre> <p>Now, the problem is that my data frame has a multi-index and it seems like when this multi-index is converted into csv it doesn't has the same format and adaptation DataFrame has.</p> <p>When printed as a dataframe in my pycharm console it looks something like this:</p> <pre><code> Selection Sort Bubble Sort Insertion Sort Shell Sort Merge Sort Quick Sort Data Comparisons Data Swaps Time Data Comparisons Data Swaps Time Data Comparisons Data Swaps Time Data Comparisons Data Swaps Time Data Comparisons Data Swaps Time Data Comparisons Data Swaps Time Ascending_Sorted_250 31125.0 0.0 0.008000 61752.0 0.0 0.008000 249.0 0.0 0.000000 1506.0 0.0 0.000000 102985.0 0.0 0.000000 32220.0 16110.0 0.007996 Ascending_Sorted_500 124750.0 0.0 0.008002 248502.0 0.0 0.038870 499.0 0.0 0.002037 3506.0 0.0 0.000000 466830.0 0.0 0.008044 130682.0 65341.0 0.012074 Ascending_Sorted_1000 499500.0 0.0 0.048405 997002.0 0.0 0.124397 999.0 0.0 0.000000 8006.0 0.0 0.009063 2091964.0 0.0 0.031324 542432.0 271216.0 0.029222 Descending_Sorted_250 31125.0 144.0 0.002518 61752.0 0.0 0.007534 249.0 31005.0 0.000000 1506.0 867.0 0.008008 98029.0 0.0 0.000000 62250.0 31125.0 0.008001 Descending_Sorted_500 124750.0 304.0 0.016446 248502.0 0.0 0.058094 499.0 124499.0 0.021138 3506.0 1988.0 0.007011 450069.0 0.0 0.017184 249500.0 124750.0 0.008066 Descending_Sorted_1000 499500.0 627.0 0.057602 997002.0 0.0 0.211443 999.0 498961.0 0.121921 8006.0 4480.0 0.009598 2038910.0 0.0 0.036352 999000.0 499500.0 0.051927 Unordered_Sorted_250 31125.0 247.0 0.000000 61752.0 0.0 0.008011 249.0 16176.0 0.008005 1506.0 1190.0 
0.000000 119646.0 0.0 0.008008 102818.0 51409.0 0.000000 Unordered_Sorted_500 124750.0 497.0 0.016108 248502.0 0.0 0.032092 499.0 62517.0 0.014741 3506.0 3038.0 0.002054 546612.0 0.0 0.010065 433998.0 216999.0 0.000000 Unordered_Sorted_1000 499500.0 991.0 0.050410 997002.0 0.0 0.148675 999.0 238976.0 0.062542 8006.0 6602.0 0.000000 2631656.0 0.0 0.032139 2001812.0 1000906.0 0.008013 Ascending_Sorted_2000 49995000.0 0.0 5.434769 99970002.0 0.0 13.766283 9999.0 0.0 0.022327 120005.0 0.0 0.057448 314528524.0 0.0 0.392050 65294480.0 32647240.0 0.897377 Descending_Sorted_2000 1999000.0 1249.0 0.216450 3994002.0 0.0 0.796005 1999.0 1997998.0 0.476391 18006.0 9947.0 0.016075 9127864.0 0.0 0.054320 3998000.0 1999000.0 0.181211 Unordered_Sorted_2000 1999000.0 1997.0 0.270270 3994002.0 0.0 0.633384 1999.0 975989.0 0.272254 18006.0 17558.0 0.007661 11645237.0 0.0 0.072448 8575300.0 4287650.0 0.007999 </code></pre> <p>But when displayed in csv(sorted_results.csv) it looks like this which is very off putting: <a href="https://i.sstatic.net/Rk5DC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Rk5DC.png" alt="enter image description here" /></a></p> <p>But I want it to look something like this which is very formal and way better: <a href="https://i.sstatic.net/KegDF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KegDF.png" alt="enter image description here" /></a></p> <p>The first level of the multi index is repeated and I do not want that</p> <p>I tried everything I tried replacing the second and third repetitions with <code>pd.NA</code> or <code>numpy.NaN</code> or even <code>None</code> but even then it will actually display ,nan or None in the indexes, respectively, and I don't want that I want it to be empty over there.</p> <p>I did search for solutions on stack overflow, and I did end up stumbling upon something but the solutions/python scripts had to deal with using os to actually affect the already created csv file.</p> <p>I also tried to 
use <code>style.format</code>, but apparently that only applies CSS styling (mostly colors and the like), and I don't know how much it would help with a .csv file anyway.</p>
<python><pandas><dataframe><format><multi-index>
2023-03-10 17:49:50
1
460
How why e
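One workaround (a sketch with made-up column names, not the asker's full table): write the two header rows by hand, blanking a top-level label whenever it repeats, then append the data without headers. (`to_excel` is the other common route, since Excel can merge header cells.)

```python
import pandas as pd

cols = pd.MultiIndex.from_product([["Selection Sort", "Bubble Sort"],
                                   ["Comparisons", "Swaps"]])
df = pd.DataFrame([[1, 2, 3, 4]], index=["Ascending_Sorted_250"], columns=cols)

top = list(df.columns.get_level_values(0))
# keep only the first occurrence of each repeated top-level label
blanked = [lab if i == 0 or lab != top[i - 1] else "" for i, lab in enumerate(top)]

csv_text = (",".join([""] + blanked) + "\n"
            + ",".join([""] + list(df.columns.get_level_values(1))) + "\n"
            + df.to_csv(header=False))
print(csv_text)
```

Note the blanked cells are genuinely empty strings in the CSV, so Excel shows nothing there rather than `nan` or `None`.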
75,699,176
5,394,072
Google OR-Tools, accessing value of the variables that the solver tries. Dynamic coefficients for variables
<p>I am using Google OR-Tools to find the spend allocation that maximizes revenue. In the example below, <code>mon</code> and <code>tues</code> are variables for the spend allocated to Monday and Tuesday out of the total allocation.</p> <p>In short: can I use dynamic coefficients for the variables instead of static ones, where the coefficient depends on the value the solver tries for the variable? When I access the variables <code>mon</code> and <code>tues</code> inside a function passed to the solver, how do I get the candidate values (that the solver tries) instead of the variable objects? Details below.</p> <p>I want to maximize revenue, but revenue is not constant; it depends on spend. So as the spend allocation (<code>mon</code>, <code>tues</code>) changes, the revenue changes. Hence, instead of a static revenue value (or variable coefficient) associated with <code>mon</code> and <code>tues</code>, I want an updated revenue value depending on what values of <code>mon</code> and <code>tues</code> the solver tries. Is it possible to get those values using the OR-Tools example below? I try to access the <code>mon</code> and <code>tues</code> values, assuming they are numeric, while the solver tries different values in a loop or the like. But <code>mon</code> and <code>tues</code> are variable objects only, and I get a type error when trying something like the code below.</p> <p>Please let me know if I should try a different framework/library instead of OR-Tools.</p> <pre><code>def get_revenue(x): # dummy, in reality this will be tree based model return np.random.rand(1)*1000 def revenue (mon, tues): print(&quot;Values of Monday, Tuesday are : &quot;, mon, tues) mon_revenue = get_revenue(mon) tues_revenue = get_revenue(tues) return mon*mon_revenue + tues*tues_revenue solver = pywraplp.Solver('Maximize', pywraplp.Solver.GLOP_LINEAR_PROGRAMMING) mon = solver.NumVar(0.2, 0.8, 'monday') tues = solver.NumVar(0.2, 0.8, 'Tuesday') solver.Add(mon + tues == 1) solver.Maximize(revenue(mon,tues)) status = solver.Solve() </code></pre>
<python><optimization><linear-programming><or-tools><constraint-programming>
2023-03-10 17:19:47
0
738
tjt
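For context (my reading, not stated in the question): GLOP is a linear solver, so it only accepts fixed coefficients — it never calls back into Python with trial values, which is why `revenue(mon, tues)` sees variable objects rather than numbers. A black-box objective needs a derivative-free search instead; a minimal sketch over the feasible set, with a hypothetical revenue model standing in for the tree-based one:

```python
import numpy as np

def revenue_model(spend):
    # hypothetical stand-in for the tree-based model
    return 1000 * np.sqrt(spend)

def total_revenue(mon):
    tues = 1.0 - mon  # enforces the constraint mon + tues == 1
    return mon * revenue_model(mon) + tues * revenue_model(tues)

# derivative-free search over the box constraint 0.2 <= mon <= 0.8
grid = np.linspace(0.2, 0.8, 601)
best_mon = grid[np.argmax([total_revenue(m) for m in grid])]
best_tues = 1.0 - best_mon
```

For higher-dimensional versions of this, `scipy.optimize.differential_evolution` or a Bayesian-optimization library such as Optuna are the usual next steps; linear solvers like GLOP are simply the wrong tool for a model-valued objective.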
75,698,797
1,473,517
How do I set simple linear constraints with dual_annealing?
<p>I can set simple bounds to use with dual_annealing: E.g.</p> <pre><code>upper_bound = 20 num_points = 30 bounds = [(0, upper_bound) for i in range(num_points)] res = dual_annealing(fun, bounds, maxiter=1000) </code></pre> <p>But I would also like to constrain the variables so that <code>x_i &gt;= x_{i-1}+0.5</code> for each i. That is each variable should be at least 0.5 larger than the one preceding it.</p> <p>How can you do that?</p> <p>If scipy can't do it, are there other libraries with global optimizers that can?</p>
<python><scipy><mathematical-optimization>
2023-03-10 16:41:11
2
21,513
Simd
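`dual_annealing` itself only takes box bounds, but this particular constraint can be eliminated by reparameterization (a standard trick, sketched here with a made-up objective): optimize the first point plus the gaps between consecutive points, bound each gap below by 0.5, and rebuild x with a cumulative sum — every candidate the sampler tries is then feasible by construction.

```python
import numpy as np
from scipy.optimize import dual_annealing

upper_bound = 20
num_points = 5   # small so the demo runs quickly; the question uses 30
min_gap = 0.5

def to_x(z):
    # z[0] = first point, z[1:] = gaps; cumsum guarantees x[i] >= x[i-1] + 0.5
    return np.cumsum(z)

def fun(x):
    # hypothetical objective standing in for the question's fun
    return np.sum((x - 10.0) ** 2)

bounds = [(0, upper_bound)] + [(min_gap, upper_bound)] * (num_points - 1)
res = dual_annealing(lambda z: fun(to_x(z)), bounds, maxiter=25)
x = to_x(res.x)
```

Caveat: with these gap bounds the rebuilt x can exceed `upper_bound`; tighten the gap upper bounds (or add a penalty on `x[-1]`) if the original box must also hold.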
75,698,611
5,109,125
Question with reading csv using encoding - ISO-8859-1
<p>I am reading in a csv file that is being sent by another team weekly.</p> <p>Previously, my python script had no issue with reading that file (using pandas). But with this week's data I got an error &quot;UnicodeDecodeError: 'ascii' codec can't decode byte 0xa0 in position 208426: ordinal not in range(128)&quot;</p> <p>I found another stackoverflow post about checking the csv encoding using chardet (python). I followed that script and it shows encoding='ascii'. So I updated my python read_csv as follows:</p> <p>df = pd.read_csv('file', encoding='ascii')</p> <p>However, I still get the same error. Upon further checking stackoverflow I came upon a random post suggesting to use <code>encoding='ISO-8859-1'</code> and it worked.</p> <p>I tried doing read_csv using <code>encoding='ISO-8859-1'</code> on the other csv files that normally do not need encoding, and surprisingly it works on those csv files, too.</p> <p>I asked the team who sent me the file if they had done any encoding and they said no.</p> <p>I hope this won't eventually bite my behind. Would appreciate anyone's input on this.</p>
<python><pandas>
2023-03-10 16:24:42
1
597
punsoca
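For what it's worth, the reason ISO-8859-1 "always works" can be shown in a few lines (a sketch with a made-up byte string): ASCII only defines bytes 0–127, while ISO-8859-1 (Latin-1) assigns a character to every byte 0–255, so decoding with it can never raise — it can only be silently wrong if the file's true encoding differs. chardet only guesses, so its `ascii` verdict just means the sample it saw happened to look ASCII-like.

```python
raw = b"price\xa0100"  # 0xa0: the byte from the error; a no-break space in Latin-1

try:
    raw.decode("ascii")
    ascii_ok = True
except UnicodeDecodeError:
    ascii_ok = False  # ascii covers only bytes 0-127

text = raw.decode("iso-8859-1")  # never raises: every byte maps to a character
print(ascii_ok, repr(text))
```

If the file might actually be UTF-8 or Windows-1252, `pd.read_csv(..., encoding="utf-8")` with a fallback to `"cp1252"`/`"latin-1"` is a safer order of attempts than jumping straight to Latin-1.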
75,698,404
4,450,090
polars concat columns of list type into columns of string type
<p>I would like to concat dataframe list[String] column into String column using polars only native functions. Is it possible?</p> <pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({ 'col1': [[&quot;1.0&quot;, &quot;1.0&quot;], [&quot;2.0&quot;, &quot;3.0&quot;]], 'col2': [&quot;a&quot;, &quot;a&quot;], }) </code></pre> <p>This is the expected output I would like to achieve.</p> <pre><code>shape: (2, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ col1 ┆ col2 β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•ͺ══════║ β”‚ 1.0,1.0 ┆ a β”‚ β”‚ 2.0,3.0 ┆ a β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>I don't want to use custom python functions at all.</p>
<python><python-polars>
2023-03-10 16:05:52
1
2,728
Dariusz Krynicki
75,698,369
10,962,766
How many reviews does "app store scraper" for Python scrape by default, and is there a fixed date range?
<p>I am confused about how many reviews the <a href="https://pypi.org/project/app-store-scraper/" rel="nofollow noreferrer">app store scraper</a> for Python scrapes by default and if there is any fixed date range (&quot;past n months&quot;). The documentation says that the maximum number of reviews fetched per request is 20, but I did not find a limit otherwise.</p> <p>This tutorial states that if &quot;how_many&quot; is not provided, app store scraper defaults to scraping all:</p> <p><a href="https://python.plainenglish.io/scraping-app-store-reviews-with-python-90e4117ccdfb" rel="nofollow noreferrer">https://python.plainenglish.io/scraping-app-store-reviews-with-python-90e4117ccdfb</a></p> <p>Based on documentation and different tutorials, I have thus created the following script:</p> <p><a href="https://github.com/MonikaBarget/DigitalHistory/blob/master/JupyterNotebooks/Webscraping_ApplePodcasts.ipynb" rel="nofollow noreferrer">https://github.com/MonikaBarget/DigitalHistory/blob/master/JupyterNotebooks/Webscraping_ApplePodcasts.ipynb</a></p> <p>This script sets the number of reviews to 7000, but even for top-rated podcasts that have been running for several years, reviews I can scrape do not seem to go back to the beginning and stay way below the 7000 thresholds. 
I was wondering if older reviews are inaccessible for other reasons.</p> <p>As a sample, I have scraped reviews for two different podcasts.</p> <ol> <li>The first one is &quot;Black Girl Gone&quot;: <a href="https://podcasts.apple.com/us/podcast/black-girl-gone-a-true-crime-podcast/id1556267741" rel="nofollow noreferrer">https://podcasts.apple.com/us/podcast/black-girl-gone-a-true-crime-podcast/id1556267741</a></li> </ol> <p>The script with <code>how_many=7000</code> gave me 271 reviews, the oldest dated 2021-03-11 17:34:44.</p> <ol start="2"> <li>The second podcast is &quot;Crime Junkie&quot;: <a href="https://podcasts.apple.com/us/podcast/crime-writers-on-true-crime-review/id949195280" rel="nofollow noreferrer">https://podcasts.apple.com/us/podcast/crime-writers-on-true-crime-review/id949195280</a></li> </ol> <p>The same script gave me 2000 reviews, the oldest dated 2022-07-17 16:13:36.</p> <p>I would be surprised if Crime Junkie, which started in 2017, had no reviews on Apple before the summer of 2022. I am at a loss as to how those vast differences in scrapeable reviews occur and what I can do to scrape as many as possible.</p>
<python><web-scraping>
2023-03-10 16:02:08
1
498
OnceUponATime
75,698,254
536,262
How to make requests in parallel using FastAPI
<p>In FastAPI, I have this route:</p> <pre class="lang-py prettyprint-override"><code>for id in ids: #get projects from list of ids p = await gitlab.project(id) if p and 'error' not in p: projects[int(id)] = p </code></pre> <p>But it takes around 2 seconds per request sequentially, so I wait more than a minute.</p> <p>How can I do this in parallel, using, say, 10 threads from a thread pool, in the easiest way possible and without having to modify the <code>gitlab.project(id)</code> method?</p> <p>gitlab.py has a global httpx.AsyncClient()</p> <p>I tried sending the ids directly: <code>res = await gitlab.projects(ids)</code></p> <p>but it still does them all sequentially.</p> <p>Below are the two functions in gitlab.py:</p> <pre class="lang-py prettyprint-override"><code>async def project(id:str): &quot;&quot;&quot; return meta data for project &quot;&quot;&quot; global s url = config.get_config()['gitlaburl'] + f&quot;/{id}&quot; r = await s.get(url, headers={'PRIVATE-TOKEN': config.get_config()['gitlabtoken']}) if r.status_code!=200: return {&quot;error&quot;: f&quot;unable to fetch from gitlab: {url}:{r.status_code} -&gt; {r.reason}&quot;} out = {} out['id'] = int(id) dat = json.loads(r.text) for k,v in dat.items(): if k in &quot;description,name,path_with_namespace&quot;.split(','): out[k] = v if k=='namespace' and 'avatar_url' in v: out['avatar_url'] = v['avatar_url'] return out async def projects(ids:List[Union[str,int]]): &quot;&quot;&quot; array of projects from config projectids &quot;&quot;&quot; dat = [] for id in ids: dat.append(await project(id)) return dat </code></pre>
<python><fastapi><httpx>
2023-03-10 15:51:51
1
3,731
MortenB
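Since `gitlab.project` is already async and shares one `httpx.AsyncClient`, no thread pool is needed — the usual fix is to schedule all the coroutines at once with `asyncio.gather` inside `projects()`. A self-contained sketch (the sleeping stub stands in for the real HTTP call):

```python
import asyncio

async def project(pid):
    # stand-in for gitlab.project(); the await is where httpx would do I/O
    await asyncio.sleep(0.01)
    return {"id": int(pid)}

async def projects(ids):
    # all requests run concurrently; results come back in input order
    return await asyncio.gather(*(project(i) for i in ids))

results = asyncio.run(projects(["1", "2", "3"]))
print(results)
```

To cap concurrency at roughly 10, acquire an `asyncio.Semaphore(10)` inside `project` around the `await` — the gather pattern stays the same.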
75,698,238
1,102,514
Django model unique constraint based on existing model data?
<p>Is it possible to make reference to existing model data as part of a Django unique constraint check?</p> <p>For example,</p> <pre class="lang-py prettyprint-override"><code>class MyModel(models.Model): .... my_field = models.IntegerField(...) class Meta: constraints = [ models.CheckConstraint( check=models.Q(my_field__gte=MyModel.objects.all().last().my_field), name=&quot;example_constraint&quot;) ] </code></pre> <p>In this pseudocode example, I query MyModel, fetching the latest record, and use the value of its 'my_field' property as part of the constraint when adding a new MyModel.</p> <hr /> <h2>Update</h2> <p>While it appears that my pseudocode will not have the intended effect, I am now wondering if there is any possibility of using django.db.models.Max() to achieve the desired outcome?</p> <p>For example:</p> <pre class="lang-py prettyprint-override"><code>constraints = [ models.CheckConstraint( check=models.Q(my_field__gte=Max('my_field')), name=&quot;example_constraint&quot;) ] </code></pre> <p>Will this call to Max('my_field') return the max value of my_field from the existing database rows? Or is it simply passing the current instance value of my_field to the Max() function?</p>
<python><django><django-models><constraints>
2023-03-10 15:51:07
1
1,401
Scratcha
75,698,214
5,111,234
Optional setup.py dependencies that require the Extension module
<p>I have a main package (say package A) that I want to have an optional dependency. I know that I can use <code>extras_require</code> to define optional package dependencies, however I want this optional dependency to use the <code>Extension</code> module to build using f2py with additional data files. Is there a way to define an <code>extras_require</code> statement to take an extension module? In other words, when I run my <code>setup.py</code> file, I can optionally specify that I want to install my wind model, and then it compiles my fortran code using f2py. If the optional wind model dependency is not desired, then the setup.py file runs normally and does not compile the fortran code (useful if a user does not have a fortran compiler installed and does not need this dependency).</p> <p>The only way I can think to do this currently is by creating a separate package with the wind model and putting this package in the <code>extras_require</code> for the main package. The downside to this is that a user would have to run a separate <code>setup.py</code> file for the wind model, and then return to the main package and run the main <code>setup.py</code>. To me this is undesirable, and it feels like there should be a way to do this. I have seen this <a href="https://stackoverflow.com/q/41778153/5111234">question</a>, however if the cython extension can be downloaded it is downloaded rather than the cython extension needing to be specified by the user.</p>
<python><numpy><setuptools><setup.py><f2py>
2023-03-10 15:49:01
0
679
Jehan Dastoor
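One pattern sometimes used for this (an assumption on my part — `extras_require` itself only controls pip-installable dependencies and cannot toggle `ext_modules`): gate the extension list on an environment variable checked inside `setup.py`, so `BUILD_WIND_MODEL=1 pip install .` compiles the Fortran module and a plain install skips it. The package and source names below are hypothetical.

```python
import os
from setuptools import Extension

def wind_extensions(env=None):
    """Return the optional f2py Extension list, or [] when not requested."""
    env = os.environ if env is None else env
    if env.get("BUILD_WIND_MODEL") == "1":
        # hypothetical module/source names; an f2py-aware build backend
        # (e.g. numpy's) would compile the .f90 source
        return [Extension("package_a.windmodel", sources=["src/windmodel.f90"])]
    return []

# in setup.py:  setup(..., ext_modules=wind_extensions())
```

The trade-off versus a separate package is that the switch lives in the build environment rather than in `pip install package_a[wind]` syntax.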
75,698,114
3,116,231
How to skip JSON levels when creating data classes
<p>I'd like to load a nested JSON into flat data classes:</p> <pre><code>from typing import List from dataclasses import dataclass @dataclass class MyDataClass: key_0: str key_2: str key_3: str key_4_1: str key_5: str @dataclass class MyDataClassList: classes: List[MyDataClass] data = [ { &quot;key_0&quot;: &quot;val_0&quot;, &quot;key_1_to_skip&quot;: { &quot;level_to_skip_1&quot;: [ { &quot;level_to_skip_1_1&quot;: { &quot;level_to_skip_1_1_1&quot;: { &quot;key_2&quot;: &quot;value_2&quot;, &quot;key_3&quot;: &quot;value_3&quot;, &quot;key_4&quot;: { &quot;key_4_1&quot;: &quot;value_4_1&quot;, &quot;key_to_ignore_4_2&quot;: &quot;value_4_2&quot; } }, }, &quot;key_5&quot;: &quot;value_5&quot; } ] }, } ] </code></pre> <p>Printing the instantiated <code>MyDataClassList</code> should return:</p> <pre><code>MyDataClassList(classes=[MyDataClass(key_0='val_0', key_2='value_2', key_3='value_3', key_4_1='value_4_1', key_5='value_5')]) </code></pre> <p>How can I achieve this without creating all intermediate levels and mapping the whole JSON structure to a dataclass hierarchy?</p>
<python><python-dataclasses>
2023-03-10 15:40:12
0
1,704
Zin Yosrim
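One way to skip the intermediate levels (a sketch, not a library feature): walk the JSON recursively, collect only scalar leaves whose key matches a dataclass field, and ignore everything else.

```python
from dataclasses import dataclass, fields

@dataclass
class MyDataClass:
    key_0: str
    key_2: str
    key_3: str
    key_4_1: str
    key_5: str

def collect(node, wanted, found):
    # depth-first walk; keep scalar values whose key is a dataclass field
    if isinstance(node, dict):
        for key, value in node.items():
            if key in wanted and not isinstance(value, (dict, list)):
                found[key] = value
            collect(value, wanted, found)
    elif isinstance(node, list):
        for item in node:
            collect(item, wanted, found)

def flatten(record):
    wanted = {f.name for f in fields(MyDataClass)}
    found = {}
    collect(record, wanted, found)
    return MyDataClass(**found)
```

Caveat: if the same field name occurs at several depths, the deepest occurrence wins here; disambiguating would need path-aware matching.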
75,698,060
3,684,314
Replacing Overlapping Regex Patterns in Python
<p>I am dealing with trying to make a <code>.ttl</code> file I was handed digestible. One of the issues is that the <code>rdfs:seeAlso</code> values are not sanitized and it breaks down downstream programs. What I mean by this is that there are links of the form:</p> <pre><code>rdfs:seeAlso prefix:value_(discipline) </code></pre> <p>In order to fix this, I need to precede particular characters with a <code>\</code>, per <a href="https://www.w3.org/TR/turtle/#sec-escapes" rel="nofollow noreferrer">the RDF 1.1 Turtle documentation</a>. Of the characters present, I need to escape the following:</p> <pre><code>_, ~, -, !, $, &amp;, (, ), *, +, =, ?, #, % </code></pre> <p>At first I thought this would be easy and I began constructing a <code>re.sub()</code> pattern. I tried a number of potential solutions, but the closest I could get was with:</p> <pre><code>re.sub(pattern=r&quot;(rdfs\:seeAlso)(.{0,}?)([\_\~\-\!\$\&amp;\(\)\*\+\=\?\#\%]{1})(.{0,})&quot;, repl='\\1\\2\\\\\\3\\4', string=str_of_ttl_file) </code></pre> <p>The <code>(rdfs\:seeAlso)</code> component was added in order to prevent accidentally changing characters within strings that are instances of <code>rdfs:label</code> and <code>rdfs:comment</code> (i.e. 
any of the above characters in between <code>''</code> or <code>&quot;&quot;</code>).</p> <p>However, this has the drawback of only working for the first occurrence and results in:</p> <pre><code>rdfs:seeAlso prefix:value\_(discipline) </code></pre> <p>Where it should be</p> <pre><code>rdfs:seeAlso prefix:value\_\(discipline\) </code></pre> <p>Any help or guidance with this would be much appreciated!</p> <p>EDIT 1: Instances of <code>rdfs:label</code> and <code>rdfs:comment</code> are strings that are between single (<code>'</code>) or double (<code>&quot;</code>) quotes, such as:</p> <pre><code>rdfs:label &quot;example-label&quot;@en </code></pre> <p>Or</p> <pre><code>rdfs:comment &quot;This_ is+ an $example$ comment where n&amp;thing should be replaced.&quot;@en </code></pre> <p>The special characters there do not need to be replaced for Turtle to function and should therefore be left alone by the regular expression.</p>
<python><regex><replace><ttl>
2023-03-10 15:34:25
3
793
user3684314
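The single-replacement behaviour comes from matching the whole tail with `(.{0,})`: one global match means one substitution. A different slicing (a sketch; it assumes the object to sanitize is the whitespace-delimited token right after `rdfs:seeAlso`) matches only that token and escapes every special character inside it with an inner `re.sub`:

```python
import re

def escape_seealso(text):
    def fix(match):
        # escape each special character inside the object token
        value = re.sub(r"([_~\-!$&()*+=?#%])", r"\\\1", match.group(2))
        return match.group(1) + value
    # group 1: the predicate and its spacing; group 2: the token to sanitize
    return re.sub(r"(rdfs:seeAlso\s+)(\S+)", fix, text)

print(escape_seealso("rdfs:seeAlso prefix:value_(discipline)"))
```

Strings attached to `rdfs:label` / `rdfs:comment` are untouched because the outer pattern only fires on `rdfs:seeAlso`, and the `:` separating prefix from local name is deliberately absent from the character class.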
75,697,886
8,382,028
Confusion with Pytest optimization to speed up testing
<p>This is not going to be a coding specific question, I honestly just have been attempting to speed up testing for my project the last week or so using <code>pytest</code>, <code>pytest-xdist</code> and <code>pytest-django</code> and I feel like I am running into a wall with optimization now.</p> <p>I have 3047 tests spanning my project. I am fully aware that database access inside of pytest slows down testing times, but I honestly feel like most of them require DB access as it is a database-driven system, but running my CI, my tests are taking about 12-14 minutes to complete. It's not that bad, but I used to have a full test-to-deploy time right around 10 minutes and I guess since I just added quite a few new tests to cover everything a bit more in depth, the time has increased to about 15 minutes. Again, not bad, I just want to improve it.</p> <p>I run my tests from my root directory via <code>pytest -n auto</code></p> <p>Is there anything I can do to speed up my tests that I may be overlooking? 
I realize this is a somewhat challenging question to answer without seeing my individual tests, but I'm asking more in concept, because most of my tests look similar (give or take some fixtures) to this:</p> <pre><code>def test_cannot_login_staff_with_over_5_errors(self, client, slug, admin): url = reverse('login', args=[slug]) admin.has_two_factor_authentication = False # store human readable portions of the user so we can log them in admin.set_password('password') admin.save() data = { 'username': admin.username, 'password': 'password-Not', } for i in range(0,6): # will try 4 times response = client.post(url, data) # will redirect on successful login if i &lt; 4: # on the 5th it does redirect print(i) assert response.status_code == 200 else: assert response.status_code == 302 </code></pre> <p>I have tried collecting files more quickly (although this actually doesn't speed things up much) with this:</p> <p><code>{ find -name 'test*.py'; } | xargs pytest</code></p> <p>I have removed all logic, other than the required <code>client</code> object, from my tests, so 'setup' objects are being configured only when needed (in my attempt to limit as much db access as possible). 
For example here is 1 fixture that is required in nearly every test:</p> <pre><code> @pytest.fixture def slug(db): # with connections.cursor() as cursor: # cursor.execute( # &quot;&quot;&quot; # INSERT INTO client_info( # name, price_per_unit, price_per_website, url_base, date_entered, last_update, uuid, is_active, deleted_by_cascade) # VALUES('test', 1.99, 148.99, 'test', NOW(), NOW(), 'f15cf30a-3d69-44db-ab2c-58c6608717ad', True, False) # &quot;&quot;&quot; # ) client = mommy.make('client.ClientInformation', url='test', name='test') return client.url </code></pre> <p>I then started thinking that if it were possible to not call the <code>slug</code> fixture in almost every test, perhaps that would speed things up a bit so I went to sessionstart and attempted to add db access to do this, but any possible way I try to allow db access fails with <code>RuntimeError: Database access not allowed, use the &quot;django_db&quot; mark, or the &quot;db&quot; or &quot;transactional_db&quot; fixtures to enable it.</code>, for example:</p> <pre><code>@pytest.mark.django_db def pytest_sessionstart(session): from django.test import TestCase TestCase.multi_db = True TestCase.databases = '__all__' client = mommy.make('client.ClientInformation', url_base='test', name='test', price_per_unit=1.99) # or.... # Argument(s) {'db'} are declared in the hookimpl but can not be found in the hookspec def pytest_sessionstart(session, db): from django.test import TestCase TestCase.multi_db = True TestCase.databases = '__all__' client = mommy.make('client.ClientInformation', url_base='test', name='test', price_per_unit=1.99) </code></pre>
<python><pytest><pytest-django>
2023-03-10 15:16:23
0
3,060
ViaTech
75,697,761
16,363,897
Element-wise weighted average of multiple dataframes
<p>Let's say we have 3 dataframes (df1, df2, df3). I know I can get an element-wise average of the three dataframes with</p> <pre><code>list_of_dfs = [df1, df2, df3] sum(list_of_dfs)/len(list_of_dfs) </code></pre> <p>But I need to get a weighted average of the three dataframes, with weights defined in an array &quot;W&quot;</p> <pre><code>W = np.array([0.2, 0.3, 0.5]) </code></pre> <p>So df1 will get a 20% weight, df2 30% and df3 50%. Unfortunately the actual number of dataframes is much larger than 3, otherwise I could do simply the follwing:</p> <pre><code>df1*W[0] + df2*W[1] + df3*W[2] </code></pre> <p>Any help? Thanks</p>
<python><pandas><dataframe>
2023-03-10 15:03:47
1
842
younggotti
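For any number of frames, the element-wise weighted sum generalizes with `zip` (a sketch with tiny stand-in frames; dividing by `W.sum()` makes it robust to weights that don't total 1):

```python
import numpy as np
import pandas as pd

dfs = [pd.DataFrame({"a": [1.0, 10.0]}),
       pd.DataFrame({"a": [2.0, 20.0]}),
       pd.DataFrame({"a": [3.0, 30.0]})]
W = np.array([0.2, 0.3, 0.5])

# element-wise weighted average across the list of frames
weighted = sum(df * w for df, w in zip(dfs, W)) / W.sum()
print(weighted)
```

If all frames share the same shape and columns, `np.average(np.stack([d.to_numpy() for d in dfs]), axis=0, weights=W)` is an equivalent single-call alternative.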
75,697,707
14,269,252
Unable to use sidebar with st.column
<p>I am building a Streamlit app in which I define <code>st.columns</code> and a sidebar. When I first click button 1 and then toggle a sidebar checkbox, the effect of my click on button 1 disappears. Is there a way to keep both?</p> <pre class="lang-py prettyprint-override"><code>col1, col2, col3 = st.columns([.4,.5,1]) m = st.markdown(&quot;&quot;&quot; &lt;style&gt; div.stButton &gt; button:first-child { background-color: rgb(0,250,154); } &lt;/style&gt;&quot;&quot;&quot;, unsafe_allow_html=True) with col1: button1 = st.button('aa') with col2: button2 = st.button('bb') with col3: button3 = st.button('cc') option_1 = st.sidebar.checkbox('x1', value=True) option_2 = st.sidebar.checkbox('x2') </code></pre>
<python><streamlit>
2023-03-10 14:58:30
1
450
user14269252
75,697,579
11,951,910
After recursive search of my json object how to determine data type for looping data
<p>I have a recursive function that examines a JSON object and captures matching data. When I loop over the returned data it works for one key but not the other. I need help determining what type of data is returned so I can loop over it correctly.</p> <pre><code>targetType = 'equal' def find_data(lookup_key, jsonData, search_result = []): if type(jsonData) == dict: for key, value in jsonData.items(): if targetType == 'equal' and key == lookup_key: search_result.append(value) elif targetType == 'regex' and pattern.search(key): search_result.append(value) find_data(lookup_key, value, search_result) elif type(jsonData) == list: for element in jsonData: find_data(lookup_key, element, search_result) return search_result </code></pre> <pre><code>json_obj = { &quot;data&quot;: [ {&quot;stats&quot;:[{&quot;num&quot;:1,&quot;Count&quot;:[0,1,2]},{&quot;num&quot;:2,&quot;Count&quot;:[3,4,5]}, {&quot;num&quot;:3,&quot;Count&quot;:[6,7,8]}, {&quot;num&quot;:4,&quot;Count&quot;:[9,10,11]}, {&quot;num&quot;:5,&quot;Count&quot;:[12,13,14]}, {&quot;num&quot;:6,&quot;Count&quot;:[15,16,17]},{&quot;num&quot;:7,&quot;Count&quot;:[18,19,20]}]}, {&quot;settings&quot;:[{&quot;num&quot;:8,&quot;Channel1&quot;:[[21,22,23],[1,2,3],[4,5,6]]}]}]} target_key = &quot;Count&quot; #target_key = &quot;Channel1&quot; data = find_data(target_key,json_obj, []) for item in data: print(item) </code></pre> <p>When run with <code>target_key = &quot;Count&quot;</code>, one item (a list) prints per loop iteration. When I use <code>target_key = &quot;Channel1&quot;</code>, the loop fails and I have to use <code>for item in data[0]</code>. How can I determine which loop to use? I tried <code>if all(isinstance(i, list) for i in data):</code> to determine whether it was a nested list, but both return true.</p> <p>Here is the return from Count: <code>[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11], [12, 13, 14], [15, 16, 17], [18, 19, 20]]</code></p> <p>Here is the return from Channel1: <code>[[[21, 22, 23], [1, 2, 3], [4, 5, 6]]]</code></p>
<python><json>
2023-03-10 14:48:06
0
718
newdeveloper
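One way to handle the question above is to measure how deeply the returned result is nested and unwrap one level when needed, instead of testing `isinstance` on the first level only. A minimal sketch; the helper names are hypothetical, not from any library:

```python
def nesting_depth(obj):
    """Return how many list levels wrap the innermost scalars."""
    if not isinstance(obj, list) or not obj:
        return 0
    return 1 + max(nesting_depth(item) for item in obj)

def iter_rows(search_result):
    """Yield flat lists of scalars regardless of one extra wrapping level."""
    # find_data returns [[...], [...]] for "Count" but [[[...], ...]] for "Channel1";
    # unwrap until each yielded item is a flat list.
    if nesting_depth(search_result) <= 2:
        yield from search_result
    else:
        for group in search_result:
            yield from group

count_result = [[0, 1, 2], [3, 4, 5]]
channel_result = [[[21, 22, 23], [1, 2, 3], [4, 5, 6]]]
```

With this, `for item in iter_rows(data): print(item)` prints one flat list per iteration for both target keys.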
75,697,107
4,907,339
Efficiently filtering out the last row of a duplicate column
<p>I need to filter out the last row where <code>col2 = 3</code> but preserve the rest of the dataframe.</p> <p>I can do that like so, while maintaining the order relative to the index:</p> <pre class="lang-py prettyprint-override"><code>import pandas d = { 'col1': [0, 1, 2, 3, 3, 3, 3, 4, 5, 6], 'col2': [0, 11, 21, 31, 32, 33, 34, 41, 51, 61] } df = pandas.DataFrame(d) df2 = df[df['col1'] != 3] df3 = df[df['col1'] == 3].iloc[:-1] pandas.concat([df2,df3]).sort_index() </code></pre> <pre><code> col1 col2 0 0 0 1 1 11 2 2 21 3 3 31 4 3 32 5 3 33 7 4 41 8 5 51 9 6 61 </code></pre> <p>But for a larger dataframe, this operation gets progressively more expensive to perform.</p> <p>Is there a more efficient way?</p> <p><strong>UPDATE</strong></p> <p>Based on the answers provided this far, here are the results:</p> <pre><code>import pandas import random dupes = 1000 rows = 10000000 d = {'col1': [random.choice(range(dupes)) for i in range(rows)], 'col2': [range for range in range(rows)]} df = pandas.DataFrame(d) df2 = df[df['col1'] != 3] df3 = df[df['col1'] == 3].iloc[:-1] %timeit pandas.concat([df2,df3]).sort_index() df = pandas.DataFrame(d) %timeit df.drop(df['col1'].where(df['col1'].eq(3)).last_valid_index()) df = pandas.DataFrame(d) idx = df.loc[::-1, 'col1'].eq(3).idxmax() %timeit df.drop(idx) df = pandas.DataFrame(d) %timeit df.loc[ df[&quot;col1&quot;].ne(3) | df[&quot;col1&quot;].duplicated(keep=&quot;last&quot;) ] df = pandas.DataFrame(d) %timeit df.drop(df.index[df['col1'].eq(3)][-1]) df = pandas.DataFrame(d) %timeit df.drop((df['col1'].iloc[::-1] == 3).idxmax()) df = pandas.DataFrame(d) %timeit df.loc[df['col1'].iloc[::-1].ne(3).rank(method = 'first').ne(1)] df = pandas.DataFrame(d) %timeit df.drop(index=df[df['col1'].eq(3)].index[-1:], axis=0) </code></pre> <pre><code>703 ms Β± 60.7 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) 497 ms Β± 10.9 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) 413 ms Β± 11.5 ms per loop (mean Β± std. dev. 
of 7 runs, 1 loop each) 253 ms Β± 6.7 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) 408 ms Β± 8.3 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) 404 ms Β± 8.02 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) 792 ms Β± 103 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) 491 ms Β± 142 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) </code></pre>
<python><pandas>
2023-03-10 14:05:27
7
492
Jason
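The timing table in the question above suggests the reversed-scan `idxmax` variant (`df.drop((df['col1'].iloc[::-1] == 3).idxmax())`) is fastest. Stripped of pandas, the core idea is just "find the last index equal to 3 and drop that one row"; a pure-Python illustration of that logic (not pandas itself):

```python
def drop_last_match(values, target):
    """Return values with only the last occurrence of target removed."""
    # scan from the end; stops at the first hit, so usually far less than O(n)
    for i in range(len(values) - 1, -1, -1):
        if values[i] == target:
            return values[:i] + values[i + 1:]
    return list(values)

col1 = [0, 1, 2, 3, 3, 3, 3, 4, 5, 6]
```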
75,697,081
4,543,743
How to configure rotating proxy with scrapy playwright?
<p>I am trying to add a rotating proxy to Scrapy Playwright. <a href="https://github.com/rejoiceinhope/scrapy-proxy-pool" rel="nofollow noreferrer">scrapy-proxy-pool</a> does not work well with Scrapy Playwright. So I hacked <a href="https://github.com/rejoiceinhope/scrapy-proxy-pool" rel="nofollow noreferrer">https://github.com/rejoiceinhope/scrapy-proxy-pool</a> and found out that it uses <a href="https://pypi.org/project/proxyscrape/" rel="nofollow noreferrer">https://pypi.org/project/proxyscrape/</a> to build its rotating proxy mechanism.</p> <p>I have been trying to debug this for hours, but I think there is some technical mistake I am making, because of which it shows a connection error with the proxy server and then a timeout error.</p> <p><strong>My Code:</strong></p> <pre><code>import scrapy from scrapy_playwright.page import PageMethod from proxyscrape import create_collector collector = create_collector('proxy', 'http') class ProxySpider(scrapy.Spider): name = 'proxy' PLAYWRIGHT_LAUNCH_OPTIONS = { &quot;headless&quot;: False, &quot;timeout&quot;: 100 * 1000, # 20 seconds } def start_requests(self): proxy = collector.get_proxy() print(&quot;Proxy --&gt; http://&quot;+proxy.host+&quot;:&quot;+proxy.port) yield scrapy.Request(&quot;http://httpbin.org/get&quot;, meta={ &quot;playwright&quot;: True, &quot;playwright_context_kwargs&quot;: { &quot;java_script_enabled&quot;: True, &quot;ignore_https_errors&quot;: True, &quot;proxy&quot;: { &quot;server&quot;: &quot;http://&quot;+proxy.host+&quot;:&quot;+proxy.port, }, }, }) def parse(self,response): print(response.text) </code></pre> <p><strong>Error:</strong></p> <pre><code> File &quot;/home/sappy/.virtualenvs/121-server/lib/python3.10/site-packages/scrapy_playwright/handler.py&quot;, line 297, in _download_request result = await self._download_request_with_page(request, page, spider) File &quot;/home/sappy/.virtualenvs/121-server/lib/python3.10/site-packages/scrapy_playwright/handler.py&quot;, line 331, in 
_download_request_with_page response = await page.goto(url=request.url, **page_goto_kwargs) File &quot;/home/sappy/.virtualenvs/121-server/lib/python3.10/site-packages/playwright/async_api/_generated.py&quot;, line 9162, in goto await self._impl_obj.goto( File &quot;/home/sappy/.virtualenvs/121-server/lib/python3.10/site-packages/playwright/_impl/_page.py&quot;, line 494, in goto return await self._main_frame.goto(**locals_to_params(locals())) File &quot;/home/sappy/.virtualenvs/121-server/lib/python3.10/site-packages/playwright/_impl/_frame.py&quot;, line 147, in goto await self._channel.send(&quot;goto&quot;, locals_to_params(locals())) File &quot;/home/sappy/.virtualenvs/121-server/lib/python3.10/site-packages/playwright/_impl/_connection.py&quot;, line 44, in send return await self._connection.wrap_api_call( File &quot;/home/sappy/.virtualenvs/121-server/lib/python3.10/site-packages/playwright/_impl/_connection.py&quot;, line 419, in wrap_api_call return await cb() File &quot;/home/sappy/.virtualenvs/121-server/lib/python3.10/site-packages/playwright/_impl/_connection.py&quot;, line 79, in inner_send result = next(iter(done)).result() playwright._impl._api_types.Error: net::ERR_TIMED_OUT at http://httpbin.org/get =========================== logs =========================== navigating to &quot;http://httpbin.org/get&quot;, waiting until &quot;load&quot; ============================================================ </code></pre>
<python><scrapy><playwright><playwright-python><scrapy-playwright>
2023-03-10 14:02:46
1
421
saprative
75,696,793
1,432,980
apply a function on the columns from the list
<p>I have a list of columns whose values in the dataframe I want to convert to Decimal:</p> <pre><code>column_list = ['Parameter 1', 'Parameter 2' ... 'Parameter N'] </code></pre> <p>The data in the dataframe looks like this:</p> <pre><code>Name | Parameter 1 | Parameter 2 | Surname | ... | Parameter N | ... </code></pre> <p>How can I achieve that? If I had a limited set of column names I could probably do something like <code>df['Parameter 1'] = df['Parameter 1'].apply</code> or something, or maybe even with <code>lambda row: ...</code> etc.</p> <p>But I am not sure how to achieve that if I have a list of columns that I want to update. Should I just iterate over the items like this, for example?</p> <pre><code>for column_name in column_list: df[column_name] = Decimal(df[column_name]) </code></pre>
<python><pandas><dataframe>
2023-03-10 13:34:21
1
13,485
lapots
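For the question above, the loop is on the right track, but `Decimal(df[column_name])` passes a whole Series to `Decimal`, which fails; each element has to be converted individually (in pandas that would be `df[col] = df[col].map(Decimal)`). The element-wise idea, shown with a plain dict-of-lists table so it runs standalone:

```python
from decimal import Decimal

def convert_columns(table, column_list):
    """Convert every value in the named columns of a dict-of-lists table to Decimal."""
    for name in column_list:
        # go through str() so floats don't drag binary representation noise into the Decimal
        table[name] = [Decimal(str(v)) for v in table[name]]
    return table

data = {"Name": ["a", "b"], "Parameter 1": [1.5, 2.25], "Parameter 2": [3, 4]}
converted = convert_columns(data, ["Parameter 1", "Parameter 2"])
```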
75,696,743
755,371
Python multiprocessing Queue broken after worker kill
<p>I made a program to simulate heavy mail management using the Python multiprocessing Pool and Queue:</p> <pre><code>from multiprocessing import Pool, Queue import time import uuid import os NB_WORKERS = 3 NB_MAILS_PER_5_SECONDS = 2 MAIL_MANAGEMENT_DURATION_SECONDS = 1 def list_new_mails_id(): for i in range(NB_MAILS_PER_5_SECONDS): # fake mailbox msg list yield str(uuid.uuid1()) def mail_worker(msg_ids): pid = os.getpid() print(f&quot;Starting worker PID = {pid} and queue={msg_ids} queue size={msg_ids.qsize()} ...&quot;) while True: print(f&quot;[{pid}] Waiting a mail to manage...&quot;) msg_id = msg_ids.get() print(f&quot;[{pid}] managing mail msg_id = {msg_id} ...&quot;) # here should read mail msg_id and remove it from mailbox when finish print(f&quot;[{pid}] --&gt; fake duration of {MAIL_MANAGEMENT_DURATION_SECONDS}s&quot;) time.sleep(MAIL_MANAGEMENT_DURATION_SECONDS) if __name__ == &quot;__main__&quot;: msg_id_queue = Queue() with Pool(NB_WORKERS, mail_worker, (msg_id_queue,)) as p: while True: for msg_id in list_new_mails_id(): msg_id_queue.put(msg_id) print(&quot;\nWaiting for new mails to come...\n&quot;) time.sleep(5) </code></pre> <p>The program puts some message ids in a queue, which are read by the workers. Workers are started and initialized with the same Queue object. It works well.</p> <p>To find out how fault-tolerant the Python <code>Pool()</code> is, and whether task processing can continue despite a worker dying (because of temporary memory issues, for example), I killed a worker:</p> <p>The <code>Pool()</code> automatically re-creates the worker(s) and initializes them with the same Queue object as before, but the workers are no longer able to get items from the queue: they are stuck on <code>msg_ids.get()</code>. Why?</p> <p>I am using Ubuntu 22.04 LTS and Python 3.10.4.</p>
<python><multiprocessing>
2023-03-10 13:28:39
3
5,139
Eric
75,696,676
743,188
pydantic: how to type hint to mypy that a function accepts any model subclass
<p>Best asked through code:</p> <pre><code>from pydantic import BaseModel class Role(BaseModel): class Config: extra = Extra.forbid someprop: sometype = somedefault class Administrator(Role): someprop = foo class Teacher(Role): someprop = bar ... def some_func_that_accepts_any_role(role: ?????) ... a = Administrator() some_func_that_accepts_any_role(a) </code></pre> <p>For <code>role: ???</code> I have tried both <code>Role</code> and <code>BaseModel</code>.</p> <pre><code>def some_func_that_accepts_any_role(role: Role) def some_func_that_accepts_any_role(role: BaseModel) </code></pre> <p>However, both lead mypy to complain with either</p> <pre><code>error: Incompatible types in assignment (expression has type &quot;Type[Administrator]&quot;, variable has type &quot;Role&quot;) </code></pre> <p>or</p> <pre><code>error: Incompatible types in assignment (expression has type &quot;Type[Administrator]&quot;, variable has type &quot;BaseModel&quot;) </code></pre> <p>What is the correct type hint to say &quot;this expects a Role or any derived model&quot;?</p> <p>I'm really hoping I don't have to do:</p> <pre><code>def some_func_that_accepts_any_role(role: Union[Administrator, Teacher,..... (list all possible subclasses)]) </code></pre> <p>EDIT: The answer was that I needed <code>Type</code>, although someone deleted that answer.</p> <p>MRE that shows the error:</p> <pre><code>from typing import Type from pydantic import BaseSettings class Settings(BaseSettings): cache_clear: bool = False class Staging(Settings): cache_clear = True # WRONG (will run, but will not lint) def print_settings(stn: Settings): print(stn().json()) print_settings(Staging) # RIGHT def print_settings_2(stn: Type[Settings]): print(stn().json()) print_settings_2(Staging) </code></pre>
<python><mypy><pydantic>
2023-03-10 13:21:57
0
13,802
Tommy
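The EDIT in the question above lands on `Type[...]`, and the distinction it captures holds for plain classes, independent of pydantic: `role: Role` means an *instance* of `Role` (or a subclass), while `role_cls: Type[Role]` (or the builtin `type[Role]` on Python 3.9+) means *the class object itself* or any subclass. A small pydantic-free sketch of the two signatures:

```python
from typing import Type

class Role:
    name = "role"

class Administrator(Role):
    name = "admin"

def accepts_instance(role: Role) -> str:
    # the caller passes an already-constructed object
    return role.name

def accepts_class(role_cls: Type[Role]) -> str:
    # the caller passes the class; we instantiate it here
    return role_cls().name
```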
75,696,639
14,033,436
What is the correct way to define a vectorized (jax.vmap) function in a class?
<p>I want to add a function, which is vectorized by <code>jax.vmap</code>, as a class method. However, I am not sure where to define this function within the class. My main goal is to avoid the function being redefined each time I call the class method.</p> <p>Here is a minimal example for a class that counts how often a value occurs in a <code>jnp.array</code>, with a non-vectorized and vectorized version:</p> <pre class="lang-py prettyprint-override"><code>import jax.numpy as jnp import jax class ValueCounter(): def __init__(self): # for completeness, not used self.attribute_1 = None @staticmethod def _count_value_in_array( # non-vectorized function array: jnp.array, value: float ) -&gt; jnp.array: &quot;&quot;&quot;Count how often a value occurs in an array&quot;&quot;&quot; return jnp.count_nonzero(array == value) # here comes the vectorized function def count_values_in_array(self, array: jnp.array, value_array: jnp.array) -&gt; jnp.array: &quot;&quot;&quot;Count how often each value in an array of values occurs in an array&quot;&quot;&quot; count_value_in_array_vec = jax.vmap( self._count_value_in_array, in_axes=(None, 0) ) # vectorized function is defined again each time the function is called return count_value_in_array_vec(array, value_array) </code></pre> <p>Example output &amp; input:</p> <pre class="lang-py prettyprint-override"><code>value_counter = ValueCounter() value_counter.count_values_in_array(jnp.array([0, 1, 2, 2, 1, 1]), jnp.array([0, 1, 2])) </code></pre> <p>Result (correct, as expected):</p> <pre><code>Array([1, 3, 2], dtype=int32) </code></pre> <p>The vectorized function <code>count_value_in_array_vec</code> is redefined each time <code>count_values_in_array</code> is called, which seems unnecessary to me. However, I am a bit stuck on how to avoid this. Does someone know how the vectorized function could be integrated into the class in a more elegant way?</p>
<python><vector><parallel-processing><vectorization><jax>
2023-03-10 13:18:37
1
790
yuki
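One common pattern for the jax.vmap question above is to build the transformed function once in `__init__` and store it as an attribute, so repeated calls reuse it. Sketched here with a hand-rolled `_make_vectorized` stand-in instead of `jax.vmap`, so the example has no jax dependency; in real code you would replace that line with `self._count_vec = jax.vmap(self._count_value_in_array, in_axes=(None, 0))`:

```python
class ValueCounter:
    def __init__(self):
        # build the "vectorized" callable once, not on every method call
        self._count_vec = self._make_vectorized(self._count_value_in_array)

    @staticmethod
    def _count_value_in_array(array, value):
        """Count how often a value occurs in an array (non-vectorized)."""
        return sum(1 for x in array if x == value)

    @staticmethod
    def _make_vectorized(fn):
        # stand-in for jax.vmap(fn, in_axes=(None, 0)): map over the second argument
        def vectorized(array, values):
            return [fn(array, v) for v in values]
        return vectorized

    def count_values_in_array(self, array, values):
        return self._count_vec(array, values)

vc = ValueCounter()
```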
75,696,580
12,965,658
Boolean null values in pandas
<p>I have a column with datatype string. I want to convert it to boolean using pandas.</p> <p>The dataframe column has values such as:</p> <pre><code>'True' 'False' 'None' </code></pre> <p>I am using pandas to convert it to bool.</p> <pre><code>df[column] = df[column].astype(bool) print(df.dtypes) print(df) </code></pre> <p>I want to insert data frame to sql so I want the final result to be</p> <pre><code>True False null </code></pre> <p>How can I convert these values?</p>
<python><python-3.x><pandas><dataframe><snowflake-cloud-data-platform>
2023-03-10 13:13:21
2
909
Avenger
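A common approach for the question above is an explicit mapping instead of `astype(bool)`, because `bool()` on any non-empty string, including `'False'` and `'None'`, is True. In pandas that would be `df[column] = df[column].map(mapping).astype('boolean')` (the nullable boolean dtype); the mapping itself, shown with plain Python so it runs standalone — the spellings in `BOOL_MAP` are assumed from the question:

```python
# assumed spellings; adjust if the source column uses other strings
BOOL_MAP = {"True": True, "False": False, "None": None}

def to_nullable_bool(values):
    """Map 'True'/'False'/'None' strings to True/False/None (None becomes SQL NULL)."""
    return [BOOL_MAP.get(v) for v in values]

column = ["True", "False", "None"]
```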
75,696,578
18,050,861
How to open a ".pyc" file with Pycharm?
<p>I was writing some code in PyCharm and I don't know how it ended up saved like this: <code>Eq_FD.cpython-39.pyc</code>, whereas before it was just <code>Eq_FD</code>. When I tried to open it again in PyCharm, it was full of characters I can't understand; it looks to me like it's binary or something similar. Is there a way to recover the file?</p>
<python><file><pycharm>
2023-03-10 13:13:15
1
375
User8563
75,696,569
5,091,467
How do I extract meaningful simple rules from this classification problem?
<p>I have a problem of this type: A customer creates an order by hand, which might be erroneous. Submitting a wrong order is costly, which is why we try to reduce the error rate.</p> <p>I need to detect what factors cause an error, so that a new rule can be created, such as Product &quot;A&quot; and type &quot;B&quot; must not go together. All explanatory variables are categorical.</p> <p>I have 2 questions:</p> <ol> <li>What approach do I take to extract simple but useful rules to give to a human expert for further review? A useful rule should cover as many errors as possible while covering as few non-errors as possible.</li> <li>How do I make sure variable interactions are taken into account?</li> </ol> <p>Below is a sample dataset and a simple approach I took -- finding variables with high proportion of errors to be proposed as rules. I create a single interaction term by hand (based on prior knowledge, but I might be missing others).</p> <p>I also tried using classification models (LASSO, Decision tree, RF), but I had an issue with 1. high dimensionality (especially when including many interactions), 2. 
extracting simple rules, since models use many coefficients even with regularization.</p> <pre><code>import pandas as pd # Create sample dataset for task df = pd.DataFrame(data={'error':[0,1,0,0,0,0,0,1,1,1], 'product':[1,2,1,2,2,3,4,2,2,2], 'type':[1,1,2,3,3,1,2,1,4,4], 'discount_level':[5,3,3,4,1,2,2,1,4,5], 'extra1':[1,1,1,2,2,2,3,3,3,3,], 'extra2':[1,2,3,1,2,3,1,2,3,1], 'extra3':[6,6,9,9,8,8,7,7,6,6] }) # Variable interaction based on prior knowledge df['product_type'] = df['product'].astype(str) + '_' + df['type'].astype(str) X = df.drop('error', axis=1) # Find groups with high portion of errors groups_expl = pd.DataFrame() for col in X.columns: groups = df.groupby(col).agg(count_all=('error', 'count'), count_error=('error', 'sum')) groups['portion_error'] = groups['count_error'] / groups['count_all'] groups['column'] = col # Save groups with high portion of errors groups_expl = pd.concat([groups_expl, groups.loc[groups['portion_error']&gt;0.8, :]], axis=0) groups_expl['col_val'] = groups_expl.index print(groups_expl) </code></pre> <p>Thank you for help!</p>
<python><scikit-learn><classification><rules><interaction>
2023-03-10 13:11:34
1
714
Dudelstein
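One way to address both questions above without hand-crafting interactions is to enumerate value combinations of up to two columns with `itertools.combinations` and score each combination as a candidate rule (error rate among rows it covers, plus how many rows it covers). A minimal sketch on plain dicts — the 0.8 threshold, `min_support`, and sample rows are illustrative assumptions, not from the question:

```python
from collections import Counter
from itertools import combinations

def candidate_rules(rows, error_key="error", max_order=2, min_support=2):
    """Score (columns, values) combos by error rate; return high-error candidates."""
    feature_cols = [c for c in rows[0] if c != error_key]
    totals, errors = Counter(), Counter()
    for row in rows:
        for r in range(1, max_order + 1):
            for cols in combinations(feature_cols, r):
                key = (cols, tuple(row[c] for c in cols))
                totals[key] += 1                # rows covered by this rule
                errors[key] += row[error_key]   # errors covered by this rule
    return sorted(
        (key, errors[key] / totals[key], totals[key])
        for key in totals
        if totals[key] >= min_support and errors[key] / totals[key] >= 0.8
    )

rows = [
    {"product": 1, "type": 1, "error": 0},
    {"product": 2, "type": 1, "error": 1},
    {"product": 2, "type": 4, "error": 1},
    {"product": 2, "type": 4, "error": 1},
    {"product": 2, "type": 3, "error": 0},
    {"product": 3, "type": 1, "error": 0},
]
rules = candidate_rules(rows)
```

Each entry is `((columns, values), error_rate, support)`, which maps directly onto a human-readable rule such as "product 2 with type 4 must not go together".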
75,696,356
8,849,755
Set only lower axis range in plotly
<p>I want to set the lower value for the range of an axis and let the higher value to be automatic. Is this possible? I tried this:</p> <pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go import numpy x = numpy.linspace(0,1,99) fig = go.Figure() fig.add_trace( go.Scatter( x = x, y = 1/x, mode = 'markers', ) ) fig.update_yaxes(type='log', range=[numpy.log10(3),None]) fig.show() </code></pre> <p>but it does not work.</p>
<python><plotly><range>
2023-03-10 12:49:39
1
3,245
user171780
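Since `range=[low, None]` is not honored in the question above, a common workaround is to compute the upper bound from the data yourself and pass a fully specified range. The bound computation for a log axis (plotly expects the range as log10 exponents when `type='log'`), shown without plotly so it runs standalone — pass the result to `fig.update_yaxes(type='log', range=bounds)`; the 5% padding is an assumption:

```python
import math

def log_axis_range(values, lower, pad_fraction=0.05):
    """Fixed lower bound; upper bound derived from the data. Returns log10 exponents."""
    finite = [v for v in values if v > 0 and math.isfinite(v)]
    upper = math.log10(max(finite))
    span = max(upper - math.log10(lower), 1e-9)
    return [math.log10(lower), upper + pad_fraction * span]  # small headroom at the top

bounds = log_axis_range([10, 100, 1000], lower=3)
```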
75,696,248
17,630,139
Flake8: ValueError: 'choice' is not callable
<p>After upgrading to flake8 v6.0.0, I tried running the command <code>flake8</code> at the project level. However, I receive this error in the console:</p> <pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last): File &quot;env/bin/flake8&quot;, line 8, in &lt;module&gt; sys.exit(main()) File &quot;/Users/gree030/Workspace/myproject/env/lib/python3.8/site-packages/flake8/main/cli.py&quot;, line 23, in main app.run(argv) File &quot;/Users/gree030/Workspace/myproject/env/lib/python3.8/site-packages/flake8/main/application.py&quot;, line 198, in run self._run(argv) File &quot;/Users/gree030/Workspace/myproject/env/lib/python3.8/site-packages/flake8/main/application.py&quot;, line 186, in _run self.initialize(argv) File &quot;/Users/gree030/Workspace/myproject/env/lib/python3.8/site-packages/flake8/main/application.py&quot;, line 165, in initialize self.plugins, self.options = parse_args(argv) File &quot;/Users/gree030/Workspace/myproject/env/lib/python3.8/site-packages/flake8/options/parse_args.py&quot;, line 51, in parse_args option_manager.register_plugins(plugins) File &quot;/Users/gree030/Workspace/myproject/env/lib/python3.8/site-packages/flake8/options/manager.py&quot;, line 259, in register_plugins add_options(self) File &quot;/Users/gree030/Workspace/myproject/env/lib/python3.8/site-packages/flake8_quotes/__init__.py&quot;, line 100, in add_options cls._register_opt(parser, '--quotes', action='store', File &quot;/Users/gree030/Workspace/myproject/env/lib/python3.8/site-packages/flake8_quotes/__init__.py&quot;, line 90, in _register_opt parser.add_option(*args, **kwargs) File &quot;/Users/gree030/Workspace/myproject/env/lib/python3.8/site-packages/flake8/options/manager.py&quot;, line 281, in add_option self._current_group.add_argument(*option_args, **option_kwargs) File &quot;/Users/gree030/.pyenv/versions/3.8.15/lib/python3.8/argparse.py&quot;, line 1373, in add_argument raise ValueError('%r is not callable' % (type_func,)) 
ValueError: 'choice' is not callable </code></pre> <h2>What I have tried</h2> <ul> <li>Downgrading to 5.0.4, removes the error. However, I would like to use 6.0.0</li> <li><code>$ flake myproject</code></li> <li><code>$ env/bin/flake8 myproject</code></li> <li><code>$ python3.8 -m flake8</code></li> <li>Reading the <a href="https://flake8.pycqa.org/en/latest/user/invocation.html" rel="nofollow noreferrer">documentation</a></li> </ul> <h2>setup.cfg File</h2> <pre class="lang-ini prettyprint-override"><code># Black-compatible configurations [isort] multi_line_output = 3 include_trailing_comma = True force_grid_wrap = 0 use_parentheses = True ensure_newline_before_comments = True line_length = 88 [flake8] ignore = D413, E203, E266, E501, E711, W503, F401, F403 max-line-length = 100 max-complexity = 18 select = B,C,E,F,W,T4,B9 application-import-names = myproject,tests import-order-style = google per-file-ignores = __init__.py:F401 exclude = tests/integration/.env.shadow, env require-plugins = flake8-import-single [pylint] max-line-length = 88 [pylint.messages_control] disable = C0330, C0326 </code></pre> <h2>Directory Structure</h2> <p>For reference, <code>env</code> is my python virtual environment. I am also using <strong>Python 3.8</strong></p> <pre class="lang-py prettyprint-override"><code>β”œβ”€β”€ Dockerfile β”œβ”€β”€ Makefile β”œβ”€β”€ env β”‚Β Β  β”œβ”€β”€ bin β”‚Β Β  β”œβ”€β”€ include β”‚Β Β  β”œβ”€β”€ lib β”‚ ... β”œβ”€β”€ myproject β”‚Β Β  β”œβ”€β”€ __init__.py β”‚ ... β”œβ”€β”€ requirements-dev.txt β”œβ”€β”€ requirements.txt β”œβ”€β”€ setup.cfg </code></pre>
<python><python-3.x><flake8>
2023-03-10 12:38:04
1
331
Khalil
75,696,234
3,360,848
Efficient Filtering of Lists in a Dictionary of Lists
<p>I'm working with some reasonably large datasets (500,000 datapoints with 30 variables each) and would like to find the most efficient methods for filtering them.</p> <p>For compatibility with existing code the data is structured as a dictionary of lists but can't be converted (e.g. to pandas DataFrame) and has to be filtered in situ.</p> <p>Working example:</p> <pre><code>data = {'Param0':['x1','x2','x3','x4','x5','x6'], 'Param1':['A','A','A','B','B','C'], 'Param2': [100,200,150,80,90,50], 'Param3': [20,60,40,30,30,5]} # Param0 keys to keep keep = ['x2', 'x4'] filtered = {k: [x for i, x in enumerate(v) if data['Param0'][i] in keep] for k, v in data.items()} </code></pre> <p>The result <code>filtered</code> gives the desired output, but this is very slow at scale.</p> <p>Are there any quicker ways of doing this?</p>
<python><list><dictionary><list-comprehension>
2023-03-10 12:36:07
2
417
awenborn
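For the dictionary-filtering question above, the main costs in the comprehension are the membership test against a list and the repeated `data['Param0'][i]` lookups inside every column's loop. Computing the surviving indices once (with `keep` converted to a set) and then indexing every column by those indices avoids both; a sketch:

```python
def filter_table(data, key_column, keep):
    """Filter a dict-of-lists on one column, computing the kept indices only once."""
    keep = set(keep)  # O(1) membership instead of O(len(keep)) per row
    idx = [i for i, v in enumerate(data[key_column]) if v in keep]
    return {col: [values[i] for i in idx] for col, values in data.items()}

data = {
    "Param0": ["x1", "x2", "x3", "x4", "x5", "x6"],
    "Param1": ["A", "A", "A", "B", "B", "C"],
    "Param2": [100, 200, 150, 80, 90, 50],
}
filtered = filter_table(data, "Param0", ["x2", "x4"])
```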
75,696,084
14,125,436
How to make heat equation dimensionless for neural network in pytorch
<p>I am trying to use PyTorch for making a Physics Informed Neural Network for the heat equation in 1D:</p> <p><a href="https://i.sstatic.net/1nAMS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1nAMS.png" alt="enter image description here" /></a></p> <p>I tried the following code to make a loss function for PDE residual:</p> <pre><code>def lossPDE(self,x_PDE): g = x_PDE.clone() g.requires_grad = True # Enable differentiation f = self.forward(g) f_x_t = torch.autograd.grad(f,g,torch.ones([g.shape[0],1]).to(device),retain_graph=True, create_graph=True)[0] # first derivative of time f_xx_tt = torch.autograd.grad(f_x_t,g,torch.ones(g.shape).to(device), create_graph=True)[0]#second derivative of x f_t = f_x_t[:,[1]] f_xx = f_xx_tt[:,[0]] f = f_t - alpha * f_xx return self.loss_function(f,f_hat) # f_hat is a tensor of zeros and it minimizes the f value </code></pre> <p>In the simulation I am making a one day simulation (86400 seconds) for a bar with the length of 1 meters. Now, I want to make my PDE dimensionless. How can I do it in pytorch? Which part of my code should be changed? The unit of alpha is already in <code>m2/s</code>. I can share my whole code but it would be hundreds lines of code. I very much appreciate any help in advance.</p>
<python><pytorch><neural-network>
2023-03-10 12:20:49
1
1,081
Link_tester
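For the nondimensionalization asked about above, the standard change of variables for βˆ‚u/βˆ‚t = Ξ± βˆ‚Β²u/βˆ‚xΒ² on a bar of length L is:

```latex
x^{*} = \frac{x}{L}, \qquad t^{*} = \frac{\alpha t}{L^{2}}
\quad\Longrightarrow\quad
\frac{\partial u}{\partial t} = \frac{\alpha}{L^{2}}\,\frac{\partial u}{\partial t^{*}},
\qquad
\frac{\partial^{2} u}{\partial x^{2}} = \frac{1}{L^{2}}\,\frac{\partial^{2} u}{\partial {x^{*}}^{2}}
\quad\Longrightarrow\quad
\frac{\partial u}{\partial t^{*}} = \frac{\partial^{2} u}{\partial {x^{*}}^{2}}
```

In the PyTorch code this means nothing inside `lossPDE` changes structurally: scale the inputs before building `x_PDE` (with L = 1 m, `x* = x` and `t* = alpha * t`, so t* runs from 0 to 86400Β·α for the one-day simulation) and compute the residual as `f = f_t - f_xx` with no `alpha` factor. Optionally also rescale the temperature, e.g. u* = (u βˆ’ u_min)/(u_max βˆ’ u_min), so the network outputs stay order one. This is a standard change of variables, not anything PyTorch-specific.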
75,696,056
13,798,993
Pythons inspect.getsource throws error if used in a decorator
<p>I have the following function</p> <pre><code>def foo(): for _ in range(1): print(&quot;hello&quot;) </code></pre> <p>Now I want to add another print statement to print &quot;Loop iterated&quot; after every loop iteration. For this I define a new function that transforms foo into an ast tree, inserts the corresponding print node and then compiles the ast tree into an executable function:</p> <pre><code>def modify(func): def wrapper(): source_code = inspect.getsource(func) ast_tree = ast.parse(source_code) # insert new node into ast tree for node in ast.walk(ast_tree): if isinstance(node, ast.For): node.body += ast.parse(&quot;print('Loop iterated')&quot;).body # get the compiled function new_func = compile(ast_tree, '&lt;ast&gt;', 'exec') namespace = {} exec(new_func, globals(), namespace) new_func = namespace[func.__name__] return new_func() return wrapper </code></pre> <p>This works fine as expected when using:</p> <pre><code>foo = modify(foo) foo() </code></pre> <p>However, if I decide to use <code>modify</code> as a decorator:</p> <pre><code>@modify def foo(): for _ in range(1): print(&quot;hello&quot;) foo() </code></pre> <p>I get the following error:</p> <pre><code>Traceback (most recent call last): File &quot;c:\Users\noinn\Documents\decorator_test\test.py&quot;, line 34, in &lt;module&gt; foo() File &quot;c:\Users\noinn\Documents\decorator_test\test.py&quot;, line 25, in wrapper return new_func() File &quot;c:\Users\noinn\Documents\decorator_test\test.py&quot;, line 11, in wrapper source_code = inspect.getsource(func) File &quot;C:\Users\noinn\AppData\Local\Programs\Python\Python39\lib\inspect.py&quot;, line 1024, in getsource lines, lnum = getsourcelines(object) File &quot;C:\Users\noinn\AppData\Local\Programs\Python\Python39\lib\inspect.py&quot;, line 1006, in getsourcelines lines, lnum = findsource(object) File &quot;C:\Users\noinn\AppData\Local\Programs\Python\Python39\lib\inspect.py&quot;, line 835, in findsource raise OSError('could not get source 
code') OSError: could not get source code </code></pre> <p>Does anyone know why that error appears? Note that this does not happen If I return the original function and the error only appears once new_func() is called.</p> <p>------------------------- Solution ----------------------</p> <p>Simply remove the decorator from the function in the decorator itself using:</p> <pre><code>ast_tree.body[0].decorator_list = [] </code></pre>
<python><abstract-syntax-tree>
2023-03-10 12:18:39
1
689
Quasi
75,696,017
3,046,211
Sorting of categorical variables using np.unique
<p>I'm trying to get the unique values of categorical variables in sorted fashion using the below code but without success.</p> <pre><code>import numpy as np unique_values, unique_value_counts = np.unique(['Small', 'Medium', 'Large', 'Medium', 'Small', 'Large', 'Small', 'Medium'], return_counts = True) print(unique_values) </code></pre> <p>which gives me an output like below</p> <pre><code>['Large', 'Medium', 'Small'] </code></pre> <p>However, I'm expecting output in ascending format like</p> <pre><code>['Small', 'Medium', 'Large'] </code></pre> <p>Is there a way wherein I can get the categorical values in a sorted format using <code>np.unique()</code>?</p>
<python><numpy>
2023-03-10 12:14:44
4
716
user3046211
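For the question above, `np.unique` always sorts lexicographically; an ordinal scale needs an explicitly supplied order (in pandas that would be `pd.Categorical(values, categories=order, ordered=True)`). The plain-Python version of that idea, with the size order supplied by the caller:

```python
from collections import Counter

SIZE_ORDER = ["Small", "Medium", "Large"]  # domain knowledge, supplied by you

def unique_in_order(values, order=SIZE_ORDER):
    """Unique values and their counts, sorted by a caller-defined ordinal scale."""
    rank = {v: i for i, v in enumerate(order)}
    counts = Counter(values)
    uniques = sorted(counts, key=rank.__getitem__)
    return uniques, [counts[u] for u in uniques]

vals = ["Small", "Medium", "Large", "Medium", "Small", "Large", "Small", "Medium"]
uniques, counts = unique_in_order(vals)
```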
75,695,955
14,269,252
Assign different part of codes to st.button in streamlit app
<p>I am building a Streamlit app in which I defined 3 buttons. I have a large set of code that does different things: if a user chooses button1, it does something; if a user chooses button2, it should run another part of the code and do something else.</p> <p>There are three issues I am dealing with:</p> <p>1- When I select button2 and then move the slider, or change the option in st.sidebar.checkbox (which are defined inside this if), it doesn't do anything.</p> <p>2- How can I define a data frame under button2 which can also be used under button3?</p> <p>3- If I define the slider and st.sidebar.checkbox outside of the <strong>if button2</strong>, the problem in 1 is solved, but whatever is outside of <strong>if button2</strong> is also shown under <strong>if button1</strong>.</p> <p>Can you help me develop this code further?</p> <pre class="lang-py prettyprint-override"><code>import streamlit as st col1, col2, col3 = st.columns([.4,.5,1]) m = st.markdown(&quot;&quot;&quot; &lt;style&gt; div.stButton &gt; button:first-child { background-color: rgb(0,250,154); } &lt;/style&gt;&quot;&quot;&quot;, unsafe_allow_html=True) with col1: button1 = st.button('In') with col2: button2 = st.button('DA') with col3: button3 = st.button('Vi') if button1: write some intro text if button2: a data frame is generated here there is slider and st.sidebar.checkbox defined in this part. if button3: use the data frame generated in button2 and plot </code></pre>
<python><streamlit>
2023-03-10 12:10:07
1
450
user14269252
75,695,717
12,193,952
How to cleanup RAM between program iterations using Python
<h2>Problem</h2> <p>I have a Python application inside Docker container. The application receives &quot;jobs&quot; from some queue service (<code>RabbitMQ</code>), does some computing tasks and uploads results into database (<code>MySQL</code> and <code>Redis</code>).</p> <p>The issue I face is - <strong>the RAM is not properly &quot;cleaned up&quot; between iterations</strong> and thus memory consumption between iterations raises until OOM. Since I have implemented <code>MemoryError</code> (see tested solutions below for more info), the container stays alive and the memory keeps exhausted (not freed up by container restart).</p> <hr /> <h2>Question</h2> <ul> <li>How to debug what is &quot;staying&quot; in the memory so I can clean it up?</li> <li>How to cleanup the memory properly between runs?</li> </ul> <hr /> <h4>Iteration description</h4> <p>An example of increasing memory utilisation; memory limit set to <code>3000 MiB</code></p> <ul> <li>fresh container: <code>130 MiB</code></li> <li>1st iteration: <code>1000 MiB</code></li> <li>2nd iteration: <code>1500 MiB</code></li> <li>3rd iteration: <code>1750 MiB</code></li> <li>4th iteration: <code>OOM</code></li> </ul> <p><em>Note: Every run/iteration is a bit different and thus has a bit different memory requirements, but the pattern stays similar.</em></p> <hr /> <p>Below is a brief overiew of the iteraion which might be helpful while determining what might be wrong</p> <ol> <li>Receiving job parameters from <code>rabbitmq</code></li> <li>Loading data from local parquet into dataframe (using <code>read_parquet(filename, engine=&quot;fastparquet&quot;)</code>)</li> <li>Computing values using <code>Pandas</code> functions and other libraries <strong>(most of the laod is probably here)</strong></li> <li>Converting dataframe to dictionary and computing some other values inside a loop</li> <li>Adding some more metrics from computed values - e.g. highest/lowest values, trends etc.</li> <li>Storing metrics from 5. 
in database (<code>MySQL</code> and <code>Redis</code>)</li> </ol> <p>A selection of the tech I use</p> <ul> <li>Python <code>3.10</code></li> <li>Pandas <code>1.4.4</code></li> <li>numpy <code>1.24.2</code></li> <li>running in <code>AWS ECS Fargate</code> (but results on local are similar); <code>1 vCPU</code> and <code>8 GB</code> or memory</li> </ul> <hr /> <h2>Possible solutions / tried approaches</h2> <ul> <li>❌: tried; not worked</li> <li>πŸ’‘: and idea I am going to test</li> <li>😐: did not completely solved the problem, but helped towards the solution</li> <li>βœ…: working solution</li> </ul> <h4>❌ Restart container after every iteration</h4> <p>The most obvious one is to restart the docker container (e.g. using <code>exit()</code> and causing container to restart itself) after every iteration. This solution is not feasible, because the size of &quot;restart overhead&quot; is too big (one run takes 15 - 60 seconds and thus the restart will slow things soo much).</p> <h4>❌ Using <code>gc.collect()</code></h4> <p>I have tried to call <code>gc.collect()</code> at the very beginning of each iteration, but the memory usage did not change at all.</p> <h3>βœ… Test <code>multiprocessing</code></h3> <p>I read some recommendations to use <code>multiprocessing</code> module in order to improve memory efficiency, because it will &quot;drop&quot; all resources after subprocess finishes.</p> <p><strong>This solved the issue, see answers below.</strong></p> <p><a href="https://stackoverflow.com/a/1316799/12193952">https://stackoverflow.com/a/1316799/12193952</a></p> <h3>πŸ’‘ Use explicit <code>del</code> on unwanted objects</h3> <p>The idea is to explicitly delete objects that are not longer used (e.g. 
<code>dataframe</code> after it's converted to <code>dictionary</code>).</p> <pre class="lang-py prettyprint-override"><code>del my_array del my_object </code></pre> <p><a href="https://stackoverflow.com/a/1316793/12193952">https://stackoverflow.com/a/1316793/12193952</a></p> <h4>😐 Monitor memory using <code>psutil</code></h4> <pre class="lang-py prettyprint-override"><code>import psutil # Local imports from utils import logger def get_usage(): total = round(psutil.virtual_memory().total / 1000 / 1000, 4) used = round(psutil.virtual_memory().used / 1000 / 1000, 4) pct = round(used / total * 100, 1) logger.info(f&quot;Current memory usage is: {used} / {total} MB ({pct} %)&quot;) return True </code></pre> <h4>😐 Support <code>except MemoryError</code></h4> <p>Thanks to <a href="https://stackoverflow.com/questions/64891160/how-to-let-a-python-process-use-all-docker-container-memory-without-getting-kill">this question</a> I was able to set up <code>try</code>/<code>except</code> pattern that catches <code>OOM</code> errors and keep the container running (so logs are available etc.).</p> <hr /> <p><em>Even if I don't get any answer, I will continue testing and editing until I find a solution and hopefully help someone else.</em></p>
<python><pandas><docker><memory>
2023-03-10 11:43:11
1
873
FN_
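The direction the asker marked as working above — running each iteration in a throwaway worker process so its memory is returned to the OS when the process exits — could be sketched roughly as below. The job body is a placeholder for the real pandas work; `maxtasksperchild=1` is the knob that retires the worker after every single job.

```python
import multiprocessing as mp

def handle_job(params):
    # Placeholder for the real work (read_parquet, pandas computations, ...).
    # Everything allocated here lives in the worker process, so it is
    # released back to the OS when the worker exits.
    data = [x * params["factor"] for x in range(1000)]
    return sum(data)

def process_with_fresh_worker(params):
    # maxtasksperchild=1 retires the worker after a single job, which is
    # what gives each iteration a clean memory footprint.
    with mp.Pool(processes=1, maxtasksperchild=1) as pool:
        return pool.apply(handle_job, (params,))
```

Spawning a worker per job costs far less than restarting a whole container, which is why this avoids the 15-60 second restart overhead mentioned above.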
75,695,487
3,017,749
matplotlib multicolored line from pandas DataFrame with colors from value in dataframe
<p>I am trying to plot a DataFrame containing 3 columns, first 2 will be the coordinates of each point and the third would determine the color of the plot at that point:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>X</th> <th>Y</th> <th>C</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>2</td> <td>R</td> </tr> <tr> <td>2</td> <td>1</td> <td>R</td> </tr> <tr> <td>3</td> <td>4</td> <td>B</td> </tr> <tr> <td>4</td> <td>3</td> <td>R</td> </tr> <tr> <td>5</td> <td>1</td> <td>R</td> </tr> <tr> <td>6</td> <td>5</td> <td>G</td> </tr> <tr> <td>7</td> <td>6</td> <td>G</td> </tr> <tr> <td>8</td> <td>8</td> <td>B</td> </tr> </tbody> </table> </div> <p>I grouped the data into segments of the same color:</p> <pre><code>df.groupby((df['C']!=df['C'].shift()).cumsum()) </code></pre> <p>And then tried to call <code>.plot</code> for each group, but the displayed plot had discontinuities and was also extremely slow as the amount of data is quite large.</p> <p>I found <a href="https://matplotlib.org/stable/gallery/lines_bars_and_markers/multicolored_line.html" rel="nofollow noreferrer">this example</a> and I believe using <code>LineCollection</code> and <code>ListedColormap</code> could be the right solution, but being new to the ecosystem, I'm failing to understand how I could adapt it to work with the described DataFrame.</p>
<python><pandas><numpy><matplotlib>
2023-03-10 11:17:33
1
462
roign
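One way to adapt the linked multicolored-line example to the dataframe in the question above is to build one segment per consecutive pair of points and color each segment by the category of its starting point (an assumption — coloring by the end point would be equally defensible). A minimal sketch:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from matplotlib.collections import LineCollection

def plot_colored_line(df, color_map):
    # one (2, 2) segment per consecutive pair of (X, Y) points
    pts = df[["X", "Y"]].to_numpy(dtype=float).reshape(-1, 1, 2)
    segments = np.concatenate([pts[:-1], pts[1:]], axis=1)
    # color each segment by the category of its starting point
    colors = [color_map[c] for c in df["C"].iloc[:-1]]
    lc = LineCollection(segments, colors=colors)
    fig, ax = plt.subplots()
    ax.add_collection(lc)
    ax.autoscale()
    return lc

df = pd.DataFrame({"X": [1, 2, 3, 4, 5, 6, 7, 8],
                   "Y": [2, 1, 4, 3, 1, 5, 6, 8],
                   "C": list("RRBRRGGB")})
lc = plot_colored_line(df, {"R": "red", "G": "green", "B": "blue"})
```

A single `LineCollection` replaces one `.plot` call per group, which also avoids both the discontinuities and the slowness reported in the question.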
75,695,414
13,158,157
Read parquet folder from blob storage
<p>I am usually writing and reading parquet files saved from pandas (pyarrow engine) to blob storage in a way described <a href="https://stackoverflow.com/questions/63351478/how-to-read-parquet-files-from-azure-blobs-into-pandas-dataframe">in this question</a>. Generally my read function looks like this:</p> <pre><code>from io import BytesIO import pandas as pd from azure.storage.blob import BlobServiceClient def read_blob(blobname, conn_str, container): blob_service_client = BlobServiceClient.from_connection_string(conn_str) container_client = blob_service_client.get_container_client( container=container) downloaded_blob = container_client.download_blob(blobname) bytes_io = BytesIO(downloaded_blob.readall()) res = pd.read_parquet(bytes_io) return res </code></pre> <p>This works, but when I try to download a segmented .parquet that looks like a folder I get the following error: <code>ArrowInvalid: Could not open Parquet input source '': Parquet file size is 0 bytes</code>. I have verified that there is nothing wrong with that parquet by downloading it manually from blob storage and then passing the parquet-folder path into pandas. It reads ok.</p> <p>The trouble occurs when I try to download such a parquet folder through BytesIO. How can I read it?</p>
<python><pandas><azure-blob-storage><parquet>
2023-03-10 11:11:45
1
525
euh
75,695,324
19,770,795
Overloaded signature supposedly incompatible with supertype, despite not strengthening preconditions
<p>Imagine being given the following base class that you have no control over:</p> <pre class="lang-py prettyprint-override"><code>from typing import Optional class A: def foo(self, x: Optional[bool] = None) -&gt; None: pass </code></pre> <p>Now you want to write a subclass that merely distinguishes two different call signatures using <a href="https://docs.python.org/3/library/typing.html#typing.overload" rel="nofollow noreferrer"><code>typing.overload</code></a> without actually constraining the parameter types:</p> <pre class="lang-py prettyprint-override"><code># ... import A from typing import Literal, Optional, overload class B(A): @overload def foo(self, x: Literal[True]) -&gt; None: ... @overload def foo(self, x: Optional[Literal[False]] = None) -&gt; None: ... def foo(self, x: Optional[bool] = None) -&gt; None: pass </code></pre> <p>I would say the two overloaded <code>B.foo</code> signatures combined fully cover the possible calls of <code>A.foo</code>. I see no violation of the <a href="https://en.wikipedia.org/wiki/Liskov_substitution_principle" rel="nofollow noreferrer">Liskov Substitution Principle</a> here since the parameter types of <code>B.foo</code> have not been narrowed.</p> <p>Yet <code>mypy</code> complains about this:</p> <pre><code>error: Signature of &quot;foo&quot; incompatible with supertype &quot;A&quot; [override] note: Superclass: note: def foo(self, x: Optional[bool] = ...) -&gt; None note: Subclass: note: @overload note: def foo(self, x: Literal[True]) -&gt; None note: @overload note: def foo(self, x: Optional[Literal[False]] = ...) -&gt; None </code></pre> <p>Is there something I am missing here or have I stumbled onto a limitation/bug of <code>mypy</code>?</p> <p>I found this issue <a href="https://github.com/python/mypy/issues/14725" rel="nofollow noreferrer">#14725</a>, which appears to address the same problem, but since nobody reacted yet, I am not sure this is actually a bug. 
If more people here agree that it is, I can of course simply <code># type: ignore[override]</code> it for now and keep an eye on that issue, but maybe there is a good reason this is forbidden that I just don't understand.</p> <p>A few other issues I found that seem vaguely related are <a href="https://github.com/python/mypy/issues/14002" rel="nofollow noreferrer">#14002</a> and <a href="https://github.com/python/mypy/issues/3750" rel="nofollow noreferrer">#3750</a>.</p> <hr /> <p>Tested on Python <code>3.9</code>-<code>3.11</code> with <code>mypy==1.1.1</code>.</p> <p>For what it's worth, the PyCharm type checker also complains about this, but interestingly it only highlights the <em>first</em> overload signature (the one with <code>Literal[True]</code>), saying that it does not match that of the supertype.</p>
<python><overloading><mypy><python-typing><liskov-substitution-principle>
2023-03-10 11:03:05
0
19,997
Daniel Fainberg
75,695,171
4,815,580
Does loop.sock_recv(sock, nbytes) in python asyncio raise any exception?
<p>I am working on Python socket programming using asyncio, where I have the code below:</p> <pre><code>buff = b&quot;&quot; try: while len(buff) &lt; 100: buff += await loop.sock_recv(srv_sock, 4096) except Exception as exp: raise exp </code></pre> <p>But I am not sure if loop.sock_recv raises any exception.</p> <p>I was reading the description of this API, but it doesn't mention that it raises any exceptions: <a href="https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.sock_recv" rel="nofollow noreferrer">https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.sock_recv</a></p> <pre><code>Receive up to nbytes from sock. Asynchronous version of socket.recv(). Return the received data as a bytes object. sock must be a non-blocking socket. </code></pre> <p><a href="https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.sock_sendall" rel="nofollow noreferrer">https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.sock_sendall</a></p> <p>This method continues to send to the socket until either all data in data has been sent or an error occurs. None is returned on success. <strong>On error, an exception is raised.</strong> Additionally, there is no way to determine how much data, if any, was successfully processed by the receiving end of the connection.</p> <p>Whereas it does mention an exception for <code>loop.sock_sendall(sock, data)</code>.</p>
<python><python-asyncio>
2023-03-10 10:49:52
1
512
NPE
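Regarding the question above: `loop.sock_recv` is a thin async wrapper around `socket.recv`, so the same `OSError` subclasses (`ConnectionResetError`, etc.) can propagate out of the `await` — and, like `recv`, an orderly peer shutdown is reported by returning `b""` rather than by raising. A hedged sketch of the receive loop that handles both cases, using a local `socketpair` so it is runnable anywhere:

```python
import asyncio
import socket

async def recv_exactly(loop, sock, nbytes):
    # sock_recv wraps socket.recv, so the same OSError subclasses
    # (ConnectionResetError, ...) can propagate; b"" means the peer
    # performed an orderly shutdown, which recv reports without raising
    buff = b""
    while len(buff) < nbytes:
        try:
            chunk = await loop.sock_recv(sock, 4096)
        except OSError as exc:      # connection reset, bad descriptor, ...
            raise ConnectionError("receive failed") from exc
        if not chunk:               # EOF before enough bytes arrived
            raise ConnectionError(f"peer closed before {nbytes} bytes")
        buff += chunk
    return buff

async def demo():
    # a local socketpair stands in for the real server connection
    a, b = socket.socketpair()
    a.setblocking(False)
    b.setblocking(False)
    loop = asyncio.get_running_loop()
    try:
        await loop.sock_sendall(b, b"x" * 100)
        return await recv_exactly(loop, a, 100)
    finally:
        a.close()
        b.close()
```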
75,695,118
1,961,574
How to avoid reading half-written arrays spanning multiple chunks using zarr?
<p>In a multiprocess situation, I want to avoid reading arrays from a zarr group that haven't fully finished writing by the other process yet. This functionality does not seem to come out of the box with zarr.</p> <p>While chunk writing is atomic in zarr, array writing seems not to be (i.e. while you can never have a half-written chunk, you can have a half-written array if said array spans multiple chunks).</p> <p>In my concrete example, one process is writing to the <code>position</code> group. This group contains a 1D array with a chunksize of 100. All goes well if the array I'm writing is smaller than this chunksize. Larger arrays will be written into several chunks, but not all of them are written simultaneously.</p> <p>A parallel process may then try to read the array and find only a first chunk. Zarr then blithely returns an array of 100 elements. Milliseconds later, the 2nd chunk is written, and a subsequent opening of the group now yields 200 elements.</p> <p>I can identify a number of solutions:</p> <ol> <li><p>A store/group lock which must be acquired before writing or reading the entire array. This works, but makes concurrent writing and reading a lot harder because chunk-level locking is better than group/store-level locking. For simple 1D arrays that are write once/read many, that's enough.</p> </li> <li><p>A store/group lock that does not allow reading the entire array while the array is write-locked. I don't know if such read/write locks exist in zarr, or if I should brew my own using the fasteners library. Again for more complex N-D arrays this means loss of performance.</p> </li> <li><p>Adjust my write/read code to obtain a lock based on the region to write or read (the lock key could be composed of the indices to write or chunks to write). 
This would have better performance but it seems absurd that this isn't out-of-the-box supported by zarr.</p> </li> </ol> <p>The zarr docs are a bit too succinct and don't delve very deep into the concept of synchronisation/locking, so maybe I'm just missing something.</p>
<python><multiprocessing><zarr>
2023-03-10 10:44:34
1
2,712
bluppfisk
75,695,028
17,082,611
Applying gaussian noise gives white on colored regions of my image
<p>I want to apply gaussian noise to this image:</p> <p><a href="https://i.sstatic.net/OUyt6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OUyt6.png" alt="original image" /></a></p> <p>through this function:</p> <pre><code>def add_noise(image): (x, y, channels) = image.shape mean = 0 var = 0.1 std = np.sqrt(var) noise = np.random.normal(loc=mean, scale=std, size=(x, y, channels)) return image + noise </code></pre> <p>unfortunately this is giving me the following result:</p> <p><a href="https://i.sstatic.net/JDbAK.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JDbAK.jpg" alt="blurred" /></a></p> <p>As you can see, the noise gets applied only on black regions.</p> <p>This is the function I use for showing the &quot;noised&quot; image:</p> <pre><code>plt.imshow(image, cmap=&quot;gray&quot;) </code></pre> <p>Why is the noise not being applied on the cell itself (i.e. the colored region)?</p> <p>Moreover, I tried to apply the gaussian noise to the gray-scaled image:</p> <pre><code>def add_noise(image): image = cv.cvtColor(image, cv.COLOR_BGR2GRAY) # &lt;&lt; added (x, y) = image.shape mean = 0 var = 0.1 std = np.sqrt(var) noise = np.random.normal(loc=mean, scale=std, size=(x, y)) return image + noise </code></pre> <p>Unfortunately this is the result:</p> <p><a href="https://i.sstatic.net/SGeSZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SGeSZ.png" alt="noised" /></a></p> <p>whose result is equivalent to this script:</p> <pre><code>def add_noise(image): image = cv.cvtColor(image, cv.COLOR_BGR2GRAY) return image </code></pre> <p>that is, if no noise is added.</p>
<python><numpy><matplotlib>
2023-03-10 10:36:31
2
481
tail
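A likely explanation for the symptoms above (an assumption, since it depends on how the image was loaded): the image is `uint8` with values up to 255, so additive noise with std β‰ˆ 0.32 is only visible against pixels near 0, and adding a float array to a `uint8` image also changes how `imshow` normalizes the display. Rescaling the image to `[0, 1]` before adding the noise, then clipping, makes the noise visible everywhere:

```python
import numpy as np

def add_noise(image, var=0.1):
    # work in float [0, 1]; a uint8 image (0-255) must be rescaled first,
    # otherwise noise with std ~0.32 is negligible against values near 255
    img = image.astype(np.float64)
    if image.dtype == np.uint8:
        img /= 255.0
    noise = np.random.normal(loc=0.0, scale=np.sqrt(var), size=img.shape)
    # clip so the result stays a valid [0, 1] image for plt.imshow
    return np.clip(img + noise, 0.0, 1.0)
```

Alternatively, the noise variance can be expressed on the 0-255 scale (e.g. `var=0.1 * 255**2`) if the image should stay `uint8`.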
75,694,838
12,965,658
Pandas string with none to int conversion
<p>I have a column with datatype string. I want to convert it to int using pandas.</p> <p>The dataframe column has values such as:</p> <pre><code>1 2 None </code></pre> <p>I am using pandas to convert it to int.</p> <pre><code>df['column'] = pd.to_numeric(df[column],errors='coerce').fillna(0) print(df.dtypes) print(df) </code></pre> <p>It prints:</p> <p>object and values such as:</p> <pre><code>&quot;1&quot; &quot;2&quot; &quot;None&quot; </code></pre>
<python><python-3.x><pandas><dataframe>
2023-03-10 10:17:12
3
909
Avenger
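For reference, the conversion in the question above does work when the result is assigned back and cast to `int` — if the column still prints as `object` with a literal `"None"`, the assignment probably never ran against that column (note the unquoted `df[column]` inside the snippet, which only works if a variable named `column` happens to exist). A minimal, working version:

```python
import pandas as pd

df = pd.DataFrame({"column": ["1", "2", "None"]})

# errors="coerce" turns the string "None" into NaN, fillna(0) replaces it,
# and astype(int) makes the final dtype integer rather than float
df["column"] = pd.to_numeric(df["column"], errors="coerce").fillna(0).astype(int)
```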
75,694,809
9,479,925
How to reshape pandas dataframe?
<p>I have a dataframe as:</p> <pre><code>pd.DataFrame({'a':['name','number','dob'],'b':['myamulla','1234','1999-01-01']}) </code></pre> <p><a href="https://i.sstatic.net/nJryx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nJryx.png" alt="enter image description here" /></a></p> <p>I would like to have this reshaped as below.</p> <p><a href="https://i.sstatic.net/n2Mub.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n2Mub.png" alt="enter image description here" /></a></p>
<python><pandas>
2023-03-10 10:14:15
2
1,518
myamulla_ciencia
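Judging from the screenshots in the question above, the desired shape appears to be a single row with `name`/`number`/`dob` as columns (an assumption). That is a transpose after setting column `a` as the index:

```python
import pandas as pd

df = pd.DataFrame({'a': ['name', 'number', 'dob'],
                   'b': ['myamulla', '1234', '1999-01-01']})

# values of column 'a' become the new column labels, column 'b' the row
wide = df.set_index('a').T.reset_index(drop=True)
wide.columns.name = None  # drop the leftover 'a' axis label
```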
75,694,674
222,189
Is there a Python linter to check for a missing `raise` keyword?
<p>I have seen the following pattern several times:</p> <pre><code>if value &gt; MAX_VALUE: ApplicationError(&quot;value is too large&quot;) # value is in range now use_value(value) </code></pre> <p>There is an obvious but relatively easy-to-miss bug here: the <code>raise</code> keyword is missing from <code>ApplicationError</code>. An exception object is constructed but not raised, and an invalid <code>value</code> continues to the rest of the function.</p> <p>We use flake8, pyright, and a select set of pylint lints. My expectation is that <em>some sort</em> of static analysis tool would find this problem:</p> <ul> <li>either by detecting that I'm creating an exception instance and not raising it,</li> <li>or (easier) that I'm creating an instance of a class, but not storing the result (this could be used for side-effects but it's definitely an antipattern, right?)</li> <li>or (most general) that there is a function call with unused result</li> </ul> <p>The last option seems like something mypy/pyright should be capable of catching, at least, but I'm completely unable to find <em>any sort</em> of linter for this.</p> <p>Is there a linting tool for Python code that can detect this problem (missing <code>raise</code> keyword) and how to configure it to do so?</p>
<python><static-analysis><pylint>
2023-03-10 10:01:08
1
2,122
matejcik
75,694,548
8,176,763
airflow return a stringIO buffer
<p>I have a DAG in Airflow as follows:</p> <pre><code>from datetime import timedelta import pendulum from airflow.decorators import dag from stage import stage_data from table_async_pg import run_async from read_mem_view import get_view_buffer @dag( dag_id = &quot;data-sync&quot;, schedule_interval = '*/30 * * * *', start_date=pendulum.datetime(2023, 3, 9, tz=&quot;Asia/Hong_Kong&quot;), catchup=False, dagrun_timeout=timedelta(minutes=20), ) def Pipeline(): CSV_URL=&quot;myurl&quot; a = stage_data() b = run_async() c = get_view_buffer(CSV_URL) [a,c] &gt;&gt; b pipeline = Pipeline() </code></pre> <p>The problem happens in the <code>get_view_buffer</code> task; the function looks like this. It runs fine and returns a StringIO object.</p> <pre><code>@task def get_view_buffer(URL): SKIPROWS=2 p = Path(r'/home/d5291029/cert/truststore-prod-2.0.7.pem') if p.exists(): with requests.Session() as s: start = time.time() download = s.get(URL,verify=p) decoded_content = download.content.decode('utf-8') end = time.time() - start print(f'time it takes to download in seconds: {end}') reader = csv.reader(decoded_content.splitlines(),delimiter=',') output = io.StringIO() writer = csv.writer(output,lineterminator='\n') for row in reader: if reader.line_num &gt; SKIPROWS: writer.writerow(row) output.seek(0) return output </code></pre> <p>I get this error:</p> <pre><code> [2023-03-10, 17:30:07 HKT] {python.py:177} INFO - Done. Returned value was: &lt;_io.StringIO object at 0x7f2a70fdfee0&gt; [2023-03-10, 17:30:07 HKT] {taskinstance.py:1768} ERROR - Task failed with exception .................. File &quot;/opt/.venv/lib/python3.9/site-packages/airflow/utils/json.py&quot;, line 153, in default CLASSNAME: o.__module__ + &quot;.&quot; + o.__class__.__qualname__, AttributeError: '_io.StringIO' object has no attribute '__module__' </code></pre>
<python><airflow>
2023-03-10 09:49:09
0
2,459
moth
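The traceback above comes from Airflow trying to JSON-serialize the task's return value for XCom, and a `_io.StringIO` handle is not serializable. Returning `output.getvalue()` (a plain `str`) instead should avoid the error — sketched here without Airflow so the idea is runnable on its own:

```python
import csv
import io

def rows_to_csv_text(rows, skiprows=2):
    # build the CSV exactly as in the task, but hand back the text content:
    # a str survives XCom's JSON serialization, a StringIO handle does not
    output = io.StringIO()
    writer = csv.writer(output, lineterminator="\n")
    for line_num, row in enumerate(rows, start=1):
        if line_num > skiprows:
            writer.writerow(row)
    return output.getvalue()
```

The downstream task can then wrap the string in a fresh `io.StringIO` if it needs a file-like object again.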
75,694,541
12,281,404
Fastapi async.sleep() get time spent
<pre class="lang-py prettyprint-override"><code>async def work(): await asyncio.sleep(3) @router.get('') async def test(): time1 = monotonic() ... # need to call work time2 = monotonic() return TestResponse(time=time2-time1) </code></pre> <p>The <code>work</code> function should only be running once at any given time. I need to write the <code>test</code> function so that if I call the endpoint several times simultaneously, each response's <code>time</code> differs from the previous call by at least 3 seconds.</p>
<python><python-3.x><python-asyncio><fastapi>
2023-03-10 09:48:20
2
487
Hahan't
75,694,454
5,625,534
Windows locale: set number of digits
<p>I am looking for an example of how to set (and get) the number of digits (LOCALE_IDIGITS).</p> <p>Background: I am using COM automation (client: Python, server: ACCESS, package:pywin32). When using TransferText it always prints 2 decimals. (See e.g. <a href="https://social.msdn.microsoft.com/Forums/office/en-US/5bba131c-d6cd-4548-9f0f-c85947b3d81d/access-2010-export-text-truncates-numbers-to-two-decimal-places?forum=accessdev" rel="nofollow noreferrer">https://social.msdn.microsoft.com/Forums/office/en-US/5bba131c-d6cd-4548-9f0f-c85947b3d81d/access-2010-export-text-truncates-numbers-to-two-decimal-places?forum=accessdev</a> for others hitting this problem). Actually, it turns out, this is due to the locale settings. (I could not find any documentation on this, but a bit of testing confirmed this.) Under Windows, the locale has a setting called LOCALE_IDIGITS:</p> <blockquote> <p>Number of fractional digits placed after the decimal separator. The maximum number of characters allowed for this string is two, including a terminating null character. For example, 2 for 5.00, 1 for 5.0.</p> </blockquote> <p>So, because of this, I want to change the locale settings for the application so the Access Function will write a CSV file in full precision (say 16 or 8 decimals). I don't mind if the example is in a different language or is changing a similar setting. I assume get and set will be symmetric, so one of these should be sufficient to get me going.</p>
<python><winapi><pywin32>
2023-03-10 09:40:52
1
16,927
Erwin Kalvelagen
75,694,367
13,049,379
Understanding Pytorch Tensor Slicing
<p>Let <code>a</code> and <code>b</code> be two PyTorch tensors with <code>a.shape=[A,3]</code> and <code>b.shape=[B,3]</code>. Further, <code>b</code> is of type <code>long</code>.</p> <p>I know there are several ways of slicing <code>a</code>. For example,</p> <pre><code>c = a[N1:N2:jump,[0,2]] # N1&lt;N2&lt;A </code></pre> <p>would return <code>c.shape = [2,2]</code> for N1=1, N2=4 and jump=2.</p> <p>But the following should have thrown an error,</p> <pre><code>c = a[b] </code></pre> <p>but instead <code>c.shape = [B,3,3]</code>.</p> <p>For example,</p> <pre><code>a = torch.rand(10,3) b = torch.rand(20,3).long() print(a[b].shape) #torch.Size([20, 3, 3]) </code></pre> <p>Can someone explain how the indexing works for <code>a[b]</code>?</p>
<python><pytorch><slice><tensor>
2023-03-10 09:32:47
1
1,433
Mohit Lamba
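What happens in `a[b]` above is integer-array (advanced) indexing, not slicing: every element of `b` selects a whole row of `a`, so the result has shape `b.shape + a.shape[1:]`. NumPy follows the same rule, so it can be demonstrated without PyTorch installed:

```python
import numpy as np

a = np.arange(30).reshape(10, 3)        # stand-in for torch.rand(10, 3)
b = np.zeros((20, 3), dtype=np.int64)   # a "long" index tensor/array

# each entry b[i, j] selects the row a[b[i, j]] (shape (3,)),
# giving result shape b.shape + a.shape[1:] == (20, 3, 3)
c = a[b]
```

An index outside `[0, A)` *does* raise an error — `torch.rand(20, 3).long()` just happens to truncate every value in `[0, 1)` to 0, so all indices are valid.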
75,694,356
1,736,294
Python OSError: Failure with SFTP
<p>I'm testing SFTP communication on a Windows 11 laptop with SFTP server running at localhost:3373. An <code>sftp.get</code> request generates an <em>&quot;OSError: Failure&quot;</em> error with this code:</p> <pre><code>import pysftp remotepath = &quot;C:/Users/Profile/sftpdata/remote/gimme.txt&quot; localpath = &quot;C:/Users/Profile/sftpdata/local/gimme.txt&quot; cnopts = pysftp.CnOpts() cnopts.hostkeys = None with pysftp.Connection('localhost', port=3373, username='admin', password='admin', cnopts=cnopts) as sftp: sftp.get(remotepath, localpath=localpath) </code></pre> <p>The traceback:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File &quot;C:\Users\Profile\sftpsrc\test_sftp.py&quot;, line 9, in &lt;module&gt; sftp.get(remotepath, localpath=localpath) File &quot;C:\Users\Profile\AppData\Local\Programs\Python\Python310\lib\site-packages\pysftp\__init__.py&quot;, line 249, in get self._sftp.get(remotepath, localpath, callback=callback) File &quot;C:\Users\Profile\AppData\Local\Programs\Python\Python310\lib\site-packages\paramiko\sftp_client.py&quot;, line 811, in get size = self.getfo(remotepath, fl, callback, prefetch) File &quot;C:\Users\Profile\AppData\Local\Programs\Python\Python310\lib\site-packages\paramiko\sftp_client.py&quot;, line 782, in getfo file_size = self.stat(remotepath).st_size File &quot;C:\Users\Profile\AppData\Local\Programs\Python\Python310\lib\site-packages\paramiko\sftp_client.py&quot;, line 493, in stat t, msg = self._request(CMD_STAT, path) File &quot;C:\Users\Profile\AppData\Local\Programs\Python\Python310\lib\site-packages\paramiko\sftp_client.py&quot;, line 822, in _request return self._read_response(num) File &quot;C:\Users\Profile\AppData\Local\Programs\Python\Python310\lib\site-packages\paramiko\sftp_client.py&quot;, line 874, in _read_response self._convert_status(msg) File &quot;C:\Users\Profile\AppData\Local\Programs\Python\Python310\lib\site-packages\paramiko\sftp_client.py&quot;, 
line 907, in _convert_status raise IOError(text) OSError: Failure </code></pre> <p>The environment is Windows 11, Python, Paramiko 3.0.0, sftpserver and pysftp.</p> <p>The file <code>gimme.txt</code> is definitely in the remote folder. Have tried transforming the path statements using <code>Path</code> + <code>as_posix()</code> and <code>realpath</code> but with no luck. The generated key is <code>rsa-ssh 4096</code>.</p> <p>Btw, <code>localpath = &quot;C:/Users/Profile/sftpdata/local&quot;</code> gives a Permission Error.</p> <p>What am I doing wrong?</p>
<python><sftp><paramiko><pysftp>
2023-03-10 09:31:25
1
4,617
Henry Thornton
75,694,218
19,546,216
Python: How to get index from an Array of JSON?
<p>So, I was able to answer my question before on how to get the value of a JSON from an array of JSON. But now I'm trying to convert it in Python(for selenium).</p> <p>Here is the Array of JSON:</p> <pre><code>[ { &quot;id&quot;: 3328367679, &quot;inbox_id&quot;: 35584, &quot;subject&quot;: &quot;Open this message&quot; }, { &quot;id&quot;: 3328364404, &quot;inbox_id&quot;: 35584, &quot;subject&quot;: &quot;Another message&quot; }, { &quot;id&quot;: 3328363502, &quot;inbox_id&quot;: 35584, &quot;subject&quot;: &quot;Open this message&quot; }, { &quot;id&quot;: 3328357254, &quot;inbox_id&quot;: 35584, &quot;subject&quot;: &quot;Open this message&quot; }, { &quot;id&quot;: 3328349654, &quot;inbox_id&quot;: 35584, &quot;subject&quot;: &quot;Open this message&quot; } ] </code></pre> <p>Below is my script, it's working in Cypress Javascript. This works by getting the index of JSON that gives the first same <code>&quot;subject&quot;</code> which is <code>&quot;Open this message&quot;</code> and then uses the same index to return the <code>&quot;id&quot;</code> which is <code>3328367679</code></p> <pre><code>function getMessageID() { cy.request({ url: Cypress.env('url'), method: 'GET', headers: { 'Api-Token': Cypress.env('token'), 'Content-Type': 'application/json;charset=utf-8' }, }) .then((response) =&gt; { expect(response.status).to.eq(200) var messages = response.body var messageTitle = 'Open this message' for (var i = 0; i &lt; messages.length; i++){ if (messages[i].subject == messageTitle){ return messages[i].id break; } } }) } getMessageID() </code></pre> <p>How do I convert this in Python? This is my current script and it's giving me value of <code>None</code> when I print it. 
I run the <code>get_id</code> in the feature file since they're connected:</p> <pre><code>def get_messages(): url = &quot;https://uri.io/api/accounts/fdsdfs/inboxes/dfsfds/messages/&quot; headers = { &quot;Content-Type&quot;: &quot;application/json;charset=utf-8&quot;, &quot;Api-Token&quot;: &quot;123345dsfgsdrgf3456dfg5634dfv5&quot; } response = requests.get(url=url, headers=headers) return response.json() @staticmethod def get_id(): messages = ClassName.get_messages() message_title = &quot;Open this message&quot; for item in messages: if item[&quot;subject&quot;] == message_title: print(&quot;Message id is &quot; + item[&quot;id&quot;]) return item[&quot;id&quot;] break </code></pre> <p>Tried as well using <code>for loop</code> within a <code>for loop</code>:</p> <pre><code>def get_id(): messages = ClassName.get_messages() message_title = &quot;Open this message&quot; for json_item in messages: for message_item in json_item: if message_item[&quot;subject&quot;] == message_title: print(&quot;Message id is &quot; + str(message_item[&quot;id&quot;])) return message_item[&quot;id&quot;] break </code></pre> <p>I was able to print a <code>subject</code> using this code. How do I use indexes in Python? <code>print(messages[0][&quot;subject&quot;])</code></p> <pre><code> @staticmethod def get_id(): messages = ClassName.get_messages() message_title = &quot;Open this message&quot; # print(messages) print(messages[0][&quot;subject&quot;]) # for item in messages: # if messages[item][&quot;subject&quot;] == message_title: # print(&quot;Message id is &quot; + messages[item][&quot;id&quot;]) # return messages[item][&quot;id&quot;] # break </code></pre>
<python><json><for-loop><selenium-webdriver><python-requests>
2023-03-10 09:17:57
2
321
Faith Berroya
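For the question above: `response.json()` already yields a plain Python list of dicts, so a single loop is enough — the nested loop iterates over each dict's *keys*, and the `+` concatenation fails once `item["id"]` is an `int` unless it is wrapped in `str()`. A minimal sketch with the sample data:

```python
messages = [
    {"id": 3328367679, "inbox_id": 35584, "subject": "Open this message"},
    {"id": 3328364404, "inbox_id": 35584, "subject": "Another message"},
    {"id": 3328363502, "inbox_id": 35584, "subject": "Open this message"},
]

def get_message_id(messages, message_title="Open this message"):
    # return the id of the first message whose subject matches;
    # the return statement already exits the loop, so no break is needed
    for item in messages:
        if item["subject"] == message_title:
            print("Message id is " + str(item["id"]))  # str() for the int id
            return item["id"]
    return None
```

Positional access works the same way as in JavaScript: `messages[0]["subject"]` is the subject of the first element.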
75,694,005
3,573,626
Transform latitude and longitude in python pandas using pyproj
<p>I have a dataframe as below:</p> <pre><code>df = pd.DataFrame( { 'epsg': [4326, 4326, 4326, 4203, 7844], 'latitude': [-34.58, -22.78, -33.45, -33.60, -30.48], 'longitude': [122.31, 120.2, 118.55, 140.77, 115.88]}) </code></pre> <p>Here is the function to transform the lat/long if it is not based on 4326:</p> <pre><code>def transfform_lat_long(inproj:int, outproj:int, x1, y1): proj = pyproj.Transformer.from_crs(inproj, outproj, always_xy=True) x2, y2 = proj.transform(x1, y1) return outproj, x2, y2 </code></pre> <p>I try to apply the function over the data frame so that its lat/long/epsg will be updated if the epsg is not 4326</p> <pre><code>df[['epsg','latitude', 'longitude']] = df.apply(lambda row: transfform_lat_long(row.epsg, 4326, row.latitude, row.longitude) if row.epsg != 4326) </code></pre> <p>It produces syntax error. Any help?</p>
<python><pandas><latitude-longitude><pyproj><epsg>
2023-03-10 08:58:13
2
1,043
kitchenprinzessin
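The syntax error above comes from the conditional expression lacking an `else` branch, and row-wise `apply` also needs `axis=1`. A sketch of the pattern with a stand-in transform function, so it runs without pyproj — the real code would call `pyproj.Transformer.from_crs(...).transform(...)` inside it:

```python
import pandas as pd

def transform_lat_long(inproj, outproj, lat, lon):
    # stand-in for the pyproj reprojection; here it just shifts the
    # coordinates so the apply pattern itself can be demonstrated
    return outproj, lat + 0.01, lon + 0.01

df = pd.DataFrame({'epsg': [4326, 4203],
                   'latitude': [-34.58, -33.60],
                   'longitude': [122.31, 140.77]})

# axis=1 passes whole rows; the else branch returns the row unchanged;
# result_type="expand" turns each returned 3-tuple into three columns
out = df.apply(
    lambda row: transform_lat_long(row.epsg, 4326, row.latitude, row.longitude)
    if row.epsg != 4326
    else (row.epsg, row.latitude, row.longitude),
    axis=1,
    result_type="expand",
)
df[['epsg', 'latitude', 'longitude']] = out.to_numpy()
```

For real use, building one `Transformer` per distinct `epsg` (rather than per row) would be considerably faster.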
75,693,988
1,826,066
Handle empty columns in polars when concatenating dataframes
<p>I want to be able to concatenate dataframes in <code>polars</code> where the dataframes have the same columns, but some of the dataframes have no data for a subset of the columns.</p> <p>More precisely, I am looking for the <code>polars</code> equivalent of this <code>pandas</code> minimal working example:</p> <pre class="lang-py prettyprint-override"><code>from io import StringIO import polars as pl import pandas as pd TESTDATA1 = StringIO(&quot;&quot;&quot; col1,col2,col3 1,1,&quot;a&quot; 2,1,&quot;b&quot; &quot;&quot;&quot; ) TESTDATA2 = StringIO(&quot;&quot;&quot; col1,col2,col3 1,,&quot;a&quot; 2,,&quot;b&quot; &quot;&quot;&quot; ) df = pd.concat( [ pd.read_csv(TESTDATA1), pd.read_csv(TESTDATA2), ], ) print(df) </code></pre> <p>This prints</p> <pre><code> col1 col2 col3 0 1 1.0 a 1 2 1.0 b 0 1 NaN a 1 2 NaN b </code></pre> <p>I tried the following <code>polars</code> implementation which does not work for me:</p> <pre class="lang-py prettyprint-override"><code> TESTDATA1 = StringIO(&quot;&quot;&quot; col1,col2,col3 1,1,&quot;a&quot; 2,1,&quot;b&quot; &quot;&quot;&quot;) TESTDATA2 = StringIO(&quot;&quot;&quot; col1,col2,col3 1,,&quot;a&quot; 2,,&quot;b&quot; &quot;&quot;&quot;) df = pl.concat( [ pl.read_csv(TESTDATA1), pl.read_csv(TESTDATA2), ], how =&quot;diagonal&quot; ) </code></pre> <p>I get the error message:</p> <pre><code>SchemaError: cannot vstack: because column datatypes (dtypes) in the two DataFrames do not match for left.name='col2' with left.dtype=i64 != right.dtype=str with right.name='col2' </code></pre> <p>It seems that the empty column is treated as <code>str</code> in <code>polars</code> and can't be merged with the other dataframe where it is of type <code>i64</code>.</p> <p>I understand that this is a solution for my problem:</p> <pre class="lang-py prettyprint-override"><code>df = pl.concat( [ pl.read_csv(TESTDATA1), pl.read_csv(TESTDATA2).with_columns(pl.col(&quot;col2&quot;).cast(pl.Int64)), ], how =&quot;diagonal&quot; ) 
</code></pre> <p>But in reality, I have some twenty columns that are potentially <code>null</code> and I don't want to cast all of them.</p> <p>What works in <code>pandas</code> and <code>polars</code> is the situation where the empty column is removed from the dataframe, i.e.</p> <pre class="lang-py prettyprint-override"><code> TESTDATA1 = StringIO(&quot;&quot;&quot; col1,col2,col3 1,1,&quot;a&quot; 2,1,&quot;b&quot; &quot;&quot;&quot;) TESTDATA2 = StringIO(&quot;&quot;&quot; col1,col3 1,&quot;a&quot; 2,&quot;b&quot; &quot;&quot;&quot;) pl.concat( [ pl.read_csv(TESTDATA1), pl.read_csv(TESTDATA2), ], how =&quot;diagonal&quot; ) </code></pre> <p>In <code>pandas</code>, I could also drop the empty column by calling <code>.dropna(how=&quot;all&quot;,axis=1)</code> but I don't know the equivalent for this in <code>polars</code>.</p> <p>So, to <strong>summarize</strong>:</p> <ul> <li>How can I concatenate dataframes in <code>polars</code> if some of them have columns with no data (<code>null</code>)?</li> <li>How can I achieve the equivalent of <code>.dropna(how=&quot;all&quot;,axis=1)</code> in <code>polars</code>?</li> </ul> <p>Thanks!</p>
<python><dataframe><python-polars>
2023-03-10 08:56:04
1
1,351
Thomas
75,693,958
2,119,941
Error in calling TaskGroup operator wrapped in function - AirflowException: TaskGroup can only be used inside a dag
<p>Using Airflow v2.5.1</p> <p>I'm omitting all the functions operators are calling since this is not a problem.</p> <p>I have a following basic DAG:</p> <pre><code>with DAG('00_s3_file_processing_to_redshift', default_args=default_args, schedule_interval=None) as dag: start_task = DummyOperator( task_id='start_task', dag=dag ) end_task = DummyOperator( task_id='end_task', dag=dag ) process_files_task = PythonOperator( task_id='process_files_task', python_callable=process_files_func, op_kwargs={'s3_bucket_name': s3_bucket_name}, provide_context=True, dag=dag ) start_task &gt;&gt; process_files_task &gt;&gt; end_task </code></pre> <p><code>process_files_task</code> calls <code>process_files_func</code> which looks like this (shortened code for clarity):</p> <pre><code>def process_files_func(s3_bucket_name, **kwargs): with TaskGroup('process_files') as process_files_group: # s3_files = list_s3_files() list_s3_files_task = PythonOperator( task_id='list_s3_files_task', python_callable=list_s3_files, provide_context=True, dag=dag ) s3_files = kwargs['ti'].xcom_pull(key='s3_files') print(s3_files) previous_task = None for filepath in s3_files[1:]: print(&quot;xxxxxxxxxx Working on filepath :&quot;, filepath) s3_file = filepath.replace(&quot;/&quot;, &quot;_&quot;).replace(&quot; &quot;, &quot;_&quot;).replace(&quot;-&quot;,&quot;_&quot;)[:-4] download_file_task = PythonOperator( task_id=f'download_file_{s3_file}', python_callable=download_file_from_s3, op_kwargs={ 's3_bucket_name': s3_bucket_name, 'filepath': filepath, 'local_dir': 'temp' } ) count_op_task = PythonOperator( task_id=f'count_records_{s3_file}', python_callable=count_records, op_kwargs={'filepath': filepath, 'local_dir': 'temp', 's3_file': s3_file } ) create_temp_table_task = PythonOperator( task_id=f'create_temp_table_task_{s3_file}', python_callable=create_temp_table_function ) # other tasks go here... 
download_file_task &gt;&gt; count_op_task &gt;&gt; create_temp_table_task..other dependencies </code></pre> <p>Previously I didn't wrapp <code>TaskGroup</code> in function and it worked fine nicely iterating over files and processing them. I just used <code>s3_files = list_s3_files()</code> to fetch list of keys.</p> <p>In code above I decided to create task list_s3_files_task to have more control over how keys are fetched from S3. To do that I need kwargs for pulling xcom from <code>list_s3_files_task</code>. And to get kwargs I need to wrap everything in function.</p> <p>All other tasks have no problem accessing and processing each others xcom but I don't know how to use xcom value from some task as iterator in TaskGroup... Calling Python operator to call function which calls <code>TaskGroup</code> looks overly complicated...</p> <p>This solution gives me error:</p> <pre><code>Traceback (most recent call last): File &quot;/data/airflow-env/lib/python3.10/site-packages/airflow/operators/python.py&quot;, line 175, in execute return_value = self.execute_callable() File &quot;/data/airflow-env/lib/python3.10/site-packages/airflow/operators/python.py&quot;, line 192, in execute_callable return self.python_callable(*self.op_args, **self.op_kwargs) File &quot;/data/airflow/dags/DoubleVerify/02_s3_file_processing.py&quot;, line 246, in process_files_func with TaskGroup('process_files') as process_files_group: File &quot;/data/airflow-env/lib/python3.10/site-packages/airflow/utils/task_group.py&quot;, line 122, in __init__ raise AirflowException(&quot;TaskGroup can only be used inside a dag&quot;) airflow.exceptions.AirflowException: TaskGroup can only be used inside a dag </code></pre> <p>Don't know why it's throwing that when whole thing is part of DAG?</p>
<python><airflow>
2023-03-10 08:53:03
0
15,380
Hrvoje
75,693,943
12,396,154
How to create nested and complex dictionary from an existing one?
<p>I'd like to create a new dictionary from this dictionary:</p> <pre><code>inp = {'tagA': {'2023-03-09 00:00:00': 'X', '2023-03-09 01:00:00': 'X', '2023-03-09 02:00:00': 'Y', '2023-03-09 03:00:00': 'Z', '2023-03-09 04:00:00': 'X'}, 'tagB': {'2023-03-09 00:00:00': 'Y', '2023-03-09 01:00:00': 'X', '2023-03-09 02:00:00': 'X', '2023-03-09 03:00:00': 'Y', '2023-03-09 04:00:00': 'Z'}} </code></pre> <p>The desired output is:</p> <pre><code>output = { 'tags':[ { 'name': 'tagA', 'values':[ { 'timestamp':'2023-03-09 00:00:00', 'value':'X' }, { 'timestamp':'2023-03-09 01:00:00', 'value':'X' }, { 'timestamp':'2023-03-09 02:00:00', 'value':'Y' }, { 'timestamp':'2023-03-09 03:00:00', 'value':'Z' }, { 'timestamp':'2023-03-09 04:00:00', 'value':'X' } ] }, { 'name': 'tagB', 'values':[ ...]}]} </code></pre> <p>I wrote a for loop to iterate through each values of the inner list but with this logic all the values will be assigned to all the keys.</p> <pre><code>inner_list = [] outer_dict = {} outer_list = [] for k, v in inp.items(): print(k) for timestamp, value in v.items(): inner_list.append({'timestamp':timestamp, 'value':value}) outer_list.append({'name': k, 'values':inner_list}) outer_dict.update({'tags': outer_list}) </code></pre> <p>How can I fix this?</p>
<python><list><dictionary>
2023-03-10 08:51:53
3
353
Nili
75,693,927
12,783,363
Pygame: How to iterate through numbers and check for key presses on each number using pygame.key.get_pressed?
<p>Currently I have the below set-up. Doing so for 9 more numbers is a bit messy. Is there a way to have a code resembling the &quot;Target&quot; below?</p> <p>Current:</p> <pre><code>keys = pygame.key.get_pressed() if keys[pygame.K_1]: print(&quot;1&quot;) # more if statements </code></pre> <p>Target:</p> <pre><code>keys = pygame.key.get_pressed() values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0] for value in values: if keys[some_pygame_function(value)]: print(str(value)) </code></pre> <p>An alternative I found so far is by using events from <code>pygame.event.get()</code> instead. However, I prefer it in <code>pygame.key.get_pressed()</code>.</p>
<python><pygame>
2023-03-10 08:50:15
1
916
Jobo Fernandez
75,693,905
8,461,786
Type hint for an argument that is an attribute of a class
<p>I work on a legacy codebase where some constants are encapsulated in a class:</p> <pre><code>class Fields: FIELD_1 = 'field 1 name' </code></pre> <p>This class is often use like:</p> <pre><code>some_dict[Fields.FIELD_1] </code></pre> <p>Now I would like to type hint a function operating on this class:</p> <pre><code>def use_field(field: Fields): field.startswith('a') # Pylance error: Cannot access member &quot;startswith&quot; for type &quot;Fields&quot; Member &quot;startswith&quot; is unknown </code></pre> <p>I would like to properly type hint <code>fields</code> argument of <code>use_fields</code> function. Currently Pylance will complain as follows:</p> <blockquote> <p>Cannot access member &quot;startswith&quot; for type &quot;Fields&quot; Β Β Member &quot;startswith&quot; is unknown</p> </blockquote> <p>I understand why is that - my function expects an instance of <code>Fields</code> class at the moment. But can the type hint be adjusted somehow, possibly without refactoring the <code>Fields</code> class into a <code>Literal</code> type for example?</p>
<python><python-typing>
2023-03-10 08:47:37
1
3,843
barciewicz
75,693,876
6,224,975
scipy.sparse vstack or hstack when two matrices have different numbers of rows and columns
<p>I have multiple sparse matrices I want to merge into one.</p> <p>Each matrice is made up of the following:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd from sklearn.feature_extraction.text import CountVectorizer from local_utils import get_data def _count_words_within_userid(x): model = CountVectorizer(min_df=1, ngram_range=(1, 2), binary=True) transformed_data = model.fit_transform(x) return transformed_data data = get_data() user_models_and_data = data.groupby(&quot;user_id&quot;)[&quot;text&quot;].apply(_count_words_within_userid) </code></pre> <p><code>user_models_and_data</code> now consists of a <code>scipy.sparse.csr_matrix</code> where the number of rows in all of the matrices sums up to the total number of rows in <code>data</code>.</p> <p>I want to create one matrix with shape <code>(data_train.shape[0],n_cols)</code> where <code>n_cols</code> is the sum of columns of all of the matrices i.e simply concat all the rows and all the columns of all of the matrices.</p> <p>The issue is that they all have different number of rows and columns, thus I can't <code>vstack</code> nor <code>hstack</code> them.</p> <p>What I'm looking for is some kind of &quot;outer join&quot; with zero-filled values e.g</p> <pre><code>m1 = [[0,0,1, [0,1,0]] m2 = [[0,1], [1,1] [1,0]] result = some_function((m1,m2)) # The first 3 columns are from m1, the next 2 columns are from m2. # The rows from &quot;m2&quot; are concatted to the rows of m1 # and have 0's in all of m1 columns (the first 3). # and the rows from m1 have 0's in all of the columns from m2 (the last 2) # such that we get a shape of `(m1.rows+m2.rows, m1.columns+m2.columns)` print(result) # [[0,0,1,0,0], # [0,1,0,0,0], # [0,0,0,0,1], # [0,0,0,1,1], # [0,0,0,1,0] # ] </code></pre>
<python><matrix><scipy>
2023-03-10 08:44:06
2
5,544
CutePoison
75,693,850
1,310,540
Post array of dictionary using multipart/form-data in python
<p>I am having an issue while posting my array to some url. The requested url only supports <code>content-type=multipart/form-data</code>.</p> <p>The <a href="https://intrasheets.com/docs/api/value" rel="nofollow noreferrer">documentation sample</a> is in <strong>node.js</strong>, and it works fine. But I need to achieve it in python. After several tries I am able to send data, but it gives me error <code>{&quot;error&quot;:1,&quot;message&quot;:&quot;The data field must be an array&quot;}</code></p> <p>Below is my code.</p> <pre><code># post_data = [{&quot;x&quot;:0,&quot;y&quot;:0,&quot;value&quot;:&quot;a1&quot;}] post_data = [ ('x', (&quot;data&quot;, 0)), ('y', (&quot;data&quot;, 0)), ('value', (&quot;data&quot;, 'x y orange')), ] file_data ={&quot;data&quot;:(None, str(post_data),&quot;application/json&quot;)} response = requests.post(post_url,files=file_data, headers=request_headers) print(response.text) </code></pre> <p>Response:</p> <pre><code>{&quot;error&quot;:1,&quot;message&quot;:&quot;The data field must be an array&quot;} </code></pre> <p>If we use <code>data</code> or <code>json</code> in post parameter it gives us following error.</p> <pre><code>{&quot;error&quot;:1,&quot;message&quot;:&quot;The Content-Type must be multipart/form-data&quot;} </code></pre> <p>Any help will be appreciated.</p> <p>Thanks in advance.</p>
<python><multipartform-data>
2023-03-10 08:41:03
2
931
Mehmood
75,693,197
3,553,814
Pydash: how to find using a object: py_.collections.find(DATA, OBJECT)
<p>In lodash I can use the syntax: <code>find(ARRAY_OF_OBJECTS, OBJECT)</code> This will return an object from the array if it meets the criteria of the passed object. In this case <code>OBJECT</code> would be e.g. <code>{ active: true, dimension: 'target' }</code>. The objects in the array would contain e.g. <code>active</code>, <code>dimension</code>, <code>status</code> etc.</p> <p>How can I do the same in pydash? I know I can do <code>find(ARRAY_OF_OBJECTS, lambda x: x.active == True</code>, but the thing is, the object I pass is dynamically made. So sometimes it might not have <code>active</code> (as example)</p>
<python><lodash><pydash>
2023-03-10 07:14:54
1
549
Edgar Koster
75,693,136
1,525,238
Configure isort to not touch already existing sections but only sort within sections
<p>I want to configure <code>isort</code> such that it doesn't touch my sections but only sorts within sections alphabetically or according to some other parameter. Namely I want it to turn:</p> <pre><code>import z import a from my.package.b import thing from my.package.a import other </code></pre> <p>into:</p> <pre><code>import a import z from my.package.a import other from my.package.b import thing </code></pre> <p>or any other readable ordering. Is this possible?</p>
<python><isort>
2023-03-10 07:05:48
1
5,334
Ayberk Γ–zgΓΌr
75,693,121
15,181,384
Python: create interactive ssh-session with parameters host and user
<p>I am trying to build a small tui-program that allows me to choose hostname and username for a ssh-session from a list.</p> <p>My question is how to create an interactive ssh-session with these parameters.</p> <p>I tried the subprocess.popen:</p> <pre><code>subprocess.Popen([&quot;ssh&quot;, &quot;username@host&quot;], shell=True) </code></pre> <p>but it destroys my terminal-formating. There are line breaks all over the place and nothing is in line anymore. Also I cannot see what I'm typing that way.</p> <p>My alternative idea is to open the ssh-session, exit the python program and attach to the ssh-session, but I don't know how to do it or if it is even possible.</p> <p>Any help and ideas are appreciated</p>
<python><ssh><subprocess>
2023-03-10 07:04:21
0
361
JadBlackstone
75,693,094
5,783,373
Excel sheets processing in python
<p>I want to iterate over each sheet in excel file to apply some function on top of the data in each sheet and then get the output in a separate excel with all the processed sheets of the original excel file (i.e. not an inplace operation).</p> <p>Example :</p> <p>Input file &quot;Myfile.xlsx&quot; contains 3 sheets (names not known, could be anything)</p> <p>Output file &quot;Myfile_processed.xlsx&quot; also contains 3 sheets with the same sheetname as above (but with processed data since we applied function on top of it)</p> <p>I am able to iterate over the sheets in excel and treat each sheet as a dataframe to apply my function on top of it. But, post that I am not sure how to append that dataframe as a sheet in new excel, along with all the other sheets.</p> <p>Any leads on this will help.</p>
<python><python-3.x><excel><dataframe>
2023-03-10 06:59:56
1
345
Sri2110
75,693,033
10,313,194
Can I get F1 score each time from GridSearchCV?
<p>I want to show F1 score from gridsearch each loop of change parameter. I use <code>f1_micro</code> in the GridSearchCV like this.</p> <pre><code>params = { 'max_depth': [None, 2, 4, 6, 8, 10], 'max_features': [None, 'sqrt', 'log2', 0.2, 0.4, 0.6, 0.8], } clf = GridSearchCV( estimator=DecisionTreeClassifier(), param_grid=params, scoring='f1_micro' ) clf.fit(X, y) df = pd.DataFrame(clf.cv_results_) df.to_csv('result.csv') </code></pre> <p>It show many columns like this.</p> <pre><code>mean_fit_time std_fit_time mean_score_time std_score_time param_max_depth param_max_features params split0_test_score split1_test_score split2_test_score split3_test_score split4_test_score mean_test_score std_test_score rank_test_score </code></pre> <p>I see the result in csv file it have no column F1 score. I don't understand how to use F1 score in GridSearchCV.</p>
<python><scikit-learn><grid-search><gridsearchcv>
2023-03-10 06:52:18
1
639
user58519
75,692,831
17,696,880
How to write this new file completing with information inside a string in the correct positions?
<p>If I have this string, it contains several lines separated by newlines.</p> <pre class="lang-py prettyprint-override"><code>data_string = &quot;&quot;&quot; te gustan los animales? puede que algunos me gusten pero no se mucho de eso animales puede que algunos me gusten pero no se mucho de eso te gustan las plantas? no se mucho sobre ellas carnivoro segun mis registros son aquellos que se alimentan de carne &quot;&quot;&quot; # Split the string into individual lines lines = data_string.splitlines() # Open the check.py file for writing with open(&quot;check.py&quot;, &quot;w&quot;) as f: # Write the lines in the file here... for line in lines: #How to write the code with this lines info? </code></pre> <p>Using python how can I create a file using the content of the lines inside the <code>data_string</code> variable so that it is thus inside a new file.</p> <p>This is how the resulting file should look like, note that the lines <code>&quot;te gustan los animales?&quot;</code>, <code>&quot;animales&quot;</code>, <code>&quot;te gustan las plantas?&quot;</code> and <code>&quot;carnivoro&quot;</code> were left as parameters of <code>SequenceMatcher(None, str1, &quot;here&quot;).ratio()</code> and the lines <code>&quot;puede que algunos me gusten pero no se mucho de eso&quot;</code>, <code>&quot;no se mucho sobre ellas&quot;</code>, <code>&quot;no se mucho sobre ellas&quot;</code> and <code>&quot;segun mis registros son aquellos que se alimentan de carne&quot;</code> were left as <code>response_text = &quot;here&quot;</code></p> <p>Output file called <code>check.py</code> :</p> <pre class="lang-py prettyprint-override"><code>from difflib import SequenceMatcher def check_function(str1): similarity_ratio = 0.0 response_text = &quot;no coincide con nada&quot; threshold = 0.0 similarity_ratio_to_compare = SequenceMatcher(None, str1, &quot;te gustan los animales?&quot;).ratio() if similarity_ratio_to_compare &gt; similarity_ratio and similarity_ratio_to_compare &gt; 
threshold: response_text = &quot;puede que algunos me gusten pero no se mucho de eso&quot; similarity_ratio = similarity_ratio_to_compare similarity_ratio_to_compare = SequenceMatcher(None, str1, &quot;animales&quot;).ratio() if similarity_ratio_to_compare &gt; similarity_ratio and similarity_ratio_to_compare &gt; threshold: response_text = &quot;puede que algunos me gusten pero no se mucho de eso&quot; similarity_ratio = similarity_ratio_to_compare similarity_ratio_to_compare = SequenceMatcher(None, str1, &quot;te gustan las plantas?&quot;).ratio() if similarity_ratio_to_compare &gt; similarity_ratio and similarity_ratio_to_compare &gt; threshold: response_text = &quot;no se mucho sobre ellas&quot; similarity_ratio = similarity_ratio_to_compare similarity_ratio_to_compare = SequenceMatcher(None, str1, &quot;carnivoro&quot;).ratio() if similarity_ratio_to_compare &gt; similarity_ratio and similarity_ratio_to_compare &gt; threshold: response_text = &quot;segun mis registros son aquellos que se alimentan de carne&quot; similarity_ratio = similarity_ratio_to_compare return response_text input_text = &quot;te gusta saltar la soga bien piolon???&quot; text = check_function(input_text) print(text) </code></pre> <p>In the end this file must be saved with the name <code>check.py</code>, keep in mind that the number of lines inside the <code>data_string</code> variable is not known (and in this case there are only 4 questions with 4 answers to prevent the question from being too long)</p>
<python><python-3.x><string><file><file-writing>
2023-03-10 06:21:51
1
875
Matt095
75,692,797
800,735
When using Apache Beam Python with GCP Dataflow, does it matter if you materialize the results of GroupByKey?
<p>When using Apache Beam Python with GCP Dataflow, is there a downside to materializing the results of GroupByKey, say, to count the number of elements. For example:</p> <pre><code>def consume_group_by_key(element): season, fruits = element for fruit in fruits: yield f&quot;{fruit} grows in {season}&quot; def consume_group_by_key_materialize(element): season, fruits = element num_fruits = len(list(fruits)) print(f&quot;There are {num_fruits} fruits grown in {season}&quot;) for fruit in fruits: yield f&quot;{fruit} grows in {season}&quot; ( pipeline | 'Create produce counts' &gt;&gt; beam.Create([ ('spring', 'strawberry'), ('spring', 'carrot'), ('spring', 'eggplant'), ('spring', 'tomato'), ('summer', 'carrot'), ('summer', 'tomato'), ('summer', 'corn'), ('fall', 'carrot'), ('fall', 'tomato'), ('winter', 'eggplant'), ]) | 'Group counts per produce' &gt;&gt; beam.GroupByKey() | beam.ParDo(consume_group_by_key_generator) ) </code></pre> <p>Are the grouped values, <code>fruits</code>, passed to my DoFn as a generator? Is there a performance penalty for using <code>consume_group_by_key_materialize</code> instead of <code>consume_group_by_key</code>? Or in other words materializing <code>fruits</code> via something like <code>len(list(fruits))</code>? If there are billions of fruits will this use up all my memory?</p>
<python><out-of-memory><generator><google-cloud-dataflow><apache-beam>
2023-03-10 06:14:52
1
965
cozos
75,692,576
14,700,182
Why is median blur not working? - OpenCV - Python
<p>I have a function to add gaussian noise to an image read by OpenCV with <code>imread</code> that returns an image (matrix).</p> <p>I am trying to use median blur on that image but terminal returns this error:</p> <pre><code>median = cv2.medianBlur(image, 5) ^^^^^^^^^^^^^^^^^^^^^^^^ cv2.error: OpenCV(4.7.0) D:/a/opencv-python/opencv-python/opencv/modules/imgproc/src/median_blur.simd.hpp:870: error: (-210:Unsupported format or combination of formats) in function 'cv::opt_AVX2::medianBlur' </code></pre> <p>Here's the code:</p> <pre><code>import numpy as np import cv2 def GaussianNoise(image): row,col,ch= image.shape mean = -10 var = 200 sigma = var**0.5 gauss = np.random.normal(mean,sigma,(row,col,ch)) gauss = gauss.reshape(row,col,ch) gaussImage = image + gauss return gaussImage def ApplyFilters(image): median = cv2.medianBlur(image, 5) return median def CleanImage(): image = cv2.imread(&quot;SampleImages/Sample1.png&quot;) noiseImage = GaussianNoise(image) cleanedImage = ApplyFilters(noiseImage) cv2.imshow(&quot;Clean image&quot;, cleanedImage) cv2.waitKey(0) cv2.destroyAllWindows() if __name__ == &quot;__main__&quot;: CleanImage() </code></pre> <p>I don't understand why it says <code>error: (-210:Unsupported format or combination of formats)</code> if it seems to be both valid images (matrixes) and <code>ksize</code> seems to be correct too.</p> <p>What I am doing wrong?</p> <p><strong>EDIT:</strong> image info is <code>col = 1024</code> <code>row = 1024</code> <code>ch = 3</code> <code>dtype = uint8</code></p>
<python><opencv><image-processing><types>
2023-03-10 05:34:56
1
334
Benevos
75,692,571
6,245,473
Look up column in csv file and enter value in corresponding column?
<p>The following code works well, but it does not look up anything from a csv file. Symbols must be entered manually ('MSFT','AAPL','GOOG') in order for values to be retrieved (asOfDate &amp; PbRatio). There are about 300,000 symbols.</p> <pre><code>import pandas as pd from yahooquery import Ticker symbols = ['MSFT','AAPL','GOOG'] header = [&quot;asOfDate&quot;,&quot;PbRatio&quot;] for tick in symbols: faang = Ticker(tick) faang.valuation_measures df = faang.valuation_measures try: for column_name in header : if column_name not in df.columns: df.loc[:,column_name ] = None #Missing columns set to None. df = df[df['PbRatio'].notna()] df = df[df['asOfDate'] == df['asOfDate'].max()] df.to_csv('Output.csv', mode='a', index=True, header=False, columns=header) except AttributeError: continue </code></pre> <p>The csv file looks like this:</p> <p><a href="https://i.sstatic.net/ti42m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ti42m.png" alt="enter image description here" /></a></p> <p>The desired result is this:</p> <p><a href="https://i.sstatic.net/ClIMX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ClIMX.png" alt="enter image description here" /></a></p>
<python><pandas><dataframe><csv><yfinance>
2023-03-10 05:34:17
2
311
HTMLHelpMe
75,692,262
45,843
PyCharm not finding conda pytorch
<p>I'm trying to use PyCharm to edit a program that uses PyTorch, but the IDE is not finding the library.</p> <p>The program runs from the command line, as the conda environment does have PyTorch installed:</p> <pre><code>(torch2) C:\&gt;conda list # packages in environment at C:\Users\russe\Anaconda3\envs\torch2: # # Name Version Build Channel aom 3.5.0 h63175ca_0 conda-forge appdirs 1.4.4 pyh9f0ad1d_0 conda-forge attrs 22.2.0 pyh71513ae_0 conda-forge ... python_abi 3.7 3_cp37m conda-forge pytorch 1.12.1 cpu_py37h5e1f01c_1 pytz 2022.7.1 pyhd8ed1ab_0 conda-forge </code></pre> <p>But PyCharm is not seeing it:</p> <p><a href="https://i.sstatic.net/hyjz5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hyjz5.png" alt="enter image description here" /></a></p> <p>I tried scrolling down in case it was listed under <code>torch</code>, but not there either.</p> <p>Other, similar issues have been addressed here, but this does not seem to be one of them.</p> <p>In <a href="https://stackoverflow.com/questions/54909322/mismatching-conda-and-pycharm">Mismatching Conda and Pycharm</a> the answer points to confusion between pip and conda in the PyCharm configuration. However, as you can see in the above screenshot, I definitely have PyCharm set to use conda; pip is not in the picture at all.</p> <p>In <a href="https://stackoverflow.com/questions/51455571/conda-virtual-environment-not-working-with-pycharm">conda virtual environment not working with pycharm</a> the top answer says</p> <blockquote> <p>This is a known issue in PyCharm on Windows at least. The conda environment is used but not actually activated by PyCharm so environment variables for the env are not loaded. 
It's been a problem for a while, seems it'd be easy to fix but for some reason they haven't fixed it.</p> </blockquote> <blockquote> <p>The only work around is to start PyCharm from a cmd window in which the env is activated, or possibly run the environment activation prior to execution as an external tool.</p> </blockquote> <p>However, I have tried starting PyCharm from a cmd window in which the env is activated:</p> <pre><code>(torch2) C:\t&gt;&quot;C:\Program Files\JetBrains\PyCharm Community Edition 2022.3.3\bin\pycharm64.exe&quot; </code></pre> <p>and the problem persists.</p> <p>Trying to change the path to the Python interpreter doesn't seem to work; the interpreter path field is not editable, and when I try to add a new Python interpreter, I just get this:</p> <p><a href="https://i.sstatic.net/bKTuz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bKTuz.png" alt="enter image description here" /></a></p>
<python><pycharm><anaconda><conda>
2023-03-10 04:27:18
1
34,049
rwallace
75,692,123
9,008,162
Is it more efficient to loop though (a bunch of csv lines) or (list contains dictionary)?
<p>I can download the data as <code>csv</code> or <code>json</code>. If the data is in csv, I use <code>response.text</code> to convert it to text before inserting it into the dictionary. The data sample looks like this:</p> <pre><code>Date,Open,High,Low,Close,Adjusted_close,Volume 1966-07-05,10.9176,11.0872,10.836,10.9176,0.1327,388800 1966-07-06,11.5024,11.5024,10.836,11.5024,0.1398,692550 1966-07-07,11.0872,11.7936,11.0008,11.0872,0.1347,1858950 </code></pre> <p>Then I must loop through all of the data points and insert them into the database using the following code:</p> <pre><code>for ticker, csv_data in data.items(): reader = csv.reader(csv_data.splitlines()) next(reader) for input_data in reader: input_data = [ticker] + [input_data[0], input_data[1], input_data[2], input_data[3], input_data[4]] </code></pre> <p>I can also use <code>response.json()</code> and the data will be in bunch of lists and dictionaries like this:</p> <pre><code>[{'date': '2021-02-09', 'open': 123.41, 'high': 123.49, 'low': 122.36, 'close': 123.24, 'adjusted_close': 121.7803, 'volume': 1988260}, {'date': '2021-02-10', 'open': 124.59, 'high': 125.74, 'low': 123.88, 'close': 125.08, 'adjusted_close': 123.5985, 'volume': 1112597}] </code></pre> <p>Then I'll use the following code to loop through all of the data points and insert them into the database:</p> <pre><code> input_data_columns = &quot;date open high low close&quot;.split(&quot; &quot;) for ticker, symbol_data in data.items(): for date_data in symbol_data: input_data = (ticker,) + tuple(date_data.get(col) for col in input_data_columns) </code></pre> <p>My question is which type of data is more efficient and faster (with the same amount of datapoints)?</p>
<python><json><list><csv><dictionary>
2023-03-10 03:49:13
1
775
saga
75,692,036
3,789,481
Plotly shows wrong data of Gantt chart when using subplot
<p>I have the Gantt chart which is plotted by px.timeline is working well like below image. As it would be overlapping each other and could be easier to review with transparent color as well.</p> <p><a href="https://i.sstatic.net/QONeO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QONeO.png" alt="enter image description here" /></a></p> <p>But when I would like to add it as subplot, it shows something wrong as Red window is gone, like below image</p> <p><a href="https://i.sstatic.net/7CY6P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7CY6P.png" alt="enter image description here" /></a></p> <p>This is full code, you can comment and uncomment 2 rows here to see the different</p> <pre><code># fig = trace_1 #uncomment this line to see the difference fig.add_trace(trace_1.data[0], row=1, col=1) #comment this line to see the difference </code></pre> <pre><code>import pandas as pd import plotly.express as px from plotly.subplots import make_subplots df = pd.DataFrame({ 'Task': ['Task 1', 'Task 2', 'Task 3', 'Task 4', 'Task 5', 'Task 6'], 'Start': [pd.Timestamp('2022-01-01'), pd.Timestamp('2022-01-02'), pd.Timestamp('2022-01-03'), pd.Timestamp('2022-01-03'), pd.Timestamp('2022-01-05'), pd.Timestamp('2022-01-04')], 'Finish': [pd.Timestamp('2022-01-02'), pd.Timestamp('2022-01-04'), pd.Timestamp('2022-01-05'), pd.Timestamp('2022-01-08'), pd.Timestamp('2022-01-06'), pd.Timestamp('2022-01-07')], 'Resource': ['Resource 1', 'Resource 2', 'Resource 3', 'Resource 4', 'Resource 4', 'Resource 4'], 'Type': ['Type_Blue', 'Type_Blue', 'Type_Red', 'Type_Blue', 'Type_Blue', 'Type_Red'] }) fig = make_subplots(rows=2, shared_xaxes=True) trace_1 = px.timeline(df, x_start='Start', x_end='Finish', y='Resource', color='Type', color_discrete_map={'Type_Blue': 'blue', 'Type_Red': 'red'}, opacity=0.2) # fig = trace_1 #uncomment this line to see the difference fig.add_trace(trace_1.data[0], row=1, col=1) #comment this line to see the difference 
fig.update_yaxes(autorange=&quot;reversed&quot;) fig.update_layout(title='Timeline Subplots') fig.update_xaxes(type='date') fig.show() </code></pre> <p>Thank you.</p>
<python><pandas><plotly>
2023-03-10 03:28:31
1
2,086
Alfred Luu
75,691,952
6,137,682
How to apply a method to all values in Enum class?
<p>I have some <code>Enum</code>s which may or may not have clashing names but are in different modules, so I want to apply some sort of prefix / suffix lambda to make them unique without having to manually specify it for each value. How can I alter my <code>Enum</code> class definitions to do something like the following? Thanks!</p> <pre class="lang-py prettyprint-override"><code># foo.py class Ids(Enum): # define some accessor lambda or dunder override? some_id = auto() # many more... # bar.py class Ids(Enum): # define some accessor lambda or dunder override? some_id = auto() # many more... # baz.py from foo import Ids as FooIds from bar import Ids as BarIds print(FooIds.some_id) # foo-some_id print(BarIds.some_id) # bar-some_id </code></pre>
<python><enums>
2023-03-10 03:09:08
1
685
David Owens
75,691,822
1,628,347
python type alias, pydantic constranined list, and mypy
<p>Related to <a href="https://github.com/pydantic/pydantic/issues/975" rel="nofollow noreferrer">https://github.com/pydantic/pydantic/issues/975</a>.</p> <p>I basically have:</p> <pre><code>Class A(BaseModel): x: int Class B(BaseModel): as: # List of A, size &gt;=1 Class C(BaseModel): as: # List of A, size &gt;=1 (same as above) </code></pre> <p><strong>GOAL</strong></p> <ol> <li>Enforce the constraint size &gt;=1.</li> <li>Not have to repeat myself in the definitions of B and C. So hopefully use a type alias for that list of As that I can then use as the type of B and C.</li> <li>Pass mypy.</li> </ol> <p>This works for the constraint and not repeating myself:</p> <pre><code>ListA = conlist(A, min_items=1) </code></pre> <p>However, the following doesn't typecheck with mypy:</p> <pre><code>as: ListA </code></pre> <p><strong>QUESTION</strong></p> <p>Is there a solution that achieves all three of these goals?</p>
<python><pydantic>
2023-03-10 02:41:55
0
1,225
allstar
75,691,795
7,788,402
Expecting property name enclosed in double quotes: line 2 column 1 (char 2)
<p>I have simple Python code to read a JSON file, but I keep getting the error <code>Expecting property name enclosed in double quotes: line 2 column 1 (char 2)</code> when I use <code>json.load()</code> function.</p> <pre><code>import json f = open(json_path) for line in f: print(line) print(json.loads(line)) </code></pre> <p>Many have encountered the same error, but it seems primarily for formatting. I don't see any issues with the formatting here. My double quotations are fine too.</p> <p>This is my JSON file example :</p> <pre><code>{ &quot;ABCD&quot;: { &quot;IMAGE.jpg&quot;:{ &quot;LOSS&quot;:&quot;CAR&quot;, &quot;UNIT_ID&quot;:&quot;14_1&quot;, &quot;DESCRIPTION&quot;:&quot;Marginal&quot;, &quot;IMAGE_NAME&quot;:&quot;car.jpg&quot;, &quot;CODE&quot;:&quot;SURFACE&quot;, &quot;DEPTH&quot;:1.5 } }, &quot;DEFG&quot;: {}, &quot;HIJK&quot;: {}, &quot;LMNO&quot;: {} } </code></pre>
<python><json><file>
2023-03-10 02:34:46
1
2,301
PCG
75,691,599
2,480,947
Multi-part values as hash key/set entries in Ruby
<p>In Python I can use tuples (or any hashable object) as dictionary keys or set members. It's useful for deduping:</p> <pre class="lang-py prettyprint-override"><code>cool_podcast_guests = set([ ('Emma', 'Suter'), ('Dave', 'Warner'), ('Evie', 'Wilde'), ('Emma', 'Suter'), ]) for forename, surname in cool_podcast_guests: interview_these[(forename, surname)] = lookup_address(forename, surname) </code></pre> <p>This (simple-mindedly assuming that forename-surname pairs refer to unique people) deduplicates the list of candidates.</p> <p>How might you do this in Ruby?</p>
<python><ruby><dictionary><duplicates>
2023-03-10 01:44:50
2
1,734
Kim
75,691,516
19,425,874
Printing to a label printer via Python but nothing is happening
<p>Driving myself crazy, desperate need of a Python pro. I have been trying to figure this out for days on days -- at first I wasn't able to figure out setting up the connection. That error is gone now thankfully, but now my code is simply not doing anything. There is no error, but it's also not printing the tab like I need.</p> <p>My goal is to print the tab &quot;Label&quot; that is on my Google Sheet x amount of times. X is defined by F2 on the sheet.</p> <p>Any clue what I'm doing wrong?</p> <pre><code>import gspread from oauth2client.service_account import ServiceAccountCredentials import win32api import win32print import tkinter as tk # Authenticate and open the Google Sheet creds = ServiceAccountCredentials.from_json_keyfile_name('creds.json', [ 'https://spreadsheets.google.com/feeds', 'https://www.googleapis.com/auth/drive']) client = gspread.authorize(creds) sheet = client.open_by_key( '1eRO-30eIZamB5sjBp-Mz2QMKCfGxV037MQO8nS7G7AI').worksheet('Label') class App(tk.Frame): def __init__(self, master=None): super().__init__(master) self.master = master self.master.title('Porter\'s Label Printer') self.master.geometry('400x300') # Set the window size self.counter = 0 self.total_labels = int(sheet.acell('F2').value) self.create_widgets() self.update_counter() def create_widgets(self): self.print_button = tk.Button(self.master, text='Print Labels', command=self.print_labels, font=( 'Arial', 24), bg='lightblue', padx=40, pady=40) self.print_button.pack(fill=tk.BOTH, expand=True) self.counter_label = tk.Label(self.master, text='0 of {} labels printed'.format( self.total_labels), font=('Arial', 18)) self.counter_label.pack(pady=20) self.refresh_button = tk.Button(self.master, text='Refresh', command=self.refresh_labels, font=( 'Arial', 14), bg='lightgrey', padx=20, pady=10) self.refresh_button.pack(pady=20) def print_labels(self): printer_name = 'Test Printer' print_data = sheet.get_all_values() for i in range(int(sheet.acell('F2').value)): print_data_str = 
'\n'.join(['\t'.join(row) for row in print_data]) printer_handle = win32print.OpenPrinter(printer_name) job_info = (&quot;Label&quot;, None, &quot;RAW&quot;) win32print.StartDocPrinter(printer_handle, 1, job_info) win32print.WritePrinter(printer_handle, print_data_str.encode()) win32print.EndDocPrinter(printer_handle) win32print.ClosePrinter(printer_handle) self.counter += 1 self.update_counter() def refresh_labels(self): try: self.total_labels = int(sheet.acell('F2').value) self.counter = 0 self.update_counter() self.print_button.config(state=tk.NORMAL) self.counter_label.config(fg='black') except ValueError: print('Error: could not convert F2 value to an integer') self.total_labels = 0 self.counter_label.config( text='Invalid number of labels!', fg='red') self.print_button.config(state=tk.DISABLED) def update_counter(self): if self.counter == self.total_labels: self.counter_label.config( text='All documents printed!', fg='green') self.print_button.config(state=tk.DISABLED) else: self.counter_label.config(text='{} of {} labels printed'.format( self.counter, self.total_labels)) # Authenticate and open the Google Sheet creds = ServiceAccountCredentials.from_json_keyfile_name('creds.json', [ 'https://spreadsheets.google.com/feeds', 'https://www.googleapis.com/auth/drive']) client = gspread.authorize(creds) sheet = client.open_by_key( '1eRO-30eIZamB5sjBp-Mz2QMKCfGxV037MQO8nS7G7AI').worksheet('Label') # Create a GUI window root = tk.Tk() app = App(master=root) # Run the GUI app.mainloop() </code></pre>
<python><winapi><printing><pywin32>
2023-03-10 01:29:03
0
393
Anthony Madle
75,691,374
13,609,298
How to get the colors of a bar plot
<p>There are a number of similar questions out there, but I am struggling to tailor it to my specific case (which is in fact easier). I have the following data:</p> <pre><code>data = {'CCM1': {'Exact': 15.32, '1 Notch': 36.29, '2 Notches': 45.97}, 'CCM2': {'Exact': 24.19, '1 Notch': 42.74, '2 Notches': 54.03}, 'CCM3': {'Exact': 39.25, '1 Notch': 70.09, '2 Notches': 88.79}, 'CCM4': {'Exact': 29.03, '1 Notch': 65.32, '2 Notches': 88.71}, 'CCQ1': {'Exact': 17.74, '1 Notch': 41.40, '2 Notches': 54.84}, 'CCQ2': {'Exact': 25.27, '1 Notch': 50, '2 Notches': 64.52}, 'CCQ3': {'Exact': 29.57, '1 Notch': 67.74, '2 Notches': 89.25}, 'CCQ4': {'Exact': 31.18, '1 Notch': 67.74, '2 Notches': 88.71}, 'PCM1': {'Exact': 15.05, '1 Notch': 30.65, '2 Notches': 42.47}, 'PCM2': {'Exact': 15.05, '1 Notch': 28.49, '2 Notches': 42.47}, 'PCM3': {'Exact': 22.58, '1 Notch': 44.09, '2 Notches': 64.52}, 'PCM4': {'Exact': 40.32, '1 Notch': 68.28, '2 Notches': 84.95}, 'PCQ1': {'Exact': 12.37, '1 Notch': 26.34, '2 Notches': 32.36}, 'PCQ2': {'Exact': 11.83, '1 Notch': 29.57, '2 Notches': 46.24}, 'PCQ3': {'Exact': 34.41, '1 Notch': 64.52, '2 Notches': 86.02},} </code></pre> <p>Which I can plot using the code below</p> <pre><code>df = pd.DataFrame(data) df = df.T df ['sum'] = df.sum(axis=1) x=df.sort_values('sum', ascending=False)[['Exact','1 Notch','2 Notches']].plot.bar() plt.axhline(y=31.75, color='blue', linestyle='--') plt.axhline(y=76.19, color='orange', linestyle='--') plt.axhline(y=90.48, color='green', linestyle='--') </code></pre> <p>which yields the following plot: <a href="https://i.sstatic.net/TPWFJ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TPWFJ.jpg" alt="enter image description here" /></a></p> <p>In reality, I would like the colours of the lines to assume the exact same colour as the barplots - i.e., the green line assumes the same shade of green as the green bar, the orange line assumes the same shade of orange as the orange bar and so on. 
However, since I do not know what colours have been used to generate the parplots, this is not possible. Some help here would be appreciated!</p>
<python><matplotlib><bar-chart>
2023-03-10 00:57:40
0
311
Carl
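One approach to the bar-color question above: `df.plot.bar()` returns the Axes, and each DataFrame column becomes one bar container on it, so the facecolor of a container's first patch can be reused for the matching `axhline`. A sketch with a trimmed version of the data (the headless backend is only so it runs without a display):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, assumed only for running without a GUI
import pandas as pd

df = pd.DataFrame({"Exact": [15.32, 24.19],
                   "1 Notch": [36.29, 42.74],
                   "2 Notches": [45.97, 54.03]},
                  index=["CCM1", "CCM2"])

ax = df.plot.bar()

# ax.containers holds one BarContainer per column, in column order;
# reuse the facecolor of its first bar for the threshold line.
thresholds = {"Exact": 31.75, "1 Notch": 76.19, "2 Notches": 90.48}
colors = {}
for container, col in zip(ax.containers, df.columns):
    color = container.patches[0].get_facecolor()
    colors[col] = color
    ax.axhline(y=thresholds[col], color=color, linestyle="--")
```

This way the lines track whatever colors matplotlib assigned, so the pairing survives style or palette changes.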
75,691,196
11,080,806
Does pyproject.toml need to exist in the target directory when using poetry to install a local package?
<p>I have a local package, &quot;mypackage&quot;, inside a venv environment. I successfully ran <code>poetry install</code> and I'm able to import it into Python sessions started inside of the same virtual env. Now I want to use mypackage in other projects and environments on my machine, specifically a pipenv environment setup for &quot;myproject&quot;.</p> <p>I followed the steps here: <a href="https://stackoverflow.com/q/64829418/11080806">Install the latest version of my package from working directory into my local environment using Python&#39;s poetry</a> But that leads to: <code>Poetry could not find a pyproject.toml file in C:\Users\jonathan.biemond\PycharmProjects\myproject or its parents</code> Do I need to have a pyproject.toml file in my project directory to import mypackage? myproject is not a package, just a local, undistributed project, so I don't have dependencies to maintain...</p> <p>Here are my steps to setup mypackage:</p> <ol> <li>Create venv for mypackage: <code>python -m venv</code></li> <li><code>pip install poetry</code></li> <li><code>poetry init</code> from mypackage directory to create pyproject.toml for mypackage</li> <li><code>poetry install</code></li> </ol> <p>Now I can use mypackage in the same virtual environment it was created in.</p> <p>But I also want to use it in another virtual environment. Here are my steps to import mypackage:</p> <ol> <li><code>pipenv install numpy</code> from myproject directory to create environment (and install numpy)</li> <li>Install poetry: <code>(Invoke-WebRequest -Uri https://install.python-poetry.org -UseBasicParsing).Content | py - </code></li> <li>Install mypackage: <code>poetry install mypackage</code></li> </ol>
<python><pipenv><python-poetry>
2023-03-10 00:16:22
1
568
Jonathan Biemond
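For the poetry question above: `poetry install` takes no package argument (adding a dependency is `poetry add`), and poetry does need a `pyproject.toml` in the consuming project, so one route is to give myproject a minimal one with a path dependency. A sketch (the relative path, Python constraint, and metadata are assumptions, not values from the post):

```toml
[tool.poetry]
name = "myproject"
version = "0.1.0"
description = "local, undistributed project"
authors = ["Jonathan Biemond"]

[tool.poetry.dependencies]
python = "^3.9"
numpy = "*"
# develop = true installs mypackage in editable mode,
# so local edits are picked up without reinstalling
mypackage = { path = "../mypackage", develop = true }

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```

Since myproject already uses pipenv, `pipenv install -e ../mypackage` is another route that installs the local package editably without bringing poetry into the consuming project at all.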
75,691,072
13,138,364
Plot regression confidence interval using seaborn.objects
<p>How can I use the <code>objects</code> API to plot a regression line with confidence limits (like <a href="https://seaborn.pydata.org/generated/seaborn.regplot.html" rel="nofollow noreferrer"><code>sns.regplot</code></a> / <a href="https://seaborn.pydata.org/generated/seaborn.lmplot.html" rel="nofollow noreferrer"><code>sns.lmplot</code></a>)?</p> <p>Based on the <a href="https://seaborn.pydata.org/generated/seaborn.objects.Band.html" rel="nofollow noreferrer"><code>so.Band</code></a> <code>fmri</code> example, I thought it would be something like this:</p> <pre class="lang-py prettyprint-override"><code>import seaborn as sns import seaborn.objects as so df = sns.load_dataset(&quot;tips&quot;) (so.Plot(df, x=&quot;total_bill&quot;, y=&quot;tip&quot;) .add(so.Dots()) .add(so.Line(), so.PolyFit()) # regression line .add(so.Band(), so.Est()) # confidence band ) </code></pre> <p>But I'm getting an unexpected result (left). I'm expecting something like the <code>regplot</code> band (right).</p> <p><a href="https://i.sstatic.net/t0CPp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t0CPp.png" alt="current and expected output" /></a></p>
<python><pandas><seaborn><scatter-plot><seaborn-objects>
2023-03-09 23:51:30
1
42,007
tdy
75,690,891
2,320,476
How to read the content of a file sent as application/octet-stream
<p>I am using an API that returns application/octet-stream. I want to be able to get the content.</p> <p>Here is what I see in the data section while debugging</p> <pre><code> HTTPHeaderDict({'Cache-Control': 'no-cache', 'Pragma': 'no-cache', 'Content-Length': '20738', 'Content-Type': 'application/octet-stream', 'Expires': '-1', 'Server': '', 'Content-Disposition': 'attachment; filename=Document.pdf', 'X-Powered-By': '', 'Date': 'Thu, 09 Mar 2023 22:42:28 GMT'}) </code></pre> <p>Here is the code I am using get this content</p> <pre><code> response_returned = https.request(method='POST', url=f'{Checker}/Id/Document/', headers=headers, body=body) response_returned.data does not work. </code></pre> <p>How can I get this content in python ?</p>
<python>
2023-03-09 23:11:51
0
2,247
Baba
75,690,641
5,394,072
pandas group by one column, aggregate another column, filter on a different column
<p>says this is my data.</p> <pre><code>pd.DataFrame({'num_legs': [4,4,5,6,7,4,2,3,4, 2,4,4,5,6,7,4,2,3,3,5,5,6], 'num_wings': [2,7,21,0,21,13,23,43, 2,7,21,13,23,43,23,23,23,11,26,32,75,13], 'new_col':np.arange(22)}) </code></pre> <p>I would like to do the following.</p> <ol> <li>Group by 'num_legs', and compute rolling(3, min_periods =1) (rolling mean of past 3, with minimum one value in rolling 3) for the 'new_col'</li> <li>However when computing rolling(3), I don't want to take all values of new_col in the group, I wanted to take values in new_col, which have num_wings &gt;10.</li> <li>Need a transform for #2 value, i.e, I would like to populate the result in #2 above, for all rows in the df.</li> <li>EDIT - Also, rows that have num_wings &gt; 10, should get a rolling mean ( = previous or next row value, by the group)</li> </ol> <p>How can I do this? Something like below is what I am thinking, but it's incorrect.</p> <pre><code>df.groupby('num_legs')['new_col'].transform(lambda x: df.loc[df['num_wings']&gt;10, 'new_col'].rolling(3)) </code></pre>
<python><pandas>
2023-03-09 22:31:10
2
738
tjt
75,690,506
6,245,473
Convert txt python dictionary file to csv data file?
<p>I have a text file with 300,000 records. A sample is below:</p> <pre><code>{'AIG': 'American International Group', 'AA': 'Alcoa Corporation', 'EA': 'Electronic Arts Inc.'} </code></pre> <p>I would like to export the records into a csv file like this:</p> <p><a href="https://i.sstatic.net/JyVDl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JyVDl.png" alt="enter image description here" /></a></p> <p>I tried the following, but it does not put any line breaks between the records. So in excel, the 300,000 records are in two rows, which doesn't fit everything (16,000 column limit in Excel).</p> <pre><code>import pandas as pd read_file = pd.read_csv (r'C:\...\input.txt') read_file.to_csv (r'C:\...\output.csv', index=None) </code></pre>
<python><pandas><dataframe><csv><txt>
2023-03-09 22:09:24
1
311
HTMLHelpMe
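The file in the question above is a Python dict literal, not CSV, which is why `pd.read_csv` produces two giant rows. `ast.literal_eval` can parse it safely, after which `csv.writer` emits one ticker per line. A sketch using an in-memory sample in place of the real input and output files:

```python
import ast
import csv
import io

# Sample matching the file's format: a Python dict literal with single quotes
raw = ("{'AIG': 'American International Group', "
      "'AA': 'Alcoa Corporation', 'EA': 'Electronic Arts Inc.'}")

records = ast.literal_eval(raw)  # safe: parses literals only, never runs code

buf = io.StringIO()  # with a real file: open(r'C:\...\output.csv', 'w', newline='')
writer = csv.writer(buf)
writer.writerow(['symbol', 'name'])
writer.writerows(records.items())  # one (key, value) pair per row

print(buf.getvalue())
```

For 300,000 records this stays a single pass with no pandas needed, and the quoting of names containing commas is handled by the csv module.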
75,690,483
2,359,203
How to use Python ldap3 Kerberos authentication on Linux without editing any config files?
<p>How can I log into an LDAP server with a Kerberos username and password without making any changes to the configuration of the computer that the Python script is running on? <a href="https://ldap3.readthedocs.io/en/latest/bind.html#kerberos" rel="nofollow noreferrer">The ldap3 documentation</a> assumes that Kerberos has already been set up by editing <code>/etc/krb5.conf</code> and running <code>kinit</code>.</p>
<python><linux><ldap><kerberos><ldap3>
2023-03-09 22:04:52
2
887
Alex Henrie
75,690,455
66,580
Quick and dirty replacement for asciinema server
<p>On a slow and low resource dev box I need to save terminal sessions using asciinema. I cannot afford to install the official asciinema-server. I tried to find out what is being sent to the server and save it with a small PHP script. So I created a <code>~/.config/asciinema/config</code> config file and set the api url to one on localhost.</p> <p>I have a very simple script that is hit when a cast is uploaded but it seems nothing is being posted. Here is my script:</p> <pre><code>&lt;?php session_start(); $i = []; $i['post'] = $_POST; $i['get'] = $_GET; $i['request'] = $_REQUEST; $i['server'] = $_SERVER; $i['cookie'] = $_COOKIE; $i['env'] = $_ENV; $i['files'] = $_FILES; $i['session'] = $_SESSION; $i['body'] = file_get_contents('php://input'); $j = json_encode($i); $n = mt_rand(100000000, 999999999); file_put_contents(__DIR__ . &quot;/$n.json&quot;, $j); </code></pre> <p>And here is a captured request where I did a simple <code>ls -la</code> in the terminal and uploaded the session:</p> <pre><code>{ &quot;post&quot;: [], &quot;get&quot;: [], &quot;request&quot;: [], &quot;server&quot;: { &quot;USER&quot;: &quot;********&quot;, &quot;HOME&quot;: &quot;/home/********&quot;, &quot;HTTP_CONNECTION&quot;: &quot;close&quot;, &quot;HTTP_AUTHORIZATION&quot;: &quot;Basic dXNlcm5hbWU6cGFzc3dvcmQK&quot;, &quot;HTTP_USER_AGENT&quot;: &quot;asciinema/2.0.0 CPython/3.6.9 Linux/5.4.0-144-generic-x86_64-with-Ubuntu-18.04-bionic&quot;, &quot;HTTP_HOST&quot;: &quot;test.dev&quot;, &quot;HTTP_ACCEPT_ENCODING&quot;: &quot;identity&quot;, &quot;SCRIPT_FILENAME&quot;: &quot;/var/www/testbench/asciinema/api/asciicasts/index.php&quot;, &quot;REDIRECT_STATUS&quot;: &quot;200&quot;, &quot;SERVER_NAME&quot;: &quot;test.dev&quot;, &quot;SERVER_PORT&quot;: &quot;80&quot;, &quot;SERVER_ADDR&quot;: &quot;127.0.0.1&quot;, &quot;REMOTE_PORT&quot;: &quot;51626&quot;, &quot;REMOTE_ADDR&quot;: &quot;127.0.0.1&quot;, &quot;SERVER_SOFTWARE&quot;: &quot;nginx/1.14.0&quot;, 
&quot;GATEWAY_INTERFACE&quot;: &quot;CGI/1.1&quot;, &quot;REQUEST_SCHEME&quot;: &quot;http&quot;, &quot;SERVER_PROTOCOL&quot;: &quot;HTTP/1.1&quot;, &quot;DOCUMENT_ROOT&quot;: &quot;/var/www/testbench&quot;, &quot;DOCUMENT_URI&quot;: &quot;/asciinema/api/asciicasts/index.php&quot;, &quot;REQUEST_URI&quot;: &quot;/asciinema/api/asciicasts/&quot;, &quot;SCRIPT_NAME&quot;: &quot;/asciinema/api/asciicasts/index.php&quot;, &quot;CONTENT_LENGTH&quot;: &quot;&quot;, &quot;CONTENT_TYPE&quot;: &quot;&quot;, &quot;REQUEST_METHOD&quot;: &quot;GET&quot;, &quot;QUERY_STRING&quot;: &quot;&quot;, &quot;FCGI_ROLE&quot;: &quot;RESPONDER&quot;, &quot;PHP_SELF&quot;: &quot;/asciinema/api/asciicasts/index.php&quot;, &quot;PHP_AUTH_USER&quot;: &quot;********&quot;, &quot;PHP_AUTH_PW&quot;: &quot;********&quot;, &quot;REQUEST_TIME_FLOAT&quot;: 1678396966.030703, &quot;REQUEST_TIME&quot;: 1678396966 }, &quot;cookie&quot;: [], &quot;env&quot;: [], &quot;files&quot;: [], &quot;session&quot;: [], &quot;body&quot;: &quot;&quot; } </code></pre> <p>The function that uploads the cast should be this: <a href="https://github.com/asciinema/asciinema/blob/d34941cd6dc3b38fab4a48b80456722386da2725/asciinema/api.py#L40" rel="nofollow noreferrer">https://github.com/asciinema/asciinema/blob/d34941cd6dc3b38fab4a48b80456722386da2725/asciinema/api.py#L40</a></p> <p>Where else should I look for the cast data?</p>
<python><asciinema>
2023-03-09 22:00:16
0
30,410
Majid Fouladpour
75,690,421
2,525,940
How many QTimers can I have in a pyqt app?
<p>I'm working on a pyqt 5 app that controls a piece of industrial plant. We have a lot of:<br /> Do a for x seconds then do b for y minutes</p> <p>Each of these is implemented with (mostly) single-shot QTimers. There are currently about 110 of these. Most of these won't be running at the same time and timing accuracy is not critical at the sub-second level.</p> <p>We now need to implement safety checks on top of this, for instance<br /> If input c is too high for too long do something.</p> <p>The list of these that came back from the ops team has 250+ conditions and will probably grow.<br /> Sticking with how the code is currently written I'd implement each of these with another QTimer.</p> <p>The code is set up with different objects for different parts of the system and the conditions will be read from a setup file, so implementing the safety system with lots of QTimers is doable, but is it a good idea?</p> <p>Is having 300-400 QTimers in an app sensible? If it causes problems what will the symptons be?<br /> Is there a different approach?<br /> An answer from real world experience would be great.</p>
<python><pyqt><pyqt5>
2023-03-09 21:55:58
2
499
elfnor
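On the QTimer-count question above: Qt timers are cheap, but one pattern sometimes used to keep 300-400 schedules auditable is a single repeating timer driving a heap of (deadline, callback) entries. A framework-agnostic sketch with a manually advanced clock (in a real app, `tick` would be connected to one repeating QTimer's `timeout` signal with `now = time.monotonic()`):

```python
import heapq

class Scheduler:
    """One tick source drives many deadlines, instead of one QTimer each."""

    def __init__(self):
        self._heap = []   # (deadline, sequence, callback); sequence breaks ties
        self._seq = 0

    def call_at(self, deadline, callback):
        heapq.heappush(self._heap, (deadline, self._seq, callback))
        self._seq += 1

    def tick(self, now):
        """Run every callback whose deadline has passed."""
        while self._heap and self._heap[0][0] <= now:
            _, _, callback = heapq.heappop(self._heap)
            callback()

fired = []
sched = Scheduler()
sched.call_at(5.0, lambda: fired.append('check input c'))
sched.call_at(2.0, lambda: fired.append('stop a'))
sched.tick(3.0)   # only the 2.0 deadline has passed
sched.tick(6.0)   # now the 5.0 deadline fires too
```

A side benefit for a safety layer: the heap is a single inspectable list of everything pending, which is much easier to log and reason about than hundreds of independent timer objects.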
75,690,418
5,969,893
Airflow SSHOperator Command Timed Out when executing Python script
<p>I created a DAG that successfully uses SSHOperator to execute a simple Python script from a server(note I have set <code>cmd_timeout = None</code></p> <p>When I change the simple Python script to a more complex script, I get an error for &quot;SSH command timed out&quot;</p> <p>Additionally, if i log into the server, open CMD and execute <code>C:/ProgramData/Python/Scripts/activate.bat &amp;&amp; python C:/Users/Main/Desktop/Python/Python_Script.py</code>, which is the one that &quot;times out&quot;. It is successful, so I don't believe it is an issue with script or access.</p> <p>Is there an additional setting that must changed to avoid the SSH command time out when executing commands?</p> <p>Log:</p> <pre><code>[2023-03-09, 21:31:53 UTC] {ssh.py:123} INFO - Creating ssh_client [2023-03-09, 21:31:53 UTC] {ssh.py:101} INFO - ssh_hook is not provided or invalid. Trying ssh_conn_id to create SSHHook. [2023-03-09, 21:31:53 UTC] {base.py:73} INFO - Using connection ID 'server' for task execution. [2023-03-09, 21:31:53 UTC] {transport.py:1874} INFO - Connected (version 2.0, client OpenSSH_for_Windows_8.1) [2023-03-09, 21:31:53 UTC] {transport.py:1874} INFO - Authentication (password) successful! 
[2023-03-09, 21:31:53 UTC] {ssh.py:465} INFO - Running command: call C:/ProgramData/Python/Scripts/activate.bat &amp;&amp; python C:/Users/Main/Desktop/Python/Python_Script.py [2023-03-09, 21:32:03 UTC] {taskinstance.py:1772} ERROR - Task failed with exception Traceback (most recent call last): File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/ssh/operators/ssh.py&quot;, line 158, in execute result = self.run_ssh_client_command(ssh_client, self.command) File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/ssh/operators/ssh.py&quot;, line 143, in run_ssh_client_command exit_status, agg_stdout, agg_stderr = self.ssh_hook.exec_ssh_client_command( File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/ssh/hooks/ssh.py&quot;, line 526, in exec_ssh_client_command raise AirflowException(&quot;SSH command timed out&quot;) airflow.exceptions.AirflowException: SSH command timed out </code></pre>
<python><ssh><airflow>
2023-03-09 21:55:37
2
667
AlmostThere
75,690,403
12,415,855
Selenium / Accept cookie-window?
<p>i would like to automate the following site: <a href="https://atlas.immobilienscout24.de/" rel="nofollow noreferrer">https://atlas.immobilienscout24.de/</a></p> <p>using this code:</p> <pre><code>from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By import os from webdriver_manager.chrome import ChromeDriverManager if __name__ == '__main__': print(f&quot;Checking Browser driver...&quot;) os.environ['WDM_LOG'] = '0' options = Options() options.add_argument(&quot;start-maximized&quot;) options.add_experimental_option(&quot;prefs&quot;, {&quot;profile.default_content_setting_values.notifications&quot;: 1}) options.add_experimental_option(&quot;excludeSwitches&quot;, [&quot;enable-automation&quot;]) options.add_experimental_option('excludeSwitches', ['enable-logging']) options.add_experimental_option('useAutomationExtension', False) options.add_argument('--disable-blink-features=AutomationControlled') srv=Service(ChromeDriverManager().install()) driver = webdriver.Chrome (service=srv, options=options) waitWD = WebDriverWait (driver, 10) link = f&quot;https://atlas.immobilienscout24.de/&quot; driver.get (link) driver.execute_script(&quot;arguments[0].click();&quot;, waitWD.until(EC.element_to_be_clickable((By.XPATH, '//button[@data-testid=&quot;uc-accept-all-button&quot;]')))) input(&quot;Press!&quot;) </code></pre> <p>When the site opens there allways appears this cookie-window and i am not able to press the accept button with the above code.</p> <p><a href="https://i.sstatic.net/AWvxy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AWvxy.png" alt="enter image description here" /></a></p> <p>Instead i allways get this timeout-exception:</p> <pre><code>$ python test.py Checking Browser driver... 
Traceback (most recent call last): File &quot;G:\DEV\Fiverr\TRY\mapesix\test.py&quot;, line 26, in &lt;module&gt; driver.execute_script(&quot;arguments[0].click();&quot;, waitWD.until(EC.element_to_be_clickable((By.XPATH, '//button[@data-testid=&quot;uc-accept-all-button&quot;]')))) File &quot;G:\DEV\.venv\selenium\lib\site-packages\selenium\webdriver\support\wait.py&quot;, line 90, in until raise TimeoutException(message, screen, stacktrace) selenium.common.exceptions.TimeoutException: Message: Stacktrace: Backtrace: (No symbol) [0x00DB37D3] (No symbol) [0x00D48B81] (No symbol) [0x00C4B36D] (No symbol) [0x00C7D382] (No symbol) [0x00C7D4BB] (No symbol) [0x00CB3302] (No symbol) [0x00C9B464] (No symbol) [0x00CB1215] (No symbol) [0x00C9B216] (No symbol) [0x00C70D97] (No symbol) [0x00C7253D] GetHandleVerifier [0x0102ABF2+2510930] GetHandleVerifier [0x01058EC1+2700065] GetHandleVerifier [0x0105C86C+2714828] GetHandleVerifier [0x00E63480+645344] (No symbol) [0x00D50FD2] (No symbol) [0x00D56C68] (No symbol) [0x00D56D4B] (No symbol) [0x00D60D6B] BaseThreadInitThunk [0x766500F9+25] RtlGetAppContainerNamedObjectPath [0x77C27BBE+286] RtlGetAppContainerNamedObjectPath [0x77C27B8E+238] </code></pre> <p>How can i close this cookie-window?</p>
<python><selenium-webdriver>
2023-03-09 21:53:33
2
1,515
Rapid1898
75,690,331
1,023,753
IronPython3 Host throws "TypeErrorException: metaclass conflict" when inheriting from a .net class
<p>I have the following C# type:</p> <pre><code>[PythonType] public class ScriptBase { public virtual int TestMe() { return 11; } } </code></pre> <p>Then, I'm trying to create a class that inherits from <code>ScriptBase</code> in python:</p> <pre><code>from Base import ScriptBase class TestClass (ScriptBase): pass </code></pre> <p>Trying to execute this script yields: <code>TypeErrorException: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases</code></p> <p>This is how I'm creating the engine and registering the type:</p> <pre><code>// Executed to test ScriptEngine _engine = Python.CreateEngine(); // Create base module ScriptScope _gameScope = _engine.CreateModule(&quot;Base&quot;); // Expose ScriptBase class in the module _gameScope.SetVariable(nameof(ScriptBase), typeof(ScriptBase)); // Load the source from a file stream through a wrapper and compile ScriptSource source = engine.CreateScriptSource(new CodeProvider(inStream), package, Encoding.ASCII, SourceCodeKind.Statements); CompiledCode compiled = source.Compile(); // Create new module for the inheriting classes _scope = engine.CreateModule(package); // Execute the code -- this line throws an exception compiled.Execute(_scope); </code></pre> <p>I've tried importing CLR and defining both <code>__init__</code> and <code>__new__</code>, this changes nothing. The <code>.CreateModule</code> approach works for instantiating and managing existing types, it's just the inheritance part that seems broken - is there any step that I'm missing?</p> <p>I'm not very proficient with python, so I'm not sure what this message complains about. I've tried numerous approaches found online, but always seem to end up in the same place.</p> <p>The runtime is mono, targeting .Net 4.x. 
Here's the full callstack.</p> <pre><code>TypeErrorException: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases IronPython.Runtime.Types.PythonType.FindMetaClass (IronPython.Runtime.Types.PythonType cls, IronPython.Runtime.PythonTuple bases) (at &lt;57a5bd653356491bba8a61ecf9211193&gt;:0) IronPython.Runtime.Operations.PythonOps.MakeClass (IronPython.Runtime.FunctionCode funcCode, System.Func`2[T,TResult] body, IronPython.Runtime.CodeContext parentContext, System.String name, IronPython.Runtime.PythonTuple bases, IronPython.Runtime.PythonDictionary keywords, System.String selfNames) (at &lt;57a5bd653356491bba8a61ecf9211193&gt;:0) Microsoft.Scripting.Interpreter.FuncCallInstruction`8[T0,T1,T2,T3,T4,T5,T6,TRet].Run (Microsoft.Scripting.Interpreter.InterpretedFrame frame) (at &lt;9784853821ea47dbb6bb1f3e03091049&gt;:0) Microsoft.Scripting.Interpreter.Interpreter.Run (Microsoft.Scripting.Interpreter.InterpretedFrame frame) (at &lt;9784853821ea47dbb6bb1f3e03091049&gt;:0) Microsoft.Scripting.Interpreter.LightLambda.Run2[T0,T1,TRet] (T0 arg0, T1 arg1) (at &lt;9784853821ea47dbb6bb1f3e03091049&gt;:0) IronPython.Compiler.PythonScriptCode.RunWorker (IronPython.Runtime.CodeContext ctx) (at &lt;57a5bd653356491bba8a61ecf9211193&gt;:0) IronPython.Compiler.PythonScriptCode.Run (Microsoft.Scripting.Runtime.Scope scope) (at &lt;57a5bd653356491bba8a61ecf9211193&gt;:0) IronPython.Compiler.RuntimeScriptCode.InvokeTarget (Microsoft.Scripting.Runtime.Scope scope) (at &lt;57a5bd653356491bba8a61ecf9211193&gt;:0) IronPython.Compiler.RuntimeScriptCode.Run (Microsoft.Scripting.Runtime.Scope scope) (at &lt;57a5bd653356491bba8a61ecf9211193&gt;:0) Microsoft.Scripting.Hosting.CompiledCode.Execute (Microsoft.Scripting.Hosting.ScriptScope scope) (at &lt;0d164baeace648b2930cfd25249cf9f3&gt;:0) (wrapper remoting-invoke-with-check) Microsoft.Scripting.Hosting.CompiledCode.Execute(Microsoft.Scripting.Hosting.ScriptScope) </code></pre>
<python><c#><.net><ironpython>
2023-03-09 21:43:51
1
3,065
noisy cat
75,690,295
9,329,400
Python import from package file inside same directory
<p>I'm trying to create my first <a href="https://github.com/josh-bone/sportsrefscraper" rel="nofollow noreferrer">python package</a>, and I have a dicrectory structured like this:</p> <pre><code>. β”œβ”€β”€ LICENSE β”œβ”€β”€ README.md β”œβ”€β”€ requirements.txt β”œβ”€β”€ setup.py β”œβ”€β”€ test_scrape.py β”œβ”€β”€ sportsrefscraper β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ config.py β”‚ β”œβ”€β”€ games.py β”‚ β”œβ”€β”€ league.py β”‚ β”œβ”€β”€ players.py β”‚ β”œβ”€β”€ teams.py | β”œβ”€β”€ utils.py | └── ... └── ... </code></pre> <p>I would think that I can import from utils inside teams.py, but when I run test_scrape.py</p> <pre><code>&quot;&quot;&quot;test_scrape.py &quot;&quot;&quot; import datetime as dt from sportsrefscraper.players import scrape_game_logs, scrape_per100 from sportsrefscraper.games import scrape_boxscores, scrape_play_by_play, scrape_shot_chart from sportsrefscraper.teams import get_roster from sportsrefscraper.config import TEAMNAME_TO_ID, PLAYERS import random GAMES = [ [dt.date(2023, 3, 4), 'PHI', 'MIL'] ] class TestScrape: def test_x(self): ... </code></pre> <p>I get this error</p> <pre><code>________________________________________________________ ERROR collecting test_scrape.py ________________________________________________________ ImportError while importing test module '/&lt;user&gt;/Documents/sportsrefscraper/test_scrape.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: ../../miniconda3/envs/test_sportsref/lib/python3.11/importlib/__init__.py:126: in import_module return _bootstrap._gcd_import(name[level:], package, level) test_scrape.py:5: in &lt;module&gt; from sportsrefscraper.teams import get_roster sportsrefscraper/teams.py:4: in &lt;module&gt; from utils import teamname_to_id E ModuleNotFoundError: No module named 'utils' </code></pre>
<python><import><package>
2023-03-09 21:39:04
1
610
JTB
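The traceback above points at `from utils import teamname_to_id` inside `teams.py`: once `sportsrefscraper` is imported as a package, sibling modules must be imported relatively (`from .utils import ...`) or fully qualified (`from sportsrefscraper.utils import ...`). A runnable sketch that builds a throwaway stand-in package to show the relative import working (package name and function bodies are illustrative, not the real project's):

```python
import sys
import tempfile
import textwrap
from pathlib import Path

root = Path(tempfile.mkdtemp())
pkg = root / "sportsrefscraper_demo"   # hypothetical stand-in package
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "utils.py").write_text(
    "def teamname_to_id(name):\n    return name.lower()\n")
# The key line: a relative import resolves inside the package,
# whereas 'from utils import ...' looks for a top-level module.
(pkg / "teams.py").write_text(textwrap.dedent("""
    from .utils import teamname_to_id   # not: from utils import teamname_to_id

    def get_roster(team):
        return teamname_to_id(team)
"""))

sys.path.insert(0, str(root))
from sportsrefscraper_demo.teams import get_roster
print(get_roster("PHI"))
```

With the real project, changing `teams.py` to `from .utils import teamname_to_id` (and likewise in any other module importing siblings) should let the pytest run find everything.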
75,690,171
7,517,192
How can I make poetry install the Python version required in pyproject.toml?
<p>One thing about poetry seems really odd to me:</p> <p>When you install poetry, it installs the latest Python version and uses this as a default for all poetry projects. If a different Python version is required as per a per the project's pyproject.toml, the poetry documentation states that you must point poetry to a suitable Python executable via <code>poetry env use</code>.</p> <p>But that means that you need to already have said Python version installed manually.</p> <p>Isn't the whole point of a dependency manager to install all the right dependency versions for you inside the virtual environment, including the required Python version?</p> <p>Is there a way to get poetry to install the right Python version automatically or do I have to manually manage the various Python versions that all my poetry projects require?</p>
<python><python-3.x><python-poetry>
2023-03-09 21:23:27
2
3,626
Alex
75,690,154
6,296,919
mark duplicate as 0 in new column based on condition
<p>I have dataframe as below</p> <pre><code>data =[['a',96.21623993,1], ['a',99.88211060,1], ['b',99.90232849,1], ['b',99.91232849,1], ['b',99.91928864,1], ['c',99.89162445,1], ['d',99.95264435,1], ['a',99.82862091,2], ['a',99.84466553,2], ['b',99.89685059,2], ['c',78.10614777,2], ['c',97.73305511,2], ['d',95.42383575,2], ] df = pd.DataFrame(data, columns=['ename','score', 'groupid']) df </code></pre> <p>I need to mark duplicate as 0 in new column but NOT the one with highest score. and should be grouping on groupid and ename.</p> <p>I am looking to get output as below:</p> <pre><code>ename score groupid duplicate a 96.21624 1 TRUE a 99.882111 1 FALSE b 99.902328 1 TRUE b 99.912328 1 TRUE b 99.919289 1 FALSE c 99.891624 1 FALSE d 99.952644 1 FALSE a 99.828621 2 TRUE a 99.844666 2 FALSE b 99.896851 2 FALSE c 78.106148 2 TRUE c 97.733055 2 FALSE d 95.423836 2 FALSE </code></pre>
<python><python-3.x><dataframe><group-by>
2023-03-09 21:20:35
2
847
tt0206
75,690,113
9,983,652
regular expression with group to a string including ( ) and dash
<p>I am trying to use regular expression to find number. But I never succeed. Can anyone help me with it?</p> <pre><code>import re item='800-850(0.2)' find_list=re.findall(r'(\d+)\-(\d+)\((\d+)\)$',item) print(find_list) [] </code></pre> <p>The above return empty [], what I wanted is [800,850,0.2]. I am using () to capture group. do I use ( and ) to indicate ( or ) ? and use - to indicate -</p> <p>Thanks for your help.</p>
<python>
2023-03-09 21:15:39
1
4,338
roudan
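The pattern above fails because `\d+` cannot match `0.2`: the dot is not a digit, so the final group never matches and `findall` returns `[]`. Allowing an optional fractional part fixes it (the `\-` escape is harmless but unnecessary outside a character class):

```python
import re

item = '800-850(0.2)'

# (?:\.\d+)? allows an optional fractional part inside the parentheses
find_list = re.findall(r'(\d+)-(\d+)\((\d+(?:\.\d+)?)\)$', item)
print(find_list)    # [('800', '850', '0.2')]

# findall returns strings; convert when numeric values are wanted
numbers = [float(x) for x in find_list[0]]
print(numbers)      # [800.0, 850.0, 0.2]
```

If the first two numbers could also carry decimals, using `(\d+(?:\.\d+)?)` for all three groups generalises the pattern.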
75,689,975
7,613,669
Python Pulp: Optimise while limiting element frequency between groups
<p>I am trying to optimise a problem with several constraints, where elements (e.g. person) are duplicated between groups and I need to select the correct elements such that the total <strong>groups scores</strong> are maximised.</p> <ol> <li>I can only select an element (e.g person) <strong>1 time</strong>.</li> <li>A person can be initially assigned to multiple groups</li> <li>Each group can only consist of <strong>2 people</strong></li> <li>Each person has a difference score for each group they are initially assigned to</li> </ol> <p>I have created a toy example below:</p> <pre><code># example data data = pd.DataFrame({ 'group': ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'C'], 'person': [1, 2, 3, 4, 1, 2, 5, 6, 1, 4, 7], 'score': [3, 1, 0.5, 1, 0.7, 0.8, 1.5, 2, 1.5, 2, 1.1] }) prob = pulp.LpProblem(&quot;group_scores&quot;, sense=pulp.LpMaximize) group = pulp.LpVariable.dicts(&quot;group&quot;, data['group'], cat=pulp.LpBinary) person = pulp.LpVariable.dicts(&quot;group&quot;, data['person'].unique(), cat=pulp.LpBinary) # CONSTRAINTS # set group size prob += pulp.lpSum(group) == 2 # 1 person cannot be seletced more than once for p in person: prob += pulp.lpSum(p) &lt;= 1 # objective is to maximise score per group and overall # TODO: I have a strong feeling this objective function is incorrect prob += pulp.lpSum([person[p] * data[data['person'] == p]['score'].values[0] for p in person]) prob.solve() results = [] for v in prob.variables(): results.append({ 'group': v.name.split('group_')[1], 'selected': True if v.varValue else False }) results = pd.DataFrame(results) </code></pre>
<python><optimization><pulp>
2023-03-09 20:58:12
0
348
Sharma
75,689,924
99,717
python sqlite3 table disappearing for no apparent reason
<p>I have a python sqlite3 db with a single table that for no obvious reason 'disappears' -- I guess it's getting dropped. The database file is still there, and does not seem to be 'corrupted': I can connect to it via CLI and execute basic operations on it:</p> <pre><code>sqlite&gt; SELECT name FROM sqlite_schema WHERE type='table' ORDER BY name; </code></pre> <p>returns nothing. So far, it's only happening in dev, and only happens a few days after restart. There's very little traffic on the dev server, mostly just me testing the app.</p> <p><strong>I'd welcome any ideas on how to debug this problem</strong>. There are no errors in the logs that point to a <em>cause</em> of the table dropping. I've added some more verbose logging, but there's nothing that I can see happening that causes the table to drop. Just, at some point, it's gone, and I start getting <code>no such table</code> errors:</p> <pre><code># Everything looking fine.... 2023-03-07 17:14:43,039 139982591223616 - application INFO - Updating hash value for PID: 12933 2023-03-07 17:14:43,039 139982591223616 - application INFO - Connecting to: /some_path/mydatabase.db 2023-03-07 17:14:43,047 139982591223616 - application INFO - Connecting to: /some_path/mydatabase.db 2023-03-07 17:14:43,063 139982591223616 - application INFO - Got 7 cache items. This is 7 more than the previous set. 2023-03-07 17:14:43,064 139982591223616 - application INFO - Connecting to: /some_path/mydatabase.db 2023-03-07 17:14:43,072 139982591223616 - application INFO - MONOMERS STATUS (pid=12932) cache_data_hash=d49e11e052832ed7de03f38fa61c09cabdae66473991ae3e9d02041f019983ae (pid=12933) cache_data_hash=d49e11e052832ed7de03f38fa61c09cabdae66473991ae3e9d02041f019983ae ***** 2023-03-07 17:14:43,072 139982591223616 - uvicorn.error INFO - Application startup complete. 2023-03-07 17:14:43,072 139982591223616 - uvicorn.error INFO - Uvicorn running on socket ('127.0.0.1', 8200) (Press CTRL+C to quit) # Then, the next day, suddenly... 
2023-03-08 15:46:10,190 140122358679360 - application INFO - Connecting to: /some_path/mydatabase.db 2023-03-08 15:46:10,733 140122358679360 - application ERROR - Traceback (most recent call last): File &quot;/some_path/my_app/web_service.py&quot;, line 74, in whitelist_authentication_and_log_all_exceptions if should_update_cache_data(): File &quot;/some_path/my_app/operations/update_cache_data.py&quot;, line 80, in should_update_cache_data return _should_update_cache_data(os.getpid()) File &quot;/some_path/my_app/operations/update_cache_data.py&quot;, line 84, in _should_update_cache_data cache_data_hash: Union[str, None] = _get_hash_for_pid(pid) File &quot;/some_path/my_app/operations/update_cache_data.py&quot;, line 94, in _get_hash_for_pid cursor.execute(f&quot;select * from {TABLE_NAME} where pid=?&quot;, (pid,)) sqlite3.OperationalError: no such table: MY_CACHE_TABLE </code></pre> <p>My best, weak guess is that something in my db access code, maybe combined with a concurrency issue (multi-process) is somehow breaking something -- an unclosed cursor/connection, a race condition... I've scrutinized the code as best I can. I've included all of it below.</p> <p>The db is for in memory cache synchronization across multiple uvicorn processes. The single table has two columns: the PID and a hash of the cached data. When I receive a cache update notification, I update the cache for the current process, then invalidate the caches of all the other processes by setting the hash for the PID to an empty string. 
Each process polls the table, and if it finds an empty string for its PID, it updates the cache and sets the new hash.</p> <pre><code>PID HASH 2385 d49e11e052832ed7de03f38fa61c09cabdae66473991ae3e9d02041f019983ae 9823 d49e11e052832ed7de03f38fa61c09cabdae66473991ae3e9d02041f019983ae </code></pre> <p>I re-create the db every time during deployment (I never call <code>create_db()</code> or <code>create_table()</code> during runtime), and update the cache and table at startup -- there's no need to save any of the data across deployments/restarts.</p> <pre><code>def create_db(): create_table() def get_database_connection() -&gt; sqlite3.Connection: try: connection = sqlite3.connect(settings.DB_FILE_NAME) except sqlite3.Error as exc: raise RuntimeError(f&quot;Cannot connect to {os.path.abspath(settings.DB_FILE_NAME)}&quot;) from exc return connection def create_table() -&gt; None: &quot;&quot;&quot; Establishing a connection creates the db if it doesn't exist. &quot;&quot;&quot; database_connection: sqlite3.Connection = get_database_connection() cursor: sqlite3.Cursor try: cursor = database_connection.cursor() cursor.execute(f&quot;DROP TABLE IF EXISTS {TABLE_NAME}&quot;) cursor.execute( f&quot;CREATE TABLE {TABLE_NAME} (PID INTEGER, CACHE_HASH TEXT)&quot;) except sqlite3.Error as err: raise RuntimeError(&quot;Error creating database.&quot;) from err finally: cursor.close() database_connection.commit() database_connection.close() if __name__ == &quot;__main__&quot;: create_db() </code></pre> <p>Here is all of the other code that touches the db.
Basically, I try to use the context manager (<code>with</code>) whenever possible, and otherwise try to make sure I'm committing and closing everything properly:</p> <pre><code>PID = 0 CACHE_DATA_HASH = 1 async def update_cache_data_and_invalidate_cache(): result_message: dict[str, str] = await update_cache_data() invalidate_cache_for_all_other_processes() _log_cache_data_info() return result_message async def update_cache_data() -&gt; dict[str, str]: cache_data = await _get_cache_data() thin_cache_data: CacheData = CacheData(cache_data) cache_data_refresh_diff = len(thin_cache_data.items) - len(settings.cache_data.items) if len(thin_cache_data.items) &gt; 0: settings.cache_data = thin_cache_data set_cache_data_up_to_date() return {&quot;message&quot;: f&quot;Successfully received {len(thin_cache_data.items)} items.&quot;} error_message: str = (&quot;ERROR: items refresh request did not error, but we received 0 items.&quot;) logger.error(error_message) _log_cache_data_info() raise RuntimeError(error_message) async def _get_cache_data() -&gt; list[dict]: async with httpx.AsyncClient(verify=False) as client: response = await client.get(cache_data_url, timeout=15.0) cache_data: list[dict] = response.json() return cache_data def should_update_cache_data() -&gt; bool: return _should_update_cache_data(os.getpid()) def _should_update_cache_data(pid: int) -&gt; bool: cache_data_hash: Union[str, None] = _get_hash_for_pid(pid, include_connection_logging=False) if cache_data_hash is None or cache_data_hash == '': return True return False def _get_hash_for_pid(pid: int, include_connection_logging=True) -&gt; Union[str, None]: cache_data_hash: Union[str, None] = None with get_database_connection(include_logging=include_connection_logging) as conn: cursor: sqlite3.Cursor = conn.cursor() cursor.execute(f&quot;select * from {TABLE_NAME} where pid=?&quot;, (pid,)) result: Union[tuple, None] = cursor.fetchone() cursor.close() if result is not None: cache_data_hash = 
result[CACHE_DATA_HASH] return cache_data_hash def set_cache_data_up_to_date() -&gt; None: current_pid: int = os.getpid() _set_cache_data_up_to_date(current_pid) def _set_cache_data_up_to_date(pid: int) -&gt; None: cache_data_hash: Union[str, None] = _get_hash_for_pid(pid) with get_database_connection() as conn: if cache_data_hash is None: conn.execute(f&quot;insert into {TABLE_NAME} values (?, ?)&quot;, (pid, settings.cache_data.hash)) else: conn.execute( f&quot;update {TABLE_NAME} set cache_data_hash = ? where pid = ?&quot;, (settings.cache_data.hash, pid)) def invalidate_cache_for_all_other_processes() -&gt; None: with get_database_connection() as conn: process_ids = [] for row in conn.execute(f&quot;select * from {TABLE_NAME}&quot;): process_ids.append(row[PID]) # Invalidate the cache for all other processes by setting the hash to an empty string this_process_pid: int = os.getpid() for pid in process_ids: if pid != this_process_pid: conn.execute(f&quot;update {TABLE_NAME} set cache_data_hash = ? where pid = ?&quot;, ('', pid)) def _generate_cache_data_info() -&gt; str: cache_data_info: str = &quot;\nCACHE STATUS\n&quot; got_rows: bool = False with get_database_connection() as conn: for row in conn.execute(f&quot;select * from {TABLE_NAME}&quot;): got_rows = True if row[PID] == os.getpid(): cache_data_info += f&quot;(pid={row[PID]}) cache_data_hash={row[CACHE_DATA_HASH]} *****\n&quot; else: cache_data_info += f&quot;(pid={row[PID]}) cache_data_hash={row[CACHE_DATA_HASH]}\n&quot; if not got_rows: cache_data_info += f&quot;{TABLE_NAME} is empty.&quot; return cache_data_info def _log_cache_data_info() -&gt; None: logger.info(_generate_cache_data_info()) </code></pre>
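One way to get exactly this symptom (an intact-looking file, but `no such table`) is worth ruling out here: `sqlite3.connect` silently creates a brand-new, empty database whenever the path does not exist. If `settings.DB_FILE_NAME` is a relative path and the server process's working directory ever differs from the deploy script's, the app ends up on a fresh, table-less file while the real one sits elsewhere. A minimal demonstration:

```python
import os
import sqlite3
import tempfile

# sqlite3.connect() raises no error for a nonexistent path -- it just
# creates an empty database there, with no tables at all.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, 'was_never_deployed.db')
    conn = sqlite3.connect(path)               # file springs into existence
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
    conn.close()
    created = os.path.exists(path)

print(created, tables)
```

Separately, note that `with get_database_connection() as conn:` only wraps a transaction (commit/rollback); it never closes the connection, so the code above leaks one connection per call. That leak may or may not relate to the disappearance, but it is worth fixing regardless, e.g. with `contextlib.closing` around the connection.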
<python><sqlite>
2023-03-09 20:51:54
1
8,851
Hawkeye Parker
75,689,914
6,552,666
How to tell python to use same directory for file
<p>I need this python project to be useable for multiple users after downloading it from git, but I'm having a problem generalizing paths. I have it set to use absolute paths in my version, but that will break for others, and if I'm using a file in the same directory as the module, python can't find it if I use the relative path (i.e., <code>with open('foo.txt') as f</code>).</p> <p>I settled on a text file called properties.txt in the project directory, and a module that reads it into a dict. Right now it just has one line, <code>MAIN_DIR=/my/home/directory</code>. The problem is circular though. I can't use a relative path to read that text file either.</p> <p>I'm confident this solution is overengineered. There's got to be a way to get around that, or to get around the problem with relative paths in the first place?</p>
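The common pattern here sidesteps the properties-file bootstrap problem entirely: resolve data files relative to the module's own location via `__file__`, which is computed at runtime and therefore works for anyone who clones the repo. A sketch (the `foo.txt` name is just illustrative):

```python
from pathlib import Path

# Absolute directory containing this module, regardless of the
# caller's current working directory.
MODULE_DIR = Path(__file__).resolve().parent

def read_data(name: str) -> str:
    """Read a data file (e.g. 'foo.txt') that ships alongside this module."""
    return (MODULE_DIR / name).read_text()
```

`open('foo.txt')` fails because bare relative paths resolve against the process's working directory, not the module's directory; anchoring on `__file__` removes that dependency, so no per-user configuration is needed.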
<python>
2023-03-09 20:51:20
1
673
Frank Harris
75,689,889
10,387,506
Python API call pagination issue
<p>I am stuck on the following issue: I have the code below for an API call. In and of itself, the API call is fine, no server issues, I have checked it in Postman:</p> <pre><code>{ &quot;@odata.context&quot;: &quot;https://api.channeladvisor.com/v1/$metadata#Orders&quot;, &quot;value&quot;: [ {....} ], &quot;@odata.nextLink&quot;: &quot;https://api.channeladvisor.com/v1/Orders?refresh_token=xxx&amp;$skip=100&quot; } </code></pre> <p>I am new to all of this, so please bear with me. All I want to do is save all of the records into a data frame at the end, not just the first 100 records. Channel Advisor's documentation is not really helpful, either, and I haven't been able to find what I need when searching on Google.</p> <p><a href="https://stackoverflow.com/questions/66754620/how-to-combine-all-pages-from-api-into-one-json-response-with-python">I found a post here</a> that I massaged to add on the &quot;while&quot; loop, and I had to modify it to fit my requirements.</p> <p><strong>Code version 1 (full):</strong></p> <pre><code>import requests import pandas as pd import os import json def get_access_token(): url = 'https://api.channeladvisor.com/oauth2/token' headers = {'Authorization': 'Basic xxxxx'} post_body = {'grant_type': 'refresh_token', 'refresh_token': 'xxxxx'} response = requests.post(url, headers=headers, data=post_body) return response.json()['access_token'] access_token = get_access_token() url = 'https://api.channeladvisor.com/v1/orders' headers = { 'Authorization': 'bearer ' + access_token, 'accept': 'text/plain', 'Content-Type': 'application/json' } response = requests.get(url, headers=headers) response_data = response.json()['value'] response_nextlink = response.json()['@odata.nextLink'] #create list to store the values data = [] data.extend(response_data) while True: if '@odata.nextLink' in response_nextlink: response = requests.request('GET', response_data['@odata.nextLink'], headers=headers) response_data = response.json()['value'] 
data.extend(response_data['value']) response_nextlink = response.json()['@odata.nextLink'] else: break </code></pre> <p>This code only runs once, likely because the while loop is not set up correctly.</p> <p>So, I added some error handling:</p> <p><strong>Code version 2:</strong></p> <pre><code>while True: response = requests.get(url, headers=headers) if response.status_code != 200: print(&quot;Error occurred. Status code: &quot;, response.status_code) print(&quot;Content: &quot;, response.content) break response_data = response.json().get('value') if not response_data: break data.extend(response_data) if '@odata.nextLink' in response.json(): url = response.json()['@odata.nextLink'] else: break </code></pre> <p>The result of this is a Status code 400 runtime error. Again, the server is fine when I try the call on Postman.</p> <p>So, I modified the while loop again:</p> <p><strong>Code version 3:</strong></p> <pre><code>while True: response = requests.get(url, headers=headers) response_data = response.json()['value'] data.extend(response_data) if '@odata.nextLink' in response.json(): url = response.json()['@odata.nextLink'] else: break </code></pre> <p>This produces the following error:</p> <pre><code>--------------------------------------------------------------------------- JSONDecodeError Traceback (most recent call last) Input In [80], in &lt;cell line: 25&gt;() 24 while True: 25 response = requests.get(url, headers=headers) ---&gt; 26 response_data = response.json()['value'] 27 data.extend(response_data) 28 if '@odata.nextLink' in response.json(): ................... JSONDecodeError: Expecting value: line 1 column 1 (char 0) </code></pre> <p>I am otherwise stuck. What am I doing wrong?</p>
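Version 1's loop can never follow the link: `response_data` is the *list* stored under `'value'`, so `response_data['@odata.nextLink']` is indexing a list with a string. The follow-the-link logic is easier to get right (and to test) when separated from HTTP. A sketch with an injectable fetcher — the fake pages below stand in for the real API, and in production `fetch` would be something like `lambda u: requests.get(u, headers=headers).json()`:

```python
def fetch_all(url, fetch):
    """Collect every record from an OData-style paginated endpoint."""
    records = []
    while url:
        page = fetch(url)
        records.extend(page.get('value', []))
        url = page.get('@odata.nextLink')   # missing key -> None -> stop
    return records

# Fake three-page API standing in for /v1/orders:
pages = {
    '/orders':         {'value': [1, 2], '@odata.nextLink': '/orders?$skip=2'},
    '/orders?$skip=2': {'value': [3, 4], '@odata.nextLink': '/orders?$skip=4'},
    '/orders?$skip=4': {'value': [5]},
}
orders = fetch_all('/orders', pages.__getitem__)
print(orders)   # [1, 2, 3, 4, 5]
```

At the end, `pd.DataFrame(orders)` gives the single data frame. As for version 2's 400: the `@odata.nextLink` in the sample response carries a `refresh_token` query parameter, and re-requesting it while also sending the bearer `Authorization` header may be what the server rejects — that is a guess worth checking by comparing the failing URL against what Postman sends.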
<python><json>
2023-03-09 20:48:37
1
333
Dolunaykiz
75,689,761
1,185,242
How do you detect the largest set of parallel lines in an image?
<p>I have images with multiple lines in them and I'm looking to detect the largest set of lines which are (approximately) parallel, using Python and OpenCV. For example, given:</p> <p><a href="https://i.sstatic.net/UI5Mr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UI5Mr.png" alt="enter image description here" /></a></p> <p>The green lines are the best set:</p> <p><a href="https://i.sstatic.net/iFkY9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iFkY9.png" alt="enter image description here" /></a></p>
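Once a line detector (e.g. `cv2.HoughLinesP`) has produced segments, "largest near-parallel set" reduces to grouping directions modulo 180 degrees, since a line at 179 degrees is nearly parallel to one at 1 degree. A plain-Python sketch of that grouping step — the O(n^2) scan is fine at typical Hough output sizes, and the tolerance is an assumed parameter to tune:

```python
import math

def ang_diff(a, b):
    """Smallest difference between two undirected line angles, in degrees."""
    d = abs(a - b) % 180.0
    return min(d, 180.0 - d)

def largest_parallel_set(segments, tol_deg=3.0):
    """segments: ((x1, y1), (x2, y2)) tuples.  Returns the indices of the
    largest group of segments whose directions agree within tol_deg."""
    angles = [math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
              for (x1, y1), (x2, y2) in segments]
    best = []
    for a in angles:                      # each segment as a candidate center
        group = [i for i, b in enumerate(angles) if ang_diff(a, b) <= tol_deg]
        if len(group) > len(best):
            best = group
    return best

segments = [
    ((0, 0), (10, 0)),     # horizontal
    ((0, 1), (10, 1)),     # horizontal
    ((0, 2), (10, 2.1)),   # ~0.6 degrees off horizontal
    ((0, 0), (0, 10)),     # vertical
    ((1, 0), (1, 10)),     # vertical
    ((0, 0), (10, -0.2)),  # ~179.4 degrees: wraps to near-horizontal
]
print(largest_parallel_set(segments))   # [0, 1, 2, 5]
```

The wrap-around in `ang_diff` is the easy thing to miss: naive angle binning splits 179-degree and 1-degree lines into different clusters even though they are nearly parallel.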
<python><algorithm><geometry>
2023-03-09 20:33:49
1
26,004
nickponline
75,689,738
10,045,805
How can I dynamically populate a table in Google Doc using their API?
<p>In Python, I have a list of dicts with all the data I want to display inside a table in a Google Doc:</p> <pre class="lang-py prettyprint-override"><code>fruits = [ {&quot;name&quot;: &quot;apple&quot;, &quot;description&quot;:&quot;juicy&quot;, &quot;stuff&quot;: &quot;why not?&quot;}, {&quot;name&quot;: &quot;banana&quot;, &quot;description&quot;:&quot;healthy&quot;, &quot;stuff&quot;: &quot;yeah!&quot;}, ... ] </code></pre> <p>Each item in the <code>fruits</code> list correspond to a row in the table I want to populate. Each value inside theses items is a new cell on that row.</p> <p>Now, I have a Google Doc I use as a template. There's an empty table in it: <a href="https://i.sstatic.net/YuKgz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YuKgz.png" alt="enter image description here" /></a></p> <p>I want to dynamically populate that table with the previous <code>fruits</code> list in my Python code.</p> <p>This is my code so far:</p> <pre class="lang-py prettyprint-override"><code>SCOPES = ['https://www.googleapis.com/auth/drive'] CREDENTIALS = ServiceAccountCredentials.from_json_keyfile_name(os.environ[&quot;CREDS&quot;], scopes=SCOPES) GOOGLE_DOCS_SERVICE = build('docs', 'v1', credentials=CREDENTIALS) def get_table_requests(fruits): requests = [] # We create new rows in the selected table # This part works ! for i in range(0, len(fruits) - 1): requests.append( { &quot;insertTableRow&quot;: { &quot;tableCellLocation&quot;: { &quot;tableStartLocation&quot;: {&quot;index&quot;: 2488}, &quot;rowIndex&quot;: 1, &quot;columnIndex&quot;: 1, }, &quot;insertBelow&quot;: &quot;true&quot;, } } ) # We insert the text into the table cells using the insertText object. # This part does NOT work ! 
index = 2580 # set index to first empty cell for fruit in fruits: for _, value in fruit.items(): value += &quot;\n&quot; requests.append({&quot;insertText&quot;: {&quot;location&quot;: {&quot;index&quot;: index}, &quot;text&quot;: str(value)}}) index += len(str(value)) + 2 # set index to next cell index += 1 # set index to new row return requests def modify_document(new_doc_id): template_result = GOOGLE_DOCS_SERVICE.documents().get(documentId=TEMPLATE_FILE_ID).execute() fruits = [ {&quot;name&quot;: &quot;apple&quot;, &quot;description&quot;:&quot;juicy&quot;, &quot;stuff&quot;: &quot;why not?&quot;}, {&quot;name&quot;: &quot;banana&quot;, &quot;description&quot;:&quot;healthy&quot;, &quot;stuff&quot;: &quot;yeah!&quot;} ] requests = get_table_requests(fruits) GOOGLE_DOCS_SERVICE.documents().batchUpdate(documentId=new_doc_id, body={'requests': requests}).execute() </code></pre> <p>The first part (<code>insertTableRow</code>) works: rows are created. But the second part (<code>insertText</code>) fails, even with the right index. I get this error:</p> <pre class="lang-bash prettyprint-override"><code>googleapiclient.errors.HttpError: &lt;HttpError 400 when requesting https://docs.googleapis.com/v1/documents/some_id:batchUpdate?alt=json returned &quot;Invalid requests[6].insertText: The insertion index must be inside the bounds of an existing paragraph. You can still create new paragraphs by inserting newlines.&quot;. Details: &quot;Invalid requests[6].insertText: The insertion index must be inside the bounds of an existing paragraph. You can still create new paragraphs by inserting newlines.&quot;&gt; </code></pre> <p>I don't really understand this error. I tried to add some newlines but it doesn't work.</p>
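A likely cause of that 400: every `insertTableRow` and every `insertText` shifts the indexes of all content after it, so a hardcoded starting index like 2580 goes stale as soon as the earlier requests in the batch apply. A common remedy is two-phase: first `batchUpdate` only the row insertions, then re-fetch the document with `documents().get()` to read the *actual* start index of each cell, and finally emit the `insertText` requests sorted by descending index — an insertion never moves anything before itself, so the remaining (smaller) indexes stay valid. A sketch of that last step, with placeholder indexes rather than values from a real document:

```python
def text_requests(cells):
    """Build insertText requests from (index, text) pairs, applied in
    descending index order so earlier cell indexes are never invalidated."""
    return [
        {'insertText': {'location': {'index': i}, 'text': t}}
        for i, t in sorted(cells, key=lambda c: c[0], reverse=True)
    ]

# Hypothetical cell-content indexes read back from a fresh documents().get():
reqs = text_requests([(2580, 'apple'), (2590, 'juicy'), (2600, 'why not?')])
print([r['insertText']['location']['index'] for r in reqs])   # [2600, 2590, 2580]
```

The "must be inside the bounds of an existing paragraph" message is consistent with an index drifting past the cell's paragraph once earlier insertions have shifted everything.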
<python><google-docs><google-docs-api>
2023-03-09 20:31:48
1
380
cuzureau
75,689,436
4,036,532
How to open a pretrained PyTorch model that isn't in the Torch library?
<p>I am trying to load some of the models from <a href="https://github.com/facebookresearch/fairseq/tree/0338cdc3094ca7d29ff4d36d64791f7b4e4b5e6e/examples/data2vec" rel="nofollow noreferrer">data2vec2</a> so that I can execute predictions. I have, on that page, downloaded 2 models: <code>base_imagenet.pt</code> (which is one of the data2vec2 vision models) and <code>nlp_base.pt</code> (the NLP data2vec2 model).</p> <p>I had hoped I could do <code>image_model = torch.load('/redacted-path/base_imagenet.pt')</code>, but this does not seem to give me a model object and <a href="https://pytorch.org/tutorials/beginner/saving_loading_models.html" rel="nofollow noreferrer">PyTorch documentation</a> implies this isn't what I want - I need the model class.</p> <p>I am somewhat new to PyTorch, but my impression of PyTorch is that if you want to load a model, you have two options:</p> <p>First, you can define the model class yourself or import. For example, I believe the model class for data2vec2 is defined <a href="https://github.com/facebookresearch/fairseq/blob/0338cdc3094ca7d29ff4d36d64791f7b4e4b5e6e/examples/data2vec/models/data2vec2.py#L151" rel="nofollow noreferrer">here</a>. However, this package doesn't have any sort of <code>requirements.txt</code> so when I try to import it that class so I can run code like:</p> <pre><code>import torch from fairseq.examples.data2vec.models.data2vec2 import Data2VecMultiModel </code></pre> <p>I end up in this long set of <code>ModuleNotFound</code> errors because I don't have <code>omegaconf</code>, <code>bitarray</code>, <code>hydra-core</code> etc., installed. So this seems like the wrong approach, since there is no provided <code>requirements.txt</code> file that I could use to get my environment set up for this kind of approach.</p> <p>The second option I've seen is to use something like <code>torchvision</code> to load a model. However, I'm unclear how to tell if the data2vec2 pretrained model is in torchvision at all. 
In the list of models available when I run <code>dir(torchvision.models)</code> is something called <code>vit_b_16</code>, which looks like what is in the <a href="https://github.com/facebookresearch/fairseq/tree/0338cdc3094ca7d29ff4d36d64791f7b4e4b5e6e/examples/data2vec#vision" rel="nofollow noreferrer">documentation</a>. However, when I download the model from the data2vec2 documentation, it is called <code>base_imagenet.pt</code>, and when I run:</p> <pre><code>from torchvision.models import vit_b_16, ViT_B_16_Weights image_model = vit_b_16(weights=ViT_B_16_Weights.DEFAULT) </code></pre> <p>It outputs:</p> <blockquote> <p>Downloading: &quot;https://download.pytorch.org/models/vit_b_16-c867db91.pth&quot; to /redacted/.cache/torch/hub/checkpoints/vit_b_16-c867db91.pth</p> </blockquote> <p>So. What I would like to do is load a pretrained model from data2vec2 and use it to predict. I want to take an image and a set of text and use these pretrained models to create embeddings. A bit lost on how to do it.</p> <p>Any advice?</p>
<python><pytorch>
2023-03-09 19:55:56
1
2,202
Katya Willard
75,689,185
7,794,924
How to QUICKLY batch scan video files to check for integrity (corrupt / valid)
<p>This question has been asked several times on this forum, with the accepted answer using ffmpeg to assess the integrity of the file with these example commands:</p> <pre><code># scan a single file ffmpeg.exe -v error -i C:\to\path\file.avi -f null - &gt;error.log 2&gt;&amp;1 # batch scan find C:\to\path\ -name &quot;*.mp4&quot; -exec sh -c &quot;ffmpeg -v error -i '{}' -map 0:1 -f null - 2&gt;'{}.log'&quot; \; </code></pre> <h4>The Problem:</h4> <p>The above commands work without issue, taking anywhere between 2-20 mins to assess a single video file. But when running the above batch command on a large number of video files (1000+) (assuming an average of <strong>5 minutes per file</strong>), the process could take over a week to finish.</p> <h4>The Objective:</h4> <p>Looking for a <strong>faster solution</strong> to verify integrity of my files. Either to modify the <code>ffmpeg</code> command, or to use as a different binary entirely. Anything is accepted as long as I can run the new command in the terminal/bash. Would like to get the processing time down from a few days, to a few hours.</p> <h4>References:</h4> <p><a href="https://superuser.com/questions/100288/how-can-i-check-the-integrity-of-a-video-file-avi-mpeg-mp4">https://superuser.com/questions/100288/how-can-i-check-the-integrity-of-a-video-file-avi-mpeg-mp4</a></p> <p><a href="https://stackoverflow.com/questions/34077302/quickly-check-the-integrity-of-video-files-inside-a-directory-with-ffmpeg">Quickly check the integrity of video files inside a directory with ffmpeg</a></p> <p><a href="https://stackoverflow.com/questions/58815980/how-can-i-tell-if-a-video-file-is-corrupted-ffmpeg">How can I tell if a video file is corrupted? FFmpeg?</a></p> <p><a href="https://gist.github.com/ridvanaltun/8880ab207e5edc92a58608d466095dec" rel="noreferrer">https://gist.github.com/ridvanaltun/8880ab207e5edc92a58608d466095dec</a></p> <h4>Update</h4> <p>I never did find a &quot;quick&quot; way of scanning the video files. 
I just accepted that for the sake of thoroughness it will take some time. However, I made a GUI Python program that may benefit others:</p> <p><a href="https://github.com/nhershy/CorruptVideoFileInspector" rel="noreferrer">https://github.com/nhershy/CorruptVideoFileInspector</a></p>
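For others hitting the same wall: a full-decode check cannot be made much faster *per file*, but each file's scan is independent, so running several ffmpeg processes concurrently divides the wall-clock time by roughly the number of workers (up to the CPU's core budget). A sketch with a worker pool — the ffmpeg flags are the ones from the question, and `runner` is injectable so the pooling logic itself is testable without ffmpeg installed:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def scan(path):
    """Full-decode check of one file; decode errors show up in stderr."""
    cmd = ['ffmpeg', '-v', 'error', '-i', str(path), '-f', 'null', '-']
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return str(path), proc.returncode == 0 and not proc.stderr, proc.stderr

def scan_all(paths, workers=4, runner=scan):
    # Threads suffice: each worker mostly waits on its ffmpeg subprocess.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(runner, paths))

# Exercising the pool with a stub instead of real ffmpeg:
results = scan_all(['a.mp4', 'b.mp4', 'c.mp4'],
                   runner=lambda p: (p, True, ''))
print(results)
```

A cheaper but less thorough triage, if a full decode per file is still too slow, might be decoding only a slice of each file (e.g. adding `-t 60` after `-i`) to catch grossly truncated files first, then full-scanning only the survivors.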
<python><bash><macos><ffmpeg><mp4>
2023-03-09 19:25:41
2
812
nhershy