Dataset schema (min/max stats per column; one record per question below):
QuestionId: int64 (74.8M to 79.8M)
UserId: int64 (56 to 29.4M)
QuestionTitle: string (length 15 to 150)
QuestionBody: string (length 40 to 40.3k)
Tags: string (length 8 to 101)
CreationDate: string datetime (2022-12-10 09:42:47 to 2025-11-01 19:08:18)
AnswerCount: int64 (0 to 44)
UserExpertiseLevel: int64 (301 to 888k)
UserDisplayName: string (length 3 to 30, nullable)
76,068,654
594,319
Python generic protocol: TypeVar is "Unknown" (Pylance) or "Any" (Mypy)
<p>For a generic class or protocol, why is the result of <code>f()</code> stored in <code>fout</code> here &quot;Unknown&quot; (Pylance v2023.4.30) or &quot;Any&quot; (mypy v1.2.0) rather than &quot;T@MyProto&quot;? This is mostly academic, as the goal is for <code>fout</code> to be a valid input to <code>g()</code>, which Unknown/Any obviously is.</p> <pre><code>import typing as t T = t.TypeVar(&quot;T&quot;) class MyProto(t.Protocol[T]): def f(self) -&gt; T: ... def g(self, fout: T) -&gt; t.Any: ... tup: t.Tuple[MyProto, ...] = tuple() for item in tup: fout = item.f() reveal_type(fout) # Type of &quot;fout&quot; is &quot;Unknown&quot; (Pylance) item.g(fout) </code></pre>
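Editor's note on the question above: a minimal sketch of the likely explanation. `tup: t.Tuple[MyProto, ...]` uses the *bare* generic protocol, so the checker has nothing to solve `T` against and falls back to Unknown/Any; parameterizing the protocol pins `T`. The choice of `int` below is illustrative only.

```python
import typing as t

T = t.TypeVar("T")

class MyProto(t.Protocol[T]):
    def f(self) -> T: ...
    def g(self, fout: T) -> t.Any: ...

# Bare `MyProto` leaves T unsolved, so checkers fall back to Unknown/Any.
# Parameterizing the protocol pins T for every element of the tuple:
tup: t.Tuple[MyProto[int], ...] = tuple()
for item in tup:
    fout = item.f()   # now revealed as `int` instead of Unknown/Any
    item.g(fout)
```

(`reveal_type` is removed from the sketch because it only exists for the type checker, not at runtime.)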
<python><mypy><python-typing><pyright>
2023-04-20 22:26:48
0
2,347
moon prism power
76,068,615
4,885,544
Python Mailchimp Transactional payload not being recognized by mailchimp template
<p>I am using the <code>MailchimpTransactional</code> package to send an email with a dynamic body, and mailchimp is rendering the subject, from, to and other fields correctly. However, the variables in the actual body of the template are not being populated in the mailchimp email. Below is the payload I am sending. Nothing in the <code>template_content</code> key is being used in the template:</p> <p>Template code:</p> <pre><code>&lt;header&gt;&lt;/header&gt; *|HTML:BODY|* &lt;footer&gt;&lt;/footer&gt; </code></pre> <p>Payload:</p> <pre><code>mailchimp = MailchimpTransactional.Client(MAILCHIMP_KEY) message = { 'template_name': 'general-template', 'merge_language': 'mailchimp', 'template_content': [{ 'name': 'BODY', 'content': '&lt;h1&gt;HEY LISTEN!&lt;/h1&gt;' }], 'message': { 'to': [{ 'email': 'test1@example.com', 'type': 'to' }, { 'email': 'test@example.com', 'type': 'cc' }], 'subject': 'Test Subject', 'attachments': [], 'from_email': 'no-reply@test.com', 'from_name': None, 'headers': { 'Reply-To': '{}' } } } mailchimp.messages.send_template(message) </code></pre> <p>The email sends just fine; however, all the email renders is below:</p> <p><a href="https://i.sstatic.net/ldEMn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ldEMn.png" alt="email image" /></a></p> <p>What am I doing wrong?</p>
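Editor's note on the question above, a hedged sketch only: per my reading of the Mandrill/Transactional docs, `template_content` fills `mc:edit` editable regions, while `*|NAME|*` merge tags (like the template's `*|HTML:BODY|*`) are filled from `global_merge_vars`, which lives *inside* `message`; `merge_language` also belongs inside `message`. All names below come from the question itself, so this is a restructured payload, not a verified fix.

```python
# Assumption: *|HTML:BODY|* is a merge tag, so its value must come from
# message.global_merge_vars rather than the top-level template_content.
message = {
    'template_name': 'general-template',
    'template_content': [],  # only needed for mc:edit editable regions
    'message': {
        'to': [{'email': 'test1@example.com', 'type': 'to'},
               {'email': 'test@example.com', 'type': 'cc'}],
        'subject': 'Test Subject',
        'from_email': 'no-reply@test.com',
        'merge_language': 'mailchimp',   # moved inside `message`
        'global_merge_vars': [
            {'name': 'BODY', 'content': '<h1>HEY LISTEN!</h1>'},
        ],
    },
}
# then, as in the question: mailchimp.messages.send_template(message)
```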
<python><django><mailchimp>
2023-04-20 22:17:51
1
486
sclem72
76,068,612
2,687,317
Python time plotting not working with microsecond data
<p>I have a dataset with 589 datetime strings and 589 points. The time seq has msec accuracy. e.g. -</p> <pre><code>['2022-10-07 12:05:39.866', '2022-10-07 12:05:40.914', '2022-10-07 12:05:41.963', '2022-10-07 12:05:43.012', '2022-10-07 12:05:44.060', '2022-10-07 12:05:45.109', '2022-10-07 12:05:46.157', '2022-10-07 12:05:47.206', '2022-10-07 12:05:48.254', '2022-10-07 12:05:49.303'] </code></pre> <p>I'm using astropy Time, so I convert this list using</p> <p><code>dateTimeTags = Time(dateTimeTags, format='iso', scale='utc')</code> to get something like this:</p> <pre><code>&lt;Time object: scale='utc' format='iso' value=['2022-10-07 12:05:39.866' '2022-10-07 12:05:40.914']&gt; </code></pre> <p>but that goes from <code>2022-10-07 12:05:39.866</code> to <code>2022-10-07 12:15:56.429</code>. About 10 min.</p> <p>The data is just floats...</p> <p>When I do:</p> <pre><code>fig, ax = plt.subplots(figsize=(8,4)) ax.xaxis.set_major_locator(matdates.SecondLocator(interval=10)) ax.plot_date(dateTimeTags, data,'-') plt.gcf().autofmt_xdate() </code></pre> <p>I get</p> <pre><code> ConversionError: Failed to convert value(s) to axis units: &lt;Time object: scale='utc' format='iso' ... </code></pre> <p>When I try</p> <pre><code>fig, ax = plt.subplots(figsize=(8,4)) ax.plot(dateTimeTags, data,'-') </code></pre> <p>I get</p> <pre><code>ValueError: setting an array element with a sequence. </code></pre> <p>Any idea how I plot this? The <a href="https://docs.astropy.org/en/stable/visualization/matplotlib_integration.html#plotting-times" rel="nofollow noreferrer">astropy site</a> says I should have no problem.</p>
<python><matplotlib><astropy>
2023-04-20 22:16:25
0
533
earnric
76,068,519
16,363,897
Expanding average per group ignoring previous n rows
<p>This is a follow-up question to <a href="https://stackoverflow.com/questions/76065644/pandas-expanding-average-per-group-without-changing-the-order-of-the-rows/76066078">this one</a>.</p> <p>I have the following dataframe:</p> <pre><code> a b week 8 10 9 9 3 8 9 5 5 9 7 2 10 1 3 9 4 4 9 2 6 </code></pre> <p>I want to calculate the expanding average of each column, conditional on the index, but ignoring the previous n rows (including the current row). I also need to preserve the order of the rows of the original dataframe. This is the expected output with n=2:</p> <pre><code> a b week 8 NaN NaN 9 NaN NaN 9 NaN NaN 9 3.0 8.0 10 NaN NaN 9 5.0 5.0 9 5.0 5.0 </code></pre> <p>For example, the value of column &quot;a&quot; on the fourth row (3) is equal to the average of all preceding rows (excluding the third and fourth) where week = 9, so is equal to the value of column &quot;a&quot; in the second row. The value of column &quot;b&quot; on the last row (5) is equal to the average of all preceding rows (excluding the last two) where week = 9, so is equal to the average of second, third and fourth row (8, 5, 2).</p> <p>To better clarify, this is an excel representation of what I'm trying to do (using the last value of column &quot;b&quot; as a reference): <a href="https://i.sstatic.net/SnSq3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SnSq3.png" alt="enter image description here" /></a></p> <p>From the previous question, I know that I can get the conditional expanding average with the following:</p> <pre><code>(df.reset_index() .groupby('week') .expanding() .mean() .sort_index(level=1) .reset_index(level=1, drop=True)) </code></pre> <p>In order to add the &quot;ignore the previous 2 rows&quot; feature, I tried to shift the original dataframe, but the output is not what I want:</p> <pre><code>(df.shift(2) .reset_index() .groupby('week') .expanding() .mean() .sort_index(level=1) .reset_index(level=1, drop=True)) a b week 8 NaN NaN 9 NaN NaN 9 10.000000 
9.000000 9 6.500000 8.500000 10 5.000000 5.000000 9 6.666667 6.333333 9 5.250000 5.500000 </code></pre> <p>Any help? Thanks</p>
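Editor's note on the question above: reading the worked examples, "ignore the previous n rows" appears to count positions in the *original* frame, not within the group, so a plain groupwise `shift` cannot express it. A direct, hedged sketch (O(n²), but it reproduces the expected output on the question's data):

```python
import numpy as np
import pandas as pd

def expanding_mean_skip(df, n):
    """Mean over earlier same-'week' rows, ignoring the current row and
    the n-1 rows immediately before it (counted in original order)."""
    out = pd.DataFrame(index=df.index, columns=df.columns, dtype=float)
    weeks = df.index.to_numpy()
    for i in range(len(df)):
        cutoff = i + 1 - n            # rows 0 .. cutoff-1 are usable
        if cutoff <= 0:
            continue                   # row stays NaN
        prior = df.iloc[:cutoff][weeks[:cutoff] == weeks[i]]
        if len(prior):
            out.iloc[i] = prior.mean()
    return out

df = pd.DataFrame({'a': [10, 3, 5, 7, 1, 4, 2],
                   'b': [9, 8, 5, 2, 3, 4, 6]},
                  index=pd.Index([8, 9, 9, 9, 10, 9, 9], name='week'))
result = expanding_mean_skip(df, n=2)
```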
<python><pandas><dataframe>
2023-04-20 21:54:03
1
842
younggotti
76,068,347
807,797
Pythonic way to assign global administrator roles for Azure Active Directory
<p><strong>What specifically needs to be changed in the Python 3 code below in order to successfully assign the Global Administrator role for an Azure Active Directory Tenant to a given service principal?</strong></p> <p>We tried to adjust the code from <a href="https://stackoverflow.com/questions/66752779/azure-cli-equivalent-of-add-azureaddirectoryrolemember">the answer to this other posting</a> so that it can run in Python 3, but we are getting the error below. This has to be agnostic with respect to operating system, so we cannot use the bash code given in the other posting. We also do not want to add a dependency on PowerShell.</p> <p>And we also tried to do this with an ARM template, but @Philip pointed out that ARM templates are not allowed to assign Active Directory tenant roles.</p> <p>Note that <code>62e90394-69f5-4237-9190-012177145e10</code> is the role definition id for Global Administrator of an Azure Active Directory tenant.</p> <p><strong>COMMAND AND RESULTING ERROR:</strong></p> <pre><code>C:\path\to\directory&gt; python .\assignADRoles.py ERROR: unrecognized arguments: 'valid-service-principal-object-id', 'roleDefinitionId': '62e90394-69f5-4237-9190-012177145e10', 'directoryScopeId': '/'} Examples from AI knowledge base: az rest --method get --url https://graph.microsoft.com/beta/auditLogs/directoryAudits Get Audit log through Microsoft Graph https://docs.microsoft.com/en-US/cli/azure/reference-index#az_rest Read more about the command in reference docs C:\path\to\directory&gt; </code></pre> <p>Note that <code>valid-service-principal-object-id</code> refers to an actual valid service principal object id that is redacted here for security reasons.</p> <p>Also note that the user running the command is also global administrator of the same Azure Active Directory tenant and is also owner of the only in-scope subscription.</p> <p><strong>CURRENT CODE:</strong></p> <pre><code># coding: utf-8 import subprocess import re ansi_escape = 
re.compile(r'\x1B\[[0-?]*[ -/]*[@-~]') def callTheAPI(): URI=&quot;https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments&quot; BODY={ &quot;principalId&quot;: &quot;valid-service-principal-object-id&quot;, &quot;roleDefinitionId&quot;: &quot;62e90394-69f5-4237-9190-012177145e10&quot;, &quot;directoryScopeId&quot;: &quot;/&quot; } assignGlobalAdminCommand='az rest --method POST --uri '+URI+' --header Content-Type=application/json --body '+str(BODY) proc = subprocess.Popen(assignGlobalAdminCommand,cwd=None, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True) while True: line = proc.stdout.readline() if line: thetext=ansi_escape.sub('', line.decode('utf-8').rstrip('\r|\n')) print(thetext) else: break callTheAPI() </code></pre> <p>Note that the python 3 requests module might be good to use here, but the supposedly working starting point in the other posting linked above uses the <code>az rest</code> cli command, so it seemed like a good idea to get a version of that working in Python 3 first before trying to put the same API call into the requests module.</p> <p><strong>POWERSHELL VERSION THAT WORKS:</strong></p> <p>Here are the successful results of running the <code>az rest</code> cli command using PowerShell from the answer to this other posting:</p> <pre><code>PS C:\path\to\directory&gt; $Body = @{ &gt;&gt; &quot;roleDefinitionId&quot; = &quot;62e90394-69f5-4237-9190-012177145e10&quot;; &gt;&gt; &quot;principalId&quot; = &quot;valid-service-principal-object-id&quot;; &gt;&gt; &quot;directoryScopeId&quot; = &quot;/&quot; &gt;&gt; } | ConvertTo-Json -Compress PS C:\path\to\directory&gt; $Body = $Body.Replace('&quot;', '\&quot;') PS C:\path\to\directory&gt; az rest -m post -u &quot;https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments&quot; -b &quot;$Body&quot; { &quot;@odata.context&quot;: &quot;https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleAssignments/$entity&quot;, 
&quot;directoryScopeId&quot;: &quot;/&quot;, &quot;id&quot;: &quot;long-alpha-numeric-hash-id&quot;, &quot;principalId&quot;: &quot;valid-service-principal-object-id&quot;, &quot;principalOrganizationId&quot;: &quot;valid-ad-tenant-id&quot;, &quot;resourceScope&quot;: &quot;/&quot;, &quot;roleDefinitionId&quot;: &quot;62e90394-69f5-4237-9190-012177145e10&quot; } PS C:\path\to\directory&gt; </code></pre> <p>But even though this PowerShell invocation of the <code>az rest</code> cli command works, the Python code above still gives an error. When we paste the string result of the PowerShell <code>$Body = @{ &quot;roleDefinitionId&quot; = &quot;62e90394-69f5-4237-9190-012177145e10&quot;; &quot;principalId&quot; = &quot;valid-service-principal-object-id&quot;; &quot;directoryScopeId&quot; = &quot;/&quot; } | ConvertTo-Json -Compress</code> command from above into the Python 3 code given above, we get the same error.</p> <p><strong>How can this working Powershell example be translated into Python 3 starting with the Python 3 code that is given above in the OP?</strong></p>
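Editor's note on the question above: the `unrecognized arguments` error is consistent with `str(BODY)` producing a single-quoted Python repr that `az rest` cannot parse as JSON. A hedged sketch: serialize the body with `json.dumps` and pass an argument list so no shell quoting is involved (the principal id remains the question's redacted placeholder).

```python
import json
import subprocess

body = {
    "principalId": "valid-service-principal-object-id",  # placeholder
    "roleDefinitionId": "62e90394-69f5-4237-9190-012177145e10",
    "directoryScopeId": "/",
}
cmd = [
    "az", "rest", "--method", "POST",
    "--uri", "https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments",
    "--header", "Content-Type=application/json",
    "--body", json.dumps(body),   # real JSON, double-quoted keys/values
]
# Uncomment to execute against a logged-in Azure CLI:
# print(subprocess.run(cmd, capture_output=True, text=True).stdout)
```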
<python><azure><azure-active-directory><azure-cli><azure-rbac>
2023-04-20 21:16:18
1
9,239
CodeMed
76,068,150
1,394,353
How to use inspect.signature to check that a function needs one and only one parameter?
<p>I want to validate (at runtime, this is not a typing question), that a function passed as an argument takes only 1 positional variable (basically that function will be called with a string as input and returns a truthy).</p> <p>Naively, this is what I had:</p> <pre><code>def check(v_in : Callable): &quot;&quot;&quot;check that the function can be called with 1 positional parameter supplied&quot;&quot;&quot; sig = signature(v_in) len_param = len(sig.parameters) if not len_param == 1: raise ValueError( f&quot;expecting 1 parameter, of type `str` for {v_in}. got {len_param}&quot; ) return v_in </code></pre> <p>If I check the following function, it is OK, which is good:</p> <pre><code>def f_ok1_1param(v : str): pass </code></pre> <p>but the next fails, though the <code>*args</code> would receive 1 param just fine and <code>**kwargs</code> would be empty:</p> <pre><code>def f_ok4_vargs_kwargs(*args,**kwargs): pass </code></pre> <pre><code>from rich import inspect as rinspect try: check(f_ok1_1param) print(&quot;\n\npasses:&quot;) rinspect(f_ok1_1param, title=False,docs=False) except (Exception,) as e: print(&quot;\n\nfails:&quot;) rinspect(f_ok1_1param, title=False,docs=False) try: check(f_ok4_vargs_kwargs) print(&quot;\n\npasses:&quot;) rinspect(f_ok4_vargs_kwargs, title=False,docs=False) except (Exception,) as e: print(&quot;\n\nfails:&quot;) rinspect(f_ok4_vargs_kwargs, title=False,docs=False) </code></pre> <p>which passes the first, and fails the second, instead of passing both:</p> <pre><code>passes: ╭─────────── &lt;function f_ok1_1param at 0x1013c0f40&gt; ───────────╮ │ def f_ok1_1param(v: str): │ │ │ │ 37 attribute(s) not shown. Run inspect(inspect) for options. │ ╰──────────────────────────────────────────────────────────────╯ fails: ╭──────── &lt;function f_ok4_vargs_kwargs at 0x101512200&gt; ────────╮ │ def f_ok4_vargs_kwargs(*args, **kwargs): │ │ │ │ 37 attribute(s) not shown. Run inspect(inspect) for options. │ ╰──────────────────────────────────────────────────────────────╯ </code></pre> <p>all the different combinations of signatures are defined below:</p> <pre><code>def f_ok1_1param(v : str): pass def f_ok2_1param(v): pass def f_ok3_vargs(*v): pass def f_ok4_p_vargs(p, *v): pass def f_ok4_vargs_kwargs(*args,**kwargs): pass def f_ok5_p_varg_kwarg(param,*args,**kwargs): pass def f_bad1_2params(p1, p2): pass def f_bad2_kwargs(**kwargs): pass def f_bad3_noparam(): pass </code></pre> <p>Now, I did already check a bit more about the Parameters:</p> <pre><code>rinspect(signature(f_ok4_vargs_kwargs).parameters[&quot;args&quot;]) ╭─────────────────── &lt;class 'inspect.Parameter'&gt; ───────────────────╮ │ Represents a parameter in a function signature. │ │ │ │ ╭─────────────────────╮ │ │ │ &lt;Parameter &quot;*args&quot;&gt; │ │ │ ╰─────────────────────╯ │ │ │ │ KEYWORD_ONLY = &lt;_ParameterKind.KEYWORD_ONLY: 3&gt; │ │ kind = &lt;_ParameterKind.VAR_POSITIONAL: 2&gt; │ │ name = 'args' │ │ POSITIONAL_ONLY = &lt;_ParameterKind.POSITIONAL_ONLY: 0&gt; │ │ POSITIONAL_OR_KEYWORD = &lt;_ParameterKind.POSITIONAL_OR_KEYWORD: 1&gt; │ │ VAR_KEYWORD = &lt;_ParameterKind.VAR_KEYWORD: 4&gt; │ │ VAR_POSITIONAL = &lt;_ParameterKind.VAR_POSITIONAL: 2&gt; │ ╰───────────────────────────────────────────────────────────────────╯ </code></pre> <p>I suppose checking <code>Parameter.kind</code> vs the <code>_ParameterKind</code> enumeration of each parameter is how this needs to be approached, but I wonder if I am overthinking this or if something already exists to do this, either in <code>inspect</code> or if the <code>typing</code> <code>protocol</code> support can be used to do it, <em>but at runtime</em>.</p> <p>Note, theoretically <code>def f_ok_cuz_default(p, p2 = None):</code> would also work, but let's ignore that for now.</p> <p>p.s. The motivation is providing a custom callback function in a validation framework. The call location is deep in the framework and that particular argument can also be a string (which gets converted to a regex). It can even be None. Easiest here is just to stick in a <code>def myfilter(*args,**kwargs): breakpoint</code>. Or <code>myfilter(foo)</code>. Then look at what you get from the framework and adjust the body. It's one thing to have exceptions <em>in</em> your function, another for the framework to <em>accept</em> it but then error <em>before</em> calling into it. So a quick &quot;will this work when we call it?&quot; is more user friendly.</p>
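Editor's note on the question above: instead of enumerating `Parameter.kind` combinations, the standard library already offers `Signature.bind`, which simulates the call and raises `TypeError` when the arguments cannot bind. It also handles the `def f_ok_cuz_default(p, p2=None)` case for free. A sketch, reusing the question's function names:

```python
from inspect import signature

def accepts_one_positional(fn) -> bool:
    """True if fn("some string") would bind without error."""
    try:
        # Simulate calling fn with exactly one positional argument.
        signature(fn).bind("probe")
        return True
    except TypeError:
        return False

def f_ok1_1param(v: str): pass
def f_ok4_vargs_kwargs(*args, **kwargs): pass
def f_bad1_2params(p1, p2): pass
def f_bad3_noparam(): pass
```

Note that `bind` only checks the signature; it does not verify that the parameter is meant to be a `str`.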
<python><signature><introspection>
2023-04-20 20:43:55
2
12,224
JL Peyret
76,068,146
7,910,725
Python: Replace lower case with upper case, and vice versa, simultaneously via the RegEx lib
<p>I open this question although a duplicate of it exists in C# lang. (without the relevant resolution needed).</p> <p>I'm trying to replace all the lower case characters in a given string with upper case chars, and vice versa. This should be done simultaneously with the least time complexity (because of use in a high volume of verbal translations).</p> <p><strong>The IO:</strong></p> <p><em>input:</em> str_1 = &quot;Www.GooGle.com&quot;</p> <p><em>output:</em> &quot;wWW.gOOgLE.COM&quot;</p> <p><strong>The code:</strong></p> <pre><code>import re # import RegEx lib str_1 = &quot;Www.GooGle.com&quot; # variable tested def swapLowerUpper(source): # takes source string # returns group 0 regex to lower and group 1 regex to upper, by source return re.sub(r'([A-Z]+)|([a-z]+)', lambda x: x.group(0).lower(), source) # check the output print(swapLowerUpper(str_1)) </code></pre> <p><strong>The Question:</strong></p> <p>I have a hard time triggering the second group (whose index is 1) and applying the <code>.upper()</code> method to it. My attempt was to open it as {x: x.group(0).lower(), x: x.group(1).upper()} which failed.</p>
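Editor's note on the question above: Python's built-in `str.swapcase()` already does a single-pass case swap; if the regex route is required, the lambda can choose `.lower()`/`.upper()` based on which group matched. A sketch on the question's input:

```python
import re

str_1 = "Www.GooGle.com"

# Simplest: the built-in single-pass case swap.
swapped = str_1.swapcase()  # 'wWW.gOOgLE.COM'

# Regex version: group(1) is truthy only when the upper-case branch matched.
def swap_lower_upper(source: str) -> str:
    return re.sub(
        r'([A-Z]+)|([a-z]+)',
        lambda m: m.group(0).lower() if m.group(1) else m.group(0).upper(),
        source,
    )
```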
<python><regex>
2023-04-20 20:43:00
3
355
Mabadai
76,068,055
7,498,328
Parsing a list with double backslashes in Python3 and converting to a dict object?
<p>I have a peculiar list in Python as:</p> <pre><code>peculiar_list=['&quot;{\\n \\&quot;persons\\&quot;: \\&quot;person 1\\&quot;,\\n \\&quot;score\\&quot;: \\&quot;Very Good\\&quot;,\\n \\&quot;comment\\&quot;: \\&quot;This result was found to be very good for the the user\'s interest.\\&quot;,\\n \\&quot;level\\&quot;: \\&quot;high level\\&quot;\\n}&quot;', '&quot;{\\n \\&quot;persons\\&quot;: \\&quot;person 2\\&quot;,\\n \\&quot;score\\&quot;: \\&quot;Pretty Good\\&quot;,\\n \\&quot;comment\\&quot;: \\&quot;This result, coupled with other data, was found to be pretty good.\\&quot;,\\n \\&quot;level\\&quot;: \\&quot;high level\\&quot;\\n}&quot;'] </code></pre> <p>I would like to get rid of the <code>\\n</code>, <code>\\</code> and ultimately return a clean <code>dict</code> object to allow</p> <pre><code>peculiar_list[0]['persons'] </code></pre> <p>to show &quot;person 1&quot;. Is there an automatic way to do this?</p>
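Editor's note on the question above: each element is a JSON *string literal* whose contents are themselves a JSON object (the `\\n`/`\\"` are the escaped inner layer), so decoding twice peels both layers. A sketch using shortened stand-ins with the same escaping shape:

```python
import json

# Shortened stand-in for the question's list (same double-escaped shape).
peculiar_list = [
    '"{\\n  \\"persons\\": \\"person 1\\",\\n  \\"score\\": \\"Very Good\\"\\n}"',
    '"{\\n  \\"persons\\": \\"person 2\\",\\n  \\"score\\": \\"Pretty Good\\"\\n}"',
]

# First loads() yields the inner JSON text; second loads() yields the dict.
parsed = [json.loads(json.loads(s)) for s in peculiar_list]
```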
<python><python-3.x><dictionary>
2023-04-20 20:30:30
1
2,618
user321627
76,068,044
3,277,396
Using subprocess.Popen messes up Ubuntu terminal output
<p>Using subprocess.Popen is messing up my Ubuntu terminal output.</p> <pre><code>import subprocess import time subprocess.Popen(&quot;sudo ./agt_internal&quot;, shell=True, cwd=&quot;/home/boblittle/AGT/&quot;) time.sleep(10) for i in range(10): print('Good Morning America') </code></pre> <p>The terminal output is messed up. Any suggestions? <a href="https://i.sstatic.net/8nlYZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8nlYZ.png" alt="enter image description here" /></a></p> <pre><code>def start_agt_pmlog(): command = 'sudo ./agt_internal -unilog=PM' args = shlex.split(command) print(f'Starts AGT PMLog') subprocess.Popen(args, shell=False, cwd=&quot;/home/boblittle/AGT/&quot;, stdout=subprocess.PIPE, stderr=subprocess.PIPE) </code></pre> <p>I changed the code to the function above, following a suggested solution. It didn't help.</p>
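Editor's note on the question above, a hedged sketch: one common cause is the backgrounded child writing raw bytes or terminal control sequences to the same tty as the parent. Redirecting the child's output to a log file keeps the terminal clean; the command below is a harmless placeholder for `sudo ./agt_internal`.

```python
import subprocess

# Give the child its own output sink so it cannot interleave raw bytes
# with the parent's prints. The command is a placeholder, not the real one.
with open("agt_internal.log", "wb") as log:
    proc = subprocess.Popen(
        ["echo", "stand-in for sudo ./agt_internal"],
        stdout=log,
        stderr=subprocess.STDOUT,
    )
proc.wait()
print("Good Morning America")
```

If the terminal is already garbled, running `reset` (or `stty sane`) in the shell restores it.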
<python><ubuntu><subprocess>
2023-04-20 20:29:18
2
496
SHR
76,068,016
10,620,003
Rename the columns of the dataframe based on another number
<p>I have a dataframe and I want to rename its columns in a specific way. I want to rename the first 28 data columns (0-27) to d0, d1, d2, ...d27; the second 28 (28-55) to d10-d127; the third 28 (56-83) to d20-d227; and so on. Can you please help me with that?</p> <p>I already did something similar with another dataframe, renaming the columns of df in steps of 7. Here is the code I used for that, but it does not work for the new case.</p> <pre><code>df = dataframe with size (10*71. The first column is id) b = df.shape[1]-1 df.columns = np.hstack(['id', np.core.defchararray.add('d', (b%7+b//7*10).astype(str))]) </code></pre> <p>The new case is:</p> <pre><code>df = dataframe with size (10*281. The first column is id) import numpy as np import pandas as pd A = np.random.randint(10, size = (10, 281)) A_df = pd.DataFrame(A) </code></pre> <p>Can you please help me with that? Thank you so much</p>
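Editor's note on the question above, a hedged sketch: the naming pattern (d0...d27, d10...d127, d20...d227) reads as concatenating a block index b (omitted when 0) with a within-block index j from 0-27, which is an assumption drawn from the examples. Note this scheme can produce duplicate names (e.g. `d10` appears as block 0/j=10 and block 1/j=0); pandas tolerates duplicate column labels, but it may not be what is wanted.

```python
import numpy as np
import pandas as pd

A_df = pd.DataFrame(np.random.randint(10, size=(10, 281)))

n_data = A_df.shape[1] - 1  # 280 data columns after 'id'
# Block b of 28 columns, name 'd{j}' for b == 0 and 'd{b}{j}' otherwise.
names = ['id'] + [f'd{j}' if b == 0 else f'd{b}{j}'
                  for b in range(n_data // 28)
                  for j in range(28)]
A_df.columns = names
```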
<python><dataframe>
2023-04-20 20:25:17
1
730
Sadcow
76,068,002
13,722,601
How to replace a character in a multiline string with a variable
<p>I have a multiline string containing the ascii art of a playing card:</p> <pre><code>clubs_ascii = &quot;&quot;&quot; _____ |V . | | /.\ | |(_._)| | | | |____V| &quot;&quot;&quot; </code></pre> <p>I try to replace the letter &quot;V&quot; (V for value) with a randomly generated number, say 4, using replace:</p> <pre><code>clubs_ascii.replace(&quot;V&quot;, random_number) </code></pre> <p>The result is always unchanged:</p> <pre><code> _____ |V . | | /.\ | |(_._)| | | | |____V| </code></pre> <p>Why is this, and how can I get it to work? Thanks in advance!</p>
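Editor's note on the question above: two things are going on. Strings are immutable, so `str.replace` returns a *new* string that must be assigned back, and the replacement argument must itself be a string, not an int. A sketch:

```python
import random

clubs_ascii = """
 _____
|V . |
| /.\\ |
|(_._)|
|  |  |
|____V|
"""

random_number = random.randint(1, 9)
# replace() returns a NEW string: assign it back, and convert the int.
clubs_ascii = clubs_ascii.replace("V", str(random_number))
```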
<python><string>
2023-04-20 20:23:29
1
433
some_user_3
76,067,991
7,498,328
How to parse a list in Python that might have whitespace and is returning JSONDecodeError: Extra data: line 1 column 6 (char 5) errors?
<p>I have the following list in Python3</p> <pre><code>mylist=['&quot;{ &quot;score&quot;: &quot;Really Good&quot;, &quot;text&quot;: &quot;This is a text accompanying the score, level, and more.&quot;, &quot;level&quot;: &quot;high level&quot;}&quot;', '&quot;{ &quot;score&quot;: &quot;Not So Good&quot;, &quot;text&quot;: &quot;This is a text accompanying the score, level, and more.&quot;, &quot;level&quot;: &quot;high level&quot;}&quot;'] </code></pre> <p>I would like to access each of the elements in <code>mylist</code>, such as <code>score</code>,<code>key</code>,<code>text</code>. When I try to do</p> <pre><code>scores = [json.loads(json_str)[&quot;score&quot;] for json_str in mylist] </code></pre> <p>I get the error <code>JSONDecodeError: Extra data: line 1 column 6 (char 5)</code>. My suspicion is that it is because of whitespace, and more (single quotes on outside). How can I parse this correctly?</p>
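Editor's note on the question above: the whitespace is harmless; the decode error comes from the stray leading/trailing `"` wrapping each element, which makes the parser see a short JSON string followed by "extra data". Stripping the outer quotes first lets `json.loads` succeed. A sketch (text field shortened):

```python
import json

mylist = ['"{ "score": "Really Good", "text": "Accompanying text.", "level": "high level"}"',
          '"{ "score": "Not So Good", "text": "Accompanying text.", "level": "high level"}"']

# Remove the extra quote wrapping each element, then parse normally.
scores = [json.loads(s.strip('"'))["score"] for s in mylist]
```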
<python><python-3.x>
2023-04-20 20:21:11
1
2,618
user321627
76,067,982
11,171,435
No pendingTickersEvents being emitted after subscribing to IBKR streaming market data through reqMktData
<p>Fairly new to ib_insync. Requesting streaming market data but the ticker events aren't being emitted even after 2-3 minutes, no error gets thrown or anything, just nothing happens. Below is the code snippet attached.</p> <pre><code>def _on_connected_event(): print(f&quot;Connected to IB!&quot;) def _on_pending_tickers_event(tickers): print(f&quot;In tickers event&quot;) for t in tickers: print(t) def stream_data(ib: IB, contract): try: loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) print(f&quot;In Thread&quot;) data = ib.reqMktData(contract, '', True, True) while True: sleep(1) print(f&quot;{datetime.now(tz=pytz.timezone('US/Eastern'))}&quot;) except: print(f&quot;Error: {traceback.format_exc()}&quot;) ib.cancelMktData(contract) if __name__ == '__main__': try: ib = IB() ib.connectedEvent += _on_connected_event ib.pendingTickersEvent += _on_pending_tickers_event ib.connect('127.0.0.1', port=7497, clientId=1) stock = Stock(symbol=&quot;SPY&quot;, exchange=&quot;SMART&quot;, currency=&quot;USD&quot;) st = threading.Thread(target=stream_data, args=(ib, stock, )) st.start() sleep(150) except: ib.cancelMktData(stock) </code></pre> <p>Idk if it's because I am not running it asynchronously or what, but any help would be appreciative.</p> <p>Thanks.</p> <p>PS: I have the market data subscription.</p>
<python><algorithmic-trading><interactive-brokers><tws><ib-insync>
2023-04-20 20:19:42
1
351
Mubbashir Ali
76,067,675
8,340,881
Iterate through each row in a pandas dataframe and get values from a different dataframe
<p>I have 3 pandas dataframe, df, df_a, df_b as below</p> <pre><code>df = pd.DataFrame(data={'table': [&quot;a&quot;, &quot;a&quot;, &quot;b&quot;], 'seq': [&quot;xx&quot;, &quot;xy&quot;, &quot;z&quot;]}) df_a = pd.DataFrame(data={'a_id': [1, 2, 3], 'col1': [&quot;fg&quot;, &quot;rt&quot;, &quot;zh&quot;]}) df_b = pd.DataFrame(data={'b_id': [4, 5, 6], 'col1': [&quot;&quot;, &quot;gh&quot;, &quot;k&quot;]}) </code></pre> <p>I want to create a new dataframe with below condition for row in df,</p> <p>if seq is xx, get a_id from df_a and have the seq value added for those a_ids (xx)</p> <p>if seq is xy, get a_id from df_a and have the seq value added for those a_ids (xy)</p> <p>if seq is z, get b_id from df_b and have the seq value added for those b_ids (z)</p> <p>Expected output:</p> <pre><code> a_id seq b_id 0 1.0 xx NaN 1 2.0 xx NaN 2 3.0 xx NaN 0 1.0 xy NaN 1 2.0 xy NaN 2 3.0 xy NaN 0 NaN z 4.0 1 NaN z 5.0 2 NaN z 6.0 </code></pre> <p>Here is my code:</p> <pre><code>df_final = pd.DataFrame() for i in range(len(df)) : if df.loc[i, &quot;seq&quot;] == 'xx': df_final = df_final.append(df_a[['a_id']]) df_final['seq'] = df.loc[i, &quot;seq&quot;] elif df.loc[i, &quot;seq&quot;] == 'xy': df_final = df_final.append(df_a[['a_id']]) df_final['seq'] = df.loc[i, &quot;seq&quot;] elif df.loc[i, &quot;seq&quot;] == 'z': df_final = df_final.append(df_b[['b_id']]) df_final['seq'] = df.loc[i, &quot;seq&quot;] </code></pre> <p>which returns z for all rows as below</p> <pre><code> a_id seq b_id 0 1.0 z NaN 1 2.0 z NaN 2 3.0 z NaN 0 1.0 z NaN 1 2.0 z NaN 2 3.0 z NaN 0 NaN z 4.0 1 NaN z 5.0 2 NaN z 6.0 </code></pre>
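Editor's note on the question above: the bug is that `df_final['seq'] = ...` rewrites the `seq` column for *every* row already appended, so the last iteration's value (`z`) wins everywhere. Tagging each block with its own `seq` before collecting it fixes this; `pd.concat` also replaces the deprecated `DataFrame.append`. A sketch on the question's data:

```python
import pandas as pd

df = pd.DataFrame({'table': ['a', 'a', 'b'], 'seq': ['xx', 'xy', 'z']})
df_a = pd.DataFrame({'a_id': [1, 2, 3], 'col1': ['fg', 'rt', 'zh']})
df_b = pd.DataFrame({'b_id': [4, 5, 6], 'col1': ['', 'gh', 'k']})

pieces = []
for _, row in df.iterrows():
    src = df_a[['a_id']] if row['table'] == 'a' else df_b[['b_id']]
    piece = src.copy()
    piece['seq'] = row['seq']   # tag THIS block only, before collecting it
    pieces.append(piece)

df_final = pd.concat(pieces)
```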
<python><pandas>
2023-04-20 19:37:04
2
1,255
Shanoo
76,067,656
7,505,256
Mypy - Incompatible types in assignment when assigning different values in alternative branches
<p>I am wondering why I am getting <code>Incompatible types in assignment</code> here?</p> <pre class="lang-py prettyprint-override"><code>from typing import cast import pyvisa from pyvisa.constants import InterfaceType from pyvisa.resources import GPIBInstrument, TCPIPInstrument class Instrument: resource_manager = pyvisa.ResourceManager() def __init__(self, resource: str): self.resource = self.resource_manager.open_resource(resource_name=resource) if self.resource.interface_type == InterfaceType.tcpip: self.instance: TCPIPInstrument = cast(TCPIPInstrument, resource) elif self.resource.interface_type == InterfaceType.gpib: self.instance: GPIBInstrument = cast(GPIBInstrument, resource) else: raise TypeError(f&quot;Unsupported resource interface type: {self.resource.interface_type}&quot;) </code></pre> <p>Gives <code>Incompatible types in assignment (expression has type &quot;GPIBInstrument&quot;, variable has type &quot;TCPIPInstrument&quot;)</code></p> <p><code>self.instance</code> gets correctly the type <code>instance: TCPIPInstrument | GPIBInstrument</code> in vscode.</p> <p>I am using python 3.11.3 and mypy 1.2.0.</p> <p><a href="https://mypy-play.net/?mypy=latest&amp;python=3.11&amp;gist=449310f8b2105a531e2a64157107f02d" rel="nofollow noreferrer">Link</a> to a gist with the same issue, but with slightly different code since I could not get pyvisa to install in the playground.</p> <p>Found some issues in the code from the comments, here is the more correct code, but still with the same issue.</p> <pre class="lang-py prettyprint-override"><code>from typing import cast import pyvisa from pyvisa.constants import InterfaceType from pyvisa.resources import GPIBInstrument, TCPIPInstrument class Instrument: resource_manager = pyvisa.ResourceManager() def __init__(self, resource_name: str): self.resource = self.resource_manager.open_resource(resource_name=resource_name) if self.resource.interface_type == InterfaceType.tcpip: self.instance = cast(TCPIPInstrument, 
self.resource) elif self.resource.interface_type == InterfaceType.gpib: self.instance = cast(GPIBInstrument, self.resource) else: raise TypeError(f&quot;Unsupported resource interface type: {self.resource.interface_type}&quot;) </code></pre>
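Editor's note on the question above: mypy pins an attribute's type at its first assignment or annotation, so annotating `self.instance` as `TCPIPInstrument` in one branch makes the `GPIBInstrument` branch incompatible. Declaring the union once (e.g. at class level) and only assigning in the branches resolves it. A self-contained sketch with stand-in classes instead of the pyvisa ones:

```python
from typing import Union

# Stand-ins for pyvisa's classes, to keep the sketch runnable anywhere.
class TCPIPInstrument: ...
class GPIBInstrument: ...

class Instrument:
    # One annotation covering both branches; mypy then accepts either assignment.
    instance: Union[TCPIPInstrument, GPIBInstrument]

    def __init__(self, interface_type: str) -> None:
        if interface_type == "tcpip":
            self.instance = TCPIPInstrument()
        elif interface_type == "gpib":
            self.instance = GPIBInstrument()
        else:
            raise TypeError(f"Unsupported resource interface type: {interface_type}")
```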
<python><python-typing><mypy>
2023-04-20 19:34:23
2
316
Prokie
76,067,541
15,452,168
Response 404 when using requests for scraping postal data
<p>For education purposes I want to scrape the population of Germany per PIN code. This information is available at <a href="https://postal-codes.cybo.com/germany/" rel="nofollow noreferrer">https://postal-codes.cybo.com/germany/</a></p> <p>I tried to run web scraping tools such as requests and some others which I found on Stack Overflow.</p> <pre><code>import requests from bs4 import BeautifulSoup headers= {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36'} url = 'https://postal-codes.cybo.com/germany' r = requests.get(url, headers = headers) soup = BeautifulSoup(r.text, 'html.parser') print(soup.title.text) print(r.status_code) soup = BeautifulSoup(r.text, 'html.parser') print(soup.title.text) </code></pre> <p>Can someone guide me on how to approach this issue? Is there any other way I can get the population of Germany per postal/PIN code?</p> <p>Thank you in advance!!</p> <p><a href="https://postal-codes.cybo.com/germany/#listcodes" rel="nofollow noreferrer">https://postal-codes.cybo.com/germany/#listcodes</a></p> <p>Basically I want to scrape this data: postal code, city, population, area. If we open an individual postal code then we can also see male/female population as well.</p> <p><a href="https://i.sstatic.net/j3klt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j3klt.png" alt="basically I want to scrape this data: postal code, city, population, area - if we open an individual postal code then we can also see male/female population as well" /></a></p>
<python><web-scraping><beautifulsoup><python-requests>
2023-04-20 19:16:34
1
570
sdave
76,067,520
3,380,902
"Must be list or null" TypeError when attempting pandas json_normalize
<p>I have a pandas dataframe containing <code>json</code> data that I am attempting to normalize using pandas <code>json_normalize</code>.</p> <pre><code>import pandas as pd df = pd.DataFrame({ 'id': [0, 1], 'j': [{'type': 's', 'loc': {'s': {'value': '0.0', 'text': 'xxx'}, 'ps': {'value': '0.0', 'text': 'xxx'}, 'g': {'value': '2.0', 'text': 'xxx'}, 'g':[]}}, {'type': 's', 'loc': {'s': {'value': '0.0', 'text': 'xxx'}, 'ps': {'value': '0.0', 'text': 'xxx'}, 'g': {'value': '2.0', 'text': 'xxx'}, 'g':[]}}] }) dff = pd.json_normalize(df['j'], record_path=['loc'], meta=['id']) dff --------------------------------------------------------------------------- TypeError Traceback (most recent call last) File &lt;command-3958878855945685&gt;:6 1 df = pd.DataFrame({ 2 'id': [0, 1], 3 'j': [{'type': 's', 'loc': {'s': {'value': '0.0', 'text': 'xxx'}, 'ps': {'value': '0.0', 'text': 'xxx'}, 'g': {'value': '2.0', 'text': 'xxx'}, 'g':[]}}, {'type': 's', 'loc': {'s': {'value': '0.0', 'text': 'xxx'}, 'ps': {'value': '0.0', 'text': 'xxx'}, 'g': {'value': '2.0', 'text': 'xxx'}, 'g':[]}}] 4 }) ----&gt; 6 dff = pd.json_normalize(df['j'], record_path=['loc'], meta=['id']) 7 dff File /databricks/python/lib/python3.9/site-packages/pandas/io/json/_normalize.py:515, in _json_normalize(data, record_path, meta, meta_prefix, record_prefix, errors, sep, max_level) 512 meta_vals[key].append(meta_val) 513 records.extend(recs) --&gt; 515 _recursive_extract(data, record_path, {}, level=0) 517 result = DataFrame(records) 519 if record_prefix is not None: 520 # Incompatible types in assignment (expression has type &quot;Optional[DataFrame]&quot;, 521 # variable has type &quot;DataFrame&quot;) File /databricks/python/lib/python3.9/site-packages/pandas/io/json/_normalize.py:497, in _json_normalize.&lt;locals&gt;._recursive_extract(data, path, seen_meta, level) 495 else: 496 for obj in data: --&gt; 497 recs = _pull_records(obj, path[0]) 498 recs = [ 499 nested_to_record(r, sep=sep, max_level=max_level) 500 
if isinstance(r, dict) 501 else r 502 for r in recs 503 ] 505 # For repeating the metadata later File /databricks/python/lib/python3.9/site-packages/pandas/io/json/_normalize.py:427, in _json_normalize.&lt;locals&gt;._pull_records(js, spec) 425 result = [] 426 else: --&gt; 427 raise TypeError( 428 f&quot;{js} has non list value {result} for path {spec}. &quot; 429 &quot;Must be list or null.&quot; 430 ) 431 return result TypeError: {'type': 's', 'loc': {'s': {'value': '0.0', 'text': 'xxx'}, 'ps': {'value': '0.0', 'text': 'xxx'}, 'g': []}} has non list value {'s': {'value': '0.0', 'text': 'xxx'}, 'ps': {'value': '0.0', 'text': 'xxx'}, 'g': []} for path loc. Must be list or null. </code></pre> <p>Also, there might be some json data where the string would be an empty string. How would I handle that exception? The desired behavior is: if the <code>key</code> is not found, then normalize with <code>np.nan</code>.</p> <p>Expected output:</p> <pre><code>s.value s.text ps.value ps.text g.value g.text g.id id 0 0.0 xxx 0.0 xxx 2.0 xxx [0.0] 0 1 0.0 xxx 0.0 xxx 2.0 xxx [0.0] 1 </code></pre>
<python><json><pandas><json-normalize>
2023-04-20 19:13:05
1
2,022
kms
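A hedged sketch of one way around the error above: `record_path` expects a *list* at the given key, but `loc` holds a dict, so flattening each whole object with `json_normalize` (no `record_path`) and re-attaching `id` avoids the `TypeError`; keys absent in some records become `NaN` automatically. Column names below are taken from the question's sample data.

```python
import pandas as pd

df = pd.DataFrame({
    "id": [0, 1],
    "j": [
        {"type": "s", "loc": {"s": {"value": "0.0", "text": "xxx"},
                              "ps": {"value": "0.0", "text": "xxx"},
                              "g": []}},
        {"type": "s", "loc": {"s": {"value": "0.0", "text": "xxx"},
                              "ps": {"value": "0.0", "text": "xxx"},
                              "g": []}},
    ],
})

# "loc" is a dict, not a list, so skip record_path and flatten everything;
# nested keys come back as dotted column names like "loc.s.value".
flat = pd.json_normalize(df["j"].tolist())

# Strip the "loc." prefix to match the expected output, then re-attach id.
flat.columns = [c.replace("loc.", "", 1) for c in flat.columns]
out = pd.concat([flat, df["id"]], axis=1)
print(out[["s.value", "s.text", "ps.value", "ps.text", "id"]])
```

Missing keys across heterogeneous records are filled with `NaN` by `json_normalize`, which matches the behavior asked for at the end of the question.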
76,067,245
1,876,739
Add Values of Pandas Series with MultiIndex to Values of A Single Level Index Series
<p>Given a pandas <code>Series</code> with a simple index</p> <pre><code>import pandas as pd simple_index_series = pd.Series([1,2,3], index=['a', 'b', 'c']) a 1 b 2 c 3 </code></pre> <p>And a Series with a MultiIndex</p> <pre><code>multi_index_series = pd.Series( 0, index = pd.MultiIndex.from_product([['foo', 'bar'], ['c', 'b', 'a']]) ) foo c 0 b 0 a 0 bar c 0 b 0 a 0 </code></pre> <p><strong>How can the values of the single level index Series be numerically added to the values of the second Series based on matching on the second level index?</strong></p> <p>Thank you in advance for your consideration and response.</p>
<python><pandas>
2023-04-20 18:33:22
1
17,975
Ramรณn J Romero y Vigil
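One possible approach, sketched with the question's own data: `Series.add` takes a `level=` argument that aligns a flat-indexed Series against a single level of a MultiIndex, so no manual reindexing is needed.

```python
import pandas as pd

simple_index_series = pd.Series([1, 2, 3], index=["a", "b", "c"])
multi_index_series = pd.Series(
    0, index=pd.MultiIndex.from_product([["foo", "bar"], ["c", "b", "a"]])
)

# level=1 matches the flat index ('a', 'b', 'c') against the second
# level of the MultiIndex, broadcasting across 'foo' and 'bar'.
result = multi_index_series.add(simple_index_series, level=1)
print(result)
```

Each second-level label picks up the corresponding value from the flat Series, e.g. every `c` row becomes `0 + 3 = 3`.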
76,067,216
1,942,868
File upload from browser to S3 which has block public access
<p>I want to upload files to <code>S3</code> from the browser, via a web application running on <code>Fargate</code>.</p> <p>However, this <code>S3</code> bucket must have <code>block public access=ON</code>.</p> <p>So it is impossible to upload directly from the browser.</p> <p>I have some alternative ideas:</p> <ul> <li><p>1. Upload files into the container in Fargate and copy them to S3 with a Lambda.</p> <p>Q1) Is it possible to access the directory inside the container from the Lambda?</p> <p>Q2) How can the Lambda be triggered?</p> </li> <li><p>2. Upload files to EBS and copy them to S3 with a Lambda.</p> <p>Q1) Is it possible to access EBS from a Lambda? (maybe yes?)</p> <p>Q2) Is it possible to trigger the Lambda when new files are created in EBS?</p> </li> </ul> <p>Or is there a standard, practical method for this case?</p> <p>The web application is built with Python Django.</p> <p>Any hint is appreciated.</p> <p>Thank you very much.</p>
<python><amazon-web-services><amazon-s3><aws-lambda><aws-fargate>
2023-04-20 18:28:49
2
12,599
whitebear
76,067,104
1,492,337
Using Vicuna + langchain + llama_index for creating a self hosted LLM model
<p>I want to create a self hosted LLM model that will be able to have a context of my own custom data (Slack conversations for that matter).</p> <p>I've heard Vicuna is a great alternative to ChatGPT and so I made the below code:</p> <pre><code>from llama_index import SimpleDirectoryReader, LangchainEmbedding, GPTListIndex, \ GPTSimpleVectorIndex, PromptHelper, LLMPredictor, Document, ServiceContext from langchain.embeddings.huggingface import HuggingFaceEmbeddings import torch from langchain.llms.base import LLM from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM !export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 class CustomLLM(LLM): model_name = &quot;eachadea/vicuna-13b-1.1&quot; tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) pipeline = pipeline(&quot;text2text-generation&quot;, model=model, tokenizer=tokenizer, device=0, model_kwargs={&quot;torch_dtype&quot;:torch.bfloat16}) def _call(self, prompt, stop=None): return self.pipeline(prompt, max_length=9999)[0][&quot;generated_text&quot;] def _identifying_params(self): return {&quot;name_of_model&quot;: self.model_name} def _llm_type(self): return &quot;custom&quot; llm_predictor = LLMPredictor(llm=CustomLLM()) </code></pre> <p>But sadly I'm hitting the below error:</p> <pre><code>OutOfMemoryError: CUDA out of memory. Tried to allocate 270.00 MiB (GPU 0; 22.03 GiB total capacity; 21.65 GiB already allocated; 94.88 MiB free; 21.65 GiB reserved in total by PyTorch) If reserved memory is &gt;&gt; allocated memory try setting max_split_size_mb to avoid fragmentation. 
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF </code></pre> <p>Here's the output of <code>!nvidia-smi</code> (before running anything):</p> <pre><code>Thu Apr 20 18:04:00 2023 +---------------------------------------------------------------------------------------+ | NVIDIA-SMI 530.30.02 Driver Version: 530.30.02 CUDA Version: 12.1 | |-----------------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+======================+======================| | 0 NVIDIA A10G Off| 00000000:00:1E.0 Off | 0 | | 0% 23C P0 52W / 300W| 0MiB / 23028MiB | 18% Default | | | | N/A | +-----------------------------------------+----------------------+----------------------+ +---------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=======================================================================================| | No running processes found | +---------------------------------------------------------------------------------------+ </code></pre> <p>Any idea how to modify my code to make it work?</p>
<python><machine-learning><pytorch><chatgpt-api><langchain>
2023-04-20 18:14:37
2
433
Ben
76,067,002
1,795,245
Create new dataframe by applying 2D mask on another DataFrame with the same shape
<p>Is there a fast and easy way to create a new data frame based on a masked dataframe and a dataframe with values:</p> <pre><code>df_1 = pd.DataFrame(data = [[False, True],[True, False],[False, True], [False, False]] ) 0 1 0 False True 1 True False 2 False True 3 False False df_2 = pd.DataFrame(data = [[1, 2],[3, 4],[5, 6], [7, 8]] ) 0 1 0 1 2 1 3 4 2 5 6 3 7 8 </code></pre> <p>The result I want is a new dataframe that only contains the values masked TRUE:</p> <pre><code> 0 1 0 3 2 1 NaN 6 </code></pre>
<python><pandas><dataframe>
2023-04-20 17:59:36
4
649
Jonas
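A sketch of one way to get exactly the compacted layout shown above: `DataFrame.where` keeps values where the mask is `True` and inserts `NaN` elsewhere, and a per-column rebuild then pushes the surviving values to the top of each column.

```python
import pandas as pd

df_1 = pd.DataFrame([[False, True], [True, False], [False, True], [False, False]])
df_2 = pd.DataFrame([[1, 2], [3, 4], [5, 6], [7, 8]])

# Keep values where the mask is True, NaN elsewhere.
masked = df_2.where(df_1)

# Compact each column: drop its NaNs and re-number from 0, so the kept
# values float to the top and ragged columns are padded with NaN.
result = masked.apply(lambda col: pd.Series(col.dropna().to_numpy()))
print(result)
```

With the question's data this yields `3, 2` in the first row and `NaN, 6` in the second, matching the requested output.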
76,066,919
3,623,603
What is a best practice for handling paths in Python modules?
<p>I have the following folder structure in Linux:</p> <pre><code>Model | |-----Shared_Modules (contains shared.py, a module full of shared functions) | |-----Segment_Sales (contains main.py, run the model from this file) | -----------data (contains data.csv) </code></pre> <p>main.py in the <code>segment_sales</code> folder calls functions from the shared_modules folder. Every time I call a function, I have to pass a path to it. I feel like I am passing a lot of paths into every single function, and I just don't know if this is the right way. Am I missing a way to do this more simply?</p> <p>Right now a simple example of my main.py file is:</p> <pre><code>working_dir=os.getcwd() shared_path = os.path.dirname(os.getcwd()) sys.path.append(os.path.join(shared_path, &quot;Shared_modules/&quot;)) import shared_modules as sm </code></pre> <p>The first 3 lines add the path where shared_modules is located to <code>sys.path</code> so I can have access to the shared_modules.py module.</p> <p>Here is an example of a function I'd pull from shared_modules.</p> <pre><code>def get_input_data(input_path, model_name): config = pd.read_excel('data/Sales_and_Input_Files/model_input_file.xlsx', sheet_name='config', engine='openpyxl') print(input_path) return config </code></pre> <p>This simple function is an example of how I'm having to pass an input path to <code>get_input_data</code> when I just want it to use the <code>shared_path</code> above. When I try to just run main and call something from <code>shared_module.py</code>, it can't find the path unless I pass it directly to the function, and this occurs with every function in <code>Shared Modules</code>. Thanks in advance for advice.</p> <p>Edit: I realize I oversimplified my sample function and didn't use the input path for any reason. I added a simple print statement to demonstrate that it might be used.</p>
<python><pandas><module><path><sys>
2023-04-20 17:47:21
0
510
Vaslo
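One common way to avoid threading paths through every call (a sketch only; the folder names come from the question, and `data_file` is a hypothetical helper, not part of the poster's code): anchor all paths to the shared module's own file with `pathlib`, so they never depend on the caller's working directory.

```python
from pathlib import Path

# Inside Shared_Modules/shared.py, __file__ points at shared.py itself,
# so two .parent hops climb to Model/ regardless of os.getcwd().
# (Adjust the number of hops for the real layout.)
PROJECT_ROOT = Path(__file__).resolve().parent.parent
DATA_DIR = PROJECT_ROOT / "Segment_Sales" / "data"


def data_file(name: str) -> Path:
    """Hypothetical helper: build a path under the shared data directory."""
    return DATA_DIR / name


print(data_file("data.csv"))
```

With module-level constants like these, functions such as `get_input_data` can drop their `input_path` parameter entirely and read from `DATA_DIR` directly.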
76,066,812
11,546,773
How to combine columns in polars horizontally?
<p>I'm trying to combine multiple columns into 1 column with python Polars. However, I can't seem to find an (elegant) way to combine columns into a list.</p> <p>I only need to combine columns b through e into 1 column. Column a needs to stay exactly as it is now. I've tried using <code>map_elements</code> to achieve this. Besides the fact that this isn't working, it's also slow and most likely not the best way to do this.</p> <p>Can anyone help me out with how I can achieve this result?</p> <p><strong>The dataframe I have:</strong></p> <pre><code>df = pl.from_repr(&quot;&quot;&quot; โ”Œโ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ” โ”‚ a โ”† b โ”† c โ”† d โ”† e โ”‚ โ”‚ --- โ”† --- โ”† --- โ”† --- โ”† --- โ”‚ โ”‚ f64 โ”† f64 โ”† f64 โ”† f64 โ”† f64 โ”‚ โ•žโ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•ก โ”‚ 0.1 โ”† 1.1 โ”† 2.1 โ”† 3.1 โ”† 4.1 โ”‚ โ”‚ 0.2 โ”† 1.2 โ”† 2.2 โ”† 3.2 โ”† 4.2 โ”‚ โ”‚ 0.3 โ”† 1.3 โ”† 2.3 โ”† 3.3 โ”† 4.3 โ”‚ โ””โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”˜ &quot;&quot;&quot;) </code></pre> <p><strong>The result I need:</strong></p> <pre><code>shape: (3, 2) โ”Œโ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” โ”‚ a โ”† value โ”‚ โ”‚ --- โ”† --- โ”‚ โ”‚ f64 โ”† list[f64] โ”‚ โ•žโ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ก โ”‚ 0.1 โ”† [1.1, 2.1, 3.1, 4.1] โ”‚ โ”‚ 0.2 โ”† [1.2, 2.2, 3.2, 4.2] โ”‚ โ”‚ 0.3 โ”† [1.3, 2.3, 3.3, 4.3] โ”‚ โ””โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ </code></pre>
<python><dataframe><python-polars>
2023-04-20 17:32:36
1
388
Sam
76,066,749
19,130,803
Postgres truncate command SQL: passing dynamic table name
<p>I am working on a Python web project using a <code>postgres</code> database as backend with the <code>psycopg2</code> package.</p> <p>Earlier, I was trying <code>static</code> query for <code>truncate</code> command for demo purpose, after it works, I tried to make it <code>dynamic</code>, thereby passing <code>table name</code>.</p> <pre><code>def clear_tables(query: str, vars_list: list[Any]): # psycog 2 connection engine code cursor.executemany(query=query, vars_list=vars_list) conn.commit() def foo(): vars_list: list[Any] = list() table_name: str = &quot;person&quot; record: tuple[Any] = (table_name,) vars_list.append(record) query = &quot;&quot;&quot; TRUNCATE TABLE %s RESTART IDENTITY; &quot;&quot;&quot; clear_tables(query=query, vars_list=vars_list) </code></pre> <p>On executing I am getting an error. Somehow the <code>table name</code> gets unwanted <code>'</code> single quotes</p> <pre><code>psycopg2.errors.SyntaxError: syntax error at or near &quot;'person'&quot; LINE 2: TRUNCATE TABLE 'person' RESTART IDENTITY; </code></pre> <p>How to remove those quotes?</p>
<python><postgresql>
2023-04-20 17:23:44
2
962
winter
76,066,634
1,044,117
capturing/replaying requests with python gives a different result than a browser
<p>I'm trying to interact with the form at <a href="https://referti.policlinicodiliegro.it/servizi/#/portalereferto/ritiro" rel="nofollow noreferrer">https://referti.policlinicodiliegro.it/servizi/#/portalereferto/ritiro</a> via command-line / python.</p> <p>I captured a request/response via Chrome Developer Tools network tab:</p> <p>It shows:</p> <p>REQUEST HEADERS:</p> <pre><code>Accept: application/json, text/plain, */* Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryPvCnB4Ae6FtNAcnc Referer: https://referti.policlinicodiliegro.it/servizi/ sec-ch-ua: &quot;Chromium&quot;;v=&quot;112&quot;, &quot;Google Chrome&quot;;v=&quot;112&quot;, &quot;Not:A-Brand&quot;;v=&quot;99&quot; sec-ch-ua-mobile: ?0 sec-ch-ua-platform: &quot;macOS&quot; User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36 </code></pre> <p>REQUEST PAYLOAD:</p> <pre><code>------WebKitFormBoundaryPvCnB4Ae6FtNAcnc Content-Disposition: form-data; name=&quot;serviceName&quot; PortaleRefertoPublicService ------WebKitFormBoundaryPvCnB4Ae6FtNAcnc Content-Disposition: form-data; name=&quot;methodName&quot; accessByPinSisWeb ------WebKitFormBoundaryPvCnB4Ae6FtNAcnc Content-Disposition: form-data; name=&quot;parametri&quot; {&quot;utente&quot;:&quot;A&quot;,&quot;password&quot;:&quot;B&quot;,&quot;pin&quot;:&quot;C&quot;} ------WebKitFormBoundaryPvCnB4Ae6FtNAcnc-- </code></pre> <p>RESPONSE HEADERS:</p> <pre><code>Access-Control-Allow-Headers: origin, x-requested-with, Content-Type, accept, MaxDataServiceVersion Access-Control-Allow-Methods: POST, PUT, GET, OPTIONS, DELETE Access-Control-Allow-Origin: * Access-Control-Max-Age: 3600 Cache-Control: no-store Connection: keep-alive Content-Language: en-GB Content-Type: application/json;charset=UTF-8 Date: Thu, 20 Apr 2023 16:55:07 GMT Keep-Alive: timeout=60 Transfer-Encoding: chunked </code></pre> <p>RESPONSE PAYLOAD:</p> <pre class="lang-json 
prettyprint-override"><code>{&quot;message&quot;:&quot;Credenziali non valide&quot;,&quot;errorCode&quot;:&quot;&quot;,&quot;success&quot;:false} </code></pre> <p>(which is the expected result, since I input the invalid A/B/C credentials).</p> <p>Then trying to replay the same request with python:</p> <pre class="lang-py prettyprint-override"><code>import requests r = requests.post('https://referti.policlinicodiliegro.it/IasiSmartControl/sisweb/dataServiceLoad.action', files={ 'serviceName': 'PortaleRefertoPublicService', 'methodName': 'accessByPinSisWeb', 'parametri': '{&quot;utente&quot;:&quot;A&quot;,&quot;password&quot;:&quot;B&quot;,&quot;pin&quot;:&quot;C&quot;}', }, headers={ 'Accept': 'application/json, text/plain, */*', 'Referer': 'https://referti.policlinicodiliegro.it/servizi/', 'sec-ch-ua': '&quot;Chromium&quot;;v=&quot;112&quot;, &quot;Google Chrome&quot;;v=&quot;112&quot;, &quot;Not:A-Brand&quot;;v=&quot;99&quot;', 'sec-ch-ua-mobile': '?0', 'sec-ch-ua-platform': '&quot;macOS&quot;', 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', }) print(r.json()) </code></pre> <p>I get this response:</p> <pre class="lang-json prettyprint-override"><code>{'dataResult': ['&lt;long alphanumeric code redacted&gt;'], 'success': True} </code></pre> <p>where the long alphanumeric code is always the same...</p> <p>Am I doing some mistake? Or is this website somehow detecting and preventing a non-web browser client? 
(How?)</p> <p>I also try to inspect the sent request:</p> <pre class="lang-py prettyprint-override"><code>for k, v in r.request.headers.items(): print(f'{k}: {v}') print(r.request.body.decode('utf-8')) </code></pre> <p>and it looks as this:</p> <pre><code>User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36 Accept-Encoding: gzip, deflate Accept: application/json, text/plain, */* Connection: keep-alive Referer: https://referti.policlinicodiliegro.it/servizi/ sec-ch-ua: &quot;Chromium&quot;;v=&quot;112&quot;, &quot;Google Chrome&quot;;v=&quot;112&quot;, &quot;Not:A-Brand&quot;;v=&quot;99&quot; sec-ch-ua-mobile: ?0 sec-ch-ua-platform: &quot;macOS&quot; Content-Length: 463 Content-Type: multipart/form-data; boundary=c960d72f4d302329b731d85ef4ee4b37 --c960d72f4d302329b731d85ef4ee4b37 Content-Disposition: form-data; name=&quot;serviceName&quot;; filename=&quot;serviceName&quot; PortaleRefertoPublicService --c960d72f4d302329b731d85ef4ee4b37 Content-Disposition: form-data; name=&quot;methodName&quot;; filename=&quot;methodName&quot; accessByPinSisWeb --c960d72f4d302329b731d85ef4ee4b37 Content-Disposition: form-data; name=&quot;parametri&quot;; filename=&quot;parametri&quot; {&quot;utente&quot;:&quot;A&quot;,&quot;password&quot;:&quot;B&quot;,&quot;pin&quot;:&quot;C&quot;} --c960d72f4d302329b731d85ef4ee4b37-- </code></pre> <p>which seems almost the same request as the one generated in the browser...</p>
<python><python-requests>
2023-04-20 17:05:27
1
19,047
fferri
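One likely culprit, shown as a hedged sketch: passing plain strings through `files=` makes `requests` add a `filename=` attribute to every part (visible in the captured body above, where the browser's capture had none), and some servers treat such parts as file uploads rather than plain form fields. Wrapping each value as a `(None, value)` tuple drops the filename. The URL below is a placeholder and no request is actually sent; only the prepared body is inspected.

```python
import requests

payload = {
    "serviceName": (None, "PortaleRefertoPublicService"),
    "methodName": (None, "accessByPinSisWeb"),
    "parametri": (None, '{"utente":"A","password":"B","pin":"C"}'),
}

# Build (but do not send) the request so the multipart body can be
# inspected locally. A (None, value) tuple means "no filename for this part".
req = requests.Request(
    "POST", "https://example.invalid/dataServiceLoad.action", files=payload
).prepare()
body = req.body.decode("utf-8")

# The parts now mirror the browser capture: no `filename=` attribute.
print("filename=" in body)
```

If the server still behaves differently after this change, comparing the remaining header differences (e.g. `Origin`, cookies) would be the next step.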
76,066,597
10,146,283
Splitting polygon by line using intersection returns original polygon in Shapely
<p>I reduced my problem to some MWE so that it can be reproduced. I'm using Shapely 2.0. I want to split a polygon P in 2 parts by a line L.</p> <p><code>POLYGON ((292718.0381447676 6638193.414029885, 292694.50537013356 6637994.803334004, 292718.0381447676 6638193.414029885, 292718.9708331647 6638193.303518486, 292722.0992155936 6638192.97038651, 292725.48975053953 6638192.648911643, 292729.16966360115 6638192.34135049, 292733.16283984896 6638192.050457474, 292737.48817684915 6638191.779469689, 292742.1600771896 6638191.532042792, 292747.1894678763 6638191.312155023, 292752.58358723175 6638191.124185786, 292758.34681013395 6638190.973003162, 292753.9574101781 6637991.021175833, 292758.34681013395 6638190.973003162, 292753.9574101781 6637991.021175833, 292694.50537013356 6637994.803334004, 292718.0381447676 6638193.414029885))</code></p> <p>First, I get the intersection between P and L. The result is a LineString L2 as expected: <code>LINESTRING (292756.2221414001 6638094.18724655, 292743.0611 6638095.284, 292709.9881 6638095.284, 292706.4410368586 6638095.537357549)</code></p> <p>Second, I split P by L2, and I expect it to return 2 polygons P1 and P2.</p> <pre class="lang-py prettyprint-override"><code>from shapely import wkt from shapely.ops import split box = wkt.loads(my_polygon_as_str) line = wkt.loads(my_line_as_str) result = split(box, line) # should return 2 polygons </code></pre> <p>Yet, the <code>split()</code> method returns only one polygon for some reason, which is the original P shape.
<code>POLYGON ((292694.50537013356 6637994.803334004, 292718.0381447676 6638193.414029885, 292718.9708331647 6638193.303518486, 292722.0992155936 6638192.97038651, 292725.48975053953 6638192.648911643, 292729.16966360115 6638192.34135049, 292733.16283984896 6638192.050457474, 292737.48817684915 6638191.779469689, 292742.1600771896 6638191.532042792, 292747.1894678763 6638191.312155023, 292752.58358723175 6638191.124185786, 292758.34681013395 6638190.973003162, 292753.9574101781 6637991.021175833, 292694.50537013356 6637994.803334004))</code></p> <p>What is happening here ? Is it a matter of floating points with Shapely considering L2 boundaries not intersecting exactly on P sides ? If so, how can I achieve such operation with real world data ?</p> <p>P.S: I want to achieve this for hundreds of polygons, and the splitting randomly works for a few of them (about 10%) which is really confusing.</p>
<python><shapely>
2023-04-20 17:00:06
1
626
Beinje
76,066,562
3,517,025
How to visualize nodes in horizontal cluster with invisible edges (Python diagrams)
<p>I'm trying to make a horizontal cluster containing two nodes (here visualized as users) in Python module <a href="https://diagrams.mingrammer.com/" rel="nofollow noreferrer">diagrams</a> which uses Graphviz for rendering.</p> <p>But the only way to achieve this is by adding edges between the nodes.</p> <h3>Horizontal, but with edge shown between nodes</h3> <pre class="lang-py prettyprint-override"><code>from diagrams import Cluster, Diagram from diagrams.onprem.client import User with Diagram(&quot;sample&quot;, show=False) as diag: with Cluster(&quot;sample horizontal cluster with edge&quot;): node1 = User(&quot;Some Node&quot;) node2 = User(&quot;Some other Node&quot;) node1 &gt;&gt; node2 </code></pre> <p>Which produces:</p> <p><a href="https://i.sstatic.net/aS4D1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aS4D1.png" alt="enter image description here" /></a></p> <h3>Without edge, but vertical</h3> <p>I've tried without the connection <code>node1 &gt;&gt; node2</code>:</p> <pre class="lang-py prettyprint-override"><code> with Cluster(&quot;sample horizontal cluster with edge&quot;, direction=&quot;LR&quot;): node1 = User(&quot;Some Node&quot;) node2 = User(&quot;Some other Node&quot;) </code></pre> <p>Then it renders vertically, which I don't want: <a href="https://i.sstatic.net/WVcdd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WVcdd.png" alt="enter image description here" /></a></p> <p>I don't want to see that edge but have the nodes arranged horizontally.</p> <p>How can I achieve that?</p>
<python><graphviz><diagram>
2023-04-20 16:56:02
1
5,409
Joey Baruch
76,066,499
11,510,977
Error in Converting PyTorch Model to ONNX format
<p>I want to convert <code>https://github.com/TencentARC/GFPGAN</code> to ONNX format to use in Android App. After going through the procedure, it shows error (AttributeError: 'dict' object has no attribute 'modules').</p> <p>Following is my code in Colab:</p> <pre><code>!pip install onnx !pip install torch torchvision onnx import urllib.request url = &quot;https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth&quot; urllib.request.urlretrieve(url, &quot;my_model.pth&quot;) </code></pre> <pre><code>import torch import numpy as np model = torch.load(&quot;my_model.pth&quot;) input_shape = (1, 3, 224, 224) example_input = torch.rand(input_shape) </code></pre> <pre><code>import urllib.request import torch output_path = &quot;my_model.onnx&quot; torch.onnx.export(model, input_shape, output_path, opset_version=11) # Load and check the ONNX model for correctness onnx_model = onnx.load(output_path) onnx.checker.check_model(onnx_model) </code></pre> <p>Here in the last code snippet the following error is shown:</p> <pre><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) &lt;ipython-input-9-ef0dbdcd2de8&gt; in &lt;cell line: 5&gt;() 3 4 output_path = &quot;my_model.onnx&quot; ----&gt; 5 torch.onnx.export(model, input_shape, output_path, opset_version=11) 6 7 # Load and check the ONNX model for correctness 5 frames /usr/local/lib/python3.9/dist-packages/torch/onnx/utils.py in disable_apex_o2_state_dict_hook(model) 137 if not isinstance(model, torch.jit.ScriptFunction): 138 model_hooks = {} # type: ignore[var-annotated] --&gt; 139 for module in model.modules(): 140 for key, hook in module._state_dict_hooks.items(): 141 if type(hook).__name__ == &quot;O2StateDictHook&quot;: AttributeError: 'dict' object has no attribute 'modules' </code></pre> <p>I tried to convert the model into ONNX and imported libraries. 
The issue occurs when calling torch.onnx.export.</p>
<python><pytorch><onnx><tflite>
2023-04-20 16:46:41
1
373
Hammad Ali Shah
76,066,446
13,723,501
Gtk thread blocking function execution?
<p>I have the following python code:</p> <pre class="lang-py prettyprint-override"><code>import threading from fpysensor import fp import gi gi.require_version('Gtk', '3.0') from gi.repository import Gtk class MainWindow(Gtk.Window): def __init__(self): Gtk.Window.__init__(self, title=&quot;Hello, World!&quot;) self.set_border_width(10) label = Gtk.Label() label.set_label(&quot;Hello, World!&quot;) self.add(label) def run(): win = MainWindow() win.connect(&quot;destroy&quot;, Gtk.main_quit) win.show_all() Gtk.main() threading.Thread(target=run).start() threading.Thread(target=fp.run_verification, args=(&quot;127.0.0.1&quot;, 5000)).start() while True: pass </code></pre> <p><code>fp</code> module is defined as:</p> <pre class="lang-py prettyprint-override"><code>from fpysensor import __fingerprint from ctypes import c_char_p def run_enrollment(ip_addr: str, port: int): &quot;&quot;&quot; Run the fingerprint enrollment process &quot;&quot;&quot; __ip_cp = c_char_p(ip_addr.encode('utf-8')) __fingerprint.py_run_enrollment(__ip_cp, port) def run_verification(ip_addr: str, port: int): &quot;&quot;&quot; Run verification against the fingerprints stored &quot;&quot;&quot; __ip_cp = c_char_p(ip_addr.encode('utf-8')) __fingerprint.py_run_verification(__ip_cp, port) </code></pre> <p>Where</p> <pre class="lang-py prettyprint-override"><code>__fingerprint = ctypes.CDLL(f&quot;{__wd}/lib.so&quot;) </code></pre> <p>As you can see, I'm using a shared library name <code>lib.so</code> (<a href="https://gitlab.com/parawa/fingerprint" rel="nofollow noreferrer">Code here</a>) and calling that library with the functions defined on fp. Now for some reason, when I'm running gtk, even on a different thread, my function, in this case <code>fp.run_verification</code> isn't executed until I close the window created by gtk on <code>MainWindow</code> class. Is gtk blocking all threads or there's something wrong with my C library?</p>
<python><multithreading><gtk><shared-libraries>
2023-04-20 16:39:09
1
343
Parker
76,066,423
1,942,868
What class instance should be used for request of authenticate()?
<p>I have a backend which uses <code>LDAP</code>.</p> <pre><code>class MyLDAPBackend(LDAPBackend): def authenticate(self, request, username=None, password=None, **kwargs): print(&quot;Try ldap auth&quot;) print(type(request)) # &lt;class 'django.core.handlers.wsgi.WSGIRequest'&gt; user = LDAPBackend.authenticate(request, username, password,kwargs) </code></pre> <p>This shows an error like this:</p> <pre><code>'WSGIRequest' object has no attribute 'settings' </code></pre> <p>So, I checked the source code in the <code>[django-auth-ldap][1]</code> library.</p> <pre><code>class LDAPBackend: def authenticate(self, request, username=None, password=None, **kwargs): if username is None: return None if password or self.settings.PERMIT_EMPTY_PASSWORD: ldap_user = _LDAPUser(self, username=username.strip(), request=request) user = self.authenticate_ldap_user(ldap_user, password) else: logger.debug(&quot;Rejecting empty password for %s&quot;, username) user = None return user </code></pre> <p>It expects an instance which has access to <code>settings.py</code>; however, the framework passes the <code>&lt;class 'django.core.handlers.wsgi.WSGIRequest'&gt;</code> to authenticate.</p> <p>How can I solve this?</p>
<python><django><ldap><openldap>
2023-04-20 16:35:23
0
12,599
whitebear
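The traceback suggests `LDAPBackend.authenticate` is being called on the *class*, so `request` lands in the `self` slot and `self.settings` gets looked up on the `WSGIRequest`. A minimal stdlib sketch of that pitfall and the `super().authenticate(...)` fix (the classes below are stand-ins, not the real django-auth-ldap API):

```python
class Base:
    settings = "SETTINGS"

    def authenticate(self, request, username=None):
        # `self.settings` only resolves when `self` really is a backend.
        return (self.settings, request, username)


class Child(Base):
    def broken(self, request, username=None):
        # Calling on the class shifts every argument one slot to the left:
        # `request` becomes `self`, so `self.settings` fails -- the analogue
        # of "'WSGIRequest' object has no attribute 'settings'".
        return Base.authenticate(request, username)

    def fixed(self, request, username=None):
        # super() supplies the bound instance, keeping arguments aligned.
        return super().authenticate(request, username=username)


c = Child()
try:
    c.broken("<request>", "alice")
except AttributeError as exc:
    print("broken:", exc)

print("fixed:", c.fixed("<request>", "alice"))
```

Applied to the question, the override would call `super().authenticate(request, username=username, password=password, **kwargs)` instead of `LDAPBackend.authenticate(request, username, password, kwargs)` (which also passes `kwargs` as a single positional dict).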
76,066,328
17,795,398
How to use kivymd.ResponsiveLayout?
<p>I'm learning <code>kivymd</code>, and I want to use the <code>ResponsiveLayout</code> defined at a <code>.kv</code> file.</p> <p><code>.py</code> code:</p> <pre><code>from kivy.lang import Builder from kivymd.app import MDApp class Test(MDApp): def build(self): return Builder.load_file(&quot;startpage.kv&quot;) Test().run() </code></pre> <p><code>.kv</code> file:</p> <pre><code>&lt;MobileView&gt;: MDLabel: text: &quot;Mobile&quot; halign: &quot;center&quot; &lt;TabletView&gt;: MDLabel: text: &quot;Tablet&quot; halign: &quot;center&quot; &lt;DesktopView&gt;: MDLabel: text: &quot;Desktop&quot; halign: &quot;center&quot; ResponsiveLayout: mobile_view: MobileView tablet_view: TabletView desktop_view: DesktopView </code></pre> <p>I get this error:</p> <pre><code>[INFO ] [Logger ] Record log in C:\Users\acgc9\.kivy\logs\kivy_23-04-20_32.txt [INFO ] [deps ] Successfully imported &quot;kivy_deps.gstreamer&quot; 0.3.3 [INFO ] [deps ] Successfully imported &quot;kivy_deps.angle&quot; 0.3.3 [INFO ] [deps ] Successfully imported &quot;kivy_deps.glew&quot; 0.3.1 [INFO ] [deps ] Successfully imported &quot;kivy_deps.sdl2&quot; 0.6.0 [INFO ] [Kivy ] v2.2.0.dev0, git-0fc8c67, 20230419 [INFO ] [Kivy ] Installed at &quot;C:\Users\acgc9\AppData\Local\Programs\Python\Python311\Lib\site-packages\kivy\__init__.py&quot; [INFO ] [Python ] v3.11.2 (tags/v3.11.2:878ead1, Feb 7 2023, 16:38:35) [MSC v.1934 64 bit (AMD64)] [INFO ] [Python ] Interpreter at &quot;C:\Users\acgc9\AppData\Local\Programs\Python\Python311\python.exe&quot; [INFO ] [Logger ] Purge log fired. Processing... [INFO ] [Logger ] Purge finished! 
[INFO ] [Factory ] 190 symbols loaded [INFO ] [KivyMD ] 1.1.1, git-Unknown, 2023-04-20 (installed at &quot;C:\Users\acgc9\AppData\Local\Programs\Python\Python311\Lib\site-packages\kivymd\__init__.py&quot;) [INFO ] [Image ] Providers: img_tex, img_dds, img_sdl2, img_pil (img_ffpyplayer ignored) [INFO ] [Text ] Provider: sdl2 [INFO ] [Window ] Provider: sdl2 [INFO ] [GL ] Using the &quot;OpenGL&quot; graphics system [INFO ] [GL ] GLEW initialization succeeded [INFO ] [GL ] Backend used &lt;glew&gt; [INFO ] [GL ] OpenGL version &lt;b'4.5.0 - Build 25.20.100.6617'&gt; [INFO ] [GL ] OpenGL vendor &lt;b'Intel'&gt; [INFO ] [GL ] OpenGL renderer &lt;b'Intel(R) UHD Graphics 630'&gt; [INFO ] [GL ] OpenGL parsed version: 4, 5 [INFO ] [GL ] Shading version &lt;b'4.50 - Build 25.20.100.6617'&gt; [INFO ] [GL ] Texture max size &lt;16384&gt; [INFO ] [GL ] Texture max units &lt;32&gt; [INFO ] [Window ] auto add sdl2 input provider [INFO ] [Window ] virtual keyboard not allowed, single mode, not docked Traceback (most recent call last): File &quot;C:\Users\acgc9\Desktop\test\main.py&quot;, line 8, in &lt;module&gt; Test().run() File &quot;C:\Users\acgc9\AppData\Local\Programs\Python\Python311\Lib\site-packages\kivy\app.py&quot;, line 955, in run self._run_prepare() File &quot;C:\Users\acgc9\AppData\Local\Programs\Python\Python311\Lib\site-packages\kivy\app.py&quot;, line 925, in _run_prepare root = self.build() ^^^^^^^^^^^^ File &quot;C:\Users\acgc9\Desktop\test\main.py&quot;, line 6, in build return Builder.load_file(&quot;startpage.kv&quot;) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\acgc9\AppData\Local\Programs\Python\Python311\Lib\site-packages\kivy\lang\builder.py&quot;, line 305, in load_file return self.load_string(data, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\acgc9\AppData\Local\Programs\Python\Python311\Lib\site-packages\kivy\lang\builder.py&quot;, line 403, in load_string widget = Factory.get(parser.root.name)(__no_builder=True) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\acgc9\AppData\Local\Programs\Python\Python311\Lib\site-packages\kivy\factory.py&quot;, line 147, in __getattr__ raise FactoryException('Unknown class &lt;%s&gt;' % name) kivy.factory.FactoryException: Unknown class &lt;ResponsiveLayout&gt; </code></pre>
<python><kivymd>
2023-04-20 16:24:09
1
472
Abel Gutiรฉrrez
76,066,317
1,477,072
How can I annotate a django queryset with more than one value
<p>I have an application that scrapes some eshops every day, and I record the <code>Price</code> of every <code>Item</code>.</p> <p>A simplified version of my models looks like this:</p> <pre class="lang-py prettyprint-override"><code>class Item(models.Model): name = models.CharField(max_length=512, blank=True) class Price(models.Model): item = models.ForeignKey(&quot;Item&quot;, on_delete=models.CASCADE) timestamp = models.DateTimeField(blank=True) per_item = models.FloatField(blank=True) </code></pre> <p>And my tables can look like this:</p> <pre><code># Item id | name ----------- 1i | item-1 2i | item-2 3i | item-3 # Price id | item | timestamp | per_item -------------------------------------- 1p | i1 | 2023-04-10 | 10.0 2p | i2 | 2023-04-10 | 2.0 3p | i3 | 2023-04-10 | 14.5 4p | i1 | 2023-04-11 | 11.0 5p | i3 | 2023-04-11 | 12.5 6p | i2 | 2023-04-12 | 2.0 7p | i3 | 2023-04-12 | 14.5 8p | i1 | 2023-04-13 | 12.0 9p | i2 | 2023-04-13 | 3.5 10p | i3 | 2023-04-13 | 15.0 </code></pre> <p>Note that not all Items exist every day, some of them may be present today, missing tomorrow and back again on the day after, and so on.</p> <h3>What I want to do</h3> <p>I'm trying to write a query that for a given day it will give me the <strong>last 2 Prices</strong> for every Item that existed that day.</p> <p>So, if I was looking at the last day (2023-04-13) I would get something like this:</p> <pre class="lang-py prettyprint-override"><code>[ {'id': '1i', 'latest_prices': [{'price__id': '8p', 'price__per_item': 12.0}, {'price__id': '4p', 'price__per_item': 11.0}] {'id': '2i', 'latest_prices': [{'price__id': '9p', 'price__per_item': 3.5}, {'price__id': '6p', 'price__per_item': 2.0}] {'id': '3i', 'latest_prices': [{'price__id': '10p', 'price__per_item': 15.0}, {'price__id': '7p', 'price__per_item': 14.5}] ] </code></pre> <p>I have ~25k items every day, so I'd like not to:</p> <ul> <li>Have to go to the database for every <code>Price</code> to retrieve its <code>per_item</code> 
field</li> <li>Have to loop through every <code>Item</code> of every day, to retrieve its last 2 <code>Price</code>s</li> </ul> <h3>What I have tried</h3> <ol> <li>I have tried using <code>annotate</code> and <code>Subquery</code>, but I was getting <code>django.db.utils.ProgrammingError: more than one row returned by a subquery used as an expression</code></li> <li>Then I found that on postgres I could use an <a href="https://docs.djangoproject.com/en/4.2/ref/contrib/postgres/expressions/#arraysubquery-expressions" rel="nofollow noreferrer"><code>ArraySubquery</code></a> but I could only return multiple <code>price__id</code>, not multiple fields</li> <li>Then I looked closer at the example in the <code>ArraySubquery</code> link and I noticed the <code>JSONObject</code> - however I'm probably holding it the wrong way and I always get a single item in the <code>latest_prices</code> annotation.</li> </ol> <p>I <em>suspect</em> that the solution is in the <code>JSONObject</code> approach, but somehow it's just out of reach for my mind at the moment, so any steer in the right direction is really appreciated :)</p>
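Building on point 3, one way to combine `ArraySubquery` with `JSONObject` is to put the `JSONObject` inside the subquery's `.values()` call, so each subquery row becomes one JSON object carrying both fields, and the outer annotation then collects up to two of them per item. This is an untested sketch (it assumes Django ≥ 4.0 on PostgreSQL, and a hypothetical `day` variable holding the date of interest):

```python
# Untested sketch: assumes Django >= 4.0 on PostgreSQL; `day` is hypothetical.
from django.contrib.postgres.expressions import ArraySubquery
from django.db.models import OuterRef
from django.db.models.functions import JSONObject

# Up to 2 most recent prices per item, newest first; each row is a JSON object.
latest_prices = (
    Price.objects
    .filter(item=OuterRef("pk"), timestamp__date__lte=day)
    .order_by("-timestamp")
    .values(json=JSONObject(price__id="id", price__per_item="per_item"))[:2]
)

items = Item.objects.annotate(latest_prices=ArraySubquery(latest_prices))
# items.values("id", "latest_prices") should then match the shape shown above.
```

Getting a single object in `latest_prices` can happen when the `JSONObject` is placed in the outer `annotate()` instead of inside the subquery's `values()`.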
<python><django><postgresql><django-orm>
2023-04-20 16:23:17
1
593
Sakis Vtdk
76,066,269
10,472,655
Reticulate wrongly translating values of a column
<p>Right now I'm working in a .Rmd file and using <code>reticulate</code> so I can combine both Python and R chunks.</p> <p>Initially, I'm getting a dataframe like this one below.</p> <pre><code>id device sessions 1 mobile 40 2 desktop 61 3 desktop 55 4 mobile 24 </code></pre> <p>Afterwards I'm converting it into an R object this way</p> <pre><code>df &lt;- py$df </code></pre> <p>Additionally, I have to say that once having completed this last step, apparently nothing looks wrong when I display it by running <code>knitr::kable(df)</code></p> <p>But when I try to continue working with this data in the following R chunks, I cannot proceed, because at that point my data looks as follows:</p> <pre><code>id device sessions 1 &lt;chr [1]&gt; 40 2 &lt;chr [1]&gt; 61 3 &lt;chr [1]&gt; 55 4 &lt;chr [1]&gt; 24 </code></pre> <p>What causes this weird translation, and how can I fix it so the dataframe in R matches what I got originally in Python?</p>
<python><r><r-markdown><reticulate>
2023-04-20 16:17:58
0
343
teogj
76,066,238
2,999,868
Passing BytesIO to BufferedReader results in warning message
<p>I am trying to create a BufferedReader from a BytesIO object but my IDE (PyCharm) is warning me that BufferedReader <code>Expected type 'RawIOBase', got 'BytesIO' instead</code>.</p> <p>The logic seems to work but I'd like to solve this properly if there is a way to do so.</p> <p>Another way to phrase this question would be: can <code>BufferedReader</code> wrap a <code>BytesIO</code> object or can it only do so with objects that extend the <code>RawIOBase</code> interface?</p> <p>Minimum reproducible code below:</p> <pre><code>from io import BytesIO, BufferedReader b_handle = BytesIO() b_handle.write(b&quot;Hello World&quot;) b_handle.seek(0) # Warning: Expected type 'RawIOBase', got 'BytesIO' instead br = BufferedReader(b_handle) data = br.read() print(data) </code></pre>
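For what it's worth, the warning appears to come from the type stubs only: at runtime `BufferedReader` duck-types its argument, and a `BytesIO` (which sits under `BufferedIOBase`, not `RawIOBase`) still works because it provides the raw-I/O methods. A small sketch checking that behavior:

```python
from io import BytesIO, BufferedReader

# BytesIO is not a RawIOBase subclass, but it implements the raw interface
# (readinto, readable, ...), so BufferedReader accepts it at runtime.
b_handle = BytesIO(b"Hello World")
br = BufferedReader(b_handle)  # runs fine despite the stub-based warning
data = br.read()
print(data)  # b'Hello World'
```

If the warning is bothersome, a `# type: ignore` comment or a `typing.cast` silences it; note also that `BytesIO` is already buffered, so the wrapper is optional unless an API specifically requires a `BufferedReader`.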
<python><minio><bytesio>
2023-04-20 16:13:54
0
952
keith.g
76,066,200
11,923,747
Python does't search for libraries in the right path (windows)
<p>I have 2 installations of Python on my Windows computer:</p> <p>Python3.6.5 with Anaconda:</p> <pre class="lang-none prettyprint-override"><code>C:\Users\Pedro\AppData\Local\Continuum\anaconda3\python.exe </code></pre> <p>Python3.8.5 with WinPython:</p> <pre class="lang-none prettyprint-override"><code>C:\Users\Pedro\Documents\Softwares\Winpython.385\python-3.8.5.amd64\python.exe </code></pre> <p>I can double-click the WinPython python.exe and it works well.</p> <p>When I double-click the Anaconda python.exe, it crashes.</p> <pre class="lang-none prettyprint-override"><code>.\python.exe : Fatal Python error: Py_Initialize: can't initialize sys standard streams Traceback (most recent call last): File &quot;C:\Users\Pedro\Documents\Softwares\Winpython.385\python-3.8.5.amd64\Lib\abc.py&quot;, line 64, in &lt;module&gt; ModuleNotFoundError: No module named '_abc' </code></pre> <p>Why is the Anaconda Python looking for modules in the WinPython folder?</p> <p>The WinPython folder is not in my shell's PATH environment variable.</p> <p>If I delete the WinPython folder, the Anaconda python.exe works well.</p>
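One common cause of this symptom is a leftover `PYTHONHOME` or `PYTHONPATH` environment variable pointing at the WinPython folder (WinPython's registration tools can set these): Python consults those variables before `PATH` when locating its standard library, which would explain Anaconda's interpreter loading `abc.py` from WinPython. A quick diagnostic sketch to run from either interpreter (the exact values will differ per machine):

```python
import os
import sys

# Which interpreter is running, and where it believes its stdlib lives.
print("interpreter :", sys.executable)
print("stdlib root :", sys.prefix)

# Environment overrides that redirect the stdlib search.
for var in ("PYTHONHOME", "PYTHONPATH"):
    print(var, "=", os.environ.get(var))
```

If `PYTHONHOME` points at the WinPython directory, unsetting it (System Properties → Environment Variables on Windows) should let the Anaconda python.exe start normally.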
<python><python-3.x><windows><path><version>
2023-04-20 16:09:38
1
321
floupinette
76,066,125
16,389,095
Kivy/KivyMD: Add a method when the selection of a drop down item changes
<p>I developed an UI with Kivy / KivyMD with Python. The UI is very simple: when a button is pressed a DropDownItem is added to the layout. I need to add a method that is called every time the selection of the DropDownItem changes. Here is the code:</p> <pre><code>from kivy.lang import Builder from kivymd.app import MDApp from kivymd.uix.screen import MDScreen from kivymd.toast import toast from kivymd.uix.menu import MDDropdownMenu from kivymd.uix.dropdownitem.dropdownitem import MDDropDownItem from kivy.metrics import dp Builder.load_string( &quot;&quot;&quot; &lt;View&gt;: MDGridLayout: rows: 3 id: layout padding: 100, 50, 100, 50 spacing: 0, 50 MDRaisedButton: text: 'CREATE DDI' on_release: root.Button_CreateDDI__On_Click() MDRaisedButton: id: button_check disabled: True text: 'CHECK SELECTION' on_release: root.Button_CheckSelection_On_Click() &quot;&quot;&quot;) class View(MDScreen): def __init__(self, **kwargs): super(View, self).__init__(**kwargs) def Button_CreateDDI__On_Click(self): self.myDdi = MDDropDownItem() self.myDdi.text = 'SELECT POSITION' myMenu, scratch = self.Create_DropDown_Widget(self.myDdi, ['POS 1', 'POS 2', 'POS 3'], width=4) self.myDdi.on_release = myMenu.open self.myDdi.bind(on_select = self.DDI_Selection_Changed) self.ids.button_check.disabled = False self.ids.layout.add_widget(self.myDdi) def Button_CheckSelection_On_Click(self): toast('CURRENT VALUE: ' + self.myDdi.current_item) def DDI_Selection_Changed(self): toast('SELECTION CHANGED: ' + self.myDdi.current_item) def Create_DropDown_Widget(self, drop_down_item, item_list, width): items_collection = [ { &quot;viewclass&quot;: &quot;OneLineListItem&quot;, &quot;text&quot;: item_list[i], &quot;height&quot;: dp(56), &quot;on_release&quot;: lambda x = item_list[i]: self.Set_DropDown_Item(drop_down_item, menu, x), } for i in range(len(item_list)) ] menu = MDDropdownMenu(caller=drop_down_item, items=items_collection, width_mult=width) menu.bind() return menu, items_collection def 
Set_DropDown_Item(self, drop_down_item, menu, text_item): drop_down_item.set_item(text_item) menu.dismiss() class MainApp(MDApp): def __init__(self, **kwargs): super().__init__(**kwargs) self.View = View() def build(self): self.title = ' DROP DOWN ITEM ADDED DYNAMICALLY' return self.View if __name__ == '__main__': MainApp().run() </code></pre> <p>When the DropDownItem is added in the kv part, you can use the <strong>select</strong> event. Here the widget is added once the layout is already displayed. So I tried to bind the <strong>on_select</strong> event, but nothing occurred, neither an error nor an exception. How can I call a method every time the selection of a DropDownItem changes?</p>
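One pragmatic workaround, given that the selection is actually committed inside `Set_DropDown_Item` (that is where `set_item` runs), is to call the change handler there rather than binding `on_select`. This is an untested sketch of drop-in replacements for the two methods in `View` above:

```python
# Untested sketch: the selection changes exactly when set_item() is called,
# so notifying from here avoids relying on an on_select event.
def Set_DropDown_Item(self, drop_down_item, menu, text_item):
    drop_down_item.set_item(text_item)
    menu.dismiss()
    self.DDI_Selection_Changed()  # selection just changed; notify now

def DDI_Selection_Changed(self):
    toast('SELECTION CHANGED: ' + self.myDdi.current_item)
```

This also sidesteps the signature mismatch a Kivy binding would introduce, since bound callbacks receive extra arguments while `DDI_Selection_Changed` takes only `self`.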
<python><kivy><kivy-language><kivymd>
2023-04-20 16:01:20
1
421
eljamba
76,066,092
1,942,505
How to print fetched result in FastSQLite, there is no .cursor() like in SQLite3?
<p>I downloaded <a href="https://pypi.org/project/fastsqlite/" rel="nofollow noreferrer"><code>FastSQLite</code></a> for Python (<a href="https://github.com/perseoq/FastSQLite/blob/main/README.md" rel="nofollow noreferrer"><code>Readme</code></a>).<br> I created a database and a table, but I do not know how to print fetched results. In sqlite3 there is a <code>.cursor()</code> to get results; in FastSQLite there is no <code>.cursor()</code> method.</p> <pre><code>from fastsqlite import FastSQLite import pandas as pd db = FastSQLite() db.connect('file.db') db.create_table('gato', 'id int, rum varchar(300)') db.insert('gato', 'id, rum', '1,123') db.insert('gato', 'id, rum', '2,223') db.insert('gato', 'id, rum', '3,323') db.insert('gato', 'id, rum', '4,423') table=db.fetch_all('gato') print () print(&quot;===========================&quot;) print(pd.DataFrame(table)) print(&quot;===========================&quot;) print () </code></pre> <p>I checked the database. It is not empty! Output:</p> <pre><code>=========================== Empty DataFrame Columns: [] Index: [] =========================== </code></pre>
<python><sqlite><sqlite3-python>
2023-04-20 15:58:23
0
541
user1942505
76,065,940
512,652
Using a global Popen object from a Flask route
<p>I have a small Flask app, running with Gunicorn configured with one worker. At the app's initialization, it starts up a <code>subprocess.Popen</code> and hangs onto it in a global variable.</p> <pre class="lang-py prettyprint-override"><code>import subprocess def my_init(): ... global MY_SUBPROCESS MY_SUBPROCESS = subprocess.Popen([&quot;my_process&quot;, &quot;foo&quot;, &quot;bar&quot;]) </code></pre> <p>I'd like to have a route that reports on whether that global subprocess is still running.</p> <pre class="lang-py prettyprint-override"><code>@app.route(&quot;/poll-subprocess&quot;) def my_poll(): # expected None if the subprocess is still running, however it's always returning 0 return MY_SUBPROCESS.poll() </code></pre> <p>The route can see the global Popen object just fine, however the .poll call always returns <code>0</code>, even if the process is still running.</p> <p>I haven't found anything in the <code>subprocess</code> documentation that can explain this. I suppose there is a threading issue playing into this somehow?</p> <hr /> <p>edit: here is a reproducible example:</p> <pre class="lang-py prettyprint-override"><code>from subprocess import Popen from flask import Flask, jsonify import gunicorn.app.base # (copied from example https://docs.gunicorn.org/en/stable/custom.html) class MyApp(gunicorn.app.base.BaseApplication): def __init__(self, app, options=None): self.options = options or {} self.application = app super().__init__() def load_config(self): config = {key: value for key, value in self.options.items() if key in self.cfg.settings and value is not None} for key, value in config.items(): self.cfg.set(key.lower(), value) def load(self): return self.application # Spawn our global subprocess MY_PROCESS = Popen([&quot;sleep&quot;, &quot;999&quot;]) # Create the app app = Flask(__name__) @app.route(&quot;/&quot;) def hello(): &quot;&quot;&quot;Return some info about our global subprocess&quot;&quot;&quot; return jsonify({ &quot;obj&quot;: 
str(MY_PROCESS), &quot;pid&quot;: MY_PROCESS.pid, &quot;poll&quot;: MY_PROCESS.poll() }) # Preview the same info we'll have exposed on a route print({ &quot;obj&quot;: str(MY_PROCESS), &quot;pid&quot;: MY_PROCESS.pid, &quot;poll&quot;: MY_PROCESS.poll() }) # Launch the app MyApp(app).run() </code></pre> <p>Example output running it:</p> <pre><code>$ python ./demo.py {'obj': '&lt;subprocess.Popen object at 0x7fe2f87ed0f0&gt;', 'pid': 20135, 'poll': None} [2023-04-20 12:13:51 -0400] [20134] [INFO] Starting gunicorn 20.0.4 [2023-04-20 12:13:51 -0400] [20134] [INFO] Listening at: http://127.0.0.1:8000 (20134) [2023-04-20 12:13:51 -0400] [20134] [INFO] Using worker: sync [2023-04-20 12:13:51 -0400] [20138] [INFO] Booting worker with pid: 20138 </code></pre> <pre><code>$ curl http://127.0.0.1:8000/ {&quot;obj&quot;:&quot;&lt;subprocess.Popen object at 0x7fe2f87ed0f0&gt;&quot;,&quot;pid&quot;:20135,&quot;poll&quot;:0} </code></pre> <p>Note it's reporting the same python object and pid as the main thread did when we started the app, however now poll() looks like the process terminated. According to <code>ps</code>, it's still running:</p> <pre><code>$ ps -p 20135 -o '%p %a' PID COMMAND 20135 sleep 999 </code></pre>
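This behavior is consistent with gunicorn forking worker processes after the `Popen` was created: inside a worker, the sleep process is not the worker's child, so the `waitpid()` inside `poll()` fails with `ECHILD`, and `Popen.poll()` then falls back to reporting `0` (see CPython issue 15756) even though the process is still running. A minimal POSIX-only reproduction of that fallback (assumes Linux/macOS with a `sleep` binary):

```python
import os
import subprocess

# Parent spawns a long-running process; poll() correctly reports "still running".
proc = subprocess.Popen(["sleep", "30"])
assert proc.poll() is None

pid = os.fork()
if pid == 0:
    # Forked child (analogous to a gunicorn worker): `proc` is a copy, but the
    # sleep process is NOT this process's child. waitpid() raises ECHILD, and
    # poll() reports 0 despite sleep still running.
    os._exit(0 if proc.poll() == 0 else 1)

_, status = os.waitpid(pid, 0)
child_saw_zero = os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0
print("forked child saw poll() == 0:", child_saw_zero)

proc.terminate()
proc.wait()
```

One workaround may be to have the route probe the pid directly, e.g. `os.kill(MY_PROCESS.pid, 0)` inside a `try/except ProcessLookupError`, since `waitpid`-based `poll()` is only meaningful in the process that spawned the child.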
<python><flask><subprocess><gunicorn>
2023-04-20 15:42:09
1
19,227
ajwood
76,065,724
10,266,106
Numpy Broadcast Function Across All Array Columns At Once
<p>To preface, looping across multidimensional arrays in Numpy using ndenumerate to perform operations is inefficient, particularly when the array is massive (on the order of millions of indices). With this in mind, I would like to apply a function across all columns (j) of an array's row (i) at once, which I believe is the same principle as broadcasting in Numpy.</p> <p>Consider the following ndarray at row 0, which is of size <code>(2606, 8, 26)</code>. At each column j of this array, there are a total of 208 floating-point values across 8 separate arrays, which I'd like to flatten to one single array at column j. I've successfully executed this at a single index; the relevant code inside of a larger function to tackle this is as follows:</p> <pre><code>def lambdaloop(): # There Are 1228 Rows In This Array for index in range(0,1228,1): # Series of Stored Arrays That Define A &quot;Moving Window&quot;, Is Not Important To This Question segment = window[index, :] # &quot;Moving Window&quot; Partitions Values In A Separate ndarray 'stacked', Resultant Is Important To This Question inspect = stacked[segment[:,:,0], segment[:,:,1]] # Flattening The ndarray At Column 0. This Works Successfully, tolist() Could Also Be Used Here flat = inspect[0].flatten() </code></pre> <p>The result of <code>flat</code> is further below. I made a separate function with an embedded lambda function to pass to np.vectorize to broadcast this across all columns at once. I'm aware that np.vectorize is essentially a fancier looping mechanism, but other entries here indicate it can still be notably faster than moving across each column in a for loop.
With additions, this is what I assembled:</p> <pre><code>def broadcast(piece): flatten = lambda piece: piece.flatten() return flatten def lambdaloop(): # There Are 1228 Rows In This Array for index in range(0,1228,1): # Series of Stored Arrays That Define A &quot;Moving Window&quot;, Is Not Important To This Question segment = window[index, :] # &quot;Moving Window&quot; Partitions Values In A Separate ndarray 'stacked', Resultant Is Important To This Question inspect = stacked[segment[:,:,0], segment[:,:,1]] # Flattening The ndarray At Column 0. This Works Successfully, tolist() Could Also Be Used Here vfunc = np.vectorize(broadcast) vresult = vfunc(inspect) </code></pre> <p>Upon inspecting <code>vresult</code>, I find the following applied to every value in the ndarray: <code>&lt;function broadcast.&lt;locals&gt;.&lt;lambda&gt; at 0x7fddc342ed40&gt;</code>. How can I apply the broadcast function to return flatten arrays at each column j with a resultant shape (2606,208)?</p> <p>For clarity, the result of <code>flat</code> (and what I expect for each array to look like across all columns) looks as follows:</p> <pre><code>[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.46875 2.6875 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.78125 2.21875 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3.1640625 2.625 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.78125 2.21875 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3.6640625 1.09375 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 2.71875 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2.90625 2.15625 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.1875 2.4375 0. 0. 0. ] </code></pre>
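Two things seem to be at play here: `broadcast` returns the lambda object itself instead of calling it (hence the `<function ...>` values), and, if `inspect` really carries 8 × 26 = 208 values per row, no `np.vectorize` call is needed at all, since a single `reshape` flattens the trailing axes for every row at once. A sketch under that shape assumption (the array contents are stand-in values):

```python
import numpy as np

# Stand-in for `inspect`: 2606 rows, each holding 8 arrays of 26 values.
inspect = np.arange(2606 * 8 * 26, dtype=float).reshape(2606, 8, 26)

# Flatten the trailing two axes in one shot; no per-row Python loop needed.
flat_all = inspect.reshape(inspect.shape[0], -1)

print(flat_all.shape)  # (2606, 208)
```

For C-contiguous input this reshape is a view, so it costs no copy; each `flat_all[i]` equals `inspect[i].flatten()`.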
<python><arrays><numpy><numpy-ndarray><array-broadcasting>
2023-04-20 15:17:14
0
431
TornadoEric
76,065,718
8,207,701
Text Alignment Not Centered MoviePy
<p>I want to create a subtitle clip whose position on the video is slightly right of center, but whose text should be center-aligned.</p> <p>So here's my code:</p> <pre><code> generator = lambda txt, font_size: TextClip(txt, font='Arial-Rounded-MT-Bold', fontsize=font_size, color='white', align=&quot;center&quot;) subs = [(text, start_end, font_size) for (start_end, text, font_size) in words] subtitles = SubtitlesClip(subs, generator) subtitles = subtitles.set_pos((950, 'center')) </code></pre> <p>The position works fine and the subtitles show as they are supposed to, but the alignment of those text clips is to the left, even though I have added align = 'center' in the code. The alignment stays centered when I use the position like the following,</p> <pre><code>subtitles = subtitles.set_pos(('center', 'center')) </code></pre> <p>But it goes back to left alignment when I change the x position value.</p> <p>How do I fix this?</p>
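A possible explanation (untested sketch; MoviePy/ImageMagick behavior can vary by version): `align` positions the text within the TextClip's own box, and without an explicit `size` that box shrinks to fit the text, so the alignment has no visible effect; `set_pos((950, 'center'))` then left-anchors that tight box at x=950. Giving the generator a fixed-width caption box should let `align='center'` act within the box:

```python
# Untested sketch: the 600-px box width is illustrative, not from the original code.
generator = lambda txt, font_size: TextClip(
    txt,
    font='Arial-Rounded-MT-Bold',
    fontsize=font_size,
    color='white',
    method='caption',   # caption mode honors `size` and `align`
    align='center',
    size=(600, None),   # fixed box width; height computed automatically
)
```

With a fixed box width, `set_pos((950, 'center'))` places the box's left edge at 950 while the text stays centered inside it.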
<python><python-3.x><moviepy>
2023-04-20 15:16:44
1
1,216
Bucky
76,065,711
4,108,376
Converting py::array to Eigen::Ref from C++
<p>I'm using pybind for an interface to a program that uses Eigen internally to access a 2D array, and should be able to receive a Numpy array from python.</p> <p>I have a function like this:</p> <pre><code>void A::f(const Eigen::Ref&lt; Eigen::Matrix&lt;double, Eigen::Dynamic, 3, Eigen::RowMajor&gt; &gt;&amp;) // bound using m.def(&quot;f&quot;, &amp;A::f); </code></pre> <p>From Python it is called like this:</p> <pre><code># arr is a Nx3 array a.f(arr) # or, when arr is not of float64 type: a.f(np.asarray(arr, dtype=np.float64)) </code></pre> <p>if I'm correct, the latter call creates a temporary Numpy array, that persists in memory during the call to <code>f()</code>. And the Eigen::Ref&lt;&gt; will reference that memory, without additional copy.</p> <hr /> <p>But now the function instead has multiple such arguments, and there are multiple similar functions. So I instead have a <code>View</code> class, which holds pointers to the data, and <code>f</code> receives a <code>View</code> object:</p> <pre><code>struct View { const Eigen::Vector3d* ptr; std::size_t rows; View(const Eigen::Ref&lt; Eigen::Matrix&lt;double, Eigen::Dynamic, 3, Eigen::RowMajor&gt; &gt;&amp; arr) { assert(arr.rowStride()*sizeof(double) == sizeof(Eigen::Vector3d)); ptr = reinterpret_cast&lt;const Eigen::Vector3d*&gt;(arr.data()); rows = arr.rows(); } }; A::f(const View&amp;); // bound using py::class_&lt;View&gt;(m, &quot;View&quot;) .def(py::init&lt;const Eigen::Ref&lt;Eigen::Matrix&lt;double, Eigen::Dynamic, 3, Eigen::RowMajor&gt;&gt;&amp;&gt;()); py::class_&lt;A&gt;(m, &quot;A&quot;) .def(&quot;f&quot;, &amp;A::f); </code></pre> <p>This works fine if used from Python like this:</p> <pre><code># arr is a Nx3 array a.f(arr) # arr is not of float64 type arr_copy = np.asarray(arr, dtype=np.float64)) view = View(arr_copy) a.f(view) </code></pre> <p>However, it seems this usage does not work:</p> <pre><code># arr is a Nx3 array a.f(arr) view = View(np.asarray(arr, dtype=np.float64))) a.f(view) 
</code></pre> <p>...because the copy of <code>arr</code> no longer exists when <code>a.f()</code> is called.</p> <hr /> <p>One way to solve it seems to be to create a <code>View</code> subclass, that contains the <code>py::array</code> object from pybind11 (= ref-counted handle to the Python object):</p> <pre><code>struct PyView : View { py::array arr_; PyView(const py::array&amp; arr) : arr_(arr) // ensure that the Python object will persist during lifetime of PyView { View::set_array( ?? ); } } // where there is a function // View::set_array(const Eigen::Ref&lt;Eigen::Matrix&lt;double, Eigen::Dynamic, 3, Eigen::RowMajor&gt;&gt;&amp;) </code></pre> <p>Is there an API in Pybind11 to get a Eigen::Ref (or the Eigen::Map that would be passed to it), from an <code>py::array</code>, the way it would normally do automatically?</p> <p>Or alternately, is there a more elegant way of handling this? Such that ideally, there is only one place in the code that needs to take care of the details of the memory layout and set the <code>View::ptr</code>, <code>View::rows</code> values correctly.</p>
<python><c++><numpy><eigen><pybind11>
2023-04-20 15:16:12
0
9,230
tmlen
76,065,644
16,363,897
Pandas expanding average per group without changing the order of the rows
<p>I have the following dataframe &quot;df1&quot;:</p> <pre><code> a b week 9 47 19 9 36 44 10 29 46 10 -68 -37 10 -12 90 9 67 66 </code></pre> <p>I want to calculate the expanding average of each column, conditional on the index. This is the expected output:</p> <pre><code> a b week 9 47.0 19.0 9 41.5 31.5 10 29.0 46.0 10 -19.5 4.5 10 -17.0 33.0 9 50.0 43.0 </code></pre> <p>For example, the last value of column &quot;b&quot; (43) is the average of all values of column &quot;b&quot; of df1 where index = 9 (66, 44, 19).</p> <p>I tried the following but obviously didn't work:</p> <pre><code>df1[df1.index == df1.index].expanding().mean() </code></pre> <p>Thanks</p>
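One order-preserving approach (a sketch, not the only option): a per-group cumulative sum divided by a per-group cumulative count is exactly the expanding mean, and both `cumsum` and `cumcount` keep rows in their original order, unlike `groupby(...).expanding()`, which regroups them. Note the `.to_numpy()` step, which avoids index alignment on the duplicated index:

```python
import pandas as pd

# Rebuild df1 from the question, with `week` as a non-unique index.
df1 = pd.DataFrame(
    {"a": [47, 36, 29, -68, -12, 67], "b": [19, 44, 46, -37, 90, 66]},
    index=pd.Index([9, 9, 10, 10, 10, 9], name="week"),
)

gb = df1.groupby(level=0)
counts = (gb.cumcount() + 1).to_numpy()[:, None]  # 1-based position within group
result = gb.cumsum() / counts                     # expanding mean, original order

print(result)
```

Dividing by the raw NumPy column instead of the `cumcount` Series matters here: aligning a Series against the duplicated `week` index would otherwise blow up the rows.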
<python><pandas><dataframe>
2023-04-20 15:09:32
2
842
younggotti
76,065,543
4,045,275
Arranging 4 histogram plots in a 2x2 facet grid
<h1>The issue</h1> <p>I have a dataframe with two columns: color and values.</p> <p>Color can be red, yellow, green and black. For each of these 4 colors, I need to plot an histogram with the distribution of &quot;values&quot;. I would like the plots to be arranged in a 2x2 grid.</p> <h1>What I would like to do</h1> <p>I would like to automate the creation of the plots - with the FacetGrid function or some equivalent.</p> <p>In the examples I have seen, histogram are facetted by subsets of data, with one variable over the columns and one over the row. E.g here: <a href="https://seaborn.pydata.org/examples/faceted_histogram.html" rel="nofollow noreferrer">https://seaborn.pydata.org/examples/faceted_histogram.html</a> there are 3 species, 2 sexes, and 6 charts.</p> <h1>What I have tried but doesn't work</h1> <p>I have tried the FacetGrid function, but it produces 4 identical charts. It is clear I am doing something wrong when I define the FacetGrid or call the map() method, but I'm not sure what.</p> <pre><code>import numpy as np import pandas as pd import matplotlib import matplotlib.pyplot as plt import seaborn as sns df=pd.DataFrame() df['values'] = np.hstack([np.ones(25), np.arange(200,225), np.ones(25) *5, np.linspace(500,550,25)]) df['category'] = np.repeat(['red','yellow','green','black'],25) fig = sns.FacetGrid(df, col='category', col_wrap=2) fig.map(sns.histplot, data = df, x='values', stat='density' ) </code></pre> <p>In my data, there is only one categorical variable, not 2. This categorical variable can take 4 values and I'd like the plots in a 2x2 grid.</p> <h1>What I have got to work</h1> <p>I can create a figure with 2x2 subplots, and manually populate each of the 4 axes. 
It works, but it's neither elegant nor efficient</p> <pre><code>import numpy as np import pandas as pd import matplotlib import matplotlib.pyplot as plt import seaborn as sns df=pd.DataFrame() df['values'] = np.hstack([np.ones(25), np.arange(200,225), np.ones(25) *5, np.linspace(500,550,25)]) df['category'] = np.repeat(['red','yellow','green','black'],25) fig2, ax = plt.subplots(2,2) categories = np.array([['red','yellow'],['green', 'black']]) for r in range(2): for c in range(2): cat = categories[r,c] print(cat) sns.histplot(df.query(&quot;category == @cat&quot;), x=&quot;values&quot;, kde= True, stat='density', bins =10, ax = ax[r,c]) plt.tight_layout() plt.show() </code></pre>
<python><matplotlib><seaborn><facet><facet-grid>
2023-04-20 14:59:44
0
9,100
Pythonista anonymous
76,065,488
19,130,803
Postgres TRUNCATE command: how to check it executed successfully in Python
<p>I am working on a Python web app. I am using a <code>postgres</code> database through <code>psycopg2</code>. I am making a table empty using the command</p> <pre><code>TRUNCATE TABLE some_table_name RESTART IDENTITY; </code></pre> <p>I have an <code>IDENTITY</code> column in the table. I read that it returns the <code>identity</code> number. I have a <code>cursor</code> object but didn't find any property for this.</p> <p>How can I check that the command executed successfully?</p>
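In psycopg2 the practical success signal is that `execute()` did not raise and the transaction committed; `TRUNCATE` is not a query, so there is nothing to fetch, though `cursor.statusmessage` reports the completed command tag. (As far as I can tell, `RESTART IDENTITY` only resets the identity sequence; it does not return a number.) An untested sketch, assuming an open connection `con`:

```python
# Untested sketch: requires a live PostgreSQL connection `con`.
import psycopg2

try:
    with con:  # commits on success, rolls back on error
        with con.cursor() as cur:
            cur.execute("TRUNCATE TABLE some_table_name RESTART IDENTITY;")
            print(cur.statusmessage)  # 'TRUNCATE TABLE' when it succeeded
except psycopg2.Error as exc:
    print("truncate failed:", exc)
```

If no exception reaches the `except` block, the truncate was executed and committed.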
<python><postgresql>
2023-04-20 14:54:22
0
962
winter
76,065,485
12,870,628
sqlite3.Row does not make fetchall() return dict-like rows instead of tuples
<pre><code>con = sqlite3.connect(&quot;funds.db&quot;) with con: cur = con.cursor() con.row_factory = sqlite3.Row funds = cur.execute(&quot;SELECT * from funds&quot;).fetchall() print(funds) </code></pre> <p>Since I set <code>con.row_factory = sqlite3.Row</code>, I expect <code>funds</code> to contain dictionary-like rows, as <code>funds</code> is a table. However, when I run .fetchall() even after this, I always get a tuple. Why is this the case, if that line of code is supposed to make fetchall() return a dictionary instead of a tuple?</p>
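The cursor appears to pick up `row_factory` when it is created, so setting `con.row_factory` after `con.cursor()` has no effect on that cursor; setting it first (or setting `cur.row_factory` directly) fixes it. Note also that `sqlite3.Row` is not a dict, but it supports key access and converts via `dict()`. A runnable sketch against an in-memory database with stand-in columns:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.row_factory = sqlite3.Row   # set BEFORE creating the cursor
cur = con.cursor()

# Stand-in schema/data; the real funds table will differ.
cur.execute("CREATE TABLE funds (id INTEGER, name TEXT)")
cur.execute("INSERT INTO funds VALUES (1, 'alpha')")

funds = cur.execute("SELECT * FROM funds").fetchall()
print(funds[0]["name"])   # key access works on sqlite3.Row
print(dict(funds[0]))     # convert to a real dict when needed
```

Alternatively, leave the connection alone and set `cur.row_factory = sqlite3.Row` on the existing cursor.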
<python><sqlite3-python>
2023-04-20 14:54:03
1
495
Justin Chee
76,064,952
9,275,146
Matplotlib Histogram has weird gaps in the data
<p>I have the following dataframe that consists of 50 rows where each row is a year and month formatted as datetime64[ns], with an Order_ct that is a count of number of orders that month formatted as int64, and two other fields.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np data =\ {0: {'year_month': pd.Timestamp('2019-02-01 00:00:00'), 'Order_ct': 234, 'Order_ct_Prior_Year': np.nan, 'Order_Perc_Chng': np.nan}, 1: {'year_month': pd.Timestamp('2019-03-01 00:00:00'), 'Order_ct': 392, 'Order_ct_Prior_Year': np.nan, 'Order_Perc_Chng': np.nan}, 2: {'year_month': pd.Timestamp('2019-04-01 00:00:00'), 'Order_ct': 379, 'Order_ct_Prior_Year': np.nan, 'Order_Perc_Chng': np.nan}, 3: {'year_month': pd.Timestamp('2019-05-01 00:00:00'), 'Order_ct': 356, 'Order_ct_Prior_Year': np.nan, 'Order_Perc_Chng': np.nan}, 4: {'year_month': pd.Timestamp('2019-06-01 00:00:00'), 'Order_ct': 363, 'Order_ct_Prior_Year': np.nan, 'Order_Perc_Chng': np.nan}, 5: {'year_month': pd.Timestamp('2019-07-01 00:00:00'), 'Order_ct': 464, 'Order_ct_Prior_Year': np.nan, 'Order_Perc_Chng': np.nan}, 6: {'year_month': pd.Timestamp('2019-08-01 00:00:00'), 'Order_ct': 355, 'Order_ct_Prior_Year': np.nan, 'Order_Perc_Chng': np.nan}, 7: {'year_month': pd.Timestamp('2019-09-01 00:00:00'), 'Order_ct': 458, 'Order_ct_Prior_Year': np.nan, 'Order_Perc_Chng': np.nan}, 8: {'year_month': pd.Timestamp('2019-10-01 00:00:00'), 'Order_ct': 379, 'Order_ct_Prior_Year': np.nan, 'Order_Perc_Chng': np.nan}, 9: {'year_month': pd.Timestamp('2019-11-01 00:00:00'), 'Order_ct': 379, 'Order_ct_Prior_Year': np.nan, 'Order_Perc_Chng': np.nan}, 10: {'year_month': pd.Timestamp('2019-12-01 00:00:00'), 'Order_ct': 381, 'Order_ct_Prior_Year': np.nan, 'Order_Perc_Chng': np.nan}, 11: {'year_month': pd.Timestamp('2020-01-01 00:00:00'), 'Order_ct': 399, 'Order_ct_Prior_Year': np.nan, 'Order_Perc_Chng': np.nan}, 12: {'year_month': pd.Timestamp('2020-02-01 00:00:00'), 'Order_ct': 345, 'Order_ct_Prior_Year': 
234.0, 'Order_Perc_Chng': 47.43589743589743}, 13: {'year_month': pd.Timestamp('2020-03-01 00:00:00'), 'Order_ct': 306, 'Order_ct_Prior_Year': 392.0, 'Order_Perc_Chng': -21.93877551020408}, 14: {'year_month': pd.Timestamp('2020-04-01 00:00:00'), 'Order_ct': 208, 'Order_ct_Prior_Year': 379.0, 'Order_Perc_Chng': -45.11873350923483}, 15: {'year_month': pd.Timestamp('2020-05-01 00:00:00'), 'Order_ct': 275, 'Order_ct_Prior_Year': 356.0, 'Order_Perc_Chng': -22.752808988764045}, 16: {'year_month': pd.Timestamp('2020-06-01 00:00:00'), 'Order_ct': 408, 'Order_ct_Prior_Year': 363.0, 'Order_Perc_Chng': 12.396694214876034}, 17: {'year_month': pd.Timestamp('2020-07-01 00:00:00'), 'Order_ct': 428, 'Order_ct_Prior_Year': 464.0, 'Order_Perc_Chng': -7.758620689655173}, 18: {'year_month': pd.Timestamp('2020-08-01 00:00:00'), 'Order_ct': 377, 'Order_ct_Prior_Year': 355.0, 'Order_Perc_Chng': 6.197183098591549}, 19: {'year_month': pd.Timestamp('2020-09-01 00:00:00'), 'Order_ct': 431, 'Order_ct_Prior_Year': 458.0, 'Order_Perc_Chng': -5.895196506550218}, 20: {'year_month': pd.Timestamp('2020-10-01 00:00:00'), 'Order_ct': 372, 'Order_ct_Prior_Year': 379.0, 'Order_Perc_Chng': -1.8469656992084433}, 21: {'year_month': pd.Timestamp('2020-11-01 00:00:00'), 'Order_ct': 285, 'Order_ct_Prior_Year': 379.0, 'Order_Perc_Chng': -24.80211081794195}, 22: {'year_month': pd.Timestamp('2020-12-01 00:00:00'), 'Order_ct': 341, 'Order_ct_Prior_Year': 381.0, 'Order_Perc_Chng': -10.498687664041995}, 23: {'year_month': pd.Timestamp('2021-01-01 00:00:00'), 'Order_ct': 383, 'Order_ct_Prior_Year': 399.0, 'Order_Perc_Chng': -4.010025062656641}, 24: {'year_month': pd.Timestamp('2021-02-01 00:00:00'), 'Order_ct': 407, 'Order_ct_Prior_Year': 345.0, 'Order_Perc_Chng': 17.971014492753625}, 25: {'year_month': pd.Timestamp('2021-03-01 00:00:00'), 'Order_ct': 409, 'Order_ct_Prior_Year': 306.0, 'Order_Perc_Chng': 33.66013071895425}, 26: {'year_month': pd.Timestamp('2021-04-01 00:00:00'), 'Order_ct': 366, 
'Order_ct_Prior_Year': 208.0, 'Order_Perc_Chng': 75.96153846153845}, 27: {'year_month': pd.Timestamp('2021-05-01 00:00:00'), 'Order_ct': 396, 'Order_ct_Prior_Year': 275.0, 'Order_Perc_Chng': 44.0}, 28: {'year_month': pd.Timestamp('2021-06-01 00:00:00'), 'Order_ct': 365, 'Order_ct_Prior_Year': 408.0, 'Order_Perc_Chng': -10.53921568627451}, 29: {'year_month': pd.Timestamp('2021-07-01 00:00:00'), 'Order_ct': 409, 'Order_ct_Prior_Year': 428.0, 'Order_Perc_Chng': -4.439252336448598}, 30: {'year_month': pd.Timestamp('2021-08-01 00:00:00'), 'Order_ct': 353, 'Order_ct_Prior_Year': 377.0, 'Order_Perc_Chng': -6.36604774535809}, 31: {'year_month': pd.Timestamp('2021-09-01 00:00:00'), 'Order_ct': 364, 'Order_ct_Prior_Year': 431.0, 'Order_Perc_Chng': -15.54524361948956}, 32: {'year_month': pd.Timestamp('2021-10-01 00:00:00'), 'Order_ct': 322, 'Order_ct_Prior_Year': 372.0, 'Order_Perc_Chng': -13.440860215053762}, 33: {'year_month': pd.Timestamp('2021-11-01 00:00:00'), 'Order_ct': 316, 'Order_ct_Prior_Year': 285.0, 'Order_Perc_Chng': 10.87719298245614}, 34: {'year_month': pd.Timestamp('2021-12-01 00:00:00'), 'Order_ct': 328, 'Order_ct_Prior_Year': 341.0, 'Order_Perc_Chng': -3.812316715542522}, 35: {'year_month': pd.Timestamp('2022-01-01 00:00:00'), 'Order_ct': 306, 'Order_ct_Prior_Year': 383.0, 'Order_Perc_Chng': -20.10443864229765}, 36: {'year_month': pd.Timestamp('2022-02-01 00:00:00'), 'Order_ct': 283, 'Order_ct_Prior_Year': 407.0, 'Order_Perc_Chng': -30.46683046683047}, 37: {'year_month': pd.Timestamp('2022-03-01 00:00:00'), 'Order_ct': 246, 'Order_ct_Prior_Year': 409.0, 'Order_Perc_Chng': -39.85330073349633}, 38: {'year_month': pd.Timestamp('2022-04-01 00:00:00'), 'Order_ct': 302, 'Order_ct_Prior_Year': 366.0, 'Order_Perc_Chng': -17.48633879781421}, 39: {'year_month': pd.Timestamp('2022-05-01 00:00:00'), 'Order_ct': 333, 'Order_ct_Prior_Year': 396.0, 'Order_Perc_Chng': -15.909090909090908}, 40: {'year_month': pd.Timestamp('2022-06-01 00:00:00'), 'Order_ct': 293, 
'Order_ct_Prior_Year': 365.0, 'Order_Perc_Chng': -19.726027397260275}, 41: {'year_month': pd.Timestamp('2022-07-01 00:00:00'), 'Order_ct': 300, 'Order_ct_Prior_Year': 409.0, 'Order_Perc_Chng': -26.65036674816626}, 42: {'year_month': pd.Timestamp('2022-08-01 00:00:00'), 'Order_ct': 354, 'Order_ct_Prior_Year': 353.0, 'Order_Perc_Chng': 0.28328611898017}, 43: {'year_month': pd.Timestamp('2022-09-01 00:00:00'), 'Order_ct': 394, 'Order_ct_Prior_Year': 364.0, 'Order_Perc_Chng': 8.241758241758241}, 44: {'year_month': pd.Timestamp('2022-10-01 00:00:00'), 'Order_ct': 389, 'Order_ct_Prior_Year': 322.0, 'Order_Perc_Chng': 20.80745341614907}, 45: {'year_month': pd.Timestamp('2022-11-01 00:00:00'), 'Order_ct': 371, 'Order_ct_Prior_Year': 316.0, 'Order_Perc_Chng': 17.405063291139243}, 46: {'year_month': pd.Timestamp('2022-12-01 00:00:00'), 'Order_ct': 333, 'Order_ct_Prior_Year': 328.0, 'Order_Perc_Chng': 1.524390243902439}, 47: {'year_month': pd.Timestamp('2023-01-01 00:00:00'), 'Order_ct': 338, 'Order_ct_Prior_Year': 306.0, 'Order_Perc_Chng': 10.457516339869281}, 48: {'year_month': pd.Timestamp('2023-02-01 00:00:00'), 'Order_ct': 354, 'Order_ct_Prior_Year': 283.0, 'Order_Perc_Chng': 25.08833922261484}, 49: {'year_month': pd.Timestamp('2023-03-01 00:00:00'), 'Order_ct': 379, 'Order_ct_Prior_Year': 246.0, 'Order_Perc_Chng': 54.0650406504065}} df_test = pd.DataFrame.from_dict(data, orient='index') </code></pre> <p>I tried creating a histogram with the following code but I am getting some weird gaps in the bars and the values don't appear to be matching the bars. 
For example the first bar is above 600 but should be 234 matching the Order_ct for '2019-02-01'.</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt fig, ax1 = plt.subplots(figsize=(12,6)) # Set background color of figure fig.patch.set_facecolor('#fffcf2') ax1.set_facecolor('#fffcf2') # Create histogram on first axis ax1.hist(df_test['year_month'], bins=50, weights=df_test['Order_ct'], color='#3d91e6', alpha=0.7) # Set labels and title for first y-axis ax1.set_xlabel('Order Month') ax1.set_ylabel('Order Count') # Set x-axis ticks to be the unique values in the 'year_month' column ax1.set_xticks(df_test['year_month'].unique()) # Rotate x-axis tick labels by 45 degrees plt.xticks(rotation=90) # Show plot plt.show() </code></pre> <p><a href="https://i.sstatic.net/qdPXG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qdPXG.png" alt="Example of the output" /></a></p>
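A hedged sketch of one way to avoid the binning artifacts (shown on a small made-up subset of the data): `ax.hist()` groups the timestamps into 50 equal-width bins, so several months can fall into one bin and their weights get summed (which is why the first bar exceeds 600), while empty bins leave gaps. Drawing one bar per row with `ax.bar()` sidesteps both problems.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import pandas as pd

# a small stand-in for df_test; the real frame has one row per month
df = pd.DataFrame({
    "year_month": pd.date_range("2019-02-01", periods=4, freq="MS"),
    "Order_ct": [234, 416, 355, 400],
})

fig, ax = plt.subplots(figsize=(12, 6))
# one bar per row: no binning, so bar heights match Order_ct exactly
ax.bar(df["year_month"].dt.strftime("%Y-%m"), df["Order_ct"],
       color="#3d91e6", alpha=0.7)
ax.set_xlabel("Order Month")
ax.set_ylabel("Order Count")
ax.tick_params(axis="x", rotation=90)
```

With the full 48-row frame the same call produces one correctly sized bar per month, so the gaps disappear and the first bar is 234.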
<python><pandas><matplotlib><bar-chart>
2023-04-20 13:59:36
0
379
Kreitz Gigs
76,064,896
5,212,614
Can we assign a text index to a list object, rather than a numeric index?
<p>I am merging a bunch of word docs into a list, like this.</p> <pre><code>import os import docx2txt import warnings warnings.filterwarnings('ignore') # all resumes ext = ('.docx') resume_path = 'C:\\Users\\resumes\\' resumes = [] # load the data for files in os.listdir(resume_path): my_text = docx2txt.process(resume_path+files) resumes.append(files + '----' + my_text + '--END_OF_RESUME--') print(len(resumes)) </code></pre> <p>I'm getting each file name from 'files'. Now, I can easily find these items in the list, and each looks something like this...</p> <pre><code>aaron.docx----, more text, etc., etc. sam.docx----, more text, etc., etc. henry.docx----, more text, etc., etc. </code></pre> <p>Instead of an index 0, 1, 2, etc., how can I create an index like aaron.docx, sam.docx, henry.docx, etc? Or, should the list be converted into a dataframe or a dictionary, and then I'll have indexes? Ultimately, I want to merge two lists together. I tried to zip two lists together and create a dictionary from this, but when I zipped a list with 5 items and a list with 3 items, the final dictionary had just 3 items and 2 were dropped.</p>
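For this use case a dict is the natural container: the "indexes" become the filenames themselves, and dict merging keeps every key, unlike `zip()`, which truncates to the shortest input and silently drops the extra items. A minimal sketch with plain strings standing in for the docx2txt output:

```python
# filenames as keys instead of positions 0, 1, 2, ...
resumes = {}
for fname, text in [("aaron.docx", "text a"), ("sam.docx", "text b")]:
    resumes[fname] = text

print(resumes["aaron.docx"])  # direct lookup by filename: text a

# merging two dicts of different sizes keeps every key; with zip()
# the extra entries would have been dropped
extra = {"henry.docx": "text c", "aaron.docx": "updated a"}
merged = {**resumes, **extra}  # later dict wins on duplicate keys
```

For keeping unmatched items when pairing two lists of different lengths, `itertools.zip_longest` is the stdlib alternative to `zip`.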
<python><python-3.x><list>
2023-04-20 13:54:03
1
20,492
ASH
76,064,880
5,852,692
Converting some elements of a python list and preserving the rest
<p>I am looking for an elegant way to convert some elements of a list and preserve the rest.</p> <p>Consider the following list:</p> <pre><code>list_ = [0, 0, 1, 1, 0, 0] </code></pre> <p>I would like to convert <code>list_</code> to <code>[0, 1, 1, 1, 0, 1]</code>. Of course, I can write a for loop with some if-statements like:</p> <pre><code>list_ = [1 if i == 1 or i == 5 else list_[i] for i in range(len(list_))] </code></pre> <p>However, I am looking for a basic way, which copies the elements where the values are equal to <code>_</code>, and converts the rest to the given value (in this case 1) like:</p> <pre><code>list_ = [_, 1, _, _, _, 1] in list_ </code></pre> <p>Is there a Pythonic way to do that, or should I write a function for it?</p> <p>Additionally, sometimes the converted elements are not equal to 0, but for example 'x'.</p> <p>E.g.:</p> <pre><code>list_ = [0, 0, 1, 1, 0, 0] # some magic functions etc... for converting 1st element to 'x' and 5th to 2 magic([_, 'x', _, _, _, 2], list_) in&gt;&gt;&gt; list_ out&gt;&gt; [0, 'x', 1, 1, 0, 2] </code></pre>
<python><list><converters>
2023-04-20 13:52:20
2
1,588
oakca
76,064,867
1,354,354
Python Sqlparse: How to get the boolean status if sql query is not in correct format
<p>I have a use case where I need to check the format of the SQL query before sending it to the database for execution.</p> <p>If the SQL query is not in the correct format, sqlparse should return a false status; if it is in the correct format, it should return a true status.</p> <p>I did not see any function in the sqlparse documentation for this.</p> <p>Example of my use case:</p> <p><strong>This should return true:</strong></p> <pre><code>select count(users), department from usertable group by department </code></pre> <p><strong>This should return false:</strong></p> <pre><code>select count(users), department, location from usertable group by department </code></pre>
<python><sql-parser>
2023-04-20 13:50:52
0
808
Rahul Neekhra
76,064,826
601,862
How to find out the grids which have the path inside
<p><a href="https://i.sstatic.net/NrnVr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NrnVr.png" alt="demo image" /></a></p> <p>As you can see, there is a 10 by 10 grid. The cells are stored in a variable <code>matrix</code>.</p> <p>The <code>matrix</code> stores a 10 by 10 two-dimensional array. In each element, the data structure looks like:</p> <p>{ &quot;distance&quot;: distance, &quot;topLeft&quot;: (xmin, ymin), &quot;bottomRight&quot;: (xmax, ymax) }</p> <p>We also know X1(x1,y1), X2(x2, y2).</p> <p>Question:</p> <p>How can I find out all grids which have the path (X1 to X2) inside?</p> <p>E.g. <code>matrix[0][7]</code> has a part of the path, thus it should be included in the result array.</p> <p><strong>Update 1</strong></p> <p>One thing to mention: this code will be running during a video stream on an RPi, so I would prefer an algorithm that does not need to go through each pixel to figure out whether a cell includes the path or not.</p> <p><strong>Update 2:</strong></p> <p>One important thing I need to mention: X1 and X2 are points in a 224 by 224 image.</p> <p>I'm now using <a href="https://en.m.wikipedia.org/wiki/Bresenham%27s_line_algorithm" rel="nofollow noreferrer">Bresenham's line algorithm</a>. The code generates x, y coordinates on my image, but what I really need to know is which grid cells are crossed by the line.
So how can I utilise Bresenham's line algorithm to get the crossed grid cells?</p> <p><strong>Update 3</strong></p> <p>Here is the Python code I implemented for <a href="https://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm" rel="nofollow noreferrer">Bresenham's line algorithm</a>:</p> <pre><code> def _find_crossed_cells(self, point0, point1, matrix) -&gt; List[tuple]: path_cells = [] x0, y0 = point0 x1, y1 = point1 dx = abs(x1 - x0) sx = 1 if x0 &lt; x1 else -1 dy = -abs(y1 - y0) sy = 1 if y0 &lt; y1 else -1 error = dx + dy while True: path_cells.append((x0, y0)) if x0 == x1 and y0 == y1: break e2 = 2 * error if e2 &gt;= dy: if x0 == x1: break error = error + dy x0 = x0 + sx if e2 &lt;= dx: if y0 == y1: break error += dx y0 += sy return path_cells </code></pre> <p>As you can see, this code can only find a list of x, y coordinates (224 by 224), but I need to find a list of crossed cells (10 by 10 coordinates). So how can I achieve this?</p>
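A hedged sketch of one way to go from the 224x224 pixel coordinates that the Bresenham routine already produces to 10x10 grid cells: divide each coordinate by the cell size (224 / 10 = 22.4 pixels) and de-duplicate. The function name and signature below are made up for illustration:

```python
def cells_from_pixels(pixel_path, img_size=224, grid=10):
    """Map Bresenham pixel coordinates to the 10x10 cells they cross."""
    cell_size = img_size / grid  # 22.4 pixels per cell here
    seen, cells = set(), []
    for x, y in pixel_path:
        # integer division gives the cell index; clamp in case x == img_size
        c = (min(int(x // cell_size), grid - 1),
             min(int(y // cell_size), grid - 1))
        if c not in seen:  # keep each crossed cell once, in path order
            seen.add(c)
            cells.append(c)
    return cells

print(cells_from_pixels([(0, 0), (10, 10), (30, 40)]))  # [(0, 0), (1, 1)]
```

Because the cells are much coarser than the pixels, an alternative that avoids per-pixel work entirely is to rescale X1 and X2 into 10x10 cell coordinates first and run Bresenham directly on those; that is cheaper but can miss cells the line only clips at a corner, so the per-pixel mapping above is the safer variant.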
<python><algorithm>
2023-04-20 13:45:40
0
7,147
Franva
76,064,820
4,497,972
How to create new column in first pandas dataframe using index of second pandas dataframe based on certain conditions in python?
<p>Let me describe the problem using a simple example.</p> <p>Let's say we have df1 and df2 created as follows:</p> <pre><code>df1 = pd.DataFrame({'month': [1, 4, 7, 10], 'year': [2012, 2014, 2013, 2014], 'sale': [55, 40, 84, 31]}) df2 = pd.DataFrame({'month1': [2,3,4,1, 4, 7, 10], 'year1': [2012,2012,2012,2012, 2014, 2013, 2014], 'sale1': [34,35,36,55, 40, 84, 31]}) df2 = df2.set_index(pd.Index([100, 200, 300, 411,444,415,416])) </code></pre> <p>Now df1 and df2 look like:</p> <pre><code> month year sale 0 1 2012 55 1 4 2014 40 2 7 2013 84 3 10 2014 31 month1 year1 sale1 100 2 2012 34 200 3 2012 35 300 4 2012 36 411 1 2012 55 444 4 2014 40 415 7 2013 84 416 10 2014 31 </code></pre> <p>Now for each row of df1, we find the index of df2 such that df1.month=df2.month1, df1.year=df2.year1, df1.sale=df2.sale1. For the first row of df1, the index we find in df2 is 411. We do this for all values in df1 and create a new column in df1 which stores this index.</p> <p>The final df1 would look like:</p> <pre><code> month year sale index_result 0 1 2012 55 411 1 4 2014 40 444 2 7 2013 84 415 3 10 2014 31 416 </code></pre> <p>Here,</p> <ol> <li>we cannot guarantee that only one unique index of df2 matches the condition. In that case, we take the first value only.</li> <li>column names between the two dataframes would be different (I have just used the suffix 1 here). This need not be too much of a concern as I can create temporary dataframes with new column names so that the column names match. But both dataframes have multiple other columns having different names.</li> </ol> <p>Since I come from a C++ background, I can implement a loop solution, but it would be very inefficient as df1 in my case has thousands of rows and df2 has millions of rows.</p> <p>I am looking for an efficient solution in terms of time.
I think I have enough RAM on my system, so memory should not be an issue.</p> <p>If asking for a full solution is too much, I would appreciate any leads which can lead me to an efficient solution.</p> <p>I could not find any similar query on Stack Overflow. If somebody can point to a duplicate query, that would also be helpful.</p>
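A hedged sketch of a vectorized approach using `merge` (no Python-level loop): expose df2's index as a column, drop duplicate key combinations so only the first match survives, and left-join on the three key columns, aligning names via `rename`:

```python
import pandas as pd

df1 = pd.DataFrame({'month': [1, 4, 7, 10],
                    'year': [2012, 2014, 2013, 2014],
                    'sale': [55, 40, 84, 31]})
df2 = pd.DataFrame({'month1': [2, 3, 4, 1, 4, 7, 10],
                    'year1': [2012, 2012, 2012, 2012, 2014, 2013, 2014],
                    'sale1': [34, 35, 36, 55, 40, 84, 31]},
                   index=[100, 200, 300, 411, 444, 415, 416])

lookup = (df2.rename(columns={'month1': 'month', 'year1': 'year', 'sale1': 'sale'})
             .reset_index()                            # index becomes column 'index'
             .rename(columns={'index': 'index_result'})
             .drop_duplicates(subset=['month', 'year', 'sale']))  # keep first match

result = df1.merge(lookup, on=['month', 'year', 'sale'], how='left')
print(result['index_result'].tolist())  # [411, 444, 415, 416]
```

The join is hash-based inside pandas, so it should scale to a df2 with millions of rows far better than a nested loop; `how='left'` keeps df1's rows (and order) even when no match exists.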
<python><pandas><dataframe>
2023-04-20 13:44:37
2
535
lonstud
76,064,476
4,495,790
How to get the maximum value per row by top-level group in a multilevel Pandas DataFrame?
<p>I have the following multilevel DataFrame in Pandas:</p> <pre><code>A B a b a b ---------- 1 2 3 4 2 3 3 2 7 2 1 0 </code></pre> <p>Now I would like to get the maximum values in each row per level-0 group like this:</p> <pre><code>A B ---- 2 4 3 3 7 1 </code></pre> <p>What is the proper query? <code>df.groupby(level=0).apply(max)</code> works only row-wise, not column-wise.</p>
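A sketch that groups along the columns rather than the rows; transposing first avoids the `groupby(..., axis=1)` keyword, which is deprecated in recent pandas:

```python
import pandas as pd

cols = pd.MultiIndex.from_product([['A', 'B'], ['a', 'b']])
df = pd.DataFrame([[1, 2, 3, 4],
                   [2, 3, 3, 2],
                   [7, 2, 1, 0]], columns=cols)

# group the *columns* by their level-0 label and take the max per row
result = df.T.groupby(level=0).max().T
print(result)
#    A  B
# 0  2  4
# 1  3  3
# 2  7  1
```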
<python><pandas>
2023-04-20 13:11:55
1
459
Fredrik
76,064,417
6,623,277
TKInter widgets having different width even after setting same width
<p>I am trying to build a small window in tkinter, but I observed that different widgets take on different widths even when set with the same width. Here's the output that I can see on running the code:</p> <p><a href="https://i.sstatic.net/k6lld.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/k6lld.png" alt="output" /></a></p> <p>The black widget in the first row is an Entry widget declared as below <strong>(width=30)</strong>:</p> <pre><code>entry=Entry(root,width=30,font=('arial',14,'bold'),bd=0,bg='black',fg='white') entry.grid(row=0,column=0,padx=0,pady=0,ipadx=0,ipady=0,columnspan=10) </code></pre> <p>The colorful widgets in the second row are Label widgets defined as below <strong>(width=3 * 10 widgets = 30)</strong>:</p> <pre><code>label=Label(root,width=3,font=('arial',14,'bold'),bg=&lt;different_colors&gt;) label.grid(row=1,column=&lt;increasing columns&gt;,padx=0,pady=0,ipadx=0,ipady=0) </code></pre> <p>The purple widget in the third row is again a Label widget defined as below <strong>(width=30)</strong>:</p> <pre><code>biglabel=Label(root,width=30,font=('arial',14,'bold'),bg='purple') biglabel.grid(row=2,column=0,padx=0,pady=0,ipadx=0,ipady=0,columnspan=10) </code></pre> <p>As per the widths mentioned, shouldn't all 3 rows be of the same width? But as seen in the output image, all 3 rows have different widths.</p> <p>Questions:</p> <ol> <li>Can someone please explain the reason for this behavior?</li> <li>How can I make different widgets follow the grid throughout the app, or what should I use instead of width to make them the same size deterministically? I need to add widgets based on what is typed in the entry and don't want them to have different sizes. Right now the window expands to accommodate the second row when that row is added <em>after</em> typing something in the entry.
Interactive output can be seen by executing my code online here: <a href="https://replit.com/@KCKotak/TKinter-experiment#main.py" rel="nofollow noreferrer">https://replit.com/@KCKotak/TKinter-experiment#main.py</a></li> </ol> <p>Full code example to test:</p> <pre><code>from tkinter import * root=Tk() def add_text_label(r=0,c=0,bgcolor='white'): text_label=Label(root, width=3, font=('arial',14,'bold'), bg=bgcolor) text_label.grid(row=r,column=c,padx=0,pady=0,ipadx=0,ipady=0) root.geometry('-15-35') root.title('experiment') root.config(bg='white') # first row: black entry as seen in output image. **width=30** text_entry=Entry(root,width=30,font=('arial',14,'bold'),bd=0,bg='black',fg='white') text_entry.grid(row=0,column=0,padx=0, pady=0,ipadx=0,ipady=0,columnspan=10) # second row: colorful labels. **width=3 x 10 = 30** add_text_label(1,0,'red') add_text_label(1,1,'blue') add_text_label(1,2,'yellow') add_text_label(1,3,'orange') add_text_label(1,4,'green') add_text_label(1,5,'red') add_text_label(1,6,'blue') add_text_label(1,7,'yellow') add_text_label(1,8,'orange') add_text_label(1,9,'green') # third row: purple label. **width=30** big_label=Label(root, width=30, font=('arial',14,'bold'), bg='purple') big_label.grid(row=2,column=0,padx=0, pady=0,ipadx=0,ipady=0,columnspan=10) root.mainloop() </code></pre>
<python><python-3.x><tkinter><tkinter-entry><tkinter-layout>
2023-04-20 13:06:06
1
2,077
KCK
76,064,376
6,411,540
Construct an object using ruamel
<p>I want to create some objects based on arguments in a YAML file. Everything was working fine, but now I want to supply the class with some options from which one should be chosen randomly. I thought I could use the <code>__post_init__</code> method for this, but then learned that this is intentionally not called because the use case is to serialize and deserialize objects.</p> <p>Is there a way to tell ruamel to use the constructor of the class and call <code>__post_init__</code>?</p> <p>This is almost what I want, but I couldn't get it to work with ruamel despite it saying that it was tested using ruamel at the bottom:</p> <ul> <li><a href="https://stackoverflow.com/a/35476888/6411540">Answer on SO: Is there a way to construct an object using PyYAML construct_mapping after all nodes complete loading?</a></li> </ul> <p>My current setup is like this:</p> <pre class="lang-py prettyprint-override"><code>import random from abc import ABC, abstractmethod from dataclasses import dataclass from ruamel.yaml import YAML, yaml_object yaml = YAML() @dataclass class A(ABC): some_var: str options: list[str] def __post_init__(self): self.other_var = random.choice(self.options) @yaml_object(yaml) @dataclass class B(A): yaml_tag = &quot;!B&quot; more: int @yaml_object(yaml) @dataclass class C(A): yaml_tag = &quot;!C&quot; foo: list[int] def __post_init__(self): super().__post_init__() self.bar = random.choice(self.foo) # … data = yaml.load(&quot;config.yml&quot;) </code></pre> <p>yaml to load:</p> <pre class="lang-yaml prettyprint-override"><code>test_classes: - !B some_var: abc123 options: ['X', 'Y', 'Z'] more: 7 - !C some_var: abc123 options: ['X', 'Y', 'Z'] foo: [7, 12, 42] </code></pre>
<python><constructor><yaml><ruamel.yaml>
2023-04-20 13:03:01
1
1,273
Darkproduct
76,064,228
3,227,302
area weighted average of raster using polygon - R vs Python
<p>I have an R function in <code>terra</code> package that takes a raster and polygon and calculates the average of the raster for the polygon. In doing so, it uses the area intersection of each raster cell with the polygon as a weight to do the averaging (for e.g. some raster on polygon boundary could be partially intersecting with the polygon). The <code>weights = T</code> argument does this for me. Here's the R code:</p> <pre><code>library(terra) terra::rast(my_raster, my_shp, fun = 'mean', na.rm = T, weights = T, touches = T) </code></pre> <p>I want to do the equivalent in python:</p> <pre><code>import rasterio import geopandas as gpd import numpy as np out_image, out_transform = rasterio.mask.mask(my_raster, my_shp, crop=True, all_touched=True) mean_val = np.mean(out_image) # Calculate the mean value of the masked raster </code></pre> <p>However, I am not able to find an argument in python that accounts for raster cell's intersection area with the polygon to calculate the average. How do people go about doing this in python?</p>
<python><r><geopandas><terra><rasterio>
2023-04-20 12:46:27
1
3,845
89_Simple
76,064,194
10,061,193
Store User's Credentials in Python Package
<p>I'm working on a Python package where I need to get credentials (email and password) from the user via CLI and they can enter their credentials just as follows.</p> <pre><code>$ my_package auth --email abc@efg.com --password testpass123 </code></pre> <p>My package is responsible for storing and using the provided credentials in the next calls (even if the system reboots). What is the best way of implementing this? Using environment variables? Online password managers? Keeping them in the user's <code>$HOME</code> directory?</p>
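A minimal stdlib-only sketch of the home-directory option: plaintext with owner-only permissions, similar in spirit to `~/.netrc`. The filename here is made up, and a system keyring (for example via the third-party `keyring` package) is generally the safer choice, since any plaintext file is readable by anything running as the same user:

```python
import json
from pathlib import Path

# hypothetical location; real packages often use ~/.config/<name>/ instead
CRED_FILE = Path.home() / ".my_package_credentials.json"

def save_credentials(email, password, path=CRED_FILE):
    path = Path(path)
    path.write_text(json.dumps({"email": email, "password": password}))
    path.chmod(0o600)  # owner read/write only, so other users cannot read it

def load_credentials(path=CRED_FILE):
    return json.loads(Path(path).read_text())
```

Environment variables, by contrast, do not survive a reboot on their own, so a file or a keyring is needed for the persistence requirement either way.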
<python><authentication><package><authorization><credentials>
2023-04-20 12:42:40
1
394
Sadra
76,064,091
9,640,238
Select rows between two rows with pandas
<p>There are many answers on how to select rows where values in a column fit between two values, but I can't find how to select all rows between two rows. For instance, say I have a dataframe with a column that identifies the start and end.</p> <pre class="lang-py prettyprint-override"><code>data = {'record_id' : [1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3], 'some_data': ['semper', 'lectus', 'turpis', 'proin', 'justo', 'vitae', 'luctus', 'magna', 'non', 'vestibulum', 'nulla', 'erat', 'nisl', 'orci', 'curae', 'nam', 'aliquet', 'aliquam', 'cum', 'convallis'], 'boundaries': [np.NaN, 'start', np.NaN, np.NaN, 'end', np.NaN, np.NaN, np.NaN, 'start', np.NaN, np.NaN, 'end', np.NaN, np.NaN, np.NaN, 'start', np.NaN, 'end', np.NaN, np.NaN]} df = pd.DataFrame(data) </code></pre> <p>How do I return only the rows that fit between each sequence of <code>start</code> and <code>end</code> (included)? I was thinking of using <code>ffill()</code> if I can figure out how to propagate forward <em>only</em> the value <code>start</code> (until the next value is <code>end</code>), still leaving blank all rows between <code>end</code> and <code>start</code> (otherwise I don't know which occurrence of <code>end</code> I need to include in my filter).</p>
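A hedged sketch of the ffill idea on a smaller frame: map `'start'` to 1 and `'end'` to 0 so that each `'end'` stops the propagation, forward-fill, then re-include the `'end'` rows explicitly (they carry 0 after the fill but belong to the block):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'some_data': list('abcdefgh'),
    'boundaries': [np.nan, 'start', np.nan, 'end',
                   np.nan, 'start', 'end', np.nan],
})

# 1.0 from a 'start' propagates forward; 0.0 from an 'end' stops it
inside = df['boundaries'].map({'start': 1.0, 'end': 0.0}).ffill()
# the 'end' rows themselves hold 0.0, so add them back explicitly
mask = inside.eq(1) | df['boundaries'].eq('end')
print(df[mask].index.tolist())  # [1, 2, 3, 5, 6]
```

Rows before the first `'start'` stay NaN after the fill, so `eq(1)` excludes them, and each `'end'`/`'start'` gap is excluded because the filled value there is 0.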
<python><pandas>
2023-04-20 12:32:37
2
2,690
mrgou
76,064,067
2,123,706
Download file from SharePoint via Python error: The access policy does not allow token issuance
<p>I was looking at <a href="https://stackoverflow.com/questions/53671547/python-download-files-from-sharepoint-site">Python - Download files from SharePoint site</a> to try and download files from SharePoint, but I get this error:</p> <pre><code>ValueError: An error occurred while retrieving token from XML response: AADSTS53003: Access has been blocked by Conditional Access policies. The access policy does not allow token issuance. </code></pre> <p>I contacted my IT team, and they said <code>alter your Python script to use Modern Authentication rather than Legacy Authentication</code>.</p> <p>Does anyone know of a more modern authentication method to download files from SharePoint?</p>
<python><sharepoint>
2023-04-20 12:30:02
1
3,810
frank
76,064,018
2,912,349
Minimal covering circle for a matplotlib.PathPatch
<p>For an arbitrary matplotlib.PathPatch instance and a given origin, I would like to find the circle with the minimal radius that fully covers the patch. For straight line paths that is straightforward: simply compute the distances between path vertices and the origin; the maximum is the desired radius. However, for paths with Bezier curves, the path vertices are control points that can lie outside the patch, and hence the circle ends up being too large. How can I find the minimal radius in these cases?</p> <p><a href="https://i.sstatic.net/TYftS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TYftS.png" alt="enter image description here" /></a></p> <pre><code>#!/usr/bin/env python &quot;&quot;&quot; Find the circle with minimum radius that given an xy origin fully covers a matplotlib.patch.PathPatch instance. &quot;&quot;&quot; import numpy as np import matplotlib.pyplot as plt from matplotlib.path import Path from matplotlib.patches import PathPatch fig, ax = plt.subplots() # arbitrary PathPatch vertices = np.array([[ 0.44833333, -2.75444444], [-0.78166667, -1.28444444], [-2.88166667, 1.81555556], [-0.75666667, 1.81555556], [-0.28166667, 0.96555556], [ 1.06833333, 3.01555556], [ 1.86833333, -0.13444444], [ 0.86833333, -0.68444444], [ 0.44833333, -2.75444444]]) codes = (1, 4, 4, 4, 2, 4, 4, 4, 79) ax.add_artist(PathPatch(Path(vertices, codes), color='red')) # plot control points separately ax.scatter(*vertices.T, c='black', marker='x') # covering circle origin = np.array((0, 0)) deltas = vertices - origin[np.newaxis, :] distances = np.linalg.norm(deltas, axis=-1) radius = np.max(distances) ax.add_artist(plt.Circle(origin, radius, alpha=0.1)) ax.axis([-4, 4, -4, 4]) ax.set_aspect(&quot;equal&quot;) plt.show() </code></pre>
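A hedged numpy-only sketch of one approach: evaluate each cubic Bezier segment densely and take the maximum distance of the sampled on-curve points. By the convex-hull property of Bezier curves this estimate always lies between the max over the path's endpoints and the max over all control points, and it converges to the true radius as the sample count grows; an exact answer would instead require finding the zeros of the derivative of the squared distance, a quintic per segment.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=256):
    # Bernstein form of a cubic Bezier, sampled at n parameter values in [0, 1]
    t = np.linspace(0, 1, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# vertices from the question: MOVETO, CURVE4 x3, LINETO, CURVE4 x3, CLOSEPOLY
v = np.array([[0.44833333, -2.75444444], [-0.78166667, -1.28444444],
              [-2.88166667, 1.81555556], [-0.75666667, 1.81555556],
              [-0.28166667, 0.96555556], [1.06833333, 3.01555556],
              [1.86833333, -0.13444444], [0.86833333, -0.68444444]])

origin = np.array([0.0, 0.0])
points = np.vstack([
    cubic_bezier(v[0], v[1], v[2], v[3]),  # first curved segment
    cubic_bezier(v[4], v[5], v[6], v[7]),  # second curved segment
    v[[3, 4, 7, 0]],                       # endpoints of the straight segments
])
radius = np.linalg.norm(points - origin, axis=1).max()
```

For a generic `PathPatch` the same idea can be driven from `Path.iter_segments()`, dispatching on the code to pick line endpoints versus Bezier control points; only line segments can be handled by their endpoints alone, since a straight segment's farthest point from any origin is always an endpoint.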
<python><matplotlib><bezier>
2023-04-20 12:24:27
1
12,703
Paul Brodersen
76,063,981
11,795,964
Create a new pandas category column on the basis of values in another column
<p>I am a doctor looking at surgery data. There is a column in the data frame for admission method.</p> <pre><code>df['ADMIMETH'] </code></pre> <p>There are a number of codes for admission method, but the categories essentially are emergency or non-emergency.</p> <p>Emergency has 17 codes, non-emergency has 3 codes. I want to create a new column with the <em>categories</em> emergency, non-emergency by filtering the admission method column. I have used a categorizing method - writing a function to categorize the admission methods into emergency or non-emergency, and then applied that function to the relevant pandas column.</p> <p><code>emerg = ['2A', '2B', '2C', '2D', '2C', '28', '31', '32', '21', '22', '23', '24', '25', '2A', '2B', '2C', '2D']</code></p> <p><code>nonemerg = ['11', '12', '13']</code></p> <pre><code>def filter(x): if df['ADMIMETH'].isin(emerg): return 'acute' if df['ADMIMETH'].isin(nonemerg): return 'elective' </code></pre> <p><code>df['new_col'] = df['ADMIMETH'].apply(filter)</code></p> <pre><code> File ~/Library/Python/3.9/lib/python/site-packages/pandas/_libs/lib.pyx:2918, in pandas._libs.lib.map_infer() Cell In [7], line 2, in filter(x) 1 def filter(x): ----&gt; 2 if df['ADMIMETH'].isin(emerg): 3 return 'acute' 4 if df['ADMIMETH'].isin(nonemerg): File ~/Library/Python/3.9/lib/python/site-packages/pandas/core/generic.py:1527, in NDFrame.__nonzero__(self) 1525 @final 1526 def __nonzero__(self) -&gt; NoReturn: -&gt; 1527 raise ValueError( 1528 f&quot;The truth value of a {type(self).__name__} is ambiguous. &quot; 1529 &quot;Use a.empty, a.bool(), a.item(), a.any() or a.all().&quot; 1530 ) ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). </code></pre> <p>As above, I get a ValueError. I have obviously done something fundamentally wrong. I would like to keep it as simple as possible as I am a doctor, not a developer...</p>
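A hedged sketch using `np.select`, which runs the two `isin` checks on the whole column at once. The error above comes from testing a whole boolean Series with `if` inside the function; vectorizing the check avoids `apply` entirely. Code lists are shortened here for illustration:

```python
import numpy as np
import pandas as pd

emerg = ['21', '22', '2A']       # shortened code lists for illustration
nonemerg = ['11', '12', '13']

df = pd.DataFrame({'ADMIMETH': ['21', '11', '2A', '13']})

# each condition is a boolean Series; np.select picks the matching label per row
df['new_col'] = np.select(
    [df['ADMIMETH'].isin(emerg), df['ADMIMETH'].isin(nonemerg)],
    ['acute', 'elective'],
    default='unknown',  # codes appearing in neither list
)
print(df['new_col'].tolist())  # ['acute', 'elective', 'acute', 'elective']
```

The same pattern works unchanged with the full 17-code and 3-code lists, and the `default` makes any unexpected codes easy to spot afterwards.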
<python><pandas><apply>
2023-04-20 12:20:31
0
363
capnahab
76,063,891
995,431
Python Celery chord raises ChordError even when error is handled by callback
<p>I have this problem in a larger application, but have reduced it to the following MRE.</p> <p>I have a Celery chord consisting of two parallel tasks feeding into one final tasks. It's expected that tasks might occasionally permanently fail for external reasons, simulated here by raising <code>Exception</code>. I wish to handle that error with a custom error handler, and not cause any unhandled execptions in the Celery worker process.</p> <p>Here is the definition of my three example tasks in a file I call <code>canvastest.py</code>.</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python from celery import Celery app = Celery('canvastest', backend='redis://localhost', broker='redis://localhost') @app.task(throws=(Exception,),) def t1(x): print(&quot;t1&quot;) return x @app.task(throws=(Exception,),) def t2(x): print(&quot;t2&quot;) return x @app.task(throws=(Exception,),) def t3(x): print(&quot;t3&quot;) return x @app.task() def error_handler(*args, **kwargs): print(&quot;Error handler&quot;) </code></pre> <p>I build my chord as follows in the file <code>main.py</code></p> <pre class="lang-py prettyprint-override"><code>from canvastest import t1, t2, t3, error_handler from celery import chord if __name__ == '__main__': combined_task = chord((t1.s(1), t2.s(2)),t3.s()).on_error(error_handler.si()) combined_task() </code></pre> <p>To run it I first start a local redis instance by running <code>docker run -p 6379:6379 redis</code> then I run the worker <code>celery -A canvastest worker</code> and the main application <code>python main.py</code></p> <p>The log output of the worker then as expected looks as follows</p> <pre><code>[2023-04-20 13:59:00,840: WARNING/ForkPoolWorker-16] t1 [2023-04-20 13:59:00,842: WARNING/ForkPoolWorker-1] t2 [2023-04-20 13:59:00,854: WARNING/ForkPoolWorker-16] t3 </code></pre> <p>If I modify t3 to fail by changing it as</p> <pre class="lang-py prettyprint-override"><code>@app.task(throws=(Exception,),) def t3(x): 
print(&quot;t3&quot;) raise Exception() return x </code></pre> <p>the logs correctly looks as follows</p> <pre><code>[2023-04-20 14:00:18,078: WARNING/ForkPoolWorker-16] t1 [2023-04-20 14:00:18,080: WARNING/ForkPoolWorker-1] t2 [2023-04-20 14:00:18,090: WARNING/ForkPoolWorker-16] t3 [2023-04-20 14:00:18,092: WARNING/ForkPoolWorker-16] Error handler </code></pre> <p>but if we move the error to t1 or t2 as</p> <pre><code>@app.task(throws=(Exception,),) def t2(x): print(&quot;t2&quot;) raise Exception() return x </code></pre> <p>then this happens</p> <pre><code> [2023-04-20 14:01:31,667: WARNING/ForkPoolWorker-16] t1 [2023-04-20 14:01:31,668: WARNING/ForkPoolWorker-1] t2 [2023-04-20 14:01:31,675: ERROR/ForkPoolWorker-1] Chord '3336718d-57e3-41e0-b62d-4fad8ae9cc62' raised: ChordError('Dependency ffdd942e-cfe9-498f-a4b1-9d3a80a9ee45 raised Exception()') Traceback (most recent call last): File &quot;/home/jerkern/celery_mre/.venv/lib/python3.10/site-packages/celery/app/trace.py&quot;, line 451, in trace_task R = retval = fun(*args, **kwargs) File &quot;/home/jerkern/celery_mre/.venv/lib/python3.10/site-packages/celery/app/trace.py&quot;, line 734, in __protected_call__ return self.run(*args, **kwargs) File &quot;/home/jerkern/celery_mre/canvastest.py&quot;, line 15, in t2 raise Exception() Exception During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/home/jerkern/celery_mre/.venv/lib/python3.10/site-packages/celery/backends/redis.py&quot;, line 520, in on_chord_part_return resl = [unpack(tup, decode) for tup in resl] File &quot;/home/jerkern/celery_mre/.venv/lib/python3.10/site-packages/celery/backends/redis.py&quot;, line 520, in &lt;listcomp&gt; resl = [unpack(tup, decode) for tup in resl] File &quot;/home/jerkern/celery_mre/.venv/lib/python3.10/site-packages/celery/backends/redis.py&quot;, line 426, in _unpack_chord_result raise ChordError(f'Dependency {tid} raised {retval!r}') 
celery.exceptions.ChordError: Dependency ffdd942e-cfe9-498f-a4b1-9d3a80a9ee45 raised Exception() [2023-04-20 14:01:31,676: WARNING/ForkPoolWorker-1] Error handler </code></pre> <p>There is now an unhandled exception and stack trace in the log. This is the problem, and I wish to get rid of that. The error has already been handled by the <code>error_handler</code> method, and I don't wish to pollute the logs by surplus exceptions and stack traces.</p> <p>Is there a way to silence this, or is it a bug in celery? My expectation would be that the error_handler is invoked in the same way regardless of which of the three tasks raises the exception, and that not further exceptions are raised and logged as a side-effect of the task failing (since that is expected behavior).</p>
<python><celery><celery-task><celeryd>
2023-04-20 12:09:31
0
325
ajn
76,063,762
13,158,157
pyspark isin multiple columns
<p>Coming from pandas to pyspark, I understand that you can substitute <code>isin</code> with <code>.join</code> using <code>how='inner'</code> or <code>how='left_anti'</code>. This works for me in simple conditions, but I cannot figure out how to use it in more nuanced cases.</p> <p>I have a case where I want to filter one dataframe by columns of several other dataframes; how can I do this?</p> <p>In pandas I would do:</p> <pre><code>df.loc[(df.A.isin(df2.A2)) &amp; (df.B.isin(df3.B3)), 'new_col'] = value </code></pre> <p>My pyspark attempt was:</p> <pre><code>df = df.withColumn('new_col', when( (df.A.isin(df2.A2)) &amp; (df.B.isin(df3.B3)), value ) ) </code></pre> <p>This fails with <code>AnalysisException: Resolved attribute(s) A2 missing from A,B ... </code> My guess is that I can use <code>isin</code> between columns of the same dataframe but cannot pass other dataframe columns to it.</p> <p>EDIT: my second attempt definitely works but does not look optimal when I consider multiple such &quot;isin-join alternatives&quot;:</p> <pre><code># split main frame into subframes df1_notin = main_df.join(F.broadcast(df1), on='A', how='left_anti') df1_isin = main_df.join(F.broadcast(df1), on='A', how='inner') df2_notin_all = df1_notin.join(F.broadcast(df2), on='B', how='left_anti') df2_isin = df1_notin.join(F.broadcast(df2), on='B', how='inner') # conditionally fill with Bool values in my case df2_notin_all = df2_notin_all.withColumn('new_column', F.lit(None).cast(BooleanType())) df1_isin = df1_isin.withColumn('new_column', F.lit(True).cast(BooleanType())) df2_isin = df2_isin.withColumn('new_column', F.lit(True).cast(BooleanType())) main_df = df2_notin_all.unionByName(df1_isin).unionByName(df2_isin) </code></pre>
<python><pandas><pyspark>
2023-04-20 11:55:11
0
525
euh
76,063,651
11,426,624
requests does not return the full div
<p>I would like to scrape a webpage using BeautifulSoup and requests. The below code works, but I do not get the full div back.</p> <pre><code>import requests cert = (certs['cert'], certs['password']) r = requests.get(url, cert=(certs['cert'], certs['password']), verify=certs['CA_file']) </code></pre> <p>For <code>r.text</code> I get:</p> <pre><code>...... &lt;div id=\'App\'&gt;&lt;/div&gt;\r\n &lt;script type=&quot;text/javascript&quot; src=&quot;/bundle.js&quot;&gt;&lt;/script&gt;&lt;/body&gt;\r\n&lt;/html&gt;\r\n' </code></pre> <p>I would like to have the HTML code inside <code>&lt;div id=\'App\'&gt;&lt;/div&gt;</code>, but it does not show. I tried some different headers, but they also did not work. Can this be done with BeautifulSoup (I would prefer not to use Selenium, as it gets way too complicated with the credentials)? I need to use Microsoft Edge.</p> <p>Is there anything I can do to get the full HTML code in the div?</p>
<python><web-scraping><python-requests>
2023-04-20 11:42:19
1
734
corianne1234
76,063,620
19,386,576
Pin a progress bar to the top of stdout in Python
<p>How can I print to stdout in Python with a progress bar pinned to the top, getting a result similar to the &quot;sudo apt upgrade&quot; command on Linux, but with the printed lines appearing one under the other and the progress bar staying above them? I'm searching for a way to make the progress bar behave like a sticky element with z-index: 999 in CSS.</p> <pre><code>Setting up python3-aiodns (3.0.0-2) ... Setting up libpython3-all-dev:arm64 (3.11.2-1+b1) ... Setting up python3-dev (3.11.2-1+b1) ... Setting up theharvester (4.2.0-0kali2) ... Setting up faraday (4.3.5-0kali1) ... faraday.service is a disabled or a static unit not running, not starting it. Setting up samba-common-bin (2:4.17.7+dfsg-1) ... Setting up python3-aesedb (0.1.3+git20230221.9b7c468-0kali1) ... Setting up samba (2:4.17.7+dfsg-1) ... nmbd.service is a disabled or a static unit not running, not starting it. samba-ad-dc.service is a disabled or a static unit not running, not starting it. smbd.service is a disabled or a static unit not running, not starting it. Setting up python3-all-dev (3.11.2-1+b1) ... Setting up kali-linux-headless (2023.2.5) ... Setting up python3-pypykatz (0.6.6-0kali1) ... Setting up kali-linux-default (2023.2.5) ... Processing triggers for systemd (252.6-1) ... Processing triggers for man-db (2.11.2-2) ... Progress: [ 99%] [#########################################################.] </code></pre> <p>I tried with this code:</p> <pre><code>import time def print_progress_bar(iteration, total): percent = &quot;{:.1f}&quot;.format(100 * (iteration / float(total))) filled_length = int(50 * iteration // total) bar = &quot;█&quot; * filled_length + &quot;-&quot; * (50 - filled_length) print(f&quot;\033[F\033[KProgress: |{bar}| {percent}% Complete&quot;) for i in range(1, 101): print_progress_bar(i, 100) print(i, end=&quot; &quot;) time.sleep(0.1) </code></pre> <p>but I can't print the numbers under each other.</p>
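A hedged sketch of the two pieces involved: a pure function that renders the bar string (easy to test in isolation), and an ANSI demo that pins it to row 1 while normal prints scroll underneath. `ESC[2;<rows>r` restricts scrolling to everything below the top line, and `ESC 7` / `ESC 8` save and restore the cursor; terminal support for these sequences varies, so treat the demo as illustrative:

```python
import sys
import time

def render_bar(iteration, total, width=50):
    # pure string builder, kept separate from the terminal escape codes
    filled = int(width * iteration // total)
    return "[" + "#" * filled + "-" * (width - filled) + f"] {100 * iteration / total:5.1f}%"

def demo(lines=5, rows=24):
    sys.stdout.write(f"\033[2;{rows}r")        # scroll region: rows 2..N only
    for i in range(1, lines + 1):
        # save cursor, jump to row 1, clear it, repaint bar, restore cursor
        sys.stdout.write(f"\0337\033[1;1H\033[K{render_bar(i, lines)}\0338")
        print(f"step {i}")                      # scrolls inside the region, below the bar
        sys.stdout.flush()
        time.sleep(0.05)
    sys.stdout.write("\033[r")                  # reset the scroll region afterwards
```

The `\033[F` in the original code moves the cursor up one line before printing, which is why the numbers overwrite each other; with a scroll region the bar and the normal output no longer share lines at all.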
<python><python-3.x><terminal><progress-bar><stdout>
2023-04-20 11:37:07
1
608
federikowsky
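One way to keep the bar on the first terminal row while normal output scrolls below is ANSI escape codes — a minimal sketch (the helper names and the 24-row default are assumptions, not from the question; DECSTBM scroll regions work in most, but not all, terminals):

```python
import sys

def format_bar(iteration, total, width=50):
    """Build the bar text itself, with no escape codes mixed in."""
    percent = 100 * iteration / float(total)
    filled = int(width * iteration // total)
    return "Progress: [" + "#" * filled + "-" * (width - filled) + f"] {percent:5.1f}%"

def reserve_top_line(rows=24):
    # DECSTBM: restrict scrolling to rows 2..N so row 1 never scrolls away
    sys.stdout.write(f"\033[2;{rows}r\033[2;1H")

def pin_bar_to_top(iteration, total):
    # save cursor, jump to row 1, clear that row, draw the bar, restore cursor
    sys.stdout.write("\0337\033[1;1H\033[2K" + format_bar(iteration, total) + "\0338")
    sys.stdout.flush()
```

After calling `reserve_top_line()` once, plain `print()` calls scroll in the region below row 1 while `pin_bar_to_top()` redraws the bar in place — which gives the apt-style layout, just with the bar on top instead of the bottom.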
76,063,383
11,125,112
Bounding Box & Segmentation Mask of a Blender Particle System
<p>I'm trying to simulate multiple falling objects with a particle system in Blender.</p> <p>To train certain CNNs I require the segmentation mask of the single elements that are visible in the scene. Up to now I'm only able to perform part of the mask-exporting task, such as exporting a mask via composition nodes or using Python to extract some properties.</p> <p>What I tried so far:</p> <ul> <li>Exporting the mask via particle indices (see images). The colors of the objects in the segmentation image randomly vary. Since they will overlap at some point, this solution is not really applicable. The edges of the segmentation image are also quite fuzzy, mainly due to the setting: Film -&gt; Pixel Filter -&gt; Width property (if set to 0, the rendered image is also affected).</li> </ul> <p><a href="https://i.sstatic.net/NtBgb.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NtBgb.jpg" alt="Shading Nodes" /></a> <a href="https://i.sstatic.net/pXpQq.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pXpQq.jpg" alt="Composition Nodes" /></a></p> <ul> <li>Another possibility that I see is exporting the mask via Python. Therefore I created a script which (up to now) exports the centerpoints of the particles in the image. 
However I could probably loop over the visible particles and disable one by one and always save the image, but this will use a tremendous amount of memory and time.</li> </ul> <pre><code> import bpy from bpy_extras.object_utils import world_to_camera_view context = bpy.context dg = context.evaluated_depsgraph_get() ob = context.object.evaluated_get(dg) ps = ob.particle_systems.active scene = bpy.context.scene cam = bpy.data.objects['Camera'] render = scene.render res_x = render.resolution_x res_y = render.resolution_y for frame in range(30,35): scene.frame_set(frame) bpy.ops.render.render(write_still = True) print(&quot;res_x:{},res_y:{}&quot;.format(res_x, res_y)) for particle in ps.particles: if particle.is_visible: print(particle.location) coords_2d = world_to_camera_view(scene, cam, particle.location) x = coords_2d[0] y = coords_2d[1] dist_to_cam = coords_2d[2] rnd = lambda i: round(i) print(&quot;x:{},y:{}&quot;.format(rnd(res_x*x),res_y- rnd(res_y*y))) </code></pre> <p>If someone could show me a more reliable clean method to export the masks of all the visible single particle objects, also considering overlapping, in blender I would highly appreciate it.</p> <p>Thank you</p>
<python><blender><bpy>
2023-04-20 11:06:08
2
464
Daniel Klauser
76,063,210
9,481,479
How to get Bitrate from uridecodebin Deepstream source
<p>For decoding online and local mp4 files, I am using this code snippet to create source bin in Deepstream Python code.(similar to deepstream_test_3 example) This is uridecodebin based source, and handles all decoding part. I need bitrate info of source video, so as I can set the same to encoder of my output video. Please help me on this.</p> <pre><code>def decodebin_child_added(child_proxy,Object,name,user_data): print(&quot;Decodebin child added:&quot;, name, &quot;\n&quot;) if(name.find(&quot;decodebin&quot;) != -1): Object.connect(&quot;child-added&quot;,decodebin_child_added,user_data) if &quot;source&quot; in name: source_element = child_proxy.get_by_name(&quot;source&quot;) if source_element.find_property('drop-on-latency') != None: Object.set_property(&quot;drop-on-latency&quot;, True) def cb_newpad(decodebin, decoder_src_pad,data): print(&quot;In cb_newpad\n&quot;) caps=decoder_src_pad.get_current_caps() if not caps: caps = decoder_src_pad.query_caps() gststruct=caps.get_structure(0) gstname=gststruct.get_name() source_bin=data features=caps.get_features(0) # Need to check if the pad created by the decodebin is for video and not # audio. print(&quot;gstname=&quot;,gstname) if(gstname.find(&quot;video&quot;)!=-1): # Link the decodebin pad only if decodebin has picked nvidia # decoder plugin nvdec_*. We do this by checking if the pad caps contain # NVMM memory features. 
print(&quot;features=&quot;,features) if features.contains(&quot;memory:NVMM&quot;): # Get the source bin ghost pad bin_ghost_pad=source_bin.get_static_pad(&quot;src&quot;) if not bin_ghost_pad.set_target(decoder_src_pad): sys.stderr.write(&quot;Failed to link decoder src pad to source bin ghost pad\n&quot;) else: sys.stderr.write(&quot; Error: Decodebin did not pick nvidia decoder plugin.\n&quot;) def create_source_bin(index,uri): print(&quot;Creating source bin&quot;) # Create a source GstBin to abstract this bin's content from the rest of the # pipeline bin_name=&quot;source-bin-%02d&quot; %index print(bin_name) nbin=Gst.Bin.new(bin_name) if not nbin: sys.stderr.write(&quot; Unable to create source bin \n&quot;) # Source element for reading from the uri. # We will use decodebin and let it figure out the container format of the # stream and the codec and plug the appropriate demux and decode plugins. uri_decode_bin=Gst.ElementFactory.make(&quot;uridecodebin&quot;, &quot;uri-decode-bin&quot;) if not uri_decode_bin: sys.stderr.write(&quot; Unable to create uri decode bin \n&quot;) # We set the input uri to the source element uri_decode_bin.set_property(&quot;uri&quot;,uri) # Connect to the &quot;pad-added&quot; signal of the decodebin which generates a # callback once a new pad for raw data has beed created by the decodebin uri_decode_bin.connect(&quot;pad-added&quot;,cb_newpad,nbin) uri_decode_bin.connect(&quot;child-added&quot;,decodebin_child_added,nbin) # We need to create a ghost pad for the source bin which will act as a proxy # for the video decoder src pad. The ghost pad will not have a target right # now. Once the decode bin creates the video decoder and generates the # cb_newpad callback, we will set the ghost pad target to the video decoder # src pad. 
Gst.Bin.add(nbin,uri_decode_bin) bin_pad=nbin.add_pad(Gst.GhostPad.new_no_target(&quot;src&quot;,Gst.PadDirection.SRC)) if not bin_pad: sys.stderr.write(&quot; Failed to add ghost pad in source bin \n&quot;) return None return nbin print(&quot;Creating source_bin\n&quot;) source_bin=create_source_bin(0, &quot;&lt;video_path_uri&gt;&quot;) if not source_bin: sys.stderr.write(&quot;Unable to create source bin \n&quot;) </code></pre>
<python><video-streaming><encoder-decoder><deepstream>
2023-04-20 10:43:08
1
1,004
Nawin K Sharma
76,062,931
3,188,444
How can I save .py text version of jupyter notebook from the notebook itself
<p>I want to save a text version (<code>.py</code>) of a Jupyter notebook (<code>.ipynb</code>) without output for version control purposes.</p> <p>I know you can do this by running <code>jupyter nbconvert</code> in terminal, but how can I automate this by executing this command from within the Jupyter notebook itself?</p> <p>Also see <a href="https://stackoverflow.com/questions/18734739/using-ipython-jupyter-notebooks-under-version-control">Using IPython / Jupyter Notebooks Under Version Control</a></p>
<python><jupyter-notebook><nbconvert>
2023-04-20 10:11:01
2
1,138
Kouichi C. Nakamura
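The standard route is running nbconvert from a cell (`!jupyter nbconvert --to script notebook.ipynb`) or via `subprocess`. As a sketch of what that conversion does, the snippet below extracts only the code cells with the stdlib, so outputs never reach the `.py` file (the function names and file paths are illustrative assumptions):

```python
import json
import subprocess
import sys

def export_via_nbconvert(notebook_path):
    """Run nbconvert programmatically — same effect as `!jupyter nbconvert ...`."""
    subprocess.run(
        [sys.executable, "-m", "jupyter", "nbconvert", "--to", "script", notebook_path],
        check=True,
    )

def notebook_to_py(ipynb_path, py_path):
    """Stdlib fallback: an .ipynb file is JSON, so write out just the code
    cells; outputs and markdown are skipped entirely."""
    with open(ipynb_path) as f:
        nb = json.load(f)
    with open(py_path, "w") as f:
        for cell in nb.get("cells", []):
            if cell.get("cell_type") == "code":
                f.write("".join(cell["source"]).rstrip() + "\n\n")
```

For version control you could call either function from the notebook's last cell, or hook it into a Jupyter pre-save hook.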
76,062,929
4,451,315
Rolling group_by, but truncating each value at midnight
<p>Say I have the following DataFrame:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl import numpy as np from datetime import datetime df = pl.DataFrame({'ts': pl.datetime_range(datetime(2020, 1, 1), datetime(2020, 1, 10), '1h', eager=True)}) df = df.with_columns(value=pl.Series(np.arange(len(df)))) </code></pre> <pre class="lang-py prettyprint-override"><code>In [62]: df Out[62]: shape: (217, 2) ┌─────────────────────┬───────┐ │ ts ┆ value │ │ --- ┆ --- │ │ datetime[μs] ┆ i64 │ ╞═════════════════════╪═══════╡ │ 2020-01-01 00:00:00 ┆ 0 │ │ 2020-01-01 01:00:00 ┆ 1 │ │ 2020-01-01 02:00:00 ┆ 2 │ │ 2020-01-01 03:00:00 ┆ 3 │ │ … ┆ … │ │ 2020-01-09 21:00:00 ┆ 213 │ │ 2020-01-09 22:00:00 ┆ 214 │ │ 2020-01-09 23:00:00 ┆ 215 │ │ 2020-01-10 00:00:00 ┆ 216 │ └─────────────────────┴───────┘ </code></pre> <p>What I would like to get, for each row, is:</p> <ul> <li>consider all rows which are between 3 days before, and that same day at midnight</li> <li>calculate the mean</li> </ul> <p>So, for example, for row <code>2020-01-09 23:00:00</code>, I would like to consider the rows where <code>ts</code> is greater or equal to <code>2020-01-06 00:00:00</code> and less than <code>2020-01-09 00:00:00</code> and take the mean of the <code>'value'</code> column.</p> <p>Expected output:</p> <pre><code>shape: (217, 2) ┌─────────────────────┬───────┐ │ ts ┆ value │ │ --- ┆ --- │ │ datetime[μs] ┆ i64 │ ╞═════════════════════╪═══════╡ │ 2020-01-01 00:00:00 ┆ null │ │ 2020-01-01 01:00:00 ┆ null │ │ 2020-01-01 02:00:00 ┆ null │ │ 2020-01-01 03:00:00 ┆ null │ │ … ┆ … │ │ 2020-01-09 21:00:00 ┆ 155.5 │ │ 2020-01-09 22:00:00 ┆ 155.5 │ │ 2020-01-09 23:00:00 ┆ 155.5 │ │ 2020-01-10 00:00:00 ┆ 179.5 │ └─────────────────────┴───────┘ </code></pre> <p>How I calculated the expected output:</p> <pre><code>df.filter( (pl.col(&quot;ts&quot;) &gt;= datetime(2020, 1, 6)) &amp; (pl.col(&quot;ts&quot;) &lt; datetime(2020, 1, 9)) ).mean() df.filter( (pl.col(&quot;ts&quot;) &gt;= datetime(2020, 1, 7)) &amp; (pl.col(&quot;ts&quot;) &lt; datetime(2020, 1, 10)) ).mean() </code></pre>
<python><python-polars>
2023-04-20 10:10:47
1
11,062
ignoring_gravity
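Whatever polars construct is used, the windowing rule itself — average all values in `[midnight(ts) - 3 days, midnight(ts))` — can be pinned down with a plain-Python reference implementation. This O(n²) sketch is only for checking results against a vectorized polars solution, not a replacement for one:

```python
from datetime import datetime, timedelta

def truncated_rolling_mean(rows, days=3):
    """rows: list of (ts, value) pairs. For each row, average the values
    whose timestamp falls in [midnight(ts) - days, midnight(ts))."""
    out = []
    for ts, _ in rows:
        midnight = ts.replace(hour=0, minute=0, second=0, microsecond=0)
        lo = midnight - timedelta(days=days)
        window = [v for t, v in rows if lo <= t < midnight]
        out.append(sum(window) / len(window) if window else None)
    return out
```

A vectorized polars version could then be built by joining each row's truncated date (`pl.col("ts").dt.truncate("1d")`) against per-window aggregates and validated against this reference.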
76,062,894
403,425
Annotate a QuerySet with Exists instead of Count
<p>I have a model which contains a <code>ManyToManyField</code>:</p> <pre class="lang-py prettyprint-override"><code>class UserHasProduct(Model): user = ForeignKey(User, on_delete=CASCADE) products = ManyToManyField(Product) class Product(Model): is_nfr = BooleanField(default=False) </code></pre> <p>I want to annotate my queryset with a simple <code>is_nfr</code> value that returns True if any of the <code>products</code> have <code>is_nfr</code> set to True. The best I could come up with is this:</p> <pre class="lang-py prettyprint-override"><code>UserHasProduct.objects.filter(user=self.request.user).annotate( is_nfr=Count(&quot;license_code_products&quot;, filter=Q(products__is_nfr=True)) ) </code></pre> <p>That works, but instead of a boolean it returns an integer. Not a huge deal, but I am still wondering if it's possible to return a boolean, and if that would help with query performance in any way (i.e. it can stop as soon as it finds the first match, no idea if it works like that).</p>
<python><django><django-orm>
2023-04-20 10:06:24
2
5,828
Kevin Renskers
76,062,688
12,226,377
Scraping Trustpilot and storing results in a pandas dataframe
<p>I am using the following code to extract all the reviews for the first 20 pages from Trustpilot</p> <pre><code>from bs4 import BeautifulSoup import requests import pandas as pd import datetime as dt # Initialize lists review_titles = [] review_dates_original = [] review_dates = [] review_ratings = [] review_texts = [] page_number = [] # Set Trustpilot page numbers to scrape here from_page = 1 to_page = 20 for i in range(from_page, to_page + 1): response = requests.get(f&quot;https://uk.trustpilot.com/review/www.abc.com?page={i}&quot;) web_page = response.text soup = BeautifulSoup(web_page, &quot;html.parser&quot;) for review in soup.find_all(class_ = &quot;paper_paper__1PY90 paper_square__lJX8a card_card__lQWDv card_noPadding__D8PcU card_square___tXn9 styles_navigationContainer__kPGA_&quot;): # Review titles review_title = review.find(class_ = &quot;typography_heading-s__f7029 typography_appearance-default__AAY17&quot;) review_titles.append(review_title.getText()) # Review dates review_date_original = review.select_one(selector=&quot;time&quot;) review_dates_original.append(review_date_original.getText()) # Convert review date texts into Python datetime objects review_date = review.select_one(selector=&quot;time&quot;).getText().replace(&quot;Updated &quot;, &quot;&quot;) if &quot;hours ago&quot; in review_date.lower() or &quot;hour ago&quot; in review_date.lower(): review_date = dt.datetime.now().date() elif &quot;a day ago&quot; in review_date.lower(): review_date = dt.datetime.now().date() - dt.timedelta(days=1) elif &quot;days ago&quot; in review_date.lower(): review_date = dt.datetime.now().date() - dt.timedelta(days=int(review_date[0])) else: review_date = dt.datetime.strptime(review_date, &quot;%b %d, %Y&quot;).date() review_dates.append(review_date) # Review ratings review_rating = review.find(class_ = &quot;star-rating_starRating__4rrcf star-rating_medium__iN6Ty&quot;).findChild() review_ratings.append(review_rating[&quot;alt&quot;]) # When there is no 
review text, append &quot;&quot; instead of skipping so that data remains in sequence with other review data e.g. review_title review_text = review.find(class_ = &quot;typography_body-l__KUYFJ typography_appearance-default__AAY17 typography_color-black__5LYEn&quot;) if review_text == None: review_texts.append(&quot;&quot;) else: review_texts.append(review_text.getText()) # Trustpilot page number page_number.append(i) # Create final dataframe from lists df_reviews = pd.DataFrame(list(zip(review_titles, review_dates_original, review_dates, review_ratings, review_texts, page_number)), columns =['review_title', 'review_date_original', 'review_date', 'review_rating', 'review_text', 'page_number']) </code></pre> <p>and I am getting the following error</p> <pre><code> AttributeError Traceback (most recent call last) in 24 # Review titles 25 review_title = review.find(class_ = &quot;typography_heading-s__f7029 typography_appearance-default__AAY17&quot;) ---&gt; 26 review_titles.append(review_title.getText()) 27 # Review dates 28 review_date_original = review.select_one(selector=&quot;time&quot;) AttributeError: 'NoneType' object has no attribute 'getText' </code></pre> <p>I understand that one of the review titles are coming out to be None and the same issue is persisting for other elements as well such as Dates, Pages, Review etc.</p> <p>How should I resolve this?</p>
<python><pandas><dataframe><web-scraping><beautifulsoup>
2023-04-20 09:42:01
2
807
Django0602
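The `AttributeError` means `review.find(...)` returned `None` for at least one card — Trustpilot's hashed class names change often, so any of them can stop matching. A small guard helper keeps every list in step with the other review data (the `FakeTag` stub in the test only mimics BeautifulSoup's `getText`; it is not part of the library):

```python
def safe_text(node, default=""):
    """Return node.getText() when the tag was found, otherwise a default,
    so each list still receives exactly one entry per review card."""
    return node.getText() if node is not None else default
```

In the loop this becomes, e.g., `review_titles.append(safe_text(review.find(class_="typography_heading-s__f7029 ...")))` — the same pattern the question already applies manually to `review_text`, extended to every field.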
76,062,643
245,549
How can I efficiently handle large CSV files in Python?
<p>I have a CSV file with millions of rows and columns that I need to process in Python. However, when I try to load it into memory using pandas or csv modules, my program becomes extremely slow and memory-intensive.</p> <p>What are some efficient techniques or libraries that I can use to handle such large CSV files in Python? I have heard about chunking and streaming the data, but I am not sure how to implement them. Can you provide some code examples or point me to some helpful resources?</p> <p>Any advice or suggestions would be greatly appreciated. Thank you in advance!</p>
<python><csv><size>
2023-04-20 09:37:24
1
132,218
Roman
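Streaming means never materializing the whole file: iterate row by row (stdlib) or chunk by chunk (pandas `chunksize`). A minimal sketch with illustrative function names — the aggregate shown is just an example of work done without loading everything:

```python
import csv

def stream_rows(path):
    """Yield one row at a time; only a single row is ever held in memory."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def column_sum(path, column):
    """Example aggregate computed over the stream, not over a loaded table."""
    return sum(float(row[column]) for row in stream_rows(path))

# With pandas the equivalent idea is chunking — each iteration gets a
# DataFrame of at most `chunksize` rows:
#   for chunk in pd.read_csv(path, chunksize=100_000):
#       process(chunk)
```

For repeated analytical queries over millions of rows, converting the CSV once to a columnar format (Parquet) and using pandas/pyarrow on that is usually a bigger win than any CSV-side optimization.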
76,062,594
6,352,008
Use the same rule at multiple locations in a pipeline with and without wildcards
<p>With snakemake, I'm trying to apply the same rule to N independent files and to a file which is the merge of all those N files.</p> <p>I've created a minimal example you can find below. That does this: <a href="https://i.sstatic.net/B70WP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B70WP.png" alt="dag" /></a></p> <p>I have a bunch of files as my initial input, their paths are given in a configuration file I have little control over.</p> <p>First I am extracting a specific part of those files (rule <code>create_list</code>) which I am processing (rule <code>do_stuff_on_list</code>).</p> <p>This works just fine, what I'm trying to do and have trouble doing is merge all the &quot;lists&quot; together (rule <code>merge_lists</code>) and apply to that the exact same processing (rule <code>do_stuff_on_list</code>).</p> <pre class="lang-py prettyprint-override"><code>config_file = { &quot;result_files&quot;: [ { &quot;id&quot;: 0, &quot;path&quot;: &quot;/path/to/readonly/location/1.txt&quot; }, { &quot;id&quot;: 8, &quot;path&quot;: &quot;/path/to/readonly/location/2.txt&quot; }, { &quot;id&quot;: 4, &quot;path&quot;: &quot;/path/to/readonly/location/3.txt&quot; } ] } SAMPLES = {str(x[&quot;id&quot;]): x[&quot;path&quot;] for x in config_file[&quot;result_files&quot;]} rule all: input: &quot;AAA_finalResult.txt&quot; rule create_list: input: sample_path = lambda wildcards: SAMPLES[wildcards.sample] output: &quot;{sample}_mut_list.json&quot; shell: &quot;touch {output}&quot; rule merge_lists: input: expand(rules.create_list.output, sample=SAMPLES.keys()) output: &quot;merged_mut_list.json&quot; shell: &quot;touch {output}&quot; rule do_stuff_on_list: input: rules.create_list.output output: &quot;{sample}_stuff.json&quot; shell: &quot;touch {output}&quot; rule merge_all_results: input: expand(rules.do_stuff_on_list.output, sample=SAMPLES.keys()), output: &quot;AAA_finalResult.txt&quot; shell: &quot;touch {output}&quot; </code></pre> <p>I know I could 
definitely solve that issue by creating a second rule identical to <code>do_stuff_on_list</code> that takes the merge as input, but I feel like there should be a better way that I cannot figure out.</p> <p>Is there a way to do this kind of thing?</p>
<python><python-3.x><snakemake><directed-acyclic-graphs>
2023-04-20 09:31:07
2
967
Plopp
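One possible trick (an assumption, not tested against this exact workflow): name the merged file so it matches the <code>{sample}_mut_list.json</code> pattern with <code>sample="merged"</code>. Then <code>do_stuff_on_list</code> can produce <code>merged_stuff.json</code> unchanged; a <code>ruleorder</code> resolves the ambiguity between the wildcard rule and the merge rule, and <code>"merged"</code> must not collide with a real sample id:

```
ruleorder: merge_lists > create_list

rule all:
    input:
        "AAA_finalResult.txt",
        "merged_stuff.json"   # do_stuff_on_list fires with sample="merged"

rule merge_lists:
    input:
        expand("{sample}_mut_list.json", sample=SAMPLES.keys())
    output:
        # matches the "{sample}_mut_list.json" pattern that
        # do_stuff_on_list consumes, so the same rule processes the merge
        "merged_mut_list.json"
    shell:
        "touch {output}"
```

All other rules stay as they are; the only change is declaring the extra target and the rule precedence.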
76,062,543
5,852,692
Pandas apply function to both columns and rows
<p>I have the the following pandas dataframe:</p> <pre><code> COL1 COL2 COL3 N1 1 2 0 N2 2 2 1 N3 3 2 1 </code></pre> <p>I would like to apply a function to each column &amp; row, where e.g.: <code>x[N1, COL1] = x[N1, COL1] / sum(x[_, COL1])</code></p> <p>The result should look like:</p> <pre><code> COL1 COL2 COL3 N1 1/6 2/6 0/2 N2 2/6 2/6 1/2 N3 3/6 2/6 1/2 </code></pre> <p>I cannot simply use <code>df.apply(lambda x: x/sum(x), axis=1)</code>, because the x in this case would be the whole column... How can I do this?</p>
<python><pandas><function><apply>
2023-04-20 09:25:23
1
1,588
oakca
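With `axis=1` the `x` passed to `apply` is a row, not a column; column-wise application is the default `axis=0`. But no `apply` is needed at all — dividing the frame by `df.sum()` broadcasts the per-column totals across the rows. A sketch on the frame from the question:

```python
import pandas as pd

df = pd.DataFrame(
    {"COL1": [1, 2, 3], "COL2": [2, 2, 2], "COL3": [0, 1, 1]},
    index=["N1", "N2", "N3"],
)

# df.sum() sums each column; the division broadcasts that Series across
# the rows, so every cell is divided by its own column's total
normalized = df / df.sum()

# equivalent but slower: df.apply(lambda x: x / x.sum())  (axis=0 is the default)
```

The broadcast form also extends naturally to row-normalization later: `df.div(df.sum(axis=1), axis=0)`.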
76,062,394
14,606,987
ValueError: Got unknown type S when using GPT-4 with LangChain for Summarization
<p>I am trying to use LangChain with the GPT-4 model for a summarization task. When I use the GPT-3.5-turbo model instead of GPT-4, everything works fine. However, as soon as I switch to GPT-4, I get the following error:</p> <pre class="lang-bash prettyprint-override"><code>ValueError: Got unknown type S </code></pre> <p>Here is the relevant part of my code that produces the error:</p> <pre class="lang-py prettyprint-override"><code>llm = ChatOpenAI(temperature=0, model_name=&quot;gpt-4&quot;) text_splitter = RecursiveCharacterTextSplitter( chunk_size=4000, chunk_overlap=0, separators=[&quot; &quot;, &quot;,&quot;, &quot;\n&quot;] ) texts = text_splitter.split_text(readme_content) docs = [Document(page_content=t) for t in texts] prompt_template = &quot;&quot;&quot;template&quot;&quot;&quot; PROMPT = PromptTemplate(template=prompt_template, input_variables=[&quot;text&quot;]) chain = load_summarize_chain(llm, chain_type=&quot;map_reduce&quot;, map_prompt=PROMPT, combine_prompt=PROMPT) summary = chain.run(docs) </code></pre> <p>In this code, docs are the chunks created using the RecursiveCharacterTextSplitter class.</p> <p>The full traceback of the error is as follows:</p> <pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last): File &quot;/Users/maxhager/Projects2023/githubgpt/testing_env/test_single.py&quot;, line 136, in &lt;module&gt; print(create_tweet(info, readme_content)) File &quot;/Users/maxhager/Projects2023/githubgpt/testing_env/test_single.py&quot;, line 120, in create_tweet summary = llm(prompt) File &quot;/Users/maxhager/.virtualenvs/githubgpt/lib/python3.10/site-packages/langchain/chat_models/base.py&quot;, line 128, in __call__ return self._generate(messages, stop=stop).generations[0].message File &quot;/Users/maxhager/.virtualenvs/githubgpt/lib/python3.10/site-packages/langchain/chat_models/openai.py&quot;, line 247, in _generate message_dicts, params = self._create_message_dicts(messages, stop) File 
&quot;/Users/maxhager/.virtualenvs/githubgpt/lib/python3.10/site-packages/langchain/chat_models/openai.py&quot;, line 277, in _create_message_dicts message_dicts = [_convert_message_to_dict(m) for m in messages] File &quot;/Users/maxhager/.virtualenvs/githubgpt/lib/python3.10/site-packages/langchain/chat_models/openai.py&quot;, line 277, in &lt;listcomp&gt; message_dicts = [_convert_message_to_dict(m) for m in messages] File &quot;/Users/maxhager/.virtualenvs/githubgpt/lib/python3.10/site-packages/langchain/chat_models/openai.py&quot;, line 88, in _convert_message_to_dict raise ValueError(f&quot;Got unknown type {message}&quot;) ValueError: Got unknown type S </code></pre> <p>Has anyone encountered a similar issue when using GPT-4 with LangChain? Any suggestions on how to resolve this error would be greatly appreciated.</p>
<python><openai-api><langchain>
2023-04-20 09:09:55
2
868
yemy
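The "S" in the error is the first character of the prompt: the traceback shows a direct `llm(prompt)` call with a bare string, and a chat model iterates `messages` element by element — iterating a string yields single characters. A stub reproducing just that mechanism (`_convert_message_to_dict` here is a simplified stand-in, not LangChain's real implementation):

```python
def _convert_message_to_dict(message):
    # simplified stand-in for the LangChain helper named in the traceback
    if isinstance(message, dict):
        return message
    raise ValueError(f"Got unknown type {message}")

def create_message_dicts(messages):
    # iterating a plain *string* yields single characters, which is why the
    # error reports only "S" — the first letter of the prompt text
    return [_convert_message_to_dict(m) for m in messages]

try:
    create_message_dicts("Summarize this repository ...")
except ValueError as exc:
    error_text = str(exc)
```

The likely fix is to never hand the chat model a raw string: wrap it, e.g. `llm([HumanMessage(content=prompt)])`, or drive everything through `chain.run(docs)` and let the chain build the messages. (GPT-3.5-turbo "working" may just mean a different code path was taken in that run.)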
76,062,338
245,549
How to force a function to treat outputs of different objects identically?
<p>I have this part of the code:</p> <pre><code>llm = OpenAI(temperature = 0) agent = initialize_agent( tools, llm, agent = 'conversational-react-description', verbose = True, memory = memory, return_intermediate_steps = True ) response = agent(prompt) </code></pre> <p>When I run it I get the &quot;response&quot; (a dictionary) and in it I see the following values:</p> <pre><code>Thought: Do I need to use a tool? Yes Action: Python Evaluator Action Input: sin(3)/cos(5) Observation: 0.4974931989239781 </code></pre> <p>Now, I do a small modification, I try to create a wrapper around LLM and give to <code>initialize_agent</code> a wrapped llm object:</p> <pre><code>class wrapper(LLM): llm: langchain.llms.openai.OpenAI log: str @property def _llm_type(self) -&gt; str: return &quot;custom&quot; def _call(self, prompt: str, stop: Optional[List[str]] = None) -&gt; str: reply = self.llm(prompt) f = open(self.log, 'a') f.write('\n{\n' + prompt + '\n}{\n' + reply + '\n}\n' + '.'*100) f.close() return reply @property def _identifying_params(self) -&gt; Mapping[str, Any]: &quot;&quot;&quot;Get the identifying parameters.&quot;&quot;&quot; return {&quot;llm&quot;: self.llm} wllm = wrapper(llm = llm, log = 'output.log') agent = initialize_agent( tools, wllm, agent = 'conversational-react-description', verbose = True, memory = memory, return_intermediate_steps = True ) response = agent(prompt) </code></pre> <p>My expectation was that I should get the same output. However, it is not the case. Now, in the reply dictionary I see the following output:</p> <pre><code>Thought: Do I need to use a tool? Yes Action: Python Evaluator Action Input: sin(3)/cos(5) Observation: 0.8390075430348345 Observation: 0.4974931989239781 </code></pre> <p>Note, that there is addition line with &quot;Observation&quot;.</p> <p>I guess that the wrapper itself does not alter the output of the wrapped llm object. 
I believe that the <code>agent</code> object treats the output of <code>llm</code> and <code>wllm</code> (the non-wrapped and wrapped objects) differently.</p> <p>So, my question is: what could be the reason for this? How can I make the agent treat llm and wllm (their outputs, to be precise) identically?</p>
<python><oop>
2023-04-20 09:02:53
1
132,218
Roman
76,062,284
9,287,711
replace specific element of Dataframe to value of Series
<p>Suppose I have a DataFrame <code>df</code> like</p> <pre><code> a b c 0 1 2 3 1 nan 2 3 2 1 2 nan </code></pre> <p>I want to replace each element that is larger than the corresponding value of the Series <code>df.count(axis=1)</code> with that value of the Series, so that <code>df</code> is converted to</p> <pre><code> a b c 0 1 2 3 1 nan 2 2 2 1 2 nan </code></pre>
<python><pandas>
2023-04-20 08:57:43
1
559
xxyao
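`df.count(axis=1)` gives the per-row non-NaN counts (here `[3, 2, 2]`); clipping the frame against that Series along `axis=0` caps each row at its own count and matches the expected output. A sketch on the frame from the question:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, np.nan, 1], "b": [2, 2, 2], "c": [3, 3, np.nan]})

counts = df.count(axis=1)               # non-NaN values per row: [3, 2, 2]
result = df.clip(upper=counts, axis=0)  # cap every cell at its row's count
```

`clip` leaves NaN cells untouched, so the missing values in the frame survive as in the expected output.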
76,062,151
14,414,944
Why does env cause Python Popen to hang?
<p><strong>Edit:</strong> I've figured out that this problem is being caused by something to do with env. If I remove env from the below, all works well.</p> <p>I've got a bit of a classic Python <code>Popen</code> problem.</p> <p>Below is the code I'm using to create an async iterator from a <code>Popen</code> call. For the most part, it works. However, sometimes it gets deadlocked; it does this particularly often when run in the same process as a <code>uvicorn</code> server.</p> <pre class="lang-py prettyprint-override"><code>import os import subprocess from threading import Thread from queue import Queue from typing import IO, AsyncIterator, Dict, Iterable, Optional, Tuple from enum import Enum, IntEnum import sys class StdFd(IntEnum): stdout = 0 stderr = 1 stdeof = 2 def read_fd(pipe : IO[bytes], queue : Queue[Tuple[StdFd, bytes]], fd : Optional[StdFd] = StdFd.stdout): try: with pipe: for line in iter(pipe.readline, b&quot;&quot;): queue.put((fd, line)) finally: queue.put((StdFd.stdeof, b&quot;&quot;)) async def call_with_env(cmd : Iterable[str], *, env : Dict[str, str])-&gt;AsyncIterator[Tuple[StdFd, bytes]]: queue : Queue[Tuple[StdFd, bytes]] = Queue() process = subprocess.Popen( cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env, ) Thread(target=read_fd, args=[process.stdout, queue, StdFd.stdout]).start() Thread(target=read_fd, args=[process.stderr, queue, StdFd.stderr]).start() hits = 0 while hits &lt; 2: try: next = queue.get(block=False) if next[0] == StdFd.stdeof: hits += 1 continue yield next except Exception: continue yield (StdFd.stdeof, b&quot;&quot;) </code></pre> <p>I've tried flushing all number of things. That hasn't yet made a difference. What can check next?</p> <p>I also get the same behavior when calling <code>process.communicate()</code>, though this wouldn't produce the desired behavior anyways.</p> <p>The Python version is 3.10.10. I run into the same behavior with <code>asyncio.create_subprocess_exec</code>.</p>
<python><subprocess>
2023-04-20 08:44:00
0
1,011
lmonninger
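A likely cause (an assumption worth ruling out first): `env=` replaces the child's environment *entirely*. A child started without `PATH`, `HOME`, or on Windows `SystemRoot`, can misbehave or stall in hard-to-diagnose ways. Merging the extra variables on top of the parent's environment keeps the essentials:

```python
import os
import subprocess

def run_with_extra_env(cmd, extra_env):
    """Pass env= as the parent environment plus overrides, never a bare dict.

    subprocess replaces the child's environment wholesale when env= is given,
    so a minimal dict silently drops PATH / SystemRoot / locale settings."""
    merged = {**os.environ, **extra_env}
    return subprocess.run(cmd, capture_output=True, text=True, env=merged)
```

In the async iterator this means building `env={**os.environ, **env}` before handing it to `Popen`. If the hang persists with the merged environment, the next suspect is the queue loop itself — the busy `except Exception: continue` spin-waits instead of blocking on `queue.get()`.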
76,062,119
11,333,604
read nifti files 3d slicer vs other methods
<p>I have some nii.gz files which, when I open them using ImageJ, or any Python library such as SimpleITK or Monai, give me a stack of MRI images, as if taken from top to bottom. That being said, when I open the same file with Slicer, I also get stacks of images taken from the side and from the front, the G and Y channels. My question is: do the images from the different angles exist inside the file, or is 3D Slicer inferring how these channels would look based on the R channel? I am asking this because the other methods don't seem to be able to find the other views. Thank you.</p> <p>3D Slicer <a href="https://i.sstatic.net/L6qaT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L6qaT.png" alt="enter image description here" /></a></p> <p>ImageJ</p> <p><a href="https://i.sstatic.net/Iyg5i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Iyg5i.png" alt="enter image description here" /></a></p> <p>Monai / SimpleITK in Google Colab <a href="https://i.sstatic.net/LjX2l.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LjX2l.jpg" alt="enter image description here" /></a></p>
<python><medical-imaging><simpleitk>
2023-04-20 08:40:13
2
303
Iliasp
76,062,034
9,948,248
monkey patch not working? (using it to change a function of a class in external library)
<p>I'm trying to change an external library's function (inside a class). My objective is to change some lines inside that function while I execute my Python program. I've seen some examples and I think I'm doing it correctly, but maybe there are some nuances I don't fully understand that make the code not work. My main Python program is in my home directory and I'm using tianshou as an external library installed via pip inside a conda environment.</p> <p>My main program (<code>~/projects/testing_ground/main.py</code>) is something like this:</p> <pre><code>import testing_ground.monkey from tianshou.data import Collector from tianshou.env import DummyVectorEnv from tianshou.utils import TensorboardLogger def auxiliar_func(): #do_something def main_func(): #do_main_thing if __name__ == '__main__': auxiliar_func() main_func() </code></pre> <p><code>main_func</code> is the main function that calls the class <code>OffpolicyTrainer</code> inside the <code>tianshou</code> external library. That class is a child of the <code>BaseTrainer</code> class, which has the function <code>test_step</code>, the one I want to change. Both the <code>test_step</code> function and the <code>BaseTrainer</code> class are in <code>tianshou.trainer.utils</code>. 
As I've seen in recommendations, I added the <code>monkey</code> python file as import before the main <code>tianshou</code> libraries.</p> <p>This is the monkey patch file's content (<code>~/projects/testing_ground/monkey.py</code>):</p> <pre><code>import tianshou.trainer.utils def my_test_episode(*args, **kwargs): result = tianshou.trainer.utils.test_episode(*args, **kwargs) print(&quot;check comment in execution to know it works&quot;) return result setattr(tianshou.trainer.utils, 'test_episode', my_test_episode) </code></pre> <p>I also tested this other monkey patch (same file with other content):</p> <pre><code>import tianshou.trainer.utils def my_test_episode(*args, **kwargs): result = tianshou.trainer.utils.test_episode(*args, **kwargs) print(&quot;check comment in execution to know it works&quot;) return result tianshou.trainer.utils.test_episode = my_test_episode </code></pre> <p>But I can't seem to make it work and show the <code>print</code>. Any tips or ideas?</p>
<python><monkeypatching>
2023-04-20 08:29:20
0
1,018
Aurelie Navir
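A likely reason the patch is invisible: if the trainer module does `from tianshou.trainer.utils import test_episode` at import time, it holds its own reference to the function. Rebinding the name in `tianshou.trainer.utils` afterwards never touches that copy — the patch has to land in the module that *uses* the name (e.g. `tianshou.trainer.base.test_episode = my_test_episode`; the exact module path is an assumption to verify against the installed version). The mechanism, demonstrated with stub modules:

```python
import types

# two fake modules mimicking `from utils import helper` at import time
utils = types.ModuleType("utils")
utils.helper = lambda: "original"

base = types.ModuleType("base")
base.helper = utils.helper      # the binding `from utils import helper` creates

def patched():
    return "patched"

utils.helper = patched          # rebinds the name in the *defining* module only
defining_only = base.helper()   # still "original" — base kept its own reference

base.helper = patched           # patch where the name is *used*
used_site = base.helper()       # now "patched"
```

The same applies to methods: to change `BaseTrainer.test_step` itself, assign onto the class (`BaseTrainer.test_step = my_test_step`) rather than onto a module.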
76,062,022
12,243,638
Fillna with sum of previous row and the change in another column in dataframe
<p>I have a data frame which has some <code>nan</code> values. I want to fill them by adding the change in the <code>Factor</code> column to the previous row's value. The dataframe looks like this:</p> <pre><code> Value Col Factor 2022-11-30 0.020 84 2022-12-31 0.015 77 2023-01-31 NaN 90 2023-02-28 NaN 44 2023-03-31 NaN 39 </code></pre> <p>To fill <code>df.iloc[2, 0]</code>, I want to sum <code>df.iloc[1,0]</code> and the change in the <code>Factor</code> column (which in that case is 90-77 = 13). The expected output is like this:</p> <pre><code> Value Col Factor 2022-11-30 0.020 84 2022-12-31 0.015 77 2023-01-31 13.015 90 2023-02-28 59.015 44 2023-03-31 64.015 39 </code></pre> <p>I tried it with a for loop and it works, but I could not find a pandas-native way to fill the nans row by row. I used <code>df['Factor'].diff(1)</code> to get the difference of the <code>Factor</code> column, but I do not know how to fill these nans row by row.</p>
<python><pandas><dataframe>
2023-04-20 08:28:00
2
500
EMT
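Note the expected numbers imply the *absolute* change in Factor (59.015 = 13.015 + |44−90|), so the sketch below follows that reading. The loop can be vectorized by placing each NaN row's increment into the series and cumulatively summing within each run of NaNs, anchored at the last valid value:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {"Value": [0.020, 0.015, np.nan, np.nan, np.nan],
     "Factor": [84, 77, 90, 44, 39]},
)

# per-row change in Factor; the expected output uses its absolute value
inc = df["Factor"].diff().abs()

# fill each NaN with its increment, then cumulatively sum inside each
# "last valid value + following NaNs" group (one group per non-NaN row)
filled = df["Value"].fillna(inc)
group = df["Value"].notna().cumsum()
df["Value"] = filled.groupby(group).cumsum()
```

Existing values are each alone at the start of their group, so they pass through unchanged; only the NaN rows accumulate.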
76,061,903
2,405,663
Bokeh chart set x axis label
<p>I'm building a charts page using Bokeh to plot them. I have some line charts where I need to plot the week number of the year on the x-axis.</p> <p>So this is the result I'm able to build: <a href="https://i.sstatic.net/MiuiD.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MiuiD.jpg" alt="enter image description here" /></a></p> <p>As you can see there are some points on the X axis such as 09/2023 and 10/2023 at the last point.</p> <p>Is it possible to display only the points where there are values, like this? <a href="https://i.sstatic.net/0AnNC.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0AnNC.jpg" alt="enter image description here" /></a></p> <p>This is the code I used to build this chart:</p> <pre><code>plotIndicatori = figure(plot_width=int(850), plot_height=int(165), tools="pan,box_zoom,reset", title=nomeIndicatore, x_axis_type="datetime") plotIndicatori.toolbar.logo = None plotIndicatori.y_range.start = 0 plotIndicatori.y_range.end = rangeMax + 5 plotIndicatori.line(x='DateTime', y='Valore', color="navy", alpha=1, source=sourceIndicatori) plotIndicatori.circle(x='DateTime', y='Valore', color="navy", alpha=1, source=sourceIndicatori) # plotSfera.axis.visible = False plotIndicatori.xgrid.grid_line_color = None plotIndicatori.outline_line_color = background plotIndicatori.border_fill_color = background plotIndicatori.background_fill_color = background</code></pre>
<python><charts><bokeh>
2023-04-20 08:11:42
1
2,177
bircastri
76,061,722
9,542,989
Export LangChain Index?
<p>I just created a <code>langchain</code> index in Python by passing in quite a few URLs. I used the <code>UnstructuredURLLoader</code> as my Document Loader.</p> <p>Now, to use <code>UnstructuredURLLoader</code>, I had to install several packages in my environment, including some very large ones like <code>unstructured</code> and <code>chromadb</code>.</p> <p>What I want to know is, when I am deploying my index to run queries on it, is there any way that I can export the index to do so? Sort of like how machine learning models are deployed for inference?</p> <p>There are two main reasons why I want to do this: one is to avoid installing all of the large packages mentioned previously (if possible), and the second is to bypass the lag of reading in the documents from their URLs.</p>
<python><python-3.x><langchain>
2023-04-20 07:52:51
1
2,115
Minura Punchihewa
76,061,672
1,877,002
Detecting the Docker environment in Python
<p>I am running a docker image of Python code in two different environments.</p> <ol> <li>In Windows via cmd <code>docker run --rm --name test -p 9000:8080 estimate_variance</code></li> <li>In Linux via terminal <code>docker run --rm --name test -p 9000:8080 estimate_variance</code></li> </ol> <p>The code itself uses an <code>aws S3 bucket</code>, and for some reason when I am using (1) and authenticate with <code>aws sso login</code>, it gives me a credentials error, so I had to supply <code>aws_access_key_id aws_secret_access_key aws_session_token</code> when I created the client <code>s3 = boto3.client('s3',aws_access_key_id =....,aws_secret_access_key = ..., aws_session_token=..</code></p> <p>while in (2) the authentication works fine.</p> <p>Is there any way to detect within the code which environment I am running in?</p> <p>Something like</p> <pre><code>import boto3 env = detect_env() if env =='linux': s3 = boto3.client('s3') elif env =='win': s3 = boto3.client('s3',aws_access_key_id =....,aws_secret_access_key = ..., aws_session_token=..) else: pass </code></pre>
<python><python-3.x><amazon-web-services><docker><amazon-s3>
2023-04-20 07:45:22
1
2,107
Benny K
76,061,556
7,012,917
How to enforce start and end date tick labels regardless of whether it's a major or minor tick?
<p>I have a timeseries (one of many) ranging between the years 2001 and 2023.</p> <p>This is how it looks like when I plot it: <a href="https://i.sstatic.net/je7Aj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/je7Aj.png" alt="enter image description here" /></a></p> <p>As you can see the major ticks are every 4:th year and the minor ticks are enforced with a <code>YearLocator()</code>.</p> <p>However I would like to display the start and end years (2001 and 2023 respectively) at the ends of the axis so that the reader can immediately (without counting minor ticks) tell which year-range they're looking at.</p> <p>The range is originally [2001, 2023) - meaning that it's from and including 2001-01-01 to, but not including, 2023-01-01. However, I've tried [2001, 2023] as well, but it made no visual difference.</p> <p>The code:</p> <pre><code>fig, ax = plt.subplots(figsize=(15, 6)) ax.set_xlabel('Date') ax.grid(color='lightgrey') ax.plot(df.iloc[:, 0], df.iloc[:, 1], label='MSCI WORLD') ax.legend(loc='upper left') plt.minorticks_on() ax.xaxis.set_minor_locator(YearLocator()) plt.show() </code></pre> <p>I suppose the solution would be, in this case, to set the date tick label on the minor ticks at the ends, but I can't seem to figure out how to achieve that. In addition, the solution needs to work with any date range (in terms of years), and therefore needs to be generic enough.</p>
<python><dataframe><matplotlib>
2023-04-20 07:30:36
1
1,080
Nermin
76,061,287
19,553,193
django.db.utils.IntegrityError: (1215, 'Cannot add foreign key constraint') in Django
<p>It is the first time I have encountered this error. I've implemented this multiple times in my different projects and migrated to the database without problems, but just this one returns an error. When I try to migrate my models.py it says:</p> <pre><code>django.db.utils.IntegrityError: (1215, 'Cannot add foreign key constraint') </code></pre> <p>Now I have a <code>UserDetails</code> class which is linked to the <code>AuthUser</code> of the Django migration. I tried this multiple times in previous projects, but this one has an error, and I don't know the real issue here. Is it my version of Python? MySQL?</p> <pre><code>class AuthUser(models.Model): # comes from the command: python manage.py inspectdb &gt; models.py password = models.CharField(max_length=128) last_login = models.DateTimeField(blank=True, null=True) is_superuser = models.IntegerField() username = models.CharField(unique=True, max_length=150) first_name = models.CharField(max_length=150) last_name = models.CharField(max_length=150) email = models.CharField(max_length=254) is_staff = models.IntegerField() is_active = models.IntegerField() date_joined = models.DateTimeField() class Meta: managed = False db_table = 'auth_user' class UserDetails(models.Model): user_id = models.OneToOneField(AuthUser, on_delete=models.CASCADE) middle_name = models.CharField(max_length=128, blank=True, null=True) birthdate = models.DateField(blank=True, null=True) sex = models.CharField(max_length=128, blank=True, null=True) address = models.CharField(max_length=128, blank=True, null=True) position = models.CharField(max_length=128, blank=True, null=True) updated_at = models.DateTimeField(blank=True, null=True) class Meta: managed = True db_table = 'user_details' </code></pre>
<python><django><model>
2023-04-20 06:49:40
0
335
marivic valdehueza
76,061,004
5,750,238
How to calculate distance to nearest flagged row by date difference in pandas?
<p>I need to calculate the distance to the nearest Christmas by date, not just naively by number of rows. My toy data has the columns: road_id, traffic, date, and is_christmas.</p> <p>I can do something like the following:</p> <pre><code>import pandas as pd import numpy as np # create example dataframe df = pd.DataFrame({ 'traffic': [100, 200, 150, 300, 250, 400, 350, 500], 'date': pd.date_range(start='2021-12-24', periods=8, freq='D'), 'is_christmas': [0, 1, 0, 0, 0, 1, 0, 0] }) # find the nearest christmas day for each row df['nearest_christmas'] = np.nan for i, row in df.iterrows(): if row['is_christmas'] == 1: df.at[i, 'nearest_christmas'] = 0 else: nearest_christmas_index = (df.loc[df['is_christmas'] == 1, 'date'] - row['date']).abs().idxmin() df.at[i, 'nearest_christmas'] = (df.at[nearest_christmas_index, 'date'] - row['date']).days print(df) </code></pre> <p>but this doesn't take into account the different road_ids (which would matter if I did it row-wise), and it also doesn't seem right and seems over-engineered. It is also slow as hell on my real data, which is quite large.</p> <p>My next attempt:</p> <pre><code>import pandas as pd # create example dataframe df = pd.DataFrame({ 'traffic': [100, 200, 150, 300, 250, 400, 350, 500], 'date': pd.date_range(start='2021-12-24', periods=8, freq='D'), 'is_christmas': [0, 1, 0, 0, 0, 1, 0, 0] }) df['nearest_christmas'] = df.groupby('is_christmas')['date'].transform(lambda x: x.diff().abs().dt.days) print(df) </code></pre> <p>This one goes by date... but gives me the wrong values, and I can't figure out how to get it to work with road_id. It may not actually even need to use road_id, but there ARE multiple same-date entries with the is_christmas flag set to 1.</p>
<python><pandas><datetime><group-by>
2023-04-20 05:59:47
1
579
CapnShanty
76,060,853
13,946,204
Is it possible to reference the list being built inside a list comprehension?
<p>Let's say that we have some simple code:</p> <pre class="lang-py prettyprint-override"><code>chars = [&quot;a&quot;, &quot;a&quot;, &quot;b&quot;] added = [] for c in chars: if c in added: added.append(c + '_added_again') else: added.append(c) # added is ['a', 'a_added_again', 'b'] </code></pre> <p>We are referencing <code>added</code> while it is being extended inside the loop.</p> <p>The question is, is it possible to reproduce this behavior with a list comprehension?</p> <p>Something like</p> <pre class="lang-py prettyprint-override"><code>chars = [&quot;a&quot;, &quot;a&quot;, &quot;b&quot;] added = [ c + '_added_again' if c in {something like `self` or `this` here, or maybe lambda can be used?} else c for c in chars ] </code></pre> <p>And the same question about <a href="https://www.geeksforgeeks.org/python-map-function/" rel="nofollow noreferrer"><code>map</code></a>. Is it possible to access a list that is still under construction inside the <code>map</code> function?</p> <pre class="lang-py prettyprint-override"><code>added = list(map(lambda x: x + '_added_again' if ??? else x, chars)) </code></pre> <p><sub>Why am I asking this? Just out of curiosity.</sub></p>
<python>
2023-04-20 05:29:38
2
9,834
rzlvmp
76,060,833
1,576,710
find elements by class name selenium python is not working for me
<p>I want to get web elements with the same class name. I'll use those to take screenshots of elements for my application. I am using Selenium in Python for this purpose.</p> <pre><code>url = &quot;https://www.pexels.com/search/happy/&quot; driver = webdriver.Chrome(executable_path=ChromeDriverManager().install()) driver.get(url) elements = driver.find_elements(By.CLASS_NAME, &quot;MediaCard_image__ljFAl&quot;) print(&quot;**** Start *******&quot;) print(elements.count) for element in elements: print(&quot;1&quot;) print(element.text) print(&quot;**** End *******&quot;) </code></pre> <p>output is:</p> <pre><code>**** Start ******* &lt;built-in method count of list object at 0x00000250202F2000&gt; 1 1 1 1 1 1 1 1 **** End ******* </code></pre> <p>I think <code>element.text</code> is empty, but why? There are many elements with this class name. Can anyone offer some help?</p>
<python><selenium-webdriver><web-scraping><chrome-web-driver>
2023-04-20 05:27:15
1
447
hina abbasi
76,060,632
2,348,503
How to return the result of llama_index's index.query(query, streaming=True)?
<p>I am trying to return the result of llama_index's <code>index.query(query, streaming=True)</code>.</p> <p>But I am not sure how to do it.</p> <p>This one obviously doesn't work:</p> <pre class="lang-py prettyprint-override"><code>index = GPTSimpleVectorIndex.load_from_disk(index_file) return index.query(query, streaming=True) </code></pre> <p>Error message: <code>TypeError: cannot pickle 'generator' object</code>.</p> <p>Neither does this one:</p> <pre class="lang-py prettyprint-override"><code>def stream_chat(query: str, index): for chunk in index.query(query, streaming=True): print(chunk) content = chunk[&quot;response&quot;] if content is not None: yield content # in another function index = GPTSimpleVectorIndex.load_from_disk(index_file) return StreamingResponse(stream_chat(query, index), media_type=&quot;text/html&quot;) </code></pre> <p>Error message: <code>TypeError: 'StreamingResponse' object is not iterable</code>.</p> <p>Thanks!</p>
<python><stream><artificial-intelligence><openai-api><llama-index>
2023-04-20 04:43:12
2
420
Taishi Kato
76,060,592
8,996,209
requests.exceptions.ConnectTimeout error in Azure Cognitive Services Text-to-speech REST API
<p>So, I have been trying process a folder with thousands of text files to convert each one to speech using Azure Cognitive Services Text-to-speech REST API. It works fine until it doesn't. I get errors after several successful conversions. I would like to have a stable connection so I can reliably leave the script running and not have to manually restart each time I get an error.</p> <pre><code>TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='eastus.api.cognitive.microsoft.com', port=443): Max retries exceeded with url: /sts/v1.0/issueToken (Caused by ConnectTimeoutError(&lt;urllib3.connection.HTTPSConnection object at 0x000001F63AF32650&gt;, 'Connection to eastus.api.cognitive.microsoft.com timed out. (connect timeout=None)')) raise ConnectTimeout(e, request=request) requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='eastus.api.cognitive.microsoft.com', port=443): Max retries exceeded with url: /sts/v1.0/issueToken (Caused by ConnectTimeoutError(&lt;urllib3.connection.HTTPSConnection object at 0x000001F63AF32650&gt;, 'Connection to eastus.api.cognitive.microsoft.com timed out. 
(connect timeout=None)')) </code></pre> <p>This is my current script:</p> <pre><code>import os import requests import time import chardet subscription_key = 'here my subscription key' region = 'eastus' voice_name = 'es-MX-DaliaNeural' output_format = 'audio-24khz-96kbitrate-mono-mp3' tts_url = f'https://{region}.tts.speech.microsoft.com/cognitiveservices/v1' headers = { 'Authorization': '', 'Content-Type': 'application/ssml+xml', 'X-Microsoft-OutputFormat': output_format, 'User-Agent': 'YOUR_RESOURCE_NAME' } # looping through all text files in the input folder input_folder = 'C:/path/to/text/files' output_folder = 'C:/path/to/folder' for filename in os.listdir(input_folder): # Check if the file is a text file if filename.endswith('.txt'): # Read the contents of the file and detect the encoding with open(os.path.join(input_folder, filename), 'rb') as f: rawdata = f.read() encoding = chardet.detect(rawdata)['encoding'] text = rawdata.decode(encoding) # creating the SSML body for the TTS request ssml = f'&lt;speak version=&quot;1.0&quot; xmlns=&quot;http://www.w3.org/2001/10/synthesis&quot; xmlns:mstts=&quot;https://www.w3.org/2001/mstts&quot; xml:lang=&quot;es-MX&quot;&gt;&lt;voice name=&quot;{voice_name}&quot;&gt;{text}&lt;/voice&gt;&lt;/speak&gt;' # getting the access token for the TTS service token_url = f'https://{region}.api.cognitive.microsoft.com/sts/v1.0/issueToken' token_headers = {'Ocp-Apim-Subscription-Key': subscription_key} response = requests.post(token_url, headers=token_headers) access_token = response.text headers['Authorization'] = f'Bearer {access_token}' response = requests.post(tts_url, headers=headers, data=ssml.encode('utf-8')) if response.status_code == 200: # save the audio content to a file audio_filename = os.path.splitext(filename)[0] + '.mp3' with open(os.path.join(output_folder, audio_filename), 'wb') as f: f.write(response.content) print(f'Successfully converted &quot;{filename}&quot; to speech') else: print(f'Error converting &quot;{filename}&quot; to speech: {response.content}') time.sleep(30) </code></pre> <p>I leave 30 seconds between each conversion, but it isn't working. It converts 20-30 files and then the errors appear. Any help to get a more stable process?</p> <p>Thanks.</p>
<python><azure><rest><text-to-speech><azure-cognitive-services>
2023-04-20 04:32:34
1
335
eera5607
76,060,546
2,288,659
What allows NaN to work with the Python list inclusion operator?
<p>Pretty much anyone who works with IEEE floating-point values has run into NaN, or &quot;not a number&quot;, at some point. Famously, <a href="https://stackoverflow.com/questions/50124383/what-is-the-intuitive-reason-that-nan-nan">NaN is not equal to itself</a>.</p> <pre><code>&gt;&gt;&gt; x = float('nan') &gt;&gt;&gt; x == x False </code></pre> <p>Now, I had come to terms with this, but there's a strange behavior I'm struggling to wrap my head around. Namely,</p> <pre><code>&gt;&gt;&gt; x in [x] True </code></pre> <p>I had always assumed that <code>list.__contains__</code> was written something like</p> <pre><code>def __contains__(self, element): for x in self: if element == x: return True return False </code></pre> <p>i.e., it used <code>__eq__</code> on the relevant data type internally. And indeed it does. If I define a custom class with an <code>__eq__</code> method of my own design, then I can verify that Python does in fact call <code>__eq__</code> when doing the inclusion check. But then how can there exist a value <code>x</code> (NaN in our case) such that <code>x == x</code> is false but <code>x in [x]</code> is true?</p> <p>We can observe the same behavior with a custom <code>__eq__</code> as well.</p> <pre><code>class Example: def __eq__(self, other): return False x = Example() print(x == x) # False print(x in [x]) # True </code></pre>
<python><nan><equality>
2023-04-20 04:22:55
1
70,851
Silvio Mayolo
76,060,462
5,218,240
Python Pip install another package into site-packages with the same module name existing in that folder
<p>For example, I already have a module named <code>test1</code> in my site-packages folder; <code>test1</code> already took that folder name under <code>site-packages</code>.</p> <p>Now I have another package using the module name <code>test1.subModule1</code>, and I decided to install this new package without destroying the structure of the above one.</p> <p>Can I do that? If yes, I need a solution, maybe in the <code>setup.py</code> of both modules, to tell both modules to share the same top-level module name: merge files, not overwrite. Is there a solution other than <code>namespace</code> packages to achieve this goal?</p> <p>Thanks</p>
<python><pip><setuptools><setup.py><python-packaging>
2023-04-20 03:57:43
0
1,215
cinqS
76,060,288
1,362,485
Error trying to use Dask on Kubernetes with distributed workers
<p>I'm attempting to deploy a dask application on Kubernetes/Azure. I have a Flask application server that is the client of a Dask scheduler/workers.</p> <p>I installed the Dask operator as described <a href="https://kubernetes.dask.org/en/latest/#kubecluster" rel="nofollow noreferrer">here</a>:</p> <pre><code>helm install --repo https://helm.dask.org --create-namespace -n dask-operator --generate-name dask-kubernetes-operator </code></pre> <p>This created the scheduler and worker pods, I have them running on Kubernetes without errors.</p> <p>For the Flask application, I have a Docker image with the following Dockerfile:</p> <pre><code>FROM daskdev/dask RUN apt-get -y install python3-pip RUN pip3 install flask RUN pip3 install gunicorn RUN pip3 install &quot;dask[complete]&quot; RUN pip3 install &quot;dask[distributed]&quot; --upgrade RUN pip3 install &quot;dask-ml[complete]&quot; </code></pre> <p>Whenever I try to run a function in the workers using the <code>Client</code> interface, I get this error in the scheduler pod:</p> <pre><code>TypeError: update_graph() got an unexpected keyword argument 'graph_header' </code></pre> <p>It seems to me that the Dask image used to run Flask and the Dask Kubernetes that I installed are not compatible or aligned?</p> <p>How to create an image that includes Dask for the Flask server that can be integrated with the Dask Kubernetes package?</p> <p>I run in Flask <code>client.get_versions(check=True)</code> and this is what I get:</p> <p>{'scheduler': {'host': {'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}, 'workers': {'tcp://10.244.0.3:40749': {'host': 
{'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}, 'tcp://10.244.0.4:36757': {'host': {'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}, 'tcp://10.244.1.7:40561': {'host': {'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}}, 'client': {'host': {'python': '3.8.16.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.16.final.0', 'dask': '2023.4.0', 'distributed': '2023.4.0', 'msgpack': '1.0.5', 'cloudpickle': '2.2.1', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.23.5', 'pandas': '2.0.0', 'lz4': '4.3.2'}}} @ 2023-04-20 13:33:09.921545&quot;}</p>
<python><kubernetes><dask><dask-distributed><dask-kubernetes>
2023-04-20 03:13:56
1
1,207
ps0604
76,060,220
2,502,791
nested function calls using with statements for authentication
<p>I have a couple of functions that I'm trying to write for Tableau (not that that should matter necessarily). The important bit is that it's hierarchical, so projects contain workbooks, which contain views: project/workbook/view. So I have a function for each: <code>get_project</code>, <code>get_workbook</code>, and <code>get_view</code>. In order to get a view, I first have to get a workbook, and before I get a workbook I first have to get a project.</p> <pre class="lang-py prettyprint-override"><code>def get_view(): # need to get the workbook before I get the view workbook = get_workbook() # can't have this in the with statement or the connection closes with sign_in(): # do some stuff # and get_workbook looks like def get_workbook(): # need to get the project before I can get the workbook project = get_project() with sign_in(): # do some stuff </code></pre> <p>Tableau signs out after the <code>with</code> statement. But I'm wondering if there's a better way to arrange these statements so that I can keep the same connection open instead of opening and closing? <a href="https://github.com/tableau/server-client-python/issues/693#issuecomment-695005306" rel="nofollow noreferrer">This github comment</a> is what got me worried about it.</p> <p>Right now my only thought is to just pass a live_cxn flag, but it's a lot of boilerplate. I don't know if some decorator or something would make this more pythonic. It works, but I don't get any of the benefit of the with statements, mainly that if anything errors out my connection is left open.</p> <pre class="lang-py prettyprint-override"><code>def get_view(live_cxn=False): if not live_cxn: sign_in() workbook = get_workbook(live_cxn=True) # do some stuff if not live_cxn: sign_out() # and get_workbook looks like def get_workbook(live_cxn=False): if not live_cxn: sign_in() project = get_project(live_cxn=True) # do some stuff if not live_cxn: sign_out() </code></pre>
<python><tableau-api>
2023-04-20 03:01:34
1
1,718
Faller
76,060,204
2,998,077
To 'overlay' 2 dataframes with noted differences
<p>I have 2 simple dataframes that I want to 'overlay', i.e., if a cell has a value in df_history but not in df_now, it will be added into the new df_now with a prefix, like:</p> <p><a href="https://i.sstatic.net/eZFqV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eZFqV.png" alt="enter image description here" /></a></p> <p>What I tried is:</p> <ol> <li>Convert the 2 dataframes into separate dictionaries</li> <li>Find out the difference, from df_history to df_now</li> <li>Update the dict_now according to dict_history</li> <li>Write the dict_now to a new dataframe</li> </ol> <p>Here is the code:</p> <pre><code>import pandas as pd from io import StringIO import os, math csvfile_now = StringIO( &quot;&quot;&quot;Name Project_A Project_B Project_C Project_D Mike 2 8 Jane 7 Kate 17 &quot;&quot;&quot;) csvfile_history = StringIO( &quot;&quot;&quot;Name Project_A Project_B Project_C Project_D Mike 7 Jane 8 2 6 Kate 11 12 1 &quot;&quot;&quot;) df_now = pd.read_csv(csvfile_now, sep = '\t', engine='python') df_history = pd.read_csv(csvfile_history, sep = '\t', engine='python') df_now = df_now.set_index('Name') df_history = df_history.set_index('Name') dict_now = df_now.to_dict('index') dict_history = df_history.to_dict('index') new_value_list = [] # as ['Mike|Project_B|7.0', 'Jane|Project_C|2.0', 'Jane|Project_D|6.0', 'Kate|Project_C|12.0', 'Kate|Project_D|1.0'] for k, v in dict_now.items(): for son_key, son_value in v.items(): if math.isnan(son_value) and not math.isnan(dict_history[k][son_key]): new_value_list.append(k + '|' + son_key + '|' + str(dict_history[k][son_key])) for each in new_value_list: mother_key, son_key, value = each.split('|') dict_now.update({dict_now[mother_key][son_key]: '(history) ' + value}) </code></pre> <p>But the dictionary is extended rather than updated:</p> <pre><code>{'Mike': {'Project_A': 2.0, 'Project_B': nan, 'Project_C': nan, 'Project_D': 8.0}, 'Jane': {'Project_A': nan, 'Project_B': 7.0, 'Project_C': nan, 'Project_D': nan}, 
'Kate': {'Project_A': 17.0, 'Project_B': nan, 'Project_C': nan, 'Project_D': nan}, nan: '(history) 7.0', nan: '(history) 2.0', nan: '(history) 6.0', nan: '(history) 12.0', nan: '(history) 1.0'} </code></pre> <p>What's the right way to achieve the goal?</p>
<python><pandas><dataframe>
2023-04-20 02:56:53
1
9,496
Mark K
76,059,948
2,855,071
pip download mirror in requirements.txt or venv
<p>Is there a way to fix the PyPI download mirror in a requirements.txt file? Or somewhere else that gets picked up automatically? I'm based in China, and it's often convenient to use an alternative mirror, as the main one can be slow, e.g.</p> <pre><code>pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple </code></pre> <p>Today this caused me a lot of confusion, as my Python code &amp; requirements.txt produced unexpected and different results on different machines.</p>
<python><pip>
2023-04-20 01:40:18
1
2,535
innisfree
76,059,825
2,780,906
openpyxl workbook.save() corrupts simple xlsx workbook with external link
<p>I have a simple Excel xlsx file with one sheet and one external link which becomes corrupted if I open it and save it with openpyxl.</p> <p>This is a question similar to others (<a href="https://stackoverflow.com/questions/59308064/openpyxl-workbook-save-function-creates-a-corrupt-and-un-openable-excel-xlsx">here</a> and <a href="https://stackoverflow.com/questions/41925006/using-openpyxl-module-to-write-to-spreadsheet-creates-a-damaged-spreadsheet-how">here</a> and elsewhere). However, my example does not contain complicated excel features like charts / images / macros. And my example is not solved by different versions of openpyxl. I have tested these versions of openpyxl:</p> <ul> <li>3.0.3</li> <li>2.4</li> <li>3.1.1</li> <li>3.1.2</li> </ul> <p>I have a simple Excel file with one sheet. It has one formula which is a link to another workbook. The formula is:</p> <p><code>='C:\Temp\temp\[Book2.xlsx]Sheet1'!$A$3</code></p> <p>Now I run the simple program:</p> <pre><code>def main(): f1=&quot;C:/Temp/temp/Book1.xlsx&quot; print(openpyxl.__version__) wb=openpyxl.load_workbook(filename=f1) wb.save(f1) wb.close() </code></pre> <p>After I run this the file becomes corrupted:</p> <p><a href="https://i.sstatic.net/9ZwhN.png" rel="noreferrer"><img src="https://i.sstatic.net/9ZwhN.png" alt="enter image description here" /></a></p> <p>If I allow Excel to attempt to recover the file, the external link appears as: <code>=[RecoveredExternalLink1]Sheet1!$A$3</code></p> <p>I can update this link to the original file and no other corruption appears to have occurred.</p> <p>This problem seems to have happened recently. Similar code I have been using for over a year has started to produce this issue in the last couple of months. My version of Excel (64 bit) is:</p> <p><a href="https://i.sstatic.net/wC95o.png" rel="noreferrer"><img src="https://i.sstatic.net/wC95o.png" alt="enter image description here" /></a></p>
<python><excel><openpyxl>
2023-04-20 00:58:07
1
397
Tim
76,059,773
13,079,519
Gurobi Model search for variable with exact match (or exact match string search)
<p>I am trying to use the following code to search for variables that have &quot;732913342082_MAN-1&quot;:</p> <pre><code>[var for var in m.getVars() if '732913342082_MAN-1' in var.VarName] </code></pre> <p>but it is giving me the following result:</p> <pre><code>[&lt;gurobi.Var v1_Severn_732913342082_MAN-1&gt;, &lt;gurobi.Var v1_Severn_732913342082_MAN-10&gt;, &lt;gurobi.Var v5_27_732913342082_MAN-1&gt;, &lt;gurobi.Var v5_27_732913342082_MAN-10&gt;, &lt;gurobi.Var v0_Windsor Locks_732913342082_MAN-1&gt;, &lt;gurobi.Var v0_Windsor Locks_732913342082_MAN-10&gt;, &lt;gurobi.Var v2_Chesterfield_732913342082_MAN-1&gt;, &lt;gurobi.Var v2_Chesterfield_732913342082_MAN-10&gt;, &lt;gurobi.Var v6_Harrisburg_732913342082_MAN-10&gt;, &lt;gurobi.Var v6_Harrisburg_732913342082_MAN-1&gt;] </code></pre> <p>And what I want is:</p> <pre><code>[&lt;gurobi.Var v1_Severn_732913342082_MAN-1&gt;, &lt;gurobi.Var v5_27_732913342082_MAN-1&gt;, &lt;gurobi.Var v0_Windsor Locks_732913342082_MAN-1&gt;, &lt;gurobi.Var v2_Chesterfield_732913342082_MAN-1&gt;, &lt;gurobi.Var v6_Harrisburg_732913342082_MAN-1&gt;] </code></pre> <p>Basically, it is giving me '732913342082_MAN-10' as well, which I think is because the names look alike. I think this comes down to exact string matching, and I hope someone can help!</p> <p>Thanks!</p>
<python><string><gurobi>
2023-04-20 00:44:26
1
323
DJ-coding
76,059,768
3,719,146
How can I schedule stop/start of a standard App Engine service in GCP?
<p>I was trying to find a way to schedule stopping/starting the latest version of App Engine in Google Cloud, but couldn't find one. I just found the gcloud command for stopping/starting a specific version, but I don't know how I can schedule a gcloud command.</p> <p>We have a manual-scaling standard App Engine service, and I want to stop it every night at a specific time and re-start it again in the morning. What is the best way to do that?</p> <p>My implemented solution is a separate Python Cloud Function for stopping and starting the App Engine service, then scheduling those functions at the specific times.</p> <p><a href="https://cloud.google.com/appengine/docs/standard/python3/runtime#environment_variables" rel="nofollow noreferrer">https://cloud.google.com/appengine/docs/standard/python3/runtime#environment_variables</a></p> <p>Thanks,</p>
<python><google-app-engine><google-cloud-platform><terraform><scheduling>
2023-04-20 00:43:12
2
399
mary
76,059,539
2,941,617
Calculating cumulative sum over non unique list elements in pySpark
<p>I have a PySpark dataframe with a column containing lists. The list items might overlap across rows. I need the cumulative sum of unique list elements down through the rows ordered by 'orderCol' column. In my application there might be millions of rows and hundreds of items in each list. I can't seem to wrap my brain around how to do this in PySpark so that it scales and would be grateful for any ideas big or small on how to solve it.</p> <p>I have posted input and desired output to give an idea of what I'm trying to achieve.</p> <pre><code>from pyspark.sql import SparkSession spark = SparkSession \ .builder \ .appName(&quot;myApp&quot;) \ .getOrCreate() data = [{&quot;node&quot;: 'r1', &quot;items&quot;: ['a','b','c','d'], &quot;orderCol&quot;: 1}, {&quot;node&quot;: 'r2', &quot;items&quot;: ['e','f','g','a'], &quot;orderCol&quot;: 2}, {&quot;node&quot;: 'r3', &quot;items&quot;: ['h','i','g','b'], &quot;orderCol&quot;: 3}, {&quot;node&quot;: 'r4', &quot;items&quot;: ['j','i','f','c'], &quot;orderCol&quot;: 4}, ] df = spark.createDataFrame(data) df.show() data_out = [{&quot;node&quot;: 'r1', &quot;items&quot;: ['a','b','c','d'], &quot;orderCol&quot;: 1, &quot;cumulative_item_count&quot;: 4}, {&quot;node&quot;: 'r2', &quot;items&quot;: ['e','f','g','a'], &quot;orderCol&quot;: 2, &quot;cumulative_item_count&quot;: 7}, {&quot;node&quot;: 'r3', &quot;items&quot;: ['h','i','g','b'], &quot;orderCol&quot;: 3, &quot;cumulative_item_count&quot;: 9}, {&quot;node&quot;: 'r4', &quot;items&quot;: ['j','i','f','c'], &quot;orderCol&quot;: 4, &quot;cumulative_item_count&quot;: 10}, ] df_out = spark.createDataFrame(data_out) df_out.show() </code></pre>
<python><apache-spark><pyspark>
2023-04-19 23:37:13
1
452
UlrikP
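As a note on the last record above (the PySpark cumulative distinct count): the desired output can be sketched with a plain-Python reference implementation. This is not from the original post and is not Spark code — the function name and the `(node, items, order_col)` tuple layout are illustrative assumptions mirroring the question's `node`, `items`, and `orderCol` fields — but it is useful for validating a distributed version against.

```python
def cumulative_unique_counts(rows):
    """Running count of distinct items, with rows processed in order.

    rows: iterable of (node, items, order_col) tuples -- an assumed layout
    mirroring the question's 'node', 'items', 'orderCol' fields.
    Returns a dict mapping node -> cumulative distinct-item count.
    """
    seen = set()  # all distinct items observed so far
    out = {}
    for node, items, _ in sorted(rows, key=lambda r: r[2]):
        seen.update(items)     # add this row's items to the running set
        out[node] = len(seen)  # cumulative distinct count up to this row
    return out


rows = [
    ("r1", ["a", "b", "c", "d"], 1),
    ("r2", ["e", "f", "g", "a"], 2),
    ("r3", ["h", "i", "g", "b"], 3),
    ("r4", ["j", "i", "f", "c"], 4),
]
print(cumulative_unique_counts(rows))  # counts 4, 7, 9, 10 as in the question
```

In PySpark itself, the analogous idea would likely use a window ordered by `orderCol` with `collect_list("items")`, then `size(array_distinct(flatten(...)))`; note that an unbounded ordered window forces a single partition, so for millions of rows this sketch serves only as a correctness reference, not a scalable solution.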