Columns: QuestionId (int64, 74.8M to 79.8M) · UserId (int64, 56 to 29.4M) · QuestionTitle (string, 15 to 150 chars) · QuestionBody (string, 40 to 40.3k chars) · Tags (string, 8 to 101 chars) · CreationDate (string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) · AnswerCount (int64, 0 to 44) · UserExpertiseLevel (int64, 301 to 888k) · UserDisplayName (string, 3 to 30 chars)
75,453,802
6,032,140
Unique string to Python dictionary and then dump as YAML
<ol> <li><p>The following output string is from a specific program and gets dumped in a text file. It more or less looks like a Python dictionary, but isn't. [BTW, this is just a basic example; it may be more complex, with multi-layer dicts.]</p> <pre><code>p_d: '{a:3, what:3.6864e-05, s:&quot;lion&quot;, sst:'{c:-20, b:6, p:&quot;panther&quot;}}
</code></pre> </li> <li><p>First I wanted to convert this string output to a Python dictionary, so that I can then dump it into a YAML output file.</p> </li> <li><p>I tried <code>ast.literal_eval</code>, <code>json.loads</code> and <code>yaml.load</code>, but wasn't successful. Wondering if there is any better custom way to convert this to a Python dictionary.</p> </li> </ol> <p>Update:</p> <ol> <li><p>Tried the approach @Anthon provided and it worked fine for one of the case scenarios. Thanks a lot for that.</p> </li> <li><p>When tried with a slightly more complex scenario, I faced the error below.</p> <pre><code>import sys
import json
import ruamel.yaml

# Case 1
# dit = &quot;{p_d: '{a:3, what:3.6864e-05, s:lion, sst:'{c:-20, b:6, p:panther}}}&quot;

# Case 2
dit = &quot;'{p_d: '{a:3, what:3.6864e-05, s:lion, vec_mode:'{2.5, -2.9, 3.4, 5.6, -8.9, -5.67, 2, 2, 2, 2, 5.4, 2, 2, 6.545, 2, 2}, sst:'{c:-20, b:6, p:panther}}}&quot;
print(dit)

yaml_str = dit.replace('&quot;', '').replace(&quot;'&quot;, '').replace(':', ': ')
print(yaml_str)

print('#### full block style')
yaml = ruamel.yaml.YAML(typ='safe')
# yaml.default_flow_style = False
data = yaml.load(yaml_str)
yaml.dump(data, sys.stdout)

Output:
'{p_d: '{a:3, what:3.6864e-05, s:lion, vec_mode:'{2.5, -2.9, 3.4, 5.6, -8.9, -5.67, 2, 2, 2, 2, 5.4, 2, 2, 6.545, 2, 2}, sst:'{c:-20, b:6, p:panther}}}
{p_d: {a: 3, what: 3.6864e-05, s: lion, vec_mode: {2.5, -2.9, 3.4, 5.6, -8.9, -5.67, 2, 2, 2, 2, 5.4, 2, 2, 6.545, 2, 2}, sst: {c: -20, b: 6, p: panther}}}
#### full block style
Traceback (most recent call last):
  File &quot;./ditoyaml_ruamel.py&quot;, line 24, in &lt;module&gt;
    data = yaml.load(yaml_str)
  File &quot;python3.6/site-packages/ruamel/yaml/main.py&quot;, line 434, in load
    return constructor.get_single_data()
  File &quot;python3.6/site-packages/ruamel/yaml/constructor.py&quot;, line 122, in get_single_data
    return self.construct_document(node)
  File &quot;python3.6/site-packages/ruamel/yaml/constructor.py&quot;, line 132, in construct_document
    for _dummy in generator:
  File &quot;python3.6/site-packages/ruamel/yaml/constructor.py&quot;, line 722, in construct_yaml_map
    value = self.construct_mapping(node)
  File &quot;python3.6/site-packages/ruamel/yaml/constructor.py&quot;, line 446, in construct_mapping
    return BaseConstructor.construct_mapping(self, node, deep=deep)
  File &quot;python3.6/site-packages/ruamel/yaml/constructor.py&quot;, line 264, in construct_mapping
    if self.check_mapping_key(node, key_node, mapping, key, value):
  File &quot;python3.6/site-packages/ruamel/yaml/constructor.py&quot;, line 295, in check_mapping_key
    raise DuplicateKeyError(*args)
ruamel.yaml.constructor.DuplicateKeyError: while constructing a mapping
  in &quot;&lt;unicode string&gt;&quot;, line 1, column 52
found duplicate key &quot;2&quot; with value &quot;None&quot; (original value: &quot;None&quot;)
  in &quot;&lt;unicode string&gt;&quot;, line 1, column 90

To suppress this check see:
    http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys
</code></pre> </li> </ol> <p>Update:</p> <ol> <li><p>Tried to suppress the error as suggested in the link with the code below.</p> <pre><code>yaml.allow_duplicate_keys = True
</code></pre> </li> </ol> <p>Output:</p> <pre><code>'{p_d: '{a:3, what:3.6864e-05, s:lion, vec_mode:'{2.5, -2.9, 3.4, 5.6, -8.9, -5.67, 2, 2, 2, 2, 5.4, 2, 2, 6.545, 2, 2}, sst:'{c:-20, b:6, p:panther}}}
{p_d: {a: 3, what: 3.6864e-05, s: lion, vec_mode: {2.5, -2.9, 3.4, 5.6, -8.9, -5.67, 2, 2, 2, 2, 5.4, 2, 2, 6.545, 2, 2}, sst: {c: -20, b: 6, p: panther}}}
#### full block style
p_d:
  a: 3
  s: lion
  sst:
    b: 6
    c: -20
    p: panther
  vec_mode:
    -8.9: null
    -5.67: null
    -2.9: null
    2: null
    2.5: null
    3.4: null
    5.4: null
    5.6: null
    6.545: null
  what: 3.6864e-05
</code></pre> <p>It added a <code>: null</code> pair to each entry, but I was expecting arrays, something like shown below. Trying further at my end.</p> <pre><code>'{p_d: '{a:3, what:3.6864e-05, s:lion, vec_mode:'{2.5, -2.9, 3.4, 5.6, -8.9, -5.67, 2, 2, 2, 2, 5.4, 2, 2, 6.545, 2, 2}, sst:'{c:-20, b:6, p:panther}}}
{p_d: {a: 3, what: 3.6864e-05, s: lion, vec_mode: {2.5, -2.9, 3.4, 5.6, -8.9, -5.67, 2, 2, 2, 2, 5.4, 2, 2, 6.545, 2, 2}, sst: {c: -20, b: 6, p: panther}}}
#### full block style
p_d:
  a: 3
  s: lion
  sst:
    b: 6
    c: -20
    p: panther
  vec_mode: [-8.9, -5.67, -2.9, 2, 2.5, 3.4, 5.4, 5.6, 6.545]
  what: 3.6864e-05
</code></pre>
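A sketch of one custom conversion, assuming PyYAML is available (ruamel.yaml's safe loader behaves the same): strip the stray quotes, rewrite brace groups that contain no colons as flow sequences (so the numeric lists stop being parsed as mappings with duplicate keys), then hand the result to the YAML parser. The `custom_to_dict` name and the regexes are mine, not from the question.

```python
import re
import yaml  # PyYAML; ruamel.yaml.YAML(typ='safe') works the same way


def custom_to_dict(s: str) -> dict:
    """Convert the tool's pseudo-dict dump into a real Python dict."""
    s = s.replace('"', '').replace("'{", "{")
    # Brace groups with no ':' inside are really sequences -> flow lists
    s = re.sub(r"\{([^{}:]*)\}", r"[\1]", s)
    # YAML flow mappings need a space after each key's colon
    s = re.sub(r":(?=\S)", ": ", s)
    return yaml.safe_load(s)


dit = "'{p_d: '{a:3, what:3.6864e-05, s:lion, vec_mode:'{2.5, -2.9, 3.4}, sst:'{c:-20, b:6, p:panther}}}"
data = custom_to_dict(dit)
print(data["p_d"]["vec_mode"])  # a list now, not a mapping of duplicate keys
```

From there, `yaml.safe_dump(data)` produces block-style output with `vec_mode` as a proper array.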
<python><json><python-3.x><yaml>
2023-02-14 22:23:33
1
1,163
Vimo
75,453,728
603,136
Round-trip HTML using Python xml.etree.ElementTree or lxml.ElementTree
<p>I have code that creates and saves XML fragments. Now I would like to handle HTML as well. ElementTree.write() has a method=&quot;html&quot; parameter that suppresses end tags for &quot;area&quot;, &quot;base&quot;, &quot;basefont&quot;, &quot;br&quot;, &quot;col&quot;, &quot;frame&quot;, &quot;hr&quot;, &quot;img&quot;, &quot;input&quot;, &quot;isindex&quot;, &quot;link&quot;, &quot;meta&quot;, and &quot;param&quot; elements. Unfortunately, reading a file that contains those tags throws an error. I'm reluctant to rewrite a lot of my code to use an HTML-specific package. At this point, I'm seriously considering sub-classing the XMLParser to insert an end tag for those tags. Is there an accepted way to do this?</p> <p>And it just occurred to me that perhaps html.parser.HTMLParser will solve my problems. I will investigate and report back what I find.</p>
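The asymmetry described above can be shown with a minimal stdlib-only sketch: `write(method="html")` happily drops end tags for void elements, but `fromstring` then refuses to read its own output back.

```python
import io
import xml.etree.ElementTree as ET

# Writing: XML input with self-closed void elements serializes as HTML
root = ET.fromstring('<div><br/><img src="x.png"/></div>')
buf = io.BytesIO()
ET.ElementTree(root).write(buf, method="html")
html = buf.getvalue()
print(html)  # <br> and <img> are emitted with no end tags

# Reading that HTML back fails: the XML parser wants every tag closed
try:
    ET.fromstring(html.decode())
    round_tripped = True
except ET.ParseError:
    round_tripped = False
print(round_tripped)
```

This suggests the read side is where the fix belongs, e.g. the `html.parser.HTMLParser` route the question already proposes (feeding events into an `ElementTree.TreeBuilder`), rather than subclassing `XMLParser`.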
<python><html><lxml><elementtree>
2023-02-14 22:13:15
0
2,996
samwyse
75,453,722
1,118,576
What does "domain" mean in Python's tracemalloc?
<p>Python's <a href="https://docs.python.org/3.10/library/tracemalloc.html" rel="nofollow noreferrer">tracemalloc documentation</a> defines &quot;domain&quot; as an &quot;address space of a memory block&quot; and identifies a domain using an integer. It also says:</p> <blockquote> <p>tracemalloc uses the domain 0 to trace memory allocations made by Python. C extensions can use other domains to trace other resources</p> </blockquote> <p>What are these address spaces to which the tracemalloc documentation is referring? As a C extension developer, how can I &quot;use&quot; a particular domain as the above quote suggests, and in what situations should I? As a Python package developer, in what situations should I use a <code>DomainFilter</code> when using tracemalloc?</p> <p>Python's <a href="https://docs.python.org/3.10/c-api/memory.html#allocator-domains" rel="nofollow noreferrer">memory management documentation</a> uses the phrase &quot;allocator domain,&quot; which seems to be unrelated. These domains are categories of allocator functions. They don't seem to have integer identifiers like the domains in the tracemalloc documentation.</p>
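For the `DomainFilter` part of the question, a small sketch: pure-Python allocations land in domain 0, and a filter can restrict a snapshot to that domain. (On the C side, an extension would call `PyTraceMalloc_Track(domain, ptr, size)` with its own nonzero domain to trace other resources; that call is only illustrative here and not exercised by this snippet.)

```python
import tracemalloc

tracemalloc.start()
data = [bytearray(1000) for _ in range(100)]  # Python allocations -> domain 0
snapshot = tracemalloc.take_snapshot()

# Keep only traces from domain 0, the Python object allocator. A C extension
# tracing its own resources would have registered them under another domain.
python_domain = snapshot.filter_traces(
    [tracemalloc.DomainFilter(inclusive=True, domain=0)]
)
stats = python_domain.statistics("lineno")
print(sum(s.size for s in stats))
tracemalloc.stop()
```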
<python><python-3.x><memory-management>
2023-02-14 22:12:01
1
885
Mike Placentra
75,453,656
436,025
Run event loop until all tasks are blocked in python
<p>I am writing code that has some long-running coroutines that interact with each other. These coroutines can be blocked on <code>await</code> until something external happens. I want to be able to drive these coroutines in a unittest. The regular way of doing <code>await</code> on the coroutine doesn't work, because I want to be able to intercept something in the middle of their operation. I would also prefer not to mess with the coroutine internals either, unless there is something generic/reusable that can be done.</p> <p>Ideally I would want to <em>run an event loop until all tasks are currently blocked</em>. This should be fairly easy to tell in an event loop implementation. Once everything is blocked, the event loop yields back control, where I can assert some state about the coroutines, and poke them externally. Then I can resume the loop until it gets blocked again. This would allow for deterministic simulation of tasks in an event loop.</p> <p>Minimal example of the desired API:</p> <pre class="lang-py prettyprint-override"><code>import asyncio
from asyncio import Event


# Imagine this is a complicated &quot;main&quot; with many coroutines.
# But event is some external &quot;mockable&quot; event
# that can be used to drive in unit tests
async def wait_on_event(event: Event):
    print(&quot;Waiting on event&quot;)
    await event.wait()
    print(&quot;Done waiting on event&quot;)


def test_deterministic():
    loop = asyncio.get_event_loop()
    event = Event()
    task = loop.create_task(wait_on_event(event))

    run_until_blocked_or_complete(loop)  # define this magic function
    # Should print &quot;Waiting on event&quot;

    # can make some test assertions here
    event.set()

    run_until_blocked_or_complete(loop)
    # Should print &quot;Done waiting on event&quot;
</code></pre> <p>Anything like that possible? Or would this require writing a custom event loop just for tests?</p> <p>Additionally, I am currently on Python 3.9 (AWS runtime limitation). If it's not possible to do this in 3.9, what version would support this?</p>
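One way to approximate the "magic function" is to step the loop while its ready queue is non-empty. This is a sketch only: it relies on CPython private internals (`loop._ready` and `loop._run_once`), which are not a supported API and can change between versions, so it should be pinned and tested per interpreter.

```python
import asyncio


def run_until_blocked_or_complete(loop):
    # Step the loop while it still has callbacks ready to run; stop once
    # every task is awaiting something external. Relies on CPython
    # internals (_ready and _run_once are private attributes).
    while loop._ready:
        loop._run_once()


async def wait_on_event(event):
    print("Waiting on event")
    await event.wait()
    print("Done waiting on event")


loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
event = asyncio.Event()
task = loop.create_task(wait_on_event(event))

run_until_blocked_or_complete(loop)   # prints "Waiting on event"
assert not task.done()                # blocked on event.wait()

event.set()                           # poke the coroutine externally
run_until_blocked_or_complete(loop)   # prints "Done waiting on event"
assert task.done()
loop.close()
```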
<python><python-asyncio><python-3.9><python-3.10>
2023-02-14 22:00:34
5
4,256
Alexander Kondratskiy
75,453,620
6,679,011
How to extract the binary string from the string
<p>I have a tricky question about how to process a string. I need to process a payload from a link; the payload is an encrypted binary string that looks like this:</p> <pre class="lang-py prettyprint-override"><code>payload=b'F4ChGNL/Pemxy8l6cCR......'
</code></pre> <p>As you can see, it is bytes. However, when I try to fetch the payload through AWS API Gateway and a Lambda function with something like <code>payload=event['queryStringParameters']['payload']</code>, the payload variable is actually of type <code>str</code>, which makes the decryption fail.</p> <pre><code>def lambda_handler(event, context):
    # Decode from base64
    str_64_enc = event['queryStringParameters']['payload']
    str_enc = base64.b64decode(str_64_enc)
    print(str_enc)

    # Decrypt by private key
    pr_key = RSA.importKey(open('private_pem.pem', 'r').read())
    decrypt = PKCS1_OAEP.new(key=pr_key)
    decrypted_message = decrypt.decrypt(str_enc)
    print(decrypted_message)
</code></pre> <p>Please advise what is the right way to process this case. Thanks</p>
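Note that `base64.b64decode` accepts a `str` just fine; the usual failure mode in a query string is URL percent-encoding (`+` arriving as `%2B`, `=` as `%3D`), which must be undone before base64 decoding. A self-contained sketch simulating that round trip (the variable names mirror the Lambda code but the transport step is simulated, not a real API Gateway call):

```python
import base64
from urllib.parse import quote, unquote

secret = b"\x01\xffhello"                 # stand-in for the RSA ciphertext bytes
sent = base64.b64encode(secret).decode()  # client side: bytes -> base64 str

# In the Lambda, event['queryStringParameters']['payload'] is a str and may
# still carry URL percent-encoding, depending on how the client built the URL.
as_received = quote(sent, safe="")        # simulate a percent-encoded query value
str_enc = unquote(as_received)            # undo the URL encoding first
restored = base64.b64decode(str_enc)      # b64decode happily takes a str

assert restored == secret
```

If the client used URL-safe base64 (`-` and `_` instead of `+` and `/`), `base64.urlsafe_b64decode` is the matching decoder.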
<python><pycrypto>
2023-02-14 21:54:55
1
469
Yang L
75,453,563
19,871,699
Why do function attributes (setattr ones) only become available after assigning it as a property to a class and instantiating it?
<p>I apologize if I'm butchering the terminology. I'm trying to understand the code in <a href="https://stackoverflow.com/questions/59651935/how-to-add-custom-method-to-pyspark-dataframe-class-by-inheritance">this example</a> on how to chain a custom function onto a PySpark dataframe. I really want to understand exactly what it's doing, and whether it is bad practice, before I implement anything.</p> <p>From the way I'm understanding the code, it:</p> <ol> <li>defines a function g with sub-functions inside of it, that returns a copy of itself</li> <li>assigns the sub-functions to g as attributes</li> <li>assigns g as a property of the DataFrame class</li> </ol> <p>I don't think at any step in the process do any of them become a method (when I do getattr, it always says &quot;function&quot;).</p> <p>When I run a (as best as I can manage) simplified version of the code (below), it seems like only when I assign the function as a property to a class, and then instantiate at least one copy of the class, do the attributes on the function become available (even outside of the class). <strong>I want to understand what is happening and why.</strong></p> <p>An answer <a href="https://stackoverflow.com/a/17007966/19871699">here</a> indicates that this is <em>a</em> behavior, but doesn't really explain what/why it is. I've read <a href="https://stackoverflow.com/questions/64991519/functions-attributes-when-in-a-class">this too</a> but I'm having trouble seeing the connection to the code above.</p> <p>I read <a href="https://sethdandridge.com/blog/assigning-attributes-to-python-functions" rel="nofollow noreferrer">here</a> about the <code>setattr</code> part of the code. He doesn't mention exactly the use case above. <a href="https://stackoverflow.com/questions/338101/python-function-attributes-uses-and-abuses">This post</a> has some use cases where people do it, but I'm not understanding how it directly applies to the above, unless I've missed something.</p> <p>The confusing part is when the inner attributes become available.</p> <pre><code>class SampleClass():
    def __init__(self):
        pass

def my_custom_attribute(self):
    def inner_function_one():
        pass
    setattr(my_custom_attribute, &quot;inner_function&quot;, inner_function_one)
    return my_custom_attribute

[x for x in dir(my_custom_attribute) if x[0] != &quot;_&quot;]
</code></pre> <p>returns <code>[]</code></p> <p>then when I do:</p> <pre><code>SampleClass.custom_attribute = property(my_custom_attribute)

[x for x in dir(my_custom_attribute) if x[0] != &quot;_&quot;]
</code></pre> <p>it returns <code>[]</code></p> <p>but when I do:</p> <pre><code>class_instance = SampleClass()
class_instance.custom_attribute

[x for x in dir(my_custom_attribute) if x[0] != &quot;_&quot;]
</code></pre> <p>it returns <code>['inner_function']</code></p> <p>In the code above though, if I do <code>SampleClass.custom_attribute = my_custom_attribute</code> instead of <code>=property(...)</code> the <code>[x for x...</code> code still returns <code>[]</code>.</p> <p>edit: I'm not intending to access the function itself outside of the class. I just don't understand the behavior, and don't like implementing something I don't understand.</p>
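The timing puzzle comes down to when the function *body* executes: `setattr` runs only when `my_custom_attribute` is called, and a `property` turns attribute access into exactly such a call. A minimal demonstration of that sequence:

```python
def my_custom_attribute(self=None):
    def inner_function_one():
        pass
    # This line runs only when my_custom_attribute is CALLED,
    # not when the function is defined or assigned somewhere.
    setattr(my_custom_attribute, "inner_function", inner_function_one)
    return my_custom_attribute


# Defining the function never ran its body:
assert not hasattr(my_custom_attribute, "inner_function")


class SampleClass:
    custom_attribute = property(my_custom_attribute)


# Assigning the property changed nothing either -- still no call:
assert not hasattr(my_custom_attribute, "inner_function")

# Accessing the property on an instance invokes the getter, so the
# setattr finally executes and the attribute appears everywhere:
SampleClass().custom_attribute
assert hasattr(my_custom_attribute, "inner_function")
```

With a plain assignment (`SampleClass.custom_attribute = my_custom_attribute`) the attribute access merely returns the function object without calling it, which is why that variant keeps returning `[]`.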
<python><oop><pyspark>
2023-02-14 21:46:35
2
728
jonathan-dufault-kr
75,453,530
3,672,883
Are there any alternative to this overload case in python with typing?
<p>Currently I am implementing a lot of subclasses that should implement a function. This is a minimal example of the current project.</p> <p>In this case, I have a function that needs to call the login of a user, but depending on the implementation injected previously, the application will use <code>UserUP</code> or <code>UserToken</code>.</p> <p>Because in <code>login_user</code> I have all the data available for the two options, I set the params as follows.</p> <pre><code>from typing import Protocol, runtime_checkable


@runtime_checkable
class User(Protocol):
    def login(self, **kwargs) -&gt; None:
        raise NotImplementedError


class UserUP(User):
    def login(self, user: str, password: str, **kwargs) -&gt; None:
        print(user)
        print(password)


class UserToken(User):
    def login(self, token: str, **kwargs) -&gt; None:
        print(token)


def login_user(user: User) -&gt; None:
    user.login(user=&quot;user&quot;, password=&quot;password&quot;, token=&quot;Token&quot;)


login_user(UserUP())
login_user(UserToken())
</code></pre> <p>The problem is that when I run <code>mypy</code> I get errors like the following:</p> <pre><code>app.py:10: error: Signature of &quot;login&quot; incompatible with supertype &quot;User&quot;  [override]
app.py:10: note:      Superclass:
app.py:10: note:          def login(self, **kwargs: Any) -&gt; None
app.py:10: note:      Subclass:
app.py:10: note:          def login(self, user: str, password: str, **kwargs: Any) -&gt; None
app.py:16: error: Signature of &quot;login&quot; incompatible with supertype &quot;User&quot;  [override]
app.py:16: note:      Superclass:
app.py:16: note:          def login(self, **kwargs: Any) -&gt; None
app.py:16: note:      Subclass:
app.py:16: note:          def login(self, token: str, **kwargs: Any) -&gt; None
</code></pre> <p>Of course the signature is incompatible, but what alternatives do I have to implement things like this?</p>
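One mypy-clean alternative is to give every implementation the same signature by bundling all credentials into one object; each `login` picks out the fields it needs. The `Credentials` name is my invention for the sketch:

```python
from dataclasses import dataclass
from typing import Optional, Protocol, runtime_checkable


@dataclass
class Credentials:
    user: Optional[str] = None
    password: Optional[str] = None
    token: Optional[str] = None


@runtime_checkable
class User(Protocol):
    def login(self, creds: Credentials) -> None: ...


class UserUP:
    def login(self, creds: Credentials) -> None:
        print(creds.user, creds.password)


class UserToken:
    def login(self, creds: Credentials) -> None:
        print(creds.token)


def login_user(user: User) -> None:
    # All data is available here, exactly as in the question; each
    # implementation just reads the fields relevant to it.
    user.login(Credentials(user="user", password="password", token="Token"))


login_user(UserUP())
login_user(UserToken())
```

Note the implementations no longer subclass `User`: with a `Protocol`, structural typing makes them compatible automatically, and the override errors disappear because every signature is identical.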
<python><mypy>
2023-02-14 21:41:20
1
5,342
Tlaloc-ES
75,453,502
2,474,876
caching python class instances
<p>I have a memory-heavy class, say a type representing a high-resolution resource (i.e. media, models, data, etc.) that can be instantiated multiple times with identical parameters, such as the same filename of a resource loaded multiple times.</p> <p>I'd like to implement some sort of unbounded <code>caching</code> on object creation to reuse identical instances if they have the same constructor parameter values. I don't care about mutability of one instance affecting the other shared ones. What is the easiest pythonic way to achieve this?</p> <p>Note that neither <code>singletons</code>, <code>object-pools</code>, <code>factory methods</code> nor field <code>properties</code> meet my use case.</p>
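The lightest-weight option is a cached factory function: `functools.cache` keys on the (hashable) arguments and returns the same instance for repeats. The names below are placeholders for the sketch:

```python
import functools


class Resource:
    """Stand-in for a memory-heavy resource class."""
    def __init__(self, filename: str):
        self.filename = filename


@functools.cache  # unbounded; functools.lru_cache(maxsize=N) bounds it
def load_resource(filename: str) -> Resource:
    return Resource(filename)


a = load_resource("model.obj")
b = load_resource("model.obj")
assert a is b                            # identical args -> same instance
assert load_resource("other.obj") is not a
```

If callers must keep writing `Resource(...)` directly, the same effect can be had transparently by overriding `__new__` to consult a class-level dict keyed on the constructor arguments, at the cost of a little more boilerplate.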
<python><caching><singleton><multiple-instances><object-pooling>
2023-02-14 21:37:08
3
417
eliangius
75,453,379
4,085,019
Flask Mutating Request Path
<h2>Problem Statement</h2> <p>When a path contains <code>%25</code>, flask seems to be mutating the incoming path to treat <code>%25</code> as <code>%</code> instead of preserving the original request path. Here are the request and path variables:</p> <ul> <li>Request: <code>GET http://localhost:5000/Files/dir %a/test %25a.txt</code></li> <li>Flask <code>request.base_url</code>: <code>http://localhost:5000/Files/dir%20%25a/test%20%25a.txt</code></li> <li>Debug: <code>127.0.0.1 - - [14/Feb/2023 12:00:49] &quot;GET /Files/dir%20%a/test%20%25a.txt HTTP/1.1&quot; 200 -</code></li> </ul> <p>Specifically, <code>test %25a.txt</code> seems to be encoded as <code>test%20%25a.txt</code> instead of <code>test%20%2525a.txt</code>.</p> <h2>Environment</h2> <ul> <li>Python 3</li> <li>Ubuntu 20.04</li> <li>Flask 2.2.x</li> </ul> <h2>Things Tried</h2> <ul> <li>Looks like others suggested that <code>%25</code> is not allowed to be in url paths (Ref: <a href="https://stackoverflow.com/questions/50540825/in-url-is-replaced-by-25-when-using-queryparams-while-routing-in-angula">In URL `%` is replaced by `%25` when using `queryParams` while routing in Angular</a>).</li> </ul> <h2>Help Needed</h2> <ul> <li>Is <code>%25</code> indeed not allowed to be in the request path ?</li> <li>For apps that allow files to be named with <code>%25</code> what would be a good way to handle this ?</li> </ul>
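The key observation is that the server side decodes the path exactly once, so a client that wants a literal `%25` in a filename must percent-encode the percent sign itself. A stdlib sketch of the expected encoding round trip:

```python
from urllib.parse import quote, unquote

filename = "test %25a.txt"           # the literal on-disk name
encoded = quote(filename, safe="")   # ' ' -> '%20', '%' -> '%25'
assert encoded == "test%20%2525a.txt"

# The WSGI layer decodes exactly once, recovering the original name:
assert unquote(encoded) == filename
```

So `%25` is allowed in a request path; the failing request simply sent the filename under-encoded (`test%20%25a.txt`), which after one decode can only yield `test %a.txt`.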
<python><flask><request><special-characters>
2023-02-14 21:19:11
1
6,049
PseudoAj
75,453,277
3,122,657
staticmethod vs classmethod for factory pattern when using pydantic
<p>Reading most questions on <em>@classmethod vs @staticmethod</em>, replies state that static methods are almost useless in Python except to logically group functions, and that a module can do that work.</p> <p>But I ended up with a factory pattern, using pydantic, where I cannot see how to replace the static method with a class method.<br /> Without pydantic, one would put the logic for object instantiation in the <code>__init__()</code> constructor.<br /> Using pydantic, I ended up creating a factory like <code>build_static()</code> (that will call some logic, <code>init3x()</code> here):</p> <pre><code>from pydantic import BaseModel


class A(BaseModel):
    text: str
    uid: int

    @staticmethod
    def build_static(text, uid):
        a = A(text=text, uid=uid)
        a.init3x()
        a.show()
        return a

    def init3x(self):
        self.text *= 3
        print(self.text)

    @classmethod
    def show(cls):
        print(cls.text, cls.uid)

    @classmethod
    def build(cls, text, uid):
        cls.uid = uid
        cls.text = text
        cls.init3x(cls)
        cls.show()
        return cls
</code></pre> <p>My question is: is <code>build_static()</code> a good pattern, using a static method? If not, what is the right way?</p> <p>Note 1: the class method <code>build()</code> seems a wrong pattern in this case, since there is logic that should only apply to each instance independently (<code>init3x()</code> in my example):</p> <pre><code>a = A.build('test ', 1)
b = A.build('test B ', 2)
c = A.build_static('test C ', 3)
d = A.build_static('test D ', 4)

a.text, b.text, c.text, d.text
&gt; ('test B test B test B ', 'test B test B test B ', 'test C test C test C ', 'test D test D test D ')
</code></pre> <p>Which is not what you want from a factory for instances a and b.</p> <p>Note 2: <code>cls.init3x(cls)</code> looks highly anti-pythonic to me.</p> <p>UPDATE:<br /> It seems a staticmethod factory is presented as the default pattern in this book: <a href="https://python-3-patterns-idioms-test.readthedocs.io/en/latest/Factory.html" rel="nofollow noreferrer">Python3 patterns idioms</a></p>
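The usual resolution is a `@classmethod` that *instantiates* `cls` and mutates the instance, never `cls` itself; the mutation of class attributes is what made `build()` leak state between `a` and `b`. Sketched below with a plain class so it runs standalone; with pydantic the same shape works unchanged, since `cls(text=..., uid=...)` simply goes through `BaseModel`'s constructor:

```python
class A:
    def __init__(self, text: str, uid: int):
        self.text = text
        self.uid = uid

    @classmethod
    def build(cls, text: str, uid: int) -> "A":
        a = cls(text=text, uid=uid)  # make an instance; don't touch cls
        a.init3x()                   # per-instance logic stays per-instance
        return a

    def init3x(self) -> None:
        self.text *= 3


a = A.build("test ", 1)
b = A.build("test B ", 2)
assert a.text == "test test test "
assert b.text == "test B test B test B "
```

`@classmethod` still earns its keep over `@staticmethod` here: a subclass calling `Sub.build(...)` gets a `Sub` instance, because the factory builds via `cls` rather than a hard-coded `A`.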
<python><design-patterns><static><pydantic>
2023-02-14 21:07:03
1
3,374
comte
75,453,197
1,284,415
How to generate RSA private key using Fernet
<p>This should be simple (famous last words).</p> <p>In the terminal, I can run this command:</p> <pre><code>winpty openssl genrsa -des3 -out my_rsa_key_pair 2048
</code></pre> <p>How can I do the exact same thing using <a href="https://cryptography.io/en/latest/" rel="nofollow noreferrer">pyca/cryptography</a>?</p>
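A sketch with pyca/cryptography's `hazmat` layer (Fernet, despite the title, is symmetric and plays no role in generating RSA keys). One caveat: `BestAvailableEncryption` lets the library pick its preferred cipher for the encrypted PEM, which is not necessarily 3DES as `-des3` would force; the passphrase below is a placeholder.

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Rough equivalent of `openssl genrsa -des3 -out my_rsa_key_pair 2048`:
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.TraditionalOpenSSL,  # genrsa-style PEM
    encryption_algorithm=serialization.BestAvailableEncryption(b"passphrase"),
)
print(pem.splitlines()[0])
```

Writing `pem` to a file (`open("my_rsa_key_pair", "wb").write(pem)`) completes the `-out` part.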
<python><cryptography><fernet>
2023-02-14 20:56:08
1
6,008
Dallas Caley
75,453,175
1,818,713
Use list comprehension for expression functions
<p>Let's say I want to make a list of functions, i.e. <code>aggs=['sum','std','mean','min','max']</code>.</p> <p>Then if I have an arbitrary df</p> <pre><code>df = pl.DataFrame({'a': [1, 2, 3], 'b': [2, 3, 4]})
</code></pre> <p>I want to be able to do something like this (which obviously doesn't work):</p> <pre><code>df.with_columns([pl.col('a').x() for x in aggs])
</code></pre> <p>Is there a way to do that? <code>aggs</code> need not be a list of strings; that's just the easiest way to type out my intention for the purpose of this question. Additionally it'd need to have room for <code>.name.suffix()</code>.</p> <p>I know I could have a function that has all the aggs in the function and takes arbitrary dfs as a parameter, which is my backup plan, so I'm hoping for something that resembles the above.</p>
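The mechanism being asked for is `getattr(obj, name)`: it resolves a method *named by a string* to a bound callable, which is exactly what `.x()` in the attempt cannot do. A stdlib-only demo of the pattern (using `statistics` so it runs without polars):

```python
import statistics

# getattr(module_or_object, name) turns a string into the callable itself,
# which is the general trick behind driving methods from a list of names.
data = [1, 2, 3]
aggs = ["mean", "median", "stdev"]
results = {name: getattr(statistics, name)(data) for name in aggs}
print(results)
```

Applied to the question, that becomes something like `df.with_columns(getattr(pl.col('a'), agg)().name.suffix(f'_{agg}') for agg in aggs)` (assuming a polars version with the `.name.suffix` expression namespace; older releases spell it `.suffix`).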
<python><list-comprehension><python-polars>
2023-02-14 20:53:54
1
19,938
Dean MacGregor
75,452,969
7,981,821
Errno 111 Connection refused error, between flask server and raspberry pi over ethernet
<p>I am trying to send data through an HTTP request with the Python <code>requests</code> library from a Raspberry Pi to my local computer, connected by an ethernet cable. When trying to send data from the Raspberry Pi I get a <code>Failed to establish a new connection: [Errno 111] Connection refused'))</code> error. I have attached the full stack trace below.</p> <pre><code>During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File &quot;/home/pi/Documents/PROJECT_NAME/src/client/send_data.py&quot;, line 7, in &lt;module&gt;
    response = requests.request(&quot;GET&quot;, url, headers=headers, data=payload,)
  File &quot;/usr/lib/python3/dist-packages/requests/api.py&quot;, line 60, in request
    return session.request(method=method, url=url, **kwargs)
  File &quot;/usr/lib/python3/dist-packages/requests/sessions.py&quot;, line 533, in request
    resp = self.send(prep, **send_kwargs)
  File &quot;/usr/lib/python3/dist-packages/requests/sessions.py&quot;, line 646, in send
    r = adapter.send(request, **kwargs)
  File &quot;/usr/lib/python3/dist-packages/requests/adapters.py&quot;, line 516, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='0.0.0.0', port=6000): Max retries exceeded with url: / (Caused by NewConnectionError('&lt;urllib3.connection.VerifiedHTTPSConnection object at 0xb5a6b1b0&gt;: Failed to establish a new connection: [Errno 111] Connection refused'))
</code></pre> <p>I was expecting the response to return a simple string, as my route is a simple hello world function. The client file and flask server minimum examples are attached below.</p> <p><code>client.py</code></p> <pre class="lang-py prettyprint-override"><code>import requests

url = &quot;https://0.0.0.0:6000/&quot;

payload = {}
headers = {}
response = requests.request(&quot;GET&quot;, url, headers=headers, data=payload,)

print(response.text)
</code></pre> <p><code>app.py</code></p> <pre class="lang-py prettyprint-override"><code>import flask

app = flask.Flask(__name__)


@app.route(&quot;/&quot;, methods=[&quot;GET&quot;])
def hello():
    return flask.jsonify(&quot;Hello World!&quot;)


if __name__ == &quot;__main__&quot;:
    app.run(host='0.0.0.0', port=6000, debug=True)
</code></pre> <p>I have tried disabling my firewall but this has not fixed the issue. I am also developing on the Pi through ssh using the vscode remote development extension, and there is no issue with the connection on that front. Any help is appreciated!</p> <p>Thank you!</p>
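Two details in `client.py` jump out of the stack trace (`HTTPSConnectionPool(host='0.0.0.0', ...)`): the client speaks `https://` to a dev server that only speaks plain HTTP, and it targets `0.0.0.0`, which is a *bind* address for servers, not a destination. A small sketch of the corrected URL (the address is a hypothetical placeholder for the computer's actual IP on the ethernet link):

```python
from urllib.parse import urlsplit

# Hypothetical address of the machine running Flask, as seen from the Pi:
SERVER_HOST = "192.168.137.1"

# Fix 1: plain http (the Flask dev server does not terminate TLS).
# Fix 2: a real destination address instead of the 0.0.0.0 bind address.
url = f"http://{SERVER_HOST}:6000/"

parts = urlsplit(url)
assert parts.scheme == "http"
assert parts.hostname != "0.0.0.0"
```

With that URL, `requests.get(url)` from the Pi should reach the hello route, provided `app.run(host='0.0.0.0', port=6000)` is listening and nothing filters port 6000.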
<python><python-requests><raspberry-pi><ethernet>
2023-02-14 20:27:11
1
1,033
Josh Zwiebel
75,452,962
9,983,652
how to stop jupyter-dash from running in vscode?
<p>I am using jupyter-dash in VSCode. I have a big iteration; sometimes I'd like to stop it running.</p> <p>I tried a few options in the links below and they never work.</p> <p><a href="https://stackoverflow.com/questions/58230077/vscode-python-interactive-window-how-to-stop-jupyter-server#:%7E:text=You%20can%20stop%20it%20using%20jupyter%2Dnotebook%20stop%20">https://stackoverflow.com/questions/58230077/vscode-python-interactive-window-how-to-stop-jupyter-server#:~:text=You%20can%20stop%20it%20using%20jupyter%2Dnotebook%20stop%20</a>.</p> <p><a href="https://stackoverflow.com/questions/10162707/how-to-close-ipython-notebook-properly">How to close IPython Notebook properly?</a></p> <p>If I use the command below with a port number:</p> <pre><code>jupyter-notebook stop 8064
</code></pre> <p>I get this error:</p> <pre><code>[NbserverStopApp] WARNING | Config option `kernel_spec_manager_class` not recognized by `NbserverStopApp`.
</code></pre>
<python><visual-studio-code><jupyterdash>
2023-02-14 20:26:19
1
4,338
roudan
75,452,958
16,332,690
How to write a DataFrame to csv while ingesting data via an API with asyncio and aiohttp
<p>I built an API wrapper module in Python with <code>aiohttp</code> that allows me to significantly speed up the process of making multiple GET requests and retrieving data. Every data response is turned into a pandas DataFrame.</p> <p>Using <code>asyncio</code> I do something that looks like this:</p> <pre class="lang-py prettyprint-override"><code>import asyncio

from custom_module import CustomAioClient

id_list = [&quot;123&quot;, &quot;456&quot;]


async def main():
    client = CustomAioClient()

    tasks = []
    for id in id_list:
        task = asyncio.ensure_future(client.get_latest_value(id=id))
        tasks.append(task)
    responses = await asyncio.gather(*tasks, return_exceptions=True)

    # Close the session
    await client.close_session()
    return responses


if __name__ == &quot;__main__&quot;:
    asyncio.run(main())
</code></pre> <p>This returns a list of pandas DataFrames with a time series for each id in the <code>id_list</code> that I want to save as csv files. I am a bit confused about how to proceed here.</p> <p>Obviously I could just iterate over the list and save every DataFrame one by one, but this seems highly inefficient to me. Is there a way to improve things here?</p> <p><strong>Edit</strong></p> <p>I did the following to save things and it is much faster than just iterating over multiple URLs, getting the data and saving it. I doubt whether this fully makes use of the asynchronous functionality, though.</p> <pre class="lang-py prettyprint-override"><code>import asyncio

from custom_module import CustomAioClient


async def fetch(client: CustomAioClient, id: str):
    df = await client.get_latest_value(id=id)
    df.to_csv(f&quot;C:/{id}.csv&quot;)
    print(df)


async def main():
    client = CustomAioClient()
    id_list = [&quot;123&quot;, &quot;456&quot;]

    tasks = []
    for id in id_list:
        task = asyncio.ensure_future(fetch(client=client, id=id))
        tasks.append(task)
    responses = await asyncio.gather(*tasks, return_exceptions=True)

    # Close the session
    await client.close_session()


if __name__ == &quot;__main__&quot;:
    loop = asyncio.new_event_loop()
    loop.run_until_complete(main())
</code></pre>
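One remaining bottleneck in the edited version is that `df.to_csv` is blocking disk I/O executed on the event loop thread. Since Python 3.9, `asyncio.to_thread` can push that call into a worker thread so other fetches keep running. A self-contained sketch with a hypothetical stand-in client (real code would call `asyncio.to_thread(df.to_csv, path)` instead of the CSV-writer stand-in):

```python
import asyncio
import csv
import io


class FakeClient:
    """Hypothetical stand-in for CustomAioClient."""
    async def get_latest_value(self, id: str):
        await asyncio.sleep(0)
        return [("timestamp", "close"), ("2023-02-14", id)]


def write_csv(rows, buf) -> None:
    # Blocking work (df.to_csv in the real code) kept off the event loop
    csv.writer(buf).writerows(rows)


async def fetch_and_save(client, id: str, buf) -> None:
    rows = await client.get_latest_value(id=id)
    await asyncio.to_thread(write_csv, rows, buf)  # Python 3.9+


async def main():
    client = FakeClient()
    bufs = {i: io.StringIO() for i in ("123", "456")}
    await asyncio.gather(*(fetch_and_save(client, i, b) for i, b in bufs.items()))
    return bufs


bufs = asyncio.run(main())
```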
<python><pandas><python-asyncio><aiohttp>
2023-02-14 20:25:56
2
308
brokkoo
75,452,941
563,130
In Python how to call Parent class function as if I were Parent object
<p>I have a Parent and a Child class; both should execute their own fct in <code>__init__</code>, but Child has to execute the Parent fct first:</p> <pre><code>class Parent(object):
    def __init__(self):
        self.fct()

    def fct(self):
        # do some basic stuff


class Child(Parent):
    def __init__(self):
        super().__init__()
        self.fct()

    def fct(self):
        # add other stuff
</code></pre> <p>The problem is that <code>super().__init__()</code> calls the Child fct and not the Parent one as I would like. Of course I could rename the Child function fct2, but I was wondering if I can do what I want without changing names (because fct and fct2 do the same thing conceptually speaking, they just apply to different things). It would be nice if I could call <code>super().__init__()</code> <em>as if I were</em> a Parent object.</p>
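The direct fix is to bypass dynamic dispatch inside `Parent.__init__` by naming the class explicitly: `Parent.fct(self)` always runs Parent's version, regardless of overrides. A runnable sketch (the `calls` list just records the order instead of the "stuff" comments):

```python
calls = []


class Parent:
    def __init__(self):
        # Explicit class call: always Parent's fct, even from a Child
        Parent.fct(self)

    def fct(self):
        calls.append("parent")


class Child(Parent):
    def __init__(self):
        super().__init__()  # runs Parent.fct via the line above
        self.fct()          # normal dispatch -> Child.fct

    def fct(self):
        calls.append("child")


Child()
assert calls == ["parent", "child"]
```

The idiomatic variant is name mangling: call `self.__fct()` inside `Parent` and define `def __fct(self)` as a private alias; the double underscore makes Python resolve it per-class automatically, which is exactly the "as if I were a Parent object" behavior.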
<python><class><inheritance>
2023-02-14 20:23:30
1
2,739
Patrick
75,452,926
10,007,302
Error executing cursor.execute when quotes are required
<p>Fairly new to SQL in general. I'm currently trying to bolster my general understanding of how to pass commands via <code>cursor.execute()</code>. I'm currently trying to grab a column from a table and rename it to something different.</p> <pre><code>import mysql.connector

user = 'root'
pw = 'test!*'
host = 'localhost'
db = 'test1'

conn = mysql.connector.connect(user=user, password=pw, host=host, database=db)
cursor = conn.cursor(prepared=True)

new_name = 'Company Name'
query = f'SELECT company_name AS {new_name} from company_directory'
cursor.execute(query)
fetch = cursor.fetchall()
</code></pre> <p>I've also tried it like this:</p> <pre><code>query = 'SELECT company_name AS %s from company_directory'
cursor.execute(query, ('Company Name'),)
fetch = cursor.fetchall()
</code></pre> <p>but that returns the following error:</p> <pre><code>    stmt = self._cmysql.stmt_prepare(statement)
_mysql_connector.MySQLInterfaceError: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '? from company_directory' at line 1
</code></pre> <p>I'm using Python and MySQL. I keep reading about database injection and not using string concatenation, but every time I try to use <code>%s</code> I get an error similar to the one above. I've tried switching to <code>?</code> syntax but I get the same error.</p> <p>Could someone ELI5 what the difference is, what exactly database injection is, and whether what I'm doing in the first attempt qualifies as string concatenation that I should be trying to avoid?</p> <p>Thank you so much!</p>
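The distinction that explains both errors: placeholders (`%s`/`?`) protect *values*, while identifiers (table names, column names, aliases) can never be parameters; identifiers must be quoted into the SQL text instead (backticks in MySQL). Injection is the danger of concatenating *attacker-controlled* strings into SQL, so an f-string alias is only acceptable when `new_name` is trusted or allow-listed. A runnable sketch using sqlite3 so it needs no server (sqlite quotes identifiers with double quotes where MySQL would use backticks):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE company_directory (company_name TEXT)")

# A VALUE goes through a placeholder -- the driver escapes it safely:
conn.execute("INSERT INTO company_directory VALUES (?)", ("Acme",))

# An IDENTIFIER (the alias) cannot be a placeholder; quote it instead.
# MySQL equivalent: f'SELECT company_name AS `{new_name}` FROM ...'
new_name = "Company Name"
cur = conn.execute(
    f'SELECT company_name AS "{new_name}" FROM company_directory'
)
assert cur.description[0][0] == "Company Name"
assert cur.fetchall() == [("Acme",)]
```

As an aside, `('Company Name')` is just a parenthesized string; a one-element tuple needs the comma inside: `('Company Name',)`. That fix alone would not help here, though, because the alias still cannot be a parameter.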
<python><mysql><mysql-connector>
2023-02-14 20:21:52
2
1,281
novawaly
75,452,897
7,254,750
Adding generic typing to function signature with pandas Series results in 'type' object is not subscriptable error
<p><code>Mypy</code> is correctly telling me that the following is missing the generic arg for <code>pd.Series</code>:</p> <pre class="lang-py prettyprint-override"><code>def foo(x: pd.Series) -&gt; None:
    pass
</code></pre> <p>When I add the arg like so</p> <pre class="lang-py prettyprint-override"><code>from typing import Any


def foo(x: pd.Series[Any]) -&gt; None:
    pass
</code></pre> <p>and I try to run my code using <code>Python 3.10.10</code> and <code>python myfile.py</code>, I get the following:</p> <pre><code>    def foo(x: pd.Series[Any]) -&gt; None:
TypeError: 'type' object is not subscriptable
</code></pre> <p>I'm also using <code>pandas-stubs==1.5.3.230203</code> for <code>pandas</code> type stubs.</p> <p><a href="https://stackoverflow.com/questions/54503964/type-hint-for-numpy-ndarray-dtype">A related question here indicates this type hinting should be possible.</a> <a href="https://cs.github.com/bokeh/bokeh/blob/ed4a08a98cc1313d1ab4646ef5828060bbb35c34/src/bokeh/plotting/_plot.py#L75" rel="nofollow noreferrer">I've also found code examples here</a></p>
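The stubs make `pd.Series[Any]` valid *to the type checker*, but at runtime the real `pd.Series` class is not subscriptable. The standard workaround is PEP 563's `from __future__ import annotations`, which stores annotations as strings so they are never evaluated. Demonstrated with a stand-in class so the sketch runs without pandas:

```python
from __future__ import annotations  # PEP 563: annotations stay unevaluated
from typing import Any


class NotSubscriptable:
    """Stands in for pd.Series, which isn't subscriptable at runtime."""


def foo(x: NotSubscriptable[Any]) -> None:  # no TypeError: never evaluated
    pass


foo(NotSubscriptable())
print(foo.__annotations__["x"])  # stored as a plain string
```

Alternatives with the same effect: quote the single annotation (`def foo(x: "pd.Series[Any]") -> None`), or keep stub-only types behind `if TYPE_CHECKING:`.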
<python><pandas><mypy><python-typing>
2023-02-14 20:18:35
1
341
MilesConn
75,452,801
5,858,752
Is abc.ABC available in python 2.7.16?
<p>Based on <a href="https://docs.python.org/2/library/abc.html" rel="nofollow noreferrer">https://docs.python.org/2/library/abc.html</a>, it seems abstract classes are available starting in version 2.6 (the top says &quot;New in version 2.6.&quot;)</p> <p>However, I cannot <code>from abc import ABC, abstractmethod</code> using python 2.7.16 as it produces the error:<code>ImportError: cannot import name ABC</code> but I can with python3+. So is <code>abc</code> not available for python 2.X?</p>
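For reference, what "New in version 2.6" covers is the `abc` module itself — `abc.ABCMeta` and `abc.abstractmethod`. The `abc.ABC` convenience class was only added in Python 3.4, which is why the import fails on 2.7. A sketch of the 2/3-compatible spelling, creating the base through the metaclass:

```python
import abc

# abc.ABC does not exist before Python 3.4; building the base class via the
# metaclass works on both Python 2.6+ and Python 3:
ABC = abc.ABCMeta('ABC', (object,), {'__slots__': ()})


class Animal(ABC):
    @abc.abstractmethod
    def speak(self):
        raise NotImplementedError


class Dog(Animal):
    def speak(self):
        return "woof"


try:
    Animal()  # unimplemented abstract method -> TypeError
except TypeError:
    instantiable = False
else:
    instantiable = True
```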
<python><python-2.7><abc>
2023-02-14 20:07:37
0
699
h8n2
75,452,643
10,904,328
z3 fails to solve problem with square root
<p>I'm trying to solve simple problem which involves calculating square root, yet for some reason z3 throws an error like <code>failed to solve</code> or <code>z3types.Z3Exception: model is not available</code></p> <pre><code>from z3 import * x = Int('x') y = Int('y') solve(x &gt; 0, y &gt; x, y ** 0.5 == x) </code></pre> <hr /> <pre><code>from z3 import * x = Int('x') y = Int('y') s = Solver() s.add(x &gt; 0) s.add(y &gt; x) s.add(y ** 0.5 == x) print(s.check()) print(s.model()) </code></pre> <p>What I'm doing wrong?</p>
<python><z3><z3py>
2023-02-14 19:49:53
1
813
tezzly
75,452,550
18,384,775
Best way to merge multiple csv files starting with different timestamps using pandas concat
<h4>The dataset:</h4> <p>I have a collection csv files inside a folder with each csv file with title: <code>timestamp</code> and <code>close</code>. Each file is saved as <code>{symbol}.csv</code> where symbols range from a list of symbols eg: <code>['ADAUSDT', 'MAGICUSDT', 'LUNCUSDT', 'LINAUSDT', 'LEVERUSDT', 'BUSDUSDT', BTSUSDT, ALGOUSDT]</code>.... In reality I have over 100+ symbols</p> <p>Here's the link to <a href="https://drive.google.com/drive/folders/1U5FdC-sSQqWJHgj8GpKHtHV5glPlHim9?usp=sharing" rel="nofollow noreferrer">sample csv files</a> incase you need them</p> <h4>What I would like to do:</h4> <p>I want to merge all the close prices inside these files into one data frame using pd.concat without losing much data. Most of the files start at a similar date, but some of them don't have much data back to 1 year (eg: LUNCUSDT) In those cases I want to find a way in which I can either drop those files and merge the rest depending on whether the rest of the dates all come within a close range.</p> <p>If that is complicated maybe I would like to try to arrange them all together based on the most recent data. However, all the DateTime stamps in the last rows are also not in the same range.</p> <p>I would appreciate any help on how I can approach this logic. Thanks in advance.</p> <h4>Here's my attempt:</h4> <pre><code>symbols = pd.read_csv('symbols.csv') symbols = symbols.symbols.to_list() merged_df = pd.DataFrame() for symbol in symbols: df = pd.read_csv(f&quot;OHLC/5m/{symbol}.csv&quot;, usecols=[0,4]) df = df.rename(columns={'close': symbol}) #df['timestamp'] = pd.to_datetime(df['timestamp'], unit='ms') merged_df = pd.concat([merged_df, df], axis=1) merged_df </code></pre> <p>This unfortunately prints an uneven dataframe, which also has repeating timestamp column. 
Therefore I do not know how I could figure out the latest and earliest time in the row:</p> <pre><code> timestamp ADAUSDT timestamp XRPUSDT timestamp XLMUSDT timestamp TRXUSDT timestamp VETUSDT ... timestamp LEVERUSDT timestamp STGUSDT timestamp LUNCUSDT timestamp HFTUSDT timestamp MAGICUSDT 0 2022-02-14 17:35:00 1.048 2022-02-14 17:35:00 0.7989 2022-02-14 17:35:00 0.2112 2022-02-14 17:35:00 0.06484 2022-02-14 17:35:00 0.05662 ... 2022-07-13 04:00:00 0.001252 2022-08-19 09:00:00 0.4667 2022-09-09 08:00:00 0.000529 2022-11-07 13:00:00 3.6009 2022-12-12 08:00:00 0.7873 1 2022-02-14 17:40:00 1.047 2022-02-14 17:40:00 0.7986 2022-02-14 17:40:00 0.2111 2022-02-14 17:40:00 0.06482 2022-02-14 17:40:00 0.05665 ... 2022-07-13 04:05:00 0.001249 2022-08-19 09:05:00 0.5257 2022-09-09 08:05:00 0.000522 2022-11-07 13:05:00 2.9160 2022-12-12 08:05:00 0.8116 2 2022-02-14 17:45:00 1.048 2022-02-14 17:45:00 0.7981 2022-02-14 17:45:00 0.2111 2022-02-14 17:45:00 0.06488 2022-02-14 17:45:00 0.05668 ... 2022-07-13 04:10:00 0.001320 2022-08-19 09:10:00 0.5100 2022-09-09 08:10:00 0.000517 2022-11-07 13:10:00 2.6169 2022-12-12 08:10:00 0.8064 3 2022-02-14 17:50:00 1.047 2022-02-14 17:50:00 0.7980 2022-02-14 17:50:00 0.2109 2022-02-14 17:50:00 0.06477 2022-02-14 17:50:00 0.05658 ... 2022-07-13 04:15:00 0.001417 2022-08-19 09:15:00 0.5341 2022-09-09 08:15:00 0.000520 2022-11-07 13:15:00 2.4513 2022-12-12 08:15:00 0.8035 4 2022-02-14 17:55:00 1.047 2022-02-14 17:55:00 0.7969 2022-02-14 17:55:00 0.2108 2022-02-14 17:55:00 0.06474 2022-02-14 17:55:00 0.05656 ... 2022-07-13 04:20:00 0.001400 2022-08-19 09:20:00 0.6345 2022-09-09 08:20:00 0.000527 2022-11-07 13:20:00 2.5170 2022-12-12 08:20:00 0.8550 ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... 105123 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 105124 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... 
NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 105125 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 105126 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 105127 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN </code></pre> <p>Any help would be appreciated. Thankyou!</p>
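One approach worth sketching: give every symbol's close prices the timestamp as the index, and let `pd.concat` align on that index instead of stacking columns positionally. A single `DatetimeIndex` then serves all symbols; the default outer join keeps everything (with `NaN` where a young symbol like LUNCUSDT has no data yet), while `join="inner"` keeps only the range common to all files. Illustrated with tiny in-memory frames standing in for the CSVs:

```python
import pandas as pd

# Stand-ins for pd.read_csv(f"OHLC/5m/{symbol}.csv", usecols=[0, 4]):
frames = {
    "ADAUSDT": pd.DataFrame({"timestamp": ["2022-02-14 17:35", "2022-02-14 17:40"],
                             "close": [1.048, 1.047]}),
    "LUNCUSDT": pd.DataFrame({"timestamp": ["2022-02-14 17:40"],
                              "close": [0.000529]}),
}

series = []
for symbol, df in frames.items():
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    # one Series per symbol, indexed by timestamp and named after the symbol
    series.append(df.set_index("timestamp")["close"].rename(symbol))

merged = pd.concat(series, axis=1)                 # union of all timestamps
overlap = pd.concat(series, axis=1, join="inner")  # only the shared range
```

From `merged` you can then decide which symbols to drop, e.g. by comparing `merged[col].first_valid_index()` against the range you need.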
<python><pandas><dataframe><csv>
2023-02-14 19:39:11
1
318
Royce Anton Jose
75,452,493
3,763,616
Split a dataframe into n dataframes by column value in polars
<p>I have a large Polars dataframe that I'd like to split into n number of dataframes given the size. Like take dataframe and split it into 2 or 3 or 5 dataframes.</p> <p>There are several observations that will show up for each column and would like to choose splitting into a chosen number of dataframes. A simple example is like the following where I am splitting on a specific id, but would like to have similar behave, but more like split into 2 approximately even dataframes since the full example has a large number of identifiers.</p> <pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({'Identifier': [1234,1234, 2345,2345], 'DateColumn': ['2022-02-13','2022-02-14', '2022-02-13', '2022-02-14'] }) df2 = df.with_columns( pl.col('DateColumn').str.to_date() ) </code></pre> <pre class="lang-py prettyprint-override"><code># df ┌────────────┬────────────┐ │ Identifier ┆ DateColumn │ │ --- ┆ --- │ │ i64 ┆ str │ ╞════════════╪════════════╡ │ 1234 ┆ 2022-02-13 │ │ 1234 ┆ 2022-02-14 │ │ 2345 ┆ 2022-02-13 │ │ 2345 ┆ 2022-02-14 │ └────────────┴────────────┘ </code></pre> <pre class="lang-py prettyprint-override"><code>df1 = df.filter( pl.col('Identifier')==1234 ) df2 = df.filter( pl.col('Identifier')==2345 ) </code></pre> <pre><code># df1 ┌────────────┬────────────┐ │ Identifier ┆ DateColumn │ │ --- ┆ --- │ │ i64 ┆ str │ ╞════════════╪════════════╡ │ 1234 ┆ 2022-02-13 │ │ 1234 ┆ 2022-02-14 │ └────────────┴────────────┘ # df2 ┌────────────┬────────────┐ │ Identifier ┆ DateColumn │ │ --- ┆ --- │ │ i64 ┆ str │ ╞════════════╪════════════╡ │ 2345 ┆ 2022-02-13 │ │ 2345 ┆ 2022-02-14 │ └────────────┴────────────┘ </code></pre>
<python><python-polars>
2023-02-14 19:31:50
1
489
Drthm1456
75,452,354
12,082,289
Paramiko RSAKey "private key file is encrypted"
<p>I'm trying to use Paramiko to connect to an SFTP site.</p> <pre><code>&quot;paramiko&quot;: { &quot;hashes&quot;: [ &quot;sha256:6bef55b882c9d130f8015b9a26f4bd93f710e90fe7478b9dcc810304e79b3cd8&quot;, &quot;sha256:fedc9b1dd43bc1d45f67f1ceca10bc336605427a46dcdf8dec6bfea3edf57965&quot; ], &quot;index&quot;: &quot;pypi&quot;, &quot;version&quot;: &quot;==3.0.0&quot; }, </code></pre> <p>I have a .pem file in the form</p> <pre><code>-----BEGIN OPENSSH PRIVATE KEY----- data for the key -----END OPENSSH PRIVATE KEY----- </code></pre> <p>Worth mentioning that the key is encrypted with a passphrase.</p> <p>I attempt to load the key file, providing the password and that works fine</p> <pre><code># Works great :) mykey = paramiko.RSAKey.from_private_key_file(key_file_path, password=password) self.ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) # this explodes self.ssh_client.connect(hostname=settings.ftp_host, username=username, pkey=mykey, port=22) self.ftp = self.ssh_client.open_sftp() </code></pre> <blockquote> <p>paramiko.ssh_exception.PasswordRequiredException: private key file is encrypted</p> </blockquote> <p>If I change the connect to</p> <pre><code>self.ssh_client.connect(hostname=settings.ftp_host, username=username, pkey=mykey, port=22, passphrase=password) </code></pre> <blockquote> <p>paramiko.ssh_exception.SSHException: OpenSSH private key file checkints do not match</p> </blockquote> <p>And if I try to use <code>key_filename</code> instead of <code>pkey</code></p> <pre><code>self.ssh_client.connect(hostname=settings.ftp_host, username=username, key_filename=key_file_path, port=22, passphrase=password) </code></pre> <blockquote> <p>ValueError: q must be exactly 160, 224, or 256 bits long</p> </blockquote> <p>I am able to successfully connect to the SFTP with this key using FileZilla I'm just not sure what I'm doing wrong in Paramiko.</p>
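One hedged workaround, assuming the key really is RSA: the `BEGIN OPENSSH PRIVATE KEY` header is OpenSSH's newer container format, and such a file need not hold an RSA key at all (Ed25519 is common; in that case `paramiko.Ed25519Key.from_private_key_file` would be the matching loader). If it is RSA, re-encoding it into the classic PEM layout that `RSAKey` parses reliably often resolves these errors — demonstrated here on a throwaway key; substitute your real path and passphrase:

```shell
# Generate a disposable passphrase-protected RSA key (modern ssh-keygen
# writes the new OPENSSH container format by default):
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N 'passphrase' -f "$keydir/demo_key"
head -n 1 "$keydir/demo_key"   # -----BEGIN OPENSSH PRIVATE KEY-----

# -p re-keys in place; -m PEM rewrites it in the classic PEM layout:
ssh-keygen -p -m PEM -f "$keydir/demo_key" -P 'passphrase' -N 'passphrase'
head -n 1 "$keydir/demo_key"   # -----BEGIN RSA PRIVATE KEY-----
```

After conversion, `paramiko.RSAKey.from_private_key_file(path, password=...)` plus `pkey=mykey` in `connect()` is typically all that's needed.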
<python><ssh><ftp><sftp><paramiko>
2023-02-14 19:17:23
1
565
Jeremy Farmer
75,452,155
8,941,248
xlabel of DataFrame.plot appears on the y-axis
<p>I'm creating a stacked bar plot via the code below:</p> <pre><code>ax = df.plot( kind=&quot;barh&quot;, stacked=True, width=0.9, figsize=(10, 11), colormap=&quot;RdYlGn&quot;, xlabel='% Respondents' ); </code></pre> <p>But the label for the x-axis appears on the y-axis! If I change <code>xlabel</code> to <code>ylabel</code>, the text on the y-axis does not move.</p> <p>How can I get a label on the x-axis?</p>
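A sketch of the usual sidestep: in some pandas versions the `xlabel`/`ylabel` keywords are applied to the flipped axes for horizontal plots, so labeling the returned `Axes` directly is unambiguous regardless of orientation or version:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; not needed in a notebook
import pandas as pd

# Small stand-in for the survey data:
df = pd.DataFrame({"Agree": [30, 50], "Disagree": [70, 50]},
                  index=["Q1", "Q2"])

ax = df.plot(kind="barh", stacked=True, width=0.9, colormap="RdYlGn")
ax.set_xlabel("% Respondents")  # set on the Axes, so no barh keyword swap
ax.set_ylabel("")
```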
<python><pandas><matplotlib>
2023-02-14 18:53:52
1
521
mdrishan
75,452,080
9,983,652
how to print at same line using end parameter?
<p>I'd like to print on the same line from a print statement inside a for loop, using the end= parameter; I'm not sure which end value to use.</p> <p>For example, below, each print only needs to change str(i) and str(result); everything else stays the same.</p> <pre><code>for i in range(10): result=i**2 print('iteration is'+str(i)+' with result of '+str(result)) </code></pre> <p>Thanks</p>
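A minimal sketch: `end=""` makes consecutive prints share a line, while `end="\r"` returns the cursor to the start of the line so each print overwrites the previous one — usually what's wanted for a progress readout:

```python
# "\r" (carriage return) rewinds to the start of the line without advancing,
# so every iteration's output replaces the last one's:
for i in range(10):
    result = i ** 2
    print('iteration is ' + str(i) + ' with result of ' + str(result), end='\r')
print()  # final newline so the next prompt starts on a fresh line
```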
<python>
2023-02-14 18:47:26
1
4,338
roudan
75,452,067
7,531,433
How can I set a seed for random array creation in dask?
<p>In dask we can create a random array for example like this.</p> <pre class="lang-py prettyprint-override"><code>import dask.array as da rand = da.random.random((100, 100)) </code></pre> <p>For reproducibility I would like to set a random seed, such that <code>rand</code> will always have the same content. How can I do this?</p>
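A hedged sketch: a seeded `da.random.RandomState` plays the role of the seed — two generators built from the same seed (and used with the same shape and chunking) produce identical arrays:

```python
import dask.array as da

# Same seed, same shape, same chunks -> bit-identical contents:
a = da.random.RandomState(42).random_sample((100, 100), chunks=(50, 50))
b = da.random.RandomState(42).random_sample((100, 100), chunks=(50, 50))

assert bool((a == b).all().compute())
```

Note that chunking matters: the same seed with different chunk sizes can yield different values, so pin `chunks` explicitly when reproducibility is the goal.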
<python><dask>
2023-02-14 18:46:03
1
709
tierriminator
75,451,880
422,005
Convert string with "_" to int?
<p>I have a function which takes a string input, tries to convert it to integer and then proceeds with two alternative paths depending on whether the conversion succeeded or not:</p> <pre><code>def make_int(arg): try: int_value = int(arg) except ValueError: str_value = arg </code></pre> <p>I now was quite surprised when the string <code>'123_2307_7'</code> was happily converted to the integer <code>12323077</code> - whereas I was expecting it to follow the <code>str</code> path here. What details of <code>str -&gt; int</code> conversion is it I have not yet grokked?</p>
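For context: since Python 3.6 (PEP 515), underscores are legal digit separators both in numeric literals and in strings passed to `int()`, which is why the conversion "succeeds". A sketch of one way to reject them (note `str.isdigit` also rejects signs, decimals, and whitespace, so adapt the check to taste):

```python
# PEP 515: underscores may separate digit groups in numeric literals and
# in strings given to int()/float():
assert int("123_2307_7") == 12323077
assert 1_000_000 == 1000000

def make_int(arg):
    """Convert only 'plain' digit strings; pass everything else through."""
    if arg.isdigit():  # '_' is not a digit, so '123_2307_7' stays a str
        return int(arg)
    return arg
```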
<python>
2023-02-14 18:27:52
2
2,081
user422005
75,451,668
6,804,439
Fixed ylabel space (aligned y-axis) across multiple figures
<p>I'm using code much like:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt labels = ['AAA', 'BBBBBBBBBBBBB', 'CCCCCC', 'DDDDDDDDDD'] values = [0, 2, 2, 5] fig, ax = plt.subplots(figsize=(8, 0.07 + 0.25 * len(values))) bars = ax.barh(labels, values, color=colors) </code></pre> <p>to generate horizontal bar plots as separate figures, one after another:</p> <p><a href="https://i.sstatic.net/kZ49L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kZ49L.png" alt="barh" /></a></p> <p>How can I make the left spines (i.e. the black bars) align when the width of labels varies between plots? (Aside from just aligning the rendered images to the right.)</p> <p>I think the <em>left margin/padding/space should be fixed</em>, or the bar width should be fixed, but I can't quite figure <em>how</em> to do it.</p>
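A hedged sketch of the fixed-margin idea: reserving the same fraction of every figure's width for tick labels pins the left spine to the same position regardless of how long the labels are:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; not needed interactively
import matplotlib.pyplot as plt

figs = []
for labels, values in [(['AAA', 'BB'], [1, 2]),
                       (['BBBBBBBBBBBBB', 'CCCCCC'], [3, 4])]:
    fig, ax = plt.subplots(figsize=(8, 1.5))
    ax.barh(labels, values)
    # Reserve a fixed 30% of the figure width for the labels in every
    # figure, so the left spine always sits at the same x position:
    fig.subplots_adjust(left=0.3)
    figs.append(fig)
```

Long labels that overflow the reserved 30% get clipped rather than shifting the spine, so pick a fraction that fits your longest expected label.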
<python><matplotlib><visualization>
2023-02-14 18:09:15
1
320
Minty
75,451,613
496,289
openpyxl and MS Excel: Problem with content, recover as much as we can
<p>I have an excel with a named/excel-table (<code>B22:O25</code>) in it with header row + 3 rows of empty data-cells. I make a copy of this file, then open the copy using <code>openpyxl.open()</code>, insert 17 more rows (<code>ws.insert_rows(idx=23, amount=17)</code>), update <code>table.ref</code> (+copy formats, update autofilters etc), write a dataframe into it (<code>dataframe_to_rows()</code>).</p> <p>When I save it and open it using MS Excel (Professional Plus 2016), Excel says: <em>We found some problem with some content in 'my file'. Do you want us to try to recover as much as we can? If you trust the source of this workbook, click Yes.</em></p> <p><a href="https://i.sstatic.net/yhokE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yhokE.png" alt="enter image description here" /></a></p> <p>On clicking yes, it shows:</p> <p><a href="https://i.sstatic.net/5EUgc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5EUgc.png" alt="enter image description here" /></a></p> <p>Linked log file (<code>error0340...xml</code>) has same info: <code>&lt;repairedRecord&gt;Repaired Records: Table from /xl/tables/table1.xml part (Table)&lt;/repairedRecord&gt;</code> and nothing else useful.</p> <p>How do I debug this? How do I find out what was wrong/messed-up in my xlsx file? Question is more generic, where do I find some debug info from Excel?</p> <p>I can post my code and example xlsx files if it helps.</p> <hr /> <p>edit: after using Open XML SDK Tools to compare the xlsx file with error and the one after being repaired by MS Excel. 
Ignoring the <code>&lt;xml&gt;</code> tag, only diff is missing <code>headerRowCount=&quot;1&quot;</code> in repaired xlsx's <code>/xl/tables/table1.xml</code></p> <pre><code>- - &lt;table id=&quot;1&quot; name=&quot;hit_ratio&quot; displayName=&quot;hit_ratio&quot; ref=&quot;B22:O42&quot; headerRowCount=&quot;1&quot; totalsRowShown=&quot;0&quot; headerRowDxfId=&quot;17&quot; headerRowBorderDxfId=&quot;16&quot; tableBorderDxfId=&quot;15&quot; totalsRowBorderDxfId=&quot;14&quot; xmlns=&quot;http://schemas.openxmlformats.org/spreadsheetml/2006/main&quot;&gt; + &lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot; standalone=&quot;yes&quot; ?&gt; + &lt;table id=&quot;1&quot; name=&quot;hit_ratio&quot; displayName=&quot;hit_ratio&quot; ref=&quot;B22:O42&quot; totalsRowShown=&quot;0&quot; headerRowDxfId=&quot;17&quot; headerRowBorderDxfId=&quot;16&quot; tableBorderDxfId=&quot;15&quot; totalsRowBorderDxfId=&quot;14&quot; xmlns=&quot;http://schemas.openxmlformats.org/spreadsheetml/2006/main&quot;&gt; </code></pre>
<python><excel><ms-office><openpyxl>
2023-02-14 18:03:44
1
17,945
Kashyap
75,451,402
5,449,789
Addressing Saddle Points in Keras Model Training
<p>My keras model seems to have hit a saddle point in its training. Of course this is just an assumption; I'm not really sure. In any case, the loss stops at .0025 and nothing I have tried has worked to reduce the loss any further.</p> <p>What I have tried so far is:</p> <ol> <li><p>Using Adam and RMSProp with and without cyclical learning rates. The results are that the loss starts and stays at .0989. The learning rates for cyclical learning were .001 to .1.</p> </li> <li><p>After 4 or 5 epochs of not moving I tried SGD instead and the loss steadily declined to .0025. This is where the loss stalls out. After about 5 epochs of not changing I tried using SGD with cyclical learning enabled hoping it would decrease, but I get the same result.</p> </li> <li><p>I have tried increasing network capacity (as well as decreasing) thinking maybe the network had hit its learning limitations. I increased all 4 dense layers to 4096. That didn't change anything.</p> </li> <li><p>I've tried different batch sizes.</p> </li> </ol> <p>The most epochs I have trained the network for is 7. However, for 6 of those epochs the loss and validation loss do not change. Do I need to train for more epochs, or could it be that .0025 is not a saddle point but the global minimum for my dataset? I would think there is more room for it to improve. I tested the predictions of the network at .0025 and they aren't that great.</p> <p>Any advice on how to continue? 
My code is below.</p> <p>For starters my keras model is similar in style to VGG-16:</p> <pre><code># imports pip install -q -U tensorflow_addons import tensorflow_addons as tfa import tensorflow as tf from tensorflow import keras from keras import layers def get_model(input_shape): input = keras.input(shape=input_shape) x = layers.Conv2D(filters=64, kernel_size= (3, 3), activation='relu', paddings=&quot;same&quot;)(input) x = layers.Conv2D(filters=64, kernel_size= (3, 3), activation='relu', paddings=&quot;same&quot;)(input) x = layers.MaxPooling2D(pool_size=(2, 2) strides=none, paddings=&quot;same&quot;)(x) x = layers.Conv2D(filters=128, kernel_size= (3, 3), activation='relu', paddings=&quot;same&quot;)(input) x = layers.Conv2D(filters=128, kernel_size= (3, 3), activation='relu', paddings=&quot;same&quot;)(input) x = layers.MaxPooling2D(pool_size=(2, 2) strides=none, paddings=&quot;same&quot;)(x) x = layers.Conv2D(filters=256, kernel_size= (3, 3), activation='relu', paddings=&quot;same&quot;)(input) x = layers.Conv2D(filters=256, kernel_size= (3, 3), activation='relu', paddings=&quot;same&quot;)(input) x = layers.Conv2D(filters=256, kernel_size= (3, 3), activation='relu', paddings=&quot;same&quot;)(input) x = layers.Conv2D(filters=256, kernel_size= (3, 3), activation='relu', paddings=&quot;same&quot;)(input) x = layers.MaxPooling2D(pool_size=(2, 2) strides=none, paddings=&quot;same&quot;)(x) x = layers.Conv2D(filters=512, kernel_size= (3, 3), activation='relu', paddings=&quot;same&quot;)(input) x = layers.Conv2D(filters=512, kernel_size= (3, 3), activation='relu', paddings=&quot;same&quot;)(input) x = layers.Conv2D(filters=512, kernel_size= (3, 3), activation='relu', paddings=&quot;same&quot;)(input) x = layers.Conv2D(filters=512, kernel_size= (3, 3), activation='relu', paddings=&quot;same&quot;)(input) x = layers.MaxPooling2D(pool_size=(2, 2) strides=none, paddings=&quot;same&quot;)(x) x = layers.Flatten()(x) x = layers.Dense(4096, activation='relu')(x) x = 
layers.Dense(2048, activation='relu')(x) x = layers.Dense(1024, activation='relu')(x) x = layers.Dense(512, activation='relu')(x) output = layers.Dense(9, activation='sigmoid')(x) return keras.models.Model(inputs=input, outputs=output) # define learning rate range lr_range = [.001, .1] epochs = 100 batch_size = 32 # based on https://www.tensorflow.org/addons/tutorials/optimizers_cyclicallearningrate steps_per_epoch = len(training_data)/batch_size clr = tfa.optimizers.CyclicalLearningRate(initial_learning_rate=lr_range[0], maximal_learning_rate=lr_range[1], scale_fn=lambda x: 1/(2.**(x-1)), step_size=2 * steps_per_epoch ) optimizer = tf.keras.optimizers.Adam(clr) model = get_model((224, 224, 3)) model.compile(optimzer=optimzer, loss='mean_squared_error') # used tf.dataset objects for model input model.fit(train_ds, validation_data=valid_ds, batch_size=batch_size, epochs=epochs) </code></pre>
<python><keras><trainingloss>
2023-02-14 17:38:27
0
461
junfanbl
75,451,399
1,815,739
Swap trade in Uniswap fails using uniswap-python standard functions
<p>I am trying to do a simple trade using uniswap-python and it doesn't work.</p> <p><strong>Sample code:</strong></p> <pre><code>from uniswap import Uniswap provider = &quot;https://polygon-mainnet.infura.io/v3/&quot;+INFURA_API_KEY uniswap = Uniswap(address, private_key, version = 3, provider) result = uniswap.make_trade(USDT, WMATIC, 10) </code></pre> <p><strong>Result:</strong></p> <pre><code>raise ExtraDataLengthError(web3.exceptions.ExtraDataLengthError: The field extraData is 97 bytes, but should be 32. It is quite likely that you are connected to a POA chain. Refer to http://web3py.readthedocs.io/en/stable/middleware.html#geth-style-proof-of-authority for more details. The full extraData is: HexBytes('0xd3...d0') </code></pre> <p>I've checked PoA docs and tested all options without success. I always get the same message. There are enough funds in my wallet to do the trade. Any clues?</p>
<python><polygon><swap><uniswap>
2023-02-14 17:38:09
1
496
The Dare Guy
75,451,378
2,038,360
Modify a bar plot into a staked plot keeping the original values
<p>I have a pandas DataFrame containing the percentage of students that have a certain skill in each subject stratified according to their gender</p> <pre><code>iterables = [['Above basic','Basic','Low'], ['Female','Male']] index = pd.MultiIndex.from_product(iterables, names=[&quot;Skills&quot;, &quot;Gender&quot;]) df = pd.DataFrame(data=[[36,36,8,8,6,6],[46,46,2,3,1,2],[24,26,10,11,16,13]], index=[&quot;Math&quot;, &quot;Literature&quot;, &quot;Physics&quot;], columns=index) print(df) Skill Above basic Basic Low Gender Female Male Female Male Female Male Math 36 36 8 8 6 6 Literature 46 46 2 3 1 2 Physics 24 26 10 11 16 13 </code></pre> <p>Next I want to see how the skills are distributed according to the subjects</p> <pre><code>#plot how the skills are distributed according to the subjects df.sum(axis=1,level=[0]).plot(kind='bar') df.plot(kind='bar') </code></pre> <p><a href="https://i.sstatic.net/XosqE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XosqE.png" alt="enter image description here" /></a></p> <p>Now I would like to add the percentage of Male and Female to each bar in a stacked manner.. eg. for the fist bar (&quot;Math&quot;, &quot;Above basic&quot;) it should be 50/50. For the bar (&quot;Literature&quot;, &quot;Basic&quot;) it should be 40/60, for the bar (&quot;Literature&quot;,&quot;Low&quot;) it should be 33.3/66.7 and so on...</p> <p>Could you give me a hand?</p>
<python><pandas><matplotlib><bar-chart><plot-annotations>
2023-02-14 17:35:33
1
5,589
gabboshow
75,451,278
4,875,022
How to sample groups by a condition a subset of each group's data should satisfy using pandas
<p>I am wondering if there is a better way to solve the following problem.</p> <p>I have a dataset that tracks the price of products over time (using a relative time index centered around some event date (t=0)):*</p> <pre><code>df = pd.DataFrame({'id': [1,1,1,1,1,2,2,2,2,2], 'time': [-2,-1,0,1,2,-2,-1,0,1,2], 'price': [1,2,3,4,5,1,4,3,2,1]}) </code></pre> <p>I want to sample all products where the prices before the event (t&lt;0) are lower (&lt;) than the price at the event (t=0).</p> <p>For <code>df</code>, I should only sample product 1 (because for product 2: price t=-1 &gt; price t=0):</p> <pre><code>pd.DataFrame({'id': [1,1,1,1,1], 'time': [-2,-1,0,1,2], 'price': [1,2,3,4,5]}) </code></pre> <p><strong>In my current implementation</strong>, I create a column with the <code>event price</code> (price at t=0) and find all <code>id</code>s where all <code>price</code> &lt; <code>event price </code> for <code>time</code>&lt;0, and then sample using the <code>id</code>s. I feel like this should be solvable with <code>groupby</code>, but don't know how get each <code>id</code>s price at t=0 for the condition.</p> <hr /> <p>/* all products have the same time window. That is, in this example, all products have a relative time window from -2 to 2. The window is always symmetric around 0.</p>
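A sketch using `groupby().filter`, which keeps or drops whole groups and lets the predicate look up each group's own t=0 price — no intermediate "event price" column needed:

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
                   'time': [-2, -1, 0, 1, 2, -2, -1, 0, 1, 2],
                   'price': [1, 2, 3, 4, 5, 1, 4, 3, 2, 1]})

def pre_event_below_event(g):
    """True when every pre-event price is below the group's t=0 price."""
    event_price = g.loc[g['time'].eq(0), 'price'].iloc[0]
    return (g.loc[g['time'] < 0, 'price'] < event_price).all()

sampled = df.groupby('id').filter(pre_event_below_event)
# keeps only product 1: for product 2, price at t=-1 (4) > price at t=0 (3)
```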
<python><pandas>
2023-02-14 17:25:52
1
397
greyBag
75,451,248
2,394,694
Use Dask Dataframe On delayed function
<p>I have three sources and a Dask Dataframe for each of them. I need to apply a function that computes an operation that combines data from the three sources. The operation requires a state to be calculated ( I can't change that).</p> <p>The three sources are in parquet format and I read the data using <code>read_parquet</code> Dask Dataframe function:</p> <pre class="lang-py prettyprint-override"><code> @dask.delayed def load_data(data_path): ddf = dd.read_parquet(data_path, engine=&quot;pyarrow&quot;) return ddf results = [] sources_path=[&quot;/source1&quot;,&quot;/source2&quot;,&quot;/source3&quot;] for source_path in sources_path: data = load_data(source_path) results.append(data) </code></pre> <p>I create another delayed function that executes the operation:</p> <pre class="lang-py prettyprint-override"><code> @dask.delayed def process(sources): operation(sources[0][&lt;list of columns&gt;],sources[1][&lt;list of columns&gt;],sources[2][&lt;list of columns&gt;]) </code></pre> <p>The <code>operation</code> function comes from a custom library. It could not actually be parallelized because it has an internal state.</p> <p>Reading the dask documentation, <a href="https://docs.dask.org/en/latest/delayed-best-practices.html#don-t-call-dask-delayed-on-other-dask-collections" rel="nofollow noreferrer">this is not a best practice</a>.</p> <p>Is there a way to apply a custom function on multiple dask dataframe without using delayed function?</p>
<python><dask><dask-distributed><dask-dataframe><dask-delayed>
2023-02-14 17:23:20
1
1,549
theShadow89
75,451,239
7,483,211
How to reference input in params section of snakemake rule?
<p>I need to process my input file values, turning them into a comma-separated string (instead of white space) in order to pass them to a CLI program. To do this, I want to run the input files through a Python function. How can I reference the input files of a rule in the params section of the same rule?</p> <p>This is what I've tried, but it doesn't work:</p> <pre><code>rule a: input: foo=&quot;a.txt&quot;, bar=expand({build}.txt,build=config[&quot;build&quot;]), output: baz=result.txt, params: joined_bar=lambda w: &quot;,&quot;.join(input.bar), # this doesn't work shell: &quot;&quot;&quot; qux --comma-separated-files {params.joined_bar} \ --foo {input.foo} \ &gt;{output.baz} &quot;&quot;&quot; </code></pre> <p>It fails with:</p> <pre><code>InputFunctionException: AttributeError: 'builtin_function_or_method' object has no attribute 'bar' </code></pre> <p>Potentially related but (over-)complicated questions: <br /> <a href="https://stackoverflow.com/questions/72077735/how-to-define-parameters-for-a-snakemake-rule-with-expand-input">How to define parameters for a snakemake rule with expand input</a> <br /> <a href="https://stackoverflow.com/questions/46636844/is-snakemake-params-function-evaluated-before-input-file-existence">Is Snakemake params function evaluated before input file existence?</a></p>
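A hedged sketch of the usual fix: an input-dependent `params` function must declare `input` as its second positional parameter. A bare `lambda w:` receives only the wildcards, so `input` inside it resolves to the Python builtin `input` function — which is exactly what the `AttributeError` about `'builtin_function_or_method'` is complaining about:

```
rule a:
    input:
        foo="a.txt",
        bar=expand("{build}.txt", build=config["build"]),
    output:
        baz="result.txt",
    params:
        # wildcards is always the first argument; the rule's input is
        # passed only when declared as the second one:
        joined_bar=lambda wildcards, input: ",".join(input.bar),
    shell:
        """
        qux --comma-separated-files {params.joined_bar} \
            --foo {input.foo} \
            >{output.baz}
        """
```

(The quoting of `expand("{build}.txt", ...)` and `"result.txt"` is also fixed relative to the snippet above, which would not parse as written.)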
<python><snakemake>
2023-02-14 17:22:23
1
10,272
Cornelius Roemer
75,451,197
5,105,118
How to run multiple julia functions from python multiprocessing pool using juliacall
<p>I want to run julia functions/scripts from within python. I managed to call julia scripts via the library <code>juliacall</code>. Now I want to parallelize this. Therefore I created a python multiprocessing Pool and call the julia script from each worker. However this fails with the following message:</p> <pre><code>python: /buildworker/worker/package_linux64/build/src/debuginfo.cpp:1634: void register_eh_frames(uint8_t*, size_t): Assertion `end_ip != 0' failed. </code></pre> <p>How can I further debug this? Here is my minimal working example:</p> <pre class="lang-py prettyprint-override"><code>import os os.environ['PYTHON_JULIAPKG_EXE'] = &quot;/home/user/.juliaup/bin/julia&quot; os.environ['PYTHON_JULIAPKG_OFFLINE'] = 'yes' os.environ['PYTHON_JULIAPKG_PROJECT'] = '/home/user/julia/environments/v1.6/' from juliacall import Main as jl, convert as jlconvert from multiprocessing import Pool from tqdm import tqdm import ipdb def init_worker(): import os os.environ['PYTHON_JULIAPKG_EXE'] = &quot;/home/user/juliaup/bin/julia&quot; os.environ['PYTHON_JULIAPKG_OFFLINE'] = 'yes' os.environ['PYTHON_JULIAPKG_PROJECT'] = '/home/user/.julia/environments/v1.6/' from juliacall import Main as jl, convert as jlconvert print('in init_worker()...') jl.seval('using Pkg') jl.seval('Pkg.status()') print('...done') def compute(jobid): print(f'in main({jobid})...') jl.seval('include(&quot;test_julia_simple.jl&quot;)') print('...done') return def main(): njobs = 10 #start pool with init_worker() as initializer with Pool(2, initializer=init_worker) as p, tqdm(total=njobs) as pbar: res = [] for jid in range(njobs): res.append(p.apply_async(compute, (jid,))) for r in res: r.get() pbar.update(1) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>And the julia script <code>test_julia_simple.jl</code></p> <pre><code>for i in 1:10 println(i) end 1+2 </code></pre> <hr /> <p>additional info:</p> <pre class="lang-bash prettyprint-override"><code>$ python --version Python 3.9.7 $ pip 
freeze | grep julia juliacall==0.9.10 juliapkg==0.1.9 $ julia --version The latest version of Julia in the `1.6` channel is 1.6.7+0.x64.linux.gnu. You currently have `1.6.6+0~x64` installed. Run: juliaup update to install Julia 1.6.7+0.x64.linux.gnu and update the `1.6` channel to that version. julia version 1.6.6 </code></pre> <p>not sure if this is related but the error message is nearly identical <a href="https://github.com/JuliaLang/julia/issues/44969" rel="nofollow noreferrer">https://github.com/JuliaLang/julia/issues/44969</a></p> <hr /> <p>After some comment I tried using a thread pool but in that case python fails with Segmentation fault:</p> <pre class="lang-py prettyprint-override"><code>import os os.environ['PYTHON_JULIAPKG_EXE'] = &quot;/home/user/.juliaup/bin/julia&quot; os.environ['PYTHON_JULIAPKG_OFFLINE'] = 'yes' os.environ['PYTHON_JULIAPKG_PROJECT'] = '/home/user/.julia/environments/v1.6/' from juliacall import Main as jl, convert as jlconvert import concurrent.futures from tqdm import tqdm import ipdb def compute(jobid): print(f'in main({jobid})...') print('in init_worker()...') jl.seval('using Pkg') jl.seval('Pkg.status()') print('...done') jl.seval('include(&quot;test_julia_simple.jl&quot;)') print('...done') return def main(): njobs = 10 #start pool with init_worker() as initializer with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor: with tqdm(total=njobs) as pbar: jobs = {executor.submit(compute, jid):jid for jid in range(njobs)} for future in concurrent.futures.as_completed(jobs): jid = jobs[future] try: data = future.result() except Exception as exc: print('%r generated an exception: %s' % (jid, exc)) else: print('%r page is %d bytes' % (jid, len(data))) pbar.update(1) if __name__ == &quot;__main__&quot;: main() </code></pre>
<python><julia><python-multiprocessing><juliacall>
2023-02-14 17:18:38
0
1,765
v.tralala
75,451,064
4,707,978
weird error all the sudden: source code string cannot contain null bytes
<p>Really weird error when running with docker-compose dockerfile:</p> <pre><code>FROM tiangolo/uvicorn-gunicorn:python3.10-slim COPY . /app WORKDIR /app RUN pip install -r requirements.txt CMD celery worker -B --app=worker.worker.celery --loglevel=debug </code></pre> <p>Error:</p> <pre><code>momo-api-worker-1 | During handling of the above exception, another exception occurred: momo-api-worker-1 | momo-api-worker-1 | Traceback (most recent call last): momo-api-worker-1 | File &quot;/usr/local/bin/celery&quot;, line 8, in &lt;module&gt; momo-api-worker-1 | sys.exit(main()) momo-api-worker-1 | File &quot;/usr/local/lib/python3.10/site-packages/celery/__main__.py&quot;, line 16, in main momo-api-worker-1 | _main() momo-api-worker-1 | File &quot;/usr/local/lib/python3.10/site-packages/celery/bin/celery.py&quot;, line 322, in main momo-api-worker-1 | cmd.execute_from_commandline(argv) momo-api-worker-1 | File &quot;/usr/local/lib/python3.10/site-packages/celery/bin/celery.py&quot;, line 499, in execute_from_commandline momo-api-worker-1 | super(CeleryCommand, self).execute_from_commandline(argv))) momo-api-worker-1 | File &quot;/usr/local/lib/python3.10/site-packages/celery/bin/base.py&quot;, line 289, in execute_from_commandline momo-api-worker-1 | argv = self.setup_app_from_commandline(argv) momo-api-worker-1 | File &quot;/usr/local/lib/python3.10/site-packages/celery/bin/base.py&quot;, line 509, in setup_app_from_commandline momo-api-worker-1 | self.app = self.find_app(app) momo-api-worker-1 | File &quot;/usr/local/lib/python3.10/site-packages/celery/bin/base.py&quot;, line 531, in find_app momo-api-worker-1 | return find_app(app, symbol_by_name=self.symbol_by_name) momo-api-worker-1 | File &quot;/usr/local/lib/python3.10/site-packages/celery/app/utils.py&quot;, line 373, in find_app momo-api-worker-1 | sym = symbol_by_name(app, imp=imp) momo-api-worker-1 | File &quot;/usr/local/lib/python3.10/site-packages/celery/bin/base.py&quot;, line 534, in symbol_by_name 
momo-api-worker-1 | return imports.symbol_by_name(name, imp=imp) momo-api-worker-1 | File &quot;/usr/local/lib/python3.10/site-packages/kombu/utils/imports.py&quot;, line 59, in symbol_by_name momo-api-worker-1 | reraise(ValueError, momo-api-worker-1 | File &quot;/usr/local/lib/python3.10/site-packages/vine/five.py&quot;, line 194, in reraise momo-api-worker-1 | raise value.with_traceback(tb) momo-api-worker-1 | File &quot;/usr/local/lib/python3.10/site-packages/kombu/utils/imports.py&quot;, line 57, in symbol_by_name momo-api-worker-1 | module = imp(module_name, package=package, **kwargs) momo-api-worker-1 | File &quot;/usr/local/lib/python3.10/site-packages/celery/utils/imports.py&quot;, line 111, in import_from_cwd momo-api-worker-1 | return imp(module, package=package) momo-api-worker-1 | File &quot;/usr/local/lib/python3.10/importlib/__init__.py&quot;, line 126, in import_module momo-api-worker-1 | return _bootstrap._gcd_import(name[level:], package, level) momo-api-worker-1 | File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1050, in _gcd_import momo-api-worker-1 | File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1027, in _find_and_load momo-api-worker-1 | File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 992, in _find_and_load_unlocked momo-api-worker-1 | File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 241, in _call_with_frames_removed momo-api-worker-1 | File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1050, in _gcd_import momo-api-flower-1 | [I 230214 17:01:48 mixins:225] Connected to redis://redis:6379/0 momo-api-worker-1 | File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1027, in _find_and_load momo-api-worker-1 | File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1006, in _find_and_load_unlocked momo-api-worker-1 | File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 688, in _load_unlocked momo-api-worker-1 | File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 883, 
in exec_module momo-api-worker-1 | File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 241, in _call_with_frames_removed momo-api-worker-1 | File &quot;/app/worker/__init__.py&quot;, line 1, in &lt;module&gt; momo-api-worker-1 | from .worker import celery momo-api-worker-1 | File &quot;/app/worker/worker.py&quot;, line 5, in &lt;module&gt; momo-api-worker-1 | from container import init_container momo-api-worker-1 | File &quot;/app/container.py&quot;, line 15, in &lt;module&gt; momo-api-worker-1 | from services import * momo-api-worker-1 | File &quot;/app/services/__init__.py&quot;, line 2, in &lt;module&gt; momo-api-worker-1 | from .push_notifications import * momo-api-worker-1 | File &quot;/app/services/push_notifications.py&quot;, line 1, in &lt;module&gt; momo-api-worker-1 | from exponent_server_sdk import PushClient, PushMessage momo-api-worker-1 | File &quot;/usr/local/lib/python3.10/site-packages/exponent_server_sdk/__init__.py&quot;, line 4, in &lt;module&gt; momo-api-worker-1 | import requests momo-api-worker-1 | File &quot;/usr/local/lib/python3.10/site-packages/requests/__init__.py&quot;, line 147, in &lt;module&gt; momo-api-worker-1 | from . import packages, utils momo-api-worker-1 | File &quot;/usr/local/lib/python3.10/site-packages/requests/packages.py&quot;, line 16, in &lt;module&gt; momo-api-worker-1 | locals()[package] = __import__(package) momo-api-worker-1 | ValueError: Couldn't import 'worker.worker.celery': source code string cannot contain null bytes </code></pre> <p>But when I run with VS Code no error:</p> <pre><code>{ &quot;name&quot;: &quot;Worker&quot;, &quot;type&quot;: &quot;python&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;module&quot;: &quot;celery&quot;, &quot;args&quot;: [ &quot;worker&quot;, &quot;-B&quot;, &quot;--app=worker.worker.celery&quot;, &quot;--loglevel=debug&quot; ], &quot;jinja&quot;: true, &quot;justMyCode&quot;: true } </code></pre>
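The message at the bottom of the traceback, <code>source code string cannot contain null bytes</code>, means the import machinery read a <code>.py</code> file that contains NUL bytes, typically a file corrupted by an unclean shutdown or a bad copy into the image. A quick way to locate the offender is to scan the tree for such files; this is a generic sketch, not code from the project above:

```python
from pathlib import Path

def find_null_byte_files(root="."):
    """Return every .py file under `root` that contains a NUL byte."""
    return [p for p in Path(root).rglob("*.py") if b"\x00" in p.read_bytes()]
```

Running this inside the container over <code>/app</code> (e.g. <code>print(find_null_byte_files('/app'))</code>) should point at the corrupted module; re-copying or rewriting that file normally clears the import error.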
<python>
2023-02-14 17:04:12
0
3,431
Dirk
75,451,056
10,404,281
Two data frame Name mapping and replacing values based on a condition
<p>I have two data frames (df) one equally divided into weeks by count of the week for the month(February 4 weeks March 5 weeks). The other one has actual data.</p> <p>equally divided df</p> <pre class="lang-py prettyprint-override"><code>Name Feb_1 Feb_2 Feb_3 Feb_4 Mar_5 Mar_6 Mar_7 Mar_8 Mar_9 Sunil 0.25 0.25 0.25 0.25 0.2 0.2 0.2 0.2 0.2 Amaya 0.25 0.25 0.25 0.25 0.2 0.2 0.2 0.2 0.2 Will 0.25 0.25 0.25 0.25 0.2 0.2 0.2 0.2 0.2 Francis 0.25 0.25 0.25 0.25 0.2 0.2 0.2 0.2 0.2 Kadeep 0.25 0.25 0.25 0.25 0.2 0.2 0.2 0.2 0.2 Hima 0.25 0.25 0.25 0.25 0.2 0.2 0.2 0.2 0.2 Lazy 0.25 0.25 0.25 0.25 0.2 0.2 0.2 0.2 0.2 Joseph 0.25 0.25 0.25 0.25 0.2 0.2 0.2 0.2 0.2 </code></pre> <p>actual df</p> <pre class="lang-py prettyprint-override"><code>Name Feb_1 Feb_2 Feb_3 Feb_4 Mar_5 Mar_6 Mar_7 Mar_8 Mar_9 Sunil 0.2 0.4 0.3 0.1 0 0 0 0 0 Hima 0.5 0.2 0.3 0 0.2 0.3 0.2 0.2 0.1 Kamal 0.3 0 0.5 0.2 0.1 0.5 0.1 0.1 0.2 Amaya 0.5 0 0.3 0.2 0 0 0 0 0 Jacob 0.2 0.4 0 0.4 0.4 0 0.2 0.2 0.2 Preety 0.7 0.1 0.1 0.1 0.2 0.1 0.4 0.3 0 Will 0.8 0.2 0 0 0.1 0.2 0.3 0.1 0.3 Francis 0.4 0.2 0.3 0.1 0.2 0.4 0 0.4 0 Tara 0 0.5 0.2 0.3 0.2 0 0.3 0.2 0.3 </code></pre> <p>I want to replace the data in equally divided df from the actual df. Condition for the week 1 to 4 (Feb) is if the actual df week 1 to 4 sum = 1. For example.</p> <p>actual df Sunil 0.2 + 0.4 + 0.3 + 0.1 = 1</p> <p>Then replace the number in equally divided df. So Sunil 0.25 0.25 0.25 0.25 will replace with the above values.</p> <p>Week 5-9 is the same. If some of the values in actual df = 1 then replace. 
So for Sunil it's 0 + 0 + 0 + 0 + 0, which is not equal to 1, so the values for weeks 5 to 9 are not replaced.</p> <p>So the data frame should look like the one below.</p> <p>equally divided with edit df</p> <pre class="lang-py prettyprint-override"><code>Name Feb_1 Feb_2 Feb_3 Feb_4 Mar_5 Mar_6 Mar_7 Mar_8 Mar_9 Sunil 0.2 0.4 0.3 0.1 0.2 0.2 0.2 0.2 0.2 Amaya 0.5 0 0.3 0.2 0.2 0.2 0.2 0.2 0.2 Will 0.8 0.2 0 0 0.1 0.2 0.3 0.1 0.3 Francis 0.4 0.2 0.3 0.1 0.2 0.4 0 0.4 0 Kadeep 0.25 0.25 0.25 0.25 0.2 0.2 0.2 0.2 0.2 Hima 0.5 0.2 0.3 0 0.2 0.3 0.2 0.2 0.1 Lazy 0.25 0.25 0.25 0.25 0.2 0.2 0.2 0.2 0.2 Joseph 0.25 0.25 0.25 0.25 0.2 0.2 0.2 0.2 0.2 </code></pre> <p>I'm trying to bring all the values from the actual df into the equally divided frame and then edit it, but couldn't find a way.</p> <pre class="lang-py prettyprint-override"><code>equally['Feb_1_actual'] = equally['Name'].map(actual.set_index('Name')['Feb_1']) ## then get the sum and if it's equal to 1 replace the value, otherwise keep the same value </code></pre> <p>Is there another way to do it? Any help would be appreciated. Thanks in advance!</p>
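A hedged sketch of one way to do this without per-column <code>map</code> calls: align <code>actual</code> to the names in the equally divided frame, build a boolean mask from the row sums, and assign only the masked rows. It is shown for the four Feb weeks on a trimmed copy of the data; the Mar weeks work the same way, and <code>np.isclose</code> guards against float round-off:

```python
import numpy as np
import pandas as pd

equally = pd.DataFrame({
    "Name": ["Sunil", "Amaya", "Kadeep"],
    "Feb_1": [0.25, 0.25, 0.25], "Feb_2": [0.25, 0.25, 0.25],
    "Feb_3": [0.25, 0.25, 0.25], "Feb_4": [0.25, 0.25, 0.25],
})
actual = pd.DataFrame({
    "Name": ["Sunil", "Amaya"],
    "Feb_1": [0.2, 0.5], "Feb_2": [0.4, 0.0],
    "Feb_3": [0.3, 0.3], "Feb_4": [0.1, 0.2],
})

feb = ["Feb_1", "Feb_2", "Feb_3", "Feb_4"]

# Align `actual` to the names in `equally`; missing names become NaN rows.
aligned = equally[["Name"]].merge(actual, on="Name", how="left")

# Replace only rows whose actual Feb weeks sum to 1 (float-safe check).
mask = np.isclose(aligned[feb].sum(axis=1), 1.0)
equally.loc[mask, feb] = aligned.loc[mask, feb].to_numpy()
```

Names missing from <code>actual</code> (like Kadeep here) produce NaN rows whose sum is not 1, so they keep their equal split automatically.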
<python><pandas><dataframe>
2023-02-14 17:03:19
2
819
rra
75,450,874
2,395,382
ModuleNotFoundError when importing a package from the src folder
<p>What is wrong with this python structure? I have read multiple docs and forums on this issue i just can't seem to get it solved.</p> <pre class="lang-bash prettyprint-override"><code>. └── src ├── __init__.py ├── lib │ ├── hello.py │ └── __init__.py ├── libtwo │ ├── __init__.py │ └── world.py └── main.py </code></pre> <p>world.py</p> <pre class="lang-py prettyprint-override"><code>def helloworld(): print(&quot;Hello World!&quot;) </code></pre> <p>hello.py</p> <pre class="lang-py prettyprint-override"><code>from libtwo.world import helloworld helloworld() </code></pre> <p>main.py</p> <pre class="lang-py prettyprint-override"><code>from libtwo.world import helloworld helloworld() </code></pre> <p>(it runs fine from main.py)</p> <pre><code> File &quot;src/lib/hello.py&quot;, line 1, in &lt;module&gt; from libtwo.world import helloworld ModuleNotFoundError: No module named 'libtwo' </code></pre> <p>the following gives</p> <pre><code> File &quot;lib/hello.py&quot;, line 1, in &lt;module&gt; from ..libtwo.world import helloworld ImportError: attempted relative import with no known parent package </code></pre> <pre class="lang-py prettyprint-override"><code>from ..libtwo.world import helloworld helloworld() </code></pre> <p>using vscode i tried to set up workspaces and configs, all the lot. 
<code>&quot;python.terminal.executeInFileDir&quot;: true</code></p> <p>(Yes, I am using a virtual env.)</p> <p>virtualenv sys.path entries:</p> <p>According to some sources, if I don't see my directory on the virtualenv's sys.path, that means something is wrong.</p> <pre><code>(my_venv) ➜ src python main.py &gt;&gt;&gt; import sys &gt;&gt;&gt; sys.path ['', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/home/user/Documents/py/myui/subject_a/my_venv/lib/python3.8/site-packages'] &gt;&gt;&gt; </code></pre> <p>Running sys.path from main.py:</p> <pre><code>(my_venv) ➜ src python main.py ['/home/user/Documents/py/myui/subject_a/src', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/home/user/Documents/py/myui/subject_a/my_venv/lib/python3.8/site-packages'] </code></pre> <p>I really can't pinpoint what is wrong with my structure; I have been doing this for years and today nothing seems to work.</p>
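The second sys.path dump explains the behaviour: when <code>hello.py</code> is run directly, <code>sys.path[0]</code> is <code>src/lib</code>, not <code>src</code>, so the sibling package <code>libtwo</code> is invisible. Running it as a module from <code>src</code> (<code>python -m lib.hello</code>) fixes that, because <code>-m</code> puts the current directory on sys.path. As a hedged fallback sketch (the path arithmetic assumes exactly the layout shown above), the script can also add <code>src</code> itself:

```python
import sys
from pathlib import Path

# hello.py lives in src/lib/, so its parent's parent is src/; putting
# that on sys.path makes the sibling package `libtwo` importable even
# when this file is executed directly as a script.
src_dir = Path(__file__).resolve().parent.parent
if str(src_dir) not in sys.path:
    sys.path.insert(0, str(src_dir))

# from libtwo.world import helloworld  # resolves after the tweak
```

The <code>python -m</code> route is usually preferred over editing <code>sys.path</code> by hand.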
<python><python-3.x><python-import><python-module>
2023-02-14 16:48:10
3
2,421
Fanna1119
75,450,844
6,290,211
How to get performance_metrics() on weekly frequency in facebook-prophet?
<p><a href="https://facebook.github.io/prophet/" rel="nofollow noreferrer">I am working with the <code>prophet</code> library</a> for educational purposes on a classic dataset: the <a href="https://www.kaggle.com/datasets/yasserh/air-passengers-forecast-dataset" rel="nofollow noreferrer">air passenger dataset available on Kaggle</a>.</p> <p>The data are on a monthly frequency, which Prophet cannot cross-validate as a standard frequency, <a href="https://github.com/facebook/prophet/issues/1672" rel="nofollow noreferrer">based on that discussion</a>.</p> <p>During the time-series cross validation I used the <code>prophet</code> function <code>cross_validation()</code>, passing the arguments on a weekly frequency.</p> <p>But when I call the function <code>performance_metrics</code>, it returns the horizon column on a daily frequency.</p> <p><strong>How can I get it on a weekly frequency?</strong></p> <p>I also tried to read the documentation and the <a href="https://github.com/facebook/prophet/blob/e665430adcd7690a1ea7565803f34043596045fe/python/prophet/diagnostics.py#L318" rel="nofollow noreferrer">function description</a>:</p> <blockquote> <p>Metrics are calculated over a rolling window of cross validation predictions, after sorting by horizon. Averaging is first done within each value of horizon, and then across horizons as needed to reach the window size. The size of that window (number of simulated forecast points) is determined by the rolling_window argument, which specifies a proportion of simulated forecast points to include in each window. rolling_window=0 will compute it separately for each horizon. The default of rolling_window=0.1 will use 10% of the rows in df in each window. rolling_window=1 will compute the metric across all simulated forecast points. 
The results are set to the right edge of the window.</p> </blockquote> <p>Here is how I modelled the dataset:</p> <pre><code>model = Prophet() model.fit(df) future_dates = model.make_future_dataframe(periods=36, freq='MS') df_cv = cross_validation(model, initial='300 W', period='5 W', horizon = '52 W') df_cv.head() </code></pre> <p><a href="https://i.sstatic.net/DfRMP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DfRMP.png" alt="enter image description here" /></a></p> <p>And then when I call <code>performance_metrics</code>:</p> <pre><code>df_p = performance_metrics(df_cv) df_p.head() </code></pre> <p>This is the output that I get, with a daily frequency. <a href="https://i.sstatic.net/mjCKQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mjCKQ.png" alt="enter image description here" /></a></p> <p>I am probably missing something or I made a mistake in the code.</p>
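<code>performance_metrics</code> returns <code>horizon</code> as a timedelta, so the daily display is just its default rendering; it can be re-expressed in weeks afterwards. A small sketch with a hypothetical <code>df_p</code> (the column name matches prophet's output, but the values here are made up):

```python
import pandas as pd

# Hypothetical stand-in for the performance_metrics() output.
df_p = pd.DataFrame({"horizon": pd.to_timedelta(["7 days", "14 days", "21 days"])})

# Re-express the timedelta horizon in weeks.
df_p["horizon_weeks"] = df_p["horizon"].dt.days / 7
```

The new column can then be used for grouping or plotting on a weekly scale.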
<python><time-series><facebook-prophet>
2023-02-14 16:45:06
0
389
Andrea Ciufo
75,450,525
5,858,752
What is the purpose of using tz_localize after tz_convert to convert to local time zone?
<p>In a codebase I am looking at, I see the following</p> <pre><code>local_timezone = get_local_timezone() df1[&quot;start_time&quot;] = df1.start_time.dt.tz_convert(local_timezone) # pandas dataframe df_merged = pd.merge(df1, df2, left_on=[&quot;start_time&quot;]) df_merged[&quot;start_time&quot;] = df_merged[&quot;start_time&quot;].dt.tz_localize(None) </code></pre> <p>I've been under the impression that only one of <code>tz_localize</code> and <code>tz_convert</code> is needed, not both, to convert a timestamp to the local timezone. What is the purpose of using both here?</p>
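The two calls do different jobs, which a small example makes clear: <code>tz_convert</code> changes the wall-clock representation to another zone (same underlying instant), while <code>tz_localize(None)</code> afterwards strips the zone info entirely, leaving a naive timestamp. That stripping step is common before writing to stores that reject tz-aware values:

```python
import pandas as pd

ts = pd.Series(pd.to_datetime(["2023-02-14 12:00"])).dt.tz_localize("UTC")

# Same instant, different wall clock: 12:00 UTC shown as 07:00 Eastern.
local = ts.dt.tz_convert("US/Eastern")

# Drop the zone info entirely: a naive timestamp with the local wall clock.
naive = local.dt.tz_localize(None)
```

So in the codebase above, <code>tz_convert</code> picks the local wall-clock time and <code>tz_localize(None)</code> then makes the column timezone-naive.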
<python><pandas><datetime><timezone>
2023-02-14 16:16:20
1
699
h8n2
75,450,403
12,242,625
Width of bars according to associated group
<p>We have a dataframe that contains various values of a broadcast medium.</p> <pre><code>+----+-------+--------+------+-------------+ | | MHz | Slot | dB | dB_median | |----+-------+--------+------+-------------| | 0 | 10 | Slot1 | 20 | 20.5 | | 1 | 20 | Slot1 | 21 | 20.5 | | 2 | 30 | Slot2 | 19 | 19 | | 3 | 40 | Slot3 | 18 | 18 | | 4 | 50 | Slot4 | 21 | 19 | | 5 | 60 | Slot4 | 17 | 19 | | 6 | 70 | Slot4 | 20 | 19 | | 7 | 80 | Slot5 | 22 | 22 | | 8 | 90 | Slot6 | 19 | 19 | | 9 | 100 | Slot6 | 19 | 19 | +----+-------+--------+------+-------------+ </code></pre> <ul> <li><code>MHz</code> refers to the frequency of the carrier</li> <li><code>Slot</code> refers to the name of the service</li> <li><code>dB</code> refers to the measured value per carrier</li> <li><code>dB_median</code> refers to the median value per service</li> </ul> <p>A service is modulated on 1 or more carriers or in other words: a <code>Slot</code> can have a length of 1 or more <code>MHz</code> values.</p> <p>I'm now looking for a way to plot this spectrum like:</p> <p><a href="https://i.sstatic.net/96YH4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/96YH4.png" alt="enter image description here" /></a></p> <p>But I only get to the point where I can color the single bars according to the related <code>Slot</code>:</p> <p><a href="https://i.sstatic.net/WfDA1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WfDA1.png" alt="enter image description here" /></a></p> <p>Happy for any idea or hint.</p> <hr /> <h1>MWE</h1> <pre><code>import pandas as pd import seaborn as sns import matplotlib.pyplot as plt df_test = pd.DataFrame({ &quot;MHz&quot;:[10,20,30,40,50,60,70,80,90,100], &quot;Slot&quot;:[&quot;Slot1&quot;,&quot;Slot1&quot;,&quot;Slot2&quot;,&quot;Slot3&quot;,&quot;Slot4&quot;, &quot;Slot4&quot;,&quot;Slot4&quot;,&quot;Slot5&quot;,&quot;Slot6&quot;,&quot;Slot6&quot;], &quot;dB&quot;:[20,21,19,18,21,17,20,22,19,19], 
&quot;dB_median&quot;:[20.5,20.5,19,18,19,19,19,22,19,19] }) plt.figure(figsize=(8,4)) g = sns.barplot( data=df_test, x=&quot;MHz&quot;, y=&quot;dB_median&quot;, hue=&quot;Slot&quot; ) sns.move_legend(g, &quot;lower center&quot;, ncol=6, bbox_to_anchor=(0.5, 1)) plt.show() </code></pre>
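One way to get service-wide bars, sketched below with plain matplotlib since <code>sns.barplot</code> draws fixed-width bars per x tick: aggregate each <code>Slot</code> into a left edge, a width spanning its carriers, and its median height, then hand those to <code>ax.bar</code>. The 10 MHz carrier spacing is an assumption read off the sample data:

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

df_test = pd.DataFrame({
    "MHz": [10, 20, 30, 40, 50],
    "Slot": ["Slot1", "Slot1", "Slot2", "Slot3", "Slot3"],
    "dB_median": [20.5, 20.5, 19, 18, 18],
})

step = 10  # assumed carrier spacing in MHz
# One bar per service: left edge, total width, and median height.
spans = df_test.groupby("Slot").agg(
    left=("MHz", "min"), right=("MHz", "max"), dB_median=("dB_median", "first")
)
spans["width"] = spans["right"] - spans["left"] + step

fig, ax = plt.subplots()
ax.bar(spans["left"] - step / 2, spans["dB_median"],
       width=spans["width"], align="edge", edgecolor="black")
```

Colouring per slot and building the legend then works with a colour list keyed by <code>spans.index</code>.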
<python><matplotlib><seaborn>
2023-02-14 16:05:09
1
3,304
Marco_CH
75,450,088
2,725,742
Get tkinter Entry to validate final state, skip intermediary deletions?
<p>So if I want to restrict a value to be between 100 and 200...</p> <pre><code>import tkinter as tk master = tk.Tk() def validatePLZ(index, value_if_allowed): print(&quot;validatePLZ with index:%s and proposed value:%s&quot; % (index, value_if_allowed)) if int(index) &gt; 3 or len(value_if_allowed) &gt; 3: return False print(&quot;Not too long&quot;) value = int(value_if_allowed) print(&quot;Considering value: &quot;, value) if value &lt; 100 or value &gt; 200: return False return True vcmd = (master.register(validatePLZ), '%i', '%P') ent = tk.Entry(master, validate=&quot;key&quot;, validatecommand=vcmd) ent.insert(0, &quot;150&quot;) ent.pack() tk.Button(master, text=&quot;Validate&quot;).pack() tk.mainloop() </code></pre> <p>This has worked in less strict conditions, where zero was allowed, but if the value is '150' and I am trying to change to '170' by highlighting '5' and typing '7' what I see is:</p> <ol> <li><p>Deletion of '5' with a result of '10'</p> </li> <li><p>Addition of '7', after failed deletion, with a result of '1570'</p> </li> </ol> <p>But it fails on #1 so it is not working. I do still want #1 to happen if just deleting a character, but how can the highlight-replace be combined into one check? Or how do I know if it is a highlight-replace and not simply a deletion?</p>
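One hedged way around this, shown as a plain function wired up the same way via <code>register</code> with the <code>'%d'</code> (action type) and <code>'%P'</code> substitutions: let every deletion through and enforce the range only on insertions, so the delete half of a highlight-replace can no longer fail:

```python
def validate_plz(action, value_if_allowed):
    """Validator for tkinter's ('%d', '%P') substitutions.

    action: '1' = insert, '0' = delete, '-1' = revalidation.
    Deletions pass unconditionally, so the delete step of a
    highlight-replace cannot block the edit; the 100-200 range
    is enforced on insertions only.
    """
    if action != "1":                 # delete / revalidate: allow
        return True
    if len(value_if_allowed) > 3 or not value_if_allowed.isdigit():
        return False
    return 100 <= int(value_if_allowed) <= 200
```

Wired up as <code>vcmd = (master.register(validate_plz), '%d', '%P')</code>. Note this still rejects typing a fresh value digit by digit ('1' fails the range check), so the hard final-state guarantee is usually moved to a <code>focusout</code> validation or the button handler instead.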
<python><validation><tkinter><tkinter-entry>
2023-02-14 15:37:15
1
448
fm_user8
75,449,953
5,959,685
Split a pandas dataframe into multiple equal dataframes and convert to JSON
<p>Part of my program needs a certain kind of input. So I need to split a pandas dataframe into n multiple dataframes and convert them to JSON without the header for that input. E.g. I have a dataframe like this</p> <pre><code>import pandas as pd myDf = pd.DataFrame(['banana', 'apple', 'watermelon','grapes','cherry', 'blueberry'], columns=['fruit']) </code></pre> <p>which looks like this</p> <pre><code> fruit 0 banana 1 apple 2 watermelon 3 grapes 4 cherry 5 blueberry </code></pre> <p>I need an output similar to l1, programmatically</p> <pre><code>import json l1 = [['banana','apple', 'watermelon'], ['grapes', 'cherry', 'blueberry']] print(json.dumps(l1, indent = 4)) </code></pre> <p>which should look like this, as it should exclude the header 'fruit'</p> <pre><code>[ [ &quot;banana&quot;, &quot;apple&quot;, &quot;watermelon&quot; ], [ &quot;grapes&quot;, &quot;cherry&quot;, &quot;blueberry&quot; ] ] </code></pre> <p>I split the dataframe</p> <pre><code>fruitList = np.array_split(myDf, 2) </code></pre> <p>then in a for loop I tried this</p> <pre><code>json.loads(fruitList_iter.reset_index(drop=True).to_json(orient='values')) </code></pre> <p>but it's not giving me the expected result. Any help would be much appreciated.</p>
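A hedged sketch: split the column's underlying values rather than the frame, so the 'fruit' header never appears, and each chunk is already a plain list that <code>json.dumps</code> can serialise:

```python
import json

import numpy as np
import pandas as pd

myDf = pd.DataFrame(
    ["banana", "apple", "watermelon", "grapes", "cherry", "blueberry"],
    columns=["fruit"],
)

# Splitting the numpy values instead of the DataFrame drops the header.
chunks = np.array_split(myDf["fruit"].to_numpy(), 2)
l1 = [chunk.tolist() for chunk in chunks]

out = json.dumps(l1, indent=4)
```

<code>np.array_split</code> also handles lengths that do not divide evenly, giving chunks that differ by at most one element.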
<python><json><pandas>
2023-02-14 15:23:40
1
423
Dutt
75,449,889
2,263,683
Check if request is coming from Swagger UI
<p>Using <code>Python</code> and <code>Starlette</code> or <code>FastAPI</code>, how can I know if the request is coming from the Swagger UI or anywhere else (Postman, frontend app)?</p> <p>I tried to see if there's something in the <code>Request</code> object which I can use:</p> <pre><code>from fastapi import Request @app.get(&quot;/&quot;) async def root(request: Request): # request.client.host just returns some IP # request.headers doesn't contain any hint # request.scope ? request_from_swagger = request.hints_on_whether_request_is_coming_from_swagger_ui if request_from_swagger: return {&quot;message&quot;: &quot;Hello Swagger UI&quot;} return {&quot;message&quot;: &quot;Hello World&quot;} </code></pre> <p>I need to take some actions based on that. So is there any way I can tell whether the request is coming from the Swagger UI?</p>
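There is no built-in flag for this. A common heuristic, and it is only a heuristic since clients can forge or strip headers, is to look at the <code>Referer</code> header, which requests fired from the Swagger UI page carry as the <code>/docs</code> URL. A minimal sketch of the check (the helper name is made up):

```python
def from_swagger_ui(headers) -> bool:
    """Heuristic: Swagger UI requests carry a Referer ending in /docs.

    `headers` is any mapping with lowercase keys; with FastAPI you would
    pass `request.headers` (which is case-insensitive). Proxies and
    browsers may strip the Referer, so treat False as "unknown",
    not "definitely not Swagger".
    """
    referer = headers.get("referer", "")
    return referer.rstrip("/").endswith("/docs")
```

Inside the route this becomes <code>if from_swagger_ui(request.headers): ...</code>.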
<python><request><fastapi><starlette>
2023-02-14 15:18:49
1
15,775
Ghasem
75,449,820
18,883,443
ValueError: cannot reshape array of size 777600 into shape (720,1080,5)
<p>The following lines:</p> <pre><code>currentImageClip = ImageClip(r'path/to/image') imageClipWithCaption = CompositeVideoClip([ColorClip(size=(1080, 720), color='black'), currentImageClip]) </code></pre> <p>gives me the following error:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\samlb\Documents\folder\pythonFile.py&quot;, line 140, in makeVideo imageClipWithCaption = CompositeVideoClip([ColorClip(size=(1080, 720), color='black'), currentImageClip]) File &quot;C:\Users\samlb\AppData\Roaming\Python\Python39\site-packages\moviepy\video\VideoClip.py&quot;, line 1157, in __init__ np.tile(color, w * h).reshape(shape), is_mask=is_mask, duration=duration ValueError: cannot reshape array of size 777600 into shape (720,1080,5) </code></pre> <p>How can I avoid this error? Does it have something to do with the type of image I'm trying to composite?</p> <p>This is the image I'm using (.png):</p> <p><a href="https://i.sstatic.net/noQQb.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/noQQb.jpg" alt="enter image description here" /></a></p> <p>General Info: Python 3.9.13, moviepy 2.0.0.dev2-, Windows 10</p> <p>Relevant moviepy code:</p> <p>ImageClip in <a href="https://github.com/Zulko/moviepy/blob/master/moviepy/video/VideoClip.py" rel="nofollow noreferrer">https://github.com/Zulko/moviepy/blob/master/moviepy/video/VideoClip.py</a> CompositeVideoClip in <a href="https://github.com/Zulko/moviepy/blob/master/moviepy/video/compositing/CompositeVideoClip.py" rel="nofollow noreferrer">https://github.com/Zulko/moviepy/blob/master/moviepy/video/compositing/CompositeVideoClip.py</a></p>
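The failing reshape comes from the <code>color</code> argument: moviepy derives the channel count from <code>len(color)</code>, and the string <code>'black'</code> has length 5, hence the impossible <code>(720, 1080, 5)</code> target. A numpy sketch mirroring the <code>np.tile(...).reshape(...)</code> line from the traceback shows the arithmetic working once an RGB tuple is used:

```python
import numpy as np

w, h = 1080, 720

# moviepy's ColorClip builds its frame roughly like this, taking the
# channel count from the color argument, so color must be an RGB tuple,
# not a name string like 'black' (whose len() is 5).
color = (0, 0, 0)
frame = np.tile(color, w * h).reshape((h, w, len(color)))
```

So passing <code>ColorClip(size=(1080, 720), color=(0, 0, 0))</code> instead of <code>color='black'</code> should avoid the error.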
<python><moviepy>
2023-02-14 15:12:38
0
634
THEMOUNTAINSANDTHESKIES
75,449,742
4,495,790
How to visualize a boxplot for the full column and per category
<p>I want to have a multiple boxplot for my Pandas DataFrame with different boxes for <code>num_column</code> per each category levels in <code>cat_column</code> AND one box for the entire <code>num_column</code>. So far the best that I could do is double subplots (one for the entire column and one for the per category boxes):</p> <pre><code>import seaborn as sns import matplotlib.pyplot as plt plt.subplots(1, 2) plt.subplot(1, 2, 1) sns.boxplot(data=df, y=&quot;num_column&quot;) plt.title(&quot;Full Column&quot;) plt.subplot(1, 2, 2) sns.boxplot(data=df, x=&quot;cat_column&quot;, y=&quot;num_column&quot;) plt.title(&quot;Categories&quot;) plt.show() </code></pre> <p>I would like to have all in one plot.</p>
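One single-axes approach, sketched here with a toy frame in place of the real <code>df</code>: duplicate the data under a synthetic &quot;All&quot; level of <code>cat_column</code>, then a single <code>sns.boxplot</code> call draws the overall box next to the per-category boxes:

```python
import pandas as pd

df = pd.DataFrame({"cat_column": ["a", "a", "b", "b"],
                   "num_column": [1.0, 2.0, 3.0, 4.0]})

# A copy of all rows relabelled "All", concatenated before the real
# levels, so the first box shows the full column and the rest the
# individual categories.
plot_df = pd.concat([df.assign(cat_column="All"), df], ignore_index=True)

# sns.boxplot(data=plot_df, x="cat_column", y="num_column")  # needs seaborn
```

The seaborn call is commented out so the data-preparation part stands alone; with seaborn installed it produces one figure containing all boxes.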
<python><matplotlib><seaborn><visualization>
2023-02-14 15:06:15
1
459
Fredrik
75,449,717
1,736,294
Fastapi Dependency Injection with CLI Arguments
<p>I want my fastapi routes to include a dependency injection formed from the parameters of a CLI.</p> <p>In the skeleton code below, a, b and c are the CLI parameters, Consort is the DI and the fastapi class is King.</p> <p>How can this be achieved?</p> <pre><code>import charles, william, george #program modules from fastapi import FastAPI, Depends app = FastAPI() class Consort: def __init__(self, a, b, c): self.x = a_stuff(a) self.y = b_stuff(b) self.z = c_stuff(c) class King: def __init__(self, a, b, c): ... ... @router.post(&quot;/create&quot;) async def create(self, consort=Depends(Consort())): return charles.create() @router.post(&quot;/read&quot;) async def read(self, consort=Depends(Consort())): return william.read() @router.post(&quot;/update&quot;) async def update(self, consort=Depends(Consort())): return george.update() @router.post(&quot;/delete&quot;) async def delete(self, consort=Depends(Consort())): return elizabeth.delete() def main(args): a, b, c = arg_parse() service = King(a, b, c) uvicorn.run(... ... ...) return if __name__ == &quot;__main__&quot;: main(sys.argv) </code></pre>
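One common pattern (a generic sketch, not FastAPI-specific machinery) is to construct the dependency once in <code>main()</code> after parsing the CLI, keep it in a module-level slot, and expose a provider function that routes reference as <code>Depends(get_consort)</code>; FastAPI's <code>app.dependency_overrides</code> can achieve the same substitution:

```python
class Consort:
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c  # stand-ins for a_stuff(a) etc.

_consort = None  # module-level slot, filled once by main()

def get_consort() -> Consort:
    # Routes declare: consort=Depends(get_consort)
    if _consort is None:
        raise RuntimeError("Consort not initialised; call init_consort() first")
    return _consort

def init_consort(a, b, c) -> None:
    global _consort
    _consort = Consort(a, b, c)
```

<code>main()</code> would call <code>init_consort(a, b, c)</code> before <code>uvicorn.run(...)</code>, so every request resolves <code>Depends(get_consort)</code> to the CLI-configured instance.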
<python><dependencies><singleton><fastapi>
2023-02-14 15:04:16
1
4,617
Henry Thornton
75,449,595
13,176,896
wxpython combobox value is not changed once selected
<p>I have wxpython code (wxpython version: 4.2.0). It has two ComboBoxes to select the values of x and y, and the list of values for y is determined by x.</p> <pre><code>self.x = wx.ComboBox( self, wx.ID_ANY, &quot;x&quot;, wx.Point( 60, 43 ), (220, 30), x_choices, 0 ) self.x.Bind(wx.EVT_COMBOBOX, self.update_y) def update_y(self, event): y_choice = df.loc[df['A'] == self.x.GetStringSelection(), 'y'].tolist() self.y = wx.ComboBox( self, wx.ID_ANY, &quot;y&quot;, (60, 73), (220, 30), y_choice, 0 ) </code></pre> <p>It works on a MacBook but does not work on Windows 10. To be more concrete, once a value of x is selected, the value cannot be changed anymore. How can I solve this problem, and why does it occur?</p>
<python><combobox><window><wxpython>
2023-02-14 14:54:47
1
2,642
Gilseung Ahn
75,449,454
7,074,969
Image has 3 channels but it's in a grayscale color. If I change it to 1 channel, it goes into RGB
<p>I started doing some image-processing in python and I've stumbled upon an issue which is kind of confusing from a beginner's perspective. I have a dataset of 1131 np arrays (images) of MRI on knee. The shape of the images is kind of weird, it is <code>(44, 256, 256)</code> meaning that one array has 44 images with size of 256x256 pixels. For instance, if I show the 22nd image (middle one), with <code>plt.imshow(image[22])</code> this is what I get:</p> <p><a href="https://i.sstatic.net/Q6LHk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q6LHk.png" alt="enter image description here" /></a></p> <p>This image clearly seems like an RGB but if I do <code>image[22].shape</code>, I get <code>(256,256)</code> which is expected but at the same time confusing because it says that it has no channels and from what I know, images that don't have channels should be in grayscale color but clearly this is not the case here.</p> <p>I dived deeper and tried executing <code>cv2.cvtColor(image[22, :, :], cv2.COLOR_BGR2RGB)</code> which resulted in:</p> <p><a href="https://i.sstatic.net/mLlPG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mLlPG.png" alt="enter image description here" /></a></p> <p>This does seem like a grayscale image but the shape of this image is <code>(256, 256, 3)</code>. This is so confusing for me and can someone point it out why does this happen, grayscale image with 3 channels and colorful image with no channels and how can I have this <code>image[22]</code> in grayscale mode with 1 channel? I need it in 1 channel because I want to do a feature extraction but if I do it this way, pure <code>image[22]</code> and the &quot;grayscale&quot; variant have the same values for each feature but from the looks of it, they do not seem exactly.</p> <p>The numpy array (image) can be taken from <a href="https://pastebin.com/aWyPXrBS" rel="nofollow noreferrer">here</a>.</p>
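What looks like colour in the first screenshot is matplotlib's default <code>viridis</code> colormap applied to 2-D data: a <code>(256, 256)</code> array has no channels, so <code>plt.imshow(image[22], cmap='gray')</code> shows it as true grayscale. Conversely, converting such data to RGB just replicates the single channel three times, which is why the result is <code>(256, 256, 3)</code> yet looks gray. A small sketch of that replication:

```python
import numpy as np

img = np.random.rand(256, 256)   # one MRI slice: 2-D, no channel axis

# Replicating the single channel three times mimics what a gray->RGB
# conversion does: the shape becomes (256, 256, 3) but all channels
# are identical, so the picture still *looks* grayscale.
rgb_like = np.stack([img, img, img], axis=-1)
```

For feature extraction the 2-D <code>image[22]</code> can be used directly; it already is the 1-channel grayscale data.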
<python><matplotlib>
2023-02-14 14:44:20
2
1,013
anthino12
75,449,407
9,195,600
How to sort bounding boxes from right to left and top to bottom
<p>I have a use case where I want to sort boundary boxes from the top right to the bottom left.</p> <blockquote> <p>args: dt_boxes(array): detected text boxes with shape [4, 2] return: sorted boxes(array) with shape [4, 2]</p> </blockquote> <p><strong>Example 1:</strong></p> <pre><code>[[[258.0, 52.0], [329.0, 46.0], [329.0, 72.0], [260.0, 76.0]], [[91.0, 32.0], [174.0, 43.0], [175.0, 68.0], [90.0, 64.0]], [[182.0, 45.0], [266.0, 42.0], [266.0, 69.0], [184.0, 74.0]], [[16.0, 41.0], [97.0, 39.0], [98.0, 64.0], [17.0, 69.0]], [[226.0, 4.0], [296.0, 4.0], [296.0, 32.0], [227.0, 35.0]], [[288.0, 9.0], [331.0, 2.0], [331.0, 36.0], [289.0, 39.0]]] </code></pre> <p><strong>Example 2:</strong></p> <pre><code>[[[224.0, 85.0], [381.0, 83.0], [381.0, 128.0], [223.0, 126.0]], [[412.0, 88.0], [544.0, 81.0], [545.0, 129.0], [413.0, 134.0]], [[291.0, 18.0], [357.0, 18.0], [357.0, 56.0], [292.0, 68.0]], [[122.0, 12.0], [295.0, 11.0], [296.0, 57.0], [125.0, 64.0]], [[350.0, 22.0], [435.0, 11.0], [435.0, 55.0], [351.0, 66.0]], [[442.0, 15.0], [538.0, 11.0], [539.0, 49.0], [442.0, 57.0]], [[9.0, 12.0], [125.0, 8.0], [127.0, 54.0], [10.0, 63.0]]] </code></pre> <p>I have used the <code>sort</code> function like this to sort using the 2nd element (<strong>Top Right</strong>) and 3rd Element (<strong>Bottom Right</strong>) in the array but it doesn't provide the correct sorting <a href="https://i.sstatic.net/6646X.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6646X.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/JmsxF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JmsxF.png" alt="enter image description here" /></a></p> <p>Results to be achieved : <a href="https://i.sstatic.net/cgA6E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cgA6E.png" alt="enter image description here" /></a></p> <p>Boundaries are upper left, upper right, lower right, and lower left.</p> <pre><code>sorted_boxes = sorted(dt_boxes, 
key=lambda x: (x[1][1], x[1][0]), reverse=True) </code></pre>
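A single lexicographic sort cannot express this, because boxes on the same text line have slightly different y values. A common approach is to first cluster boxes into lines by their top-edge y and then sort each line right-to-left by x; the tolerance below is a guessed, text-size-dependent parameter:

```python
def sort_boxes(dt_boxes, row_tol=20):
    """Sort boxes top-to-bottom, then right-to-left within each line.

    Boxes whose top-left y values are within `row_tol` pixels are
    treated as the same text line (tune this to the text height).
    """
    boxes = sorted(dt_boxes, key=lambda b: b[0][1])   # by top-left y
    rows, current = [], [boxes[0]]
    for box in boxes[1:]:
        if abs(box[0][1] - current[-1][0][1]) < row_tol:
            current.append(box)                       # same line
        else:
            rows.append(current)                      # start a new line
            current = [box]
    rows.append(current)
    # right-to-left inside a line: sort by top-right x, descending
    return [b for row in rows for b in sorted(row, key=lambda b: -b[1][0])]
```

On example 1 this yields the two top boxes first (rightmost of the pair first), then the lower line from right to left.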
<python><sorting><bounding-box>
2023-02-14 14:40:16
1
633
ahmed osama
75,449,378
15,358,800
Pad middle values based on previous and next values
<p>Let's say I have a df like this</p> <pre><code> col col2 0 0 Repeat1 1 3 Repeat2 2 5 Repeat3 3 7 Repeat4 4 9 Repeat5 </code></pre> <p>Reproducible:</p> <pre><code>L= [0,3,5,7,9] L2 = ['Repeat1','Repeat2','Repeat3','Repeat4','Repeat5'] import pandas as pd df = pd.DataFrame({'col':L}) df['col2']= L2 print (df) </code></pre> <p>How can I fill the missing intermediate values so that my df looks like this?</p> <pre><code> col col2 0 0 Repeat1 1 1 Repeat1 2 2 Repeat1 3 3 Repeat2 4 4 Repeat2 5 5 Repeat3 6 6 Repeat3 7 7 Repeat4 8 8 Repeat4 9 9 Repeat5 </code></pre> <p>Similar threads I've tried:</p> <p><a href="https://stackoverflow.com/questions/37821653/filling-missing-middle-values-in-pandas-dataframe">Filling missing middle values in pandas dataframe</a> (fills NaN for intermediate values, but I don't need NaN)</p> <p><a href="https://stackoverflow.com/questions/28798076/fill-pandas-dataframe-with-values-in-between">Fill pandas dataframe with values in between</a> (a very big approach; I'm looking for a more functional approach)</p> <p>Both helped me to some extent, but I was wondering: is there a better way to do it? :D</p>
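A functional one-chain sketch: make <code>col</code> the index, reindex onto the full integer range, and forward-fill the labels:

```python
import pandas as pd

df = pd.DataFrame({"col": [0, 3, 5, 7, 9],
                   "col2": ["Repeat1", "Repeat2", "Repeat3", "Repeat4", "Repeat5"]})

out = (df.set_index("col")
         .reindex(range(df["col"].min(), df["col"].max() + 1))  # full 0..9 range
         .ffill()                                               # pad labels downward
         .reset_index())
```

<code>reindex</code> inserts the missing integer positions as NaN rows and <code>ffill</code> copies each label down until the next one starts, matching the desired output.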
<python><pandas>
2023-02-14 14:37:47
3
4,891
Bhargav
75,449,296
11,586,490
Unable to find element that looks like it's dynamically generated
<p>I'm trying to find the email address text located in the below URL. You can clearly see the email address but I believe the text is generated dynamically through Javascript/React. When I copy the XPATH or CSS Selector and try to find the element like I would any other element I just get an error saying the element cannot be found.</p> <p>I've tried time.sleep(30) to give the page time to fully load but that's not the issue.</p> <p>I've tried:</p> <pre><code>driver.find_element(By.XPATH, '//*[@id=&quot;mount_0_0_D8&quot;]/div/div[1]/div/div[5]/div/div/div[3]/div/div/div[1]/div[1]/div/div/div[4]/div[2]/div/div[1]/div[2]/div/div[1]/div/div/div/div/div[2]/div[2]/div/ul/div[2]/div[2]/div/div/span') </code></pre> <p>You can see from the snippet below that the email is visible but is between some ::before and ::after text I've not seen before. <a href="https://i.sstatic.net/wP7cT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wP7cT.png" alt="Inspect snippet" /></a></p> <p><a href="https://www.facebook.com/homieestudio" rel="nofollow noreferrer">https://www.facebook.com/homieestudio</a></p> <p>Any ideas on how to consistently pull back the email address here? I'm using Chromedriver.</p>
<python><selenium-webdriver><selenium-chromedriver>
2023-02-14 14:30:58
2
351
Callum
75,448,985
1,018,861
Create duplicates of rows based on values in another column
<p>I'm trying to build a histogram of some data in polars. As part of my histogram code, I need to duplicate some rows. I've got a column of values, where each row also has a weight that says how many times the row should be added to the histogram.</p> <p>How can I duplicate my <code>value</code> rows according to the <code>weight</code> column?</p> <p>Here is some example data, with a target series:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame({&quot;value&quot;:[1,2,3], &quot;weight&quot;:[2, 2, 1]}) print(df) # shape: (3, 2) # ┌───────┬────────┐ # │ value ┆ weight │ # │ --- ┆ --- │ # │ i64 ┆ i64 │ # ╞═══════╪════════╡ # │ 1 ┆ 2 │ # │ 2 ┆ 2 │ # │ 3 ┆ 1 │ # └───────┴────────┘ s_target = pl.Series(name=&quot;value&quot;, values=[1,1,2,2,3]) print(s_target) # shape: (5,) # Series: 'value' [i64] # [ # 1 # 1 # 2 # 2 # 3 # ] </code></pre>
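In polars this should be expressible as <code>df.select(pl.col(&quot;value&quot;).repeat_by(&quot;weight&quot;).explode())</code>: <code>repeat_by</code> turns each value into a list repeated by the matching weight, and <code>explode</code> flattens those lists into rows. The same expansion, sketched library-agnostically with <code>numpy.repeat</code>:

```python
import numpy as np

values = [1, 2, 3]
weights = [2, 2, 1]

# Each value is emitted `weight` times, matching the target series.
expanded = np.repeat(values, weights).tolist()
```

Either route reproduces <code>s_target</code> from the question.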
<python><dataframe><python-polars>
2023-02-14 14:06:11
2
3,252
TomNorway
75,448,714
4,025,404
Writing a DataFrame to an excel file where items in a list are put into separate cells
<p>Consider a dataframe like <code>pivoted</code>, where replicates of some data are given as lists in a dataframe:</p> <pre><code> d = {'Compound': ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C', 'C'], 'Conc': [1, 0.5, 0.1, 1, 0.5, 0.1, 2, 1, 0.5, 0.1], 'Data': [[100, 90, 80], [50, 40, 30], [10, 9.7, 8], [20, 15, 10], [3, 4, 5, 6], [100, 110, 80], [30, 40, 50, 20], [10, 5, 9, 3], [2, 1, 2, 2], [1, 1, 0]]} df = pd.DataFrame(data=d) pivoted = df.pivot(index='Conc', columns='Compound', values='Data') </code></pre> <p>This df can be written to an excel file as such:</p> <pre><code>with pd.ExcelWriter('output.xlsx') as writer: pivoted.to_excel(writer, sheet_name='Sheet1', index_label='Conc') </code></pre> <p><a href="https://i.sstatic.net/KMoj7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KMoj7.png" alt="enter image description here" /></a></p> <p>How can this instead be written where replicate data are given in side-by-side cells? Desired excel file: <a href="https://i.sstatic.net/BxbED.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BxbED.png" alt="enter image description here" /></a></p>
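One possible approach (a sketch, assuming pandas >= 1.1 for multi-column `pivot`) is to explode the lists into one row per replicate, number the replicates within each group, and pivot with a two-level column index so each replicate lands in its own cell when written to Excel:

```python
import pandas as pd

d = {'Compound': ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C', 'C'],
     'Conc': [1, 0.5, 0.1, 1, 0.5, 0.1, 2, 1, 0.5, 0.1],
     'Data': [[100, 90, 80], [50, 40, 30], [10, 9.7, 8],
              [20, 15, 10], [3, 4, 5, 6], [100, 110, 80],
              [30, 40, 50, 20], [10, 5, 9, 3], [2, 1, 2, 2], [1, 1, 0]]}
df = pd.DataFrame(data=d)

# One row per replicate, numbered within each (Compound, Conc) group
long = df.explode('Data')
long['rep'] = long.groupby(['Compound', 'Conc']).cumcount()

# Two-level columns: (Compound, replicate number); missing replicates become NaN
wide = long.pivot(index='Conc', columns=['Compound', 'rep'], values='Data')

# wide can then be written the same way as before:
# with pd.ExcelWriter('output.xlsx') as writer:
#     wide.to_excel(writer, sheet_name='Sheet1')
print(wide.loc[1, ('A', 0)])  # 100
```

The MultiIndex column header produces two header rows in Excel (compound on top, replicate number below), which may or may not match the exact desired layout.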
<python><excel><pandas><dataframe>
2023-02-14 13:42:50
2
957
John Crow
75,448,340
5,740,397
Python and ETL: Job and workflow/task management, triggers, and other concepts
<p>I am writing a very simple ETL(T) pipeline. Currently it:</p> <ul> <li>checks an FTP server for new CSV files</li> <li>downloads them if any exist</li> <li>applies some initial transformations</li> <li>bulk inserts the individual CSVs into an MS SQL DB</li> <li>applies some additional transformations</li> </ul> <p>There can be a lot of CSV files. The script runs OK for the moment, but I have no concept of how to actually create a &quot;management&quot; layer around this. Currently my pipeline runs linearly: I have a list of the filenames that need to be loaded, and (in a loop) I load them into the DB.</p> <p>If something fails, the whole pipeline has to rerun. I do not manage the state of the pipeline (i.e. has a specific file already been downloaded and transformed/changed?). There is no way to start from an intermediate point. How could I break this down into individual tasks that need to be performed?</p> <p>I roughly know of tools like Airflow, but I feel that this is only a part of the necessary tooling, and frankly I am too uneducated in this area to even ask the right questions.</p> <p>It would be really nice if somebody could point me in the right direction of what I am missing and what tools are available. Thanks in advance.</p>
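Whichever orchestrator ends up being used, a common first step is to make each file-level task idempotent by recording finished work in a small state store, so a rerun skips what already succeeded. Below is a minimal sketch using a JSON manifest; the file names and manifest location are made up for illustration.

```python
import json
import tempfile
from pathlib import Path

def load_manifest(path: Path) -> set:
    """Set of filenames already processed successfully."""
    if path.exists():
        return set(json.loads(path.read_text()))
    return set()

def mark_done(path: Path, filename: str) -> None:
    """Record one filename as finished (call after a successful load)."""
    done = load_manifest(path)
    done.add(filename)
    path.write_text(json.dumps(sorted(done)))

def files_to_process(all_files, manifest_path: Path):
    """Only files not yet in the manifest, so reruns skip finished work."""
    done = load_manifest(manifest_path)
    return [f for f in all_files if f not in done]

# Demo with a throwaway manifest location
manifest = Path(tempfile.mkdtemp()) / "manifest.json"
mark_done(manifest, "a.csv")
remaining = files_to_process(["a.csv", "b.csv"], manifest)
print(remaining)  # ['b.csv']
```

The same idea scales up to a small database table keyed by filename and stage (downloaded, transformed, loaded), which is roughly what tools like Airflow track for you per task instance.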
<python><etl><workflow><pipeline>
2023-02-14 13:10:52
1
565
NorrinRadd
75,448,320
3,617,165
Py4JJavaError when invoking pyspark distinct
<p>I am running a spark script in a course with Jupyter Notebook in Windows 10. I create my session with no problems and invoke the collect for an rdd that we tested.</p> <p>But when I try to run it with a distinct, I get the following error and I can't figure out how to fix it.</p> <pre><code>import findspark findspark.init() findspark.find() import pyspark from pyspark import SparkConf from pyspark.sql import SparkSession spark = SparkSession.builder.appName(&quot;First Level Fundamentals&quot;)\ .config(&quot;spark.driver.extraJavaOptions&quot;,&quot;--add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED&quot;,)\ .getOrCreate() #spark.conf.set(&quot;spark.kryo.registrationRequired&quot;,&quot;false&quot;) sc = spark.sparkContext #This Work num_list = [i for i in range(1,6)] num_rdd = sc.parallelize(num_list) num_rdd.collect() #This fail list_rep = [1,1,2,3,3,4,5,5,6,6,7,7,7,7] rep_rdd = sc.parallelize(list_rep) #Aplied distinct dist_rdd = num_rdd.distinct() dist_rdd.collect() </code></pre> <p>This is the error that I get when I make the distinct</p> <pre><code>--------------------------------------------------------------------------- Py4JJavaError Traceback (most recent call last) Input In [9], in &lt;cell line: 1&gt;() ----&gt; 1 dist_rdd.collect() File C:\spark\spark-3.2.1-bin-hadoop2.7\python\pyspark\rdd.py:950, in RDD.collect(self) 941 &quot;&quot;&quot; 942 Return a list that contains all of the elements in this RDD. 943 (...) 947 to be small, as all the data is loaded into the driver's memory. 
948 &quot;&quot;&quot; 949 with SCCallSiteSync(self.context) as css: --&gt; 950 sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd()) 951 return list(_load_from_socket(sock_info, self._jrdd_deserializer)) File C:\spark\spark-3.2.1-bin-hadoop2.7\python\lib\py4j-0.10.9.3-src.zip\py4j\java_gateway.py:1321, in JavaMember.__call__(self, *args) 1315 command = proto.CALL_COMMAND_NAME +\ 1316 self.command_header +\ 1317 args_command +\ 1318 proto.END_COMMAND_PART 1320 answer = self.gateway_client.send_command(command) -&gt; 1321 return_value = get_return_value( 1322 answer, self.gateway_client, self.target_id, self.name) 1324 for temp_arg in temp_args: 1325 temp_arg._detach() File C:\spark\spark-3.2.1-bin-hadoop2.7\python\pyspark\sql\utils.py:111, in capture_sql_exception.&lt;locals&gt;.deco(*a, **kw) 109 def deco(*a, **kw): 110 try: --&gt; 111 return f(*a, **kw) 112 except py4j.protocol.Py4JJavaError as e: 113 converted = convert_exception(e.java_exception) File C:\spark\spark-3.2.1-bin-hadoop2.7\python\lib\py4j-0.10.9.3-src.zip\py4j\protocol.py:326, in get_return_value(answer, gateway_client, target_id, name) 324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client) 325 if answer[1] == REFERENCE_TYPE: --&gt; 326 raise Py4JJavaError( 327 &quot;An error occurred while calling {0}{1}{2}.\n&quot;. 328 format(target_id, &quot;.&quot;, name), value) 329 else: 330 raise Py4JError( 331 &quot;An error occurred while calling {0}{1}{2}. Trace:\n{3}\n&quot;. 332 format(target_id, &quot;.&quot;, name, value)) Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe. 
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 2.0 failed 1 times, most recent failure: Lost task 3.0 in stage 2.0 (TID 19) (T203598.tasa.telefonica.com.ar executor driver): java.lang.IllegalArgumentException: Unable to create serializer &quot;com.esotericsoftware.kryo.serializers.FieldSerializer&quot; for class: java.nio.HeapByteBuffer at com.esotericsoftware.kryo.factories.ReflectionSerializerFactory.makeSerializer(ReflectionSerializerFactory.java:65) at com.esotericsoftware.kryo.factories.ReflectionSerializerFactory.makeSerializer(ReflectionSerializerFactory.java:43) at com.esotericsoftware.kryo.Kryo.newDefaultSerializer(Kryo.java:396) at com.twitter.chill.KryoBase.newDefaultSerializer(KryoBase.scala:62) at com.esotericsoftware.kryo.Kryo.getDefaultSerializer(Kryo.java:380) at com.esotericsoftware.kryo.Kryo.register(Kryo.java:410) at org.apache.spark.serializer.KryoSerializer.$anonfun$newKryo$2(KryoSerializer.scala:141) at scala.collection.immutable.List.foreach(List.scala:431) at org.apache.spark.serializer.KryoSerializer.newKryo(KryoSerializer.scala:140) at org.apache.spark.serializer.KryoSerializer$$anon$1.create(KryoSerializer.scala:102) at com.esotericsoftware.kryo.pool.KryoPoolQueueImpl.borrow(KryoPoolQueueImpl.java:48) at org.apache.spark.serializer.KryoSerializer$PoolWrapper.borrow(KryoSerializer.scala:109) at org.apache.spark.serializer.KryoSerializerInstance.borrowKryo(KryoSerializer.scala:346) at org.apache.spark.serializer.KryoSerializationStream.&lt;init&gt;(KryoSerializer.scala:266) at org.apache.spark.serializer.KryoSerializerInstance.serializeStream(KryoSerializer.scala:432) at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:157) at org.apache.spark.storage.DiskBlockObjectWriter.write(DiskBlockObjectWriter.scala:279) at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:171) at 
org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52) at org.apache.spark.scheduler.Task.run(Task.scala:131) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:833) Caused by: java.lang.reflect.InvocationTargetException at java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:79) at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499) at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:483) at com.esotericsoftware.kryo.factories.ReflectionSerializerFactory.makeSerializer(ReflectionSerializerFactory.java:51) ... 
27 more Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make field final byte[] java.nio.ByteBuffer.hb accessible: module java.base does not &quot;opens java.nio&quot; to unnamed module @34f5090e at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354) at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297) at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:180) at java.base/java.lang.reflect.Field.setAccessible(Field.java:174) at com.esotericsoftware.kryo.serializers.FieldSerializer.buildValidFields(FieldSerializer.java:283) at com.esotericsoftware.kryo.serializers.FieldSerializer.rebuildCachedFields(FieldSerializer.java:216) at com.esotericsoftware.kryo.serializers.FieldSerializer.rebuildCachedFields(FieldSerializer.java:157) at com.esotericsoftware.kryo.serializers.FieldSerializer.&lt;init&gt;(FieldSerializer.java:150) at com.esotericsoftware.kryo.serializers.FieldSerializer.&lt;init&gt;(FieldSerializer.java:134) at com.esotericsoftware.kryo.serializers.FieldSerializer.&lt;init&gt;(FieldSerializer.java:130) at java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:67) ... 
30 more Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2454) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2403) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2402) at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2402) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1160) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1160) at scala.Option.foreach(Option.scala:407) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1160) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2642) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2584) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2573) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:938) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2214) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2235) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2254) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2279) at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1030) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) at org.apache.spark.rdd.RDD.withScope(RDD.scala:414) at org.apache.spark.rdd.RDD.collect(RDD.scala:1029) at 
org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:180) at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala) at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104) at java.base/java.lang.reflect.Method.invoke(Method.java:577) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182) at py4j.ClientServerConnection.run(ClientServerConnection.java:106) at java.base/java.lang.Thread.run(Thread.java:833) Caused by: java.lang.IllegalArgumentException: Unable to create serializer &quot;com.esotericsoftware.kryo.serializers.FieldSerializer&quot; for class: java.nio.HeapByteBuffer at com.esotericsoftware.kryo.factories.ReflectionSerializerFactory.makeSerializer(ReflectionSerializerFactory.java:65) at com.esotericsoftware.kryo.factories.ReflectionSerializerFactory.makeSerializer(ReflectionSerializerFactory.java:43) at com.esotericsoftware.kryo.Kryo.newDefaultSerializer(Kryo.java:396) at com.twitter.chill.KryoBase.newDefaultSerializer(KryoBase.scala:62) at com.esotericsoftware.kryo.Kryo.getDefaultSerializer(Kryo.java:380) at com.esotericsoftware.kryo.Kryo.register(Kryo.java:410) at org.apache.spark.serializer.KryoSerializer.$anonfun$newKryo$2(KryoSerializer.scala:141) at scala.collection.immutable.List.foreach(List.scala:431) at org.apache.spark.serializer.KryoSerializer.newKryo(KryoSerializer.scala:140) at org.apache.spark.serializer.KryoSerializer$$anon$1.create(KryoSerializer.scala:102) at com.esotericsoftware.kryo.pool.KryoPoolQueueImpl.borrow(KryoPoolQueueImpl.java:48) at org.apache.spark.serializer.KryoSerializer$PoolWrapper.borrow(KryoSerializer.scala:109) at 
org.apache.spark.serializer.KryoSerializerInstance.borrowKryo(KryoSerializer.scala:346) at org.apache.spark.serializer.KryoSerializationStream.&lt;init&gt;(KryoSerializer.scala:266) at org.apache.spark.serializer.KryoSerializerInstance.serializeStream(KryoSerializer.scala:432) at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:157) at org.apache.spark.storage.DiskBlockObjectWriter.write(DiskBlockObjectWriter.scala:279) at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:171) at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52) at org.apache.spark.scheduler.Task.run(Task.scala:131) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ... 1 more Caused by: java.lang.reflect.InvocationTargetException at java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:79) at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499) at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:483) at com.esotericsoftware.kryo.factories.ReflectionSerializerFactory.makeSerializer(ReflectionSerializerFactory.java:51) ... 
27 more Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make field final byte[] java.nio.ByteBuffer.hb accessible: module java.base does not &quot;opens java.nio&quot; to unnamed module @34f5090e at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354) at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297) at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:180) at java.base/java.lang.reflect.Field.setAccessible(Field.java:174) at com.esotericsoftware.kryo.serializers.FieldSerializer.buildValidFields(FieldSerializer.java:283) at com.esotericsoftware.kryo.serializers.FieldSerializer.rebuildCachedFields(FieldSerializer.java:216) at com.esotericsoftware.kryo.serializers.FieldSerializer.rebuildCachedFields(FieldSerializer.java:157) at com.esotericsoftware.kryo.serializers.FieldSerializer.&lt;init&gt;(FieldSerializer.java:150) at com.esotericsoftware.kryo.serializers.FieldSerializer.&lt;init&gt;(FieldSerializer.java:134) at com.esotericsoftware.kryo.serializers.FieldSerializer.&lt;init&gt;(FieldSerializer.java:130) at java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:67) ... 30 more </code></pre> <p>At some point I read that setting org.apache.spark.serializer.KryoSerializer might fix it, but that didn't work either:</p> <pre><code>spark = SparkSession.builder.appName(&quot;First Level Fundamentals&quot;)\ .config(&quot;spark.driver.extraJavaOptions&quot;, &quot;--add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED&quot;)\
.config(&quot;spark.executor.extraJavaOptions&quot;, &quot;-Dspark.serializer=org.apache.spark.serializer.KryoSerializer&quot;)\ .getOrCreate() </code></pre> <p>These are the versions I am using for this example:</p> <p><a href="https://i.sstatic.net/jMA5F.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jMA5F.jpg" alt="enter image description here" /></a></p> <p>Does anyone know how I can solve this error?</p>
<python><windows><apache-spark><pyspark><jupyter-notebook>
2023-02-14 13:09:33
0
368
alejomarchan
75,448,242
4,470,126
Databricks DLT pipeline with for..loop reports error "AnalysisException: Cannot redefine dataset"
<p>I have the following code which works fine for a single table. But when I try to use a for..loop() to process all the tables in my database, I am getting the error, <code>&quot;AnalysisException: Cannot redefine dataset 'source_ds',Map(),Map(),List(),List(),Map())&quot;</code>.</p> <p>I need to pass the table name to source_ds so as to process CDC based on key &amp; sequence_columns. Appreciate any help/suggestions please.</p> <pre><code>import dlt from pyspark.sql.functions import * from pyspark.sql.types import * import time raw_db_name = &quot;raw_db&quot; def generate_silver_tables(target_table, source_table, keys_col_list): @dlt.table def source_ds(): return spark.table(f&quot;{raw_db_name}.{source_table}&quot;) ### Create the target table definition dlt.create_target_table(name=target_table, comment= f&quot;Clean, merged {target_table}&quot;, #partition_cols=[&quot;topic&quot;], table_properties={ &quot;quality&quot;: &quot;silver&quot;, &quot;pipelines.autoOptimize.managed&quot;: &quot;true&quot; } ) ## Do the merge dlt.apply_changes( target = target_table, source = &quot;source_ds&quot;, keys = keys_col_list, apply_as_deletes = expr(&quot;operation = 'DELETE'&quot;), sequence_by = col(&quot;ts_ms&quot;), ignore_null_updates = False, except_column_list = [&quot;operation&quot;, &quot;timestamp_ms&quot;], stored_as_scd_type = &quot;1&quot; ) return # THIS WORKS FINE #--------------- # raw_dbname = &quot;raw_db&quot; # raw_tbl_name = 'raw_table' # processed_tbl_name = raw_tbl_name.replace(&quot;raw&quot;, &quot;processed&quot;) # generate_silver_tables(processed_tbl_name, raw_tbl_name) table_list = spark.sql(f&quot;show tables in landing_db &quot;).collect() for row in table_list: landing_tbl_name = row.tableName s2 = spark.sql(f&quot;select key from {landing_db_name}.{landing_tbl_name} limit 1&quot;) keys_col_list = list(json.loads(s2.collect()[0][0]).keys()) raw_tbl_name = landing_tbl_name.replace(&quot;landing&quot;, &quot;raw&quot;) processed_tbl_name = 
landing_tbl_name.replace(&quot;landing&quot;, &quot;processed&quot;) generate_silver_tables(processed_tbl_name, raw_tbl_name, keys_col_list) # time.sleep(10) </code></pre>
<python><pyspark><databricks><azure-databricks><delta-live-tables>
2023-02-14 13:02:47
1
3,213
Yuva
75,448,239
33,404
Quickbase Pipelines: How to extract nested JSON array, using Jinja, into Multi-Select Text field?
<p>I am using <a href="https://www.quickbase.com/" rel="nofollow noreferrer">Quickbase</a> <a href="https://helpv2.quickbase.com/hc/en-us/articles/4570257915924-About-Quickbase-Pipelines-" rel="nofollow noreferrer">Pipelines</a> to pull data from a REST API and save it into a table. The flow of the pipeline is:</p> <pre><code>B. Fetch JSON C. Prepare Bulk Record Upsert D. Iterate over JSON records For each JSON Handler - JSON Objects (D) Do E. Add Bulk Upsert Row F. Commit Upsert </code></pre> <p>In the <em>&quot;Add a Bulk Upsert Row&quot;</em> step I am mapping fields from my table to properties of incoming JSON records.</p> <p>I am facing a challenge around &quot;custom fields&quot; in my JSON records. There are fields where instead of simply appearing as <code>&quot;key&quot;: &quot;value&quot;</code> pairs in JSON, they appear in a nested array similar to this:</p> <pre><code>&quot;custom_fields&quot;: [ { &quot;key&quot;: &quot;street&quot;, &quot;value&quot;: &quot;foo&quot;, }, { &quot;key&quot;: &quot;city&quot;, &quot;value&quot;: &quot;bar&quot;, }, { &quot;key&quot;: &quot;state&quot;, &quot;value&quot;: &quot;baz&quot;, } ] </code></pre> <p>I've been able to extract specific values by their key from this structure using this Jinja code block inside my upsert step:</p> <pre><code>{% for field in d.custom_fields %} {% if field['key'] == 'street' %} {{ field['value'] }} {% endif %} {% endfor %} </code></pre> <p>This works well for <em>Text</em>-type fields but this technique fails when trying to get a JSON array value into a <em>Multi-Select Text</em>-type field.</p> <p>When I use the a Jinja code block like the one above to parse this bit of JSON:</p> <pre><code>&quot;custom_fields&quot;: [ { &quot;key&quot;: &quot;choices&quot;, &quot;value&quot;: [ &quot;foo&quot;, &quot;bar&quot;, &quot;baz&quot; ], } ] </code></pre> <p>I get these 3 values in my Multi-Select Text field: <code>[u'foo'</code>, <code>u'bar'</code> and <code>u'baz']</code>.</p> <p>It's 
as if Quickbase / Jinja are parsing the value from JSON as a literal string rather than an array.</p> <p>My question is - <strong>How can I get the above JSON value to be transformed into the correct choices: <code>foo</code>, <code>bar</code> and <code>baz</code>?</strong></p> <hr /> <p><strong>Update:</strong> Having examined the activity logs for the pipeline, I've found that the <em>JSON Object</em> that QB derived from the actual JSON has the value as <code>&quot;[u'foo', u'bar', u'baz']&quot;</code> - a string - which explains the odd values ending up in the table field. Might this be a bug?</p> <p>I've also found that the same <em>JSON Object</em> has a copy of itself under the property <code>raw_record</code> and in that copy the value is a sensible <code>['foo', 'bar', 'baz']</code>. I've not been able, however, to make use of the <code>raw_record</code> in my Jinja templates.</p>
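For what it's worth, the string <code>"[u'foo', u'bar', u'baz']"</code> is the repr of a Python 2 list, and in plain Python it can be parsed back into a real list with `ast.literal_eval` (the `u''` prefixes are still valid string literals in Python 3). Whether Quickbase's Jinja sandbox exposes any equivalent filter is an open assumption; this only demonstrates the parsing step itself.

```python
import ast

raw = "[u'foo', u'bar', u'baz']"

# literal_eval safely evaluates Python literal syntax (no arbitrary code),
# turning the stringified list back into an actual list of strings.
choices = ast.literal_eval(raw)
print(choices)  # ['foo', 'bar', 'baz']
```

If the pipeline ever hands over `raw_record` as a real JSON array instead, no parsing would be needed at all.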
<python><json><jinja2><quickbase>
2023-02-14 13:02:29
2
16,911
urig
75,448,230
3,067,485
Filtering sites by specific tags in Django while keeping all site tags aggregated in an annotation field
<p>Let's say I have the following Django models:</p> <pre><code>class Tag(models.Model): key = models.CharField(max_length=64, unique=True) class Site(models.Model): key = models.CharField(max_length=64, unique=True) tags = models.ManyToManyField(Tag, through='SiteTag') class SiteTag(models.Model): site = models.ForeignKey(Site, on_delete=models.RESTRICT) tag = models.ForeignKey(Tag, on_delete=models.RESTRICT) </code></pre> <p>A site can have multiple tags through an explicit many-to-many relationship. When I filter sites by specific tags in the following query, the other tags get filtered out as well. What I mean is: give me the sites that have these tags, but show all the tags those sites have:</p> <pre><code>Site.objects.filter(tags__key__in=['type-1', 'type-2'])\ .annotate(tags=ArrayAgg(&quot;tags__key&quot;, distinct=True, filter=Q(tags__isnull=False))) </code></pre> <p>It does indeed retrieve the sites with either <code>type-1</code> or <code>type-2</code>, but if a selected site also has the <code>type-3</code> tag, I want to collect that in the <code>ArrayAgg</code> aggregation too. Unfortunately, with this query I will only have <code>[&quot;type-1&quot;, &quot;type-2&quot;]</code> in the annotated tags field instead of <code>[&quot;type-1&quot;, &quot;type-2&quot;, &quot;type-3&quot;]</code>.</p> <p>How can I filter sites using this criterion and still have all their tags collected in the annotation?</p>
<python><django><postgresql><filter><array-agg>
2023-02-14 13:01:57
1
11,564
jlandercy
75,448,227
14,471,688
Remove redundant tuples from dictionary based on the score
<p>I wonder if there is a fast way to remove redundant tuples from dictionary. Suppose I have a dictionary as below:</p> <pre><code>a = { 'trans': [('pickup', 1.0), ('boat', 1.0), ('plane', 1.0), ('walking', 1.0), ('foot', 1.0), ('train', 0.7455259731472191), ('trailer', 0.7227749512667475), ('car', 0.7759192750865143)], 'actor': { 'autori': [('smug', 1.0), ('pol', 1.0), ('traff', 1.0), ('local authori', 0.6894454471465952), ('driv', 0.6121365092485745), ('car', 0.6297345748705596)], 'fam': [('fa', 1.0), ('mo', 1.0), ('bro', 1.0), ('son', 0.9925431812951816), ('sis', 0.9789254869156859), ('fami', 0.8392597243422916)], 'fri': [('fri', 1.0), ('compats', 1.0), ('mo', 0.814126196299157), ('neighbor', 0.7433986938516075), ('parent', 0.32202418215134565), ('bro', 0.8496284151715676), ('fami', 0.6375584385858655), ('best fri', 0.807654599975373)] } } </code></pre> <p>In this dictionary for example we have tuples like: ('car', 0.7759192750865143) for key 'trans' and ('car', 0.6297345748705596) for key 'autori'. I want to remove the tuple ('car', 0.6297345748705596) because it has a lower score.</p> <p>My desired output is:</p> <pre><code>new_a = { 'trans': [('pickup', 1.0), ('boat', 1.0), ('plane', 1.0), ('walking', 1.0), ('foot', 1.0), ('train', 0.7455259731472191), ('trailer', 0.7227749512667475), ('car', 0.7759192750865143)], 'actor': { 'autori': [('smug', 1.0), ('pol', 1.0), ('traff', 1.0), ('local authori', 0.6894454471465952), ('driv', 0.6121365092485745)], 'fam': [('fa', 1.0), ('mo', 1.0), ('bro', 1.0), ('son', 0.9925431812951816), ('sis', 0.9789254869156859), ('fami', 0.8392597243422916)], 'fri': [('fri', 1.0), ('compats', 1.0), ('neighbor', 0.7433986938516075), ('parent', 0.32202418215134565), ('best fri', 0.807654599975373)] } } </code></pre> <p>Is there a fast way to do this or we still need to loop through all values for each key?</p>
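One sketch of a two-pass approach: walk the nested dictionary once to record the best score seen for each word anywhere in the tree, then walk it again keeping only the tuples that carry that best score. This assumes exact ties should be kept in every list where they occur, which matches the 1.0-scored words in the example.

```python
def _walk(node):
    """Yield every (word, score) list found in a possibly nested dict."""
    for value in node.values():
        if isinstance(value, dict):
            yield from _walk(value)
        else:
            yield value

def dedupe_by_score(tree):
    # Pass 1: best score seen for each word anywhere in the tree
    best = {}
    for pairs in _walk(tree):
        for word, score in pairs:
            best[word] = max(best.get(word, score), score)

    # Pass 2: keep a tuple only if it carries the best score for its word.
    # Exact ties are kept in every list they appear in.
    def rebuild(node):
        out = {}
        for key, value in node.items():
            if isinstance(value, dict):
                out[key] = rebuild(value)
            else:
                out[key] = [(w, s) for (w, s) in value if s >= best[w]]
        return out

    return rebuild(tree)

# Small demo with the 'car' conflict from the question
a = {'trans': [('car', 0.8), ('boat', 1.0)],
     'actor': {'autori': [('car', 0.6), ('pol', 1.0)]}}
result = dedupe_by_score(a)
print(result)
# {'trans': [('car', 0.8), ('boat', 1.0)], 'actor': {'autori': [('pol', 1.0)]}}
```

Both passes touch every tuple once, so this is linear in the total number of tuples rather than quadratic in the number of keys.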
<python><dictionary><tuples><redundancy>
2023-02-14 13:01:53
2
381
Erwin
75,448,200
6,372,189
How to read dbf file in python and convert it to dataframe
<p>I am trying to read a DBF file using the <code>simpledbf</code> library and convert it to a dataframe for further processing.</p> <pre><code>from simpledbf import Dbf5 dbf = Dbf5(r&quot;C:\Users\Prashant.kumar\Downloads\dbf\F1_1.DBF&quot;) df1 = dbf.to_dataframe() </code></pre> <p>Unfortunately, I am getting the following error. <a href="https://i.sstatic.net/DGaOj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DGaOj.png" alt="enter image description here" /></a></p> <p>I tried to find a solution but couldn't find one, nor can I find an alternative way to convert the DBF file to a dataframe for post-processing.</p> <p>Here is the file: <a href="https://mega.nz/folder/gKIBUKIa#rE7TmE5FToLzCblMhLLFbw" rel="nofollow noreferrer">https://mega.nz/folder/gKIBUKIa#rE7TmE5FToLzCblMhLLFbw</a></p> <p>Is there a way to read this DBF file into Python as a dataframe?</p>
<python><pandas><dbf>
2023-02-14 13:00:27
1
701
Prashant Kumar
75,448,154
8,406,122
Extracting Named Entity from Input Utterance in python using regex
<p>Say I have some strings:</p> <pre><code>&quot;Open Youtube&quot; &quot;Install PlayStore App&quot; &quot;Go to Call of Duty app&quot; </code></pre> <p>Now I have a <code>rules.list</code> file which contains all the rules needed to extract the named entity out of the above commands.</p> <p>Say the contents of the <code>rules.list</code> file are like this:</p> <pre><code>app install (.*) 1 app install app (.*) 1 app install the (.*) app 1 app uninstall the app (.*) 1 app uninstall app (.*) 1 app uninstall the (.*) app 1 app go to (.*) app 1 app download (.*) 1 app download (.*) app 1 app download app (.*) 1 app download the app (.*) 1 app download the (.*) app 1 app install the app (.*) 1 app open the (.*) app 1 app open (.*) 1 app uninstall (.*) 1 app launch (.*) app 1 app launch (.*) 1 </code></pre> <p>Is there any way I can use this <code>rules.list</code> file in Python to extract the named entities from my sentences, so that I will have <code>Youtube</code>, <code>PlayStore</code>, <code>Call of Duty</code> as my output?</p>
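Assuming the file format is "entity type, pattern, weight" (first and last whitespace-separated tokens, regex in between), one sketch is to compile each middle section into an anchored, case-insensitive regex and try the more specific (longer) patterns first, so `go to (.*) app` wins over a bare `open (.*)`. Only a few of the rules are shown here for brevity.

```python
import re

RULES_TEXT = """\
app install (.*) 1
app go to (.*) app 1
app open (.*) 1
"""

def load_rules(text):
    """Parse 'type pattern weight' lines into (type, compiled regex, weight)."""
    rules = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) < 3:
            continue
        entity_type, weight = parts[0], parts[-1]
        pattern = " ".join(parts[1:-1])
        rules.append((entity_type, re.compile(rf"^{pattern}$", re.IGNORECASE), weight))
    return rules

def extract_entity(utterance, rules):
    # Longest pattern first, so the most specific rule gets to match.
    for etype, regex, _ in sorted(rules, key=lambda r: -len(r[1].pattern)):
        m = regex.match(utterance)
        if m:
            return etype, m.group(1).strip()
    return None

rules = load_rules(RULES_TEXT)
print(extract_entity("Go to Call of Duty app", rules))  # ('app', 'Call of Duty')
```

Reading the real file would just be `load_rules(open("rules.list").read())`; the longest-pattern ordering is a heuristic, not part of the file format.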
<python><regex>
2023-02-14 12:56:01
1
377
Turing101
75,448,050
6,610
In Python, how to specify that a Protocol implementer has a specific constructor signature?
<p>Is it possible to define that a class needs a specific constructor?</p> <pre class="lang-py prettyprint-override"><code>class Constructible(Protocol): def __init__(self, i: int): # how do I do this? raise NotImplementedError def get_value(self): raise NotImplementedError def map_is(cs: Iterable[Constructible], i: int): mapped = tuple(C(i) for C in cs) values = tuple(c.get_value() for c in mapped) # both the constructor and a member method are used return mapped, values # implementors (omitting __hash__ and __eq__ for brevity) class X(Constructible): def __init__(self, i): self.i=i def get_value(self): return self.i class Sq(Constructible): def __init__(self, i): self.i=i def get_value(self): return self.i * self.i cs, values = tuple(map_is((X, Sq), 5)) assert values == (5, 25) </code></pre> <p>When specifying it like this, I get</p> <pre class="lang-bash prettyprint-override"><code>$ mypy constr.py constr.py:12: error: &quot;Constructible&quot; not callable Found 1 error in 1 file (checked 1 source file) </code></pre> <p>Is this even possible? Or should I revert to a factory function <code>@classmethod def construct(i: int): Self</code>?</p>
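One pattern that is often suggested for this is to type the iterable as *classes* rather than instances, modelling the class itself as a callable protocol, since calling a class is what actually invokes `__init__`. The sketch below runs correctly at runtime; whether a given mypy version accepts it without complaint is an assumption to verify.

```python
from typing import Iterable, Protocol

class HasValue(Protocol):
    def get_value(self) -> int: ...

class ConstructibleClass(Protocol):
    # Model the *class* as a callable: cls(i) -> something with get_value()
    def __call__(self, i: int) -> HasValue: ...

def map_is(classes: Iterable[ConstructibleClass], i: int):
    mapped = tuple(cls(i) for cls in classes)
    values = tuple(obj.get_value() for obj in mapped)
    return mapped, values

class X:
    def __init__(self, i: int) -> None:
        self.i = i
    def get_value(self) -> int:
        return self.i

class Sq:
    def __init__(self, i: int) -> None:
        self.i = i
    def get_value(self) -> int:
        return self.i * self.i

mapped, values = map_is((X, Sq), 5)
print(values)  # (5, 25)
```

Note that `X` and `Sq` no longer inherit from the protocol; structural typing makes the explicit base class unnecessary, which also avoids the "not callable" complaint about instantiating a protocol.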
<python><constructor><protocols>
2023-02-14 12:46:09
1
41,708
xtofl
75,447,934
14,353,779
Partial string map from a pandas mapping dictionary
<p>I have a mapping dictionary written this way. How can I do a partial string search against it? For example, even if the text is <code>East South Central Division Conv</code>, I need to map it to states 1, 21, 28, 47. I also need to match irrespective of case and spaces.</p> <p><code>state_code</code> is a column I have in <code>df</code> with numbers like 9, 23, 25, 33.</p> <pre><code>dct.update(dict.fromkeys([34,36,42],'Middle Atlantic')) dct.update(dict.fromkeys([18,17,26,39,55],'East North Central')) dct.update(dict.fromkeys([19,20,27,29,31,38,46],'West North Central')) dct.update(dict.fromkeys([10,11,12,13,24,37,45,51,54],'South Atlantic')) dct.update(dict.fromkeys([1,21,28,47],'East South Central')) dct.update(dict.fromkeys([5,22,40,48],'West South Central')) dct.update(dict.fromkeys([4,8,16,35,30,49,32,56],'Mountain')) dct.update(dict.fromkeys([2,6,15,41,53],'Pacific')) dct.update(dict.fromkeys([0],'Other')) df['MAPPING'] = df['state_code'].map(dct) </code></pre>
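For the case- and whitespace-insensitive partial matching of division names against free text, one sketch is to invert the mapping (division name to state list), normalise both sides, and do a plain substring test. Only a few divisions are copied here, and the matcher itself is an illustration, not pandas built-in behaviour.

```python
# Division name -> state codes (inverted form of the dct in the question)
division_states = {
    'East South Central': [1, 21, 28, 47],
    'West South Central': [5, 22, 40, 48],
    'Middle Atlantic': [34, 36, 42],
}

def normalize(text: str) -> str:
    # Lowercase and collapse all runs of whitespace to single spaces
    return ' '.join(str(text).lower().split())

def match_division(text: str):
    """Return the state list for the first division name found in `text`."""
    t = normalize(text)
    for name, states in division_states.items():
        if normalize(name) in t:
            return states
    return None

print(match_division('East  south central Division Conv'))  # [1, 21, 28, 47]
```

If the text lives in a dataframe column, something like `df['MAPPING'] = df['some_text_col'].map(match_division)` could apply it, where `some_text_col` is a placeholder name.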
<python><pandas><dataframe><dictionary>
2023-02-14 12:36:17
0
789
Scope
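For the partial-string-mapping question above, one common pattern is to invert the dictionary (region → codes) and normalize both sides before matching — lowercase, spaces removed — so the match is case- and whitespace-insensitive. A sketch with a subset of the regions (the `region_to_codes` table and any column names are illustrative):

```python
# region table mirroring the question's dict, keyed by a normalized form
region_to_codes = {
    'middleatlantic': [34, 36, 42],
    'eastsouthcentral': [1, 21, 28, 47],
    'westsouthcentral': [5, 22, 40, 48],
}

def lookup_codes(text: str):
    key = ''.join(text.split()).casefold()   # ignore case and spaces
    for region, codes in region_to_codes.items():
        if region in key:                    # partial / substring match
            return codes
    return None

assert lookup_codes('East South Central Division Conv') == [1, 21, 28, 47]
assert lookup_codes('unknown region') is None
```

Applied to a DataFrame column this would look like `df['MAPPING'] = df['some_text_col'].map(lookup_codes)`, where `some_text_col` is whatever column holds the free-text region names.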
75,447,884
1,878,788
How to use seaborn kdeplot legend from one suplots axis for whole figure legend?
<p>I'm using seaborn to plot kdeplots on axes of subplots, and I want to have one global figure legend instead of one legend on each subplot.</p> <p>However, the axes I pass to <code>sns.kdeplot</code> or get from <code>sns.kdeplot</code> seem to have empty lists of handles and labels when I use <code>get_legend_handles_labels()</code> to get them.</p> <p>I cannot rely on the data to create a legend from scratch, because I have no guarantee that the colors will actually match.</p> <p>All these issues can be seen on the following example:</p> <pre><code>from matplotlib import pyplot as plt import numpy as np import pandas as pd import seaborn as sns # Two numerical data columns, plus one for the hue data = pd.DataFrame( {&quot;A&quot;: np.random.random(20), &quot;B&quot;: np.random.random(20), &quot;C&quot;: [&quot;c&quot;, &quot;C&quot;] * 10 }) # figure with 2 subplots (fig, [ax1, ax2]) = plt.subplots(2, 1) # passing ax1 to sns.kdeplot sns.kdeplot(data=data, x=&quot;A&quot;, hue=&quot;C&quot;, ax=ax1) # passing and getting ax2 ax2 = sns.kdeplot(data=data, x=&quot;B&quot;, hue=&quot;C&quot;, ax=ax2) # Extracting handles and labels from the axes legends # (Those are all empty lists. Where is the info I need?) 
(handles_1, labels_1) = ax1.get_legend_handles_labels() (handles_2, labels_2) = ax2.get_legend_handles_labels() # Setting a global legend based on the data fig.legend(labels=data[&quot;C&quot;].unique(), loc=&quot;upper right&quot;) # Setting another global legend # based on extracted handles and labels fig.legend(handles=handles_1, labels=labels_1, loc=&quot;center right&quot;) # Displaying debugging info in the title plt.suptitle(f&quot;ax1 legend: {len(labels_1)}, ax2 legend: {len(labels_2)} labels&quot;) plt.savefig(&quot;legend_issues.png&quot;) </code></pre> <p>This results in the following figure:</p> <p><a href="https://i.sstatic.net/Kyyeb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Kyyeb.png" alt="kdeplot displays legend but axes have empty legends" /></a></p> <p>Is there a bug, or am I looking in the wrong place for legend handles and labels?</p>
<python><matplotlib><seaborn>
2023-02-14 12:32:07
0
8,294
bli
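Regarding the kdeplot-legend question above: when a library manages the legend itself (as seaborn does), `ax.get_legend_handles_labels()` can come back empty, but the handles are still reachable on the `Legend` object returned by `ax.get_legend()`. A matplotlib-only sketch of that retrieval (the attribute was renamed from `legendHandles` to `legend_handles` around matplotlib 3.7, hence the fallback):

```python
import matplotlib
matplotlib.use('Agg')  # no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1], label='c')
ax.plot([0, 1], [1, 0], label='C')
ax.legend(title='C')            # stands in for the legend seaborn attaches

# pull handles/labels from the existing Legend object, not from the axes
legend = ax.get_legend()
handles = (legend.legend_handles if hasattr(legend, 'legend_handles')
           else legend.legendHandles)
labels = [t.get_text() for t in legend.get_texts()]
assert labels == ['c', 'C']
```

In the seaborn case the same two lines (`legend = ax1.get_legend()` plus the handle extraction) should yield handles whose colors match the plot, which can then be passed to `fig.legend(...)`.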
75,447,776
3,489,155
Trustpilot: how can I get reviews data via the API with a simple python starter script
<p>Ok, it just took me quite some time to figure out how to get (private) reviews data from the Trustpilot API using python.</p> <p>Yes, they have docs:<br /> <a href="https://developers.trustpilot.com/" rel="nofollow noreferrer">https://developers.trustpilot.com/</a><br /> <a href="https://developers.trustpilot.com/authentication" rel="nofollow noreferrer">https://developers.trustpilot.com/authentication</a></p> <p>But for some reason it's still not immediately clear to me how to get an access token, and how to use that access token to get reviews data from the API.</p> <p>So: can you provide me with a clear Python starter script that gets an access token from the Trustpilot API and then fetches reviews data?</p>
<python><trustpilot>
2023-02-14 12:23:18
1
13,022
Sander van den Oord
75,447,775
10,760,601
Index matrix but return a list of lists Pytorch
<p>I have a 2-dimensional tensor and I would like to index it so that the result is a list of lists. For example:</p> <pre><code>R = torch.tensor([[1,2,3], [4,5,6]]) mask = torch.tensor([[1,0,0],[1,1,1]], dtype=torch.bool) output = R[mask] </code></pre> <p>This makes <code>output</code> equal to <code>tensor([1, 4, 5, 6])</code>. However, I would like to have <code>[[1], [4,5,6]]</code> or <code>[tensor(1), tensor([4,5,6])]</code>.</p> <p>I know that it could be done with a loop and using <code>.append()</code>. However, I would like to avoid the use of any loop to make it faster if <code>R</code> and <code>mask</code> are very big.</p> <p>Is there any way to do that in Python without any loop?</p>
<python><performance><indexing><pytorch>
2023-02-14 12:23:14
1
346
Josemi
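For the masked-indexing question above, one loop-free approach is to keep the flat masked values and split them by per-row counts. In PyTorch that would be `torch.split(R[mask], mask.sum(dim=1).tolist())`; the same idea sketched with NumPy:

```python
import numpy as np

R = np.array([[1, 2, 3], [4, 5, 6]])
mask = np.array([[1, 0, 0], [1, 1, 1]], dtype=bool)

counts = mask.sum(axis=1)                 # selected items per row: [1, 3]
flat = R[mask]                            # [1, 4, 5, 6]
groups = np.split(flat, np.cumsum(counts)[:-1])
assert [g.tolist() for g in groups] == [[1], [4, 5, 6]]
```

The split offsets are just the running totals of the per-row counts, so no Python-level loop over rows is needed.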
75,447,720
19,318,120
speech brain with voxlingua107 is not working properly
<p>I'm using voxlingua107 to detect the language of an audio file. At first it worked fine, but then I downloaded an audio file in Spanish and it detects English!</p> <pre><code>class AudioMixin: def detect_language(self, path) -&gt; str: language_id = EncoderClassifier.from_hparams(source=&quot;TalTechNLP/voxlingua107-epaca-ecapa&quot;, savedir='media/tmp') # Download Thai language sample from Omniglot and convert to suitable form signal = language_id.load_audio(path) prediction = language_id.classify_batch(signal) return prediction[3] </code></pre> <p>That's the code, and I have no idea what I'm doing wrong.</p>
<python><speech-recognition>
2023-02-14 12:17:34
1
484
mohamed naser
75,447,618
16,459,035
Unable to connect to mysql using docker-compose
<p>I am trying to connect to MySql DB using a python script ingested via docker. I have the following compose file:</p> <pre class="lang-yaml prettyprint-override"><code>version: '3.9' services: mysql_db: image: mysql:latest restart: unless-stopped environment: MYSQL_DATABASE: ${MY_SQL_DATABASE} MYSQL_USER: ${MY_SQL_USER} MYSQL_PASSWORD: ${MY_SQL_PASSWORD} MYSQL_ROOT_PASSWORD: ${MY_SQL_ROOT_PASSWORD} ports: - '3306:3306' volumes: - ./mysql-data:/var/lib/mysql adminer: image: adminer:latest restart: unless-stopped ports: - 8080:8080 ingestion-python: build: context: . dockerfile: ingestion.dockerfile depends_on: - mysql_db </code></pre> <p>Adminer connects to MySql successfully. Then I created the following ingestion script to automate the creation of a table. My ingestion script is:</p> <pre class="lang-py prettyprint-override"><code>from dotenv import load_dotenv import os import pandas as pd from sqlalchemy import create_engine def main(): load_dotenv('.env') user = os.environ.get('MY_SQL_USER') password = os.environ.get('MY_SQL_PASSWORD') host = os.environ.get('MY_SQL_HOST') port = os.environ.get('MY_SQL_PORT') db = os.environ.get('MY_SQL_DATABASE') table_name = os.environ.get('MY_SQL_TABLE_NAME') print(f'mysql+pymysql://{user}:{password}@{host}:{port}/{db}') engine = create_engine(f'mysql+pymysql://{user}:{password}@{host}:{port}/{db}') df = pd.read_csv('./data/data.parquet', encoding='ISO-8859-1', on_bad_lines='skip', engine='python') df.to_sql(name=table_name, con=engine, if_exists='append') if __name__ == '__main__': main() </code></pre> <p>When I run my docker compose file (<code>docker-compose up -d</code>) I get:</p> <pre><code>2023-02-14 08:58:59 sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, &quot;Can't connect to MySQL server on 'mysql_db' ([Errno 111] Connection refused)&quot;) 2023-02-14 08:58:59 (Background on this error at: https://sqlalche.me/e/20/e3q8) </code></pre> <p>The credentials and connections are retrieved from my 
<code>.env</code> file:</p> <pre><code>#MYSQL CONFIG MY_SQL_DATABASE = test_db MY_SQL_USER = data MY_SQL_PASSWORD = random MY_SQL_ROOT_PASSWORD = root #PYTHON INGESTION MY_SQL_HOST = mysql_db MY_SQL_PORT = 3306 MY_SQL_TABLE_NAME = test_table </code></pre> <p>Why I can't connect to MySql DB using my python script?</p>
<python><docker>
2023-02-14 12:07:09
2
671
OdiumPura
75,447,466
19,189,069
How to remove any non-Persian character in a string in python?
<p>I want to remove any non-Persian character in a string in python. For example if I have a string like this:</p> <pre><code>00سلامabc </code></pre> <p>I have the Persian characters and the result becomes like this:</p> <pre><code>سلام </code></pre> <p>I know that it is possible that I can extract just Persian characters from a string by regex. But I have four questions:</p> <ol> <li>Which type of characters should I consider? ascii or unicode?</li> <li>Is there Persian range in ascii or unicode?</li> <li>Which language should I use? Arabic or Persian?</li> <li>How do I find the range of the alphabets?</li> </ol>
<python>
2023-02-14 11:54:19
1
385
HosseinSedghian
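On the Persian-characters question above: Unicode is the right frame of reference, not ASCII — ASCII has no Persian range at all. The Persian alphabet lives in the Arabic Unicode block U+0600–U+06FF, which also contains the Persian-specific letters پ چ ژ گ, so a character-class over that block covers the common case (a sketch; presentation forms and some punctuation sit in other blocks):

```python
import re

def keep_persian(text: str) -> str:
    # Arabic Unicode block (U+0600-U+06FF) covers the Persian alphabet
    return ''.join(re.findall(r'[\u0600-\u06FF]+', text))

assert keep_persian('00سلامabc') == 'سلام'
```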
75,447,445
3,939,193
Python: Force Virtual Environments to use system certificate store on Windows
<p>My company uses a VPN, which does not work with the PIP certificate check out of the box. When I install a package with <code>pip install asyncio</code>, it gives me the following error:</p> <blockquote> <p>Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)'))': /simple/asyncio/</p> </blockquote> <p>In the system wide python installation, this can be circumvented by using:</p> <pre><code>pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host files.pythonhosted.org pip-system-certs </code></pre> <p>After that, the Windows certificate store is used, which contains the CA for the VPN. Now, I can just use:</p> <pre><code>pip install asyncio </code></pre> <p>and all works fine.</p> <p>However, if I use a virtual environment, I am back in the same position of having to first install pip-system-certs:</p> <pre><code>python -m venv C:\location\of\venv cd C:\location\of\venv .\Scripts\activate pip install asyncio </code></pre> <blockquote> <p>Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)'))': /simple/asyncio/</p> </blockquote> <p>This makes working with systems like Poetry very difficult, because all configurations need to change.</p> <p>Is there a way to force all Python Virtual Environments on my system to use the system certificate store?</p>
<python><pip><python-venv>
2023-02-14 11:52:28
1
904
DrDonut
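On the venv-certificates question above: pip reads a user-level configuration file that every virtual environment on the machine picks up (on Windows, `%APPDATA%\pip\pip.ini`), so the trusted hosts only need to be declared once — a sketch, assuming those three hosts are all that is needed:

```ini
; %APPDATA%\pip\pip.ini - read by the system pip and by every venv
[global]
trusted-host = pypi.org
               pypi.python.org
               files.pythonhosted.org
```

The same values can be set from the command line with `pip config set global.trusted-host "pypi.org pypi.python.org files.pythonhosted.org"`. As an alternative, newer pips (≥ 22.2, on Python 3.10+) can use the OS certificate store directly via the `truststore` feature, which avoids disabling verification entirely.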
75,447,371
3,225,420
Bokeh ColumnDataSource Error when identifying as source - why?
<p>I'm getting error messages when identifying ColumnDataSource as my source, want to do it right.</p> <p>Let me show the errors.</p> <p>First, I generate some random data in a DataFrame and put it into the ColumnDataSource:</p> <pre><code>col_list = ['ob1','ob2','ob3','ob4','ob5'] df = pd.DataFrame(np.random.uniform(73.965,74.03,size=(25, 5)).astype(float), columns=col_list) df.reset_index(inplace=True) df = df.rename(columns = {'index':'order'}) df['order'] = df['order'] + 1 cds = ColumnDataSource(data=df) </code></pre> <p>So far so good.</p> <p>I try to generate a graph:</p> <pre><code>p = figure(title = 'ColumnDataSource test', sizing_mode = 'stretch_both') p.line(x=cds.data['order'], y = cds.data['ob1'], source = cds.data[['order', 'ob1']]) show(p) </code></pre> <p>and get the following error:</p> <pre><code>Traceback (most recent call last): File &quot;e:\Black_Belt_Six_Sigma\first_take.py&quot;, line 57, in &lt;module&gt; p.line(x=cds.data['order'], y = cds.data['ob1'], source = cds.data[['order', 'ob1']]) TypeError: unhashable type: 'list' </code></pre> <p>Fair enough, I won't give the source parameter a list and try again:</p> <pre><code>p = figure(title = 'ColumnDataSource test', sizing_mode = 'stretch_both') p.line(x=cds.data['order'], y = cds.data['ob1'], source = cds.data) show(p) </code></pre> <p>I get no graph but only the following error:</p> <pre><code>RuntimeError: Expected x and y to reference fields in the supplied data source. When a 'source' argument is passed to a glyph method, values that are sequences (like lists or arrays) must come from references to data columns in the source. For instance, as an example: source = ColumnDataSource(data=dict(x=a_list, y=an_array)) p.circle(x='x', y='y', source=source, ...) # pass column names and a source Alternatively, *all* data sequences may be provided as literals as long as a source is *not* provided: p.circle(x=a_list, y=an_array, ...) 
# pass actual sequences and no source </code></pre> <p>Based on this error message I've tried the following:</p> <pre><code>cds = ColumnDataSource(data=dict(order = df['order'].to_list(), ob1 = df['ob1'].to_list())) p = figure(title = 'ColumnDataSource test', sizing_mode = 'stretch_both') p.line(x=cds.data['order'], y = cds.data['ob1'], source = cds) show(p) </code></pre> <p>And</p> <pre><code>cds = ColumnDataSource(data=dict(order = df['order'], ob1 = df['ob1'])) p = figure(title = 'ColumnDataSource test', sizing_mode = 'stretch_both') p.line(x=cds.data['order'], y = cds.data['ob1'], source = cds) show(p) </code></pre> <p>Both keep returning the same error message.</p> <p>I can get a graph/plot if I don't specify the <code>source</code> parameter, so maybe that's the right course of action? Seems odd, I imagine it's important if the developers made it a parameter.</p>
<python><bokeh>
2023-02-14 11:45:46
1
1,689
Python_Learner
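For the Bokeh question above, the second error message describes the fix: when a `source` is supplied, `x` and `y` must be column *names* (strings), and `source` must be the `ColumnDataSource` itself, not its `.data` dict — a sketch:

```python
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure

cds = ColumnDataSource(data={'order': [1, 2, 3], 'ob1': [73.97, 74.01, 73.99]})
p = figure(title='ColumnDataSource test')
# column names as strings; Bokeh resolves them against `source`
renderer = p.line(x='order', y='ob1', source=cds)
assert renderer.glyph.x == 'order' and renderer.glyph.y == 'ob1'
```

Passing the actual sequences (as when `source` is omitted) and passing a source are mutually exclusive styles, which is why mixing them raises the `RuntimeError`.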
75,447,361
24,334
reading csv with values with international numbers
<p>I am trying to read a CSV which contains numbers in European format, for instance:</p> <pre><code>&quot;COL_A&quot;, &quot;COL_B&quot; &quot;ID_A&quot;, &quot;47,37&quot; </code></pre> <p>I am reading it with code like:</p> <pre><code>pl.read_csv(inputfile, dtypes={'COL_B': float}, infer_schema_length=200) </code></pre> <p>and I get an error <code>exceptions.ComputeError: could not parse &quot;47,37&quot; as dtype Float64 at</code> ...</p> <p>Is there an easy way to import it adequately (as a float of 47.37)?</p> <p>I am using polars version 0.15.14</p>
<python><dataframe><csv><python-polars>
2023-02-14 11:45:03
2
1,767
call me Steve
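On the decimal-comma question above: a common approach is to read the column as text and convert afterwards — in polars something like `pl.col('COL_B').str.replace(',', '.').cast(pl.Float64)` (method names may differ slightly across the fast-moving 0.15-era API). The conversion itself is just a comma-to-dot swap:

```python
def parse_euro(s: str) -> float:
    """Convert a decimal-comma number like '47,37' to a float."""
    return float(s.replace(',', '.'))

assert parse_euro('47,37') == 47.37
```

Note this simple swap assumes no thousands separators; numbers like `'1.234,56'` would need the dots stripped first.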
75,447,336
13,629,335
CTRL KeyPress event is fired trough scrolling with TouchPad
<p>I'm trying to track the control-key when it's pressed and released.<br /> Everything seemed fine in the beginning, but when scrolling with my touch-pad on my HP laptop on Windows 11 the KeyPress event is fired automatically.</p> <p>Is this a Windows thing and <em>normal behavior</em>, or is it a <em>bug</em> in tkinter?</p> <p>My code:</p> <pre class="lang-py prettyprint-override"><code>import tkinter as tk ctrl_pressed = None def check_ctrl(event): print(ctrl_pressed, 'checked') def track_ctrl(event): global ctrl_pressed if (et:=event.type.name) == 'KeyPress': ctrl_pressed = True elif et == 'KeyRelease': ctrl_pressed = False print(ctrl_pressed, 'tracked') root = tk.Tk() root.bind('&lt;MouseWheel&gt;', check_ctrl) root.bind('&lt;KeyPress-Control_L&gt;', track_ctrl) root.bind('&lt;KeyRelease-Control_L&gt;', track_ctrl) root.mainloop() </code></pre> <ul> <li>Using the MouseWheel first will output <code>None</code>--as expected.</li> <li>Using the Touchpad first will output <code>True</code>--not expected.</li> <li>Pressing the Key will output first <code>True</code> then <code>False</code>--as expected.</li> </ul> <p>It seems to be a generated event:</p> <pre class="lang-py prettyprint-override"><code>def track_ctrl(event): print(event.send_event) </code></pre> <p>This produces <code>True</code> with the touchpad.</p> <hr /> <blockquote> <p>I'm using Tkinter patchlevel <code>8.6.12</code>, and Python version <code>3.11.0</code></p> </blockquote>
<python><tkinter><tcl><tk-toolkit><gesture>
2023-02-14 11:42:55
1
8,142
Thingamabobs
75,447,077
6,751,456
python athena calculate total number of output rows
<p>I need to get total number of output rows returned by Athena.</p> <pre><code> status = 'RUNNING' while status in ['QUEUED', 'RUNNING']: response_get_query_details = athena.get_query_execution( QueryExecutionId=query_execution_id ) status = ( response_get_query_details.get(&quot;QueryExecution&quot;, {}) .get(&quot;Status&quot;, {}) .get(&quot;State&quot;, &quot;NA&quot;) ) if status in (&quot;FAILED&quot;, &quot;CANCELLED&quot;, &quot;NA&quot;): raise Exception(f&quot;Athena Query Failed: {failure_reason}&quot;) elif status == 'SUCCEEDED': query_stats = response_get_query_details['QueryExecution']['Statistics'] total_rows = query_stats['OutputRows'] &lt;&lt;--- `outputRows` is not available return total_rows </code></pre> <p>There's only <code>DataScannedInBytes</code> in the statistics.</p> <pre><code>{'EngineExecutionTimeInMillis': 9799, 'DataScannedInBytes': 1090182, 'TotalExecutionTimeInMillis': 9991, 'QueryQueueTimeInMillis': 164, 'QueryPlanningTimeInMillis': 8860, 'ServiceProcessingTimeInMillis': 28} </code></pre> <p>Is there a way to calculate total number of rows from this?</p>
<python><amazon-web-services><boto3><amazon-athena>
2023-02-14 11:20:25
1
4,161
Azima
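On the Athena question above: the `Statistics` block of `GetQueryExecution` indeed carries no row count; the separate `GetQueryRuntimeStatistics` API (boto3: `athena.get_query_runtime_statistics(QueryExecutionId=...)`) reports `Rows.OutputRows`. A sketch of extracting it, exercised here against a sample response dict whose shape is recalled from the AWS docs — verify it against your boto3 version:

```python
def output_rows(stats_response: dict) -> int:
    # GetQueryRuntimeStatistics reports row counts that
    # GetQueryExecution's Statistics block lacks
    return stats_response['QueryRuntimeStatistics']['Rows']['OutputRows']

# sample response shape for illustration
sample = {'QueryRuntimeStatistics': {'Rows': {'InputRows': 1000, 'OutputRows': 42}}}
assert output_rows(sample) == 42
```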
75,447,018
10,811,647
Return several values inside for loop
<p>I have a function that must return several values using a for loop. I do not wish to store the values inside a list or a dict. Because of the use of the <code>return</code>, I only get the first value. How can I return all values successively? I tried using generators and <code>yield</code> but I'm not sure how to use them.</p> <p>Here is the function:</p> <pre><code>import random def my_function(): for i in range(3): return(dict(x=[[random.randint(0,10)]], y=[[random.randint(0,10)]]), 0) </code></pre> <p>Are generators and the use of <code>yield</code> suited for my need?</p>
<python><return><generator><plotly-dash><yield>
2023-02-14 11:14:31
2
397
The Governor
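On the question above: yes, generators fit exactly — replacing `return` with `yield` turns the function into a generator that produces one value per loop iteration instead of exiting on the first. A sketch:

```python
import random

def my_function():
    for _ in range(3):
        # yield suspends the function here and emits one value per iteration
        yield dict(x=[[random.randint(0, 10)]], y=[[random.randint(0, 10)]]), 0

results = list(my_function())          # or: for item in my_function(): ...
assert len(results) == 3
assert all(flag == 0 for _, flag in results)
```

Callers can consume the values lazily with a `for` loop, which avoids ever materializing a list if that is the goal.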
75,446,974
4,427,375
How to frame a simple Factorio/Dyson Sphere factory problem in linear programming using `scipy.optimize`
<p>I'm trying to frame a Linear Programming problem from the game Dyson Sphere (a Factorio clone) and then use <code>scipy</code> to optimize for it. The problem is as follows:</p> <p>I have a copper mine which produces 150 copper per minute. I can build multiple copper plate factories, each of which consumes 60 copper per minute and produces 60 copper plates per minute. I want to determine how many factories I should build to maximize copper plate production.</p> <p>After beginning the process of wrapping my head around the math I'm struggling to get a simplified example to work. I have the following:</p> <pre><code>from scipy.optimize import linprog c = [-60] A = [[-60]] b = [150] copper_plate_bounds = (0, None) res = linprog(c, A_ub=A, b_ub=b, bounds=[copper_plate_bounds], method='highs') print(res) </code></pre> <p>The code above will report <code>message: The problem is unbounded. (HiGHS Status 10: model_status is Unbounded; primal_status is At upper bound)</code> but I'm not too sure why. Obviously this is a silly matrix but you have <code>Ax=b</code> where here it is <code>-60x=150</code>. <code>c</code> should be negative here as I understand it because I'm trying to maximize the production of copper plates which I do by subtracting from the copper output (150).</p> <p>Full output:</p> <pre><code>C:\Users\grant\Documents\code\math\venv\Scripts\python.exe C:\Users\grant\Documents\code\math\calcular_cobre.py message: The problem is unbounded. (HiGHS Status 10: model_status is Unbounded; primal_status is At upper bound) success: False status: 3 fun: None x: None nit: 0 lower: residual: None marginals: None upper: residual: None marginals: None eqlin: residual: None marginals: None ineqlin: residual: None marginals: None Process finished with exit code 0 </code></pre> <p>Since I'm new I'm not sure if I've misunderstood something mathematically or if I've improperly constructed the constraints of my program.</p>
<python><scipy><mathematical-optimization><linear-programming><scipy-optimize>
2023-02-14 11:11:00
0
1,873
Grant Curell
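On the linprog question above: the unboundedness comes from the constraint row. With `A = [[-60]]` the inequality reads −60·x ≤ 150, which every x ≥ 0 satisfies, so the objective −60·x can decrease without limit. The coefficient should be positive — the factories' *consumption* 60·x may not exceed the 150 copper/min supply:

```python
from scipy.optimize import linprog

# maximize 60*x (plates)  ->  minimize -60*x
# subject to 60*x <= 150  (copper consumed cannot exceed mine output)
res = linprog(c=[-60], A_ub=[[60]], b_ub=[150],
              bounds=[(0, None)], method='highs')
assert res.success
assert abs(res.x[0] - 2.5) < 1e-9   # 2.5 factories saturate the mine
```

If factories must be whole numbers, this becomes an integer program (e.g. `scipy.optimize.milp` with an integrality constraint), but the bounded LP above is the core fix.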
75,446,886
5,197,034
Selecting columns based on a condition in Polars
<p>I want to select columns in a Polars DataFrame based on a condition. In my case, I want to select all string columns that have less than 100 unique values. Naively I tried:</p> <pre><code>df.select((pl.col(pl.String)) &amp; (pl.all().n_unique() &lt; 100)) </code></pre> <p>which gave me an error, which is probably due to the second part of the expression.</p> <pre><code>df.select(pl.all().n_unique() &lt; 100) </code></pre> <p>This doesn't select columns but instead returns a single row DataFrame of bool values. I'm new to polars and still can't quite wrap my head around the expression API, I guess. What am I doing wrong?</p>
<python><dataframe><python-polars>
2023-02-14 11:01:50
2
2,603
pietz
75,446,862
1,441,998
How do I get Python to talk to the portmidi c library on Windows 11/64
<p>I'm trying to get <a href="https://github.com/PortMidi/portmidi" rel="nofollow noreferrer">portmidi</a> working with Python on Windows 11/64.  I'm able to compile the dll/lib in MSVS without errors, without specifying any changes or options.  I'm new to this, so I don't know how to tell the system about the libraries, do I have to put them on some path or something?  For now I'm just telling Python their location in the build directory directly.</p> <p>I'd like to get two approaches working in order to compare them.</p> <p>First way: <a href="https://docs.python.org/3/library/ctypes.html" rel="nofollow noreferrer">ctypes</a>.  I stumbled across <a href="https://pypi.org/project/portmidizero/#files" rel="nofollow noreferrer">portmidizero</a>, which looks old and dead, but basically correct.  I have to <a href="https://docs.python.org/3/library/2to3.html" rel="nofollow noreferrer">2to3</a> it and fix a few tiny errors, but then it almost works.  Running <code>testportmidizero.py</code>, I can <code>Initialize()</code>, list the devices and get sensible names, open an <code>Output</code>, and run the <code>Time()</code> function, all without errors.  But then when I try to <code>Pm_write()</code>, I get <code>OSError: Access Violation</code>.  I am running as administrator.  Googling, my best guess is that there's a problem either with calling conventions or pointer sizes.  But I don't see any mention of <code>cdecl</code> or <code>stdcall</code> anywhere, so I think everything has defaulted to <code>cdecl</code>, but I don't know how to verify.  And I don't see any problem with how the pointer types/sizes are written, but I barely know what I'm doing.  What should I do to figure out what's wrong?</p> <p>Second way: <a href="https://cython.org/" rel="nofollow noreferrer">Cython</a>.  As far as I understand this approach will give the best performance.  
<a href="https://github.com/PortMidi/pm_python" rel="nofollow noreferrer">pyportmidi</a> used to work well on mac several years ago, but is now out of date. I tried to get <code>pip install</code> working:</p> <ul> <li>Remove the <code>description</code> related stuff from <code>setup.py</code> because the expected <code>readme</code> is missing. Same for all testing related stuff.</li> <li>I couldn't figure out how to add a path to the portmidi header files, the compiler couldn't find them when I added the path to <code>include_dirs</code>, even though it looked correct in the compiler directive output. I just manually copied the <code>.h</code>'s into the <code>pyportmidi</code> directory for now.</li> <li>But then I got all the following errors I have no idea how to solve:</li> </ul> <pre><code> pyportmidi\_pyportmidi.c(399): warning C4244: 'function': conversion from 'Py_ssize_t' to 'long', possible loss of data       pyportmidi\_pyportmidi.c(419): warning C4244: 'function': conversion from 'Py_ssize_t' to 'long', possible loss of data       pyportmidi\_pyportmidi.c(441): warning C4244: 'function': conversion from 'Py_ssize_t' to 'long', possible loss of data       pyportmidi\_pyportmidi.c(2093): warning C4244: 'function': conversion from 'Py_ssize_t' to 'int32_t', possible loss of data       pyportmidi\_pyportmidi.c(2625): warning C4113: 'PtTimestamp (__cdecl *)(void)' differs in parameter lists from 'PmTimeProcPtr'       pyportmidi\_pyportmidi.c(3455): error C2105: '++' needs l-value       pyportmidi\_pyportmidi.c(3458): error C2105: '--' needs l-value       pyportmidi\_pyportmidi.c(3634): error C2105: '++' needs l-value       pyportmidi\_pyportmidi.c(3637): error C2105: '--' needs l-value       pyportmidi\_pyportmidi.c(4554): warning C4996: 'PyUnicode_GET_SIZE': deprecated in 3.3       pyportmidi\_pyportmidi.c(4568): warning C4996: 'PyUnicode_GET_SIZE': deprecated in 3.3       pyportmidi\_pyportmidi.c(5523): warning C4047: 'function': 'PyObject *' differs in 
levels of indirection from 'int'       pyportmidi\_pyportmidi.c(5523): warning C4024: 'PyCode_New': different types for formal and actual parameter 14       pyportmidi\_pyportmidi.c(5525): warning C4047: 'function': 'int' differs in levels of indirection from 'PyObject *'       pyportmidi\_pyportmidi.c(5524): warning C4024: 'PyCode_New': different types for formal and actual parameter 15       pyportmidi\_pyportmidi.c(5507): error C2198: 'PyCode_New': too few arguments for call       pyportmidi\_pyportmidi.c(5534): error C2037: left of 'f_lineno' specifies undefined struct/union '_frame' </code></pre> <p>I assume these need to be fixed by editing the <code>.pyx</code> with knowledge of how the c library has been updated. But I don't know Cython/pyrex, so any guidance would be greatly appreciated!</p>
<python><windows><cython><ctypes><pyportmidi>
2023-02-14 10:59:35
0
459
user1441998
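On the ctypes half of the question above: an "Access violation" on Win64 is very often caused by missing `argtypes`/`restype` declarations — ctypes defaults both to a 32-bit `int`, which silently truncates 64-bit pointers passed to or returned from the C library. Declaring the signatures before calling is the usual cure; the pattern is illustrated here with libc's `malloc` (the same declarations would apply to the pointer-handling portmidi functions; this snippet loads the process's own libc, so it runs on POSIX rather than Windows):

```python
import ctypes

libc = ctypes.CDLL(None)   # POSIX: handle to the running process, exposes libc
# Without these declarations ctypes assumes 32-bit int in and out,
# which silently truncates 64-bit pointers.
libc.malloc.restype = ctypes.c_void_p
libc.malloc.argtypes = [ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

p = libc.malloc(16)
assert p               # a real, non-truncated pointer value
libc.free(p)
```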
75,446,780
17,160,160
Extracting items from array using variable sliding window in Python
<p>I have an array of digits: <code>array = [1.0, 1.0, 2.0, 4.0, 1.0]</code></p> <p>I would like to create a function that extracts sequences of digits from the input array and appends to one of two lists depending on defined conditions being met</p> <p>The first condition <code>f</code> specifies the number of places to look ahead from index <code>i</code> and check if a valid index exists. If true, append <code>array[i]</code> to list1. If false, append to list2.</p> <p>I have implemented it as follows:</p> <pre><code>def somefunc(array, f): list1, list2 = [], [] for i in range(len(array)): if i + f &lt; len(array): list1.append(array[i]) else: list2.append(array[i]) return list1, list2 </code></pre> <p>This functions correctly as follows:<br /> <code>somefunc(array,f=1)</code> returns <code>([1.0, 1.0, 2.0, 4.0], [1.0])</code><br /> <code>somefunc(array,f=2)</code> returns <code>([1.0, 1.0, 2.0], [4.0, 1.0])</code><br /> <code>somefunc(array,f=3)</code> returns <code>([1.0, 1.0], [2.0, 4.0, 1.0])</code></p> <p>However, I would like to add a second condition to this function, <code>b</code>, that specifies the window length for previous digits to be summed and then appended to the lists according to the <code>f</code> condition above.</p> <p>The logic is this:</p> <ul> <li>iterate through array and at each index <code>i</code> check if <code>i</code>+<code>f</code> is a valid index.</li> <li>If true, append the sum of the previous <code>b</code> digits to <code>list1</code></li> <li>If false, append the sum of the previous <code>b</code> digits to <code>list2</code></li> <li>If the length of window <code>b</code> isn't possible (i.e. <code>b=2</code> when <code>i=0</code>) continue to next index.</li> </ul> <p>With both <code>f</code> and <code>b</code> conditions implemented. 
I would expect:<br /> <code>somefunc(array,f=1, b=1)</code> returns <code>([1.0, 1.0, 2.0, 4.0], [1.0])</code><br /> <code>somefunc(array,f=1, b=2)</code> returns <code>([2.0, 3.0, 6.0], [5.0])</code><br /> <code>somefunc(array,f=2, b=2)</code> returns <code>([2.0, 3.0], [6.0, 5.0])</code></p> <p><s>My first challenge is implementing the <code>b</code> condition. I cannot seem to figure out how.</s> <em>see edit below</em></p> <p>I also wonder if there is a more efficient approach than the iterative method I have begun?</p> <p>Given only the <code>f</code> condition, I know that the following functions correctly and would bypass the need for iteration:</p> <pre><code>def somefunc(array, f): return array[:-f], array[-f:] </code></pre> <p>However, I again don't know how to implement the <code>b</code> condition in this approach.</p> <hr /> <p><em>Edit</em></p> <p>I have managed an iterative solution which implements the <code>f</code> and <code>b</code> conditions:</p> <pre><code>def somefunc(array, f, b): list1, list2 = [], [] for i in range(len(array)): if i &gt;= (b-1): if i + f &lt; len(array): list1.append(sum(array[i+1-b: i+1])) else: list2.append(sum(array[i+1-b: i+1])) return list1, list2 </code></pre> <p>However, the indexing syntax feels horrible and I so I am certain there must be a more elegant solution. Also, anything with improved runtime would really be preferable.</p>
<python>
2023-02-14 10:50:58
1
609
r0bt
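On the sliding-window question above: one way to tidy the indexing is a prefix sum — compute cumulative sums once, then every length-`b` window sum is a single subtraction and the `f` split becomes one slice (a sketch; `accumulate(initial=...)` needs Python 3.8+):

```python
from itertools import accumulate

def somefunc(array, f, b):
    # prefix sums: the window sum ending at index i is csum[i+1] - csum[i+1-b]
    csum = list(accumulate(array, initial=0))
    sums = [csum[i + 1] - csum[i + 1 - b] for i in range(b - 1, len(array))]
    # windows whose end index i still satisfies i + f < len(array) go to list1
    split = max(len(array) - f - (b - 1), 0)
    return sums[:split], sums[split:]

array = [1.0, 1.0, 2.0, 4.0, 1.0]
assert somefunc(array, f=1, b=1) == ([1.0, 1.0, 2.0, 4.0], [1.0])
assert somefunc(array, f=1, b=2) == ([2.0, 3.0, 6.0], [5.0])
assert somefunc(array, f=2, b=2) == ([2.0, 3.0], [6.0, 5.0])
```

This does O(n) work regardless of `b`, whereas summing each slice separately costs O(n·b).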
75,446,597
14,494,483
Understanding streamlit data flow and how to submit form in a sequential way
<p>Below is a simple reproducible example that works to illustrate the problem in its simple form. You can jump to the code and expected behaviour as the problem description can be long.</p> <h2>The main concept</h2> <p>There are 3 dataframes stored in a list, and a form on the sidebar shows the <code>supplier_name</code> and <code>po_number</code> from the relevant dataframe. When the user clicks the <code>Next</code> button, the information inside the <code>supplier_name</code> and <code>po_number</code> <strong>text_input</strong> will be saved (in this example, they basically got printed out on top of the sidebar).<br /> <a href="https://i.sstatic.net/4x3oP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4x3oP.png" alt="enter image description here" /></a></p> <h2>Problem</h2> <p>This app works well when the user doesn't change anything inside the <strong>text_input</strong>, but if the user changes something, it breaks the app. See the pic below for an example: when I change the <code>po_number</code> to <code>somethingrandom</code>, the saved information is not <code>somethingrandom</code> but <code>p123</code> from the first dataframe.</p> <p><a href="https://i.sstatic.net/qpl90.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qpl90.png" alt="enter image description here" /></a></p> <p>What's more, if the information from the next dataframe is the same as the first dataframe, the changed value inside the text_input will be unchanged for the next display. For example, because the first and second dataframe's supplier name are both <code>S1</code>, if I change the supplier name to <code>S10</code>, then click next, the <code>supplier_name</code> is still <code>S10</code> on the second dataframe, while the second dataframe's supplier_name should be <code>S1</code>. 
But if the supplier name for the next dataframe changed, the information inside the <code>text_input</code> will be changed.</p> <h2>Justification</h2> <p>If you are struggling to understand why I want to do this, the original use for this is for the sidebar input area to extract information from each PDFs, then when the user confirms the information are all correct, they click next to review the next PDF. But if something is wrong, they can change the information inside the text_input, then click next, and the information of the changed value will be recorded, and for the next pdf, the extracted information should reflect on what the next pdf is. I did this in R shiny quite simply, but can't figure out how the data flow works here in streamlit, please help.</p> <h2>Reproducible Example</h2> <pre><code>import streamlit as st import pandas as pd # 3 dataframes that are stored in a list data1 = { &quot;supplier_name&quot;: [&quot;S1&quot;], &quot;po_number&quot;: [&quot;P123&quot;], } data2 = { &quot;supplier_name&quot;: [&quot;S1&quot;], &quot;po_number&quot;: [&quot;P124&quot;], } data3 = { &quot;supplier_name&quot;: [&quot;S2&quot;], &quot;po_number&quot;: [&quot;P125&quot;], } df1 = pd.DataFrame(data1) df2 = pd.DataFrame(data2) df3 = pd.DataFrame(data3) list1 = [df1, df2, df3] # initiate a page session state, every time next button is clicked # it will go to the next dataframe in the list if 'page' not in st.session_state: st.session_state.page = 0 def next_page(): st.sidebar.write(f&quot;Submitted! 
supplier_name: {supplier_name} po_number: {po_number}&quot;) st.session_state.page += 1 supplier_name_value = list1[st.session_state.page][&quot;supplier_name&quot;][0] po_number_value = list1[st.session_state.page][&quot;po_number&quot;][0] # main area list1[st.session_state.page] # sidebar form with st.sidebar.form(&quot;form&quot;): supplier_name = st.text_input(label=&quot;Supplier Name&quot;, value=supplier_name_value) po_number = st.text_input(label=&quot;PO Number&quot;, value=po_number_value) next_button = st.form_submit_button(&quot;Next&quot;, on_click=next_page) </code></pre> <h2>Expected behaviour</h2> <p>The dataframe's info are extracted into the sidebar input area. The user can change the input if they wish, then click next, and the values inside the input areas will be saved. When it goes to the next dataframe, the values inside the text input will be refreshed to extract from the next dataframe, and repeats.</p>
<python><streamlit>
2023-02-14 10:34:41
2
474
Subaru Spirit
75,446,455
3,299,347
Mock psycopg2 database insertion in Python
<p>My unit test does not mock the database insertion because I do see in the logs and in the database that the record id <code>42</code> had been inserted in the database, see logs:</p> <pre><code>./tests/test_requesthandler.py::TestRequestHandler::test_handle_post_coordinates Failed: [undefined]AssertionError: {'id': 42, 'latitude': 12.9, 'longitude': 77.6} != [{'latitude': 12.9, 'longitude': 77.6}] </code></pre> <p>How can I mock the database insertion in my unit test?</p> <p>Here is my unit test:</p> <pre><code>import unittest from unittest.mock import patch, Mock, MagicMock from http.server import BaseHTTPRequestHandler import json from src.requesthandler import RequestHandler # The code to test class TestRequestHandler(unittest.TestCase): def setUp(self): self.handler = RequestHandler() @patch('src.requesthandler.psycopg2.connect') def test_handle_post_coordinates(self, mock_connect): print(json.dumps({&quot;latitude&quot;: &quot;12.9&quot;, &quot;longitude&quot;: &quot;77.6&quot;}).encode('utf-8')) expected = [{'latitude': 12.9, 'longitude': 77.6}] # This will disable the database connection # mock_connect.return_value.cursor.return_value.execute.return_value = None mock_con = mock_connect.return_value # result of psycopg2.connect(**connection_stuff) mock_cur = mock_con.cursor.return_value # result of con.cursor(cursor_factory=DictCursor) mock_cur.execute.return_value = expected # return this when calling cur.fetchall() mock_cur.fetchone.return_value = expected # return this when calling cur.fetchall() mock_con.commit.return_value = expected # return this when calling cur.fetchall() environ = { 'CONTENT_LENGTH': '23', 'REQUEST_METHOD': 'POST', 'PATH_INFO': '/coordinates', 'wsgi.input': Mock(read=Mock(return_value=json.dumps({'latitude': 12.9, 'longitude': 77.6}).encode('utf-8'))) } start_response = Mock() response = self.handler.handle_post_coordinates(environ, start_response) self.assertEqual(json.loads(response[0].decode().replace(&quot;'&quot;, '&quot;')), 
[{'latitude': 12.9, 'longitude': 77.6}]) start_response.assert_called_with('200 OK', [('Content-type', 'text/plain')]) def test_handle_get(self): environ = { 'REQUEST_METHOD': 'GET', 'PATH_INFO': '/coordinates', } start_response = Mock() response = self.handler.handle_get(environ, start_response) self.assertEqual(json.loads(response[0].decode()), {'mssg': 'werkt123'}) start_response.assert_called_with('200 OK', [('Content-type', 'application/json')]) if __name__ == '__main__': unittest.main() </code></pre> <p>And here is my code:</p> <pre><code>import json from http.server import BaseHTTPRequestHandler, HTTPServer import psycopg2 from json import dumps from waitress import serve import logging class RequestHandler(BaseHTTPRequestHandler): # the constructor is called &quot;__init__&quot;for convenience def __init__(self): self.coordinates = [] print('qwe') # Connect to the PostgreSQL database self.conn = psycopg2.connect( host=&quot;localhost&quot;, database=&quot;postgres&quot;, user=&quot;postgres&quot;, password=&quot;admin&quot; ) # Create a cursor object self.cursor = self.conn.cursor() def _send_response(self, message, status=200): self.send_response(status) self.send_header(&quot;Content-type&quot;, &quot;application/json&quot;) self.end_headers() self.wfile.write(bytes(json.dumps(message), &quot;utf8&quot;)) def handle_post_coordinates(self, environ, start_response): content_length = int(environ.get('CONTENT_LENGTH', 0)) request_body = environ['wsgi.input'].read(content_length).decode() coordinates = json.loads(request_body) self.coordinates.append(coordinates) self.cursor.execute(&quot;INSERT INTO coordinates (latitude, longitude) VALUES (%s, %s) RETURNING id&quot;, (coordinates['latitude'], coordinates['longitude'])) new_coordinate_id = self.cursor.fetchone()[0] self.conn.commit() new_coordinate = {'id': new_coordinate_id, 'latitude': coordinates['latitude'], 'longitude': coordinates['longitude']} status = '200 OK' headers = [('Content-type', 
'text/plain')] start_response(status, headers) # return [b&quot;Coordinates added&quot;] return [bytes(str(new_coordinate), 'utf-8')] def handle_get(self, environ, start_response): if environ['PATH_INFO'] == '/coordinates': # self.cursor.execute(&quot;SELECT * FROM coordinates&quot;) # coordinates = self.cursor.fetchall() # coordinates = [{'id': c[0], 'latitude': c[1], 'longitude': c[2]} for c in coordinates] # response = dumps(coordinates) response = dumps({'mssg' : 'werkt123'}) # response = {'mssg' : 'haha3'} elif environ['PATH_INFO'].startswith('/coordinates/'): coordinate_id = int(environ['PATH_INFO'].split('/')[-1]) self.cursor.execute(&quot;SELECT * FROM coordinates WHERE id = %s&quot;, (coordinate_id,)) coordinate = self.cursor.fetchone() if coordinate: coordinate = {'id': coordinate[0], 'latitude': coordinate[1], 'longitude': coordinate[2]} response = dumps(coordinate) else: response = dumps({'error': 'Coordinate not found'}) start_response(&quot;404 Not Found&quot;, [('Content-type', 'application/json')]) return [response.encode()] else: response = dumps({'error': 'Invalid endpoint'}) start_response(&quot;404 Not Found&quot;, [('Content-type', 'application/json')]) return [response.encode()] start_response(&quot;200 OK&quot;, [('Content-type', 'application/json')]) return [response.encode()] def application(self, environ, start_response): try: path = environ.get('PATH_INFO', '').lstrip('/') if path == 'coordinates': if environ['REQUEST_METHOD'] == 'GET': return self.handle_get(environ, start_response) elif environ['REQUEST_METHOD'] == 'POST': return self.handle_post_coordinates(environ, start_response) # return self.do_POST(environ, start_response) # elif environ['REQUEST_METHOD'] == 'PUT': # return handle_put_coordinates(environ, start_response) # elif environ['REQUEST_METHOD'] == 'DELETE': # return handle_delete_coordinates(environ, start_response) else: start_response(&quot;404 Not Found&quot;, [('Content-type', 'text/plain')]) return [b&quot;Not 
Found&quot;] except Exception as e: print(&quot;Error:&quot;, str(e)) start_response(&quot;500 Internal Server Error&quot;, []) return [b&quot;Error: &quot; + str(e).encode()] </code></pre>
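A likely cause (an assumption, since the full test output isn't shown): `setUp()` constructs `RequestHandler` *before* the `@patch` decorator on the test method takes effect, so `__init__` opens a real psycopg2 connection. One fix is to start the patch inside `setUp()` with `patcher.start()` / `addCleanup()` and only then build the handler. A minimal self-contained sketch of the pattern, using toy `Database`/`Handler` stand-ins instead of the real `psycopg2` and `RequestHandler`:

```python
import unittest
from unittest.mock import patch

class Database:
    """Stand-in for psycopg2: connecting for real would blow up."""
    @staticmethod
    def connect(**kwargs):
        raise RuntimeError("real database connection attempted")

class Handler:
    """Stand-in for RequestHandler: it connects in __init__."""
    def __init__(self):
        self.conn = Database.connect(host="localhost")
        self.cursor = self.conn.cursor()

    def insert_coordinates(self, latitude, longitude):
        self.cursor.execute(
            "INSERT INTO coordinates (latitude, longitude) "
            "VALUES (%s, %s) RETURNING id",
            (latitude, longitude),
        )
        new_id = self.cursor.fetchone()[0]
        self.conn.commit()
        return {"id": new_id, "latitude": latitude, "longitude": longitude}

class TestHandler(unittest.TestCase):
    def setUp(self):
        # Start the patch BEFORE building the handler, so __init__
        # receives the mock connection instead of a real one.
        patcher = patch.object(Database, "connect")
        self.mock_connect = patcher.start()
        self.addCleanup(patcher.stop)
        mock_cursor = self.mock_connect.return_value.cursor.return_value
        mock_cursor.fetchone.return_value = [42]   # fake the RETURNING id
        self.handler = Handler()

    def test_insert(self):
        result = self.handler.insert_coordinates(12.9, 77.6)
        self.assertEqual(result,
                         {"id": 42, "latitude": 12.9, "longitude": 77.6})
        self.mock_connect.return_value.commit.assert_called_once()
```

With the real code the patch target would be `'src.requesthandler.psycopg2.connect'`, started the same way in `setUp()`; the key point is the ordering, not the names used in this sketch.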
<python><python-3.x><unit-testing><mocking><psycopg2>
2023-02-14 10:23:56
2
1,279
superkytoz
75,446,132
5,722,359
How to convert Path("/home/user/mypic.jpg").stat().st_ctime to human-readable datetime format?
<p>I got the creation time of a file using this command:</p> <pre><code>ctime = Path(&quot;/home/user/mypic.jpg&quot;).stat().st_ctime </code></pre> <p>How do I convert this info to a <code>datetime.datetime</code> object that is human-readable? I tried this <a href="https://stackoverflow.com/a/10256141/5722359">answer</a> but it failed:</p> <pre><code>from datetime import datetime datetime.strptime(str(ctime), &quot;%a %b %d %H:%M:%S %Y&quot;) </code></pre>
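`strptime` parses strings, but `st_ctime` is a POSIX timestamp (seconds since the epoch), which converts directly with `datetime.fromtimestamp`. A small self-contained sketch (note, as a caveat: on Linux `st_ctime` is the inode-change time, not creation time; `st_mtime`, or `st_birthtime` where available, may be what you actually want):

```python
import tempfile
from datetime import datetime, timezone
from pathlib import Path

# Any file will do; a temp file keeps the example self-contained.
with tempfile.NamedTemporaryFile() as f:
    ctime = Path(f.name).stat().st_ctime   # float seconds since the epoch

dt_local = datetime.fromtimestamp(ctime)                  # naive, local time
dt_utc = datetime.fromtimestamp(ctime, tz=timezone.utc)   # aware, UTC

print(dt_local.strftime("%a %b %d %H:%M:%S %Y"))
```

`strftime` then formats the resulting `datetime` however you like; `strptime` is only needed when going the other way, from string to `datetime`.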
<python>
2023-02-14 09:56:30
1
8,499
Sun Bear
75,446,122
7,183,388
Python: How to annotate a variable number of iterable attributes?
<p>I have a class family for which I need to be able to iterate through attributes of type: <code>Metric</code>.</p> <p>The family consists of an abstract base class parent and child classes. The child classes will all have varying number of class attributes of type Metric, and they inherit an <code>__iter__</code> method from the parent class that allows me to iterate through the attributes.</p> <p>I am using iterable attributes rather than a dict because I want my objects to be typed, but I need to be able to call metrics in sequence, and by name. So I need to be able to do:</p> <p><code>Metrics.metric_1</code> and <code>for metric in Metrics:</code></p> <p>My question is, how do I correctly hint in the base class that there are a variable number of attributes of the same type?</p> <p>I'm currently using a couple of attribute hints with an ellipsis:</p> <pre><code>class MetricsBase(ABC): metric_1: Metric metric_2: Metric ... @classmethod def __iter__(cls): for attr, value in cls.__dict__.items(): if not attr.startswith(&quot;__&quot;): yield value class MetricChild(MetricsBase): metric_1 = Metric(x) metric_2 = Metric(y) metric_3 = Metric(z) </code></pre> <p>But I'm not sure if this is pythonic or correct, and wondering if there is a neater way of doing this.</p> <p>Many thanks for any input!</p>
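One hedged alternative (a sketch, not necessarily *the* answer): drop the per-attribute hints in the base class and filter by type instead of by name, which expresses "any number of `Metric` attributes" directly in code. One subtlety: for `for metric in MetricChild:` to work on the *class itself* (rather than an instance), `__iter__` has to live on a metaclass, since `iter(SomeClass)` looks it up on `type(SomeClass)`. Combining this with `ABC` would mean deriving the metaclass from `ABCMeta`.

```python
class Metric:
    def __init__(self, value):
        self.value = value

class MetricsMeta(type):
    # iter(SomeClass) looks __iter__ up on the metaclass; a
    # classmethod __iter__ on the class only covers instances.
    def __iter__(cls):
        for klass in cls.__mro__:
            for value in vars(klass).values():
                if isinstance(value, Metric):   # filter by type, not name
                    yield value

class MetricsBase(metaclass=MetricsMeta):
    pass

class MetricChild(MetricsBase):
    metric_1 = Metric(1)
    metric_2 = Metric(2)
    metric_3 = Metric(3)

print([m.value for m in MetricChild])   # definition order: [1, 2, 3]
```

Named access (`MetricChild.metric_1`) still works as before, and the `isinstance` filter means helper attributes can never leak into iteration the way a name-prefix check can.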
<python><python-typing>
2023-02-14 09:55:27
1
353
Archie
75,446,087
12,242,085
How to avoid overfitting on multiclass classification OvR Xgboost model / class_weight in Python?
<p>I'm trying to build a multiclass classification model in Python using XGBoost OvR (OneVsRest) like below:</p> <pre><code>from xgboost import XGBClassifier from sklearn.multiclass import OneVsRestClassifier from sklearn.metrics import roc_auc_score X_train, X_test, y_train, y_test = train_test_split(abt.drop(&quot;TARGET&quot;, axis=1) , abt[&quot;TARGET&quot;] , train_size = 0.70 , test_size=0.30 , random_state=123 , stratify = abt[&quot;TARGET&quot;]) model_1 = OneVsRestClassifier(XGBClassifier()) </code></pre> <p>When I use the above code I get HUGE overfitting: <code>AUC_TRAIN: 0.9988, AUC_TEST: 0.7650</code></p> <p>So, I decided to use <code>class_weight.compute_class_weight</code>:</p> <pre><code>from sklearn.utils import class_weight class_weights = class_weight.compute_class_weight('balanced', np.unique(y_train), y_train) model_1.fit(X_train, y_train, class_weight=class_weights) metrics.roc_auc_score(y_train, model_loop_a.predict_proba(X_train), multi_class='ovr') metrics.roc_auc_score(y_test, model_loop_a.predict_proba(X_test), multi_class='ovr') </code></pre> <p>Nevertheless, when I try to use <code>class_weight.compute_class_weight</code> like above, I get the following error: <code>TypeError: fit() got an unexpected keyword argument 'class_weight'</code></p> <p>How can I fix that, or do you maybe have some other idea how to avoid such HUGE overfitting on my multiclass classification model in Python?</p>
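One workaround (a sketch; whether it actually reduces the overfitting depends on the data): `fit()` takes no `class_weight` argument here, but `XGBClassifier.fit` does accept per-sample weights, and since `XGBClassifier` handles multiclass natively the OvR wrapper may be unnecessary. scikit-learn's `compute_sample_weight` turns balanced class weights into a per-row vector:

```python
import numpy as np
from sklearn.utils.class_weight import compute_sample_weight

# Toy imbalanced labels standing in for abt["TARGET"]
y_train = np.array([0, 0, 0, 0, 1, 1, 2])

# One weight per row, inversely proportional to class frequency:
# n_samples / (n_classes * count(class))
sample_weights = compute_sample_weight(class_weight="balanced", y=y_train)
print(sample_weights)

# Then, assuming xgboost is installed, skip the OvR wrapper and
# pass the weights to fit (hyperparameters here are illustrative):
#   XGBClassifier(max_depth=3, subsample=0.8).fit(
#       X_train, y_train, sample_weight=sample_weights)
```

That said, class weights mainly address imbalance; a train/test AUC gap this large usually responds better to regularization (smaller `max_depth`, larger `min_child_weight`, `subsample`/`colsample_bytree` below 1, early stopping on a validation set).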
<python><scikit-learn><xgboost><multiclass-classification><overfitting-underfitting>
2023-02-14 09:52:20
1
2,350
dingaro
75,445,810
12,932,447
How to implement a "Citation" table? (using SQLModel or SQLAlchemy)
<p>I'm struggling with implementing the concept of &quot;scientific paper citation&quot; in SQL.</p> <p>I have a table of <code>Paper</code>s. Each <code>Paper</code> can <em><strong>cite</strong></em> many other <code>Paper</code>s and, vice-versa, it can <em><strong>be cited</strong></em> by many other more.</p> <p>Here's the code I wrote</p> <pre class="lang-py prettyprint-override"><code>class Paper(SQLModel, table=True): id: Optional[int] = Field(default=None, primary_key=True) bibliography: List[&quot;Citation&quot;] = Relationship(back_populates=&quot;citing&quot;) cited_by: List[&quot;Citation&quot;] = Relationship(back_populates=&quot;cited&quot;) class Citation(SQLModel, table=True): citing_id: Optional[int] = Field(default=None, primary_key=True, foreign_key=&quot;paper.id&quot;) citing: &quot;Paper&quot; = Relationship(back_populates=&quot;bibliography&quot;) cited_id: Optional[int] = Field(default=None, primary_key=True, foreign_key=&quot;paper.id&quot;) cited: &quot;Paper&quot; = Relationship(back_populates=&quot;cited_by&quot;) </code></pre> <p>This is not working:</p> <pre><code>sqlalchemy.exc.AmbiguousForeignKeysError: Could not determine join condition between parent/child tables on relationship Paper.bibliography - there are multiple foreign key paths linking the tables. Specify the 'foreign_keys' argument, providing a list of those columns which should be counted as containing a foreign key reference to the parent table. 
</code></pre> <p>The problem is the fact that I wrote <code>foreign_key=&quot;paper.id&quot;</code> twice, but I don't know how to fix it.</p> <hr /> <p>To reproduce the error:</p> <ul> <li>I'm using Python 3.10.5;</li> <li>the only dependency is <code>sqlmodel</code>.</li> </ul> <pre class="lang-py prettyprint-override"><code>from typing import List from typing import Optional from sqlmodel import create_engine from sqlmodel import Field from sqlmodel import Relationship from sqlmodel import Session from sqlmodel import SQLModel sqlite_file_name = &quot;database.db&quot; sqlite_url = f&quot;sqlite:///{sqlite_file_name}&quot; engine = create_engine(sqlite_url, echo=True) # class Paper(SQLModel, table=True): ... # class Citation(SQLModel, table=True): ... if __name__ == &quot;__main__&quot;: SQLModel.metadata.create_all(engine) Paper() </code></pre> <p>I'm using <a href="https://sqlmodel.tiangolo.com/" rel="nofollow noreferrer"><code>SQLModel</code></a>, but an answer in <a href="https://www.sqlalchemy.org/" rel="nofollow noreferrer"><code>SQLAlchemy</code></a> would be fine as well.</p>
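A sketch of the disambiguation in plain SQLAlchemy (the table and attribute names mirror the question; SQLModel exposes, I believe, the same option through `sa_relationship_kwargs={"foreign_keys": ...}` on `Relationship`): since both FK columns point at `paper.id`, each relationship must be told explicitly which column it follows.

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Citation(Base):
    __tablename__ = "citation"
    citing_id = Column(Integer, ForeignKey("paper.id"), primary_key=True)
    cited_id = Column(Integer, ForeignKey("paper.id"), primary_key=True)
    # Two FKs target paper.id, so each relationship names its own column:
    citing = relationship("Paper", foreign_keys=[citing_id],
                          back_populates="bibliography")
    cited = relationship("Paper", foreign_keys=[cited_id],
                         back_populates="cited_by")

class Paper(Base):
    __tablename__ = "paper"
    id = Column(Integer, primary_key=True)
    bibliography = relationship("Citation", foreign_keys=[Citation.citing_id],
                                back_populates="citing")
    cited_by = relationship("Citation", foreign_keys=[Citation.cited_id],
                            back_populates="cited")

engine = create_engine("sqlite://")   # in-memory DB for the demo
Base.metadata.create_all(engine)
```

With `foreign_keys` spelled out on all four relationships, the `AmbiguousForeignKeysError` goes away; `paper.bibliography` walks `citing_id` and `paper.cited_by` walks `cited_id`.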
<python><sqlalchemy><foreign-keys><sqlmodel>
2023-02-14 09:28:50
1
875
ychiucco
75,445,290
778,508
Poetry No File/Folder for Package
<p>I have a simple project layout</p> <pre><code>myproject on  main [$!?] is 📦 v1.0.0 via  v18.14.0 via 🐍 v3.10.9 ❯ tree -L 1 . ├── build ├── deploy ├── Dockerfile ├── poetry.lock ├── pyproject.toml ├── README.md ├── scripts.py └── src </code></pre> <p>The <code>pyproject.toml</code> is:</p> <pre><code>[tool.poetry] name = &quot;myproject&quot; version = &quot;0.1.0&quot; description = &quot;&quot; authors = [&quot;&quot;] [tool.poetry.scripts] test = &quot;scripts:test&quot; </code></pre> <p>The scripts.py is:</p> <pre><code>import subprocess def test(): &quot;&quot;&quot; Run all unittests. &quot;&quot;&quot; subprocess.run( ['python', '-u', '-m', 'pytest'] ) if __name__ == '__main__': test() </code></pre> <p>When I run <code>poetry run test</code>:</p> <p><code>myproject on main [$!?] is 📦 v1.0.0 via  v18.14.0 via 🐍 v3.10.9 </code></p> <pre><code> No file/folder found for package myproject </code></pre>
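Poetry expects a package named after the project (a `myproject/` or `src/myproject/` folder). A guess, since the contents of `src/` aren't shown: if the importable code under `src/` isn't in a folder called `myproject`, declare the layout explicitly (or, for a non-library project on Poetry &ge; 1.8, set `package-mode = false`). A hedged sketch of the relevant `pyproject.toml` section:

```toml
[tool.poetry]
name = "myproject"
version = "0.1.0"
description = ""
authors = [""]
# Tell Poetry where the package actually lives; "myproject" here is
# an assumption about the folder name under src/.
packages = [{ include = "myproject", from = "src" }]
```

If `scripts.py` at the repo root needs to be importable for the `test = "scripts:test"` entry point, it may need its own `{ include = "scripts.py" }` entry in `packages` as well.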
<python><python-poetry>
2023-02-14 08:46:42
6
8,046
gdm
75,445,200
6,357,916
googlesearch package in colab
<p>I can do the following in google colab:</p> <pre><code>import googlesearch as g urls = list(g.search(search_string, stop=results_count, lang='en')) </code></pre> <p>But which package is this <code>googlesearch</code>?</p> <p>I checked <a href="https://pypi.org/project/googlesearch-python/" rel="nofollow noreferrer">this</a> package, but it does not have a <code>stop</code> parameter in its <code>search()</code> method, so I am guessing it is different from what's installed in colab. (The same parameter is called <code>num_results</code> in this API, as shown in the hello world example on its PyPI page linked above.)</p> <p>How do I find out which googlesearch package is installed in colab, perhaps to use it locally on my laptop?</p> <p>PS: You can check the API in <a href="https://colab.research.google.com/drive/1sIcfBUN9MbCFZXZBKUZkzkj_VNPGruyC?usp=sharing" rel="nofollow noreferrer">this notebook</a>.</p>
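One way to map an importable module back to the distribution that installed it, using only the standard library (a sketch; the helper name is made up, and it should be run inside the Colab notebook itself):

```python
import importlib.metadata as md

def dists_providing(module_name):
    """Installed distributions that ship a top-level `module_name`."""
    hits = []
    for dist in md.distributions():
        try:
            files = dist.files or []
            name = dist.metadata["Name"]
            version = dist.version
        except Exception:
            continue  # skip distributions with broken metadata
        top_level = {f.parts[0] for f in files if f.parts}
        if module_name in top_level or f"{module_name}.py" in top_level:
            hits.append((name, version))
    return hits

# Run inside Colab to see which PyPI project provides the module:
print(dists_providing("googlesearch"))
```

For what it's worth, the `stop=`/`lang=` signature looks like the older `google` PyPI package (which installs a `googlesearch` module), but the snippet above lets you verify rather than guess; `!pip show <name>` on the reported distribution then gives the exact version to install locally.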
<python><pip><google-colaboratory><pypi>
2023-02-14 08:38:42
1
3,029
MsA
75,445,186
11,809,811
force Toplevel Widget on top of root widget
<p>I have a tkinter app with a Toplevel widget that I want to create when the window is starting. The issue I have is that the Toplevel window always ends up behind the main window. Is there a way to force it in front of the root window?</p>
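A common approach (a sketch, untested against your exact layout since no code was posted) is to mark the Toplevel as transient for the root and lift it, optionally pulsing `-topmost` so it wins the initial stacking order. The same calls should work on customtkinter's `CTkToplevel`, which subclasses `tk.Toplevel`:

```python
import tkinter as tk

def bring_to_front(win: tk.Toplevel, root: tk.Tk) -> None:
    win.transient(root)                 # keep the window above its parent
    win.lift()                          # raise it in the stacking order
    win.attributes("-topmost", True)    # force it on top for a moment...
    # ...then release the flag so it doesn't cover other apps forever
    win.after(100, lambda: win.attributes("-topmost", False))
    win.focus_force()

status = "no display"
try:
    root = tk.Tk()
    top = tk.Toplevel(root)
    bring_to_front(top, root)
    root.update()
    root.destroy()
    status = "ok"
except tk.TclError:
    # Headless environment (no X server): nothing to show.
    pass
print(status)
```

Calling this right after creating the Toplevel at startup, rather than relying on default stacking, is usually enough; `transient()` also keeps the window grouped with the root when the app is minimized.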
<python><tkinter><customtkinter>
2023-02-14 08:37:38
1
830
Another_coder
75,445,030
11,998,382
Lazily transpose dimensions 0 and 2 in iterator model
<p>Given an iterable of an iterable of an iterable <code>it_it_it</code> (i.e. a lazy representation of a 3d array), you can lazily transpose dimensions <code>0</code> and <code>1</code> by <code>zip(*it_it_it)</code> and dimensions <code>1</code> and <code>2</code> by <code>map(lambda it_it: zip(*it_it), it_it_it)</code>.</p> <p>However, the last combination (<code>0</code> and <code>2</code>) is trickier. It seems you must fully evaluate the outer two iterators before yielding anything, and the type yielded must be <code>List[List]</code>, not a lazy <code>Iterable[Iterable]</code>; the innermost iterator is the only one that can be lazily evaluated (i.e. <code>Iterable[List[List]]</code> is the best you can do).</p> <p>I'm going to give an answer but am <strong>interested in a more elegant answer</strong>.</p> <h5>Aside:</h5> <p>I'm interested in this question for understanding the problem with statically typed iterators, e.g. in Rust and C++. Do you make sure to set up your data so you never have to do this operation? Is the best thing to do just to fully evaluate the iterators to <code>List[List[List]]</code> and then transpose C-style?</p>
<python><iterator>
2023-02-14 08:20:09
3
3,685
Tom Huntington
75,445,025
283,538
get centroid latitude and longitude of h3 hexagon
<p>I know how to get h3 hexagon ids for various resolutions and add them to a pandas dataframe containing latitudes and longitudes. Is it possible to get the centroid latitude and longitude of each h3 hexagon given its id? I saw the function <code>cell_to_lat_lng</code> but do not know how to use it in this context. Any pointers would be very much appreciated.</p> <p>PS:</p> <pre><code>import h3 dir(h3) </code></pre> <p>does not show cell_to_lat_lng btw?!</p> <p>This does not work:</p> <pre><code>from h3 import h3 h3.cell_to_lat_lng('88396c0331fffff') </code></pre> <p>I get:</p> <pre><code>AttributeError: module 'h3.api.basic_str' has no attribute 'cell_to_lat_lng' </code></pre>
<python><h3>
2023-02-14 08:19:53
2
17,568
cs0815
75,444,625
13,874,745
How to update the weights of a composite model composed of pytorch and torch-geometric?
<p>I made a composite model <code>MainModel</code> which consist of a <code>GinEncoder</code> and a <code>MainModel</code> which containing some <code>Linear</code> layers, and the <code>GinEncoder</code> made by the package <code>torch-geometric</code>, show as following codes :</p> <pre class="lang-py prettyprint-override"><code>class GinEncoder(torch.nn.Module): def __init__(self): super(GinEncoder, self).__init__() self.gin_convs = torch.nn.ModuleList() self.gin_convs.append(GINConv(Sequential(Linear(1, 4), BatchNorm1d(4), ReLU(), Linear(4, 4), ReLU()))) self.gin_convs.append(GINConv(Sequential(Linear(4, 4), BatchNorm1d(4), ReLU(), Linear(4, 4), ReLU()))) def forward(self, x, edge_index, batch_node_id): # Node embeddings nodes_emb_layers = [] for i in range(2): x = self.gin_convs[i](x, edge_index) nodes_emb_layers.append(x) # Graph-level readout nodes_emb_pools = [global_add_pool(nodes_emb, batch_node_id) for nodes_emb in nodes_emb_layers] # Concatenate and form the graph embeddings graph_embeds = torch.cat(nodes_emb_pools, dim=1) return graph_embeds def get_embeddings(self, x, edge_index, batch_node_id): with torch.no_grad(): graph_embeds = self.forward(x, edge_index, batch_node_id).reshape(-1) return graph_embeds class MainModel(torch.nn.Module): def __init__(self, graph_encoder:torch.nn.Module): super(MainModel, self).__init__() self.graph_encoder = graph_encoder self.lin1 = Linear(8, 4) self.lin2 = Linear(4, 8) def forward(self, x, edge_index, batch_node_id): graph_embeds = self.graph_encoder(x, edge_index, batch_node_id) out_lin1 = self.lin1(graph_embeds) pred = self.lin2(out_lin1)[-1] return pred gin_encoder = GinEncoder().to(&quot;cuda&quot;) model = MainModel(gin_encoder).to(&quot;cuda&quot;) </code></pre> <p>I found that the weights of <code>GinEncoder</code> were not updated, while the weights of <code>Linear</code> layer in <code>MainModel</code> were updated.I observe this by following codes:</p> <pre class="lang-py 
prettyprint-override"><code>gin_encoder = GinEncoder().to(&quot;cuda&quot;) model = MainModel(gin_encoder).to(&quot;cuda&quot;) criterion = torch.nn.MSELoss() optimizer = torch.optim.Adam(model.parameters()) epochs = for epoch_i in range(epochs): model.train() train_loss = 0 for batch_i, data in enumerate(train_loader): data.to(&quot;cuda&quot;) x, x_edge_index, x_batch_node_id = data.x, data.edge_index, data.batch y, y_edge_index, y_batch_node_id = data.y[-1].x, data.y[-1].edge_index, torch.zeros(data.y[-1].x.shape[0], dtype=torch.int64).to(&quot;cuda&quot;) optimizer.zero_grad() graph_embeds_pred = model(x, x_edge_index, x_batch_node_id) y_graph_embeds = model.graph_encoder.get_embeddings(y, y_edge_index, y_batch_node_id) loss = criterion(graph_embeds_pred, y_graph_embeds) train_loss += loss loss.backward() optimizer.step() if batch_i == 0: print(f&quot;NO. {epoch_i} EPOCH&quot;) print(f&quot;MainModel weights in epoch_{epoch_i}_batch0:{next(islice(model.parameters(), 15, 16))}&quot;, end=&quot;\n\n&quot;) print(f&quot;GinEncoder weights in epoch_{epoch_i}_batch0:{next(model.graph_encoder.parameters())}&quot;) print(&quot;*&quot;*80) </code></pre> <p>Outputs of codes:</p> <pre><code>NO. 0 EPOCH MainModel weights in epoch_0_batch0:Parameter containing: tensor([-0.1447, -0.3689, -0.2840, -0.3619, -0.2040, 0.2430, 0.4651, 0.3736], device='cuda:0', requires_grad=True) GinEncoder weights in epoch_0_batch0:Parameter containing: tensor([[-0.8312], [-0.5712], [-0.6963], [-0.1601]], device='cuda:0', requires_grad=True) ******************************************************************************** NO. 
1 EPOCH MainModel weights in epoch_1_batch0:Parameter containing: tensor([-0.1842, -0.3333, -0.3170, -0.3247, -0.2424, 0.2627, 0.4272, 0.4119], device='cuda:0', requires_grad=True) GinEncoder weights in epoch_1_batch0:Parameter containing: tensor([[-0.8312], [-0.5712], [-0.6963], [-0.1601]], device='cuda:0', requires_grad=True) ******************************************************************************** NO. 2 EPOCH MainModel weights in epoch_2_batch0:Parameter containing: tensor([-0.2302, -0.3077, -0.3251, -0.2905, -0.2847, 0.2558, 0.3881, 0.4527], device='cuda:0', requires_grad=True) GinEncoder weights in epoch_2_batch0:Parameter containing: tensor([[-0.8312], [-0.5712], [-0.6963], [-0.1601]], device='cuda:0', requires_grad=True) ******************************************************************************** </code></pre> <p>My question is how to make <code>loss.backward()</code> and <code>optimizer.step()</code> also pass to <code>GinEncoder</code>?</p> <p>PS.</p> <ul> <li>I put the complete codes in here: <a href="https://gist.github.com/theabc50111/3ca708d0c1101d57b6172bd717302710" rel="nofollow noreferrer">https://gist.github.com/theabc50111/3ca708d0c1101d57b6172bd717302710</a></li> <li>I put the training data on Google Drive: <a href="https://drive.google.com/drive/folders/1_KMwCzf1diwS4gGNdSSxG7bnemqQkFxI?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/drive/folders/1_KMwCzf1diwS4gGNdSSxG7bnemqQkFxI?usp=sharing</a></li> </ul>
<python><pytorch><neural-network><pytorch-geometric><graph-neural-network>
2023-02-14 07:31:07
1
451
theabc50111
75,444,464
5,028,320
Automated ML Scheduled Pipeline Not Triggering Automatically
<p>I have 2 pipelines:</p> <ol> <li>Uploading data from a blob storage to the datastore (to update the data asset)</li> <li>Training model</li> </ol> <p>I also have 2 schedules:</p> <ol> <li>Triggers every 10 minutes to call the pipeline 1</li> <li>Triggers when there is a new data in datastore</li> </ol> <p>I used <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/continuous-retraining/auto-ml-continuous-retraining.ipynb" rel="nofollow noreferrer">this notebook</a> as a starter code and modified it to fit my needs.</p> <p>Code for Schedule 1</p> <pre class="lang-py prettyprint-override"><code>schedule = Schedule.create( workspace=ws, name=&quot;data_ingestion_schedule&quot;, description=&quot;Schedule that updates the Dataset&quot;, pipeline_parameters={&quot;ds_name&quot;: DATASET_NAME}, pipeline_id=published_pipeline.id, # id of the published data upload pipeline experiment_name=EXPERIMENT_NAME, recurrence=ScheduleRecurrence(frequency=&quot;Minute&quot;, interval=10), wait_for_provisioning=True ) </code></pre> <p>Code for Schedule 2</p> <pre class="lang-py prettyprint-override"><code>schedule = Schedule.create( workspace=ws, name=&quot;retraining_schedule&quot;, description=&quot;Schedule that triggers training pipeline&quot;, pipeline_parameters={&quot;ds_name&quot;: DATASET_NAME, &quot;model_name&quot;: MODEL_NAME}, pipeline_id=published_pipeline.id, # id of the published training pipeline experiment_name=EXPERIMENT_NAME, datastore=dstor, path_on_datastore=DATASET_NAME, wait_for_provisioning=True, polling_interval=5 ) </code></pre> <p>So, the problem is that the second schedule does not get triggered even where there is change in data in the datastore (at this path - <code>dstore/DATASET_NAME</code>). I waited and tried uploading multiple files, but the second schedule never get triggered.</p> <p>I have tried it in different workspaces, datastore and path on datastore but did not succeed. 
I have tried searching on Google but haven't found anything.</p> <p>Ideally, I want to be able to upload new data to a blob storage -&gt; then pipeline 1 (triggered by schedule 1) will take the data from the storage and upload it to the datastore -&gt; pipeline 2 (triggered by schedule 2) takes the updated dataset and (re-)trains the model.</p>
<python><azure><machine-learning><pipeline><automl>
2023-02-14 07:10:59
0
716
Ashyam
75,444,303
1,640,005
Clicking on button in discord channel does not execute the code in on_button_click
<p>The purpose of this code is to</p> <ol> <li>create the channel if it doesnt exist (this works) and write a welcome message in it (this works)</li> <li>when you type in !open_ticket it should create a button and display it along with a message (this works)</li> <li>click the button and it will create a new channel with the users name as part of it (this does not work)</li> </ol> <p>the print does not display in the terminal so it tells me the code is never even executed.</p> <p>Why?</p> <pre><code>import discord from discord.ext import commands from helperfunctions import create_default_message token = 'my token' my_guild = 'guild name' channel_name = 'support-tx' intents = discord.Intents.all() intents.members = True intents.message_content = True bot = commands.Bot(command_prefix='!', intents=intents.all()) @bot.event async def on_ready(): for guild in bot.guilds: if guild.name == my_guild: break for channel in guild.channels: if channel.name == channel_name: print(&quot;support tickets channel already created&quot;) bCF = True #channel found break else: bCF = False #create support tickets channel if it doesnt exist if not bCF: await guild.create_text_channel(channel_name) #await sync_channel_permissions(guild,channel_name) #this isnt working #create the initial message if it doesnt exist await create_default_message(bot,channel_name) print( f&quot;{bot.user} is connected to the following guild:\n&quot; f&quot;{guild.name}(id: {guild.id})&quot; ) @bot.event async def on_message(message): if message.author == bot.user: #make sure bot doesnt read its own messages return #message_content = message.content.lower() await bot.process_commands(message) @bot.command() #using !open_ticket in the discord chat, this should create a button and type a message async def open_ticket(ctx): print(&quot;hello function executed&quot;) view = discord.ui.View() button = discord.ui.Button(label=&quot;Open Ticket&quot;, custom_id=&quot;openTicket&quot;) view.add_item(button) 
view.timeout = None # set the view timeout to None bot.add_view(view) # register the view with the bot await ctx.send(&quot;Click the button to open a new support ticket.&quot;,view=view) @bot.event #clicking on the above button should run this code but it does not, the print line doesn't show in the console async def on_button_click(interaction: discord.Interaction): print(f&quot;Button clicked by {interaction.user}&quot;) if interaction.component.custom_id == &quot;openTicket&quot;: print(&quot;Button clicked&quot;) # Add this line to check if the function is being executed user = interaction.user guild = interaction.guild channel_name = f&quot;{user.name}'s Ticket&quot; overwrites = { guild.default_role: discord.PermissionOverwrite(read_messages=False), user: discord.PermissionOverwrite(read_messages=True) } new_channel = await guild.create_text_channel(channel_name, overwrites=overwrites) await interaction.respond(content=f&quot;Hello {user.mention}! Your ticket has been created in {new_channel.mention}.&quot;) bot.run(token) </code></pre>
<python><discord><discord.py>
2023-02-14 06:47:17
1
799
zoonosis
75,444,216
4,470,126
Databricks DLT reading a table from one schema(bronze), process CDC data and store to another schema (processed)
<p>I am developing an ETL pipeline using databricks DLT pipelines for CDC data that I recieve from kafka. I have created 2 pipelines successfully for landing, and raw zone. The raw one will have operation flag, a sequence column, and I would like to process the CDC and store the clean data in processed layer (SCD 1 type). I am having difficulties in reading table from one schema, apply CDC changes, and load to target db schema tables.</p> <p>I have 100 plus tables, so i am planning to loop through the tables in RAW layer and apply CDC, move to processed layer. Following is my code that I have tried (I have left the commented code just for your reference).</p> <pre><code>import dlt from pyspark.sql.functions import * from pyspark.sql.types import * raw_db_name = &quot;raw_db&quot; processed_db_name = &quot;processed_db_name&quot; def generate_curated_table(src_table_name, tgt_table_name, df): # @dlt.view( # name= src_table_name, # spark_conf={ # &quot;pipelines.incompatibleViewCheck.enabled&quot;: &quot;false&quot; # }, # comment=&quot;Processed data for &quot; + str(src_table_name) # ) # # def create_target_table(): # # return (df) # dlt.create_target_table(name=tgt_table_name, # comment= f&quot;Clean, merged {tgt_table_name}&quot;, # #partition_cols=[&quot;topic&quot;], # table_properties={ # &quot;quality&quot;: &quot;silver&quot; # } # ) # @dlt.view # def users(): # return spark.readStream.format(&quot;delta&quot;).table(src_table_name) @dlt.view def raw_tbl_data(): return df dlt.create_target_table(name=tgt_table_name, comment=&quot;Clean, merged customers&quot;, table_properties={ &quot;quality&quot;: &quot;silver&quot; }) dlt.apply_changes( target = tgt_table_name, source = f&quot;{raw_db_name}.raw_tbl_data, keys = [&quot;id&quot;], sequence_by = col(&quot;timestamp_ms&quot;), apply_as_deletes = expr(&quot;op = 'DELETE'&quot;), apply_as_truncates = expr(&quot;op = 'TRUNCATE'&quot;), except_column_list = [&quot;id&quot;, &quot;timestamp_ms&quot;], 
stored_as_scd_type = 1 ) return tbl_name = 'raw_po_details' df = spark.sql(f'select * from {raw_dbname}.{tbl_name}') processed_tbl_name = tbl_name.replace(&quot;raw&quot;, &quot;processed&quot;) //processed_po_details generate_curated_table(tbl_name, processed_tbl_name, df) </code></pre> <p>I have tried with dlt.view(), dlt.table(), dlt.create_streaming_live_table(), dlt.create_target_table(), but ending up with either of the following errors:</p> <pre><code>AttributeError: 'function' object has no attribute '_get_object_id' pyspark.sql.utils.AnalysisException: Failed to read dataset '&lt;raw_db_name.mytable&gt;'. Dataset is not defined in the pipeline </code></pre> <p>.Expected result:</p> <ol> <li>Read the dataframe which is passed as a parameter (RAW_DB) and</li> <li>Create new tables in PROCESSED_DB which is configured in DLT pipeline settings</li> </ol> <p><a href="https://www.databricks.com/blog/2022/04/27/how-uplift-built-cdc-and-multiplexing-data-pipelines-with-databricks-delta-live-tables.html" rel="nofollow noreferrer">https://www.databricks.com/blog/2022/04/27/how-uplift-built-cdc-and-multiplexing-data-pipelines-with-databricks-delta-live-tables.html</a></p> <p><a href="https://cprosenjit.medium.com/databricks-delta-live-tables-job-workflows-orchestration-patterns-bc7643935299" rel="nofollow noreferrer">https://cprosenjit.medium.com/databricks-delta-live-tables-job-workflows-orchestration-patterns-bc7643935299</a></p> <p>Appreciate any help please.</p> <p>Thanks in advance</p>
<python><databricks><azure-databricks><delta-live-tables>
2023-02-14 06:32:27
1
3,213
Yuva
75,444,215
16,853,253
Unable to display image in html flask
<p>I'm trying to render an image in HTML in Flask; the path points to the correct file, but it's somehow not rendering.</p> <p>#VIEWS</p> <pre><code>@inventory_bp.route(&quot;/show&quot;) def show_image(): id = 3 image = Images.query.get(id) upload_folder = app.config['UPLOAD_FOLDER'] image_path = os.path.join(upload_folder, image.filename) return render_template(&quot;image-upload.html&quot;, image_url=image_path) </code></pre> <p>#HTML</p> <pre><code> &lt;div&gt; &lt;img src=&quot;{{ image_url }}&quot; alt=&quot;&quot; /&gt; &lt;/div&gt; </code></pre> <pre><code>UPLOAD_FOLDER = os.path.join(os.getcwd(), 'flaskbp/uploads') app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER </code></pre> <p><a href="https://i.sstatic.net/gkzWf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gkzWf.png" alt="enter image description here" /></a></p>
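<p>For context on why this pattern fails: a browser cannot fetch an absolute filesystem path like the one <code>os.path.join</code> produces here — the file has to be exposed over HTTP by the server. A minimal sketch of the usual approach with <code>send_from_directory</code> (the route name and upload folder below are illustrative assumptions, not the asker's actual code):</p>

```python
import os
from flask import Flask, send_from_directory

app = Flask(__name__)
# Hypothetical upload folder; substitute your own app.config['UPLOAD_FOLDER']
app.config['UPLOAD_FOLDER'] = os.path.join(os.getcwd(), 'flaskbp/uploads')

@app.route('/uploads/<path:filename>')
def uploaded_file(filename):
    # Serve the stored file over HTTP instead of handing the template
    # a filesystem path the browser cannot reach
    return send_from_directory(app.config['UPLOAD_FOLDER'], filename)
```

<p>The template would then build the URL with <code>url_for('uploaded_file', filename=image.filename)</code> rather than passing <code>image_path</code> directly.</p>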
<python><flask>
2023-02-14 06:32:26
1
387
Sins97
75,444,181
9,609,843
pathlib.Path.exists returns False for broken symlink
<p>Consider this setup in a command line:</p> <pre><code>touch test.txt ln -s test.txt test.lnk rm test.txt </code></pre> <p>So we have a broken symlink that points to deleted file. Now in Python:</p> <pre class="lang-py prettyprint-override"><code>import pathlib p = pathlib.Path('test.lnk') p.is_symlink() # True p.exists() # False </code></pre> <p>Python tells me that &quot;<code>test.lnk</code> is a symlink, but it doesn't exist...&quot; <br> Because of that I am not able to check correctly if there is something named <code>test.lnk</code> in a directory:</p> <pre class="lang-py prettyprint-override"><code>if not p.exists(): p.symlink_to('another_file') # Traceback (most recent call last): # File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; # File &quot;/usr/lib/python3.8/pathlib.py&quot;, line 1384, in symlink_to # self._accessor.symlink(target, self, target_is_directory) # File &quot;/usr/lib/python3.8/pathlib.py&quot;, line 446, in symlink # return os.symlink(a, b) # FileExistsError: [Errno 17] File exists: 'another_file' -&gt; 'test.lnk' </code></pre> <p>How do I work around that? What is the correct way to check if a broken symlink exists in Python?</p>
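<p>One detail that suggests a workaround: <code>Path.exists()</code> follows the symlink (it <code>stat()</code>s the target), so for a dangling link it reports the missing target, while <code>Path.is_symlink()</code> uses <code>lstat()</code> and inspects the link itself. A small sketch combining the two (the helper name is mine; <code>os.path.lexists</code> is the single-call equivalent):</p>

```python
import os
import pathlib
import tempfile

def name_exists(p: pathlib.Path) -> bool:
    # True if anything occupies this name, including a broken symlink.
    # p.exists() stat()s the link's *target*, so it is False for dangling
    # links; p.is_symlink() lstat()s the link itself.
    return p.is_symlink() or p.exists()

# Demonstration in a throwaway directory (mirrors the touch/ln/rm setup)
d = pathlib.Path(tempfile.mkdtemp())
target, link = d / 'test.txt', d / 'test.lnk'
target.touch()
link.symlink_to(target)
target.unlink()                      # test.lnk is now a broken symlink

assert not link.exists()             # follows the link -> target is gone
assert name_exists(link)             # but the name itself is still taken
assert os.path.lexists(link)         # same answer via a single lstat()
```

<p>On Python 3.12+ there is also <code>p.exists(follow_symlinks=False)</code>, which performs the same <code>lstat()</code>-based check directly.</p>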
<python><linux><symlink>
2023-02-14 06:28:41
3
8,600
sanyassh
75,444,036
194,547
DRF: API root view doesn't list my APIs for some reason
<p>I have the following <code>urls.py</code> for an app named <code>service</code> where I register API endpoints:</p> <pre><code>from django.urls import include, path from rest_framework.routers import DefaultRouter from .views import AccessViewSet, CheckViewSet app_name = &quot;api&quot; router = DefaultRouter() router.register(r&quot;access/(?P&lt;endpoint&gt;.+)&quot;, AccessViewSet, basename=&quot;access&quot;) router.register(r&quot;check/(?P&lt;endpoint&gt;.+)&quot;, CheckViewSet, basename=&quot;check&quot;) urlpatterns = [ path(&quot;&quot;, include(router.urls)), ] </code></pre> <p>Below is my project's <code>urls.py</code> where I use it:</p> <pre><code>from django.conf import settings from django.contrib import admin from django.urls import include, path import service.urls as service_urls urlpatterns = [ # ... path(&quot;service/&quot;, include('service.urls')), ] </code></pre> <p>The APIs themselves are functioning properly, but I am having trouble making them work with DRF's default API root view. The view displays an empty list of available endpoints. I'm not sure, but this issue may be related to the regular expressions I'm using when registering endpoints, such as <code>r&quot;access/(?P&lt;endpoint&gt;.+)&quot;</code>. If this is indeed the problem, how can I resolve it?</p>
<python><django><regex><routes><django-rest-framework>
2023-02-14 06:06:47
1
2,595
varnie
75,443,943
15,793,575
Efficiently replacing values in each row of pandas dataframe based on condition
<p>I would like to work with a pandas data frame to get a strange yet desired output dataframe. For each row, I'd like any values of 0.0 to be replaced with an empty string (''), and all values of 1.0 to be replaced with the value of the index. Any given value on a row can only be 1.0 or 0.0.</p> <p>Here's some example data:</p> <pre><code># starting df df = pd.DataFrame.from_dict({'A':[1.0,0.0,0.0],'B':[1.0,1.0,0.0],'C':[0.0,1.0,1.0]}) df.index=['x','y','z'] print(df) </code></pre> <p>What the input df looks like:</p> <pre><code> A B C x 1.0 1.0 0.0 y 0.0 1.0 1.0 z 0.0 0.0 1.0 </code></pre> <p>What I would like the output df to look like:</p> <pre><code> A B C x x x y y y z z </code></pre> <p>So far I've got this pretty inefficient but seemingly working code:</p> <pre><code>for idx in df.index: df.loc[idx] = df.loc[idx].map(str).replace('1.0',str(idx)) df.loc[idx] = df.loc[idx].map(str).replace('0.0','') </code></pre> <p>Could anyone please suggest an efficient way to do this?</p> <p>The real data frame I'll be working with has a shape of (4548, 2044) and the values will always be floats (1.0 or 0.0), like in the example. I'm manipulating the usher_barcodes.csv data from &quot;raw.githubusercontent.com/andersen-lab/Freyja/main/freyja/data/…&quot; into a format required by another pipeline, where the column headers are lineage names and the values are mutations (taken from the index). The column headers and index values will likely be different each time I need to run this code because the lineage assignments are constantly changing.</p> <p>Thanks!</p>
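<p>For what it's worth, the row-by-row loop above can be collapsed into a single vectorized operation: build the boolean mask <code>df.eq(1.0)</code> and broadcast the index down each row with <code>numpy.where</code>. A sketch (assuming, as the question states, every cell is exactly 0.0 or 1.0):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame.from_dict({'A': [1.0, 0.0, 0.0],
                             'B': [1.0, 1.0, 0.0],
                             'C': [0.0, 1.0, 1.0]})
df.index = ['x', 'y', 'z']

# Where a cell is 1.0 keep the row label (index broadcast across columns
# via the (n, 1) shape); everywhere else use an empty string.
out = pd.DataFrame(
    np.where(df.eq(1.0), np.asarray(df.index)[:, None], ''),
    index=df.index, columns=df.columns,
)
print(out)
```

<p>Because this is one array operation rather than a Python-level loop over 4548 rows, it should scale comfortably to the (4548, 2044) frame described.</p>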
<python><pandas>
2023-02-14 05:50:37
3
393
frustrated_bioinformatician