Columns (name: dtype, min/max):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, 15 to 150 characters
QuestionBody: string, 40 to 40.3k characters
Tags: string, 8 to 101 characters
CreationDate: string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, 3 to 30 characters
76,605,959
10,621,702
Flask: set the Authorization header in Bearer format
<p>I'm using Flask to authenticate with Keycloak. Since I call <code>resp.set_cookie('jwt_token', jwt_token)</code>, every request now carries a cookie <code>jwt_token=....</code>.</p> <p>But I need the token in a header, as a Bearer token.</p> <p>Is there a way to add a header, the way a cookie is added, for future requests?</p> <pre class="lang-py prettyprint-override"><code>@app.route(f'/{url}/callback') def callback(): auth_code = request.args.get('code') token_params = { 'grant_type': 'authorization_code', 'code': auth_code, 'client_id': keycloak_client_id, 'client_secret': keycloak_client_secret, 'redirect_uri': keycloak_redirect_uri.format(rd=request.args.get('rd').replace('/', '%2F')) } response = requests.post(keycloak_token_endpoint, data=token_params) if response.status_code == 200: token_data = response.json() print(token_data) jwt_token = token_data.get('access_token') resp = redirect(f'{redirect_host}/{request.args.get(&quot;rd&quot;).replace(&quot;%2F&quot;, &quot;/&quot;)}') resp.set_cookie('jwt_token', jwt_token) return resp else: return &quot;Authentication failed&quot; </code></pre> <p>EDIT 1: I don't use any frontend app. I want to access the site with a browser.</p>
<python><authentication><flask>
2023-07-03 15:03:32
1
461
Meeresgott
76,605,956
13,221,007
How to read several file types in Spark?
<p>I want to read files of several different types. Can I do it in one Spark operation? I.e., do it <strong>without</strong> a loop like this:</p> <pre class="lang-py prettyprint-override"><code>from pyspark.shell import spark load_folder = '...' general_df = None for extension in (&quot;*.txt&quot;, &quot;*.inf&quot;): df = spark.read.format(&quot;text&quot;) \ .option(&quot;pathGlobFilter&quot;, extension) \ .option(&quot;recursiveFileLookup&quot;, &quot;true&quot;) \ .load(load_folder) if general_df is None: general_df = df else: general_df = general_df.union(df) general_df.show() </code></pre>
<python><apache-spark><pyspark>
2023-07-03 15:03:12
1
496
MariaMsu
76,605,838
6,260,154
Python regex to match every word in a sentence until the last word with a hyphen in it, not working
<p>I have this <a href="https://regex101.com/r/pAEof4/1" rel="nofollow noreferrer">regex</a> to find the words in a sentence up to the last word which has a hyphen in it.</p> <p>This is my input string:</p> <pre><code> 13wfe + 123dg Tetest-xt ldf-dfdlj-dfldjf-dfs test 123 </code></pre> <p>And so far, using this regex from this <a href="https://stackoverflow.com/questions/76604458/python-regex-to-match-every-words-in-sentence-until-a-last-word-has-underscore-i">post</a>, I am getting a match like this:</p> <pre><code> 13wfe + 123dg Tetest-xt ldf-dfdlj- </code></pre> <p>But my expected output should be only this:</p> <pre><code> 13wfe + 123dg Tetest-xt </code></pre> <p>And this is the regex <code>(.*\b)(?=\w+-)</code> I am using.</p> <p>I do not want the last word, which has a hyphen in it. Kindly guide me in this scenario.</p>
<python><regex>
2023-07-03 14:45:55
2
1,016
Tony Montana
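For the regex question above, one pattern that produces the expected output (a sketch; the sample string is taken from the question) is a greedy match with a lookahead requiring that the next whitespace-delimited token contains a hyphen:

```python
import re

text = "13wfe + 123dg Tetest-xt ldf-dfdlj-dfldjf-dfs test 123"

# Greedy .* backtracks to the last position that is followed by
# whitespace plus a token containing a hyphen, so everything before
# the final hyphenated word is kept (including earlier hyphenated
# words such as "Tetest-xt").
match = re.match(r".*(?=\s\S*-)", text)
result = match.group() if match else text
print(result)  # 13wfe + 123dg Tetest-xt
```

The original `(.*\b)(?=\w+-)` fails because `\w+-` can match inside `ldf-dfdlj-dfldjf-dfs`; anchoring the lookahead to a whitespace boundary prevents that.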
76,605,765
21,395,742
<input> not retrieved using Flask in Python
<p>I am creating a simple form in Python.</p> <p>Here is the HTML:</p> <pre><code>&lt;form class=&quot;msger-inputarea&quot; action=&quot;{{url_for('get_learn')}}&quot; method=&quot;post&quot;&gt; &lt;input name =&quot;my_msg&quot; id=&quot;my_msg&quot; type=&quot;text&quot; class=&quot;msger-input&quot; placeholder=&quot;Enter your message...&quot;&gt; &lt;button onclick=&quot;msg_sent()&quot; type=&quot;submit&quot; class=&quot;msger-send-btn&quot;&gt;Send&lt;/button&gt; &lt;/form&gt; </code></pre> <p>And here is the Python Flask code:</p> <pre><code>def get_learn(): if request.method == 'POST': new_question = str(request.form.get(&quot;my_msg&quot;)) print(new_question) return render_template('learn.html') </code></pre> <p><code>print(new_question)</code> prints <code>None</code>.</p> <p>It appears that request.form is an empty dict. <code>print(request.form)</code> returns <code>ImmutableMultiDict([])</code>.</p> <p>It should contain <code>my_msg</code> with whatever the user entered.</p>
<python><html><forms><flask>
2023-07-03 14:36:28
1
845
hehe
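For the form question above: `request.form` is only populated when a form-encoded POST actually carries the field, and a common culprit for an empty `ImmutableMultiDict` is a click handler (here `msg_sent()`) clearing or removing the input before the submit fires. A minimal server-side sketch (the route path and field name are taken from the question; the rest is assumed for illustration) that demonstrates the Flask side working:

```python
from flask import Flask, request

app = Flask(__name__)

# The route must explicitly allow POST, otherwise Flask answers 405
# and the handler never sees the form data.
@app.route("/learn", methods=["GET", "POST"])
def get_learn():
    if request.method == "POST":
        new_question = request.form.get("my_msg")
        # None here means the field never reached the server,
        # not that Flask dropped it.
        return new_question if new_question is not None else "missing"
    return "form page"
```

If this echoes the value but the real page still posts an empty dict, the problem is in the browser, not in Flask.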
76,605,694
1,451,479
Kivy remove Popup overlay
<p>I use Kivy's Popup in the following way:</p> <pre><code>&lt;AlertPopup@Popup&gt;: auto_dismiss: False title: &quot;Notice&quot; background: '' size_hint: (None,None) pos_hint: {'top': 0.9, 'right': 1.0} overlay_color: 0, 0, 0, 0.5 background_color: 0, 0, 0, 0 canvas.before: Color: rgba: 243, 253, 18, 0.48 Rectangle: pos: self.pos[0], self.pos[1] size: self.size BoxLayout: orientation: 'vertical' BoxLayout: orientation: 'horizontal' Button: size_hint: .15, 1.0 text: 'X' on_release: root.dismiss() Label: text: 'Some message here' </code></pre> <p>When the popup is shown, there is an overlay, whose <code>overlay_color: 0,0,0, 0.5</code> I set on purpose so I can see it. I want it to be removed, because while the popup is shown I can't click on the rest of the application.</p> <p>I'm using Python 3.9 and Kivy 2.1.0.</p> <p><a href="https://i.sstatic.net/sWsYX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sWsYX.png" alt="enter image description here" /></a></p> <p>I want to be able to click the area that surrounds the Popup (and there are clickable parts of the application behind that overlay).</p> <p><strong>Question: How can the overlay be removed completely?</strong></p>
<python><kivy>
2023-07-03 14:27:19
1
4,112
Aviel Fedida
76,605,629
4,743,482
Have an object created in a fixture accessible in setup_class, teardown_class, setup_method, and teardown_method
<p>I have a <strong>conftest.py</strong>:</p> <pre class="lang-py prettyprint-override"><code>import pytest import asyncio from some_library import MyLibrary @pytest.fixture() def event_loop(request): loop = asyncio.get_event_loop_policy().new_event_loop() yield loop loop.close() @pytest.fixture(name=&quot;library_instance&quot;, scope='session') def fixture_library_instance(): library = MyLibrary() print(&quot;\nSetup Library&quot;) library.config() yield library print(&quot;\nTeardown Library&quot;) library = None </code></pre> <p>And a test file (test_1.py):</p> <pre class="lang-py prettyprint-override"><code>import pytest class Test_Somemore: @classmethod def setup_class(self): print(&quot;\nSetup Class&quot;) @classmethod def teardown_class(self): print(&quot;\nTeardown Class&quot;) @classmethod def setup_method(self, method): print(&quot;\nSetup Test = &quot;, method.__name__) @classmethod def teardown_method(self, method): print(&quot;\nTeardown Test = &quot;, method.__name__) @pytest.mark.usefixtures(&quot;library_instance&quot;) @pytest.mark.asyncio async def test_something_4(self, library_instance): print(f&quot;\ntest 4 - {library_instance.var}&quot;) assert 1 == 1 assert library_instance.var == 100 @pytest.mark.usefixtures(&quot;library_instance&quot;) @pytest.mark.asyncio async def test_something_5(self, library_instance): print(f&quot;\ntest 5 - {library_instance.var}&quot;) assert 2 == 2 assert library_instance.var == 100 @pytest.mark.usefixtures(&quot;library_instance&quot;) @pytest.mark.asyncio async def test_something_6(self, library_instance): print(f&quot;\ntest 6 - {library_instance.var}&quot;) assert 3 == 3 assert library_instance.var == 100 </code></pre> <p>The order it is being called is:</p> <ol> <li>Setup Library</li> <li>Setup Class</li> <li>Setup Test (test 4)</li> <li>Teardown Test (test 4)</li> <li>Setup Test (test 5)</li> <li>Teardown Test (test 5)</li> <li>Setup Test (test 6)</li> <li>Teardown Test (test 6)</li> <li>Teardown 
Class</li> <li>Teardown Library</li> </ol> <p>This is OK.</p> <p>What I need is the following:</p> <ol> <li>To have <strong>library_instance</strong> (from the fixture &quot;<strong>fixture_library_instance</strong>&quot;) accessible inside <strong>setup_class</strong>, <strong>teardown_class</strong>, <strong>setup_method</strong>, and <strong>teardown_method</strong>, not just the test cases. I haven't found a way to make this work.</li> <li>In <strong>teardown_method</strong>, check if the test has failed. If it did, I want to call some functions.</li> </ol> <p>Basically something like this:</p> <pre class="lang-py prettyprint-override"><code> @classmethod def setup_class(self): print(&quot;\nSetup Class&quot;) library_instance.foo1() @classmethod def teardown_class(self): print(&quot;\nTeardown Class&quot;) library_instance.foo2() @classmethod def setup_method(self, method): print(&quot;\nSetup Test = &quot;, method.__name__) library_instance.foo3() @classmethod def teardown_method(self, method): print(&quot;\nTeardown Test = &quot;, method.__name__) if test == FAIL: library_instance.foo4() </code></pre> <p>Can somebody please help me?</p>
<python><pytest><conftest>
2023-07-03 14:19:03
1
345
CFlux
76,605,626
15,358,800
How can I add a new line based on a keyword for unstructured data in Python?
<p>I have some text like this:</p> <pre><code>Forbes_Middle_East: 309, Building 4, Emaar Business Park , Dubai , United Arab Emirates Emirates_Neon_Group: No address International_Cricket_Council: No address Tourism_Development_Authority: The Ras AI Khaimah Tourism Development Authority was established in May 2011 under the Government of Ras AI Khaimah. Its purpose is to develop and promote the emirate's tourism offering and infrastructure, both domestically and abroad. Wikipedia Dubai , United Arab Emirates Allsopp_&amp;_Allsopp: No address Lamprell: No address </code></pre> <p>My aim is to add a new line before every address, so that it will look like this:</p> <pre><code>Forbes_Middle_East: 309, Building 4, Emaar Business Park , Dubai , United Arab Emirates Emirates_Neon_Group: No address International_Cricket_Council: No address Tourism_Development_Authority: The Ras AI Khaimah Tourism Development Authority was established in May 2011 under the Government of Ras AI Khaimah. Its purpose is to develop and promote the emirate's tourism offering and infrastructure, both domestically and abroad. Wikipedia Dubai , United Arab Emirates Allsopp_&amp;_Allsopp: No address Lamprell: No address </code></pre> <p>So the only indicator that it's a new address is <code>:</code>. Here the issue is with text wrapping.</p> <p>I'm trying it like this:</p> <pre><code>with open('test.txt', 'r') as infile: data = infile.read() final_list = [] for ind, val in enumerate(data.split('\n')): final_list.append(val) if val == ':': final_list.insert(-1, '\n') </code></pre> <p>My logic is working most of the time, but it fails in some cases with strings having <code>:</code> in the middle, and it also fails if there is text wrapping.</p> <p>Can you suggest a better way to do this?</p>
<python>
2023-07-03 14:19:00
3
4,891
Bhargav
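For the question above, splitting on the colon alone is fragile; keying on the shape of the record label is more robust. A sketch under the assumption (true for the sample data) that every record starts with a capitalized token, possibly containing underscores or `&`, directly followed by a colon and a space:

```python
import re

text = ("Forbes_Middle_East: 309, Building 4, Emaar Business Park , Dubai , "
        "United Arab Emirates Emirates_Neon_Group: No address "
        "International_Cricket_Council: No address Allsopp_&_Allsopp: No address "
        "Lamprell: No address")

# Replace the whitespace that precedes each "Label:" token with a
# newline; a capitalized word with a colon inside a sentence would
# also match, so this is a heuristic, not a parser.
pattern = re.compile(r"\s+(?=[A-Z][A-Za-z0-9_&]*:\s)")
print(pattern.sub("\n", text))
```

Because the pattern consumes only whitespace, text wrapping (addresses broken across lines) is handled for free if the raw newlines are normalized to spaces first, e.g. `" ".join(raw.split())`.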
76,605,471
1,014,217
Pinecone inside Azure Functions: Enum RpcLogCategory has no value defined for name 'User'
<p>I created an Azure Function in Python, with Python 3.10 and Azure Functions Core Tools 4.0, which clearly says to use Python 3.10, while 3.11 is still in preview.</p> <p>My code does essentially nothing; just the <code>import pinecone</code> breaks it.</p> <pre><code> import pinecone import logging import azure.functions as func def main(req: func.HttpRequest) -&gt; func.HttpResponse: return func.HttpResponse(f&quot;Hello World&quot;) </code></pre> <p>The VS Code console shows all of this:</p> <pre><code> createPineConeIndex: [GET,POST] http://localhost:7071/createPineConeIndex For detailed output, run func with --verbose flag. [2023-07-03T13:41:46.389Z] Worker process started and initialized. [2023-07-03T13:41:49.571Z] Host lock lease acquired by instance ID '000000000000000000000000C65794F3'. [2023-07-03T13:41:50.096Z] Worker failed to load function: 'createPineConeIndex' with functionId: '30b6cdec-555d-40ea-88c5-434896f07082'. [2023-07-03T13:41:50.097Z] Result: Failure Exception: ValueError: Enum RpcLogCategory has no value defined for name 'User' Stack: File &quot;C:\Program Files\Microsoft\Azure Functions Core Tools\workers\python\3.10/WINDOWS/X64\azure_functions_worker\dispatcher.py&quot;, line 380, in _handle__function_load_request func = loader.load_function( File &quot;C:\Program Files\Microsoft\Azure Functions Core Tools\workers\python\3.10/WINDOWS/X64\azure_functions_worker\utils\wrappers.py&quot;, line 44, in call return func(*args, **kwargs) File &quot;C:\Program Files\Microsoft\Azure Functions Core Tools\workers\python\3.10/WINDOWS/X64\azure_functions_worker\loader.py&quot;, line 132, in load_function mod = importlib.import_module(fullmodname) File &quot;C:\Users\xx\anaconda3\lib\importlib\__init__.py&quot;, line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1050, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1027, in
_find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1006, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 688, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 883, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 241, in _call_with_frames_removed File &quot;C:\Users\xx\repos\yy\createPineConeIndex\__init__.py&quot;, line 5, in &lt;module&gt; from chromadb.config import Settings File &quot;c:\Users\xx\repos\yy\.venv\lib\site-packages\chromadb\__init__.py&quot;, line 6, in &lt;module&gt; from chromadb.api import API File &quot;c:\Users\xx\repos\yy\.venv\lib\site-packages\chromadb\api\__init__.py&quot;, line 3, in &lt;module&gt; import pandas as pd File &quot;c:\Users\xx\repos\yy\.venv\lib\site-packages\pandas\__init__.py&quot;, line 48, in &lt;module&gt; from pandas.core.api import ( File &quot;c:\Users\xx\repos\yy\.venv\lib\site-packages\pandas\core\api.py&quot;, line 27, in &lt;module&gt; from pandas.core.arrays import Categorical File &quot;c:\Users\xx\repos\yy\.venv\lib\site-packages\pandas\core\arrays\__init__.py&quot;, line 1, in &lt;module&gt; from pandas.core.arrays.arrow import ArrowExtensionArray File &quot;c:\Users\xx\repos\yy\.venv\lib\site-packages\pandas\core\arrays\arrow\__init__.py&quot;, line 1, in &lt;module&gt; from pandas.core.arrays.arrow.array import ArrowExtensionArray File &quot;c:\Users\xx\repos\yy\.venv\lib\site-packages\pandas\core\arrays\arrow\array.py&quot;, line 60, in &lt;module&gt; from pandas.core.arraylike import OpsMixin File &quot;c:\Users\xx\repos\yy\.venv\lib\site-packages\pandas\core\arraylike.py&quot;, line 21, in &lt;module&gt; from pandas.core.ops.common import unpack_zerodim_and_defer File &quot;c:\Users\xx\repos\yy\.venv\lib\site-packages\pandas\core\ops\__init__.py&quot;, line 38, in &lt;module&gt; from pandas.core.ops.array_ops import ( File 
&quot;c:\Users\xx\repos\yy\.venv\lib\site-packages\pandas\core\ops\array_ops.py&quot;, line 57, in &lt;module&gt; from pandas.core.computation import expressions File &quot;c:\Users\xx\repos\yy\.venv\lib\site-packages\pandas\core\computation\expressions.py&quot;, line 20, in &lt;module&gt; from pandas.core.computation.check import NUMEXPR_INSTALLED File &quot;c:\Users\xx\repos\yy\.venv\lib\site-packages\pandas\core\computation\check.py&quot;, line 5, in &lt;module&gt; ne = import_optional_dependency(&quot;numexpr&quot;, errors=&quot;warn&quot;) File &quot;c:\Users\xx\repos\yy\.venv\lib\site-packages\pandas\compat\_optional.py&quot;, line 142, in import_optional_dependency module = importlib.import_module(name) File &quot;C:\Users\xx\anaconda3\lib\importlib\__init__.py&quot;, line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;c:\Users\xx\repos\yy\.venv\lib\site-packages\numexpr\__init__.py&quot;, line 44, in &lt;module&gt; nthreads = _init_num_threads() File &quot;c:\Users\xx\repos\yy\.venv\lib\site-packages\numexpr\utils.py&quot;, line 160, in _init_num_threads log.info('NumExpr defaulting to %d threads.'%n_cores) File &quot;C:\Users\xx\anaconda3\lib\logging\__init__.py&quot;, line 1477, in info self._log(INFO, msg, args, **kwargs) File &quot;C:\Users\xx\anaconda3\lib\logging\__init__.py&quot;, line 1624, in _log self.handle(record) File &quot;C:\Users\xx\anaconda3\lib\logging\__init__.py&quot;, line 1634, in handle self.callHandlers(record) File &quot;C:\Users\xx\anaconda3\lib\logging\__init__.py&quot;, line 1696, in callHandlers hdlr.handle(record) File &quot;C:\Users\xx\anaconda3\lib\logging\__init__.py&quot;, line 968, in handle self.emit(record) File &quot;C:\Program Files\Microsoft\Azure Functions Core Tools\workers\python\3.10/WINDOWS/X64\azure_functions_worker\dispatcher.py&quot;, line 821, in emit Dispatcher.current.on_logging(record, msg) File &quot;C:\Program Files\Microsoft\Azure Functions Core 
Tools\workers\python\3.10/WINDOWS/X64\azure_functions_worker\dispatcher.py&quot;, line 208, in on_logging log_category = protos.RpcLog.RpcLogCategory.Value('User') File &quot;C:\Program Files\Microsoft\Azure Functions Core Tools\workers\python\3.10/WINDOWS/X64\google\protobuf\internal\enum_type_wrapper.py&quot;, line 73, in Value raise ValueError('Enum {} has no value defined for name {!r}'.format( . </code></pre> <p>If I remove the import pinecone, the error disappears</p> <p>requirements.txt</p> <pre><code>azure-functions langchain pinecone-client #azure-storage-blob openai pyodbc azure-identity azure-keyvault-secrets pydantic </code></pre>
<python><azure><azure-functions><pinecone>
2023-07-03 13:58:57
1
34,314
Luis Valencia
76,605,336
2,426,635
Packaging ODBC driver - SQLAlchemy for MS SQL Server
<p>I'm using SQLAlchemy together with pyodbc to connect to an MS SQL Server instance.</p> <p>I prepare the connection string as below, and then pass that to SQLAlchemy to create the engine:</p> <pre class="lang-py prettyprint-override"><code>uri = f&quot;mssql+pyodbc://{user_auth}@{hostname}/{dbname}?trusted_connection=yes&amp;DRIVER=ODBC+Driver+17+for+SQL+Server&quot; </code></pre> <p>This works when I run it on my local machine, but I want to distribute this code (build an exe). As I understand it, the above snippet will only run on a machine that has the right driver installed. E.g. when I replace ODBC+Driver+17 with ODBC+Driver+13, I get an error because I don't have that driver installed. So I would anticipate a similar issue when I distribute this to users.</p> <p>What is the best practice for dealing with this? Is there a way for me to package/install the driver on the target system? Do I need to catch this error and prompt the users to please go download the missing driver?</p>
<python><sql-server><sqlalchemy><pyodbc>
2023-07-03 13:43:39
0
626
pwwolff
76,605,223
8,973,620
Fill gaps in time intervals with other time intervals
<p>We have two tables with time intervals. I want to fill gaps in <code>df1</code> with <code>df2</code> as in the graph to get <code>df3</code>. <code>df1</code> is moved to <code>df3</code> as it is, and only the parts of <code>df2</code> that lie in the gaps of <code>df1</code> (difference) are moved to <code>df3</code>.</p> <p><a href="https://i.sstatic.net/FUCvi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FUCvi.png" alt="[1]: https://i.sstatic.net/uQeML.png" /></a></p> <pre><code>df1 = pd.DataFrame({'Start': ['2023-01-01', '2023-02-01', '2023-03-15', '2023-04-18', '2023-05-15', '2023-05-25'], 'End': ['2023-01-15', '2023-02-20', '2023-04-01', '2023-05-03', '2023-05-20', '2023-05-30']}) df2 = pd.DataFrame({'Start': ['2023-01-02', '2023-01-05', '2023-01-20', '2023-02-25', '2023-03-05', '2023-04-18', '2023-05-12'], 'End': ['2023-01-03', '2023-01-10', '2023-02-10', '2023-03-01', '2023-04-15', '2023-05-10', '2023-06-05']}) df3 = pd.DataFrame({'Start': ['2023-01-01', '2023-01-20', '2023-02-01', '2023-02-25', '2023-03-05', '2023-03-15', '2023-04-02', '2023-04-18', '2023-05-04', '2023-05-12', '2023-05-15', '2023-05-21', '2023-05-25', '2023-05-31'], 'End': ['2023-01-15', '2023-01-31', '2023-02-20', '2023-03-01', '2023-03-14', '2023-04-01', '2023-04-15', '2023-05-03', '2023-05-10', '2023-05-14', '2023-05-20', '2023-05-24', '2023-05-30', '2023-06-05']}) # df1 Start End 0 2023-01-01 2023-01-15 1 2023-02-01 2023-02-20 2 2023-03-15 2023-04-01 3 2023-04-18 2023-05-03 4 2023-05-15 2023-05-20 5 2023-05-25 2023-05-30 # df2 Start End 0 2023-01-02 2023-01-03 1 2023-01-05 2023-01-10 2 2023-01-20 2023-02-10 3 2023-02-25 2023-03-01 4 2023-03-05 2023-04-15 5 2023-04-18 2023-05-10 6 2023-05-12 2023-06-05 # df3 (desired result) Start End 0 2023-01-01 2023-01-15 1 2023-01-20 2023-01-31 2 2023-02-01 2023-02-20 3 2023-02-25 2023-03-01 4 2023-03-05 2023-03-14 5 2023-03-15 2023-04-01 6 2023-04-02 2023-04-15 7 2023-04-18 2023-05-03 8 2023-05-04 2023-05-10 9 2023-05-12 
2023-05-14 10 2023-05-15 2023-05-20 11 2023-05-21 2023-05-24 12 2023-05-25 2023-05-30 13 2023-05-31 2023-06-05 </code></pre> <p>Code to generate plot:</p> <pre><code>import plotly.express as px df_plot = pd.concat( [ df1.assign(color='df1', df='df1'), df2.assign(color='df2', df='df2'), df3.assign(color=['df1', 'df2', 'df1', 'df2', 'df2', 'df1', 'df2', 'df1', 'df2', 'df2', 'df1', 'df2', 'df1', 'df2'], df='df3') ], ) fig = px.timeline(df_plot, x_start=&quot;Start&quot;, x_end=&quot;End&quot;, y=&quot;df&quot;, color=&quot;color&quot;) fig.update_yaxes(categoryorder='category descending') fig.show() </code></pre>
<python><pandas><time-series>
2023-07-03 13:29:51
2
18,110
Mykola Zotko
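Since every endpoint in the question above falls on a whole day, one workable sketch (an assumption: it trades efficiency for simplicity by expanding each closed interval to daily resolution) is to tag each calendar day with its source, give df1 priority, and collapse runs of consecutive same-source days back into intervals. For the sample data this reproduces the desired df3 exactly:

```python
import pandas as pd

def fill_gaps(df1, df2):
    """Fill the day-level gaps of df1 with the parts of df2 not covered by df1."""
    def covered_days(df):
        days = set()
        for start, end in zip(df["Start"], df["End"]):
            days.update(pd.date_range(start, end))  # closed intervals, daily freq
        return days

    d1 = covered_days(df1)
    d2 = covered_days(df2)
    tagged = {day: "df1" for day in d1}
    tagged.update({day: "df2" for day in d2 - d1})  # df2 only fills the gaps

    # Collapse consecutive days sharing a source back into intervals.
    rows = []
    one_day = pd.Timedelta(days=1)
    for day in sorted(tagged):
        src = tagged[day]
        if rows and rows[-1][2] == src and day - rows[-1][1] == one_day:
            rows[-1][1] = day
        else:
            rows.append([day, day, src])
    return pd.DataFrame(
        [(s.strftime("%Y-%m-%d"), e.strftime("%Y-%m-%d")) for s, e, _ in rows],
        columns=["Start", "End"],
    )
```

For timestamp-resolution intervals or very long ranges, an interval-arithmetic approach (e.g. sorting and subtracting endpoints) would be needed instead; this sketch leans on the daily granularity of the question's data.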
76,605,126
9,525,238
pyqtgraph plot multiple QPainterPath that look like pg.TextItem
<p>This is what it looks like when you add pg.TextItems to each individual pg.ScatterPlotItem</p> <p><a href="https://i.sstatic.net/VzXsr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VzXsr.png" alt="enter image description here" /></a></p> <p>using:</p> <pre><code>label = pg.TextItem(text=str(i), color=HEX_ORANGE, anchor=([1, 1])) label.setPos(x, y) self.plot.addItem(label) </code></pre> <p>but this can only be done in a for loop, and that is slow... so I wanted to replace it with a faster solution. I found that replacing it with QPainterPath works for pg.ScatterPlotItem, but I cannot get them to behave the same using:</p> <pre><code>from PySide6.QtGui import QTransform, QPainterPath, QFont def genSymbol(label): # Basically the example in pyqtgraph/examples/ScatterPlotItem symbol = QPainterPath() f = QFont() f.setPointSize(5) symbol.addText(0, 0, f, label) br = symbol.boundingRect() scale = min(1. / br.width(), 1. / br.height()) tr = QTransform() tr.scale(scale, scale) tr.translate(-br.x() - br.width()/2., -br.y() - br.height()/2.)
return symbol if __name__ == '__main__': import pyqtgraph as pg from PySide6.QtWidgets import QMainWindow from PySide6.QtGui import QColor app = pg.mkQApp(&quot;PerPlotHelper&quot;) w = QMainWindow() cw = pg.GraphicsLayoutWidget() w.show() w.resize(600, 600) w.setCentralWidget(cw) w.setWindowTitle('pyqtgraph example: Arrow') p = cw.addPlot(row=0, col=0) #### point_xs = [1, 2, 3, 4, 5, 6] point_ys = [1, 2, 3, 4, 5, 6] points = pg.ScatterPlotItem(point_xs, point_ys, symbol='o', size=9, pen=pg.mkPen(QColor(&quot;#ff9900&quot;)), brush=pg.mkBrush(QColor(&quot;#ff9900&quot;)), data=&quot;w/e&quot;) p.addItem(points) symbols = [genSymbol(str(i)) for i in range(len(point_xs))] # Draw Text Label spots = [{'x': point_xs[i], 'y': point_ys[i], 'data': &quot;w/e&quot;, 'brush': &quot;#ff9900&quot;, 'symbol': symbols[i]} for i in range(len(point_xs))] textLabelsP = pg.ScatterPlotItem(pen=pg.mkPen('w'), pxMode=True) textLabelsP.addPoints(spots) p.addItem(textLabelsP) app.exec() </code></pre> <p>only puts small, barely visible text on the top-right inside the point itself...</p> <p><a href="https://i.sstatic.net/zYHCf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zYHCf.png" alt="enter image description here" /></a></p> <p>or with pxMode=False it produces this abomination:</p> <p><a href="https://i.sstatic.net/iSdUc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iSdUc.png" alt="enter image description here" /></a></p> <p>What am I missing?</p> <p>Thanks.</p>
<python><pyqt><pyqtgraph>
2023-07-03 13:15:27
1
413
Andrei M.
76,604,868
3,446,051
VSCode: coverage run results in ModuleNotFoundError: No Module named
<p>I have a project in vscode with the following structure:</p> <pre><code>project_root/ src/ __init__.py b.py a.py test/ __init__.py test_a.py .vscode/ launch.json settings.json .env __init__.py </code></pre> <p>My settings for the unittests are:</p> <pre><code>{ &quot;python.testing.unittestArgs&quot;: [ &quot;-v&quot;, &quot;-s&quot;, &quot;./tests&quot;, &quot;-p&quot;, &quot;test_*.py&quot; ], &quot;python.testing.pytestEnabled&quot;: false, &quot;python.testing.unittestEnabled&quot;: true, &quot;python.envFile&quot;: &quot;${workspaceFolder}/.env&quot; </code></pre> <p>and in <code>.env</code> I have set:</p> <pre><code>PYTHONPATH=${WORKSPACE_FOLDER}${pathSeparator}src </code></pre> <p>Here <code>test_a.py</code> calls a function in <code>a.py</code> while <code>a.py</code> imports a class inside <code>b.py</code>. At the beginning, I was not able to run <code>test_a.py</code> and got the error:</p> <pre><code>ModuleNotFoundError: No module named 'b' </code></pre> <p>But after adding <code>&quot;python.envFile&quot;: &quot;${workspaceFolder}/.env&quot;</code> into <code>settings.json</code> and additionally creating <code>.env</code> and adding <code>PYTHONPATH=${WORKSPACE_FOLDER}${pathSeparator}src</code> into <code>.env</code>, I was able to run the tests in vscode using the Testing panel on the left side of vscode.</p> <p>But when I try to run the <code>coverage</code> command in the vscode terminal: <code>coverage run --source=./tests,./src,./src/model_manager -m unittest</code> I get again the following error message:</p> <pre><code>ModuleNotFoundError: No module named 'b' </code></pre> <p>I also tried <code>python -m coverage run --source=./tests,./src,./src/model_manager -m unittest</code> but I am receiving the same error message.</p> <p>What is the correct way to solve this problem?</p> <p><strong>PS:</strong> I am not able to change the PATH and environment variables for the whole user account as I don't have the permissions for that.</p>
<python><unit-testing><visual-studio-code><python-unittest>
2023-07-03 12:44:23
1
5,459
Code Pope
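One detail behind the question above: the `.env` file is read by VS Code itself (Testing panel, debugger), not by the integrated terminal's shell, so a `coverage` command typed into the terminal never sees that `PYTHONPATH`. Setting it inline for the one invocation needs no account-wide environment changes (the paths below are assumptions based on the layout shown in the question):

```shell
# PYTHONPATH entries are prepended to sys.path by the interpreter,
# which makes src/b.py importable as `b` for this run only.
PYTHONPATH=src coverage run --source=./src,./tests -m unittest discover -s tests -p "test_*.py"
coverage report -m
```

A more permanent alternative is packaging `src` with a `pyproject.toml` and `pip install -e .`, which removes the need for `PYTHONPATH` everywhere.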
76,604,717
5,609,221
Reshaping dataframe using wide_to_long vs melt
<p>I want to reshape my pandas dataframe, and recently came across the <code>wide_to_long</code>-function. In what cases would you prefer this function compared to the <code>melt</code>-function?</p>
<python><pandas><dataframe>
2023-07-03 12:23:15
1
2,425
Archie
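A short illustration for the question above (column names invented for the example): `wide_to_long` shines when the wide columns follow a stub+separator+suffix naming convention, because it extracts the suffix into a column for you; `melt` handles arbitrary columns but leaves the name-splitting to you.

```python
import pandas as pd

df = pd.DataFrame({
    "id": [1, 2],
    "score_math": [90, 80],
    "score_read": [85, 70],
})

# wide_to_long: declare the stub, separator and suffix pattern;
# the suffix becomes the new "subject" column automatically.
long_wtl = (pd.wide_to_long(df, stubnames="score", i="id", j="subject",
                            sep="_", suffix=r"\w+")
              .reset_index())

# melt: generic reshape, then strip the stub from the variable name by hand.
long_melt = (df.melt(id_vars="id", var_name="subject", value_name="score")
               .assign(subject=lambda d: d["subject"].str.replace("score_", "", regex=False)))
```

Both produce the same long table here, so the choice is mostly about whether your column names carry structure worth exploiting.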
76,604,686
6,117,400
PyTorch distributed run with SLURM results in "Address family not supported by protocol"
<p>When trying to run an example python file via <code>torch.distributed.run</code> on 2 Nodes with 2 GPUs each on a cluster by using a SLURM script I encounter the following error:</p> <pre><code>[W socket.cpp:426] [c10d] The server socket cannot be initialized on [::]:16773 (errno: 97 - Address family not supported by protocol). [W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [clara06.url.de]:16773 (errno: 97 - Address family not supported by protocol). </code></pre> <p>This is the SLURM script:</p> <pre class="lang-bash prettyprint-override"><code>#!/bin/bash #SBATCH --job-name=distribution-test # name #SBATCH --nodes=2 # nodes #SBATCH --ntasks-per-node=1 # crucial - only 1 task per dist per node! #SBATCH --cpus-per-task=4 # number of cores per tasks #SBATCH --partition=clara #SBATCH --gres=gpu:v100:2 # number of gpus #SBATCH --time 0:15:00 # maximum execution time (HH:MM:SS) #SBATCH --output=%x-%j.out # output file name module load Python pip install --user -r requirements.txt MASTER_ADDR=$(scontrol show hostnames &quot;$SLURM_JOB_NODELIST&quot; | head -n 1) MASTER_PORT=$(expr 10000 + $(echo -n $SLURM_JOBID | tail -c 4)) GPUS_PER_NODE=2 LOGLEVEL=INFO python -m torch.distributed.run --rdzv_id=$SLURM_JOBID --rdzv_backend=c10d --rdzv_endpoint=$MASTER_ADDR\:$MASTER_PORT --nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES torch-distributed-gpu-test.py </code></pre> <p>and the python code that should be running:</p> <pre class="lang-py prettyprint-override"><code>import fcntl import os import socket import torch import torch.distributed as dist def printflock(*msgs): &quot;&quot;&quot;solves multi-process interleaved print problem&quot;&quot;&quot; with open(__file__, &quot;r&quot;) as fh: fcntl.flock(fh, fcntl.LOCK_EX) try: print(*msgs) finally: fcntl.flock(fh, fcntl.LOCK_UN) local_rank = int(os.environ[&quot;LOCAL_RANK&quot;]) torch.cuda.set_device(local_rank) device = torch.device(&quot;cuda&quot;, local_rank) hostname = 
socket.gethostname() gpu = f&quot;[{hostname}-{local_rank}]&quot; try: # test distributed dist.init_process_group(&quot;nccl&quot;) dist.all_reduce(torch.ones(1).to(device), op=dist.ReduceOp.SUM) dist.barrier() # test cuda is available and can allocate memory torch.cuda.is_available() torch.ones(1).cuda(local_rank) # global rank rank = dist.get_rank() world_size = dist.get_world_size() printflock(f&quot;{gpu} is OK (global rank: {rank}/{world_size})&quot;) dist.barrier() if rank == 0: printflock(f&quot;pt={torch.__version__}, cuda={torch.version.cuda}, nccl={torch.cuda.nccl.version()}&quot;) except Exception: printflock(f&quot;{gpu} is broken&quot;) raise </code></pre> <p>I have tried different Python invocations like this:</p> <pre class="lang-bash prettyprint-override"><code>LOGLEVEL=INFO python -m torch.distributed.run --master_addr $MASTER_ADDR --master_port $MASTER_PORT --nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES torch-distributed-gpu-test.py </code></pre> <pre class="lang-bash prettyprint-override"><code>LOGLEVEL=INFO torchrun --rdzv_id=$SLURM_JOBID --rdzv_backend=c10d --rdzv_endpoint=$MASTER_ADDR\:$MASTER_PORT --nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES torch-distributed-gpu-test.py </code></pre> <pre class="lang-bash prettyprint-override"><code>LOGLEVEL=INFO python -m torch.distributed.launch --rdzv_id=$SLURM_JOBID --rdzv_backend=c10d --rdzv_endpoint=$MASTER_ADDR\:$MASTER_PORT --nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES torch-distributed-gpu-test.py </code></pre> <p>All resulting in the same error.</p> <p>I have tried specifying the IP address explicitly instead of the <code>MASTER_ADDR</code></p> <pre class="lang-bash prettyprint-override"><code>IP_ADDRESS=$(srun hostname --ip-address | head -n 1) </code></pre> <ul> <li>I have looked at ports that are open: everything above 1023 is open</li> <li>And inspected the <code>/etc/resolv.conf</code>: the hostnames are clearly mapped</li> <li>And pinged the nodes, which also succeeded.</li>
<li>I have specified the IP version by appending <code>.ipv4</code> to the MASTER_ADDR with no success.</li> </ul>
<python><pytorch><artificial-intelligence><slurm><multi-gpu>
2023-07-03 12:19:57
1
649
Scorix
76,604,620
14,282,714
Plotly with Pandas dataframe side by side in Jupyter notebook
<p>There are some questions about how to create <a href="https://stackoverflow.com/questions/70639494/plotly-express-graphs-side-by-side-in-jupyter-notebook">two plotly graphs side-by-side in Jupyter notebook</a> or <a href="https://stackoverflow.com/questions/35790922/how-to-render-two-pd-dataframes-in-jupyter-notebook-side-by-side">how to show two pandas dataframes side by side</a>. But I would like to display a plotly graph with a pandas dataframe side by side in a Jupyter notebook. Here is some reproducible code for the graph and pandas dataframe:</p> <pre><code>import pandas as pd import plotly.express as px df = px.data.iris() fig = px.scatter(df, x=&quot;sepal_width&quot;, y=&quot;sepal_length&quot;) fig.show() # Simple pandas dataframe df[[&quot;sepal_length&quot;, &quot;species&quot;]].groupby(&quot;species&quot;).agg(['mean', 'count', 'median', 'min', 'max']) </code></pre> <p>Output:</p> <p><a href="https://i.sstatic.net/CFFif.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CFFif.png" alt="enter image description here" /></a></p> <p>Now the output is below each other, but I would like to have them side by side. So I was wondering if anyone knows how to show a plotly graph side by side with a pandas dataframe?</p>
<python><pandas><jupyter-notebook><plotly>
2023-07-03 12:11:04
2
42,724
Quinten
76,604,562
17,471,060
Append items within a list to a Polars DataFrame
<p>I would like to append or concatenate a list of items as new rows to a Polars-based dataframe through a for loop.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd import polars as pl add_list = [(10, 100, 'd'), (20, 20, 'D')] df = pl.DataFrame({&quot;a&quot;: [1, 2, 3], &quot;b&quot;: [6, 7, 8], &quot;c&quot;: [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;]}) </code></pre> <p>In pandas -</p> <pre class="lang-py prettyprint-override"><code>df_pd = df.to_pandas() for d in add_list: ser_pd = pd.Series(data=d, index=df_pd.columns) df_pd = pd.concat([df_pd, ser_pd.to_frame().T], ignore_index=True) print(pl.from_pandas(df_pd)) </code></pre> <pre><code>shape: (5, 3) ┌─────┬─────┬─────┐ │ a ┆ b ┆ c │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ str │ ╞═════╪═════╪═════╡ │ 1 ┆ 6 ┆ a │ │ 2 ┆ 7 ┆ b │ │ 3 ┆ 8 ┆ c │ │ 10 ┆ 100 ┆ d │ │ 20 ┆ 20 ┆ D │ └─────┴─────┴─────┘ </code></pre> <p>I was able to get the same result for Polars by using numpy <a href="https://numpy.org/doc/stable/reference/generated/numpy.ndarray.reshape.html#numpy.ndarray.reshape" rel="nofollow noreferrer"><code>reshape()</code></a></p> <pre class="lang-py prettyprint-override"><code>df2 = df for d in add_list: arr = np.asarray(d).reshape(-1, len(d)) df2 = pl.concat([df2, pl.DataFrame(arr, schema=df2.schema, orient=&quot;row&quot;)]) </code></pre> <p>How can I achieve this using only Polars?</p>
<python><dataframe><python-polars>
2023-07-03 12:04:38
1
344
beta green
76,604,458
6,260,154
Python regex to match every word in a sentence until a word that has an underscore in it
<p>I am trying to find the regex which can match every word in sentence until a last word has an <strong>underscore</strong> in it.</p> <p>For example:</p> <pre><code>13wfe + 123dg Text ldf_dfdlj_dfldjf_dfs test 123 </code></pre> <p>In this example, I am looking to get only</p> <pre><code>13wfe + 123dg Text </code></pre> <p>I have tried using something along the line of these,</p> <pre><code>^.*?(?=_) </code></pre> <p>but it is returning this</p> <pre><code>13wfe + 123dg Text ldf </code></pre> <p>You can find the <a href="https://regex101.com/r/OfP5eS/1" rel="nofollow noreferrer">regex</a> here. Kindly guide me in this scenario.</p> <p>Update: using the <a href="https://regex101.com/r/j6iSwd/1" rel="nofollow noreferrer">regex</a> provided by @liginity, I am able to find the substring, but in some cases it is still failing.</p> <p>Such as in this example:</p> <pre><code> 13wfe + 123dg Tetest_xt ldf_dfdlj_dfldjf_dfs test 123 </code></pre> <p>It should be able to find on this much:</p> <pre><code>13wfe + 123dg Tetest_xt </code></pre> <p>But it is finding multiple:</p> <pre><code>13wfe + 123dg and _xt </code></pre>
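Both expected outputs become consistent under one reading (an assumption, since the question's two examples disagree about single-underscore words): the match should stop before the first word containing two or more underscores. A negative-lookahead sketch under that assumption:

```python
import re

# Assumption: the cutoff word is one with two or more underscores;
# \S*_\S*_ can only match inside a single whitespace-free word.
pattern = re.compile(r'^(?:\s*(?!\S*_\S*_)\S+)+')

m1 = pattern.match("13wfe + 123dg Text ldf_dfdlj_dfldjf_dfs test 123")
m2 = pattern.match("13wfe + 123dg Tetest_xt ldf_dfdlj_dfldjf_dfs test 123")
print(m1.group(0))  # 13wfe + 123dg Text
print(m2.group(0))  # 13wfe + 123dg Tetest_xt
```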
<python><regex>
2023-07-03 11:49:51
1
1,016
Tony Montana
76,604,398
11,199,684
How to use the print function with a for loop in an interactive session
<p>Simple question about print function (python 3.9.13) what causes these (a link to some informative page will be appreciated). I have a list of strings and I want to print them each on separate line in an interactive session.</p> <pre><code>&gt;&gt;&gt; aa = ['word 1','test 2', 'blah 3', 'ding 4'] &gt;&gt;&gt; [print(x) for x in aa] word 1 test 2 blah 3 ding 4 [None, None, None, None] &gt;&gt;&gt; (print(x) for x in aa) &lt;generator object &lt;genexpr&gt; at 0x000001D8AC975DD0&gt; &gt;&gt;&gt; print(x) for x in aa SyntaxError: invalid syntax &gt;&gt;&gt; {print(x) for x in aa} word 1 test 2 blah 3 ding 4 {None} &gt;&gt;&gt; </code></pre> <p>Question: could you explain these behaviours, especially what causes those None to appear and how to avoid it?</p> <p>[Edit:] Related posts: <a href="https://stackoverflow.com/q/27959258/11199684">&quot;Why does the print function return None&quot;</a> (but no loops or list actions there), and <a href="https://stackoverflow.com/a/5753719/11199684">&quot;Is it pythonic to use list comprehensions just for side effect?&quot;</a> (but more abstract, no print function there)</p>
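The `None`s are the return value of `print` itself: a comprehension collects what each call returns, and the REPL then echoes that collected value. A statement (or a single `print` call) produces no value for the REPL to echo:

```python
aa = ['word 1', 'test 2', 'blah 3', 'ding 4']

# Option 1: a plain for statement, so nothing is collected or echoed
for x in aa:
    print(x)

# Option 2: one print call, unpacking the list with a newline separator
print(*aa, sep='\n')

# Option 3: join into a single string first
print('\n'.join(aa))
```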
<python><python-3.x><for-loop>
2023-07-03 11:41:22
2
305
user9393931
76,604,360
8,551,424
Dockerize for ARM Raspberry Pi from Windows
<p>I have a Python script that I need to run it on my Raspberry Pi 4.</p> <p>My idea is to dockerize it, my first idea was doing</p> <pre><code>FROM python:3 WORKDIR /usr/src/app copy main.py . copy requirements.txt . run pip install --no-cache-dir -r requirements.txt CMD [&quot;python&quot;, &quot;./main.py&quot;] </code></pre> <p>but <code>FROM python:3</code> doesn't work on the Raspberry because it's not ARM.</p> <p>Reading and asking chatgpt I found this options, <code>FROM arm32v7/python:3</code>, <code>FROM python:3-slim-stretch-arm32v7</code> and <code>FROM arm32v7/python:3.7-slim-buster</code> but they always crash with this type of message,</p> <pre><code>&gt; 2023/07/03 13:25:05 http2: server: error reading preface from client &gt; //./pipe/docker_engine: file has already been closed [+] Building 0.8s &gt; (3/3) FINISHED &gt; docker:default =&gt; [internal] load build definition from Dockerfile &gt; 0.0s =&gt; =&gt; transferring dockerfile: 326B 0.0s =&gt; [internal] load .dockerignore 0.0s =&gt; =&gt; transferring context: 2B 0.0s =&gt; ERROR [internal] load metadata for docker.io/library/python:3-slim-stretch-arm32v7 &gt; 0.7s &gt; ------ &gt; &gt; [internal] load metadata for docker.io/library/python:3-slim-stretch-arm32v7: &gt; ------ Dockerfile:1 &gt; -------------------- 1 | &gt;&gt;&gt; FROM python:3-slim-stretch-arm32v7 2 | 3 | WORKDIR /usr/src/app &gt; -------------------- ERROR: failed to solve: python:3-slim-stretch-arm32v7: &gt; docker.io/library/python:3-slim-stretch-arm32v7: not found </code></pre> <p>How can I dockerize my script and use it on my Raspberry 4?</p> <p>I found <a href="https://hub.docker.com/r/arm32v7/python" rel="nofollow noreferrer">arm32v7/python</a> but or I'm doing wrong or is not working for me.</p> <p>Thanks.</p>
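A hedged sketch of the usual route: the official `python` images are multi-arch, so the Dockerfile can keep a generic tag and the target platform is chosen at build time with buildx (Docker Desktop on Windows ships buildx plus QEMU emulation). The exact image tag below is an assumption; any current multi-arch `python` tag should behave the same:

```dockerfile
# Dockerfile - generic image name; the platform is selected at build time
FROM python:3.11-slim-bookworm
WORKDIR /usr/src/app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY main.py .
CMD ["python", "./main.py"]
```

```bash
# 32-bit Raspberry Pi OS:
docker buildx build --platform linux/arm/v7 -t myscript:armv7 --load .
# a Pi 4 running a 64-bit OS would use --platform linux/arm64 instead
```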
<python><docker><raspberry-pi><dockerfile><arm>
2023-07-03 11:36:43
1
1,373
Lleims
76,604,299
12,292,254
Sum values in a list based on the unique values in another list
<p>I have two lists, where one list contains strings (&quot;A&quot;, &quot;B&quot; or &quot;C&quot;) and the other contains numeric values. The goal is to sum up the values in the second list for every unique string in the first list. I assume the first list is ordered (arbitrarily) and the indexes in the second list match up.</p> <p>Example:</p> <pre><code>list_one = [&quot;A&quot;, &quot;A&quot;, &quot;B&quot;, &quot;B&quot;, &quot;C&quot;, &quot;C&quot;] list_two = [1000, 200, 500, 120, 500, 350] </code></pre> <p>The resulting list should be the sum for each unique string in list_one based on the values in list_two:</p> <pre><code>res_list = [1200, 620, 850] </code></pre> <p>I can find the index of each unique string in list_one by</p> <blockquote> <p><code>np.unique(list_one, return_index=True)[1] = array([0, 2, 4], dtype=int64)</code></p> </blockquote> <p>but I don't know how to go on from here.</p>
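Since `np.unique` is already in play, `return_inverse=True` plus `np.bincount` finishes the job in two lines: the inverse maps every element to the index of its unique value, and `bincount` sums the weights that share each index.

```python
import numpy as np

list_one = ["A", "A", "B", "B", "C", "C"]
list_two = [1000, 200, 500, 120, 500, 350]

# inverse[i] is the position of list_one[i] inside the unique array;
# bincount then sums the weights that land on each position.
uniques, inverse = np.unique(list_one, return_inverse=True)
res_list = np.bincount(inverse, weights=list_two).tolist()
print(uniques, res_list)
```

Note that `bincount` with `weights` returns floats; cast with `int()` per element if integer sums are required.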
<python><list>
2023-07-03 11:28:11
2
460
Steven01123581321
76,604,175
4,277,485
If the flag is zero in one dataframe, set previous values to 1 in another dataframe based on a condition
<p>I have 2 dataframes, df1 and df2. I want to change the values of df2 based on a condition from df1.</p> <p>df1:</p> <pre><code> name date flag 0 abc 4/11/2023 1 1 xyz 2/8/2023 0 </code></pre> <p>df2:</p> <pre><code> name date flag 0 xyz 2/6/2023 0 1 xyz 2/7/2023 0 2 xyz 2/8/2023 0 3 xyz 2/9/2023 1 4 xyz 2/10/2023 1 5 xyz 2/11/2023 1 6 xyz 2/12/2023 1 7 xyz 2/13/2023 1 </code></pre> <p>In df1, for 'xyz' the flag is 0 on 2/8/2023, hence in df2 the flags for dates less than the date in df1 should be set to 1.</p> <p>Expected output:</p> <p><a href="https://i.sstatic.net/BK4Td.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BK4Td.png" alt="enter image description here" /></a></p> <p>I am new to Python and want to do it using pandas functions.</p>
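A hedged pandas sketch (the exact expected output is in an image, so the rule assumed here is: rows in df2 dated strictly before df1's zero-flag date for the same name get flag 1): merge the cutoff date onto df2 by name, then flip the earlier rows.

```python
import pandas as pd

df1 = pd.DataFrame({"name": ["abc", "xyz"],
                    "date": ["4/11/2023", "2/8/2023"],
                    "flag": [1, 0]})
df2 = pd.DataFrame({"name": ["xyz"] * 8,
                    "date": ["2/6/2023", "2/7/2023", "2/8/2023", "2/9/2023",
                             "2/10/2023", "2/11/2023", "2/12/2023", "2/13/2023"],
                    "flag": [0, 0, 0, 1, 1, 1, 1, 1]})

for d in (df1, df2):
    d["date"] = pd.to_datetime(d["date"], format="%m/%d/%Y")

# Dates at which df1 carries a 0 flag, one row per name.
cutoff = (df1.loc[df1["flag"] == 0, ["name", "date"]]
             .rename(columns={"date": "cutoff"}))

# Left-merge keeps df2's row order/length, so the mask aligns with df2.
merged = df2.merge(cutoff, on="name", how="left")
df2.loc[merged["cutoff"].notna() & (merged["date"] < merged["cutoff"]), "flag"] = 1
print(df2)
```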
<python><pandas><dataframe>
2023-07-03 11:09:43
1
438
Kavya shree
76,604,061
1,898,070
How is the computation code transferred to the Dask worker from the client?
<p>How does the client side source code get transferred to dask worker(s) ?</p> <pre><code>from dask.distributed import Client def hello3(a, b): print('I am hello 3') return a + 5 + b * 3 def hello2(a, b): print('I am hello2') return hello3(a * 10, b + 5) def xadd(a, b): return hello2(a + 2, b * 5) if __name__ == '__main__': client = Client('127.0.0.1:8786') x = client.submit(xadd, 1, 2) print(x.result()) </code></pre> <p>In the above code snippet, how will submit transfer code for xadd, hello2 and hello3 functions to the workers ?</p>
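In short: the distributed client serializes functions defined in `__main__` by value with cloudpickle, shipping the compiled code object plus the globals it references (which is how `xadd` drags `hello2` and `hello3` along), rather than by module path the way plain pickle does. A stdlib-only illustration of the "by value" idea, using `marshal` on the code object (this is a simplified sketch of the concept, not Dask's exact wire format):

```python
import marshal
import types

def hello3(a, b):
    return a + 5 + b * 3

# "By value": serialize the compiled code object itself, not a module path.
payload = marshal.dumps(hello3.__code__)

# On a hypothetical receiving side, rebuild a callable from the raw bytes.
rebuilt = types.FunctionType(marshal.loads(payload), globals())
print(rebuilt(30, 15))  # 80
```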
<python><serialization><dask><dask-distributed>
2023-07-03 10:54:02
2
5,437
Nipun Talukdar
76,604,018
8,849,755
Run shell commands in Python and enter password
<p>I need to copy a number of files from one PC to another using <code>scp</code>. Of course I can manually do it one by one, but since they are several files and each one can take up to one hour, I would like to automate this with a simple script. Normally I would simply do this:</p> <pre class="lang-py prettyprint-override"><code>import subprocess FILES_NAMES = [ 'file_1.raw', 'file_2.raw', ] for fname in FILES_NAMES: subprocess.run(['scp', f'user@pc:/path/to/files/{fname}', '.']) </code></pre> <p>but it keeps asking for the password each iteration.</p> <p>Is it possible to do something like</p> <pre class="lang-py prettyprint-override"><code>for fname in FILES_NAMES: subprocess.run(['scp', f'user@pc:/path/to/files/{fname}', '.'], propmt_password='hardcode_your_super_secure_password_here') </code></pre>
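`scp` reads the password from the terminal, not stdin, so it cannot be hardcoded through `subprocess` alone. Two common workarounds, sketched with the `user@pc` host from the question (both assume an OpenSSH client; `sshpass` must be installed separately and exposes the password to the process list):

```bash
# Option 1 (preferred): key-based auth, then no prompt at all
ssh-keygen -t ed25519          # once, accept the defaults
ssh-copy-id user@pc            # installs the public key on the remote PC

# Option 2: authenticate once and reuse the connection for every scp
ssh -o ControlMaster=yes -o ControlPath=~/.ssh/cm-%r@%h:%p -Nf user@pc
scp -o ControlPath=~/.ssh/cm-%r@%h:%p user@pc:/path/to/files/file_1.raw .
```

After either setup, the Python loop works unchanged because `scp` no longer prompts.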
<python><subprocess><passwords>
2023-07-03 10:48:01
1
3,245
user171780
76,603,915
12,224,591
Get Polynomial X at Y? (Python 3.10, NumPy)
<p>I'm attempting to calculate all possible real X-values at a certain Y-value from a polynomial given in descending coefficient order, in Python 3.10. I want the resulting X-values to be provided to me in a <code>list</code>.</p> <p>I've tried using the <code>roots()</code> function of the <code>numpy</code> library, as shown in one of the answers to <a href="https://stackoverflow.com/questions/16827053/solving-for-x-values-of-polynomial-with-known-y">this post</a>, however it does not appear to work:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt def main(): coeffs = np.array([1, 2, 2]) y = 1.5 polyDataX = np.linspace(-2, 0) polyDataY = np.empty(shape = len(polyDataX), dtype = float) for i in range(len(polyDataX)): polyDataY[i] = coeffs[0] * pow(polyDataX[i], 2) + coeffs[1] * polyDataX[i] + coeffs[2] coeffs[-1] -= y x = np.roots(coeffs).tolist() plt.axhline(y, color = &quot;orange&quot;) plt.plot(polyDataX, polyDataY, color = &quot;blue&quot;) plt.title(&quot;X = &quot; + str(x)) plt.show() plt.close() plt.clf() if (__name__ == &quot;__main__&quot;): main() </code></pre> <p>In my example above, I have the coefficients of my polynomial stored in the local variable <code>coeffs</code>, in descending order. I then attempt to gather all the X-values at the Y-value of <code>1.5</code>, stored within the <code>x</code> and <code>y</code> local variables respectively.
I then display the gathered X-values as the title of the shown plot.</p> <p>The script above results in the following plot: <a href="https://i.sstatic.net/C4uPSl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C4uPSl.png" alt="enter image description here" /></a></p> <p>With the X-values being shown as <code>[-2.0, 0.0]</code>, instead of the correct: <a href="https://i.sstatic.net/eFdCHl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eFdCHl.png" alt="enter image description here" /></a></p> <p>What is the proper way to get all real X-values of a polynomial at a certain Y-value in Python?</p> <p>Thanks for reading my post, any guidance is appreciated.</p>
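The likely root cause in the code shown: `np.array([1, 2, 2])` has an integer dtype, so `coeffs[-1] -= y` truncates 2 − 1.5 to 0 and `np.roots` then solves x² + 2x = 0 (roots −2 and 0, exactly what the plot title showed). Forcing a float dtype fixes it:

```python
import numpy as np

coeffs = np.array([1, 2, 2], dtype=float)  # int dtype would truncate 2 - 1.5 to 0
y = 1.5

shifted = coeffs.copy()
shifted[-1] -= y                 # now solves x^2 + 2x + 0.5 = 0
roots = np.roots(shifted)
real_x = sorted(roots[np.isreal(roots)].real.tolist())
print(real_x)  # approximately [-1.7071, -0.2929]
```

The `np.isreal` filter keeps only the real solutions the question asks for when the polynomial also has complex roots.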
<python><numpy><polynomials>
2023-07-03 10:34:21
3
705
Runsva
76,603,811
13,498,838
Token authentication issue with FastAPI and JWT tokens - "Could not validate credentials"
<p>I am building an API using Python 3.10.8 and FastAPI 0.95.1, and I'm experiencing an issue with user authentication, specifically related to JWT tokens. I have followed the guide provided in FastAPI's <a href="https://fastapi.tiangolo.com/tutorial/security/oauth2-jwt/" rel="nofollow noreferrer">security documentation</a>.</p> <p>The problem arises when I make a request to an endpoint that requires user authentication. Instead of receiving a valid JWT token in the <code>get_current_user()</code> function, the token is being passed as the string <code>&quot;undefined&quot;</code>. As a result, I encounter a 401 Error with the message &quot;Could not validate credentials.&quot; In the Chrome developer tools, the Authorization header also shows <code>&quot;Bearer undefined&quot;</code>.</p> <pre><code># --- dependencies.py --- from database import recipes_db from datetime import datetime, timedelta from fastapi import Depends, HTTPException, status from fastapi.security import OAuth2PasswordBearer from jose import JWTError, jwt from passlib.context import CryptContext from utils.models import UserInternal, UserExternal, TokenData from settings import SECRET_KEY, ALGORITHM from typing import Annotated SELECT_USER = &quot;&quot;&quot; SELECT id, email, first_name as firstName, hashed_password as hashedPassword FROM users WHERE email = :email &quot;&quot;&quot; password_context = CryptContext(schemes=[&quot;bcrypt&quot;], deprecated=&quot;auto&quot;) # This context can be used to hash passwords and verify them later on. # The deprecated argument is set to &quot;auto&quot;, which means that the library # will automatically deprecate old algorithms and switch to new ones as needed. oauth2_scheme = OAuth2PasswordBearer(tokenUrl=&quot;token&quot;) # This is a security scheme used for authenticating users with OAuth2. # The tokenUrl parameter is set to &quot;token&quot;, which is the endpoint where the # user can obtain an access token by providing their credentials. 
The oauth2_scheme # object can be used as a dependency in FastAPI endpoints to enforce authentication. def verify_hash(plain_text: str, hashed_text: str) -&gt; bool: &quot;&quot;&quot; Verifies that a plain text matches a hashed text. Args: plain_text (str): The plain text to verify. hashed_text (str): The hashed text to compare against. Returns: bool: True if the plain text matches the hashed text, False otherwise. &quot;&quot;&quot; return password_context.verify(plain_text, hashed_text) def create_hash(text: str) -&gt; str: &quot;&quot;&quot; Hashes a string (e.g., password or token) Args: text (str): The plain text to hash. Returns: str: The hashed text. &quot;&quot;&quot; return password_context.hash(text) async def get_user(email: str, internal: bool = True) -&gt; UserExternal | UserInternal: &quot;&quot;&quot; Fetches a user from the database by their email. Args: email (str): The email of the user. internal (bool, optional): If True, returns an internal user object. If False, returns an external user object. Defaults to True. Returns: UserInternal or UserExternal: The user object. &quot;&quot;&quot; data = await recipes_db.fetch_one( query=SELECT_USER, values={'email': email} ) if data: if internal: user = UserInternal(**data) else: user = UserExternal(**data) return user async def authenticate_user(email: str, password: str): &quot;&quot;&quot; Authenticates a user. Args: email (str): The email of the user. password (str): The user's password. Returns: UserInternal: The authenticated user object if successful, otherwise False. &quot;&quot;&quot; # Retrieve an UserInternal object user = await get_user(email) if not user: return False if not verify_hash(password, user.hashedPassword): return False return user def create_access_token(data: dict, expires_delta: timedelta | None = None): &quot;&quot;&quot; Generates a JWT access token. Args: data (dict): The data to include in the token. 
expires_delta (timedelta | None): The expiration time for the token, or None for default. Returns: bytes: The encoded access token. &quot;&quot;&quot; # Make a copy of the data so we don't modify the original dictionary to_encode = data.copy() # If an expiration time is provided, set the 'exp' claim to that time if expires_delta: expire = datetime.utcnow() + expires_delta else: expire = datetime.utcnow() + timedelta(minutes=15) to_encode.update({&quot;exp&quot;: expire}) # Encode the token using the JWT library encoded_jwt = jwt.encode(to_encode, str(SECRET_KEY), algorithm=ALGORITHM) return encoded_jwt async def get_current_user(token: Annotated[str, Depends(oauth2_scheme)]): &quot;&quot;&quot; Gets the current user from an authentication token. Args: token (Annotated[str, Depends(oauth2_scheme)]): The authentication token. Raises: HTTPException: If the credentials cannot be validated. Returns: UserInternal: The current user. &quot;&quot;&quot; # Define the exception to raise if credentials cannot be validated credentials_exception = HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail=&quot;Could not validate credentials&quot;, headers={&quot;WWW-Authenticate&quot;: &quot;Bearer&quot;}, ) try: # Decode the JWT token payload = jwt.decode(token, str(SECRET_KEY), algorithms=[ALGORITHM]) email: str = payload.get(&quot;sub&quot;) # Raise exception if the email is missing from the payload if email is None: raise credentials_exception # Create TokenData object with email token_data = TokenData(email=email) except JWTError as e: # Raise exception if there is a JWT error raise credentials_exception # Get user from database using email from token data user = await get_user(email=token_data.email) # Raise exception if user not found if user is None: raise credentials_exception return user async def get_current_active_user(current_user: Annotated[UserInternal, Depends(get_current_user)]): # if current_user.disabled: # raise HTTPException(status_code=400, 
detail=&quot;Inactive user&quot;) return current_user </code></pre> <pre><code># --- main.py --- @app.post(&quot;/token&quot;, response_model=Token) async def login_for_access_token( formData: Annotated[OAuth2PasswordRequestForm, Depends()] ): &quot;&quot;&quot; Logs in and receives an access token. ### Arguments - `formData` (`Annotated[OAuth2PasswordRequestForm, Depends()]`): The OAuth2 password request form. ### Returns - `Token`: The access token. &quot;&quot;&quot; # Authenticate the user credentials, i.e. check username and password in database user = await dependencies.authenticate_user( formData.username, formData.password ) # If user credentials are invalid, raise an HTTPException with a 401 status code if not user: raise HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail=&quot;Incorrect email or password&quot;, headers={&quot;WWW-Authenticate&quot;: &quot;Bearer&quot;}, ) # Define access token expiry time access_token_expires = timedelta( minutes=settings.ACCESS_TOKEN_EXPIRE_MINUTES ) # Create access token using user email and expiry time access_token = dependencies.create_access_token( data={&quot;sub&quot;: user.email}, expires_delta=access_token_expires ) return {&quot;accessToken&quot;: access_token, &quot;tokenType&quot;: &quot;bearer&quot;} </code></pre> <p>When I request an endpoint that depends on a user sign in, i.e.:</p> <pre><code>@router.get(&quot;/&quot;) async def get_user( currentUser: Annotated[UserInternal, Depends(get_current_active_user)]): return currentUser </code></pre> <p>I expected the <code>get_current_user()</code> function to receive a valid JWT token from the Authorization header, decode it, and extract the email address from the payload. The JWT is being correctly created in the <code>login_for_access_token()</code> endpoint but it seems as if this is not being passed in the headers of the request in the automatically generated OpenAPI docs.</p> <p>What can I do to fix this?</p>
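A likely culprit given the code shown: the `/token` endpoint returns `{"accessToken": ..., "tokenType": ...}`, but Swagger UI (per the OAuth2 spec, RFC 6749) reads the snake_case keys `access_token` and `token_type` from the token response. With camelCase keys its stored token is literally `undefined`, which matches the observed `Bearer undefined` header. A minimal sketch of the corrected response shape (the `Token` response model's fields must match as well):

```python
def token_response(access_token: str) -> dict:
    # RFC 6749 field names; Swagger UI looks these up verbatim, so
    # camelCase keys leave its token variable undefined.
    return {"access_token": access_token, "token_type": "bearer"}

# The Pydantic response model should mirror this, e.g. (sketch):
# class Token(BaseModel):
#     access_token: str
#     token_type: str
```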
<python><jwt><fastapi>
2023-07-03 10:15:32
1
1,454
jda5
76,603,626
4,575,197
How to throw an exception when a for loop takes more time than usual to complete on Windows
<p>I have a list of websites (more than 250) and I would like to get all the texts in the websites, for further analysis. The problem occurs for some websites, which take a long time to load or even get stuck in the process of sending a request.</p> <p>Here's the code:</p> <pre><code>def get_the_text(_df): ''' sending a request to receive the Text of the Articles Parameters ---------- _df : DataFrame Returns ------- dataframe with the text of the articles ''' df['text']='' for k,link in enumerate(df['url']): if link: website_text=list() print(link,'\n','K:',k) #time.sleep(2) session = requests.Session() retry = Retry(connect=2, backoff_factor=0.3) adapter = HTTPAdapter(max_retries=retry) session.mount('http://', adapter) session.mount('https://', adapter) # signal.signal(signal.SIGALRM, took_too_long) # signal.setitimer(signal.ITIMER_REAL, 10)# 10 seconds try: timeout_decorator.timeout(seconds=10)#timeout of 10 seconds time.sleep(1) response=session.get(link) # signal.setitimer(signal.ITIMER_REAL, 0) # success, reset to 0 to disable the timer #GETS THE TEXT IN THE WEBSITE THEN except TimeoutError: print('Took too long') continue except ConnectionError: print('Connection error') </code></pre> <p>As you can see, I tried both solutions mentioned in <a href="https://stackoverflow.com/q/68764950/4575197">this post</a>. I found out that when using the signal library, <a href="https://stackoverflow.com/a/48336681/4575197">SIGALRM is not supported on Windows.</a> The second solution, which is <code>timeout_decorator</code>, doesn't throw an exception when it takes more than, for example, 10 seconds.</p> <p>I would like to skip a request when it takes more than 10 seconds to process. How can I achieve this?</p>
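Two complementary fixes, sketched here: first, `requests` has its own per-request `timeout=` argument that works fine on Windows (`session.get(link, timeout=(5, 10))` for connect/read). Second, for a hard wall-clock cap around the whole attempt, run it in a worker thread and bound the wait with `Future.result(timeout=...)`, which raises on any platform. Note the caveat in the comments: the timed-out worker keeps running in the background, so the thread approach should be combined with requests' native timeout.

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

# One long-lived pool; a `with` block here would wait for the stuck task
# on exit and defeat the timeout.
_pool = ThreadPoolExecutor(max_workers=1)

def run_with_deadline(fn, *args, deadline=10.0, **kwargs):
    """Run fn in a worker thread; raise FutureTimeout after `deadline` seconds.

    Caveat: the worker itself is not killed, it just stops being waited on,
    so also pass requests' own timeout= so the socket eventually gives up.
    """
    return _pool.submit(fn, *args, **kwargs).result(timeout=deadline)

# Sketch of use inside the loop from the question:
#   try:
#       response = run_with_deadline(session.get, link, deadline=10, timeout=(5, 10))
#   except FutureTimeout:
#       print('Took too long')
#       continue
```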
<python><exception><request><signals>
2023-07-03 09:50:57
1
10,490
Mostafa Bouzari
76,603,575
11,198,558
How to use string input to execute a function inside a class?
<p>I'm stuck on the process of having the user input a string to execute a function, without using a dictionary of functions. Specifically, I defined a class</p> <pre><code>class Myclass(): def __init__(self, df): ... return def function_1(self, **kwargs): return result def function_2(self, **kwargs): return result ... </code></pre> <p>The user from the frontend gives an input as a string, like <code>&quot;function_1&quot;</code>. I wonder how I can execute the method of the class right away using this string, as below</p> <pre><code># From frontend receive = Input() # receive will be the string right now when user choose &quot;function_1&quot; print(receive) &gt;&gt;&gt; &quot;function_1&quot; # From backend result = Myclass(df).function_1() </code></pre> <h2>Solution that I have tried</h2> <p>I have tried to use</p> <pre><code>def MyFunction(dataframe, userFunctionChoose:str, **kwargs): tasks = {} def task(task_fn): tasks[task_fn.__name__] = task_fn @task def function_1(dataframe): return result @task def function_2(dataframe): return result return tasks[userFunctionChoose]() result = MyFunction(dataframe, &quot;function_1&quot;) </code></pre> <p>Although it worked properly, I still need to create a class for other use-cases, and it contains more than 50 methods inside.</p> <p>With the input string, how can I make <code>Myclass</code> run the required method as a nested function?</p>
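The idiomatic answer is `getattr`: it looks a method up on the instance by its string name, so the class itself stays untouched no matter how many methods it grows. Since the name comes from a frontend, a guard against private/unknown names is worth adding. A self-contained sketch (the two stub methods stand in for the real ones):

```python
class MyClass:
    def __init__(self, df):
        self.df = df

    def function_1(self, **kwargs):
        return "result 1"

    def function_2(self, **kwargs):
        return "result 2"

def run_named(obj, name: str, **kwargs):
    """Look the method up by its string name, rejecting private/unknown names."""
    method = getattr(obj, name, None)
    if method is None or not callable(method) or name.startswith("_"):
        raise ValueError(f"unknown function: {name!r}")
    return method(**kwargs)

receive = "function_1"                      # e.g. the string from the frontend
print(run_named(MyClass(None), receive))    # result 1
```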
<python><oop>
2023-07-03 09:42:57
0
981
ShanN
76,603,505
12,415,855
Scroll down in a table on a website using Selenium
<p>i try to scroll down in a table on a website with the following code:</p> <pre><code>import os from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys from webdriver_manager.chrome import ChromeDriverManager if __name__ == '__main__': WAIT = 1 print(f&quot;Checking Browser driver...&quot;) os.environ['WDM_LOG'] = '0' options = Options() options.add_argument(&quot;start-maximized&quot;) options.add_experimental_option(&quot;prefs&quot;, {&quot;profile.default_content_setting_values.notifications&quot;: 1}) options.add_experimental_option(&quot;excludeSwitches&quot;, [&quot;enable-automation&quot;]) options.add_experimental_option('excludeSwitches', ['enable-logging']) options.add_experimental_option('useAutomationExtension', False) options.add_argument('--disable-blink-features=AutomationControlled') srv=Service(ChromeDriverManager().install()) driver = webdriver.Chrome (service=srv, options=options) waitWD = WebDriverWait (driver, 10) link = &quot;https://www.appliancepartspros.com/ge-dryer-timer-knob-we1m654-ap3995088.html&quot; driver.get (link) driver.execute_script(&quot;arguments[0].scrollIntoView(true);&quot;, waitWD.until(EC.presence_of_element_located((By.XPATH,'//h2[text()=&quot;Cross Reference and Model Information&quot;]')))) tmpBODY = driver.find_element(By.XPATH, '//div[@class=&quot;m-bsc&quot;]/a[@name=&quot;crossref&quot;]') for _ in range(5): tmpBODY.send_keys (Keys.PAGE_DOWN) </code></pre> <p>But i allways get this error-message:</p> <pre><code>(selenium) C:\DEV\Fiverr\TRY\biglaundrystore&gt;python try.py Checking Browser driver... 
Traceback (most recent call last): File &quot;C:\DEV\Fiverr\TRY\biglaundrystore\try.py&quot;, line 31, in &lt;module&gt; tmpBODY.send_keys (Keys.PAGE_DOWN) File &quot;C:\DEV\.venv\selenium\lib\site-packages\selenium\webdriver\remote\webelement.py&quot;, line 231, in send_keys self._execute( File &quot;C:\DEV\.venv\selenium\lib\site-packages\selenium\webdriver\remote\webelement.py&quot;, line 404, in _execute return self._parent.execute(command, params) File &quot;C:\DEV\.venv\selenium\lib\site-packages\selenium\webdriver\remote\webdriver.py&quot;, line 440, in execute self.error_handler.check_response(response) File &quot;C:\DEV\.venv\selenium\lib\site-packages\selenium\webdriver\remote\errorhandler.py&quot;, line 245, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable (Session info: chrome=113.0.5672.127) </code></pre> <p>This is the table i would like to scrolldown - as you can see the page-downs are not working as expected:</p> <p><a href="https://i.sstatic.net/8Zgwi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Zgwi.png" alt="enter image description here" /></a></p>
<python><selenium-webdriver>
2023-07-03 09:33:45
2
1,515
Rapid1898
76,603,429
14,114,654
Create new column based on missing values
<p>I want to create a new column that is based on the other columns. product5 is the best, product2 is the second-best. So the new column should use product5 if available. If not, then try product2 etc. It needs to generalise to many columns based on the order of the items specified in the list:</p> <pre><code>cols_pref_inorder = [&quot;product5&quot;, &quot;product2&quot;, &quot;product&quot;...] df product product2 product5 0 apple Appl Apple 1 banan Banan NaN </code></pre> <p>I tried:</p> <pre><code>def create(x): if pd.notnull(df[&quot;product5&quot;]): return df[&quot;product5&quot;] ... df[&quot;Product_final&quot;] = df.apply(create, axis=1) </code></pre> <p>Expected Output</p> <pre><code>df product product2 product5 Product_final 0 apple Appl Apple Apple (Product_final uses product5 since available) 1 banan Banan NaN Banan (Product_final uses product2 since product5 is missing) </code></pre>
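No row-wise `apply` is needed: select the columns in preference order and backfill across the row, so the first column of the result is the first non-null value per row. This generalizes to any length of `cols_pref_inorder`. A sketch with the sample data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"product": ["apple", "banan"],
                   "product2": ["Appl", "Banan"],
                   "product5": ["Apple", np.nan]})
cols_pref_inorder = ["product5", "product2", "product"]

# bfill(axis=1) pulls the first non-null value leftward across each row;
# column 0 of the result is then the best available product name.
df["Product_final"] = df[cols_pref_inorder].bfill(axis=1).iloc[:, 0]
print(df)
```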
<python><pandas>
2023-07-03 09:20:41
1
1,309
asd
76,603,346
10,220,019
Read TimeValue from pvts file with Python
<p>I have a pvts file that I'm trying to read with Python. Reading the point and cell data works fine, but I also want to extract the time value. For that, I have added</p> <pre class="lang-xml prettyprint-override"><code>&lt;FieldData&gt; &lt;DataArray type=&quot;Float64&quot; Name=&quot;TimeValue&quot; NumberOfTuples=&quot;1&quot;&gt;49.000000 &lt;/DataArray&gt; &lt;/FieldData&gt; </code></pre> <p>To the end of the pvts file (before <code>&lt;\PStructuredGrid&gt;</code>)<br /> This works fine for Paraview, but I'm unable to extract the value using the python <a href="https://vtk.org/doc/nightly/html/index.html" rel="nofollow noreferrer">vtk</a> library. I think I should extract from the <code>vtkXMLPStructuredGridReader</code> object and not the <code>vtkStructuredGrid</code> based on what I have tried so far.</p> <pre class="lang-py prettyprint-override"><code>reader = vtkXMLPStructuredGridReader() reader.SetFileName(file) reader.Update() print(reader) # Has a line stating &quot;ActiveTimeDataArrayName:TimeValue&quot; print(reader.GetTimeDataArray(0)) # Prints &quot;TimeValue&quot; </code></pre> <p>It shows me that it does see the &quot;TimeValue&quot; entry in my pvts.</p> <pre class="lang-py prettyprint-override"><code>data = reader.GetOutput() dim = data.GetDimensions() print(data) </code></pre> <p>On the other hand, says that I have no FieldData. I think this is because I haven't added the TimeValue to the underlying vts files. Maybe I should, but paraview can use the one in the pvts, so I think it should also work via python. Just to be clear: I want to extract the <code>49.000000</code> from the file.</p> <p>Rough structure pvts file:</p> <pre class="lang-xml prettyprint-override"><code>&lt;?xml version=&quot;1.0&quot;?&gt; &lt;VTKFile type=&quot;PStructuredGrid&quot; version=&quot;0.1&quot; byte_order=&quot;LittleEndian&quot;&gt; &lt;PStructuredGrid WholeExtent=&quot;. . . . . 
.&quot; GhostLevel=&quot;.&quot;&gt; &lt;PPoints&gt; &lt;PDataArray type=&quot;Float64&quot; Name=&quot;coordinates&quot; NumberOfComponents=&quot;3&quot; format=&quot;appended&quot; offset=&quot;0&quot;&gt; &lt;/PDataArray&gt; &lt;/PPoints&gt; &lt;PPointData Scalars=&quot;scalars&quot;&gt; &lt;PDataArray type=&quot;Float64&quot; Name=&quot;.&quot; NumberOfComponents=&quot;1&quot; format=&quot;appended&quot; offset=&quot;0&quot;&gt; &lt;/PDataArray&gt; &lt;PDataArray type=&quot;Float64&quot; Name=&quot;.&quot; NumberOfComponents=&quot;1&quot; format=&quot;appended&quot; offset=&quot;0&quot;&gt; &lt;/PDataArray&gt; &lt;/PPointData&gt; &lt;Piece Extent=&quot;. . . . . .&quot; Source=&quot;&lt;n&gt;/field_&lt;n&gt;_0.vts&quot;/&gt; &lt;Piece Extent=&quot;. . . . . .&quot; Source=&quot;&lt;n&gt;/field_&lt;n&gt;_1.vts&quot;/&gt; &lt;FieldData&gt; &lt;DataArray type=&quot;Float64&quot; Name=&quot;TimeValue&quot; NumberOfTuples=&quot;1&quot;&gt;49.000000 &lt;/DataArray&gt; &lt;/FieldData&gt; &lt;/PStructuredGrid&gt; &lt;/VTKFile&gt; </code></pre> <p>I'm also fine with a solution using an XML parser. I tried to look into that, but that also didn't look too easy at first glance.</p>
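Since an XML-parser solution is acceptable, the stdlib `xml.etree.ElementTree` handles this without touching the VTK reader at all: find the `DataArray` whose `Name` attribute is `TimeValue` and convert its text. A sketch (it walks the whole tree, so the exact nesting under `FieldData` does not matter):

```python
import xml.etree.ElementTree as ET

def read_time_value(xml_text: str) -> float:
    """Pull the TimeValue FieldData entry out of a .pvts document."""
    root = ET.fromstring(xml_text)
    for da in root.iter("DataArray"):
        if da.get("Name") == "TimeValue":
            return float(da.text.strip())
    raise KeyError("no DataArray named 'TimeValue' found")

# For a file on disk (hypothetical path), parse instead of fromstring:
#   root = ET.parse("field.pvts").getroot()  # then iterate the same way
```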
<python><vtk>
2023-07-03 09:06:59
1
451
C. Binair
76,603,188
7,936,386
argparse validation in a Python class
<p>I'm trying an OOP approach to my Python code which eventually will be converted to an .EXE file created with PyInstaller. The idea is to pass a series of arguments from the user input to a program that eventually will go something like (<code>myprogram.exe -secureFolder C:/Users -thisisacsvfile.csv -countyCode 01069 -utmZone 15</code>).</p> <p>I can initialize a class definition and pass the arguments like:</p> <pre><code>import argparse import sys class myprogram(): def __init__(self, secureFolder, inputCsvFile, countyCode, utmZone): self.secureFolder = secureFolder self.inputCsvFile = inputCsvFile self.countyCode = countyCode self.utmZone = utmZone if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument(&quot;secureFolder&quot;, help = &quot;A directory where the files are located&quot;, type=str) parser.add_argument(&quot;inputCsvFile&quot;, help=&quot;A CSV file containing the results for a particular county (e.g. 36107.csv)&quot;, type=str) parser.add_argument(&quot;countyCode&quot;, help = &quot;The FIPS county code&quot;, type = str) parser.add_argument(&quot;utmZone&quot;, help = &quot;The UTM zone code for that specific county (e.g. 18)&quot;, type = int) </code></pre> <p>However, I need to validate every user argument, and that's the part where I'm getting confused. In other words, I need to check if the <code>secureFolder</code> exists, if the <code>inputCsvFile</code> is indeed a CSV and contains some specific columns, and other operations for the rest of the arguments. What I don't know exactly is where to perform these operations. After the class definition? Before the OOP approach, I was doing something like:</p> <pre><code># Check if all the arguments were passed undefined_arguments = [attr for attr in vars(args) if getattr(args, attr) is None] if undefined_arguments: print(&quot;The following arguments were not defined:&quot;, undefined_arguments) else: print(&quot;All arguments were defined.&quot;) # 1a.
Check inputCsvFile if args.inputCsvFile is None: sys.exit(&quot;Please select an input CSV file to process (e.g. inputCsvFile.../myfile.csv) &quot;) else: if not os.path.isfile(args.inputCsvFile): sys.exit (f&quot;File {args.inputCsvFile} doesn't appear to exists...please check if the file exists or if you have privileges to access it&quot;) else: grid_file_csv = args.inputCsvFile print (f&quot;{args.inputCsvFile} found...&quot;) # 1b. Check if inputCsvFile is a CSV: if not args.inputCsvFile.endswith('.csv'): raise ValueError(&quot;Invalid input file. Expected a CSV file.&quot;) sys.exit('No propper CSV file has been passed...') # 2. Check if the FIPS code if args.countyCode is None: sys.exit(&quot;Please specify a valid county code (e.g. -countyCode3607)&quot;) # Check the UTM area code if args.utmzone is None: sys.exit(&quot;Please specify a valid UTM zone area (e.g. -utmZone 16): &quot;) if args.utmZone is not None: val = args.utmZone if val &lt; 1 and val &gt; 20: raise Exception('UTM zone area should be between 1 and 20') sys.exit() </code></pre>
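One way to keep the OOP structure and still validate each argument is to move the checks into argparse `type=` callables, which run during `parse_args()` and produce proper usage errors automatically (positional arguments can never be `None`, so all of those None checks fall away). A minimal sketch, reusing the CSV-extension and 1-20 zone rules from the question; the `myprogram` construction at the end is illustrative:

```python
import argparse
import os

def existing_dir(path):
    if not os.path.isdir(path):
        raise argparse.ArgumentTypeError(f"{path!r} is not an existing directory")
    return path

def csv_file(path):
    if not path.lower().endswith(".csv"):
        raise argparse.ArgumentTypeError(f"{path!r} is not a CSV file")
    return path

def utm_zone(value):
    zone = int(value)
    if not 1 <= zone <= 20:
        raise argparse.ArgumentTypeError("UTM zone must be between 1 and 20")
    return zone

def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("secureFolder", type=existing_dir,
                        help="A directory where the files are located")
    parser.add_argument("inputCsvFile", type=csv_file,
                        help="A CSV file containing the results for a county")
    parser.add_argument("countyCode", type=str, help="The FIPS county code")
    parser.add_argument("utmZone", type=utm_zone, help="The UTM zone code")
    return parser.parse_args(argv)

# args = parse_args()
# program = myprogram(args.secureFolder, args.inputCsvFile,
#                     args.countyCode, args.utmZone)
```

Deeper content checks (e.g. that the CSV contains specific columns) are better done after `parse_args()`, typically inside the class's `__init__` or a dedicated `validate()` method, since they need to read the file.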
<python><class><oop><pyinstaller><argparse>
2023-07-03 08:46:24
2
619
Andrei Niță
76,602,857
20,771,478
Bulk insert Pandas Dataframe via SQLalchemy into MS SQL database
<p>I have a dataframe with 300,000 rows and 20 columns with a lot of them containing text. In Excel format this is 30 to 40 MB.</p> <p>My target is to write this to the database in below 10min. How can I do this?</p> <p>The question is very similar to <a href="https://stackoverflow.com/questions/31997859/bulk-insert-a-pandas-dataframe-using-sqlalchemy">this one</a>. However most of the answers are directed at Postgres databases and the second answer, which might work with MS SQL, involves defining the whole table, which I don't want to do in order to have reusable code. I also checked other questions in the forum without success.</p> <p>I have the following three requirements:</p> <ul> <li>Use a Pandas Dataframe</li> <li>Use SQLalchemy for the database connection</li> <li>Write to a MS SQL database</li> </ul> <p>From experimenting I found a solution that takes 5 to 6 hours to complete. I provide some code below where you can test my current solution with dummy data. The only thing that needs to be replaced is the url_object.</p> <pre><code>#For database connection to INFOR LN database from sqlalchemy import create_engine from sqlalchemy.engine import URL #To be able to use dataframes for data transformation import pandas as pd #Used for progress bar functionality from tqdm import tqdm #For reading JSON files import json #For random dataframe creation import random import string content = open('config.json') config = json.load(content) db_user = config['user'] db_password = config['password'] url_object = URL.create( &quot;mssql+pyodbc&quot; , username=db_user , password=db_password , host=&quot;Server_Name&quot; , database=&quot;Database&quot; , query={&quot;driver&quot;: &quot;SQL Server Native Client 11.0&quot;} ) #Thank you chat GPT. 
# Set random seed for reproducibility random.seed(42) # Generate random numbers between 0 and 1000000 for 5 columns num_cols = ['num_col1', 'num_col2', 'num_col3', 'num_col4', 'num_col5'] data = { col: [random.randint(0, 1000000) for _ in range(50000)] for col in num_cols } # Generate random texts with less than 50 characters for 15 columns text_cols = ['text_col1', 'text_col2', 'text_col3', 'text_col4', 'text_col5', 'text_col6', 'text_col7', 'text_col8', 'text_col9', 'text_col10', 'text_col11', 'text_col12', 'text_col13', 'text_col14', 'text_col15'] for col in text_cols: data[col] = [''.join(random.choices(string.ascii_letters + string.digits, k=random.randint(1, 50))) for _ in range(50000)] # Create DataFrame df = pd.DataFrame(data) engine = create_engine(url_object, fast_executemany=True) df[&quot;Python_Script_Excecution_Timestamp&quot;] = pd.Timestamp('now') def chunker(seq, size): # from http://stackoverflow.com/a/434328 return (seq[pos:pos + size] for pos in range(0, len(seq), size)) def insert_with_progress(df): chunksize = 50 with tqdm(total=len(df)) as pbar: for i, cdf in enumerate(chunker(df, chunksize)): replace = &quot;replace&quot; if i == 0 else &quot;append&quot; cdf.to_sql(&quot;Testtable&quot; , engine , schema=&quot;dbo&quot; , if_exists=replace , index=False , chunksize = 50 , method='multi' ) pbar.update(chunksize) insert_with_progress(df) </code></pre> <p>In my specific case I can't increase the chunk size because of an error that get's thrown if I do. Explanation is that MS SQL doesn't allow for more than 2100 parameters per insert. Explanation is <a href="https://stackoverflow.com/questions/50689082/to-sql-pyodbc-count-field-incorrect-or-syntax-error">here</a>.</p> <blockquote> <p>Error: ('07002', '[07002] [Microsoft][SQL Server Native Client 11.0]COUNT field incorrect or syntax error (0) (SQLExecDirectW)')</p> </blockquote>
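For what it's worth, with `fast_executemany=True` the usual advice is to drop `method='multi'`: the two work against each other, since `'multi'` builds one giant parameterised INSERT (which is what hits the 2100-parameter cap), while `fast_executemany` already batches on the pyodbc side. That also lets the chunksize be far larger than 50. A hedged sketch; the table name and `url_object` are placeholders from the question:

```python
import pandas as pd
from sqlalchemy import create_engine

def max_chunksize(n_cols: int, param_limit: int = 2100) -> int:
    # pyodbc binds one parameter per cell, so rows-per-batch * columns
    # must stay below SQL Server's 2100-parameter limit, but only when
    # using to_sql(..., method='multi').
    return max(1, (param_limit - 1) // n_cols)

def bulk_insert(df: pd.DataFrame, url_object, table: str = "Testtable") -> None:
    engine = create_engine(url_object, fast_executemany=True)
    # Leave method=None (the default): fast_executemany batches on the
    # driver side, and method='multi' is what triggers the
    # "COUNT field incorrect" 2100-parameter error. chunksize can then
    # be large without hitting that limit.
    df.to_sql(table, engine, schema="dbo", if_exists="replace",
              index=False, chunksize=10_000)
```

If you do stay on `method='multi'`, `max_chunksize(21)` gives 99 rows per batch for the 21-column frame above, which is the most that fits under the parameter cap.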
<python><sql-server><pandas><sqlalchemy>
2023-07-03 08:00:34
1
458
Merlin Nestler
76,602,725
8,087,322
Programmatic (Python) format check in jsonschema
<p>I have a schema where a property should comply to a pattern that only can be checked programmatically:</p> <pre class="lang-yaml prettyprint-override"><code>type: object properties: unit: description: Unit of this column. Must be FITS conform. type: string </code></pre> <p>where &quot;FITS conformity&quot; can be ensured by a small Python snippet (which raises an exception on fail):</p> <pre><code>import astropy.units as u u.Unit(col[&quot;unit&quot;], format=u.format.Fits) </code></pre> <p>It seems that this could be done with a custom &quot;format&quot; attribute of the property:</p> <pre class="lang-yaml prettyprint-override"><code>type: object properties: unit: description: Unit of this column. Must be FITS conform. type: string format: fitsunit </code></pre> <p>and then implement a format checker with the decorator <a href="https://python-jsonschema.readthedocs.io/en/stable/validate/#jsonschema.FormatChecker.checks" rel="nofollow noreferrer">jsonschema.FormatChecker.checks</a>. However, I could not find out how to write this.</p>
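A sketch of how the decorator is typically wired up: create (or reuse) a `FormatChecker` instance, register the check under the name used in the schema, and pass the checker to `validate` (format checking is opt-in). The toy whitelist below stands in for the real astropy call, which would simply replace the body of the check function:

```python
import jsonschema

format_checker = jsonschema.FormatChecker()

@format_checker.checks("fitsunit", raises=ValueError)
def check_fits_unit(value):
    if not isinstance(value, str):
        return True  # non-strings are handled by the "type" keyword
    # Stand-in for the real check; in practice this body would be:
    #   import astropy.units as u
    #   u.Unit(value, format=u.format.Fits)
    if value not in {"m", "s", "kg", "Jy"}:  # toy whitelist for illustration
        raise ValueError(f"not a FITS unit: {value!r}")
    return True

schema = {
    "type": "object",
    "properties": {
        "unit": {
            "description": "Unit of this column. Must be FITS conform.",
            "type": "string",
            "format": "fitsunit",
        },
    },
}

def validate(doc):
    # Without format_checker=... the "format" keyword is ignored entirely.
    jsonschema.validate(doc, schema, format_checker=format_checker)
```

`raises=ValueError` tells jsonschema to treat that exception from the check as a format failure rather than letting it propagate.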
<python><jsonschema><python-jsonschema>
2023-07-03 07:38:30
1
593
olebole
76,602,650
1,833,326
Pyspark Split result does not contain the remaining string
<p>I would like to split a string such as &quot;23220&quot; into an array [2,3,2,20]. I tried</p> <pre><code>from pyspark.sql import functions as f df = spark.createDataFrame([('23220',)], ['s',]) df.select(f.split(str=df.s, pattern='', limit=4).alias('s')).show() </code></pre> <p>However, this returns [2,3,2,2]. Even the <a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.functions.split.html" rel="nofollow noreferrer">documentation</a> says:</p> <blockquote> <p>limit &gt; 0: The resulting array’s length will not be more than limit, and the resulting array’s last entry will contain all input beyond the last matched pattern.</p> </blockquote> <p>I noticed this behaviour did not occur on Databricks Runtime 10.4 LTS (includes Apache Spark 3.2.1, Scala 2.12) but does on 11.3 LTS (includes Apache Spark 3.3.0, Scala 2.12) and newer.</p>
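A possible workaround is to split on a zero-width pattern that does not match at the very end of the string, so the final array entry keeps the remainder instead of a single character. This should yield [2, 3, 2, 20] if Spark follows Java's `split` semantics for zero-width matches, but since empty-pattern splitting is exactly what changed between Spark 3.2 and 3.3, it is worth verifying on your runtime:

```python
from pyspark.sql import SparkSession, functions as f

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([('23220',)], ['s'])

# '(?!$)' matches the empty gap before every character except the last
# one, so the limit=4 cap leaves the remaining "20" in the final slot.
df.select(f.split(str=df.s, pattern='(?!$)', limit=4).alias('s')).show()
```

If the regex behaviour turns out to differ, an explicit fallback is building the array by hand with `substr`/`length`, which does not depend on split semantics at all.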
<python><apache-spark><pyspark><split><databricks>
2023-07-03 07:25:07
0
1,018
Lazloo Xp
76,602,376
12,728,204
Identify rows based on a condition and select one above and one below
<p>I have the below dataframe:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>ID</th> <th>P</th> <th>L</th> <th>Score</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>1</td> <td>0</td> <td>5</td> </tr> <tr> <td>1</td> <td>1</td> <td>1</td> <td></td> </tr> <tr> <td>1</td> <td>1</td> <td>0</td> <td>7</td> </tr> <tr> <td>1</td> <td>2</td> <td>0</td> <td>10</td> </tr> <tr> <td>1</td> <td>2</td> <td>1</td> <td></td> </tr> <tr> <td>1</td> <td>2</td> <td>0</td> <td>8</td> </tr> <tr> <td>1</td> <td>2</td> <td>1</td> <td>5</td> </tr> <tr> <td>1</td> <td>2</td> <td>0</td> <td>7</td> </tr> <tr> <td>1</td> <td>2</td> <td>1</td> <td></td> </tr> <tr> <td>1</td> <td>2</td> <td>1</td> <td></td> </tr> <tr> <td>1</td> <td>2</td> <td>0</td> <td>8</td> </tr> <tr> <td>2</td> <td>1</td> <td>0</td> <td>9</td> </tr> <tr> <td>2</td> <td>1</td> <td>0</td> <td>9</td> </tr> <tr> <td>2</td> <td>1</td> <td>0</td> <td>10</td> </tr> <tr> <td>2</td> <td>1</td> <td>1</td> <td></td> </tr> <tr> <td>2</td> <td>1</td> <td>0</td> <td>7</td> </tr> <tr> <td>2</td> <td>1</td> <td>1</td> <td></td> </tr> </tbody> </table> </div> <p>I would like to select one row with L = 0 above and one row with L = 0 below the rows with L = 1, groupby ID and P. These rows are in red font in the image. If there are multiple rows with L = 1, the same rule applies (that is one row below and one row above). Any suggestion? Thank you.</p> <p><a href="https://i.sstatic.net/U77hl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U77hl.png" alt="enter image description here" /></a></p>
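One approach: within each (`ID`, `P`) group, keep the `L == 0` rows whose immediate neighbour (the row directly above or below in that group) has `L == 1`, using group-wise `shift`. A sketch, assuming the frame is already ordered as displayed:

```python
import pandas as pd

def pick_neighbors(df: pd.DataFrame) -> pd.DataFrame:
    """Keep L == 0 rows that sit directly above or below an L == 1 row,
    looking only within each (ID, P) group."""
    g = df.groupby(['ID', 'P'])['L']
    prev_is_one = g.shift(1).eq(1)   # row above (within the group) has L == 1
    next_is_one = g.shift(-1).eq(1)  # row below (within the group) has L == 1
    return df[df['L'].eq(0) & (prev_is_one | next_is_one)]
```

Because `shift` is applied per group, a row at a group boundary never picks up a neighbour from a different `ID`/`P`, and runs of several `L == 1` rows are handled the same way (the `L == 0` rows flanking the run are kept).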
<python><pandas>
2023-07-03 06:39:42
2
467
Jason
76,602,209
19,198,552
How can I redirect STDOUT of a subprocess called by a tkinter app into a tkinter text widget?
<p>I want to run a compiler from my tkinter application. The STDOUT of the compiler shall be copied into a text widget of my tkinter application: whenever the compiler writes a message to STDOUT, it should immediately become visible in the text widget. The problem is that I always get the STDOUT messages in the text widget after the compiler has finished, not while the compiler is running.</p> <p>As finding a solution was difficult for me, I decided to post a question here, as others may have the same problem.</p>
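The usual pattern is: read the pipe in a worker thread (that's where the blocking read happens), push lines onto a `queue.Queue`, and have the tkinter mainloop drain the queue with `after()` polling, since widgets must only be touched from the GUI thread. A sketch; the compiler command at the bottom is a placeholder:

```python
import queue
import subprocess
import threading

def pump_output(proc, q):
    """Worker thread: block on the pipe and hand each line to the GUI thread."""
    for line in proc.stdout:
        q.put(line)
    q.put(None)  # sentinel: the compiler has finished

def start_compiler(cmd, q):
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True, bufsize=1)
    threading.Thread(target=pump_output, args=(proc, q), daemon=True).start()
    return proc

def build_gui(cmd):
    import tkinter as tk  # imported here so the reader functions stay GUI-free

    root = tk.Tk()
    text = tk.Text(root)
    text.pack(fill="both", expand=True)
    q = queue.Queue()
    start_compiler(cmd, q)

    def poll():
        try:
            while True:
                line = q.get_nowait()
                if line is None:
                    return            # process done: stop polling
                text.insert("end", line)
                text.see("end")
        except queue.Empty:
            pass
        root.after(50, poll)          # re-check the queue every 50 ms

    poll()
    root.mainloop()

# build_gui(["gcc", "main.c"])  # hypothetical compiler command; uncomment to run
```

Note that some compilers block-buffer their output when writing to a pipe instead of a terminal; if lines still arrive late, that buffering happens in the child process, not in tkinter.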
<python><tkinter><asynchronous><subprocess>
2023-07-03 06:09:35
1
729
Matthias Schweikart
76,601,980
2,000,548
Got "ERROR:root:Failed to get healthz info attempt 1 of 5." when deploy pipeline in the Kubeflow
<p>I am trying to follow the <a href="https://www.kubeflow.org/docs/components/pipelines/v2/installation/quickstart/" rel="nofollow noreferrer">Kubeflow v2 quickstart</a>.</p> <p>First, I deployed Kubeflow to a local Kubernetes cluster by</p> <pre class="lang-bash prettyprint-override"><code>export PIPELINE_VERSION=&quot;2.0.0-alpha.4&quot; kubectl apply -k &quot;github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=$PIPELINE_VERSION&quot; kubectl wait --for condition=established --timeout=60s crd/applications.app.k8s.io kubectl apply -k &quot;github.com/kubeflow/pipelines/manifests/kustomize/env/dev?ref=$PIPELINE_VERSION&quot; </code></pre> <p>I port forwarded by</p> <pre class="lang-bash prettyprint-override"><code>kubectl port-forward service/ml-pipeline-ui --namespace=kubeflow 38620:80 </code></pre> <p>I can see the UI at http://localhost:38620</p> <p><a href="https://i.sstatic.net/5uf0b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5uf0b.png" alt="enter image description here" /></a></p> <p>Next, I installed <code>kfp</code> 2.0.1. And here is my code:</p> <pre class="lang-py prettyprint-override"><code>from kfp import client, dsl @dsl.component def addition_component(num1: int, num2: int) -&gt; int: return num1 + num2 @dsl.pipeline(name=&quot;addition-pipeline&quot;) def my_pipeline(a: int, b: int, c: int = 10): add_task_1 = addition_component(num1=a, num2=b) add_task_2 = addition_component(num1=add_task_1.output, num2=c) endpoint = &quot;http://localhost:38620&quot; # &lt;- Not entirely sure if it is correct as it is missing in the quickstart document. 
kfp_client = client.Client(host=endpoint) run = kfp_client.create_run_from_pipeline_func( my_pipeline, arguments={&quot;a&quot;: 1, &quot;b&quot;: 2}, ) url = f&quot;{endpoint}/#/runs/details/{run.run_id}&quot; print(url) </code></pre> <p>However, I got error</p> <pre><code>python src/main.py /Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/hm-kubeflow-calculate-PriecqfA-py3.11/lib/python3.11/site-packages/kfp/client/client.py:158: FutureWarning: This client only works with Kubeflow Pipeline v2.0.0-beta.2 and later versions. warnings.warn( ERROR:root:Failed to get healthz info attempt 1 of 5. Traceback (most recent call last): File &quot;/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/hm-kubeflow-calculate-PriecqfA-py3.11/lib/python3.11/site-packages/kfp/client/client.py&quot;, line 435, in get_kfp_healthz return self._healthz_api.get_healthz() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/hm-kubeflow-calculate-PriecqfA-py3.11/lib/python3.11/site-packages/kfp_server_api/api/healthz_service_api.py&quot;, line 63, in get_healthz return self.get_healthz_with_http_info(**kwargs) # noqa: E501 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/hm-kubeflow-calculate-PriecqfA-py3.11/lib/python3.11/site-packages/kfp_server_api/api/healthz_service_api.py&quot;, line 134, in get_healthz_with_http_info return self.api_client.call_api( ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/hm-kubeflow-calculate-PriecqfA-py3.11/lib/python3.11/site-packages/kfp_server_api/api_client.py&quot;, line 364, in call_api return self.__call_api(resource_path, method, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/hm-kubeflow-calculate-PriecqfA-py3.11/lib/python3.11/site-packages/kfp_server_api/api_client.py&quot;, line 188, in __call_api raise e File 
&quot;/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/hm-kubeflow-calculate-PriecqfA-py3.11/lib/python3.11/site-packages/kfp_server_api/api_client.py&quot;, line 181, in __call_api response_data = self.request( ^^^^^^^^^^^^^ File &quot;/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/hm-kubeflow-calculate-PriecqfA-py3.11/lib/python3.11/site-packages/kfp_server_api/api_client.py&quot;, line 389, in request return self.rest_client.GET(url, ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/hm-kubeflow-calculate-PriecqfA-py3.11/lib/python3.11/site-packages/kfp_server_api/rest.py&quot;, line 230, in GET return self.request(&quot;GET&quot;, url, ^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/hm-kubeflow-calculate-PriecqfA-py3.11/lib/python3.11/site-packages/kfp_server_api/rest.py&quot;, line 224, in request raise ApiException(http_resp=r) kfp_server_api.exceptions.ApiException: (404) Reason: Not Found HTTP response headers: HTTPHeaderDict({'X-Powered-By': 'Express', 'Content-Security-Policy': &quot;default-src 'none'&quot;, 'X-Content-Type-Options': 'nosniff', 'Content-Type': 'text/html; charset=utf-8', 'Content-Length': '159', 'Date': 'Mon, 03 Jul 2023 03:59:33 GMT', 'Connection': 'keep-alive', 'Keep-Alive': 'timeout=5'}) HTTP response body: &lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;utf-8&quot;&gt; &lt;title&gt;Error&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;pre&gt;Cannot GET /apis/v2beta1/healthz&lt;/pre&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>Inside for <code>endpoint</code>, I am using http://localhost:38620, However, I am not entirely sure if I am using correct one as it is missing in <a href="https://www.kubeflow.org/docs/components/pipelines/v2/installation/quickstart/" rel="nofollow noreferrer">quickstart document</a>.</p> <p>I tried to access</p> <ul> <li>http://localhost:38620/apis/v2beta1/healthz</li> 
<li>http://localhost:38620/#/apis/v2beta1/healthz</li> </ul> <p>and they do not exist.</p> <p>I found a similar issue at <a href="https://github.com/kubeflow/kubeflow/issues/5989" rel="nofollow noreferrer">https://github.com/kubeflow/kubeflow/issues/5989</a> but there is not any helpful info inside.</p> <p>Any guide would be appreciate!</p>
<python><kubernetes><kubeflow>
2023-07-03 05:15:02
1
50,638
Hongbo Miao
76,601,870
2,251,058
Certain sections in slack-api blocks don't work
<p>Trying this example payload in block-kit builder doesn't work when using it in python. Reason being it doesn't recognise some values in type <code>tags</code>, possibly emoji=True. But sending the payload formatting it to json string also doesn't work. I have to remove certain sections in the blocks which create the problem.</p> <p>As per this example Slack API takes blocks as a Dictionary<br /> <a href="https://github.com/slackapi/python-slack-sdk/blob/c9dc6aa0907a72c16cf36aa15e7e80031a9fdce2/integration_tests/samples/basic_usage/sending_a_message.py" rel="nofollow noreferrer">https://github.com/slackapi/python-slack-sdk/blob/c9dc6aa0907a72c16cf36aa15e7e80031a9fdce2/integration_tests/samples/basic_usage/sending_a_message.py</a></p> <p>The block-kit which gives an error when used in python code</p> <pre><code>https://app.slack.com/block-kit-builder/T09D77D4P#%7B%22blocks%22:%5B%7B%22type%22:%22section%22,%22text%22:%7B%22type%22:%22mrkdwn%22,%22text%22:%22Hello,%20Assistant%20to%20the%20Regional%20Manager%20Dwight!%20*Michael%20Scott*%20wants%20to%20know%20where%20you'd%20like%20to%20take%20the%20Paper%20Company%20investors%20to%20dinner%20tonight.%5Cn%5Cn%20*Please%20select%20a%20restaurant:*%22%7D%7D,%7B%22type%22:%22divider%22%7D,%7B%22type%22:%22section%22,%22text%22:%7B%22type%22:%22mrkdwn%22,%22text%22:%22*Farmhouse%20Thai%20Cuisine*%5Cn:star::star::star::star:%201528%20reviews%5Cn%20They%20do%20have%20some%20vegan%20options,%20like%20the%20roti%20and%20curry,%20plus%20they%20have%20a%20ton%20of%20salad%20stuff%20and%20noodles%20can%20be%20ordered%20without%20meat!!%20They%20have%20something%20for%20everyone%20here%22%7D,%22accessory%22:%7B%22type%22:%22image%22,%22image_url%22:%22https://s3-media3.fl.yelpcdn.com/bphoto/c7ed05m9lC2EmA3Aruue7A/o.jpg%22,%22alt_text%22:%22alt%20text%20for%20image%22%7D%7D,%7B%22type%22:%22section%22,%22text%22:%7B%22type%22:%22mrkdwn%22,%22text%22:%22*Kin%20Khao*%5Cn:star::star::star::star:%201638%20reviews%5Cn%20The%20sticky%
20rice%20also%20goes%20wonderfully%20with%20the%20caramelized%20pork%20belly,%20which%20is%20absolutely%20melt-in-your-mouth%20and%20so%20soft.%22%7D,%22accessory%22:%7B%22type%22:%22image%22,%22image_url%22:%22https://s3-media2.fl.yelpcdn.com/bphoto/korel-1YjNtFtJlMTaC26A/o.jpg%22,%22alt_text%22:%22alt%20text%20for%20image%22%7D%7D,%7B%22type%22:%22section%22,%22text%22:%7B%22type%22:%22mrkdwn%22,%22text%22:%22*Ler%20Ros*%5Cn:star::star::star::star:%202082%20reviews%5Cn%20I%20would%20really%20recommend%20the%20%20Yum%20Koh%20Moo%20Yang%20-%20Spicy%20lime%20dressing%20and%20roasted%20quick%20marinated%20pork%20shoulder,%20basil%20leaves,%20chili%20&amp;%20rice%20powder.%22%7D,%22accessory%22:%7B%22type%22:%22image%22,%22image_url%22:%22https://s3-media2.fl.yelpcdn.com/bphoto/DawwNigKJ2ckPeDeDM7jAg/o.jpg%22,%22alt_text%22:%22alt%20text%20for%20image%22%7D%7D,%7B%22type%22:%22divider%22%7D,%7B%22type%22:%22actions%22,%22elements%22:%5B%7B%22type%22:%22button%22,%22text%22:%7B%22type%22:%22plain_text%22,%22text%22:%22Farmhouse%22,%22emoji%22:true%7D,%22value%22:%22click_me_123%22%7D,%7B%22type%22:%22button%22,%22text%22:%7B%22type%22:%22plain_text%22,%22text%22:%22Kin%20Khao%22,%22emoji%22:true%7D,%22value%22:%22click_me_123%22,%22url%22:%22https://google.com%22%7D,%7B%22type%22:%22button%22,%22text%22:%7B%22type%22:%22plain_text%22,%22text%22:%22Ler%20Ros%22,%22emoji%22:true%7D,%22value%22:%22click_me_123%22,%22url%22:%22https://google.com%22%7D%5D%7D%5D%7D </code></pre> <p>Payload creating an issue</p> <pre><code>block = [ { &quot;type&quot;: &quot;section&quot;, &quot;text&quot;: { &quot;type&quot;: &quot;mrkdwn&quot;, &quot;text&quot;: &quot;Hello, &lt;lob&gt; Team, Varanus wants you to know the Seo Anomalies Found.\n\n *Please review the Notifications below:*&quot; } }, { &quot;type&quot;: &quot;divider&quot; }, { &quot;type&quot;: &quot;section&quot;, &quot;text&quot;: { &quot;type&quot;: &quot;mrkdwn&quot;, &quot;text&quot;: &quot;*Farmhouse Thai 
Cuisine*\n:star::star::star::star: 1528 reviews\n They do have some vegan options, like the roti and curry, plus they have a ton of salad stuff and noodles can be ordered without meat!! They have something for everyone here&quot; }, &quot;accessory&quot;: { &quot;type&quot;: &quot;image&quot;, &quot;image_url&quot;: &quot;https://s3-media3.fl.yelpcdn.com/bphoto/c7ed05m9lC2EmA3Aruue7A/o.jpg&quot;, &quot;alt_text&quot;: &quot;alt text for image&quot; } }, { &quot;type&quot;: &quot;section&quot;, &quot;text&quot;: { &quot;type&quot;: &quot;mrkdwn&quot;, &quot;text&quot;: &quot;*Kin Khao*\n:star::star::star::star: 1638 reviews\n The sticky rice also goes wonderfully with the caramelized pork belly, which is absolutely melt-in-your-mouth and so soft.&quot; }, &quot;accessory&quot;: { &quot;type&quot;: &quot;image&quot;, &quot;image_url&quot;: &quot;https://s3-media2.fl.yelpcdn.com/bphoto/korel-1YjNtFtJlMTaC26A/o.jpg&quot;, &quot;alt_text&quot;: &quot;alt text for image&quot; } }, { &quot;type&quot;: &quot;section&quot;, &quot;text&quot;: { &quot;type&quot;: &quot;mrkdwn&quot;, &quot;text&quot;: &quot;*Ler Ros*\n:star::star::star::star: 2082 reviews\n I would really recommend the Yum Koh Moo Yang - Spicy lime dressing and roasted quick marinated pork shoulder, basil leaves, chili &amp; rice powder.&quot; }, &quot;accessory&quot;: { &quot;type&quot;: &quot;image&quot;, &quot;image_url&quot;: &quot;https://s3-media2.fl.yelpcdn.com/bphoto/DawwNigKJ2ckPeDeDM7jAg/o.jpg&quot;, &quot;alt_text&quot;: &quot;alt text for image&quot; } }, { &quot;type&quot;: &quot;divider&quot; }, { &quot;type&quot;: &quot;tags&quot;, &quot;elements&quot;: [ { &quot;type&quot;: &quot;button&quot;, &quot;text&quot;: { &quot;type&quot;: &quot;plain_text&quot;, &quot;text&quot;: &quot;Farmhouse&quot; } }, { &quot;type&quot;: &quot;button&quot;, &quot;text&quot;: { &quot;type&quot;: &quot;plain_text&quot;, &quot;text&quot;: &quot;Kin Khao&quot; } }, { &quot;type&quot;: &quot;button&quot;, 
&quot;text&quot;: { &quot;type&quot;: &quot;plain_text&quot;, &quot;text&quot;: &quot;Ler Ros&quot; } } ] } ] response = client.chat_postMessage(blocks=block, text=&quot;A New Notification&quot;, channel=&quot;Channel-ID&quot;) </code></pre> <p><strong>Without the tags section i.e (&quot;type&quot; : &quot;tags&quot;), it works</strong></p> <pre><code>block = [ { &quot;type&quot;: &quot;section&quot;, &quot;text&quot;: { &quot;type&quot;: &quot;mrkdwn&quot;, &quot;text&quot;: &quot;Hello, &lt;lob&gt; Team, Varanus wants you to know the Seo Anomalies Found.\n\n *Please review the Notifications below:*&quot; } }, { &quot;type&quot;: &quot;divider&quot; }, { &quot;type&quot;: &quot;section&quot;, &quot;text&quot;: { &quot;type&quot;: &quot;mrkdwn&quot;, &quot;text&quot;: &quot;*Farmhouse Thai Cuisine*\n:star::star::star::star: 1528 reviews\n They do have some vegan options, like the roti and curry, plus they have a ton of salad stuff and noodles can be ordered without meat!! They have something for everyone here&quot; }, &quot;accessory&quot;: { &quot;type&quot;: &quot;image&quot;, &quot;image_url&quot;: &quot;https://s3-media3.fl.yelpcdn.com/bphoto/c7ed05m9lC2EmA3Aruue7A/o.jpg&quot;, &quot;alt_text&quot;: &quot;alt text for image&quot; } }, { &quot;type&quot;: &quot;section&quot;, &quot;text&quot;: { &quot;type&quot;: &quot;mrkdwn&quot;, &quot;text&quot;: &quot;*Kin Khao*\n:star::star::star::star: 1638 reviews\n The sticky rice also goes wonderfully with the caramelized pork belly, which is absolutely melt-in-your-mouth and so soft.&quot; }, &quot;accessory&quot;: { &quot;type&quot;: &quot;image&quot;, &quot;image_url&quot;: &quot;https://s3-media2.fl.yelpcdn.com/bphoto/korel-1YjNtFtJlMTaC26A/o.jpg&quot;, &quot;alt_text&quot;: &quot;alt text for image&quot; } }, { &quot;type&quot;: &quot;section&quot;, &quot;text&quot;: { &quot;type&quot;: &quot;mrkdwn&quot;, &quot;text&quot;: &quot;*Ler Ros*\n:star::star::star::star: 2082 reviews\n I would really recommend the Yum 
Koh Moo Yang - Spicy lime dressing and roasted quick marinated pork shoulder, basil leaves, chili &amp; rice powder.&quot; }, &quot;accessory&quot;: { &quot;type&quot;: &quot;image&quot;, &quot;image_url&quot;: &quot;https://s3-media2.fl.yelpcdn.com/bphoto/DawwNigKJ2ckPeDeDM7jAg/o.jpg&quot;, &quot;alt_text&quot;: &quot;alt text for image&quot; } }, { &quot;type&quot;: &quot;divider&quot; } ] response = client.chat_postMessage(blocks=block, text=&quot;A New Notification&quot;, channel=&quot;Channel-ID&quot;) </code></pre> <p>If I use json.dumps(block) or str(block), it gives and error <code>invalid_block</code></p> <p>Using the suggestions to use json.dumps(blocks) doesn't work. <a href="https://stackoverflow.com/questions/60344831/slack-api-invalid-block/76601814#76601814">Slack API invalid_block</a></p> <p>Any idea what is the right way to send blocks in python?</p> <p>Any help is appreciated.</p>
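For what it's worth, the Block Kit builder payload in the link uses `"type": "actions"` for the button row; `"tags"` is not a valid block type, which is why Slack rejects the whole payload. Renaming that one section should be enough, and the blocks stay a plain list of dicts (no `json.dumps` needed, the SDK serialises `blocks` itself). A sketch of the corrected section:

```python
# "actions" is the Block Kit type for a row of interactive elements;
# "tags" is not a recognised block type and makes the API reject the
# payload with invalid_blocks.
buttons_block = {
    "type": "actions",
    "elements": [
        {
            "type": "button",
            "text": {"type": "plain_text", "text": name},
            "value": "click_me_123",
        }
        for name in ("Farmhouse", "Kin Khao", "Ler Ros")
    ],
}

# blocks = [ ...the section/divider blocks from the question..., buttons_block]
# client.chat_postMessage(channel="Channel-ID",
#                         text="A New Notification", blocks=blocks)
```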
<python><python-3.x><slack><slack-api>
2023-07-03 04:37:56
1
3,287
Akshay Hazari
76,601,841
292,291
How to avoid and handle Neo.TransientError.Transaction.DeadlockDetected?
<p>When doing load testing, I am seeing Deadlock errors happening occasionally. I added retries and it's been working well. However, how do I know how to improve? The errors are quite vague so I cannot tell which query is causing this.</p> <blockquote> <p>{code: Neo.TransientError.Transaction.DeadlockDetected} {message: ForsetiClient[transactionId=75637, clientId=2431] can't acquire ExclusiveLock{owner=ForsetiClient[transactionId=75634, clientId=2450]} on NODE_RELATIONSHIP_GROUP_DELETE(116) because holders of that lock are waiting for ForsetiClient[transactionId=75637, clientId=2431].</p> </blockquote> <p>The weird thing is I don't think I have code that does a relationship delete. I am actually doing MERGE to add nodes and relationships along the lines of:</p> <pre><code>MERGE (o:Org {id: $orgId}) SET o.name=$name MERGE (a:Item {id: $itemId}) SET a.orgId=$orgId, a.name=$itemName MERGE (s:SubItem {id: $subItemId}) SET s.name=$subItemName MERGE (o)-[:HAS_ITEM]-&gt;(a) MERGE (a)-[:HAS_SUB_ITEM]-&gt;(s) </code></pre> <p>UPDATE:</p> <p>Managed to identify the part causing the deadlock. I am modelling tags as nodes, and I suspect that when many items use the same tag and I try to merge the relationship, it causes a deadlock:</p> <pre><code>MATCH (t:Tag {name: &quot;a&quot;}) MATCH (i:Item {name: &quot;x&quot;}) MERGE (i)-[:TAGGED]-&gt;(t) </code></pre> <p>I think this is the part causing deadlocks often; how might I avoid this?</p>
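Two common mitigations: keep retrying transient errors (deadlocks are considered a normal outcome under concurrent writes to hot nodes such as shared tags), and reduce contention by funnelling tag-relationship writes through fewer concurrent transactions or by batching all items for one tag in a single transaction. A retry sketch in plain Python; in real code you would catch `neo4j.exceptions.TransientError` instead of inspecting the message, and the session/query names in the usage comment are hypothetical:

```python
import random
import time

DEADLOCK = "Neo.TransientError.Transaction.DeadlockDetected"

def run_with_retry(work, max_attempts=5, base_delay=0.05):
    """Call work() and retry with jittered exponential backoff on deadlock."""
    for attempt in range(1, max_attempts + 1):
        try:
            return work()
        except Exception as exc:  # real code: except neo4j.exceptions.TransientError
            if DEADLOCK not in str(exc) or attempt == max_attempts:
                raise
            # Jitter so the two deadlocked writers don't collide again in lockstep.
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))

# Usage (hypothetical names):
# run_with_retry(lambda: session.execute_write(merge_item_tags, item, tags))
```

Note that the official Neo4j driver's managed transaction functions (`execute_write`) already retry transient errors for you, so an explicit wrapper like this is mainly useful around auto-commit queries.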
<python><neo4j><deadlock>
2023-07-03 04:28:06
2
89,109
Jiew Meng
76,601,801
19,157,137
Copying directories with .toml files to Docker container
<p>I need to copy all directories that contain <code>.toml</code> files from the host machine into a Docker container while preserving the original directory structure. However, the <code>COPY **/*.toml ./ </code>instruction in my Dockerfile only copies the individual <code>.toml</code> files, not the directories.</p> <p>Here's a simplified version of my Dockerfile:</p> <pre><code>FROM python:3.8 # Install required packages RUN pip install toml # Copy directories with .toml files COPY **/*.toml ./ </code></pre> <p>How can I modify the Dockerfile to copy the directories containing <code>.toml</code> files while preserving the directory structure? I appreciate any help or alternative approaches to achieve this.</p>
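`COPY` flattens glob matches into the destination directory, which is why the structure is lost. A common workaround is to copy the whole build context and let a `.dockerignore` whitelist keep only the `.toml` files; a sketch, assuming BuildKit, whose `.dockerignore` handling supports the `!**/*.toml` re-include pattern:

```dockerfile
# .dockerignore (placed next to the Dockerfile):
#   *
#   !**/*.toml

FROM python:3.8

RUN pip install toml

# With the .dockerignore above, only the .toml files exist in the
# build context, so this COPY preserves their directory layout:
COPY . ./
```

An alternative, if the `.dockerignore` approach is not an option, is to stage the files on the host first (e.g. `find . -name '*.toml' -exec cp --parents {} staging/ \;`) and `COPY staging/ ./`.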
<python><docker><file><data-structures><dockerfile>
2023-07-03 04:12:17
1
363
Bosser445
76,601,700
5,287,011
SentenceTransformer ('distilbert-base-nli-mean-tokens') is very slow
<p>I am trying to learn the use of BERT. Here is the code:</p> <pre><code>from sklearn.datasets import fetch_20newsgroups data = fetch_20newsgroups(subset='all')['data'] from sentence_transformers import SentenceTransformer model = SentenceTransformer('distilbert-base-nli-mean-tokens') embeddings = model.encode(data, show_progress_bar=True) </code></pre> <p>The problem is that it is incredibly slow: 24-48 hours to complete.</p> <p>I have a macOS M1 Pro notebook. What can be done to speed up the process?</p> <p>Thank you</p>
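The usual first step on an M1 is to check that the model is not running on CPU: recent PyTorch builds expose Apple's GPU as the `"mps"` device, and a larger `batch_size` also helps substantially. A hedged sketch; it requires a PyTorch build with MPS support and falls back to CPU otherwise:

```python
import torch
from sentence_transformers import SentenceTransformer
from sklearn.datasets import fetch_20newsgroups

data = fetch_20newsgroups(subset='all')['data']

# Use Apple's GPU backend when the installed torch build supports it.
device = 'mps' if torch.backends.mps.is_available() else 'cpu'
model = SentenceTransformer('distilbert-base-nli-mean-tokens', device=device)

# Larger batches amortise per-batch overhead; tune to available memory.
embeddings = model.encode(data, batch_size=128, show_progress_bar=True)
```

It is also worth trying a smaller, faster model such as `all-MiniLM-L6-v2` if the specific checkpoint is not a hard requirement.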
<python><performance><sentence-transformers><distilbert>
2023-07-03 03:32:05
0
3,209
Toly
76,601,658
2,863,964
How to run a Google Earth Engine script locally
<p>I am new to Google Earth Engine. I have been using the <a href="https://earthengine.google.com/platform/" rel="nofollow noreferrer">Google Earth Engine code editor</a> to create and run my scripts. However, this approach doesn't scale.</p> <p>I have a script that gets the precipitation data for a region and exports it as a .csv to Google Drive. I want to know if there is a way to run the script locally rather than on the Google Earth Engine code editor. Instead of storing the .csv output in Google Drive, I can even store it in my local file system if that simplifies things.</p> <pre><code>var grids = ee.FeatureCollection(&quot;users/ramaraja/grids&quot;); var gridCode = 13885; var grid_13885 = grids.filter(ee.Filter.eq('GRIDCODE',gridCode)); var chirps2021_1 = ee.ImageCollection('UCSB-CHG/CHIRPS/PENTAD') .filterDate('2021-01-01', '2021-02-28') .filter(ee.Filter.calendarRange(1,2,'month')) .filterBounds(grid_13885); var chirps2021_1_sum = ee.Image(chirps2021_1.sum()) var chirps2021_1_sum_13885 = chirps2021_1_sum.clip(grid_13885); var precipitationVis = { min: 0.0, max: 112.0, palette: ['001137', '0aab1e', 'e7eb05', 'ff4a2d', 'e90000'], }; Map.addLayer(chirps2021_1_sum_13885, precipitationVis, &quot;Sum January &amp; February 2021 Precipitation&quot;, true, 1.0); Map.centerObject(grid_13885, 0); var pixels = chirps2021_1_sum_13885.multiply(10).toInt().reduceToVectors({ reducer:ee.Reducer.countEvery(), geometry: grid_13885, scale: 5566, geometryType: 'centroid', labelProperty: 'precipitation', }); var proj = chirps2021_1_sum_13885.select([0]).projection() var latlon = ee.Image.pixelLonLat() chirps2021_1_sum_13885 = chirps2021_1_sum_13885.addBands(latlon.select('longitude','latitude')) pixels=chirps2021_1_sum_13885.select('longitude').reduceRegions(pixels, ee.Reducer.first().setOutputs(['long']), 25) pixels=chirps2021_1_sum_13885.select('latitude').reduceRegions(pixels, ee.Reducer.first().setOutputs(['lat']), 25); 
pixels=chirps2021_1_sum_13885.select('precipitation').reduceRegions(pixels, ee.Reducer.first().setOutputs(['precipitation']), 25); Map.addLayer(pixels,{}, 'grids') Export.table.toDrive({ collection: pixels, description: 'grid_13885', selectors: ['system:index','precipitation','lat','long',], fileFormat: 'CSV' }); </code></pre> <p>Here's a visualization of the output,</p> <p><a href="https://i.sstatic.net/RsQqIm.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RsQqIm.jpg" alt="enter image description here" /></a></p> <p>The asset, <code>users/ramaraja/grids</code> is a shapefile I uploaded to Google Earth Engine.</p>
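The same pipeline can run outside the Code Editor with the Python client library (`pip install earthengine-api`), which mirrors the JavaScript API almost one-to-one. Computation still happens on Google's servers and `Export.table.toDrive` still writes to Drive, but smaller results can be pulled straight into the local process with `getInfo()` and written to the local file system. A shortened sketch of the script above (omitting the map layers and the per-pixel lat/lon joins):

```python
import ee

ee.Authenticate()  # one-time browser login; afterwards Initialize() is enough
ee.Initialize()

grids = ee.FeatureCollection('users/ramaraja/grids')
grid = grids.filter(ee.Filter.eq('GRIDCODE', 13885))

chirps = (ee.ImageCollection('UCSB-CHG/CHIRPS/PENTAD')
          .filterDate('2021-01-01', '2021-02-28')
          .filterBounds(grid))
total = ee.Image(chirps.sum()).clip(grid)

pixels = total.multiply(10).toInt().reduceToVectors(
    reducer=ee.Reducer.countEvery(), geometry=grid, scale=5566,
    geometryType='centroid', labelProperty='precipitation')

task = ee.batch.Export.table.toDrive(
    collection=pixels, description='grid_13885', fileFormat='CSV')
task.start()  # runs server-side; poll task.status() until it completes

# For small collections, skip the Drive export and fetch locally instead:
# rows = pixels.getInfo()['features']
```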
<javascript><python><gis><google-earth><google-earth-engine>
2023-07-03 03:16:15
0
2,642
Ramaraja
76,601,513
9,949,963
Attempting to parse a line, character by character using python
<p>I have a line that can look different depending on the input. My current method that is not working, is to loop over it using <code>range()</code> so I can get the current position.<br /> The line consists of the word &quot;LR&quot; then a &quot;left&quot; string and a &quot;right&quot; string separated by a space. The problem is, you can not split it at all of the spaces because sometimes the left and/or right string has a space in it itself, causing the actual left or right string to be split more than once. 2 example inputs that demonstrate this are:</p> <pre><code>LR &quot;redirect\&quot;:\&quot;\\&quot; &quot;\&quot;&quot; -&gt; </code></pre> <p>This one you would not have a problem separating using a space.</p> <pre><code>LR &quot;name=\&quot;uuid\&quot; value=\&quot;&quot; &quot;\&quot;&quot; </code></pre> <p>This one fails on regex.</p> <pre><code>LR &quot;&lt;span class=\&quot;pointsNormal\&quot;&gt;&quot; &quot;&lt;&quot; -&gt; </code></pre> <p>This one as you can see has a space in the left side of the string after 'span'.</p> <pre><code> def parseLR(self, line) -&gt; None: line = line.split(&quot;LR &quot;)[1].split(&quot; -&gt;&quot;)[0] left = &quot;&quot; seen = 0 encountered = False for x in range(len(line)): char = line[x] if encountered and seen % 2 == 0: break if char == '&quot;' and line[x - 1] != '\\': seen += 1 elif char == &quot; &quot;: encountered = True left += char print(left) </code></pre> <p>This is my current approach. I go character by character, on each character check, I check if it is a &quot;, if so I increment the seen counter, if it is not, I check if the char is a space, if it is, I set <code>encountered</code> to True. Then regardless of that, I check if seen is even meaning there is an equal number of &quot; in the string, and if there has been a space encountered. If so that is the end of the LEFT string. If you run it, you will see the problem that occurs. 
How can I properly parse the left string and right string from the lines?</p>
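A backslash-aware regular expression can replace the manual character counter: match each quoted token as a run of either an escaped pair (backslash plus any character) or a plain character that is neither `"` nor `\`. A sketch — the function name `parse_lr` is my own, and `str.removeprefix`/`str.removesuffix` need Python ≥ 3.9:

```python
import re

# One quoted token: '"', then any mix of escaped pairs (backslash plus the
# following character) or plain characters that are neither '"' nor '\',
# then the closing '"'.
TOKEN = re.compile(r'"((?:\\.|[^"\\])*)"')

def parse_lr(line):
    # Strip the leading 'LR ' and the optional trailing ' ->'.
    body = line.removeprefix('LR ').removesuffix(' ->').strip()
    left, right = TOKEN.findall(body)[:2]
    return left, right
```

Because `\\.` consumes a backslash *together with* the character after it, an escaped `\"` inside a token can never be mistaken for the closing quote — so a space inside the left string (as in the `span` example above) no longer splits it.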
<python><python-3.x><parsing>
2023-07-03 02:19:49
1
381
Noah
76,601,427
1,144,251
Json.loads() error "Invalid \escape" during load
<p>I'm trying to parse a webpage in python but it keeps giving me an error during json parsing. The web address is <a href="https://www.zacks.com/stock/research/ADIL/earnings-calendar" rel="nofollow noreferrer">https://www.zacks.com/stock/research/ADIL/earnings-calendar</a>. Browser seems to load without issue but when I try to parse the website in python I get: &quot;Invalid \escape: line 12 column 148 (char 3155)&quot;</p> <p>The JSON line it dies on is: &lt;span title=\&quot;Theodore R O\'neill\&quot; class=\&quot;hotspot\&quot;&gt;Theodore R O'ne..</p> <p>I do see an invalid escape on the single quote &quot;\'&quot;. But why does browser not show any errors and how do I avoid this when parsing the JSON from python?</p> <p>Code snippet:</p> <pre><code>response = s.get(url, headers=requestHeader) # Regex to find the earnings table values (?&lt;=document\.obj_data = )[^{]*{\s+([^}]+)\s+} regexMatch = re.search(&quot;(?&lt;=document\.obj_data = )[^{]*{\s+([^}]+)\s+}&quot;,response.content.decode('utf-8')) fullData = json.loads(regexMatch[0]) &lt;&lt;&lt; Dies here. </code></pre> <p>Thanks!</p>
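The page embeds a JavaScript object literal, not strict JSON: `\'` is a legal escape inside a JS string but not in JSON, so the browser's JS engine accepts it while `json.loads` rejects it. One hedged workaround is to normalise the JS-only escape before parsing — a sketch (a tolerant third-party parser such as `json5` would be another option):

```python
import json
import re

def loads_js_object(text):
    # JSON has no \' escape; inside a JS string literal, \' and ' are
    # equivalent, so dropping the backslash preserves the value.
    cleaned = re.sub(r"\\'", "'", text)
    return json.loads(cleaned)
```

One caveat: the substitution is blind to context, so a literal escaped backslash followed by a quote would be mangled — usually acceptable for scraped snippets, but worth keeping in mind.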
<python><json><session><response>
2023-07-03 01:41:23
1
357
user1144251
76,601,266
289,784
Disable rich help panel in Typer
<p>I'm trying to use <code>typer</code> to build a CLI application. I'm also using <code>rich</code> to do some formatting. However having rich installed leads to typer using <code>rich_help_panel</code>. I would like to disable that. I would prefer the normal formatting of the help string. How can I achieve that?</p> <p>What I have tried so far:</p> <pre class="lang-py prettyprint-override"><code>import typer from typing_extensions import Annotated cli = typer.Typer(rich_help_panel=None) @cli.command() def multiply(x: int, y: int, exp: bool = False): &quot;&quot;&quot; Multiply two numbers. &quot;&quot;&quot; if exp: return print(x**y) return print(x * y) @cli.command() def sum(x: int, y: int): &quot;&quot;&quot; Sum two numbers. &quot;&quot;&quot; return print(x + y) @cli.callback() def main(): &quot;Does arithmetic&quot; if __name__ == &quot;__main__&quot;: cli() </code></pre>
<python><command-line-interface><rich><typer>
2023-07-03 00:33:23
1
4,704
suvayu
76,601,099
6,651,956
How do I add a different header to every boto3 request?
<p>I need to be able to dynamically add a custom header based on the object hash. I have previously been able to find examples of how to add a constant header to every request sent, but that's not adequate for my use case.</p>
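botocore's event system supports per-request customisation: a handler registered for a `before-send` (or `before-sign`) event receives the prepared request each time, so a header derived from that request's own body can be computed on the fly instead of being fixed at client creation. A hedged sketch — the header name `x-amz-meta-payload-sha256` and the `PutObject` scoping are my illustrative choices, not anything mandated by boto3:

```python
import hashlib

def add_hash_header(request, **kwargs):
    """Attach a SHA-256 of this request's body as a custom header."""
    body = request.body or b""
    if isinstance(body, str):
        body = body.encode("utf-8")
    request.headers["x-amz-meta-payload-sha256"] = hashlib.sha256(body).hexdigest()

# Registration against a real client would look roughly like:
#   s3 = boto3.client("s3")
#   s3.meta.events.register("before-send.s3.PutObject", add_hash_header)
```

Because the handler runs once per request, each object gets a header computed from its own payload — the constant-header examples only differ in that they ignore the `request` argument.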
<python><boto3><botocore>
2023-07-02 23:06:27
1
871
Frederik Baetens
76,600,743
11,188,140
How can I vectorize the interaction of two numpy arrays
<p>Consider two numpy arrays:</p> <ol> <li><p>array <code>a</code>: The important feature of array <code>a</code> is that every pair of columns (<code>p</code> and <code>q</code>) has a row that holds values (<code>q</code> and <code>p</code>) in that order.<br /> For example, columns 0 and 3 hold values 3 and 0 in row 2, and columns 4 and 5 hold values 5 and 4 in row 1.</p> <pre class="lang-py prettyprint-override"><code>a = np.array([[1, 0, 4, 5, 2, 3], [2, 3, 0, 1, 5, 4], [3, 4, 5, 0, 1, 2], [4, 5, 3, 2, 0, 1], [5, 2, 1, 4, 3, 0]]) </code></pre> </li> <li><p>array <code>b</code>: The first two columns show ALL column pairs (<code>p</code> and <code>q</code>) from array <code>a</code>. The remaining two columns are multipliers, as shown in the following: Consider array <code>b</code>'s row 5 <code>[1, 2, 4, -3]</code>. We examine columns 1 and 2 of array <code>a</code>, and locate the values 2 and 1. Then we replace, in array <code>a</code>, the 2 by 2x4=8, and we replace the 1 by 1x(-3) = -3.</p> <pre class="lang-py prettyprint-override"><code>b = np.array([[0, 1, 3 -1], [0, 2, -1, 3], [0, 3, -2, 4], [0, 4, -1, 0], [0, 5, -1, 4], [1, 2, 4, -3], [1, 3, 0, -1], [1, 4, 1, -2], [1, 5, 1, 1], [2, 3, 1, 1], [2, 4, -1, 0], [2, 5, -1, 1], [3, 4, 0, 0], [3, 5, 2, 1], [4, 5, 1, -2]]) </code></pre> </li> </ol> <p>The final result would look like:</p> <pre class="lang-py prettyprint-override"><code> c = np.array([[ 3, 0,-4,10, 0, 3], [-2, 0, 0,-1, 5,-8], [-6, 4,-5, 0,-2, 2], [-4, 5, 3, 2, 0, 1], [-5, 8,-3, 0, 0, 0]]) </code></pre> <p>Here is what I've been using. It works well for smaller arrays (produces the array c shown above), but my coding skills are quite rusty:</p> <pre><code>c = np.copy(a) for brow in b: arow = np.where(a[:, brow[0]] == brow[1]) c[arow, brow[0]] = (a[arow, brow[0]])*brow[2] c[arow, brow[1]] = (a[arow, brow[1]])*brow[3] print(c) </code></pre> <p>Could this be vectorized?</p>
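Yes — because each column `p` of `a` contains any value `q` at most once, the row hit by every `b`-row can be found in one shot with a broadcasted comparison plus `argmax`, and both multiplications become fancy-indexed assignments. A sketch (note: `b`'s first row above reads `[0, 1, 3 -1]`, which I take to be a typo for `[0, 1, 3, -1]` — without the comma that row would only have three elements):

```python
import numpy as np

def apply_pairs(a, b):
    p, q, mp, mq = b[:, 0], b[:, 1], b[:, 2], b[:, 3]
    # a[:, p] has shape (n_rows, n_pairs); comparing against q broadcasts
    # row-wise and leaves exactly one True per column, which argmax finds.
    rows = np.argmax(a[:, p] == q, axis=0)
    c = a.copy()
    c[rows, p] = a[rows, p] * mp  # multipliers read from the untouched a,
    c[rows, q] = a[rows, q] * mq  # so assignment order does not matter
    return c
```

This does all the lookups in a single vectorized pass instead of a Python-level loop; since every (row, column) cell is written at most once and each product reads from the original `a`, the duplicated-index pitfalls of fancy assignment don't arise here.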
<python><arrays><numpy>
2023-07-02 20:50:31
1
746
user109387
76,600,347
292,502
Why does this Twitter login UI script only work in my developer environment and not on the server?
<p>Twitter made a fundamental change not so long ago: you cannot view a user's timeline any more if you are not logged in. That broke all my scraping code. The code bellow works on my developer environment, but not on the server. In my developer environment it's also headless. As you can see I tried to sprinkle in sleeps to figure out if the script is moving too fast.</p> <p>The script fails by not finding the password input. That input comes into view after filling in the username / email input and pressing the Next button. After filling in the password I'd need to click the &quot;Log in&quot; button.</p> <pre><code>TWITTER_URL_BASE = &quot;https://twitter.com/&quot; SELENIUM_INSTANCE_WAIT = 1 @classmethod def _get_driver(cls): driver = None firefox_options = FirefoxOptions() firefox_options.headless = True firefox_options.add_argument(&quot;width=1920&quot;) firefox_options.add_argument(&quot;height=1080&quot;) firefox_options.add_argument(&quot;window-size=1920,1080&quot;) firefox_options.add_argument(&quot;disable-gpu&quot;) # https://stackoverflow.com/questions/24653127/selenium-error-no-display-specified # export MOZ_HEADLESS=1 firefox_options.binary_location = &quot;/usr/bin/firefox&quot; # firefox_options.set_preference(&quot;extensions.enabledScopes&quot;, 0) # firefox_options.set_preference(&quot;gfx.webrender.all&quot;, False) # firefox_options.set_preference(&quot;layers.acceleration.disabled&quot;, True) firefox_binary = FirefoxBinary(&quot;/usr/bin/firefox&quot;) firefox_profile = FirefoxProfile() firefox_options.binary = &quot;/usr/bin/firefox&quot; # firefox_binary firefox_options.profile = firefox_profile capabilities = DesiredCapabilities.FIREFOX.copy() capabilities[&quot;pageLoadStrategy&quot;] = &quot;normal&quot; firefox_options._caps = capabilities try: driver = webdriver.Firefox( firefox_profile=firefox_profile, firefox_binary=firefox_binary, options=firefox_options, desired_capabilities=capabilities, ) except Exception as e: 
cls.log_response(&quot;_get_driver&quot;, 500, &quot;Crash: {}&quot;.format(e)) cls.log_response(&quot;_get_driver&quot;, 500, traceback.format_exc()) return driver def _login_scraper_user(cls, driver, scraper_account): driver.implicitly_wait(5) driver.get(TWITTER_URL_BASE) WebDriverWait(driver, 10).until( lambda dr: dr.execute_script(&quot;return document.readyState&quot;) == &quot;complete&quot; ) time.sleep(SELENIUM_INSTANCE_WAIT) username_inputs = driver.find_elements_by_css_selector(&quot;input[name='text']&quot;) if not username_inputs: return False username_input_parent = ( username_inputs[0].find_element_by_xpath(&quot;..&quot;).find_element_by_xpath(&quot;..&quot;) ) username_input_parent.click() time.sleep(SELENIUM_INSTANCE_WAIT) username_inputs[0].click() time.sleep(SELENIUM_INSTANCE_WAIT) username_inputs[0].send_keys(scraper_account[&quot;username&quot;]) time.sleep(SELENIUM_INSTANCE_WAIT) next_buttons = driver.find_elements_by_xpath('//span[text()=&quot;Next&quot;]') if not next_buttons: return False next_buttons[0].click() time.sleep(SELENIUM_INSTANCE_WAIT) password_inputs = driver.find_elements_by_css_selector(&quot;input[name='password']&quot;) if not password_inputs: return False password_input_parent = ( password_inputs[0].find_element_by_xpath(&quot;..&quot;).find_element_by_xpath(&quot;..&quot;) ) password_input_parent.click() time.sleep(SELENIUM_INSTANCE_WAIT) password_inputs[0].click() time.sleep(SELENIUM_INSTANCE_WAIT) password_inputs[0].send_keys(scraper_account[&quot;password&quot;]) time.sleep(SELENIUM_INSTANCE_WAIT) login_buttons = driver.find_elements_by_xpath('//span[text()=&quot;Log in&quot;]') if not login_buttons: return False login_buttons[0].click() time.sleep(SELENIUM_INSTANCE_WAIT) if driver.find_elements_by_xpath( '//span[text()=&quot;Boost your account security&quot;]' ): close_buttons = driver.find_elements_by_css_selector( &quot;div[data-testid='app-bar-close']&quot; ) if not close_buttons: return False 
close_buttons[0].click() driver.implicitly_wait(0) return True </code></pre> <p>This is an old version of Selenium because the server lags behind due to technical debt (it's an IaaS). I'm using the same ancient Selenium, however my Firefox is fresh.</p> <hr /> <p>Just a little follow-up: The whole API pricing -&gt; scraping (I predicted this on the 1st of May) -&gt; scrape prevention fight unnecessarily reached the user level: <a href="https://www.cnbc.com/2023/07/03/users-flock-to-twitter-competitor-bluesky-after-elon-musk-imposes-rate-limits.html" rel="nofollow noreferrer">https://www.cnbc.com/2023/07/03/users-flock-to-twitter-competitor-bluesky-after-elon-musk-imposes-rate-limits.html</a> Congratulations!</p>
<python><selenium-webdriver><web-scraping><twitter>
2023-07-02 18:49:34
1
10,879
Csaba Toth
76,600,338
12,415,855
Write comment to excel-cell and autofit the comment-box?
<p>I was able to add a comment to an Excel cell using the following code:</p> <pre><code>import xlwings as xw wb = xw.Book(&quot;test.xlsx&quot;) ws = wb.sheets[0] ws.range(&quot;A1&quot;).api.AddComment(Text=&quot;My Comment&quot;) wb.save() wb.close() </code></pre> <p>Is it possible to autofit the size of the comment box somehow in Excel?</p>
<python><excel><xlwings>
2023-07-02 18:46:50
1
1,515
Rapid1898
76,600,274
17,082,611
val_loss is stuck at 0 while training my keras.Model on the CIFAR-10 dataset
<p>I am trying to fit a variational autoencoder on Cifar10 dataset and I want to print both losses on training and validation data.</p> <p>Unfortunately, even though I set <code>validation_data=(x_val, y_val)</code> as parameter of <code>fit</code> method, it seems that <code>val_total_loss</code> (and other losses such as <code>val_reconstruction_loss</code> and <code>val_kl_loss</code>) is stuck to 0 while training.</p> <p>This is the full reproducible example:</p> <pre><code>class VAE(keras.Model): def __init__(self, encoder, decoder, **kwargs): super().__init__(**kwargs) self.encoder = encoder self.decoder = decoder self.total_loss_tracker = keras.metrics.Mean(name=&quot;total_loss&quot;) self.reconstruction_loss_tracker = keras.metrics.Mean(name=&quot;reconstruction_loss&quot;) self.kl_loss_tracker = keras.metrics.Mean(name=&quot;kl_loss&quot;) @property def metrics(self): return [ self.total_loss_tracker, self.reconstruction_loss_tracker, self.kl_loss_tracker, ] def train_step(self, data): with tf.GradientTape() as tape: z_mean, z_log_var, z = self.encoder(data) reconstruction = self.decoder(z) reconstruction_loss = tf.reduce_mean( tf.reduce_sum( keras.losses.binary_crossentropy(data, reconstruction), axis=(1, 2) ) ) kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var)) kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1)) total_loss = reconstruction_loss + kl_loss grads = tape.gradient(total_loss, self.trainable_weights) self.optimizer.apply_gradients(zip(grads, self.trainable_weights)) self.total_loss_tracker.update_state(total_loss) self.reconstruction_loss_tracker.update_state(reconstruction_loss) self.kl_loss_tracker.update_state(kl_loss) return { &quot;loss&quot;: self.total_loss_tracker.result(), &quot;reconstruction_loss&quot;: self.reconstruction_loss_tracker.result(), &quot;kl_loss&quot;: self.kl_loss_tracker.result(), } def call(self, inputs, training=None, mask=None): _, _, z = encoder(inputs) outputs = decoder(z) return 
outputs class Encoder(keras.Model): def __init__(self, latent_dimension): super(Encoder, self).__init__() self.latent_dim = latent_dimension self.conv_block1 = keras.Sequential([ layers.Conv2D(filters=64, kernel_size=3, activation=&quot;relu&quot;, strides=2, padding=&quot;same&quot;), layers.BatchNormalization() ]) self.conv_block2 = keras.Sequential([ layers.Conv2D(filters=128, kernel_size=3, activation=&quot;relu&quot;, strides=2, padding=&quot;same&quot;), layers.BatchNormalization() ]) self.conv_block3 = keras.Sequential([ layers.Conv2D(filters=256, kernel_size=3, activation=&quot;relu&quot;, strides=2, padding=&quot;same&quot;), layers.BatchNormalization() ]) self.flatten = layers.Flatten() self.dense = layers.Dense(units=100, activation=&quot;relu&quot;) self.z_mean = layers.Dense(latent_dimension, name=&quot;z_mean&quot;) self.z_log_var = layers.Dense(latent_dimension, name=&quot;z_log_var&quot;) self.sampling = sample def call(self, inputs, training=None, mask=None): x = self.conv_block1(inputs) x = self.conv_block2(x) x = self.conv_block3(x) x = self.flatten(x) x = self.dense(x) z_mean = self.z_mean(x) z_log_var = self.z_log_var(x) z = self.sampling(z_mean, z_log_var) return z_mean, z_log_var, z class Decoder(keras.Model): super(Decoder, self).__init__() self.latent_dim = latent_dimension self.dense1 = keras.Sequential([ layers.Dense(units=100, activation=&quot;relu&quot;), layers.BatchNormalization() ]) self.dense2 = keras.Sequential([ layers.Dense(units=1024, activation=&quot;relu&quot;), layers.BatchNormalization() ]) self.dense3 = keras.Sequential([ layers.Dense(units=4096, activation=&quot;relu&quot;), layers.BatchNormalization() ]) self.reshape = layers.Reshape((4, 4, 256)) self.deconv1 = keras.Sequential([ layers.Conv2DTranspose(filters=256, kernel_size=3, activation=&quot;relu&quot;, strides=2, padding=&quot;same&quot;), layers.BatchNormalization() ]) self.deconv2 = keras.Sequential([ layers.Conv2DTranspose(filters=128, kernel_size=3, 
activation=&quot;relu&quot;, strides=1, padding=&quot;same&quot;), layers.BatchNormalization() ]) self.deconv3 = keras.Sequential([ layers.Conv2DTranspose(filters=128, kernel_size=3, activation=&quot;relu&quot;, strides=2, padding=&quot;same&quot;), layers.BatchNormalization() ]) self.deconv4 = keras.Sequential([ layers.Conv2DTranspose(filters=64, kernel_size=3, activation=&quot;relu&quot;, strides=1, padding=&quot;same&quot;), layers.BatchNormalization() ]) self.deconv5 = keras.Sequential([ layers.Conv2DTranspose(filters=64, kernel_size=3, activation=&quot;relu&quot;, strides=2, padding=&quot;same&quot;), layers.BatchNormalization() ]) self.deconv6 = layers.Conv2DTranspose(filters=3, kernel_size=3, activation=&quot;sigmoid&quot;, padding=&quot;same&quot;) def call(self, inputs, training=None, mask=None): x = self.dense1(inputs) x = self.dense2(x) x = self.dense3(x) x = self.reshape(x) x = self.deconv1(x) x = self.deconv2(x) x = self.deconv3(x) x = self.deconv4(x) x = self.deconv5(x) decoder_outputs = self.deconv6(x) return decoder_outputs latent_dimension = 100 encoder = Encoder(latent_dimension) decoder = Decoder(latent_dimension) # Load the CIFAR-10 dataset (x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data() # Normalize the input data x_train = x_train.astype(&quot;float32&quot;) / 255.0 x_test = x_test.astype(&quot;float32&quot;) / 255.0 # Split the data into training, validation, and test sets validation_size = 0.2 # 20% of the training data will be used for validation x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=validation_size) vae = VAE(encoder, decoder) vae.compile(optimizer=Adam()) epochs = 2 batch_size = 128 history = vae.fit(x_train, epochs=epochs, batch_size=batch_size, validation_data=(x_val, y_val)) </code></pre> <p>whose output is:</p> <pre><code>Epoch 1/2 313/313 [==============================] - 32s 97ms/step - loss: 706.5906 - reconstruction_loss: 706.1145 - kl_loss: 0.0157 - val_total_loss: 
0.0000e+00 - val_reconstruction_loss: 0.0000e+00 - val_kl_loss: 0.0000e+00 Epoch 2/2 313/313 [==============================] - 30s 95ms/step - loss: 705.7831 - reconstruction_loss: 690.2925 - kl_loss: 7.6490 - val_total_loss: 0.0000e+00 - val_reconstruction_loss: 0.0000e+00 - val_kl_loss: 0.0000e+00 </code></pre> <p>I thought that the problem concerned the fact that <code>history.validation_data</code> is <code>None</code> even though I properly set <code>validation_data</code> parameter into <code>fit</code> method, <a href="https://stackoverflow.com/questions/76595965/history-validation-data-is-none-even-though-i-added-x-val-and-y-val-in-fit-metho?noredirect=1#comment135052367_76595965">as in my previous question</a> but the problem seems not to be correlated to this fact.</p> <p>Some help?</p> <p>Note that tensorflow version is <code>2.13.0rc1</code> and I am using Python 3.11 as interpreter.</p>
<python><tensorflow><keras><tensorflow2.0>
2023-07-02 18:32:24
1
481
tail
76,600,254
12,544,460
The proper way to run Spark Connect in Anaconda - error '$HOME' is not recognized as an internal or external command, operable program or batch file
<p>I try to learn this lesson <a href="https://spark.apache.org/docs/latest/api/python/getting_started/quickstart_connect.html" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/api/python/getting_started/quickstart_connect.html</a></p> <ol> <li>Method 1: from anaconda - window</li> </ol> <p>by download the JP notebook to my Downloads folder, then start the jupyter notebook via anaconda</p> <p>when I run the line</p> <p>!$HOME/sbin/start-connect-server.sh --packages org.apache.spark:spark-connect_2.12:$SPARK_VERSION</p> <p>but it raise error &quot;'$HOME' is not recognized as an internal or external command, operable program or batch file.&quot;</p> <p>after follow all the step in <a href="https://stackoverflow.com/a/40514875/12544460">https://stackoverflow.com/a/40514875/12544460</a> but it still does not work.</p> <ul> <li><p>I start the jupyter note book in anaconda prompt from C:\Users\name\</p> </li> <li><p>my downloaded notebook in C:\Users\name\Downloads</p> </li> <li><p>my location to run Spark connect is:</p> </li> </ul> <p>C:\Users\name\anaconda3\envs\pyspark_env\Lib\site-packages\pyspark (already set HOME=&quot;&quot; in anaconda cmd)</p> <p>for method 1: how to fix the home location?</p> <ol start="2"> <li>Method 2: run pyspark in spark file in ubuntu</li> </ol> <p>following this lesson: <a href="https://spark.apache.org/docs/latest/spark-connect-overview.html" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/spark-connect-overview.html</a></p> <p>the very first lines work fine until I run to the step &quot;spark = SparkSession.builder.getOrCreate()&quot; it always raise error like &quot;ImportError: Pandas &gt;= 1.0.5 must be installed; however, it was not found.&quot;. Mention that this is a new extracted Spark folder. And then I try to install pandas via: &quot;pip install pandas&quot;.... successful but it still raise the error above. 
I tried multiple times to find where to put the pandas zip or the extracted package in the Spark folder, but it still did not work.</p> <p>For method 2, what is the proper way to fix this problem?</p>
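For method 1, the immediate error is a shell mismatch: `$HOME` is POSIX shell syntax, and the Windows `cmd` that Jupyter's `!` hands commands to does not expand it — the closest equivalent is `%USERPROFILE%`, or an explicit path. Even with the path fixed, `start-connect-server.sh` is a POSIX shell script, so on Windows it would still need a Unix-style shell such as WSL or Git Bash. A hedged config sketch — the path below merely echoes the site-packages location mentioned in the question and assumes the pip-installed pyspark ships the `sbin` scripts, which may not hold for every version:

```bat
REM cmd.exe expands %VAR%, not $VAR
set SPARK_HOME=C:\Users\name\anaconda3\envs\pyspark_env\Lib\site-packages\pyspark
REM .sh launchers need a POSIX shell on Windows, e.g. Git Bash or WSL;
REM assumes SPARK_VERSION is already set, as in the quickstart
bash "%SPARK_HOME%\sbin\start-connect-server.sh" --packages org.apache.spark:spark-connect_2.12:%SPARK_VERSION%
```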
<python><apache-spark><pyspark><anaconda><spark-connect>
2023-07-02 18:25:37
0
362
Tom Tom
76,600,197
5,659,969
Type hint issue when base class is parameterized on a value type and has methods that return that type
<p>This a toy example to illustrate the issue:</p> <pre class="lang-py prettyprint-override"><code>from typing import Type from dataclasses import dataclass, asdict import json @dataclass class ValueType1: x: int y: int @dataclass class ValueType2: x: int z: str class FooBase: def __init__(self, value_cls: Type[ValueType1|ValueType2], name: str): self.name = name self.value_cls = value_cls def save(self, value: ValueType1|ValueType2): with open(self.name+'.json', 'w') as f: json.dump(asdict(value), f) def load(self) -&gt; ValueType1|ValueType2: with open(self.name+'.json', 'r') as f: return self.value_cls(**json.load(f)) class Foo1(FooBase): def __init__(self): super().__init__(value_cls=ValueType1, name='1') class Foo2(FooBase): def __init__(self): super().__init__(value_cls=ValueType2, name='2') foo1 = Foo1() foo2 = Foo2() foo1.save(ValueType1(x=10, y=20)) foo2.save(ValueType2(x=10, z='a')) res1 = foo1.load() res2 = foo2.load() print(res1.y) print(res2.z) </code></pre> <p>This works fine and prints 20 and 'a' as expected but typing errors are shown at <code>res1.y</code> and <code>res2.z</code> of this sort (only <code>res1.y</code> errors shown):</p> <pre class="lang-none prettyprint-override"><code>Type of &quot;y&quot; is unknownPylancereportUnknownMemberType Type of &quot;y&quot; is partially unknown Type of &quot;y&quot; is &quot;int | Unknown&quot;PylancereportUnknownMemberType Argument type is partially unknown Argument corresponds to parameter &quot;values&quot; in function &quot;print&quot; Argument type is &quot;int | Unknown&quot;PylancereportUnknownArgumentType Cannot access member &quot;y&quot; for type &quot;ValueType2&quot; Member &quot;y&quot; is unknownPylancereportGeneralTypeIssues (variable) y: int | Unknown </code></pre> <p>I'm wondering if there is some type hinting magic that can resolve this and make it clear that <code>res1</code> is of type <code>ValueType1</code>. Thanks!</p>
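One way to resolve this is to make `FooBase` generic in the value type, so each subclass pins `T` and `load` advertises the precise class instead of the union. A sketch of the pattern — file I/O is replaced by an in-memory string so the example is self-contained, and `ValueType2`/`Foo2` would follow identically:

```python
import json
from dataclasses import dataclass, asdict
from typing import Generic, Type, TypeVar

@dataclass
class ValueType1:
    x: int
    y: int

T = TypeVar("T")

class FooBase(Generic[T]):
    def __init__(self, value_cls: Type[T], name: str):
        self.name = name
        self.value_cls = value_cls
        self._blob = ""  # stands in for the JSON file

    def save(self, value: T) -> None:
        self._blob = json.dumps(asdict(value))

    def load(self) -> T:  # checkers now see the concrete T, not a union
        return self.value_cls(**json.loads(self._blob))

class Foo1(FooBase[ValueType1]):
    def __init__(self):
        super().__init__(value_cls=ValueType1, name="1")
```

Because `Foo1` subclasses `FooBase[ValueType1]`, Pylance infers `foo1.load()` as `ValueType1`, so `res1.y` type-checks without any mention of `ValueType2`.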
<python><python-typing>
2023-07-02 18:06:39
1
479
omasoud
76,600,175
1,260,682
Accessing a type variable in a static method
<p>For a class that is created with a <code>TypeVar</code>, is there a way I can print the value of the <code>TypeVar</code> inside a static class method? For instance:</p> <pre><code>T = TypeVar(&quot;T&quot;) class Foo(Generic[T]): @staticmethod def print(): # what should I put here in order for the following code to print int? # there is no 'self' here so I can't do typing.get_args(self) Foo[int].print() </code></pre> <p>Note: this is not the same as <a href="https://stackoverflow.com/questions/48572831/how-to-access-the-type-arguments-of-typing-generic">this earlier question</a> that accesses the type arguments inside a variable. Here I am asking how to access the type arguments <em>inside a class static method</em> itself.</p>
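At runtime, `Foo[int]` is a `typing._GenericAlias`, and calling a `staticmethod` through it falls straight back to `Foo`, so the type argument is gone by the time the method runs — there is nothing standard left to inspect. A runtime workaround (a sketch, not standard typing machinery) is to override `__class_getitem__` so that subscription produces a subclass that remembers its parameter; the method then has to be a `classmethod`, since a `staticmethod` receives no class to read from:

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Foo(Generic[T]):
    _type_arg = None  # filled in by __class_getitem__

    def __class_getitem__(cls, item):
        # Return a throwaway subclass that carries the parameter at runtime.
        return type(cls.__name__, (cls,), {"_type_arg": item})

    @classmethod
    def print_arg(cls):
        print(cls._type_arg)

Foo[int].print_arg()  # prints: <class 'int'>
```

The trade-off: overriding `__class_getitem__` this way gives up some of `typing`'s own behaviour (`Foo[int]` is no longer a `_GenericAlias`), so it suits runtime-introspection needs more than strict static typing.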
<python>
2023-07-02 18:01:22
0
6,230
JRR
76,599,932
1,585,410
Python - Use pkexec only one time in subprocess.run for multiple commands
<p>I need to use two commands requiring privileges, and for this reason I'm using pkexec. My piece of code is:</p> <pre><code>def __init__(self): self.__binary = &quot;/usr/bin/docker&quot; self.docker_start = subprocess.run([&quot;pkexec&quot;, &quot;systemctl&quot;, &quot;start&quot;, &quot;docker&quot;], capture_output=True) self.pulled_containers = subprocess.run([&quot;pkexec&quot;, self.__binary, &quot;ps&quot;, &quot;-a&quot;, &quot;--format&quot;, &quot;'{{.Image}}'&quot;], capture_output=True) </code></pre> <p>I would like to avoid the password prompt appearing twice; I would like a single prompt covering both commands. Usually I can use <code>pkexec bash -c &quot;command1; command2&quot;</code>, and I tried:</p> <pre><code>self.pulled_containers = subprocess.run([&quot;pkexec&quot;, &quot;bash&quot;, &quot;-c&quot;, &quot;\&quot;systemctl start docker; docker ps -a --format '{{.Image}}'\&quot;&quot;], capture_output=True) </code></pre> <p>but it does not seem to work. Is there a good way to run two commands through subprocess while invoking pkexec (and its password prompt) only once?</p>
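The extra `\"` characters are the likely culprit: with the list form of `subprocess.run` there is no shell between Python and `pkexec`, so arguments are passed through verbatim and the embedded quotes become part of the command string — bash then looks for a single command literally named `"systemctl start docker; docker ps ..."`. Dropping the manual quotes should fix it. A small sketch demonstrating the principle with a harmless `sh` standing in for `pkexec`:

```python
import subprocess

# Correct: the whole command line is ONE argv element; no extra quoting.
ok = subprocess.run(["sh", "-c", "echo one; echo two"],
                    capture_output=True, text=True)

# Broken (what the escaped \" version amounts to): sh receives a command
# string wrapped in literal quotes and treats it as a single command name.
broken = subprocess.run(["sh", "-c", '"echo one; echo two"'],
                        capture_output=True, text=True)
```

Applied to the question, that means `subprocess.run(["pkexec", "bash", "-c", "systemctl start docker; docker ps -a --format '{{.Image}}'"], capture_output=True)` — the inner single quotes around `{{.Image}}` are fine, because bash (not Python) strips them.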
<python><linux><privileges><polkit>
2023-07-02 16:55:14
1
435
Develobeer
76,599,855
4,324,496
Tensorflow AttributeError: '_NumpyIterator' object has no attribute 'shard'
<p>Getting AttributeError: '_NumpyIterator' object has no attribute 'shard' while executing the code below. My dataset has images and labels which I want to convert to TFRecords.</p> <pre><code>ds_train = tf.keras.utils.image_dataset_from_directory(some parameters) ds_train = ( ds_train .unbatch() ) def encode_image(image, label): image_converted = tf.image.convert_image_dtype(image, dtype=tf.uint8) image = tf.io.encode_jpeg(image_converted) label = tf.argmax(label) return image, label encode_ds = ( ds_train.map(encode_image) ) NUM_SHARD=10 PATH = &quot;some path&quot; for shard_no in range(NUM_SHARD): encode_ds = ( encode_ds .shard(NUM_SHARD, shard_no) .as_numpy_iterator() ) with tf.io.TFRecordWriter(PATH.format(shard_no)) as file_writer: for image, label in encode_ds: file_writer.write(create_example(image, label)) </code></pre>
<python><tensorflow><tensorflow2.0><tensorflow-datasets>
2023-07-02 16:37:07
1
2,954
ZKS
76,599,694
16,383,578
What is the fastest way to append unique integers to a list while keeping the list sorted?
<p>I need to append values to an existing list one by one, while maintaining the following two conditions at all times:</p> <ul> <li><p>Every value in the list is unique</p> </li> <li><p>The list is always sorted in ascending order</p> </li> </ul> <p>The values are all integers, and there is a huge amount of them (literally millions, no exaggeration), and the list is dynamically constructed, values are added or removed depending on a changing condition, and I really need a very efficient way to do this.</p> <p><code>sorted(set(lst))</code> doesn't qualify as a solution because first the list is not pre-existing and I need to mutate it after that, the solution isn't efficient by itself and to maintain the two above conditions I need to repeat the inefficient method after every mutation, which is impractical to do and would take an unimaginable amount of time to process millions of numbers.</p> <p>One way to do it is to maintain a set with the same elements as the list, and do membership checking using the set and only add elements to the list if there aren't in the set, and add the same elements to both at the same time. 
This maintains uniqueness.</p> <p>To maintain the order use binary search to calculate the insertion index required and insert the element at the index.</p> <p>Example implementation I come up with without much thought:</p> <pre><code>from bisect import bisect class Sorting_List: def __init__(self): self.data = [] self.unique = set() def add(self, n): if n in self.unique: return self.unique.add(n) if not self.data: self.data.append(n) return if n &gt; self.data[-1]: self.data.append(n) elif n &lt; self.data[0]: self.data.insert(0, n) elif len(self.data) == 2: self.data.insert(1, n) else: self.data.insert(bisect(self.data, n), n) </code></pre> <p>I am not satisfied with this solution, because I have to maintain a set, this isn't memory efficient and I don't think this is the most time efficient solution.</p> <p>I have done some tests:</p> <pre><code>from timeit import timeit def test(n): setup = f'''from bisect import bisect from random import choice c = choice(range({n//2}, {n})) numbers = list(range({n}))''' linear = timeit('c in numbers', setup) / 1e6 binary = timeit('bisect(numbers, c) - 1 == c ', setup) / 1e6 return linear, binary </code></pre> <pre class="lang-none prettyprint-override"><code>In [182]: [test(i) for i in range(1, 24)] Out[182]: [(3.1215199967846275e-08, 9.411800000816583e-08), (4.0730200009420514e-08, 9.4089699909091e-08), (5.392530001699925e-08, 1.0571250005159527e-07), (5.4071999969892203e-08, 1.111615999834612e-07), (5.495569994673133e-08, 1.3055420003365725e-07), (7.999380002729595e-08, 1.2215890001971274e-07), (6.739119999110698e-08, 1.1633279989473522e-07), (1.1775600002147258e-07, 1.2142769992351532e-07), (9.138470003381372e-08, 1.1602859990671277e-07), (1.212503999704495e-07, 1.2919300002977253e-07), (1.4093979995232076e-07, 1.1543070001062005e-07), (1.3911779993213713e-07, 1.1900339997373521e-07), (1.641304000513628e-07, 1.2721199996303767e-07), (2.2550319996662438e-07, 1.3572790008038284e-07), (2.0048839994706214e-07, 
1.2690539995674044e-07), (2.0169020001776515e-07, 1.3345349999144673e-07), (1.482249000109732e-07, 1.2819399998988957e-07), (1.777580000925809e-07, 1.2856919993646443e-07), (1.5940839995164425e-07, 1.2710969999898224e-07), (2.772621000185609e-07, 1.4048079994972795e-07), (2.014727999921888e-07, 1.4225799997802823e-07), (2.851358000189066e-07, 1.3718699989840387e-07), (2.607858000556007e-07, 1.4413580007385463e-07)] </code></pre> <p>So the membership checking of lists is done using a linear search, by iterating through the list one by one and perform equality checking, this is inefficient but beats binary search for small number of elements (n &lt;= 12) and is much slower for larger amounts.</p> <pre class="lang-none prettyprint-override"><code>In [183]: test(256) Out[183]: (2.5505281999940053e-06, 1.7594210000243037e-07) </code></pre> <p>And <code>set</code> membership checking is faster than binary search, which is much faster than linear search:</p> <pre class="lang-none prettyprint-override"><code>In [188]: lst = list(range(256)) In [189]: s = set(lst) In [190]: %timeit 199 in s 42.5 ns ± 0.946 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each) In [191]: %timeit bisect(lst, 199) 159 ns ± 1.38 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each) In [192]: %timeit 199 in lst 2.53 µs ± 31.7 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) </code></pre> <p>How can we devise a faster method?</p> <hr /> <p>I am well aware that Python <code>set</code>s are unordered, and sorting <code>set</code>s yields <code>list</code>s. 
But perhaps there are some external libraries that provide ordered sets, if there are then I am not aware of them and I haven't used them.</p> <p>Solutions using such ordered sets are welcome so long as they are performant, but I am not asking for recommendations so please don't close the question for that reason.</p> <p>Nevertheless ordered sets only fulfill the first criterium, only uniqueness is maintained, and I need the orderedness to be maintained as well.</p> <p><code>list</code> here is just terminology, in plain Python <code>list</code> is the only ordered mutable sequence. Other data types are welcome.</p> <p><a href="https://stackoverflow.com/questions/76599694/what-is-the-fastest-way-to-append-unique-integers-to-a-list-while-keeping-the-li#comment135054513_76599694">About SQLite</a>, I haven't tested in-memory transient databases yet, but for a local <a href="https://en.wikipedia.org/wiki/Hard_disk_drive" rel="nofollow noreferrer">HDD</a> based database, the I/O delay is many milliseconds which is unacceptable.</p> <hr /> <p>I have just performed yet another test:</p> <pre class="lang-none prettyprint-override"><code>In [243]: numbers = random.choices(range(4294967295), k=1048576) In [244]: sl = Sorting_List() In [245]: ss = SortedSet() In [246]: %timeit for i in numbers: ss.add(i) 306 ms ± 8.55 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [247]: %timeit for i in numbers: sl.add(i) --------------------------------------------------------------------------- KeyboardInterrupt Traceback (most recent call last) ... In [248]: sls = SortedList() In [249]: s = set() In [250]: %%timeit ...: for i in numbers: ...: if i not in s: ...: sls.add(i) ...: s.add(i) 145 ms ± 3.24 ms per loop (mean ± std. dev. 
of 7 runs, 1 loop each) In [251]: len(numbers) Out[251]: 1048576 </code></pre> <p>It seems my custom implementation is at least as performant as <code>SortedSet</code> for reasonable amount of data, I need much more efficient methods.</p> <p>For a million elements my method becomes very inefficient indeed, while <code>SortedSet</code> and <code>SortedList</code> both keep being competent.</p> <p>And now it seems a <code>SortedList</code> plus a <code>set</code> for membership checking is the most time efficient method, but obviously not very memory efficient, just as my custom implementation. But my custom implementation seems to outperform <code>SortedSet</code> which seems to be memory efficient.</p> <hr /> <p>Testing <code>SortedList</code> by adding 2^20 elements one by one, while keeping the elements unique by doing membership checking using the container itself:</p> <pre class="lang-none prettyprint-override"><code>In [252]: sls = SortedList() In [253]: %%timeit ...: for i in numbers: ...: if i not in sls: ...: sls.add(i) 1.93 s ± 16.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) </code></pre> <p>(<code>numbers</code> is defined above)</p> <p>The test clearly shows <code>SortedList</code> membership checking is slow, as suspected, it uses linear search.</p>
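One detail worth noting about the custom class above: `bisect` alone can replace the auxiliary `set`, because `bisect_left` returns both the insertion point and, implicitly, a membership test — if the element already sitting at that index equals `n`, it is a duplicate. A sketch (the O(n) cost of `list.insert` remains, which is exactly what `sortedcontainers.SortedList` amortises for large n):

```python
from bisect import bisect_left

class SortedUniqueList:
    """Ascending list of unique ints; one O(log n) search per operation."""

    def __init__(self):
        self.data = []

    def add(self, n):
        i = bisect_left(self.data, n)
        if i < len(self.data) and self.data[i] == n:
            return  # duplicate: already stored
        self.data.insert(i, n)  # O(n) element shift — the true bottleneck

    def discard(self, n):
        i = bisect_left(self.data, n)
        if i < len(self.data) and self.data[i] == n:
            del self.data[i]
```

This halves the memory of the set-plus-list approach at the price of a somewhat slower membership check (binary search at ~160 ns versus a set lookup at ~40 ns, per the measurements above); which trade wins depends on whether memory or per-add latency dominates.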
<python><python-3.x><algorithm><performance>
2023-07-02 15:50:02
3
3,930
Ξένη Γήινος
76,599,507
4,217,755
How to incorporate Frechet Inception Distance into latent diffusion training and validation in pytorch lightning?
<p>I am using <a href="https://github.com/CompVis/stable-diffusion" rel="nofollow noreferrer">stable diffusion code</a> to train a diffusion model. Their <a href="https://arxiv.org/pdf/2112.10752.pdf" rel="nofollow noreferrer">paper</a> mentioned <strong>FID scores assessed on 5000 samples using 100 DDIM steps.</strong> The git repository does not have the implementation for how they computed the FID score over 5000 samples. I plan on using the torchmetrics <a href="https://torchmetrics.readthedocs.io/en/stable/image/frechet_inception_distance.html" rel="nofollow noreferrer">FID</a> to compute it. What is not clear to me is how to do this for a random set of 5000 images in a pytorch lightning run. My question is: how do I report the FID score metric for my training and validation epochs?</p> <p>I have never used generative AI metrics before.</p>
<python><pytorch-lightning><stable-diffusion>
2023-07-02 15:05:05
0
581
bananagator
76,599,450
7,200,715
Get all permutations of substrings
<p>I want to get all possible permutations from a list of substrings.</p> <p>Let's take <code>['ab', 'CD', '12']</code> as the input list.</p> <p>I want:</p> <pre><code>'ab', 'CD', '12', 'abCD', 'ab12', 'CD12', '12ab', '12CD', ..., 'abCD12', 'CDab12', '12abCD' </code></pre> <p>Because the input list will be dynamic, I need a Pythonic solution, probably using <code>itertools</code> over nested <code>for</code> loops.</p>
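A hedged sketch of the usual `itertools` approach: take permutations of every length from 1 up to `len(parts)` and join each tuple into a string (the function name is my own):

```python
from itertools import permutations

def joined_permutations(parts):
    # Permutations of length 1, 2, ..., len(parts), each joined into a string.
    return [''.join(p)
            for r in range(1, len(parts) + 1)
            for p in permutations(parts, r)]

result = joined_permutations(['ab', 'CD', '12'])
print(result)  # 3 + 6 + 6 = 15 strings for a three-element input
```

For three substrings this yields 3 one-part, 6 two-part, and 6 three-part combinations, matching the pattern in the question.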
<python><python-3.x>
2023-07-02 14:48:14
1
10,511
Arount
76,599,443
14,094,460
pydantic: assign reference instead of copy when 'validate_assignment' is True
<p>Minimal working example:</p> <pre class="lang-py prettyprint-override"><code># okish: from pydantic import BaseModel class One(BaseModel): field1: list class Two(One): field2: list def __init__(self, **kwargs): super().__init__(**kwargs) self.field1 = self.field2 # works, but no validation :( # with assignment validation, but with reference copy &quot;problem&quot; class OneV2(BaseModel): field1: list class Config: validate_assignment = True class TwoV2(OneV2): field2: list class Config: validate_assignment = True def __init__(self, **kwargs): super().__init__(**kwargs) self.field1 = self.field2 # result in deep-copy, not reference assignment # testing _2 = Two(field1=[1], field2=[1, 2, 3]) _2.field1 is _2.field2 # True, as &quot;expected&quot; _2_v2 = TwoV2(field1=[1, 2, 3], field2=[1, 2, 3, 4]) _2_v2.field1 is _2_v2.field2 # False, not expected </code></pre> <p>In short, I was expecting (and need) that a reference to <code>field2</code> is assigned to <code>field1</code> in <code>TwoV2</code> and am having difficulties to find the appropriate documentation. Any explanations or doc references would be highly appreciated.</p> <p>python: 3.9 pydantic: '2.0'</p>
<python><pydantic>
2023-07-02 14:47:29
0
1,442
deponovo
76,599,436
2,055,938
Raspberry Pi Python button gives false positives
<p>Can someone point me in the correct direction of how to fix this issue with the false positives I recieve?</p> <p>I have tried to do according to this suggestion (<a href="https://raspberrypi.stackexchange.com/questions/63512/how-can-i-detect-how-long-a-button-is-pressed">https://raspberrypi.stackexchange.com/questions/63512/how-can-i-detect-how-long-a-button-is-pressed</a>) but to no avail. I still get those random false positives that ruin everything I have worked for these past two weeks.</p> <p>I have added the entire code for clarity (but relevant parts are down).</p> <pre><code>################################### # Control program ################################### # LCD R Pi # RS GPIO 26 (pin 37) # E GPIO 19 (35) # DB4 GPIO 13 (33) # DB5 GPIO 6 (31) # DB6 GPIO 5 (29) # DB7 GPIO 11 / SCLK (23) # WPSE311/DHT11 GPIO 24 (18) # Relay Fog GPIO 21 (40) # Relay Oxy GPIO 20 (38) # Relay LED GPIO 16 (36) # Button Killsw. GPIO 12 (32) # F1 (Therm.) GPIO 4 (7) - Gul kabel # F2 (Fogger) GPIO 25 (22) - Blå kabel # R (Root) GPIO 24 (18) - Lila kabel # G (Grow area) GPIO 23 (16) - Brun kabel # P (Bar . SCL) GPIO 3 (5) - Grön kabel (OBS! Går via I2C) # P (Bar . SDA) GPIO 2 (3) - Grå kabel (OBS! Går via I2C) ################################### # Imports--- [ToDo: How to import settings from a configuration file?] ################################### import adafruit_dht, board, digitalio, threading, os, glob import adafruit_character_lcd.character_lcd as characterlcd import RPi.GPIO as GPIO import adafruit_bmp280 import logging import time from datetime import datetime from time import sleep, perf_counter from gpiozero import CPUTemperature from datetime import datetime ################################### # Definitions--- ################################### # Compatible with all versions of RPI as of Jan. 
2019 # v1 - v3B+ lcd_rs = digitalio.DigitalInOut(board.D26) lcd_en = digitalio.DigitalInOut(board.D19) lcd_d4 = digitalio.DigitalInOut(board.D13) lcd_d5 = digitalio.DigitalInOut(board.D6) lcd_d6 = digitalio.DigitalInOut(board.D5) lcd_d7 = digitalio.DigitalInOut(board.D11) # Define LCD column and row size for 16x2 LCD. lcd_columns = 16 lcd_rows = 2 # Define relays relay_fogger = 21 #digitalio.DigitalInOut(board.D21) #- Why does this not work? relay_oxygen = 20 #digitalio.DigitalInOut(board.D20) #- Why does this not work? relay_led = 16 #digitalio.DigitalInOut(board.D16) #- Why does this not work? # Define liquid nutrient temperature probe liquid_nutrients_probe = 23 #digitalio.DigitalInOut(board.D23) - Why does this not work? # Define the killswitch push button GPIO.setup(12, GPIO.IN, pull_up_down=GPIO.PUD_DOWN) ################################### # Initializations ################################### # Init file log #logging.basicConfig(filename=datetime.now()+'.log', format='%(asctime)s %(levelname)s: %(message)s', encoding='utf-8', level=logging.DEBUG) #logging.basicConfig(filename='Logging.log', format='%(asctime)s: %(message)s', encoding='utf-8', level=logging.DEBUG) # Init the lcd class--- lcd = characterlcd.Character_LCD_Mono(lcd_rs, lcd_en, lcd_d4, lcd_d5, lcd_d6, lcd_d7, lcd_columns, lcd_rows) # Init thermometer for liquid nutrients fluid os.system('modprobe w1-gpio') os.system('modprobe w1-therm') base_dir = '/sys/bus/w1/devices/' device_folder = glob.glob(base_dir + '28*')[0] device_file = device_folder + '/w1_slave' # Initialise the BMP280 i2c = board.I2C() # For some strange reason, the default address of the BMP280 is 0x76 when checking all via 'i2cdetect -y 1' whereas default expects 0x77. 
# Thus are we forcing the address to correpsond correctly with below line of code.&quot;&quot;&quot; bmp280 = adafruit_bmp280.Adafruit_BMP280_I2C(i2c, address=0x76) # Init DHT22 sensors--- dhtDevice_nutrient_mist = adafruit_dht.DHT22(board.D23, use_pulseio=False) dhtDevice_roots_chamber = adafruit_dht.DHT22(board.D24, use_pulseio=False) dhtDevice_grow_area = adafruit_dht.DHT22(board.D25, use_pulseio=False) # Init relays--- GPIO.setwarnings(False) GPIO.setup(relay_fogger, GPIO.OUT) GPIO.setup(relay_oxygen, GPIO.OUT) GPIO.setup(relay_led, GPIO.OUT) ################################### # Global variables--- ################################### killswitch = False display_info_delay = 5 # Seconds # Fogger bucket variables temp_nutrient_solution = 0 temp_nutrient_mist = 0 humidity_nutrient_mist = 0 fogger_on_seconds = 2700 #45 min fogger_off_seconds = 900 #15 min sleep_fogger = False # Grow bucket variables temp_roots = 0 humidity_roots = 0 pressure_roots = 0 temp_grow = 0 humidity_grow = 0 # Oxygen bucket variables sleep_oxygen = False # Rapsberry Pi internal temperature rpi_internal_temp = 0 # DHT22 error status variables dht_fog_chamber_error = False dht_root_chamber_error = False dht_grow_area_error = False ################################### # Methods--- ################################### def button_callback(channel): &quot;&quot;&quot;When the button is pressed.&quot;&quot;&quot; global killswitch # Due to serious issues with false positives regarding button being pushed start_time = time.time() while GPIO.input(channel) == 0: # Wait for button up pass buttonTime = time.time() - start_time if buttonTime &gt;= .1: print(&quot;###########\nKillswitch button was pressed!\n###########&quot;) killswitch = True # Init the button GPIO.add_event_detect(12, GPIO.RISING, callback=button_callback, bouncetime=150) #GPIO.add_event_detect(12, GPIO.FALLING, callback=button_callback, bouncetime=500) def read_temp_raw(): &quot;&quot;&quot;Help function for the thermometer 
readings.&quot;&quot;&quot; f = open(device_file, 'r') lines = f.readlines() f.close() return lines def get_temp_nutrient_solution(): &quot;&quot;&quot;Measure the temperature of the nutrient solution where the ultrasonic fogger is.&quot;&quot;&quot; global killswitch while killswitch==False: global temp_nutrient_solution lines = read_temp_raw() while lines[0].strip()[-3:] != 'YES': time.sleep(0.2) lines = read_temp_raw() equals_pos = lines[1].find('t=') if equals_pos != -1: temp_string = lines[1][equals_pos+2:] temp_nutrient_solution = float(temp_string) / 1000.0 # For development process print( &quot;F1 Temp nutrient solution: {:.1f} C / {:.1f} F&quot;.format( temp_nutrient_solution, c2f(temp_nutrient_solution) ) ) sleep(1) def get_temp_humidity_nutrient_mist(): &quot;&quot;&quot;Measure the temperature and humidity of the nutrient mist where the ultrasonic fogger is.&quot;&quot;&quot; global killswitch while killswitch==False: try: # Update global temp value and humidity global temp_nutrient_mist global humidity_nutrient_mist temp_nutrient_mist = dhtDevice_nutrient_mist.temperature humidity_nutrient_mist = dhtDevice_nutrient_mist.humidity # For development process print( &quot;F2: {:.1f} C / {:.1f} F Humidity: {}% &quot;.format( temp_nutrient_mist, c2f(temp_nutrient_mist), humidity_nutrient_mist ) ) except RuntimeError as error: # Errors happen fairly often, DHT's are hard to read, just keep going print(&quot;Warning (DHT fog): &quot; + error.args[0]) sleep(2) # sleep(1) for DHT11 and sleep(2) for DHT22 pass except Exception as error: global dht_fog_chamber_error print(&quot;DHT fog sensor fatal error!&quot;) dht_fog_chamber_error = True dhtDevice_nutrient_mist.exit() raise error sleep(2) def get_temp_humidity_grow_area(): &quot;&quot;&quot;Measure the temperature and humidity of the grow area.&quot;&quot;&quot; global killswitch while killswitch==False: try: # Update global temp value and humidity global temp_grow global humidity_grow temp_grow = 
dhtDevice_grow_area.temperature humidity_grow = dhtDevice_grow_area.humidity # For development process print( &quot;Grow area: {:.1f} C / {:.1f} F Humidity: {}% &quot;.format( temp_grow, c2f(temp_grow), humidity_grow ) ) except RuntimeError as error: # Errors happen fairly often, DHT's are hard to read, just keep going print(&quot;Warning (DHT grow): &quot; + error.args[0]) sleep(2) # sleep(1) for DHT11 and sleep(2) for DHT22 pass except Exception as error: global dht_grow_area_error print(&quot;DHT grow sensor fatal error!&quot;) dht_grow_area_error = True dhtDevice_grow_area.exit() raise error sleep(2) def get_temp_humidity_root_chamber(): &quot;&quot;&quot;Measure the temperature and humidity of the roots chamber.&quot;&quot;&quot; global killswitch while killswitch==False: try: # Update global temp value and humidity global temp_roots global humidity_roots temp_roots = dhtDevice_roots_chamber.temperature humidity_roots = dhtDevice_roots_chamber.humidity # For development process print( &quot;Root chamber: {:.1f} C / {:.1f} F Humidity: {}% &quot;.format( temp_roots, c2f(temp_roots), humidity_roots ) ) except RuntimeError as error: # Errors happen fairly often, DHT's are hard to read, just keep going print(&quot;Warning (DHT root): &quot; + error.args[0]) sleep(2) # sleep(1) for DHT11 and sleep(2) for DHT22 pass except Exception as error: global dht_root_chamber_error print(&quot;DHT root sensor fatal error!&quot;) dht_root_chamber_error = True dhtDevice_roots_chamber.exit() raise error sleep(2) def get_pressure_root_chamber(): &quot;&quot;&quot;Gets the pressure from the BMP280 device. This device can also measure temperature (more precise than DHT22) and altitude. 
&quot;&quot;&quot; global killswitch while killswitch==False: #temperature = bmp280.temperature() global pressure_roots pressure_roots = bmp280.pressure # For development process print( &quot;Pressure roots: {:.1f} hPa &quot;.format( pressure_roots ) ) #print('Pressure: {} hPa'.format(pressure_roots)) sleep(2) def relay_fogger_control(): &quot;&quot;&quot;Fogger on for 45 min and off for 15. Perpetual mode unless kill_processes() is activated&quot;&quot;&quot; global killswitch global fogger_on_seconds, fogger_off_seconds while killswitch==False: GPIO.output(relay_fogger, GPIO.HIGH) sleep(10) #sleep(fogger_on_seconds) GPIO.output(relay_fogger, GPIO.LOW) sleep(10) #sleep(fogger_off_seconds) def relay_heatled_control(): &quot;&quot;&quot;Heat LED controller. When is it too hot for the crops? Sleep interval? Perpetual mode unless kill_processes() is activated&quot;&quot;&quot; global killswitch global led_on_seconds, led_off_seconds while killswitch==False: GPIO.output(relay_led, GPIO.HIGH) sleep(20) #sleep(led_on_seconds) GPIO.output(relay_led, GPIO.LOW) sleep(10) #sleep(led_off_seconds) def relay_oxygen_control(): &quot;&quot;&quot;Oxygen maker. Perpetual mode unless kill_processes() is activated or barometric pressure is too high.&quot;&quot;&quot; global killswitch while killswitch==False: GPIO.output(relay_oxygen, GPIO.HIGH) sleep(15) #sleep(oxygen_on_seconds) GPIO.output(relay_oxygen, GPIO.LOW) #sleep(oxygen_off_seconds) sleep(25) def reset_clear_lcd(): &quot;&quot;&quot;Move cursor to (0,0) and clear the screen&quot;&quot;&quot; lcd.home() lcd.clear() def get_rpi_temp(): &quot;&quot;&quot;Get Rapsberry Pi internal temperature&quot;&quot;&quot; global rpi_internal_temp rpi_internal_temp = CPUTemperature().temperature def c2f(temperature_c): &quot;&quot;&quot;Convert Celsius to Fahrenheit&quot;&quot;&quot; return temperature_c * (9 / 5) + 32 def lcd_display_data_controller(): &quot;&quot;&quot;Display various measurments and data on the small LCD. 
Switch every (display_info_delay) seconds.&quot;&quot;&quot; global killswitch, display_info_delay global dht_fog_chamber_error, dht_grow_area_error, dht_root_chamber_error global temp_roots, humidity_roots, pressure_roots global temp_grow, humidity_grow global temp_nutrient_solution, humidity_nutrient_mist while killswitch==False: reset_clear_lcd() # For testing purpose #lcd.message(&quot;New round.&quot;) #sleep(display_info_delay) #reset_clear_lcd() # Root temperature and humidity if dht_root_chamber_error==False: lcd.message = ( &quot;R1: {:.1f}C/{:.1f}F \nHumidity: {}% &quot;.format( temp_roots, c2f(temp_roots), humidity_roots ) ) sleep(display_info_delay) reset_clear_lcd() else: lcd.message = (&quot;ERROR: DHT root\nchamber!&quot;) sleep(display_info_delay) reset_clear_lcd() # Root pressure lcd.message = ( &quot;Root pressure:\n{:.1f} hPa&quot;.format( pressure_roots ) ) sleep(display_info_delay) reset_clear_lcd() # Crop grow area temperature and humidity if dht_grow_area_error==False: lcd.message = ( &quot;G:{:.1f}C/{:.1f}F \nHumidity: {}% &quot;.format( temp_grow, c2f(temp_grow), humidity_grow ) ) sleep(display_info_delay) reset_clear_lcd() else: lcd.message = (&quot;ERROR: DHT grow\narea!&quot;) sleep(display_info_delay) reset_clear_lcd() # Nutrient liquid temperature lcd.message = ( &quot;F nutrient temp.\n{:.1f}C/{:.1f}F &quot;.format( temp_nutrient_solution, c2f(temp_nutrient_solution) ) ) sleep(display_info_delay) reset_clear_lcd() # Nutrient mist temperature and humidity if dht_fog_chamber_error==False: lcd.message = ( &quot;F: {:.1f}C/{:.1f}F \nHumidity: {}% &quot;.format( temp_nutrient_mist, c2f(temp_nutrient_mist), humidity_nutrient_mist ) ) sleep(display_info_delay) reset_clear_lcd() else: lcd.message = (&quot;ERROR: DHT fog\nchamber!&quot;) sleep(display_info_delay) reset_clear_lcd() # Raspberry Pi internal temperature lcd.message = ( &quot;R Pi (int. 
temp): \n{:.1f}C/{:.1f}F &quot;.format( rpi_internal_temp, c2f(rpi_internal_temp) ) ) sleep(display_info_delay) reset_clear_lcd() def kill_processes(): &quot;&quot;&quot;ToDo: A button must be pressed which gracefully kills all processes preparing for shutdown.&quot;&quot;&quot; # Power off machines GPIO.output(relay_fogger, GPIO.LOW) GPIO.output(relay_led, GPIO.LOW) GPIO.output(relay_oxygen, GPIO.LOW) # Joined the threads / stop the threads after killswitch is true - Shutdown order is very important t1.join() t2.join() t3.join() t4.join() t5.join() t6.join() t8.join() t9.join() t10.join() t7.join() # display thread last to die #tx.join() reset_clear_lcd() # Stop message lcd.message = 'Full stop.\nOk to shut down.' # GPIO clearing GPIO.cleanup() ################################### # Thread setup - Startup order is important ################################### #tx = threading.Thread(target=xx, args=(killswitch,sleep_fogger,)) #tx = threading.Thread(target=killswitch_button) t1 = threading.Thread(target=get_rpi_temp) t2 = threading.Thread(target=get_temp_nutrient_solution) t3 = threading.Thread(target=get_temp_humidity_nutrient_mist) t4 = threading.Thread(target=get_temp_humidity_root_chamber) t5 = threading.Thread(target=get_pressure_root_chamber) t6 = threading.Thread(target=get_temp_humidity_grow_area) t7 = threading.Thread(target=lcd_display_data_controller) t8 = threading.Thread(target=relay_fogger_control) t9 = threading.Thread(target=relay_oxygen_control) t10 = threading.Thread(target=relay_heatled_control) # Start the threads t1.start() t2.start() t3.start() t4.start() t5.start() t6.start() sleep(2) # Give everything a bit extra time before the LCD starts displaying data t7.start() t8.start() t9.start() t10.start() #tx.start() ################################### # Code main process--- ################################### while not killswitch: sleep(1) ################################### # Graceful exit kill_processes() ################################### 
</code></pre> <p>Now, the relevant parts are below.</p> <pre><code># Define the killswitch push button GPIO.setup(12, GPIO.IN, pull_up_down=GPIO.PUD_DOWN) def button_callback(channel): &quot;&quot;&quot;When the button is pressed.&quot;&quot;&quot; global killswitch # Due to serious issues with false positives regarding button being pushed start_time = time.time() while GPIO.input(channel) == 0: # Wait for button up pass buttonTime = time.time() - start_time if buttonTime &gt;= .1: print(&quot;###########\nKillswitch button was pressed!\n###########&quot;) killswitch = True # Init the button GPIO.add_event_detect(12, GPIO.RISING, callback=button_callback, bouncetime=150) #GPIO.add_event_detect(12, GPIO.FALLING, callback=button_callback, bouncetime=500) def kill_processes(): &quot;&quot;&quot;ToDo: A button must be pressed which gracefully kills all processes preparing for shutdown.&quot;&quot;&quot; # Power off machines GPIO.output(relay_fogger, GPIO.LOW) GPIO.output(relay_led, GPIO.LOW) GPIO.output(relay_oxygen, GPIO.LOW) # Joined the threads / stop the threads after killswitch is true - Shutdown order is very important t1.join() t2.join() t3.join() t4.join() t5.join() t6.join() t8.join() t9.join() t10.join() t7.join() # display thread last to die #tx.join() reset_clear_lcd() # Stop message lcd.message = 'Full stop.\nOk to shut down.' 
# GPIO clearing GPIO.cleanup() ################################### # Thread setup - Startup order is important ################################### #tx = threading.Thread(target=xx, args=(killswitch,sleep_fogger,)) #tx = threading.Thread(target=killswitch_button) t1 = threading.Thread(target=get_rpi_temp) t2 = threading.Thread(target=get_temp_nutrient_solution) t3 = threading.Thread(target=get_temp_humidity_nutrient_mist) t4 = threading.Thread(target=get_temp_humidity_root_chamber) t5 = threading.Thread(target=get_pressure_root_chamber) t6 = threading.Thread(target=get_temp_humidity_grow_area) t7 = threading.Thread(target=lcd_display_data_controller) t8 = threading.Thread(target=relay_fogger_control) t9 = threading.Thread(target=relay_oxygen_control) t10 = threading.Thread(target=relay_heatled_control) # Start the threads t1.start() t2.start() t3.start() t4.start() t5.start() t6.start() sleep(2) # Give everything a bit extra time before the LCD starts displaying data t7.start() t8.start() t9.start() t10.start() #tx.start() ################################### # Code main process--- ################################### while not killswitch: sleep(1) ################################### # Graceful exit kill_processes() ################################### </code></pre> <p>Can anyone see where the error lies in or what I must change in order to make this work? As it is now, if I push the button nothing happens, no matter if I push it briefly or hold it.</p>
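One thing worth checking in `button_callback`: with `PUD_DOWN` wiring and a `RISING` edge, the pin reads 1 while the button is held, so `while GPIO.input(channel) == 0` exits immediately, `buttonTime` is roughly 0, and the 0.1 s threshold is never reached — consistent with "nothing happens". A hedged software-side sketch (kept free of `RPi.GPIO` so it can be tested off the Pi) that both fixes the polarity and filters single-sample glitches by requiring several consecutive high reads:

```python
import time

def confirmed_press(read_pin, samples=5, interval=0.01):
    """Return True only if read_pin() reports pressed (truthy) for
    `samples` consecutive polls spaced `interval` seconds apart."""
    for _ in range(samples):
        if not read_pin():
            return False  # any low reading rejects the event as a glitch
        time.sleep(interval)
    return True

# Hypothetical use inside button_callback:
#   if confirmed_press(lambda: GPIO.input(channel)):
#       killswitch = True
```

Electrically, a long unshielded lead on GPIO 12 is a classic source of phantom rising edges even with the internal pull-down enabled; a physical pull-down resistor and a short wire help as much as the software filter.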
<python><multithreading><button><raspberry-pi><false-positive>
2023-07-02 14:45:16
0
517
Emperor 2052
76,599,086
9,744,061
Title and axis label overlapping in python
<p>I have a problem when plotting subplots: the title and axis label overlap. This is my code:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt x=np.linspace(-np.pi,np.pi,50,endpoint=True) plt.figure() plt.subplot(211) plt.plot(x,np.sin(x),'r-') plt.title('y=sin(x)') plt.xlabel('x') plt.ylabel('y') plt.grid() plt.subplot(212) plt.plot(x,np.cos(x),'b-') plt.title('y=cos(x)') plt.xlabel('x') plt.ylabel('y') plt.grid() plt.show() </code></pre> <p>This is the result:</p> <p><a href="https://i.sstatic.net/YRBNh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YRBNh.png" alt="enter image description here" /></a></p> <p>Now I want the title of the second subplot not to overlap with the x label of the first subplot. How can I do that?</p>
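The standard fix is `tight_layout` (or `constrained_layout`), which recomputes subplot spacing so the second title no longer collides with the first x label. A sketch of the same two plots, using the `Agg` backend so it runs headless:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for the sketch
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-np.pi, np.pi, 50, endpoint=True)
fig, (ax1, ax2) = plt.subplots(2, 1)
ax1.plot(x, np.sin(x), 'r-')
ax1.set(title='y=sin(x)', xlabel='x', ylabel='y')
ax1.grid(True)
ax2.plot(x, np.cos(x), 'b-')
ax2.set(title='y=cos(x)', xlabel='x', ylabel='y')
ax2.grid(True)
fig.tight_layout()  # inserts the missing vertical space between the subplots
```

Alternatively, `plt.subplots(2, 1, layout='constrained')` achieves the same in recent matplotlib versions.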
<python><matplotlib><plot><overlap>
2023-07-02 13:17:24
0
305
Ongky Denny Wijaya
76,598,838
9,975,452
Process from Multiprocessing not working well
<p>I am getting started with <code>multiprocessing</code> and tried a simple piece of code using <code>Process</code>; however, the result differs between <code>Colab</code> and my local PC. It seems that on my PC (using Windows and <code>Jupyter lab</code> via Anaconda) the process doesn't start. The code is:</p> <pre><code>from multiprocessing import Process def teste(a): print(a) if __name__ == '__main__': print('ok') p = Process(target=teste, args=(1,)) p.start() p.join() print('b') print(p) </code></pre> <p>The result when running on Colab is:</p> <p>&quot;ok <br /> 1<br /> b <br /> Process name='Process-10' pid=3765 parent=493 stopped exitcode=0&gt;&quot;</p> <p>However, if I run it on my PC it is:</p> <p>&quot;ok <br /> b <br /> Process name='Process-10' pid=27844 parent=10844 stopped exitcode=0&gt;&quot;</p> <p>It seems the command <code>p.start()</code> is not working. What can it be?</p>
<python><parallel-processing><multiprocessing><python-multiprocessing><multiprocess>
2023-07-02 12:16:10
0
470
Oalvinegro
76,598,777
10,119,785
Django - multiple pages using one form
<p>There's a number of questions regarding one page handling multiple forms, but I seem to have the reverse issue. I have two pages that contain <code>datatables</code> showing filtered data from my model (both from the same source data), and every object in both tables, on both pages, has the same &quot;edit&quot; button bringing the user to the same form allowing them to edit the object details. My problem is redirecting from the form back to the original page. I have tried using <code>HttpResponseRedirect</code> but that doesn't seem to be working; it goes to my homepage every time.</p> <pre><code>def edit_drawing(request, drawing_no): drawing = get_object_or_404(Drawings, pk=drawing_no) if request.method == &quot;POST&quot;: form = DrawingForm(request.POST, instance=drawing) if form.is_valid(): form.save() messages.success(request, f'Drawing {drawing.drawing_no} was successfully updated.') return HttpResponseRedirect(request.META.get('HTTP_REFERER', '/')) else: form = DrawingForm(instance=drawing) return render(request, 'edit_drawing.html', {'form': form, 'drawing': drawing}) </code></pre> <p>I thought I could do something like this where my tag in the original html would tell the redirect where to go if I passed a variable:</p> <p><code>&lt;a href=&quot;{% url 'schedule:edit_drawing' drawing_no=your_drawing_no %}?next={{ request.path }}&quot;&gt;Edit Drawing&lt;/a&gt;</code></p> <p>But when I try to use <code>return redirect(next_url)</code> in the view, <code>next_url</code> is undefined and doesn't work.</p> <p>Does anyone have a best practice to handle something like this? I would like to try and avoid individual forms for each page that are performing the same function.</p>
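The common pattern here (a sketch; helper name is mine) is to carry the originating page in a `next` query parameter end to end: the template already appends `?next={{ request.path }}`, so the view reads it from `request.GET` on the GET request, re-emits it in the form (e.g. as a hidden input), reads it back from `request.POST` on submit, and falls back to a default when it is missing. The fallback logic reduces to:

```python
def resolve_redirect_target(params, fallback='/'):
    """Pick the post-save redirect target: the 'next' parameter if present
    and non-empty, otherwise a fallback URL.

    `params` stands in for request.GET or request.POST (hypothetical helper).
    """
    target = params.get('next')
    return target if target else fallback

# Hypothetical use in the view:
#   next_url = request.POST.get('next') or request.GET.get('next')
#   return redirect(resolve_redirect_target({'next': next_url}))
```

In a real Django view, the `next` value should also be validated (Django provides `django.utils.http.url_has_allowed_host_and_scheme`) so user-supplied URLs cannot turn the view into an open redirect.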
<python><django><forms><http-redirect>
2023-07-02 12:01:20
1
1,482
Chris Macaluso
76,598,380
3,097,821
tkinter background color will not change on Mac M1
<p>I haven't a clue why not but this won't change the background color</p> <pre><code>import tkinter as tk class Application: def __init__(self): self.root = tk.Tk() self.root.title(&quot;Swiperino Desktop&quot;) self.root.geometry(&quot;400x250&quot;) self.root.configure(background=&quot;white&quot;) self.root['background'] = &quot;white&quot; def run(self): self.root.mainloop() if __name__ == &quot;__main__&quot;: app = Application() app.run() </code></pre>
<python><tkinter><tk-toolkit>
2023-07-02 10:10:21
0
2,487
Trevor
76,598,322
10,027,628
Pytest in docker runs tests of venv files
<p>I am following <a href="https://testdriven.io/courses/tdd-fastapi/pytest-setup/" rel="nofollow noreferrer">https://testdriven.io/courses/tdd-fastapi/pytest-setup/</a>, but when running</p> <pre><code>docker-compose exec web python -m pytest </code></pre> <p>for the first time, I get</p> <pre><code>collected 212 items / 24 errors </code></pre> <p>instead of the expected 0 items.</p> <p>The short test summary info shows, among others:</p> <pre><code>ERROR env/Lib/site-packages/h11/tests/test_against_stdlib_http.py ERROR env/Lib/site-packages/h11/tests/test_connection.py ERROR env/Lib/site-packages/h11/tests/test_events.py ERROR env/Lib/site-packages/h11/tests/test_headers.py </code></pre> <p>so I believe the env folder in my project is being copied into the container; however, my .dockerignore file is present in the project and contains the following four lines:</p> <pre><code>env .dockerignore Dockerfile Dockerfile.prod </code></pre> <p>I committed my current progress to this <a href="https://github.com/christophhillisch/TDD-FastApi/" rel="nofollow noreferrer">GitHub Repo</a> if you want to take a look.</p> <p>Does anyone have an idea what I am doing wrong?</p>
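One point worth noting: `.dockerignore` only filters the image *build context*. If `docker-compose.yml` bind-mounts the project directory into the container (as this course's setup does), a local `env/` folder reappears at runtime regardless of the ignore file. A hedged workaround is to constrain pytest's collection so it never descends into the virtualenv (paths below are guesses; adjust to the repo layout):

```ini
; pytest.ini (or the [tool:pytest] section of setup.cfg)
[pytest]
testpaths = project/tests
norecursedirs = env .venv build dist
```

The more thorough fix is to create the virtualenv outside the bind-mounted tree, but the collection config keeps `python -m pytest` correct either way.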
<python><docker><pytest><fastapi>
2023-07-02 09:52:28
1
377
Christoph H.
76,598,241
12,193,952
How to plot multiple dataframes with different lengths into one plot
<h2>The question</h2> <p>I have a lot of dataframes (approx. 300) that I would like to plot into one line chart. The issue is, they all have different a number of values (i.e. lengths). How can I plot the data frames such that they all start and end at the same point (on the x-axis) in the figure?</p> <p>I can convert data frames to lists or series or whatsoever.</p> <h2>Example</h2> <ul> <li>df1: <code>[5, 3, 10, 7, 10...]</code></li> <li>df2: <code>[2, 4, 5, 7, 2, 1, 3, 0, 1]</code></li> <li>adjusted df2: that desired state - same len as df1</li> </ul> <pre><code>df1 df2 adjusted df2 5 2 2 3 2 10 2 7 4 4 10 4 5 4 8 6 6 6 6 5 6 6 7 7 2 7 1 7 5 2 2 3 2 6 2 9 1 1 9 1 7 1 10 3 3 2 3 7 3 7 0 0 6 0 1 0 6 1 1 9 1 </code></pre> <h2>Example with datetimes</h2> <pre class="lang-py prettyprint-override"><code> datetime value 5448 2020-01-19 22:05:00 166.300003 5449 2020-01-19 22:10:00 165.259995 5450 2020-01-19 22:15:00 164.699997 5451 2020-01-19 22:20:00 165.380005 5452 2020-01-19 22:25:00 166.179993 5453 2020-01-19 22:30:00 162.630005 5424 2020-01-19 22:35:00 162.550003 5425 2020-01-19 22:40:00 161.990005 5426 2020-01-19 22:45:00 161.750000 5427 2020-01-10 22:50:00 161.440002 </code></pre> <pre class="lang-py prettyprint-override"><code> datetime value 15900 2020-02-25 11:55:00 262.510010 15901 2020-02-25 12:00:00 263.179993 15902 2020-02-25 12:05:00 262.260010 15903 2020-02-25 12:10:00 261.959991 15904 2020-02-25 12:15:00 262.179993 15905 2020-02-25 12:20:00 261.299988 15906 2020-02-25 12:25:00 261.579987 15907 2020-02-25 12:30:00 261.890015 15908 2020-02-25 12:35:00 262.820007 15909 2020-02-25 12:40:00 262.010010 15910 2020-02-25 12:45:00 261.630005 15911 2020-02-25 12:50:00 261.109985 15912 2020-02-25 12:55:00 261.149994 15913 2020-02-25 13:00:00 260.679993 15914 2020-02-25 13:05:00 261.929993 15915 2020-02-25 13:10:00 260.880005 15916 2020-02-25 13:15:00 259.929993 </code></pre> <pre class="lang-py prettyprint-override"><code> datetime value 16407 2020-02-27 06:10:00 
224.860001 16408 2020-02-27 06:15:00 224.240005 16409 2020-02-27 06:20:00 223.610001 16410 2020-02-27 06:25:00 223.490005 16411 2020-02-27 06:30:00 223.199997 </code></pre> <p>So the plot will look like this <a href="https://i.sstatic.net/RwGa4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RwGa4.png" alt="" /></a></p> <h2>Possible options</h2> <h3>1. Interpolate values</h3> <p>The idea is similar to the example above, but I am not sure how to &quot;shift the values&quot; so that the data frame lengths will match.</p> <h3>2. Use a different x-axis for each data frame</h3> <p>Not sure how exactly to do this yet.</p>
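Option 2 can be done without touching the data at all: give every frame its own x vector, normalised to the same 0–1 span, so all lines start and end at the same x positions regardless of length. A sketch (the `value` column name is taken from the examples above):

```python
import numpy as np

def normalized_x(n):
    """Map a series of length n onto a shared 0..1 x-axis."""
    return np.linspace(0.0, 1.0, n)

# Hypothetical plotting loop over ~300 frames:
#   for df in dataframes:
#       ax.plot(normalized_x(len(df)), df['value'])

x = normalized_x(5)
print(x)  # endpoints are 0.0 and 1.0 regardless of n
```

Each series is then plotted against its own fractional progress, which is exactly "a different x-axis for each data frame" with no interpolation or resampling needed.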
<python><pandas><matplotlib>
2023-07-02 09:28:15
4
873
FN_
76,598,071
17,896,651
Best way to hold a "version" of specific parameters on a Django model
<p>I want to do a &quot;version control&quot; kind of thing.</p> <p>I have a Django model called &quot;Person&quot;.</p> <p>I also have a method for updating every person's &quot;gender&quot; and &quot;language&quot; according to a &quot;detection&quot; version. This method improves from version to version. I don't want to re-detect the gender of a person already at the latest version, so as not to overuse resources; each detection costs ...</p> <p>My method needs to run over all persons with gender version 1 or language version 1 and update the person's language and gender to version 2 (run the method).</p> <p>So the simple version is:</p> <pre><code>class Person(models.Model): gender = models.CharField(max_length=50, blank=True, null=True) gender_version = int ... language = models.ManyToManyField( &quot;instagram_data.Language&quot;, verbose_name=_(&quot;language_code_name&quot;), blank=True) language_version = int ... </code></pre> <p>But I also have 20 more filters (like gender and language), so I would prefer to have one attribute named version, so I could also filter in a simple way. If I use a ManyToManyField to a Version class, it will be slow to filter, right?</p> <pre><code>class Person(models.Model): gender = models.CharField(max_length=50, blank=True, null=True) language = models.ManyToManyField( &quot;instagram_data.Language&quot;, verbose_name=_(&quot;language_code_name&quot;), blank=True) 
version_control = models.ManyToManyField(DetectionVersion ...) class DetectionVersion(models.Model): gender_version = int ... language_version = int .. </code></pre> <p>Is the second approach fast, and is it a good solution?</p>
2023-07-02 08:37:14
0
356
Si si
76,597,654
1,383,511
How to import a file so that if I run files in different directories within the project each will get the correct import?
<p>This is a question about python imports.</p> <p>Suppose I have a folder structure like this:</p> <pre><code>Project1 | |--app |--__init__.py |--app.py |--debug_routines |--__init__.py |--debug.py |--main.py </code></pre> <p>I want to be able to run main.py and app.py.... and any other file in the project potentially and the import to just work?</p> <p>main.py</p> <pre><code>from app.app import App from debug_routines.debug import dprint import os def main(): dprint(&quot;main start&quot;) app = App() app.mainloop() dprint(&quot;main end&quot;) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>app.py</p> <pre><code>import tkinter as tk #from debug_routines.debug import dprint ## This one works if running main.py from parent directory. #from ..debug_routines.debug import dprint ## Does not work when ran from app.py from debug_routines.debug import dprint class App(tk.Tk): #class App: def __init__(self): dprint(&quot;app.init start&quot;) super().__init__() dprint(&quot;app.init end&quot;) def main(): dprint(&quot;Start of app.main()&quot;) app = App() app.mainloop() dprint(&quot;End of app.main()&quot;) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>debug.py</p> <pre><code>import os def dprint(theString): if __debug__: print(&quot;dprint(&quot; + os.path.basename(__file__) + &quot;: &quot; + theString + &quot; )&quot;) </code></pre> <p>My question is about imports as it pertains to running <code>main.py</code> and <code>app.py</code>. I can setup the import in app.py so that when I run <code>main.py</code> the import works but if I then run <code>app.py</code> the import fails.</p> <p>Is there an import syntax that works so that in a project I can write the import in such a way that I can run different files and they still get the import correctly?</p> <p>As an example when I have the <code>app.py</code> import for the debug_routine set to <code>from debug_routines.debug import dprint</code>, if I run main it works. 
If I run <code>app.py</code> I get:</p> <pre><code>Traceback (most recent call last): File &quot;/media/sdb1_hdd_1tb/Projects/Python/Tkinter/PictureApp/src/app/app.py&quot;, line 6, in &lt;module&gt; from debug_routines.debug import dprint ModuleNotFoundError: No module named 'debug_routines' </code></pre> <p>I have also tried putting an <code>__init__.py</code> file in the <code>Project1</code> folder and it did not do anything.</p> <p>Edit:</p> <p>Am I just better off creating a separate <code>test.py</code> file for each dir/<code>module</code>? Maybe go with the <code>if/else if</code> import statement? I could even minimize the <code>if/else if</code> by just including an if in the app.py file in the header section, and in the main function I could do the relative imports?</p> <p>It seems like the import system is not very robust, which surprises me.</p>
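One common workaround (a sketch, not the only idiom; the helper name and path layout are illustrative assumptions) is to pin the project root onto <code>sys.path</code> before the package imports, so the same absolute imports resolve from any entry point:

```python
import os
import sys

def ensure_project_root(file_path):
    """Put the project root (the parent of this file's directory) at the
    front of sys.path so absolute imports like `debug_routines.debug`
    resolve no matter which file is the entry point."""
    root = os.path.dirname(os.path.dirname(os.path.abspath(file_path)))
    if root not in sys.path:
        sys.path.insert(0, root)
    return root

# At the top of app/app.py, before any package imports:
# ensure_project_root(__file__)
# from debug_routines.debug import dprint
```

The path-free alternative is to always launch files as modules from the project root (`python -m app.app`, `python -m main`), which keeps the root on `sys.path` without any hacking.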
<python><python-3.x><syntax>
2023-07-02 06:28:16
1
516
NDEthos
76,597,644
3,014,385
Telegram message will go to my personal chat id but not to a group id or id of another user
<p>I am trying to send messages to a group or a user that's not me using the following code</p> <pre><code>import requests token = &quot;my_bot_token&quot; url = f&quot;https://api.telegram.org/bot{token}&quot; Index = &quot;Hola&quot; myPersonalId = &quot;my_personal_id&quot; params = {&quot;chat_id&quot;: myPersonalId, &quot;text&quot;: Index} #John r = requests.get(url + &quot;/sendMessage&quot;, params=params) JosephId = &quot;Joseph_ID&quot; params = {&quot;chat_id&quot;: JosephId, &quot;text&quot;: Index} #Joseph r = requests.get(url + &quot;/sendMessage&quot;, params=params) </code></pre> <p>The first one works but the second one doesn't. Other answers I have seen make it sound pretty straightforward: as long as you have the id, there should be no problem. P.S. The code executes without raising any error.</p>
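For what it's worth, Telegram bots cannot start a conversation: <code>sendMessage</code> to a user id only succeeds after that user has messaged the bot at least once (and for a group, after the bot has been added to it). Since <code>requests.get</code> does not raise on a 400 response, the failure is easy to miss; a sketch that surfaces the API's own error description (function names here are illustrative):

```python
import requests

def check_response(payload):
    """Raise with Telegram's own description instead of failing silently."""
    if not payload.get("ok"):
        # Typical descriptions: "Bad Request: chat not found" (the user has
        # never started the bot) or "Forbidden: bot was blocked by the user".
        raise RuntimeError(payload.get("description", "unknown Telegram error"))
    return payload["result"]

def send_message(token, chat_id, text):
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    r = requests.get(url, params={"chat_id": chat_id, "text": text})
    return check_response(r.json())
```

If the description comes back as "chat not found", the fix is on Joseph's side: he needs to open the bot and press Start once, after which his chat id becomes reachable.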
<python><python-3.x>
2023-07-02 06:25:05
1
404
Ankit
76,597,564
5,635,892
Pass a vector to a function input in numpy
<p>I have the following situation:</p> <pre><code>import numpy as np def func(m): M = np.zeros((3,3)) M[1,1] = 2*m return M print(func(2)) print(func([1,2,3])) </code></pre> <p>The first print function gives me what I want (the (1,1) entry in M becomes 4). But the second one gives me <code>ValueError: setting an array element with a sequence.</code>. What I want to do is to print 3 different matrices, one for each value in the array [1,2,3] (so for the 3 matrices the (1,1) entry would be 2, 4 and 6). How can I do this effectively (i.e. without using a for loop)? Thank you!</p>
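One loop-free sketch: let the function accept an array and build a stacked result with broadcasting, so each input element gets its own matrix (the output shape convention below is an assumption about what is wanted):

```python
import numpy as np

def func(m):
    """Return one 3x3 matrix per element of `m`.
    Scalar input -> shape (3, 3); 1-D input of length n -> shape (n, 3, 3)."""
    m = np.asarray(m, dtype=float)
    M = np.zeros(m.shape + (3, 3))
    M[..., 1, 1] = 2 * m  # broadcasts over however many matrices there are
    return M

print(func(2))           # one matrix, with 4.0 at entry (1, 1)
print(func([1, 2, 3]))   # three matrices, with 2.0, 4.0, 6.0 at entry (1, 1)
```

The `...` (Ellipsis) index means "all leading axes", so the same line handles the scalar and the vector case without branching.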
<python><numpy><matrix><valueerror>
2023-07-02 05:52:27
1
719
Silviu
76,597,415
1,692,042
Scatter plot of points from several groups with legend
<p>I need to make a scatter plot of two variables in a data frame. The observations come from three groups, so I want to use three different colors. I also want to include a legend on the plot. Here is a small example.</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt d = {'x': [1, 2, 3, 4, 5, 6, 7], 'y': [3, 5, 7, 2, 4, 1, 8], 'grp': ['a','a','b','a','c','b','c']} data = pd.DataFrame(d) </code></pre> <p>I learned that if each group is plotted separately, I can get what I need.</p> <pre><code>x1 = data.loc[data['grp'] == 'a', 'x'] y1 = data.loc[data['grp'] == 'a', 'y'] x2 = data.loc[data['grp'] == 'b', 'x'] y2 = data.loc[data['grp'] == 'b', 'y'] x3 = data.loc[data['grp'] == 'c', 'x'] y3 = data.loc[data['grp'] == 'c', 'y'] plt.scatter(x1, y1, label = 'a') plt.scatter(x2, y2, label = 'b') plt.scatter(x3, y3, label = 'c') plt.legend() plt.show() </code></pre> <p><a href="https://i.sstatic.net/yVzSq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yVzSq.png" alt="enter image description here" /></a></p> <p>This approach seems to be a little inefficient, especially when the number of groups is big. Another approach I found from stackoverflow is the following.</p> <pre><code>from matplotlib.colors import ListedColormap values = data['grp'].replace(['a','b','c'], [0, 1, 2]) colors = ListedColormap(['r','b','g']) scatter = plt.scatter(data['x'], data['y'], c=values, cmap=colors) plt.legend(handles=scatter.legend_elements()[0], labels=['a', 'b', 'c']) plt.show() </code></pre> <p><a href="https://i.sstatic.net/z8Wej.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z8Wej.png" alt="enter image description here" /></a></p> <p>This is easier since you don't have to do each group separately. However, I don't quite understand <code>handles=scatter.legend_elements()[0]</code>. Is there an easy and intuitive way of doing this? 
I have been an R user before, and this task can easily be done in <code>ggplot</code>, where everything seems to be handled automatically. Thanks for the help!</p>
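A middle ground between the two approaches, with no per-group boilerplate and no <code>legend_elements</code>, is to loop over <code>DataFrame.groupby</code>, which scales to any number of groups (the Agg backend below is only so the sketch runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, just for this sketch
import matplotlib.pyplot as plt
import pandas as pd

d = {'x': [1, 2, 3, 4, 5, 6, 7],
     'y': [3, 5, 7, 2, 4, 1, 8],
     'grp': ['a', 'a', 'b', 'a', 'c', 'b', 'c']}
data = pd.DataFrame(d)

fig, ax = plt.subplots()
for name, group in data.groupby('grp'):  # one scatter call (and color) per group
    ax.scatter(group['x'], group['y'], label=name)
ax.legend()
```

The closest analogue to ggplot's automatic handling is probably seaborn's `sns.scatterplot(data=data, x='x', y='y', hue='grp')`, which builds the colors and legend in one call.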
<python><matplotlib><scatter-plot>
2023-07-02 04:35:32
1
3,577
JACKY88
76,597,346
131,930
What is the problem with my Python3 script's parsing of non-ASCII UTF-8 arguments under Linux?
<p>Consider this trivial Python3 script, which simply prints out the contents of <code>sys.argv</code>, both as-is and encoded into UTF-8:</p> <pre><code>import locale import sys print(&quot;filesystem encoding is: &quot;, sys.getfilesystemencoding()) print(&quot;local preferred encoding is: &quot;, locale.getpreferredencoding()) print(&quot;sys.argv is:&quot;) print(sys.argv) for a in sys.argv: print(&quot;Next arg is: &quot;, a) print(&quot;UTF-8 encoding of arg is: &quot;, a.encode()) </code></pre> <p>If I run this script (via Python 3.11.4) on my Mac (OSX/Ventura 13.3.1/Intel), with an argument that includes <a href="https://www.fileformat.info/info/unicode/char/2019/index.htm" rel="nofollow noreferrer">a non-ASCII UTF-8 character</a>, I get the expected results:</p> <pre><code>$ python /tmp/supportfiles/test.py Joe’s filesystem encoding is: utf-8 local preferred encoding is: UTF-8 sys.argv is: ['./test.py', 'Joe’s'] Next arg is: ./test.py UTF-8 encoding of arg is: b'./test.py' Next arg is: Joe’s UTF-8 encoding of arg is: b'Joe\xe2\x80\x99s' </code></pre> <p>However, if I run the same command with the same arguments under Linux (Ubuntu 3.19.0, Linux, Python 3.7.0), things go wrong and the script throws a <code>UnicodeEncodeError</code> exception:</p> <pre><code>filesystem encoding is: utf-8 local preferred encoding is: UTF-8 sys.argv is: ['./test.py', 'Joe\udce2\udc80\udc99s'] Next arg is: ./test.py UTF-8 encoding of arg is: b'./test.py' Next arg is: Joe’s Traceback (most recent call last): File &quot;./test.py&quot;, line 13, in &lt;module&gt; print(&quot;UTF-8 encoding of arg is: &quot;, a.encode()) UnicodeEncodeError: 'utf-8' codec can't encode characters in position 3-5: surrogates not allowed </code></pre> <p>My question is, is this a bug in Python or in my Linux box's localization environment, or am I doing something wrong?</p> <p>And, is there anything I can to do get my script to correctly handle command line arguments containing non-ASCII characters on 
all OS's?</p>
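The `\udcXX` characters are CPython's surrogateescape encoding of argv bytes it could not decode. The likely culprit is the Linux box's locale (e.g. `LANG`/`LC_ALL` set to `C` rather than a UTF-8 locale), so the OS-level fix is exporting a UTF-8 locale before running the script. Inside the script, the original bytes can still be recovered portably by re-encoding with the same error handler (a sketch):

```python
import sys

def argv_bytes(arg):
    """Undo CPython's surrogateescape decoding of sys.argv: re-encoding with
    the same error handler round-trips the original bytes instead of raising
    UnicodeEncodeError on the lone surrogates."""
    return arg.encode("utf-8", "surrogateescape")

for a in sys.argv:
    raw = argv_bytes(a)
    print("arg bytes:", raw)                           # always safe
    print("as text:", raw.decode("utf-8", "replace"))  # best-effort display
```

If the recovered bytes are valid UTF-8 (as here), `raw.decode("utf-8")` then yields the clean string on both machines.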
<python><linux><macos><unicode><utf-8>
2023-07-02 03:54:10
1
73,884
Jeremy Friesner
76,597,186
12,172,744
HashMap implementation such that inputs less than some specified hamming distance away from a key map to the same bucket as that key?
<p>Okay, this one is a bit of a doozy, but here goes...</p> <p>‎</p> <p>I have computed <a href="https://en.wikipedia.org/wiki/Perceptual_hashing" rel="nofollow noreferrer">perceptual hashes</a> for some amount of images where I wish to count occurrences of near-duplicates.</p> <p>The way this is currently being done is by throwing every hash into a <code>HashMap</code>, incrementing the associated value if a hash already exists as key, otherwise adding it as a unique key with a value of 1. <em>(values representing the observed count)</em></p> <p>An issue with this approach is that some images are only <em>similar</em>, and therefore do not produce the same hash, leading to this not being accurately reflected in the counts, which was expected.</p> <p><em>(the defining property of such a hash function principally being that similar images produce alike, but not identical, hashes)</em></p> <p>‎</p> <p>The most straight-forward way of accomplishing this would, of course, be to compute the <a href="https://en.wikipedia.org/wiki/Hamming_distance" rel="nofollow noreferrer">hamming distances</a> between new inputs and every already existing key ﹘ returning the value for a key, if one exists within the threshold, and otherwise just use the input as a unique key. <em><strong>(this is not what I'm looking for)</strong></em> ‎‎ ‎</p> <p>‎</p> <p>I am wondering if there is a way to design a <a href="https://en.wikipedia.org/wiki/Locality-sensitive_hashing" rel="nofollow noreferrer">locally sensitive hash function</a> for a <code>HashMap</code> such that inputs less than some specified <a href="https://en.wikipedia.org/wiki/Hamming_distance" rel="nofollow noreferrer">hamming distance</a> away from each-other will produce the same output? 
<em>(intentional hash collision)</em></p> <p>The inputs to this hash function would always be a guaranteed constant size of <code>64 bits</code> <em>(the perceptual hashes)</em></p> <p>The specific <a href="https://en.wikipedia.org/wiki/Perceptual_hashing" rel="nofollow noreferrer">perceptual hashing algorithm</a> used is irrelevant <em>(and may even change)</em>, but for the sake of simplicity let's assume it's <a href="https://www.hackerfactor.com/blog/index.php?/archives/432-Looks-Like-It.html" rel="nofollow noreferrer">AverageHash</a>, the important part is that the input is always <code>64 bits</code>. ‎ ‎</p> <p>‎</p> <p>I hope this question isn't too confusing since it involves hashing of already hashed values. ‎</p> <p>‎Since the core problem is about efficiently associating <em>similar</em> hashes, the answer does not necessarily have to answer the question, if there already exists some other data structure, or algorithm, for accomplishing the task, I'm happy to hear about those as well.</p>
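One classic structure for exactly this is multi-index hashing: split the 64-bit hash into k bands and bucket on each band. By the pigeonhole principle, two hashes within Hamming distance k − 1 must agree on at least one band, so the bands yield a small candidate set to verify with a real distance check (no single hash function can make *all* near pairs collide exactly, which is why the verification step exists). A sketch, with band count and threshold chosen for illustration:

```python
from collections import defaultdict

def hamming(a, b):
    return bin(a ^ b).count("1")

def bands(h, n_bands=4, width=16):
    """Split a 64-bit hash into n_bands fixed chunks. Pigeonhole: any two
    hashes within Hamming distance n_bands - 1 share at least one band."""
    mask = (1 << width) - 1
    return [(i, (h >> (i * width)) & mask) for i in range(n_bands)]

class NearDupCounter:
    """Count near-duplicate 64-bit perceptual hashes.
    Exact recall holds for threshold <= n_bands - 1 (here 3)."""

    def __init__(self, threshold=3, n_bands=4):
        self.threshold = threshold
        self.n_bands = n_bands
        self.buckets = defaultdict(list)  # (band_index, band_value) -> keys
        self.counts = {}                  # representative hash -> count

    def add(self, h):
        candidates = set()
        for key in bands(h, self.n_bands):
            candidates.update(self.buckets[key])
        for c in candidates:              # verify the few candidates exactly
            if hamming(h, c) <= self.threshold:
                self.counts[c] += 1
                return c
        for key in bands(h, self.n_bands):
            self.buckets[key].append(h)   # new representative
        self.counts[h] = 1
        return h
```

A BK-tree keyed on Hamming distance is the usual alternative; it supports arbitrary thresholds at query time but is a tree search rather than a bucket lookup.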
<python><hash><hashmap><hamming-distance><phash>
2023-07-02 02:23:07
0
364
memeko
76,597,171
1,534,017
How to concatenate dataframes and download as a file without creating a temporary file first?
<p>I build a <code>streamlit</code> app that needs to create a <code>csv</code> file in a very specific format. A minimal example looks like this:</p> <pre><code>import streamlit as st import pandas as pd import numpy as np def main(): df1 = pd.DataFrame(np.random.randint(0, 100, size=(5, 4))) df2 = pd.DataFrame(np.random.randint(0, 100, size=(5, 4))) df3 = pd.DataFrame(np.random.randint(0, 100, size=(5, 4))) di = {'df1': df1, 'df2': df2, 'df3': df3} fn = 'export.csv' with open(fn, 'w+') as f: for k, v in di.items(): f.write(f'{k}\n') v.to_csv(f, header=False, index=False) f.write('\n') with open(fn, 'rb') as file: st.download_button( 'Download file', data=file, file_name=fn, mime='text/csv' ) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>So, for this example it looks like this</p> <p><a href="https://i.sstatic.net/JLbKv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JLbKv.png" alt="enter image description here" /></a></p> <p>While this works, I don't like that I first create a file which I need to store to later open it again.</p> <p>How can I get the exact same output, but without creating this file first? I tried via string concatenation using <code>v.to_string()</code>, but then the output seems space separated and not comma separated.</p>
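The same bytes can be built entirely in memory with <code>io.StringIO</code>: every pandas <code>to_csv</code> call accepts any file-like object, so the loop stays identical and nothing ever touches disk. A sketch with small fixed frames in place of the random ones:

```python
import io

import pandas as pd

di = {'df1': pd.DataFrame([[1, 2], [3, 4]]),
      'df2': pd.DataFrame([[5, 6]])}

buf = io.StringIO()            # in-memory text "file"
for k, v in di.items():
    buf.write(f'{k}\n')
    v.to_csv(buf, header=False, index=False)
    buf.write('\n')

csv_text = buf.getvalue()
print(csv_text)
```

`st.download_button` accepts a plain `str` (or `bytes`) for `data`, so `st.download_button('Download file', data=csv_text, file_name='export.csv', mime='text/csv')` should work with no file and no second `open`.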
<python><pandas><csv><streamlit>
2023-07-02 02:16:24
1
26,249
Cleb
76,597,092
18,551,983
Is there a way to solve the optimizer error?
<p>I have an error in my code; is there a way to solve it? I have initialized my weights</p> <pre><code>weight_1 = nn.Parameter(torch.Tensor(128,128)).cuda() weight_2 = nn.Parameter(torch.Tensor(128,128)).cuda() optimizer = optim.Adam([{'params': model.parameters()}, {'params': [weight_1, weight_2]}], lr=1e-5) </code></pre> <p>Error:</p> <pre><code>ValueError: cannot optimize a non-leaf tensor. </code></pre>
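The <code>.cuda()</code> call is the likely culprit: it returns a copy produced by an autograd op, i.e. a non-leaf tensor, and optimizers only accept leaves. Creating the parameters directly on the target device avoids this. A sketch (<code>torch.Tensor(...)</code> is also uninitialized memory, so an explicit init is used, and the device fallback is only so this runs anywhere):

```python
import torch
import torch.nn as nn
import torch.optim as optim

device = "cuda" if torch.cuda.is_available() else "cpu"

# Build the parameter on the device instead of moving it afterwards:
# nn.Parameter(...).cuda() yields a non-leaf copy, which Adam rejects.
weight_1 = nn.Parameter(torch.empty(128, 128, device=device))
weight_2 = nn.Parameter(torch.empty(128, 128, device=device))
nn.init.xavier_uniform_(weight_1)
nn.init.xavier_uniform_(weight_2)

optimizer = optim.Adam([{"params": [weight_1, weight_2]}], lr=1e-5)
```

Equivalently, moving the raw tensor first and wrapping it last, `nn.Parameter(torch.Tensor(128, 128).cuda())`, also leaves a leaf, since the `Parameter` itself is then never the output of an op.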
<python><optimization><pytorch>
2023-07-02 01:29:03
0
343
Noorulain Islam
76,596,998
5,380,809
Sum values associated with time intervals where intervals overlap in Python
<p>Say I have a pandas data frame where there are time intervals between start and end times, and then a value associated with each interval.</p> <pre><code>import datetime as dt import random import time import numpy as np import pandas as pd def random_date(input_dt=None): if input_dt is None: start = 921032233 else: start = dt.datetime.timestamp(pd.to_datetime(input_dt)) d = random.randint(int(start), int(time.time())) return dt.datetime.fromtimestamp(d).strftime('%Y-%m-%d %H:%M:%S') date_ranges = [] for _ in range(200): date_range = [] for i in range(2): if i == 0: date_range.append(random_date()) else: date_range.append(random_date(date_range[0])) date_ranges.append(date_range) date_ranges_df = pd.DataFrame(date_ranges, columns=['start_dt', 'end_dt']) date_ranges_df['value'] = np.random.random((date_ranges_df.shape[0], 1)) </code></pre> <p>There are two ways I can frame the problem and I would accept either answer.</p> <ol> <li><p>Obtain the sum of every different overlapping interval. Meaning there should be a sum associated with varying (non-overlapping and sequentially complete) time intervals. I.e., if the overlapping time intervals are unchanged for a period of time, the sum would remain unchanged and have a single value; then when the overlapping intervals change in any way (removal or addition of a time interval), a new sum would be calculated. This may involve some self-merge on the table.</p> </li> <li><p>The other (and maybe easier) way would be to define a standard time interval like 1 hour, and ask what is the sum of all overlapping intervals in this hour segment?</p> </li> </ol> <p>The resulting data frame should have a similar structure, with start and end times followed by a value column representing the sum of all values in that interval.</p> <p><a href="https://i.sstatic.net/3uX3q.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3uX3q.jpg" alt="Method 1 Image" /></a></p>
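Framing 1 is a standard sweep line: turn each row into a +value event at <code>start_dt</code> and a −value event at <code>end_dt</code>, sort the events, and take a cumulative sum. Each consecutive pair of event times is then one non-overlapping interval with a constant sum. A sketch, with column names following the frame above:

```python
import pandas as pd

def interval_sums(df):
    """Sweep line over interval boundaries: the running total between two
    consecutive event times is the sum of all intervals covering that span."""
    starts = pd.DataFrame({'time': pd.to_datetime(df['start_dt']), 'delta': df['value']})
    ends = pd.DataFrame({'time': pd.to_datetime(df['end_dt']), 'delta': -df['value']})
    events = pd.concat([starts, ends]).groupby('time')['delta'].sum().sort_index()
    running = events.cumsum()
    return pd.DataFrame({
        'start_dt': running.index[:-1],   # each elementary interval starts at
        'end_dt': running.index[1:],      # one event and ends at the next
        'value': running.to_numpy()[:-1],
    })
```

For framing 2, one hedged route is to resample the same running total onto a fixed grid, e.g. reindex `running` with `pd.date_range(..., freq='1h')` and forward-fill, so each hour bucket reads off the sum in force.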
<python><datetime><intervals>
2023-07-02 00:26:57
1
2,926
conv3d
76,596,926
1,068,223
Pandas ewm conditional variable span
<p>I would like to find the ewm mean() over a variable span. The span varies depending on the sum of another column.</p> <p>Eg let’s say I have two columns A and B</p> <p>I want the exponentially weighted mean of B over the number of periods of B where the sum of A during those periods met some threshold.</p> <p>Let’s say that threshold was &gt;= 4:</p> <p>A B … desired outcome</p> <p>1 2</p> <p>1 2</p> <p>2 3 … ewm of last 3 rows of B</p> <p>4 1 … ewm of last 1 row of B</p> <p>1 1 … ewm of last 2 rows of B</p> <p>3 1 … ewm of last 2 rows of B</p>
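There is no built-in variable-span <code>ewm</code>, so below is a plain O(n²) sketch that recomputes the window per row; the threshold and the "sum A backwards until it reaches the threshold" reading are taken from the example above:

```python
import pandas as pd

def ewm_variable_span(df, threshold=4):
    """For each row, extend the window backwards until column A sums to at
    least `threshold`, then take the ewm mean of B over just that window."""
    out = []
    for i in range(len(df)):
        total, j = 0, i
        while j >= 0:
            total += df['A'].iloc[j]
            if total >= threshold:
                break
            j -= 1
        j = max(j, 0)                      # threshold never met: use all rows so far
        window = df['B'].iloc[j:i + 1]
        out.append(window.ewm(span=len(window)).mean().iloc[-1])
    return pd.Series(out, index=df.index)

df = pd.DataFrame({'A': [1, 1, 2, 4, 1, 3], 'B': [2, 2, 3, 1, 1, 1]})
print(ewm_variable_span(df))
```

For long frames, the backward scan could be replaced by a `searchsorted` on the cumulative sum of A to find each window start in O(log n), but the per-row `ewm` call remains the cost driver.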
<python><pandas>
2023-07-01 23:43:34
1
1,478
BYZZav
76,596,905
14,881,301
How to properly use python async functionality
<p>I barely understand Python's async, and how to use it properly when there are multiple functions being awaited. For example, I have two models:</p> <ul> <li>In model A, function #3 should await function #2, which should await function #1.</li> <li>In model B, there are two independent branches of functions (which could perhaps run in parallel).</li> </ul> <p>I would be grateful if anyone could explain async by solving the two models.</p> <p><a href="https://i.sstatic.net/pAhDk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pAhDk.png" alt="enter image description here" /></a></p>
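Both models fit in one runnable sketch, with sleeps standing in for real I/O. Model A is just sequential awaits (each `await` suspends the caller until the awaited coroutine finishes), while model B uses `asyncio.gather` so the two branches make progress concurrently:

```python
import asyncio

async def f1():
    await asyncio.sleep(0.1)
    return "f1"

async def f2():
    r = await f1()              # model A: f2 waits for f1 to finish
    return r + " -> f2"

async def f3():
    r = await f2()              # and f3 waits for f2: strictly sequential
    return r + " -> f3"

async def branch_a():
    await asyncio.sleep(0.1)
    return "a"

async def branch_b():
    await asyncio.sleep(0.1)
    return "b"

async def main():
    chained = await f3()                                   # model A
    a, b = await asyncio.gather(branch_a(), branch_b())    # model B: concurrent
    return chained, a, b

print(asyncio.run(main()))      # ('f1 -> f2 -> f3', 'a', 'b')
```

The model B line finishes in roughly one sleep's worth of time, not two, because `gather` schedules both branches before awaiting either; that, not threads, is where async's "parallelism" comes from.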
<python><asynchronous><async-await><parallel-processing>
2023-07-01 23:32:20
1
455
zezo
76,596,667
4,941,596
In Django, given an instance of Model A, how to define a relation between two instances of model B?
<p>What (I think) I'm trying to achieve is to create a new column in my database (resp. a new field in my model) for model B every time there is a new instance of model A.</p> <p>To clarify, say I have the following model in Django:</p> <pre><code>class Unit(models.Model): unit = models.CharField(max_length=200) </code></pre> <p>with two instantiations: <code>Kilograms</code> and <code>Volume (m^3)</code>.</p> <p>I'm looking to specify the following class:</p> <pre><code>class Object(models.Model): object = models.CharField(max_length=200) # Field to relate `Volumes` to `Kilograms`, as a float </code></pre> <p>so that I'm able to declare the relation between <code>Kilograms</code> and <code>Volume (m^3)</code> for each specific object. In a way, I'm trying to declare a float that relates the volume for each kilogram of the object (or vice-versa).</p> <p>I could use a workaround and add a <code>FloatField</code> if I knew <code>Kilograms</code> and <code>Volume</code> would be the only instances of Unit, but unfortunately, there might be others.</p> <p>PS: Additional (less important) question: Could I select a default unit, and declare the other ones compared to that default unit? Something like the following:</p> <pre><code>class Object(models.Model): object = models.CharField(max_length=200) default_unit = models.ForeignKey(Unit, on_delete=models.CASCADE) # Field to compare unit_1 to default_unit # Field to compare unit_2 to default_unit # And so on ... </code></pre>
<python><django><django-models><foreign-keys>
2023-07-01 21:40:28
1
397
Igor OA
76,596,651
6,283,102
Kubernetes container secret not recognized by app as an env variable when the app starts, causing it to fail
<p>Im having an issue with deploying my app to my Kubernetes cluster in Digital Ocean and I cannot for the life of me figure out how to solve this issue.</p> <p>I'm creating a python flask api with Celery using cloudamqp to handle tasks for my api routes. All of this works well but the issue has to do with env variables. In my local machine, everything works well. When I deploy my app to my kubernetes cluster, the containers (one for flask-api and the other for api-worker) wont start due to both containers not being able to read the env variables. Ive used the secrets file approach from the kubernetes docs: <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#define-container-environment-variables-using-secret-data" rel="nofollow noreferrer">kubernetes handle secrets</a></p> <p>env.yaml file:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: my-secret namespace: backend type: Opaque data: APP_PORT: &lt;base 64 encoded number&gt; JWT_KEY: &lt;base 64 encoded string&gt; CLOUDAMQP_URL: &lt;base 64 encoded string&gt; </code></pre> <p>deployment.yaml file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: my-app namespace: backend spec: replicas: 1 selector: matchLabels: app: api-server template: metadata: labels: app: api-server spec: containers: - name: api-server image: my-api:latest resources: requests: cpu: &quot;200m&quot; memory: &quot;300Mi&quot; limits: cpu: &quot;300m&quot; memory: &quot;600Mi&quot; envFrom: - secretRef: name: my-secret ports: - containerPort: 5000 - name: api-worker image: api-worker:latest resources: requests: cpu: &quot;200m&quot; memory: &quot;300Mi&quot; limits: cpu: &quot;300m&quot; memory: &quot;600Mi&quot; envFrom: - secretRef: name: my-secret --- apiVersion: v1 kind: Service metadata: name: api-server namespace: backend spec: selector: app: api-server ports: - protocol: TCP port: 80 targetPort: 5000 </code></pre> <p>When I deploy to kubernetes using kubectl 
commands, my containers fail because the code can't recognize the env variable.</p> <p>ex:</p> <pre><code>from celery import Celery import os def create_celery_app(): broker_url = os.environ['CLOUDAMQP_URL'] celery = Celery('tasks', broker=broker_url, backend='rpc://') celery.conf.broker_connection_retry_on_startup = True return celery celery = create_celery_app() </code></pre> <p>The issue is definitely the env variable (in this case CLOUDAMQP_URL). It exists within my secrets file and I've tested to see if the values show up. When I hardcode the values into the code above, the app works perfectly fine when deployed to my cluster. I think the issue is related to the app starting before the env variables are set (but I could be mistaken). I was able to print an env variable on an API route, but that is after the app was fully operational and running. The app seems to fail on startup, so for example, if the function above was in a main.py file and the app is starting, it crashes. Anyone have a solution for this or faced something similar? I checked the shells of the container (when the app isn't crashing) and I can see the env variables with the values I put in the env.yaml file in there, and they are accurate.</p>
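Two mundane causes are worth ruling out before anything exotic: (a) a trailing newline baked into the secret value, since `echo value | base64` encodes the `\n` too (use `echo -n` when encoding); and (b) the variable genuinely missing at start-up, where a bare `KeyError` hides which name failed and why. A small fail-fast helper (purely illustrative, not part of the original app) makes the second case obvious in the crash log:

```python
import os

def require_env(name):
    """Read a required variable, failing with a readable message that also
    lists near-miss names (typos, wrong case) found in the environment."""
    try:
        # .strip() guards against a trailing newline smuggled in by
        # base64-encoding the secret with `echo` instead of `echo -n`.
        return os.environ[name].strip()
    except KeyError:
        similar = [k for k in os.environ if name.lower() in k.lower()]
        raise RuntimeError(
            f"Missing env var {name!r}; similar names present: {similar or 'none'}"
        ) from None

# broker_url = require_env('CLOUDAMQP_URL')
```

If the variable shows up correctly in an exec'd shell but the crash-loop message says it is missing, the container that crashed may be reading a different secret (wrong namespace or a stale secret applied before the last edit), which `kubectl describe pod` and the crash log together usually reveal.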
<python><kubernetes><digital-ocean><kubernetes-secrets>
2023-07-01 21:33:31
2
1,738
Ray
76,596,639
3,158,028
Github Actions Google Palm API No module named 'google.protobuf'
<p>Im getting the following error when my Github actions tries to install <code>google-generativeai</code> for the Palm API. I tried adding <code>pip install protobuf</code> explicitly in the github actions yaml, added it in requirements.txt, and in my setup.py. Nothing works.</p> <pre><code>Hint: make sure your test modules/packages have valid Python names. Traceback: /opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/importlib/__init__.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/test_adapt_decorator.py:6: in &lt;module&gt; from .model import GPT4, Claude tests/model.py:8: in &lt;module&gt; import google.generativeai as palm /opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/google_generativeai-0.1.0-py3.9.egg/google/generativeai/__init__.py:69: in &lt;module&gt; from google.generativeai import types /opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/google_generativeai-0.1.0-py3.9.egg/google/generativeai/types/__init__.py:17: in &lt;module&gt; from google.generativeai.types.discuss_types import * /opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/google_generativeai-0.1.0-py3.9.egg/google/generativeai/types/discuss_types.py:21: in &lt;module&gt; import google.ai.generativelanguage as glm /opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/google_ai_generativelanguage-0.2.0-py3.9.egg/google/ai/generativelanguage/__init__.py:21: in &lt;module&gt; from google.ai.generativelanguage_v1beta2.services.discuss_service.async_client import ( /opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/google_ai_generativelanguage-0.2.0-py3.9.egg/google/ai/generativelanguage_v1beta2/__init__.py:21: in &lt;module&gt; from .services.discuss_service import DiscussServiceAsyncClient, DiscussServiceClient 
/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/google_ai_generativelanguage-0.2.0-py3.9.egg/google/ai/generativelanguage_v1beta2/services/discuss_service/__init__.py:16: in &lt;module&gt; from .async_client import DiscussServiceAsyncClient /opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/google_ai_generativelanguage-0.2.0-py3.9.egg/google/ai/generativelanguage_v1beta2/services/discuss_service/async_client.py:31: in &lt;module&gt; from google.api_core import exceptions as core_exceptions /opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/google_api_core-2.12.0.dev0-py3.9.egg/google/api_core/exceptions.py:29: in &lt;module&gt; from google.rpc import error_details_pb2 /opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/googleapis_common_protos-1.59.1-py3.9.egg/google/rpc/error_details_pb2.py:20: in &lt;module&gt; from google.protobuf import descriptor as _descriptor E ModuleNotFoundError: No module named 'google.protobuf' </code></pre>
<python><protocol-buffers><google-generativeai><palm-api>
2023-07-01 21:28:14
1
3,390
Soubriquet
76,596,630
14,421,479
Torch recognizes GPU but not Tensorflow
<p>I have a docker container running a Flask app, I am both using tensorflow and pytorch. In <code>torch</code> I can use the GPU but not in Tensorflow.</p> <p><code>nvidia-smi</code> output:</p> <pre><code>+---------------------------------------------------------------------------------------+ | NVIDIA-SMI 530.41.03 Driver Version: 530.41.03 CUDA Version: 12.1 | |-----------------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+======================+======================| | 0 Tesla T4 Off| 00000000:00:04.0 Off | 0 | | N/A 60C P0 29W / 70W| 1146MiB / 15360MiB | 0% Default | | | | N/A | +-----------------------------------------+----------------------+----------------------+ +---------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=======================================================================================| +---------------------------------------------------------------------------------------+ </code></pre> <p>I don't understand why <code>nvidia-smi</code> show a cuda version but <code>nvcc</code> doesn't work, and I can't install the cuda toolkit using <code>apt</code> in the <code>python:3.9-slim</code> docker image.</p> <p><code>nvcc --version</code> output:</p> <pre><code>bash: nvcc: command not found </code></pre> <p><code>import tensorflow</code> output:</p> <pre><code>2023-07-01 21:12:51.765379: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used. 2023-07-01 21:12:51.814111: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used. 
2023-07-01 21:12:51.814886: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. 2023-07-01 21:12:53.284879: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT </code></pre> <p><code>torch</code> output:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import torch &gt;&gt;&gt; torch.cuda.is_available() True &gt;&gt;&gt; </code></pre> <p><code>Dockerfile</code></p> <pre><code>ARG PYTHON_VERSION=3.9 FROM python:${PYTHON_VERSION}-slim as base ENV PYTHONDONTWRITEBYTECODE=1 ENV PYTHONUNBUFFERED=1 WORKDIR /app RUN apt-get update &amp;&amp; apt-get install -y ffmpeg RUN --mount=type=cache,target=/root/.cache/pip \ --mount=type=bind,source=requirements.txt,target=requirements.txt \ python -m pip install -r requirements.txt COPY . . EXPOSE 80 CMD gunicorn 'main:app' --bind=0.0.0.0:80 --timeout=36000000 --workers=1 --threads=8 </code></pre> <p><code>compose.yaml</code></p> <pre class="lang-yaml prettyprint-override"><code>services: server: build: context: . ports: - 80:80 deploy: resources: reservations: devices: - driver: nvidia count: 1 capabilities: [gpu] </code></pre> <p>Can you please help me solve this?</p>
<python><docker><tensorflow><nvidia>
2023-07-01 21:22:36
1
318
Fady's Cube
76,596,620
880,783
How (and why!) do PySide6 signals call slots with mismatching signatures?
<p>In this naive example, <code>QSpinBox.valueChanged</code>, which is a signal passing an <code>int</code>, is able to call the slot <code>fun()</code> which does not accept an int. For some reason, this works. Why?</p> <pre class="lang-py prettyprint-override"><code>from PySide6.QtWidgets import QApplication, QSpinBox def fun(): print(&quot;This works, but why? box.valueChanged passes an int!&quot;) app = QApplication() box = QSpinBox() box.valueChanged.connect(fun) box.setValue(10) </code></pre> <p>The problem becomes obvious when I add a decorator the slot, in which case I am getting the (expected?)</p> <blockquote> <p>TypeError: decorated_fun() takes 0 positional arguments but 1 was given.</p> </blockquote> <p>I wonder how <code>PySide6</code> is able to call <code>fun</code> in the first place, and if I can call <code>inner</code> within the decorator in a similarly forgiving way. That latter part is answered in my answer below, but <em>why</em> does PySide6 do something like this in the first place, and can I disable such behavior?</p> <pre class="lang-py prettyprint-override"><code>from PySide6.QtWidgets import QApplication, QSpinBox def fun(): print(&quot;This works, but why?&quot;) def my_decorator(inner): def wrapper(*args, **kwargs): return inner(*args, **kwargs) return wrapper @my_decorator def decorated_fun(): print(&quot;This fails&quot;) app = QApplication() box = QSpinBox() box.valueChanged.connect(fun) box.valueChanged.connect(decorated_fun) box.setValue(10) </code></pre> <p>I also tried the <code>decorator</code> package, but I am getting essentially the same error:</p> <pre class="lang-py prettyprint-override"><code>from decorator import decorator from PySide6.QtWidgets import QApplication, QSpinBox def fun(): print(&quot;This works, but why?&quot;) @decorator def my_decorator(inner, *args, **kwargs): return inner(*args, **kwargs) @my_decorator def decorated_fun(): print(&quot;This fails&quot;) app = QApplication() box = QSpinBox() 
box.valueChanged.connect(fun) box.valueChanged.connect(decorated_fun) box.setValue(10) </code></pre>
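The forgiving call is by design: Qt inspects how many arguments the slot accepts and passes only that many of the signal's arguments, which is what lets `valueChanged` feed a zero-argument callback, and there is no documented switch in PySide6 to disable it. A `*args` wrapper hides the real arity, so one fix is a decorator that truncates surplus positional arguments itself. A pure-Python sketch (no Qt needed to see the behavior; it deliberately ignores bound methods and `*args` in the wrapped function):

```python
import inspect
from functools import wraps

def forgiving(inner):
    """Decorator that mimics Qt's leniency: surplus positional arguments
    beyond what `inner` declares are silently dropped."""
    n_params = len(inspect.signature(inner).parameters)

    @wraps(inner)  # preserve __name__ / __wrapped__ for introspection
    def wrapper(*args, **kwargs):
        return inner(*args[:n_params], **kwargs)

    return wrapper

@forgiving
def decorated_fun():
    return "no longer fails"
```

Connected to `box.valueChanged`, `decorated_fun` would now receive the `int` and discard it, matching what undecorated `fun` gets for free.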
<python><signature><signals-slots><pyside6>
2023-07-01 21:18:49
2
6,279
bers
76,596,316
5,425,826
Filtering python dictionary when values are lists
<p>Let's say I have the following dictionary in Python</p> <pre><code>dict_results = { &quot;first_names&quot;: [&quot;john&quot;, &quot;james&quot;, &quot;carlos&quot;], &quot;last_names&quot;: [&quot;smith&quot;, &quot;jones&quot;, &quot;sanchez&quot;], &quot;test_grade&quot;: [83, 79, 81] } </code></pre> <p>I need to select the person with the lowest test_grade, which in this example is james jones. How do I get the following output:</p> <pre><code>filtered_dict = { &quot;first_names&quot;: [&quot;james&quot;], &quot;last_names&quot;: [&quot;jones&quot;], &quot;test_grade&quot;: [79] } </code></pre>
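Since all the lists are index-aligned, the row index of the minimum grade is enough; a dict comprehension then slices every column at that index:

```python
dict_results = {
    "first_names": ["john", "james", "carlos"],
    "last_names": ["smith", "jones", "sanchez"],
    "test_grade": [83, 79, 81],
}

idx = dict_results["test_grade"].index(min(dict_results["test_grade"]))
filtered_dict = {key: [values[idx]] for key, values in dict_results.items()}
print(filtered_dict)
# → {'first_names': ['james'], 'last_names': ['jones'], 'test_grade': [79]}
```

Note that `list.index(min(...))` returns the *first* minimum; if ties should keep every lowest-scoring person, the index step would need to collect all matching positions instead.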
<python><dictionary><filtering>
2023-07-01 19:43:46
5
422
Diego
76,596,269
5,635,892
How to reduce the number of zeros after the decimal place when printing NumPy arrays
<p>I have the following situation:</p> <pre><code>import numpy as np

V = np.ones((12, 12))
V[1][5] = 123523.42341234
print(np.round(V[0], 3))
print(np.round(V, 3))
</code></pre> <p>The first print gives: <code>[1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]</code> which is what I want. However, the second one gives:</p> <pre class="lang-none prettyprint-override"><code>[[1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00]
 [1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.23523423e+05 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00]
 [1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00]
 [1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00]
 [1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00]
 [1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00]
 [1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00]
 [1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00]
 [1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00]
 [1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00]
 [1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00]
 [1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00 1.00000000e+00]]
</code></pre> <p>which makes it very difficult to read (in practice I have more entries that are not one). For now I just need it for display purposes, so I would like to have the &quot;1.00000000e+00&quot; printed as &quot;1&quot; only. How can I do that?</p>
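The scientific notation in the second printout can be turned off with NumPy's print options instead of rounding — a minimal sketch (`suppress=True` disables scientific notation, `precision` caps the decimals shown):

```python
import numpy as np

V = np.ones((12, 12))
V[1][5] = 123523.42341234

# Scoped change: np.printoptions is a context manager, so the
# formatting only applies inside the with-block.
with np.printoptions(suppress=True, precision=3):
    print(V)

# Or set it globally for the rest of the session:
np.set_printoptions(suppress=True, precision=3)
print(V[1])
```

With these options the ones print as `1.` rather than `1.00000000e+00`; note this only changes how values are displayed, not the stored values themselves.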
<python><numpy>
2023-07-01 19:28:42
0
719
Silviu
76,596,061
168,273
Azure Function App / saving to Blob issue (Python)
<p>While running the code locally using Visual Studio Code, all is working well and the file is saved to the cloud (blob storage).</p> <p>When deploying to the cloud as a Function App, files are not saved.</p> <pre><code>blob_service_client = BlobServiceClient.from_connection_string(connection_string)
blob_client = blob_service_client.get_blob_client(container=container_name, blob=file_name)
blob_client.upload_blob(csv_bytes, overwrite=True)
</code></pre> <p>Any hints or solutions? Thanks.</p>
<python><azure><azure-functions><azure-blob-storage>
2023-07-01 18:33:54
1
8,277
chewy
76,596,007
1,150,961
Type hint for wrapping methods after inheritance
<p>Suppose I have a base class which handles common functionality in a wrapper method (such as logging) and inheriting classes implementing specific functionality by overriding a private method.</p> <pre class="lang-py prettyprint-override"><code>from typing import Any

class Base:
    def do_something(self, *args, **kwargs) -&gt; Any:
        # Do some logging here ...
        result = self._actually_do_something(*args, **kwargs)
        # ... and here.
        return result

    def _actually_do_something(self, *args, **kwargs) -&gt; Any:
        raise NotImplementedError

class Child1(Base):
    def _actually_do_something(self, x: int) -&gt; str:
        return str(x)
</code></pre> <p>I would like to annotate <code>Base</code> such that <code>do_something</code> has the same signature as <code>_actually_do_something</code> for any inheriting class for static type analysis (e.g., using mypy or pylance in VS Code). How might I go about that?</p> <p><s>In other words, I would like the two following print statements to both yield <code>{'x': &lt;class 'int'&gt;, 'return': &lt;class 'str'&gt;}</code>.</p> <pre class="lang-py prettyprint-override"><code># prints: {'return': typing.Any}
print(Child1.do_something.__annotations__)
# prints: {'x': &lt;class 'int'&gt;, 'return': &lt;class 'str'&gt;}
print(Child1._actually_do_something.__annotations__)
</code></pre> </s> <h1>Related</h1> <p>This question is not unrelated to <a href="https://stackoverflow.com/questions/42124771/how-to-annotate-python-function-using-return-type-of-another-function">How to annotate Python function using return type of another function?</a> but the discussion did not resolve the question of <em>static</em> type hints.</p> <h1>Things I've tried</h1> <h2>Option A: generics</h2> <ul> <li>Pros: Gets the return type correct.</li> <li>Cons: Requires manual return type annotation on class declaration, doesn't generate type hints for the arguments.</li> </ul> <pre class="lang-py prettyprint-override"><code>R = TypeVar(&quot;R&quot;)

class BaseA(Generic[R]):
    def do_something(self, *args, **kwargs) -&gt; R: ...

class ChildA1(BaseA[str]):
    def _actually_do_something(self, x: int) -&gt; str: ...
</code></pre> <h2>Option B: decorator with generics and another layer of indirection</h2> <ul> <li>Pros: Gets the return type and signature correct.</li> <li>Cons: Requires another level of indirection and a <code>propagate_types</code> call in each child class, may cause issues with deeper inheritance.</li> </ul> <pre class="lang-py prettyprint-override"><code>def propagate_types(actual: Callable, hinted: R) -&gt; R:
    return actual

class BaseB:
    def _inner_do_something(self, *args, **kwargs) -&gt; Any:
        return self._actually_do_something(*args, **kwargs)

    def _actually_do_something(self, *args, **kwargs) -&gt; Any:
        raise NotImplementedError

class ChildB1(BaseB):
    def _actually_do_something(self, x: int) -&gt; str:
        pass

    do_something = propagate_types(BaseB._inner_do_something, _actually_do_something)
</code></pre>
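For the struck-through runtime half of the question there is one more option worth noting: copying the child's annotations onto a per-class wrapper in `__init_subclass__`. This is a runtime-only sketch of mine, not from the question — it fixes `__annotations__` for introspection, but it does not make mypy or Pylance see the signature statically:

```python
from typing import Any

class Base:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        impl = cls.__dict__.get("_actually_do_something")
        if impl is not None:
            # Build a fresh wrapper for this subclass and copy the
            # overriding implementation's annotations onto it.
            def do_something(self, *args, **kwargs):
                # Do some logging here ...
                result = self._actually_do_something(*args, **kwargs)
                # ... and here.
                return result

            do_something.__annotations__ = dict(impl.__annotations__)
            cls.do_something = do_something

    def _actually_do_something(self, *args, **kwargs) -> Any:
        raise NotImplementedError


class Child1(Base):
    def _actually_do_something(self, x: int) -> str:
        return str(x)


print(Child1.do_something.__annotations__)  # {'x': <class 'int'>, 'return': <class 'str'>}
```

For the static-analysis goal itself, the usual building blocks are `ParamSpec`-based descriptors or explicit per-class overrides; the runtime trick above does not replace them.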
<python><mypy><python-typing><pylance>
2023-07-01 18:18:19
1
9,927
Till Hoffmann
76,595,965
17,082,611
history.validation_data is None even though I added x_val and y_val in fit method
<p>I am trying to train a variational autoencoder on the CIFAR-10 dataset. This is an extract of my script:</p> <pre><code># Load the CIFAR-10 dataset
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()

# Split the data into training, validation, and test sets
validation_size = 0.2
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=validation_size)

# Define the model
vae = VAE(encoder, decoder)
vae.compile(optimizer=Adam())

history = vae.fit(x_train, epochs=2, batch_size=128, validation_data=(x_val, y_val))
</code></pre> <p>But unfortunately if I access <code>history.validation_data</code> I get <code>None</code>; indeed <code>history.validation_data is None</code> is <code>True</code>.</p> <p>Why is that, and how do I fix it?</p>
<python><tensorflow><machine-learning><keras><tensorflow2.0>
2023-07-01 18:06:35
0
481
tail
76,595,839
12,326,812
Bitbucket REST API query fails with python but succeed with curl
<p>I'm trying to retrieve information from Bitbucket using the REST API. If I run the query using <strong>curl</strong>, it succeeds.</p> <pre><code>C:\&gt;curl -vv -k --header &quot;Authorization: Bearer %TOKEN%&quot; https://bitbucket.cicd/rest/api/1.0/projects/%OWNER%/repos/%PROJECT%/raw/%LOCATION%?at=%COMMIT%
*   Trying 10.&lt;....&gt;:443...
* Connected to bitbucket.cicd (10.&lt;....&gt;) port 443 (#0)
* schannel: disabled automatic use of client certificate
* ALPN: offers http/1.1
* ALPN: server did not agree on a protocol. Uses default.
* using HTTP/1.x
&gt; GET /rest/api/1.0/projects/&lt;.....&gt; HTTP/1.1
&gt; Host: bitbucket.cicd
&gt; User-Agent: curl/8.0.1
&gt; Accept: */*
&gt; Authorization: Bearer &lt;token&gt;
&gt;
* schannel: failed to decrypt data, need more data
&lt; HTTP/1.1 200
&lt; x-arequestid: *&lt;....&gt;
&lt; set-cookie: BITBUCKETSESSIONID=&lt;....&gt;; Max-Age=1209600; Expires=Sat, 15 Jul 2023 15:25:03 GMT; Path=/; Secure; HttpOnly
&lt; x-auserid: &lt;....&gt;
&lt; x-ausername: &lt;....&gt;
&lt; x-asessionid: &lt;....&gt;
&lt; cache-control: private, no-cache
&lt; pragma: no-cache
&lt; cache-control: no-cache, no-transform
&lt; vary: x-ausername,x-auserid,cookie,accept-encoding
&lt; x-content-type-options: nosniff
&lt; content-disposition: attachment; filename=&quot;&lt;....&gt;&quot;; filename*=UTF-8''&lt;....&gt;
&lt; content-type: text/plain;charset=UTF-8
&lt; content-length: 3916
&lt; date: Sat, 01 Jul 2023 15:25:02 GMT
&lt;
&lt;...file content...&gt;
* Connection #0 to host bitbucket.cicd left intact
</code></pre> <p>However, when I try to do the exact same thing using <strong>python</strong>, it fails:</p> <pre><code>import requests

TOKEN = '&lt;token&gt;'
OWNER = '&lt;owner&gt;'
PROJECT = '&lt;project&gt;'
LOCATION = '&lt;location&gt;'
COMMIT = '&lt;commit&gt;'

headers = {
    'Authorization': f'Bearer {TOKEN}'
}
url = f'https://bitbucket.cicd/rest/api/1.0/projects/{OWNER}/repos/{PROJECT}/raw/{LOCATION}?at={COMMIT}'
response = requests.get(url=url, headers=headers, verify=False)
print(response)
</code></pre> <p>The response is:</p> <pre><code>response.status_code: 401
response._content: b'{&quot;errors&quot;:[{&quot;context&quot;:null,&quot;message&quot;:&quot;Authentication failed. Please check your credentials and try again.&quot;,&quot;exceptionName&quot;:&quot;com.atlassian.bitbucket.auth.IncorrectPasswordAuthenticationException&quot;}]}'
</code></pre> <p>Can someone explain to me why?</p> <p>Thanks in advance, -Uri</p>
<python><curl><python-requests><bitbucket>
2023-07-01 17:29:17
1
669
Uriel
76,595,587
12,851,199
How to solve mypy error for django abstract class?
<p>I have the following Django abstract class, describing a preview mixin:</p> <pre><code>from django.core.files.storage import default_storage
from django.db import models
from sorl.thumbnail import get_thumbnail

class PreviewMixin(models.Model):
    preview = models.ImageField(blank=True, null=True)

    class Meta:
        abstract = True

    def _get_preview_thumbnail(self, geometry_string: str, crop: str = 'noop', quality: int = 100) -&gt; str:
        preview = ''
        if self.preview:
            thumbnail = get_thumbnail(file_=self.preview, geometry_string=geometry_string, crop=crop, quality=quality)
            preview = default_storage.url(name=thumbnail)
        return preview
</code></pre> <p>When I run mypy, I get an error: <code>error: &quot;DefaultStorage&quot; has no attribute &quot;url&quot; [attr-defined]</code></p> <p>My code runs correctly without errors. What should I fix, add or update to pass this mypy check?</p> <p>Versions of packages are:</p> <pre><code>django 4.2.2
mypy 1.4.1
django-stubs 4.2.3
django-stubs-ext 4.2.2
sorl.thumbnail 12.9.0
</code></pre> <p>mypy.ini</p> <pre><code>[mypy]
python_version = 3.11
plugins = mypy_django_plugin.main, mypy_drf_plugin.main
exclude = .git, .idea, .mypy_cache, .ruff_cache, node_modules
check_untyped_defs = true
disallow_untyped_decorators = true
disallow_untyped_calls = true
ignore_errors = false
ignore_missing_imports = true
implicit_reexport = false
local_partial_types = true
no_implicit_optional = true
strict_optional = true
strict_equality = true
warn_unused_ignores = true
warn_redundant_casts = true
warn_unused_configs = true
warn_unreachable = true
warn_no_return = true

[mypy.plugins.django-stubs]
django_settings_module = 'settings'
</code></pre>
<python><django><mypy>
2023-07-01 16:22:53
1
438
Vadim Beglov
76,595,452
1,274,147
How do I install PyTorch 1.0 using pip?
<p>In order to use Facebook's LASER embeddings, I need to install a version of PyTorch that meets the following requirement:</p> <p><code>torch&lt;2.0.0 and &gt;=1.0.1.post2</code></p> <p>The PyTorch <a href="https://pytorch.org/get-started/previous-versions/" rel="nofollow noreferrer">website</a> lists instructions for installing any number of versions that meet this requirement. However, when I run, say,</p> <p><code>pip3 install torch==1.13.1</code></p> <p>I get the following:</p> <p><code>ERROR: Could not find a version that satisfies the requirement torch==1.13.1 (from versions: 2.0.0, 2.0.1) ERROR: No matching distribution found for torch==1.13.1</code></p> <p>I am not much of a Python developer, so I'm not sure what I am missing here. I have searched around quite a bit for an answer, but previous answers to questions about similar issues lead to dead links, or instructions on how to use conda to resolve (which is not an option for me, I'm afraid).</p> <p>For what it's worth, I am running Python 3.11.4.</p>
<python><pip><pytorch>
2023-07-01 15:49:41
1
1,369
rockusbacchus
76,595,390
8,726,488
Python web data extraction using BeautifulSoup module
<p>I am trying to get the product price from this website: <a href="https://www.cotswold-fayre.co.uk/" rel="nofollow noreferrer">https://www.cotswold-fayre.co.uk/</a>. It requires authentication before the product price is shown, so we developed a small Python script to extract it. In Python we passed the username and password; authentication was successful and we received status code 200. After the session is created, we try to get the product price but we are getting 'Product Price: Login To Buy'.</p> <pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd

username = 'testuser@gmail.com'
password = 'test@123'
num_pages = 10

# Create a session
session = requests.Session()
login_url = 'https://www.cotswold-fayre.co.uk/login'
login_data = {
    'email': username,
    'password': password
}
session.post(login_url, data=login_data)

# Initialize lists to store the extracted information
links = []
titles = []
model_numbers = []
barcodes = []
prices = []

# Iterate over each page
for page in range(1, num_pages + 1):
    # Send a GET request to the website for each page
    url = f'https://www.cotswold-fayre.co.uk/products/?page={page}'
    response = session.get(url)

    # Create a BeautifulSoup object to parse the HTML content
    soup = BeautifulSoup(response.content, 'html.parser')

    # Find all the product elements on the page
    product_elements = soup.find_all('div', class_='product-item-info')
    print(product_elements)
</code></pre> <p>I noticed that after a successful login I am getting a 302 HTTP status from the web UI. How do I handle this situation?</p>
<python><web-scraping>
2023-07-01 15:34:53
1
3,058
Learn Hadoop
76,595,272
10,826,692
How to stop Google Colab from automatically disconnecting from google drive?
<p>I purchased Colab Pro+ in order to run my neural network notebooks overnight. I read and write files from my google drive by mounting it in the standard way:</p> <pre><code>from google.colab import drive drive.mount('/content/drive') </code></pre> <p>However, after a few hours, in the middle of training for seemingly no reason, the colab notebook inevitably disconnects from my mounted google drive, halting training with the following error:</p> <p><code>OSError: [Errno 107] Transport endpoint is not connected</code></p> <p>Then the notebook just sits there for hours idly wasting my credits until it finally automatically disconnects the runtime.</p> <p>This effectively means I cannot run my notebooks overnight which is the whole reason I purchased Colab Pro+. When I used Pro+ last year it did not have this issue and would stay connected to the google drive all night. Is this a known current issue?</p>
<python><google-colaboratory>
2023-07-01 15:06:56
0
363
Ambrose
76,595,260
12,347,371
Parse C-style format strings in python
<p>Let's say a format string is given as <code>export_%05u.png</code>; we can create a full string in Python by supplying an integer like this: <code>s = &quot;export_%05u.png&quot;%1</code>. But if we have the format string and the actual string, is it possible to extract the integer somehow?</p> <p>Formally, these are given: <code>export_%05u.png</code> and <code>export_00111.png</code> and we need to extract 111.</p>
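There is nothing built in for going backwards through a C-style %-format, but a common approach is to translate the specifier into a regular expression. A minimal sketch that covers only the `%0Nu` case from the question (`extract_int` is a hypothetical helper, not a stdlib function; the `re` module docs show the same technique with a fuller translation table under "Simulating scanf()"):

```python
import re

def extract_int(fmt: str, s: str) -> int:
    """Recover the integer from a string produced by fmt % value.
    Handles only zero-padded unsigned specifiers such as %05u."""
    # Escape regex metacharacters in the literal parts (the '.' in '.png'),
    # then turn the %0Nu specifier into a capturing group of exactly N digits.
    pattern = re.sub(r'%0(\d+)u', r'(\\d{\1})', re.escape(fmt))
    m = re.fullmatch(pattern, s)
    if m is None:
        raise ValueError(f"{s!r} does not match format {fmt!r}")
    return int(m.group(1))

print(extract_int("export_%05u.png", "export_00111.png"))  # 111
```

Supporting other specifiers (`%d`, `%x`, `%f`, ...) is a matter of extending the substitution with one regex fragment per specifier.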
<python><c><string><format-string>
2023-07-01 15:02:02
0
675
Appaji Chintimi
76,595,077
250,962
Generating an image of multi character emoji using Pillow in python
<p>I'm using Pillow to generate PNGs of emoji, using Google's <a href="https://fonts.google.com/noto/specimen/Noto+Emoji?preview.text=%F0%9F%99%82%F0%9F%91%A9%E2%80%8D%F0%9F%91%A9%E2%80%8D%F0%9F%91%A7%E2%80%8D%F0%9F%91%A6&amp;preview.text_type=custom" rel="nofollow noreferrer">Noto Emoji font</a> (which can be downloaded from that page). It works well for basic emojis but multi-character emojis are output as, well, multiple characters. I can't work out if it's possible to output them as a single emoji.</p> <p>This code:</p> <pre class="lang-py prettyprint-override"><code>from PIL import Image, ImageDraw, ImageFont

emoji = &quot;🙂&quot;
image = Image.new(&quot;RGB&quot;, (600, 100), &quot;#eeeeee&quot;)
draw = ImageDraw.Draw(image)
font = ImageFont.truetype(&quot;NotoEmoji-Bold.ttf&quot;, size=80)
draw.text((0, 0), emoji, font=font, fill=&quot;#000000&quot;)
image.save(&quot;emoji.png&quot;)
</code></pre> <p>Outputs:</p> <p><a href="https://i.sstatic.net/iWU1d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iWU1d.png" alt="A light grey background with a single black smiley face emoji" /></a></p> <p>Which is great.</p> <p>But if I use a multi-character emoji, like a family:</p> <pre class="lang-py prettyprint-override"><code>emoji = &quot;👩‍👩‍👧‍👦&quot;
</code></pre> <p>then I get:</p> <p><a href="https://i.sstatic.net/jOakx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jOakx.png" alt="A light grey background with four smiling characters on, two adults, a girl with pigtails and a boy with a backwards baseball cap" /></a></p> <p>Which is super cute, but I want the single-character family emoji:</p> <p><a href="https://i.sstatic.net/NcKwR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NcKwR.png" alt="A light grey background with a single emoji showing four stick figures holding hands, two adults and two children" /></a></p>
<python><python-imaging-library><emoji>
2023-07-01 14:15:27
1
15,166
Phil Gyford
76,595,019
7,694,287
Assign multiple variables with list values
<p>How can I assign multiple variables using a list? For example, given a list <code>a=[1,2,3, ..., 5]</code>, I would like five variables <code>var1, var2, var3, ..., var5</code> to respectively have the values <code>1, 2, 3, ..., 5</code>.</p>
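For a fixed, small list this is plain sequence unpacking; for many values, a dictionary keyed by name is usually the better fit — a minimal sketch:

```python
a = [1, 2, 3, 4, 5]

# For a small, fixed number of names, sequence unpacking works directly:
var1, var2, var3, var4, var5 = a
print(var1, var5)  # 1 5

# For many values, prefer a dict keyed by name instead of
# creating numbered variables dynamically:
variables = {f"var{i}": value for i, value in enumerate(a, start=1)}
print(variables["var3"])  # 3
```

Unpacking raises a `ValueError` if the number of names does not match the list length, which is usually what you want.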
<python>
2023-07-01 14:02:22
1
402
Mingming
76,594,950
5,025,009
Matplotlib - how to align a second plot on a specific date that corresponds to a first plot
<p>I have a <code>df</code> with 2 columns and I plot this with x-axis being dates and y-axis being the first data.</p> <p>I then have secondary data associated with the first data in the following manner: The secondary data is a dictionary with <code>key</code> a date and <code>value</code> a vector. The date corresponds to a specific date that also exists in the first dataset (plot).</p> <p>The vector includes past and future values in the sense that at index 20 of the vector, this value corresponds to the date (key).</p> <p>I want to plot this aligned on the date with the first plot. Any solution would work (subplots, superimposition, two different axes).</p> <p>I want ideally to have both datasets plotted in the same plot but correctly aligning the blue line as shown below.</p> <p>Below is some toy code to illustrate an example and the problem:</p> <pre><code>import pandas as pd, numpy as np
import matplotlib.pyplot as plt

np.random.seed(0)

# data
df = pd.DataFrame({'date': pd.date_range('2020-11-30', periods=31), 'temp': np.arange(1, 32)})

# secondary data for date '2020-12-15', key is date and value is a vector.
# In this vector index 20 corresponds to the date '2020-12-15'
ml = {'2020-12-15': list(np.random.rand(56))}

fig, axes = plt.subplots(2, 1, dpi=120)
axes = axes.flatten()
axes[0].plot(df.date, df.temp, '-o', c='orange', label='T')
axes[0].tick_params(which='major', labelrotation=90)
axes[1].plot(ml['2020-12-15'], '-o', c='blue', label='ML: 2020-12-15')
plt.tight_layout()
plt.show()
</code></pre> <p><a href="https://i.sstatic.net/zlV0P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zlV0P.png" alt="enter image description here" /></a></p>
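One way to line the two series up is to give the secondary vector real dates: if index 20 falls on the key date, element <code>i</code> falls on <code>key_date + (i - 20)</code> days, and those dates can be passed as x-values to the same axes as the first plot. A minimal sketch of just the date arithmetic (the variable names here are my stand-ins, not from the question):

```python
from datetime import date, timedelta

key_date = date(2020, 12, 15)   # the dict key
anchor_index = 20               # element 20 corresponds to key_date
vector = list(range(56))        # stand-in for ml['2020-12-15']

# Element i of the vector falls on key_date + (i - anchor_index) days.
dates = [key_date + timedelta(days=i - anchor_index) for i in range(len(vector))]

# These dates can then be used as x-values on the same axes as df.date:
# axes[0].plot(dates, vector, '-o', c='blue', label='ML: 2020-12-15')
print(dates[anchor_index])  # 2020-12-15
```

With both series plotted against real dates on one axes (or on twin y-axes via `ax.twinx()`), matplotlib aligns them automatically.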
<python><pandas><matplotlib>
2023-07-01 13:43:40
1
33,417
seralouk
76,594,818
13,944,524
Indexing a SearchVector vs Having a SearchVectorField in Django. When should I use which?
<p>Clearly I have some misunderstandings about the topic. I would appreciate if you correct my mistakes.</p> <p>So as explained <a href="https://www.postgresql.org/docs/current/textsearch-intro.html" rel="nofollow noreferrer">in PostgreSQL documentation</a>, We need to do <em>Full-Text Searching</em> instead of using simple textual search operators.</p> <p>Suppose I have a blog application in Django.</p> <pre class="lang-py prettyprint-override"><code>Entry.objects.filter(body_text__search=&quot;Cheese&quot;) </code></pre> <p>The bottom line is we have &quot;<em>document</em>&quot;s which are our individual records in <code>blog_post</code> field and a term <code>&quot;Cheese&quot;</code>.</p> <p>Individual documents are gonna be translated to something called &quot;<strong>tsvector</strong>&quot;(a vector of simplified words) and also a &quot;<strong>tsquery</strong>&quot; is created out of our term.</p> <ol> <li><p>If I have no <code>SearchVectorField</code> field and no <code>SearchVector</code> index:</p> <p>for every single record in <code>body_text</code> field, a <code>tsvector</code> is created and it's checked against our <code>tsquery</code>, in failure, we continue to the next record.</p> </li> <li><p>If I have <code>SearchVectorField</code> field but not <code>SearchVector</code> index:</p> <p>that <code>tsvector</code> vector is stored in <code>SearchVectorField</code> field. So the searching process is faster because we only check for match not creating tsvector anymore, but still we're checking every single record one by one.</p> </li> <li><p>If I have both <code>SearchVectorField</code> field and <code>SearchVector</code> index:</p> <p>a <a href="https://docs.djangoproject.com/en/4.2/ref/contrib/postgres/indexes/#django.contrib.postgres.indexes.GinIndex" rel="nofollow noreferrer">GIN index</a> is created in database, it's somehow like a dictionary: <code>&quot;cat&quot;: [3, 7, 18], ...</code>. 
It stores the occurrences of the &quot;<em>lexemes</em>&quot; (words) so that we don't have to iterate through all the records in the database. I think this is the fastest option.</p> </li> <li><p>Now if I have only a <code>SearchVector</code> index:</p> <p>we have all the benefits of number 3.</p> </li> </ol> <p>Then why should I have a <code>SearchVectorField</code> field in my table? In other words, why do I need to store the <code>tsvector</code> if I already have it indexed?</p> <p><a href="https://docs.djangoproject.com/en/4.2/ref/contrib/postgres/search/#searchvectorfield" rel="nofollow noreferrer">Django documentation</a> says:</p> <blockquote> <p>If this approach becomes too slow, you can add a <code>SearchVectorField</code> to your model.</p> </blockquote> <p>Thanks in advance.</p>
<python><django><postgresql><indexing><full-text-search>
2023-07-01 13:06:12
3
17,004
S.B
76,594,723
10,277,347
incompatible datatypes when plotting a bar chart
<p>I'm using ARIMA to predict the amount (Amt) for the next 5 days of a dataset. The bar chart needs to show a concatenation of the amounts for each time value, and then the future prediction.</p> <p>An example of the dataframe, it's much larger than this:</p> <pre><code>  CCY Pair        Time   Amt
0   GBPUSD  13/05/2023  1000
1   EURUSD  13/05/2023  2000
2   EURUSD  14/05/2023  3000
3   EURUSD  14/05/2023  5000
4   GBPEUR  15/05/2023  4000
</code></pre> <p>The code below gives me the following error when I try and plot the model:</p> <pre><code>Traceback (most recent call last):
  File &quot;Graphs.py&quot;, line 46, in &lt;module&gt;
    plt.bar(combined_data.index, combined_data['Amt'])
  File &quot;/opt/homebrew/lib/python3.11/site-packages/matplotlib/pyplot.py&quot;, line 2439, in bar
    return gca().bar(
           ^^^^^^^^^^
  File &quot;/opt/homebrew/lib/python3.11/site-packages/matplotlib/__init__.py&quot;, line 1442, in inner
    return func(ax, *map(sanitize_sequence, args), **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/opt/homebrew/lib/python3.11/site-packages/matplotlib/axes/_axes.py&quot;, line 2460, in bar
    raise TypeError(f'the dtypes of parameters x ({x.dtype}) '
TypeError: the dtypes of parameters x (object) and width (float64) are incompatible
</code></pre> <p>Code:</p> <pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.arima.model import ARIMA

# The &quot;Time&quot; column contains the time periods, and &quot;Amt&quot; contains the values
df = pd.read_csv(&quot;Data.csv&quot;)

# Convert the &quot;Time&quot; column to a datetime type
df['Time'] = pd.to_datetime(df['Time'], dayfirst=True)

# Set the &quot;Time&quot; column as the index
df.set_index('Time', inplace=True)

# Sort the DataFrame by the index
df.sort_index(inplace=True)
df.index = pd.to_datetime(df.index).to_period('D')

# Prepare the data for ARIMA modeling
data = df['Amt']

# Fit the ARIMA model
model = ARIMA(data, order=(1,0,0))
model_fit = model.fit()

# Predict the next x periods
x = 5  # Number of periods to predict
predictions = model_fit.forecast(steps=x)

# Generate the next x bar charts based on the predictions
next_time_periods = pd.date_range(start=df.index.max().to_timestamp() + pd.DateOffset(days=1), periods=x, freq='D')  # Generate x future time periods
next_bar_charts = pd.DataFrame({'Time': next_time_periods, 'Amt': predictions}, index=next_time_periods)

# Concatenate current and predicted bar chart data
combined_data = pd.concat([df, next_bar_charts])

# Plot the combined bar chart
plt.figure(figsize=(10, 6))
plt.bar(combined_data.index, combined_data['Amt'])
plt.xlabel('Time')
plt.ylabel('Amt')
plt.xticks(rotation=45)  # Rotate x-axis labels for better readability
plt.show()
</code></pre>
<python><pandas><dataframe><matplotlib><statsmodels>
2023-07-01 12:32:41
1
345
Tom Pitts
76,594,689
3,374,090
format text to limit the number of columns
<p>I'm looking for a way to format long text strings by inserting new lines (or splitting) so that each part doesn't exceed a fixed length. For example, this input:</p> <pre><code>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc interdum mattis quam, id elementum tortor condimentum sit amet. Aliquam quam erat, suscipit ut dui ac, laoreet varius neque. Nulla commodo, arcu ut finibus tempor, leo lorem tempus tortor, et consectetur mi nisl sed ante.
</code></pre> <p>would give, if limited to 80 chars, something like this:</p> <pre><code>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc interdum mattis
quam, id elementum tortor condimentum sit amet. Aliquam quam erat, suscipit ut
dui ac, laoreet varius neque. Nulla commodo, arcu ut finibus tempor, leo lorem
tempus tortor, et consectetur mi nisl sed ante.
</code></pre> <p>Is there anything in the standard library, or in a reasonable dependency, that can do something like that? If not, I can code something, but I'd be surprised if no one had already solved this kind of issue...</p> <p>Thank you ^^</p>
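Yes — the standard library's `textwrap` module does exactly this:

```python
import textwrap

text = (
    "Lorem ipsum dolor sit amet, consectetur adipiscing elit. "
    "Nunc interdum mattis quam, id elementum tortor condimentum sit amet. "
    "Aliquam quam erat, suscipit ut dui ac, laoreet varius neque."
)

# fill() returns one string with newlines inserted at word boundaries;
# wrap() returns the individual lines as a list instead.
print(textwrap.fill(text, width=80))
lines = textwrap.wrap(text, width=80)
```

For more control (hanging indents, whether to break long words, etc.), `textwrap.TextWrapper` exposes the same engine with extra options.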
<python>
2023-07-01 12:22:27
1
515
ncarrier