Dataset columns (type, observed min to max):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, 15 to 150 chars
QuestionBody: string, 40 to 40.3k chars
Tags: string, 8 to 101 chars
CreationDate: date string, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, 3 to 30 chars
77,241,720
2,029,629
How can I map a numpy array into another numpy array
<p>I have two numpy arrays: a 3D (or 2D) array <code>reference</code> and a 3D array <code>map_</code>.</p> <p>In this example let <code>reference</code> be</p> <pre><code>reference = array([[ [11,12,13], [14,15,16], [17,18,19] ], [ [21,22,23], [24,25,26], [27,28,29] ], [ [31,32,33], [34,35,36], [37,38,39] ], [ [41,42,43], [44,45,46], [47,48,49] ]]) </code></pre> <p>and let <code>map_</code> be</p> <pre><code>map_ = array([[ [0,0], [1,2], [2,2] ], [ [2,1], [3,2], [3,0] ]]) </code></pre> <p>The result should be</p> <pre><code>array([[ [11,12,13], [27,28,29], [37,38,39] ], [ [34,35,36], [47,48,49], [41,42,43] ]]) </code></pre> <p>I can solve it as:</p> <pre><code>array([[reference[i[0],i[1],:] for i in j] for j in map_]) </code></pre> <p>However, when <code>map_</code> and <code>reference</code> have thousands of rows and columns, this becomes very slow.</p> <p>Is there a faster way to achieve this?</p>
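The index pairs in `map_` can be used directly as fancy indices, which moves the double loop into NumPy. A minimal sketch of that approach:

```python
import numpy as np

reference = np.array([[[11, 12, 13], [14, 15, 16], [17, 18, 19]],
                      [[21, 22, 23], [24, 25, 26], [27, 28, 29]],
                      [[31, 32, 33], [34, 35, 36], [37, 38, 39]],
                      [[41, 42, 43], [44, 45, 46], [47, 48, 49]]])
map_ = np.array([[[0, 0], [1, 2], [2, 2]],
                 [[2, 1], [3, 2], [3, 0]]])

# Integer (fancy) indexing: the two coordinate planes of map_ select
# row and column; broadcasting preserves map_'s leading (2, 3) shape.
result = reference[map_[..., 0], map_[..., 1]]
print(result[0, 0])  # [11 12 13]
```

Since the indexing happens in compiled code, this scales far better than the nested list comprehension for large `map_` arrays.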
<python><arrays><numpy>
2023-10-06 03:37:40
3
2,678
Carlos Eugenio Thompson Pinzón
77,241,655
19,675,781
How to change colors based on the two other rows in Seaborn barplot
<p>I have a dataframe like this:</p> <pre><code>index = ['Col-45', 'Col-68', 'Col-17', 'Col-69', 'Col-43', 'Col-49', 'Col-91', 'Col-13', 'Col-14', 'Col-18', 'Col-38', 'Col-37', 'Col-40', 'Col-44', 'Col-32', 'Col-82', 'Col-75', 'Col-19', 'Col-5', 'Col-6', 'Col-16', 'Col-4', 'Col-7', 'Col-41', 'Col-10', 'Col-31', 'Col-12', 'Col-11', 'Col-42', 'Col-30', 'Col-76', 'Col-46', 'Col-83', 'Col-73', 'Col-63', 'Col-9', 'Col-28', 'Col-51', 'Col-74', 'Col-65', 'Col-50', 'Col-64', 'Col-86', 'Col-79', 'Col-80', 'Col-81', 'Col-55', 'Col-1', 'Col-57', 'Col-2', 'Col-61', 'Col-53', 'Col-88', 'Col-47', 'Col-3', 'Col-58', 'Col-29', 'Col-59', 'Col-8', 'Col-276', 'Col-56', 'Col-62', 'Col-52', 'Col-54'] Brand = ['LG','LG','LG','LG','LG', 'LG', 'LG', 'LG', 'LG', 'LG', 'LG', 'LG', 'LG', 'LG', 'LG', 'LG', 'Vivo', 'Vivo', 'Vivo', 'Vivo', 'Vivo', 'Vivo', 'Vivo', 'Sony', 'Sony', 'Sony', 'Sony', 'Sony', 'Sony', 'Sony', 'Pixel', 'Pixel', 'Pixel', 'Pixel', 'Huawei', 'Huawei', 'Huawei', 'Apple', 'Apple', 'Apple', 'Xiaomi', 'Xiaomi', 'Xiaomi', 'Lenovo', 'Lenovo', 'Lenovo', 'Panasonic', 'Panasonic', 'Panasonic', 'Beetle', 'Beetle', 'Samsung', 'Samsung', 'Nothing', 'Nothing', 'Nikon', 'Nikon', 'Canon', 'Canon', 'Coby', 'Coby', 'Onida', 'Amara', 'Roxy'] Score = [4.75, 0.91, 0.79, 0.65, 0.62, 0.57, 0.38, 0.33, 0.27, 0.25, 0.25, 0.22, 0.16, 0.11, 0.02, 0.01, 3.89, 3.08, 2.1 , 1.75, 0.42, 0.27, 0.18, 4.44, 1.18, 0.8 , 0.74, 0.52, 0.25, 0.08, 1.13, 0.75, 0.54, 0.04, 1.03, 0.11, 0. , 5.53, 5.24, 4.98, 0.98, 0.78, 0.06, 0.76, 0.28, 0.04, 1.1 , 0.38, 0.25, 0.98, 0.01, 1.17, 0.61, 0.29, 0.19, 0.12, 0.01, 0.13, 0. 
, 4.37, 3.59, 0.53, 0.39, 1.3 ] Choice = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0] Result = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0] df = pd.DataFrame({'index':index,'Brand':Brand,'Score':Score,'Choice':Choice,'Result':Result}).set_index('index').T </code></pre> <p>I created a barplot for Score index in the dataframe using this code:</p> <pre><code>fig = plt.figure() sns.set(rc={'figure.figsize': (15,4)}) g1 = sns.barplot(data=df.loc[['Score']],color='blue') g1.patch.set_edgecolor('black') g1.patch.set_linewidth(0.5) g1.set_facecolor('white') g1.set_ylabel(f'Brand',weight='bold',fontsize=15) g1.set_xlabel(None) g1.set_xticklabels(g1.get_xticklabels(),rotation=90,fontsize=10) plt.show() </code></pre> <p>I want to change the colors of the bars based on the values in index rows Choice(df.loc['Choice']) &amp; Result(df.loc['Result']).</p> <p>If (Choice=0 &amp; Result=0), then bar color = blue</p> <p>Else If (Choice=1 &amp; Result=0), then bar color = green.</p> <p>Else If (Choice=0 &amp; Result=1), then bar color = pink.</p> <p>Else If (Choice=1 &amp; Result=1), then bar color = red.</p> <p>Can anyone help me with this?</p>
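One common pattern is to compute a color list from the two indicator rows first, then recolor the bars seaborn draws (one patch per column, in x order). A sketch of the mapping step, with stand-in values; in practice the lists would come from `df.loc['Choice']` and `df.loc['Result']`:

```python
# Color rule from the question: (Choice, Result) -> color
color_map = {(0, 0): 'blue', (1, 0): 'green', (0, 1): 'pink', (1, 1): 'red'}

choice = [0, 0, 1, 1]  # stand-in values for df.loc['Choice']
result = [0, 1, 0, 1]  # stand-in values for df.loc['Result']
colors = [color_map[(c, r)] for c, r in zip(choice, result)]
print(colors)  # ['blue', 'pink', 'green', 'red']

# After g1 = sns.barplot(...), recolor each bar in plotting order:
# for patch, color in zip(g1.patches, colors):
#     patch.set_facecolor(color)
```

Recoloring `g1.patches` after the plot call keeps the rest of the styling code unchanged.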
<python><pandas><matplotlib><seaborn>
2023-10-06 03:09:31
1
357
Yash
77,241,563
1,637,798
SWIG wrap for Python a void function with an output reference to smart pointer parameter
<p>Along the lines of <a href="https://stackoverflow.com/questions/53387538/swig-struct-pointer-as-output-parameter">this question</a>, suppose we have a C++ <code>struct S </code>and a function <code>makeS</code> which creates an instance of <code>S</code> and assigns it to a shared pointer <code>p</code>. Here is a <a href="https://coliru.stacked-crooked.com/view?id=436615986a6a9db9" rel="nofollow noreferrer">self-contained running example</a>:</p> <pre><code>#include &lt;iostream&gt; #include &lt;memory&gt; struct S { int x; S() { x = 0; std::cout &lt;&lt; &quot;S::S()\n&quot;; } // Note: non-virtual destructor is OK here ~S() { std::cout &lt;&lt; &quot;S::~S()\n&quot;; } }; void makeS(std::shared_ptr&lt;S&gt; &amp;p) { p = std::make_shared&lt;S&gt;(); p-&gt;x = 12; } int main() { std::shared_ptr&lt;S&gt; p; makeS(p); std::cout &lt;&lt; &quot;x = &quot; &lt;&lt; p-&gt;x &lt;&lt; &quot;\n&quot;; } </code></pre> <p>which outputs</p> <pre><code>S::S() x = 12 S::~S() </code></pre> <p>How do we wrap <code>void makeS(std::shared_ptr&lt;S&gt; &amp;p)</code> in SWIG for Python 3 so that, in Python, we can run</p> <pre><code>p = makeS() </code></pre> <p>and get a smart pointer to an instance of <code>S</code>? In other words, how do we write the Python-based <code>%typemap</code> for <code>std::shared_ptr&lt;S&gt; &amp;</code> so that we can write something like</p> <pre><code>%apply std::shared_ptr&lt;S&gt; &amp;OUTPUT { std::shared_ptr&lt;S&gt; &amp; } </code></pre> <p>and not get an error from SWIG that there is no typemap defined?</p> <p><strong>UPDATE:</strong> With input from Mark Tolonen below, I can add more detail but I am still stuck. 
We have the following files:</p> <p><strong>Widget.h</strong></p> <pre><code>#include &lt;iostream&gt; #include &lt;memory&gt; struct Widget { int x; Widget() { x = 0; std::cout &lt;&lt; &quot;Widget::Widget()\n&quot;; } // Note: non-virtual destructor is OK here ~Widget() { std::cout &lt;&lt; &quot;Widget::~Widget()\n&quot;; } }; void makeWidget(std::shared_ptr&lt;Widget&gt; &amp;p); </code></pre> <p><strong>Widget.cpp</strong></p> <pre><code>#include &quot;Widget.h&quot; void makeWidget(std::shared_ptr&lt;Widget&gt; &amp;p) { p = std::make_shared&lt;Widget&gt;(); p-&gt;x = 12; } </code></pre> <p><strong>main.cpp</strong></p> <pre><code>#include &quot;Widget.h&quot; int main() { std::shared_ptr&lt;Widget&gt; p; makeWidget(p); std::cout &lt;&lt; &quot;x = &quot; &lt;&lt; p-&gt;x &lt;&lt; &quot;\n&quot;; } </code></pre> <p>From here I can execute (on Ubuntu):</p> <pre><code>g++ -O2 -fPIC -c Widget.cpp g++ -O2 -fPIC -c main.cpp g++ Widget.o main.o -o widget widget </code></pre> <p>and get an output from running <code>widget</code> of</p> <pre><code>Widget::Widget() x = 12 Widget::~Widget() </code></pre> <p>Next we create a wrap</p> <p><strong>Widget.i</strong></p> <pre><code>%module Widget %include &lt;typemaps.i&gt; // for OUTPUT %include &lt;std_shared_ptr.i&gt; %shared_ptr(Widget); %{ #include &quot;Widget.h&quot; %} // Declare an input typemap that suppresses requiring any input. %typemap(in, numinputs=0) std::shared_ptr&lt;Widget&gt;&amp; %{ %} // Declare an output argument typemap updates the shared pointer, // converts it to a Python object, and appends it to the return value. %typemap(argout) std::shared_ptr&lt;Widget&gt;&amp; %{ try { PyObject* obj = SWIG_NewPointerObj(&amp;arg1, $descriptor(std::shared_ptr&lt;Widget&gt;*), 0); $result = SWIG_Python_AppendOutput($result, obj); } catch (...) 
{ SWIG_fail; } %} %include &quot;Widget.h&quot; </code></pre> <p>Running</p> <pre><code>swig -c++ -python -o Widget_wrap.cpp Widget.i </code></pre> <p>will generate the following code in the new file</p> <p><strong>Widget_wrap.cpp</strong>:</p> <pre><code>SWIGINTERN PyObject *_wrap_makeWidget(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; std::shared_ptr&lt; Widget &gt; *arg1 = 0 ; if (!SWIG_Python_UnpackTuple(args, &quot;makeWidget&quot;, 0, 0, 0)) SWIG_fail; makeWidget(*arg1); resultobj = SWIG_Py_Void(); try { PyObject* obj = SWIG_NewPointerObj(&amp;arg1, SWIGTYPE_p_std__shared_ptrT_Widget_t, 0); resultobj = SWIG_Python_AppendOutput(resultobj, obj); } catch (...) { SWIG_fail; } return resultobj; fail: return NULL; } </code></pre> <p>Notice that the variable <code>arg1</code> and call <code>makeWidget(*arg1);</code> were generated by Swig on its own and are not contained in the <code>typemap</code> in the .i file. After a prior pass, I saw them and that is why I use <code>arg1</code> in the <code>typemap</code>. We compile and build the <code>.so</code> file:</p> <pre><code> g++ -O2 -fPIC -I/home/catskills/anaconda3/envs/trader/include/python3.10 -c Widget_wrap.cpp g++ -shared *Widget.o Widget_wrap.o -o _Widget.so </code></pre> <p>Unfortunately then loading and running this in Python gives a seg fault:</p> <pre><code>$ python Python 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] :: Anaconda, Inc. on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import Widget &gt;&gt;&gt; Widget.makeWidget() Widget::Widget() Segmentation fault (core dumped) </code></pre> <p><strong>QUESTION:</strong> How do we get rid of the seg fault?</p> <p><strong>UPDATE 2:</strong> To answer this question, I modified the example to the more usual case of writing a function that returns a result, rather than a void function with a result out parameter. 
The resulting <code>Widget_wrap.cpp</code> gives some insight on how to wrap for the desired case. The modified case is:</p> <p><strong>Widget.h</strong></p> <pre><code>#include &lt;iostream&gt; #include &lt;memory&gt; struct Widget { int x; Widget() { x = 0; std::cout &lt;&lt; &quot;Widget::Widget()\n&quot;; } // Note: non-virtual destructor is OK here ~Widget() { std::cout &lt;&lt; &quot;Widget::~Widget()\n&quot;; } }; std::shared_ptr&lt;Widget&gt; makeWidget(); </code></pre> <p><strong>Widget.cpp</strong></p> <pre><code>#include &quot;Widget.h&quot; std::shared_ptr&lt;Widget&gt; makeWidget() { std::shared_ptr&lt;Widget&gt; p = std::make_shared&lt;Widget&gt;(); p-&gt;x = 12; return p; } </code></pre> <p><strong>Widget.i</strong></p> <pre><code>%module Widget %include &lt;std_shared_ptr.i&gt; %shared_ptr(Widget); %{ #include &quot;Widget.h&quot; %} %include &quot;Widget.h&quot; </code></pre> <p>This produces:</p> <p><strong>Widget_wrap.cpp</strong></p> <pre><code>SWIGINTERN PyObject *_wrap_makeWidget(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; std::shared_ptr&lt; Widget &gt; result; if (!SWIG_Python_UnpackTuple(args, &quot;makeWidget&quot;, 0, 0, 0)) SWIG_fail; result = makeWidget(); { std::shared_ptr&lt; Widget &gt; *smartresult = result ? new std::shared_ptr&lt; Widget &gt;(result) : 0; resultobj = SWIG_NewPointerObj(SWIG_as_voidptr(smartresult), SWIGTYPE_p_std__shared_ptrT_Widget_t, SWIG_POINTER_OWN); } return resultobj; fail: return NULL; } </code></pre> <p>As we can see there is quite a fancy treatment of the conversion of the shared pointer into a Swig pointer. 
If we then transplant that manually into the Widget_wrap.cpp of the original case, we get this:</p> <pre><code>SWIGINTERN PyObject *_wrap_makeWidget(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; std::shared_ptr&lt; Widget &gt; result; if (!SWIG_Python_UnpackTuple(args, &quot;makeWidget&quot;, 0, 0, 0)) SWIG_fail; makeWidget(result); { std::shared_ptr&lt; Widget &gt; *smartresult = result ? new std::shared_ptr&lt; Widget &gt;(result) : 0; resultobj = SWIG_NewPointerObj(SWIG_as_voidptr(smartresult), SWIGTYPE_p_std__shared_ptrT_Widget_t, SWIG_POINTER_OWN); } return resultobj; fail: return NULL; } </code></pre> <p>If we build this and run it through this Python test program:</p> <p><strong>test.py</strong></p> <pre><code>from Widget import * x = makeWidget() print(x) y = makeWidget() print(y) </code></pre> <p>We get a nice result:</p> <pre><code>$ python test.py Widget::Widget() &lt;Widget.Widget; proxy of &lt;Swig Object of type 'std::shared_ptr&lt; Widget &gt; *' at 0x7f05f6329f00&gt; &gt; Widget::Widget() &lt;Widget.Widget; proxy of &lt;Swig Object of type 'std::shared_ptr&lt; Widget &gt; *' at 0x7f05f61941e0&gt; &gt; Widget::~Widget() Widget::~Widget() </code></pre> <p>This leads to a final revised question (working on this next):</p> <p><strong>QUESTION</strong>: How do I write a typemap which gives me exactly the manually edited version of <code>_wrap_makeWidget</code> above, with no redundant calls to <code>makeWidget</code>?</p>
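One typemap pair that generates essentially the hand-edited wrapper above is the standard "local temporary" pattern: give the `in` typemap a local `std::shared_ptr<Widget>` so the generated `makeWidget(*arg1)` call has a real object to fill in (the segfault comes from `arg1` being an uninitialized pointer), then convert that temporary in `argout`. A sketch, untested, mirroring SWIG's own return-value code:

```swig
// Declare a local temporary and point the argument at it; numinputs=0
// keeps it off the Python signature.
%typemap(in, numinputs=0) std::shared_ptr<Widget>& (std::shared_ptr<Widget> tmp) %{
  $1 = &tmp;
%}

// After the call, copy the filled-in temporary into a heap-allocated
// shared_ptr owned by the Python proxy (tmp$argnum is SWIG's mangled
// name for the local declared above).
%typemap(argout) std::shared_ptr<Widget>& %{
  {
    std::shared_ptr<Widget>* smartresult =
        tmp$argnum ? new std::shared_ptr<Widget>(tmp$argnum) : 0;
    $result = SWIG_Python_AppendOutput($result,
        SWIG_NewPointerObj(SWIG_as_voidptr(smartresult),
                           $descriptor(std::shared_ptr<Widget> *),
                           SWIG_POINTER_OWN));
  }
%}
```

This avoids the double-call problem because SWIG emits exactly one `makeWidget` invocation, against the local `tmp`, and the `argout` code only packages the result.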
<python><c++><swig><swig-typemap>
2023-10-06 02:31:54
2
2,134
Lars Ericson
77,241,495
3,198,767
How to mock the Google BigQuery client for unit tests in pytest
<p>Below I am using a python client library to connect to bigquery and a service account to connect to the bigquery (see <a href="https://cloud.google.com/python/docs/reference/bigquery/latest/index.html" rel="nofollow noreferrer">this link</a> to get more info about the library).</p> <pre><code>from google.cloud.bigquery import Client, LoadJobConfig, LoadJob from google.oauth2 import service_account def fetch_last_modified_date_from_bq(): service_account_json_string = &quot;json to connect&quot; service_account_json = json.loads(service_account_json_string) credentials = service_account.Credentials.from_service_account_info(service_account_json) client = Client(credentials=credentials) # Run a SQL query on the table sql = &quot;&quot;&quot; SELECT last_modified_date FROM `project.dataset.table` order by last_modified_date desc LIMIT 1 &quot;&quot;&quot; query_job = client.query(sql) # Print the results for row in query_job: return row.result //this is string </code></pre> <p>How to mock the client?</p> <p>I tried to mock in this way but as client needs &quot;credentials&quot; to be passed and it was not successful.<br /> Is the code below the right way?</p> <pre><code>@mock.patch('google.cloud.bigquery.Client') @mock.patch('google.oauth2.service_account.Credentials.from_service_account_info') @mock.patch('google.auth.credentials.Credentials') def test_fetch_last_modified_date_from_bq(self, mock_credentials ,mock_service_account, mock_client): #arrange mock_service_account.return_value = mock_credentials row={} row['last_modified_date'] = '' rows = [row] mock_client.return_value.query.return_value = rows #act fetch_last_modified_date_from_bq() </code></pre>
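Note that `@mock.patch('google.cloud.bigquery.Client')` patches the name in the `google.cloud.bigquery` module, but the code under test does `from google.cloud.bigquery import Client`, so it must be patched where it is looked up (e.g. `@mock.patch('your_module.Client')`, a hypothetical module name). A simpler alternative is to inject the client, which needs no patching of the Google libraries at all. A stdlib-only sketch, with the function refactored to accept the client:

```python
from unittest import mock

# Stand-in for the function under test, refactored to take the client
# as a parameter (names here are illustrative, not the real module layout).
def fetch_last_modified_date_from_bq(client):
    query_job = client.query(
        "SELECT last_modified_date FROM `project.dataset.table` "
        "ORDER BY last_modified_date DESC LIMIT 1"
    )
    for row in query_job:          # iterating a query job yields rows
        return row.last_modified_date

def test_fetch_last_modified_date_from_bq():
    mock_client = mock.MagicMock()
    fake_row = mock.MagicMock()
    fake_row.last_modified_date = "2023-10-05"
    mock_client.query.return_value = [fake_row]
    assert fetch_last_modified_date_from_bq(mock_client) == "2023-10-05"

test_fetch_last_modified_date_from_bq()
```

With injection, the credentials code path never runs in tests, so `service_account.Credentials` does not need to be mocked at all.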
<python><google-bigquery><mocking><pytest><patch>
2023-10-06 02:05:07
0
1,793
Deepak Kothari
77,241,412
1,833,118
How to obtain start and commit timestamps of transactions?
<p>How can I obtain start and commit timestamps of transactions in Dgraph from operation messages or database logs? I need to issue multiple transactions in each client session.</p> <p>I use <a href="https://hub.docker.com/r/dgraph/dgraph" rel="nofollow noreferrer">this Docker image</a>:</p> <pre><code>docker pull dgraph/dgraph:latest # version:v23.1.0 </code></pre> <p>And pydgraph-23.0.1 driver with Python v3.8.18. I use <a href="https://github.com/dgraph-io/pydgraph/blob/master/examples/simple/simple.py" rel="nofollow noreferrer">simple.py</a> and changed <code>localhost</code> on line 10 to <code>175.27.241.31</code>. <code>175.27.241.31</code> is publicly available (you can use it directly without pulling the Docker image). I added <code>print(response)</code> after line 78 but there is only <code>start_ts</code> for the transaction. I do not find <code>commit_ts</code>:</p> <pre><code>txn { start_ts: 260380056 keys: ... ... preds: ... ... } latency { ... } metrics { ... } uids { ... } </code></pre> <ul> <li>How to obtain <code>commit_ts</code> for transactions using pydgraph? Is there any official documentation or source code for this?</li> <li>Does following code from simple.py mean client is issuing multiple transactions <em>one by one</em>?</li> <li>Are there other ways of obtaining start and commit timestamps than using pydgraph? Code examples are appreciated.</li> </ul> <pre class="lang-py prettyprint-override"><code>def main(): client_stub = create_client_stub() client = create_client(client_stub) drop_all(client) set_schema(client) create_data(client) query_alice(client) # query for Alice query_bob(client) # query for Bob delete_data(client) # delete Bob query_alice(client) # query for Alice query_bob(client) # query for Bob # Close the client stub. 
client_stub.close() </code></pre> <p><a href="https://discuss.dgraph.io/t/how-to-obtain-the-start-and-commit-timestamps-of-transactions-of-dgraph-using-pydgraph/18964?u=hengxin" rel="nofollow noreferrer">Related</a>.</p>
<python><transactions><dgraph>
2023-10-06 01:34:38
1
2,011
hengxin
77,241,390
11,117,255
Querying HTML Content in Common Crawl Dataset Using Amazon Athena
<p>I am currently exploring the massive Common Crawl dataset hosted on Amazon S3 and am attempting to use Amazon Athena to query this dataset. My objective is to search within the HTML content of the web pages to identify those that contain specific strings within their tags. Essentially, I am looking to filter out websites whose HTML content matches particular criteria.</p> <p>I am aware that Athena is capable of querying large datasets on S3 using standard SQL. However, I am not entirely sure about the feasibility and the approach to directly query inside the HTML content of the web pages in the Common Crawl dataset.</p> <p>Here's a simplified version of what I am looking to achieve:</p> <pre><code>SELECT * FROM &quot;common_crawl_dataset&quot; WHERE html_content LIKE '%specific-string%'; </code></pre> <p>Is it possible to directly query the HTML content of the web pages in the Common Crawl dataset using Athena? If yes, what would be the best approach to accomplish this, considering efficiency and cost-effectiveness? Are there any limitations or challenges that I should be aware of?</p>
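For context: the table Athena can query is Common Crawl's columnar index, which stores metadata (URL, MIME type, WARC file name, record offset and length) rather than page HTML, so a `LIKE` over `html_content` is not possible directly. The usual pattern is to narrow candidates with Athena, then fetch each page body from the WARC files with a ranged S3 read and search it client-side. A sketch; the column names (`warc_filename`, `warc_record_offset`, `warc_record_length`) follow the public index schema, but verify them against your table:

```python
def warc_byte_range(offset, length):
    """Build the HTTP Range header for one WARC record (ranges are inclusive)."""
    return f"bytes={offset}-{offset + length - 1}"

print(warc_byte_range(4_500_123, 18_342))  # bytes=4500123-4518464

# With boto3 (sketch, using values returned by the Athena index query):
# import boto3, gzip
# s3 = boto3.client("s3")
# body = s3.get_object(Bucket="commoncrawl", Key=warc_filename,
#                      Range=warc_byte_range(offset, length))["Body"].read()
# record = gzip.decompress(body)   # WARC records are individually gzipped
# found = b"specific-string" in record
```

Cost-wise this keeps Athena scanning only the compact index, while the expensive full-text check touches just the candidate records.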
<python><amazon-web-services><web-crawler><amazon-athena><common-crawl>
2023-10-06 01:22:01
3
2,759
Cauder
77,241,304
807,797
Broken script fails to break calling program
<p>What specific syntax needs to be changed in the code below in order to cause the broken script to halt the program that calls it?</p> <p><strong>PROBLEM:</strong></p> <p>An object-oriented Python 3 program needs to run external scripts. The problem is that, when the program calls a script that breaks, the Python program just continues running instead of halting with an error message.</p> <p>The scripts are provided by third parties we do not control, so that the object-oriented Python 3 program needs to do the error handling when a called script breaks, and we cannot control the content of the called scripts.</p> <p><strong>STEPS TO REPRODUCE THE PROBLEM:</strong></p> <ol> <li><p>Create two directories and place the following 4 files in the 2 directories. One directory named MyApp, and the other directory named AnotherDir. The file structure should look like:</p> <pre><code> AnotherDir\scripts\brokenScript.py MyApp\app\main.py MyApp\app\second_level.py MyApp\app\third_level.py </code></pre> </li> <li><p>Navigate the command line to the parent directory of AnotherDir and MyApp</p> </li> <li><p>Run the following command to reproduce the problem:</p> <pre><code> python MyApp\app\main.py </code></pre> </li> </ol> <p>We are developing this on Windows, but it is intended to be OS-agnostic.</p> <p><strong>PROBLEM RESULT:</strong></p> <p>The problem is that the results look like:</p> <pre><code>About to run a broken script that should break on error. command is: python C:\path\to\AnotherDir\scripts\brokenScript.py shell Inside broken script. The script should break on the next line. ---------------------------------------------------------------------------- FAILED TO THROW ERROR. Done running the broken script, but it returned without breaking this calling program. This print command should never run if the script breaks. Instead, a graceful error message should terminate the program. 
</code></pre> <p><strong>DESIRED RESULT:</strong></p> <p>The brokenScript.py script's non-zero exit code should cause the calling program to halt instead of continuing to run.</p> <p><strong>CODE TO REPRODUCE THE PROBLEM:</strong></p> <p>100% of the bare minimum code to reproduce this error is as follows:</p> <p><code>AnotherDir\scripts\brokenScript.py</code> contains:</p> <pre><code>import sys print(&quot;Inside broken script. The script should break on the next line. &quot;) sys.exit(1) </code></pre> <p><code>MyApp\main.py</code> contains:</p> <pre><code>from second_level import second_level import sys def runCommands(): wfsys = second_level() wfsys.offFoundation() print(&quot;--------------------------------------&quot;) sys.exit(0) runCommands() </code></pre> <p><code>MyApp\second_level.py</code> contains:</p> <pre><code>from third_level import third_level class second_level: def __init__(self): pass def offFoundation(self): tl = third_level() print(&quot;About to run a broken script that should break on error.&quot;) callScript = &quot;\\AnotherDir\\scripts\\brokenScript.py&quot; tl.runScript(callScript) print(&quot;----------------------------------------------------------------------------&quot;) print(&quot;FAILED TO THROW ERROR. Done running the broken script, but it returned without breaking this calling program. This print command should never run if the script breaks. 
Instead, a graceful error message should terminate the program.&quot;) </code></pre> <p><code>MyApp\third_level.py</code> contains:</p> <pre><code>import subprocess import re import sys import os import platform class third_level: def __init__(self): pass ansi_escape = re.compile(r'\x1B\[[0-?]*[ -/]*[@-~]') def runShellCommand(self, commandToRun): proc = subprocess.Popen( commandToRun,cwd=None, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True) while True: line = proc.stdout.readline() if line: thetext=line.decode('utf-8').rstrip('\r|\n') decodedline=self.ansi_escape.sub('', thetext) logString = decodedline print(&quot;shell&quot;, logString) else: break def runScript(self, relativePathToScript): userCallingDir = str(os.path.abspath(&quot;.&quot;))+'\\' userCallingDir = self.formatPathForOS(userCallingDir) fullyQualifiedPathToScript = userCallingDir+relativePathToScript fullyQualifiedPathToScript = self.formatPathForOS(fullyQualifiedPathToScript) if os.path.isfile(fullyQualifiedPathToScript): commandToRun = &quot;python &quot;+fullyQualifiedPathToScript print(&quot;command is: &quot;,commandToRun) self.runShellCommand(commandToRun) else: logString = &quot;ERROR: &quot;+fullyQualifiedPathToScript+&quot; is not a valid path. &quot; print('shell', logString) sys.exit(1) def formatPathForOS(self, input): if platform.system() == &quot;Windows&quot;: input = input.replace('/','\\') else: input = input.replace('\\','/') input = input.replace('//','/') if input.endswith('/n'): input = input[:-2] + '\n' return input </code></pre>
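For reference, the reason the program keeps running is that `runShellCommand` reads the child's stdout until EOF and returns without ever calling `proc.wait()` or inspecting `proc.returncode`, so the child's `sys.exit(1)` is silently discarded. A sketch of the missing check (alternatively, `subprocess.run(..., check=True)` raises `CalledProcessError` for the same effect):

```python
import subprocess
import sys

def run_script(command):
    # Stream output as before, then wait for the child's exit status.
    proc = subprocess.Popen(command, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    for line in proc.stdout:
        print("shell", line.decode("utf-8").rstrip("\r\n"))
    returncode = proc.wait()          # <- the missing step
    if returncode != 0:
        sys.exit(f"ERROR: script exited with status {returncode}")

# Demo with a deliberately failing child process:
try:
    run_script([sys.executable, "-c", "import sys; sys.exit(1)"])
except SystemExit as e:
    print("halted:", e)
```

Passing the command as a list with `sys.executable` also avoids `shell=True`, which masks exit codes differently across platforms.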
<python><python-3.x><shell><popen>
2023-10-06 00:42:34
1
9,239
CodeMed
77,241,240
8,245,814
While using this algorithm to approximate pi, I'm getting: TypeError: unsupported operand type(s)
<p>To find the real value of pi I'm using:</p> <pre><code>from mpmath import mp prec= 10**4 # precision equal to 10^4 digits mp.dps = prec # mp.pi is the value of pi with 10^4 digits of precision </code></pre> <p>I'm using the BBP formula to approximate pi:</p> <pre><code>from decimal import Decimal, getcontext getcontext().prec=prec pi = sum(1/Decimal(16)**k * (Decimal(4)/(8*k+1) - Decimal(2)/(8*k+4) - Decimal(1)/(8*k+5) - Decimal(1)/(8*k+6)) for k in range(20)) </code></pre> <p>I want to see how close this approximation is to the real pi, so</p> <pre><code>if(abs(mp.pi - pi) &lt; 0.001): print(&quot;true&quot;) else: print(&quot;false&quot;) </code></pre> <p>However, I'm getting this error:</p> <pre><code>Traceback (most recent call last): line 23, in &lt;module&gt; if(abs(mp.pi - pi) &lt; 0.001): print(&quot;true&quot;) TypeError: unsupported operand type(s) for -: 'constant' and 'decimal.Decimal' </code></pre> <p>I tried converting the constant <code>mp.pi</code> to a decimal but this is not allowed.</p>
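The two operands live in different numeric types: `mp.pi` is an mpmath constant and `pi` is a `decimal.Decimal`, and neither defines subtraction with the other. Converting one side through a string bridges them, e.g. `Decimal(mp.nstr(mp.pi, prec))` or `mpmath.mpf(str(pi))`. A stdlib-only sketch of the same idea, using `math.pi` as the reference value (an assumption made here so the snippet does not depend on mpmath):

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 50

# BBP series, exactly as in the question, in Decimal arithmetic.
pi_approx = sum(
    1 / Decimal(16)**k * (Decimal(4)/(8*k + 1) - Decimal(2)/(8*k + 4)
                          - Decimal(1)/(8*k + 5) - Decimal(1)/(8*k + 6))
    for k in range(20)
)

# Compare in ONE numeric type: route the reference value through str().
diff = abs(Decimal(str(math.pi)) - pi_approx)
print(diff < Decimal("0.001"))  # True
```

The same `str()` round-trip works with `mp.pi`, since `Decimal` accepts any decimal string regardless of where it came from.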
<python><pi>
2023-10-06 00:17:56
0
319
Pinteco
77,241,228
15,632,586
TypeError: Invalid data type 'str' when using DataLoader to train SciBERT
<p>I am trying to perform k-fold cross validation with SciBERT, using a dataset with 2 columns: <code>'text'</code> and <code>'label'</code>. My current dataset is looking like this, as a DataFrame from pandas: <a href="https://i.sstatic.net/b49cG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b49cG.png" alt="enter image description here" /></a></p> <p>Here is the code I used for loading dataset, and performing k-fold cross validation:</p> <pre><code>from torch.utils.data import DataLoader, Dataset from sklearn.model_selection import StratifiedKFold k_folds = 5 skf = StratifiedKFold(n_splits=k_folds, shuffle=True, random_state=42) # Define a custom PyTorch dataset class TextDataset(Dataset): def __init__(self, texts, labels): self.texts = texts self.labels = labels def __len__(self): return len(self.texts) def __getitem__(self, idx): text = self.texts[idx] label = self.labels[idx] encoding = tokenizer(text, padding='max_length', truncation=True, max_length=512, return_tensors='pt') input_ids = encoding['input_ids'].squeeze() attention_mask = encoding['attention_mask'].squeeze() return {'input_ids': input_ids, 'attention_mask': attention_mask, 'labels': torch.tensor(label)} # Convert dataframe to dataset dataset = TextDataset(data['text'].tolist(), data['label'].tolist()) from torch.optim import AdamW from tqdm import tqdm from statistics import mean fold_accuracies = [] # Perform k-fold cross-validation for fold, (train_indices, val_indices) in enumerate(skf.split(data['text'], data['label'])): print(f&quot;Training Fold {fold+1}/{k_folds}&quot;) # Split dataset into train and validation sets for the current fold train_dataset = torch.utils.data.Subset(dataset, train_indices) val_dataset = torch.utils.data.Subset(dataset, val_indices) print(train_dataset) # Create data loaders train_loader = DataLoader(train_dataset, batch_size=10, shuffle=True) val_loader = DataLoader(val_dataset, batch_size=10, shuffle=False) print(type(train_loader)) # Training 
loop optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, eps=1e-8) criterion = torch.nn.CrossEntropyLoss() device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model.to(device) model.train() for epoch in range(6): # Adjust the number of epochs as needed epoch_losses = [] for batch in train_loader: optimizer.zero_grad() input_ids = batch['input_ids'].to(device) attention_mask = batch['attention_mask'].to(device) labels = batch['labels'].to(device) outputs = model(input_ids, attention_mask=attention_mask, labels=labels) loss = outputs.loss epoch_losses.append(loss.item()) loss.backward() optimizer.step() print(f&quot;epoch {epoch + 1} loss: {mean(epoch_losses)}&quot;) </code></pre> <p>However, when I tried to train the model, I got <code>TypeError: new(): invalid data type str()</code> from my notebook. I think that could be because my label could not be converted to a tensor, however, I tried <code>tf.convert_to_tensor()</code>, as well as <code>torch.FloatTensor()</code>, but the problem is still not resolved.</p> <p><a href="https://i.sstatic.net/BQIxe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BQIxe.png" alt="tensor" /></a></p> <p>So, what could be the potential solution for this problem, to ensure that I could run the training process?</p>
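`torch.tensor(label)` raises exactly this `TypeError` when `label` is a string: tensors cannot hold `str` values, so the `'label'` column has to be encoded as integers before the `Dataset` is built (sklearn's `LabelEncoder` does the equivalent). A sketch of the encoding step, with made-up label names standing in for the real column values:

```python
# Hypothetical string labels as they might appear in data['label']
labels = ["background", "method", "result", "method", "background"]

# Build a stable string -> int mapping, then encode every label.
label2id = {name: i for i, name in enumerate(sorted(set(labels)))}
encoded = [label2id[name] for name in labels]
print(label2id)  # {'background': 0, 'method': 1, 'result': 2}
print(encoded)   # [0, 1, 2, 1, 0]

# The dataset is then built from integer labels, so inside __getitem__
# torch.tensor(label) produces a valid LongTensor:
# dataset = TextDataset(data['text'].tolist(), encoded)
```

Keeping `label2id` around also gives the reverse mapping for reporting per-class metrics after cross-validation.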
<python><tensorflow><pytorch><bert-language-model>
2023-10-06 00:11:08
1
451
Hoang Cuong Nguyen
77,241,220
11,249,098
AWS S3 - how to prevent docs from downloading instead of displaying with pre-signed urls
<p>This may seem like a question similar to others, but the focus is that it is a .doc file.</p> <p>I am working with Python to generate AWS s3 pre-signed urls that looks like this:</p> <pre class="lang-py prettyprint-override"><code>def generate_s3_presigned_url(self, filename, expiration=300): try: mime_type = mimetypes.guess_type(filename)[0] # Generate the presigned URL presigned_url = self.client.generate_presigned_url( ClientMethod=&quot;get_object&quot;, Params={ &quot;Bucket&quot;: self.bucket_name, &quot;Key&quot;: filename, 'ResponseContentType': mime_type, 'ResponseContentDisposition': 'inline', }, ExpiresIn=expiration, # Expiration time in seconds (300 seconds = 5 minutes) ) return presigned_url </code></pre> <p>In this occasion I am working with Microsoft Word files, so the part <code>'ResponseContentType': mime_type</code> is actually <code>'ResponseContentType': 'application/msword’</code>.</p> <p>Apparently, by adding the <code>'ResponseContentDisposition': ‘inline’</code> it was supposed to work but it didn’t.</p> <p>When I am using the generated url the doc file keeps being downloaded no matter what I tried, but what I am trying to achieve is to display the doc file on my browser.<br> What can I do?</p>
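A likely explanation: `Content-Disposition: inline` only asks the browser to render the file in place, and browsers have no built-in renderer for `application/msword`, so a `.doc` downloads regardless of the headers. Common workarounds are converting to PDF server-side, or handing the pre-signed URL to a document viewer such as Microsoft's Office web viewer. A sketch of the latter (the viewer endpoint is the publicly documented one, but treat its behavior with pre-signed URLs as an assumption to verify):

```python
from urllib.parse import quote

def office_viewer_url(presigned_url):
    # The pre-signed URL must be percent-encoded before embedding as src.
    return ("https://view.officeapps.live.com/op/view.aspx?src="
            + quote(presigned_url, safe=""))

url = office_viewer_url(
    "https://bucket.s3.amazonaws.com/report.doc?X-Amz-Signature=abc")
print(url)
```

Note the viewer fetches the document itself, so the pre-signed URL must still be valid (not expired) when the viewer requests it.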
<python><amazon-web-services><amazon-s3><pre-signed-url>
2023-10-06 00:08:38
2
489
Flora Biletsiou
77,241,219
219,153
Unexpectedly poor zip compression result
<p>This python script:</p> <pre><code>import numpy as np a = np.ones((10_000_000_000), dtype='u1') np.savez_compressed('a-zip', a=a) </code></pre> <p>produces 9.7MB large file. Theoretically this array can be compressed to less than 100 bytes: 71 bytes for <code>.npy</code> header and a handful of bytes for 10,000,000,000 copies of <code>1</code>. Why is ZIP failing so badly here? Is there another compression algorithm easily available for NumPy array with better performance in simple cases similar to this one, with mostly the same value being used?</p>
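ZIP is not failing so much as hitting DEFLATE's hard ceiling: one length/distance pair can emit at most 258 bytes, giving a maximum compression factor of roughly 1032:1, and 10^10 bytes / 1032 is almost exactly the observed 9.7 MB (`np.savez_compressed` uses zlib's DEFLATE). Formats without that limit, such as LZMA/xz or zstd, collapse constant data much further. A small sketch comparing the two on the same highly redundant input:

```python
import zlib
import lzma

data = b"\x01" * 10_000_000          # 10 MB of the byte value 1

deflate_size = len(zlib.compress(data, 9))
lzma_size = len(lzma.compress(data))

print(deflate_size)  # close to 10 MB / 1032, i.e. DEFLATE's ratio ceiling
print(lzma_size)     # far smaller: LZMA has no comparable limit

# For the NumPy case, one option is streaming the array through lzma:
# import numpy as np
# with lzma.open("a.npy.xz", "wb") as f:
#     np.save(f, a)
```

For array-specific workflows, chunked codecs such as blosc or zstd (e.g. via zarr or h5py filters) give similar ratios with much faster round-trips.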
<python><numpy><zip><compression>
2023-10-06 00:08:11
2
8,585
Paul Jurczak
77,241,030
10,291,435
geocoder module works fine in the code, but is not found in a PySpark UDF running in a Python virtual env
<p>I created a virtual env in Python and installed geocoder on it.</p> <p>I try to run this:</p> <pre><code>import geocoder def get_lat_long(address): g = geocoder.osm(address, method='reverse') if g.ok: location = g.json return (location.get('lat', 0), location.get('lng', 0)) else: return (0, 0) # Define a UDF that returns a StructType with two DoubleType fields get_lat_long_udf = udf(get_lat_long, StructType([ StructField(&quot;latitude&quot;, DoubleType(), False), StructField(&quot;longitude&quot;, DoubleType(), False) ])) # Apply the UDF to create two columns in the DataFrame df_2 = df.withColumn(&quot;lat_long_osm&quot;, get_lat_long_udf(df[&quot;address_cleansed&quot;])) # Split the StructType column into two separate columns df_2 = df_2.withColumn(&quot;latitude_osm&quot;, df_2[&quot;lat_long_osm&quot;].getField(&quot;latitude&quot;)) df_2 = df_2.withColumn(&quot;longitude_osm&quot;, df_2[&quot;lat_long_osm&quot;].getField(&quot;longitude&quot;)) # Drop the intermediate column &quot;lat_long_osm&quot; if not needed df_2 = df_2.drop(&quot;lat_long_osm&quot;) </code></pre> <p>When I try to run <code>df_2.show()</code> I get the module-not-found error for geocoder, despite the fact that it ran yesterday on Colab.</p> <p>When I try to do this:</p> <pre><code>g = geocoder.osm(address, method='reverse') print(g) </code></pre> <p>it works as expected, meaning the driver can see geocoder. Also, I tried to run the UDF in Colab and it works fine. I don't know what to check on my machine, especially since the module is installed and the code outside of the UDF works fine. Any help please?</p>
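A module that imports fine on the driver but disappears inside a UDF usually means the executors are launched with a different interpreter than the virtualenv where geocoder was installed: UDFs run in executor Python processes, not the driver's. Pointing both sides at the venv's interpreter before the session starts is the usual fix (a sketch; the paths and submit flags are assumptions to adapt):

```python
import os
import sys

# Make driver and executors use the same interpreter; here, whichever
# venv python is currently running this script.
os.environ["PYSPARK_PYTHON"] = sys.executable
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable

print(os.environ["PYSPARK_PYTHON"])

# The SparkSession must be created AFTER these are set, e.g.:
# from pyspark.sql import SparkSession
# spark = SparkSession.builder.getOrCreate()
#
# On a real cluster, ship the environment instead, e.g.:
# spark-submit --archives venv.tar.gz#venv ...
# with PYSPARK_PYTHON=./venv/bin/python on the executors.
```

This also explains the Colab difference: there, driver and executors share one system interpreter, so the package is visible everywhere.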
<python><pyspark><python-venv>
2023-10-05 23:06:02
0
1,699
Mee
77,241,026
17,653,423
How to get dictionary in a list of dicts based on key value?
<p>A function receives a list of emails and calls other methods which can return a payload with status <code>success</code> or <code>error</code>. If a method returns <code>error</code> it doesn't stop the flow; the function continues to the following method.</p> <p>For each email, I want to get the first payload with <code>error</code> or the last payload with <code>success</code>. That way, if the first method returns <code>error</code> and the last <code>success</code>, I can see where and whether it failed in some step before.</p> <p>Note: if the first step returns <code>error</code>, the second <code>error</code> and the last <code>success</code>, I want to see only the payload of the first step. If the first step returns <code>success</code>, the second <code>error</code> and the last <code>success</code>, I want to see only the payload of the second step, and so on.</p> <p>Current code:</p> <pre><code>def main(emails):
    result_from_step_1 = [
        {&quot;email&quot;: &quot;test1@gmail.com&quot;, &quot;status&quot;: &quot;success&quot;, &quot;step&quot;: 1},
        {&quot;email&quot;: &quot;test2@gmail.com&quot;, &quot;status&quot;: &quot;error&quot;, &quot;step&quot;: 1},
    ]
    result_from_step_2 = [
        {&quot;email&quot;: &quot;test1@gmail.com&quot;, &quot;status&quot;: &quot;error&quot;, &quot;step&quot;: 2},
        {&quot;email&quot;: &quot;test2@gmail.com&quot;, &quot;status&quot;: &quot;success&quot;, &quot;step&quot;: 2},
    ]
    result_from_step_3 = [
        {&quot;email&quot;: &quot;test1@gmail.com&quot;, &quot;status&quot;: &quot;error&quot;, &quot;step&quot;: 3},
        {&quot;email&quot;: &quot;test2@gmail.com&quot;, &quot;status&quot;: &quot;success&quot;, &quot;step&quot;: 3},
    ]

    payloads = result_from_step_1 + result_from_step_2 + result_from_step_3
    response = [get_status(payloads, email) for email in emails]
    return response


def get_status(payloads, user_email):
    for payload in payloads:
        if payload[&quot;email&quot;] != user_email:
            continue
        response = payload
        if payload[&quot;status&quot;] == &quot;success&quot;:
            continue
        break
    return response


emails = [&quot;test1@gmail.com&quot;, &quot;test2@gmail.com&quot;]
response = main(emails)
print(response)
</code></pre> <p>Any other ideas?</p> <p>The nested loop feels a little complex for future maintenance.</p>
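One way to flatten the nested loop (a sketch, not necessarily faster, just easier to maintain): filter each user's payloads once, then let `next()` pick the first error, falling back to the last entry:

```python
def get_status(payloads, user_email):
    # keep only this user's payloads; step order is preserved
    mine = [p for p in payloads if p["email"] == user_email]
    # first error wins; otherwise fall back to the last (successful) step
    return next((p for p in mine if p["status"] == "error"), mine[-1])


payloads = [
    {"email": "a@x.com", "status": "success", "step": 1},
    {"email": "a@x.com", "status": "error",   "step": 2},
    {"email": "a@x.com", "status": "success", "step": 3},
    {"email": "b@x.com", "status": "success", "step": 1},
    {"email": "b@x.com", "status": "success", "step": 2},
]

print(get_status(payloads, "a@x.com")["step"])  # 2 (first error)
print(get_status(payloads, "b@x.com")["step"])  # 2 (last success)
```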
<python>
2023-10-05 23:04:40
3
391
Luiz
77,241,020
1,432,980
serialize json object with classmethods
<p>I have an object that looks like this:</p> <pre><code>import json
from typing import Any


class Identity:
    def __init__(self, name: str) -&gt; None:
        self.name = name


class BasicEncoder(json.JSONEncoder):
    def default(self, obj: Any) -&gt; Any:
        return obj.__dict__


id = Identity(name='Random name')
print(json.dumps(id, cls=BasicEncoder))
</code></pre> <p>This worked fine.</p> <p>I decided to add <code>classmethod</code> functions to the <code>Identity</code> object in order to have something like static methods that would allow serializing the current object to JSON.</p> <p>I changed the code to this:</p> <pre><code>from __future__ import annotations

import json
from typing import Any


class Identity:
    def __init__(self, name: str) -&gt; None:
        self.name = name
        self.permissions = []

    @classmethod
    def to_json(cls: Identity) -&gt; str:
        return json.dumps(cls, cls=BasicEncoder)


class BasicEncoder(json.JSONEncoder):
    def default(self, obj: Any) -&gt; Any:
        return obj.__dict__


id = Identity(name='Random name')
print(id.to_json())
</code></pre> <p>But now serialization throws this error:</p> <pre><code>'mappingproxy' object has no attribute '__dict__'. Did you mean: '__dir__'
</code></pre> <p>In a similar scenario (for another, more complex object) I got another exception:</p> <pre><code>'getset_descriptor' object has no attribute '__dict__'
</code></pre> <p>What is the problem? As far as I understand, with the <code>classmethod</code> it is now trying to serialize more than just the object fields. How to solve it?</p> <p>I tried adding filtering of <code>__dict__</code> keys and values for the <code>classmethod</code> encoder etc., but it did not help.</p>
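For context (one illustration of the fix, not the only option): inside a classmethod, `cls` is the class object itself, and a class's `__dict__` is a `mappingproxy` full of descriptors, which is exactly what the encoder chokes on. Serializing the instance instead, with a plain method, avoids the problem:

```python
import json
from typing import Any


class BasicEncoder(json.JSONEncoder):
    def default(self, obj: Any) -> Any:
        return obj.__dict__


class Identity:
    def __init__(self, name: str) -> None:
        self.name = name
        self.permissions = []

    def to_json(self) -> str:
        # serialize self (the instance), not cls (the class object)
        return json.dumps(self, cls=BasicEncoder)


ident = Identity(name="Random name")
print(ident.to_json())  # {"name": "Random name", "permissions": []}
```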
<python><json>
2023-10-05 23:00:19
1
13,485
lapots
77,240,996
607,846
Matching on part of a string in a unittest
<p>I have the following test:</p> <pre><code>self.assertDictEqual(
    result,
    {
        1: 2,
        3: 4,
        5: &quot;Error: You entered 1, while one of the following values was expected: 2, 3, ... 1000&quot;
    }
)
</code></pre> <p>where the string above is very long, as it lists all the expected values. Therefore I wish to confirm only that the start of this string is as expected. I could do it like this:</p> <pre><code>assert len(result) == 3
assert result[1] == 2
assert result[3] == 4
assert result[5].startswith(&quot;Error: You entered 1, while one of the following values was expected:&quot;)
</code></pre> <p>But I prefer this, as this is a recurring issue:</p> <pre><code>class CONTAINING:
    def __init__(self, v):
        self.v = v

    def __eq__(self, container):
        return self.v in container


self.assertDictEqual(
    result,
    {
        1: 2,
        3: 4,
        5: CONTAINING(&quot;Error: You entered 1, while one of the following values was expected:&quot;)
    }
)
</code></pre> <p>Is there any other way to do this using the unittest framework in Python?</p>
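For the record, a runnable version of that matcher idea. No `__ne__` is needed on Python 3, and the reflected `__eq__` is reliably invoked because `str.__eq__` returns `NotImplemented` for non-`str` operands, so plain `==` (and therefore `assertEqual`/`assertDictEqual`, which use it) works:

```python
class Containing:
    """Equal to any string that contains the given fragment."""

    def __init__(self, fragment):
        self.fragment = fragment

    def __eq__(self, other):
        return isinstance(other, str) and self.fragment in other

    def __repr__(self):  # nicer failure messages in assertions
        return f"Containing({self.fragment!r})"


result = {1: 2, 3: 4,
          5: "Error: You entered 1, while one of the following values was expected: 2, 3"}
expected = {1: 2, 3: 4, 5: Containing("Error: You entered 1")}
print(result == expected)  # True
```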
<python><unit-testing>
2023-10-05 22:53:47
0
13,283
Baz
77,240,878
15,587,184
Tokenizing and summarizing Textual data by group efficiently in Python
<p>I have a dataset in Python that looks like this one:</p> <pre><code>data = pd.DataFrame({
    'ID': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],
    'TEXT': [
        &quot;Mouthwatering BBQ ribs cheese, and coleslaw.&quot;,
        &quot;Delicious pizza with pepperoni and extra cheese.&quot;,
        &quot;Spicy Thai curry with cheese and jasmine rice.&quot;,
        &quot;Tiramisu dessert topped with cocoa powder.&quot;,
        &quot;Sushi rolls with fresh fish and soy sauce.&quot;,
        &quot;Freshly baked chocolate chip cookies.&quot;,
        &quot;Homemade lasagna with layers of cheese and pasta.&quot;,
        &quot;Gourmet burgers with all the toppings and extra cheese.&quot;,
        &quot;Crispy fried chicken with mashed potatoes and extra cheese.&quot;,
        &quot;Creamy tomato soup with a grilled cheese sandwich.&quot;
    ],
    'DATE': [
        '2023-02-01', '2023-02-01', '2023-02-01', '2023-02-01',
        '2023-02-02', '2023-02-02', '2023-02-01', '2023-02-01',
        '2023-02-02', '2023-02-02'
    ]
})
</code></pre> <p>What I'd like to do is group by DATE and get the frequency of each token after removing punctuation. I'm very new to the Python environment; I come from R, and I have been looking into the gensim library for further reference. It looks quite complicated to me. 
My desired output would look like this: for each group (DATE), we'll have the frequency of each unique token.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>TOKEN</th> <th>SUBTOTAL</th> <th>DATE</th> </tr> </thead> <tbody> <tr> <td>cheese</td> <td>5</td> <td>1/02/2023</td> </tr> <tr> <td>and</td> <td>5</td> <td>1/02/2023</td> </tr> <tr> <td>with</td> <td>5</td> <td>1/02/2023</td> </tr> <tr> <td>extra</td> <td>2</td> <td>1/02/2023</td> </tr> <tr> <td>mouthwatering</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>bbq</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>ribs</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>coleslaw</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>delicious</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>pizza</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>pepperoni</td> <td>1</td> <td>1/02/2023</td> </tr> </tbody> </table> </div> <p>In R this can be done very easily with quanteda like this:</p> <pre><code>corpus_food &lt;- corpus(data, docid_field = &quot;ID&quot;, text_field = &quot;TEXT&quot;)

corpus_food %&gt;%
  tokens(remove_punct = TRUE) %&gt;%
  dfm() %&gt;%
  textstat_frequency(groups = lubridate::date(DATE))
</code></pre> <p>This only creates a corpus, tokenizes it to remove punctuation, then creates a document-term matrix and finally summarizes the tokens and their frequencies by group.</p> <p>I am in no way comparing the two languages, Python and R. They are both amazing, but at the moment I'm interested in a very straightforward and fast method to achieve my results in Python. If your solution doesn't use the gensim library, I'd still be interested in a faster and more efficient way to achieve what I'm looking for in Python. I'm new to Python.</p>
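For reference, a pandas-only sketch (no gensim needed) that mirrors the quanteda pipeline: lowercase, strip punctuation, split on whitespace, then count per date. Shown on a tiny stand-in frame:

```python
import pandas as pd

data = pd.DataFrame({
    "TEXT": ["Delicious pizza, with cheese.", "Pizza again with cheese"],
    "DATE": ["2023-02-01", "2023-02-01"],
})

# tokenize: lowercase, drop punctuation, split; then one row per token
tokens = data.assign(
    TOKEN=data["TEXT"]
    .str.lower()
    .str.replace(r"[^\w\s]", "", regex=True)
    .str.split()
).explode("TOKEN")

# count each token within each date group
freq = (
    tokens.groupby(["DATE", "TOKEN"])
    .size()
    .reset_index(name="SUBTOTAL")
    .sort_values(["DATE", "SUBTOTAL"], ascending=[True, False])
)
print(freq)
```

`explode` plus `groupby(...).size()` scales well and avoids any per-row Python loop.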
<python><pandas><dataframe><nlp><gensim>
2023-10-05 22:16:39
1
809
R_Student
77,240,858
7,656,369
How to fix: cannot import name '_request_ctx_stack' from 'flask'
<p>I made some changes to my project (and rebuilt its virtual environment), and then started encountering this issue when running my suite of unit tests:</p> <pre><code>ImportError: cannot import name '_request_ctx_stack' from 'flask'
</code></pre> <p>The weird thing is that I hadn't altered the code related to the tests that were failing, nor had I changed the requirements of the project, which is why I was very confused.</p> <p>I tried undoing my changes (and rebuilding the virtual environment without them), but the error was still there, which was a signal that something else was happening.</p>
<python><flask><python-venv><flask-script>
2023-10-05 22:12:12
3
992
alejandro
77,240,746
22,371,917
Endpoint not working with flask-selenium on render.com?
<p>Code:</p> <pre class="lang-py prettyprint-override"><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from flask import Flask

app = Flask(__name__)


@app.route(&quot;/t&quot;)
def st():
    chrome_options = webdriver.ChromeOptions()
    chrome_options.add_argument(&quot;--headless=new&quot;)
    # chrome_options.add_argument(&quot;--disable-dev-shm-usage&quot;)
    # chrome_options.add_argument(&quot;--no-sandbox&quot;)
    # chrome_options.add_argument(&quot;--disable-gpu&quot;)
    browser = webdriver.Chrome(options=chrome_options)
    browser.get(&quot;http://www.example.com&quot;)
    button = browser.find_element(By.TAG_NAME, &quot;a&quot;)
    button.click()
    bpt = browser.find_element(By.TAG_NAME, &quot;p&quot;).text
    browser.quit()
    return bpt
</code></pre> <p>When the route is &quot;/&quot; it works fine, but when it's <code>/anything</code> it gives a 502 error. FYI, adding this</p> <pre class="lang-py prettyprint-override"><code>@app.route(&quot;/hello&quot;)
def sy():
    return &quot;hello&quot;
</code></pre> <p>works: if I go to <code>/hello</code> it returns <code>hello</code> with no errors. The failure only happens when I'm using Selenium with an endpoint that's not the default &quot;/&quot;. This code also works locally. Build command: <code>gunicorn main:app</code></p>
<python><selenium-webdriver><flask><render.com>
2023-10-05 21:42:44
0
347
Caiden
77,240,678
1,812,732
How to find the threshold that will yield the desired number of array elements
<p>Given an array of numbers and a target count, I want to find the threshold such that the number of elements that are above it will be equal to the target (or as close as possible).</p> <p>For example:</p> <pre><code>arr = np.random.rand(100)
target = 80

for i in range(100):
    t = i * 0.01
    if (arr &gt; t).sum() &lt; target:
        break

print(t)
</code></pre> <p>However this is not efficient and it is not very precise, and perhaps someone has already solved this problem.</p> <p><strong>EDIT:</strong></p> <p>In the end I found <code>scipy.optimize.bisect</code> (<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.bisect.html" rel="nofollow noreferrer">link</a>), which works perfectly.</p>
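For the record, since the threshold is just an order statistic, `np.partition` (O(n)) gives it directly without any root finding. The sketch below assumes the values are distinct, in which case the count is exact:

```python
import numpy as np

rng = np.random.default_rng(0)
arr = rng.random(100)
target = 80

# the threshold is the (target+1)-th largest value: everything strictly
# above it is exactly the `target` largest elements (for distinct values)
k = len(arr) - target - 1
t = np.partition(arr, k)[k]

print((arr > t).sum())  # 80
```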
<python><numpy>
2023-10-05 21:28:14
4
11,643
John Henckel
77,240,508
22,466,650
How to make hbars in subplots with shared properties and no delimiter?
<p>My input is a dataframe <code>df</code> (you can find a snippet at the end of my question) and I am trying to create a plot like the second 'Panel vendor bar chart' here: <a href="https://peltiertech.com/stacked-bar-chart-alternatives" rel="nofollow noreferrer">https://peltiertech.com/stacked-bar-chart-alternatives</a></p> <p>The code I made is this (by the way, I'm open to any suggestions to improve it):</p> <pre><code>import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel('my_folder/list_of_softwares.xlsx')

sums = df.groupby('Software').sum().T

fig, axes = plt.subplots(1, len(sums.index), sharey=True, figsize=(12, 3))
plt.subplots_adjust(wspace=0)

colors = ['#6caddf', '#f27077', '#9dd374', '#fab26a']

for ax, software, parameter, color in zip(axes, sums.columns, sums.index, colors):
    ax.barh(sums.loc[parameter].index, sums.loc[parameter].values, color=color)
    ax.set(xticks=range(0, df.drop('Software', axis=1).max().max()+1, 2000), title=parameter)

fig.suptitle('Summary of Softwares', y=1.05, fontweight='bold')
plt.show()
</code></pre> <p>It works except for 4 small issues:</p> <ol> <li>The <code>y</code> ticks in all axes should be removed</li> <li>The subtitles of the axes should be inside the figure, not outside it</li> <li>Some <code>x</code> tick labels are overlapping</li> <li>The vertical line that separates each ax should be removed</li> </ol> <p>I feel like this can be done easily but I don't know how.</p> <p>Any ideas, guys?</p> <p><a href="https://i.sstatic.net/SpcjE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SpcjE.png" alt="enter image description here" /></a></p> <p><code>df</code> looks like this:</p> <pre><code>   Software  Parameters  Reports  Dashbords  Scorecards
0   Tableau        7935       68        474         712
1   Tableau          69      518        695         122
2    Oracle         651      540        842         764
3    Oracle         700       52        776         948
4    Oracle         758      862        182         757
5       IBM         999      271        847         338
6       IBM         316      128        395         441
7       IBM         915      199       1000         747
8       IBM          44      685        818         427
9     Tibco         500      575        876         450
10    Board         748      936        367         771
11    Board         304      700        856          77
12    Board         623      974        120         802
13    Board         131      151        395         410
14    Board         217      645        996         537
15  LogiXML         630      779        752         433
16  LogiXML         947      391        280         109
</code></pre>
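A sketch of the four cosmetic fixes on a toy figure (values such as the title's `y` position are guesses to tune by eye):

```python
import matplotlib

matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 3, sharey=True, figsize=(9, 3))
plt.subplots_adjust(wspace=0)

for ax, title in zip(axes, ["Parameters", "Reports", "Dashboards"]):
    ax.barh(["IBM", "Oracle", "Tableau"], [3, 5, 8])
    ax.tick_params(axis="y", length=0)        # 1. no y tick marks
    ax.set_title(title, y=0.88)               # 2. title drawn inside the axes
    ax.tick_params(axis="x", labelsize=7)     # 3. smaller (or rotated) x labels
    for side in ("left", "right"):
        ax.spines[side].set_visible(False)    # 4. no vertical separator lines
```

Hiding the left/right spines on every axes removes the shared borders that read as vertical delimiters between panels.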
<python><matplotlib><bar-chart><subplot>
2023-10-05 20:51:22
1
1,085
VERBOSE
77,240,144
7,492,737
What is numpy doing when assigning values beyond the dimensions of the array?
<p>What exactly is numpy doing when you assign values outside the bounds of the array? Does it pose a risk of overwriting some area of memory currently used by something else?</p> <p>In this example, it writes row 9 to all ones. I was hoping to use this to get around an edge case, but I can't find any documentation on this behavior in the numpy pages for assigning or broadcasting.</p> <pre><code>import numpy as np

x = np.zeros([10, 5])
x[9:11] = 1
</code></pre> <p>Edit per &quot;solved by this other question&quot;: I understand what it is doing and how it works. The question that I still have is whether there is any risk of memory corruption, or if the non-existent slice indices are simply ignored when writing/assigning.</p>
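A quick check, consistent with Python's list slicing semantics which NumPy follows: slice bounds are clipped to the array's extent before any write happens, so nothing outside the buffer is ever touched; only out-of-range *integer* indices raise `IndexError`:

```python
import numpy as np

x = np.zeros((10, 5))
x[9:11] = 1     # clipped to x[9:10]; only row 9 is written
x[100:200] = 7  # empty slice after clipping; a no-op, no memory is touched

print(x[9].sum())      # 5.0
print((x == 7).any())  # False
```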
<python><numpy>
2023-10-05 19:45:55
1
302
Chris McL
77,240,105
3,566,606
Python Typing: Generic Type that has the same interface as the wrapper type
<p>I would like to define a sort of &quot;wrapper&quot; generic type, say <code>MyType[T]</code>, so that it has the same type interface as the wrapped type.</p> <pre class="lang-py prettyprint-override"><code>from typing import Generic, TypeVar

T = TypeVar(&quot;T&quot;)

class MyType(Generic[T]):
    pass  # what to write here?
</code></pre> <p>So, as an example, when I have a type <code>MyType[int]</code>, the type-checker should treat it as if it were an <code>int</code> type.</p> <p>Is that possible? If so, how?</p>
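If the goal is only that the checker treats the wrapped type transparently while a runtime marker survives, `typing.Annotated` is the standard mechanism for exactly that (a sketch; whether it fits depends on what the wrapper must do at runtime):

```python
from typing import Annotated, get_type_hints

# type checkers treat Annotated[int, "my-marker"] exactly like int,
# but the metadata is preserved for runtime introspection
MyInt = Annotated[int, "my-marker"]


def increment(x: MyInt) -> int:
    return x + 1


print(increment(41))  # 42
hints = get_type_hints(increment, include_extras=True)
print(hints["x"].__metadata__)  # ('my-marker',)
```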
<python><python-typing>
2023-10-05 19:39:56
2
6,374
Jonathan Herrera
77,240,058
11,751,799
`plotly` Sankey plot: able to color the inflows and outflows upon hovering over a node
<p>I am working with the <code>plotly</code> package in Python and have made a nice Sankey plot. Other Stack Overflow posts discuss how to set the connection colors. However, I am content to have the default colors unless the user hovers over a node. In that case, I would like the inflows to light up in blue and the outflows to light up in red.</p> <p>Does <code>plotly</code> support this? If so, what would be the syntax? If not, would another Python package support this form of interaction in a Sankey plot?</p>
<python><colors><graphics><plotly><sankey-diagram>
2023-10-05 19:29:29
1
500
Dave
77,239,936
7,064,415
How to call `subprocess` efficiently (and avoid calling it in a loop)
<p>I have a Python script that contains a for-loop that iterates through a list of items. I need to perform a computation on a property of each item, but the code that does this computation is in Java (where the Java <code>main()</code> method accepts two arguments, <code>arg1</code> and <code>arg2</code>, in the example below). So far, so good -- I can use <code>subprocess</code> to call Java.</p> <p>This is how I do it currently (simplified):</p> <pre><code>from subprocess import Popen, PIPE

cp = ...        # my classpath string
java_file = ... # the file with the java code
arg1 = ...      # an argument string (always the same value)

items = [...]   # my list of items
for item in items:
    arg2 = ...  # calculated from item inside the python script
    cmd = ['java', '-cp', cp, java_file, arg1, arg2]
    process = Popen(cmd, stdout=PIPE, stderr=PIPE, shell=True)
    output, errors = process.communicate()
    outp_str = output.decode('utf-8')  # the result I need
</code></pre> <p>It works, but because my list can contain thousands of elements, I'd be calling <code>subprocess</code> as many times -- which seems very inefficient.</p> <p>Is there a way in which I can call <code>subprocess</code> only once, before the loop, and then give the active subprocess the necessary command within the loop? Or would that make no sense in terms of speed/efficiency?</p> <p>I found <a href="https://stackoverflow.com/questions/9322796/keep-a-subprocess-alive-and-keep-giving-it-commands-python">this</a> question, which seems to be related -- but I can't manage to translate it to my scenario. I also did not find my solution in the <a href="https://docs.python.org/3/library/subprocess.html" rel="nofollow noreferrer">docs for subprocess</a>. I imagine it would be something like this (note that <code>stdin=PIPE</code> is needed for the writes to work):</p> <pre><code>cp = ...        # my classpath string
java_file = ... # the file with the java code
arg1 = ...      # an argument string (always the same value)

cmd = [...]  # &lt;-- ???
process = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=True)

items = [...]  # my list of items
for item in items:
    arg2 = ...  # calculated from item inside the python script
    process.stdin.write(bytes(..., 'utf-8'))  # &lt;-- ???
    process.stdin.flush()
    result = process.stdout.readline()  # the result I need
</code></pre> <p>... where I can't figure out what the two commands should be (in the lines that have the question marks).</p> <p>Is what I want possible? Any help much appreciated!</p>
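The pattern does work, provided the child is started with `stdin=PIPE` and the other side reads a line and writes exactly one result line per input (an assumption here: the Java `main()` would have to be changed to loop over stdin instead of taking arguments). A sketch using a tiny Python child as a stand-in for the JVM:

```python
import sys
from subprocess import Popen, PIPE

# stand-in for the Java process: reads one request line, writes one result line
child_code = "import sys\nfor line in sys.stdin:\n    print(line.strip().upper())"
proc = Popen([sys.executable, "-u", "-c", child_code],
             stdin=PIPE, stdout=PIPE, text=True, bufsize=1)

results = []
for item in ["a", "b", "c"]:
    proc.stdin.write(item + "\n")  # one request per line
    proc.stdin.flush()
    results.append(proc.stdout.readline().strip())

proc.stdin.close()
proc.wait()
print(results)  # ['A', 'B', 'C']
```

Starting the JVM once and streaming requests avoids paying the JVM startup cost per item, which is usually where the hours go.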
<python><for-loop><subprocess>
2023-10-05 19:07:52
1
732
rdv
77,239,727
4,133,188
Open3D: Creating a partial sphere
<p>I am using Open3D with Python and would like to create a partial sphere based on limits for $\phi$ and $\theta$ (spherical coordinate representation). The <code>create_sphere</code> method only allows me to create complete spheres.</p> <pre><code>import open3d as o3d

a = o3d.geometry.TriangleMesh.create_sphere()
a.compute_vertex_normals()
o3d.visualization.draw_plotly([a])
</code></pre> <p><a href="https://i.sstatic.net/KbPXc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KbPXc.png" alt="enter image description here" /></a></p> <p>If Open3D is not the appropriate tool for this, I am open to using a different package and would like to create the partial sphere using that package.</p> <p>Thanks!</p>
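One fallback, assuming building the mesh manually is acceptable: generate the patch vertices from clipped spherical-coordinate ranges with NumPy, then hand them to Open3D (e.g. as a `PointCloud`, or as a `TriangleMesh` with triangles built from consecutive grid indices). The NumPy-only vertex part:

```python
import numpy as np

def partial_sphere_vertices(r=1.0, theta_max=np.pi / 2, phi_max=np.pi, n=30):
    """Vertices of a sphere patch: polar angle in [0, theta_max],
    azimuth in [0, phi_max]."""
    theta, phi = np.meshgrid(
        np.linspace(0.0, theta_max, n),
        np.linspace(0.0, phi_max, n),
    )
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return np.column_stack([x.ravel(), y.ravel(), z.ravel()])

pts = partial_sphere_vertices()
# every point sits on the unit sphere; theta_max = pi/2 keeps z >= 0
```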
<python><open3d>
2023-10-05 18:28:58
1
771
BeginnersMindTruly
77,239,704
856,804
In Ray, do worker node system logs get streamed back to the head node or only stay on the worker node by default
<p>I noticed that system logs on a worker node become unavailable when the node is gone, so I suspect the logs only stay on the worker node. Is that correct? If so, how do the worker node's logs become available on the Ray dashboard?</p> <p>The Ray log persistence doc at <a href="https://docs.ray.io/en/latest/cluster/kubernetes/user-guides/logging.html#putting-everything-together" rel="nofollow noreferrer">https://docs.ray.io/en/latest/cluster/kubernetes/user-guides/logging.html#putting-everything-together</a> only shows configuration for the head node; does the same configuration need to be applied to worker nodes too?</p>
<python><ray>
2023-10-05 18:25:29
1
9,110
zyxue
77,239,338
1,968,829
How to optimize this iteration over a pandas dataframe
<p>I have the following dataframe:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd
import re

d = {
    'I am a sentence of words.': 'words',
    'I am not a sentence of words.': 'words',
    'I have no sentence with words or punctuation': 'letter',
    'I am not a sentence with a letter or punctuation': 'letter'}
df = pd.Series(d).rename_axis('sentence').reset_index(name='mention')
</code></pre> <pre class="lang-bash prettyprint-override"><code>                                           sentence mention
0                         I am a sentence of words.   words
1                     I am not a sentence of words.   words
2      I have no sentence with words or punctuation  letter
3  I am not a sentence with a letter or punctuation  letter
</code></pre> <p>And I apply the following method to this for matching of various regex patterns:</p> <pre><code>def get_negated(row):
    negated = False
    # missed negation
    terms = ['neg', 'negative', 'no', 'free of', 'not', 'without', 'denies', 'ruled out']
    for term in terms:
        regex_str = r&quot;(?:\s+\S+)*\b{0}(?:\s+\S+)*\s+{1}\b&quot;.format(term, row.mention)
        if re.search(regex_str, row['sentence']):  # or re.search(regex_str2, row.sentence)
            negated = True
            break
    return int(negated)
</code></pre> <p>via iteration:</p> <pre><code>negated_terms = []
for row in df.itertuples():
    negated_terms.append(get_negated(row))
</code></pre> <p>and then add a new column to the dataframe via:</p> <pre><code>df['negated'] = negated_terms
</code></pre> <p>with the following output:</p> <pre><code>                                           sentence mention  negated
0                         I am a sentence of words.   words        0
1                     I am not a sentence of words.   words        1
2      I have no sentence with words or punctuation  letter        0
3  I am not a sentence with a letter or punctuation  letter        1
</code></pre> <p>This works fine, but there are millions of rows in the dataframe and a few other methods that return other lists to create other new columns based on other regex patterns. As is, this takes several hours to run. 
I was thinking of using the <code>apply</code> method to hopefully speed up the process, but given that there are multiple methods, I suspect this would actually be no faster than my current implementation. I'm wondering if there is a more efficient (e.g., vectorized) method to speed this up. For the life of me, I haven't been able to find such a beast.</p>
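One direction worth noting (a sketch, with the pattern simplified to "negation term anywhere before the mention" rather than the exact word-chain regex above): the per-row pattern only varies with `mention`, so grouping by mention allows one vectorized `str.contains` call per distinct mention value instead of up to eight `re.search` calls per row:

```python
import pandas as pd

terms = ['neg', 'negative', 'no', 'free of', 'not', 'without', 'denies', 'ruled out']

d = {
    'I am a sentence of words.': 'words',
    'I am not a sentence of words.': 'words',
    'I have no sentence with words or punctuation': 'letter',
    'I am not a sentence with a letter or punctuation': 'letter'}
df = pd.Series(d).rename_axis('sentence').reset_index(name='mention')

negated = pd.Series(0, index=df.index)
for mention, grp in df.groupby('mention'):
    # one compiled pattern per mention, applied to the whole group at once
    pat = rf"\b(?:{'|'.join(terms)})\b.*?\b{mention}\b"
    negated[grp.index] = grp['sentence'].str.contains(pat, regex=True).astype(int)
df['negated'] = negated

print(df['negated'].tolist())  # [0, 1, 0, 1]
```

With a bounded set of mention values this cuts the number of regex scans from rows × terms to roughly the number of distinct mentions.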
<python><pandas><vectorization>
2023-10-05 17:27:37
1
2,191
horcle_buzz
77,239,148
5,405,669
psycopg stops deserializing in celery app context
<p>I have a psycopg <code>ConnectionPool</code> that extends the flask app:</p> <pre class="lang-py prettyprint-override"><code>def init_pool(cfg, connection_class, name) -&gt; ConnectionPool:
    pool = ConnectionPool(
        conninfo=f&quot;postgresql://{cfg['user']}:{cfg['password']}@{cfg['host']}/{cfg['database']}&quot;,
        min_size=cfg['min_pool_size'],
        max_size=cfg['max_pool_size'],
        connection_class=connection_class,
        kwargs={&quot;autocommit&quot;: True, &quot;options&quot;: &quot;-c idle_session_timeout=0&quot;},
        name=name
    )
    pool.wait(timeout=30.0)
    with pool.connection() as conn:
        # retrieve a connection to register the user-defined types
        # in the global scope of the application (*default*)
        register_composite_types(conn)
    return pool


def register_composite_types(conn):
    &quot;&quot;&quot;
    Convert postgresql type -&gt; python
    * Register the shared composite types in the conn scope.
    * Utilizes the built-in psycopg factory call
    &quot;&quot;&quot;
    with conn.cursor() as cur:
        for (t,) in cur.execute(&quot;select shared_types()&quot;):
            if not conn.broken:
                info = CompositeInfo.fetch(conn, t)
                register_composite(info, context=None)
            else:
                raise LevelsDbException(project_id=None, filename=None,
                                        message=&quot;Failed to initialize: Broken connection&quot;)
</code></pre> <p>I'm using a custom task class that injects the flask app context (per the latest documentation for <a href="https://flask.palletsprojects.com/en/2.3.x/patterns/celery/" rel="nofollow noreferrer">flask+celery</a>):</p> <pre class="lang-py prettyprint-override"><code>def celery_init(app: Flask) -&gt; Celery:
    class FlaskTask(Task):
        def __call__(self, *args: object, **kwargs: object) -&gt; object:
            with app.app_context():
                return self.run(*args, **kwargs)

    # instantiated once for each task. Each task serves multiple task requests.
    celery_app = Celery(app.name, task_cls=FlaskTask)
    celery_app.config_from_object(celeryconfig)
    celery_app.set_default()
    app.extensions[&quot;celery&quot;] = celery_app
    return celery_app
</code></pre> <p>I have a <code>shared_task</code> that uses the pool by way of the flask proxy, <code>current_app</code>:</p> <pre class="lang-py prettyprint-override"><code>with current_app.inspection_db_pool.connection() as conn:
    field_data = db.get_file_fields(
        conn,
        project_id,
        path,
        hashstr
    )
    # field_data (a Cursor returned from `conn.execute`)
    fields: list[dict] = fields_from_levels_db(field_data)
    # ...


def fields_from_levels_db(file_fields: Iterable) -&gt; list[dict]:
    &quot;&quot;&quot;
    file_fields is a Cursor
    &quot;&quot;&quot;
    return [
        dict(
            idx = field.field_idx,
            purpose = field.purpose,
            ...
            levels = field.levels
        )
        for (field,) in file_fields
    ]
</code></pre> <p>The <code>shared_task</code> function worked as intended in the flask app. However, as a celery worker configured using the <code>FlaskTask</code>, I get the following error (<code>field_idx</code> is supposed to be a dict key):</p> <pre class="lang-py prettyprint-override"><code>AttributeError: 'str' object has no attribute 'field_idx'
</code></pre> <p>My troubleshooting efforts included referencing the pool by way of <code>self.flask_app.inspection_pool</code>. I got the same error.</p> <p>Generally speaking, this type of error occurs when the postgresql user-types are not registered and/or available in the current scope. Am I properly registering the user-types in the relevant scope?</p>
<python><python-3.x><flask><celery-task><psycopg3>
2023-10-05 16:54:17
1
916
Edmund's Echo
77,239,047
2,386,113
Different floating point precision in Matrix multiplication in Python and C++
<p>I am trying to do a very simple thing:</p> <ol> <li>take a square 2D array</li> <li>take its transpose</li> <li>multiply the array with its transpose</li> </ol> <p>I am trying to perform the above steps in C++ and Python as shown in the programs below:</p> <p><strong>C++ Program:</strong></p> <pre><code>#include &lt;iostream&gt;
#include &lt;iomanip&gt; // For setw

int main() {
    float R[16] = {
        0.5,  0.63245553, -0.5,  0.31622777,
        0.5,  0.31622777,  0.5, -0.63245553,
        0.5, -0.31622777,  0.5,  0.63245553,
        0.5, -0.63245553, -0.5, -0.31622777
    };

    const int nRows = 4;
    const int nCols = 4;

    float result[nRows][nRows] = { 0 };

    // Perform matrix multiplication
    for (int i = 0; i &lt; nRows; i++) {
        for (int j = 0; j &lt; nRows; j++) {
            for (int k = 0; k &lt; nCols; k++) {
                result[i][j] += R[i * nCols + k] * R[j * nCols + k];
            }
        }
    }

    // Print the result with left-aligned columns and a padding of 15 characters
    for (int i = 0; i &lt; nRows; i++) {
        for (int j = 0; j &lt; nRows; j++) {
            std::cout &lt;&lt; std::left &lt;&lt; std::setw(15) &lt;&lt; result[i][j] &lt;&lt; &quot; &quot;;
        }
        std::cout &lt;&lt; std::endl;
    }

    return 0;
}
</code></pre> <p><strong>Python Program:</strong></p> <pre><code>import numpy as np

R = np.array([
    0.5,  0.63245553, -0.5,  0.31622777,
    0.5,  0.31622777,  0.5, -0.63245553,
    0.5, -0.31622777,  0.5,  0.63245553,
    0.5, -0.63245553, -0.5, -0.31622777
]).reshape(4, 4)

result = np.dot(R, R.T)

# Print the result
print(result)
</code></pre> <p><strong>Problem:</strong> As can be seen in the above programs, I am using exactly the same input array/matrix, but the precision of the results of the multiplication is quite different.</p> <p>Except for the diagonal, most of the values are quite different, with huge differences in precision. How can I get the same precision in both languages? <strong>Why are even the signs flipped for some elements?</strong></p> <blockquote> <p><strong>C++ Results:</strong></p> </blockquote> <pre><code>1               -1.49012e-08    0               -7.45058e-09
-1.49012e-08    1               0               0
0               0               1               -1.49012e-08
-7.45058e-09    0               -1.49012e-08    1
</code></pre> <blockquote> <p><strong>Python Results:</strong></p> </blockquote> <pre><code>[[1.00000000e+00 2.77555756e-17 0.00000000e+00 5.32461714e-11]
 [2.77555756e-17 1.00000000e+00 5.32461852e-11 0.00000000e+00]
 [0.00000000e+00 5.32461852e-11 1.00000000e+00 2.77555756e-17]
 [5.32461714e-11 0.00000000e+00 2.77555756e-17 1.00000000e+00]]
</code></pre>
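The gap is almost certainly the element type, not the language: the C++ code accumulates in 32-bit `float`, while `np.array(...)` defaults to `float64`. A quick NumPy check (the exact float32 values may differ slightly from the C++ output because the summation order inside `@` can differ from the naive loop, but the error magnitudes match):

```python
import numpy as np

vals = [0.5,  0.63245553, -0.5,  0.31622777,
        0.5,  0.31622777,  0.5, -0.63245553,
        0.5, -0.31622777,  0.5,  0.63245553,
        0.5, -0.63245553, -0.5, -0.31622777]

R64 = np.array(vals).reshape(4, 4)   # float64, like the Python version
R32 = R64.astype(np.float32)         # float, like the C++ version

res64 = R64 @ R64.T
res32 = R32 @ R32.T                  # float32 arithmetic throughout

# both are the identity up to rounding noise, but the noise floors differ:
# ~1e-8 for float32 vs ~1e-17..1e-11 for float64 (different values and signs)
print(np.abs(res32 - np.eye(4)).max())
print(np.abs(res64 - np.eye(4)).max())
```

The off-diagonal entries are pure rounding residue of sums that should cancel to zero, so their signs are arbitrary and depend on the rounding of each intermediate product.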
<python><c++><precision><floating-accuracy>
2023-10-05 16:36:19
0
5,777
skm
77,239,027
4,489,998
Basinhopping (two-phase) optimization on multivariate-valued functions
<h3>Background</h3> <p>So far, I've been using <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.least_squares.html" rel="nofollow noreferrer"><code>scipy.optimize.least_squares</code></a> to find a root of a multivalued function <code>f: R^n -&gt; R^m</code>. This works great for local optimisation, and I'm trying to find an equivalent for &quot;global&quot; optimization, starting with two-phase methods (doing global optimization by running many local minimisations with different initial conditions).</p> <p>I'd like to do this with <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.basinhopping.html" rel="nofollow noreferrer">Basinhopping</a>, similarly to this:</p> <pre><code># local
def custom_minimizer(fun, _x0, *args, **kwargs):
    return scipy.optimize.least_squares(fun, _x0)

# global: minimize f starting at x0
ret = basinhopping(
    f,
    x0,
    minimizer_kwargs={&quot;method&quot;: custom_minimizer},
)
</code></pre> <p>This doesn't work, as basinhopping requires f to output a scalar. However, if I use <code>lambda x: np.sum(f(x)**2)</code> instead as the argument of <code>basinhopping</code>, the <code>fun</code> argument in <code>custom_minimizer</code> won't be <code>f</code> anymore, and I don't want to lose this information.</p> <h3>Question</h3> <p>Can I do multivariate-valued optimization (with a specific norm) using scipy's basinhopping function, or similar two-phase methods? In particular, is there a way to specify the norm and the objective function separately, instead of directly giving the function <code>x -&gt; norm(f(x))</code> as an argument?</p>
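One approach that appears to work (a sketch, not battle-tested): `scipy.optimize.minimize` accepts a *callable* as `method`, and `basinhopping` forwards `minimizer_kwargs` to `minimize`. The custom method can ignore the scalar objective it is handed, run `least_squares` on the residual function captured by closure, and return an `OptimizeResult` whose `fun` is the scalar norm basinhopping expects:

```python
import numpy as np
from scipy.optimize import OptimizeResult, basinhopping, least_squares

def residuals(x):
    # f: R^2 -> R^2; roots at x = (1, 1) and (-1, -1)
    return np.array([x[0] ** 2 - 1.0, x[1] - x[0]])

def local_least_squares(fun, x0, args=(), **kwargs):
    # ignore the scalar `fun` passed in; minimize the residuals directly
    res = least_squares(residuals, x0)
    return OptimizeResult(x=res.x, fun=float(np.sum(res.fun ** 2)),
                          nfev=res.nfev, success=res.success)

ret = basinhopping(
    lambda x: float(np.sum(residuals(x) ** 2)),  # scalar view, for hopping only
    np.array([3.0, -2.0]),
    minimizer_kwargs={"method": local_least_squares},
    niter=10,
)
print(ret.x, ret.fun)
```

This keeps the residual structure for the local solver while basinhopping only ever sees the scalar norm, which is the separation the question asks for.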
<python><scipy><scipy-optimize>
2023-10-05 16:32:47
0
2,185
TrakJohnson
77,238,998
769,486
dependency-injector: How to provide fallbacks for configuration values?
<p>I’m in the process of introducing dependency-injector to some old projects. In one of them I have a config value that has a fallback to another value and a default value. Essentially I’m looking for an equivalent of:</p> <pre class="lang-py prettyprint-override"><code>value = config.get(&quot;value&quot;, config.get(&quot;fallback&quot;, &quot;default&quot;)) </code></pre> <p>I’m especially interested in how to pass such a value to a service defined in a <code>provider.DeclarativeContainer</code>.</p> <p>While <a href="https://python-dependency-injector.ets-labs.org/" rel="nofollow noreferrer">the docs</a> provide many examples on how to <a href="https://python-dependency-injector.ets-labs.org/providers/configuration.html" rel="nofollow noreferrer">add configuration from different sources</a> I couldn’t find a lot about accessing config values. I also looked at the docstrings of the methods of <code>provider.Configuration()</code> but without success.</p>
<python><dependency-injection>
2023-10-05 16:28:48
2
956
zwirbeltier
77,238,991
2,767,937
How can I make my Python library with TensorFlow dependencies adaptable for both CPU and GPU versions using Setuptools?
<p>I am building a Python library that will be deployed as a package on PyPI. The build system I'm using is Setuptools. The library is supposed to work with TensorFlow as a dependency, so it is pretty straightforward to add it to the project:</p> <pre class="lang-ini prettyprint-override"><code>[project]
...
dependencies = [
    &quot;tensorflow &gt;= 2.11.0&quot;,
]
</code></pre> <p>Now, the problem is that my library is able to work both with <code>tensorflow</code> and <code>tensorflow-gpu</code>, and I would like to deliver a package that has <code>tensorflow</code> as the default dependency while letting the user select a different &quot;flavor&quot; (in my case, using <code>tensorflow-gpu</code> to take advantage of the GPU for the tensor computations).</p> <p>I've gone through the documentation of <a href="https://setuptools.pypa.io/en/latest/userguide/dependency_management.html" rel="nofollow noreferrer">Setuptools</a> but I couldn't find any help. As a possible solution I thought that using the <code>[project.optional-dependencies]</code> section could solve my problem, for example setting a field as follows:</p> <pre class="lang-ini prettyprint-override"><code>[project.optional-dependencies]
gpu = [&quot;tensorflow-gpu &gt;= 2.11.0&quot;]
</code></pre> <p>I could run <code>pip install myLibrary[gpu]</code> to also include the optional dependencies; however, the required dependencies (i.e. <code>tensorflow</code>) would always be installed, and that's not the behavior I'm expecting.</p> <p>The other, drastic way of solving this problem would be to ship two different libraries (<code>myLibrary</code> and <code>myLibrary-gpu</code>), but I think that is not an elegant way of managing this situation, especially because the code inside the library is exactly the same for both versions.</p> <p>Any help on that would be very much appreciated!</p>
<python><tensorflow><pip><setuptools><pep>
2023-10-05 16:28:03
1
629
TantrixRobotBoy
77,238,856
22,307,474
Problems installing libraries via pip after installing Python 3.12
<p>Today I installed the new Python 3.12 on my Ubuntu 22.04 from the ppa repository <strong>ppa:deadsnakes/ppa</strong>.</p> <p>Everything works, but when I try to install some library with the command <code>python3.12 -m pip install somelibrary</code>, I get the following error</p> <pre><code>ERROR: Exception: Traceback (most recent call last): File &quot;/usr/lib/python3/dist-packages/pip/_internal/cli/base_command.py&quot;, line 165, in exc_logging_wrapper status = run_func(*args) ^^^^^^^^^^^^^^^ File &quot;/usr/lib/python3/dist-packages/pip/_internal/cli/req_command.py&quot;, line 205, in wrapper return func(self, options, args) ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/lib/python3/dist-packages/pip/_internal/commands/install.py&quot;, line 285, in run session = self.get_default_session(options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/lib/python3/dist-packages/pip/_internal/cli/req_command.py&quot;, line 75, in get_default_session self._session = self.enter_context(self._build_session(options)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/lib/python3/dist-packages/pip/_internal/cli/req_command.py&quot;, line 89, in _build_session session = PipSession( ^^^^^^^^^^^ File &quot;/usr/lib/python3/dist-packages/pip/_internal/network/session.py&quot;, line 282, in __init__ self.headers[&quot;User-Agent&quot;] = user_agent() ^^^^^^^^^^^^ File &quot;/usr/lib/python3/dist-packages/pip/_internal/network/session.py&quot;, line 157, in user_agent setuptools_dist = get_default_environment().get_distribution(&quot;setuptools&quot;) ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/lib/python3/dist-packages/pip/_internal/metadata/__init__.py&quot;, line 24, in get_default_environment from .pkg_resources import Environment File &quot;/usr/lib/python3/dist-packages/pip/_internal/metadata/pkg_resources.py&quot;, line 9, in &lt;module&gt; from pip._vendor import pkg_resources File &quot;/usr/lib/python3/dist-packages/pip/_vendor/pkg_resources/__init__.py&quot;, line 2164, in 
&lt;module&gt; register_finder(pkgutil.ImpImporter, find_on_path) ^^^^^^^^^^^^^^^^^^^ AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'? </code></pre> <p>Any suggestions why this is happening?</p> <p>EDIT: This problem doesn't exist when I use venv, it seems to me that the problem is that pip uses <code>/usr/lib/python3</code> instead of <code>/usr/lib/python3.12</code></p>
<python><python-3.x><pip>
2023-10-05 16:10:58
3
510
bin4ry
77,238,738
20,591,261
Iterate average of n elements per row
<p>I have a dataframe with a shape of (2205, 6000) obtained using a sensor with a sampling rate of 100 Hz. Each row corresponds to a one-minute cycle (in this case I have 2205 cycles, 6000 samples per minute), but I need to get info for each second.</p> <p>So, for each row I need to take the mean of each group of 100 values up to the last value of the row (in this case I'll have 60 values per row) and append it to another df named &quot;Seconds&quot;, doing this operation for all the rows.</p> <p>The <a href="https://raw.githubusercontent.com/Arancium98/ucihydraulic/main/Data/PS1.txt" rel="nofollow noreferrer">Data</a></p> <p>I think I need to create a double for loop for each row but I don't know how, any advice?</p>
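A double for loop is not needed here. Assuming the 6000 columns really are 60 consecutive one-second windows of 100 samples each, a reshape plus a mean over the last axis does the whole job in one vectorized step (a sketch with random stand-in data, since the variable names are illustrative):

```python
import numpy as np

# illustrative stand-in for the (2205, 6000) dataframe's values
rng = np.random.default_rng(0)
minutes = rng.normal(size=(2205, 6000))

# view each row as 60 windows of 100 samples, then average each window
seconds = minutes.reshape(minutes.shape[0], 60, 100).mean(axis=2)

print(seconds.shape)  # (2205, 60)
```

With a pandas DataFrame `df`, the same idea applies to `df.to_numpy()`, and the result can be wrapped back with `pd.DataFrame(seconds)`.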
<python><dataframe>
2023-10-05 15:54:05
2
1,195
Simon
77,238,646
14,667,736
Python: http.server.ThreadingHTTPServer fails since update to Python 3.12
<p>I've got a simple Python HTTP server application based on <code>http.server.ThreadingHTTPServer</code>, running in a Docker container. It works fine with Python 3.11, but under Python 3.12 aborts with the following as soon as an HTTP request comes in:</p> <pre><code>Exception occurred during processing of request from ('127.0.0.1', 50544) Traceback (most recent call last): File &quot;/usr/local/lib/python3.12/socketserver.py&quot;, line 318, in _handle_request_noblock self.process_request(request, client_address) File &quot;/usr/local/lib/python3.12/socketserver.py&quot;, line 706, in process_request t.start() File &quot;/usr/local/lib/python3.12/threading.py&quot;, line 971, in start _start_new_thread(self._bootstrap, ()) RuntimeError: can't create new thread at interpreter shutdown </code></pre> <p>The Dockerfile is:</p> <pre><code>FROM python:alpine RUN pip3 install requests psycopg2-binary semantic-version COPY mysource / ENTRYPOINT /myprogram.py </code></pre> <p>and the server code in <code>myprogram.py</code> is something like</p> <pre class="lang-py prettyprint-override"><code>def runServer(): server = http.server.ThreadingHTTPServer(('0.0.0.0', 1234), Server) server.serve_forever() ... thread = threading.Thread(target=runServer) thread.start() ... </code></pre> <p>Using <code>HTTPServer</code> instead of <code>ThreadingHTTPServer</code> makes the problem go away.</p> <p>Everything worked fine until a few days ago, when Python 3.12 came out and the <code>python:alpine</code> image was obviously updated to it. With <code>FROM python:3.12-alpine</code> or <code>FROM python</code> (which I suppose is also 3.12) in the Dockerfile it fails; with <code>FROM python:3.11-alpine</code> or <code>FROM python:3.11</code> it works.</p> <p>Did anyone else also observe this? Any idea what to do, short of pinning the Python version to 3.11 (which I'm reluctant to do)?</p>
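A plausible explanation (inferred from the traceback, not verified against this exact container) is that the script's main thread falls off the end while only the server thread is left: Python 3.12 refuses to spawn the daemon handler threads that `ThreadingMixIn` creates per request once interpreter shutdown has begun, so the first incoming request fails. A minimal sketch that keeps a handle on the server so the main thread can block on it explicitly — the `Server` handler body here is made up for illustration:

```python
import http.server
import threading

class Server(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep request logging quiet
        pass

# port 0 = pick a free ephemeral port (the original uses 1234)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Server)
thread = threading.Thread(target=server.serve_forever)
thread.start()
# In the real program: block here (thread.join(), signal.pause(), a main loop...)
# so the interpreter never starts shutting down while requests may arrive.
```

The `HTTPServer` variant "works" because it handles requests on the server thread itself and never needs to create threads during shutdown.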
<python><python-3.x><http.server>
2023-10-05 15:42:46
0
341
Wolfram Rösler
77,238,628
805,886
In VS Code: Red (stop running script) square button disappeared
<p>This happened to me: For years had a green &quot;play&quot; button at top right of screen to run my python script. When running (<strong>not in debug mode</strong>), a red square &quot;stop&quot; button would be in that spot that I could press to cause my code to stop running. Just today I was installing VS Code on a client's machine and the stop button does not appear.</p> <p>Clue: When I was trying to pip install the pyodbc library, it errored out, saying I needed to install some C++ stuff. I may have installed the wrong stuff or too much. BUT it was then able to pip install pyodbc so I let it go. But now I have no stop button AND script output now goes to the terminal tab at bottom of VS Code instead of the Output tab.</p> <p>Ideas?</p>
<python><visual-studio-code>
2023-10-05 15:40:34
1
1,054
ScotterMonkey
77,238,503
14,982,219
pydantic. Add optional field depends on value of another field
<p>Hi I have to validate this two json<br /> first json</p> <pre><code>{ 'building_type' : 'flat', 'direction': 'Fake Street', 'number': 123, 'apartment': 3 } </code></pre> <p>second json</p> <pre><code>{ 'building_type' : 'house', 'direction': 'Fake Street', 'number': 123 } </code></pre> <p>my model</p> <pre class="lang-py prettyprint-override"><code>from enum import Enum from pydantic import BaseModel class Building(Enum): FLAT = 'flat' HOUSE = 'house' class Address(BaseModel): model_config = ConfigDict(extra='forbid') building_type: Building direction: str </code></pre> <p>I need both json examples to be valid. Key 'apartment' will only be provided in case 'building_type' = 'flat'. Is there a way to have field apartment be checked as mandatory only when 'building_type' = 'flat'?</p>
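One way to express this (a sketch assuming pydantic v2's `model_validator`; the `number` and `apartment` types are guesses from the JSON, since the model in the question omits them) is to declare `apartment` as optional and enforce the cross-field rule after parsing:

```python
from enum import Enum
from typing import Optional

from pydantic import BaseModel, ConfigDict, model_validator

class Building(Enum):
    FLAT = 'flat'
    HOUSE = 'house'

class Address(BaseModel):
    model_config = ConfigDict(extra='forbid')

    building_type: Building
    direction: str
    number: int
    apartment: Optional[int] = None  # only meaningful for flats

    @model_validator(mode='after')
    def check_apartment(self):
        if self.building_type is Building.FLAT and self.apartment is None:
            raise ValueError("'apartment' is required when building_type is 'flat'")
        if self.building_type is Building.HOUSE and self.apartment is not None:
            raise ValueError("'apartment' is not allowed when building_type is 'house'")
        return self
```

With this, both sample payloads validate, and a flat without an `apartment` key is rejected.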
<python><json><pydantic>
2023-10-05 15:23:38
0
381
vll1990
77,238,451
15,900,832
Plotly: Hide colorbar on Surface plot
<p>I can't figure out how to hide the colorbar on a Plotly Surface plot.</p> <p>The code I am using is:</p> <pre><code>import plotly.graph_objs as go import numpy as np x = np.linspace(0, 9, 10) y = x z = np.linspace(0, 99, 100).reshape((10, 10)) fig = go.Figure() fig.add_trace( go.Surface( x=x, y=y, z=z ) ) fig.update_layout( scene=dict( xaxis_title='x', yaxis_title='y', zaxis_title='z' ) ) fig.show() </code></pre> <p>Here are the few things I have tried:</p> <pre><code>fig.update_coloraxes(showscale=False) </code></pre> <pre><code>fig.update_layout(coloraxis_showscale=False) </code></pre> <pre><code> go.Surface( x=x, y=y, z=z, colorscale=None, colorbar=None, coloraxis=None ) </code></pre> <p>Any other ideas?</p>
<python><plotly><colorbar>
2023-10-05 15:16:45
1
633
basil_man
77,238,396
14,584,978
Duckdb query plan parse to json
<p>Is there a better way to export data from the duckdb in sql? The default is to export to a pretty print format with boxes and details on each.</p> <p><a href="https://duckdb.org/docs/guides/meta/explain.html" rel="nofollow noreferrer">Duckdb Explain docs</a></p> <p>I am currently running the following script to parse to json.</p> <pre><code>import json def extract_boxes(text) -&gt; list[list[str]]: lines = text.split('\n') boxes = [] visited = set() for i in range(len(lines)): for j in range(len(lines[i])): if lines[i][j] == '┌' and (i, j) not in visited: # Find the bottom-left corner of the box for x in range(i + 1, len(lines)): if lines[x][j] == '└': break # Find the top-right corner of the box for y in range(j + 1, len(lines[i])): if lines[i][y] == '┐': break # Extract the box content box_content = [line[j+1:y] for line in lines[i+1:x]] boxes.append('\n'.join(box_content)) # Mark the corners as visited visited.add((i, j)) visited.add((i, y)) visited.add((x, j)) visited.add((x, y)) for i in range (10): boxes = [box.replace(' ', ' ').replace('\u2500','') for box in boxes] return [[element.strip() for element in box.split('\n') if element.strip()] for box in boxes] def list_to_dict(boxes): result = [] for box in boxes: if box: step_type = box[0].strip() details = [] ec = None for item in box[1:]: if &quot;EC:&quot; in item: ec = item.split(&quot;EC:&quot;)[1].strip() else: details.append(item) # You can populate this with actual values if available result.append({ 'StepType': step_type, 'EstimatedCost': ec, 'Details': details }) return result def parse_plan_to_json(text: str) -&gt; str: boxes = extract_boxes(text.encode('utf-8', 'ignore').decode('utf-8')) return json.dumps(list_to_dict(boxes)) </code></pre> <p>I want to have more detailed information about what is in each box. Am I missing something in the api?</p>
<python><sql><duckdb>
2023-10-05 15:10:25
1
374
Isaacnfairplay
77,238,309
565,380
How can I cast a Polars DataFrame to string
<p>How do I stringify a small <em><strong>df.head</strong></em> result.</p> <p>I have the following code:</p> <pre><code>df.head(row_amount).to_pandas().to_string() </code></pre> <p>I normally print out 2 or 3 rows but I need it in a string. I solved it like above by transforming it to pandas first. I have also solved it by Output buffering the print, but those are crazy solutions.</p> <p>Is there an easier way to stringify the df?</p>
<python><pandas><python-polars>
2023-10-05 14:58:21
1
2,597
Hans Wassink
77,238,109
3,535,537
Python - How to append conditionally to an array?
<p>I have a list of lists (records). Each record consists of 4 elements: <code>[id, msg_key, msg_value, date]</code></p> <p>For example:</p> <pre><code>events = [ [1, &quot;foo&quot;, &quot;bar&quot;, 709], [1, &quot;foo&quot;, &quot;bar2&quot;, 710], [1, &quot;foo&quot;, &quot;bar&quot;, 711], [1, &quot;foo&quot;, &quot;bar2&quot;, 712], [1, &quot;foo&quot;, &quot;bar&quot;, 713], [2, &quot;foo&quot;, &quot;toto&quot;, 714], [1, &quot;foo&quot;, &quot;bar&quot;, 715], [1, &quot;foo&quot;, &quot;bar2&quot;, 716] ] </code></pre> <p>I want to build a filtered version of this list of records with the following criteria.</p> <blockquote> <p>Given the current record ensure that the last prior record with the same <code>id</code> and <code>msg_key</code> as the current record, does not have the same <code>msg_value</code>.</p> </blockquote> <p>In this case:</p> <pre><code>events = [ [1, &quot;foo&quot;, &quot;bar&quot;, 709], [1, &quot;foo&quot;, &quot;bar2&quot;, 710], [1, &quot;foo&quot;, &quot;bar&quot;, 711], [1, &quot;foo&quot;, &quot;bar2&quot;, 712], [1, &quot;foo&quot;, &quot;bar&quot;, 713], # &lt;-----------------------------------------------┐ [2, &quot;foo&quot;, &quot;toto&quot;, 714], # | [1, &quot;foo&quot;, &quot;bar&quot;, 715], # remove as same &quot;msg_value&quot; as &quot;matching&quot; row ---┘ [1, &quot;foo&quot;, &quot;bar2&quot;, 716] ] </code></pre> <p>So the end result I seek is:</p> <pre><code>events_filtered = [ [1, &quot;foo&quot;, &quot;bar&quot;, 709], [1, &quot;foo&quot;, &quot;bar2&quot;, 710], [1, &quot;foo&quot;, &quot;bar&quot;, 711], [1, &quot;foo&quot;, &quot;bar2&quot;, 712], [1, &quot;foo&quot;, &quot;bar&quot;, 713], [2, &quot;foo&quot;, &quot;toto&quot;, 714], [1, &quot;foo&quot;, &quot;bar2&quot;, 716] ] </code></pre> <p>I have tried to filter out that row but I can't figure out the correct steps.</p> <p>Here is what I have at the moment:</p> <pre class="lang-py prettyprint-override"><code> out_array = [] for ... 
: event = [id, msg_key, msg_value, date] if ?????: out_array.append(event) </code></pre>
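The `?????` condition only needs the *last* value seen for each `(id, msg_key)` pair, which a dictionary can track in a single pass — a sketch completing the loop above with the sample data:

```python
events = [
    [1, "foo", "bar", 709],
    [1, "foo", "bar2", 710],
    [1, "foo", "bar", 711],
    [1, "foo", "bar2", 712],
    [1, "foo", "bar", 713],
    [2, "foo", "toto", 714],
    [1, "foo", "bar", 715],   # dropped: last (1, "foo") value was already "bar"
    [1, "foo", "bar2", 716],
]

out_array = []
last_value = {}  # (id, msg_key) -> most recent msg_value
for id_, msg_key, msg_value, date in events:
    if last_value.get((id_, msg_key)) != msg_value:
        out_array.append([id_, msg_key, msg_value, date])
    last_value[(id_, msg_key)] = msg_value
```

This is O(n) in the number of records, since each lookup and update is a constant-time dict operation.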
<python>
2023-10-05 14:30:57
2
11,934
Stéphane GRILLON
77,238,097
14,956,120
QPrintPreviewWidget freezes the application when the horizontal scrollbar is shown
<p>I am having a bug with QPrintPreviewWidget (PySide2 5.15.2.1, Windows 11). From the moment I made this post, I didn't found anything related online.</p> <p>When I force the horizontal scrollbar to appear inside the inherited scroll area (by resizing the window), the application becomes unresponsive, freezing instantly until you luckly resize the widget and force the scrollbar to disappear.</p> <p>Here's a example to force trigger the bug:</p> <pre><code>from PySide2.QtWidgets import QApplication, QWidget, QLabel, QHBoxLayout from PySide2.QtGui import QTextDocument from PySide2.QtPrintSupport import QPrinter, QPrintPreviewWidget app = QApplication() doc = QTextDocument() doc.setPlainText('\n' * 200) printer = QPrinter() view = QPrintPreviewWidget(printer) view.paintRequested.connect(doc.print_) window = QWidget() layout = QHBoxLayout() layout.addWidget(view) layout.addWidget(QLabel('Just to trigger the horizontal scrollbar')) window.setLayout(layout) window.show() app.exec_() </code></pre> <p>I had to create a document with more than 1 page, to force the vertical scrollbar to appear. When resized horizontally, at some point the horizontal scrollbar will become visible and lag the application.</p> <p>The only way I found so far to &quot;fix&quot; the error is to set a minimum width to the QPrintPreviewWidget, using QPrintPreviewWidget.setMinimumWidth. This prevents the widget from reaching a small width, dodging (but not fixing) the issue.</p> <p>Do anyone know a way to solve this issue?</p>
<python><pyqt5><pyside2><qscrollarea><qprinter>
2023-10-05 14:29:10
1
1,039
Carl HR
77,238,077
13,184,183
How to enforce pyspark reader to use specified schema instead of inferring one from parquet?
<p>I have a parquet file which has some schema. It involves some invalid characters, so I want to use my own schema. Let's say the parquet metadata contains <code>original_schema</code> and I have <code>new_schema</code> which is obtained by renaming most of the fields of <code>original_schema</code>, so they both have same length and order. I do</p> <pre class="lang-py prettyprint-override"><code>df = spark.read.schema(new_schema).parquet(source) </code></pre> <p>However, if column present in both schemas, it will be fine, but if it is present only in <code>new_schema</code>, it will be <code>null</code>. As if instead of using <code>new_schema</code> reader just try to find columns from it in the <code>original_schema</code>. Changing option <code>mergeSchema</code> doesn't change anything.</p> <p>Full code:</p> <pre class="lang-py prettyprint-override"><code>df1 = spark.read.parquet(source) schema = df1.schema new_fields = [ StructField(f.name.replace('.', '___').replace('(', '-').replace(')', '-'), f.dataType, f.nullable) for f in schema.fields ] new_schema = StructType(new_fields) df2 = spark.read.schema(new_schema).parquet(source) </code></pre> <p>How can I force reader to use <code>new_schema</code> for parquet file?</p>
<python><apache-spark><pyspark>
2023-10-05 14:26:24
1
956
Nourless
77,237,969
3,041,764
How to serve static files from submodules in Flask
<p>I'm writing a Flask application divided into modules, with each module residing in a separate folder that includes 'templates' and 'static' folders.</p> <p>In each module, I'm passing a global variable 'custom_js' to the template:</p> <pre><code>module_bp = Blueprint('module', __name__, template_folder='templates', static_folder='static') # pass custom js files to the base template @module_bp.context_processor def custom_js_context(): return {'custom_js': ['some_file.js']} </code></pre> <p>I include these JavaScript files in the main template base.html like this:</p> <pre><code>{% if custom_js %} {% for js_file in custom_js %} &lt;script src=&quot;{{ url_for('static', filename='js/' + js_file) }}&quot;&gt;&lt;/script&gt; {% endfor %} {% endif %} </code></pre> <p>The code above results in adding to base.html. The physical file is located at /module/static/js/some_file.js, but accessing <a href="http://127.0.0.1:8080/static/js/some_file.js" rel="nofollow noreferrer">http://127.0.0.1:8080/static/js/some_file.js</a> results in a 404 error.</p> <p>How can I correctly serve these JavaScript files from the modules?</p> <hr /> <p>One idea is to modify the blueprint to:</p> <pre><code>module_bp = Blueprint('module', __name__, template_folder='templates', static_folder='static', static_url_path='/module/static' ) </code></pre> <p>and serving the files to the template like this:</p> <pre><code>@module_bp.context_processor def custom_js_context(): custom_js_files = ['some_file.js'] custom_js_with_prefix = ['/module/static/js' + filename for filename in custom_js_files] return {'custom_js': custom_js_with_prefix} </code></pre> <p>with updated base.html loop</p> <pre><code>{% if custom_js %} {% for js_file in custom_js %} &lt;script src=&quot;{{ js_file) }}&quot;&gt;&lt;/script&gt; {% endfor %} {% endif %} </code></pre> <p>However, I'm not sure to what extent this is the correct approach.</p>
<python><flask><jinja2><static-files>
2023-10-05 14:15:19
1
849
user3041764
77,237,945
16,852,041
E ModuleNotFoundError: No module named 'botocore.compress'
<p>I had both <code>boto3</code> and <code>botocore</code> installed in my conda venv, but was getting this error during compilation:</p> <pre class="lang-bash prettyprint-override"><code>E ModuleNotFoundError: No module named 'botocore.compress' </code></pre>
<python><pip><boto3><modulenotfounderror><botocore>
2023-10-05 14:12:26
1
2,045
DanielBell99
77,237,818
2,160,809
How to load a huggingface pretrained transformer model directly to GPU?
<p>I want to load a huggingface pretrained transformer model directly to GPU (not enough CPU space) e.g. loading BERT</p> <pre><code>from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained(&quot;bert-base-uncased&quot;) </code></pre> <p>would be loaded to CPU until executing</p> <pre><code>model.to('cuda') </code></pre> <p>now the model is loaded into GPU</p> <p>I want to load the model directly into GPU when executing <code>from_pretrained</code>. Is this possible?</p>
<python><nlp><huggingface-transformers>
2023-10-05 13:57:40
1
2,274
cookiemonster
77,237,557
4,317,946
In project build results, Python Pyramid referencing a view source file that no longer exists
<p>In a Python Pyramid project, I tried to rename a project view file temporarily while testing a new version of this file. Originally, the project started with the following view file:</p> <pre><code>/myproject/myproject/views/default.py </code></pre> <p>I then renamed this file to some other temporary name:</p> <pre><code>/myproject/myproject/views/stable_default.py </code></pre> <p>and created a new <code>default.py</code> in the same directory for testing purpose. The idea was that I could return to my starting point by removing the new <code>default.py</code> and again renaming <code>stable_default.py</code> back to <code>default.py</code>.</p> <p>However, after building the project (using <code>myproject/env/bin/pip install e .</code>), the original/renamed view file <code>stable_default.py</code> became recognized as being part of the project (and caused various problems because there are repeating blocks of code in the 2 view files...bad choice on my part to try using this approach). Next, if I remove the file <code>stable_default.py</code>, such that it no longer exists in the source directory (or anywhere else), and try to build the project again, <code>stable_default.py</code> is still being recognized as part of the project (and causing various problems since the source file does not exist...causing the project to fail).</p> <p>In searching through the project files, I found 2 file names associated with the build results that reference the non-existent view source file. 
In this case, I am using Python version 3.9x, which also influences the file/path names shown below:</p> <pre><code>/myproject/env/lib/python3.9/site-packages/myproject/views/stable_default.py </code></pre> <p>and</p> <pre><code>/myproject/env/lib/python3.9/site-packages/myproject/views/__pycache__/stable_default.cpython-39.pyc </code></pre> <p>If I simply delete the above 2 files, then all of the problems go away and the project runs properly.</p> <p>I have not been able to determine what aspect of the source files or build system is holding onto the file name <code>stable_default.py</code> even after this file has been removed from the view source directory.</p> <p>My workaround is to remove those 2 generated files referencing the <code>stable_default.py</code> source after each build of the project, but I would really like to fix this project such that these 2 files do not get generated at all.</p> <p>I suppose a more general question is...how can the developer of a Pyramid project have complete control over which view files are included or not included in the build?</p>
<python><pyramid>
2023-10-05 13:25:38
0
690
Tim D
77,237,544
5,475,506
Identifying Correct JAR Versions for S3 Integration with PySpark 3.5
<p>I am attempting to set up a PySpark environment to read data from S3 using PySpark 3.5 in a Conda environment (Python 3.12). However, I am facing difficulties in identifying the correct versions of the <strong>aws-java-sdk</strong> and <strong>hadoop-aws</strong> JARs to use (needed for reading data from S3 directly). Here are the steps I have followed and the issues I have encountered:</p> <pre class="lang-bash prettyprint-override"><code>conda create --name spark_env conda activate spark_env pip install pyspark </code></pre> <p>Then in my Jupyter notebook (That is using the previously created environment), I'm creating the SparkSession:</p> <pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession spark = SparkSession.builder \ .appName(&quot;Read from S3&quot;) \ .config(&quot;spark.hadoop.fs.s3a.access.key&quot;, &quot;AWS_ACCESS_KEY&quot;) \ .config(&quot;spark.hadoop.fs.s3a.secret.key&quot;, &quot;AWS_SECRET_KEY&quot;) \ .config(&quot;spark.hadoop.fs.s3a.impl&quot;, &quot;org.apache.hadoop.fs.s3a.S3AFileSystem&quot;) \ .getOrCreate() s3_path = &quot;s3a://bucket/file.json&quot; df = spark.read.json(s3_path) df.show() </code></pre> <p>What gaves me the following exception:</p> <p><em>&quot;java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found&quot;</em></p> <p>That's why I have tried to download different versions of</p> <ul> <li><a href="https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-aws" rel="noreferrer">hadoop-aws</a></li> <li><a href="https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk" rel="noreferrer">aws-java-sdk</a></li> </ul> <p>But after adding these jars to the /jars path that Pyspark is using (I have also tried specifying manually the path to these jars), I keep getting exceptions like:</p> <p>&quot;<em>Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.impl.prefetch.PrefetchingStatistics&quot;</em></p> <p>or</p> 
<p>&quot;&quot;&quot; <em>Py4JJavaError: An error occurred while calling o34.json. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 1.0 failed 1 times, most recent failure: Lost task 1.0 in stage 1.0 (TID 10001) (YYY executor driver): java.lang.NoSuchMethodError: org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(Lorg/apache/hadoop/fs/statistics/DurationTracker;Lorg/apache/hadoop/util/functional/CallableRaisingIOE;)Ljava/lang/Object;</em></p> <p>or</p> <p><em>&quot;java.lang.ClassNotFoundException: com.amazonaws.services.s3.model.MultiObjectDeleteException&quot;</em></p> <p><strong>Can someone guide me on how to identify the correct versions of the aws-java-sdk and hadoop-aws JARs to use with PySpark 3.5 for S3 integration, or provide a reliable source where this information is available? Additionally, if there's a different or more recommended approach to reading data from S3 using PySpark 3.5, I'm open to suggestions.</strong></p> <p>Thanks in advance.</p>
<python><amazon-web-services><amazon-s3><pyspark>
2023-10-05 13:24:32
2
516
AngryCoder
77,237,513
3,380,209
How to control the zorder of polygons
<p>I have modified the <a href="https://matplotlib.org/stable/gallery/event_handling/poly_editor.html" rel="nofollow noreferrer">Poly Editor</a> (from the gallery of Matplotlib) to make it able to load several polygons at a time. This works fine except on one point: I don't know how to control the zorder of the polygons.</p> <p>Here is my code (only interesting lines):</p> <pre><code>from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg import polygons # a list of matplotlib.patches.Polygon import poly_editor root = tk.Tk() fig = Figure(figsize=(10, 8)) ax = fig.add_subplot(aspect='equal') canvas = FigureCanvasTkAgg(fig, root) poly_editors = [] for poly in polygons: ax.add_patch(poly) poly_editors.append(poly_editor.PolygonInteractor(ax, poly, canvas)) </code></pre> <p>With this code, the first polygon is below, the last one above.<a href="https://i.sstatic.net/tIq14.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tIq14.png" alt="enter image description here" /></a> I tried to test the use of zorder at this time:</p> <pre><code>for i,poly in enumerate(polygons): poly.set_zorder(10 - i) ax.add_patch(poly) poly_editors.append(poly_editor.PolygonInteractor(ax, poly, canvas)) </code></pre> <p>Then the result is the same, zorder is not taken into account.</p> <p>My goal is to put a polygon above all others when pressing the letter 't' inside it. I have written this code in the poly_editor, which obviously does not work.</p> <pre><code>def on_key_press(self, event): if event.key == 't': if self.poly.contains_point((event.x, event.y)): self.poly.set(zorder=10) else: self.poly.set(zorder=0) self.ax.draw_artist(self.poly) self.canvas.draw_idle() </code></pre> <p>I am new to matplotlib, could somebody help me to understand how zorder works ? I saw many questions related to zorder but I don't find any answer to my question.</p>
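For the question at the end: `zorder` on a `Patch` is honored, but only during a *full* redraw — `Axes.draw` sorts its artists by `zorder`, whereas per-artist `ax.draw_artist(...)` calls (as in the key handler above) paint immediately in call order and ignore stacking entirely. A standalone sketch of raising one polygon via a full redraw (Agg backend so it runs headless; the shapes are made up):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon

fig, ax = plt.subplots()
polys = [
    Polygon([(0, 0), (2, 0), (1, 2)], closed=True, facecolor="tab:blue"),
    Polygon([(1, 0), (3, 0), (2, 2)], closed=True, facecolor="tab:orange"),
]
for i, poly in enumerate(polys):
    poly.set_zorder(i)
    ax.add_patch(poly)

# Raise the first polygon above all others, then trigger a FULL redraw:
# draw()/draw_idle() re-renders the whole Axes with artists sorted by zorder,
# while ax.draw_artist(self.poly) alone would just paint it once, in call order.
polys[0].set_zorder(max(p.get_zorder() for p in polys) + 1)
fig.canvas.draw()
```

So in `on_key_press`, replacing the `draw_artist` call with a plain `self.canvas.draw_idle()` after changing the zorders should make the stacking take effect.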
<python><matplotlib><tkinter-canvas>
2023-10-05 13:20:05
1
3,100
albar
77,237,468
11,028,689
Is there a better way to access values in a JSON file than a 'for' loop?
<p>I have a JSON file which looks like this:</p> <pre><code>[{'data': [{'text': 'add '}, {'text': 'Stani, stani Ibar vodo', 'entity': 'entity_name'}, {'text': ' songs in '}, {'text': 'my', 'entity': 'playlist_owner'}, {'text': ' playlist '}, {'text': 'música libre', 'entity': 'playlist'}]}, {'data': [{'text': 'add this '}, {'text': 'album', 'entity': 'music_item'}, {'text': ' to '}, {'text': 'my', 'entity': 'playlist_owner'}, {'text': ' '}, {'text': 'Blues', 'entity': 'playlist'}, {'text': ' playlist'}]}, {'data': [{'text': 'Add the '}, {'text': 'tune', 'entity': 'music_item'}, {'text': ' to the '}, {'text': 'Rage Radio', 'entity': 'playlist'}, {'text': ' playlist.'}]}] </code></pre> <p>I want to append the values in 'text' for each 'data' in this list.</p> <p>I have tried the following:</p> <pre><code>lst = [] for item in data: p = item['data'] p_st = '' for item_1 in p: p_st += item_1['text'] + ' ' lst.append(p_st) print(lst) Out: ['add Stani, stani Ibar vodo songs in my playlist música libre ', 'add this album to my Blues playlist ', 'Add the tune to the Rage Radio playlist. '] </code></pre> <p>It works, but I am new to JSON and am wondering if there is a better way to do it? Some built-in methods or libraries for JSON maybe?</p>
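The nested loops are already a reasonable approach — the `json` module only parses, it has no path-query helpers — but they collapse to one comprehension, and since the `'text'` fragments already carry their own spacing, a plain `''.join` avoids the stray double spaces that the `+ ' '` version introduces. A sketch over two of the sample records:

```python
data = [
    {"data": [{"text": "add this "}, {"text": "album", "entity": "music_item"},
              {"text": " to "}, {"text": "my", "entity": "playlist_owner"},
              {"text": " "}, {"text": "Blues", "entity": "playlist"},
              {"text": " playlist"}]},
    {"data": [{"text": "Add the "}, {"text": "tune", "entity": "music_item"},
              {"text": " to the "}, {"text": "Rage Radio", "entity": "playlist"},
              {"text": " playlist."}]},
]

# one sentence per record, joined without injecting extra spaces
sentences = ["".join(part["text"] for part in item["data"]) for item in data]
print(sentences[0])  # add this album to my Blues playlist
```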
<python><json><list><readlines>
2023-10-05 13:14:33
2
1,299
Bluetail
77,236,930
4,205,289
PyCharm warning for 'None' argument passed to function decorated with functools.cache
<p>I have a function that may accept None as a value of its argument, this function is cached using functools.cache. Here is a simplified example:</p> <pre><code>from functools import cache counter = 0 @cache def cached_func(arg1, arg2): global counter print('Call of cached func: #{}, args: {}, {}'.format(counter, arg1, arg2)) counter += 1 return None cached_func(1, None) cached_func(2, None) cached_func(1, None) # &lt;- cache will be used cached_func(3, None) </code></pre> <p>Output:</p> <pre><code>Call of cached func: #0, args: 1, None Call of cached func: #1, args: 2, None Call of cached func: #2, args: 3, None </code></pre> <p>As I can see caching works ok. But at the same time PyCharm shows me a warning on None argument:</p> <p><a href="https://i.sstatic.net/tgsew.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tgsew.png" alt="enter image description here" /></a></p> <p>According to <a href="https://docs.python.org/3/library/functools.html#functools.lru_cache" rel="nofollow noreferrer">the documentation for functools</a>, &quot;Since a dictionary is used to cache results, the positional and keyword arguments to the function must be hashable.&quot;</p> <p>I see that None is actually hashable:</p> <pre><code>Python 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)] on win32 Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; None.__hash__() -9223363242286426420 &gt;&gt;&gt; hash(None) -9223363242286426420 </code></pre> <p><strong>So, should I care about this warning? It it ok to pass None as argument to a cached function?</strong></p>
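The warning looks like a false positive: as the interactive session above shows, `None` is hashable, which is all `functools.cache` requires of its arguments, and the counter output already confirms the repeated call was served from the cache. A self-contained version of that check (sketch):

```python
from functools import cache

calls = []

@cache
def cached_func(arg1, arg2):
    calls.append((arg1, arg2))  # record only real invocations
    return arg2

cached_func(1, None)
cached_func(2, None)
cached_func(1, None)  # cache hit: no new entry in `calls`
assert calls == [(1, None), (2, None)]
```

So passing `None` to a cached function is fine; the PyCharm inspection can be suppressed for that line.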
<python><pycharm><functools>
2023-10-05 12:04:00
0
543
Roman
77,236,916
3,616,293
torch DDP Multi-GPU gives low accuracy metric
<p>I am trying Multi-GPU, single machine DDP training in PyTorch (CIFAR-10 + ResNet-18 setup). You can refer to the model architecture code <a href="https://github.com/arjun-majumdar/CNN_Classifications/blob/master/ResNet18_swish_torch.py" rel="nofollow noreferrer">here</a> and the full training code <a href="https://github.com/arjun-majumdar/CNN_Classifications/blob/master/multigpu_singlemachine_ddp_torch.py" rel="nofollow noreferrer">here</a>.</p> <p>Within main() function, the training loop is:</p> <pre><code>for epoch in range(1, num_epochs + 1): # Initialize metric for metric computation, for each epoch- running_loss = 0.0 running_corrects = 0.0 model.train() # Inform DistributedSampler about current epoch- train_loader.sampler.set_epoch(epoch) # One epoch of training- for batch_idx, (images, labels) in enumerate(train_loader): images = images.to(rank) labels = labels.to(rank) # Get model predictions- outputs = model(images) # Compute loss- J = loss(outputs, labels) # Empty accumulated gradients- optimizer.zero_grad() # Perform backprop- J.backward() # Update parameters- optimizer.step() ''' global step optimizer.param_groups[0]['lr'] = custom_lr_scheduler.get_lr(step) step += 1 ''' # Compute model's performance statistics- running_loss += J.item() * images.size(0) _, predicted = torch.max(outputs, 1) running_corrects += torch.sum(predicted == labels.data) train_loss = running_loss / len(train_dataset) train_acc = (running_corrects.double() / len(train_dataset)) * 100 print(f&quot;GPU: {rank}, epoch = {epoch}; train loss = {train_loss:.4f} &amp; train accuracy = {train_acc:.2f}%&quot;) </code></pre> <p>The problem is that the train accuracy being computed in this way is very low (say only 7.44% on average) across 8 GPUs. 
But, when I obtain the saved model and test its accuracy with the following code:</p> <pre><code>def test_model_progress(model, test_loader, test_dataset): total = 0.0 correct = 0.0 running_loss_val = 0.0 with torch.no_grad(): with tqdm(test_loader, unit = 'batch') as tepoch: for images, labels in tepoch: tepoch.set_description(f&quot;Validation: &quot;) images = images.to(device) labels = labels.to(device) # Set model to evaluation mode- model.eval() # Predict using trained model- outputs = model(images) _, y_pred = torch.max(outputs, 1) # Compute validation loss- J_val = loss(outputs, labels) running_loss_val += J_val.item() * labels.size(0) # Total number of labels- total += labels.size(0) # Total number of correct predictions- correct += (y_pred == labels).sum() tepoch.set_postfix( val_loss = running_loss_val / len(test_dataset), val_acc = 100 * (correct.cpu().numpy() / total) ) # return (running_loss_val, correct, total) val_loss = running_loss_val / len(test_dataset) val_acc = (correct / total) * 100 return val_loss, val_acc.cpu().numpy() test_loss, test_acc = test_model_progress(trained_model, test_loader, test_dataset) print(f&quot;ResNet-18 (multi-gpu DDP) test metrics; loss = {test_loss:.4f} &amp; acc = {test_acc:.2f}%&quot;) # ResNet-18 (multi-gpu DDP) test metrics; loss = 1.1924 &amp; acc = 59.88% </code></pre> <p>Why is there this discrepancy? What am I missing?</p>
<python><pytorch><multi-gpu>
2023-10-05 12:01:07
0
2,518
Arun
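One explanation consistent with the numbers in this record (this is an inference, not confirmed by the linked repo): the training loop divides each rank's local `running_corrects` by `len(train_dataset)`, but under `DistributedSampler` each of the 8 ranks only sees 1/8 of the samples, so the printed accuracy is deflated by roughly the world size (7.44% × 8 ≈ 59.5%, close to the 59.88% test accuracy). A plain-arithmetic illustration with hypothetical per-rank counts:

```python
# Hypothetical per-rank statistics for an 8-GPU run on a 50,000-sample set.
world_size = 8
dataset_len = 50_000
per_rank_correct = [3_720, 3_650, 3_700, 3_680,
                    3_710, 3_690, 3_660, 3_670]  # each rank sees 6,250 samples

# What the training loop prints on each rank:
# the LOCAL correct count divided by the FULL dataset length.
rank0_printed = per_rank_correct[0] / dataset_len * 100   # ~7.44%

# What the model actually achieves: sum the counts first, then divide.
true_acc = sum(per_rank_correct) / dataset_len * 100      # ~58.96%

print(f"printed on rank 0: {rank0_printed:.2f}%")
print(f"aggregated:        {true_acc:.2f}%")
```

In real DDP code one would either `torch.distributed.all_reduce` the correct-count tensor across ranks before dividing, or divide the local count by the number of samples the local sampler actually served.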
77,236,763
14,314,892
Can't install pyinstaller for python 2.7
<p>I am trying to install pyinstaller through pip and I get a syntax error. It successfully downloads the package and its components, then throws the error. The error points at an f-string. I know that f-strings were introduced in Python 3, and I am trying to install an old version of pyinstaller, 3.6, which should be compatible with python 2.7. My command is <code>pip install pyinstaller==3.6</code>. Versions: <strong>python 2.7 pip 20.3.4</strong></p>
<python><pip><pyinstaller>
2023-10-05 11:37:48
1
707
aoiTenshi
77,236,722
4,504,877
Simulating - speeding up script in python
<p>I have the following script:</p> <pre><code>import pandas as pd import numpy as np import os import scipy.stats as sp raw = pd.read_csv(&quot;ELT_RawData.csv&quot;) raw[&quot;weight&quot;] = raw[&quot;RATE&quot;] / np.sum(raw[&quot;RATE&quot;]) MU = np.sum(raw[&quot;RATE&quot;]) poisson = sp.poisson(mu=MU) n_simulations = 10000 xs = 5e6 lim = 10e6 result = np.empty((n_simulations)) for i in range(n_simulations): no_clm = poisson.rvs(size=1) #simulate beta distribution sampled_events = raw.sample(n=no_clm,weights=raw[&quot;weight&quot;]) #calculate params for beta distribution a_1 = ( (sampled_events[&quot;PERSPVALUE&quot;] / (sampled_events[&quot;STDDEVI&quot;]+sampled_events[&quot;STDDEVC&quot;]))**2*(1 - sampled_events[&quot;PERSPVALUE&quot;]/sampled_events[&quot;EXPVALUE&quot;]) - \ sampled_events[&quot;PERSPVALUE&quot;] / sampled_events[&quot;EXPVALUE&quot;] ) b_1 = a_1*(sampled_events[&quot;EXPVALUE&quot;]/sampled_events[&quot;PERSPVALUE&quot;]-1) #simulate from said distribution clm_sev = np.array(sp.beta(a_1,b_1).rvs(no_clm)*sampled_events[&quot;EXPVALUE&quot;]) #apply features xol_loss = np.clip(a=clm_sev-xs,a_min=0,a_max=lim) #store result result[i] = xol_loss.sum() result.mean() </code></pre> <p>The <code>raw</code> data can be read like this:</p> <pre><code>raw = pd.DataFrame.from_dict({'EVENTID': {0: 'A_0', 1: 'A_1', 2: 'A_2', 3: 'A_3', 4: 'A_4', 5: 'A_5', 6: 'A_6', 7: 'A_7', 8: 'A_8', 9: 'A_9', 10: 'A_10', 11: 'A_11', 12: 'A_12', 13: 'A_13', 14: 'A_14', 15: 'A_15', 16: 'A_16', 17: 'A_17', 18: 'A_18', 19: 'A_19', 20: 'A_20', 21: 'A_21', 22: 'A_22', 23: 'A_23', 24: 'A_24', 25: 'A_25', 26: 'A_26', 27: 'A_27', 28: 'A_28', 29: 'A_29', 30: 'A_30', 31: 'A_31', 32: 'A_32', 33: 'A_33', 34: 'A_34', 35: 'A_35', 36: 'A_36', 37: 'A_37', 38: 'A_38', 39: 'A_39', 40: 'A_40', 41: 'A_41', 42: 'A_42', 43: 'A_43', 44: 'A_44', 45: 'A_45', 46: 'A_46', 47: 'A_47', 48: 'A_48', 49: 'A_49', 50: 'A_50', 51: 'A_51', 52: 'A_52', 53: 'A_53', 54: 'A_54', 55: 'A_55', 56: 'A_56', 
57: 'A_57', 58: 'A_58', 59: 'A_59', 60: 'A_60', 61: 'A_61', 62: 'A_62', 63: 'A_63', 64: 'A_64', 65: 'A_65', 66: 'A_66', 67: 'A_67', 68: 'A_68', 69: 'A_69', 70: 'A_70', 71: 'A_71', 72: 'A_72', 73: 'A_73', 74: 'A_74', 75: 'A_75', 76: 'A_76', 77: 'A_77', 78: 'A_78', 79: 'A_79', 80: 'A_80', 81: 'A_81', 82: 'A_82', 83: 'A_83', 84: 'A_84', 85: 'A_85', 86: 'A_86', 87: 'A_87', 88: 'A_88', 89: 'A_89', 90: 'A_90', 91: 'A_91', 92: 'A_92', 93: 'A_93', 94: 'A_94', 95: 'A_95', 96: 'A_96', 97: 'A_97', 98: 'A_98', 99: 'A_99', 100: 'A_100', 101: 'A_101', 102: 'A_102', 103: 'A_103', 104: 'A_104', 105: 'A_105', 106: 'A_106', 107: 'A_107', 108: 'A_108', 109: 'A_109', 110: 'A_110', 111: 'A_111', 112: 'A_112', 113: 'A_113', 114: 'A_114', 115: 'A_115', 116: 'A_116', 117: 'A_117', 118: 'A_118', 119: 'A_119', 120: 'A_120', 121: 'A_121', 122: 'A_122', 123: 'A_123', 124: 'A_124', 125: 'A_125', 126: 'A_126', 127: 'A_127', 128: 'A_128', 129: 'A_129', 130: 'A_130', 131: 'A_131', 132: 'A_132', 133: 'A_133', 134: 'A_134', 135: 'A_135', 136: 'A_136', 137: 'A_137', 138: 'A_138', 139: 'A_139', 140: 'A_140', 141: 'A_141', 142: 'A_142', 143: 'A_143', 144: 'A_144', 145: 'A_145', 146: 'A_146', 147: 'A_147', 148: 'A_148', 149: 'A_149', 150: 'A_150', 151: 'A_151', 152: 'A_152', 153: 'A_153', 154: 'A_154', 155: 'A_155', 156: 'A_156', 157: 'A_157', 158: 'A_158', 159: 'A_159', 160: 'A_160', 161: 'A_161', 162: 'A_162', 163: 'A_163', 164: 'A_164', 165: 'A_165', 166: 'A_166', 167: 'A_167', 168: 'A_168', 169: 'A_169', 170: 'A_170', 171: 'A_171', 172: 'A_172', 173: 'A_173', 174: 'A_174', 175: 'A_175', 176: 'A_176', 177: 'A_177', 178: 'A_178', 179: 'A_179', 180: 'A_180', 181: 'A_181', 182: 'A_182', 183: 'A_183', 184: 'A_184', 185: 'A_185', 186: 'A_186', 187: 'A_187', 188: 'A_188', 189: 'A_189', 190: 'A_190', 191: 'A_191', 192: 'A_192', 193: 'A_193', 194: 'A_194', 195: 'A_195', 196: 'A_196', 197: 'A_197'}, 'RATE': {0: 0.001013797, 1: 0.001013797, 2: 0.001013797, 3: 0.001013797, 4: 0.001013797, 5: 0.001013797, 6: 
0.001013797, 7: 0.001013797, 8: 0.001013797, 9: 0.001013797, 10: 0.001013797, 11: 0.001013797, 12: 0.001013797, 13: 0.001013797, 14: 0.001013797, 15: 0.001013797, 16: 0.001013797, 17: 0.001013797, 18: 0.001013797, 19: 0.001013797, 20: 0.001013797, 21: 0.001013797, 22: 0.001013797, 23: 0.001013797, 24: 0.001013797, 25: 0.001013797, 26: 0.001013797, 27: 0.001013797, 28: 0.001013797, 29: 0.001013797, 30: 0.001013797, 31: 0.001013797, 32: 0.001013797, 33: 0.001013797, 34: 0.001013797, 35: 0.001013797, 36: 0.001013797, 37: 0.001013797, 38: 0.001013797, 39: 0.001013797, 40: 0.001013797, 41: 0.001013797, 42: 0.001013797, 43: 0.001013797, 44: 0.001013797, 45: 0.001013797, 46: 0.001013797, 47: 0.001013797, 48: 0.001013797, 49: 0.001013797, 50: 0.001013797, 51: 0.001013797, 52: 0.001013797, 53: 0.001013797, 54: 0.001013797, 55: 0.001013797, 56: 0.001013797, 57: 0.001013797, 58: 0.001013797, 59: 0.001013797, 60: 0.001013797, 61: 0.001013797, 62: 0.001013797, 63: 0.001013797, 64: 0.001013797, 65: 0.001013797, 66: 0.001013797, 67: 0.001013797, 68: 0.001013797, 69: 0.001013797, 70: 0.001013797, 71: 0.001013797, 72: 0.001013797, 73: 0.001013797, 74: 0.001013797, 75: 0.001013797, 76: 0.001013797, 77: 0.001013797, 78: 0.001013797, 79: 0.001013797, 80: 0.001013797, 81: 0.001013797, 82: 0.001013797, 83: 0.001013797, 84: 0.001013797, 85: 0.001013797, 86: 0.001013797, 87: 0.001013797, 88: 0.001013797, 89: 0.001013797, 90: 0.001013797, 91: 0.001013797, 92: 0.001013797, 93: 0.001013797, 94: 0.001013797, 95: 0.001013797, 96: 0.001013797, 97: 0.001013797, 98: 0.001013797, 99: 0.001013797, 100: 0.001013797, 101: 0.001013797, 102: 0.001013797, 103: 0.001013797, 104: 0.001013797, 105: 0.001013797, 106: 0.001013797, 107: 0.001013797, 108: 0.001013797, 109: 0.001013797, 110: 0.001013797, 111: 0.001013797, 112: 0.001013797, 113: 0.001013797, 114: 0.001013797, 115: 0.001013797, 116: 0.001013797, 117: 0.001013797, 118: 0.001013797, 119: 0.001013797, 120: 0.001013797, 121: 0.001013797, 122: 
0.001013797, 123: 0.001013797, 124: 0.001013797, 125: 0.001013797, 126: 0.001013797, 127: 0.001013797, 128: 0.001013797, 129: 0.001013797, 130: 0.001013797, 131: 0.001013797, 132: 0.001013797, 133: 0.001013797, 134: 0.001013797, 135: 0.001013797, 136: 0.001013797, 137: 0.001013797, 138: 0.001013797, 139: 0.001013797, 140: 0.001448281, 141: 0.001448281, 142: 0.001448281, 143: 0.001448281, 144: 0.001448281, 145: 0.001448281, 146: 0.001448281, 147: 0.001448281, 148: 0.001448281, 149: 0.001448281, 150: 0.001448281, 151: 0.001448281, 152: 0.001448281, 153: 0.001448281, 154: 0.001448281, 155: 0.001448281, 156: 0.001448281, 157: 0.001448281, 158: 0.001448281, 159: 0.001448281, 160: 0.001448281, 161: 0.001448281, 162: 0.001448281, 163: 0.001448281, 164: 0.001448281, 165: 0.001448281, 166: 0.001448281, 167: 0.001448281, 168: 0.001448281, 169: 0.001448281, 170: 0.001448281, 171: 0.001448281, 172: 0.001448281, 173: 0.001448281, 174: 0.001448281, 175: 0.001448281, 176: 0.001448281, 177: 0.001448281, 178: 0.001448281, 179: 0.001448281, 180: 0.001448281, 181: 0.001448281, 182: 0.001448281, 183: 0.001448281, 184: 0.001448281, 185: 0.001448281, 186: 0.001448281, 187: 0.001448281, 188: 0.001448281, 189: 0.00551956, 190: 0.00551956, 191: 0.00551956, 192: 0.00551956, 193: 0.00551956, 194: 0.00551956, 195: 0.00551956, 196: 0.00551956, 197: 0.00551956}, 'PERSPVALUE': {0: 27068207.53, 1: 16139432.33, 2: 16612880.78, 3: 15223154.23, 4: 13329725.56, 5: 12639283.46, 6: 14398548.59, 7: 26784197.78, 8: 17256618.07, 9: 11657837.22, 10: 12252501.6, 11: 24904672.14, 12: 10258771.29, 13: 11502035.81, 14: 11532813.52, 15: 17269205.53, 16: 12693599.88, 17: 14536926.96, 18: 9539792.21, 19: 7971501.616, 20: 8244881.149, 21: 6003459.601, 22: 4222915.124, 23: 20863174.85, 24: 11029584.79, 25: 7584126.22, 26: 15145416.28, 27: 7329721.345, 28: 12266029.43, 29: 5688784.226, 30: 4068513.751, 31: 15281888.13, 32: 11922537.91, 33: 4396013.28, 34: 2671760.764, 35: 1906603.424, 36: 3873159.464, 37: 
13311419.88, 38: 5283934.11, 39: 11110692.34, 40: 9648885.591, 41: 6027654.746, 42: 6992837.358, 43: 4696371.296, 44: 4937007.527, 45: 5167548.583, 46: 1457331.046, 47: 4243265.898, 48: 5344537.717, 49: 5348237.625, 50: 2302969.738, 51: 3590748.252, 52: 4917922.812, 53: 6939780.743, 54: 3336948.716, 55: 2726471.252, 56: 6178289.029, 57: 2871203.944, 58: 3419346.8, 59: 2832834.938, 60: 5663559.289, 61: 3585861.103, 62: 3734039.93, 63: 3187926.465, 64: 4676672.934, 65: 4202429.132, 66: 825265.2121, 67: 2050863.214, 68: 2548953.176, 69: 4358778.251, 70: 2543793.804, 71: 3137277.456, 72: 3736864.886, 73: 1833556.979, 74: 2190284.487, 75: 1020392.457, 76: 3409046.421, 77: 3680362.683, 78: 3486760.876, 79: 3575292.65, 80: 3047082.934, 81: 3569162.709, 82: 2822371.667, 83: 3288314.142, 84: 2309749.76, 85: 274178.2925, 86: 2970113.437, 87: 2375593.126, 88: 2169592.635, 89: 2445757.474, 90: 1153270.295, 91: 843862.5591, 92: 1517770.688, 93: 3120120.296, 94: 1908922.549, 95: 949387.4377, 96: 555355.6563, 97: 1749621.443, 98: 1635296.659, 99: 1266107.641, 100: 1650184.057, 101: 1460167.669, 102: 1606946.205, 103: 1512397.297, 104: 1341212.516, 105: 894387.1565, 106: 1169059.447, 107: 1228485.862, 108: 369173.9589, 109: 960353.85, 110: 1022879.73, 111: 637366.6611, 112: 707716.8876, 113: 933317.7985, 114: 890590.3473, 115: 812629.496, 116: 671498.5074, 117: 648790.1261, 118: 738000.0, 119: 659490.5537, 120: 685486.6873, 121: 619717.8567, 122: 453714.5127, 123: 577421.4298, 124: 1142637.061, 125: 505567.7285, 126: 509675.9037, 127: 501422.4293, 128: 362900.5682, 129: 407749.7206, 130: 426971.0817, 131: 281477.1802, 132: 424598.9225, 133: 401336.4033, 134: 363427.0346, 135: 365667.4665, 136: 306451.1672, 137: 225000.0, 138: 230280.1425, 139: 297507.8831, 140: 67670518.82, 141: 40348580.81, 142: 41532201.95, 143: 38057885.58, 144: 33324313.91, 145: 31598208.66, 146: 35996371.47, 147: 66960494.46, 148: 43141545.17, 149: 29144593.06, 150: 30631253.99, 151: 62261680.34, 152: 
25646928.23, 153: 28755089.53, 154: 28832033.8, 155: 43173013.82, 156: 31733999.71, 157: 36342317.41, 158: 23849480.52, 159: 19928754.04, 160: 20612202.87, 161: 15008649.0, 162: 52157937.13, 163: 27573961.96, 164: 18960315.55, 165: 37863540.7, 166: 18324303.36, 167: 30665073.58, 168: 14221960.57, 169: 38204720.33, 170: 29806344.78, 171: 10990033.2, 172: 33278549.69, 173: 13209835.28, 174: 27776730.85, 175: 24122213.98, 176: 15069136.87, 177: 17482093.4, 178: 11740928.24, 179: 12342518.82, 180: 12918871.46, 181: 13361344.29, 182: 13370594.06, 183: 12294807.03, 184: 17349451.86, 185: 15445722.57, 186: 14158898.22, 187: 11691682.34, 188: 10896945.63, 189: 135341037.6, 190: 80697161.63, 191: 83064403.89, 192: 133920988.9, 193: 86283090.34, 194: 124523360.7, 195: 86346027.65, 196: 104315874.3, 197: 76409440.66}, 'STDDEVI': {0: 5413641.505, 1: 3227886.465, 2: 3322576.156, 3: 3044630.846, 4: 2665945.113, 5: 2527856.693, 6: 2879709.718, 7: 5356839.557, 8: 3451323.613, 9: 2331567.445, 10: 2450500.32, 11: 4980934.427, 12: 2051754.258, 13: 2300407.162, 14: 2306562.704, 15: 3453841.106, 16: 2538719.977, 17: 2907385.393, 18: 1907958.442, 19: 1594300.323, 20: 1648976.23, 21: 1200691.92, 22: 844583.0249, 23: 4172634.97, 24: 2205916.957, 25: 1516825.244, 26: 3029083.256, 27: 1465944.269, 28: 2453205.886, 29: 1137756.845, 30: 813702.7502, 31: 3056377.626, 32: 2384507.582, 33: 879202.656, 34: 534352.1528, 35: 381320.6848, 36: 774631.8928, 37: 2662283.975, 38: 1056786.822, 39: 2222138.468, 40: 1929777.118, 41: 1205530.949, 42: 1398567.472, 43: 939274.2592, 44: 987401.5054, 45: 1033509.717, 46: 291466.2091, 47: 848653.1796, 48: 1068907.543, 49: 1069647.525, 50: 460593.9475, 51: 718149.6505, 52: 983584.5625, 53: 1387956.149, 54: 667389.7432, 55: 545294.2504, 56: 1235657.806, 57: 574240.7888, 58: 683869.36, 59: 566566.9876, 60: 1132711.858, 61: 717172.2206, 62: 746807.9859, 63: 637585.293, 64: 935334.5869, 65: 840485.8264, 66: 165053.0424, 67: 410172.6428, 68: 509790.6353, 69: 
871755.6503, 70: 508758.7608, 71: 627455.4913, 72: 747372.9772, 73: 366711.3959, 74: 438056.8973, 75: 204078.4913, 76: 681809.2841, 77: 736072.5366, 78: 697352.1752, 79: 715058.5301, 80: 609416.5868, 81: 713832.5418, 82: 564474.3333, 83: 657662.8285, 84: 461949.9519, 85: 54835.65849, 86: 594022.6875, 87: 475118.6252, 88: 433918.5269, 89: 489151.4948, 90: 230654.059, 91: 168772.5118, 92: 303554.1375, 93: 624024.0593, 94: 381784.5097, 95: 189877.4875, 96: 111071.1313, 97: 349924.2887, 98: 327059.3317, 99: 253221.5283, 100: 330036.8114, 101: 292033.5338, 102: 321389.2409, 103: 302479.4595, 104: 268242.5032, 105: 178877.4313, 106: 233811.8894, 107: 245697.1723, 108: 73834.79179, 109: 192070.77, 110: 204575.946, 111: 127473.3322, 112: 141543.3775, 113: 186663.5597, 114: 178118.0695, 115: 162525.8992, 116: 134299.7015, 117: 129758.0252, 118: 147600.0, 119: 131898.1107, 120: 137097.3375, 121: 123943.5713, 122: 90742.90254, 123: 115484.286, 124: 228527.4122, 125: 101113.5457, 126: 101935.1807, 127: 100284.4859, 128: 72580.11363, 129: 81549.94411, 130: 85394.21633, 131: 56295.43605, 132: 84919.7845, 133: 80267.28067, 134: 72685.40692, 135: 73133.4933, 136: 61290.23344, 137: 45000.0, 138: 46056.02851, 139: 59501.57661, 140: 16917629.7, 141: 10087145.2, 142: 10383050.49, 143: 9514471.395, 144: 8331078.477, 145: 7899552.165, 146: 8999092.868, 147: 16740123.61, 148: 10785386.29, 149: 7286148.264, 150: 7657813.499, 151: 15565420.08, 152: 6411732.056, 153: 7188772.382, 154: 7208008.45, 155: 10793253.46, 156: 7933499.928, 157: 9085579.353, 158: 5962370.131, 159: 4982188.51, 160: 5153050.718, 161: 3752162.25, 162: 13039484.28, 163: 6893490.491, 164: 4740078.888, 165: 9465885.174, 166: 4581075.841, 167: 7666268.394, 168: 3555490.142, 169: 9551180.082, 170: 7451586.195, 171: 2747508.3, 172: 8319637.422, 173: 3302458.819, 174: 6944182.712, 175: 6030553.494, 176: 3767284.217, 177: 4370523.349, 178: 2935232.06, 179: 3085629.704, 180: 3229717.864, 181: 3340336.073, 182: 3342648.515, 183: 
3073701.758, 184: 4337362.965, 185: 3861430.643, 186: 3539724.555, 187: 2922920.584, 188: 2724236.407, 189: 101505778.2, 190: 60522871.22, 191: 62298302.92, 192: 100440741.7, 193: 64712317.75, 194: 93392520.51, 195: 64759520.73, 196: 78236905.69, 197: 57307080.49}, 'STDDEVC': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 0, 11: 0, 12: 0, 13: 0, 14: 0, 15: 0, 16: 0, 17: 0, 18: 0, 19: 0, 20: 0, 21: 0, 22: 0, 23: 0, 24: 0, 25: 0, 26: 0, 27: 0, 28: 0, 29: 0, 30: 0, 31: 0, 32: 0, 33: 0, 34: 0, 35: 0, 36: 0, 37: 0, 38: 0, 39: 0, 40: 0, 41: 0, 42: 0, 43: 0, 44: 0, 45: 0, 46: 0, 47: 0, 48: 0, 49: 0, 50: 0, 51: 0, 52: 0, 53: 0, 54: 0, 55: 0, 56: 0, 57: 0, 58: 0, 59: 0, 60: 0, 61: 0, 62: 0, 63: 0, 64: 0, 65: 0, 66: 0, 67: 0, 68: 0, 69: 0, 70: 0, 71: 0, 72: 0, 73: 0, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 0, 80: 0, 81: 0, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 0, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 0, 98: 0, 99: 0, 100: 0, 101: 0, 102: 0, 103: 0, 104: 0, 105: 0, 106: 0, 107: 0, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 0, 116: 0, 117: 0, 118: 0, 119: 0, 120: 0, 121: 0, 122: 0, 123: 0, 124: 0, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 0, 132: 0, 133: 0, 134: 0, 135: 0, 136: 0, 137: 0, 138: 0, 139: 0, 140: 0, 141: 0, 142: 0, 143: 0, 144: 0, 145: 0, 146: 0, 147: 0, 148: 0, 149: 0, 150: 0, 151: 0, 152: 0, 153: 0, 154: 0, 155: 0, 156: 0, 157: 0, 158: 0, 159: 0, 160: 0, 161: 0, 162: 0, 163: 0, 164: 0, 165: 0, 166: 0, 167: 0, 168: 0, 169: 0, 170: 0, 171: 0, 172: 0, 173: 0, 174: 0, 175: 0, 176: 0, 177: 0, 178: 0, 179: 0, 180: 0, 181: 0, 182: 0, 183: 0, 184: 0, 185: 0, 186: 0, 187: 0, 188: 0, 189: 0, 190: 0, 191: 0, 192: 0, 193: 0, 194: 0, 195: 0, 196: 0, 197: 0}, 'EXPVALUE': {0: 54136415.06, 1: 32278864.66, 2: 33225761.56, 3: 30446308.46, 4: 26659451.12, 5: 25278566.92, 6: 28797097.18, 7: 53568395.56, 8: 34513236.14, 9: 23315674.44, 10: 24505003.2, 11: 49809344.28, 12: 20517542.58, 13: 23004071.62, 14: 
23065627.04, 15: 34538411.06, 16: 25387199.76, 17: 29073853.92, 18: 19079584.42, 19: 15943003.23, 20: 16489762.3, 21: 12006919.2, 22: 8445830.248, 23: 41726349.7, 24: 22059169.58, 25: 15168252.44, 26: 30290832.56, 27: 14659442.69, 28: 24532058.86, 29: 11377568.45, 30: 8137027.502, 31: 30563776.26, 32: 23845075.82, 33: 8792026.56, 34: 5343521.528, 35: 3813206.848, 36: 7746318.928, 37: 26622839.76, 38: 10567868.22, 39: 22221384.68, 40: 19297771.18, 41: 12055309.49, 42: 13985674.72, 43: 9392742.592, 44: 9874015.054, 45: 10335097.17, 46: 2914662.092, 47: 8486531.796, 48: 10689075.43, 49: 10696475.25, 50: 4605939.476, 51: 7181496.504, 52: 9835845.624, 53: 13879561.49, 54: 6673897.432, 55: 5452942.504, 56: 12356578.06, 57: 5742407.888, 58: 6838693.6, 59: 5665669.876, 60: 11327118.58, 61: 7171722.206, 62: 7468079.86, 63: 6375852.93, 64: 9353345.868, 65: 8404858.264, 66: 1650530.424, 67: 4101726.428, 68: 5097906.352, 69: 8717556.502, 70: 5087587.608, 71: 6274554.912, 72: 7473729.772, 73: 3667113.958, 74: 4380568.974, 75: 2040784.914, 76: 6818092.842, 77: 7360725.366, 78: 6973521.752, 79: 7150585.3, 80: 6094165.868, 81: 7138325.418, 82: 5644743.334, 83: 6576628.284, 84: 4619499.52, 85: 548356.585, 86: 5940226.874, 87: 4751186.252, 88: 4339185.27, 89: 4891514.948, 90: 2306540.59, 91: 1687725.118, 92: 3035541.376, 93: 6240240.592, 94: 3817845.098, 95: 1898774.875, 96: 1110711.313, 97: 3499242.886, 98: 3270593.318, 99: 2532215.282, 100: 3300368.114, 101: 2920335.338, 102: 3213892.41, 103: 3024794.594, 104: 2682425.032, 105: 1788774.313, 106: 2338118.894, 107: 2456971.724, 108: 738347.9178, 109: 1920707.7, 110: 2045759.46, 111: 1274733.322, 112: 1415433.775, 113: 1866635.597, 114: 1781180.695, 115: 1625258.992, 116: 1342997.015, 117: 1297580.252, 118: 1476000.0, 119: 1318981.107, 120: 1370973.375, 121: 1239435.713, 122: 907429.0254, 123: 1154842.86, 124: 2285274.122, 125: 1011135.457, 126: 1019351.807, 127: 1002844.859, 128: 725801.1364, 129: 815499.4412, 130: 853942.1634, 131: 
562954.3604, 132: 849197.845, 133: 802672.8066, 134: 726854.0692, 135: 731334.933, 136: 612902.3344, 137: 450000.0, 138: 460560.285, 139: 595015.7662, 140: 135341037.6, 141: 80697161.62, 142: 83064403.9, 143: 76115771.16, 144: 66648627.82, 145: 63196417.32, 146: 71992742.94, 147: 133920988.9, 148: 86283090.34, 149: 58289186.12, 150: 61262507.98, 151: 124523360.7, 152: 51293856.46, 153: 57510179.06, 154: 57664067.6, 155: 86346027.64, 156: 63467999.42, 157: 72684634.82, 158: 47698961.04, 159: 39857508.08, 160: 41224405.74, 161: 30017298.0, 162: 104315874.3, 163: 55147923.92, 164: 37920631.1, 165: 75727081.4, 166: 36648606.72, 167: 61330147.16, 168: 28443921.14, 169: 76409440.66, 170: 59612689.56, 171: 21980066.4, 172: 66557099.38, 173: 26419670.56, 174: 55553461.7, 175: 48244427.96, 176: 30138273.74, 177: 34964186.8, 178: 23481856.48, 179: 24685037.64, 180: 25837742.92, 181: 26722688.58, 182: 26741188.12, 183: 24589614.06, 184: 34698903.72, 185: 30891445.14, 186: 28317796.44, 187: 23383364.68, 188: 21793891.26, 189: 270682075.2, 190: 161394323.3, 191: 166128807.8, 192: 267841977.8, 193: 172566180.7, 194: 249046721.4, 195: 172692055.3, 196: 208631748.6, 197: 152818881.3}, 'weight': {0: 0.003861004360956014, 1: 0.003861004360956014, 2: 0.003861004360956014, 3: 0.003861004360956014, 4: 0.003861004360956014, 5: 0.003861004360956014, 6: 0.003861004360956014, 7: 0.003861004360956014, 8: 0.003861004360956014, 9: 0.003861004360956014, 10: 0.003861004360956014, 11: 0.003861004360956014, 12: 0.003861004360956014, 13: 0.003861004360956014, 14: 0.003861004360956014, 15: 0.003861004360956014, 16: 0.003861004360956014, 17: 0.003861004360956014, 18: 0.003861004360956014, 19: 0.003861004360956014, 20: 0.003861004360956014, 21: 0.003861004360956014, 22: 0.003861004360956014, 23: 0.003861004360956014, 24: 0.003861004360956014, 25: 0.003861004360956014, 26: 0.003861004360956014, 27: 0.003861004360956014, 28: 0.003861004360956014, 29: 0.003861004360956014, 30: 0.003861004360956014, 31: 
0.003861004360956014, 32: 0.003861004360956014, 33: 0.003861004360956014, 34: 0.003861004360956014, 35: 0.003861004360956014, 36: 0.003861004360956014, 37: 0.003861004360956014, 38: 0.003861004360956014, 39: 0.003861004360956014, 40: 0.003861004360956014, 41: 0.003861004360956014, 42: 0.003861004360956014, 43: 0.003861004360956014, 44: 0.003861004360956014, 45: 0.003861004360956014, 46: 0.003861004360956014, 47: 0.003861004360956014, 48: 0.003861004360956014, 49: 0.003861004360956014, 50: 0.003861004360956014, 51: 0.003861004360956014, 52: 0.003861004360956014, 53: 0.003861004360956014, 54: 0.003861004360956014, 55: 0.003861004360956014, 56: 0.003861004360956014, 57: 0.003861004360956014, 58: 0.003861004360956014, 59: 0.003861004360956014, 60: 0.003861004360956014, 61: 0.003861004360956014, 62: 0.003861004360956014, 63: 0.003861004360956014, 64: 0.003861004360956014, 65: 0.003861004360956014, 66: 0.003861004360956014, 67: 0.003861004360956014, 68: 0.003861004360956014, 69: 0.003861004360956014, 70: 0.003861004360956014, 71: 0.003861004360956014, 72: 0.003861004360956014, 73: 0.003861004360956014, 74: 0.003861004360956014, 75: 0.003861004360956014, 76: 0.003861004360956014, 77: 0.003861004360956014, 78: 0.003861004360956014, 79: 0.003861004360956014, 80: 0.003861004360956014, 81: 0.003861004360956014, 82: 0.003861004360956014, 83: 0.003861004360956014, 84: 0.003861004360956014, 85: 0.003861004360956014, 86: 0.003861004360956014, 87: 0.003861004360956014, 88: 0.003861004360956014, 89: 0.003861004360956014, 90: 0.003861004360956014, 91: 0.003861004360956014, 92: 0.003861004360956014, 93: 0.003861004360956014, 94: 0.003861004360956014, 95: 0.003861004360956014, 96: 0.003861004360956014, 97: 0.003861004360956014, 98: 0.003861004360956014, 99: 0.003861004360956014, 100: 0.003861004360956014, 101: 0.003861004360956014, 102: 0.003861004360956014, 103: 0.003861004360956014, 104: 0.003861004360956014, 105: 0.003861004360956014, 106: 0.003861004360956014, 107: 
0.003861004360956014, 108: 0.003861004360956014, 109: 0.003861004360956014, 110: 0.003861004360956014, 111: 0.003861004360956014, 112: 0.003861004360956014, 113: 0.003861004360956014, 114: 0.003861004360956014, 115: 0.003861004360956014, 116: 0.003861004360956014, 117: 0.003861004360956014, 118: 0.003861004360956014, 119: 0.003861004360956014, 120: 0.003861004360956014, 121: 0.003861004360956014, 122: 0.003861004360956014, 123: 0.003861004360956014, 124: 0.003861004360956014, 125: 0.003861004360956014, 126: 0.003861004360956014, 127: 0.003861004360956014, 128: 0.003861004360956014, 129: 0.003861004360956014, 130: 0.003861004360956014, 131: 0.003861004360956014, 132: 0.003861004360956014, 133: 0.003861004360956014, 134: 0.003861004360956014, 135: 0.003861004360956014, 136: 0.003861004360956014, 137: 0.003861004360956014, 138: 0.003861004360956014, 139: 0.003861004360956014, 140: 0.005515718883454712, 141: 0.005515718883454712, 142: 0.005515718883454712, 143: 0.005515718883454712, 144: 0.005515718883454712, 145: 0.005515718883454712, 146: 0.005515718883454712, 147: 0.005515718883454712, 148: 0.005515718883454712, 149: 0.005515718883454712, 150: 0.005515718883454712, 151: 0.005515718883454712, 152: 0.005515718883454712, 153: 0.005515718883454712, 154: 0.005515718883454712, 155: 0.005515718883454712, 156: 0.005515718883454712, 157: 0.005515718883454712, 158: 0.005515718883454712, 159: 0.005515718883454712, 160: 0.005515718883454712, 161: 0.005515718883454712, 162: 0.005515718883454712, 163: 0.005515718883454712, 164: 0.005515718883454712, 165: 0.005515718883454712, 166: 0.005515718883454712, 167: 0.005515718883454712, 168: 0.005515718883454712, 169: 0.005515718883454712, 170: 0.005515718883454712, 171: 0.005515718883454712, 172: 0.005515718883454712, 173: 0.005515718883454712, 174: 0.005515718883454712, 175: 0.005515718883454712, 176: 0.005515718883454712, 177: 0.005515718883454712, 178: 0.005515718883454712, 179: 0.005515718883454712, 180: 0.005515718883454712, 181: 
0.005515718883454712, 182: 0.005515718883454712, 183: 0.005515718883454712, 184: 0.005515718883454712, 185: 0.005515718883454712, 186: 0.005515718883454712, 187: 0.005515718883454712, 188: 0.005515718883454712, 189: 0.021021018241875224, 190: 0.021021018241875224, 191: 0.021021018241875224, 192: 0.021021018241875224, 193: 0.021021018241875224, 194: 0.021021018241875224, 195: 0.021021018241875224, 196: 0.021021018241875224, 197: 0.021021018241875224}}) </code></pre> <p>The script generally works well and runs in less than 10 secs or so - however I would like to be able to put in multiple parameters for <code>xs</code> and for <code>lim</code> and the script starts to become quite slow. Is there any way I can speed it up?</p>
<python><profiling>
2023-10-05 11:31:48
4
566
user33484
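Most of the per-iteration cost in the script above comes from `DataFrame.sample` and from rebuilding frozen scipy distributions inside the loop. A sketch of a fully vectorised variant: draw every Poisson count up front, sample all event indices in one `np.random.choice` call, compute the beta parameters with NumPy arrays, and sum severities back per simulation with `np.bincount`. The formulas mirror the loop version; the `rng` seed and function name are illustrative.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def simulate(raw: pd.DataFrame, n_simulations: int = 10_000,
             xs: float = 5e6, lim: float = 10e6) -> np.ndarray:
    # Pull the columns out as NumPy arrays once; per-iteration pandas
    # indexing and `DataFrame.sample` dominate the original runtime.
    rate = raw["RATE"].to_numpy()
    weight = rate / rate.sum()
    persp = raw["PERSPVALUE"].to_numpy()
    std = (raw["STDDEVI"] + raw["STDDEVC"]).to_numpy()
    exp_v = raw["EXPVALUE"].to_numpy()

    # One Poisson draw per simulation, then every claim across all
    # simulations sampled in a single batch.
    n_claims = rng.poisson(rate.sum(), size=n_simulations)
    idx = rng.choice(len(raw), size=n_claims.sum(), p=weight)

    # Beta parameters for all sampled events at once (same formulas as
    # the loop version).
    a = (persp[idx] / std[idx]) ** 2 * (1 - persp[idx] / exp_v[idx]) \
        - persp[idx] / exp_v[idx]
    b = a * (exp_v[idx] / persp[idx] - 1)
    sev = rng.beta(a, b) * exp_v[idx]

    # Apply the layer and sum losses back per simulation;
    # np.bincount handles zero-claim simulations for free.
    xol = np.clip(sev - xs, 0, lim)
    sim_id = np.repeat(np.arange(n_simulations), n_claims)
    return np.bincount(sim_id, weights=xol, minlength=n_simulations)
```

For multiple `(xs, lim)` parameter pairs, the expensive draws (`idx`, `sev`, `sim_id`) can be reused and only the `np.clip`/`np.bincount` step repeated per pair.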
77,236,506
1,669,328
async BlobClient.stage_block() regularly raises exception for 'HTTP Error 400. The request verb is invalid.'
<p>I'm using the python SDK to upload a block to an Azure Bob Storage Container. I need to do a partitioned upload so I'm using the <code>BlobClient.stage_block(...)</code> call. For performance reasons, I'm using the asynchronous version of the BlobClient (<code>azure.storage.blob.aio</code>).</p> <p>Staging the first block is no problem, the second block usually also runs ok but with the third or fourth block, I'm getting an exception telling me that the server doesn't support the HTTP-Verb:</p> <pre><code>.\src\venv\Scripts\python.exe .\tests\minimal_example.py Starting. bytes_transfered=0, file_size=601070298 bytes_transfered=10485760, file_size=601070298 bytes_transfered=20971520, file_size=601070298 Unexpected return type &lt;class 'str'&gt; from ContentDecodePolicy.deserialize_from_http_generics. Error: &lt;!DOCTYPE HTML PUBLIC &quot;-//W3C//DTD HTML 4.01//EN&quot;&quot;http://www.w3.org/TR/html4/strict.dtd&quot;&gt; &lt;HTML&gt;&lt;HEAD&gt;&lt;TITLE&gt;Bad Request&lt;/TITLE&gt; &lt;META HTTP-EQUIV=&quot;Content-Type&quot; Content=&quot;text/html; charset=us-ascii&quot;&gt;&lt;/HEAD&gt; &lt;BODY&gt;&lt;h2&gt;Bad Request - Invalid Verb&lt;/h2&gt; &lt;hr&gt;&lt;p&gt;HTTP Error 400. The request verb is invalid.&lt;/p&gt; &lt;/BODY&gt;&lt;/HTML&gt; ErrorCode:None Content: &lt;!DOCTYPE HTML PUBLIC &quot;-//W3C//DTD HTML 4.01//EN&quot;&quot;http://www.w3.org/TR/html4/strict.dtd&quot;&gt; &lt;HTML&gt;&lt;HEAD&gt;&lt;TITLE&gt;Bad Request&lt;/TITLE&gt; &lt;META HTTP-EQUIV=&quot;Content-Type&quot; Content=&quot;text/html; charset=us-ascii&quot;&gt;&lt;/HEAD&gt; &lt;BODY&gt;&lt;h2&gt;Bad Request - Invalid Verb&lt;/h2&gt; &lt;hr&gt;&lt;p&gt;HTTP Error 400. 
The request verb is invalid.&lt;/p&gt; &lt;/BODY&gt;&lt;/HTML&gt; bytes_transfered=20971520, file_size=601070298 bytes_transfered=31457280, file_size=601070298 bytes_transfered=41943040, file_size=601070298 bytes_transfered=52428800, file_size=601070298 bytes_transfered=62914560, file_size=601070298 bytes_transfered=73400320, file_size=601070298 Unexpected return type &lt;class 'str'&gt; from ContentDecodePolicy.deserialize_from_http_generics. Error: &lt;!DOCTYPE HTML PUBLIC &quot;-//W3C//DTD HTML 4.01//EN&quot;&quot;http://www.w3.org/TR/html4/strict.dtd&quot;&gt; &lt;HTML&gt;&lt;HEAD&gt;&lt;TITLE&gt;Bad Request&lt;/TITLE&gt; &lt;META HTTP-EQUIV=&quot;Content-Type&quot; Content=&quot;text/html; charset=us-ascii&quot;&gt;&lt;/HEAD&gt; &lt;BODY&gt;&lt;h2&gt;Bad Request - Invalid Verb&lt;/h2&gt; &lt;hr&gt;&lt;p&gt;HTTP Error 400. The request verb is invalid.&lt;/p&gt; &lt;/BODY&gt;&lt;/HTML&gt; ErrorCode:None Content: &lt;!DOCTYPE HTML PUBLIC &quot;-//W3C//DTD HTML 4.01//EN&quot;&quot;http://www.w3.org/TR/html4/strict.dtd&quot;&gt; &lt;HTML&gt;&lt;HEAD&gt;&lt;TITLE&gt;Bad Request&lt;/TITLE&gt; &lt;META HTTP-EQUIV=&quot;Content-Type&quot; Content=&quot;text/html; charset=us-ascii&quot;&gt;&lt;/HEAD&gt; &lt;BODY&gt;&lt;h2&gt;Bad Request - Invalid Verb&lt;/h2&gt; &lt;hr&gt;&lt;p&gt;HTTP Error 400. 
The request verb is invalid.&lt;/p&gt; &lt;/BODY&gt;&lt;/HTML&gt; bytes_transfered=73400320, file_size=601070298 </code></pre> <p>My target code is a little more complicated on the generator side but a minimalistic example to reproduce the behavior looks like this:</p> <pre class="lang-py prettyprint-override"><code>from azure.storage.blob.aio import ContainerClient, BlobClient from azure.identity import DefaultAzureCredential from pathlib import Path import datetime from uuid import uuid4 import asyncio CREDENTIALS = DefaultAzureCredential() CHUNK_SIZE = 4096 BLOCK_SIZE = 10 * 1024 * 1024 FILE = Path(&quot;large_file&quot;) BLOB_NAME = f&quot;test_blob{datetime.datetime.now().ctime()}&quot; async def generator(current_pos: int): with open(FILE, &quot;br&quot;) as fd: fd.seek(current_pos) data = fd.read(4096) while data: yield data data = fd.read(4096) async def upload(): print(f&quot;Starting.&quot;) async with ContainerClient(ACCOUNT_URL, CONTAINER, CREDENTIALS) as cc: async with cc.get_blob_client(BLOB_NAME) as blob: bytes_transfered = 0 file_size = FILE.stat().st_size blocks = [] excp_counter = 0 while bytes_transfered &lt; file_size: gen = generator(bytes_transfered) print(f&quot;bytes_transfered={bytes_transfered}, file_size={file_size}&quot;) id = str(uuid4()) length = min(BLOCK_SIZE, file_size - bytes_transfered) try: await blob.stage_block(block_id=id, data=gen, length=length) blocks.append(id) bytes_transfered += length excp_counter = 0 except Exception as e: print(f&quot;Error: {e}&quot;) excp_counter +=1 if excp_counter &gt; 2: raise RuntimeError(&quot;Cannot upload!&quot;) blob.commit_block_list(blocks) if __name__ == &quot;__main__&quot;: f = upload() asyncio.run(f) </code></pre> <p>When I handle the exception by basically ignoring it and retrying, the second or third attempt to stage the block works but with subsequent blocks, the issue happens again and again for an error rate somewhere between 30 and 70%.</p> <p>Is this a bug in the BlobClient Code 
or am I using it wrong?</p> <p>Used versions:</p> <pre><code>.\src\venv\Scripts\pip.exe freeze aiohttp==3.8.4 aiosignal==1.3.1 async-timeout==4.0.2 attrs==23.1.0 autopep8==2.0.2 azure-common==1.1.28 azure-core==1.28.0 azure-functions==1.15.0 azure-functions-durable==1.2.5 azure-identity==1.14.0 azure-keyvault==4.2.0 azure-keyvault-certificates==4.7.0 azure-keyvault-keys==4.8.0 azure-keyvault-secrets==4.7.0 azure-storage-blob==12.17.0 </code></pre> <h2>EDIT 2023-10-06</h2> <p>I changed the generator function to better represent the use case. It's still much simplified since actually I'm using aiohttp to simultaneously download a file from another source. The output was changed accordingly to the test with the new code.</p>
<python><python-3.x><azure><azure-blob-storage><azure-python-sdk>
2023-10-05 11:00:52
1
548
Matthias Holzapfel
77,236,502
4,809,603
Python to send a message to Teams using Microsoft Graph whilst keeping formatting with JSON vnd.microsoft.card.adaptive
<p>I am trying to send a message to MS Teams using MS Graph. This code works if I just use an unformatted string, like the below:</p> <pre><code> headers = { &quot;Authorization&quot;: f&quot;Bearer {bearer_token}&quot; } body = { &quot;body&quot;: { &quot;content&quot;: message }, &quot;replyToId&quot;: message_id } response = requests.post(f&quot;https://graph.microsoft.com/v1.0/teams/{team_id}/channels/{channel_id}/messages/{reply_to_id}/replies&quot;, headers=headers, json=body ) </code></pre> <p>However if I adapt this code to send formatted text, it doesn't recognise the JSON payload.</p> <pre><code>def send_teams_message_alt(team_id, channel_id, message_id, message, bearer_token): url = f&quot;https://graph.microsoft.com/v1.0/teams/{team_id}/channels/{channel_id}/messages/{message_id}/replies&quot; headers = { &quot;Authorization&quot;: f&quot;Bearer {bearer_token}&quot; #, # &quot;Content-Type&quot;: &quot;application/json&quot; } adaptive_message = {&quot;type&quot;: &quot;AdaptiveCard&quot;, &quot;body&quot;: [ { &quot;type&quot;: &quot;TextBlock&quot;, &quot;text&quot;: &quot;Hello world!&quot;, &quot;size&quot;: &quot;large&quot;, &quot;weight&quot;: &quot;bolder&quot; }, { &quot;type&quot;: &quot;TextBlock&quot;, &quot;text&quot;: &quot;This is the output message in JSON format using vnd.microsoft.card.adaptive.&quot;, &quot;wrap&quot;: True } ], &quot;replyToId&quot;: message_id } print(json.dumps(adaptive_message)) response = requests.post(url, headers=headers, json=json.dumps(adaptive_message)) if response.status_code == 201: print(&quot;Adaptive Card message sent successfully.&quot;) else: print(&quot;Failed to send Adaptive Card message:&quot;, response.status_code, response.text) </code></pre> <p>print statement with the json code to be sent:</p> <pre><code>{&quot;type&quot;: &quot;AdaptiveCard&quot;, &quot;body&quot;: [{&quot;type&quot;: &quot;TextBlock&quot;, &quot;text&quot;: &quot;Hello world!&quot;, &quot;size&quot;: &quot;large&quot;, 
&quot;weight&quot;: &quot;bolder&quot;}, {&quot;type&quot;: &quot;TextBlock&quot;, &quot;text&quot;: &quot;This is the output message in JSON format using vnd.microsoft.card.adaptive.&quot;, &quot;wrap&quot;: true}], &quot;replyToId&quot;: &quot;1696464791573&quot;} </code></pre> <p>Error message:</p> <pre><code>Failed to send Adaptive Card message: 400 {&quot;error&quot;:{&quot;code&quot;:&quot;BadRequest&quot;,&quot;message&quot;:&quot;Empty Payload. JSON content expected.&quot;,&quot;innerError&quot;:{&quot;date&quot;:&quot;2023-10-05T10:36:10&quot;,&quot;request-id&quot;:&quot;ad6d20b1-366f-4557-ac59-9afdc4fb7e90&quot;,&quot;client-request-id&quot;:&quot;ad6d20b1-366f-4557-ac59-9afdc4fb7e90&quot;}}} </code></pre>
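One likely culprit (an assumption — I have not run this against Graph): `requests.post(url, json=...)` serializes its argument itself, so passing it `json.dumps(adaptive_message)` encodes the payload twice, and the service receives a JSON *string* rather than a JSON object — consistent with the "Empty Payload. JSON content expected." error. A standard-library sketch of the double-encoding:

```python
import json

payload = {"type": "AdaptiveCard", "body": []}

single = json.dumps(payload)   # what the API should receive: a JSON object
double = json.dumps(single)    # what json=json.dumps(...) effectively sends

# One decode of the double-encoded form yields a string, not a dict:
assert isinstance(json.loads(single), dict)
assert isinstance(json.loads(double), str)
print(double)
```

So either pass the dict directly (`requests.post(url, headers=headers, json=adaptive_message)`), or pre-serialize with `data=json.dumps(adaptive_message)` plus an explicit `Content-Type: application/json` header — but not both at once.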
<python><json><microsoft-graph-teams>
2023-10-05 11:00:43
0
415
Rhys
77,236,381
7,895,542
coverage/pytest and gitlab CI not running
<p>I am currently having two issues involving coverage/pytest and GitLab CI.</p> <p>The bigger and more recent one is that pytest and coverage do nothing at all when I call them in the GitLab CI. I do not get any output or error message at all. Only at the end does the job return with a non-zero exit code and fail. It used to work previously, but after updating some of our dependencies I have been getting this issue.</p> <p>Calling <code>coverage --help</code> does work normally and shows the help message. However, calling <code>coverage run ...</code> gives no output whatsoever.</p> <p>The other, less severe but older, problem is that coverage returns with a 0 exit code even for test failures. However, I have not been able to reproduce this locally and thus do not have an idea where to start on how to resolve these two issues.</p> <p>Maybe someone has seen something like this previously or has an idea what might be causing them?</p> <p>Here is a cut-down version of my .gitlab-ci.yml:</p> <pre><code>workflow: rules: - if: $CI_COMMIT_REF_PROTECTED == &quot;true&quot; - if: $CI_PIPELINE_SOURCE == 'merge_request_event' default: tags: # Make your job be executed in a shared runner that has CVMFS mounted - cvmfs before_script: - set +e - alias python3=&quot;python&quot; - sudo mv CommonAnalysisFramework .. - export CAFPATH=&quot;$PWD/..&quot; - ln -s templates/UserSettingsGuideline.py UserSettings.py - source $CAFPATH/CommonAnalysisFramework/${SETUP_SCRIPT} variables: SETUP_SCRIPT: setup_root628.sh image: atlas/centos7-atlasos-dev stages: - build - run - check - lint test: stage: check before_script: - set +e - alias python3=&quot;python&quot; - sudo mv CommonAnalysisFramework ..
- export CAFPATH=&quot;$PWD/..&quot; - ln -s templates/UserSettingsGuideline.py UserSettings.py - source $CAFPATH/CommonAnalysisFramework/${SETUP_SCRIPT} - source /cvmfs/sft.cern.ch/lcg/releases/LCG_104/pytest/7.4.0/x86_64-centos7-gcc12-opt/pytest-env.sh - source /cvmfs/sft.cern.ch/lcg/releases/LCG_104/pytest_cov/3.0.0/x86_64-centos7-gcc12-opt/pytest_cov-env.sh - unalias python3 script: - set +e - echo &quot; in main script&quot; - coverage run --branch --source=&quot;.&quot; --omit=&quot;*config*,test/*.py,*UserSettings*,templates/*,*__init__.py&quot; -m pytest test - coverage report -m --ignore-errors - coverage html --ignore-errors artifacts: paths: - .coverage - htmlcov </code></pre> <p>This is the output</p> <pre><code>Getting source from Git repository 00:02 Fetching changes... Initialized empty Git repository in /builds/atlas-germany-dresden-vbs-group/CAF-Scripts/.git/ Created fresh repository. Checking out 3f716e4e as detached HEAD (ref is Root628)... Skipping Git submodules setup Downloading artifacts 00:01 Downloading artifacts for compile (32909051)... Downloading artifacts from coordinator... ok host=gitlab.cern.ch OK Downloading artifacts for run_USGuideline (32909052)... Downloading artifacts from coordinator... ok host=gitlab.cern.ch OK Executing &quot;step_script&quot; stage of the job script 00:16 $ # INFO: Lowering limit of file descriptors for backwards compatibility. ffi: https://cern.ch/gitlab-runners-limit-file-descriptors # collapsed multi-line command $ set +e $ alias python3=&quot;python&quot; $ sudo mv CommonAnalysisFramework .. $ export CAFPATH=&quot;$PWD/..&quot; $ ln -s templates/UserSettingsGuideline.py UserSettings.py $ source $CAFPATH/CommonAnalysisFramework/${SETUP_SCRIPT} lsetup lsetup &lt;tool1&gt; [ &lt;tool2&gt; ...] 
(see lsetup -h): lsetup asetup (or asetup) to setup an Athena release lsetup astyle ATLAS style macros lsetup atlantis Atlantis: event display lsetup eiclient Event Index lsetup emi EMI: grid middleware user interface lsetup ganga Ganga: job definition and management client lsetup lcgenv lcgenv: setup tools from cvmfs SFT repository lsetup panda Panda: Production ANd Distributed Analysis lsetup pyami pyAMI: ATLAS Metadata Interface python client lsetup root ROOT data processing framework lsetup rucio distributed data management system client lsetup scikit python data analysis ecosystem lsetup views Set up a full LCG release lsetup xcache XRootD local proxy cache lsetup xrootd XRootD data access advancedTools advanced tools menu diagnostics diagnostic tools menu helpMe more help printMenu show this menu showVersions show versions of installed software 06 Mar 2023 centos7: setupATLAS is python3 environment by default (same as setupATLAS -3). If you need the previous python2 environment, do setupATLAS -2. We strongly encourage API users to migrate their scripts to python3. * * * This environment has been setup as python3. * * * $ source /cvmfs/sft.cern.ch/lcg/releases/LCG_104/pytest/7.4.0/x86_64-centos7-gcc12-opt/pytest-env.sh $ source /cvmfs/sft.cern.ch/lcg/releases/LCG_104/pytest_cov/3.0.0/x86_64-centos7-gcc12-opt/pytest_cov-env.sh $ unalias python3 $ set +e $ echo &quot; in main script&quot; in main script $ coverage run --branch --source=&quot;.&quot; --omit=&quot;*config*,test/*.py,*UserSettings*,templates/*,*__init__.py&quot; -m pytest test $ coverage report -m --ignore-errors No data to report. $ coverage html --ignore-errors No data to report. Cleaning up project directory and file based variables 00:01 ERROR: Job failed: command terminated with exit code 1 </code></pre> <p>And one of the files in the test folder is this</p> <pre><code>#!/bin/env python &quot;&quot;&quot;Test that the nominal cut is part of the cut list. 
Also requires it to not be commented out for tables.&quot;&quot;&quot; import templates.WZUserSettings as us def test_cutNames(): cutnames = us.cutNames defaultCut = us.defaultCut.replace(&quot;*&quot;, &quot;&quot;) assert defaultCut in cutnames </code></pre> <p>Cheers</p>
<python><gitlab><pytest><gitlab-ci><coverage.py>
2023-10-05 10:41:59
0
360
J.N.
77,236,281
7,895,542
Type hinting pairwise with overhang
<p>I took over a code base (support down to 3.9) and wanted to add some type hinting. However, I am currently stuck on this function.</p> <pre><code>def _pairwise(iterable: T.Iterable, end=None) -&gt; T.Iterable: left, right = itertools.tee(iterable) next(right, None) return itertools.zip_longest(left, right, fillvalue=end) </code></pre> <p>It is later used to iterate over regex matches and extract their start and end indices for slicing. The last fill value of <code>None</code> is used to have the last slice go to the end of the string.</p> <p>We know that the actual signature should be</p> <p><code>_pairwise(iterable: Iterable[T], end: Optional[T] = None) -&gt; Iterator[tuple[T, Optional[T]]]</code></p> <p>because <code>left</code> is guaranteed to be at least as long as <code>right</code>.</p> <p>However, the approach with <code>zip_longest</code> does not allow that. Type checkers read it as <code>Iterator[tuple[Optional[T], Optional[T]]]</code>.</p> <p>I have rewritten the function so that the type checker (pyright) is able to verify the target signature.</p> <pre><code>def _pairwise( iterable: Iterable[T], end: Optional[T] = None ) -&gt; Iterator[tuple[T, Optional[T]]]: left, right = itertools.tee(iterable) next(right, None) for x, y in zip(right, left): yield y, x if (last := next(left, None)) is not None: yield last, end </code></pre> <p>However, I am not particularly pleased with this result yet. First, there is the need to swap the arguments to <code>zip</code> to avoid it taking one extra step on <code>left</code>, as well as the manual check to deal with the case of the argument being an empty iterable.</p> <p>This also means that the functionality is not the intended one for <code>T=NoneType</code>, although that is not actually a problem, but it does annoy me a bit.</p> <p>Is there any other way to get this pairwise functionality to typecheck?</p>
<python><python-itertools><python-typing>
2023-10-05 10:25:27
1
360
J.N.
77,236,258
3,567,987
converting float (32 bit) and double (64 bit) to bytes
<p>I am trying to send a data buffer from Python over TCP to a C++ server.</p> <p>The TCP pack order is defined as follows:</p> <ol> <li>bytes[0 to 3] must be an integer (32 bit) with value = length of the buffer</li> <li>bytes[4 to 7] must be a float (32 bit)</li> <li>bytes[8 to 15] must be a double (64 bit)</li> </ol> <p>I have successfully implemented the first point (length of the buffer) in the following way:</p> <pre><code>buffer = bytearray(bufferLength) # bufferLength is an integer buffer[0] = bufferLength.to_bytes(4, 'little')[0] buffer[1] = bufferLength.to_bytes(4, 'little')[1] buffer[2] = bufferLength.to_bytes(4, 'little')[2] buffer[3] = bufferLength.to_bytes(4, 'little')[3] </code></pre> <p>which works perfectly. But how do I convert the float (32 bit) and the double (64 bit)?</p>
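For this kind of fixed binary layout, the standard library's `struct` module can pack all three fields at once. The sketch below assumes little-endian byte order, matching the `to_bytes(4, 'little')` code above — confirm that is what the C++ side expects:

```python
import struct

buffer_length = 16   # value for the 32-bit int field (the total buffer size)
f32 = 1.5            # packed as a 32-bit float
f64 = 2.25           # packed as a 64-bit double

# '<' = little-endian with no padding; i = int32, f = float32, d = float64,
# giving exactly the 4 + 4 + 8 = 16 byte layout described above.
buffer = struct.pack('<ifd', buffer_length, f32, f64)

print(len(buffer))                    # 16
print(struct.unpack('<ifd', buffer))  # (16, 1.5, 2.25)
```

If the buffer is built incrementally, `struct.pack_into(fmt, bytearray, offset, ...)` writes into an existing `bytearray` at a given offset instead.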
<python><floating-point><byte>
2023-10-05 10:22:36
0
2,289
Thomas
77,236,253
2,132,157
How can I get the path vertices of an Ellipse with the axes coordinates and not -1 to 1 coordinates?
<p>When I try to get the path vertices of an ellipse using Matplotlib, the vertices are returned scaled from -1 to 1. I would like to understand the suggested way to get the vertices using the axes coordinate system and not the scaled one:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from matplotlib.patches import Ellipse theta = np.pi / 4 # rotation angle in radians (example value) fig,ax = plt.subplots() ell_patch = Ellipse((10, 10), 200, 200, theta*180/np.pi, edgecolor='red', facecolor='none') ax.add_patch(ell_patch) ax.set_xlim(-200,200) ax.set_ylim(-200,200) ell_patch.get_path().vertices </code></pre> <p>Note: I am using Matplotlib 3.5 and I can't upgrade.</p> <p>As suggested by @Flow:</p> <pre><code>path = ell_patch.get_path() transform = ell_patch.get_transform() # Now apply the transform to the path newpath = transform.transform_path(path) </code></pre> <p>However, it does not scale properly; the ellipse is bigger...</p>
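If I read the question right, the piece that maps the unit-circle path into data coordinates is the patch's own transform, `get_patch_transform()`, rather than `get_transform()` (which additionally applies the data-to-display mapping once the patch sits on an Axes — plausibly why the result looked "bigger"). A sketch, not verified on Matplotlib 3.5 specifically:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import numpy as np
from matplotlib.patches import Ellipse

theta = np.pi / 4  # example rotation angle in radians
ell = Ellipse((10, 10), 200, 100, angle=np.degrees(theta))

# get_patch_transform() maps the internal unit circle into data coordinates.
verts = ell.get_patch_transform().transform(ell.get_path().vertices)
print(verts.min(axis=0), verts.max(axis=0))
```

One caveat: these vertices are Bezier control points of the circle approximation, so a few of them lie slightly outside the exact ellipse boundary; for the true outline, evaluate the path rather than reading its control points directly.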
<python><matplotlib><ellipse>
2023-10-05 10:21:41
0
22,734
G M
77,236,080
10,159,065
A/B testing using a t-test
<p>I am running some simulations to test changes to the current model; let's call the changed version the new algorithm. My model predicts the route for a particular transaction between two possible routes. The success rate is defined as total successes / total transactions. The dataframe below has two columns, old and new: old has daily success rates for 14 days from the old algorithm, and new has daily success rates for 14 days from the new algorithm.</p> <p>Q1. I want to conclude whether the new algorithm is better than the old algorithm. I could just compare the means over the 14 days, but I want to run some statistical measures. I have written the code below, but if I interchange the new and old columns it still yields the same p-value. I basically want to come to the conclusion that new is better than old, but I think this test is only telling me that the results from both algorithms are significantly different from each other. I need some help to reach the conclusion.</p> <p>Q2. Can I compute a confidence interval in which my results (the difference between old and new) lie?</p> <pre><code>import pandas as pd from scipy import stats data = pd.DataFrame({ 'old': [74.9254,73.7721,73.6018,68.6855,63.4666,63.9204,70.6977,62.6488,67.8088,70.2274,71.1197,64.8925,73.1113,70.7065], # Replace with your old algorithm results 'new': [74.8419,73.7548,73.6677,68.9352,63.8387,64.1143,70.9533,62.6026,67.9586,70.7,71.1263,65.1053,72.9996,70.5899], }) # Perform a paired t-test t_statistic, p_value = stats.ttest_rel(data['new'], data['old']) # Define your significance level (alpha) alpha = 0.05 # Print the t-statistic and p-value print(f&quot;Paired t-statistic: {t_statistic}&quot;) print(f&quot;P-value: {p_value}&quot;) # Compare p-value to the significance level if p_value &lt; alpha: print(&quot;Reject the null hypothesis. The new algorithm is performing significantly better.&quot;) else: print(&quot;Fail to reject the null hypothesis.
There is no significant difference between the algorithms.&quot;) </code></pre>
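On Q1, the symmetry is expected: a two-sided paired t-test only asks whether the mean difference is nonzero, so swapping the columns flips the sign of t but leaves p unchanged. The directional question needs a one-sided test — in recent SciPy, `stats.ttest_rel(data['new'], data['old'], alternative='greater')`. The mechanics for both questions, sketched with only the standard library (the 2.160 critical value is the textbook t(0.975, df=13) and is an assumption to verify, e.g. via `scipy.stats.t.ppf`):

```python
import math
import statistics

old = [74.9254, 73.7721, 73.6018, 68.6855, 63.4666, 63.9204, 70.6977,
       62.6488, 67.8088, 70.2274, 71.1197, 64.8925, 73.1113, 70.7065]
new = [74.8419, 73.7548, 73.6677, 68.9352, 63.8387, 64.1143, 70.9533,
       62.6026, 67.9586, 70.7000, 71.1263, 65.1053, 72.9996, 70.5899]

diffs = [b - a for a, b in zip(old, new)]
n = len(diffs)
mean_d = statistics.mean(diffs)
se = statistics.stdev(diffs) / math.sqrt(n)  # standard error of the mean diff
t_stat = mean_d / se                         # identical to scipy's paired t

# Q2: 95% CI for the mean difference new - old.
t_crit = 2.160  # t(0.975, df=13) from a t-table; stdlib has no t inverse CDF
ci = (mean_d - t_crit * se, mean_d + t_crit * se)
print(round(t_stat, 3), tuple(round(x, 4) for x in ci))
```

If `t_stat` is positive and the one-sided p (half the two-sided p when t > 0) is below alpha, you can conclude new > old; the CI gives the plausible range of the daily improvement.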
<python><pandas><statistics><simulation><ab-testing>
2023-10-05 09:58:28
1
448
Aayush Gupta
77,235,773
5,501,815
python/mypy exhaustiveness checking with tuples
<p>I am trying to match on the value of a <code>Union</code> and have mypy perform exhaustiveness checking. Here is a minimal working example:</p> <pre class="lang-py prettyprint-override"><code>t: tuple[int, float] | str match t: case str(): print(&quot;found str&quot;) case (int(), float()): print(&quot;found tuple&quot;) case _ as unreachable: assert_never(unreachable) </code></pre> <p>I would expect this to pass a mypy check, since both options are covered. But I get an error</p> <pre><code>Argument 1 to &quot;assert_never&quot; has incompatible type &quot;tuple[&lt;nothing&gt;, &lt;nothing&gt;]&quot;; expected &quot;NoReturn&quot; [arg-type] </code></pre> <p>This would indicate that there is a <code>case</code> missing for the value <code>tuple[&lt;nothing&gt;, &lt;nothing&gt;]</code>. I have been unable to find anything about limitations of matching tuples. Am I missing something or is this a mypy bug?</p>
<python><match><mypy>
2023-10-05 09:18:36
1
1,306
Sebastiaan
77,235,643
13,184,183
Why does a renamed schema alter the result of a query in PySpark?
<p>I have a large parquet dataframe with invalid characters in the names of the fields (dots and parentheses). If I read it as-is and print some aggregation:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df1 = spark.read.parquet(source) &gt;&gt;&gt; df1.select('`A.B.C`').fillna(0).agg({'`A.B.C`' : 'max'}).show() +----------+ |max(A.B.C)| +----------+ | 5989.625| +----------+ </code></pre> <p>I want to rename the columns for convenient further work. If I do it via the schema</p> <pre class="lang-py prettyprint-override"><code>schema = df1.schema new_fields = [ StructField(f.name.replace('.', '___').replace('(', '-').replace(')', '-'), f.dataType, f.nullable) for f in schema.fields ] new_schema = StructType(new_fields) df2 = spark.read.schema(new_schema).parquet(source) </code></pre> <p>then the aggregation gives a different result:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df2.select('A__B__C').fillna(0).agg({'A__B__C' : 'max'}).show() +------------+ |max(A__B__C)| +------------+ | 0.0| +------------+ </code></pre> <p>If I do the renaming via a mapping</p> <pre class="lang-py prettyprint-override"><code>mapping = {c : c.replace('.', '___').replace('(', '-').replace(')', '-') for c in df1.columns} df3 = df1.select([F.col(f'`{c}`').alias(mapping[c]) for c in df1.columns]) </code></pre> <p>it works with the aggregation:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df3.select('A__B__C').fillna(0).agg({'A__B__C' : 'max'}).show() +------------+ |max(A__B__C)| +------------+ | 5989.625| +------------+ </code></pre> <p>However, when I try to save it for testing</p> <pre class="lang-py prettyprint-override"><code>df3.sample(fraction=0.0001).write.mode('overwrite').saveAsTable('test_table') </code></pre> <p>it gives an error:</p> <p><code>Attribute name &quot;Some.Col(1)&quot; contains invalid character(s) among &quot; ,;{}()\n\t=&quot;</code></p> <p>even though <code>&quot;Some.Col(1)&quot; in df3.columns</code> gives <code>False</code>, <code>&quot;Some__Col-1-&quot; in df3.columns</code> gives <code>True</code>, and so does direct indexing.</p> <p>Meanwhile, using <code>withColumnRenamed</code> for every column takes much longer, so I didn't test it.</p> <p>So why does using the new schema alter the output of the aggregation? Why does renaming work with aggregation but fail at writing, as if some columns were not renamed? How can I do proper renaming?</p>
<python><dataframe><apache-spark><pyspark>
2023-10-05 08:58:33
1
956
Nourless
77,235,565
18,108,367
Why is the thread inherited by the child process in the stopped status, while in the parent process it is started?
<h2>Safely forking a multithreaded process is problematic</h2> <p>I'm learning to use the <code>multiprocessing</code> module. Now I'm focusing on the following sentence:</p> <blockquote> <p>Note that safely forking a multithreaded process is problematic.</p> </blockquote> <p>present in the <a href="https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods" rel="nofollow noreferrer">official documentation</a> in the section dedicated to <strong>context an start methods</strong>.</p> <p>About this topic I have read <a href="https://stackoverflow.com/questions/46439740/safe-to-call-multiprocessing-from-a-thread-in-python/46440564#46440564">this answer</a> which contains the statement:</p> <blockquote> <p>&quot;Note that safely forking a multithreaded process is problematic&quot;: here <strong>problematic</strong> is quite an euphemism for &quot;impossible&quot;.</p> </blockquote> <h2>Test code</h2> <p>I have written this test code (for <strong>Linux</strong> platform):</p> <pre><code>import multiprocessing import threading import time a = 1 t1 = None def thread_function_1(): global a while True: time.sleep(2) a += 1 def p1(): global a global t1 while True: time.sleep(1) print(f&quot;Process p1 ---&gt; a = {a}, t1 = {t1}, t1 is_alive = {t1.is_alive()}&quot;) if __name__ == &quot;__main__&quot;: multiprocessing.set_start_method('fork') t1 = threading.Thread(target = thread_function_1) t1.start() p = multiprocessing.Process(target=p1) p.start() while True: a += 1 time.sleep(1) print(f&quot;Thread Main ---&gt; a = {a}, t1 = {t1}, t1 is_alive = {t1.is_alive()}&quot;) </code></pre> <p>I'm running the code on <strong>Linux</strong> so the instruction:</p> <pre><code>multiprocessing.set_start_method('fork') </code></pre> <p>selects a start method which is already <em>The default start method</em> on Linux. 
This is because for the test I'm using Python 3.6 (from documentation: <strong>The default start method will change away from fork in Python 3.14.</strong>).</p> <p>The output of its execution is:</p> <pre><code>Thread Main ---&gt; a = 2, t1 = &lt;Thread(Thread-1, started 140618847368960)&gt;, t1 is_alive = True Process p1 ---&gt; a = 1, t1 = &lt;Thread(Thread-1, stopped 140618847368960)&gt;, t1 is_alive = False Thread Main ---&gt; a = 4, t1 = &lt;Thread(Thread-1, started 140618847368960)&gt;, t1 is_alive = True Process p1 ---&gt; a = 1, t1 = &lt;Thread(Thread-1, stopped 140618847368960)&gt;, t1 is_alive = False Thread Main ---&gt; a = 5, t1 = &lt;Thread(Thread-1, started 140618847368960)&gt;, t1 is_alive = True Process p1 ---&gt; a = 1, t1 = &lt;Thread(Thread-1, stopped 140618847368960)&gt;, t1 is_alive = False Thread Main ---&gt; a = 7, t1 = &lt;Thread(Thread-1, started 140618847368960)&gt;, t1 is_alive = True Process p1 ---&gt; a = 1, t1 = &lt;Thread(Thread-1, stopped 140618847368960)&gt;, t1 is_alive = False </code></pre> <p>The output shows that:</p> <ul> <li>the process <code>p1</code> has a reference to the global variable <code>t1</code> which points to an instance of the class <code>Thread</code>; but in Main process the thread is in the status <em>started</em>, when in the process <code>p1</code> is in the status <em>stopped</em></li> <li>see also the different value returned by the method <code>is_alive()</code> (<em>true</em> in Main, <em>false</em> in <code>p1</code>)</li> <li>previous considerations are confirmed by the fact that in the process <code>p1</code> the value of the variable <code>a</code> is always <code>1</code> while in the Main process is increased by the thread <code>t1</code>.</li> </ul> <h2>Question</h2> <p>Why the thread inherited by the child process is in the stopped status while in parent process is in the status started?<br /> This behavior is linked to the sentence <em>&quot;Note that safely forking a multithreaded process is 
problematic.&quot;</em>?</p>
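The observed behavior can be demonstrated in isolation: POSIX `fork()` duplicates only the calling thread, so the child gets a copy of the `Thread` object's memory but no running OS thread behind it, and CPython's post-fork cleanup in the `threading` module marks every non-current thread as stopped. A POSIX-only sketch of just that effect (assumes `os.fork` is available):

```python
import os
import threading
import time

t = threading.Thread(target=lambda: time.sleep(5), daemon=True)
t.start()

pid = os.fork()
if pid == 0:
    # Child: the Thread *object* was copied, but its OS thread was not,
    # so is_alive() reports False here. The exit code encodes the observation.
    os._exit(0 if not t.is_alive() else 1)

_, status = os.waitpid(pid, 0)
child_saw_thread_stopped = (os.WEXITSTATUS(status) == 0)
print(t.is_alive(), child_saw_thread_stopped)  # True True
```

The "problematic" part of the documentation's warning is a separate issue: any lock the vanished thread held at fork time stays locked forever in the child, which is why fork plus threads can deadlock.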
<python><linux><multithreading><multiprocessing><fork>
2023-10-05 08:48:04
1
2,658
User051209
77,235,342
2,573,168
Run external program inside a conda environment in R
<p>I am trying to run <a href="https://jamieheather.github.io/stitchr/index.html" rel="nofollow noreferrer">stitchr</a> in R. For programs that run in Python, I use <code>reticulate</code>. I create a conda environment named <code>r-reticulate</code>, where I want to install <code>stitchr</code> and run it.</p> <p>I try the following:</p> <pre><code>if (!('r-reticulate' %in% reticulate::conda_list()[,1])){ reticulate::conda_create(envname = 'r-reticulate', packages = 'python=3.10') } reticulate::use_condaenv('r-reticulate') reticulate::py_install(&quot;stitchr&quot;, pip = TRUE) system(&quot;stitchr -h&quot;) # this does not work </code></pre> <p>But obviously enough, the <code>system()</code> call does not work, with the message <code>error in running command</code>.</p> <p>What would be the right way to do this?</p> <p>I had success in the past with <a href="https://cran.r-project.org/web/packages/anndata/index.html" rel="nofollow noreferrer">anndata</a>, for example. But this is an R package wrapper, so I can just do:</p> <pre><code>reticulate::use_condaenv('r-reticulate') reticulate::py_install(&quot;anndata&quot;, pip = TRUE) data_h5ad &lt;- anndata::read_h5ad(&quot;file.h5ad&quot;) </code></pre> <p>How can I approach the <code>stitchr</code> case?</p> <p><strong>EDIT:</strong></p> <p>So I retrieved <code>stitchr.py</code> location during the package installation: <code>/usr/local/Caskroom/miniconda/base/envs/r-reticulate/lib/python3.10/site-packages/Stitchr/stitchr.py</code></p> <p>I tried all the following but nothing works (see error messages):</p> <pre><code>pyloc=&quot;/usr/local/Caskroom/miniconda/base/envs/r-reticulate/lib/python3.10/site-packages/Stitchr/stitchr.py&quot; reticulate::source_python(pyloc) </code></pre> <blockquote> <p>Error in py_run_file_impl(file, local, convert) : ImportError: attempted relative import with no known parent package Run <code>reticulate::py_last_error()</code> for details.</p> </blockquote> 
<pre><code>reticulate::py_run_file(pyloc) </code></pre> <blockquote> <p>Error in py_run_file_impl(file, local, convert) : ImportError: attempted relative import with no known parent package Run <code>reticulate::py_last_error()</code> for details.</p> </blockquote> <pre><code>reticulate::py_run_string(paste(pyloc, &quot;-h&quot;)) </code></pre> <blockquote> <p>Error in py_run_string_impl(code, local, convert) : File &quot;&quot;, line 1 /usr/local/Caskroom/miniconda/base/envs/r-reticulate/lib/python3.10/site-packages/Stitchr/stitchr.py -h SyntaxError: invalid syntax Run <code>reticulate::py_last_error()</code> for details.</p> </blockquote> <p>I am absolutely clueless on how to proceed here.</p>
<python><r><conda><external><reticulate>
2023-10-05 08:13:44
2
3,247
DaniCee
77,235,225
7,188,690
Is there a more efficient way to union Spark DataFrames than using union()?
<p>I am trying to union a big spark data frame. But I am getting <code>Py4JJavaError: An error occurred while calling o26434.collectToPython. : org.apache.spark.SparkException: Job 390 cancelled because SparkContext was shut down</code> Is there any better way to union the dataframes that will be efficient in terms of performance, especially when you are dealing with large DataFrames? I am adding the error stack trace. I have tried this code without using union() well but it gave me the same error.</p> <p>I am using spark version <code>3.1.0</code></p> <pre><code>def get_misc_exception(df: DataFrame, grouping_columns: list, distribution_cols: list) -&gt; DataFrame: misc_df = None misc_cols_df_list = [] for key, value in misc_cols_dict.items(): print(f'{key}----------------') filtered_df = df.filter(value) if filtered_df.rdd.isEmpty(): data = [(data_date, hour, key, 0, 0, '{}', '{}', '{}', '{}', '{}', '{}', '{}', '{}', '{}', '{}', '{}', '{}')] misc_col_df = spark.createDataFrame(data, schema = final_df_schema) else: misc_col_df = get_exception(filtered_df, grouping_columns, distribution_cols) misc_col_df = misc_col_df.withColumn('Exception', F.lit(key)).select(*final_df_cols) misc_cols_df_list.append(misc_col_df) misc_df = misc_cols_df_list[0] for misc_cols_df in misc_cols_df_list[1:]: misc_df = misc_df.union(misc_cols_df) return misc_df </code></pre> <p>Error:</p> <pre><code> --------------------------------------------------------------------------- Py4JJavaError Traceback (most recent call last) &lt;timed exec&gt; in &lt;module&gt; /usr/local/spark/python/pyspark/sql/pandas/conversion.py in toPandas(self) 136 137 # Below is toPandas without Arrow optimization. 
--&gt; 138 pdf = pd.DataFrame.from_records(self.collect(), columns=self.columns) 139 column_counter = Counter(self.columns) 140 /usr/local/spark/python/pyspark/sql/dataframe.py in collect(self) 594 &quot;&quot;&quot; 595 with SCCallSiteSync(self._sc) as css: --&gt; 596 sock_info = self._jdf.collectToPython() 597 return list(_load_from_socket(sock_info, BatchedSerializer(PickleSerializer()))) 598 /usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py in __call__(self, *args) 1303 answer = self.gateway_client.send_command(command) 1304 return_value = get_return_value( -&gt; 1305 answer, self.gateway_client, self.target_id, self.name) 1306 1307 for temp_arg in temp_args: /usr/local/spark/python/pyspark/sql/utils.py in deco(*a, **kw) 126 def deco(*a, **kw): 127 try: --&gt; 128 return f(*a, **kw) 129 except py4j.protocol.Py4JJavaError as e: 130 converted = convert_exception(e.java_exception) /usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name) 326 raise Py4JJavaError( 327 &quot;An error occurred while calling {0}{1}{2}.\n&quot;. --&gt; 328 format(target_id, &quot;.&quot;, name), value) 329 else: 330 raise Py4JError( Py4JJavaError: An error occurred while calling o26434.collectToPython. 
: org.apache.spark.SparkException: Job 390 cancelled because SparkContext was shut down at org.apache.spark.scheduler.DAGScheduler.$anonfun$cleanUpAfterSchedulerStop$1(DAGScheduler.scala:979) at org.apache.spark.scheduler.DAGScheduler.$anonfun$cleanUpAfterSchedulerStop$1$adapted(DAGScheduler.scala:977) at scala.collection.mutable.HashSet.foreach(HashSet.scala:79) at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:977) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:2257) at org.apache.spark.util.EventLoop.stop(EventLoop.scala:84) at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2170) at org.apache.spark.SparkContext.$anonfun$stop$12(SparkContext.scala:1973) at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1357) at org.apache.spark.SparkContext.stop(SparkContext.scala:1973) at org.apache.spark.SparkContext$$anon$3.run(SparkContext.scala:1922) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:775) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2120) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2139) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2164) at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1004) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) at org.apache.spark.rdd.RDD.withScope(RDD.scala:388) at org.apache.spark.rdd.RDD.collect(RDD.scala:1003) at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:385) at org.apache.spark.sql.Dataset.$anonfun$collectToPython$1(Dataset.scala:3450) at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3618) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100) at 
org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3616) at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3447) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.base/java.lang.Thread.run(Thread.java:829) </code></pre>
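On the union itself: a long `a.union(b).union(c)...` chain only builds up lineage lazily, and the tidier spelling is a single fold — with PySpark that would be `functools.reduce(DataFrame.union, misc_cols_df_list)`. Note that the bigger cost in the code above is likely the per-key `filtered_df.rdd.isEmpty()`, which triggers a Spark job on every iteration. Since Spark isn't runnable here, the fold pattern is sketched on plain lists:

```python
from functools import reduce

# Stand-ins for the per-key DataFrames; with PySpark this would be
# reduce(DataFrame.union, misc_cols_df_list) -- the same left fold.
frames = [[1, 2], [3], [4, 5]]
combined = reduce(lambda acc, f: acc + f, frames)
print(combined)  # [1, 2, 3, 4, 5]
```

The `SparkContext was shut down` error itself usually points at executor memory pressure rather than the union API, so the fold is a cleanup, not a guaranteed fix.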
<python><pyspark>
2023-10-05 07:58:55
0
494
sam
77,235,156
3,999,951
Check if a column's integer is in another column's string of integers
<p>A dataframe has two columns. One has a single integer per row. The other has a string of multiple integers, separated by ',', per row:</p> <pre><code>import pandas as pd duck_ids = [&quot;1, 4, 5, 7&quot;, &quot;3, 11, 14, 27&quot;] ducks_of_interest = [4,15] duck_df = pd.DataFrame( { &quot;DucksOfInterests&quot;: ducks_of_interest, &quot;DuckIDs&quot;: duck_ids } ) print(f&quot;The starting dataframe:\n{duck_df}&quot;) DucksOfInterests DuckIDs 0 4 1, 4, 5, 7 1 15 3, 11, 14, 27 </code></pre> <p>A new column is required that returns a True if the Duck of Interest is within the set of Duck IDs. This is attempted using a simple lambda function with the .apply method:</p> <pre><code>duck_df['DoIinDIDs'] = duck_df.apply(lambda x: str(x['DuckIDs']) in [x['DucksOfInterests']], axis=1) </code></pre> <p>This was expected to return a True for the first row, as 4 is a number in &quot;1, 4, 5, 7&quot;, and False for the second row. However, the result is False for both rows:</p> <pre><code>print(f&quot;The dataframe with the additional column:\n{duck_df}&quot;) DucksOfInterests DuckIDs DoIinDIDs 0 4 1, 4, 5, 7 False 1 15 3, 11, 14, 27 False </code></pre> <p>What is the error in the code or the approach?</p>
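The lambda above tests whether the whole string `"1, 4, 5, 7"` is an element of the one-item list `[4]`, which is always False (and the operands are also swapped relative to the intent). A hedged sketch of one fix: split `DuckIDs` into tokens and test membership, which also avoids `4` falsely matching the `14` inside `"3, 11, 14, 27"`:

```python
import pandas as pd

duck_df = pd.DataFrame(
    {
        "DucksOfInterests": [4, 15],
        "DuckIDs": ["1, 4, 5, 7", "3, 11, 14, 27"],
    }
)

# Split each DuckIDs string into individual tokens, then test whether the
# integer of interest (as a string) is among them.
duck_df["DoIinDIDs"] = duck_df.apply(
    lambda row: str(row["DucksOfInterests"])
    in [s.strip() for s in row["DuckIDs"].split(",")],
    axis=1,
)

print(duck_df["DoIinDIDs"].tolist())  # [True, False]
```

The same token comparison could also be written without `apply`, e.g. with `str.split` plus a list comprehension over the two columns.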
<python><pandas>
2023-10-05 07:48:42
2
467
acolls_badger
77,235,064
17,721,722
How to Format Python Code to 'No Wrap' with an extension in VS Code?
<p>I have both Autopep8 and Black Formatter installed in VS Code. I don't want Python code wrapped onto new lines. Please suggest different ways to do this.</p> <p>I tried to format this line so that the wrapping goes away.</p> <pre><code>definition_mst_id = self.request.query_params.get( 'defination_id', None) </code></pre> <p>I tried this:</p> <pre><code>&quot;python.formatting.provider&quot;: &quot;autopep8&quot; &quot;python.formatting.autopep8Args&quot;: [ &quot;--max-line-length=200&quot; ] </code></pre> <p>But it gives this error.<br /> <code>This setting will soon be deprecated. Please use the Autopep8 extension or the Black Formatter extension. Learn more here: https://aka.ms/AAlgvkb.(2)</code></p> <p>Then I tried this:</p> <pre><code>&quot;[python]&quot;: { &quot;editor.defaultFormatter&quot;: &quot;ms-python.black-formatter&quot;, &quot;editor.wordWrap&quot;: &quot;off&quot;, &quot;editor.wordWrapColumn&quot;: 1000, }, &quot;editor.wordWrapColumn&quot;: 1000, &quot;editor.wrappingIndent&quot;: &quot;none&quot;, &quot;debug.console.wordWrap&quot;: false, </code></pre> <p>But this is also not working. When I try Format Selection, I get the following error:<br /> <code>Extension 'autopep8' is configured as formatter but it cannot format 'python'-files</code><br /> The same happens for Black.</p>
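With the new extensions, formatter arguments move to the extensions' own settings keys, and the deprecated `python.formatting.*` keys are ignored. A sketch of `settings.json`, assuming the `ms-python.black-formatter` and `ms-python.autopep8` extensions are installed (the keys below are those extensions' `args` settings; raising the line length is the closest thing to "no wrap", since Black has no option to disable wrapping entirely):

```jsonc
{
  "[python]": {
    "editor.defaultFormatter": "ms-python.black-formatter"
  },
  // Black Formatter extension:
  "black-formatter.args": ["--line-length", "200"],
  // Or, if using the autopep8 extension instead:
  "autopep8.args": ["--max-line-length", "200"]
}
```

For individual statements, Black's wrapping can also be suppressed locally with `# fmt: off` / `# fmt: on` comments around the block.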
<python><visual-studio-code>
2023-10-05 07:34:32
1
501
Purushottam Nawale
77,235,006
4,196,578
ImportError: cannot import name 'docstring' from 'matplotlib'
<p>Recently, my code involving <code>matplotlib.pyplot</code> suddenly stopped working on all my machines (Ubuntu 22.04 LTS). I tried a simple <code>import</code> and got the following error:</p> <pre><code>$ python Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import matplotlib.pyplot as plt Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/usr/local/lib/python3.10/dist-packages/matplotlib/pyplot.py&quot;, line 66, in &lt;module&gt; from matplotlib.figure import Figure, FigureBase, figaspect File &quot;/usr/local/lib/python3.10/dist-packages/matplotlib/figure.py&quot;, line 43, in &lt;module&gt; from matplotlib import _blocking_input, backend_bases, _docstring, projections File &quot;/usr/local/lib/python3.10/dist-packages/matplotlib/projections/__init__.py&quot;, line 58, in &lt;module&gt; from mpl_toolkits.mplot3d import Axes3D File &quot;/usr/lib/python3/dist-packages/mpl_toolkits/mplot3d/__init__.py&quot;, line 1, in &lt;module&gt; from .axes3d import Axes3D File &quot;/usr/lib/python3/dist-packages/mpl_toolkits/mplot3d/axes3d.py&quot;, line 23, in &lt;module&gt; from matplotlib import _api, cbook, docstring, _preprocess_data ImportError: cannot import name 'docstring' from 'matplotlib' (/usr/local/lib/python3.10/dist-packages/matplotlib/__init__.py) </code></pre> <p>I am not sure what caused the problem, and how to diagnose or fix it. The <code>matplotlib</code> package is installed using pip as root, as I need it to be available to all users by default.</p> <p><em>Has anyone encountered a similar issue and know how to fix it?</em></p>
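The traceback shows two different installation roots: `matplotlib` resolves from `/usr/local/lib/python3.10/dist-packages` (the root pip install, a recent version that uses `_docstring`), while `mpl_toolkits` resolves from `/usr/lib/python3/dist-packages` (the apt `python3-matplotlib` package, an older version that still imports `docstring`, which newer matplotlib removed). The usual fix is to remove or upgrade one of the two copies so only one installation is importable. A hedged diagnostic sketch for locating where a module would be imported from (demonstrated with a stdlib module, since the exact paths vary per machine):

```python
import importlib.util

def module_location(name):
    """Return the file (or search paths) a module would be imported from."""
    spec = importlib.util.find_spec(name)
    if spec is None:
        return None
    # Regular modules/packages have an origin; namespace packages only
    # carry submodule_search_locations.
    return spec.origin or list(spec.submodule_search_locations or [])

# On the affected machine, comparing module_location("matplotlib") and
# module_location("mpl_toolkits") should show the two different roots
# (/usr/local/lib/... from pip vs /usr/lib/python3/dist-packages from apt).
print(module_location("json"))  # stand-in demo with a stdlib module
```

If both roots show up, removing the apt copy (`sudo apt remove python3-matplotlib`) or reinstalling matplotlib so both packages come from the same source is a likely fix.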
<python><matplotlib><importerror>
2023-10-05 07:26:05
2
22,728
thor
77,234,933
2,562,058
How to correctly change the internal state of a closure in python?
<p>I am looking into closures, but I am a bit confused about how to change their internal state.</p> <p>Consider the following two examples:</p> <pre><code>def mean(): sample = [] def inner_mean(number): sample.append(number) return sum(sample) / len(sample) return inner_mean </code></pre> <p>and</p> <pre><code>def lpf(fs): y = 0.0 def _lpf(fc, u): y = (1 - fc / fs) * y + fc / fs * u return y return _lpf </code></pre> <p>The first case runs without any problem, whereas the second gives me the error <em>variable 'y' in enclosing scope is referenced before assignment</em>.</p> <p>Why doesn't this happen in the first example? And how should I modify the second example to make it work?</p> <p>To be clearer: once I have created a function, I want the value of the internal state <code>y</code> to be remembered at each function call.</p> <p>For example, if I do the following:</p> <pre><code>lpf_zero_three = lpf(0.3) lpf_zero_three(1.2, 0.2) lpf_zero_three(1.2, 3.1) </code></pre> <p>the returned value of the <code>lpf_zero_three(1.2, 3.1)</code> call shall depend on the value of <code>y</code> obtained from the call <code>lpf_zero_three(1.2, 0.2)</code>.</p>
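The difference is that the first example only *mutates* `sample` (calling `append` on it), which Python allows from an inner function, while the second *rebinds* `y` with `=`, and an assignment makes `y` local to `_lpf` unless it is declared `nonlocal`. A sketch of the fix (my own fs/fc values chosen so the arithmetic is exact):

```python
def lpf(fs):
    y = 0.0

    def _lpf(fc, u):
        nonlocal y  # rebind the enclosing y instead of creating a local one
        y = (1 - fc / fs) * y + fc / fs * u
        return y

    return _lpf

# State persists across calls of the same closure:
f = lpf(1.0)
print(f(0.5, 2.0))  # 1.0
print(f(0.5, 4.0))  # 2.5  (depends on the y left by the previous call)
```

Each call to `lpf(...)` creates a fresh, independent `y`, so two filters built from `lpf` do not share state.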
<python><python-3.x>
2023-10-05 07:16:20
1
1,866
Barzi2001
77,234,846
7,848,740
Persistent name of USB serial device with Python pyserial
<p>I have a Python application that chooses a serial device using <a href="https://pyserial.readthedocs.io/en/latest/tools.html#module-serial.tools.list_ports" rel="nofollow noreferrer">pyserial</a> <code>serial.tools.list_ports</code></p> <p>I have two USB-to-RS485 serial adapters connected. They won't be disconnected from the server, and the server has never been restarted (it's a Linux Ubuntu 22.04.3 LTS)</p> <p>For some reason, every time I choose to use one of the two serial ports, it randomly picks one even if I always choose, for instance, <code>/dev/ttyUSB0</code></p> <p>This looks strange to me because the server is never restarted and the USB device is never removed from it, so, technically, if I use the serial port mapped to <code>/dev/ttyUSB0</code> it should always be the same</p> <p>Is there a way to name serial ports in a persistent way?</p>
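The entries returned by `serial.tools.list_ports.comports()` expose attributes such as `device` and `serial_number`, so a port can be reselected by the adapter's own serial number instead of the unstable `/dev/ttyUSB*` name. A sketch using stand-in objects so it runs without hardware (the real `ListPortInfo` entries carry the same attributes; the serial numbers below are made up):

```python
from dataclasses import dataclass

@dataclass
class PortInfo:
    # Stand-in for serial.tools.list_ports_common.ListPortInfo.
    device: str
    serial_number: str

def find_device(ports, serial_number):
    """Return the device path whose adapter reports the given serial number."""
    for port in ports:
        if port.serial_number == serial_number:
            return port.device
    return None

# With real hardware: ports = serial.tools.list_ports.comports()
ports = [PortInfo("/dev/ttyUSB0", "A6001234"), PortInfo("/dev/ttyUSB1", "A6005678")]
print(find_device(ports, "A6005678"))  # /dev/ttyUSB1
```

On Linux, udev also provides stable symlinks under `/dev/serial/by-id/` that encode vendor, product and serial number, so opening that symlink directly (or writing a udev rule that creates a custom name) is another persistent option.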
<python><linux><serial-port><pyserial>
2023-10-05 07:01:04
1
1,679
NicoCaldo
77,234,815
1,800,755
Threading OpenCV video in GTK and keep events?
<p>I'm trying to get my camera feed into a GTK window, and I'd like to keep the button-press-event and motion-notify-event events working.</p> <p>I've found how to get the video by refreshing the image in a Gtk.image, in a GLib.idle_add, tried with a thread.Threading, tried with a GLib.timeout_add, but the loop is still blocking the events. I also tried with an OpenCV-headless in a virtual environment...</p> <p>I read it: <a href="https://pygobject.readthedocs.io/en/latest/guide/threading.html" rel="nofollow noreferrer">https://pygobject.readthedocs.io/en/latest/guide/threading.html</a></p> <p>What didn't I understand? Is there a way to fix this ?</p> <p>Here's my (simplified) code :</p> <pre class="lang-py prettyprint-override"><code>import cv2 import gi gi.require_version('Gtk', '3.0') from gi.repository import GLib, Gtk, GdkPixbuf #import threading vdo_url=&quot;http://my_cam/vdo.mjpg&quot; # simplified, without security class Cam_GTK(Gtk.Window): def __init__(self): Gtk.Window.__init__(self, title=&quot;Cam GTK&quot;) self.capture = cv2.VideoCapture(vdo_url) self.video = Gtk.Image.new() self.video.connect(&quot;button-press-event&quot;, self.on_video_clicked ) self.video.connect(&quot;motion-notify-event&quot;, self.on_video_hover ) # Also tried with event=Gtk.EventBox.new(), event.add(self.video), then &quot;connect&quot;... 
page_box = Gtk.Box( orientation=Gtk.Orientation.VERTICAL, spacing=1 ) page_box.pack_start( self.video, True, True, 0 ) self.add(page_box) GLib.idle_add(self.show_frame) #GLib.timeout_add(100, self.show_frame) #tried to Thread(target=self.show_frame) w/out loop self.connect(&quot;destroy&quot;, Gtk.main_quit) self.show_all() def on_video_hover( self, event, result ): print(&quot;video hover&quot;) def on_video_clicked( self, event, button ): print(&quot;video clicked&quot;) def show_frame(self): ret, frame = self.capture.read() #tried with a while if ret: frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) pb = GdkPixbuf.Pixbuf.new_from_data(frame.tobytes(), GdkPixbuf.Colorspace.RGB, False, 8, frame.shape[1], frame.shape[0], frame.shape[2]*frame.shape[1]) self.video.set_from_pixbuf( pb.copy() ) return True #tried changed to False to stop loop cam = Cam_GTK() Gtk.main() </code></pre>
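Two things likely block the events here: an idle callback that returns True runs again whenever the loop is idle, and `capture.read()` blocks inside it, so the main loop rarely gets to dispatch events; and a plain `Gtk.Image` has no GdkWindow of its own, so pointer events generally need an enclosing `Gtk.EventBox` with the relevant event masks added. The usual pattern is to move the blocking read into a worker thread and let the main loop only pick up ready frames (e.g. from a `GLib.timeout_add` callback that never blocks). A GTK-free sketch of that producer/consumer shape, using only the stdlib so it runs anywhere:

```python
import queue
import threading
import time

frames = queue.Queue(maxsize=1)  # keep only the newest frame
stop = threading.Event()

def capture_loop():
    """Worker side: stands in for the blocking capture.read() loop."""
    n = 0
    while not stop.is_set():
        frame = f"frame-{n}"           # real code: ret, frame = capture.read()
        try:
            frames.put_nowait(frame)
        except queue.Full:
            pass                       # drop frames the UI hasn't consumed yet
        n += 1
        time.sleep(0.01)

def on_timeout():
    """UI side: stands in for a GLib.timeout_add callback; never blocks."""
    try:
        return frames.get_nowait()     # real code: update the Gtk.Image here
    except queue.Empty:
        return None

worker = threading.Thread(target=capture_loop, daemon=True)
worker.start()
time.sleep(0.05)
print(on_timeout() is not None)
stop.set()
worker.join()
```

In the GTK version, `GLib.timeout_add(33, self.show_frame)` would poll the queue and set the pixbuf only when a frame is ready, leaving the main loop free to deliver button and motion events.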
<python><multithreading><opencv><gtk3><glib>
2023-10-05 06:54:14
2
478
s4mdf0o1
77,234,759
2,858,044
XCom pull by task_ids doesn't get value
<p>I have a simple dag which has two tasks in task group. This dag has schedule every 10 minutes to pull message from google pub/sub. However the pull sensor working well to get messages, but on the next task doesn't get the message which already pushed to XCom.</p> <p>Here is my dag</p> <pre><code>from airflow import DAG from airflow.models import Variable from airflow.providers.google.cloud.sensors.pubsub import PubSubPullSensor from airflow.operators.python import PythonOperator from airflow.utils.task_group import TaskGroup from airflow.operators.empty import EmptyOperator from airflow.utils.dates import days_ago from dags.python_scripts.airflow_callback import callback from datetime import datetime, timedelta import base64 import json # Airflow Config default_args = { 'owner': 'data', 'depends_on_past': False, 'start_date': days_ago(1), 'email_on_failure': True, 'email_on_retry': False, 'email': [], 'retries': 5, 'retry_delay': timedelta(seconds=300), 'on_failure_callback': callback } def extract_datetime(**kwargs): message_data = kwargs['ti'].xcom_pull(task_ids='pull_pubsub_message') print(&quot;MESSAGE FROM PUB/SUB&quot;, message_data) decoded_data = [base64.b64decode(message['message']['data']) for message in message_data] print(&quot;pubsub data received: &quot;, decoded_data) for d in decoded_data: decoded_d = json.loads(str(d.decode(&quot;utf-8&quot;))) payload_datetime = datetime.strptime(decoded_d['run_datetime'], '%Y-%m-%d %H:%M:%S') # Extract the date portion from the datetime extracted_date = payload_datetime.date() # Pass the extracted date as a parameter to the BigQuery operator kwargs['ti'].xcom_push(key='extracted_date', value=extracted_date) dag_id = &quot;ingest_score&quot; with DAG( dag_id, default_args=default_args, schedule_interval='*/10 * * * *', catchup=False, concurrency=1, max_active_runs=1, dagrun_timeout=timedelta(minutes=10) ) as dag: task_finish_ingestion = EmptyOperator( task_id='ingestion_finished', dag=dag, ) with 
TaskGroup('recommender_feed'): pull_pubsub_message_task = PubSubPullSensor( task_id='pull_pubsub_message', ack_messages=True, max_messages=100, timeout=100, project_id=&quot;{{ var.value.PUBSUB_PROJECT_ID }}&quot;, subscription=&quot;{{ var.value.PUBSUB_SUBSCRIBER }}&quot;, soft_fail=True ) extract_datetime_task = PythonOperator( task_id='extract_datetime', python_callable=extract_datetime ) # Set up task dependencies pull_pubsub_message_task &gt;&gt; extract_datetime_task &gt;&gt; task_finish_ingestion </code></pre> <p>Here is the log of <code>pull_sub_message_task</code></p> <pre><code>&lt;/br&gt; 2023-10-05T01:10:05.988+0000 logLevel=INFO logger=airflow.providers.google.cloud.hooks.pubsub.PubSubHook - Pulling max 100 messages from subscription (path) projects/my-project-id/subscriptions/scoring-sub &lt;/br&gt; 2023-10-05T01:10:08.883+0000 logLevel=INFO logger=airflow.providers.google.cloud.hooks.pubsub.PubSubHook - Pulled 1 messages from subscription (path) projects/my-project-id/subscriptions/scoring-sub &lt;/br&gt; 2023-10-05T01:10:08.884+0000 logLevel=INFO logger=airflow.providers.google.cloud.hooks.pubsub.PubSubHook - Acknowledging 1 ack_ids from subscription (path) projects/my-project-id/subscriptions/scoring-sub &lt;/br&gt; 2023-10-05T01:10:08.967+0000 logLevel=INFO logger=airflow.providers.google.cloud.hooks.pubsub.PubSubHook - Acknowledged ack_ids from subscription (path) projects/my-project-id/subscriptions/scoring-sub &lt;/br&gt; 2023-10-05T01:10:08.974+0000 logLevel=INFO logger=airflow.task.operators - Success criteria met. Exiting. 
</code></pre> <p>Here is the log of <code>extract_datetime_task</code>, you can see message_data is <code>None</code></p> <pre><code>2023-10-05T01:10:12.991+0000 logLevel=INFO logger=airflow.task - Executing &lt;Task(PythonOperator): recommender_feed.extract_datetime&gt; on 2023-10-05 01:00:00+00:00 &lt;/br&gt; 2023-10-05T01:10:12.998+0000 logLevel=INFO logger=airflow.task.task_runner.standard_task_runner.StandardTaskRunner - Started process 7658 to run task &lt;/br&gt; 2023-10-05T01:10:13.002+0000 logLevel=INFO logger=airflow.task.task_runner.standard_task_runner.StandardTaskRunner - Running: ['***', 'tasks', 'run', 'ingest_score', 'recommender_feed.extract_datetime', 'scheduled__2023-10-05T01:00:00+00:00', '--job-id', '114', '--raw', '--subdir', 'DAGS_FOLDER/dags/ingest_score.py', '--cfg-path', '/tmp/tmp54e9omke'] &lt;/br&gt; 2023-10-05T01:10:13.005+0000 logLevel=INFO logger=airflow.task.task_runner.standard_task_runner.StandardTaskRunner - Job 114: Subtask recommender_feed.extract_datetime &lt;/br&gt; 2023-10-05T01:10:13.998+0000 logLevel=INFO logger=airflow.cli.commands.task_command - Running &lt;TaskInstance: ingest_score.recommender_feed.extract_datetime scheduled__2023-10-05T01:00:00+00:00 [running]&gt; on host ***-worker-0.***-worker.***.svc.cluster.local &lt;/br&gt; 2023-10-05T01:10:15.036+0000 logLevel=INFO logger=airflow.task - Exporting env vars: AIRFLOW_CTX_DAG_OWNER='data' AIRFLOW_CTX_DAG_ID='ingest_score' AIRFLOW_CTX_TASK_ID='recommender_feed.extract_datetime' AIRFLOW_CTX_EXECUTION_DATE='2023-10-05T01:00:00+00:00' AIRFLOW_CTX_TRY_NUMBER='1' AIRFLOW_CTX_DAG_RUN_ID='scheduled__2023-10-05T01:00:00+00:00' &lt;/br&gt; 2023-10-05T01:10:15.049+0000 logLevel=INFO logger=airflow.task - MESSAGE FROM PUB/SUB None &lt;/br&gt; </code></pre> <p>The value is exists on the XComs UI <a href="https://i.sstatic.net/o6vny.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o6vny.png" alt="enter image description here" /></a></p> <p>airflow 2.6.2</p>
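Inside a `TaskGroup`, Airflow prefixes task ids with the group id by default, so the sensor's XCom lives under `recommender_feed.pull_pubsub_message`, and `xcom_pull(task_ids='pull_pubsub_message')` finds nothing, hence `None`. A sketch of the corrected lookup plus the message decoding, with the decode part runnable stand-alone (the sample payload is illustrative):

```python
import base64
import json

# Inside the task group, the full task id is "<group_id>.<task_id>":
# message_data = kwargs["ti"].xcom_pull(
#     task_ids="recommender_feed.pull_pubsub_message")

def decode_messages(message_data):
    """Decode base64-encoded JSON payloads pulled from Pub/Sub."""
    payloads = []
    for message in message_data:
        raw = base64.b64decode(message["message"]["data"])
        payloads.append(json.loads(raw.decode("utf-8")))
    return payloads

# Simplified shape of what the sensor pushes, with an illustrative payload:
sample = [{"message": {"data": base64.b64encode(
    json.dumps({"run_datetime": "2023-10-05 01:00:00"}).encode()).decode()}}]
print(decode_messages(sample))  # [{'run_datetime': '2023-10-05 01:00:00'}]
```

Alternatively, constructing the group with `TaskGroup('recommender_feed', prefix_group_id=False)` keeps the bare task ids, at the cost of losing the namespacing.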
<python><airflow>
2023-10-05 06:45:33
1
1,419
itx
77,234,301
2,525,940
pyqt dynamic update of property and css
<p>I'm trying to create an alert label that changes color based on some system state</p> <pre class="lang-py prettyprint-override"><code>import sys from PyQt5.QtWidgets import QApplication, QMainWindow, QWidget, QVBoxLayout, QLabel, QPushButton class MainWindow(QMainWindow): def __init__(self): super(MainWindow, self).__init__() self.setWindowTitle(&quot;My App&quot;) layout = QVBoxLayout() self.lbl1 = QLabel(&quot;text&quot;) self.lbl2 = QLabel(&quot;!!!!&quot;) btn = QPushButton(&quot;Alert&quot;) btn.clicked.connect(self.update) layout.addWidget(self.lbl1) layout.addWidget(self.lbl2) layout.addWidget(btn) self.lbl1.setProperty(&quot;alert&quot;, False) self.lbl2.setProperty(&quot;alert&quot;, True) widget = QWidget() widget.setLayout(layout) self.setCentralWidget(widget) def update(self, item): print(&quot;update alert&quot;) self.lbl1.setProperty(&quot;alert&quot;, True) CSS = &quot;&quot;&quot; QLabel[alert=&quot;true&quot;] { background-color: yellow} &quot;&quot;&quot; app = QApplication(sys.argv) app.setStyleSheet(CSS) window = MainWindow() window.show() app.exec() </code></pre> <p>I can use a custom property and CSS to correctly set the background of the 2nd label to yellow in the <code>__init__</code>. The 1st label should be set to yellow on clicking the alert button but nothing happens.</p> <p>Is there some update or redraw function that also needs to be called?</p>
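Changing a dynamic property does not re-evaluate the stylesheet by itself; per Qt's style-sheet documentation, the widget has to be unpolished and re-polished after the property changes. A sketch of the slot under that assumption (not standalone-runnable; it belongs inside `MainWindow`). The rename is deliberate: `update` shadows the existing `QWidget.update()` method, which Qt itself calls:

```python
def on_alert(self, checked):  # hypothetical name replacing update()
    self.lbl1.setProperty("alert", True)
    # Force the stylesheet selector to be re-evaluated against the new value:
    self.lbl1.style().unpolish(self.lbl1)
    self.lbl1.style().polish(self.lbl1)
```

Connected with `btn.clicked.connect(self.on_alert)`, the first label should then pick up the `QLabel[alert="true"]` rule.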
<python><css><pyqt5>
2023-10-05 04:45:56
0
499
elfnor
77,234,200
16,545,894
Mark as read when a notification is viewed by id in Django REST Framework
<p>These are my models:</p> <pre><code>class Notification(models.Model): is_read = models.BooleanField(default=False) </code></pre> <p>and these are my views:</p> <pre><code> class NotificationListCreateView(generics.ListCreateAPIView): queryset = Notification.objects.all() serializer_class = NotificationSerializer class NotificationDetailView(generics.RetrieveUpdateAPIView): queryset = Notification.objects.all() serializer_class = NotificationSerializer </code></pre> <p>In my NotificationDetailView I want to add functionality so that when a notification is viewed in detail, the <code>is_read</code> field is set to True.</p>
<python><django-rest-framework><mvt>
2023-10-05 04:10:37
1
1,118
Nayem Jaman Tusher
77,234,199
2,547,403
MSYS2 and Embedding Python. No module named 'encodings'
<p>I'm trying to use embedded python in my C++ dll library. The library is built and compiled in MSYS2 using GCC compiler, CMake and Ninja. Python 3.10 is also installed on MSYS2 using pacman. Windows 10 env contains <code>C:\msys64\mingw64\bin</code> in Path (python is also located there). Python doesn't installed on Windows, only on MSYS2.</p> <p>This is what <code>CMakeLists.txt</code> contains:</p> <pre><code>cmake_minimum_required(VERSION 3.26) project(python_test) set(CMAKE_CXX_STANDARD 17) find_package(Python REQUIRED Development) add_executable(python_test main.cpp) target_link_libraries(python_test PRIVATE Python::Python) </code></pre> <p>Simple test code:</p> <pre><code>int main() { Py_Initialize(); PyRun_SimpleString(&quot;from time import time,ctime\n&quot; &quot;import numpy as np\n&quot; &quot;print('Today is', ctime(time()))\n&quot;); if (Py_FinalizeEx() &lt; 0) exit(120); return 0; } </code></pre> <p>When I run this code I got this error:</p> <pre><code>Could not find platform independent libraries &lt;prefix&gt; Could not find platform dependent libraries &lt;exec_prefix&gt; Consider setting $PYTHONHOME to &lt;prefix&gt;[:&lt;exec_prefix&gt;] Python path configuration: PYTHONHOME = (not set) PYTHONPATH = (not set) program name = 'python3' isolated = 0 environment = 1 user site = 1 import site = 1 sys._base_executable = 'C:\\Users\\someUsername\\CLionProjects\\python_test\\cmake-build-debug\\python_test.exe' sys.base_prefix = 'D:\\a\\msys64\\mingw64' sys.base_exec_prefix = 'D:\\a\\msys64\\mingw64' sys.platlibdir = 'lib' sys.executable = 'C:\\Users\\someUsername\\CLionProjects\\python_test\\cmake-build-debug\\python_test.exe' sys.prefix = 'D:\\a\\msys64\\mingw64' sys.exec_prefix = 'D:\\a\\msys64\\mingw64' sys.path = [ 'D:\\a\\msys64\\mingw64\\lib\\python310.zip', 'D:\\a\\msys64\\mingw64\\lib\\python3.10', 'D:\\a\\msys64\\mingw64\\lib\\lib-dynload', '', ] Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding 
Python runtime state: core initialized ModuleNotFoundError: No module named 'encodings' Current thread 0x00004564 (most recent call first): &lt;no Python frame&gt; </code></pre> <p>But <code>python</code> command work both in MSYS2/MinGW console and Windows cmd. When I specify PYTHONHOME to <code>C:\msys64\mingw64\bin</code> in Windows 10 environment, I have got the same error again in Clion, and now in Windows command line too. How to resolve this problem?</p>
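Two issues are visible in the dump: `PYTHONHOME` must point at the *prefix* (the directory containing `lib\python3.10`), not at `bin`, and the interpreter was built with the prefix `D:\a\msys64\mingw64` baked in, which does not exist on this machine, so it has to be overridden. A sketch of the environment for a Windows `cmd` session, assuming the default MSYS2 install location:

```bat
set PYTHONHOME=C:\msys64\mingw64
set PYTHONPATH=C:\msys64\mingw64\lib\python3.10
```

The same can be done from the C++ side before `Py_Initialize()` by setting the `home` field of a `PyConfig` and calling `Py_InitializeFromConfig` (per CPython's initialization-configuration API), which avoids depending on the user's environment.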
<python><c++><cmake><mingw><msys2>
2023-10-05 04:10:10
1
369
DiA
77,233,967
1,718,174
Django apps.get_models(): how to load models from other folders
<p>I have the following folder structure:</p> <p><a href="https://i.sstatic.net/3t3fz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3t3fz.png" alt="enter image description here" /></a></p> <p>All models from <code>apps/&lt;some_app_name&gt;/models.py</code> are loaded OK when using <code>apps.get_models()</code>, but unfortunately the folders client/brand/reseller/company in the image each also have a <code>models.py</code> file, and those are not being loaded by <code>apps.get_models()</code> (code below):</p> <pre class="lang-py prettyprint-override"><code>import pytest from django.apps import apps # Set managed=True for unmanaged models. !! Without this, tests will fail because tables won't be created in test db !! @pytest.fixture(autouse=True, scope=&quot;session&quot;) def __django_test_environment(django_test_environment): unmanaged_models = [m for m in apps.get_models() if not m._meta.managed] for m in unmanaged_models: m._meta.managed = True </code></pre> <p>Can someone explain how/where I should configure pytest/Django to look up these models?</p> <p>Using Python 3.9.18 and Django 3.2.21</p> <p>Not sure if it helps: these 4 models inherit from <code>Account</code>, and that model inherits from <code>PolymorphicMPTTModel</code> (which allows for some pretty crazy/confusing parent-child relations).</p>
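`apps.get_models()` only returns models that were registered when their defining module was imported, and at startup Django imports each app's `models` module, not arbitrary subfolders; so the nested client/brand/reseller/company models stay invisible unless something imports them. One conventional fix is re-exporting them from the app's `models.py`, sketched below (the paths and module names are illustrative and must be adjusted to the real layout):

```python
# apps/<some_app_name>/models.py  (illustrative path and names)
# Importing the nested modules at app load time registers their models,
# after which apps.get_models() (and the pytest fixture above) can see them.
from .client.models import *    # noqa: F401,F403
from .brand.models import *     # noqa: F401,F403
from .reseller.models import *  # noqa: F401,F403
from .company.models import *   # noqa: F401,F403
```

Explicit class imports instead of `*` work equally well and are easier to audit.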
<python><django><django-models>
2023-10-05 02:40:06
2
11,945
Vini.g.fer
77,233,855
22,686,386
Why did I get an error ModuleNotFoundError: No module named 'distutils'?
<p>I've installed <code>scikit-fuzzy</code>, but when I <code>import skfuzzy as fuzz</code> I get an error</p> <pre class="lang-none prettyprint-override"><code>ModuleNotFoundError: No module named 'distutils'&quot; </code></pre> <p>I already tried to <code>pip uninstall distutils</code> and got this output</p> <pre class="lang-none prettyprint-override"><code>Note: you may need to restart the kernel to use updated packages. WARNING: Skipping distutils as it is not installed. </code></pre> <p>Then I tried to install it again <code>pip install distutils</code></p> <pre class="lang-none prettyprint-override"><code>Note: you may need to restart the kernel to use updated packages. ERROR: Could not find a version that satisfies the requirement distutils (from versions: none) ERROR: No matching distribution found for distutils </code></pre> <p>Where did I go wrong?</p> <hr /> <p><sub>This question addresses the problem from the perspective of <em>installing</em> a library. For <em>developing</em> a library, see <em><a href="https://stackoverflow.com/questions/69858963">How can one fully replace distutils, which is deprecated in 3.10?</a></em>.</sub></p>
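The tags suggest Python 3.12, where `distutils` was removed from the standard library (PEP 632); it was never a pip-installable package under that name, which is why the install fails. Installing or upgrading `setuptools` restores an importable `distutils` shim (`pip install setuptools`), which is the usual workaround until the library drops the import. For simple queries of paths and platform info, the stdlib `sysconfig` module is the direct replacement; a sketch:

```python
import sysconfig

# sysconfig covers the common distutils.sysconfig queries:
paths = sysconfig.get_paths()
print(paths["purelib"])                # site-packages location
print(sysconfig.get_platform())        # e.g. "linux-x86_64"
print(sysconfig.get_python_version())  # e.g. "3.12"
```

If scikit-fuzzy itself triggers the import, pinning a newer release that no longer depends on `distutils` (if one exists) is cleaner than the shim.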
<python><setuptools><distutils><skfuzzy><python-3.12>
2023-10-05 01:54:27
7
781
Kada
77,233,643
1,245,262
How can I fix duplicate targets with Cmake?
<p>I am currently trying to install <code>gr-matchstiq</code> from <a href="https://github.com/epiqsolutions/gr-matchstiq" rel="nofollow noreferrer">GitHubLink</a> and am having trouble. The code no longer conforms to CMake's standards.</p> <p>Specifically, cmake 2.6 intoduced the policy that logical target names must be unique (see: <a href="https://cmake.org/cmake/help/latest/policy/CMP0002.html" rel="nofollow noreferrer">CMP0002</a>). However, the target 'ALL' is used repeatedly. I believe this is so, because of the error I get:</p> <pre><code>$ cmake -Wno-dev ../ -- Build type not specified: defaulting to release. Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 2, in &lt;module&gt; TypeError: Strings must be encoded before hashing Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 2, in &lt;module&gt; TypeError: Strings must be encoded before hashing CMake Error at cmake/Modules/GrPython.cmake:115 (add_custom_target): add_custom_target cannot create target &quot;ALL&quot; because another target with the same name already exists. The existing target is a custom target created in source directory &quot;/home/me/Projects/gr-matchstiq/swig&quot;. See documentation for policy CMP0002 for more details. Call Stack (most recent call first): cmake/Modules/GrPython.cmake:214 (GR_UNIQUE_TARGET) python/CMakeLists.txt:31 (GR_PYTHON_INSTALL) Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 2, in &lt;module&gt; TypeError: Strings must be encoded before hashing CMake Error at cmake/Modules/GrPython.cmake:115 (add_custom_target): add_custom_target cannot create target &quot;ALL&quot; because another target with the same name already exists. The existing target is a custom target created in source directory &quot;/home/me/Projects/gr-matchstiq/swig&quot;. See documentation for policy CMP0002 for more details. 
Call Stack (most recent call first): cmake/Modules/GrPython.cmake:214 (GR_UNIQUE_TARGET) apps/CMakeLists.txt:22 (GR_PYTHON_INSTALL) -- Configuring incomplete, errors occurred! See also &quot;/home/me/Projects/gr-matchstiq/build/CMakeFiles/CMakeOutput.log&quot;. See also &quot;/home/me/Projects/gr-matchstiq/build/CMakeFiles/CMakeError.log&quot;. </code></pre> <p>The code at cmake/Modules/GrPython.cmake:115 is:</p> <pre><code>add_custom_target(${_target} ALL DEPENDS ${ARGN}) </code></pre> <p>The code at cmake/Modules/GrPython.cmake:214 is:</p> <pre><code>GR_UNIQUE_TARGET(&quot;pygen&quot; ${python_install_gen_targets}) </code></pre> <p>I have almost no experience with cmake, so I am uncertain which fix is safest:</p> <ol> <li><p>In root CMakelists.txt file, add line (Note: This didn't work, but maybe I did something wrong):</p> <p><code>set_property(GLOBAL ALLOW_DUPLICATE_TARGETS TRUE)</code></p> </li> <li><p>Change the 'ALL' in cmake/Modules/GrPython.cmake:115 to something like 'ALL_PY' - i.e.</p> <p><code>add_custom_target(${_target} ALL_PY DEPENDS ${ARGN})</code></p> </li> <li><p>Somehow modify the GR_UNIQUE_TARGET function (lines 107-116 of GrPython.cmake):</p> </li> </ol> <blockquote> <pre><code>######################################################################## # Create an always-built target with a unique name # Usage: GR_UNIQUE_TARGET(&lt;description&gt; &lt;dependencies list&gt;) ######################################################################## function(GR_UNIQUE_TARGET desc) file(RELATIVE_PATH reldir ${CMAKE_BINARY_DIR} ${CMAKE_CURRENT_BINARY_DIR}) execute_process(COMMAND ${PYTHON_EXECUTABLE} -c &quot;import re, hashlib unique = hashlib.md5('${reldir}${ARGN}').hexdigest()[:5] print(re.sub('\\W', '_', '${desc} ${reldir} ' + unique))&quot; OUTPUT_VARIABLE _target OUTPUT_STRIP_TRAILING_WHITESPACE) add_custom_target(${_target} ALL DEPENDS ${ARGN}) endfunction(GR_UNIQUE_TARGET)
should do?</p> <p>PS - One other fix I needed to make was in lines 95-102:</p> <pre><code>######################################################################## # Sets the python installation directory GR_PYTHON_DIR ######################################################################## execute_process(COMMAND ${PYTHON_EXECUTABLE} -c &quot; from distutils import sysconfig print (sysconfig.get_python_lib(plat_specific=True, prefix='')) &quot; OUTPUT_VARIABLE GR_PYTHON_DIR OUTPUT_STRIP_TRAILING_WHITESPACE ) </code></pre> <p>Originally, the Python print statement did not have the &quot;(&quot; &amp; &quot;)&quot; req'd by Python3)</p> <p>PPS - I have no idea how to either handle or find the Type Errors, so will deal with them later.</p>
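The duplicate-target error is a downstream symptom of the `TypeError: Strings must be encoded before hashing` lines in the output: the embedded Python 2 snippet crashes under Python 3 because `hashlib.md5(...)` requires bytes, so `_target` comes back empty and `add_custom_target(${_target} ALL ...)` collapses to a target literally named `ALL` every time, colliding across directories. (For the record, the global property attempted in option 1 appears to be spelled `ALLOW_DUPLICATE_CUSTOM_TARGETS` and only applies to Makefile generators.) The real fix is option 3: porting the snippet to Python 3, i.e. encoding the string before hashing. A sketch of the corrected logic:

```python
import hashlib
import re

def unique_target(desc, reldir, argn):
    # Python 3 port of the snippet in GrPython.cmake: hash input must be bytes.
    unique = hashlib.md5((reldir + argn).encode()).hexdigest()[:5]
    return re.sub(r"\W", "_", f"{desc} {reldir} {unique}")

print(unique_target("pygen", "python", "targets"))
```

In GrPython.cmake itself, the corresponding one-line change is adding `.encode()` inside the `hashlib.md5('${reldir}${ARGN}')` call, alongside the `print(...)` parenthesization already applied.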
<python><c++><cmake>
2023-10-05 00:18:19
1
7,555
user1245262
77,233,621
14,963,549
How to get a file from SharePoint by using Selenium in Databricks (Azure)?
<p>I have some code where I'm trying to develop a web-scraping routine that goes to a SharePoint URL, clicks the &quot;Descargar&quot; button in order to download a file into a temporary path, and then converts it into a DataFrame to transform it with Python. I'm using this method since I only have viewer permissions and no password, ID or file id.</p> <p>The problem comes when I try to use the By.CLASS_NAME method. When I inspect the object (button) I want to click, I don't get anything that fits the methods described. This is the process I'm following to get the class (I've also been trying By.NAME, By.ID and By.XPATH, but they don't work):</p> <p><a href="https://i.sstatic.net/0szfn.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0szfn.jpg" alt="enter image description here" /></a></p> <p>Below is my attempt using the By.CLASS_NAME method:</p> <pre><code>from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.firefox.service import Service from selenium.webdriver.firefox.options import Options from webdriver_manager.firefox import GeckoDriverManager # downloads v 32.0 from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By import time mypath = '/tmp/head_count_data/' service = Service(executable_path=GeckoDriverManager().install()) options = Options() options.set_preference(&quot;browser.download.folderList&quot;, 2) options.set_preference(&quot;browser.download.manager.showWhenStarting&quot;, False) options.set_preference(&quot;browser.download.dir&quot;, mypath) options.headless = False options.add_argument('--headless') options.binary_location = '/tmp/firefox/firefox' driver = webdriver.Firefox(options=options, service=service) url = 
&quot;https://sharepoint.com/sites/SiteName/Shared%20Documents/Forms/AllItems.aspx?id=%2Fsites%2FSite%2FShared%20Documents%2F01%2E%20File%2F2020&amp;viewid=d7ce04e1%2D44e3%2D4ff1%2Dac16%2D83b76b6a9d7f&quot; driver.get(url) dropdown = driver.find_element(By.CLASS_NAME, &quot;ms-ContextualMenu-itemText label-381&quot;) dropdown.click() time.sleep(60) </code></pre> <p>HTML Object:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;button name="Descargar" data-automationid="downloadCommand" class="ms-ContextualMenu-link root-371" aria-label="Descargar" aria-posinset="8" aria-setsize="13" aria-disabled="false" role="menuitem" tabindex="0"&gt;&lt;div class="ms-ContextualMenu-linkContent linkContent-375"&gt;&lt;span class="ms-ContextualMenu-itemText label-381"&gt;Descargar&lt;/span&gt;&lt;/div&gt;&lt;/button&gt; &lt;span class="ms-ContextualMenu-itemText label-381"&gt;Descargar&lt;/span&gt; &lt;div class="ms-ContextualMenu-linkContent linkContent-375"&gt;&lt;span class="ms-ContextualMenu-itemText label-381"&gt;Descargar&lt;/span&gt;&lt;/div&gt;</code></pre> </div> </div> </p> <p>The error is the next one:</p> <p><a href="https://i.sstatic.net/Mq4vv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mq4vv.png" alt="enter image description here" /></a></p> <p>Do you have any idea why I get this issue and how to fix it? What's the proper way to get Class from a object in SharePoint?</p> <p>Note: I've test this scenario with other web sites by and it works (in Databricks as well).</p> <pre><code>' </code></pre>
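`By.CLASS_NAME` accepts a single class name; `"ms-ContextualMenu-itemText label-381"` is two classes, which makes the locator invalid, hence the error. A CSS selector handles compound classes; a sketch of building one from the class attribute (a pure helper, runnable without a browser):

```python
def css_from_classes(class_attr):
    """Turn a space-separated class attribute into a compound CSS selector."""
    return "." + ".".join(class_attr.split())

print(css_from_classes("ms-ContextualMenu-itemText label-381"))
# .ms-ContextualMenu-itemText.label-381
```

With the driver, that would be used as `driver.find_element(By.CSS_SELECTOR, css_from_classes("ms-ContextualMenu-itemText label-381"))`. Two cautions: the numeric suffixes like `label-381` appear to be generated by the Fluent UI styling and can change between sessions, so targeting the button by its stable attribute, e.g. the selector `button[data-automationid='downloadCommand']` from the HTML shown, is likely more robust; and since the menu renders asynchronously, wrapping the lookup in an explicit `WebDriverWait` with `EC.element_to_be_clickable` avoids the race.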
<python><selenium-webdriver><web-scraping><sharepoint><databricks>
2023-10-05 00:10:03
1
419
Xkid
77,233,607
2,805,482
Fill image area based on coordinates in cv2
<p>I am stuck on this easy task: I have a rectangular area in an image that I need to fill with a color. I know the x1, x2, y1, y2 coordinates of the area. How can I fill the area based on these coordinates? Consider the image below; the coordinates of the area I want to fill are x1 = 132.5, x2 = 270, y1 = 68.5, y2 = 141. Thanks</p> <p><a href="https://i.sstatic.net/CjGTG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CjGTG.png" alt="enter image description here" /></a></p>
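An OpenCV image is just a NumPy array, so the region can be filled by slice assignment (note rows are y and columns are x, and indices must be ints), or equivalently with `cv2.rectangle(..., thickness=-1)`, where a negative thickness means "filled". A sketch using NumPy only, so it runs without an image file (the blank image and color are stand-ins):

```python
import numpy as np

img = np.zeros((200, 300, 3), dtype=np.uint8)  # stand-in for cv2.imread(...)
x1, x2, y1, y2 = 132.5, 270, 68.5, 141
color = (0, 0, 255)  # red in OpenCV's BGR convention

# NumPy indexing is [row, col] = [y, x]; round fractional coords to ints.
img[round(y1):round(y2), round(x1):round(x2)] = color

# Equivalent with OpenCV:
# cv2.rectangle(img, (round(x1), round(y1)), (round(x2), round(y2)),
#               color, thickness=-1)
print(img[100, 200].tolist())  # a pixel inside the filled area
```

For a semi-transparent fill instead of a solid one, the filled copy can be blended back with `cv2.addWeighted`.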
<python><opencv>
2023-10-05 00:03:32
1
1,677
Explorer
77,233,600
6,440,589
Can plotly graph objects be used with Azure Data Explorer?
<p>I managed to create a scatter plot by evaluating a <strong>plotly</strong> snippet into a <strong>Kusto</strong> query into <strong>Azure Data Explorer</strong> (ADX):</p> <pre><code>let varName = ```if 1: import plotly.express as px import pandas as pd fig = px.scatter(df, x='easting', y='northing') fig.update_layout(title=dict(text=&quot;Test, plotly 2&quot;)) plotly_obj = fig.to_json() result = pd.DataFrame(data = [plotly_obj], columns = [&quot;plotly&quot;]) ```; data_delivery_report | project easting, northing | evaluate python(typeof(plotly:string), varName) </code></pre> <p>I would prefer to use <strong>graph_objects</strong> instead of <strong>plotly express</strong>.</p> <p>Alas, the KQL query fails to return any data when I replace the above script with:</p> <pre><code>let varName = ```if 1: import plotly.graph_objects as go import pandas as pd fig = go.Scatter(x=df['easting'], y=df['northing']) fig.update_layout(title=dict(text=&quot;Test, plotly 2&quot;)) plotly_obj = fig.to_json() result = pd.DataFrame(data = [plotly_obj], columns = [&quot;plotly&quot;]) ```; </code></pre> <p>Is ADX supporting plotly graph_objects? I checked <a href="https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/plotly-visualizations-in-azure-data-explorer/ba-p/3717768" rel="nofollow noreferrer">this article</a> but could not find any information about that topic.</p>
<python><azure><plotly><scatter-plot><kql>
2023-10-05 00:01:01
1
4,770
Sheldon
77,233,552
489,517
How to use GPU to accelerate imread/imwrite of OpenCV in Python
<p>I use OpenCV Python 4.8.1 on Ubuntu to split an image into some small images. The performance is not good and I want to accelerate the performance. I downloaded OpenCV 4.8.1 source code and built it. (follow this document: <a href="https://gist.github.com/raulqf/f42c718a658cddc16f9df07ecc627be7" rel="nofollow noreferrer">https://gist.github.com/raulqf/f42c718a658cddc16f9df07ecc627be7</a>) I use <code>cv2.cuda.getCudaEnabledDeviceCount()</code> to make sure GPU count is 1. But how to ask imread/imwrite to use GPU?</p> <p>This is my OpenCV Python code.</p> <pre><code># Load and split the image using OpenCV image = cv2.imread(request_data.file_location) height, width, _ = image.shape overlap = 100 # Adjust the overlap size as needed size = 500 results = [] for i in range(0, height, size - overlap): for j in range(0, width, size - overlap): img_part = image[i:i + size, j:j + size] output_file = os.path.join(output_dir, f&quot;result_{i}_{j}.jpg&quot;) cv2.imwrite(output_file, img_part) results.append(output_file) return {&quot;path&quot;: results} </code></pre> <p>And if it's not possible to use GPU to speed up it because it only uses 1 CPU, could I use multiprocessing to do it?</p> <p>But multiprocessing cannot improve imread(), right?</p> <p>Btw, I also tried <a href="https://pypi.org/project/Pillow-SIMD/" rel="nofollow noreferrer">Pillow-SIMD 9.0.0.post1</a> and changed the compression level to 1, it's slower than OpenCV. OpenCV takes 2 seconds to read and 2 seconds to write. Pillow-SIMD takes 1 second to read and 3 seconds to write. Is it reasonable? 
From what I found by googling, Pillow-SIMD seems to be faster than OpenCV in most cases.</p> <p>This is my Pillow code:</p> <pre><code>image = Image.open(request_data.file_location) # Define split parameters size = 500 overlap = 100 results = [] for i in range(0, image.height, size - overlap): for j in range(0, image.width, size - overlap): # Crop and save the image using Pillow img_part = image.crop((j, i, j + size, i + size)) output_file = os.path.join(output_dir, f&quot;result_{i}_{j}.jpg&quot;) img_part.save(output_file) results.append(output_file) </code></pre>
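As far as I know, imread/imwrite are plain CPU codecs (OpenCV's CUDA module accelerates cv2.cuda_GpuMat processing, not file I/O), so one pragmatic option is parallelising the per-tile writes. A dependency-light sketch with a stand-in writer where the real code would call cv2.imwrite:

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def save_tile(args):
    # Stand-in writer; in the real code this would be cv2.imwrite(path, tile).
    # OpenCV generally releases the GIL while encoding, so threads give real
    # overlap; a ProcessPoolExecutor works similarly if that assumption fails.
    path, tile = args
    with open(path, "wb") as f:
        f.write(tile.tobytes())
    return path

image = np.zeros((1200, 1600, 3), dtype=np.uint8)  # stand-in for cv2.imread(...)
size, overlap = 500, 100
output_dir = tempfile.mkdtemp()

jobs = []
for i in range(0, image.shape[0], size - overlap):
    for j in range(0, image.shape[1], size - overlap):
        jobs.append((os.path.join(output_dir, f"result_{i}_{j}.raw"),
                     image[i:i + size, j:j + size]))

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(save_tile, jobs))
```

The single imread stays serial either way; if decode time dominates, splitting the source file itself (or reading regions with a tiled format) is the only real lever.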
<python><opencv><python-imaging-library><gpu>
2023-10-04 23:45:32
0
619
Dennys
77,233,492
857,932
Chaining classmethod constructors
<p><a href="https://stackoverflow.com/a/682545/857932">There is an idiom in Python</a> to use classmethods to provide additional ways to construct an object, where the conversion/transformation logic stays in the classmethod and the <code>__init__()</code> exists solely to initialize the fields. For example:</p> <pre class="lang-py prettyprint-override"><code>class Foo: field1: bytes def __init__(self, field1: bytes): self.field1 = field1 @classmethod def from_hex(cls, hex: str) -&gt; Foo: ''' construct a Foo from a hex string like &quot;12:34:56:78&quot; ''' return cls(field1=bytes.fromhex(hex.replace(':', ' '))) </code></pre> <p>Now, let's say I define a class derived from Foo:</p> <pre class="lang-py prettyprint-override"><code>class Bar(Foo): field2: str def __init__(self, field1: bytes, field2: str): Foo.__init__(self, field1) self.field2 = field2 </code></pre> <p>With this hierarchy in mind, I want to define a constructor <code>Bar.from_hex_with_tag()</code> that would serve as an extension of <code>Foo.from_hex()</code>:</p> <pre class="lang-py prettyprint-override"><code>class Bar(Foo): &lt;...&gt; @classmethod def from_hex_with_tag(cls, hex: str, tag: str) -&gt; Bar: return cls( field1=bytes.fromhex(hex.replace(':', ' ')), # duplicated code with Foo.from_hex() field2=tag ) </code></pre> <hr /> <p>How do I reuse <code>Foo.from_hex()</code> in <code>Bar.from_hex_with_tag()</code>?</p>
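One common refactor (a sketch, not the only option): move the shared parsing into a helper on the base class so both constructors call it, which removes the duplicated `bytes.fromhex` line:

```python
from __future__ import annotations


class Foo:
    field1: bytes

    def __init__(self, field1: bytes):
        self.field1 = field1

    @staticmethod
    def _parse_hex(hex: str) -> bytes:
        # Shared parsing logic, factored out of from_hex so that
        # subclasses can reuse it without duplicating it.
        return bytes.fromhex(hex.replace(':', ' '))

    @classmethod
    def from_hex(cls, hex: str) -> Foo:
        return cls(field1=cls._parse_hex(hex))


class Bar(Foo):
    field2: str

    def __init__(self, field1: bytes, field2: str):
        super().__init__(field1)
        self.field2 = field2

    @classmethod
    def from_hex_with_tag(cls, hex: str, tag: str) -> Bar:
        return cls(field1=cls._parse_hex(hex), field2=tag)
```

Note that calling the inherited `Bar.from_hex(...)` directly would still fail, since it passes only `field1` to a class whose `__init__` also requires `field2`; that is inherent to the original design, not to this refactor.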
<python><python-3.x><oop><class-method>
2023-10-04 23:25:32
1
2,955
intelfx
77,233,467
2,955,541
Convert Multivariate Polynomial to Matrix Representation
<p>Let's say I have a multivariate polynomial:</p> <pre><code>import sympy x_1, x_2, x_3, x_4 = sympy.symbols('x_1 x_2 x_3 x_4') expressions = [ -x_1*x_1-x_2*x_2+x_1*x_2, x_2*x_2-x_1*x_2, x_1*x_1-x_1*x_2, x_1*x_2, x_1*x_1-x_1*x_3, x_1*x_3, x_3*x_3-x_2*x_3, -x_2*x_2-x_4*x_4+x_2*x_4, x_2*x_2-x_2*x_3, x_2*x_3, -x_3*x_3-x_4*x_4+x_3*x_4, x_3*x_4, ] model = sympy.Poly(sympy.Add(*expressions)) model # Poly(x_1**2 - x_2*x_3 + x_2*x_4 + 2*x_3*x_4 - 2*x_4**2, x_1, x_2, x_3, x_4, domain='ZZ') </code></pre> <p>Notice that the variables are <code>[x_1, x_2, x_3, x_4]</code> and so it is possible to represent the coefficients of the polynomial as a 4x4 square matrix where the coefficient of the squared terms (i.e., <code>x_i*x_i</code>) are the diagonal terms along the matrix and the off-diagonal terms depend on the coefficients of <code>x_i*x_j</code> :</p> <pre><code>[[1, 0, 0, 0], [0, 0, -1, 1], [0, 0, 0, 2], [0, 0, 0, -2] ] </code></pre> <p>Starting with the <code>sympy</code> polynomial, is it possible to extract the coefficients and construct the corresponding <code>sympy</code> matrix as shown above for ANY polynomial with variables <code>[x_1, x_2, ..., x_N]</code>?</p> <p>At the end of the day, I'm really hoping to obtain the final matrix as a <code>numpy</code> array so that it can be used for additional computation outside of <code>sympy</code>.</p>
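Assuming every term has degree at most 2 (as in the example), one sketch reads each coefficient straight off the Poly with coeff_monomial: squares go on the diagonal, cross terms x_i*x_j (i < j) into the upper triangle:

```python
import numpy as np
import sympy

x_1, x_2, x_3, x_4 = syms = sympy.symbols('x_1 x_2 x_3 x_4')
poly = sympy.Poly(x_1**2 - x_2*x_3 + x_2*x_4 + 2*x_3*x_4 - 2*x_4**2, *syms)

n = len(syms)
mat = np.zeros((n, n))
for i in range(n):
    # coeff_monomial returns 0 for monomials that do not occur.
    mat[i, i] = poly.coeff_monomial(syms[i]**2)
    for j in range(i + 1, n):
        mat[i, j] = poly.coeff_monomial(syms[i] * syms[j])
```

The result is already a NumPy array, so it drops straight into downstream computation; this generalises to any N by building `syms = sympy.symbols(f'x_1:{N + 1}')`.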
<python><matrix><sympy><polynomials>
2023-10-04 23:12:01
1
6,989
slaw
77,233,404
11,279,970
Using llama-index loading Nodes into Weaviate Vector Store errors with TypeError: Object of type WindowsPath is not JSON serializable
<p>I am following the llama-index tutorial <a href="https://gpt-index.readthedocs.io/en/stable/examples/low_level/ingestion.html#load-data" rel="nofollow noreferrer">Building Data Ingestion From Scratch</a> which uses Pinecone to ingest the formatted nodes with</p> <pre><code>from llama_index.vector_stores import PineconeVectorStore vector_store = PineconeVectorStore(pinecone_index=pinecone_index) vector_store.add(nodes) </code></pre> <p>So I formatted this to use Weaviate instead</p> <pre><code>from llama_index.vector_stores import WeaviateVectorStore # construct vector store vector_store = WeaviateVectorStore(weaviate_client = client, index_name=&quot;SBCZoning&quot;) vector_store.add(nodes) </code></pre> <p>But I receive a TypeError &quot;WindowsPath&quot; not serializable. See below</p> <pre><code>TypeError Traceback (most recent call last) Input In [73], in &lt;cell line: 6&gt;() 1 # nodes_to_parse = SimpleNodeParser.get_nodes_from_documents(nodes) 2 # nodes_to_parse = parser.get_nodes_from_documents(nodes) 3 4 # construct vector store 5 vector_store = WeaviateVectorStore(weaviate_client = client, index_name=&quot;SBCZoning&quot;) ----&gt; 6 vector_store.add(nodes) 8 # setting up the storage for the embeddings 9 storage_context = StorageContext.from_defaults(vector_store = vector_store) File ~\anaconda3\lib\site-packages\llama_index\vector_stores\weaviate.py:181, in WeaviateVectorStore.add(self, nodes) 179 with self._client.batch as batch: 180 for node in nodes: --&gt; 181 add_node( 182 self._client, 183 node, 184 self.index_name, 185 batch=batch, 186 text_key=self.text_key, 187 ) 188 return ids File ~\anaconda3\lib\site-packages\llama_index\vector_stores\weaviate_utils.py:152, in add_node(client, node, class_name, batch, text_key) 149 metadata = {} 150 metadata[text_key] = node.get_content(metadata_mode=MetadataMode.NONE) or &quot;&quot; --&gt; 152 additional_metadata = node_to_metadata_dict( 153 node, remove_text=True, flat_metadata=False 154 ) 155 
metadata.update(additional_metadata) 157 vector = node.get_embedding() File ~\anaconda3\lib\site-packages\llama_index\vector_stores\utils.py:46, in node_to_metadata_dict(node, remove_text, text_field, flat_metadata) 43 node_dict[&quot;embedding&quot;] = None 45 # dump remainder of node_dict to json string ---&gt; 46 metadata[&quot;_node_content&quot;] = json.dumps(node_dict) 48 # store ref doc id at top level to allow metadata filtering 49 # kept for backwards compatibility, will consolidate in future 50 metadata[&quot;document_id&quot;] = node.ref_doc_id or &quot;None&quot; # for Chroma File ~\anaconda3\lib\json\__init__.py:231, in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw) 226 # cached encoder 227 if (not skipkeys and ensure_ascii and 228 check_circular and allow_nan and 229 cls is None and indent is None and separators is None and 230 default is None and not sort_keys and not kw): --&gt; 231 return _default_encoder.encode(obj) 232 if cls is None: 233 cls = JSONEncoder File ~\anaconda3\lib\json\encoder.py:199, in JSONEncoder.encode(self, o) 195 return encode_basestring(o) 196 # This doesn't pass the iterator directly to ''.join() because the 197 # exceptions aren't as detailed. The list call should be roughly 198 # equivalent to the PySequence_Fast that ''.join() would do. 
--&gt; 199 chunks = self.iterencode(o, _one_shot=True) 200 if not isinstance(chunks, (list, tuple)): 201 chunks = list(chunks) File ~\anaconda3\lib\json\encoder.py:257, in JSONEncoder.iterencode(self, o, _one_shot) 252 else: 253 _iterencode = _make_iterencode( 254 markers, self.default, _encoder, self.indent, floatstr, 255 self.key_separator, self.item_separator, self.sort_keys, 256 self.skipkeys, _one_shot) --&gt; 257 return _iterencode(o, 0) File ~\anaconda3\lib\json\encoder.py:179, in JSONEncoder.default(self, o) 160 def default(self, o): 161 &quot;&quot;&quot;Implement this method in a subclass such that it returns 162 a serializable object for ``o``, or calls the base implementation 163 (to raise a ``TypeError``). (...) 177 178 &quot;&quot;&quot; --&gt; 179 raise TypeError(f'Object of type {o.__class__.__name__} ' 180 f'is not JSON serializable') TypeError: Object of type WindowsPath is not JSON serializable </code></pre> <p>I have done exactly as the tutorial outlines ( Loading data from pdf with PyMuPDFReader(), Use SentenceSplitter over documents while maintaining relationship with source doc index, added Metadata using MetadataExtractor via llama_index.node_parser.extractors ...)</p> <p>How can I resolve this?</p>
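The traceback shows json.dumps choking on a pathlib.Path, which most likely rides in on the node metadata written by the loader. A hedged workaround is to stringify any Path values before calling vector_store.add(nodes):

```python
import json
from pathlib import Path

def sanitize_metadata(metadata: dict) -> dict:
    # json.dumps cannot serialise pathlib.Path objects, so turn any
    # Path values into plain strings first.
    return {k: str(v) if isinstance(v, Path) else v for k, v in metadata.items()}

# Hypothetical usage before ingesting:
# for node in nodes:
#     node.metadata = sanitize_metadata(node.metadata)

cleaned = sanitize_metadata({"file_path": Path("zoning") / "sbc.pdf", "page": 3})
serialised = json.dumps(cleaned)  # no longer raises TypeError
```

Alternatively, passing the file name to the loader as a plain string (rather than a Path) at read time should keep Path objects out of the metadata in the first place.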
<python><weaviate><llama-index>
2023-10-04 22:51:56
1
508
Simon Palmer
77,233,242
13,062,745
Python defaultdict(list) behavior
<p>I was playing around with ChatGPT and it kind of surprised me that you can declare</p> <pre><code>node = defaultdict(list) node['xyz'] = 'xyz' </code></pre> <p>adding a new string key-value pair. I thought node would create a new list when the 'xyz' key isn't present in the node, so assigning a string where a list is expected would probably raise a runtime error or something. But this actually works, according to GPT. Any reason why this works? Are there any docs I can read up on about this?</p>
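A short demonstration of why this is legal: the list factory is only consulted on a *lookup* of a missing key (via dict.__missing__, which __getitem__ calls), never on assignment, so storing a value behaves exactly like a regular dict:

```python
from collections import defaultdict

node = defaultdict(list)

# Plain assignment never consults the factory -- it just stores the
# value, exactly as a regular dict would.
node['xyz'] = 'xyz'

# The list factory only runs on lookup of a *missing* key, via
# dict.__missing__; it creates, stores, and returns a fresh list.
fresh = node['abc']
```

The behaviour is documented under collections.defaultdict and its __missing__ method in the standard library docs.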
<python><python-3.x><list><defaultdict>
2023-10-04 22:04:10
2
742
zxcisnoias
77,233,241
14,328,794
Inserting huge data from one table to another and simultaneously deleting after movement using the pyodbc package in Python
<p>I am trying to use SQL Server and Python to perform data migration and deletion. However, I find it hard to verify that the rows being inserted into the new table in each loop iteration are exactly the same rows being deleted from the old one, in batches of 1000.</p> <p>In the code below, a loop inserts data into the new table, and the statements after it delete the records in that batch.</p> <pre><code>while True: query = f&quot;INSERT INTO [{table_name1}] ({column_names})&quot; \ f&quot; SELECT {column_names}&quot; \ f&quot; FROM {database}.{table_name2}&quot; \ f&quot; WHERE {table_name1.date} = '{table_name2.date}'&quot; try: cursor.execute(query) cursor.commit() except: break delete_query = f&quot;DELETE FROM {database}.{table_name2} WHERE {table_name1.date} = '{table_name2.date}'&quot; cursor.execute(delete_query) cursor.commit() </code></pre> <p>My question: the movement and deletion work, but I want a condition guaranteeing that the rows that were moved are the only ones deleted in each batch of 1000.</p> <p>The above code is inserting x rows and deleting y rows for the same date.</p>
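SQL Server can make the move and the delete one atomic statement with DELETE ... OUTPUT DELETED ... INTO, so exactly the rows removed are the rows inserted and no reconciliation is needed. Table and column names below are hypothetical; the sketch only builds the query string, which the real code would hand to pyodbc:

```python
# Hypothetical names; adjust to the real schema.
batch_size = 1000
source, target = "dbo.SourceTable", "dbo.TargetTable"
columns = "col1, col2, load_date"

# DELETE ... OUTPUT DELETED ... INTO moves exactly the rows it removes,
# in one atomic statement, so inserts and deletes can never drift apart.
move_batch = (
    f"DELETE TOP ({batch_size}) FROM {source} "
    f"OUTPUT DELETED.col1, DELETED.col2, DELETED.load_date "
    f"INTO {target} ({columns}) "
    f"WHERE load_date = ?"
)

# With pyodbc, loop until no rows remain (sketch):
# while True:
#     cursor.execute(move_batch, the_date)
#     if cursor.rowcount == 0:
#         break
#     cursor.commit()
```

`cursor.rowcount` reflects the number of deleted (and therefore moved) rows per batch, which gives a natural stopping condition without a bare `except`.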
<python><sql><sql-server><pyodbc>
2023-10-04 22:02:44
1
380
Anil Dhage
77,233,108
6,084,335
Poetry No file/folder found when attempting to reference another local poetry project
<p>I have the following directory structure:</p> <pre><code>├── lib │   └── stuff │   ├── __init__.py │   ├── math.py │   ├── poetry.lock │   └── pyproject.toml └── services └── a ├── __init__.py ├── main.py ├── poetry.lock └── pyproject.toml </code></pre> <p>Both <code>services/a</code> and <code>lib/stuff</code> are poetry projects, they have a <code>pyproject.toml</code> file and <code>poetry.lock</code> file created by the <code>poetry init</code> process. Their contents are as follows</p> <pre class="lang-ini prettyprint-override"><code>[tool.poetry] name = &quot;a&quot; version = &quot;0.1.0&quot; description = &quot;&quot; authors = [&quot;Jack Evans &lt;jack@evans.gb.net&gt;&quot;] readme = &quot;README.md&quot; [tool.poetry.dependencies] python = &quot;^3.11&quot; [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre> <pre class="lang-ini prettyprint-override"><code>[tool.poetry] name = &quot;stuff&quot; version = &quot;0.1.0&quot; description = &quot;&quot; authors = [&quot;Jack Evans &lt;jack@evans.gb.net&gt;&quot;] readme = &quot;README.md&quot; [tool.poetry.dependencies] python = &quot;^3.11&quot; requests = &quot;^2.31.0&quot; [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre> <p>I want to be able to reference code in <code>lib/stuff</code> from within my <code>service/a</code> project.</p> <p>However, when I run <code>poetry add ../../lib/stuff</code> inside <code>services/a</code> I get the following build error:</p> <pre><code> • Installing stuff (0.1.0 /Users/jack/code/poetry-test/lib/stuff): Failed ChefBuildError Backend subprocess exited when trying to invoke build_wheel Traceback (most recent call last): File &quot;/Users/jack/Library/Application Support/pypoetry/venv/lib/python3.11/site-packages/pyproject_hooks/_in_process/_in_process.py&quot;, line 353, in &lt;module&gt; main() File 
&quot;/Users/jack/Library/Application Support/pypoetry/venv/lib/python3.11/site-packages/pyproject_hooks/_in_process/_in_process.py&quot;, line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/jack/Library/Application Support/pypoetry/venv/lib/python3.11/site-packages/pyproject_hooks/_in_process/_in_process.py&quot;, line 251, in build_wheel return _build_backend().build_wheel(wheel_directory, config_settings, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/var/folders/yn/4q7r_wwd16v279lryh84bgtc0000gq/T/tmpdoxiack6/.venv/lib/python3.11/site-packages/poetry/core/masonry/api.py&quot;, line 57, in build_wheel return WheelBuilder.make_in( ^^^^^^^^^^^^^^^^^^^^^ File &quot;/var/folders/yn/4q7r_wwd16v279lryh84bgtc0000gq/T/tmpdoxiack6/.venv/lib/python3.11/site-packages/poetry/core/masonry/builders/wheel.py&quot;, line 80, in make_in wb = WheelBuilder( ^^^^^^^^^^^^^ File &quot;/var/folders/yn/4q7r_wwd16v279lryh84bgtc0000gq/T/tmpdoxiack6/.venv/lib/python3.11/site-packages/poetry/core/masonry/builders/wheel.py&quot;, line 61, in __init__ super().__init__(poetry, executable=executable) File &quot;/var/folders/yn/4q7r_wwd16v279lryh84bgtc0000gq/T/tmpdoxiack6/.venv/lib/python3.11/site-packages/poetry/core/masonry/builders/builder.py&quot;, line 85, in __init__ self._module = Module( ^^^^^^^ File &quot;/var/folders/yn/4q7r_wwd16v279lryh84bgtc0000gq/T/tmpdoxiack6/.venv/lib/python3.11/site-packages/poetry/core/masonry/utils/module.py&quot;, line 69, in __init__ raise ModuleOrPackageNotFound( poetry.core.masonry.utils.module.ModuleOrPackageNotFound: No file/folder found for package stuff at ~/Library/Application Support/pypoetry/venv/lib/python3.11/site-packages/poetry/installation/chef.py:152 in _prepare 148│ 149│ error = ChefBuildError(&quot;\n\n&quot;.join(message_parts)) 150│ 151│ if error is not None: → 152│ raise error from None 153│ 154│ return path 155│ 156│ def _prepare_sdist(self, 
archive: Path, destination: Path | None = None) -&gt; Path: Note: This error originates from the build backend, and is likely not a problem with poetry but with stuff (0.1.0 /Users/jack/code/poetry-test/lib/stuff) not supporting PEP 517 builds. You can verify this by running 'pip wheel --use-pep517 &quot;stuff @ file:///Users/jack/code/poetry-test/lib/stuff&quot;'. </code></pre> <p>Is there any way to reference <code>lib/stuff</code> from within <code>services/a</code> if both poetry projects share the same common root filepath.</p>
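The key line in the traceback is ModuleOrPackageNotFound: No file/folder found for package stuff — poetry-core looks for a stuff/ package directory (or stuff.py) inside lib/stuff, but here the module files sit directly at the project root. Moving them into lib/stuff/stuff/ (so the path is lib/stuff/stuff/__init__.py) should let the build find the package; a packages stanza can also spell the location out explicitly. A hedged sketch of the pyproject after such a move:

```toml
[tool.poetry]
name = "stuff"
version = "0.1.0"
description = ""
authors = ["Jack Evans <jack@evans.gb.net>"]
readme = "README.md"
# Assumes the module files were moved into a stuff/ subfolder; the
# stanza is then optional, but makes the package location explicit.
packages = [{ include = "stuff" }]
```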
<python><python-poetry>
2023-10-04 21:32:31
0
1,717
Jack Evans
77,233,041
8,387,921
Tkinter is freezing using for loop to iterate and download files from driver
<p>I use Python and Selenium to read a text file which contains more than 2000 download links for zip files. Then, on a button click, I want to download the files. But when I click the button, Tkinter freezes and shows a &quot;program not responding&quot; error. I also used time.sleep(5) inside the for loop, hoping the loop would run only once every five seconds and prevent the freezing, but that did not help.</p> <p>My function is</p> <pre class="lang-py prettyprint-override"><code> def download_file(): if running: with open('mylinks.txt') as file: for line in file: driver.get(line.strip()) time.sleep(2) button2 = tk.Button(frame, text = &quot;Start Download&quot;, command=download_file) </code></pre> <p>How do I prevent this from happening? Thanks in advance.</p>
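The freeze happens because the callback runs on the same thread as Tkinter's mainloop, so the GUI cannot repaint until the whole loop finishes (time.sleep only makes this worse). One hedged sketch: keep the loop in a plain worker function and start it on a background thread from the button; note a single Selenium driver should be driven from only one thread at a time:

```python
import threading

def download_all(links, fetch):
    # Pure worker: in the real app, fetch would be driver.get and this
    # function would run on a background thread, not the Tk event loop.
    done = []
    for line in links:
        url = line.strip()
        fetch(url)
        done.append(url)
    return done

# Stand-ins for the file contents and the Selenium driver.
links = ["http://example.com/a.zip\n", "http://example.com/b.zip\n"]
results = []

worker = threading.Thread(
    target=lambda: results.extend(download_all(links, fetch=lambda url: None)),
    daemon=True,
)
# In the Tkinter app the button would be wired roughly as:
# tk.Button(frame, text="Start Download", command=worker.start)
worker.start()
worker.join()  # the GUI would not join; this is just for the demo
```

The mainloop stays responsive because the blocking work never touches the event thread; only widget updates need to be marshalled back, e.g. via `root.after`.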
<python><selenium-webdriver><tkinter><selenium-chromedriver>
2023-10-04 21:19:05
1
399
Sagar Rawal
77,232,820
2,610,522
Solving complicated expression in SymPy takes a long time
<p>I am trying to solve symbolically the following equations using sympy. I set the <code>simplify=False</code>, and <code>rational=False</code> in the <code>solve</code> as well and it helped a lot with eq7. But it did not help with e14 and it takes forever. How can I improve this? The same code in Matlab takes a few seconds!</p> <pre><code>from sympy import symbols, Eq, solve, Function def dynamic_solver(): # Define symbolic variables C_steps, O, Kc, Ko, kc, ko, S, Q, CB6F, RUB, kq, Kp1, Kp2, Kd, Kf, Ku, nl, nc, JP700_j, JP700_c, a2, a1, Abs, phi2P_a, phi2p_a, phi2u_a, q2_a, JP680_j, Vc_j, An_j, Ag_j, An_c, Ag_c, E, gtc, gm, Ca, Vc_c, Rd = symbols( 'C_steps O Kc Ko kc ko S Q CB6F RUB kq Kp1 Kp2 Kd Kf Ku nl nc JP700_j JP700_c a2 a1 Abs phi2P_a phi2p_a phi2u_a q2_a JP680_j Vc_j An_j Ag_j An_c Ag_c E gtc gm Ca Vc_c Rd' ) # Calculate S and gammas S = (kc / Kc) * (Ko / ko) gammas = O / (2 * S) # Define equations eq1 = JP700_j - (Q * CB6F * kq) / (Q + (CB6F * kq) / ((Abs - a2) * (Kp1 / (Kp1 + Kd + Kf)))) eq2 = eq1.subs(JP700_j, JP680_j * (1 - (nl / nc) + (3 + 7 * gammas / C_steps) / ((4 + 8 * gammas / C_steps) * nc))) eq3 = eq2.subs(JP680_j, Vc_j * (4 * (1 + 2 * gammas / C_steps))) eq4 = eq3.subs(Vc_j, Ag_j / (1 - gammas / C_steps)) eq5 = eq4.subs(Ag_j, An_j + Rd) eq6 = eq5.subs(An_j, -(C_steps * E * gm + Ca * E * gm + 2 * C_steps * gm * gtc - 2 * Ca * gm * gtc) / (E + 2 * gm + 2 * gtc)) eq7 = solve(eq6, a2, dict=True,simplify=False,rational=False) # Pass out the first root eqA = Function('eqA')(Abs, C_steps, CB6F, Ca, E, Kc, Kd, Kf, Ko, Kp1, O, Q, Rd, gm, gtc, kc, ko, kq, nc, nl) eqA = eq7[0][a2] # Define equations for the second case eq8 = JP700_j - Q * a2 * phi2P_a * (1 - (nl / nc) + (3 + 7 * gammas / C_steps) / ((4 + 8 * gammas / C_steps) * nc)) eq9 = eq8.subs(JP700_j, Q * CB6F * kq / (Q + CB6F * kq / ((Abs - a2) * (Kp1 / (Kp1 + Kd + Kf))))) eq10 = eq9.subs(phi2P_a, phi2p_a / (1 - phi2u_a)) eq11 = eq10.subs(phi2p_a, (q2_a) * Kp2 / (Kp2 + Kd + Kf + Ku)) eq12 = 
eq11.subs(phi2u_a, (q2_a) * Ku / (Kp2 + Kd + Kf + Ku) + (1 - q2_a) * Ku / (Kd + Kf + Ku)) eq13 = eq12.subs(q2_a, 1 - (Q / (Q + CB6F * kq / ((Abs - a2) * (Kp1 / (Kp1 + Kd + Kf)))))) eq14 = solve(eq13, a2, dict=True,simplify=False,rational=False) # Pass out the second root eqB = Function('eqB')(Abs, C_steps, CB6F, Kc, Kd, Kf, Ko, Kp1, Kp2, Ku, O, Q, kc, ko, kq, nc, nl) eqB = eq14[1][a2] return eqA, eqB # Call the dynamic_solver function to obtain the equations eqA and eqB eqA, eqB = dynamic_solver() </code></pre>
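Since each equation is quadratic in a2, one way to dodge solve()'s slow simplification entirely is to clear denominators, read off the three coefficients with Poly, and apply the quadratic formula by hand. A toy-sized sketch of the pattern (the stand-in expression is hypothetical; eq13 would take its place):

```python
import sympy

a2, p, q = sympy.symbols('a2 p q')

# Toy stand-in for eq13: anything quadratic in a2 once fractions clear.
expr = p*a2**2 - (p + q)*a2 + q

# together()/as_numer_denom() clears the denominators; Poly extracts
# the quadratic's coefficients without any root-finding machinery.
num, _ = sympy.together(expr).as_numer_denom()
A, B, C = sympy.Poly(num, a2).all_coeffs()

disc = sympy.sqrt(B**2 - 4*A*C)
roots = [(-B - disc) / (2*A), (-B + disc) / (2*A)]
```

Each branch comes out as an unsimplified closed form, and sympy.lambdify can then turn it into a fast numeric function for the actual parameter values.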
<python><sympy>
2023-10-04 20:36:58
2
810
Ress
77,232,741
13,142,245
How to block progression in asyncio
<p>So I understand that using <code>async ... await</code> Python can prevent blocking and accomplish xyz to follow. But what about the opposite, where I <em>want</em> Python to block xyz until a process has completed?</p> <p>For example, suppose I have three functions, A, B &amp; C, where A should not block B and vice versa, but C should be blocked by both A and B:</p> <pre><code>async def A(): # some code async def B(): # some code def C(): # some code async def handler(): # A and B not to block each other await A() await B() # Must complete before C C() </code></pre> <p>As I understand it, asyncio will implicitly infer that C should be blocked by A and B if C is defined in terms of them (or their outputs).</p> <p>But in the event that C is not defined in terms of A &amp; B, how can it be ensured that C will only commence when A and B are complete?</p>
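For what it's worth, sequential `await A(); await B()` already blocks C — but it also makes A block B. asyncio.gather gives both properties at once: A and B run concurrently, and the await only returns (so C only starts) once both have finished, with no data dependency needed:

```python
import asyncio

order = []

async def A():
    await asyncio.sleep(0.02)
    order.append('A')

async def B():
    await asyncio.sleep(0.01)
    order.append('B')

def C():
    order.append('C')

async def handler():
    # A and B run concurrently; gather resolves only when both are done,
    # so C is blocked on them without any shared outputs.
    await asyncio.gather(A(), B())
    C()

asyncio.run(handler())
```

No inference is involved: the ordering comes purely from control flow — statements after an `await` never run before the awaited awaitable completes.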
<python><async-await><python-asyncio>
2023-10-04 20:22:34
2
1,238
jbuddy_13
77,232,609
19,130,803
use nested class without instance
<p>I am working on a Python web application using <code>dash</code>. In one module, everything used to be individual variables inside my <code>Utils</code> class, for example:</p> <pre><code>class Utils: port = some_port address = f&quot;some_domain:some_port&quot; gui_docs = f&quot;{some_domain}/docs&quot; gui_pgadmin = f&quot;{some_domain}/pgadmin4&quot; gui_flower = f&quot;{some_domain}/flower&quot; and many more </code></pre> <p>This currently works, and I access the values in other modules, for example:</p> <pre><code>Utils.gui_docs etc </code></pre> <p>As the application evolves, more variables keep getting added and I feel I am losing track of them. So I am trying to refactor. This is my approach, and I am getting an error:</p> <pre><code>unable to access nested class inside other nested class. </code></pre> <pre><code>class Utils: class WebAddress: &quot;&quot;&quot;Contain web address attributes.&quot;&quot;&quot; _port: int = 80 DOMAIN: str = f&quot;http://some_domain:{_port}&quot; class GUI: &quot;&quot;&quot;Contain GUI attributes.&quot;&quot;&quot; FLOWER_CELERY = html.A(&quot;Celery&quot;, href=f&quot;{WebAddress.DOMAIN}/flower/&quot;) PGADMIN_POSTGRES = html.A(&quot;Postgres&quot;, href=f&quot;{WebAddress.DOMAIN}/pgadmin4/&quot;) SRC_DOCS = html.A(&quot;Documentation&quot;, href=f&quot;{WebAddress.DOMAIN}/docs/&quot;) ---&gt; # Here, I am unable to access the `WebAddress.DOMAIN` inside `GUI` class </code></pre> <p>Now, I will access them in other modules as</p> <pre><code>Utils.GUI.FLOWER_CELERY </code></pre> <ol> <li>What am I missing?</li> <li>Is there a better way to approach such a scenario?</li> </ol>
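The underlying rule is that a class body is not an enclosing scope for the class bodies nested inside it, so GUI simply cannot see the name WebAddress. One hedged restructuring: define the groups at module level (where normal name lookup works) and attach them to Utils afterwards; the html.A(...) wrappers are replaced with bare strings to keep the sketch dependency-free:

```python
class WebAddress:
    """Contain web address attributes."""
    _port: int = 80
    DOMAIN: str = f"http://some_domain:{_port}"

class GUI:
    """Contain GUI attributes."""
    # WebAddress is a module-level name here, so this lookup succeeds.
    FLOWER_CELERY = f"{WebAddress.DOMAIN}/flower/"
    PGADMIN_POSTGRES = f"{WebAddress.DOMAIN}/pgadmin4/"
    SRC_DOCS = f"{WebAddress.DOMAIN}/docs/"

class Utils:
    # Nesting by reference keeps the Utils.GUI.FLOWER_CELERY access style.
    WebAddress = WebAddress
    GUI = GUI
```

Other common shapes for this kind of grouped configuration are dataclasses, enums, or a plain settings module; the nested-by-reference version above changes the least code.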
<python>
2023-10-04 19:59:58
0
962
winter
77,232,606
2,600,531
Memory management in multiprocess python application on constrained device
<p>I have a fairly constrained Ubuntu 20.04 host (256MB RAM) that is running into OOM errors after a few hours of running my python application. I ultimately solved this by manually calling <code>gc.collect()</code> in one subprocess but I want to know why and be sure this is the best solution.</p> <p>I identified the process in my multiprocessing python application that was the dominant consumer of RAM and appeared to be growing consistently over time - suggesting a memory-leak.</p> <p>I reduced the suspect process to a minimal example:</p> <pre><code>import pyogg from pyogg import OpusBufferedEncoder import os import time import numpy as np import gc def float32_to_int16(arr): arr = arr / np.max(np.abs(arr)) # scale to [-1.0,1.0] return (arr * (2**15 - 1)).astype(np.int16) # scale to int16 range #initializes a numpy array of 720000 float_32 values data32 = np.random.uniform(-1,1,720000).astype(np.float32) #10s of 48kHz sample rate mono audio #converts the numpy array to int16 data16 = float32_to_int16(data32) while True: print(&quot;loop&quot;) opus_buffered_encoder = OpusBufferedEncoder() opus_buffered_encoder.set_application(&quot;audio&quot;) opus_buffered_encoder.set_sampling_frequency(48000) opus_buffered_encoder.set_channels(1) opus_buffered_encoder.set_frame_size(20) ogg_opus_writer = pyogg.OggOpusWriter(&quot;test.opus&quot;, opus_buffered_encoder) ogg_opus_writer.write(memoryview(bytearray(data16))) ogg_opus_writer.close() time.sleep(5) </code></pre> <p>I confirmed with this minimal implementation that the OOM errors continued in the multiprocess context. Then I broke it out and ran it as a single-process application like shown above - it reproduced the ballooning RAM but before all available system RAM was exhausted it got garbage collected and no OOM error occurred. 
I added <code>gc.collect()</code> to the end of the loop and consumption stabilized, suggesting no actual memory leak present.</p> <p>I then added the collection call to this same location in the original multiprocess code and the OOM errors have vanished.</p> <p>Questions:</p> <ol> <li>isn't the OS supposed to trigger GC in one python process when memory is requested by another python process and that isn't otherwise available?</li> <li>is there likely something wrong with my code or pyogg (which wraps C libraries) that's resulting in GC not being automatic? How can I approach determining this (full application too much to post here)?</li> <li>Instead of forcing garbage collection should I consider spawning a new process in the while loop to write each opus file?</li> </ol>
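On question 3: a short-lived process per file is the one approach that guarantees the memory goes back to the OS, since CPython's allocator (and glibc underneath it) may hold on to freed pages even after a successful gc.collect(). A sketch with a stand-in for the encode step:

```python
import multiprocessing as mp
import os
import tempfile

def encode_one(path, payload):
    # Stand-in for the OpusBufferedEncoder/OggOpusWriter work; every
    # byte the child allocates is returned to the OS when it exits.
    with open(path, "wb") as f:
        f.write(bytes(payload))

out_path = os.path.join(tempfile.mkdtemp(), "test.opus")

# "fork" keeps this sketch self-contained; "spawn" also works when
# encode_one lives in an importable module, and avoids inheriting the
# parent's heap the way fork does.
ctx = mp.get_context("fork")
proc = ctx.Process(target=encode_one, args=(out_path, b"x" * 1024))
proc.start()
proc.join()
```

This trades process start-up cost for a hard upper bound on resident memory, which is usually the right trade on a 256 MB host; the explicit gc.collect() remains the cheaper fix if the leak-free behaviour you observed holds up.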
<python><memory-management><multiprocessing><garbage-collection><out-of-memory>
2023-10-04 19:59:15
0
944
davegravy
77,232,604
8,944,208
How to capture an ID-size photo with a webcam using Python
<p>I have these few lines of code which capture images using a webcam, but the pictures' width and height are large. Is there a way I can make the frame smaller so the captured images are the size of a passport picture?</p> <p>Is there any available resource I can use to achieve that? This is how I want my frame to look, so I can use the webcam to capture the images from the head to the shoulders.</p> <p><a href="https://i.sstatic.net/mYVnG.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mYVnG.jpg" alt="Image in web" /></a> <a href="https://i.sstatic.net/0JHPp.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0JHPp.jpg" alt="Final result should look like this" /></a></p> <p>Example image is attached</p> <pre><code>import cv2 key = cv2.waitKey(1) webcam = cv2.VideoCapture(0) while True: try: check, frame = webcam.read() print(check) # prints true as long as the webcam is running print(frame) # prints matrix values of each frame cv2.imshow(&quot;Capturing&quot;, frame) key = cv2.waitKey(1) if key == ord('s'): cv2.imwrite(filename='saved_img.jpg', img=frame) webcam.release() img_new = cv2.imread('saved_img.jpg', cv2.IMREAD_UNCHANGED) img_new = cv2.imshow(&quot;Captured Image&quot;, img_new) # SHOWS THE IMAGE AFTER CAPTURING cv2.waitKey(1650) cv2.destroyAllWindows() print(&quot;captured images saved&quot;) break elif key == ord('q'): print(&quot;Turning off camera.&quot;) webcam.release() print(&quot;Camera off.&quot;) print(&quot;Program ended.&quot;) cv2.destroyAllWindows() break except(KeyboardInterrupt): print(&quot;Turning off camera.&quot;) webcam.release() print(&quot;Camera off.&quot;) print(&quot;Program ended.&quot;) cv2.destroyAllWindows() break </code></pre>
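Webcams won't deliver passport-shaped frames directly; the usual approach is to draw a guide rectangle on the preview (cv2.rectangle) and, on capture, crop that region from the frame and resize it. A dependency-light sketch of the crop step, with a made-up guide box and the common 35x45 mm ratio as an assumption:

```python
import numpy as np

def crop_passport(frame, box):
    # box = (x, y, w, h): the on-screen guide rectangle covering the
    # head-and-shoulders area. In the real code the result would then be
    # resized, e.g. cv2.resize(roi, (413, 531)) for ~35x45 mm at 300 dpi
    # (an assumption -- check the exact spec you need).
    x, y, w, h = box
    return frame[y:y + h, x:x + w]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in webcam frame
roi = crop_passport(frame, box=(200, 80, 240, 320))
```

In the capture loop, drawing the same box with `cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)` before `cv2.imshow` gives the user something to line up with.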
<python><webcam><image-size>
2023-10-04 19:58:37
2
597
O JOE
77,232,528
2,743,931
Azure function can't create a local folder (python)
<p>I'm using Azure functions and I want to make a local folder, where I can put a file and copy it to a blob storage.</p> <p>I'm following this tutorial: <a href="https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python?tabs=managed-identity%2Croles-azure-portal%2Csign-in-azure-cli#upload-blobs-to-a-container" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python?tabs=managed-identity%2Croles-azure-portal%2Csign-in-azure-cli#upload-blobs-to-a-container</a> where they just call <code>os.mkdir(local_path)</code> to create the (temp) folder. Based on the tutorial I do:</p> <pre><code>logging.info('before folder creation') # Create a local directory to hold blob data local_path = &quot;./data&quot; logging.info('inside folder creation') os.mkdir(local_path) logging.info('after folder creation') </code></pre> <p>and I'm getting:</p> <pre><code>2023-10-04T19:31:30Z [Information] before folder creation 2023-10-04T19:31:30Z [Information] inside folder creation ... and fail wrapping the code in a try block and catching the exception reveals this error: [Errno 38] Function not implemented: './data' </code></pre> <p>The error looks strange but it really just looks like it can't create the folder. What should I do to allow the Azure function to create the folder?</p>
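The app directory a deployed function runs from is read-only on many Azure Functions plans, which is the likely reason os.mkdir('./data') fails; writing under the system temp directory is the usual workaround. A sketch:

```python
import os
import tempfile

# The deployed app directory is read-only on many Azure Functions plans,
# so create scratch space under the writable temp area instead.
local_path = os.path.join(tempfile.gettempdir(), "data")
os.makedirs(local_path, exist_ok=True)  # exist_ok: warm instances reuse it

file_path = os.path.join(local_path, "upload.txt")
with open(file_path, "w") as f:
    f.write("hello blob")
```

The resulting `file_path` can then be handed to the blob upload exactly as in the quickstart; for larger payloads, uploading from an in-memory stream avoids local disk entirely.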
<python><azure-functions>
2023-10-04 19:43:43
1
312
user2743931
77,232,399
1,451,632
can I control setuptools imports more finely?
<p>I'm having some difficulty controlling precisely which modules of my package are being imported by <code>setuptools</code>. It seems to be importing &quot;greedily&quot; and I don't know why.</p> <p>Let's say I have the following (flat) code structure:</p> <pre><code>tstpkg/ pyproject.toml tstpkg/ __init__.py modA fnA1 fnA2 modB fnB1 fnB2 modC fnC1 fnC2 </code></pre> <p>and let's imagine I want to have <em>only</em> <code>tstpkg.fnA1()</code> and <code>tstpkg.fnC2()</code> available to me after I import <code>tstpkg</code>. My expectation is that if <code>pyproject.toml</code> includes</p> <pre><code>[build-system] requires = [&quot;setuptools&quot;] build-backend = &quot;setuptools.build_meta&quot; [tool.setuptools.packages.find] namespaces = false </code></pre> <p>and if <code>__init__.py</code> includes</p> <pre><code>from tstpkg.modA import fnA1 from tstpkg.modC import fnC2 </code></pre> <p>then I should be all set. But this makes the entire <code>tstpkg.modA</code> and <code>tstpkg.modC</code> available when I install and import <code>tstpkg</code>, which is not what I would like.</p> <p>I have found that I can add <code>del(modA, modC)</code> to <code>__init__.py</code>, but I find this a bit inelegant. And it's confusing, because if I (for instance) boot up <code>ipython</code> and simply do <code>from scipy.special import gammaln as sgl</code> I don't have all of <code>scipy.special</code> available to me.</p> <p>Two questions:</p> <ol> <li>Why is this happening?</li> <li>And is there any natural way to get the behavior that I am after?</li> </ol>
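This behaviour isn't setuptools at all — it's Python's import system: `from tstpkg.modA import fnA1` must execute modA in full, and the machinery then binds modA as an attribute of the package (which is also why the `del` trick works, and why, after a from-import, scipy.special *is* in fact reachable as an attribute of scipy). A self-contained demonstration with a throwaway package:

```python
import os
import sys
import tempfile
import textwrap

# Build a throwaway package on disk mirroring the layout in the question.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "tstpkg"))
with open(os.path.join(root, "tstpkg", "modA.py"), "w") as f:
    f.write("def fnA1():\n    return 'A1'\n")
with open(os.path.join(root, "tstpkg", "__init__.py"), "w") as f:
    f.write(textwrap.dedent("""\
        from tstpkg.modA import fnA1
        # The import machinery has already bound modA onto this package's
        # namespace as a side effect; del removes that attribute binding.
        del modA
    """))

sys.path.insert(0, root)
import tstpkg
```

Note the module object itself still lives in sys.modules — only the attribute is gone — so `del` (or an explicit `__all__`, which controls `from tstpkg import *` but not attribute access) is about namespace tidiness, not about preventing the import.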
<python><python-import><setuptools>
2023-10-04 19:20:05
0
311
user1451632
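On the question above: this is not setuptools being greedy — it is Python's import system. `from tstpkg.modA import fnA1` must import `tstpkg.modA` first, and importing a submodule always binds it as an attribute of its parent package. The scipy comparison is misleading only because `from scipy.special import gammaln as sgl` doesn't bind the name `scipy` in *your* namespace; `scipy.special` is still fully loaded in `sys.modules`. A sketch using the stdlib `json` package as a stand-in for `tstpkg`:

```python
import sys

# Analogous to `from tstpkg.modA import fnA1` in the package's __init__.py:
from json.decoder import JSONDecoder

import json

# The submodule was imported in full and bound on the parent package --
# core import machinery, not setuptools behavior.
print("json.decoder" in sys.modules)  # True
print(hasattr(json, "decoder"))       # True
```

So `del` in `__init__.py` only removes the convenience attribute; the module itself stays importable via `import tstpkg.modA`. The conventional way to signal "not part of the API" is an underscore-prefixed module name rather than hiding it.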
77,232,242
12,282,834
How to pass output variables of CodePipeline Lambda Action to CodeBuild action
<p>I'm trying to build a simple CI/CD pipeline using the CDK Python library. I have the code for creating a construct which consists of the following:</p> <ol> <li>A Lambda Function to be used inside a codepipeline action. Main purpose of this lambda function is to get the output values for a cloudformation stack that is in another account.</li> <li>CodeBuild project.</li> <li>CodePipeline Project: <ul> <li>Source</li> <li>Lambda Invoke Action (uses previously created lambda function)</li> <li>Build stage</li> </ul> </li> </ol> <p>I want to pass on some output values from the lambda invoke action to the build stage. However, after defining the environment variables when creating the codebuild project construct, they are not being passed at all.</p> <p>This is the code I have for the cdk application:</p> <pre class="lang-py prettyprint-override"><code>class ModelDeployCICDStack(Stack): def __init__(self, scope: Construct, construct_id: str, envs_deployment,**kwargs) -&gt; None: super().__init__(scope, construct_id, **kwargs) codebuild_lambda_role = iam.Role( self, &quot;CodeBuildLambdaInvokeAction&quot;, role_name=f&quot;CICDPipelineCodeBuildLambdaInvokeRole&quot;, managed_policies=[ iam.ManagedPolicy.from_aws_managed_policy_name(&quot;AWSLambda_FullAccess&quot;), iam.ManagedPolicy.from_aws_managed_policy_name(&quot;CloudWatchFullAccess&quot;) ], assumed_by=iam.CompositePrincipal( iam.ServicePrincipal(&quot;lambda.amazonaws.com&quot;), ) ) codebuild_lambda_role.add_to_policy( iam.PolicyStatement( actions=[&quot;sts:AssumeRole&quot;], resources=[&quot;arn:aws:iam::XXXXXXX:role/DescribeCfnStackRole&quot;] ) ) my_lambda = _lambda.Function( self, &quot;CfnCAALambdaFunc&quot;, runtime=_lambda.Runtime.PYTHON_3_8, code=_lambda.Code.from_asset(&quot;ticket_irregularities_cicd_cdk/lambda&quot;), handler=&quot;cfn_caa_helper.lambda_handler&quot;, role = codebuild_lambda_role ) lambda_invoke_action = aws_codepipeline_actions.LambdaInvokeAction( action_name=&quot;Lambda&quot;, 
lambda_= my_lambda, variables_namespace=&quot;ConfigMetadata&quot; ) # Create CodePipeline project # Note: It is created here so resource s3 bucket can be referenced pipeline = codepipeline.Pipeline( self, &quot;CodePipeline&quot;, pipeline_name=&quot;ModelDeploy-CodePipeline&quot;, ) # Create codebuild role codebuild_role = iam.Role( self, &quot;CodeBuildRole&quot;, role_name = f&quot;CodeBuildRoleModelDeployPipeline&quot;, assumed_by = iam.CompositePrincipal(iam.ServicePrincipal(&quot;codebuild.amazonaws.com&quot;), iam.ServicePrincipal(&quot;codepipeline.amazonaws.com&quot;) ) ) # Define the CodeBuild project for the building of the model deployment code # For this, the output of running the code, is a set of Cloudformation templates build_project_execute = codebuild.PipelineProject( self, id = &quot;CodeBuildProject&quot;, project_name=&quot;ModelDeploy-CodeBuild&quot;, role=codebuild_role, build_spec = codebuild.BuildSpec.from_source_filename(&quot;buildspec.yml&quot;), environment= codebuild.BuildEnvironment(build_image=codebuild.LinuxBuildImage.AMAZON_LINUX_2_2,), environment_variables={ &quot;SAGEMAKER_EXECUTION_ROLE_ARN_DEV&quot;: codebuild.BuildEnvironmentVariable( type=codebuild.BuildEnvironmentVariableType.PLAINTEXT, value= envs_deployment[&quot;dev&quot;][&quot;sagemaker&quot;] ), &quot;LAMBDA_EXECUTION_ROLE_ARN_DEV&quot;: codebuild.BuildEnvironmentVariable( type=codebuild.BuildEnvironmentVariableType.PLAINTEXT, value= envs_deployment[&quot;dev&quot;][&quot;lambda&quot;] ), &quot;DEFAULT_BUCKET_DEV&quot;: codebuild.BuildEnvironmentVariable( type=codebuild.BuildEnvironmentVariableType.PLAINTEXT, value= envs_deployment[&quot;dev&quot;][&quot;pipeline_bucket&quot;] ), &quot;SAGEMAKER_EXECUTION_ROLE_ARN_STAGING&quot;: codebuild.BuildEnvironmentVariable( type=codebuild.BuildEnvironmentVariableType.PLAINTEXT, value= &quot;#{ConfigMetadata.SMRoleArn}&quot; ), &quot;LAMBDA_EXECUTION_ROLE_ARN_STAGING&quot;: codebuild.BuildEnvironmentVariable( 
type=codebuild.BuildEnvironmentVariableType.PLAINTEXT, value= &quot;#{ConfigMetadata.LambdaRoleArn}&quot; ), &quot;DEFAULT_BUCKET_STAGING&quot;: codebuild.BuildEnvironmentVariable( type=codebuild.BuildEnvironmentVariableType.PLAINTEXT, value= &quot;#{ConfigMetadata.S3BucketName}&quot; ), &quot;ARTIFACT_BUCKET&quot;: codebuild.BuildEnvironmentVariable( type=codebuild.BuildEnvironmentVariableType.PLAINTEXT, value= pipeline.artifact_bucket.bucket_name ), &quot;MODEL_BUILD_S3_BUCKET&quot;: codebuild.BuildEnvironmentVariable( type=codebuild.BuildEnvironmentVariableType.PLAINTEXT, value=envs_deployment[&quot;dev&quot;][&quot;modelbuild_bucket&quot;] ), &quot;EXPORT_TEMPLATE_NAME&quot;: codebuild.BuildEnvironmentVariable( type=codebuild.BuildEnvironmentVariableType.PLAINTEXT, value= &quot;template-export.yml&quot; ), &quot;EXPORT_TEMPLATE_DEV_CONFIG&quot;: codebuild.BuildEnvironmentVariable( type=codebuild.BuildEnvironmentVariableType.PLAINTEXT, value= &quot;dev-config-export.json&quot; ), &quot;EXPORT_TEMPLATE_STAGING_CONFIG&quot;: codebuild.BuildEnvironmentVariable( type=codebuild.BuildEnvironmentVariableType.PLAINTEXT, value= &quot;staging-config-export.json&quot; ), &quot;EXPORT_TEMPLATE_PROD_CONFIG&quot;: codebuild.BuildEnvironmentVariable( type=codebuild.BuildEnvironmentVariableType.PLAINTEXT, value= &quot;prod-config-export.json&quot; ), } ) # Define source action - what repository to look for changes # Note: The connection needs to be set up manually once - using the console source_code_output= codepipeline.Artifact() source_action = aws_codepipeline_actions.CodeStarConnectionsSourceAction( action_name=&quot;Github_Source_ModelDeploy&quot;, owner=&quot;XXXXXXXXXX&quot;, repo=&quot;sm-modeldeploy&quot;, branch=&quot;main&quot;, output=source_code_output, connection_arn=&quot;arn:aws:codestar-connections:eu-north-1:XXXXXXXXXX:connection/XXXXXXXXXXXXXXX&quot; ) # Define approval action before executing codebuild project manual_approval_for_build = 
aws_codepipeline_actions.ManualApprovalAction( action_name=&quot;ApproveBuildingTemplates&quot;, ) source_input_cfn_template = codepipeline.Artifact(&quot;artifact1&quot;) source_input_cfn_template_1 = codepipeline.Artifact(&quot;artifact2&quot;) source_input_cfn_template_2 = codepipeline.Artifact(&quot;artifact3&quot;) source_input_cfn_template_3 = codepipeline.Artifact(&quot;artifact4&quot;) # Define code build action (which will used previous codebuild project and code source output) # The underlying codebuild project, produces some output/artifacts files that will be used later on build_action = aws_codepipeline_actions.CodeBuildAction( action_name=&quot;BuildCfnTemplatesDeployment&quot;, project=build_project_execute, input=source_code_output, outputs=[source_input_cfn_template, source_input_cfn_template_1, source_input_cfn_template_2, source_input_cfn_template_3] ) # Add stage with source action pipeline.add_stage(stage_name=&quot;Source&quot;, actions=[source_action]) # Add lambda invoke action pipeline.add_stage(stage_name=&quot;LambdaInvoke&quot;, actions=[lambda_invoke_action]) # Add stage with manual approval action pipeline.add_stage(stage_name=&quot;Approve&quot;, actions=[manual_approval_for_build]) # Add stage with build action pipeline.add_stage(stage_name=&quot;Build&quot;, actions=[build_action]) </code></pre> <p>And this is the code for the lambda function that is used for the CodePipeline action:</p> <pre class="lang-py prettyprint-override"><code>import boto3 import json CODEPIPELINE_CLIENT = boto3.client('codepipeline') def assume_crossaccount_role(role_arn, role_session_name=&quot;cfn_lookup_outputs&quot;): role_info = { 'RoleArn': role_arn, 'RoleSessionName': role_session_name } client = boto3.client('sts') credentials = client.assume_role(**role_info) session = boto3.session.Session( aws_access_key_id=credentials['Credentials']['AccessKeyId'], aws_secret_access_key=credentials['Credentials']['SecretAccessKey'], 
aws_session_token=credentials['Credentials']['SessionToken'] ) return session boto_session = assume_crossaccount_role(&quot;arn:aws:iam::XXXXXXXXXXXXXXX:role/DescribeCfnStackRole&quot;) CF_CLIENT = boto_session.client('cloudformation') def lambda_handler(event, context): # Replace 'your-stack-name' with the name of your CloudFormation stack stack_name = 'ModelDeployStaging-ModelDeployInfra' try: # Describe the stack to get its information response = CF_CLIENT.describe_stacks(StackName=stack_name) # Extract the stack outputs stack = response['Stacks'][0] # Assuming there is only one stack with this name outputs = stack.get('Outputs', []) if not outputs: # Report failure to CodePipeline CODEPIPELINE_CLIENT.put_job_failure_result( jobId=event['CodePipeline.job']['id'], failureDetails={ 'type': 'JobFailed', 'message': f&quot;No outputs found for stack '{stack_name}'&quot; } ) else: output_dict = {output['OutputKey']: output['OutputValue'] for output in outputs} print(output_dict) # Report success to CodePipeline with the output values CODEPIPELINE_CLIENT.put_job_success_result( jobId=event['CodePipeline.job']['id'], outputVariables=output_dict ) except Exception as e: # Report failure to CodePipeline with the error message CODEPIPELINE_CLIENT.put_job_failure_result( jobId=event['CodePipeline.job']['id'], failureDetails={ 'type': 'JobFailed', 'message': f&quot;Error: {str(e)}&quot; } ) </code></pre> <p>Here: the variable <code>output_dict</code> has the following value <code>{'SMRoleArn': 'arn:aws:iam::XXXXXXX:role/SageMakerExecXXXXXXXXXXXXXX', 'LambdaRoleArn': 'arn:aws:iam::XXXXXXXXX:role/LambdaExecutioXXXXXXXXXX', 'S3BucketName': 'modeldeploystaging-XXXXXXXXXXXXXXXXXXXX'}</code> And I'm passing them using the .put_job_success_result method in the lambda function.</p> <p>In my build stage, within the CodePipeline project, I have a simple buildspec file that has the commands to print out the value of the environment variables, but it doesn't print the correct 
values.</p> <p>Am I missing something obvious? Can someone help me out on this?</p> <p>I have taken this question and its accepted answer as a reference: <a href="https://stackoverflow.com/questions/61603414/how-to-fetch-ssm-parameters-from-two-different-accounts-using-aws-cdk">How to fetch SSM Parameters from two different accounts using AWS CDK</a>, but I had no luck.</p>
<python><amazon-web-services><aws-cdk>
2023-10-04 18:47:30
1
523
Francisco Parrilla
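A likely cause of the problem above (hedged — I have not deployed this exact stack): CodePipeline substitutes `#{Namespace.Variable}` tokens only inside an *action's* configuration, at execution time. Environment variables declared on the CodeBuild *project* are plain strings in the project definition, so the build sees the literal `#{ConfigMetadata.SMRoleArn}` text. In CDK that suggests passing the namespaced variables through the `environment_variables` argument of `aws_codepipeline_actions.CodeBuildAction` (action scope) instead of `codebuild.PipelineProject` (project scope). The shape of the action-level override that CodePipeline resolves can be sketched in plain Python:

```python
import json

# Hypothetical sketch of the action-level environment-variable override.
# Only values placed here -- in the pipeline action's configuration -- get
# their "#{Namespace.Variable}" tokens resolved at pipeline run time.
namespaced = {
    "SAGEMAKER_EXECUTION_ROLE_ARN_STAGING": "#{ConfigMetadata.SMRoleArn}",
    "LAMBDA_EXECUTION_ROLE_ARN_STAGING": "#{ConfigMetadata.LambdaRoleArn}",
    "DEFAULT_BUCKET_STAGING": "#{ConfigMetadata.S3BucketName}",
}

# CodePipeline's CodeBuild action carries the override as a JSON string of
# name/value/type objects:
environment_variables_override = json.dumps(
    [{"name": k, "value": v, "type": "PLAINTEXT"} for k, v in namespaced.items()]
)
print(environment_variables_override)
```

The project-level variables that are plain values (bucket names, file names, the dev-account ARNs) can stay where they are; only the ones carrying `#{ConfigMetadata.*}` tokens need to move to the action.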
77,232,241
3,171,007
Use Pandas to concat csv's into another folder
<p>A coworker wrote this to combine CSV files that are batch downloads of a DB table:</p> <pre><code>import os import pandas as pd folder_path = r'C:\_batches' files = [os.path.join(folder_path, file) for file in os.listdir(folder_path)] df = pd.concat([pd.read_csv(os.path.join(folder_path, file)) for file in files]) df.to_csv('combo_csv.csv', index=None) </code></pre> <p>The issue I have is that it's 60 GB of data and climbing, so I have to combine the files onto another drive where there's space. I modified it like this, just spitballing, but it doesn't work:</p> <pre><code>import os import pandas as pd folder_path_input = r'C:\_batches' folder_path_output = r'D:\_batches_Concat' files = [os.path.join(folder_path_input, file) for file in os.listdir(folder_path_input)] df = pd.concat([pd.read_csv(os.path.join(folder_path_output, file)) for file in files]) df.to_csv('batch_1.csv', index=None) </code></pre> <p>How can I modify the code to concat to another folder?</p>
<python><pandas>
2023-10-04 18:47:22
1
1,744
n8.
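Two things stand out in the modified script above (hedged reading): the comprehension reads from `folder_path_output`, where the batch files don't exist — it should still read from `folder_path_input` — and `df.to_csv('batch_1.csv')` writes to the current working directory, so the destination needs `os.path.join(folder_path_output, 'batch_1.csv')`. Separately, `pd.concat` holds all 60 GB in memory; since the batches share a header, a streaming stdlib approach sidesteps that. A self-contained sketch using temp folders in place of the `C:` and `D:` paths:

```python
import csv
import os
import tempfile

# Hypothetical demo folders; in the real script these would be
# r'C:\_batches' and r'D:\_batches_Concat'.
folder_path_input = tempfile.mkdtemp()
folder_path_output = tempfile.mkdtemp()

# Create two tiny sample batch files so the sketch is self-contained.
for name, rows in [("a.csv", [["1", "x"]]), ("b.csv", [["2", "y"]])]:
    with open(os.path.join(folder_path_input, name), "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["id", "val"])  # every batch carries the same header
        w.writerows(rows)

# Stream each batch into the combined file on the other drive,
# keeping only the first file's header.
out_path = os.path.join(folder_path_output, "batch_1.csv")
with open(out_path, "w", newline="") as out:
    writer = csv.writer(out)
    for i, name in enumerate(sorted(os.listdir(folder_path_input))):
        with open(os.path.join(folder_path_input, name), newline="") as f:
            reader = csv.reader(f)
            header = next(reader)
            if i == 0:
                writer.writerow(header)
            writer.writerows(reader)
```

This never holds more than one row in memory, so it scales past 60 GB; if sticking with pandas is preferred, `to_csv(..., mode='a', header=(i == 0))` per file achieves the same shape.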
77,232,035
6,003,620
Accessing path variable in fastapi
<p>I have a FastAPI endpoint</p> <pre><code>@app.post(&quot;/{ap_name}/{branch_name}&quot;) </code></pre> <p>Here <code>branch_name</code> is a GitHub repo branch name that I need to capture. If the user gives a one-word branch name, it's fine; but if the user gives a branch name with slashes, like ap_name/<strong>ap/rt</strong>, then it errors out. Is there a mechanism where I can treat anything after <code>{ap_name}/</code> as <code>branch_name</code>? I.e., if the user gives <code>ap_name/ap/rt....</code>, can we map <code>ap/rt</code> to <code>branch_name</code> with something like <code>{ap_name}/*..</code> in FastAPI?</p>
<python><rest><fastapi>
2023-10-04 18:11:53
1
1,155
niranjan pb
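For the routing question above: FastAPI (via Starlette) has a built-in `path` converter for exactly this case — declaring the route as `@app.post("/{ap_name}/{branch_name:path}")` makes `branch_name` capture the remainder of the URL, slashes included. The matching behaviour is roughly the regex below (a plain-`re` sketch of the route, not FastAPI code):

```python
import re

# Roughly the pattern a route like
#   @app.post("/{ap_name}/{branch_name:path}")
# compiles to: {ap_name} matches a single segment, {branch_name:path}
# matches everything that follows, slashes included.
route = re.compile(r"^/(?P<ap_name>[^/]+)/(?P<branch_name>.*)$")

m = route.match("/myapp/ap/rt")
print(m.group("ap_name"))      # myapp
print(m.group("branch_name"))  # ap/rt
```

So a request to `POST /myapp/ap/rt` would hand the handler `ap_name == "myapp"` and `branch_name == "ap/rt"`, with no extra parsing needed.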