Columns (type, min, max):
QuestionId: int64, 74.8M, 79.8M
UserId: int64, 56, 29.4M
QuestionTitle: string, lengths 15, 150
QuestionBody: string, lengths 40, 40.3k
Tags: string, lengths 8, 101
CreationDate: string date, 2022-12-10 09:42:47, 2025-11-01 19:08:18
AnswerCount: int64, 0, 44
UserExpertiseLevel: int64, 301, 888k
UserDisplayName: string, lengths 3, 30
76,142,428
17,353,489
Unexpected uint64 behaviour 0xFFFF'FFFF'FFFF'FFFF - 1 = 0?
<p>Consider the following brief numpy session showcasing <code>uint64</code> data type</p> <pre><code>import numpy as np a = np.zeros(1,np.uint64) a # array([0], dtype=uint64) a[0] -= 1 a # array([18446744073709551615], dtype=uint64) # this is 0xffff ffff ffff ffff, as expected a[0] -= 1 a # array([0], dtype=uint64) # what the heck? </code></pre> <p>I'm utterly confused by this last output.</p> <p>I would expect 0xFFFF'FFFF'FFFF'FFFE.</p> <p>What exactly is going on here?</p> <p>My setup:</p> <pre><code>&gt;&gt;&gt; sys.platform 'linux' &gt;&gt;&gt; sys.version '3.10.5 (main, Jul 20 2022, 08:58:47) [GCC 7.5.0]' &gt;&gt;&gt; np.version.version '1.23.1' </code></pre>
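A sketch of the likely cause, assuming NumPy 1.x promotion rules (which match the `1.23.1` shown above): `a[0] - 1` mixes a `uint64` scalar with a Python `int`, and NumPy 1.x promotes that combination to `float64` (there is no integer type that can hold both operands). `float64` has only 53 mantissa bits, so `2**64 - 1` rounds up to `2**64`, subtracting 1 at that magnitude changes nothing, and writing the float `2**64` back into the `uint64` array wraps to 0. Plain Python floats are IEEE-754 doubles, so the rounding step is easy to see without NumPy:

```python
# float64 cannot represent 2**64 - 1 exactly; it rounds up to 2**64.
x = float(2**64 - 1)
print(x == 2.0**64)   # True: the low bits are already gone
# The spacing between adjacent doubles near 2**64 is 4096, so -1 is a no-op.
print(x - 1.0 == x)   # True
```

Storing that float `2**64` back into a `uint64` slot then overflows to 0, which is the second output above.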
<python><numpy><uint64>
2023-04-30 16:50:29
5
533
Albert.Lang
76,142,366
9,983,652
ModuleNotFoundError: No module named 'dash_extensions.callback'
<p>Since I am using Anaconda, I installed dash_extensions with the command below instead of pip.</p> <p>conda install -c conda-forge dash-extensions</p> <p><a href="https://anaconda.org/conda-forge/dash-extensions" rel="nofollow noreferrer">https://anaconda.org/conda-forge/dash-extensions</a></p> <p>I can import dash_extensions without any problem; only importing dash_extensions.callback fails. Do I have to use pip install? I am concerned that it will create issues if I use pip inside Anaconda.</p> <p>Thanks for the help.</p> <p>The pip install command is below, but I didn't use it:</p> <p>pip install dash-extensions</p> <p><a href="https://pypi.org/project/dash-extensions/" rel="nofollow noreferrer">https://pypi.org/project/dash-extensions/</a></p> <pre><code>from dash_extensions.callback import CallbackCache, Trigger --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_33592/2123918533.py in ----&gt; 1 from dash_extensions.callback import CallbackCache, Trigger ModuleNotFoundError: No module named 'dash_extensions.callback' </code></pre>
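One way to narrow this down, regardless of installer: check whether the installed release of the package still ships the submodule at all, since newer versions of a package can move or remove submodules. The helper name below is mine, not part of dash-extensions:

```python
import importlib.util

def submodule_available(name: str) -> bool:
    """Return True if a dotted module path (e.g. 'dash_extensions.callback')
    can be located without actually importing it."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # the parent package itself is not installed
        return False

# e.g. submodule_available("dash_extensions.callback")
```

If this returns `False` while `dash_extensions` itself imports fine, the conda-forge build simply contains a version without that submodule, and the fix is a different version, not a different installer.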
<python><plotly-dash>
2023-04-30 16:36:10
0
4,338
roudan
76,142,338
16,527,606
When I recreate a machine learning model in a Flask API, the server shuts down without any warning
<pre><code>from flask import Blueprint, request, current_app as app from project_app.modules import ml_tool, pickle_tool adminBp = Blueprint('admin', __name__, url_prefix='/admin') @adminBp.route('/refresh', methods=['POST']) def refresh(): body = request.get_json() df_nonTest, df_test = ml_tool.get_splitted_df() app.models = ml_tool.create_models(df_nonTest) # Shutdown occurs when this line is present. pickle_tool.save_models(app.models) print('model is created') app.score = ml_tool.create_roc_auc_score(df_test, app.models) # No shutdown occurs even when this line is present. pickle_tool.save_score(app.score) print('score is calculated') return {'message' : 'refreshed'}, 200 </code></pre> <p>and the output looks like this:</p> <pre><code>... Batch 8 started... [0] validation_0-auc:0.67508 [107] validation_0-auc:0.77892 Batch 9 started... [0] validation_0-auc:0.68782 [102] validation_0-auc:0.78389 model is created score is calculated 127.0.0.1 - - [01/May/2023 01:26:21] &quot;POST /admin/refresh HTTP/1.1&quot; 200 - (an_env) user@computer MINGW64 /path/to/project (main) $ </code></pre> <p>I can see 'model is created' and 'score is calculated' printed, and I get the 'refreshed' response, but after that the server shuts down without any error or warning. Can somebody explain why this happens, and what the right code would be?</p>
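A silent exit right after a successful response often means the process died inside native code (for example in the booster library or while pickling a large object) rather than raising a Python exception. One low-cost diagnostic, assuming nothing about the actual cause: enable `faulthandler` before `app.run()`, so a crash in a C extension at least dumps a low-level traceback instead of the process just vanishing:

```python
import faulthandler
import sys

# Enable as early as possible (before app.run()). If the interpreter is
# killed inside a C extension, a stack dump is written to stderr.
faulthandler.enable(file=sys.stderr, all_threads=True)
```

If nothing is printed even with this enabled, the next suspects are the OS killing the process (e.g. out of memory) or the debug reloader restarting on a changed file.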
<python><machine-learning><flask>
2023-04-30 16:30:34
0
537
niddddddfier
76,142,305
17,507,202
How to create a function that translates an array of 0s and 1s into a float?
<p>I am trying to create a function that takes an array as argument, the array represents number but in Binary, I want this function to return a number.</p> <p>I can translate any number into Binary, however I can't figure out how to get float number from Binary.</p> <p>For example 4.8 is [0, 1, 0, 0, 1, 1, 0, 0], but when I call the function binaryToFloat([0, 1, 0, 0, 1, 1, 0, 0]) like this, it returns 5.2 instead of 4.8.</p> <p>However I can use only the functions provided below. Can anybody help me ?</p> <pre><code>from typing import List, Callable def bitNot(bit:int) -&gt; int: return 1 - bit def bitAnd(bit1:int, bit2:int) -&gt; int: return bit1 * bit2 def bitOr(bit1:int, bit2:int) -&gt; int: return max(bit1, bit2) def bitXor(bit1:int, bit2:int) -&gt; int: return bitOr(bit1, bit2) - bitAnd(bit1, bit2) def map_array(function, array): &quot;&quot;&quot;Použije funkci na každý prvek pole.&quot;&quot;&quot; result = [] for el in array: result += [function(el)] return result def zipArray(arr1: List, arr2: List) -&gt; List: result: List = [] for i in range(len(arr1)): result += [[arr1[i], arr2[i]]] return result &quot;&quot;&quot; &gt;&gt;&gt; zipArray([1, 2, 3], [4, 5, 6]) [[1, 4], [2, 5], [3, 6]] &quot;&quot;&quot; def mapTwoArrays(func: Callable, arr1: List, arr2: List) -&gt; List: def mapFunc(pair): return func(pair[0], pair[1]) return map_array(mapFunc, zipArray(arr1, arr2)) def add(num: int, num2: int) -&gt; int: return num + num2 &quot;&quot;&quot; &gt;&gt;&gt; mapTwoArrays(add, [1, 2, 3], [4, 5, 6]) [5, 7, 9] &quot;&quot;&quot; def bitArrayAnd(arr1: List, arr2: List) -&gt; List: return mapTwoArrays(bitAnd, arr1, arr2) def bitArrayOr(arr1: List, arr2: List) -&gt; List: return mapTwoArrays(bitOr, arr1, arr2) def bitArrayXor(arr1: List, arr2: List) -&gt; List: return mapTwoArrays(bitXor, arr1, arr2) def bitArrayLeftShift(arr: List, n: int) -&gt; List: return arr + (n * [0]) def bitArrayNot(arr: List) -&gt; List: return map_array(bitNot, arr) &quot;&quot;&quot; 
rekurze verze def bitArrayRightShift(arr: List, n: int) -&gt; List: if n == 0: return arr else: return bitArrayRightShift(arr[:-1], n -1) &quot;&quot;&quot; def bitArrayRightShift(arr: List, n: int) -&gt; List: return arr[:len(arr) - n] def fillZeroes(arr: List, n: int) -&gt; List: return ([0] * n) + arr def alignSize(arr: List, n: int) -&gt; List: return fillZeroes(arr, n - len(arr)) #pouze pomocí bitových operaci def setBitOn(arr: List, i: int) -&gt; List: res: List = alignSize(bitArrayLeftShift([1], i), len(arr)) return bitArrayOr(arr, res) def setBitOff(arr: List, i: int) -&gt; List: res: List = alignSize(bitArrayLeftShift([1], i), len(arr)) return map_array(bitNot,res) def getBit(arr: List, i: int) -&gt; List: res: List = bitArrayRightShift(arr, i) mask: List = alignSize([1], len(res)) return bitArrayAnd(res,mask) #reprezentace nezáporných čísel v dvojkove soustave def unsignedIntToBytes(num: int) -&gt; List: result: List = [] while num != 0: bit: int = num % 2 result = [bit] + result num = num // 2 return result &quot;&quot;&quot; &gt;&gt;&gt; unsignedIntToBytes(5) [1, 0, 1] &quot;&quot;&quot; def bitesToUnsignedInt(arr: List) -&gt; int: integer = 0 for i in range(len(arr)): if arr[i] == 1: integer += 2 ** (len(arr) - i - 1) return integer &quot;&quot;&quot; &gt;&gt;&gt; bitesToUnsignedInt([1, 0, 1]) 5 &quot;&quot;&quot; #cela cisla v binary def numToBytes(num: int, size: int) -&gt; List: if num &gt;= 0: arr: List = unsignedIntToBytes(num) fillSize = size - len(arr) if fillSize &lt;= 0: raise ValueError(f'SIze of arr, to small, Missing {-fillSize+1} bit(s)') return [0] * (size - len(arr)) + arr else: return bitArrayNot(numToBytes(-num - 1, size)) &quot;&quot;&quot; &gt;&gt;&gt; numToBytes(8, 8) [0, 0, 0, 0, 1, 0, 0, 0] &gt;&gt;&gt; numToBytes(-8, 8) [1, 1, 1, 1, 1, 0, 0, 0] &quot;&quot;&quot; def bytesToNum(arr: List) -&gt; int: signBit: int = arr[0] if signBit == 0: return bitesToUnsignedInt(arr) else: return -(bytesToNum(bitArrayNot(arr)) + 1) 
&quot;&quot;&quot; &gt;&gt;&gt; bytesToNum([0, 0, 0, 0, 1, 0, 0, 0]) 8 &gt;&gt;&gt; bytesToNum([1, 1, 1, 1, 1, 0, 0, 0]) -8 &quot;&quot;&quot; def nativeSetBitOn(num: int, i: int) -&gt; int: res: int = 1 &lt;&lt; i return num | res def nativeSetBitOff(num: int, i: int) -&gt; int: res: int = ~(1 &lt;&lt; i) return num &amp; res &quot;&quot;&quot; &gt;&gt;&gt; native_set_bit_off(8, 3) 0 &gt;&gt;&gt; native_set_bit_off(-1, 2) -5 &quot;&quot;&quot; def nativeGetBit(n: int, i:int) -&gt; int: res: int = n &gt;&gt; i return res &amp; n &quot;&quot;&quot; &gt;&gt;&gt; nativeGetBit(8, 3) 1 &quot;&quot;&quot; def nativeNumberToBitArray(num: int, size:int) -&gt; List: bit_array = [] for i in range(size-1, -1, -1): bit = (num &gt;&gt; i) &amp; 1 bit_array.append(bit) return bit_array &quot;&quot;&quot; &gt;&gt;&gt; nativeNumberToBitArray(5, 8) [0, 0, 0, 0, 0, 1, 0, 1] &gt;&gt;&gt; nativeNumberToBitArray(-1, 8) [1, 1, 1, 1, 1, 1, 1, 1] &quot;&quot;&quot; def getFirst(n:int) -&gt; int: return n &amp; 1 &quot;&quot;&quot; &gt;&gt;&gt; getFirst(9) 1 &quot;&quot;&quot; def getRest(n:int) -&gt; int: return n &gt;&gt; 1 &quot;&quot;&quot; &gt;&gt;&gt; getRest(9) 4 &quot;&quot;&quot; def nativeNumberToBitArray2(num: int, size:int) -&gt; List: &quot;&quot;&quot;Převede číslo na seznam bitů. 
Používá native_get_bit.&quot;&quot;&quot; bit_array = [] for i in range(size-1, -1, -1): bit_array.append(nativeGetBit(num, i) &amp; 1) return bit_array &quot;&quot;&quot; &gt;&gt;&gt; nativeNumberToBitArray2(5, 8) [0, 0, 0, 0, 0, 1, 0, 1] &gt;&gt;&gt; nativeNumberToBitArray2(-1, 8) [1, 1, 1, 1, 1, 1, 1, 1] &quot;&quot;&quot; DAY_BIT_COUNT = 5 MONTH_BIT_COUNT = 4 YEAR_BIT_COUNT = 7 def encodeDate(date: List) -&gt; int: day = date[0] &lt;&lt; (MONTH_BIT_COUNT + YEAR_BIT_COUNT) month = date[1] &lt;&lt; YEAR_BIT_COUNT year = (date[2] - 2000) &amp; ((1 &lt;&lt; YEAR_BIT_COUNT) - 1) return day | month | year def decodeDate(date: int) -&gt; List: year = (date &amp; ((1 &lt;&lt; YEAR_BIT_COUNT) - 1)) + 2000 month = (date &gt;&gt; YEAR_BIT_COUNT) &amp; ((1 &lt;&lt; MONTH_BIT_COUNT) - 1) day = date &gt;&gt; (YEAR_BIT_COUNT + MONTH_BIT_COUNT) return [day, month, year] #zápočet funkce co zakoduje a dekoduje zpět #je to zápočet #Rozšířit o rok #[27,4,2033] def floatToBinary(num: float, size: int): int_part = int(num) float_part = num - int_part int_binary = numToBytes(int_part, size) # assuming 32-bit integers float_binary = [] for i in range(size): float_part *= 2 bit = int(float_part) float_binary.append(bit) float_part -= bit return int_binary + float_binary def floatToBytes(num: float) -&gt; List[int]: # Extract sign, exponent, and significand sign = 0 if num &gt;= 0 else 1 num = abs(num) exponent = 0 while num &gt;= 2.0: num /= 2.0 exponent += 1 while num &lt; 1.0: num *= 2.0 exponent -= 1 significand = num - 1.0 # Convert sign, exponent, and significand to binary sign_bits = [sign] exponent_bits = numToBytes(exponent + 127, 8) significand_bits = unsignedIntToBytes(int(significand * (1 &lt;&lt; 23))) # Combine sign, exponent, and significand bits return sign_bits + exponent_bits + significand_bits def binaryToFloat(bits: List[int]) -&gt; float: whole_part = 0 fractional_part = 0 for i in range(len(bits)): if i &lt; 4: whole_part += bits[i] * (2 ** (3 - i)) else: 
fractional_part += bits[i] * (2 ** (3 - (i - 4))) return whole_part + (fractional_part / 10) print(floatToBinary(4.8,4)) print(binaryToFloat([0, 1, 0, 0, 1, 1, 0, 0])) #Should be 4.8 </code></pre>
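For the record, the decoding bug is in `binaryToFloat`: fractional bits carry weights 2**-1, 2**-2, ..., not a value divided by 10. A minimal corrected sketch for the unsigned fixed-point layout used above (4 integer bits, the rest fractional); note that `[0, 1, 0, 0, 1, 1, 0, 0]` then decodes to 4.75, because 4.8 is not exactly representable with 4 fractional bits:

```python
from typing import List

def binary_to_fixed(bits: List[int], int_bits: int = 4) -> float:
    """Decode an unsigned fixed-point number: the first `int_bits` bits are
    the integer part, the remaining bits get weights 2**-1, 2**-2, ..."""
    value = 0.0
    for i, b in enumerate(bits):
        value += b * 2.0 ** (int_bits - 1 - i)
    return value

print(binary_to_fixed([0, 1, 0, 0, 1, 1, 0, 0]))  # 4.75 (0100.1100)
```

So the encoder and decoder are consistent with each other, but 4.8 itself cannot round-trip through 4 fractional bits; 4.75 is the closest representable value.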
<python><python-3.x><floating-point><binary><binary-data>
2023-04-30 16:22:54
1
335
mitas1c
76,142,253
181,098
Flask / Bootstrap not showing image
<p>I have this Python Flask route (the app uses Bootstrap):</p> <pre><code># Index page (main page) @app.route('/') def index(): return render_template('test.html') </code></pre> <p>And my run code:</p> <pre><code>if __name__ == '__main__': app.run(debug=True) </code></pre> <p>The default template folder is the (relative) templates directory, so I have these files there:</p> <p>templates/test.html<br>templates/images/logo.svg</p> <p>The source of test.html is:</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;body&gt; Hello &lt;img src=&quot;/images/logo.svg&quot;&gt; &lt;img src=&quot;images/logo.svg&quot;&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>But when I load the page via the link Flask prints for the app, both images are broken.</p> <p>I'm guessing my path to logo.svg is wrong. How do I figure out what it is relative to the app's root?</p>
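Flask never serves files out of `templates/`; that folder is only read when rendering templates. Static assets belong in the app's static folder (`static/` by default), and the URL is best built with `url_for` so it follows the app's static route. A sketch, assuming the file is moved to `static/images/logo.svg`:

```python
from flask import Flask, url_for

app = Flask(__name__)  # serves files under static/ at /static/... by default

with app.test_request_context():
    # In test.html the template would reference it as:
    #   <img src="{{ url_for('static', filename='images/logo.svg') }}">
    print(url_for("static", filename="images/logo.svg"))  # /static/images/logo.svg
```

With that layout, `templates/` holds only `test.html`, and both hard-coded `src` paths in the template are replaced by the `url_for` expression.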
<python><flask><flask-bootstrap>
2023-04-30 16:09:32
1
3,367
Hein du Plessis
76,141,978
8,849,755
Cannot build scipy from source
<p>I am following the steps specified <a href="https://scipy.github.io/devdocs/dev/dev_quickstart.html" rel="nofollow noreferrer">here</a> under the <em>pip+venv</em> tab. I was able to create the virtual environment and install all the python level dependencies, but then when I navigate into <code>/wherever/I/cloned/scipy</code> and run <code>python dev.py build</code> I get <code>meson.build:1:0: ERROR: Cython compiler 'cython' cannot compile programs</code>. If I look into the log it says</p> <pre><code>Compiler stderr: ... File &quot;/home/me/.venvs/scipy-dev/lib/python3.10/site-packages/pythran/optimizations/pattern_transform.py&quot;, line 346, in &lt;module&gt; getattr(PatternTransform, attr_name) + (known_pattern,)) AttributeError: type object 'PatternTransform' has no attribute 'typePatterns'. Did you mean: 'CallPatterns'? </code></pre> <p>How can I fix this and build scipy?</p>
<python><build><scipy>
2023-04-30 15:10:37
1
3,245
user171780
76,141,615
6,819,038
How to reconnect to / retrieve the real-time output of a previous Jupyter notebook session?
<p>I have a lot of data to process, and processing takes a long time. Closing the tab of a Jupyter session disconnects the existing real-time session and its output, so I can't retrieve the previous real-time result. Example: I want the <code>tqdm</code> progress bar to be shown again after I reopen the closed Jupyter tab. I want to reattach to the real-time session and get the real-time feedback again.</p> <p>I did some research on this, but nothing I found addressed recovering the real-time output.</p> <p>Is there any solution to this?</p>
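The stream output itself is gone once the websocket that carried it is closed: the kernel keeps running, but Jupyter does not replay past stream output into a reopened tab. A common workaround, sketched here with the standard library only, is to mirror progress into a file that can be re-read (or tailed) after reconnecting:

```python
import logging

# Progress written to a file survives a closed browser tab; after reopening
# the notebook, `!tail progress.log` (or tail -f in a terminal) shows where
# the long-running loop currently is.
logger = logging.getLogger("progress")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("progress.log", mode="a"))

total = 3  # stands in for the real number of work items
for i in range(total):
    # ... process one chunk of data ...
    logger.info("processed %d/%d", i + 1, total)
```

`tqdm` can do the same directly via its `file=` argument pointed at an open log file, under the same idea: persist the progress somewhere outside the notebook's output stream.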
<python><websocket><jupyter-notebook>
2023-04-30 13:55:22
0
653
adib-enc
76,141,506
1,152,915
Simplifying weighted moving average calculation
<pre><code>accs = pd.DataFrame({'v': values[start_index:], 'w': weights[start_index:]})\ .rolling(window=window_len, min_periods=1, method='table').apply(weighted_mean, engine='numba', raw=True)\ .v.to_numpy()[a:b] # after .apply() we get df back with all columns being set to the same resulting sequence </code></pre> <p>Is it possible here to simplify that expression by removing <code>.v.to_numpy()</code> part? It's there because the <code>.apply(weighted_mean, engine='numba', raw=True)</code> returns dataframe back with all the columns set to the same resulting series. I would like to get the straight numpy array back immediately if possible.</p>
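If the goal is a plain NumPy array straight away, one option is to leave pandas out of this step entirely and compute the rolling weighted mean on the raw arrays. A sketch with `numpy.lib.stride_tricks.sliding_window_view` (note: this produces only full windows, so the `min_periods=1` edge behaviour of the pandas version is not reproduced here):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def rolling_weighted_mean(values: np.ndarray, weights: np.ndarray,
                          window: int) -> np.ndarray:
    """Weighted mean over each full-length rolling window, as a plain array."""
    v = sliding_window_view(values, window)
    w = sliding_window_view(weights, window)
    return (v * w).sum(axis=1) / w.sum(axis=1)

out = rolling_weighted_mean(np.array([1., 2., 3., 4.]),
                            np.array([1., 1., 2., 2.]), window=2)
# windows (1,2),(2,3),(3,4) with weights (1,1),(1,2),(2,2) -> 1.5, 2.666..., 3.5
print(out)
```

This returns a `numpy` array directly, with no duplicated columns to strip afterwards, and slicing `[a:b]` works on it the same way.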
<python><pandas>
2023-04-30 13:28:46
0
8,926
clime
76,141,435
13,438,431
Use SSL self-signed certificate to connect to ScyllaDB (cassandra) node on Windows?
<p>I've generated a self-signed cert using <a href="https://docs.scylladb.com/stable/operating-scylla/security/generate-certificate.html" rel="nofollow noreferrer">this scylla tutorial</a>. Started a scylladb node, everything's fine and dandy.</p> <p>Now it's time to connect clients. Here's the script:</p> <pre><code>from cassandra.cluster import Cluster ssl_options = dict( ca_certs='db.crt', cert_reqs=False, ssl_version=None, keyfile=None, certfile=None ) cluster = Cluster( ['&lt;my_ip&gt;'], port=9142, ssl_options=ssl_options ) cluster.connect() </code></pre> <p>The <code>db.crt</code> file is the PEM format certificate for the private key signed by the CA.</p> <p>On <code>Ubuntu 22.04</code> it works as expected. On windows 10 I get:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\...&quot;, line 27, in &lt;module&gt; cluster.connect() File &quot;cassandra\cluster.py&quot;, line 1734, in cassandra.cluster.Cluster.connect File &quot;cassandra\cluster.py&quot;, line 1770, in cassandra.cluster.Cluster.connect File &quot;cassandra\cluster.py&quot;, line 1757, in cassandra.cluster.Cluster.connect File &quot;cassandra\cluster.py&quot;, line 3573, in cassandra.cluster.ControlConnection.connect File &quot;cassandra\cluster.py&quot;, line 3618, in cassandra.cluster.ControlConnection._reconnect_internal cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers', {'&lt;my_ip&gt;': OSError(None, &quot;Tried connecting to [('&lt;my_ip&gt;', 9142)]. Last error: timed out&quot;)}) </code></pre> <p>I thought that this is a connectivity problem, but once I get rid of <code>ssl_options</code>, it connects to the server successfully, but treats the incoming bytes wrongly, ending up with such error:</p> <pre><code>cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers', {'&lt;my_ip&gt;': ProtocolError('This version of the driver does not support protocol version 21')}) </code></pre> <p>So I'm able to reach the server. 
It seems like windows is treating the certificate the wrong way or something. What can it be?</p> <p><em>P.S.</em> There is also a deprecation warning: <code>DeprecationWarning: Using ssl_options without ssl_context is deprecated and will result in an error in the next major release. Please use ssl_context to prepare for that release.</code>.</p> <p>I've looked at the <a href="https://github.com/apache/cassandra/blob/trunk/pylib/cqlshlib/cqlshmain.py" rel="nofollow noreferrer">cqlshlib implementation</a>, and it seems like it's still using the &quot;deprecated&quot; method of handling the <code>ssl</code>.</p> <p>How can one use <code>SSLContext</code> instead?</p>
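The `ssl_context` path that the deprecation warning asks for looks roughly like the sketch below. The wrapper name is mine, not driver API; the relevant driver API is the `ssl_context` keyword of `Cluster`, which takes a standard-library `ssl.SSLContext`:

```python
import ssl

def make_scylla_ssl_context(ca_path=None, verify_hostname=False):
    """Build a client-side SSLContext for Cluster(..., ssl_context=...)."""
    # PROTOCOL_TLS_CLIENT defaults to verify_mode=CERT_REQUIRED.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    # False for a self-signed cert whose hostname/SAN doesn't match the IP.
    ctx.check_hostname = verify_hostname
    if ca_path:
        ctx.load_verify_locations(ca_path)  # e.g. "db.crt" (PEM)
    return ctx

# from cassandra.cluster import Cluster
# cluster = Cluster(["<my_ip>"], port=9142,
#                   ssl_context=make_scylla_ssl_context("db.crt"))
# cluster.connect()
```

Whether this also cures the Windows timeout is a separate question, but it at least removes the deprecated `ssl_options` dict and gives explicit, inspectable control over verification.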
<python><ssl><cassandra><scylla><datastax-python-driver>
2023-04-30 13:09:42
1
2,104
winwin
76,141,399
1,581,090
How to fix python logging error "TypeError: setLevel() missing 1 required positional argument: 'level'"?
<p>I am trying to use a simple logger in Python (3.9.6) to log to the terminal, with the output formatted with a timestamp etc., as follows (following <a href="https://sematext.com/blog/python-logging/" rel="nofollow noreferrer">this example</a>):</p> <pre><code>import logging logger = logging.getLogger(&quot;__main__&quot;) logger.setLevel(logging.DEBUG) # create a file handler handler = logging.StreamHandler handler.setLevel(logging.DEBUG) # create a logging format and add that to the logger formatter = logging.Formatter('%(asctime)s - %(filename)s:%(lineno)d - %(levelname)s - %(message)s') handler.setFormatter(formatter) logger.addHandler(handler) </code></pre> <p>But when I run this seemingly correct code I get an error:</p> <pre><code>TypeError: setLevel() missing 1 required positional argument: 'level' </code></pre> <p>Any ideas?</p>
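The error comes from one missing pair of parentheses: `handler = logging.StreamHandler` binds the class itself, so `handler.setLevel(logging.DEBUG)` calls the method on the class, `logging.DEBUG` lands in the `self` slot, and `level` is reported missing. Instantiating the handler fixes it:

```python
import logging

logger = logging.getLogger("__main__")
logger.setLevel(logging.DEBUG)

# create a stream handler -- note the (): an instance, not the class
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)

# create a logging format and add it to the handler
formatter = logging.Formatter(
    '%(asctime)s - %(filename)s:%(lineno)d - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)

logger.debug("handler configured")
```

The same class-versus-instance slip produces lookalike `missing 1 required positional argument` errors for any method called on a forgotten-parentheses class.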
<python>
2023-04-30 13:01:12
2
45,023
Alex
76,140,943
675,011
How to connect a Python gRPC stub to a Go gRPC server in the same process
<p>I have a Python app that loads Go code through a shared library. I want to call a gRPC server in the Go part from the Python part of the application. I could make the Go server listen on a TCP socket, but I want to avoid the overhead of TCP, want to avoid the use of sockets (as I will be creating and destroying many of these at a very high rate, possibly in parallel) and want to communicate entirely in-memory.</p> <p>I was hoping I could stub a net.Connection, and simply communicate reading/writing data across the languages using simple functions. The Go end doesn’t seem to be a problem, as you can stub a net.Listener for communication. If this were Go on both ends, I could use <a href="https://pkg.go.dev/google.golang.org/grpc/test/bufconn" rel="nofollow noreferrer">bufconn</a> to connect the client to the server directly, and communicate through an in-memory buffer. But for Python, the lowest level to stub for a gRPC client I can find is a Channel, for which there seems to be a <a href="https://grpc.github.io/grpc/python/grpc_testing.html" rel="nofollow noreferrer">testing Channel</a>, but this level seems to be much to high to stub conveniently.</p> <p>Are there other ways to connect gRPC between Python and Go, without having to stub every RPC call in the API on the language boundary?</p>
<python><go><grpc><grpc-python><grpc-go>
2023-04-30 11:14:39
0
877
Remko
76,140,909
6,198,659
IFC: Add Unit (IfcUnit) to IfcPropertySingleValue
<p>A Python script produces an IFC file in which the following line appears several times:</p> <pre><code>PropertySingleValueWriter = ifcfile.createIfcPropertySingleValue(&quot;{}&quot;.format(V), &quot;{}&quot;.format(k), ifcfile.create_entity(&quot;IfcText&quot;, str((val[&quot;{}&quot;.format(k)]))), None) </code></pre> <p>This produces (as one representative example)</p> <pre><code>#598=IFCPROPERTYSINGLEVALUE('Object','Wall',IFCTEXT('12.3'),$); </code></pre> <p>The last argument <code>None</code> stands for the unit, which in this case has not been given yet and was translated as <code>$</code> in the output IFC file. The unit defined by the line</p> <pre><code>#7=IFCSIUNIT(*,.LENGTHUNIT.,$,.METRE.); </code></pre> <p>in the IFC file should now be inserted instead. This can be done manually in the IFC file by writing <code>#7</code> into the line,</p> <pre><code>#598=IFCPROPERTYSINGLEVALUE('Object','Wall',IFCTEXT('12.3'),#7); </code></pre> <p>Using an adapted Python script would be much more efficient. However, I have not yet found the correct way to add <code>#7</code> other than as plain text.
My attempts so far have been:</p> <pre><code>[1] PropertySingleValueWriter = ifcfile.createIfcPropertySingleValue(&quot;{}&quot;.format(V), &quot;{}&quot;.format(k), ifcfile.create_entity(&quot;IfcText&quot;, str((val[&quot;{}&quot;.format(k)]))), &quot;#7&quot;) [2] PropertySingleValueWriter = ifcfile.createIfcPropertySingleValue(&quot;{}&quot;.format(V), &quot;{}&quot;.format(k), ifcfile.create_entity(&quot;IfcText&quot;, str((val[&quot;{}&quot;.format(k)]))), &quot;'#7'&quot;) [3] PropertySingleValueWriter = ifcfile.createIfcPropertySingleValue(&quot;{}&quot;.format(V), &quot;{}&quot;.format(k), ifcfile.create_entity(&quot;IfcText&quot;, str((val[&quot;{}&quot;.format(k)]))), &quot;'{}'&quot;.format(&quot;#7&quot;)) [4] PropertySingleValueWriter = ifcfile.createIfcPropertySingleValue(&quot;{}&quot;.format(V), &quot;{}&quot;.format(k), ifcfile.create_entity(&quot;IfcText&quot;, str((val[&quot;{}&quot;.format(k)]))), ifcfile.create_entity(&quot;IfcText&quot;, &quot;#7&quot;)) </code></pre> <p>They either produce an error ([1], [2], [3]) or explicitly write <code>IFCTEXT('#7')</code> into the IFC file ([4]), which is not interpreted as a reference to line <code>#7</code>.</p> <p>What is the correct way to script this in Python so that the reference to line <code>#7</code> is created, as it is by manual editing?</p>
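Assuming the file object comes from ifcopenshell (which the `create_entity`/`createIfcPropertySingleValue` calls suggest), the Unit argument must be the IfcSIUnit *entity instance*, not a string: when an entity instance is passed, the serializer writes its `#7` reference. The instance can be kept from when the unit was created, or fetched by its STEP id with `by_id`. A sketch (the wrapper name is mine):

```python
def property_with_unit(ifcfile, name, description, value_text, unit_id=7):
    """Create an IfcPropertySingleValue whose Unit slot references an
    existing entity such as #7=IFCSIUNIT(...), instead of writing $."""
    unit = ifcfile.by_id(unit_id)  # the entity instance behind "#7"
    return ifcfile.createIfcPropertySingleValue(
        name, description,
        ifcfile.create_entity("IfcText", value_text),
        unit)  # passing the instance serializes as ...,#7);
```

Strings like `"#7"` fail (or come out as `IFCTEXT('#7')`) because the writer treats them as attribute values, never as entity references.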
<python><scripting><ifc>
2023-04-30 11:07:42
1
522
stonebe
76,140,839
16,383,578
Regex to find sequences of two given characters that alternate
<p>I have written a small program to convert IPv6 addresses to <code>int</code>s and back, and I have managed to beat built-in <code>ipaddress.IPv6Address</code> in terms of performance.</p> <pre class="lang-py prettyprint-override"><code>import re MAX_IPV6 = 2**128-1 DIGITS = set(&quot;0123456789abcdef&quot;) def parse_ipv6(ip: str) -&gt; int: assert isinstance(ip, str) and len(ip) &lt;= 39 segments = ip.lower().split(&quot;:&quot;) l, n, p, count, compressed = len(segments), 0, 7, 0, False last = l - 1 for i, s in enumerate(segments): assert count &lt;= 8 and len(s) &lt;= 4 and not set(s) - DIGITS if not s: if i in (0, last): continue assert not compressed p = l - i - 2 compressed = True else: n += int(s, 16) &lt;&lt; p*16 p -= 1 count += 1 return n def to_ipv6(n: int, compress=False) -&gt; str: assert isinstance(n, int) and 0 &lt;= n &lt;= MAX_IPV6 ip = '{:032_x}'.format(n).replace('_', ':') if not compress: return ip return re.sub('0{1,3}([\da-f]+)', '\\1', ip) </code></pre> <p>I am currently trying to implement compression, namely find the longest run of two alternating characters 0 and :, and replace the first occurrence of it with ::.</p> <p>For example, given this address: <code>'abcd:0:ef12::a:0:0'</code>, <code>parse_ipv6('abcd:0:ef12::a:0:0')</code> gives this number: <code>228362408209208931942454293848746098688</code>, but <code>to_ipv6(parse_ipv6('abcd:0:ef12::a:0:0'), 1)</code> gives this: <code>'abcd:0:ef12:0:0:a:0:0'</code>.</p> <p>As you see the result is not properly compressed.</p> <p>In short I want a regex pattern to be used with <code>re.findall</code> to find sequences like <code>[':0', ':0:', ':0:0:', '0:', '0:0:', '0:0:0:']</code>.</p> <p>I have Google searched this, and found many questions on this site with similar phrasing, but none of them solves my problem.</p> <p>I have tried this regex:</p> <pre><code>In [277]: ip Out[277]: 'abcd:0:ef12:0:0:a:0:0' In [278]: re.findall('((:0)+)|((0:)+)', ip) Out[278]: [(':0', ':0', '', ''), (':0:0', 
':0', '', ''), (':0:0', ':0', '', '')] </code></pre> <p>I was expecting <code>[':0:', ':0:0:', ':0:0']</code>.</p> <p>How to fix this?</p> <hr /> <p>Using the correct regex I updated my code to this:</p> <pre><code>EMPTY = re.compile(r':?\b(?:0\b:?)+') def to_ipv6(n: int, compress:bool=False) -&gt; str: assert isinstance(n, int) and 0 &lt;= n &lt;= MAX_IPV6 ip = '{:039_x}'.format(n).replace('_', ':') if not compress: return ip ip = ':'.join(s.lstrip('0') if s != '0000' else '0' for s in ip.split(':')) longest = max(EMPTY.findall(ip), key=len, default='') if len(longest) &gt; 2: ip = ip.replace(longest, '::', 1) return ip </code></pre> <p>It correctly compresses the given example:</p> <pre><code>In [334]: to_ipv6(228362408209208931942454293848746098688, True) Out[334]: 'abcd:0:ef12::a:0:0' </code></pre> <p>I stopped using <code>re.sub('0{1,3}([\da-f]+)', '\\1', ip)</code> because somehow it took 8 microseconds and some to complete. The comprehension is faster.</p> <hr /> <p>The original code in this question doesn't actually work, the parser correctly parses valid IPv6 addresses but fails to identify invalid IPv6 addresses, thus gives incorrect output when it should raise exceptions.</p> <p>And the formatter also doesn't work, it gives incorrect outputs as well, namely it can sometimes fail to choose the longest consecutive empty fields, which is wrong. And will raise exceptions when the address cannot be compressed.</p> <p><em><strong>DO NOT USE MY CODE IN YOUR PRODUCTION CODE</strong></em>, it is buggy and not well-tested.</p> <p>That said, I have fixed every bug I can find and made the code raise exceptions for all invalid inputs I can think of, a long time ago. But I didn't bother update the code in this question. Because I am lazy and there might still be edge cases I didn't catch.</p> <p>And today this question received an upvote so long after it was posted, an upvote it didn't deserve. 
So I felt compelled to update the question and edit the code, I fixed the formatter and not the parser, because the parser became too long, and I meant to discourage its use (well I would keep using my code, but nobody else should use it).</p>
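On the original regex question itself: `((:0)+)` repeats a *capturing* group, and `re.findall` only reports the last repetition of a repeated group, which is why `':0'` keeps appearing instead of the full run. Putting one capturing group around the whole repetition, with the inner group non-capturing, gives the expected list, at least for runs in the interior of the address:

```python
import re

ip = 'abcd:0:ef12:0:0:a:0:0'

# One capture around the whole repetition; (?:...) groups don't capture.
runs = re.findall(r'((?::0)+:?)', ip)
print(runs)                           # [':0:', ':0:0:', ':0:0']

# Longest run of empty groups, first occurrence, collapsed to '::'
longest = max(runs, key=len)
print(ip.replace(longest, '::', 1))   # abcd:0:ef12::a:0:0
```

A run at the very start of the address (e.g. `0:0:...`) needs an extra `^0` alternative; the word-boundary pattern `:?\b(?:0\b:?)+` used in the update covers both ends.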
<python><python-3.x><regex>
2023-04-30 10:48:35
2
3,930
Ξένη Γήινος
76,140,609
10,327,984
Error while training the generator in a text-to-image GAN model
<p>I am trying to train a GAN model that takes text embeddings and transforms them into an image. Due to a lack of resources, I loaded the generator on the GPU and the discriminator on the CPU, I delete every variable I no longer need in the training loop, and then I clear the memory. This is my train function:</p> <pre><code>import torch import torch.nn as nn import torch.optim as optim from torch.utils.data import DataLoader from torchvision.utils import save_image import torch.nn.functional as F from tqdm import tqdm def train_gan(generator, discriminator, dataset, batch_size, num_epochs, device): # Set up loss functions and optimizers adversarial_loss_generator = nn.BCELoss() adversarial_loss_discriminator=nn.BCELoss() generator_optimizer = optim.Adam(generator.parameters(), lr=0.0002, betas=(0.5, 0.999)) discriminator_optimizer = optim.Adam(discriminator.parameters(), lr=0.0002, betas=(0.5, 0.999)) # Set up data loader data_loader = DataLoader(dataset, batch_size=batch_size, shuffle=True) generator.to(device) discriminator.to(&quot;cpu&quot;) # Train the GAN for epoch in range(num_epochs): for i, data in enumerate(tqdm(data_loader)): # Load data and labels onto the device text_embeddings = data['text_embedding'].to(device) # Generate fake images using the generator and the text embeddings noise = torch.randn(batch_size,generator.latent_dim).to(device) fake_images = generator(text_embeddings,noise) del noise clear_memory() fake_images = F.interpolate(fake_images, size=(512, 512), mode='bilinear', align_corners=False) # Train the discriminator discriminator_optimizer.zero_grad() real_images = data['image'].to(&quot;cpu&quot;) real_labels = torch.ones(real_images.size(0), 1).to(&quot;cpu&quot;) text_embeddings=text_embeddings.to(&quot;cpu&quot;) real_predictions = discriminator(real_images, text_embeddings) del real_images clear_memory() real_predictions=real_predictions.to(&quot;cpu&quot;) real_labels= real_labels.to(&quot;cpu&quot;) real_loss =
adversarial_loss_discriminator(real_predictions, real_labels) real_loss=real_loss.to(&quot;cpu&quot;) fake_images=fake_images.to(&quot;cpu&quot;) text_embeddings=text_embeddings.to(&quot;cpu&quot;) fake_predictions = discriminator(fake_images, text_embeddings) fake_predictions=fake_predictions.to(&quot;cpu&quot;) fake_labels = torch.zeros(fake_images.size(0), 1).to(&quot;cpu&quot;) fake_loss = adversarial_loss_discriminator(fake_predictions, fake_labels) del fake_labels clear_memory() fake_loss=fake_loss.to (&quot;cpu&quot;) discriminator_loss = real_loss + fake_loss discriminator_loss=discriminator_loss.to(&quot;cpu&quot;) discriminator_loss.backward() discriminator_optimizer.step() # Train the generator generator_optimizer.zero_grad() #fake_predictions = discriminator(fake_images, text_embeddings) del text_embeddings del fake_images clear_memory() fake_predictions=fake_predictions.to(device) real_labels=real_labels.to(device) generator_loss = adversarial_loss_generator(fake_predictions, real_labels) generator_loss=generator_loss.to(device) generator_loss.backward() generator_optimizer.step() del fake_predictions del real_labels clear_memory() # Save generated images and model checkpoints every 500 batches if i % 100 == 0: with torch.no_grad(): noise = torch.randn(batch_size,generator.latent_dim).to(&quot;cpu&quot;) text_embeddings = data['text_embedding'].to(&quot;cpu&quot;) fake_images = generator(noise,text_embeddings) save_image(fake_images, f&quot;images\generated_images_epoch_{epoch}_batch_{i}.png&quot;, normalize=True, nrow=4) torch.save(generator.state_dict(), f&quot;models\generator_checkpoint_epoch_{epoch}_batch_{i}.pt&quot;) torch.save(discriminator.state_dict(), f&quot;models\discriminator_checkpoint_epoch_{epoch}_batch_{i}.pt&quot;) # Print loss at the end of each epoch print(f&quot;Epoch [{epoch+1}/{num_epochs}] Discriminator Loss: {discriminator_loss.item()}, Generator Loss: {generator_loss.item()}&quot;) </code></pre> <p>and for the function of 
clearing the memory</p> <pre><code>import gc def clear_memory(): gc.collect() torch.cuda.empty_cache() </code></pre> <p>in the backward pass for the generator I get the following error</p> <pre><code> in &lt;module&gt;:1 │ │ │ │ ❱ 1 train_gan(generator=generator, │ │ 2 │ │ discriminator=discriminator, │ │ 3 │ │ dataset=dataset, │ │ 4 │ │ batch_size=batch_size, │ │ │ │ in train_gan:71 │ │ │ │ 68 │ │ │ │ │ 69 │ │ │ generator_loss = adversarial_loss_generator(fake_predictions, real_labels) │ │ 70 │ │ │ generator_loss=generator_loss.to(device) │ │ ❱ 71 │ │ │ generator_loss.backward() │ │ 72 │ │ │ generator_optimizer.step() │ │ 73 │ │ │ del fake_predictions │ │ 74 │ │ │ del real_labels │ │ │ │ C:\Users\Mohamed Amine\Desktop\pytorch_env\torchvenv\lib\site-packages\torch\tensor.py:221 in │ │ backward │ │ │ │ 218 │ │ │ │ gradient=gradient, │ │ 219 │ │ │ │ retain_graph=retain_graph, │ │ 220 │ │ │ │ create_graph=create_graph) │ │ ❱ 221 │ │ torch.autograd.backward(self, gradient, retain_graph, create_graph) │ │ 222 │ │ │ 223 │ def register_hook(self, hook): │ │ 224 │ │ r&quot;&quot;&quot;Registers a backward hook. │ │ │ │ C:\Users\Mohamed │ │ Amine\Desktop\pytorch_env\torchvenv\lib\site-packages\torch\autograd\__init__.py:130 in backward │ │ │ │ 127 │ if retain_graph is None: │ │ 128 │ │ retain_graph = create_graph │ │ 129 │ │ │ ❱ 130 │ Variable._execution_engine.run_backward( │ │ 131 │ │ tensors, grad_tensors_, retain_graph, create_graph, │ │ 132 │ │ allow_unreachable=True) # allow_unreachable flag │ │ 133 │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time. 
</code></pre> <p>I tried changing the backward call to this, but nothing happened:</p> <pre><code>grads = torch.autograd.grad(generator_loss, generator.parameters(), create_graph=True)
# `grads` is a tuple of gradients for each parameter in `generator`

# Update the generator parameters
for i, param in enumerate(generator.parameters()):
    param.grad = grads[i]
</code></pre> <p>I tried putting <code>retain_graph=True</code> on the first backward call in the generator and it didn't work either; besides, <code>retain_graph=True</code> makes the training process slower and more resource-demanding, which I am trying to avoid. I read in a PyTorch forum thread that putting the loss, the model, and its complements on the same device solves the error, but nothing worked.</p>
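For comparison, a minimal sketch (tiny `Linear` layers stand in for the question's generator and discriminator, an assumption made purely to demonstrate the graph-handling pattern) of the usual fix: detach the fake batch for the discriminator step, then run a fresh forward pass through the discriminator for the generator step instead of reusing `fake_predictions` whose graph was already freed by `discriminator_loss.backward()`.

```python
import torch

torch.manual_seed(0)

# Stand-ins for the real generator/discriminator (assumption: toy models).
g = torch.nn.Linear(4, 4)
d = torch.nn.Linear(4, 1)
opt_g = torch.optim.SGD(g.parameters(), lr=0.1)
opt_d = torch.optim.SGD(d.parameters(), lr=0.1)
loss_fn = torch.nn.BCEWithLogitsLoss()

noise = torch.randn(8, 4)
real = torch.randn(8, 4)

# Discriminator step: the fake batch is detach()-ed, so backward() here
# never touches (and therefore never frees) the generator's graph.
opt_d.zero_grad()
fake = g(noise)
d_loss = (loss_fn(d(real), torch.ones(8, 1))
          + loss_fn(d(fake.detach()), torch.zeros(8, 1)))
d_loss.backward()
opt_d.step()

# Generator step: run a *fresh* forward pass through d on the live fake
# batch instead of reusing predictions whose graph was already consumed.
opt_g.zero_grad()
g_loss = loss_fn(d(fake), torch.ones(8, 1))
g_loss.backward()  # no "backward through the graph a second time" error
opt_g.step()
```

Note the sketch also keeps everything on one device; the question's code moves intermediate tensors between CPU and GPU mid-graph, which is a separate source of trouble.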
<python><pytorch><generative-adversarial-network>
2023-04-30 09:56:17
1
622
Mohamed Amine
76,140,411
14,459,677
Calculating specific cells based on cell condition
<p>I have:</p> <p>EXCEL 1:</p> <pre><code>NAME  Year  ACTIVITY  AMOUNT
AA    1     CC        140
AA    1     CC        150
AA    1     DD        80
</code></pre> <p>My desired output is:</p> <pre><code>NAME  ACTIVITY  YEAR  TOTAL AMOUNT
AA    CC        1     290
AA    DD        1     80
</code></pre> <p>So the output should calculate the total amount based on the variables shown above (rows must match on all the conditions for their amounts to be summed):</p> <pre><code>search_1 = 'AA'
search_2 = '1'
search_3 = 'CC'

for i in range (0,length):
    if search_1 == Name [i]:
        if YEAR[i] == search_2:
            if ACTIVITY[i] == search_3:
                TOTAL AMOUNT[i] == print(sum(df ['Amount']))
</code></pre> <p>The problem is, it keeps counting the total &quot;Amount&quot; column even when the rows have a different year (the result becomes 370). Can anyone help?</p>
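The row-by-row scan can be replaced by a single grouped aggregation. A sketch with pandas (column names are taken from the desired output; `df` is assumed to hold the Excel data, e.g. loaded via `pd.read_excel`):

```python
import pandas as pd

# Sample data mirroring EXCEL 1
df = pd.DataFrame({
    'NAME':     ['AA', 'AA', 'AA'],
    'YEAR':     [1, 1, 1],
    'ACTIVITY': ['CC', 'CC', 'DD'],
    'AMOUNT':   [140, 150, 80],
})

# One sum per (NAME, ACTIVITY, YEAR) combination -- rows with a different
# year (or name/activity) land in a different group automatically.
totals = (df.groupby(['NAME', 'ACTIVITY', 'YEAR'], as_index=False)['AMOUNT']
            .sum()
            .rename(columns={'AMOUNT': 'TOTAL AMOUNT'}))
```

Filtering for one specific combination afterwards is then a boolean mask on `totals` rather than a hand-written loop.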
<python><pandas><excel><if-statement>
2023-04-30 09:06:55
1
433
kiwi_kimchi
76,140,300
7,371,707
How to limit the memory usage of a Python program so that it turns to swap memory first?
<p>I am running a Python program that has a large memory footprint. I have limited physical memory (32 GB), but the swap space is large enough (128 GB). Can I limit the Python program to use less physical memory (say, up to 4 GB) and have it turn to swap for the rest?</p> <p>I am working on Ubuntu 20.04 with Python 3.8. Thanks!</p>
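For reference, a hedged standard-library sketch: `resource.setrlimit` can cap the process's address space on Unix, but note that allocations beyond the cap fail with `MemoryError` rather than being pushed to swap. Steering resident pages into swap is a kernel decision, typically controlled via cgroup v2 (`memory.high`) or something like `systemd-run --user -p MemoryHigh=4G python prog.py`.

```python
import resource

def cap_address_space(max_bytes):
    """Cap this process's virtual address space (Unix only).

    Allocations beyond the cap raise MemoryError; this does not by
    itself force existing pages into swap.
    """
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    if hard != resource.RLIM_INFINITY:
        max_bytes = min(max_bytes, hard)
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, hard))

# Example: cap at 4 TiB (a deliberately harmless demonstration value)
cap_address_space(4 * 1024**4)
```

For the stated goal (keep resident memory near 4 GB, overflow to swap), the cgroup route is the closer match; `RLIMIT_AS` is only a hard ceiling on allocation.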
<python><ubuntu><memory>
2023-04-30 08:43:16
1
1,029
ToughMind
76,139,962
2,889,716
Celery task stuck at PENDING
<p>This is my code: <code>celery_app.py</code></p> <pre class="lang-py prettyprint-override"><code>from celery import Celery

app = Celery(
    'tasks',                             # app name
    broker='redis://localhost:6379/0',   # broker URL
    backend='redis://localhost:6379/1',  # backend URL
    include=['tasks']                    # list of task modules
)

if __name__ == '__main__':
    app.start()
</code></pre> <p><code>tasks.py</code></p> <pre class="lang-py prettyprint-override"><code>from celery_app import app

@app.task
def add(x, y):
    return x + y

result = add.delay(2, 5)
print(result.status)
result.get()
</code></pre> <p>I run celery using this command:</p> <pre><code>celery -A tasks worker --loglevel=DEBUG
</code></pre> <p>And the output is:</p> <pre><code>PENDING
</code></pre> <p>I can see after running this code some new keys are created in redis. For more information this is my docker-compose for redis:</p> <pre class="lang-yaml prettyprint-override"><code>version: '3.7'

services:
  redis:
    image: redis:latest
    ports:
      - &quot;6379:6379&quot;

  redisinsight:
    image: redislabs/redisinsight:latest
    ports:
      - &quot;8001:8001&quot;
    environment:
      - REDISINSIGHT_REDIS_HOSTS=redis:6379
    depends_on:
      - redis

  flower:
    image: mher/flower:0.9.7
    command: ['flower', '--broker=redis://redis:6379', '--port=5555']
    ports:
      - 5557:5555
    depends_on:
      - redis
</code></pre> <p>What's wrong?</p>
<python><celery>
2023-04-30 07:05:20
0
4,899
ehsan shirzadi
76,139,913
10,569,922
Plotly Scatter Plot Gap in categorical y-axis
<p>I can't find the information I need in the <a href="https://plotly.com/python/reference/" rel="nofollow noreferrer">Plotly Python reference</a>. Here is my code:</p> <pre><code>fig_scatter = px.scatter(df_lv, x='frame', y='username', color='label_name',
                         symbol='label_name', title=f'{task_option}')  # size='count_tag',
fig_scatter.update_traces(marker_size=10)
fig_scatter.update_layout(scattermode=&quot;group&quot;, scattergap=0.75,
                          xaxis_range=[0, tasks_labels_df.frame.max()])
</code></pre> <p>That is my output: <a href="https://i.sstatic.net/wq6IZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wq6IZ.png" alt="enter image description here" /></a></p> <p>I want less of a gap between the two categorical variables on the y-axis. Interactively I can drag the plot to reduce the space between the y-axis categories, but how do I set this by default? The plot that I want:</p> <p><a href="https://i.sstatic.net/l3CcN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l3CcN.png" alt="enter image description here" /></a></p>
<python><plotly><visualization>
2023-04-30 06:51:32
1
521
TeoK
76,139,892
11,099,842
How to use a PDF map with plotly?
<p>I want the parameter that allows me to update the layout with a PDF map instead of an open-street-map for <code>plotly.express.scatter_mapbox</code>.</p> <p>I've searched through the documentation of Plotly, and searched online but I couldn't find the answer.</p> <p>The line within comments below is the one that I want to change.</p> <pre><code>import pandas as pd
import plotly.express as px

us_cities = pd.read_csv(&quot;https://raw.githubusercontent.com/plotly/datasets/master/us-cities-top-1k.csv&quot;)

fig = px.scatter_mapbox(us_cities, lat=&quot;lat&quot;, lon=&quot;lon&quot;, hover_name=&quot;City&quot;,
                        hover_data=[&quot;State&quot;, &quot;Population&quot;],
                        color_discrete_sequence=[&quot;fuchsia&quot;], zoom=3, height=600)

# --------------------------------------------------
# Change this line to include PDF map instead of &quot;open-street-map&quot;
fig.update_layout(mapbox_style=&quot;open-street-map&quot;)
# ---------------------------------------------------
fig.update_layout(margin={&quot;r&quot;:0,&quot;t&quot;:0,&quot;l&quot;:0,&quot;b&quot;:0})
fig.show()
</code></pre>
<python><plotly><gis>
2023-04-30 06:45:35
1
891
Al-Baraa El-Hag
76,139,800
6,455,731
How to dynamically assign a function signature?
<p>I would like to create a decorator for dynamically assigning function signatures to decorated functions, something like this:</p> <pre><code>import inspect

signature = inspect.Signature([
    inspect.Parameter('x', kind=inspect.Parameter.KEYWORD_ONLY, default=None),
    inspect.Parameter('y', kind=inspect.Parameter.KEYWORD_ONLY, default=None),
    inspect.Parameter('z', kind=inspect.Parameter.KEYWORD_ONLY, default=None)])

def with_signature(signature:inspect.Signature):
    def _decor(f):
        def _wrapper():
            ...
        return _wrapper
    return _decor

@with_signature(signature)
def fun():
    return [x, y, z]
</code></pre> <p><code>fun</code> should now have a signature as if it had been defined literally with <code>def fun(x=None, y=None, z=None)</code>. Also type hints in the signature would be great.</p> <p>How to go about this?</p> <h1>Edit</h1> <p>I came up with a somewhat kludgy solution:</p> <pre><code>def with_signature(signature:inspect.Signature):
    def _decor(f):
        @functools.wraps(f)
        def _wrapper(**kwargs):
            signature.bind(**kwargs)
            _f = types.FunctionType(
                f.__code__,
                kwargs
            )
            return _f()
        return _wrapper
    return _decor
</code></pre> <p>This allows me to write:</p> <pre><code>signature = inspect.Signature([
    inspect.Parameter('x', kind=inspect.Parameter.KEYWORD_ONLY, default=None),
    inspect.Parameter('y', kind=inspect.Parameter.KEYWORD_ONLY, default=None),
    inspect.Parameter('z', kind=inspect.Parameter.KEYWORD_ONLY, default=None)])

@with_signature(signature)
def fun():
    return [x, y, z]

print(fun(x=1, z=3, y=2))
</code></pre> <p>The idea is to check <code>**kwargs</code> with <code>signature.bind</code>, then create a function and provide <code>**kwargs</code> as <code>globals</code>. This is obviously pretty ugly, and in fact doesn't do what I want, i.e. actually change the decorated function's signature. So I'd appreciate any help.</p> <h1>Edit 2:</h1> <p>The whole endeavour seems rather unpythonic to me now.
What I actually want is to provide a definite function interface, so using <code>**kwargs</code> with <code>typing_extension.Unpack</code> and a <code>typing.TypedDict</code> might be a good way to go.</p> <pre><code>from typing import TypedDict
from typing_extensions import Unpack

class Signature(TypedDict):
    x: int = 0
    y: int = 0
    z: int = 0

def fun(**kwargs: Unpack[Signature]):
    ...
</code></pre> <p>I think I'll use that for now, but still the OP problem seems interesting to me, so I'm still looking forward to suggestions and ideas.</p>
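For the record, `inspect.signature()` (and most introspection tooling) honours a `__signature__` attribute on the wrapper, so the decorator can both validate calls and advertise the desired signature without the `FunctionType` kludge. A sketch; here the wrapped function declares its parameters explicitly instead of reading them from globals, which is an assumption, not the question's exact setup:

```python
import functools
import inspect

def with_signature(signature: inspect.Signature):
    """Validate calls against `signature` and advertise it to tooling."""
    def _decor(f):
        @functools.wraps(f)
        def _wrapper(*args, **kwargs):
            # Reject calls that don't fit the desired signature,
            # then forward the bound values as keyword arguments.
            bound = signature.bind(*args, **kwargs)
            bound.apply_defaults()
            return f(**bound.arguments)
        # inspect.signature(), help(), IDEs etc. pick this up.
        _wrapper.__signature__ = signature
        return _wrapper
    return _decor

signature = inspect.Signature([
    inspect.Parameter(name, kind=inspect.Parameter.KEYWORD_ONLY, default=None)
    for name in 'xyz'])

@with_signature(signature)
def fun(x=None, y=None, z=None):
    return [x, y, z]
```

Type hints can be attached the same way via the `annotation=` argument of `inspect.Parameter`.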
<python><functional-programming><signature>
2023-04-30 06:14:19
2
964
lupl
76,139,521
11,198,558
How can I format the number in a Plotly Indicator?
<p>I'm now plotting <code>go.Indicator</code> and face a problem with the number format. Specifically, my code is as below:</p> <pre><code>fig.add_trace(
    go.Indicator(
        mode = &quot;delta&quot;,
        value = df[df.columns[2]].values[-1],
        domain = {'row': 1, 'column': 1},
        delta = {&quot;reference&quot;: df[df.columns[2]].values[-13],
                 &quot;valueformat&quot;: &quot;f&quot;,
                 &quot;suffix&quot;: &quot;%&quot;},
    ),
    row = 2,
    col = 1
)
</code></pre> <p>And the figure now looks like:</p> <p><a href="https://i.sstatic.net/dnw5X.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dnw5X.png" alt="go.Indicator" /></a></p> <p>The raw number equals 0.999717. However, I would like to show only <code>0.99</code> instead. I have tried changing the <code>&quot;valueformat&quot;</code> to <code>&quot;.2f&quot;</code> or <code>&quot;2f&quot;</code>, but that makes an even longer number. How can I change the <code>&quot;valueformat&quot;</code>?</p>
<python><plotly>
2023-04-30 04:18:16
1
981
ShanN
76,139,511
18,758,062
SSH using Python Paramiko: How to detect SSH Server responding with string on successful connection
<p>When I connect to my Ubuntu server via SSH using the terminal, it sometimes returns a certain string and then closes the connection.</p> <pre><code>$ ssh hello@my.server.com -i ~/.ssh/id_ed25519
container not found
Connection to 123.10.10.231 closed.
Connection to my.server.com closed.
</code></pre> <p>Using <code>paramiko</code> in my Python script to connect to this server, how do I detect when this is happening?</p> <p>Currently I'm doing the following to make the connection:</p> <pre><code>import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy)
ssh.connect(&quot;my.server.com&quot;, 22, userfoo, key_filename=&quot;/home/gameveloster/.ssh/id_ed25519&quot;)

stdin, stdout, stderr = ssh.exec_command('ls')
print(stdout.readlines(), stderr.readlines())
</code></pre> <p>and getting the following output even when the server is not closing the SSH connection immediately on connecting:</p> <pre><code>DEBUG:paramiko.transport:[chan 0] EOF sent (0)
[&quot;Error: Your SSH client doesn't support PTY\n&quot;] []
</code></pre>
<python><python-3.x><ssh><paramiko>
2023-04-30 04:14:52
2
1,623
gameveloster
76,139,482
11,901,732
Find pairs of words in dictionary in Python
<p>I want to find pairs of words in a file called <code>dictionary.txt</code> based on given strings of <strong>distinct</strong> letters such as <code>GRIHWSNYP</code>.</p> <p><code>dictionary.txt</code> looks like this:</p> <pre><code>AARHUS
AARON
ABABA
ABACK
ABAFT
ABANDON
ABANDONED
ABANDONING
ABANDONMENT
ABANDONS
ABASE
ABASED
...
</code></pre> <p>Some examples of expected output:</p> <pre><code>&gt;&gt;&gt; f('ABCDEFGH')
There is no solution.

&gt;&gt;&gt; f('GRIHWSNYP')
The pairs of words using all (distinct) letters in &quot;GRIHWSNYP&quot; are:
('SPRING', 'WHY')

&gt;&gt;&gt; f('ONESIX')
The pairs of words using all (distinct) letters in &quot;ONESIX&quot; are:
('ION', 'SEX')
('ONE', 'SIX')

&gt;&gt;&gt; f('UTAROFSMN')
The pairs of words using all (distinct) letters in &quot;UTAROFSMN&quot; are:
('AFT', 'MOURNS')
('ANT', 'FORUMS')
('ANTS', 'FORUM')
('ARM', 'FOUNTS')
('ARMS', 'FOUNT')
('AUNT', 'FORMS')
('AUNTS', 'FORM')
('AUNTS', 'FROM')
('FAN', 'TUMORS')
('FANS', 'TUMOR')
('FAR', 'MOUNTS')
('FARM', 'SNOUT')
('FARMS', 'UNTO')
('FAST', 'MOURN')
('FAT', 'MOURNS')
('FATS', 'MOURN')
('FAUN', 'STORM')
('FAUN', 'STROM')
('FAUST', 'MORN')
('FAUST', 'NORM')
('FOAM', 'TURNS')
('FOAMS', 'RUNT')
('FOAMS', 'TURN')
('FORMAT', 'SUN')
('FORUM', 'STAN')
('FORUMS', 'NAT')
('FORUMS', 'TAN')
('FOUNT', 'MARS')
('FOUNT', 'RAMS')
('FOUNTS', 'RAM')
('FUR', 'MATSON')
('MASON', 'TURF')
('MOANS', 'TURF')
</code></pre> <p>I tried:</p> <pre><code>dictionary = 'dictionary.txt'
solutions = []

clean_dictionary = [i for i in dictionary if len(i) == len(set(i))]  # Filter out words without repeated letters.
for word in clean_dictionary:
    if not set(word) - set(letters):
        first_word = word
        second_letters = set(letters) - set(word)
        for second in clean_dictionary:
            if set(second) == set(second_letters):
                second_word = second
                pair = tuple(sorted((first_word, second_word)))
                solutions.append(pair)

solutions = sorted(set(solutions))

if not solutions:
    print('There is no solution.')
else:
    print(f'The pairs of words using all (distinct) letters '
          f'in &quot;{letters}&quot; are:')
    for solution in solutions:
        print(solution)
</code></pre> <p>which seemed to work, but quite slowly. I am wondering how I can improve the code to make it more efficient, or whether there is a better way to do this. Any help appreciated.</p>
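The quadratic inner loop can be avoided by grouping candidate words by their letter set and looking up each word's complement directly, one dictionary lookup instead of a full scan. A sketch (the word list is supplied inline here rather than read from dictionary.txt):

```python
from collections import defaultdict

def find_pairs(words, letters):
    """Sorted pairs (w1, w2) whose combined distinct letters equal `letters`."""
    target = frozenset(letters)
    # Group candidate words (no repeated letters, subset of target) by letter set.
    by_letters = defaultdict(list)
    for w in words:
        s = frozenset(w)
        if len(s) == len(w) and s <= target:
            by_letters[s].append(w)
    pairs = set()
    for s, group in by_letters.items():
        # The partner word must use exactly the remaining letters.
        for other in by_letters.get(target - s, []):
            for w in group:
                if w != other:
                    pairs.add(tuple(sorted((w, other))))
    return sorted(pairs)
```

Each word's complement set is computed once, so the overall cost is roughly linear in the dictionary size (plus the cost of emitting the pairs), instead of quadratic.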
<python><list><algorithm><tuples>
2023-04-30 04:04:15
1
5,315
nilsinelabore
76,139,185
5,049,813
How to compare two Pandas Dataframes with text, numerical, and None values
<p>I have two dataframes <code>df1</code> and <code>df2</code> that both contain text and numerical data in addition to <code>None</code>s as well. However, <code>df1</code> has integer numbers, and <code>df2</code> has float numbers.</p> <p>I've tried comparing their equality with <code>df1.equals(df2)</code>, but this fails because of the type differences (integers vs. floats). I've also tried doing <code>np.allclose(df1, df2, equal_nan=True)</code> but this fails with <code>TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''</code> (I imagine this is because of the text data).</p> <p>How can I check to see if <code>df1</code> and <code>df2</code> have the same data?</p>
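A hedged sketch of one approach: `pandas.testing.assert_frame_equal` with `check_dtype=False` tolerates the int-vs-float difference while still comparing values (including the positions of `None`/`NaN`); wrapping it turns the assertion into a boolean check.

```python
import pandas as pd

def frames_match(a, b):
    """True if a and b hold the same data, ignoring int/float dtype differences."""
    try:
        pd.testing.assert_frame_equal(a, b, check_dtype=False)
        return True
    except AssertionError:
        return False

# Toy frames standing in for the question's data: df1 has integers,
# df2 has floats, both carry text and None.
df1 = pd.DataFrame({'n': [1, 2], 'txt': ['a', None]})
df2 = pd.DataFrame({'n': [1.0, 2.0], 'txt': ['a', None]})
```

`assert_frame_equal` also reports *where* the frames differ, which is often more useful than a bare `False` when debugging.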
<python><pandas><dataframe><equality>
2023-04-30 01:26:47
2
5,220
Pro Q
76,139,175
17,741,308
Visual Studio Code Debug Python Redirecting Input from Text File in Virtual Environment
<p>I am following the code mentioned in <a href="https://stackoverflow.com/questions/65165574/debugging-a-python-file-in-vs-code-with-input-text-file">debugging a python file in vs code with input text file</a>. To reproduce my error:</p> <ol> <li>Create a new pip virtual environment.</li> <li>Create a new folder called <code>project</code>.</li> <li>Inside <code>project</code>, create a simple &quot;sort two numbers&quot; Python file <code>hello.py</code>:</li> </ol> <pre><code>line = input()
a,b = line.split()
a = int(a)
b = int(b)
if a &lt;= b:
    print(a)
    print(b)
else:
    print(b)
    print(a)
</code></pre> <p>(If possible, I prefer answers that do not modify this .py file because I am actually working on problems like the ones in kattis. The code passes the test of <a href="https://open.kattis.com/problems/sorttwonumbers" rel="nofollow noreferrer">https://open.kattis.com/problems/sorttwonumbers</a>.) As my goal is to redirect input from a text file, I create the following <code>input.txt</code>, also in <code>project</code>:</p> <pre><code>987 23
</code></pre> <p>(This input.txt is legitimate: redirecting input from it in PyCharm, for example, makes <code>hello.py</code> run correctly.) By setting the Python interpreter to the virtual environment, we can see that <code>hello.py</code> runs in the virtual environment when the user types the numbers in the command line, because the green environment name shows up in the prompt.</p> <ol start="4"> <li>Create a folder <code>.vscode</code> in <code>project</code>, in which we create <code>launch.json</code>. Here are my two attempts.
The first attempt quotes <a href="https://code.visualstudio.com/docs/editor/debugging#_redirect-inputoutput-tofrom-the-debug-target" rel="nofollow noreferrer">https://code.visualstudio.com/docs/editor/debugging#_redirect-inputoutput-tofrom-the-debug-target</a>:</li> </ol> <pre><code>{
    &quot;version&quot;: &quot;0.2.0&quot;,
    &quot;configurations&quot;: [
        {
            &quot;name&quot;: &quot;PYTHON REDIRECT INPUT&quot;,
            &quot;type&quot;: &quot;python&quot;,
            &quot;request&quot;: &quot;launch&quot;,
            &quot;program&quot;: &quot;${workspaceFolder}\\hello&quot;,
            &quot;console&quot;: &quot;externalTerminal&quot;,
            &quot;args&quot;: [&quot;&lt;&quot;, &quot;${workspaceFolder}\\input.txt&quot;],
        }
    ]
}
</code></pre> <p>Whether I use <code>hello</code> or <code>hello.py</code> in <code>program</code>, and whether I use <code>externalTerminal</code> or <code>integratedTerminal</code> in <code>console</code>, it won't respond when I click Run in the left-side menu of Visual Studio Code. In the end, it always gives:</p> <p><a href="https://i.sstatic.net/y0Qh7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y0Qh7.png" alt="enter image description here" /></a></p> <p>My second attempt deletes the &quot;&lt;&quot; symbol in the args:</p> <pre><code>{
    &quot;version&quot;: &quot;0.2.0&quot;,
    &quot;configurations&quot;: [
        {
            &quot;name&quot;: &quot;PYTHON REDIRECT INPUT&quot;,
            &quot;type&quot;: &quot;python&quot;,
            &quot;request&quot;: &quot;launch&quot;,
            &quot;program&quot;: &quot;${workspaceFolder}\\hello.py&quot;,
            &quot;console&quot;: &quot;integratedTerminal&quot;,
            &quot;args&quot;: [&quot;${workspaceFolder}\\input.txt&quot;],
        }
    ]
}
</code></pre> <p>I have also changed the external terminal to &quot;integratedTerminal&quot;, used <code>hello.py</code> instead of <code>hello</code>, and deleted &quot;&lt;&quot; in args. Consequently, there is no longer any error message showing up when running the code, and no more error windows popping up.
The code runs while stopping at all breakpoints, if I manually give the right inputs. That is, the <code>input.txt</code> does not participate in the debugging. So the problem remains unresolved.</p>
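If shell redirection can't be made to work in launch.json, a last-resort workaround (it does touch the script, which the question prefers to avoid) is to let the program swap its own stdin when a file argument is present; the debug configuration then only needs `"args": ["input.txt"]` with no `<`. A sketch with a hypothetical helper name:

```python
import sys

def redirect_stdin(path):
    """Swap sys.stdin for a file; input() then reads from that file."""
    sys.stdin = open(path, encoding="utf-8")

def solve():
    # Same logic as hello.py: read two numbers, return them sorted.
    a, b = sorted(map(int, input().split()))
    return a, b
```

In hello.py one would call `redirect_stdin(sys.argv[1])` only when an argument is present, so the Kattis submission path (plain stdin) is unaffected.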
<python><python-3.x><visual-studio-code><debugging>
2023-04-30 01:23:54
1
364
温泽海
76,139,171
7,599,062
Secure Password Storage in Flask-based RESTful API using Python
<p>I am building a Flask-based RESTful API in Python, and I need to securely store and encrypt user passwords for authentication. Currently, I am using a simple hashing algorithm to hash passwords with a salt, but I'm not sure if this is secure enough.</p> <p>Here is an example of my current implementation for password hashing:</p> <pre><code>import hashlib

password = 'password123'
salt = 'somesalt'

hashed_password = hashlib.sha256((password + salt).encode('utf-8')).hexdigest()
print(hashed_password)
</code></pre> <p>Can anyone suggest a more secure way to store and encrypt passwords for user authentication in a Flask-based API? Specifically, I would like to know which password hashing algorithm to use and how to use it in a Flask application. Any advice or suggestions would be greatly appreciated.</p>
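Plain salted SHA-256 is too fast to resist offline brute force; the usual recommendation is a slow, salted key-derivation function. A standard-library sketch with `hashlib.pbkdf2_hmac` and a per-user random salt (in a real Flask app one would typically just use `werkzeug.security.generate_password_hash` / `check_password_hash`, which implement the same idea):

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # tune to your hardware; higher is slower for attackers too

def hash_password(password):
    salt = os.urandom(16)  # fresh random salt per user, never reused
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, ITERATIONS)
    # Store algorithm, work factor and salt alongside the digest.
    return f'pbkdf2_sha256${ITERATIONS}${salt.hex()}${digest.hex()}'

def verify_password(password, stored):
    _algo, iters, salt_hex, digest_hex = stored.split('$')
    check = hashlib.pbkdf2_hmac('sha256', password.encode(),
                                bytes.fromhex(salt_hex), int(iters))
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(check, bytes.fromhex(digest_hex))
```

Storing the iteration count in the token lets old hashes keep verifying after the work factor is raised for new accounts.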
<python><security><flask><restful-authentication>
2023-04-30 01:22:53
1
543
SyntaxNavigator
76,139,126
13,849,446
Selenium cannot send keys to Google Maps search input
<p>I am trying to send keys to Google Maps to perform a search, but Google Maps only accepts keys when the browser screen is in focus. I have tried clicking the input box, maximizing the window, action chains, and scrolling into view. The following is the code.</p> <pre class="lang-py prettyprint-override"><code>import time

from undetected_chromedriver import Chrome, ChromeOptions
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.by import By

&quot;&quot;&quot;Initialize and return the drivers&quot;&quot;&quot;
chrome_options = ChromeOptions()
chrome_options.add_argument('--incognito')
chrome_options.add_argument(&quot;--disable-blink-features=AutomationControlled&quot;)
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument(&quot;--mute-audio&quot;)
chrome_options.add_argument(&quot;--window-size=1400,1000&quot;)
chrome_options.add_argument('user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36')

driver_installation = ChromeDriverManager().install()
service = Service(driver_installation)
service.service_args = chrome_options
driver = Chrome(service=service)

driver.get(&quot;https://maps.google.com&quot;)
time.sleep(3)

address = &quot;&quot;&quot;Rett Syndrome Research Trust Inc
67 Under Cliff Rd
Trumbull, CT 6611&quot;&quot;&quot;

map_input = driver.find_element(By.ID, &quot;searchboxinput&quot;)
driver.maximize_window()
driver.execute_script(&quot;arguments[0].scrollIntoView(false);&quot;, map_input)
driver.execute_script(&quot;arguments[0].click()&quot;, map_input)
map_input.send_keys(address)

actions = ActionChains(driver)
actions.click(map_input).send_keys(address).perform()

driver.quit()
</code></pre> <p>None of the above works. Thanks for any help in advance.</p>
<python><python-3.x><selenium-webdriver><selenium-chromedriver>
2023-04-30 00:59:24
1
1,146
farhan jatt
76,139,012
12,369,606
Pandas groupby, keep most common value but drop if tie
<p>This is directly based off of the question <a href="https://stackoverflow.com/questions/15222754/groupby-pandas-dataframe-and-select-most-common-value/74900139#comment134266608_74900139">GroupBy pandas DataFrame and select most common value</a>. My goal is to groupby one column, and keep the value that occurs most often in a second column. The solution I am using is <code>df.groupby(['col1'])['col2'].agg(lambda x: x.value_counts().index[0])</code> I would like to expand on this solution to drop any entries where there is a tie.</p>
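One way to extend that aggregation: have the aggregating function return a missing value when the top two counts tie, then drop those groups afterwards. A sketch with hypothetical sample data:

```python
import pandas as pd

def mode_or_na(s):
    """Most common value of s, or <NA> when the top count is tied."""
    counts = s.value_counts()  # sorted by count, descending
    if len(counts) > 1 and counts.iloc[0] == counts.iloc[1]:
        return pd.NA           # tie between the two most common values
    return counts.index[0]

df = pd.DataFrame({'col1': ['a', 'a', 'a', 'b', 'b'],
                   'col2': [1, 1, 2, 3, 4]})

# group 'a' keeps its clear winner (1); group 'b' ties (3 vs 4) and is dropped
result = df.groupby('col1')['col2'].agg(mode_or_na).dropna()
```

Returning `pd.NA` instead of raising keeps the `agg` call simple, and `dropna()` does the actual filtering in one pass.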
<python><pandas><group-by>
2023-04-30 00:05:38
1
504
keenan
76,138,816
2,647,447
How to remove "Python" from tk menu?
<p>I am trying to build a simple Python desktop application using tkinter. I want to have &quot;Release&quot; and &quot;Module&quot; in my dropdown menu. The issue is, there is always an extra menu, &quot;Python&quot;, in it. How do I remove that? (My code is below):</p> <pre><code>root = Tk()

myLabel = Label(root, text=&quot;Welcome!&quot;)
myLabel.pack()

menu = Menu(root)
root.config(menu=menu)

myButton = Button(root, text=&quot;Build DashBoard&quot;, command=myClick)
myButton.pack()

subMenu = Menu(menu)
menu.add_cascade(label=&quot;Release&quot;, menu=subMenu)
subMenu.add_command(label=&quot;Now Project.....&quot;, command=doNothing)
subMenu.add_command(label=&quot;New.....&quot;, command=doNothing)
subMenu.add_separator()
subMenu.add_command(label=&quot;Exit&quot;, command=doNothing)
</code></pre> <p><a href="https://i.sstatic.net/7LsUA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7LsUA.png" alt="enter image description here" /></a></p>
<python><tkinter>
2023-04-29 22:53:33
1
449
PChao
76,138,764
44,330
Since timezone info is deprecated by numpy.datetime64 constructor, what's the right way to convert ISO 8601 datestrings?
<p>I have a timeseries with a large number of ISO 8601 datestrings and would like to convert to <code>numpy.datetime64</code>. But I get a warning:</p> <pre><code>&gt;&gt;&gt; import numpy as np
&gt;&gt;&gt; t = np.datetime64('2022-05-01T00:00:00-07:00')
&lt;stdin&gt;:1: DeprecationWarning: parsing timezone aware datetimes is deprecated; this will raise an error in the future
</code></pre> <p><strong>OK, so how do I avoid the warning?</strong> (Without giving up a significant performance factor)</p> <p>(Presumably the way to do this is convert to UTC and <em>then</em> create a <code>datetime64</code> object, but the whole point of using <code>datetime64</code> was that it worked directly on ISO 8601 strings very fast.)</p> <p><a href="https://numpy.org/doc/stable/reference/arrays.datetime.html" rel="nofollow noreferrer">The documentation mentions this issue</a> but doesn't suggest any solution:</p> <blockquote> <p>Deprecated since version 1.11.0: NumPy does not store timezone information. For backwards compatibility, datetime64 still parses timezone offsets, which it handles by converting to UTC±00:00 (Zulu time). This behaviour is deprecated and will raise an error in the future.</p> </blockquote>
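One warning-free route is to do the timezone handling in `datetime.fromisoformat` (which parses the offset) and hand numpy a naive UTC datetime; for large arrays, `pandas.to_datetime(strings, utc=True)` followed by `.to_numpy()` vectorises the same conversion and is usually the faster path. A sketch of the scalar version, assuming every input carries an explicit offset:

```python
import numpy as np
from datetime import datetime, timezone

def iso_to_datetime64(s):
    """Parse an ISO 8601 string with offset into a naive-UTC datetime64."""
    dt = datetime.fromisoformat(s)      # keeps the -07:00 offset
    dt = dt.astimezone(timezone.utc)    # normalise to UTC
    # Dropping tzinfo before handing off means numpy never sees a
    # timezone-aware value, so no DeprecationWarning is emitted.
    return np.datetime64(dt.replace(tzinfo=None))
```

A naive input string would be interpreted as local time by `astimezone`, so mixed naive/aware data needs an extra check before this step.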
<python><numpy>
2023-04-29 22:38:19
3
190,447
Jason S
76,138,703
1,039,860
Issues with regular expressions in Python
<p>I am trying to replace a string with another.</p> <p>I have two strings I am looping over:</p> <pre class="lang-none prettyprint-override"><code>1: &quot;I have a sentence with this_is_a_test in it.&quot;
2: &quot;I have another sentance with this_is_a_test_also in it&quot;.
</code></pre> <p>I want to replace <code>this_is_a_test</code> in the first sentence with <code>hello</code> so it reads:</p> <pre class="lang-none prettyprint-override"><code>&quot;I have a sentence with hello in it.&quot;
</code></pre> <p>I do NOT want to touch the second sentence.</p> <p>Here is what I have:</p> <pre class="lang-py prettyprint-override"><code>def exact_replace(string: str, old_word: str, new_word: str) -&gt; str:
    &quot;&quot;&quot;
    Finds an exact match of old_word in string and replaces it with new_word.
    :param string: original source string
    :param new_word: new string replacement
    :param old_word: string to search for in source
    :return:
    &quot;&quot;&quot;
    pattern = re.compile(f&quot;\\b{old_word}\\b&quot;)
    new_string = pattern.sub(new_word, string)
    return new_string
</code></pre> <p>Unfortunately it matches in both strings.</p>
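For what it's worth, the pattern as written should already leave the second sentence alone: `_` is a word character, so `\b` cannot match between `test` and `_also`. Adding `re.escape` hardens the function against metacharacters in `old_word`; this sketch demonstrates the expected behaviour:

```python
import re

def exact_replace(string, old_word, new_word):
    # re.escape guards against regex metacharacters in old_word; \b only
    # matches between a word char and a non-word char, and "_" counts as
    # a word char, so this_is_a_test_also cannot match \bthis_is_a_test\b.
    pattern = re.compile(rf"\b{re.escape(old_word)}\b")
    return pattern.sub(new_word, string)

s1 = "I have a sentence with this_is_a_test in it."
s2 = "I have another sentance with this_is_a_test_also in it."
```

If both strings really are being changed, it is worth checking that the loop isn't writing the substituted result back over both entries.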
<python><regex>
2023-04-29 22:20:04
0
1,116
jordanthompson
76,138,612
9,297,170
Django SearchRank not taking full text search operators into account
<p>I'm trying to add a new endpoint that does full-text search with <code>AND, OR, NOT</code> operators and also tolerates typos with <code>TriagramSimilarity</code>.</p> <p>I came across this question: <a href="https://stackoverflow.com/questions/37859960/combine-trigram-with-ranked-searching-in-django-1-10">Combine trigram with ranked searching in django 1.10</a> and was trying to use that approach but <code>SearchRank</code> is not behaving as I'd expect, and I'm confused about how it works.</p> <p>When my code looks like the basic implementation of full-text search the negative filter is working fine:</p> <pre class="lang-py prettyprint-override"><code>    @action(detail=False, methods=[&quot;get&quot;])
    def search(self, request, *args, **kwargs):
        search_query = request.query_params.get(&quot;search&quot;)
        vector = SearchVector(&quot;name&quot;, weight=&quot;A&quot;)
        query = SearchQuery(search_query, search_type=&quot;websearch&quot;)
        qs = Project.objects.annotate(
            search=vector,
        ).filter(
            search=query,
        )
        return Response({
            &quot;results&quot;: qs.values()
        })
</code></pre> <p><a href="https://i.sstatic.net/GxOwq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GxOwq.png" alt="the returned documents" /></a></p> <p>But I need to implement this using <code>SearchRank</code> so I can later do some logic with the rank score and the similarity score.</p> <p>This is what my code looks like annotating for rank instead of using the tsvector annotation:</p> <pre class="lang-py prettyprint-override"><code>    @action(detail=False, methods=[&quot;get&quot;])
    def search(self, request, *args, **kwargs):
        search_query = request.query_params.get(&quot;search&quot;)
        vector = SearchVector(&quot;name&quot;, weight=&quot;A&quot;)
        query = SearchQuery(search_query, search_type=&quot;websearch&quot;)
        rank = SearchRank(vector, query, cover_density=True)
        qs = Project.objects.annotate(
            rank=rank,
        ).order_by(&quot;-rank&quot;)
        return Response({
            &quot;results&quot;: qs.values()
        })
</code></pre> <p>And the response looks like: <a href="https://i.sstatic.net/8r4CI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8r4CI.png" alt="The documents I got back" /></a></p> <p>The rank given to the document named &quot;APT29 Attack Graph&quot; is 1. I'd expect the <code>-</code> operator to rank it lower, ideally 0.</p> <p>Does SearchRank not take any search operators into consideration?</p> <p>This is what the PostgreSQL plan looks like for the queryset:</p> <pre><code>Sort  (cost=37.78..37.93 rows=62 width=655)
  Sort Key: (ts_rank_cd(setweight(to_tsvector(COALESCE(name, ''::text)), 'A'::&quot;char&quot;), websearch_to_tsquery('apt29 -graph'::text))) DESC
  -&gt;  Seq Scan on firedrill_project  (cost=0.00..35.93 rows=62 width=655)
</code></pre> <p>Also, if there is a better way to do this kind of search without introducing new dependencies (Elasticsearch, haystack, etc.), please reference it.</p> <p>I tried different search operators and looked for alternative ways to do this; I had no success so far.</p>
<python><django><postgresql><django-rest-framework><full-text-search>
2023-04-29 21:52:53
2
580
Gerardo Sabetta
76,138,498
5,306,861
Understanding why OpenCV's KNearest gives these results
<p>Below is the code I run in Python. When <code>K</code> = <code>1</code> the output looks correct, but why, when <code>K</code> = <code>3</code>, is the result <code>4</code> and not <code>5</code>, when there is a row that matches exactly what we were looking for?</p> <pre><code>import cv2 as cv
import numpy as np

trainFeaturesData = [
    [2,2,2,2],
    [3,3,3,3],
    [4,4,4,4],
    [5,5,5,5],
    [6,6,6,6],
    [7,7,7,7],
]
trainFeatures = np.array(trainFeaturesData, dtype = np.float32)

trainLabelsData = [2, 3, 4, 5, 6, 7]
trainLabels = np.array(trainLabelsData, dtype = np.float32)

knn = cv.ml.KNearest_create()
knn.train(trainFeatures, cv.ml.ROW_SAMPLE, trainLabels)

testFeatureData = [[5, 5, 5, 5]]
testFeature = np.array(testFeatureData, dtype = np.float32)

for k in [1, 3]:
    print(&quot;------------ k = {} --------------\n&quot;.format(k))
    ret, results, neighbours, dist = knn.findNearest(testFeature, k)
    print(&quot;result: {}\n&quot;.format(results))
    print(&quot;neighbours: {}\n&quot;.format(neighbours))
    print(&quot;distance: {}\n&quot;.format(dist))
</code></pre> <p>Output:</p> <pre><code>------------ k = 1 --------------

result: [[5.]]

neighbours: [[5.]]

distance: [[0.]]

------------ k = 3 --------------

result: [[4.]]  &lt;= Why??

neighbours: [[5. 4. 6.]]

distance: [[0. 4. 4.]]
</code></pre>
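With K=3 the neighbours are 5 (distance 0), 4 and 6 (distance 4 each), so the plain majority vote is a three-way tie, and the vote ignores distances when resolving it, which is how 4 can win. A distance-aware tie-break, a sketch of an alternative and not what `cv.ml.KNearest` does, recovers the intuitive answer:

```python
from collections import Counter

def knn_vote(neighbour_labels, distances):
    """Majority vote; ties broken in favour of the nearest neighbour."""
    votes = Counter(neighbour_labels)
    top = max(votes.values())
    tied = {lbl for lbl, c in votes.items() if c == top}
    # Scan neighbours from nearest to farthest, return the first tied label.
    for lbl, _d in sorted(zip(neighbour_labels, distances), key=lambda t: t[1]):
        if lbl in tied:
            return lbl
```

With the question's K=3 neighbours this returns 5, since the exact match sits at distance 0; distance-weighted voting (e.g. scikit-learn's `KNeighborsClassifier(weights="distance")`) formalises the same idea.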
<python><opencv><machine-learning><knn>
2023-04-29 21:26:23
0
1,839
codeDom
76,138,360
5,370,631
Combine multiple rows in a PySpark DataFrame into one, with aggregation
<p>I have the below df with 2 rows which I want to combine into one row.</p> <pre><code>+--------------------+--------------------+----+--------------------+--------------------+---------------+-------+-----------+-------------+-------------+-------------+-------------+-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------+----------------+----------------+----------------+----------------+------------------+------------------+------------+--------------------+-------------+--------+---------------+ | vid| sid|pvid| cid| orderId| cartId|storeId|anchor_item|substitutes_1|substitutes_2|substitutes_3|substitutes_4|substitutes_5|impressions_sub_item_1|impressions_sub_item_2|impressions_sub_item_3|impressions_sub_item_4|impressions_sub_item_5|click_sub_item_1|click_sub_item_2|click_sub_item_3|click_sub_item_4|click_sub_item_5|preferred_sub_item|preferred_sub_rank|search_query|preferred_sub_source| prefType| date_id|event_timestamp| +--------------------+--------------------+----+--------------------+--------------------+---------------+-------+-----------+-------------+-------------+-------------+-------------+-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------+----------------+----------------+----------------+----------------+------------------+------------------+------------+--------------------+-------------+--------+---------------+ |526CC6AF-6A62-405 |911D8EC7-5B1F-4AD | |170836c4b25e4b0f9 |67374d42-024c-430 |200010858176654| 742| 14940752| null| null| null| null| null| null| null| null| null| null| 842643501| null| null| null| null| 10849171| 2| | P00N|SELECTED_PREF|20230427| 1682593258203| |526CC6AF-6A62-405 |911D8EC7-5B1F-4AD | |170836c4b25e4b0f9 |67374d42-024c-430 |200010858176654| 742| 14940752| null| null| null| null| null| null| null| null| null| null| null| 10849171| null| 
null| null| 10849171| 2| | P00N|SELECTED_PREF|20230427| 1682593276038| +--------------------+--------------------+----+--------------------+--------------------+---------------+-------+-----------+-------------+-------------+-------------+-------------+-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------+----------------+----------------+----------------+----------------+------------------+------------------+------------+--------------------+-------------+--------+---------------+ </code></pre> <p>I have done that as below</p> <pre><code> cols_to_merge = [f.first(x, ignorenulls=True).alias(x) for x in _final_df1.columns[8:23]] _final_df1.groupBy(_final_df1.columns[:8]+ _final_df1.columns[23:29]).agg(*cols_to_merge) </code></pre> <p>But now I want the max value in column event_timestamp to be also part of the combined row. How can I get that??</p> <pre><code>+--------------------+--------------------+----+--------------------+--------------------+---------------+-------+-----------+------------------+------------------+------------+--------------------+-------------+--------+-------------+-------------+-------------+-------------+-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------+----------------+----------------+----------------+----------------+ | vid| sid|pvid| cid| orderId| cartId|storeId|anchor_item|preferred_sub_item|preferred_sub_rank|search_query|preferred_sub_source| prefType| date_id|substitutes_1|substitutes_2|substitutes_3|substitutes_4|substitutes_5|impressions_sub_item_1|impressions_sub_item_2|impressions_sub_item_3|impressions_sub_item_4|impressions_sub_item_5|click_sub_item_1|click_sub_item_2|click_sub_item_3|click_sub_item_4|click_sub_item_5| 
+--------------------+--------------------+----+--------------------+--------------------+---------------+-------+-----------+------------------+------------------+------------+--------------------+-------------+--------+-------------+-------------+-------------+-------------+-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------+----------------+----------------+----------------+----------------+ |526CC6AF-6A62-405 |911D8EC7-5B1F-4AD | |170836c4b25e4b0f9 |67374d42-024c-430 |200010858176654| 742| 14940752| 10849171| 2| | P00N|SELECTED_PREF|20230427| null| null| null| null| null| null| null| null| null| null| 842643501| 10849171| null| null| null| +--------------------+--------------------+----+--------------------+--------------------+---------------+-------+-----------+------------------+------------------+------------+--------------------+-------------+--------+-------------+-------------+-------------+-------------+-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------+----------------+----------------+----------------+----------------+ </code></pre>
<python><apache-spark><pyspark><apache-spark-sql>
2023-04-29 20:53:26
1
1,572
Shibu
76,138,342
525,913
Twitter login with Playwright
<p>I'm new to Playwright, and I can't figure out how to get this working. I can enter my userID just fine, but then can't find the right code to click on the &quot;Next&quot; button. Here's one (of many) things I tried:</p> <pre><code>from playwright_stealth import stealth_async from playwright.sync_api import sync_playwright import time with sync_playwright() as pw: browser = pw.chromium.launch(headless=False) context = browser.new_context(viewport={&quot;width&quot;: 1200, &quot;height&quot;: 1080}) page = context.new_page() page.goto(&quot;https://twitter.com/login&quot;) # go to url page.fill('input[type=&quot;text&quot;]', 'unsername_here') page.locator('button:text(&quot;Next&quot;)').click() </code></pre> <p>Why doesn't this click the &quot;Next&quot; button and how can I get it to work? When I run the code I get:</p> <pre><code>Traceback (most recent call last): File &quot;test3.py&quot;, line 12, in &lt;module&gt; page.locator('button:text(&quot;Next&quot;)').click() File &quot;D:\anaconda3\envs\twitterSentiment\lib\site-packages\playwright\sync_api\_generated.py&quot;, line 15436, in click self._sync( File &quot;D:\anaconda3\envs\twitterSentiment\lib\site-packages\playwright\_impl\_sync_base.py&quot;, line 104, in _sync return task.result() File &quot;D:\anaconda3\envs\twitterSentiment\lib\site-packages\playwright\_impl\_locator.py&quot;, line 148, in click return await self._frame.click(self._selector, strict=True, **params) File &quot;D:\anaconda3\envs\twitterSentiment\lib\site-packages\playwright\_impl\_frame.py&quot;, line 489, in click await self._channel.send(&quot;click&quot;, locals_to_params(locals())) File &quot;D:\anaconda3\envs\twitterSentiment\lib\site-packages\playwright\_impl\_connection.py&quot;, line 61, in send return await self._connection.wrap_api_call( File &quot;D:\anaconda3\envs\twitterSentiment\lib\site-packages\playwright\_impl\_connection.py&quot;, line 461, in wrap_api_call return await cb() File 
&quot;D:\anaconda3\envs\twitterSentiment\lib\site-packages\playwright\_impl\_connection.py&quot;, line 96, in inner_send result = next(iter(done)).result() playwright._impl._api_types.TimeoutError: Timeout 30000ms exceeded. =========================== logs =========================== waiting for locator(&quot;button:text(\&quot;Next\&quot;)&quot;) ============================================================ </code></pre> <p>I'm running version 1.32.1 of Playwright.</p>
<python><authentication><twitter><playwright-python>
2023-04-29 20:47:48
1
2,347
ViennaMike
76,138,267
15,900,832
Read/write data over Raspberry Pi Pico USB cable
<p>How can I read/write data to Raspberry Pi Pico using Python/MicroPython over the USB connection?</p>
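On the Pico, MicroPython exposes the USB CDC serial link as `sys.stdin`/`sys.stdout`, so a simple line-based protocol works: read a line, reply with a line. The sketch below shows the device-side loop (the `ping` command is made up); it is written against plain file-like streams, so the identical logic also runs under CPython. On the host side, a library such as pyserial reading `/dev/ttyACM0` (Linux) or `COMx` (Windows) is the usual counterpart.

```python
import sys

def handle(line: str) -> str:
    # hypothetical command handler: real firmware would toggle pins, read
    # sensors, etc. based on the command name
    cmd = line.strip()
    if cmd == "ping":
        return "pong"
    return "err unknown command"

def serve(stream_in=sys.stdin, stream_out=sys.stdout):
    # on the Pico this loops forever over the USB serial link:
    # each line received from the host gets one reply line back
    for line in stream_in:
        stream_out.write(handle(line) + "\n")
```

Calling `serve()` with no arguments on the Pico services the USB connection; passing explicit streams makes the protocol testable off-device.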
<python><micropython><raspberry-pi-pico>
2023-04-29 20:26:14
1
633
basil_man
76,138,236
6,447,399
_ctypes not installed when trying to import pandas
<p>I have read many questions online about trying to solve this issue but none of the solutions seemed to work for me.</p> <p>I get this error:</p> <pre><code>(venv) matt@localhost:~/Downloads/Python-3.11.3&gt; python3 Python 3.11.3 (main, Apr 29 2023, 22:07:31) [GCC 7.5.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import pandas Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/home/matt/.local/lib/python3.11/site-packages/pandas/__init__.py&quot;, line 22, in &lt;module&gt; from pandas.compat import is_numpy_dev as _is_numpy_dev # pyright: ignore # noqa:F401 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/matt/.local/lib/python3.11/site-packages/pandas/compat/__init__.py&quot;, line 25, in &lt;module&gt; from pandas.compat.numpy import ( File &quot;/home/matt/.local/lib/python3.11/site-packages/pandas/compat/numpy/__init__.py&quot;, line 4, in &lt;module&gt; from pandas.util.version import Version File &quot;/home/matt/.local/lib/python3.11/site-packages/pandas/util/__init__.py&quot;, line 8, in &lt;module&gt; from pandas.core.util.hashing import ( # noqa:F401 File &quot;/home/matt/.local/lib/python3.11/site-packages/pandas/core/util/hashing.py&quot;, line 24, in &lt;module&gt; from pandas.core.dtypes.common import ( File &quot;/home/matt/.local/lib/python3.11/site-packages/pandas/core/dtypes/common.py&quot;, line 26, in &lt;module&gt; from pandas.core.dtypes.base import _registry as registry File &quot;/home/matt/.local/lib/python3.11/site-packages/pandas/core/dtypes/base.py&quot;, line 24, in &lt;module&gt; from pandas.errors import AbstractMethodError File &quot;/home/matt/.local/lib/python3.11/site-packages/pandas/errors/__init__.py&quot;, line 6, in &lt;module&gt; import ctypes File &quot;/usr/local/lib/python3.11/ctypes/__init__.py&quot;, line 8, in &lt;module&gt; from _ctypes import Union, 
Structure, Array ModuleNotFoundError: No module named '_ctypes' </code></pre> <p>I am running openSUSE Leap 15.4 and have installed <code>libffi</code>:</p> <pre><code>S | Name | Summary | Type ---+---------------+-------------------------------------------+-------- i+ | libffi-devel | Include files for development with libffi | package i | libffi7 | Foreign Function Interface Library | package i | libffi7-32bit | Foreign Function Interface Library | package i+ | uwsgi-libffi | Plugin libffi for uWSGI | package </code></pre> <p>I have tried to reinstall Python:</p> <pre><code>cd MyPythonDownloadDIR ./configure --enable-optimizations sudo make install </code></pre> <p>I have tried other things as well, without any luck. Does anybody have any ideas?</p> <p>I can't run <code>pandas</code> in Python or in a virtual environment.</p>
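The usual cause of this error: `libffi-devel` was not yet installed when `./configure` first ran, so the build silently skipped the `_ctypes` extension. Re-running `./configure` *after* installing the header package, then rebuilding, normally fixes it. A quick diagnostic for whether a given interpreter was built with the extension:

```python
import importlib.util

# _ctypes is the compiled extension behind the ctypes module; if its spec is
# missing, this interpreter was built without the libffi headers available
spec = importlib.util.find_spec("_ctypes")
status = "ok" if spec is not None else "missing - rebuild after installing libffi-devel"
print(status)
```

Running this under each interpreter on the machine (system Python, the hand-built 3.11, each venv) pinpoints which build is broken.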
<python><ctypes>
2023-04-29 20:17:56
1
7,189
user113156
76,138,199
9,064,615
Loading Pytorch weights results in poor performance compared to training
<p>I'm currently messing around PPO reinforcement learning with <a href="https://www.youtube.com/watch?v=hlv79rcHws0" rel="nofollow noreferrer">this video</a> (Github code located <a href="https://github.com/philtabor/Youtube-Code-Repository/tree/master/ReinforcementLearning/PolicyGradient/PPO/torch" rel="nofollow noreferrer">here</a>). The training goes well and the model seems to be able to learn how to keep the pole up, but when I load the model it performs as if it was just initialized to random weights. The time stamp of the file corresponds to when it was last saved, so I'm definitely overwriting the checkpoint file with the most recent version of the networks.</p> <p>My inference.py file is as follows:</p> <pre class="lang-py prettyprint-override"><code>import gym import numpy as np from ppo_torch import Agent import time import sys env = gym.make('CartPole-v1', render_mode = &quot;human&quot;) batch_size = 16 n_epochs = 5 alpha = 0.0003 agent = Agent(n_actions=env.action_space.n, batch_size=batch_size, alpha=alpha, n_epochs=n_epochs, input_dims=env.observation_space.shape) agent.load_models() agent.critic.eval() agent.actor.eval() while True: observation, _ = env.reset() done = False while not done: start = time.time() action, prob, val = agent.choose_action(observation) observation_, reward, done, info, _ = env.step(action) env.render() end = time.time() print(f'{(1/(end-start)):.2f}', end=&quot;\r&quot;, flush=True) </code></pre> <p>The only change I've made was adding a &quot;.pth&quot; to the <code>self.checkpoint_file</code> in both ActorNetwork and CriticNetwork. So the Actor and Critic <code>self.checkpoint_file</code> path is:</p> <pre class="lang-py prettyprint-override"><code>self.checkpoint_file = os.path.join(chkpt_dir, 'actor_torch_ppo.pth') #... self.checkpoint_file = os.path.join(chkpt_dir, 'critic_torch_ppo.pth') </code></pre>
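One way to rule out a save/load problem is to compare parameters before and after the round trip; if they match exactly, the poor episodes come from somewhere else (for example, stochastic action sampling in `choose_action`, or a checkpoint written by a different agent instance than the one that trained). A minimal sketch with a stand-in network rather than the repo's `ActorNetwork`:

```python
import os
import tempfile
import torch
import torch.nn as nn

net = nn.Linear(4, 2)                       # stand-in for the trained actor
ckpt = os.path.join(tempfile.mkdtemp(), "actor_torch_ppo.pth")
torch.save(net.state_dict(), ckpt)

reloaded = nn.Linear(4, 2)                  # fresh init -> different weights
reloaded.load_state_dict(torch.load(ckpt))

# True only if every parameter tensor survived the round trip bit-for-bit
weights_match = all(
    torch.equal(a, b)
    for a, b in zip(net.state_dict().values(), reloaded.state_dict().values())
)
```

Running the same comparison between the agent right after training and the agent built in `inference.py` would show immediately whether the checkpoint file is the problem.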
<python><pytorch>
2023-04-29 20:08:00
1
608
explodingfilms101
76,137,989
13,403,510
Getting a ValueError from enc.transform, where enc is OneHotEncoder(sparse_output=False), in pandas
<p>I have a timeseries dataset name temp which has 4 columns; Date, Minutes, Issues, Reason no.</p> <p>in which:</p> <pre><code>temp['REASON NO'].value_counts() </code></pre> <p>shows this output:</p> <pre><code>R13 158 R14 123 R4 101 R7 81 R2 40 R3 35 R5 31 R8 11 R15 9 R12 3 R6 2 R10 2 R9 1 </code></pre> <p>I had run this code earlier which ran fine:</p> <pre><code>reason_no = enc.fit_transform(temp['REASON NO'].values.reshape(-1, 1)) </code></pre> <p>But at the end after building model. I wanted to forecast values of Minutes, Issues, Reason no. for next week.</p> <p>I tried this code:</p> <pre><code>seq_length=7 last_week = df.iloc[-seq_length:, :] last_reason_no = enc.transform(last_week['REASON NO'].values.reshape(-1, 1)) last_issue = enc.transform(last_week['Issue'].values.reshape(-1, 1)) last_minutes = scaler.transform(last_week['Minutes'].values.reshape(-1, 1)) last_X = np.hstack([last_reason_no, last_issue, last_minutes]) next_X = last_X.reshape(1, last_X.shape[0], last_X.shape[1]) for i in range(7): pred = model.predict(next_X) pred_minutes = scaler.inverse_transform(pred[:, 2].reshape(-1, 1))[0][0] pred_issue = enc.inverse_transform([np.argmax(pred[:, 1])])[0] pred_reason_no = enc.inverse_transform([np.argmax(pred[:, 0])])[0] print(f'Date: {last_week.iloc[-1, 0]}') print(f'Predicted Reason Number: {pred_reason_no}') print(f'Predicted Issue: {pred_issue}') print(f'Predicted Minutes: {pred_minutes}') </code></pre> <p>But when I run this code, I got an error:</p> <blockquote> <p>ValueError<br /> Traceback (most recent call last)</p> <p> in &lt;cell line: 1&gt;() ----&gt; 1 last_reason_no = enc.transform(last_week['REASON NO'].values.reshape(-1, 1))</p> <p>2 frames</p> <p>/usr/local/lib/python3.10/dist-packages/sklearn/preprocessing/_encoders.py in _transform(self, X, handle_unknown, force_all_finite, warn_on_unknown) 172 &quot; during transform&quot;.format(diff, i) 173 ) --&gt; 174 raise ValueError(msg) 175 else: 176 if warn_on_unknown:</p> </blockquote> 
<blockquote> <p>ValueError: Found unknown categories ['R5', 'R4'] in column 0 during transform.</p> </blockquote> <p>I'd appreciate help understanding why I'm getting this error and how to fix it.</p>
<python><pandas><numpy><lstm>
2023-04-29 19:15:48
1
1,066
def __init__
76,137,936
895,544
Too many open files when run an external script with multiprocessing
<pre class="lang-py prettyprint-override"><code>def func(item, protein, ncpu): output = None item_id = item.id output_fname = tempfile.mkstemp(suffix='_output.json', text=True)[1] input_fname = tempfile.mkstemp(suffix='_input.pdbqt', text=True)[1] # &lt;-- error occurs here try: with open(input_fname, 'wt') as f: f.write(preprocess(item)) # &lt;- convert item to text format, not important python_exec = sys.executable cmd = f'{python_exec} script.py -i {input_fname} -p {protein} -o {output_fname} -c {ncpu}' subprocess.run(cmd, shell=True) with open(output_fname) as f: res = f.read() if res: res = json.loads(res) output = {'score': res['score'], 'block': res['poses']} finally: os.unlink(input_fname) os.unlink(output_fname) return item_id, output with Pool(ncpu) as pool: for item_id, res in pool.imap_unordered(partial(func, **kwargs), tuple(items), chunksize=1): yield item_id, res </code></pre> <p>I process multiple items using <code>multiprocessing.Pool</code>. For every item I run a python script from <code>subprocess</code> shell. Before, I created two temporary files and pass them as arguments to the script. The <code>script.py</code> calls C-extension which process an item. After, I parse the output json file and return values, if any. Temporary files should be destroyed. in a <code>finally</code> section. 
However, after I process 3880-3920 items I got an error:</p> <pre><code>multiprocessing.pool.RemoteTraceback: &quot;&quot;&quot; Traceback (most recent call last): File &quot;/home/pavlop/anaconda3/envs/vina_cache/lib/python3.9/multiprocessing/pool.py&quot;, line 125, in worker File &quot;/home/pavlop/python/docking-scripts/moldock/vina_dock.py&quot;, line 93, in func OSError: [Errno 24] Too many open files: '/var/tmp/pbs.147815.login/tmpp8tqblfv_input.pdbqt' &quot;&quot;&quot; The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;/home/pavlop/python/docking-scripts/moldock/run_dock.py&quot;, line 230, in &lt;module&gt; main() File &quot;/home/pavlop/python/docking-scripts/moldock/run_dock.py&quot;, line 203, in main for i, (item_id, res) in enumerate(docking(mols, File &quot;/home/pavlop/python/docking-scripts/moldock/run_dock.py&quot;, line 74, in docking for item_id, res in pool.imap_unordered(partial(func, **kwargs), tuple(items), chunksize=1): File &quot;/home/pavlop/anaconda3/envs/vina_cache/lib/python3.9/multiprocessing/pool.py&quot;, line 870, in next raise value OSError: [Errno 24] Too many open files: '/var/tmp/pbs.147815.login/tmpp8tqblfv_input.pdbqt' </code></pre> <p>What am I doing wrong or missing? Why are the file descriptors not released? Could it be that the C extension does not release file descriptors?</p> <p>I see that temporary files are created and removed as expected. <code>ulimit</code> (soft and hard) was set to 1000000. I checked all my code and all files are opened using a <code>with</code> statement to avoid leaking.</p> <p>If I replace <code>multiprocessing.Pool</code> with a <code>dask</code> cluster, everything works as expected, no errors.</p> <p>UPDATE:</p> <p>I checked the output of <code>lsof</code>. Indeed, both temporary files remain open for every item and accumulate over time in every running process, but they have status (deleted). So the issue is in how I manage them.
However, since the <code>ulimit</code> is large, I should not observe this error.</p> <p>UPDATE2:</p> <p>It seems that I have to close the descriptors manually. It worked on a test run; I have to check on a larger run.</p> <pre class="lang-py prettyprint-override"><code>fd, name = tempfile.mkstemp() try: ... finally: os.close(fd) os.unlink(name) </code></pre>
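UPDATE2 is the right diagnosis: `tempfile.mkstemp()` returns an already-open file descriptor *plus* a path, so keeping only `mkstemp(...)[1]` leaks one descriptor per temp file; the `(deleted)` entries in `lsof` are exactly those orphaned descriptors, kept alive after `os.unlink`. As an alternative to a separate `os.close(fd)`, `os.fdopen` takes ownership of the descriptor so the `with` block closes it:

```python
import os
import tempfile

fd, name = tempfile.mkstemp(suffix="_input.pdbqt", text=True)
try:
    # fdopen wraps the descriptor mkstemp already opened; leaving the
    # with-block closes it, so nothing leaks even across many iterations
    with os.fdopen(fd, "w") as f:
        f.write("payload")
    # ... run the subprocess here, then read the result back ...
    with open(name) as f:
        content = f.read()
finally:
    os.unlink(name)
```

The same pattern applied to both `input_fname` and `output_fname` in `func` keeps the per-process descriptor count flat regardless of how many items are processed.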
<python><multiprocessing>
2023-04-29 19:02:58
1
4,143
DrDom
76,137,930
14,790,056
Cumsum of previous values if conditions are met after groupby
<p>I have unbalanced panel. Basically, someone will increase liquidity before decreasing that liquidity fully. I have groups (NF_TOKEN_ID). I want to first <code>groupby</code> and whenever <code>ACTION</code> is <code>DECREASE_LIQUIDITY</code> i want to replace the zero value with all the funds that have been withdrawn. This amount corresponds to however much has been added previously. So I will have to cumsum previous values if value from <code>ACTION</code> is <code>INCREASE_LIQUIDITY</code>. <code>cumsum</code> should only be applied to funds that have been added right before, not retrospectively.</p> <p>My current df.</p> <pre><code> BLOCK_TIMESTAMP NF_TOKEN_ID ACTION LIQUIDITY AMOUNT0_ADJUSTED AMOUNT1_ADJUSTED 0 2023-01-19 11:43:23+00:00 417467.0 INCREASE_LIQUIDITY 2.0002500037479372e+16 0.0 999999.999999 1 2023-01-21 10:08:35+00:00 417467.0 DECREASE_LIQUIDITY 2.0002500037479372e+16 0.0 0.0 2 2023-01-23 17:43:23+00:00 417467.0 INCREASE_LIQUIDITY 1.9999500037496876e+16 1000000.0 0.0 3 2023-01-28 21:42:47+00:00 417467.0 DECREASE_LIQUIDITY 1.9999500037496876e+16 0.0 0.0 4 2023-01-31 09:20:11+00:00 417467.0 INCREASE_LIQUIDITY 2.001358136187257e+16 0.0 1000553.996968 5 2023-02-05 14:19:11+00:00 417467.0 DECREASE_LIQUIDITY 2.001358136187257e+16 0.0 0.0 6 2023-02-06 16:00:59+00:00 417467.0 INCREASE_LIQUIDITY 3.9510177985927736e+16 900000.0 1075372.476351 7 2023-02-11 16:21:47+00:00 417467.0 DECREASE_LIQUIDITY 3.9510177985927736e+16 0.0 0.0 8 2023-02-11 18:17:47+00:00 417467.0 INCREASE_LIQUIDITY 3.999900007499375e+16 2000000.0 0.0 9 2023-02-13 08:42:47+00:00 417467.0 DECREASE_LIQUIDITY 3.999900007499375e+16 0.0 0.0 10 2023-02-16 23:39:11+00:00 417467.0 INCREASE_LIQUIDITY 6.000384593243181e+16 3000267.297679 0.0 11 2023-02-18 13:02:47+00:00 417467.0 INCREASE_LIQUIDITY 2.000210525110979e+16 1000130.263937 0.0 64 2023-01-19 11:52:47+00:00 417520.0 INCREASE_LIQUIDITY 1.5233876511464717e+21 2360900.644245 17981.537918728 65 2023-01-19 11:52:47+00:00 417520.0 
DECREASE_LIQUIDITY 1.5233876511464717e+21 0.0 0.0 66 2023-01-19 11:52:59+00:00 417521.0 INCREASE_LIQUIDITY 1e+19 0.05981737761 0.05981737761 81 2023-01-19 11:54:35+00:00 417537.0 INCREASE_LIQUIDITY 17130998133876.0 49.99712 0.02400335355 82 2023-01-23 07:29:23+00:00 417537.0 INCREASE_LIQUIDITY 28028281686564.0 121.373999 0.01412890286 83 2023-01-23 17:34:35+00:00 417537.0 INCREASE_LIQUIDITY 9508091561328.0 39.513265 0.00581565507 84 2023-01-25 00:55:47+00:00 417537.0 DECREASE_LIQUIDITY 54667371381768.0 0.0 0.0 </code></pre> <p>Desired df</p> <pre><code> BLOCK_TIMESTAMP NF_TOKEN_ID ACTION LIQUIDITY AMOUNT0_ADJUSTED AMOUNT1_ADJUSTED 0 2023-01-19 11:43:23+00:00 417467.0 INCREASE_LIQUIDITY 2.0002500037479372e+16 0.0 999999.999999 1 2023-01-21 10:08:35+00:00 417467.0 DECREASE_LIQUIDITY 2.0002500037479372e+16 0.0 999999.999999 2 2023-01-23 17:43:23+00:00 417467.0 INCREASE_LIQUIDITY 1.9999500037496876e+16 1000000.0 0.0 3 2023-01-28 21:42:47+00:00 417467.0 DECREASE_LIQUIDITY 1.9999500037496876e+16 1000000.0 0.0 4 2023-01-31 09:20:11+00:00 417467.0 INCREASE_LIQUIDITY 2.001358136187257e+16 0.0 1000553.996968 5 2023-02-05 14:19:11+00:00 417467.0 DECREASE_LIQUIDITY 2.001358136187257e+16 0.0 1000553.996968 6 2023-02-06 16:00:59+00:00 417467.0 INCREASE_LIQUIDITY 3.9510177985927736e+16 900000.0 1075372.476351 7 2023-02-11 16:21:47+00:00 417467.0 DECREASE_LIQUIDITY 3.9510177985927736e+16 900000.0 1075372.476351 8 2023-02-11 18:17:47+00:00 417467.0 INCREASE_LIQUIDITY 3.999900007499375e+16 2000000.0 0.0 9 2023-02-13 08:42:47+00:00 417467.0 DECREASE_LIQUIDITY 3.999900007499375e+16 2000000.0 0.0 10 2023-02-16 23:39:11+00:00 417467.0 INCREASE_LIQUIDITY 6.000384593243181e+16 3000267.297679 0.0 11 2023-02-18 13:02:47+00:00 417467.0 INCREASE_LIQUIDITY 2.000210525110979e+16 1000130.263937 0.0 64 2023-01-19 11:52:47+00:00 417520.0 INCREASE_LIQUIDITY 1.5233876511464717e+21 2360900.644245 17981.537918728 65 2023-01-19 11:52:47+00:00 417520.0 DECREASE_LIQUIDITY 1.5233876511464717e+21 
2360900.644245 17981.537918728 66 2023-01-19 11:52:59+00:00 417521.0 INCREASE_LIQUIDITY 1e+19 0.05981737761 0.05981737761 81 2023-01-19 11:54:35+00:00 417537.0 INCREASE_LIQUIDITY 17130998133876.0 49.99712 0.02400335355 82 2023-01-23 07:29:23+00:00 417537.0 INCREASE_LIQUIDITY 28028281686564.0 121.373999 0.01412890286 83 2023-01-23 17:34:35+00:00 417537.0 INCREASE_LIQUIDITY 9508091561328.0 39.513265 0.00581565507 84 2023-01-25 00:55:47+00:00 417537.0 DECREASE_LIQUIDITY 54667371381768.0 210.884384 0.04394791148 </code></pre>
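One sketch of the fill step on a reduced frame (columns trimmed; assumes each `DECREASE_LIQUIDITY` row should receive the running total of the `INCREASE_LIQUIDITY` amounts since the previous decrease, per `NF_TOKEN_ID`): number the stretches between decreases with a shifted cumulative count of decrease rows, take a cumulative sum of amounts within each (group, stretch), and write that value into the decrease rows.

```python
import pandas as pd

df = pd.DataFrame({
    "NF_TOKEN_ID": [417467, 417467, 417467, 417467, 417537, 417537, 417537],
    "ACTION": ["INCREASE_LIQUIDITY", "DECREASE_LIQUIDITY",
               "INCREASE_LIQUIDITY", "DECREASE_LIQUIDITY",
               "INCREASE_LIQUIDITY", "INCREASE_LIQUIDITY",
               "DECREASE_LIQUIDITY"],
    "AMOUNT0_ADJUSTED": [1000000.0, 0.0, 900000.0, 0.0,
                         49.99712, 121.373999, 0.0],
})

is_dec = df["ACTION"].eq("DECREASE_LIQUIDITY")
# stretch id: increments after each decrease, computed per token id
block = (
    df.groupby("NF_TOKEN_ID")["ACTION"]
      .transform(lambda s: s.eq("DECREASE_LIQUIDITY").cumsum().shift(fill_value=0))
)
# running sum inside each (token, stretch); on a decrease row this equals
# the total added since the previous decrease
running = df.groupby(["NF_TOKEN_ID", block])["AMOUNT0_ADJUSTED"].cumsum()
df.loc[is_dec, "AMOUNT0_ADJUSTED"] = running[is_dec]
```

Repeating the last two lines for `AMOUNT1_ADJUSTED` handles both amount columns; the same frame still groups cleanly afterwards with the `first(..., ignorenulls)`-style aggregation already in use.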
<python><pandas><dataframe>
2023-04-29 19:01:22
1
654
Olive
76,137,774
964,235
Loading .npy file takes over 30x longer if RAM usage is high
<p>Here is my RAM usage when loading a .npy file, deleting it with del, calling gc.collect, then loading it again. It behaves as expected and takes about 2.5 seconds.<br /> <a href="https://i.sstatic.net/s4RRQ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/s4RRQ.jpg" alt="enter image description here" /></a></p> <p>Here is the same thing, but while I have other things loaded in memory so my total RAM usage is high.<br /> It takes over 90 seconds to load and my usage does this zig zag pattern.<br /> <a href="https://i.sstatic.net/bjjbn.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bjjbn.jpg" alt="enter image description here" /></a></p> <p>What is going on here? When I delete the array, garbage collect it, then load it again I expect the newly loaded array to just fit right in where the old one was, regardless of my total RAM usage. When it does this zig zag loading it takes way too long.</p>
<python><numpy><memory><memory-management><ram>
2023-04-29 18:25:53
1
1,293
Frobot
76,137,381
9,079,461
AttributeError: window object has no attribute tk
<p>I have a tkinter program with multiple windows.</p> <p>Code:</p> <pre><code>import tkinter as tk from tkinter import * class window(Toplevel): def __init__(self, master=None): self.master=master self.title(&quot;Add Customer Record&quot;) window= tk.Tk() window.geometry(&quot;500x400&quot;) class MainWindow: def __init__(self, master=None): self.master = master self.title(&quot;Shopping Mall Management&quot;) self.master.geometry(&quot;500x300&quot;) Button(self.master, text=&quot;Add customer record&quot;, command=self.open_save_record_window).pack(pady=20) root = window() main_window = MainWindow(root) root.mainloop() </code></pre> <p>Error:</p> <pre><code>AttributeError : window has no attribute tk. </code></pre> <p>I'm unable to resolve this error. Any help would be appreciated.</p>
<python><python-3.x><tkinter>
2023-04-29 16:55:44
3
887
Khilesh Chauhan
76,137,292
16,655,290
Attempting to edit a Sqlite entry with Flask-SqlAlchemy but receive an error
<p>When trying to edit a row, on submit I get this error:</p> <p>sqlalchemy.exc.ProgrammingError: (sqlite3.ProgrammingError) Error binding parameter 1: type 'StringField' is not supported [SQL: UPDATE blog_post SET title=?, subtitle=?, body=?, author=?, img_url=? WHERE blog_post.id = ?] [parameters: (&lt;wtforms.fields.simple.StringField object at 0x0000020F99680610&gt;, &lt;wtforms.fields.simple.StringField object at 0x0000020F99680C10&gt;, &lt;flask_ckeditor.fields.CKEditorField object at 0x0000020F99682250&gt;, &lt;wtforms.fields.simple.StringField object at 0x0000020F99682ED0&gt;, &lt;wtforms.fields.simple.StringField object at 0x0000020F99682310&gt;, 2)] (Background on this error at: <a href="https://sqlalche.me/e/20/f405" rel="nofollow noreferrer">https://sqlalche.me/e/20/f405</a>)</p> <p>I am not trying to insert another data type, everything is a string. Here is my table:</p> <pre><code>class BlogPost(db.Model): id = db.Column(db.Integer, primary_key=True) title = db.Column(db.String(250), unique=True, nullable=False) subtitle = db.Column(db.String(250), nullable=False) date = db.Column(db.String(250), nullable=False) body = db.Column(db.Text, nullable=False) author = db.Column(db.String(250), nullable=False) img_url = db.Column(db.String(250), nullable=False) </code></pre> <p>Below is my code to edit an entry:</p> <pre><code>@app.route(&quot;/edit/&lt;int:post_id&gt;&quot;, methods=[&quot;GET&quot;, &quot;POST&quot;]) def edit_post(post_id): with app.app_context(): post = db.session.query(BlogPost).filter_by(id = post_id).first() print(post.__dict__) # post_data = CreatePostForm(obj=post) post_data = CreatePostForm( title = post.title, subtitle = post.subtitle, author = post.author, img_url = post.img_url, body = post.body ) if post_data.validate_on_submit(): print(post.title) post.title = post_data.title post.subtitle = post_data.subtitle post.author = post_data.author post.img_url = post_data.img_url post.body = post_data.body db.session.commit() return 
redirect(url_for('show_post', index = post_id)) return render_template('make-post.html', form = post_data, is_edit = True) </code></pre> <p>I don't know where this ProgrammingError is coming from.</p>
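The parameters in the error message are repr strings of `wtforms...StringField` objects: the assignments copy the *field objects* onto the model, while SQLite needs the plain values, which live on each field's `.data` attribute (so `post.title = post_data.title.data`, and likewise for the other columns). A dependency-free stand-in showing the distinction (`StringField` below is a mock, not the real WTForms class):

```python
class StringField:
    """Mock of wtforms.StringField: the field object wraps the submitted value."""
    def __init__(self, data):
        self.data = data

form_title = StringField("My Post")

wrong = form_title        # a StringField instance -> "type not supported" on bind
right = form_title.data   # the plain str the database driver can bind
```

With the real form, `form.populate_obj(post)` is another option: it walks the fields and assigns each `.data` value onto the matching model attribute.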
<python><sqlite><flask-sqlalchemy><flask-wtforms>
2023-04-29 16:31:48
1
351
Daikyu
76,137,221
3,166,177
Dask converts list of tuples to list of lists after converting from pandas
<p>I recently came across an issue where Dask converts a list of tuples to a list of lists while applying a function on a groupby, after converting a Pandas dataframe to a Dask dataframe. Below is a small reproducible example:</p> <pre><code>abc = [[(&quot;a&quot;, 1), (&quot;b&quot;, 2)], [(&quot;a&quot;, 1), (&quot;b&quot;, 2)], [(&quot;a&quot;, 1), (&quot;b&quot;, 2)], [(&quot;a&quot;, 1), (&quot;b&quot;, 2)]] mnp = [1, 1, 2, 3] pdf1 = pd.DataFrame() pdf1[&quot;a&quot;] = abc pdf1[&quot;b&quot;] = mnp ddf = dd.from_pandas(pdf1, npartitions=2) def apply_fun(grouped_df): print(grouped_df) pdf1.groupby([&quot;b&quot;]).apply(apply_fun) # pandas version ddf.groupby([&quot;b&quot;]).apply(apply_fun, meta=pd.Series([], dtype=str)).compute() # dask version </code></pre> <p>The pandas version produces</p> <pre><code> a b 0 [(a, 1), (b, 2)] 1 1 [(a, 1), (b, 2)] 1 a b 2 [(a, 1), (b, 2)] 2 a b 3 [(a, 1), (b, 2)] 3 </code></pre> <p>the dask version</p> <pre><code> a b 0 [[a, 1], [b, 2]] 1 1 [[a, 1], [b, 2]] 1 a b 2 [[a, 1], [b, 2]] 2 a b 3 [[a, 1], [b, 2]] 3 </code></pre> <p>Could someone please explain how to retain the original format as a list of tuples?</p>
<python><pandas><dask><dask-dataframe>
2023-04-29 16:17:54
1
352
jsanjayce
76,137,208
9,134,545
Python attrs nested objects convertes
<p>I'm using <code>attrs</code> lib to parse my configuration file, and I'm looking for a better way to parse nested data objects.</p> <p>Example :</p> <pre><code>from attrs import define, field, validators from typing import Dict class HoconParser: @classmethod def from_hocon(cls, file_path): from pyhocon import ConfigFactory return cls( **ConfigFactory.parse_file(file_path) ) @classmethod def from_dictstrobj(cls, dictstrobj): if dictstrobj: return { key: cls(**value) for key, value in dictstrobj.items() } return {} @define class ClassOne(HoconParser): id: str = field(validator=validators.instance_of(str)) name: str = field(validator=validators.instance_of(str)) @define class ClassTwo(HoconParser): id: str = field(validator=validators.instance_of(str)) name: str = field(validator=validators.instance_of(str)) test: Dict[str, ClassOne] = field( converter=ClassOne.from_dictstrobj, validator=validators.deep_mapping( key_validator=validators.instance_of(str), value_validator=validators.instance_of(ClassOne) ) ) a = ClassTwo.from_hocon(&quot;test.conf&quot;) </code></pre> <p>Basically, ClassTwo has an attribute which is a <code>Dict[str, ClassOne]</code>.</p> <p>In order to make this work, I had to make a specific function named <code>from_dictstrobj</code> and use it as a converter for <code>test</code> attribute.</p> <p>Is there any better way to do this ?</p>
<python><python-attrs>
2023-04-29 16:13:46
1
892
Fragan
76,137,199
10,307,576
Copy and subdivide object without original also being subdivided
<p>I wrote a .py script where I attempt to make two copies of a selected object, one with 1 subdivision and another with 2 subidivions. I ran this script on a plane and it resulted in three planes total, however they all ended up with 3 subdivisions. I think it's because they are all selected during the subdivision call but I don't know how to set the active object.</p> <pre class="lang-py prettyprint-override"><code>import bpy # Get the selected objects selected_objects = bpy.context.selected_objects # Loop over the selected objects for obj in selected_objects: print(obj.name) # Check if the object is a mesh if obj.type == 'MESH': for n in [1,2]: # Duplicate the object duplicate_obj = obj.copy() duplicate_obj.data = obj.data.copy() # Add the duplicated and subdivided object to the scene bpy.context.collection.objects.link(duplicate_obj) bpy.context.view_layer.objects.active = duplicate_obj # Subdivide the mesh bpy.ops.object.mode_set(mode='EDIT') bpy.ops.mesh.subdivide(number_cuts=n) bpy.ops.object.mode_set(mode='OBJECT') # Parent duplicated object to original object duplicate_obj.parent = obj duplicate_obj.name = obj.name + &quot;_subd_&quot; + str(n) </code></pre> <p><a href="https://i.sstatic.net/SsUUj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SsUUj.png" alt="example" /></a></p>
<python><blender>
2023-04-29 16:12:13
1
2,160
Bugbeeb
76,137,008
2,473,022
Python class constructor docstring with __new__ or metaclass
<p><a href="https://stackoverflow.com/questions/37019744/is-there-a-consensus-on-what-should-be-documented-in-the-class-and-init-docs">Previous</a> <a href="https://stackoverflow.com/questions/54189661/docstring-in-class-or-init-constructor">users</a> have asked whether to put the constructor docstring under the <code>class</code> definition or under <code>__init__</code>. The answer is given in <a href="https://peps.python.org/pep-0257/" rel="nofollow noreferrer">PEP 257 – Docstring Conventions</a>:</p> <blockquote> <p>The class constructor should be documented in the docstring for its <code>__init__</code> method.</p> </blockquote> <p>However, both <code>__new__</code> and <code>metaclass</code>, as actual constructors, have the ability to modify the function signature before the arguments given to the constructor are passed on to the initializer <code>__init__</code>. Since PEP 257 said the constructor docstring should be written in <code>__init__</code>, should the docstring describe the arguments provided by the user when <code>my_var = MyClass(*args1, **kwargs1)</code> is called, or the arguments when <code>__init__(self, *args2, **kwargs2)</code> is invoked? It is not guaranteed that <code>args1 == args2</code>, <code>kwargs1 == kwargs2</code>. In general, how should the effects of <code>__new__</code> and <code>metaclass</code> be documented?</p>
<python><docstring><pep>
2023-04-29 15:25:16
0
1,664
Moobie
76,137,007
13,119,730
Fast API - Gunicorn vs Uvicorn
<p>I'm developing a FastAPI application which does a lot of I/O operations per single request. The requests are handled synchronously.</p> <p>What's the difference between serving the application using:</p> <ul> <li>uvicorn - <code>uvicorn main:app --workers 4</code></li> <li>gunicorn - <code>gunicorn main:app --workers 4 --worker-class uvicorn.workers.UvicornWorker</code></li> </ul> <p>Which is better, and why? Many thanks :)</p>
<python><fastapi><gunicorn><uvicorn>
2023-04-29 15:24:45
1
387
Jakub Zilinek
76,136,697
7,179,546
Set AppDynamics integration with Python
<p>I'm trying to set up my Python application to send data to AppDynamics. I have the AppDynamics controller up and running, and on my local machine my application works, but no data shows up in AppDynamics.</p> <p>I've been told to use this repo as a template (and I can confirm it works, sending data to the AppDynamics instance I'm working on): <code>https://github.com/jaymku/py-k8s-init-scar/blob/master/kube/web-api.yaml</code></p> <p>I have some doubts though, and they might be the cause of the issues that I'm having.</p> <p>I had a CMD at the end of my Dockerfile like <code>first.sh &amp;&amp; python3 second</code> and I've changed it to ENTRYPOINT &quot;first.sh &amp;&amp; python3 second&quot;. Note there is no [] format here, and also that there are two concatenated commands.</p> <p>For the value of the <code>APP_ENTRY_POINT</code> variable I'm trying just the same.</p> <p>There are no errors when I run this and my application works correctly, except the data is not sent to AppDynamics. Nothing seems to fail and I can't find any error messages. Any ideas what I'm missing?</p> <p>Also, where can I find, within AppDynamics, the value that we need to set for the APPDYNAMICS_CONTROLLER_PORT variable? I'm pretty sure it will be 443 in our case, since we seem to be using that in other projects in AppDynamics that are working, but checking it would be a good idea. It might also be related to this issue, I don't know.</p>
<python><dockerfile><appdynamics>
2023-04-29 14:16:27
1
737
Carabes
76,136,612
8,769,985
set axes ticks on double log plot
<p>In a double logarithmic plot with matplotlib, I would like to have axes ticks at given points, and with &quot;standard&quot; label (no scientific notation).</p> <p>This code does not produce the desired output</p> <pre><code>import matplotlib.pyplot as plt plt.errorbar(x = [1, 2, 4, 8, 10], y = [1, 1/2, 1/4, 1/8, 1/16 ], yerr = [0.05, 0.05, 0.05, 0.05, 0.05], fmt='o', capsize=2) axes=plt.gca() axes.set_xlim([1, 10]) axes.set_ylim([10**(-2),2]) axes.set_xscale('log') axes.set_yscale('log') axes.set_xticks([1,5,10]) plt.show() </code></pre> <p>What I get is</p> <p><a href="https://i.sstatic.net/nHURZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nHURZ.png" alt="plot" /></a></p> <p>I would like to have removed the x labels, and only have &quot;1, 5, 10&quot;.</p>
<python><matplotlib>
2023-04-29 13:59:46
1
7,617
francesco
76,136,551
1,159,488
How to use Python pwn tools to resolve a side channel case study
<p>I'm working on a class exercise that involves finding a password on a remote server. The goal is to use the Python pwn library.</p> <p>When I access the server with <code>nc IP port</code> I see:</p> <blockquote> <p>[0000014075] Initializing the exercice...<br /> [0001678255] Looking for a door...<br /> [0001990325] Trying to unlock the door...<br /> ^ _ ^ Ready for the challenge ? ^_^<br /> Answer :</p> </blockquote> <p>I understand that it's kind of a side-channel attack and I have to use the time of each iteration to get the right character. If I'm right, each iteration should reveal one more character of the password.</p> <p>I use the following code:</p> <pre><code>import time from pwn import * conn = remote('URL', port) def determine_character(duration) -&gt; str: chars = &quot;0123456789ABCDEFGHIJKLMNOPQRSTWXYZabcdefghijklmnopqrstuvwyz&quot; return chars[int(duration * 10 / 3)] final_pass = &quot;&quot; supposedLength = 50; for i in range (supposedLength): conn.sendline(&quot;test&quot;) start = time.time() conn.sendline(&quot;a&quot;) conn.recvline() stop = time.time() print (current_time2) duration = (stop - start) real_pass = determine_character(duration) print (real_pass) final_pass += str(real_pass) print (&quot;final pass {} : &quot;. format(final_pass)) print (conn.recvline()) for i in range (supposedLength): conn.sendline(final_pass[i]) print(conn.recvline()) </code></pre> <p>But this does not work: when I run the script, I get a strange password and obviously it fails:</p> <blockquote> <p>final pass 05010000000000000000000000000000000000000000000000</p> </blockquote> <p>What should I do to recover the correct password? Is there a problem with how I measure the duration? Do you have any ideas for debugging the script?</p> <p>Any help would be greatly appreciated, thanks!</p>
<python><reverse-engineering><netcat><pwntools><server-side-attacks>
2023-04-29 13:46:29
0
629
Julien
76,136,494
8,189,936
How to scrape a page that doesn't load unless you click the pagination button
<p>I am trying to scrape a website with Python BeautifulSoup. The website is paginated. The scaping works until page 201. For page 201, the URL returns a 404 if you copy it, and paste it in your browser, and hit enter. However, if you click the pagination button, it loads.</p> <p>How do I handle this kind of scenario?</p> <pre><code>import requests from bs4 import BeautifulSoup import csv import time import logging from urllib.parse import urljoin # Configure logging logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') headers = { &quot;User-Agent&quot;: &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0&quot; } def fetch_pagination_url(url): response = requests.get(url, headers) if response.status_code == 200: soup = BeautifulSoup(response.text, 'html.parser') next_page = soup.select_one('.pagination__link--next') if next_page: next_url = next_page.get('href') url = urljoin(&quot;https://mywebsite.com/&quot;,next_url) # type: ignore logging.info(f&quot;next url: {url}&quot;) return url def fetch_data(url): response = requests.get(url, headers) if response.status_code == 200: return response.text ## Return error and terminate the program if the response status code is not 200 logging.error(f&quot;error fetching data from {url}&quot;) return None def parse_data(html): soup = BeautifulSoup(html, 'html.parser') courses = soup.find_all('div', class_='grid__item') data = [] for course in courses: course_data = {} title_element = course.find('h2', class_=['h4', 'course-title']) if title_element: course_data['title'] = title_element.text.strip() else: course_data['title'] = None provider_element = course.find('div', class_='provider') if provider_element: course_data['provider'] = provider_element.text.strip() else: course_data['provider'] = None location_element = course.find('div', class_='location') if location_element: course_data['location'] = location_element.text.strip() else: 
course_data['location'] = None data.append(course_data) return data def save_data(data, filename): with open(filename, 'a', newline='', encoding='utf-8') as csvfile: # Add more fieldnames as needed fieldnames = ['title', 'provider', 'location'] writer = csv.DictWriter(csvfile, fieldnames=fieldnames) for row in data: writer.writerow(row) def main(): page = 1 output_filename = 'output.csv' # Write the header row to the CSV file with open(output_filename, 'w', newline='', encoding='utf-8') as csvfile: fieldnames = ['title', 'provider', 'location'] writer = csv.DictWriter(csvfile, fieldnames=fieldnames) writer.writeheader() while True: url = f&quot;https://website.com/coursedisplay/results/courses?studyYear=2023&amp;destination=Undergraduate&amp;postcodeDistanceSystem=imperial&amp;pageNumber=200&amp;sort=MostRelevant&amp;clearingPreference=None&quot; html = fetch_data(url) if html: data = parse_data(html) save_data(data, output_filename) logging.info(f&quot;scraping page {page}&quot;) page += 1 else: break # Add a delay to avoid overwhelming the server time.sleep(10) time.sleep(3000000) if __name__ == '__main__': main() </code></pre>
<python><web-scraping><beautifulsoup>
2023-04-29 13:30:33
1
1,631
David Essien
76,136,469
11,065,874
How to allow hyphen (-) in query parameter name using FastAPI?
<p>I have a simple application below:</p> <pre class="lang-py prettyprint-override"><code>from typing import Annotated import uvicorn from fastapi import FastAPI, Query, Depends from pydantic import BaseModel app = FastAPI() class Input(BaseModel): a: Annotated[str, Query(..., alias=&quot;your_name&quot;)] @app.get(&quot;/&quot;) def test(inp: Annotated[Input, Depends()]): return f&quot;Hello {inp.a}&quot; def main(): uvicorn.run(&quot;run:app&quot;, host=&quot;0.0.0.0&quot;, reload=True, port=8001) if __name__ == &quot;__main__&quot;: main() </code></pre> <p><code>curl &quot;http://127.0.0.1:8001/?your_name=amin&quot;</code> returns &quot;Hello amin&quot;</p> <hr /> <p>I now <strong>change the alias from <code>your_name</code> to <code>your-name</code></strong>.</p> <pre class="lang-py prettyprint-override"><code>from typing import Annotated import uvicorn from fastapi import FastAPI, Query, Depends from pydantic import BaseModel app = FastAPI() class Input(BaseModel): a: Annotated[str, Query(..., alias=&quot;your-name&quot;)] @app.get(&quot;/&quot;) def test(inp: Annotated[Input, Depends()]): return f&quot;Hello {inp.a}&quot; def main(): uvicorn.run(&quot;run:app&quot;, host=&quot;0.0.0.0&quot;, reload=True, port=8001) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>Then <code>curl &quot;http://127.0.0.1:8001/?your-name=amin&quot;</code> returns:</p> <pre><code>{&quot;detail&quot;:[{&quot;loc&quot;:[&quot;query&quot;,&quot;extra_data&quot;],&quot;msg&quot;:&quot;field required&quot;,&quot;type&quot;:&quot;value_error.missing&quot;}]} </code></pre> <hr /> <p>However, hyphened alias in a simpler application is allowed.</p> <pre class="lang-py prettyprint-override"><code>from typing import Annotated import uvicorn from fastapi import FastAPI, Query app = FastAPI() @app.get(&quot;/&quot;) def test(a: Annotated[str, Query(..., alias=&quot;your-name&quot;)]): return f&quot;Hello {a}&quot; def main(): uvicorn.run(&quot;run:app&quot;, 
host=&quot;0.0.0.0&quot;, reload=True, port=8001) if __name__ == &quot;__main__&quot;: main() </code></pre> <p><code>curl &quot;http://127.0.0.1:8001/?your-name=amin&quot;</code> returns &quot;Hello Amin&quot;</p> <hr /> <hr /> <p>Is this a bug? what is the problem here?</p>
<python><query-string><fastapi><pydantic>
2023-04-29 13:25:40
1
2,555
Amin Ba
76,136,446
1,124,262
python logging with joblib returns empty
<p>When I try to log to a file from a multithreaded loop, I just get an empty file.</p> <p>A minimal program to illustrate the issue follows:</p> <pre><code>import logging import time from joblib import Parallel, delayed def worker(index): time.sleep(0.5) logging.info('Hi from myfunc {}'.format(index)) time.sleep(0.5) def main(): logging.basicConfig(filename='multithread_test.log', level=logging.INFO, format='%(relativeCreated)6d %(threadName)s %(message)s') Parallel(n_jobs=4)(delayed(worker)(m) for m in range(1, 8)) if __name__ == '__main__': main() </code></pre> <p>But when I set <code>n_jobs=1</code> instead of <code>4</code>, I get the expected output:</p> <pre><code> 1636 MainThread Hi from myfunc 1 2656 MainThread Hi from myfunc 2 3676 MainThread Hi from myfunc 3 4696 MainThread Hi from myfunc 4 5716 MainThread Hi from myfunc 5 6736 MainThread Hi from myfunc 6 7756 MainThread Hi from myfunc 7 </code></pre>
<python><python-logging><joblib>
2023-04-29 13:18:11
1
1,794
Mehmet Fide
76,136,190
7,789,649
is there a way to specify what should be in a list when passed as a parameter?
<p>I am learning Python. I have a function:</p> <pre><code>def saveQuotesAndData(self,filename:str, items:list): </code></pre> <p>but I want VS Code IntelliSense to know what <code>items[i]</code> will be, so I can write the function with auto-complete instead of manually typing the properties of <code>items[i]</code>.</p> <p>For example, I would like this to auto-complete:</p> <p><code>items[i].QuoteTime</code></p> <p>Maybe something like this would be perfect:</p> <pre><code>def saveQuotesAndData(self,filename:str, items:list&lt;Quote&gt;): </code></pre> <p>where <code>Quote</code> would be the expected class contained in the list.</p> <p>Something as simple as this has been very slow for me in Python so far because there's lots of manual typing and flicking back to class definitions. Any suggestions?</p>
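Python's syntax for this is square brackets rather than angle brackets: `list[Quote]` on Python 3.9+, or `typing.List[Quote]` on older versions. A minimal sketch (the `Quote` class and its `QuoteTime` field are made up here to mirror the question):

```python
from typing import List


class Quote:
    """Hypothetical stand-in for the asker's Quote class."""

    def __init__(self, quote_time: str) -> None:
        self.QuoteTime = quote_time


def save_quotes_and_data(filename: str, items: List[Quote]) -> int:
    # With items annotated as List[Quote], editors such as VS Code/Pylance
    # auto-complete item.QuoteTime here. (Body stubbed out for the sketch.)
    for item in items:
        _ = item.QuoteTime
    return len(items)


print(save_quotes_and_data("quotes.csv", [Quote("09:30"), Quote("09:31")]))  # 2
```

On 3.9+ the annotation can simply be `items: list[Quote]` with no import at all.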
<python><python-3.x><list><python-typing>
2023-04-29 12:21:45
2
344
Luke
76,136,148
8,849,755
plotly's line_group analog in seaborn
<p>The function <a href="https://plotly.com/python-api-reference/generated/plotly.express.line.html" rel="nofollow noreferrer"><code>plotly.express.line</code></a> has an argument that is <code>line_group</code> which allows to specify one column to determine each individual line. I am looking to replicate this behavior with seaborn, so far without success.</p> <p>This is how my pandas data frame looks like:</p> <pre><code> Time (s) Amplitude (V) n_trigger signal_name n_waveform 3 0.000000e+00 0.001249 1 MCP-PMT 3 5.000000e-11 0.001379 1 MCP-PMT 3 1.000000e-10 0.001558 1 MCP-PMT 3 1.500000e-10 0.001764 1 MCP-PMT 3 2.000000e-10 0.002313 1 MCP-PMT ... ... ... ... ... 3332 1.985000e-08 0.004731 1666 DUT 3332 1.990000e-08 0.004847 1666 DUT 3332 1.995000e-08 0.005053 1666 DUT 3332 2.000000e-08 0.004916 1666 DUT 3332 2.005000e-08 0.004648 1666 DUT </code></pre> <p>Using plotly like this</p> <pre class="lang-py prettyprint-override"><code>fig = px.line( data_frame = waveforms.reset_index(drop=False).sort_values(['n_waveform','Time (s)']), x = 'Time (s)', y = 'Amplitude (V)', color = 'signal_name', line_group = 'n_waveform', ) </code></pre> <p>I get the desired plot, which looks like this:</p> <p><a href="https://i.sstatic.net/Sjxln.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Sjxln.png" alt="enter image description here" /></a></p> <p>With seaborn I have this code</p> <pre class="lang-py prettyprint-override"><code>sns.lineplot( data = waveforms, x = 'Time (s)', y = 'Amplitude (V)', hue = 'signal_name', ) </code></pre> <p>which produces this</p> <p><a href="https://i.sstatic.net/PhFkE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PhFkE.png" alt="enter image description here" /></a></p> <p>How can I tell it not to aggregate the data in the y axis, and use the <code>n_waveform</code> column to create individual lines?</p>
<python><plotly><seaborn>
2023-04-29 12:07:52
1
3,245
user171780
76,136,083
63,621
Return value object in FastAPI
<p>I'd like to work with this Python class as a value object that serialises to/from <code>str</code>, with FastAPI:</p> <pre class="lang-py prettyprint-override"><code> class KsuidPrefixed: &quot;&quot;&quot;Class with two fields: a Ksuid and the prefix. Uses the PREFIX_SEPARATOR to separate the prefix from the Ksuid. Also overrides __str__ etc, to make it behave like a string&quot;&quot;&quot; def __init__(self, prefix: str, ksuid: Ksuid): self.prefix = prefix self.ksuid = ksuid def __str__(self) -&gt; str: return f&quot;{self.prefix}{PREFIX_SEPARATOR}{str(self.ksuid)}&quot; def __repr__(self) -&gt; str: return str(self) def __eq__(self, other: object) -&gt; bool: assert isinstance(other, self.__class__) return self.prefix == other.prefix and self.ksuid == other.ksuid def __lt__(self, other: object) -&gt; bool: assert isinstance(other, self.__class__) return self.ksuid &lt; other.ksuid def __hash__(self) -&gt; int: return hash(self.ksuid) </code></pre> <p>However, if I simply return a dict with this id in it:</p> <pre class="lang-py prettyprint-override"><code>intent_id = KsuidSegment(&quot;i&quot;) @api.get(&quot;/{id}&quot;) def read(id: Annotated[KsuidPrefixed, Depends(intent_id)]): results = {&quot;intent_id&quot;: id} # str(id) would work return results </code></pre> <p>I'd get this error:</p> <pre class="lang-py prettyprint-override"><code>&gt; ??? E UnicodeDecodeError: 'utf-8' codec can't decode byte 0x83 in position 3: invalid start byte </code></pre> <p>Can I make FastAPI understand I want to treat KsuidPrefixed as a proper value object, and have it call <code>__str__</code> on it?</p>
<python><fastapi>
2023-04-29 11:52:57
0
9,985
Henrik
76,135,980
1,228,333
Selenium, after open Chrome signed with my account, the python script doesn't seem getting executed
<p>I'm running a simple script which accesses to a page and clicks on a button, but looks like Selenium loses the focus or kind of. This is happening after I set the option to open the browser (Chrome) being signed with my account (since I need one extension to be available, and logged in)</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By # Set up Chrome options chrome_options = Options() chrome_options.headless = False # Set to True if you don't want to see the browser window chrome_options.add_argument(&quot;user-data-dir=/Users/myusername/Library/Application Support/Google/Chrome&quot;) #provide location where chrome stores profiles # Initialize the browser chrome_driver_path = '/Users/myusername/Documents/chromedriver_mac64/chromedriver' driver = webdriver.Chrome(executable_path=chrome_driver_path, chrome_options=chrome_options) # Navigate to the URL url = &quot;https://www.somewhere.com&quot; driver.get(url) def getElAfterRendered(el): return WebDriverWait(driver, 10).until(el) def css(selector): return EC.presence_of_element_located((By.CSS_SELECTOR, selector)) def xpath(selector): return EC.presence_of_element_located((By.XPATH, selector)) css_sel_swap_link = getElAfterRendered(css('#subtitle a')) css_sel_swap_link.click() </code></pre>
<python><google-chrome><selenium-webdriver>
2023-04-29 11:30:02
0
3,161
Donovant
76,135,912
12,968,928
How to define a new arithmetic operation in Python
<p>How can I define a new arithmetic operation in Python?</p> <p>Suppose I want to have an operator <code>?</code> such that <code>A?B</code> results in <code>A</code> to the power of <code>1/B</code>.</p> <p>So the statement below would be true:</p> <p><code>A?B == A**(1/B)</code></p> <p>For instance, in R I can achieve it as follows:</p> <pre><code>`?` &lt;- function(A,B) A^(1/B) 4?2 #2 </code></pre>
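Python does not allow defining brand-new operator tokens like `?`, but an infix-style operator can be emulated by overloading an existing operator (`|` here) on a small wrapper class — a well-known trick, sketched below:

```python
class Infix:
    """Wrap a two-argument function so it can be used as a pseudo-infix operator."""

    def __init__(self, fn):
        self.fn = fn

    def __ror__(self, left):
        # Handles `left | self`: capture the left operand, return a new wrapper.
        return Infix(lambda right: self.fn(left, right))

    def __or__(self, right):
        # Handles `self | right`: apply the captured function.
        return self.fn(right)


root = Infix(lambda a, b: a ** (1 / b))

print(4 | root | 2)  # 2.0, i.e. 4 ** (1/2)
```

So `A |root| B` plays the role of the hoped-for `A?B`; the wrapper name `root` is arbitrary.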
<python>
2023-04-29 11:16:18
2
1,511
Macosso
76,135,606
5,542,049
What is the benefit of using curried/currying function in Functional programming?
<p>If you consider <code>inner_multiply</code> as an initializer of <code>multiply</code>, shouldn't you make them loosely coupled and DI the initializer (or any other way) especially if you require multiple initializers? Or am I misunderstanding the basic concept of curried function in FP?</p> <pre><code>def inner_multiply(x): def multiply(y): return x * y return multiply def curried_multiply(x): return inner_multiply(x) multiply_by_3 = curried_multiply(3) result = multiply_by_3(5) print(result) # Output: 15 (3 * 5) </code></pre>
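For comparison, the standard library already covers this pattern: `functools.partial` produces the same pre-configured callable without hand-writing nested closures, and the result can be passed around (injected) like any other dependency — which is much of the practical benefit of currying:

```python
from functools import partial


def multiply(x, y):
    return x * y


# Partial application: fix the first argument now, supply the rest later.
multiply_by_3 = partial(multiply, 3)
print(multiply_by_3(5))  # 15


# The same idea used as an injected dependency: any (int) -> int callable works.
def apply_to_all(fn, values):
    return [fn(v) for v in values]


print(apply_to_all(multiply_by_3, [1, 2, 4]))  # [3, 6, 12]
```

The closure version in the question and `partial` are interchangeable here; `partial` just makes the "initializer" step explicit and loosely coupled.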
<python><dependency-injection><functional-programming><solid-principles><currying>
2023-04-29 10:07:03
1
845
Tando
76,135,553
5,719,294
Ansible-Lint warning for custom module argument exception
<p>I have a custom module to manage Keycloak realms. That module takes a long list of arguments including a sub-dictionary for smtp_server:</p> <pre class="lang-py prettyprint-override"><code> def keycloak_argument_spec(): return dict( auth_keycloak_url=dict(type='str', aliases=['url'], required=True), auth_client_id=dict(type='str', default='admin-cli'), auth_realm=dict(type='str', required=True), auth_client_secret=dict(type='str', default=None), auth_username=dict(type='str', aliases=['username'], required=True), auth_password=dict(type='str', aliases=['password'], required=True, no_log=True), validate_certs=dict(type='bool', default=True) ) def main(): argument_spec = keycloak_argument_spec() meta_args = dict( state=dict(default='present', choices=['present', 'absent']), realm=dict(type='str', required=True), smtp_server=dict(type='dict', aliases=['smtpServer'], options={ 'host': dict(default='localhost'), 'port': dict(type='int', default=25), 'auth': dict(type='bool', default=False), 'ssl': dict(type='bool', default=False), 'starttls': dict(type='bool', default=False), 'user': dict(no_log=True), 'password': dict(no_log=True), 'envelopeFrom': dict(), 'from': dict(), 'fromDisplayName': dict(), 'replyTo': dict(), 'replyToDisplayName': dict(), }), ) argument_spec.update(meta_args) module = AnsibleModule( argument_spec = argument_spec, supports_check_mode=True ) </code></pre> <p>Here is a task with that module.</p> <pre class="lang-yaml prettyprint-override"><code>- name: &quot;Create realm&quot; keycloak_realm: realm: &quot;{{ keycloak_realm.key }}&quot; smtp_server: host: &quot;{{ keycloak_realm.value.mail.host | default('localhost') }}&quot; port: &quot;{{ keycloak_realm.value.mail.port | default(25) | int }}&quot; starttls: &quot;{{ keycloak_realm.value.mail.starttls | default(false) }}&quot; ssl: &quot;{{ keycloak_realm.value.mail.ssl | default(false) }}&quot; auth: &quot;{{ keycloak_realm.value.mail.auth | default(false) }}&quot; user: &quot;{{ 
keycloak_realm.value.mail.user | default(omit) }}&quot; password: &quot;{{ keycloak_realm.value.mail.password | default(omit) }}&quot; replyTo: &quot;{{ keycloak_realm.value.mail.replyto | default(omit) }}&quot; from: &quot;{{ keycloak_realm.value.mail.from | default(omit) }}&quot; fromDisplayName: &quot;{{ keycloak_realm.value.mail.from_name | default(omit) }}&quot; state: &quot;present&quot; </code></pre> <p>But when I run ansible-lint against a task using that module, I'm getting a strange message.</p> <pre><code>WARNING Ignored exception from ArgsRule.&lt;bound method AnsibleLintRule.matchtasks of args: Validating module arguments.&gt; while processing roles/keycloak/tasks/install/configure_realm.yml (tasks): 'port' </code></pre> <p>I'm using</p> <pre><code>$ ansible-lint --version ansible-lint 6.14.2 using ansible 2.14.1 </code></pre> <p>I've tried using <code>ansible-lint -vv</code> to get that exception. But I don't get the stack trace, so I cannot investigate the problem. When I run the task, everything is fine. But ansible-lint has a problem with that dict.</p> <p>It doesn't have something to do with &quot;port&quot;-attribute, when I rearrange the order of the args in the module, another attribute is a problem (but only int or bool types have a problem). When I redefine &quot;port&quot; to <code>type=str</code> another not str attribute will raise the warning.</p> <p>There is also another way of defining the smtp_server dictionary, like</p> <pre class="lang-py prettyprint-override"><code> smtp_server=dict(type='dict', aliases=['smtpServer'], options=dict( host=dict(default='localhost'), port=dict(type='int', default=25), </code></pre> <p>which I like more, but &quot;from&quot; is a keyword and the warning comes anyway.</p> <p>I also tried with &quot;default(omit)&quot; in the task, because the value is optional and ansible-lint has no idea, what is in <code>keycloak_realm</code>. But it's the same.</p> <p>Any ideas to avoid that warning?</p>
<python><ansible-module><ansible-lint>
2023-04-29 09:50:50
0
1,034
TRW
76,135,497
9,452,512
Manim: Animating row of matrix
<p>I would like to animate a Latex array like this one</p> <pre class="lang-latex prettyprint-override"><code>\begin{matrix} A &amp; B&amp; C \\ A &amp; B&amp; D \\ A &amp; B&amp; F \\ A &amp; KASHDAKSJHD&amp; F \end{matrix} </code></pre> <p>such that manim starts with the first row, transforms it into the second row with <code>TransformMatchingTex</code> and so on.</p> <p>It is important, that the matrix is not split into single rows with <code>MathTex</code> as the overall spacing is important.</p>
<python><manim>
2023-04-29 09:34:13
1
1,473
Uwe.Schneider
76,135,304
19,186,611
Celery dockerfile
<p>I'm using Django and Celery together in one app (or container), but I want to separate Celery into another app (or container). I am not sure how I should do this because my tasks are in the Django app. How should I set the Celery parameters so that the worker can access the tasks?</p>
<python><django><celery>
2023-04-29 08:47:01
1
433
lornejad
76,135,235
2,998,077
Convert image captured from online MJPG streamer by CV2, to Pillow format
<p>From an image captured from an online MJPG streamer with CV2, I want to count its colors with Pillow.</p> <p>What I tried:</p> <pre><code>import cv2 import numpy as np from PIL import Image url = &quot;http://link-to-the-online-MJPG-streamer/mjpg/video.mjpg&quot; cap = cv2.VideoCapture(url) ret, original_image = cap.read() img_cv2 = cv2.cvtColor(original_image, cv2.COLOR_BGR2RGB) img_pil = Image.fromarray(img_cv2) im = Image.open(img_cv2).convert('RGB') na = np.array(im) colours, counts = np.unique(na.reshape(-1,3), axis=0, return_counts=1) print(len(counts)) </code></pre> <p>However, it shows an error.</p> <p>What is the right way to convert the CV2 captured image to Pillow format?</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\Administrator\AppData\Local\Programs\Python\Python38-32\lib\site-packages \PIL\Image.py&quot;, line 3231, in open fp.seek(0) AttributeError: 'numpy.ndarray' object has no attribute 'seek' During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;D:\Python\Script.py&quot;, line 17, in &lt;module&gt; im = Image.open(img_cv2).convert('RGB') File &quot;C:\Users\Administrator\AppData\Local\Programs\Python\Python38-32\lib\site-packages \PIL\Image.py&quot;, line 3233, in open fp = io.BytesIO(fp.read()) AttributeError: 'numpy.ndarray' object has no attribute 'read' </code></pre>
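The traceback comes from passing a NumPy array to `Image.open`, which expects a filename or file object; the array-to-Pillow conversion was already done by `Image.fromarray(img_cv2)`, so the `Image.open` line is the one to drop. The colour count also works on the array directly. A self-contained sketch, using a tiny synthetic image in place of the camera frame (no network or cv2 needed):

```python
import numpy as np

# Stand-in for the cv2 frame: a 2x2 RGB image with three distinct colours.
img_cv2 = np.array([[[255, 0, 0], [255, 0, 0]],
                    [[0, 255, 0], [0, 0, 255]]], dtype=np.uint8)

# If a PIL.Image is needed, Image.fromarray(img_cv2) is the conversion --
# do NOT call Image.open on the array. For counting colours, NumPy suffices:
colours, counts = np.unique(img_cv2.reshape(-1, 3), axis=0, return_counts=True)
print(len(counts))  # 3
```

In the original script, replacing `im = Image.open(img_cv2).convert('RGB')` with `im = img_pil` (or reshaping `img_cv2` directly, as above) should remove the error.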
<python><numpy><opencv><image-processing><python-imaging-library>
2023-04-29 08:26:11
1
9,496
Mark K
76,134,772
12,715,723
Tensorflow Keras TypeError: Eager execution of tf.constant with unsupported shape
<p>My goal is to do <code>Conv2d</code> to an array with a custom shape and custom kernel with this code:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf import numpy as np import sys tf.compat.v1.enable_eager_execution() # kernel kernel_array = np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1]]) kernel = tf.keras.initializers.Constant(kernel_array) print('kernel shape:', kernel_array.shape) print('kernel:',kernel_array) # input input_shape = (3, 3, 4, 1) x = tf.random.normal(input_shape) print('x shape:', x.shape) print('x:',x.numpy()) # output y = tf.keras.layers.Conv2D( 3, kernel_array.shape, padding=&quot;same&quot;, strides=(1, 1), kernel_initializer=kernel, input_shape=input_shape[1:])(x) print('y shape:', y.shape) print('y:',y.numpy()) </code></pre> <p>The above codes give me an error like this:</p> <pre class="lang-py prettyprint-override"><code>kernel shape: (3, 3) kernel: [[1 1 1] [1 1 1] [1 1 1]] x shape: (3, 3, 4, 1) x: [[[[-0.01953345] [-0.7110965 ] [ 0.15634525] [ 0.1707633 ]] [[-0.70654714] [ 2.7564087 ] [ 0.60063267] [ 2.8321717 ]] [[ 1.4761941 ] [ 0.34693545] [ 0.85550934] [ 2.2514896 ]]] [[[ 0.82585895] [-0.6421492 ] [ 1.2688193 ] [-0.9054445 ]] [[ 1.1591591 ] [ 0.7465941 ] [ 1.2531661 ] [ 2.2717664 ]] [[-0.48740315] [-0.42796597] [ 0.4480274 ] [-1.1502023 ]]] [[[-0.7792355 ] [-0.801604 ] [ 1.6724508 ] [ 0.25857568]] [[ 0.09068593] [-0.4783198 ] [-0.02883703] [-2.1400564 ]] [[-0.5518157 ] [-1.4513488 ] [-0.07611077] [ 1.4752681 ]]]] --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[6], line 19 16 print('x:',x.numpy()) 18 # output ---&gt; 19 y = tf.keras.layers.Conv2D( 20 3, kernel_array.shape, padding=&quot;same&quot;, strides=(1, 1), 21 kernel_initializer=kernel, 22 input_shape=input_shape[1:])(x) 23 print('y shape:', y.shape) 24 print('y:',y.numpy()) File 
c:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\utils\traceback_utils.py:70, in filter_traceback..error_handler(*args, **kwargs) 67 filtered_tb = _process_traceback_frames(e.__traceback__) 68 # To get the full stack trace, call: 69 # `tf.debugging.disable_traceback_filtering()` ---&gt; 70 raise e.with_traceback(filtered_tb) from None 71 finally: 72 del filtered_tb File c:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\initializers\initializers.py:265, in Constant.__call__(self, shape, dtype, **kwargs) 261 if layout: 262 return utils.call_with_layout( 263 tf.constant, layout, self.value, shape=shape, dtype=dtype 264 ) --&gt; 265 return tf.constant(self.value, dtype=_get_dtype(dtype), shape=shape) TypeError: Eager execution of tf.constant with unsupported shape. Tensor [[1. 1. 1.] [1. 1. 1.] [1. 1. 1.]] (converted from [[1 1 1] [1 1 1] [1 1 1]]) has 9 elements, but got `shape` (3, 3, 1, 3) with 27 elements). </code></pre> <p>I have no idea where the mistake is. I have tried to change the input shape but still didn't work anymore. What did I miss?</p>
<python><numpy><tensorflow><keras><conv-neural-network>
2023-04-29 06:04:19
1
2,037
Jordy
76,134,758
5,034,651
Docker: issue when using entrypoint+cmd but not when using just cmd
<p>I want to preface this by saying this is not a <a href="https://stackoverflow.com/questions/21553353/what-is-the-difference-between-cmd-and-entrypoint-in-a-dockerfile">general question</a> about the difference between cmd and entrypoint+cmd. I thought I understood the general difference and how to use them but I encountered possibly a more nuanced issue with entrypoint+cmd.</p> <p>I was trying to write a simple image (call this image2) that pulls from another image (call this image1) which basically contains my environment. The purpose of this was that the environment is pretty static but I might want to make nuanced changes to the container that runs the code. The image I was having issues with looks like this:</p> <pre><code>FROM image1 ENTRYPOINT [ &quot;/opt/conda/bin/python&quot; ] CMD [ &quot;/tmp/script.py&quot; ] </code></pre> <p>I wanted to write it this way to restrict the purpose of this container (running a python script). This however would throw an error when I ran it outside the container. It would start the script and run for a bit, but when it would get to some Pyspark code it would result in this:</p> <pre><code>java.io.IOException: Cannot run program &quot;python3&quot;: error=2, No such file or directory </code></pre> <p>Pyspark was suddenly looking to use python3 but I'm not sure why it started looking for that.</p> <p>However, if I change the Dockerfile to the following:</p> <pre><code>FROM image1 CMD /opt/conda/bin/python /tmp/script.py </code></pre> <p>Then it runs fine without error. So I'm wondering if someone can explain why I'm able to do my script with CMD alone but not with ENTRYPOINT.</p>
<python><docker>
2023-04-29 06:00:23
1
616
Ken Myers
76,134,704
14,581,828
How to connect to the active session of mainframe from Python
<p>How do I connect to the active session on the mainframe from Python?</p> <pre class="lang-py prettyprint-override"><code>import win32com.client passport = win32com.client.Dispatch(&quot;PASSPORT.Session&quot;) session = passport.ActiveSession if session.connected(): print(&quot;Session active&quot;) else: print(&quot;Session not active&quot;) </code></pre> <p>It is giving a DLL file missing error when activating the .zws file, and when the session is not active it gives the message <code>&quot;Session not active&quot;</code>.</p> <p>Can someone please help?</p> <p>I am using Python 3.8.</p>
<python><win32com><mainframe>
2023-04-29 05:44:38
1
551
Sabbha
76,134,340
4,107,349
Including additional data in Django DRF serializer response that doesn't need to be repeated every entry?
<p>Our Django project sends GeoFeatureModelSerializer responses and we want to include an additional value in this response for JS to access. We figured out how to do this in serializers.py:</p> <pre class="lang-py prettyprint-override"><code>from rest_framework_gis import serializers as gis_serializers
from rest_framework import serializers as rest_serializers

from core.models import Tablename


class MarkerSerializer(gis_serializers.GeoFeatureModelSerializer):
    new_value = rest_serializers.SerializerMethodField('get_new_value')

    def get_new_value(self, foo):
        return True

    class Meta:
        fields = (&quot;date&quot;, &quot;new_value&quot;)
        geo_field = &quot;geom&quot;
        model = Tablename
</code></pre> <p>JS can get this value with <code>geojson.features[0].properties.new_value</code> where <code>const geojson = await response.json()</code>, but it's unnecessarily added with every entry. We'd like for it to be included only once so JS can access it with something like <code>newResponse.new_value</code> and existing functionality can continue getting the same data via <code>newResponse.geojson</code> or similar.</p> <p>How can we include a single additional value in this response? We thought maybe <a href="https://stackoverflow.com/questions/58326819/wrap-a-django-serializer-in-another-serializer">wrapping our serializer in another</a>, but they seem to be asking a different thing we don't understand. Can we append this somehow? In the serializer can we do something like <code>newResponse = {'new_value': new_value, 'geojson': geojson}</code> somewhere?</p> <p>We've had a dig through <a href="https://www.django-rest-framework.org/api-guide/serializers/" rel="nofollow noreferrer">the Django Rest Framework serializers docs</a> and couldn't work it out, so perhaps we're missing something. Other <a href="https://stackoverflow.com/q/38758962">SO threads</a> seem to only ask about adding data for every entry.</p> <p>edit: we should have mentioned we're using viewsets.py, which looks like:</p> <pre class="lang-py prettyprint-override"><code>class MarkerViewSet(viewsets.ReadOnlyModelViewSet):
    bbox_filter_field = &quot;location&quot;
    filter_backends = (filters.InBBoxFilter,)
    queryset = Marker.objects.all()
    serializer_class = MarkerSerializer
</code></pre>
<python><json><django><django-rest-framework><django-rest-framework-gis>
2023-04-29 03:04:01
3
1,148
Chris Dixon
76,134,265
4,175,822
How can I type hint a dictionary where the key is a specific tuple and the value is known?
<p>How can I type hint a dictionary where the key is a specific tuple and the value is known?</p> <p>For example I want to type hint a dict like this:</p> <pre><code>class A: pass
class B: pass

class_map = {
    (str,): A,
    (int,): B,
}

some_cls = class_map[(str,)]
</code></pre> <p>The use case will be to go from a known set of bases to a class that was previously defined using those bases.</p>
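A minimal sketch of how such a mapping can be annotated with standard `typing` tools. Note this expresses "tuple of classes maps to a class" as a whole; typing has no `TypedDict`-style construct for per-key value types on tuple keys, so narrowing the value per key would usually be done with `@overload`s on an accessor function instead:

```python
from typing import Dict, Tuple, Type


class A: ...
class B: ...

# The key is "a tuple of classes" (variable length), the value is itself a
# class object. Tuple[type, ...] captures the key shape; the value side can
# be the bare `type`, Type[object], or a Union of the concrete classes.
ClassMap = Dict[Tuple[type, ...], Type[object]]

class_map: ClassMap = {
    (str,): A,
    (int,): B,
}

some_cls = class_map[(str,)]
print(some_cls.__name__)  # -> A
```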
<python><python-typing><typeddict>
2023-04-29 02:26:46
1
2,821
spacether
76,134,210
11,065,874
how to create pydantic model with dynamic keys on the nested dictionary?
<p>I have this data structure for which I need to create a pydantic model.</p> <pre><code>a = {
    &quot;name&quot;: &quot;dummy&quot;,
    &quot;countries&quot;: {
        &quot;us&quot;: {
            &quot;newyork&quot;: {
                &quot;address&quot;: &quot;dummy address&quot;
            },
            &quot;boston&quot;: {
                &quot;address&quot;: &quot;dummy address&quot;
            }
        },
        &quot;canada&quot;: {
            &quot;toronto&quot;: {
                &quot;address&quot;: &quot;dummy address&quot;
            }
        }
    }
}
</code></pre> <p>I need to be able to run <code>m = MyModel(**a)</code> or <code>m = MyModel.parse_obj(a)</code> and then run <code>m.dict()</code> and it should return the same dictionary to me. How can I do this?</p> <hr /> <p>What I have tried, following the instructions in here <a href="https://docs.pydantic.dev/usage/models/#custom-root-types" rel="nofollow noreferrer">https://docs.pydantic.dev/usage/models/#custom-root-types</a>, which did not work:</p> <pre><code>a = {
    &quot;name&quot;: &quot;dummy&quot;,
    &quot;countries&quot;: {
        &quot;us&quot;: {
            &quot;newyork&quot;: {
                &quot;address&quot;: &quot;dummy address&quot;
            },
            &quot;boston&quot;: {
                &quot;address&quot;: &quot;dummy address&quot;
            }
        },
        &quot;canada&quot;: {
            &quot;toronto&quot;: {
                &quot;address&quot;: &quot;dummy address&quot;
            }
        }
    }
}

from typing import Dict
from pydantic import BaseModel


class Address(BaseModel):
    address: str


class Cities(BaseModel):
    __root__: Dict[str, Address]

    def __iter__(self):
        return iter(self.__root__)

    def __getitem__(self, item):
        return self.__root__[item]


class Countries(BaseModel):
    __root__: Dict[str, Cities]

    def __iter__(self):
        return iter(self.__root__)

    def __getitem__(self, item):
        return self.__root__[item]


class MyModel(BaseModel):
    name: str
    countries: Dict[str, Countries]


# m = MyModel(**a)
m = MyModel.parse_obj(a)
print(m.dict())
</code></pre> <hr /> <p>Also, I tried this one and it worked:</p> <pre><code>from pydantic import BaseModel
from typing import Dict


class PetName(BaseModel):
    name: str


class Pets(BaseModel):
    __root__: Dict[str, PetName]

    def __iter__(self):
        return iter(self.__root__)

    def __getitem__(self, item):
        return self.__root__[item]


class Person(BaseModel):
    name: str
    pets: Pets


a = Person(**{
    &quot;name&quot;: &quot;amin&quot;,
    &quot;pets&quot;: {
        &quot;dog&quot;: {&quot;name&quot;: &quot;Rex&quot;},
        &quot;cat&quot;: {&quot;name&quot;: &quot;gooli&quot;}
    }
})
print(a.dict())
</code></pre>
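A note on the likely fix, following the pattern of the second (working) snippet: there `Person` declares `pets: Pets` directly, whereas the failing attempt declares `countries: Dict[str, Countries]` — one dict level too many, since `Countries` is already "country name → cities". Changing that field to `countries: Countries` mirrors the working example. A dependency-free sketch of the three-level shape the models have to mirror (plain functions standing in for pydantic validation):

```python
# Pure-Python sketch (no pydantic) of the nesting the model must mirror.
# countries maps country -> city -> address record: exactly three levels,
# so the model chain is MyModel -> Countries -> Cities -> Address, and the
# field on MyModel should be `countries: Countries`, not Dict[str, Countries].
data = {
    "name": "dummy",
    "countries": {
        "us": {"newyork": {"address": "dummy address"}},
        "canada": {"toronto": {"address": "dummy address"}},
    },
}

def parse(d):
    # Round-trip the structure level by level, as the root-type models would.
    return {
        "name": d["name"],
        "countries": {
            country: {city: dict(addr) for city, addr in cities.items()}
            for country, cities in d["countries"].items()
        },
    }

round_tripped = parse(data)
print(round_tripped == data)  # True
```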
<python><pydantic>
2023-04-29 02:05:54
1
2,555
Amin Ba
76,134,147
2,909,897
How do you work around deprecated PRETTYPRINT_REGULAR Flask setting?
<p>The <a href="https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-0" rel="nofollow noreferrer"><em>Version 2.2.0</em></a> Flask release notes mention that the <em>PRETTYPRINT_REGULAR</em> setting is deprecated; however, there isn't an explanation for what to do instead. The Flask changelog links to the <a href="https://github.com/pallets/flask/pull/4692" rel="nofollow noreferrer"><em>pallets/flask#4692</em></a> pull request, which also doesn't explain what replaces <em>PRETTYPRINT_REGULAR</em>. Based on the <a href="https://github.com/pallets/flask/pull/4692/files#diff-f0a5822170a0cb40ccc8b537d36cac7e27a986d9911b3a08c03feab8f9a47a13R318" rel="nofollow noreferrer"><em>docs/config.rst</em></a> example, I guessed that an <code>app.json.prettyprint_regular</code> attribute might work; however, it doesn't.</p> <p>Given that the <em>PRETTYPRINT_REGULAR</em> Flask setting is deprecated since version 2.2.0, how do you enable Flask to pretty-print your flask.jsonify response?</p>
<python><flask>
2023-04-29 01:33:33
1
8,105
mbigras
76,134,060
2,056,201
Getting an Array from Flask to React
<p>I am trying to move this JSON list from Flask to my react app, but I don't understand the complex syntax.</p> <p>In Flask I have</p> <pre><code>@app.route('/data')
def get_time():
    result_t = []
    for k in range(1, 6):
        result_t.append(str(sensor.value))
    return jsonify(result_t)
</code></pre> <p>I can see this in the <code>localhost:5000/data</code> view</p> <pre><code>[&quot;-1.9661&quot;,&quot;-1.9661&quot;,&quot;-1.9663&quot;,&quot;-1.9666&quot;,&quot;-1.9671&quot;]
</code></pre> <p>In React, I want to set a useState to view this array but my code does not work. I can see the data displayed in console.log but it is not getting set at the <code>lst</code> variable. Do I need to decode it with JSON? Please help correct this</p> <pre><code>function App() {
  const [data, setdata] = useState({
    lst: [],
  });

  useEffect(() =&gt; {
    // Using fetch to fetch the api from
    // flask server it will be redirected to proxy
    fetch(&quot;/data&quot;).then((res) =&gt;
      res.json().then((data) =&gt; {
        // Setting a data from api
        console.log(data);
        setdata({
          lst: data,
        });
      })
    );
  }, []);

  const handleButtonClick = () =&gt; {
    fetch(&quot;/data&quot;).then((res) =&gt;
      res.json().then((data) =&gt; {
        console.log('Button clicked!');
        // Setting a data from api
        setdata({
          lst: data,
        });
        console.log(data);
      })
    );
  };

  return (
    &lt;div className=&quot;App&quot;&gt;
      &lt;p&gt;{data[0]}&lt;/p&gt;
    &lt;/div&gt;
  );
}

export default App;
</code></pre>
<javascript><python><html><reactjs><flask>
2023-04-29 00:51:28
1
3,706
Mich
76,133,729
12,011,589
Planet not following orbit in Solar System Simulation
<p>I'm trying to simulate a solar system, but the planets don't seem to follow the orbit, instead, the distance between a planet and the sun increases. I can't figure out what's wrong with this code. Here is an MRE.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt

GRAVITATIONAL_CONSTANT = 6.674e-11
EARTH_MASS = 5.972 * (10**24)
SUN_MASS = 332954.355179 * EARTH_MASS
MERCURY_MASS = 0.06 * EARTH_MASS
AU2M = 1.495979e11
AUD2MS = AU2M / 86400


class SolarSystemBody:
    def __init__(self, name, mass, position, velocity):
        self.name = name
        self.mass = mass
        self.position = np.array(position, dtype=float) * AU2M
        self.velocity = np.array(velocity, dtype=float) * AUD2MS
        self.acceleration = np.zeros(3, dtype=float)

    def update_position(self, dt):
        self.position += self.velocity * dt + 0.5 * self.acceleration * dt**2

    def update_velocity(self, dt):
        self.velocity += self.acceleration * dt

    def gravitational_force(self, sun):
        r = sun.position - self.position
        distance = np.linalg.norm(r)
        direction = r / distance
        force_magnitude = GRAVITATIONAL_CONSTANT * self.mass * sun.mass / distance**2
        return force_magnitude * direction

    def calculate_acceleration(self, sun):
        force = self.gravitational_force(sun)
        self.acceleration = force / self.mass


mercury_position = [0.1269730114807624, 0.281031132701101, 0.01131924496141172]
mercury_velocity = [-0.03126433724097832, 0.01267637703164289, 0.00390363008183905]

sun = SolarSystemBody(&quot;Sun&quot;, SUN_MASS, [0, 0, 0], [0, 0, 0])
mercury = SolarSystemBody(&quot;Mercury&quot;, MERCURY_MASS, mercury_position, mercury_velocity)

dt = 3600 * 24
total_time = 365 * dt
pos = np.zeros((365, 3), dtype=float)
i = 0
for t in np.arange(0, total_time, dt):
    print(np.linalg.norm(sun.position - mercury.position))
    pos[i, :] = mercury.position
    mercury.calculate_acceleration(sun)
    mercury.update_velocity(dt)
    mercury.update_position(dt)
    i += 1

fig = plt.figure()
ax = fig.add_subplot(111, projection=&quot;3d&quot;)
ax.plot(pos[:, 0], pos[:, 1], pos[:, 2])
plt.show()
</code></pre> <p>Getting the initial data using <code>astroquery</code>'s <a href="https://astroquery.readthedocs.io/en/latest/api/astroquery.jplhorizons.HorizonsClass.html" rel="nofollow noreferrer">HorizonsClass</a></p> <pre class="lang-py prettyprint-override"><code>from astropy.time import Time
from astroquery.jplhorizons import Horizons
import numpy as np

sim_start_date = &quot;2023-01-01&quot;
data = []
planet_id = 199  # Mercury

obj = Horizons(id=planet_id, location=&quot;@sun&quot;,
               epochs=Time(sim_start_date).jd, id_type='id').vectors()
name = obj[&quot;targetname&quot;].data[0].split('(')[0].strip()
r = [np.double(obj[xi]) for xi in ['x', 'y', 'z']]
v = [np.double(obj[vxi]) for vxi in ['vx', 'vy', 'vz']]
</code></pre> <p><strong>Update:</strong></p> <p>Apparently mercury was <a href="https://en.wikipedia.org/wiki/Gravity_assist" rel="nofollow noreferrer">slingshotting</a> using the sun, which is why there was a sudden increase in its distance to the sun. The issue seems to be that <code>dt</code> was too big, so the acceleration wasn't getting updated quickly enough. Decreasing dt fixes the issue.</p> <pre class="lang-py prettyprint-override"><code>dt = 360
total_time = 365 * 240 * dt
</code></pre>
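Besides shrinking `dt`, a more stable integrator helps at the same step size. The loop above is effectively a forward scheme where the velocity update uses only the old acceleration; a velocity-Verlet step (position from the current acceleration, velocity from the average of old and new accelerations) is symplectic and keeps orbits bounded much longer. A self-contained sketch in normalized units (G·M = 1, circular orbit of radius 1 — not the question's SI values):

```python
import math

# Two-body velocity-Verlet sketch: start on an exactly circular orbit and
# check that the radius stays near 1 after many periods. A plain Euler-style
# update with the same dt drifts outward, matching the question's symptom.
def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3  # gravity toward the origin, G*M = 1

x, y = 1.0, 0.0          # position on the unit circle
vx, vy = 0.0, 1.0        # circular speed sqrt(GM/r) = 1
ax, ay = accel(x, y)
dt = 0.01

for _ in range(10_000):  # ~16 full orbits (period is 2*pi)
    x += vx * dt + 0.5 * ax * dt * dt
    y += vy * dt + 0.5 * ay * dt * dt
    nax, nay = accel(x, y)           # acceleration at the NEW position
    vx += 0.5 * (ax + nax) * dt      # average old and new accelerations
    vy += 0.5 * (ay + nay) * dt
    ax, ay = nax, nay

radius = math.hypot(x, y)
print(radius)  # stays close to 1.0
```

The same restructuring applies to the class above: call `calculate_acceleration` once more after `update_position` and average it into the velocity update.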
<python><astronomy><astroquery>
2023-04-28 22:51:28
1
913
O_o
76,133,727
3,943,600
column names are dropped when converting a df from pandas to PySpark
<p>I have a json file in below format.</p> <pre><code>{
    &quot;fd&quot;: &quot;Bank Account&quot;,
    &quot;appVar&quot;: [],
    &quot;varMode&quot;: &quot;AABH&quot;,
    &quot;occurred&quot;: [
        {
            &quot;occurredTimes&quot;: 3,
            &quot;sys&quot;: [
                {
                    &quot;varTyp&quot;: &quot;Conf Param&quot;,
                    &quot;varCode&quot;: &quot;P33&quot;
                }
            ],
            &quot;userAssignments&quot;: []
        }
    ]
}
</code></pre> <p>My existing code is developed in pandas but I need to convert the same code to be running on PySpark. For this purpose I'm using the <a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.pandas/index.html" rel="nofollow noreferrer">Pandas API on Spark</a>. As per the API, I'm trying to read a JSON from file and convert the pandas df into a PySpark df as below.</p> <pre><code>import pyspark.pandas as ps
import pandas as pd

df_pandas = pd.read_json('./demoFile/in/all.json', orient='values')
df_pyspark = ps.DataFrame(df_pandas)  # ps.DataFrame([df_pandas])
</code></pre> <p>I need to get the <code>occurred</code> array as it is after the conversion as well. But it returns,</p> <p><strong>Expected Output: before conversion</strong></p> <pre><code>[{'occurredTimes': 3, 'sys': [{'varTyp': 'Conf Param', 'varCode': 'P33'}], 'userAssignments': []}]
</code></pre> <p><strong>Output: after conversion</strong></p> <pre><code>[(3, [Row(varTyp='Conf Param', varCode='P33')], [])]
</code></pre> <p>How can I retain the column names while converting a pandas df into a PySpark df using the above API?</p>
<python><pandas><dataframe><apache-spark><pyspark>
2023-04-28 22:50:59
1
2,676
codebot
76,133,499
13,231,537
Numpy method to get max value of numpy array for multiple ranges
<p>I want to get the maximum and minimum value from a numpy array over multiple ranges. I found <a href="https://stackoverflow.com/questions/35253716/pythonic-way-for-partwise-max-in-a-numpy-array">this</a> solution for getting only the maximum over multiple ranges when the ranges are uniform. However, my ranges are not always uniform.</p> <p>So if there is an array that has the numbers 1-9 and the ranges are the indices <code>[0-2], [3-6], [7-8]</code>, I would like to get the max and min value from each range.</p> <p>I have posted a possible solution below, but I wonder if numpy has a method to do this without a loop.</p> <pre><code>import numpy as np

a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
slices = [np.arange(0, 3), np.arange(3, 7), np.arange(7, 9)]

maxima = np.empty(0)
minima = np.empty(0)
for s in slices:
    maxima = np.concatenate([maxima, [np.max(a[s])]])
    minima = np.concatenate([minima, [np.min(a[s])]])

min_and_max = np.concatenate([minima, maxima])
</code></pre>
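For the record, numpy's ufuncs have a segmented-reduction method, `reduceat`, that does exactly this without a Python loop: each segment runs from one start index to the next (the last one runs to the end of the array), so non-uniform ranges are described just by their start positions.

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
# Ranges [0-2], [3-6], [7-8] are described by their start indices;
# each segment extends to the next start (the last to the end).
starts = np.array([0, 3, 7])

seg_max = np.maximum.reduceat(a, starts)  # max of a[0:3], a[3:7], a[7:]
seg_min = np.minimum.reduceat(a, starts)  # min of the same segments

print(seg_max)  # [3 7 9]
print(seg_min)  # [1 4 8]
```

One caveat: `reduceat` assumes contiguous, ordered segments; if the ranges could overlap or leave gaps, end indices would need to be interleaved into the index list.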
<python><numpy>
2023-04-28 21:54:31
1
858
nikost
76,133,461
3,324,136
Lambda boto3 python - error - ClientError: An error occurred (404) when calling the HeadObject operation: Not Found
<p>I have a Lambda function <code>Lambda1</code> that runs Transcribe on a file, saves it to an s3 bucket and returns a dictionary (which has the location of the srt file created) that is passed out of Lambda1 and to Lambda2 as a destination.</p> <p>Lambda2 then is supposed to form a download object so I can then run a process on this .srt file to convert it to another type of file.</p> <p><strong>Data</strong></p> <pre><code>return {
    'TranscriptionJobName': response['TranscriptionJob']['TranscriptionJobName'],
    'BucketDestination': destination_bucket_name,
    'ObjectKey': job_name + '.srt'
}
</code></pre> <p>A sample of this output looks like this, I have shortened it to show the necessary items.</p> <pre><code>{'responsePayload': {'TranscriptionJobName': 'test_short49',
                     'BucketDestination': 'bucket-1',
                     'ObjectKey': 'test_short49.srt'}}
</code></pre> <p>The lambda2 then receives this information and I parse it with the following code</p> <p><strong>Lambda2 Code</strong></p> <pre><code>s3 = boto3.resource('s3')

def lambda_handler(event, context):
    source_bucket_name = event['responsePayload']['BucketDestination']
    file_name = urllib.parse.unquote_plus(event['responsePayload']['ObjectKey'])
    destination_bucket_name = 'bucket-1'

    # Download the source file from S3
    source_object = s3.Object(source_bucket_name, file_name)
    print(f'The source object is {source_object}')
    source_file = '/tmp/source.srt'
    source_object.download_file(source_file)
</code></pre> <p>I am getting an error on the last line of the code.</p> <pre><code>[ERROR] ClientError: An error occurred (404) when calling the HeadObject operation: Not Found
Traceback (most recent call last):
  File &quot;/var/task/lambda_function.py&quot;, line 32, in lambda_handler
    source_object.download_file(source_file)
  File &quot;/var/runtime/boto3/s3/inject.py&quot;, line 359, in object_download_file
    return self.meta.client.download_file(
  File &quot;/var/runtime/boto3/s3/inject.py&quot;, line 190, in download_file
    return transfer.download_file(
  File &quot;/var/runtime/boto3/s3/transfer.py&quot;, line 326, in download_file
    future.result()
  File &quot;/var/runtime/s3transfer/futures.py&quot;, line 103, in result
    return self._coordinator.result()
  File &quot;/var/runtime/s3transfer/futures.py&quot;, line 266, in result
    raise self._exception
  File &quot;/var/runtime/s3transfer/tasks.py&quot;, line 269, in _main
    self._submit(transfer_future=transfer_future, **kwargs)
  File &quot;/var/runtime/s3transfer/download.py&quot;, line 354, in _submit
    response = client.head_object(
  File &quot;/var/runtime/botocore/client.py&quot;, line 530, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File &quot;/var/runtime/botocore/client.py&quot;, line 960, in _make_api_call
    raise error_class(parsed_response, operation_name)
</code></pre> <p>I have successfully gotten the code to run if I use the .srt file as an S3 trigger. I have printed out the results to see if I have formed something incorrectly. No matter how I form my <code>source_object</code>, I get an error if I pass the information using the Lambda1 function.</p> <p>I am wondering if it could be that the SRT file created from Transcribe has not made it into the S3 bucket before Lambda2 starts looking for it?</p>
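The closing suspicion is plausible: a 404 on `HeadObject` right after an asynchronous Transcribe job often just means the object isn't there yet. boto3 ships a built-in waiter for this (`s3_client.get_waiter("object_exists")`); the retry logic it applies can be sketched as a small dependency-free helper with an injectable check, shown here with a fake poller instead of a real `head_object` call so nothing touches AWS:

```python
import time

# Generic "wait until it exists" helper. In the real Lambda the `check`
# callable would wrap head_object and return True once it stops raising 404
# (or one would use boto3's s3_client.get_waiter("object_exists") directly);
# it is injectable here so the retry logic can be demonstrated offline.
def wait_for(check, attempts=5, delay=0.0):
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay)
    return False

# Simulate an object that only appears on the third poll.
calls = {"n": 0}
def fake_head_object():
    calls["n"] += 1
    return calls["n"] >= 3

found = wait_for(fake_head_object, attempts=5, delay=0.0)
print(found, calls["n"])  # True 3
```

With a real delay (e.g. a few seconds per attempt) this bounds how long Lambda2 blocks while still tolerating the Transcribe write lag.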
<python><amazon-web-services><amazon-s3><aws-lambda>
2023-04-28 21:42:57
0
417
user3324136
76,133,399
600,360
Python object factory repeats constructor arguments multiple times
<p>In writing a python object factory, I'm running into a <em>lot</em> of parameter repetition in the constructors. It feels wrong, like there is a better way to use this pattern. I'm not sure if I should be replacing the parameters with <code>**kwargs</code> or if there is a different design pattern that is more suited to this sort of case.</p> <p>A simplified example is below. The real code is of course more complicated and you can see more reasons why I'd do it this way, but I think this is a reasonable Minimal Reproducible Example</p> <p>External to these classes, for the API, the most important factors are <code>species</code> and <code>subspecies</code>. It happens to be that internally, <code>is_salt_water</code> is important and results in a different object, but that's an internal matter.</p> <pre class="lang-py prettyprint-override"><code>class Fish:
    def __init__(self, species, sub_species, length, weight):  # Repeating this a lot
        self.species = species
        self.sub_species = sub_species
        self.length = length
        self.weight = weight
        self.buoyancy = self._calc_buoyancy()

    def _calc_buoyancy(self):
        raise Exception(&quot;Do not call this abstract base class directly&quot;)


class FreshWaterFish(Fish):
    def __init__(self, species, sub_species, length, weight):  # Repeating this a lot
        self.fresh_water = True
        super().__init__(species, sub_species, length, weight)  # Repeating this a lot

    def _calc_buoyancy(self):
        return 3.2 * self.weight  # totally made-up example. No real-world meaning


class SaltWaterFish(Fish):
    def __init__(self, species, sub_species, length, weight):  # Repeating this a lot
        self.fresh_water = False
        super().__init__(species, sub_species, length, weight)  # Repeating this a lot

    def _calc_buoyancy(self):
        return 1.25 * self.weight / self.length  # totally made-up example. No real-world meaning


def FishFactory(species, sub_species, length, weight, is_salt_water=False):  # Repeating this a lot
    mapper = {False: FreshWaterFish, True: SaltWaterFish}
    return mapper[is_salt_water](species, sub_species, length, weight)  # Repeating this a lot
</code></pre>
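One common way to cut the repetition, sketched below: make the base a dataclass (so the attribute assignments disappear) and have the factory forward `**kwargs`, so the parameter list is written exactly once. The buoyancy formulas are the made-up ones from the question:

```python
from dataclasses import dataclass, field

@dataclass
class Fish:
    species: str
    sub_species: str
    length: float
    weight: float
    buoyancy: float = field(init=False)  # derived, not a constructor argument

    def __post_init__(self):
        self.buoyancy = self._calc_buoyancy()

    def _calc_buoyancy(self):
        raise NotImplementedError

@dataclass
class FreshWaterFish(Fish):
    fresh_water: bool = field(default=True, init=False)

    def _calc_buoyancy(self):
        return 3.2 * self.weight  # made-up formula, as in the question

@dataclass
class SaltWaterFish(Fish):
    fresh_water: bool = field(default=False, init=False)

    def _calc_buoyancy(self):
        return 1.25 * self.weight / self.length  # made-up formula

def fish_factory(is_salt_water=False, **kwargs):
    # The factory never re-lists species/sub_species/length/weight:
    # it only chooses the class and forwards whatever it was given.
    cls = SaltWaterFish if is_salt_water else FreshWaterFish
    return cls(**kwargs)

f = fish_factory(species="salmon", sub_species="atlantic", length=0.8, weight=4.0)
print(type(f).__name__, f.buoyancy)  # FreshWaterFish 12.8
```

The trade-off of `**kwargs` is that the factory's signature is no longer self-documenting; keeping the dataclass fields as the single source of truth mitigates that.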
<python><oop><design-patterns><factory-pattern>
2023-04-28 21:30:59
2
3,599
Mort
76,133,397
5,583,772
How to get a mean of a list of columns in a polars dataframe
<p>I want to get the average of a list of columns within a polars dataframe, but am getting stuck. For example:</p> <pre><code>df = pl.DataFrame({
    'a': [1, 2, 3],
    'b': [4, 5, 6],
    'c': [7, 8, 9]
})

cols_to_mean = ['a', 'c']
</code></pre> <p>This works:</p> <pre><code>df.select(pl.col(cols_to_mean))
</code></pre> <p>In that it returns just those columns, but when I try to calculate the mean, this line</p> <pre><code>df.select(pl.col(cols_to_mean).mean())
</code></pre> <p>Returns the mean of each column (while I want a column the same length as each that is the mean of them both for each row). There isn't an option to pass an axis to the mean function. I also try:</p> <pre><code>df.select(pl.mean(pl.col(cols_to_mean).mean()))
</code></pre> <p>But this produces an error:</p> <pre><code>TypeError: Invalid input for `col`. Expected `str` or `DataType`, got
</code></pre> <p>Is there a way to do this?</p>
<python><dataframe><python-polars>
2023-04-28 21:30:51
2
556
Paul Fleming
76,133,320
13,629,335
Implementing Tcl bgerror in python
<p>This is a follow up question from <a href="https://stackoverflow.com/q/74732580/13629335">this one</a> where it is suggested to implement the <a href="https://www.tcl.tk/man/tcl/TclCmd/bgerror.html" rel="nofollow noreferrer"><code>bgerror</code></a> to actually get a response to what went wrong in the background. However I'm a bit confused, as I understand the documentation it should be sufficient to define a <code>proc</code> named <code>bgerror</code> that will be called by the interpreter, doesn't it ?</p> <p>My attempt below shows how I try to implement it, with no effect at all.</p> <pre><code>import tkinter as tk

tcl = tk.Tk()

#### work around for puts in python
def puts(inp):
    print(inp)

cmd = tcl.register(puts)
tcl.eval(
    'proc puts {args} {'
    + f'{cmd} [join $args &quot; &quot;]'
    + '}')
tcl.call('puts', 'test')  # test puts

# attempt to implement bgerror
tcl.eval('''
proc bgerror {message} {
    set timestamp [clock format [clock seconds]]
    puts &quot;$timestamp: bgerror in $::argv '$message'&quot;
}''')

# setup a failing proc
tcl.eval('''
after 500 {error &quot;this is an error&quot;}
''')

tcl.mainloop()
</code></pre> <p>In addition, it would be nice to know if this is the right approach at all, since the documentation suggests for <em>newer</em> applications to use <a href="https://www.tcl.tk/man/tcl/TclCmd/interp.html#M10" rel="nofollow noreferrer"><code>interp bgerror</code></a>. But again I'm confused, cause there seems to be no indicator where I could find the <code>path</code> for these calls or are they for <em>child interpreter</em> exclusively ?</p>
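A related mechanism worth knowing here: besides `register`, tkinter exposes `createcommand`, which binds a Python callable to a Tcl command name of your choosing, so an error handler can live entirely on the Python side (and for errors raised inside Tk callbacks, tkinter routes them to `Tk.report_callback_exception`, which can be overridden instead). A minimal display-free sketch using `tkinter.Tcl()` (no Tk window needed):

```python
import tkinter

# A Tcl interpreter without loading Tk -- enough to register and call
# Python-backed commands.
tcl = tkinter.Tcl()

seen = []

def my_bgerror(message):
    # Python-side handler; createcommand makes it callable from Tcl code
    # under the chosen name (a proc named bgerror could delegate to it).
    seen.append(message)

tcl.createcommand("my_bgerror", my_bgerror)

# Invoke the registered command from Tcl; the quoted string arrives
# as a single Python str argument.
tcl.eval('my_bgerror "this is an error"')
print(seen)  # ['this is an error']
```

Whether the plain-`bgerror` route fires also depends on the event loop actually running when the `after` script errors; the snippet above only demonstrates the Python-to-Tcl command bridge, not the full background-error dispatch.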
<python><tkinter><tcl><tk-toolkit>
2023-04-28 21:15:24
2
8,142
Thingamabobs
76,133,293
19,369,393
What collection to use that is able to track and remove an object from it in constant time in python?
<p>I need a collection of elements the only purpose of which is to store elements for further enumeration, so the elements don't have to be ordered/indexed in any way. But a simple list is not enough, because I may need to remove any element from the collection, and I need to do it in constant time. <code>deque</code> or other implementation of linked list as data structures would be sufficient for this purpose, because deletion of an element in linked list is a constant time operation. The problem is that python does not provide a way to track a specific node of a linked list (at least I don't know how), so if I need to remove a previously added element from a list, first I need to find it, which is a linear time operation.</p> <p>In C++, there is a <code>std::list</code> with its <code>std::list::iterator</code> that does exactly what I need. The iterator holds the pointer to the node in the linked list. When I add an element to a linked list, I can store the iterator of the just added element and further use it to remove the element from the list in constant time, since I already have a pointer to the linked list node with the pointers to the previous and next elements. It also doesn't matter whether and how many elements were added or deleted before or after the element.</p> <p>So how can it be done in python? Is there some alternative to C++'s <code>std::list::iterator</code>?</p> <p>I also want to notice that I can implement a linked list in python myself, but I'm afraid it would be slower than an implementation written in C. And I also don't want to reinvent the wheel.</p>
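For reference, the `std::list::iterator` idea translates directly: a hand-rolled doubly linked list whose `append` returns the node itself, so the caller holds a handle and removal is O(1) with no search. (For many workloads a plain `dict` keyed by a counter is even simpler: insertion-ordered iteration, O(1) average-case `del d[key]`.) A sketch:

```python
class Node:
    __slots__ = ("value", "prev", "next")
    def __init__(self, value):
        self.value = value
        self.prev = self.next = None

class LinkedList:
    """append() returns the Node; keep it as a handle for O(1) removal,
    much like keeping a std::list::iterator in C++."""
    def __init__(self):
        self.head = self.tail = None

    def append(self, value):
        node = Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            node.prev = self.tail
            self.tail.next = node
            self.tail = node
        return node  # the "iterator"

    def remove(self, node):
        # Unlink in O(1): no search needed, the caller holds the node.
        if node.prev: node.prev.next = node.next
        else: self.head = node.next
        if node.next: node.next.prev = node.prev
        else: self.tail = node.prev
        node.prev = node.next = None

    def __iter__(self):
        cur = self.head
        while cur is not None:
            yield cur.value
            cur = cur.next

lst = LinkedList()
handles = [lst.append(v) for v in "abcd"]
lst.remove(handles[1])  # delete "b" in constant time
print(list(lst))        # ['a', 'c', 'd']
```

It is pure Python, so each operation carries interpreter overhead, but every operation is genuinely O(1), whereas `list.remove`/`deque.remove` are O(n).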
<python><collections><linked-list>
2023-04-28 21:08:31
2
365
g00dds
76,133,139
19,039,483
Custom Decorator over Dash CallBack
<p>I am trying to understand if it is possible to use dash callback with custom decorators. Basic Dash App:</p> <pre><code>from dash import Dash, html, dcc, callback, Output, Input
import plotly.express as px
import pandas as pd

df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/gapminder_unfiltered.csv')

app = Dash(__name__)

app.layout = html.Div([
    html.H1(children='Title of Dash App', style={'textAlign': 'center'}),
    dcc.Dropdown(df.country.unique(), 'Canada', id='dropdown-selection'),
    dcc.Graph(id='graph-content')
])

@callback(
    Output('graph-content', 'figure'),
    Input('dropdown-selection', 'value')
)
def update_graph(value):
    dff = df[df.country == value]
    return px.line(dff, x='year', y='pop')

if __name__ == '__main__':
    app.run_server(debug=True)
</code></pre> <p>Function that I want to use as decorator:</p> <pre><code>def my_decorator(func):
    def wrapper_function(*args, **kwargs):
        begin = time.time()
        func(*args, **kwargs)
        end = time.time()
        print(&quot;Total time taken in : &quot;, func.__name__, (end - begin))
    return wrapper_function
</code></pre> <p>I tried to do the following:</p> <pre><code>@my_decorator
@callback(
    Output('graph-content', 'figure'),
    Input('dropdown-selection', 'value')
)
def update_graph(value):
    dff = df[df.country == value]
    return px.line(dff, x='year', y='pop')
</code></pre> <p>This one updated the dashboard, but it didn't print total time</p> <pre><code>@callback(
    Output('graph-content', 'figure'),
    Input('dropdown-selection', 'value')
)
@my_decorator
def update_graph(value):
    dff = df[df.country == value]
    return px.line(dff, x='year', y='pop')
</code></pre> <p>This one prints time, but the dashboard is not updated.</p> <p>I need dashboard to update and time of callback to be printed. Python seems to support chained decorators, but it doesn't seem possible with @callback. Is there a way to do it? Or should I just copy/paste to each function?</p>
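A note on the second failure mode: `wrapper_function` calls `func(...)` but never returns its result, so the wrapped callback returns `None` and Dash has no figure to render. Returning the value (and using `functools.wraps` so Dash's introspection sees the original function name) should make the `@callback`-outermost ordering work. A framework-free sketch of the corrected decorator:

```python
import functools
import time

def my_decorator(func):
    @functools.wraps(func)  # preserve __name__/__doc__ for introspection
    def wrapper_function(*args, **kwargs):
        begin = time.time()
        result = func(*args, **kwargs)  # keep the value instead of dropping it
        end = time.time()
        print("Total time taken in :", func.__name__, end - begin)
        return result                   # the callback's figure reaches Dash
    return wrapper_function

# Stand-in for the Dash callback body (no Dash needed to show the fix).
@my_decorator
def update_graph(value):
    return f"figure-for-{value}"

out = update_graph("Canada")
print(out)  # figure-for-Canada
```

With this version, `@callback(...)` above `@my_decorator` (the second ordering in the question) both times the call and returns the figure.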
<python><python-3.x><plotly-dash>
2023-04-28 20:39:42
1
315
Thoughtful_monkey
76,133,124
680,268
do not assign default values when creating model from dict in pydantic
<p>I have a pydantic model with optional fields that have default values. I use this model to generate a JSON schema for another tool, so it can know the default values to apply if it chooses to assign a field.</p> <pre><code>class MyModel(BaseModel):
    my_field: Optional[str] = None
    my_other_field: Optional[str] = &quot;aaaaaaaaaaab&quot;

my_dict = {&quot;my_field&quot;: &quot;value&quot;}
my_model = MyModel(**my_dict)

# prints: {&quot;my_field&quot;: &quot;value&quot;, &quot;my_other_field&quot;: &quot;aaaaaaaaaaab&quot;}
print(my_model)
</code></pre> <p>So in this case, I want <code>my_model</code> to have just my_field assigned, and not <code>my_other_field</code> to be assigned. <code>my_other_field</code> may or may not be in the dictionary, so if it's not present, I don't want it to be in the resulting model object.</p> <p>I can not find a way to customize construction of the model from a dict and have it ignore assigning a default.</p> <p>So essentially, I want the defaults to be present when I call MyModel.schema(), but not when I construct the model from a dict. Is this possible?</p> <p>Note that my model is hierarchical and has nested models, but that isn't shown above.</p>
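For reference, pydantic records which fields were explicitly supplied by the caller, and its export methods accept `exclude_unset=True` to dump only those while `.schema()` still reports the defaults, which sounds like exactly this split. A sketch (version-tolerant, since v1 spells the export `.dict()` and v2 `.model_dump()`); this works recursively for nested models too:

```python
from typing import Optional

from pydantic import BaseModel

class MyModel(BaseModel):
    my_field: Optional[str] = None
    my_other_field: Optional[str] = "aaaaaaaaaaab"

m = MyModel(**{"my_field": "value"})

# pydantic tracks the set of caller-supplied fields, so defaults can be
# skipped at export time while still appearing in MyModel.schema().
if hasattr(m, "model_dump"):        # pydantic v2
    dumped = m.model_dump(exclude_unset=True)
else:                               # pydantic v1
    dumped = m.dict(exclude_unset=True)
print(dumped)  # {'my_field': 'value'}
```

The model object itself still carries the default on the attribute (`m.my_other_field`); if the requirement is strictly about what the dict/JSON export contains, `exclude_unset` should suffice.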
<python><pydantic>
2023-04-28 20:37:12
1
10,382
Stealth Rabbi
76,133,090
8,863,779
HuggingChat API?
<p>I am trying to access HuggingChat using requests in Python but I am getting a response code of 500 for some reason. What I have so far is something like this:</p> <pre><code>hf_url = &quot;https://huggingface.co/chat&quot;
resp = session.post(hf_url + f&quot;/conversation/{self.now_conversation}&quot;,
                    json=req_json, stream=True)
</code></pre> <p>The JSON object holds the &quot;inputs&quot; field and other &quot;parameters&quot; such as temperature, top_p, etc.</p> <pre><code>req_json = {
    &quot;inputs&quot;: text,
    &quot;parameters&quot;: {
        &quot;temperature&quot;: temperature,
        &quot;top_p&quot;: top_p,
        ...
</code></pre> <p>Is there no API I can directly use? If not, is there an auth step I am missing here? I inspected the request going from my browser when I chat with HuggingChat and didn't really find anything.</p>
<python><chatbot><huggingface>
2023-04-28 20:31:06
1
1,515
Parth Shah
76,133,040
2,735,009
pool.map() not working on more than 2 CPUs
<p>I have the following piece of code:</p> <pre><code>import sentence_transformers
import multiprocessing
from tqdm import tqdm
from multiprocessing import Pool
import numpy as np

embedding_model = sentence_transformers.SentenceTransformer('sentence-transformers/all-mpnet-base-v2')

data = [[100227, 7382501.0, 'view', 30065006, False, ''],
        [100227, 7382501.0, 'view', 57072062, True, ''],
        [100227, 7382501.0, 'view', 66405922, True, ''],
        [100227, 7382501.0, 'view', 5221475, False, ''],
        [100227, 7382501.0, 'view', 63283995, True, '']]

df_text = dict()
df_text[7382501] = {'title': 'The Geography of the Internet Industry, Venture Capital, Dot-coms, and Local Knowledge - MATTHEW A. ZOOK', 'abstract': '23', 'highlight': '12'}
df_text[30065006] = {'title': 'Determination of the Effect of Lipophilicity on the in vitro Permeability and Tissue Reservoir Characteristics of Topically Applied Solutes in Human Skin Layers', 'abstract': '12', 'highlight': '12'}
df_text[57072062] = {'title': 'Determination of the Effect of Lipophilicity on the in vitro Permeability and Tissue Reservoir Characteristics of Topically Applied Solutes in Human Skin Layers', 'abstract': '12', 'highlight': '12'}
df_text[66405922] = {'title': 'Determination of the Effect of Lipophilicity on the in vitro Permeability and Tissue Reservoir Characteristics of Topically Applied Solutes in Human Skin Layers', 'abstract': '12', 'highlight': '12'}
df_text[5221475] = {'title': 'Determination of the Effect of Lipophilicity on the in vitro Permeability and Tissue Reservoir Characteristics of Topically Applied Solutes in Human Skin Layers', 'abstract': '12', 'highlight': '12'}
df_text[63283995] = {'title': 'Determination of the Effect of Lipophilicity on the in vitro Permeability and Tissue Reservoir Characteristics of Topically Applied Solutes in Human Skin Layers', 'abstract': '12', 'highlight': '12'}

# Define the function to be executed in parallel
def process_data(chunk):
    results = []
    for row in chunk:
        print(row[0])
        work_id = row[1]
        mentioning_work_id = row[3]
        print(work_id)
        if work_id in df_text and mentioning_work_id in df_text:
            title1 = df_text[work_id]['title']
            title2 = df_text[mentioning_work_id]['title']
            embeddings_title1 = embedding_model.encode(title1, convert_to_numpy=True)
            embeddings_title2 = embedding_model.encode(title2, convert_to_numpy=True)
            similarity = np.matmul(embeddings_title1, embeddings_title2.T)
            results.append([row[0], row[1], row[2], row[3], row[4], similarity])
        else:
            continue
    return results

# Define the number of CPU cores to use
num_cores = multiprocessing.cpu_count()

# Split the data into chunks
chunk_size = len(data) // num_cores
chunks = [data[i:i+chunk_size] for i in range(0, len(data), chunk_size)]

# Create a pool of worker processes
pool = multiprocessing.Pool(processes=num_cores)

results = []
with tqdm(total=len(data)) as pbar:
    for i, result_chunk in enumerate(pool.map(process_data, chunks)):
        # Update the progress bar
        pbar.update()
        # Add the results to the list
        results += result_chunk

# Concatenate the results
final_result = results
</code></pre> <p>I am running this code on <code>Amazon Sagemaker</code> and it runs just fine on an instance with 2 CPUs. It gives me the progress bar and everything. But I'd like to run it on a larger instance with more CPUs. But it just sort of hangs with more CPUs and doesn't progress at all. When I finally stop the kernel, I get this error:</p> <pre><code>---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
&lt;ipython-input-18-19449c86abd3&gt; in &lt;module&gt;
      1 results = []
      2 with tqdm(total=len(chunks)) as pbar:
----&gt; 3     for i, result_chunk in enumerate(pool.map(process_data, chunks)):
      4         # Update the progress bar
      5         pbar.update()

/opt/conda/lib/python3.7/multiprocessing/pool.py in map(self, func, iterable, chunksize)
    266         in a list that is returned.
    267         '''
--&gt; 268         return self._map_async(func, iterable, mapstar, chunksize).get()
    269
    270     def starmap(self, func, iterable, chunksize=None):

/opt/conda/lib/python3.7/multiprocessing/pool.py in get(self, timeout)
    649
    650     def get(self, timeout=None):
--&gt; 651         self.wait(timeout)
    652         if not self.ready():
    653             raise TimeoutError

/opt/conda/lib/python3.7/multiprocessing/pool.py in wait(self, timeout)
    646
    647     def wait(self, timeout=None):
--&gt; 648         self._event.wait(timeout)
    649
    650     def get(self, timeout=None):

/opt/conda/lib/python3.7/threading.py in wait(self, timeout)
    550         signaled = self._flag
    551         if not signaled:
--&gt; 552             signaled = self._cond.wait(timeout)
    553         return signaled
    554

/opt/conda/lib/python3.7/threading.py in wait(self, timeout)
    294         try:    # restore state no matter what (e.g., KeyboardInterrupt)
    295             if timeout is None:
--&gt; 296                 waiter.acquire()
    297                 gotit = True
    298             else:

KeyboardInterrupt:
</code></pre> <p>This makes me believe that it's waiting on resources? Not sure. Any help in this regard would be very appreciated. Also, when I run this code, I see a lot of <code>cores</code> created in the <code>Sagemaker</code> file explorer. <a href="https://i.sstatic.net/fBRgI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fBRgI.png" alt="enter image description here" /></a></p>
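One thing worth checking before suspecting the pool itself: with the chunking shown in the question, `len(data) // num_cores` becomes 0 as soon as the instance has more cores than there are rows, and `range()` with a step of 0 raises a `ValueError` inside the list comprehension. A defensive sketch (the helper name is hypothetical) that guards the chunk size:

```python
def make_chunks(data, num_workers):
    # Guard against num_workers > len(data): a chunk size of 0 would make
    # range() raise "range() arg 3 must not be zero".
    chunk_size = max(1, len(data) // num_workers)
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

rows = list(range(5))              # stand-in for the 5-row `data` above
print(len(make_chunks(rows, 2)))   # 3 chunks of size 2, 2, 1
print(len(make_chunks(rows, 16)))  # falls back to 5 chunks of size 1
```

Note that `Pool.map` already splits its iterable into chunks internally, so manual chunking is optional; the sketch only shows how to make the manual version safe at any core count.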
<python><python-3.x><multithreading><multiprocessing><mapreduce>
2023-04-28 20:21:40
1
4,797
Patthebug
76,132,901
468,455
Python to convert data to a png file
<p>We have an intranet/knowledge base site that has an API. I can use that API to pull assets from the backend. That worked as expected... except... I'm not sure what format the return is in. The file is a png file, and it has the png chunks listed like IHDR but can I just convert this (see below) to base64 and push it through PIL or some other renderer to get it to output (I am on a Mac)? Thanks.</p> <pre><code>�PNG IHDR� �AMA�� �a��IDATx^��Ǚ�?;�3����� ;���$ffK�,�-ɒAff��c;�c�cf ]r��]r����oO�Z��+Y���g����S�VuOO�t��٪�ѧ3�QO� �5c} ��\$&amp;�*��&amp;^M�,�u*}5��� 2� �O���b�O#����|6?��R- ۏÑ0��� `G؂�,�&amp;&amp;h�+Bb�ĤH�/!�+��� *�F�Ѫe��=T--���F'���}��L�q�'��Q��bxQo�u�DL�Um�H�-��E�����N���ڝY1H8�u��� ���K�P��K�ma/.�-O Wj x�&lt;N�n�ȅ~�ƣW�c���i��]z�I%���ɠ��N��2}���C��7lѷF}. �����z�Z���v�j�B�B��y��N��m4h$&quot;��H$&quot;G�a2&gt;_'��r��f�\'�� �BjQ˂�^&amp; �MA�!�eC���B&gt;h�$�:�C���L�}�XġUW�i�pG{_k�#�鬶gR�/��m(�,�L���(�q{��b&lt;ڑ�zL�V�6�]]*�K� )����6��Sa�B쳘|s*,Ǔ��&quot;��ժ�R��^+��W��yv��wg�w|����3�]3���^�x���V�x�ꗮ\����߽c�{w�~��-? �����?��]{޼�������;޽�ܺn��έ;`����s�v��n�����^�n ��뷽q���o&lt;獛��}����|��z���/�r��[��z�O�|�����wz�{�}���_~��w^��K/������w���_�x�������|. �޿���/�y </code></pre>
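What is shown above is almost certainly the raw PNG bytes rendered as text: the signature `\x89PNG` followed by chunks like IHDR and IDAT. There is no need to go through base64 — write the raw bytes to disk in binary mode and open the file with Preview, PIL, or any other viewer. A minimal sketch (the `api_response_bytes` value is a stand-in for whatever your API client returns as `bytes`, e.g. `response.content` from requests):

```python
import os
import tempfile

# Every PNG starts with this fixed 8-byte signature.
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

# Stand-in for the raw body the API returned.
api_response_bytes = PNG_SIGNATURE + b"\x00" * 8

# Sanity check that the payload really is a PNG.
print(api_response_bytes.startswith(PNG_SIGNATURE))  # True

# Save in binary mode -- decoding to str and re-encoding corrupts the data.
path = os.path.join(tempfile.gettempdir(), "asset.png")
with open(path, "wb") as f:
    f.write(api_response_bytes)
```

The key point is to keep the response as `bytes` end to end; the moment it is decoded as text (which is what the pasted garbage suggests happened), the binary data is damaged.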
<python><python-imaging-library><png>
2023-04-28 19:57:06
1
6,396
PruitIgoe
76,132,861
1,418,326
How to get value mapping from sklearn Leave One Out encoder?
<p>I am using the <code>LeaveOneOutEncoder</code> from <a href="https://contrib.scikit-learn.org/category_encoders/leaveoneout.html" rel="nofollow noreferrer">category_encoders</a>. I am able to encode my data.</p> <p>How to get a map of the original value and looe_value pair?</p> <p>Original:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>city</th> <th>salary</th> </tr> </thead> <tbody> <tr> <td>san jose</td> <td>120</td> </tr> <tr> <td>san jose</td> <td>100</td> </tr> <tr> <td>austin</td> <td>95</td> </tr> </tbody> </table> </div> <p>Encoded:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>city</th> <th>salary</th> </tr> </thead> <tbody> <tr> <td>110</td> <td>120</td> </tr> <tr> <td>110</td> <td>100</td> </tr> <tr> <td>95</td> <td>95</td> </tr> </tbody> </table> </div> <p>I want to have a map of <code>city_sanjose = 110; city_austin = 95</code>. How to create this map? Do I have to do it manually?</p>
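Assuming what is needed is the transform-time mapping (for rows not seen during fit, leave-one-out encoding reduces to the plain per-category mean of the target), the map can be reconstructed directly from the training data with a groupby — no need to read it off the encoder object:

```python
import pandas as pd

df = pd.DataFrame({"city": ["san jose", "san jose", "austin"],
                   "salary": [120, 100, 95]})

# Per-category mean of the target: this is the value a fitted
# leave-one-out encoder assigns to each category at transform time.
city_map = df.groupby("city")["salary"].mean().to_dict()
print(city_map)  # {'austin': 95.0, 'san jose': 110.0}
```

Note that during `fit_transform` on the training rows themselves, each row's own target is excluded from the mean, so training-time encodings differ row by row and cannot be summarized in a single map.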
<python><scikit-learn><leave-one-out>
2023-04-28 19:50:21
0
1,707
topcan5
76,132,667
1,202,863
Consolidating tuples as a dict based on condition
<p>Input:</p> <pre><code>x = ((1, &quot;A&quot;, 10), (1, &quot;B&quot;, 10), (1, &quot;B&quot;, 10), (1, &quot;C&quot;, 10), (1, &quot;C&quot;, 10),
     (1, &quot;B&quot;, 10), (1, &quot;A&quot;, 10), (1, &quot;A&quot;, 10), (1, &quot;C&quot;, 10), (1, &quot;B&quot;, 10))
</code></pre> <p>Expected output:</p> <pre><code>{'A': [(1, 'A', 10), (7, 'A', 70), (8, 'A', 80)],
 'B': [(2, 'B', 20), (3, 'B', 30), (6, 'B', 60), (10, 'B', 100)],
 'C': [(4, 'C', 40), (5, 'C', 50), (9, 'C', 90)]}
</code></pre> <p>Basically I am grouping my tuples by one of the elements in the tuple.</p> <p>I tried this, but it doesn't feel Pythonic:</p> <pre><code>def consolidate(values):
    res = {}
    for a in values:
        if a[1] in res:
            p = res[a[1]]
            p.append(a)
            res[a[1]] = p
        else:
            res[a[1]] = [a]
    return res
</code></pre>
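The usual idiom for this kind of grouping is `collections.defaultdict` (or `dict.setdefault`), which removes the "is the key already there?" branch entirely. A sketch over a smaller input:

```python
from collections import defaultdict

def consolidate(values):
    # defaultdict(list) creates the empty list on first access,
    # so every iteration is a single append.
    res = defaultdict(list)
    for item in values:
        res[item[1]].append(item)
    return dict(res)

x = ((1, "A", 10), (2, "B", 20), (3, "B", 30), (4, "C", 40))
print(consolidate(x))
# {'A': [(1, 'A', 10)], 'B': [(2, 'B', 20), (3, 'B', 30)], 'C': [(4, 'C', 40)]}
```

`itertools.groupby` is the other standard tool, but it only groups consecutive equal keys, so the input would need sorting by the grouping element first; for unsorted data the `defaultdict` version is simpler.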
<python><list><dictionary><tuples><generator>
2023-04-28 19:20:26
1
586
Pankaj Singh
76,132,614
8,401,294
Class attributes as dictionary is not reseting on new instance in Python
<p>First example:</p> <pre><code>class StatsCourseBuilder():
    test = 0

novoA = StatsCourseBuilder()
novoA.test += 1
novoA.test += 1

novoB = StatsCourseBuilder()
print(novoB)
novoB.test += 1
</code></pre> <p>Second example:</p> <pre><code>class StatsCourseBuilder():
    test = {'name' : 0}

novoA = StatsCourseBuilder()
novoA.test['name'] += 1
novoA.test['name'] += 1

novoB = StatsCourseBuilder()
print(novoB)
novoB.test['name'] += 1
</code></pre> <p>Results:</p> <ol> <li>First example: <code>1</code></li> <li>Second example: <code>3</code></li> </ol> <p>Why is this happening? Is the fact that I have a dictionary as the value of my attribute impacting the behavior?</p> <p>Python documentation: <a href="https://docs.python.org/3/tutorial/classes.html#class-and-instance-variables" rel="nofollow noreferrer">https://docs.python.org/3/tutorial/classes.html#class-and-instance-variables</a></p>
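The difference is rebinding versus in-place mutation: `+= 1` on an int *assigns* a new instance attribute that shadows the class attribute, while `test['name'] += 1` *mutates* the single dict object shared at class level, so every instance sees the change. A sketch that makes this visible, plus the usual `__init__` fix:

```python
class WithInt:
    test = 0

class WithDict:
    test = {"name": 0}

a = WithInt()
a.test += 1               # rebinds: creates an instance attribute shadowing the class one
print(WithInt.test)       # 0 -- the class attribute is untouched
print("test" in vars(a))  # True -- `a` now carries its own attribute

b = WithDict()
b.test["name"] += 1       # mutates the one shared dict in place
print(WithDict.test)      # {'name': 1} -- visible to every instance

# The usual fix: give each instance its own dict in __init__
class PerInstance:
    def __init__(self):
        self.test = {"name": 0}

p, q = PerInstance(), PerInstance()
p.test["name"] += 1
print(q.test)             # {'name': 0}
```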
<python><oop>
2023-04-28 19:12:30
0
365
José Victor
76,132,050
3,380,902
AttributeError: module 'pyspark.dbutils' has no attribute 'fs'
<p><code>AttributeError</code> when attempting to transfer files from <code>dbfs</code> filestore in DataBricks to a local directory.</p> <pre><code>import pyspark.dbutils as pdbutils

pdbutils.fs.cp(&quot;/dbfs/Data/file1.csv&quot;, &quot;/Users/Downloads/&quot;)

Traceback (most recent call last):
  File &quot;/databricks/python/lib/python3.9/site-packages/IPython/core/interactiveshell.py&quot;, line 3378, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File &quot;&lt;command-1495140876495465&gt;&quot;, line 4, in &lt;module&gt;
    pdbutils.fs.cp(&quot;/dbfs/Usage-Data/df_tx_0106_2022.csv&quot;, &quot;/Users/keval/Downloads/&quot;)
AttributeError: module 'pyspark.dbutils' has no attribute 'fs'
</code></pre>
<python><databricks>
2023-04-28 17:43:48
0
2,022
kms
76,132,035
1,212,596
boto3 get_secret_value not working with ARN
<p><a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/secretsmanager/client/get_secret_value.html" rel="nofollow noreferrer">https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/secretsmanager/client/get_secret_value.html</a></p> <blockquote> <p>SecretId (string) – [REQUIRED]</p> <p>The ARN or name of the secret to retrieve.</p> <p>For an ARN, we recommend that you specify a complete ARN rather than a partial ARN. See Finding a secret from a partial ARN.</p> </blockquote> <p>I can't get this to work with the ARN though.</p> <pre class="lang-py prettyprint-override"><code>import boto3 boto3.client(&quot;secretsmanager&quot;).get_secret_value(SecretId=&quot;arn:aws:secretsmanager:us-east-1:260890374087:secret/Datadog/ApiKey-s3xUqf&quot;) </code></pre> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/Library/Python/3.9/site-packages/botocore/client.py&quot;, line 530, in _api_call return self._make_api_call(operation_name, kwargs) File &quot;/Library/Python/3.9/site-packages/botocore/client.py&quot;, line 960, in _make_api_call raise error_class(parsed_response, operation_name) botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the GetSecretValue operation: Invalid name. Must be a valid name containing alphanumeric characters, or any of the following: -/_+=.@! </code></pre> <p>How do I retrieve the secret with the ARN?</p>
<python><amazon-web-services><boto3>
2023-04-28 17:42:14
4
84,149
Paul Draper
76,132,019
5,858,752
Overriding a method definition with a method from another class?
<pre><code>class test:
    def func(self):
        print(5)

class test1:
    def testing(self, t):
        t.func = self.another_func

    def another_func(self):
        print(6)

t = test()
t1 = test1()
t1.testing(t)

# now t.func() uses test1's another_func and prints 6
t.func()
</code></pre> <p>This is a simple example I came up with for a larger problem I'm working on at work where this use case came up. I don't think I've seen function override of this sort done before, so while this can be done with seemingly the results that I'm seeking, should it be done? It feels hacky so I'm not sure if this is considered bad practice.</p> <p>It also feels this makes readability more difficult because one might look at the original <code>func</code> definition and not know it was overridden</p>
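One subtlety in the snippet above: `t.func = self.another_func` stores a method that is still bound to `t1`, so `self` inside it refers to `t1`, not to `t`. If the replacement should act on the patched instance, the explicit idiom is `types.MethodType`. A sketch with hypothetical names:

```python
import types

class Widget:
    def func(self):
        return 5

def replacement(self):
    # 'self' here is the instance being patched, not the patching object
    return 6

w = Widget()
print(w.func())  # 5

# Bind the replacement to this one instance only.
w.func = types.MethodType(replacement, w)
print(w.func())  # 6

# Other instances keep the original behaviour.
print(Widget().func())  # 5
```

This per-instance monkey-patching is legal Python, and it is exactly as hard to discover as the question suggests — which is why subclassing or passing a callable in explicitly is usually preferred outside of tests and quick fixes.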
<python><python-class>
2023-04-28 17:39:43
0
699
h8n2
76,131,865
11,653,374
Numerically check the gradient is a linear operator
<p>Given the following two snippets of code, the first one is designed to iterate over each MNIST sample individually, calculate the loss and find the individual gradients. At the end, all the individual gradients are added and divided by the length of the training dataset to calculate the full gradient (batch). In the second snippet of code, the entire dataset is used as a mini-batch, resulting in a batch gradient calculation. Mathematically, both methods should provide the same result. However, when checking the norm square of the first and second gradients, they are very different. Why does the first method differ from the second method?</p> <p><strong>Code snippet 1:</strong></p> <pre><code>import torch
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torch import nn

# Download MNIST dataset
train_dataset = datasets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), download=True)

# Define LeNet-300-100-10 model
class LeNet(torch.nn.Module):
    def __init__(self):
        super(LeNet, self).__init__()
        self.fc1 = torch.nn.Linear(28 * 28, 300)
        self.fc2 = torch.nn.Linear(300, 100)
        self.fc3 = torch.nn.Linear(100, 10)

    def forward(self, x):
        x = x.view(-1, 28 * 28)
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = self.fc3(x)
        return x

# Initialize model with seed 0
torch.manual_seed(0)
model = LeNet()
device = torch.device(&quot;cuda:0&quot; if torch.cuda.is_available() else &quot;cpu&quot;)
model.to(device)

# Initialize grad_sum
grad_sum = [torch.zeros_like(param) for param in model.parameters()]

criterion = nn.CrossEntropyLoss()

for i, (images, labels) in enumerate(train_dataset):
    # Forward pass
    labels = torch.tensor(labels)
    images, labels = images.to(device), labels.to(device)
    outputs = model(images)

    # Calculate loss
    loss = criterion(outputs, labels.unsqueeze(0))

    # Zero out gradient
    model.zero_grad()

    # Backward pass for one image
    loss.backward()

    # Add gradient to grad_sum
    for i, param in enumerate(model.parameters()):
        grad_sum[i] += param.grad

grad = [grad_sum[i] / len(train_dataset) for i in range(len(grad_sum))]

norm_sq = 0
for p_grad in grad:
    norm_sq += torch.norm(p_grad) ** 2
print(norm_sq)
</code></pre> <p><strong>Code snippet 2:</strong></p> <pre><code>import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision.transforms import ToTensor

# Define LeNet-300-100-10 model
class LeNet(nn.Module):
    def __init__(self):
        super(LeNet, self).__init__()
        self.fc1 = nn.Linear(28 * 28, 300)
        self.fc2 = nn.Linear(300, 100)
        self.fc3 = nn.Linear(100, 10)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = x.view(-1, 28 * 28)
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        x = self.fc3(x)
        return x

# Load MNIST dataset
train_dataset = MNIST(root='./data', train=True, download=True, transform=ToTensor())

# Create DataLoader for training dataset
train_loader = DataLoader(train_dataset, batch_size=len(train_dataset), shuffle=True)

# Move data to device (GPU if available)
device = torch.device(&quot;cuda:0&quot; if torch.cuda.is_available() else &quot;cpu&quot;)
torch.manual_seed(0)

# Create model instance
model = LeNet().to(device)

# Define loss function and optimizer
criterion = nn.CrossEntropyLoss()

# Forward pass
for images, labels in train_loader:
    images, labels = images.to(device), labels.to(device)

    # Calculate loss
    outputs = model(images)
    loss = criterion(outputs, labels)

    # Zero out gradients
    model.zero_grad()

    # Backward pass
    loss.backward()

for param in model.parameters():
    norm_sq += torch.norm(param.grad) ** 2
print(norm_sq)
</code></pre>
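The identity being tested here — that the average of per-sample gradients equals the gradient of the mean loss, because the gradient is a linear operator — can be checked cheaply outside PyTorch. A numpy sketch for a linear least-squares loss:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))   # 50 samples, 3 features
y = rng.normal(size=50)
w = rng.normal(size=3)

# Per-sample loss l_i = (x_i @ w - y_i)**2 with gradient 2*(x_i @ w - y_i)*x_i
per_sample_grads = 2 * (X @ w - y)[:, None] * X
mean_of_grads = per_sample_grads.mean(axis=0)

# Gradient of the mean loss L = (1/n) * sum_i (x_i @ w - y_i)**2
batch_grad = 2 * X.T @ (X @ w - y) / len(y)

print(np.allclose(mean_of_grads, batch_grad))  # True
```

If the two PyTorch snippets still disagree after confirming both models have identical weights (same seed consumed at the same point) and after accounting for `CrossEntropyLoss`'s default `reduction='mean'`, the residual difference is typically float32 accumulation order rather than a mathematical discrepancy.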
<python><pytorch>
2023-04-28 17:17:27
0
728
Saeed
76,131,842
1,076,493
Python requests module slow due to high number of internal function calls
<p>On one of my machines the Python module <code>requests</code> is extremely slow, seemingly due to a very high number of internal function calls. What could lead to such an issue?</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import sys, cProfile, requests, urllib3
&gt;&gt;&gt; sys.version
'3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0]'
&gt;&gt;&gt; requests.__version__
'2.28.1'  # same issue with latest 2.29.0 release
&gt;&gt;&gt; cProfile.run('requests.get(&quot;http://127.0.0.1:8080&quot;)', sort='ncalls')
         11717750 function calls (11717748 primitive calls) in 10.091 seconds

   Ordered by: call count

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
  5220221    0.770    0.000    0.775    0.000 {method 'read' of '_io.TextIOWrapper' objects}
  5220221    0.801    0.000    0.801    0.000 shlex.py:68(punctuation_chars)
   468483    0.364    0.000    9.564    0.000 shlex.py:101(get_token)
   401557    7.612    0.000    9.188    0.000 shlex.py:133(read_token)
   200797    0.059    0.000    0.059    0.000 {method 'startswith' of 'str' objects}
    66931    0.012    0.000    0.012    0.000 {method 'popleft' of 'collections.deque' objects}
    66926    0.012    0.000    0.012    0.000 {method 'appendleft' of 'collections.deque' objects}
    66926    0.049    0.000    0.061    0.000 shlex.py:72(push_token)
[..]  # remaining calls with significantly lower numbers
&gt;&gt;&gt; cProfile.run('http=urllib3.PoolManager(); http.request(&quot;GET&quot;, &quot;http://127.0.0.1:8080&quot;)', sort='ncalls')
         784 function calls (783 primitive calls) in 0.005 seconds
</code></pre> <p>On other machines, running the same version of Python and tested with the same versions of <code>requests</code>, things look about right:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; cProfile.run('requests.get(&quot;http://127.0.0.1:8080&quot;)', sort='ncalls')
         2432 function calls (2396 primitive calls) in 0.005 seconds
</code></pre> <ul> <li>Above tests were done using <code>python3 -m http.server 8080 --bind 127.0.0.1</code>, but same results for other HTTP servers</li> <li>Judging by network traffic in <code>tcpdump</code>, most of the time is spent before a TCP connection is even created</li> <li>As with <code>urllib3</code>, completely unrelated tools such as <code>curl</code> work just fine</li> <li>While tests were made on IPv4 addresses, actively disabling IPv6 for <code>requests</code> using <code>requests.packages.urllib3.util.connection.HAS_IPV6 = False</code> did not have any effect either</li> <li>Tested with clean virtualenvs</li> </ul>
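The `shlex` frames (together with the `.netrc` tag) point at netrc parsing: when no explicit auth is given and `trust_env` is enabled, requests consults `~/.netrc` via Python's `netrc` module, which tokenizes the file with `shlex`, so a very large `.netrc` makes every request slow. Two hedged workarounds, assuming that is indeed the cause on the affected machine:

```python
import requests

# Option 1: disable environment lookups (proxies, ~/.netrc) for a session.
s = requests.Session()
s.trust_env = False   # requests will no longer read ~/.netrc
# s.get("http://127.0.0.1:8080")

# Option 2: supplying explicit auth also bypasses the netrc lookup,
# since netrc is only consulted when no credentials are given.
# requests.get("http://127.0.0.1:8080", auth=("user", "pass"))
```

A quick way to confirm the diagnosis is to check the size of `~/.netrc` (or the file pointed to by the `NETRC` environment variable) on the slow machine.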
<python><python-requests><.netrc>
2023-04-28 17:14:30
1
10,332
timss
76,131,670
8,942,319
Python, how to ensure object is garbage collected?
<p>I have an api client instance needed for some work. I'll need to run several iterations of this work with a change in target directories each iteration.</p> <p>Simple example. Obviously, the work we're doing is not just printing the client properties.</p> <pre><code>while x &lt; y:
    SUB_DIR = f&quot;dir_{x}&quot;
    dir_client = AzDirectoryClient(
        file_system_name=&quot;my-container&quot;,
        directory_name=SUB_DIR
    )
    print(dir_client.get_directory_properties())
    x += 1
</code></pre> <p>The <code>dir_client</code> creation happens each iteration. There is no way to change the <code>directory_name</code> for an existing client as I understand it (Azure datalake gen2 python sdk, DataLakeDirectoryClient class).</p> <p>Should each iteration delete the existing client (<code>del</code>) and create the new one? Will it be garbage collected if the name remains the same?</p> <p>The number of iterations will be significant.</p>
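In CPython, rebinding the name each iteration is enough: once `dir_client` is reassigned, the previous client's reference count drops to zero and it is reclaimed immediately, with no explicit `del` needed (unless something else still references it — e.g. an open connection pool, which is a reason to call the SDK's `close()` or use it as a context manager if it supports one). A sketch with a dummy stand-in class that makes the collection observable via `weakref`:

```python
import gc
import weakref

class DummyClient:
    """Stand-in for the real directory client."""
    def __init__(self, directory_name):
        self.directory_name = directory_name

refs = []
client = None
for x in range(3):
    client = DummyClient(f"dir_{x}")   # rebinding drops the previous client
    refs.append(weakref.ref(client))

gc.collect()  # not needed on CPython for this case, but makes it explicit
# Only the most recently bound client is still alive:
print([r() is None for r in refs])  # [True, True, False]
```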
<python><azure><garbage-collection><azure-data-lake-gen2>
2023-04-28 16:49:40
1
913
sam
76,131,622
5,091,720
Itsdangerous security - TypeError: unsupported operand type(s) for +: 'int' and 'bytes'
<p>I am using Python 3.9 and itsdangerous 2.1.2. I was testing things in the terminal and it does not appear to be working. This is my first experience with itsdangerous so maybe I don't understand it.</p> <p>I want to get a token that can be emailed for when the user clicks on [forgot password].</p> <p>My code in terminal:</p> <pre><code>&gt;&gt;&gt; from itsdangerous import URLSafeTimedSerializer as Serializer
&gt;&gt;&gt; s = Serializer('secret', 300)
&gt;&gt;&gt; token = s.dumps({'user_id': 5}).decode('utf-8')
</code></pre> <p>From above I get an error:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
  File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt;
  File &quot;C:\...\flask_env\lib\site-packages\itsdangerous\serializer.py&quot;, line 208, in dumps
    rv = self.make_signer(salt).sign(payload)
  File &quot;C:\...\flask_env\lib\site-packages\itsdangerous\timed.py&quot;, line 55, in sign
    return value + sep + self.get_signature(value)
  File &quot;C:\...\flask_env\lib\site-packages\itsdangerous\signer.py&quot;, line 209, in get_signature
    key = self.derive_key()
  File &quot;C:\...\flask_env\lib\site-packages\itsdangerous\signer.py&quot;, line 195, in derive_key
    bytes, self.digest_method(self.salt + b&quot;signer&quot; + secret_key).digest()
TypeError: unsupported operand type(s) for +: 'int' and 'bytes'
</code></pre> <p>After seeing <a href="https://stackoverflow.com/a/75197102/5091720">this answer</a> I changed to the below.</p> <pre><code>&gt;&gt;&gt; from itsdangerous import URLSafeTimedSerializer as Serializer
&gt;&gt;&gt; s = Serializer('secret')
&gt;&gt;&gt; token = s.dumps({'user_id': 5})
&gt;&gt;&gt; s.loads(token)
{'user_id': 5}
&gt;&gt;&gt; token
'eyJ1c2VyX2lkIjo1fQ.ZEv06A.rc99R7V53CJ1XDM0sk6VJjMFdjQ'
</code></pre> <p>Part of the concept of the integer in <code>s = Serializer('secret', 300)</code> is to limit the time for the token to work. If I change to the 2nd option will the <code>Serializer</code> go to a default time out? How could I make it work better?</p> <p>Is there any concern with having this be the code of a token for a flask app? If it does seem wrong how should I do it?</p> <pre><code>def get_reset_token(self):
    s = Serializer(app.config['SECRET_KEY'])
    return s.dumps({'user_id': self.id})
</code></pre>
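In itsdangerous 2.x the expiry moved from construction to verification time: `URLSafeTimedSerializer` takes no lifetime argument (the `300` is being consumed as the *salt* parameter, which is why `self.salt + b"signer"` blows up with `int + bytes`), and the lifetime is given as `max_age` when the token is *loaded*. A sketch:

```python
from itsdangerous import URLSafeTimedSerializer, SignatureExpired

s = URLSafeTimedSerializer('secret')   # no lifetime at construction
token = s.dumps({'user_id': 5})        # already a str in 2.x, no .decode() needed

# The embedded timestamp is checked at load time: here the token must be
# younger than 300 seconds, otherwise SignatureExpired is raised.
try:
    data = s.loads(token, max_age=300)
    print(data)  # {'user_id': 5}
except SignatureExpired:
    print('token expired')
```

So there is no "default timeout": without `max_age`, a token never expires. The `get_reset_token` pattern shown is fine as long as the corresponding verification call passes `max_age` and catches `SignatureExpired`/`BadSignature`.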
<python><itsdangerous>
2023-04-28 16:41:22
2
2,363
Shane S
76,131,401
17,176,270
Django form error "this field is required" after HTMX request
<p>I'm using HTMX lib to send the requests with Django. The issue is that after a successful htmx request with <code>project_form</code>, the swapped-in <code>project_detail_form</code> comes with &quot;this field is required&quot;. What am I doing wrong?</p> <p>This is my views.py:</p> <pre><code>def index(request):
    &quot;&quot;&quot;Index view.&quot;&quot;&quot;
    project_form = ProjectForm(data=request.POST or None)
    project_detail_form = ProjectDetailForm(data=request.POST or None)
    if request.method == &quot;POST&quot;:
        try:
            if project_form.is_valid():
                project_form.save()
                return render(
                    request,
                    &quot;monitor/_project_detail_form.html&quot;,
                    context={&quot;project_detail_form&quot;: project_detail_form},
                )
        except Exception as e:
            return HttpResponse(f&quot;Error: {e}&quot;)
    return render(request, &quot;monitor/index.html&quot;, context={&quot;project_form&quot;: project_form})
</code></pre> <p>A part of index.html</p> <pre><code>&lt;form class=&quot;pt-4&quot; hx-post=&quot;/&quot; hx-target=&quot;this&quot; hx-swap=&quot;innerHTML&quot; enctype=&quot;multipart/form-data&quot;&gt;
    {% csrf_token %}
    {{ project_form.as_div }}
    &lt;button class=&quot;btn btn-primary&quot; type=&quot;submit&quot;&gt;Save&lt;/button&gt;
&lt;/form&gt;
</code></pre> <p>_project_detail_form.html</p> <pre><code>{% csrf_token %}
{{ project_detail_form.as_div }}
&lt;button class=&quot;btn btn-primary&quot; type=&quot;submit&quot;&gt;Save&lt;/button&gt;
</code></pre> <p>forms.py</p> <pre><code>class ProjectForm(forms.ModelForm):
    &quot;&quot;&quot;Project form.&quot;&quot;&quot;

    name = forms.CharField(widget=forms.TextInput(attrs={&quot;class&quot;: &quot;form-control&quot;}))
    check_interval = forms.IntegerField(
        widget=forms.NumberInput(attrs={&quot;class&quot;: &quot;form-control&quot;})
    )
    is_active = forms.BooleanField(
        required=False, widget=forms.CheckboxInput(attrs={&quot;class&quot;: &quot;form-check&quot;})
    )

    class Meta:
        &quot;&quot;&quot;Meta.&quot;&quot;&quot;

        model = Project
        fields = &quot;__all__&quot;


class ProjectDetailForm(forms.ModelForm):
    &quot;&quot;&quot;Project Detail form.&quot;&quot;&quot;

    project = forms.ModelChoiceField(
        queryset=Project.objects.all(),
        widget=forms.Select(attrs={&quot;class&quot;: &quot;form-select&quot;}),
    )
    url = forms.URLField(widget=forms.URLInput(attrs={&quot;class&quot;: &quot;form-control&quot;}))
    pagination = forms.IntegerField(
        required=False, widget=forms.NumberInput(attrs={&quot;class&quot;: &quot;form-control&quot;})
    )
    is_active = forms.BooleanField(
        required=False, widget=forms.CheckboxInput(attrs={&quot;class&quot;: &quot;form-check&quot;})
    )

    class Meta:
        &quot;&quot;&quot;Meta.&quot;&quot;&quot;

        model = ProjectDetail
        fields = [&quot;project&quot;, &quot;url&quot;, &quot;pagination&quot;, &quot;is_active&quot;]
</code></pre> <p><a href="https://i.sstatic.net/3hWlA.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3hWlA.gif" alt="live demo" /></a></p>
<python><django><python-requests><django-forms><htmx>
2023-04-28 16:09:31
1
780
Vitalii Mytenko
76,131,280
5,252,492
Python Scipy: Force Least Squares to use Positive Integer Values when minimizing
<p>I'm able to create a minimal solution for a weight multiplication problem below using <code>scipy.optimize.lsq_linear</code>.</p> <pre><code>Actual = [x,y,z] * [i,j,k]
Difference = Expected - Actual
Difference is sent to the Least Squares Minimizer
</code></pre> <p><a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.lsq_linear.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.lsq_linear.html</a></p> <p>How do I force the target vector (output) to only use positive integers, say between 0 and 100?</p> <p>I understand that I will get a non-optimal solution.</p>
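`lsq_linear` has no integrality constraint (the true integer least-squares problem is a mixed-integer quadratic program, which scipy does not ship a solver for), so a common heuristic is to solve the bounded continuous problem and then round within the same bounds. A sketch on random data:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
A = rng.random((6, 3))
b = rng.random(6) * 100

# Solve the bounded continuous problem first...
res = lsq_linear(A, b, bounds=(0, 100))

# ...then round to the nearest integer inside the same bounds.
# This is a heuristic: the rounded vector is feasible but not
# guaranteed to be the best integer solution.
x_int = np.clip(np.round(res.x), 0, 100).astype(int)
print(x_int)
```

For a guaranteed-optimal integer solution you would need a MIQP solver from outside scipy (e.g. a modeling library backed by one), or brute force when the search space is tiny.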
<python><scipy><least-squares>
2023-04-28 15:52:59
0
6,145
azazelspeaks
76,131,271
13,391,350
Add blank rows to transposed dataframe
<p>Goal: Insert blank rows in between rows.</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(
    {
        'revenue_month': ['Jan', 'Feb', 'Mar'],
        'new_invoice_net': [1000, 2000, 3000],
        'new_accrual': [500, 1000, 1500],
        'tax': [100, 200, 300],
        'total_new_invoice_gross': [1500, 2500, 3500],
        'total_new_invoice_net': [1200, 2200, 3200],
        'reversal_of_accruals': [200, 400, 600],
        'new_revenue': [900, 1800, 2700],
        'net_revenue_in_period': [800, 1600, 2400],
        'change_in_net_revenue': [100, 200, 300],
        'total_accruals': [600, 1200, 1800],
        'settlement': [300, 600, 900],
        'account_revenue': [500, 1000, 1500],
    }
)

df_transposed = df.set_index('revenue_month').transpose()
</code></pre> <p>I want to transpose df based on revenue_month, and insert empty row between <code>total new invoice net</code> and <code>reversal of accruals</code> and between <code>change in net revenue</code> and <code>total accruals</code>.</p> <p>This code below returns <code>KeyError</code>:</p> <pre class="lang-py prettyprint-override"><code># determine the row indices where you want to insert empty rows
row_indices = [transposed_df.index.get_loc('total_new_invoice_net')+1,
               transposed_df.index.get_loc('reversal_of_accruals'),
               transposed_df.index.get_loc('change_in_net_revenue')+1,
               transposed_df.index.get_loc('total_accruals(eom)')]

for row_index in reversed(row_indices):
    transposed_df = pd.concat(
        [
            transposed_df.iloc[: row_index, :],
            pd.DataFrame(data=[[''] * len(transposed_df.columns)], index=[''], columns=transposed_df.columns),
            transposed_df.iloc[row_index :, :],
        ]
    )

# print the transposed DataFrame with empty rows
print(df_transposed)
</code></pre> <p>Final output:<br /> <img src="https://i.sstatic.net/y1Qai.png" alt="1" /></p>
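The `KeyError` comes from `index.get_loc()` being called with a label that does not exist in this frame's index (e.g. `'total_accruals(eom)'` versus the actual `'total_accruals'`), and the snippet also mixes the names `df_transposed` and `transposed_df`. A hedged working sketch against a trimmed version of the sample frame, inserting a blank row after named labels instead of tracking shifting positions:

```python
import pandas as pd

df = pd.DataFrame({
    'revenue_month': ['Jan', 'Feb', 'Mar'],
    'total_new_invoice_net': [1200, 2200, 3200],
    'reversal_of_accruals': [200, 400, 600],
    'change_in_net_revenue': [100, 200, 300],
    'total_accruals': [600, 1200, 1800],
})
t = df.set_index('revenue_month').transpose()

# Insert one empty row after each of these existing index labels.
after_labels = ['total_new_invoice_net', 'change_in_net_revenue']
blank = pd.DataFrame([[''] * len(t.columns)], index=[''], columns=t.columns)

pieces = []
for label in t.index:
    pieces.append(t.loc[[label]])   # one-row slice keeps the DataFrame shape
    if label in after_labels:
        pieces.append(blank)
t_spaced = pd.concat(pieces)
print(t_spaced)
```

Working with labels rather than integer positions avoids the off-by-one bookkeeping entirely, since each insertion no longer shifts the positions of the later ones.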
<python><pandas>
2023-04-28 15:52:05
1
747
Luc
76,131,267
7,133,523
numpy seems to produce strange eigenvectors
<p>I am doing linear algebra (calculating eigenvectors) in Python using one of Gilbert Strang's books. In his books the matrix A is given as shown below.</p> <p><a href="https://i.sstatic.net/8iTi7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8iTi7.png" alt="enter image description here" /></a></p> <p>The eigenvectors of this matrix are the ones shown in matrix V below.</p> <p><a href="https://i.sstatic.net/YjwRk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YjwRk.png" alt="enter image description here" /></a></p> <p>I am trying to confirm the results using numpy as follows:</p> <pre><code>import numpy as np
from numpy import linalg as la

A = np.array([[0, 0],
              [0, 16]])
eignval, eignvector = la.eig(A)
print(eignvector)
array([[1., 0.],
       [0., 1.]])
</code></pre> <p>We see that the eigenvectors derived from numpy are different. It looks like if we exchange row one with row two we get the correct results. Can someone explain why this happens?</p>
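Nothing is actually wrong here: `eig` returns the eigenvectors as *columns*, paired with the eigenvalues in whatever order `eig` happens to emit them, and any column ordering (or sign flip) is an equally valid answer. A quick check that both pairings satisfy the eigenvector equation, and that reordering reproduces the book's layout:

```python
import numpy as np

A = np.array([[0, 0],
              [0, 16]])
vals, vecs = np.linalg.eig(A)
print(vals)   # ordering is not specified by eig; here the 0 comes first
print(vecs)

# Column i of `vecs` pairs with vals[i]; each pair satisfies A v = lambda v.
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)

# Putting the eigenvalue 16 first swaps the columns accordingly.
order = np.argsort(vals)[::-1]
print(vecs[:, order])
```

So the book and numpy agree up to column ordering; it is the columns (not the rows) that are permuted along with the eigenvalues.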
<python><numpy><linear-algebra>
2023-04-28 15:51:34
1
319
Johny