Columns (type, with min/max from the dataset viewer):
- QuestionId: int64, 74.8M to 79.8M
- UserId: int64, 56 to 29.4M
- QuestionTitle: string, lengths 15 to 150
- QuestionBody: string, lengths 40 to 40.3k
- Tags: string, lengths 8 to 101
- CreationDate: string (date), 2022-12-10 09:42:47 to 2025-11-01 19:08:18
- AnswerCount: int64, 0 to 44
- UserExpertiseLevel: int64, 301 to 888k
- UserDisplayName: string, lengths 3 to 30
76,375,036
34,935
Tool to automatically fix pylint "unnecessary-comprehension"?
<p><a href="https://stackoverflow.com/questions/65902301/how-to-fix-pylint-error-unnecessary-use-of-a-comprehension">This question</a> shows pylint reporting <code>unnecessary-comprehension</code> on this code:</p> <pre><code>dict1 = {&quot;A&quot;: &quot;This is A&quot;, &quot;B&quot;: &quot;This is B&quot;} bools = [True, False] dict2 = {key: value for key, value in zip(dict1.keys(), bools)} </code></pre> <p><a href="https://stackoverflow.com/questions/54586757/how-do-i-automatically-fix-lint-issues-reported-by-pylint">This question</a> asks in general if there is an automated tool to fix pylint issues. It mentions black, autopep8, autoflake8. None of those fix this.</p> <p>Is there a tool to apply that would fix this pylint warning?</p>
<python><pylint>
2023-05-31 15:18:17
1
21,683
dfrankow
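Editor's note, not part of the original question: the rewrite pylint is asking for is shown below as a hedged sketch; tools built around flake8-comprehensions (for example Ruff's rule C416) can reportedly apply this fix automatically, which may be worth checking.

```python
# The rewrite pylint's unnecessary-comprehension wants: zip() already
# yields (key, value) pairs, so dict() can consume them directly.
dict1 = {"A": "This is A", "B": "This is B"}
bools = [True, False]

before = {key: value for key, value in zip(dict1.keys(), bools)}
after = dict(zip(dict1, bools))  # iterating a dict yields its keys

assert before == after == {"A": True, "B": False}
```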
76,374,942
12,851,199
Is defining classmethods in Enum classes bad practice or not?
<p>I have an Enum class:</p> <pre><code>class Group(StrEnum): admin = 'admin' manager = 'manager' user = 'user' @classmethod def get_staff_groups(cls): return {cls.admin, cls.manager} @classmethod def get_user_groups(cls): return {cls.user} </code></pre> <p>Is this good practice or not?</p> <p>I tried it as written above. Can an Enum contain methods?</p>
<python>
2023-05-31 15:08:22
1
438
Vadim Beglov
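Editor's sketch, not part of the original question: methods and classmethods defined in an Enum body are supported and are not turned into members. `StrEnum` only exists on Python 3.11+, so this sketch swaps in the `str` + `Enum` mixin to stay runnable on older versions.

```python
from enum import Enum

# str mixin stands in for StrEnum (3.11+) so this runs on older Pythons
class Group(str, Enum):
    admin = 'admin'
    manager = 'manager'
    user = 'user'

    # callables in the class body are NOT turned into enum members
    @classmethod
    def get_staff_groups(cls):
        return {cls.admin, cls.manager}

    @classmethod
    def get_user_groups(cls):
        return {cls.user}

assert Group.get_staff_groups() == {Group.admin, Group.manager}
assert Group.get_user_groups() == {Group.user}
assert Group.admin == 'admin'          # str mixin keeps string comparisons
```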
76,374,927
1,473,517
How to specify -march=native with %%cython
<p>In ipython I can do the following:</p> <pre><code>%%cython --compile-args=-Ofast def f(x): return 2.0*x </code></pre> <p>But how can I add <code>-march=native</code> as well?</p>
<python><ipython><cython>
2023-05-31 15:06:25
1
21,513
Simd
76,374,804
10,266,106
Correctly Computing Cumsum Along Third Dimension of ndarray
<p>I have a multi-dimensional array that I am performing summations along at each individual point, specifically for statistical analysis. The ndarray is 3-dimensional with shape <code>(1200,2600,200)</code>. The calculation I'm looking to execute specifically is the following:</p> <pre><code>cumsum = np.divide(np.cumsum(array, axis=2), np.sum(array)) </code></pre> <p>Where array is the ndarray defined above and the axis=2 call specifically instructs np.cumsum to sum the elements cumulatively down the third dimension at point i, j.</p> <p>My issue is the cumsum at point i,j which is not identical when run as shown above (incorrect result) compared to when I execute it at a specific point i, j individually (correct result). Specifically, the result is incorrect to about eight decimal places at each point in the ndarray. The data at point i,j and results follow below:</p> <p><strong>Code that produces the correct result</strong>:</p> <pre><code>cumsum = np.divide(np.cumsum(array[i,j]), np.sum(array[i,j])) </code></pre> <p><strong>Data at point i, j in the array</strong>:</p> <pre><code>[2.5896627e-01 7.8833267e-02 4.6881203e-02 3.3370260e-02 2.5926309e-02 2.1185182e-02 1.7896093e-02 1.5469486e-02 1.3617482e-02 1.2151432e-02 1.0961573e-02 9.9725584e-03 9.1435499e-03 8.4358063e-03 7.8244330e-03 7.2888890e-03 6.8194675e-03 6.4030001e-03 6.0309847e-03 5.6953467e-03 5.3933552e-03 5.1191193e-03 4.8689796e-03 4.6398928e-03 4.4284821e-03 4.2343135e-03 4.0546600e-03 3.8879570e-03 3.7322419e-03 3.5876189e-03 3.4524139e-03 3.3257415e-03 3.2063506e-03 3.0945288e-03 2.9891655e-03 2.8897219e-03 2.7953433e-03 2.7063680e-03 2.6220153e-03 2.5419386e-03 2.4658246e-03 2.3931006e-03 2.3241036e-03 2.2582957e-03 2.1954644e-03 2.1351755e-03 2.0777427e-03 2.0227525e-03 1.9700555e-03 1.9193111e-03 1.8708067e-03 1.8242144e-03 1.7794254e-03 1.7361674e-03 1.6947000e-03 1.6547571e-03 1.6162583e-03 1.5789805e-03 1.5431579e-03 1.5085702e-03 1.4751572e-03 1.4428621e-03 1.4115060e-03 1.3812946e-03 
1.3520514e-03 1.3237320e-03 1.2961850e-03 1.2695953e-03 1.2438130e-03 1.2188030e-03 1.1944349e-03 1.1708769e-03 1.1479985e-03 1.1257721e-03 1.1040850e-03 1.0830887e-03 1.0626703e-03 1.0428075e-03 1.0234793e-03 1.0045893e-03 9.8627270e-04 9.6843322e-04 9.5105352e-04 9.3404873e-04 9.1754185e-04 9.0144749e-04 8.8575145e-04 8.7037822e-04 8.5544016e-04 8.4086129e-04 8.2662981e-04 8.1267761e-04 7.9910830e-04 7.8585331e-04 7.7290280e-04 7.6019607e-04 7.4782735e-04 7.3573570e-04 7.2391250e-04 7.1234960e-04 7.0099317e-04 6.8992859e-04 6.7910185e-04 6.6850579e-04 6.5809203e-04 6.4793869e-04 6.3799700e-04 6.2826090e-04 6.1868585e-04 6.0934469e-04 6.0019240e-04 5.9122394e-04 5.8239867e-04 5.7378388e-04 5.6533841e-04 5.5705803e-04 5.4890534e-04 5.4094271e-04 5.3313252e-04 5.2547100e-04 5.1795418e-04 5.1054877e-04 5.0331140e-04 4.9620820e-04 4.8923609e-04 4.8236389e-04 4.7564457e-04 4.6904699e-04 4.6256813e-04 4.5617929e-04 4.4993000e-04 4.4379136e-04 4.3776058e-04 4.3181126e-04 4.2598948e-04 4.2026848e-04 4.1464576e-04 4.0911944e-04 4.0366512e-04 3.9832521e-04 3.9307532e-04 3.8791337e-04 3.8281683e-04 3.7782549e-04 3.7291663e-04 3.6808843e-04 3.6331985e-04 3.5864810e-04 3.5405197e-04 3.4952996e-04 3.4506238e-04 3.4068426e-04 3.3637564e-04 3.3213521e-04 3.2794452e-04 3.2383655e-04 3.1979271e-04 3.1581166e-04 3.1189225e-04 3.0801745e-04 3.0421774e-04 3.0047612e-04 2.9679155e-04 2.9314787e-04 2.8957392e-04 2.8605366e-04 2.8258603e-04 2.7915623e-04 2.7579122e-04 2.7247594e-04 2.6920953e-04 2.6597784e-04 2.6280651e-04 2.5968134e-04 2.5660152e-04 2.5355379e-04 2.5056230e-04 2.4761367e-04 2.4470725e-04 2.4184212e-04 2.3900617e-04 2.3622185e-04 2.3347675e-04 2.3077012e-04 2.2809043e-04 2.2545905e-04 2.2286420e-04 2.2030527e-04 2.1777138e-04 2.1528259e-04 2.1282790e-04 2.1040675e-04 2.0800892e-04 2.0565331e-04 2.0332953e-04 2.0103710e-04 1.9877554e-04 1.9653517e-04 1.9433383e-04 1.9216179e-04] </code></pre> <p><strong>Performing calculation as shown above (incorrect 
result)</strong>:</p> <pre><code>[3.61291841e-09 4.71274575e-09 5.36679989e-09 5.83235860e-09 6.19406482e-09 6.48962573e-09 6.73929978e-09 6.95511915e-09 7.14510096e-09 7.31462935e-09 7.46755813e-09 7.60668861e-09 7.73425235e-09 7.85194310e-09 7.96110378e-09 8.06279310e-09 8.15793388e-09 8.24726420e-09 8.33140401e-09 8.41086223e-09 8.48610604e-09 8.55752447e-09 8.62545324e-09 8.69018546e-09 8.75196893e-09 8.81104345e-09 8.86761065e-09 8.92185259e-09 8.97392294e-09 9.02397446e-09 9.07214037e-09 9.11853881e-09 9.16327192e-09 9.20644538e-09 9.24814803e-09 9.28846333e-09 9.32746147e-09 9.36521882e-09 9.40179934e-09 9.43726342e-09 9.47166523e-09 9.50505186e-09 9.53747659e-09 9.56898294e-09 9.59961266e-09 9.62940039e-09 9.65838787e-09 9.68660796e-09 9.71409264e-09 9.74087033e-09 9.76697034e-09 9.79242021e-09 9.81724568e-09 9.84146720e-09 9.86511051e-09 9.88819604e-09 9.91074511e-09 9.93277371e-09 9.95430316e-09 9.97535032e-09 9.99593031e-09 1.00160600e-08 1.00357518e-08 1.00550226e-08 1.00738857e-08 1.00923545e-08 1.01104369e-08 1.01281499e-08 1.01455022e-08 1.01625064e-08 1.01791704e-08 1.01955058e-08 1.02115214e-08 1.02272271e-08 1.02426299e-08 1.02577404e-08 1.02725659e-08 1.02871143e-08 1.03013935e-08 1.03154081e-08 1.03291686e-08 1.03426796e-08 1.03559481e-08 1.03689795e-08 1.03817808e-08 1.03943574e-08 1.04067137e-08 1.04188569e-08 1.04307922e-08 1.04425224e-08 1.04540554e-08 1.04653930e-08 1.04765423e-08 1.04875051e-08 1.04982885e-08 1.05088942e-08 1.05193267e-08 1.05295914e-08 1.05396909e-08 1.05496287e-08 1.05594085e-08 1.05690345e-08 1.05785078e-08 1.05878346e-08 1.05970166e-08 1.06060565e-08 1.06149569e-08 1.06237223e-08 1.06323537e-08 1.06408544e-08 1.06492282e-08 1.06574767e-08 1.06656017e-08 1.06736069e-08 1.06814939e-08 1.06892655e-08 1.06969233e-08 1.07044711e-08 1.07119087e-08 1.07192397e-08 1.07264659e-08 1.07335891e-08 1.07406102e-08 1.07475335e-08 1.07543592e-08 1.07610889e-08 1.07677245e-08 1.07742677e-08 1.07807221e-08 1.07870859e-08 1.07933635e-08 
1.07995550e-08 1.08056621e-08 1.08116867e-08 1.08176295e-08 1.08234932e-08 1.08292788e-08 1.08349862e-08 1.08406173e-08 1.08461746e-08 1.08516591e-08 1.08570708e-08 1.08624123e-08 1.08676836e-08 1.08728866e-08 1.08780212e-08 1.08830900e-08 1.08880931e-08 1.08930323e-08 1.08979092e-08 1.09027232e-08 1.09074758e-08 1.09121689e-08 1.09168017e-08 1.09213776e-08 1.09258949e-08 1.09303562e-08 1.09347624e-08 1.09391136e-08 1.09434115e-08 1.09476552e-08 1.09518474e-08 1.09559881e-08 1.09600773e-08 1.09641176e-08 1.09681082e-08 1.09720499e-08 1.09759446e-08 1.09797922e-08 1.09835936e-08 1.09873497e-08 1.09910596e-08 1.09947260e-08 1.09983489e-08 1.10019291e-08 1.10054668e-08 1.10089626e-08 1.10124168e-08 1.10158309e-08 1.10192051e-08 1.10225393e-08 1.10258354e-08 1.10290923e-08 1.10323120e-08 1.10354943e-08 1.10386402e-08 1.10417497e-08 1.10448228e-08 1.10478613e-08 1.10508651e-08 1.10538343e-08 1.10567697e-08 1.10596723e-08 1.10625411e-08 1.10653771e-08 1.10681819e-08 1.10709557e-08 1.10736975e-08 1.10764082e-08 1.10790888e-08] </code></pre> <p><strong>Performing calculation above, but at individual point (correct result)</strong>:</p> <pre><code>[0.3261025 0.42537308 0.48440808 0.5264295 0.55907714 0.5857545 0.6082901 0.62777 0.6449178 0.66021943 0.6740228 0.6865807 0.69809467 0.7087174 0.7185703 0.72774875 0.7363362 0.7443992 0.75199366 0.7591655 0.76595706 0.7724033 0.7785346 0.78437734 0.7899539 0.79528594 0.8003918 0.80528766 0.8099875 0.81450516 0.8188526 0.8230406 0.8270782 0.830975 0.8347391 0.83837795 0.84189796 0.8453059 0.8486077 0.85180867 0.8549138 0.85792726 0.8608539 0.86369765 0.8664623 0.869151 0.8717674 0.87431455 0.87679535 0.87921226 0.8815681 0.8838652 0.88610595 0.8882922 0.8904262 0.89250994 0.8945452 0.8965335 0.8984767 0.90037644 0.902234 0.90405095 0.90582836 0.90756774 0.9092703 0.91093725 0.91256946 0.9141682 0.91573447 0.91726923 0.9187733 0.9202477 0.92169327 0.9231109 0.9245012 0.92586505 0.92720324 0.9285163 0.92980516 0.93107015 0.93231213 
0.93353164 0.9347293 0.93590546 0.9370609 0.93819606 0.9393114 0.94040745 0.9414847 0.9425435 0.9435845 0.9446078 0.9456141 0.94660366 0.9475769 0.9485342 0.9494758 0.9504024 0.9513139 0.9522109 0.95309365 0.95396245 0.9548176 0.95565945 0.95648813 0.95730406 0.9581075 0.9588986 0.9596777 0.960445 0.9612008 0.9619453 0.9626787 0.9634012 0.96411306 0.96481454 0.9655058 0.966187 0.96685827 0.96752 0.96817225 0.9688152 0.969449 0.9700738 0.9706899 0.9712973 0.9718963 0.9724869 0.9730694 0.97364384 0.97421044 0.9747693 0.9753205 0.9758643 0.97640073 0.97692996 0.97745216 0.9779673 0.97847563 0.9789772 0.9794722 0.9799607 0.98044276 0.9809186 0.9813882 0.9818517 0.98230916 0.9827608 0.9832066 0.98364675 0.98408127 0.98451024 0.9849338 0.98535204 0.985765 0.9861728 0.9865755 0.9869731 0.9873659 0.9877538 0.9881369 0.98851526 0.9888889 0.98925805 0.9896227 0.9899829 0.99033874 0.99069023 0.99103755 0.99138063 0.99171966 0.9920545 0.99238545 0.9927125 0.9930356 0.9933549 0.99367046 0.99398226 0.9942904 0.99459493 0.9948959 0.99519336 0.99548733 0.99577796 0.9960652 0.99634916 0.9966298 0.9969072 0.9971815 0.99745256 0.9977206 0.99798554 0.9982475 0.9985064 0.9987625 0.9990156 0.99926597 0.9995134 0.99975806 1.0000001 ] </code></pre>
<python><numpy><numpy-ndarray><cumsum>
2023-05-31 14:53:40
1
431
TornadoEric
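Editor's sketch, not part of the original question: a likely explanation is that `np.sum(array)` with no `axis` sums the entire 1200×2600×200 array, so every point is divided by the grand total rather than by its own 200-element slice. A small random array (shapes chosen for illustration) shows a per-point denominator via `axis=2, keepdims=True` reproducing the per-point result.

```python
import numpy as np

rng = np.random.default_rng(0)
arr = rng.random((3, 4, 5)).astype(np.float32)

# Original expression: one scalar denominator for the whole array
wrong = np.cumsum(arr, axis=2) / np.sum(arr)

# Per-point denominator: sum along axis=2 only, shape (3, 4, 1) broadcasts
right = np.cumsum(arr, axis=2) / np.sum(arr, axis=2, keepdims=True)

i, j = 1, 2
per_point = np.cumsum(arr[i, j]) / np.sum(arr[i, j])
assert np.allclose(right[i, j], per_point)       # matches the "correct" case
assert not np.allclose(wrong[i, j], per_point)   # global denominator differs
```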
76,374,785
11,760,357
Keep original string values after pandas.series.str.extract() if the regex doesn't match
<p>I am trying to extract emails from strings, and want to make sure that if the original value is already formatted how I expect, it is not changed to NaN and instead is kept as is.</p> <p>Example input</p> <pre><code>&lt;class 'pandas.core.series.Series'&gt; 1 &lt;doe.b.john@gmail.com&gt; 2 &lt;doe.c.jane@gmail.com&gt; 3 person.anonymous@hotmail.com 4 dent.arthur@space.com </code></pre> <p>I am using</p> <pre class="lang-py prettyprint-override"><code># curr_emails is &lt;class 'pandas.core.series.Series'&gt; curr_emails = curr_emails.str.extract(r&quot;&lt;([^&lt;&gt;]+)&gt;&quot;).squeeze() # regex extracts text between &lt; &gt; </code></pre> <p>I receive back</p> <pre><code>1 doe.b.john@gmail.com 2 doe.c.jane@gmail.com 3 NaN 4 NaN </code></pre> <p>But I instead would like</p> <pre><code>1 doe.b.john@gmail.com 2 doe.c.jane@gmail.com 3 person.anonymous@hotmail.com 4 dent.arthur@space.com </code></pre> <p>A similar question is posted <a href="https://stackoverflow.com/questions/59191897/keep-original-string-values-after-pandas-str-extract-if-the-regex-doesnt-matc">here</a>, but I could not seem to make it work with my current approach.</p>
<python><pandas>
2023-05-31 14:51:52
2
375
Caleb Renfroe
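Editor's sketch, not part of the original question: since `str.extract` leaves NaN where the regex misses, one approach that fits the question's own code is to fill those NaNs back from the original series, which aligns on index.

```python
import pandas as pd

curr_emails = pd.Series([
    "<doe.b.john@gmail.com>",
    "<doe.c.jane@gmail.com>",
    "person.anonymous@hotmail.com",
    "dent.arthur@space.com",
], index=[1, 2, 3, 4])

extracted = curr_emails.str.extract(r"<([^<>]+)>").squeeze()
# Rows the regex missed are NaN; fall back to the original value there
result = extracted.fillna(curr_emails)

assert result.tolist() == [
    "doe.b.john@gmail.com",
    "doe.c.jane@gmail.com",
    "person.anonymous@hotmail.com",
    "dent.arthur@space.com",
]
```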
76,374,682
4,019,495
Is there a way to require a mutually exclusive group, where one of the groups has multiple options?
<p>I would like a behavior where exactly one of</p> <ul> <li><code>--a</code></li> <li><code>--b</code> AND <code>--c</code></li> </ul> <p>are required. I know how to do it if the second requirement were just <code>--b</code>:</p> <pre><code>group = parser.add_mutually_exclusive_group(required=True) group.add_argument('--a', type=str) group.add_argument('--b', type=str) </code></pre>
<python><argparse>
2023-05-31 14:39:37
1
835
extremeaxe5
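Editor's sketch, not part of the original question: `add_mutually_exclusive_group` cannot express "(--b AND --c)" as one branch, so a common workaround is validating the combination after `parse_args` and reporting via `parser.error`. The `parse` helper name is illustrative.

```python
import argparse

def parse(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument('--a', type=str)
    parser.add_argument('--b', type=str)
    parser.add_argument('--c', type=str)
    args = parser.parse_args(argv)

    has_a = args.a is not None
    has_b_or_c = args.b is not None or args.c is not None
    has_bc = args.b is not None and args.c is not None
    # exactly one branch: --a alone, or --b together with --c
    if has_a == has_b_or_c or (has_b_or_c and not has_bc):
        parser.error('give either --a, or both --b and --c')
    return args

assert parse(['--a', 'x']).a == 'x'
assert parse(['--b', 'y', '--c', 'z']).c == 'z'
```

`--a` with anything else, `--b` without `--c`, or no options at all each hit `parser.error`, which prints usage and exits like argparse's built-in checks.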
76,374,605
8,564,860
Kafka running in container - cannot connect from another container
<p>I've tried to follow the advice of some other excellent posts such as <a href="https://stackoverflow.com/questions/51630260/connect-to-kafka-running-in-docker/51634499#51634499">this one</a> and others, but am still running into problems.</p> <p>I have two docker compose networks, one running kafka/zookeeper and another running a python application which includes a kafka producer, and so needs to write to the container running kafka. I've created a network <code>kafka_logging_network</code> which both containers are a part of. I've double-checked the <code>KAFKA_ADVERTISED_LISTENERS</code> variable many times but can't see what I'm missing.</p> <p>The below shows the two <code>docker-compose.yml</code> files, followed by the <code>Dockerfile</code> and <code>main.py</code> for the application. <code>main.py</code> should run without error but in reality it cannot find any kafka brokers at the given <code>KAFKA_HOST</code> and <code>KAFKA_PORT</code>, raising <code>kafka.errors.NoBrokersAvailable</code>.
Where have I gone wrong?</p> <h2>kafka/docker-compose.yml (for kafka/zookeeper)</h2> <pre><code>version: '3.7' services: zoo1: image: confluentinc/cp-zookeeper:7.3.2 hostname: zoo1 container_name: zoo1 ports: - &quot;2181:2181&quot; environment: ZOOKEEPER_CLIENT_PORT: 2181 ZOOKEEPER_SERVER_ID: 1 ZOOKEEPER_SERVERS: zoo1:2888:3888 networks: - logging_network kafka1: image: confluentinc/cp-enterprise-kafka:5.3.1 hostname: kafka1 container_name: kafka1 ports: - &quot;9092:9092&quot; environment: KAFKA_BROKER_ID: 1 KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181' KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092 KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:29092,PLAINTEXT_HOST://localhost:9092 KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1 KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0 depends_on: - zoo1 restart: always networks: - logging_network networks: logging_network: driver: bridge </code></pre> <h2>app_folder/docker-compose.yml (for python application)</h2> <pre><code>version: '3.7' services: pyapp: build: context: . dockerfile: Dockerfile container_name: pyapp networks: - kafka_logging_network environment: KAFKA_HOST: kafka1 KAFKA_PORT: 29092 KAFKA_TOPIC: logs-topic networks: kafka_logging_network: external: true </code></pre> <h2>Dockerfile</h2> <pre><code>FROM python:3.9.7 COPY . . RUN pip install kafka_python==2.0.2 ENTRYPOINT python main.py </code></pre> <h2>main.py</h2> <pre><code>from kafka import KafkaProducer import os host = os.environ['KAFKA_HOST'] port = os.environ['KAFKA_PORT'] kp = KafkaProducer(bootstrap_servers=f'{host}:{port}') print('success') </code></pre>
<python><docker><apache-kafka><docker-compose>
2023-05-31 14:32:37
1
1,102
John F
76,374,535
520,601
Numpy np.float32 maximum number is 16M instead of 3.4e38 - unclear behavior of floats
<p>Max value here:</p> <pre><code>import numpy numpy.finfo(numpy.float32).max 3.4028235e+38 </code></pre> <p>Then I run code</p> <pre><code>a = np.array([0, 0, 0], dtype=np.float32) for _ in range(100_000_000): a += 1.0 </code></pre> <p>Why does it end up being</p> <pre><code>array([16777216., 16777216., 16777216.], dtype=float32) </code></pre> <p>?</p>
<python><numpy>
2023-05-31 14:24:16
0
1,600
Tadas Šubonis
76,374,435
12,778,634
How to initialize a dataframe from dictionary translated into sparse matrix
<p>I do:</p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; &gt;&gt;&gt; aa = {&quot;a&quot;: [&quot;group_a&quot;, &quot;group_b&quot;, &quot;group_c&quot;]} &gt;&gt;&gt; ab = {&quot;b&quot;: [&quot;x&quot;, &quot;y&quot;]} &gt;&gt;&gt; ba = {&quot;c&quot;: [&quot;group_b&quot;]} &gt;&gt;&gt; &gt;&gt;&gt; c = {**aa, **ab, **ba} &gt;&gt;&gt; &gt;&gt;&gt; df = pd.DataFrame.from_dict(c, orient=&quot;index&quot;) &gt;&gt;&gt; &gt;&gt;&gt; print(df) 0 1 2 a group_a group_b group_c b x y None c group_b None None </code></pre> <p>But I am actually looking for a pythonic way to get:</p> <pre><code> group_a group_b group_c x y a 1 1 1 0 0 b 0 0 0 1 1 c 0 1 0 0 0 </code></pre>
<python><pandas>
2023-05-31 14:14:00
4
1,237
AndreasInfo
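Editor's sketch, not part of the original question: one compact route to the desired indicator table is `Series.explode` (one row per list element, keeping the dict key as index) followed by `pd.crosstab`.

```python
import pandas as pd

c = {"a": ["group_a", "group_b", "group_c"], "b": ["x", "y"], "c": ["group_b"]}

# explode() turns each (key, list) pair into one row per list element;
# crosstab then counts (key, value) occurrences -> the 0/1 indicator table
s = pd.Series(c).explode()
df = pd.crosstab(s.index, s)

assert df.loc["a", "group_b"] == 1
assert df.loc["c", "group_b"] == 1
assert df.loc["b", "group_a"] == 0
```

`pd.get_dummies(s).groupby(level=0).max()` is an alternative spelling of the same reshape.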
76,374,265
9,937,874
ONNX as a form of model management
<p>I am working on a project that requires the use of previously created model objects. They have all been built with scikit-learn, but with different library versions (0.23.1, 0.23.2, 1.0.2, and 1.2.2). I have always been under the assumption that if you are using models to make predictions, you should be using the version of scikit-learn that the model was built with. Since I only need to make predictions with these model objects, I was considering converting them all to onnx objects in order to get around rebuilding all the model objects using the most recent version of scikit-learn. Is this the correct approach or is there a better solution?</p>
<python><python-3.x><scikit-learn><onnx>
2023-05-31 13:52:28
0
644
magladde
76,374,230
11,974,225
Using existing Python packages in VSCode and SSH
<p>I installed VSCode and its remote SSH and Python extensions.</p> <p>I connected to a remote server, let's say by <code>ssh user_name@ip_address</code>.</p> <p>Now, on the remote server, the path to the folder with the Python packages I installed a long time ago is: <code>/home/user_name/.local/lib/python3.7/site-packages</code>.</p> <p>How do I use them when I create a Python script in VSCode and, for example, try to <code>import numpy as np</code>?</p>
<python><visual-studio-code><ssh>
2023-05-31 13:47:20
2
907
qwerty
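Editor's sketch, not part of the original question: the user-site directory embeds the interpreter's X.Y version, so packages under `python3.7/site-packages` are only visible to a 3.7 interpreter. Picking the matching interpreter via "Python: Select Interpreter" in VSCode, then running this in the integrated terminal, shows whether that directory is actually on the path.

```python
import site
import sys

print(sys.executable)               # which interpreter VSCode selected
print(site.getusersitepackages())   # e.g. ~/.local/lib/python3.7/site-packages
print(site.getusersitepackages() in sys.path)   # visible to imports or not
```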
76,374,184
11,251,373
Fastest way to find out whether a row or one of a set of rows exists in a database accessed via SQLAlchemy
<p>Django has a <code>queryset.exists()</code> method that fetches a bool indicating whether or not at least one row with specific conditions exists.</p> <p>For example:</p> <pre><code>User.objects.filter(id__in=[1, 2, 3, 4, 5], username='test').exists() </code></pre> <p>It returns <code>True</code> if at least one row with these conditions exists and <code>False</code> otherwise.</p> <p><strong>Question</strong> - can something like that be done in SQLAlchemy &gt; 2.0 with the <strong>asyncpg</strong> driver? The best I have found is <code>await session.get</code>, but it could potentially fetch multiple objects from the DB, deserialize them, etc., whereas I only need to find out whether at least one of these rows exists.</p>
<python><django><sqlalchemy>
2023-05-31 13:43:08
1
2,235
Aleksei Khatkevich
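Editor's sketch, not part of the original question: SQLAlchemy's `select(exists().where(...))` renders `SELECT EXISTS (...)`, returning a single boolean and no rows. An in-memory SQLite engine stands in here so the sketch runs synchronously; with asyncpg the same statement would run as `await session.scalar(stmt)`.

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, exists, select)

engine = create_engine("sqlite://")   # sync stand-in for the async engine
metadata = MetaData()
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("username", String),
)
metadata.create_all(engine)

with engine.connect() as conn:
    conn.execute(users.insert(), [{"id": 3, "username": "test"}])

    # SELECT EXISTS (SELECT * FROM users WHERE ...) -> one boolean, no rows
    stmt = select(
        exists()
        .where(users.c.id.in_([1, 2, 3, 4, 5]))
        .where(users.c.username == "test")
    )
    found = conn.execute(stmt).scalar()

    none_stmt = select(exists().where(users.c.username == "nobody"))
    missing = conn.execute(none_stmt).scalar()

assert found
assert not missing
```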
76,374,151
6,248,190
Error 1064 when inserting SQL data from a dump file into MariaDB using Python script
<p>I have written a Python script to import data from a SQL dump file into a MariaDB instance. However, when I run the script, I encounter the following error:</p> <pre><code>Error: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near ''Small Room - 2.00 Hours \r\n(08:00 PM - 10:00 PM)\r\nPeak Hours' at line 2 </code></pre> <p>Interestingly, when I use DBeaver and connect to the database, I can successfully insert the same data without any errors using the console in DBeaver.</p> <p>Here is the Python script I am using:</p> <pre><code>import mysql.connector # Database connection details host = 'localhost' port = 3306 user = 'admin' password = 'password' database = 'example' # SQL file path sql_file = 'file_location' try: connection = mysql.connector.connect( host=host, port=port, user=user, password=password, database=database ) cursor = connection.cursor() with open(sql_file, 'r') as file: sql_statements = file.read() statements = sql_statements.split(';') for statement in statements: if statement.strip(): cursor.execute(statement) connection.commit() print('Data imported successfully.') except mysql.connector.Error as error: print('Error:', error) finally: if 'connection' in locals() and connection.is_connected(): cursor.close() connection.close() </code></pre> <p>Here is the SQL data I am trying to insert:</p> <pre><code>INSERT INTO `transactions` (`id`, `profit`, `sub_total`, `grand_total`, `gst`, `status`, `created_on`, `content`, `receive_amount`, `change_amount`, `start_time`, `end_time`, `checkout`, `outlet_id`, `staff_id`, `server_transaction_id`, `outlet_transaction_id`) VALUES (318151, 18.69, 20.00, 20.00, 1.31, 'paid', '1480319419', 'Premium Medium Room - 3 Hours \r\n(04:00 PM - 07:00 PM)\r\nHappy Hours 3 Hours Package;19;2016-11-28 16:00:00;2016-11-28 19:00:00;132', 20.00, 0.00, '1480320000', '1480330800', b'1', 7, 1006, 318151, 36079) </code></pre> <p>I'm 
not sure why I'm encountering this error when running the script, but I can successfully execute the same SQL statement using DBeaver. Any help or suggestions would be greatly appreciated. Thank you</p>
<python><mariadb>
2023-05-31 13:40:09
2
1,271
user6248190
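Editor's sketch, not part of the original question: a plausible cause is that the script's `sql_statements.split(';')` also splits *inside* string literals, and this dump's `content` column contains `;` separators, so the server receives a statement cut mid-string (hence error 1064). DBeaver parses the file statement-by-statement and never hits this. A minimal stdlib demonstration:

```python
# Splitting on every ';' ignores quoting, so a literal containing ';'
# is cut open and the fragment sent to the server is malformed SQL.
statement = (
    "INSERT INTO t (content) "
    "VALUES ('Premium Medium Room;19;2016-11-28 16:00:00')"
)
pieces = statement.split(';')

assert len(pieces) == 3                              # one statement became three
assert pieces[0].endswith("'Premium Medium Room")    # literal cut mid-string
```

A quote-aware splitter, the connector's multi-statement execution mode, or simply piping the dump through the `mysql` client avoids hand-splitting.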
76,374,021
7,441,757
Pip installs anyway despite reporting conflicts
<p>I am trying to prevent installing two packages that conflict. But when I do:</p> <pre><code>pip install some-package pip install another-package </code></pre> <blockquote> <p>ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. some-package requires numpy==1.24.2, but you have numpy 1.24.3 which is incompatible.</p> </blockquote> <blockquote> <p>Successfully installed numpy-1.24.3 botocore-1.29.140 ... another-package</p> </blockquote> <p>Now I am surprised I can import both <code>some-package</code> and <code>another-package</code> - shouldn't the second installation fail?</p> <p><code>pip check</code> does show the conflict.</p> <blockquote> <p>some-package has requirement numpy==1.24.2, but you have numpy 1.24.3.</p> </blockquote> <p>I found <a href="https://github.com/pypa/pip/issues/988" rel="nofollow noreferrer">this issue</a> which suggests that it should be solved, but I am using pip 23.1.2.</p>
<python><pip>
2023-05-31 13:26:56
0
5,199
Roelant
76,373,921
12,040,751
What does coroutine mean in Python?
<p>Is there a consensus on what a coroutine is today in Python?</p> <p>I found coroutines first mentioned in PEP <a href="https://peps.python.org/pep-0219/" rel="nofollow noreferrer">219</a> and <a href="https://peps.python.org/pep-0220/" rel="nofollow noreferrer">220</a> (2000, Python 2.1), then in <a href="https://peps.python.org/pep-0255/" rel="nofollow noreferrer">PEP 255 – Simple Generators</a> (2001, Python 2.2). The latter is the PEP that introduced generators and the <code>yield</code> statement, which are the basis of coroutines as of <a href="https://peps.python.org/pep-0342/" rel="nofollow noreferrer">PEP 342 - Coroutines via Enhanced Generators</a> (2005, Python 2.5).</p> <p>If we stop then, a generator that you can send values to is - pretty much - a coroutine:</p> <pre><code>def coroutine(): name = yield print(f&quot;Hello {name}&quot;) hello = coroutine() type(hello) # generator type hello.send(None) # Initialize the coroutine hello.send(&quot;Dave&quot;) # prints 'Hello Dave' (and raises StopIteration) </code></pre> <p>Fast forward to <a href="https://peps.python.org/pep-0492/" rel="nofollow noreferrer">PEP 492 – Coroutines with async and await syntax</a> (2015, Python 3.5), and this is called a coroutine:</p> <pre><code>async def coroutine(): pass type(coroutine()) # coroutine </code></pre> <p>Are both of these constructs commonly referred as coroutines, or is there some jargon to distinguish between the two?</p>
<python><coroutine>
2023-05-31 13:16:34
2
1,569
edd313
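Editor's sketch, not part of the original question: the established jargon (used by PEP 492 itself) distinguishes the two constructs as "generator-based coroutines" (the `yield`/`send` style) and "native coroutines" (`async def`), and the runtime can tell them apart:

```python
import asyncio
import inspect

def legacy():                 # a generator usable as a coroutine via send()
    name = yield
    print(f"Hello {name}")

async def native():           # a PEP 492 "native coroutine"
    return 42

gen = legacy()
assert inspect.isgenerator(gen)        # generator object...
assert not inspect.iscoroutine(gen)    # ...not a native coroutine

coro = native()
assert inspect.iscoroutine(coro)       # a genuine coroutine object
coro.close()                           # silence the "never awaited" warning

assert asyncio.run(native()) == 42
```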
76,373,894
2,548,285
CMake: Could not find Python3 (found version 3.9.6)
<p>I am trying to configure CMake (v3.26.0) to find a local Anaconda installation of Python3 (on Ventura 13.3.1) I am using but I am running into an issue with CMake detecting Python. In my CMake configuration script, I have:</p> <pre><code>find_package(Python3 COMPONENTS Interpreter REQUIRED Python3_ROOT &quot;${CMAKE_CURRENT_LIST_DIR}/../conda/bin&quot;) </code></pre> <p>This correctly points CMake to the location of the local Python3 interpreter. Moreover, when I run the configuration script, it <em>appears</em> to find Python but seems to think it hasn't:</p> <pre><code>CMake Error at /usr/local/Cellar/cmake/3.26.0/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:230 (message): Could NOT find Python3 (missing: Python3_ROOT /Users/xxx/xxxxx/cmake/../conda/bin) (found version &quot;3.9.6&quot;) </code></pre> <p>I am confused; it seems to be picking up the right version of Python (i.e., I expect it to be 3.9.6) but CMake is somehow indicating that it wasn't found. What am I doing wrong here?</p>
<python><python-3.x><cmake>
2023-05-31 13:12:38
2
393
Professor Nuke
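Editor's sketch, not part of the original question: `find_package()` has no `Python3_ROOT` keyword, and the stray arguments after the component list appear to be treated as extra required items — which would explain why the error message lists `Python3_ROOT` and the path verbatim as "missing". The documented hint for the FindPython3 module is the `Python3_ROOT_DIR` variable, set before the call (the path below is taken from the question and assumed to be the installation root rather than its `bin` subdirectory):

```cmake
# Python3_ROOT_DIR is the FindPython3 hint variable; it must be set
# before find_package, not passed as a find_package argument.
set(Python3_ROOT_DIR "${CMAKE_CURRENT_LIST_DIR}/../conda")
find_package(Python3 REQUIRED COMPONENTS Interpreter)
```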
76,373,836
13,955,154
Delete a line from a PDF and create a new PDF with these modifications
<p>I currently have this code:</p> <pre><code>def delete_first_line_from_pages(pages_indices, input_pdf_path, output_pdf_path): with open(input_pdf_path, 'rb') as input_file, open(output_pdf_path, 'wb') as output_file: reader = PyPDF2.PdfReader(input_file) for page_index in range(len(reader.pages)): page = reader.pages[page_index] if page_index in pages_indices: content = page.extract_text() lines = content.split(&quot;\n&quot;) if lines: first_line = next(line for line in lines if line.strip()) if first_line: modified_content = content.replace(first_line, '', 1) else: modified_content = content </code></pre> <p>where I end up with modified_content which is basically the text of the page with the first line of text removed. From now on I want to create a page element that has the same structure of the source but this new modified_content as text. My final goal is to save to an output_path the pdf I had as input but removing the first line of each page. How can I do it?</p>
<python><pdf>
2023-05-31 13:07:21
1
720
Lorenzo Cutrupi
76,373,668
14,282,714
Change y-axis title hover in histogram plotly python
<p>I would like to change the y-axis title of a histogram in plotly python. We could use the <code>fig.update_layout(yaxis_title=&quot;New y-axis title&quot;)</code> to change the title of the y-axis. But this doesn't automatically change the hover title of the y-axis. We could use the <code>labels</code> argument to change the title of the x-axis and hover title but this doesn't work for the y-axis title with count. Here is some reproducible code:</p> <pre><code>import plotly.express as px df = px.data.tips() fig = px.histogram(df, x=&quot;total_bill&quot;, labels={&quot;total_bill&quot;: &quot;Total bill&quot;}) fig.update_layout(yaxis_title=&quot;New y-axis title&quot;) fig.show() </code></pre> <p>Output:</p> <p><a href="https://i.sstatic.net/MxUSR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MxUSR.png" alt="enter image description here" /></a></p> <p>As you can see the y-axis title is nicely changed, but the hover title isn't changed. So I was wondering if anyone knows how to fix this issue?</p>
<python><histogram><plotly>
2023-05-31 12:50:21
1
42,724
Quinten
76,373,649
8,605,348
In Databricks, where does `spark.*` come from?
<p>When working in Azure Databricks, the <code>spark.sql()</code> function is available at startup; no imports necessary. But where does it come from?</p> <p><code>spark.sql</code>'s docs say it's from <code>pyspark.sql.session</code>:</p> <pre class="lang-py prettyprint-override"><code>help(spark.sql) </code></pre> <pre><code>Help on method sql in module pyspark.sql.session: </code></pre> <p>But when I try calling <code>sql()</code> from <code>pyspark.sql.session</code>, it fails:</p> <pre class="lang-py prettyprint-override"><code>import pyspark.sql.session as spark spark.sql </code></pre> <pre><code>AttributeError: module 'pyspark.sql.session' has no attribute 'sql' </code></pre> <p>I'm developing a Python package to be used in Databricks, but I can't test it on my local machine without <code>spark</code>, but the <code>spark</code> scope doesn't seem to come from any package I can install locally with Poetry.</p>
<python><databricks><azure-databricks>
2023-05-31 12:47:50
1
1,294
ardaar
76,373,626
14,649,310
Alembic ignores new model and does not add new table to revision
<p>How can I make alembic take a new file with a new model into account for a new revision? All the models are in the same directory <code>/models</code> but alembic seems to ignore it. How can it be included and tracked by alembic?</p>
<python><flask><alembic>
2023-05-31 12:45:25
1
4,999
KZiovas
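Editor's sketch, not part of the original question: a common cause is that `env.py` (or the `models` package `__init__.py`) never imports the new module, so its table is never registered on the `target_metadata` that autogenerate compares against. Registration happens at class-definition (i.e., import) time, as this SQLAlchemy sketch shows:

```python
from sqlalchemy import Column, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()

# Before the model module is imported, the metadata is empty -- exactly
# what alembic autogenerate sees when env.py never imports the module.
assert "widgets" not in Base.metadata.tables

class Widget(Base):               # normally lives in models/widget.py
    __tablename__ = "widgets"
    id = Column(Integer, primary_key=True)

# Defining (importing) the class registers the table on Base.metadata
assert "widgets" in Base.metadata.tables
```

So the fix is usually to make `models/__init__.py` (or `env.py` itself) import every model module before `target_metadata = Base.metadata` is used.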
76,373,541
8,929,189
Django m2m relationship only saving sometimes through django admin
<p>Have been banging my head against a wall on this one for a while. I have a model called Car, which has a models.ManyToManyField of colours.</p> <pre><code>class Car(models.Model): brand = models.CharField(max_length=70, default='', unique=True) colours = models.ManyToManyField('Colours') ... def fetch_data(self): #some code which gets data from an api and adds it to some fields def save(self, *args, **kwargs): self.fetch_data() super(Car, self).save(*args, **kwargs) </code></pre> <p>For a while this has been working seamlessly. A Car can be saved with numerous colours.</p> <p>Recently, if I select a colour in django admin when adding a Car and click save, the Car will be created. However, clicking back into the car in django admin shows there are no Colours attached to it. And looking at the database, the m2m values have not been added.</p> <p>However, if I stay on that Car's edit screen for a while (5 mins) in django admin, re-click on the colours, and press save, it will save the colours half of the time.</p> <p>Does anyone have any idea what could be the issue with this? I have tried adding save_related() and save_model() in CarAdmin(admin.ModelAdmin) but to no avail.</p> <p>Thanks!</p> <p>EDIT:</p> <p>Code within admin.py for the Car model looks like this currently:</p> <pre><code>class CarAdmin(admin.ModelAdmin): list_display = ('name','published') list_filter = ['published','colours'] # This is needed to save the many to many on cars -&gt; colours # def save_related(self, request, form, formsets, change): # super(CarAdmin, self).save_related(request, form, formsets, change) # def save_model(self, request, obj, form, change): # Car.fetch_data() # super(Car, self).save_model(request, obj, form, change) </code></pre>
<python><django><postgresql><django-models>
2023-05-31 12:34:10
0
546
rlou
76,373,508
15,148,200
Pycaret error : TypeError: format() got an unexpected keyword argument 'precision'
<p>I'm using the compare_models() function for the classification problem in pycaret. It was running well during the process but in the end, it does not print anything but an empty list.</p> <pre><code>clf1 = setup(data = train_titanic_df, target = 'Survived') compare_models(include = ['lightgbm', 'xgboost'], errors=&quot;raise&quot;) </code></pre> <p>After I adding <em>errors=&quot;raise&quot;</em> to <em>compare_models()</em>, it gives me this error:</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[26], line 1 ----&gt; 1 compare_models(include = ['lightgbm', 'xgboost'], errors=&quot;raise&quot;) #'rf', 'ada','catboost' File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/pycaret/utils/generic.py:965, in check_if_global_is_not_none.&lt;locals&gt;.decorator.&lt;locals&gt;.wrapper(*args, **kwargs) 963 if globals_d[name] is None: 964 raise ValueError(message) --&gt; 965 return func(*args, **kwargs) File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/pycaret/classification/functional.py:822, in compare_models(include, exclude, fold, round, cross_validation, sort, n_select, budget_time, turbo, errors, fit_kwargs, groups, experiment_custom_tags, probability_threshold, engine, verbose, parallel) 680 @check_if_global_is_not_none(globals(), _CURRENT_EXPERIMENT_DECORATOR_DICT) 681 def compare_models( 682 include: Optional[List[Union[str, Any]]] = None, (...) 698 parallel: Optional[ParallelBackend] = None, 699 ) -&gt; Union[Any, List[Any]]: 700 &quot;&quot;&quot; 701 This function trains and evaluates performance of all estimators available in the 702 model library using cross validation. The output of this function is a score grid (...) 819 - No models are logged in ``MLFlow`` when ``cross_validation`` parameter is False. 
820 &quot;&quot;&quot; --&gt; 822 return _CURRENT_EXPERIMENT.compare_models( 823 include=include, 824 exclude=exclude, 825 fold=fold, 826 round=round, 827 cross_validation=cross_validation, 828 sort=sort, 829 n_select=n_select, 830 budget_time=budget_time, 831 turbo=turbo, 832 errors=errors, 833 fit_kwargs=fit_kwargs, 834 groups=groups, 835 experiment_custom_tags=experiment_custom_tags, 836 probability_threshold=probability_threshold, 837 engine=engine, 838 verbose=verbose, 839 parallel=parallel, 840 ) File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/pycaret/classification/oop.py:1190, in ClassificationExperiment.compare_models(self, include, exclude, fold, round, cross_validation, sort, n_select, budget_time, turbo, errors, fit_kwargs, groups, experiment_custom_tags, probability_threshold, engine, verbose, parallel) 1187 self._set_engine(estimator=estimator, engine=eng, severity=&quot;error&quot;) 1189 try: -&gt; 1190 return_values = super().compare_models( 1191 include=include, 1192 exclude=exclude, 1193 fold=fold, 1194 round=round, 1195 cross_validation=cross_validation, 1196 sort=sort, 1197 n_select=n_select, 1198 budget_time=budget_time, 1199 turbo=turbo, 1200 errors=errors, 1201 fit_kwargs=fit_kwargs, 1202 groups=groups, 1203 experiment_custom_tags=experiment_custom_tags, 1204 verbose=verbose, 1205 probability_threshold=probability_threshold, 1206 parallel=parallel, 1207 caller_params=caller_params, 1208 ) 1209 finally: 1210 if engine is not None: 1211 # Reset the models back to the default engines File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/pycaret/internal/pycaret_experiment/supervised_experiment.py:788, in _SupervisedExperiment.compare_models(self, include, exclude, fold, round, cross_validation, sort, n_select, budget_time, turbo, errors, fit_kwargs, groups, experiment_custom_tags, probability_threshold, verbose, parallel, caller_params) 786 results_columns_to_ignore = [&quot;Object&quot;, &quot;runtime&quot;, 
&quot;cutoff&quot;] 787 if errors == &quot;raise&quot;: --&gt; 788 model, model_fit_time = self._create_model(**create_model_args) 789 model_results = self.pull(pop=True) 790 else: File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/pycaret/internal/pycaret_experiment/supervised_experiment.py:1575, in _SupervisedExperiment._create_model(self, estimator, fold, round, cross_validation, predict, fit_kwargs, groups, refit, probability_threshold, experiment_custom_tags, verbose, system, add_to_model_list, X_train_data, y_train_data, metrics, display, model_only, return_train_score, **kwargs) 1570 self._master_model_container.append( 1571 {&quot;model&quot;: model, &quot;scores&quot;: model_results, &quot;cv&quot;: cv} 1572 ) 1574 # yellow the mean -&gt; 1575 model_results = self._highlight_and_round_model_results( 1576 model_results, return_train_score, round 1577 ) 1578 if system: 1579 display.display(model_results) File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/pycaret/internal/pycaret_experiment/supervised_experiment.py:1266, in _SupervisedExperiment._highlight_and_round_model_results(self, model_results, return_train_score, round) 1264 indices = [&quot;Mean&quot;] 1265 model_results = color_df(model_results, &quot;yellow&quot;, indices, axis=1) -&gt; 1266 model_results = model_results.format(precision=round) 1267 return model_results TypeError: format() got an unexpected keyword argument 'precision' </code></pre> <p>what I tried so far:</p> <p>1- Upgrade pycaret to version <code>3.0.0.rc9</code>.</p> <p>2- Trying other model other than lightgbm and xgboost.</p> <p>3- Check my python version which is 3.8.5.</p> <p>I searched online a lot but couldn't find anything that address this error! I appreciate any help with this error.</p>
<python><machine-learning><jupyter-notebook><anaconda><pycaret>
2023-05-31 12:30:09
1
547
Raha Moosavi
76,373,153
2,059,689
Converting pd.Series of lists to multi-column DataFrame
<p>Given a Pandas <code>Series</code> of lists where all lists have the same length <code>N</code>, how can I reshape it into a <code>DataFrame</code> with <code>N</code> columns (keeping the index)?</p> <p>For example, given</p> <pre><code>s = pd.Series([[1, 10, 0.1], [2, 20, 0.2], [3, 30, 0.3]], index=['a', 'b', 'c']) &gt;&gt;&gt; print(s) a [1, 10, 0.1] b [2, 20, 0.2] c [3, 30, 0.3] dtype: object </code></pre> <p>I want the code to produce a <code>DataFrame</code> looking like this:</p> <pre><code>df = pd.DataFrame([[1, 10, 0.1], [2, 20, 0.2], [3, 30, 0.3]], index=['a', 'b', 'c'], columns=['X', 'Y', 'Z']) &gt;&gt;&gt; print(df) X Y Z a 1 10 0.1 b 2 20 0.2 c 3 30 0.3 </code></pre>
<python><pandas><dataframe>
2023-05-31 11:47:35
2
3,200
vvv444
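One way to answer the question above, sketched under the assumption that every list really has length N and the Series fits in memory:

```python
import pandas as pd

s = pd.Series([[1, 10, 0.1], [2, 20, 0.2], [3, 30, 0.3]], index=['a', 'b', 'c'])

# Expand each list into its own row; the Series index carries over as the
# DataFrame index, and the column names are supplied explicitly.
df = pd.DataFrame(s.tolist(), index=s.index, columns=['X', 'Y', 'Z'])
print(df)
```

`s.tolist()` materialises the lists into a plain list of rows before construction, which is why the memory assumption matters.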
76,373,054
6,455,731
Pydantic: Why are field values referencing another model converted to dictionaries when using .dict?
<p>My problem is that when I iterate over a model instance using <code>.dict</code>, field values referencing other models are converted to dictionaries, e.g.:</p> <pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel class Model(BaseModel): attr: str class AnotherModel(BaseModel): attr: str model_attr: Model model = AnotherModel(attr=&quot;test&quot;, model_attr=Model(attr=&quot;something&quot;)) # field values are model objects for field in model: print(field) # field values are dictionaries for key, val in model.dict().items(): print((key, val)) </code></pre> <p>Why is that and how can I change it?</p>
<python><pydantic>
2023-05-31 11:34:45
1
964
lupl
76,373,005
3,099,749
Accurate evaluation of function for negative x
<p>Is there a way to accurately evaluate <code>f(x)</code> as given below for moderate negative x (around -1e6 to -1e2), while sticking with float64 numbers?</p> <pre class="lang-py prettyprint-override"><code>def f(x): return 0.5 + arctan(x/sqrt(3)) / pi + sqrt(3)*x/(pi*(x*x+3)) </code></pre> <p>For example:</p> <pre class="lang-py prettyprint-override"><code>g(-10_000) == 1.1026577511479050e-12 # actual closest float64 (what I want) f(-10_000) == 1.1026461116240595e-12 # current implementation # ^^^^^^^^^^^^ &lt;- wrong </code></pre>
<python><precision><numeric>
2023-05-31 11:29:18
1
303
hbwales
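One way to attack the cancellation in the question above is to rewrite f for negative x. Using arctan(t) = -pi/2 - arctan(1/t) for t &lt; 0 and substituting u = sqrt(3)/x gives f(x) = (u/(1+u^2) - arctan(u))/pi, and expanding that difference as a power series removes the cancellation entirely. A sketch; the -10 switch-over point and the 1e-18 stopping tolerance are ad-hoc choices, not tuned:

```python
import math

SQRT3 = math.sqrt(3.0)

def f_naive(x):
    # direct evaluation, fine away from large negative x
    return 0.5 + math.atan(x / SQRT3) / math.pi + SQRT3 * x / (math.pi * (x * x + 3.0))

def f_stable(x):
    # For x << 0 the 0.5 and the arctan term cancel catastrophically.
    # With u = sqrt(3)/x:  f(x) = (u/(1+u^2) - atan(u)) / pi, and the
    # difference has the cancellation-free series
    #   sum_{k>=1} (-1)^k * (2k/(2k+1)) * u^(2k+1).
    if x >= -10.0:               # ad-hoc cutoff; the series needs |x| > sqrt(3)
        return f_naive(x)
    u = SQRT3 / x
    u2 = u * u
    total, term, k = 0.0, u, 1
    while True:
        term *= -u2                               # (-1)^k * u^(2k+1)
        contrib = term * (2.0 * k) / (2.0 * k + 1.0)
        total += contrib
        if abs(contrib) <= 1e-18 * abs(total):
            break
        k += 1
    return total / math.pi

print(f_stable(-10_000))   # close to the target value ~1.10265775e-12
```

For x = -10000 the leading series term is (2/3)*(sqrt(3)/|x|)^3, i.e. about 2*sqrt(3)/(pi*1e12), which matches the g(-10_000) value quoted in the question to many digits.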
76,372,780
10,755,782
Version conflict error while installing albumentation on python 3.6.8 with opencv-python
<p>I'm trying to install <code>albumentations</code> on python 3.6.8. However, the installation is failing with the following error</p> <pre><code>Failed to build opencv-python ERROR: Could not build wheels for opencv-python, which is required to install pyproject.toml-based projects </code></pre> <p>This is happening because my Python version is 3.6.8 and the installed opencv-python version is 3.4.13.47 and albumentation is trying to install the latest version of opencv-python (4.7.x). As per the install instructions of albumentations as given <a href="https://albumentations.ai/docs/getting_started/installation/" rel="nofollow noreferrer">here</a>, there is a flag to force-use the existing opencv version</p> <pre><code>pip install -U albumentations --no-binary qudida, albumentations </code></pre> <p>However, even after using these flags, albumentation is trying to install opencv-python-4.7.0.72, and this is failing.</p> <p>How can we force this to use the existing opencv version?</p>
<python><opencv><pip><python-3.6><albumentations>
2023-05-31 11:00:00
0
660
brownser
76,372,689
4,451,521
tox cannot install numpy
<p>I have pyenv installed and these versions</p> <pre><code>pyenv versions system 3.8.0 3.9.16 * 3.10.1 (set by /home/me/.pyenv/version) </code></pre> <p>and I got this <code>tox.ini</code></p> <pre><code>[tox] envlist = unit_tests skipsdist = True [testenv] install_command = pip install {opts} {packages} deps = -rrequirements.txt commands= py.test [testenv:unit_tests] envdir = {toxworkdir}/unit_tests deps = {[testenv]deps} setenv = PYTHONPATH=. commands = python gradient_boosting_model/train_pipeline.py pytest \ -s \ -vv \ {posargs:tests/} [testenv:train] envdir = {toxworkdir}/train deps = {[testenv]deps} setenv = PYTHONPATH=. commands = python gradient_boosting_model/train_pipeline.py </code></pre> <p>the system python is 3.6.9</p> <p>When I run tox , no matter in which version I get</p> <pre><code>tox unit_tests create: /media/me/cbe421fe-1303-4821-9392-a849bfdd00e2/MyStudy/Udemy/testing-and-monitoring/testing-and-monitoring-ml-deployments/packages/gradient_boosting_model/.tox/unit_tests unit_tests installdeps: -rrequirements.txt ERROR: invocation failed (exit code 1), logfile: /media/me/cbe421fe-1303-4821-9392-a849bfdd00e2/MyStudy/Udemy/testing-and-monitoring/testing-and-monitoring-ml-deployments/packages/gradient_boosting_model/.tox/unit_tests/log/unit_tests-1.log ==================================================================== log start ==================================================================== ERROR: Could not find a version that satisfies the requirement numpy&lt;1.21.0,&gt;=1.20.0 (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 1.13.3, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0, 1.18.1, 1.18.2, 
1.18.3, 1.18.4, 1.18.5, 1.19.0, 1.19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5) ERROR: No matching distribution found for numpy&lt;1.21.0,&gt;=1.20.0 ===================================================================== log end ===================================================================== ERROR: could not install deps [-rrequirements.txt]; v = InvocationError('/media/me/cbe421fe-1303-4821-9392-a849bfdd00e2/MyStudy/Udemy/testing-and-monitoring/testing-and-monitoring-ml-deployments/packages/gradient_boosting_model/.tox/unit_tests/bin/pip install -rrequirements.txt', 1) _____________________________________________________________________ summary _____________________________________________________________________ ERROR: unit_tests: could not install deps [-rrequirements.txt]; v = InvocationError('/media/me/cbe421fe-1303-4821-9392-a849bfdd00e2/MyStudy/Udemy/testing-and-monitoring/testing-and-monitoring-ml-deployments/packages/gradient_boosting_model/.tox/unit_tests/bin/pip install -rrequirements.txt', 1) </code></pre> <p>the <code>requirements.txt</code> file is</p> <pre><code># ML requirements numpy&gt;=1.20.0,&lt;1.21.0 pandas&gt;=1.3.5,&lt;1.4.0 scikit-learn&gt;=1.0.2,&lt;1.1.0 feature-engine&gt;=1.0.2,&lt;1.1.0 joblib&gt;=1.0.1,&lt;1.1.0 # config parsing strictyaml&gt;=1.3.2,&lt;1.4.0 ruamel.yaml==0.16.12 pydantic&gt;=1.8.1,&lt;1.9.0 # validation marshmallow&gt;=3.2.2,&lt;4.0 # packaging setuptools&gt;=41.4.0,&lt;42.0.0 wheel&gt;=0.33.6,&lt;0.34.0 # testing requirements pytest&gt;=5.3.2,&lt;6.0.0 </code></pre> <p>At first I thought it was because the original python version is old but even when I run it with python 10 it does not work.</p> <p>What can it be solved?</p>
<python><pyenv><tox>
2023-05-31 10:49:11
1
10,576
KansaiRobot
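The error in the question above is consistent with the tox environment being created with an interpreter older than 3.7 (numpy 1.20 publishes no distributions for Python 3.6, so pip only sees up to 1.19.5; with no interpreter pinned, tox can fall back to the system Python 3.6.9). A hedged guess at a fix is to pin the interpreter per environment with tox's `basepython` option, assuming `python3.10` is on PATH:

```ini
[testenv:unit_tests]
basepython = python3.10
```

After changing this, recreating the environment (`tox -r` or deleting `.tox/`) is needed for the new interpreter to take effect.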
76,372,584
10,430,394
Can't make virtual environment using virtualenv with python 2.7 on Windows 10
<p>I need a python 2.7 install for some niche lib that runs into errors on 3.8.</p> <p>So I tried to make a virtual environment using virtualenv using the following command:</p> <p><code>virtualenv -p C:\Python27\python.exe pdf</code></p> <p>This returns the error:</p> <p><code>RuntimeError: failed to query C:\Python27\python.exe with code 1 err: ' File &quot;C:\\Python\\Python38\\lib\\site-packages\\virtualenv\\discovery\\py_info.py&quot;, line 152\n os.path.join(base_dir, exe) for exe in (f&quot;python{major}&quot;, f&quot;python{major}.{minor}&quot;)\n ^\nSyntaxError: invalid syntax\n' </code></p> <p>Looks like it wants to find the minor and major release info, but can't. I just installed Python 2.7.0 from <a href="https://www.python.org/download/releases/2.7/" rel="nofollow noreferrer">here</a>. Should I just use a different release instead?</p>
<python><virtualenv>
2023-05-31 10:35:54
1
534
J.Doe
76,372,544
4,231,821
Updated Inkscape version stopped running my code
<p>This is the code I use for converting SVG to PNG and returning it in the browser.</p> <pre><code>svgchart = chart_pygal.render() inkscape_process = subprocess.Popen(['inkscape', '-z', '-e', '-', '-'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL) png_data, error = inkscape_process.communicate(input=svgchart) png_io = BytesIO(png_data) return send_file(png_io ,mimetype='image/png', download_name=image_name+'.png', as_attachment=True) </code></pre> <p>It was working fine with Inkscape 0.92.1, but I have updated to version 1.2.2 and now it has stopped working as expected: the image shows an error when I open it:</p> <p>&quot;Sorry, Photos can't open this file because the format is currently unsupported, or the file is corrupt.&quot;</p>
<python><docker><inkscape><svgtopng>
2023-05-31 10:30:59
0
527
Faizan Naeem
76,372,367
7,848,740
Add a select field on Django form without saving on database
<p>I have a form that refers to a model in the database, so each time it is filled in and a user clicks SUBMIT, the form is saved to the database.</p> <p>The form has a series of fields taken directly from the Django model. See below.</p> <p>My problem is that I want to add a new select field whose options are data I've processed in the backend via the home view; when the form is submitted, it should pass the selected data back to the same view so I can process it there. I don't want this field to be saved to the database by <code>form.save()</code>.</p> <p>Is there a way to have this kind of &quot;hybrid&quot; form where some fields are taken from the database and others are just added by me?</p> <h2>models.py</h2> <pre><code>class Test(models.Model): testTitle = models.CharField(max_length=50, blank=False, null=False) testDescription = models.TextField() sequenzaTest = models.ForeignKey(&quot;SequenzaMovimento&quot;, null=True, on_delete=models.SET_NULL) numeroCicli = models.IntegerField(blank=False, null=False) dataInizio = models.DateTimeField(auto_now_add=True) dataFine = models.DateTimeField(blank=False, null=True) testFinito = models.BooleanField(default=False) testInPausa = models.BooleanField(default=False) dataUltimaPausa = models.DateTimeField(null=True, blank=True) workerID = models.CharField(max_length=200, blank=True, null=True) def __str__(self): return self.testTitle class StartWorker(ModelForm): class Meta: model = Test fields = ('testTitle', 'testDescription', 'sequenzaTest', 'numeroCicli') widgets = { 'testTitle': forms.TextInput(attrs={'class':'form-control'}), 'testDescription': forms.Textarea(attrs={'class':'form-control'}), 'sequenzaTest': forms.Select(attrs={'class':'form-control'}), 'numeroCicli': forms.NumberInput(attrs={'class':'form-control'}), } </code></pre> <h2>views.py</h2> <pre><code>def home(request): if request.method == &quot;POST&quot;: form = StartWorker(request.POST) if form.is_valid(): form_info 
= form.save() print(&quot;ID oggetto creato&quot;, form_info.id) # print(form.cleaned_data['dataInizio']) print(&quot;Ultima data aggiunta&quot;, Test.objects.get(pk=form_info.id).dataInizio.isoformat(&quot;T&quot;,&quot;seconds&quot;)) return redirect(&quot;home&quot;) else: form = StartWorker() return render( request, 'dashboard/index.html', { &quot;form&quot;: form, } ) </code></pre> <h2>index.html</h2> <pre><code> &lt;form method=&quot;post&quot;&gt; {% csrf_token %} {{ form }} &lt;input type=&quot;submit&quot; value=&quot;Submit&quot;&gt; &lt;/form&gt; </code></pre>
<python><django><forms>
2023-05-31 10:09:30
2
1,679
NicoCaldo
76,372,225
9,986,657
Use LlamaIndex with different embeddings model
<p>OpenAI's GPT embedding models are used across all LlamaIndex examples, even though they seem to be the most expensive and worst performing embedding models compared to T5 and sentence-transformers models (<a href="https://medium.com/@nils_reimers/openai-gpt-3-text-embeddings-really-a-new-state-of-the-art-in-dense-text-embeddings-6571fe3ec9d9" rel="noreferrer">see comparison below</a>).</p> <p>How do I use <a href="https://huggingface.co/sentence-transformers/all-roberta-large-v1" rel="noreferrer">all-roberta-large-v1</a> as embedding model, in combination with OpenAI's GPT3 as &quot;response builder&quot;? I'm not even sure if I can use one model for creating/retrieving embedding tokens and another model to generate the response based on the retrieved embeddings.</p> <h2>Example</h2> <p>Following is an example of what I'm looking for:</p> <pre class="lang-py prettyprint-override"><code>documents = SimpleDirectoryReader('data').load_data() # Use Roberta or any other open-source model to generate embeddings index = ???????.from_documents(documents) # Use GPT3 here query_engine = index.as_query_engine() response = query_engine.query(&quot;What did the author do growing up?&quot;) print(response) </code></pre> <h2>Model Comparison</h2> <p><a href="https://i.sstatic.net/FqrUf.png" rel="noreferrer"><img src="https://i.sstatic.net/FqrUf.png" alt="Embedding Models" /></a></p> <p><a href="https://medium.com/@nils_reimers/openai-gpt-3-text-embeddings-really-a-new-state-of-the-art-in-dense-text-embeddings-6571fe3ec9d9" rel="noreferrer">Source</a></p>
<python><llama-index>
2023-05-31 09:53:20
6
2,079
Jay
76,372,216
17,192,324
How to translate a request from overpass turbo format to a raw request and make it work?
<h1>Reason to ask: unexpected change in behavior between working overpass turbo request and a request made using Python to the API</h1> <p>I need to get an outline of a region bases off its name and import it into geopandas</p> <p>This request works in overpass turbo (<a href="https://overpass-turbo.eu/s/1vxy" rel="nofollow noreferrer">link to results</a>):</p> <pre><code>[out:json][timeout:25]; {{geocodeArea:Велижский район}}[type=&quot;boundary&quot;]-&gt;.a; ( relation[&quot;boundary&quot;=&quot;administrative&quot;][&quot;admin_level&quot;=&quot;6&quot;](area.a); ); ( way(r); node(w); ); out skel qt; </code></pre> <p>This <a href="https://overpass-turbo.eu/s/1vxB" rel="nofollow noreferrer">request</a> also works:</p> <pre><code>[out:json]; relation(3342303)[type=&quot;boundary&quot;]; out geom; </code></pre> <p>I can then download them into GeoJSON and plot the results using geopandas:</p> <pre class="lang-py prettyprint-override"><code>import geopandas as gpd df = gpd.read_file('export.geojson') df.geometry = df.geometry.to_crs(epsg=32636) df.plot() </code></pre> <h1>What I have tried</h1> <pre class="lang-py prettyprint-override"><code>import overpass api = overpass.API() query = &quot;&quot;&quot; relation(3342303)[type=&quot;boundary&quot;]; out geom; &quot;&quot;&quot; res = api.Get(query) </code></pre> <h1>What I expected</h1> <p>To get back a json</p> <h1>What happened</h1> <pre><code>--------------------------------------------------------------------------- UnknownOverpassError Traceback (most recent call last) &lt;ipython-input-10-2eef0e720784&gt; in &lt;cell line: 9&gt;() 7 out geom; 8 &quot;&quot;&quot; ----&gt; 9 res = api.Get(query) 1 frames /usr/local/lib/python3.10/dist-packages/overpass/api.py in _as_geojson(self, elements) 212 polygons.append([points]) 213 else: --&gt; 214 raise UnknownOverpassError(&quot;Received corrupt data from Overpass (incomplete polygon).&quot;) 215 # Then get the inner polygons 216 for member in 
elem.get(&quot;members&quot;, []): UnknownOverpassError: Received corrupt data from Overpass (incomplete polygon). </code></pre> <h2>update 1</h2> <p>I was able to get something to Python, but this does not solve it as these lines have in total zero area:</p> <pre class="lang-py prettyprint-override"><code>import requests import geopandas as gpd from shapely.geometry import LineString, Polygon def extract_geometries(element, geometries): if 'members' in element: members = element['members'] for member in members: if member['type'] == 'way' and 'geometry' in member: geometry = member['geometry'] coordinates = [(point['lon'], point['lat']) for point in geometry] line = LineString(coordinates) geometries.append(line) elif member['type'] == 'relation': extract_geometries(member, geometries) def close_polygon(geometry): if geometry.geom_type == 'Polygon': coordinates = list(geometry.exterior.coords) if coordinates[0] != coordinates[-1]: coordinates.append(coordinates[0]) return Polygon(coordinates) return geometry def get_relation_geometry(relation_id): # Define the Overpass API URL url = &quot;https://overpass-api.de/api/interpreter&quot; # Define the Overpass query for the relation ID query = f&quot;&quot;&quot; [out:json]; rel({relation_id}); out geom; &quot;&quot;&quot; # Send the GET request to the Overpass API response = requests.get(url, params={&quot;data&quot;: query}) # Check if the request was successful if response.status_code == 200: # Convert the response to GeoPandas dataframe data = response.json() # Extract the 'elements' list elements = data['elements'] # Create an empty list to store geometries geometries = [] # Iterate over the elements and extract geometries for element in elements: extract_geometries(element, geometries) # Create a GeoPandas dataframe from the geometries gdf = gpd.GeoDataFrame(geometry=geometries) # Forcefully close the polygons gdf['geometry'] = gdf['geometry'].apply(close_polygon) return gdf else: print(f&quot;Request failed with 
status code {response.status_code}&quot;) return None # Example usage relation_id = 3342303 gdf = get_relation_geometry(relation_id) if gdf is not None: print(gdf.head()) </code></pre> <p><a href="https://i.sstatic.net/BpRiQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BpRiQ.png" alt="enter image description here" /></a></p>
<python><openstreetmap><overpass-api>
2023-05-31 09:52:23
1
331
Kirill Setdekov
76,372,125
10,889,650
Masking 4D numpy array with 3D mask
<p>I have a 4D numpy array D with shape nx,ny,nz,nt and a 3D numpy mask M with shape nx,ny,nz. I wish to set each element of D with index ix,iy,iz,: to 0 if M[ix,iy,iz] is False. How do I do this?</p>
<python><numpy>
2023-05-31 09:42:21
1
1,176
Omroth
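For the question above, NumPy boolean indexing on the leading axes already does this: a mask over the first three dimensions selects whole length-nt rows, so a single assignment suffices. A sketch:

```python
import numpy as np

nx, ny, nz, nt = 2, 3, 4, 5
D = np.arange(nx * ny * nz * nt, dtype=float).reshape(nx, ny, nz, nt)
M = np.zeros((nx, ny, nz), dtype=bool)
M[0, 1, 2] = True  # keep only this voxel's time series

# A boolean mask matching the first three axes selects entire D[ix, iy, iz, :]
# rows, so this zeroes every row where M[ix, iy, iz] is False.
D[~M] = 0
```

An equivalent non-in-place form is `D * M[..., None]`, which broadcasts the mask along the trailing time axis.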
76,372,115
8,248,194
Pandas scatterplot with coloring using a string
<p>I want to plot x and y using a scatterplot, and z as colors.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.DataFrame({ &quot;x&quot;: [1,2 ,3], &quot;y&quot;: [4, 5, 6], &quot;z&quot;: [&quot;A&quot;, &quot;B&quot;, &quot;C&quot;], }) df.plot( x=&quot;x&quot;, y=&quot;y&quot;, kind=&quot;scatter&quot;, c=&quot;z&quot;, ) </code></pre> <p>How can I do this using pandas plotting functionality?</p> <p>This is throwing:</p> <pre><code>ValueError: 'c' argument must be a color, a sequence of colors, or a sequence of numbers, not ['A' 'B' 'C'] </code></pre>
<python><pandas>
2023-05-31 09:41:24
2
2,581
David Masip
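For the question above, `c=` needs numbers or colors, so one workaround is to map the string column to integer category codes first. A sketch; the colormap name is an arbitrary choice, and the Agg backend is set only so it runs headless:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import pandas as pd

df = pd.DataFrame({
    "x": [1, 2, 3],
    "y": [4, 5, 6],
    "z": ["A", "B", "C"],
})

# Encode the string column as integer category codes and colour by those.
codes = df["z"].astype("category").cat.codes
ax = df.plot.scatter(x="x", y="y", c=codes, colormap="viridis")
```

If a per-category legend is wanted instead of a colorbar, plotting each group separately (`for key, grp in df.groupby("z"): ...`) or seaborn's `scatterplot(..., hue="z")` may be more convenient.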
76,371,991
13,146,022
How to add tensorflow dependency to poetry project without limiting the range of supported python versions?
<p>I have a <strong>package</strong> built with poetry. The package requires <code>tensorflow</code> (most recent version to date is 2.12.0). To be able to run <code>poetry add tensorflow</code> I need to change the supported python version to <code>python = &quot;&gt;=3.8,&lt;3.12&quot;</code> as suggested in <a href="https://stackoverflow.com/questions/70356867/solverproblemerror-on-install-tensorfow-with-poetry">this post</a>. However, if I then try to add my package to a freshly initalized poetry project (with python version 3.8) I receive an error. I would like to have a rather user-friendly package, where a simple <code>poetry add &lt;package&gt;</code> will not fail.</p> <h3>Steps to reproduce</h3> <p>I have a package with <code>python = &quot;^3.8&quot;</code>.Trying to add <code>tensorflow</code> to my package, getting the following error:</p> <pre><code>... For tensorflow-io-gcs-filesystem, a possible solution would be to set the `python` property to &quot;&gt;=3.8,&lt;3.12&quot; ... </code></pre> <p>Following <a href="https://stackoverflow.com/questions/70356867/solverproblemerror-on-install-tensorfow-with-poetry">this post</a>, I change the python version to <code>python = &quot;&gt;=3.8,&lt;3.12&quot;</code>. 
Then I am able to add <code>tensorflow</code></p> <pre class="lang-bash prettyprint-override"><code>poetry add tensorflow </code></pre> <p>I initialize a new project with poetry (with default values and no dependencies) with python3.8.</p> <pre class="lang-bash prettyprint-override"><code>poetry init </code></pre> <p>When trying to add my package</p> <pre class="lang-bash prettyprint-override"><code>poetry add &lt;package&gt; </code></pre> <p>I receive the following error</p> <pre><code>The current project's Python requirement (&gt;=3.8,&lt;4.0) is not compatible with some of the required packages Python requirement: - &lt;package&gt; requires Python &gt;=3.8,&lt;3.12, so it will not be satisfied for Python &gt;=3.12,&lt;4.0 Because &lt;project&gt; depends on &lt;package&gt; &lt;version&gt; which requires Python &gt;=3.8,&lt;3.12, version solving failed. • Check your dependencies Python requirement: The Python requirement can be specified via the `python` or `markers` properties For &lt;package&gt;, a possible solution would be to set the `python` property to &quot;&gt;=3.8,&lt;3.12&quot; </code></pre> <p>I would like to add my package to the freshly initialized project without any errors. How do I achieve this?</p>
<python><tensorflow><python-poetry>
2023-05-31 09:24:09
0
443
EricT
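For the question above, one option is to keep the package's own requirement at `python = "^3.8"` and restrict only the tensorflow dependency with a per-dependency `python` marker. A hedged sketch of the relevant `pyproject.toml` fragment (version numbers are illustrative):

```toml
[tool.poetry.dependencies]
python = "^3.8"
tensorflow = { version = "^2.12.0", python = ">=3.8,<3.12" }
```

The trade-off: on Python >= 3.12 the package installs but tensorflow does not, so any `import tensorflow` in the package must be optional or guarded for those interpreters.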
76,371,806
21,859,039
How to properly concatenate str_values from dataframe rows with NaN values in Pandas?
<p>Smash rows from dataframe with NaN values</p> <p>I have this dataframe:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>description</th> <th>quantity</th> <th>value</th> </tr> </thead> <tbody> <tr> <td>This is the description</td> <td>2</td> <td>100,00</td> </tr> <tr> <td>for the first row</td> <td></td> <td></td> </tr> <tr> <td>Row number 2</td> <td>10</td> <td>150,00</td> </tr> <tr> <td>Row number 3</td> <td>15</td> <td>200,00</td> </tr> </tbody> </table> </div> <p>As we can see, the description for the first row takes 2 lines, but I have no way to know that the description is that long. I have tried to iterate over rows, and if those rows have <code>NaN.sum() == len(df.shape[1]) -1</code> then I take the <code>temp_description_col = dataframe.description.iloc[row_index_iterator]</code> and join the descriptions to the last row with <code>join(dataframe.description.iloc[row_index_iterator - 1].join(temp_description_col)</code>. I also have tried <code>out = (df.bfill().groupby(['Importe'], as_index=False).agg({'Concepto': ' '.join}))</code> but it creates the first row with the smashed rows.</p> <p>The problem I found is that this of course seems very rudimentary: checking every row and overwriting the description from the last row if I find NaNs in all columns except one, plus there is the possibility that the description with NaNs belongs to the rows after. For example:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>description</th> <th>quantity</th> <th>value</th> </tr> </thead> <tbody> <tr> <td>This is the description</td> <td></td> <td></td> </tr> <tr> <td>for the first row</td> <td>2</td> <td>100,00</td> </tr> <tr> <td>Row 2 description</td> <td></td> <td></td> </tr> <tr> <td>and Row2 continuation</td> <td>15</td> <td>200,00</td> </tr> </tbody> </table> </div> <p>In this dataframe we can check that the first row's description belongs to the second row, since the rest of the columns have NaN, and the third description belongs to the 4th row, which, after merging, becomes the second row.</p> <p>Wanted output:</p> <p>df1</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>description</th> <th>quantity</th> <th>value</th> </tr> </thead> <tbody> <tr> <td>This is the description for the first row</td> <td>2</td> <td>100,00</td> </tr> <tr> <td>Row number 2</td> <td>10</td> <td>150,00</td> </tr> <tr> <td>Row number 3</td> <td>15</td> <td>200,00</td> </tr> </tbody> </table> </div> <p>df2</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>description</th> <th>quantity</th> <th>value</th> </tr> </thead> <tbody> <tr> <td>This is the description for the first row</td> <td>2</td> <td>100,00</td> </tr> <tr> <td>Row 2 description and Row2 continuation</td> <td>15</td> <td>200,00</td> </tr> </tbody> </table> </div>
<python><pandas>
2023-05-31 09:02:19
1
301
Jose M. González
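One approach to the question above for the first layout (continuation lines after the data row) is to build a group key from a running count of filled rows. A sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'description': ['This is the description', 'for the first row',
                    'Row number 2', 'Row number 3'],
    'quantity': [2, np.nan, 10, 15],
    'value': ['100,00', np.nan, '150,00', '200,00'],
})

# Rows whose 'value' is NaN are continuations of the row above, so the
# cumulative count of filled rows assigns one group id per logical record.
grp = df['value'].notna().cumsum()
out = (df.groupby(grp)
         .agg({'description': ' '.join, 'quantity': 'first', 'value': 'first'})
         .reset_index(drop=True))
print(out)
```

For the second layout (continuation lines before the data row), shifting the indicator down one row gives matching groups: `grp = df['value'].notna().shift(fill_value=False).cumsum()`. The `'first'` aggregation still works in both layouts because GroupBy `'first'` skips NaN values.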
76,371,507
10,722,169
Sort values of a pandas Dataframe based on date column but also take into consideration duplicated values of 3 other columns
<p>I want to sort values of a pandas dataframe based on date column in descending order but also take into consideration duplicated values of name, product and release version columns such that the sorting contains the rows based on duplicated name, product and release version columns in consecutive rows. But I'm not able to get the desired result.</p> <p>This is the sample code that I wrote:-</p> <pre><code>data = { 'Name': ['John', 'Alice', 'John', 'Bob', 'Alice', 'Bob', 'Alice', 'Bob'], 'Product': ['A', 'B', 'A', 'C', 'B', 'C', 'B', 'C'], 'Release Version': ['1.6', '2.0', '1.5', '3.0', '2.5', '3.2', '2.6', '2.8'], 'Date': ['2022-05-15', '2022-04-20', '2022-05-10', '2022-05-01', '2022-04-25', '2022-05-05', '2022-04-29', '2022-04-27'] } # Create a DataFrame df = pd.DataFrame(data) # Convert the 'Date' column to datetime df['Date'] = pd.to_datetime(df['Date']) # Sort values based on 'Date' in descending order and 'Name', 'Product', 'Release Version' columns df = df.sort_values(['Date', 'Name', 'Product', 'Release Version'], ascending=[False, True, True, False]) </code></pre> <p>This gives me the below result:-</p> <pre><code> Name Product Release Version Date John A 1.6 2022-05-15 John A 1.5 2022-05-10 Bob C 3.2 2022-05-05 Bob C 3.0 2022-05-01 Alice B 2.6 2022-04-29 Bob C 2.8 2022-04-27 Alice B 2.5 2022-04-25 Alice B 2.0 2022-04-20 </code></pre> <p>The desired result is something like this:-</p> <pre><code> Name Product Release Version Date John A 1.6 2022-05-15 John A 1.5 2022-05-10 Bob C 3.2 2022-05-05 Bob C 3.0 2022-05-01 Bob C 2.8 2022-04-27 Alice B 2.6 2022-04-29 Alice B 2.5 2022-04-25 Alice B 2.0 2022-04-20 </code></pre> <p>Would be great if someone can help me with this.</p>
<python><pandas><dataframe>
2023-05-31 08:20:26
1
435
vesuvius
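One way to get the desired ordering in the question above: sort primarily by each (Name, Product) group's most recent date, so that all of a group's rows stay adjacent, and only then by the row's own date. A sketch using a temporary helper column:

```python
import pandas as pd

data = {
    'Name': ['John', 'Alice', 'John', 'Bob', 'Alice', 'Bob', 'Alice', 'Bob'],
    'Product': ['A', 'B', 'A', 'C', 'B', 'C', 'B', 'C'],
    'Release Version': ['1.6', '2.0', '1.5', '3.0', '2.5', '3.2', '2.6', '2.8'],
    'Date': ['2022-05-15', '2022-04-20', '2022-05-10', '2022-05-01',
             '2022-04-25', '2022-05-05', '2022-04-29', '2022-04-27'],
}
df = pd.DataFrame(data)
df['Date'] = pd.to_datetime(df['Date'])

# Broadcast each group's most recent date onto its rows, then sort by that
# group-level date first: groups are ordered by recency and stay contiguous.
df['_gmax'] = df.groupby(['Name', 'Product'])['Date'].transform('max')
out = (df.sort_values(['_gmax', 'Name', 'Date'], ascending=[False, True, False])
         .drop(columns='_gmax')
         .reset_index(drop=True))
print(out)
```

The extra 'Name' key only breaks ties between groups that share the same maximum date.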
76,371,466
13,955,154
Find Table of Contents from a pdf file
<p>I want to get the table of contents of each of my documents, and based on previous questions I have this code:</p> <pre><code>from typing import Dict import os import fitz # pip install pymupdf def get_bookmarks(filepath: str) -&gt; Dict[int, str]: # WARNING! One page can have multiple bookmarks! bookmarks = {} with fitz.open(filepath) as doc: toc = doc.get_toc() # [[lvl, title, page, …], …] for level, title, page in toc: bookmarks[page] = title return bookmarks for root, directories, files in os.walk(&quot;doc/&quot;): for file in files: file_path = os.path.join(root, file) to_print = get_bookmarks(file_path) if to_print != {}: print(file_path) print(get_bookmarks(file_path)) </code></pre> <p>So I iterate through all my documents and call get_bookmarks(). But this method doesn't work for most of my documents (actually I am able to find the TOC of only 1 document out of 50). Why does that happen and how can I solve it? The structure of my PDFs is quite classical and the TOC can be in various forms, but it's clearly recognisable. For example: <a href="https://i.sstatic.net/U9MjM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U9MjM.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/8PYKu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8PYKu.png" alt="enter image description here" /></a></p> <p>Is there some way to solve this? Do you need more information to understand what's wrong?</p>
<python><pdf><tableofcontents>
2023-05-31 08:15:24
1
720
Lorenzo Cutrupi
76,371,381
6,649,485
VS code pytest finding test directory but not files
<p>I'm trying to setup pytest for VS code. My project structure is the following.</p> <pre><code>root- |-projectA |-__init__.py |-tests |-__init__.py |-test_A1.py |-test_A2.py |-projectB </code></pre> <p>I can run pytest from the terminal:</p> <pre><code>cd root pytest */tests ... ========================== 15 passed, 1 warning in 8.97s=========================================== </code></pre> <p>These are my settings in VS code:</p> <pre><code>&quot;python.testing.pytestArgs&quot;: [ &quot;*/tests&quot; ], &quot;python.testing.unittestEnabled&quot;: false, &quot;python.testing.nosetestsEnabled&quot;: false, &quot;python.testing.cwd&quot;: &quot;root&quot;, &quot;python.testing.pytestEnabled&quot;: true, </code></pre> <p>In VS code, it looks as if the test directory is found, but not the test files inside. If I click to run the test, nothing happens.</p> <p><a href="https://i.sstatic.net/efxyA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/efxyA.png" alt="VS code pytest snapshot" /></a></p>
<python><visual-studio-code><pytest>
2023-05-31 08:02:52
1
333
ta8
76,371,334
1,319,998
zlib difference in size for level=0 between Python 3.9 and 3.10
<p>In this code that uses zlib to encode some data, but with level=0 so it's not actually compressed:</p> <pre class="lang-py prettyprint-override"><code>import zlib print('zlib.ZLIB_VERSION', zlib.ZLIB_VERSION) total = 0 print('Total 1', total) compress_obj = zlib.compressobj(level=0, memLevel=9, wbits=-zlib.MAX_WBITS) total += len(compress_obj.compress(b'-' * 1000000)) print('Total 2', total) total += len(compress_obj.flush()) print('Total 3', total) </code></pre> <p>Python 3.9.12 outputs</p> <pre><code>zlib.ZLIB_VERSION 1.2.12 Total 1 0 Total 2 983068 Total 3 1000080 </code></pre> <p>but Python 3.10.6 (and Python 3.11.0) outputs</p> <pre><code>zlib.ZLIB_VERSION 1.2.13 Total 1 0 Total 2 1000080 Total 3 1000085 </code></pre> <p>so both a different final size, and a different size along the way.</p> <p>Why? And how can I get them to be identical? (I'm writing a library where I would prefer identical behaviour between Python versions)</p>
<python><zlib><deflate>
2023-05-31 07:57:34
1
27,302
Michal Charemza
76,371,314
8,248,194
Pandas assign with comprehension after query
<p>I want to use assign in pandas, passing a dictionary comprehension after a pandas query.</p> <p>Reproducible example:</p> <pre class="lang-py prettyprint-override"><code> import pandas as pd df = pd.DataFrame({ &quot;a&quot;: [1, 2, 3], &quot;b&quot;: [4, 5, 6], &quot;weight&quot;: [0.1, 0.2, 0.3] }) metrics = [&quot;a&quot;, &quot;b&quot;] df = df.query(&quot;b &gt; 4&quot;).assign( **{ f&quot;weighted_{metric}&quot;: lambda df: df[metric] * df[&quot;weight&quot;] for metric in metrics } ) print(df) </code></pre> <p>Results:</p> <pre class="lang-py prettyprint-override"><code> a b weight weighted_a weighted_b 1 2 5 0.2 1.0 1.0 2 3 6 0.3 1.8 1.8 </code></pre> <p>I don't get the expected results: for weighted_a, I should get 0.4 and 0.9.</p> <p>Do you know how I can get weighted_a correctly?</p> <p>I asked a similar question <a href="https://stackoverflow.com/questions/76362880/pandas-assign-with-comprehension-giving-unexpected-results">here</a>; the solutions there worked fine, but not with a query before. Any thoughts on how to adapt the answers in the other question in a pipeline-ish manner?</p> <p>Keep in mind I may use a more complex function than just a multiplication for my use case.</p>
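A sketch of one likely fix (my reading of the symptom: Python's late-binding closures make every lambda see the final value of `metric`, so all columns are computed from `b`). Binding the loop variable as a default argument gives each lambda its own column:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "weight": [0.1, 0.2, 0.3]})
metrics = ["a", "b"]

# m=metric freezes the current loop value at definition time, so each
# assigned column multiplies its own metric by the weight
df = df.query("b > 4").assign(
    **{
        f"weighted_{metric}": lambda d, m=metric: d[m] * d["weight"]
        for metric in metrics
    }
)
```

With the query applied first, weighted_a comes out as 0.4 and 0.9, and weighted_b as 1.0 and 1.8.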
<python><pandas>
2023-05-31 07:55:33
1
2,581
David Masip
76,371,188
3,993,405
How to Pandas fillna by method bfill with first occurence of data for the same ID
<p>I have a CSV in pandas: <a href="https://www.dropbox.com/s/phnbeklg09ze6t0/test.csv?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/phnbeklg09ze6t0/test.csv?dl=0</a></p> <p>I am trying to fillna with the bfill method on column catH, based on the first seen value for each ID in column A. E.g. in row 382, column A has ID 271 with value 'm'; it's the first occurrence of ID 271 (later occurrences may have different values, so I couldn't use a plain groupby), so the previous NaN values should be filled with the value 'm'.</p> <p>A further example: row 386, col A has ID 1286 with value 'b', so rows 746, 726, 704, and so on backwards with ID 1286 should be bfilled with the value 'b'.</p> <p>Does anyone know a pythonic way to do this?</p>
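A hedged sketch on toy data (the real CSV is not reproduced here; column names A and catH come from the question, the values are my own). Note that a grouped backfill is position-aware: each NaN picks up the *nearest later* value for the same ID, not a single per-group constant, which matches the "first seen going backwards" description:

```python
import pandas as pd

# Toy stand-in for the CSV: NaNs in catH should be backfilled per ID in A
df = pd.DataFrame({
    "A": [1286, 271, 1286, 271, 1286],
    "catH": [None, None, None, "m", "b"],
})

# Within each ID group, fill every NaN with the next non-null value
df["catH"] = df.groupby("A")["catH"].bfill()
```

Here the earlier 1286 rows become 'b' and the earlier 271 row becomes 'm'.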
<python><pandas>
2023-05-31 07:39:40
1
2,101
desmond
76,370,880
14,135,840
Shortcut to manually defining every dunder method
<p>I'm defining a <code>Sqrt</code> class to help with some complications in my code, since it's going to be used a lot. The class has a value <code>val</code>, which is the actual value of the square root being computed.</p> <p>Code:</p> <pre><code>import math # Square root class (helps w accuracy and readability) class Sqrt(object): def __init__(self, x: float) -&gt; None: self.sqr_val = x self.val = math.sqrt(x) def __str__(self) -&gt; str: return f&quot;sqrt({self.sqr_val})&quot; def __pow__(self, n: float) -&gt; float: return self.sqr_val ** (n / 2) </code></pre> <p>The issue I'm facing is the need to define every dunder method (&quot;magic method&quot;) to handle arithmetic operations like <code>+</code>, <code>-</code>, <code>*</code>, <code>/</code>, etc.:</p> <pre><code>import math class Sqrt(object): def __init__(self, x: float) -&gt; None: self.sqr_val = x self.val = math.sqrt(x) def __str__(self) -&gt; str: return f&quot;sqrt({self.sqr_val})&quot; def __pow__(self, n: float) -&gt; float: return self.sqr_val ** (n / 2) def __add__(self, x): return self.val + x def __radd__(self, x): return self.val + x def __sub__(self, x): return self.val - x def __rsub__(self, x): return x - self.val def __mul__(self, x): return self.val * x def __rmul__(self, x): return self.val * x def __truediv__(self, x): return self.val / x </code></pre> <p>What I'd like is to be able to define these dunders implicitly, telling Python to use the normal int/float dunders for my class, but using <code>self.val</code> instead of <code>self</code>.</p> <p>I have tried inheriting from <code>int</code> and using <code>super()</code> but that doesn't decrease repetition whatsoever. Is there a way to do this (maybe some fancy descriptor)? Or do I write every applicable dunder out manually?</p>
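One sketch of an answer (my suggestion, not from the post): special methods are looked up on the type, so delegating through `__getattr__` does not cover them; inheriting from `float` itself gives every numeric dunder for free, while selected ones can still be overridden:

```python
import math


class Sqrt(float):
    """A float equal to sqrt(x) that remembers the radicand x.

    All arithmetic dunders (+, -, *, /, comparisons, ...) are inherited
    from float and operate on the square-root value directly.
    """

    def __new__(cls, x):
        self = super().__new__(cls, math.sqrt(x))
        self.sqr_val = x
        return self

    def __str__(self):
        return f"sqrt({self.sqr_val})"

    def __pow__(self, n):
        # sqrt(x) ** n == x ** (n / 2), computed exactly from the radicand
        return self.sqr_val ** (n / 2)


r = Sqrt(16)
```

With this, `r + 1`, `2 * r`, `r / 4` and friends work without any extra dunders, at the cost of results being plain floats (immutability of float also means the value is fixed at construction).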
<python>
2023-05-31 06:57:15
1
507
Codeman
76,370,861
2,810,187
Common method for DRF model Viewset
<p>I have multiple model viewsets that override perform_create as below.</p> <pre><code> def perform_create(self, serializer): instance = serializer.save(user=self.request.user) </code></pre> <p>I am inserting this code into every modelviewset I create. Is there any way to make this more structured instead of repeating the code every time?</p>
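A common pattern is a small mixin class listed before `ModelViewSet` in the bases. Below is a plain-Python sketch of the wiring; `FakeRequest`, `FakeSerializer` and `ArticleViewSet` are stand-ins I invented so the idea can run without DRF installed:

```python
# Shared behaviour lives in one mixin instead of being copied per viewset
class UserStampedCreateMixin:
    """Stamp the creating user onto every saved object."""

    def perform_create(self, serializer):
        serializer.save(user=self.request.user)


# Stand-ins for DRF machinery, just to demonstrate the mixin
class FakeRequest:
    def __init__(self, user):
        self.user = user


class FakeSerializer:
    def save(self, **kwargs):
        self.saved_with = kwargs
        return kwargs


# In DRF this would be: class ArticleViewSet(UserStampedCreateMixin, ModelViewSet)
class ArticleViewSet(UserStampedCreateMixin):
    def __init__(self, request):
        self.request = request


view = ArticleViewSet(FakeRequest("alice"))
serializer = FakeSerializer()
view.perform_create(serializer)
```

Each real viewset then just adds the mixin to its bases and drops its own `perform_create`.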
<python><django><django-rest-framework>
2023-05-31 06:55:42
1
443
andylee
76,370,559
1,982,032
Why can't install yfinance in custom directory?
<p>Many Python packages are installed in my <code>.local/lib/python3.9/site-packages</code>, and I have already tried</p> <pre><code>pip install git+https://github.com/ranaroussi/yfinance </code></pre> <p>which installs <code>yfinance</code> into <code>.local/lib/python3.9/site-packages</code>. Now I am trying to install from a downloaded package.<br /> I downloaded a specific yfinance version from <code>https://github.com/ranaroussi/yfinance</code> and extracted it.</p> <pre><code>cd /home/debian/Downloads/yfinance-hotfix-proxy ls build LICENSE.txt mkdocs.yml setup.cfg test_yfinance.py CHANGELOG.rst MANIFEST.in README.md setup.py yfinance dist meta.yaml requirements.txt tests yfinance.egg-info </code></pre> <p>Install with <code>--prefix</code>:</p> <pre><code>debian@debian:~/Downloads/yfinance-hotfix-proxy$ python3 setup.py install --prefix /home/debian/.local running install /home/debian/.local/lib/python3.9/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( /home/debian/.local/lib/python3.9/site-packages/setuptools/command/easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools. warnings.warn( running bdist_egg running egg_info error: Cannot update time stamp of directory 'yfinance.egg-info' </code></pre> <p>Why can't I install it with <code>python3 setup.py install --prefix /home/debian/.local</code>?</p>
<python><installation><package>
2023-05-31 05:57:42
0
355
showkey
76,370,428
11,098,908
Preventing an object disappears from the screen in pygame
<p>This is the code from the book <em>Beginning Python Games Development With PyGame</em>:</p> <pre><code>import pygame from pygame.locals import * from sys import exit from math import * class Vector2: def __init__(self, x=0, y=0): self.x = x self.y = y if hasattr(x, &quot;__getitem__&quot;): x, y = x self._v = [float(x), float(y)] else: self._v = [float(x), float(y)] def __getitem__(self, index): return self._v[index] def __setitem__(self, index, value): self._v[index] = 1.0 * value def __str__(self): return &quot;(%s, %s)&quot;%(self.x, self.y) def from_points(P1, P2): return Vector2(P2[0] - P1[0], P2[1] - P1[1]) # return np.array(P2) - np.array(P1) # using numpy method def get_magnitude(self): return math.sqrt( self.x**2 + self.y**2 ) def normalize(self): magnitude = self.get_magnitude() try: self.x /= magnitude self.y /= magnitude except ZeroDivisionError: self.x = 0 self.y = 0 # rhs stands for Right Hand Side def __add__(self, rhs): return Vector2(self.x + rhs.x, self.y + rhs.y) def __sub__(self, rhs): return Vector2(self.x - rhs.x, self.y - rhs.y) def __neg__(self): return Vector2(-self.x, -self.y) def __mul__(self, scalar): return Vector2(self.x * scalar, self.y * scalar) def __truediv__(self, scalar): return Vector2(self.x / scalar, self.y / scalar) background_image_filename = 'sushiplate.jpg' sprite_image_filename = 'fugu.png' pygame.init() screen = pygame.display.set_mode((640, 480), 0, 32) background = pygame.image.load(background_image_filename).convert() sprite = pygame.image.load(sprite_image_filename).convert_alpha() clock = pygame.time.Clock() pygame.mouse.set_visible(False) pygame.event.set_grab(True) sprite_pos = Vector2(200, 150) sprite_speed = 300. sprite_rotation = 0. sprite_rotation_speed = 360. 
# Degrees per second while True: for event in pygame.event.get(): if event.type == QUIT: pygame.quit() exit() if event.type == KEYDOWN: if event.key == K_ESCAPE: pygame.quit() exit() pressed_keys = pygame.key.get_pressed() pressed_mouse = pygame.mouse.get_pressed() rotation_direction = 0. movement_direction = 0. rotation_direction = pygame.mouse.get_rel()[0] / 3. if pressed_keys[K_LEFT]: rotation_direction = +1. if pressed_keys[K_RIGHT]: rotation_direction = -1. if pressed_keys[K_UP] or pressed_mouse[0]: movement_direction = +1. if pressed_keys[K_DOWN] or pressed_mouse[2]: movement_direction = -1. screen.blit(background, (0,0)) rotated_sprite = pygame.transform.rotate(sprite, sprite_rotation) w, h = rotated_sprite.get_size() sprite_draw_pos = Vector2(sprite_pos.x-w/2, sprite_pos.y-h/2) screen.blit(rotated_sprite, (sprite_draw_pos.x, sprite_draw_pos.y)) time_passed = clock.tick() time_passed_seconds = time_passed / 1000.0 sprite_rotation += rotation_direction * sprite_rotation_speed * time_passed_seconds heading_x = sin(sprite_rotation*pi/180.) heading_y = cos(sprite_rotation*pi/180.) heading = Vector2(heading_x, heading_y) heading *= movement_direction sprite_pos+= heading * sprite_speed * time_passed_seconds pygame.display.update() </code></pre> <p>The problem with this script is that the <code>sprite</code> can <em>disappear</em> from the display screen. I tried to fix that problem by adding the following code right above the last line <code>pygame.display.update()</code></p> <pre><code>if sprite_pos.x &gt; screen.get_width() or sprite_pos.x &lt; 0: sprite_draw_pos.x = 0 if sprite_pos.y &gt; screen.get_height() or sprite_pos.y &lt; 0: sprite_draw_pos.y = 0 </code></pre> <p>However, that fix didn't work because the <code>sprite</code> still disappeared? Could someone please explain what I did wrong and show me how to achieve that goal? Thanks.</p>
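A note on why the attempted fix likely fails (my reading, not from the book): sprite_draw_pos is recomputed from sprite_pos every frame and then discarded, so clamping it has no lasting effect; the boundary check must modify the persistent sprite_pos instead. A pygame-free sketch of wrapping that position:

```python
def wrap_position(x, y, width, height):
    # Modular arithmetic wraps the persistent position back onto the screen:
    # x=650 on a 640-wide screen becomes 10, and y=-10 becomes 470
    return x % width, y % height

# In the book's loop this would replace the sprite_draw_pos check, e.g.:
#   sprite_pos.x, sprite_pos.y = wrap_position(sprite_pos.x, sprite_pos.y, 640, 480)
```

Clamping (min/max against the screen bounds) works the same way, as long as it is applied to sprite_pos before sprite_draw_pos is derived from it.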
<python><class><oop><pygame>
2023-05-31 05:34:13
1
1,306
Nemo
76,370,353
5,169,785
convert pydantic dataclass with extra arguments to dict
<p>I want to convert a pydantic dataclass to a dict but the method I'm using doesn't work (using python 3.8.10 and pydantic==1.10.8)</p> <pre class="lang-py prettyprint-override"><code>import dataclasses import pydantic class configExtraArgsAllow: extra = pydantic.Extra.allow @pydantic.dataclasses.dataclass(config=configExtraArgsAllow) class MyDataModel: foo: int data = { &quot;foo&quot;: 1, &quot;bar&quot;: 2, } dc = MyDataModel.__pydantic_model__.parse_obj(data) assert dc.bar == 2 dc_as_dict = dataclasses.asdict(dc) </code></pre> <p>It results in</p> <pre><code> dc_as_dict = dataclasses.asdict(dc) File &quot;/usr/lib/python3.8/dataclasses.py&quot;, line 1072, in asdict raise TypeError(&quot;asdict() should be called on dataclass instances&quot;) TypeError: asdict() should be called on dataclass instances </code></pre> <p>The method above works for pydantic dataclass without extra arguments though. Is there any other way to convert the pydantic dataclass into a dict when extra arguments are 'allowed'?</p>
<python><pydantic><python-dataclasses>
2023-05-31 05:20:05
1
351
Uwe Brandt
76,370,351
1,155,409
Memory leak when using multiprocessing in python
<p>I have memory leak problem when running the following code. I removed unnecessary parts of code as much as I can. When I running this, the usage of memory for cpu and swap is getting bigger and it reaches the maximum of it and ended up killing the program with the exit code 9. I changed the process save its result with saving csv file and release all variables with del command. But, I do not know the further solution. Thanks in advance!!</p> <pre><code>@njit def haversine_np(lon1, lat1, lon2, lat2): lon1, lat1, lon2, lat2 = map(np.radians, [lon1, lat1, lon2, lat2]) dlon = lon2 - lon1 dlat = lat2 - lat1 a = np.sin(dlat/2.0)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon/2.0)**2 c = 2 * np.arcsin(np.sqrt(a)) km = 6367 * c return km def get_similiar_candidates_lambda(i, r, df_emd, dists): KM_MAX_DISTANCE = 1.0 TOP_K = 5 similiar_min_candidates, similiar_max_candidates = [], [] viz_candidates = [] dists = {k: v for k, v in dists.items() if v &lt;= KM_MAX_DISTANCE} for c in df_emd.itertuples(): if i != c.Index: dc = c._asdict() if r['용도지역1'] == dc['용도지역1'] and r['지목'] == dc['지목']: try: euclidean_distance = dists[(r.name, dc['Index'])] if euclidean_distance &lt;= KM_MAX_DISTANCE: if dc['도로접면'] in pair_road_cond[r['도로접면']] and dc['토지이용상황'] in pair_land_usage[r['토지이용상황']]: if r['종류'] != '토지' and r['건물가격_추정'] != 0.: # 총액, 단가 기준 추가 total_price_prop = (dc['건물가격_추정'] + dc['공시가격_2022'] * dc['Mutiple_Pred'] ) / ( r['건물가격_추정'] + r['공시가격_2022'] * r['Mutiple_Pred'] ) else: total_price_prop = None unit_price_prop = (dc['공시가격_2022'] * dc['Mutiple_Pred']) / (r['공시가격_2022'] * r['Mutiple_Pred']) lb_total_price_prop, ub_total_price_prop = 0.5, 1.5 lb_unit_price_prop, ub_unit_price_prop = 0.5, 1.5 check1 = total_price_prop is not None and (lb_total_price_prop &lt;= total_price_prop and total_price_prop &lt;= ub_total_price_prop) check2 = lb_unit_price_prop &lt;= unit_price_prop and unit_price_prop &lt;= ub_unit_price_prop if check1: similiar_min_candidates.append((c.Index, 
euclidean_distance)) if check1 and check2: similiar_max_candidates.append((c.Index, euclidean_distance)) except KeyError as e: continue viz_candidates.append((c.Index, euclidean_distance)) # 반경 1km 내의 용도지역1과 지목이 같은 케이스 generals = [t for t in viz_candidates] min_candidates, max_candidates = None, None if len(similiar_min_candidates) &gt; TOP_K: similiar_min_candidates.sort(key=lambda x:x[1]) min_candidates = [t for t in similiar_min_candidates][:5] elif len(similiar_min_candidates) &gt;=1 and len(similiar_min_candidates) &lt;= 5: min_candidates = [t for t in similiar_min_candidates] if len(similiar_min_candidates) &gt; TOP_K: similiar_min_candidates.sort(key=lambda x:x[1]) max_candidates = [t for t in similiar_max_candidates][:5] elif len(similiar_min_candidates) &gt;=1 and len(similiar_max_candidates) &lt;= 5: max_candidates = [t for t in similiar_max_candidates] return min_candidates, max_candidates, generals def process_emd(args, tuple_emd): sd, sgg, emd = tuple_emd df_land_ai = get_land_ai(src_url, sql_land_ai) df_land = get_land(dm_url, sql_land) df_final = pd.merge(df_land_ai, df_land, on='pnu', how='right') df_final['Mutiple_Pred'] = df_final['predicted_land_price'] / df_final['공시가격_2022'] df_final = df_final.loc[:, columns] df_final = df_final.set_index('pnu') df_final = df_final[~df_final['Mutiple_Pred'].isna()] df_emd = df_final[~df_final.index.duplicated(keep='first')] candidate = list(df_emd.index) geo = df_emd[['lon_x', 'lat_y']].to_numpy() geos = np.asarray(list(it.product(geo, geo))).reshape(-1, 4) euclideans = haversine_np(geos[:,0], geos[:,1], geos[:,2], geos[:,3]) KM_MAX_DISTANCE = 1.0 TOP_K = 5 df_emd.reset_index(inplace=True) df_emd_cp = pd.merge(df_emd, df_emd, how='cross', suffixes=('_x', '_y')) # cartesian product for df_emd for speed up df_emd_cp['euclidean_distance'] = euclideans df_emd_cp = df_emd_cp[df_emd_cp['euclidean_distance'] &lt; KM_MAX_DISTANCE] # remove euclidean distance &gt; 1km in advance for speed up df_emd_cp = 
df_emd_cp[df_emd_cp['pnu_x'] != df_emd_cp['pnu_y']] df_emd_cp = df_emd_cp[df_emd_cp.apply(lambda x: True if x['도로접면_y'] in pair_road_cond[x['도로접면_x']] else False, axis=1)] # 236 df_emd_cp = df_emd_cp[df_emd_cp.apply(lambda x: True if x['토지이용상황_y'] in x['토지이용상황_x'] in pair_land_usage.keys() and pair_land_usage[x['토지이용상황_x']] else False, axis=1)] # 193 df_emd_cp['unit_price_prop'] = (df_emd_cp['공시가격_2022_y'] * df_emd_cp['Mutiple_Pred_y']) / (df_emd_cp['공시가격_2022_x'] * df_emd_cp['Mutiple_Pred_x']) lb_total_price_prop, ub_total_price_prop = 0.5, 1.5 lb_unit_price_prop, ub_unit_price_prop = 0.5, 1.5 df_emd_cp['check1'] = df_emd_cp['total_price_prop'].apply(lambda x : np.isnan(x) or (x is not None and (lb_total_price_prop &lt;= x and x &lt;= ub_total_price_prop))) df_emd_cp['check2'] = df_emd_cp['unit_price_prop'].apply(lambda x : x is not None and (lb_unit_price_prop &lt;= x and x &lt;= ub_unit_price_prop)) if len(df_emd_cp) &gt; 1: df_similiar_min_candidates = df_emd_cp.groupby(by=['pnu_x'], group_keys=True).apply(lambda x: (sorted( zip(x.loc[x['check1'], 'pnu_y'].tolist(), x.loc[x['check1'], 'euclidean_distance'].tolist()), key=lambda x:x[1])[:min(TOP_K, x['check1'].sum())]) ).reset_index().rename(columns={0:'similiar_min_candidates'}) df_similiar_max_candidates = df_emd_cp.groupby(by=['pnu_x'], group_keys=True).apply(lambda x: sorted( zip(x.loc[(x['check1'] &amp; x['check2']), 'pnu_y'].tolist(), x.loc[(x['check1'] &amp; x['check2']), 'euclidean_distance'].tolist()), key=lambda x:x[1])[: min( TOP_K, (x['check1'] &amp; x['check2']).sum())] ).reset_index().rename(columns={0:'similiar_max_candidates'}) if len(df_similiar_min_candidates) != 0: df_similiar_min_candidates = df_similiar_min_candidates[['pnu_x', 'similiar_min_candidates']] df_similiar_min_candidates.rename(columns={'pnu_x':'pnu'}, inplace=True, errors='ignore') # euclidean_distance df_similiar_min_candidates = df_similiar_min_candidates[df_similiar_min_candidates['similiar_min_candidates'].apply(lambda x: x 
is not None and len(x) &gt; 0)] df_emd_final = df_similiar_min_candidates.merge(df_emd, how='left', on='pnu') df_neighbors = df_emd_final[['pnu']].copy() df_land_neighbors = df_neighbors.merge(df_emd_final, how='left', on='pnu') df_land_neighbors = df_land_neighbors.groupby(by=['pnu'], group_keys=True).apply(lambda x: pd.DataFrame.from_records(x['similiar_min_candidates'].iloc[0], columns=['neighbor_pnu', 'neighbor_distance'])).reset_index() df_land_neighbors = df_land_neighbors.rename(columns={'level_1':'neighbor_seq'}) df_land_neighbors = df_land_neighbors.loc[:, ['criteria_id', 'criteria_version', 'pnu', 'neighbor_seq', 'neighbor_pnu', 'neighbor_distance', 'sd', 'sgg', 'emd', 'createdat', 'updatedat']] df_building = pd.read_sql(sql_building, con=get_db_connection('dm')) df_neighbors = df_land_neighbors[['pnu', 'neighbor_pnu', 'neighbor_seq', 'neighbor_distance', 'sd', 'sgg', 'emd']] df_neighbors = df_neighbors.rename(columns={'pnu': 'base_pnu'}) df_right = pd.merge(df_neighbors, df_building.loc[:, df_building.columns.isin(['pnu','building_master_id'])], left_on='neighbor_pnu', right_on='pnu') df_right.rename(columns={'building_master_id': 'neighbor_building_master_id'}, inplace=True) result = pd.merge(df_building.loc[:, df_building.columns.isin(['pnu','building_master_id'])], df_right, left_on='pnu', right_on='base_pnu') df_ai_building_neighbors = result[['criteria_id', 'criteria_version', 'building_master_id', 'neighbor_building_master_id', 'neighbor_distance', 'sd', 'sgg', 'emd']] os.makedirs(f'data/{sd}/{sgg}/{emd}', exist_ok=True) df_ai_building_neighbors.to_csv(f'data/{sd}/{sgg}/{emd}/ai_building_neighbors.csv', index=False) df_land_neighbors.to_csv(f'data/{sd}/{sgg}/{emd}/ai_land_neighbors.csv', index=False) del result, df_ai_building_neighbors, df_neighbors, df_right, df_building, df_similiar_min_candidates, df_similiar_max_candidates, df_emd_final, df_land_neighbors del df_emd_cp, df_land_ai, df_land, df_final, df_emd, geo, geos, euclideans else: del 
df_emd_cp, df_land_ai, df_land, df_final, df_emd, geo, geos, euclideans t.toc(msg=f'Elapsed time for getting similiar properties of {emd} : ', restart=True) def get_args(): parser = argparse.ArgumentParser(description='lightgbm to evaluate land price') parser.add_argument('--model_id', type=str, default='lightgbm', help='model id') parser.add_argument('--model_version', type=str, default='3.0', help='model version') parser.add_argument('--data_version', type=str, default='230525') return parser.parse_args() if __name__ == '__main__': args = get_args() NUM_CORES = 5 df_emd = pd.read_sql(sql_emd, con=get_db_connection('src')) pool = Pool(NUM_CORES) pool.starmap(process_emd, itertools.product([args], list(zip(df_emd.sd2, df_emd.sgg, df_emd.emd)))) pool.close() pool.join() </code></pre>
<python><pandas>
2023-05-31 05:19:35
0
3,939
verystrongjoe
76,370,279
1,260,682
setting python path in virtual env
<p>I have the following directory structure:</p> <pre><code>~ |-- test |-- foo.py </code></pre> <p>I was able to run <code>foo.py</code> as <code>python test/foo.py</code> or <code>python -m test.foo</code> in the normal command line. But once I started a venv in <code>~</code> the former still works but the latter no longer works (python complains about &quot;No module named test.foo&quot;). Did I set my python path incorrectly?</p>
<python><python-venv>
2023-05-31 04:59:20
1
6,230
JRR
76,370,238
7,023,590
Product of 2 numbers exclusively in Kivy language (.kv file) - why the order is important?
<p>I want to multiply 2 numbers, performing it exclusively in the .kv file.</p> <p>So I created this minimalistic <code>main.py</code> file:</p> <pre><code>from kivy.app import App class Test(App): pass Test().run() </code></pre> <p>and this minimalistic <code>test.kv</code> file:</p> <p> </p> <ol> <li><p>My 1<sup>st</sup> approach:</p> <pre><code>BoxLayout: TextInput: id: num_1 text: &quot;8&quot; TextInput: id: num_2 text: &quot;4&quot; Label: text: f&quot;Product: {int(num_1.text) * int(num_2.text)}&quot; </code></pre> <p>After launching the application, I obtained the error</p> <blockquote> <pre><code> &gt;&gt; 9: text: f&quot;Product: {int(num_1.text) * int(num_2.text)}&quot; ... ValueError: invalid literal for int() with base 10: '' </code></pre> </blockquote> </li> </ol> <p> </p> <ol start="2"> <li><p>My 2<sup>nd</sup> approach – I changed only <em>the order</em> of widgets, putting the Label before the TextInputs:</p> <pre><code>BoxLayout: Label: text: f&quot;Product: {int(num_1.text) * int(num_2.text)}&quot; TextInput: id: num_1 text: &quot;8&quot; TextInput: id: num_2 text: &quot;4&quot; </code></pre> <p>and I obtained the expected result</p> <p><a href="https://i.sstatic.net/6JGrE.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6JGrE.jpg" alt="enter image description here" /></a></p> <p>(which correctly reflects changes in the TextInputs).</p> </li> </ol> <p><strong>So the question is: Why didn't my first approach work?</strong></p> <hr /> <p><em>Note:</em></p> <p>I'm not interested in a better solution or some workaround; I only want to understand why my first <code>test.kv</code> file doesn't work.</p> <hr /> <p><strong>EDIT:</strong></p> <p>If, in my 1<sup>st</sup> (not working) approach, I change the text of the Label to <code>num_1.text + num_2.text</code> (concatenation of strings), it works!</p>
<python><kivy><kivy-language>
2023-05-31 04:46:23
1
14,341
MarianD
76,370,208
3,521,180
Why is the mock data unit test failing?
<p>I am writing a unit test with mock data for one of my functions.</p> <p>Previously the response of my function looked like this:</p> <pre><code>{ &quot;data&quot;: [ { &quot;a&quot;: 2264, &quot;b&quot;: &quot;Comp PT&quot; } ], &quot;success&quot;: True } </code></pre> <p>The unit test was written for this, and the test was passing as well. But due to some requirements the response is a little different from the previous one as shown below:</p> <pre><code>{ &quot;data&quot;: { &quot;data&quot;: [ { &quot;a&quot;: 2264, &quot;b&quot;: &quot;Comp PT&quot; } ], &quot;status&quot;: &quot;In Progress&quot;, &quot;c&quot;: &quot;5&quot; }, &quot;success&quot;: true } </code></pre> <p>Now my test is failing with this error:</p> <pre><code>FAILED (failures=1) 200 != 500 Expected :500 Actual :200 </code></pre> <p>The mock data is passed as <code>self.get_nano</code> inside the test file as shown below:</p> <pre class="lang-py prettyprint-override"><code>from unittest.mock import patch from src.rest_apis.web_app import app from src.v1.tests.test_main import MockTestBase class TestSimulationConfig(MockTestBase): @patch('&lt;path_to&gt;.SimulationResult') @patch('&lt;path_to&gt;.execute_query') def test_get_abc(self, mock_query_execut, mock_sim_result): mock_query_execut.return_value = {'result': self.get_nano['data']} mock_sim_result.get_selected_components.return_value = self.get_nano api_url = f'&lt;path_to_api&gt;' response = self.app.get(api_url, content_type='application/json', follow_redirects=True) self.assertEqual(response.status_code, 200) </code></pre>
<python><python-3.x><flask><python-unittest>
2023-05-31 04:35:28
0
1,150
user3521180
76,370,183
8,553,795
Why does Lime need training data to compute local explanations
<p>I am using Lime to compute local explanation, however I do not understand why do I have to pass training data <code>X_train</code> in the below line of code</p> <pre><code>explainer = lime_tabular.LimeTabularExplainer(X_train, mode=&quot;regression&quot;, feature_names= boston.feature_names) </code></pre> <p>Below is an excerpt around how Lime operates from this great book named <a href="https://christophm.github.io/interpretable-ml-book/lime.html#lime" rel="nofollow noreferrer">Interpretable Machine Learning</a> by Christoph Molnar around XAI -</p> <blockquote> <p>The recipe for training local surrogate models:</p> <ul> <li>Select your instance of interest for which you want to have an explanation of its black box prediction.</li> <li>Perturb your dataset and get the black box predictions for these new points.</li> <li>Weight the new samples according to their proximity to the instance of interest.</li> <li>Train a weighted, interpretable model on the dataset with the variations.</li> <li>Explain the prediction by interpreting the local model.</li> </ul> </blockquote> <p>If I understand correctly, Lime trains a weighted interpretable model for each instance of interest by sampling points from its neighbourhood. The weights assigned to the features in this model serve as the local explanation for that particular instance. And this is exactly what we do in the below lines of code -</p> <p><code>exp = explainer.explain_instance(X_test.values[3], model.predict, num_features=6)</code></p> <p>We pass an instance, using this instance it would compute the neighbours for which it would fit an interpretable model. So why did we pass <code>X_train</code> in the first line of code? How would Lime make use of it is what I don't understand.</p>
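One way to see why (my illustration, not LIME's exact implementation, which additionally handles discretization and categorical features): the explainer precomputes per-feature statistics from X_train and uses them to scale the random perturbations around the instance, so that sampled neighbours look like plausible data points:

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(loc=5.0, scale=2.0, size=(200, 3))

# Per-feature scale estimated once from the training data -- a simplified
# stand-in for what LimeTabularExplainer stores when given X_train
stds = X_train.std(axis=0)


def perturb(instance, n_samples=500):
    # Neighbours are sampled around the instance at the *training* scale,
    # which is why X_train must be supplied up front
    noise = rng.normal(size=(n_samples, instance.shape[0]))
    return instance + noise * stds


samples = perturb(np.zeros(3))
```

Without X_train the explainer would have no idea what "nearby" means for each feature (a 0.1 shift may be huge for one feature and negligible for another).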
<python><machine-learning><shap><lime>
2023-05-31 04:28:17
1
393
learnToCode
76,370,158
17,685,806
How to encapsulate running a process in Python with realtime + to-variable output capture and return code capture?
<blockquote> <p>Note: I have read <a href="https://stackoverflow.com/questions/14043030/checking-status-of-process-with-subprocess-popen-in-python">this</a>, <a href="https://stackoverflow.com/questions/25750468/displaying-subprocess-output-to-stdout-and-redirecting-it">this</a>, <a href="https://stackoverflow.com/questions/803265/getting-realtime-output-using-subprocess">this</a>, and <a href="https://stackoverflow.com/questions/18344932/python-subprocess-call-stdout-to-file-stderr-to-file-display-stderr-on-scree">this</a> question. While there is much useful info, none give an exact answer. My knowledge of the language is limited, so I cannot put together pieces from those answers in a way that fit the needed use case (especially point (4) below).</p> </blockquote> <p>I am looking for a way to run a process with a given set of arguments in current Python (latest version atm is 3.11), with the following requirements:</p> <ol> <li>The stderr and stdout of the process are displayed in real-time, as they would be if run directly (or through a script) in almost any shell, i.e. <code>bash</code> or <code>PowerShell</code></li> <li>Both streams are also separately captured into a string or byte array for accessing later</li> <li>The return code is captured on process finish</li> <li>This is encapsualted in a function that simply requires the argument list, and returns an object containing both streams and the return code</li> </ol> <p><em>All points except for (1)</em> are covered by <code>subprocess.run(args, capture_output=True)</code>. So, I need to define a function</p> <pre class="lang-py prettyprint-override"><code>def run_and_output_realtime: # ??? 
return result </code></pre> <p>Which would allow changing <em>just the first line</em> of code like</p> <pre class="lang-py prettyprint-override"><code>run_tool_result = subprocess.run(args, capture_output=True) if run_tool_result.returncode != 0: if &quot;Unauthorized&quot; in run_tool_result.stderr.decode(): print(&quot;Please check authorization&quot;) else: if &quot;Duration&quot; in run_tool_result.stdout.decode(): # parse to get whatever duration was in the output </code></pre> <p>To <code>run_tool_result = run_and_output_realtime(args)</code>, and have all the remaining lines unchanged and working.</p>
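A sketch of one way to build this function (line-buffered, so a child that prints partial lines will lag slightly, and stdout/stderr interleaving on screen is best-effort): drain each pipe on its own thread, echoing in real time while keeping a copy, then return the same `CompletedProcess` shape that `subprocess.run(..., capture_output=True)` produces:

```python
import subprocess
import sys
import threading


def _tee(pipe, sink, chunks):
    # Drain one child pipe: echo each line as it arrives and keep a copy
    for line in iter(pipe.readline, b""):
        chunks.append(line)
        sink.write(line.decode(errors="replace"))
        sink.flush()
    pipe.close()


def run_and_output_realtime(args):
    proc = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out_chunks, err_chunks = [], []
    threads = [
        threading.Thread(target=_tee, args=(proc.stdout, sys.stdout, out_chunks)),
        threading.Thread(target=_tee, args=(proc.stderr, sys.stderr, err_chunks)),
    ]
    for t in threads:
        t.start()
    returncode = proc.wait()
    for t in threads:
        t.join()
    # Same attributes as the result of subprocess.run(..., capture_output=True):
    # .args, .returncode, .stdout, .stderr (bytes)
    return subprocess.CompletedProcess(args, returncode,
                                       b"".join(out_chunks), b"".join(err_chunks))
```

Because the return value is a real `CompletedProcess`, the downstream `returncode` / `stderr.decode()` / `stdout.decode()` checks work unchanged.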
<python><terminal><command-line><automation><subprocess>
2023-05-31 04:22:24
1
1,615
goose_lake
76,370,121
11,098,908
Why an object can be split into 2 components
<p>I came across this class <code>Vector2</code> which was constructed like this</p> <pre><code>class Vector2: def __init__(self, x=0, y=0): self.x = x self.y = y if hasattr(x, &quot;__getitem__&quot;): x, y = x self._v = [float(x), float(y)] else: self._v = [float(x), float(y)] def __getitem__(self, index): return self._v[index] def __setitem__(self, index, value): self._v[index] = 1.0 * value def __str__(self): return &quot;(%s, %s)&quot;%(self.x, self.y) </code></pre> <p>As I don't have much knowledge in Python, I couldn't understand the first line of the block <code>if hasattr(x, &quot;__getitem__&quot;):</code></p> <pre><code>if hasattr(x, &quot;__getitem__&quot;): x, y = x # why can we do this? </code></pre> <p>I meant how could <code>x</code> be split into <code>x</code> and <code>y</code> because <code>x</code> is already itself (that is, <code>x == x</code>)?</p> <p>Also, what is the objective/purpose of <code>hasattr(x, &quot;__getitem__&quot;)</code>?</p> <p><strong>EDIT</strong>: If I instantiated a vector <code>v1 = Vector2(100, 200)</code> to represent the movement of an object in <code>pygame</code>, I found out that I couldn't use <code>v1[0]</code> to specify the location of the object on the x-axis for drawing (with <code>pygame.blit()</code>). Why was that, given <code>v1[0] == v1.x</code>?</p> <p><strong>EDIT 2</strong>: correction of the <strong>edit</strong> above. I was wrong, <code>v1[0]</code> (same as <code>v1.x</code>) can be used to specify the location of the object.</p>
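For the `x, y = x` puzzle, a standalone illustration: the right-hand side is evaluated first, then unpacked into the two names, so the old binding of x is simply replaced; and `hasattr(x, "__getitem__")` is a loose duck-typing check that x is indexable like a tuple or list (i.e. the constructor was called as `Vector2((100, 200))` rather than `Vector2(100, 200)`):

```python
x = (100, 200)   # x is a single indexable object (a pair)
x, y = x         # RHS evaluated first, then unpacked: now x == 100, y == 200

# The hasattr check distinguishes "pair-like" arguments from plain numbers
pair_like = hasattr((100, 200), "__getitem__")   # tuples are indexable
number_like = hasattr(100, "__getitem__")        # ints are not
```

This also explains why `v1[0]` and `v1.x` both work on a `Vector2(100, 200)`: `__getitem__` reads `self._v[0]`, which was initialised from the same x.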
<python><class><oop><hasattr>
2023-05-31 04:11:20
1
1,306
Nemo
76,370,054
10,200,497
Creating a new column by subtracting two other columns that one of them is shifted
<p>This is my dataframe:</p> <pre><code>df = pd.DataFrame({'a': [20, np.nan, np.nan], 'b': [np.nan, 3, 2]}) a b 0 20.0 NaN 1 NaN 3.0 2 NaN 2.0 </code></pre> <p>And this is the output that I want:</p> <pre><code> a b c 0 20.0 NaN NaN 1 NaN 3.0 17.0 2 NaN 2.0 15.0 </code></pre> <p>I want to subtract column <code>a</code> and <code>b</code>. For example the second row is 20 - 3 and the last row is 17 - 2. Basically I want to shift one value. That is, I want to subtract by the result of the previous one.</p> <p>This is the code that I have tried:</p> <pre><code>df['c'] = df['a'].shift(1) - df.b </code></pre>
<python><pandas>
2023-05-31 03:53:41
1
2,679
AmirX
76,369,917
1,603,480
With Python and `win32com`, how to open a RTF file without the pop-up asking to convert?
<p>When opening an RTF document in MS Word, Word usually shows a pop-up asking to convert it (offering different formats, including RTF).</p> <p>Therefore, when opening the same file with <code>win32com</code>, you need to manually click on the icon of MS Word, or switch to MS Word if it is open (to show this pop-up), and click on <strong>OK</strong>. That breaks the automation flow.</p> <p>The code I currently have is this one:</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path from win32com import client, __gen_path__ rtf_file = Path(...) word = client.gencache.EnsureDispatch(&quot;Word.Application&quot;) doc = word.Documents.Open(str(rtf_file.absolute())) doc.Activate() </code></pre> <p>How can I bypass this pop-up to automate processing of RTF files?</p>
<python><ms-word><pywin32><win32com><rtf>
2023-05-31 03:10:48
1
13,204
Jean-Francois T.
76,369,916
8,702,633
Add new column in Dataframe based on value in the array from another column
<p>I have a dataframe where one of the columns is an array. I am trying to get a value from some elements of the array in this column and append it as a new column.</p> <p>Here is the dataframe:</p> <pre><code> Key Version 0 ABC-1 [{'name': 'v1', 'releaseDate': '2023-01-01'}] 1 ABC-2 [{'name': 'v2', 'releaseDate': '2023-02-01'}] 2 ABC-3 [{'name': 'v3', 'releaseDate': '2023-03-01'}] </code></pre> <p>Here is my code:</p> <pre><code>df[&quot;NewColumn&quot;] = df[&quot;Version&quot;][0][0]['name'] </code></pre> <p>The result is like below. It always pulls the first value for every row in the dataframe.</p> <pre><code> Key Version NewColumn 0 ABC-1 [{'name': 'v1', 'releaseDate': '2023-01-01'}] v1 1 ABC-2 [{'name': 'v2', 'releaseDate': '2023-02-01'}] v1 2 ABC-3 [{'name': 'v3', 'releaseDate': '2023-03-01'}] v1 </code></pre> <p>I tried to use dataframe apply with a lambda and it did not work. It says &quot;KeyError: 0&quot;.</p> <pre><code>df[&quot;NewColumn&quot;] = df.apply(lambda row: row.Version[0][0]['name'], axis=1) </code></pre>
<python><dataframe>
2023-05-31 03:10:47
2
331
Max
76,369,896
4,872,065
Creating a two level x-axis that groups the first axis
<p>I'm trying to recreate a variant of this chart: <a href="https://i.sstatic.net/fhI3F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fhI3F.png" alt="chart generated using excel" /></a></p> <p>The main difference being that I am creating a bar chart with the x-axis being date time (in 3 hour increments), and the bar being coloured based on another series called intensity (low - green, high - red).</p> <p>That said, one axis should be the hours in a day, the second axis below it should group those times into the day they belong to.</p> <p>What I have so far:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib as mplt import matplotlib.pyplot as plt from matplotlib import dates import numpy as np import datetime as pdt from datetime import datetime, timedelta import seaborn as sns start_dt = np.datetime64('today').astype(np.datetime64) end_dt = np.datetime64('today') + np.timedelta64(6, 'D') x = np.arange(start_dt, end_dt, np.timedelta64(180, 'm')) x = [i.astype(datetime) for i in x] intensity = np.random.uniform(0, 10, len(x)) y = np.ones(shape=len(x)) plt.rcParams[&quot;figure.figsize&quot;] = [17, 1.2] plt.rcParams[&quot;figure.autolayout&quot;] = True sns.set_style(&quot;whitegrid&quot;) sns.despine(bottom = True, left = True, top = True) fig, ax = plt.subplots() colors = [] for i in intensity: if 0 &lt;= i &lt;= 5: colors.append('#75FF71') elif 6 &lt;= i &lt; 8: colors.append('#FFC53D') else: colors.append('#FF5C5C') graph = sns.barplot(x=x, y=y, palette=colors, width=1.0, linewidth=0) graph.grid(False) graph.set(yticklabels=[]) x_tick_label = [] for val in x: min_ts = min(x) diff_days = (val - min_ts).days diff_hours = (val - min_ts).seconds/3600 total = diff_days*24 + int(diff_hours) if val.time() == pdt.time(0,0): # x_tick_label.append(val.strftime(&quot;%m/%d&quot;)) x_tick_label.append(&quot;&quot;) elif val.time() == pdt.time(6,0) or val.time() == pdt.time(12,0) or val.time() == pdt.time(18,0) : # elif val.time() == pdt.time(12,0): x_tick_label.append(f&quot;{val.strftime('%-H')}:00&quot;) else: x_tick_label.append('') graph.set(xticklabels=x_tick_label) for ticklabel in graph.axes.get_xticklabels(): ticklabel.set_color(&quot;#FFC53D&quot;) ax2 = ax.axes.twiny() ax2.spines['top'].set_position(('axes', -0.15)) ax2.spines['top'].set_visible(False) # ax2.xaxis.set_major_formatter(day_locator) plt.xticks(fontweight='light',ha='right', rotation=90) plt.box(on=None) plt.show() </code></pre> <p><a href="https://i.sstatic.net/WNsqd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WNsqd.png" alt="output of the code above" /></a></p>
<python><matplotlib><seaborn><x-axis>
2023-05-31 03:04:27
1
427
AGS
76,369,726
5,861,256
Get max value rows then min value rows, or if a subset has all Nan, keep everything
<p>I have a pandas dataframe</p> <pre class="lang-py prettyprint-override"><code>x = pd.DataFrame({ &quot;t&quot;: [&quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;, &quot;A&quot;, &quot;C&quot;, &quot;C&quot;], &quot;m&quot;: [&quot;k&quot;, &quot;m&quot;, &quot;m&quot;, &quot;k&quot;, &quot;m&quot;, &quot;m&quot;, &quot;b&quot;, &quot;k&quot;, &quot;f&quot;, &quot;d&quot;], &quot;f1&quot;: [1.2, np.nan, 0.8, 1, 1, 1.5, 1, np.nan, np.nan, np.nan], &quot;f2&quot;: [100, 200, 200, 100, 100, 100, 50, 200, 300, 400]}) </code></pre> <pre><code> t m f1 f2 0 A k 1.2 100 1 A m NaN 200 2 A m 0.8 200 3 B k 1.0 100 4 B m 1.0 100 5 B m 1.5 100 6 B b 1.0 50 7 A k NaN 200 8 C f NaN 300 9 C d NaN 400 </code></pre> <p>On <code>f1</code> I want to pick the min value of each subgroup <code>t</code>. If all of a subgroup's values are NaN, we need to keep all of them. On <code>f2</code> I want to pick the max values. Again, if all values are NaN, we need to keep all the rows. So I want an output something similar to</p> <pre><code> t m f1 f2 0 A m 0.8 200 1 B k 1.0 100 2 B m 1.0 100 3 C d NaN 400 </code></pre> <p>I was able to achieve this using</p> <pre class="lang-py prettyprint-override"><code>def keep_rows(k, col, op): # if all values are Nan if np.isnan(k[col].values).all(): return k return k[k[col] == getattr(np, f&quot;nan{op}&quot;)(k[col])] </code></pre> <pre class="lang-py prettyprint-override"><code>tt = x.groupby(&quot;t&quot;, as_index=False).apply(lambda x: keep_rows(x, &quot;f1&quot;, &quot;min&quot;)).reset_index(drop=True) tt = tt.groupby(&quot;t&quot;, as_index=False).apply(lambda x: keep_rows(x, &quot;f2&quot;, &quot;max&quot;)).reset_index(drop=True) tt </code></pre> <p>But is there a better way? Especially in pandas v2?</p>
<python><pandas>
2023-05-31 02:11:12
1
715
Prakash Vanapalli
76,369,707
653,397
Flask Application on Azure App Service throwing "400 Bad Request" error
<p>I have deployed a simple <code>Flask app</code> on <code>Azure App Service</code> using an Azure DevOps Build and Release pipeline. The Flask app just takes user input and returns it back. On local deployment &amp; testing it runs without any issue and I can see the expected output, but when the app is deployed on the App Service and a request is sent from either Postman or Python, I get the below error.</p> <pre><code>&lt;!doctype html&gt; &lt;html lang=en&gt; &lt;title&gt;400 Bad Request&lt;/title&gt; &lt;h1&gt;Bad Request&lt;/h1&gt; &lt;p&gt;The browser (or proxy) sent a request that this server could not understand.&lt;/p&gt; </code></pre> <p>Below are the relevant details.</p> <p><strong>app.py</strong></p> <pre><code>from flask import Flask, request, jsonify app = Flask(__name__) @app.route('/chat', methods=['GET', 'POST']) def chat(): data = request.json return jsonify(data) if __name__ == '__main__': app.run(debug=True) </code></pre> <p><strong>send_request.py</strong></p> <pre><code>import requests import json url = &quot;http://&lt;app&gt;.azurewebsites.net/chat&quot; payload = json.dumps({ &quot;name&quot;: &quot;Atinesh Singh&quot; }) headers = { 'Content-Type': 'application/json' } response = requests.request(&quot;POST&quot;, url, headers=headers, data=payload) print(response.text) </code></pre> <p><strong>Azure DevOps Release pipeline configuration</strong> <a href="https://i.sstatic.net/EkNqp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EkNqp.png" alt="enter image description here" /></a></p> <p><strong>App Service Access Restrictions settings</strong> <a href="https://i.sstatic.net/ItBK9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ItBK9.png" alt="enter image description here" /></a></p>
<python><azure><flask><azure-web-app-service><azure-pipelines>
2023-05-31 02:06:36
1
1,930
Atinesh Singh
76,369,510
1,968,829
Is there a vectorization method to match a df column to a dictionary
<p>I have the following (abbreviated) dictionary:</p> <pre><code>_d = { &quot;pain&quot;: [&quot;C0030193&quot;, &quot;C0150055&quot;, &quot;C0151825&quot;, &quot;C0184567&quot;], &quot;anxiety&quot;: [&quot;C0003467&quot;, &quot;C0003469&quot;, &quot;C0027769&quot;, &quot;C0154587&quot;, &quot;C0231397&quot;, &quot;C0231401&quot;, &quot;C0231402&quot;], &quot;depression&quot;: [&quot;C0001539&quot;, &quot;C0005587&quot;, &quot;C0011579&quot;, &quot;C0011581&quot;, &quot;C0024517&quot;, &quot;C0086132&quot;], &quot;fatigue&quot;: [&quot;C0015672&quot;] } </code></pre> <p>and the following dataframe:</p> <pre><code>df = cui 1 &quot;C0015672&quot; 2 &quot;C0015634&quot; 3 &quot;C0011579&quot; 4 &quot;C0030193&quot; 5 &quot;C0031193&quot; 6 &quot;C0030193&quot; </code></pre> <p>I want to match up the column df[&quot;cui&quot;], such that if the value of df[&quot;cui&quot;] is in any of the value lists in the dictionary, then I want a new column &quot;symptom&quot; based on the dictionary key; otherwise it should remain null.</p> <p>This is the desired output:</p> <pre><code>df = cui symptom 1 C0015672 fatigue 2 C0015634 NaN 3 C0011579 depression 4 C0030193 pain 5 C0031193 NaN 6 C0030193 pain </code></pre> <p>I can do this by iterating over each row in the dataframe, but since there are 10s of millions of rows, it's super slow. I'm looking for a way to vectorize this.</p>
<python><pandas><dictionary><vectorization>
2023-05-31 01:06:04
2
2,191
horcle_buzz
76,369,462
11,116,696
EntryPoints attribute error when trying to combine files using xarray
<p><strong>Background</strong></p> <p>As recently as two months ago, I had a working Python script which I used regularly to combine netCDF4 weather files.</p> <p>However, ever since I recently updated my laptop and reinstalled Python (3.7, as per I.T. policy) and the latest Python libraries, the script has stopped working; it appears something has been deprecated, and this has caused my code to stop working.</p> <p><strong>Problem</strong></p> <p>I am getting the error message: AttributeError: 'EntryPoints' object has no attribute 'get'</p> <p><strong>What I've tried</strong></p> <p>I've referred to these other posts on SO (<a href="https://stackoverflow.com/questions/73990243/import-fsspec-throws-error-attributeerror-entrypoints-object-has-no-attribut">link</a>, <a href="https://stackoverflow.com/questions/73929564/entrypoints-object-has-no-attribute-get-digital-ocean">link</a>). From those I can see the issue has arisen from importlib-metadata (v5.0.0+), which comes as part of the standard Python install (including 3.7). (I have importlib-metadata 6.6.0 on my machine.)</p> <p>I tried downgrading to an older version (importlib-metadata 4.0.0), but that resulted in a different problem.</p> <p>I also asked I.T. to update my Python to a higher version (3.9, 3.10, etc.), but apparently that's an entire process which could take ages.</p> <p><strong>Help Requested</strong></p> <p>Does anyone know how I can resolve this issue?</p> <p><strong>Example Error message</strong></p> <pre><code>Traceback (most recent call last): File &quot;C:/Users/User/PycharmProjects/project/python_script.py&quot;, line 59, in get_era5 merge_netcdf4 = xr.open_mfdataset(list, combine='by_coords') File &quot;C:/Users/User/PycharmProjects/project\venv\lib\site-packages\xarray\backends\api.py&quot;, line 908, in open_mfdataset datasets = [open_(p, **open_kwargs) for p in paths] File &quot;C:/Users/User/PycharmProjects/project\venv\lib\site-packages\xarray\backends\api.py&quot;, line 908, in &lt;listcomp&gt; datasets = [open_(p, **open_kwargs) for p in paths] File &quot;C:/Users/User/PycharmProjects/project\venv\lib\site-packages\xarray\backends\api.py&quot;, line 479, in open_dataset engine = plugins.guess_engine(filename_or_obj) File &quot;C:/Users/User/PycharmProjects/project\venv\lib\site-packages\xarray\backends\plugins.py&quot;, line 110, in guess_engine engines = list_engines() File &quot;C:/Users/User/PycharmProjects/project\venv\lib\site-packages\xarray\backends\plugins.py&quot;, line 105, in list_engines entrypoints = entry_points().get(&quot;xarray.backends&quot;, ()) AttributeError: 'EntryPoints' object has no attribute 'get' </code></pre> <p><strong>Example code</strong></p> <pre><code>for single_date in daterange(start_date, end_date): YYYY = single_date.strftime(&quot;%Y&quot;) MM = single_date.strftime(&quot;%m&quot;) DD = single_date.strftime(&quot;%d&quot;) fname = fpath + YYYY + MM + DD + '_era5.nc' list.append(fname) # Details lat_toplot = np.arange(25.25, 25.50, 0.25) # last number is exclusive lon_toplot = np.arange(140.25, 140.50, 0.25) # last number is exclusive merge_netcdf4 = xr.open_mfdataset(list, combine='by_coords').sel(longitude=lon_toplot, latitude=lat_toplot) </code></pre>
<python><python-xarray><python-importlib>
2023-05-31 00:49:06
1
601
Bobby Heyer
76,369,404
9,435,771
Extract and update DataFrame column names
<p>Reading data into DataFrame, expect certain column names but often they are mixed with a random string either before or after the name. There are also other columns and the column order is not guaranteed. I want to rename the applicable columns to the correct names.</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame( np.random.randn(4, 5), columns=['betarandstr1.823491', 'alpha randstr123', 'other', 'delta', 'randstr-1.281999 gamma'] ) keys = ['alpha', 'beta', 'gamma', 'delta'] # expected names </code></pre> <p>I can extract the desired name for each column.</p> <pre><code>&gt;&gt;&gt; df.columns.str.extract('(%s)' % '|'.join(keys)) 0 0 beta 1 alpha 2 NaN 3 delta 4 gamma </code></pre> <p>My question is how to update the names, but if NaN to keep the original name. So the desired result in this case <code>'other'</code> is retained instead of NaN.</p> <pre><code>&gt;&gt;&gt; df.columns Index(['beta', 'alpha', 'other', 'delta', 'gamma'], dtype='object') </code></pre>
<python><pandas>
2023-05-31 00:27:09
1
6,122
alec
76,369,395
5,942,100
Reindex dataframe based on column values using Pandas
<p><strong>Data</strong></p> <pre><code>ID Q1 Q2 Q3 AA 1 2 4 BB 5 5 5 CC 6 7 7 </code></pre> <p><strong>Desired</strong></p> <pre><code>ID Q1 Q2 Q3 BB 5 5 5 CC 6 7 7 AA 1 2 4 </code></pre> <p><strong>Doing</strong></p> <pre><code>df = pd.DataFrame(df, index = [&quot;BB&quot;,&quot;CC&quot;,&quot;AA&quot;]) </code></pre> <p>This simply adds the row names; I wish to rearrange the row order based on name. Any suggestion is appreciated.</p>
<python><pandas><numpy>
2023-05-31 00:25:43
0
4,428
Lynn
76,369,340
3,008,221
What is the pandas equivalent to a sql count window function with a filter?
<p>I am trying to find the pandas equivalent to a sql count window function with a filter. Here is the sql query:</p> <pre><code>select t.*, count(*) filter(where grp = 'new') over(partition by usr order by id) rn from mytable t order by usr, id </code></pre> <p>I tried the below after sorting by id:</p> <pre><code>mytable['rn'] = mytable.groupby('usr')['grp'].transform('count') </code></pre> <p>But it is wrong as I am not filtering on grp as I should and I don't know how to do such filter. So, what is the correct pandas equivalent (a vectorized solution)?</p> <p>Note: For more context, if needed, you can refer to this question, but you don't have to: <a href="https://stackoverflow.com/questions/76364851/how-to-create-a-row-number-that-infer-the-group-it-belongs-to-from-another-colum">link</a></p>
<python><pandas>
2023-05-31 00:09:52
2
433
Aly
76,369,298
15,542,245
How to limit Python regex 'greediness' when asking for all chars before a negative lookahead
<p>I have 4 matches with my pattern:</p> <pre><code>\d+\/?\d+\s[A-z]+.(?!\d) </code></pre> <p><a href="https://regex101.com/r/mPt19w/1" rel="nofollow noreferrer">Regex demo</a></p> <p><a href="https://i.sstatic.net/8bTA0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8bTA0.png" alt="regex demo" /></a></p> <p>I require parsing of 4 strings:</p> <pre><code>17 Howard Rd Howard. Stdnt 11/169 Wall Road, Wontown, Wkr 105 AGNEW, Marilyn Barbara 106 AGNEW, Mavis Rosina </code></pre> <p>If I add <code>*</code> or <code>+</code> after <code>.</code>, the match goes to the end of the string, so I lose the matches and the negative lookahead. How do I reconfigure this regex to extend the matches so I get 4 complete strings?</p>
<python><regex><regex-lookarounds>
2023-05-30 23:53:59
1
903
Dave
76,369,286
2,673,149
How to run different pytest commands on windows vs Linux with Tox?
<p>I figured the following would work, but tests are only run at all if I pass <code>tox -e linux</code> or <code>tox -e win32</code>.</p> <pre class="lang-ini prettyprint-override"><code>[testenv] commands = linux: py.test {posargs} win32: py.test -m &quot;not fails_on_windows&quot; {posargs} deps = pyflakes pytest </code></pre>
<python><pytest><tox>
2023-05-30 23:51:30
1
400
Spitfire19
76,369,225
5,942,100
Tricky shift values from specific rows to another row using Pandas with multi-index columns
<p>I am looking to shift values from specific rows to another row using Pandas.</p> <p><strong>Data</strong></p> <pre><code> cn_positions cn_positions cn_positions Date Q1.22 Q1.23 Q1.24 ID AA 73 87 104 BB 0 0 13 CC 0 20 62 CC 0 0 11 </code></pre> <p><strong>Desired</strong></p> <pre><code> cn_positions_Q1.22 cn_positions_Q1.23 cn_positions_Q1.24 Date ID AA 73 87 104 BB 0 0 13 CC 0 20 62 CC 0 0 11 </code></pre> <p><strong>Doing</strong></p> <pre><code>df_pivot = pd.pivot_table(df.unstack().reset_index(), values=0, index='ID', columns='Date').rename_axis(None, axis=1) </code></pre> <p>However, the above seems to eliminate the Q1.22,Q1.23 values. Any suggestion is appreciated.</p>
<python><pandas><multi-index>
2023-05-30 23:31:33
1
4,428
Lynn
76,369,147
6,087,667
re substitute with the first group
<p>I have an expression that I want to extract the first part before the <code>=</code> sign. However it doesn't work:</p> <pre><code>import re re.sub('(.*?)=(.*?)', r'\1', 'F0=None') </code></pre> <p>This returns 'F0None'. Shouldn't it return the first captured group, i.e. 'F0' only?</p>
<python><python-re>
2023-05-30 23:06:05
1
571
guyguyguy12345
76,369,119
512,480
VSCode: import of pip3-installed modules suddenly quit working
<p>I have been developing in a certain folder for months, under Python 3.10. Today, suddenly, VS Code thinks that my imported modules don't exist. In particular, any that were installed by pip3. The same is true of an actual Python process running in that context. But pip thinks they're installed!</p> <p>Possible culprit: a few days ago a package wouldn't install, and in a bit of haste and frustration I said &quot;sudo pip3 install ...&quot;. Apparently a bad idea. Today I uninstalled all those pieces that I had installed, and made sure that everything under my site-packages folder belongs to my user. Didn't help.</p> <p>Deleting <code>__pycache__</code> from my working folder didn't help.</p> <p>Now here's the stumper: I created a new folder and into it wrote the following tiny program:</p> <pre><code>import numpy, cv2, PIL print(&quot;success&quot;) </code></pre> <p>It works great, with or without the intervention of VS Code!</p> <p>Here's the relevant part of my launch.json, in case it could be somehow creating a strange environment. The environment variable MOCK is recognized by my code and shouldn't be an issue anywhere else, right? It certainly isn't new.</p> <pre><code>{ &quot;name&quot;: &quot;UIServer&quot;, &quot;type&quot;: &quot;python&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;${workspaceFolder}/UIServer.py&quot;, &quot;console&quot;: &quot;integratedTerminal&quot;, &quot;justMyCode&quot;: false, &quot;env&quot;: { &quot;MOCK&quot;: &quot;true&quot; }, }, </code></pre> <p><a href="https://i.sstatic.net/lHKgc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lHKgc.png" alt="enter image description here" /></a></p>
<python><visual-studio-code><pip><python-import>
2023-05-30 22:55:45
1
1,624
Joymaker
76,369,055
1,282,160
Using throttler and joblib for heavy jobs and getting "cannot unpack non-iterable function object"
<p>I have a lot of heavy jobs which need to send throttled HTTP requests (2 requests/sec) and calculate the results based on the response values.</p> <p>I use <a href="https://github.com/uburuntu/throttler" rel="nofollow noreferrer">throttler</a> to control the request rate, and use <a href="https://github.com/joblib/joblib" rel="nofollow noreferrer">joblib</a> to calculate the results across multiple processes:</p> <pre><code>import asyncio from math import sqrt import time from joblib import Parallel, delayed from throttler import throttle @throttle(rate_limit=1, period=0.5) async def request(i): &quot;&quot;&quot; Do nothing. Just sleep to mimic HTTP requests. &quot;&quot;&quot; time.sleep(0.3) print(f&quot;get response: {i} {time.time()}&quot;) return i def heavy_calculate(i): &quot;&quot;&quot; Just sqrt and square the number i for 30_000_000 times to mimic a heavy job. It takes about 2.1 sec for each call. &quot;&quot;&quot; n = i for _ in range(30_000_000): n = sqrt(n) n = n ** 2 print(f&quot;heavy_calculate: {n} {time.time()}&quot;) return n async def many_tasks(count: int): print(f&quot;=== START === {time.time()}&quot;) with Parallel(n_jobs=-1) as parallel: a = [] coros = [request(i) for i in range(count)] for coro in asyncio.as_completed(coros): i = await coro #! Note: this heavy_calculate will block the request rate too results = heavy_calculate(i) #! Tried using the following call instead, but it raises the error &quot;cannot unpack non-iterable function object&quot; # results = parallel(delayed(heavy_calculate)(i)) a.append(results) print(f&quot;=== END === {time.time()}&quot;) print(a) # Start run asyncio.run(many_tasks(10)) </code></pre> <p>The above code shows two functions that should be run:</p> <ul> <li><code>request</code>: mimics HTTP requests, throttled with a 0.5 sec period.</li> <li><code>heavy_calculate</code>: mimics a CPU-bound calculation which will block other jobs.</li> </ul> <p>I want to call <code>request</code> with throttling and run <code>heavy_calculate</code> across multiple processes, but it shows the error:</p> <pre><code>... TypeError: cannot unpack non-iterable function object </code></pre> <p>Any suggestion?</p>
<python><multiprocessing><throttling><joblib>
2023-05-30 22:39:58
1
3,407
Xaree Lee
76,369,026
5,130,078
How to access US Census TIGER shapefiles in geopandas?
<p>Context: I have a range of ACS data I work with, and I was looking to plot it at the block group level. However, I am struggling to find the relevant <code>geometry</code> objects to actually make such plots. TIGER files from census (which I think I used back in the early 2010s with geopandas) can't be loaded to <code>gpd</code>, and nowhere in the <a href="https://www2.census.gov/geo/pdfs/maps-data/data/tiger/tgrshp2022/TGRSHP2022_TechDoc.pdf" rel="nofollow noreferrer">current TIGER documentation</a> do they mention a data table with a geometry column.</p> <p>This stands in contrast to numerous example codes online (eg. <a href="https://pygis.io/docs/d_access_census.html" rel="nofollow noreferrer">here</a> and <a href="https://n8henrie.com/uploads/2017/11/plotting-us-census-data-with-python-and-geopandas.html" rel="nofollow noreferrer">here</a>) that just load TIGER uris into geopandas and implicitly have geometry in the loaded object.</p> <p>Example code:</p> <pre><code>import geopandas as gpd # input vars state_id = 78 year = 2020 file_type='bg' # load TIGER from URI uri = f&quot;https://www2.census.gov/geo/tiger/TIGER{year}/{file_type.upper()}/tl_{year}_{state_id}_{file_type.lower()}.zip&quot; example_blockgroups = gpd.read_file(uri) </code></pre> <p>gives <code>TypeError: __init__() missing 1 required keyword-only argument: 'geometry'</code></p> <p>As do other very basic loads of TIGER files like</p> <pre><code>gpd.read_file(&quot;https://www2.census.gov/geo/tiger/TIGER2020/STATE/tl_2020_us_state.zip&quot;) gpd.read_file(&quot;https://www2.census.gov/geo/tiger/TIGER2017/STATE/tl_2017_us_state.zip&quot;) gpd.read_file(&quot;https://www2.census.gov/geo/tiger/TIGER2019/TABBLOCK/tl_2019_01_tabblock10.zip&quot;) </code></pre>
<python><geopandas><tiger-census>
2023-05-30 22:30:51
0
1,344
Mark_Anderson
76,368,967
14,820,295
Crosstab (contingency table, or similar)
<p>I have a dataset like this</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>id</th> <th>january</th> <th>february</th> <th>march</th> <th>april</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>scheduled</td> <td>done</td> <td>null</td> <td>done</td> </tr> <tr> <td>2</td> <td>scheduled</td> <td>scheduled</td> <td>done</td> <td></td> </tr> <tr> <td>3</td> <td>ongoing</td> <td>canceled</td> <td>scheduled</td> <td></td> </tr> </tbody> </table> </div> <p>I want to transform this dataset into a matrix like the one below, where each cell is the number of occurrences of the exact intersection (keeping null values).</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>event</th> <th>january</th> <th>february</th> <th>march</th> <th>april</th> </tr> </thead> <tbody> <tr> <td>scheduled</td> <td>2</td> <td>1</td> <td>1</td> <td>0</td> </tr> <tr> <td>done</td> <td>0</td> <td>1</td> <td>1</td> <td>1</td> </tr> <tr> <td>ongoing</td> <td>1</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>null</td> <td>0</td> <td>0</td> <td>0</td> <td>2</td> </tr> </tbody> </table> </div>
<python><group-by><pivot-table>
2023-05-30 22:10:14
1
347
Jresearcher
76,368,843
610,569
How to load a .py script directly with evaluate.load?
<p>If I have a script like this <a href="https://huggingface.co/spaces/evaluate-metric/frugalscore/blob/main/frugalscore.py" rel="nofollow noreferrer">https://huggingface.co/spaces/evaluate-metric/frugalscore/blob/main/frugalscore.py</a> and save it as <code>fgscore.py</code> with a directory locally like:</p> <pre><code>./ my_script.py fgscore/ fgscore.py </code></pre> <p>And in my_script.py, I can do something like:</p> <pre><code>import evaluate mt_metrics = evaluate.load(&quot;fgscore&quot;) sources = [&quot;안녕하세요 저는 당신의 아버지입니다&quot;, &quot;일반 케노비&quot;] predictions = [&quot;hello here I am your father&quot;, &quot;general kenobi&quot;] references = [&quot;hello there I am your father&quot;, &quot;general yoda&quot;] results = mt_metrics.compute(predictions=predictions, references=references) print(results) </code></pre> <p>Looking at the <code>evaluate.load()</code> function, <a href="https://github.com/huggingface/evaluate/blob/main/src/evaluate/loading.py#L688" rel="nofollow noreferrer">https://github.com/huggingface/evaluate/blob/main/src/evaluate/loading.py#L688</a>, it states:</p> <pre><code>path (`str`): Path to the evaluation processing script with the evaluation builder. Can be either: - a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. `'./metrics/rouge'` or `'./metrics/rouge/rouge.py'` - a evaluation module identifier on the HuggingFace evaluate repo e.g. `'rouge'` or `'bleu'` that are in either `'metrics/'`, `'comparisons/'`, or `'measurements/'` depending on the provided `module_type` </code></pre> <h3>Is there a reason to do <code>{name}/{name}.py</code> for using the path arguments in <code>evaluate.load()</code>?</h3> <h3>Is there a way to override such that I can point the evaluate directly to the <code>.py</code> file? E.g. <code>evaluate.load(&quot;fgscore.py&quot;)</code></h3>
<python><metrics><huggingface><huggingface-evaluate>
2023-05-30 21:41:23
1
123,325
alvas
76,368,834
2,100,039
Add Suffix to Dataframe with Specific Repeating Column Name
<p>I have data in a dataframe with the following columns: week, SITE, LAL, SITE, LAL. I need to assign a suffix to each column named 'SITE' such that the final df will look like: week, SITE_1, LAL, SITE_2, LAL.</p> <p>Thank you,</p> <p>Dataframe example:</p> <pre><code> week SITE LAL SITE LAL 0 1 BARTON CHAPEL 1.1 PENASCAL I 1.0 1 2 BARTON CHAPEL 1.1 PENASCAL I 1.0 2 3 BARTON CHAPEL 1.1 PENASCAL I 1.0 3 4 BARTON CHAPEL 1.1 PENASCAL I 1.0 4 5 BARTON CHAPEL 1.1 PENASCAL I 1.0 5 6 BARTON CHAPEL 1.4 PENASCAL I 1.0 </code></pre>
<python><pandas><suffix>
2023-05-30 21:38:43
3
1,366
user2100039
76,368,791
11,760,357
python regex to remove all text not between '<' and '>'
<p>I want the following string</p> <pre><code>Doe, John PGM GUY FOOBARINC MD (USA) &lt;john.doe@email.mail&gt; </code></pre> <p>to become</p> <pre><code>john.doe@email.mail </code></pre> <p>while using the <code>series.str.replace()</code> function</p> <p>I have code like the following</p> <pre class="lang-py prettyprint-override"><code>email= email.squeeze() if '&lt;' in email[0] and '&gt;' in email[0]: # Checking to see if strings in this series are formatted with &lt;&gt;'s. Not all are, hence the check email.str.replace(r&quot;[^&lt;]*\&lt;|\&gt;[^&gt;]*&quot;, &quot;&quot;) </code></pre> <p>which seems to work <a href="https://pythex.org/?regex=%5B%5E%3C%5D*%5C%3C%7C%5C%3E%5B%5E%3E%5D*&amp;test_string=Doe%2C%20John%20PGM%20GUY%20FOOBARINC%20MD%20(USA)%20%3Cjohn.doe%40email.mail%3E&amp;ignorecase=0&amp;multiline=0&amp;dotall=0&amp;verbose=0" rel="nofollow noreferrer">here</a>, but doesn't work when I run the code. I simply get back the same strings, no edits made to them at all.</p>
<python><pandas><regex>
2023-05-30 21:31:23
2
375
Caleb Renfroe
76,368,779
2,302,262
Keep frequency when selecting with datetimeindex in pandas
<h1>Sample</h1> <p>I have a <code>DatetimeIndex</code> with a yearly frequency, and 2 series that together cover all its values:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd i = pd.date_range('2023', freq='AS', periods=3) s1 = pd.Series([1,2], [i[0], i[2]]) s2 = pd.Series([-30], [i[1]]) concat = pd.concat([s1, s2]) </code></pre> <p>In the final line I concatenate these series to a longer series, in which the index values are not sorted.</p> <p>I can select the values in the original index order with <code>result = concat.loc[i]</code>.</p> <h1>Problem</h1> <p><strong>The problem is that the <strong><code>freq</code></strong> property is lost.</strong></p> <p>The original index has frequency <code>i.freq</code> (<code>&lt;YearBegin: month=1&gt;</code>), but in <code>result.index.freq</code> is <code>None</code>.</p> <p>I don't understand why; I understand the <code>.loc[]</code> to select the corresponding value in <code>concat</code> and return a Series with <code>i</code> as its index. So I would assume <code>result</code>'s index to have retained the frequency.</p> <p><strong>Is there a way to keep the frequency?</strong></p> <h1>Remarks</h1> <ul> <li><p>Here, I can manually set the frequency with <code>result.index.freq = 'AS'</code>, or have pandas devise it with <code>result.index.freq = pd.infer_freq(result.index)</code>.</p> </li> <li><p>However, I cannot expect <code>i</code> to always be a <code>DatetimeIndex</code>.</p> </li> <li><p>I would like to avoid having to do extensive typechecking and error-catching, like so...</p> <pre class="lang-py prettyprint-override"><code>if isinstance(concat.index, pd.DatetimeIndex) and i.freq: try: concat.index.freq = i.freq except ValueError: pass </code></pre> <p>...if I can possibly avoid it.</p> </li> </ul>
<python><pandas>
2023-05-30 21:29:29
0
2,294
ElRudi
76,368,723
3,420,371
Insert list of dictionaries into Postgres table where each dict has different keys
<p>I have a list of Python dictionaries that look something like below where each dictionary can have slightly different keys.</p> <pre><code>data = [ {'name': 'Bob', 'age': 32}, {'name': 'Sara', 'city': 'Dallas'}, {'name': 'John', 'age': 45, 'city': 'Atlanta'} ] </code></pre> <p>I also have a Postgres table that contains all of the possible keys that are seen within this list of dictionaries (e.g.: <code>name</code>, <code>age</code>, <code>city</code>).</p> <p>I am looking for an elegant solution to efficiently insert this data into my database. While I could iterate over <code>data</code> line by line, and insert each line individually, that doesn't scale so well to my actual dataset including millions of records.</p> <p>I attempted to use the <code>execute_values</code> function from <code>psycopg2</code>, as seen in the example below, but that expected all of the dictionaries to have the same keys.</p> <p><strong>How can I edit my process below to insert multiple dictionaries at once, where each dictionary can contain different keys?</strong></p> <pre><code>import psycopg2 from psycopg2.extras import execute_values # connect to the database conn = psycopg2.connect( host=&quot;localhost&quot;, database=&quot;db_name&quot;, user=&quot;psql_user&quot;, password=&quot;psql_password&quot;, ) conn.autocommit = True cur = conn.cursor() # get the columns from first dictionary columns = data[0].keys() # write the SQL query to insert the records query = &quot;&quot;&quot;INSERT INTO schema.table ({}) VALUES %s ON CONFLICT (name) DO NOTHING&quot;&quot;&quot;.format( &quot;,&quot;.join(columns) ) # extract the values from each dictionary into as list of lists values = [[value for value in line.values()] for line in data] # execute the SQL query with the associated values execute_values(cur, query, values) </code></pre>
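One possible approach, sketched under the assumption that missing keys should become SQL NULLs: normalize every dict against the union of all keys before handing the rows to <code>execute_values</code>, so every row has the same shape (<code>None</code> maps to NULL on insert):

```python
data = [
    {'name': 'Bob', 'age': 32},
    {'name': 'Sara', 'city': 'Dallas'},
    {'name': 'John', 'age': 45, 'city': 'Atlanta'},
]

# union of all keys seen across the records, in a stable order
columns = sorted({key for row in data for key in row})

# fill the gaps with None so every row matches `columns`; None -> NULL
values = [[row.get(col) for col in columns] for row in data]
```

The `columns` list would then drive the `INSERT INTO ... ({})` formatting, and `values` goes to `execute_values` unchanged.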
<python><postgresql><psycopg2>
2023-05-30 21:20:36
3
2,447
CurtLH
76,368,691
8,229,534
How to filter on STRING column using BETWEEN clause in pandas
<p>Consider a sample pandas dataframe as below -</p> <pre><code>import pandas as pd money = [100,200,300,400,500,600,700,800,900,1000,1100,1200] batch_code = ['B1_2023','B2_2023','B3_2023','B4_2023','B1_2024','B2_2024','B3_2024','B4_2024','B1_2025','B2_2025','B3_2025','B4_2025'] test_df = pd.DataFrame([money,batch_code]).T test_df.columns = ['money','batch_code'] test_df.money = test_df.money.astype(int) </code></pre> <p>Here is how the data looks -</p> <p><a href="https://i.sstatic.net/MBgSd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBgSd.png" alt="enter image description here" /></a></p> <p>I want to <code>filter</code> this data on column batch_code using 2 parameters - from_batch and to_batch.</p> <p>For example -</p> <pre><code>from_batch = B3_2023 to_batch = B4_2024 </code></pre> <p>Then my output data frame should consist of all the records that fall between B3_2023 and B4_2024. The batches represent quarters but are coded with the prefix B. How can I achieve this in pandas?</p> <p>I tried writing</p> <pre><code>test_df[test_df.batch_code.between('B3_2023','B4_2024')] </code></pre> <p>but I got this as output, which seems to be incorrect -</p> <p><a href="https://i.sstatic.net/ufTxV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ufTxV.png" alt="enter image description here" /></a></p> <p>The correct output would be -</p> <p><a href="https://i.sstatic.net/7fnc9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7fnc9.png" alt="enter image description here" /></a></p> <p><strong>NOTE</strong> - The maximum range of prefix quarters is B1 to B4 only.</p>
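A sketch of one workaround (the helper name <code>batch_key</code> is made up here): string <code>between()</code> compares the quarter-first codes character by character, so mapping each code to a numeric year-then-quarter key first gives the intended chronological ordering:

```python
import pandas as pd

def batch_key(code):
    # hypothetical helper: "B3_2023" -> 20233 (year first, then quarter)
    quarter, year = code.split("_")
    return int(year) * 10 + int(quarter[1:])

test_df = pd.DataFrame({
    "money": [300, 400, 500, 900],
    "batch_code": ["B3_2023", "B4_2023", "B1_2024", "B1_2025"],
})

keys = test_df["batch_code"].map(batch_key)
result = test_df[keys.between(batch_key("B3_2023"), batch_key("B4_2024"))]
```

This only works under the stated constraint that quarters run B1 to B4, so one decimal digit is enough for the quarter part.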
<python><pandas>
2023-05-30 21:14:03
1
1,973
Regressor
76,368,612
16,462,878
Subprocess - pip: pass package names dynamically
<p>I am trying to download without installation some Python packages using <code>pip</code> with the shell command</p> <pre><code>$ python -m pip download --destination-directory path -r requirements.txt </code></pre> <p>What I'd like to do is to bypass the step of having a file, <em>requirements.txt</em>, which contains the list of packages to download and just pass the package names as parameters. Here my attempt</p> <pre><code>import sys import subprocess from tempfile import NamedTemporaryFile PIP = (sys.executable, '-m', 'pip') def download(to_dir, *pkgs): with NamedTemporaryFile(mode=&quot;w+&quot;, suffix=&quot;.txt&quot;) as tmp: tmp.write('\n'.join(pkgs)) # debugging info tmp.seek(0) print(tmp.read(), tmp.name) tmp.seek(0) return PIP + ('download', '--destination-directory', to_dir, '-r', tmp.name) pkg = &quot;sphinx&quot; # example of package to_dir = &quot;valid path&quot; # download directory cmd = download(to_dir, pkg) print(cmd) p = subprocess.call(cmd) print(p) </code></pre> <p>which raises the error</p> <pre><code>sphinx /tmp/tmpmg3inbtu.txt ('path to python3.11', '-m', 'pip', 'download', '--destination-directory', 'valid path', '-r', '/tmp/tmpmg3inbtu.txt') ERROR: Could not open requirements file: [Errno 2] No such file or directory: '/tmp/tmpmg3inbtu.txt' </code></pre>
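A hedged sketch of a simpler route: as far as I know, <code>pip download</code> also accepts package names positionally, which sidesteps the requirements file entirely (the flag name is kept from the question as-is). The original error most likely comes from the <code>return</code> sitting inside the <code>with NamedTemporaryFile(...)</code> block — the context manager deletes the file when <code>download()</code> returns, before the subprocess ever runs:

```python
import sys

def download_cmd(to_dir, *pkgs):
    # build the pip invocation with package names passed positionally,
    # so no requirements file (and no tempfile lifetime issue) is needed
    return (sys.executable, '-m', 'pip', 'download',
            '--destination-directory', to_dir, *pkgs)

cmd = download_cmd('/tmp/wheels', 'sphinx')
```

The tuple can then be handed to `subprocess.call(cmd)` unchanged.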
<python><unix><pip><subprocess>
2023-05-30 21:02:51
1
5,264
cards
76,368,573
6,567,319
Is there a way to compress a .npy file more tightly than by using the LZMA algorithm?
<p>I am trying to compress some .npy files as tightly as possible. What I have read is that the LZMA algorithm is typically used for this.</p> <p>So far I have tried xz tar compression at level 9 and Python's lzma module. These seem effective, but I was wondering if anybody has tried something better? Is LZMA really the best algorithm, or is there something better? I am optimizing SOLELY for compression ratio; time to compress is a non-issue. I also recognize that .npy is already more compact than, for example, an image, so there is a limit to the optimality of the result.</p> <p>I am dealing with both folders of npy files and single npy files alone.</p> <p>Edit: The .npy files contain hyperspectral images from the Harvard Real World Hyperspectral Image Dataset stacked together.</p>
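Within the standard library, one small extra squeeze worth trying is a sketch like this: serialize with <code>np.save</code> into a buffer and compress with <code>preset=9 | lzma.PRESET_EXTREME</code>, which trades even more time for ratio — a reasonable fit when compression time is a non-issue:

```python
import io
import lzma

import numpy as np

def compress_npy(arr: np.ndarray) -> bytes:
    buf = io.BytesIO()
    np.save(buf, arr)  # standard .npy serialization, header included
    # PRESET_EXTREME spends (much) more CPU time chasing a better ratio
    return lzma.compress(buf.getvalue(), preset=9 | lzma.PRESET_EXTREME)

def decompress_npy(blob: bytes) -> np.ndarray:
    return np.load(io.BytesIO(lzma.decompress(blob)))
```

For genuinely noisy hyperspectral data the gain over plain preset 9 may be marginal; reordering bands so similar values sit together before compressing is another avenue, but that is data-dependent.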
<python><numpy><compression><lzma><pylzma>
2023-05-30 20:56:55
1
449
Mira Welner
76,368,516
7,903,749
Python unit test got error on INSTALLED_APPS settings not configured
<p>With PyCharm IDE, we have a unit test importing the <code>models</code> module of the Django project for testing. While expecting a happy pass, the test always hits an error saying:</p> <p><code>django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.</code>.</p> <p>We are new to this part of Python unit testing, so we highly appreciate any hints or suggestions.</p> <p><strong>The unit test:</strong></p> <pre class="lang-py prettyprint-override"><code>from applicationapp import models ... class TestElastic(TestCase): def test_parse_request(self): # expecting happy pass # # self.fail() return </code></pre> <p><strong>The full error trace:</strong></p> <pre><code>/data/app-py3/venv3.7/bin/python /var/lib/snapd/snap/pycharm-professional/327/plugins/python/helpers/pycharm/_jb_unittest_runner.py --target test_elastic.TestElastic Testing started at 2:07 PM ...
Launching unittests with arguments python -m unittest test_elastic.TestElastic in /data/app-py3/APPLICATION/tests Traceback (most recent call last): File &quot;/var/lib/snapd/snap/pycharm-professional/327/plugins/python/helpers/pycharm/_jb_unittest_runner.py&quot;, line 35, in &lt;module&gt; sys.exit(main(argv=args, module=None, testRunner=unittestpy.TeamcityTestRunner, buffer=not JB_DISABLE_BUFFERING)) File &quot;/usr/local/lib/python3.7/unittest/main.py&quot;, line 100, in __init__ self.parseArgs(argv) File &quot;/usr/local/lib/python3.7/unittest/main.py&quot;, line 147, in parseArgs self.createTests() File &quot;/usr/local/lib/python3.7/unittest/main.py&quot;, line 159, in createTests self.module) File &quot;/usr/local/lib/python3.7/unittest/loader.py&quot;, line 220, in loadTestsFromNames suites = [self.loadTestsFromName(name, module) for name in names] File &quot;/usr/local/lib/python3.7/unittest/loader.py&quot;, line 220, in &lt;listcomp&gt; suites = [self.loadTestsFromName(name, module) for name in names] File &quot;/usr/local/lib/python3.7/unittest/loader.py&quot;, line 154, in loadTestsFromName module = __import__(module_name) File &quot;/data/app-py3/APPLICATION/tests/test_elastic.py&quot;, line 6, in &lt;module&gt; from applicationapp.soap.elastic import Elastic File &quot;/data/app-py3/APPLICATION/applicationapp/soap/elastic.py&quot;, line 7, in &lt;module&gt; from applicationapp import models File &quot;/data/app-py3/APPLICATION/applicationapp/models.py&quot;, line 3, in &lt;module&gt; from django.contrib.auth.models import User File &quot;/data/app-py3/venv3.7/lib/python3.7/site-packages/django/contrib/auth/models.py&quot;, line 2, in &lt;module&gt; from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager File &quot;/data/app-py3/venv3.7/lib/python3.7/site-packages/django/contrib/auth/base_user.py&quot;, line 48, in &lt;module&gt; class AbstractBaseUser(models.Model): File 
&quot;/data/app-py3/venv3.7/lib/python3.7/site-packages/django/db/models/base.py&quot;, line 108, in __new__ app_config = apps.get_containing_app_config(module) File &quot;/data/app-py3/venv3.7/lib/python3.7/site-packages/django/apps/registry.py&quot;, line 253, in get_containing_app_config self.check_apps_ready() File &quot;/data/app-py3/venv3.7/lib/python3.7/site-packages/django/apps/registry.py&quot;, line 135, in check_apps_ready settings.INSTALLED_APPS File &quot;/data/app-py3/venv3.7/lib/python3.7/site-packages/django/conf/__init__.py&quot;, line 83, in __getattr__ self._setup(name) File &quot;/data/app-py3/venv3.7/lib/python3.7/site-packages/django/conf/__init__.py&quot;, line 68, in _setup % (desc, ENVIRONMENT_VARIABLE)) django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings. Process finished with exit code 1 Empty suite </code></pre>
<python><django><unit-testing><pycharm>
2023-05-30 20:46:58
2
2,243
James
76,368,511
1,473,517
How to get changelog info for pip upgrade-able modules
<p>If I do</p> <pre><code> pip list --outdated </code></pre> <p>I get a list of upgrade-able modules. For example:</p> <pre><code>Package Version Latest Type ------------------ ---------- -------- ----- aiofiles 22.1.0 23.1.0 wheel anyio 3.6.2 3.7.0 wheel attrs 22.2.0 23.1.0 wheel beautifulsoup4 4.11.2 4.12.2 wheel comm 0.1.2 0.1.3 wheel Cython 0.29.33 0.29.35 wheel [...] </code></pre> <p>Is there any way to get the changelog or relevant release notes for the new versions that would be installed? I would like to get these programmatically, ideally from the command line if possible.</p>
<python><pip><python-module>
2023-05-30 20:46:22
2
21,513
Simd
76,368,493
1,700,890
rpy2 how to check column data type
<p>I created pandas dataframe and converted it to R object using rpy2. Now I would like to check data type of a column. Here is my code:</p> <pre><code>import pandas as pd import rpy2.robjects as ro from rpy2.robjects.packages import importr from rpy2.robjects import pandas2ri utils = importr('utils') base = importr('base') pd_df = pd.DataFrame({'col_1': [1,2,3], 'col_2': [4,5,6]}) with (ro.default_converter + pandas2ri.converter).context(): r_df = ro.conversion.get_conversion().py2rpy(pd_df) </code></pre> <p>Normally in R it would be</p> <pre><code>str(r_df$col_1) </code></pre> <p>I tried the following (nothing worked)</p> <pre><code>r_df$col_1 r_df['col_1'] r_df[1,1] robjects.r['r_df'] robjects.r('''r_df''') </code></pre> <p>Any suggestions?</p>
<python><r><types><rpy2>
2023-05-30 20:43:27
2
7,802
user1700890
76,368,415
2,631,300
Catch CTRL+C and exit gracefully in Python with a multithreaded controller on Windows
<p>I'm trying to create a <strong>Windows</strong> SMTP server in Python that uses the <code>aiosmtpd</code> library, which provides a multithreaded controller that listens on a TCP port for connections.</p> <p>This code works perfectly:</p> <pre class="lang-py prettyprint-override"><code>from aiosmtpd.controller import Controller class CustomSMTPServer: # Proof of concept async def handle_DATA(self, server, session, envelope): print(f'&gt; Incoming mail from \'{envelope.mail_from}\'') return '250 OK' if __name__ == '__main__': controller = Controller(CustomSMTPServer(), hostname='127.0.0.1', port=25) controller.start() # I would like to wait for CTRL+C instead of ENTER input(f'SMTP listening on port {controller.port}. Press ENTER to exit.\n') # ... and stopping the controller after the key has been pressed controller.stop() </code></pre> <p>I would like to modify it such as it doesn't wait for <kbd>ENTER</kbd> but for <kbd>CTRL</kbd>+<kbd>C</kbd> instead, without using a <code>while True</code> endless loop because this would eat CPU. It would be great to not use a lot of libraries imports, if possible.</p> <p>Is there anything better than using <code>time.sleep(1)</code> inside the loop, as the <code>aiosmtpd.controller</code> is already running in its thread and the main thread is not doing anything useful, apart from waiting for <kbd>CTRL</kbd>+<kbd>C</kbd>?</p>
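One possible shape for this (<code>wait_for_stop</code> is a made-up helper): park the main thread on a <code>threading.Event</code>, which sleeps in the kernel rather than polling, and have a SIGINT handler set the event when CTRL+C arrives:

```python
import signal
import threading

def wait_for_stop(stop: threading.Event, install_sigint: bool = True) -> None:
    """Block without busy-waiting until `stop` is set."""
    if install_sigint:
        # route CTRL+C to the event instead of raising KeyboardInterrupt;
        # signal handlers can only be installed from the main thread
        signal.signal(signal.SIGINT, lambda signum, frame: stop.set())
    stop.wait()  # no CPU spinning while waiting
```

With the controller running, `wait_for_stop(threading.Event())` would replace the `input()` call, and `controller.stop()` runs once it returns. Note that on Windows, CTRL+C delivery to `Event.wait()` in the main thread has some platform quirks, so this is a sketch to verify against the target Python version.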
<python><windows><multithreading><keyboardinterrupt>
2023-05-30 20:28:00
2
509
virtualdj
76,368,304
12,436,050
Oracle error message: DPY-4004: invalid number
<p>I am creating a table in Oracle db with following lines in python script.</p> <pre><code>cur.execute('''BEGIN EXECUTE IMMEDIATE 'DROP TABLE MRCONSO'; EXCEPTION WHEN OTHERS THEN NULL; END; ''') try: cur.execute(''' CREATE TABLE MRCONSO ( CUI char(8) NOT NULL, LAT char(3) NOT NULL, TS char(1) NOT NULL, LUI varchar2(10) NOT NULL, STT varchar2(3) NOT NULL, SUI varchar2(10) NOT NULL, ISPREF char(1) NOT NULL, AUI varchar2(9) NOT NULL, SAUI integer, SCUI varchar2(100), SDUI varchar2(100), SAB varchar2(40) NOT NULL, TTY varchar2(40) NOT NULL, CODE varchar2(100) NOT NULL, STR varchar2(3000) NOT NULL, SRL integer NOT NULL, SUPPRESS char(1) NOT NULL, CVF integer) PCTFREE 10 PCTUSED 80 ''' ) print(&quot;MRCONSO Table Created&quot;) except Exception as e: print(&quot;Error: &quot;,str(e)) </code></pre> <p>When I insert the data using csv file, I get following error.</p> <pre><code>Oracle error message: DPY-4004: invalid number </code></pre> <pre><code>df_umls = pd.read_csv(&quot;../umls_files/umls-2023AA-metathesaurus-full/2023AA/META/MRCONSO.RRF&quot;, sep = '|', low_memory=False) df_umls.columns=[&quot;CUI&quot;, &quot;LAT&quot;, &quot;TS&quot;, &quot;LUI&quot;, &quot;STT&quot;, &quot;SUI&quot;, &quot;ISPREF&quot;, &quot;AUI&quot;,&quot;SAUI&quot;, &quot;SCUI&quot;, &quot;SDUI&quot;, &quot;SAB&quot;, &quot;TTY&quot;, &quot;CODE&quot;, &quot;STR&quot;, &quot;SRL&quot;, &quot;SUPPRESS&quot;, &quot;CVF&quot;, &quot;JUNK&quot;] df_umls = df_umls.drop(columns=['JUNK']) df_umls['CVF'] = df_umls['CVF'].fillna(0) df_umls['CVF'] = df_umls['CVF'].astype(&quot;Int64&quot;) df_umls['SAUI'] = df_umls['SAUI'].fillna(0) df_umls['SAUI'] = df_umls['SAUI'].astype(&quot;Int64&quot;) try: if conn: print(&quot;Oracle version:&quot;, oracledb.version) print(&quot;Database version:&quot;, conn.version) #print(&quot;Client version:&quot;, oracledb.clientversion()) print('Inserting data into table....') for i,row in tqdm(df_umls.iterrows(), total=df_umls.shape[0]): sql_conso = &quot;insert into 
MRCONSO (CUI,LAT,TS,LUI,STT,SUI,ISPREF, AUI, SAUI, SCUI, SDUI, SAB, TTY, CODE, STR, SRL, SUPPRESS, CVF) values (:0,:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17)&quot; cur.execute(sql_conso, tuple(row)) # the connection is not autocommitted by default, so we must commit to save our changes conn.commit() print(&quot;Record inserted succesfully&quot;) except DatabaseError as e: err, = e.args print(&quot;Oracle-Error-Code:&quot;, err.code) print(&quot;Oracle-Error-Message:&quot;, err.message) finally: cur.close() conn.close() </code></pre> <p>The dataframe datatype is:</p> <pre><code>CUI object LAT object TS object LUI object STT object SUI object ISPREF object AUI object SAUI Int64 SCUI object SDUI object SAB object TTY object CODE object STR object SRL int64 SUPPRESS object CVF Int64 dtype: object </code></pre> <p>How should I modify the dataframe to insert the data into table successfully.</p>
<python><oracle-database>
2023-05-30 20:10:23
0
1,495
rshar
76,368,302
5,212,614
How can we loop through all text files in a folder, copy the first N rows from each one, and append the file name of each?
<p>I am trying to loop through a bunch of text files, copy the first N-rows from each, and append the file name of each. This is the code that I am testing.</p> <pre><code># import necessary libraries import pandas as pd import csv import os import glob # use glob to get all the txt files # in the folder path = 'C:\\Users\\ryans\\Desktop\\all_files\\' txt_files = glob.glob(os.path.join(path, &quot;*.txt&quot;)) df_headers = pd.DataFrame() # loop over the list of files for f in txt_files: with open(f) as myfile: print(f) firstNlines=myfile.readlines()[0:5] df = pd.DataFrame(firstNlines) fname = pd.DataFrame({f}) df_headers = pd.concat([df, fname], axis=1) df_headers.to_csv('C:\\Users\\ryans\\Desktop\\out.csv') </code></pre> <p>When I run the code, I get this.</p> <p><a href="https://i.sstatic.net/9gTL2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9gTL2.png" alt="enter image description here" /></a></p> <p>So, the problem is as follows.</p> <ol> <li>I only get one file, which presumably is the last file</li> <li>The data is all tab-delimited, but everything is compressed into one single cell, not split out</li> <li>I'm not sure how to copy the file name down N-rows</li> </ol> <p>What I'd like to end up with is something like this, but with all the files read, not just the last file.</p> <p><a href="https://i.sstatic.net/fWXh3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fWXh3.png" alt="enter image description here" /></a></p>
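A sketch of one way to address all three points (<code>collect_heads</code> is an invented name): accumulate frames in a list and concatenate once after the loop (concatenating inside the loop keeps only the last file), pass <code>sep=&quot;\t&quot;</code> so the tab-delimited columns split, and assign the file name as a scalar column so it repeats down every kept row:

```python
import glob
import os

import pandas as pd

def collect_heads(path, n=5):
    frames = []
    for f in glob.glob(os.path.join(path, "*.txt")):
        # read only the first n rows of each tab-delimited file
        head = pd.read_csv(f, sep="\t", header=None, nrows=n)
        head["source_file"] = os.path.basename(f)  # repeated down all n rows
        frames.append(head)
    return pd.concat(frames, ignore_index=True)
```

The combined frame can then go straight to `to_csv(...)` as in the question.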
<python><python-3.x><pandas><dataframe>
2023-05-30 20:10:18
1
20,492
ASH
76,368,297
682,466
reading data compressed by NSMutableData using the lz4 library in python
<p>I'm compressing random data using NSMutableData (in Swift) as follows:</p> <pre class="lang-swift prettyprint-override"><code>import Foundation let n = 1024 let numbers = (0..&lt;n).map { _ in UInt8.random(in: 0...1) } print(&quot;generated random data: \(numbers)&quot;) let data = Data(numbers) let mutable = NSMutableData(data: data) try! mutable.compress(using: .lz4) print(&quot;compressed size: \(mutable.count)&quot;)' try! mutable.write(toFile: &quot;compressed.dat&quot;, options: []) </code></pre> <p>And reading into python using:</p> <pre class="lang-py prettyprint-override"><code>import os import lz4.frame fh = open('compressed.dat', 'rb') ba = bytearray(fh.read()) print(&quot;read %d bytes&quot; % len(ba)) decompressed = lz4.frame.decompress(ba) print(&quot;decompressed to %d bytes&quot; % len(decompressed)) </code></pre> <p>When calling <code>lz4.frame.decompress</code>, I get:</p> <pre><code>RuntimeError: LZ4F_getFrameInfo failed with code: ERROR_frameType_unknown </code></pre> <p>trying <code>lz4.block.decompress</code>, I get:</p> <pre><code>_block.LZ4BlockError: Decompression failed: corrupt input or insufficient space in destination buffer. Error code: 4 </code></pre> <p>What am I doing wrong?</p>
<python><swift><lz4><nsmutabledata>
2023-05-30 20:09:56
0
6,498
Taylor
76,368,145
21,420,742
Filling in for missing values by name. pandas
<p>I have a dataset in which I need to fill in the missing ID values.</p> <pre><code>ID Name Adam 101 Adam Adam 101 Adam 102 Ben 102 Ben Cathy Cathy 103 Cathy </code></pre> <p>What I need:</p> <pre><code>ID Name 101 Adam 101 Adam 101 Adam 101 Adam 102 Ben 102 Ben 103 Cathy 103 Cathy 103 Cathy </code></pre> <p>I tried using <code>df['Name'].ffill()</code>, but it does not work when applied across multiple names.</p> <p>Any other suggestions?</p>
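A sketch of one approach, assuming each Name maps to a single ID: group by Name and fill both forward and backward within each group, so blanks are filled no matter where the known ID sits relative to them:

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [None, 101, None, None, 102, None, 103, None],
    "Name": ["Adam", "Adam", "Adam", "Ben", "Ben", "Cathy", "Cathy", "Cathy"],
})

# within each Name, propagate the known ID in both directions
df["ID"] = df.groupby("Name")["ID"].transform(lambda s: s.ffill().bfill())
```

Grouping is what keeps a plain `ffill()` from leaking one name's ID into the next name's blank rows.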
<python><python-3.x><pandas><dataframe>
2023-05-30 19:51:28
5
473
Coding_Nubie
76,368,086
869,598
Search for imports which could be TYPE_CHECKING
<p>I make heavy use of mypy static type checking. I have a large lib where I know I have many imports which are being done just for typehints that could be protected with an <code>if TYPE_CHECKING</code> to speed things up.</p> <p>But searching for them all is proving difficult.</p> <p>Is there a way to identify these &quot;unused&quot; imports automatically so I can fix them?</p>
<python><mypy>
2023-05-30 19:38:56
1
303
JonFitt
76,367,953
5,091,720
pandas Auto increment based on date condition
<p>I would like to auto-increment a new column. I have the code below.</p> <pre><code>df = pd.DataFrame({'Date': ['2020-02-29', '2020-03-01', '2020-10-01', '2020-10-02', '2020-10-03', '2020-10-04', '2021-10-01', '2021-10-02', '2021-10-03', '2021-10-04']}) conditions = [((df['Date'].dt.day == 1) &amp; (df['Date'].dt.month == 10)), ((df['Date'].dt.day == 29) &amp; (df['Date'].dt.month == 2))] r_choice = [1, 151] df['Oct_year_day'] = np.select(conditions, r_choice, np.nan) df['Oct_year_day'] = df['Oct_year_day'].fillna(1).cumsum() # the below code does not work... reset_condition = (df['Date'].dt.day == 1) &amp; (df['Date'].dt.month == 10) df.loc[reset_condition, 'Oct_year_day'] = 1 </code></pre> <p>What I want returned is:</p> <pre><code> Date Oct_year_day 0 2020-02-29 151.0 1 2020-03-01 152.0 2 2020-10-01 1.0 3 2020-10-02 2.0 4 2020-10-03 3.0 5 2020-10-04 4.0 6 2021-10-01 1.0 7 2021-10-02 2.0 8 2021-10-03 3.0 9 2021-10-04 4.0 </code></pre> <p><em>Edit: the incrementation.</em></p> <p>So the incrementation would start over at 1 every October 1st and increase by 1 on each succeeding date afterward. Think of it like <code>df['Date'].dt.dayofyear</code>. The other part is that the leap day would be 151 and each succeeding date afterward would increase by 1.</p>
<python><pandas>
2023-05-30 19:15:18
1
2,363
Shane S
76,367,729
10,266,106
NumPy all Function Across Third Dimension of ndarray
<p>Consider an ndarray with dimensions <code>(1200, 2600, 200)</code>.</p> <p>I am looking to assess whether the values along the third dimension (axis=2) at each point [i,j] of this array are all the same. For example, I'd use the following code to check for all zeroes:</p> <pre><code>import numpy as np array = np.random.randint(0.00, 75.00, size=(1200, 2600, 200)) logical = array.all(axis=2) == 0 </code></pre> <p>This executes; however, it returns the boolean True at all array points, even where zeroes do not populate the entire third dimension. What alterations are required to yield the desired result?</p>
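A hedged sketch of the distinction on a small array: <code>array.all(axis=2)</code> asks whether every value is non-zero, which is not the question being posed. To test "all zero along axis 2" compare against 0 before reducing, and to test "all equal along axis 2" compare each slice against its own first element:

```python
import numpy as np

a = np.zeros((2, 3, 4))
a[0, 0, :] = 5   # constant along axis 2, but non-zero
a[0, 1, 2] = 1   # not constant along axis 2

all_zero = (a == 0).all(axis=2)           # True where every value is 0
all_same = (a == a[..., :1]).all(axis=2)  # True where every value matches the first
```

`a[..., :1]` keeps a trailing length-1 axis so the comparison broadcasts across the full third dimension.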
<python><numpy><numpy-ndarray><logical-operators>
2023-05-30 18:40:44
1
431
TornadoEric
76,367,557
2,916,639
Unable to list blobs from Azure VM with Python using system assigned managed identity
<p>I am try to list all blobs available in azure storage container. Below is the python code</p> <pre><code>import io import os from azure.core.exceptions import HttpResponseError, ResourceExistsError from azure.identity import DefaultAzureCredential from msrestazure.azure_active_directory import MSIAuthentication from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient, BlobLeaseClient, BlobPrefix, ContentSettings class BlobSamples(object): # &lt;Snippet_list_blobs_flat&gt; def list_blobs_flat(self, blob_service_client: BlobServiceClient, container_name): container_client = blob_service_client.get_container_client(container=container_name) blob_list = container_client.list_blobs() for blob in blob_list: print(f&quot;Name: {blob.name}&quot;) # &lt;/Snippet_list_blobs_flat&gt; if __name__ == '__main__': # TODO: Replace &lt;storage-account-name&gt; with your actual storage account name account_url = &quot;https://testinglist.blob.core.windows.net&quot; credential = DefaultAzureCredential() # Create the BlobServiceClient object blob_service_client = BlobServiceClient(account_url, credential=credential) sample = BlobSamples() sample.list_blobs_flat(blob_service_client, &quot;testing&quot;) </code></pre> <p>I am running this code from virtual machine which has the below system assigned managed identity.</p> <p><a href="https://i.sstatic.net/alkcn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/alkcn.png" alt="enter image description here" /></a></p> <p>WHen i execute code getting below error.</p> <pre><code>Traceback (most recent call last): File &quot;listblob.py&quot;, line 28, in &lt;module&gt; sample.list_blobs_flat(blob_service_client, &quot;testing&quot;) File &quot;listblob.py&quot;, line 15, in list_blobs_flat for blob in blob_list: File &quot;/usr/local/lib/python3.6/site-packages/azure/core/paging.py&quot;, line 128, in __next__ return next(self._page_iterator) File 
&quot;/usr/local/lib/python3.6/site-packages/azure/core/paging.py&quot;, line 76, in __next__ self._response = self._get_next(self.continuation_token) File &quot;/usr/local/lib/python3.6/site-packages/azure/storage/blob/_list_blobs_helper.py&quot;, line 83, in _get_next_cb process_storage_error(error) File &quot;/usr/local/lib/python3.6/site-packages/azure/storage/blob/_shared/response_handlers.py&quot;, line 181, in process_storage_error exec(&quot;raise error from None&quot;) # pylint: disable=exec-used # nosec File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; azure.core.exceptions.HttpResponseError: This request is not authorized to perform this operation using this permission. RequestId:32c35c2b-101e-0066-801f-932ff0000000 Time:2023-05-30T17:55:50.1950002Z ErrorCode:AuthorizationPermissionMismatch Content: &lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt;&lt;Error&gt;&lt;Code&gt;AuthorizationPermissionMismatch&lt;/Code&gt;&lt;Message&gt;This request is not authorized to perform this operation using this permission. RequestId:32c35c2b-101e-0066-801f-932ff0000000 Time:2023-05-30T17:55:50.1950002Z&lt;/Message&gt;&lt;/Error&gt; </code></pre> <p>The container is private but accessible from all networks. I am unable to catch the issue. Kindly help.</p>
<python><azure><azure-blob-storage>
2023-05-30 18:12:56
1
428
user2916639
76,367,483
2,426,635
Identifying Read-Only SQL Statements
<p>I'm using SQLAlchemy to execute SQL statements on a variety of different databases and am trying to find a way to determine whether or not an SQL statement (passed to the connector as a Python string) is read-only. What is the best way to do this reliably for multiple flavours of SQL?</p> <pre><code>raw_query = &quot;SELECT * FROM table&quot; engine = get_engine(username=db_dict['username'], password=db_dict['password'], db=database, hostname=db_dict['hostname'], port=port, db_type=db_dict['type']) with engine.connect() as db_conn: # query database result = db_conn.execute(sqlalchemy.text(raw_query)) </code></pre> <p>In the above, what is the best way to check whether the SQL in raw_query will require more than read-only permissions? In other words, whether or not it will make changes to the database.</p>
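There is no fully reliable, dialect-independent answer from the string alone, but a first-keyword heuristic is one hedged starting point (<code>is_read_only</code> is a made-up helper; note it misclassifies, e.g., a CTE that feeds a write, as in <code>WITH ... INSERT</code>):

```python
READ_ONLY_STARTERS = {"select", "with", "show", "explain", "describe", "values"}

def is_read_only(sql: str) -> bool:
    """Naive check: every semicolon-separated statement must start
    with a keyword from the read-only allowlist."""
    statements = [s.strip() for s in sql.split(";") if s.strip()]
    return bool(statements) and all(
        s.split(None, 1)[0].lower() in READ_ONLY_STARTERS for s in statements
    )
```

A safer option, where the backend allows it, is to open the transaction in read-only mode (e.g. `SET TRANSACTION READ ONLY` on databases that support it) and let the database itself reject any write.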
<python><sql>
2023-05-30 18:00:53
1
626
pwwolff
76,367,218
13,217,286
How do I make a time delta column in Polars from two datetime columns
<p>How would I make a column with the delta (in days) of two date columns. I thought I could just subtract the date objects, but I'm obviously missing something</p> <pre class="lang-py prettyprint-override"><code>(pl.from_records([{'start': '2021-01-01', 'end': '2022-01-01'}]) .with_columns(pl.col(['start', 'end']).str.to_date('%Y-%m-%d')) .with_columns(delta = pl.col('end') - pl.col('start')) ) </code></pre>
<python><duration><timedelta><python-polars>
2023-05-30 17:23:04
1
320
Thomas
76,367,016
16,912,844
Getting `Abort trap: 6` Error During `import cv2` on Python 3.10 with macOS ARM64
<p>Getting the following <code>Abort trap: 6</code> error while doing <code>import cv2</code> on Python 3.10 with macOS ARM64.</p> <p>I tried using lower version of opencv-python (4.6.0.66) and latest version but still doesn't work. I've also tried some of the fixes such as using different Terminal and symlink <code>libssl.dylib</code> and <code>libcrypto.dylib</code> to <code>/usr/local/lib</code>.</p> <pre><code>&gt; python Python 3.10.11 (main, May 22 2023, 00:42:58) [Clang 14.0.3 (clang-1403.0.22.14.1)] on darwin Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import cv2 Modules/gcmodule.c:115: gc_decref: Assertion &quot;gc_get_refs(g) &gt; 0&quot; failed: refcount is too small Enable tracemalloc to get the memory block allocation traceback object address : 0x105a54010 object refcount : 8 object type : 0x1056e3ab8 object type name: type object repr : &lt;class 'cv2.utils.nested.ExportClassName'&gt; Fatal Python error: _PyObject_AssertFailed: _PyObject_AssertFailed Python runtime state: initialized Current thread 0x00000001fce51e00 (most recent call first): Garbage-collecting File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 241 in _call_with_frames_removed File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 1176 in create_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 571 in module_from_spec File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 674 in _load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1006 in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1027 in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1050 in _gcd_import File &quot;/opt/python/3.10.11/lib/python3.10/importlib/__init__.py&quot;, line 126 in import_module File &quot;/Users/alpha/Python/pv310-3di/lib/python3.10/site-packages/cv2/__init__.py&quot;, line 153 in bootstrap File 
&quot;/Users/alpha/Python/pv310-3di/lib/python3.10/site-packages/cv2/__init__.py&quot;, line 181 in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 241 in _call_with_frames_removed File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 883 in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 688 in _load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1006 in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1027 in _find_and_load File &quot;&lt;stdin&gt;&quot;, line 1 in &lt;module&gt; Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator (total: 13) Abort trap: 6 &gt; </code></pre>
<python><macos><opencv>
2023-05-30 16:52:57
2
317
YTKme
76,366,955
13,763,436
wxPython install, attrdict dependency ModuleNotFound error
<p><strong>System Details:</strong></p> <ul> <li>Machine: Raspberry Pi Model 4B - 4GB</li> <li>Operating System: Debian 11 (Bullseye)</li> <li>wxPython version: 4.2.0 from pypi</li> <li>Python version: 3.11.3 built from source</li> </ul> <p><strong>Problem Description:</strong></p> <p>When installing wxPython (latest version 4.2.0) I am getting an error that the <code>attrdict</code> dependency is not found, even though it is installed.</p> <pre><code>(.venv) user@raspberrypi:~/Documents $ python --version Python 3.11.3 (.venv) user@raspberrypi:~/Documents $ python -m pip freeze attrdict==2.0.1 attrdict3==2.0.2 dbus-fast==1.86.0 pytz==2023.3 six==1.16.0 tzdata==2023.3 (.venv) user@raspberrypi:~/Documents $ python -m pip install wxPython Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple Collecting wxPython Using cached wxPython-4.2.0.tar.gz (71.0 MB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. 
│ exit code: 1 ╰─&gt; [23 lines of output] Traceback (most recent call last): File &quot;/home/user/Documents/.venv/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 353, in &lt;module&gt; main() File &quot;/home/user/Documents/.venv/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/user/Documents/.venv/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 118, in get_requires_for_build_wheel return hook(config_settings) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-fs9i9itq/overlay/lib/python3.11/site-packages/setuptools/build_meta.py&quot;, line 341, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-fs9i9itq/overlay/lib/python3.11/site-packages/setuptools/build_meta.py&quot;, line 323, in _get_build_requires self.run_setup() File &quot;/tmp/pip-build-env-fs9i9itq/overlay/lib/python3.11/site-packages/setuptools/build_meta.py&quot;, line 488, in run_setup self).run_setup(setup_script=setup_script) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-fs9i9itq/overlay/lib/python3.11/site-packages/setuptools/build_meta.py&quot;, line 338, in run_setup exec(code, locals()) File &quot;&lt;string&gt;&quot;, line 27, in &lt;module&gt; File &quot;/tmp/pip-install-6s5ym1hn/wxpython_0afd12e3eb4741b4854a998e85194d91/buildtools/config.py&quot;, line 30, in &lt;module&gt; from attrdict import AttrDict ModuleNotFoundError: No module named 'attrdict' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. 
│ exit code: 1 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. (.venv) user@raspberrypi:~/Documents $ </code></pre> <p>Why is it saying that the dependency cannot be found when it is clearly there? Is there any way to correct this error and install wxPython on my system?</p> <p>I have also opened an issue in the wxPython GitHub page: <a href="https://github.com/wxWidgets/Phoenix/issues/2401" rel="nofollow noreferrer">https://github.com/wxWidgets/Phoenix/issues/2401</a></p>
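The `ModuleNotFoundError` above comes from pip's isolated build environment: pip builds wxPython in a fresh temporary environment, so the `attrdict` installed in the virtualenv is not visible there. On top of that, `attrdict` 2.0.1 itself cannot be imported on Python 3.10+ because it still does `from collections import Mapping`, which was removed in 3.10; `attrdict3` is a patched fork that installs under the same `attrdict` import name. A commonly reported workaround is `pip install attrdict3` followed by `pip install wxPython --no-build-isolation`, so the build runs against the current environment. A small probe sketching the root cause (hedged: it only demonstrates the removed-ABC import, not the wxPython build itself):

```python
import sys

def legacy_abcs_available() -> bool:
    """attrdict 2.0.1 does `from collections import Mapping`; this probe
    shows whether that legacy import still works on this interpreter."""
    try:
        from collections import Mapping  # noqa: F401 -- removed in Python 3.10
        return True
    except ImportError:
        return False

# False on Python 3.10+, which is exactly where the old attrdict (and hence
# the wxPython build that imports it) breaks; attrdict3 avoids this import.
print(legacy_abcs_available())
```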
<python><raspberry-pi><debian><wxpython><python-3.11>
2023-05-30 16:42:59
2
403
stackoverflowing321
76,366,875
5,080,858
Seaborn is jumbling x-axis tick labels, no longer referring to bars
<p>I'm trying to write a little suite of regular seaborn plots, along with styling, to allow for regular production of similarly styled plots. I'm working on bar plots. Mostly this works fine, but if I have too many values on the x axis, and specifically, if the names are too long and plentiful, the axis tick labels get completely jumbled, meaning that the plot becomes useless.</p> <p>I'd really appreciate any advice that could help me with making the ticks on the x axis always informative when I have a scenario where I have many value labels for the data I'm trying to visualise. I realise that I could stick the labels within the bars, but I'd like to keep this as x axis ticks.</p> <p>See here for an example of the x axis becoming uninformative. The data I used is taken from here <a href="https://footystats.org/england/premier-league/xg" rel="nofollow noreferrer">https://footystats.org/england/premier-league/xg</a>:</p> <p><a href="https://i.sstatic.net/AwMaS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AwMaS.png" alt="enter image description here" /></a></p> <p>Here's my example code:</p> <pre><code>def bar_plot(self, df: pd.DataFrame, x_var: str, y_var: str, x_label: str, y_label: str, y_line: float, rotate_x_labels: bool = False, export: bool = False, filepath: str = None): ''' produces a bar plot. 
''' plt.figure(figsize=self.figsize) ax = sns.barplot(data=df, x=x_var, y=y_var, palette=self.plot_elems_palette) ax.spines['bottom'].set_linewidth(0.75) ax.spines['left'].set_linewidth(0.75) ax.tick_params(axis='x', which='major', width=0.5) ax.tick_params(axis='x', which='minor', width=0.5) ax.tick_params(axis='y', which='major', width=0.5) ax.tick_params(axis='y', which='minor', width=0.5) if x_label: ax.set_xlabel(x_label, fontweight='bold') if y_label: ax.set_ylabel(y_label, fontweight='bold') if y_line is not None: ax.axhline(y_line, color=self.plot_elems_colour, linestyle='--', linewidth=0.5) if rotate_x_labels: ax.tick_params(axis='x', rotation=45) ax.tick_params(axis='both', labelsize=(self.font_size*0.5), rotation=45) if export: if filepath: plt.savefig(filepath, dpi=DPI, bbox_inches=BBOX) else: raise ValueError('filepath must be specified if export is True') plt.show() </code></pre> <p>this all exists within a class that initialises defaults for all my plotting methods, that looks like this:</p> <pre><code>class PlotBuilder: def __init__(self, figsize: tuple = FIGSIZE, background: str = BACKGROUND, body_colour: str = BODY_COLOUR, plot_elems_colour: str = PLOT_ELEMS_COLOUR, plot_elems_palette: list = generate_colour_palette(BODY_COLOUR), grid_colour: str = GRID_COLOUR, font: str = FONT, font_size: int = FONT_SIZE): self.figsize = figsize self.background = background self.body_colour = body_colour self.plot_elems_colour = plot_elems_colour self.plot_elems_palette = plot_elems_palette self.grid_colour = grid_colour self.font = font self.font_size = font_size </code></pre>
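One likely cause of the jumbled labels is rotating them about their centers — and note that the final `ax.tick_params(axis='both', ..., rotation=45)` call rotates the y-axis labels as well. Rotating with right alignment anchors the end of each label under its bar, which keeps many long labels readable. A sketch using plain matplotlib with made-up team names (seaborn's `barplot` returns an ordinary `Axes`, so the same `setp` call applies to the code above):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import matplotlib.pyplot as plt

# Hypothetical long category names standing in for the 20 team names.
teams = [f"Very Long Team Name {i}" for i in range(12)]
xg = [1.8 - 0.1 * i for i in range(12)]

fig, ax = plt.subplots(figsize=(10, 5))
ax.bar(teams, xg)

# Rotate *and* right-align each label so its end sits under its bar;
# rotation_mode="anchor" rotates about the alignment point, not the center.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor")
fig.tight_layout()
```

In the `bar_plot` method above, this would replace the two `tick_params(..., rotation=45)` calls inside the `rotate_x_labels` branch, leaving the y-axis labels horizontal.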
<python><matplotlib><seaborn>
2023-05-30 16:30:41
1
679
nikUoM
76,366,860
6,758,739
Load file to an array and make necessary changes to array as per input and save it back to file using python
<p>The code below reads the file into an array, searches the array for a specific username, changes the password inside the array, and saves the array back to the file.</p> <blockquote> <ol> <li>The first column of the file can be a * or DB name</li> <li>I would like to change the password depending on the input</li> </ol> </blockquote> <pre><code>* DB_USR QWERTY * DB_MGR QWERTY TESTDB10 DB_USR PQRSTUV TESTDB20 DB_USR QWERTY TESTDB30 DB_USR QWERTY </code></pre> <p>Below is the code snippet that reads the file into an array and searches it:</p> <pre><code>import sys import re import logging def file_to_array(): file_info=&quot;/application/files/Password.txt&quot; with open(file_info, 'rt') as fh: pw_lines = [x.strip() for x in fh.readlines() if not x.startswith('#')] username=&quot;DB_USR&quot; pwd=&quot;PQRSTUV&quot; pw_lines = [re.sub(fr'^(\s*{username}\S+)\s+', fr'\1{pwd}', line) for line in pw_lines] # This line searches for a value DB_USR in 2nd column and changes the password in 3rd column def main(): file_to_array() </code></pre> <p>I would like the below regex to work in two ways:</p> <pre><code>pw_lines = [re.sub(fr'^(\b{username}\S+)\s+', fr'\1{pwd}', line) for line in pw_lines] # This line searches for a value DB_USR in 2nd column and changes the password in 3rd column </code></pre> <blockquote> <p>if the input is ./update_pwd TESTDB10 -&gt; This has to change the password in the line which starts with TESTDB10</p> </blockquote> <p>New File:</p> <pre><code> * DB_USR QWERTY * DB_MGR QWERTY TESTDB10 DB_USR PQRSTUV. 
-&gt; This value should change TESTDB20 DB_USR QWERTY TESTDB30 DB_USR QWERTY </code></pre> <blockquote> <p>if the input is just ./update_pwd -&gt; it should change the password in the line that starts with *</p> </blockquote> <p>New File:</p> <pre><code> * DB_USR PQRSTUV -&gt; This value should change * DB_MGR QWERTY TESTDB10 DB_USR QWERTY TESTDB20 DB_USR QWERTY TESTDB30 DB_USR QWERTY </code></pre> <blockquote> <p>As of now, the regex is only working for the wildcard line, but doesn't work if I input the db name.</p> </blockquote> <blockquote> <p>I would also like to know whether storing the file data in an array and modifying it in place is a good approach, or whether there is a better one. I am new to Python, so please excuse my mistakes.</p> </blockquote>
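Rather than patching the regex, a split-based rewrite may be easier to get right for whitespace-separated columns, since it matches the first column explicitly instead of anchoring on the username. A minimal sketch (function and variable names are hypothetical, not taken from the original script) that matches the first column against either `*` or a DB name supplied on the command line:

```python
import sys

def update_password(lines, username, new_pwd, db_name="*"):
    """Return a copy of `lines` where the 3rd column is replaced on rows
    whose 1st column equals db_name ('*' by default) and whose 2nd column
    equals username; all other rows pass through unchanged."""
    updated = []
    for line in lines:
        parts = line.split()
        if len(parts) == 3 and parts[0] == db_name and parts[1] == username:
            parts[2] = new_pwd
            updated.append(" ".join(parts))
        else:
            updated.append(line)
    return updated

rows = [
    "* DB_USR QWERTY",
    "* DB_MGR QWERTY",
    "TESTDB10 DB_USR QWERTY",
]
# `./update_pwd TESTDB10` -> db_name comes from sys.argv[1];
# no argument -> fall back to the wildcard row.
db_name = sys.argv[1] if len(sys.argv) > 1 else "*"
print(update_password(rows, "DB_USR", "PQRSTUV", "TESTDB10"))
```

The same function covers both requested cases: passing a DB name updates only that row, while the default `"*"` updates the wildcard row.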
<python><python-3.x>
2023-05-30 16:28:28
1
992
LearningCpp
76,366,589
9,658,149
How to select the correct tool in a specific order for an agent using Langchain?
<p>I think I don't understand how an <strong>agent</strong> chooses a tool. I have a vector database (<strong>Chroma</strong>) with all the embeddings of my <strong>internal knowledge</strong>, which I want the agent to look at first. Then, if the answer is not in the Chroma database, it should answer the question using the information that OpenAI used for training (external knowledge). If the question is a &quot;natural conversation&quot;, I want the agent to take on a persona when answering it. This is the code that I tried, but it just uses the <strong>Knowledge External Base</strong> tool. I want it to decide on the best tool.</p> <pre><code>from langchain.agents import Tool from langchain.chat_models import ChatOpenAI from langchain.chains.conversation.memory import ConversationBufferWindowMemory from langchain.chains import RetrievalQA from langchain.agents import initialize_agent from chroma_database import ChromaDatabase from langchain.embeddings import OpenAIEmbeddings from parameters import EMBEDDING_MODEL, BUCKET_NAME, COLLECTION_NAME embeddings = OpenAIEmbeddings(model=EMBEDDING_MODEL) chroma = ChromaDatabase(embedding_function=embeddings, persist_directory='database/vectors/', bucket_name=BUCKET_NAME, collection_name=COLLECTION_NAME) # chat completion llm llm = ChatOpenAI( model_name='gpt-3.5-turbo', temperature=0.0 ) # conversational memory conversational_memory = ConversationBufferWindowMemory( memory_key='chat_history', k=0, return_messages=True ) # retrieval qa chain qa = RetrievalQA.from_chain_type( llm=llm, chain_type=&quot;stuff&quot;, retriever=chroma.db.as_retriever() ) tools = [ Tool( name='Knowledge Internal Base', func=qa.run, description=( 'use this tool when answering internal knowledge queries. 
Search in the internal database retriever' ) ), Tool( name='Knowledge External Base', func=qa.run, description=( 'use this tool when the answer is not retrieved in the Knowledge Internal Base tool' ) ), Tool( name='Natural Conversation', func=qa.run, description=( 'use this tool when the answer is related to a natural conversation, act as friendly person' ) ) ] agent = initialize_agent( agent='chat-conversational-react-description', tools=tools, llm=llm, verbose=True, max_iterations=3, early_stopping_method='generate', memory=conversational_memory ) agent.run(&quot;What Pepito said?&quot;) #Pepito conversation is stored as embedding in Chroma agent.run(&quot;What Tom Cruise said in the movie Impossible Mission 1?&quot;) #I don't have anything about Tom Cruise in Chroma agent.run(&quot;Hello, how are you?&quot;) #I want the answer looks like: &quot;I'm pretty fine, how about you?&quot; </code></pre> <p>What should I do to have a correct plan-execute/orchestrator agent that takes the correct tool in the right order?</p>
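One point worth noting: the agent selects a tool per query from the tool names and descriptions alone; it never observes that a lookup "failed" and then retries with a different tool, so a description like "use this tool when the answer is not retrieved in the Knowledge Internal Base tool" cannot work as written. A common pattern is to encode the fallback inside a single tool. A pure-Python sketch of that pattern (all names are hypothetical; in practice `search_internal_kb` would wrap the Chroma retriever and `ask_llm` would call the bare LLM):

```python
def search_internal_kb(query: str):
    """Stand-in for the Chroma retriever: return an answer or None."""
    kb = {"pepito": "Pepito said the meeting moved to Friday."}
    for key, answer in kb.items():
        if key in query.lower():
            return answer
    return None  # nothing relevant in the internal store

def ask_llm(query: str) -> str:
    """Stand-in for a plain LLM call (external knowledge)."""
    return f"(general-knowledge answer to: {query})"

def knowledge_tool(query: str) -> str:
    """Single tool: try the internal base first, fall back to the LLM."""
    hit = search_internal_kb(query)
    return hit if hit is not None else ask_llm(query)
```

In LangChain terms this would mean registering one `Tool` whose `func` runs this try-then-fall-back logic, plus a separate conversational tool with its own `func`. Note also that all three tools in the code above share `func=qa.run`, so whichever tool the agent picks, the behavior is identical.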
<python><python-3.x><chatgpt-api><langchain>
2023-05-30 15:53:22
2
2,097
Eric Bellet