Dataset schema (column: dtype, observed min–max):
QuestionId: int64, 74.8M–79.8M
UserId: int64, 56–29.4M
QuestionTitle: string, 15–150 chars
QuestionBody: string, 40–40.3k chars
Tags: string, 8–101 chars
CreationDate: date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0–44
UserExpertiseLevel: int64, 301–888k
UserDisplayName: string, 3–30 chars, nullable
77,971,552
14,488,413
How to write a python function that splits and selects first element in each pandas column
<p>I have a dataframe <code>df</code> with double entries separated by a <code>,</code> in some columns. I want to write a function to extract only the entry before the <code>,</code> for columns with double entries.</p> <p>Example: <code>20.15,20.15</code> split to <code>20.15</code></p> <p>See the dataframe</p> <pre><code>import pandas as pd # initialize data of lists. data = {'Name': ['Tom', 'nick', 'krish', 'jack','Phil','Shaq','Frank','Jerome','Arpan','Sean'], 'Age': ['20.15,20.15', '21.02,21.02', '19.04,19.04','18.17,18.17','65.77,65.77','34.19,34.19','76.12,76.12','65.55,65.55','55.03,55.03','41.11,41.11'], 'Score_1':['10,10', '21,21', '19,19','18,18','65,65','34,34','76,76','65,65','55,55','41,41'], 'Score_2':['11,11', '31,31', '79,79','38,38','75,75','94,94','26,26','15,15','96,96','23,23'], 'Score_3':['101,101', '212,212', '119,119','218,218','765,765','342,342','706,706','615,615','565,565','491,491'], 'Type':[ 'A','C','D','F','B','E','H','G','J','K'], 'bonus':['3.13,3.13','5.02,5.02','4.98,4.98','6.66,6.66','0.13,0.13','4.13,4.13','5.12,5.12','4.28,4.28','6.16,6.16','5.13,5.13'], 'delta':[0.1,0.3,2.3,8.2,7.1,5.7,8.8,9.1,4.3,2.9]} # Create DataFrame df = pd.DataFrame(data) # Print the output. print(df) </code></pre> <p>Desired output (You can copy &amp; paste)</p> <pre><code># initialize data of lists. df1 = {'Name': ['Tom', 'nick', 'krish', 'jack','Phil','Shaq','Frank','Jerome','Arpan','Sean'], 'Age': ['20.15', '21.02', '19.04','18.17','65.77','34.19','76.12','65.55','55.03','41.11'], 'Score_1':['10', '21', '19','18','65','34','76','65','55','41'], 'Score_2':['11', '31', '79','38','75','94','26','15','96','23'], 'Score_3':['101', '212', '119','218','765','342','706','615','565','491'], 'Type':[ 'A','C','D','F','B','E','H','G','J','K'], 'bonus':['3.13','5.02','4.98','6.66','0.13','4.13','5.12','4.28','6.16','5.13'], 'delta':[0.1,0.3,2.3,8.2,7.1,5.7,8.8,9.1,4.3,2.9]} # Create DataFrame df2 = pd.DataFrame(df1) # Print the output. 
print(df2) </code></pre> <p>I need help with a more robust function, see my attempt below</p> <pre><code>def stringsplitter(data,column): # select columns with object datatype data1 = data.select_dtypes(include=['object']) cols= data1[column].str.split(',', n=1).str print(cols[0]) # applying stringsplitter to the dataframe final_df = df.apply(stringsplitter) </code></pre> <p>Thanks for your help</p>
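One possible approach (a sketch, not from the post): select the object-dtype columns as the attempt does, but assign the split results back in one shot. Columns without a comma pass through unchanged, because `str.split` returns the whole string as the first piece.

```python
import pandas as pd

# Small excerpt of the question's data; numeric columns must stay untouched.
df = pd.DataFrame({
    "Name": ["Tom", "nick"],
    "Age": ["20.15,20.15", "21.02,21.02"],
    "Score_1": ["10,10", "21,21"],
    "delta": [0.1, 0.3],
})

# Split each string column on the first ',' and keep the part before it.
str_cols = df.select_dtypes(include="object").columns
df[str_cols] = df[str_cols].apply(lambda s: s.str.split(",", n=1).str[0])
print(df)
```

The original `df.apply(stringsplitter)` fails because `apply` passes each column Series, not `(data, column)`; operating on the selected columns directly avoids that mismatch.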
<python><pandas><user-defined-functions>
2024-02-10 00:55:27
2
322
nasa313
77,971,363
5,091,507
Python nested directories import error "ImportError: attempted relative import with no known parent package"
<p>I have a project where the files and directories are structured as below:</p> <pre><code>Project/ β”œβ”€ __init__.py β”œβ”€ main.py β”œβ”€ Utils/ └─ general_utils.py └─ __init__.py └─ Baselines/ └─ __init__.py └─ random/ └─ __init__.py └─ random.py </code></pre> <p>In order to run the code, I call <code>python main.py</code> and inside <code>main.py</code> the script <code>python Baselines/random/random.py</code> is called using <code>os.system</code>. My problem is I can't find a simple way to import functions from <code>general_utils.py</code> inside <code>random.py</code>.</p> <p>I tried</p> <p><code>from Utils.general_utils import *</code></p> <p>and</p> <p><code>from ..Utils.general_utils import *</code></p> <p>Similar questions have been asked before but, everything I try ends up with a &quot;ImportError: attempted relative import with no known parent package&quot;. Any ideas on how to do this import?</p>
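A sketch of one way out (not from the post): since the child script is launched as a standalone program, it has no parent package, so relative imports cannot work; prepending the project root to `sys.path` and using an absolute import does. The layout below is a throwaway copy of the question's structure; the nested directory is named `rand` here because a directory called `random` also shadows the stdlib `random` module, a second hazard in the original layout.

```python
import os
import subprocess
import sys
import tempfile

# Recreate a minimal copy of the question's layout in a temp dir.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "Utils"))
os.makedirs(os.path.join(root, "Baselines", "rand"))
with open(os.path.join(root, "Utils", "general_utils.py"), "w") as f:
    f.write("def greet():\n    return 'hi'\n")

runner = os.path.join(root, "Baselines", "rand", "runner.py")
with open(runner, "w") as f:
    f.write(
        "import os, sys\n"
        # two levels up from Baselines/rand/ is the project root
        "sys.path.insert(0, os.path.abspath("
        "os.path.join(os.path.dirname(__file__), '..', '..')))\n"
        "from Utils.general_utils import greet\n"
        "print(greet())\n"
    )

# Launch the nested script the way os.system would, as its own process.
result = subprocess.run([sys.executable, runner], capture_output=True, text=True)
print(result.stdout.strip())
```

A cleaner long-term fix is to not shell out at all and instead run everything under one interpreter with `python -m Baselines.rand.runner` from the project root.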
<python><python-3.x><import><nested>
2024-02-09 23:32:43
1
1,047
A.A.
77,971,348
395,857
How can one add a label to a Markdown box in Gradio?
<p>I tried:</p> <pre><code>import gradio as gr with gr.Blocks(css=&quot;footer{display:none !important}&quot;) as demo: answer = gr.Markdown(value='[StackOverflow](https://stackoverflow.com/)', label=&quot;Answer&quot;) demo.launch(share=True) </code></pre> <p>but to my surprise, the label &quot;Answer&quot; doesn't appear in the UI:</p> <p><a href="https://i.sstatic.net/2lBwH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2lBwH.png" alt="enter image description here" /></a></p> <p>I don't have that issue with <code>Textbox</code>:</p> <p>E.g.,</p> <pre><code>import gradio as gr with gr.Blocks(css=&quot;footer{display:none !important}&quot;) as demo: answer = gr.Textbox(value='[StackOverflow](https://stackoverflow.com/)', label=&quot;Answer&quot;) demo.launch(share=True) </code></pre> <p>will display the label &quot;Answer&quot; for the textbox:</p> <p><a href="https://i.sstatic.net/DcpGr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DcpGr.png" alt="enter image description here" /></a></p> <p>How can one add a label to a Markdown box in Gradio?</p> <p>I use Gradio 4.16.0 with Python 3.11.7 on Windows 10.</p>
<python><gradio>
2024-02-09 23:28:17
1
84,585
Franck Dernoncourt
77,971,075
169,947
Python: load resources without using __init__.py files
<p>I have a directory of JSON Schema files that's distributed with a python package. Some code in the package uses the schema files to validate data.</p> <p>The way I'm currently loading the schema files is like so:</p> <pre><code>from importlib import resources from myapp.schema import v1 as schema_v1 ... def _schema_file(self, uri: str): return resources.files(schema_v1) / uri </code></pre> <p>But one thing I don't like: this requires sprinkling <code>__init__.py</code> files throughout the <code>schema/</code> directory for the <code>import</code> statement to work. I'd rather the <code>schema/</code> directory has just the schema files and nothing else.</p> <p>Is there a better strategy I should be using? I'm definitely open to switching away from <code>importlib</code> if that's advisable, I know there are lots of options but haven't figured out which is the most favored these days.</p>
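A sketch of one option (the package layout here is made up to mirror the question): anchor `resources.files()` on the top-level package that does have an `__init__.py` and walk down with `/`. The schema directories themselves then need no `__init__.py` files, because `Traversable` paths are simply joined.

```python
import os
import sys
import tempfile
from importlib import resources

# Build a throwaway package: only the top level has __init__.py;
# schema/v1/ is a plain directory of data files.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "myapp", "schema", "v1"))
open(os.path.join(root, "myapp", "__init__.py"), "w").close()
with open(os.path.join(root, "myapp", "schema", "v1", "thing.json"), "w") as f:
    f.write("{}")

sys.path.insert(0, root)

def schema_file(uri: str):
    # Anchor on the importable package, then join plain directories.
    return resources.files("myapp") / "schema" / "v1" / uri

print(schema_file("thing.json").read_text())
```

This keeps `importlib.resources` (still the favored mechanism) while confining `__init__.py` to the package root; newer Python versions have been further relaxing the package requirements for `files()`.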
<python><resources><jsonschema><init>
2024-02-09 22:02:02
0
24,277
Ken Williams
77,971,020
6,361,813
MultiIndex Dataframe: Sort values for each group
<p>Assume I have a <code>pandas</code> MultiIndex Dataframe similar to the one below. How do I sort the values for each group by preserving the assignment and order of the indexes? The code below works for the sorting, but I fail to resave the sorted values and <code>inplace</code> does not seem to work.</p> <pre><code>df = pd.DataFrame([4,8,6,11,13,15], columns=['val'], index=[['A','A','A','B','B','B'],['a1','a2','a3','b1','b2','b3']]) # val # A a1 4 # a2 8 # a3 6 # B b1 11 # b2 13 # b3 15 for idx in df.index.unique(level=0): tmp = df.loc[idx].sort_values(by='val') # val # a1 4 # a3 6 # a2 8 </code></pre>
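One way to save the sorted values back (a sketch, not from the post): let `groupby` do the per-group sort and reassemble the frame, which preserves the MultiIndex assignment while reordering rows within each level-0 group.

```python
import pandas as pd

df = pd.DataFrame(
    [4, 8, 6, 11, 13, 15],
    columns=["val"],
    index=[["A", "A", "A", "B", "B", "B"], ["a1", "a2", "a3", "b1", "b2", "b3"]],
)

# Sort within each outer-level group; group_keys=False keeps the original
# two-level index instead of prepending the group key again.
out = df.groupby(level=0, group_keys=False).apply(lambda g: g.sort_values("val"))
print(out)
```

`inplace` does not help in the question's loop because `df.loc[idx].sort_values(...)` returns a sorted copy of the slice; the result has to be reassigned, as above.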
<python><pandas><group-by>
2024-02-09 21:45:08
3
407
Pontis
77,971,003
8,713,442
How to create list column with alias in polars dataframe
<p>I have table which has some columns in it . I need to run some rules to match data with incoming data and provide score .</p> <p>I am trying to create list of score and find best out of it but below mentioned piece is not running . it is giving error</p> <pre><code>import polars as pl import jaro def test_polars(): name='savah' data = {&quot;first_name&quot;: ['sarah', 'purnima'], &quot;last_name&quot;: ['vats', 'malik']} df = pl.DataFrame(data) print(df) df = (df.with_columns( [ (pl.when(pl.col(&quot;first_name&quot;) == name).then(1).otherwise(0)).alias(&quot;E_FN&quot;), (pl.when(pl.col(&quot;last_name&quot;) == name).then(1).otherwise(0)).alias(&quot;E_LN&quot;), (pl.when(pl.col(&quot;first_name&quot;).str.slice(0, 3) == name[0:3]).then(1).otherwise(0)).alias(&quot;F3_FN&quot;), (pl.when(pl.col(&quot;first_name&quot;).map_elements( lambda first_name: jaro.jaro_winkler_metric(first_name, name)) &gt;= 0.8).then(1).otherwise(0)).alias( &quot;CMP80_FN&quot;), (pl.when(pl.col(&quot;last_name&quot;).map_elements( lambda first_name: jaro.jaro_winkler_metric(first_name, name)) &gt;= 0.9).then(1).otherwise(0)).alias( &quot;CMP90_LN&quot;), ] ) .with_columns( [ ([980 * pl.col(&quot;E_FN&quot;) * pl.col(&quot;E_LN&quot;) , 990 * pl.col(&quot;E_FN&quot;) * pl.col(&quot;CMP80_FN&quot;) ]).alias ( &quot;score&quot;), ] )) print(df) if __name__ == '__main__': test_polars() C:\PythonProject\pythonProject\venv\Graph_POC\Scripts\python.exe &quot;C:\PythonProject\pythonProject\polars data.py&quot; shape: (2, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ first_name ┆ last_name β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════║ β”‚ sarah ┆ vats β”‚ β”‚ purnima ┆ malik β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Traceback (most recent call last): File &quot;C:\PythonProject\pythonProject\polars data.py&quot;, line 37, in &lt;module&gt; test_polars() File 
&quot;C:\PythonProject\pythonProject\polars data.py&quot;, line 28, in test_polars ]).alias ( &quot;score&quot;), ^^^^^ AttributeError: 'list' object has no attribute 'alias' Process finished with exit code 1 </code></pre>
<python><python-polars>
2024-02-09 21:39:52
1
464
pbh
77,970,924
893,254
Using a functor as a stateful callback function in Python
<p>If I have a function which takes a callback function as an argument, how can I create a functor object which can be used in place of a callback function, if that callback function takes some set of arguments?</p> <h1>Context:</h1> <p>Kafka producers take a callback function.</p> <pre><code>producer.produce( topic=topic, key=key, value=value, callback=delivery_callback ) </code></pre> <pre><code>callback_count = 0 def delivery_callback(error, message_payload): if error: print(f'{error}') else: global callback_count callback_count += 1 # count the number of successfully # delivered messages </code></pre> <p>This callback function counts the number of successfully delivered messages.</p> <h1>Problems with this design and proposed solution</h1> <p>The problems with this design are obvious - the use of global variables.</p> <p>My first instinct was to design some kind of functor object, which perhaps might define <code>__call__</code> to make it callable.</p> <pre><code>class DeliveryCallbackCounter(): def __init__(self) -&gt; None: self.count_callback = 0 def __call__(self, error, message): if error: print(f'ERROR: Kafka: Message delivery failure: {error}') else: self.count_callback += 1 def __str__(self) -&gt; str: return f'DeliveryCallbackCounter: callback count: {self.count_callback}' </code></pre> <p>This can be used in the following way, and indeed works.</p> <pre><code>delivery_callback_counter = DeliveryCallbackCounter() producer.produce( topic=topic, key=key, value=value, callback=delivery_callback_counter ) </code></pre> <p>However, this solution does not work in quite the same way as the previous option, because the class <code>DeliveryCallbackCounter</code> is not static. 
Previously, when using a function as the callback function, rather than a functor (class) the modified data <code>count_callback</code> was a global variable with static lifetime.</p> <p>With a functor, we have something slightly different, and it would be possible to create many instantiations of the class <code>DeliveryCallbackCounter</code>.</p> <p>Python classes have something similar to static methods which are indicated by the <code>@classmethod</code> decorator.</p> <p>Is it possible to reproduce the same semantics as offered by the &quot;function option&quot; by building a &quot;static class&quot; in Python? I am not familiar enough with Python to know how to do this.</p>
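A sketch of one way to get the "static" semantics (not the only option): keep the counter as a class attribute rather than an instance attribute. Every instance, and the class itself, then shares a single count with program lifetime, matching the global-variable version.

```python
class DeliveryCallbackCounter:
    count_callback = 0  # class attribute: one shared counter, static lifetime

    def __call__(self, error, message):
        if error:
            print(f"ERROR: Kafka: Message delivery failure: {error}")
        else:
            # Mutate the class attribute, not an instance attribute.
            type(self).count_callback += 1

# Two separate instances still feed the same counter.
counter_a = DeliveryCallbackCounter()
counter_b = DeliveryCallbackCounter()
counter_a(None, "payload-1")
counter_b(None, "payload-2")
print(DeliveryCallbackCounter.count_callback)
```

Alternatives with the same effect include a closure over a `count` variable (using `nonlocal`) or `functools.partial` binding a shared mutable counter object; Python has no enforced "static class", so sharing is by convention.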
<python>
2024-02-09 21:19:59
1
18,579
user2138149
77,970,839
6,461,882
Vectorized way to copy elements from pandas Series to python built-in array
<p>Is there a vectorized way to copy elements from a pandas Series to a python built-in array? For example:</p> <pre class="lang-py prettyprint-override"><code>from array import array import pandas as pd s = pd.Series(range(0, 10, 2)); s += 0.1 a = array('d', [0.0]*10) # I am looking for a vectorized equivalent of the below line: for n,x in enumerate(s): a[n] = x </code></pre> <p>In my case, the array is given as a memory buffer to an external code, which saves the array address and only re-reads array values upon each call. So, I cannot recreate the array and need to only replace its values as quickly as possible.</p> <p>Thank you very much for your help!</p>
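A sketch of one approach (assuming NumPy is acceptable alongside pandas): view the `array.array`'s existing buffer through `np.frombuffer` and assign into that view. The values are replaced in place, so the buffer address held by the external code stays valid.

```python
from array import array

import numpy as np
import pandas as pd

s = pd.Series(range(0, 10, 2))
s = s + 0.1
a = array("d", [0.0] * 10)

# Zero-copy writable view over a's buffer; one vectorized assignment
# overwrites the first len(s) slots without recreating the array.
np.frombuffer(a, dtype="d")[: len(s)] = s.to_numpy()
print(a[:5])
```

`frombuffer` returns a writable view here because `array.array` exposes a writable buffer; the same trick does not work on immutable buffers such as `bytes`.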
<python><arrays><pandas><vectorization><series>
2024-02-09 20:57:43
1
2,855
S.V
77,970,659
6,458,245
Huggingface pipeline error: IndexError: too many indices for tensor of dimension 2
<p>Anyone know why I get that error when I run the pipeline?</p> <pre><code>from transformers import AutoModelForSequenceClassification from transformers import AutoTokenizer from transformers import pipeline modeltemp = AutoModelForSequenceClassification.from_pretrained(&quot;bert-base-cased&quot;, num_labels=5) tokenizer = AutoTokenizer.from_pretrained(&quot;bert-base-cased&quot;) unmasker = pipeline('fill-mask', model=modeltemp, tokenizer=tokenizer) unmasker(&quot;Hello I drive a red [MASK].&quot;) </code></pre> <p>If I directly have the pipeline take in bert-base-cased, everything works. But if I first load bert-base-cased using AutoModel, it doesn't work.</p>
<python><pytorch><huggingface-transformers><huggingface>
2024-02-09 20:20:41
1
2,356
JobHunter69
77,970,391
9,698,518
Benchmarking with Pytest Parallelized Cython Code results in Fatal Python Error
<p>I have the following test:</p> <pre class="lang-py prettyprint-override"><code>import array def test_parallel_cython_clip(benchmark: Any) -&gt; None: benchmark.pedantic( math_cython.parallel_cython_clip_vector, args=(array.array(&quot;f&quot;, [0.0]),-1.0,1.0,array.array(&quot;f&quot;, [0.0])), rounds=1, iterations=1 ) </code></pre> <p>that I execute via:</p> <pre><code>pytest --benchmark-columns=min,max,mean,stddev --benchmark-sort=mean benchmark.py -vv -k test_parallel_cython_clip </code></pre> <p>which results in the error: <code>benchmark.py::test_parallel_cython_clip Fatal Python error: Aborted</code></p> <p>Furthermore the following information is provided:</p> <pre class="lang-bash prettyprint-override"><code>Extension modules: mkl._mklinit, mkl._py_mkl_service, numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, math_cython.cython_computations (total: 16) </code></pre> <p>The package folder <code>math_cython</code>, which is editable-installed into the conda environment, contains a <code>.py</code> module with the following entry:</p> <pre class="lang-py prettyprint-override"><code>def parallel_cython_clip_vector( vector_in: array.array, min_value: float, max_value: float, vector_out: array.array, ) -&gt; None: _parallel_cython_clip_vector(vector_in, min_value, max_value, vector_out) </code></pre> <p>and a <code>.pyx</code> file with the <code>_parallel_cython_clip_vector</code> function defined as follows:</p> <pre><code>@cython.boundscheck(False) # Deactivate bounds checking (which is possible in Python, but not in C) @cython.wraparound(False) # Deactivate negative indexing (which is possible in Python, but not in C) def _parallel_cython_clip_vector( float[:] vector_in, float 
min_value, float max_value, float[:] vector_out, ): cdef signed int idx = 0 &lt;this part I commented for the moment&gt; </code></pre> <p>Operating System Info:</p> <pre class="lang-bash prettyprint-override"><code>macOS 14.3.1 Darwin 23.3.0 </code></pre> <p>Installed Python libraries:</p> <pre><code>cython 3.0.6 pytest 8.0.0 pytest-benchmark 4.0.0 python 3.11.7 numpy 1.26.3 </code></pre> <p>I saw similar questions here, but I do not think they address my issue:</p> <ul> <li><a href="https://stackoverflow.com/questions/46977293/enabling-parallelism-with-cython">Enabling Parallelism with Cython</a></li> <li><a href="https://stackoverflow.com/questions/42131988/cython-prange-fails-with-fatal-python-error-pythreadstate-get-no-current-threa">Cython prange fails with Fatal Python error: PyThreadState_Get: no current thread</a></li> <li><a href="https://stackoverflow.com/questions/48780826/cython-crashed-using-memoryviews">Cython crashed using memoryviews</a></li> </ul> <p>Any help to resolve the Fatal Python error would be very much appreciated.</p> <p>EDIT: Running debugger in VSCode I got more information on the error:</p> <pre class="lang-bash prettyprint-override"><code>OMP: Error #15: Initializing libomp.dylib, but found libiomp5.dylib already initialized. OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://openmp.llvm.org/ Fatal Python error: Aborted </code></pre>
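The OpenMP message itself names an escape hatch; as a sketch (explicitly marked unsafe by the runtime), the environment variable has to be set before any OpenMP-linked extension module is imported. The robust fix remains ensuring only one OpenMP runtime ends up in the process, e.g. by building the Cython extension against the same toolchain/OpenMP as the NumPy/MKL packages in the conda environment.

```python
# Unsafe workaround from the OMP error message, for unblocking benchmarks
# only: must run before numpy / math_cython (or anything MKL-linked) loads.
import os

os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

# ...imports that pull in an OpenMP runtime go after this point.
```

With pytest, the variable can also be set in `conftest.py` (or the shell) so it is in place before test collection imports the extension modules.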
<python><parallel-processing><pytest><cython><benchmarking>
2024-02-09 19:24:17
1
672
mgross
77,970,307
1,786,165
Parsing single line and multi line comments with Arpeggio
<p>I'm trying to use Arpeggio to parse file containing single line and multi line comments.</p> <p>Arpeggio's documentation suggests to have a look at their &quot;simple&quot; example to see how to deal with them (see <a href="https://textx.github.io/Arpeggio/2.0/configuration/#comment-handling" rel="nofollow noreferrer">documentation</a> and <a href="https://github.com/textX/Arpeggio/blob/master/examples/simple/simple.py" rel="nofollow noreferrer">linked code</a>). The example indeed includes the following definition:</p> <pre><code>def comment(): return [_(r&quot;//.*&quot;), _(r&quot;/\*.*\*/&quot;)] </code></pre> <p>which is used by the parser as follows:</p> <pre><code>parser = ParserPython(simpleLanguage, comment, debug=debug) </code></pre> <p>Unfortunately, however, their example doesn't contain any comment so it's not really possible to see how it works. If I add the following dummy comments to the example:</p> <pre><code>/* This is a multi-line comment. */ // This is a single-line comment. function fak(n) { ... </code></pre> <p>then the following exception is raised:</p> <pre><code>arpeggio.NoMatch: Expected '//.*' or keyword at position (1, 1) =&gt; '*/* This is'. 
</code></pre> <p>which seems to suggest the example file doesn't match the comment rule nor keyword <code>function</code> that is the first token that the production of <code>simpleLanguage</code> allows.</p> <p>Does anyone know how we are supposed to deal with comments?</p> <p>Please find below a MRE if it helps debugging the problem:</p> <pre><code>from __future__ import unicode_literals import os from arpeggio import * from arpeggio import RegExMatch as _ def comment(): return [_(r&quot;//.*&quot;), _(r&quot;/\*.*\*/&quot;)] def document(): return Kwd(&quot;hello&quot;), _(r&quot;[a-z]+&quot;), '!', EOF def main(filename, debug=False): current_dir = os.path.dirname(__file__) content = open(os.path.join(current_dir, filename), &quot;r&quot;).read() parser = ParserPython(document, comment, debug=debug) parse_tree = parser.parse(content) if __name__ == &quot;__main__&quot;: main('simple.ex', debug=True) </code></pre> <p>and the content of the file to parse:</p> <pre><code>/* This is a multi-line comment. */ // This is a single line comment. hello world! </code></pre>
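A sketch of the underlying regex issue, independent of Arpeggio: `.` does not match newlines by default, so `r"/\*.*\*/"` can never match a comment that spans lines, while a `[\s\S]` class with a non-greedy quantifier can.

```python
import re

text = "/* This is a\nmulti-line comment. */ hello world!"

# The pattern from the example: '.*' stops at the newline, so no match.
assert re.match(r"/\*.*\*/", text) is None

# [\s\S] crosses newlines; *? stops at the first closing */.
multiline = re.match(r"/\*[\s\S]*?\*/", text)
print(multiline.group(0))
```

This suggests (as an untested adjustment, not confirmed against Arpeggio) replacing the multi-line alternative in `comment()` with something like `_(r"/\*[\s\S]*?\*/")` so the comment rule can actually consume the block comment at position (1, 1).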
<python><python-3.x><peg><arpeggio>
2024-02-09 19:02:23
1
644
Stefano Bragaglia
77,970,235
21,107,707
Detect arrow key being pressed while an `input()` is running in Python
<p>I am creating a REPL for a programming language, and I want to allow arrow keys to cycle through past inputs, much like the official Python REPL does. In order to take user input, I'm doing something like this:</p> <pre class="lang-py prettyprint-override"><code>code = input(&quot; &gt; &quot;) # stuff here to process the code </code></pre> <p>However, I can't figure out how to get this to detect up/down arrow keys being pressed as well. How can I manage this?</p>
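One common route (a sketch; Unix-only, since the stdlib module is unavailable on Windows, where the third-party `pyreadline3` package plays the same role): merely importing `readline` wires GNU line editing into `input()`, so up/down arrows recall earlier entries without any manual key handling.

```python
import readline  # side effect: input() gains line editing and history

# History can also be seeded or inspected explicitly:
readline.add_history("previous_line()")
print(readline.get_history_item(readline.get_current_history_length()))

# In the REPL loop, input(" > ") now supports arrow-key history for free;
# each accepted line is appended to the history automatically.
```

For full control (custom completion, multi-line editing), libraries such as `prompt_toolkit` are a heavier but portable alternative.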
<python>
2024-02-09 18:46:17
3
801
vs07
77,970,172
5,594,008
Fakeredis, mocking django cache function in DRF tests
<p>I'm using fakeredis to mock my Django tests</p> <p>settings.py</p> <pre><code>CACHES = { &quot;default&quot;: { &quot;BACKEND&quot;: &quot;django_redis.cache.RedisCache&quot;, &quot;LOCATION&quot;: &quot;redis://127.0.0.1:6382&quot;, &quot;TIMEOUT&quot;: env.int(&quot;CACHE_TIMEOUT&quot;, default=2 * 60 * 60), &quot;OPTIONS&quot;: { &quot;CONNECTION_POOL_KWARGS&quot;: {&quot;connection_class&quot;: FakeConnection}, }, } } </code></pre> <p>In views I use from django.core.cache import cache</p> <p>views.py</p> <pre><code>class MyViewSet(GenericViewSet): @action(methods=[&quot;GET&quot;], detail=False) def my_view(self, request): if my_data := cache.get(&quot;my_key&quot;): # some logic else: # some logic with calculation cache.set(&quot;my_key&quot;, my_data) return Response(my_data) </code></pre> <p>Now, in DRF tests I want to check that cache is set and in the next call of this endpoint the value is received from cache and no calculation is performed</p> <p>tests.py</p> <pre><code>class MyTestCase(APITestCase): def test_my_view(self): response = self.client.get(reverse(&quot;my_view&quot;)) self.assertEqual(status.HTTP_200_OK, response.status_code) self.assertEqual() # want to check that cache is set </code></pre> <p>Is there any way to mock django cache with fakeredis and check that value appears in cache?</p>
<python><django><fakeredis>
2024-02-09 18:33:52
1
2,352
Headmaster
77,970,082
17,274,113
importing geopandas throws error `module 'os' has no attribute 'add_dll_directory'`
<p>When importing geopandas with <code>import geopandas as gpd</code>, the error <code>module 'os' has no attribute 'add_dll_directory'</code> is thrown.</p> <p>geopandas version: 0.9.0</p> <p>python version: 3.7</p> <p>operating system: Windows 10</p> <p><strong>Constraint:</strong></p> <p>I am working on a computer which blocks the installation of Python except that which comes as part of an ArcGIS Pro installation. Therefore I cannot update my Python installation to meet the requirements of packages.</p> <p>I recognize that this question has been asked elsewhere, like <a href="https://stackoverflow.com/questions/75794403/attributeerror-module-os-has-no-attribute-add-dll-directory">here</a>. That response taught me that <code>add_dll_directory</code> is not supported by Python 3.7, which is my version, so that is likely my problem.</p> <p><strong>Attempts:</strong></p> <ol> <li>Downgrade geopandas to v0.10.2 (last version compatible with Python 3.7). This had no effect on the result.</li> </ol> <p><strong>Question:</strong></p> <p>What is actually causing this error? Can I avoid it and still make use of geopandas given my constraints?</p> <p>Here is the full error message:</p> <p><a href="https://i.sstatic.net/vo3hN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vo3hN.png" alt="Full error message" /></a></p> <p>Thank you very much!</p>
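A workaround sketch, not a supported fix: `os.add_dll_directory` only exists on Python 3.8+ (and only on Windows), so a package that calls it unconditionally crashes on 3.7. Shimming a no-op before the import gets past the `AttributeError`, but the GDAL/GEOS DLLs must then already be discoverable via `PATH`, and results are at your own risk.

```python
import contextlib
import os

# Shim for interpreters where os.add_dll_directory is missing (Python < 3.8,
# or any non-Windows platform): a no-op that still works as a context manager.
if not hasattr(os, "add_dll_directory"):
    os.add_dll_directory = lambda path: contextlib.nullcontext()

# import geopandas as gpd  # now reaches the real import machinery
```

The cleaner route under the stated constraint is installing a geopandas version whose dependencies (notably fiona/pyogrio builds) still target Python 3.7, via the conda environment that ships with ArcGIS Pro.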
<python><dll><environment><arcgis><geopandas>
2024-02-09 18:13:00
0
429
Max Duso
77,970,057
12,846,524
Customtkinter thread causing strange freezing issue after self.destroy() is called under certain conditions
<h2>Outline of task</h2> <p>I have a GUI created using CustomTKinter that acquires data when a 'start' button is clicked, and a 'stop' button that stops the data acquisition. These are controlled by an instance of threading.Thread in another class that uses a state variable (ThreadState) and a boolean (get_data) to control the process through a while loop.</p> <h2>Outline of problem</h2> <p>My issue is that, if the program is actively acquiring data and I click close via the Windows close button, it causes my program to freeze. I believe this is because it has not gracefully exited the acquisition thread.</p> <p>If I click the stop button and then close, the program exits cleanly. <strong>However</strong>, I have modified the close button to perform the 'stop button' functionality itself, and then call self.destroy() to close the GUI. This causes the program to freeze as the Thread is still actively running, and I have to kill the process via terminal. <em>This confuses me as clicking the Stop button manually does <em>not</em> cause this issue.</em></p> <p>Here is a dummy version of the code that replicates this issue (hopefully the comments will explain everything!):</p> <pre><code>import customtkinter as ctk from threading import Thread, Event import time from enum import IntEnum # Define the thread states class ThreadState(IntEnum): READY = 0 CONTINUOUS = 1 class MainApp(ctk.CTk): def __init__(self, *args, **kwargs): &quot;&quot;&quot; Initialise the main application window &quot;&quot;&quot; super().__init__(*args, **kwargs) self.stop_event = Event() # event that can be used to stop the thread self.data_thread = None # instantiate thread that runs the data acquisition # Create the main frame self.title(&quot;Dummy Program&quot;) self.geometry(&quot;310x75&quot;) # Create a frame for the buttons self.control_frame = ctk.CTkFrame(self, fg_color=&quot;transparent&quot;) self.control_frame.pack(side=&quot;top&quot;, fill=&quot;both&quot;, padx=10, pady=10) 
start_button = ctk.CTkButton(self.control_frame, text=&quot;Start Acquisition&quot;, command=self.start_continuous) start_button.pack(side=&quot;left&quot;) stop_button = ctk.CTkButton(self.control_frame, text=&quot;Stop Acquisition&quot;, command=self.stop_continuous) stop_button.pack(side=&quot;right&quot;) # Add progress bar to the bottom of the main frame self.progressbar = ctk.CTkProgressBar(self) self.progressbar.pack(side=&quot;bottom&quot;, fill=&quot;both&quot;, padx=10, pady=10) # Bind the &quot;window close&quot; event to the on_close method self.protocol(&quot;WM_DELETE_WINDOW&quot;, self.on_close) def start_continuous(self): &quot;&quot;&quot; Start the continuous data process. It first checks if the data thread does not yet exist, or if it is not already running. If either condition is met, it creates a new thread and starts it. &quot;&quot;&quot; if not self.data_thread or not self.data_thread.is_alive(): self.stop_event.clear() # reset the stop event self.data_thread = DataThread(self, self.stop_event) # create a new thread self.data_thread.start() # start the thread self.data_thread.acquire_continuous_start() # (see the associated function) def stop_continuous(self): &quot;&quot;&quot; Stop the continuous data process. It first checks if the data thread exists and is running. If so, it sets the stop event and stops the thread. &quot;&quot;&quot; if self.data_thread and self.data_thread.is_alive(): self.stop_event.set() # set the stop event self.data_thread.acquire_continuous_stop() # (see the associated function) def on_close(self): &quot;&quot;&quot; Closes the application. It first stops the continuous data process and then closes the GUI &quot;&quot;&quot; print('Stopping thread @ ', time.time()) self.stop_continuous() print('Closing GUI @ ', time.time()) self.destroy() print('!!! The program should now be frozen if closed during acquisition !!!') print('!!! 
The program does not freeze if you click &quot;Stop Acquisition&quot; and then close !!!') class DataThread(Thread): def __init__(self, parent_app, stop_event): &quot;&quot;&quot; Initialise a thread to handle the acquisition of dummy data. Parameters: parent_app (MainApp): The parent application object. stop_event (Event): An event that can be used to stop the thread. &quot;&quot;&quot; super(DataThread, self).__init__() self.parent_app: MainApp = parent_app # inherit the parent application self.stop_event = stop_event # inherit the stop event self.get_data = False # flag to indicate if data should be acquired self.thread_state = ThreadState.READY # state of the thread def run(self): &quot;&quot;&quot; While the stop_event is not set, run the thread and handle different thread states &quot;&quot;&quot; while not self.stop_event.is_set(): match self.thread_state: case ThreadState.CONTINUOUS: # this loop the data acquisition self._continuous_state() self.thread_state = ThreadState.READY case ThreadState.READY: # this is an idle state print(&quot;Ready&quot;) time.sleep(0.5) case _: print(&quot;Unknown state&quot;) time.sleep(0.5) def _take_acquisition(self): &quot;&quot;&quot; Simulate acquisition time with a progress bar &quot;&quot;&quot; print(&quot;Acquiring dummy data&quot;) start_time = time.time() elapsed_time = 0.0 while elapsed_time &lt; 1.0: time.sleep(0.05) elapsed_time = time.time() - start_time self.parent_app.progressbar.set(max(min(1.0, elapsed_time), 0.0)) def _continuous_state(self): &quot;&quot;&quot; Continuously acquire dummy data &quot;&quot;&quot; while self.get_data: self._take_acquisition() def acquire_continuous_start(self): &quot;&quot;&quot; Enables continuous data acquisition &quot;&quot;&quot; self.get_data = True # set the get_data flag - see _continuous_state() self.thread_state = ThreadState.CONTINUOUS # see run() def acquire_continuous_stop(self): &quot;&quot;&quot; Disables continuous data acquisition &quot;&quot;&quot; 
self.get_data = False # set the get_data flag - see _continuous_state() def main(): app = MainApp() app.mainloop() if __name__ == &quot;__main__&quot;: main() </code></pre> <h2>Process to repeat issue</h2> <p>To run and close the program cleanly:</p> <ul> <li>Click 'Start Acquisition'</li> <li>Click 'Stop Acquisition' and wait for the progress bar to complete <ul> <li>(Not waiting for the progress bar to complete also causes a freeze)</li> </ul> </li> <li>Close the window</li> </ul> <p>To cause the program to freeze:</p> <ul> <li>Click 'Start Acquisition'</li> <li>Close the window</li> </ul> <h2>Screenshots of the GUI and terminal outputs</h2> <p><a href="https://i.sstatic.net/kMuit.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kMuit.png" alt="The base GUI" /></a></p> <p><sup><i> The base GUI </i></sup></p> <p><a href="https://i.sstatic.net/RiGr5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RiGr5.png" alt="Safe exit when 'Stop Acquisition' is clicked before closing the window" /></a></p> <p><sup><i> Safe exit when 'Stop Acquisition' is clicked before closing the window </i></sup></p> <p><a href="https://i.sstatic.net/nuNSS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nuNSS.png" alt="Unsafe exit when closing the window during data acquisition" /></a></p> <p><sup><i> Unsafe exit when closing the window during data acquisition </i></sup></p> <p>I have tried several different methods for fixing this issue. Many google searches have suggested that the .join() method of Thread may be the solution but I have not been successful with it as of yet. 
I have looked into queuing or multiprocessing as alternatives, but that is a more drastic change compared to what I expect to be a fairly simple solution.</p> <p>I know I could alternatively try killing the program after calling self.destroy(), but I would prefer not to do that in case I lose data from an unsafe shutdown.</p> <p>Any help or advice on this problem would be greatly appreciated!</p>
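A minimal sketch of the shutdown ordering, stripped of the GUI: signal the worker, `join()` it, and only then destroy the window. Applied to the question's code, `on_close` would call `stop_continuous()` and then `self.data_thread.join(timeout=...)` before `self.destroy()`, and `_take_acquisition` should also poll the stop event so the join cannot hang for a full acquisition.

```python
import threading
import time

stop_event = threading.Event()

def worker():
    # Stands in for the acquisition loop; checks the event every step.
    while not stop_event.is_set():
        time.sleep(0.01)

data_thread = threading.Thread(target=worker)
data_thread.start()

# on_close equivalent:
stop_event.set()              # ask the loop to exit
data_thread.join(timeout=5)   # wait for it BEFORE destroying any widget
print(data_thread.is_alive())
```

A second likely contributor to the freeze is `self.parent_app.progressbar.set(...)` being called from the worker: Tk widgets are not thread-safe, so updates should be marshalled to the main thread (e.g. via `self.after(...)` or a queue) rather than touched directly from `DataThread`.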
<python><multithreading><user-interface><customtkinter>
2024-02-09 18:09:23
1
374
AlexP
77,970,004
18,476,381
Pytest Unit Testing APIs: best practice and dependencies
<p>I'm using pytest to test various APIs I have created and was curious about best practices, as my tests are getting more complicated in the sense that some APIs depend on the creation of others. Example below.</p> <p>Let's say I have a service that creates purchase-orders; each purchase-order can have multiple line-items.</p> <p>My <code>POST /api/purchase-orders/line-item</code> takes a body like below.</p> <pre><code>{
    &quot;service_order_id&quot;: 0,
    &quot;part_id&quot;: 0,
    &quot;vendor_id&quot;: 0,
    &quot;component&quot;: {
        &quot;component_serial_number&quot;: &quot;Test&quot;,
        &quot;component_name&quot;: &quot;Test&quot;,
    },
    &quot;part_description&quot;: &quot;Test&quot;,
    &quot;total_qty&quot;: 1,
}
</code></pre> <p>Before I can create a line-item, you can see that I need a part_id, a service_order_id, and a vendor_id. These can be created and fetched using the respective POST/GET methods for those domains.</p> <p>My question is: when testing <code>POST /api/purchase-orders/line-item</code>, what would be best practice? Should I, within a function called <code>test_line_item_creation</code>, first create the vendor, part, and service-order objects, send the appropriate POST requests for each, fetch the IDs back, and then plug them into my line-item payload? Or should I have separate test files and functions for those domains which create their own objects using their respective APIs and inject the responses from those domains into this function?</p> <p>What would be considered best practice? Is what I am doing even considered unit testing, and is pytest the best tool for this type of testing?</p>
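One common pattern for this is a set of "arrange" helpers (in real pytest code, fixtures) that create each prerequisite via its own API and return the ID, so the line-item test only composes them. The sketch below uses plain functions and a fake client standing in for a real HTTP test client; the endpoints and names are illustrative assumptions:

```python
class FakeClient:
    """Stand-in for a real HTTP test client; returns incrementing IDs."""
    def __init__(self):
        self._next_id = 0

    def post(self, path, json):
        self._next_id += 1
        return {"id": self._next_id, **json}

# In real code each helper below would be a @pytest.fixture, so that setup
# is declared once, reused across tests, and torn down automatically.
def create_vendor(client):
    return client.post("/api/vendors", json={"name": "Acme"})["id"]

def create_part(client):
    return client.post("/api/parts", json={"sku": "P-1"})["id"]

def create_service_order(client):
    return client.post("/api/service-orders", json={})["id"]

def make_line_item_payload(client):
    # Arrange: create dependencies via their own APIs, then build the body.
    return {
        "service_order_id": create_service_order(client),
        "part_id": create_part(client),
        "vendor_id": create_vendor(client),
        "component": {"component_serial_number": "Test",
                      "component_name": "Test"},
        "part_description": "Test",
        "total_qty": 1,
    }
```

As fixtures, the line-item test would simply take `vendor_id`, `part_id`, and `service_order_id` as arguments. Strictly speaking, a test that exercises several live endpoints is an integration test rather than a unit test, but pytest handles both styles fine.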
<python><unit-testing><pytest>
2024-02-09 17:57:40
0
609
Masterstack8080
77,969,965
13,088,678
concat a value and Null columns
<p>I have a dataframe having 3 columns.</p> <ul> <li>column1 - int</li> <li>column2 - array of struct</li> <li>column3 - array of struct</li> </ul> <p>Im trying to concatenate column2 and column3. However when any one of the column contains null/none, then whole concatenated column becomes null/none, even when one of the other column has value.</p> <p>In below case, second record ie. id=2 returns null when we concatenate <code>country</code> and <code>reference</code></p> <pre><code>+---+--------------------+--------------------+ | id| country | reference | +---+--------------------+--------------------+ | 1|[{US, 2024-01-08}] | [{UK, 2024-01-08}] | | 2|[{US, 2024-01-08}] | NULL / None | +---+--------------------+--------------------+ </code></pre> <p>So tried to use <code>coalesce</code> as below. But getting an error : AnalysisException: [UNRESOLVED_COLUMN.WITH_SUGGESTION] A column or function parameter with name `` cannot be resolved.</p> <pre><code>from pyspark.sql.functions import concat, coalesce, lit result_df = joined_df.select(&quot;id&quot;, concat(&quot;country&quot;, coalesce(col(&quot;reference&quot;),&quot;&quot;)).alias(&quot;segment&quot;)) OR result_df = joined_df.select(&quot;id&quot;, concat(&quot;country&quot;, coalesce(&quot;reference&quot;,&quot;&quot;)).alias(&quot;segment&quot;)) </code></pre> <p>Used <code>lit</code> along with <code>coalesce</code> as below, getting error AnalysisException:[DATATYPE_MISMATCH.DATA_DIFF_TYPES] Cannot resolve &quot;coalesce(reference, )&quot; due to data type mismatch: Input to <code>coalesce</code> should all be the same type, but it's (&quot;ARRAY&lt;STRUCT&lt;key: STRING, timestamp: TIMESTAMP&gt;&gt;&quot; or &quot;STRING&quot;).;</p> <pre><code>from pyspark.sql.functions import concat, coalesce, lit result_df = joined_df.select(&quot;id&quot;, concat(&quot;country&quot;,coalesce(&quot;reference&quot;,lit(&quot;&quot;))).alias(&quot;segment&quot;)) </code></pre> <p><strong>Desired Result:</strong></p> 
<pre><code>+---+--------------------------------------+
| id| concatenated_column                  |
+---+--------------------------------------+
|  1| [{US, 2024-01-08}, {UK, 2024-01-08}] |
|  2| [{US, 2024-01-08}]                   |
+---+--------------------------------------+
</code></pre>
<python><apache-spark><pyspark>
2024-02-09 17:51:08
1
407
Matthew
77,969,964
23,260,297
Deprecation Warning with groupby.apply
<p>I have a Python script that reads in data from a CSV file.</p> <p>The code runs fine, but every time it runs I get this deprecation message:</p> <pre><code>DeprecationWarning: DataFrameGroupBy.apply operated on the grouping columns.
This behavior is deprecated, and in a future version of pandas the grouping
columns will be excluded from the operation. Either pass `include_groups=False`
to exclude the groupings or explicitly select the grouping columns after
groupby to silence this warning.
</code></pre> <p>The warning stems from this piece of code:</p> <pre><code>fprice = df.groupby(['StartDate', 'Commodity', 'DealType']).apply(
    lambda group: -(group['MTMValue'].sum()
                    - (group['FixedPriceStrike'] * group['Quantity']).sum())
    / group['Quantity'].sum()
).reset_index(name='FloatPrice')
</code></pre> <p>To my understanding, I am performing the apply function on my groupings, but then I am disregarding the groupings and not using them any more as part of my dataframe. I am confused about the directions to silence the warning.</p> <p>Here is some sample data that this code uses:</p> <pre><code>TradeID  TradeDate   Commodity    StartDate   ExpiryDate  FixedPrice  Quantity  MTMValue
-------  ----------  -----------  ----------  ----------  ----------  --------  --------
aaa      01/01/2024  (com1,com2)  01/01/2024  01/01/2024  10          10        100.00
bbb      01/01/2024  (com1,com2)  01/01/2024  01/01/2024  10          10        100.00
ccc      01/01/2024  (com1,com2)  01/01/2024  01/01/2024  10          10        100.00
</code></pre> <p>and here is the expected output from this data:</p> <pre><code>TradeID  TradeDate   Commodity    StartDate   ExpiryDate  FixedPrice  Quantity  MTMValue  FloatPrice
-------  ----------  -----------  ----------  ----------  ----------  --------  --------  ----------
aaa      01/01/2024  (com1,com2)  01/01/2024  01/01/2024  10          10        100.00    0
bbb      01/01/2024  (com1,com2)  01/01/2024  01/01/2024  10          10        100.00    0
ccc      01/01/2024  (com1,com2)  01/01/2024  01/01/2024  10          10        100.00    0
</code></pre>
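One way to make the warning go away entirely is to avoid `groupby.apply` and compute the pieces with vectorized aggregations instead, so no grouping columns are ever operated on. A sketch with made-up sample data matching the shape above (the `DealType` values and the `FixedPriceStrike` column name follow the asker's code, not the sample table):

```python
import pandas as pd

# Assumed sample data; DealType is invented since the sample table lacks it.
df = pd.DataFrame({
    "StartDate": ["01/01/2024"] * 3,
    "Commodity": ["(com1,com2)"] * 3,
    "DealType": ["swap"] * 3,
    "MTMValue": [100.0, 100.0, 100.0],
    "FixedPriceStrike": [10.0, 10.0, 10.0],
    "Quantity": [10.0, 10.0, 10.0],
})

keys = ["StartDate", "Commodity", "DealType"]
# Precompute strike * quantity per row, then sum the parts per group.
tmp = df.assign(strike_notional=df["FixedPriceStrike"] * df["Quantity"])
g = tmp.groupby(keys).agg(
    mtm=("MTMValue", "sum"),
    strike=("strike_notional", "sum"),
    qty=("Quantity", "sum"),
)
fprice = (-(g["mtm"] - g["strike"]) / g["qty"]).reset_index(name="FloatPrice")
```

Alternatively, on pandas 2.2+ the original `apply` call accepts `include_groups=False`, which does exactly what the warning suggests; the lambda never touched the grouping columns, so the result is unchanged.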
<python><pandas><dataframe>
2024-02-09 17:50:46
7
2,185
iBeMeltin
77,969,927
731,351
polars: select values from group_by list with a value from another column
<p>Starting with df:</p> <pre><code>df = pl.DataFrame( { &quot;seq&quot;: &quot;foo bar bar duk duk baz baz baz zed&quot;.split(), &quot;seq_grp&quot;: &quot;aa bb bb dd dd cc cc cc zz&quot;.split(), &quot;match&quot;: &quot;aa cc bb dd dd ff cc cc yy&quot;.split(), &quot;score&quot;: [10, 8, 20, 8, 7, 5, 6, 4, 6], } ) </code></pre> <p>I got:</p> <pre><code>FRAME_1 β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ seq ┆ seq_grp ┆ match ┆ score β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════════β•ͺ═══════β•ͺ═══════║ β”‚ foo ┆ aa ┆ aa ┆ 10 β”‚ β”‚ bar ┆ bb ┆ cc ┆ 8 β”‚ β”‚ bar ┆ bb ┆ bb ┆ 20 β”‚ β”‚ duk ┆ dd ┆ dd ┆ 8 β”‚ β”‚ duk ┆ dd ┆ dd ┆ 7 β”‚ β”‚ baz ┆ cc ┆ ff ┆ 5 β”‚ β”‚ baz ┆ cc ┆ cc ┆ 6 β”‚ β”‚ baz ┆ cc ┆ cc ┆ 4 β”‚ β”‚ zed ┆ zz ┆ yy ┆ 6 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>Since I intend to select specific values per group, for each <code>seq</code>, I did the group_by:</p> <pre><code>grouped_df = df.group_by(&quot;seq&quot;, maintain_order=True).agg(df.columns[1:]).with_columns(col(&quot;seq_grp&quot;).list.get(0) ) </code></pre> <p>getting:</p> <pre><code>FRAME_2 β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ seq ┆ seq_grp ┆ match ┆ score β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ list[str] ┆ list[i64] β”‚ β•žβ•β•β•β•β•β•ͺ═════════β•ͺ════════════════════β•ͺ═══════════║ β”‚ foo ┆ aa ┆ [&quot;aa&quot;] ┆ [10] β”‚ β”‚ bar ┆ bb ┆ [&quot;cc&quot;, &quot;bb&quot;] ┆ [8, 20] β”‚ β”‚ duk ┆ dd ┆ [&quot;dd&quot;, &quot;dd&quot;] ┆ [8, 7] β”‚ β”‚ baz ┆ cc ┆ [&quot;ff&quot;, &quot;cc&quot;, &quot;cc&quot;] ┆ [5, 6, 4] β”‚ β”‚ zed ┆ zz ┆ [&quot;yy&quot;] ┆ [6] β”‚ 
β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
</code></pre> <p>I am trying to fish out the cases where <code>seq_grp</code> matches an element of the <code>match</code> list and check whether it has the max value in the <code>score</code> list, getting:</p> <pre><code>FRAME_3
β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”
β”‚ seq ┆ seq_grp ┆ match ┆ score β”‚
β”‚ --- ┆ ---     ┆ ---   ┆ ---   β”‚
β”‚ str ┆ str     ┆ str   ┆ i64   β”‚
β•žβ•β•β•β•β•β•ͺ═════════β•ͺ═══════β•ͺ═══════║
β”‚ foo ┆ aa      ┆ aa    ┆ 10    β”‚
β”‚ bar ┆ bb      ┆ bb    ┆ 20    β”‚
β”‚ duk ┆ dd      ┆ dd    ┆ 8     β”‚
β”‚ baz ┆ cc      ┆ cc    ┆ 6     β”‚
β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
</code></pre> <p>While in this minimal example I could get almost there using:</p> <pre><code>df.filter(col(&quot;seq_grp&quot;) == col(&quot;match&quot;))
  .group_by(&quot;seq&quot;)
  .agg(pl.all())
  .with_columns(col(&quot;score&quot;).list.max())
  .with_columns(col([&quot;seq_grp&quot;, &quot;match&quot;]).list.get(0))
</code></pre> <p>I would like to be able to get the list index of the row (looking at <code>FRAME_2</code>):</p> <pre><code>baz ┆ cc ┆ [&quot;ff&quot;, &quot;cc&quot;, &quot;cc&quot;] ┆ [5, 6, 4]
</code></pre> <p>of a top-scoring non-cc match and its values: β”‚ <strong>baz</strong> ┆ <strong>cc</strong> ┆ [<strong>&quot;ff&quot;</strong>, &quot;cc&quot;, &quot;cc&quot;] ┆ [<strong>5</strong>, 6, 4]</p>
<python><python-polars>
2024-02-09 17:40:12
2
529
darked89
77,969,902
824,167
How to override task decorator in Airflow for testing?
<p>I am using a <code>@task.external_python</code> decorator that points to a custom <code>venv</code> with some additional dependencies.</p> <p>I want to be able to write integration tests where I mock certain APIs with <code>requests-mock</code> and have the entire DAG run end to end. That means my pytest run and the DAG should run in the same process.</p> <p>I figured that I can make a custom task decorator that switches between <code>external_python</code> and <code>python</code> depending on the environment. When I run the code in pytest it will use <code>python</code>, which will technically run the code in the same <code>pytest</code> process with configured mocks. My idea is obviously monkey patching that leaks into production code, which I am not a fan of.</p> <p>I have a few questions:</p> <ul> <li>is there a way to redefine the task operator on the fly in tests?</li> <li>if not, how do I write a custom task decorator without building a provider?</li> <li>Or maybe there is a better way to provide mocks for APIs in tasks?</li> </ul> <p>I do have a lot of tests for individual tasks; however, I still want to test DAGs on certain subsets of data.</p>
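One way to keep the switch out of the task bodies is to pick the decorator once, at import time, from an environment variable. The sketch below uses stand-in decorators to show the pattern; in real code the factory would return `task.external_python(python=VENV_PATH)` or `task.python` from `airflow.decorators`, and the environment-variable name is an assumption:

```python
import os

# Stand-ins for airflow's task.external_python / task.python decorators,
# so the selection pattern itself can be shown (and tested) without Airflow.
def fake_external_python(fn):
    fn.runs_in = "external_python"
    return fn

def fake_python(fn):
    fn.runs_in = "python"
    return fn

def task_decorator():
    """Pick the operator flavour once, from the environment."""
    if os.environ.get("UNIT_TESTING") == "1":
        return fake_python        # in-process: mocks apply
    return fake_external_python   # production: isolated venv

@task_decorator()
def extract():
    return "data"
```

The conftest (or CI config) sets `UNIT_TESTING=1` before the DAG module is imported, so no production code is patched at runtime; the decision is plain configuration rather than monkey patching.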
<python><airflow>
2024-02-09 17:34:52
1
2,300
Vlad Miller
77,969,878
11,330,134
Insert outer dict value to inner dict list
<p>I have a nested <code>dict</code> of lists and dicts I want to ultimately iterate through and output to a dataframe.</p> <p>This sample dictionary has five items so I'm expecting a dataframe of five rows; however, it is doubling up and producing ten rows.</p> <p>There are nested lists and dicts. I want to keep the outer values and include them in the inner-most list.</p> <p>I have browsed too many StackOverflow posts to include, but I'm making a mistake somewhere in the iterations.</p> <p>Any help to the specific question or how to better write this overall is appreciated.</p> <pre><code>import pandas as pd sample_dic = {'game_id': 'beeec03419b1aa196028a89f177f4324', 'sportsbooks': [{'bookie_key': 'fanduel', 'market': {'market_key': 'player_points_over_under', 'outcomes': [{'timestamp': '2024-02-09T15:40:01', 'handicap': 20.5, 'odds': -122, 'participant': 16142, 'participant_name': 'Dejounte Murray', 'name': 'Dejounte Murray Over', 'description': 'Dejounte Murray - Points'}, {'timestamp': '2024-02-09T15:57:49', 'handicap': 21.5, 'odds': -111, 'participant': 16142, 'participant_name': 'Dejounte Murray', 'name': 'Dejounte Murray Over', 'description': 'Dejounte Murray - Points'}, {'timestamp': '2024-02-09T15:57:49', 'handicap': 15.5, 'odds': -125, 'participant': 17163, 'participant_name': 'Jalen Johnson', 'name': 'Jalen Johnson Over', 'description': 'Jalen Johnson - Points'}]}}, {'bookie_key': 'draftkings', 'market': {'market_key': 'player_points_over_under', 'outcomes': [{'timestamp': '2024-02-09T13:09:03', 'handicap': 18.5, 'odds': -200, 'participant': None, 'participant_name': None, 'name': 'Over - Dejounte Murray', 'description': ' Alt Points O/U'}, {'timestamp': '2024-02-09T15:21:11', 'handicap': 15.5, 'odds': -225, 'participant': None, 'participant_name': None, 'name': 'Over - Kelly Oubre Jr.', 'description': ' Alt Points O/U'}]}}]} outcomes_list = [] v_game_id = sample_dic['game_id'] v_prop = sample_dic['sportsbooks'][0]['market']['market_key'] for i in 
range(0, len(sample_dic['sportsbooks'])): book = sample_dic['sportsbooks'][i]['bookie_key'] for s in sample_dic['sportsbooks']: for k, v in s.items(): if k == 'market': for k2, v2 in v.items(): if k2 == 'outcomes': for o in v2: # outcomes_list.append(o) # This works for the inner 'outcomes' list only # Adding game_id, sportsbook, and prop k/v pairs back in new_dic = {'game_id': v_game_id, 'sportsbook': book, 'prop': v_prop} new_dic.update(o) outcomes_list.append(new_dic) # Then to df # This has doubled-up rows outcomes_df = pd.DataFrame(outcomes_list) outcomes_df </code></pre> <p><a href="https://i.sstatic.net/N3eRi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N3eRi.png" alt="enter image description here" /></a></p> <p>To show how the dataframe should look like, it can be manually produced like so:</p> <pre><code>wanted_outcomes_list = [['beeec03419b1aa196028a89f177f4324', 'fanduel', 'player_points_over_under', '2024-02-09T15:40:01', 20.5, -122, 16142, 'Dejounte Murray', 'Dejounte Murray Over', 'Dejounte Murray - Points'], ['beeec03419b1aa196028a89f177f4324', 'fanduel', 'player_points_over_under', '2024-02-09T15:57:49', 21.5, -111, 16142, 'Dejounte Murray', 'Dejounte Murray Over', 'Dejounte Murray - Points'], ['beeec03419b1aa196028a89f177f4324', 'fanduel', 'player_points_over_under', '2024-02-09T15:57:49', 15.5, -125, 17163, 'Jalen Johnson', 'Jalen Johnson Over', 'Jalen Johnson - Points'], ['beeec03419b1aa196028a89f177f4324', 'draftkings', 'player_points_over_under', '2024-02-09T13:09:03', 18.5, -200, None, None, 'Over - Dejounte Murray', ' Alt Points O/U'], ['beeec03419b1aa196028a89f177f4324', 'draftkings', 'player_points_over_under', '2024-02-09T15:21:11', 15.5, -225, None, None, 'Over - Kelly Oubre Jr.', ' Alt Points O/U']] wanted_outcomes_df = pd.DataFrame(wanted_outcomes_list, columns=['game_id', 'sportsbook', 'prop', 'timestamp', 'handicap', 'odds', 'participant', 'participant_name', 'name', 'description']) wanted_outcomes_df </code></pre> 
<p><a href="https://i.sstatic.net/lpJQt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lpJQt.png" alt="enter image description here" /></a></p>
<python><pandas><dictionary>
2024-02-09 17:30:28
1
489
md2614
77,969,833
17,556,733
How do I create an aws rds aurora database with some tables with python aws cdk?
<p>I want to create an AWS stack with the python aws-cdk library.</p> <p>The requirements are as such: There must exist an aurora3 mysql database and it needs to have 2 tables (as a reduced example, lets say <code>table1</code> needs to have 2 fields: <code>id</code> of integer type and <code>name</code> of string type, and <code>table2</code> needs to have <code>table1_id</code> of integer which is a foreign key to table1, and a <code>data</code> field of string type</p> <p>I have found some tutorials online on how to do this through the console, but I need to do it from within code, and not manually through the console, and the documentation seems lacking for this</p>
<python><amazon-web-services><aws-cdk><amazon-aurora>
2024-02-09 17:20:50
1
495
TheMemeMachine
77,969,738
2,812,625
Vertica_python and Sqlalchemy Insert
<p>I am using SQLAlchemy and vertica_python to pull tables down into Python and would like to write tables to my own schema afterwards, but I am getting an error on <code>df.to_sql()</code>. I can read tables with the connection string.</p> <pre><code>engine = sqlalchemy.create_engine('vertica+vertica_python://username:password@localhost:3306/db_name')
connection = engine.raw_connection()
df.to_sql(&quot;df&quot;, if_exists=&quot;replace&quot;, con=connection, schema=&quot;schema_name&quot;)
</code></pre> <blockquote> <p>--&gt; 739 raise ValueError(f'Invalid SQL: {operation}' 740 &quot;\nHINT: When argument 'parameters' is a tuple/list, &quot; 741 'variables in SQL should be specified with positional format (%s) placeholders. ' 742 'Question mark (?) placeholders have to be used with use_prepared_statements=True setting.') 743 tlist = []</p> <p>ValueError: Invalid SQL: SELECT name FROM sqlite_master WHERE type IN ('table', 'view') AND name=?; HINT: When argument 'parameters' is a tuple/list, variables in SQL should be specified with positional format (%s) placeholders. Question mark (?) placeholders have to be used with use_prepared_statements=True setting.</p> <p>The above exception was the direct cause of the following exception:</p> </blockquote>
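The `sqlite_master` query in the traceback is the giveaway: when `to_sql` receives a raw DBAPI connection (from `engine.raw_connection()`) instead of a SQLAlchemy connectable, pandas falls back to its SQLite code path, whose `?` placeholders vertica_python rejects. Passing the `Engine` itself (or `engine.connect()`) lets pandas use the right dialect. Demonstrated here with an in-memory SQLite engine, since the principle is the same; for Vertica the identical call with the `vertica+vertica_python://` engine and `schema="schema_name"` should work:

```python
import pandas as pd
import sqlalchemy

df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})

# Wrong: engine.raw_connection() hands pandas a bare DBAPI connection,
# and pandas assumes SQLite ("SELECT name FROM sqlite_master ... ?").
# Right: pass the Engine so pandas can use the correct dialect.
engine = sqlalchemy.create_engine("sqlite://")  # stand-in for the Vertica URL
df.to_sql("df", con=engine, if_exists="replace", index=False)

round_trip = pd.read_sql("SELECT * FROM df", con=engine)
```

`raw_connection()` remains useful for driver-specific operations (e.g. Vertica's COPY), just not for `to_sql`.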
<python><pandas><sqlalchemy><vertica-python>
2024-02-09 17:03:37
0
446
Tinkinc
77,969,707
616,460
Resolve relative or absolute path relative to another path
<p>Given a path that may be absolute or relative, I'd like to resolve its absolute path relative to another given path (if it's relative), without changing the current directory.</p> <p>For example, assuming the current working directory is <code>/cwd</code>:</p> <pre><code>def resolve_path (relative_to, path_to_resolve): ??? resolve_path(&quot;.&quot;, &quot;a/relative/path&quot;) # --&gt; returns /cwd/a/relative/path resolve_path(&quot;example&quot;, &quot;a/relative/path&quot;) # --&gt; returns /cwd/example/a/relative/path resolve_path(&quot;/some/path&quot;, &quot;a/relative/path&quot;) # --&gt; returns /some/path/a/relative/path resolve_path(&quot;/some/path&quot;, &quot;..&quot;) # --&gt; returns /some resolve_path(&quot;/some/path&quot;, &quot;/an/absolute/path&quot;) # --&gt; returns /an/absolute/path resolve_path(&quot;/some/path&quot;, &quot;/.&quot;) # --&gt; returns / </code></pre> <p>Is there something in <code>os.path</code> (or somewhere else) that does this (Python 3.8 minimum)?</p>
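`os.path.join` already discards its left operand when the right operand is absolute, and `os.path.abspath` both resolves against the current working directory and normalizes `..` and `.`, so the whole function is one line (works on Python 3.8):

```python
import os.path

def resolve_path(relative_to, path_to_resolve):
    # join() ignores `relative_to` when `path_to_resolve` is absolute;
    # abspath() anchors the result at the cwd and collapses ".." / ".".
    return os.path.abspath(os.path.join(relative_to, path_to_resolve))
```

The pathlib equivalent `(Path(relative_to) / path_to_resolve).resolve()` behaves the same way for the joining, but `resolve()` additionally follows symlinks, which may or may not be wanted.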
<python><python-3.x><path>
2024-02-09 16:58:00
1
40,602
Jason C
77,969,675
12,684,429
How to identify anomalous data automatically
<p>I'm looking to remove anomalous data from a dataset in python and as per comments below, I believe the best way to do this is to have a rolling average - and then remove data if it falls 2-3 std outside of that rolling average.</p> <p>So far I have the below but it doesn't seem to be doing what I want it to as the errors are still in the dataset</p> <pre><code>def remove_outliers_rolling_dataframe(dataframe, window_size=10, num_std_dev=1): rolling_avg = dataframe.rolling(window=window_size, min_periods=1).mean() std_dev = dataframe.rolling(window=window_size, min_periods=1).std() lower_bound = rolling_avg - (num_std_dev * std_dev) upper_bound = rolling_avg + (num_std_dev * std_dev) mask = (dataframe &gt;= lower_bound) &amp; (dataframe &lt;= upper_bound) cleaned_dataframe = dataframe[mask] return cleaned_dataframe </code></pre> <p>The above makes sense to me but it isn't picking out the data entry errors like I would hope.</p> <p>The original data looks like this as an example:</p> <p><a href="https://i.sstatic.net/8bOE0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8bOE0.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/7znLT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7znLT.png" alt="enter image description here" /></a></p>
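One likely reason the errors survive is that each point sits inside its own rolling window: a large spike inflates the window's mean and standard deviation enough to keep itself within the bounds. Comparing each point against statistics of the *preceding* window only (via `shift(1)`) avoids that. A sketch on a single numeric Series; the window size and threshold are assumptions to tune:

```python
import pandas as pd

def remove_outliers_rolling(s, window=10, num_std=3):
    # Statistics of the window *before* each point, so a spike cannot
    # inflate the std it is judged against.
    prior = s.shift(1)
    avg = prior.rolling(window, min_periods=3).mean()
    std = prior.rolling(window, min_periods=3).std()
    mask = (s - avg).abs() <= num_std * std
    # Keep the warm-up points at the start, where no bounds exist yet.
    return s[mask | std.isna()]
```

For a whole DataFrame this can be applied per column; for heavily trending data a robust variant (rolling median and MAD instead of mean and std) is usually less sensitive to the outliers it is trying to find.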
<python><data-cleaning>
2024-02-09 16:51:41
0
443
spcol
77,969,660
2,443,551
Django messages with javascript fetch() API and redirect
<p>I am doing a request with the <code>fetch()</code> API to a django view, which will set a message with the <code>messages</code> app and redirect the request. I can capture the redirect with <code>response.redirected</code> and the <code>messages</code>-cookie is set, but when i do the redirect with <code>location.replace</code>, the messages get lost.</p> <p>How do i pass on the messages from the original response?</p> <p>The django view:</p> <pre class="lang-py prettyprint-override"><code>def add(request): if request.method != 'POST': return HttpResponse(&quot;not post&quot;) pproject = request.POST['pproject'] print(&quot;pproject:&quot;, pproject) messages.add_message(request, messages.INFO, &quot;Test.&quot;) return redirect(&quot;project_detail&quot;) </code></pre> <p>my <code>urls.py</code>:</p> <pre class="lang-py prettyprint-override"><code> path('project/add/', views_project.add, name='project_add'), </code></pre> <p>and in my template (javascript):</p> <pre class="lang-js prettyprint-override"><code>var csrftoken = document.querySelector('[name=csrfmiddlewaretoken]').value; const headers = new Headers(); headers.append(&quot;X-CSRFToken&quot;, csrftoken); const form = new FormData(); form.append(&quot;pproject&quot;, &quot;{{ project.id }}&quot;); const ops = { method: &quot;POST&quot;, headers: headers, body: form, credentials: &quot;same-origin&quot; }; const req = new Request(&quot;{% url 'project_add' %}&quot;); fetch(req, ops) .then((response) =&gt; { if (response.redirected) { console.log(&quot;response redirect:&quot;, response); window.location.replace(response.url); return false; } else if (!response.ok) { throw new Error(`HTTP error! Status: ${response.status}`); } else { return response.json(); // Parse JSON response } }) </code></pre>
<javascript><python><django>
2024-02-09 16:49:33
1
451
dequid
77,969,654
8,713,442
Error while finding similarity between polars dataframe column and string variable
<p>I need to find a similarity between the column in the polar data frame and the input value.I am using jaro_winkler_metric . I am getting errors while doing it. We don't want to use UDF functions as it slows the process.</p> <pre><code>import polars as pl import jaro def test_polars(): name='savah' data = {&quot;first_name&quot;: ['sarah', 'purnima'], &quot;last_name&quot;: ['vats', 'malik']} df = pl.DataFrame(data) print(df) df = df.with_columns( [ (pl.when( jaro.jaro_winkler_metric(pl.col(&quot;first_name&quot;), name) &gt;= 0.8 ).then(1).otherwise(0)).alias(&quot;COMP80_FN&quot;), ] ) print(df) if __name__ == '__main__': test_polars() C:\PythonProject\pythonProject\venv\Graph_POC\Scripts\python.exe &quot;C:\PythonProject\pythonProject\polars data.py&quot; shape: (2, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ first_name ┆ last_name β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════║ β”‚ sarah ┆ vats β”‚ β”‚ purnima ┆ malik β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Traceback (most recent call last): File &quot;C:\PythonProject\pythonProject\polars data.py&quot;, line 22, in &lt;module&gt; test_polars() File &quot;C:\PythonProject\pythonProject\polars data.py&quot;, line 12, in test_polars (pl.when( jaro.jaro_winkler_metric(pl.col(&quot;first_name&quot;), name) &gt;= 0.8 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\PythonProject\pythonProject\venv\Graph_POC\Lib\site-packages\jaro\__init__.py&quot;, line 43, in jaro_winkler_metric return jaro.metric_jaro_winkler(string1, string2) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\PythonProject\pythonProject\venv\Graph_POC\Lib\site-packages\jaro\jaro.py&quot;, line 235, in metric_jaro_winkler ans = string_metrics(string1, string2, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\PythonProject\pythonProject\venv\Graph_POC\Lib\site-packages\jaro\jaro.py&quot;, line 
159, in string_metrics assert isinstance(s1, str) AssertionError Process finished with exit code 1 </code></pre>
<python><python-polars>
2024-02-09 16:48:57
1
464
pbh
77,969,516
10,533,225
TypeError: object Response can't be used in 'await' expression
<p>I am trying to execute this unit test. I would like to asynchronously await the GET as it takes a while to retrieve data. However, I am getting: <code>TypeError: object Response can't be used in 'await' expression</code>.</p> <pre><code>@pytest.mark.asyncio async def test_get_report(report_service, client, headers): &quot;&quot;&quot;Test Report GET Response&quot;&quot;&quot; report_id = &quot;sample-report-id&quot; report_service.query_db.query_fetchone = mock.AsyncMock( return_value={ &quot;id&quot;: &quot;sample-id&quot;, &quot;reportId&quot;: &quot;sample-report-id&quot;, &quot;reportName&quot;: &quot;sample-report-name&quot;, &quot;report&quot;: [], } ) response = await client.get(f&quot;/v2/report/{report_id }&quot;, headers=headers) assert response.status_code == 200 </code></pre> <p>If I remove the <code>await</code>, I will get this response <code>{'detail': 'Not Found'}</code>.</p>
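The `TypeError` means `client.get` is a plain synchronous call that already returns a `Response`; there is nothing to await. The options are to call it without `await`, to switch to a genuinely async client (e.g. `httpx.AsyncClient` with an ASGI transport), or, if the call truly blocks, to wrap it with `asyncio.to_thread` (Python 3.9+), which keeps the test body `async`. A minimal sketch of that last option, with a stand-in client since the real one is not shown:

```python
import asyncio

class SyncClient:
    """Stand-in for a synchronous test client (e.g. Starlette's TestClient)."""
    def get(self, url, headers=None):
        # Returns a plain object, NOT a coroutine: awaiting it directly
        # raises "object Response can't be used in 'await' expression".
        return {"status_code": 200, "url": url}

async def fetch(client, url):
    # to_thread() returns a real awaitable that runs the blocking call
    # off the event loop, so `await` is legal here.
    return await asyncio.to_thread(client.get, url)

response = asyncio.run(fetch(SyncClient(), "/v2/report/sample-report-id"))
```

The `{'detail': 'Not Found'}` seen without `await` is a separate issue (a routing/URL mismatch), worth checking against the app's registered routes.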
<python><pytest><python-asyncio>
2024-02-09 16:24:46
1
583
Tenserflu
77,969,478
4,933,902
How to convert XGBClassifier with dart booster to ONNX?
<p>I need to convert a XGBClassifier with booster='dart' into an onnx model. But the converting failed with an error on last line:</p> <pre><code>.venv\Lib\site-packages\onnxmltools\convert\xgboost\common.py&quot;, line 40, in get_xgb_params gbp = config[&quot;learner&quot;][&quot;gradient_booster&quot;][&quot;gbtree_model_param&quot;] KeyError: 'gbtree_model_param' </code></pre> <p>My demo code:</p> <pre><code>from onnxmltools.convert import convert_xgboost from onnxmltools.convert.common import data_types from sklearn.datasets import load_digits from sklearn.model_selection import train_test_split import xgboost as xgb digits = load_digits() X, y = digits.data, digits.target # Our train data shape is (x, 64) where x is total samples X_train, X_test, y_train, y_test = train_test_split(X, y) booster = xgb.XGBClassifier(booster='dart', n_estimators=50, max_depth=10, learning_rate=0.1, random_state=42) booster.fit(X_train, y_train) initial_type = [('float_input', data_types.FloatTensorType([1, 64]))] booster_onnx = convert_xgboost(booster, initial_types=initial_type) </code></pre> <p>How could I solve this?</p>
<python><scikit-learn><xgboost><onnx><xgbclassifier>
2024-02-09 16:18:13
1
910
JoGe
77,969,278
1,555,306
Pandas - Getting weird indexing and shape when reading multiple CSVs
<p>I am trying to read and merge some CSVs but I am getting an index added, which is messing up the final CSV. I have tried <code>index_col=None</code>, <code>index_col=False</code> and <code>index_col=0</code> (since my first column is epoch time) but nothing seems to be helping.</p> <p>CSV1:</p> <pre><code>1704067200000,0.14720000,0.15300000,0 1704153600000,0.15200000,0.15600000,0 </code></pre> <p>CSV2:</p> <pre><code>1704758400000,0.13780000,0.13790000,0 1704844800000,0.13240000,0.13970000,0 </code></pre> <p>I want this:</p> <pre><code>1704067200000,0.14720000,0.15300000,0 1704153600000,0.15200000,0.15600000,0 1704758400000,0.13780000,0.13790000,0 1704844800000,0.13240000,0.13970000,0 </code></pre> <p>But getting this instead:</p> <pre><code>,1704067200000,0.14720000,0.15300000,0,1704758400000,0.13780000,0.13790000 0,1704153600000.0,0.152,0.156,0,,, 1,,,,0,1704844800000.0,0.1324,0.1397 </code></pre> <p>The code I am using is this btw:</p> <pre><code>for folder in data_folders: data_lists = [pd.read_csv(csvfile, index_col=None) for csvfile in folder.glob('*.csv')] pd.concat(data_lists, ignore_index=True).to_csv(folder_1d / f&quot;{coin_folder.name}.csv&quot;) print(data_lists) </code></pre> <p>Printing dataframe gives:</p> <pre><code>[ 1704067200000 0.14720000 0.15300000 0 0 1704153600000 0.152 0.156 0, Unnamed: 0 1704067200000 0.14720000 0.15300000 0 1704758400000 \ 0 0 1.704154e+12 0.152 0.156 0 NaN 1 1 NaN NaN NaN 0 1.704845e+12 0.13780000 0.13790000 0 NaN NaN 1 0.1324 0.1397 , 1704758400000 0.13780000 0.13790000 0 0 1704844800000 0.1324 0.1397 0] </code></pre> <p>I even tried</p> <pre><code> all_files = glob.glob(os.path.join(folder_1d, &quot;*.csv&quot;)) df = pd.concat((pd.read_csv(f, index_col=False) for f in all_files), ignore_index=True) </code></pre> <p>But no solution still:</p> <pre><code> 1704067200000 0.14720000 0.15300000 0 1704758400000 0.13780000 \ 0 1.704154e+12 0.152 0.156 0 NaN NaN 1 NaN NaN NaN 0 1.704845e+12 0.1324 0.13790000 0 NaN 1 0.1397 
</code></pre> <p>Please help/advise. Please be kind, I am new to pandas. :)</p>
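The mangled merge has two independent causes: `read_csv`'s default `header=0` promotes the first data row of each file to column names (so each file gets different "columns" and `concat` aligns them side by side with NaN), and `to_csv` writes the index back out as an unnamed first column. Reading with `header=None` and writing with `index=False, header=False` round-trips cleanly; a sketch with in-memory files standing in for the CSVs on disk:

```python
import io
import pandas as pd

csv1 = "1704067200000,0.14720000,0.15300000,0\n1704153600000,0.15200000,0.15600000,0\n"
csv2 = "1704758400000,0.13780000,0.13790000,0\n1704844800000,0.13240000,0.13970000,0\n"

# header=None: the files have no header row, so don't promote the first
# data line to column names (the cause of the mismatched columns).
frames = [pd.read_csv(io.StringIO(text), header=None) for text in (csv1, csv2)]
merged = pd.concat(frames, ignore_index=True)

# index=False / header=False: don't write the RangeIndex or the 0..3
# column labels back out (the source of the stray leading column).
out = merged.to_csv(index=False, header=False)
```

To preserve the exact text `0.14720000` (trailing zeros and all) rather than parsed floats, read with `dtype=str` as well.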
<python><pandas><dataframe>
2024-02-09 15:44:29
1
773
masterpiece
77,969,174
20,122,390
How to use lru_cache in a Python coroutine?
<p>I need to implement lru_cache from Python's functools library in a coroutine, so I have this:</p> <pre><code>async def send_logs_simulation(self, simulation_id: int, logs: str): company_id = await self.get_company_id(simulation_id=simulation_id) data_room = { &quot;id&quot;: simulation_id, &quot;logs&quot;: logs, } data = MessageToRoom( event=&quot;update-logs&quot;, room=f&quot;{simulation_id}-{company_id}&quot;, namespace=&quot;/dhog/simulation/logs&quot;, data=data_room, ) await service_sockets.send_message_room(obj_in=data) return {&quot;logs_sent&quot;: True} @lru_cache(maxsize=8) async def get_company_id(self, simulation_id: int): simulation_in_db = await self.get_by_id(_id=simulation_id) if not simulation_in_db: raise ValueError(&quot;Simulation not found&quot;) company_id = simulation_in_db[&quot;company_id&quot;] return company_id </code></pre> <p>On the first call, the method runs fine, however, when I try to use the cache I get:</p> <pre><code>docker-ds-backend-service-1 | RuntimeError: cannot reuse already awaited coroutine </code></pre> <p>Why does this happen? Is there any way to implement it in a coroutine?</p>
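`lru_cache` memoizes the decorated function's return value, and calling an `async def` function returns a *coroutine object*, so the second cache hit hands back the same already-awaited coroutine. The cache has to store the awaited result instead. A minimal async-aware cache decorator (oldest-first eviction only; production code might prefer a library such as aiocache, and would guard concurrent first calls with a per-key lock):

```python
import asyncio
import functools

def async_lru_cache(maxsize=8):
    """Cache awaited *results*, not coroutine objects."""
    def decorator(fn):
        cache = {}
        order = []  # insertion order, for oldest-first eviction

        @functools.wraps(fn)
        async def wrapper(*args):
            if args in cache:
                return cache[args]
            result = await fn(*args)   # await here; cache the value
            cache[args] = result
            order.append(args)
            if len(order) > maxsize:
                cache.pop(order.pop(0), None)
            return result
        return wrapper
    return decorator
```

For a method like `get_company_id`, note that `self` becomes part of the cache key (the same caveat as `lru_cache` on methods), so the instance must be hashable and the cache keeps it alive.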
<python><functools>
2024-02-09 15:26:15
0
988
Diego L
77,969,124
2,813,152
soup find link with specific `th` element appearing in DOM before
<p>I am trying to find specific links in my HTML. I have this code:</p> <pre><code>website_link = infobox.find('a', class_='external', href=True)
</code></pre> <p>This finds multiple links, but I only want links that are in a <code>td</code> element that is preceded by a <code>th</code> element with the content <code>Website</code>, like this:</p> <pre><code>&lt;tr&gt;
  &lt;th&gt;
    Website
  &lt;/th&gt;
  &lt;td&gt;
    &lt;a rel=&quot;nofollow&quot; class=&quot;external text&quot; href=&quot;example_url&quot;&gt;my_url&lt;/a&gt;
  &lt;/td&gt;
&lt;/tr&gt;
</code></pre> <p>Is it somehow possible to limit the search to these links?</p>
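One way is to invert the search: locate the `<th>` whose text is `Website` first, then step to its sibling `<td>` with `find_next_sibling`. A self-contained sketch (assumes BeautifulSoup 4; `class_='external'` matches the multi-valued `class="external text"` attribute):

```python
from bs4 import BeautifulSoup

html = """
<table>
  <tr>
    <th> Website </th>
    <td><a rel="nofollow" class="external text" href="example_url">my_url</a></td>
  </tr>
  <tr>
    <th> Other </th>
    <td><a class="external text" href="other_url">other</a></td>
  </tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

links = []
for th in soup.find_all("th"):
    # strip=True trims the surrounding whitespace inside the <th>
    if th.get_text(strip=True) == "Website":
        td = th.find_next_sibling("td")
        if td is not None:
            a = td.find("a", class_="external", href=True)
            if a is not None:
                links.append(a["href"])
```

With SoupSieve available, a CSS selector like `soup.select('th:-soup-contains("Website") + td a.external')` may express the same idea in one line, though exact-text matching is easier with the loop above.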
<python><beautifulsoup>
2024-02-09 15:18:16
1
4,932
progNewbie
77,969,045
3,034,610
On-the-fly downsampling with prometheus
<p>Every second we want to collect a range of metrics from a bunch of servers and store them in Prometheus. We'll keep these high-resolution metrics for, say, 24 hours before discarding them. We're looking for a way to downsample the metrics to 5-second and 1-minute averages so we can store these for much longer. We're wondering how to go about implementing this. Currently, we're looking at two possibilities.</p> <ol> <li><p>We're planning on using the prometheus_client Python library to collect and export the metrics. Perhaps we could implement 1, 5 &amp; 60 second averages as a moving-window function, but then it seems we'd have to work out how to implement fixed-length FIFO buffers. This seems possible with collections.deque.</p> </li> <li><p>We have some endpoint that reads the last 5 or 60 seconds of the one-second data in Prometheus and averages it. This would then be called by a different scrape job which runs every 5 or 60 seconds.</p> </li> </ol> <p>Both of these options do the downsampling on the fly. Does anyone have alternative proposals or any practical advice on how to move forward with either of these options?</p> <p>Thanks,</p> <p>Andrew</p>
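For option 1, the fixed-length FIFO is exactly what `collections.deque(maxlen=...)` provides: appending to a full deque silently drops the oldest element, so the averages are just `sum(d) / len(d)` over the window. A sketch, fed once per second with the raw value:

```python
from collections import deque

class Downsampler:
    """Keep the last N one-second samples; expose 5 s and 60 s averages."""

    def __init__(self):
        # A full deque discards its oldest entry on append: a FIFO window.
        self.win_5s = deque(maxlen=5)
        self.win_60s = deque(maxlen=60)

    def observe(self, value):
        # Called once per second with the raw metric value.
        self.win_5s.append(value)
        self.win_60s.append(value)

    def avg_5s(self):
        return sum(self.win_5s) / len(self.win_5s)

    def avg_60s(self):
        return sum(self.win_60s) / len(self.win_60s)
```

With prometheus_client these would typically be exported as gauges whose values are read at scrape time (e.g. via `Gauge.set_function(d.avg_5s)`), so the 5 s and 60 s series can be scraped on their own slower intervals.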
<python><prometheus>
2024-02-09 15:04:49
1
351
Andrew Holway
77,968,928
7,620,499
Gantt charts in plotly
<p>Hi, I have created a Gantt chart with simple code that I found online:</p> <pre><code># gantt is just my pandas dataframe import plotly.figure_factory as ff fig = ff.create_gantt(gantt, colors = ['rgb(170, 14, 200)'], width = 742, height = 580) fig.show() </code></pre> <p>For some reason when I run this the output seems fine except the x-axis dates are missing a lot of months and the ending is cut off so it looks like this:</p> <pre><code>Apr 2024 Jul 2024 Oct 2024 Jan 2025 Apr 2025 Jul 2025 Oct </code></pre> <p>Can anyone help with this?</p>
<python><python-3.x><plotly><gantt-chart><highcharts-gantt>
2024-02-09 14:45:07
0
1,029
bernando_vialli
77,968,655
5,070,569
How to use Lagrange interpolation polynomial for functions of multiple variables?
<p>In the case of interpolating a function of a single variable, things are relatively simple:</p> <pre class="lang-py prettyprint-override"><code>def create_basic_polynomial(x_values, i): def basic_polynomial(x): divider = 1 result = 1 for j in range(len(x_values)): if j != i: result *= (x-x_values[j]) divider *= (x_values[i]-x_values[j]) return result/divider return basic_polynomial def create_lagrange_polynomial(x_values, y_values): basic_polynomials = [] for i in range(len(x_values)): basic_polynomials.append(create_basic_polynomial(x_values, i)) def lagrange_polynomial(x): result = 0 for i in range(len(y_values)): result += y_values[i]*basic_polynomials[i](x) return result return lagrange_polynomial x_values = [0, 2, 3, 5] y_values = [0, 1, 3, 2] lag_pol = create_lagrange_polynomial(x_values, y_values) for x in x_values: print(&quot;x = {:.4f}\t y = {:4f}&quot;.format(x,lag_pol(x))) </code></pre> <pre class="lang-bash prettyprint-override"><code>x = 0.0000 y = 0.000000 x = 2.0000 y = 1.000000 x = 3.0000 y = 3.000000 x = 5.0000 y = 2.000000 </code></pre> <p>But how can we implement the logic for working with functions of multiple variables?</p>
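For data given on a full rectangular grid, the single-variable construction above extends to two variables via a tensor product: build one set of 1-D basis polynomials per axis and multiply them. A sketch (the grid and the test function are illustrative):

```python
def create_basic_polynomial(x_values, i):
    # Same 1-D Lagrange basis polynomial as in the question
    def basic_polynomial(x):
        result, divider = 1, 1
        for j in range(len(x_values)):
            if j != i:
                result *= (x - x_values[j])
                divider *= (x_values[i] - x_values[j])
        return result / divider
    return basic_polynomial

def create_lagrange_2d(x_values, y_values, z_table):
    """z_table[i][j] must hold f(x_values[i], y_values[j]) on a full grid."""
    lx = [create_basic_polynomial(x_values, i) for i in range(len(x_values))]
    ly = [create_basic_polynomial(y_values, j) for j in range(len(y_values))]

    def lagrange_polynomial(x, y):
        # Tensor-product form: sum over all grid nodes of z_ij * l_i(x) * l_j(y)
        return sum(z_table[i][j] * lx[i](x) * ly[j](y)
                   for i in range(len(x_values))
                   for j in range(len(y_values)))
    return lagrange_polynomial

x_values = [0.0, 1.0, 2.0]
y_values = [0.0, 1.0]
z_table = [[x + y for y in y_values] for x in x_values]  # f(x, y) = x + y

p = create_lagrange_2d(x_values, y_values, z_table)
print(p(1.0, 1.0))  # → 2.0 (value at a grid node is reproduced exactly)
print(p(0.5, 0.5))  # f is linear, so the interpolant gives 1.0 here too
```

For scattered (non-grid) data this tensor-product trick does not apply, and methods like `scipy.interpolate.griddata` or radial basis functions are the usual tools.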
<python><machine-learning><math><interpolation>
2024-02-09 13:59:16
0
468
Artyom Ionash
77,968,612
6,212,718
Polars: smart way to avoid "window expression not allowed in aggregation"
<p>I have the following code which works.</p> <pre><code>import numpy as np import polars as pl data = { &quot;date&quot;: [&quot;2021-01-01&quot;, &quot;2021-01-02&quot;, &quot;2021-01-03&quot;, &quot;2021-01-04&quot;, &quot;2021-01-05&quot;, &quot;2021-01-06&quot;, &quot;2021-01-07&quot;, &quot;2021-01-08&quot;, &quot;2021-01-09&quot;, &quot;2021-01-10&quot;, &quot;2021-01-11&quot;, &quot;2021-01-12&quot;, &quot;2021-01-13&quot;, &quot;2021-01-14&quot;, &quot;2021-01-15&quot;, &quot;2021-01-16&quot;, &quot;2021-01-17&quot;, &quot;2021-01-18&quot;, &quot;2021-01-19&quot;, &quot;2021-01-20&quot;], &quot;close&quot;: np.random.randint(100, 110, 10).tolist() + np.random.randint(200, 210, 10).tolist(), &quot;company&quot;: [&quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;] } df = pl.DataFrame(data).with_columns(date = pl.col(&quot;date&quot;).cast(pl.Date)) # Calculate Returns R = pl.col(&quot;close&quot;).pct_change() # Calculate Gains and Losses G = pl.when(R &gt; 0).then(R).otherwise(0).alias(&quot;gain&quot;) L = pl.when(R &lt; 0).then(R).otherwise(0).alias(&quot;loss&quot;) # Calculate Moving Averages for Gains and Losses window = 3 MA_G = G.rolling_mean(window).alias(&quot;MA_gain&quot;) MA_L = L.rolling_mean(window).alias(&quot;MA_loss&quot;) # Calculate Relative Strength Index based on Moving Averages RSI = (100 - (100 / (1 + MA_G / MA_L))).alias(&quot;RSI&quot;) df = df.with_columns(R, G, L, MA_G, MA_L, RSI) df.head() </code></pre> <p>I like the ability to compose different steps using <code>polars</code>, because it keeps my code readable and easy to maintain (as opposed to method chaining). 
Note that ultimately calculations are more complex.</p> <p>However, now I want to calculate the above column but grouped by &quot;company&quot;. I tried adding <code>.over(&quot;company&quot;)</code> where relevant. However, this doesn't work.</p> <pre><code># Calculate Returns R = pl.col(&quot;close&quot;).pct_change().over(&quot;company&quot;) # Calculate Gains and Losses G = pl.when(R &gt; 0).then(R).otherwise(0).alias(&quot;gain&quot;) L = pl.when(R &lt; 0).then(R).otherwise(0).alias(&quot;loss&quot;) # Calculate Moving Averages for Gains and Losses window = 3 MA_G = G.rolling_mean(window).alias(&quot;MA_gain&quot;).over(&quot;company&quot;) MA_L = L.rolling_mean(window).alias(&quot;MA_loss&quot;).over(&quot;company&quot;) # Calculate Relative Strength Index based on Moving Averages RSI = (100 - (100 / (1 + MA_G / MA_L))).over(&quot;company&quot;).alias(&quot;RSI&quot;) df = df.with_columns(R, G, L, MA_G, MA_L, RSI) df.head() </code></pre> <p><strong>Questions</strong></p> <p>1.) What is the best way to fix this <code>&quot;window expression not allowed in aggregation&quot;</code> error while keeping the above code approach?</p> <p>2.) Related question: why is window expression not allowed in aggregation in the first place. What is the problem with this from a technical perspective? Can someone explain to me in laymans terms?</p> <p>Thanks!</p>
<python><python-polars>
2024-02-09 13:53:12
1
1,489
FredMaster
77,968,463
4,431,798
Synchronous OCR with Azure Cognitive Services without Additional API Calls
<p>I'm currently using Azure Cognitive Services for optical character recognition (OCR) on images. I've implemented the OCR functionality using both synchronous and asynchronous methods provided by the Azure Python SDK. However, I'm concerned about the cost and the need for periodic API calls when using the asynchronous method.</p> <p>The asynchronous method involves calling read to initiate the OCR operation and then polling get_read_result until the operation is complete. <strong>While this approach works, it requires periodic calls to get_read_result, which I've learned can add to the cost as each call incurs charges</strong>.</p> <p>On the other hand, the synchronous method using recognize_printed_text provides results without the need for additional API calls. However, <strong>I've noticed that the accuracy and quality of the synchronous OCR results are weaker compared to the asynchronous method</strong>.</p> <p>I'm seeking advice on whether there's a way to perform OCR synchronously with Azure Cognitive Services without making additional API calls for result retrieval. Additionally, I'm open to suggestions on improving the accuracy of the synchronous OCR results or alternative approaches to minimize costs while maintaining accuracy.</p> <p>Any insights or suggestions would be greatly appreciated. 
Thank you!</p> <p><strong>asynchronous method</strong></p> <pre><code>from azure.cognitiveservices.vision.computervision import ComputerVisionClient from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes from msrest.authentication import CognitiveServicesCredentials import time ''' References: Quickstart: https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/python-sdk SDK: https://docs.microsoft.com/en-us/python/api/overview/azure/cognitiveservices/computervision?view=azure-python ''' # Replace with your endpoint and key from the Azure portal endpoint = 'https://....cognitiveservices.azure.com/' key = '....' # Set credentials credentials = CognitiveServicesCredentials(key) # Create client client = ComputerVisionClient(endpoint, credentials) url = &quot;https://i.ebayimg.com/images/g/dcYAAOSw~5NlHSbh/s-l960.jpg&quot; raw = True custom_headers = None numberOfCharsInOperationId = 36 rawHttpResponse = client.read(url, language=&quot;en&quot;, raw=True) # Get ID from returned headers operationLocation = rawHttpResponse.headers[&quot;Operation-Location&quot;] idLocation = len(operationLocation) - numberOfCharsInOperationId operationId = operationLocation[idLocation:] # SDK call #this is ASYNC!!! need to wait!! #else returns empty! 
while True: result = client.get_read_result(operationId) print(f&quot;result: {result}&quot;) if result.status not in ['notStarted', 'running']: print(&quot;breaking&quot;) break time.sleep(1) # Get data: displays text captured and its bounding box (position in the image) if result.status == OperationStatusCodes.succeeded: for line in result.analyze_result.read_results[0].lines: print(line.text) #print(line.bounding_box) </code></pre> <p><strong>synchronous method</strong></p> <pre><code>from azure.cognitiveservices.vision.computervision import ComputerVisionClient from msrest.authentication import CognitiveServicesCredentials # Replace with your endpoint and key from the Azure portal endpoint = 'https://....cognitiveservices.azure.com/' key = ' ' # Set credentials credentials = CognitiveServicesCredentials(key) # Create client client = ComputerVisionClient(endpoint, credentials) url = &quot;https://i.ebayimg.com/images/g/dcYAAOSw~5NlHSbh/s-l960.jpg&quot; language = &quot;en&quot; # SDK call - Synchronous OCR result = client.recognize_printed_text(url, language=language) # Get data: displays text captured and its bounding box (position in the image) for region in result.regions: for line in region.lines: words = [word.text for word in line.words] print(' '.join(words)) # print(line.bounding_box) </code></pre>
<python><azure><asynchronous><ocr><azure-cognitive-services>
2024-02-09 13:27:45
0
441
SoajanII
77,968,445
10,353,865
pandas crosstab: include all levels
<p>From the pandas docs of <code>pd.crosstab</code>:</p> <p>&quot;Any input passed containing Categorical data will have <strong>all</strong> of its categories included in the cross-tabulation, even if the actual data does not contain any instances of a particular category.&quot;</p> <p>So what am I actually doing wrong in the code snippet below (the final output does not contain all levels)?</p> <pre><code>import pandas as pd from pandas.api.types import CategoricalDtype as CD ctype = CD(categories=[&quot;a&quot;,&quot;b&quot;,&quot;c&quot;]) d_cat = pd.DataFrame({&quot;x&quot;: [&quot;a&quot;,&quot;a&quot;,&quot;a&quot;], &quot;y&quot;: [&quot;b&quot;,&quot;a&quot;,&quot;b&quot;]}, dtype=ctype) d_cat[&quot;x&quot;].dtype # category d_cat[&quot;x&quot;].value_counts() # this properly includes all the levels - even unused pd.crosstab(index=d_cat[&quot;x&quot;], columns=d_cat[&quot;y&quot;]) #y a b #x #a 1 2 </code></pre>
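Whatever the cause in a given pandas version, a robust workaround is to reindex the result against the dtype's own category list, which guarantees every level appears on both axes (a no-op when crosstab already includes them):

```python
import pandas as pd
from pandas.api.types import CategoricalDtype as CD

ctype = CD(categories=["a", "b", "c"])
d_cat = pd.DataFrame({"x": ["a", "a", "a"], "y": ["b", "a", "b"]}, dtype=ctype)

ct = pd.crosstab(index=d_cat["x"], columns=d_cat["y"])
# Force every category onto both axes, filling absent combinations with 0
ct = ct.reindex(index=ctype.categories, columns=ctype.categories, fill_value=0)
print(ct)
```

The result is always a 3Γ—3 table with zero counts for the unused levels `b` and `c` on the index and `c` on the columns.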
<python><pandas><pivot-table>
2024-02-09 13:24:13
1
702
P.Jo
77,968,430
453,851
Caching files from hive layout S3 in Polars
<p>Does polars offer a way to cache files it downloads from a hive-layout S3 bucket?</p> <p>The use case is that we have a datalake containing several TB of data. For certain tasks we may need to pull a sizable chunk of that data, but there is some overlap between S3 objects used in several tasks. It wouldn't be practical to pull the whole S3 bucket locally, but there would be a significant speed up in pulling data if we held onto 10-20GB of data locally.</p> <p>Does polars have any way to provide alternative data sources to <a href="https://docs.pola.rs/py-polars/html/reference/api/polars.scan_parquet.html" rel="nofollow noreferrer">scan_parquet</a> (allowing us to implement a pull-through cache), or does it already have any such functionality?</p> <hr /> <p>Just to clarify the reason for this question; <a href="https://docs.pola.rs/py-polars/html/reference/api/polars.scan_parquet.html" rel="nofollow noreferrer">the documentation of <code>scan_parquet()</code> says</a>:</p> <blockquote> <p>This function allows the query optimizer to push down predicates and projections to the scan level, typically increasing performance and reducing memory overhead.</p> </blockquote> <p>I can work around the problem by predicting the files that will be needed, downloading them locally, and then running <code>scan_parquet()</code> on the local files. <strong>The problem</strong> with that approach is that I lose some of the implicit decision making of polars itself in deciding which files need to be downloaded.</p> <p>In an ideal world, polars would offer the ability to add new (custom) data sources, which would let me create a hybrid: dynamically using local files or downloading from S3.</p>
<python><amazon-s3><hive><parquet><python-polars>
2024-02-09 13:21:00
0
15,219
Philip Couling
77,968,372
3,566,606
Mypy Plugin for Replacing Custom TypeAlias with NotRequired
<p>I want to write a mypy plugin in order to introduce a type alias for <code>NotRequired[Optional[T]]</code>. (As I found out in <a href="https://stackoverflow.com/questions/77954574/generic-type-alias-for-notrequired-and-optional">this question</a>, it is not possible to write this type alias in plain python, because <code>NotRequired</code> is not allowed outside of a <code>TypedDict</code> definition.)</p> <p>My idea is to define a generic <code>Possibly</code> type, like so:</p> <pre class="lang-py prettyprint-override"><code># possibly.__init__.py from typing import Generic, TypeVar T = TypeVar(&quot;T&quot;) class Possibly(Generic[T]): pass </code></pre> <p>I then want my plugin to replace any occurrence of <code>Possibly[X]</code> with <code>NotRequired[Optional[X]]</code>. I tried the following approach:</p> <pre class="lang-py prettyprint-override"><code># possibly.plugin from mypy.plugin import Plugin class PossiblyPlugin(Plugin): def get_type_analyze_hook(self, fullname: str): if fullname != &quot;possibly.Possibly&quot;: return return self._replace_possibly def _replace_possibly(self, ctx): arguments = ctx.type.args breakpoint() def plugin(version): return PossiblyPlugin </code></pre> <p>At the breakpoint, I understand I have to construct an instance of a subclass of <code>mypy.types.Type</code> based on <code>arguments</code>. But I didn't find a way to construct <code>NotRequired</code>. There is no corresponding type in <code>mypy.types</code>. I figure this might be due to the fact that <code>typing.NotRequired</code> is not a class, but a <code>typing._SpecialForm</code>. 
(I guess this is because <code>NotRequired</code> does not affect the value type, but the definition of the <code>.__optional_keys__</code> of the <code>TypedDict</code> it occurs on.)</p> <p>So, then I thought about a different strategy: I could check for <code>TypedDict</code>s, see which fields are marked <code>Possibly</code>, and set the <code>.__optional_keys__</code> of the <code>TypedDict</code> instance to make the field not required, and replace the <code>Possibly</code> type by <code>mypy.types.UnionType(*arguments, None)</code>. But I didn't find which method on <code>mypy.plugin.Plugin</code> to use in order to get the <code>TypedDict</code>s into the context.</p> <p>So, I am stuck. It is the first time I dig into the internals of <code>mypy</code>. Could you give me some direction how to achieve what I want to do?</p>
<python><python-typing><mypy>
2024-02-09 13:12:52
1
6,374
Jonathan Herrera
77,968,229
6,930,340
Mocking joblib cache using pytest
<p>I want to test a function that uses <code>joblib</code>'s caching functionality.</p> <p>I am wondering how to skip the cache and call the actual function when performing a unit test using pytest. Is it suitable to mock something like <code>joblib.Memory</code>?</p> <p>Let's say I have the following function:</p> <pre><code>from joblib import Memory memory = Memory(location=&quot;cache&quot;) @memory.cache def func(a): return a**2 def test_func(): assert func(2) == 4 </code></pre> <p>How am I supposed to mock the caching functionality and force <code>func</code> to be executed every time I call it from within the test function?</p>
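One low-effort approach (a sketch, not the only option): construct the `Memory` with `location=None` under test, which joblib treats as "no store" so every call executes the wrapped function. The call counter below demonstrates that no caching happens:

```python
from joblib import Memory

# location=None makes `cache` a transparent pass-through: nothing is
# read from or written to disk, so the body runs on every call.
memory = Memory(location=None, verbose=0)

calls = {"n": 0}

@memory.cache
def square(a):
    calls["n"] += 1
    return a ** 2

assert square(2) == 4
assert square(2) == 4
print(calls["n"])  # → 2: the body ran twice, i.e. no caching took place
```

In a pytest suite the same idea can be applied by having a fixture (or a conftest-level patch) swap the module-level `memory` for `Memory(location=None)` before the decorated function is defined. For a function that is already decorated, the wrapper returned by `memory.cache` also exposes the undecorated callable as its `.func` attribute, which a test can invoke directly (worth verifying against your joblib version).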
<python><mocking><pytest><joblib>
2024-02-09 12:46:45
1
5,167
Andi
77,968,049
1,610,765
select.select() isn't accepting my socket and returns empty lists
<p>I wished to write a script for something that would act as both a server and a client. For the server part I decided I should create an SSL socket <code>accept()</code> loop that would listen to new connections forever, and to make it non-blocking so that the client part can work I decided to use <code>select</code>.</p> <p>Here's the script:</p> <pre><code>from socket import (socket, AF_INET, SOCK_STREAM, create_connection, SOL_SOCKET, SO_REUSEADDR) from ssl import (SSLContext, PROTOCOL_TLS_SERVER, PROTOCOL_TLS_CLIENT) import select import threading import time from tqdm.auto import tqdm def handle_client(client, address): request_bytes = b&quot;&quot; + client.recv(1024) if not request_bytes: print(&quot;Connection closed&quot;) client.close() request_str = request_bytes.decode() print(f&quot;we've received {request_str}&quot;) ip = '127.0.0.1' port = 8443 server_context = SSLContext(PROTOCOL_TLS_SERVER) server_context.load_cert_chain('cert_ex1.pem', 'key_ex1.pem') initial_counter = 1 server_socket = socket(AF_INET, SOCK_STREAM) server_socket.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1) server_socket.bind((ip, port)) server_socket.listen(5) print(&quot;forming lists for select...&quot;) inputs = [server_socket] print(f&quot;we now have list inputs: {inputs}&quot;) outputs = [] while True: print(f&quot;checking: inputs still has a single socket in it: {type(inputs[0])}&quot;) readable, writable, exceptional = select.select(inputs, outputs, inputs, 1) print(f&quot;readable is {readable}, \n writable is {writable}, \n exceptional is {exceptional}&quot;) for s in tqdm(readable): if s is server_socket: print(&quot;wrapping server socket in SSL...&quot;) with server_context.wrap_socket(server, server_side=True) as tls: connection, client_address = tls.accept() print(&quot;making the connection non-blocking...&quot;) connection.setblocking(0) inputs.append(connection) print(&quot;starting a thread that'd handle the messages...&quot;) threading.Thread(target=handle_client, 
args=(connection, client_address)).start() else: print(f&quot;dealing with socket {s}&quot;) hostname='example.org' client_context = SSLContext(PROTOCOL_TLS_CLIENT) client_context.load_verify_locations('cert_ex1.pem') with create_connection((ip, port)) as client: with client_context.wrap_socket(client, server_hostname=hostname) as tls: print(f'Using {tls.version()}\n') print(&quot;client is sending data...&quot;) tls.sendall(int.to_bytes(initial_counter, 4, 'little')) while True: data = tls.recv(1024*8) if not data: print(&quot;Client received no data&quot;) break new_data = int.from_bytes(data, 'little') print(f'Server says: {new_data}') new_data = int.to_bytes(new_data+1, 4, 'little') print(&quot;sleeping for 0.15...&quot;) time.sleep(0.15) tls.sendall(new_data) </code></pre> <p>The problem with running this script is that it creates the socket properly, passes a list with just this socket to <code>select.select()</code>, but then <code>select()</code> returns 3 empty lists.</p> <ol> <li>Is there a reason for this?</li> <li>Is there anything else wrong with my code (the general idea, trying to make this work with SSL, using <code>connection.setblocking(0)</code>, anything else)?</li> </ol>
<python><sockets><ssl><select>
2024-02-09 12:11:02
1
1,506
Chiffa
77,968,009
4,996,681
How to add spine arrows AND offset the spine
<p>I can do either one of these separately, but not together. I think the question comes down to: after offsetting spines in a Matplotlib figure, how does one find the spine bounds in a coordinate system that can be used to plot arrowheads on the ends of the spines?</p> <p><a href="https://i.sstatic.net/YPBNq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YPBNq.png" alt="code result" /></a></p> <p>The arrows are (obviously) not aligned with the spines.</p> <p>The code for the arrowheads is from here <a href="https://matplotlib.org/stable/gallery/spines/centered_spines_with_arrows.html" rel="nofollow noreferrer">https://matplotlib.org/stable/gallery/spines/centered_spines_with_arrows.html</a></p> <p>My simplified code is:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np spine_offset = 5 fig, ax = plt.subplots() a = np.linspace(0, 6, 100) ax.plot(a, np.sin(a) + 1) ax.spines.top.set_visible(False) ax.spines.right.set_visible(False) ax.spines.left.set_position(('outward', spine_offset)) ax.spines.bottom.set_position(('outward', spine_offset)) ax.plot(1, 0, &quot;&gt;k&quot;, transform=ax.transAxes, clip_on=False) ax.plot(0, 1, &quot;^k&quot;, transform=ax.transAxes, clip_on=False) </code></pre>
<python><matplotlib>
2024-02-09 12:04:49
2
323
Luce
77,967,806
1,555,306
Pandas to_csv for multiple rows/timestamps of the same symbol
<p>I am fetching a constant stream of data into a dataframe, which I now need to save into multiple CSVs.</p> <p>For example, I am fetching OHLCV data from Binance and I get this dataframe:</p> <pre><code> sym o h l \ 2024-02-09 11:15:59.594 ETHUSDT 2419.56000000 2479.88000000 2419.16000000 c v barcomplete 2024-02-09 11:15:59.594 2471.79000000 170696.13700000 False sym o h l \ 2024-02-09 11:15:59.622 IDUSDT 0.54534000 0.64866000 0.53587000 c v barcomplete 2024-02-09 11:15:59.622 0.60634000 93122120.00000000 False sym o h l \ 2024-02-09 11:15:59.658 ICPUSDT 12.18600000 12.81000000 12.16500000 c v barcomplete 2024-02-09 11:15:59.658 12.62300000 1065607.61000000 False sym o h l \ 2024-02-09 11:15:59.594 ETHUSDT 2400.56000000 2422.88000000 2399.16000000 c v barcomplete 2024-02-08 11:15:59.594 2419.79000000 160696.13700000 False </code></pre> <p>Index is the timestamp. I get multiple data rows for the same sym (ETHUSDT price today, yesterday, day before and so on). I want to save coins/ETHUSDT rows into their own separate CSV (ETHUSDT.csv, IDUSDT.csv, etc), with new data rows being appended to those CSVs as they get fetched.</p> <p>I am using this but its slow:</p> <pre><code> for coin in df.sym: filename = r&quot;{}.csv&quot;.format(coin) print(filename) df['sym'].to_csv(filename, mode='a', header=False) </code></pre> <p>But I just can't get this working properly. Please advise. (I just started learning pandas so please be kind) :)</p>
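A sketch of the usual pattern for this: group once by `sym` and append each group's full rows to its own file, instead of looping per row and writing only the `sym` column. (The dataframe below is a small stand-in for the OHLCV stream, and the files are written to a temporary directory.)

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame(
    {
        "sym": ["ETHUSDT", "IDUSDT", "ETHUSDT"],
        "o": [2419.56, 0.54534, 2400.56],
        "c": [2471.79, 0.60634, 2419.79],
    },
    index=pd.to_datetime(
        ["2024-02-09 11:15:59.594", "2024-02-09 11:15:59.622",
         "2024-02-08 11:15:59.594"]
    ),
)

outdir = tempfile.mkdtemp()

# One pass over the frame: each symbol's rows (all columns, timestamp
# index included) are appended to that symbol's CSV.
for sym, group in df.groupby("sym"):
    group.to_csv(os.path.join(outdir, f"{sym}.csv"), mode="a", header=False)
```

This is also much faster than the per-row loop, since each file is opened once per batch rather than once per row.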
<python><pandas>
2024-02-09 11:31:10
1
773
masterpiece
77,967,795
5,838,180
In plotly how to define colour and marker when adding a 3D trace?
<p>I am successfully running the following plotly code to create a 3D plot:</p> <pre><code>import plotly.express as px import plotly.graph_objs as go fig = px.scatter_3d(df, x='x', y='y', z='z') </code></pre> <p>What I want to do next is to add more data points to the 3D plot AND define the marker shape (e.g. as squares) as well as define the color. So I did the following:</p> <pre><code>fig.add_trace(go.Scatter3d(x=df2[&quot;x&quot;], y=df2[&quot;y&quot;], z=df2['z'], mode='markers', color='black', markers='s')) </code></pre> <p>but this leads to error messages, since <code>color</code> and <code>markers</code> are not accepted keywords. Unfortunately, the <a href="https://plotly.github.io/plotly.py-docs/generated/plotly.graph_objects.Scatter3d.html" rel="nofollow noreferrer">documentation</a> about this is very short and not comprehensible for me. I also wonder if I am confusing the functionalities of <code>go.Scatter3d</code> and <code>go.scatter3d</code>, since both are actual code with different functionalities. So what code should I use to define the colour and marker shapes for added points myself? Thx</p>
<python><plotly><plotly-express>
2024-02-09 11:28:47
1
2,072
NeStack
77,967,592
8,040,369
Transform a df into a specific object structure
<p>I have a df which I am getting from a table</p> <pre><code>ID CAT SUB_CAT KEY VALUE ============================================= 1 AA AA_1 X 10 2 AA AA_1 Y 20 3 AA AA_1 Z 15 </code></pre> <p>Previously I was using the <strong>df.to_dict('records')</strong> and converting them to a list, then I was able to return it from my API call as a JSON object.</p> <p>Now in a scenario, the values for CAT and SUB_CAT will always be the same. so I was thinking to convert the df into something like</p> <pre><code>{ &quot;CAT&quot;: &quot;AA&quot;, &quot;SUB_CAT&quot;: &quot;AA_1&quot;, &quot;METRICS&quot;: [ { &quot;ID&quot;: 1, &quot;KEY&quot;: &quot;X&quot;, &quot;VALUE&quot;: 10 }, { &quot;ID&quot;: 2, &quot;KEY&quot;: &quot;Y&quot;, &quot;VALUE&quot;: 20 }, { &quot;ID&quot;: 3, &quot;KEY&quot;: &quot;Z&quot;, &quot;VALUE&quot;: 15 } ] } </code></pre> <p>Can someone help me transform the df into the above JSON format or let me know how should I proceed please.</p> <p>Thanks,</p>
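Since `CAT` and `SUB_CAT` are constant, one way (a sketch) is to take them from the first row and convert only the remaining columns with `to_dict('records')`:

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 2, 3],
    "CAT": ["AA", "AA", "AA"],
    "SUB_CAT": ["AA_1", "AA_1", "AA_1"],
    "KEY": ["X", "Y", "Z"],
    "VALUE": [10, 20, 15],
})

# Constant columns become scalar fields; the per-row columns become
# the METRICS list of records.
result = {
    "CAT": df["CAT"].iloc[0],
    "SUB_CAT": df["SUB_CAT"].iloc[0],
    "METRICS": df[["ID", "KEY", "VALUE"]].to_dict("records"),
}
print(result)
```

If the table can ever contain several (CAT, SUB_CAT) pairs, the same pattern works per group via `df.groupby(["CAT", "SUB_CAT"])`, producing one such object per group.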
<python><pandas><dataframe>
2024-02-09 10:54:45
1
787
SM079
77,967,563
2,971,574
Pyspark: Drop arrays of structs if condition is met
<p>I've got the following pyspark dataframe: <a href="https://i.sstatic.net/AfahM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AfahM.png" alt="enter image description here" /></a></p> <p>I'd like to remove all arrays from the column 'price_history' in which at least one of the following conditions is met:</p> <ol> <li>the date is null</li> <li>the price is null</li> <li>the price is 0.00</li> </ol> <p>The expected dataframe looks like this:</p> <p><a href="https://i.sstatic.net/1RyvY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1RyvY.png" alt="enter image description here" /></a></p> <p>The syntax:</p> <pre><code>from decimal import Decimal from datetime import date from pyspark.sql.functions import (array, when, array_except, col, lit, array_size, struct, max as pyspark_max) from pyspark.sql.types import (LongType, StringType, StructField, StructType, IntegerType, DateType, DecimalType, ArrayType) INPUT_SCHEMA = StructType([ StructField('ID', LongType(), True), StructField('brand', StringType(), True), StructField('price_history', ArrayType(StructType([ StructField('date', DateType(), True), StructField('identifier', StringType(), True), StructField('price', DecimalType(12, 2), True),]), True), True), ]) INPUT_DATA = [ [7572753287, 'brand 1', [[None, None, Decimal(2.19)], [date(2023, 2, 27), None, Decimal(1.79)]]], [7373874387383, 'brand 2', [[None, &quot;N&quot;, Decimal(7.00)]]], [223278687678, 'brand 3', [[None, &quot;NB&quot;, Decimal(0.63)]]], [2782872453, 'brand 2', [[None, &quot;N&quot;, Decimal(2.19)]]], [86943438343, 'brand x', [[None, &quot;NW&quot;, Decimal(1.49)]]], [87334273838, 'mybrand', [[date(2022, 3, 26), &quot;NW&quot;, None], [date(2024, 1, 15), &quot;E&quot;, Decimal(0.99)], [date(2024, 1, 15), &quot;E&quot;, Decimal(0.00)]]], [2783783972, 'other brand', [[date(2024, 1, 15), &quot;NW&quot;, None]]], ] EXPECTED_DATA = [ [7572753287, 'brand 1', [[date(2023, 2, 27), None, Decimal(1.79)]]], [7373874387383, 
'brand 2', []], [223278687678, 'brand 3', []], [2782872453, 'brand 2', []], [86943438343, 'brand x', []], [87334273838, 'mybrand', [[date(2024, 1, 15), &quot;E&quot;, Decimal(0.99)]]], [2783783972, 'other brand', []], ] expected_df = spark.createDataFrame(EXPECTED_DATA, INPUT_SCHEMA) input_df = spark.createDataFrame(INPUT_DATA, INPUT_SCHEMA) input_df.display() </code></pre> <p>I already found a way to solve the problem but it seems to be too complicated. It looks like this:</p> <pre><code>max_array_size = input_df.select(pyspark_max(array_size(col('price_history')))).collect()[0][0] empty_struct = struct(lit(None).cast(DateType()).alias('date'), lit(None).cast(StringType()).alias('identifier'), lit(None).cast(DecimalType(12, 2)).alias('price')) result = input_df.withColumn('price_history', array_except(array(*[when( (col('price_history').getItem(x).getField('date').isNull()) | (col('price_history').getItem(x).getField('price').isNull()) | (col('price_history').getItem(x).getField('price')==Decimal(0.00)), empty_struct).otherwise(col('price_history').getItem(x)) for x in range(max_array_size)]), array(empty_struct))) result.display() </code></pre> <p>To confirm that both dataframes are equal (requires pyspark 3.5 and pyarrow):</p> <pre><code>from pyspark.testing import assertDataFrameEqual assertDataFrameEqual(expected_df, result) </code></pre> <p>Is there an easier way of doing this? I was going through all the pyspark functions to find something that helps me to facilitate things but I couldn't find anything :(.</p>
<python><pyspark><databricks>
2024-02-09 10:49:24
1
555
the_economist
77,967,474
13,806,869
Pandas to_sql - NotImplementedError?
<p>I have a Pandas dataframe with the following columns in it:</p> <pre><code>account_number int64 prediction int64 probability_of_prediction float64 probability_of_redemption float64 </code></pre> <p>The dataframe is dtype: object.</p> <p>I want to upload its data into a table in a DB2 database. The table is created like this:</p> <pre><code>CREATE TABLE MYSCHEMA.MYTABLE ( ACCOUNT_NUMBER INT, PREDICTION INT, PROBABILITY_OF_PREDICTION DECIMAL(5,2), PROBABILITY_OF_REDEMPTION DECIMAL(5,2) ) IN MYSCHEMA COMPRESS YES ; </code></pre> <p>This is the code I'm using for the upload:</p> <pre><code>df.to_sql(name = &quot;mytable&quot;, con = connection, schema = &quot;myschema&quot;, if_exists = 'replace', index = False, chunksize = 1000, method = &quot;multi&quot;) </code></pre> <p>However, it fails with the following message, which isn't particularly informative:</p> <pre><code>NotImplementedError </code></pre> <p>Does anyone know where I'm going wrong, please?</p> <p>Edit: As requested, here's the full error message:</p> <pre><code>ERROR:root:Problem: Traceback (most recent call last): File &quot;C:\My_filepath\my_script.py&quot;, line 185, in &lt;module&gt; modelling() File &quot;C:\My_filepath\my_script.py&quot;, line 164, in modelling df.to_sql(name = &quot;mytable&quot;, con = connection, schema = &quot;myschema&quot;, if_exists = 'replace', index = False, chunksize = 1000, method = &quot;multi&quot;) File &quot;C:\My_filepath\Python\Python39\lib\site-packages\pandas\core\generic.py&quot;, line 2951, in to_sql return sql.to_sql( File &quot;C:\My_filepath\Python\Python39\lib\site-packages\pandas\io\sql.py&quot;, line 697, in to_sql return pandas_sql.to_sql( File &quot;C:\My_filepath\Python\Python39\lib\site-packages\pandas\io\sql.py&quot;, line 1729, in to_sql table = self.prep_table( File &quot;C:\My_filepath\Python\Python39\lib\site-packages\pandas\io\sql.py&quot;, line 1628, in prep_table table.create() File 
&quot;C:\My_filepath\Python\Python39\lib\site-packages\pandas\io\sql.py&quot;, line 835, in create self.pd_sql.drop_table(self.name, self.schema) File &quot;C:\My_filepath\Python\Python39\lib\site-packages\pandas\io\sql.py&quot;, line 1787, in drop_table self.meta.reflect(bind=self.connectable, only=[table_name], schema=schema) File &quot;C:\My_filepath\Python\Python39\lib\site-packages\sqlalchemy\sql\schema.py&quot;, line 4860, in reflect Table(name, self, **reflect_opts) File &quot;&lt;string&gt;&quot;, line 2, in __new__ File &quot;C:\My_filepath\Python\Python39\lib\site-packages\sqlalchemy\util\deprecations.py&quot;, line 309, in warned return fn(*args, **kwargs) File &quot;C:\My_filepath\Python\Python39\lib\site-packages\sqlalchemy\sql\schema.py&quot;, line 616, in __new__ metadata._remove_table(name, schema) File &quot;C:\My_filepath\Python\Python39\lib\site-packages\sqlalchemy\util\langhelpers.py&quot;, line 70, in __exit__ compat.raise_( File &quot;C:\My_filepath\Python\Python39\lib\site-packages\sqlalchemy\util\compat.py&quot;, line 207, in raise_ raise exception File &quot;C:\My_filepath\Python\Python39\lib\site-packages\sqlalchemy\sql\schema.py&quot;, line 611, in __new__ table._init(name, metadata, *args, **kw) File &quot;C:\My_filepath\Python\Python39\lib\site-packages\sqlalchemy\sql\schema.py&quot;, line 686, in _init self._autoload( File &quot;C:\My_filepath\Python\Python39\lib\site-packages\sqlalchemy\sql\schema.py&quot;, line 721, in _autoload conn_insp.reflect_table( File &quot;C:\My_filepath\Python\Python39\lib\site-packages\sqlalchemy\engine\reflection.py&quot;, line 791, in reflect_table self._reflect_pk( File &quot;C:\My_filepath\Python\Python39\lib\site-packages\sqlalchemy\engine\reflection.py&quot;, line 920, in _reflect_pk pk_cons = self.get_pk_constraint( File &quot;C:\My_filepath\Python\Python39\lib\site-packages\sqlalchemy\engine\reflection.py&quot;, line 528, in get_pk_constraint return self.dialect.get_pk_constraint( File 
&quot;C:\My_filepath\Python\Python39\lib\site-packages\sqlalchemy\engine\interfaces.py&quot;, line 285, in get_pk_constraint raise NotImplementedError() NotImplementedError </code></pre>
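The bottom of that traceback is the key: `if_exists='replace'` makes pandas *reflect* the existing table before dropping it, and the reflection call `get_pk_constraint` lands on SQLAlchemy's generic fallback dialect, which only raises `NotImplementedError`. That fallback is typically what you get when `connection` is not backed by a real DB2 dialect. A hedged sketch of the usual fix — every connection detail below is a placeholder, and it assumes the `ibm_db_sa` package (which registers the `db2+ibm_db` dialect) is installed:

```python
# Placeholder URL; ibm_db_sa registers the "db2+ibm_db" dialect, whose
# implementation of reflection (get_pk_constraint etc.) is what the
# generic fallback dialect is missing.
db2_url = "db2+ibm_db://user:password@host:50000/MYDB"

# from sqlalchemy import create_engine
# engine = create_engine(db2_url)
# df.to_sql(name="mytable", con=engine, schema="myschema",
#           if_exists="replace", index=False, chunksize=1000, method="multi")
```

Note that the failure happens in `prep_table`, before any rows are sent, which is consistent with reflection (not the insert itself) being the unsupported operation.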
<python><pandas>
2024-02-09 10:30:49
0
521
SRJCoding
77,967,334
849,076
Getting min/max column name in Polars
<p>In polars I can get the horizontal max (maximum value of a set of columns for each row) like this:</p> <pre><code>df = pl.DataFrame( { &quot;a&quot;: [1, 8, 3], &quot;b&quot;: [4, 5, None], } ) df.with_columns(max = pl.max_horizontal(&quot;a&quot;, &quot;b&quot;)) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ max β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ══════β•ͺ═════║ β”‚ 1 ┆ 4 ┆ 4 β”‚ β”‚ 8 ┆ 5 ┆ 8 β”‚ β”‚ 3 ┆ null ┆ 3 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ </code></pre> <p>This corresponds to Pandas <code>df[[&quot;a&quot;, &quot;b&quot;]].max(axis=1)</code>.</p> <p>Now, how do I get the column names instead of the actual max value? In other words, what is the Polars version of Pandas' <code>df[[&quot;a&quot;, &quot;b&quot;]].idxmax(axis=1)</code>?</p> <p>The expected output would be:</p> <pre><code>β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ max β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ══════β•ͺ═════║ β”‚ 1 ┆ 4 ┆ b β”‚ β”‚ 8 ┆ 5 ┆ a β”‚ β”‚ 3 ┆ null ┆ a β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ </code></pre>
<python><dataframe><max><python-polars>
2024-02-09 10:09:12
4
8,641
leo
77,967,245
2,163,392
How to fine-tune and transfer-learn CNNs on a new dataset using ImageNet std and mean in the data augmentation phase?
<p>I did some fine-tuning training using CNNs for diverse applications. Honestly, I never really needed to perform Imagenet normalization in new datasets as the results were already quite good without that procedure.</p> <p>Now I have for the first time a new dataset on which the results are really bad without the ImageNet normalization and I would like to give it a try. However, as I never used it, I did some research and tried to use it. However, to perform training with data augmentation, the source code seems strange to me.</p> <p>Here is what the co-pilot recommended to me.</p> <pre><code>from tensorflow.keras.preprocessing.image import ImageDataGenerator # Define the mean and standard deviation of the original dataset (e.g., ImageNet) mean = [0.485, 0.456, 0.406] std = [0.229, 0.224, 0.225] # Create an instance of ImageDataGenerator and specify the normalization parameters datagen = ImageDataGenerator( rescale=1./255, # Scale the pixel values to [0, 1] featurewise_center=True, # Apply feature-wise centering featurewise_std_normalization=True # Apply feature-wise standard deviation scaling) # Compute the mean and standard deviation of the original dataset (e.g., #ImageNet) # Note: Replace `train_images` with the original dataset images datagen.fit(train_images) # Apply data augmentation and normalization to the input images augmented_images = datagen.flow(train_images, train_labels, batch_size=batch_size) </code></pre> <p>However, this co-pilot source code seems strange, as the <a href="https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator" rel="nofollow noreferrer">ImageDataGenerator documentation</a> says, <em>featurewise_center sets the input mean to 0 over the dataset, and samplewise_center sets each sample mean to 0.</em> It is not said anywhere that the featurewise_center and samplewise_center will use the means and stds from Imagenet (it seems they use the means and stds of the dataset I am using) and also not from 
variables <strong>std</strong> and <strong>mean</strong> that the co-pilot recommended I declare at the beginning of the source code.</p> <p>So, my question is: how do I add the std and mean normalization using values from Imagenet in the data augmentation phase? Is the source code above correct?</p> <p>Just to highlight: many tutorials on the internet on fine-tuning and transfer learning never explicitly show or discuss the Imagenet normalization in the source code. In addition to my first question: is the Imagenet normalization useful on new datasets?</p>
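Your reading of the docs is right: `featurewise_center`/`featurewise_std_normalization` use statistics computed by `datagen.fit` on *your* images, so the `mean`/`std` lists in the Copilot snippet are never used. One way (a sketch, not the only option) to force the ImageNet statistics is a `preprocessing_function`, which Keras applies to each image after augmentation; the helper itself is plain NumPy:

```python
import numpy as np

# ImageNet channel statistics (RGB channel order assumed)
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def imagenet_normalize(img):
    """Scale an HxWx3 image from [0, 255] to [0, 1], then standardize
    each channel with the ImageNet mean/std."""
    img = np.asarray(img, dtype=np.float32) / 255.0
    return (img - IMAGENET_MEAN) / IMAGENET_STD

# Hooking it into Keras augmentation -- note no rescale or featurewise
# flags, and no datagen.fit() needed:
# datagen = ImageDataGenerator(preprocessing_function=imagenet_normalize,
#                              horizontal_flip=True)
```

As for whether it helps on new data: matching the normalization the pretrained weights were trained with tends to matter most when early layers are frozen, but that part is an empirical question for your dataset.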
<python><tensorflow><deep-learning><conv-neural-network><data-augmentation>
2024-02-09 09:54:06
0
2,799
mad
77,967,137
6,151,828
Pre-processing data within a Julia structure
<p>I would like to have a Julia structure with data and parameters contained within it, to be used with various functions. E.g., I start with a data frame of numbers, but many functions use it as a matrix and also need to know its dimensions. In python I would use something like:</p> <pre><code>class my_class(object): def __init__(self, df): X = df.to_numpy() self.n, self.m = X.shape row_names = df.index.tolist() col_names = df.columns.tolist() Foo = my_class(df) </code></pre> <p>What poses a difficulty is whether operations <code>.to_numpy()</code>, <code>.shape</code>, etc. can be performed inside the Julia structure or must be performed beforehand. Or perhaps this necessitates using a mutable structure?</p> <p>At my current level of knowledge I would do something like this:</p> <pre><code>struct MyClass df::DataFrame X::Matrix{Float64} n::Int64 m::Int64 row_names::Vector{String} col_names::Vector{String} end X = Matrix{Float64}(df[:,begin+1:end]) n, m = size(X) row_names = df[:,begin] col_names = names(df[:,begin+1:end]) Foo = MyClass(df, X, n, m, row_names, col_names) </code></pre> <p>Remark: thinking of it, I could probably wrap the code above in a function which returns the initialized structure Foo. But it seems like a convoluted way of doing it.</p>
<python><class><data-structures><struct><julia>
2024-02-09 09:35:31
2
803
Roger V.
77,967,053
313,648
Cannot get Playwright to work in 2 separate scripts concurrently
<p>I am using the Playwright synchronous API in completely separate scripts and they all work fine when I run them at different times, but if I try to run the scripts at the same time, the script that started first seems to take ownership of the browser and the second (and subsequent) script fails with a message about &quot;Opening in existing browser session&quot;. I would have thought that Playwright could start separate browsers for each script so they would be completely independent, but it doesn't seem to.</p> <p>I should say I also need to use the persistent browser context to ensure the scripts authenticate against various systems as me and that seems to be part of the issue.</p> <p>Has anyone else encountered this issue and do you know if there's a way to have the scripts run concurrently (without using Playwright asynchronously as this will introduce a lot more complexity).</p> <p>Here's the code each script uses to start a context :</p> <pre><code>import os from playwright.sync_api import Playwright, sync_playwright def get_cookies(context): &quot;&quot;&quot; Checks for new cookies. This function running is critical to pages loading properly, or else the pages will look like they are only partially loading. 
&quot;&quot;&quot; global cookies cookie_set = {tempdict['name'] for tempdict in cookies} new_cookies = context.cookies() context_cookie_set = {tempdict['name'] for tempdict in new_cookies} new_cookie_set = context_cookie_set - cookie_set if len(new_cookie_set) == 0: return 0 else: pass cookies = new_cookies return len(new_cookie_set) cookies = [] playwright = sync_playwright().start() username = os.environ.get('USERNAME') context_location = rf'C:\Users\{username}\AppData\Local\Microsoft\Edge\User Data\Default' context = playwright.chromium.launch_persistent_context(context_location, headless=False, channel='msedge') get_cookies(context) context.on('requestfinished', lambda r: get_cookies(context)) context.on('requestfailed', lambda r: get_cookies(context)) page = context.new_page() page.goto('https://playwright.dev/python/') page.wait_for_load_state(state='networkidle') while get_cookies(context) &gt; 0: print('.') pass # Simulate an action with a time delay seconds = 30 page.wait_for_timeout(seconds * 1000) </code></pre> <p>I know I can pass in different startup arguments, but I haven't seen an arg that lets me start separate browsers.</p> <p>Here's an example of the output I get from a command window:</p> <pre><code>Traceback (most recent call last): File &quot;E:\OLD_STUFF\Miscellaneous_Scripts\playwright_test.py&quot;, line 28, in &lt;module&gt; context = playwright.chromium.launch_persistent_context(context_location, File &quot;c:\programdata\miniconda3\lib\site-packages\playwright\sync_api\_generated.py&quot;, line 14662, in launch_persistent_context self._sync( File &quot;c:\programdata\miniconda3\lib\site-packages\playwright\_impl\_sync_base.py&quot;, line 104, in _sync return task.result() File &quot;c:\programdata\miniconda3\lib\site-packages\playwright\_impl\_browser_type.py&quot;, line 155, in launch_persistent_context from_channel(await self._channel.send(&quot;launchPersistentContext&quot;, params)), File 
&quot;c:\programdata\miniconda3\lib\site-packages\playwright\_impl\_connection.py&quot;, line 44, in send return await self._connection.wrap_api_call( File &quot;c:\programdata\miniconda3\lib\site-packages\playwright\_impl\_connection.py&quot;, line 419, in wrap_api_call return await cb() File &quot;c:\programdata\miniconda3\lib\site-packages\playwright\_impl\_connection.py&quot;, line 79, in inner_send result = next(iter(done)).result() playwright._impl._api_types.Error: Browser closed. ==================== Browser output: ==================== &lt;launching&gt; C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe --disable-field-trial-config --disable-background-networking --enable-features=NetworkService,NetworkServiceInProcess --disable-background-timer-throttling --disable-backgrounding-occluded-windows --disable-back-forward-cache --disable-breakpad --disable-client-side-phishing-detection --disable-component-extensions-with-background-pages --disable-component-update --no-default-browser-check --disable-default-apps --disable-dev-shm-usage --disable-extensions --disable-features=ImprovedCookieControls,LazyFrameLoading,GlobalMediaControls,DestroyProfileOnBrowserClose,MediaRouter,DialMediaRouteProvider,AcceptCHFrame,AutoExpandDetailsElement,CertificateTransparencyComponentUpdater,AvoidUnnecessaryBeforeUnloadCheckSync,Translate --allow-pre-commit-input --disable-hang-monitor --disable-ipc-flooding-protection --disable-popup-blocking --disable-prompt-on-repost --disable-renderer-backgrounding --disable-sync --force-color-profile=srgb --metrics-recording-only --no-first-run --enable-automation --password-store=basic --use-mock-keychain --no-service-autorun --export-tagged-pdf --no-sandbox --user-data-dir=C:\Users\****\AppData\Local\Microsoft\Edge\User Data\Default --remote-debugging-pipe about:blank &lt;launched&gt; pid=529300 [pid=529300][out] Opening in existing browser session. 
[pid=529300] &lt;process did exit: exitCode=0, signal=null&gt; [pid=529300] starting temporary directories cleanup =========================== logs =========================== &lt;launching&gt; C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe --disable-field-trial-config --disable-background-networking --enable-features=NetworkService,NetworkServiceInProcess --disable-background-timer-throttling --disable-backgrounding-occluded-windows --disable-back-forward-cache --disable-breakpad --disable-client-side-phishing-detection --disable-component-extensions-with-background-pages --disable-component-update --no-default-browser-check --disable-default-apps --disable-dev-shm-usage --disable-extensions --disable-features=ImprovedCookieControls,LazyFrameLoading,GlobalMediaControls,DestroyProfileOnBrowserClose,MediaRouter,DialMediaRouteProvider,AcceptCHFrame,AutoExpandDetailsElement,CertificateTransparencyComponentUpdater,AvoidUnnecessaryBeforeUnloadCheckSync,Translate --allow-pre-commit-input --disable-hang-monitor --disable-ipc-flooding-protection --disable-popup-blocking --disable-prompt-on-repost --disable-renderer-backgrounding --disable-sync --force-color-profile=srgb --metrics-recording-only --no-first-run --enable-automation --password-store=basic --use-mock-keychain --no-service-autorun --export-tagged-pdf --no-sandbox --user-data-dir=C:\Users\****\AppData\Local\Microsoft\Edge\User Data\Default --remote-debugging-pipe about:blank &lt;launched&gt; pid=529300 [pid=529300][out] Opening in existing browser session. [pid=529300] &lt;process did exit: exitCode=0, signal=null&gt; [pid=529300] starting temporary directories cleanup ============================================================ </code></pre> <p>Thanks.</p>
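The "Opening in existing browser session." line is Chromium refusing to start a second browser against a `--user-data-dir` that is already locked by a running browser — so with one shared persistent profile, only the first script can own it. A hedged workaround sketch (the helper name is mine, not a Playwright API): clone the profile into a per-script directory so each script launches an independent browser while keeping your cookies and auth state:

```python
import os
import shutil
import tempfile

def per_script_profile(base_profile: str, script_name: str) -> str:
    """Copy the shared Edge profile to a per-script directory.

    Chromium locks a user-data-dir while a browser is running, so two
    scripts pointed at the same directory cannot run concurrently; a
    private copy per script avoids the lock (at the cost of disk space,
    and cookie changes will not flow back to the original profile).
    """
    target = os.path.join(tempfile.gettempdir(), f"pw-profile-{script_name}")
    if not os.path.isdir(target):
        shutil.copytree(base_profile, target)
    return target

# In each script, launch with its own copy:
# context = playwright.chromium.launch_persistent_context(
#     per_script_profile(context_location, "script_a"),
#     headless=False, channel="msedge")
```

One caveat: copying the live "User Data\Default" directory while Edge itself is open can fail on locked files, so take the copies while the browser is closed.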
<python><playwright><playwright-python>
2024-02-09 09:20:00
1
532
Adrian
77,966,911
393,010
How to avoid almost duplicated uninformative type hints when function arguments should default to empty list/None
<p>Empty lists as default arguments are a gotcha in python, and adding type hints to the usual pattern of <code>=None</code> as default value makes them quite messy.</p> <p>Is there any way to avoid giving the type hint twice (or even once?) in this example code:</p> <pre class="lang-py prettyprint-override"><code>class Telly: def __init__(self, penguin: Optional[list[str]] = None): # &lt;-- This type is ugly and adds negative value self.penguin: list[str] = penguin or [] # &lt;-- Could this type be inferred? </code></pre> <p>(Example reworked from <a href="https://docs.python.org/3/reference/compound_stmts.html#function-definitions" rel="nofollow noreferrer">https://docs.python.org/3/reference/compound_stmts.html#function-definitions</a>)</p> <ol> <li>Is there any way to avoid the <code>Optional</code> part of the argument type hint?</li> <li>Can either of the type hints be inferred from the other?</li> </ol> <p>For example, if there were some special kind of default argument that would produce a new empty list every time, or at least hide the ugliness of the type hinting, for example</p> <pre class="lang-py prettyprint-override"><code>class Telly: def __init__(self, penguin=EmptyList[str]): self.penguin = penguin or [] </code></pre> <p>Would something like that be possible?</p>
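If Telly is mostly a data holder, `dataclasses.field(default_factory=list)` gives exactly the "special default that produces a new empty list every time" from the sketch, with a single non-Optional annotation:

```python
from dataclasses import dataclass, field

@dataclass
class Telly:
    # One annotation, no Optional, and a fresh list per instance.
    penguin: list[str] = field(default_factory=list)

t1 = Telly()
t1.penguin.append("Mr Flibble")
t2 = Telly()
print(t2.penguin)  # each instance gets its own empty list -> []
```

For a hand-written `__init__`, the second annotation in the question is already redundant: mypy infers `list[str]` for `self.penguin = penguin or []` from the parameter's hint.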
<python><mypy><python-typing>
2024-02-09 08:48:53
1
5,626
Moberg
77,966,551
11,861,874
Excel Pivot Export from Pandas Dataframe
<p>I have multiple functions that generate multiple data frames of different lengths, I am aiming to consolidate all of them in one place and later pivot it out and export to Excel, here is an example of three different output data frames from three functions.</p> <pre><code>import pandas as pd data1 = {'Header':['L1','L2','L3'], 'Val1':[float(100),float(200),float(300)], 'Val2':[float(400),float(500),float(600)], 'Val3': [float(700),float(800),float(900)]} data1_summary = pd.DataFrame(data=data1) # Inside loop it'll create two more such outputs but with different values but with the same labels. data2 = {'Header':['L5','L6'], 'Val5':[float(1000),float(1100)], 'Val6':[float(1300),float(1400)]} data2_summary = pd.DataFrame(data=data2) data3 = {'Header':['L7','L8','L9','L10'], 'Val7':[float(1900),float(2000),float(2100),float(2200)], 'Val8':[float(2900),float(2300),float(2400),float(2800)], 'Val9': [float(3500),float(3600),float(3700),float(3900)]} data3_summary = pd.DataFrame(data=data3) </code></pre> <p>There are different 'Headers' in all three outputs, similarly, there are different labels 'Val1' to 'Val9' and there are corresponding values against each of them, If we output everything in a sheet (i.e.,data1_summary,data2_summary,data3_summary) it'll be like a grid and later we can perform pivot on that data.</p> <p>The expected output is as follows.</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th></th> <th>Val1</th> <th>Val2</th> <th>Val3</th> <th>Val5</th> <th>Val6</th> <th>Val7</th> <th>Val8</th> <th>Val9</th> </tr> </thead> <tbody> <tr> <td>L1</td> <td>100</td> <td>400</td> <td>700</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>L2</td> <td>200</td> <td>500</td> <td>800</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>L3</td> <td>300</td> <td>600</td> <td>900</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>L5</td> <td>0</td> <td>0</td> <td>0</td> 
<td>1000</td> <td>1300</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>L6</td> <td>0</td> <td>0</td> <td>0</td> <td>1100</td> <td>1400</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>L7</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>1900</td> <td>2900</td> <td>3500</td> </tr> <tr> <td>L8</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>2000</td> <td>2300</td> <td>3600</td> </tr> <tr> <td>L9</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>2100</td> <td>2400</td> <td>3700</td> </tr> <tr> <td>L10</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>2200</td> <td>2800</td> <td>3900</td> </tr> </tbody> </table></div> <pre><code></code></pre>
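One hedged way to build that grid (variable and file names below are mine): melt each summary to long form, concatenate, and pivot with `fill_value=0`. `pivot_table` sorts "L10" before "L2", so the original row order is restored with `reindex`:

```python
import pandas as pd

data1_summary = pd.DataFrame({"Header": ["L1", "L2", "L3"],
                              "Val1": [100.0, 200.0, 300.0],
                              "Val2": [400.0, 500.0, 600.0],
                              "Val3": [700.0, 800.0, 900.0]})
data2_summary = pd.DataFrame({"Header": ["L5", "L6"],
                              "Val5": [1000.0, 1100.0],
                              "Val6": [1300.0, 1400.0]})
data3_summary = pd.DataFrame({"Header": ["L7", "L8", "L9", "L10"],
                              "Val7": [1900.0, 2000.0, 2100.0, 2200.0],
                              "Val8": [2900.0, 2300.0, 2400.0, 2800.0],
                              "Val9": [3500.0, 3600.0, 3700.0, 3900.0]})
frames = [data1_summary, data2_summary, data3_summary]

# Long form: one row per (Header, label, value) triple.
long = pd.concat(df.melt(id_vars="Header") for df in frames)

grid = long.pivot_table(index="Header", columns="variable",
                        values="value", fill_value=0)

# Restore the L1..L10 order the summaries arrived in.
order = [h for df in frames for h in df["Header"]]
grid = grid.reindex(order)
# grid.to_excel("summary.xlsx")  # one sheet, ready for pivoting in Excel
```

Inside a loop, the same idea works by appending each melted frame to a list and concatenating once at the end.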
<python><pandas><dataframe>
2024-02-09 07:19:17
2
645
Add
77,966,503
11,861,874
Summing up Pandas dataframe in a loop
<p>I am trying to sum up multiple data frames in a loop so that I'll have one data frame that will show the total of all data frames.</p> <p>Please see here the example of what I am trying to achieve.</p> <pre><code>import pandas as pd data1 = {'Header':['L1','L2','L3'], 'Val1':[float(100),float(200),float(300)], 'Val2':[float(400),float(500),float(600)], 'Val3': [float(700),float(800),float(900)]} data1_summary = pd.DataFrame(data=data1) # Inside loop it'll create two more such outputs but with different values but with the same labels. data2 = {'Header':['L1','L2','L3'], 'Val1':[float(1000),float(1100),float(1200)], 'Val2':[float(1300),float(1400),float(1500)], 'Val3': [float(1600),float(1700),float(1800)]} data2_summary = pd.DataFrame(data=data2) data3 = {'Header':['L1','L2','L3'], 'Val1':[float(1900),float(2000),float(2100)], 'Val2':[float(2200),float(2300),float(2400)], 'Val3': [float(2500),float(2600),float(2700)]} data3_summary = pd.DataFrame(data=data3) </code></pre> <p>In the end, I am looking for an output 'data' data frame that sums up all the values of data1 + data2 + data3. I tried pd.sum(), but it adds up the label text as well; I only want to add up the values.</p>
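One hedged approach: with `Header` moved into the index, `+` aligns frames on the labels and only the numeric columns are summed, so the whole loop collapses to a single `sum`:

```python
import pandas as pd

frames = [
    pd.DataFrame({"Header": ["L1", "L2", "L3"],
                  "Val1": [100.0, 200.0, 300.0],
                  "Val2": [400.0, 500.0, 600.0],
                  "Val3": [700.0, 800.0, 900.0]}),
    pd.DataFrame({"Header": ["L1", "L2", "L3"],
                  "Val1": [1000.0, 1100.0, 1200.0],
                  "Val2": [1300.0, 1400.0, 1500.0],
                  "Val3": [1600.0, 1700.0, 1800.0]}),
    pd.DataFrame({"Header": ["L1", "L2", "L3"],
                  "Val1": [1900.0, 2000.0, 2100.0],
                  "Val2": [2200.0, 2300.0, 2400.0],
                  "Val3": [2500.0, 2600.0, 2700.0]}),
]

# With Header as the index, sum() starts from 0 and 0 + DataFrame
# broadcasts elementwise, so the labels never get concatenated.
total = sum(df.set_index("Header") for df in frames).reset_index()
print(total)
```

If the frames arrive one at a time inside the loop, the same idea is `running = running.add(df.set_index("Header"), fill_value=0)`.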
<python><pandas><dataframe>
2024-02-09 07:08:39
1
645
Add
77,966,463
1,616,785
sqlalchemy session.execute with positional arguments error
<p>I am trying to use positional arguments in a lengthy insert query, but it doesn't seem to allow using positional arguments. I am using mysql. Below is the error:</p> <pre><code>sqlalchemy.exc.ArgumentError: List argument must consist only of tuples or dictionaries </code></pre> <p>Sample query with placeholders:</p> <pre><code> query = &quot;insert into test values(?,?,?)&quot; values = (1,2,3) </code></pre> <p>Tried:</p> <pre><code> connection = session.connection() result = connection.execute(text(query), *values) result = session.execute(text(query), *values) result = connection.execute(text(query), values) result = session.execute(text(query), values) changed placeholder. query = &quot;insert into test values(%s,%s,%s)&quot; </code></pre> <p>As the actual query is much bigger, I prefer not to use dict/named arguments.</p>
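`text()` only understands the named `:name` placeholder style, which is why every `?`/`%s` + tuple combination fails. The names don't have to be hand-written, though — for a long query they can be generated from the tuple (the `p0`, `p1`, ... names below are arbitrary):

```python
values = (1, 2, 3)

# Generate :p0, :p1, ... placeholders and the matching parameter dict.
placeholders = ", ".join(f":p{i}" for i in range(len(values)))
params = {f"p{i}": v for i, v in enumerate(values)}

sql = f"insert into test values({placeholders})"
print(sql)     # insert into test values(:p0, :p1, :p2)
print(params)  # {'p0': 1, 'p1': 2, 'p2': 3}

# session.execute(text(sql), params)  # pass the dict as ONE argument, not *values
```

This keeps the call site short no matter how many columns the real insert has.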
<python><sqlalchemy>
2024-02-09 06:57:20
1
1,401
sjd
77,966,444
14,775,478
How to use Python pattern matching to match class types?
<p>How can we use Python's structural pattern matching (introduced in 3.10) to match the type of a variable without invoking the constructor / in a case where a new instantiation of the class is not easily possible?</p> <p>The following code fails:</p> <pre><code>from pydantic import BaseModel # instantiation requires 'name' to be defined class Tree(BaseModel): name: str my_plant = Tree(name='oak') match type(my_plant): case Tree: print('My plant is a tree.') case _: print('error') </code></pre> <p>with error message <code>SyntaxError: name capture 'Tree' makes remaining patterns unreachable</code></p> <p>An alternative attempt was to re-create an instance during matching (dangerous because of instantiation during matching, but worth a shot...) - it also fails:</p> <pre><code>match type(my_plant): case type(Tree()): print('My plant is a tree.') case _: print('error') TypeError: type() accepts 0 positional sub-patterns (1 given) </code></pre> <p>Checking against an instance of <code>Tree()</code> resolves the SyntaxError, but does not lead to working output, because it always produces &quot;error&quot;. I do not want to use the workaround to match against a derived <code>bool</code> (e.g., <code>type(my_plant) == Tree)</code>) because it would limit me to only compare 2 outcomes (True/False) not match against multiple class types.</p>
<python><structural-pattern-matching>
2024-02-09 06:54:50
1
1,690
KingOtto
77,966,378
7,614,968
Why is there an attribute error on the class?
<p>I have a class named test_class that inherits from BaseModel. I am defining a class attribute TEST_VALUE, but there is an attribute error.</p> <pre><code>from pydantic import BaseModel class test_class(BaseModel): &quot;&quot;&quot;load from env&quot;&quot;&quot; TEST_VALUE: str = &quot;&quot; </code></pre> <p>If I do test_class.TEST_VALUE, I get the following error:</p> <pre><code> AttributeError: TEST_VALUE </code></pre>
<python><python-3.x><class><oop><pydantic>
2024-02-09 06:39:07
1
635
palash
77,966,259
572,616
How to build a python wheel with a specific set of dependencies?
<p>I am building a Python wheel and have encountered a problem specifying the dependencies. The <code>install_requires</code> parameter of the <code>setup</code> method receives:</p> <ul> <li><code>vtk_osmesa</code> and</li> <li><code>pyvista</code>.</li> </ul> <p>The problem is that <code>pyvista</code> has <code>vtk</code> as a dependency. However, since I would like to provide a wheel for a headless machine with osmesa, <code>vtk_osmesa</code> is already a dependency, with a different set of libraries.</p> <p>Is there a way to specify that a certain dependency should not be installed with all its sub-dependencies? I would gladly specify all required dependencies using a tool like <a href="https://pip-tools.readthedocs.io/en/latest/cli/pip-compile/" rel="nofollow noreferrer">pip-compile</a>.</p>
<python><pip><vtk><python-wheel><pyvista>
2024-02-09 06:03:20
0
14,083
Woltan
77,966,124
3,678,257
How to log traceback information using structlog
<p>I'm trying to set up an ELK based observability system in our project.</p> <p>So I started with revising a logging system in our Django based project. I have decided to start with <a href="https://www.structlog.org/en/stable/" rel="nofollow noreferrer">structlog</a> in order to obtain structured log files that are shipped to Logstash.</p> <p>I have this in my <code>logging.py</code> as a project wide logging configuration:</p> <pre class="lang-py prettyprint-override"><code>import logging.config import structlog from server.settings.components import BASE_DIR LOGGING = { &quot;version&quot;: 1, &quot;disable_existing_loggers&quot;: False, &quot;formatters&quot;: { &quot;plain&quot;: { &quot;()&quot;: structlog.stdlib.ProcessorFormatter, &quot;processor&quot;: structlog.dev.ConsoleRenderer(), }, &quot;json&quot;: { &quot;()&quot;: structlog.stdlib.ProcessorFormatter, &quot;processor&quot;: structlog.processors.JSONRenderer(), }, }, &quot;handlers&quot;: { &quot;console&quot;: { &quot;class&quot;: &quot;logging.StreamHandler&quot;, &quot;formatter&quot;: &quot;plain&quot;, }, &quot;file&quot;: { &quot;class&quot;: &quot;logging.handlers.RotatingFileHandler&quot;, &quot;filename&quot;: BASE_DIR.joinpath(&quot;logs/structlog.log&quot;), &quot;formatter&quot;: &quot;json&quot;, }, }, &quot;root&quot;: { &quot;level&quot;: &quot;INFO&quot;, &quot;handlers&quot;: [&quot;console&quot;, &quot;file&quot;], }, &quot;loggers&quot;: { &quot;server&quot;: { &quot;handlers&quot;: [&quot;console&quot;, &quot;file&quot;], &quot;level&quot;: &quot;INFO&quot;, &quot;propagate&quot;: False, }, }, } structlog.configure( processors=[ structlog.stdlib.filter_by_level, structlog.stdlib.add_logger_name, structlog.stdlib.add_log_level, structlog.tracebacks.extract, structlog.stdlib.PositionalArgumentsFormatter(), structlog.processors.TimeStamper(fmt=&quot;iso&quot;), structlog.processors.StackInfoRenderer(), structlog.processors.format_exc_info, structlog.processors.JSONRenderer(), 
structlog.stdlib.ProcessorFormatter.wrap_for_formatter, ], context_class=dict, logger_factory=structlog.stdlib.LoggerFactory(), wrapper_class=structlog.stdlib.BoundLogger, cache_logger_on_first_use=True, ) logging.config.dictConfig(LOGGING) </code></pre> <p>Everything seems to be logging fine (and I <strong>really like</strong> the console rich output) but I can't get it to serialize the traceback. All I see in my log file is:</p> <pre class="lang-bash prettyprint-override"><code>{&quot;event&quot;: &quot;Internal Server Error: /healthcheck/error&quot;, &quot;exc_info&quot;: [&quot;&lt;class 'ValueError'&gt;&quot;, &quot;ValueError('This is a ValueError')&quot;, &quot;&lt;traceback object at 0x7f13f61bf340&gt;&quot;]} </code></pre> <p>How can I have Structlog to produce a full serialized traceback object in my log file?</p>
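The opaque `exc_info` list in the file suggests the traceback reaches `JSONRenderer` unprocessed. Recent structlog versions (22.1+) ship a processor built for exactly this, `structlog.processors.dict_tracebacks`, which serializes the exception into JSON-friendly dicts. A hedged fragment of the `configure` call — note `JSONRenderer` is dropped from this list, since with `ProcessorFormatter` the final rendering belongs to the `plain`/`json` formatters defined in `LOGGING`:

```python
import structlog

structlog.configure(
    processors=[
        structlog.stdlib.filter_by_level,
        structlog.stdlib.add_logger_name,
        structlog.stdlib.add_log_level,
        structlog.stdlib.PositionalArgumentsFormatter(),
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.StackInfoRenderer(),
        # Replaces format_exc_info: renders the exception as a list of
        # JSON-serializable dicts (frames, locals, ...) under "exception".
        structlog.processors.dict_tracebacks,
        structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
    ],
    context_class=dict,
    logger_factory=structlog.stdlib.LoggerFactory(),
    wrapper_class=structlog.stdlib.BoundLogger,
    cache_logger_on_first_use=True,
)
```

This is a configuration sketch, not a drop-in guarantee — verify the processor exists in your installed structlog version before deploying.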
<python><logging><python-logging><error-logging>
2024-02-09 05:18:18
1
664
ruslaniv
77,966,031
8,584,998
Check If Computer On Network Is Asleep Without Waking It Up (Python)
<p>I want a quick way to check if a computer on the LAN is awake or not given its IP address (on Windows) without waking it up. I wrote the following, which works:</p> <pre><code>def is_awake(ipOrName, timeout=0.05): try: tt = round(timeout*1000) cmdargs = [&quot;ping&quot;, &quot;-n&quot;, &quot;1&quot;, &quot;-w&quot;, f'{tt}', &quot;-a&quot;, ipOrName] result = subprocess.run(cmdargs, capture_output=True, text=True) if &quot;Received = 1&quot; in result.stdout: return True else: return False except Exception as e: print(f&quot;Error checking IP reachability: {e}&quot;) return False </code></pre> <p>But I noticed that this could be quite slow when running it to sequentially check a lot of IP addresses (30, for example). I can add print statements to the code and see a visible delay before and after the <code>subprocess.run()</code> line for the computers that are turned off (it's essentially instant for the IPs corresponding to computers that are turned on). It almost seems like the timeout parameter isn't being respected, or maybe there's some other reason the <code>subprocess.run()</code> function is taking so long to return. I'm not sure.</p> <p><strong>Is there any better way to do this?</strong> The following is much faster, but I can't use it because it wakes the computer from sleep:</p> <pre><code>def is_reachable(ipOrName, port = 445, timeout=0.05): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.settimeout(timeout) try: s.connect((ipOrName, int(port))) s.shutdown(socket.SHUT_RDWR) return True except: return False finally: s.close() </code></pre> <p>Any ideas?</p> <p>This video demonstrates how slow my first method is: <a href="https://www.youtube.com/shorts/SZSMOucKiuQ" rel="nofollow noreferrer">https://www.youtube.com/shorts/SZSMOucKiuQ</a></p>
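Two things make the sequential scan slow: `-w` only bounds the wait for the ping *reply* (process startup and the reverse-DNS lookup triggered by `-a` are extra, and very small values like 50 ms are not always honored), and those costs are paid one host at a time. Whatever per-host check you settle on, running the checks concurrently hides the per-host latency; a sketch (the helper name is mine):

```python
from concurrent.futures import ThreadPoolExecutor

def check_many(hosts, checker, max_workers=32):
    """Run a per-host check in a thread pool.

    Total wall time becomes roughly the slowest single check instead of
    the sum of all of them (ping is I/O-bound, so threads are fine
    despite the GIL). Returns {host: checker(host)}.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(zip(hosts, pool.map(checker, hosts)))

# usage with the is_awake() from above:
# status = check_many(ip_list, is_awake)
```

Dropping the `-a` flag also removes one reverse-DNS lookup per host, which is often where the visible delay for unreachable machines comes from.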
<python><windows><ip-address><sleep-mode><wakeup>
2024-02-09 04:39:51
1
1,310
EllipticalInitial
77,966,005
10,461,632
Convert a dataframe to a nested dictionary with multiple levels
<p>Is there a cleaner way to convert my dataframe to a nested dictionary with more than one level? If possible, I want to avoid the double <code>for</code> loop that I needed to achieve the result.</p> <pre><code>from pprint import pprint a = pd.DataFrame([ {'col1': 'A', 'col2': 'Person 1', 'height': 1, 'weight': 10}, {'col1': 'A', 'col2': 'Person 1', 'height': 2, 'weight': 20}, {'col1': 'A', 'col2': 'Person 1', 'height': 3, 'weight': 30}, {'col1': 'A', 'col2': 'Person 2', 'height': 4, 'weight': 40}, {'col1': 'A', 'col2': 'Person 2', 'height': 5, 'weight': 50}, {'col1': 'A', 'col2': 'Person 2', 'height': 6, 'weight': 60}, {'col1': 'B', 'col2': 'Person 1', 'height': 11, 'weight': 101}, {'col1': 'B', 'col2': 'Person 1', 'height': 21, 'weight': 201}, {'col1': 'B', 'col2': 'Person 1', 'height': 31, 'weight': 301}, {'col1': 'B', 'col2': 'Person 2', 'height': 41, 'weight': 401}, {'col1': 'B', 'col2': 'Person 2', 'height': 51, 'weight': 501}, {'col1': 'B', 'col2': 'Person 2', 'height': 61, 'weight': 601}, ]) result = {} for col1, j in a.groupby('col1'): result[col1] = {} for col2, n in j.groupby('col2'): result[col1][col2] = n.to_dict(orient='records') pprint(result) </code></pre> <p>Result:</p> <pre><code>{'A': {'Person 1': [{'col1': 'A', 'col2': 'Person 1', 'height': 1, 'weight': 10}, {'col1': 'A', 'col2': 'Person 1', 'height': 2, 'weight': 20}, {'col1': 'A', 'col2': 'Person 1', 'height': 3, 'weight': 30}], 'Person 2': [{'col1': 'A', 'col2': 'Person 2', 'height': 4, 'weight': 40}, {'col1': 'A', 'col2': 'Person 2', 'height': 5, 'weight': 50}, {'col1': 'A', 'col2': 'Person 2', 'height': 6, 'weight': 60}]}, 'B': {'Person 1': [{'col1': 'B', 'col2': 'Person 1', 'height': 11, 'weight': 101}, {'col1': 'B', 'col2': 'Person 1', 'height': 21, 'weight': 201}, {'col1': 'B', 'col2': 'Person 1', 'height': 31, 'weight': 301}], 'Person 2': [{'col1': 'B', 'col2': 'Person 2', 'height': 41, 'weight': 401}, {'col1': 'B', 'col2': 'Person 2', 'height': 51, 'weight': 501}, {'col1': 'B', 
'col2': 'Person 2', 'height': 61, 'weight': 601}]}} </code></pre> <p>I tried to use something like this, but couldn't figure out how to nest <code>col2</code> properly.</p> <pre><code>a.groupby('col1').apply(lambda x: x.set_index( 'col2').to_dict(orient='records')).to_dict() </code></pre>
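The inner loop can be folded away by grouping on both columns at once; `setdefault` builds the first level on demand. A sketch equivalent to the double loop, shown on a trimmed-down frame:

```python
import pandas as pd

a = pd.DataFrame([
    {"col1": "A", "col2": "Person 1", "height": 1, "weight": 10},
    {"col1": "A", "col2": "Person 2", "height": 4, "weight": 40},
    {"col1": "B", "col2": "Person 1", "height": 11, "weight": 101},
])

# groupby on a list of columns yields (col1, col2) tuple keys.
result = {}
for (c1, c2), g in a.groupby(["col1", "col2"]):
    result.setdefault(c1, {})[c2] = g.to_dict(orient="records")

print(result["A"]["Person 2"])
# [{'col1': 'A', 'col2': 'Person 2', 'height': 4, 'weight': 40}]
```

Still one explicit loop, but only one, and it scales to deeper nesting by adding columns to the `groupby` list and `setdefault` levels.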
<python><pandas>
2024-02-09 04:32:04
2
788
Simon1
77,965,998
8,645,552
Data("Subchunk2ID") chunk order in WAV files?
<p>I have a question about the data chunk of WAV files. We know that the chunk order in the WAV files is not fixed(i.e. data chunk is not necessarily the last chunk) and we can only assume that &quot;fmt&quot; chunk must appear before the &quot;data&quot; chunk. And WAV file header can be 44 bytes or 46 bytes or even bigger.</p> <p>For the data chunk(or Subchunk 2), Subchunk2 ID(value is &quot;data&quot;) has 4 bytes, and Subchunk 2 Size has 4 bytes. Is it safe to assume that <strong>Subchunk2 ID</strong>, <strong>Subchunk 2 Size</strong>, and <strong>Subchunk 2 Data</strong> are always ordered like this and right next to each other?</p> <p>i.e. given a byte string of WAV file, is it 100% accurate to find the index of data chunk by using <code>byte_string.find('data'.encode())</code>, add 4 to that value to get data chunk size index, and add 8 to get the data chunk index?</p> <p>For example in Python:</p> <pre class="lang-py prettyprint-override"><code>data_chunk_id_index = byte_string.find('data'.encode()) data_chunk_index = data_chunk_id_index + 8 # 4 bytes each for data id &amp; data size data_chunk_size = int.from_bytes(byte_string[data_chunk_index-4:data_chunk_index], byteorder='little') </code></pre> <p>image source: <a href="http://soundfile.sapp.org/doc/WaveFormat/" rel="nofollow noreferrer">http://soundfile.sapp.org/doc/WaveFormat/</a></p> <p><a href="https://i.sstatic.net/xxepH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xxepH.png" alt="enter image description here" /></a></p>
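The id/size/data layout is fixed by RIFF, so those 8 header bytes always directly precede the chunk data. But `byte_string.find(b'data')` is not 100% safe: the four bytes `data` can legitimately occur earlier, e.g. inside a `LIST`/`INFO` metadata chunk, so the robust approach is to walk the chunks using the size fields (a sketch; note odd-sized chunks carry a pad byte):

```python
import struct

def find_chunk(wav_bytes: bytes, target: bytes = b"data"):
    """Walk RIFF sub-chunks and return (data_offset, size) for `target`,
    or None if absent. Works regardless of chunk order."""
    assert wav_bytes[:4] == b"RIFF" and wav_bytes[8:12] == b"WAVE"
    pos = 12                              # skip the 12-byte RIFF header
    while pos + 8 <= len(wav_bytes):
        cid = wav_bytes[pos:pos + 4]
        size = struct.unpack_from("<I", wav_bytes, pos + 4)[0]
        if cid == target:
            return pos + 8, size
        pos += 8 + size + (size & 1)      # chunks are word-aligned
    return None

# Minimal fabricated WAV: a 16-byte fmt chunk followed by a 4-byte data chunk.
wav = (b"RIFF" + struct.pack("<I", 36 + 4) + b"WAVE"
       + b"fmt " + struct.pack("<I", 16) + bytes(16)
       + b"data" + struct.pack("<I", 4) + b"\x01\x02\x03\x04")
print(find_chunk(wav))  # (44, 4)
```

The same walker finds any chunk, e.g. `find_chunk(wav, b"fmt ")`, which is also how to honor the "fmt before data" ordering rule safely.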
<python><c><audio><wav><audio-streaming>
2024-02-09 04:26:17
0
370
demid
77,965,769
8,616,751
Remove top and right spine in GeoAxesSubplot
<p>I'm trying to remove the top and right spine of a plot, and initially tried</p> <pre><code># Create a figure and axis with PlateCarree projection fig, ax = plt.subplots(figsize=(11, 6), subplot_kw={'projection': ccrs.PlateCarree()}) ax.coastlines() # Reintroduce spines ax.spines['top'].set_visible(True) ax.spines['right'].set_visible(True) ax.set_xticks(range(-180, 181, 30), crs=ccrs.PlateCarree()) ax.set_yticks(range(-90, 91, 30), crs=ccrs.PlateCarree()) # Show the plot plt.show() </code></pre> <p>which gives me this figure, i.e. it clearly didn't work</p> <p><a href="https://i.sstatic.net/jS3RS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jS3RS.png" alt="enter image description here" /></a></p> <p>I then tried to remove the frame and add the two spines I want</p> <pre><code># Create a figure and axis with PlateCarree projection fig, ax = plt.subplots(figsize=(11, 6), subplot_kw={'projection': ccrs.PlateCarree(), 'frameon': False}) ax.coastlines() # Reintroduce spines ax.spines['left'].set_visible(True) ax.spines['bottom'].set_visible(True) ax.set_xticks(range(-180, 181, 30), crs=ccrs.PlateCarree()) ax.set_yticks(range(-90, 91, 30), crs=ccrs.PlateCarree()) # Show the plot plt.show() </code></pre> <p>and this also doesn't quite work - I successfully remove the frame but can't reintroduce the left and bottom spine back</p> <p><a href="https://i.sstatic.net/9p2HS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9p2HS.png" alt="enter image description here" /></a></p> <p>I did see this <a href="https://stackoverflow.com/questions/58223869/cannot-remove-axis-spines-when-using-cartopy-projections-in-matplotlib">post</a> but when I try to apply this to my code</p> <pre><code># Create a figure and axis with PlateCarree projection fig, ax = plt.subplots(figsize=(11, 6), subplot_kw={'projection': ccrs.PlateCarree()}) ax.coastlines() # Reintroduce spines ax.outline_patch.set_visible(False) ax.spines['left'].set_visible(True) 
ax.spines['bottom'].set_visible(True) ax.set_xticks(range(-180, 181, 30), crs=ccrs.PlateCarree()) ax.set_yticks(range(-90, 91, 30), crs=ccrs.PlateCarree()) # Show the plot plt.show() </code></pre> <p>I get the error</p> <pre><code>AttributeError: 'GeoAxes' object has no attribute 'outline_patch' </code></pre> <p>Surely there must be a way to achieve this? Does anyone know how to do this? I'm using python 3.10.</p>
<python><matplotlib><cartopy><map-projections>
2024-02-09 02:48:52
1
303
scriptgirl_3000
77,965,711
16,113,865
How can I detect whether a widget in PyQt5, is a fixed size or not?
<p>I'm working with PyQt5, and I'm looking for a straightforward way to check whether a widget (e.g., QLabel or QPushButton) has a fixed size or not. The direct method isFixedSize() is not universally available.</p> <p>Could you please share any insights or methods that can efficiently determine if a PyQt5 widget has a fixed size?</p> <pre><code>import sys from PyQt5.QtWidgets import QApplication, QWidget, QLabel, QPushButton, QVBoxLayout class ExampleWidget(QWidget): def __init__(self): super().__init__() # Create a label with a fixed size label_fixed_size = QLabel(&quot;label_001&quot;) label_fixed_size.setFixedSize(100, 30) # Create a label with a dynamic size label_dynamic_size = QLabel(&quot;label_002&quot;) # Create a button with a fixed size button_fixed_size = QPushButton(&quot;button_001&quot;) button_fixed_size.setFixedSize(100, 30) # Create a button with a dynamic size button_dynamic_size = QPushButton(&quot;button_002&quot;) # Check if a widget has a fixed size for widget in [label_fixed_size, label_dynamic_size, button_fixed_size, button_dynamic_size]: is_fixed_size_horizontal = widget.sizePolicy().horizontalPolicy() == widget.sizePolicy().Fixed is_fixed_size_vertical = widget.sizePolicy().verticalPolicy() == widget.sizePolicy().Fixed is_fixed_size = is_fixed_size_horizontal and is_fixed_size_vertical print(f&quot;Widget: {widget.text()}, Fixed Size: {is_fixed_size}&quot;) if __name__ == &quot;__main__&quot;: app = QApplication(sys.argv) window = ExampleWidget() window.show() sys.exit(app.exec_()) </code></pre> <p><strong>New code</strong></p> <pre><code>import sys from PyQt5.QtWidgets import QApplication, QWidget, QLabel, QGridLayout, QPushButton, QVBoxLayout, QHBoxLayout import math class DynamicGridLayoutAdjustments(QWidget): def __init__(self): super().__init__() label_fixed_size = QLabel(&quot;Fixed Size Label&quot;) label_fixed_size.setFixedSize(100, 30) label_dynamic_size = QLabel(&quot;Dynamic Size Label&quot;) button_fixed_size = 
QPushButton(&quot;Fixed Size Button&quot;) button_fixed_size.setFixedSize(100, 30) button_dynamic_size = QPushButton(&quot;Dynamic Size Button&quot;) # Layout with QGridLayout for flexible grid arrangement layout = QGridLayout(self) # Add widgets to the layout with row and column parameters layout.addWidget(label_fixed_size, 0, 0) layout.addWidget(label_dynamic_size, 0, 1) layout.addWidget(button_fixed_size, 1, 0) layout.addWidget(button_dynamic_size, 1, 1) # Calculate and set column span and row span based on widget sizes col_span_fixed, row_span_fixed = self.calculate_item_span(label_fixed_size) layout.addWidget(label_fixed_size, 0, 0, row_span_fixed, col_span_fixed) col_span_dynamic, row_span_dynamic = self.calculate_item_span(label_dynamic_size) layout.addWidget(label_dynamic_size, 0, 1, row_span_dynamic, col_span_dynamic) col_span_button_fixed, row_span_button_fixed = self.calculate_item_span(button_fixed_size) layout.addWidget(button_fixed_size, 1, 0, row_span_button_fixed, col_span_button_fixed) col_span_button_dynamic, row_span_button_dynamic = self.calculate_item_span(button_dynamic_size) layout.addWidget(button_dynamic_size, 1, 1, row_span_button_dynamic, col_span_button_dynamic) self.setWindowTitle(&quot;Dynamic Grid Layout with Adjustments&quot;) def calculate_item_span(self, item): content_width = item.sizeHint().width() label_width = item.width() is_fixed_size_horizontal = self.is_fixed_size(item) if is_fixed_size_horizontal or item.sizePolicy().horizontalPolicy() == item.sizePolicy().Fixed: print(&quot;111111 col span&quot;) col_span_value = math.ceil(label_width / 100) # Use an example column width else: print(&quot;col_span 222222&quot;) col_span_value = math.ceil(content_width / 100) # Use an example column width content_height = item.sizeHint().height() label_height = item.height() is_fixed_size_vertical = self.is_fixed_size(item) if is_fixed_size_vertical or item.sizePolicy().verticalPolicy() == item.sizePolicy().Fixed: print(&quot;11111 rwo 
span&quot;) row_span_value = math.ceil(label_height / 30) # Use an example row height else: print(&quot;222222 row span&quot;) row_span_value = math.ceil(content_height / 30) # Use an example row height return col_span_value, row_span_value def is_fixed_size(self, widget): return widget.sizePolicy().horizontalPolicy() == widget.sizePolicy().Fixed if __name__ == &quot;__main__&quot;: app = QApplication(sys.argv) window = DynamicGridLayoutAdjustments() window.show() sys.exit(app.exec_()) </code></pre>
<python><pyqt><pyqt5>
2024-02-09 02:17:53
0
492
tckraomuqnt
77,965,636
1,275,942
PyCharm "Interpreter Paths" -- What are they and where are they stored?
<p>I have a PyCharm project, named <code>pycharm_projects</code>. In it, I create a virtualenv Python interpreter, <code>example_virtualenv</code>. Then I click Interpreters -&gt; Show all -&gt; example_virtualenv -&gt; Show interpreter paths -&gt; Add path. I add <code>S:\example_python_folder</code>, making the list of paths look like the following:</p> <pre><code>C:\Python37\DLLs C:\Python37\Lib C:\Python37 S:\example_pycharm S:\example_pycharm\Lib\site-packages S:\example_python_folder (added by user) </code></pre> <p>Suppose I want to give this configuration to another user without sending them my entire PyCharm project configuration. Is there a way to export from one machine and import to another?</p> <p>I've searched all of <code>pycharm_projects/.idea</code> and <code>example_pycharm/</code>. To my knowledge, neither contains the string <code>example_python_folder</code>, and the only reference to <code>example_virtualenv</code> is here:</p> <p><code>.idea/pycharm_projects.iml</code> contains:</p> <pre class="lang-xml prettyprint-override"><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt; &lt;module type=&quot;PYTHON_MODULE&quot; version=&quot;4&quot;&gt; &lt;component name=&quot;NewModuleRootManager&quot;&gt; &lt;orderEntry type=&quot;jdk&quot; jdkName=&quot;Python 3.7 (example_virtualenv)&quot; jdkType=&quot;Python SDK&quot; /&gt; ... &lt;snip&gt; ... &lt;/component&gt; &lt;/module&gt; </code></pre> <p>But no reference to the path or any configuration. Where is this configuration information stored?</p> <p>(For completeness, the <a href="https://www.jetbrains.com/help/pycharm/installing-uninstalling-and-reloading-interpreter-paths.html" rel="nofollow noreferrer">JetBrains docs</a> explain that they are added to PYTHONPATH and used for resolving modules, but not how they are saved.)</p>
<python><pycharm><jetbrains-ide><pythonpath>
2024-02-09 01:38:55
0
899
Kaia
77,965,595
8,996,209
How to translate a batch of txt files using gpt-4?
<p>I'm working on a Python script to translate plain text files located in a folder from English to Spanish. However, I'm facing an issue where the script doesn't work with SDK version 1.0.0 or newer. I've tried, but get errors like:</p> <pre><code>You tried to access openai.ChatCompletion, but this is no longer supported in openai You tried to access openai.Completion, but this is no longer supported in openai </code></pre> <p>When I try to correct it using information from Stack Overflow, I get errors like this one:</p> <pre><code>AttributeError: module 'openai' has no attribute 'client'. Did you mean: 'Client'? </code></pre> <p>My current script is this one:</p> <pre><code>import os import openai for file_name in os.listdir(folder_path): if file_name.endswith('.txt'): file_path = os.path.join(folder_path, file_name) with open(file_path, 'r') as file: text = file.read() # Translate the text response = openai.ChatCompletion.create( model=&quot;gpt-4&quot;, messages=[ {&quot;role&quot;: &quot;system&quot;, &quot;content&quot;: &quot;You are a helpful professional translator.&quot;}, {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: f&quot;Translate this English text to Spanish: '{text}'&quot;} ] ) translated_text = response['choices'][0]['message']['content'] translated_file_path = os.path.join(folder_translated_path, file_name) with open(translated_file_path, 'w') as translated_file: translated_file.write(translated_text) </code></pre> <p>Thanks in advance.</p>
<python><openai-api>
2024-02-09 01:21:16
1
335
eera5607
77,965,579
8,997,728
Method that generate events in Json not running (Python)
<p>Hi there, I have the Python script below, from a programming course, that generates JSON files (dictionaries) for events (concerts, shows, theatres). It simulates a ticketing website with some additional info to keep track of the number of views, likes and clicks for each event as customers interact with it over time. As an example, please see the payload below that should be generated from this script:</p> <pre><code> { &quot;id&quot;: &quot;5ee31063-e107-404c-8e7c-e4f1e09ea449&quot;, &quot;ip&quot;: &quot;21.33.54.112&quot;, &quot;source&quot;: &quot;android&quot;, &quot;user_id&quot;: &quot;3cc6870b-9f59-4e29-ab3a-ce2052d992db&quot;, &quot;properties&quot;: { &quot;type&quot;: &quot;event_shared&quot;, &quot;recieved_at&quot;: &quot;2023-01-01T03:06:30&quot;, &quot;event_name&quot;: &quot;Fred Again: O2 Academy Brixton&quot; } } </code></pre> <p>The issue is that it keeps saying the &quot;click&quot; module has not been found whenever I run the script &quot;generate.py&quot; below. I am trying to run it from CMD with the command &quot;py -m generate --day 2023-01-01&quot; but keep getting &quot;No module named 'click'&quot;. I was not sure whether I would need a &quot;click&quot; file placed somewhere, as the zip file I received from my teacher has no reference to any &quot;click&quot; method. Any suggestions as to how I should create this &quot;click&quot; file / method?
Also do I need a requirement file and place in there the version for the &quot;click&quot; method?</p> <hr /> <pre><code>from dataclasses import dataclass, asdict import random from datetime import datetime, date from uuid import uuid4 import json from pathlib import Path import os import shutil import click ROOT_DIR = os.path.dirname(os.path.abspath(__file__)) DATA_OUT_PATH = f&quot;{ROOT_DIR}/data&quot; TRACKING_SOURCES = {&quot;android&quot;, &quot;web&quot;, &quot;ios&quot;} SESSION_PROPERTY_VALUES = { &quot;event_viewed&quot;, &quot;event_liked&quot;, &quot;event_shared&quot;, &quot;event_disliked&quot;, } EVENT_NAMES = { &quot;U2: Brixton&quot;, &quot;Shakira: Los Angeles&quot;, &quot;Coldplay: Bristol&quot;, } @dataclass class SessionProperties: type: str recieved_at: str event_name: str @dataclass class Session: id: str ip: str source: str user_id: int properties: SessionProperties def random_ip() -&gt; list[str]: return &quot;.&quot;.join([str(random.randint(0, 255)) for _ in range(4)]) def random_source() -&gt; str: return random.choice(list(TRACKING_SOURCES)) def random_event_property_type() -&gt; str: return random.choice(list(SESSION_PROPERTY_VALUES)) def random_id() -&gt; str: return str(uuid4()) def random_event() -&gt; str: return random.choice(list(EVENT_NAMES)) def random_session_properties(recieved_at: datetime) -&gt; SessionProperties: return SessionProperties( type=random_event_property_type(), recieved_at=recieved_at.isoformat(), event_name=random_event(), ) def write_nd_out(file_name: str, data: list[Session]) -&gt; None: p = Path(os.path.dirname(file_name)) os.makedirs(p, exist_ok=True) with open(file_name, &quot;w&quot;) as f: for session in data: f.write(json.dumps(session)) f.write(&quot;\n&quot;) def generate_hour(day_to_generate: date, hour: int, hourly_path: str): shutil.rmtree(hourly_path, ignore_errors=True) random_number_of_files = random.randint(1, 10) for _ in range(random_number_of_files): random_number_of_sessions = 
random.randint(1, 100) sessions = [] for _ in range(random_number_of_sessions): random_session = Session( id=random_id(), ip=random_ip(), source=random_source(), user_id=random_id(), properties=random_session_properties( datetime( day_to_generate.year, day_to_generate.month, day_to_generate.day, hour, random.randint(0, 59), random.randint(0, 59), ) ), ) sessions.append(asdict(random_session)) write_nd_out(f&quot;{hourly_path}/{random_id()}.json&quot;, sessions) def _root_path(day_to_generate: date) -&gt; str: return f&quot;{DATA_OUT_PATH}/{day_to_generate.strftime('%Y/%m/%d')}&quot; @click.command() @click.option('--day', type=click.DateTime(formats=[&quot;%Y-%m-%d&quot;]), default=str(date.today())) def main(day: datetime): for hour in range(0, 24): hourly_path = f&quot;{_root_path(day.date())}/{hour:02}&quot; generate_hour(day.date(), hour, hourly_path) if __name__ == &quot;__main__&quot;: main() </code></pre>
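From what I can tell, click is a third-party package on PyPI (used by the `@click.command()` decorator at the bottom of the script), not a file I'm supposed to write myself — so I'm guessing the fix is just to install it, and a requirements file would simply list it, something like:

```text
click==8.1.7
```

followed by `py -m pip install -r requirements.txt` (or simply `py -m pip install click`). The exact version pin is my guess; presumably any recent click release works.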
<python><sql><events>
2024-02-09 01:13:39
0
309
Estrobelai
77,965,487
6,162,679
Error when using "LID_USAGE" from the "swmm_api" package in Python?
<p>I am trying to extract individual LID settings for each subcatchment in SWMM using <code>LID_USAGE</code> from the <code>swmm_api</code> package. I have followed the example at <a href="https://markuspichler.gitlab.io/swmm_api/examples/how_to_add_LIDs.html" rel="nofollow noreferrer">https://markuspichler.gitlab.io/swmm_api/examples/how_to_add_LIDs.html</a>, however, an error occurs. I wonder how to fix the code? I have attached the swmm model for your reference, thank you.</p> <p>The <strong>example SWMM model (input file)</strong> is available at <a href="https://1drv.ms/u/s!AnVl_zW00EHemH5cu620m6vxqXdN?e=S5VSCA" rel="nofollow noreferrer">https://1drv.ms/u/s!AnVl_zW00EHemH5cu620m6vxqXdN?e=S5VSCA</a>. I am using swmm-api 0.4.37.</p> <pre><code>from swmm_api import SwmmInput from swmm_api.input_file.section_labels import SUBAREA,SUBCATCHMENTS,LID_CONTROLS,LID_USAGE import numpy as np import pandas as pd inp = SwmmInput.read_file('Example2.inp') inp.LID_CONTROLS # LID_CONTROLS works smoothly! inp.LID_USAGE # LID_USAGE is not working... 
</code></pre> <p><a href="https://i.sstatic.net/iTz1Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iTz1Q.png" alt="enter image description here" /></a></p> <pre><code>inp.LID_USAGE Traceback (most recent call last): File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\helpers.py:780 in from_inp_line return cls(*line_args) TypeError: __init__() takes from 8 to 12 positional arguments but 17 were given During handling of the above exception, another exception occurred: Traceback (most recent call last): Cell In[6], line 1 inp.LID_USAGE File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\inp.py:948 in LID_USAGE return self[LID_USAGE] File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\inp.py:193 in __getitem__ self._convert_section(key) File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\inp.py:214 in _convert_section self._data[key] = convert_section(key, self._data[key], self._converter) File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\helpers.py:939 in convert_section return section_.from_inp_lines(lines) File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\helpers.py:849 in from_inp_lines return cls.create_section(lines) File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\helpers.py:831 in create_section sec.add_inp_lines(lines_iter) File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\helpers.py:360 in add_inp_lines self.add_multiple(*self._section_object._convert_lines(multi_line_args)) File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\helpers.py:866 in _convert_lines yield cls.from_inp_line(*line_args) File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\helpers.py:782 in from_inp_line raise TypeError(f'{e} | 
{cls.__name__}{line_args}\n\n__init__{signature(cls.__init__)}\n\n{getdoc(cls.__init__)}') TypeError: __init__() takes from 8 to 12 positional arguments but 17 were given | LIDUsage('2', 'Bioretention', '1', '5000', '0', '10', '70', '1', '&quot;E:\\Python', 'Examples\\SWMM', 'Calibration\\Example-Yang\\For', 'Calibration\\LID', 'Reports\\S2', 'Bioretention.txt&quot;', '*', '0') __init__(self, subcatchment, lid, n_replicate, area, width, saturation_init, impervious_portion, route_to_pervious=0, fn_lid_report=nan, drain_to=nan, from_pervious=nan) Assignment of LID controls to subcatchments. Args: subcatchment (str): Name of the subcatchment using the LID process. lid (str): Name of an LID process defined in the [LID_CONTROLS] section. n_replicate (int): Number of replicate LID units deployed. area (float): Area of each replicate unit (ft^2 or m^2 ). width (float): Width of the outflow face of each identical LID unit (in ft or m). This parameter applies to roofs, pavement, trenches, and swales that use overland flow to convey surface runoff off of the unit. It can be set to 0 for other LID processes, such as bio-retention cells, rain gardens, and rain barrels that simply spill any excess captured runoff over their berms. saturation_init (float): For bio-retention cells, rain gardens, and green roofs this is the degree to which the unit's soil is initially filled with water (0 % saturation corresponds to the wilting point moisture content, 100 % saturation has the moisture content equal to the porosity). The storage zone beneath the soil zone of the cell is assumed to be completely dry. For other types of LIDs it corresponds to the degree to which their storage zone is initially filled with water impervious_portion (float): Percent of the impervious portion of the subcatchment’s non-LID area whose runoff is treated by the LID practice. 
(E.g., if rain barrels are used to capture roof runoff and roofs represent 60% of the impervious area, then the impervious area treated is 60%). If the LID unit treats only direct rainfall, such as with a green roof, then this value should be 0. If the LID takes up the entire subcatchment then this field is ignored. route_to_pervious (int): A value of 1 indicates that the surface and drain flow from the LID unit should be routed back onto the pervious area of the subcatchment that contains it. This would be a common choice to make for rain barrels, rooftop disconnection, and possibly green roofs. The default value is 0. fn_lid_report (str): Optional name of a file to which detailed time series results for the LID will be written. Enclose the name in double quotes if it contains spaces and include the full path if it is different than the SWMM input file path. Use β€˜*’ if not applicable and an entry for DrainTo follows drain_to (str): Optional name of subcatchment or node that receives flow from the unit’s drain line, if different from the outlet of the subcatchment that the LID is placed in. from_pervious (float): optional percent of the pervious portion of the subcatchment’s non-LID area whose runoff is treated by the LID practice. The default value is 0 </code></pre> <p>.</p>
<python>
2024-02-09 00:36:51
1
922
Yang Yang
77,965,294
2,097,240
Python wait for click
<p>I'm making a script in Python (without an interface) that requires the user to click certain places on the screen so I can retrieve the mouse position.</p> <p>With <code>pyautogui</code> it's pretty easy to get the <code>position</code> of the mouse. Yet, there seems to be no click event, wait-for-click, or anything like that.</p> <p>Considering that the click can be <strong>ANYWHERE</strong> (including outside of the IDE, and even if the IDE is minimized), is there any way to get a click event or actively wait for a click?</p>
<python><onclick><pyautogui>
2024-02-08 23:23:09
0
86,748
Daniel MΓΆller
77,965,288
7,742,512
Trying to train a tensorflow model, but get: ImportError: cannot import name 'tensor' from 'tensorflow.python.framework'
<p>I have been trying to train a pre-trained model for a few days now. I am using:</p> <p>ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8</p> <p>I have downloaded the models, compiled the protos etc... I can get the object detection running in my own script on my webcam.</p> <p>But I can't get it to train.</p> <p>I'm using an up to date Windows 10 machine.</p> <p>Here are my tensorflow installs:</p> <pre><code>tensorboard 2.10.1 tensorboard-data-server 0.6.1 tensorboard-plugin-wit 1.8.1 tensorflow 2.10.1 tensorflow-addons 0.22.0 tensorflow-datasets 4.9.0 tensorflow-estimator 2.10.0 tensorflow-hub 0.16.1 tensorflow-intel 2.10.1 tensorflow-io 0.31.0 tensorflow-io-gcs-filesystem 0.31.0 tensorflow-metadata 1.13.0 tensorflow-model-optimization 0.7.5 tensorflow-text 2.10.0 termcolor 2.4.0 terminado 0.18.0 text-unidecode 1.3 tf-keras 2.15.0 tf-models-official 2.10.1 tf-slim 1.1.0 </code></pre> <p>Here is the code I am trying to run:</p> <pre><code>import os import tensorflow as tf from object_detection import model_main def main(): # Set the paths to your pipeline configuration and model directory pipeline_config_path = r&quot;C:\python\models\ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8\pipeline.config&quot; model_dir = r&quot;C:\python\models\research\object_detection\model_main_tf2.py&quot; # Replace with your desired directory # Run the training process tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO) model_main.tf_main(pipeline_config_path, model_dir) </code></pre> <p>and this is the error I am getting:</p> <pre><code>ImportError: cannot import name 'tensor' from 'tensorflow.python.framework' (C:\Users\admin\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\framework\__init__.py) </code></pre> <p>I am very stuck and I have been trying for a few days to fix this. I have tried:</p> <pre><code>import sys sys.path.append(&quot;/path/to/clone/repo/models/research/slim&quot;) from nets.inception_v4 import * </code></pre> <p>Does anyone have any help to offer in resolving the import issue?</p>
<python><python-3.x><tensorflow>
2024-02-08 23:20:37
0
473
James Morrish
77,964,948
4,707,978
Airflow use case
<p>I have a question about what to use Airflow for with our current system.</p> <p>I have an API which, using GCP Cloud Tasks, triggers tasks like sending a welcome email after 2 days and sending useful notifications. This all lives in our main API repo.</p> <p>Now I want to add, for example: 1) a profile image face detector that notifies users when they use a non-facial profile image, 2) multiple marketing reminder emails, 3) classification of user profiles based on their data, and 4) a lot more of these kinds of background tasks as we grow.</p> <p>It feels wrong to put all this extra logic in the main API repo and trigger it with Cloud Tasks/Cloud Scheduler, especially since some tasks can be reused or linked together. Also, Cloud Tasks are just scheduled/delayed HTTP calls, and the monitoring is not really intuitive.</p> <p>So would Airflow be a good solution to manage these tasks and be future-proof as our system/business needs grow beyond the API?</p>
<python><airflow>
2024-02-08 21:49:58
1
3,431
Dirk
77,964,944
1,183,804
Olive to convert Whisper Model to Onnx
<p>I am trying to convert OpenAi Whisper model to Onnx with Olive, to merge the Model Files into one file, using:</p> <pre><code>python prepare_whisper_configs.py --model_name openai/whisper-tiny.en python -m olive.workflows.run --config whisper_cpu_fp32.json --setup python -m olive.workflows.run --config whisper_cpu_fp32.json </code></pre> <p>Github: <a href="https://github.com/microsoft/Olive" rel="nofollow noreferrer">https://github.com/microsoft/Olive</a></p> <p>Olive Documentation: <a href="https://microsoft.github.io/Olive/overview/quicktour.html" rel="nofollow noreferrer">https://microsoft.github.io/Olive/overview/quicktour.html</a></p> <p>I am getting error:</p> <blockquote> <p>Traceback (most recent call last): File &quot;C:\Users\User\anaconda3\envs\OliveEnv\lib\runpy.py&quot;, line 196, in _run_module_as_main return _run_code(code, main_globals, None, File &quot;C:\Users\User\anaconda3\envs\OliveEnv\lib\runpy.py&quot;, line 86, in <em>run_code exec(code, run_globals) File &quot;C:\Users\User\anaconda3\envs\OliveEnv\lib\site-packages\olive\workflows\run_<em>main</em></em>.py&quot;, line 17, in run(**vars(args)) File &quot;C:\Users\User\anaconda3\envs\OliveEnv\lib\site-packages\olive\workflows\run\run.py&quot;, line 187, in run return engine.run( File &quot;C:\Users\User\anaconda3\envs\OliveEnv\lib\site-packages\olive\engine\engine.py&quot;, line 347, in run run_result = self.run_accelerator( File &quot;C:\Users\User\anaconda3\envs\OliveEnv\lib\site-packages\olive\engine\engine.py&quot;, line 412, in run_accelerator return self.run_no_search( File &quot;C:\Users\User\anaconda3\envs\OliveEnv\lib\site-packages\olive\engine\engine.py&quot;, line 483, in run_no_search should_prune, signal, model_ids = self._run_passes( File &quot;C:\Users\User\anaconda3\envs\OliveEnv\lib\site-packages\olive\engine\engine.py&quot;, line 887, in _run_passes model_config, model_id = self._run_pass( File 
&quot;C:\Users\User\anaconda3\envs\OliveEnv\lib\site-packages\olive\engine\engine.py&quot;, line 985, in _run_pass output_model_config = host.run_pass(p, input_model_config, data_root, output_model_path, pass_search_point) File &quot;C:\Users\User\anaconda3\envs\OliveEnv\lib\site-packages\olive\systems\local.py&quot;, line 32, in run_pass output_model = the_pass.run(model, data_root, output_model_path, point) File &quot;C:\Users\User\anaconda3\envs\OliveEnv\lib\site-packages\olive\passes\olive_pass.py&quot;, line 367, in run output_model = self._run_for_config(model, data_root, config, output_model_path) File &quot;C:\Users\User\anaconda3\envs\OliveEnv\lib\site-packages\olive\passes\onnx\conversion.py&quot;, line 58, in _run_for_config return self._convert_model_on_device(model, data_root, config, output_model_path, &quot;cpu&quot;) File &quot;C:\Users\User\anaconda3\envs\OliveEnv\lib\site-packages\olive\passes\onnx\conversion.py&quot;, line 73, in <em>convert_model_on_device component_model = model.get_component(component_name) File &quot;C:\Users\User\anaconda3\envs\OliveEnv\lib\site-packages\olive\model_<em>init</em></em>.py&quot;, line 696, in get_component user_module_loader = UserModuleLoader(self.model_script, self.script_dir) File &quot;C:\Users\User\anaconda3\envs\OliveEnv\lib\site-packages\olive\common\user_module_loader.py&quot;, line 22, in <strong>init</strong> self.user_module = import_user_module(user_script, script_dir) File &quot;C:\Users\User\anaconda3\envs\OliveEnv\lib\site-packages\olive\common\import_lib.py&quot;, line 39, in import_user_module return import_module_from_file(user_script) File &quot;C:\Users\User\anaconda3\envs\OliveEnv\lib\site-packages\olive\common\import_lib.py&quot;, line 27, in import_module_from_file spec.loader.exec_module(new_module) File &quot;&quot;, line 883, in exec_module File &quot;&quot;, line 241, in <em>call_with_frames_removed File &quot;C:\OpenAI\Olive\examples\whisper\code\user_script.py&quot;, line 10, in 
from olive.model import PyTorchModelHandler ImportError: cannot import name 'PyTorchModelHandler' from 'olive.model' (C:\Users\User\anaconda3\envs\OliveEnv\lib\site-packages\olive\model_<em>init</em></em>.py)</p> </blockquote> <p>I have a Conda environment: OliveEnv</p> <p>Packages installed:</p> <pre><code>(OliveEnv) C:\OpenAI\Olive\examples\whisper&gt;pip list Package Version ---------------------- ---------- alembic 1.13.1 certifi 2024.2.2 charset-normalizer 3.3.2 colorama 0.4.6 coloredlogs 15.0.1 colorlog 6.8.2 contextlib2 21.6.0 contourpy 1.2.0 cycler 0.12.1 Deprecated 1.2.14 filelock 3.13.1 flatbuffers 23.5.26 fonttools 4.48.1 fsspec 2024.2.0 greenlet 3.0.3 huggingface-hub 0.20.3 humanfriendly 10.0 idna 3.6 Jinja2 3.1.3 joblib 1.3.2 kiwisolver 1.4.5 Mako 1.3.2 MarkupSafe 2.1.5 matplotlib 3.8.2 mpmath 1.3.0 networkx 3.2.1 neural-compressor 2.4.1 numpy 1.26.4 olive-ai 0.3.2 onnx 1.15.0 onnxruntime 1.17.0 opencv-python-headless 4.9.0.80 optuna 3.5.0 packaging 23.2 pandas 2.2.0 pillow 10.2.0 pip 23.3.1 prettytable 3.9.0 protobuf 3.20.3 psutil 5.9.8 py-cpuinfo 9.0.0 pycocotools 2.0.7 pydantic 1.10.14 pyparsing 3.1.1 pyreadline3 3.4.1 python-dateutil 2.8.2 pytz 2024.1 PyYAML 6.0.1 regex 2023.12.25 requests 2.31.0 safetensors 0.4.2 schema 0.7.5 scikit-learn 1.4.0 scipy 1.12.0 setuptools 68.2.2 six 1.16.0 SQLAlchemy 2.0.25 sympy 1.12 tabulate 0.9.0 threadpoolctl 3.2.0 tokenizers 0.15.1 torch 2.2.0 torchmetrics 0.10.3 tqdm 4.66.1 transformers 4.37.2 typing_extensions 4.9.0 tzdata 2023.4 urllib3 2.2.0 wcwidth 0.2.13 wheel 0.41.2 wrapt 1.16.0 </code></pre> <p>I have tried three different versions of olive, 0.3.1, 0.3.2, 0.4.0</p> <p>A very good video, showing the use of Olive is MS Build:</p> <p><a href="https://www.youtube.com/watch?v=7_0N1VL5ZmA" rel="nofollow noreferrer">https://www.youtube.com/watch?v=7_0N1VL5ZmA</a> <a href="https://www.youtube.com/watch?v=Qj-l0tGKPf8" rel="nofollow noreferrer">https://www.youtube.com/watch?v=Qj-l0tGKPf8</a></p>
<python><c#><onnx><olive>
2024-02-08 21:49:08
1
2,710
Rusty Nail
77,964,915
7,347,925
How to use axline with log scale axis?
<p>I'm trying to plot a linear line through zero:</p> <pre><code>fig, ax = plt.subplots() l2 = ax.axline(xy1=(0, 0), slope=0.3) ax.set_xlim(0.1, 10) ax.set_ylim(0.1, 10) </code></pre> <p>It works well:</p> <p><a href="https://i.sstatic.net/iT3qd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iT3qd.png" alt="example" /></a></p> <p>However, if I change the scale to <code>log</code>, it would be empty:</p> <pre><code>fig, ax = plt.subplots() l2 = ax.axline(xy1=(0, 0), slope=0.3) ax.set_xlim(0.1, 10) ax.set_ylim(0.1, 10) ax.set_xscale('log') ax.set_yscale('log') </code></pre> <p><a href="https://i.sstatic.net/82EwK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82EwK.png" alt="example2" /></a></p>
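For now I can fall back to drawing the same line from explicitly sampled points, which does show up on the log-log axes (a workaround sketch, not using `axline`):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
x = np.logspace(-1, 1, 50)   # sample points spanning the 0.1..10 view
ax.plot(x, 0.3 * x)          # same y = 0.3x line, drawn from explicit points
ax.set_xlim(0.1, 10)
ax.set_ylim(0.1, 10)
ax.set_xscale('log')
ax.set_yscale('log')
```

I suspect `axline` vanishes because `xy1=(0, 0)` has no representation in log coordinates, but I'd still like a way to keep using `axline` itself.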
<python><matplotlib><line-plot>
2024-02-08 21:42:51
1
1,039
zxdawn
77,964,875
23,260,297
Future Warning with groupby in pandas
<p>I have a Python script that reads in data from a CSV file.</p> <p>The code runs fine, but every time it runs I get this warning message:</p> <pre><code> FutureWarning: A grouping was used that is not in the columns of the DataFrame and so was excluded from the result. This grouping will be included in a future version of pandas. Add the grouping as a column of the DataFrame to silence this warning. </code></pre> <p>The message points me to this code:</p> <pre><code> formatted_df = {k: 'first' for k in df.columns} | {'Commodity': tuple, 'FixedPriceStrike': tuple, 'Quantity': tuple} group = (df['TradeID'].str.strip().ne('') | df['TradeDate'].str.strip().ne('')).cumsum() df = df.groupby(group, as_index=False).agg(formatted_df) </code></pre> <p>Here is some sample data:</p> <pre><code>TradeID TradeDate Commodity StartDate ExpiryDate FixedPrice Quantity MTMValue -------- ---------- --------- --------- ---------- ---------- -------- --------- aaa 01/01/2024 commodity1 01/01/2024 01/01/2024 10.00 10 100.00 commodity2 10.00 10 bbb 01/01/2024 commodity1 01/01/2024 01/01/2024 10.00 10 100.00 commodity2 10.00 10 ccc 01/01/2024 commodity1 01/01/2024 01/01/2024 10.00 10 100.00 commodity2 10.00 10 </code></pre> <p>and here is the expected output:</p> <pre><code>TradeID TradeDate Commodity StartDate ExpiryDate FixedPrice Quantity MTMValue -------- ---------- --------- --------- ---------- ---------- -------- --------- aaa 01/01/2024 (com1,com2) 01/01/2024 01/01/2024 (10,10) (10,10) 100.00 bbb 01/01/2024 (com1,com2) 01/01/2024 01/01/2024 (10,10) (10,10) 100.00 ccc 01/01/2024 (com1,com2) 01/01/2024 01/01/2024 (10,10) (10,10) 100.00 </code></pre> <p>I am unsure how to get rid of this warning, since I do not want to just add whatever grouping it is telling me to add to the dataframe. Any suggestions on how to get rid of this warning, or a different approach to getting the desired output?</p>
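In case it helps, here is a minimal, self-contained reproduction with made-up data of the same shape as mine; grouping by a Series that isn't one of the frame's columns seems to be what triggers the warning:

```python
import pandas as pd

# Toy frame shaped like my real data: a blank TradeID marks a continuation row.
df = pd.DataFrame({
    "TradeID":   ["aaa", "", "bbb", ""],
    "Commodity": ["commodity1", "commodity2", "commodity1", "commodity2"],
    "Quantity":  ["10", "10", "10", "10"],
})

# One group per non-blank TradeID (same cumsum trick as my script).
group = df["TradeID"].str.strip().ne("").cumsum().rename("grp")

spec = {k: "first" for k in df.columns} | {"Commodity": tuple, "Quantity": tuple}

# Grouping by a Series that is not a column of df -> FutureWarning on pandas 2.2
out = df.groupby(group, as_index=False).agg(spec)
print(out)
```

If I first assign `group` as a column and group by its name, the warning seems to go away — but that is exactly the "add it as a column" step I was hoping to avoid.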
<python><pandas><dataframe>
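The warning in the question above is triggered by grouping on a Series that is not a column of the frame. A minimal, hedged sketch of one way to silence it is to materialise the grouping as a throwaway column before calling `groupby`; the sample frame below is a reduced, invented stand-in using only the TradeID/TradeDate/Commodity/Quantity columns from the post:

```python
import pandas as pd

# Reduced stand-in for the question's data: a blank TradeID/TradeDate
# marks a continuation row belonging to the trade above it.
df = pd.DataFrame({
    "TradeID":   ["aaa", "",  "bbb", ""],
    "TradeDate": ["01/01/2024", "", "01/01/2024", ""],
    "Commodity": ["commodity1", "commodity2", "commodity1", "commodity2"],
    "Quantity":  ["10", "10", "10", "10"],
})

# Materialise the grouping as a real column so pandas has nothing to warn about.
df["_grp"] = (df["TradeID"].str.strip().ne("")
              | df["TradeDate"].str.strip().ne("")).cumsum()

agg_spec = {"TradeID": "first", "TradeDate": "first",
            "Commodity": tuple, "Quantity": tuple}
out = (df.groupby("_grp", as_index=False)
         .agg(agg_spec)
         .drop(columns="_grp"))   # discard the helper column afterwards
print(out)
```

The helper column is dropped from the result, so the output shape matches the expected table in the question; whether this is preferable to simply waiting for the future pandas behaviour is a judgment call.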
2024-02-08 21:32:40
1
2,185
iBeMeltin
77,964,858
10,589,070
DuckDb Hash a Record
<p>I have a use case where I want to check for conflicts for slowly changing dimensions. In theory, I should be able to do this by hashing a record of the original table and compare it to a hash of the new record coming in if the ID's match.</p> <p>In SQL I can do this with a binary checksum and just provide a list of column names.</p> <p>In DuckDb it appears the hash functions are all single column. Is there a convenient way to do this that isn't concatenating all of the columns together or comparing each column individually?</p> <p>Alternatively, I could hash the rows outside of DuckDb in Python, but as the data is stored in Parquet, it is very convenient if I can just keep this all in DuckDb.</p>
<python><duckdb>
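For the DuckDB question above, if hashing outside the database is acceptable (the post explicitly allows a Python fallback), whole records can be hashed over a chosen column list with the standard library alone. The column names and row values below are invented for illustration:

```python
import hashlib

def row_hash(row: dict, cols: list) -> str:
    # Join a canonical representation of only the selected columns with a
    # separator unlikely to appear in data, then hash the record at once.
    payload = "\x1f".join(repr(row[c]) for c in cols)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

old = {"id": 1, "name": "widget", "price": 9.99}
new = {"id": 1, "name": "widget", "price": 10.49}

# Compare hashes of the tracked columns to detect a changed dimension row.
changed = row_hash(old, ["name", "price"]) != row_hash(new, ["name", "price"])
```

Inside DuckDB itself, hashing a struct built from the columns may avoid manual concatenation, but that should be verified against the current DuckDB function reference rather than assumed.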
2024-02-08 21:28:40
2
446
krewsayder
77,964,700
45,139
Tkinter not working on Debian 12 Bookworm, Python 3.11.5
<p>As the title states, I can't get tkinter code running. I get the error:</p> <blockquote> <p>No module named '_tkinter'</p> </blockquote> <blockquote> <p>import _tkinter # If this fails your Python may not be configured for Tk</p> </blockquote> <p>I'm running Debian 12 Bookworm, and Python 3.11.5, and after trying the suggestions on this site like: <code>sudo apt-get install python3-tk</code> or <code>sudo apt-get install python3.11-tk</code> it tells me that python3-tk is already the newest version.</p> <p>I've gone so far as to create a droplet on digital ocean for debian 12, and it had python 3.11.2 installed, but I was able to add the package and run the import statement.</p> <p>Did I break or misconfigure my python installation somehow? If so, is there some way to fix it short of wiping and reinstalling the OS?</p> <p>Thanks for any advice!</p> <p>EDIT: I've reduced the code to just this and getting the errors above: <code>import tkinter as tk</code></p> <p>At first I was using a virtual environment via <code>python3 -m venv venv</code> but to simplify things I'm just trying to run the above import statement in the terminal with <code>python3 tk_test.py</code></p> <p>Results of commands from comments:</p> <p><code>which python3</code> -&gt; <code>/usr/local/bin/python3</code></p> <p><code>ls /usr/lib/python*/lib-dynload/_tkinter.*</code> -&gt; <code>/usr/lib/python3.11/lib-dynload/_tkinter.cpython-311d-x86_64-linux-gnu.so /usr/lib/python3.11/lib-dynload/_tkinter.cpython-311-x86_64-linux-gnu.so</code></p> <p>EDIT: closing this question as I've fixed it by compiling python 3.11.8 from source. I can only assume I broke my python install at some point. I used these steps to compile and replace my installation with python 3.11.8: <a href="https://fostips.com/install-python-3-10-debian-11/" rel="nofollow noreferrer">https://fostips.com/install-python-3-10-debian-11/</a></p>
<python><linux><tkinter><debian><python-3.11>
2024-02-08 20:48:01
0
3,947
LoveMeSomeCode
77,964,626
9,806,500
shinylive pyodide cannot install shiny package for a quarto dashboard
<p>I am attempting to embed a shinylive app in a quarto dashboard. App will let the user selectize filter a categorical variable in a seaborn barplot.</p> <p>However, I receive the following error in the panel where the app <em>should</em> appear:</p> <pre><code>Error starting app! Traceback (most recent call last): File &quot;&lt;exec&gt;&quot;, line 362, in _start_app ModuleNotFoundError: The module 'shiny' is included in the Pyodide distribution, but it is not installed. You can install it by calling: await micropip.install(&quot;shiny&quot;) in Python, or await pyodide.loadPackage(&quot;shiny&quot;) in JavaScript See https://pyodide.org/en/stable/usage/loading-packages.html for more details. </code></pre> <p>Currently, in my <strong><code>_quarto.yml</code></strong></p> <pre><code>project: type: website output-dir: public website: title: &quot;title&quot; navbar: left: - href: index.qmd text: Home - href: about.qmd text: About format: dashboard filters: - shinylive jupyter: python3 </code></pre> <p>and my <strong><code>index.qmd</code></strong></p> <pre><code>```{python py_load_deps} #| echo: false # data munging &amp; viz import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib.dates as mdates import seaborn as sns import plotly.express as px ``` ## Row Some explainer text. 
## Row ```{shinylive-python} #| standalone: true #| title: Grouped Bar Plot - NOV 2023 ## file: dat/filename.xlsx from shiny import * import pandas as pd import numpy as np import seaborn as sns from pathlib import Path # UI app_ui = ui.page_fluid( ui.input_select( &quot;select_items&quot;, &quot;select items:&quot;, dat().item.unique().tolist(), multiple = True ), ui.output_plot(&quot;plot&quot;) ) # server def server(input, output, session): @reactive.calc def dat(): v_path = Path(__file__).parent.resolve() / &quot;dat/filename.xlsx&quot; dict_types = {&quot;item&quot;: str, &quot;value&quot;: np.float64, &quot;value2&quot;: np.float64} df_summary = pd.read_excel(v_path, sheet_name=&quot;sheet_name&quot;, usecols=&quot;A:D&quot;, dtype=dict_types) df_summary_grp = df_summary.melt(id_vars='item', value_vars=['value1', 'value2']) df_summary_grp = df_summary_grp.reset_index() return df_summary_grp @reactive.calc def items(): return dat().item.unique().tolist() @output @render.plot(alt=&quot;Grouped Bar of Items&quot;) def plot(): g_ax_item_grp = sns.barplot(y='item', x='values', hue='variable', data=dat().loc[dat()[['item']].isin[input.select_item()]]) g_ax_item_grp.set_xlabel(None) sns.move_legend(g_ax_item_grp, loc=&quot;lower center&quot;, bbox_to_anchor=(-.4,-.3), ncol=2, title=None, frameon=False) return g_ax_item_grp # app app = App(app_ui, server) ``` </code></pre> <p>I have verified that all packages are installed, and have tried adding <code>micropip.install('shiny')</code> to the shinylive block as specified in the error message, but to no avail.</p>
<python><quarto><py-shiny><py-shinylive>
2024-02-08 20:32:35
0
635
M. Wood
77,964,507
16,179,502
Google drive python SDK throwing SSL errors
<p>I have the following method being used in my Flask app to create folders in my Google drive if they do not exist. If they do exist then their folder_id is returned.</p> <pre><code>def find_or_create_folder_in_google_drive(name, parent_id=None): query = f&quot;name='{name}'&quot; if parent_id: query += f&quot; and '{parent_id}' in parents&quot; query += &quot; and mimeType='application/vnd.google-apps.folder'&quot; query += &quot; and trashed=false&quot; response = ( GOOGLE_DRIVE_SERVICE.files() .list(q=query, spaces=&quot;drive&quot;, fields=&quot;files(id, name)&quot;) .execute() ) files = response.get(&quot;files&quot;, []) if not files: file_metadata = {&quot;name&quot;: name, &quot;mimeType&quot;: &quot;application/vnd.google-apps.folder&quot;} if parent_id: file_metadata[&quot;parents&quot;] = [parent_id] folder = ( GOOGLE_DRIVE_SERVICE.files() .create(body=file_metadata, fields=&quot;id&quot;) .execute() ) return folder.get(&quot;id&quot;) return files[0].get(&quot;id&quot;) </code></pre> <p>This snippet works flawlessly, tested many times, when sending in a payload from Postman and Thunder Client VSCode extension. However issues arise when sending the same payload from my frontend react app to my flask app. 
The following exception is thrown</p> <pre><code> File &quot;path/backend/storage_manager/google/prompts.py&quot;, line 19, in create_or_get_prompts_path_google_sheets root_folder_id = find_or_create_folder_in_google_drive(ROOT_FOLDER) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;path/backend/storage_manager/google_services/get_drive_paths.py&quot;, line 15, in find_or_create_folder_in_google_drive .execute() ^^^^^^^^^ File &quot;path/backend/.venv/lib/python3.11/site-packages/googleapiclient/_helpers.py&quot;, line 130, in positional_wrapper return wrapped(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;path/backend/.venv/lib/python3.11/site-packages/googleapiclient/http.py&quot;, line 923, in execute resp, content = _retry_request( ^^^^^^^^^^^^^^^ File &quot;path/backend/.venv/lib/python3.11/site-packages/googleapiclient/http.py&quot;, line 222, in _retry_request raise exception File &quot;path/backend/.venv/lib/python3.11/site-packages/googleapiclient/http.py&quot;, line 191, in _retry_request resp, content = http.request(uri, method, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;path/backend/.venv/lib/python3.11/site-packages/google_auth_httplib2.py&quot;, line 218, in request response, content = self.http.request( ^^^^^^^^^^^^^^^^^^ File &quot;path/backend/.venv/lib/python3.11/site-packages/httplib2/__init__.py&quot;, line 1724, in request (response, content) = self._request( ^^^^^^^^^^^^^^ File &quot;path/backend/.venv/lib/python3.11/site-packages/httplib2/__init__.py&quot;, line 1444, in _request (response, content) = self._conn_request(conn, request_uri, method, body, headers) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;path/backend/.venv/lib/python3.11/site-packages/httplib2/__init__.py&quot;, line 1396, in _conn_request response = conn.getresponse() ^^^^^^^^^^^^^^^^^^ File 
&quot;/usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py&quot;, line 1378, in getresponse response.begin() File &quot;/usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py&quot;, line 318, in begin version, status, reason = self._read_status() ^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py&quot;, line 279, in _read_status line = str(self.fp.readline(_MAXLINE + 1), &quot;iso-8859-1&quot;) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/socket.py&quot;, line 706, in readinto return self._sock.recv_into(b) ^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py&quot;, line 1278, in recv_into return self.read(nbytes, buffer) ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py&quot;, line 1134, in read return self._sslobj.read(len, buffer) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ssl.SSLError: [SSL] record layer failure (_ssl.c:2576) </code></pre> <p>Which stems from this line</p> <pre><code>response = ( GOOGLE_DRIVE_SERVICE.files() .list(q=query, spaces=&quot;drive&quot;, fields=&quot;files(id, name)&quot;) .execute() ) </code></pre> <p>Any thoughts would be appreciated. Not sure what else to do here as it's working fine with API calling tools but not with frontend app. Maybe a CORS issue?</p> <p>I am using the following google dependencies for reference</p> <pre><code>google-api-core==2.16.1 google-api-python-client==2.116.0 google-auth==2.27.0 google-auth-httplib2==0.2.0 google-auth-oauthlib==1.2.0 googleapis-common-protos==1.62.0 </code></pre>
<python><flask><google-drive-api>
2024-02-08 20:10:51
0
349
Malice
77,964,493
940,158
Emulating an empty queryset with an abstract model
<p>I have a use case in which I have a ListAPIView that connects to a 3rd party (Stripe) API, fetches data (invoices) and returns that data to the user. I have a serializer, but I don't have a model.</p> <p>The entire code looks something like this:</p> <pre class="lang-py prettyprint-override"><code>class InvoicesList(generics.ListAPIView): serializer_class = InvoiceSerializer def get_queryset(self): if getattr(self, 'swagger_fake_view', False): return # &lt;---- ¿?¿?¿?¿?¿?¿?¿?¿? return StripeWrapper().get_invoices() class InvoiceSerializer(serializers.Serializer): ...fields.. ...fields... ...fields class StripeWrapper(): def get_invoices(): return requests.get(......) </code></pre> <p>Since I don't have a model, <code>drf-spectacular</code> refuses to generate the proper openapi specs. It expects to receive an <code>EmptyQuerySet</code> (<code>SomeModel.objects.none()</code>), but I can't provide it any since I don't have an <code>Invoice</code> model. I could create an abstract model like this:</p> <pre class="lang-py prettyprint-override"><code>class Invoice(models.Model): class Meta: abstract = True </code></pre> <p>but I still won't be able to provide <code>drf-spectacular</code> with an <code>Invoice.objects.none()</code> since there is no manager in that class (and there can't be since it's abstract).</p> <p>How can I &quot;emulate&quot; (¿?) or &quot;generate&quot; an <code>EmptyQuerySet</code> so I can work around this issue?</p>
<python><django><drf-spectacular>
2024-02-08 20:07:18
0
15,217
alexandernst
77,964,446
11,277,108
Designing a model that can cope with dynamic field names that follow a set pattern
<p>I'd like to create a Pydantic model for the following example dictionary:</p> <pre><code>{&quot;response_code_count/R1&quot;: 23, &quot;response_code_count/PX1&quot;: 58, &quot;some_other_field&quot;: &quot;some_other_value&quot;} </code></pre> <p>The challenge is that whilst the <code>response_code_count</code> keys follow the same format, the actual code after the forward slash can be anything.</p> <p>Is there a way to use a model structure that converts all <code>response_code_count</code> key/value pairs into a list of sub-models?</p> <p>Something like the below:</p> <pre><code>from pydantic import BaseModel from typing import List class ResponseCodeCount(BaseModel): response_code: str count: int class Response(BaseModel): response_code_counts: List[ResponseCodeCount] some_other_field: str </code></pre> <p>If not then what would be a way of designing my models such that I can validate and then access dynamic input keys that follow a consistent pattern?</p>
<python><pydantic>
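For the dynamic-keys question above, one approach is to reshape the raw dictionary before validation so the fixed-shape model in the post can consume it. A stdlib sketch of that reshaping step (the prefix and sample payload are taken from the question):

```python
PREFIX = "response_code_count/"

def restructure(data: dict) -> dict:
    # Collect every dynamic "response_code_count/<CODE>" key into a list of
    # sub-records and pass the remaining keys through unchanged.
    counts = [{"response_code": k[len(PREFIX):], "count": v}
              for k, v in data.items() if k.startswith(PREFIX)]
    rest = {k: v for k, v in data.items() if not k.startswith(PREFIX)}
    return {"response_code_counts": counts, **rest}

raw = {"response_code_count/R1": 23,
       "response_code_count/PX1": 58,
       "some_other_field": "some_other_value"}
parsed = restructure(raw)
```

In Pydantic v2 this function could plausibly be wired in as a `model_validator(mode="before")` on the `Response` model, so the `ResponseCodeCount` sub-models are still validated; treat that wiring as an assumption to confirm against the Pydantic documentation.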
2024-02-08 19:57:19
1
1,121
Jossy
77,964,234
3,107,858
Meaning of VideoCapture prop definitions not in the opencv documentation
<p>Is there any way to tell the meaning of video capture properties that aren't defined in the official documentation? For example,</p> <pre><code>import cv2 cap = cv2.VideoCapture(filename) </code></pre> <p>Video properties can be polled with either: <code>cap.get(cv2.CAP_PROP_FPS)</code> or <code>cap.get(5)</code></p> <p>The opencv <a href="https://docs.opencv.org/3.4/d4/d15/group__videoio__flags__base.html#gaeb8dd9c89c10a5c63c139bf7c4f5704d" rel="nofollow noreferrer">documentation</a> defines all the props up to 51. But there are metadata properties at index 55, 68, 69 and 70 that don't have a corresponding definition in the documentation. Are they defined somewhere else? Perhaps in an mp4 or h.264 spec?</p>
<python><opencv><video-capture>
2024-02-08 19:13:00
1
3,230
DanGoodrick
77,964,071
10,969,942
Seeking Cython Optimization for Conditional Branching: Is There an Equivalent to switch?
<p>I am currently working on a Python project that I need to rewrite in Cython to enhance performance. In this Python code, there is a segment that uses a series of <code>if/elif</code> statements to determine the execution path for a task, <strong>which involves hundreds of branches. Additionally, this segment of code will be executed repeatedly.</strong> I am looking for a way to rewrite this in Cython that might offer performance improvements similar to the <code>switch</code> statement found in other languages.</p> <p>For context, languages like <a href="https://stackoverflow.com/questions/10287700/difference-between-jvms-lookupswitch-and-tableswitch">Java implement</a> the switch statement in such a way that its time complexity can be O(1) for <code>tableswitch</code> or O(log n) for <code>lookupswitch</code>, depending on the sparsity of cases. This is achieved through compiler optimizations that are not present in the linear time complexity (O(n)) of chained <code>if/else</code> if statements. Similarly, <a href="https://stackoverflow.com/a/38961636/10969942">C/C++ compilers</a> can optimize switch statements to enhance performance <strong>(by jump table O(1) and binary search O(log n))</strong>.</p> <p>Given that Cython compiles code to C, I am wondering if there is a syntax or construct in Cython that behaves like the switch statement, potentially benefiting from compiler optimizations to avoid the linear time complexity. Does Cython offer any such feature, or are there recommended practices for achieving similar performance improvements when dealing with a large number of conditional branches?</p> <p><strong>Please note that in Python 3.10, a syntax resembling the switch statement, known as match/case, was introduced. 
However, it lacks compiler optimizations, maintaining a <a href="https://stackoverflow.com/questions/68476576/python-match-case-switch-performance">time complexity of O(n)</a>.</strong></p> <p>I have also considered using a Python dictionary and encapsulating each <code>if/elif</code> branch into a separate function to determine the branching function, as shown below:</p> <pre class="lang-py prettyprint-override"><code>func_map = { 0: func0, 1: func1, 3: func3, 10: func10, } </code></pre> <p>However, I found that the overhead of using a Python dictionary is too significant. Additionally, the functions <code>func0, func1, func3</code> etc., have <strong>different parameter lists</strong> (both in terms of the number of parameters and their names), which makes the approach quite inelegant.</p>
<python><c><switch-statement><cython><compiler-optimization>
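Before reaching for Cython, the dict-dispatch idea the question dismisses can at least be made to tolerate the differing parameter lists by normalising every branch behind `*args`/`**kwargs`. This is a plain-Python sketch with invented branch functions; it does not reproduce a C jump table, and whether a typed Cython construct (e.g. an array of `cdef` function pointers) would beat it is an assumption to benchmark, not a claim:

```python
def branch_0(x):          # each former if/elif body becomes its own function
    return x + 1

def branch_1(x, y):
    return x * y

def branch_3(*, name):    # keyword-only signature, also absorbed below
    return f"hello {name}"

DISPATCH = {0: branch_0, 1: branch_1, 3: branch_3}

def run(code, *args, **kwargs):
    # One hash lookup replaces the O(n) if/elif chain; the differing
    # parameter lists are absorbed by *args/**kwargs at the call site.
    try:
        handler = DISPATCH[code]
    except KeyError:
        raise ValueError(f"no branch for code {code!r}") from None
    return handler(*args, **kwargs)
```

If the dispatch dict is built once at module load, the per-call cost is a single dict lookup plus one function call, which is often cheaper than hundreds of comparisons even in pure Python.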
2024-02-08 18:42:18
1
1,795
maplemaple
77,964,048
1,412,564
Django - how do I filter or exclude by two identical fields?
<p>We are using Django for our website. I have a query for model <code>User</code> and I want to exclude users which have two identical fields - in this case <code>id</code> and <code>username</code>. So if the <code>id</code>==<code>username</code>, I want to exclude them. How do I do it?</p> <p>The query looks like:</p> <pre class="lang-py prettyprint-override"><code>users = User.objects.filter(...) </code></pre> <p>And I want to add <code>.exclude(...)</code> where the <code>id</code> and <code>username</code> fields are equal.</p>
<python><django><django-queryset>
2024-02-08 18:37:19
1
3,361
Uri
77,963,963
1,559,401
PyAssimp error for any loaded file - Scene has no attribute meshes, materials or textures
<p>I am trying to get any sample (including the ones found in the Assimp repo) to work. Using <code>pyassimp 5.2.5</code> with Python 3.11.6. Below there is an example for a very basic call. I am loading an OBJ of a cube, which one can even write manually. :D</p> <pre><code>import pyassimp import pyassimp.postprocess def main(filename=None): scene = pyassimp.load(filename, processing=pyassimp.postprocess.aiProcess_Triangulate) print(&quot;Meshes:&quot; + str(len(scene.meshes))) pyassimp.release(scene) </code></pre> <p>The cube</p> <pre><code># Blender v2.61 (sub 0) OBJ File: '' # www.blender.org mtllib bigger-cube.mtl o Cube v 0.700000 -0.700000 -0.700000 v 0.700000 -0.700000 0.700000 v -0.700000 -0.700000 0.700000 v -0.700000 -0.700000 -0.700000 v 0.700000 0.700000 -0.700000 v 0.700000 0.700000 0.700000 v -0.700000 0.700000 0.700000 v -0.700000 0.700000 -0.700000 vt 0.000000 0.000000 vt 1.000000 0.000000 vt 1.000000 1.000000 vt 0.000000 1.000000 vn 0.000000 -1.000000 0.000000 vn 0.000000 1.000000 0.000000 vn 1.000000 0.000000 0.000000 vn -0.000000 -0.000000 1.000000 vn -1.000000 -0.000000 -0.000000 vn 0.000000 0.000000 -1.000000 usemtl Material.001 s off f 1/1/1 2/2/1 3/3/1 4/4/1 f 5/1/2 8/2/2 7/3/2 6/4/2 f 1/1/3 5/2/3 6/3/3 2/4/3 f 2/1/4 6/2/4 7/3/4 3/4/4 f 3/1/5 7/2/5 8/3/5 4/4/5 f 5/1/6 1/2/6 4/3/6 8/4/6 </code></pre> <p>The trace is</p> <pre><code>INFO:pyassimp:Adding Anaconda lib path:/home/user/miniconda3/envs/computergraphics/lib/ MODEL:bigger-cube.obj SCENE: Traceback (most recent call last): File &quot;/home/ale56337/Projects/cgi/playground.py&quot;, line 83, in &lt;module&gt; main(sys.argv[1]) File &quot;/home/user/Projects/cgi/playground.py&quot;, line 24, in main print(&quot; meshes:&quot; + str(len(scene.meshes))) ^^^^^^^^^^^^ AttributeError: '_GeneratorContextManager' object has no attribute 'meshes' </code></pre> <p>I also tried calling <code>materials</code> and <code>textures</code> but the result is the same. 
There is an <a href="https://github.com/assimp/assimp/pull/3979" rel="nofollow noreferrer">issue in the official repo</a>, which was fixed two years or so ago. Yet with the current version the issue is still there (perhaps regression or another source of the problem leading to the same outcome?).</p> <p>Any ideas how to make PyAssimp work?</p>
<python><assimp>
2024-02-08 18:19:30
1
9,862
rbaleksandar
77,963,500
4,575,197
TypeError: unhashable type: 'set' while renaming the DF in pandas version 2.1.4
<p>I have a huge dataframe. Now I want to rename one based on the values inside of the other one, if the columns match.</p> <pre><code>for index,col in enumerate(news1.columns): for index_dic,col_dic in enumerate(dic.columns): if col==col_dic: print(col,dic.iloc[1, index_dic]) print(type(col)) print(type(dic.iloc[1, index_dic])) news1.rename(columns={f'{col}':f'{dic.iloc[1, index_dic]}'},inplace=True) </code></pre> <p>Traceback of the error:</p>
File c:\Users\\anaconda3\envs\PythonCourse2023\Lib\site-packages\pandas\core\indexes\base.py:6465, in &lt;listcomp&gt;(.0) 6463 return type(self).from_arrays(values) 6464 else: -&gt; 6465 items = [func(x) for x in self] 6466 return Index(items, name=self.name, tupleize_cols=False) File c:\Users\anaconda3\envs\PythonCourse2023\Lib\site-packages\pandas\core\common.py:507, in get_rename_function.&lt;locals&gt;.f(x) 506 def f(x): --&gt; 507 if x in mapper: 508 return mapper[x] 509 else: </code></pre> <p>the result of the prints are:</p> <pre><code>c15.10 WA_ambiguous-expectation &lt;class 'str'&gt; &lt;class 'str'&gt; </code></pre> <p>i can't figure out why i get the error. would appreciate any help</p>
<python><pandas><dataframe><typeerror>
2024-02-08 16:59:16
1
10,490
Mostafa Bouzari
77,963,471
5,213,015
Django - Autocomplete Search Throws Error For Logged Out Users?
<p>I’m making some updates to the autocomplete portion of the search functionality in my application and for some reason i’m getting an error for logged out users that says <code>TypeError: Field 'id' expected a number but got &lt;SimpleLazyObject: &lt;django.contrib.auth.models.AnonymousUser object at 0x1088a5e20&gt;&gt;.</code></p> <p>This is only happening for logged out users. When users are logged in the autocomplete works as built so I know I’m missing something but I just don’t know what.</p> <p>What needs to get changed to fix this issue?</p> <p>I’m building a gaming application where users have access to play certain games based on their rank in our application. So when a user is logged in I’d like the autocomplete functionality to reflect the games they have unlocked. So let’s say if a user has a <code>rank</code> of <code>100</code> then all the games with a <code>game_rank</code> of 100 and below will be displayed. For logged out users I would like all games to be shown.</p> <p>Made some notes in my <code>views.py</code> code from what I tested and added the JavaScript to the search functionality just in case.</p> <p>Below is my code.</p> <p>Any help is gladly appreciated!</p> <p>All My Best!</p> <p><strong>models.py</strong></p> <pre><code>class Game_Info(models.Model): id = models.IntegerField(primary_key=True, unique=True, blank=True, editable=False) game_title = models.CharField(max_length=100, null=True) game_rank = models.IntegerField(default=1) game_image = models.ImageField(default='default.png', upload_to='game_covers', null=True, blank=True) class User_Info(models.Model): id = models.IntegerField(primary_key=True, blank=True) image = models.ImageField(default='/profile_pics/default.png', upload_to='profile_pics', null=True, blank=True) user = models.OneToOneField(settings.AUTH_USER_MODEL,blank=True, null=True, on_delete=models.CASCADE) rank = models.IntegerField(default=1) </code></pre> <p><strong>views.py</strong></p> <pre><code>def 
search_results_view(request): if request.headers.get('x-requested-with') == 'XMLHttpRequest': res = None game = request.POST.get('game') print(game) ## This works for both logged in and logged out users but inlcudes all games. Would like to have this for logged out users. Commenting this out to test below. # qs = Game_Info.objects.filter(game_title__icontains=game).order_by('game_title')[:4] ## This only works for when users are logged in. user_profile_game_obj = User_Info.objects.get(user=request.user) user_profile_rank = int(user_profile_game_obj.rank) qs = Game_Info.objects.filter(game_title__icontains=game, game_rank__lte=user_profile_rank).order_by('game_title')[:4] if len(qs) &gt; 0 and len(game) &gt; 0: data = [] for pos in qs: item ={ 'pk': pos.pk, 'name': pos.game_title, 'game_provider': pos.game_provider, 'image': str(pos.game_image.url), 'url': reverse('detail', args=[pos.pk]), } data.append(item) res = data else: res = 'No games found...' return JsonResponse({'data': res}) return JsonResponse({}) </code></pre> <p><strong>custom.js</strong></p> <pre><code>// Live Search Functionalty $(function () { const url = window.location.href const searchForm = document.getElementById(&quot;search-form&quot;); const searchInput = document.getElementById(&quot;search_input_field&quot;); const resultsBox = document.getElementById(&quot;search-results&quot;); const csrf = document.getElementsByName('csrfmiddlewaretoken')[0].value const sendSearchData = (game) =&gt;{ $.ajax ({ type: 'POST', url: '/games/', data: { 'csrfmiddlewaretoken': csrf, 'game' : game, }, success: (res)=&gt; { console.log(res.data) const data = res.data if (Array.isArray(data)) { resultsBox.innerHTML = `&lt;div class=&quot;search_heading&quot;&gt;&lt;h1&gt;Recommended Games&lt;/h1&gt;&lt;/div&gt;` data.forEach(game=&gt; { resultsBox.innerHTML += ` &lt;a href=&quot;${game.url}&quot; class=&quot;item&quot; &gt; &lt;div class=&quot;row&quot;&gt; &lt;div class=&quot;search-cover-container&quot;&gt; 
&lt;img src=&quot;${game.image}&quot; class=&quot;game-img&quot;&gt; &lt;/div&gt; &lt;div class=&quot;search-title-container&quot;&gt; &lt;p&gt;${game.name}&lt;/p&gt; &lt;span class=&quot;publisher_title&quot;&gt;${game.game_provider}&lt;/span&gt; &lt;/div&gt; &lt;div class=&quot;search-icon-container&quot;&gt; &lt;i class=&quot;material-icons&quot;&gt;trending_up&lt;/i&gt; &lt;/div&gt; &lt;/div&gt; &lt;/a&gt; ` }) } else { if (searchInput.value.length &gt; 0) { resultsBox.innerHTML = `&lt;h2&gt;${data}&lt;/h2&gt;` } else { resultsBox.classList.add('not_visible') } } }, error: (err)=&gt; { console.log(err) } }) } searchInput.addEventListener('keyup', e=&gt; { sendSearchData(e.target.value) }) }); </code></pre> <p><strong>base.html</strong></p> <pre><code> &lt;form method=&quot;POST&quot; autocomplete=&quot;off&quot; id=&quot;search-form&quot; action=&quot;{% url 'search_results' %}&quot;&gt; {% csrf_token %} &lt;div class=&quot;input-group&quot;&gt; &lt;input id=&quot;search_input_field&quot; type=&quot;text&quot; name=&quot;q&quot; autocomplete=&quot;off&quot; class=&quot;form-control gs-search-bar&quot; placeholder=&quot;Search Games...&quot; value=&quot;&quot;&gt; &lt;div id=&quot;search-results&quot; class=&quot;results-container not_visible&quot;&gt;&lt;/div&gt; &lt;span class=&quot;search-clear&quot;&gt;x&lt;/span&gt; &lt;button id=&quot;search-btn&quot; type=&quot;submit&quot; class=&quot;btn btn-primary search-button&quot; disabled&gt; &lt;span class=&quot;input-group-addon&quot;&gt; &lt;i class=&quot;zmdi zmdi-search&quot;&gt;&lt;/i&gt; &lt;/span&gt; &lt;/button&gt; &lt;/div&gt; &lt;/form&gt; </code></pre>
<python><django><django-models><django-views><django-templates>
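The `TypeError` in the question above is raised because `User_Info.objects.get(user=request.user)` executes while `request.user` is an `AnonymousUser`. One fix is to branch on `request.user.is_authenticated` and fall back to the unfiltered game queryset for visitors. A framework-free sketch of that branch, using invented stand-in classes instead of Django's real auth objects:

```python
# Minimal stand-ins for Django's request.user; the real objects come
# from django.contrib.auth.
class AnonymousUser:
    is_authenticated = False

class RankedUser:
    is_authenticated = True
    def __init__(self, rank):
        self.rank = rank

UNLIMITED = 10**9  # sentinel: logged-out visitors see every game

def max_rank_for(user):
    # Logged-in users are capped by their profile rank; everyone else
    # gets the unfiltered catalogue (in the real view: drop game_rank__lte).
    if user.is_authenticated:
        return user.rank
    return UNLIMITED
```

In the actual view, the same branch would choose between the rank-filtered queryset and the plain `game_title__icontains` queryset already present in the commented-out code.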
2024-02-08 16:54:45
2
419
spidey677
77,963,460
8,280,171
String Interpolation Convert to a Type
<p>I'm creating an S3 bucket using AWS CDK and I'm trying to use string interpolation for the removal policy.</p> <pre><code>object_storage_bucket = s3.Bucket( self, &quot;object-storage-bucket&quot;, removal_policy=f&quot;RemovalPolicy.{props.s3_removal_policy}&quot;, auto_delete_objects=True, object_ownership=s3.ObjectOwnership.BUCKET_OWNER_ENFORCED, bucket_name=f&quot;{props.customer}-{props.region}-object-storage&quot;, ) </code></pre> <p>But when I run the code, I get this error: <code>TypeError: type of argument removal_policy must be one of (aws_cdk.RemovalPolicy, NoneType); got str instead</code></p> <p>I know I can do <code>removal_policy=RemovalPolicy.DESTROY</code> and it will work, but I'm trying to pass <code>DESTROY</code> or <code>RETAIN</code> from a YAML file</p> <pre><code>customer: taylor swift region: us-west-2 s3_removal_policy: DESTROY </code></pre> <p>How do I do that?</p>
<python><aws-cdk>
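An f-string always produces a `str`, but the `removal_policy` argument wants an enum member. Python enums can be indexed by member name, so — assuming `aws_cdk.RemovalPolicy` behaves like a standard Python enum — `RemovalPolicy[props.s3_removal_policy]` should convert the YAML string; verify that against the CDK docs before relying on it. A stdlib illustration with an invented stand-in enum:

```python
from enum import Enum

class RemovalPolicy(Enum):   # stand-in for aws_cdk.RemovalPolicy
    DESTROY = "destroy"
    RETAIN = "retain"

s3_removal_policy = "DESTROY"              # value as read from the YAML file
policy = RemovalPolicy[s3_removal_policy]  # index the enum class by member name
```

A `KeyError` is raised for names not defined on the enum, which conveniently catches typos in the YAML at synth time.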
2024-02-08 16:53:24
1
705
Jack Rogers
77,963,452
1,194,883
Running python code in a shell with VS code
<p>Using shift-enter, I can run a single line of code from some script in VS Code (using the standard Microsoft Python extension). If not already open, it pops open an ipython shell, and runs the line of code in that shell. The same works if I have consecutive non-indented lines of code β€” I just highlight all of them, and hit shift-enter. But if I want to any slightly more complicated piece of code, it fails.</p> <p>For example, suppose I have</p> <pre class="lang-py prettyprint-override"><code>def bla(x): y = x + 1 return y </code></pre> <p>(It does matter that there are three lines here.)</p> <p>Now I want to use that function in the shell, so I highlight the whole thing, hit shift-enter, and get this:</p> <pre><code>In [1]: def bla(x): ...: y = x + 1 ...: return y Cell In[1], line 3 return y ^ IndentationError: unexpected indent </code></pre> <p>Evidently the pasting process adds indents that can't be there. This sort of thing works just fine with the Julia extension, so I'm surprised the (presumably more widely used) python extension doesn't work.</p> <p>Is there some setting or something that I could change to get this to work, or is this just not something Microsoft want to support?</p>
<python><visual-studio-code>
2024-02-08 16:52:14
0
20,375
Mike
77,963,331
305,563
Recursive IndentedBlock with optional contents
<p>IndentedBlock from pyparsing not dedenting</p> <p>I am trying to write a parser in pyparsing that has a container that has optional contents, and I'm not getting the results I'm expecting.</p> <p>Here's the simplest version that demonstrates my problem I could come up with:</p> <pre><code>from pyparsing import * ident = Word(alphas) value = Word(nums) container = Forward() container &lt;&lt; (Group(ident + IndentedBlock(container | value)) | ident) test = &quot;&quot;&quot;\ A 1 B C 2 &quot;&quot;&quot; container.parse_string(test, parse_all=True).pprint() </code></pre> <p>What I'm getting is this:</p> <pre><code>[['A', ['1', ['B', [['C', ['2']]]]]]] </code></pre> <p>Which shows that the C is being treated as indented underneath the B, but it should be listed underneath the A.</p> <p>What I'm expecting is this:</p> <pre><code>[['A', ['1', ['B'], ['C', ['2']]]]] </code></pre> <p>Am I doing this wrong? How could I get a container with optional contents in pyparsing using IndentedBlock?</p> <p>I am using pyparsing 3.1.1</p>
<python><indentation><pyparsing>
2024-02-08 16:34:28
0
690
amertune
77,963,128
1,350,082
Mixed Integer Optimisation Using cvxpy
<p>I'm trying to solve an optimisation problem in python using CVXPY. Specifically, I am trying to optimise two available resources based on some data.</p> <p>Below is a toy example of the problem I am having. I have some data <strong>y</strong> which can be thought of as a vector of daily unit demand and I have two resources which are available each day to meet this demand. <em>a</em> is a constant resource where the same amount will be available each day. <em>b</em> is essentially a daily quantity of buffer resource which can be used if needed.</p> <p>There are three daily costs for the two resources: <em>a</em> has a standard cost, while <em>b</em> has a used cost and an unused cost.</p> <p>There are two constraints, <em>a</em> must be positive and <em>b</em> can't be negative. I believe this should now be trivial as the optimal solution should be a=1 and b=0. Once this works I will add another constraint to limit the number of days that are allowed to be under-resourced.</p> <pre><code>import cvxpy as cp import numpy as np # some random data n_days = 5 y = np.random.randint(low=0, high=10, size=n_days) # Two variables to be optimised - These are available resources, a is always used, b is used based on demand a = cp.Variable(integer=True) b = cp.Variable(integer=True) # These are unit costs r_a_used = 1 r_b_used = 1.5 r_b_unused = 0.2 a_used = a * n_days # The same amount of a will be used at every time point b_unused = cp.sum(cp.maximum((b - cp.maximum((y - a), 0)), 0)) # calculate how much resource of b is used over all time points b_used = (b * n_days) - b_unused # calculate how much resource of b is used at each time point # Calculate the cost cost = (a_used * r_a_used) + (b_used * r_b_used) + (b_unused * r_b_unused) # Both a and b must be greater than equal to zero constraints = [a &gt;= 0, b &gt;= -1] # solve objective = cp.Minimize(cost) prob = cp.Problem(objective, constraints) prob.solve() </code></pre> <p>I'm getting the following error:</p> 
<pre><code>Traceback (most recent call last): File &quot;/home/user/Documents/toy_example.py&quot;, line 30, in &lt;module&gt; prob.solve() File &quot;/home/user/anaconda3/envs/myenv/lib/python3.10/site-packages/cvxpy/problems/problem.py&quot;, line 503, in solve return solve_func(self, *args, **kwargs) File &quot;/home/user/anaconda3/envs/myenv/lib/python3.10/site-packages/cvxpy/problems/problem.py&quot;, line 1072, in _solve data, solving_chain, inverse_data = self.get_problem_data( File &quot;/home/user/anaconda3/envs/myenv/lib/python3.10/site-packages/cvxpy/problems/problem.py&quot;, line 646, in get_problem_data solving_chain = self._construct_chain( File &quot;/home/user/anaconda3/envs/myenv/lib/python3.10/site-packages/cvxpy/problems/problem.py&quot;, line 898, in _construct_chain return construct_solving_chain(self, candidate_solvers, gp=gp, File &quot;/home/user/anaconda3/envs/myenv/lib/python3.10/site-packages/cvxpy/reductions/solvers/solving_chain.py&quot;, line 217, in construct_solving_chain reductions = _reductions_for_problem_class(problem, candidates, gp, solver_opts) File &quot;/home/user/anaconda3/envs/myenv/lib/python3.10/site-packages/cvxpy/reductions/solvers/solving_chain.py&quot;, line 132, in _reductions_for_problem_class raise DCPError( cvxpy.error.DCPError: Problem does not follow DCP rules. Specifically: The objective is not DCP. Its following subexpressions are not: maximum(Promote(var2, (5,)) + -maximum([9. 5. 1. 5. 0.] + Promote(-var1, (5,)), 0.0), 0.0) maximum(Promote(var2, (5,)) + -maximum([9. 5. 1. 5. 0.] + Promote(-var1, (5,)), 0.0), 0.0) </code></pre> <p>I think this should be a convex optimisation problem, so I think it might be the formulation of the problem in CVXPY. 
Below are the visualisation trees of the atomic operations I am applying:</p> <p><a href="https://i.sstatic.net/DB9Yt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DB9Yt.png" alt="To calculate a_used" /></a></p> <p><a href="https://i.sstatic.net/SE2UI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SE2UI.png" alt="To calculate b_unused" /></a></p> <p><a href="https://i.sstatic.net/MdyDX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MdyDX.png" alt="To calculate b_used" /></a></p> <p>If anyone could help then it would be greatly be appreciated.</p> <p>(Edited based on Michal's comment below)</p>
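The DCP failure can be reproduced without cvxpy at all: `b_unused` applies `maximum(..., 0)` to an expression that is already concave in `a` (because of the inner negated `maximum`), and the pointwise maximum of a concave expression with 0 is not convex in general. A quick midpoint check in plain Python makes this concrete — the fixed values `b = 1`, `y = 5` and the probe points are assumptions chosen only for illustration:

```python
# f(a) mirrors one term of b_unused: max(b - max(y - a, 0), 0), with b and y fixed
def f(a, b=1.0, y=5.0):
    return max(b - max(y - a, 0.0), 0.0)

# A convex f must satisfy f((x + z) / 2) <= (f(x) + f(z)) / 2 for all x, z.
x, z = 3.0, 7.0
lhs = f((x + z) / 2)     # f(5) = max(1 - 0, 0) = 1.0
rhs = (f(x) + f(z)) / 2  # (max(-1, 0) + max(1, 0)) / 2 = 0.5
print(lhs, rhs)          # 1.0 0.5 -> the midpoint lies above the chord
```

Since the expression is genuinely non-convex as written, no CVXPY rewriting will make it DCP as-is; a common workaround is to introduce per-day auxiliary variables and encode the maxima through extra (possibly big-M) constraints.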
<python><optimization><cvxpy>
2024-02-08 16:07:27
0
317
speeder1987
77,963,057
2,299,245
Making a empty/template raster using rioxarray / rasterio in Python
<p>I am currently trying to convert R code into Python. I am trying to make a 'template' raster file which I will then &quot;reproject_match&quot; many other rasters to in my workflow so that they all align.</p> <p>I can do it in R using the terra package, but am unsure how to do so in python using the rioxarray package. I'm hoping someone might be able to help? Here is my R code:</p> <pre><code># load lib library(&quot;terra&quot;) # Make an empty raster. By default it has the extent of the earth and projection wgs84. Setting the resolution means the rows and cols are automatically calculated. a &lt;- rast(res = 1) # Fill it with some random integer values vals(a) &lt;- as.integer(runif(nrow(a) * ncol(a), 1, 10)) # Write it out to file as an writeRaster(a, &quot;my_integer_rast.tif&quot;, wopt = list(datatype = &quot;INT1U&quot;)) </code></pre> <p>I got this far then kind of got lost/confused and thought I would reach out for some help:</p> <pre><code>import xarray as xr import rioxarray import rasterio data = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] # Example data coords = {'x': [0, target_res, target_res*2], 'y': [0, target_res, target_res*2]} ds = xr.Dataset({'data': (['y', 'x'], data)}, coords=coords) ds.rio.set_spatial_dims(x_dim='x', y_dim='y', inplace=True) ds.rio.write_crs(&quot;EPSG:4326&quot;, inplace=True) ds.rio.write_transform(inplace=True) with rasterio.open( &quot;test.tif&quot;, 'w', driver='GTiff', transform = ds.rio.transform(), crs=ds.rio.crs, dtype=rasterio.float32, #nodata=ds.rio.nodata, count=1, width=ref.rio.width, height=ref.rio.height) as dst: dst.write(ds.values) </code></pre>
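For the `rast(res = 1)` part specifically, the grid itself can be built with plain numpy: cell-centre coordinates for a global WGS84 grid, which a `DataArray` can then carry for rioxarray to write out. This is only a sketch, assuming the template should span the whole earth like terra's default; the rioxarray calls at the end are commented out so the grid-building part stands alone:

```python
import numpy as np

res = 1.0  # degrees, the equivalent of terra's rast(res = 1)
# cell-centre coordinates for a global grid in EPSG:4326
lons = np.arange(-180 + res / 2, 180, res)
lats = np.arange(90 - res / 2, -90, -res)
# random integers 1..9, stored as unsigned bytes like INT1U
data = np.random.randint(1, 10, size=(lats.size, lons.size)).astype(np.uint8)
print(data.shape)  # (180, 360)

# wrapping and writing (sketch only, requires xarray + rioxarray installed):
# import xarray as xr
# import rioxarray  # registers the .rio accessor
# da = xr.DataArray(data, coords={"y": lats, "x": lons}, dims=("y", "x"))
# da = da.rio.write_crs("EPSG:4326")
# da.rio.to_raster("my_integer_rast.tif", dtype="uint8")
```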
<python><r><python-xarray><terra><rasterio>
2024-02-08 15:58:18
1
949
TheRealJimShady
77,963,015
11,911,577
How to do 2 types of asynchronous tasks in parallel without disruption?
<p>The program I am writing needs to perform 2 types of tasks.</p> <ol> <li>Connect to a list of clients.</li> <li>Do a series of tasks through connected clients.</li> </ol> <p>Assuming that the program is doing things like this:</p> <pre><code>import asyncio async def task(worker_id): print(f'worker {worker_id} start working ...') # The client actually comes from the Pyrogram library. client = Client(worker_id) await client.connect() for job in jobs: ... # await jobs print(f'job {job.id} finished!') async def main(): tasks = [task(_) for _ in range(10)] await asyncio.gather(*tasks) asyncio.run(main()) </code></pre> <p>This program should execute the &quot;task&quot; function ten times and continue the process for each client as soon as it is connected, but what it actually does is wait on connect until all 10 tasks are connected; the connection step blocks the event loop.</p> <p>How can I separate the part where the program connects from the part that does the work, so that connecting does not hold up everything else?</p> <p>For example, whenever one of the clients is connected, can that client be handed off to another part to do its work? (For example, asyncio.Queue could be used for sending and receiving between the sections.)</p> <pre><code>import asyncio queue = asyncio.Queue() async def task(): for client_id in clients: print(f'worker {client_id} start working ...') client = Client(client_id) await client.connect() await queue.put(client) async def task2(): client = await queue.get() for job in jobs: await client.do_job(job) print(f'job {job.id} finished!') async def main(): tasks = [task(), task2()] await asyncio.gather(*tasks) asyncio.run(main()) </code></pre>
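A runnable stand-in for the producer/consumer idea sketched above — no Pyrogram here; the slow `connect()` is simulated with `asyncio.sleep`, and a `None` sentinel shuts the consumer down, both assumptions made only so the sketch runs on its own:

```python
import asyncio
import random

async def connect_client(client_id, queue):
    # stands in for Client(client_id).connect(); connections take varying time
    await asyncio.sleep(random.uniform(0.01, 0.05))
    await queue.put(client_id)

async def do_jobs(queue, processed):
    # picks up each client as soon as it is connected, without waiting
    # for the remaining connections to finish
    while True:
        client_id = await queue.get()
        if client_id is None:            # sentinel: all clients handled
            break
        processed.append(client_id)      # stands in for the per-client job loop

async def main(n_clients=10):
    queue = asyncio.Queue()
    processed = []
    connectors = [asyncio.create_task(connect_client(i, queue))
                  for i in range(n_clients)]
    consumer = asyncio.create_task(do_jobs(queue, processed))
    await asyncio.gather(*connectors)    # all producers finished
    await queue.put(None)                # tell the consumer to stop
    await consumer
    return processed

processed = asyncio.run(main())
print(sorted(processed))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Because the consumer starts before any connection has finished, clients begin their jobs in whatever order they connect, instead of waiting for all ten.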
<python><asynchronous><parallel-processing><task>
2024-02-08 15:50:48
0
766
Hamidreza
77,962,787
353,337
Measure pytest setup times
<p>With <code>--durations=10</code>, <a href="https://docs.pytest.org/" rel="nofollow noreferrer">pytest</a> shows the 10 slowest tests. I have many tests, all of which run pretty fast, but test <em>creation</em> takes a long time here and there, especially in constructs like</p> <pre class="lang-py prettyprint-override"><code>@pytest.mark.parametrize(&quot;arg&quot;, [ create_test_params(1), create_test_params_alt(256), # may take long? # ... # long list here # ]) def test_all(): # ... </code></pre> <p>Is there a way to find the <code>n</code> slowest test <em>creation</em> times?</p>
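One thing worth noting: `--durations` only covers the setup/call/teardown phases of each test, while the `parametrize` list is evaluated at import/collection time. A low-tech sketch for finding slow entries is to wrap each factory call and record how long it took — the `timed` helper and the dummy factories here are made up purely for illustration:

```python
import time

_creation_times = []

def timed(label, factory, *args, **kwargs):
    # wrap a parametrize-entry factory and record how long creating it took
    start = time.perf_counter()
    value = factory(*args, **kwargs)
    _creation_times.append((time.perf_counter() - start, label))
    return value

# in the real test module this would read:
# @pytest.mark.parametrize("arg", [
#     timed("params(1)", create_test_params, 1),
#     timed("params_alt(256)", create_test_params_alt, 256),
# ])

# demo with dummy factories:
def fast():
    return 1

def slow():
    time.sleep(0.05)
    return 2

values = [timed("fast", fast), timed("slow", slow)]
slowest = sorted(_creation_times, reverse=True)[:10]
print(slowest[0][1])  # 'slow'
```

Printing `_creation_times` at the end of collection (e.g. from a `pytest_collection_finish` hook in `conftest.py`) would then give a durations-style report for creation.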
<python><pytest>
2024-02-08 15:17:42
0
59,565
Nico SchlΓΆmer
77,962,719
4,527,660
Why is my python code in cloud function not executing slack bot slash command
<p>I am stuck with a basic Python script that integrates with a Slack <code>bot</code> using a <code>slash command</code>. My slash command is <code>/delete-session</code>; when this command is triggered in Slack, I would expect the Cloud Function in GCP to return a message. However, posting a message to this Slack channel directly did work. I am not able to figure out what is causing this issue. Here is the code:</p> <pre><code>import os from flask import Flask, request, Response from slack_sdk import WebClient app = Flask(__name__) # Set your Slack token slack_token = &quot;abcdev-xxx-xxxxx&quot; # Initialize Slack client client = WebClient(token=slack_token) @app.route('/delete-session', methods=['POST']) def delete_session(): data = request.form channel_id = data.get('channel_id') # Handle the /delete-session command response_message = &quot;I am ready&quot; # Send the response back to the channel client.chat_postMessage(channel=channel_id, text=response_message) return Response(), 200 </code></pre> <p>Error that I am getting in the Cloud Function:</p> <pre><code> File &quot;/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py&quot;, line 2073, in wsgi_app response = self.full_dispatch_request() File &quot;/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py&quot;, line 1518, in full_dispatch_request rv = self.handle_user_exception(e) File &quot;/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py&quot;, line 1516, in full_dispatch_request rv = self.dispatch_request() File &quot;/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py&quot;, line 1502, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args) File &quot;/layers/google.python.pip/pip/lib/python3.10/site-packages/functions_framework/__init__.py&quot;, line 99, in view_func return function(request._get_current_object()) TypeError: delete_session() takes 0 positional arguments but 1 was given 
</code></pre>
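The traceback points at the mismatch: the Functions Framework invokes the entry point with the Flask request object as its single positional argument, so the `@app.route` style with a zero-argument view does not apply here. A minimal sketch of the expected handler shape — the Slack call is commented out and `FakeRequest` exists only so the snippet runs on its own:

```python
def delete_session(request):
    # Cloud Functions passes the incoming Flask request object here
    channel_id = request.form.get("channel_id")
    response_message = "I am ready"
    # client.chat_postMessage(channel=channel_id, text=response_message)
    return response_message, 200

# stand-in for the Flask request object, for illustration only
class FakeRequest:
    form = {"channel_id": "C0123"}

body, status = delete_session(FakeRequest())
print(status)  # 200
```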
<python><flask><pip><slack>
2024-02-08 15:07:57
1
303
Waseem Mir
77,962,412
13,137,625
Correct way to load massive sql table into pyspark
<p>I have a table with 1 billion plus rows that I am looking to basically pull into a dataframe and do a join with another table of lesser size. I have found that this works fairly fast for a table with a limit under 1 million, but beyond that there are issues.</p> <p>Query</p> <pre><code>query = &quot;SELECT COL1, COL2 FROM db.table&quot; </code></pre> <pre><code>def read_query(url, driver, user, password, query, chunk_size): return spark.read.format(&quot;jdbc&quot;) \ .option(&quot;url&quot;, url) \ .option(&quot;driver&quot;, driver) \ .option(&quot;user&quot;, user) \ .option(&quot;password&quot;, password) \ .option(&quot;dbtable&quot;, f&quot;({query}) as query_alias&quot;) \ .option(&quot;fetchsize&quot;, chunk_size) \ .option(&quot;numPartitions&quot;, 1000).load() def partition(iter): rows = list(iter) print(rows) return rows df_result = read_query(url, driver, user, password, query, chunk_size) df_result.rdd.mapPartitions(partition).collect() </code></pre> <p>The issue is that loading the data as df_result tries to load a billion or so rows into memory.</p> <p>So my question is:</p> <p>Is there a way to chunk this SQL query (perhaps write to a file), and then read that into the dataframe? My approach here isn't working, nor is the partitioning, because it first needs to load the data. What is the correct way to approach problems like this using pyspark?</p>
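One thing worth checking in the read above: `numPartitions` on its own does not split a JDBC read; Spark only parallelises it when `partitionColumn`, `lowerBound`, and `upperBound` are also set, at which point each partition issues its own bounded query against the database. Conceptually the bounds turn into per-partition WHERE clauses, roughly like this plain-Python sketch (the exact clause text is Spark-internal, so treat this as an approximation):

```python
def partition_predicates(column, lower, upper, num_partitions):
    # roughly how Spark splits [lower, upper) into per-partition WHERE clauses
    stride = (upper - lower) // num_partitions
    preds = []
    for i in range(num_partitions):
        lo = lower + i * stride
        hi = lower + (i + 1) * stride
        if i == 0:
            preds.append(f"{column} < {hi} OR {column} IS NULL")
        elif i == num_partitions - 1:
            preds.append(f"{column} >= {lo}")  # last partition is open-ended
        else:
            preds.append(f"{column} >= {lo} AND {column} < {hi}")
    return preds

preds = partition_predicates("id", 0, 1_000_000_000, 4)
print(len(preds))  # 4
```

With those options set on a numeric column (e.g. `.option("partitionColumn", "COL1")` plus the two bounds), each executor streams only its own slice instead of a single task pulling everything into memory.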
<python><mysql><dataframe><apache-spark><pyspark>
2024-02-08 14:21:06
0
305
andruidthedude
77,962,342
893,254
How to drop a subset of MultiIndex "columns" from a Pandas series or dataframe?
<p>Iterating over a pandas <code>DataFrame</code> using <code>iterrows()</code> produces a sequence of (index, <code>Series</code>) pairs (tuples).</p> <pre><code>for timestamp, row in df.iterrows(): </code></pre> <p>I am aware that <code>iterrows()</code> is slow. Ignoring that issue for now -</p> <p>Some of the returned rows will contain <code>None</code> or <code>NaN</code> values. I want to remove these. (<strong>Not</strong> from the <code>DataFrame</code> but from a copy of each row returned by <code>iterrows()</code>.)</p> <p>I also want to remove a subset of &quot;columns&quot;. Columns are named with a 2-level MultiIndex.</p> <p>Here's an idea of what the DataFrame looks like:</p> <pre><code> AACT ABILF ... open high low close open high low close ... timestamp ... 2022-01-04 00:00:00 NaN NaN NaN NaN NaN NaN NaN NaN ... 2022-01-04 00:01:00 NaN NaN NaN NaN NaN NaN NaN NaN ... 2022-01-04 00:02:00 NaN NaN NaN NaN NaN NaN NaN NaN ... 2022-01-04 00:03:00 NaN NaN NaN NaN NaN NaN NaN NaN ... 2022-01-04 00:04:00 NaN NaN NaN NaN NaN NaN NaN NaN ... </code></pre> <p>All the values <strong>happen</strong> to be NaN here, however in general that will not be the case.</p> <p>Because I do not know how to approach this problem, here is some pseudocode:</p> <pre><code>for timestamp, row in df.iterrows(): row.drop([('AACT', 'open'), ('AACT', 'high'), ('AACT', 'low')]) row.drop([('ABILF', 'open'), ('ABILF', 'high'), ('ABILF', 'low')]) row.dropna() # remaining data is `('AACT', 'close')` and `('ABILF', 'close')` # iff values in this `Series` are non-NaN </code></pre>
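One way to avoid the per-row drops entirely, as a sketch: drop the unwanted level-1 columns once on the whole frame (`DataFrame.drop` accepts a `level` argument for MultiIndex columns), then iterate and `dropna()` each row copy. The tiny frame below is fabricated to mirror the shape described above:

```python
import numpy as np
import pandas as pd

cols = pd.MultiIndex.from_product([["AACT", "ABILF"],
                                   ["open", "high", "low", "close"]])
df = pd.DataFrame(np.arange(16.0).reshape(2, 8), columns=cols)
df.iloc[0, 3] = np.nan  # make one ('AACT', 'close') value NaN

# drop the unwanted fields once, across every ticker at level 1
closes = df.drop(columns=["open", "high", "low"], level=1)

rows = []
for timestamp, row in closes.iterrows():
    rows.append(row.dropna())  # copy of the row with NaN values removed

print([len(r) for r in rows])  # [1, 2]
```

Doing the column drop once up front also sidesteps repeating the `(ticker, field)` tuples for every ticker in the frame.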
<python><pandas><dataframe><series>
2024-02-08 14:09:58
2
18,579
user2138149
77,962,303
3,453,901
In Python the Apache Arrow ADBC driver causing silent program exit on connection attempt to PostgreSQL database
<p>My config:</p> <ul> <li>Windows</li> <li><code>python=3.11.7</code></li> <li><code>pandas=2.2.0</code></li> <li><code>adbc-driver-manager=0.9.0</code></li> <li><code>adbc-driver-postgresql=0.9.0</code></li> <li><code>pyarrow=15.0.0</code></li> </ul> <hr /> <p>Trying to implement the newish <code>ADBC Driver</code> from Apache Arrow with the connection to my PostgreSQL database. Code is simple enough, but when trying to establish the connection the program exits silently with no exceptions raised.</p> <pre class="lang-py prettyprint-override"><code>import adbc_driver_postgresql.dbapi import pandas as pd uri = ( 'postgresql://' + f'{config[&quot;USERNAME&quot;]}:' + f'{config[&quot;PASSWORD&quot;]}@' + f'{config[&quot;HOST&quot;]}:' + '5432/' + f'{config[&quot;DB_NAME&quot;]}' ) df = pd.read_csv('data.csv') with adbc_driver_postgresql.dbapi.connect(uri) as conn: # program exits silently here # neither of these lines run print('connection made, going to upload now') df.to_sql(&quot;pandas_table&quot;, conn, index=False) print('made it pass connection and upload') # doesn't run </code></pre> <hr /> <p><strong>Debugging done</strong></p> <ul> <li>Validated database can be connected to with:</li> </ul> <pre class="lang-py prettyprint-override"><code>import sqlalchemy import pandas as pd uri = ( 'postgresql://' + f'{config[&quot;USERNAME&quot;]}:' + f'{config[&quot;PASSWORD&quot;]}@' + f'{config[&quot;HOST&quot;]}:' + '5432/' + f'{config[&quot;DB_NAME&quot;]}' ) engine = sqlalchemy.create_engine(uri) df = pd.read_csv('data.csv') with engine.connect() as conn: df.to_sql(&quot;pandas_table&quot;, conn, index=False) </code></pre> <ul> <li>Ensured necessary packages installed: <ul> <li><code>pandas=2.2.0</code></li> <li><code>adbc-driver-manager=0.9.0</code></li> <li><code>adbc-driver-postgresql=0.9.0</code></li> <li><code>pyarrow=15.0.0</code></li> </ul> </li> </ul>
<python><pandas><postgresql><pyarrow>
2024-02-08 14:04:14
0
2,274
Alex F
77,962,211
8,771,082
How to subset multiple dataframes
<p>I have split data and metadata into separate dataframes for convenience. I would like to subset both at the same time, so I don't have to repeat myself. My example attempt below is a mess of if statements. This is because .loc[], .iloc[], and whether to leave all columns in a certain dimension (:) are &quot;part of the syntax&quot; (for lack of better explanation), and thus not passable as regular function arguments.</p> <p>There has to be a better way! I must either be missing something obvious, or trying to do something which is really dumb for some reason that goes over my head.</p> <p>I am aware that there are workarounds, and of the XY problem, so advice on any level on how to handle this is welcome.</p> <p>How do I do this, or why should I not even try to do this?</p> <pre><code># Make some example data all_data = pd.DataFrame( data = np.array(range(9)).reshape(3, 3), columns = ['a', 'b', 'c']) print(all_data) # a b c # 0 0 1 2 # 1 3 4 5 # 2 6 7 8 import string all_metadata = pd.DataFrame( data = np.array(list(string.ascii_lowercase))[:6].reshape(3, 2), columns = ['meta1', 'meta2']) print(all_metadata) # meta1 meta2 # 0 a b # 1 c d # 2 e f # Subset both (several) dataframes in a single call def subset_dataframes( dataframes, rows = None, cols = None, indexer = 'loc'): dfs_out = [] for dataframe in dataframes: if rows is not None and cols is not None: if indexer == 'loc': df = dataframe.loc[rows, cols] elif indexer == 'iloc': df = dataframe.iloc[rows, cols] elif rows is not None: if indexer == 'loc': df = dataframe.loc[rows, :] elif indexer == 'iloc': df = dataframe.iloc[rows, :] elif cols is not None: if indexer == 'loc': df = dataframe.loc[:, cols] elif indexer == 'iloc': df = dataframe.iloc[:, cols] dfs_out.append(df) return(dfs_out) subset_dataframes( dataframes = (all_data, all_metadata), rows = all_metadata.meta1.isin(['a', 'c']) ) # [ a b c # 0 0 1 2 # 1 3 4 5, # meta1 meta2 # 0 a b # 1 c d] # Works, but an absolute mess of code 
</code></pre>
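The if-ladder above can collapse almost entirely: `.loc`/`.iloc` accept `slice(None)` as the programmatic spelling of `:`, and the indexer itself can be looked up by name with `getattr`. A sketch on the same example data:

```python
import numpy as np
import pandas as pd

def subset_dataframes(dataframes, rows=None, cols=None, indexer="loc"):
    # slice(None) is how ':' is written when passed as a value
    rows = slice(None) if rows is None else rows
    cols = slice(None) if cols is None else cols
    return [getattr(df, indexer)[rows, cols] for df in dataframes]

all_data = pd.DataFrame(np.arange(9).reshape(3, 3), columns=["a", "b", "c"])
all_metadata = pd.DataFrame([["a", "b"], ["c", "d"], ["e", "f"]],
                            columns=["meta1", "meta2"])

data_sub, meta_sub = subset_dataframes(
    (all_data, all_metadata),
    rows=all_metadata.meta1.isin(["a", "c"]),
)
print(len(data_sub), len(meta_sub))  # 2 2
```

Note the usual caveat applies: the boolean mask's index must align with every frame being subset, which holds here because both frames share the same row index.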
<python><pandas>
2024-02-08 13:49:38
1
449
Anton
77,962,153
19,318,120
Firestore paginate document fields
<p>This is my first time using Firestore. The project is written in Python. I have a collection, let's say <code>posts</code>, and then a document for each user that contains an array of his posts. Is there a way to paginate that array?</p> <p>To get a document:</p> <pre><code>document = collection.document( str(document_id) ).get() </code></pre> <p>Doing <code>document.to_dict()</code> will load the entire document into memory, which can contain thousands and maybe millions of entries.</p> <p>What's a proper solution to this issue?</p>
<python><firebase><google-cloud-platform><google-cloud-firestore>
2024-02-08 13:41:52
1
484
mohamed naser