Dataset column summary (name: dtype, min to max):

QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, lengths 15 to 150
QuestionBody: string, lengths 40 to 40.3k
Tags: string, lengths 8 to 101
CreationDate: string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, lengths 3 to 30, nullable (⌀)
78,253,818
17,729,094
How to specify column data type
<p>I have the following code:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl from typing import NamedTuple class Event(NamedTuple): name: str description: str def event_table(num) -&gt; list[Event]: events = [] for i in range(num): events.append(Event(&quot;name&quot;, &quot;description&quot;)) return events data = {&quot;events&quot;: [1, 2]} df = pl.DataFrame(data).select(events=pl.col(&quot;events&quot;).map_elements(event_table)) &quot;&quot;&quot; shape: (2, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ events β”‚ β”‚ --- β”‚ β”‚ list[struct[2]] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ [{&quot;name&quot;,&quot;description&quot;}] β”‚ β”‚ [{&quot;name&quot;,&quot;description&quot;}, {&quot;name&quot;… β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ &quot;&quot;&quot; </code></pre> <p>But if the first list is empty, I get a <code>list[list[str]]</code> instead of the <code>list[struct[2]]</code> that I need:</p> <pre class="lang-py prettyprint-override"><code>data = {&quot;events&quot;: [0, 1, 2]} df = pl.DataFrame(data).select(events=pl.col(&quot;events&quot;).map_elements(event_table)) print(df) &quot;&quot;&quot; shape: (3, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ events β”‚ β”‚ --- β”‚ β”‚ list[list[str]] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ [] β”‚ β”‚ [[&quot;name&quot;, &quot;description&quot;]] β”‚ β”‚ [[&quot;name&quot;, &quot;description&quot;], [&quot;name… β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ &quot;&quot;&quot; </code></pre> <p>I tried using the <code>return_dtype</code> of 
the <code>map_elements</code> function like:</p> <pre class="lang-py prettyprint-override"><code>data = {&quot;events&quot;: [0, 1, 2]} df = pl.DataFrame(data).select( events=pl.col(&quot;events&quot;).map_elements( event_table, return_dtype=pl.List(pl.Struct({&quot;name&quot;: pl.String, &quot;description&quot;: pl.String})), ) ) </code></pre> <p>but this failed with:</p> <pre><code>Traceback (most recent call last): File &quot;script.py&quot;, line 18, in &lt;module&gt; df = pl.DataFrame(data).select( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;.venv/lib/python3.11/site-packages/polars/dataframe/frame.py&quot;, line 8193, in select return self.lazy().select(*exprs, **named_exprs).collect(_eager=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;.venv/lib/python3.11/site-packages/polars/lazyframe/frame.py&quot;, line 1943, in collect return wrap_df(ldf.collect()) ^^^^^^^^^^^^^ polars.exceptions.SchemaError: expected output type 'List(Struct([Field { name: &quot;name&quot;, dtype: String }, Field { name: &quot;description&quot;, dtype: String }]))', got 'List(List(String))'; set `return_dtype` to the proper datatype </code></pre> <p>How can I get this to work? I need the type of this column to be <code>list[struct[2]]</code> even if the first list is empty.</p>
<python><python-polars>
2024-04-01 05:32:37
2
954
DJDuque
78,253,728
3,810,748
How to fix issue with column alignment when printing pandas dataframe with emojis?
<p>When printing a DataFrame with emojis, the column header alignment issue worsens with more columns. This doesn't happen without emojis. Any solutions?</p> <h1>With Emojis</h1> <pre><code>import pandas as pd pd.set_option('display.max_rows', 1000) pd.set_option('display.max_columns', 1000) pd.set_option('display.width', 1000) example = {'normal_col' : [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 'text_col' : ['hello world'] * 10, 'emoji_col_A' : ['🟩 hello world'] * 10, 'emoji_col_B' : ['πŸŸ₯ hello world'] * 10, 'emoji_col_C' : ['🟧 hello world'] * 10, 'emoji_col_D' : ['🟨 hello world'] * 10} df = pd.DataFrame(example) print(df) </code></pre> <pre><code> normal_col text_col emoji_col_A emoji_col_B emoji_col_C emoji_col_D 0 1 hello world 🟩 hello world πŸŸ₯ hello world 🟧 hello world 🟨 hello world 1 2 hello world 🟩 hello world πŸŸ₯ hello world 🟧 hello world 🟨 hello world 2 3 hello world 🟩 hello world πŸŸ₯ hello world 🟧 hello world 🟨 hello world 3 4 hello world 🟩 hello world πŸŸ₯ hello world 🟧 hello world 🟨 hello world 4 5 hello world 🟩 hello world πŸŸ₯ hello world 🟧 hello world 🟨 hello world 5 6 hello world 🟩 hello world πŸŸ₯ hello world 🟧 hello world 🟨 hello world 6 7 hello world 🟩 hello world πŸŸ₯ hello world 🟧 hello world 🟨 hello world 7 8 hello world 🟩 hello world πŸŸ₯ hello world 🟧 hello world 🟨 hello world 8 9 hello world 🟩 hello world πŸŸ₯ hello world 🟧 hello world 🟨 hello world 9 10 hello world 🟩 hello world πŸŸ₯ hello world 🟧 hello world 🟨 hello world </code></pre> <h1>Without Emojis</h1> <pre><code>import pandas as pd pd.set_option('display.max_rows', 1000) pd.set_option('display.max_columns', 1000) pd.set_option('display.width', 1000) example = {'normal_col' : [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 'text_col' : ['hello world'] * 10, 'emoji_col_A' : ['hello world'] * 10, 'emoji_col_B' : ['hello world'] * 10, 'emoji_col_C' : ['hello world'] * 10, 'emoji_col_D' : ['hello world'] * 10} df = pd.DataFrame(example) print(df) </code></pre> <pre><code> normal_col text_col emoji_col_A 
emoji_col_B emoji_col_C emoji_col_D 0 1 hello world hello world hello world hello world hello world 1 2 hello world hello world hello world hello world hello world 2 3 hello world hello world hello world hello world hello world 3 4 hello world hello world hello world hello world hello world 4 5 hello world hello world hello world hello world hello world 5 6 hello world hello world hello world hello world hello world 6 7 hello world hello world hello world hello world hello world 7 8 hello world hello world hello world hello world hello world 8 9 hello world hello world hello world hello world hello world 9 10 hello world hello world hello world hello world hello world </code></pre>
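One knob worth trying (an assumption, not a guaranteed fix for every terminal): most emoji are double-width characters, and pandas can account for wide characters when computing column widths via the `display.unicode.east_asian_width` option:

```python
import pandas as pd

# count wide (double-width) characters as two columns when aligning output
pd.set_option("display.unicode.east_asian_width", True)

df = pd.DataFrame({"emoji_col": ["🟩 hello world"] * 3, "n": [1, 2, 3]})
print(df)
```

Whether the output lines up then still depends on the terminal font rendering emoji at exactly double width.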
<python><pandas><dataframe>
2024-04-01 04:55:43
3
6,155
AlanSTACK
78,253,231
721,666
How to run llama-cpp-python in a Docker Container?
<p>I have a more conceptual question about running llama-cpp-python in a Docker Container. After following a lot of different tutorials I am more confused than at the beginning.</p> <p>I have a Debian 12 Server with a CPU - Intel Core i7-7700 and a GPU - GeForce GTX 1080.</p> <p>On the host I installed the following components from the default Debian APT repository and the Nvidia APT repository</p> <ul> <li>linux-headers-amd64</li> <li>nvidia-detect</li> <li>nvidia-driver</li> <li>nvidia-smi</li> <li>linux-image-amd64</li> <li>cuda</li> </ul> <p>The Nvidia driver is installed correctly, which I can verify with</p> <pre><code># nvidia-smi Sun Mar 31 10:46:20 2024 +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 550.54.15 Driver Version: 550.54.15 CUDA Version: N/A | |-----------------------------------------+------------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. 
| |=========================================+========================+======================| | 0 NVIDIA GeForce GTX 1080 Off | 00000000:01:00.0 Off | N/A | | 36% 42C P0 39W / 180W | 0MiB / 8192MiB | 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ +-----------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=========================================================================================| | No running processes found | +-----------------------------------------------------------------------------------------+ </code></pre> <p>Next I built a Docker image in which I installed the following libraries:</p> <ul> <li>jupyterlab</li> <li>cuda-toolkit-12-3</li> <li>llama-cpp-python</li> </ul> <p>Then I run my container with my llama_cpp application</p> <pre><code>$ docker run --gpus all my-docker-image </code></pre> <p>It works, but the GPU has no effect, even though I can see from my log output that something with GPU and CUDA was detected by llama-cpp:</p> <pre><code>.... 
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes ggml_init_cublas: found 1 CUDA devices: Device 0: NVIDIA GeForce GTX 1080, compute capability 6.1, VMM: yes llama_kv_cache_init: CUDA_Host KV buffer size = 381.00 MiB llama_new_context_with_model: KV self size = 381.00 MiB, K (f16): 190.50 MiB, V (f16): 190.50 MiB llama_new_context_with_model: CUDA_Host output buffer size = 62.50 MiB llama_new_context_with_model: CUDA0 compute buffer size = 227.41 MiB llama_new_context_with_model: CUDA_Host compute buffer size = 13.96 MiB llama_new_context_with_model: graph nodes = 1060 llama_new_context_with_model: graph splits = 356 AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | ..... </code></pre> <p>There is no performance difference between running the container with or without the GPU.</p> <p>My first question is: Is my environment setup correct, or are there any components missing on the Host or Container side? And my second question is: What is necessary to run llama-cpp-python inside a container using the GPU?</p> <p>My installation code of llama-cpp-python within my container looks like this:</p> <pre><code>... RUN CMAKE_ARGS=&quot;-DLLAMA_CUBLAS=on&quot; pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir .... </code></pre>
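For reference, a common setup pattern (a sketch only; the base-image tag and package choices are assumptions, not taken from the question) is to build on an nvidia/cuda `-devel` image so `nvcc` is available when pip compiles llama-cpp-python, and to offload the model layers explicitly at load time:

```dockerfile
# Sketch: image tag is an assumption, pick one matching the host driver's CUDA support
FROM nvidia/cuda:12.3.2-devel-ubuntu22.04

RUN apt-get update && apt-get install -y python3 python3-pip && rm -rf /var/lib/apt/lists/*

# compile llama-cpp-python against CUDA; nvcc comes from the -devel image
RUN CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip3 install llama-cpp-python --no-cache-dir

# Note: the application must also request GPU offload when loading the model, e.g.
#   llm = Llama(model_path="model.gguf", n_gpu_layers=-1)
# otherwise all layers stay on the CPU even though CUDA is detected.
```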
<python><docker>
2024-04-01 00:39:26
0
4,966
Ralph
78,253,071
1,543,167
Writes to child subprocess.Popen.stdin don't work from within process group?
<p>From within a Python script, I'm launching an executable in a subprocess (for anyone interested: it's the terraria server).</p> <pre><code># script.py server_process = subprocess.Popen(&quot;my_executable arg1 arg2&quot;, stdin=subprocess.PIPE, text=True) # My Bash shell $ ps aux | grep my_executable # I've removed everything but the PIDs and CLIs below 14701 my_executable arg1 arg2 </code></pre> <p><code>my_executable</code> starts writing to <code>stdout</code> as expected. However I can't seem to <em>write</em> to <code>stdin</code> from that same Python script</p> <pre><code># script.py server_process.stdin.write('exit\n') server_process.stdin.flush() </code></pre> <p><code>my_executable</code> terminates itself upon reading <code>exit</code>, so that should kill it: but nothing happens</p> <p>Here's the wrinkle: I can write to <code>my_executable</code>'s <code>stdin</code> via <code>/proc</code> from a <em>separate</em> process (like my bash shell) just fine!</p> <pre><code># Bash shell $ sudo echo exit &gt; /proc/14701/fd/0 # Works $ python3 -c 'import os; os.system(&quot;sudo echo exit &gt; /proc/14701/fd/0&quot;)' # Also works! </code></pre> <p>Putting that <code>os.system</code> call in the original script with <code>server_process.pid</code> doesn't work though</p> <pre><code># script.py os.system(f&quot;sudo echo exit &gt; /proc/{server_process.pid}/fd/0&quot;) # sh: 1: cannot create /proc/14701/fd/0: Directory nonexistent </code></pre> <p>It seems like writes to <code>stdin</code> from a separate process (like my bash shell) work, whereas writes from within the process (group?) don't. What's going on here?</p>
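As a baseline sanity check (with `cat` standing in for the real server binary), writing to a pipe-backed `stdin` from the parent does reach the child when the write is flushed and the stream eventually closed:

```python
import subprocess

# `cat` echoes its stdin back, so it stands in for a child that reads commands
p = subprocess.Popen(
    ["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
)
p.stdin.write("exit\n")
p.stdin.flush()
p.stdin.close()  # signals EOF so the child can finish
out = p.stdout.read()
p.wait()
```

If an equivalent write does not reach the real executable, a likely suspect is that the child reads from its controlling terminal rather than from fd 0, or buffers its input, rather than the pipe mechanism itself.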
<python><python-3.x><subprocess><stdin>
2024-03-31 22:58:56
0
2,376
pipsqueaker117
78,253,020
2,428,124
Can't install packages in python conda environment
<p>I'm kinda new to python and it's the first time I'm managing environments with Anaconda.</p> <p>There's some discrepancy that I can't figure out between my conda environment and my visual studio code, I'll explain:</p> <p>I created a conda environment with python 3.8.11, recognised by vscode: <a href="https://i.sstatic.net/P0dXj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/P0dXj.png" alt="vscode config" /></a></p> <p>But if I open a terminal inside vscode, the environment selected is another one:</p> <pre><code># conda environments: # base * /opt/homebrew/anaconda3 quant /opt/homebrew/anaconda3/envs/quant </code></pre> <p>And this to me is already strange, I would expect the terminal environment to match the conda environment that I'm using for my project, but okay.</p> <p>So I switch to the &quot;quant&quot; environment:</p> <pre><code>❯ conda activate quant ❯ conda info --envs # conda environments: # base /opt/homebrew/anaconda3 quant * /opt/homebrew/anaconda3/envs/quant </code></pre> <p>but if I run <code>python --version</code> I get:</p> <pre><code>❯ python --version Python 3.11.8 </code></pre> <p>but I created the quant environment with python 3.8.11...why is it different? This trickles down to why I can't install new packages (I think), because if I try to install something, like &quot;schedule&quot;</p> <pre><code>❯ pip install schedule Requirement already satisfied: schedule in /opt/homebrew/lib/python3.11/site-packages (1.2.1) </code></pre> <p>everything runs correctly, but then if I try to import the package I get: <code>Import &quot;schedule&quot; could not be resolved</code></p> <p>What am I missing?</p>
<python><anaconda>
2024-03-31 22:33:57
0
3,289
ste
78,252,973
11,833,216
Python and regex, can't understand why some words are left out of the match
<pre><code>s = (&quot;If I’m not in a hurry, then I should stay. &quot; + &quot;On the other hand, if I leave, then I can sleep.&quot;) re.findall(r'[Ii]f (.*), then', s) </code></pre> <p>The output is:</p> <pre><code>I’m not in a hurry, then I should stay. On the other hand, if I leave </code></pre> <p>The question is: Why are the words &quot;If&quot; and &quot;then&quot; not included in the output?</p> <p>I expected something like this:</p> <pre><code>If I’m not in a hurry, then I should stay. On the other hand, if I leave, then </code></pre>
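The behaviour comes from `re.findall`: when the pattern contains a capturing group, `findall` returns the text of the group, not the whole match. A quick way to see both side by side:

```python
import re

s = ("If I’m not in a hurry, then I should stay. "
     "On the other hand, if I leave, then I can sleep.")

# findall with a capturing group returns only the captured part
groups = re.findall(r'[Ii]f (.*), then', s)

# finditer + group(0) returns the whole match, including "If" and "then"
whole = [m.group(0) for m in re.finditer(r'[Ii]f (.*), then', s)]
```

The greedy `.*` also explains why a single match spans both clauses: it runs to the last `, then` in the string.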
<python><regex>
2024-03-31 22:09:42
1
307
cat15ets
78,252,766
16,717,009
Filtering inside groups in polars
<p>I'm new to Polars and need some advice from the experts. I have some working code, but I've got to believe there's a faster and/or more elegant way to do this. I've got a large dataframe with columns cik (int), form (string) and period (date) of relevance here. Form can have the value either '10-Q' or '10-K'. Each cik will have many rows of the 2 form types with different periods represented. What I want to end up with is, for each cik group, only the most recent 10-Q remains and only the most recent 10 10-Ks remain. Of course if there are fewer than 10 10-K forms, all should remain. Here's what I'm doing now (it works):</p> <pre><code>def filter_sub_for_11_rows_per_cik(df_): df = df_.sort('cik') # Keep only the last 10-Q q_filtered_df = df.group_by('cik').map_groups( lambda g: g.sort('period', descending=True).filter(pl.col('form').eq('10-Q')).head(1)) # Keep the last up to 10 10-Ks k_filtered_df = df.group_by('cik').map_groups( lambda g: g.sort('period', descending=True) .filter(pl.col('form').eq('10-K')) .slice(0, min(10, g.filter(pl.col('form').eq('10-K')).shape[0])) ) return pl.concat([q_filtered_df, k_filtered_df]) </code></pre>
<python><python-polars>
2024-03-31 20:38:58
1
343
MikeP
78,252,692
1,709,768
Why numpy.vectorize calls vectorized function more times than elements in the vector?
<p>When we call a vectorized function on some vector, for some reason it is called twice for the first vector element. What is the reason, and can we get rid of this strange effect (e.g. when the function needs to have some side effect, such as accumulating a sum)?</p> <p>Example:</p> <pre class="lang-py prettyprint-override"><code>import numpy @numpy.vectorize def test(x): print(x) test([1,2,3]) </code></pre> <p>Result:</p> <pre><code>1 1 2 3 </code></pre> <p>numpy 1.26.4</p>
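The extra call is `np.vectorize` probing the first element to determine the output dtype; per the NumPy docs, this trial call can be avoided by passing `otypes` explicitly. A quick check:

```python
import numpy as np

calls = []

def f(x):
    calls.append(x)  # side effect we want to happen exactly once per element
    return x * 2

# specifying otypes skips the dtype-probing call on the first element
vf = np.vectorize(f, otypes=[int])
result = vf([1, 2, 3])
```

Without `otypes=[int]`, `calls` would start with a duplicated first element.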
<python><numpy>
2024-03-31 20:09:58
1
315
vvch
78,252,518
2,813,152
use dict from python in django html template and also in js
<p>I have a dict <code>statistics</code> in my <code>view.py</code> and pass it as a param in my <code>context</code> to my <code>index.html</code>. There I want to use it in my html like <code>{{ statistics.key1 }}</code>. But I also want to use it in <code>js</code>.</p> <p>When using <code>json.dumps</code> like this in my <code>view.py</code>:</p> <pre><code>&quot;statistics&quot;: json.dumps(statistics, cls=DjangoJSONEncoder), </code></pre> <p>I can use it like this in my <code>js</code>:</p> <pre><code>var statistics = JSON.parse('{{ statistics | escapejs }}'); </code></pre> <p>But then I can't use it as a dict anymore in my html. Makes sense, as it is now a JSON string.</p> <p>But when I just pass it as a dict, I can use it in my html, but</p> <pre><code>var statistics = '{{ statistics | escapejs }}' </code></pre> <p>gives me JSON with single quotes, so I can't just parse it.</p> <p>How can I do both? Still use the dict as normal in my HTML and also parse it to use it in js?</p>
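One way to serve both uses (a sketch; `stats-data` is a made-up element id) is Django's built-in `json_script` template filter: keep the raw dict in the context, render it normally in HTML, and let the filter serialize it safely into a `<script>` tag for JS:

```html
<!-- the raw dict stays usable in the template -->
<p>{{ statistics.key1 }}</p>

<!-- json_script emits <script id="stats-data" type="application/json">...</script> -->
{{ statistics|json_script:"stats-data" }}

<script>
  const statistics = JSON.parse(
    document.getElementById("stats-data").textContent
  );
</script>
```

With this pattern the view passes the plain dict only; no `json.dumps` or `escapejs` is needed.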
<python><django><django-views><django-templates>
2024-03-31 19:04:47
1
4,932
progNewbie
78,251,924
5,618,251
How to add a new variable to xarray.Dataset in Python with same time,lat,lon dimensions with assign?
<p>I have an xarray.Dataset that looks like:</p> <pre><code>print(ds2) &lt;xarray.Dataset&gt; Dimensions: (time: 46, latitude: 360, longitude: 720) Coordinates: * time (time) datetime64[ns] 1976-01-01 1977-01-01 ... 2021-01-01 * latitude (latitude) float64 89.75 89.25 88.75 ... -88.75 -89.25 -89.75 * longitude (longitude) float64 -179.8 -179.2 -178.8 ... 178.8 179.2 179.8 Data variables: Glacier (time, latitude, longitude) float64 dask.array&lt;chunksize=(1, 360, 720), meta=np.ndarray&gt; Uncertainty (time, latitude, longitude) float64 dask.array&lt;chunksize=(1, 360, 720), meta=np.ndarray&gt; </code></pre> <p>I also have a raster with similar dimensions:</p> <pre><code>print(np.shape(rgi_raster)) (1, 360, 720) </code></pre> <p>How do I add rgi_raster to the xarray.Dataset so that it has the same time, lat, lon coordinates as the Glacier and Uncertainty variables?</p> <p>I tried:</p> <pre><code>ds2=ds2.assign(rgi_raster=rgi_raster) </code></pre> <p>But this gives:</p> <pre><code>&lt;xarray.Dataset&gt; Dimensions: (time: 46, latitude: 360, longitude: 720, band: 1, x: 720, y: 360) Coordinates: * time (time) datetime64[ns] 1976-01-01 1977-01-01 ... 2021-01-01 * latitude (latitude) float64 89.75 89.25 88.75 ... -88.75 -89.25 -89.75 * longitude (longitude) float64 -179.8 -179.2 -178.8 ... 178.8 179.2 179.8 * band (band) int64 1 * x (x) float64 -179.8 -179.2 -178.8 -178.2 ... 178.8 179.2 179.8 * y (y) float64 -89.75 -89.25 -88.75 -88.25 ... 88.75 89.25 89.75 spatial_ref int64 0 Data variables: Glacier (time, latitude, longitude) float64 dask.array&lt;chunksize=(1, 360, 720), meta=np.ndarray&gt; Uncertainty (time, latitude, longitude) float64 dask.array&lt;chunksize=(1, 360, 720), meta=np.ndarray&gt; rgi_raster (band, y, x) float64 19.0 19.0 19.0 19.0 ... 10.0 10.0 10.0 </code></pre>
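One approach (a sketch with shrunken stand-in shapes): drop the raster's leading band axis and attach the array with the Dataset's own dimension names spelled out, so xarray reuses the existing latitude/longitude coordinates instead of introducing band/y/x:

```python
import numpy as np
import xarray as xr

# stand-ins for the question's objects, with tiny shapes for the example
ds2 = xr.Dataset(
    {"Glacier": (("time", "latitude", "longitude"), np.zeros((2, 3, 4)))},
    coords={
        "time": [0, 1],
        "latitude": [0.0, 1.0, 2.0],
        "longitude": [0.0, 1.0, 2.0, 3.0],
    },
)
rgi_raster = np.ones((1, 3, 4))  # (band, y, x), like the question's raster

# squeeze the band axis away, then name the target dims explicitly
ds2["rgi_raster"] = (("latitude", "longitude"), np.squeeze(rgi_raster, axis=0))
```

Because the dims are named explicitly, the new variable aligns with the existing coordinates; it simply has no time dimension, which broadcasts naturally in later arithmetic.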
<python><dataset><python-xarray><assign>
2024-03-31 15:46:54
1
361
user5618251
78,251,888
13,349,653
Polars: efficiently get the 2nd largest element
<p>In polars, how do you efficiently get the 2nd largest element, or nth for some small n compared to the size of the column?</p>
<python><dataframe><python-polars>
2024-03-31 15:36:08
2
1,788
Test
78,251,515
13,084,917
How to append row to specific columns with gspread?
<p>I have a google sheet with values on it. Think of it like this:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>header1</th> <th>col1</th> <th>header 3</th> <th>col2</th> </tr> </thead> <tbody> <tr> <td>First</td> <td></td> <td>row</td> <td></td> </tr> <tr> <td>Second</td> <td></td> <td>row</td> <td></td> </tr> </tbody> </table></div> <p>Other data will come in and fill the 2nd and 4th columns respectively.</p> <p>So I want to use append_row with specific column names, because after each process (my code) I want to immediately add the result to my Google Sheet.</p> <hr /> <p>WHAT I DO FOR NOW (I want to change this logic):</p> <p>I have 2 columns like this. Previously, after my code completed (when all data was ready), I added the data with worksheet update like this (I am using gspread):</p> <pre class="lang-py prettyprint-override"><code> headers = worksheet.row_values(1) col1_index = headers.index('col1') + 1 col2_index = headers.index('col2') + 1 for item in result: col1_list.append(item['col1']) col2_list.append(item['col2']) col1_transposed = [[item] for item in col1_list] col2_transposed = [[item] for item in col2_list] col1_range = '{}2:{}{}'.format(chr(65 + col1_index - 1), chr(65 + col1_index - 1), len(col1_list) + 1) col2_range = '{}2:{}{}'.format(chr(65 + col2_index - 1), chr(65 + col2_index - 1), len(col2_list) + 1) worksheet.update(col1_range, col1_transposed) worksheet.update(col2_range, col2_transposed) </code></pre> <p>But now I want to append my data row by row to specific columns. 
After each process I will have data like this</p> <pre><code>{'col1': 'value1', 'col2': 'value2'} </code></pre> <p>and value1 will be in the col1 column and value2 will be in the col2 column of the first row.</p> <p>Afterwards I will get the same kind of data from the code:</p> <pre><code>{'col1': 'value3', 'col2': 'value4'} </code></pre> <p>The result I would like to see:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>header1</th> <th>col1</th> <th>header 3</th> <th>col2</th> </tr> </thead> <tbody> <tr> <td>First</td> <td>value1</td> <td>row</td> <td>value2</td> </tr> <tr> <td>Second</td> <td>value3</td> <td>row</td> <td>value4</td> </tr> </tbody> </table></div>
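Incidentally, the `chr(65 + index - 1)` arithmetic in the current code only works up to column Z. A small helper (gspread also ships `gspread.utils.rowcol_to_a1` for the same job) handles multi-letter columns:

```python
def col_letter(n: int) -> str:
    """Convert a 1-based column index to its A1 letter(s): 1 -> A, 27 -> AA."""
    letters = ""
    while n > 0:
        n, rem = divmod(n - 1, 26)
        letters = chr(65 + rem) + letters
    return letters
```

Building ranges with this helper instead of raw `chr()` keeps the update logic correct for sheets wider than 26 columns.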
<python><google-sheets><google-sheets-api><gspread>
2024-03-31 13:41:12
1
884
omerS
78,251,440
1,082,349
Unknown dependency "pin-1" prevents conda installation
<p>I'm trying to downgrade libffi=3.3 because of a bug I'm encountering (see very end).</p> <pre><code>conda install libffi==3.3 -n mismatch Channels: - defaults Platform: linux-64 Collecting package metadata (repodata.json): done Solving environment: \ warning libmamba Added empty dependency for problem type SOLVER_RULE_UPDATE failed LibMambaUnsatisfiableError: Encountered problems while solving: - package python-3.11.8-h955ad1f_0 requires libffi &gt;=3.4,&lt;3.5, but none of the providers can be installed Could not solve for environment specs The following packages are incompatible β”œβ”€ libffi 3.3 is requested and can be installed; β”œβ”€ matplotlib is installable with the potential options β”‚ β”œβ”€ matplotlib [3.6.2|3.7.1|3.7.2] would require β”‚ β”‚ └─ matplotlib-base [&gt;=3.6.2,&lt;3.6.3.0a0 |&gt;=3.7.1,&lt;3.7.2.0a0 |&gt;=3.7.2,&lt;3.7.3.0a0 ] with the potential options β”‚ β”‚ β”œβ”€ matplotlib-base [3.6.2|3.7.1|3.7.2] would require β”‚ β”‚ β”‚ └─ python &gt;=3.11,&lt;3.12.0a0 , which requires β”‚ β”‚ β”‚ └─ libffi &gt;=3.4,&lt;3.5 , which conflicts with any installable versions previously reported; β”‚ β”‚ β”œβ”€ matplotlib-base [3.6.2|3.7.1|3.7.2] would require β”‚ β”‚ β”‚ └─ python &gt;=3.10,&lt;3.11.0a0 , which can be installed; β”‚ β”‚ β”œβ”€ matplotlib-base [3.2.1|3.2.2|...|3.7.2] would require β”‚ β”‚ β”‚ └─ python &gt;=3.8,&lt;3.9.0a0 , which can be installed; β”‚ β”‚ └─ matplotlib-base [3.3.2|3.6.2|3.7.1|3.7.2] would require β”‚ β”‚ └─ python &gt;=3.9,&lt;3.10.0a0 , which can be installed; β”‚ β”œβ”€ matplotlib 3.8.0 would require β”‚ β”‚ └─ python &gt;=3.11,&lt;3.12.0a0 , which cannot be installed (as previously explained); β”‚ β”œβ”€ matplotlib [2.0.2|2.1.0|...|2.2.3] would require β”‚ β”‚ └─ python &gt;=2.7,&lt;2.8.0a0 , which can be installed; β”‚ β”œβ”€ matplotlib [2.0.2|2.1.0|...|3.0.0] would require β”‚ β”‚ └─ python &gt;=3.5,&lt;3.6.0a0 , which can be installed; β”‚ β”œβ”€ matplotlib [2.0.2|2.1.0|...|3.3.4] would require β”‚ β”‚ └─ 
python &gt;=3.6,&lt;3.7.0a0 , which can be installed; β”‚ β”œβ”€ matplotlib [2.2.2|2.2.3|...|3.5.3] would require β”‚ β”‚ └─ python &gt;=3.7,&lt;3.8.0a0 , which can be installed; β”‚ β”œβ”€ matplotlib [3.1.1|3.1.2|...|3.7.2] would require β”‚ β”‚ └─ python &gt;=3.8,&lt;3.9.0a0 , which can be installed; β”‚ β”œβ”€ matplotlib [3.2.1|3.2.2|3.3.1] would require β”‚ β”‚ └─ matplotlib-base [&gt;=3.2.1,&lt;3.2.2.0a0 |&gt;=3.2.2,&lt;3.2.3.0a0 |&gt;=3.3.1,&lt;3.3.2.0a0 ] with the potential options β”‚ β”‚ β”œβ”€ matplotlib-base [3.2.1|3.2.2|...|3.7.2], which can be installed (as previously explained); β”‚ β”‚ β”œβ”€ matplotlib-base [3.2.1|3.2.2|3.3.1|3.3.2] would require β”‚ β”‚ β”‚ └─ python &gt;=3.6,&lt;3.7.0a0 , which can be installed; β”‚ β”‚ └─ matplotlib-base [3.2.1|3.2.2|3.3.1|3.3.2] would require β”‚ β”‚ └─ python &gt;=3.7,&lt;3.8.0a0 , which can be installed; β”‚ β”œβ”€ matplotlib 3.3.2 would require β”‚ β”‚ └─ matplotlib-base &gt;=3.3.2,&lt;3.3.3.0a0 , which can be installed (as previously explained); β”‚ β”œβ”€ matplotlib [3.3.4|3.4.2|...|3.8.0] would require β”‚ β”‚ └─ python &gt;=3.9,&lt;3.10.0a0 , which can be installed; β”‚ β”œβ”€ matplotlib [3.5.0|3.5.1|...|3.8.0] would require β”‚ β”‚ └─ python &gt;=3.10,&lt;3.11.0a0 , which can be installed; β”‚ └─ matplotlib 3.8.0 would require β”‚ └─ python &gt;=3.12,&lt;3.13.0a0 , which can be installed; └─ pin-1 is not installable because it requires └─ python 3.11.* , which conflicts with any installable versions previously reported. 
(mismatch) :$ conda list python -n mismatch # packages in environment at /home/x/anaconda3/envs/mismatch: # # Name Version Build Channel python 3.11.8 h955ad1f_0 </code></pre> <p>As you can see I already have downgraded python to 3.11 -- why is the conda solver complaining about &quot;pin1&quot; and its requirement for python 3.11?</p> <hr /> <p>The bug that I'm encountering is triggered by <code>pystata</code>, and is resolved after downgrading Python and libffi as suggested by the answer:</p> <pre><code>Traceback (most recent call last): File &quot;/usr/local/stata18/utilities/pystata/config.py&quot;, line 239, in init stlib = cdll.LoadLibrary(lib_path) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;~/anaconda3/envs/mismatch/lib/python3.12/ctypes/__init__.py&quot;, line 460, in LoadLibrary return self._dlltype(name) ^^^^^^^^^^^^^^^^^^^ File &quot;~/anaconda3/envs/mismatch/lib/python3.12/ctypes/__init__.py&quot;, line 379, in __init__ self._handle = _dlopen(self._name, mode) ^^^^^^^^^^^^^^^^^^^^^^^^^ OSError: /usr/lib/x86_64-linux-gnu/libp11-kit.so.0: undefined symbol: ffi_type_pointer, version LIBFFI_BASE_7.0 During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/snap/pycharm-professional/378/plugins/python/helpers/pydev/pydevd.py&quot;, line 1534, in _exec pydev_imports.execfile(file, globals, locals) # execute the script ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/snap/pycharm-professional/378/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py&quot;, line 18, in execfile exec(compile(contents+&quot;\n&quot;, file, 'exec'), glob, loc) File &quot;main.py&quot;, line 17, in &lt;module&gt; config.init('be') File &quot;/usr/local/stata18/utilities/pystata/config.py&quot;, line 241, in init </code></pre>
<python><conda><libffi>
2024-03-31 13:19:24
1
16,698
FooBar
78,251,217
15,524,510
How to convert pandas series to integer for use in datetime.fromisocalendar
<p>I am trying to transform a pandas series which has dates in it. I'd like to take the date that is in there and return the following Monday.</p> <p>Here is what I have tried:</p> <pre><code>db['date'] = datetime.date.fromisocalendar(db['date'].dt.year.astype(np.int64),(db['date'].dt.week+1).astype(np.int64),1) </code></pre> <p>But I get the following error:</p> <pre><code>TypeError: 'Series' object cannot be interpreted as an integer </code></pre> <p>Is there a better way to do this?</p>
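`datetime.date.fromisocalendar` takes plain ints, not Series, so it would need a row-wise apply; a vectorized alternative (a sketch, with `next_monday` as a made-up column name) derives the following Monday directly from the weekday. Note that `.dt.week` is deprecated in modern pandas in favour of `.dt.isocalendar().week`:

```python
import pandas as pd

db = pd.DataFrame({"date": pd.to_datetime(["2024-03-28", "2024-03-31"])})

# back up to Monday of the current ISO week (weekday: Mon=0), then jump a week
db["next_monday"] = (
    db["date"].dt.normalize()
    - pd.to_timedelta(db["date"].dt.weekday, unit="D")
    + pd.Timedelta(days=7)
)
```

This stays entirely in vectorized datetime arithmetic, so it avoids the `'Series' object cannot be interpreted as an integer` error.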
<python><pandas><datetime><series>
2024-03-31 11:55:39
2
363
helloimgeorgia
78,251,194
595,305
Grey-out QLineEdit as per disabled state automatically, combined with non-conditional style sheet directive?
<p>By default, a <code>QLineEdit</code> is greyed-out when disabled.</p> <p>If you set a stylesheet with a simple directive, e.g. setting a border, this cancels that default behaviour.</p> <p>Fortunately it is possible to mimic the default behaviour, like so:</p> <pre><code>qle.setStyleSheet('QLineEdit[readOnly=\&quot;true\&quot;] {color: #808080; background-color: #F0F0F0;}') </code></pre> <p>But how might I <em>combine</em> this conditional greying-out with setting a border (unconditionally)?</p> <p>Here is an MRE:</p> <pre><code>import sys from PyQt5 import QtWidgets class Window(QtWidgets.QWidget): def __init__(self): super().__init__() self.setWindowTitle(&quot;QLE stylesheet question&quot;) self.setMinimumSize(400, 30) layout = QtWidgets.QVBoxLayout() qle = QtWidgets.QLineEdit() qle.setObjectName('qle') # qle.setStyleSheet('border:4px solid yellow;') # this sets up the style sheet to reproduce the default PyQt behaviour for a QLE (greyed-out if disabled, otherwise not) # qle.setStyleSheet('QLineEdit[readOnly=\&quot;true\&quot;] {color: #808080; background-color: #F0F0F0;}') # doesn't work: when disabled is not greyed-out! qle.setStyleSheet('#qle {border:4px solid yellow;} #qle[readOnly=\&quot;true\&quot;] {color: #808080; background-color: #F0F0F0;}') # qle.setEnabled(False) # uncomment to see &quot;disabled&quot; appearance layout.addWidget(qle) self.setLayout(layout) if __name__ == &quot;__main__&quot;: app = QtWidgets.QApplication(sys.argv) window = Window() window.show() sys.exit(app.exec_()) </code></pre>
<python><pyqt><qtstylesheets>
2024-03-31 11:45:16
1
16,076
mike rodent
78,251,060
3,859,500
Can't import LlamaParse
<p>I created a short python application in a Google Colab notebook, that works fine. I am trying to move the application from Google Colab into a local Docker-Python-Application.</p> <p>But whenever <strong>I run the application with Docker and do in a .py file</strong></p> <pre><code>from llama_parse import LlamaParse </code></pre> <p><strong>My application fails with the following error:</strong></p> <pre><code>pdf-compare-mvp1-web-1 | File &quot;/usr/local/lib/python3.8/site-packages/llama_parse/base.py&quot;, line 386, in LlamaParse pdf-compare-mvp1-web-1 | def get_images(self, json_result: list[dict], download_path: str) -&gt; List[dict]: pdf-compare-mvp1-web-1 | TypeError: 'type' object is not subscriptable </code></pre> <p><strong>here is the functional notebook code</strong></p> <pre><code>!pip install -q llama-index !pip install -q openai !pip install -q transformers !pip install -q accelerate import os os.environ[&quot;OPENAI_API_KEY&quot;] = &quot;...&quot; from IPython.display import Markdown, display from llama_index.llms.openai import OpenAI from llama_index.core import VectorStoreIndex, SimpleDirectoryReader from llama_parse import LlamaParse parser = LlamaParse( api_key=&quot;...&quot;, result_type=&quot;markdown&quot; ) document1 = await parser.aload_data(&quot;data/some.pdf&quot;) # some more code </code></pre> <p><strong>Following the not working local Docker application:</strong></p> <p>To build and run the application locally on my machine, I created Python flask app with the following relevant files:</p> <p><strong>Dockerfile</strong></p> <pre><code># Use an official Python runtime as a parent image FROM python:3.8-slim # Set the working directory in the container WORKDIR /app # Copy the current directory contents into the container at /app COPY . 
/app # Install any needed packages specified in requirements.txt RUN pip install --no-cache-dir -r requirements.txt # Make port 5000 available to the world outside this container EXPOSE 5000 # Define environment variable ENV FLASK_APP=app.py # Run app.py when the container launches CMD [&quot;flask&quot;, &quot;run&quot;, &quot;--host=0.0.0.0&quot;] </code></pre> <p><strong>docker-compose.yml</strong></p> <pre><code>version: '3.8' services: web: build: . volumes: - .:/app ports: - &quot;5002:5000&quot; environment: - FLASK_ENV=development </code></pre> <p><strong>requirements.txt</strong></p> <pre><code>Flask==2.1.2 python-dotenv==0.20.0 boto3==1.24.12 Werkzeug==2.2.0 PyMuPDF llama-index openai llama-parse # transformers # accelerate </code></pre> <p><strong>processor.py</strong></p> <pre><code>import fitz # PyMuPDF import os from config import Config # from llama_parse import LlamaParse # import nest_asyncio # nest_asyncio.apply() from llama_index.llms.openai import OpenAI from llama_index.core import VectorStoreIndex, SimpleDirectoryReader from llama_parse import LlamaParse </code></pre> <p>The last line, <code>from llama_parse import LlamaParse</code>, results in the error. <strong>Whenever this line is present/not present, the application is not functional/functional.</strong></p>
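The traceback points at a Python-version mismatch rather than at LlamaParse itself: the failing annotation `list[dict]` uses PEP 585 builtin generics, which require Python 3.9 or later, while the Dockerfile pins `python:3.8-slim` (Colab runs a newer interpreter, which is why the notebook works). The practical fix is to bump the base image, e.g. `FROM python:3.10-slim`. As a sketch to make the failure mode concrete, this is the 3.8-compatible spelling of such an annotation (the body is a placeholder, not the library's code):

```python
from typing import Dict, List

# Python 3.8-compatible equivalent of `list[dict]` annotations:
# subscripting the builtin `list` (PEP 585) needs Python >= 3.9,
# whereas the typing aliases work on 3.8 as well.
def get_images(json_result: List[Dict], download_path: str) -> List[Dict]:
    # placeholder body; only the annotations matter for this sketch
    return json_result

print(get_images([{"page": 1}], "/tmp"))
```

Since the error comes from installed library code, changing the interpreter version in the Docker image is the fix here, not patching the annotation.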
<python><docker><llama-index>
2024-03-31 10:52:23
1
806
Ben Spi
78,250,997
1,497,139
how to set the width and height of an ui.image in nicegui?
<p>I want to migrate some justpy code to nicegui. While the justpy IMG class could set width and height in the constructor, the natural way to migrate would be:</p> <pre class="lang-py prettyprint-override"><code>with topicHeader: _topicIcon = ui.image( source=icon_url, width=f&quot;{self.iconSize}&quot;, height=f&quot;{self.iconSize}&quot;, ) </code></pre> <p>but nicegui does not have width and height. I looked in the source code:</p> <pre class="lang-py prettyprint-override"><code>class Image(SourceElement, component='image.js'): PIL_CONVERT_FORMAT = 'PNG' def __init__(self, source: Union[str, Path, 'PIL_Image'] = '') -&gt; None: &quot;&quot;&quot;Image Displays an image. This element is based on Quasar's `QImg &lt;https://quasar.dev/vue-components/img&gt;`_ component. :param source: the source of the image; can be a URL, local file path, a base64 string or a PIL image &quot;&quot;&quot; super().__init__(source=source) </code></pre> <p>and the constructor only supports the source parameter. Similar to <a href="https://stackoverflow.com/questions/76523201/how-to-set-the-width-of-a-ui-input">How to set the width of a ui.input</a>, I would assume I have to set props for width and height, but I didn't find an example.</p>
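Following the pattern from the linked ui.input answer, nicegui elements inherit generic chaining methods from their base element, so the size can presumably be set via `.style()` with plain CSS instead of constructor arguments. A small helper keeps the migration close to the justpy constructor; the pixel units and the `.style()` call are assumptions in this sketch:

```python
def size_style(icon_size: int) -> str:
    # CSS fragment to hand to element.style(...); assumes pixel units
    return f"width: {icon_size}px; height: {icon_size}px;"

print(size_style(48))
```

The hypothetical usage, with the names from the question, would then be `_topicIcon = ui.image(source=icon_url).style(size_style(self.iconSize))`.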
<python><nicegui>
2024-03-31 10:21:16
1
15,707
Wolfgang Fahl
78,250,951
10,200,497
What is the best way to merge two dataframes that one of them has date ranges and the other one has date WITHOUT any shared columns?
<p>I have two DataFrames:</p> <pre><code>import pandas as pd df1 = pd.DataFrame( { 'date': ['2024-01-01','2024-01-02', '2024-01-03', '2024-01-04', '2024-01-05', '2024-01-06', '2024-01-07', '2024-01-08', '2024-01-09', '2024-01-10', '2024-01-11', '2024-01-12', '2024-01-13'], 'price': list(range(13)) } ) df2 = pd.DataFrame( { 'start': ['2024-01-01', '2024-01-03', '2024-01-10'], 'end': ['2024-01-03', '2024-01-08', '2024-01-12'], 'id': ['a', 'b', 'c'] } ) </code></pre> <p>And this is the expected output. I want to add <code>id</code> to <code>df1</code>:</p> <pre><code> date price id 0 2024-01-01 0 NaN 1 2024-01-02 1 a 2 2024-01-03 2 a 3 2024-01-04 3 b 4 2024-01-05 4 b 5 2024-01-06 5 b 6 2024-01-07 6 b 7 2024-01-08 7 b 8 2024-01-09 8 NaN 9 2024-01-10 9 NaN 10 2024-01-11 10 c 11 2024-01-12 11 c 12 2024-01-13 12 NaN </code></pre> <p>The process is like this. Let me give you an example for row <code>1</code> of the output:</p> <p><strong>a)</strong> The <code>date</code> is 2024-01-02. Look up <code>df2</code>. There is a range for each row of <code>df2</code>. This <code>date</code> is between the first row of <code>df2</code>. Note that <code>start</code> is exclusive and <code>end</code> is inclusive.</p> <p><strong>b)</strong> Get the <code>id</code> from the identified row in <code>df2</code> and put in the output.</p> <p>Since there are no common columns between these two DataFrames, I have used a loop to get the output. It works but I am not sure if it is the best way:</p> <pre><code>df1['date'] = pd.to_datetime(df1.date) df2[['start', 'end']] = df2[['start', 'end']].apply(pd.to_datetime) for idx, row in df2.iterrows(): start = row['start'] end = row['end'] id = row['id'] df1.loc[df1.date.between(start, end, inclusive='right'), 'id'] = id </code></pre> <p>Any suggestions?</p>
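One loop-free option, sketched below on a condensed version of the data (and assuming both frames are sorted by their date keys, as in the example), is `pandas.merge_asof`: match each date to the last `start` strictly before it, then blank out matches where the date overshoots the matched `end`:

```python
import pandas as pd

df1 = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-04",
                            "2024-01-09", "2024-01-11"]),
    "price": range(5),
})
df2 = pd.DataFrame({
    "start": pd.to_datetime(["2024-01-01", "2024-01-03", "2024-01-10"]),
    "end":   pd.to_datetime(["2024-01-03", "2024-01-08", "2024-01-12"]),
    "id": ["a", "b", "c"],
})

# start is exclusive -> allow_exact_matches=False
out = pd.merge_asof(df1, df2, left_on="date", right_on="start",
                    allow_exact_matches=False)
# end is inclusive: clear matches that fall past the interval
out.loc[out["date"] > out["end"], "id"] = float("nan")
out = out.drop(columns=["start", "end"])
print(out)
```

This avoids the per-row loop entirely; `merge_asof` is a single sorted pass, so it also scales better than iterating over `df2`.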
<python><pandas><dataframe>
2024-03-31 10:05:52
1
2,679
AmirX
78,250,914
9,059,634
Python ModuleNotFoundError for command line tools built with setup.py
<p>I am trying to build a simple command line tool and package it with <code>setup.py</code>. Here's my directory structure.</p> <pre><code>β”œβ”€β”€ s3_md5 β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ cmd.py β”‚ └── src β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ cli.py β”‚ β”œβ”€β”€ logger.py β”‚ β”œβ”€β”€ s3_file.py β”‚ └── s3_md5.py β”œβ”€β”€ setup.py └── test β”œβ”€β”€ __init__.py β”œβ”€β”€ conftest.py β”œβ”€β”€ test_calculate_range_bytes_from_part_number.py β”œβ”€β”€ test_get_file_size.py β”œβ”€β”€ test_get_range_bytes.py └── test_parse_file_md5.py </code></pre> <p>In <code>setup.py</code></p> <pre class="lang-py prettyprint-override"><code>'''installer''' from os import getenv from setuptools import find_packages, setup setup( name=&quot;s3-md5&quot;, description=&quot;Get fast md5 hash from an s3 file&quot;, version=getenv('VERSION', '1.0.0'), url=&quot;https://github.com/sakibstark11/s3-md5-python&quot;, author=&quot;Sakib Alam&quot;, author_email=&quot;16sakib@gmail.com&quot;, license=&quot;MIT&quot;, install_requires=[ &quot;boto3==1.26.41&quot;, &quot;boto3-stubs[s3]&quot;, ], extras_require={ &quot;develop&quot;: [ &quot;autopep8==2.0.1&quot;, &quot;moto==4.0.12&quot;, &quot;pytest==7.2.0&quot;, &quot;pylint==3.1.0&quot;, ], &quot;release&quot;: [&quot;wheel==0.43.0&quot;] }, packages=find_packages(exclude=[&quot;test&quot;, &quot;venv&quot;]), python_requires=&quot;&gt;=3.10.12&quot;, entry_points={ 'console_scripts': ['s3-md5=s3_md5.cmd:run'], } ) </code></pre> <p>And <code>cmd.py</code></p> <pre><code>'''driver''' from time import perf_counter from boto3 import client from src.cli import parse_args from src.logger import logger from src.s3_md5 import parse_file_md5 def run(): // some stuff with imports from src if __name__ == &quot;__main__&quot;: run() </code></pre> <p>When I run the <code>cmd.py</code> from the <code>s3_md5</code> directory itself, everything is fine. 
But when I build and install it as a command line tool and try to run that, it throws</p> <pre><code>ModuleNotFoundError: No module named 'src' </code></pre> <p>I checked the lib folder and it does contain the src folder. Oddly enough, when I use <code>s3_md5.src.cli</code> within <code>cmd.py</code>, the command line tool works, but running the script from the directory doesn't really work, as it references the installed package rather than the code itself, which causes issues for development usage. I've tried reading everything I can about the Python module system but I can't wrap my head around this. I suspect it's to do with PYTHONPATH not knowing where to look for <code>src</code>, but I could be wrong. I tried using a relative import, which works for the command line tool but throws &quot;no known parent package&quot; when directly running <code>python cmd.py</code>.</p>
<python><python-3.x><python-import><python-module>
2024-03-31 09:52:03
0
536
sakib11
78,250,867
1,442,554
Custom dynamic version provider for Python projects
<p>In a Python project that uses pyproject.toml (and setuptools), I want to have a custom version number provider. I want it to work like <code>setuptools_scm</code>, but use my own logic to determine the version (and preferably other fields such as description and author). I know it can be accomplished by using dynamic metadata, but I would like it to work just by including the module in <code>[build-system]</code>. How do I do that? How does <code>setuptools</code> knows where to get the metadata from?</p>
<python><setuptools><build-system><pyproject.toml>
2024-03-31 09:33:15
0
4,101
avishorp
78,250,765
2,485,708
How to call Steam InitTxn properly using python?
<p>I’m making an in app purchase for my game on Steam. On my server I use python 3. I’m trying to make an https request as follows:</p> <pre><code>conn = http.client.HTTPSConnection(&quot;partner.steam-api.com&quot;) orderid = uuid.uuid4().int &amp; (1&lt;&lt;64)-1 print(&quot;orderid = &quot;, orderid) key = &quot;xxxxxxxxxxxxxxxxxxx&quot; # omitted for security reason steamid = &quot;xxxxxxxxxxxxxxxxxxx&quot; # omitted for security reason pid = &quot;testItem1&quot; appid = &quot;480&quot; itemcount = 1 currency = 'CNY' amount = 350 description = 'testing_description' urlSandbox = &quot;/ISteamMicroTxnSandbox/&quot; s = f'{urlSandbox}InitTxn/v3/?key={key}&amp;orderid={orderid}&amp;appid={appid}&amp;steamid={steamid}&amp;itemcount={itemcount}&amp;currency={currency}&amp;itemid[0]={pid}&amp;qty[0]={1}&amp;amount[0]={amount}&amp;description[0]={description}' print(&quot;s = &quot;, s) conn.request('POST', s) r = conn.getresponse() print(&quot;InitTxn result = &quot;, r.read()) </code></pre> <p>I checked the s in console, which is:</p> <pre><code>s = /ISteamMicroTxnSandbox/InitTxn/v3/?key=xxxxxxx&amp;orderid=11506775749761176415&amp;appid=480&amp;steamid=xxxxxxxxxxxx&amp;itemcount=1&amp;currency=CNY&amp;itemid[0]=testItem1&amp;qty[0]=1&amp;amount[0]=350&amp;description[0]=testing_description </code></pre> <p>However I got a bad request response:</p> <pre><code>InitTxn result = b&quot;&lt;html&gt;&lt;head&gt;&lt;title&gt;Bad Request&lt;/title&gt;&lt;/head&gt;&lt;body&gt;&lt;h1&gt;Bad Request&lt;/h1&gt;Required parameter 'orderid' is missing&lt;/body&gt;&lt;/html&gt;&quot; </code></pre> <p>How to solve this? Thank you!</p> <p>BTW I use almost the same way to call GetUserInfo, except changing parameters and replace POST with GET request, and it works well.</p> <p>Just read that I should put parameters in post. 
So I changed the code as follows, but I still get the same error of &quot;Required parameter 'orderid' is missing&quot;</p> <pre><code> params = { 'key': key, 'orderid': orderid, 'appid': appid, 'steamid': steamid, 'itemcount': itemcount, 'currency': currency, 'pid': pid, 'qty[0]': 1, 'amount[0]': amount, 'description[0]': description } s = urllib.parse.urlencode(params) # In console: s = key=xxxxx&amp;orderid=9231307508782239594&amp;appid=480&amp;steamid=xxx&amp;itemcount=1&amp;currency=CNY&amp;pid=testItem1&amp;qty%5B0%5D=1&amp;amount%5B0%5D=350&amp;description%5B0%5D=testing_description print(&quot;s = &quot;, s) conn.request('POST', url=f'{urlSandbox}InitTxn/v3/', body=s) </code></pre> <p>==== update ====</p> <p>Format issue has been solved. Please see the answer below.</p>
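Consistent with the format issue noted in the update: `http.client` does not add a `Content-Type` header on its own, and a form-encoded POST body sent without `application/x-www-form-urlencoded` is typically not parsed by the server, which then reports the first required parameter as missing. A hedged sketch of the encoding step:

```python
import urllib.parse

def encode_form(params):
    # A POST body must be form-encoded AND announce that encoding in the
    # Content-Type header, otherwise the server may ignore the parameters.
    body = urllib.parse.urlencode(params).encode()
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    return body, headers

body, headers = encode_form({"orderid": 11506775749761176415, "appid": 480})
print(body)
```

The request would then be sent as `conn.request('POST', f'{urlSandbox}InitTxn/v3/', body=body, headers=headers)`, with the remaining Steam parameters added to the dict.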
<python><steam><steamworks-api>
2024-03-31 08:56:02
1
2,022
ArtS
78,250,500
9,576,988
SQLAlchemy Many-to-Many Relationship: UNIQUE constraint failed
<p>So, I have a many to many SQLAlchemy relationship defined likeso,</p> <pre class="lang-py prettyprint-override"><code>from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import relationship, sessionmaker from sqlalchemy import Column, Integer, String, ForeignKey, UniqueConstraint, Table, create_engine from sqlalchemy.orm import relationship, registry mapper_registry = registry() Base = declarative_base() bridge_category = Table( &quot;bridge_category&quot;, Base.metadata, Column(&quot;video_id&quot;, ForeignKey(&quot;video.id&quot;), primary_key=True), Column(&quot;category_id&quot;, ForeignKey(&quot;category.id&quot;), primary_key=True), UniqueConstraint(&quot;video_id&quot;, &quot;category_id&quot;), ) class BridgeCategory: pass mapper_registry.map_imperatively(BridgeCategory, bridge_category) class Video(Base): __tablename__ = 'video' id = Column(Integer, primary_key=True) title = Column(String) categories = relationship(&quot;Category&quot;, secondary=bridge_category, back_populates=&quot;videos&quot;) class Category(Base): __tablename__ = 'category' id = Column(Integer, primary_key=True) text = Column(String, unique=True) videos = relationship(&quot;Video&quot;, secondary=bridge_category, back_populates=&quot;categories&quot;) engine = create_engine('sqlite:///:memory:', echo=True) Base.metadata.create_all(engine) Session = sessionmaker(bind=engine) with Session() as s: v1 = Video(title='A', categories=[Category(text='blue'), Category(text='red')]) v2 = Video(title='B', categories=[Category(text='green'), Category(text='red')]) v3 = Video(title='C', categories=[Category(text='grey'), Category(text='red')]) videos = [v1, v2, v3] s.add_all(videos) s.commit() </code></pre> <p>Of course, because of the unique constraint on <code>Category.text</code>, we get the following error.</p> <pre><code>sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: category.text [SQL: INSERT INTO category (text) VALUES (?) 
RETURNING id] [parameters: ('red',)] </code></pre> <p>I am wondering what the best way of dealing with this is. With my program, I get a lot of video objects, each with a list of unique Category objects. The text collisions happen across all these video objects.</p> <p>I could loop through all videos, and all categories, forming a Category set, but that's kinda lame. I'd also have to do that with the 12+ other many-to-many relationships my Video object has, and that seems really inefficient.</p> <p>Is there like a &quot;insert ignore&quot; flag I can set for this? I haven't been able to find anything online concerning this situation.</p>
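SQLite's `INSERT OR IGNORE` is not exposed as a simple ORM flag; the usual ORM-level pattern for this situation is a get-or-create helper (sometimes called the "unique object" recipe), so that duplicate texts resolve to a single `Category` instance before anything is flushed. A runnable sketch on a trimmed-down model:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Category(Base):
    __tablename__ = "category"
    id = Column(Integer, primary_key=True)
    text = Column(String, unique=True)

def get_or_create(session, model, **kwargs):
    # autoflush makes pending-but-unflushed rows visible to the query,
    # so repeats within one batch also deduplicate
    instance = session.query(model).filter_by(**kwargs).one_or_none()
    if instance is None:
        instance = model(**kwargs)
        session.add(instance)
    return instance

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
red1 = get_or_create(session, Category, text="red")
red2 = get_or_create(session, Category, text="red")
session.commit()
print(red1 is red2, session.query(Category).count())
```

In the question's code, replacing the inline `Category(text=...)` constructions with `get_or_create(s, Category, text=...)` would keep one shared row per text; the extra lookup per category is the cost of staying at the ORM layer.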
<python><sqlalchemy><many-to-many>
2024-03-31 06:45:36
1
594
scrollout
78,250,222
5,121,282
how to set same port on fastapi and eureka?
<p>I want to use the same port in the uvicorn and eureka configuration. I have searched a lot and only found this link <a href="https://github.com/encode/uvicorn/issues/761" rel="nofollow noreferrer">https://github.com/encode/uvicorn/issues/761</a> but it doesn't work. This is my code:</p> <pre><code>import py_eureka_client.eureka_client as eureka_client import uvicorn from fastapi import FastAPI from contextlib import asynccontextmanager from com.controladores import controladorRouter @asynccontextmanager async def lifespan(app: FastAPI): await eureka_client.init_async( eureka_server=&quot;http://eureka-primary:8011/eureka/,http://eureka-secondary:8012/eureka/,http://eureka-tertiary:8013/eureka/&quot;, app_name=&quot;msprueba&quot;, instance_port=8000, instance_host=&quot;localhost&quot; ) yield app = FastAPI(lifespan=lifespan) app.include_router(controladorRouter) if __name__ == &quot;__main__&quot;: config = uvicorn.Config(&quot;com.main:app&quot;, host=&quot;localhost&quot;, port=0) server = uvicorn.Server(config) server.run() </code></pre> <p>I removed the code from the link to show only my actual code. I find it hard to believe that such a simple task is so hard to do with FastAPI. Don't get me wrong, FastAPI is a great tool; it's just that I come from Spring Boot, where this task is much easier. Any help?</p>
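With `port=0` the OS only assigns the port after uvicorn binds, so the eureka registration inside `lifespan` cannot know it up front. One common workaround (a sketch; theoretically racy, since another process could grab the port between the check and the bind, but rare in practice) is to reserve a concrete free port first and hand the same number to both sides:

```python
import socket

def pick_free_port() -> int:
    # bind to port 0 so the OS chooses, read the number back, release it
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

port = pick_free_port()
print(port)
```

The same `port` value would then go into both `uvicorn.Config("com.main:app", host="localhost", port=port)` and `init_async(instance_port=port, ...)`.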
<python><fastapi><netflix-eureka>
2024-03-31 03:44:37
1
940
Alan Gaytan
78,249,960
6,000,739
Using the sympy module to compute the matrix multiplication involving symbols
<p>My problem is given as follows:</p> <p><a href="https://i.sstatic.net/XlrcJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XlrcJ.png" alt="" /></a></p> <pre class="lang-py prettyprint-override"><code>import sympy as sp p = sp.symbols('p') I_p = sp.Identity(p) C = sp.BlockMatrix([[I_p, I_p], [I_p, -I_p]]) Sigma_1 = sp.MatrixSymbol('Sigma_1', p, p) Sigma_2 = sp.MatrixSymbol('Sigma_2', p, p) Sigma = sp.BlockMatrix([[Sigma_1, Sigma_2], [Sigma_2, Sigma_1]]) C_Sigma_C_transpose = C * Sigma * C.T print(C_Sigma_C_transpose) ## Matrix([ ## [I, I], ## [I, -I]])*Matrix([ ## [Sigma_1, Sigma_2], ## [Sigma_2, Sigma_1]])*Matrix([ ## [I, I], ## [I, -I]]) </code></pre> <p>The result does not match the expected output. How can I correct it?</p>
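The product is not wrong, just unevaluated: `BlockMatrix` expressions stay symbolic until asked to multiply out. `sympy.block_collapse` performs the block multiplication. A sketch with the question's setup:

```python
import sympy as sp

p = sp.symbols('p')
I_p = sp.Identity(p)
Sigma_1 = sp.MatrixSymbol('Sigma_1', p, p)
Sigma_2 = sp.MatrixSymbol('Sigma_2', p, p)

C = sp.BlockMatrix([[I_p, I_p], [I_p, -I_p]])
Sigma = sp.BlockMatrix([[Sigma_1, Sigma_2], [Sigma_2, Sigma_1]])

# block_collapse multiplies the blocks out symbolically
result = sp.block_collapse(C * Sigma * C.T)
print(result)
```

The result should come back with blocks built from sums and differences of `Sigma_1` and `Sigma_2` rather than as an unevaluated triple product.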
<python><matrix><sympy><matrix-multiplication>
2024-03-31 00:41:47
1
715
John Stone
78,249,695
547,231
How can I convert a flax.linen.Module to a torch.nn.Module?
<p>I would like to convert a <code>flax.linen.Module</code>, taken from <a href="https://colab.research.google.com/drive/1SeXMpILhkJPjXUaesvzEhc3Ke6Zl_zxJ?usp=sharing" rel="nofollow noreferrer">here</a> and replicated below this post, to a <code>torch.nn.Module</code>.</p> <p>However, I find it extremely hard to figure out how I need to replace</p> <ol> <li>The <code>flax.linen.Dense</code> calls;</li> <li>The <code>flax.linen.Conv</code> calls;</li> <li>The custom class <code>Dense</code>.</li> </ol> <p>For (1.), I guess I need to use <code>torch.nn.Linear</code>. But what do I need to specify as <code>in_features</code> and <code>out_features</code>?</p> <p>For (2.), I guess I need to use <code>torch.nn.Conv2d</code>. But, again, what do I need to specify as <code>in_channels</code> and <code>out_channels</code>.</p> <p>I guess I know how I can port the <code>GaussianFourierProjection</code> class and how I can mimic the &quot;swish activation function&quot;. Obviously, it would be extremely helpful if someone is that familiar with both modules so that he/she can provide the corresponding <code>torch.nn.Module</code> as an answer. But it would also already be helpful, if someone could at least answer how (1.) - (3.) need to be replaced. Any help is highly appreciated!</p> <hr /> <pre><code>#@title Defining a time-dependent score-based model (double click to expand or collapse) import jax.numpy as jnp import numpy as np import flax import flax.linen as nn from typing import Any, Tuple import functools import jax class GaussianFourierProjection(nn.Module): &quot;&quot;&quot;Gaussian random features for encoding time steps.&quot;&quot;&quot; embed_dim: int scale: float = 30. @nn.compact def __call__(self, x): # Randomly sample weights during initialization. These weights are fixed # during optimization and are not trainable. 
W = self.param('W', jax.nn.initializers.normal(stddev=self.scale), (self.embed_dim // 2, )) W = jax.lax.stop_gradient(W) x_proj = x[:, None] * W[None, :] * 2 * jnp.pi return jnp.concatenate([jnp.sin(x_proj), jnp.cos(x_proj)], axis=-1) class Dense(nn.Module): &quot;&quot;&quot;A fully connected layer that reshapes outputs to feature maps.&quot;&quot;&quot; output_dim: int @nn.compact def __call__(self, x): return nn.Dense(self.output_dim)(x)[:, None, None, :] class ScoreNet(nn.Module): &quot;&quot;&quot;A time-dependent score-based model built upon U-Net architecture. Args: marginal_prob_std: A function that takes time t and gives the standard deviation of the perturbation kernel p_{0t}(x(t) | x(0)). channels: The number of channels for feature maps of each resolution. embed_dim: The dimensionality of Gaussian random feature embeddings. &quot;&quot;&quot; marginal_prob_std: Any channels: Tuple[int] = (32, 64, 128, 256) embed_dim: int = 256 @nn.compact def __call__(self, x, t): # The swish activation function act = nn.swish # Obtain the Gaussian random feature embedding for t embed = act(nn.Dense(self.embed_dim)( GaussianFourierProjection(embed_dim=self.embed_dim)(t))) # Encoding path h1 = nn.Conv(self.channels[0], (3, 3), (1, 1), padding='VALID', use_bias=False)(x) ## Incorporate information from t h1 += Dense(self.channels[0])(embed) ## Group normalization h1 = nn.GroupNorm(4)(h1) h1 = act(h1) h2 = nn.Conv(self.channels[1], (3, 3), (2, 2), padding='VALID', use_bias=False)(h1) h2 += Dense(self.channels[1])(embed) h2 = nn.GroupNorm()(h2) h2 = act(h2) h3 = nn.Conv(self.channels[2], (3, 3), (2, 2), padding='VALID', use_bias=False)(h2) h3 += Dense(self.channels[2])(embed) h3 = nn.GroupNorm()(h3) h3 = act(h3) h4 = nn.Conv(self.channels[3], (3, 3), (2, 2), padding='VALID', use_bias=False)(h3) h4 += Dense(self.channels[3])(embed) h4 = nn.GroupNorm()(h4) h4 = act(h4) # Decoding path h = nn.Conv(self.channels[2], (3, 3), (1, 1), padding=((2, 2), (2, 2)), input_dilation=(2, 
2), use_bias=False)(h4) ## Skip connection from the encoding path h += Dense(self.channels[2])(embed) h = nn.GroupNorm()(h) h = act(h) h = nn.Conv(self.channels[1], (3, 3), (1, 1), padding=((2, 3), (2, 3)), input_dilation=(2, 2), use_bias=False)( jnp.concatenate([h, h3], axis=-1) ) h += Dense(self.channels[1])(embed) h = nn.GroupNorm()(h) h = act(h) h = nn.Conv(self.channels[0], (3, 3), (1, 1), padding=((2, 3), (2, 3)), input_dilation=(2, 2), use_bias=False)( jnp.concatenate([h, h2], axis=-1) ) h += Dense(self.channels[0])(embed) h = nn.GroupNorm()(h) h = act(h) h = nn.Conv(1, (3, 3), (1, 1), padding=((2, 2), (2, 2)))( jnp.concatenate([h, h1], axis=-1) ) # Normalize output h = h / self.marginal_prob_std(t)[:, None, None, None] return h </code></pre>
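For (1) and (2): flax's `nn.Dense(features=N)` and `nn.Conv(features, kernel)` infer the input size from the first call, whereas the classic torch modules require `in_features`/`in_channels` explicitly. Torch's Lazy variants (`nn.LazyLinear`, `nn.LazyConv2d`) also infer on first call, which makes the port fairly mechanical; note also that the flax code is NHWC while torch is NCHW, and that flax's `padding='VALID'` corresponds to `padding=0`. A hedged sketch of the mapping (shapes are illustrative, not from the original model):

```python
import torch
import torch.nn as nn

# flax: nn.Dense(256)                      -> out_features=256, input inferred
dense = nn.LazyLinear(out_features=256)

# flax: nn.Conv(32, (3, 3), (1, 1), padding='VALID', use_bias=False)
conv = nn.LazyConv2d(out_channels=32, kernel_size=3, stride=1,
                     padding=0, bias=False)

# torch layout is NCHW; flax inputs would need a permute from NHWC
x = torch.randn(2, 1, 28, 28)
y = conv(x)                 # in_channels=1 is inferred on this first call
print(tuple(y.shape))
```

For (3): the custom `Dense` class only appends spatial axes so the embedding broadcasts over the feature map; in NCHW the equivalent reshape would be `out[:, :, None, None]` after a plain `Linear`, rather than `[:, None, None, :]`.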
<python><machine-learning><pytorch><neural-network><flax>
2024-03-30 22:24:42
1
18,343
0xbadf00d
78,249,645
6,301,394
Polars asof join on next available date
<p>I have a frame (events) which I want to join into another frame (fr), joining on Date and Symbol. There aren't necessarily any date overlaps. The date in events would match with the first occurrence only on the same or later date in fr, so if the event date is 2010-12-01, it would join on the same date or if not present then the next available date (2010-12-02).</p> <p>I've tried to do this using search_sorted and join_asof but I'd like to group by the Symbol column and also this isn't a proper join. This somewhat works for a single Symbol only.</p> <pre><code>fr = pl.DataFrame( { 'Symbol': ['A']*5, 'Date': ['2010-08-29', '2010-09-01', '2010-09-05', '2010-11-30', '2010-12-02'], } ).with_columns(pl.col('Date').str.to_date('%Y-%m-%d')).with_row_index().set_sorted(&quot;Date&quot;) events = pl.DataFrame( { 'Symbol': ['A']*3, 'Earnings_Date': ['2010-06-01', '2010-09-01', '2010-12-01'], 'Event': [1, 4, 7], } ).with_columns(pl.col('Earnings_Date').str.to_date('%Y-%m-%d')).set_sorted(&quot;Earnings_Date&quot;) idx = fr[&quot;Date&quot;].search_sorted(events[&quot;Earnings_Date&quot;], &quot;left&quot;) fr = fr.with_columns( pl.when( pl.col(&quot;index&quot;).is_in(idx) ) .then(True) .otherwise(False) .alias(&quot;Earnings&quot;) ) fr = fr.join_asof(events, by=&quot;Symbol&quot;, left_on=&quot;Date&quot;, right_on=&quot;Earnings_Date&quot;) fr = fr.with_columns( pl.when( pl.col(&quot;Earnings&quot;) == True ) .then(pl.col(&quot;Event&quot;)) .otherwise(False) .alias(&quot;Event&quot;) ) </code></pre>
<python><dataframe><join><python-polars>
2024-03-30 22:05:15
2
2,613
misantroop
78,249,205
1,231,450
Enable / disable the automatic reload of shiny
<p>I have a shiny express app like</p> <pre><code>import matplotlib.pyplot as plt, pandas as pd from shiny import reactive, req, render from shiny.express import input, render, ui # variables ui.page_opts(title=&quot;Range Rover&quot;, fillable=True) ... </code></pre> <p>It runs in <code>VS Code</code> which works fine. However, I'd like to prevent the automatic reload within some file setting but was unable to find the relevant documentation (if this is possible after all). So, the question: how to stop the automatic reload?</p>
<python><py-shiny>
2024-03-30 19:16:36
1
43,253
Jan
78,248,902
1,588,847
Typing a function decorator with conditional output type, in Python
<p>I have a set of functions which all accept a <code>value</code> named parameter, plus arbitrary other named parameters.</p> <p>I have a decorator: <code>lazy</code>. Normally the decorated functions return as normal, but return a partial function if <code>value</code> is None.</p> <p>How do I type-hint the decorator, whose output depends on the value input?</p> <pre class="lang-py prettyprint-override"><code>from functools import partial def lazy(func): def wrapper(value=None, **kwargs): if value is not None: return func(value=value, **kwargs) else: return partial(func, **kwargs) return wrapper @lazy def test_multiply(*, value: float, multiplier: float) -&gt; float: return value * multiplier @lazy def test_format(*, value: float, fmt: str) -&gt; str: return fmt % value print('test_multiply 5*2:', test_multiply(value=5, multiplier=2)) print('test_format 7.777 as .2f:', test_format(value=7.777, fmt='%.2f')) func_mult_11 = test_multiply(multiplier=11) # returns a partial function print('Type of func_mult_11:', type(func_mult_11)) print('func_mult_11 5*11:', func_mult_11(value=5)) </code></pre> <p>I'm using <code>mypy</code> and I've managed to get most of the way using mypy extensions, but haven't got the <code>value</code> typing working in <code>wrapper</code>:</p> <pre class="lang-py prettyprint-override"><code>from typing import Callable, TypeVar, ParamSpec, Any, Optional from mypy_extensions import DefaultNamedArg, KwArg R = TypeVar(&quot;R&quot;) P = ParamSpec(&quot;P&quot;) def lazy(func: Callable[P, R]) -&gt; Callable[[DefaultNamedArg(float, 'value'), KwArg(Any)], Any]: def wrapper(value = None, **kwargs: P.kwargs) -&gt; R | partial[R]: if value is not None: return func(value=value, **kwargs) else: return partial(func, **kwargs) return wrapper </code></pre> <p>How can I type <code>value</code>? And better still, can I do this without mypy extensions?</p>
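One approach without mypy extensions (a sketch; it trades `ParamSpec` precision on the remaining kwargs for the conditional return type, so `multiplier` etc. are no longer checked) is to describe the wrapper with a generic `Protocol` whose `__call__` is overloaded on whether `value` is supplied:

```python
from functools import partial
from typing import Any, Callable, Optional, Protocol, TypeVar, overload

R = TypeVar("R", covariant=True)

class Lazy(Protocol[R]):
    # the two call shapes the decorated function supports
    @overload
    def __call__(self, *, value: float, **kwargs: Any) -> R: ...
    @overload
    def __call__(self, *, value: None = ..., **kwargs: Any) -> "partial[R]": ...

def lazy(func: Callable[..., R]) -> Lazy[R]:
    def wrapper(value: Optional[float] = None, **kwargs: Any):
        if value is not None:
            return func(value=value, **kwargs)
        return partial(func, **kwargs)
    return wrapper  # type: ignore[return-value]

@lazy
def test_multiply(*, value: float, multiplier: float) -> float:
    return value * multiplier

print(test_multiply(value=5, multiplier=2))    # R branch
print(type(test_multiply(multiplier=11)))      # partial branch
```

With this, `reveal_type(test_multiply(value=5, multiplier=2))` should be `float` and `reveal_type(test_multiply(multiplier=11))` should be `functools.partial[float]` under mypy; whether the loss of kwargs checking is acceptable is the design trade-off.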
<python><mypy><python-typing>
2024-03-30 17:44:49
1
2,124
Jetpac
78,248,879
3,433,875
Remove gaps between subplots_mosaic in matplotlib
<p>How do I remove the gaps between the subplots on a mosaic? The traditional way does not work with mosaics:</p> <pre><code>plt.subplots_adjust(wspace=0, hspace=0) </code></pre> <p>I also tried using <code>gridspec_kw</code>, but no luck.</p> <pre><code>import matplotlib.pyplot as plt import numpy as np ax = plt.figure(layout=&quot;constrained&quot;).subplot_mosaic( &quot;&quot;&quot; abcde fghiX jklXX mnXXX oXXXX &quot;&quot;&quot;, empty_sentinel=&quot;X&quot;, gridspec_kw={ &quot;wspace&quot;: 0, &quot;hspace&quot;: 0, }, ) for k,ax in ax.items(): print(ax) #ax.text(0.5, 0.5, k, transform=ax.transAxes, ha=&quot;center&quot;, va=&quot;center&quot;, fontsize=8, color=&quot;darkgrey&quot;) ax.set_xticklabels([]) ax.set_yticklabels([]) ax.tick_params(length = 0) </code></pre> <p>The code generates: <a href="https://i.sstatic.net/TxaYQ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TxaYQ.jpg" alt="enter image description here" /></a></p>
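With `layout="constrained"`, the spacing lives on the figure's layout engine rather than on the gridspec, which is presumably why both `subplots_adjust` and `gridspec_kw` are ignored here. Setting the engine's pads and spaces to zero (available in matplotlib 3.6+) should close the gaps, since the tick labels are already removed. A sketch:

```python
import io

import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig = plt.figure(layout="constrained")
axd = fig.subplot_mosaic(
    """
    abcde
    fghiX
    jklXX
    mnXXX
    oXXXX
    """,
    empty_sentinel="X",
)

# constrained layout keeps its spacing on the layout engine,
# not on the gridspec
fig.get_layout_engine().set(w_pad=0, h_pad=0, wspace=0, hspace=0)

for ax in axd.values():
    ax.set_xticklabels([])
    ax.set_yticklabels([])
    ax.tick_params(length=0)

fig.savefig(io.BytesIO(), format="png")  # force a draw to apply the layout
```

An alternative, if zero padding is the goal from the start, is to drop constrained layout entirely and pass `gridspec_kw={"wspace": 0, "hspace": 0}` to a plain figure, since `subplot_mosaic` forwards it to the underlying gridspec in that case.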
<python><matplotlib><subplot>
2024-03-30 17:39:47
1
363
ruthpozuelo
78,248,657
2,036,386
Process to Upload a run a Python Hello World script via IDE and accessed through browser
<p>I have googled this and cannot find an answer. I am coming from PHP. In PHP I would write my code in my IDE on my laptop, upload the .php script to my server (VPS), and then execute/run the script through the web browser.</p> <p>I am new to Python. I have written the following code</p> <pre><code>print(&quot;Hello World&quot;) </code></pre> <p>in a file called hello.py.</p> <p>If I upload the file to my server and access it through Google Chrome, I get the following:</p> <p>print(&quot;Hello, World!&quot;)</p> <p>Python is installed and the version is Python 3.x. In the PHP world I would conclude the server isn't parsing the PHP code. The hello.py file is not in the /bin/ folder on the server, which I understand is fine with my Python install.</p> <p>Any advice please on how I can get the file to run through the browser? Thanks</p>
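Seeing the source text is expected here: unlike PHP, a stock web server does not execute `.py` files per request; it serves them as static text. Python web applications instead expose a callable that a WSGI server (e.g. gunicorn, uWSGI, or Apache's mod_wsgi) invokes for each request. A minimal sketch of the WSGI "hello world" such a server would run, with a quick local check that needs no server at all:

```python
# hello.py -- a WSGI application instead of a top-level print()
def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello World"]

# quick local check without a web server:
status_seen = []
body = application({}, lambda status, headers: status_seen.append(status))
print(status_seen[0], body)
```

On the VPS this would be wired up with something like `gunicorn hello:application` behind the existing web server (exact setup depends on the hosting stack), rather than being fetched directly by the browser.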
<python><server-side>
2024-03-30 16:22:38
1
397
bobafart
78,248,647
1,484,601
pytest mock failing when mocking function from imported package
<p>pytest-mock is properly installed:</p> <pre><code>&gt; pip list | grep pytest pytest 7.4.2 pytest-mock 3.14.0 </code></pre> <p>This unit test passes with success:</p> <pre class="lang-py prettyprint-override"><code>import pytest class A: def __init__(self, value): self.value = value def get_a() -&gt; A: return A(1) @pytest.fixture def mocked_A(mocker): a = A(2) mocker.patch(f&quot;{__name__}.get_a&quot;, return_value=a) def test_mocked_a(mocked_A) -&gt; None: a = get_a() assert a.value == 2 </code></pre> <p>Now, if I move A and get_A to mypackage.mymodule, mocking stops to work.</p> <pre class="lang-py prettyprint-override"><code>import pytest # note: these imports work: mypackage is correctly installed from mypackage.mymodule import A, get_a @pytest.fixture def mocked_A(mocker): a = A(2) mocker.patch(&quot;mypackage.mymodule.get_a&quot;, return_value=a) def test_mocked_a(mocked_A) -&gt; None: a = get_a() assert a.value == 2 </code></pre> <p>The test fails with this error:</p> <pre><code>mocked_A = None def test_mocked_a(mocked_A) -&gt; None: a = get_a() &gt; assert a.value == 2 E assert 1 == 2 E + where 1 = &lt;mypackage.mymodule.A object at 0x7f063e3f2c80&gt;.value test_a.py:13: AssertionError =============================================================================== short test summary info ================================================================================ FAILED test_a.py::test_mocked_a - assert 1 == 2 </code></pre> <p>Looks like get_a has not been mocked. Anything I am doing wrong ?</p>
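The patch does replace `mypackage.mymodule.get_a`, but `from mypackage.mymodule import get_a` copied the function reference into the test module, and that copy is what the test calls: mock's classic "patch where it's used" rule. A self-contained demonstration with stand-in modules (no pytest needed):

```python
import types
from unittest import mock

# stand-in for mypackage.mymodule
mymodule = types.ModuleType("mymodule")
exec(
    "class A:\n"
    "    def __init__(self, value): self.value = value\n"
    "def get_a():\n"
    "    return A(1)\n",
    mymodule.__dict__,
)

# stand-in for a test module that did `from mymodule import get_a`
testmod = types.ModuleType("testmod")
testmod.get_a = mymodule.get_a  # a *copy* of the reference

with mock.patch.object(mymodule, "get_a", return_value="mocked"):
    patched = mymodule.get_a()    # the defining module is patched...
    unpatched = testmod.get_a()   # ...but the imported copy is not

print(patched, unpatched.value)
```

So in the failing test, either patch the test module's own name, `mocker.patch(f"{__name__}.get_a", ...)`, exactly as the first (passing) example does, or keep `import mypackage.mymodule as mymodule` and call `mymodule.get_a()` so the lookup goes through the patched module attribute.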
<python><unit-testing><mocking><pytest><pytest-mock>
2024-03-30 16:19:46
1
4,521
Vince
78,248,551
10,200,497
How can I change the groupby scope to find the first value that meets the conditions of a mask?
<p>This is an extension to this <a href="https://stackoverflow.com/questions/78246775/how-can-i-change-the-groupby-column-to-find-the-first-row-that-meets-the-condtio">post</a>.</p> <p>My DataFrame is:</p> <pre><code>import pandas as pd df = pd.DataFrame( { 'main': ['x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'y', 'y', 'y', 'y', 'y', 'y', 'y'], 'sub': ['c', 'c', 'c', 'd', 'd', 'e', 'e', 'e', 'e', 'f', 'f', 'f', 'f', 'g', 'g', 'g'], 'num_1': [97, 90, 105, 2100, 1000, 101, 110, 222, 90, 100, 99, 90, 2, 92, 95, 93], 'num_2': [100, 100, 100, 102, 102, 209, 209, 209, 209, 100, 100, 100, 100, 90, 90, 90], 'num_3': [99, 110, 110, 110, 110, 222, 222, 222, 222, 150, 101, 200, 5, 95, 95, 100], 'label': ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p'] } ) </code></pre> <p>And this is the expected output. I want to create column <code>result</code>:</p> <pre><code> main sub num_1 num_2 num_3 label result 0 x c 97 100 99 a b 1 x c 90 100 110 b b 2 x c 105 100 110 c b 3 x d 2100 102 110 d f 4 x d 1000 102 110 e f 5 x e 101 209 222 f f 6 x e 110 209 222 g f 7 x e 222 209 222 h f 8 x e 90 209 222 i f 9 y f 100 100 150 j k 10 y f 99 100 101 k k 11 y f 90 100 200 l k 12 y f 2 100 5 m k 13 y g 92 90 95 n NaN 14 y g 95 90 95 o NaN 15 y g 93 90 100 p NaN </code></pre> <p>The mask is:</p> <pre><code>mask = ( (df.num_1 &lt; df.num_2) &amp; (df.num_2 &lt; df.num_3) ) </code></pre> <p>The process starts like this:</p> <p><strong>a)</strong> The groupby column is <code>sub</code></p> <p><strong>b)</strong> Finding the first row that meets the condition of the mask for each group.</p> <p><strong>c)</strong> Put the value of <code>label</code> in the result</p> <p>If there are no rows that meets the condition of the mask, then the groupby column is changed to <code>main</code> to find the first row of mask. 
There is condition for this phase:</p> <p>The previous <code>sub</code>s should not be considered when using <code>main</code> as the <code>groupby</code> column.</p> <p>An example of the above steps for group <code>d</code> in the sub column:</p> <p><strong>a)</strong> <code>sub</code> is the groupby column.</p> <p><strong>b)</strong> There are no rows in the <code>d</code> group that <code>df.num_2</code> is between <code>df.num_1</code> and <code>df.num_3</code> (the condition of the <code>mask</code>)</p> <p>So now for group <code>d</code>, its main group is searched. However group <code>c</code> is also in this main group. Since it is before group <code>d</code>, group <code>c</code> should not count for this step. So in <code>x</code> group the first row of the <code>mask</code> has <code>f</code> label (101 &lt; 102 &lt; 222).</p> <p>One thing to note is that for each <code>sub</code> group <code>num_2</code> does not change throughout the group. For example for entire group <code>c</code> <code>num_2</code> is 100.</p> <p>This is my attempt based on this <a href="https://stackoverflow.com/a/78246938/10200497">answer</a> but it does not work:</p> <pre><code>def find(g): # get sub as 0,1,2… sub = pd.factorize(g['sub'])[0] # convert inputs to numpy a = g['num_1'].to_numpy() b = g.loc[~g['sub'].duplicated(), 'num_2'].to_numpy() c = g['num_3'].to_numpy() # form mask # (a[:, None] &gt; b) -&gt; num_1 &gt; num_2 # (sub[:, None] &gt;= np.arange(len(b))) -&gt; exclude previous groups m = (a[:, None] &lt; b) &amp; (a[:, None] &gt; c) &amp; (sub[:, None] &gt;= np.arange(len(b))) # find first True per column return pd.Series(np.where(m.any(0), a[m.argmax(0)], np.nan)[sub], index=g.index) df['result'] = df.groupby('main', group_keys=False).apply(find) </code></pre>
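A plain, readable baseline for the rule as stated (a sketch; it loops over sub groups rather than vectorising, and assumes the default RangeIndex in document order so that "earlier subs" can be excluded positionally): first match inside the sub group, otherwise the first match in the remainder of the main group starting at this sub.

```python
import pandas as pd

def resolve(df: pd.DataFrame, mask: pd.Series) -> pd.Series:
    result = pd.Series(index=df.index, dtype=object)
    for _, g in df.groupby("sub", sort=False):
        hits = g.index[mask.loc[g.index]]
        if not len(hits):
            # fall back to the main group, excluding earlier subs
            main = df.index[(df["main"] == df.loc[g.index[0], "main"])
                            & (df.index >= g.index[0])]
            hits = main[mask.loc[main]]
        if len(hits):
            result.loc[g.index] = df.loc[hits[0], "label"]
    return result

# small check in which sub 'd' needs the fallback into main 'x'
df = pd.DataFrame({
    "main":  ["x", "x", "x", "x"],
    "sub":   ["c", "c", "d", "e"],
    "num_1": [97, 90, 2100, 101],
    "num_2": [100, 100, 102, 209],
    "num_3": [99, 110, 110, 222],
    "label": ["a", "b", "c", "d"],
})
mask = (df.num_1 < df.num_2) & (df.num_2 < df.num_3)
print(resolve(df, mask).tolist())
```

Groups with no match anywhere (like `g` in the full data) simply keep NaN. Once this baseline gives the expected column on the full frame, it also serves as a correctness check for any vectorised rewrite.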
<python><pandas><dataframe><group-by>
2024-03-30 15:53:40
1
2,679
AmirX
78,248,462
5,618,251
Convert lat,lon,data points to matrix (2D grid) at 0.5 degree resolution in Python
<p>I have a geodataframe which I load in as follows:</p> <pre><code>gdf = gpd.GeoDataFrame( ds.to_pandas(), geometry=gpd.points_from_xy(ds[&quot;CENLON&quot;], ds[&quot;CENLAT&quot;]), crs=&quot;EPSG:4326&quot;, ) </code></pre> <p>It looks as:</p> <pre><code>print(gdf) CENLON CENLAT O1REGION O2REGION AREA ... ZMAX ZMED SLOPE \ index ... 0 -146.8230 63.6890 1 2 0.360 ... 2725 2385 42.0 1 -146.6680 63.4040 1 2 0.558 ... 2144 2005 16.0 2 -146.0800 63.3760 1 2 1.685 ... 2182 1868 18.0 3 -146.1200 63.3810 1 2 3.681 ... 2317 1944 19.0 4 -147.0570 63.5510 1 2 2.573 ... 2317 1914 16.0 ... ... ... ... ... ... ... ... ... ... 216424 -37.7325 -53.9860 19 3 0.042 ... 510 -999 29.9 216425 -36.1361 -54.8310 19 3 0.567 ... 830 -999 23.6 216426 -37.3018 -54.1884 19 3 4.118 ... 1110 -999 16.8 216427 -90.4266 -68.8656 19 1 0.011 ... 270 -999 0.4 216428 37.7140 -46.8972 19 4 0.528 ... 1170 -999 9.6 </code></pre> <p>I want to create a 2D matrix (world map) of the column &quot;01REGION&quot; at a 0.5 degree resolution (720x360 world map) with the mean as the aggregation method. How can I do this (preferably with cartopy?)</p>
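A plain-numpy sketch of the binning described above (cartopy would only be needed for plotting, and the point values here are illustrative stand-ins for `gdf["CENLON"]`, `gdf["CENLAT"]`, `gdf["O1REGION"]`): each point is assigned to a 0.5-degree cell and per-cell means are accumulated.

```python
import numpy as np

# Illustrative sample points (longitude, latitude, value to aggregate)
lon = np.array([-146.823, -146.668, 37.714])
lat = np.array([63.689, 63.404, -46.8972])
val = np.array([1.0, 1.0, 19.0])

res = 0.5
cols = np.clip(((lon + 180.0) / res).astype(int), 0, 719)  # 0..719
rows = np.clip(((lat + 90.0) / res).astype(int), 0, 359)   # 0..359

sums = np.zeros((360, 720))
counts = np.zeros((360, 720))
np.add.at(sums, (rows, cols), val)   # unbuffered add: handles repeated cells
np.add.at(counts, (rows, cols), 1)

with np.errstate(invalid="ignore"):
    grid = sums / counts             # NaN where a cell received no points
```

The resulting `grid` is the 360x720 world matrix; row 0 is latitude -90, so flip it if your plotting convention puts north at the top.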
<python><geospatial><geo><cartopy><rasterizing>
2024-03-30 15:24:09
2
361
user5618251
78,248,435
522,477
Calls to external API work when running code as a script, but receive `500 Internal Server Error` response when using FastAPI to run the same code?
<p>I have an application to predict the size of a fish in an image. I have built a FastAPI endpoint --<code>/predict/</code>-- that runs the multi-step process to make that prediction. The steps include two calls to external APIs (not under my control, so I can't see more than what they return).</p> <p>When I run my code just from the script, such as through an IDE (I use PyCharm), the code for the prediction steps runs correctly and I get appropriate responses back from both APIs.</p> <p>The first is to <a href="https://roboflow.ai" rel="nofollow noreferrer">Roboflow</a>, and here is an example of the output from running the script (again, I just call this from the command line or hit Run in Pycharm):</p> <pre><code>2024-03-30 10:59:36,073 - DEBUG - Starting new HTTPS connection (1): detect.roboflow.com:443 2024-03-30 10:59:36,339 - DEBUG - https://detect.roboflow.com:443 &quot;POST /fish_measure/1?api_key=AY3KX4KMynZroEOyXUEb&amp;disable_active_learning=False HTTP/1.1&quot; 200 914 </code></pre> <p>The second is to <a href="https://fishial.ai" rel="nofollow noreferrer">Fishial</a>, and here is an example of the output from running the script (script or through PyCharm), where this one has to get the token, url, etc:</p> <pre><code>2024-03-30 11:02:31,866 - DEBUG - Starting new HTTPS connection (1): api-users.fishial.ai:443 2024-03-30 11:02:33,273 - DEBUG - https://api-users.fishial.ai:443 &quot;POST /v1/auth/token HTTP/1.1&quot; 200 174 2024-03-30 11:02:33,273 - INFO - Access token: eyJhbGciOiJIUzI1NiJ9.eyJleHAiOjE3MTE4MTE1NTMsImtpZCI6ImIzZjNiYWZlMTg2NGNjYmM3ZmFkNmE5YSJ9.YtlaecKMyxjipBDS97xNV3hYKcF3jRpOxTAVnwrxOcE 2024-03-30 11:02:33,273 - INFO - Obtaining upload url... 2024-03-30 11:02:33,582 - DEBUG - Starting new HTTPS connection (1): api.fishial.ai:443 2024-03-30 11:02:33,828 - DEBUG - https://api.fishial.ai:443 &quot;POST /v1/recognition/upload HTTP/1.1&quot; 200 1120 2024-03-30 11:02:33,829 - INFO - Uploading picture to the cloud... 
2024-03-30 11:02:33,852 - DEBUG - Starting new HTTPS connection (1): storage.googleapis.com:443 2024-03-30 11:02:34,179 - DEBUG - https://storage.googleapis.com:443 &quot;PUT /backend-fishes-storage-prod/6r9p24qp4llhat8mliso8xacdxm5?GoogleAccessId=services-storage-client%40ecstatic-baton-230905.iam.gserviceaccount.com&amp;Expires=1711811253&amp;Signature=gCGPID7bLuw%2FzUfv%2FLrTRPeQA060CaXQEqITPvW%2FWZ5GHXYKDRNCxVrUJ7UmpHVa0m60gIMFwFSQhYqsDmP3SkjI7ZnJSIEj53zxtOpcL7o2VGv6ZUuoowWwzmzqeM9yfbCHGI3TmtuW0lMhqAyi6Pc0wYhj73P12QU28wF8sdQMblHQLQVd1kFXtPl5yjSW12ADt4WEvB7dbnl7HmUTcL8WFS2SnJ1zcLljIbXTlRWcqc88MIcklSLG69z%2FJcUSh%2BeNxRp%2Fzotv5GitJBq9pF%2BzRt25lCt%2BYHGViJ46uu4rQapZBfACxsE762a1ZcrvTasy97idKRaijLJKAtZBRQ%3D%3D HTTP/1.1&quot; 200 0 2024-03-30 11:02:34,180 - INFO - Requesting fish recognition... 2024-03-30 11:02:34,182 - DEBUG - Starting new HTTPS connection (1): api.fishial.ai:443 2024-03-30 11:02:39,316 - DEBUG - https://api.fishial.ai:443 &quot;GET /v1/recognition/image?q=eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaHBBMksyUEE9PSIsImV4cCI6bnVsbCwicHVyIjoiYmxvYl9pZCJ9fQ==--d37fdc2d5c6d8943a59dbd11326bc8a651f9bd69 HTTP/1.1&quot; 200 10195 </code></pre> <p>Here is the code for the endpoint:</p> <pre><code>from fastapi import FastAPI, File, UploadFile, HTTPException, BackgroundTasks from fastapi.middleware.cors import CORSMiddleware from pydantic import BaseModel from typing import Union class PredictionResult(BaseModel): prediction: Union[float, str] eyeball_estimate: Union[float, str] species: str elapsed_time: float @app.post(&quot;/predict/&quot;, response_model=PredictionResult) async def predict_fish_length(file: UploadFile = File(...)): try: # capture the start of the process so we can track duration start_time = time.time() # Create a temporary file temp_file = tempfile.NamedTemporaryFile(delete=False) temp_file_path = temp_file.name with open(temp_file_path, &quot;wb&quot;) as buffer: shutil.copyfileobj(file.file, buffer) temp_file.close() prediction = 
process_one_image(temp_file_path) end_time = time.time() # Record the end time elapsed_time = end_time - start_time # Calculate the elapsed time return PredictionResult( prediction=prediction[&quot;prediction&quot;][0], eyeball_estimate=prediction[&quot;eye_ratio_len_est&quot;][0], species=prediction[&quot;species&quot;][0], elapsed_time=elapsed_time ) except Exception as e: # Clean up the temp file in case of an error os.unlink(temp_file_path) raise HTTPException(status_code=500, detail=str(e)) from e </code></pre> <p>I run this through <code>uvicorn</code>, then try to call the endpoint through <code>curl</code> as follows:</p> <pre><code>curl -X POST http://127.0.0.1:8000/predict/ -F &quot;file=@/path/to/image.jpg&quot; </code></pre> <p>The Roboflow API calls work fine, but now I get this response from the Fishial (second) API:</p> <pre><code>2024-03-30 10:48:09,166 - DEBUG - Starting new HTTPS connection (1): api.fishial.ai:443 2024-03-30 10:48:10,558 - DEBUG - https://api.fishial.ai:443 &quot;GET /v1/recognition/image?q=eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaHBBMWkyUEE9PSIsImV4cCI6bnVsbCwicHVyIjoiYmxvYl9pZCJ9fQ==--36e68766cd891eb0e57610e8fb84b76e205b639e HTTP/1.1&quot; 500 89 INFO: 127.0.0.1:49829 - &quot;POST /predict/ HTTP/1.1&quot; 500 Internal Server Error </code></pre> <p>I'm not sure where to look, or perhaps what to print out/log, in order to get more information. I'm not even sure if the error is on my side or coming from the API I'm calling (though the <code>500 89</code> end of the GET line at the end makes me think it's coming from the API I'm calling).</p> <p>Many thanks!</p> <p><strong>EDIT</strong>: A request was made for more code. The function to process an image is just a series of calls to other functions. 
So I've included here only the code I use to call the second (Fishial) API:</p> <pre><code>def recognize_fish(file_path, key_id=key_id, key_secret=key_secret, identify=False): if not os.path.isfile(file_path): err(&quot;Invalid picture file path.&quot;) for dep in DEPENDENCIES: try: __import__(dep) except ImportError: err(f&quot;Unsatisfied dependency: {dep}&quot;) logging.info(&quot;Identifying picture metadata...&quot;) name = os.path.basename(file_path) mime = mimetypes.guess_type(file_path)[0] size = os.path.getsize(file_path) with open(file_path, &quot;rb&quot;) as f: csum = base64.b64encode(hashlib.md5(f.read()).digest()).decode(&quot;utf-8&quot;) logging.info(f&quot;\n file name: {name}&quot;) logging.info(f&quot; MIME type: {mime}&quot;) logging.info(f&quot; byte size: {size}&quot;) logging.info(f&quot; checksum: {csum}\n&quot;) if identify: return if not key_id or not key_secret: err(&quot;Missing key ID or key secret.&quot;) logging.info(&quot;Obtaining auth token...&quot;) data = { &quot;client_id&quot;: key_id, &quot;client_secret&quot;: key_secret } response = requests.post(&quot;https://api-users.fishial.ai/v1/auth/token&quot;, json=data) auth_token = response.json()[&quot;access_token&quot;] auth_header = f&quot;Bearer {auth_token}&quot; logging.info(f&quot;Access token: {auth_token}&quot;) logging.info(&quot;Obtaining upload url...&quot;) data = { &quot;blob&quot;: { &quot;filename&quot;: name, &quot;content_type&quot;: mime, &quot;byte_size&quot;: size, &quot;checksum&quot;: csum } } headers = { &quot;Authorization&quot;: auth_header, &quot;Content-Type&quot;: &quot;application/json&quot;, &quot;Accept&quot;: &quot;application/json&quot; } response = requests.post(&quot;https://api.fishial.ai/v1/recognition/upload&quot;, json=data, headers=headers) signed_id = response.json()[&quot;signed-id&quot;] upload_url = response.json()[&quot;direct-upload&quot;][&quot;url&quot;] content_disposition = 
response.json()[&quot;direct-upload&quot;][&quot;headers&quot;][&quot;Content-Disposition&quot;] logging.info(&quot;Uploading picture to the cloud...&quot;) with open(file_path, &quot;rb&quot;) as f: requests.put(upload_url, data=f, headers={ &quot;Content-Disposition&quot;: content_disposition, &quot;Content-MD5&quot;: csum, &quot;Content-Type&quot;: &quot;&quot; }) logging.info(&quot;Requesting fish recognition...&quot;) response = requests.get(f&quot;https://api.fishial.ai/v1/recognition/image?q={signed_id}&quot;, headers={&quot;Authorization&quot;: auth_header}) fish_count = len(response.json()[&quot;results&quot;]) logging.info(f&quot;Fishial Recognition found {fish_count} fish(es) on the picture.&quot;) if fish_count == 0: return [] species_names = [] for i in range(fish_count): fish_data = extract_from_json(f&quot;results[{i}]&quot;, response.json()) if fish_data and &quot;species&quot; in fish_data: logging.info(f&quot;Fish {i + 1} is:&quot;) for j in range(len(fish_data[&quot;species&quot;])): species_data = fish_data[&quot;species&quot;][j] if &quot;fishangler-data&quot; in species_data and &quot;metaTitleName&quot; in species_data[&quot;fishangler-data&quot;]: species_name = species_data[&quot;fishangler-data&quot;][&quot;metaTitleName&quot;] accuracy = species_data[&quot;accuracy&quot;] logging.info(f&quot; - {species_name} [accuracy {accuracy}]&quot;) species_names.append(species_name) else: logging.error(&quot; - Species name not found in the response.&quot;) else: logging.error(f&quot;\nFish {i + 1}: Species data not found in the response.&quot;) return species_names </code></pre> <p><em>P.S. This feels like it's getting a little long. If putting this much code on Pastebin is more appropriate, I'm happy to edit.</em></p>
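As a general debugging aid (illustrative, not specific to Fishial's API): each bare `response.json()` above will parse an error body, or fail opaquely, without ever showing what the server actually said. A small helper that fails loudly with the upstream status and body makes it clearer whose 500 it is:

```python
def check_response(response, context=""):
    """Return parsed JSON, or raise with the upstream status and body visible.

    `response` is any requests-style object exposing status_code/text/json().
    """
    if not 200 <= response.status_code < 300:
        raise RuntimeError(
            f"{context} returned {response.status_code}: {response.text[:500]}"
        )
    return response.json()
```

Used as, e.g., `data = check_response(requests.get(url, headers=...), context="recognition")`, the traceback would then include the external API's error body, which usually settles whether the failure is upstream or in how the request was built.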
<python><fastapi><http-status-code-500>
2024-03-30 15:16:29
2
2,079
Savage Henry
78,248,039
66,191
Validate a nested dictionary using python/marshmallow when field names are variable
<p>Given this example...</p> <pre><code>{ &quot;A-B&quot;: { &quot;x&quot;: { &quot;name&quot;: &quot;x A-B&quot; }, &quot;y&quot;: { &quot;level&quot;: 6 } }, &quot;B-C&quot;: { &quot;x&quot;: { &quot;name&quot;: &quot;x B-C&quot; }, &quot;y&quot;: { &quot;level&quot;: 9 } }, &quot;A-C&quot;: { &quot;x&quot;: { &quot;name&quot;: &quot;x A-C&quot; } } } </code></pre> <p>Where &quot;A&quot;, &quot;B&quot; and &quot;C&quot; are states and &quot;x&quot; and &quot;y&quot; are different dicts. &quot;x&quot; can have a &quot;name&quot; key with a string value and &quot;y&quot; can have a &quot;level&quot; key with an integer value.</p> <p>Both &quot;x&quot; and &quot;y&quot; are optional.</p> <p>I've created the schemas for &quot;x&quot; and &quot;y&quot;...</p> <pre><code>class X_Schema( Schema ): name = fields.String( required = True ) class Y_Schema( Schema ): level = fields.Integer( required = True ) </code></pre> <p>I know I need to use some nesting but I can't work out how this is going to work when the top level &quot;key&quot; is not fixed. i.e. it can be &quot;A-B&quot;, &quot;A-C&quot;, &quot;B-C&quot; etc. &quot;x&quot; and &quot;y&quot; can only occur once within a top level value, both are optional.</p> <p>I'd like to do something like...</p> <pre><code>class StateSchema( Schema ): ???? = fields.Dict( keys = &lt; &quot;x&quot; or &quot;y&quot; &gt;, values = &lt;X_Schema or Y_Schema&gt; ) </code></pre> <p>Personally, I don't think this is possible using marshmallow. 
I do not control this input so I'm pretty much stuck not validating it if it can't be done using marshmallow.</p> <p>I've got this far...</p> <pre><code>class StateSchema( Schema ): a_b = fields.Dict( keys = fields.String(), values = fields.Nested( X_Schema() ), required = False, data_key = &quot;A-B&quot; ) b_c = fields.Dict( keys = fields.String(), values = fields.Nested( X_Schema() ), required = False, data_key = &quot;B-C&quot; ) </code></pre> <p>but I feel this is sub-optimal as I need to create fields for all of the possible states.... :(</p>
<python><marshmallow>
2024-03-30 13:07:08
1
2,975
ScaryAardvark
78,247,747
45,843
Tkinter menu spontaneously adding extra item
<p>I'm writing a Tkinter program that so far creates a window with a menu bar, a File menu, and a single item. The menu is successfully created, but with two items, the first being one that I did not specify, whose name is &quot;-----&quot;.</p> <p>If I don't add an item, the spontaneous one is still added. This still happens if I specify tearoff=0.</p> <p>Any idea why this is happening?</p> <p>Windows 11, Python 3.12.2, Tkinter and Tcl 8.6.</p> <pre><code>import tkinter as tk window = tk.Tk() window.geometry(&quot;800x600&quot;) menubar = tk.Menu(window) window.config(menu=menubar) fileMenu = tk.Menu(menubar) fileMenu.add_command( label=&quot;Exit&quot;, command=window.destroy, ) menubar.add_cascade(label=&quot;File&quot;, menu=fileMenu, underline=0) window.mainloop() </code></pre>
<python><tkinter>
2024-03-30 11:32:11
1
34,049
rwallace
78,247,587
6,630,397
Sweep shape along 3D path in Python
<p>Using <a href="https://www.python.org/" rel="nofollow noreferrer">Python</a> (3.10.14 at the time of writing), how could one build a 3D mesh object (which can be saved in either STL, PLY or GLB/GLTF format) using:</p> <ul> <li>a 3D path as the sweep axis,</li> <li>a 2D rectangular shape</li> </ul> <p>with those constraints:</p> <ul> <li>the 3D path is a true 3D path, which means that each coordinate varies in space; it's not contained in a single plane</li> <li>the upper and lower edges of the rectangle shape must always be horizontal (which means that no banking occurs, i.e. there is no rotation of the shape during the sweep along the 3D axis)</li> <li>the 3D path always passes perpendicularly through the center of the rectangle</li> </ul> <p>?</p> <p>We can consider the 3D trajectory as being composed of straight segments only (no curves). This means that two segments of the 3D axis meet at an angle, i.e. that the derivative at this point is not continuous. The resulting 3D mesh should not have holes at those locations. Therefore, the &quot;3D join style&quot; should be determined with a given cap style (e.g. as described <a href="https://matplotlib.org/stable/gallery/lines_bars_and_markers/joinstyle.html" rel="nofollow noreferrer">here</a> for 2 dimensions).</p> <p>The 3D path is given as a numpy 3D array as follow:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np path = np.array([ [ 5.6, 10.1, 3.3], [ 5.6, 12.4, 9.7], [10.2, 27.7, 17.1], [25.3, 34.5, 19.2], [55. 
, 28.3, 18.9], [80.3, 24.5, 15.4] ]) </code></pre> <p>The 2D rectangular shape is given as a <a href="https://pypi.org/project/shapely/2.0.3/" rel="nofollow noreferrer">Shapely 2.0.3</a> <a href="https://shapely.readthedocs.io/en/stable/reference/shapely.Polygon.html" rel="nofollow noreferrer">Polygon</a> feature:</p> <pre class="lang-py prettyprint-override"><code>from shapely.geometry import Polygon polygon = Polygon([[0, 0],[1.2, 0], [1.2, 0.8], [0, 0.8], [0, 0]]) </code></pre> <h3>What I achieved so far</h3> <p>I'm currently giving <a href="https://pypi.org/project/trimesh/4.2.3/" rel="nofollow noreferrer">Trimesh 4.2.3</a> (<a href="https://pypi.org/project/numpy/1.26.4/" rel="nofollow noreferrer">Numpy 1.26.4</a> being available) a try by using <a href="https://trimesh.org/trimesh.creation.html#trimesh.creation.sweep_polygon" rel="nofollow noreferrer"><code>sweep_polygon</code></a> but without success because each time the rectangle shape has to change direction, it also rotates around an axis perpendicular to the plane defined by the two egdes meeting at that vertex where the direction changes, violating the second constraint here above.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from shapely.geometry import Polygon from trimesh.creation import sweep_polygon polygon = Polygon([[0, 0],[1.2, 0], [1.2, 0.8], [0, 0.8], [0, 0]]) path = np.array([ [ 5.6, 10.1, 3.3], [ 5.6, 12.4, 9.7], [10.2, 27.7, 17.1], [25.3, 34.5, 19.2], [55. , 28.3, 18.9], [80.3, 24.5, 15.4] ]) mesh = sweep_polygon(polygon, path) </code></pre> <p>In addition, the <a href="https://trimesh.org/trimesh.creation.html#trimesh.creation.sweep_polygon" rel="nofollow noreferrer"><code>sweep_polygon</code></a> doc says:</p> <blockquote> <p>Doesn’t handle sharp curvature well.</p> </blockquote> <p>which is a little obscure.</p> <p><a href="https://i.sstatic.net/0gkpV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0gkpV.png" alt="Mesh rendered in meshlab. 
The shape's tilt is clearly visible as it rises to the right." /></a></p> <p>Mesh rendered in <a href="https://www.meshlab.net/" rel="nofollow noreferrer">meshlab</a>. The shape's tilt is clearly visible as it rises to the right.</p> <p>The final goal is to run that in a Docker container on a headless server.</p>
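For what it's worth, the no-banking frames themselves are straightforward to build by hand with numpy, which could feed a manual mesh construction if `sweep_polygon` can't be coerced. This is a sketch: interior vertices use a simple averaged-direction miter rather than a proper join style, and the construction degenerates if a segment is exactly vertical.

```python
import numpy as np

def no_banking_frames(path):
    """Per-vertex (side, up) frames whose `side` axis is always horizontal."""
    seg = np.diff(path, axis=0)
    seg = seg / np.linalg.norm(seg, axis=1, keepdims=True)
    # average adjacent segment directions at interior vertices (simple miter)
    dirs = np.vstack([seg[:1], (seg[:-1] + seg[1:]) / 2.0, seg[-1:]])
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    z = np.array([0.0, 0.0, 1.0])
    side = np.cross(z, dirs)               # z-component is 0 by construction
    side = side / np.linalg.norm(side, axis=1, keepdims=True)
    up = np.cross(dirs, side)
    up = up / np.linalg.norm(up, axis=1, keepdims=True)
    return side, up

def sweep_rectangle_rings(path, width, height):
    """Rectangle corner rings centered on each path vertex, shape (n, 4, 3)."""
    side, up = no_banking_frames(path)
    s = side * (width / 2.0)
    u = up * (height / 2.0)
    return np.stack(
        [path - s - u, path + s - u, path + s + u, path - s + u], axis=1
    )
```

Consecutive rings can then be stitched into quads (two triangles each) and capped at both ends to get a watertight mesh in trimesh or any STL/PLY writer.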
<python><3d><computational-geometry><trimesh>
2024-03-30 10:34:34
2
8,371
swiss_knight
78,247,408
13,231,896
Validation for multiple fields on pydantic BaseModel
<p>In my fastapi app I have created a pydantic BaseModel with two fields (among others): 'relation_type' and 'document_list' (both are optional). I want to validate that, if 'relation_type' has a value, then 'document_list' must have at least one element. Otherwise it should raise a validation error. How can I do it?</p> <pre><code>class TipoRelacionEnum(str, Enum): nota_credito = &quot;01&quot; nota_debito = &quot;02&quot; devolucion_mercancias = &quot;03&quot; sustitucion = &quot;04&quot; traslado = &quot;05&quot; facturacion_generada = &quot;06&quot; anticipo = &quot;07&quot; class Cfdi(BaseModel): relation_type: Optional[Annotated[TipoRelacionEnum, Field(title=&quot;Tipo de RelaciΓ³n&quot;, description=&quot;&quot;&quot;Se debe registrar la clave de la relaciΓ³n que existe entre este comprobante que se estΓ‘ generando y el o los CFDI previos.&quot;&quot;&quot;, examples=[ &quot;04&quot;, &quot;01&quot;, ], max_length=2)]] = None document_list: list[str] | None = None </code></pre>
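One way to express this kind of cross-field rule (a sketch assuming Pydantic v2; in v1 the equivalent hook is a `root_validator`, and the enum field here is simplified to a plain string) is a model-level validator that runs after field validation:

```python
from typing import Optional

from pydantic import BaseModel, ValidationError, model_validator

class Cfdi(BaseModel):
    relation_type: Optional[str] = None   # TipoRelacionEnum in the real model
    document_list: Optional[list[str]] = None

    @model_validator(mode="after")
    def documents_required_with_relation(self):
        # an empty list and None both fail this check
        if self.relation_type is not None and not self.document_list:
            raise ValueError(
                "document_list must contain at least one element "
                "when relation_type is set"
            )
        return self

Cfdi(relation_type="04", document_list=["doc-1"])  # valid
```

FastAPI surfaces the raised `ValueError` as a normal 422 validation response for the request body.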
<python><fastapi><pydantic>
2024-03-30 09:26:47
1
830
Ernesto Ruiz
78,247,260
1,307,905
How to set the Python executable name, now that Py_SetProgramName() is deprecated?
<p>The Python 3.12 documentation for embedding gives this example:</p> <pre><code>#define PY_SSIZE_T_CLEAN #include &lt;Python.h&gt; int main(int argc, char *argv[]) { wchar_t *program = Py_DecodeLocale(argv[0], NULL); if (program == NULL) { fprintf(stderr, &quot;Fatal error: cannot decode argv[0]\n&quot;); exit(1); } Py_SetProgramName(program); /* optional but recommended */ Py_Initialize(); PyRun_SimpleString(&quot;from time import time,ctime\n&quot; &quot;print('Today is', ctime(time()))\n&quot;); if (Py_FinalizeEx() &lt; 0) { exit(120); } PyMem_RawFree(program); return 0; } </code></pre> <p>Although calling <code>Py_SetProgramName()</code> is recommended, it throws a compile warning:</p> <pre><code>test01.c:12:5: warning: 'Py_SetProgramName' is deprecated [-Wdeprecated-declarations] Py_SetProgramName(program); /* optional but recommended */ ^ /opt/python/3.11/include/python3.11/pylifecycle.h:37:1: note: 'Py_SetProgramName' has been explicitly marked deprecated here Py_DEPRECATED(3.11) PyAPI_FUNC(void) Py_SetProgramName(const wchar_t *); ^ /opt/python/3.11/include/python3.11/pyport.h:336:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ 1 warning generated. </code></pre> <p>The resulting executable runs and if you add <code>import sys</code> and <code>print(sys.executable)</code> to the <code>PyRun_SimpleString()</code> argument, the correct executable name is shown.</p> <p>As this was deprecated in 3.11, and although it is still recommended for 3.12, I'd rather get rid of the warning. How should I change the program?</p>
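For reference, the deprecation points at the `PyConfig` initialization API from PEP 587, where the program name is a field of the config rather than a global setter. A sketch of the same example in that style (consult the "Python Initialization Configuration" docs for your exact version):

```c
#define PY_SSIZE_T_CLEAN
#include <Python.h>

int main(int argc, char *argv[])
{
    PyStatus status;
    PyConfig config;

    PyConfig_InitPythonConfig(&config);

    /* Replaces Py_SetProgramName(); also handles the argv[0] decoding
       that Py_DecodeLocale() did in the old example. */
    status = PyConfig_SetBytesString(&config, &config.program_name, argv[0]);
    if (PyStatus_Exception(status)) {
        goto fail;
    }

    status = Py_InitializeFromConfig(&config);
    if (PyStatus_Exception(status)) {
        goto fail;
    }
    PyConfig_Clear(&config);

    PyRun_SimpleString("from time import time,ctime\n"
                       "print('Today is', ctime(time()))\n");
    return Py_FinalizeEx() < 0 ? 120 : 0;

fail:
    PyConfig_Clear(&config);
    Py_ExitStatusException(status);
}
```

Compiled the same way as the original, this produces no deprecation warning.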
<python><c><python-c-api>
2024-03-30 08:24:44
1
78,248
Anthon
78,247,258
1,330,734
Selenium Wire webdriver cannot browse site
<p>I am using Python and Selenium (selenium-wire) to automatically control a browser for two sites. Browsing most sites like: <a href="https://www.google.com" rel="nofollow noreferrer">https://www.google.com</a><br /> <a href="https://platform.cmcmarkets.com/" rel="nofollow noreferrer">https://platform.cmcmarkets.com/</a><br /> using a chromedriver under seleniumwire works fine.</p> <p>However when browsing to: <a href="https://app.plus500.com/" rel="nofollow noreferrer">https://app.plus500.com/</a></p> <p>A Chrome error page is displayed with the message:</p> <pre><code>This site can’t be reached The web page at https://app.plus500.com/ might be temporarily down or it may have moved permanently to a new web address. ERR_HTTP2_PROTOCOL_ERROR </code></pre> <p>selenium-wire is a python module that extends selenium by allowing the capture of requests and responses. The site can be reached using plain selenium webdrivers, so it seems the issue lies with the selenium-wire package in use.</p> <p>Importing the Root Authority certificate issued by selenium-wire into Chrome allows browsing HTTPS sites without the blocking &quot;Your connection is not private&quot; screen.</p> <p>What is odd is that while certificate viewer for google.com (and other sites) shows the Selenium Wire CA:</p> <pre><code>Common Name (CN) www.google.com Organisation (O) &lt;Not part of certificate&gt; Organisational Unit (OU) &lt;Not part of certificate&gt; Common Name (CN) Selenium Wire CA Organisation (O) &lt;Not part of certificate&gt; Organisational Unit (OU) &lt;Not part of certificate&gt; </code></pre> <p>app.plus500.com does not:</p> <pre><code>Common Name (CN) *.plus500.com Organisation (O) Edgio, Inc. 
Organisational Unit (OU) &lt;Not part of certificate&gt; Common Name (CN) DigiCert Global G2 TLS RSA SHA256 2020 CA1 Organisation (O) DigiCert Inc Organisational Unit (OU) &lt;Not part of certificate&gt; </code></pre> <p>The plus500.com site then reports that I am offline, possibly because some app assets failed to load as a result. However, it does report that the certificate is valid, so that does not seem to be the issue.</p> <p>I've tried toggling the mitmproxy backend in the selenium-wire options, also with no success.</p> <p>Packet captures to the server on port 443 show that the failed connection sequence corresponds with some extra TCP RSTs from the server, but I cannot interpret much else from them.</p> <p>Please, any help diagnosing this would be great!</p>
<python><selenium-webdriver><http2><mitmproxy><seleniumwire>
2024-03-30 08:22:58
1
490
user1330734
78,246,797
1,572,146
Including non-Python files without __init__.py using `package_data` in setup.py?
<p>Given this directory structure (empty <code>__init__.py</code> and <code>logging.yml</code> is fine):</p> <pre class="lang-none prettyprint-override"><code>foo β”‚ setup.py β”‚ └─── foo β”‚ __init__.py β”‚ └─── config logging.yml </code></pre> <p>Here is my attempt, this <code>setup.py</code>:</p> <pre class="lang-python prettyprint-override"><code>from os import path from setuptools import find_packages, setup package_name = &quot;foo&quot; if __name__ == &quot;__main__&quot;: setup( name=package_name, packages=find_packages(), package_dir={package_name: package_name}, package_data={&quot;config&quot;:[path.join(package_name, &quot;config&quot;, &quot;logging.yml&quot;)]}, include_package_data=True, ) # Also tried: # package_data={&quot;config&quot;: [path.join(&quot;config&quot;, &quot;logging.yml&quot;)]} # package_data={&quot;&quot;: [path.join(&quot;config&quot;, &quot;logging.yml&quot;)]} # package_data={&quot;&quot;: [path.join(package_name, &quot;config&quot;, &quot;logging.yml&quot;)]} </code></pre> <p>No errors after a <code>python setup.py install</code> (also tried <code>python -m pip install .</code>), but running from my virtualenv root <code>fd -HIFuuueyml logging</code> returns no results and it's absent from <code>foo.egg-info\SOURCES.txt</code>.</p> <p>PS: Testing locally with 3.13.0a5; setuptools 69.2.0; pip 24.0. But on my CI test &amp; release to 2.7, 3.5, 3.6, 3.7, 3.8, 3.9, 3.10, 3.11, 3.12 across Windows, Linux and macOS.</p>
<python><setuptools><setup.py><egg><data-files>
2024-03-30 04:06:19
1
1,930
Samuel Marks
78,246,775
10,200,497
How can I change the groupby column to find the first row that meets the conditions of a mask if the initial groupby failed to find it?
<p>This is my DataFrame:</p> <pre><code>import pandas as pd df = pd.DataFrame( { 'main': ['x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'y', 'y', 'y', 'y', 'y', 'y', 'y'], 'sub': ['c', 'c', 'c', 'd', 'd', 'e', 'e', 'e', 'e', 'f', 'f', 'f', 'f', 'g', 'g', 'g'], 'num_1': [10, 9, 80, 80, 99, 101, 110, 222, 90, 1, 7, 10, 2, 10, 95, 10], 'num_2': [99, 99, 99, 102, 102, 209, 209, 209, 209, 100, 100, 100, 100, 90, 90, 90] } ) </code></pre> <p>And This is my expected output. I want to add column <code>result</code>:</p> <pre><code> main sub num_1 num_2 result 0 x c 10 99 101 1 x c 9 99 101 2 x c 80 99 101 3 x d 80 102 110 4 x d 99 102 110 5 x e 101 209 222 6 x e 110 209 222 7 x e 222 209 222 8 x e 90 209 222 9 y f 1 100 NaN 10 y f 7 100 NaN 11 y f 10 100 NaN 12 y f 2 100 NaN 13 y g 10 90 95 14 y g 95 90 95 15 y g 10 90 95 </code></pre> <p>The mask is:</p> <pre><code>mask = (df.num_1 &gt; df.num_2) </code></pre> <p>The process starts like this:</p> <p><strong>a)</strong> The <code>groupby</code> column is <code>sub</code></p> <p><strong>b)</strong> Finding the first row that meets the condition of the mask for each group.</p> <p><strong>c)</strong> Put the value of <code>num_1</code> in the <code>result</code></p> <p>If there are no rows that meets the condition of the mask, then the <code>groupby</code> column is changed to <code>main</code> to find the first row of <code>mask</code>. There is condition for this phase:</p> <p>The previous <code>subs</code> should not be considered when using <code>main</code> as the <code>groupby</code> column.</p> <p>An example of the above steps for group <code>d</code> in the <code>sub</code> column:</p> <p>a) <code>sub</code> is the <code>groupby</code> column.</p> <p>b) There are no rows in the <code>d</code> group that <code>df.num_1 &gt; df.num_2</code></p> <p>So now for group <code>d</code>, its <code>main</code> group is searched. However group <code>c</code> is also in this <code>main</code> group. 
Since it is before group <code>d</code>, group <code>c</code> should not count for this step.</p> <p>In this image I have shown where those values come from:</p> <p><a href="https://i.sstatic.net/WlDyI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WlDyI.png" alt="enter image description here" /></a></p> <p>And this is my attempt. It partially solves the issue for some groups but not all of them:</p> <pre><code>def step_a(g): mask = (g.num_1 &gt; g.num_2) g.loc[mask.cumsum().eq(1) &amp; mask, 'result'] = g.num_1 g['result'] = g.result.ffill().bfill() return g a = df.groupby('sub').apply(step_a) </code></pre>
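Reading the expected output and the image literally, each sub group's (constant) `num_2` acts as the threshold, searched from that group's first row through the remainder of its main group. A plain loop-based sketch of that reading (illustrative, not optimized):

```python
import numpy as np
import pandas as pd

def first_exceeding(df):
    """First num_1 above the sub group's own num_2, searched from the
    group's first row to the end of its main group (RangeIndex assumed).
    """
    out = pd.Series(np.nan, index=df.index, dtype="float64")
    for _, g in df.groupby('main', sort=False):
        for _, sg in g.groupby('sub', sort=False):
            threshold = sg['num_2'].iloc[0]   # num_2 is constant per sub group
            later = g.loc[sg.index[0]:]       # skip earlier sub groups only
            hits = later.index[later['num_1'] > threshold]
            if len(hits):
                out.loc[sg.index] = df.loc[hits[0], 'num_1']
    return out
```

Searching from the sub group's first row onward covers both phases at once: an in-group match is found before any fallback match in the rest of the main group.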
<python><pandas><dataframe><group-by>
2024-03-30 03:51:48
2
2,679
AmirX
78,246,765
5,121,282
Adding eureka to FastAPI is throwing RuntimeError: Cannot run the event loop while another loop is running
<p>My project has the following structure:</p> <p><a href="https://i.sstatic.net/439AC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/439AC.png" alt="enter image description here" /></a></p> <p>I want to add eureka to my fastapi application. I'm trying to do that in the <strong>init</strong>.py file:</p> <pre><code>import py_eureka_client.eureka_client as eureka_client from fastapi import FastAPI app = FastAPI() eureka_client.init(eureka_server=&quot;http://eureka-primary:8011/eureka/,http://eureka-secondary:8012/eureka/,http://eureka-tertiary:8013/eureka/&quot;, app_name=&quot;your_app_name&quot;, instance_port=8000) </code></pre> <p>Also I'm using uvicorn to run the app like this:</p> <pre><code>uvicorn landus.main:app --reload </code></pre> <p>But I get the error:</p> <pre><code>RuntimeError: Cannot run the event loop while another loop is running </code></pre> <p>I saw a few links like this <a href="https://github.com/keijack/python-eureka-client/issues/76" rel="nofollow noreferrer">https://github.com/keijack/python-eureka-client/issues/76</a> but I don't understand how it works. I only know that FastAPI and uvicorn are async applications, and adding eureka (another async application) is where I get this error, but I don't know how to make it work.</p>
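The clash happens because the registration call spins up its own event loop while uvicorn's loop is already running. One workaround sketch (illustrative; whether your py-eureka-client version offers a native async init is worth checking first) is to defer registration to a startup hook and run the blocking call in a worker thread, which this stdlib-only snippet simulates:

```python
import asyncio

def blocking_init():
    """Stand-in for eureka_client.init(...): creates and runs its own loop."""
    loop = asyncio.new_event_loop()
    try:
        loop.run_until_complete(asyncio.sleep(0))
        return "registered"
    finally:
        loop.close()

async def on_startup():
    # In FastAPI this body would live in a @app.on_event("startup") handler;
    # asyncio.to_thread keeps the nested loop off uvicorn's running loop.
    return await asyncio.to_thread(blocking_init)

print(asyncio.run(on_startup()))  # registered
```

Importantly, nothing blocking runs at module import time, so `uvicorn landus.main:app` can create its loop first.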
<python><runtime-error><fastapi><netflix-eureka>
2024-03-30 03:45:32
2
940
Alan Gaytan
78,246,736
15,412,256
Scikit-Learn Permuting and Updating Polars DataFrame
<p>I am trying to re-write the source code of <a href="https://github.com/scikit-learn/scikit-learn/blob/f07e0138b/sklearn/inspection/_permutation_importance.py#L77" rel="nofollow noreferrer">scikit-learn permutation importance</a> to achieve:</p> <ol> <li>Compatibility with Polars</li> <li>Compatibility with clusters of features</li> </ol> <pre class="lang-py prettyprint-override"><code>import polars as pl import polars.selectors as cs import numpy as np from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split X, y = make_classification( n_samples=1000, n_features=10, n_informative=3, n_redundant=0, n_repeated=0, n_classes=2, random_state=42, shuffle=False, ) X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42) feature_names = [f&quot;feature_{i}&quot; for i in range(X.shape[1])] X_train_polars = pl.DataFrame(X_train, schema=feature_names) X_test_polars = pl.DataFrame(X_test, schema=feature_names) y_train_polars = pl.Series(&quot;target&quot;, y_train) y_test_polars = pl.Series(&quot;target&quot;, y_test) </code></pre> <p>To get feature importances for a cluster of features, we need to permute the cluster's features simultaneously, then pass the result into the scorer to compare with the baseline score.</p> <p>However, I am struggling to <strong>replace multiple polars dataframe columns</strong> when examining clusters of features:</p> <pre class="lang-py prettyprint-override"><code>from sklearn.utils import check_random_state random_state = check_random_state(42) random_seed = random_state.randint(np.iinfo(np.int32).max + 1) X_train_permuted = X_train_polars.clone() shuffle_arr = np.array(X_train_permuted[:, [&quot;feature_0&quot;, &quot;feature_1&quot;]]) 
</code></pre> <p>Normally the <code>shuffle_arr</code> would have a shape of (n_samples,), which can easily replace the associated column in a polars dataframe using <code>polars.DataFrame.replace_column()</code>. In this case, <code>shuffle_arr</code> has a multi-dimensional shape of (n_samples, n_features in a cluster). What would be an efficient way to replace the associated columns?</p>
<python><machine-learning><scikit-learn><python-polars>
2024-03-30 03:20:39
1
649
Kevin Li
78,246,713
2,756,466
chromadb Collection.add(Includes=[""]) giving error
<p>I am new to chromadb and learning it for the first time. Here is some simple code:</p> <pre><code> collection.add( documents=[&quot;This is a document&quot;, &quot;This is another document&quot;], metadatas=[{&quot;source&quot;: &quot;test_source&quot;, &quot;page&quot;: 1},{&quot;source&quot;: &quot;test_source2&quot;, &quot;page&quot;: 2}, ], ids=[&quot;id1&quot;, &quot;id2&quot;], include=['distances', 'metadatas', 'embeddings', 'documents'], ) </code></pre> <p>I am getting the error <code>TypeError: Collection.add() got an unexpected keyword argument 'include'</code>. I need some help with how to view embeddings in chromadb. I have installed the latest version.</p>
<python><chromadb>
2024-03-30 03:03:19
1
7,004
raju
78,246,430
23,805,311
Issues with my Javascript code to parse files
<p>I am currently working on implementing a JavaScript code that aims to read and parse some numpy .npy files generated from Python's numpy, specified by a JSON file. These .npy files contain arrays of floating values (a ML model weights and biases as arrays). However, I am encountering errors while running the script. What is causing the issues in my implementation? How to resolve them?</p> <p>Here is the code I have written so far:</p> <pre><code>import fs from 'fs'; import path from 'path'; import numpy from 'numpy'; const jsonPath = 'model_weights.json'; const jsonData = JSON.parse(fs.readFileSync(jsonPath, 'utf8')); for (const [layerName, layerData] of Object.entries(jsonData)) { console.log(`Layer: ${layerName}`); // Read and print the weight data const weightFile = layerData.weight; const weightData = numpy.load(weightFile); console.log('Weight:'); console.log(weightData); // Read and print the bias data const biasFile = layerData.bias; const biasData = numpy.load(biasFile); console.log('Bias:'); console.log(biasData); console.log(); } </code></pre> <p>And here is a sample JSON file:</p> <pre><code>{ &quot;conv_mid.0&quot;: { &quot;weight&quot;: &quot;conv_mid.0_weight.npy&quot;, &quot;bias&quot;: &quot;conv_mid.0_bias.npy&quot; }, &quot;conv_mid.1&quot;: { &quot;weight&quot;: &quot;conv_mid.1_weight.npy&quot;, &quot;bias&quot;: &quot;conv_mid.1_bias.npy&quot; }, &quot;conv_mid.2&quot;: { &quot;weight&quot;: &quot;conv_mid.2_weight.npy&quot;, &quot;bias&quot;: &quot;conv_mid.2_bias.npy&quot; }, &quot;conv_mid.3&quot;: { &quot;weight&quot;: &quot;conv_mid.3_weight.npy&quot;, &quot;bias&quot;: &quot;conv_mid.3_bias.npy&quot; } } </code></pre> <p>And here is the error I get:</p> <pre><code>node:internal/modules/esm/resolve:214 const resolvedOption = FSLegacyMainResolve(packageJsonUrlString, packageConfig.main, baseStringified); ^ Error: Cannot find package 'node_modules\numpy\package.json' imported from retrieve.mjs </code></pre> <p>For context, here is how I 
generate the .npy files:</p> <pre><code>import torch import json import numpy as np state_dict = torch.load('network.pth') # Extract the 'params' OrderedDict from the state_dict params_dict = state_dict['params'] indices = {} for name, param in params_dict.items(): layer_name, param_type = name.rsplit('.', 1) if layer_name not in indices: indices[layer_name] = {} if param_type == 'weight': # Store the weight data as a numpy array data = np.array(param.tolist(), dtype=np.float32) np.save(f&quot;{layer_name}_weight.npy&quot;, data) indices[layer_name]['weight'] = f&quot;{layer_name}_weight.npy&quot; elif param_type == 'bias': # Store the bias data as a numpy array data = np.array(param.tolist(), dtype=np.float32) np.save(f&quot;{layer_name}_bias.npy&quot;, data) indices[layer_name]['bias'] = f&quot;{layer_name}_bias.npy&quot; json_data = json.dumps(indices, indent=4) with open('model_weights.json', 'w') as f: f.write(json_data) print(&quot;JSON data and weights/biases saved to model_weights.json and .npy files&quot;) print(json_data) </code></pre> <p>UPDATE: when I print <code>loadedData</code> this is the output:</p> <pre><code>{ 'conv_head.bias': [ -0.00030437775421887636, 0.0042824335396289825, 0.0022352728992700577, -0.0019111455185338855, 0.00017375686729792506, 0.0017428217688575387, 0.005364884156733751, 0.0028239202219992876 ], 'conv_mid.1.bias': [ -0.002030380303040147, 0.009842530824244022, -0.0008345193928107619, -0.007043828722089529, -0.00633968273177743, -0.006188599392771721, -0.0018206692766398191, 0.0037471423856914043 ], 'conv_head.weight': [ [ [Array], [Array], [Array] ], [ [Array], [Array], [Array] ], [ [Array], [Array], [Array] ], [ [Array], [Array], [Array] ], [ [Array], [Array], [Array] ], [ [Array], [Array], [Array] ], [ [Array], [Array], [Array] ], [ [Array], [Array], [Array] ] ], 'conv_mid.0.weight': [ [ [Array], [Array], [Array], [Array], [Array], [Array], [Array], [Array], [Array], [Array], [Array], [Array], [Array], [Array], [Array], 
[Array] ], [ [Array], [Array], [Array], [Array], [Array], [Array], [Array], [Array], [Array], [Array], [Array], [Array], [Array], [Array], [Array], [Array] ], ... </code></pre> <p>And here is the python version to read the files which works fine:</p> <pre><code>import numpy as np import json # Load the JSON data containing the file names and indices with open('model_weights.json', 'r') as f: json_data = json.load(f) # Iterate over the layers for layer_name, layer_data in json_data.items(): print(f&quot;Layer: {layer_name}&quot;) # Read and print the weight data weight_file = layer_data['weight'] weight_data = np.load(weight_file) print(&quot;Weight:&quot;) print(weight_data) # Read and print the bias data bias_file = layer_data['bias'] bias_data = np.load(bias_file) print(&quot;Bias:&quot;) print(bias_data) print() </code></pre>
<javascript><python><node.js><arrays><numpy>
2024-03-30 00:15:54
1
409
Mary H
78,246,105
2,057,516
How to start a download and render a response without hitting disk?
<p>So I have a scientific data Excel file validation form in django that works well. It works iteratively. Users can upload files as they accumulate new data that they add to their study. The <code>DataValidationView</code> inspects the files each time and presents the user with an error report that lists issues in their data that they must fix.</p> <p>We realized recently that a number of errors (but not all) can be fixed automatically, so I've been working on a way to generate a copy of the file with a number of fixes. So we rebranded the &quot;validation&quot; form page as a &quot;build a submission page&quot;. Each time they upload a new set of files, the intention is for them to still get the error report, but also automatically receive a downloaded file with a number of fixes in it.</p> <p>I learned just today that there's no way to both render a template and kick off a download at the same time, which makes sense. However, I had been planning to not let the generated file with fixes hit the disk.</p> <p>Is there a way to present the template with the errors and automatically trigger the download without previously saving the file to disk?</p> <p>This is my <code>form_valid</code> method currently (without the triggered download, but I had started to do the file creation before I realized that both downloading and rendering a template wouldn't work):</p> <pre><code> def form_valid(self, form): &quot;&quot;&quot; Upon valid file submission, adds validation messages to the context of the validation page. &quot;&quot;&quot; # This buffers errors associated with the study data self.validate_study() # This generates a dict representation of the study data with fixes and # removes the errors it fixed self.perform_fixes() # This sets self.results (i.e. the error report) self.format_validation_results_for_template() # HERE IS WHERE I REALIZED MY PROBLEM. 
I WANTED TO CREATE A STREAM HERE # TO START A DOWNLOAD, BUT REALIZED I CANNOT BOTH PRESENT THE ERROR REPORT # AND START THE DOWNLOAD FOR THE USER return self.render_to_response( self.get_context_data( results=self.results, form=form, submission_url=self.submission_url, ) ) </code></pre> <p>Before I got to that problem, I was compiling some pseudocode to stream the file... This is totally untested:</p> <pre><code>import pandas as pd from django.http import HttpResponse from io import BytesIO def download_fixes(self): excel_file = BytesIO() xlwriter = pd.ExcelWriter(excel_file, engine='xlsxwriter') df_output = {} for sheet in self.fixed_study_data.keys(): df_output[sheet] = pd.DataFrame.from_dict(self.fixed_study_data[sheet]) df_output[sheet].to_excel(xlwriter, sheet) xlwriter.save() xlwriter.close() # important step, rewind the buffer or when it is read() you'll get nothing # but an error message when you try to open your zero length file in Excel excel_file.seek(0) # set the mime type so that the browser knows what to do with the file response = HttpResponse(excel_file.read(), content_type='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet') # set the file name in the Content-Disposition header response['Content-Disposition'] = 'attachment; filename=myfile.xlsx' return response </code></pre> <p>So I'm thinking either I need to:</p> <ol> <li>Save the file to disk and then figure out a way to make the results page start its download</li> <li>Somehow send the data embedded in the results template and sent it back via javascript to be turned into a file download stream</li> <li>Save the file somehow in memory and trigger its download from the results template?</li> </ol> <p>What's the best way to accomplish this?</p> <p><em>UPDATED THOUGHTS</em>:</p> <p>I recently had done a simple trick with a <code>tsv</code> file where I embedded the file content in the resulting template with a download button that used javascript to grab the <code>innerHTML</code> 
of the tags around the data and start a &quot;download&quot;.</p> <p>I thought, if I encode the data, I could likely do something similar with the excel file content. I could base64 encode it.</p> <p>I reviewed past study submissions. The largest one was 115kb. That size is likely to grow by an order of magnitude, but for now 115kb is the ceiling.</p> <p>I googled to find a way to embed the data in the template and I got <a href="https://nemecek.be/blog/8/django-how-to-send-image-file-as-part-of-response" rel="nofollow noreferrer">this</a>:</p> <pre><code>import base64 with open(image_path, &quot;rb&quot;) as image_file: image_data = base64.b64encode(image_file.read()).decode('utf-8') ctx[&quot;image&quot;] = image_data return render(request, 'index.html', ctx) </code></pre> <p>I recently was playing around with base64 encoding in javascript for some unrelated work, which leads me to believe that embedding is do-able. I could even trigger it automatically. Anyone have any caveats to doing it this way?</p> <h2>Update</h2> <p>I have spent all day trying to implement @Chukwujiobi_Canon's suggestion, but after working through a lot of errors and things I'm inexperienced with, I'm at the point where I am stuck. A new tab is opened (but it's empty) and a file is downloaded, but it won't open (and there's a error in the browser console saying &quot;Frame load interrupted&quot;.</p> <p>I implemented the django code first and I think it is working correctly. When I submit the form without the javascript, the browser downloads the multipart stream, and it looks as expected:</p> <pre><code>--3d6b6a416f9b5 Content-Type: application/octet-stream Content-Range: bytes 0-9560/9561 PK?N˝Ö€]'[Content_Types].xm... ... --3d6b6a416f9b5 Content-Type: text/html Content-Range: bytes 0-16493/16494 &lt;!--use Bootstrap CSS and JS 5.0.2--&gt; ... 
&lt;/html&gt; --3d6b6a416f9b5-- </code></pre> <p>Here's the javascript:</p> <pre><code>validation_form = document.getElementById(&quot;submission-validation&quot;); // Take over form submission validation_form.addEventListener(&quot;submit&quot;, (event) =&gt; { event.preventDefault(); submit_validation_form(); }); async function submit_validation_form() { // Put all of the form data into a variable (formdata) const formdata = new FormData(validation_form); try { // Submit the form and get a response (which can only be done inside an async functio let response; response = await fetch(&quot;{% url 'validate' %}&quot;, { method: &quot;post&quot;, body: formdata, }) let result; result = await response.text(); const parsed = parseMultipartBody(result, &quot;{{ boundary }}&quot;); parsed.forEach(part =&gt; { if (part[&quot;headers&quot;][&quot;content-type&quot;] === &quot;text/html&quot;) { const url = URL.createObjectURL( new Blob( [part[&quot;body&quot;]], {type: &quot;text/html&quot;} ) ); window.open(url, &quot;_blank&quot;); } else if (part[&quot;headers&quot;][&quot;content-type&quot;] === &quot;application/octet-stream&quot;) { console.log(part) const url = URL.createObjectURL( new Blob( [part[&quot;body&quot;]], {type: &quot;application/octet-stream&quot;} ) ); window.location = url; } }); } catch (e) { console.error(e); } } function parseMultipartBody (body, boundary) { return body.split(`--${boundary}`).reduce((parts, part) =&gt; { if (part &amp;&amp; part !== '--') { const [ head, body ] = part.trim().split(/\r\n\r\n/g) parts.push({ body: body, headers: head.split(/\r\n/g).reduce((headers, header) =&gt; { const [ key, value ] = header.split(/:\s+/) headers[key.toLowerCase()] = value return headers }, {}) }) } return parts }, []) } </code></pre> <p>The server console output looks fine, but so far, the outputs are non-functional.</p>
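For the base64-embed idea from the update: encoding the in-memory workbook and placing it in the template context avoids both the disk write and the multipart parsing. A standard-library-only sketch, where the byte string stands in for the real xlsxwriter buffer:

```python
import base64
from io import BytesIO

def build_download_context(file_bytes, filename):
    """Base64-encode an in-memory file so the template can embed it in a
    data: URL and trigger the download client-side, with no disk write."""
    return {
        "download_name": filename,
        # Render into the template as:
        # <a id="autodl" download="{{ download_name }}"
        #    href="data:application/vnd.openxmlformats-officedocument.spreadsheetml.sheet;base64,{{ download_b64 }}">
        # and auto-click it with document.getElementById("autodl").click()
        "download_b64": base64.b64encode(file_bytes).decode("ascii"),
    }

# Stand-in for excel_file.getvalue() from the pseudocode above.
buffer = BytesIO()
buffer.write(b"fake xlsx bytes for illustration")
ctx = build_download_context(buffer.getvalue(), "fixed_submission.xlsx")
```

Base64 inflates the payload by about a third, so the current 115 kB ceiling becomes roughly 154 kB in the rendered page, which is comfortably small; if submissions grow by an order of magnitude it may be worth revisiting.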
<python><django><django-templates><stream>
2024-03-29 21:58:54
2
1,225
hepcat72
78,246,081
823,859
How to transfer all conda environments to new computer
<p>Is it possible to transfer all of my individual conda environments to a file, where each environment lists the packages, and then I can rebuild all of the environments at once on the new machine?</p> <p>There are many similar questions on here, but commands such as <code>conda list --export &gt; package-list.txt</code> just list every package in all of the environments. I could build a file environment-by-environment using <code>conda env export &gt; environment.yml</code>, but would prefer to avoid that.</p> <p>Is there a way to create a file that would look something like the below, that would then recreate each environment?</p> <pre><code>name: ckcviz channels: - defaults - conda-forge dependencies: - abseil-cpp=20230802.0=h61975a4_2 - altair=5.0.1=py311hecd8cb5_0 - anyio=4.2.0=py311hecd8cb5_0 name: env2 channels: - defaults - conda-forge dependencies: - abseil-cpp=20230802.0=h61975a4_2 - blas=1.0=openblas - bleach=4.1.0=pyhd3eb1b0_0 </code></pre>
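There is no single built-in command for this, but `conda env list --json` gives machine-readable environment paths, so a short script can concatenate every `conda env export` into one multi-document YAML file separated by `---`. A sketch (on the new machine you would still split the file back into per-environment documents, since `conda env create -f` reads one environment at a time):

```python
import json
import subprocess

def parse_env_paths(list_json):
    """Extract environment paths from `conda env list --json` output."""
    return json.loads(list_json)["envs"]

def export_all_envs(outfile="all_environments.yml"):
    listing = subprocess.run(
        ["conda", "env", "list", "--json"],
        capture_output=True, text=True, check=True,
    ).stdout
    with open(outfile, "w") as f:
        for path in parse_env_paths(listing):
            spec = subprocess.run(
                ["conda", "env", "export", "--prefix", path, "--no-builds"],
                capture_output=True, text=True, check=True,
            ).stdout
            # "---" is the YAML document separator, producing the stacked
            # layout shown above (--no-builds makes it more portable).
            f.write(spec.rstrip() + "\n---\n")
```

Calling `export_all_envs()` on the old machine produces the combined file; a mirror-image loop splitting on `---` and running `conda env create -f` per chunk rebuilds the environments.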
<python><anaconda><conda>
2024-03-29 21:52:41
1
7,979
Adam_G
78,246,079
19,090,746
A python Path.rglob pattern to match all package.json files in a directory that are not nested inside of a node_modules folder
<p>I'm working with a massive monorepo, and I'm trying to write a script that will need to grab some information from all of the monorepo's package.json files, but <em>not</em> any package.json files that are nested in any <code>node_modules</code> folder. I've tried everything apart from just filtering them with a regex after recursively going through the entire directory, including the <code>node_modules</code> folders. I'm aware that that's an option, but ideally I'd like to be able to filter those directories before the search for performance reasons. The monorepo structure looks something like:</p> <pre><code>root/ node_modules/ apps/ someApp/ node_modules/ someApp2/ node_modules/ packages/ somePackage1/ node_modules/ somePackage2/ node_modules/ somePackage3/ node_modules/ ... </code></pre> <p>Any help would be greatly appreciated! Thanks.</p>
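An `rglob` pattern cannot express "everything except under `node_modules`", but `os.walk` can prune those directories before ever descending into them, which gives the performance win. A sketch:

```python
import os
from pathlib import Path

def find_package_jsons(root):
    """Yield every package.json under root, skipping node_modules subtrees."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Mutating dirnames in place tells os.walk not to descend into
        # these directories at all, so node_modules is never scanned.
        dirnames[:] = [d for d in dirnames if d != "node_modules"]
        if "package.json" in filenames:
            yield Path(dirpath) / "package.json"
```

`list(find_package_jsons("root"))` then returns only the monorepo's own manifests, without the cost of walking the dependency trees first.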
<python><glob><pathlib>
2024-03-29 21:52:29
1
642
Andrew
78,245,999
2,621,316
java.lang.AssertionError when trying to train certain datasets with h2o
<p>I am getting an error for my desired dataset when trying to use an isolation forest method to detect anomalies. However I have another completely different dataset that it works fine for, what could cause this issue?</p> <pre><code>isolationforest Model Build progress: | (failed) | 0% Traceback (most recent call last): File &quot;h2o_test.py&quot;, line 149, in &lt;module&gt; isoforest.train(x=iso_forest.col_names[0:65], training_frame=iso_forest) File &quot;/home/ec2-user/.local/lib/python3.7/site- packages/h2o/estimators/estimator_base.py&quot;, line 107, in train self._train(parms, verbose=verbose) File &quot;/home/ec2-user/.local/lib/python3.7/site- packages/h2o/estimators/estimator_base.py&quot;, line 199, in _train job.poll(poll_updates=self._print_model_scoring_history if verbose else None) File &quot;/home/ec2-user/.local/lib/python3.7/site-packages/h2o/job.py&quot;, line 89, in poll &quot;\n{}&quot;.format(self.job_key, self.exception, self.job[&quot;stacktrace&quot;])) OSError: Job with key $03017f00000132d4ffffffff$_92ee3e892f7bc86460e80153eaec4b70 failed with an exception: java.lang.AssertionError stacktrace: java.lang.AssertionError at hex.tree.DHistogram.init(DHistogram.java:350) at hex.tree.DHistogram.init(DHistogram.java:343) at hex.tree.ScoreBuildHistogram2$ComputeHistoThread.computeChunk(ScoreBuildHistogram2.java:427) at hex.tree.ScoreBuildHistogram2$ComputeHistoThread.map(ScoreBuildHistogram2.java:408) at water.LocalMR.compute2(LocalMR.java:89) at water.LocalMR.compute2(LocalMR.java:81) at water.H2O$H2OCountedCompleter.compute(H2O.java:1704) at jsr166y.CountedCompleter.exec(CountedCompleter.java:468) at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263) at jsr166y.ForkJoinPool$WorkQueue.popAndExecAll(ForkJoinPool.java:906) at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:979) at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1479) at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104) </code></pre> <pre><code>with 
open('/home/webapp/flask-api/tmp_rows/temp_file2.csv', 'w+') as tmp_file: temp_name = &quot;/tmp_rows/temp_file2.csv&quot; tmp_file.write(text_stream.getvalue()) tmp_file.close() h2o.init() print(&quot;TEMP_nAME&quot;, temp_name) iso_forest = h2o.import_file('/home/webapp/flask-api/{0}'.format(temp_name)) seed = 12345 ntrees = 100 isoforest = h2o.estimators.H2OIsolationForestEstimator( ntrees=ntrees, seed=seed) isoforest.train(x=iso_forest.col_names[0:65], training_frame=iso_forest) predictions = isoforest.predict(iso_forest) print(predictions) h2o.cluster().shutdown() </code></pre> <p>The CSV is being created fine, so there doesn't seem to be an issue with that. What is causing this Java error? I even increased the size of my EC2 instance to have more RAM, but that didn't solve it either.</p>
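I can't tell from the stack trace alone which column breaks `DHistogram`, but this assertion is commonly reported when a column is degenerate: all missing/non-numeric, constant, or containing ±inf, which leaves the histogram with an empty value range. A hypothetical pre-flight check on the CSV (sketched with pandas, before handing the file to h2o) can narrow down the culprit:

```python
import numpy as np
import pandas as pd

def suspicious_columns(df):
    """Flag columns that commonly break histogram building in tree models:
    all-missing, constant, or containing +/-inf values."""
    report = {}
    for col in df.columns:
        s = pd.to_numeric(df[col], errors="coerce")
        if s.isna().all():
            report[col] = "all missing / non-numeric"
        elif np.isinf(s.dropna()).any():
            report[col] = "contains +/-inf"
        elif s.dropna().nunique() <= 1:
            report[col] = "constant"
    return report
```

Running `suspicious_columns(pd.read_csv(path))` on the failing CSV, and dropping or cleaning whatever it flags before `isoforest.train(...)`, is a cheap way to test this theory against your working dataset.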
<python><java><h2o>
2024-03-29 21:21:16
1
2,981
Amon
78,245,954
1,210,296
How do you update Celery Task State/Status to see it in Flower?
<p>When using a Python Celery Task callback to process tasks on a Redis Queue, how do you dynamically change state/status to show interim updates within Flower?</p> <pre><code>@shared_task(queue='my_queue', bind=True) def process_event( self, payload ): self.update_state( state=&quot;PROGRESS&quot;, meta={ 'current': 1, 'max': 10 } ) time.sleep( 5 ) self.update_state( state=&quot;SUCCESS&quot; ) time.sleep( 5 ) return True </code></pre> <p>For some reason, Flower only displays &quot;Started&quot; and &quot;Success&quot; or &quot;Failure&quot;.</p> <p>Is there something I'm doing wrong?</p> <p>Do I need to use a separate Task for updating the Status and call it asynchronously within the Task?</p> <p>I tried various combinations including changing the state within the debug console of Visual Studio Code.</p> <p>I know the state and status updated because I can fetch them back out, but Flower won't show it.</p> <p>Any ideas? I'm going to try using a separate async Task next just to see if it will work.</p>
<python><asynchronous><celery><task><apply-async>
2024-03-29 21:06:10
0
617
justdan23
78,245,928
11,930,602
How would I modify an outside variable within a list comprehension?
<p>Python code:</p> <pre class="lang-py prettyprint-override"><code>i: int = 1 table: list = [[i, bit, None, None] for bit in h] </code></pre> <p>Expected behaviour: <code>i</code> needs to be incremented by 1 per iteration.</p> <p>Pseudocode:</p> <pre class="lang-py prettyprint-override"><code>i: int = 1 table: list = [[i++, bit, None, None] for bit in h] </code></pre>
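A comprehension can't mutate an outside variable, and `i++` isn't Python syntax in any case, but `enumerate` supplies the running index directly:

```python
h = [1, 0, 1, 1]  # example input bits (stand-in for the real h)

# enumerate(start=1) produces the counter alongside each element, so no
# outside variable has to be incremented inside the comprehension.
table = [[i, bit, None, None] for i, bit in enumerate(h, start=1)]
print(table)
# -> [[1, 1, None, None], [2, 0, None, None], [3, 1, None, None], [4, 1, None, None]]
```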
<python><list-comprehension><variable-assignment>
2024-03-29 20:58:57
3
2,322
kesarling
78,245,818
825,227
Pandas pivot_table where one values column is comprised of lists of values
<p>I have a Pandas dataframe that I'm looking to pivot where one of the included <code>values</code> entries is comprised of lists of values.</p> <p><strong>df</strong></p> <pre><code> l1_ratio mse_path coef intercept 0 1.0 581499561.8597653 [253.03936312711443, 3649.109906786345, 14798.282876868554, 4810.759749988098, 8878.435479294187, 3276.7488397077536, 5171.81213816377, 9059.114207547327, 11.512129682487922, -0.0, 8710.623469210213, 0.0, 0.0, -598.4862056332136, 28425.47104579133, 0.0, -0.0, 0.0, 0.0, -2198.436487334907, -2418.25205363894, 0.0, 1692.592401606466, -0.0, 390.8734205415331, 4623.261872236998, 699.2497258078156, 62.239979130074225, 0.0, -0.0, 1399.6393449513669, 912.4450426439255, 0.0, -0.0, -0.0, -0.0, -209.73967722935762, 0.0, -684.1610010188986, 162.79546249454427, 0.0, 0.0, 0.0, -0.0, 0.0, 0.0, -0.0, -0.0, 0.0, -1070.3678099420492, -0.0, 0.0, 0.0, -0.0, 0.0, -1073.2954543338647, 0.0, 0.0, -0.0, -0.0, 951.6951496704327, -0.0, 0.0, -0.0, -0.0, 580.5064483922187, -0.0, -0.0, 0.0, 0.0, -172.30638224708616, 0.0, -0.0, 176.66963320427527, -0.0, 0.0, 2101.100923369119, -0.0, -0.0, 0.0, 1899.3349417983618, -0.0, -0.0, -0.0, -0.0, -0.0, 0.0, -444.2106622645467, 4079.307030203663, 5932.224851788115, -0.0, -0.0, 0.0, -0.0, 1271.547535404467, 4589.157779146342, -0.0, -0.0, -0.0, 1546.2122719295912, 43.68350839455409, 799.1353377003892, -0.0, 0.0, -0.0, -0.0, -0.0, -0.0, 1997.1599067500804, 0.0, -0.0, -0.0, 0.0, -0.0, -139.11014781577043, -2134.548060887319, -2725.175680089511, 0.0, 0.0, -0.0, -0.0, 0.0, 0.0, -0.0, -0.0, 0.0, 762.0978139944438, -0.0, -0.0, 0.0, 0.0, -0.0, -0.0, -0.0, 2641.475746141273, 0.0, -0.0, 1665.684460419034, 0.0, 757.0523501978001, -0.0, -0.0, 0.0, -0.0, 0.0, -0.0, 0.0, 0.0, -0.0, -0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -0.0, 0.0, 0.0, -0.0, -370.42602621137394, 0.0, 0.0, 0.0, 0.0, -0.0, -0.0, -1944.4825663547163, -591.9747091248928, 0.0, 0.0, -0.0, -0.0, -1838.374401828182, -0.0, 0.0, -0.0, 0.0, -517.5891475030328, 1476.6782408902266, 0.0, 
0.0, -0.0, -0.0, -5174.996138309539, 0.0, -0.0, -3801.7738352516685, -0.0, 0.0, 0.0, -0.0, -0.0, 3963.0786330603796, -266.91651148289, -1241.125779832931, 0.0, -0.0, 577.5306432786302, -0.0, 0.0, -0.0, 0.0, -0.0, 747.9365019997231, -0.0, 0.0, -0.0, -0.0, 0.0, -0.0, -0.0, -182.83255241378126, 0.0, -0.0, -0.0, -0.0, -796.6213861438707, 0.0, -0.0, -0.0, -0.0, -0.0, -162.4390369338168, -4651.940469706817, 0.0, -4659.32104928002, -0.0, -0.0, 0.0, -0.0, -0.0, -552.1105441909145, 2193.112874094114, -0.0, 370.0503349786939, -0.0, -0.0, -0.0, -0.0, -0.0, 384.7492825385162, -0.0, -0.0, 0.0, 0.0, -819.7722957127545, -0.0, 0.0, 72.38445704768583, 0.0, -0.0, -0.0, -0.0, 0.0, 0.0, -0.0, -0.0, -0.0, 0.0, -0.0, 154.7098997509337, -0.0, -0.0, 0.0, 5001.135030874798, 0.0, -0.0, -0.0, 0.0, -0.0, -0.0, 0.0, 0.0] 180401.21694528882 0 0.99 2214034038.866108 [1288.5108755365814, 1141.4002144343365, 2896.4098231445964, 0.0, 1437.726183039458, 1569.5515270912845, 1936.6841301002705, 1823.1086710815543, 0.0, 423.2345294724227, 2480.5476534169384, 2546.30937757155, 1070.7509478625852, -0.0, 2916.119955036174, 980.4527182573493, -37.378802962765505, 1735.7361484508306, 938.1229701846534, 496.3570106270763, -295.7527553007795, 1920.1478016912283, 1684.65644459793, 483.04213977421233, 2049.0920392831786, 2168.5834067751116, 1076.238008628459, 1058.7703167992531, -106.97246191967314, 18.28220317948517, 471.1157781234777, 305.10803219890505, -0.0, 25.894030212031456, -57.28144005355844, -0.0, -395.7046573315114, -175.91804876091754, -190.40676758531208, 91.17482633148427, -577.5845817859226, -0.0, -47.09497144947905, -238.17589197161303, 1026.282311618595, 0.0, 96.01760755796542, -134.44726519764308, -96.23577659089972, -258.99863026275324, -254.82049933253407, 138.28152785044944, -15.594574791548254, -53.419363210838, 630.7985756100181, -720.4901156275829, 138.85484964691793, 249.4203033082621, 20.713356956941293, -844.2025321573319, 722.4689151000729, 0.0, -310.2623401629524, -0.0, 
-32.915850514256306, 471.6124459129323, -62.11862211556542, 0.0, -132.17418703748717, 251.0320144432172, 0.0, -0.0, -194.78961750141593, -189.65400031255203, 20.85964176300863, 0.0, 349.09806278077383, -435.11189433379064, -142.87977206520654, 0.0, 182.79611576906126, -356.63579947995066, -0.0, -378.27264014515623, -128.9808913603312, -586.6177554814338, -44.200078058630645, -0.0, 1170.4500814296446, 1673.9783693722238, -448.6418089397299, -89.06494894229179, -331.7417791922637, -18.536617814051603, 326.5829823034383, 1060.562730511171, 255.0468131982831, 190.05497434530736, -299.47501805430954, 244.60190846218444, 284.5540364646699, 285.8681195735776, -150.32896927509282, -0.0, -4.972625039681385, 0.0, -57.65656657850986, -59.22279196361179, 556.4109422397493, 263.2936227316695, -0.0, -0.0, -0.0, -183.4560408780339, -258.9898426045801, -353.33368501106855, -34.100549991687394, -63.652247300182076, -94.30027635409674, 41.26781180294321, 0.0, 601.5146707800469, -208.4615238479576, -173.82150371664653, -992.1349156402378, -5.202445989743139, 1065.0290066847888, -0.0, 0.0, 0.0, 0.0, -0.0, 0.0, 11.183040696304912, 623.7881665925077, -0.0, -0.0, 169.81050210880974, -8.99546735793611, 496.59978201632106, -346.0610215064944, 0.0, -212.21802108495342, -107.52597574093022, 67.2604487664647, 26.18127976869913, -0.1859106121510271, 689.7833802221692, -295.7595004044681, -31.828934672723047, -0.0, -13.431770612003968, 47.34684657851442, -0.0, 456.7752948253894, -295.7097467369197, 217.75663833813476, -167.0240525154184, 13.119028988342224, -190.38680651312075, 67.26089494444646, -0.0, -0.0, 684.4056604135054, -256.26977952572673, -52.023214009686555, 547.1319773359934, -55.979446689306364, -1075.392831544414, 991.7328009993199, -289.4642763067799, 1042.614795123592, -1818.0125822592026, -316.0191242640419, -0.0, -67.50860076854983, 110.97572842589076, -1042.7942642855426, 1449.6554692979535, -202.8457512185241, 0.0, -0.0, -355.6609329403728, 55.99441520091489, 
-317.94628256217095, -0.0, -1235.8949254076197, -268.38614492459465, 223.77798861865043, -317.94218063833654, -19.485732547608414, 143.59190337259554, 1432.4716556753574, 0.0, -981.5310761970001, -316.65366054598894, -265.5204624571671, 1440.9518123212615, -158.70992249129722, -317.93827165503893, -356.0739471006797, -372.2717195206707, -32.76230852993284, 136.57315701079378, -35.57211729261106, -307.05951186249825, -67.39523106286269, 207.5873505271385, 70.99392538698746, -0.0, -86.92648016855802, -0.0, -74.2555658664915, -311.53928831040525, -326.07741269037325, -57.42460332942894, -991.8374742057182, 626.0598772128691, -220.31626712062035, -57.58038162029573, -0.0, 517.846849748521, -310.30485997062044, 434.4311428989644, -0.0, -1623.713494776604, -147.24790042773182, -75.08697695680965, -93.20446491616461, -0.0, -118.05622510867376, -71.83673868144035, 300.0590500084468, -87.82710434512016, 1283.226314913383, -1551.4156046961957, -134.62472833566693, 229.74025226894264, 754.9018930958974, -118.97884961404972, 696.281628642635, -173.77757446902064, -899.4327344671217, -403.98364401941393, -400.8475048015838, 56.13121758676302, -1137.1913929011605, -309.31673166928437, 297.977600702344, -400.8507800725531, -38.771944949119685, 410.98676359917056, -327.6428271259632, 0.0, -400.85996582844353, -127.83828668012485, 569.3934393329992, -96.82731711388608, 604.4531534927594, -0.0, 36.379889517416466, -135.49296057984154, -0.0, -16.196866480782283, 1182.6005299476797, -0.07870072095822686, -0.0, -603.0486105754311, -44.98469441598015, -0.0, -90.17559291390286, -379.08488477737984, 1166.8318977943532] 180401.21694528876 0 0.95 4477216372.464831 [451.90379889404005, 364.42705000899014, 1000.411275893526, -70.67834247340133, 632.9642092794054, 621.5537787066967, 649.9067608748471, 580.0671499591238, 0.0, 202.34565811997464, 838.5700412425219, 835.6333063594087, 348.49657416657107, -15.435525288027897, 944.254509860495, 338.0328035262107, -25.01372649044941, 
661.1395852808022, 345.20131644220334, 187.72254522333623, -119.75812611787666, 643.8362889301778, 596.5734316125265, 265.95524492651293, 781.0370773108815, 790.8493276183372, 391.4790922420105, 392.28276353035255, -107.65109614935568, 20.49781579702922, 140.84646319735447, 89.21383666299006, -5.709537584686129, 29.585311060864015, -29.73170383999819, -0.0, -137.74419480613744, -74.7864616578633, -94.4276526086043, 53.42182824135511, -260.02253156825583, -3.7458522151629796, -57.312008847900074, -170.70164218535564, 423.50265683498435, -29.819573656467355, 17.959719899823106, -35.199059516069376, -47.36055181108414, -101.65546133033381, -117.24242369513455, 91.12970247460754, -22.045447863665522, -43.492085002480614, 275.747859012535, -323.21171444980564, 61.4614740346074, 104.45059077873566, 30.536427200632605, -347.46829255393794, 241.8145406945114, 9.571595012513287, -94.85279189868415, -0.0, -21.727340289738624, 167.20848810503156, -9.379669243490675, 2.293390540409582, -69.4319705489917, 77.0817901696514, 9.583115375013227, -16.99796142873087, -95.64615672686531, -134.88611098355096, 26.5470573144565, 58.47518976754938, 89.62417000334226, -184.33126127862616, 0.0, 0.0, 36.92843080976247, -182.7076651682731, -0.0, -142.17260147053102, -45.142641452555985, -221.00966926721514, -37.44871372877418, 0.0, 372.0792644429926, 548.3617791060332, -224.6789632670541, -60.08715245526823, -136.25308295377084, -0.0, 160.0377849358875, 324.8067483079273, 134.1284353205398, 78.3926784205144, -121.72994145046935, 102.02012998308071, 91.7780087522938, 100.93612781401833, -50.56840275081557, -0.0, -13.772377823492741, 2.3262737137240297, -40.28721848451935, -10.181637858708378, 162.0048522115716, 85.16316106820207, 0.0, -0.0, -14.594827968941022, -95.23833749612112, -101.6557586637468, -128.24070531800922, 2.3865912247025003, -62.22954884685509, -27.265771781291203, 13.471630973205269, 0.0, 248.21296970579854, -86.08221556298983, -47.016631659131995, -331.98477549895966, 
-28.336512986318287, 357.0072423691016, -0.0, 0.0, 3.3844109010754972, -0.0, -0.0, 0.0, 31.112787210504425, 163.34403318785365, -13.857192293882765, -0.3675636122583585, 21.627418219161214, -17.757252967995132, 153.50853753056447, -120.74206393872593, 5.768971646215191, -142.92300592382966, -38.48261718876772, 14.772021549464295, 17.395339523870867, -37.01148362726991, 363.34054735331756, -172.427907165573, -33.86090644617767, -11.588867834202748, -31.26609414221257, 10.54808040632687, -10.47614548807429, 143.0533803172757, -107.78152556098911, 71.86045484035397, -128.65693892885932, 22.60726266074622, -65.35663811891264, 14.772008169480653, -2.1381671106279065, -33.352671214565795, 359.68320546519556, -164.17572694325133, -44.20005770639454, 276.27768386643555, -16.83104754120719, -465.3536491105819, 370.7140039279059, -130.0994336258551, 502.9686606142887, -707.0334321310855, -139.74774120604772, -38.57210890679462, -32.679642302123014, 96.92020965703168, -410.92209227307313, 604.504687523593, -116.83771821052989, -0.0, -0.0, -167.49274277078953, 198.1841355349268, -163.43464615962813, -18.324381770410486, -515.845277506657, -133.82201511612627, 92.67039438417184, -163.43478011758893, -31.476151953025166, 102.38540830430046, 461.0645771866528, 17.648893806112273, -348.92765407219025, -160.14517544265917, -124.42315535269812, 544.669974828584, -85.44328433953059, -163.43499382445293, -166.9397748848065, -123.17986490991824, -24.884048060379097, 39.5717252934144, -28.936549430635935, -158.75387571515523, -36.550677313764304, 116.12393272375051, 64.17472959500047, -1.9135356231550804, -54.780463237937056, -0.6133711624965174, -49.35800394033436, -138.0279627142042, -140.3987867678661, -37.11750662219265, -394.2593248954027, 290.9257534222436, -123.85884235347251, -51.09647438406486, -13.544957174666138, 259.75807112335156, -152.5424268343465, 306.5985535957529, -1.0878182287872915, -625.5063770582592, -62.187170999213194, -44.75252670857082, -59.22084584877021, 
-18.939160152721886, -46.535754875386935, -21.78135924967242, 120.43404404393506, -24.616794527524767, 463.80909571266795, -588.4261765857282, -56.29597855184704, 146.729763298118, 376.64630969071925, -38.5573333649, 268.04274745482746, -70.55005887711458, -410.8781686014436, -233.3242595314783, -232.6920626430331, 139.1631402151629, -483.85895403594765, -173.3033397283782, 90.23259483058987, -232.69204558485438, -37.97159376837881, 261.53843032432206, -157.4693011412452, 1.2179972941224972, -232.69196112437277, -68.80194159149096, 295.1490306038171, -74.12074492974521, 293.5207681963812, -0.0, 14.676896684265426, -64.76785846165711, -10.632647213938986, -25.640850987244516, 441.7478730306915, -23.518055099186338, -0.0, -245.27582688053062, -47.94734530008497, -3.606409330493235, -31.07248101988179, -169.87105595144465, 437.08863958436984] 180401.21694528876 0 0.9 5295530119.621321 [251.97593564024584, 199.6252818805267, 559.4157970065792, -50.2762286812302, 369.3987053762267, 356.54031154076546, 360.38951544011144, 318.91545548846716, 0.0, 118.61147271515632, 466.3076634751436, 460.9401784466648, 192.32186554762092, -12.923216647133366, 519.7287513461932, 188.34702432552217, -15.441650383291199, 374.8995459455578, 195.6475337840008, 105.03488323836403, -70.76552829179302, 355.5664401118226, 332.96647036861344, 160.2062316550571, 443.71242163288287, 446.0840468860182, 220.40033093081306, 221.4348870270198, -68.56242469622417, 13.042105928681904, 76.17869818705489, 48.122477975635725, -5.2006843500336135, 18.434156745036734, -17.689726843055425, -0.0, -76.0629392470267, -42.977960480563205, -57.194265849729874, 33.07807632159702, -151.71882426687026, -4.363050458394881, -36.360820583438716, -105.05970152372988, 244.1870859239563, -24.59362540244982, 7.931297065737346, -18.364206952553698, -27.619171604684887, -59.52129426024859, -69.28245719128246, 56.73227471120291, -14.466181792591545, -27.29198046008644, 160.28067077967034, -188.93179365526964, 36.35542577240674, 
60.62404277278584, 19.344131455046167, -199.7110229366974, 133.44124614918306, 6.104313605316847, -50.31907322398952, -0.0, -13.683349449930509, 94.16753947343864, -3.914661837804196, 3.7982681154892894, -40.92530829263081, 41.43236940829952, 6.525998928882921, -11.089112859573875, -55.659136501601985, -82.62862319263004, 16.30765906059863, 43.18385464742543, 46.42646378176672, -107.0745916322784, 3.7475566477347733, 0.0, 18.75332759099362, -108.84994516198435, -0.0, -80.4498600671448, -25.21035499384165, -125.13369434158435, -22.60145914871905, 0.6655567576831709, 204.6909069593819, 302.7953402949714, -133.77317695119248, -37.40097810431059, -77.94419734205874, 0.0, 95.55214491873977, 177.26194057295228, 79.16444052615762, 44.89782919194303, -70.57954795891733, 59.9066254845167, 50.44266404536765, 56.673619743147455, -28.10333292161411, -0.0, -9.093471914862008, 3.351470919938149, -24.940102833741957, -4.168284141514984, 87.23899723186607, 46.9199770701425, 0.0, -0.0, -10.446901497122793, -57.938063199764045, -59.52129707146652, -71.54664424671307, 9.774056674076164, -39.218334251180956, -14.467755172783718, 6.870902970549463, 0.0, 143.16691573457, -49.35295285232284, -24.744152655102297, -182.3683596724859, -19.08020301973659, 196.61567671709685, -0.0, 0.0, 2.5101603694472496, -0.0, -0.0, 0.0, 19.914000557977452, 86.29799452961046, -10.7735280294561, -3.080372367329778, 8.506164935337873, -11.741792090087765, 83.79133488869304, -66.8386639577978, 4.881679096916573, -87.30774034750957, -21.900457754537193, 7.395117682725237, 10.659533154277838, -25.137701510972327, 216.9478904524629, -104.74181555433421, -21.72488751060374, -9.037135005768699, -19.803875445642742, 5.287505865926, -8.833347141843182, 78.26693364877822, -60.095752870295904, 40.0099510253525, -79.4388893502883, 14.468818577102, -36.917842085927965, 7.395117454805569, -4.079165803408042, -23.178578195536105, 214.70844510584757, -100.46504892251885, -27.920486562921113, 162.5792694231069, 
-9.615195887086825, -268.3767953920419, 209.2764735946544, -76.92514347941093, 295.7369916691882, -403.1577798690914, -82.66302398394107, -27.112161648070977, -19.795104675299704, 61.130417816399415, -234.33994553963043, 348.87676164897545, -70.86957238197064, -2.270490911155394, -0.0, -98.72915770165692, 130.09785672988977, -97.75526773601291, -12.876219696788745, -297.85011958370563, -80.15061611644273, 53.25499628572116, -97.75526961339325, -20.704658886344514, 63.97074507487598, 253.32495549034326, 12.92807690136254, -195.1267191359952, -95.61666708873803, -72.82314796580661, 309.19154230685, -51.49950243694954, -97.7552742667307, -97.865512739687, -68.15529672376819, -15.388701694861002, 21.73858055535895, -18.00611251114517, -95.05972006233033, -22.213966476944826, 69.92075510560996, 41.20989103619624, -5.482872468431436, -34.04504718862672, -2.9212743556224776, -30.467420875737112, -81.37749137271904, -81.32079217675143, -23.047181857203558, -225.76496142888334, 171.76954947247322, -74.96314767246852, -32.22175342944953, -9.693536935423761, 154.2979768894359, -90.92335953187093, 187.50068831538525, -2.779782211313613, -356.20424032184957, -36.49705552850452, -27.629650311168337, -36.67877125976382, -13.71207283116544, -27.2226089246089, -12.588386918950114, 70.78067443101921, -13.295239210573113, 259.26112643749656, -332.20684523757933, -32.540184905026024, 88.60229627789839, 222.95708452889198, -21.785344895666807, 152.61655029062462, -40.82150472184441, -239.56715889111723, -141.35785403930564, -141.06369785545422, 90.62784751027456, -279.3414798697823, -104.28788574685343, 48.98274746883981, -141.06369882197117, -24.250854488884155, 159.88543062595173, -93.28019886660428, 1.2634227757717067, -141.0636996091875, -41.78406728973744, 176.68759096952232, -46.20492187953321, 174.25693248558335, -1.151617971795489, 8.832671743886184, -38.4798038844731, -7.5347037525017075, -16.508423814338375, 250.14846116755533, -15.522808743135782, -0.0, -140.4430254127462, 
-30.68364681227584, -4.300539489420825, -17.623436394444067, -97.99582503102106, 247.6211752017288] 180401.21694528876 0 0.8 5841959239.549489 [134.19775500504332, 105.34983711411064, 298.24859199114815, -30.138427337307657, 201.60903772081693, 192.8403492498848, 191.3221917915364, 168.58647404672152, 0.0, 65.03923595485183, 247.87382726102447, 243.86588885038506, 101.9638555707181, -8.502173275460372, 274.59174174969525, 100.41130490564063, -8.972756728045846, 201.4025173839802, 105.2885201731031, 56.157993490476876, -39.23494019052579, 188.22651980055406, 177.33129395643803, 89.04381579953215, 238.60904466578492, 238.9830138177559, 118.09868497505296, 118.80885632426357, -39.32110446395187, 7.726215082099112, 40.12385203377445, 25.430517019023142, -3.6841027773092754, 10.665100759726728, -10.048238106693484, -0.0, -40.460800263361456, -23.51816176152124, -32.14066467931564, 18.91773400164531, -82.94841114985522, -3.3041604166342533, -20.928472740317503, -59.07334561328584, 132.4939270184611, -15.75707274031772, 3.896385329455582, -9.716089422379476, -15.333091039371626, -32.88653486238259, -38.34076924631165, 32.28862757250509, -8.666941224225281, -15.73540394502983, 87.43148045021756, -103.32641976639566, 20.29360366110752, 33.2657474193707, 11.262402730789209, -108.21612228476006, 70.81337597163015, 3.7531061574300617, -26.237690820949528, -0.2697564963305331, -8.078238646312515, 50.67491101238815, -2.0111626796037485, 3.0758161954254377, -22.69892082418587, 21.86081576003769, 4.094441220584245, -6.660900126134252, -30.53469533664621, -46.40505042955501, 9.387860443787114, 26.419131865462713, 23.938064412791075, -58.53015868032509, 5.553727635395261, 0.0, 9.80925256951117, -60.25617144880243, -0.06143267326634735, -43.411406075779794, -13.726965954623113, -67.38703769628576, -12.78527160664753, 2.042540152230447, 108.35603711097683, 160.43417521935254, -73.97816087019804, -21.387334711490997, -42.306718719776896, 0.0, 53.03670984356364, 93.46630849324384, 
43.64228645807663, 24.524233224638824, -38.67674695661177, 33.118019647038636, 26.915673803136457, 30.56902027473035, -15.233440429924899, -0.0, -5.571065037550382, 2.6965056261490057, -14.319418973555011, -2.0710024782947922, 45.798681292065346, 25.100687650890613, 0.0, -0.11191654800259945, -6.5491387325967, -32.61833326717106, -32.88654009885019, -38.31289183419553, 8.003771042264502, -22.49304691936769, -7.7731011699443195, 3.7447892734588644, -0.0, 77.82543173078268, -26.93480222640096, -13.044247623034982, -96.36986210283152, -11.450898890368734, 104.05031538825274, -0.2669329668491528, 0.0, 1.8464842243064044, -0.0, -0.0, 0.0, 11.62833719934292, 44.84330874270659, -6.969210292545817, -2.8118802480599427, 3.8218418670018037, -7.115802503098517, 44.35636674425629, -35.66830870199088, 3.4032149454743963, -48.950531906210905, -12.10875801953953, 4.007074209990364, 6.264555752916631, -15.033111798694927, 119.88110574385743, -58.57665040848605, -12.72115781623657, -5.9009945791586595, -11.505061298086492, 2.9572106448796456, -5.902916082826569, 41.50636596270579, -32.22659816519079, 21.591293029463962, -44.78850148935345, 8.549757870269053, -20.117760730622535, 4.007074135550812, -3.3552008417876795, -14.025671169232977, 118.63142694168714, -56.38311690161957, -16.14386102463473, 89.19294076886472, -5.524705312958239, -145.52519097514272, 112.29825332880539, -42.56517219396247, 161.99448373811038, -217.2621026645042, -45.733168414649185, -16.384642421033032, -11.334094739660085, 34.93986510765465, -126.39243041285101, 189.26518494303338, -39.71582608396147, -2.7597952883150323, -0.0, -54.392476161076026, 75.22549449405486, -54.28232783984403, -7.946703052734747, -161.67333258826844, -44.61513020387595, 29.083693384473023, -54.28232917836108, -12.276092489131623, 36.41993520456413, 133.88088376726853, 8.059512449672276, -104.18704980453235, -53.0565539660227, -40.03215631863106, 166.32561568174953, -28.853465964612734, -54.28232964430377, -53.74038773338643, 
-36.4181071190953, -8.940562751124657, 11.810506240363761, -10.436473832248012, -52.82504560638791, -12.686385576866389, 39.02132965234936, 23.900328776739634, -4.585345015817707, -19.488407476710762, -2.6613619170881613, -17.406383419063502, -44.9431652590183, -44.438193696785, -13.284453267225942, -122.08188428186402, 94.5230364255816, -41.940981173554626, -18.560612439636746, -6.116100398928026, 85.1524522135842, -50.4295461116148, 104.81099240919306, -2.439798555741905, -191.8657172644588, -20.290996376781393, -15.827868248234378, -20.933517522276524, -8.581907382715155, -15.202106452707177, -7.184811934679164, 39.114867570845554, -7.265585889042667, 138.2340236557468, -178.019947540464, -17.956043705531084, 49.369062517296946, 122.65363852169045, -12.015415966316262, 82.3708389174356, -22.472359255767675, -130.70422060550646, -78.82911202415569, -78.69025764659365, 52.315527006398625, -151.5841124094573, -58.02706377700307, 25.96938993790877, -78.69025660689593, -14.134550621522441, 89.49667043487959, -51.550337965745236, 1.1556466408249366, -78.69025500136955, -23.575660874248175, 97.88041546244763, -26.339403440349276, 96.1418968789176, -1.3786177067024226, 5.2285238492403545, -21.491285539529443, -4.8090883369898325, -9.741864452334086, 134.42517794466278, -9.266508289223758, -0.07170959802954245, -76.03373457981506, -17.81138263975082, -3.3169984671064126, -9.794420907271167, -53.3139007143992, 133.10136874421642] 180401.21694528876 0 0.75 5965993746.00419 [108.89807921531231, 85.35007603434583, 241.95635971015204, -25.102755461228387, 164.3810762311068, 156.9464151368808, 155.10995504761055, 136.57267168112935, 0.0, 53.149165502255364, 200.98208447366665, 197.53479886811024, 82.69263176456117, -7.278212551238809, 222.3457714967922, 81.52957329213805, -7.509951789354052, 163.67551318638576, 85.65287466627308, 45.66623047524298, -32.18639936982642, 152.5041502529267, 143.86515479390638, 72.9287417734735, 193.93608666023908, 194.09053875018455, 
95.97188877338867, 96.57346511370555, -32.46336025800864, 6.504501293891862, 32.561920715609624, 20.69246933018626, -3.2506721585682934, 8.896705167660143, -8.360965351205332, -0.0, -32.89712556187716, -19.2797293191564, -26.454728353829495, 15.668156493060867, -67.71704309842804, -2.9530892198040757, -17.338558723663777, -48.52647502388796, 107.93464173090452, -13.333809609205765, 3.2107316878723995, -7.975197258782164, -12.643130882566942, -26.97671027885705, -31.43829028515265, 26.639916226902415, -7.299127514667054, -13.06919288647227, 71.33820528910786, -84.33276554936837, 16.720707690673724, 27.24397737642954, 9.40018721333397, -88.15130880749072, 57.471900380108224, 3.2337929055573897, -21.279855312930145, -0.4019558828880392, -6.793079269139279, 41.28085429307551, -1.7269292587565714, 2.7802745341514483, -18.661764283897156, 17.794790224900506, 3.5299525096156694, -5.634692011283101, -25.010035336395713, -38.13298731250063, 7.835304503367987, 22.105926456313544, 19.381189755554654, -47.81530816324119, 5.211095321545491, 0.0, 8.035445113996595, -49.34583196978446, -0.27419413635586143, -35.39657617255039, -11.288030620082937, -54.86443378083852, -10.596651101070757, 2.047875244904435, 87.844257246688, 130.0336961031593, -60.54761354874705, -17.698409187474333, -34.53949983528737, 0.0, 43.472230651026344, 75.72618376755756, 35.73621992189584, 20.093348957554763, -31.65058387372062, 27.16654833221309, 21.938100114456255, 24.957000720003517, -12.503958242240122, -0.0, -4.7485847109771075, 2.4485206122154985, -11.89374854230436, -1.7617042434551498, 37.126864710659355, 20.477405727252886, 0.0, -0.307761262827194, -5.584602957999561, -26.855484313128777, -26.976712261497354, -31.200729358599787, 7.070193189029051, -18.614949135873932, -6.421239653919545, 3.1567467761799644, -0.0, 63.46844765982592, -22.053554626529284, -10.66142799555631, -78.10193432234227, -9.611593225270221, 84.34537536055214, -0.45877112555776056, 0.0, 1.6926698314370845, -0.0, 
-0.18126174023014224, 0.0, 9.706753785380116, 36.278461500546406, -5.968759991557666, -2.5870602735354167, 3.089532588224244, -6.024359390116996, 36.02475830024542, -29.034249585838594, 3.003625294981295, -40.20565523126456, -10.001829737079703, 3.3694900623145942, 5.286651846542852, -12.576044931324988, 98.02939781634083, -48.0720104488355, -10.617526584016257, -5.077970725961848, -9.59523798611822, 2.5304051325151096, -5.097454407140856, 33.72963376289568, -26.269314290078142, 17.662106071207386, -36.836090849646105, 7.182185581408473, -16.497714216216277, 3.3694900254395144, -3.0301322395152877, -11.76609044749702, 97.00664052809199, -46.306850584695475, -13.413303502641652, 72.8533516778684, -4.657947244643848, -118.51797943626362, 91.28877243214096, -34.8904797972877, 132.19282944538963, -176.67236209572292, -37.47855125154955, -13.720761353565766, -9.433288881699259, 28.839740376000034, -102.84040675124183, 154.13042445061765, -32.64235405362943, -2.6091514379590164, -0.0, -44.51205432875111, 62.09570487347614, -44.493782932151795, -6.735506124016224, -131.69165010295418, -36.60825856151467, 23.80807110442326, -44.4937834842551, -10.273749848341025, 30.036388610820918, 108.47092087883794, 6.838255208154944, -84.61572239644775, -43.48544924966698, -32.77280672514327, 135.22980860067304, -23.742821220212363, -44.49378388085102, -43.94883163830121, -29.654851329673587, -7.48253225597478, 9.725196343438919, -8.712389455798638, -43.308763162124215, -10.539549657368095, 32.045312766799036, 19.82004021799844, -4.111174240578379, -16.140185327817072, -2.4553707265204796, -14.421353040135779, -36.81935839545978, -36.32483510050797, -11.050759126937297, -99.39178063080855, 77.25787517632916, -34.45353511602918, -15.393597356348717, -5.22932614010588, 69.64620946940286, -41.3342532064658, 85.91130998719821, -2.2492689071253027, -156.0180583174554, -16.705356170836435, -13.130910608125577, -17.31884527536344, -7.285594539823926, -12.554767307971803, -6.01791338105396, 
32.06588048964975, -6.030850817862119, 112.19463850104479, -144.60673078963066, -14.77165577086809, 40.49792986757101, 100.20443417830002, -9.921815435178818, 67.0706966745595, -18.45203821981898, -106.5933597364891, -64.61771412458596, -64.50817595934366, 43.20333746476734, -123.46979752965439, -47.57217970875114, 21.142630169977636, -64.50817567340636, -11.775049213689378, 73.40067064894541, -42.218849993423866, 1.1269790287691057, -64.50817521563422, -19.44769684524522, 80.10348025749806, -21.756219215194413, 78.6192017751207, -1.3566607699571704, 4.4378830059620835, -17.701926529608187, -4.143008665145555, -8.166912877167615, 109.28961644619227, -7.789260577493193, -0.28011246852619986, -61.956147455473136, -14.798250988865766, -2.9741770413368154, -8.119540021849149, -43.51561003496967, 108.22007467433805] 180401.21694528876 </code></pre> <p>...</p> <pre><code>df.dtypes Out[178]: alpha float64 l1_ratio float64 mse_path float64 coef object intercept float64 dtype: object </code></pre> <p>This throws an error:</p> <pre><code>df.pivot(columns = 'alpha', values = ['mse_path','coef']) </code></pre> <p>..</p> <pre><code>ValueError: Index contains duplicate entries, cannot reshape </code></pre> <p>This returns a pivoted table, but omits the <code>coef</code> variable consisting of the lists of values.</p> <pre><code>df.pivot_table(columns = 'alpha', values = ['mse_path','coef']) Out[179]: alpha 0.1 1.0 ... 250.0 1000.0 mse_path 5.335019e+08 6.061998e+08 ... 3.950346e+09 4.962799e+09 [1 rows x 9 columns] </code></pre> <p>How can I achieve this?</p>
<python><pandas><pivot><pivot-table>
2024-03-29 20:24:12
1
1,702
Chris
78,245,627
428,862
How to rework rolling sum using Numpy.strides?
<p>I have this code that works. I was wondering how to implement it using np.lib.stride_tricks.as_strided, or otherwise avoid the explicit loop.</p> <pre><code>import yfinance as yf import pandas as pd import numpy as np # Fetch Apple stock data apple_data = yf.download('AAPL', start='2024-01-01', end='2024-03-31') # Extract volume data apple_volume = apple_data['Volume'] # Resample to ensure every date is included apple_volume = apple_volume.resample('D').ffill() # Function to calculate rolling sum with reset using NumPy def rolling_sum_with_reset(series, window_size): rolling_sums = np.zeros(len(series)) current_sum = 0 for i, value in enumerate(series): if i % window_size == 0: current_sum = 0 current_sum += value rolling_sums[i] = current_sum return rolling_sums rolling_3_day_volume = rolling_sum_with_reset(apple_volume, 3) </code></pre>
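A loop-free formulation is possible without `as_strided` at all (a sketch; the `_vec` function name is mine): because the running sum resets at every multiple of `window_size`, it equals the cumulative sum minus the cumulative total reached just before each element's window starts.

```python
import numpy as np

def rolling_sum_with_reset_vec(series, window_size):
    # Running sum that resets at every multiple of window_size,
    # computed without an explicit Python loop.
    values = np.asarray(series, dtype=float)
    cs = np.cumsum(values)
    # index where each element's window begins
    starts = (np.arange(len(values)) // window_size) * window_size
    # cumulative total accumulated before the window start
    # (np.where evaluates both branches; cs[-1] for starts == 0 is masked out)
    offset = np.where(starts > 0, cs[starts - 1], 0.0)
    return cs - offset

print(rolling_sum_with_reset_vec([1, 2, 3, 4, 5], 3))  # [1. 3. 6. 4. 9.]
```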
<python><numpy>
2024-03-29 19:39:35
2
25,847
Merlin
78,245,568
1,317,018
Understanding batching in pytorch models
<p>I have the following model, which forms one of the steps in my overall model pipeline:</p> <pre><code>import torch import torch.nn as nn class NPB(nn.Module): def __init__(self, d, nhead, num_layers, dropout=0.1): super(NPB, self).__init__() self.te = nn.TransformerEncoder( nn.TransformerEncoderLayer(d_model=d, nhead=nhead, dropout=dropout, batch_first=True), num_layers=num_layers, ) self.t_emb = nn.Parameter(torch.randn(1, d)) self.L = nn.Parameter(torch.randn(1, d)) self.td = nn.TransformerDecoder( nn.TransformerDecoderLayer(d_model=d, nhead=nhead, dropout=dropout, batch_first=True), num_layers=num_layers, ) self.ffn = nn.Linear(d, 6) def forward(self, t_v, t_i): print(&quot;--------------- t_v, t_i -----------------&quot;) print('t_v: ', tuple(t_v.shape)) print('t_i: ', tuple(t_i.shape)) print(&quot;--------------- t_v + t_i + t_emb -----------------&quot;) _x = t_v + t_i + self.t_emb print(tuple(_x.shape)) print(&quot;--------------- te ---------------&quot;) _x = self.te(_x) print(tuple(_x.shape)) print(&quot;--------------- td ---------------&quot;) _x = self.td(self.L, _x) print(tuple(_x.shape)) print(&quot;--------------- ffn ---------------&quot;) _x = self.ffn(_x) print(tuple(_x.shape)) return _x </code></pre> <p>Here <code>t_v</code> and <code>t_i</code> are inputs from earlier encoder blocks. I pass them with shape <code>(4,256)</code>, where <code>256</code> is the number of features and <code>4</code> is the batch size. <code>t_emb</code> is a temporal embedding. <code>L</code> is a learned matrix representing the embedding of the query. 
I tested this module with the following code:</p> <pre><code>t_v = torch.randn((4,256)) t_i = torch.randn((4,256)) npb = NPB(d=256, nhead=8, num_layers=2) npb(t_v, t_i) </code></pre> <p>It outputted:</p> <pre><code>=============== NPB =============== --------------- t_v, t_i ----------------- t_v: (4, 256) t_i: (4, 256) --------------- t_v + t_i + t_emb ----------------- (4, 256) --------------- te --------------- (4, 256) --------------- td --------------- (1, 256) --------------- ffn --------------- (1, 6) </code></pre> <p>I was expecting the output to be of shape <code>(4,6)</code>: 6 values for each sample in the batch of size <code>4</code>. But the output was of size <code>(1,6)</code>. After a lot of tweaking, I tried changing the <code>t_emb</code> and <code>L</code> shapes from <code>(1,d)</code> to <code>(4,d)</code>, since I did not want all samples to share these variables (through broadcasting):</p> <pre><code>self.t_emb = nn.Parameter(torch.randn(4, d)) # [n, d] = [4, 256] self.L = nn.Parameter(torch.randn(4, d)) </code></pre> <p>This gives the desired output of shape (4,6):</p> <pre><code>--------------- t_v, t_i ----------------- t_v: (4, 256) t_i: (4, 256) --------------- t_v + t_i + t_emb ----------------- (4, 256) --------------- te --------------- (4, 256) --------------- td --------------- (4, 256) --------------- ffn --------------- (4, 6) </code></pre> <p>I have the following doubts:</p> <p><strong>Q1.</strong> Exactly why did changing the <code>L</code> and <code>t_emb</code> shapes from <code>(1,d)</code> to <code>(4,d)</code> work? Why did it not work with <code>(1,d)</code> through broadcasting?<br /> <strong>Q2.</strong> Am I doing batching the right way, or is the output artificially correct while under the hood it's doing something different from what I am expecting (predicting 6 values for each sample in the batch of size 4)?</p>
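A hedged note that may explain Q1: with `batch_first=True`, the transformer modules expect 3-D input `(batch, seq, d)`; a 2-D tensor such as `(4, 256)` is treated as an unbatched sequence of length 4, and the decoder target `L` of shape `(1, d)` as an unbatched sequence of length 1, which is why the decoder output has leading dimension 1. A sketch of making the batch dimension explicit while keeping `L` as a single shared `(1, d)` query (shape manipulation only, no transformer call):

```python
import torch

batch, d = 4, 256
t_v = torch.randn(batch, d)
t_i = torch.randn(batch, d)

# Make the batch dimension explicit: each sample becomes a length-1 sequence.
x = (t_v + t_i).unsqueeze(1)                 # (4, 1, 256): (batch, seq, d)

# A single learned query of shape (1, d) can be shared across the batch
# by expanding (a view, no copy) instead of storing one row per sample.
L = torch.randn(1, d)                        # stand-in for nn.Parameter
tgt = L.unsqueeze(0).expand(batch, -1, -1)   # (4, 1, 256)

print(x.shape, tgt.shape)
```

With these shapes, `self.td(tgt, x)` would see an explicit batch of 4 length-1 sequences, so the final linear layer should emit `(4, 1, 6)`, one prediction per sample.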
<python><machine-learning><deep-learning><pytorch><transformer-model>
2024-03-29 19:24:58
1
25,281
Mahesha999
78,245,458
5,443,401
Vectorization with multiple rows and columns of dataframe instead of one
<p>I am currently building a CSV file of historical stock info that includes not only historical prices but also momentum indicators. I've successfully added the indicators by looping through the entire dataframe (with &gt; 25,000,000 rows), but it takes too long (30-36 h).</p> <p>What I'm trying to accomplish: I'd like to start with the 3-day high:</p> <p><code>high = stock_df.loc[x+1:x+4, &quot;High&quot;].max(axis=0) </code></p> <p>and divide that by the day-0 low:</p> <p><code>low = stock_df.loc[x, &quot;Low&quot;] </code></p> <p>without iterating through the entire dataframe as shown below:</p> <pre><code>stock_ticker_list = df.Symbol.unique() for ticker in stock_ticker_list: # return a dataframe with historical info for one stock print(ticker) stock_df = df.loc[df.Symbol == ticker] start = stock_df.index[stock_df['Symbol'] == ticker][0] for x in range(start, start + len(stock_df) - 2): try: high = stock_df.loc[x+1:x+4, &quot;High&quot;].max(axis=0) low = stock_df.loc[x, &quot;Low&quot;] df2.loc[x,&quot;H/L&quot;] = high/low except: df2.loc[x,&quot;H/L&quot;] = pd.NA </code></pre> <p>I've looked through the documentation and found methods like pandas.Series.pct_change and pandas.Series.div, but it does not appear these functions will work without also creating a column for the 3-day high. I tried to create such a column</p> <pre><code>s = stock_df[&quot;High&quot;] stock_df['Three_day_high'] = max([s.diff(-1),s.diff(-2),s.diff(-3)]) + s </code></pre> <p>but got a ValueError (<code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</code>)</p>
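A possible loop-free sketch (hedged: `.loc[x+1:x+4]` is label-inclusive and so actually spans four rows; the version below covers the three rows x+1..x+3, so widen the window if four rows were intended). A trailing `rolling(...).max()` shifted backwards yields a forward-looking max per row, and `groupby` keeps tickers separate. The toy frame and its numbers are hypothetical:

```python
import pandas as pd

# Hypothetical frame standing in for the question's df (Symbol/High/Low)
df = pd.DataFrame({
    "Symbol": ["A"] * 5 + ["B"] * 5,
    "High":   [10, 12, 11, 15, 14, 20, 22, 21, 25, 24],
    "Low":    [ 9, 10, 10, 13, 12, 18, 20, 19, 23, 22],
})

def forward_high(s, window=3):
    # rolling(window).max() at row i covers rows i-window+1..i;
    # shifting by -window moves that value to row i, covering i+1..i+window.
    return s.rolling(window).max().shift(-window)

# transform applies per ticker, so the shift never leaks across symbols
df["H/L"] = df.groupby("Symbol")["High"].transform(forward_high) / df["Low"]
print(df)
```

Rows near the end of each ticker's history come out as NaN, matching the `except` branch in the loop version.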
<python><pandas><dataframe>
2024-03-29 18:54:33
1
425
Joel J.
78,245,411
47,152
Unclear about python async task cancel/shutdown at program exit
<p>I can't find docs that clearly describe the mechanism here so just want to try to understand and learn best practices. For tasks created by <code>asyncio.create_task</code>, they seem to be cleaned up at program end without generating any <code>CancelledError</code> or requiring a call to explicitly cancel the task. <strong>Do we need to cancel to ensure orderly shutdown, or does the asyncio loop stuff just handle all this cleanly for us?</strong></p> <p>Here's an example. The program seems to exit cleanly after 20s. So I'm assuming this idiom is fine. Just want to verify I'm not violating any best practices.</p> <pre class="lang-py prettyprint-override"><code>import asyncio import random import time async def forever_while_loop_task(): while True: sleep_seconds = random.randint(5, 30) print(f'sleeping for {sleep_seconds} seconds') await asyncio.sleep(sleep_seconds) print('looping') async def main(): start = time.monotonic() the_looping_task = asyncio.create_task(forever_while_loop_task()) await asyncio.sleep(20) print(f'ok done with program, took {time.monotonic() - start:.2f}s') # Do I need to call the_looping_task.cancel() here # or do anything to ensure orderly shutdown/cleanup? if __name__ == &quot;__main__&quot;: asyncio.run(main()) </code></pre> <p>Sample output from a run:</p> <pre><code>sleeping for 12 seconds looping sleeping for 27 seconds ok done with program, took 20.00s </code></pre>
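For comparison, the explicit-cancel idiom looks like the sketch below. (Hedged: `asyncio.run()` does cancel still-pending tasks during its shutdown, but cancelling and awaiting the task yourself guarantees the task's `except`/`finally` cleanup has finished before `main()` returns.)

```python
import asyncio

async def looper():
    try:
        while True:
            await asyncio.sleep(0.05)
    except asyncio.CancelledError:
        # cleanup runs here when the task is cancelled
        print("task cleaned up")
        raise  # re-raise so the task is marked as cancelled

async def main():
    task = asyncio.create_task(looper())
    await asyncio.sleep(0.12)
    task.cancel()
    try:
        await task          # wait for the cancellation to complete
    except asyncio.CancelledError:
        pass
    print("orderly shutdown")

asyncio.run(main())
```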
<python><python-asyncio>
2024-03-29 18:43:45
1
1,465
chacmool
78,245,380
7,592,072
Regex - capture group which is optionally enclosed in a sequence of characters
<p>I have a file with lines I need to extract from JSON-like syntax. My regex works well in most cases: it extracts the desired symbols into the second capture group. But I noticed that sometimes the desired text can be enclosed by tags which I want to ignore.</p> <p><strong>Sample file:</strong></p> <pre><code> {&quot;title_available&quot; &quot;text1&quot;} {&quot;title_value&quot; &quot;&lt;c(20a601)&gt;text2&quot;} {&quot;tags&quot; {&quot;all&quot; &quot;text3&quot;} {&quot;ignore&quot; &quot;text4&quot;} {&quot;chargeFactor&quot; &quot;text5 %1%&quot;} {&quot;resourceSpeed&quot; &quot;%1% text6&quot;} } {&quot;rules&quot; &quot;bla-bla-bla\n\n \&quot;BLA\&quot; bla-bla-bla.&quot;} {&quot;id1&quot; &quot;&lt;c(c3baae)&gt;text7&lt;/c&gt;&quot;} </code></pre> <p><strong>My regex:</strong> <code>\s+{\&quot;\S+\&quot; \&quot;(&lt;c\(\S+\)&gt;)?(.+)\&quot;}</code></p> <p><strong>Desired output:</strong></p> <pre><code>text1 text2 text3 text4 text5 %1% %1% text6 bla-bla-bla\n\n \&quot;BLA\&quot; bla-bla-bla. text7 </code></pre> <p><strong>Current output</strong>:</p> <pre><code>all ok except: text7&lt;/c&gt; </code></pre> <p><a href="https://i.sstatic.net/kssoR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kssoR.png" alt="enter image description here" /></a></p> <p>I guess I need to use a lookahead somehow with the second capture group, but I didn't find out how. Also I'm not sure if I should use a capture group for skipping the first optional &lt;c...&gt;. Can someone please help with this?</p> <p>P.S. Speed or simplicity of the pattern doesn't matter.</p>
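A sketch of one possible pattern (shown un-HTML-escaped, as plain Python): make the closing tag a non-capturing optional group placed just before the final quote, and keep the inner group lazy so it stops before `</c>` when that tag is present. The sample lines are taken from the question:

```python
import re

# optional <c(...)> prefix, lazy inner text, optional </c> suffix
pattern = re.compile(r'\{"(\S+)" "(?:<c\([^)]*\)>)?(.*?)(?:</c>)?"\}')

samples = [
    '{"title_available" "text1"}',
    '{"title_value" "<c(20a601)>text2"}',
    '{"chargeFactor" "text5 %1%"}',
    '{"id1" "<c(c3baae)>text7</c>"}',
]
for line in samples:
    m = pattern.search(line)
    print(m.group(2))
```

The lazy `(.*?)` stops as early as the rest of the pattern allows, so `</c>` is consumed by the optional non-capturing group instead of the text group; no lookahead is needed.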
<python><regex><regex-group>
2024-03-29 18:34:26
1
871
Nikita Smirnov
78,245,236
480,118
postgres/psycopg: cannot insert multiple commands into a prepared statement
<p>I have two postgres tables that have a relationship, but no cascading delete set, so I need to delete from the related table as well as the main table manually. I am using sqlalchemy core along with the psycopg3 driver.</p> <p>I have the following code:</p> <pre><code> identifiers = ['id100', 'id200'] sql = &quot;&quot;&quot; delete from related_table where entity_id in (select f.id from main_table f where identifier in (%(id1)s)); delete from main_table where identifier in (%(id2)s)&quot;&quot;&quot; params = {'id1': identifiers, 'id2': identifiers} conn = self.db_engine.raw_connection() cursor = conn.cursor() cursor.execute(text(sql), params=params) </code></pre> <p>This results in the following error:</p> <pre><code> Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; File &quot;/usr/local/lib/python3.11/site-packages/psycopg/cursor.py&quot;, line 732, in execute raise ex.with_traceback(None) psycopg.errors.SyntaxError: cannot insert multiple commands into a prepared statement </code></pre> <p>What is the proper way of executing multiple statements without having to loop (and consequently make round trips)?</p>
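A hedged sketch of the usual workaround: issue the two DELETEs as separate `execute()` calls on one cursor inside a single transaction (psycopg3 prepares each `execute()` as one command, so the multi-command string has to be split), and pass the Python list via `= ANY(%(ids)s)` so the driver can adapt it. Table and parameter names mirror the question; the wrapper function name is mine:

```python
def delete_entities(conn, identifiers):
    # Two separate statements on one connection: a single transaction,
    # two round trips, no multi-command prepared statement.
    with conn.cursor() as cur:
        cur.execute(
            "delete from related_table "
            "where entity_id in (select f.id from main_table f "
            "where f.identifier = any(%(ids)s))",
            {"ids": identifiers},
        )
        cur.execute(
            "delete from main_table where identifier = any(%(ids)s)",
            {"ids": identifiers},
        )
    conn.commit()
```

If the schema allows it, adding `ON DELETE CASCADE` to the foreign key would reduce this to one statement entirely.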
<python><postgresql><sqlalchemy><psycopg3>
2024-03-29 17:57:31
2
6,184
mike01010
78,245,156
8,236,076
How to respond to slash command in different channel?
<p>I'm using slash commands in pycord. Normally, to respond to a user's command, one can use the <code>ctx.respond</code> method, like so:</p> <pre class="lang-py prettyprint-override"><code>@bot.slash_command(description=&quot;Say hello.&quot;) async def say_hello(ctx: ApplicationContext) -&gt; None: await ctx.respond(&quot;Hello!&quot;) </code></pre> <p>But what if I want to reply in another channel? I'm aware I can use <code>ctx.channel.send</code>, but this does not count as a response, triggering a &quot;The application did not respond&quot; warning in Discord:</p> <p><a href="https://i.sstatic.net/Gz3Ee.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gz3Ee.jpg" alt="enter image description here" /></a></p> <p>So how can I respond in a different channel in the proper way? Or is there a way to just make an empty response in the original channel, which means I'm free to use <code>ctx.channel.send</code> to send a message in the channel of my choice?</p>
<python><pycord>
2024-03-29 17:37:17
1
1,144
Willem
78,245,061
2,612,259
How do I properly add a callback to individual traces in Plotly?
<p>The code below attempts to register a callback on Plotly traces in 3 different ways.</p> <ol> <li>Method 'a', registers a callback on the trace object directly, before adding it to the figure.</li> <li>Method 'b', registers the callback after adding it to the figure by getting the object from the figure data list after it has been added.</li> <li>Method 'c' is just a variation of 'a' where the callback is local (ie not a member function)</li> </ol> <p>Here is the code showing the 3 attempts I have made to add the callback. This should be self contained and if you run it in a Jupyter Notebook you should see 3 plots and clicking on a any point on any plot should print a message, but the only plot that seems to work is the one using callback method 'b'</p> <pre class="lang-py prettyprint-override"><code>import ipywidgets as widgets import plotly.graph_objects as go line = {'name': 'line','data': ((1,1), (2,2), (3,3)), 'color':'red', 'dash':'solid'} squared = {'name': 'squared','data': ((1,1), (2,2**2), (3,3**2)), 'color':'blue', 'dash':'4,4'} cubed = {'name': 'cubed','data': ((1,1), (2,2**3), (3,3**3)), 'color':'green', 'dash':'solid'} n4 = {'name': 'n4','data': ((1,1), (2,2**4), (3,3**4)), 'color':'purple', 'dash':'solid'} traces = (line, squared, cubed, n4) class MyPlot: def __init__(self, traces, use_callback): self.traces = traces self.use_callback = use_callback def get_values(self, func, index): return [e[index] for e in func['data']] def callback_a(self, trace, points, state): print(f&quot;in callback_a with trace = {trace}, points = {points}, state = {state}&quot;) def callback_b(self, trace, points, state): if len(points.point_inds) &lt; 1: return print(f&quot;in callback_b with trace = {trace}, points = {points}, state = {state}&quot;) def display(self): def callback_c(trace, points, state): print(f&quot;in callback_c with trace = {trace}, points = {points}, state = {state}&quot;) fig = go.FigureWidget() for t in traces: s = 
go.Scatter(mode=&quot;lines&quot;, name=t['name'], x=self.get_values(t, 0), y=self.get_values(t, 1), line=dict(width=2, color=t['color'], dash=t['dash'])) if self.use_callback == 'a': s.on_click(self.callback_a) if self.use_callback == 'c': s.on_click(callback_c) fig.add_trace(s) if self.use_callback == 'b': fig.data[-1].on_click(self.callback_b) fig.layout.title = f&quot;Plot using callback {self.use_callback}&quot; display(fig) my_plot_a = MyPlot(traces, 'a') my_plot_b = MyPlot(traces, 'b') my_plot_c = MyPlot(traces, 'c') my_plot_a.display() my_plot_b.display() my_plot_c.display() </code></pre> <p>As I said above, the only method that seems to work is method 'b'. This, however, has an undesirable &quot;property&quot;: when I click on any point on any trace, the callbacks for all traces are called, even for traces I did not click on. I work around this by checking the length of the <code>points.point_inds</code> property, which is only &gt; 0 in the desired callback. While this works, it seems unnecessary, and the desired method, 'a', seems to be the documented way to do this; however, I can't seem to get it to work.</p> <p>Here is the example from the Plotly <a href="https://plotly.com/python-api-reference/generated/plotly.html?highlight=on_click#plotly.basedatatypes.BaseTraceType.on_click" rel="nofollow noreferrer">doc</a>. I think I am following this example in both 'a' and 'c'. Am I doing something wrong?</p> <p><a href="https://i.sstatic.net/5og2P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5og2P.png" alt="enter image description here" /></a></p>
<python><plotly>
2024-03-29 17:16:14
0
16,822
nPn
78,245,057
7,437,143
Sphinx warning: Failed to import test.test_adder from module `pythontemplate`
<h2>Context</h2> <p>After creating a <code>root_dir/docs/source/conf.py</code> that automatically generates the <code>.rst</code> files for each <code>.py</code> file in the <code>root_dir/src</code> (and <code>root_dir/test/</code>) directory (and its children), I am experiencing some difficulties linking to the <code>src/projectname/__main__.py</code> and <code>root_dir/test/&lt;test files&gt;.py</code> within the <code>.rst</code> files.</p> <p>The repository structure follows:</p> <pre><code>src/projectname/__main__.py src/projectname/helper.py test/test_adder.py docs/source/conf.py </code></pre> <p>(where <code>projectname</code> is: <code>pythontemplate</code>.)</p> <h2>Error message</h2> <p>When I build the Sphinx documentation using <code>cd docs &amp;&amp; make html</code>, I get the following &quot;warning&quot;:</p> <pre><code>WARNING: Failed to import pythontemplate.test.test_adder. Possible hints: * AttributeError: module 'pythontemplate' has no attribute 'test' * ModuleNotFoundError: No module named 'pythontemplate.test' ... WARNING: autodoc: failed to import module 'test.test_adder' from module 'pythontemplate'; the following exception was raised: No module named 'pythontemplate.test' </code></pre> <h2>Design Choices</h2> <p>I know some projects include the <code>test/</code> files within <code>src/test</code> and some put the test files into the root dir; the latter is followed in this project. By naming the test directory <code>test</code> instead of <code>tests</code>, they are automatically included in the <code>dist</code> created with <code>pip install -e .</code>. This is verified by opening the <code>dist/pythontemplate-1.0.tar.gz</code> file and verifying that the <code>pythontemplate-1.0</code> directory contains the <code>test</code> directory (along with the <code>src</code> directory). However, the <code>test</code> directory is not included in the <code>whl</code> file.
(This is desired, as the users should not have to run the tests, but should be able to do so if they want using the <code>tar.gz</code>.)</p> <h2>Generated .rst documentation files</h2> <p>For the test file <code>test/test_adder.py</code>, I generated <code>root_dir/docs/source/autogen/test/test_adder.rst</code> with content:</p> <pre><code> .. _test_adder-module: test_adder Module ================= .. automodule:: test.test_adder :members: :undoc-members: :show-inheritance: </code></pre> <p>Sphinx is not able to import the <code>test.test_adder</code> module from there. (I also tried <code>.. automodule:: pythontemplate.test.test_adder</code>, though that did not import it either.)</p> <h2>Question</h2> <p>How can I refer to the <code>test_&lt;something&gt;.py</code> files in the <code>root_dir/test</code> folder from the (auto-generated) <code>.rst</code> documents in <code>docs/source/autogen/test/test_&lt;something&gt;.rst</code>, such that Sphinx is able to import them?</p> <h2>Conf.py</h2> <p>For completeness, below is the <code>conf.py</code> file:</p> <pre class="lang-py prettyprint-override"><code>&quot;&quot;&quot;Configuration file for the Sphinx documentation builder. For the full list of built-in configuration values, see the documentation: https://www.sphinx-doc.org/en/master/usage/configuration.html -- Project information ----------------------------------------------------- https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information &quot;&quot;&quot; # # This file only contains a selection of the most common options. For a full # list see the documentation: # https://www.sphinx-doc.org/en/master/usage/configuration.html # -- Path setup -------------------------------------------------------------- # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here.
# # import os # import sys # sys.path.insert(0, os.path.abspath('.')) # -- Project information ----------------------------------------------------- import os import shutil import sys # This makes the Sphinx documentation tool look at the root of the repository # for .py files. from datetime import datetime from pathlib import Path from typing import List, Tuple sys.path.insert(0, os.path.abspath(&quot;..&quot;)) def split_filepath_into_three(*, filepath: str) -&gt; Tuple[str, str, str]: &quot;&quot;&quot;Split a file path into directory path, filename, and extension. Args: filepath (str): The input file path. Returns: Tuple[str, str, str]: A tuple containing directory path, filename, and extension. &quot;&quot;&quot; path_obj: Path = Path(filepath) directory_path: str = str(path_obj.parent) filename = os.path.splitext(path_obj.name)[0] extension = path_obj.suffix return directory_path, filename, extension def get_abs_root_path() -&gt; str: &quot;&quot;&quot;Returns the absolute path of the root dir of this repository. Throws an error if the current path does not end in /docs/source. &quot;&quot;&quot; current_abs_path: str = os.getcwd() assert_abs_path_ends_in_docs_source(current_abs_path=current_abs_path) abs_root_path: str = current_abs_path[:-11] return abs_root_path def assert_abs_path_ends_in_docs_source(*, current_abs_path: str) -&gt; None: &quot;&quot;&quot;Asserts the current absolute path ends in /docs/source.&quot;&quot;&quot; if current_abs_path[-12:] != &quot;/docs/source&quot;: print(f&quot;current_abs_path={current_abs_path}&quot;) raise ValueError( &quot;Error, current_abs_path is expected to end in: /docs/source&quot; ) def loop_over_files(*, abs_search_path: str, extension: str) -&gt; List[str]: &quot;&quot;&quot;Loop over all files in the specified root directory and its child directories. Args: root_directory (str): The root directory to start the traversal from. 
&quot;&quot;&quot; filepaths: List[str] = [] for root, _, files in os.walk(abs_search_path): for filename in files: extension_len: int = -len(extension) if filename[extension_len:] == extension: filepath = os.path.join(root, filename) filepaths.append(filepath) return filepaths def is_unwanted(*, filepath: str) -&gt; bool: &quot;&quot;&quot;Hardcoded filter of unwanted datatypes.&quot;&quot;&quot; base_name = os.path.basename(filepath) if base_name == &quot;__init__.py&quot;: return True if base_name.endswith(&quot;pyc&quot;): return True if &quot;something/another&quot; in filepath: return True return False def filter_unwanted_files(*, filepaths: List[str]) -&gt; List[str]: &quot;&quot;&quot;Filters out unwanted files from a list of file paths. Unwanted files include: - Files named &quot;init__.py&quot; - Files ending with &quot;swag.py&quot; - Files in the subdirectory &quot;something/another&quot; Args: filepaths (List[str]): List of file paths. Returns: List[str]: List of filtered file paths. &quot;&quot;&quot; return [ filepath for filepath in filepaths if not is_unwanted(filepath=filepath) ] def get_abs_python_filepaths( *, abs_root_path: str, extension: str, root_folder_name: str ) -&gt; List[str]: &quot;&quot;&quot;Returns all the Python files in this repo.&quot;&quot;&quot; # Get the file lists. py_files: List[str] = loop_over_files( abs_search_path=f&quot;{abs_root_path}docs/source/../../{root_folder_name}&quot;, extension=extension, ) # Merge and filter to preserve the relevant files. 
filtered_filepaths: List[str] = filter_unwanted_files(filepaths=py_files) return filtered_filepaths def abs_to_relative_python_paths_from_root( *, abs_py_paths: List[str], abs_root_path: str ) -&gt; List[str]: &quot;&quot;&quot;Converts the absolute Python paths to relative Python filepaths as seen from the root dir.&quot;&quot;&quot; rel_py_filepaths: List[str] = [] for abs_py_path in abs_py_paths: flattened_filepath = os.path.normpath(abs_py_path) print(f&quot;flattened_filepath={flattened_filepath}&quot;) print(f&quot;abs_root_path={abs_root_path}&quot;) if abs_root_path not in flattened_filepath: print(f&quot;abs_root_path={abs_root_path}&quot;) print(f&quot;flattened_filepath={flattened_filepath}&quot;) raise ValueError( &quot;Error, root dir should be in flattened_filepath.&quot; ) rel_py_filepaths.append( os.path.relpath(flattened_filepath, abs_root_path) ) return rel_py_filepaths def delete_directory(*, directory_path: str) -&gt; None: &quot;&quot;&quot;Deletes a directory and its contents. Args: directory_path (Union[str, bytes]): Path to the directory to be deleted. Raises: FileNotFoundError: If the specified directory does not exist. PermissionError: If the function lacks the necessary permissions to delete the directory. OSError: If an error occurs while deleting the directory. Returns: None &quot;&quot;&quot; if os.path.exists(directory_path) and os.path.isdir(directory_path): shutil.rmtree(directory_path) def create_relative_path(*, relative_path: str) -&gt; None: &quot;&quot;&quot;Creates a relative path if it does not yet exist. Args: relative_path (str): Relative path to create. 
Returns: None &quot;&quot;&quot; if not os.path.exists(relative_path): os.makedirs(relative_path) if not os.path.exists(relative_path): raise NotADirectoryError(f&quot;Error, did not find:{relative_path}&quot;) def create_rst( *, autogen_dir: str, rel_filedir: str, filename: str, pyproject_name: str, py_type: str, ) -&gt; None: &quot;&quot;&quot;Creates a reStructuredText (.rst) file with automodule directives. Args: rel_filedir (str): Path to the directory where the .rst file will be created. filename (str): Name of the .rst file (without the .rst extension). Returns: None &quot;&quot;&quot; if py_type == &quot;src&quot;: prelude: str = pyproject_name elif py_type == &quot;test&quot;: prelude = f&quot;{pyproject_name}.test&quot; else: raise ValueError(f&quot;Error, py_type={py_type} is not supported.&quot;) # if filename != &quot;__main__&quot;: title_underline = &quot;=&quot; * len(f&quot;{filename}-module&quot;) rst_content = f&quot;&quot;&quot; .. _{filename}-module: {filename} Module {title_underline} .. automodule:: {prelude}.{filename} :members: :undoc-members: :show-inheritance: &quot;&quot;&quot; # .. automodule:: {rel_filedir.replace(&quot;/&quot;, &quot;.&quot;)}.{filename} rst_filepath: str = os.path.join( f&quot;{autogen_dir}{rel_filedir}&quot;, f&quot;{filename}.rst&quot; ) with open(rst_filepath, &quot;w&quot;, encoding=&quot;utf-8&quot;) as rst_file: rst_file.write(rst_content) def generate_rst_per_code_file( *, extension: str, pyproject_name: str ) -&gt; List[str]: &quot;&quot;&quot;Generates a parameterised .rst file for each .py file of the project, to automatically include its documentation in Sphinx. Returns rst filepaths.
&quot;&quot;&quot; abs_root_path: str = get_abs_root_path() abs_src_py_paths: List[str] = get_abs_python_filepaths( abs_root_path=abs_root_path, extension=extension, root_folder_name=&quot;src&quot;, ) abs_test_py_paths: List[str] = get_abs_python_filepaths( abs_root_path=abs_root_path, extension=extension, root_folder_name=&quot;test&quot;, ) current_abs_path: str = os.getcwd() autogen_dir: str = f&quot;{current_abs_path}/autogen/&quot; prepare_rst_directories(autogen_dir=autogen_dir) rst_paths: List[str] = [] rst_paths.extend( create_rst_files( pyproject_name=pyproject_name, abs_root_path=abs_root_path, autogen_dir=autogen_dir, abs_py_paths=abs_src_py_paths, py_type=&quot;src&quot;, ) ) rst_paths.extend( create_rst_files( pyproject_name=pyproject_name, abs_root_path=abs_root_path, autogen_dir=autogen_dir, abs_py_paths=abs_test_py_paths, py_type=&quot;test&quot;, ) ) return rst_paths def prepare_rst_directories(*, autogen_dir: str) -&gt; None: &quot;&quot;&quot;Creates the output directory for the auto-generated .rst documentation files.&quot;&quot;&quot; delete_directory(directory_path=autogen_dir) create_relative_path(relative_path=autogen_dir) def create_rst_files( *, pyproject_name: str, abs_root_path: str, autogen_dir: str, abs_py_paths: List[str], py_type: str, ) -&gt; List[str]: &quot;&quot;&quot;Loops over the python files of py_type src or test, and creates the .rst files that point to the actual .py file such that Sphinx can generate its documentation on the fly.&quot;&quot;&quot; rel_root_py_paths: List[str] = abs_to_relative_python_paths_from_root( abs_py_paths=abs_py_paths, abs_root_path=abs_root_path ) rst_paths: List[str] = [] # Create file for each py file. 
for rel_root_py_path in rel_root_py_paths: rel_filedir: str filename: str rel_filedir, filename, _ = split_filepath_into_three( filepath=rel_root_py_path ) create_relative_path(relative_path=f&quot;{autogen_dir}{rel_filedir}&quot;) create_rst( autogen_dir=autogen_dir, rel_filedir=rel_filedir, filename=filename, pyproject_name=pyproject_name, py_type=py_type, ) rst_path: str = os.path.join(f&quot;autogen/{rel_filedir}&quot;, f&quot;{filename}&quot;) rst_paths.append(rst_path) return rst_paths def generate_index_rst(*, filepaths: List[str]) -&gt; str: &quot;&quot;&quot;Generates the list of all the auto-generated rst files.&quot;&quot;&quot; now = datetime.now().strftime(&quot;%a %b %d %H:%M:%S %Y&quot;) content = f&quot;&quot;&quot;\ .. jsonmodipy documentation main file, created by sphinx-quickstart on {now}. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. .. include:: manual.rst Auto-generated documentation from Python code ============================================= .. toctree:: :maxdepth: 2 &quot;&quot;&quot; for filepath in filepaths: content += f&quot;\n {filepath}&quot; content += &quot;&quot;&quot; Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` &quot;&quot;&quot; return content def write_index_rst(*, filepaths: List[str], output_file: str) -&gt; None: &quot;&quot;&quot;Creates an index.rst file that is used to generate the Sphinx documentation.&quot;&quot;&quot; index_rst_content = generate_index_rst(filepaths=filepaths) with open(output_file, &quot;w&quot;, encoding=&quot;utf-8&quot;) as index_file: index_file.write(index_rst_content) # Call functions to generate rst Sphinx documentation structure. # Readthedocs sets it to contents.rst, but it is index.rst in the used example.
# -- General configuration --------------------------------------------------- project: str = &quot;Decentralised-SAAS-Investment-Structure&quot; main_doc: str = &quot;index&quot; PYPROJECT_NAME: str = &quot;pythontemplate&quot; # pylint:disable=W0622 copyright: str = &quot;2024, a-t-0&quot; author: str = &quot;a-t-0&quot; the_rst_paths: List[str] = generate_rst_per_code_file( extension=&quot;.py&quot;, pyproject_name=PYPROJECT_NAME ) if len(the_rst_paths) == 0: raise ValueError( &quot;Error, did not find any Python files for which documentation needs&quot; + &quot; to be generated.&quot; ) write_index_rst(filepaths=the_rst_paths, output_file=&quot;index.rst&quot;) # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions: List[str] = [ &quot;sphinx.ext.duration&quot;, &quot;sphinx.ext.doctest&quot;, &quot;sphinx.ext.autodoc&quot;, &quot;sphinx.ext.autosummary&quot;, &quot;sphinx.ext.intersphinx&quot;, # Include markdown files in Sphinx documentation &quot;myst_parser&quot;, ] # Add any paths that contain templates here, relative to this directory. templates_path: List[str] = [&quot;_templates&quot;] # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. # This pattern also affects html_static_path and html_extra_path. exclude_patterns: List[str] = [] # -- Options for HTML output ------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. # html_theme: str = &quot;alabaster&quot; # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named &quot;default.css&quot; will overwrite the builtin &quot;default.css&quot;. 
html_static_path: List[str] = [&quot;_static&quot;] </code></pre> <h2>Note</h2> <p>I am aware the error message is thrown because the tests are not in <code>pythontemplate</code> pip package. As explained above, that is a design choice. The question is about how to import those test files from the <code>.rst</code> file without adding the <code>test</code> into the pip package.</p> <p>I can import the content of the <code>test_adder.py</code> file in the <code>.rst</code> file that should do the autodoc using:</p> <pre><code>.. _test_adder-module: test_adder Module ================= Hello ===== .. include:: ../../../../test/test_adder.py .. automodule:: ../../../../test/test_adder.py :members: :undoc-members: :show-inheritance: </code></pre> <p>However the automodule does not recognise that path, nor does <code>automodule ........test/test_adder</code>.</p>
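For what it's worth, autodoc resolves `.. automodule:: test.test_adder` with an ordinary Python import, so the problem reduces to making `import test.test_adder` succeed from `conf.py`. That means the repository *root* (not a path relative to `docs/`) has to be on `sys.path`; note, as an aside, that `os.path.abspath("..")` evaluated from `docs/source` points at `docs/`, not the repo root. The sketch below builds a throwaway `<root>/test/test_adder.py` layout and shows the import working once the root is prepended — with the caveat, stated as an assumption, that this shadows the standard library's own `test` package, which is one reason many projects prefer the name `tests`:

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway repo layout: <root>/test/test_adder.py
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "test"))
with open(os.path.join(root, "test", "__init__.py"), "w") as fh:
    fh.write("")
with open(os.path.join(root, "test", "test_adder.py"), "w") as fh:
    fh.write("def add(a, b):\n    return a + b\n")

# What conf.py would need: the repo root on sys.path, ahead of everything
# else.  Drop any previously imported `test` so our package wins.
for name in ("test", "test.test_adder"):
    sys.modules.pop(name, None)
sys.path.insert(0, root)

mod = importlib.import_module("test.test_adder")  # what automodule does internally
print(mod.add(2, 3))
```

With this in place, `.. automodule:: test.test_adder` (no `pythontemplate.` prefix) is the directive that matches the import path, since `test` is not inside the installed package.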
<python><python-sphinx><restructuredtext><documentation-generation><autodoc>
2024-03-29 17:15:47
1
2,887
a.t.
78,244,944
5,589,640
Matching hundreds of non-adjacent keywords in a large text corpus in Python
<p>I need to <strong>match non-adjacent keywords</strong> in a large collection of texts (several thousand). If matched, a label is assigned, else the label &quot;unknown&quot; is assigned.</p> <p>To provide an example, I would like to find the keywords <em>sales representative</em> and <em>dealt</em> in the below text snippet and assign it the category <em>keyword pattern A</em>: <br></p> <blockquote> <p>Text: &quot;The sales representative dealt with everything. It was very helpful to know that he compiled the best option for me.&quot; <br></p> <ul> <li>The keyword pattern is thus <em>sales representative</em> and <em>dealt</em></li> <li>Since sales representative might also be called sales rep or customer rep, there are multiple keywords I need to match. The same holds true for the word dealt. So you see where it gets complex.</li> </ul> </blockquote> <p>There are many solutions for finding and matching unigrams or adjacent words (n-grams). I have implemented this myself. Now I need to find different keywords that are not written next to each other and assign a label. Also, I don't know what is written between the keywords. It could be anything.</p> <br> I am approaching the problem with a lexical approach to look up the keywords in a dictionary with different columns to accommodate the matching of single keywords, two keywords, or three keywords. Note that a keyword is always a unigram or a bigram. Also, I don't know what is written in between the keywords. Below is some code I have written.
<pre><code>import pandas as pd #create mock dictionary Dict = pd.DataFrame({'word1':['dealt','dealt','dealt',''], 'word2':['sales representative','sales rep', 'customer rep', 'options'] } ) #create sample text texts = [&quot;The sales representative dealt with everything.&quot;, &quot;The sales rep dealt with everything.&quot;, &quot;The agent answered all questions&quot; , &quot;The customer rep answered all questions.&quot;, &quot;The agent dealt with everything.&quot;] motive =[] # only checks for the keyword in the first column for item in texts: item = str(item) if any(x in item for x in Dict['word1']): motive.append('keyword pattern A') else: motive.append('unknown') </code></pre> <p>The label should only be assigned when <em>dealt</em> and <em>sales rep</em> are present in the text. So sentences 3 and 5 are incorrectly assigned. So I updated the code. It runs through but does not assign any labels.</p> <pre><code>for item in texts: #convert into string item = str(item) #check if keyword can be found in first column tempM1 = {x for x in Dict['word1'] if x in item} #check if keyword was found if tempM1 != None: #if yes, locate all of their positions in the dictionary for i in tempM1: i = -1 #get row index ind = Dict.index[Dict['word1'] == list(tempM1)[i+1]] #gives pandas.core.indexes.base.Index #check if column next to given row index is not empty if pd.isnull(Dict['word2'].iloc[ind]) is False: #match keyword in second column tempM2 = {x for x in Dict['word2'] if x in item} #if second keyword was found if tempM2 != None: motive.append('keyword pattern A') else: #check again first keyword column tempM3 = {x for x in Dict['word1'] if x in item} if tempM3 != None: motive.append('keyword pattern A') else: motive.append('unknown') </code></pre> <p>How can I tweak the above code?</p> <p>I know about regular expressions (RegEx).
It seems to me that it would require more lines of code and be less efficient given the number of keywords (about 700 to 1000) and the combinations between them. Happy to be proven wrong, though!</p> <p>Also, I know this can be viewed as a classification problem. Explanation and transparency are required in the project, so deep learning and the like are not an option. For the same reason I am not considering embeddings.</p> <p>Thanks!</p>
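For illustration only — the `patterns` dict and `label_text` helper below are my own names, not from the question — one way to sidestep the per-column bookkeeping is to treat each dictionary row as a conjunction of keywords and assign the label only when every keyword of some row occurs in the text. Note that plain `x in item` also matches substrings (e.g. 'dealt' inside 'dealtless'), so word-boundary checks may still be wanted for the real corpus:

```python
# Each row is a tuple of keywords that must ALL occur in the text,
# mirroring the rows of the mock dictionary from the question
# (the empty word1 cell becomes a one-keyword row).
patterns = {
    "keyword pattern A": [
        ("dealt", "sales representative"),
        ("dealt", "sales rep"),
        ("dealt", "customer rep"),
        ("options",),
    ],
}

def label_text(text, patterns):
    """Return the first label whose keyword rows are all present in text."""
    for label, rows in patterns.items():
        for keywords in rows:
            if all(kw in text for kw in keywords):
                return label
    return "unknown"

texts = [
    "The sales representative dealt with everything.",
    "The sales rep dealt with everything.",
    "The agent answered all questions",
    "The customer rep answered all questions.",
    "The agent dealt with everything.",
]
labels = [label_text(t, patterns) for t in texts]
print(labels)
```

With the sample sentences this labels only the first two texts, matching the expectation that sentences 3–5 should come out as "unknown".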
<python><loops><match><nested-loops>
2024-03-29 16:50:49
1
625
Simone
78,244,888
20,358
Using Python CDK to bundle .NET 8 code into an AWS Lambda function
<p>I am using CDK with Python and trying to build, package and deploy .NET 8 code as a Lambda function.</p> <p>The Python code below gets the error <code>Error: .NET binaries for Lambda function are not correctly installed in the /var/task directory of the image when the image was built.</code></p> <pre><code>from constructs import Construct from aws_cdk import ( Duration, Stack, aws_iam as iam, aws_lambda as lambda_ ) csharp_lambda = lambda_.Function( self, &quot;PythonCdkDotnetLambda&quot;, runtime=lambda_.Runtime.DOTNET_8, handler=&quot;helloworld::helloworld.Functions::ExecuteFunc&quot;, code=lambda_.Code.from_asset(&quot;../path/to/src&quot;), ) </code></pre> <p>When using CDK with .NET, there is a bundling option like the one below, where the code is built and packaged:</p> <pre><code> //C# Code var buildOption = new BundlingOptions() { Image = Runtime.DOTNET_8.BundlingImage, User = &quot;root&quot;, OutputType = BundlingOutput.ARCHIVED, Command = new string[]{ &quot;/bin/sh&quot;, &quot;-c&quot;, &quot; dotnet tool install -g Amazon.Lambda.Tools&quot;+ &quot; &amp;&amp; dotnet build&quot;+ &quot; &amp;&amp; dotnet lambda package --project-location helloworld/ --output-package /asset-output/function.zip&quot; } }; </code></pre> <p>How do you specify these bundling options in Python CDK to deploy a .NET 8 Lambda, or how do you build the .NET code into the required binaries before deploying with Python CDK?</p>
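For what it's worth, a direct Python translation of the C# bundling block might look like the sketch below. I have not deployed this: the live part only assembles the same shell command the C# example runs, and the commented wiring assumes the aws-cdk-lib v2 names (`BundlingOptions`, `BundlingOutput`, `Runtime.bundling_image`) mirror their C# counterparts.

```python
# Assemble the same build-and-package command the C# example runs.
bundling_command = [
    "/bin/sh",
    "-c",
    " && ".join([
        "dotnet tool install -g Amazon.Lambda.Tools",
        "dotnet build",
        "dotnet lambda package --project-location helloworld/"
        " --output-package /asset-output/function.zip",
    ]),
]
print(bundling_command[2])

# Untested wiring (assumes aws-cdk-lib v2 is installed):
#
# from aws_cdk import BundlingOptions, BundlingOutput, aws_lambda as lambda_
#
# code = lambda_.Code.from_asset(
#     "../path/to/src",
#     bundling=BundlingOptions(
#         image=lambda_.Runtime.DOTNET_8.bundling_image,
#         user="root",
#         output_type=BundlingOutput.ARCHIVED,
#         command=bundling_command,
#     ),
# )
```

If this maps correctly, passing that `code` to `lambda_.Function(...)` should reproduce the C# bundling behaviour, with the build happening inside the .NET 8 bundling image at `cdk synth`/`deploy` time.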
<python><python-3.x><.net-core><aws-lambda><aws-cdk>
2024-03-29 16:35:30
2
14,834
user20358
78,244,861
3,353,160
Launching Python Debug with Arguments messes with file path
<p>I'm using VSCode on Windows, with Git Bash as the integrated terminal. When I launch the Python Debugger with default configurations, it works fine, and I get this command executed on the terminal:</p> <pre class="lang-bash prettyprint-override"><code>/usr/bin/env c:\\Users\\augus\\.Apps\\anaconda3\\envs\\muskit-env\\python.exe \ c:\\Users\\augus\\.vscode\\extensions\\ms-python.debugpy-2024.2.0-win32-x64\\bundled\\libs\\debugpy\\adapter/../..\\debugpy\\launcher \ 53684 -- E:\\muskit\\QuantumSoftwareTestingTools\\Muskit\\Muskit\\CommandMain.py </code></pre> <p>Notice the <code>\\</code> in the file path. Again, the above works just fine.</p> <p>The problem is when I add the <code>args</code> property to my <code>launch.json</code> configuration.</p> <p>launch.json:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Python Debugger: Current File with Arguments&quot;, &quot;type&quot;: &quot;debugpy&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;${file}&quot;, &quot;console&quot;: &quot;integratedTerminal&quot;, &quot;args&quot;: &quot;foo&quot; } ] } </code></pre> <p>The following command is executed on the terminal:</p> <pre class="lang-bash prettyprint-override"><code>$ /usr/bin/env c:\Users\augus\.Apps\anaconda3\envs\muskit-env\python.exe \ c:\Users\augus\.vscode\extensions\ms-python.debugpy-2024.2.0-win32-x64\bundled\libs\debugpy\adapter/../..\debugpy\launcher \ 53805 -- E:\muskit\QuantumSoftwareTestingTools\Muskit\Muskit\CommandMain.py foo </code></pre> <pre class="lang-bash prettyprint-override"><code>/usr/bin/env: ‘c:Usersaugus.Appsanaconda3envsmuskit-envpython.exe’: No such file or directory </code></pre> <p>Notice that, instead of <code>\\</code>, it uses <code>\</code>, which causes the &quot;No such file or directory&quot;.</p> <p>Is this a bug, or am I missing something?</p>
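The collapse itself is ordinary POSIX-shell quoting: an unquoted backslash escapes the next character, so `\U` becomes plain `U`, which is why the doubled `\\` version survives and the single `\` version does not. A small demonstration of just that mechanism (assuming an `sh` is on PATH; in the question's setup this would be the Git Bash shell, and it does not patch the debugger's command generation):

```python
import subprocess

def sh(cmd):
    """Run a command line through a POSIX shell and return its stdout."""
    return subprocess.run(["sh", "-c", cmd], capture_output=True, text=True).stdout

# Unquoted: the shell consumes each single backslash as an escape.
unquoted = sh(r"printf '%s\n' c:\Users\augus")
# Quoted: the backslashes reach printf intact (doubling them would too).
quoted = sh(r"printf '%s\n' 'c:\Users\augus'")

print(repr(unquoted))
print(repr(quoted))
```

This matches the symptom exactly: `c:\Users\augus` degrades to `c:Usersaugus`, the same mangled path `env` complains about.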
<python><vscode-extensions><vscode-debugger><debugpy>
2024-03-29 16:29:23
1
721
Erico
78,244,796
5,079,779
Is it possible to type-hint a strict subclass of a given type?
<p>Let's say I wanted to hint that a particular field of a dataclass should be a subclass of one class, but not the type itself. For a more concrete example:</p> <pre class="lang-py prettyprint-override"><code>class Foo: ... class Bar(Foo): ... class Baz(Foo): ... @dataclass class Data: foo_subclass_instance: StrictSubclassOf[Foo] # accepts an instance of Bar or Baz, but not a Foo foo_subclass: type[StrictSubclassOf[Foo]] # accepts Bar or Baz, but not Foo </code></pre> <p>If the set of subclasses is small and known in advance, you can work around this with something like this:</p> <pre class="lang-py prettyprint-override"><code>@dataclass class Data: foo_subclass_instance: Bar | Baz foo_subclass: type[Bar] | type[Baz] </code></pre> <p>However, this approach breaks down if you have a lot of subtypes to keep track of, potentially with some of those being in other files you can't import because doing so would create import cycles.</p> <p>So is there some sort of annotation to describe <code>StrictSubclassOf[T]</code>?</p>
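As far as I know the static type system has no way to say "subclass but not the class itself" (`type[Foo]` deliberately accepts `Foo` too), so one pragmatic fallback — shown here only as a sketch, not as an answer to the static-typing question — is to keep the ordinary `type[Foo]` hint and enforce strictness at runtime in `__post_init__`:

```python
from dataclasses import dataclass


class Foo: ...
class Bar(Foo): ...
class Baz(Foo): ...


@dataclass
class Data:
    foo_subclass: type[Foo]  # static hint: Foo or any subclass

    def __post_init__(self) -> None:
        # Runtime narrowing: accept strict subclasses only.
        if not (issubclass(self.foo_subclass, Foo)
                and self.foo_subclass is not Foo):
            raise TypeError(
                f"foo_subclass must be a strict subclass of Foo, "
                f"got {self.foo_subclass!r}"
            )


Data(Bar)  # fine
try:
    Data(Foo)  # rejected, but only at runtime
except TypeError as exc:
    print(exc)
```

The trade-off is that a type checker still lets `Data(Foo)` through; the restriction only surfaces when the dataclass is constructed.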
<python><python-typing>
2024-03-29 16:12:19
0
808
Beefster
78,244,790
13,494,917
Failing to pass a downloaded file-like object from SharePoint using the shareplum and paramiko libraries
<p>I'm receiving an error when passing a file object using paramiko's putfo().</p> <blockquote> <p>AttributeError: 'bytes' object has no attribute 'read'</p> </blockquote> <p>Here's what my putfo() call looks like:</p> <pre><code>ssh_client = paramiko.SSHClient() ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh_client.connect(hostname=host, port=port, username=username,password=password, look_for_keys=False) ftp = ssh_client.open_sftp() ftp.putfo(file_obj, remotepath=file_name) </code></pre> <p><code>file_obj</code> is downloaded to memory using shareplum's get_file().</p> <p>I'm not sure what I should try here to solve this problem.</p>
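The traceback suggests `putfo` received raw `bytes` rather than a file-like object: `putfo` calls `.read()` on its argument, and `bytes` has no such method. Assuming shareplum's `get_file` is indeed returning `bytes` here, wrapping them in `io.BytesIO` gives them the file interface. A stdlib-only sketch of the wrapping (the paramiko call itself is left as in the question, and `file_bytes` is a stand-in for the real download):

```python
import io

file_bytes = b"example payload from get_file()"  # stands in for the SharePoint download

file_obj = io.BytesIO(file_bytes)  # now has .read(), .seek(), etc.
print(file_obj.read(7))

# then, unchanged apart from the wrapper:
# ftp.putfo(io.BytesIO(file_bytes), remotepath=file_name)
```

`BytesIO` keeps everything in memory, so for very large downloads spooling to a temporary file (`tempfile.SpooledTemporaryFile`) may be the safer choice.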
<python><sftp><paramiko><shareplum>
2024-03-29 16:11:29
1
687
BlakeB9
78,244,664
5,379,479
Keras TensorFlow Probability model not learning distribution spread
<p>I built and trained a Keras TensorFlow Probability model. It's basically a fully connected neural network with a DistributionLambda on the output layer. Here is a code example of the last layer:</p> <pre><code>tfp.layers.DistributionLambda( lambda t: tfd.Independent(tfd.Normal(loc=t[..., :n], scale=1e-5 + tf.nn.softplus(c + t[..., n:])), reinterpreted_batch_ndims=1)) </code></pre> <p>During training I'm using Mean Squared Error as my loss function. The training seems to progress well and is numerically stable.</p> <p>After training, I start by removing the last layer of the model and then make forward-pass predictions with my test set data. This basically gives me the &quot;learned&quot; expected <code>loc</code> and <code>scale</code> for the distribution the model learned for each data point in the test set. However, because of the <code>softplus</code> correction in the <code>DistributionLambda</code>, I also have to apply that same correction to the chopped model's prediction for <code>scale</code>.</p> <p>I'm trying to verify that the model learned the appropriate distributions contingent on the input values. So, with these predictions for the <code>loc</code> (mean) and <code>scale</code> (standard deviation) I can create calibration plots to see how well the model learned the latent distributions. The calibration plot for the mean looks great.
I'm also creating a calibration plot for the <code>scale</code>/stdev parameter with code like this:</p> <pre><code>def create_stdev_calibration_plot(df: pd.DataFrame, y_true: str = 'y_true', y_pred_mean: str = 'y_pred_mean', y_pred_std: str = 'y_pred_std', title: Optional[str] = None, save_path: Optional[str] = None): # Compute the residuals df['residual'] = df[y_true] - df[y_pred_mean] # Bin data based on predicted standard deviation bins = np.linspace(df[y_pred_std].min(), df[y_pred_std].max(), 10) df['bin'] = np.digitize(df[y_pred_std], bins) # For each bin, compute mean predicted std and actual std of residuals df['y_pred_variance'] = df[y_pred_std] ** 2 bin_means_variance = df.groupby('bin')['y_pred_variance'].mean() # Convert back to standard deviation bin_means = np.sqrt(bin_means_variance) bin_residual_stds = df.groupby('bin')['residual'].std() # Create the calibration plot plt.figure(figsize=(8, 8)) plt.plot(bin_means, bin_residual_stds, 'o-') xrange = plt.xlim() yrange = plt.ylim() max_val = max(xrange[1], yrange[1]) min_val = min(xrange[0], yrange[0]) plt.axline((min_val, min_val), (max_val, max_val), linestyle='--', color='k', linewidth=2) plt.xlabel('Mean Predicted Standard Deviation') plt.ylabel('Actual Standard Deviation of Residuals') plt.title('Spread Calibration Plot') plt.grid(True) plt.show() </code></pre> <p>I generated some synthetic data to prove that this standard deviation calibration plot works as expected like this:</p> <pre><code># Number of samples n_samples = 1000 # Input feature x = np.random.uniform(-10, 10, size=n_samples) # True mean and standard deviation as functions of the input feature true_mean = 2 * x + 3 true_std = 0.5 * np.abs(x) + 1 # Generate synthetic data y_true = np.random.normal(loc=true_mean, scale=true_std) # Simulate model predictions (with some error) y_pred_mean = true_mean + np.random.normal(loc=0, scale=1, size=n_samples) y_pred_std = true_std + np.random.normal(loc=0, scale=0.5, size=n_samples) # Ensure 
standard deviations are positive y_pred_std = np.abs(y_pred_std) df = pd.DataFrame({ 'y_true': y_true, 'y_pred_mean': y_pred_mean, 'y_pred_std': y_pred_std }) create_stdev_calibration_plot(df) </code></pre> <p>Here's what the calibration looks like with the synthetic data: <a href="https://i.sstatic.net/N8V1T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N8V1T.png" alt="enter image description here" /></a></p> <p>When I run the same function on the output data from my model the plot looks like this: <a href="https://i.sstatic.net/gcMD1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gcMD1.png" alt="enter image description here" /></a></p> <p>Based on the calibration plot. It <em>looks</em> like the model is NOT learning the spread, but just learning the mean and keeping the spread tight to minimize the loss. What changes can I make to my training to incentivize the model to accurately learn the spread?</p> <h3>Update:</h3> <p>One thought I had was to create a custom loss function that is based off of the average <a href="https://www.tensorflow.org/probability/api_docs/python/tfp/stats/expected_calibration_error" rel="nofollow noreferrer">expected calibration error</a> from both the mean and spread calibrations. However, the inputs for loss functions are the <code>y_true</code> tensor and the <code>y_pred</code> tensor from the model. The <code>y_pred</code> would just be samplings from the current learned distribution(s) and I wouldn't be able to know the distribution parameters (<code>loc</code> and <code>scale</code>); that makes the spread calibration impossible. Also expected calibration error is not differentiable due to the binning required, so that makes learning with back propagation impossible as well.</p> <h3>Update 2:</h3> <p>I'm currently looking into changing the loss function to be the negative log likelihood (NLL). 
I'll have the &quot;learned&quot; distribution parameters, so I can just calculate the loss based on the NLL for each data point against the &quot;learned&quot; distributions. I'm not confident this will work, though, because the NLL for only 1 data point (1 per row and distribution combination) <em>might</em> just do the same thing as MSE, since (for a fixed scale) the NLL is minimized when a single data point equals the distribution mean.</p>
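A note on Update 2: unlike MSE, the Gaussian NLL penalizes a mis-stated scale as well as a mis-stated mean, so it does incentivize learning the spread. A minimal plain-Python sketch of that property (no TensorFlow; the formula is the standard Normal log-density — in a Keras/TFP setup the equivalent loss would be something like `lambda y, dist: -dist.log_prob(y)`, which should be verified against the TFP docs):

```python
import math

def gaussian_nll(y, mu, sigma):
    """Negative log-likelihood of y under Normal(mu, sigma)."""
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (y - mu) ** 2 / (2 * sigma ** 2)

# Residuals whose true spread is clearly wider than 0.1: a model that
# predicts a tight sigma is punished, one that predicts a realistic
# sigma is not, even though both predict the correct mean.
ys = [-2.0, -1.0, 0.0, 1.0, 2.0]
nll_tight = sum(gaussian_nll(y, 0.0, 0.1) for y in ys) / len(ys)
nll_right = sum(gaussian_nll(y, 0.0, 1.5) for y in ys) / len(ys)
print(nll_tight > nll_right)  # True: understating the spread raises the NLL
```

This is why the worry that NLL collapses to MSE only applies when the scale is held fixed; with a learned scale, the loss trades the log-term against the quadratic term.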
<python><tensorflow><tf.keras><calibration><tensorflow-probability>
2024-03-29 15:44:42
1
2,138
Jed
78,244,656
6,376,297
Distribute a list of positive numbers into a desired number of sets, aiming to have sums as close as possible between them
<p>My goal in posting this is to integrate my other post <a href="https://stackoverflow.com/q/78236646/6376297">here</a>, where I was asked to provide a minimal working example, but also to ask for advice regarding the code itself, in case someone can suggest a better/faster/more reliable approach. Hence the separate post (not in the same scope as the reproducibility post).</p> <p>You can imagine several use cases of this concept.<br /> E.g. you can have several files to process, each with different numbers of records, and you want to distribute their processing between N parallel lines, ensuring that each line has more or less the same total number of records to handle.<br /> Or, you could have several objects of different weight, and N people to carry them from A to B, and you want to ensure that each person has more or less the same weight to carry.</p> <p>See below my code, based on linear programming (essentially an assignment problem coupled with the minimisation of the sum of absolute differences).</p> <p>For some reason, when run with <code>threads</code> &gt; 1, it gives different results on different runs, as per above post.<br /> But it's much slower if I run it with a single thread, so it's a catch-22 situation.</p> <p><strong>Any advice for improvement / alternative approaches <em>that still deliver an absolute optimal solution</em> (not greedy) would be appreciated.</strong></p> <pre><code># Take a list of numbers and partition them into a desired number of sets, # ensuring that the sum within each set is as close as possible # to the sum of all numbers divided by the number of sets. 
# USER SETTINGS list_of_numbers = [ 21614, 22716, 1344708, 8948, 136944, 819, 7109, 255182, 556354, 1898763, 1239808, 925193, 173237, 64301, 147896, 824564, 16028, 1021326, 108042, 72221, 368270, 17467, 2953, 52942, 1855, 739627, 460833, 30955] N_sets_to_make = 4 import numpy as np import pulp from pulp import * # Calculate the desired size of each set S = N_sets_to_make average_size = sum(list_of_numbers) / S sizes = [average_size] * S # Create the coefficient matrix N = len(list_of_numbers) A = np.array(list_of_numbers, ndmin = 2) # Create the pulp model prob = LpProblem(&quot;Partitioning&quot;, LpMinimize) # Create the pulp variables # x_names : binary variables encoding the presence of each initial number in each final set x_names = ['x_'+str(i) for i in range(N * S)] x = [LpVariable(x_names[i], lowBound = 0, upBound = 1, cat = 'Integer') for i in range(N * S)] # X_names : continuous positive variables encoding the absolute difference between each final set sum and the desired size X_names = ['X_'+str(i) for i in range(S)] X = [LpVariable(X_names[i], lowBound = 0, cat = 'Continuous') for i in range(S)] # Add the objective to the model (minimal sum of X_i) prob += LpAffineExpression([(X[i], 1) for i in range(S) ]) # Add the constraints to the model # Constraints forcing each initial number to be in one and only one final set for c in range(N): prob += LpAffineExpression([(x[c+m*N],+1) for m in range(S)]) == 1 # Constraints forcing each final set to be non-empty for m in range(S): prob += LpAffineExpression([(x[i],+1) for i in range(m*N,(m+1)*N)]) &gt;= 1 # Constraints encoding the absolute values for m in range(S): cs = [c for c in range(N) if A[0,c] != 0] prob += LpAffineExpression([(x[c+m*N],A[0,c]) for c in cs]) - X[m] &lt;= sizes[m] prob += LpAffineExpression([(x[c+m*N],A[0,c]) for c in cs]) + X[m] &gt;= sizes[m] # Solve the model prob.solve(PULP_CBC_CMD(gapRel = 0, timeLimit = 3600, threads = 1)) # Extract the solution values_of_x_i = [value(x[i]) for i in 
range(N * S)] selected_ind_initial_numbers = [(list(range(N)) * S)[i] for i,l in enumerate(values_of_x_i) if l == 1] selected_ind_final_sets = [(list((1 + np.repeat(range(S), N)).astype('int64')))[i] for i,l in enumerate(values_of_x_i) if l == 1] ind_final_set_for_each_initial_number = [x for _, x in sorted(zip(selected_ind_initial_numbers, selected_ind_final_sets))] # Find the numbers that ended up in each final set d = dict() for m, n in sorted(zip(ind_final_set_for_each_initial_number, list_of_numbers)) : if m in d : d[m].append(n) else : d[m] = [n] print(d) # And their sums s = [sum(l) for i, l in enumerate(d.values())] print(s) # And the absolute differences of their sums from the desired sum absdiffs = [np.abs(s[i] - sizes[i]) for i in range(len(s))] print(absdiffs) # And the absolute fractional differences print([absdiffs[i]/sizes[i]/S for i in range(len(absdiffs))]) </code></pre> <hr /> <p><strong>EDIT</strong> modification of the code using row-normalised versions of the desired sizes and of the coefficient matrix, often resulting in faster execution (and making the problem independent from the total sum of numbers)</p> <p>After:</p> <pre><code>sizes = [average_size] * S </code></pre> <p>add:</p> <pre><code>fracs = [1 / S] * S </code></pre> <p>After:</p> <pre><code>A = np.array(list_of_numbers, ndmin = 2) </code></pre> <p>add:</p> <pre><code>An = A / np.sum(A, axis = 1, keepdims = True) </code></pre> <p>Replace:</p> <pre><code>prob += LpAffineExpression([(x[c+m*N],A[0,c]) for c in cs]) - X[m] &lt;= sizes[m] prob += LpAffineExpression([(x[c+m*N],A[0,c]) for c in cs]) + X[m] &gt;= sizes[m] </code></pre> <p>by:</p> <pre><code>prob += LpAffineExpression([(x[c+m*N],An[0,c]) for c in cs]) - X[m] &lt;= fracs[m] prob += LpAffineExpression([(x[c+m*N],An[0,c]) for c in cs]) + X[m] &gt;= fracs[m] </code></pre> <p>Also, unexpectedly (to me):</p> <pre><code>prob.solve(PULP_CBC_CMD(gapRel = 0, timeLimit = 3600, threads = None)) </code></pre> <p>Results in a much faster 
solution (occurring before the time limit is reached) than when the number of threads is specified(?).</p> <hr /> <p><strong>EDIT 2</strong> reporting the results of running the code with the changes suggested by AirSquid and the new version of the code shown in the first edit.</p> <p>Modified version of my code, using <code>fracs</code> and <code>An</code> instead of <code>sizes</code> and <code>A</code>, and <code>threads = None</code>: very fast run time (~10 s):</p> <pre><code>{1: [8948, 255182, 1021326, 1344708], 2: [2953, 7109, 17467, 64301, 72221, 108042, 136944, 556354, 739627, 925193], 3: [819, 1855, 21614, 173237, 368270, 824564, 1239808], 4: [16028, 22716, 30955, 52942, 147896, 460833, 1898763]} [2630164, 2630211, 2630167, 2630133] [4.75, 42.25, 1.75, 35.75] [4.514919432450865e-07, 4.015902021495769e-06, 1.6633913698503185e-07, 3.3980709412656508e-06] </code></pre> <p>My original code, using <code>sizes</code> and <code>A</code>, but no <code>X</code> variables, as per AirSquid's suggestion, <code>gapRel = 0.0001</code>, <code>threads = None</code>: almost instant run time:</p> <pre><code>{1: [819, 2953, 64301, 72221, 739627, 824564, 925193], 2: [108042, 255182, 368270, 1898763], 3: [7109, 16028, 22716, 1239808, 1344708], 4: [1855, 8948, 17467, 21614, 30955, 52942, 136944, 147896, 173237, 460833, 556354, 1021326]} [2629678, 2630257, 2630369, 2630371] [490.75, 88.25, 200.25, 202.25] [4.664624655737393e-05, 8.388245050816607e-06, 1.9033949817858644e-05, 1.922405168869868e-05] </code></pre> <p>Combination of the two (<code>fracs</code> and <code>An</code>, no <code>X</code> variables, <code>gapRel = 0.0001</code>, <code>threads = None</code>): almost instant run time:</p> <pre><code>{1: [819, 17467, 21614, 22716, 64301, 136944, 1021326, 1344708], 2: [1855, 7109, 8948, 30955, 460833, 556354, 739627, 824564], 3: [108042, 255182, 368270, 1898763], 4: [2953, 16028, 52942, 72221, 147896, 173237, 925193, 1239808]} [2629895, 2630245, 2630257, 2630278] [273.75, 
76.25, 88.25, 109.25] [2.6020193571229983e-05, 7.247633825776388e-06, 8.388245050816607e-06, 1.0384314694636988e-05] </code></pre> <p><strong>Conclusion</strong>: if the use of <code>max_sum</code> instead of the individual sums of absolute differences always results in close to optimality for all sets (not only the largest one), this method could be very advantageous, especially if combined with the normalisation, if one is prepared to accept a small nonzero relative gap.</p> <hr /> <p><strong>Addendum</strong> after further testing</p> <p>It turns out that the <code>max_sum</code>-based approach, even when left to run to completion (i.e. allowed to find an optimal solution, not stopped by the gap or time limit), <em>does not result in the desired optimality</em>.</p> <p>Example:</p> <pre><code>list_of_numbers = [10000] + [100, 200, 300, 400, 500] * 5 N_sets_to_make = 3 </code></pre> <p>With original method (min of sum of absolute differences) based on fractions (<code>fracs</code>, <code>An</code>):</p> <pre><code># Final sets {1: [10000], 2: [200, 300, 300, 400, 500], 3: [100, 100, 100, 100, 100, 200, 200, 200, 200, 300, 300, 300, 400, 400, 400, 400, 500, 500, 500, 500]} # Their sums [10000, 1700, 5800] # Absolute differences [4166.666666666667, 4133.333333333333, 33.33333333333303] # Fractional differences [0.23809523809523814, 0.23619047619047617, 0.0019047619047618874] # Their sum 0.4761904761904762 </code></pre> <p>With <code>max_sum</code> method (min of max sum of fractions):</p> <pre><code># Final sets {1: [10000], 2: [500], 3: [100, 100, 100, 100, 100, 200, 200, 200, 200, 200, 300, 300, 300, 300, 300, 400, 400, 400, 400, 400, 500, 500, 500, 500]} # Their sums [10000, 500, 7000] # Absolute differences [4166.666666666667, 5333.333333333333, 1166.666666666667] # Fractional differences [0.23809523809523814, 0.30476190476190473, 0.0666666666666667] # Their sum 0.6095238095238096 </code></pre>
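For sanity-checking solver output on tiny instances (such as the example in the addendum), a brute-force reference that enumerates every assignment can serve as ground truth. This is only a sketch — feasible for a handful of numbers, never for the full 28-element instance:

```python
from itertools import product

def best_partition(nums, n_sets):
    """Exhaustively assign each number to a set, minimising the sum of
    absolute deviations of the set sums from the average set sum."""
    target = sum(nums) / n_sets
    best, best_cost = None, float("inf")
    for assign in product(range(n_sets), repeat=len(nums)):
        if len(set(assign)) < n_sets:      # every set must be non-empty
            continue
        sums = [0.0] * n_sets
        for value, s in zip(nums, assign):
            sums[s] += value
        cost = sum(abs(t - target) for t in sums)
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost

assign, cost = best_partition([10, 20, 30, 40], 2)
print(cost)  # 0.0: {10, 40} and {20, 30} both sum to the target of 50
```

Enumeration grows as n_sets**len(nums), so this is strictly a correctness oracle for the MILP, not an alternative approach.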
<python><optimization><partitioning><linear-programming><pulp>
2024-03-29 15:43:31
1
657
user6376297
78,244,632
2,776,012
How to fix node position and increase edge spacing in Pyvis network graph?
<p>I am very new to Python and its graph libraries; however, I have a requirement to construct a directed graph network with multiple nodes and edges. I have mostly achieved the functionality, but with a couple of hiccups. First, I am unable to fix the position of the nodes in the network, i.e. it changes with every refresh; and secondly, the edges between two nodes kind of overlap. How can I fix these two issues? Code snippets are given below:</p> <pre><code>net = Network('500px','100%',notebook=True, cdn_resources=&quot;remote&quot;, bgcolor=&quot;white&quot;, font_color=&quot;black&quot;, select_menu=True, directed=True, neighborhood_highlight=True ) net.add_node(AppName, label=AppName, title=AppDetails, shape=&quot;oval&quot;, color=&quot;#222234&quot;) net.add_edge(IntSource, IntTarget, label=IntFriceId, title=IntName, color=&quot;#555676&quot;, arrows='to') </code></pre>
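A sketch of the two usual fixes, assuming the vis.js option names (`physics`, `edges.smooth`) that pyvis forwards to the browser via `net.set_options` — treat the exact pyvis call signatures as assumptions to verify against the pyvis documentation:

```python
import json

# vis.js options that pyvis can apply via net.set_options(...).
# Disabling physics keeps nodes where they are placed, so the layout
# survives a refresh; smooth curved edges keep parallel edges apart.
options = {
    "physics": {"enabled": False},
    "edges": {"smooth": {"enabled": True, "type": "curvedCW", "roundness": 0.25}},
}
options_json = json.dumps(options)

# With physics off, each node also needs an explicit position, e.g.
#   net.add_node(AppName, label=AppName, x=120, y=-60, physics=False)
#   net.set_options(options_json)
print(options_json)
```

The `x`/`y` coordinates per node are a hypothetical layout you would compute or hard-code yourself; pyvis does not persist the randomly seeded physics layout between renders.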
<python><charts><networkx><pyvis>
2024-03-29 15:37:06
0
837
Shivayan Mukherjee
78,244,620
547,231
jax: How do we solve the error: pmap was requested to map its argument along axis 0, which implies that its rank should be at least 1, but is only 0?
<p>I'm trying to run this <a href="https://colab.research.google.com/drive/1SeXMpILhkJPjXUaesvzEhc3Ke6Zl_zxJ?usp=sharing#scrollTo=8PPsLx4dGCGa" rel="nofollow noreferrer">simple introduction to score-based generative modeling</a>. The code is using <code>flax.optim</code>, which seems to be moved to <code>optax</code> meanwhile (<a href="https://flax.readthedocs.io/en/latest/guides/converting_and_upgrading/optax_update_guide.html" rel="nofollow noreferrer">https://flax.readthedocs.io/en/latest/guides/converting_and_upgrading/optax_update_guide.html</a>).</p> <p>I've made a <a href="https://colab.research.google.com/drive/13XoMxAOkfYoFpK9-CQoh0jwk2QsNcrin#scrollTo=21v75FhSkfCq" rel="nofollow noreferrer">copy of the colab code</a> with the changes I think needed to be made (I'm only unsure how I need to replace <code>optimizer = flax.jax_utils.replicate(optimizer)</code>).</p> <p>Now, in the <a href="https://colab.research.google.com/drive/13XoMxAOkfYoFpK9-CQoh0jwk2QsNcrin#scrollTo=8PPsLx4dGCGa&amp;line=1&amp;uniqifier=1" rel="nofollow noreferrer">training section</a>, I get the error</p> <blockquote> <p>pmap was requested to map its argument along axis 0, which implies that its rank should be at least 1, but is only 0 (its shape is ())</p> </blockquote> <p>at the line <code>loss, params, opt_state = train_step_fn(step_rng, x, params, opt_state)</code>. This obviously comes from the <code>return jax.pmap(step_fn, axis_name='device')</code> in the &quot;Define the loss function&quot; section.</p> <p>How can I fix this error? I've googled it, but have no idea what's going wrong here.</p>
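The error message means one of the arguments handed to the pmapped function is a scalar (rank 0). Everything passed to `jax.pmap` needs a leading axis of length `jax.local_device_count()`: batches are sharded along it, and scalars (a PRNG key, a step counter) are replicated along it — `flax.jax_utils.replicate` is the usual helper for the latter, which is what the removed `flax.jax_utils.replicate(optimizer)` call was doing. A plain-Python model of both operations, with list slicing standing in for array reshapes:

```python
n_devices = 4                      # stand-in for jax.local_device_count()
batch = list(range(8))             # a global batch of 8 examples

# Shard the batch: the leading axis must equal the device count.
per_device = len(batch) // n_devices
shards = [batch[i * per_device:(i + 1) * per_device] for i in range(n_devices)]

# Rank-0 values are exactly what triggers the pmap error; replicating
# them gives each device its own copy along a new leading axis.
step_value = 42
replicated = [step_value] * n_devices   # ~ flax.jax_utils.replicate(step_value)

print(len(shards), len(shards[0]), len(replicated))  # 4 2 4
```

In the JAX version the sharding is `x.reshape(n_devices, -1, *x.shape[1:])` and PRNG keys are typically split with `jax.random.split(rng, n_devices)` so each device gets a distinct key.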
<python><neural-network><jax><pmap>
2024-03-29 15:34:59
1
18,343
0xbadf00d
78,244,582
3,821,009
Parsing strings with numbers and SI prefixes in polars
<p>Say I have this dataframe:</p> <pre><code>&gt;&gt;&gt; import polars &gt;&gt;&gt; df = polars.DataFrame(dict(j=['1.2', '1.2k', '1.2M', '-1.2B'])) &gt;&gt;&gt; df shape: (4, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β” β”‚ j β”‚ β”‚ --- β”‚ β”‚ str β”‚ β•žβ•β•β•β•β•β•β•β•‘ β”‚ 1.2 β”‚ β”‚ 1.2k β”‚ β”‚ 1.2M β”‚ β”‚ -1.2B β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>How would I go about parsing the above to get:</p> <pre><code>&gt;&gt;&gt; df = polars.DataFrame(dict(j=[1.2, 1_200, 1_200_000, -1_200_000_000])) &gt;&gt;&gt; df shape: (4, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ j β”‚ β”‚ --- β”‚ β”‚ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ 1.2 β”‚ β”‚ 1200.0 β”‚ β”‚ 1.2e6 β”‚ β”‚ -1.2000e9 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ &gt;&gt;&gt; </code></pre>
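The suffix mapping itself is small. Here it is in plain Python; in polars the same idea can be applied by extracting the numeric part and the suffix separately and multiplying (the exact expression-API spelling is left to the polars docs — this sketch only shows the core logic):

```python
MULTIPLIERS = {"k": 1e3, "M": 1e6, "B": 1e9}

def parse_si(s: str) -> float:
    """Parse '1.2k' -> 1200.0; plain numeric strings pass through."""
    if s and s[-1] in MULTIPLIERS:
        return float(s[:-1]) * MULTIPLIERS[s[-1]]
    return float(s)

values = [parse_si(s) for s in ["1.2", "1.2k", "1.2M", "-1.2B"]]
print(values)  # 1.2, 1200.0, 1.2e6, -1.2e9 (up to float rounding)
```

Note that "B" for billions is a finance convention, not a true SI prefix (SI uses "G"), so the mapping dictionary is where you would encode whichever convention your data follows.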
<python><python-polars>
2024-03-29 15:26:58
2
4,641
levant pied
78,244,574
6,207,558
Python BaseHTTPRequestHandler css loaded but not rendered
<p>I'm stumped. The below code will return the index.html as well as the css asked for, but when the index.html is requested by Firefox, it won't render the page with the css applied. I have the same result in Konqueror (which is Chrome-based).</p> <pre><code>class MyServer(BaseHTTPRequestHandler): def do_GET(self): if (self.path.endswith(&quot;/&quot;)): self.send_response(200) self.send_header(&quot;Content-type&quot;, &quot;text/html&quot;) self.end_headers() with open('index.html', 'r') as file: template = file.read() self.wfile.write(bytes(eval_template(template, env).encode(&quot;utf-16&quot;))) elif (self.path.endswith(&quot;.css&quot;)): with open(os.path.join('.', self.path[1:]), 'rb') as file: response = file.read() self.send_response(200) self.send_header(&quot;Content-type&quot;, &quot;text/css&quot;) self.end_headers() self.wfile.write(response) </code></pre> <p>The css is referenced in the index.html in this manner:</p> <pre><code>&lt;link rel=&quot;stylesheet&quot; type=&quot;text/css&quot; href=&quot;style.css&quot; /&gt; </code></pre> <p>When I look in the developer console, I can verify that the request for the style.css file went through without issue (200 OK).</p> <p>When I open the file in Firefox directly as a file, it finds the css in the same directory as well and renders the file with the css applied as intended, so I'm inclined to think my code above is missing something.</p> <p>Thanks for any help with this.</p>
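One thing worth ruling out (a sketch, not a confirmed diagnosis): the HTML body is encoded as UTF-16 while the `Content-type` header declares no charset, so the browser has to guess the page encoding, and a mis-decoded page can fail to apply an otherwise correctly delivered stylesheet. Deriving the Content-Type from the file extension and declaring the charset explicitly avoids that whole class of problem:

```python
import mimetypes

def content_type_for(path: str) -> str:
    """Content-Type header value with an explicit charset for text resources."""
    mime, _ = mimetypes.guess_type(path)
    mime = mime or "application/octet-stream"
    if mime.startswith("text/"):
        # Pair this with body.encode("utf-8") instead of .encode("utf-16")
        mime += "; charset=utf-8"
    return mime

print(content_type_for("index.html"))  # text/html; charset=utf-8
print(content_type_for("style.css"))   # text/css; charset=utf-8
```

In the handler this would replace both hard-coded `send_header("Content-type", ...)` calls, and the HTML branch would write `template.encode("utf-8")` (the extra `bytes(...)` wrapper is redundant either way).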
<python><html><css><basehttprequesthandler>
2024-03-29 15:25:50
1
510
Mr. Wrong
78,244,443
5,618,251
How to replace last 4 digits in Pandas Dataframe by 0101 if they are 9999 (Python)
<p>I have a Dataframe that looks like this:</p> <pre><code>OrdNo year 1 20059999 2 20070830 3 20070719 4 20030719 5 20039999 6 20070911 7 20050918 8 20070816 9 20069999 </code></pre> <p>How to replace last 4 digits in the Pandas Dataframe by 0101 if they are 9999?</p> <p>Thanks</p>
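In pandas this can be done with something like `df['year'].astype(str).str.replace(r'9999$', '0101', regex=True).astype(int)` — the anchored regex only touches a trailing 9999. The per-value logic, in plain Python:

```python
import re

def fix_year(value: int) -> int:
    """Replace a trailing 9999 (placeholder month/day) with 0101."""
    return int(re.sub(r"9999$", "0101", str(value)))

print(fix_year(20059999))  # 20050101
print(fix_year(20070830))  # 20070830 (unchanged)
```

The `$` anchor matters: without it a value like 99990830 would also be rewritten, which is not what the question asks for.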
<python><pandas>
2024-03-29 15:00:06
2
361
user5618251
78,243,767
17,194,313
Why do Python type hints sometimes worsen IDE recommendations?
<p>I'm going through an exercise of adding type hints through a large codebase, but I'm sometimes seeing that less-than-optimal type hint worsens IDE recommendations:</p> <p>Before, the IDE is able to figure out that y['result'] is a string:</p> <p><a href="https://i.sstatic.net/vlcUE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vlcUE.png" alt="Enter image description here" /></a></p> <p>After, it does not know:</p> <p><a href="https://i.sstatic.net/tlMER.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tlMER.png" alt="Enter image description here" /></a></p> <p>I know I could fix this with a more specific type hint:</p> <p><a href="https://i.sstatic.net/iW1wT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iW1wT.png" alt="Enter image description here" /></a></p> <p>But in general, is there a way to know if the type hint being added is less specific that what the <a href="https://en.wikipedia.org/wiki/Language_Server_Protocol" rel="nofollow noreferrer">LSP</a> was able to infer from the code?</p> <p>I've looked to add very detailed type hints to avoid this issue (e.g., subclassing TypeDict), but I am keen to avoid the effort if the LSP is able to figure things out itself.</p> <p>I'm asking if there is a way to warn the user when they are adding a type hint that will worsen the LSP's understanding of the code:</p> <p>We have warnings when the type hints contradict the code:</p> <p><a href="https://i.sstatic.net/oL13p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oL13p.png" alt="Enter image description here" /></a></p> <p>I'm looking for some warning for when the type hint is worse than no type hint at all.</p> <p><a href="https://i.sstatic.net/VSgnX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VSgnX.png" alt="Enter image description here" /></a></p> <p>You can see here that the LSP was easily able to infer a pretty detailed return type.</p> <p><a href="https://i.sstatic.net/bjwsp.png" 
rel="nofollow noreferrer"><img src="https://i.sstatic.net/bjwsp.png" alt="Enter image description here" /></a></p> <p>Which would have been lost if the user added a &quot;bad&quot; type hint.</p> <p><a href="https://i.sstatic.net/KSW5m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KSW5m.png" alt="Enter image description here" /></a></p> <p>But I take the overall feedback. Maybe this is an ask to the <a href="https://devblogs.microsoft.com/python/announcing-pylance-fast-feature-rich-language-support-for-python-in-visual-studio-code/" rel="nofollow noreferrer">Pylance</a> project more than a StackΒ Overflow question.</p>
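A minimal sketch of the TypedDict route mentioned above — with a return hint like this, the checker knows `y['result']` is a `str` again rather than a broad union. The names here are illustrative, not from the original codebase:

```python
from typing import TypedDict

class Result(TypedDict):
    result: str
    count: int

def lookup() -> Result:
    # With this return hint the checker keeps per-key types:
    # y["result"] is str, y["count"] is int. A looser hint such as
    # dict[str, str | int] would collapse both keys to the union.
    return {"result": "ok", "count": 3}

y = lookup()
print(y["result"].upper())  # prints OK: .upper() is valid on a str
```

This captures the trade-off in the question: a `TypedDict` is at least as specific as what the LSP inferred, whereas a generic `dict` hint is the kind of annotation that is "worse than no type hint at all".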
<python><python-typing><pylance>
2024-03-29 12:32:00
2
3,075
MYK
78,243,747
774,575
How to use two key functions when sorting a MultiIndex dataframe?
<p>In this call to <code>df.sort_index()</code> on a MultiIndex dataframe, how to use <code>func_2</code> for level <code>two</code>?</p> <pre><code>func_1 = lambda s: s.str.lower() func_2 = lambda x: np.abs(x) m_sorted = df_multi.sort_index(level=['one', 'two'], key=func_1) </code></pre> <p>The <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.sort_index.html#pandas.DataFrame.sort_index" rel="noreferrer">documentation</a> says &quot;<em>For MultiIndex inputs, the key is applied per level</em>&quot;, which is ambiguous.</p> <hr /> <pre><code>import pandas as pd import numpy as np np.random.seed(3) # Create multiIndex choice = lambda a, n: np.random.choice(a, n, replace=True) df_multi = pd.DataFrame({ 'one': pd.Series(choice(['a', 'B', 'c'], 8)), 'two': pd.Series(choice([1, -2, 3], 8)), 'A': pd.Series(choice([2,6,9,7] ,8)) }) df_multi = df_multi.set_index(['one', 'two']) # Sort MultiIndex func_1 = lambda s: s.str.lower() func_2 = lambda x: np.abs(x) m_sorted = df_multi.sort_index(level=['one'], key=func_1) </code></pre>
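"Applied per level" appears to mean the key callable is invoked once for each level, receiving that level as an Index — so a single callable can dispatch on `idx.name`, e.g. `key=lambda idx: idx.str.lower() if idx.name == 'one' else idx.map(abs)` (treat this reading of the docs as an assumption to verify). The dispatch idea, modelled on plain `(one, two)` tuples:

```python
# One key function that transforms each level differently, keyed on the
# level name -- the same shape as the pandas lambda sketched above.
def level_key(name, value):
    return value.lower() if name == "one" else abs(value)

index = [("B", -2), ("a", 3), ("B", 1), ("a", -2)]
ordered = sorted(
    index,
    key=lambda t: (level_key("one", t[0]), level_key("two", t[1])),
)
print(ordered)  # [('a', -2), ('a', 3), ('B', 1), ('B', -2)]
```

The point is that `sort_index` does not accept one key per level; the single callable has to branch internally.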
<python><pandas><sorting><multi-index>
2024-03-29 12:27:15
1
7,768
mins
78,243,509
3,616,293
LeNet5 & Self-Organizing Maps - RuntimeError: Trying to backward through the graph a second time - PyTorch
<p>I have a LeNet-5 CNN training with a Self-Organizing Map trained on MNIST data. The training code (for brevity) is:</p> <pre><code># SOM (flattened) weights- # m = 40, n = 40, n = 84 (LeNet's output shape/dim) centroids = torch.randn(m * n, dim, device = device, dtype = torch.float32) locs = [np.array([i, j]) for i in range(m) for j in range(n)] locations = torch.LongTensor(np.asarray(locs)).to(device) del locs def get_bmu_distance_squares(bmu_loc): bmu_distance_squares = torch.sum( input = torch.square(locations.float() - bmu_loc), dim = 1 ) return bmu_distance_squares distance_mat = torch.stack([get_bmu_distance_squares(loc) for loc in locations]) centroids = centroids.to(device) num_epochs = 50 qe_train = list() step = 1 for epoch in range(1, num_epochs + 1): qe_epoch = 0.0 for x, y in train_loader: x = x.to(device) z = model(x) # SOM training code: batch_size = len(z) # Compute distances from batch to (all SOM units) centroids- dists = torch.cdist(x1 = z, x2 = centroids, p = p_norm) # Find closest (BMU) and retrieve the gaussian correlation matrix # for each point in the batch # bmu_loc is BS, num points- mindist, bmu_index = torch.min(dists, -1) # print(f&quot;quantization error = {mindist.mean():.4f}&quot;) bmu_loc = locations[bmu_index] # Compute the SOM weight update: # Update LR # It is a matrix of shape (BS, centroids) or, (BS, mxn) and tells # for each input how much it will affect each (SOM unit) centroid- bmu_distance_squares = distance_mat[bmu_index] # Get current lr and neighbourhood radius for current step- decay_val = scheduler(it = step, tot = int(len(train_loader) * num_epochs)) curr_alpha = (alpha * decay_val).to(device) curr_sigma = (sigma * decay_val).to(device) # Compute Gaussian neighbourhood function- neighborhood_func = torch.exp(torch.neg(torch.div(bmu_distance_squares, ((2 * torch.square(curr_sigma)) + 1e-5)))) expanded_z = z.unsqueeze(dim = 1).expand(-1, grid_size, -1) expanded_weights = centroids.unsqueeze(0).expand((batch_size, -1, 
-1)) delta = expanded_z - expanded_weights lr_multiplier = curr_alpha * neighborhood_func delta.mul_(lr_multiplier.reshape(*lr_multiplier.size(), 1).expand_as(delta)) delta = torch.mean(delta, dim = 0) new_weights = torch.add(centroids, delta) centroids = new_weights # return bmu_loc, torch.mean(mindist) # Compute quantization error los- qe_loss = torch.mean(mindist) qe_epoch += qe_loss.item() # Empty accumulated gradients- optimizer.zero_grad() # Perform backprop- qe_loss.backward() # Update model trainable params- optimizer.step() step += 1 qe_train.append(qe_epoch / len(train_loader)) print(f&quot;\nepoch = {epoch}, QE = {qe_epoch / len(train_loader):.4f}&quot; f&quot; &amp; SOM wts L2-norm = {torch.norm(input = centroids, p = 2).item():.4f}&quot; ) </code></pre> <p>On trying to execute this code, I get the error:</p> <p><em>line 252: qe_loss.backward()</em></p> <pre><code>Traceback (most recent call last): File &quot;c:\some_dir\som_lenet5.py&quot;, line 252, in &lt;module&gt; qe_loss.backward() File &quot;c:\pytorch_venv\pytorch_cuda\lib\site-packages\torch\_tensor.py&quot;, line 522, in backward torch.autograd.backward( File &quot;c:\pytorch_venv\pytorch_cuda\lib\site-packages\torch\autograd\__init__.py&quot;, line 266, in backward Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward. </code></pre>
<python><pytorch><conv-neural-network>
2024-03-29 11:29:45
1
2,518
Arun
78,243,450
589,571
Extending Airflow DAG class - is this a bad practice?
<p>I didn't see any examples of this, so I am wondering if it is a bad practice to extend the DAG class. Is it a bad practice, and why, if it is?</p> <p>An example of where I can see this being useful follows...</p> <p>Let's say we have a number of DAGs which all share the same behaviour: calling a specific function as a very last thing, regardless of success or failure. This function could be something like invoking some external API, for instance.</p> <p>My idea to approach this would be something along these lines:</p> <ul> <li>extend the DAG class, creating a new class DAGWithFinishAction</li> <li>implement on_success_callback and on_failure_callback in DAGWithFinishAction to do what I wanted to achieve</li> <li>use the new class in <code>with DAGWithFinishAction(dag_id=..., ...) as dag: ...</code></li> <li>schedule tasks in each of the implementing DAGs</li> <li>expect that each of those DAGs calls its success/failure callbacks after all tasks are finished (in any state)</li> </ul> <p>Is there anything wrong with this approach? I couldn't find anything similar, which makes me believe I am missing something.</p> <pre><code>class DAGWithFinishAction(DAG): def __init__(self, dag_id, **kwargs): self.metric_callback = publish_execution_time on_success_callback = kwargs.get(&quot;on_success_callback&quot;) if on_success_callback is None: on_success_callback = self.metric_callback else: if isinstance(on_success_callback, list): on_success_callback.append(self.metric_callback) else: on_success_callback = [on_success_callback, self.metric_callback] kwargs[&quot;on_success_callback&quot;] = on_success_callback super().__init__(dag_id, **kwargs) with DAGWithFinishAction(dag_id=..., ...) as dag: ... </code></pre> <p>The code above works, but I am still not sure whether this is something that should be avoided or a legitimate approach when designing DAGs.</p>
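The callback-merging logic in `__init__` can be factored into a small helper that normalises the three shapes an Airflow callback argument can take (None, a single callable, or a list) before appending one more. A plain-Python sketch, no Airflow import needed:

```python
def with_extra_callback(existing, extra):
    """Normalise a callback argument (None, callable, or list of callables)
    and append one more callback, always returning a list."""
    if existing is None:
        return [extra]
    if isinstance(existing, list):
        return [*existing, extra]
    return [existing, extra]

# The subclass __init__ then shrinks to one line per callback kind:
#   kwargs["on_success_callback"] = with_extra_callback(
#       kwargs.get("on_success_callback"), publish_execution_time)
calls = []
merged = with_extra_callback(lambda ctx: calls.append("user"),
                             lambda ctx: calls.append("metric"))
for cb in merged:
    cb({})
print(calls)  # ['user', 'metric']
```

Returning a fresh list (rather than mutating with `.append`) also avoids surprising a caller who passed a shared list of callbacks.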
<python><airflow><orchestration>
2024-03-29 11:15:20
0
2,182
ezamur
78,243,391
5,049,813
How to type-hint a truly optional parameter with overloads
<p>I have this function complete with type-hinting:</p> <pre class="lang-py prettyprint-override"><code>class HDF5DataTypes(Enum): SCALAR = &quot;scalar&quot; ARRAY = &quot;array&quot; UNKNOWN = &quot;unknown&quot; @overload def index_hdf5_to_value(file_or_group: Union[h5py.File, h5py.Group], indexes: List[str], expected_output_type: Literal[HDF5DataTypes.SCALAR]) -&gt; np.float_: ... @overload def index_hdf5_to_value(file_or_group: Union[h5py.File, h5py.Group], indexes: List[str], expected_output_type: Literal[HDF5DataTypes.ARRAY]) -&gt; npt.NDArray: ... @overload def index_hdf5_to_value(file_or_group: Union[h5py.File, h5py.Group], indexes: List[str], expected_output_type: Literal[HDF5DataTypes.UNKNOWN]) -&gt; Union[npt.NDArray, np.float_]: ... def index_hdf5_to_value(file_or_group: Union[h5py.File, h5py.Group], indexes: List[str], expected_output_type: HDF5DataTypes=HDF5DataTypes.UNKNOWN) -&gt; Union[npt.NDArray, np.float_]: '''Given a file or group, returns the output of indexing the file or group with the indexes down until it gets to the dataset, at which point it gives back the value of the dataset (either the scalar or numpy array). 
''' dataset = index_hdf5(file_or_group, indexes, h5py.Dataset) if len(dataset.shape) == 0: if expected_output_type == HDF5DataTypes.ARRAY: raise ValueError(f&quot;Expected output to be an array, but it was a scalar&quot;) return cast(np.float_, dataset[()]) else: if expected_output_type == HDF5DataTypes.SCALAR: raise ValueError(f&quot;Expected output to be a scalar, but it was an array&quot;) return cast(npt.NDArray, dataset[:]) </code></pre> <p>However, when I go to call it with <code>index_hdf5_to_value(hdf5_file, [&quot;key1&quot;, &quot;key2&quot;])</code> I get the error <code>No overloads for &quot;index_hdf5_to_value&quot; match the provided arguments Argument types: (File, list[str])</code>.</p> <p>In short, it's upset because I didn't provide the third parameter.</p> <p>Now, I could type-hint the third parameter as <code>Optional</code>, but then I worry that that hints that calling the function with <code>index_hdf5_to_value(hdf5_file, [&quot;key1&quot;, &quot;key2&quot;], None)</code> is okay, which it is not.</p> <p><strong>How should I correctly type-hint this function in order to tell the user that the third parameter is optional, but cannot be set to <code>None</code>? (That is, it's optional, but not <code>Optional</code>.)</strong></p>
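One fix is to give the `UNKNOWN` overload a default (`= ...`) so that the two-argument call matches it, mirroring the default on the implementation. A self-contained model with toy stand-ins for the h5py types (the names and the toy body are illustrative only):

```python
from enum import Enum
from typing import Literal, Union, overload

class Kind(Enum):
    SCALAR = "scalar"
    ARRAY = "array"
    UNKNOWN = "unknown"

@overload
def read(key: str, kind: Literal[Kind.SCALAR]) -> float: ...
@overload
def read(key: str, kind: Literal[Kind.ARRAY]) -> list: ...
@overload
def read(key: str, kind: Literal[Kind.UNKNOWN] = ...) -> Union[float, list]: ...
# The "= ..." above is the key: it marks the parameter as omittable in
# this overload without admitting None as a legal value.

def read(key: str, kind: Kind = Kind.UNKNOWN) -> Union[float, list]:
    # Toy implementation: scalars for short keys, arrays otherwise.
    return 1.0 if len(key) <= 4 else [1.0, 2.0]

print(read("key1"))              # two-arg call now matches the UNKNOWN overload
print(read("long_key", Kind.ARRAY))
```

Because the default is only on the `UNKNOWN` overload, `read("key1", None)` still fails type checking, which is exactly the "optional but not `Optional`" behaviour asked for.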
<python><python-typing><pyright>
2024-03-29 11:04:04
1
5,220
Pro Q
78,243,381
15,474,507
How to include Poppler for docker build
<p>I use <a href="https://github.com/mathewskevin/pdf-to-cbz/blob/master/pdf_to_cbz.py" rel="nofollow noreferrer">this</a> script to convert pdf into cbz. On Windows there is no problem: I just add the Poppler bin folder to PATH.<br> But I am trying to understand how I can install <a href="https://github.com/cbrunet/python-poppler" rel="nofollow noreferrer">Poppler</a> in my Dockerfile (for bot development):</p> <pre><code>FROM python:3.8 RUN echo 'deb http://deb.debian.org/debian/ bullseye main contrib non-free' &gt; /etc/apt/sources.list &amp;&amp; \ apt-get update &amp;&amp; apt-get install -y unrar WORKDIR /app COPY . /app RUN pip install -r requirements.txt CMD [&quot;python&quot;, &quot;app.py&quot;] </code></pre> <p>Is it enough to add <code>python-poppler</code> to <code>requirements.txt</code>, or do I need to install something else as a separate system package?</p>
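A sketch of the Dockerfile with the system-level Poppler packages added. The package names are assumed from Debian bullseye and should be verified; the point is that `python-poppler` in requirements.txt only covers the Python binding, while Poppler itself has to come from apt:

```dockerfile
FROM python:3.8

# Poppler is a system library, so pip alone is not enough.
#  - poppler-utils      : CLI tools (pdftoppm, pdftocairo) used by pdf2image
#  - libpoppler-cpp-dev : headers needed if pip has to build python-poppler
# (package names assumed for Debian bullseye; adjust for other bases)
RUN echo 'deb http://deb.debian.org/debian/ bullseye main contrib non-free' > /etc/apt/sources.list && \
    apt-get update && \
    apt-get install -y unrar poppler-utils libpoppler-cpp-dev && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY . /app

# requirements.txt then lists the Python side, e.g. a line reading
# "python-poppler" (or "pdf2image", depending on what the script imports)
RUN pip install -r requirements.txt

CMD ["python", "app.py"]
```

Which apt package is actually needed depends on the Python package: pdf2image shells out to the poppler-utils binaries, while python-poppler compiles against the poppler-cpp headers.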
<python><docker>
2024-03-29 11:01:17
1
307
Alex Doc
78,243,370
20,920,790
How to simulate a window function with partition by in pandas?
<p>I got this data with Nulls in original_eur column.</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: left;">event_id</th> <th style="text-align: left;">category</th> <th style="text-align: left;">rounds_bot_date</th> <th style="text-align: right;">original_eur</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: left;">1</td> <td style="text-align: left;">Category 1</td> <td style="text-align: left;">2024-03-25 00:00:00</td> <td style="text-align: right;">200</td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: left;">1</td> <td style="text-align: left;">Category 1</td> <td style="text-align: left;">2024-03-25 00:00:00</td> <td style="text-align: right;">nan</td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: left;">2</td> <td style="text-align: left;">Category 2</td> <td style="text-align: left;">2024-03-25 00:00:00</td> <td style="text-align: right;">nan</td> </tr> <tr> <td style="text-align: right;">3</td> <td style="text-align: left;">2</td> <td style="text-align: left;">Category 2</td> <td style="text-align: left;">2024-03-25 00:00:00</td> <td style="text-align: right;">150</td> </tr> <tr> <td style="text-align: right;">4</td> <td style="text-align: left;">2</td> <td style="text-align: left;">Category 2</td> <td style="text-align: left;">2024-03-25 00:00:00</td> <td style="text-align: right;">150</td> </tr> <tr> <td style="text-align: right;">5</td> <td style="text-align: left;">2</td> <td style="text-align: left;">Category 1</td> <td style="text-align: left;">2024-03-25 00:00:00</td> <td style="text-align: right;">nan</td> </tr> <tr> <td style="text-align: right;">6</td> <td style="text-align: left;">3</td> <td style="text-align: left;">Category 3</td> <td style="text-align: left;">2024-03-25 00:00:00</td> <td style="text-align: right;">nan</td> </tr> <tr> <td style="text-align: 
right;">7</td> <td style="text-align: left;">3</td> <td style="text-align: left;">Category 2</td> <td style="text-align: left;">2024-03-25 00:00:00</td> <td style="text-align: right;">150</td> </tr> <tr> <td style="text-align: right;">8</td> <td style="text-align: left;">3</td> <td style="text-align: left;">Category 3</td> <td style="text-align: left;">2024-03-25 00:00:00</td> <td style="text-align: right;">60</td> </tr> <tr> <td style="text-align: right;">9</td> <td style="text-align: left;">3</td> <td style="text-align: left;">Category 2</td> <td style="text-align: left;">2024-03-25 00:00:00</td> <td style="text-align: right;">150</td> </tr> </tbody> </table></div> <p>I need replace each null in columns with median value for appropriate event_id, category, rounds_bot_date.</p> <p>With SQL I can use case + median window function:</p> <pre><code>case when original_eur = NaN then median(original_eur) over(partition by event_id, category, rounds_bot_date) else original_eur end as original_eur </code></pre> <p>For Pandas I make table with medians:</p> <pre><code>median_table = ( dataset .groupby(['event_id', 'category', 'rounds_bot_date']) .agg(original_eur_median = ('original_eur', 'median')) .reset_index() ) </code></pre> <p>And apply this fuction to dataset:</p> <pre><code>def fill_na(value, event_id, category, rounds_bot_date, median_table: pd.DataFrame): if math.isnan(value): value = ( median_table[ (median_table['event_id'] == event_id) &amp; (median_table['category'] == category) &amp; (median_table['rounds_bot_date'] == rounds_bot_date)]['original_eur_median'].values[0] ) return value else: return value dataset['original_eur'] = ( dataset .apply( lambda x: fill_na(x['original_eur'], x['event_id'], x['category'], x['rounds_bot_date'], median_table), axis = 1) ) </code></pre> <p>Is there any way to optimize this code and simulate median window function in Pandas?</p> <p>P. S. 
I can use iterrows with the same logic, but it's not as fast as the SQL function.</p> <p>Solution:</p> <pre><code># add new column with median values dataset['original_eur_median'] = ( dataset .groupby(['event_id', 'category', 'rounds_bot_date'])['original_eur'] .transform('median') ) # fill NaN with median values. dataset['original_eur'] = dataset['original_eur'].fillna(dataset['original_eur_median']) </code></pre>
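The two steps in the solution above can also be collapsed into one pass: `groupby().transform('median')` broadcasts the per-group median back to the original row order, so it can feed `fillna` directly without a helper column. A minimal sketch on a toy frame (column values are illustrative, not the real data):

```python
import pandas as pd

df = pd.DataFrame({
    "event_id": [1, 1, 2, 2, 2],
    "category": ["A", "A", "B", "B", "B"],
    "rounds_bot_date": ["2024-03-25"] * 5,
    "original_eur": [200.0, None, None, 150.0, 150.0],
})

# Per-group median (NaNs are ignored by median), broadcast to the original
# shape, then used as the fill value for the missing entries.
group_median = df.groupby(["event_id", "category", "rounds_bot_date"])["original_eur"].transform("median")
df["original_eur"] = df["original_eur"].fillna(group_median)
```

This mirrors the SQL window function: `transform` plays the role of `median(...) over (partition by ...)`.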
<python><pandas><window-functions>
2024-03-29 10:58:31
1
402
John Doe
78,243,249
17,015,816
Scraping Text through sections using scrapy
<p>I am currently using Scrapy to scrape a website. The website has n sublinks which I was able to enter. Each sublink has 3 things I need: title, description and content. I am able to get the title and description, but the content is split across n sections, where the number of sections differs per sublink, like in this example <a href="https://i.sstatic.net/K9Zmn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K9Zmn.png" alt="enter image description here" /></a></p> <p>Now I tried using loops to go through each section and store it, but the yield gives me the title, description, and the content from the last section only.</p> <p>Below is the code:</p> <pre><code>def parse_instructions(self, response): title = response.xpath('//*[@id=&quot;d-article&quot;]/div[1]/div[1]/h1/text()').get() description = response.xpath('//*[@id=&quot;ency_summary&quot;]/p/text()').getall() joined_description = ' '.join(description) sections = response.css('section div.section:not([class*=&quot; &quot;])') for section in sections: section_text = ' '.join(section.css('p::text').getall()) section_text = ' '.join('a::text').getall() section_text = ' '.join('ul::text').getall() yield { &quot;title&quot;: title, &quot;description&quot;: joined_description, &quot;section_text&quot;: section_text, } </code></pre>
<python><web-scraping><scrapy>
2024-03-29 10:31:10
1
479
Sairam S
78,243,244
15,163,418
Python pillow library text align center
<p>I am trying to center-align the text, but it doesn't work as I expected.</p> <p>Expected output: <a href="https://imgur.com/5HU7TBv.jpg" rel="nofollow noreferrer">https://imgur.com/5HU7TBv.jpg</a> (photoshop)</p> <p>My output: <a href="https://i.imgur.com/2jpgNr6.png" rel="nofollow noreferrer">https://i.imgur.com/2jpgNr6.png</a> (python code)</p> <pre class="lang-py prettyprint-override"><code>from PIL import Image from PIL import ImageDraw from PIL import ImageFont from PIL import ImageEnhance # Open an Image and resize img = Image.open(&quot;input.jpg&quot;) # Calculate width to maintain aspect ratio for 720p height original_width, original_height = img.size new_height = 720 new_width = int((original_width / original_height) * new_height) img = img.resize((new_width, new_height), Image.LANCZOS) # Use Image.LANCZOS for antialiasing # Lower brightness to 50% enhancer = ImageEnhance.Brightness(img) img = enhancer.enhance(0.5) # 0.5 means 50% brightness # Call draw Method to add 2D graphics in an image I1 = ImageDraw.Draw(img) # Custom font style and font size myFont = ImageFont.truetype(&quot;Fact-Bold.ttf&quot;, 105) # Calculate text position to center it text_x = (img.width) // 2 text_y = (img.height) // 2 # Add Text to an image I1.text((text_x, text_y), &quot;Movies&quot;, font=myFont, fill=(255, 255, 255)) # Display edited image img.show() </code></pre>
<python><image><image-processing><python-imaging-library>
2024-03-29 10:29:44
1
541
Raghavan Vidhyasagar
78,242,979
961,631
How to use Anaconda?
<p>I am on Windows 10 and I need to switch between Python environments. I found there is a program named &quot;Anaconda&quot; for that.</p> <p>After installing the heavy (1GB) Anaconda installer with default options, the command prompt still does not recognize the <code>conda</code> command.</p> <p>I searched and then found there should be an &quot;Anaconda Prompt&quot;, so I ran a Windows search for it; however, no luck:</p> <p><a href="https://i.sstatic.net/RpXUh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RpXUh.png" alt="" /></a></p> <p>The only way I was able to launch something that recognizes this program is by opening the Anaconda Navigator, which proposed an update, and it's still loading that.</p> <p><a href="https://i.sstatic.net/0K4d9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0K4d9.png" alt="" /></a></p> <p>Is there a <em>simple</em> way to switch between Python environments?</p>
<python><anaconda>
2024-03-29 09:35:15
3
15,427
serge
78,242,915
4,095,235
cov2corr() for scipy sparse matrices
<p>How do I make (big) sparse covariance matrices into sparse correlation matrices?</p> <p>Following the code for <a href="https://www.statsmodels.org/dev/generated/statsmodels.stats.moment_helpers.cov2corr.html" rel="nofollow noreferrer"><code>statsmodels.stats.moment_helpers.cov2corr()</code></a>, if the covariance matrix isn't too big I can (element-wise) divide it by the (dense!) standard deviation outer product, and convert back to sparse:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from scipy import sparse A = np.array([ [1.0, 0.2, 0.3, 0.0, 0.0], [0.2, 2.0, 1.0, 0.0, 0.0], [0.3, 1.0, 3.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0, 0.5], [0.0, 0.0, 0.0, 0.5, 4.0]]) cov = sparse.csr_matrix(A) cor = sparse.csr_matrix(cov / np.outer(np.sqrt(cov.diagonal()), np.sqrt(cov.diagonal()))) cor.toarray() </code></pre> <blockquote> <pre><code>array([[1. , 0.14142136, 0.17320508, 0. , 0. ], [0.14142136, 1. , 0.40824829, 0. , 0. ], [0.17320508, 0.40824829, 1. , 0. , 0. ], [0. , 0. , 0. , 1. , 0.25 ], [0. , 0. , 0. , 0.25 , 1. ]]) </code></pre> </blockquote> <p>However the dense standard deviation outer product is n x n, and for n = 100K and up, this is wasteful if at all possible with even a large RAM.</p>
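One way to avoid materializing the dense n x n outer product is to apply the same identity in sparse form: a correlation matrix is D⁻¹ Σ D⁻¹ with D the diagonal of standard deviations, and a sparse diagonal times a sparse matrix stays sparse. A sketch (not the statsmodels implementation, just the sparse analogue; only an n-vector is ever dense):

```python
import numpy as np
from scipy import sparse

def sparse_cov2corr(cov):
    """Sparse covariance -> sparse correlation via D^-1 @ cov @ D^-1.

    Only the length-n vector of inverse standard deviations is dense;
    the two diagonal multiplications preserve sparsity throughout.
    """
    inv_sd = 1.0 / np.sqrt(cov.diagonal())
    d_inv = sparse.diags(inv_sd)
    return (d_inv @ cov @ d_inv).tocsr()

A = np.array([
    [1.0, 0.2, 0.3, 0.0, 0.0],
    [0.2, 2.0, 1.0, 0.0, 0.0],
    [0.3, 1.0, 3.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0, 0.5],
    [0.0, 0.0, 0.0, 0.5, 4.0]])
cor = sparse_cov2corr(sparse.csr_matrix(A))
```

For n = 100K this needs O(nnz) work and O(n) extra memory instead of an n x n dense array.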
<python><scipy><sparse-matrix><covariance-matrix>
2024-03-29 09:18:09
1
3,709
Giora Simchoni
78,242,608
5,379,182
Why is query via SQLAlchemy session slower on first run
<p>I am profiling an SQLAlchemy repository method that fetches orders as of a certain date:</p> <pre><code>def test_profile_get_orders_asof(): for i in range(1, 11): with DBContextManager() as session: data_repo = SqlAlchemyRepository(session) sut = data_repo.get_orders_asof sut = profile( sut, immediate=True, filename=f&quot;get_orders_asof_{i}.prof&quot; ) sut(datetime(2022, 12, 1, tzinfo=pytz.utc)) </code></pre> <p>And the total execution time is always higher for the first call of the method:</p> <pre><code>[1.3190752999999988, 0.2215163999999999, 0.25437130000000036, 0.19252480000000013, 0.1886178, 0.22220109999999996, 0.21765510000000007, 0.2034394999999999, 0.19447940000000002, 0.20421450000000013] </code></pre> <p>The method <code>data_repo.get_orders_asof</code> looks something like this:</p> <pre><code>from datetime import datetime import pandas as pd def get_orders_asof(self, date: datetime) -&gt; pd.DataFrame: asof_date = date.strftime(&quot;%Y-%m-%d&quot;) column_types = { &quot;id&quot;: &quot;int&quot;, &quot;quantity&quot;: &quot;int&quot;, &quot;price&quot;: &quot;float&quot;, &quot;date&quot;: &quot;str&quot; } return pd.read_sql( &quot;SELECT * FROM Orders WHERE order_date &lt;= ?&quot;, con=self.session.get_bind(), params=(asof_date,), dtype=column_types ) </code></pre> <p>and is executed against an SQL Server database. Why is the first call so much slower than the subsequent calls? Is the database caching anything?</p>
<python><sql-server><sqlalchemy>
2024-03-29 08:00:04
0
3,003
tenticon
78,242,561
11,046,379
Pandas dataframe : Replace value according case conditions
<p>There is a dataframe with one column:</p> <pre><code>disposition --------- NO ANSWER ANSWERED FAILED BUSY ERROR WARNING CANCEL </code></pre> <p>Code:</p> <pre><code>import pandas as pd data1 = {'disposition': ['NO ANSWER', 'ANSWERED', 'FAILED', 'BUSY', 'ERROR', 'WARNING', 'CANCEL']} df = pd.DataFrame(data1) </code></pre> <p>How do I replace values according to these conditions:</p> <pre><code> WHEN disposition = 'NO ANSWER' THEN 0 WHEN disposition = 'ANSWERED' THEN 1 WHEN disposition = 'FAILED' THEN 2 WHEN disposition = 'BUSY' THEN 3 ELSE 9 </code></pre> <p>The desired result is:</p> <pre><code>disposition --------- 0 1 2 3 9 9 9 </code></pre>
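A vectorized equivalent of this CASE expression is `Series.map` with a fallback fill: unmapped values become NaN, which `fillna` turns into the ELSE branch. Sketch:

```python
import pandas as pd

df = pd.DataFrame({'disposition': ['NO ANSWER', 'ANSWERED', 'FAILED', 'BUSY',
                                   'ERROR', 'WARNING', 'CANCEL']})

mapping = {'NO ANSWER': 0, 'ANSWERED': 1, 'FAILED': 2, 'BUSY': 3}
# map() leaves NaN where no key matches; fillna(9) is the ELSE 9 branch.
df['disposition'] = df['disposition'].map(mapping).fillna(9).astype(int)
```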
<python><pandas>
2024-03-29 07:45:41
2
1,658
harp1814
78,242,488
354,051
similarity between two numpy arrays based on shape but not distance
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import numpy as np from numpy.linalg import norm def cosine_similarity(arr1:np.ndarray, arr2:np.ndarray)-&gt;float: dot_product = np.dot(arr1, arr2) magnitude = norm(arr1) * norm(arr2) similarity = dot_product / magnitude return similarity def euclidean_distance(arr1:np.ndarray, arr2:np.ndarray)-&gt;float: return 1 / (1 + np.linalg.norm(arr1 - arr2)) black = np.array([0.93036434, 0.80134155, 0.82428051, 0.88877041, 0.90235719, 0.86631497, 0.82428051, 0.84878065, 0.99113482, 0.81413637, 0.82428051, 0.80268685, 0.76705671, 0.76605398, 0.82428051, 0.81137288, 0.83886563, 0.80749507, 0.82428051]) blue = np.array([1., 0.75256457, 0.78572852, 0.84459419, 0.88112504, 0.82160288, 0.78572852, 0.8022456 , 0.9949841 , 0.78979966, 0.78572852, 0.76791598, 0.70410357, 0.72986952, 0.78572852, 0.76683488, 0.78731431, 0.77301876, 0.78572852]) green = np.array([1., 0.62172262, 0.60678783, 0.57714708, 0.73848085, 0.69695676, 0.60678783, 0.58584646, 0.60622072, 0.6202182 , 0.60678783, 0.57949767, 0.52131047, 0.5814518 , 0.60678783, 0.5958478 , 0.62959938, 0.60829778, 0.60678783]) fig = plt.figure(figsize=(8, 4), dpi=80) gs = fig.add_gridspec(1, hspace=0) axs = gs.subplots() print(&quot;cosine_similarity = &quot;, cosine_similarity(black, blue)) print(&quot;cosine_similarity = &quot;, cosine_similarity(black, green)) print(&quot;euclidean_distance = &quot;, euclidean_distance(black, blue)) print(&quot;euclidean_distance = &quot;, euclidean_distance(black, green)) axs.plot(black, color='black') axs.plot(blue, color='blue') axs.plot(green, color='green') fig.tight_layout() plt.show() </code></pre> <p><a href="https://i.sstatic.net/RQ1JS.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RQ1JS.jpg" alt="angular similarity" /></a></p> <p>I'm trying to create a similarity factor between two numpy arrays based on shape rather than distance. 
Even though the shapes (blue and green) are visually different, the code prints almost the same factor.</p> <pre><code>cosine_similarity = 0.9993680126707705 cosine_similarity = 0.9914859250612972 </code></pre>
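Raw cosine similarity saturates near 1 here because all three curves share a large positive mean; the angle between them is tiny regardless of shape. Subtracting each curve's mean before taking the cosine (i.e. computing the Pearson correlation) removes the shared level and leaves only the shape. A hedged sketch, one of several possible shape metrics:

```python
import numpy as np

def shape_similarity(a, b):
    """Pearson correlation: cosine similarity of the mean-centered curves.

    Invariant to a constant vertical shift, so curves with the same shape
    at different levels score 1.0.
    """
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

x = np.array([0.0, 1.0, 2.0, 3.0])
same_shape = x + 10.0                      # identical shape, shifted level
other_shape = np.array([0.0, 2.0, 1.0, 3.0])  # different shape
```

Dividing each centered curve by its standard deviation as well would additionally ignore differences in amplitude, if that is desired.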
<python><similarity>
2024-03-29 07:24:07
2
947
Prashant
78,242,484
10,200,497
How can I create groups based on ascending streak of a column?
<p>This is my DataFrame:</p> <pre><code>import pandas as pd df = pd.DataFrame( { 'a': [10, 14, 20, 10, 12, 5, 3] } ) </code></pre> <p>And this is the expected output. I want to create four groups:</p> <pre><code> a 0 10 1 14 2 20 a 3 10 4 12 a 5 5 a 6 3 </code></pre> <p>From top to bottom, as long as <code>a</code> is increasing or staying equal, the group does not change. That is why the first three rows are in one group. However, in row <code>3</code>, <code>a</code> has decreased (i.e. 20 &gt; 10). So this is where the second group starts. And the same logic applies for the rest of the groups.</p> <p>This is one of my attempts, but I don't know how to continue:</p> <pre><code>import numpy as np df['dir'] = np.sign(df.a.shift(-1) - df.a) </code></pre>
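Building on that attempt, the group label can be derived in one vectorized step: a decrease relative to the previous row starts a new group, so the cumulative count of decreases numbers the groups. Sketch:

```python
import pandas as pd

df = pd.DataFrame({'a': [10, 14, 20, 10, 12, 5, 3]})

# True wherever the value drops below its predecessor (the first diff is
# NaN, and NaN < 0 is False, so row 0 stays in group 0); cumsum turns each
# drop into the start of a new group id.
df['group'] = df['a'].diff().lt(0).cumsum()

# One sub-frame per ascending streak, in order.
groups = [g.drop(columns='group') for _, g in df.groupby('group')]
```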
<python><pandas><dataframe><group-by>
2024-03-29 07:22:55
1
2,679
AmirX
78,242,480
11,279,170
Train and test split in such a way that each name and proportion of target class is present in both train and test
<p>I am trying to solve a ML problem if a person will deliver an order or not. Highly Imbalance dataset. Here is the glimpse of my dataset</p> <pre><code>[{'order_id': '1bjhtj', 'Delivery Guy': 'John', 'Target': 0}, {'order_id': '1aec', 'Delivery Guy': 'John', 'Target': 0}, {'order_id': '1cgfd', 'Delivery Guy': 'John', 'Target': 0}, {'order_id': '1bceg', 'Delivery Guy': 'Tom', 'Target': 0}, {'order_id': '1a2fg', 'Delivery Guy': 'Tom', 'Target': 0}, {'order_id': '1cbsf', 'Delivery Guy': 'Tom', 'Target': 1}, {'order_id': '1bc5', 'Delivery Guy': 'Jay', 'Target': 0}, {'order_id': '1a22', 'Delivery Guy': 'Jay', 'Target': 0}, {'order_id': '1bzc5', 'Delivery Guy': 'Jay', 'Target': 0}, {'order_id': '1av22', 'Delivery Guy': 'Jay', 'Target': 0}, {'order_id': '1bsc5', 'Delivery Guy': 'Jay', 'Target': 1}, {'order_id': '1a2t2', 'Delivery Guy': 'Jay', 'Target': 0}, {'order_id': '1bc5b', 'Delivery Guy': 'Jay', 'Target': 0}, {'order_id': '1a22a', 'Delivery Guy': 'Mary', 'Target': 0}, {'order_id': '1c5bv', 'Delivery Guy': 'Mary', 'Target': 0}, {'order_id': 'vb2er', 'Delivery Guy': 'Mary', 'Target': 0}, {'order_id': '1bs5s', 'Delivery Guy': 'Mary', 'Target': 0}, {'order_id': '1a22n', 'Delivery Guy': 'Mary', 'Target': 0}, {'order_id': '122a', 'Delivery Guy': 'James', 'Target': 1}, {'order_id': '1cw5bv', 'Delivery Guy': 'James', 'Target': 0}, {'order_id': 'vb=er', 'Delivery Guy': 'James', 'Target': 0}, {'order_id': '1b5s', 'Delivery Guy': 'James', 'Target': 0}, {'order_id': '1a2n', 'Delivery Guy': 'James', 'Target': 1}] </code></pre> <p>This is my table :</p> <pre><code>| order_id | Delivery Guy | Target | |----------|--------------|--------| | 1bjhtj | John | 0 | | 1aec | John | 0 | | 1cgfd | John | 0 | | 1bceg | Tom | 0 | | 1a2fg | Tom | 0 | | 1cbsf | Tom | 1 | | 1bc5 | Jay | 0 | | 1a22 | Jay | 0 | | 1bzc5 | Jay | 0 | | 1av22 | Jay | 0 | | 1bsc5 | Jay | 1 | | 1a2t2 | Jay | 0 | | 1bc5b | Jay | 0 | | 1a22a | Mary | 0 | | 1c5bv | Mary | 0 | | vb2er | Mary | 0 | | 1bs5s | Mary | 0 | | 
1a22n | Mary | 0 | | 122a | James | 1 | | 1cw5bv | James | 0 | | vb=er | James | 0 | | 1b5s | James | 0 | | 1a2n | James | 1 | </code></pre> <p>I want my machine learning model to understand each person's attributes and predict these two</p> <p>cases: will deliver &quot;0&quot; and will not deliver &quot;1&quot;.</p> <p>I want to split my train and test in such a way that it preserves a few rows of each name and a few rows of each Target class, so that it learns all the patterns.</p> <p>I have used this so far:</p> <pre><code>X = df.drop(columns = &quot;Target&quot;) y = df.Target X_train,X_test,y_train,y_test=train_test_split(X,y,train_size=0.7,stratify=y) </code></pre> <p>It does give me rows for each Delivery Guy, but it misses the part where we can split 'James' in such a way that one &quot;1&quot; is in train and another &quot;1&quot; is in test. Could anyone help me approach this problem in a different way?</p>
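One option is to stratify on the *combination* of delivery guy and target rather than on the target alone, so every (name, class) cell with at least two rows lands on both sides. sklearn's `stratify=` raises an error when any combination is a singleton (as several are here), so a per-group sample is a robust alternative — a pandas-only sketch with illustrative data:

```python
import pandas as pd

def grouped_split(df, group_cols, frac=0.7, seed=42):
    """Sample `frac` of each (name, target) cell into train; the rest is test.

    Cells with a single row end up wholly on one side -- with one row there
    is no way to show that pattern to both splits.
    """
    train = df.groupby(group_cols, group_keys=False).sample(frac=frac, random_state=seed)
    test = df.drop(train.index)
    return train, test

df = pd.DataFrame({
    "order_id": ["a1", "a2", "a3", "a4", "b1", "b2", "b3", "b4"],
    "Delivery Guy": ["James"] * 4 + ["Mary"] * 4,
    "Target": [1, 1, 0, 0, 0, 0, 0, 0],
})
# frac=0.5 so James's two Target=1 rows split one per side.
train, test = grouped_split(df, ["Delivery Guy", "Target"], frac=0.5)
```

With `frac=0.5`, the two-row cell (James, 1) contributes exactly one row to each split, which is the behavior plain `stratify=y` misses.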
<python><machine-learning><scikit-learn><train-test-split><imbalanced-data>
2024-03-29 07:21:32
1
631
DSR
78,242,432
4,019,775
ImportError: cannot import name 'DIFFUSERS_SLOW_IMPORT' from 'diffusers.utils'
<p>When using diffusers like</p> <pre><code>from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel </code></pre> <p>I get the following error:</p> <pre><code>ImportError: cannot import name 'DIFFUSERS_SLOW_IMPORT' from 'diffusers.utils' (/opt/conda/lib/python3.10/site-packages/diffusers/utils/__init__.py) </code></pre> <p>I also tried downgrading the version to 0.26.3, but it didn't help.</p> <p>Diffusers version: 0.27.2</p>
<python>
2024-03-29 07:07:02
2
884
Nilamber Singh
78,242,414
2,358,969
Why does the value interpolation not work when using python hydra library?
<p>I modify the <a href="https://github.com/facebookresearch/hydra/blob/main/examples/tutorials/basic/your_first_hydra_app/6_composition/conf/config.yaml" rel="nofollow noreferrer">config.yaml</a> from the tutorials in the <a href="https://github.com/facebookresearch/hydra/tree/main" rel="nofollow noreferrer">hydra</a> repository as follows:</p> <pre><code>defaults: - db: mysql - ui: full - schema: school test_key: ${db.user} </code></pre> <p>However, the <code>test_key</code> is still <code>${db.user}</code> in the output without being replaced by the actual value <code>omry</code>. I run the command <code>python my_app.py</code> in the <a href="https://github.com/facebookresearch/hydra/tree/main/examples/tutorials/basic/your_first_hydra_app/6_composition" rel="nofollow noreferrer">folder</a>.</p>
<python><fb-hydra><omegaconf>
2024-03-29 07:01:39
1
1,263
Yulong Ao
78,242,374
19,130,803
creating lists in single scan
<p>I want to create 2 lists from one column based on a condition in another column. Currently, I am able to get the 2 lists by scanning the dataframe twice.</p> <ol> <li>Is it possible to get the 2 lists in a single scan?</li> <li>How can I get the lists for individual groups?</li> </ol> <pre><code>data = { &quot;co2&quot;: [95, 90, 99, 104, 105, 94, 99, 104], &quot;model&quot;: [ &quot;Citigo&quot;, &quot;Fabia&quot;, &quot;Fiesta&quot;, &quot;Rapid&quot;, &quot;Focus&quot;, &quot;Mondeo&quot;, &quot;Octavia&quot;, &quot;B-Max&quot;, ], &quot;car&quot;: [&quot;Skoda&quot;, &quot;Skoda&quot;, &quot;Ford&quot;, &quot;Skoda&quot;, &quot;Ford&quot;, &quot;Ford&quot;, &quot;BMW&quot;, &quot;Ford&quot;], } df = pd.DataFrame(data) # For 2 lists list_skoda = df.loc[df[&quot;car&quot;] == &quot;Skoda&quot;, &quot;model&quot;].tolist() print(f&quot;{list_skoda=}&quot;) list_others = df.loc[df[&quot;car&quot;] != &quot;Skoda&quot;, &quot;model&quot;].tolist() print(f&quot;{list_others=}&quot;) # For individual groups df.groupby([&quot;car&quot;]).apply(print) l = df.groupby([&quot;car&quot;])[&quot;model&quot;].groups print(f&quot;{l=}&quot;) # This gives indices not names </code></pre> <p>Please suggest.</p>
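For the first part, a single pass over the rows can route each model into one of two buckets; for the second, `.groups` returns indices but `agg(list)` returns the model names per group. Sketch:

```python
import pandas as pd

data = {
    "co2": [95, 90, 99, 104, 105, 94, 99, 104],
    "model": ["Citigo", "Fabia", "Fiesta", "Rapid", "Focus", "Mondeo", "Octavia", "B-Max"],
    "car": ["Skoda", "Skoda", "Ford", "Skoda", "Ford", "Ford", "BMW", "Ford"],
}
df = pd.DataFrame(data)

# One scan: the boolean condition picks the bucket for each row.
buckets = {True: [], False: []}
for car, model in zip(df["car"], df["model"]):
    buckets[car == "Skoda"].append(model)
list_skoda, list_others = buckets[True], buckets[False]

# Per-group lists of names (not indices), as a plain dict.
by_car = df.groupby("car")["model"].agg(list).to_dict()
```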
<python><pandas>
2024-03-29 06:52:08
1
962
winter
78,242,185
8,652,920
How to create an improperly closed gzip file using python?
<p>I have an application that occasionally needs to be able to read improperly closed gzip files. The files behave like this:</p> <pre><code>&gt;&gt;&gt; import gzip &gt;&gt;&gt; f = gzip.open(&quot;path/to/file.gz&quot;, 'rb') &gt;&gt;&gt; f.read() Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/usr/lib/python3.8/gzip.py&quot;, line 292, in read return self._buffer.read(size) File &quot;/usr/lib/python3.8/gzip.py&quot;, line 498, in read raise EOFError(&quot;Compressed file ended before the &quot; EOFError: Compressed file ended before the end-of-stream marker was reached </code></pre> <p>I wrote a function to handle this by reading the file line by line and catching the <code>EOFError</code>, and now I want to test it.</p> <p>The input to my test should be a gz file that behaves in the same way as demonstrated. How do I make this happen in a controlled testing environment?</p> <p>I really strongly prefer not making a copy of the improperly closed files that I get in production.</p>
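A controlled way to produce such a file in a test fixture is to compress a payload normally and drop the last few bytes before writing. Cutting more than the 8-byte CRC32/ISIZE trailer also removes some deflate data, which reproduces the exact `EOFError` from the question without copying production files. Sketch (path, payload, and cut size are arbitrary):

```python
import gzip


def write_truncated_gzip(path, payload=b"hello world\n" * 100, cut=10):
    """Write a gzip file whose last `cut` bytes are missing.

    With cut > 8 the CRC/size trailer and a bit of the deflate stream are
    gone, so gzip.open(path).read() raises
    EOFError("Compressed file ended before the end-of-stream marker ...").
    """
    blob = gzip.compress(payload)
    with open(path, "wb") as f:
        f.write(blob[:-cut])
```

This gives the test an input that behaves like the improperly closed production files while staying fully synthetic.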
<python><python-3.x><unit-testing><pytest><gzip>
2024-03-29 05:42:43
2
4,239
notacorn
78,242,183
4,399,016
Python GET Request returns data when tried on Postman but the generated python code not working
<p>I have this URL that works on Postman and returns data:</p> <pre><code>https://www.tablebuilder.singstat.gov.sg/api/table/resourceid?isTestApi=true&amp;keyword=manufacturing&amp;searchoption=all </code></pre> <p>But the Python code generated on Postman does not work.</p> <pre><code>import requests url = &quot;https://www.tablebuilder.singstat.gov.sg/api/table/resourceid?isTestApi=true&amp;keyword=manufacturing&amp;searchoption=all&quot; response = requests.request(&quot;GET&quot;, url) print(response.text) </code></pre> <p>What could the reason be? This code used to work in the past. Is there a permanent fix for the problem?</p>
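A common cause when Postman succeeds but `requests` fails is the request headers: Postman sends its own `User-Agent` (and `Accept`), while `requests` uses `python-requests/x.y`, which some sites block. A hedged sketch — the header values are illustrative guesses to tune against what Postman actually sends, and the site may additionally require cookies or block non-browser clients outright:

```python
import requests

url = ("https://www.tablebuilder.singstat.gov.sg/api/table/resourceid"
       "?isTestApi=true&keyword=manufacturing&searchoption=all")

headers = {
    # Illustrative browser-like headers; compare with Postman's console.
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Accept": "application/json",
}

# Build the request without sending it; a Session().send(prepared) call
# would perform the actual network round trip.
prepared = requests.Request("GET", url, headers=headers).prepare()
```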
<python><python-requests>
2024-03-29 05:41:48
1
680
prashanth manohar
78,242,095
6,702,598
How do I create a *clean* AWS lambda function in python?
<h5>Problem</h5> <p>When I run my lambda API locally with <code>sam local start-api</code>, I get the error <code>{&quot;message&quot;:&quot;Missing Authentication Token&quot;}</code> for a request that is not supposed to require an authentication token.</p> <h5>Code</h5> <p>In my cloudformation template file I create a lambda function (see below) and an API Gateway for the API calls. As this setup will fail to run locally with <code>start-api</code> (thank you AWS?), I added the <code>Events</code> section you can see below.</p> <pre><code>MyApi: Type: AWS::Serverless::Function Properties: CodeUri: myapi Handler: lambda_handler.lambda_handler Runtime: python3.10 Architectures: - arm64 ... Events: Any: Type: HttpApi Properties: Path: '/myapi/*' Method: any </code></pre> <h5>Steps to reproduce</h5> <ul> <li>I run <code>sam build &amp;&amp; sam local start-api</code> to run the API.</li> <li>I run <code>curl localhost:8080/myapi/ping</code> <ul> <li>8080 is configured in samconfig</li> <li><code>/myapi/ping</code> is configure in the API Gateway and the lambda_handler itself.</li> </ul> </li> </ul>
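Worth noting: &quot;Missing Authentication Token&quot; is API Gateway's generic response when <em>no route matches</em> the request, not necessarily an auth problem. One thing to check (an assumption about this setup, not a confirmed fix) is the path pattern — API Gateway catch-all routes use a greedy path variable rather than a bare <code>*</code>:

```yaml
Events:
  Any:
    Type: HttpApi
    Properties:
      # '{proxy+}' is API Gateway's greedy path match; a literal '*'
      # does not create a catch-all route.
      Path: /myapi/{proxy+}
      Method: any
```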
<python><amazon-web-services><aws-lambda>
2024-03-29 05:08:26
1
3,673
DarkTrick
78,241,869
1,228,906
No matching distribution found for tensorboard~=2.15.0
<p>I'm trying to build a Docker image and I'm running into a compatibility issue when building the Dockerfile.</p> <p>The Dockerfile below leads to a successful build. But when I add &quot;tensorflow-gpu&quot; it fails with a requirements error. I'm not sure how to isolate this issue, so any guidance will be appreciated!</p> <p><code>Dockerfile</code>:</p> <pre><code>FROM mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.2-cudnn8-ubuntu20.04:20230530.v1 ENV AZUREML_CONDA_ENVIRONMENT_PATH /azureml-envs/tensorflow-2.7 # Create conda environment RUN conda create -p $AZUREML_CONDA_ENVIRONMENT_PATH \ python=3.8 pip=20.2.4 # Prepend path to AzureML conda environment ENV PATH $AZUREML_CONDA_ENVIRONMENT_PATH/bin:$PATH # Install pip dependencies RUN HOROVOD_WITH_TENSORFLOW=1 pip install 'matplotlib' \ 'psutil' \ 'tqdm' \ 'pandas' \ 'scipy' \ 'numpy' \ 'ipykernel' \ 'azureml-core' \ 'azureml-defaults' \ 'azureml-mlflow' \ 'azureml-telemetry' \ 'tensorboard' # This is needed for mpi to locate libpython ENV LD_LIBRARY_PATH $AZUREML_CONDA_ENVIRONMENT_PATH/lib:$LD_LIBRARY_PATH </code></pre> <p>Error:</p> <pre><code>2024-03-29T03:05:42: ---&gt; Running in bec9494787c1 2024-03-29T03:05:43: Collecting matplotlib~=3.5.0 2024-03-29T03:05:43: Downloading matplotlib-3.5.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl (11.3 MB) 2024-03-29T03:05:44: Collecting psutil~=5.8.0 2024-03-29T03:05:44: Downloading psutil-5.8.0-cp38-cp38-manylinux2010_x86_64.whl (296 kB) 2024-03-29T03:05:44: Collecting tqdm~=4.62.0 2024-03-29T03:05:44: Downloading tqdm-4.62.3-py2.py3-none-any.whl (76 kB) 2024-03-29T03:05:45: Collecting pandas~=1.3.0 2024-03-29T03:05:45: Downloading pandas-1.3.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.5 MB) 2024-03-29T03:05:45: Collecting scipy~=1.7.0 2024-03-29T03:05:45: Downloading scipy-1.7.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (39.3 MB) 2024-03-29T03:05:47: Collecting numpy~=1.21.0 2024-03-29T03:05:47: Downloading 
numpy-1.21.6-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB) 2024-03-29T03:05:47: Collecting ipykernel~=6.0 2024-03-29T03:05:47: Downloading ipykernel-6.29.4-py3-none-any.whl (117 kB) 2024-03-29T03:05:47: Collecting azureml-core==1.51.0 2024-03-29T03:05:47: Downloading azureml_core-1.51.0-py3-none-any.whl (3.3 MB) 2024-03-29T03:05:47: Collecting azureml-defaults==1.51.0 2024-03-29T03:05:47: Downloading azureml_defaults-1.51.0-py3-none-any.whl (2.0 kB) 2024-03-29T03:05:47: Collecting azureml-mlflow==1.51.0 2024-03-29T03:05:47: Downloading azureml_mlflow-1.51.0-py3-none-any.whl (814 kB) 2024-03-29T03:05:47: Collecting azureml-telemetry==1.51.0 2024-03-29T03:05:47: Downloading azureml_telemetry-1.51.0-py3-none-any.whl (30 kB) 2024-03-29T03:05:47: [91mERROR: Could not find a version that satisfies the requirement tensorboard~=2.15.0 (from versions: 1.6.0rc0, 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.11.0, 1.12.0, 1.12.1, 1.12.2, 1.13.0, 1.13.1, 1.14.0, 1.15.0, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.1.1, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.4.0, 2.4.1, 2.5.0, 2.6.0, 2.7.0, 2.8.0, 2.9.0, 2.9.1, 2.10.0, 2.10.1, 2.11.0, 2.11.1, 2.11.2, 2.12.0, 2.12.1, 2.12.2, 2.12.3, 2.13.0, 2.14.0) 2024-03-29T03:05:47: ERROR: No matching distribution found for tensorboard~=2.15.0 2024-03-29T03:05:48: The command '/bin/sh -c HOROVOD_WITH_TENSORFLOW=1 pip install 'matplotlib~=3.5.0' 'psutil~=5.8.0' 'tqdm~=4.62.0' 'pandas~=1.3.0' 'scipy~=1.7.0' 'numpy~=1.21.0' 'ipykernel~=6.0' 'azureml-core==1.51.0' 'azureml-defaults==1.51.0' 'azureml-mlflow==1.51.0' 'azureml-telemetry==1.51.0' 'tensorboard~=2.15.0'' returned a non-zero code: 1 2024-03-29T03:05:48: [0m 2024-03-29T03:05:48: CalledProcessError(1, ['docker', 'build', '-f', 'Dockerfile', '.', '-t', 'e91555eeb3224b08b13539c983a2c3f8.azurecr.io/azureml/azureml_50f64810c9ea1320c2b49770067c34d2', '-t', 'e91555eeb3224b08b13539c983a2c3f8.azurecr.io/azureml/azureml_50f64810c9ea1320c2b49770067c34d2:1']) 2024-03-29T03:05:48: Building docker image failed 
with exit code: 1 </code></pre>
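The version list in the error stops at 2.14.0 because that is the last tensorboard release published for the pinned Python 3.8 interpreter; <code>tensorboard~=2.15.0</code> (pulled in by the tensorflow-gpu addition) only ships wheels for newer Pythons. Two hedged options — the exact pins are assumptions to verify against the tensorflow-gpu version being installed:

```dockerfile
# Option 1: stay on Python 3.8 and cap tensorboard (and tensorflow-gpu)
# at releases that still support it.
RUN pip install 'tensorboard~=2.14.0'

# Option 2: move the conda environment to a newer interpreter so the
# 2.15.x requirement can resolve.
RUN conda create -p $AZUREML_CONDA_ENVIRONMENT_PATH \
    python=3.9 pip=20.2.4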
<python><docker><tensorflow><horovod>
2024-03-29 03:37:14
1
1,896
webber
78,241,291
2,348,587
How to deselect columns in PySide/Qt QColumnView
<p>I am creating a simple File Browser and trying to implement <a href="https://en.wikipedia.org/wiki/Miller_columns" rel="nofollow noreferrer">Miller Columns</a> like found in the macOS Finder. Qt provides both <code>QColumnView</code> and a <code>QFileSystemModel</code> which should make it easy to combine and get the functionality I'm after.</p> <p>However, if you click on several levels of directories, then click on an empty space a couple levels up from the current directory, the view doesn't change. The highlight is removed from the folder you're clicking in, but that is the only change to the visual.</p> <p>As an example of what I am trying to do, on top is the Finder and on bottom is my current app:</p> <p><a href="https://i.sstatic.net/JXKEP.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JXKEP.gif" alt="Incorrect column functionality example" /></a></p> <p>I have tried intercepting as many <code>Signals</code> and <code>Slots</code> as I can think of, including <code>pressed</code>, <code>clicked</code>, <code>entered</code>, <code>selectionModel().currentRowChanged</code>, and <code>UpdateRequest</code> to override Qt's behavior and set the correct <code>currentIndex</code>, but have not found the state information in the Model or View useful for setting the correct Index.</p> <p>I have also tried logging every event (with a bare <code>def event()</code> override) and there doesn't even seem to be an <code>event</code> triggered when I make the deselection click in the root folder.</p> <p>MCVE:</p> <pre><code>from pathlib import Path from PySide6.QtWidgets import ( QApplication, QFileSystemModel, QColumnView, QMainWindow, ) class CustomColumns(QColumnView): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) class BrowserWindow(QMainWindow): def __init__(self, location=None): super().__init__() if location is None: location = Path.home() location = Path(location) self.setGeometry(625, 333, 1395, 795) 
self.setWindowTitle(location.name) self._root = str(location) self._files = QFileSystemModel() self._files.setRootPath(self._root) self._view = CustomColumns() self._view.setModel(self._files) self.setCentralWidget(self._view) def show(self): super().show() self._view.setRootIndex(self._files.index(self._root)) class FileBrowser(QApplication): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.setQuitOnLastWindowClosed(False) self.window = BrowserWindow(location=Path.home()) self.window.show() def main(): app = FileBrowser() app.exec() if __name__ == &quot;__main__&quot;: main() </code></pre>
<python><qt><selection><pyside6><qcolumnview>
2024-03-28 23:16:18
1
3,763
Darrick Herwehe
78,240,874
22,407,544
No such file or directory: '/tmp/tmp_ejr26m6.upload.mp3' in Django
<p>I recently adjusted the location of media files when I was presented with this error: <code>[Errno 2] No such file or directory: '/tmp/tmp1d93dhp7.upload.mp4'</code> in Django. So far I've checked for typos in my file location code in my settings, views and models.</p> <p>The website works by simply storing user uploaded media files in a media folder.</p> <p>Here is the relevant code:</p> <p>views.py:</p> <pre><code>... if form.is_valid() and 'file' in request.FILES: if not request.session.get('user_id'): request.session['user_id'] = generate_unique_id() user_id = request.session.get('user_id') uploaded_file = request.FILES.getlist('file') for file in uploaded_file: fs = FileSystemStorage(location=settings.MEDIA_ROOT) # Use MEDIA_ROOT for permanent storage filename = fs.save(file.name, file) uploaded_file_path = fs.path(filename) file_type = mimetypes.guess_type(uploaded_file_path) request.session['uploaded_file_path'] = uploaded_file_path #user_doc, created = RequirementsChat.objects.get_or_create(id=user_id) user_doc, created = RequirementsChat.objects.get_or_create(id=user_id) uploaded_file = UploadedFile(input_file=file, requirements_chat=user_doc, chat_id = user_id) uploaded_file.save() user_doc.save() #save details user_doc, created = RequirementsChat.objects.get_or_create(id = user_id) user_doc.alias = alias user_doc.email = email user_doc.language = language user_doc.due_date = due_date user_doc.subtitle_type = subtitle_type user_doc.transcript_file_type = transcript_file_type user_doc.additional_requirements = additional_requirements user_doc.date = timezone.now() user_doc.save() return HttpResponse(status=200) ... </code></pre> <p>The error seems to be caused by the <code>uploaded_file.save()</code> line( I used print statements to confirm. 
Nothing is printed to the console after this line).</p> <p>models.py:</p> <pre><code>class RequirementsChat(models.Model): id = models.CharField(primary_key=True, max_length=40) alias = models.CharField(max_length=20, blank=True, null=True) email = models.CharField(max_length=100, blank=True, null=True) language = models.CharField(max_length=10, blank=True, null=True) due_date = models.CharField(max_length=10, blank=True, null=True) subtitle_type = models.CharField(max_length=10, blank=True, null=True) transcript_file_type = models.CharField(max_length=10, blank=True, null=True) additional_requirements = models.TextField(max_length=500, blank=True, null=True) date = models.DateTimeField(auto_now_add=True, blank=True, null=True) url = models.CharField(max_length=250, blank=True, null=True) task_completed = models.BooleanField(default=False) class UploadedFile(models.Model): input_file = models.FileField(upload_to='human_upload/')#new chat_id = models.CharField(max_length=40, null= True) requirements_chat = models.ForeignKey(RequirementsChat, on_delete=models.CASCADE, related_name='uploaded_files', null=True) </code></pre> <p>settings.py:</p> <pre><code> MEDIA_URL='/media/' MEDIA_ROOT = os.path.join(BASE_DIR, 'media') </code></pre> <p>urls.py:</p> <pre><code>urlpatterns = [ path('', include('homepage.urls')),#redirects to transcribe/, path('transcribe/', include('transcribe.urls')), path('human/', include('human.urls')), path('admin/', admin.site.urls), ]+ static( settings.MEDIA_URL, document_root=settings.MEDIA_ROOT ) </code></pre> <p>My media folder structure is <code>mysite &gt; media&gt; human_upload</code></p>
<python><django><file>
2024-03-28 21:07:05
1
359
tthheemmaannii
78,240,565
2,280,645
How to speed up combining multiple sheets of Excel files into one Excel file
<p><strong>Problem overview</strong></p> <p>I need to combine several .xlsx files into sheets, where each sheet name must be the file name.</p> <hr /> <p><strong>Current issue</strong></p> <p>The code below becomes slow and consumes a lot of memory after a few files.</p> <hr /> <p><strong>Attempted solution</strong></p> <p>Closing the Excel file, deleting the dataframe, and running <code>gc</code> manually does not help.</p> <p><strong>Code</strong></p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import openpyxl import os import gc as gc print(&quot;Copying sheets from multiple files to one file&quot;) dir_input = 'D:/MeusProjetosJava/Importacao/' dir_output = &quot;Integrados/combined.xlsx&quot; cwd = os.path.abspath(dir_input) files = os.listdir(cwd) df_total = pd.DataFrame() df_total.to_excel(dir_output) #create a new file workbook=openpyxl.load_workbook(dir_output) ss_sheet = workbook['Sheet1'] ss_sheet.title = 'TempExcelSheetForDeleting' workbook.save(dir_output) for file in files: # loop through Excel files if file.endswith('.xls') or file.endswith('.xlsx'): excel_file = pd.ExcelFile(cwd+&quot;/&quot;+file) sheets = excel_file.sheet_names for sheet in sheets: sheet_name = str(file.title()) sheet_name = sheet_name.replace(&quot;.xlsx&quot;,&quot;&quot;).lower() sheet_name = sheet_name.removesuffix(&quot;.xlsx&quot;) print(file, sheet_name) df = excel_file.parse(sheet_name = sheet) with pd.ExcelWriter(dir_output,mode='a') as writer: df.to_excel(writer, sheet_name=f&quot;{sheet_name}&quot;, index=False) del df excel_file.close() del excel_file sheets = None gc.collect() workbook=openpyxl.load_workbook(dir_output) std=workbook[&quot;TempExcelSheetForDeleting&quot;] workbook.remove(std) workbook.save(dir_output) print(&quot;all done&quot;) </code></pre> <p><strong>References</strong></p> <p><a href="https://stackoverflow.com/questions/73539271/combining-several-sheets-into-one-excel">Combining several SHEETS into one EXCEL</a></p>
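The likely hotspot is `pd.ExcelWriter(dir_output, mode='a')` inside the loop: appending to an .xlsx means re-reading and re-writing the entire (growing) workbook once per sheet, which is quadratic in total size. Keeping a single writer open for the whole run writes the output exactly once — a sketch, under the assumptions that sheet names stay unique and fit Excel's 31-character limit:

```python
import os

import pandas as pd


def combine_workbooks(dir_input, dir_output):
    """Copy every sheet of every .xls/.xlsx in dir_input into one workbook,
    holding a single ExcelWriter open so the output file is written once."""
    with pd.ExcelWriter(dir_output) as writer:
        for file in sorted(os.listdir(dir_input)):
            if not file.endswith((".xls", ".xlsx")):
                continue
            base = os.path.splitext(file)[0].lower()
            with pd.ExcelFile(os.path.join(dir_input, file)) as xl:
                for sheet in xl.sheet_names:
                    # Suffix the source sheet name when a file has several
                    # sheets, so names stay unique; [:31] is Excel's limit.
                    name = base if len(xl.sheet_names) == 1 else f"{base}_{sheet}"
                    xl.parse(sheet_name=sheet).to_excel(
                        writer, sheet_name=name[:31], index=False)
```

This also removes the need for the temporary placeholder sheet, since the writer creates the workbook itself.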
<python><excel><pandas><file-io><openpyxl>
2024-03-28 19:53:20
3
797
KenobiBastila