77,281,118
5,462,743
Azure Machine Learning SDK V1 migration to V2 Pipeline Steps
<p>I am trying to migrate Pipelines from Azure Machine Learning SDK V1 to V2, but sometimes I don't understand the logic behind the V2 and I get stuck.</p> <p>In V1, I just had to create PythonScriptStep and wrap it into a StepSequence and deploy the pipeline. My scripts are simple, no input, no outputs. We store data in ADLS Gen2 and use databricks tables as inputs. This is why I don't have any inputs/outputs.</p> <pre><code>script_step_1 = PythonScriptStep( name=&quot;step1&quot;, script_name=&quot;main.py&quot;, arguments=arguments, # list of PipelineParameter compute_target=ComputeTarget(workspace=ws, name=&quot;cpu-16-128&quot;), source_directory=&quot;./my_project_folder&quot;, runconfig=runconfig, # Conda + extra index url + custom dockerfile allow_reuse=False, ) script_step_2 = PythonScriptStep( name=&quot;step2&quot;, ... ) step_sequence = StepSequence( steps=[ script_step_1, script_step_2, ] ) # Create Pipeline pipeline = Pipeline( workspace=ws, steps=step_sequence, ) pipeline_run = experiment.submit(pipeline) </code></pre> <p>With V2, we need to create a &quot;node&quot; in a component that will be use by a pipeline.</p> <p>I've made my Environment with dockerfile with BuildContext, and feed a representation of requirements.txt to a conda environment dictionary where I added my extra index url.</p> <pre><code>azureml_env = Environment( build=BuildContext( path=&quot;./docker_folder&quot;, # With Dockerfile and requirements.txt ), name=&quot;my-project-env&quot;, ) </code></pre> <p>Now I make a command component that will invoke python and a script with some arguments:</p> <pre><code>step_1 = command( environment=azureml_env , command=&quot;python main.py&quot;, code=&quot;./my_project_folder&quot;, ) </code></pre> <p>Now that I have my step1 and step2 in SDK V2, I have no clue on how to make a sequence without Input/Output</p> <pre><code>@pipeline(compute=&quot;serverless&quot;) def default_pipeline(): return { &quot;my_pipeline&quot;: [step_1, step_2] } 
</code></pre> <p>I cannot manage to make the <code>pipeline</code> work for a basic run of 2 consecutive steps.</p> <p>I guess after I manage to get this right, I can create/update the pipeline like this:</p> <pre><code>my_pipeline = default_pipeline() # submit the pipeline job pipeline_job = ml_client.jobs.create_or_update( my_pipeline, experiment_name=experiment_name, ) </code></pre> <p>UPDATE 1:</p> <p>I tried to create my own <code>StepSequence</code> (very naive) with dummy inputs/outputs:</p> <pre><code>class CommandSequence: def __init__(self, commands, ml_client): self.commands = commands self.ml_client = ml_client def build(self): for i in range(len(self.commands)): cmd = self.commands[i] if i == 0: cmd = command( display_name=cmd.display_name, description=cmd.description, environment=cmd.environment, command=cmd.command, code=cmd.code, is_deterministic=cmd.is_deterministic, outputs=dict( my_output=Output(type=&quot;uri_folder&quot;, mode=&quot;rw_mount&quot;), ), ) else: cmd = command( display_name=cmd.display_name, description=cmd.description, environment=cmd.environment, command=cmd.command, code=cmd.code, is_deterministic=cmd.is_deterministic, inputs=self.commands[i - 1].outputs.my_output, outputs=dict( my_output=Output(type=&quot;uri_folder&quot;, mode=&quot;rw_mount&quot;), ), ) cmd = self.ml_client.create_or_update(cmd.component) self.commands[i] = cmd print(self.commands[i]) return self.commands </code></pre> <p>I had to recreate <code>command</code> because they protected a lot of stuff in the object...</p> <pre><code>@pipeline(compute=&quot;serverless&quot;) def default_pipeline(): command_sequence = CommandSequence([step_1, step_2], ml_client).build() return { &quot;my_pipeline&quot;: command_sequence[-1].outputs.my_output } </code></pre> <p>But it fails to link the output of step 1 to the input of step 2.</p> <blockquote> <p>inputs=self.commands[i - 1].outputs.my_output, AttributeError: 'dict' object has no attribute 'my_output'</p> </blockquote>
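A sketch of the pattern commonly used for this in SDK v2 (untested here, since it needs a live Azure ML workspace): give each step a dummy `uri_folder` output and feed it into the next step's input inside the `@dsl.pipeline` function. The data edge is what creates the ordering. The `signal` name and the `--signal_out`/`--signal_in` arguments are my own illustrative choices; `azureml_env`, `ml_client` and `experiment_name` are the objects already defined in the question.

```python
from azure.ai.ml import command, dsl, Input, Output

step_1 = command(
    command="python main.py --signal_out ${{outputs.signal}}",
    code="./my_project_folder",
    environment=azureml_env,  # as defined in the question
    outputs={"signal": Output(type="uri_folder")},
)
step_2 = command(
    command="python main2.py --signal_in ${{inputs.signal}}",
    code="./my_project_folder",
    environment=azureml_env,
    inputs={"signal": Input(type="uri_folder")},
)

@dsl.pipeline(compute="serverless")
def default_pipeline():
    n1 = step_1()
    # Passing n1's output into n2 creates the dependency edge,
    # so n2 only starts after n1 has finished.
    n2 = step_2(signal=n1.outputs.signal)

pipeline_job = ml_client.jobs.create_or_update(
    default_pipeline(), experiment_name=experiment_name,
)
```

The scripts do not have to use the extra argument: they can parse it with `argparse.parse_known_args()` and ignore it, or simply never write to the folder.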
<python><azure-machine-learning-service><azureml-python-sdk><azuremlsdk><azure-ml-pipelines>
2023-10-12 13:39:40
1
1,033
BeGreen
77,280,762
22,466,650
How to recognize groups in a column and outer border them?
<p>My input is this simple dataframe:</p> <pre><code>df = pd.DataFrame({'class': ['class_a', 'class_a', 'class_a', 'class_b', 'class_c', 'class_c'], 'name': ['name_1', 'name_2', 'name_3', 'name_1', 'name_1', 'name_2'], 'id': [5, 7, 1, 2, 3, 8]}) print(df) class name id 0 class_a name_1 5 1 class_a name_2 7 2 class_a name_3 1 3 class_b name_1 2 4 class_c name_1 3 5 class_c name_2 8 </code></pre> <p>I want to draw a blue solid border (a blue rectangle) around each group in the column <code>class</code>.</p> <p>I found a solution on Stack Overflow: <a href="https://stackoverflow.com/questions/65939324/pandas-style-draw-borders-over-whole-row-including-the-multiindex">Pandas Style: Draw borders over whole row including the multiindex</a></p> <pre><code>s = df.style for idx, group_df in df.groupby('class'): s = s.set_table_styles({group_df.index[0]: [{'selector': '', 'props': 'border-top: 3px solid blue;'}]}, overwrite=False, axis=1) </code></pre> <p>But there are two problems:</p> <ol> <li>the outer borders are missing</li> <li>the styles are lost when I save it to Excel</li> </ol> <p><a href="https://i.sstatic.net/gfcrj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gfcrj.png" alt="enter image description here" /></a></p> <p>Is there a chance we could fix at least point 1?</p>
<python><pandas>
2023-10-12 12:52:19
1
1,085
VERBOSE
77,280,611
7,729,563
Unexpected result from simple lambda function in list comprehension
<p>Note: Using Python 3.11.5 on Windows 11 (x64)</p> <pre class="lang-py prettyprint-override"><code># Input: items = [['fish:', '2'], ['cats:', '3'], ['dogs:', '5']] # Goal - convert to dictionary, strip ':' from word, convert string to int: # e.g., {'fish': 2, 'cats': 3, 'dogs': 5} # Approach - use list comprehension with lambda (this is only part of the solution): [(lambda item: item[0].strip(':'), int(item[1])) for item in items] # Unexpected output: [(&lt;function &lt;listcomp&gt;.&lt;lambda&gt; at 0x00000185D4D51620&gt;, 2), (&lt;function &lt;listcomp&gt;.&lt;lambda&gt; at 0x00000185D7239D00&gt;, 3), ...] # The lambda works - I can even add the following: [(lambda item: item[0].strip(':'), item[0].strip(':'), int(item[1])) for item in items] [(&lt;function &lt;listcomp&gt;.&lt;lambda&gt; at 0x00000185D4810C20&gt;, 'fish', 2), ...] </code></pre> <p>Why is the first item in each tuple what appears to be a function, rather than the lambda's return value? I could come up with other ways to solve this. However, I have not encountered this kind of behavior from a lambda before, and I would like to understand what's going on.</p>
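What happens here: `lambda item: ...` is an expression whose value is the function object itself, so the tuple literal stores the function without ever calling it. Calling it immediately gives the expected string, though a plain dict comprehension makes the lambda unnecessary. A minimal sketch:

```python
items = [['fish:', '2'], ['cats:', '3'], ['dogs:', '5']]

# The original expression stores the lambda object; calling it fixes that:
called = [((lambda item: item[0].strip(':'))(item), int(item[1])) for item in items]

# But no lambda is needed at all - unpack each pair and build the dict directly:
result = {word.strip(':'): int(count) for word, count in items}
print(result)  # {'fish': 2, 'cats': 3, 'dogs': 5}
```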
<python>
2023-10-12 12:32:42
2
529
James S.
77,280,606
173,003
Catching a specific exception in a chain when reraising it is not an option
<p>Consider the following Python code:</p> <pre class="lang-py prettyprint-override"><code>def say(words): for word in words: if word == &quot;Ni&quot;: raise ValueError(&quot;We are no longer the knights who say Ni!&quot;) print(word) def blackbox(function, sentence): words = sentence.split() try: function(words) except Exception as e: raise RuntimeError(&quot;Generic Error&quot;) blackbox(say, &quot;Foo Ni Bar&quot;) </code></pre> <p>It prints the following traceback:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) Input In [35], in blackbox(function, sentence) 9 try: ---&gt; 10 function(words) 11 except Exception as e: Input In [35], in say(words) 3 if word == &quot;Ni&quot;: ----&gt; 4 raise ValueError(&quot;We are no longer the knights who say Ni!&quot;) 5 print(word) ValueError: We are no longer the knights who say Ni! During handling of the above exception, another exception occurred: RuntimeError Traceback (most recent call last) Input In [35], in &lt;cell line: 14&gt;() 11 except Exception as e: 12 raise RuntimeError(&quot;Generic Error&quot;) ---&gt; 14 blackbox(say, &quot;Foo Ni Bar&quot;) Input In [35], in blackbox(function, sentence) 10 function(words) 11 except Exception as e: ---&gt; 12 raise RuntimeError(&quot;Generic Error&quot;) RuntimeError: Generic Error </code></pre> <p>Assume I am only interested in the first error. I could simply reraise it by replacing <code>raise RuntimeError(&quot;Generic Error&quot;)</code> by <code>raise e</code> in <code>blackbox()</code>: end of the story.</p> <p>Except (!) that I cannot modify the code of <code>blackbox()</code>, which belongs to an external library.</p> <p>How can I obtain the same result without touching it? My guess is that I could wrap the call to <code>blackbox()</code> in a <code>try... except...</code>, retrieve the chain of exceptions, and select the one I am interested in. 
But I failed to find anywhere how to do such a thing.</p> <p>Edit: changed the name and the signature of the second function to make the constraints clearer.</p>
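The chain the question guesses at does exist: the original exception is reachable from the caught one via `__context__` (implicit chaining) or `__cause__` (`raise ... from`). A sketch of walking back to the root, with the question's `say`/`blackbox` reproduced so the snippet is self-contained (`root_exception` is my own helper name):

```python
def say(words):
    for word in words:
        if word == "Ni":
            raise ValueError("We are no longer the knights who say Ni!")
        print(word)

def blackbox(function, sentence):  # stand-in for the untouchable library code
    words = sentence.split()
    try:
        function(words)
    except Exception:
        raise RuntimeError("Generic Error")

def root_exception(exc):
    # Follow the chain: __cause__ for explicit 'raise ... from',
    # __context__ for implicit chaining inside an except block.
    while exc.__cause__ is not None or exc.__context__ is not None:
        exc = exc.__cause__ if exc.__cause__ is not None else exc.__context__
    return exc

try:
    blackbox(say, "Foo Ni Bar")
except RuntimeError as e:
    original = root_exception(e)

print(repr(original))
```

If only a specific exception type in the chain matters, the loop can instead stop at the first `isinstance(exc, ValueError)` hit.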
<python><higher-order-functions>
2023-10-12 12:31:53
1
4,114
Aristide
77,280,579
8,261,345
Pandas: vectorised way to forward fill a series using a gradient?
<p>Consider a dataframe which has a series <code>price</code> with gaps containing NaN:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd df = pd.DataFrame({&quot;price&quot;: [1, 2, 3, np.nan, np.nan, np.nan, np.nan, np.nan, 9, 10]}, index=pd.date_range(&quot;2023-01-01&quot;, periods=10)) </code></pre> <pre><code> price 2023-01-01 1.0 2023-01-02 2.0 2023-01-03 3.0 2023-01-04 NaN 2023-01-05 NaN 2023-01-06 NaN 2023-01-07 NaN 2023-01-08 NaN 2023-01-09 9.0 2023-01-10 10.0 </code></pre> <p>My desired result is to fill this gap using the last known gradient prior to the gap, i.e.:</p> <pre><code> price 2023-01-01 1.0 2023-01-02 2.0 2023-01-03 3.0 2023-01-04 4.0 2023-01-05 5.0 2023-01-06 6.0 2023-01-07 7.0 2023-01-08 8.0 2023-01-09 9.0 2023-01-10 10.0 </code></pre> <p>This is easy to achieve using iteration:</p> <pre class="lang-py prettyprint-override"><code>gradients = (df[&quot;price&quot;] - df[&quot;price&quot;].shift(1)).ffill() price_values = df[&quot;price&quot;].values for index, val in enumerate(price_values): last_price = price_values[index - 1] gradient = gradients.iloc[index] if pd.isna(val) and not pd.isna(last_price) and not pd.isna(gradient): df[&quot;price&quot;].iat[index] = last_price + gradient </code></pre> <pre><code> price 2023-01-01 1.0 2023-01-02 2.0 2023-01-03 3.0 2023-01-04 4.0 2023-01-05 5.0 2023-01-06 6.0 2023-01-07 7.0 2023-01-08 8.0 2023-01-09 9.0 2023-01-10 10.0 </code></pre> <p>This works fine but is slow. This also feels like a common use case and I would be surprised if it was not built in to pandas, but I am unable to find it in documentation. Is there a better, vectorised way to do this?</p>
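One vectorised way (a sketch, assuming, as the loop does, that each gap should be extended linearly with the gradient observed just before it): forward-fill both the last valid price and the last valid gradient, then multiply the gradient by the number of steps elapsed since the last valid observation.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"price": [1, 2, 3, np.nan, np.nan, np.nan, np.nan, np.nan, 9, 10]},
                  index=pd.date_range("2023-01-01", periods=10))

s = df["price"]
grad = s.diff().ffill()   # last known gradient, carried into the gap
last = s.ffill()          # last known price, carried into the gap
gap = s.isna()
# Consecutive-NaN counter: restarts at 1 at each gap, 0 on valid rows.
steps = gap.groupby((~gap).cumsum()).cumsum()
df["price"] = s.fillna(last + grad * steps)
```

`fillna` only touches the originally missing rows, so valid observations are left untouched.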
<python><pandas><dataframe><numpy>
2023-10-12 12:27:47
1
694
Student
77,280,470
2,633,704
Clicking "go to definition" in VS Code uses all my system RAM
<p>I am using VS Code for Python coding, and now I have a big issue. When I set the Python interpreter, right-click anywhere in my code, and click &quot;Go to Definition&quot;, all my RAM gets used by the VS Code MS Python extension. Is there any setting I need to change to prevent this? I have to kill the process every time it happens!</p>
<python><visual-studio-code>
2023-10-12 12:13:09
0
990
MarziehSepehr
77,280,434
2,516,783
Is there a clean way of starting a task execution straight away with asyncio.create_task()?
<p>I have the following code:</p> <pre class="lang-py prettyprint-override"><code>import asyncio import time async def coro_1(seconds=5): await asyncio.sleep(seconds) async def main_1(): task_1 = asyncio.create_task(coro_1()) # Long computation here time.sleep(5) await task_1 async def main_2(): task_1 = asyncio.create_task(coro_1()) await asyncio.sleep(0) # Long computation here time.sleep(5) await task_1 if __name__ == '__main__': start = time.time() asyncio.run(main_1()) end = time.time() print(f'Main_1 took { end - start } seconds.') start = time.time() asyncio.run(main_2()) end = time.time() print(f'Main_2 took { end - start } seconds.') </code></pre> <p>The output:</p> <pre><code>Main_1 took 10.005882263183594 seconds. Main_2 took 5.005404233932495 seconds. </code></pre> <p>I understand that <code>main_1</code> coro takes longer as the <code>time.sleep()</code> does not happen &quot;concurrently&quot; with the <code>asyncio.sleep()</code>. As far as I understand, this is because the task does not start its execution until the main_1 coro &quot;yields&quot; the execution in the <code>await task_1</code> sentence.</p> <p>In <code>main_2</code> this does not happen because we allowed the start of the task by &quot;yielding&quot; with <code>await asyncio.sleep(0)</code>.</p> <p>Is there a better way of achieving this behaviour? I would like to create a task and have it started straight away without needing an explicit <code>asyncio.sleep(0)</code> so my code runs faster. I feel like adding sleeps all over the place is ugly and adds a lot of boilerplate code.</p> <p>Any suggestions?</p>
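A created task indeed cannot run until the coroutine yields control to the event loop. Rather than sprinkling `asyncio.sleep(0)`, one option is to push the blocking computation off the loop with `asyncio.to_thread`: awaiting it yields (so the task starts immediately) and the loop stays responsive while the blocking work runs. A sketch (with shorter sleeps; note the usual GIL caveat for pure-Python CPU work in the thread):

```python
import asyncio
import time

async def coro_1(seconds=0.5):
    await asyncio.sleep(seconds)

async def main():
    task_1 = asyncio.create_task(coro_1())
    # Awaiting to_thread yields to the event loop, so task_1 starts right
    # away, while the blocking call runs concurrently in a worker thread.
    await asyncio.to_thread(time.sleep, 0.5)  # stand-in for the long computation
    await task_1

start = time.monotonic()
asyncio.run(main())
elapsed = time.monotonic() - start
print(f"took {elapsed:.2f}s")  # ~0.5s rather than ~1.0s
```

On Python 3.12+ there is also `asyncio.eager_task_factory` (installed with `loop.set_task_factory`), which makes tasks start executing synchronously at creation, up to their first suspension point.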
<python><python-asyncio>
2023-10-12 12:06:40
2
711
Selnay
77,280,421
3,110,458
Portable Python pip execution fails on "module pip not found"
<p>I wrote a simple Nodejs script that checks if Python is installed with the specific version and if not it will install a portable version via zip.</p> <p>Executing Python works fine but when I try to install sth via pip with :</p> <p><code>.\python.exe -m pip intstall xzy</code></p> <p>it says that pip is not installed.</p> <p>When i check the Scripts and lib folder where python is installed i see the pip.exe</p> <p>here is my script :</p> <pre><code> import { exec, spawn } from 'child_process'; import fs from 'fs'; import axios from 'axios'; import os from 'os'; import path from 'path'; const platform = os.platform(); async function getPythonPath(version: string): Promise&lt;string&gt; { const pythonFolder = path.join(&quot;public&quot;, &quot;python&quot;, version) const pythonCmd = path.join(pythonFolder, 'python'); if (fs.existsSync(pythonFolder)) { return platform === &quot;win32&quot; ? pythonCmd + &quot;.exe&quot; : pythonCmd; } let pythonUrl: string, pythonZipPath: string, unzipCmd: string, pipCmd: string; if (platform === 'win32') { pythonUrl = `https://www.python.org/ftp/python/${version}/python-${version}-embed-amd64.zip`; pythonZipPath = path.join(os.tmpdir(), `python-${version}.zip`); unzipCmd = `7z x ${pythonZipPath} -o${pythonFolder}`; pipCmd = `${pythonCmd}.exe ${path.join(pythonFolder, 'get-pip.py')}`; } else if (platform === 'darwin') { pythonUrl = `https://www.python.org/ftp/python/${version}/python-${version}-macosx10.9.pkg`; pythonZipPath = path.join(os.tmpdir(), `python-${version}.pkg`); unzipCmd = `sudo installer -pkg ${pythonZipPath} -target /`; pipCmd = `sudo ${pythonCmd} ${path.join(pythonFolder, 'get-pip.py')}`; } else { throw new Error('Unsupported platform'); } try { console.log(`Downloading Python ${version}...`); const response = await axios({ method: 'get', url: pythonUrl, responseType: 'stream', }); const fileStream = fs.createWriteStream(pythonZipPath); response.data.pipe(fileStream); await new Promise&lt;void&gt;((resolve, 
reject) =&gt; { fileStream.on('finish', resolve); fileStream.on('error', reject); }); await new Promise&lt;void&gt;((resolve, reject) =&gt; { console.log(`Unzipping Python ${version}...`); exec(unzipCmd, (error) =&gt; { if (error) { reject(error); } else { resolve(); } }); }); // Download get-pip.py console.log('Downloading get-pip.py...'); const getPipUrl = 'https://bootstrap.pypa.io/get-pip.py'; const getPipScriptPath = path.join(pythonFolder, 'get-pip.py'); const getPipResponse = await axios({ method: 'get', url: getPipUrl, }); fs.writeFileSync(getPipScriptPath, getPipResponse.data); await new Promise&lt;void&gt;((resolve, reject) =&gt; { console.log('Installing pip...'); exec(pipCmd, (error) =&gt; { if (error) { reject(error); } else { resolve(); } }); }); return platform === &quot;win32&quot; ? pythonCmd + &quot;.exe&quot; : pythonCmd; } catch (error) { throw error; } } export async function pyrun(scriptPath: string, pythonVersion: string, callback: (text: string) =&gt; void, onerror: (text: string) =&gt; void): Promise&lt;void&gt; { return new Promise&lt;void&gt;(async (resolve, reject) =&gt; { const pythonExecutable = await getPythonPath(pythonVersion); const command = `${pythonExecutable} ${scriptPath}`; exec(command, (error, stdout, stderr) =&gt; { if (error) { reject(error); } else { resolve(); } if (stdout &amp;&amp; callback) { callback(stdout); } if (stderr &amp;&amp; onerror) { onerror(stderr); } }); }); } </code></pre> <p>this is my try to run it :</p> <pre><code>import path from &quot;path&quot;; import { pyrun } from &quot;../basic-py-run&quot;; async function main() { const version = '3.9.0'; const installScript = path.join('public', 'python', version) + ' -m pip install cowsay'; await pyrun(installScript, version, (text) =&gt; { console.log(text) }, (error) =&gt; { console.error(error); }); } main(); </code></pre>
<javascript><python><node.js>
2023-10-12 12:04:52
3
374
user3110458
77,280,233
7,857,466
How to in-place assign an item to a class containing a list, without inheritance?
<p>Why do I get <code>IndexError: list assignment index out of range</code> with the following code:</p> <pre><code>class MyOwnList(): def __init__(self, a_list): self.list = a_list def __getitem__(self, index): return self.list[index] def __setitem__(self, index, value): self.list[index]= value L2 = MyOwnList([]) L2[0] = &quot;a&quot; </code></pre> <p>I know I can derive from <code>list</code> or <code>UserList</code>, but I want to use composition not inheritance.</p>
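The error is not about the wrapper class: item assignment never grows a list, so `self.list[0] = value` on an empty list raises exactly as `[][0] = 'a'` would. If writing one-past-the-end should append, that policy has to be coded explicitly. A composition-based sketch (the grow-by-one rule is my own choice, not standard `list` behaviour):

```python
class MyOwnList:
    def __init__(self, a_list):
        self._items = list(a_list)

    def __getitem__(self, index):
        return self._items[index]

    def __setitem__(self, index, value):
        if index == len(self._items):
            self._items.append(value)   # grow by one; a plain list would raise here
        else:
            self._items[index] = value  # normal in-place assignment

L2 = MyOwnList([])
L2[0] = "a"      # appends instead of raising IndexError
print(L2[0])     # a
```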
<python>
2023-10-12 11:38:19
1
4,984
progmatico
77,280,130
542,270
pre-commit not picking files for pip-tools
<p>I have the following repo structure:</p> <pre><code>libs/ - l1/ - pyproject.toml - l2/ - pyproject.toml batch/ - b1/ - pyproject.toml - b2/ - pyproject.toml pipelines/ - p1/ - pyproject.toml - p2/ - pyproject.toml pyproject.toml </code></pre> <p>And the following pre-commit hook configured:</p> <pre><code>--- files: .(yaml|yml|py|toml)$ repos: - repo: https://github.com/jazzband/pip-tools rev: 7.3.0 hooks: - id: pip-compile name: Run pip-compile for 'prod' env. args: - -o - pyproject.lock - --generate-hashes - --strip-extras </code></pre> <p>The problem is that it only picks up the repo root's pyproject.toml file; all the others are skipped. Why is that? I've tried using the <code>files</code> option to no avail. What is the problem here?</p>
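As far as I understand pip-tools' pre-commit integration, `pip-compile` does not compile the staged filenames; it compiles the source file given in `args` (or the default in the working directory, i.e. the repo root, when none is given). Since the `args` above only contain `-o` and flags, only the root `pyproject.toml` is ever compiled, regardless of `files`. The approach shown in the pip-tools README is one hook entry per project, each with its own `args` and `files` filter, along these lines (paths taken from the question's layout):

```yaml
repos:
  - repo: https://github.com/jazzband/pip-tools
    rev: 7.3.0
    hooks:
      - id: pip-compile
        name: pip-compile libs/l1
        files: ^libs/l1/(pyproject\.toml|pyproject\.lock)$
        args: [libs/l1/pyproject.toml, -o, libs/l1/pyproject.lock, --generate-hashes, --strip-extras]
      - id: pip-compile
        name: pip-compile libs/l2
        files: ^libs/l2/(pyproject\.toml|pyproject\.lock)$
        args: [libs/l2/pyproject.toml, -o, libs/l2/pyproject.lock, --generate-hashes, --strip-extras]
      # ...one entry per sub-project (batch/b1, batch/b2, pipelines/p1, ...)
```

The per-hook `files` pattern just keeps each entry from running when unrelated files change; the `args` are what actually select the input and output paths.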
<python><pre-commit-hook><pre-commit><pre-commit.com><pip-tools>
2023-10-12 11:24:36
2
85,464
Opal
77,279,937
7,307,824
Maintaining a big list of properties in a class that can be exported as a CSV row
<p>I'm new to Python and trying to store a list of data in CSV format.</p> <p>I've got a wrapper class for the csv and for each row I have a <code>DataRow</code>. However, I know this code isn't very maintainable.</p> <p>Currently, if I want to add a new value I need to add a condition in the <code>set_item</code> add a new field in <code>fields</code> and a new property in the class.</p> <p>Is there a better solution to maintaining a list of properties in a class?</p> <p>Ideally my <code>DataRow</code> class would just have a list of <code>fields</code> with some generic methods.</p> <pre class="lang-py prettyprint-override"><code>class DataRow: # These are used as the column headings in the csv. # They are just the accessible keys. fields: List[str] = ['id', 'name', 'created', 'time'] # I still want to reference specific values so I add them to class # I have so many properties I don't really want to list them all here. id:str name:str created: str time: str def __init__(self) -&gt; None: def get_labels(self) -&gt; List[str]: return self.fields # reference is a string from external source so need to map it to a property def set_item(self, reference:str, value:str): # in this case reference doesn't match up with the property name if reference == 'my id': self.id = value if reference == 'item name': self.name = value if reference == 'date': self.created = value if reference == 'time': self.time = value def get_item(self, key:str): # I want to get the property based on field self[name] class DataFile: name: str rows: List[DataRow] = [] def __init__(self, name: str) -&gt; None: self.name = name def open(self): # will open the csv file and set the rows def save(self): # will save the latest rows to csv file def add_row(id: str, row:DataRow ): self.rows.append(row) </code></pre>
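One way to collapse the three parallel lists (fields, properties, `set_item` branches) into a single definition is a dataclass whose field `metadata` carries the external reference name. The generic methods then iterate over `dataclasses.fields` instead of hard-coding each property. A sketch, keeping the question's field names and reference strings:

```python
from dataclasses import dataclass, field, fields
from typing import List

@dataclass
class DataRow:
    # The external reference name lives next to the field it maps to,
    # so adding a column is a one-line change.
    id: str = field(default='', metadata={'reference': 'my id'})
    name: str = field(default='', metadata={'reference': 'item name'})
    created: str = field(default='', metadata={'reference': 'date'})
    time: str = field(default='', metadata={'reference': 'time'})

    def get_labels(self) -> List[str]:
        return [f.name for f in fields(self)]

    def set_item(self, reference: str, value: str) -> None:
        for f in fields(self):
            if f.metadata.get('reference') == reference:
                setattr(self, f.name, value)
                return
        raise KeyError(f'unknown reference: {reference}')

    def as_csv_row(self) -> List[str]:
        return [getattr(self, f.name) for f in fields(self)]

row = DataRow()
row.set_item('my id', '42')
row.set_item('item name', 'widget')
print(row.get_labels())   # ['id', 'name', 'created', 'time']
print(row.as_csv_row())   # ['42', 'widget', '', '']
```

`get_labels` doubles as the CSV header and `as_csv_row` as the row body, so `csv.writer` can consume them directly; fields keep their declaration order.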
<python><python-3.x>
2023-10-12 10:51:37
1
568
Ewan
77,279,694
6,797,800
Legend with many elements causes plot to be small
<p>As shown below, not sure whether there are too many signals in one subplot, hence the legend takes too much space, the plot itself is too small, i.e the height is short.</p> <p>How may I make the plot bigger please?</p> <p>Code for the plot</p> <pre><code>cm = 1/2.54 fig, axes = plt.subplots(nrows=len(unique_signals), ncols=1, figsize=(23.5*cm, 17.2*cm)) sig_col = filtered_df.columns[1:] plot_counter = 0 previous_label = &quot;&quot; for column in sig_col: signal_name = column.split('_')[0] if ':' in column else column[:-1] if signal_name != previous_label or plot_counter == 0: ax = axes[plot_counter] plot_counter += 1 ax.grid(True) previous_label = signal_name ax.plot(filtered_df['time'], filtered_df[column], label=column) y_min, y_max = ax.get_ylim() more_ext = ['Ilw1_X','Ilw2_X','IvwTrf1_X','IdcP_X','IdcN_X','Vlw2_X', 'Ilw1_Y','Ilw2_Y','IvwTrf1_Y','IdcP_Y','IdcN_Y','Vlw2_Y','Ivlv','IvlvSum','Icir','Ignd'] percentage = 0.02 if signal_name not in more_ext else 0.2 y_min_ext = y_min*(1-percentage) if y_min &gt; 0 else y_min*(1+percentage) y_max_ext = y_max*(1+percentage) if y_max &gt; 0 else y_max*(1-percentage) ax.set_ylim(y_min_ext, y_max_ext) for ax in axes: ax.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.tight_layout() plt.savefig(group_name.split('_')[0]+'.png', dpi=300) plt.close() </code></pre> <p><a href="https://i.sstatic.net/HfV0G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HfV0G.png" alt="enter image description here" /></a></p> <p>My expectation: <a href="https://i.sstatic.net/1NK23.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1NK23.png" alt="enter image description here" /></a></p>
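Two things likely compound here: the figure height is fixed regardless of the subplot count, and `tight_layout` does not account for legends placed outside the axes with `bbox_to_anchor`, so the axes get squeezed. A sketch of an alternative (scale `figsize` with the number of subplots, use `layout="constrained"` so decorations get reserved space, and keep legends compact with a small font and multiple columns; `signal_{i}_{j}` labels are illustrative):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted runs; drop for interactive use
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 200)
n_rows = 2
# Give each subplot ~3 inches of height instead of a fixed total figure height.
fig, axes = plt.subplots(nrows=n_rows, ncols=1, figsize=(9, 3 * n_rows),
                         layout="constrained")
for i, ax in enumerate(axes):
    for j in range(6):
        ax.plot(x, np.sin(x + i + j), label=f"signal_{i}_{j}")
    ax.grid(True)
    # Compact legend: small font, several columns, instead of one tall stack.
    ax.legend(loc="upper right", fontsize="x-small", ncol=3)
fig.savefig("legend_demo.png", dpi=150)
```

With `layout="constrained"`, skip the `plt.tight_layout()` call; the two layout engines should not be mixed.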
<python><matplotlib>
2023-10-12 10:13:25
1
769
Victor
77,279,621
5,211,659
How do I find an optimum selling point in a time/value series using pandas?
<p>I have a given dataset with value depreciation over time: <a href="https://i.sstatic.net/F5gnb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F5gnb.png" alt="value depreciation" /></a>)</p> <p>As you can see, the depreciation is not linear and sometimes even negative between two months.</p> <p>What I am looking for is a way in pandas to calculate the optimal purchase and selling point given a minimum duration. How can I implement that and which methods should I look at?</p>
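One interpretation (an assumption on my part: "optimal" means the largest buy-low/sell-high difference, with the purchase at least `min_hold` periods before the sale) can be done vectorised with a shifted cumulative minimum; `best_trade` is a hypothetical helper name:

```python
import numpy as np
import pandas as pd

def best_trade(prices: pd.Series, min_hold: int = 1):
    """Return (buy_pos, sell_pos, profit) maximising sell price minus buy
    price, holding for at least `min_hold` periods."""
    p = prices.to_numpy(dtype=float)
    # Cheapest price seen at least `min_hold` steps before each point.
    floor = pd.Series(p).shift(min_hold).cummin().to_numpy()
    profit = p - floor                      # best achievable profit if selling here
    sell = int(np.nanargmax(profit))        # NaN entries (too early to sell) are skipped
    buy = int(np.argmin(p[: sell - min_hold + 1]))
    return buy, sell, float(profit[sell])

prices = pd.Series([5.0, 3.0, 4.0, 1.0, 6.0, 2.0])
print(best_trade(prices))  # (3, 4, 5.0) - buy at 1.0, sell at 6.0
```

Negative inter-month depreciation is handled naturally, since the running minimum simply ignores later price rises.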
<python><pandas><dataframe>
2023-10-12 10:03:24
1
821
Daniel Becker
77,279,618
3,426,328
Wrong sorting order while using pyspark custom sort
<p>I have a dataframe with about 20 columns and 15M rows that I have to sort, based on some conditions. I also prefer not to add new columns to the dataframe, to help setting the order.<br /> For simplicity lets say that I have the following data, where A is an integer and a1, a2 have the value of 0 or 1:</p> <p>| A| a1| a2|<br /> |:---:|:---:|:---:|<br /> | 3| 0| 0|<br /> | 1| 0| 1|<br /> | 2| 0| 1|<br /> | 1| 1| 0|<br /> | 3| 1| 1|<br /> | 1| 1| 1|<br /> I'd like to sort it by 'A' column first and then on some conditions on 'A1' and 'A2', so I use the following code -</p> <pre><code>df_sorted = df.orderBy(col('a'), f.when((col('A1') == 1) &amp; (col('A2') == 0), 1) .when((col('A1') == 0) &amp; (col('A2') == 1), 2) .when((col('A1') == 1) &amp; (col('A2') == 1), 3) .when((col('A1') == 0) &amp; (col('A2') == 0), 4)) </code></pre> <p>which gives my the desired results:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">A</th> <th style="text-align: center;">a1</th> <th style="text-align: center;">a2</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">1</td> <td style="text-align: center;">0</td> </tr> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">0</td> <td style="text-align: center;">1</td> </tr> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">1</td> <td style="text-align: center;">1</td> </tr> <tr> <td style="text-align: center;">2</td> <td style="text-align: center;">0</td> <td style="text-align: center;">1</td> </tr> <tr> <td style="text-align: center;">3</td> <td style="text-align: center;">1</td> <td style="text-align: center;">1</td> </tr> <tr> <td style="text-align: center;">3</td> <td style="text-align: center;">0</td> <td style="text-align: center;">0</td> </tr> </tbody> </table> </div> <p>The problem is that I have some other group of columns in the dataframe that I have to sort by (B, B1, B2, 
C, C1, C2 and so on), so I prefer the following method -</p> <pre><code>sorting_order = [(col(&quot;A1&quot;) == 1) &amp; (col(&quot;A2&quot;) == 0), (col(&quot;A1&quot;) == 0) &amp; (col(&quot;A2&quot;) == 1), (col(&quot;A1&quot;) == 1) &amp; (col(&quot;A2&quot;) == 1), (col(&quot;A1&quot;) == 0) &amp; (col(&quot;A2&quot;) == 0)] df_sorted = df.orderBy(col(&quot;A&quot;), *sorting_order) </code></pre> <p>because I can write down a function that returns all the necessary lists for all the other sortings, but I get wrong result:</p> <p>| A| a1| a2| |:---:|:---:|:---:| | 1| 1| 1| | 1| 0| 1| | 1| 1| 0| | 2| 0| 1| | 3| 0| 0| | 3| 1| 1| It looks like the order of the condition on a1 and a2 is now descending! I guess I can reverse the <code>sorting_order</code> list to get the right result, but I'd like to know why the result is not as I expected.<br /> Also tried to use <code>df_sorted = df.orderBy([col(&quot;A&quot;), *sorting_order], ascending=[1, 0])</code> but that messed up the output even more:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">A</th> <th style="text-align: center;">a1</th> <th style="text-align: center;">a2</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">1</td> <td style="text-align: center;">0</td> </tr> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">1</td> <td style="text-align: center;">1</td> </tr> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">0</td> <td style="text-align: center;">1</td> </tr> </tbody> </table> </div> <p>So my question is - why is the ordering reversed when I am using the list, and is there a way to avoid it without reversing the list?</p>
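The likely explanation: booleans sort ascending with `False < True`, so passing the raw conditions to `orderBy` puts the rows that *fail* each condition first; rows matching the first condition therefore sort last, which is exactly the reversed order observed. The `when()` version worked because it collapsed the conditions into one numeric rank. That rank expression can be built programmatically from the same list, which keeps it reusable for the B and C column groups (a sketch, untested here since it needs a running Spark session; `rank_expr` is my own helper name):

```python
import pyspark.sql.functions as f

def rank_expr(conditions):
    # Equivalent to when(cond0, 1).when(cond1, 2)... .otherwise(n + 1)
    expr = f.when(conditions[0], 1)
    for rank, cond in enumerate(conditions[1:], start=2):
        expr = expr.when(cond, rank)
    return expr.otherwise(len(conditions) + 1)

sorting_order = [
    (f.col("A1") == 1) & (f.col("A2") == 0),
    (f.col("A1") == 0) & (f.col("A2") == 1),
    (f.col("A1") == 1) & (f.col("A2") == 1),
    (f.col("A1") == 0) & (f.col("A2") == 0),
]
df_sorted = df.orderBy(f.col("A"), rank_expr(sorting_order))
```

No extra column is added to the dataframe; the rank only exists inside the sort expression.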
<python><pyspark>
2023-10-12 10:03:14
1
6,181
TDG
77,279,578
12,412,154
AsyncIO Streams works with FastAPI but doesn't work separately
<p>So I have this class for async handling Socket connection.</p> <pre class="lang-py prettyprint-override"><code># socket_service.py class Sockets: reader: asyncio.StreamReader writer: asyncio.StreamWriter message_queue = [] async def start(self): reader, writer = await asyncio.wait_for( asyncio.open_connection(host, port), timeout=5 ) self.reader = reader self.writer = writer loop = asyncio.get_running_loop() loop.create_task(self.read()) loop.create_task(self.write()) async def read(self): while True: response = await asyncio.wait_for( self.reader.read(), timeout=60, ) if response: message_queue.append(response) await asyncio.sleep(1) async def write(self): while True: if message_queue: self.writer.write(message.queue.pop(0)) await self.writer.drain() </code></pre> <p>And I run it like this with FastAPI and Uvicorn:</p> <pre class="lang-py prettyprint-override"><code># application.py from fastapi import FastAPI def register_startup_event(app: FastAPI): @app.on_event(&quot;startup&quot;) async def _startup() -&gt; None: app.state.session = Sockets() await app.state.session.start() return _startup def get_app(): app = FastAPI() register_startup_event(app) return app </code></pre> <pre class="lang-py prettyprint-override"><code># __main__.py import uvicorn def main(): uvicorn.run( &quot;application_folder.application:get_app&quot;, workers=1, factory=True, ) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>And it works perfectly in FastAPI! But when I tried to run it manually I get errors</p> <pre class="lang-py prettyprint-override"><code># manual_start.py async def init(): session = Sockets() await session.start() </code></pre> <ol> <li>In this case, the application terminates immediately, as if the infinite loop had gone through once and exited immediately. 
<code>Process finished with exit code 0</code></li> </ol> <pre class="lang-py prettyprint-override"><code>if __name__ == &quot;__main__&quot;: asynio.run(init()) </code></pre> <ol start="2"> <li>In both of these cases, the application does not terminate immediately, but it does not receive messages on the socket. The print outputs zero bytes <code>b''</code>, as if the server is not sending it anything, although I note that when I immediately launch fastapi stack everything works and all data arrives</li> </ol> <pre class="lang-py prettyprint-override"><code># CASE 2 if __name__ == &quot;__main__&quot;: loop = asyncio.get_event_loop() loop.create_task(init()) loop.run_forever() # CASE 3 if __name__ == &quot;__main__&quot;: loop = asyncio.get_event_loop() loop.run_until_complete(init()) loop.run_forever() </code></pre> <p>I'm sure that FastAPI or Uvicorn is doing some kind of asynchronous magic under the hood that I can't figure out and can't get my class to work separately. What could be the problem? You can simply agree to use it only inside FastAPI, but I need it separately</p> <p>P.S. I asked ChatGPT, he suggests that I either remove <code>wait_for</code> or add <code>await</code>'s or swap function calls, in general, so that he doesn’t advise me, nothing works for me, but with FastAPI everything works</p>
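The "magic" is simply that uvicorn keeps its event loop running forever, while `asyncio.run(init())` returns as soon as `init()` returns and then cancels the still-pending `read()`/`write()` tasks that `start()` spawned. A minimal self-contained reproduction of both behaviours, plus the fix (keep the main coroutine alive, e.g. by awaiting the tasks or an `asyncio.Event`):

```python
import asyncio

results = []

async def background():
    # Stands in for the read()/write() loops spawned by Sockets.start().
    await asyncio.sleep(0.1)
    results.append("done")

async def exits_immediately():
    asyncio.create_task(background())
    # Returning here ends asyncio.run(), which cancels the pending task.

async def stays_alive():
    task = asyncio.create_task(background())
    await task  # a long-running service could use: await asyncio.Event().wait()

asyncio.run(exits_immediately())
after_early_exit = list(results)   # [] - the task never got to finish

asyncio.run(stays_alive())
print(after_early_exit, results)   # [] ['done']
```

Applied to the question's class, `init()` would await `asyncio.Event().wait()` (or `asyncio.gather` on the reader/writer tasks) after `session.start()`. Keeping references to the created tasks also protects them from being garbage collected mid-flight.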
<python><sockets><stream><python-asyncio><fastapi>
2023-10-12 09:57:41
1
543
RoyalGoose
77,279,467
6,308,605
Reading nested dictionary without using .get
<p>I have a file called <code>sample</code> (no file format) that looks like this:</p> <pre><code>{ &quot;schema&quot;: &quot;abc&quot;, &quot;region&quot;: &quot;asia&quot;, &quot;values&quot;: { &quot;before_values&quot;: { &quot;id&quot;: 123, &quot;created_at&quot;: &quot;2023-07-28 19:21:39&quot;, &quot;name&quot;: &quot;alex&quot; }, &quot;after_values&quot;: { &quot;id&quot;: 123, &quot;created_at&quot;: &quot;2024-07-28 19:21:39&quot;, &quot;name&quot;: null }, &quot;file_name&quot;: &quot;my_file.1234&quot; } }{ &quot;schema&quot;: &quot;abc&quot;, &quot;region&quot;: &quot;asia&quot;, &quot;values&quot;: { &quot;values&quot;: { &quot;id&quot;: 456, &quot;created_at&quot;: &quot;2023-10-10 17:15:59&quot;, &quot;name&quot;: null }, &quot;file_name&quot;: &quot;my_file.1234&quot; } } </code></pre> <p>Note that the file contains multiple JSON objects without a delimiter, so I need to read the file as written here (works perfectly!):</p> <pre><code>import json decoder = json.JSONDecoder() with open('/path/to/sample', 'r') as content_file: content = content_file.read() content_length = len(content) decode_index = 0 raw_data_list = [] while decode_index &lt; content_length: try: data, decode_index = decoder.raw_decode(content, decode_index) # print(&quot;File index:&quot;, decode_index) print(type(data)) # returns dict print(data[&quot;schema&quot;]) # works print(data[&quot;values&quot;]) # works print(data[&quot;values&quot;][&quot;values&quot;]) # KeyError: 'values' # WORKAROUND raw_data = data.get(&quot;values&quot;, {}) # Append raw_data to the list raw_data_list.append(raw_data) except json.JSONDecodeError as e: print(&quot;JSONDecodeError:&quot;, e) # Scan forward and keep trying to decode decode_index += 1 </code></pre> <p>Apparently the workaround to get <code>data[&quot;values&quot;][&quot;values&quot;]</code> is to add <code>raw_data = data.get(&quot;values&quot;, {})</code> inside the <code>try</code> block, append to a list, and then iterate over it like this:</p> <pre><code>for raw_data in raw_data_list: raw_data = raw_data.get('values', {}) print(raw_data) </code></pre> <p>There should be a better way to handle this, right? Because retrieving values inside <code>values</code>, <code>before_values</code> or <code>after_values</code> will also have the same KeyError issue, e.g. when accessing <code>created_at</code>.</p>
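The KeyError happens because the first object nests `before_values`/`after_values` under `values`, while the second nests another `values`; the two records simply have different shapes, and `.get` only papers over that. A common alternative is a small helper that walks an arbitrary key path with a default. This is a generic sketch, not tied to any particular record layout:

```python
def deep_get(d, keys, default=None):
    """Walk a nested dict along `keys`; return `default` on the first miss."""
    for key in keys:
        if not isinstance(d, dict) or key not in d:
            return default
        d = d[key]
    return d

record = {"values": {"before_values": {"id": 123, "name": "alex"}}}
print(deep_get(record, ["values", "before_values", "name"]))            # alex
print(deep_get(record, ["values", "values", "id"], default="missing"))  # missing
```

With this, `deep_get(data, ["values", "values", "created_at"])` and `deep_get(data, ["values", "before_values", "created_at"])` both degrade gracefully instead of raising.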
<python><json><dictionary>
2023-10-12 09:42:19
1
761
user6308605
77,279,465
13,123,667
Use python3 in VS Code instead of python
<p>-&gt; python3 --version = 3.10<br /> -&gt; python --version = 3.12</p> <p>I usually run my code with <code>python3 -m main</code> while in my conda virtual env, and it works great. But when I want to run a notebook, I choose my interpreter (the correct venv), yet it runs the code with <code>python</code> (not <code>python3</code>), leading to &quot;No module named ...&quot;</p> <p>What can I do?</p> <ul> <li>Downgrade Python and run <code>python -m pip install</code> for <code>python</code> instead of <code>python3</code>?</li> <li>Configure VS Code to use <code>python3 -m</code>? This would be my preferred solution, but I didn't find out how to do it.</li> </ul>
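In a notebook, what matters is the interpreter behind the selected kernel, not whether the shell command is spelled `python` or `python3`. A quick sanity check from inside the notebook:

```python
import sys

# The interpreter actually running this kernel; if this does not point into
# the expected conda env, the kernel was registered against another interpreter.
print(sys.executable)
print(sys.version_info[:2])
```

If this shows the wrong interpreter, one hedged fix is to register the env's interpreter as a kernel explicitly, e.g. `python3 -m ipykernel install --user --name myenv` (assuming `ipykernel` is installed in that env), then pick that kernel in VS Code.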
<python><visual-studio-code><virtualenv><python-venv>
2023-10-12 09:42:11
1
896
Timothee W
77,279,396
1,107,595
embed binary file in python wheel package
<p>==Context==</p> <p>I'm building a python package that is going to be installed and used inside an AWS container lambda. My package is built using poetry and deployed to a self-managed Python package index.</p> <p>One of my lambda dependencies is imageio-ffmpeg, which, surprisingly to me, embeds the ffmpeg binary, see <a href="https://github.com/imageio/imageio-ffmpeg/blob/master/imageio_ffmpeg/binaries/README.md" rel="nofollow noreferrer">here</a>. The result is that I can use ffmpeg without having to install it on the system myself; I only need to install the Python package using pip.</p> <p>==The problem==</p> <p>My new package uses ffmpeg and ffprobe, but ffprobe is not installed in my AWS Lambda context. I would like to reproduce the behavior of imageio-ffmpeg and embed the ffprobe binary in my package.</p> <p>I've tried adding a binaries folder like imageio-ffmpeg, but the binary files are not made available by my package. I suppose the wheel doesn't contain the required information about these binaries. I can't find documentation or resources about how this works or how it can be done.</p>
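For what it's worth, Poetry's `include` key in `pyproject.toml` can force non-Python files into the built distributions; a sketch of the config, with a hypothetical package layout `my_package/binaries/ffprobe` (names assumed, not from the question):

```toml
[tool.poetry]
name = "my-package"
version = "0.1.0"
description = "Sketch: ship a bundled binary inside the wheel"
# Hypothetical path; forces the binary into the sdist and the wheel
include = ["my_package/binaries/ffprobe"]
```

Two caveats worth checking against the imageio-ffmpeg source: the binary usually needs its executable bit restored at runtime (`os.chmod`), and a wheel that embeds a platform-specific binary is no longer portable across platforms, so imageio-ffmpeg builds one wheel per platform.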
<python><aws-lambda><python-poetry><python-wheel>
2023-10-12 09:33:44
1
2,538
BlueMagma
77,279,039
14,649,310
VS Code Python extension v2023.18.0 stopped resolving all python imports and Sort Imports option not available
<p>My VS Code was working fine: I have a pyenv environment with a specific Python version and installed dependencies which I was using, and all was good. Then suddenly it stopped recognizing all imports. <a href="https://i.sstatic.net/ITTOR.png" rel="noreferrer">They are all whited out</a>.</p> <p>I also noticed that <a href="https://i.sstatic.net/gE1ko.png" rel="noreferrer">the Sort Imports option disappeared from the context menu options I get when I right click</a>.</p> <p>I have not changed anything in VS Code; any idea what might be wrong? Current VS Code Python extension version: <code>2023.18.0</code></p>
<python><visual-studio-code>
2023-10-12 08:43:22
3
4,999
KZiovas
77,278,889
10,303,199
How to stream data in 4MB chunks using python (through grpc)
<p>I have a Python gRPC service that streams large amounts of data to client microservices.</p> <pre><code>service GeoService { rpc GetGeoCoordinates(GetRequest) returns (stream Chunk){} } message Chunk { bytes data_part = 1; } </code></pre> <p>I cannot send more than 4MB of data at once because gRPC has a default maximum message size. Here is my code (only the relevant part):</p> <pre><code>def GetGeoCoordinates(self, request, context): ... ... dataBytes = geo_pb2.Chunk(data_part=bytes(json.dumps(coordinates[&quot;data&quot;]), 'utf-8')) yield dataBytes </code></pre> <p>How can I send this data in 4MB chunks?</p> <p>Also, is it good practice to <code>json.dumps()</code> large data and then stream it? Any help is appreciated.</p>
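The usual pattern here is to serialize once, then slice the bytes before yielding each `Chunk`. The slicing itself is independent of gRPC; a sketch (the servicer wiring in the comment is an assumption based on the question's proto):

```python
CHUNK_SIZE = 3 * 1024 * 1024  # stay below the 4 MB cap to leave room for framing

def chunk_bytes(payload: bytes, chunk_size: int = CHUNK_SIZE):
    """Yield consecutive slices of `payload`, each at most `chunk_size` bytes."""
    for offset in range(0, len(payload), chunk_size):
        yield payload[offset:offset + chunk_size]

# Inside the servicer it would look roughly like:
# def GetGeoCoordinates(self, request, context):
#     payload = json.dumps(coordinates["data"]).encode("utf-8")
#     for part in chunk_bytes(payload):
#         yield geo_pb2.Chunk(data_part=part)
```

The client reassembles with `b"".join(chunk.data_part for chunk in stream)` before `json.loads`. On the `json.dumps` question: dumping once and chunking the resulting bytes is fine; the thing to avoid is re-serializing per chunk.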
<python><python-3.x><stream><grpc-python>
2023-10-12 08:22:50
1
5,384
ABDULLOKH MUKHAMMADJONOV
77,278,879
1,422,096
Copy with SHFileOperation for files in non-drive-letter paths like Computer\Phone\card\DCIM\test.jpg
<p>Context: I know that using <a href="https://learn.microsoft.com/en-us/windows/win32/api/shellapi/nf-shellapi-shfileoperationw" rel="nofollow noreferrer"><code>SHFileOperation</code></a> for copying with Windows Shell has been replaced by <code>IFileOperation</code> (<em>&quot;Copies, moves, renames, or deletes a file system object. This function has been replaced in Windows Vista by <a href="https://learn.microsoft.com/en-us/windows/win32/api/shobjidl_core/nn-shobjidl_core-ifileoperation" rel="nofollow noreferrer"><code>IFileOperation</code></a>&quot;</em>). But <code>IFileOperation</code> / <code>CopyItem</code> is not supported for all types of devices on Windows 7 (which I need to support), see <a href="https://stackoverflow.com/questions/77277997/copy-file-with-windows-using-shell-api-no-such-interface-supported">Copy file with Windows using Shell API (&quot;No such interface supported&quot;)</a>, that's why I want to make <code>SHFileOperation</code> work for my application, like in <a href="https://stackoverflow.com/questions/16867615/copy-using-the-windows-copy-dialog">Copy using the Windows copy dialog</a>.</p> <p><strong>Question: is there a way to make <code>SHFileOperation</code> work in Windows 7 with paths like <code>Computer\Phone\card\DCIM\test.jpg</code> i.e. not in a volume / no drive letter?</strong></p> <p>Example:</p> <pre><code>from win32com.shell import shell, shellcon dest = r&quot;E:\Temp&quot; for src in [r&quot;E:\Temp2\test.txt&quot;, r&quot;Computer\Phone\card\DCIM\test.jpg&quot;]: result, aborted = shell.SHFileOperation((0, shellcon.FO_COPY, src, dest, shellcon.FOF_NOCONFIRMMKDIR, None, None)) print(result, aborted) </code></pre> <p>Result:</p> <pre><code>E:\Temp2\test.txt 0 False # success Computer\Phone\card\DCIM\test.jpg 124 False # not working </code></pre>
<python><windows><shell><winapi><pywin32>
2023-10-12 08:20:35
0
47,388
Basj
77,278,868
14,282,714
Get page number of certain string using pdfminer
<p>I would like to find the page number of a certain string in a pdf document using <code>pdfminer.six</code>. <a href="https://www.africau.edu/images/default/sample.pdf" rel="nofollow noreferrer">Here</a> you can find a reproducible pdf document. We can use the <code>extract_pages</code> function to find the number of pages and <code>extract_text</code> to extract the text, but I'm not sure how to find the page of a certain string. Imagine we want to find the page number of the string &quot;File 2&quot;, which is on page 2. According to this <a href="https://stackoverflow.com/questions/68115627/extract-first-page-of-pdf-file-using-pdfminer-library-of-python3?rq=3">answer</a>, we could use the <code>page_numbers</code> argument of <code>extract_pages</code>. Here is some code I tried:</p> <pre><code>from pdfminer.high_level import extract_pages, extract_text file = 'sample.pdf' for i in range(len(list(extract_pages(file)))): extract_pages(file, page_numbers=i, maxpages=len(list(extract_pages(file)))) </code></pre> <p>But I still don't understand how to get the page number of a certain string, so I was wondering if anyone could explain how to get the page number of a certain string in a pdf document?</p>
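One hedged approach: extract each page's text separately (pdfminer's `extract_text` accepts a `page_numbers` argument), then the search itself is plain Python over the list of page texts:

```python
def pages_containing(page_texts, needle):
    """Return 1-based page numbers whose extracted text contains `needle`."""
    return [i + 1 for i, text in enumerate(page_texts) if needle in text]

# With pdfminer.six, the per-page texts would come from something like (sketch):
# page_texts = [extract_text(file, page_numbers=[i])
#               for i in range(len(list(extract_pages(file))))]
print(pages_containing(["File 1 ...", "File 2 ..."], "File 2"))  # [2]
```

Note `page_numbers` is zero-indexed and takes a container of page indices, which is why the sketch wraps `i` in a list.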
<python><pdf><nlp><pdfminer>
2023-10-12 08:17:45
1
42,724
Quinten
77,278,758
8,176,763
install superset 3.0 without building assets with npm
<p>I am trying to install apache-superset following the guidelines on installing from scratch on their website <a href="https://superset.apache.org/docs/installation/installing-superset-from-scratch/" rel="nofollow noreferrer">https://superset.apache.org/docs/installation/installing-superset-from-scratch/</a>:</p> <p>I manage to get everything done up to this step:</p> <pre><code># Build javascript assets cd superset-frontend npm ci npm run build cd .. </code></pre> <p>I am having problems with the build process in Node, and was wondering if I can install Superset only with pip and copy the static assets from somewhere to the right directory?</p>
<python><npm><apache-superset>
2023-10-12 08:01:20
0
2,459
moth
77,278,730
14,459,677
Deleting the rows in COLUMNS that do not match the rows in another column (all belonging to one dataframe)
<p>My dataframe looks like this:</p> <pre><code>A B C D E F G H I J FP002 12 FP001 113 406 519 85 82 FP001 6240 FP003 7610 FP002 99 552 651 49 64 FP002 12294 FP005 12, FP003 102 131 1416 24 89 FP003 761 FP005 1250 FP004 94 739 833 122 215 FP004 400 </code></pre> <p>I want my output to be like this:</p> <pre><code>A B C D E F G H I J FP002 12 FP002 99 552 651 49 64 FP002 12294 FP003 7610 FP003 102 1314 1416 247 89 FP003 761 FP005 12, FP005 1250 </code></pre> <p>So basically I want to retain the rows that follow what is in Column A.</p> <p>My code to start is this:</p> <pre><code>dfR = df1.join( df1 ,on=['A','C'], how='inner') </code></pre> <p>but it's not giving me the result I need.</p>
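If the intent is to keep, in each block of columns, only the rows whose key also appears in Column A, then `isin` is usually simpler than a self-join. A sketch on a toy frame (column names and values are assumptions, reduced from the question):

```python
import pandas as pd

df = pd.DataFrame({
    "A": ["FP002", "FP003", "FP005", None],
    "C": ["FP001", "FP002", "FP003", "FP004"],
    "D": [113, 99, 102, 94],
})

# Keep only the C/D rows whose key in C appears somewhere in column A
mask = df["C"].isin(df["A"])
kept = df.loc[mask, ["C", "D"]].reset_index(drop=True)
print(kept)
```

Re-aligning the surviving rows side by side per key (as in the desired output) would then be a `merge` on the key columns, but the row filtering itself is the `isin` step.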
<python><pandas><join><merge>
2023-10-12 07:57:31
1
433
kiwi_kimchi
77,277,997
1,422,096
Copy file with Windows using Shell API CopyItem: "No such interface supported" for non-drive-letter paths like Computer\Phone\card\DCIM\test.jpg
<p>I'm using the Windows Shell API <a href="https://learn.microsoft.com/en-us/windows/win32/api/shobjidl_core/nn-shobjidl_core-ifileoperation" rel="nofollow noreferrer"><code>IFileOperation</code></a> to copy files from a phone (connected over USB) to the PC. It would be impossible with the usual file-based copy functions (<code>os</code> or <code>shutil</code>), because the phone has no drive letter. Thus, I'm using:</p> <pre><code>import pythoncom from win32comext.shell import shell, shellcon fo = pythoncom.CoCreateInstance(shell.CLSID_FileOperation, None, pythoncom.CLSCTX_ALL, shell.IID_IFileOperation) src = shell.SHCreateShellItem(...) # see last note below dst = shell.SHCreateItemFromParsingName(path, None, shell.IID_IShellItem) # these objects are well created # e.g. &lt;PyIShellFolder at 0x0000000002A6AB50 with obj at 0x0000000002ADA270&gt; fo.CopyItem(src, dst) # also tested: fo.CopyItem(src, dst_folder, filename), fo.CopyItem(src, dst_folder, None, None) etc. fo.PerformOperations() # error during this line </code></pre> <p>Report:</p> <ul> <li>standard paths like <code>C:\ABC\DEF\test.txt</code>: it works on both Win7 and Win10</li> <li>paths like <code>This PC\Phone\card\DCIM\test.jpg</code>: it works on Win10</li> <li>paths like <code>Computer\Phone\card\DCIM\test.jpg</code>: <strong>it fails on Win7</strong> with the following error (note that <code>&quot;This PC&quot;</code> is named <code>&quot;Computer&quot;</code> on Win7).</li> </ul> <p>Error:</p> <blockquote> <p>pywintypes.com_error: (-2147467262, 'No such interface supported', None, None)</p> </blockquote> <p>How to make this code work on Windows 7?
(I need to still support this platform)</p> <p>Notes:</p> <ul> <li><p>linked with <a href="https://stackoverflow.com/questions/36594470/shell-api-to-copy-all-files-in-a-folder">Shell API to copy all files in a folder?</a> -(which advises to use <code>IFileOperation</code> and <code>CopyFile</code>) but not a duplicate.</p> </li> <li><p>also linked with <a href="https://microsoft.public.platformsdk.shell.narkive.com/tshONvOX/ifileoperation-performoperations-returns-e-nointerface" rel="nofollow noreferrer">https://microsoft.public.platformsdk.shell.narkive.com/tshONvOX/ifileoperation-performoperations-returns-e-nointerface</a>.</p> </li> <li><p>about Windows version: it seemed that <a href="https://learn.microsoft.com/en-us/windows/win32/api/shobjidl_core/nf-shobjidl_core-ifileoperation-copyitem" rel="nofollow noreferrer"><code>IFileOperation::CopyItem</code></a> was supported since Windows Vista, see linked documentation</p> </li> <li><p>@SimonMourier: I use <code>SHCreateShellItem</code> to create <code>src</code> because I am walking in a directory tree recursively:</p> <pre><code>for f in folder.EnumObjects(0, shellcon.SHCONTF_NONFOLDERS): srcfolder = shell.SHGetIDListFromObject(folder) src = shell.SHCreateShellItem(srcfolder, None, f) ... </code></pre> <p>Also, I didn't use <code>shell.SHParseDisplayName</code> or <code>shell.SHCreateItemFromParsingName</code> because it doesn't work on Win7 for paths like <code>Computer\MyPhone\Card\myfiles\test.txt</code>, see comments in <a href="https://stackoverflow.com/questions/42966489/how-to-use-shcreateitemfromparsingname-with-names-from-the-shell-namespace">How to use SHCreateItemFromParsingName with names from the shell namespace?</a></p> </li> </ul>
<python><windows><shell><winapi><pywin32>
2023-10-12 05:51:27
1
47,388
Basj
77,277,944
6,338,996
How can I create a callable progress bar with a real-time clock in Python?
<p>I am trying to monitor a process that takes a lot of time. My code goes through a number of files, operating on them a couple of times. The files are big, and I want to know how much time has elapsed.</p> <p>I borrowed <a href="https://stackoverflow.com/a/37630397/6338996">a progress bar that I found on SO</a>. After that, I tried creating a thread using <code>threading</code>. This is my attempt:</p> <pre><code>def progress_bar(current, total, filename='', bar_length = 20): threading.Timer(1., progress_bar(current,total,filename)).start() percent = float(current) * 100 / total arrow = '-' * int(percent/100 * bar_length - 1) + '&gt;' spaces = ' ' * (bar_length - len(arrow)) # sys.stdout.write(f'\rProgress: [{arrow}{spaces}] {percent:.0f}% (converting (unknown))') # sys.stdout.flush() time_passed = time.strftime(&quot;%H:%M:%S&quot;,time.gmtime(time.monotonic()-start)) if current==total: print(&quot;\rProgress: [-------------------&gt;] 100% (COMPLETED)\033[K | &quot;\ f&quot;time passed: {time_passed}&quot;) else: print(f&quot;\rProgress: [{arrow}{spaces}] {percent:.2f}% | &quot;\ f&quot;tile: (unknown) | &quot;\ f&quot;time passed: {time_passed}&quot;, end='\r',flush=True) </code></pre> <p>I include the full code snippet so I don't erase something that may be important. As you can see, the function itself has arguments that need to be passed (to itself, I presume), as opposed to the simple examples I've seen where there are no arguments to pass.</p> <p>I would like the function to be called both when the code progresses and when a second passes. How can I achieve that?</p>
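One bug worth flagging in the snippet above: `threading.Timer(1., progress_bar(current, total, filename))` *calls* `progress_bar` immediately and hands its return value (`None`) to the timer, which also recurses without bound. `Timer` wants the callable itself plus an `args` tuple. A minimal sketch of the corrected pattern (the short interval and `Event` are just to make it demonstrable):

```python
import threading

ticked = threading.Event()

def on_tick(current, total):
    # Placeholder for the redraw logic; here we only record that we ran.
    ticked.set()

# Pass the function and its arguments separately; Timer calls it later.
timer = threading.Timer(0.01, on_tick, args=(3, 10))
timer.start()
ticked.wait(timeout=2)
```

In a real `progress_bar`, re-arm with a fresh `Timer(1.0, progress_bar, args=(current, total, filename))` only while `current < total`, and keep a reference to the timer so it can be `.cancel()`ed once the work finishes.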
<python>
2023-10-12 05:38:05
3
573
condosz
77,277,319
5,193,319
Why do these two python dynamic property definitions not return the same values?
<p>I have two scripts that I believed should behave the same way. Here is the first:</p> <pre class="lang-py prettyprint-override"><code>class Q: def __init__(self): self._foo = 123 self._bar = 456 self._properties = { &quot;foo&quot;: property(lambda self: getattr(self, &quot;_foo&quot;)), &quot;bar&quot;: property(lambda self: getattr(self, &quot;_bar&quot;)) } def __getattr__(self, name): if name not in self._properties: raise AttributeError(f&quot;Property '{name}' does not exist&quot;) return self._properties[name].__get__(self, self.__class__) q = Q() print(q.foo) print(q.bar) </code></pre> <p>This produces the expected output of:</p> <pre><code>123 456 </code></pre> <p>Here is the second script:</p> <pre class="lang-py prettyprint-override"><code>class Q: def __init__(self): self._foo = 123 self._bar = 456 self._properties = { k: property(lambda self: getattr(self, f&quot;_{k}&quot;)) for k in ['foo', 'bar'] } def __getattr__(self, name): if name not in self._properties: raise AttributeError(f&quot;Property '{name}' does not exist&quot;) return self._properties[name].__get__(self, self.__class__) q = Q() print(q.foo) print(q.bar) </code></pre> <p>Unfortunately this second version returns:</p> <pre><code>456 456 </code></pre> <p>Why do the two scripts not behave the same way? What is going on with the lambdas in the second that causes the foo property object to be overwritten by the bar property object?</p>
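The second script hits Python's late-binding closures: every lambda in the comprehension closes over the *variable* `k`, which holds `'bar'` once the loop finishes. One standard repair is to freeze the current value as a default argument; a sketch of the fixed class:

```python
class Q:
    def __init__(self):
        self._foo = 123
        self._bar = 456
        self._properties = {
            # k=k captures the current loop value in the lambda's own scope
            k: property(lambda self, k=k: getattr(self, f"_{k}"))
            for k in ['foo', 'bar']
        }

    def __getattr__(self, name):
        if name not in self._properties:
            raise AttributeError(f"Property '{name}' does not exist")
        return self._properties[name].__get__(self, self.__class__)

q = Q()
print(q.foo)  # 123
print(q.bar)  # 456
```

The first script works only because each hand-written lambda hard-codes its attribute name; it never reads the loop variable at call time.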
<python>
2023-10-12 02:16:09
0
1,374
John Forbes
77,277,248
20,456,016
{ "n_estimators" } are not used during Optuna Study
<p>While performing an Optuna study, I tried to tune <code>n_estimators</code> for XGBoost in a binary classification problem, but I get:</p> <pre><code>WARNING: ../src/learner.cc:767: Parameters: { &quot;n_estimators&quot; } are not used. </code></pre> <p>Code:</p> <pre><code>def objective(trial): # Define the search space for hyperparameters params = { 'objective': 'binary:logistic', # For binary classification 'eval_metric': 'auc', # AUC-ROC as the evaluation metric 'booster': 'gbtree', 'learning_rate': trial.suggest_float('learning_rate', 0.01, 0.3), 'max_depth': trial.suggest_int('max_depth', 3, 10), 'min_child_weight': trial.suggest_int('min_child_weight', 1, 10), 'subsample': trial.suggest_float('subsample', 0.4, 1.0), 'colsample_bytree': trial.suggest_float('colsample_bytree', 0.55, 1.0), 'lambda': trial.suggest_float('lambda', 1e-5, 1.0), 'alpha': trial.suggest_float('alpha', 1e-5, 1.0), 'eta': trial.suggest_float('eta', 0.01, 0.3), 'gamma': trial.suggest_float('gamma', 0.0, 1.0), 'n_estimators': trial.suggest_int('n_estimators', 100, 1300), } X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42) dtrain = xgb.DMatrix(X_train, label=y_train) dval = xgb.DMatrix(X_val, label=y_val) model = xgb.train(params, dtrain, num_boost_round=params['n_estimators'], evals=[(dval, 'eval')], verbose_eval=False) y_pred = model.predict(dval) auc_roc = roc_auc_score(y_val, y_pred) # AUC-ROC score return -auc_roc </code></pre>
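The warning comes from mixing the two XGBoost APIs: `n_estimators` is the sklearn wrapper's name, while the native `xgb.train` ignores it inside `params` and takes `num_boost_round` instead. Since the code already passes `num_boost_round`, one fix is to pop the tuned value out of `params` before training so XGBoost never sees the unknown key. A sketch of that separation (pure dict logic, no XGBoost required to illustrate it):

```python
def split_train_args(params):
    """Separate the Optuna-tuned round count, which xgb.train takes as a
    keyword argument, from true booster parameters (sketch)."""
    params = dict(params)  # avoid mutating the caller's dict
    num_boost_round = params.pop('n_estimators')
    return params, num_boost_round

booster_params, rounds = split_train_args(
    {'eta': 0.1, 'max_depth': 6, 'n_estimators': 700})
# model = xgb.train(booster_params, dtrain, num_boost_round=rounds, ...)
print(rounds)  # 700
```

With `'n_estimators'` removed from the dict handed to `xgb.train`, the "Parameters: { n_estimators } are not used" warning goes away while Optuna still tunes the value.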
<python><machine-learning><xgboost><optuna>
2023-10-12 01:50:23
1
471
jerrycalebj
77,277,139
1,577,110
Install python 3.12 using mamba on mac
<p>I am trying to install python 3.12 on an M1 Apple Mac using mamba as follows ...</p> <blockquote> <p>mamba install -c conda-forge python=3.12.0</p> </blockquote> <p>It yields the following error message ...</p> <pre><code>Looking for: ['python=3.12.0'] conda-forge/osx-arm64 Using cache conda-forge/noarch Using cache Could not solve for environment specs The following packages are incompatible ├─ mamba is installable with the potential options │ ├─ mamba [1.0.0|1.1.0|...|1.5.1] would require │ │ └─ python_abi 3.11.* *_cp311, which can be installed; │ ├─ mamba [0.10.0|0.11.1|...|1.5.1] would require │ │ └─ python_abi 3.8.* *_cp38, which can be installed; │ ├─ mamba [0.10.0|0.11.1|...|1.5.1] would require │ │ └─ python_abi 3.9.* *_cp39, which can be installed; │ └─ mamba [0.18.1|0.18.2|...|1.5.1] would require │ └─ python_abi 3.10.* *_cp310, which can be installed; └─ python 3.12.0** is not installable because there are no viable options ├─ python 3.12.0 would require │ └─ python_abi 3.12.* *_cp312, which conflicts with any installable versions previously reported; └─ python 3.12.0rc3 would require └─ _python_rc, which does not exist (perhaps a missing channel). </code></pre> <p>Any pointers on how to do this properly would be much appreciated.</p>
<python><conda><mamba>
2023-10-12 01:09:14
1
5,141
Mark Graph
77,277,033
1,833,118
How to obtain the start and commit timestamps of transactions in MongoDB?
<h2>Motivation</h2> <p>We are working on a white-box checking algorithm of Snapshot Isolation (SI): given an execution of a database, to check whether it satisfies SI.</p> <p>The SI checking problem is <a href="https://www.lix.polytechnique.fr/%7Ecenea/papers/oopsla19.pdf" rel="nofollow noreferrer">NP-hard for general executions</a>. So it is desirable to make use of the knowledge of how SI is actually implemented in databases.</p> <p>The insight is that most databases, especially distributed databases, implement SI following a generic protocol using <em>start-timestamps</em> and <em>commit-timestamps</em>. With these timestamps of transactions in an execution, the SI checking problem becomes solvable in polynomial time. Therefore, we want to obtain these timestamps when generating executions.</p> <p>It is crucial for us to really understand the meaning of the start-timestamps and commit-timestamps in the database under testing. We must be very sure that we have obtained the right timestamps in the right way.</p> <p>That is why we ask for help here.</p> <h2>Background</h2> <p>We are digging into the implementation of snapshot isolation of MongoDB, especially into the use of timestamps in transactions.</p> <p>Consider the classic description of <em>start-timestamp</em> and <em>commit-timestamp</em> in implementing Snapshot Isolation, quoted from the <a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-95-51.pdf" rel="nofollow noreferrer">paper; Section 4.2</a>:</p> <blockquote> <p><em><strong>For start-timestamp</strong></em>: A transaction executing with Snapshot Isolation always reads data from a snapshot of the (committed) data as of the time the transaction started, called its <em>Start-Timestamp</em>. 
This time may be any time before the transaction’s first Read.</p> </blockquote> <blockquote> <p><em><strong>For commit-timestamp</strong></em>: When the transaction <code>T1</code> is ready to commit, it gets a <em>Commit-Timestamp</em>, which is larger than any existing Start-Timestamp or Commit-Timestamp. When <code>T1</code> commits, its changes become visible to all transactions whose Start-Timestamps are larger than <code>T1</code>'s Commit-Timestamp.</p> </blockquote> <blockquote> <p><em><strong>For conflict detection</strong></em>: The transaction <code>T1</code> successfully commits only if no other transaction <code>T2</code> with a Commit-Timestamp in <code>T1</code>'s execution interval [Start-Timestamp, Commit-Timestamp] wrote data that <code>T1</code> also wrote. Otherwise, <code>T1</code> will abort. This feature, called First-committer-wins prevents lost updates.</p> </blockquote> <h2>Our Problem</h2> <p>How can we obtain such <em>start-timestamp</em> and <em>commit-timestamp</em> of a transaction in MongoDB from, e.g., database logs?</p> <h2>Our Solution</h2> <h3>Environment</h3> <ul> <li>MongoDB v7.0.2</li> <li>Python driver v4.1.1</li> <li>Sharded cluster deployment <ul> <li><a href="https://www.mongodb.com/docs/manual/tutorial/deploy-shard-cluster/" rel="nofollow noreferrer">collections are assigned to different shards when created</a></li> </ul> <pre><code>sh.shardCollection(&quot;&lt;database&gt;.&lt;collection&gt;&quot;, { &lt;shard key field&gt; : &quot;hashed&quot; } ) </code></pre> </li> </ul> <h3>Run transactions in MongoDB</h3> <p>We use a simpler version of the <a href="https://www.mongodb.com/docs/upcoming/core/transactions/#transactions-api" rel="nofollow noreferrer">official example</a> which uses the <a href="https://www.mongodb.com/docs/upcoming/core/transactions-in-applications/#example" rel="nofollow noreferrer"><code>with_transaction</code></a> API.</p> <pre class="lang-py prettyprint-override"><code>from pymongo import 
MongoClient from pymongo.read_concern import ReadConcern from pymongo.write_concern import WriteConcern client = MongoClient(host=&quot;10.206.0.12&quot;, port=27017) def callback(session): collection_one = session.client.mydb1.foo collection_one.insert_one({&quot;abc&quot;: 1}, session=session) with client.start_session() as session: session.with_transaction( callback, read_concern=ReadConcern(&quot;snapshot&quot;), write_concern=WriteConcern(&quot;majority&quot;) ) </code></pre> <h3>To obtain the start-timestamp</h3> <p>According to <a href="https://www.mongodb.com/docs/v6.0/reference/configuration-options/#mongodb-setting-systemLog.verbosity" rel="nofollow noreferrer">mongodb-setting-systemLog.verbosity @ docs</a>, we provide the following configure file (<code>srs.conf</code>)</p> <pre class="lang-yaml prettyprint-override"><code>sharding: clusterRole: shardsvr replication: replSetName: rs2 net: bindIp: 0.0.0.0 storage: oplogMinRetentionHours: 48 systemLog: destination: file logAppend: true component: transaction: verbosity: 1 </code></pre> <p>when starting a mongod using the command</p> <pre><code>mongod --fork --logpath /root/mongo-config/mongo-srs.log --config srs.conf </code></pre> <p>The option <code>systemLog.component.transaction.verbosity</code> enables MongoDB to log the start-timestamp into the log file <code>/root/mongo-config/mongo-srs.log</code> which looks like:</p> <pre class="lang-json prettyprint-override"><code>{ ... &quot;c&quot;:&quot;TXN&quot;, &quot;ctx&quot;:&quot;conn80&quot;, &quot;msg&quot;:&quot;transaction&quot;, &quot;attr&quot;:{ &quot;parameters&quot;:{ &quot;lsid&quot;:{ &quot;id&quot;:{ &quot;$uuid&quot;:&quot;d25844f9-b25b-4ed3-8734-cccf8a4c584a&quot; }, ... 
}, &quot;txnNumber&quot;:1, &quot;readConcern&quot;:{ &quot;level&quot;:&quot;snapshot&quot;, &quot;atClusterTime&quot;:{ &quot;$timestamp&quot;:{ &quot;t&quot;:1696995553, &quot;i&quot;:2 } }, &quot;provenance&quot;:&quot;clientSupplied&quot; } }, &quot;readTimestamp&quot;:&quot;Timestamp(1696995553, 2)&quot;, &quot;terminationCause&quot;:&quot;committed&quot;, ... } } </code></pre> <p><code>readTimestamp</code> (<code>Timestamp(1696995553, 2)</code>) is the start-timestamp of the transaction with <code>lsid</code> and <code>txnNumber</code>.</p> <h3>Question 1</h3> <ul> <li>Have we obtained the start-timestamp correctly?</li> <li>Furthermore, what is the meaning of <code>atClusterTime</code> in the log above? Is it always the same with <code>readTimestamp</code> for multi-document transactions?</li> </ul> <h3>To obtain the commit-timestamp</h3> <p>The commit-timestamp of transactions are in the <code>oplog.rs</code> collection of the <code>local</code> database managed by MongoDB.</p> <pre class="lang-json prettyprint-override"><code>{ lsid: { id: new UUID(&quot;d25844f9-b25b-4ed3-8734-cccf8a4c584a&quot;), uid: Binary(Buffer.from(&quot;e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855&quot;, &quot;hex&quot;), 0) }, txnNumber: Long(&quot;1&quot;), op: 'c', ns: 'admin.$cmd', o: { applyOps: [ { op: 'i', ns: 'mydb1.foo', ui: new UUID(&quot;d9490802-b1bc-431d-90ac-db2a535ecc91&quot;), o: { _id: ObjectId(&quot;652618e4b4c5bae6e2da451c&quot;), abc: 1 }, o2: { abc: 1, _id: ObjectId(&quot;652618e4b4c5bae6e2da451c&quot;) } } ] }, ts: Timestamp({ t: 1696995556, i: 3 }), t: Long(&quot;15&quot;), v: Long(&quot;2&quot;), wall: ISODate(&quot;2023-10-11T03:39:16.566Z&quot;), prevOpTime: { ts: Timestamp({ t: 0, i: 0 }), t: Long(&quot;-1&quot;) } } </code></pre> <p><code>ts</code> (<code>Timestamp({ t: 1696995556, i: 3 })</code>) is the commit-timestamp of the transaction with <code>lsid</code> and <code>txnNumber</code>.</p> <h3>Question 2</h3> <ul> <li>Have we 
obtained the commit-timestamp correctly?</li> </ul> <h2>About Read-Only Transactions</h2> <p>We can obtain the start-timestamp of read-only transactions in the way as described above. We do <em>not</em> find commit-timestamp of read-only transactions in <code>oplog.rs</code>.</p> <h3>Question 3</h3> <p>Do read-only transactions have commit-timestamps in MongoDB? If so, how to obtain them?</p> <h2>Thanks</h2> <p>Related: <a href="https://www.mongodb.com/community/forums/t/how-to-obtain-the-start-and-commit-timestamps-of-transactions-in-mongodb/248532?u=hengfeng_wei" rel="nofollow noreferrer">https://www.mongodb.com/community/forums/t/how-to-obtain-the-start-and-commit-timestamps-of-transactions-in-mongodb/248532?u=hengfeng_wei</a></p>
<python><mongodb><transactions><timestamp><mongodb-oplog>
2023-10-12 00:21:34
0
2,011
hengxin
77,276,993
19,506,623
How to split list in sublists based on string length?
<p>I have the following input list <code>data</code></p> <pre><code>data = '''ABCD1 2040805025@HHS_2332 801111@PPOD_1 225@DDMDM DEFA1 23333@HHS_998 7859000@FGL3 44532009@LLLKH_9 225@DDMDM FGGH5 78271@WQE 8003013@UTTY7'''.split() </code></pre> <p>If I want to split it into sublists of length 3, I can use this:</p> <pre><code>In[1]: [data[i:i+3] for i in range(0,len(data),3)] Out[1]: [['ABCD1', '2040805025@HHS_2332', '801111@PPOD_1'], ['225@DDMDM', 'DEFA1', '23333@HHS_998'], ['7859000@FGL3', '44532009@LLLKH_9', '225@DDMDM'], ['FGGH5', '78271@WQE', '8003013@UTTY7']] </code></pre> <p>But how can I split the list each time a string of 5 characters appears (in this example <code>ABCD1, DEFA1 and FGGH5</code>), to get this output:</p> <pre><code>[ ['2040805025@HHS_2332','801111@PPOD_1','225@DDMDM'], ['23333@HHS_998','7859000@FGL3','44532009@LLLKH_9','225@DDMDM'], ['78271@WQE','8003013@UTTY7'] ] </code></pre>
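Instead of fixed strides, one way is to start a new sublist whenever a 5-character token appears; a sketch:

```python
def split_on_keys(tokens, key_len=5):
    """Group tokens into sublists, starting a new group at each key token."""
    groups, current = [], None
    for token in tokens:
        if len(token) == key_len:
            current = []          # a key token opens a new group
            groups.append(current)
        elif current is not None:  # ignore any tokens before the first key
            current.append(token)
    return groups

data = ['ABCD1', '2040805025@HHS_2332', '801111@PPOD_1', '225@DDMDM',
        'DEFA1', '23333@HHS_998', '7859000@FGL3', '44532009@LLLKH_9',
        '225@DDMDM', 'FGGH5', '78271@WQE', '8003013@UTTY7']
print(split_on_keys(data))
```

If a data token could also happen to be 5 characters long, matching the key shape explicitly (e.g. `re.fullmatch(r'[A-Z]{4}\d', token)`) would be a safer test than the length alone.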
<python>
2023-10-12 00:07:46
2
737
Rasec Malkic
77,276,896
384,386
Coverage of C++ Code called via Pybind from Python Test
<p>Currently working with a legacy codebase that creates a number of Pybind modules and tests most of the exposed C++ code via Python tests.</p> <p>Is it possible to determine code coverage of the C++ code via Python tests? My understanding of C++ coverage is that you need to have a compiled test executable with the object files that can be run in order to gather the coverage data (the resultant <code>gcda</code> files). The problem here is that the Python tests don't call a test executable, they use the compiled <code>.so</code> files via the Pybind module.</p> <p>Is it possible to generate coverage data via Pybind and <code>.so</code> files?</p> <p>If it helps we're using Bazel and I've created a simple sandbox environment with a basic <code>pybind_library</code> target and <code>py_test</code> target. I can compile the <code>pybind_library</code> (which wraps a <code>cc_binary</code> target) with the coverage flags and generate the <code>gcno</code> files, but when the <code>py_test</code> target executes it doesn't generate any <code>gcda</code> files but is definitely using the compiled <code>.so</code> library.</p>
<python><code-coverage><pybind11>
2023-10-11 23:34:07
2
1,989
celestialorb
77,276,871
2,136,286
Python use Mac as a mouse for Android
<p>Basically I'm trying to move a mouse cursor on an Android device with a broken screen using just a USB-C cable and a Mac. The goal here is to emulate a HID device without external hardware like a PyBoard or ESP32.</p> <p>I'm using the PyUSB library:</p> <pre><code>import usb.core import usb.util import time def find_android_device(): # Find the USB device of the Android phone or tablet dev = usb.core.find(idVendor=0x18d1, idProduct=0x4ee1) return dev def send_mouse_movement(dev, dx, dy): # Send control transfer to simulate mouse movement bmRequestType = usb.util.build_request_type(usb.util.CTRL_OUT, usb.util.CTRL_TYPE_CLASS, usb.util.CTRL_RECIPIENT_INTERFACE) dev.ctrl_transfer(bmRequestType, 0x01, 0, 0, [dx, dy, 0, 0, 0, 0, 0, 0]) def send_mouse_command(ep, x, y, button): # Send a simple mouse move command in a different way data = [button, x, y, 0, 0] android_device.write(ep, data) # Example usage android_device = find_android_device() if android_device is not None: if android_device.is_kernel_driver_active(0): android_device.detach_kernel_driver(0) android_device.set_configuration() cfg = android_device.get_active_configuration() intf = cfg[(0,0)] ep = usb.util.find_descriptor(intf, custom_match=lambda e: usb.util.endpoint_direction(e.bEndpointAddress) == usb.util.ENDPOINT_OUT) send_mouse_command(ep, 100,100,1) send_mouse_movement(android_device, 10, 5) else: print(&quot;Android device not found.&quot;) </code></pre> <p>Either of the functions <code>send_mouse_command</code> or <code>send_mouse_movement</code> usually fails with a timeout, and I suspect this is because of invalid arguments, or because the Mac's port has to somehow pretend to be &quot;HID&quot;, but I can't find any good examples or documentation for either case.</p> <p>I'm running out of ideas and documentation, so I need your help.</p>
<python><android><emulation><pyusb>
2023-10-11 23:26:49
0
676
the.Legend
77,276,843
472,485
Setting json string in http requests in Perl
<p>The following Perl code generates the error printed below:</p> <pre><code>use strict; use Data::Dumper; use LWP::UserAgent; use JSON; my $token = 'my token'; my $ua = LWP::UserAgent-&gt;new; my $req = HTTP::Request-&gt;new(PUT =&gt; &quot;endpoint&quot;); $req-&gt;header( 'Authorization' =&gt; &quot;Bearer $token&quot; ); $req-&gt;content_type('application/json'); $req-&gt;content('{&quot;text&quot;:&quot;whiteboard&quot;}'); my $res = $ua-&gt;request($req); if ($res-&gt;is_success) { my $content = $res-&gt;decoded_content; my $fromjson = from_json($content); print Dumper $fromjson-&gt;{'results'}; } else { print $res-&gt;status_line, &quot;\n&quot;; print $res-&gt;content, &quot;\n&quot;; } </code></pre> <p>Error:</p> <pre><code> {&quot;detail&quot;:[{&quot;loc&quot;:[&quot;body&quot;],&quot;msg&quot;:&quot;str type expected&quot;,&quot;type&quot;:&quot;type_error.str&quot;}]} </code></pre> <p>However, if I write the same code in Python, it works:</p> <pre><code>import requests import os import json url = 'endpoint' token='my token' headers = { &quot;Authorization&quot;: &quot;Bearer &quot;+token[:-1], &quot;Content-type&quot; : &quot;application/json&quot; } res=requests.put(url, json='{&quot;text&quot;:&quot;whiteboard&quot;}', headers=headers) #res=requests.put(url, json='test string', headers=headers) # this also works print('Response Content:\n',res)</code></pre> <p>What am I missing in the Perl code?</p>
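A detail worth noticing: the working Python call passes `json='{"text":"whiteboard"}'`, a *string*, which `requests` serializes again, so the body on the wire is a JSON string, not a JSON object. That matches the server's complaint about the Perl body ("str type expected"), since the Perl code sends the raw object. A sketch of what each side actually puts in the body:

```python
import json

payload = '{"text":"whiteboard"}'

# What the Perl code sends: the object text itself
perl_body = payload

# What requests' json= kwarg sends: the string re-serialized as JSON
python_body = json.dumps(payload)

print(perl_body)    # {"text":"whiteboard"}
print(python_body)  # "{\"text\":\"whiteboard\"}"
```

Assuming the endpoint really does want a JSON string, the Perl equivalent would be to encode the scalar the same way before setting the body, e.g. `$req->content(JSON->new->allow_nonref->encode($str))` (`allow_nonref` permits encoding a bare string rather than a hash or array ref).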
<python><rest><perl><put>
2023-10-11 23:19:48
2
22,975
Jean
77,276,788
2,386,605
How to make Agents not exceed token length in Langchain?
<p>I am currently trying to make use of a ChatGPT plugin in langchain:</p> <pre><code>from langchain.chat_models import ChatOpenAI from langchain.agents import load_tools, initialize_agent from langchain.agents import AgentType from langchain.tools import AIPluginTool tool = AIPluginTool.from_plugin_url(&quot;https://www.wolframalpha.com/.well-known/ai-plugin.json&quot;) llm = ChatOpenAI(temperature=0, streaming=True, max_tokens=1000) tools = load_tools([&quot;requests_all&quot;]) tools += [tool] agent_chain = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True ) # agent_chain.run(&quot;what t shirts are available in klarna?&quot;) agent_chain.run(&quot;How can I solve dx/dt = a(t)*x + b(t)&quot;) </code></pre> <p>However, I get the error:</p> <pre><code>InvalidRequestError: This model's maximum context length is 4097 tokens. However, you requested 5071 tokens (4071 in the messages, 1000 in the completion). Please reduce the length of the messages or completion. </code></pre>
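The numbers in the error already show why the request fails: the prompt the agent assembles (tool descriptions plus the ReAct scaffolding) leaves almost no room for the requested completion. A quick back-of-the-envelope check, using the figures reported in the error message:

```python
context_limit = 4097         # max context reported for the model
prompt_tokens = 4071         # tokens used by the assembled agent prompt
requested_completion = 1000  # max_tokens passed to ChatOpenAI

available = context_limit - prompt_tokens
print(available)  # 26 tokens left for the completion
print(prompt_tokens + requested_completion > context_limit)  # True: over budget
```

So the options are to shrink the prompt (fewer tools, a shorter plugin description), lower `max_tokens` to fit the remaining budget, or switch to a model with a larger context window.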
<python><openai-api><langchain><chatgpt-api>
2023-10-11 23:01:36
1
879
tobias
77,276,779
14,459,677
Adding the values of multiple duplicated rows in python and creating another column
<p>I have a file with several columns. However, some rows are duplicated in Column1, each with a corresponding value in Column2.</p> <p>It looks like this:</p> <pre><code>COL1 COL2 AB 5 AB 5 AA 2 AC 3 AD 8 AD 4 </code></pre> <p>I want to create Column3 and have it like this:</p> <pre><code>COL3 AB 10 AA 2 AC 3 AD 12 </code></pre> <p>My code is like this:</p> <pre><code>df['COL3'] = df.groupby(['COL2', 'COL1']).transform('sum') </code></pre> <p>My error is:</p> <pre><code>ValueError: Cannot set a DataFrame with multiple columns to the single column Final Approved Ref </code></pre>
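The error comes from grouping by both columns and transforming the whole frame, which returns a DataFrame rather than a single column. A sketch of the aggregation the question describes — group by COL1 only and sum COL2 (column names as in the question):

```python
import pandas as pd

df = pd.DataFrame({
    "COL1": ["AB", "AB", "AA", "AC", "AD", "AD"],
    "COL2": [5, 5, 2, 3, 8, 4],
})

# One summed row per key, as in the desired output:
col3 = df.groupby("COL1", as_index=False, sort=False)["COL2"].sum()
print(col3)

# Or, to keep the original row count, broadcast the sum back onto each row:
df["COL3"] = df.groupby("COL1")["COL2"].transform("sum")
print(df)
```

The `transform` variant keeps all six rows (duplicates included); the `sum` variant collapses them to the four-row result shown above.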
<python><pandas><duplicates>
2023-10-11 22:58:25
0
433
kiwi_kimchi
77,276,692
16,988,223
BeautifulSoup with python unable to get value of a h2 tag
<p>I'm trying to get this value from this <a href="http://larepublica.pe/" rel="nofollow noreferrer">web page</a> from the &quot;Economia&quot; section:</p> <p><a href="https://i.sstatic.net/RLdCY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RLdCY.png" alt="enter image description here" /></a></p> <p>I want to get all those titles. This is my current code:</p> <pre><code>html = client.get(&quot;http://larepublica.pe/&quot;) soup = BeautifulSoup(html.text, 'html.parser') # Get the main front-page story economyNews = &quot;&quot; for div in soup.findAll('h2', attrs={'class':'ItemSection_itemSection__title__PleA9'}): n = div.text economyNews += n+&quot;\\n&quot; print(economyNews) </code></pre> <p>I have tried many ways to get this, but it seems that the webpage is blocking it. Any idea to fix this problem would be appreciated. Thanks so much.</p>
<python><web-scraping><beautifulsoup>
2023-10-11 22:26:39
1
429
FreddicMatters
77,276,533
4,777,670
How to convert multi-valued truth table to if-conditions or expressions
<p>I've got a table like this:</p> <pre><code>Location Weather Temperature Time of Day Activity Indoors Sunny Hot Morning Reading Indoors Sunny Hot Evening Watching TV Indoors Sunny Cool Morning Reading Indoors Sunny Cool Evening Watching TV Indoors Rainy Hot Morning Reading Indoors Rainy Hot Evening Watching TV Indoors Rainy Cool Morning Reading Indoors Rainy Cool Evening Watching TV Outdoors Sunny Hot Morning Gardening Outdoors Sunny Hot Evening Barbecue Outdoors Sunny Cool Morning Playing Sports Outdoors Sunny Cool Evening Barbecue Outdoors Rainy Hot Morning Shopping Outdoors Rainy Hot Evening Barbecue Outdoors Rainy Cool Morning Shopping Outdoors Rainy Cool Evening Barbecue None Sunny Hot Morning Reading None Sunny Hot Evening Barbecue None Sunny Cool Morning Reading None Sunny Cool Evening Shopping None Rainy Hot Morning Reading None Rainy Hot Evening Barbecue None Rainy Cool Morning Shopping None Rainy Cool Evening Shopping </code></pre> <p>In this table, each input, such as &quot;Location,&quot; &quot;Weather,&quot; &quot;Temperature,&quot; and &quot;Time of Day,&quot; can only have specific values. For example, &quot;Location&quot; can only be one of: Indoors, Outdoors, or None. The table includes rows for all possible combinations of these input values.</p> <p>I'm aware of how to create functions for boolean truth tables, but I'm looking for guidance on handling non-boolean truth tables like this one. I'd like to create a Python function based on this table that takes these specific input conditions and produces the corresponding &quot;Activity&quot; as output. The function should be efficient, without redundant code or conditions. Is there a straightforward way, an algorithm, or a tool that can help me turn this table into a Python function? I'm looking for some guidance to create it myself.</p>
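One straightforward route is to condense the table by hand (or with a decision-tree learner such as scikit-learn's `DecisionTreeClassifier`) into nested conditions, dropping the inputs a branch doesn't depend on — e.g. "Indoors" never looks at weather or temperature. The sketch below is my own reading of the 24 rows above, so it's worth re-checking each branch against the table:

```python
def choose_activity(location, weather, temperature, time_of_day):
    # Indoors: weather and temperature never matter.
    if location == "Indoors":
        return "Reading" if time_of_day == "Morning" else "Watching TV"
    # Outdoors: every evening is Barbecue; mornings split on weather/temperature.
    if location == "Outdoors":
        if time_of_day == "Evening":
            return "Barbecue"
        if weather == "Rainy":
            return "Shopping"
        return "Gardening" if temperature == "Hot" else "Playing Sports"
    # Location is None.
    if time_of_day == "Morning":
        return "Shopping" if (weather, temperature) == ("Rainy", "Cool") else "Reading"
    return "Barbecue" if temperature == "Hot" else "Shopping"

print(choose_activity("Outdoors", "Sunny", "Cool", "Morning"))  # Playing Sports
```

The other obvious encoding is a dict keyed by the full 4-tuple of inputs — no logic to get wrong, at the cost of listing all 24 rows; the nested-condition form above is what you get after merging rows that share an output regardless of some input.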
<python><algorithm><karnaugh-map>
2023-10-11 21:39:30
1
3,620
Saif
77,276,440
11,145,822
Connectivity issue in DB2
<p>I am getting the error below while trying to connect to a DB2 server from a Debian GNU/Linux system. Steps I have done:</p> <ol> <li>Installed the ibm_db &amp; ibm_db_sa modules</li> <li>Started Python 3.8 &amp; imported ibm_db, which was successful</li> <li>Ran this command &gt;&gt;&gt; ibm_db.connect(&quot;DATABASE=DB;HOSTNAME=xx.xx.x.xxx;PORT=449;PROTOCOL=TCPIP;UID=xxxxx; PWD=xxxxxx;DRIVER={IBM Db2 ODBC DRIVER}&quot;, &quot;&quot;, &quot;&quot;)</li> </ol> <p>The above command gives me the error below. I have searched many articles, and some say this is a network error that needs to be checked by the administrator, but the system connectivity is working fine. We are using an IPSEC tunnel for this; the tunnel destination is the DB2 IP &amp; the source IP is the IP we are using in the script.</p> <p>We haven't installed any driver. Is it due to this? Is there any way or guidance to resolve this, or to install a driver if it is required?</p> <pre><code> Exception: [IBM][CLI Driver] SQL30081N A communication error has been detected. Communication protocol being used: &quot;TCP/IP&quot;. Communication API being used: &quot;SOCKETS&quot;. Location where the error was detected: &quot;xx.0.x.xxx&quot;. Communication function detecting the error: &quot;recv&quot;. Protocol specific error code(s): &quot;*&quot;, &quot;*&quot;, &quot;0&quot;. SQLSTATE=08001 SQLCODE=-30081 </code></pre>
<python><python-3.x><db2>
2023-10-11 21:17:00
0
731
Sandeep
77,276,311
3,609,976
imshow uses a seemingly random background color
<p>I am trying to use <code>imshow()</code> with categorical data, but I cannot reliably control the colors used. This is my code (inspired by the solution provided <a href="https://stackoverflow.com/questions/43971138/python-plotting-colored-grid-based-on-values">here</a>):</p> <pre><code>from random import choice import matplotlib.pyplot as plt # draw pretty stuff from matplotlib import colors def draw_trace2(mem_trace): print(f&quot;Matrix: {mem_trace.block_size} rows, {mem_trace.len()} cols&quot;) # this prints: 'Matrix: 80 rows, 178 cols' dummy = True if dummy: # create dummy data access_matrix = [ [choice([10,10,10,10,20,30]) for x in range(mem_trace.len())] for y in range(mem_trace.block_size)] else: # create real data access_matrix = [ [10 for x in range(mem_trace.len())] for y in range(mem_trace.block_size)] for ac in mem_trace.access_list: for i in range(ac.size): access_matrix[ac.offset+i][ac.time] = 20 if ac.action=='R' else 30 # paranoically check correct values for row in access_matrix: for cell in row: if cell not in [10,20,30]: raise ValueError(&quot;Wrong Value&quot;) # create discrete colormap. Credit to the previously cited SO answer cmap = colors.ListedColormap(['white', 'green', 'red']) bounds = [0,15,25,35] norm = colors.BoundaryNorm(bounds, cmap.N) # draw and export image fig, axe1 = plt.subplots() axe1.imshow(access_matrix, cmap=cmap, norm=norm) fig.set_size_inches(100,44) fig.savefig('mem_trace_plot.pdf', bbox_inches='tight') return </code></pre> <ul> <li><code>dummy=True</code>: I get dummy data with a red background.</li> <li><code>dummy=False</code>: I get the real data, with what looks to be a 50% gray background, or sometimes red too.</li> </ul> <p>My intention is to have a white background. Another oddity: if I reduce the size of the plot (for example, <code>fig.set_size_inches(10,4.4)</code>) the problem goes away.</p>
<python><matplotlib><imshow>
2023-10-11 20:46:05
0
815
onlycparra
77,276,288
9,415,280
Update: How to remove samples with missing or NaN values from a tf.data.Dataset?
<p><strong>Update</strong>: I added this command to clear samples with missing values that lead my neural network to fail:</p> <pre><code>ds = ds.ignore_errors() </code></pre> <p>I use this function to remove all samples with NaN or missing values... but it doesn't work well:</p> <pre><code>def filter_nan_sample(ds): # find NaN ynan = tf.math.is_nan(ds) y = tf.reduce_sum(tf.cast(ynan, tf.float32)) if y &gt;0: return False return True ds = ds.filter(filter_nan_sample) # catch all sample with &quot;defect&quot; like missing values ds = ds.ignore_errors() </code></pre> <p>I get this error:</p> <pre><code>tensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node __wrapped__IteratorGetNext_output_types_2_device_/job:localhost/replica:0/task:0/device:CPU:0}} Field 4 is required but missing in record! [Op:IteratorGetNext] name: </code></pre> <p>Field 4 matches a variable that is not always available in the record. It is not possible in my case to deal with this problem before turning the data into a dataset.</p>
<python><tensorflow><tf.data.dataset>
2023-10-11 20:40:41
1
451
Jonathan Roy
77,276,274
12,436,050
Extract values from pandas series column
<p>I have a pandas dataframe with column of type Series.</p> <pre><code>id col1 1 b'[{&quot;code&quot;:&quot;P16_HCAQNB&quot;,&quot;onto&quot;:&quot;finngen&quot;,&quot;syns&quot;:[&quot;hydrocephalus acquired newborn&quot;,&quot;of newborn acquired hydrocephalus&quot;,&quot;hydrocephalus, acquired, newborn&quot;,&quot;newborn acquired hydrocephalus&quot;,&quot;hydrocephalus, acquired, of newborn&quot;,&quot;hydrocephalus acquired of newborn&quot;,&quot;p16_hcaqnb&quot;],&quot;term&quot;:&quot;Hydrocephalus, acquired, of newborn&quot;}]' 2 b'[{&quot;code&quot;:&quot;OVERPROD_THYROID__HORMONE&quot;,&quot;onto&quot;:&quot;finngen&quot;,&quot;syns&quot;:[&quot;drug induced overproduction of thyroid stimulating hormone&quot;,&quot;overproduction thyroid stimulating hormone, drug induced&quot;,&quot;overproduction thyroid-stimulating hormone, drug-induced&quot;,&quot;overproduction thyroid- stimulating hormone drug-induced&quot;,&quot;drug-induced overproduction of thyroid-stimulating hormone&quot;,&quot;drug-induced overproduction thyroid-stimulating hormone&quot;,&quot;overprod_thyroid__hormone&quot;,&quot;overproduction of thyroid-stimulating hormone drug- induced&quot;,&quot;overproduction of thyroid stimulating hormone, drug induced&quot;,&quot;drug induced overproduction thyroid stimulating hormone&quot;,&quot;overproduction thyroid stimulating hormone drug induced&quot;,&quot;overproduction of thyroid-stimulating hormone, drug-induced&quot;,&quot;overproduction of thyroid stimulating hormone drug induced&quot;],&quot;term&quot;:&quot;Overproduction of thyroid-stimulating hormone, drug-induced&quot;}]' </code></pre> <p>How can I extract value for 'code'. The final output should be:</p> <pre><code>id col1 1 P16_HCAQNB 2 OVERPROD_THYROID__HORMONE </code></pre> <p>I have tried below code but it is not working.</p> <pre><code>df['col1'][0][0] df['col1'][0][code] </code></pre>
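Each cell apparently holds a byte string containing a JSON array with a single object, so plain indexing like `df['col1'][0][0]` only slices bytes. Decoding with the `json` module first makes the `code` field reachable — a sketch using a shortened version of the first cell:

```python
import json

# Shortened stand-in for one cell of col1 (a bytes object holding JSON):
cell = b'[{"code":"P16_HCAQNB","onto":"finngen","syns":["hydrocephalus acquired newborn"],"term":"Hydrocephalus, acquired, of newborn"}]'

records = json.loads(cell)   # json.loads accepts bytes directly
code = records[0]["code"]    # the array holds one object; take its "code"
print(code)  # P16_HCAQNB
```

Applied to the whole column this would be something like `df["col1"] = df["col1"].apply(lambda b: json.loads(b)[0]["code"])`, assuming every cell really is a one-element JSON array.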
<python><pandas>
2023-10-11 20:37:59
0
1,495
rshar
77,276,144
7,166,834
extract consecutive rows with similar values in a column more with a specific patch size
<p>I am looking to extract consecutive rows where a specified value is repeated continuously more than four times.</p> <p>ex:</p> <pre><code> A B C 10 john 1 12 paul 1 23 kishan 1 12 teja 1 12 zebo 1 324 vauh -1 3434 krish -1 232 poo -1 4535 zoo 1 4343 doo 1 342 foo -1 123 soo 1 121 koo -1 34 loo -1 343454 moo -1 565343 noo -1 2323234 voo -1 3434 coo 1 545 xoo 1 6565 zoo 1 232321 qoo 1 34454 woo 1 546556 eoo 1 65665 roo -1 5343 too -1 3232 yoo 1 1212 uoo 1 23355667 ioo 1 787878 joo -1 </code></pre> <p>I am looking for the result below, where column C has runs of consecutive 1's repeated more than 4 times, labelled as different groups.</p> <p>Output:</p> <pre><code>A B C group 10 john 1 1 12 paul 1 1 23 kishan 1 1 12 teja 1 1 12 zebo 1 1 3434 coo 1 2 545 xoo 1 2 6565 zoo 1 2 232321 qoo 1 2 34454 woo 1 2 546556 eoo 1 2 </code></pre>
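A common pandas idiom for this is to label runs with a shift/cumsum counter, then keep only the runs of 1 whose size exceeds the threshold. A sketch on a shortened version of the data (threshold `> 4` and 1-based group numbering chosen to match the desired output):

```python
import pandas as pd

df = pd.DataFrame({
    "A": [10, 12, 23, 12, 12, 324, 4535, 4343, 342,
          3434, 545, 6565, 232321, 34454, 546556],
    "B": ["john", "paul", "kishan", "teja", "zebo", "vauh", "zoo", "doo", "foo",
          "coo", "xoo", "zoo", "qoo", "woo", "eoo"],
    "C": [1, 1, 1, 1, 1, -1, 1, 1, -1, 1, 1, 1, 1, 1, 1],
})

run_id = (df["C"] != df["C"].shift()).cumsum()        # label each consecutive run
run_size = df.groupby(run_id)["C"].transform("size")  # length of the run each row is in
keep = df["C"].eq(1) & (run_size > 4)                 # runs of 1 longer than 4 rows

out = df[keep].copy()
out["group"] = run_id[keep].factorize()[0] + 1        # renumber the kept runs 1, 2, ...
print(out)
```

The two-row run (`zoo`/`doo`) is dropped because its size is not above the threshold, while the five-row and six-row runs survive as groups 1 and 2.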
<python><pandas><group-by>
2023-10-11 20:12:30
2
1,460
pylearner
77,276,097
5,253,084
Windows exec built by nuitka fails by not finding a file
<p>I have a python (11) application running under Windows 10 in a virtual environment. It works fine. However, when I build an executable with nuitka it fails. Here is the relevant code:</p> <pre><code>import speech_recognition as sr self.recognizer = sr.Recognizer() with self.microphone as source: self.audio = self.recognizer.listen(source, timeout=2) self.text = str(self.recognizer.recognize_google(self.audio)) </code></pre> <p>The last statement ('self.text = ...') throws an exception. When I print that exception I get:</p> <pre><code>*[WinError 2] The system cannot find the file specified* </code></pre> <p>The speech_recognition module requires pyaudio. I followed suggestions to solve a similar exception (not in the context of nuitka) with pyaudio by installing pipwin and then ran</p> <pre><code>py -m pipwin install pyaudio </code></pre> <p>The nuitka command I am using:</p> <pre><code>py -m nuitka --enable-plugins=tk-inter --standalone --include-module=pyaudio ./logic/elizaAI.py </code></pre> <p>nuitka gives no build errors. The application runs fine until it hits the above statement. Any help is appreciated.</p>
<python><speech-recognition><pyaudio><nuitka>
2023-10-11 20:03:27
2
365
fredm73
77,276,047
1,181,065
The added layer must be an instance of class Layer. Found: KerasTensor
<pre><code>from tensorflow.keras import regularizers def model_variant(model, num_feat_map, dim, network_type, p): print(network_type) if network_type == 'ConvLSTM': model.add(Permute((2, 1, 3))) model.add(Reshape((-1, num_feat_map * dim))) lstm_output = Bidirectional(LSTM(128, return_sequences=False, stateful=False))(model.output) model.add(lstm_output) if network_type == 'CNN': model.add(Flatten()) model.add(Dense(64, activation='relu')) model.add(BatchNormalization()) model.add(Dropout(p)) </code></pre> <p>The error comes from adding model.add(lstm_output)</p> <pre><code>p=0.5 #Dropout b = 1 #BatchNorm print('building the model ... ') model = Sequential() if network_type=='CNN' or network_type=='ConvLSTM': model_conv(model, num_feat_map,p,b) model_variant(model, num_feat_map, dim, network_type,p) if network_type=='LSTM': model_LSTM(model,p) model_output(model) model.summary() </code></pre> <p>and more details of the error:</p> <pre><code> 7 model.add(Reshape((-1, num_feat_map * dim))) 8 lstm_output = Bidirectional(LSTM(128, return_sequences=False, stateful=False))(model.output) ----&gt; 9 model.add(lstm_output) 10 if network_type == 'CNN': 11 model.add(Flatten()) File ~/miniconda3/envs/test/lib/python3.9/site-packages/tensorflow/python/training/tracking/base.py:517, in no_automatic_dependency_tracking.&lt;locals&gt;._method_wrapper(self, *args, **kwargs) 515 self._self_setattr_tracking = False # pylint: disable=protected-access 516 try: --&gt; 517 result = method(self, *args, **kwargs) 518 finally: 519 self._self_setattr_tracking = previous_value # pylint: disable=protected-access File ~/miniconda3/envs/test/lib/python3.9/site-packages/tensorflow/python/keras/engine/sequential.py:182, in Sequential.add(self, layer) 179 layer = origin_layer ... 184 'Found: ' + str(layer)) 186 tf_utils.assert_no_legacy_layers([layer]) 187 if not self._is_layer_name_unique(layer): TypeError: The added layer must be an instance of class Layer. 
Found: KerasTensor(type_spec=TensorSpec(shape=(None, 256), dtype=tf.float32, name=None), name='bidirectional_11/concat:0', description=&quot;created by layer 'bidirectional_11'&quot;) </code></pre> <p>I am running older code; here are the versions of TensorFlow and Keras that I'm using: tensorflow 2.4.1, keras 2.14.0</p>
<python><tensorflow><keras><deep-learning>
2023-10-11 19:52:33
0
539
Hanna
77,276,038
10,413,759
How to merge Firefox bookmarks exported as json (dictionaries within dictionaries structure)
<p>I am trying to make a new dictionary that is made from an existing dictionary that it has been creating from a function which tries to merge two dictionaries that derive from json files from backup firefox bookmarks.</p> <p>1st json bookmark dictionary:</p> <pre><code>json1={'guid': 'root________', 'title': '', 'index': 0, 'dateAdded': 1688927106926000, 'lastModified': 1697130233008000, 'id': 1, 'typeCode': 2, 'type': 'text/x-moz-place-container', 'root': 'placesRoot', 'children': [{'guid': 'menu________', 'title': 'menu', 'index': 0, 'dateAdded': 1688927106926000, 'lastModified': 1697130233008000, 'id': 2, 'typeCode': 2, 'type': 'text/x-moz-place-container', 'root': 'bookmarksMenuFolder', 'children': [{'guid': '9GEAbdFPVBqv', 'title': 'Getting started - mypy 1.5.1 documentation', 'index': 0, 'dateAdded': 1696274666207000, 'lastModified': 1696274666207000, 'id': 16, 'typeCode': 1, 'type': 'text/x-moz-place', 'uri': 'https://mypy.readthedocs.io/en/stable/getting_started.html'}, {'guid': 'PDKXoMPpSKZ9', 'title': 'testFolder2', 'index': 1, 'dateAdded': 1697130042452000, 'lastModified': 1697130183178000, 'id': 18, 'typeCode': 2, 'type': 'text/x-moz-place-container', 'children': [{'guid': 'jbP4ff424REs', 'title': 'Secure Coding with Python', 'index': 0, 'dateAdded': 1697130058445000, 'lastModified': 1697130058445000, 'id': 19, 'typeCode': 1, 'type': 'text/x-moz-place', 'uri': 'https://devopedia.org/secure-coding-with-python'}, {'guid': 'bZSAKQe67MEP', 'title': 'testSubFolder', 'index': 1, 'dateAdded': 1697130074677000, 'lastModified': 1697130183178000, 'id': 20, 'typeCode': 2, 'type': 'text/x-moz-place-container', 'children': [{'guid': '0U5O4Rw6M3M5', 'title': 'Typer', 'index': 0, 'dateAdded': 1697130183178000, 'lastModified': 1697130183178000, 'id': 21, 'typeCode': 1, 'type': 'text/x-moz-place', 'uri': 'https://typer.tiangolo.com/'}]}]}, {'guid': '-j27AP1Cwt0O', 'title': 'testFolder1', 'index': 2, 'dateAdded': 1697130021758000, 'lastModified': 1697130233008000, 
'id': 17, 'typeCode': 2, 'type': 'text/x-moz-place-container', 'children': [{'guid': 'Wb3-R2DDT8Ip', 'title': 'Welcome to Click — Click Documentation (8.1.x)', 'index': 0, 'dateAdded': 1697130230240000, 'lastModified': 1697130230240000, 'id': 22, 'typeCode': 1, 'type': 'text/x-moz-place', 'uri': 'https://click.palletsprojects.com/en/8.1.x/'}]}, {'guid': 'VDTmkniLNlvN', 'title': 'Mozilla Firefox', 'index': 3, 'dateAdded': 1688927107386000, 'lastModified': 1697129949344000, 'id': 7, 'typeCode': 2, 'type': 'text/x-moz-place-container', 'children': [{'guid': 'vUwrKuzYfywC', 'title': 'Get Help', 'index': 0, 'dateAdded': 1688927107386000, 'lastModified': 1688927107386000, 'id': 8, 'typeCode': 1, 'iconUri': 'fake-favicon-uri:https://support.mozilla.org/products/firefox', 'type': 'text/x-moz-place', 'uri': 'https://support.mozilla.org/products/firefox'}, {'guid': 'mKpEl6U5Pppr', 'title': 'Customize Firefox', 'index': 1, 'dateAdded': 1688927107386000, 'lastModified': 1688927107386000, 'id': 9, 'typeCode': 1, 'iconUri': 'fake-favicon-uri:https://support.mozilla.org/kb/customize-firefox-controls-buttons-and-toolbars?utm_source=firefox-browser&amp;utm_medium=default-bookmarks&amp;utm_campaign=customize', 'type': 'text/x-moz-place', 'uri': 'https://support.mozilla.org/kb/customize-firefox-controls-buttons-and-toolbars?utm_source=firefox-browser&amp;utm_medium=default-bookmarks&amp;utm_campaign=customize'}, {'guid': 'Rw167-bbT1fR', 'title': 'Get Involved', 'index': 2, 'dateAdded': 1688927107386000, 'lastModified': 1688927107386000, 'id': 10, 'typeCode': 1, 'iconUri': 'fake-favicon-uri:https://www.mozilla.org/contribute/', 'type': 'text/x-moz-place', 'uri': 'https://www.mozilla.org/contribute/'}, {'guid': 'stHPEtkREVvD', 'title': 'About Us', 'index': 3, 'dateAdded': 1688927107386000, 'lastModified': 1688927107386000, 'id': 11, 'typeCode': 1, 'iconUri': 'fake-favicon-uri:https://www.mozilla.org/about/', 'type': 'text/x-moz-place', 'uri': 'https://www.mozilla.org/about/'}]}]}, 
{'guid': 'toolbar_____', 'title': 'toolbar', 'index': 1, 'dateAdded': 1688927106926000, 'lastModified': 1696274298931000, 'id': 3, 'typeCode': 2, 'type': 'text/x-moz-place-container', 'root': 'toolbarFolder', 'children': [{'guid': '690YbVlf5eS_', 'title': 'Getting Started', 'index': 0, 'dateAdded': 1688927107484000, 'lastModified': 1688927107484000, 'id': 12, 'typeCode': 1, 'iconUri': 'fake-favicon-uri:https://www.mozilla.org/firefox/central/', 'type': 'text/x-moz-place', 'uri': 'https://www.mozilla.org/firefox/central/'}, {'guid': 'YJ_gjoZ6Wwj1', 'title': 'json — JSON encoder and decoder — Python 3.11.5 documentation', 'index': 1, 'dateAdded': 1696274298931000, 'lastModified': 1696274298931000, 'id': 13, 'typeCode': 1, 'type': 'text/x-moz-place', 'uri': 'https://docs.python.org/3/library/json.html'}]}, {'guid': 'unfiled_____', 'title': 'unfiled', 'index': 3, 'dateAdded': 1688927106926000, 'lastModified': 1688927107332000, 'id': 5, 'typeCode': 2, 'type': 'text/x-moz-place-container', 'root': 'unfiledBookmarksFolder'}, {'guid': 'mobile______', 'title': 'mobile', 'index': 4, 'dateAdded': 1688927106942000, 'lastModified': 1688927107332000, 'id': 6, 'typeCode': 2, 'type': 'text/x-moz-place-container', 'root': 'mobileFolder'}]} </code></pre> <p>2nd json bookmark dictionary:</p> <pre><code>json2={'guid': 'root________', 'title': '', 'index': 0, 'dateAdded': 1688927106926000, 'lastModified': 1697131416648000, 'id': 1, 'typeCode': 2, 'type': 'text/x-moz-place-container', 'root': 'placesRoot', 'children': [{'guid': 'menu________', 'title': 'menu', 'index': 0, 'dateAdded': 1688927106926000, 'lastModified': 1697131416648000, 'id': 2, 'typeCode': 2, 'type': 'text/x-moz-place-container', 'root': 'bookmarksMenuFolder', 'children': [{'guid': '9GEAbdFPVBqv', 'title': 'Getting started - mypy 1.5.1 documentation', 'index': 0, 'dateAdded': 1696274666207000, 'lastModified': 1696274666207000, 'id': 16, 'typeCode': 1, 'type': 'text/x-moz-place', 'uri': 
'https://mypy.readthedocs.io/en/stable/getting_started.html'}, {'guid': 'FNxknWay_xr8', 'title': &quot;Command-line Applications — The Hitchhiker's Guide to Python&quot;, 'index': 1, 'dateAdded': 1697130502023000, 'lastModified': 1697130502023000, 'id': 23, 'typeCode': 1, 'type': 'text/x-moz-place', 'uri': 'https://docs.python-guide.org/scenarios/cli/'}, {'guid': 'PDKXoMPpSKZ9', 'title': 'testFolder2', 'index': 2, 'dateAdded': 1697130042452000, 'lastModified': 1697131416648000, 'id': 18, 'typeCode': 2, 'type': 'text/x-moz-place-container', 'children': [{'guid': 'bZSAKQe67MEP', 'title': 'testSubFolder', 'index': 0, 'dateAdded': 1697130074677000, 'lastModified': 1697131416648000, 'id': 20, 'typeCode': 2, 'type': 'text/x-moz-place-container', 'children': [{'guid': 'm6MBObvLXgt6', 'title': 'The Python Fire Guide - Python Fire', 'index': 0, 'dateAdded': 1697130663668000, 'lastModified': 1697130663668000, 'id': 28, 'typeCode': 1, 'type': 'text/x-moz-place', 'uri': 'https://google.github.io/python-fire/guide/'}, {'guid': 'fl2vHRLT-RJY', 'title': 'Typer', 'index': 1, 'dateAdded': 1697131416648000, 'lastModified': 1697131416648000, 'id': 30, 'typeCode': 1, 'type': 'text/x-moz-place', 'uri': 'https://typer.tiangolo.com/'}]}, {'guid': 'Z2khP-DX2nJU', 'title': 'testsubF2', 'index': 1, 'dateAdded': 1697130537074000, 'lastModified': 1697130642695000, 'id': 26, 'typeCode': 2, 'type': 'text/x-moz-place-container', 'children': [{'guid': 'ZHUGVs2ZYUiA', 'title': 'argparse — Parser for command-line options, arguments and sub-commands — Python 3.12.0 documentation', 'index': 0, 'dateAdded': 1697130642695000, 'lastModified': 1697130642695000, 'id': 27, 'typeCode': 1, 'type': 'text/x-moz-place', 'uri': 'https://docs.python.org/3/library/argparse.html'}]}, {'guid': '3RP_KOI4Pq0q', 'title': 'plac · PyPI', 'index': 2, 'dateAdded': 1697130513781000, 'lastModified': 1697130513781000, 'id': 24, 'typeCode': 1, 'type': 'text/x-moz-place', 'uri': 'https://pypi.org/project/plac/'}]}, {'guid': 
'-j27AP1Cwt0O', 'title': 'testFolder1', 'index': 3, 'dateAdded': 1697130021758000, 'lastModified': 1697130520562000, 'id': 17, 'typeCode': 2, 'type': 'text/x-moz-place-container', 'children': [{'guid': 'Wb3-R2DDT8Ip', 'title': 'Welcome to Click — Click Documentation (8.1.x)', 'index': 0, 'dateAdded': 1697130230240000, 'lastModified': 1697130230240000, 'id': 22, 'typeCode': 1, 'type': 'text/x-moz-place', 'uri': 'https://click.palletsprojects.com/en/8.1.x/'}, {'guid': 'zuT4_jp_Rj5l', 'title': 'cliff – Command Line Interface Formulation Framework — cliff 4.3.1.dev12 documentation', 'index': 1, 'dateAdded': 1697130520562000, 'lastModified': 1697130520562000, 'id': 25, 'typeCode': 1, 'type': 'text/x-moz-place', 'uri': 'https://docs.openstack.org/cliff/latest/'}]}, {'guid': 'LjrKYDavnU7w', 'title': 'Generating Command-Line Interfaces (CLI) with Fire in Python', 'index': 4, 'dateAdded': 1697130696550000, 'lastModified': 1697130696550000, 'id': 29, 'typeCode': 1, 'type': 'text/x-moz-place', 'uri': 'https://stackabuse.com/generating-command-line-interfaces-cli-with-fire-in-python/'}, {'guid': 'VDTmkniLNlvN', 'title': 'Mozilla Firefox', 'index': 5, 'dateAdded': 1688927107386000, 'lastModified': 1697129949344000, 'id': 7, 'typeCode': 2, 'type': 'text/x-moz-place-container', 'children': [{'guid': 'vUwrKuzYfywC', 'title': 'Get Help', 'index': 0, 'dateAdded': 1688927107386000, 'lastModified': 1688927107386000, 'id': 8, 'typeCode': 1, 'iconUri': 'fake-favicon-uri:https://support.mozilla.org/products/firefox', 'type': 'text/x-moz-place', 'uri': 'https://support.mozilla.org/products/firefox'}, {'guid': 'mKpEl6U5Pppr', 'title': 'Customize Firefox', 'index': 1, 'dateAdded': 1688927107386000, 'lastModified': 1688927107386000, 'id': 9, 'typeCode': 1, 'iconUri': 'fake-favicon-uri:https://support.mozilla.org/kb/customize-firefox-controls-buttons-and-toolbars?utm_source=firefox-browser&amp;utm_medium=default-bookmarks&amp;utm_campaign=customize', 'type': 'text/x-moz-place', 'uri': 
'https://support.mozilla.org/kb/customize-firefox-controls-buttons-and-toolbars?utm_source=firefox-browser&amp;utm_medium=default-bookmarks&amp;utm_campaign=customize'}, {'guid': 'Rw167-bbT1fR', 'title': 'Get Involved', 'index': 2, 'dateAdded': 1688927107386000, 'lastModified': 1688927107386000, 'id': 10, 'typeCode': 1, 'iconUri': 'fake-favicon-uri:https://www.mozilla.org/contribute/', 'type': 'text/x-moz-place', 'uri': 'https://www.mozilla.org/contribute/'}, {'guid': 'stHPEtkREVvD', 'title': 'About Us', 'index': 3, 'dateAdded': 1688927107386000, 'lastModified': 1688927107386000, 'id': 11, 'typeCode': 1, 'iconUri': 'fake-favicon-uri:https://www.mozilla.org/about/', 'type': 'text/x-moz-place', 'uri': 'https://www.mozilla.org/about/'}]}]}, {'guid': 'toolbar_____', 'title': 'toolbar', 'index': 1, 'dateAdded': 1688927106926000, 'lastModified': 1696274298931000, 'id': 3, 'typeCode': 2, 'type': 'text/x-moz-place-container', 'root': 'toolbarFolder', 'children': [{'guid': '690YbVlf5eS_', 'title': 'Getting Started', 'index': 0, 'dateAdded': 1688927107484000, 'lastModified': 1688927107484000, 'id': 12, 'typeCode': 1, 'iconUri': 'fake-favicon-uri:https://www.mozilla.org/firefox/central/', 'type': 'text/x-moz-place', 'uri': 'https://www.mozilla.org/firefox/central/'}, {'guid': 'YJ_gjoZ6Wwj1', 'title': 'json — JSON encoder and decoder — Python 3.11.5 documentation', 'index': 1, 'dateAdded': 1696274298931000, 'lastModified': 1696274298931000, 'id': 13, 'typeCode': 1, 'type': 'text/x-moz-place', 'uri': 'https://docs.python.org/3/library/json.html'}]}, {'guid': 'unfiled_____', 'title': 'unfiled', 'index': 3, 'dateAdded': 1688927106926000, 'lastModified': 1688927107332000, 'id': 5, 'typeCode': 2, 'type': 'text/x-moz-place-container', 'root': 'unfiledBookmarksFolder'}, {'guid': 'mobile______', 'title': 'mobile', 'index': 4, 'dateAdded': 1688927106942000, 'lastModified': 1688927107332000, 'id': 6, 'typeCode': 2, 'type': 'text/x-moz-place-container', 'root': 'mobileFolder'}]} 
</code></pre> <p>My idea was to simplify the merging of these two dictionaries by doing:</p> <pre><code>def has_url(diction): if &quot;uri&quot; in diction: return True else: return False def merge_dicts(dictmain, dict2): return {**dict2, **dictmain} def get_bm_path(bookmarks): urls = {} def bm_path(x, name=&quot;&quot;): if type(x) is dict: name = name + x.get(&quot;guid&quot;) + &quot;/&quot; if has_url(x): urls[name] = [f&quot;uri_{x.get('guid')}&quot;, dict((k, x[k]) for k in x.keys() if k not in &quot;children&quot;)] else: urls[name] = [f&quot;folder_{x.get('guid')}&quot;, dict((k, x[k]) for k in x.keys() if k not in &quot;children&quot;)] bm_path(x=x.get(&quot;children&quot;), name=name) elif type(x) is list: for i, a in enumerate(x): bm_path(a, name=name) bm_path(bookmarks) return urls </code></pre> <p>Then by running:</p> <pre><code>merged_dict = merge_dicts(get_bm_path(json1), get_bm_path(json2)) </code></pre> <p>I get a merged dictionary with keys that resemble folders/files structure so as not to lose the track of the parent/child structure of bookmarks while merging.</p> <p>And then I need to make the function to derive from that merged dictionary the actual json structure that firefox uses.</p> <blockquote> <p>The minimal (maybe) reproducible example:</p> </blockquote> <p>main dictionary:</p> <pre class="lang-py prettyprint-override"><code>{'main_folder/': {'id': 'main_folder', 'ad': 'what'}, 'main_folder/subfolder1/': {'id': 'subfolder1', 'ad': 'what'}, 'main_folder/subfolder1/9GEAbdFPVBqv/': {'id': '9GEAbdFPVBqv', 'ad': 'what1'}, 'main_folder/subfolder1/eaXY8H5Y1cJ_/': {'id': 'eaXY8H5Y1cJ_', 'ad': 'what2'}, 'main_folder/subfolder1/eaXY8H5Y1cJ_/9p2UFp7-qcEt/': {'id': '9p2UFp7-qcEt', 'ad': 'what3'}, 'main_folder/subfolder1/fijaCypbmbU1/': {'id': 'fijaCypbmbU1', 'ad': 'what4'}, 'main_folder/subfolder2/': {'id': 'subfolder2', 'ad': 'what7'}} </code></pre> <p>The resulting dictionary should be something like that:</p> <pre class="lang-py 
prettyprint-override"><code>{'id': 'main_folder', 'ad': 'what', 'children':[ {'id': 'subfolder1', 'ad': 'what', 'children':[ {'id': '9GEAbdFPVBqv', 'ad': 'what1'}, {'id': 'eaXY8H5Y1cJ_', 'ad': 'what2', 'children': [{'id': '9p2UFp7-qcEt', 'ad': 'what3'}]}, {'id': 'fijaCypbmbU1', 'ad': 'what4'} ] }, {'id': 'subfolder2', 'ad': 'what7'} ] } </code></pre> <p>The resulting dictionary keeps the tree-like structure for each element (sub-folder) as a parent/child dictionary/list. I can't find a way to reduce each &quot;layer&quot; of files to subfolders recursively.</p>
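Given the path-keyed dictionary of the minimal example, the nesting can be rebuilt without explicit recursion: sort the keys so that parent paths come before their children, then attach each node to the node stored at its parent path. A sketch using the simplified example (it assumes every parent path has its own entry, as in the example — not the full bookmark schema):

```python
def paths_to_tree(flat):
    """Rebuild a nested parent/child structure from 'a/b/c/' style keys."""
    nodes, roots = {}, []
    for path in sorted(flat):                 # parent paths sort before children
        node = dict(flat[path])
        nodes[path] = node
        stem = path.rstrip("/")
        parent = stem[: stem.rfind("/") + 1] if "/" in stem else ""
        if parent:
            nodes[parent].setdefault("children", []).append(node)
        else:
            roots.append(node)                # top-level entry
    return roots[0] if len(roots) == 1 else roots

flat = {
    "main_folder/": {"id": "main_folder", "ad": "what"},
    "main_folder/subfolder1/": {"id": "subfolder1", "ad": "what"},
    "main_folder/subfolder1/9GEAbdFPVBqv/": {"id": "9GEAbdFPVBqv", "ad": "what1"},
    "main_folder/subfolder1/eaXY8H5Y1cJ_/": {"id": "eaXY8H5Y1cJ_", "ad": "what2"},
    "main_folder/subfolder1/eaXY8H5Y1cJ_/9p2UFp7-qcEt/": {"id": "9p2UFp7-qcEt", "ad": "what3"},
    "main_folder/subfolder1/fijaCypbmbU1/": {"id": "fijaCypbmbU1", "ad": "what4"},
    "main_folder/subfolder2/": {"id": "subfolder2", "ad": "what7"},
}

tree = paths_to_tree(flat)
print(tree["children"][0]["id"])  # subfolder1
```

Lexicographic sorting works here because a parent key is always a string prefix of its children's keys, so the parent node already exists in `nodes` when a child is processed.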
<python><recursion><parent-child>
2023-10-11 19:50:29
2
378
K Y
77,275,834
7,387,749
TensorFlow: Read an Image Dataset from csv file
<p>How can I create a <code>TensorFlow</code> image dataset from a <code>Pandas DataFrame</code> that contains image file paths and labels?</p> <p>I have a <code>.csv</code> file from which I have loaded a <code>Pandas DataFrame</code>. The DataFrame has two columns: <code>img_path</code>, which contains the file paths to the images, and <code>label</code>.</p> <p>I'm looking for a way to create a <code>TensorFlow</code> image dataset from this DataFrame, but I couldn't find any documentation or examples to help me achieve this. Can anyone provide guidance or code examples to get me started?</p>
<python><pandas><tensorflow>
2023-10-11 19:12:43
1
4,980
Simone
77,275,603
1,671,319
How do I use pytest and unittest.mock to mock interactions with a paramiko.SSHClient
<p>I am trying to write a unit test for code that interacts with an SFTP server using the <code>paramiko</code> library. The code under test receives a list of remote file locations and a callback. Each file is fetched and sent into the callback. The test shall simulate a scenario, where the caller sends two files to visit and one of the files fails with an IOError. I want to make sure that the failing file is excluded from the response.</p> <p>Here is the <code>code.py</code>:</p> <pre><code>import io from typing import Callable, List import typing import paramiko def visit_files(files: List[str], callback: Callable[[typing.BinaryIO], None]) -&gt; List[str]: response = [] with paramiko.SSHClient() as ssh: ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh.connect(&quot;test.rebex.net&quot;, port=22, username=&quot;demo&quot;, password=&quot;password&quot;) with ssh.open_sftp() as sftp: for file_name in files: try: with sftp.open(file_name, &quot;rb&quot;) as f: try: b = f.read() callback(io.BytesIO(b)) response.append(file_name) except ValueError: print(&quot;Something went wrong&quot;) except IOError: print(&quot;Unknown IO error&quot;) return response </code></pre> <p>And my <code>test_code.py</code>:</p> <pre><code>import typing from unittest.mock import Mock from pytest_mock import MockerFixture from src.utils.code import visit_files def test_visiting(mocker: MockerFixture): mock = mocker.patch('paramiko.SSHClient') ssh_client_mock = mock.return_value ssh_client_mock.connect.return_value = Mock() sftp_mock = ssh_client_mock.open_sftp.return_value sftp_mock.open.side_effect = [ Mock(read=Mock(return_value=b'Hello, World!')), # Mock for the first file IOError(&quot;Unable to open file&quot;), # Simulate IOError for the second file ] def print_size(b: typing.BinaryIO) -&gt; None: print(b.tell()) response = visit_files(files=[&quot;file1.txt&quot;, &quot;file2.txt&quot;], callback=print_size) assert response == [&quot;file1.txt&quot;] </code></pre> 
<p>The error I am receiving is: <code>TypeError: a bytes-like object is required, not 'MagicMock'</code> in line <code>callback(io.BytesIO(b))</code>. I can't figure out where my mocks are not set up properly.</p>
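The failure comes from the `with sftp.open(file_name, "rb") as f:` line: the name bound by `as f` is whatever `__enter__()` returns, not the object placed in the `side_effect` list, so the configured `read` mock is never reached and `f.read()` returns a fresh `MagicMock`. A dependency-free sketch of the fix, using plain `unittest.mock` with the SFTP client itself replaced by a mock (no paramiko involved — in the real test, this list would go on `sftp_mock.open.side_effect`):

```python
from unittest.mock import MagicMock


def make_open_cm(payload: bytes) -> MagicMock:
    # a mock usable as `with sftp.open(...) as f:`; __enter__ hands back the file mock
    cm = MagicMock()
    cm.__enter__.return_value = MagicMock(read=MagicMock(return_value=payload))
    return cm


sftp = MagicMock()
sftp.open.side_effect = [
    make_open_cm(b"Hello, World!"),   # first file: opens and reads fine
    IOError("Unable to open file"),   # second file: raises on open
]

with sftp.open("file1.txt", "rb") as f:
    data = f.read()
```

The same context-manager subtlety applies one level up: since `paramiko.SSHClient()` is also used in a `with` statement, `ssh` inside the code under test is `mock.return_value.__enter__.return_value`, not `mock.return_value`.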
<python><unit-testing><pytest><python-unittest.mock><pytest-mock>
2023-10-11 18:34:50
1
3,074
reikje
77,275,476
1,678,467
Do numpy.savez and numpy.savez_compressed use pickle?
<p>I recently encountered numpy.savez and numpy.savez_compressed. Both seem to work well with arrays of differing types, including object arrays. However, numpy.load does not work well with object type arrays. For example:</p> <pre><code>import numpy as np numbers = np.full((10, 1), np.pi) strings = np.full((10, 1), &quot;letters&quot;, dtype=object) np.savez(&quot;test.npz&quot;, numbers=numbers, strings=strings) data = np.load(&quot;test.npz&quot;) </code></pre> <p>Calling <code>data[&quot;strings&quot;]</code> throws the following ValueError:</p> <pre><code>ValueError: Object arrays cannot be loaded when allow_pickle=False </code></pre> <p>However, enabling pickle on <code>numpy.load</code> resolves this issue. Pickling is not discussed within the <code>numpy.savez</code> and <code>numpy.savez_compressed</code> documents...which makes me wonder why pickle is required to load the data. Do <code>numpy.savez</code> and <code>numpy.savez_compressed</code> use pickle automatically behind the scenes?</p>
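Yes — `numpy.savez` and `numpy.savez_compressed` write each array with `numpy.save`, which falls back to pickling for object-dtype arrays (saving defaults to `allow_pickle=True`, loading to `allow_pickle=False`, hence the asymmetry). A small round-trip sketch, using an in-memory buffer in place of `test.npz`:

```python
import io

import numpy as np

numbers = np.full((3, 1), np.pi)
strings = np.full((3, 1), "letters", dtype=object)

buf = io.BytesIO()                      # in-memory stand-in for test.npz
np.savez(buf, numbers=numbers, strings=strings)
buf.seek(0)

# the object array was pickled on save, so loading it needs allow_pickle=True;
# the float array alone would load fine without it
data = np.load(buf, allow_pickle=True)
```

Numeric arrays are stored as raw buffers and never need pickle; only dtypes that can hold arbitrary Python objects trigger it.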
<python><numpy><pickle>
2023-10-11 18:14:44
3
6,303
tnknepp
77,275,430
10,967,961
Cannot open New python3 in jupyter
<p>I have tried everything to open a new Python 3 .ipynb file from Jupyter. I tried uninstalling Jupyter with brew, updating python3, and following the steps described here: <a href="https://medium.com/@iamclement/how-to-install-jupyter-notebook-on-mac-using-homebrew-528c39fd530f" rel="nofollow noreferrer">https://medium.com/@iamclement/how-to-install-jupyter-notebook-on-mac-using-homebrew-528c39fd530f</a>.</p> <p>The result, however, is always the same, i.e. the one described in the picture: every time I try to open a new ipynb from Jupyter, the options are Python 2 and two kernels that I mistakenly (and honestly I do not know how) created from a conda env. Is there a way to reinstall Jupyter and just have the two options Python 2 and Python 3 when clicking on New?</p> <p><a href="https://i.sstatic.net/xOzeG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xOzeG.png" alt="enter image description here" /></a></p>
<python><jupyter-notebook>
2023-10-11 18:06:09
0
653
Lusian
77,275,400
113,586
Create a typing.Annotated instance with variadic args in older Python
<p>The question is simply how to reproduce this:</p> <pre><code>import typing a = [1, 2, 3] cls = typing.Annotated[int, *a] </code></pre> <p>or even this:</p> <pre><code>import typing cls1 = typing.Annotated[int, 1, 2, 3] unann_cls = typing.get_args(cls1)[0] metadata = cls1.__metadata__ cls2 = typing.Annotated[unann_cls, *metadata] </code></pre> <p>in Python 3.9-3.10. <code>Annotated[int, *a]</code> is a syntax error in &lt;3.11 so <code>typing_extensions.Annotated</code> shouldn't help here. &quot;Nested Annotated types are flattened&quot; suggests the following code:</p> <pre><code>cls2 = unann_cls for m in metadata: cls2 = typing.Annotated[cls2, m] </code></pre> <p>and it seems to work but surely there should be a more clean way?</p>
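One cleaner option: `Annotated`'s subscription ultimately receives a plain tuple, so you can build that tuple yourself and sidestep the `[int, *a]` syntax that only parses on 3.11+ — star-unpacking inside a list literal is fine on much older versions:

```python
import typing

a = [1, 2, 3]

# subscripting with a pre-built tuple is equivalent to Annotated[int, 1, 2, 3]
cls = typing.Annotated[tuple([int] + a)]

# the same trick for the re-wrapping case from the question
cls1 = typing.Annotated[int, 1, 2, 3]
unann_cls = typing.get_args(cls1)[0]
metadata = cls1.__metadata__
cls2 = typing.Annotated[tuple([unann_cls, *metadata])]
```

This relies on the (long-stable, but not formally documented) behaviour that `X[(a, b, c)]` is treated like `X[a, b, c]`, so treat it as a pragmatic workaround rather than a guaranteed API.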
<python><python-typing>
2023-10-11 18:00:25
1
25,704
wRAR
77,275,300
565,341
How to load a numpy file (.npy) in DJL?
<p>I have traced a model in PyTorch and loaded it in successfully in DJL. Now I want to make sure pre/postprocessing works correctly in my JAVA server. I have written the result of the pre- and postprocessing into numpy files (<code>.npy</code>), e.g. with <code>np.save('preprocessed.npy', some_tensor.detach().numpy())</code></p> <p>How can I load the <code>.npy</code> file in JAVA/DJL into an <code>NDArray</code> to test the input/output of my traced model?</p> <p>A solution using purely DJL would be preferred, but additional helper libraries are also a possibility.</p>
<python><java><numpy><djl>
2023-10-11 17:45:24
1
1,467
Christoph Henkelmann
77,275,149
4,709,889
Use python code block to change raw rows from Google Sheets into list of lists
<p>I am working with data set I pull from Google sheets into Zapier. The data set looks like the below, and I am writing a code block to compare a users values against the all the values in the data set.</p> <pre><code>Bank | Product | Bundle | Restriction Geo | Restriction Geo 2 | Restriction Geo Val | Restriction Type | Restriction Type Val _______________________________________________________________________________________________________________________________________________ Wells Savings Ind,Sp State NY,CA,NV General 100% JMorg IRA Ind County CA Solano, Marin Claims 3 Goldman Savings All Zip 94954,27717,38002 Credit Score 680 Wells Savings All Zip 48441 General 100% </code></pre> <p>The issue I am running into is that some of the cells are themselves CSVs, so I can't pull each of the formatted columns individually and split them via .split(&quot;,&quot;). I think that what I need is to work with the raw rows, which have the following format:</p> <pre><code>[[&quot;Wells&quot;,&quot;Savings&quot;,&quot;Ind,Sp&quot;,&quot;State&quot;,&quot;&quot;,&quot;NY,CA,NV&quot;,&quot;General&quot;,&quot;100%&quot;],[&quot;JMorg&quot;,&quot;IRA&quot;,&quot;Ind&quot;,&quot;County&quot;,&quot;CA&quot;,&quot;Solano,Marin&quot;,...,&quot;48441&quot;,&quot;General&quot;,&quot;100%&quot;]] </code></pre> <p>The raw rows come through as a single text string, rather than a list of lists, which is what I want. I tried running this regex pattern</p> <pre><code>(?&lt;=\[).+?(?=\]) </code></pre> <p>to extract each of the matches as line items, but this both keeps the opening and closing brackets, and leaves the values within quotes and not as lists. 
What I am trying to get is essentially a list of lists of lists, something like;</p> <pre><code>[[Wells,Savings,[Ind,Sp],State, ,[NY,CA,NV]...]] </code></pre> <p>This way I can do something like (I have already have this part written, and it works on test data, but that test data is already formatted as lists of lists).</p> <pre><code>for d in data_set: temp_val = [] if d[0] == &quot;Wells&quot;: temp_val = d[2].split(&quot;,&quot;) for t in temp_val: if t == &quot;Ind&quot;: ###do something&quot; temp_val.clear() </code></pre>
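Since the raw rows string uses double quotes and square brackets, it is already valid JSON, so no regex is needed — `json.loads` turns it into a real list of lists, and the embedded CSV cells can be split afterwards. A sketch (assuming the string really is JSON-shaped as shown; if it ever arrives with single quotes, `ast.literal_eval` is the fallback):

```python
import json

# hypothetical raw-rows string in the shape Zapier hands over
raw = ('[["Wells","Savings","Ind,Sp","State","","NY,CA,NV","General","100%"],'
       '["Goldman","Savings","All","Zip","","94954,27717,38002","Credit Score","680"]]')

rows = json.loads(raw)                   # -> list of lists of strings

# split any cell that is itself a CSV into a sub-list
data_set = [[cell.split(",") if "," in cell else cell for cell in row]
            for row in rows]
```

`data_set` then has the nested shape the existing comparison loop expects, e.g. `data_set[0][2]` is `["Ind", "Sp"]`.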
<python><python-3.x><google-sheets><zapier>
2023-10-11 17:20:46
1
391
FrenchConnections
77,275,096
8,378,817
Handling empty list in for loop python
<p>I have come across a situation where I am dealing a nest list with nested dictionaries. Sometimes, one of the dictionary key would have an empty list. So, when iterating with for loop, I am getting and list index out of range error.</p> <p>Below, I will give you a small portion of the data from dataframe:</p> <pre><code>list = [{'author_position': 'first', 'author': {'id': 'https://openalex.org/A5012408034', 'display_name': 'Vincent S. Tagliabracci', 'orcid': 'https://orcid.org/0000-0002-9735-4678'}, 'institutions': [], 'is_corresponding': False, 'raw_affiliation_string': 'Molecular Biology', 'raw_affiliation_strings': ['Molecular Biology']}, {'author_position': 'last', 'author': {'id': 'https://openalex.org/A5076217348', 'display_name': 'Peter J. Roach', 'orcid': None}, 'institutions': [{'id': 'https://openalex.org/I55769427', 'display_name': 'Indiana University – Purdue University Indianapolis', 'ror': 'https://ror.org/05gxnyn08', 'country_code': 'US', 'type': 'education'}], 'is_corresponding': False, 'raw_affiliation_string': 'Indiana-University Purdue-University Indianapolis', 'raw_affiliation_strings': ['Indiana-University Purdue-University Indianapolis']}] </code></pre> <p>This list has two nested dictionaries. I am trying to extract a list of informations:</p> <p>[author_id, author_name, institution_id, institution_name, etc... ] in a list or tuple</p> <p>If you notice, the first item 'institutions' is an empty list whereas the second is not empty and that is giving me hard time. 
Below is my code snippet:</p> <pre><code>author_id = [] institution_id = [] for item in list: author_id.append(item['author']['id']) if item['institutions'][0]: institution_id.append(item['institutions'][0]['id']) institution_id </code></pre> <p>The error I am getting is:</p> <pre><code>--------------------------------------------------------------------------- IndexError Traceback (most recent call last) /home/azureuser/******.ipynb Cell 20 line 5 3 for item in a[1]: 4 author_id.append(item['author']['id']) ----&gt; 5 if item['institutions'][0]: 6 institution_id.append(item['institutions'][0]['id']) 7 institution_id IndexError: list index out of range </code></pre> <p>I would really appreciate if someone can help me navigate this situation. Thank you all!</p>
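An empty list is falsy in Python, so testing the list itself — rather than indexing `item['institutions'][0]`, which is exactly what raises `IndexError` when the list is empty — avoids the crash. A sketch with abridged records in the same shape as the data above:

```python
records = [
    {"author": {"id": "A5012408034"}, "institutions": []},                 # empty case
    {"author": {"id": "A5076217348"}, "institutions": [{"id": "I55769427"}]},
]

author_id = []
institution_id = []
for item in records:
    author_id.append(item["author"]["id"])
    if item["institutions"]:            # an empty list is falsy -> no IndexError
        institution_id.append(item["institutions"][0]["id"])
```

As an aside, naming the variable `list` (as in the snippet above) shadows the built-in type, which is worth avoiding.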
<python><python-3.x><pandas><dataframe><list>
2023-10-11 17:12:53
3
365
stackword_0
77,275,088
4,594,924
Unable to run my test file using pytest framework
<p>I have python application and tried to write a test cases. These test cases are importing actual functions resides in my src folder. In IDE, I don't find any error. But when I execute I am getting as below.</p> <p>My project structure is</p> <pre><code>myapplication |-&gt; src |-&gt;my_app |-&gt;utils |-&gt; Utils.py |-&gt; config |-&gt; data |-&gt; tests |-&gt; test_my_app |-&gt; test_utils |-&gt;test_Utils.py </code></pre> <p>My utils.py has below code snippet which has import of utils function from src which throws me an error.</p> <pre><code>import pytest from datetime import datetime from src.my_app.utils.Utils import Utils # Define test cases def test_roundOff(): assert Utils.roundOff(12.345) == 12.35 assert Utils.roundOff(10.0) == 10.0 assert Utils.roundOff(9.999) == 10.0 def test_getMarketStartTime(): now = datetime(2023, 10, 11, 9, 0, 0) market_start = Utils.getMarketStartTime(now) assert market_start == datetime(2023, 10, 11, 9, 15, 0) def test_getMarketEndTime(): now = datetime(2023, 10, 11, 16, 0, 0) market_end = Utils.getMarketEndTime(now) assert market_end == datetime(2023, 10, 11, 15, 30, 0) # You can add more test functions for other methods in the Utils class # Run the tests with pytest if __name__ == &quot;__main__&quot;: pytest.main() </code></pre> <p>The error I am facing is</p> <pre><code> (venv) PS C:\Users\mylaptop\PycharmProjects\myapplication&gt; pytest ================================================================================= test session starts ================================================================================= platform win32 -- Python 3.10.9, pytest-7.4.2, pluggy-1.3.0 rootdir: C:\Users\mylaptop\PycharmProjects\myapplication collected 0 items / 1 error ======================================================================================= ERRORS ======================================================================================== ___________________________________________________________ ERROR collecting 
tests/test_my_app/test_utils/test_Utils.py ___________________________________________________________ ImportError while importing test module 'C:\Users\mylaptop\PycharmProjects\myapplication\tests\test_my_app\test_utils\test_Utils.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: C:\Python310\lib\importlib\__init__.py:126: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests\test_my_app\test_utils\test_Utils.py:3: in &lt;module&gt; from src.my_app.utils.Utils import Utils E ModuleNotFoundError: No module named 'src' =============================================================================== short test summary info =============================================================================== ERROR tests/test_my_app/test_utils/test_Utils.py !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! ================================================================================== 1 error in 0.60s =================================================================================== </code></pre> <p>What I tried is</p> <ol> <li>Deleted pytest_cache folder and retried</li> <li>I am able to test same test cases using run option available in pyCharm. It was working. But not able to test via terminal available in pyCharm which results above error.</li> <li>File-&gt;settings. Default test framework was selected as &quot;pytest&quot;</li> <li>tried below list of commands</li> <li>pytest pytest -v tests/test_my_app/test_utils/test_Utils.py pytest -m test_Utils -k test_my_app/test_utils</li> <li>Placed empty <strong>init</strong>.py file under src,my_app and utils folder</li> <li>I have set my project path till /src folder in system environment variable. Restarted machine.</li> <li>Marked src folder as &quot;source&quot; folder in pyCharm and removed all src. 
from the import statements.</li> </ol> <p>After trying all these options I am still getting the same error.</p> <p>My environment is Python 3.10 and pytest, using PyCharm as IDE. None of these options worked. Please shed some light on how to resolve this issue.</p>
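A common fix for exactly this layout (pytest ≥ 7.0) is to put the project root on pytest's import path via the `pythonpath` ini option, so `from src.my_app.utils.Utils import Utils` resolves when running bare `pytest` from the terminal — a sketch, assuming a `pyproject.toml` at the repository root:

```toml
[tool.pytest.ini_options]
pythonpath = ["."]
testpaths = ["tests"]
```

Alternatively, running `python -m pytest` (instead of bare `pytest`) prepends the current directory to `sys.path`, which is likely why the PyCharm run configuration works while the terminal does not: PyCharm adds the content/source roots to the path for you.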
<python><pytest>
2023-10-11 17:12:04
0
798
Simbu
77,275,050
8,387,921
Use for loop and .loc to create a new column whose value equals the length of the first column's data
<p>I am very new to pandas, so I am trying to create a new column whose value is the total length of the name in the first column. But every row ends up with the same value, equal to the length of the last item in the first column.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>import pandas as pd mydictionary = {'names': ['Somuli', 'Kiu', 'mol', 'Lixxxx'], 'physics': [68, 74, 77, 78], 'chemistry': [84, 56, 73, 69], 'algebra': [78, 88, 82, 87]} # Create dataframe df_marks = pd.DataFrame(mydictionary) print('Original DataFrame\n--------------') print(df_marks) # Add column df_marks['geometry'] = "" print('\n\n DataFrame after adding "geometry" column\n--------------') mynames = df_marks['names'].unique() for std in mynames : df_marks['geometry']=len(std) print(df_marks)</code></pre> </div> </div> </p>
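The loop assigns a scalar to the whole `geometry` column on every iteration, so only the last name's length survives. Pandas can compute a per-row length in one vectorised step, with no loop at all:

```python
import pandas as pd

df_marks = pd.DataFrame({
    "names": ["Somuli", "Kiu", "mol", "Lixxxx"],
    "physics": [68, 74, 77, 78],
})

# one length per row, instead of overwriting the entire column each pass
df_marks["geometry"] = df_marks["names"].str.len()
```

If a loop were really required, the per-row form would be `df_marks.loc[df_marks['names'] == std, 'geometry'] = len(std)` — the `.loc` mask is what restricts the assignment to matching rows.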
<python><pandas>
2023-10-11 17:05:15
1
399
Sagar Rawal
77,275,030
1,712,607
Re-use the same engine for different databases on the same server?
<p>Sqlalchemy creates a new connection whenever you call <code>create_engine</code>. If you have multiple databases on the same instance, this gets tricky because Postgres can only support so many active connections.</p> <p>I have a topology where my backend needs to access different databases on the same database instance. That id for a Postgres URI <code>postgresql://[ userspec @][ hostspec ][/ dbname ]</code>, <code>userspec</code> and <code>hostspec</code> are always the same but <code>dbname</code> can change. I'd like for Sqlalchemy to re-use the same engine connection since it's hitting the same host even if it's going to different databases. Any ideas?</p>
<python><postgresql><sqlalchemy>
2023-10-11 17:01:06
1
961
Math is Hard
77,274,887
323,571
How to iterate over values of a recordset?
<p>I am working on generating a csv report. I need to get the field names for the first row of the csv file and the value of the field for each record.</p> <p>Since the model from which I need to generate the report is being inherited by other models and new fields can be created in the child models. I need to be able to dynamically get the field names.</p> <p>In my class I'm using the following, which is what has worked best so far:</p> <pre><code>class MyClass(models.Model): _inherit = 'custom.model' def csv_export(self): headings = self.fields_get() header = list(headings.keys()) print(header) # header returns the list below # ['field1', 'field2', ...] </code></pre> <p>Now, in order to get the recordset, I am performing a search with some criteria to narrow down the records to the data that I need. It looks as follows:</p> <pre><code>records = self.env['custom.model'].search([('country', '=', 'US')], limit=n) </code></pre> <p>Next, I iterate through the &quot;records&quot; to extract the value of each field in the record. The loop looks like this:</p> <pre><code>for rec_val in records: print(rec_val.state) # 'state' being one of the fields in the model # ['Florida', 'Georgia', ...] </code></pre> <p>Good so far. But since &quot;rec_val.state&quot; is hard-coded and the field name could have changed or new fields could have been added to the child models. 
I need to access the values of &quot;records&quot; while looping, using the &quot;keys&quot; I extracted for the field names, that are stored in the &quot;header&quot; list.</p> <p>The way I attempted to do that is as follows:</p> <pre><code>for index, rec_val in enumerate(records): print(rec_val[header[index]]) # this throws an exception print(rec_val.header[index]) # this also throws an exception </code></pre> <p>So, I can access the values when I use &quot;rec_val.state&quot;, but I have not found the proper way to access them otherwise.</p> <p>The reason why I'm trying to use the field name as key to access the value, is because I also need to exclude (while looping) some fields that are not needed. So it can't be hard-coded either.</p> <p>When I use &quot;search_read&quot; as my method on the model, like so:</p> <pre><code>records = self.env['custom.model'].search_read([('country', '=', 'US')], limit=n) </code></pre> <p>I get a more manageable recordset, more a plain list. But I still can't use the &quot;header[index]&quot; to match the value to the key.</p> <p>Hopefully the above makes sense and one of you can point me in the right direction.</p> <p>Thanks</p>
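Odoo records (and the plain dicts returned by `search_read`) support dict-style indexing, so the fix is to loop over the field names per record instead of pairing `enumerate`'s record index with `header` — `rec_val[header[index]]` asks record 0 for field 0, record 1 for field 1, and so on, which is why it fails. A dependency-free sketch using `search_read`-style dicts (the field names and records here are hypothetical stand-ins; the real code runs against an Odoo environment):

```python
header = ["state", "country", "internal_notes"]
excluded = {"internal_notes"}                  # fields to leave out of the report

# the shape search_read hands back (illustrative records, not a real model)
records = [
    {"state": "Florida", "country": "US", "internal_notes": "x"},
    {"state": "Georgia", "country": "US", "internal_notes": "y"},
]

rows = [
    [rec[field] for field in header if field not in excluded]
    for rec in records
]
```

With real recordsets from `search`, `rec_val[field_name]` works the same way, so the identical nested loop covers both cases.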
<python><odoo>
2023-10-11 16:39:08
1
682
jnkrois
77,274,842
7,657,180
Adjust date format in Excel output with pandas package
<p>I have the following code that extracts data from pdf file and the code is working well. As for the column G in the excel file has dates but when trying to change the dates format in the excel file, the format doesn't respond. I have to double click each cell to get the desired format</p> <pre><code>import pdfplumber import pandas as pd def extract_lines(pdf_file_path, excel_output_path): table_data = [] with pdfplumber.open(pdf_file_path) as pdf: for page_number in range(len(pdf.pages)): page = pdf.pages[page_number] page_text = page.extract_text() rows = page_text.strip().split('\n') for row in rows: if row.strip()[-1].isdigit(): segments = row.strip().split() table_data.append(segments) if table_data: df = pd.DataFrame(table_data) df = df.iloc[:, ::-1] excel_writer = pd.ExcelWriter(excel_output_path, engine='xlsxwriter') df.to_excel(excel_writer, index=False, sheet_name='Sheet1') workbook = excel_writer.book worksheet = excel_writer.sheets['Sheet1'] worksheet.right_to_left() excel_writer._save() print(f'PDF Data Converted And Saved To {excel_output_path}') else: print('No Lines Ending With Digits Found In The PDF') if __name__ == '__main__': extract_lines('Sample.pdf', 'Output.xlsx') </code></pre> <p>I tried to apply the format <code>strftime(&quot;%d/%m/%Y&quot;)</code> in the code but the problem persists.</p>
<python><pandas>
2023-10-11 16:30:45
1
9,608
YasserKhalil
77,274,838
274,460
How do I wrap asyncio calls in general-purpose non-async functions?
<p>I'm accessing a DBus API using <code>dbus_next</code>. Since that's an asyncio library and the rest of my code is synchronous/threaded code, I wrap it up in a synchronous function:</p> <pre><code>intfc = &quot;...&quot; path = &quot;...&quot; async def dbus_call_async(): bus = await dbus_next.aio.MessageBus(bus_type=dbus_next.BusType.SYSTEM).connect() introspection = await bus.introspect(intfc, path) obj = bus.get_proxy_object(intfc, path, introspection) return await obj.call_my_api() def dbus_call(): return asyncio.get_event_loop().run_until_complete(dbus_call_async()) </code></pre> <p>Note that Python 3.6 is a requirement.</p> <p>This works - until it gets called in an asyncio context, when it dies with <code>RuntimeError: This event loop is already running</code>.</p> <p>Now, the obvious answer is to call the async version from async contexts and the sync version from sync contexts. But of course there are numerous layers of function call between the asyncio context I'm now using it from and the wrapper function so this would mean maintaining parallel async and sync versions of <strong>all my code</strong>. Which is plainly bonkers.</p> <p>So what's the right way to do this? 
How can I write a wrapper function that can be called from both async and sync contexts?</p> <p><strong>Edit</strong> Here's a minimal reproducible example:</p> <pre><code>import asyncio import sys async def async_call(): return &quot;Hello, world&quot; def _run_asyncio(coro): loop = asyncio.get_event_loop() result = asyncio.get_event_loop().run_until_complete(coro) return result def wrapped_call(): return _run_asyncio(async_call()) async def async_main(): print(wrapped_call()) def sync_main(): print(wrapped_call()) if len(sys.argv) &gt; 1: print(&quot;Calling in asyncio context&quot;) asyncio.get_event_loop().run_until_complete(async_main()) else: print(&quot;Calling in sync context&quot;) sync_main() </code></pre> <p>The important point here is that <code>wrapped_call()</code> should be callable from both sync and async contexts. But the above only works when invoked without parameters, going via the sync route. When invoked with a parameter, and so invoking <code>wrapped_call()</code> via <code>async_main()</code>, it produces the following stack trace:</p> <pre><code>Calling in asyncio context Traceback (most recent call last): File &quot;/home/tkcook/test2.py&quot;, line 23, in &lt;module&gt; asyncio.get_event_loop().run_until_complete(async_main()) File &quot;/usr/lib/python3.10/asyncio/base_events.py&quot;, line 649, in run_until_complete return future.result() File &quot;/home/tkcook/test2.py&quot;, line 16, in async_main print(wrapped_call()) File &quot;/home/tkcook/test2.py&quot;, line 13, in wrapped_call return _run_asyncio(async_call()) File &quot;/home/tkcook/test2.py&quot;, line 9, in _run_asyncio result = asyncio.get_event_loop().run_until_complete(coro) File &quot;/usr/lib/python3.10/asyncio/base_events.py&quot;, line 625, in run_until_complete self._check_running() File &quot;/usr/lib/python3.10/asyncio/base_events.py&quot;, line 584, in _check_running raise RuntimeError('This event loop is already running') RuntimeError: This event loop is already 
running </code></pre>
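One workaround is a wrapper that checks whether a loop is already running in the current thread: if not, it drives the coroutine directly; if one is, it runs the coroutine on a fresh loop in a worker thread and blocks until it finishes. A sketch (note: `asyncio.get_running_loop` is 3.7+; on 3.6 the check would be `asyncio.get_event_loop().is_running()` — and blocking a running loop like this sacrifices its concurrency, so it is a bridge, not a cure):

```python
import asyncio
import threading


def run_sync(coro):
    """Run *coro* to completion from either a sync or an async caller."""
    try:
        asyncio.get_running_loop()          # raises RuntimeError if no loop is running
    except RuntimeError:
        loop = asyncio.new_event_loop()     # sync context: drive the coroutine ourselves
        try:
            return loop.run_until_complete(coro)
        finally:
            loop.close()

    # async context: a loop is already running in this thread, so give the
    # coroutine its own loop on a worker thread and block here on the result
    box = {}

    def worker():
        inner = asyncio.new_event_loop()
        try:
            box["result"] = inner.run_until_complete(coro)
        finally:
            inner.close()

    t = threading.Thread(target=worker)
    t.start()
    t.join()
    return box["result"]
```

With this, `wrapped_call()` from the minimal example can stay a single function and both `sync_main` and `async_main` paths complete.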
<python><python-3.x><python-asyncio><python-3.6>
2023-10-11 16:30:03
2
8,161
Tom
77,274,716
10,430,394
Using mutually exclusive groups in argparse: How to channel logic based on groups?
<p>I want to have mutually exclusive arguments for my script and channel logic based on what has been specified by a user. What I currently have is this:</p> <pre class="lang-py prettyprint-override"><code>supported_formats = ['xyz','mol','pdb'] defaults = {'extension':'xyz', 'dirname':None, 'name':None} info = '''Add info here.''' parser = argparse.ArgumentParser(description=info) group1 = parser.add_mutually_exclusive_group() group2 = parser.add_mutually_exclusive_group() group1.add_argument('-i', '--infile', help='Specify a file path to a names_smiles csv.') group2.add_argument('-s', '--smiles', nargs='+', help='Specify your smiles (multiple possible).') group2.add_argument('-n', '--name', nargs='+', help='Specify (a) name(s) for your molecule.') parser.add_argument('-d', '--dirname', default=defaults['dirname'], help='Specify the name of the directory to which the coords files will be written.') parser.add_argument('-x', '--extension', default=defaults['extension'], help='Specify the file extension for your output coordinates. No dot necessary. Supported formats: %s'%', '.join(supported_formats)) mutually_exclusive = parser.add_mutually_exclusive_group() mutually_exclusive.add_argument_group(group1) mutually_exclusive.add_argument_group(group2) args = parser.parse_args() </code></pre> <p>As you can see, <code>group1</code> and <code>group2</code> args are exclusive, while the last ones are not. I want to later channel execution based on these groups kind of like:</p> <pre class="lang-py prettyprint-override"><code>if args.group1: # do Stuff for csv print('Using input file %s as input.'%os.path.basename(args.infile)) elif args.group2: print('Using manual input.') else: sys.exit('No input specified.') </code></pre> <p>The problem is that argparse doesn't work like that. 
In order to achieve my desired functionality, I need to check each argument of each group individually like this:</p> <pre class="lang-py prettyprint-override"><code>if args.infile: print(&quot;Group 1 logic&quot;) elif args.name or args.smiles: print(&quot;Group 2 logic&quot;) else: print(&quot;No valid arguments were provided.&quot;) </code></pre> <p>Right now that is not that big a deal. But I am planning on adding a lot of args in the future and I don't want to have an <code>if</code> statement with like 10 <code>or</code> components.</p> <p>I have tried to find out if there is an iterator for args groups so that I could do something like:</p> <pre class="lang-py prettyprint-override"><code>if any([subarg == True for subarg in args.group2.subargs]): print('Run group2 logic') </code></pre> <p>But a look at the docs and <code>dir(parser)</code> didn't give me anything like that.</p> <p>Is this possible or do I have to manually add each argument for each group?</p> <p>EDIT: The previous post does not answer my question since it was a general post about how to use mutually exclusive groups. I want to know how to check if any argument of a particular group has been specified and channel logic based on that info.</p>
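`argparse` doesn't expose a public per-group view on the parsed namespace, but each group object keeps its actions in `_group_actions`, so the check can be derived generically instead of listing every destination by hand. A sketch — hedged because `_group_actions` is a private attribute and could change between Python versions:

```python
import argparse

parser = argparse.ArgumentParser()
group1 = parser.add_mutually_exclusive_group()
group2 = parser.add_mutually_exclusive_group()
group1.add_argument('-i', '--infile')
group2.add_argument('-s', '--smiles', nargs='+')
group2.add_argument('-n', '--name', nargs='+')


def group_given(group, args):
    # True if any argument of this group was supplied (defaults here are all None)
    return any(getattr(args, action.dest) is not None
               for action in group._group_actions)


args = parser.parse_args(['-s', 'CCO', 'CCN'])
```

If some arguments carry non-`None` defaults, compare against `parser.get_default(action.dest)` instead of `None`.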
<python><command-line-interface><argparse>
2023-10-11 16:10:00
1
534
J.Doe
77,274,665
2,956,276
Cannot debug script with trio_asyncio in PyCharm
<p>I have this (very simplified) program running with trio as base async library and with trio_asyncio library allowing me to call asyncio methods too:</p> <pre class="lang-py prettyprint-override"><code>import asyncio import trio import trio_asyncio async def async_main(*args): print('async_main start') async with trio_asyncio.open_loop() as loop: print('async_main before trio sleep') await trio.sleep(1) print('async_main before asyncio sleep') await trio_asyncio.aio_as_trio(asyncio.sleep)(2) print('async_main after sleeps') print('async_main stop') if __name__ == '__main__': print('main start') trio.run(async_main) print('main stop') </code></pre> <p>It works well, if I run it from PyCharm:</p> <pre><code>main start async_main start async_main before trio sleep async_main before asyncio sleep async_main after sleeps async_main stop main stop </code></pre> <p>But if I run the same code from PyCharm in debug mode (menu Run / Debug), then it raises an exception:</p> <pre><code>Connected to pydev debugger (build 232.9559.58) main start async_main start async_main before trio sleep async_main before asyncio sleep Traceback (most recent call last): File &quot;/home/vaclav/.config/JetBrains/PyCharmCE2023.2/scratches/scratch_3.py&quot;, line 12, in async_main await trio_asyncio.aio_as_trio(asyncio.sleep)(2) File &quot;/home/vaclav/.cache/pypoetry/virtualenvs/maybankwithoutselenium-RhkLw-zs-py3.11/lib/python3.11/site-packages/trio_asyncio/_adapter.py&quot;, line 54, in __call__ return await self.loop.run_aio_coroutine(f) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/vaclav/.cache/pypoetry/virtualenvs/maybankwithoutselenium-RhkLw-zs-py3.11/lib/python3.11/site-packages/trio_asyncio/_base.py&quot;, line 214, in run_aio_coroutine fut = asyncio.ensure_future(coro, loop=self) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/pycharm-community/plugins/python-ce/helpers/pydev/_pydevd_asyncio_util/pydevd_nest_asyncio.py&quot;, line 156, in ensure_future return 
loop.create_task(coro_or_future) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/lib/python3.11/asyncio/base_events.py&quot;, line 436, in create_task task = tasks.Task(coro, loop=self, name=name, context=context) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/pycharm-community/plugins/python-ce/helpers/pydev/_pydevd_asyncio_util/pydevd_nest_asyncio.py&quot;, line 390, in task_new_init self._loop.call_soon(self, context=self._context) File &quot;/home/vaclav/.cache/pypoetry/virtualenvs/maybankwithoutselenium-RhkLw-zs-py3.11/lib/python3.11/site-packages/trio_asyncio/_base.py&quot;, line 312, in call_soon self._check_callback(callback, 'call_soon') File &quot;/usr/lib/python3.11/asyncio/base_events.py&quot;, line 776, in _check_callback raise TypeError( TypeError: a callable object was expected by call_soon(), got &lt;Task pending name='Task-1' coro=&lt;_call_defer() running at /home/vaclav/.cache/pypoetry/virtualenvs/maybankwithoutselenium-RhkLw-zs-py3.11/lib/python3.11/site-packages/trio_asyncio/_adapter.py:16&gt;&gt; python-BaseException sys:1: RuntimeWarning: coroutine '_call_defer' was never awaited Exception in default exception handler Traceback (most recent call last): File &quot;/usr/lib/python3.11/asyncio/base_events.py&quot;, line 1797, in call_exception_handler self.default_exception_handler(context) File &quot;/home/vaclav/.cache/pypoetry/virtualenvs/maybankwithoutselenium-RhkLw-zs-py3.11/lib/python3.11/site-packages/trio_asyncio/_async.py&quot;, line 42, in default_exception_handler raise RuntimeError(message) RuntimeError: Task was destroyed but it is pending! 
Process finished with exit code 1 </code></pre> <p>The source code is copied from the <a href="https://trio-asyncio.readthedocs.io/en/latest/usage.html#trio_asyncio.open_loop" rel="nofollow noreferrer">official <code>trio_asyncio</code> documentation</a>.</p> <p>I have two questions:</p> <ol> <li>Why the code works well if it is run without debugging, and why it fails when it is run in debugger?</li> <li>How I should modify this code to be able still call both - <code>trio</code> and <code>asyncio</code> methods and it will be possible to use debugger with such code?</li> </ol>
<python><pycharm><python-trio>
2023-10-11 16:00:23
1
1,313
eNca
77,274,495
16,459,035
Access denied on pandas to_parquet python
<p>I have the following dir structure</p> <pre><code>data data1 year month codes extract_data historical_data script.py </code></pre> <p>When I run <code>df.to_parquet('C:\users\john\data\data1\year\month')</code> I get <code>PermissionError: [WinError 5] Failed to open local file 'C:\users\john\data\data1\year\month'. Detail: [Windows error 5] Access denied.</code></p> <p>I'm running the script from <code>C:\users\john\codes\extract_data\historical_data\</code></p> <p>I tried to use <code>C:\\users\\john\\data\\data1\\year\\month</code> and <code>r'C:\users\john\data\data1\year\month'</code> as path, but with no success.</p>
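A hedged guess at the cause: the path passed to `to_parquet` names the existing `month` directory itself, and Windows reports error 5 (access denied) when a file handle is opened on a directory. Appending a file name should fix it — `data.parquet` below is an arbitrary example name, not from the question:

```python
from pathlib import PureWindowsPath

# The failing call passes the existing `month` directory itself; to_parquet
# needs a file path. `data.parquet` is an arbitrary example file name.
base = PureWindowsPath(r"C:\users\john\data\data1\year\month")
target = base / "data.parquet"

print(target)  # C:\users\john\data\data1\year\month\data.parquet
# df.to_parquet(str(target))  # with the real DataFrame
```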
<python><pandas>
2023-10-11 15:36:00
2
671
OdiumPura
77,274,367
12,827,931
Randomly replace values in array with None
<p>Suppose there's an array</p> <pre><code>arr = np.array((0,0,1,2,0,1,2,2,1,0)) </code></pre> <p>What I'd like to do is to randomly replace <code>n</code> values with <code>None</code>. What I tried is</p> <pre><code>n = 5 arr[[random.randint(0, len(arr)-1) for i in range(n)]] = None </code></pre> <p>but what I get is</p> <pre><code>TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType' </code></pre> <p>How can I achieve that?</p>
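The `TypeError` comes from the array's integer dtype, which cannot store `None`; separately, drawing indices with `random.randint` in a loop can pick the same position twice, so fewer than `n` values may end up replaced. A sketch that addresses both, by casting to object dtype (or float with `np.nan`) and sampling without replacement:

```python
import numpy as np

arr = np.array((0, 0, 1, 2, 0, 1, 2, 2, 1, 0))
n = 5

# Integer arrays cannot hold None; switch to object dtype
# (or use arr.astype(float) and np.nan to stay numeric).
out = arr.astype(object)

# replace=False guarantees n distinct positions, unlike repeated randint.
idx = np.random.choice(len(out), size=n, replace=False)
out[idx] = None

print(out)
```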
<python><arrays><numpy>
2023-10-11 15:18:07
2
447
thesecond
77,274,171
1,671,319
How do I mock the paramiko.SSHClient so that an error is raised on the connect call
<p>I am trying to write a unit test and raise an error from the <code>connect</code> call in the <code>paramiko.SSHClient</code>. Here is the code I want to test:</p> <pre><code>with paramiko.SSHClient() as ssh: ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh.connect(host_name, username=user_name, pkey=pk, port=port) </code></pre> <p>and this is what I have in the test:</p> <pre><code>def raiser(): raise ValueError(&quot;test&quot;) ssh_client_mock = mocker.patch('paramiko.SSHClient') ssh_client_mock.connect.side_effect = raiser </code></pre> <p>When debugging, I can see that <code>with paramiko.SSHClient() as ssh:</code> give me an <code>ssh</code> instance that is a <code>MagicMock</code> - but the connect call does not raise ....</p>
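The likely issue: `mocker.patch('paramiko.SSHClient')` replaces the *class*, and `with paramiko.SSHClient() as ssh:` binds `ssh` to `SSHClient.return_value.__enter__.return_value` — so the `side_effect` set on the class mock is never reached. A sketch of the fix using a plain `MagicMock` as a stand-in (with pytest-mock you would attach the side effect to the same attribute path on the object `mocker.patch` returns):

```python
from unittest import mock

# Stand-in for mocker.patch('paramiko.SSHClient'); patching the real class
# yields exactly this kind of MagicMock.
ssh_client_mock = mock.MagicMock()

# `with paramiko.SSHClient() as ssh:` binds ssh to
# SSHClient.return_value.__enter__.return_value, so the side effect must be
# attached there -- setting it directly on the class mock is never hit.
ssh_instance = ssh_client_mock.return_value.__enter__.return_value
ssh_instance.connect.side_effect = ValueError("test")

# Simulated code under test:
with ssh_client_mock() as ssh:
    try:
        ssh.connect("host", username="user", port=22)
        raised = False
    except ValueError:
        raised = True

print("connect raised:", raised)  # connect raised: True
```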
<python><pytest><python-unittest.mock><pytest-mock>
2023-10-11 14:50:05
1
3,074
reikje
77,274,065
1,724,590
Python Plotly how to show axis values outside your dataset's max value?
<pre><code>import plotly.express as px import pandas as pd df = pd.DataFrame(dict( r=[1,2,3], theta=['a', 'b','c'])) fig = px.line_polar(df, r=[1,2,3], theta=['a','b','c'], line_close=True) fig.update_polars(radialaxis_tickvals=[1,2,3,4,5], radialaxis_tickmode='array') fig.show() </code></pre> <p>I have code like so that is able to generate a polar chart that looks like below</p> <p><a href="https://i.sstatic.net/M4eM0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M4eM0.png" alt="enter image description here" /></a></p> <p>However what I want is to ensure that the maximum value of my tickvals is the maximum on the chart. So even though 3 is the highest value in this particular dataset, I want it to show tickvals 4 and 5 on the chart itself. This is so I can visually see that the 3 here is lower than the maximum otherwise when compared to other charts, at a glance it would look like 3 or 5 both meet the maximum.</p>
<python><plotly>
2023-10-11 14:36:27
1
634
Michael Yousef
77,273,922
10,416,012
aio download_blob works once but not twice when run with asyncio.run
<p>The aio <code>download_blob</code> call of the Azure blob SDK works once but not twice when run with <code>asyncio.run</code>; this looks like a bug related to <code>aiohttp</code>, but I could not figure out how to solve it. (Windows)</p> <p>The code I have is almost a copy of their original example at: <a href="https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/storage/azure-storage-blob/samples/blob_samples_hello_world_async.py" rel="nofollow noreferrer">https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/storage/azure-storage-blob/samples/blob_samples_hello_world_async.py</a></p> <pre><code>import asyncio from azure.storage.blob.aio import ContainerClient from azure.identity import DefaultAzureCredential credentials = DefaultAzureCredential() async def test(conn_client): async with conn_client as client_conn: stream = await client_conn.download_blob(my_path) data = await stream.readall() return data if __name__ == &quot;__main__&quot;: my_container_name = &quot;Container name&quot; my_client = ContainerClient.from_container_url(container_url=my_container_name, credential=credentials) my_path = 'test_path' data = asyncio.run(test(my_client)) # works and returns the file from blob storage data2 = asyncio.run(test(my_client)) # doesn't work </code></pre> <p>Error Message:</p> <pre><code>DEBUG - asyncio: Using proactor: IocpProactor ... await self.open() File &quot;C...\Cache\virtualenvs\transformer-wi-nHELc-py3.11\Lib\site-packages\azure\core\pipeline\transport\_aiohttp.py&quot;, line 127, in open await self.session.__aenter__() ^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute '__aenter__'. Did you mean: '__delattr__'? Process finished with exit code 1 </code></pre> <p>Any idea or workaround?</p>
<python><azure><azure-blob-storage><aiohttp><azure-sdk-python>
2023-10-11 14:18:59
1
2,235
Ziur Olpa
77,273,704
8,510,149
Rotate x-axis labels in a chart using openpyxl
<p>The code below generates a simple barchart using openpyxl.</p> <p><a href="https://i.sstatic.net/6vPaf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6vPaf.png" alt="enter image description here" /></a></p> <p>Now I'm struggling to rotate the x-axis labels. Is an XML solution perhaps possible? Is there a way to locate that particular piece of XML code and adjust it?</p> <pre><code>import openpyxl from openpyxl.chart import BarChart, Reference # Create a workbook and activate a sheet wb = openpyxl.Workbook() sheet = wb.active # insert some categories cell = sheet.cell(row=1, column=1) cell.value = 'Category 1.1' cell = sheet.cell(row=2, column=1) cell.value = 'Category 1.2 - limit' cell = sheet.cell(row=3, column=1) cell.value = 'Category 2' cell = sheet.cell(row=4, column=1) cell.value = 'Category 2.1 - extra' cell = sheet.cell(row=5, column=1) cell.value = 'Category 2.2 - extra2' # insert some values for i in range(5): cell = sheet.cell(row=i+1, column=2) cell.value = i+2 # create chart chart = BarChart() values = Reference(sheet, min_col = 2, min_row = 1, max_col = 2, max_row = 5) bar_categories = Reference(sheet, min_col=1, min_row=1, max_row=5) chart.add_data(values) chart.set_categories(bar_categories) chart.title = &quot; BAR-CHART &quot; chart.legend = None chart.x_axis.title = &quot; X_AXIS &quot; chart.y_axis.title = &quot; Y_AXIS &quot; sheet.add_chart(chart, &quot;E2&quot;) # save the file wb.save(&quot;barChart.xlsx&quot;) </code></pre>
<python><excel><xml><openpyxl>
2023-10-11 13:52:10
0
1,255
Henri
77,273,655
21,305,238
What to put in the [project.scripts] table if scripts are not stored in the "src" directory?
<p>I have this package which follows the <em><a href="https://packaging.python.org/en/latest/discussions/src-layout-vs-flat-layout/" rel="nofollow noreferrer">src layout</a></em>:</p> <pre class="lang-none prettyprint-override"><code>. +-- scripts | \-- lorem.py | +-- src | \-- bar | |-- __init__.py | \-- bazqux.py | |-- .editorconfig |-- .gitignore |-- LICENSE |-- pyproject.toml \-- README.md </code></pre> <p><em>pyproject.toml</em> has the following table:</p> <pre class="lang-ini prettyprint-override"><code>[project.scripts] foo = &quot;bar.lorem:ipsum&quot; </code></pre> <p>This is supposed to expose a command line executable called <code>foo</code> which will run the function <code>ipsum</code> in the file <code>./src/bar/lorem.py</code>, if I understand it correctly.</p> <p>However, I want my scripts to stay in <code>scripts</code>, a sibling of <code>src</code>. <code>foo = &quot;scripts.lorem:ipsum&quot;</code> doesn't work: it leads to a <code>ModuleNotFoundError: No module named 'scripts'</code>, which is quite understandable.</p> <p>What should I put in that field then? Or should I change the project layout instead?</p>
<python><python-packaging><pyproject.toml>
2023-10-11 13:47:02
3
12,143
InSync
77,273,461
8,588,743
ERROR: Could not build wheels for fasttext, which is required to install pyproject.toml-based projects
<p>I'm trying to install <code>fasttext</code> using <code>pip install fasttext</code> in python 3.11.4 but I'm running into trouble when building wheels. The error reads as follows:</p> <pre><code> error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.37.32822\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for fasttext Running setup.py clean for fasttext Failed to build fasttext ERROR: Could not build wheels for fasttext, which is required to install pyproject.toml-based projects </code></pre> <p>I've searched the web and most hits indicated that the error has something to do with the build tools of visual studio (which the error above also indicated). I've installed/updated all my build tools and I've also installed the latest SDK as suggested <a href="https://stackoverflow.com/questions/75927836/error-could-not-build-wheels-for-fairseq-which-is-required-to-install-pyprojec">here</a>, but the error persists.</p> <p>Has anyone solved this problem before and can share any potential solution?</p>
<python><python-wheel><fasttext>
2023-10-11 13:23:15
3
903
Parseval
77,273,344
11,329,736
MultiQC snakemake wrapper: ModuleNotFoundError No module named 'imp'
<p>I am running FastQC and MultiQC in my <code>snakemake</code> pipeline:</p> <pre><code>rule fastqc: input: &quot;reads/{sample}_trimmed.fq.gz&quot; output: html=&quot;qc/fastqc/{sample}.html&quot;, zip=&quot;qc/fastqc/{sample}_fastqc.zip&quot; # the suffix _fastqc.zip is necessary for multiqc to find the file params: extra = &quot;--quiet&quot; log: &quot;logs/fastqc/{sample}.log&quot; threads: config[&quot;resources&quot;][&quot;fastqc&quot;][&quot;cpu&quot;] resources: runtime=config[&quot;resources&quot;][&quot;fastqc&quot;][&quot;time&quot;] wrapper: &quot;v1.31.1/bio/fastqc&quot; rule multiqc: input: expand(&quot;qc/fastqc/{sample}_fastqc.zip&quot;, sample=SAMPLES) output: report(&quot;qc/multiqc.html&quot;, caption=&quot;workflow/report/multiqc.rst&quot;, category=&quot;MultiQC analysis of fastq files&quot;) params: extra=&quot;&quot;, # Optional: extra parameters for multiqc. use_input_files_only=True, # Optional, use only a.txt and don't search folder samtools_stats for files resources: runtime=config[&quot;resources&quot;][&quot;fastqc&quot;][&quot;time&quot;] log: &quot;logs/multiqc/multiqc.log&quot; wrapper: &quot;v1.31.1/bio/multiqc&quot; </code></pre> <p>The fastqc rule runs without any issues, but multiqc fails:</p> <pre><code>Traceback (most recent call last): File &quot;/mnt/4TB_SSD/analyses/CRISPR/test/.snakemake/conda/db6c33339e73e6beea68618300022717_/bin/multiqc&quot;, line 6, in &lt;module&gt; from multiqc.__main__ import run_multiqc File &quot;/mnt/4TB_SSD/analyses/CRISPR/test/.snakemake/conda/db6c33339e73e6beea68618300022717_/lib/python3.12/site-packages/multiqc/__init__.py&quot;, line 16, in &lt;module&gt; from .multiqc import run File &quot;/mnt/4TB_SSD/analyses/CRISPR/test/.snakemake/conda/db6c33339e73e6beea68618300022717_/lib/python3.12/site-packages/multiqc/multiqc.py&quot;, line 30, in &lt;module&gt; from .plots import table File 
&quot;/mnt/4TB_SSD/analyses/CRISPR/test/.snakemake/conda/db6c33339e73e6beea68618300022717_/lib/python3.12/site-packages/multiqc/plots/table.py&quot;, line 9, in &lt;module&gt; from multiqc.plots import beeswarm, table_object File &quot;/mnt/4TB_SSD/analyses/CRISPR/test/.snakemake/conda/db6c33339e73e6beea68618300022717_/lib/python3.12/site-packages/multiqc/plots/beeswarm.py&quot;, line 8, in &lt;module&gt; from multiqc.plots import table_object File &quot;/mnt/4TB_SSD/analyses/CRISPR/test/.snakemake/conda/db6c33339e73e6beea68618300022717_/lib/python3.12/site-packages/multiqc/plots/table_object.py&quot;, line 9, in &lt;module&gt; from multiqc.utils import config, report File &quot;/mnt/4TB_SSD/analyses/CRISPR/test/.snakemake/conda/db6c33339e73e6beea68618300022717_/lib/python3.12/site-packages/multiqc/utils/report.py&quot;, line 18, in &lt;module&gt; import lzstring File &quot;/mnt/4TB_SSD/analyses/CRISPR/test/.snakemake/conda/db6c33339e73e6beea68618300022717_/lib/python3.12/site-packages/lzstring/__init__.py&quot;, line 11, in &lt;module&gt; from future import standard_library File &quot;/mnt/4TB_SSD/analyses/CRISPR/test/.snakemake/conda/db6c33339e73e6beea68618300022717_/lib/python3.12/site-packages/future/standard_library/__init__.py&quot;, line 65, in &lt;module&gt; import imp ModuleNotFoundError: No module named 'imp' </code></pre> <p>Using a later version of the wrapper (v2.6.0) gives me the same error. 
The multiqc rule has worked before, so I don't understand why <code>imp</code> is suddenly not found.</p> <p>The yaml file to create the conda env for the multiqc wrapper:</p> <pre><code>channels: - conda-forge - bioconda - nodefaults dependencies: - multiqc =1.16 </code></pre> <p>A quick Internet search tells me that <a href="http://pymotw.com/2/imp/" rel="nofollow noreferrer">&quot;The imp module includes functions that expose part of the underlying implementation of Python’s import mechanism for loading code in packages and modules&quot;</a>, and it seems to me that it is already part of Python?</p>
<python><snakemake>
2023-10-11 13:05:25
1
1,095
justinian482
77,272,982
583,464
select the first of two or more filenames and save only the first
<p>I want to save files with these filenames:</p> <pre><code>filenames = [ '4_66\nNUC_66377\nAPPR\nWONDER', '4_66\nNUC_66377\nAPPR\nCOT', '8_21\nAKRO\nNUT\nAMY' ] </code></pre> <p>and if the filename starts with the same number before underscore, then write to file only the first filename.</p> <p>So, I am doing:</p> <pre><code>for idx in range(len(filenames)-1): if filenames[idx][0:2] != filenames[idx + 1][0:2]: with open('./' + filenames[idx] + '.txt', 'w') as file: file.write('111') # save the last file with open('./' + filenames[-1] + '.txt', 'w') as file: file.write('111') </code></pre> <p>But if you run the code, it saves the second one! <code>'4_66\nNUC_66377\nAPPR\nCOT'</code></p>
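The adjacent-pair comparison writes <code>filenames[idx]</code> only when the *next* prefix differs, which for the two <code>4_66</code> names is the second one. A sketch that keeps the first name per prefix using a set of seen prefixes (note that the <code>\n</code> characters inside the names would themselves be illegal in most filesystems):

```python
filenames = [
    '4_66\nNUC_66377\nAPPR\nWONDER',
    '4_66\nNUC_66377\nAPPR\nCOT',
    '8_21\nAKRO\nNUT\nAMY',
]

seen = set()
to_save = []
for name in filenames:
    prefix = name.split('_', 1)[0]  # the number before the first underscore
    if prefix not in seen:          # keep only the first name per prefix
        seen.add(prefix)
        to_save.append(name)

print(to_save)
# then write each kept file:
# for name in to_save:
#     with open(f'./{name}.txt', 'w') as f:
#         f.write('111')
```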
<python>
2023-10-11 12:15:33
2
5,751
George
77,272,952
2,386,605
OpenAI API: How do I use the ChatGPT plugin in Python?
<p>I want to use the <a href="https://github.com/openai/openai-python" rel="nofollow noreferrer">OpenAI python package</a>. However, I also want to make use of some ChatGPT plugins.</p> <p>I tried the following with Langchain:</p> <pre><code>tool = AIPluginTool.from_plugin_url(&quot;https://scholar-ai.net/.well-known/ai-plugin.json&quot;) llm = ChatOpenAI(temperature=0, streaming=True, model_name=&quot;gpt-3.5-turbo-16k-0613&quot;) tools = [tool] agent_chain = initialize_agent( tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True ) agent_chain.run(&quot;What are the antiviral effects of Sillymarin?&quot;) </code></pre> <p>Sadly, I got: <code>InvalidRequestError: This model's maximum context length is 16385 tokens. However, your messages resulted in 16552 tokens (16455 in the messages, 97 in the functions). Please reduce the length of the messages or functions. </code></p> <p>Is there a way to do it directly via OpenAI or via Langchain? If so, how could I do so?</p>
<python><openai-api><chatgpt-api><azure-openai>
2023-10-11 12:12:36
1
879
tobias
77,272,845
5,180,979
Sequential allocation problem using pandas
<p>I encountered the following allocation problem as follows.</p> <p>Consider the following dataframe representing the supply schedule of some item.</p> <pre><code>supply_df = item supplier supply_week supply 0 a s1 1 10 1 a s2 1 20 2 a s3 1 10 3 a s1 2 23 4 a s2 2 33 5 a s3 2 42 6 a s1 3 52 7 a s2 3 27 8 a s3 3 29 9 b s1 1 37 10 b s2 1 32 11 b s3 1 38 12 b s1 2 17 13 b s2 2 28 14 b s3 2 44 15 b s1 3 41 16 b s2 3 45 17 b s3 3 24 </code></pre> <p>Here is the consumption information</p> <pre><code>consume_df = item week consume 0 a 1 33 1 a 2 100 2 a 3 102 3 b 1 90 4 b 2 100 5 b 3 80 </code></pre> <p>I wish to map which supply got consumed in which week and what quantity.</p> <pre><code>out_df = item supplier supply_week supply week consume 0 a s1 1 10 1 10 1 a s2 1 20 1 20 2 a s3 1 10 1 3 3 a s3 1 10 2 7 4 a s1 2 23 2 23 5 a s2 2 33 2 33 6 a s3 2 42 2 37 7 a s3 2 42 3 5 8 a s1 3 52 3 52 9 a s2 3 27 3 27 10 a s3 3 29 3 18 11 b s1 1 37 1 37 12 b s2 1 32 1 32 13 b s3 1 38 1 21 14 b s3 1 38 2 17 15 b s1 2 17 2 17 16 b s2 2 28 2 28 17 b s3 2 44 2 38 18 b s3 2 44 3 6 19 b s1 3 41 3 41 20 b s2 3 45 3 33 </code></pre> <p>Notable points:</p> <ul> <li>It is already known that the consumption of supply shall happen in the order of preference of <code>s1</code> &gt; <code>s2</code> &gt; <code>s3</code> for the same week.</li> <li>Likewise, consumption will only happen on an earlier-week-earlier-consume basis.</li> <li>Supply of one item cannot contribute to another item.</li> <li>While consumption of an item in a given week can come from multiple suppliers, it is also possible for a supply from a given supplier to be split between multiple weeks.</li> <li>It is entirely possible for some supplier to have intermittent supply (not every week) unlike the sample problem above.</li> <li>Any leftover supply is not important.</li> </ul> <p>I was able to do it using <code>pd.DataFrame.GroupBy.apply</code> or iterators like <code>pd.DataFrame.itertuples</code> and 
<code>pd.DataFrame.iterrows</code> with a custom function applied; however, since the data is large in the actual problem, this is not efficient.</p> <p>I am looking for a more efficient solution to this problem. Please help.</p>
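One loop-free approach, sketched under the assumption that the lexicographic supplier order happens to match the stated s1 &gt; s2 &gt; s3 preference (otherwise map suppliers to an explicit priority column first): turn both schedules into cumulative intervals per item, then the quantity of a supply row consumed in a week is the overlap of the two intervals. The merge is a per-item cross join, so for very large data you may want to pre-filter pairs whose intervals cannot overlap:

```python
import numpy as np
import pandas as pd

def allocate(supply_df, consume_df):
    # Supply row i of an item covers [s_start, s_end) of the item's running
    # total; consume week j covers [c_start, c_end). The amount of supply i
    # eaten in week j is exactly the overlap of the two intervals.
    s = supply_df.sort_values(["item", "supply_week", "supplier"]).copy()
    s["s_end"] = s.groupby("item")["supply"].cumsum()
    s["s_start"] = s["s_end"] - s["supply"]

    c = consume_df.sort_values(["item", "week"]).copy()
    c["c_end"] = c.groupby("item")["consume"].cumsum()
    c["c_start"] = c["c_end"] - c["consume"]

    m = s.merge(c[["item", "week", "c_start", "c_end"]], on="item")
    m["alloc"] = (np.minimum(m["s_end"], m["c_end"])
                  - np.maximum(m["s_start"], m["c_start"])).clip(lower=0)
    out = m.loc[m["alloc"] > 0,
                ["item", "supplier", "supply_week", "supply", "week", "alloc"]]
    return out.rename(columns={"alloc": "consume"}).reset_index(drop=True)

supply_df = pd.DataFrame({
    "item": ["a"] * 9 + ["b"] * 9,
    "supplier": ["s1", "s2", "s3"] * 6,
    "supply_week": [1, 1, 1, 2, 2, 2, 3, 3, 3] * 2,
    "supply": [10, 20, 10, 23, 33, 42, 52, 27, 29,
               37, 32, 38, 17, 28, 44, 41, 45, 24],
})
consume_df = pd.DataFrame({
    "item": ["a"] * 3 + ["b"] * 3,
    "week": [1, 2, 3] * 2,
    "consume": [33, 100, 102, 90, 100, 80],
})
print(allocate(supply_df, consume_df))
```

On the sample data this reproduces the expected mapping, including the split of supplier s3's week-1 supply of item a into 3 units for week 1 and 7 units for week 2.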
<python><python-3.x><pandas><dataframe><merge>
2023-10-11 11:57:13
2
315
CharcoalG
77,272,833
300,963
Can I save my python-statemachine to file?
<p>Using the python-statemachine package, I would like to save the current state to external file or database, preferably in json format, and later load it to recreate the state. Is this possible, and if so, how would I do it?</p>
<python>
2023-10-11 11:56:11
0
5,073
Johan
77,272,779
52,074
How do you test N significant digits for a float value?
<p>Lots of code (e.g. <code>numpy</code>, <code>scipy</code>, <code>sklearn</code>) does math processing where the result is a float or an array of float. In <code>unittest.TestCase</code> there is a method for comparing float values called <a href="https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertAlmostEqual" rel="nofollow noreferrer"><code>assertAlmostEqual</code></a> but this does not test for significant digits</p> <p>Checking the significant digits is important because:</p> <ul> <li>some values are going to be in the range of 1e-9 and so looking at the N digits (i.e. 0.0000000) after the decimal point will not help here because all the minimum values start at 1e-9</li> </ul> <h2>How do you test N significant digits for a float value?</h2>
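One hedged way to express "agree to N significant digits" is a relative tolerance with <code>math.isclose</code> (for arrays, <code>numpy.testing.assert_allclose</code> with <code>rtol</code> plays the same role); the helper names below are my own:

```python
import math

def round_sig(x: float, n: int) -> float:
    """Round x to n significant digits."""
    if x == 0:
        return 0.0
    return round(x, n - 1 - math.floor(math.log10(abs(x))))

def assert_n_sig_digits(a: float, b: float, n: int) -> None:
    # Agreement to n significant digits is roughly a relative error
    # below 0.5 * 10**(1 - n), independent of magnitude.
    if not math.isclose(a, b, rel_tol=0.5 * 10 ** (1 - n)):
        raise AssertionError(f"{a!r} != {b!r} to {n} significant digits")

assert_n_sig_digits(1.23456e-9, 1.23457e-9, 5)  # magnitude does not matter
print(round_sig(0.000123456, 3))
```

Because the tolerance is relative, values around 1e-9 are treated the same as values around 1e+9, which is exactly what `assertAlmostEqual`'s fixed decimal-places check cannot do.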
<python><python-unittest>
2023-10-11 11:48:58
2
19,456
Trevor Boyd Smith
77,272,655
5,814,943
Why is a `lambdef` allowed as a type-hint for variables?
<p>The Python <a href="https://docs.python.org/3.11/reference/grammar.html" rel="nofollow noreferrer">grammar</a> has this rule:</p> <pre><code>assignment: | NAME ':' expression ['=' annotated_rhs ] # other options for rule omitted </code></pre> <p>while the <code>expression</code> rules permit a lambda definition (<code>lambdef</code>).</p> <p>That means this python syntax is valid:</p> <pre class="lang-py prettyprint-override"><code>q: lambda p: p * 4 = 1 </code></pre> <p>Is there a use case for permitting a lambda there, or is this just a quirk of a somewhat loose grammar? Similarly, this allows conditional types <code>a: int if b &gt; 3 else str = quux</code>, which seems a bit more sane but still unexpected.</p>
<python><python-3.x><grammar>
2023-10-11 11:31:45
1
1,445
Max
77,272,637
13,890,967
gurobipy: update tupledict implementation
<p>I am using <code>tupledict</code> from <code>gurobipy</code> and I really like the <code>select</code> function. However, it is limited to values, not conditional expressions.</p> <p>My tupledict of variables has as key a tuple <code>(r,t)</code>, where <code>r</code> is an object that stands for a route and has 2 main attributes, nodes and edges (for simplicity I just mention these two), and <code>t</code> is an integer representing the time period. I am doing a branch and price algorithm and I have to branch on the number of visits to a node, meaning that I will have to iterate over my <code>tupledict</code> of variables to see whether a route <code>r</code> contains a node or not. Other times I have to check whether a route contains an edge. These iterations are quite expensive, and I was wondering if it is possible to know how the <code>select</code> method is built, as it is quite fast when selecting a <code>t</code> for instance. I am willing to write my own <code>tupledict</code> class using Cython and write functions like <code>select</code> for each of the branching rules I have; however, I do not seem to find the implementation of <code>tupledict</code>. I hope it is not a &quot;secret&quot;.</p>
<python><cython><gurobi>
2023-10-11 11:29:15
1
305
sos
77,272,595
7,980,206
Use a tiebreaker containing NaN values when calculating rank in pandas
<p>I have a pandas dataframe with a weighted score column and a tiebreaker column. There are <code>NaN</code> values in the tiebreaker column. The data frame looks like -</p> <pre><code>Name Weighted_Score(%) tie_breaker A 12.0 2.7 B 13.0 2.8 C 14.0 NaN D 14.0 3.2 </code></pre> <p>Now I want to calculate the rank based on Weighted_Score(%), and if Weighted_Score(%) is the same, then use the tiebreaker.</p> <p>In my case, the following code works fine as long as there are no &quot;NaN&quot; values in the tie_breaker column.</p> <pre><code>df['Rank'] = df[['Weighted_Score(%)', 'tie_breaker']].apply(tuple, axis=1).rank(method='dense', ascending=True, na_option='bottom').astype('int') </code></pre> <p>The above gives the wrong rank.</p> <pre><code>Name Weighted_Score(%) tie_breaker Rank A 12.0 2.7 1 B 13.0 2.8 2 C 14.0 NaN 3 D 14.0 3.2 4 </code></pre> <p>I tried converting the tie_breaker NaN values to 0.0, but nothing changed.</p>
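A hedged workaround: <code>NaN</code> compares as neither smaller nor larger than anything, so tuples containing it sort unpredictably and <code>na_option</code> never sees the NaN once it is hidden inside a tuple. Substituting an explicit sentinel before building the tuple key makes the order deterministic — <code>+inf</code> makes a missing tiebreaker rank after its tied peers (mimicking <code>na_option='bottom'</code>), <code>-inf</code> does the opposite:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Name": ["A", "B", "C", "D"],
    "Weighted_Score(%)": [12.0, 13.0, 14.0, 14.0],
    "tie_breaker": [2.7, 2.8, np.nan, 3.2],
})

# Replace NaN with +inf so a missing tiebreaker ranks after its tied peers;
# use -np.inf instead if a missing tiebreaker should win the tie.
key = pd.Series(
    list(zip(df["Weighted_Score(%)"], df["tie_breaker"].fillna(np.inf))),
    index=df.index,
)
df["Rank"] = key.rank(method="dense", ascending=True).astype(int)

print(df["Rank"].tolist())  # [1, 2, 4, 3]
```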
<python><pandas>
2023-10-11 11:21:52
1
717
ggupta
77,272,542
14,695,308
How to correctly split DLT-log file into multiple files
<p>I'm trying to develop a <code>dlt-analyzer</code> that would check newly generated logs &quot;on the fly&quot;.</p> <p>For this purpose I run the <code>dlt-receive</code> that outputs all the logs into <em>main.dlt</em> file. Then using Python code I split the logs into 16kB chunks with <code>readlines</code> method and put each chunk subsequently into <em>temp.dlt</em>:</p> <pre><code>def read_in_chunks(file_object): while True: data = file_object.readlines(16384) yield data with open('main.dlt', 'rb') as f: for chunks in read_in_chunks(f): with open('temp.dlt', 'wb') as temp_dlt: for chunk in chunks: temp_dlt.write(chunk) </code></pre> <p>Then I run <code>dlt-viewer -s -csv -f &lt;FILTER NAME&gt; -c temp.dlt results.csv</code> to get filtered results. But in most cases it doesn't work (<em>results.csv</em> file appears empty) as it seems that logs from <em>main.dlt</em> copied to <em>temp.dlt</em> ignoring dlt-headers and so <code>dlt-viewer</code> unable to correctly parse logs... Is there a way to split DLT file with preserving message headers? Or can I somehow add missed headers automatically?</p>
<python><binaryfiles><dlt-daemon><dlt>
2023-10-11 11:16:15
1
720
DonnyFlaw
77,272,418
15,283,686
Python: Why does `import` not work in this case (inside `exec`)?
<p>Sorry but the situation a bit complicated that I can't describe it clearly in the title.</p> <p>So this is the script to be imported later in <code>exec</code>:</p> <pre class="lang-py prettyprint-override"><code># script_to_be_imported.py def f(): print(&quot;Hello World!&quot;) </code></pre> <p>And this is my main script:</p> <pre class="lang-py prettyprint-override"><code># main.py script = &quot;&quot;&quot; import script_to_be_imported def function_to_use_the_imported_script(): script_to_be_imported.f() function_to_use_the_imported_script() &quot;&quot;&quot; def function_invokes_exec(): exec(script) function_invokes_exec() </code></pre> <p>I am using <code>Python 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)] on win32</code>, and it tells me that:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\yueyinqiu\Documents\MyTemporaryFiles\stackoverflow\importInExec\main.py&quot;, line 16, in &lt;module&gt; function_invokes_exec() File &quot;C:\Users\yueyinqiu\Documents\MyTemporaryFiles\stackoverflow\importInExec\main.py&quot;, line 13, in function_invokes_exec exec(script) File &quot;&lt;string&gt;&quot;, line 6, in &lt;module&gt; File &quot;&lt;string&gt;&quot;, line 4, in function_to_use_the_imported_script NameError: name 'script_to_be_imported' is not defined </code></pre> <p>But when I make some small changes which I think they are unrelated, it could work correctly.</p> <p>For example, it works when <code>exec</code> is invoked outside the function:</p> <pre class="lang-py prettyprint-override"><code># main.py script = &quot;&quot;&quot; import script_to_be_imported def function_to_use_the_imported_script(): script_to_be_imported.f() function_to_use_the_imported_script() &quot;&quot;&quot; exec(script) </code></pre> <p>and also works when:</p> <pre class="lang-py prettyprint-override"><code># main.py script = &quot;&quot;&quot; import script_to_be_imported script_to_be_imported.f() &quot;&quot;&quot; def 
function_invokes_exec(): exec(script) function_invokes_exec() </code></pre> <p>It even works when a value is passed as <code>globals</code>, although it's just an empty dictionary:</p> <pre class="lang-py prettyprint-override"><code># main.py script = &quot;&quot;&quot; import script_to_be_imported def function_to_use_the_imported_script(): script_to_be_imported.f() function_to_use_the_imported_script() &quot;&quot;&quot; def function_invokes_exec(): exec(script, {}) function_invokes_exec() </code></pre> <p>So have I misunderstood something? Or is it a bug in Python?</p>
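This is documented behaviour rather than a bug: inside a function, <code>exec(script)</code> uses the function's locals as the execution namespace, so the <code>import</code> binds the module name there — but the nested <code>def</code> compiles its name lookups as *global* loads, and a function cannot see <code>exec</code>-created locals. Passing one explicit dict as globals makes the import and the lookup share a namespace. A self-contained sketch (the fake module stands in for <code>script_to_be_imported.py</code>):

```python
import sys
import types

# Fake stand-in for script_to_be_imported.py so the sketch runs alone.
mod = types.ModuleType("script_to_be_imported")
exec("def f():\n    return 'Hello World!'", mod.__dict__)
sys.modules["script_to_be_imported"] = mod

script = """
import script_to_be_imported

def function_to_use_the_imported_script():
    # compiled as a *global* lookup; it cannot see exec()'s locals
    return script_to_be_imported.f()

result = function_to_use_the_imported_script()
"""

def function_invokes_exec():
    ns = {}           # one dict serves as both globals and locals
    exec(script, ns)  # import and lookup now share this namespace
    return ns["result"]

print(function_invokes_exec())  # Hello World!
```

This also explains the variants that worked: at module level, globals and locals are the same dict anyway, and passing `{}` to `exec` gives the script its own consistent namespace.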
<python>
2023-10-11 11:01:02
1
453
yueyinqiu
77,272,326
7,274,343
Remove red highlighted text from Jupyter notebook from an API call
<p>How do I remove the red highlighted text in Jupyter Notebook which relays the whole API request information when called?</p> <p><a href="https://i.sstatic.net/RgKq3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RgKq3.png" alt="enter image description here" /></a></p> <p>I have tried</p> <pre><code>import warnings warnings.filterwarnings('ignore') warnings.simplefilter('ignore') </code></pre> <p>but this does not solve the problem.</p>
<python><jupyter-notebook>
2023-10-11 10:49:08
2
329
homelessmathaddict
77,272,309
12,959,241
Get Countries from specific continent using Faker Python
<p>I am generating a random country name using the <code>Faker</code> API, but I want the country to come from a specific continent (European or American) and couldn't find a parameter for that. I tried exploring the available locales but couldn't link the two to get a country from a specific continent. Any idea how?</p> <pre><code>from faker.config import AVAILABLE_LOCALES from faker import Faker print([local for local in AVAILABLE_LOCALES]) # just to check the available locales fake = Faker() print(fake.country()) # want a country from a specific continent </code></pre>
<python><python-3.x><faker>
2023-10-11 10:45:38
0
675
alphaBetaGamma
77,272,196
583,464
model did not return a loss / BertForQuestionAnswering.forward() got an unexpected keyword argument 'labels'
<p>I have this data:</p> <p><code>intents.json</code>:</p> <pre><code>{&quot;version&quot;: &quot;0.1.0&quot;, &quot;data&quot;: [ {&quot;id&quot;: &quot;hi&quot;, &quot;question&quot;: [&quot;hi&quot;, &quot;how are you&quot;], &quot;answers&quot;: [&quot;hi!&quot;, &quot;how can i help you?&quot;], &quot;context&quot;: &quot;&quot; }, {&quot;id&quot;: &quot;bye&quot;, &quot;question&quot;: [&quot;Bye&quot;, &quot;good bye&quot;, &quot;see you&quot;], &quot;answers&quot;: [&quot;see you later&quot;, &quot;have a nice day&quot;, &quot;bye&quot;, &quot;thanks for visiting&quot;], &quot;context&quot;: &quot;&quot; }, {&quot;id&quot;: &quot;weather&quot;, &quot;question&quot;: [&quot;how is the weather&quot;, &quot;weather forecast&quot;, &quot;weather&quot;], &quot;answers&quot;: [&quot;weather is good&quot;, &quot;we have 25 degrees&quot;], &quot;context&quot;: &quot;&quot; } ] } </code></pre> <p>and I am trying to build a question answer bot.</p> <p>I am using this code:</p> <pre><code>from datasets import load_dataset import datasets from transformers import AutoTokenizer, AutoModel, TrainingArguments,\ Trainer, AutoModelForQuestionAnswering, DefaultDataCollator, \ DataCollatorForLanguageModeling MAX_LENGTH = 128 tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') def preprocess_func(x): return tokenizer(x[&quot;id&quot;], padding='max_length', truncation=True, max_length=MAX_LENGTH) train = load_dataset('json', data_files='intents.json', field='data', split='train[:80%]') test = load_dataset('json', data_files='intents.json', field='data', split='train[80%:]') data = datasets.DatasetDict({&quot;train&quot;:train, &quot;test&quot;: test}) tokenized = data.map(preprocess_func, batched=True) #data_collator = DefaultDataCollator() data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True ) device = &quot;cpu&quot; model = AutoModelForQuestionAnswering.from_pretrained('bert-base-uncased') model = model.to(device) training_args = 
TrainingArguments( output_dir=&quot;./results&quot;, evaluation_strategy=&quot;epoch&quot;, learning_rate=2e-5, per_device_train_batch_size=2, num_train_epochs=2, weight_decay=0.01, ) trainer = Trainer( model=model, args=training_args, train_dataset=tokenized[&quot;train&quot;], tokenizer=tokenizer, data_collator=data_collator, ) trainer.train() </code></pre> <p>and I am receiving:</p> <p><code>BertForQuestionAnswering.forward() got an unexpected keyword argument 'labels'</code></p> <p>but I don't have any labels in the data:</p> <pre><code>tokenized DatasetDict({ train: Dataset({ features: ['context', 'id', 'question', 'answers', 'input_ids', 'token_type_ids', 'attention_mask'], num_rows: 2 }) test: Dataset({ features: ['context', 'id', 'question', 'answers', 'input_ids', 'token_type_ids', 'attention_mask'], num_rows: 1 }) }) </code></pre> <p>If I use :</p> <p><code>DefaultDataCollator()</code> instead of <code>DataCollatorForLanguageModeling</code>, I receive:</p> <p><code>The model did not return a loss from the inputs, only the following keys: start_logits,end_logits</code></p> <p>I am not sure if the <code>preprocess_func</code> needs more things to do.</p> <p>Like, for example <a href="https://medium.datadriveninvestor.com/lets-build-an-ai-powered-question-answering-system-with-huggingface-transformers-2622d8de18e9" rel="nofollow noreferrer">here</a></p> <pre><code>def preprocess_function(examples): questions = [q.strip() for q in examples[&quot;question&quot;]] inputs = tokenizer( questions, examples[&quot;context&quot;], max_length=512, truncation=&quot;only_second&quot;, return_offsets_mapping=True, padding=&quot;max_length&quot;, ) offset_mapping = inputs.pop(&quot;offset_mapping&quot;) answers = examples[&quot;answers&quot;] start_positions = [] end_positions = [] for i, offset in enumerate(offset_mapping): answer = answers[i] start_char = answer[&quot;answer_start&quot;][0] end_char = answer[&quot;answer_start&quot;][0] + 
len(answer[&quot;text&quot;][0]) sequence_ids = inputs.sequence_ids(i) # Find the start and end of the context idx = 0 while sequence_ids[idx] != 1: idx += 1 context_start = idx while sequence_ids[idx] == 1: idx += 1 context_end = idx - 1 # If the answer is not fully inside the context, label it (0, 0) if offset[context_start][0] &gt; end_char or offset[context_end][1] &lt; start_char: start_positions.append(0) end_positions.append(0) else: # Otherwise it's the start and end token positions idx = context_start while idx &lt;= context_end and offset[idx][0] &lt;= start_char: idx += 1 start_positions.append(idx - 1) idx = context_end while idx &gt;= context_start and offset[idx][1] &gt;= end_char: idx -= 1 end_positions.append(idx + 1) inputs[&quot;start_positions&quot;] = start_positions inputs[&quot;end_positions&quot;] = end_positions return inputs </code></pre>
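Both errors come from missing span labels rather than extra ones: <code>AutoModelForQuestionAnswering</code> computes its loss from <code>start_positions</code> and <code>end_positions</code>, which the first <code>preprocess_func</code> never creates, while <code>DataCollatorForLanguageModeling</code> injects masked-LM <code>labels</code> that <code>BertForQuestionAnswering.forward()</code> rejects. The longer <code>preprocess_function</code> quoted above has the right shape, but it assumes SQuAD-style <code>answer_start</code>/<code>text</code> fields that intents.json lacks, so the data would need restructuring first (for canned answers per intent, an intent classifier such as <code>AutoModelForSequenceClassification</code> may honestly be a better fit). The core character-to-token mapping it performs can be sketched without any Transformers dependency:

```python
def char_span_to_token_span(offset_mapping, start_char, end_char):
    """Map a character-level answer span to token indices via the
    tokenizer's offset_mapping, as preprocess_function above does.
    Returns (0, 0), the conventional "no answer in this window" label,
    when the span does not fall inside the tokenized text.

    Minimal sketch: real tokenizer output also contains special tokens
    whose offsets must be skipped, as the sequence_ids logic above does.
    """
    start_token = end_token = None
    for i, (tok_start, tok_end) in enumerate(offset_mapping):
        if tok_start <= start_char < tok_end and start_token is None:
            start_token = i
        if tok_start < end_char <= tok_end:
            end_token = i
    if start_token is None or end_token is None:
        return 0, 0
    return start_token, end_token
```

The returned pair is what goes into <code>inputs["start_positions"]</code> and <code>inputs["end_positions"]</code>; with those present, the <code>DefaultDataCollator</code> path produces a loss.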
<python><machine-learning><deep-learning><nlp><huggingface-transformers>
2023-10-11 10:29:01
1
5,751
George
77,272,131
10,722,752
Lambda function returns different output from direct code
<p>I am checking for a condition of the difference between two values is 0.5 <strong>AND</strong> if they occurred on different dates, then it's a flag.</p> <p>Sample Data:</p> <pre><code>df = pd.DataFrame({'date1' : ['2023-05-11', '2023-02-24', '2023-07-9', '2023-01-19', '2023-02-10'], 'date2' : ['2023-05-11', '2023-02-24', '2023-07-8', '2023-01-17', '2023-02-10'], 'value1' : [9.11, .12, 49.1, 2.25, 6.22], 'value2' : [2.12, .86, 0.03, .17, 4.71]}) df date1 date2 value1 value2 0 2023-05-11 2023-05-11 9.11 2.12 1 2023-02-24 2023-02-24 0.12 0.86 2 2023-07-09 2023-07-08 49.1 0.03 3 2023-01-19 2023-01-17 2.25 0.17 4 2023-02-10 2023-02-10 6.22 4.71 df['date1'] = pd.to_datetime(df['date1']) df['date2'] = pd.to_datetime(df['date2']) </code></pre> <p>When I try with <code>apply</code> function:</p> <pre><code>df.apply(lambda x : 'yes' if (abs(x['value1'] - x['value2']) &gt; .5) &amp; (x['date1'].date != x['date2'].date) else 'no', axis = 1) 0 yes 1 yes 2 yes 3 yes 4 yes dtype: object </code></pre> <p>Without <code>apply</code> function:</p> <pre><code>(abs(df['value1'] - df['value2']) &gt; .5) &amp; (df['date1'].dt.date != df['date2'].dt.date) 0 False 1 False 2 True 3 True 4 False dtype: bool </code></pre> <p>As we can see above, the direct approach without <code>apply</code> function is giving expected output, whereas the apply function is not. Could you please let me know why is that the case.</p>
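The culprit is <code>x['date1'].date</code> without parentheses: that expression is a bound method object, not a date. On CPython 3.8+ bound methods compare by the identity of their instances, so <code>x['date1'].date != x['date2'].date</code> is True on every row, and since every sample row also has <code>abs(value1 - value2) &gt; 0.5</code>, every row becomes 'yes'. Calling the method (<code>x['date1'].date() != x['date2'].date()</code>) fixes the lambda, though the vectorized form without <code>apply</code> remains the better approach. A stdlib demonstration:

```python
from datetime import datetime

a = datetime(2023, 5, 11)
b = datetime(2023, 5, 11)

# Without parentheses we compare two distinct bound-method objects
# (compared by instance identity on CPython 3.8+), so this is True
# even though the underlying dates are equal:
assert (a.date != b.date) is True

# Calling the methods compares the actual dates:
assert (a.date() != b.date()) is False
```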
<python><pandas>
2023-10-11 10:19:39
2
11,560
Karthik S
77,272,024
3,002,166
Tensorflow - training custom model on dataset
<p>I feel it's something minor I'm missing, but can't seem to figure out what it is. I'm trying to train a very simple model on <strong>cassava</strong> dataset, but when I call the <code>fit</code> function, the input name doesn't match the expected name. I tried naming the input layer to match the model, but tf insists on appending _input to the layer name causing conflict. I'm sure it's a fairly typical use-case for the tfds and it must be something trivial.</p> <p>The Error:</p> <pre><code>ValueError: Missing data for input &quot;flatten_input&quot;. You passed a data dictionary with keys ['image', 'image/filename', 'label']. Expected the following keys: ['flatten_input'] </code></pre> <p>I borrowed the viewing code from a github project and that one definitely works as I can view the loaded data.</p> <pre><code># tensorflow 2.x core api import logging from mlflow.models import infer_signature import numpy as np import tensorflow as tf import tensorflow_datasets as tfds from tensorflow import keras as K logging.basicConfig(level=logging.INFO) logger = logging.getLogger(__name__) ############################################################################################################# from matplotlib import pyplot as plt def plot(examples, predictions=None): # Get the images, labels, and optionally predictions images = examples['image'] labels = examples['label'] batch_size = len(images) if predictions is None: predictions = batch_size * [None] # Configure the layout of the grid x = np.ceil(np.sqrt(batch_size)) y = np.ceil(batch_size / x) fig = plt.figure(figsize=(x * 6, y * 7)) for i, (image, label, prediction) in enumerate(zip(images, labels, predictions)): # Render the image ax = fig.add_subplot(int(x), int(y), i+1) ax.imshow(image, aspect='auto') ax.grid(False) ax.set_xticks([]) ax.set_yticks([]) # Display the label and optionally prediction x_label = 'Label: ' + name_map[class_names[label]] if prediction is not None: x_label = 'Prediction: ' + 
name_map[class_names[prediction]] + '\n' + x_label ax.xaxis.label.set_color('green' if label == prediction else 'red') ax.set_xlabel(x_label) plt.show() # dataset, info = tfds.load('cassava', with_info=True) dataset, info = tfds.load(&quot;cassava&quot;, shuffle_files=True, with_info=True) print(&quot;INFO:\n&quot;, info) # Extend the cassava dataset classes with 'unknown' class_names = info.features['label'].names + ['unknown'] # Map the class names to human readable names name_map = dict( cmd='Mosaic Disease', cbb='Bacterial Blight', cgm='Green Mite', cbsd='Brown Streak Disease', healthy='Healthy', unknown='Unknown') print(len(class_names), 'classes:') print(class_names) print([name_map[name] for name in class_names]) def preprocess_fn(data): image = data['image'] # Normalize [0, 255] to [0, 1] image = tf.cast(image, tf.float32) image = image / 255. # Resize the images to 224 x 224 image = tf.image.resize(image, (224, 224)) data['image'] = image return data def create_model(type=&quot;default&quot;, n_classes=6): if type == &quot;something&quot;: pass else: model = K.Sequential() # naming the below layer with name argument still appends the _input to the actually name model.add(K.layers.Flatten(input_shape=(244, 244, 3))) model.add(K.layers.Dense(512, activation=&quot;relu&quot;)) model.add(K.layers.Dense(256, activation=&quot;relu&quot;)) model.add(K.layers.Dense(128, activation=&quot;relu&quot;)) model.add(K.layers.Dense(64, activation=&quot;relu&quot;)) model.add(K.layers.Dense(n_classes, activation=&quot;softmax&quot;)) model.compile(loss='sparse_categorical_crossentropy', optimizer=K.optimizers.Adam(0.01), metrics=['accuracy']) return model # batch = dataset['validation'].map(preprocess_fn).batch(25).as_numpy_iterator() # examples = next(batch) # plot(examples) print(tf.__version__) model = create_model() model.fit(dataset[&quot;train&quot;], epochs=5) </code></pre>
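<code>tfds.load</code> yields dictionaries with keys <code>image</code>, <code>image/filename</code> and <code>label</code>, but <code>Sequential.fit</code> wants <code>(features, labels)</code> pairs, hence "Missing data for input flatten_input". A likely fix (untested here) is <code>model.fit(dataset["train"].map(preprocess_fn).map(to_supervised).batch(32), epochs=5)</code>; note also that the model is built for <code>(244, 244, 3)</code> while <code>preprocess_fn</code> resizes to 224×224, so one of those numbers is a typo. The mapping itself, shown on a plain-dict stand-in for a tfds example:

```python
def to_supervised(example):
    """Turn a tfds feature dict into the (features, labels) pair Keras
    expects; with tf.data this would be applied via dataset.map(to_supervised)."""
    return example["image"], example["label"]

# Plain-dict stand-in for one tfds example:
example = {"image": [[0.1, 0.2], [0.3, 0.4]],
           "image/filename": "leaf.jpg",
           "label": 2}
x, y = to_supervised(example)
```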
<python><keras><tensorflow2.0>
2023-10-11 10:03:35
1
740
user3002166
77,271,900
12,965,658
Fetch the latest 2 extreme subfolders from s3 bucket using python
<p>I have an S3 bucket which has multiple integrations.</p> <p>I want to read the files from the latest 2 extreme subfolders.</p> <p><a href="https://i.sstatic.net/QPyCJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QPyCJ.png" alt="enter image description here" /></a></p> <p>I want to read all files from 2023/1/30/ and 2023/1/31/</p> <pre><code>import boto3 bucket_name = 'Bucket' prefix = 'Facebook/Ad/' s3_conn = boto3.client(&quot;s3&quot;) response = s3_conn.list_objects_v2(Bucket=bucket_name, Prefix=prefix) objects = sorted(response['Contents'], key=lambda x: x['LastModified'], reverse=True) for obj in objects[:2]: subfolder = obj['Key'] print(f&quot;Subfolder: {subfolder}&quot;) </code></pre> <p>But this gives me the latest 2 files from the last subfolder:</p> <pre><code>2023/1/31/file12 2023/1/31/file13 </code></pre> <p>How can I read all files from the last 2 subfolders? Also, I do not want to hard-code things as the level of subfolders might increase. I need to somehow find the latest 2 subfolders at the deepest level and fetch all files from them.</p>
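S3 has no real folders, so sorting objects by <code>LastModified</code> finds the newest objects, not the newest "subfolders". One approach is to group the keys by their date components and compare those components numerically (lexicographic order would put <code>2023/1/4</code> after <code>2023/1/31</code>). A sketch, assuming the date parts sit directly under the listed prefix; for more than 1000 objects, the keys would come from <code>s3_conn.get_paginator("list_objects_v2")</code> rather than a single call:

```python
def latest_prefixes(keys, base="", depth=3, n=2):
    """Return the n newest date prefixes under `base`, grouping keys by
    their first `depth` path components after the base and comparing
    those components as integers (year, month, day)."""
    prefixes = set()
    for key in keys:
        if base and key.startswith(base):
            key = key[len(base):]
        parts = key.split("/")
        if len(parts) > depth:            # only keys that have a file after the date
            prefixes.add(tuple(parts[:depth]))
    newest = sorted(prefixes,
                    key=lambda p: tuple(int(x) for x in p),
                    reverse=True)[:n]
    return [base + "/".join(p) + "/" for p in newest]

keys = ["Facebook/Ad/2023/1/4/file01",
        "Facebook/Ad/2023/1/30/file10",
        "Facebook/Ad/2023/1/31/file12",
        "Facebook/Ad/2023/1/31/file13"]
latest = latest_prefixes(keys, base="Facebook/Ad/")
files = [k for k in keys if any(k.startswith(p) for p in latest)]
```

The fixed <code>depth=3</code> is an assumption; if the folder depth grows, it could instead be derived from the maximum number of path components seen under the base prefix.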
<python><python-3.x><amazon-s3><boto3>
2023-10-11 09:45:42
1
909
Avenger
77,271,897
3,125,592
Activating a pyenv virtual environment with direnv
<p>I use both <code>direnv</code> and <code>pyenv</code> and would like to add something to my <code>.envrc</code> so that whenever I change directory, I also activate my virtual environment with pyenv.</p> <p>I printed environment variables both when my virtualenv is active and also when it is not. There were a few pyenv variables set, which I added to my <code>.envrc</code> (see below). I was hoping these would activate my pyenv virtual environment upon changing to the directory, but it didn't work.</p> <p>I'll keep poking at this and trying to sort it out. If I find the answer, I'll update the question with the answer. In the meantime, I'm curious if anyone else has configured <code>direnv</code> so that a virtual environment is loaded upon <code>cd</code>'ing to a directory. If so, would you mind sharing how you did it?</p> <p>** DID NOT WORK WHEN ADDED TO .envrc**</p> <pre><code>PYENV_VERSION=ds PYENV_ACTIVATE_SHELL=1 PYENV_VIRTUAL_ENV=/Users/evan/.pyenv/versions/3.10.4/envs/ds VIRTUAL_ENV=/Users/evan/.pyenv/versions/3.10.4/envs/ds </code></pre>
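Exporting the <code>PYENV_*</code> variables alone does nothing, because "activation" is really just <code>VIRTUAL_ENV</code> plus the env's <code>bin</code> directory at the front of <code>PATH</code>, and direnv can set both directly. A minimal <code>.envrc</code> sketch (the env path follows the usual pyenv layout and is an assumption; recent direnv versions also ship a <code>layout pyenv</code> stdlib function worth checking):

```shell
# .envrc -- activate the pyenv virtualenv "ds" by construction
export PYENV_VERSION=ds
export VIRTUAL_ENV="$HOME/.pyenv/versions/3.10.4/envs/ds"
PATH_add "$VIRTUAL_ENV/bin"   # direnv helper; plain-shell equivalent: PATH="$VIRTUAL_ENV/bin:$PATH"
```

After editing, run <code>direnv allow</code> so the file is trusted and re-evaluated on <code>cd</code>.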
<python><virtualenv><pyenv><direnv>
2023-10-11 09:45:02
2
2,961
Evan Volgas
77,271,830
3,446,051
Running Python Script inside AWS RDS
<p>We have an SSIS Project on our local server. Somewhere in the project there is a <code>Process Task</code> which calls a python script on the server which runs a machine learning project to make some prediction based on data in the SQL Server database.<br /> Now we want to migrate this local SSIS Project onto AWS. The problem is that according to the <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.Options.SSIS.html" rel="nofollow noreferrer">supported tasks in AWS SSIS on RDS</a> Process Task is not supported. Our Process Task (which calls the python script) is between other tasks which are dependent on the Python process.<br /> One option is to use AWS Glue instead of SSIS which is not an option for us as it requires much development effort as our SSIS project is quite huge.<br /> Another option is to use AWS Lambda for the Python Task, but as I said the python process task is located between other tasks in SSIS and the other tasks depend on the python task and should not run before the Python Process Task is finished. So the only solution that comes into my mind is the following:</p> <ol> <li>Migrate the SSIS packages (minus the Process Task) into AWS.</li> <li>Create an AWS Lambda which calls the Python script.</li> <li>When the SSIS control flow reaches the point where the python package should be called, somehow the AWS Lambda is triggered.</li> <li>The SSIS control flow remains in a loop which looks for a flag in the database.</li> <li>After AWS Lambda is finished, the flag in db is set.</li> <li>The SSIS control flow exits the loop and continues with the remaining tasks.</li> </ol> <p>Is there a better solution for this problem? Is this solution correct?</p>
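Steps 4 through 6 describe a bounded poll, which is simple to express; the sketch below is generic, with <code>check_flag</code> standing in for the actual <code>SELECT</code> against the flag table that the Lambda updates (a hypothetical helper, not a defined API). An alternative worth evaluating is invoking the Lambda synchronously from the control flow (for example via the AWS CLI from a SQL Agent job step) so that no flag or loop is needed.

```python
import time

def wait_for_flag(check_flag, timeout_s=1800.0, poll_interval_s=5.0):
    """Block until check_flag() returns True or the timeout expires.

    check_flag would typically run a SELECT against the flag row the
    Lambda sets when the Python job finishes. Returns False on timeout
    so the caller can fail the package run explicitly.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check_flag():
            return True
        time.sleep(poll_interval_s)
    return False
```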
<python><sql-server><amazon-web-services><ssis>
2023-10-11 09:35:06
0
5,459
Code Pope
77,271,757
4,004,541
VideoCapture frames images corrupted: shifting one of the channels on random frames
<p>The camera works correctly, when I use the software to verify the camera I see no corrupted frames so I assume that this is issues are coming from OpenCV.</p> <p>I notice that random frames (one out of 10-20 frames) are corrupted and one of the channels is shifted. Example below.</p> <p><a href="https://i.sstatic.net/lHUOL.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lHUOL.jpg" alt="enter image description here" /></a></p> <p>I am running the camera code as a service that runs in the background so any other application can obtain the latest frame and make use of it without having to run the read frame loop and work with asynchronous code.</p> <pre><code>import threading import time import cv2 as cv import numpy as np class CameraCU81(): def __init__(self, W=1920, H=1080, hz=30): self.cap = cv.VideoCapture(0) self.last_frame = None # if recording mjgp the frames hangs... #self.cap.set(cv.CAP_PROP_FOURCC, cv.VideoWriter_fourcc('M', 'J', 'P', 'G')) print(str(cv.VideoWriter_fourcc('M', 'J', 'P', 'G'))) self.cap.set(cv.CAP_PROP_FRAME_WIDTH, W) self.cap.set(cv.CAP_PROP_FRAME_HEIGHT, H) self.cap.set(cv.CAP_PROP_FPS, hz) print('Starting camera 81 fps at: ' + str(self.cap.get(cv.CAP_PROP_FPS))) w = str(self.cap.get(cv.CAP_PROP_FRAME_WIDTH)) h = str(self.cap.get(cv.CAP_PROP_FRAME_HEIGHT)) print('Starting camera 81 resolution at: ' + w + ' x ' + h) format = str(self.cap.get(cv.CAP_PROP_FOURCC)) print('Starting camera 81 format: ' + format) def __capture_frames(self): error_f = False while True: start_time = time.time() ret, frame = self.cap.read() if not ret: timeout_time = (time.time() - start_time) print('Frame could not be read ... 
is camera connected?') print(timeout_time) error_f = True else: self.last_frame = frame if error_f: timeout_time = (time.time() - start_time) print(timeout_time) def get_data(self): return self.last_frame def destroy(self): self.cap.release() def run(self): t1 = threading.Thread(target=self.__capture_frames) t1.daemon = True t1.start() </code></pre>
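The shifted channel itself usually originates below OpenCV, in the USB bandwidth or the YUYV/MJPG negotiation, so it is worth forcing <code>MJPG</code> before setting resolution and FPS, or lowering the frame rate (suggestions to try, not a confirmed fix). Separately, <code>self.last_frame</code> is published from a daemon thread with no synchronization; since <code>cap.read()</code> allocates a fresh array per call, a small locked holder is enough to make the handoff to other consumers well-defined:

```python
import threading

class LatestFrame:
    """Thread-safe holder for the most recent frame.

    The capture thread calls put(); consumers call get() and can use the
    sequence number to detect whether a new frame has arrived since the
    last read.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None
        self._seq = 0

    def put(self, frame):
        with self._lock:
            self._frame = frame
            self._seq += 1

    def get(self):
        with self._lock:
            return self._seq, self._frame
```

In the service above, <code>self.last_frame = frame</code> would become <code>self.frames.put(frame)</code> and <code>get_data</code> would return <code>self.frames.get()</code>.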
<python><opencv><video-streaming><video-capture>
2023-10-11 09:21:36
1
360
Patrick Vibild
77,271,626
9,644,712
From R array to Numpy array
<p>Lets say, I have a following R array</p> <pre><code>a &lt;- array(1:18, dim = c(3, 3, 2)) r$&gt; a , , 1 [,1] [,2] [,3] [1,] 1 4 7 [2,] 2 5 8 [3,] 3 6 9 , , 2 [,1] [,2] [,3] [1,] 10 13 16 [2,] 11 14 17 [3,] 12 15 18 </code></pre> <p>and now I want to have the same array in Python numpy. I use</p> <pre><code>a = np.arange(1, 19).reshape((3, 3, 2)) array([[[ 1, 2], [ 3, 4], [ 5, 6]], [[ 7, 8], [ 9, 10], [11, 12]], [[13, 14], [15, 16], [17, 18]]]) </code></pre> <p>But somehow, those two do not look like the same. how can one replicate the same array in Python?</p> <p>I also tried</p> <pre><code>a = np.arange(1, 19).reshape((2, 3, 3)) array([[[ 1, 2, 3], [ 4, 5, 6], [ 7, 8, 9]], [[10, 11, 12], [13, 14, 15], [16, 17, 18]]]) </code></pre> <p>which is also not identical.</p>
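R fills arrays column-major (the first index varies fastest), while numpy's default reshape is row-major, which is why neither attempt matches. The numpy equivalent is <code>np.arange(1, 19).reshape((3, 3, 2), order="F")</code>. A dependency-free sketch of the same fill rule:

```python
def r_style_array(values, dims):
    """Fill a (d1, d2, d3) nested list column-major, like R's array():
    element (i, j, k) takes values[i + d1*j + d1*d2*k].
    numpy equivalent: np.array(values).reshape(dims, order="F")."""
    d1, d2, d3 = dims
    return [[[values[i + d1 * j + d1 * d2 * k] for k in range(d3)]
             for j in range(d2)]
            for i in range(d1)]

a = r_style_array(list(range(1, 19)), (3, 3, 2))
# The k=0 slice reproduces R's a[,,1]: [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
```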
<python><r><numpy>
2023-10-11 09:01:56
3
453
Avto Abashishvili
77,271,418
630,971
Pandas.read_csv ParserError '§' expected after '"' with sep = "§"
<p>I have an issue with <code>read_csv</code> and its taking a lot of time to resolve.</p> <p>I am working with texts which have multiple special characters, so I was checking which character isn't in the list of texts and chose § as delimiter while writing the <code>csv</code> files that separates the texts with corresponding IDs.</p> <p>However, while reading the files, I am getting the following error. I could skip the bad lines, but in this case I cannot afford to lose any texts.</p> <p><code>ParserError: '§' expected after '&quot;'</code></p> <p>Writing</p> <pre><code>df.to_csv('20231010.csv', index=False, sep='§', #header=None, quoting=csv.QUOTE_NONE, quotechar=&quot;&quot;, escapechar=&quot; &quot;) </code></pre> <p>Reading</p> <pre><code>data = pd.read_csv('20231010.csv', sep =&quot;§&quot;, encoding='utf-8') </code></pre>
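The asymmetry is the bug: the write side disabled quoting with <code>QUOTE_NONE</code>, but <code>read_csv</code> defaults to <code>QUOTE_MINIMAL</code>, so any literal <code>"</code> in a text opens a quoted field and the parser then insists on seeing <code>§</code> right after it closes. Passing <code>quoting=csv.QUOTE_NONE</code> on the read side too (and a real <code>escapechar</code> such as <code>"\\"</code> instead of <code>" "</code>, which pads the text with spurious spaces) should fix it. The round trip with the stdlib <code>csv</code> module, whose options pandas mirrors:

```python
import csv
import io

rows = [["1", 'text with "quotes" and, commas'],
        ["2", "plain text"]]

buf = io.StringIO()
# QUOTE_NONE never emits quotes; special characters are escaped instead.
writer = csv.writer(buf, delimiter="§", quoting=csv.QUOTE_NONE, escapechar="\\")
writer.writerows(rows)

buf.seek(0)
# The reader must use the same quoting and escapechar to round-trip.
reader = csv.reader(buf, delimiter="§", quoting=csv.QUOTE_NONE, escapechar="\\")
out = list(reader)
```

For pandas, the matching calls would be (a sketch of the same options, not tested here): <code>df.to_csv(..., sep='§', quoting=csv.QUOTE_NONE, escapechar='\\')</code> and <code>pd.read_csv(..., sep='§', quoting=csv.QUOTE_NONE, escapechar='\\', encoding='utf-8')</code>.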
<python><pandas><parse-error><read-csv>
2023-10-11 08:32:34
1
472
sveer
77,270,923
2,571,805
FastAPI return datetime dictionary from endpoint
<p>I'm building an API on FastAPI 0.103.1, under Python 3.11.2.</p> <p>I have an endpoint that returns a dictionary with multiple datetime dictionaries. The format is this:</p> <pre class="lang-py prettyprint-override"><code>{ 'train_dates': { 'start_date': datetime.datetime(2018, 1, 1, 0, 0), 'end_date': datetime.datetime(2018, 1, 25, 0, 0) }, 'test_dates': { 'start_date': datetime.datetime(2018, 1, 25, 0, 0), 'end_date': datetime.datetime(2018, 2, 1, 0, 0) }, 'forecast_dates': { 'start_date': datetime.datetime(2018, 2, 1, 0, 0), 'end_date': datetime.datetime(2018, 2, 14, 0, 0) } } </code></pre> <p>My endpoint returns it like this:</p> <pre class="lang-py prettyprint-override"><code>@router.post(&quot;&quot;) async def post_endpoint(payload: dict): ... return { # the whole thing here } </code></pre> <p>and this works fine, no errors reported. The web app gets all the values, including the dates, properly formatted:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;train_dates&quot;: { &quot;start_date&quot;: &quot;2018-01-01 00:00:00&quot;, &quot;end_date&quot;: &quot;2018-01-25 00:00:00&quot; }, &quot;test_dates&quot;: { &quot;start_date&quot;: &quot;2018-01-25 00:00:00&quot;, &quot;end_date&quot;: &quot;2018-02-01 00:00:00&quot; }, &quot;forecast_dates&quot;: { &quot;start_date&quot;: &quot;2018-02-01 00:00:00&quot;, &quot;end_date&quot;: &quot;2018-02-14 00:00:00&quot; } } </code></pre> <p>However, if I capture the dictionary in a variable and return that variable:</p> <pre class="lang-py prettyprint-override"><code>@router.post(&quot;&quot;) async def post_endpoint(payload: dict): ... 
model_metadata = { # the whole thing here } return model_metadata </code></pre> <p>this will consistently issue an error on the backend:</p> <pre><code>TypeError: Object of type datetime is not JSON serializable </code></pre> <p>For now, my only option is to do this:</p> <pre class="lang-py prettyprint-override"><code> return json.loads(json.dumps(model_metadata, default=str)) </code></pre> <p>which feels like an overkill.</p> <p>Is there any better way of doing this?</p>
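Rather than <code>json.loads(json.dumps(..., default=str))</code>, FastAPI ships <code>fastapi.encoders.jsonable_encoder</code>, which recursively converts datetimes to ISO 8601 strings, so <code>return jsonable_encoder(model_metadata)</code> should behave consistently in both variants. The mechanism it applies is essentially this stdlib hook:

```python
import json
from datetime import datetime

def encode_default(obj):
    """json.dumps fallback for datetimes, mirroring what FastAPI's
    jsonable_encoder does (it emits ISO 8601 strings)."""
    if isinstance(obj, datetime):
        return obj.isoformat()
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

payload = {"train_dates": {"start_date": datetime(2018, 1, 1),
                           "end_date": datetime(2018, 1, 25)}}
text = json.dumps(payload, default=encode_default)
```

A typed response model (Pydantic) with <code>datetime</code> fields is the other idiomatic route, since FastAPI then owns the serialization end to end.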
<python><dictionary><fastapi>
2023-10-11 07:21:00
0
869
Ricardo
77,270,917
7,361,580
Python Turtle detect mouse release event on screen, not a turtle
<p>How do you detect a mouse release event on a turtle screen? There is only <code>onscreenclick</code> but no corresponding release event. Is there something I can do using the tkinter canvas backend?</p>
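Yes: turtle itself exposes no release hook, but the screen is backed by a Tk canvas that does. <code>screen.getcanvas()</code> returns it, and <code>&lt;ButtonRelease-1&gt;</code> can be bound directly. Event coordinates are window pixels with y growing downward, while turtle uses a centred origin with y growing upward, hence the small conversion below; the <code>demo</code> function is illustrative and needs a display to run.

```python
def canvas_to_turtle(cx, cy):
    """Convert canvas-space coordinates (y grows downward) to turtle
    coordinates (centred origin, y grows upward)."""
    return cx, -cy

def demo():
    import turtle
    screen = turtle.Screen()
    canvas = screen.getcanvas()

    def on_release(event):
        # canvasx/canvasy account for any scrolling of the ScrolledCanvas
        x, y = canvas_to_turtle(canvas.canvasx(event.x), canvas.canvasy(event.y))
        print("released at", x, y)

    canvas.bind("<ButtonRelease-1>", on_release)
    screen.mainloop()

if __name__ == "__main__":
    demo()
```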
<python><tkinter><turtle-graphics><tkinter-canvas><python-turtle>
2023-10-11 07:19:12
2
2,115
synchronizer
77,270,861
18,904,265
Why can I only use my package with "from" import?
<p>When installing my package in editable mode (<code>pip install -e .</code>), I can only use its functions using an import with from:</p> <pre class="lang-py prettyprint-override"><code>from package import hello_world hello_world.hello_world() </code></pre> <p>Using only import doesn't work:</p> <pre class="lang-py prettyprint-override"><code>import package package.hello_world.hello_world() </code></pre> <p>This results in <code>NameError: name 'hello_world' is not defined</code></p> <p>Is there something I need to set up in my package, so that I can just import the package?</p>
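<code>import package</code> only executes <code>package/__init__.py</code>; submodules are never imported implicitly, so <code>package.hello_world</code> stays unbound until something imports it. Re-exporting the submodule from <code>__init__.py</code> is the usual fix. A self-contained demonstration that builds a throwaway package on disk:

```python
import pathlib
import sys
import tempfile

# Build a throwaway package on disk to show the difference.
root = pathlib.Path(tempfile.mkdtemp())
pkg = root / "mypkg"
pkg.mkdir()
(pkg / "hello_world.py").write_text("def hello_world():\n    return 'hi'\n")

# `import mypkg` only runs __init__.py; submodules are NOT imported
# automatically. Re-exporting them there is what makes
# `mypkg.hello_world` resolvable after a bare `import mypkg`:
(pkg / "__init__.py").write_text("from . import hello_world\n")

sys.path.insert(0, str(root))
import mypkg

print(mypkg.hello_world.hello_world())  # hi
```

With an empty <code>__init__.py</code>, the last line would fail until some other module ran <code>import mypkg.hello_world</code> first.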
<python><python-packaging>
2023-10-11 07:10:21
1
465
Jan
77,270,788
8,510,149
Chart design in Openpyxl, color axis title
<p>The code below generates a simple barchart using openpyxl.</p> <p><a href="https://i.sstatic.net/40iEK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/40iEK.png" alt="enter image description here" /></a></p> <p>I want to be able to colorize the title on Y-axis. But this I can't find a working solution on. Is there anyone that know how to do that?</p> <p>My target is something simple as this: <a href="https://i.sstatic.net/71qey.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/71qey.png" alt="enter image description here" /></a></p> <pre><code>import openpyxl from openpyxl.chart import BarChart, Reference # Create a workbook and activate a sheet wb = openpyxl.Workbook() sheet = wb.active # insert some categories cell = sheet.cell(row=1, column=1) cell.value = 'Category 1.1' cell = sheet.cell(row=2, column=1) cell.value = 'Category 1.2 - limit' cell = sheet.cell(row=3, column=1) cell.value = 'Category 2' cell = sheet.cell(row=4, column=1) cell.value = 'Category 2.1 - extra' cell = sheet.cell(row=5, column=1) cell.value = 'Category 2.2 - extra2' # insert some values for i in range(5): cell = sheet.cell(row=i+1, column=2) cell.value = i+2 # create chart chart = BarChart() values = Reference(sheet, min_col = 2, min_row = 1, max_col = 2, max_row = 5) bar_categories = Reference(sheet, min_col=1, min_row=1, max_row=5) chart.add_data(values) chart.set_categories(bar_categories) chart.title = &quot; BAR-CHART &quot; chart.legend = None chart.x_axis.title = &quot; X_AXIS &quot; chart.y_axis.title = &quot; Y_AXIS &quot; sheet.add_chart(chart, &quot;E2&quot;) # save the file wb.save(&quot;barChart.xlsx&quot;) </code></pre>
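In the saved workbook the colour lives in the chart part's DrawingML: the axis title is rich text, and a <code>solidFill</code> on the run (or paragraph default) properties colours it. In openpyxl the same result can reportedly be reached by assigning <code>CharacterProperties(solidFill="FF0000")</code> from <code>openpyxl.drawing.text</code> to the title's rich-text properties after setting <code>chart.y_axis.title</code>; treat the exact attribute path as an assumption to verify against your openpyxl version. The target XML fragment (prefixes: <code>c:</code> chart, <code>a:</code> DrawingML) looks roughly like this:

```xml
<c:title>
  <c:tx>
    <c:rich>
      <!-- rot=-5400000 renders the y-axis title rotated 90 degrees -->
      <a:bodyPr rot="-5400000" vert="horz"/>
      <a:p>
        <a:pPr>
          <!-- default run properties: bold red title text -->
          <a:defRPr b="1">
            <a:solidFill><a:srgbClr val="FF0000"/></a:solidFill>
          </a:defRPr>
        </a:pPr>
        <a:r>
          <a:t> Y_AXIS </a:t>
        </a:r>
      </a:p>
    </c:rich>
  </c:tx>
  <c:overlay val="0"/>
</c:title>
```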
<python><excel><xml><openpyxl>
2023-10-11 06:56:09
1
1,255
Henri
77,270,316
580,937
Creating a table from a Snowpark DataFrame by specifying column names and their respective data types
<p>Is it possible to do the following, and if so, how?</p> <ol> <li>Create a table, with the table name and column names and types specified dynamically.</li> <li>Pass in column names and types with parameters</li> </ol>
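Yes on both counts. One route is Snowpark's schema objects: build a <code>StructType</code> of <code>StructField</code>s from parameters, create an empty DataFrame with it, and call <code>df.write.save_as_table(name)</code>. Another is generating the DDL from parameters and running it with <code>session.sql(...).collect()</code>. The generator below is dependency-free; note the caveat that identifiers are interpolated into the SQL text, so only trusted names should be passed:

```python
def create_table_sql(table_name, columns, replace=False):
    """Build a CREATE TABLE statement from {column_name: snowflake_type}.
    In Snowpark you would then run session.sql(create_table_sql(...)).collect().
    Identifiers are quoted but still interpolated: pass trusted names only."""
    verb = "CREATE OR REPLACE TABLE" if replace else "CREATE TABLE"
    cols = ", ".join(f'"{name}" {ctype}' for name, ctype in columns.items())
    return f'{verb} "{table_name}" ({cols})'

sql = create_table_sql("EMPLOYEES",
                       {"ID": "NUMBER", "NAME": "VARCHAR(100)"},
                       replace=True)
```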
<python><dataframe><snowflake-cloud-data-platform>
2023-10-11 05:03:23
1
2,758
orellabac
77,270,304
580,937
Getting Data Locally from an Snowpark DataFrame
<p>Does df.collect() data into local memory? Is this the best approach to load data locally? Are there any tips or best practices to consider in snowpark?</p>
<python><dataframe><snowflake-cloud-data-platform>
2023-10-11 04:59:59
1
2,758
orellabac
77,270,135
5,818,889
Avoid code duplication between django Q expression and as python code
<p>I have a very complex property on a model</p> <pre class="lang-py prettyprint-override"><code>ALLOWED_STATES = {1,2,3} class X(models.Model): a = models.BooleanField() b = models.ForeignKey('Y', on_delete=model.CASCADE) c = models.IntegerField() d = models.IntegerField() @property def can_delete(self) # this goes on for 6 and clause return self.a and self.b.c and self.c in ALLOWED_STATES and self.d != 5 and .. </code></pre> <p>I also need this property in an annotate() call and filter()</p> <pre class="lang-py prettyprint-override"><code>#in one endpoint qs = X.objects.filter(...).annoate(can_delete=ExpressionWrapper(Q(a=True, b__c=True, c__in=ALLOWED_STATES,...) &amp; ~Q(d=5), output_field=models.BooleanField()) </code></pre> <p>I wonder if there is a way to unify these forms of this same property into one, without calling can_delete in python after I've fetched the rows. These two forms have become a bit of a maintainability issue as PM keeps on changing the definition of can_delete.</p> <p>Status update: neither of these answers quite solve the problem. I end up with a custom manager that has a <code>annotate_can_delete()</code> method. At least this will keep the Q expression and python code in the same place, making it easier to maintain.</p>
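One way to stop the drift is to keep the positive lookups in a single dict that both sides consume: the manager builds <code>Q(**CAN_DELETE_LOOKUPS) &amp; ~Q(d=5)</code>, and the property evaluates the same dict against a loaded instance. The evaluator below is a deliberately minimal sketch that understands only <code>exact</code> and <code>in</code> lookups, with negations like <code>d != 5</code> kept as a separate explicit clause (an assumption of this design):

```python
from types import SimpleNamespace

# Single source of truth for the positive conditions.
CAN_DELETE_LOOKUPS = {"a": True, "b__c": True, "c__in": {1, 2, 3}}

def matches(obj, lookups):
    """Evaluate Django-style field lookups against a plain object, so the
    same dict can drive both Q(**lookups) and the Python property."""
    for lookup, expected in lookups.items():
        *path, last = lookup.split("__")
        if last == "in":
            op = "in"
        else:
            op = "exact"
            path.append(last)
        value = obj
        for attr in path:            # follow foreign-key style paths (b__c)
            value = getattr(value, attr)
        if op == "in":
            if value not in expected:
                return False
        elif value != expected:
            return False
    return True

ok = matches(SimpleNamespace(a=True, b=SimpleNamespace(c=True), c=2),
             CAN_DELETE_LOOKUPS)
bad = matches(SimpleNamespace(a=True, b=SimpleNamespace(c=True), c=9),
              CAN_DELETE_LOOKUPS)
```

The property then becomes <code>matches(self, CAN_DELETE_LOOKUPS) and self.d != 5</code>, so a PM-driven change lands in one dict plus one explicit negation instead of two divergent expressions.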
<python><django><django-models>
2023-10-11 03:52:23
2
6,479
glee8e
77,270,112
6,548,223
Does Python have an equivalent of VBA's 'Set' command? (short user-defined reference to manipulate an object)
<p>I have this line which changes the value of this cell to 5:</p> <pre class="lang-py prettyprint-override"><code>df[header].at[row_num] = 5 </code></pre> <p>I was wondering if there's a cleaner/shorter way to refer to <code>df[header].at[row_num]</code>?</p> <p>In VBA there's a 'Set' command that does the same job but in an easier way:</p> <pre class="lang-none prettyprint-override"><code># Note..this 'set' command is borrowed from VBA. # This is not the python 'set' command that applies to sets like {1,2,3} set my_cell = df[header].at[row_num] my_cell = 5 </code></pre> <p>In this example once I make that declaration I don't need to write this long code <code>df[header].at[row_num]</code> anymore since I can just use <code>my_cell = 5</code> to be exactly equivalent to <code>df[header].at[row_num] = 5</code></p>
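There is no direct equivalent, because Python assignment rebinds names rather than writing through them: <code>my_cell = df[header].at[row_num]</code> copies the value out, and <code>my_cell = 5</code> afterwards just rebinds the name. What can be aliased is the container plus the index, for example with a small closure, or with something like <code>functools.partial(df.at.__setitem__, (row_num, header))</code> (an untested sketch of the pandas case). A dependency-free demonstration, with a list standing in for the column:

```python
# A list stands in for the DataFrame column here.
row = [1, 2, 3]

cell = row[0]   # copies the value out; rebinding `cell` cannot touch `row`
cell = 5
assert row[0] == 1

def cell_ref(container, index):
    """Return getter/setter closures over (container, index): the closest
    Python analogue to VBA's `Set` for a single cell."""
    def get():
        return container[index]
    def set_(value):
        container[index] = value
    return get, set_

get_cell, set_cell = cell_ref(row, 0)
set_cell(5)
assert row[0] == 5
```

For the pandas case specifically, <code>df.at[row_num, header] = 5</code> is already the idiomatic single-expression form.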
<python><pandas>
2023-10-11 03:43:32
3
3,019
Chadee Fouad