Dataset columns (type and observed min/max):

QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, length 15 to 150
QuestionBody: string, length 40 to 40.3k
Tags: string, length 8 to 101
CreationDate: date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, length 3 to 30
76,492,314
9,003,900
Pick the highest group/category for each person - python
<p>I have a dataframe with three columns, Name, group1 and group2. The 'Name' column shows the different people/cases and both 'group' columns show the category these people belong to. Below is an image of how this data set looks:</p> <p><a href="https://i.sstatic.net/xSvfa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xSvfa.png" alt="enter image description here" /></a></p> <p>As we can see from the above data set, the same person can be assigned to multiple groups and I need to pick the highest group they belong to, 01_high being the highest group and 03_low being the lowest group.</p> <p>As an example, let's take the first case 'Tom': in group1 he belongs to '01_high' and for group2 'Tom' belongs to '03_low'. I need to create a third group column 'group3' with the higher category. In this case the value in the group3 column for 'Tom' will be '01_high'.</p> <p>Code to create the data set:</p> <pre><code>import pandas as pd data = {'Name': ['Tom', 'Nick', 'Jack', 'Ann'], 'group1': ['01_high', '02_medium', '03_low', '02_medium'], 'group2': ['03_low', '03_low', '02_medium', '03_low']} df = pd.DataFrame(data) df </code></pre> <p>Final desired output:</p> <p><a href="https://i.sstatic.net/nyXwa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nyXwa.png" alt="enter image description here" /></a></p> <p>I'm fairly new to Python and not sure how to achieve the desired output, so any help is greatly appreciated. Thanks</p>
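One possible approach (not from the original post): because the numeric prefixes make '01_high' sort before '02_medium' and '03_low', the highest-priority group in each row is simply the row-wise string minimum.

```python
import pandas as pd

data = {'Name': ['Tom', 'Nick', 'Jack', 'Ann'],
        'group1': ['01_high', '02_medium', '03_low', '02_medium'],
        'group2': ['03_low', '03_low', '02_medium', '03_low']}
df = pd.DataFrame(data)

# the numeric prefix makes the labels sort in priority order,
# so the "highest" group is the row-wise string minimum
df['group3'] = df[['group1', 'group2']].min(axis=1)
```

This relies entirely on the label naming convention; if the categories were not prefixed, an ordered `pd.Categorical` would be the safer route.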
<python><pandas><data-manipulation>
2023-06-16 17:10:19
2
320
SAJ
76,492,287
11,922,765
How to plot month on x, value on y, separated by year
<p>I want to plot all years-month on the x-axis and month value on the y-axis. Additionally, I would like to color the data points based on the year they belong to. I have used time-series data from a <a href="https://www.kaggle.com/datasets/robikscube/hourly-energy-consumption?select=DOM_hourly.csv" rel="nofollow noreferrer">Kaggle dataset</a> to achieve this, but I am not getting the exact output that I intended.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import matplotlib.colors as mcolors colors_list = list(mcolors.XKCD_COLORS.keys()) df = pd.read_csv('/kaggle/input/hourly-energy-consumption/DOM_hourly.csv') df.set_index('Datetime', inplace=True, drop=True) df.index = pd.to_datetime(df.index, format='%Y-%m-%d %H:%M:%S') # drop duplicated index df = df[~df.index.duplicated(keep='first')] ## Monthly mean dataframe mdf = df.resample(rule='M', kind='interval').mean()#.to_period('M').mean() # Plot seaborn mdf['Year'] = mdf.index.year mdf['Month'] = mdf.index.month fig, ax = plt.subplots(figsize=(10, 6)) sns.scatterplot(x='Month', y=mdf.columns[0], data=mdf, hue='Year',ax=ax) plt.show() </code></pre> <p>Present output I got:</p> <p><a href="https://i.sstatic.net/tMVsQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tMVsQ.png" alt="enter image description here" /></a></p> <p>Desired output: Why I like below plot, not the previous one: Because, I can see the year-to-year pattern. 
Also, I can check if more than one month has the same pattern.</p> <p><a href="https://i.sstatic.net/bjuGY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bjuGY.png" alt="enter image description here" /></a></p> <p>To achieve my desired output, I need to run the following code:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np # Draw plot showing year to year change for the same month years_list = mdf['Year'].unique() years_color = colors_list[:len(years_list)] ## Append colors to the main df mdf['year_color'] = mdf['Year'].map(dict(zip(years_list,years_color))) for one_month in range(1,13,1): one_month_all_years_data = mdf[mdf['Month']==one_month] x_ticks = np.linspace(0,1,len(one_month_all_years_data)+1,endpoint=False)[1:len(one_month_all_years_data)+1] one_month_all_years_data['All years-months'] = None one_month_all_years_data.loc[:,'All years-months'] = x_ticks + one_month ax = sns.scatterplot(x='All years-months',y=mdf.columns[0],data=one_month_all_years_data, hue = one_month_all_years_data.index, hue_order = one_month_all_years_data['year_color']) sns.move_legend(ax, one_month_all_years_data.index) plt.show() </code></pre> <p>Output I got going towards the desired output:</p> <pre><code>/tmp/ipykernel_32/2084798528.py:74: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy one_month_all_years_data['All years-months'] = None /tmp/ipykernel_32/2084798528.py:75: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead ........ </code></pre> <p><a href="https://i.sstatic.net/SOz0O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SOz0O.png" alt="enter image description here" /></a></p>
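As an aside, the per-month loop and the SettingWithCopyWarning can both be avoided by computing the within-month x offset vectorially on a fresh frame with `assign`. A minimal sketch of the idea, using a toy stand-in frame rather than the Kaggle data (column names `Year`/`Month` match the post; `value` is illustrative):

```python
import numpy as np
import pandas as pd

# toy stand-in for the monthly-mean frame mdf
mdf = pd.DataFrame({'Year': [2017, 2018, 2017, 2018],
                    'Month': [1, 1, 2, 2],
                    'value': [1.0, 2.0, 3.0, 4.0]})

years = np.sort(mdf['Year'].unique())
# evenly spaced offsets inside each month slot, one per year
offsets = {y: (i + 1) / (len(years) + 1) for i, y in enumerate(years)}
# assign() returns a new frame, so no SettingWithCopyWarning
plot_df = mdf.assign(x=mdf['Month'] + mdf['Year'].map(offsets))
```

`plot_df['x']` can then be fed directly to `sns.scatterplot(x='x', y='value', hue='Year', ...)` in one call instead of twelve.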
<python><pandas><dataframe><matplotlib><seaborn>
2023-06-16 17:05:12
1
4,702
Mainland
76,492,206
11,688,559
Replacing if statements and global variables in an Python Apache Beam pipeline
<p>I am getting acquainted with the Python Apache Beam SDK. A feature that I keep wanting to use is if statements. For example, I create a PCollection which then flows into another PCollection based on some result. I am aware that the <code>Partition</code> function can be used to perform such splits.</p> <p>Here is the catch though. With the <code>Partition</code> function, the downstream PCollections will always remain active or at least receive an empty PCollection from the parent. This becomes a problem when the desired outcome is: <em>write to a file if a condition is met, otherwise leave it alone</em>.</p> <p>More specifically, my scenario is as follows: I call an API which either returns some data or None. I want to overwrite some file only if the API returns new data, else I want to leave it as is.</p> <p>Here is my implementation that makes use of if statements and a globally defined variable. The pipeline runs using the <code>DirectRunner</code> (local machine implementation) but I think that this implementation is very bad practice. I do not think this approach will hold if I were to deploy it to Google Cloud Dataflow in a data-heavy environment.</p> <p>Please help me figure out a way of replacing such if statements and global variables that maintain some state for the if statement to make use of.
Here is my code:</p> <pre><code>import numpy as np import apache_beam as beam from apache_beam.options.pipeline_options import PipelineOptions import os os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'first_service_account_key.json' N = 5 url = 'gs://test_bucket-321/routing_test' text_file = url + '/test.txt' pipeline_options = { 'job_name':'test-if-statement', 'project': 'tensile-proxy-386313' , 'runner': 'DirectRunner', } pipeline_options = PipelineOptions.from_dictionary(pipeline_options) pipeline = beam.Pipeline(options=pipeline_options) is_None = False def api_sim(): # Simulates the pull from an API global is_None # Global: outside of the pipeline. Problem? if np.random.uniform(0,1) &lt; 0.5: is_None = False for i in range(N): yield np.random.randint(0,100) else: is_None = True return None def custom_print(element): print(element) return element api_pull = ( pipeline | 'simulate an api pull' &gt;&gt; beam.Create(api_sim()) | beam.Map(custom_print) ) if not is_None: # Conditional write writer = api_pull | 'conditional write' &gt;&gt; beam.io.WriteToText(text_file) pipeline.run() </code></pre> <h3>EDIT:</h3> <p>Here is another solution. Although it still seems like I am not using best practices though returning a PCollection in a ParDo function.</p> <pre><code>def api_sim(): # Simulates the pull from an API if np.random.uniform(0, 1) &lt; 0.9: for i in range(N): yield np.random.randint(0, 100) def custom_print(element): print(element) return element class ConditionalWriteToText(beam.DoFn): def process(self,element): if element is not None: yield element | beam.io.WriteToText(text_file) api = ( pipeline | beam.Create(api_sim()) | beam.Map(custom_print) | beam.combiners.ToList() | beam.ParDo(ConditionalWriteToText()) ) pipeline.run() </code></pre>
<python><if-statement><apache-beam><apache-beam-io>
2023-06-16 16:50:07
2
398
Dylan Solms
76,492,185
11,737,958
How to iterate over each word in a list
<p>I am new to Python. I need to iterate over the words of a list one by one. But I can only iterate through each letter of each word, printed vertically.</p> <pre class="lang-py prettyprint-override"><code>l = ['abcd efgh ijkl'] for i in l: for j in i: print(j) </code></pre> <h2>Output:</h2> <pre><code>a b c d e f g h i j k l </code></pre> <h2>Expected output:</h2> <pre><code>abcd efgh ijkl </code></pre>
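For reference, the inner loop iterates characters because the list holds a single string; splitting that string on whitespace yields the words:

```python
l = ['abcd efgh ijkl']

words = []
for sentence in l:
    for word in sentence.split():  # split() breaks the string on whitespace
        print(word)
        words.append(word)
```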
<python>
2023-06-16 16:46:06
5
362
Kishan
76,492,072
587,650
How to avoid SQLalchemy removing zero count from query results
<p>I have a multi-vendor sales site where I want to count the sales per seller each day.</p> <p>So I first create a subquery to get the dates from the last 365 days:</p> <pre><code>date_series = func.generate_series(min_date , todays_date, timedelta(days=1)) trunc_date = func.date_trunc('day', date_series) subquery = session.query(trunc_date.label('day')).subquery() </code></pre> <p>Then I put this into the main query to get the actual counts:</p> <pre><code>query = session.query(subquery.c.day, func.count(Sale.id), Vendor.name) query = query.outerjoin(Sale, subquery.c.day == func.date_trunc('day', Sale.timestamp)) query = query.outerjoin(Product, Sale.product_id == Product.id) query = query.outerjoin(Vendor, Product.vendor_id == Vendor.id) query = query.group_by(subquery.c.day, Vendor) query = query.order_by(subquery.c.day.desc()) counts = query.all() </code></pre> <p>Problem is, when I include the <strong>Vendor</strong> in the <strong>group_by</strong> it doesn't return the days that have a 0 sales count, just as if it involuntarily goes from an outerjoin query to an innerjoin one.</p> <p>If I query just to see the count of <strong>Sales</strong> per day, removing the <strong>Vendor</strong> from the query and from the <strong>group_by</strong>, it returns rows with 0 sales as well (but then I don't see the count per Vendor, I just get the count of all Sales each day).</p> <p>How can I do this query without the 0 count Sales days being removed from the results by the <strong>group_by</strong>?</p> <p><em><strong>EDITS:</strong></em></p> <p>Generated query:</p> <pre><code>SELECT anon_1.day AS anon_1_day, vendor.name AS vendor_name, count(sales.id) AS count_1 FROM (SELECT date_trunc(%(date_trunc_1)s, generate_series(%(generate_series_1)s, %(generate_series_2)s, %(generate_series_3)s)) AS day) AS anon_1 LEFT OUTER JOIN sales ON anon_1.day = date_trunc(%(date_trunc_2)s, sales.timestamp) LEFT OUTER JOIN product ON
product.vendor_id = vendor.id GROUP BY anon_1.day, vendor.name ORDER BY anon_1.day DESC </code></pre> <p>Tables:</p> <pre><code>Sale &gt;- Product &gt;- Vendor </code></pre> <p>Current query results:</p> <pre><code>(datetime.datetime(2023, 6, 11, 0, 0), None, None) (datetime.datetime(2023, 6, 10, 0, 0), None, None) (datetime.datetime(2023, 6, 9, 0, 0), None, None) (datetime.datetime(2023, 6, 8, 0, 0), Vendor_1, 10) (datetime.datetime(2023, 6, 8, 0, 0), Vendor_2, 3) (datetime.datetime(2023, 6, 7, 0, 0), Vendor_1, 9) (datetime.datetime(2023, 6, 7, 0, 0), Vendor_2, 11) </code></pre> <p>Wanted query result:</p> <pre><code>(datetime.datetime(2023, 6, 11, 0, 0), Vendor_1, 0) (datetime.datetime(2023, 6, 11, 0, 0), Vendor_2, 0) (datetime.datetime(2023, 6, 10, 0, 0), Vendor_1, 0) (datetime.datetime(2023, 6, 10, 0, 0), Vendor_2, 0) (datetime.datetime(2023, 6, 9, 0, 0), Vendor_1, 0) (datetime.datetime(2023, 6, 9, 0, 0), Vendor_2, 0) (datetime.datetime(2023, 6, 8, 0, 0), Vendor_1, 10) (datetime.datetime(2023, 6, 8, 0, 0), Vendor_2, 3) (datetime.datetime(2023, 6, 7, 0, 0), Vendor_1, 9) (datetime.datetime(2023, 6, 7, 0, 0), Vendor_2, 11) </code></pre>
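The underlying SQL issue is that the date series is only ever joined to sales, so a (day, vendor) pair with no sales never materializes. A common fix is to cross join days with vendors first, then left join the sales onto that full grid. A sketch of the idea in plain SQL via sqlite3, with made-up toy tables rather than the post's Postgres schema:

```python
import sqlite3

con = sqlite3.connect(':memory:')
cur = con.cursor()
cur.executescript("""
CREATE TABLE days(day TEXT);
CREATE TABLE vendor(id INTEGER, name TEXT);
CREATE TABLE sale(day TEXT, vendor_id INTEGER);
INSERT INTO days VALUES ('2023-06-10'),('2023-06-11');
INSERT INTO vendor VALUES (1,'Vendor_1'),(2,'Vendor_2');
INSERT INTO sale VALUES ('2023-06-10',1),('2023-06-10',1);
""")
rows = cur.execute("""
SELECT d.day, v.name, COUNT(s.vendor_id)      -- COUNT of a join column: 0 when no match
FROM days d
CROSS JOIN vendor v                           -- every (day, vendor) pair exists up front
LEFT JOIN sale s ON s.day = d.day AND s.vendor_id = v.id
GROUP BY d.day, v.name
ORDER BY d.day, v.name
""").fetchall()
```

In SQLAlchemy terms, that would mean joining the date subquery to `Vendor` without an ON condition (a cross join) before outer-joining `Sale`, instead of reaching `Vendor` through the sales rows.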
<python><postgresql><sqlalchemy>
2023-06-16 16:29:50
0
719
Drublic
76,491,927
1,028,270
How do I read the current output with asynchronous=True without waiting for the process to finish?
<p>Also opened a GitHub issue: <a href="https://github.com/pyinvoke/invoke/issues/951" rel="nofollow noreferrer">https://github.com/pyinvoke/invoke/issues/951</a></p> <p>It's documented, but are there any complete examples and snippets for how to use <code>asynchronous=True</code>?</p> <p>I usually have to look through GitHub issues when trying to find full example snippets to stuff, but maybe I'm just missing this in the docs.</p> <p>I'm doing this</p> <pre><code>@task def my_cmd(ctx: Context): invoke_promise = ctx.run( &quot;while true; do echo running forevah; sleep 2; done&quot;, warn=True, hide=False, echo=True, asynchronous=True ) </code></pre> <p>This throws an error</p> <pre><code>print(invoke_promise) Traceback (most recent call last): File &quot;/usr/local/bin/myproject&quot;, line 8, in &lt;module&gt; sys.exit(program.run()) File &quot;/usr/local/lib/python3.10/site-packages/invoke/program.py&quot;, line 384, in run self.execute() File &quot;/usr/local/lib/python3.10/site-packages/invoke/program.py&quot;, line 569, in execute executor.execute(*self.tasks) File &quot;/usr/local/lib/python3.10/site-packages/invoke/executor.py&quot;, line 129, in execute result = call.task(*args, **call.kwargs) File &quot;/usr/local/lib/python3.10/site-packages/invoke/tasks.py&quot;, line 127, in __call__ result = self.body(*args, **kwargs) File &quot;/workspaces/myproject/myproject/tasks/mytask.py&quot;, line 479, in my_cmd print(invoke_promise) File &quot;/usr/local/lib/python3.10/site-packages/invoke/runners.py&quot;, line 1475, in __str__ if self.exited is not None: AttributeError: 'Promise' object has no attribute 'exited' </code></pre> <p>As does this</p> <pre><code>print(invoke_promise.stdout) AttributeError: 'Promise' object has no attribute 'stdout' </code></pre> <p>This hangs forever until the process exits</p> <pre><code>print(invoke_promise.join()) </code></pre> <p>What method is there on the returned promise object to just get the current output of the running 
background process that was started? I can't find anything in the docs about this.</p> <p>I want to be able to:</p> <ol> <li>Just see the current output so far without waiting for it to finish</li> <li>Be able to read the output of the currently running process and choose to manually kill it if I want</li> </ol>
<python><python-3.x><pyinvoke>
2023-06-16 16:07:31
1
32,280
red888
76,491,896
4,301,236
How to provide parameters defined at the source asset declaration to the IO Manager?
<p>So my initial task was to create an IO Manager that should connect to a database and return data as pandas dataframe.</p> <p>(I am using dagster 1.3.10)</p> <h2>Design</h2> <p>IMO, the credentials (ip, port, user, password) must be parameters of the IO manager because I want different resources for different credentials. But the other interesting parameters that can be used to perform a database query (select fields, optional filters, sorting, limit, ...) should be linked to an asset definition.</p> <p>I had no trouble creating the credentials parameter, like this:</p> <pre class="lang-py prettyprint-override"><code>@io_manager( config_schema={ 'ip': Field(str, is_required=True), 'port': Field(int, default_value=5432), 'username': Field(str, is_required=True), 'password': Field(str, is_required=True) } ) def database_io_manager(init_context): return DatabaseIOManager( ip=init_context.resource_config.get('ip'), port=init_context.resource_config.get('port'), username=init_context.resource_config.get('username'), password=init_context.resource_config.get('password'), ) </code></pre> <p>Then I can just provide this function in the resources dict that I provide to definitions</p> <pre class="lang-py prettyprint-override"><code>defs = Definitions(resources={'database_io_manager': database_io_manager}) </code></pre> <p>So now I can use this IO manager in my assets definitions</p> <pre class="lang-py prettyprint-override"><code>@asset(io_manager_key='database_io_manager') def my_asset(): pass </code></pre> <p>Now like I said, I want the query parameters to be at the asset level, so I've created a configuration.</p> <pre class="lang-py prettyprint-override"><code>from dagster import Config import pydantic class DatabaseConfig(Config): fields: List[str] = pydantic.Field() </code></pre> <p>I provide this configuration to the asset in the <code>metadata</code> attribute.</p> <pre class="lang-py 
prettyprint-override"><code>@asset(io_manager_key='database_io_manager', metadata={'io_manager': DatabaseConfig(fields='*')}) def my_asset(): pass </code></pre> <p>And I can use this in my IO manager with a custom method</p> <pre class="lang-py prettyprint-override"><code> def load_metadata(self, context: Union[InputContext, OutputContext]) -&gt; None: config: DatabaseConfig = context.metadata.get(&quot;io_manager&quot;) if not isinstance(config, DatabaseConfig): raise ValueError('wrong config type') self.fields = config.fields </code></pre> <h2>Problem</h2> <p>This works with <code>Asset</code> but not with <code>SourceAsset</code>.</p> <p>If I define a source asset like this:</p> <pre class="lang-py prettyprint-override"><code>my_source_asset = SourceAsset( key='my_source_asset', io_manager_key='database_io_manager', metadata=DatabaseConfig(fields='*') ) </code></pre> <p>I can see the metadata associated with this source asset in dagit, but when effectively loading the asset, the metadata dict is empty.</p> <p>Is it a bug? Am I missing something?</p> <h2>Other (minor) problems</h2> <h3>unserializable</h3> <p>I tried to provide a minimal replication example and in the process of doing so I encountered other issues.</p> <p>The first thing that bugs me is that this <code>DatabaseConfig</code> object is not displayed by dagit. It says 'unserializable'. But I am extending the <code>Config</code> class and I've tested calling the <code>json()</code> method on it and it works well.</p> <p>Bonus 1: What can I do to make the <code>DatabaseConfig</code> class serializable as dagit wants it?</p> <h3>zero io manager use</h3> <p>With the code that can be found at the end of this question, when I look in dagit I have zero uses of my io managers.
<img src="https://i.sstatic.net/ClcnH.png" alt="zero uses" /></p> <p>Bonus 2: Why can't I see the IO managers uses ?</p> <hr /> <pre class="lang-py prettyprint-override"><code># minimal_example_bug_dagster.py from __future__ import annotations import pickle from typing import Union import pydantic from dagster import ( Config, Definitions, InputContext, OutputContext, SourceAsset, asset, IOManager, fs_io_manager, io_manager, ) class CustomIOConfig(Config): custom_file_name: str = pydantic.Field() class CustomIOManager(IOManager): my_attribute: str = None def get_key(self, context: Union[InputContext, OutputContext]) -&gt; str: return context.asset_key.path[:-1] def load_metadata(self, context: Union[InputContext, OutputContext]) -&gt; None: context.log.info(context.metadata) config: CustomIOConfig = context.metadata.get(&quot;io_manager&quot;) self.my_attribute = config.custom_file_name def load_input(self, context: InputContext) -&gt; str: context.log.info(f&quot;Inside load_input for {self.get_key(context)}&quot;) self.load_metadata(context) pickle.load(open(self.my_attribute, &quot;rb&quot;)) def handle_output(self, context: &quot;OutputContext&quot;, obj: str) -&gt; None: context.log.info(f&quot;Inside handle_output for {self.get_key(context)}&quot;) self.load_metadata(context) pickle.dump(obj, open(self.my_attribute, &quot;wb&quot;)) @asset( metadata={&quot;io_manager&quot;: CustomIOConfig(custom_file_name=&quot;foo&quot;)}, io_manager_key=&quot;custom_io_manager&quot;, ) def my_asset(): return &quot;Hello&quot; my_source_asset = SourceAsset( &quot;my_source_asset&quot;, metadata={&quot;io_manager&quot;: CustomIOConfig(custom_file_name=&quot;bar&quot;)}, io_manager_key=&quot;custom_io_manager&quot;, ) @asset(io_manager_key=&quot;fs_io_manager&quot;) def using_both_assets(my_asset, my_source_asset): return f&quot;{my_asset}, {my_source_asset}&quot; @io_manager def custom_io_manager(init_context): return CustomIOManager() defs = Definitions( assets=[my_asset, 
my_source_asset, using_both_assets], resources={&quot;fs_io_manager&quot;: fs_io_manager, &quot;custom_io_manager&quot;: custom_io_manager}, ) </code></pre>
<python><dagster>
2023-06-16 16:03:51
1
389
guillaume latour
76,491,760
11,230,924
Python regex to transform a number string
<p>I would like to know the regular expression to transform a string that starts with a minus sign, followed by a decimal point and a sequence of digits, such as -.13082, into a string that starts with the minus sign followed by the first digit of the sequence, followed by a dot and then the rest of the digits.</p> <p>This would transform <code>-.13082</code> to <code>-1.3082</code> or <code>-.26750</code> to <code>-2.6750</code></p> <p>Thank you</p>
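For what it's worth, a capture group on the first digit is enough for this transformation (the helper name is my own, not from the post):

```python
import re

def shift_decimal(s):
    # move the first digit in front of the decimal point: -.13082 -> -1.3082
    return re.sub(r'^-\.(\d)', r'-\1.', s)

print(shift_decimal('-.13082'))  # -1.3082
print(shift_decimal('-.26750'))  # -2.6750
```

The `^` anchor keeps the substitution from touching a minus-dot-digit sequence that appears later inside a longer string.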
<python><regex>
2023-06-16 15:45:12
2
381
Zembla
76,491,593
16,665,831
Rerun the dag after marking the last run as success
<p>My scenario is as follows:</p> <p>I have a main dag with two tasks, and a trigger dag that should first mark the last run of the main dag as success and then rerun the main dag every 12 hours.</p> <p>I wrote my main dag and a trigger dag that reruns the main dag every 12 hours, but I could not manage to mark the last run of the main dag as success after the trigger dag reruns it. There are multiple running main dag sessions like the below; I want just one running main dag session, and the other old ones should be marked as success.</p> <p><a href="https://i.sstatic.net/b12So.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b12So.png" alt="enter image description here" /></a></p>
<python><airflow><airflow-2.x>
2023-06-16 15:25:39
0
309
Ugur Selim Ozen
76,491,567
3,225,072
Difference between accessing only the last element of a list vs accessing a range up to and including the last element
<p>Being used to working with Matlab syntax, now that I'm working with Python this example is confusing:</p> <p>I have a list for which I want to access its last element, let's say <code>list_a = [0,1,2,3,4,5,6,7,8,9]</code></p> <p>In Python I have to do <code>list_a[-1]</code> (which gives <code>9</code>) when in Matlab I would do <code>list_a(end)</code></p> <p>So in my mind <code>-1</code> in Python means the last element, same as the <code>end</code> keyword in Matlab.</p> <p>Then I want to access the last 5 elements from the list, including the last one.</p> <p>In Matlab I would do <code>list_a(6:end)</code>. Matlab arrays' first index is 1, not 0 like in Python, that's why I have to start with index 6.</p> <p>In my mind it would be logical to do <code>list_a[5:-1]</code> since <code>-1</code> means the last item of the list as in the example above. However, this doesn't work in Python, because the returned result is <code>[5, 6, 7, 8]</code>. So to get the last element of the list in Python I have to do <code>list_a[5:]</code> and leave the end blank.</p> <p>I don't know why they decided to do this but I'm wondering if there is something in Python I can use like the <code>end</code> keyword in Matlab that works for both list indexing and slicing. Thank you</p>
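To illustrate the asymmetry: Python slices exclude the stop index, so `-1` as a stop drops the last element, while omitting the stop (or slicing from a negative start) keeps it:

```python
list_a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

print(list_a[-1])             # last element: 9
print(list_a[5:-1])           # stop is exclusive, last element dropped: [5, 6, 7, 8]
print(list_a[5:])             # omitted stop means "to the end": [5, 6, 7, 8, 9]
print(list_a[-5:])            # last five, no need to know the length
print(list_a[5:len(list_a)])  # an explicit len() as the stop also works
```

`len(list_a)` as the stop is probably the closest spelling to Matlab's `end` that works in a slice, though there is no single token that serves both as an index and a slice bound.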
<python><list><matlab><slice>
2023-06-16 15:21:36
1
960
Victor
76,491,559
3,908,025
PyQt6 multithreading mixing concurrent and sequential tasks
<p>I have a PyQt6 application in which I have implemented multithreading. I'm looking to achieve the following. Suppose I have a <code>step1</code>, which needs to be executed first. When finished, <code>step2</code> can start. However, <code>step2</code> relies on two other tasks <code>requirement_a</code> and <code>requirement_b</code>. In contrast to <code>step1</code> and <code>step2</code>, <code>requirement_a</code> and <code>requirement_b</code> are supposed to run concurrently.</p> <p>I've tried implementing this using two <code>QThreadPool</code>s, but I'm not yet getting the expected results. Consider the code below:</p> <pre><code>import sys from datetime import datetime import time from PyQt6.QtCore import QThreadPool, QRunnable, QObject, pyqtSignal, pyqtSlot from PyQt6.QtWidgets import QMainWindow, QPushButton, QApplication class WorkerSignals(QObject): &quot;&quot;&quot;Defines the signals available from a running worker thread. Supported signals are: - started: no data - finished: no data &quot;&quot;&quot; started = pyqtSignal() finished = pyqtSignal() class Worker(QRunnable): &quot;&quot;&quot;Worker thread&quot;&quot;&quot; def __init__(self, fun, *args, **kwargs): &quot;&quot;&quot;Initialize method of the Worker thread&quot;&quot;&quot; super(Worker, self).__init__() # Store constructor arguments (re-used for processing) self.fun = fun self.args = args self.kwargs = kwargs self.signals = WorkerSignals() @pyqtSlot() def run(self): &quot;&quot;&quot;Execute the function with passed args and kwargs&quot;&quot;&quot; self.signals.started.emit() self.fun(*self.args, **self.kwargs) self.signals.finished.emit() class MainWindow(QMainWindow): def __init__(self): super().__init__() self.button = QPushButton('Start', self) self.button.setGeometry(10, 10, 100, 30) self.button.clicked.connect(self.start) self.sequential_thread_pool = QThreadPool() self.sequential_thread_pool.setMaxThreadCount(1) self.concurrent_thread_pool = QThreadPool() def start(self): 
self.start_worker(self.step1, 'sequential') self.start_worker(self.step2, 'sequential') def start_worker(self, fun, thread_pool): worker = Worker(fun) worker.signals.started.connect(lambda: print(f'[{datetime.now()}] Started {fun.__name__}')) worker.signals.finished.connect(lambda: print(f'[{datetime.now()}] Finished {fun.__name__}')) if thread_pool == 'sequential': self.sequential_thread_pool.start(worker) elif thread_pool == 'concurrent': self.concurrent_thread_pool.start(worker) def step1(self): # this is a step that needs to execute first time.sleep(1) # do stuff print(f'[{datetime.now()}] Executing step1') time.sleep(1) # do stuff def requirement_a(self): # this is a step that needs to execute after step1 finished, but can happen concurrently with requirement_b time.sleep(1) # do stuff print(f'[{datetime.now()}] Executing requirement_a') time.sleep(1) # do stuff def requirement_b(self): # this is a step that needs to execute after step1 finished, but can happen concurrently with requirement_a time.sleep(1) # do stuff print(f'[{datetime.now()}] Executing requirement_b') time.sleep(1) # do stuff def step2(self): # this is a step that needs to execute after step1, requirement_a and requirement_b finished self.start_worker(self.requirement_a, 'concurrent') self.start_worker(self.requirement_b, 'concurrent') self.concurrent_thread_pool.waitForDone() # wait for requirement_a and requirement_b to finish time.sleep(1) # do stuff print(f'[{datetime.now()}] Executing step2') time.sleep(1) # do stuff if __name__ == '__main__': app = QApplication(sys.argv) window = MainWindow() window.show() sys.exit(app.exec()) </code></pre> <p>Note that currently <code>step2</code> is given to the sequential threadpool, from where it sends <code>requirement_a</code> and <code>requirement_b</code> to the concurrent threadpool. This is some kind of thread nesting I suppose? 
Running this code and clicking the button prints the following:</p> <pre><code>[2023-06-16 17:15:54.792738] Started step1 [2023-06-16 17:15:55.794494] Executing step1 [2023-06-16 17:15:56.804084] Finished step1 [2023-06-16 17:15:56.804084] Started step2 [2023-06-16 17:15:57.817431] Executing requirement_b[2023-06-16 17:15:57.817431] Executing requirement_a [2023-06-16 17:15:59.848227] Executing step2 [2023-06-16 17:16:00.859409] Finished step2 </code></pre> <p>The steps are being executed in the order that I desire, however the &quot;Started requirement_a&quot;, &quot;Finished requirement_a&quot;, &quot;Started requirement_b&quot; and &quot;Finished requirement_b&quot; prints are missing. This indicates that the print statements connected to <code>worker.signals.started</code> and <code>worker.signals.finished</code> for the workers given to <code>self.concurrent_thread_pool</code> do not seem to be executed.</p> <p>I've spent quite some time trying to fix this. I have also done quite some digging and reading trying to find a best practice for mixing sequential and concurrent threading. Both without any luck... What am I doing wrong and how should this be done? Hopefully the community can clarify and help me on this!</p>
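Outside of Qt, the same dependency graph (step1 first, then requirement_a and requirement_b in parallel, then step2) can be expressed with stdlib futures. This is only an illustration of the ordering constraints, not a fix for the missing PyQt signal prints:

```python
from concurrent.futures import ThreadPoolExecutor, wait
import threading

order = []
lock = threading.Lock()

def record(name):
    with lock:               # serialize appends from worker threads
        order.append(name)

def step1():
    record('step1')

def requirement_a():
    record('requirement_a')

def requirement_b():
    record('requirement_b')

def step2():
    record('step2')

with ThreadPoolExecutor(max_workers=2) as pool:
    pool.submit(step1).result()                               # run step1 to completion first
    reqs = [pool.submit(requirement_a), pool.submit(requirement_b)]
    wait(reqs)                                                # block until both requirements finish
    pool.submit(step2).result()                               # only then run step2
```

As for the missing prints themselves, one hypothesis (not verified) is that the cross-thread signal deliveries are queued toward a thread that is blocked in `waitForDone`, so they are never processed; chaining the steps via `finished` signals instead of blocking waits would sidestep that.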
<python><multithreading><pyqt>
2023-06-16 15:21:02
1
589
Teun
76,491,378
608,395
aiohttp: AttributeError: 'NoneType' object has no attribute 'get_extra_info' using HTTPS_PROXY and trust_env=True
<p>Trying to connect using ENV proxy:</p> <pre><code> if self.websession is None: # self.websession = aiohttp.ClientSession(trace_configs=[trace_config]) self.websession = aiohttp.ClientSession(trace_configs=[trace_config], trust_env=True) _LOGGER.debug(&quot;New connection prepared&quot;) </code></pre> <p>and invoking using</p> <pre><code>https_proxy=http://localhost:9090 python3 cli.py </code></pre> <p>gives</p> <pre><code> File &quot;/opt/homebrew/lib/python3.11/site-packages/aiohttp/client.py&quot;, line 1141, in __aenter__ self._resp = await self._coro ^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/lib/python3.11/site-packages/aiohttp/client.py&quot;, line 536, in _request conn = await self._connector.connect( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/lib/python3.11/site-packages/aiohttp/connector.py&quot;, line 540, in connect proto = await self._create_connection(req, traces, timeout) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/lib/python3.11/site-packages/aiohttp/connector.py&quot;, line 899, in _create_connection _, proto = await self._create_proxy_connection(req, traces, timeout) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/lib/python3.11/site-packages/aiohttp/connector.py&quot;, line 1325, in _create_proxy_connection return await self._start_tls_connection( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/lib/python3.11/site-packages/aiohttp/connector.py&quot;, line 1124, in _start_tls_connection tls_proto.connection_made( File &quot;/opt/homebrew/lib/python3.11/site-packages/aiohttp/base_protocol.py&quot;, line 62, in connection_made tcp_nodelay(tr, True) File &quot;/opt/homebrew/lib/python3.11/site-packages/aiohttp/tcp_helpers.py&quot;, line 25, in tcp_nodelay sock = transport.get_extra_info(&quot;socket&quot;) ^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'get_extra_info' ERROR:asyncio:Unclosed client session client_session: 
&lt;aiohttp.client.ClientSession object at 0x102259090&gt; </code></pre> <p>with aiohttp 3.8. Without trust_env everything is fine. Proxy is running and accessible.</p> <p><strong>UPDATE</strong></p> <p>It does not make a difference if I add a transport or not:</p> <pre><code>self.websession = aiohttp.ClientSession(trace_configs=[trace_config], trust_env=True, connector=aiohttp.TCPConnector()) </code></pre>
<python><aiohttp><http-proxy>
2023-06-16 14:56:59
0
14,016
andig
76,491,161
1,913,367
NumPy broadcasting from 4D using 3D
<p>I am trying to broadcast values from 4D matrix to 3D matrix using index values/locations from another 3D matrix. Unfortunately, the results are not what I expect -- either not working or getting wrong dimensions.</p> <p>Namely, I have a 4D matrix with dimensions (time,level,lat,lon). Based on maximum values along the level (axis 1), I am trying to reduce my 4D matrix to 3D matrix with dimensions (time,lat,lon).</p> <p>I tried to reduce the dimensions by selecting only the first time-moment, getting 3D and 2D matrices respectively, but the broadcasting gives me wrong dimensions -- answer will be 4D matrix again.</p> <p>Here is a simplified example:</p> <pre><code>#!/usr/bin/env ipython import numpy as np # ---------------------- nx = 10 ny = 10 ntime = 20 nlevs = 30 # ============================================= np.random.seed(10); datain_a = np.random.random((ntime,nlevs,ny,nx)); # generate data A datain_b = np.random.random((ntime,nlevs,ny,nx)); # generate data B # --------------------------------------------- calc_smt = np.abs(np.diff(datain_a,axis=1)/np.diff(datain_b,axis=1)) # calculate some ratio between two matrices # --------------------------------------------- calc_a = np.nanmax(calc_smt,axis=1) # find the maximum ratio at every gridcell -- answer has dimensions (time,lat,lon) ind_out = np.argmax(calc_smt,axis=1) # location of maximum ratio # --------------------------------------------------------------------------------- # Broadcasting attempts: calc_b = datain_b[ind_out] # Get an error with axis 26 is out of bounds... # NOT WORKING calc_b = datain_b[ind_out[:,np.newaxis,:,:]] # still an error: IndexError: index 26 is out of bounds for axis 0 with size 20 # NOT WORKING # --------------------------------------------------------------------------- # let us try 1st time moment: dd_a = ind_out[0,:,:] dd_b = datain_b[0,:,:,:] smt = dd_b[dd_a] # getting something with dimensions (10,10,10,10) # NOT WORKING? 
# --------------------------------------------------------------------------- # This is the output I expect: correct_output = np.zeros((ntime,ny,nx)); for itime in range(ntime): for jj in range(ny): for ii in range(nx): correct_output[itime,jj,ii] = datain_b[itime,ind_out[itime,jj,ii],jj,ii] # ------------------------------------------------------------------------------ # How to get the same without 3 loops? </code></pre> <p>I would like to get an answer with dimensions (time,lat,lon) instead of not getting one at all.</p>
<python><numpy>
2023-06-16 14:30:20
1
2,080
msi_gerva
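A hedged sketch of one way to get the (time, lat, lon) result asked about above without the triple loop: `np.take_along_axis` gathers along one axis using an index array, but it wants that index array to have the same number of dimensions as the source, so the level axis is re-inserted as length 1 and dropped afterwards.

```python
import numpy as np

np.random.seed(10)
ntime, nlevs, ny, nx = 20, 30, 10, 10
datain_a = np.random.random((ntime, nlevs, ny, nx))
datain_b = np.random.random((ntime, nlevs, ny, nx))

calc_smt = np.abs(np.diff(datain_a, axis=1) / np.diff(datain_b, axis=1))
ind_out = np.argmax(calc_smt, axis=1)  # (time, lat, lon) locations of the maxima

# take_along_axis needs the index array to match the source's ndim, so add
# a length-1 level axis for the gather and strip it off again afterwards.
calc_b = np.take_along_axis(datain_b, ind_out[:, np.newaxis, :, :], axis=1)[:, 0, :, :]
```

The plain-indexing attempts fail because `datain_b[ind_out]` indexes only the *first* axis with the whole 3-D array (hence "index 26 is out of bounds for axis 0 with size 20"), whereas `take_along_axis` pairs the index array element-wise with the gather axis.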
76,490,945
235,671
How to debug sqlite's execute method that never ends?
<p>I've got a rather strange issue with pandas' <code>read_sql_query</code> on my staging server. It never ends. The exact same code and libraries runs on my dev machine under 3 seconds and the query itself executes under half a second. It reads an SQLite database. When I copy that database to my dev machine, again, no issues. I even copied my dev setup to the staging server and it simply doesn't ever end. When I start the Task Manager the python.exe process hangs there with 50% CPU usage (the VM has 2 CPUs) for an hour or two or more.</p> <p>What can I do to crack it?</p> <hr /> <p>I tried the same query with only sqlite and it hangs too so apparently it's not an issue with pandas.</p>
<python><sqlite><debugging>
2023-06-16 14:03:20
0
19,283
t3chb0t
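A hang like the one above usually means the query is either genuinely grinding (slow staging disk, missing index) or blocked on a lock. The stdlib `sqlite3` hooks can tell the two apart: `set_trace_callback` logs each SQL statement as it starts, and `set_progress_handler` fires periodically *while* a query is executing. This is an illustrative sketch against an in-memory database, not a diagnosis of the staging box.

```python
import sqlite3

events = {"statements": [], "ticks": 0}

conn = sqlite3.connect(":memory:", timeout=5.0)       # timeout caps waiting on locks
conn.set_trace_callback(events["statements"].append)  # log every SQL statement run

def on_progress():
    events["ticks"] += 1  # invoked every N virtual-machine steps of a running query
    return 0              # returning non-zero would abort the query

conn.set_progress_handler(on_progress, 100)

conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10_000)])
total = conn.execute("SELECT sum(x) FROM t").fetchone()[0]
```

Applied to the real database: if the tick counter keeps climbing while the call "hangs", the engine is actually working (compare `EXPLAIN QUERY PLAN` on both machines and check the indexes made it to staging); if it never fires after the statement is traced, the process is blocked outside SQLite, e.g. on a lock or a network filesystem.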
76,490,890
6,930,340
Filtering a pandas multi-column index dataframe while preserving column level names
<p>How can I filter a multi-column index dataframe while preserving all column labels/names?</p> <pre><code>import pandas as pd # Create a sample DataFrame with a multi-column index df = pd.DataFrame( [[1, 2, 3, 4], [5, 6, 7, 8]], columns=[[&quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;B&quot;], [&quot;X&quot;, &quot;X&quot;, &quot;Y&quot;, &quot;X&quot;], [10, 20, 30, 40]], index=[&quot;row1&quot;, &quot;row2&quot;], ) df.columns = df.columns.set_names([&quot;level_1&quot;, &quot;level_2&quot;, &quot;level_3&quot;]) print(df) level_1 A B level_2 X Y X level_3 10 20 30 40 row1 1 2 3 4 row2 5 6 7 8 </code></pre> <p>When filtering for <code>[&quot;A&quot;, &quot;X&quot;]</code>, pandas doesn't return all level names (only <code>level_3</code>). But I would like to get a dataframe with all column level labels/names, no matter if the dataframe only consists of one single column or multiple columns.</p> <pre><code>print(df.loc[:, pd.IndexSlice[&quot;A&quot;, &quot;X&quot;]]) # level_3 10 20 # row1 1 2 # row2 5 6 # That's what I am looking for: level_1 A level_2 X level_3 10 20 row1 1 2 row2 5 6 </code></pre>
<python><pandas><multi-index>
2023-06-16 13:55:20
3
5,167
Andi
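One way to get the output asked for above is `DataFrame.xs` with `drop_level=False`, which selects on the named levels but keeps them in the result instead of discarding them. A sketch with the question's own frame:

```python
import pandas as pd

df = pd.DataFrame(
    [[1, 2, 3, 4], [5, 6, 7, 8]],
    columns=[["A", "A", "A", "B"], ["X", "X", "Y", "X"], [10, 20, 30, 40]],
    index=["row1", "row2"],
)
df.columns = df.columns.set_names(["level_1", "level_2", "level_3"])

# drop_level=False keeps the matched levels instead of discarding them
sub = df.xs(("A", "X"), axis=1, level=["level_1", "level_2"], drop_level=False)
```

`sub` still carries all three column levels with their names (`level_1`/`level_2`/`level_3`), whether the selection yields one column or several.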
76,490,841
386,861
Trying to load JSON file using either pandas or python
<pre><code>import json with open(&quot;https://www.dropbox.com/s/55vqvl1qdsdul8v/publicextract_charity.json?dl=0&quot;, &quot;r&quot;) as file: for line in file: try: json_data = json.loads(line) # Process the JSON data here except json.JSONDecodeError as e: print(f&quot;Error: {e}&quot;) </code></pre> <p>json_data Why does it return:</p> <p>NameError: name 'json_data' is not defined</p> <p>Tried to use pd.read_json but I don't know how to handle errors.</p> <p>Updated link so you can see file.</p> <p>Entire code:</p> <pre><code>import pandas as pd import altair as alt df = pd.read_json( &quot;https://www.dropbox.com/s/55vqvl1qdsdul8v/publicextract_charity.json?dl=0&quot; , encoding= 'utf-8' ) </code></pre> <p>Returns:</p> <pre><code>ValueError: Expected object or value </code></pre> <p>Approach two:</p> <pre><code>import json with open(&quot;/Users/davidelks/Dropbox/Personal/publicextract_charity.json&quot;, &quot;r&quot;) as file: for line in file: try: json_data = json.loads(line) # Process the JSON data here except json.JSONDecodeError as e: print(f&quot;Error: {e}&quot;) json_data </code></pre> <p>The resulting error above.</p>
<python><json><pandas>
2023-06-16 13:48:50
1
7,882
elksie5000
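Two separate problems seem to be at play in the question above: `open()` only accepts filesystem paths, so handing it a URL fails before any line is parsed — and since `json_data` is only bound inside the loop, nothing ever defined it, hence the `NameError`. Also, a Dropbox link ending in `?dl=0` serves an HTML preview page rather than the file; fetching it with `?dl=1` via `urllib.request` or `requests` (an assumption about this particular link) returns the raw content. A runnable sketch of the line-by-line parsing, with `io.StringIO` standing in for the downloaded file so it works offline:

```python
import io
import json

# io.StringIO stands in for the downloaded file so this sketch runs offline;
# for the real link, fetch it first with ?dl=1 (not ?dl=0, which returns an
# HTML preview page) using urllib.request or requests.
fake_file = io.StringIO('{"name": "a"}\n{"name": "b"}\nnot json\n')

records, errors = [], []
for line in fake_file:
    try:
        records.append(json.loads(line))
    except json.JSONDecodeError as exc:
        errors.append(str(exc))
```

Collecting into `records` also sidesteps the `NameError` pattern: if no line parses, the list is simply empty rather than an unbound name. If the file is one large JSON document instead of one object per line, `json.load(file)` is the right call instead of per-line `json.loads`.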
76,490,816
2,112,406
Python script to get line numbers of lines starting with a special character without reading the whole file
<p>I have a big file, and I'd like to get the line numbers of the lines that start with the character <code>&gt;</code>. Is there a way to do this without going through the file line by line?</p>
<python><file><file-io>
2023-06-16 13:45:41
0
3,203
sodiumnitrate
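Strictly speaking the answer to the question above is no: line numbers only exist once the newlines have been counted, so the bytes must be scanned once. What can be avoided is holding the file in memory — streaming it line by line keeps memory flat regardless of size. A sketch (with `io.StringIO` standing in for the big file; lines starting with `>` look like FASTA headers):

```python
import io

def header_line_numbers(handle):
    """Yield 1-based numbers of lines starting with '>'.

    Line boundaries are not stored anywhere, so the file must be scanned
    once -- but streaming it keeps memory usage flat for any file size.
    """
    for lineno, line in enumerate(handle, start=1):
        if line.startswith(">"):
            yield lineno

# io.StringIO stands in for the big file here
fake = io.StringIO(">seq1\nACGT\nACGT\n>seq2\nTTTT\n")
numbers = list(header_line_numbers(fake))
```

If the same file is queried repeatedly, it may be worth scanning once and caching the resulting line numbers (or byte offsets via `handle.tell()`) as an index.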
76,490,748
14,777,704
How to prevent background callback error -"No Process Id found" in Dash Plotly apps?
<p>I am a newbie in Plotly, Dash. I have written a background callback function. On updating it, sometimes it shows error - No process id found. Code snippet is shown below</p> <pre><code>cache = diskcache.Cache(&quot;./cache&quot;) background_callback_manager = DiskcacheManager(cache) @app.callback( Output('intermediate-value','data'), Input(component_id=input_dropdown, component_property='value') ) def cleanData(input): # the intermediate cleaned data is used in another function too if input is not None: df2=df[df['col']==input] # cleaning steps return cleaneddf2 else: return None @dash.callback( Output(component_id='outputDCC', component_property='figure'), Input('intermediate-value','data'),background=True, manager=background_callback_manager ) def createView(data): if data is not None: # do some necessary heavy calculations figure=px.imshow(............) return figure else: return None </code></pre> <p>The error occurs seldom, not often, and disappears on refreshing. Is there any way to prevent or safeguard against these errors?</p>
<python><plotly-dash>
2023-06-16 13:37:22
0
375
MVKXXX
76,490,706
3,878,398
How do I break up my glue code into multiple files?
<p>I'm starting to build out my Glue job; however, I'd like to add a custom logger (and other files) to it so that the code is cleaner. How do I break up my files and re-import them into the Glue job? Say a <code>main.py</code>, <code>utils.py</code> and <code>logging.py</code>?</p>
<python><amazon-web-services><aws-glue>
2023-06-16 13:33:25
1
351
OctaveParango
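Glue jobs can't import sibling files from the script location directly, but they accept extra modules as a zip uploaded to S3 and passed via the `--extra-py-files` job parameter, after which `import utils` works normally inside `main.py`. A runnable sketch of building such a zip (the file names are hypothetical; note the logger module is called `custom_logging` so it doesn't shadow the stdlib `logging` module):

```python
import pathlib
import zipfile

# Stand-in helper modules -- in a real project these files already exist.
pathlib.Path("utils.py").write_text("def helper():\n    return 'ok'\n")
pathlib.Path("custom_logging.py").write_text("LEVEL = 'INFO'\n")

# Zip them at the archive root; Glue's --extra-py-files parameter takes the
# S3 path of this zip and puts its contents on the job's sys.path.
with zipfile.ZipFile("glue_libs.zip", "w") as zf:
    zf.write("utils.py")
    zf.write("custom_logging.py")
```

The zip then gets uploaded to S3 and referenced in the job's parameters (key `--extra-py-files`, value the `s3://...` path) — treat the exact console/CLI steps as something to confirm against the Glue documentation for your Glue version.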
76,490,666
126,769
How to remove last N chars from a string column in python-polars?
<p>Given this dataframe:</p> <pre><code>df = pl.DataFrame({&quot;s&quot;: [&quot;pear&quot;, None, &quot;papaya&quot;, &quot;dragonfruit&quot;]}) </code></pre> <p>I want to remove the last X chars, e.g. remove the last 2 chars from the column. This obviously doesn't do what I want:</p> <pre><code>df.with_columns( pl.col(&quot;s&quot;).str.slice(2).alias(&quot;s_sliced&quot;), ) </code></pre> <p>I'd like the result to be:</p> <pre><code>shape: (4, 2) ┌─────────────┬──────────┐ │ s ┆ s_sliced │ │ --- ┆ --- │ │ str ┆ str │ ╞═════════════╪══════════╡ │ pear ┆ pe │ │ null ┆ null │ │ papaya ┆ papa │ │ dragonfruit ┆ dragonfru| </code></pre>
<python><dataframe><python-polars>
2023-06-16 13:25:39
3
230,456
nos
76,490,664
4,059,141
Syntax for passing a variable in Apache Zeppelin from an Angular paragraph to a Pyspark paragraph?
<p>I’m trying to do a very simple thing – passing a variable (ideally an array, but string will do) from an Angular paragraph to a pyspark paragraph. Unfortunately I just get the result “None” when printing the value in pyspark.</p> <p>Setting the variable in my Angular paragraph (I’m using Javascript to do this for reasons related to the bigger project I’m working on and this works fine for passing to another %angular or %spark.spark paragraph):</p> <pre><code>%angular &lt;form id=&quot;main_form&quot;&gt;&lt;/form&gt; &lt;script&gt; $( document ).ready(function() { angFormScope = angular.element(document.getElementById('main_form')).scope(); // Set Angular scope variable now document is fully loaded and ready angFormScope.testvar = &quot;hello&quot; }); &lt;/script&gt; </code></pre> <p>Trying to read the variable in my pyspark paragraph:</p> <pre><code>%spark.pyspark testvar = z.get(&quot;testvar&quot;); print(testvar); </code></pre> <p>Output: <code>None</code></p> <p>However, getting the variable in another angular paragraph works fine.</p> <p>I also seem to have access to the Zeppelin context in %pyspark and %python paragraphs, e.g. these work:</p> <pre><code>%spark.pyspark print(z.z); </code></pre> <p>Output: <code>org.apache.zeppelin.spark.SparkZeppelinContext@2cd4a775</code></p> <pre><code>%python # Gets the user's ID from the system, and puts it in the user_ID bound variable z.z.angularBind(&quot;user_ID&quot;, z.getInterpreterContext().getAuthenticationInfo().getUser()) </code></pre> <p>Output: gives the user id correctly</p> <pre><code>%spark.pyspark z.z.getInterpreterContext().getNoteId() </code></pre> <p>Output: Gives the note ID correctly.</p> <p>I notice from the python Zeppelin context source that get() should be available but the function call should be “def get(self, key)” – am I supposed to specify self somehow? 
<a href="https://github.com/apache/zeppelin/blob/master/python/src/main/resources/python/zeppelin_context.py#L58" rel="nofollow noreferrer">https://github.com/apache/zeppelin/blob/master/python/src/main/resources/python/zeppelin_context.py#L58</a></p> <p>Think I just need the right syntax here unless there’s a bug in Zeppelin blocking this from working (unfortunately it’s running in a corporate environment with Zeppelin 0.8.2).</p>
<python><apache-spark><pyspark><data-analysis><apache-zeppelin>
2023-06-16 13:25:12
1
1,100
Alex Kerr
76,490,550
2,426,635
MS SQL Server Connector [Flutter integration]
<p>I'm looking to implement a function to connect to SQL Server in a Flutter Desktop app and looking for some best practices on how to do this. The requirement is that this runs locally (mentioning this because I've seen the advice to build a web server and make an HTTP call from the Flutter app). Given the lack of Flutter/Dart libraries to connect to MS SQL Server, I would write that part of the code in another language and then integrate that into Flutter.</p> <p>Different patterns I've considered:</p> <ul> <li>Write it in C++ as a flutter Plugin. <a href="https://codelabs.developers.google.com/codelabs/flutter-github-client#3" rel="nofollow noreferrer">Example</a></li> <li>Write it in Python, build it to an exe and call that exe from Flutter, call that exe with command line arguments, and pass the result back to Flutter/Dart. <a href="https://github.com/pyinstaller/pyinstaller" rel="nofollow noreferrer">Using something like PyInstaller to package the Python</a></li> <li>Write it in Python but as part of a local Flask app. Launch that when Flutter starts up, and call the API <a href="https://stackoverflow.com/questions/53519266/python-and-dart-integration-in-flutter-mobile-application">Here is the SO post with the recommendation to launch a web server. DO this, but locally</a></li> </ul> <p>Options 2 and 3 seem more hackish than the first. Is there some guidance around design patterns or for flutter apps generally, or does anyone have experience implementing something like this?</p>
<python><c++><flutter><dart><design-patterns>
2023-06-16 13:13:10
1
626
pwwolff
76,490,517
10,396,469
print tqdm internal counter value
<p>How do I print tqdm's internal counter value? I need this to avoid using an extra enumerate wrapper. Somehow the <code>.n</code> attribute is not working:</p> <pre><code># my example: from tqdm.auto import tqdm s = 'abcd' pb = tqdm(s) for character in pb: print(pb.n, end = ' ') output &gt;&gt;&gt; 0 0 0 0 expected &gt;&gt;&gt; 0 1 2 3 </code></pre>
<python><tqdm>
2023-06-16 13:09:21
2
4,852
Poe Dator
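The `0 0 0 0` output above is explained by how tqdm batches its updates: `pb.n` is only synchronized when the display refreshes (governed by `miniters`), so reading it every iteration lags behind the true count. The dependable per-iteration counter is still `enumerate` — wrapping the tqdm iterator costs nothing extra and the bar renders as before. A sketch:

```python
from tqdm.auto import tqdm

s = "abcd"
seen = []
# pb.n is only synced at display refreshes, so enumerate is the reliable
# per-iteration counter; the progress bar itself is unaffected.
for i, character in enumerate(tqdm(s)):
    seen.append(i)
```

If avoiding `enumerate` is a hard requirement, calling `pb.update(1)` manually inside the loop (with `tqdm(total=len(s))` and a plain `for character in s:`) keeps `pb.n` exact, at the cost of managing the bar by hand.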
76,490,333
2,245,709
Pandas: How to use list variable in groupby?
<p>I have a pandas dataframe <code>df</code>:</p> <pre><code>state name age WB Jim 26 CA John 32 CA Jason 14 </code></pre> <p>where I am trying to use <code>groupby</code> <strong>state,name</strong> and find <code>max()</code> of <strong>age</strong>:</p> <pre><code>df2 = df.groupby(['state', 'name'])['age'].max().reset_index() </code></pre> <p>The above is working, but when I use a list variable instead of hardcoding column names like:</p> <pre><code>cols = ['state', 'name'] df2 = df.groupby(cols)['age'].max().reset_index() </code></pre> <p>I am getting this error:</p> <pre><code>raise TypeError(&quot;You have to supply one of 'by' and 'level'&quot;) TypeError: You have to supply one of 'by' and 'level' </code></pre> <p>How do I solve this?</p>
<python><pandas>
2023-06-16 12:43:49
1
1,115
aiman
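Passing a plain list of column names to `groupby` is fully supported, and the snippet in the question runs as shown — pandas raises "You have to supply one of 'by' and 'level'" only when `by` ends up being `None`. So the likely culprit is that `cols` was `None` (or had been reassigned) at the actual call site, not the list literal shown; printing `repr(cols)` right before the failing call would confirm. A sketch demonstrating the working call:

```python
import pandas as pd

df = pd.DataFrame(
    {"state": ["WB", "CA", "CA"], "name": ["Jim", "John", "Jason"], "age": [26, 32, 14]}
)

cols = ["state", "name"]  # a list variable works fine as `by`
df2 = df.groupby(cols)["age"].max().reset_index()
```

For reference, `df.groupby(None)` reproduces the exact `TypeError` from the question.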
76,490,274
3,879,858
Python: Fork support is only compatible with the epoll1 and poll polling strategies
<p>I am using one of the Google Cloud virtual machines to do my job and I am using the multiprocessing library, especially pooling and pandarallel. Sometimes I am getting this error:</p> <p>&quot;Fork support is only compatible with the epoll1 and poll polling strategies&quot;.</p> <p>Has anybody encountered this error? Thank you.</p>
<python><google-cloud-platform><multiprocessing>
2023-06-16 12:34:57
0
3,385
s900n
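That message comes from gRPC (pulled in by the Google Cloud client libraries): its fork handlers — needed when `multiprocessing`/pandarallel fork worker processes — only support the `epoll1` and `poll` polling strategies. The usual mitigations are to pin the strategy and enable fork support via environment variables *before* `grpc` is imported, and to create any Cloud clients after the workers fork rather than before. The variable names below are gRPC's documented knobs, but verify them against your installed grpc version:

```python
import os

# Must be set before `grpc` (or any google-cloud client) is imported.
# These are documented gRPC environment variables; confirm the exact
# semantics for your grpc release.
os.environ["GRPC_ENABLE_FORK_SUPPORT"] = "1"
os.environ["GRPC_POLL_STRATEGY"] = "poll"
```

If the error persists, restructuring so each worker process builds its own client (instead of inheriting one across `fork`) is the more robust fix.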
76,490,007
16,222,048
How do I get the subscripted class of instantiated `types.GenericAlias`?
<p>According to the <a href="https://docs.python.org/3/reference/expressions.html#subscriptions" rel="nofollow noreferrer">docs</a> on subscripting a generic class, <code>list[str]</code> will return a <code>types.GenericAlias</code> with the subscripted type of <code>list</code>.</p> <p>However, I cannot find a pythonic way of extracting this generic class from the instantiated <code>types.GenericAlias</code>.</p> <p>What is the most pythonic way to get the generic class <code>list</code> from <code>list[str]</code>?</p> <p>The only (hacky) way I can think of is this:</p> <pre><code>def is_list(type_: types.GenericAlias) -&gt; bool: return str(type_).startswith(&quot;list&quot;) </code></pre>
<python><python-typing>
2023-06-16 12:01:27
1
371
Angelo van Meurs
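The stdlib already exposes exactly this: `typing.get_origin` returns the un-subscripted class (`list` for `list[str]`) and `typing.get_args` the subscript arguments, which replaces the string-matching hack with an identity check:

```python
import typing

alias = list[str]

origin = typing.get_origin(alias)  # the generic class itself: list
args = typing.get_args(alias)      # the subscript arguments: (str,)

def is_list(type_: object) -> bool:
    # identity check against the class -- no string matching needed
    return typing.get_origin(type_) is list
```

`typing.get_origin`/`get_args` also work uniformly on `typing.List[str]`-style aliases, which `str(type_).startswith(...)` does not.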
76,489,981
1,140,048
How to shift a pcolor plot along the x axis
<p>I'd like to shift a pcolor plot along the x direction. But I'm not sure how to do it, as it's not as simple as using <strong>plot</strong> with a vector that specifies the x values</p> <p>With this code:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np np.random.seed(0) Z = np.random.rand(6, 5) tk = list(range(0,10+1)) fig, ax = plt.subplots(figsize=(7.2, 2.3)) ax.pcolor(Z) ax.set_xticks(tk) plt.show() </code></pre> <p>It produced this plot:</p> <p><a href="https://i.sstatic.net/6pNBl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6pNBl.png" alt="enter image description here" /></a></p> <p>However I want the heatmap shifted to the right to start at x = 2, for example, like the following plot:</p> <p><a href="https://i.sstatic.net/rKKCw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rKKCw.png" alt="enter image description here" /></a></p> <p>What am I missing???</p> <p>As a side question, if I swap lines 11 and 12 to:</p> <pre><code>ax.set_xticks(tk) ax.pcolor(Z) </code></pre> <p>I get this plot with the x axis contracted to the range [0,5]. I'm not sure why setting ticks before adding pcolor would do that?</p> <p><a href="https://i.sstatic.net/jSXzY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jSXzY.png" alt="enter image description here" /></a></p>
<python><matplotlib><x-axis>
2023-06-16 11:59:01
2
308
owl7
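`pcolor` accepts explicit coordinate arrays for the cell *edges* (one more edge than cells in each direction), so passing an x array that starts at 2 shifts the mesh; setting the limits explicitly afterwards keeps the full 0–10 axis. A sketch using the headless Agg backend so it runs without a display:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

np.random.seed(0)
Z = np.random.rand(6, 5)
x0 = 2  # desired left edge of the mesh

# pcolor(X, Y, Z) takes cell-edge coordinates: one more edge than cells
x_edges = np.arange(x0, x0 + Z.shape[1] + 1)  # 2, 3, ..., 7
y_edges = np.arange(Z.shape[0] + 1)           # 0, 1, ..., 6

fig, ax = plt.subplots(figsize=(7.2, 2.3))
ax.pcolor(x_edges, y_edges, Z)
ax.set_xticks(range(0, 11))
ax.set_xlim(0, 10)
```

On the side question: `set_xticks` never fixes the axis limits, so when `pcolor` runs second it autoscales the x-axis to its own data range [0, 5] regardless of the earlier ticks. Calling `ax.set_xlim(0, 10)` explicitly, as above, makes the result independent of the call order.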
76,489,963
9,363,181
Git not found even after installing on Amazon linux 2 OS
<p>I am using the <strong>AWS Python base image</strong> for deploying my project as the docker image for the lambda function. Below is my docker file for installing <code>git</code> in the docker image. Commands ran successfully but when I tried to verify git via CLI it wasn't found. Below is my docker file code:</p> <pre><code>FROM public.ecr.aws/lambda/python:3.8 # Install the function's dependencies using file requirements.txt # from your project folder. COPY requirements.txt . RUN pip3 install -r requirements.txt --target &quot;${LAMBDA_TASK_ROOT}&quot;; \ yum update -y \ yum install git -y # Copy function code COPY app.py ${LAMBDA_TASK_ROOT} ENTRYPOINT [&quot;/bin/bash&quot;, &quot;-l&quot;, &quot;-c&quot;] # Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile) CMD [ &quot;app.handler&quot; ] </code></pre> <p>I have used ENTRYPOINT to check via CLI. This is the <a href="https://docs.aws.amazon.com/lambda/latest/dg/python-image.html" rel="nofollow noreferrer">official documentation</a> where the list of base images can be found. When I checked my image I can see that <strong>git</strong> is installed.<br /> <a href="https://i.sstatic.net/Nj9wK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Nj9wK.png" alt="enter image description here" /></a></p> <p>P.S: I want to use base images only.</p> <p>How can I fix this issue?</p>
<python><amazon-web-services><git><docker>
2023-06-16 11:56:49
1
645
RushHour
76,489,951
7,635,619
Firefox profile proxy setup inside RSelenium driver
<p>I'm using <code>selenium/standalone-firefox:111.0</code> docker image with RSelenium <code>remoteDriver</code> and need to use proxy inside the browser session.<br /> <strong>First experiment:</strong> I have tried to pass the proxy info through <code>extraCapabilities</code> parameter as follows:</p> <pre><code>firefox_profile &lt;- makeFirefoxProfile(list(network.proxy.http = &quot;host&quot;, network.proxy.http_port = '22225', network.proxy.type = '1')) remDr &lt;- remoteDriver( remoteServerAddr = &quot;localhost&quot;, port = 4444L, browserName = &quot;firefox&quot;, extraCapabilities = firefox_profile ) </code></pre> <p>Then I get nothing inside the settings of the Firefox session and my IP is still being shown when I run <code>remDr$navigate(&quot;https://ipinfo.io/&quot;)</code>.</p> <p><strong>Second experiment:</strong> On the other hand, I have tried to reach the Firefox proxy settings using the following code but can't reach the proxy fields inside the connection settings.</p> <pre><code>remDr$open() remDr$navigate(&quot;about:preferences&quot;) webElem &lt;- remDr$findElement(&quot;css&quot;, &quot;#connectionSettings&quot;)$clickElement() dialogFrame &lt;- remDr$findElement(using = 'name',value = &quot;dialogFrame-1&quot;) dialogFrame$highlightElement() remDr$findElement(&quot;xpath&quot;, &quot;//input[@data-l10n-id='connection-proxy-autologin']&quot;)$clickElement() </code></pre> <p>I have already tried the following with no success: <a href="https://stackoverflow.com/questions/28670484/using-rselenium-with-firefox-and-socks5h">Using Rselenium with firefox and socks5h</a></p> <p><a href="https://i.sstatic.net/6FJoT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6FJoT.png" alt="enter image description here" /></a></p>
<python><r><selenium-webdriver><proxy><rselenium>
2023-06-16 11:55:51
0
1,269
ML_Enthousiast
76,489,928
15,283,041
Error when importing pandas "ImportError: Can't determine version for numexpr"
<p>I am having problems with importing the <code>pandas</code> package. I used the following command to import it:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd </code></pre> <p>However, I receive the following error message:</p> <pre><code>Traceback (most recent call last): Cell In[54], line 1 import pandas as pd File ~\AppData\Local\anaconda3\lib\site-packages\pandas\__init__.py:48 from pandas.core.api import ( File ~\AppData\Local\anaconda3\lib\site-packages\pandas\core\api.py:27 from pandas.core.arrays import Categorical File ~\AppData\Local\anaconda3\lib\site-packages\pandas\core\arrays\__init__.py:1 from pandas.core.arrays.arrow import ArrowExtensionArray File ~\AppData\Local\anaconda3\lib\site-packages\pandas\core\arrays\arrow\__init__.py:1 from pandas.core.arrays.arrow.array import ArrowExtensionArray File ~\AppData\Local\anaconda3\lib\site-packages\pandas\core\arrays\arrow\array.py:60 from pandas.core.arraylike import OpsMixin File ~\AppData\Local\anaconda3\lib\site-packages\pandas\core\arraylike.py:21 from pandas.core.ops.common import unpack_zerodim_and_defer File ~\AppData\Local\anaconda3\lib\site-packages\pandas\core\ops\__init__.py:38 from pandas.core.ops.array_ops import ( File ~\AppData\Local\anaconda3\lib\site-packages\pandas\core\ops\array_ops.py:57 from pandas.core.computation import expressions File ~\AppData\Local\anaconda3\lib\site-packages\pandas\core\computation\expressions.py:20 from pandas.core.computation.check import NUMEXPR_INSTALLED File ~\AppData\Local\anaconda3\lib\site-packages\pandas\core\computation\check.py:5 ne = import_optional_dependency(&quot;numexpr&quot;, errors=&quot;warn&quot;) File ~\AppData\Local\anaconda3\lib\site-packages\pandas\compat\_optional.py:157 in import_optional_dependency version = get_version(module_to_get) File ~\AppData\Local\anaconda3\lib\site-packages\pandas\compat\_optional.py:84 in get_version raise ImportError(f&quot;Can't determine version for {module.__name__}&quot;) 
ImportError: Can't determine version for numexpr </code></pre> <p>I am using the following version of Python:</p> <pre><code>Python 3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)] </code></pre> <p>Is there any way to solve this problem?</p> <p>Some important information is maybe that this computer has remote access to the server that I'm using via VPN. So I should only have access to the program when I am logged into the VPN.</p>
<python><pandas><import><python-import><importerror>
2023-06-16 11:53:38
2
493
Victor Nielsen
76,489,894
4,305,436
Mocking a SQLAlchemy Row object in Pytest
<p>I've read some answers on this website which talk about mocking SQLAlchemy objects using dictionaries. Such as <a href="https://stackoverflow.com/questions/60080521/how-would-you-unit-test-this-sqlalchemy-core-query-function">this</a>. However, I want to not have a dictionary, but an actual SQLALchemy <code>Table</code> object instantiated.</p> <p>Let me explain with sample code.</p> <p>This is the <code>User</code> table (I've shown only 2 columns but there are others)</p> <pre><code>User = Table( &quot;User&quot;, Base.metadata, Column( &quot;id&quot;, BigInteger, primary_key=True, autoincrement=True, ), Column(&quot;name&quot;, String(length=100), nullable=False), ) </code></pre> <p>This is the function that I'm testing</p> <pre><code>async def fetch_users(self, conn: AsyncConnection) -&gt; User: results = await conn.execute(select(User)) return results.all() </code></pre> <p>This function is called in a controller and there are other service/repository classes in that flow, but to explain how the return value of the above function is used, it's kind of like this in <code>user_service.py</code>.</p> <pre><code>users = await user_repository.fetch_users(conn=conn) for user in users: json_response[&quot;owner&quot;] = user.name json_response[&quot;id&quot;] = user.id . . . some other attributes of the `User` table </code></pre> <p>Now, I have a test case written for the controller like so</p> <pre><code>app = FastAPI() @pytest_asyncio.fixture() def client(): with TestClient(app=app) as c: yield c class TestUser: USER_FETCH_URL = &quot;/users&quot; @pytest.mark.asyncio @patch(&quot;repositories.users.UserRepository.fetch_users&quot;) async def test_fetch_users(self, mock_fetch_users, client): mock_fetch_users.return_value = ??? response = client.get(self.USER_FETCH_URL) assert response.status_code == 200 </code></pre> <p>I don't know what the <code>return_value</code> of <code>mock_fetch_users</code> should be. 
When I do <code>User(id=11, name=&quot;David&quot;)</code>, it throws an error - <code>TypeError: 'Table' object is not callable</code>.</p> <ol> <li>I know I can put a dictionary for the <code>return_value</code>, like <code>{&quot;name&quot;: &quot;David&quot;, &quot;id&quot;: 11}</code>, but my service layer code which accesses by <code>.id</code> and <code>.name</code> will fail when it's called by the test case.</li> <li>I also know that I can change the service layer code to access by square brackets. Square brackets will work for the SQLAlchemy <code>Row</code> object (i.e., when the function is called in the usual flow for customers), as well as when I am running my test case using a mocked <code>dict</code>. So, <code>user_service.py</code> will now have</li> </ol> <pre><code>for user in users: json_response[&quot;owner&quot;] = user[&quot;name&quot;] json_response[&quot;id&quot;] = user[&quot;id&quot;] . . . some other attributes of the `User` table </code></pre> <p>But this is not very readable. I'd prefer to have the <code>.id</code> format instead.</p> <p>So, I wanted to know if there's any way I can mock/instantiate a <code>Row</code> object with dummy data or something. My purpose is to make the service class function work for both normal API calls triggered by customers as well as the test cases I've written, while also retaining readability of the format.</p>
<python><unit-testing><sqlalchemy><pytest><python-asyncio>
2023-06-16 11:50:03
1
796
Sidharth Samant
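SQLAlchemy Core's `Row` isn't meant to be constructed by hand (and the `User` `Table` object isn't callable, which is exactly the `TypeError` above), but for a unit test any object exposing the same attributes will do. `types.SimpleNamespace` gives dot-access stand-ins in one line, so the service code can keep `user.id` / `user.name`:

```python
from types import SimpleNamespace

# Attribute-access stand-ins for SQLAlchemy Row objects (dummy data).
fake_users = [
    SimpleNamespace(id=11, name="David"),
    SimpleNamespace(id=12, name="Maria"),
]

# The service-layer loop works unchanged against the stand-ins:
json_response = {}
for user in fake_users[:1]:
    json_response["owner"] = user.name
    json_response["id"] = user.id
```

In the test this becomes `mock_fetch_users.return_value = fake_users`. If stricter shape checking is wanted, `unittest.mock.Mock(spec=["id", "name"])` or a small `NamedTuple` works the same way; and since SQLAlchemy 1.4+ `Row` objects support attribute access natively, keeping `.id`/`.name` in the service layer is fine for the production path too.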
76,489,805
10,164,750
Dynamically reading JSON file in Pyspark
<p>I want to read <code>json</code> file. Right now, I am doing the following logic, which is not that dynamic.</p> <pre><code>df = spark.read.option(&quot;multiline&quot;, True).json(loc) df = df.select(&quot;data.*&quot;, &quot;event.*&quot;, &quot;resource_id&quot;, &quot;resource_kind&quot;, &quot;resource_uri&quot;) </code></pre> <p>I will have to write <code>column.*</code> multiple times as the file is heavily nested, it has multiple <code>StructType</code></p> <p>The schema of the same is as below:</p> <pre><code>root |-- data: struct (nullable = true) | |-- accounts: struct (nullable = true) | | |-- accounting_reference_date: struct (nullable = true) | | | |-- day: string (nullable = true) | | | |-- month: string (nullable = true) | | |-- last_accounts: struct (nullable = true) | | | |-- made_up_to: string (nullable = true) | | | |-- period_end_on: string (nullable = true) | | | |-- period_start_on: string (nullable = true) | | | |-- type: string (nullable = true) | | |-- next_accounts: struct (nullable = true) | | | |-- due_on: string (nullable = true) | | | |-- overdue: boolean (nullable = true) | | | |-- period_end_on: string (nullable = true) | | | |-- period_start_on: string (nullable = true) | | |-- next_due: string (nullable = true) | | |-- next_made_up_to: string (nullable = true) | | |-- overdue: boolean (nullable = true) | |-- can_file: boolean (nullable = true) | |-- company_name: string (nullable = true) | |-- company_number: string (nullable = true) | |-- company_status: string (nullable = true) | |-- confirmation_statement: struct (nullable = true) | | |-- last_made_up_to: string (nullable = true) | | |-- next_due: string (nullable = true) | | |-- next_made_up_to: string (nullable = true) | | |-- overdue: boolean (nullable = true) | |-- date_of_creation: string (nullable = true) | |-- etag: string (nullable = true) | |-- has_charges: boolean (nullable = true) | |-- is_community_interest_company: boolean (nullable = true) | |-- jurisdiction: 
string (nullable = true) | |-- last_full_members_list_date: string (nullable = true) | |-- links: struct (nullable = true) | | |-- charges: string (nullable = true) | | |-- filing_history: string (nullable = true) | | |-- officers: string (nullable = true) | | |-- persons_with_significant_control: string (nullable = true) | | |-- persons_with_significant_control_statements: string (nullable = true) | | |-- registers: string (nullable = true) | | |-- self: string (nullable = true) | |-- previous_company_names: array (nullable = true) | | |-- element: struct (containsNull = true) | | | |-- ceased_on: string (nullable = true) | | | |-- effective_from: string (nullable = true) | | | |-- name: string (nullable = true) | |-- registered_office_address: struct (nullable = true) | | |-- address_line_1: string (nullable = true) | | |-- address_line_2: string (nullable = true) | | |-- country: string (nullable = true) | | |-- locality: string (nullable = true) | | |-- po_box: string (nullable = true) | | |-- postal_code: string (nullable = true) | | |-- region: string (nullable = true) | |-- registered_office_is_in_dispute: boolean (nullable = true) | |-- sic_codes: array (nullable = true) | | |-- element: string (containsNull = true) | |-- subtype: string (nullable = true) | |-- type: string (nullable = true) |-- event: struct (nullable = true) | |-- published_at: string (nullable = true) | |-- timepoint: long (nullable = true) | |-- type: string (nullable = true) |-- resource_id: string (nullable = true) |-- resource_kind: string (nullable = true) |-- resource_uri: string (nullable = true) </code></pre> <p>As few of the fields are having same names, I need to capture the field name from root.</p> <p>For eg. field <code>period_start_on</code> is present in both <code>last_accounts</code> and <code>next_accounts</code>. 
So, I need to make the column names as below:</p> <p><code>data.accounts.last_accounts.period_start_on</code></p> <p><code>data.accounts.next_accounts.period_start_on</code></p> <p>I don't think the approach I am taking will scale. Could you please suggest an effective way of reading the JSON? Also, how can we identify 2 fields having the same name?</p> <p>Thank you</p>
<python><apache-spark><pyspark>
2023-06-16 11:39:10
1
331
SDS
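Rather than spelling out `column.*` for each struct, a recursive walk over the schema can emit the fully qualified path of every leaf field, which also disambiguates repeated names like `period_start_on`. The recursion below uses plain dicts as a stand-in for `StructType` so it runs without Spark; with pyspark the same walk goes over `df.schema.fields`, testing `isinstance(field.dataType, StructType)`, and the resulting paths can feed `df.select([F.col(p).alias(p.replace(".", "_")) for p in paths])` to get unique flat column names:

```python
def flatten_paths(schema, prefix=""):
    """Return the dotted path of every leaf field in a nested schema.

    `schema` maps field name -> sub-dict (struct) or type-name string (leaf);
    it stands in for pyspark's StructType so the sketch runs without Spark.
    """
    paths = []
    for name, dtype in schema.items():
        full = f"{prefix}.{name}" if prefix else name
        if isinstance(dtype, dict):            # nested struct: recurse
            paths.extend(flatten_paths(dtype, full))
        else:                                  # leaf field
            paths.append(full)
    return paths

schema = {
    "data": {
        "accounts": {
            "last_accounts": {"period_start_on": "string"},
            "next_accounts": {"period_start_on": "string"},
        },
        "company_name": "string",
    },
    "resource_id": "string",
}

paths = flatten_paths(schema)
```

Duplicate *leaf* names are detected automatically here because each path carries its full ancestry — the two `period_start_on` fields come out as distinct dotted paths.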
76,489,804
16,723,655
Shortest distance from one point at sea to coast with information of lat/long
<p>I need to find the shortest distance from a vessel to the coast. I have latitude and longitude information only. Is there any library or code to do this?</p> <pre><code>from shapely.geometry import Point from shapely.ops import nearest_points from geopy.distance import geodesic import geopandas as gpd # Read coastline data coastline_data = gpd.read_file('./ne_10m_coastline.shp') coastline = coastline_data.geometry.unary_union # Define target coordinate target_coordinate = Point(36.20972222, 125.7061111) # Find nearest point on coastline nearest_point = nearest_points(target_coordinate, coastline)[0] # Calculate distance distance = geodesic((target_coordinate.x, target_coordinate.y), (nearest_point.x, nearest_point.y)).kilometers print(f&quot;The closest point to the coast is at {nearest_point} and the distance is {distance} kilometers.&quot;) </code></pre> <p>I tried this code, however the nearest point is the same as the target and the distance is zero. What's wrong here?</p>
<python><distance>
2023-06-16 11:39:06
2
403
MCPMH
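Two things appear to go wrong in the snippet above: shapely points are `(x, y)`, i.e. `(lon, lat)`, so `Point(36.2..., 125.7...)` places the vessel with the coordinates swapped; and `nearest_points(a, b)` returns `(point-on-a, point-on-b)`, so index `[0]` is just the vessel itself — which is exactly why the "nearest point" equals the target and the distance is zero. A minimal sketch with a toy coastline (hypothetical coordinates standing in for the dissolved Natural Earth geometry):

```python
from shapely.geometry import LineString, Point
from shapely.ops import nearest_points

# Toy stand-in for the dissolved coastline geometry (hypothetical coords).
coastline = LineString([(125.0, 34.0), (125.0, 38.0)])

vessel = Point(125.7061111, 36.20972222)  # (lon, lat), NOT (lat, lon)

# Index [1] is the point on the coastline; index [0] is the point on
# `vessel`, i.e. the vessel itself -- which made the distance come out 0.
on_coast = nearest_points(vessel, coastline)[1]
```

geopy's `geodesic` then wants `(lat, lon)` tuples: `geodesic((vessel.y, vessel.x), (on_coast.y, on_coast.x)).kilometers`. One caveat: the nearest point is found in planar lon/lat space, which is only approximate; for rigor, project both geometries to a metric CRS before the nearest-point search.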
76,489,736
14,244,437
Convert ZoneInfo to Pytz timezone name
<p>I have a datetime string such as <code>date = &quot;2023-06-16T07:46:00-03:00&quot;</code></p> <p>Extracting timezone information from it give me the following results:</p> <pre><code>&gt;&gt;&gt; formatted_date = datetime.strptime(date, &quot;%Y-%m-%dT%H:%M:%S%z&quot;) &gt;&gt;&gt; formatted_date.tzinfo datetime.timezone(datetime.timedelta(days=-1, seconds=75600)) &gt;&gt;&gt; formatted_date.tzinfo.tzname(formatted_date) 'UTC-03:00' </code></pre> <p>The problem is, I need the timezone information to be something like <code>America/Los_Angeles</code>, in order to use with PostgreSQL EXTRACT function.</p> <p>Is there a package or a function in Pytz that convert this tzinfo to a name such as the example I gave?</p>
<python><python-3.x><datetime><timezone><pytz>
2023-06-16 11:26:39
1
481
andrepz
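A fixed offset like `-03:00` cannot be converted back to a single IANA name — many zones share it, and any one zone's offset changes with DST — so no library offers an exact inverse. What is possible is enumerating *candidate* zones whose offset at that instant matches, with only the stdlib `zoneinfo`. If PostgreSQL's `EXTRACT`/`AT TIME ZONE` truly needs a named zone, that name has to come from elsewhere (e.g. user profile data); alternatively, Postgres also accepts a numeric offset, which the parsed datetime already carries.

```python
from datetime import datetime
from zoneinfo import ZoneInfo, available_timezones

dt = datetime.strptime("2023-06-16T07:46:00-03:00", "%Y-%m-%dT%H:%M:%S%z")

# All IANA zones whose UTC offset at this instant equals the parsed offset.
# The mapping offset -> name is one-to-many, so this narrows the field but
# cannot identify THE zone.
candidates = sorted(
    name
    for name in available_timezones()
    if dt.astimezone(ZoneInfo(name)).utcoffset() == dt.utcoffset()
)
```

On 2023-06-16 the candidate set includes the year-round UTC-3 zones (e.g. `America/Sao_Paulo`, `America/Argentina/Buenos_Aires`) but not, say, `America/Los_Angeles`, which is UTC-7 in June.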
76,489,606
3,641,140
Debug from interactive window in Visual Studio (python)
<p>(Visual Studio Professional 2022, Version 17.4.3)</p> <p>I'd like to debug a python function called from an interactive window. That is, I'd like to be able to use the interactive window as my main workflow tool, and occasionally &quot;step into&quot; my function calls by setting breakpoints in the source code. In other words, always work in &quot;interactive debug mode&quot;.</p> <p>How can I configure VS to allow for this?</p>
<python><visual-studio><debugging><interactive>
2023-06-16 11:10:19
0
319
SuperUser01
76,489,554
303,513
What is the most efficient way to save a trained pytorch model?
<p>I am training my collaborative filtering model using pytorch and saving the trained model to disk using the torch.save method. However, the resulting file is 5GB due to the massive dataset.</p> <pre><code>torch.save('model.pkl') </code></pre> <p>I have tried using the compress-pickle package, but it actually increases the file size. Is there a more efficient way to pickle the model and reduce the file size? If so, how can I achieve this?</p> <p>UPDATE:</p> <p>I'm using fastai's export method:</p> <pre><code>def export(self:Learner, fname='export.pkl', pickle_module=pickle, pickle_protocol=2): &quot;Export the content of `self` without the items and the optimizer state for inference&quot; if rank_distrib(): return # don't export if child proc self._end_cleanup() old_dbunch = self.dls self.dls = self.dls.new_empty() state = self.opt.state_dict() if self.opt is not None else None self.opt = None with warnings.catch_warnings(): #To avoid the warning that come from PyTorch about model not being checked warnings.simplefilter(&quot;ignore&quot;) torch.save(self, self.path/fname, pickle_module=pickle_module, pickle_protocol=pickle_protocol) self.create_opt() if state is not None: self.opt.load_state_dict(state) self.dls = old_dbunch </code></pre>
<python><machine-learning><pytorch>
2023-06-16 11:03:46
0
46,260
Silver Light
76,489,374
14,976,580
Merge two dataframes with a single column in pandas
<p>I am trying to merge two dataframes with a single column. I have looked for the solution in other questions, but without success.</p> <p>Given the dataframes</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df1 = pd.DataFrame({'x': [1, 2, 3, 4]}) df2 = pd.DataFrame({'x': [1, 2, 3, 5, 6]} </code></pre> <p>the desired output is</p> <pre class="lang-py prettyprint-override"><code>x_1 | x_2 1 | 1 2 | 2 3 | 3 4 | NaN NaN | 5 NaN | 6 </code></pre> <p>How can I achieve this result using pandas?</p> <p>If I understand correctly, I cannot exploit either the column itself or the indexes to use <code>pd.merge</code>.<br /> I could not get the result even using <code>pd.concat([df1, df2], axis=1)</code>.</p> <p>Thank you for your time and help</p>
<python><pandas><dataframe>
2023-06-16 10:38:58
0
2,515
Leonardo
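The alignment the question above asks for (equal values side by side, `NaN` where a value is missing from one frame) can be sketched as an outer merge on the values themselves; the column names `x_1`/`x_2` follow the desired output:

```python
import pandas as pd

df1 = pd.DataFrame({'x': [1, 2, 3, 4]})
df2 = pd.DataFrame({'x': [1, 2, 3, 5, 6]})

# Rename first so both value columns survive the merge, then align by value.
merged = pd.merge(
    df1.rename(columns={'x': 'x_1'}),
    df2.rename(columns={'x': 'x_2'}),
    left_on='x_1', right_on='x_2', how='outer',
)
print(merged)
```

The row order of an outer merge is not guaranteed across pandas versions, so sort afterwards if the displayed ordering matters.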
76,489,247
539,490
Optional type argument for Python function to map over list
<p>Is there an idiomatic way to define a typed function in Python that ignores the <code>int</code> index parameter passed by <code>map</code> function?</p> <p>It should allow for the typed function to be called from elsewhere without raising a type error. For example I'm using the following function:</p> <pre class="lang-py prettyprint-override"><code>def get_empty_chosen_number_count () -&gt; ChosenNumberCount: return {&quot;n1&quot;: 0, &quot;n2&quot;: 0, &quot;n3&quot;: 0, &quot;n4&quot;: 0, &quot;n5&quot;: 0, &quot;n6&quot;: 0, &quot;n7&quot;: 0, &quot;n8&quot;: 0, &quot;n9&quot;: 0} </code></pre> <p>Where <code>ChosenNumberCount</code> is defined as:</p> <pre class="lang-py prettyprint-override"><code>class ChosenNumberCount (TypedDict): n1: int # ... n9: int </code></pre> <p>And if I just define it as above then calling it with the <code>map</code> function e.g:</p> <pre class="lang-py prettyprint-override"><code>map(get_empty_chosen_number_count, range(5)) </code></pre> <p>It gets a type error of <code>Argument of type &quot;() -&gt; ChosenNumberCount&quot; cannot be assigned to parameter &quot;__func&quot; of type &quot;(_T1@__init__) -&gt; _S@map&quot; in function &quot;__init__&quot;</code></p> <p>If I define it as: <code>def get_empty_chosen_number_count (_: int) -&gt; ChosenNumberCount: #...</code> that type error disappears but then I can't call the function as <code>get_empty_chosen_number_count()</code> because it expects an argument.</p>
<python><python-typing>
2023-06-16 10:21:58
1
29,009
AJP
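For the question above: `map` passes each element of the iterable (here the ints produced by `range(5)`), not an index, so one sketch is to discard that element explicitly — in a wrapper lambda, or by dropping `map` for a comprehension. The `ChosenNumberCount` below is trimmed to two keys for brevity:

```python
from typing import TypedDict

class ChosenNumberCount(TypedDict):
    n1: int
    n2: int

def get_empty_chosen_number_count() -> ChosenNumberCount:
    return {"n1": 0, "n2": 0}

# Option 1: a wrapper lambda discards the element map passes in,
# so the zero-argument function keeps its clean signature.
counts = list(map(lambda _: get_empty_chosen_number_count(), range(5)))

# Option 2: a comprehension sidesteps map's one-argument contract entirely.
counts2 = [get_empty_chosen_number_count() for _ in range(5)]
print(len(counts), len(counts2))  # 5 5
```

Both variants keep `get_empty_chosen_number_count()` callable with no arguments elsewhere, which the `_: int` parameter workaround in the question does not.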
76,489,061
17,427,519
Generate all possible combinations with constraints
<p>I want to create all possible combinations of arrays with size (N) given that elements can be [-1, 0, 1], however it is only allowed to have at most 2 elements [-1, 1] while all others should be 0.</p> <p>A recursive approach can suffice for the N&lt;1000 however I am looking for efficient (both memory and computationally) way to generate up until N=10000.</p> <p>The attempt for recursive case and result for N=6 is as follow;</p> <pre><code>def generate_combinations(N): elements = [-1, 0, 1] combinations = [] generate_combinations_recursive(elements, N, [], 0, 0, combinations) return combinations def generate_combinations_recursive(elements, repetitions, current_combination, num_nonzero, index, combinations): if index == repetitions: combinations.append(tuple(current_combination)) return for element in elements: if element != 0: if num_nonzero &lt; 2: generate_combinations_recursive(elements, repetitions, current_combination + [element], num_nonzero + 1, index + 1, combinations) else: generate_combinations_recursive(elements, repetitions, current_combination + [element], num_nonzero, index + 1, combinations) combinations = generate_combinations(N=6) </code></pre> <p><strong>Results</strong></p> <pre><code>[(-1, -1, 0, 0, 0, 0), (-1, 0, -1, 0, 0, 0), (-1, 0, 0, -1, 0, 0), (-1, 0, 0, 0, -1, 0), (-1, 0, 0, 0, 0, -1), (-1, 0, 0, 0, 0, 0), (-1, 0, 0, 0, 0, 1), (-1, 0, 0, 0, 1, 0), (-1, 0, 0, 1, 0, 0), (-1, 0, 1, 0, 0, 0), (-1, 1, 0, 0, 0, 0), (0, -1, -1, 0, 0, 0), (0, -1, 0, -1, 0, 0), (0, -1, 0, 0, -1, 0), (0, -1, 0, 0, 0, -1), (0, -1, 0, 0, 0, 0), (0, -1, 0, 0, 0, 1), (0, -1, 0, 0, 1, 0), (0, -1, 0, 1, 0, 0), (0, -1, 1, 0, 0, 0), (0, 0, -1, -1, 0, 0), (0, 0, -1, 0, -1, 0), (0, 0, -1, 0, 0, -1), (0, 0, -1, 0, 0, 0), (0, 0, -1, 0, 0, 1), (0, 0, -1, 0, 1, 0), (0, 0, -1, 1, 0, 0), (0, 0, 0, -1, -1, 0), (0, 0, 0, -1, 0, -1), (0, 0, 0, -1, 0, 0), (0, 0, 0, -1, 0, 1), (0, 0, 0, -1, 1, 0), (0, 0, 0, 0, -1, -1), (0, 0, 0, 0, -1, 0), (0, 0, 0, 0, -1, 1), (0, 0, 0, 0, 0, -1), 
(0, 0, 0, 0, 0, 0), (0, 0, 0, 0, 0, 1), (0, 0, 0, 0, 1, -1), (0, 0, 0, 0, 1, 0), (0, 0, 0, 0, 1, 1), (0, 0, 0, 1, -1, 0), (0, 0, 0, 1, 0, -1), (0, 0, 0, 1, 0, 0), (0, 0, 0, 1, 0, 1), (0, 0, 0, 1, 1, 0), (0, 0, 1, -1, 0, 0), (0, 0, 1, 0, -1, 0), (0, 0, 1, 0, 0, -1), (0, 0, 1, 0, 0, 0), (0, 0, 1, 0, 0, 1), (0, 0, 1, 0, 1, 0), (0, 0, 1, 1, 0, 0), (0, 1, -1, 0, 0, 0), (0, 1, 0, -1, 0, 0), (0, 1, 0, 0, -1, 0), (0, 1, 0, 0, 0, -1), (0, 1, 0, 0, 0, 0), (0, 1, 0, 0, 0, 1), (0, 1, 0, 0, 1, 0), (0, 1, 0, 1, 0, 0), (0, 1, 1, 0, 0, 0), (1, -1, 0, 0, 0, 0), (1, 0, -1, 0, 0, 0), (1, 0, 0, -1, 0, 0), (1, 0, 0, 0, -1, 0), (1, 0, 0, 0, 0, -1), (1, 0, 0, 0, 0, 0), (1, 0, 0, 0, 0, 1), (1, 0, 0, 0, 1, 0), (1, 0, 0, 1, 0, 0), (1, 0, 1, 0, 0, 0), (1, 1, 0, 0, 0, 0)] </code></pre>
<python><algorithm><numpy><recursion><combinations>
2023-06-16 10:00:40
4
598
Slybot
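The at-most-two-nonzero structure above can be generated directly rather than recursively: one all-zero tuple, every single ±1 placement, and every signed pair. A sketch as a generator, so nothing is materialized until needed:

```python
from itertools import combinations, product

def generate_combinations(n):
    """Yield every length-n tuple over {-1, 0, 1} with at most two nonzero entries."""
    yield (0,) * n                          # no nonzero entries
    for i in range(n):                      # exactly one nonzero entry
        for sign in (-1, 1):
            row = [0] * n
            row[i] = sign
            yield tuple(row)
    for i, j in combinations(range(n), 2):  # exactly two nonzero entries
        for si, sj in product((-1, 1), repeat=2):
            row = [0] * n
            row[i], row[j] = si, sj
            yield tuple(row)

result = list(generate_combinations(6))
print(len(result))  # 2*6**2 + 1 = 73
```

The total count is 1 + 2N + 4·C(N, 2) = 2N² + 1, so N = 10000 yields roughly 2·10⁸ tuples: streaming them from the generator is feasible, but storing them all at once is not.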
76,489,049
10,413,428
QCoreApplication.translate() and mypy result in error
<p>When using:</p> <pre class="lang-py prettyprint-override"><code>self.translate(&quot;MyCustomClass&quot;,&quot;This text should be translated&quot;, None) </code></pre> <p>in non QObject based classes, mypy will report each call as <code>Mypy: Argument 1 has incompatible type &quot;str&quot;; expected &quot;bytes&quot; [arg-type]</code>. However, the translations work as expected.</p> <p>But when I try to fix the error detected by mypy with:</p> <pre class="lang-py prettyprint-override"><code>self.translate(b&quot;MyCustomClass&quot;,b&quot;This text should be translated&quot;, None) </code></pre> <p>I get the following error:</p> <pre class="lang-bash prettyprint-override"><code>ValueError: 'PySide6.QtCore.QCoreApplication.translate' called with wrong argument values: PySide6.QtCore.QCoreApplication.translate(b'MyCustomClass', b'This text should be translated', None) Found signature: PySide6.QtCore.QCoreApplication.translate(bytes, bytes, Optional[bytes] = None, int = -1) </code></pre> <p>Is there I fix for this error, which is apparently no real error?</p> <p>The translate method is defined inside the QCoreApplication is defined with bytes:</p> <pre class="lang-py prettyprint-override"><code>def translate(self, context, key, disambiguation, bytes=None, *args, **kwargs): &quot;&quot;&quot; translate(context: bytes, key: bytes, disambiguation: Optional[bytes] = None, n: int = -1) -&gt; str &quot;&quot;&quot; pass </code></pre> <p>which is propertly the reason why mypy complains</p>
<python><python-3.x><pyside><pyside6>
2023-06-16 09:59:13
1
405
sebwr
76,489,042
4,948,798
Filter specified file extensions to another file
<p>I have many files in different paths with various file extensions.</p> <p>My <code>input.txt</code> content as follows,</p> <pre><code>/path/to/dir1/readme.html /path/to/dir1/file.c /path/to/dir1/file1.c /path/to/dir1/a.html /path/to/dir2/abc.java /path/to/dir1/sample.js /path/to/dir2/a.bin /path/to/dir1/as.json ....................... ........................... .............................. </code></pre> <p>I need to filter and move the specified extension files from all occurrences of <code>input.txt</code> file to <code>output.txt</code> file.</p> <p>For this, i have below script.</p> <pre class="lang-py prettyprint-override"><code>import shutil input_file = 'input.txt' output_file = 'output.txt' file_extensions = ['.html', '.c', '.cpp', '.h', '.py', '.txt', '.js', '.json', '.csv'] with open(input_file, 'r') as input_file, open(output_file, 'w') as output_file: for line in input_file: file_path = line.strip() if any(file_path.endswith(ext) for ext in file_extensions): output_file.write(file_path + '\n') shutil.move(file_path, file_path + '.processed') print('Matching file moved to output.txt.') </code></pre> <p>Expected <code>output.txt</code> should be like below.</p> <pre><code>/path/to/dir1/readme.html /path/to/dir1/file.c /path/to/dir1/file1.c /path/to/dir1/sample.js /path/to/dir1/as.json </code></pre> <p>Above script doesn't work, it fails with below errors</p> <pre><code>Traceback (most recent call last): File &quot;/usr/lib/python3.8/shutil.py&quot;, line 791, in move os.rename(src, real_dst) FileNotFoundError: [Errno 2] No such file or directory: '/path/to/dir1/readme.html' -&gt; '/path/to/dir1/readme.html.processed' During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;split_src_libs.py&quot;, line 24, in &lt;module&gt; shutil.move(file_path, file_path + '.processed') File &quot;/usr/lib/python3.8/shutil.py&quot;, line 811, in move copy_function(src, real_dst) File 
&quot;/usr/lib/python3.8/shutil.py&quot;, line 435, in copy2 copyfile(src, dst, follow_symlinks=follow_symlinks) File &quot;/usr/lib/python3.8/shutil.py&quot;, line 264, in copyfile with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst: FileNotFoundError: [Errno 2] No such file or directory: '/path/to/dir1/readme.html' </code></pre> <p>What is causing this issue?</p> <p>Any help would be appreciated to filter &amp; move the files to <code>output.txt</code> file.</p> <p>Note: Moved files shouldn't exist in the <code>input.txt</code> file.</p>
<python><python-3.x>
2023-06-16 09:58:09
2
2,138
user4948798
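The traceback above comes from `shutil.move` being handed paths that are listed in `input.txt` but absent on disk. A hedged sketch that guards with `os.path.exists` and also rewrites the input file without the processed lines (the function name and demo layout are illustrative, not the asker's real paths):

```python
import os
import shutil
import tempfile

FILE_EXTENSIONS = ('.html', '.c', '.cpp', '.h', '.py', '.js', '.json', '.csv')

def filter_and_move(input_path, output_path):
    """Write matching paths to output_path, move the files that exist on disk,
    and rewrite input_path without the processed lines."""
    remaining = []
    with open(input_path) as src, open(output_path, 'w') as dst:
        for line in src:
            file_path = line.strip()
            if file_path.endswith(FILE_EXTENSIONS):
                dst.write(file_path + '\n')
                # Only move files that actually exist; paths listed in the
                # input but missing on disk are what raised FileNotFoundError.
                if os.path.exists(file_path):
                    shutil.move(file_path, file_path + '.processed')
            else:
                remaining.append(line)
    # Processed lines should no longer appear in the input file.
    with open(input_path, 'w') as src:
        src.writelines(remaining)

# Hypothetical demo layout: one real file, one listed-but-missing path.
workdir = tempfile.mkdtemp()
real_file = os.path.join(workdir, 'readme.html')
missing_file = os.path.join(workdir, 'file.c')
other_file = os.path.join(workdir, 'a.bin')
with open(real_file, 'w') as fh:
    fh.write('hello')
input_list = os.path.join(workdir, 'input.list')
output_list = os.path.join(workdir, 'output.list')
with open(input_list, 'w') as fh:
    fh.write('\n'.join([real_file, missing_file, other_file]) + '\n')

filter_and_move(input_list, output_list)
out_lines = open(output_list).read().splitlines()
remaining_lines = open(input_list).read().splitlines()
print(out_lines, remaining_lines)
```

Note that `str.endswith` accepts a tuple directly, which replaces the `any(...)` generator in the question.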
76,488,984
9,360,793
Redefine a function from an existing one by specifying one keyword argument
<p>I want to create a dictionary with string keys and some existing functions as corresponding values.</p> <p>For example:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np my_useful_funtions_dicts = {'sum': np.sum, 'dot': np.dot} </code></pre> <p>This works great. But I wanted to use a function from scikit-learn while specifying a keyword argument. For example using <code>sklearn.metrics.roc_auc_score(*args, average='weighted')</code></p> <p>I would like to run something like this:</p> <pre class="lang-py prettyprint-override"><code>from sklearn import metrics my_dict = {'weighted_roc_auc': metrics.roc_auc_score(*args, average='weighted')} </code></pre>
<python><python-3.x><function><scikit-learn><keyword-argument>
2023-06-16 09:50:10
2
869
inarighas
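`functools.partial` freezes a keyword argument while leaving a plain callable, which is what the dictionary above needs; with scikit-learn installed the entry would presumably read `partial(metrics.roc_auc_score, average='weighted')`. The sketch below uses a stand-in function so it runs without scikit-learn:

```python
from functools import partial

def score(values, average='macro'):
    """Stand-in for a library metric such as sklearn.metrics.roc_auc_score."""
    return (sum(values) / len(values), average)

# partial(...) returns a new callable with average pre-bound; nothing is
# evaluated until the dictionary value is actually called.
my_dict = {
    'mean': score,
    'weighted_mean': partial(score, average='weighted'),
}
print(my_dict['weighted_mean']([1, 2, 3]))  # (2.0, 'weighted')
```

Unlike `metrics.roc_auc_score(*args, average='weighted')` in the question, the `partial` is not called at dictionary-construction time, so no `*args` are needed there.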
76,488,711
6,199,181
httpx.RemoteProtocolError while processing a 1GB+ file
<p>My code downloads huge file with httpx and process it's chunks on the fly</p> <pre class="lang-py prettyprint-override"><code>async with httpx.AsyncClient() as client: async with client.stream(&quot;GET&quot;, self.url, follow_redirects=True, timeout=60) as stream: async for chunk in stream.aiter_text(): parser.feed(chunk) await self._process_element(parser) </code></pre> <p>When I run it on my notebook it works good but in the the kuber cluster on dedicated pod I have got error: <code>httpx.RemoteProtocolError: peer closed connection without sending complete message body</code> after about 300K <code>_process_element()</code>'s.</p> <p>Doc for h11 says, that &quot;maximum number of bytes we’re willing to buffer of an incomplete event. In practice this mostly sets a limit on the maximum size of the request/response line + headers. If this is exceeded, then <code>next_event()</code> will raise <code>RemoteProtocolError</code>.&quot; Does it means my code working too slow and cannot manage incoming stream? And the 2nd: can I increase buffer for incoming stream? There is no interface for this in HTTPX as far as I know.</p> <p>Any advises welcome. Thank you.</p>
<python><httpx>
2023-06-16 09:16:26
1
1,517
Serge
76,488,710
8,548,828
How can I align two bokeh lines with different periods for the datapoints into one figure
<p>So I have two datasets:</p> <pre><code>a = [0,1,2,5,4,3,1] b = [9,2,1,3,6,8,5] </code></pre> <p>Dataset <code>a</code> has a period of 416 femtoseconds, dataset <code>b</code> has a period of 25 nanoseconds. This is because my application samples a electronic signal with two oscilloscopes and creates two waveforms, <code>a</code> and <code>b</code>.</p> <p>I want to plot these signals in one bokeh figure. Just taking the arrays and plotting them creates a misaligned view of the information.</p> <pre><code>p = figure(plot_width=500, plot_height=200) xrange = range(len(a)) p.line(xrange, a, line_color=&quot;blue&quot;) p.line(xrange, b, line_color=&quot;red&quot;) </code></pre> <p><a href="https://i.sstatic.net/LWHQI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LWHQI.png" alt="enter image description here" /></a></p> <p>How can I align these signals such that they are correctly aligned in regards to time? One of the problems is that the <code>datetime</code> object doesn't have enough precision to work with femtosecond time, otherwise I would've converted my xaxis ticker to a datetime one.</p>
<python><bokeh>
2023-06-16 09:16:23
2
3,266
Tarick Welling
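Rather than `datetime` objects, the two waveforms above can be aligned on plain float time axes in seconds, which have no femtosecond-resolution problem. A sketch (the periods are taken from the question; the bokeh calls are shown as comments):

```python
import numpy as np

a = np.array([0, 1, 2, 5, 4, 3, 1])
b = np.array([9, 2, 1, 3, 6, 8, 5])

PERIOD_A = 416e-15  # 416 femtoseconds, expressed in seconds
PERIOD_B = 25e-9    # 25 nanoseconds, expressed in seconds

# Each waveform gets its own physical time axis instead of a bare index.
t_a = np.arange(len(a)) * PERIOD_A
t_b = np.arange(len(b)) * PERIOD_B

# With bokeh this would become:
# p.line(t_a, a, line_color="blue")
# p.line(t_b, b, line_color="red")
print(t_a[-1], t_b[-1])
```

Since the two periods differ by several orders of magnitude, waveform `a` will collapse near t = 0 on a shared linear axis; rescaling both axes to a common unit or using twin x-ranges may be needed for a readable plot.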
76,488,697
17,560,347
better way to calculate mean of each row in polars dataframe
<p>I wrote this:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame( { &quot;nrs&quot;: [1, 2, 3, None, 5], &quot;names&quot;: [1., 2., 3., None, 5], } ) mean = df.mean(axis=1).to_frame('mean') </code></pre> <p>Can I use <code>select()</code> and <code>alias()</code> to achieve better performance and readability?</p>
<python><python-polars>
2023-06-16 09:14:40
3
561
吴慈霆
76,488,582
1,484,601
python: proper way to run an async routine in a pytest fixture?
<p>The test below passes, but I have doubts that I am using asyncio correctly:</p> <ul> <li>The code mixes asyncio and threading</li> <li>The test is passing but never exits (probably because the &quot;loop.run_until_complete&quot; never ends)</li> </ul> <pre class="lang-py prettyprint-override"><code>import asyncio import threading import pytest import websockets async def echo(websocket): async for message in websocket: await websocket.send(message) async def websocket_server(): async with websockets.serve(echo, &quot;localhost&quot;, 8765): await asyncio.Future() def _run_server(): loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) loop.run_until_complete(websocket_server()) loop.close() @pytest.fixture def run_server(): thread = threading.Thread(target=_run_server) thread.start() yield thread # no idea how to stop the loop here thread.join() @pytest.mark.asyncio async def test_websocket(run_server): async with websockets.connect(&quot;ws://localhost:8765&quot;) as websocket: await websocket.send(&quot;Hello!&quot;) response = await websocket.recv() assert response == &quot;Hello!&quot; </code></pre> <p>(note: for stopping the loop I attempted the solution proposed here (<a href="https://stackoverflow.com/questions/56663152/how-to-stop-websocket-server-created-with-websockets-serve">How to stop websocket server created with websockets.serve()?</a>) but this resulted in the server not starting)</p>
<python><websocket><pytest><python-asyncio><python-multithreading>
2023-06-16 09:00:34
3
4,521
Vince
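One way to let the fixture above exit is to drive the background loop with `run_forever` and stop it from the test thread via `call_soon_threadsafe`. A minimal sketch with plain asyncio (no websockets); `BackgroundLoop` is an illustrative name, and in the real fixture the server coroutine would be scheduled with `run_coroutine_threadsafe` instead of the `asyncio.sleep` placeholder:

```python
import asyncio
import threading

class BackgroundLoop:
    """Run an asyncio event loop in a thread and stop it cleanly from outside."""

    def start(self):
        self.loop = asyncio.new_event_loop()
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        asyncio.set_event_loop(self.loop)
        self.loop.run_forever()

    def stop(self):
        # loop.stop() must run inside the loop's own thread; once it does,
        # run_forever returns and the thread can be joined.
        self.loop.call_soon_threadsafe(self.loop.stop)
        self.thread.join()
        self.loop.close()

bg = BackgroundLoop()
bg.start()
# A fixture would schedule the websocket server here, e.g.
# asyncio.run_coroutine_threadsafe(websocket_server(), bg.loop)
future = asyncio.run_coroutine_threadsafe(asyncio.sleep(0, result='ok'), bg.loop)
print(future.result(timeout=5))  # ok
bg.stop()
```

The fixture's teardown (after `yield`) would then call `bg.stop()`, replacing the `loop.run_until_complete(asyncio.Future())` pattern that never returns.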
76,488,480
8,204,956
Using `django_get_or_create` with onetoone related field
<p>Given this django model</p> <pre class="lang-py prettyprint-override"><code>from django.db import Model from django.contrib.auth.models import User class Customer(models.Model): user = models.OneToOneField(User, on_delete=models.PROTECT) some_other_field = model.CharField(...) </code></pre> <p>I have created 2 factory for the user and the customer model:</p> <pre><code>import factory class UserFactory(factory.django.DjangoModelFactory): class Meta: model = User django_get_or_create = ('username',) first_name = factory.Faker(&quot;first_name&quot;, locale=&quot;fr_FR&quot;) last_name = factory.Faker(&quot;last_name&quot;, locale=&quot;fr_FR&quot;) username = factory.LazyAttribute(lambda m: f&quot;{m.first_name[0]}{m.last_name[0]}&quot;.lower()) email = factory.LazyAttribute(lambda m: f&quot;{m.first_name.lower()}.{m.last_name.lower()}@ielo.net&quot;) customer = factory.RelatedFactory(CustomerFactory, factory_related_name=&quot;user&quot;, user=None) is_staff = False class CustomerFactory(factory.django.DjangoModelFactory): class Meta: model = &quot;customers.Customer&quot; user = factory.SubFactory('myapp.tests.fixtures.UserFactory', customer=None) </code></pre> <p>To avoid flaky tests, I have set the <code>django_get_or_create</code>, since most of the time I just want a user, and I create specific classes for specific cases (<code>UserIsStaffFactory</code>, <code>UserSuperAdminFactory</code>)</p> <p>I copied the <code>RelatedFactory/SubFactory</code> from <a href="https://factoryboy.readthedocs.io/en/stable/recipes.html#example-django-s-profile" rel="nofollow noreferrer">https://factoryboy.readthedocs.io/en/stable/recipes.html#example-django-s-profile</a> but If I run:</p> <pre><code>u1 = UserFactory(username='foo') u2 = UserFactory(username='foo') # raise IntegrityError, UNIQUE constraint failed: customers_customer.user_i </code></pre>
<python><django><factory-boy>
2023-06-16 08:48:02
1
938
Rémi Desgrange
76,488,460
12,415,855
Align watermark horizontally in the middle in PDF?
<p>i try to align a watermark in a pdf so it is exactly in the middle at the bottom with the following code:</p> <pre><code>from reportlab.pdfgen import canvas from reportlab.lib.units import inch from reportlab.lib import colors from reportlab.lib.pagesizes import A4 from reportlab.pdfbase.pdfmetrics import stringWidth # text = &quot;test test test&quot; text = &quot;Peter Schmidt, Musterstraße 1, 50767 Köln&quot; pdf = canvas.Canvas(&quot;watermark.pdf&quot;, pagesize=A4) pdf.translate(inch, inch) pdf.setFillColor(colors.grey, alpha=0.6) pdf.setFont(&quot;Helvetica&quot;, 15) widthText = stringWidth(text, &quot;Helvetica&quot;, 15) / 2 widthPage = pdf._pagesize[0] x = (widthPage / 2) - (widthText / 2) pdf.drawCentredString(x, -45, text) pdf.save() </code></pre> <p>Generally it seems to work fine and when i use the text</p> <pre><code>text = &quot;Peter Schmidt, Musterstraße 1, 50767 Köln&quot; </code></pre> <p>it centers fine</p> <p><a href="https://i.sstatic.net/DqhS2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DqhS2.png" alt="enter image description here" /></a></p> <p>but when i try it with a shorter text like</p> <pre><code>text = &quot;test test test&quot; </code></pre> <p>its not working anymore and the text is far to right</p> <p><a href="https://i.sstatic.net/8fkP9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8fkP9.png" alt="enter image description here" /></a></p> <p>How can i solve this that the text is allways in the middle of the pdf?</p>
<python><pdf><reportlab>
2023-06-16 08:45:36
1
1,515
Rapid1898
76,488,451
3,336,412
SqlAlchemy uses None/null as default value which conflicts with SqlServer
<p>So I'm using sqlmodel (sqlalchemy with pedantic) and I have something like this model:</p> <pre class="lang-py prettyprint-override"><code>class SequencedBaseModel(BaseModel): sequence_id: str = Field(alias=&quot;sequence_id&quot;) @declared_attr def sequence_id(cls): return Column( 'sequence_id', VARCHAR(50), server_default=text(f&quot;SELECT '{cls.__tablename__}_'&quot; f&quot; + convert(varchar(10), NEXT VALUE FOR dbo.sequence)&quot;)) class Project(SequencedBaseModel, table=True): pass </code></pre> <p>with SqlAlchemy I now try to insert rows over an API. The request-json looks like this:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;name&quot;: &quot;test_project&quot; } </code></pre> <p>So the sequence_id is not entered, and will be generated by database. SqlAlchemy generates a statement like:</p> <pre class="lang-sql prettyprint-override"><code>insert into Project (name, sequence_id) values (&quot;test_project&quot;, null) </code></pre> <p>which wouldn't be wrong, if it wasn't SQL Server... therefore I get the exception that NULL cannot be inserted into the column sequence_id. For SQL Server we need the <code>default</code> keyword instead of <code>null</code>. If I execute the statement</p> <pre class="lang-sql prettyprint-override"><code>insert into Project (name, sequence_id) values (&quot;test_project&quot;, default) </code></pre> <p>it works...</p> <p>any idea on how to make sql-alchemy to use default instead of null/None if there is a default-value?</p> <p>I also tried to change the sequence_id to use something like</p> <pre class="lang-py prettyprint-override"><code>sequence_id: str = Field(alias=&quot;sequence_id&quot;, default=sqlalchemy.sql.elements.TextClause('default')) </code></pre> <p>but this doesn't work either</p>
<python><sql-server><sqlalchemy><pydantic><sqlmodel>
2023-06-16 08:44:24
1
5,974
Matthias Burger
76,488,433
7,585,973
Does changing from kafka to kafka-python in a Docker container require changing the docker-compose configuration?
<p>Previosly I have error like this <a href="https://stackoverflow.com/questions/65809459/syntaxerror-on-self-async-when-running-python-kafka-producer">SyntaxError on &quot;self.async&quot; when running python kafka producer</a></p> <p>Based on the answer, best answer is switch from kafka package to kafka-python package, this is what the detail of my docker-compose configuration, I think to change the docker image because of the docker log still similar</p> <pre><code> kafka: image: wurstmeister/kafka container_name: kafka ports: - &quot;9092:9092&quot; environment: KAFKA_ADVERTISED_HOST_NAME: localhost KAFKA_ADVERTISED_PORT: 9092 KAFKA_CREATE_TOPICS: &quot;segmentation&quot; KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 volumes: - /var/run/docker.sock:/var/run/docker.sock zookeeper: image: wurstmeister/zookeeper container_name: zookeeper ports: - &quot;2181:2181&quot; </code></pre> <p>Do I need to change the docker compose? If yes what to change?</p>
<python><docker><apache-kafka><kafka-python>
2023-06-16 08:41:29
0
7,445
Nabih Bawazir
76,488,393
4,806,787
pd.read_excel() bug for single level MultiIndex
<p>Consider the following MWE:</p> <pre><code>import pandas as pd I = range(2) C = range(2) mi_i = pd.MultiIndex.from_product([I], names=['i']) mi_c = pd.MultiIndex.from_product([C], names=['c']) df = pd.DataFrame(0, index=mi_i, columns=mi_c) df.to_excel('df.xlsx') df_in = pd.read_excel('df.xlsx', index_col=0, header=0) </code></pre> <p>Printing <code>df</code> yields</p> <pre><code>c 0 1 i 0 0 0 1 0 0 </code></pre> <p>The xlsx Output is correct</p> <p><a href="https://i.sstatic.net/rQYDg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rQYDg.png" alt="table" /></a></p> <p>After reading the xlsx and printing the DataFrame <code>df_in</code> the following, erroneous table occurs</p> <pre><code> 0 1 c i NaN NaN 0 0.0 0.0 1 0.0 0.0 </code></pre> <p>Am I doing something wrong here?</p>
<python><pandas><dataframe><multi-index>
2023-06-16 08:37:31
1
313
clueless
76,488,374
12,415,855
Add custom metadata and password protect PDF?
<p>i try to write a custom metadata to a pdf and password protect the pdf with the following code - (you can try this with every pdf-file)</p> <pre><code>import os, sys from PyPDF2 import PdfReader, PdfWriter if __name__ == '__main__': path = os.path.abspath(os.path.dirname(sys.argv[0])) fnTemplate = os.path.join(path, &quot;template.pdf&quot;) propName = &quot;Some Name&quot; propValue = f&quot;Some Value&quot; wPW = &quot;pw1&quot; # Write custom metadata with open(fnTemplate, &quot;rb&quot;) as file: pdf_reader = PdfReader(file) pdf_writer = PdfWriter() [pdf_writer.add_page(page) for page in pdf_reader.pages] pdf_writer.add_metadata( { f&quot;/{propName}&quot;: propValue, **pdf_reader.metadata, } ) with open(&quot;workPDF1.pdf&quot;, &quot;wb&quot;) as outFile: pdf_writer.write(outFile) # Password protect pdf reader = PdfReader(&quot;workPDF1.pdf&quot;) writer = PdfWriter() writer.append_pages_from_reader(reader) writer.encrypt(wPW) with open(&quot;finalPDF.pdf&quot;, &quot;wb&quot;) as out_file: writer.write(out_file) </code></pre> <p>My problem now is, that in the &quot;finalPDF.pdf&quot; the pdf is only password-protected but the custom metadata entry is gone. When i check the workfile &quot;workPDF1.pdf&quot; from the previous step the custom metadata is there</p> <p><a href="https://i.sstatic.net/BQJEB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BQJEB.png" alt="enter image description here" /></a></p> <p>How can i set the custom metadata and password-protect the pdf-file?</p>
<python><pdf><pypdf>
2023-06-16 08:34:53
1
1,515
Rapid1898
76,488,165
8,040,369
How to read values from SQL into a dict of lists
<p>I am reading values from SQL server using <strong>pymssql</strong> library using the below code</p> <pre><code>df = pd.read_sql_query(query, cnxn) </code></pre> <p>This is giving me a df as below</p> <pre><code>addr port device_id ============================== XXX 1001 01 XXX 1001 02 XXX 1001 03 YYY 1001 04 YYY 1001 05 </code></pre> <p>Is there a way to convert this df into a dict of list as shown below,</p> <pre><code>{ 'addr': [XXX, XXX, XXX], 'port': [1001,1001,1001], 'device_id': [01, 02, 03] }, { 'addr': [YYY,YYY], 'port': [1001,1001], 'device_id': [04, 05] } </code></pre> <p>Thanks,</p>
<python><dataframe><list>
2023-06-16 08:05:50
1
787
SM079
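The per-`addr` dicts of lists asked for above fall out of a `groupby` plus `to_dict('list')`. A sketch with inline data standing in for the `read_sql_query` result (device ids are kept as strings here to preserve the leading zeros shown in the question):

```python
import pandas as pd

df = pd.DataFrame({
    'addr': ['XXX', 'XXX', 'XXX', 'YYY', 'YYY'],
    'port': [1001, 1001, 1001, 1001, 1001],
    'device_id': ['01', '02', '03', '04', '05'],
})

# One dict of lists per distinct addr value; sort=False keeps row order.
records = [group.to_dict('list') for _, group in df.groupby('addr', sort=False)]
print(records[0])
# {'addr': ['XXX', 'XXX', 'XXX'], 'port': [1001, 1001, 1001],
#  'device_id': ['01', '02', '03']}
```

With the real query, `df` would simply come from `pd.read_sql_query(query, cnxn)` instead of the literal above.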
76,488,007
13,038,144
Substituting sub-dictionaries with specific keys using dpath library
<p>I have a nested input dictionary in this form:</p> <pre class="lang-py prettyprint-override"><code>{ &quot;a&quot;: { &quot;b&quot;: { &quot;Red&quot;: {&quot;min&quot;: 0, &quot;max&quot;: 1}, &quot;Green&quot;: {&quot;min&quot;: 1, &quot;max&quot;: 10} }, &quot;c&quot;: { &quot;Red&quot;: {&quot;min&quot;: 2, &quot;max&quot;: 100} } } } </code></pre> <p>I would like to use the <code>dpath</code> library to search for all the sub-dictionaries in the form <code>{'min': min_value, 'max': max_value}</code> and substitute all such sub-dictionaries with a random number between <code>min_value</code> and <code>max_value</code>.</p> <h4>Expected output</h4> <pre class="lang-py prettyprint-override"><code>{ &quot;a&quot;: { &quot;b&quot;: { &quot;Red&quot;: random_number_between_0_and_1, &quot;Green&quot;: random_number_between_1_and_10 }, &quot;c&quot;: { &quot;Red&quot;: random_number_between_2_and_100 } } } </code></pre> <p>Note that the code should be as general as possible, as the sub-dict with min/max keys could be at any level in the dictionary. I've been trying to use the regex option of dpath, but I was not able to make good use of it for this application.</p>
<python><dictionary><dpath>
2023-06-16 07:46:45
1
458
gioarma
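Without dpath, the substitution above is a short recursion: descend through dicts and replace any node whose keys are exactly `{'min', 'max'}` (assumed here to always hold numbers) with a draw from that range. A sketch:

```python
import random

def substitute_ranges(node):
    """Recursively replace every {'min': a, 'max': b} dict with a random
    number drawn uniformly from [a, b]; other values pass through unchanged."""
    if isinstance(node, dict):
        if set(node) == {'min', 'max'}:
            return random.uniform(node['min'], node['max'])
        return {key: substitute_ranges(value) for key, value in node.items()}
    return node

data = {
    'a': {
        'b': {'Red': {'min': 0, 'max': 1}, 'Green': {'min': 1, 'max': 10}},
        'c': {'Red': {'min': 2, 'max': 100}},
    }
}
result = substitute_ranges(data)
print(result)
```

Because new dicts are built on the way back up, the min/max sub-dicts can sit at any depth, and the original `data` is left untouched.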
76,487,699
12,282,349
Set cookie in FastApi middleware
<p>I can set up language variable cookie like this:</p> <pre><code>@app.get(&quot;/language/{lang}&quot;) async def language(request: Request, lang: str = None): response = RedirectResponse(url=&quot;/&quot;) if lang == 'en': #request.session[&quot;lang&quot;] = 'en' response.set_cookie(key=&quot;lang&quot;, value=&quot;en&quot;) elif lang == 'lt': #request.session[&quot;lang&quot;] = 'lt' response.set_cookie(key=&quot;lang&quot;, value=&quot;lt&quot;) return response </code></pre> <p>How could I intercept all urls and set default cookie if none exists?</p> <p>I tried with no luck like this:</p> <pre><code>@app.middleware(&quot;http&quot;) async def some_middleware(request: Request, call_next): response = await call_next(request) session = request.cookies.get('session') if session: response.set_cookie(key='session', value=request.cookies.get('session'), httponly=True) lang_cookie = request.cookies.get('lang') if not lang_cookie: response.set_cookie(key=&quot;lang&quot;, value=&quot;en&quot;) return response </code></pre>
<python><fastapi>
2023-06-16 07:04:36
0
513
Tomas Am
76,487,533
1,421,239
Gevent worker is blocked by SQLAlchemy's query
<p>I am running a Flask app using gunicorn with only 1 gevent worker and using MySQL database via SQLAlchemy with mysql-connector and executing a long-time query.</p> <p>When issuing two API calls simultaneously, the worker is blocked or killed due to the timeout of the first query until the database responds and the second API call is handled.</p> <p>The package versions are: (by the way, the latest versions still get the same result)</p> <pre><code>flask = &quot;2.0.3&quot; sqlalchemy = &quot;1.4.39&quot; mysql-connector-python = &quot;8.0.29&quot; gunicorn = &quot;20.1.0&quot; gevent = &quot;22.10.2&quot; </code></pre> <p>Here's the main.py:</p> <pre class="lang-py prettyprint-override"><code>from flask import Flask from sqlalchemy import create_engine from sqlalchemy.orm import Session, scoped_session, sessionmaker app = Flask(__name__) @app.route(&quot;/&quot;) def hello(): print(&quot;!!!! GET REQUEST !!!!&quot;) db_url = &quot;mysql+mysqlconnector://username:password@host:3306/db_name&quot; engine = create_engine(db_url) session_maker = sessionmaker(bind=engine) session: Session = scoped_session(session_maker) sql_string = &quot;&quot;&quot; select * from big_table inner join another_big_table group by big_table.id; &quot;&quot;&quot; result = session.execute(sql_string) result = dict(result.fetchall()) session.close() print(result) print(&quot;!!!! FINISH REQUEST !!!!&quot;) return &quot;&quot; </code></pre> <p>The command to start the Flask server:</p> <pre><code>gunicorn -k gevent -w 1 -b 127.0.0.1:11111 main:app </code></pre> <p>The command to invoke api:</p> <pre><code>curl localhost:11111 </code></pre>
<python><mysql><flask><sqlalchemy><gevent>
2023-06-16 06:39:35
0
1,051
Scottie
76,487,429
11,922,765
Python Dataframe compare column values with a list and produce output with matching
<p>I have a dataframe with year-month as index. I want to assign a color to the dataframe based on the year the sample was collected.</p> <pre><code>import matplotlib.colors as mcolors colors_list = list(mcolors.XKCD_COLORS.keys()) colors_list = ['xkcd:cloudy blue', 'xkcd:dark pastel green', 'xkcd:dust', 'xkcd:electric lime', 'xkcd:fresh green', 'xkcd:light eggplant' ........ ] df = sensor_value Year Month 0 5171.318942 2002 4 1 5085.094086 2002 5 3 5685.681944 2004 6 4 6097.877688 2006 7 5 6063.909946 2003 8 ..... years_list = df['Year'].unique().tolist() req_colors_list = colors_list[:len(years_list)] df['year_color'] = df['Year'].apply(lambda x: clr if x==year else np.nan for year,clr in zip(years_list,req_colors_list)) </code></pre> <p>Present output:</p> <pre><code>&lt;lambda&gt; &lt;lambda&gt; &lt;lambda&gt; &lt;lambda&gt; &lt;lambda&gt; &lt;lambda&gt; &lt;lambda&gt; &lt;lambda&gt; &lt;lambda&gt; &lt;lambda&gt; Year 2002 tab:blue NaN NaN NaN NaN NaN NaN NaN NaN NaN 2002 tab:blue NaN NaN NaN NaN NaN NaN NaN NaN NaN 2006 tab:blue NaN NaN NaN NaN NaN NaN NaN NaN NaN 2006 tab:blue NaN NaN NaN NaN NaN NaN NaN NaN NaN 2003 tab:blue NaN NaN NaN NaN NaN NaN NaN NaN NaN ... ... ... ... ... ... ... ... ... ... ... </code></pre> <p>Expected output:</p> <pre><code>2002 'xkcd:cloudy blue' 2002 'xkcd:cloudy blue' 2006 'xkcd:fresh green' 2006 'xkcd:fresh green' 2003 </code></pre>
<python><pandas><dataframe>
2023-06-16 06:22:35
2
4,702
Mainland
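The `<lambda>` columns above come from `apply` receiving a generator expression instead of a function. Building a year-to-color dict once and using `Series.map` gives the expected single column; a sketch with a trimmed version of the data:

```python
import pandas as pd

df = pd.DataFrame({
    'sensor_value': [5171.3, 5085.1, 5685.7, 6097.9, 6063.9],
    'Year': [2002, 2002, 2004, 2006, 2003],
})
colors_list = ['xkcd:cloudy blue', 'xkcd:dark pastel green',
               'xkcd:dust', 'xkcd:electric lime']

years_list = df['Year'].unique().tolist()       # order of first appearance
year_to_color = dict(zip(years_list, colors_list))

# map() looks each year up in the dict, producing one color per row.
df['year_color'] = df['Year'].map(year_to_color)
print(df[['Year', 'year_color']])
```

Each row with the same year receives the same color string, which is the pattern the expected output shows.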
76,487,348
14,895,107
getting "localhost didn’t send any data" in Kubernetes deployment
<p>I created a Flask API and dockerized it. When I run <code>docker run -p 9999:8000 urban-dictionary-api-unofficial</code>, it runs perfectly and can be accessed via my browser. However, I created a Kubernetes deployment for it which allows more pods to run, but after doing so, it can no longer be accessed. Below are the files used for the deployment.</p> <p>deployments.yaml:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: urban-dictionary-api-unofficial-deployment
  labels:
    app: urban-dictionary-api-unofficial
spec:
  replicas: 3
  selector:
    matchLabels:
      app: urban-dictionary-api-unofficial
  template:
    metadata:
      labels:
        app: urban-dictionary-api-unofficial
    spec:
      containers:
      - name: urban-dictionary-api-unofficial
        image: n1nja0p/urban-dictionary-api-unofficial
        imagePullPolicy: Always
        ports:
        - containerPort: 8000
</code></pre> <p>services.yaml:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: urban-dictionary-api-unofficial-service
  labels:
    app: urban-dictionary-api-unofficial-service
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 9999
    protocol: TCP
    targetPort: 8000
  selector:
    app: urban-dictionary-api-unofficial
  sessionAffinity: None
</code></pre> <p>Dockerfile:</p> <pre><code>FROM python:latest
COPY . /app
WORKDIR /app
RUN [&quot;pip&quot;,&quot;install&quot;,&quot;flask&quot;,&quot;bs4&quot;,&quot;requests&quot;]
EXPOSE 8000
CMD [&quot;python&quot;,&quot;src/main.py&quot;]
</code></pre> <p>However, I'm getting an error when trying to access it through my browser:</p> <p><a href="https://i.sstatic.net/dQXfh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dQXfh.png" alt="the error" /></a></p> <p>What did I do wrong?</p> <p>Edit:</p> <p>Output of <code>kubectl get service urban-dictionary-api-unofficial-service</code>:</p> <pre><code>NAME                                      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
urban-dictionary-api-unofficial-service   LoadBalancer   10.99.186.186   localhost     9999:31048/TCP   3h10m
</code></pre>
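<p>One thing worth double-checking (an assumption on my part, since <code>src/main.py</code> is not shown): inside the pod, Flask must listen on <code>0.0.0.0:8000</code>. A server bound to the default <code>127.0.0.1</code> only accepts connections from inside the container itself, so traffic forwarded by the Service gets an empty response. A minimal sketch of the relevant part:</p>

```python
from flask import Flask

app = Flask(__name__)


@app.route("/")
def index():
    return "ok"


def main():
    # Bind to all interfaces: traffic forwarded by the Service arrives on
    # the pod IP, which a 127.0.0.1-bound server would never see.
    app.run(host="0.0.0.0", port=8000)


# main() is what src/main.py would call; it is not invoked here so the
# snippet stays importable (and testable) without starting a server.
```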
<python><docker><kubernetes><flask>
2023-06-16 06:07:40
0
903
Abhimanyu Sharma
76,487,207
6,212,530
Visual Studio Code with Pylance cannot resolve imports while using hatch
<p>When I created my project using <code>hatch new name</code>, imports were resolved correctly. Now that I have opened it again, I get a yellow squiggly line under each import, with this error in the tooltip:</p> <pre class="lang-py prettyprint-override"><code>from django.conf import settings  # -&gt; Import &quot;django.conf&quot; could not be resolved from source Pylance(reportMissingModuleSource)
</code></pre> <p>I understand this error happens when vscode finds the wrong Python executable (usually the global one instead of the venv one). So when not using hatch, I could resolve it by creating a <code>.vscode/settings.json</code> file with content like:</p> <pre class="lang-json prettyprint-override"><code>{
    &quot;python.defaultInterpreterPath&quot;: &quot;path/to/venv&quot;
}
</code></pre> <p>However, in this project I am using the <code>hatch</code> build tool, which manages environments itself (at least I cannot find them in the project directory). <strong>How do I point vscode to the correct Python interpreter in this case?</strong></p> <p>Edit:</p> <p>I tried changing the venv location by adding <code>dirs.env</code> to my <code>pyproject.toml</code>:</p> <pre class="lang-ini prettyprint-override"><code>[dirs.env]
virtual = &quot;.hatch&quot;
</code></pre> <p>Then I deleted the existing default environment using <code>hatch env prune</code> and created it again using <code>hatch env create</code>. However, <code>hatch env find default</code> still shows the old location, and <code>.hatch</code> was not created:</p> <pre><code>$ hatch env find default
C:\Users\Matija\AppData\Local\hatch\env\virtual\cq\bnqHl4TX\cq
</code></pre> <p>Adding a new environment to <code>pyproject.toml</code> and creating it using <code>hatch env create vsc</code> also creates it in <code>AppData</code> instead of in <code>.hatch</code>:</p> <p>In <code>pyproject.toml</code>:</p> <pre><code>[tool.hatch.envs.vsc]
</code></pre> <p>Commands:</p> <pre><code>$ hatch env create vsc
$ hatch env find vsc
C:\Users\Matija\AppData\Local\hatch\env\virtual\cq\bnqHl4TX\vsc
</code></pre>
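<p>As a stopgap, whatever path <code>hatch env find default</code> prints can be pasted straight into <code>.vscode/settings.json</code> (the path below is illustrative, taken from the output above; the <code>Scripts\python.exe</code> suffix is my assumption about the Windows venv layout, and JSON needs the backslashes doubled):</p>

```json
{
    "python.defaultInterpreterPath": "C:\\Users\\Matija\\AppData\\Local\\hatch\\env\\virtual\\cq\\bnqHl4TX\\cq\\Scripts\\python.exe"
}
```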
<python><visual-studio-code><pylance><hatch>
2023-06-16 05:33:08
3
1,028
Matija Sirk
76,487,194
8,471,995
Type annotation for a subclass of Mapping + dataclass
<p>Ultimately, I wanted to extend <code>dataclasses.dataclass</code> and <code>collections.Mapping</code> because I wanted dataclasses that I can use the <code>**</code> operator on, plus some additional functions.</p> <p>Since <code>dataclass</code> is a function, not a class, I couldn't use inheritance, so I made a new wrapping function:</p> <pre><code>from dataclasses import dataclass as original_dataclass, fields
from collections import Mapping
from typing import Any, Generator, Type


def __iter__(self) -&gt; Generator[str, None, None]:
    for field in fields(self):
        yield field.name

def __len__(self) -&gt; int:
    return len(fields(self))

def __getitem__(self, item: Any) -&gt; Any:
    return getattr(self, item)

def additional_function(self):
    print(&quot;Yes&quot;)

def dataclass(cls: Type) -&gt; Type:
    cls.additional_function = additional_function
    cls.__iter__ = __iter__
    cls.__len__ = __len__
    cls.__getitem__ = __getitem__
    # If `cls` has these methods, then it counts as a Mapping in the python world
    for method in (&quot;__contains__&quot;, &quot;keys&quot;, &quot;items&quot;, &quot;values&quot;, &quot;get&quot;, &quot;__eq__&quot;, &quot;__ne__&quot;):
        setattr(cls, method, getattr(Mapping, method))
    return original_dataclass(cls)

@dataclass
class MyNewClass:
    hello: str

# usage
def usecase_function(hello: str):
    pass

new_class = MyNewClass(&quot;hi&quot;)
usecase_function(**new_class)  # &lt;- vs-code complains here
new_class.hello  # &lt;- vs-code complains here
</code></pre> <p>The Python interpreter (3.8.13) does not throw an error. However, my editor, vs-code, complains that this is not a Mapping. I assume that is because I return a mere <code>Type</code> from the new <code>dataclass</code> function.</p> <p>So I annotated the return as <code>Type[Mapping]</code>. Now vs-code complains that <code>new_class</code> doesn't have the attribute <code>hello</code>.
I assume this is because <code>dataclass</code> now returns a <code>Type[Mapping]</code>, which doesn't have a <code>hello</code> attribute.</p> <p>How should I annotate the <code>dataclass</code> function in this situation?</p> <p>Thank you.</p>
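<p>For comparison, here is a mixin-based sketch that type checkers can follow without any special annotation. It swaps the wrapper function for inheritance, so it is an alternative approach rather than an answer to the annotation question itself:</p>

```python
from dataclasses import dataclass, fields
from collections.abc import Mapping
from typing import Any, Iterator


class DataclassMapping(Mapping):
    """Mixin: any dataclass inheriting this also behaves like a Mapping."""

    def __iter__(self) -> Iterator[str]:
        for field in fields(self):
            yield field.name

    def __len__(self) -> int:
        return len(fields(self))

    def __getitem__(self, item: Any) -> Any:
        return getattr(self, item)


@dataclass
class MyNewClass(DataclassMapping):
    hello: str


new_class = MyNewClass("hi")
print(dict(new_class))   # {'hello': 'hi'}
print(new_class.hello)   # hi
```

<p>Because <code>MyNewClass</code> really inherits from <code>Mapping</code>, <code>**new_class</code> type-checks, and <code>hello</code> is still a declared attribute the checker can see.</p>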
<python><python-typing>
2023-06-16 05:30:24
0
1,617
Inyoung Kim 김인영
76,487,144
9,042,093
Dynamically import modules inside the function
<p>I have a function inside a class which imports modules dynamically. The modules are to be imported from outside of the working directory.</p> <pre><code>def dynamic_import(self, name):
    sys.path.append(os.getenv(&quot;TEMPLATE&quot;))
    exec('from templates.{} import template_attribute'.format(name))
    print(template_attribute)
</code></pre> <p>I have set the env var <code>TEMPLATE</code> to the path of the templates, so when I pass <strong>name = 'template1'</strong> it should import <code>template_attribute</code> from <strong>template1</strong>.</p> <p>This is not working inside the function. When I run the same lines (outside a function) in a Python terminal, it works.</p> <p>I tried replacing <code>exec</code> with <code>__import__</code>, like this:</p> <pre><code>template_attribute = getattr(__import__(f'templates.{name}'), 'template_attribute')
</code></pre> <p>This is also not working.</p> <p>How do I make it work correctly?</p> <p>One more question: if <code>template_attribute</code> is imported like this, can it be used inside the other functions of the class (or other functions called from them)?</p>
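<p>A sketch of an <code>importlib</code>-based version. The temp-directory setup below only simulates the external <code>TEMPLATE</code> folder so the snippet is self-contained; in the real class, <code>search_path</code> would come from <code>os.getenv('TEMPLATE')</code>:</p>

```python
import importlib
import os
import sys
import tempfile


def dynamic_import(name, search_path):
    """Import templates.<name> at runtime and return its template_attribute."""
    if search_path not in sys.path:
        sys.path.append(search_path)
    importlib.invalidate_caches()  # in case the files appeared after startup
    # import_module returns the module object; exec() inside a function
    # cannot create function-local names, which is why the original fails
    module = importlib.import_module("templates.{}".format(name))
    return module.template_attribute


# --- self-contained demo: fake the external TEMPLATE directory ---
root = tempfile.mkdtemp()
pkg = os.path.join(root, "templates")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "template1.py"), "w") as fh:
    fh.write("template_attribute = 'hello from template1'\n")

print(dynamic_import("template1", root))  # hello from template1
```

<p>Since the value is returned (instead of printed from an <code>exec</code> scope), the method can store it, e.g. <code>self.template_attribute = self.dynamic_import(name)</code>, and every other method of the class can then use it.</p>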
<python><python-3.x><import><exec>
2023-06-16 05:19:57
1
349
bad_coder9042093
76,487,118
5,637,601
OpenCV plays back video way faster than the original recording
<pre><code>import cv2
import time

# Open the video file
cap = cv2.VideoCapture('marker test.mp4')

# Get the frames per second (fps) of the video
fps = cap.get(cv2.CAP_PROP_FPS)

# Read the first frame
ret, frame = cap.read()

while ret:
    # Display the frame
    cv2.imshow('Video Playback', frame)

    # Wait for the specified delay to maintain the original timeline
    delay = int(1000 / fps)  # Calculate the delay in milliseconds
    if cv2.waitKey(delay) == 27:  # Exit if the 'Esc' key is pressed
        break

    # Read the next frame
    ret, frame = cap.read()

# Release the video capture object and close the windows
cap.release()
cv2.destroyAllWindows()
</code></pre> <p>I have here a simple block of dumbed-down code that reads a .mp4 video recording and replays it with cv2. The video was recorded at 30 fps by a webcam. I am aware that cv2 plays back videos at a rate different from the fps of the video, so I tried to correct it by inserting a <code>delay</code> variable to account for this discrepancy. However, it doesn't fix the issue.</p> <p>The original video is about 13 seconds long. With this code, the replay of the video was dragged out to as long as 20 seconds. Without the fps fix, and assuming <code>delay = 1</code> as is typically the default, the video replay only lasted about 6+ seconds, which sounds about right as I'm running this on a PC with a 60 Hz refresh rate.</p> <p>What am I getting wrong here? How do I get cv2 to play the video back in the exact time frame of the original video?</p>
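<p>A pacing sketch that also subtracts the time spent reading and displaying each frame (a fixed <code>1000/fps</code> wait is added on top of that processing time, which stretches playback). It is shown as a pure function so it can be tested without a video file; in the loop, one would take <code>frame_start = time.perf_counter()</code> right after <code>cap.read()</code> and pass <code>frame_delay_ms(fps, frame_start)</code> to <code>cv2.waitKey</code>:</p>

```python
import time
from typing import Optional


def frame_delay_ms(fps: float, frame_start: float, now: Optional[float] = None) -> int:
    """Delay for cv2.waitKey so each frame occupies 1000/fps ms in total,
    after subtracting the time already spent reading and showing it."""
    if now is None:
        now = time.perf_counter()
    elapsed_ms = (now - frame_start) * 1000.0
    # waitKey(0) would block forever, so never return less than 1
    return max(1, int(round(1000.0 / fps - elapsed_ms)))


# At 30 fps a frame should last ~33.3 ms; if decoding + imshow already
# took 10 ms, only ~23 ms of waiting remain.
print(frame_delay_ms(30.0, frame_start=0.0, now=0.010))  # 23
```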
<python><opencv>
2023-06-16 05:14:20
0
419
Ang Jit Wei Aaron
76,486,711
14,320,103
Using js/flask, getting internal 500 error on using Post request
<p>So I have a little project I'm working on that involves sending the text-area value to the Python side of my Flask app. My front-end code looks like this:</p> <pre class="lang-js prettyprint-override"><code>&lt;input id=&quot;promptArea&quot; type=&quot;text-area&quot; placeholder=&quot;write your story...&quot; /&gt;
&lt;button id=&quot;submitButton&quot;&gt;Make your story now!&lt;/button&gt;
&lt;p&gt;your story here...&lt;/p&gt;
&lt;script src=&quot;https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js&quot;&gt;&lt;/script&gt;
&lt;script&gt;
  $(document).ready(function() {
    $('#submitButton').click(function (event) {
      event.preventDefault();
      $('#submitButton').prop(&quot;disabled&quot;, true);
      $.ajax({
        data: { prompt: document.getElementById(&quot;promptArea&quot;).value },
        type: 'POST',
        url: '/prompt', // post grabbed text to flask endpoint for saving role
        success: function (data) {
          console.log('Sent Successfully')
        },
        error: function (e) {
          console.log('Submission failed...')
        }
      });
    });
  })
&lt;/script&gt;
</code></pre> <p>I based this off <a href="https://stackoverflow.com/questions/63513147/flask-communication-between-js-and-python">here</a>. I'm relatively new to Flask and full-stack apps, and I'm getting an error 500 on the JavaScript end. How do I fix this?</p> <p>Python side (if required):</p> <pre><code>@app.route('/prompt', methods=['POST', 'GET'])
def prompt():
    prompt = request.form['prompt']  # parse received content to variable
    chatbot.chat(prompt)
    return (&quot;works&quot;, 205)  # or whatever you wish to return
</code></pre>
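<p>For debugging, a stripped-down version of the endpoint that can be exercised with Flask's test client (no <code>chatbot</code>, since any unhandled exception inside the handler is exactly what the browser reports as a 500):</p>

```python
from flask import Flask, request

app = Flask(__name__)


@app.route("/prompt", methods=["POST"])
def prompt():
    # .get() avoids a KeyError if the field name doesn't match the AJAX
    # payload; an exception raised here would surface as the 500
    text = request.form.get("prompt")
    if text is None:
        return ("missing 'prompt' field", 400)
    return ("works", 205)


# Exercise the route without a browser
client = app.test_client()
print(client.post("/prompt", data={"prompt": "once upon a time"}).status_code)  # 205
print(client.post("/prompt", data={}).status_code)                              # 400
```

<p>If this minimal version returns 205, the 500 is coming from the removed pieces (most likely <code>chatbot.chat(prompt)</code>), and running Flask with <code>debug=True</code> will show the traceback.</p>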
<javascript><python><flask><http-error>
2023-06-16 03:05:10
1
619
Vatsa Pandey
76,486,667
6,394,722
Why objects are same in 2 different process of multiprocessing?
<p>I have the following code:</p> <p><strong>test.py:</strong></p> <pre><code>import multiprocessing
import time

class A:
    def __action(self):
        print(&quot;another process:&quot;)
        print(id(self))

    def run(self):
        print(&quot;this process&quot;)
        print(id(self))
        p = multiprocessing.Process(target=self.__action, daemon=True)
        p.start()

a = A()
print(&quot;main&quot;)
print(id(a))
a.run()
time.sleep(3)
</code></pre> <p>It runs as follows:</p> <pre><code>$ python3 test.py
main
140643898766000
this process
140643898766000
another process:
140643898766000
</code></pre> <p>But the <a href="https://docs.python.org/3/library/multiprocessing.html" rel="nofollow noreferrer">doc</a> says:</p> <blockquote> <p>As mentioned above, when doing concurrent programming it is usually best to avoid using shared state as far as possible. This is particularly true when using multiple processes.</p> <p>However, if you really do need to use some shared data then multiprocessing provides a couple of ways of doing so.</p> <p>Shared memory</p> <p>Data can be stored in a shared memory map using Value or Array.</p> </blockquote> <p>So it looks like only with an explicitly defined <code>Value</code>/<code>Array</code> can the two processes share the same object. Why, then, do the <code>A</code> objects in the two different processes have the same id?</p>
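<p>The point can be checked directly: equal <code>id()</code> values do not imply a shared object, because <code>id</code> is just an address inside each process's own (copied) address space. Mutating the object in the child leaves the parent's copy untouched (sketch assumes a fork-based start method, the Linux default):</p>

```python
import multiprocessing


class A:
    def __init__(self):
        self.value = "parent"

    def _mutate(self):
        # Runs on the child's copy of the object: same address, separate memory
        self.value = "child"

    def run(self):
        p = multiprocessing.Process(target=self._mutate)
        p.start()
        p.join()


a = A()
a.run()
print(a.value)  # still "parent": the child changed only its own copy
```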
<python><multiprocessing>
2023-06-16 02:47:41
1
32,101
atline
76,486,653
1,834,787
Python http SSL webserver using self-signed certificate: OPENSSL_internal:WRONG_VERSION_NUMBER
<p>I am trying to use <a href="https://github.com/DMTF/Redfish-Mockup-Server.git" rel="nofollow noreferrer">this library</a> that serves a mock BMC Redfish server. It's <a href="https://github.com/DMTF/Redfish-Mockup-Server/blob/main/redfishMockupServer.py#L845" rel="nofollow noreferrer">basically a Python http server</a>. I am getting a certificate error, but I am not sure if it's because of my certificate or this library. BTW: using the provided Docker image and running it locally with Python results in the same error.</p> <p>I start it up with these parameters:</p> <pre><code>python redfishMockupServer.py -D C:\mock-location --cert C:\cert.pem --key C:\key.pem -p 443
</code></pre> <p>But it fails on <code>GET https://localhost</code> because of some sort of certificate issue:</p> <pre><code>Redfish Mockup Server, version 1.2.3
Hostname: 127.0.0.1
Port: 443
Mockup directory path specified: C:\mock-location
Response time: 0 seconds
Serving Mockup in absolute path: C:\mock-location
Serving Redfish mockup on port: 443
running Server...
127.0.0.1 - - [15/Jun/2023 21:34:57] code 400, message Bad request version ('À\x13À')
127.0.0.1 - - [15/Jun/2023 21:34:57] &quot;\x16\x03\x01\x00÷\x01\x00\x00ó\x03\x03\x82\x8d¡ýÙ\x01Uå}5éÊ4ãµ2.x_°!y\x8a\x0cTU³fG½±B ,\x10\x8d\x93]^\x1b\x02P\x08¼\x04³ëÇ4¬+èá=ÞXyõÈ\x88§Õ\x12¨\x00$\x13\x01\x13\x02\x13\x03À/À+À0À,̨̩À\x09À\x13À&quot; 400 -
</code></pre> <p>Postman error:</p> <pre><code>Error: write EPROTO 66780680:error:100000f7:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER:../../../../src/third_party/boringssl/src/ssl/tls_record.cc:242:
</code></pre> <p><a href="https://i.sstatic.net/fsQoB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fsQoB.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/6vypV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6vypV.png" alt="enter image description here" /></a></p> <p>This is how I generate my self-signed certificate:</p> <pre><code>openssl req -new -x509 -keyout cert.pem -out cert.pem -days 365 -nodes
</code></pre> <p>cert.pem</p> <pre><code>-----BEGIN CERTIFICATE-----
MIIDazCCAlOgAwIBAgIUXoAL1PDRuv7F/mVBnYfEewU8tHMwDQYJKoZIhvcNAQEL
BQAwRTELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoM
GEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDAeFw0yMzA2MTUyMzMxMjFaFw0yNDA2
MTQyMzMxMjFaMEUxCzAJBgNVBAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEw
HwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQwggEiMA0GCSqGSIb3DQEB
AQUAA4IBDwAwggEKAoIBAQCaIDQTlFbOIbddI7pAiKRxb++kx2hKcG3GLFRlc0D/
sZR2HRBpjqkKdMH8VRiHOahthyMT1bEFXY8k2v2qwQe7nkZ2Ti82hJ0hZFtvCyzb
UV/NcOXf/Vz3nP7qcyrrtXkcYD4lMgAFUjmeiuhxajLJyAkYXXUmjfjX593y5QGR
bipZXBW9tvfU+Aoe2JgGn+QXrHK2e0mucKyKpeUU7GFOudcR+sSQXUF/6vd09uzH
tp5i59EsWEdIvoeLilj48wgMz3eV7AhNc6qiZ5zXxTXGm3zNnamWxxKoxHN3wzsS
nWT/fzmDwdqv27hlgAO90Sw+BQSLQFjNEx+UhoWb79htAgMBAAGjUzBRMB0GA1Ud
DgQWBBSdadBQZD6yaIyeebnZE3UhBmCIRzAfBgNVHSMEGDAWgBSdadBQZD6yaIye
ebnZE3UhBmCIRzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQB+
L1ccj/uVpoQVdBBl8n8xKT/sobjW/j+L98lxEhiBbhqkkA73HRLs9VEnCd2k5/km
qx0I8uyjJ8iaGN1iYkP0MWlJCrXpCrmqu/GieKdS1ne3/G0Ml1lv6YvvlG843eeO
25xIXqi+0m021qTdfXK/Fbr8xG4gAqX8RGpAu+5StszwMAERe/JHHV7vjkQvxM/5
MDHEAmK7lRsEpcip6lw2dT05bom0KjemDM0b0pVdBH4Tsg/NiWheuANzz7mRfSnO
D2oSKdERdnvTu6tGxQVqQWwWF14RQmF3UdmhmyPVtnIKfgLQWBOQsV9XY9Fe3nQn
kK/lA3hHv5d3ByFTRCwb
-----END CERTIFICATE-----
</code></pre> <p>key.pem</p> <pre><code>-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQCaIDQTlFbOIbdd
I7pAiKRxb++kx2hKcG3GLFRlc0D/sZR2HRBpjqkKdMH8VRiHOahthyMT1bEFXY8k
2v2qwQe7nkZ2Ti82hJ0hZFtvCyzbUV/NcOXf/Vz3nP7qcyrrtXkcYD4lMgAFUjme
iuhxajLJyAkYXXUmjfjX593y5QGRbipZXBW9tvfU+Aoe2JgGn+QXrHK2e0mucKyK
peUU7GFOudcR+sSQXUF/6vd09uzHtp5i59EsWEdIvoeLilj48wgMz3eV7AhNc6qi
Z5zXxTXGm3zNnamWxxKoxHN3wzsSnWT/fzmDwdqv27hlgAO90Sw+BQSLQFjNEx+U
hoWb79htAgMBAAECggEBAJDdWfVZRSnkePO7ZBHKHT5eJtIrd3QYLqXI/t6IMPzk
TZWjBc0hgPNKARcKaM6ZPB0OmsLG5OcVJDlQ+IKpgnovby09mZTVmtdK+8HosBXI
a5Ku3fHls58tWlDFRP9dh+NK9r6BO5HE0lGZYJdRaUFNmnbjSPyfDtjooC3wX8Pv
YaAcRvnqVmxKcTFLj23g+VAeYl/eIsTD8re3pF16Dh4roodMSIBYj/Az25py8ob1
gEFrFejXJRusb+KzJQqwWBiKJWpvc2u1jXtRJAjlzibe1r9HT3znw+Pvu4I0pOSy
QlSeP5AksgK0pAlh9pRGxf51npdxw0y0x/er57l5zuUCgYEAzIMve5L5++QrOsAn
3g6VB/b6ve/aC1QLMMcWjNneH4yBF/zrul8DdEcKmd3DoisDfIXqaejffHEdFmM0
AeoJIUxFbi55cpCibED9a9DDAimUj+jmPzyAoP9wH5K6uMLK/KUDOE1EhfxfI2VR
/rGQGtBmhXgyy8oLsYPG000hUo8CgYEAwO2Z/EZWGVdRkZGDlgj68SKtg9OXC+WH
eg+2HFOuj3Tn9w3qX3+hmzcOlqRG10k0BMfPVn+5Wbo6vSKn8DVc0T/5nnY1v0NE
/1tOo/ivD/lAPKpopkA6oSxN9YF/nwcZ46KKlTppY+GJHPlFy7yiPZAhpO88XkSy
PUCay2ZPc0MCgYEAqeYuEz4mGWITm8o5FJwOqUBATHyvKwwWA97RWBBDHPiP4orG
lt0KNJY0M2FtfhK34cIq3POOfoZGAOxHL3PrQ9NmNsO7Nzb7CG3xWpli+C/s8KUu
ashrn9S1pDU0k/uXwM2hYCuoypq/utsYhDulGPGayjTyFiTzE/UCv1XrYfcCgYAS
v8SEOM2rPsoljG+uSAcjIgyc0BZQyKim2xoGnLdNJ75XSxno1/17mRko2KQtzeZp
RIXI0TbRGoEU2mZZuMXhbAc1OCW3BbGR42y8ELHqqn1sp97tsTZBbY3R+xjM+qKw
dZ5kLD4Lv+JUV4FJ8HYP547teXZzbtenjjy84Z99AwKBgAp7FYzfzQJQ1uBqRCC0
KfoA++J7Z3RshXsSF5JpCKssAHLKmGI9bjUgUoifnzpK7om2EkhXJQezQf/AE77k
PC4EBjASCsajoVtoKz8B9x6nAy7APgQKAYrARcyXcuOtngGS5CJkLDMGcrDR+U+w
FoZHTPq4CiUmOxE39Pw98JY9
-----END PRIVATE KEY-----
</code></pre> <p>I created a <a href="https://github.com/DMTF/Redfish-Mockup-Server/issues/96" rel="nofollow noreferrer">GitHub issue</a> with more details.</p>
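<p>The two errors line up once the bytes the server logged are decoded: they are a TLS ClientHello arriving at a socket that is answering plain HTTP, which is also the usual meaning of <code>WRONG_VERSION_NUMBER</code> on the client side. That suggests the certificate contents may be fine and the real question is whether the server ever wrapped its listening socket in TLS:</p>

```python
# First bytes of the request the mockup server logged as garbage:
# 0x16 = TLS handshake record type, 0x03 0x01 = legacy record version,
# and the 0x01 after the length field = ClientHello. A plain-HTTP
# parser cannot read this, hence "Bad request version" on the server
# and WRONG_VERSION_NUMBER on the client.
logged = b"\x16\x03\x01\x00\xf7\x01\x00\x00\xf3\x03\x03"

is_handshake_record = logged[0] == 0x16
record_version = (logged[1], logged[2])
is_client_hello = logged[5] == 0x01

print(is_handshake_record, record_version, is_client_hello)
# True (3, 1) True
```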
<python><ssl><https>
2023-06-16 02:43:52
1
1,714
Node.JS
76,486,644
1,174,102
Child process call function in parent process (python multiprocessing)?
<p>How can I have a child process call a function in its parent process?</p> <p>I'm writing a Python program in which I need a child process (launched with the multiprocessing module) to call a function in its parent process.</p> <p>Consider the following program that simply calls a function <code>child_or_parent()</code> twice: first it calls <code>child_or_parent()</code> in the parent process, and then it calls <code>child_or_parent()</code> in a child process.</p> <pre><code>#!/usr/bin/env python3
import multiprocessing, os

# store the pid of our main (parent) process
parent_pid = os.getpid()

# simple function that tells you if it's the parent process or a child process
def child_or_parent():
    if os.getpid() == parent_pid:
        print( &quot;I am the parent process&quot; )
    else:
        print( &quot;I am a child process&quot; )

# first the parent process
child_or_parent()

# now a child process
child = multiprocessing.Process( target=child_or_parent )
child.start()
</code></pre> <p>When executed, the above program outputs the following:</p> <pre><code>I am the parent process
I am a child process
</code></pre> <p>I would like to modify this program such that the child process calls the parent process's <code>child_or_parent()</code> function, and it outputs <code>I am the parent process</code>.</p> <p>The following program has a function named <code>child_call_parent()</code> that contains pseudocode. What should I put there so that the <code>child_call_parent()</code> function will actually execute the <code>child_or_parent()</code> function inside the <em>parent</em> process?</p> <pre><code>#!/usr/bin/env python3
import multiprocessing, os

# store the pid of our main (parent) process
parent_pid = os.getpid()

# simple function that tells you if it's the parent process or a child process
def child_or_parent():
    if os.getpid() == parent_pid:
        print( &quot;I am the parent process&quot; )
    else:
        print( &quot;I am a child process&quot; )

def child_call_parent():
    # FIXME this does not work!
    self.parent.child_or_parent()

# have a child call a function in its parent process
child = multiprocessing.Process( target=child_call_parent )
child.start()
</code></pre>
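<p>There is no way for a child to call directly into the parent's memory, but a minimal sketch of the usual pattern (assuming a fork-based start method): the child sends a request over a <code>Pipe</code>, and the parent performs the call on its behalf:</p>

```python
import multiprocessing
import os

# store the pid of the main (parent) process
parent_pid = os.getpid()


def child_or_parent():
    return "parent" if os.getpid() == parent_pid else "child"


def child_task(conn):
    # The child cannot reach into the parent; all it can do is ask the
    # parent (over the Pipe) to make the call on its behalf.
    conn.send("call_child_or_parent")
    conn.close()


parent_conn, child_conn = multiprocessing.Pipe()
child = multiprocessing.Process(target=child_task, args=(child_conn,))
child.start()

request = parent_conn.recv()          # parent blocks until the child asks
result = child_or_parent() if request == "call_child_or_parent" else None
child.join()
print(result)  # parent
```

<p>A long-lived parent would loop on <code>recv()</code> (or use a <code>Queue</code>) and dispatch each request to the right function.</p>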
<python><multiprocessing>
2023-06-16 02:40:23
2
2,923
Michael Altfield
76,486,636
8,874,388
Universally getting files from a source-Path that can be either a zip or a folder?
<p>This is a bit of a tricky problem!</p> <p>Given the following code, how would you implement a universal way of fetching files?</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path
import re
import zipfile as zip

def check_if_zip(path: Path) -&gt; bool:
    return path.is_file() and (
        re.search(r&quot;\.zip$&quot;, path.name, re.IGNORECASE) is not None
    )

sources: list[Path] = [
    Path(&quot;Foo&quot;),      # Directory
    Path(&quot;Bar.zip&quot;),  # Zip
]

for source in sources:
    source_is_zip = check_if_zip(source)
    # Is there a way to universally wrap each source
    # as some kind of &quot;streamed I/O object&quot; or class
    # that reads live from either the folder or zip
    # file? I need a storage-medium agnostic way
    # of fetching files from the &quot;source&quot;.
    #
    # It needs to use a fast byte-stream or similar,
    # NOT reading the entire source into memory! :)
    # And the stream needs to be seekable so that
    # it works as input for other libraries, for
    # decoding files contained within the folders/zips.
</code></pre> <p>Perhaps it can't be done in Python?</p> <p>I hope someone finds this problem interesting! :)</p>
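<p>A sketch of what seems possible with the standard library: <code>zipfile.Path</code> (Python 3.8+) mirrors enough of <code>pathlib.Path</code> that both kinds of source can be walked through one interface, reading members lazily rather than loading the archive into memory. (One caveat: streams opened from compressed zip members are not seekable; for a decoder that must seek, a single member can be wrapped as <code>io.BytesIO(member.read_bytes())</code>.)</p>

```python
import tempfile
import zipfile
from pathlib import Path


def as_traversable(source: Path):
    """Return an object exposing .iterdir()/.read_text()/etc. for either
    a directory or a .zip file (zip members are read lazily)."""
    if source.is_file() and source.suffix.lower() == ".zip":
        return zipfile.Path(source)
    return source


# Demo: build a small zip, then walk it exactly like a folder
tmp = Path(tempfile.mkdtemp())
demo_zip = tmp / "Bar.zip"
with zipfile.ZipFile(demo_zip, "w") as zf:
    zf.writestr("hello.txt", "hi")

root = as_traversable(demo_zip)
names = [entry.name for entry in root.iterdir()]
print(names)                             # ['hello.txt']
print((root / "hello.txt").read_text())  # hi
```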
<python><python-3.x><file><zip><unzip>
2023-06-16 02:37:55
0
4,749
Mitch McMabers
76,486,611
9,019,806
ERROR: Could not build wheels for thriftpy2, which is required to install pyproject.toml-based projects
<p>When I <code>pip install thriftpy2</code>some error occurred.</p> <p>Env Base Info:</p> <pre><code>CPU: M1(10-core 64-bit westmere) Clang: 14.0.0 build 1400 Git: 2.41.0 =&gt; /usr/local/bin/git Curl: 7.79.1 =&gt; /usr/bin/curl macOS: 12.5.1-x86_64 CLT: 14.2.0.0.1.1668646533 Xcode: N/A Rosetta 2: true Python 3.10.8 Conda 23.5.0 </code></pre> <p>Error Details:</p> <pre><code>Building wheels for collected packages: thriftpy2 Building wheel for thriftpy2 (setup.py) ... error error: subprocess-exited-with-error × python setup.py bdist_wheel did not run successfully. │ exit code: 1 ╰─&gt; [277 lines of output] running bdist_wheel The [wheel] section is deprecated. Use [bdist_wheel] instead. running build running build_py creating build creating build/lib.macosx-11.0-arm64-cpython-310 creating build/lib.macosx-11.0-arm64-cpython-310/thriftpy2 copying thriftpy2/server.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2 copying thriftpy2/hook.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2 copying thriftpy2/__init__.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2 copying thriftpy2/thrift.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2 copying thriftpy2/rpc.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2 copying thriftpy2/utils.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2 copying thriftpy2/tornado.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2 copying thriftpy2/http.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2 copying thriftpy2/_compat.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2 creating build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport copying thriftpy2/transport/_ssl.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport copying thriftpy2/transport/__init__.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport copying thriftpy2/transport/sslsocket.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport copying 
thriftpy2/transport/socket.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport copying thriftpy2/transport/base.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport creating build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/protocol copying thriftpy2/protocol/binary.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/protocol copying thriftpy2/protocol/apache_json.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/protocol copying thriftpy2/protocol/compact.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/protocol copying thriftpy2/protocol/__init__.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/protocol copying thriftpy2/protocol/exc.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/protocol copying thriftpy2/protocol/multiplex.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/protocol copying thriftpy2/protocol/json.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/protocol copying thriftpy2/protocol/base.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/protocol creating build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/parser copying thriftpy2/parser/__init__.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/parser copying thriftpy2/parser/parser.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/parser copying thriftpy2/parser/exc.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/parser copying thriftpy2/parser/lexer.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/parser creating build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib copying thriftpy2/contrib/__init__.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib creating build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport/memory copying thriftpy2/transport/memory/__init__.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport/memory creating build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport/buffered copying 
thriftpy2/transport/buffered/__init__.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport/buffered creating build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport/framed copying thriftpy2/transport/framed/__init__.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport/framed creating build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/aio copying thriftpy2/contrib/aio/server.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/aio copying thriftpy2/contrib/aio/client.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/aio copying thriftpy2/contrib/aio/__init__.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/aio copying thriftpy2/contrib/aio/processor.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/aio copying thriftpy2/contrib/aio/rpc.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/aio copying thriftpy2/contrib/aio/socket.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/aio creating build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/tracking copying thriftpy2/contrib/tracking/__init__.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/tracking copying thriftpy2/contrib/tracking/tracker.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/tracking creating build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/aio/transport copying thriftpy2/contrib/aio/transport/buffered.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/aio/transport copying thriftpy2/contrib/aio/transport/__init__.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/aio/transport copying thriftpy2/contrib/aio/transport/base.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/aio/transport copying thriftpy2/contrib/aio/transport/framed.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/aio/transport creating 
build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/aio/protocol copying thriftpy2/contrib/aio/protocol/binary.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/aio/protocol copying thriftpy2/contrib/aio/protocol/compact.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/aio/protocol copying thriftpy2/contrib/aio/protocol/__init__.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/aio/protocol copying thriftpy2/contrib/aio/protocol/base.py -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/aio/protocol running egg_info writing thriftpy2.egg-info/PKG-INFO writing dependency_links to thriftpy2.egg-info/dependency_links.txt writing requirements to thriftpy2.egg-info/requires.txt writing top-level names to thriftpy2.egg-info/top_level.txt reading manifest file 'thriftpy2.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' adding license file 'LICENSE' writing manifest file 'thriftpy2.egg-info/SOURCES.txt' ~/anaconda3/envs/mtbi-python/lib/python3.10/site-packages/setuptools/command/build_py.py:201: _Warning: Package 'thriftpy2.protocol.cybin' is absent from the `packages` configuration. !! ******************************************************************************** ############################ # Package would be ignored # ############################ Python recognizes 'thriftpy2.protocol.cybin' as an importable package[^1], but it is absent from setuptools' `packages` configuration. This leads to an ambiguous overall configuration. If you want to distribute this package, please make sure that 'thriftpy2.protocol.cybin' is explicitly added to the `packages` configuration field. Alternatively, you can also rely on setuptools' discovery methods (for example by using `find_namespace_packages(...)`/`find_namespace:` instead of `find_packages(...)`/`find:`). 
You can read more about &quot;package discovery&quot; on setuptools documentation page: - https://setuptools.pypa.io/en/latest/userguide/package_discovery.html If you don't want 'thriftpy2.protocol.cybin' to be distributed and are already explicitly excluding 'thriftpy2.protocol.cybin' via `find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`, you can try to use `exclude_package_data`, or `include-package-data=False` in combination with a more fine grained `package-data` configuration. You can read more about &quot;package data files&quot; on setuptools documentation page: - https://setuptools.pypa.io/en/latest/userguide/datafiles.html [^1]: For Python, any directory (with suitable naming) can be imported, even if it does not contain any `.py` files. On the other hand, currently there is no concept of package data directory, all directories are treated like packages. ******************************************************************************** !! check.warn(importable) copying thriftpy2/transport/cybase.c -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport copying thriftpy2/transport/cybase.pxd -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport copying thriftpy2/transport/cybase.pyx -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport creating build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/protocol/cybin copying thriftpy2/protocol/cybin/cybin.c -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/protocol/cybin copying thriftpy2/protocol/cybin/cybin.pyx -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/protocol/cybin copying thriftpy2/protocol/cybin/endian_port.h -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/protocol/cybin copying thriftpy2/transport/memory/cymemory.c -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport/memory copying thriftpy2/transport/memory/cymemory.pyx -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport/memory copying 
thriftpy2/transport/buffered/cybuffered.c -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport/buffered copying thriftpy2/transport/buffered/cybuffered.pyx -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport/buffered copying thriftpy2/transport/framed/cyframed.c -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport/framed copying thriftpy2/transport/framed/cyframed.pyx -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport/framed copying thriftpy2/contrib/tracking/tracking.thrift -&gt; build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/tracking running build_ext building 'thriftpy2.transport.cybase' extension creating build/temp.macosx-11.0-arm64-cpython-310 creating build/temp.macosx-11.0-arm64-cpython-310/thriftpy2 creating build/temp.macosx-11.0-arm64-cpython-310/thriftpy2/transport arm64-apple-darwin20.0.0-clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem ~/anaconda3/envs/mtbi-python/include -arch arm64 -fPIC -O2 -isystem ~/anaconda3/envs/mtbi-python/include -arch arm64 -ftree-vectorize -fPIC -fPIE -fstack-protector-strong -O2 -pipe -isystem ~/anaconda3/envs/mtbi-python/include -D_FORTIFY_SOURCE=2 -isystem ~/anaconda3/envs/mtbi-python/include -I~/anaconda3/envs/mtbi-python/include/python3.10 -c thriftpy2/transport/cybase.c -o build/temp.macosx-11.0-arm64-cpython-310/thriftpy2/transport/cybase.o arm64-apple-darwin20.0.0-clang -bundle -undefined dynamic_lookup -Wl,-rpath,~/anaconda3/envs/mtbi-python/lib -L~/anaconda3/envs/mtbi-python/lib -Wl,-rpath,~/anaconda3/envs/mtbi-python/lib -L~/anaconda3/envs/mtbi-python/lib -Wl,-pie -Wl,-headerpad_max_install_names -Wl,-dead_strip_dylibs -Wl,-rpath,~/anaconda3/envs/mtbi-python/lib -L~/anaconda3/envs/mtbi-python/lib -ftree-vectorize -fPIC -fPIE -fstack-protector-strong -O2 -pipe -isystem ~/anaconda3/envs/mtbi-python/include -D_FORTIFY_SOURCE=2 -isystem ~/anaconda3/envs/mtbi-python/include 
build/temp.macosx-11.0-arm64-cpython-310/thriftpy2/transport/cybase.o -o build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport/cybase.cpython-310-darwin.so ld: warning: -pie being ignored. It is only used when linking a main executable building 'thriftpy2.transport.buffered.cybuffered' extension creating build/temp.macosx-11.0-arm64-cpython-310/thriftpy2/transport/buffered arm64-apple-darwin20.0.0-clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem ~/anaconda3/envs/mtbi-python/include -arch arm64 -fPIC -O2 -isystem ~/anaconda3/envs/mtbi-python/include -arch arm64 -ftree-vectorize -fPIC -fPIE -fstack-protector-strong -O2 -pipe -isystem ~/anaconda3/envs/mtbi-python/include -D_FORTIFY_SOURCE=2 -isystem ~/anaconda3/envs/mtbi-python/include -I~/anaconda3/envs/mtbi-python/include/python3.10 -c thriftpy2/transport/buffered/cybuffered.c -o build/temp.macosx-11.0-arm64-cpython-310/thriftpy2/transport/buffered/cybuffered.o arm64-apple-darwin20.0.0-clang -bundle -undefined dynamic_lookup -Wl,-rpath,~/anaconda3/envs/mtbi-python/lib -L~/anaconda3/envs/mtbi-python/lib -Wl,-rpath,~/anaconda3/envs/mtbi-python/lib -L~/anaconda3/envs/mtbi-python/lib -Wl,-pie -Wl,-headerpad_max_install_names -Wl,-dead_strip_dylibs -Wl,-rpath,~/anaconda3/envs/mtbi-python/lib -L~/anaconda3/envs/mtbi-python/lib -ftree-vectorize -fPIC -fPIE -fstack-protector-strong -O2 -pipe -isystem ~/anaconda3/envs/mtbi-python/include -D_FORTIFY_SOURCE=2 -isystem ~/anaconda3/envs/mtbi-python/include build/temp.macosx-11.0-arm64-cpython-310/thriftpy2/transport/buffered/cybuffered.o -o build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport/buffered/cybuffered.cpython-310-darwin.so ld: warning: -pie being ignored. 
It is only used when linking a main executable building 'thriftpy2.transport.memory.cymemory' extension creating build/temp.macosx-11.0-arm64-cpython-310/thriftpy2/transport/memory arm64-apple-darwin20.0.0-clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem ~/anaconda3/envs/mtbi-python/include -arch arm64 -fPIC -O2 -isystem ~/anaconda3/envs/mtbi-python/include -arch arm64 -ftree-vectorize -fPIC -fPIE -fstack-protector-strong -O2 -pipe -isystem ~/anaconda3/envs/mtbi-python/include -D_FORTIFY_SOURCE=2 -isystem ~/anaconda3/envs/mtbi-python/include -I~/anaconda3/envs/mtbi-python/include/python3.10 -c thriftpy2/transport/memory/cymemory.c -o build/temp.macosx-11.0-arm64-cpython-310/thriftpy2/transport/memory/cymemory.o arm64-apple-darwin20.0.0-clang -bundle -undefined dynamic_lookup -Wl,-rpath,~/anaconda3/envs/mtbi-python/lib -L~/anaconda3/envs/mtbi-python/lib -Wl,-rpath,~/anaconda3/envs/mtbi-python/lib -L~/anaconda3/envs/mtbi-python/lib -Wl,-pie -Wl,-headerpad_max_install_names -Wl,-dead_strip_dylibs -Wl,-rpath,~/anaconda3/envs/mtbi-python/lib -L~/anaconda3/envs/mtbi-python/lib -ftree-vectorize -fPIC -fPIE -fstack-protector-strong -O2 -pipe -isystem ~/anaconda3/envs/mtbi-python/include -D_FORTIFY_SOURCE=2 -isystem ~/anaconda3/envs/mtbi-python/include build/temp.macosx-11.0-arm64-cpython-310/thriftpy2/transport/memory/cymemory.o -o build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport/memory/cymemory.cpython-310-darwin.so ld: warning: -pie being ignored. 
It is only used when linking a main executable building 'thriftpy2.transport.framed.cyframed' extension creating build/temp.macosx-11.0-arm64-cpython-310/thriftpy2/transport/framed arm64-apple-darwin20.0.0-clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem ~/anaconda3/envs/mtbi-python/include -arch arm64 -fPIC -O2 -isystem ~/anaconda3/envs/mtbi-python/include -arch arm64 -ftree-vectorize -fPIC -fPIE -fstack-protector-strong -O2 -pipe -isystem ~/anaconda3/envs/mtbi-python/include -D_FORTIFY_SOURCE=2 -isystem ~/anaconda3/envs/mtbi-python/include -I~/anaconda3/envs/mtbi-python/include/python3.10 -c thriftpy2/transport/framed/cyframed.c -o build/temp.macosx-11.0-arm64-cpython-310/thriftpy2/transport/framed/cyframed.o arm64-apple-darwin20.0.0-clang -bundle -undefined dynamic_lookup -Wl,-rpath,~/anaconda3/envs/mtbi-python/lib -L~/anaconda3/envs/mtbi-python/lib -Wl,-rpath,~/anaconda3/envs/mtbi-python/lib -L~/anaconda3/envs/mtbi-python/lib -Wl,-pie -Wl,-headerpad_max_install_names -Wl,-dead_strip_dylibs -Wl,-rpath,~/anaconda3/envs/mtbi-python/lib -L~/anaconda3/envs/mtbi-python/lib -ftree-vectorize -fPIC -fPIE -fstack-protector-strong -O2 -pipe -isystem ~/anaconda3/envs/mtbi-python/include -D_FORTIFY_SOURCE=2 -isystem ~/anaconda3/envs/mtbi-python/include build/temp.macosx-11.0-arm64-cpython-310/thriftpy2/transport/framed/cyframed.o -o build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport/framed/cyframed.cpython-310-darwin.so ld: warning: -pie being ignored. 
It is only used when linking a main executable building 'thriftpy2.protocol.cybin' extension creating build/temp.macosx-11.0-arm64-cpython-310/thriftpy2/protocol creating build/temp.macosx-11.0-arm64-cpython-310/thriftpy2/protocol/cybin arm64-apple-darwin20.0.0-clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem ~/anaconda3/envs/mtbi-python/include -arch arm64 -fPIC -O2 -isystem ~/anaconda3/envs/mtbi-python/include -arch arm64 -ftree-vectorize -fPIC -fPIE -fstack-protector-strong -O2 -pipe -isystem ~/anaconda3/envs/mtbi-python/include -D_FORTIFY_SOURCE=2 -isystem ~/anaconda3/envs/mtbi-python/include -I~/anaconda3/envs/mtbi-python/include/python3.10 -c thriftpy2/protocol/cybin/cybin.c -o build/temp.macosx-11.0-arm64-cpython-310/thriftpy2/protocol/cybin/cybin.o arm64-apple-darwin20.0.0-clang -bundle -undefined dynamic_lookup -Wl,-rpath,~/anaconda3/envs/mtbi-python/lib -L~/anaconda3/envs/mtbi-python/lib -Wl,-rpath,~/anaconda3/envs/mtbi-python/lib -L~/anaconda3/envs/mtbi-python/lib -Wl,-pie -Wl,-headerpad_max_install_names -Wl,-dead_strip_dylibs -Wl,-rpath,~/anaconda3/envs/mtbi-python/lib -L~/anaconda3/envs/mtbi-python/lib -ftree-vectorize -fPIC -fPIE -fstack-protector-strong -O2 -pipe -isystem ~/anaconda3/envs/mtbi-python/include -D_FORTIFY_SOURCE=2 -isystem ~/anaconda3/envs/mtbi-python/include build/temp.macosx-11.0-arm64-cpython-310/thriftpy2/protocol/cybin/cybin.o -o build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/protocol/cybin.cpython-310-darwin.so ld: warning: -pie being ignored. It is only used when linking a main executable ~/anaconda3/envs/mtbi-python/lib/python3.10/site-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated. !! ******************************************************************************** Please avoid running ``setup.py`` directly. Instead, use pypa/build, pypa/installer, pypa/build or other standards-based tools. 
See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details. ******************************************************************************** !! self.initialize_options() installing to build/bdist.macosx-11.0-arm64/wheel running install running install_lib creating build/bdist.macosx-11.0-arm64 creating build/bdist.macosx-11.0-arm64/wheel creating build/bdist.macosx-11.0-arm64/wheel/thriftpy2 copying build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/server.py -&gt; build/bdist.macosx-11.0-arm64/wheel/thriftpy2 creating build/bdist.macosx-11.0-arm64/wheel/thriftpy2/transport copying build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/transport/_ssl.py -&gt; build/bdist.macosx-11.0-arm64/wheel/thriftpy2/transport creating build/bdist.macosx-11.0-arm64/wheel/thriftpy2/transport/memory copying ... creating build/bdist.macosx-11.0-arm64/wheel/thriftpy2/transport/buffered copying ... creating build/bdist.macosx-11.0-arm64/wheel/thriftpy2/transport/framed copying something... creating build/bdist.macosx-11.0-arm64/wheel/thriftpy2/protocol copying something... creating build/bdist.macosx-11.0-arm64/wheel/thriftpy2/protocol/cybin copying something... creating build/bdist.macosx-11.0-arm64/wheel/thriftpy2/parser copying something... creating build/bdist.macosx-11.0-arm64/wheel/thriftpy2/contrib copying build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/__init__.py -&gt; build/bdist.macosx-11.0-arm64/wheel/thriftpy2/contrib creating build/bdist.macosx-11.0-arm64/wheel/thriftpy2/contrib/aio copying build/lib.macosx-11.0-arm64-cpython-310/thriftpy2/contrib/aio/server.py -&gt; build/bdist.macosx-11.0-arm64/wheel/thriftpy2/contrib/aio creating build/bdist.macosx-11.0-arm64/wheel/thriftpy2/contrib/aio/transport copying something... creating build/bdist.macosx-11.0-arm64/wheel/thriftpy2/contrib/aio/protocol copying something... creating build/bdist.macosx-11.0-arm64/wheel/thriftpy2/contrib/tracking copying something... 
running install_egg_info Copying thriftpy2.egg-info to build/bdist.macosx-11.0-arm64/wheel/thriftpy2-0.4.16-py3.10.egg-info running install_scripts Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 2, in &lt;module&gt; File &quot;&lt;pip-setuptools-caller&gt;&quot;, line 34, in &lt;module&gt; File &quot;/private/var/folders/rx/2h26x2c17dq3ntcr8g9dl67r0000gn/T/pip-install-hw7ktz0x/thriftpy2_071a76a186d743c88526c39c0fa8cfb9/setup.py&quot;, line 76, in &lt;module&gt; setup(name=&quot;thriftpy2&quot;, File &quot;~/anaconda3/envs/mtbi-python/lib/python3.10/site-packages/setuptools/__init__.py&quot;, line 107, in setup return distutils.core.setup(**attrs) File &quot;~/anaconda3/envs/mtbi-python/lib/python3.10/site-packages/setuptools/_distutils/core.py&quot;, line 185, in setup return run_commands(dist) File &quot;~/anaconda3/envs/mtbi-python/lib/python3.10/site-packages/setuptools/_distutils/core.py&quot;, line 201, in run_commands dist.run_commands() File &quot;~/anaconda3/envs/mtbi-python/lib/python3.10/site-packages/setuptools/_distutils/dist.py&quot;, line 969, in run_commands self.run_command(cmd) File &quot;~/anaconda3/envs/mtbi-python/lib/python3.10/site-packages/setuptools/dist.py&quot;, line 1244, in run_command super().run_command(command) File &quot;~/anaconda3/envs/mtbi-python/lib/python3.10/site-packages/setuptools/_distutils/dist.py&quot;, line 988, in run_command cmd_obj.run() File &quot;~/anaconda3/envs/mtbi-python/lib/python3.10/site-packages/wheel/bdist_wheel.py&quot;, line 328, in run impl_tag, abi_tag, plat_tag = self.get_tag() File &quot;~/anaconda3/envs/mtbi-python/lib/python3.10/site-packages/wheel/bdist_wheel.py&quot;, line 278, in get_tag assert tag in supported_tags, &quot;would build wheel with unsupported tag {}&quot;.format(tag) AssertionError: would build wheel with unsupported tag ('cp310', 'cp310', 'macosx_11_0_arm64') [end of output] note: This error originates from a subprocess, and is likely not a problem with 
pip. ERROR: Failed building wheel for thriftpy2 Running setup.py clean for thriftpy2 Failed to build thriftpy2 ERROR: Could not build wheels for thriftpy2, which is required to install pyproject.toml-based projects. </code></pre>
<python><macos>
2023-06-16 02:29:29
1
1,167
Long.zhao
76,486,528
7,345,779
White space stripped from Django Template tags in PrismJS code blocks
<p>When I render Django template code in a PrismJS block, it strips the white space inside {{ }} and {% %}.</p> <p>For example, pre-render, the code might be</p> <pre><code>{% image self.search_image thumbnail-400x200 as img %} &lt;img src=&quot;{{ img.url }}&quot; alt=&quot;{{ img.title }}&quot;&gt; </code></pre> <p>But the rendered code block will be</p> <pre><code>{%image self.search_image thumbnail-400x200 as img%} &lt;img src=&quot;{{img.url}}&quot; alt=&quot;{{img.title}}&quot;&gt; </code></pre> <p>It's not a case of CSS; the space is missing from the HTML. I can set the language to HTML, or even Python, etc.; the same issue remains.</p> <p>Does anyone know of a way to prevent this?</p>
<python><django><syntax-highlighting><prismjs>
2023-06-16 02:01:24
2
1,428
Rich - enzedonline
76,486,517
7,128,840
How to stream JSON data using Server-Sent Events
<p>Setting up Server-Sent events is relatively simple - especially using FastAPI - you can do something like this:</p> <pre><code>def fake_data_streamer(): for i in range(10): yield &quot;some streamed data&quot; time.sleep(0.5) @app.get('/') async def main(): return StreamingResponse(fake_data_streamer()) </code></pre> <p>And upon an HTTP GET, the connection will return &quot;some streamed data&quot; every 0.5 seconds.</p> <p>What if I wanted to stream some structured data though, like JSON?</p> <p>For example, I want the data to be JSON, so something like:</p> <pre><code>def fake_data_streamer(): for i in range(10): yield json.dumps({'result': 'a lot of streamed data', &quot;seriously&quot;: [&quot;so&quot;, &quot;much&quot;, &quot;data&quot;]}, indent=4) time.sleep(0.5) </code></pre> <p>Is this a proper server side implementation? Does this pose a risk of the client receiving partially formed payloads - which is fine in the plaintext case, but makes the JSON un-parseable?</p> <p>If this is okay, how would you read this from the client side? Something like this:</p> <pre><code>async def main(): async with aiohttp.ClientSession() as session: async with session.get(url) as resp: while True: chunk = await resp.content.readuntil(b&quot;\n&quot;) await asyncio.sleep(1) if not chunk: break print(chunk) </code></pre> <p>Although I am not sure what the proper separator / reading mode is appropriate to ensure that the client always receives fully formed JSON events.</p> <p>Or, is this a completely improper way to accomplish streaming of fully formed JSON events.</p> <p>For reference, OpenAI achieves streaming of complete JSON objects in their API: <a href="https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb" rel="nofollow noreferrer">https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb</a></p>
<python><json><http><python-asyncio><fastapi>
2023-06-16 01:57:03
1
740
GenericDeveloperProfile
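One way to make the streamed events in the question above reliably parseable is newline-delimited JSON (NDJSON): serialize each event without `indent` (so the payload itself contains no newlines) and terminate it with `\n`; the client then buffers and splits on newlines, so it never hands a partial payload to the parser. This is a framework-agnostic sketch of the framing only; the function names are mine, and on the server side the generator would be passed to `StreamingResponse` just like the question's `fake_data_streamer`.

```python
import json

def frame_events(events):
    # Serialize each event as one newline-terminated JSON line (NDJSON).
    # Omitting indent= keeps each payload on a single line, so "\n" is a
    # safe delimiter for the client to split on.
    for event in events:
        yield json.dumps(event) + "\n"

def parse_stream(chunks):
    # Reassemble complete events even if the network delivers the bytes
    # split at arbitrary positions.
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        while "\n" in buffer:
            line, buffer = buffer.split("\n", 1)
            if line:
                yield json.loads(line)

events = [
    {"result": "a lot of streamed data", "seriously": ["so", "much", "data"]},
    {"result": "more data"},
]
framed = "".join(frame_events(events))
# Simulate awkward chunk boundaries, including ones mid-object.
chunks = [framed[i:i + 7] for i in range(0, len(framed), 7)]
received = list(parse_stream(chunks))
print(received == events)  # True
```

Note that the question's `json.dumps(..., indent=4)` would embed newlines inside each payload, which is exactly what this framing must avoid; with one JSON document per line, the client's `readuntil(b"\n")` approach works for the same reason.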
76,486,410
15,239,717
How Can I implement User Sent Message and Receive Message in Django
<p>I am working on a Django project where I have three types of users: Landlord, Agent, and Prospect. I want the Prospect user to be able to contact the Property Owner by sending a message while on the Property Detail view, and the Property Owner to be able to reply back to the Prospect using the same view. I also want all these users to maintain an Inbox and Sent Message items. I'm struggling to figure out whether my logic is sound and how to implement the functionality using Django. Is there a library that will make user messaging within Django easier?</p> <p>In my code, I maintain different Models for these users with a OneToOneField and use signals for automatic profile creation upon registration. I also have a Profile Model connected through the same relationship, and Message Model. I tried passing the Property Owner ID as the Recipient ID and Property ID on url from the Property Detail view to the send_message function view, but I get the error &quot;No 'User matches the given query'.&quot;</p> <p>Here are my views:</p> <pre class="lang-py prettyprint-override"><code>def property_detail(request, property_id): user = request.user #Check if user is authenticated if not user.is_authenticated: logout(request) messages.warning(request, 'Session expired. 
Please log in again.') return redirect('account-login') # Replace 'login' with your actual login URL property_instance = get_object_or_404(Property, pk=property_id) properties = [] # Initialize 'properties' as an empty list property_owner = None # Get the properties for the current user if hasattr(user, 'landlord'): properties = Property.objects.filter(landlord__user=request.user).prefetch_related('agent').order_by('-last_updated')[:4] elif hasattr(user, 'agent'): properties = Property.objects.filter(agent__user=request.user).prefetch_related('landlord').order_by('-last_updated')[:4] #Get prospect user profile elif hasattr(user, 'prospect_profile'): properties = Property.objects.order_by('-last_updated')[:4] if property_instance.landlord: property_owner = property_instance.landlord elif property_instance.agent: property_owner = property_instance.agent context = { 'properties':properties, 'property_owner': property_owner, 'page_title': 'Property Detail', 'property': property_instance, } return render(request, 'realestate/property_detail.html', context) def send_message(request, property_id, recipient_id): property = get_object_or_404(Property, id=property_id) recipient = get_object_or_404(User, id=recipient_id) try: recipient = User.objects.get(id=recipient_id) except User.DoesNotExist: # Handle the case when the recipient doesn't exist message.warning(request, 'Recipient Does Not Exist.') return redirect('account-login') # Or re if request.method == 'POST': form = MessageForm(request.POST) if form.is_valid(): message = form.cleaned_data['content'] subject = form.cleaned_data['subject'] message = Message(sender=request.user, recipient=recipient, subject=subject, content=message, property=property) message.save() messages.success(request, 'Message sent successfully.') return redirect('inbox') else: form = MessageForm() context = { 'form': form, 'page_title': 'Send Message', 'recipient': recipient, } return render(request, 'realestate/send_message.html', context) 
</code></pre> <p>Here is the Message model:</p> <pre class="lang-py prettyprint-override"><code>class Message(models.Model): SUBJECT_CHOICES = [ ('Inquiry', 'Inquiry'), ('Reinquiry', 'Reinquiry'), ('Negotiation', 'Negotiation'), ('Payment', 'Payment'), ] property = models.ForeignKey(Property, on_delete=models.CASCADE) sender = models.ForeignKey(User, on_delete=models.CASCADE, related_name='sent_messages') recipient = models.ForeignKey(User, on_delete=models.CASCADE, related_name='received_messages') subject = models.CharField(max_length=20, choices=SUBJECT_CHOICES) content = models.TextField(max_length=220, blank=True, null=True) status = models.BooleanField(default=False) date = models.DateTimeField(auto_now_add=True) def __str__(self): return f'{self.sender.username} - {self.recipient.username}' def mark_as_read(self): self.status = True self.save() </code></pre>
<python><django>
2023-06-16 01:19:35
0
323
apollos
76,486,341
9,883,126
Can someone please explain to me why sorting function returns "None"
<pre><code>numbers = input(&quot;Enter numbers.&quot;) numbersArray = numbers.split() realNumbersArray = [] for x in numbersArray: realNumbersArray.append(int(x)) print(x) print(len(realNumbersArray)) print(realNumbersArray.sort()) </code></pre> <p>Can someone explain why the second print command keeps returning &quot;None&quot;?</p>
<python>
2023-06-16 00:54:42
2
597
DennisM
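The `None` in the question above comes from the fact that `list.sort()` sorts the list in place and, like most mutating methods in Python, returns `None`; printing its return value therefore prints `None`. A minimal illustration of the two standard fixes (the `input()` call is replaced with a literal so the snippet is self-contained):

```python
numbers = "3 1 2"  # stands in for input("Enter numbers.")
real_numbers = [int(x) for x in numbers.split()]

# Fix 1: sort in place first, then print the list itself.
real_numbers.sort()          # returns None; the list is now sorted
print(real_numbers)          # [1, 2, 3]

# Fix 2: sorted() returns a new sorted list, so it can be printed directly.
original = [int(x) for x in numbers.split()]
print(sorted(original))      # [1, 2, 3]
print(original)              # [3, 1, 2] -- the original is unchanged
```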
76,486,276
1,609,464
Using pyyaml, how can I override an anchor in a YAML config and have the updated anchor be persisted throughout?
<p>Here's an example of the file <code>config.yml</code>:</p> <pre><code>key_setting: &amp;key_setting True main_params: param_A: param_A_1: 1 param_A_2: 2 param_A_setting: *key_setting param_B: param_B_1: 3 param_B_2: 4 param_B_setting: *key_setting </code></pre> <p>If I run <code>yaml.load(open('config.yml').read())</code>, <code>param_A_setting</code> and <code>param_B_setting</code> would both be <code>True</code>, which was set by <code>key_setting</code>.</p> <p>Suppose I want to override <code>key_setting</code> to <code>False</code> and have that persist across <code>param_A_setting</code> and <code>param_B_setting</code>. How can I do that by overriding only <code>key_setting</code> without needing to manually override <code>param_A_setting</code> and <code>param_B_setting</code>?</p>
<python><yaml><pyyaml><yaml-anchors>
2023-06-16 00:30:48
1
1,541
nwly
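For the question above, note that PyYAML resolves `*key_setting` aliases while parsing, so by the time `yaml.load` returns there is nothing left to override. One workaround, sketched below under the assumption that the anchored value is a simple one-line scalar, is to rewrite the anchored scalar in the YAML text before parsing; the helper name and regex are mine, not part of PyYAML.

```python
import re
import yaml

CONFIG = """\
key_setting: &key_setting True
main_params:
  param_A:
    param_A_setting: *key_setting
  param_B:
    param_B_setting: *key_setting
"""

def load_with_override(text, anchor, value):
    # Replace the scalar that immediately follows the anchor definition.
    # Every *anchor alias then picks up the new value, because aliases
    # are resolved at parse time.
    pattern = rf"(&{anchor}\s+)\S+"
    return yaml.safe_load(re.sub(pattern, rf"\g<1>{value}", text))

cfg = load_with_override(CONFIG, "key_setting", "False")
print(cfg["main_params"]["param_A"]["param_A_setting"])  # False
```

Post-processing the loaded dict is the alternative when the config cannot be treated as text, but then every field that aliased the anchor has to be updated individually, which is what the question is trying to avoid.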
76,486,217
1,478,905
Click on a tag using Python Selenium
<p>I am trying to click on an element using Selenium but keep getting the error message <code>selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element {&quot;method&quot;:&quot;partial link text&quot;,&quot;selector&quot;:&quot;foo&quot;}</code>. Here is the element I want to click:</p> <pre><code>&lt;a href=&quot;#&quot; onclick=&quot;javascript:toMove('0000172','c99','https://www.boo.com/usemn/co/UM_COJ_CIQ003.jsp','mainFrame','PT_FMJ_SFQ001.jsp');&quot; onfocus=&quot;blur()&quot;&gt; foo &lt;/a&gt; </code></pre> <p>And here is the code I'm using in Selenium:</p> <pre><code>link = driver.find_element(By.PARTIAL_LINK_TEXT, &quot;foo&quot;) </code></pre>
<python><selenium-webdriver>
2023-06-16 00:10:37
1
997
Diego Quirós
76,486,080
2,793,602
Pandas read_json script that used to work now produces an error
<p>I have a script that up until recently worked fine, but is now producing an error.</p> <pre><code>import requests import pandas as pd # Set the url to given endpoint url = &quot;https://SomeURL/SomeEndpoint&quot; print('URL set') # Connect to endpoint with credentials and put results in dictionary URLresponse = requests.get(url,auth=(&quot;SomeUser&quot;, &quot;SomePassword&quot;), verify=True) print('connection to endpoint') # Load the response as proper JSON into a var rawdata = (URLresponse.content) print(type(rawdata)) print('populating variable') # print(rawdata) # Load the var into a dataframe df = pd.read_json(rawdata) print('load variable into df') print(df) </code></pre> <p>This used to work fine but now it is producing the following error:</p> <pre><code> File &quot;C:\Program Files\Python310\lib\site-packages\pandas\io\common.py&quot;, line 901, in get_handle raise TypeError( TypeError: Expected file path name or file-like object, got &lt;class 'bytes'&gt; type </code></pre> <p>How can I go about troubleshooting this?</p>
<python><pandas>
2023-06-15 23:25:56
1
457
opperman.eric
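One plausible explanation for the error in the question above, assuming the environment recently picked up a newer pandas, is that recent pandas versions no longer accept raw JSON `bytes`/`str` in `read_json` and expect a path or file-like object instead. Wrapping the response bytes in a `BytesIO` restores the old behaviour; the sample payload below stands in for `URLresponse.content`:

```python
import io
import json

import pandas as pd

# Stand-in for rawdata = URLresponse.content (a bytes object).
rawdata = json.dumps([{"a": 1, "b": "x"}, {"a": 2, "b": "y"}]).encode()

# Newer pandas raises "Expected file path name or file-like object" for
# raw bytes, so wrap them in a file-like object first.
df = pd.read_json(io.BytesIO(rawdata))
print(df.shape)  # (2, 2)
```

`pd.read_json(io.StringIO(URLresponse.text))` is the equivalent for text, and pinning the pandas version that last worked is the quickest way to confirm this diagnosis.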
76,486,032
884,553
with bokeh, how to export png or get Pillow image object with transparent background?
<p>Bokeh's <a href="https://docs.bokeh.org/en/2.4.1/docs/user_guide/export.html" rel="nofollow noreferrer">documentation</a> says that to create a PNG with a transparent background, one should set:</p> <pre><code>plot.background_fill_color = None plot.border_fill_color = None </code></pre> <p>But since internally it saves the PNG file by <a href="https://docs.bokeh.org/en/2.4.1/_modules/bokeh/io/export.html#get_screenshot_as_png" rel="nofollow noreferrer">calling</a>:</p> <pre><code>png = web_driver.get_screenshot_as_png() </code></pre> <p>and the saved screenshot always has a white background, I cannot find a way to save a PNG with a transparent background. Please let me know what I did wrong.</p>
<python><bokeh><screenshot><transparent>
2023-06-15 23:11:33
1
373
TMS
76,485,990
907,263
How are you supposed to do admin/sysop actions with Pywikibot 8?
<p>I have been using an outdated version of Pywikibot and Python 2 and finally got around to upgrading to Python 3 and Pywikibot 8. All support for &quot;sysop&quot; accounts seems to have been removed and there's no documentation I can find on how to work with two accounts, one of which is a bot account and one of which is a sysop account. The manual page <a href="https://www.mediawiki.org/wiki/Manual:Pywikibot/delete.py" rel="nofollow noreferrer">https://www.mediawiki.org/wiki/Manual:Pywikibot/delete.py</a> for the delete script, which requires sysop access, says you need your user-config.py to have the sysop account listed in place of the bot account. Does that mean I need to manually edit the file every time I need to switch between doing a regular bot action and a bot action requiring sysop access? There must be a better way. Trying to read through <a href="https://phabricator.wikimedia.org/T71283" rel="nofollow noreferrer">https://phabricator.wikimedia.org/T71283</a>, people seem to have suggested the possibility of allowing multiple accounts to be listed for a given Wiki, but that doesn't seem to be implemented. It's hard for me to believe such basic functionality was simply removed with no replacement in mind.</p>
<python><mediawiki><pywikibot><wiktionary>
2023-06-15 23:00:04
1
7,792
Urban Vagabond
76,485,819
2,461,398
pyarrow's GcsFileSystem fails with "SSL peer certificate or SSH remote key was not OK"
<p>I can't figure out how to use <code>pyarrow</code>'s <code>GcsFileSystem</code>. It throws an <code>ArrowException: Unknown error: google::cloud::Status(UNKNOWN: Permanent error GetObjectMetadata: PerformWork() - CURL error [60]=SSL peer certificate or SSH remote key was not OK)</code> whatever I try.</p> <p>Is there a minimal way to check if <code>GcsFileSystem</code> found the credentials and is able to authenticate?</p> <p>I have the GCP JSON credentials file:</p> <pre><code>import os from pathlib import Path Path(os.environ[&quot;GOOGLE_APPLICATION_CREDENTIALS&quot;]).exists() &gt; True </code></pre> <p>The official Google storage client has no trouble authenticating and see blobs:</p> <pre><code># The bucket and blob we care about. bucket = &quot;some-bucket&quot; blob_name = &quot;path/to/file.parquet&quot; path = f&quot;{bucket_name}/{blob_name}&quot; # The GCS client authenticates sees the object. from google.cloud.storage import Client Client().get_bucket(bucket).get_blob(blob_name).exists() &gt; True </code></pre> <p>But I can't create a <code>dataset</code> using <code>GcsFileSystem</code> using <code>pyarrow</code>:</p> <pre><code>from pyarrow.fs import GcsFileSystem from pyarrow.dataset import dataset # Create GcsGcsFileSystem instance. # Should pick up credentials from `GOOGLE_APPLICATION_CREDENTIALS` env var. gcs_fs = GcsFileSystem() # Can't create a dataset. ds = dataset( source=path, format=&quot;parquet&quot;, filesystem=gcs_fs, ) &gt; --------------------------------------------------------------------------- ArrowException Traceback (most recent call last) ... 
File /opt/conda/lib/python3.11/site-packages/pyarrow/dataset.py:446, in _filesystem_dataset(source, schema, filesystem, partitioning, format, partition_base_dir, exclude_invalid_files, selector_ignore_prefixes) 444 fs, paths_or_selector = _ensure_multiple_sources(source, filesystem) 445 else: --&gt; 446 fs, paths_or_selector = _ensure_single_source(source, filesystem) 448 options = FileSystemFactoryOptions( 449 partitioning=partitioning, 450 partition_base_dir=partition_base_dir, 451 exclude_invalid_files=exclude_invalid_files, 452 selector_ignore_prefixes=selector_ignore_prefixes 453 ) 454 factory = FileSystemDatasetFactory(fs, paths_or_selector, format, options) File /opt/conda/lib/python3.11/site-packages/pyarrow/dataset.py:413, in _ensure_single_source(path, filesystem) 410 path = filesystem.normalize_path(path) 412 # retrieve the file descriptor --&gt; 413 file_info = filesystem.get_file_info(path) 415 # depending on the path type either return with a recursive 416 # directory selector or as a list containing a single file 417 if file_info.type == FileType.Directory: File /opt/conda/lib/python3.11/site-packages/pyarrow/_fs.pyx:571, in pyarrow._fs.FileSystem.get_file_info() File /opt/conda/lib/python3.11/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status() File /opt/conda/lib/python3.11/site-packages/pyarrow/error.pxi:138, in pyarrow.lib.check_status() ArrowException: Unknown error: google::cloud::Status(UNKNOWN: Permanent error GetObjectMetadata: PerformWork() - CURL error [60]=SSL peer certificate or SSH remote key was not OK) </code></pre> <p>I can't use <code>GcsFileSystem</code> directly to get blob info either:</p> <pre><code>gcs_fs.get_file_info(path) &gt; --------------------------------------------------------------------------- ... ArrowException: Unknown error: google::cloud::Status(UNKNOWN: Permanent error GetObjectMetadata: PerformWork() - CURL error [60]=SSL peer certificate or SSH remote key was not OK) </code></pre>
<python><ssl><pyarrow>
2023-06-15 22:11:29
1
1,853
capitalistcuttle
76,485,812
851,699
How can I smoothly stream frames from a video that is currently being recorded (Python)
<p>I am trying to stream a video (into numpy arrays) while recording it in Python. Currently, I am recording using ffmpeg, and opencv to read it, but it is not going smoothly.</p> <p><strong>GOAL: Receive and record a video stream in real-time, into a video file with reasonable compression, while being able to have fast random access to any frame received up to that point (minus perhaps an acceptable buffer of e.g. 30 frames / 1 second).</strong></p> <p><strong>The frame read from a given index lookup must always be identical.. i.e. if we open the video later and read from it again, we must get identical frames as when we read them &quot;live&quot;.</strong></p> <p>Here is my current code (which runs on Mac), which attempts to save a video stream from the camera, while reading from the video file as it is being written to.</p> <pre><code>import datetime, os, subprocess, time, cv2 video_path = os.path.expanduser(f&quot;~/Downloads/stream_{datetime.datetime.now()}.mkv&quot;) # Start a process that record the video with ffmpeg process = subprocess.Popen(['ffmpeg', '-y', '-f', 'avfoundation', '-framerate', '30', '-i', '0:none', '-preset', 'fast', '-crf', '23', '-b:v', '8000k', video_path]) time.sleep(4) # Let it start assert os.path.exists(video_path), &quot;Video file not created&quot; # Let's simulate a process that gets random frames from the video cap = cv2.VideoCapture(video_path) try: while True: ret, frame = cap.read() # print(f&quot;Captured frame of shape {frame.shape}&quot; if frame is not None else &quot;No frame available&quot;) if frame is not None: print(f&quot;Got frame of shape {frame.shape}&quot;) cv2.imshow(&quot;Q to Quit&quot;, frame) if cv2.waitKey(1) &amp; 0xFF == ord('q'): break else: print(f&quot;No frame available on {video_path}.. 
waiting for more frames...&quot;) time.sleep(0.1) except KeyboardInterrupt: pass process.terminate() cap.release() </code></pre> <p>This code never loads a frame.</p> <p>Things I have discovered:</p> <ul> <li>If I renew the cap with <code>cap = cv2.VideoCapture(video_path)</code> in the else block, it does start showing frames from the start of the recording. After using up all frames, it stops. If I keep track frames seen so far and do</li> </ul> <pre><code> else: print(f&quot;No frame available on {video_path}&quot;) time.sleep(0.1) cap = cv2.VideoCapture(video_path) print(f&quot;Restarting video capture from frame {frames_seen_so_far}&quot;) cap.set(cv2.CAP_PROP_POS_FRAMES, frames_seen_so_far) # DOES NOT WORK </code></pre> <p>It does not work (the <code>cap.set..</code> makes no difference).</p>
<python><opencv><ffmpeg><video-streaming>
2023-06-15 22:09:48
1
13,753
Peter
76,485,769
2,651,075
How to use mapped values for numerical chart axis ticks
<p>Say I have a variable, <code>df</code>, that contains some data that looks like this in pandas</p> <pre><code>Date Number Letter 2017-10-31 2 B 2019-08-31 1 A 2021-11-30 3 C ... </code></pre> <p>I'd like to chart this out in seaborn, so I use some code like this</p> <pre><code>sb.lineplot(data=df, x='Date', y='Number') plt.show() </code></pre> <p>And I get an orderly line chart that looks something like this <a href="https://i.sstatic.net/UmCCH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UmCCH.png" alt="enter image description here" /></a></p> <p>However, I'd like the y-axis tick labels to preserve the numerical order of the number column, but present as labels from the letter column. For instance, instead of the y-axis being 10 - 17 as above, I'd like the y-axis to read j - t.</p> <p>Is there a way to implement that through the sb.lineplot arguments, or would I need another way?</p>
<python><matplotlib><seaborn><yaxis>
2023-06-15 21:56:44
2
1,850
Luke
76,485,745
2,102,025
Remove dictionary in list Python
<p>I have two dictionaries: <code>output</code> contains a main dataset including a list of relevant items and <code>output_historical</code> contains a subset of the items in the list of <code>output</code>.</p> <p>I would like to add the value of the <code>output_historical</code> keys to the <code>output</code> values, but delete the whole list-item if there is no value in <code>output_historical</code>. In the over-simplified example below, <code>{'test2': {'info': '1'}}</code> should be deleted from <code>output</code>.</p> <pre><code>output_historical = [ { 'test1': {'result': 0}, 'test3': {'result': 3} } ] output = [ {'tests': [ {'test1': {'info': '0'}}, {'test2': {'info': '1'}}, {'test3': {'info': '2'}} ] } ] for item in output: for testobject in item[&quot;tests&quot;]: for test in testobject.copy(): if test in output_historical[0]: print(&quot;add&quot;, test) testobject[test][&quot;result&quot;] = output_historical[0][test][&quot;result&quot;] else: print(&quot;del&quot;, test) # del ?? print(output) # Expected output: [{'tests': [{'test1': {'info': '0', 'result': 0}}, {'test2': {'info': '1'}}, {'test3': {'info': '2', 'result': 3}}]}] </code></pre>
<python><list><dictionary>
2023-06-15 21:50:44
1
579
Scripter
76,485,620
11,922,765
Python StatsModels: ValueError: Expected frequency D. Got M
<p>I am using <code>statsmodels.graphics</code> to draw a <code>month_plot</code> from timeseries data in a <a href="https://www.kaggle.com/datasets/robikscube/hourly-energy-consumption?select=DOM_hourly.csv" rel="nofollow noreferrer">kaggle dataset</a>. I have converted the data to daily frequency mean data as required for the plot. However, I am getting an error that says <code>the expected data frequency is D, but the actual data frequency is M</code>, whereas my data is already at daily frequency.</p> <pre><code>import pandas as pd from statsmodels.graphics.tsaplots import month_plot import matplotlib.pyplot as plt df = pd.read_csv('/kaggle/input/hourly-energy-consumption/DOM_hourly.csv') df.set_index('Datetime', inplace=True, drop=True) df.index = pd.to_datetime(df.index, format='%Y-%m-%d %H:%M:%S') # drop duplicated index df = df[~df.index.duplicated(keep='first')] # convert df to daily mean frequency dataframe ddf = df.resample(rule='24H', kind='interval').mean().to_period('d') # print example dataframe ddf # # DOM_MW # Datetime # 2005-05-01 7812.347826 # 2005-05-02 8608.083333 # ... ... # 2017-12-30 14079.125000 # 2017-12-31 15872.833333 # Monthly plot from the Daily frequency data plt.figure(figsize=(14,4)) month_plot(ddf) plt.show() </code></pre> <p>Present output: As you can see above, my <code>ddf</code> is clearly daily-frequency data, but I get the following error saying my <code>ddf</code> data is M (monthly) while D (daily) was expected.</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-7-675f2911920c&gt; in &lt;module&gt; 7 8 plt.figure(figsize=(14,4)) ----&gt; 9 month_plot(ddf) 10 plt.show() ValueError: Expected frequency D. Got M </code></pre>
<python><pandas><dataframe><statsmodels><pandas-resample>
2023-06-15 21:25:43
1
4,702
Mainland
76,485,310
14,387,264
Python: How to create a unique dataframe from multiple dataframes with the same dimensions
<p>Given an empty dataframe with assigned column names:</p> <pre><code>colnames = ('ACCT', 'CTAT', 'AAAT', 'ATCG')*3 df = pd.DataFrame(columns=colnames) </code></pre> <p>I want to loop over dataframes which have the below structure: (giving 2 for demonstration)</p> <pre><code>sample_df = pd.DataFrame() sample_df['tetran'] = colnames sample_df['Frequency'] = (423, 512, 25, 123,632,124,614,73,14,75,311,155) conids = (&quot;cl1_42&quot;, &quot;cl1_41&quot;, &quot;cl2_31&quot;) rep_conids = [val for val in conids for _ in range(4)] sample_df['contig_id'] = rep_conids sample_df_2 = pd.DataFrame() sample_df_2['tetran'] = colnames sample_df_2['Frequency'] = (724, 132, 4, 102,423,402,616,734,153,751,31,55) conids_2 = (&quot;se1_51&quot;, &quot;se1_21&quot;, &quot;se2_53&quot;) rep_conids_2 = [val for val in conids_2 for _ in range(4)] sample_df_2['contig_id'] = rep_conids_2 </code></pre> <p><strong>The objective is:</strong></p> <ol> <li>Add each 'Frequency' value from the 'sample_df's to the corresponding 'tetran' value of the 'df' and add a new column to be the sample_df['contig_id']</li> </ol> <p>There are multiple 'sample_df' dataframes, so this is the idea of the desired output:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>index</th> <th>ACCT</th> <th>CTAT</th> <th>AAAT</th> <th>ATCG</th> </tr> </thead> <tbody> <tr> <td>cl1_42</td> <td>423</td> <td>512</td> <td>25</td> <td>123</td> </tr> <tr> <td>cl1_41</td> <td>632</td> <td>124</td> <td>614</td> <td>73</td> </tr> <tr> <td>cl2_31</td> <td>14</td> <td>75</td> <td>311</td> <td>155</td> </tr> <tr> <td>se1_51</td> <td>724</td> <td>132</td> <td>4</td> <td>102</td> </tr> <tr> <td>se1_21</td> <td>423</td> <td>402</td> <td>616</td> <td>734</td> </tr> <tr> <td>se2_53</td> <td>153</td> <td>751</td> <td>31</td> <td>55</td> </tr> </tbody> </table> </div> <p>I know how to do this in R, but I need it done in Python, so I cannot share my attempt here because it is in R.</p> <p>Thanks for your time
:)</p>
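For what it's worth, this reshaping is a single `pivot` after concatenating the per-sample frames; no pre-allocated empty `df` is needed. A sketch using the two sample frames from the question:

```python
import pandas as pd

colnames = ("ACCT", "CTAT", "AAAT", "ATCG") * 3

sample_df = pd.DataFrame({
    "tetran": colnames,
    "Frequency": (423, 512, 25, 123, 632, 124, 614, 73, 14, 75, 311, 155),
    "contig_id": [c for c in ("cl1_42", "cl1_41", "cl2_31") for _ in range(4)],
})
sample_df_2 = pd.DataFrame({
    "tetran": colnames,
    "Frequency": (724, 132, 4, 102, 423, 402, 616, 734, 153, 751, 31, 55),
    "contig_id": [c for c in ("se1_51", "se1_21", "se2_53") for _ in range(4)],
})

out = (pd.concat([sample_df, sample_df_2])                 # one frame per sample
         .pivot(index="contig_id", columns="tetran", values="Frequency")
         .loc[:, ["ACCT", "CTAT", "AAAT", "ATCG"]])        # restore column order
```

`pivot` sorts the index alphabetically; `out.reindex([...])` restores the original contig order if that matters.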
<python><pandas><dataframe>
2023-06-15 20:29:32
1
451
Valentin
76,485,157
5,984,358
How to determine local symmetry of graph nodes in networkx?
<p>I am using NetworkX to generate undirected graphs like the one shown in the picture below. The colors represent the equality of graph nodes, i.e., nodes of the same color are of the same type. The colors are stored as an attribute of each node. Now, I wish to determine all possible symmetry equivalent groups of nodes in the graph using NetworkX. By symmetry equivalent, I mean that switching those nodes will provide an identical graph (not just isomorphic, but identical).</p> <p><img src="https://i.sstatic.net/9IiQp.png" alt="graph picture" /></p> <p>For example, in the above graph, nodes 5 and 6 are interchangeable by symmetry. Thus, <code>{5:6}</code> is a local symmetry mapping. Additionally, nodes 2, 5, 6 and nodes 3, 7, 8 are also identical due to symmetry. Therefore, <code>{2:3, 5:7, 6:8}</code> would also be a valid symmetry mapping. There would be many other valid symmetry mappings.</p> <p><strong>Is there a function in NetworkX that I can use to identify these symmetry mappings (and possibly generate new graphs by permuting those node indices)?</strong> A google search led me to <a href="https://networkx.org/documentation/stable/reference/algorithms/generated/generated/networkx.algorithms.isomorphism.ISMAGS.analyze_symmetry.html#networkx.algorithms.isomorphism.ISMAGS.analyze_symmetry" rel="nofollow noreferrer">this</a> page, but there is no documentation on how to use this function, and it is not even clear to me if the function does what I am trying to accomplish or is it something else entirely. Any help is greatly appreciated.</p>
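One approach, assuming the goal is color-preserving automorphisms, is to match the graph against itself with NetworkX's `GraphMatcher` and a `categorical_node_match` on the color attribute; every mapping it yields is a relabelling that produces an identical graph. A small sketch (toy graph, not the one in the picture):

```python
import networkx as nx
from networkx.algorithms import isomorphism

# star: red node 1 -- blue node 2 -- two green leaves 5, 6
G = nx.Graph()
G.add_nodes_from([(1, {"color": "red"}), (2, {"color": "blue"}),
                  (5, {"color": "green"}), (6, {"color": "green"})])
G.add_edges_from([(1, 2), (2, 5), (2, 6)])

nm = isomorphism.categorical_node_match("color", None)
gm = isomorphism.GraphMatcher(G, G, node_match=nm)

# all color-preserving automorphisms (the identity is always among them)
autos = list(gm.isomorphisms_iter())
swaps = [m for m in autos if any(k != v for k, v in m.items())]
```

Applying one mapping with `nx.relabel_nodes(G, mapping)` yields the permuted (identical) graph. Note that the number of automorphisms can grow factorially on highly symmetric graphs, so for large graphs the ISMAGS machinery you found (which is built for symmetry handling) may be the more scalable route.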
<python><networkx><graph-theory>
2023-06-15 20:01:10
0
327
S R Maiti
76,485,145
1,675,422
How to use a relay in Python graphql-core
<p>The following code produces a <code>TypeError</code> error</p> <p><code>TypeError: Schema must contain uniquely named types but contains multiple types named 'Query'.</code></p> <pre class="lang-py prettyprint-override"><code> import json from graphql.utilities import build_schema from graphql.type import GraphQLObjectType, GraphQLSchema def read_file(file_path, is_json: bool = False): file_contents = None try: with open(file_path, &quot;r&quot;) as file: file_contents = file.read() except FileNotFoundError: print(f&quot;File '{file_path}' not found.&quot;) except IOError: print(f&quot;Error reading file '{file_path}'.&quot;) if not file_contents: raise Exception(&quot;GraphQL schema is empty&quot;) return json.loads(file_contents) if is_json else file_contents schema = build_schema(read_file(&quot;schemas/github.graphql&quot;)) GraphQLSchema(GraphQLObjectType('Query', lambda:schema.query_type.fields)) </code></pre> <p><a href="https://docs.github.com/en/graphql/overview/public-schema" rel="nofollow noreferrer">Github's GraphQL public schema</a> includes a relay Query type as follows:</p> <pre><code> type Query { ... &quot;&quot;&quot; Workaround for re-exposing the root query object. (Refer to https://github.com/facebook/relay/issues/112 for more information.) &quot;&quot;&quot; relay: Query! ... } </code></pre> <p>When using this schema in <a href="https://github.com/graphql-python/graphql-core" rel="nofollow noreferrer">graphql-core</a> python library I get a type error if I try to rebuild the schema.</p> <p><code>TypeError: Schema must contain uniquely named types but contains multiple types named 'Query'.</code></p> <p>Is that a bug in <code>graphql-core</code> code or am I missing something? How to properly handle relays like that?</p>
<python><graphql>
2023-06-15 19:59:04
0
1,462
jdcaballerov
76,485,119
13,155,046
How can I create a dictionary that contains information about all countries, their calling codes and lists of states or provinces within each country?
<p>I'm planning to build a contact web page where users can select their country and choose a specific region within that country. I need to have the corresponding calling code for each country. To achieve this, I would like to generate a Python dictionary that includes all the necessary information. Here's the desired format I'd like to achieve:</p> <p>output:</p> <pre class="lang-py prettyprint-override"><code>COUNTRY_DATA = { &quot;COUNTRY_ID_1&quot;: { &quot;name&quot;: &quot;COUNTRY_NAME_1&quot;, &quot;calling_code&quot;: COUNTRY_NUMBER_1, &quot;states&quot;: [LIST_COUNTRY_STATES_1] }, &quot;COUNTRY_ID_2&quot;: { &quot;name&quot;: &quot;COUNTRY_NAME_2&quot;, &quot;calling_code&quot;: COUNTRY_NUMBER_2, &quot;states&quot;: [LIST_COUNTRY_STATES_2] }, &quot;COUNTRY_ID_3&quot;: { &quot;name&quot;: &quot;COUNTRY_NAME_3&quot;, &quot;calling_code&quot;: COUNTRY_NUMBER_3, &quot;states&quot;: [LIST_COUNTRY_STATES_3] }, ... } </code></pre> <p>Could you please assist me in creating the Python dictionary COUNTRY_DATA in the specified format?</p> <p>Expected output:</p> <pre class="lang-py prettyprint-override"><code>COUNTRY_DATA = { &quot;US&quot;: { &quot;name&quot;: &quot;United States&quot;, &quot;calling_code&quot;: &quot;+1&quot;, &quot;states&quot;: [&quot;California&quot;, &quot;New York&quot;, &quot;Texas&quot;] }, &quot;GB&quot;: { &quot;name&quot;: &quot;United Kingdom&quot;, &quot;calling_code&quot;: &quot;+44&quot;, &quot;states&quot;: [&quot;England&quot;, &quot;Scotland&quot;, &quot;Wales&quot;] }, &quot;CA&quot;: { &quot;name&quot;: &quot;Canada&quot;, &quot;calling_code&quot;: &quot;+1&quot;, &quot;states&quot;: [&quot;Ontario&quot;, &quot;Quebec&quot;, &quot;British Columbia&quot;] }, ... } </code></pre>
<python><html><flask><state><country>
2023-06-15 19:55:23
1
8,961
Milovan Tomašević
76,485,096
2,057,516
Why doesn't this snakemake script directive work
<p>I'm creating a snakemake workflow. I've never used the run or script directives until yesterday. I had a run directive that worked just fine, but <code>snakemake --lint</code> complained it was too long and said I should create a separate script and use the script directive:</p> <pre><code>Lints for rule check_peak_read_len_overlap_params (line 90, /Users/rleach/PROJECT-local/ATACC/REPOS/ATACCompendium/workflow/rules/error_checks.smk): * Migrate long run directives into scripts or notebooks: Long run directives hamper workflow readability. Use the script or notebook directive instead. Note that the script or notebook directive does not involve boilerplate. Similar to run, you will have direct access to params, input, output, and wildcards.Only use the run directive for a handful of lines. Also see: https://snakemake.readthedocs.io/en/latest/snakefiles/rules.html#external-scripts https://snakemake.readthedocs.io/en/latest/snakefiles/rules.html#jupyter-notebook-integration </code></pre> <p>So I tried turning this:</p> <pre><code>rule check_peak_read_len_overlap_params: input: &quot;results/QC/max_read_length.txt&quot;, output: &quot;results/QC/parameter_validation.txt&quot;, params: frac_read_overlap=FRAC_READ_OVERLAP, min_peak_width=MAX_ARTIFACT_WIDTH + 1, summit_width=SUMMIT_FLANK_SIZE * 2, log: tail=&quot;results/QC/logs/check_peak_read_len_overlap_params.stderr&quot;, run: max_read_len = 0 # Get the max read length from the input file infile = open(input[0]) for line in infile: max_read_len = int(line.strip()) # The file should only be one line break infile.close() max_peak_frac = params.min_peak_width / max_read_len max_summit_frac = params.summit_width / max_read_len if max_peak_frac &lt; params.frac_read_overlap: raise ValueError( f&quot;There exist reads (of length {max_read_len}) that, if &quot; &quot;mapped to the smallest allowed peak (of length &quot; f&quot;{params.min_peak_width}, based on a MAX_ARTIFACT_WIDTH &quot; f&quot;of {MAX_ARTIFACT_WIDTH}), 
would never be counted using a &quot; f&quot;FRAC_READ_OVERLAP of {params.frac_read_overlap}.&quot; ) if max_summit_frac &lt; params.frac_read_overlap: raise ValueError( f&quot;There exist reads (of length {max_read_len}) that, if &quot; f&quot;mapped to any summit (of length {params.summit_width}), &quot; &quot;would never be counted using a FRAC_READ_OVERLAP of &quot; f&quot;{params.frac_read_overlap}.&quot; ) with open(output[0], &quot;w&quot;) as out: out.write(&quot;Parameters have validated successfully.&quot;) </code></pre> <p>into this:</p> <pre><code>rule check_peak_read_len_overlap_params: input: &quot;results/QC/max_read_length.txt&quot;, output: &quot;results/QC/parameter_validation.txt&quot;, params: frac_read_overlap=FRAC_READ_OVERLAP, max_artifact_width=MAX_ARTIFACT_WIDTH, summit_flank_size=SUMMIT_FLANK_SIZE, log: &quot;results/QC/logs/parameter_validation.log&quot;, conda: &quot;../envs/python3.yml&quot; script: &quot;scripts/check_peak_read_len_overlap_params.py&quot; </code></pre> <p><em>(Note, I changed the parameters at the same time so I could manipulate them in the script instead of under the params directive.)</em> At first, I simply pasted the <code>run</code> code into a function, and called that function. Then I tried tweaking the code to get some sort of log output. I tried adding a shebang at the top. I tried importing snakemake (which the docs didn't mention). I tried a bunch of things, but nothing seemed to work.
Currently, this is what I have in the script:</p> <pre><code>import sys def check_peak_read_len_overlap_params( infile, outfile, log, frac_read_overlap, max_artifact_width, summit_flank_size, ): log_handle = open(log, &quot;w&quot;) sys.stderr = sys.stdout = log_handle print( &quot;Parameters:\n&quot; f&quot;Input: {infile}\n&quot; f&quot;Output: {outfile}\n&quot; f&quot;Log: {log}\n&quot; f&quot;\tFRAC_READ_OVERLAP = {frac_read_overlap}\n&quot; f&quot;\tMAX_ARTIFACT_WIDTH = {max_artifact_width}\n&quot; f&quot;\tSUMMIT_FLANK_SIZE = {summit_flank_size}&quot; ) min_peak_width = max_artifact_width + 1 summit_width = summit_flank_size * 2 max_read_len = 0 inhandle = open(infile) for line in inhandle: max_read_len = int(line.strip()) # The file should only be one line break inhandle.close() max_peak_frac = min_peak_width / max_read_len max_summit_frac = summit_width / max_read_len if max_peak_frac &lt; frac_read_overlap: raise ValueError( f&quot;There exist reads (of length {max_read_len}) that, if &quot; &quot;mapped to the smallest allowed peak (of length &quot; f&quot;{min_peak_width}, based on a MAX_ARTIFACT_WIDTH &quot; f&quot;of {max_artifact_width}), would never be counted using a &quot; f&quot;FRAC_READ_OVERLAP of {frac_read_overlap}.&quot; ) if max_summit_frac &lt; frac_read_overlap: raise ValueError( f&quot;There exist reads (of length {max_read_len}) that, if &quot; f&quot;mapped to any summit (of length {summit_width}), &quot; &quot;would never be counted using a FRAC_READ_OVERLAP of &quot; f&quot;{frac_read_overlap}.&quot; ) with open(outfile, &quot;w&quot;) as outhandle: outhandle.write(&quot;Parameters have validated successfully.&quot;) check_peak_read_len_overlap_params( snakemake.input[0], snakemake.output[0], snakemake.log[0], snakemake.params.frac_read_overlap, snakemake.params.max_artifact_width, snakemake.params.summit_flank_size, ) </code></pre> <p>As I was working on this question, I figured out the issue, but I'll continue to post this and then 
answer, because nothing in the <a href="https://snakemake.readthedocs.io/en/v7.25.0/snakefiles/rules.html#python" rel="nofollow noreferrer">docs</a> or via google searching, that I could find, answers this question...</p>
<python><directive><snakemake>
2023-06-15 19:50:58
1
1,225
hepcat72
76,485,061
6,779,075
Stop Keras Tuner if it has found a good configuration
<p>I know that I can stop single trials using EarlyStopping or special callbacks if the accuracy is high enough, but is there a way to stop the whole hyperparameter tuning in that case?</p> <pre><code> tuner = RandomSearch( hypermodel=model, objective=Objective(config.metric, direction=config.metric_direction), max_trials=config.max_trials, overwrite=False, directory=config.log_directory, project_name=config.project_name, ) tuner.search( x=X_train, y=y_train, epochs=config.epochs, validation_data=data_test, callbacks=callbacks, # This contains EarlyStopping and a callback that terminates when a certain acc has been reached verbose=1, class_weight=class_weights, ) </code></pre>
<python><tensorflow><keras><hyperparameters>
2023-06-15 19:45:01
1
428
Mario
76,485,011
20,612,566
How to get how many lines you wrote in a GitHub project and how many of them were deleted
<p>My senior has given me a task to find out the number of lines I wrote in a GitHub project and how many of those lines were deleted. Are there any built-in methods available to achieve this? Keep in mind that I am not the only developer on the project. My senior wants proof that my code is not up to par.</p>
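For what it's worth, `git log --numstat` filtered by author emits per-commit added/deleted line counts that can be summed with `awk`. Substitute your own author name; note this counts lines touched in commits you authored, which is only a rough proxy for contribution quality:

```shell
# total lines added and deleted across all commits by one author
git log --author="Your Name" --pretty=tformat: --numstat |
  awk '{ add += $1; del += $2 } END { printf "added %d, deleted %d\n", add, del }'
```

Related built-ins: `git shortlog -sn` counts commits per author, and `git blame` shows which of your lines still survive in the current tree.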
<python><git><github>
2023-06-15 19:35:56
2
391
Iren E
76,485,003
1,366,040
How do I download all files from a Google Drive folder with more than 50 files?
<p>I cannot figure out how to write a program to download all files from a publicly accessible Google Drive folder, which has more than 1,000 of them.</p> <p>This is what I've tried so far:</p> <pre><code>import gdown url = 'https://drive.google.com/drive/folders/MY-PUBLICLY-ACCESSIBLE-FOLDER-ID?usp=drive_link' gdown.download_folder(url, quiet=True, remaining_ok=True, use_cookies=False) </code></pre> <p>But it only downloads 50 of the files.</p>
<python><google-drive-api>
2023-06-15 19:34:13
5
1,227
Generic_User_ID
76,484,840
11,883,900
Overlaying Interpolation on a Map
<p>I have to finally seek help. I am so stuck after trying all options.</p> <p>Now I have data which was sampled from only 10 villages from a region which has atleast 105 villages. With this sampled data of 10 villages I did prediction which also worked well and my final table with predicted values looks like this(Unfortunately I am unable to convert this table to something that can be shared here): <a href="https://i.sstatic.net/B7Ws2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B7Ws2.png" alt="enter image description here" /></a></p> <p>Now my problem is on <strong>interpolation</strong> . I wanted to <strong>interpolate this data to overlay on other unsampled villages</strong> and this is how I did it:</p> <pre><code>from scipy.interpolate import griddata # Extract the longitude, latitude, and prediction columns from the decoded dataframe interpolation_data = decoded_df[['longitude', 'latitude', 'prediction']] # Remove any rows with missing values interpolation_data = interpolation_data.dropna() # Convert the data to numpy arrays points = interpolation_data[['longitude', 'latitude']].values values = interpolation_data['prediction'].values # Define the grid points for interpolation grid_points = np.vstack((grid_lon.flatten(), grid_lat.flatten())).T # Perform IDW interpolation interpolated_values = griddata(points, values, grid_points, method='linear') interpolated_values = interpolated_values.reshape(grid_lon.shape) # Create a contour plot of the interpolated predictions plt.contourf(grid_lon, grid_lat, interpolated_values) plt.colorbar() plt.scatter(decoded_df['longitude'], decoded_df['latitude'], c=decoded_df['prediction'], cmap='viridis', edgecolors='black') plt.xlabel('Longitude') plt.ylabel('Latitude') plt.title('Interpolated Predictions') plt.show() </code></pre> <p>Now this gave me this</p> <p><a href="https://i.sstatic.net/QeJ4B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QeJ4B.png" alt="enter image description here" 
/></a></p> <p>Now the next step was to <strong>overlay the interpolated results to the map of that region</strong>. which I did this way:</p> <pre><code>import geopandas as gpd from mpl_toolkits.axes_grid1 import make_axes_locatable import geopandas as gpd import matplotlib.pyplot as plt # Read shapefile data of Babati shapefile_path = &quot;Babati Villages/Babati_villages.shp&quot; # Replace with the actual path to the shapefile gdf_babati = gpd.read_file(shapefile_path) gdf_bti= gdf_babati[gdf_babati[&quot;District_N&quot;] == &quot;Babati&quot;] gdf_bti.head() # Define the grid points for interpolation grid_points = np.vstack((grid_lon.flatten(), grid_lat.flatten())).T # Perform IDW interpolation interpolated_values = griddata(points, values, grid_points, method='linear') # Reshape the interpolated values to match the grid shape interpolated_values = interpolated_values.reshape(grid_lon.shape) from shapely.geometry import box # Create a bounding box geometry of the Babati region bbox = box(gdf_bti.total_bounds[0], gdf_bti.total_bounds[1], gdf_bti.total_bounds[2], gdf_bti.total_bounds[3]) # Clip the interpolated predictions to the extent of the Babati region interpolated_predictions = gpd.clip(interpolated_predictions, bbox) # Create subplots fig, ax = plt.subplots(figsize=(10, 10)) # Plot the shapefile of the Babati region gdf_bti.plot(ax=ax, facecolor='none', edgecolor='black') # Plot the interpolated predictions interpolated_predictions.plot(ax=ax, column='prediction', cmap='viridis', markersize=30, legend=True) # Add colorbar divider = make_axes_locatable(ax) cax = divider.append_axes(&quot;right&quot;, size=&quot;5%&quot;, pad=0.1) interpolated_predictions.plot(ax=cax, column='prediction', cmap='viridis', legend=True, cax=cax) # Set plot title and labels ax.set_title('Interpolated Predictions in Babati Region') ax.set_xlabel('Longitude') ax.set_ylabel('Latitude') # Show the plot plt.show() </code></pre> <p>Here is now where the problem is because the overlay 
of interpolated values is totally off. I expect it to cover all interpolated villages, but it's not. This is what I get:</p> <p><a href="https://i.sstatic.net/wRxvn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wRxvn.png" alt="enter image description here" /></a></p> <p>What am I doing wrong, and how can I fix it?</p>
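A common cause of this kind of offset is drawing the raster and vector layers on different axes or in different coordinate reference systems. If the predictions table stores plain lon/lat (WGS84), reprojecting the shapefile first (`gdf_bti = gdf_bti.to_crs(epsg=4326)`) and drawing `contourf` and the boundaries on the *same* `Axes` keeps them aligned. A geopandas-free sketch of the overlay logic with synthetic coordinates (the bounds here are made up, not Babati's):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
lon = rng.uniform(35.0, 36.0, 10)      # hypothetical village coordinates
lat = rng.uniform(-4.5, -3.5, 10)
pred = rng.uniform(0, 1, 10)

grid_lon, grid_lat = np.meshgrid(np.linspace(35.0, 36.0, 100),
                                 np.linspace(-4.5, -3.5, 100))
grid_z = griddata(np.column_stack([lon, lat]), pred,
                  (grid_lon, grid_lat), method="linear")

fig, ax = plt.subplots()
# draw raster and vector layers on the SAME Axes so coordinates line up;
# with geopandas this is gdf_bti.to_crs(epsg=4326).plot(ax=ax, facecolor="none")
cs = ax.contourf(grid_lon, grid_lat, grid_z, cmap="viridis")
ax.scatter(lon, lat, c=pred, edgecolors="black")
fig.colorbar(cs, ax=ax)
```

Points outside the convex hull of the sampled villages come back as `NaN` from linear `griddata`, which matplotlib simply leaves blank; that is expected rather than a bug.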
<python><scipy><interpolation><geopandas>
2023-06-15 19:11:54
1
1,098
LivingstoneM
76,484,819
3,371,250
How to calculate the maximum occurrence in a rolling window?
<p>Say I have a data frame as follows:</p> <pre><code>-------------------------------------------------- | Type | Incident ID | Date of incident| -------------------------------------------------- | A | 1 | 2022-02-12 | | A | 2 | 2022-02-14 | | A | 3 | 2022-02-14 | | A | 4 | 2022-02-14 | | A | 5 | 2022-02-16 | | A | 6 | 2022-02-17 | | A | 7 | 2022-02-19 | | A | 8 | 2022-02-19 | | A | 7 | 2022-02-19 | | A | 8 | 2022-02-19 | ... ... ... | B | 1 | 2022-02-12 | | B | 2 | 2022-02-12 | | B | 3 | 2022-02-13 | ... ... ... -------------------------------------------------- </code></pre> <p>This is a list of different types of incidents. Every incident has a type, an ID, and a date on which it occurred. This is just an example to help understand my goal.</p> <p>What I want is - for a given range, e.g. 5 days - the maximum value that a rolling sum over these incidents reaches:</p> <p>So I would start with all elements that fall into the first 5 days and accumulate the occurrences: 6.</p> <pre><code>2022-02-12 - 2022-02-17: 6 </code></pre> <p>Rolling the window forward by one day removes the first day's elements from the sum (here, -1), and no element for the next day in line gets added. The next value would be 5.</p> <pre><code>2022-02-13 - 2022-02-18: 5 </code></pre> <p>6 &gt; 5, so 6 is still the maximum occurrence of incidents in a 5-day window.</p> <p>Continue for the complete time range.</p> <p>This is not that hard to achieve, but how would I do it efficiently for millions of elements? In short: I want to create a moving window of a fixed date range (e.g. 5 days), count all occurrences for this window, and return the maximum value that was reached for any window.</p>
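One efficient pattern, sketched here assuming a trailing window of 5 calendar days, is to collapse the incidents to daily counts per `Type` and let pandas' time-based `rolling` do the sweep; that avoids any Python-level loop over millions of rows. Note that pandas' offset windows are left-open by default (`(t - 5 days, t]`), which is a 5-calendar-day trailing window on normalized dates; pass `closed="both"` if both endpoints should count:

```python
import pandas as pd

df = pd.DataFrame({
    "Type": ["A"] * 10,
    "Incident ID": [1, 2, 3, 4, 5, 6, 7, 8, 7, 8],
    "Date": pd.to_datetime([
        "2022-02-12", "2022-02-14", "2022-02-14", "2022-02-14",
        "2022-02-16", "2022-02-17", "2022-02-19", "2022-02-19",
        "2022-02-19", "2022-02-19",
    ]),
})

def max_rolling_count(dates, window="5D"):
    # one count per calendar day, sorted so the time-based window works
    daily = dates.dt.normalize().value_counts().sort_index()
    return daily.rolling(window).sum().max()

result = df.groupby("Type")["Date"].apply(max_rolling_count)
```

With this windowing, type A peaks at 6 (the window ending 2022-02-19 holds the incidents from the 16th, 17th, and 19th).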
<python><pandas><dataframe>
2023-06-15 19:09:07
3
571
Ipsider
76,484,817
16,591,513
jupyter notebook does not see the code between blocks
<p>I'm new to Jupyter Notebook and stumbled upon a weird issue recently. After reloading my notebook, as usual I opened the server and started editing my project. But after trying to run an individual block of code, it shows &quot;'variable' is not defined&quot;, even though the variable is defined in the previous block. Let me show you an example to demonstrate my problem.</p> <p><a href="https://i.sstatic.net/5UFUZ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5UFUZ.jpg" alt="enter image description here" /></a></p> <p>As you can see, it shows 'movies' is not defined, although this variable exists in the block above; the same goes for 'dataset'.</p> <p>What can cause such an issue, and how can I fix it?</p>
<python><jupyter-notebook>
2023-06-15 19:09:01
2
449
CraZyCoDer
76,484,752
5,227,892
From array to mesh PLY file
<p>I have a numpy array containing XYZ coordinates representing a point cloud. I would like to write it out as a PLY mesh file using <a href="https://pypi.org/project/plyfile/" rel="nofollow noreferrer">plyfile</a>, but I get the error below.</p> <pre><code> vertex = [[52.45258174, 78.63234647, -0.90487998], [52.46268174, 78.68184647, 1.09133402], [52.48928449, 78.7930997, -0.90905187], [52.49938449, 78.84259974, 1.08716213], [52.5921233, 78.92200466, -0.91276864], [52.6022233, 78.97150466, 1.08344536]] PLY_vertices = PlyElement.describe(np.array(vertex, dtype=[('x', 'f4'), ('y', 'f4'),('z', 'f4')]), 'vertex') ValueError: only one-dimensional arrays are supported </code></pre>
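The error comes from passing a 2-D float array: `PlyElement.describe` wants a 1-D *structured* array with one record per vertex. Converting the rows to tuples produces exactly that; the `plyfile` write call is shown commented, since the structured-array conversion is the actual fix:

```python
import numpy as np

vertex = [[52.45258174, 78.63234647, -0.90487998],
          [52.46268174, 78.68184647,  1.09133402],
          [52.48928449, 78.7930997,  -0.90905187],
          [52.49938449, 78.84259974,  1.08716213],
          [52.5921233,  78.92200466, -0.91276864],
          [52.6022233,  78.97150466,  1.08344536]]

# a list of tuples + a compound dtype yields a 1-D structured array
# of shape (n_vertices,), which is what plyfile expects
vertex_arr = np.array([tuple(v) for v in vertex],
                      dtype=[("x", "f4"), ("y", "f4"), ("z", "f4")])

# from plyfile import PlyData, PlyElement
# el = PlyElement.describe(vertex_arr, "vertex")
# PlyData([el], text=True).write("cloud.ply")
```

A list of lists stays a plain 2-D array under a compound dtype, which triggers the "only one-dimensional arrays are supported" error; tuples are the documented way to build structured records.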
<python><numpy>
2023-06-15 18:59:51
2
435
Sher
76,484,663
3,116,380
How to efficiently find the date of the N-th occurrence of a specific weekday in each month within a given pandas DataFrame date range?
<p>I have a pandas DataFrame with a date range, and I want to find the date of the N-th occurrence of a specific weekday (say, the third Tuesday) of each month within that range. I'm dealing with a large dataset spanning several years, so efficiency is crucial. Here's how my DataFrame looks:</p> <pre><code>import pandas as pd date_rng = pd.date_range(start='1/1/2020', end='12/31/2023', freq='D') df = pd.DataFrame(date_rng, columns=['date']) </code></pre> <p>I don't know if this falls under the umbrella of SO questions, but I'd like a faster approach than just looping through everything. Is there a more efficient way of doing this?</p>
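One fully vectorized approach, sketched here for the third Tuesday (weekday 1 in pandas), is to compute each month's first occurrence of the weekday arithmetically and then add whole weeks; there is no row-wise loop at all:

```python
import pandas as pd

date_rng = pd.date_range(start="1/1/2020", end="12/31/2023", freq="D")
df = pd.DataFrame(date_rng, columns=["date"])

weekday, n = 1, 3   # Tuesday, third occurrence

# one timestamp per month in the range
month_starts = pd.period_range(df["date"].min(), df["date"].max(),
                               freq="M").to_timestamp()
# days from the 1st to the first target weekday, computed for all months at once
first_hit = month_starts + pd.to_timedelta(
    (weekday - month_starts.weekday) % 7, unit="D")
nth_hit = first_hit + pd.Timedelta(weeks=n - 1)
```

Every month contains at least three of any weekday, so `n = 3` is always valid; for `n = 5` you would mask out hits that spill into the next month (`nth_hit.month == month_starts.month`).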
<python><python-3.x><pandas><dataframe><performance>
2023-06-15 18:47:04
1
396
AlanGhalan
76,484,607
2,987,488
Ortools scheduling problem with no feasible solution
<p>I have created the following function which takes a list of drivers, a list of hubs and the driver-hubs relationships, i.e. the hubs where each driver can work. However, the solution is not feasible whereas the drivers are more than the demand covering the day OFF as well. Any help would be highly appreciated. <br> drivers: <a href="https://pastebin.com/S13w2Me5" rel="nofollow noreferrer">https://pastebin.com/S13w2Me5</a>. <br> hubs: <a href="https://pastebin.com/r7mzMzUX" rel="nofollow noreferrer">https://pastebin.com/r7mzMzUX</a>. <br> driver_hubs_relationships: <a href="https://pastebin.com/ANpnXw9f" rel="nofollow noreferrer">https://pastebin.com/ANpnXw9f</a>. <br> hubs_and_demand: <a href="https://pastebin.com/021z8Ta1" rel="nofollow noreferrer">https://pastebin.com/021z8Ta1</a>. For this variable, simply create a dataframe with columns hub and drivers_needed and based on the data and it will work.</p> <pre><code>from ortools.sat.python import cp_model from datetime import datetime, timedelta import time import copy days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'] def create_schedule(drivers, hubs, hubs_and_demand, driver_hubs_relationships): drivers_list = copy.deepcopy(drivers) hubs_list = copy.deepcopy(hubs) hubs_list_and_demand = copy.deepcopy(hubs_and_demand) driver_hubs_list_relationships = copy.deepcopy(driver_hubs_relationships) model = cp_model.CpModel() # Define an extra &quot;hub&quot; that represents a day off hubs_list.append('OFF') for e in drivers_list: driver_hubs_list_relationships[e].append('OFF') # Create variables total size len(drivers)*(1+len(hubs))*len(days) schedule = {} for e in drivers_list: for d in days: for r in hubs_list: schedule[(e, d, r)] = model.NewBoolVar(f'schedule_{e}_{d}_{r}') # Each driver must have one day off per week for e in drivers_list: model.Add(sum(schedule[(e, d, 'OFF')] for d in days) == 1) # Each driver works at most one day at a specific hub for e in drivers_list: for d 
in days: model.Add(sum(schedule[(e, d, r)] for r in hubs_list) == 1) # Assign each driver to their specific hub for e in drivers_list: for r in hubs_list: if r not in driver_hubs_list_relationships[e]: for d in days: model.Add(schedule[(e, d, r)] == 0) # Create variables for the number of drivers needed at each hub on each day drivers_needed = {} for d in days: for r in hubs_list: if r != 'OFF': hub_demand =\ hubs_list_and_demand.loc[hubs_list_and_demand['hub'] == r, 'drivers_needed'] drivers_needed[(d, r)] = model.NewIntVar(1, int(hub_demand.values[0], f'drivers_needed_{d}_{r}') # Each hub needs to have the required number of drivers each day for d in days: for r in hubs_list: if r != 'OFF': model.Add(sum(schedule[(e, d, r)] for e in drivers_list) == drivers_needed[(d, r)]) # Solve and print schedule solver = cp_model.CpSolver() solver.parameters.max_time_in_seconds = 2500.0 solver.parameters.log_search_progress = True status = solver.Solve(model) print(f&quot;status code {status}&quot;) # Print out the solution status if status == cp_model.OPTIMAL: print(f'Optimal solution found.') elif status == cp_model.FEASIBLE: print('Feasible solution found, but not necessarily optimal.') elif status == cp_model.INFEASIBLE: print('No feasible solution found.') elif status == cp_model.MODEL_INVALID: print('The given model is invalid.') elif status == cp_model.UNKNOWN: print( 'The status of the model is unknown because a time limit was reached or because the ' 'problem is unsolved.') if status == cp_model.OPTIMAL or status == cp_model.FEASIBLE: solution = {} for e in drivers_list: solution[e] = {} for d in days: for r in hubs_list: if solver.Value(schedule[(e, d, r)]) == 1: print(f&quot;{e} works at {r} on {d}&quot;) solution[e][d] = r return solution else: return None </code></pre>
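Two things may be worth checking here. First, the `NewIntVar` line as pasted has a misplaced closing parenthesis: the name string ends up inside `int(...)` as `int(hub_demand.values[0], f'drivers_needed_{d}_{r}')`, which is a bug in its own right. Second, the hard equalities interact: every driver occupies exactly one slot per day (a hub or OFF) and takes exactly one OFF per week, so the week's total hub assignments are forced to `6 * n_drivers`, and that total must land between the weekly minimum demand (each hub's `drivers_needed` has lower bound 1) and the weekly maximum. A small pure-Python sanity check of that necessary condition, worth running before invoking CP-SAT:

```python
def aggregate_feasible(n_drivers, min_daily, max_daily, n_days=7, days_off=1):
    """Necessary (not sufficient) aggregate condition for the schedule model.

    min_daily: least drivers the hubs can absorb per day (here, the number
               of hubs, since each drivers_needed var has lower bound 1).
    max_daily: sum of the drivers_needed upper bounds over all hubs.
    """
    weekly_assignments = (n_days - days_off) * n_drivers
    return n_days * min_daily <= weekly_assignments <= n_days * max_daily
```

If this returns `False`, no day-off pattern can reconcile the equality constraints; relaxing the exact-coverage `==` to `<=`/`>=` bounds (or allowing more than one OFF per driver) is the usual remedy. Per-hub eligibility (`driver_hubs_relationships`) can still make a model infeasible even when the aggregate check passes.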
<python><optimization><or-tools><cp-sat>
2023-06-15 18:39:00
1
1,272
azal
76,484,525
2,975,438
Streamlit input to chat via button does not persist
<p>I have a streamlit app that takes a file and builds a chat bot on top of it.</p> <p>There are two ways to upload files. From folder (&quot;Use Data from folder&quot; button) and from the &quot;drag and drop&quot; box.  The option with the &quot;drag and drop&quot; box works fine, but for the option with the &quot;Use Data from folder&quot; button, chat appears once, but when I put a question in chat and click &quot;send&quot;, chat disappears for some reason.</p> <p>Here code for buttons from main app:</p> <pre><code>use_example_file = st.sidebar.button(&quot;Use Data from folder&quot;) uploaded_file = utils.handle_upload([&quot;pdf&quot;, &quot;txt&quot;, &quot;csv&quot;], use_example_file) if uploaded_file: [do something] </code></pre> <p>Where <code>handle_upload</code> is the following function:</p> <pre><code>@staticmethod def handle_upload(file_types, use_example_file): &quot;&quot;&quot; Handles and display uploaded_file :param file_types: List of accepted file types, e.g., [&quot;csv&quot;, &quot;pdf&quot;, &quot;txt&quot;] &quot;&quot;&quot; if use_example_file is False: uploaded_file = st.sidebar.file_uploader(&quot;upload&quot;, type=file_types, label_visibility=&quot;collapsed&quot;) else: # uploaded_file = use_example_file # use_example_file = st.sidebar.button(use_example_file) uploaded_file = open(&quot;example.csv&quot;, &quot;rb&quot;) if uploaded_file is not None: def show_csv_file(uploaded_file): file_container = st.expander(&quot;Your CSV file :&quot;) uploaded_file.seek(0) shows = pd.read_csv(uploaded_file) file_container.write(shows) def show_pdf_file(uploaded_file): file_container = st.expander(&quot;Your PDF file :&quot;) with pdfplumber.open(uploaded_file) as pdf: pdf_text = &quot;&quot; for page in pdf.pages: pdf_text += page.extract_text() + &quot;\n\n&quot; file_container.write(pdf_text) def show_txt_file(uploaded_file): file_container = st.expander(&quot;Your TXT file:&quot;) uploaded_file.seek(0) content = 
uploaded_file.read().decode(&quot;utf-8&quot;) file_container.write(content) def get_file_extension(uploaded_file): return os.path.splitext(uploaded_file)[1].lower() file_extension = get_file_extension(uploaded_file.name) # Show the contents of the file based on its extension #if file_extension == &quot;.csv&quot; : # show_csv_file(uploaded_file) if file_extension== &quot;.pdf&quot; : show_pdf_file(uploaded_file) elif file_extension== &quot;.txt&quot; : show_txt_file(uploaded_file) else: st.session_state[&quot;reset_chat&quot;] = True #print(uploaded_file) return uploaded_file </code></pre> <p>Full code for app is:</p> <pre><code>import os import streamlit as st from io import StringIO import re import sys from modules.history import ChatHistory from modules.layout import Layout from modules.utils import Utilities from modules.sidebar import Sidebar import traceback # load .env from dotenv import load_dotenv load_dotenv() #To be able to update the changes made to modules in localhost (press r) def reload_module(module_name): import importlib import sys if module_name in sys.modules: importlib.reload(sys.modules[module_name]) return sys.modules[module_name] history_module = reload_module('modules.history') layout_module = reload_module('modules.layout') utils_module = reload_module('modules.utils') sidebar_module = reload_module('modules.sidebar') ChatHistory = history_module.ChatHistory Layout = layout_module.Layout Utilities = utils_module.Utilities Sidebar = sidebar_module.Sidebar st.set_page_config(layout=&quot;wide&quot;, page_icon=&quot;.&quot;, page_title=&quot;AI Assistant&quot;) # Instantiate the main components layout, sidebar, utils = Layout(), Sidebar(), Utilities() layout.show_header(&quot;PDF, TXT, CSV&quot;) user_api_key = utils.load_api_key() # user_api_key = '.' 
if not user_api_key: layout.show_api_key_missing() else: os.environ[&quot;OPENAI_API_KEY&quot;] = user_api_key os.environ[&quot;OPENAI_API_TYPE&quot;] = &quot;azure&quot; os.environ[&quot;OPENAI_API_BASE&quot;] = 'https://testingchat.openai.azure.com/' os.environ[&quot;OPENAI_API_VERSION&quot;] = &quot;2023-05-15&quot; # os.environ[&quot;OPENAI_API_DEPLOYMENT_NAME&quot;] = 'Petes-Test' # uploaded_file = utils.handle_upload([&quot;pdf&quot;, &quot;txt&quot;, &quot;csv&quot;]) use_example_file = st.sidebar.button(&quot;Use Data from folder&quot;) # st.write(f&quot;use_example_file: {use_example_file}&quot;) uploaded_file = utils.handle_upload([&quot;pdf&quot;, &quot;txt&quot;, &quot;csv&quot;], use_example_file) # if use_example_file: # uploaded_file = open(&quot;example.csv&quot;, &quot;rb&quot;) # st.write(uploaded_file) if uploaded_file: # Configure the sidebar sidebar.show_options() sidebar.about() # Initialize chat history history = ChatHistory() chatbot = utils.setup_chatbot( uploaded_file, st.session_state[&quot;model&quot;], st.session_state[&quot;temperature&quot;] ) st.session_state[&quot;chatbot&quot;] = chatbot if st.session_state[&quot;ready&quot;]: # Create containers for chat responses and user prompts response_container, prompt_container = st.container(), st.container() with prompt_container: # Display the prompt form is_ready, user_input = layout.prompt_form() # Initialize the chat history history.initialize(uploaded_file) # Reset the chat history if button clicked if st.session_state[&quot;reset_chat&quot;]: history.reset(uploaded_file) if is_ready: # Update the chat history and display the chat messages history.append(&quot;user&quot;, user_input) old_stdout = sys.stdout sys.stdout = captured_output = StringIO() output = st.session_state[&quot;chatbot&quot;].conversational_chat(user_input) sys.stdout = old_stdout history.append(&quot;assistant&quot;, output) # Clean up the agent's thoughts to remove unwanted characters thoughts = 
captured_output.getvalue() cleaned_thoughts = re.sub(r'\x1b\[[0-9;]*[a-zA-Z]', '', thoughts) cleaned_thoughts = re.sub(r'\[1m&gt;', '', cleaned_thoughts) # Display the agent's thoughts with st.expander(&quot;Display the agent's thoughts&quot;): st.write(cleaned_thoughts) history.generate_messages(response_container) </code></pre>
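The behaviour described above matches how `st.button` works: it returns `True` only on the exact rerun in which it was clicked, so the later &quot;send&quot; rerun passes `False` into `handle_upload`, which then falls back to the empty `file_uploader` and returns no file. A minimal sketch of one common workaround is to persist the click in `st.session_state`; here `st.session_state` is simulated with a plain dict so the rerun logic can be shown outside a running Streamlit app, and the key name `use_example_file` is illustrative:

```python
# Sketch: persist a button click across Streamlit reruns.
# st.session_state is stood in for by a plain dict so the logic
# can be demonstrated without a running Streamlit server.
session_state = {}  # stands in for st.session_state

def on_rerun(button_clicked: bool) -> bool:
    """One Streamlit rerun: in the real app, button_clicked would be
    the return value of st.sidebar.button("Use Data from folder")."""
    if "use_example_file" not in session_state:
        session_state["use_example_file"] = False
    if button_clicked:
        # Remember the click so later reruns still see it.
        session_state["use_example_file"] = True
    return session_state["use_example_file"]

print(on_rerun(True))   # True: the run where the button is clicked
print(on_rerun(False))  # True: a later rerun triggered by "send"
```

In the real app, `handle_upload` would then receive the persisted flag instead of the raw return value of `st.sidebar.button`, so the example-file branch survives the rerun caused by submitting a chat message.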
<python><streamlit>
2023-06-15 18:25:16
1
1,298
illuminato
76,484,504
1,432,980
Datatype cannot be written to CSV
<p>I want to write a big decimal to a CSV file that looks like this:</p> <pre><code>''' pyarrow.decimal128(int precision, int scale=0) -&gt; pl.Decimal(scale: int, precision: int | None = None) ''' df = DataFrame( data={'BIG_DECIMAL': [Decimal(216106296082069775206775124.4217024184)]}, schema={'BIG_DECIMAL': pl.Decimal(10, 38)} ) df.write_csv('file.csv') </code></pre> <p>However, when I run it, I get an error that says:</p> <pre><code>datatype 216106296082069783127785472 cannot be written to csv </code></pre> <p>What is the problem?</p>
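One thing worth noting about the snippet above: the digits in the error message (`...783127785472`) differ from the literal (`...775206775124...`) because `Decimal` is being constructed from a float literal, which is rounded to IEEE-754 double precision before `Decimal` ever sees it. A small sketch of the difference, using only the standard `decimal` module (whether polars then accepts the value for CSV output depends on the polars version's Decimal support, which was unstable at the time):

```python
from decimal import Decimal

# Built from a float literal: the value is rounded to the nearest
# IEEE-754 double before Decimal sees it, so trailing digits are lost.
from_float = Decimal(216106296082069775206775124.4217024184)

# Built from a string: every digit is preserved exactly.
from_string = Decimal("216106296082069775206775124.4217024184")

print(from_float == from_string)  # False: the float constructor lost precision
```

Passing the string form into the DataFrame would at least remove the precision loss, independent of the CSV-writing question.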
<python><python-polars>
2023-06-15 18:21:39
0
13,485
lapots
76,484,484
777,377
Get keys of nested TypedDicts
<p>Having the following TypedDict:</p> <pre class="lang-py prettyprint-override"><code>class MySubClass(TypedDict): name: str color: str class MyClass(TypedDict): x: MySubClass y: str </code></pre> <p>What is a function that can extract the keys recursively, like this:</p> <pre><code>[x_name, x_color, y] </code></pre> <p>The function should be dynamic so that it can handle arbitrarily nested structures, although one level of nesting would already be enough.</p> <p>Many thanks!</p>
<python><python-typing><typeddict>
2023-06-15 18:18:50
1
653
bayerb
76,484,316
4,175,822
With type hinting, how do I require a key value pair when the key has an invalid identifier?
<p>With type hinting, how do I require a key value pair in Python, where the key is an invalid identifier? By required I mean that the data is required by type hinting and static analysis tools like mypy/pyright + IDEs. For context: keys with invalid Python identifiers are sent over JSON.</p> <ul> <li>for example the <a href="https://github.com/NodeBB/NodeBB/blob/a757716ddd520e608f6e1d973dabd49b344d033a/public/openapi/read/search.yaml#L79" rel="nofollow noreferrer">key name class</a></li> </ul> <p>A dict payload with key value pair requirements must be ingested. See below for the additional requirements.</p> <p>Here is a sample payload:</p> <pre><code>{ 'a': 1, # required pair with valid identifier '1req': 'blah', #required key with invalid identifier 'someAdditionalProp': [] # optional additional property, value must be list } </code></pre> <p>The requirements for the ingestion of this dict payload are:</p> <ol> <li>there is a valid identifier 'a', with type int that is required</li> <li>there is an invalid identifier '1req' with type string that is required</li> <li>additional properties can be passed in with any string keys different from the above two and value list</li> <li>all input types must be included so TypedDict will not work because it does not capture item 3</li> <li>in the real life use case the payload can contain n invalidly named identifiers</li> <li>in real life there can be other known keys like b: float that are optional and are not the same as item 3 additional properties. They are not the same because the value type is different, and this key is a known literal (like 'b'), whereas item 3 keys are strings that are not known in advance</li> </ol> <p>What I want is a class or function signature that ingests the above payload and meets the above type hinting requirements for all required and optional key value pairs. 
Errors should be thrown for invalid inputs to the class or function in mypy/an IDE with type checking turned on.</p> <p>An example of an error that would work is:</p> <pre><code>Argument of type &quot;tuple[Literal['otherReq'], None]&quot; cannot be assigned to parameter &quot;args&quot; of type &quot;OptionalDictPair&quot; in function &quot;__new__&quot; &quot;tuple[Literal['otherReq'], None]&quot; is incompatible with &quot;OptionalDictPair&quot; Tuple entry 2 is incorrect type Type &quot;None&quot; cannot be assigned to type &quot;list[Unknown]&quot;PylancereportGeneralTypeIssues </code></pre> <p>A naive implementation that does NOT meet requirements would be:</p> <pre><code>DictType = typing.Mapping[ str, typing.Union[str, int, list] ] def func_with_type_hinting(arg: DictType): pass func_with_type_hinting( { 'a': 1, '1req': 'blah', 'someAdditionalProp': None } ) # static analysis should show an error here, someAdditionalProp's type is wrong </code></pre>
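For requirements 1 and 2 taken in isolation, the functional (call-style) `TypedDict` syntax does accept keys that are not valid identifiers; requirement 3 (arbitrary extra string keys with list values) is the part plain `TypedDict` cannot express, as the question itself notes. A sketch of the functional syntax only (the name `Payload` is illustrative):

```python
from typing import TypedDict

# Functional (call-style) TypedDict syntax: unlike the class syntax,
# the field names are plain strings, so '1req' is allowed even though
# it is not a valid Python identifier.
Payload = TypedDict("Payload", {"a": int, "1req": str})

p: Payload = {"a": 1, "1req": "blah"}

print(sorted(Payload.__annotations__))  # ['1req', 'a']
```

With this, mypy/pyright will flag a missing `'1req'` key or a wrong value type for the declared keys; the open-ended additional-property requirement still needs a different mechanism.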
<python><mypy><python-typing><typeddict>
2023-06-15 17:52:51
2
2,821
spacether