Dataset columns (type, min, max):

QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, lengths 15 to 150
QuestionBody: string, lengths 40 to 40.3k
Tags: string, lengths 8 to 101
CreationDate: string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, lengths 3 to 30
78,014,319
6,646,188
How to map multiple values to a single value in python?
<p>How can I map multiple values to a single value and return that single value when I search? For example:</p> <pre><code>Fruit -&gt; Mango, Banana, Apple Vehicle -&gt; Car, Bus, Truck Place -&gt; Berlin, NewYork </code></pre> <p>Let's say I give the input &quot;<strong>Truck</strong>&quot; and it searches and returns &quot;<strong>Vehicle</strong>&quot;.</p> <p>How can I get this mapping functionality in Python 3? So far, I can see there are some ways using a dictionary or tuple. But how do I achieve that, and what is the best way to implement this in Python?</p>
<python><python-3.x>
2024-02-18 02:01:39
2
2,019
Abhijit Mondal Abhi
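A minimal sketch of the dictionary approach the question above reaches for: invert the mapping once so each member points back at its category. The data literals mirror the question's example; the variable names are illustrative.

```python
# Forward mapping: category -> members (data taken from the question).
categories = {
    "Fruit": ["Mango", "Banana", "Apple"],
    "Vehicle": ["Car", "Bus", "Truck"],
    "Place": ["Berlin", "NewYork"],
}

# Reverse lookup: member -> category, built once with a dict comprehension.
lookup = {member: category
          for category, members in categories.items()
          for member in members}

print(lookup["Truck"])  # -> Vehicle
```

Lookups after the one-time inversion are O(1); `lookup.get(key)` returns `None` for unknown keys if raising `KeyError` is undesirable.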
78,014,255
386,159
Python is_file not working on a mounted raw image
<p>Pathlib's is_file doesn't seem to work when running on a loop-backed mounted device</p> <p>I have a text file with file paths in it. The files correspond to a loop-back mounted .raw image.</p> <pre><code>cat diff.txt | grep kvm-recheck-rcu.sh usr/src/linux-hwe-6.5-headers-6.5.0-17/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcu.sh </code></pre> <p>These are mounted underneath a mount point</p> <pre><code>ls /mnt/new/usr/src/linux-hwe-6.5-headers-6.5.0-17/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcu.sh /mnt/new/usr/src/linux-hwe-6.5-headers-6.5.0-17/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcu.sh </code></pre> <p>In the example below <code>is_file</code> will always return False.</p> <pre><code>#!/usr/bin/env python3 import pathlib import os size = 0 mount_point = pathlib.Path(&quot;/mnt/new&quot;) with open(&quot;/tmp/diff.txt&quot;, mode=&quot;r&quot;) as f: while True: line = f.readline() p = pathlib.Path(line) mounted_file = mount_point / p if &quot;kvm-recheck-rcu.sh&quot; in mounted_file.name: print(f&quot;debug: {mounted_file}:{mounted_file.is_file()}&quot;) if mounted_file.is_file(): print(mounted_file) size = size + os.stat(str(mounted_file)) if not line: break print(size) </code></pre> <p>The debug statement will print</p> <pre><code>debug: /mnt/new/usr/src/linux-hwe-6.5-headers-6.5.0-17/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcu.sh :False </code></pre>
<python>
2024-02-18 01:20:23
0
2,376
bearrito
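One likely cause in the question above is that `readline()` keeps the trailing newline, so every constructed path ends in `\n` (the debug output even shows the colon wrapping to the next line). A hedged sketch of the loop with the line stripped; note also that `os.stat(...)` returns a stat result, so the size accumulation needs `.st_size`. The helper name and file layout are illustrative, not from the question.

```python
import os
import pathlib

def total_size(listing: str, mount_point: str) -> int:
    """Sum the sizes of the files named (one per line) in `listing`,
    resolved under `mount_point`."""
    root = pathlib.Path(mount_point)
    size = 0
    with open(listing) as f:
        for line in f:
            rel = line.strip()            # drop the trailing newline readline() keeps
            if not rel:
                continue                  # skip blank lines
            mounted_file = root / rel
            if mounted_file.is_file():
                size += os.stat(mounted_file).st_size  # .st_size, not the stat result
    return size
```

With the unstripped path, `is_file()` correctly reports `False` because no file literally ends in a newline character.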
78,014,132
1,911,722
In python multimethod, how to define a case for empty list?
<p>For example, I define</p> <pre><code>@multimethod def f(x:list[str]): return 1 @multimethod def f(x:list[int]): return 0 </code></pre> <p>then</p> <pre><code>f([]) </code></pre> <p>gives the error</p> <blockquote> <p>DispatchError: ('f: 2 methods found', (&lt;class 'multimethod.&lt;class 'list'&gt;'&gt;,), [(&lt;class 'multimethod.list[str]'&gt;,), (&lt;class 'multimethod.list[int]'&gt;,)])</p> </blockquote> <p>How can I define a case specifically for the empty list?</p>
<python><multimethod>
2024-02-18 00:14:49
1
2,657
user15964
78,014,051
1,424,880
pycharm adds extra tab inside parentheses
<p>I recently upgraded to a newer PyCharm, 2023.3.3, and noticed that it started to put an extra tab inside parentheses.<br /> Consider</p> <p><code>x = 'orange'</code><br /> <code>x.replace('r', 'o')</code></p> <p>I would expect <code>x.replace('r', 'o')</code> to stay as-is.<br /> However, PyCharm places extra tabs before 'r' and 'o', i.e. <code>x.replace( 'r', 'o')</code>.</p> <p>How do I disable this auto-indent? I looked all over but could not find anything...</p> <p><a href="https://i.sstatic.net/NMSRo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NMSRo.png" alt="enter image description here" /></a></p>
<python><pycharm>
2024-02-17 23:21:16
0
1,545
JavaFan
78,013,861
6,533,037
Calculate Simple Moving Average
<p><code>data</code> is list containing candlestick objects. The <code>c</code>-property is the closing price of that candlestick.</p> <p>I have the following Python code to calculate the SMA:</p> <pre><code># Calculate to Simple Moving Average. def sma(close_prices): return sum(close_prices) / len(close_prices) def func(data): short_sma = 20 long_sma = 100 if len(data) &lt;= long_sma: return 0 close_prices = [d.c for d in data] sma_short = sma(close_prices[-short_sma:]) sma_long = sma(close_prices[-long_sma:]) if sma_short &gt; sma_long: return 0.0005 # buy if sma_short &lt; sma_long: return -0.0001 # sell return 0 </code></pre> <p>I basically first put all the closing prices of the data list into a separate list called <code>closing_prices</code>. Then I indicate a long and short-window. The window is just the amount of candlesticks used to calculate the SMA.</p> <p>The function <code>sma()</code> just sums up all the close prices and then divides it by the amount of datapoints, resulting in the average.</p> <p>If the <code>short_sma</code> is bigger then the <code>long_sma</code> the stock is bullish and I return a positive number indicating a buy signal, otherwise a negative number is returned indicating a sell signal.</p> <p>Is this the correct way to calculate the Simple Moving Average of a stock based on CandleSticks?</p>
<python>
2024-02-17 21:57:58
0
1,683
O'Niel
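The calculation in the question above (mean of the last N closes) is indeed the standard simple moving average. Summing a fresh slice on every bar costs O(window) each time, though; a hedged incremental sketch (function name illustrative) keeps a running total instead:

```python
from collections import deque

def rolling_sma(prices, window):
    """Return the SMA over each trailing `window` of prices,
    starting once the window is full."""
    buf = deque(maxlen=window)
    total = 0.0
    out = []
    for p in prices:
        if len(buf) == window:
            total -= buf[0]       # evict the oldest price from the running sum
        buf.append(p)             # deque with maxlen drops the oldest element
        total += p
        if len(buf) == window:
            out.append(total / window)
    return out
```

For long backtests this turns each update into O(1); for a single latest value, the slice-and-sum in the question is perfectly fine.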
78,013,836
1,443,702
Designing python code to execute concurrent requests by server for maximum requests
<p>I am trying to write the below python script, where for I have 5 servers and each server can process at max 2 requests concurrently. I have a pool of 10 requests to processed. Each server picks 2 requests each, processes them and picks up another request from the pool as soon as they have capacity to do so.</p> <p>The below code which I have written wait for all 2 requests by the server and only then the server picks another 2 requests. I want it to pick a request as soon as it's done processing one.</p> <pre><code>async def process_request(server_id, request_id): processing_time = random.randint(10, 30) print(f&quot;Server {server_id} is processing request {request_id} for {processing_time} seconds&quot;) await asyncio.sleep(processing_time) print(f&quot;Server {server_id} finished processing request {request_id}&quot;) async def server_worker(server_id, queue, num_concurrent_requests_per_server): while True: request_ids = [] for _ in range(num_concurrent_requests_per_server): try: request_id = await queue.get() request_ids.append(request_id) except asyncio.QueueEmpty: break tasks = [] for request_id in request_ids: task = asyncio.create_task(process_request(server_id, request_id)) tasks.append(task) await asyncio.gather(*tasks) for _ in range(len(request_ids)): await queue.put(random.randint(1, 100)) # Add one more request to the queue async def main(): num_servers = 5 num_concurrent_requests_per_server = 2 total_requests = 100 servers = [asyncio.Queue() for _ in range(num_servers)] # Start server workers server_tasks = [] for i in range(num_servers): task = asyncio.create_task(server_worker(i, servers[i], num_concurrent_requests_per_server)) server_tasks.append(task) # Generate and enqueue initial requests for _ in range(num_servers * num_concurrent_requests_per_server): server_id = _ % num_servers await servers[server_id].put(random.randint(1, 100)) # Wait for all requests to be processed await asyncio.gather(*[servers[i].join() for i in range(num_servers)]) 
# Cancel server workers for task in server_tasks: task.cancel() if __name__ == &quot;__main__&quot;: asyncio.run(main()) </code></pre>
<python><python-3.x><concurrency><python-asyncio>
2024-02-17 21:44:51
3
4,726
xan
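The `gather()` in the question above creates a batch barrier: a fast request must wait for its slow sibling before either slot refills. A common fix, sketched here with illustrative names and toy timings, is one independent worker coroutine per capacity slot, each pulling its next request from a shared queue the moment it finishes the current one:

```python
import asyncio
import random

async def process_request(server_id: int, request_id: int, done: list) -> None:
    await asyncio.sleep(random.uniform(0.01, 0.03))  # simulated work
    done.append((server_id, request_id))

async def slot_worker(server_id: int, queue: asyncio.Queue, done: list) -> None:
    # One coroutine per capacity slot: grabs the next request as soon as
    # it finishes the current one, with no batch barrier.
    while True:
        request_id = await queue.get()
        try:
            await process_request(server_id, request_id, done)
        finally:
            queue.task_done()

async def main(num_servers: int = 5, slots: int = 2, total: int = 10) -> list:
    queue: asyncio.Queue = asyncio.Queue()
    for i in range(total):
        queue.put_nowait(i)
    done: list = []
    workers = [
        asyncio.create_task(slot_worker(s, queue, done))
        for s in range(num_servers)
        for _ in range(slots)
    ]
    await queue.join()            # resolves once every request is processed
    for w in workers:
        w.cancel()                # workers are infinite loops; stop them explicitly
    return done

processed = asyncio.run(main())
```

Five servers with two slots each gives ten workers, i.e. at most ten requests in flight, with no slot ever idle while the queue is non-empty.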
78,013,757
4,842,857
Package on pipx is missing a version
<p>I'm new to Python, so I'm probably missing something about pipx, but I've been instructed to install version 0.0.13 and the latest version I'm seeing is 0.0.8:</p> <pre class="lang-bash prettyprint-override"><code>pipx install insanely-fast-whisper==0.0.13 --force Fatal error from pip prevented installation. Full pip output in file: /Users/jacksteam/Library/Logs/pipx/cmd_2024-02-16_17.30.08_pip_errors.log Some possibly relevant errors from pip install: ERROR: Ignored the following versions that require a different python version: 0.0.10 Requires-Python &lt;=3.11,&gt;=3.8; 0.0.11 Requires-Python &lt;=3.11,&gt;=3.8; 0.0.12 Requires-Python &lt;=3.11,&gt;=3.8; 0.0.13 Requires-Python &lt;=3.11,&gt;=3.8; 0.0.9 Requires-Python &lt;=3.11,&gt;=3.8 ERROR: Could not find a version that satisfies the requirement insanely-fast-whisper==0.0.13 (from versions: 0.0.1, 0.0.2, 0.0.3, 0.0.4, 0.0.5b0, 0.0.5b1, 0.0.5b2, 0.0.5b3, 0.0.5, 0.0.6, 0.0.7, 0.0.8) ERROR: No matching distribution found for insanely-fast-whisper==0.0.13 Error installing insanely-fast-whisper from spec 'insanely-fast-whisper==0.0.13'. </code></pre> <p>I'm using Conda with Python 3.10. I installed pipx with Homebrew as suggested in the pipx docs.</p>
<python><conda><pipx>
2024-02-17 21:13:51
1
5,349
Jack Steam
78,013,649
893,254
Python functional method of checking that all elements in a list are equal
<p>I am trying to find a functional method available in Python that can be used to check whether all elements in a list are equal. Instinctively, I feel that this should be possible.</p> <p>My objective is to build a functional solution to the problem of checking if all values in a list are equal.</p> <p>If you search for similar questions on this site you will find both questions and answers, written for Python, but none of them take a functional approach.</p>
<python><functional-programming>
2024-02-17 20:39:01
3
18,579
user2138149
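One functional recipe for the question above (a variant appears in the itertools recipes section of the Python docs): `groupby` collapses consecutive runs of equal elements, so an iterable is all-equal exactly when it yields at most one group.

```python
from itertools import groupby

def all_equal(iterable):
    """True when every element of the iterable is equal
    (vacuously True for an empty iterable)."""
    g = groupby(iterable)
    # First next(): a (key, run) tuple if non-empty, else the default True.
    # Second next(): finding a second group means two distinct runs exist.
    return bool(next(g, True)) and not next(g, False)
```

Another purely functional spelling is `len(set(lst)) <= 1` (hashable elements only) or `all(x == lst[0] for x in lst)`; the `groupby` version works lazily on any iterable and short-circuits on the first mismatch.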
78,013,505
14,839,602
python-docx does not apply font size when changing text direction to RTL
<p>I want to change the direction to RTL since I write in Kurdish, which uses Arabic numerals. But whenever I apply <code>run.font.rtl = True</code>, it cancels the font size <code>run.font.size = Pt(20)</code>.</p> <p>How can I fix that issue?</p> <pre><code>for paragraph in doc.paragraphs: if '{text}' in paragraph.text: paragraph.text = paragraph.text.replace('{text}', text[i].strip()) for run in paragraph.runs: run.font.size = Pt(20) run.font.name = 'Arial' run.font.rtl = True </code></pre>
<python><ms-word><right-to-left><python-docx>
2024-02-17 19:50:26
1
434
Hama Sabah
78,013,422
595,305
qtbot in pytest-qt raising AttributeError
<p>OS: W10<br> Python: 3.10<br> pytest: 7.3.1<br> pytest-qt: 4.2.0<br></p> <p>It's some time since I've used the <code>qtbot</code> fixture. I have what I believe to be a simple setup here. In this code I have included the <code>qtbot</code> fixture in the test function:</p> <pre><code>def test_is_possible_to_log_a_message(qtbot): ... index_0_2 = log_table_view.model().createIndex(0, 2) with qtbot.wait(100): text_0_2 = log_table_view.model().data(index_0_2) assert ... </code></pre> <p>This is causing the following error:</p> <pre><code>&gt; with qtbot.wait(100): E AttributeError: __enter__ </code></pre> <p>... I can't see any sign of others experiencing this with pytest-qt. Anyone got any ideas what the problem might be?</p> <p><strong>Update</strong><br> I was also getting a (more familiar) error when I instead used <code>waitUntil</code> (by the way, here, following a signal being emitted, I'm waiting for some text to appear in a cell of a <code>QTableView</code>):</p> <pre><code>def text_obtained(): text_0_2 = log_table_view.model().data(index_0_2) print(f'text_0_2 |{text_0_2}|') assert text_0_2 != None qtbot.waitUntil(text_obtained) </code></pre> <p>I <em>was</em> getting, on the first call to <code>text_obtained</code>:</p> <pre><code>&gt; text_0_2 = log_table_view.model().data(index_0_2) E RuntimeError: wrapped C/C++ object of type LogTableView has been deleted </code></pre> <p>... but I've now found that by calling <code>show()</code> on my <code>QMainWindow</code> (NB nothing appears as I'm using <code>QT_QPA_PLATFORM = offscreen</code>) and by using <code>waitUntil</code> for elements to become visible, this no longer happens.</p> <p>However, the <code>AttributeError</code> is still happening on <code>wait</code>...</p>
<python><qt><pyqt><pytest><pytest-qt>
2024-02-17 19:24:52
0
16,076
mike rodent
78,013,394
15,061,831
WebSocket connection to 'ws://127.0.0.1:5000/socket.io/?EIO=4&transport=websocket&sid=wBX-4EpWW6-y-DfFAAAC' failed:
<p>Im trying to send a &quot;refresh message&quot; from a Flask python app to the html.</p> <p>I did this:</p> <pre><code>def alert(previous_date, date, archivo): with app.app_context(): if previous_date!= date: print(f&quot;Alert&quot;) socketio.emit('refresh_page') return redirect(app.config['FLASK_APP_URL']) </code></pre> <hr /> <p>app = Flask(<strong>name</strong>)</p> <p>socketio = SocketIO(app)</p> <p>app.config['FLASK_APP_URL'] = 'http://127.0.0.1:5000'</p> <hr /> <p>Javascript:</p> <pre><code> var socket = io.connect('http://' + document.domain + ':' + location.port); // listen 'refresh_page' sent by server socket.on('refresh_page', function() { // Reload location.reload(); }); </code></pre> <p><a href="https://i.sstatic.net/pHkVu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pHkVu.png" alt="enter image description here" /></a></p> <p>Whats the problem in here?</p>
<javascript><python><socket.io>
2024-02-17 19:16:54
0
833
JustToKnow
78,013,308
2,658,627
Why is this command not syncing in the bot - discord.py
<p>I am using discord.py and have a bot using <a href="https://discordpy.readthedocs.io/en/stable/interactions/api.html#commandtree" rel="nofollow noreferrer">app_commands.CommandTree</a>.</p> <p>When I go to define and add 2 slash commands, 1 syncs and shows up in the UI, the other does not.</p> <p>Can someone help me understand why this is happening?</p> <pre><code>async def is_owner(interaction: discord.Interaction) -&gt; bool: return interaction.user.id == OWNER_ID @app_commands.command(name=&quot;resyncmessage&quot;, description=&quot;Checks message sent and posts or edits it&quot;) @app_commands.check(is_owner) async def resyncmessage(interaction: discord.Interaction, guildId: int, channelId: int, messageId: int): logging.debug(&quot;Entering resyncMessage():&quot;) await interaction.response.defer() embed = discord.Embed(title = &quot;resyncMessage&quot;) embedMessage = await interaction.followup.send(embed=embed, ephemeral=True) await procResyncmessage(guildId, channelId, messageId, embed, embedMessage) await embedMessage.edit(embed=embed) @app_commands.command(name=&quot;resync&quot;, description=&quot;Checks all messages and syncs&quot;) @app_commands.check(is_owner) async def resync(interaction: discord.Interaction, channel: discord.TextChannel): logging.debug(&quot;Entering resync():&quot;) await interaction.response.defer() embed = discord.Embed(title = &quot;Resync&quot;) embedMessage = await interaction.followup.send(embed=embed, ephemeral=True) await procResync(channel, embed, embedMessage) await embedMessage.edit(embed=embed) class aclient(discord.Client): def __init__(self): intents = discord.Intents.default() intents.members = True intents.messages = True intents.message_content = True logging.debug(f&quot;intents={intents}&quot;) super().__init__(intents=intents) self.synced = False async def on_ready(self): logging.debug(&quot;on_ready started: {self}&quot;) await self.wait_until_ready() logging.debug(&quot;ready.&quot;) async def 
setup_hook(self): logging.debug(f&quot;We have logged in as {self.user}.&quot;) if not self.synced: logging.debug(f&quot;Adding command={resync}, guild={LH}&quot;) tree.add_command(resync, guild=LH) logging.debug(f&quot;Adding command={resyncmessage}, guild={LH}&quot;) tree.add_command(resyncmessage, guild=LH) logging.debug(&quot;Starting sync...&quot;) await tree.sync() self.synced = True logging.debug(&quot;Sync complete.&quot;) else: logging.debug(&quot;Already synced.&quot;) logging.debug(&quot;Reboot complete, resync started...&quot;) await self.resyncTrackers() client = aclient() tree = app_commands.CommandTree(client) client.tree = tree logging.debug(&quot;Running...&quot;) client.run(TOKEN) </code></pre> <p>App log:</p> <pre><code>2024-02-15 20:45:59,308 - root - DEBUG - create_pool:&lt;mysql.connector.pooling.MySQLConnectionPool object at 0x7f43f4e97fd0&gt; 2024-02-15 20:45:59,309 - root - DEBUG - __init__:&lt;MySQLPool.MySQLPool object at 0x7f43f4f8f850&gt; 2024-02-15 20:45:59,590 - root - DEBUG - intents=&lt;Intents value=3276543&gt; 2024-02-15 20:45:59,592 - root - DEBUG - Running... 2024-02-15 20:45:59,593 - asyncio - DEBUG - Using selector: EpollSelector 2024-02-15 20:45:59,595 - discord.client - INFO - logging in using static token 2024-02-15 20:46:00,215 - root - DEBUG - We have logged in as ******. 2024-02-15 20:46:00,215 - root - DEBUG - Adding command=&lt;discord.app_commands.commands.Command object at 0x7f43f1ef4d90&gt;, guild=&lt;Object id=646... type=&lt;class 'discord.object.Object'&gt;&gt; 2024-02-15 20:46:00,215 - root - DEBUG - Adding command=&lt;discord.app_commands.commands.Command object at 0x7f43f1eb33d0&gt;, guild=&lt;Object id=646... type=&lt;class 'discord.object.Object'&gt;&gt; 2024-02-15 20:46:00,215 - root - DEBUG - Starting sync... 2024-02-15 20:46:00,442 - root - DEBUG - Sync complete. 2024-02-15 20:46:00,442 - root - DEBUG - Reboot complete, resync started... 
</code></pre> <p>And when I am on the server with the bot, this is what I see:</p> <p><a href="https://i.sstatic.net/5HDGz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5HDGz.png" alt="Bot integrations" /></a></p> <p>And if I try to inspect the slash commands, I can only see /resync, no mention of /resyncmessages.</p> <p><a href="https://i.sstatic.net/L6B5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L6B5n.png" alt="Slash commands" /></a></p> <p>How can I add the second slash command into the command tree and have it synced for my guild?</p>
<python><discord.py>
2024-02-17 18:50:51
1
709
anonymous
78,013,251
16,425,408
How to make the Telegram bot send the PDF file or return a G-drive link
<p>I am just working around the Telegram bot using Python, and I wish to send a pdf file or a drive link via the bot. However, I am getting an error. </p> <pre><code>No error handlers are registered, logging exception. Traceback (most recent call last): File &quot;/home/murgod/.local/lib/python3.9/site-packages/telegram/ext/dispatcher.py&quot;, line 557, in process_update handler.handle_update(update, self, check, context) File &quot;/home/murgod/.local/lib/python3.9/site-packages/telegram/ext/handler.py&quot;, line 199, in handle_update return self.callback(update, context) File &quot;/home/murgod/KubeBot/KubeBot-4/GKEbot.py&quot;, line 217, in send_resume bot.send_document( File &quot;/home/murgod/.local/lib/python3.9/site-packages/telegram/bot.py&quot;, line 133, in decorator result = func(*args, **kwargs) File &quot;/home/murgod/.local/lib/python3.9/site-packages/telegram/bot.py&quot;, line 976, in send_document return self._message( # type: ignore[return-value] File &quot;/home/murgod/.local/lib/python3.9/site-packages/telegram/bot.py&quot;, line 339, in _message result = self._post(endpoint, data, timeout=timeout, api_kwargs=api_kwargs) File &quot;/home/murgod/.local/lib/python3.9/site-packages/telegram/bot.py&quot;, line 298, in _post return self.request.post( File &quot;/home/murgod/.local/lib/python3.9/site-packages/telegram/utils/request.py&quot;, line 361, in post result = self._request_wrapper( File &quot;/home/murgod/.local/lib/python3.9/site-packages/telegram/utils/request.py&quot;, line 279, in _request_wrapper raise BadRequest(message) telegram.error.BadRequest: Wrong file identifier/http url specified </code></pre> <p>The defined function is</p> <pre><code>def xyz(update, context): resume_url = '' bot.send_document( chat_id=update.effective_chat.id, document=xyz_url, filename='xyz.pdf', ) xyz_handler = CommandHandler('xyz', xyz) dispatcher.add_handler(xyz_handler) </code></pre> <p>Does anyone have a suggestion?</p>
<python><python-3.x><bots><telegram><telegram-bot>
2024-02-17 18:31:39
0
838
Nani
78,013,162
14,196,341
Typehint function *args -> tuple[*args] with constraint on the args
<p>We want to type hint a function</p> <pre class="lang-py prettyprint-override"><code>def f(*args:float)-&gt;tuple[float,...]: ... return tuple(args) </code></pre> <p>such that it is specified that the number of elements in the tuple matches the number of args. Of course, the return here is a placeholder for more complicated logic.</p> <p>We would like to use mypy or pylance to check if we always return a) the correct number of elements and b) the correct type of all elements.</p> <p>Using <a href="https://peps.python.org/pep-0646/#args-as-a-type-variable-tuple" rel="nofollow noreferrer"><code>TypeVarTuple</code></a> would allow us to specify that we return the same number of elements, but not the type.</p> <p>Is there, in current Python (3.12), a way to do it besides writing many overloads for 1 parameter, 2 parameters, 3 parameters, etc.?</p>
<python><python-typing><mypy><pyright>
2024-02-17 18:03:44
1
392
Felix Zimmermann
78,013,031
1,581,441
debug pymongo access - [Errno 111] Connection refused
<p>I am trying to access my database on MongoDB from my virtual private server and from my shared hosting account on Godaddy. I follow the same installation steps for pymongo, and I added the public IP for both. The thing is that the connection to the DB works correctly for the virtual private server, but not for the shared hosting, where I get the error <code>[Errno 111] Connection refused</code></p> <p>Here is the code, only hiding parts from the connection string:</p> <pre><code>from requests import get ip = get('https://api.ipify.org').content.decode('utf8') print('My public IP address is: {}'.format(ip)) from pymongo.mongo_client import MongoClient uri = &quot;mongodb+srv://XXXXXX@cluster0.XXXX.mongodb.net/?retryWrites=true&amp;w=majority&amp;ssl=true&amp;ssl_cert_reqs=CERT_NONE&quot; # Create a new client and connect to the server client = MongoClient(uri) # Send a ping to confirm a successful connection try: client.admin.command('ping') print(&quot;Pinged your deployment. You successfully connected to MongoDB!&quot;) except Exception as e: print(e) </code></pre> <p>Here is the output of the virtual private server:</p> <p><code>My public IP address is: 166.62.100.113 Pinged your deployment. You successfully connected to MongoDB! </code></p> <p>And here is the output from the shared hosting:</p> <p><code>My public IP address is: 198.71.231.19 XXXX.mongodb.net:27017: [Errno 111] Connection refused,XXXX.mongodb.net:27017: [Errno 111] Connection refused,XXXX.mongodb.net:27017: [Errno 111] Connection refused </code></p> <p>I made sure both IPs are added to the access list.</p> <p><a href="https://i.sstatic.net/UGrUu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UGrUu.png" alt="enter image description here" /></a></p> <p>So, the question is, how can I debug the connection and know what causes it to be refused? Maybe there is some kind of firewall or proxy at GoDaddy that I should be adding? Any help would be really appreciated.</p>
<python><mongodb><pymongo>
2024-02-17 17:21:04
1
1,512
hmghaly
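Errno 111 in the question above means the TCP handshake itself was refused, before any MongoDB authentication happens, which on shared hosting commonly points at outbound port 27017 being blocked by the host rather than at the Atlas access list. A stdlib sketch (helper name illustrative) for probing reachability from each environment:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """True if a plain TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:   # covers ConnectionRefusedError, timeouts resolved as errors, etc.
        return False

# e.g. port_reachable("cluster0.XXXX.mongodb.net", 27017) on each host
```

If this returns `False` on the shared host but `True` on the VPS, the refusal happens at the network layer and GoDaddy support (or an allowlist for outbound 27017) is the place to fix it, not pymongo.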
78,012,948
14,775,478
How to invoke pydantic field_validator on another, optional field?
<p>I am trying to use a <code>field_validator</code> to validate a field based on the presence of another, optional field. So this is really about validating one input based on another input (the classic &quot;password1/password2&quot; tutorial case, albeit validating password1 on optional password2).</p> <p>Example: If a given <code>plant='flower'</code>, then it must have a <code>color</code> (which is optional, because other plants may not have a color).</p> <pre><code>from typing import Optional from pydantic import BaseModel, field_validator from pydantic_core.core_schema import FieldValidationInfo class MyClass(BaseModel): plant: str color: Optional[str] = None @field_validator('plant') def flowers_have_color(cls, v, info: FieldValidationInfo): if v == 'flower': if info.data['color'] is None: raise ValueError(&quot;if 'plant' is a flower, it must have a color&quot;) return v </code></pre> <p>Expected behavior:</p> <pre><code>&gt;&gt;&gt; MyClass(plant='tree') # ok MyClass(plant='tree', color=None) &gt;&gt;&gt; MyClass(plant='tree', color='red') # ok MyClass(plant='tree', color='red') &gt;&gt;&gt; MyClass(plant='flower') # raise &gt;&gt;&gt; MyClass(plant='flower', color='red') # ok MyClass(plant='flower', color='red') </code></pre> <p>This raises the following error:</p> <pre><code>MyClass(plant='flower', color='red') &gt;&gt; KeyError: 'color' </code></pre> <p>While the above code invokes the field_validator when passing a &quot;flower&quot; plant (as expected), it does not see the <code>color</code> inside the validator, because the <code>color</code> is <code>Optional</code>. Indeed we see</p> <pre><code>print(info.data.keys()) dict_keys([]) </code></pre> <p>How can I make the optional field <code>color</code> available to the field_validator (in general, or at least if it is provided)?</p> <p>EDIT: As per comment below (thanks to Yurii Motov) this can be solved by using a <code>model_validator</code> (see below). 
But seems like an overkill to me, given I really just want to validate a single field (no need for <code>self</code>, classmethods or objects, etc.), and anything needed for that validation is input to the class constructor. It feels weird that <code>field_validator</code> would not be able to work in this situation - and vice versa, if validating &quot;any input against another input&quot; were not to be possible with <code>field_validator</code>, then why have it in pydantic in the first place and not replace any and all <code>field_validator</code>s by <code>model_validator</code>s?</p> <pre><code>@model_validator(mode='after') def plants_must_have_color(self): if self.plant == 'flower': if self.color is None: raise ValueError(&quot;'flower' must have a color&quot;) return self </code></pre>
<python><pydantic><pydantic-v2>
2024-02-17 16:58:19
0
1,690
KingOtto
78,012,870
16,689,086
Errors in installing mpc
<p>Hi everyone. When I try to install mpc by running the command:</p> <pre><code>pip install mpc </code></pre> <p>I get the following error:</p> <pre><code> Using cached mpc-0.0.4.tar.gz (17 kB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─&gt; [3 lines of output] error in mpc setup command: 'install_requires' must be a string or list of strings containing valid project/version requirement specifiers; Expected end or semicolon (after version specifier) numpy&gt;=1&lt;2 ~~~^ [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─&gt; See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. </code></pre> <p>How can I solve it?</p>
<python><pip><mpc><meshroom>
2024-02-17 16:34:26
1
653
Noya_G
78,012,868
1,030,287
jupyter lab rotating a 3D plot uses all CPUs
<p>I just did a fresh install of the latest linux Mint 21.3 &quot;Virginia&quot; Cinnamon based on Ubuntu 22.04. I've got a dedicated NVidia graphics card with the latest Nvidia drivers installed. I'm running AMD Ryzen 7 5800X and my NVidia graphics card is nothing exciting - it's a fanless 2GB card but this should be enough not to cause the problem that I'm facing. I am using the latest google chrome browser.</p> <p>I've started to need to create interactive 3D plots. I use:</p> <pre><code>from matplotlib import pyplot as plt %matplotlib ipympl </code></pre> <p>When manipulating the plot (e.g. rotating) <strong>CPU</strong> usage goes very high (80%) on all 16 CPUs (threads). I can also see GPU usage go up a bit (normally stays ~ 4% and goes up to 24%).</p> <p>I feel I should not be seeing my CPU usage go up by much when rotating a plot. A video game has to rotate so many more objects that if my CPU went to 80% just by rotating a plot, it will have to go to 800% to play even the simplest of video games. I feel either my browser or jupyter kernel is not using the GPU.</p> <p>I don't have a lot of experience with 3D plot rendering but this must be a common requirement/problem. <strong>Any help or pointers will be greatly appreciated!</strong></p> <p>Lastly, my browser GPU settings are below.</p> <h1>Graphics Feature Status</h1> <ul> <li>Canvas: Hardware accelerated</li> <li>Canvas out-of-process rasterization: Enabled</li> <li>Direct Rendering Display Compositor: Disabled</li> <li>Compositing: Hardware accelerated</li> <li>Multiple Raster Threads: Enabled</li> <li>OpenGL: Enabled</li> <li>Rasterization: Hardware accelerated</li> <li>Raw Draw: Disabled</li> <li>Skia Graphite: Disabled</li> <li>Video Decode: Hardware accelerated</li> <li>Video Encode: Software only. Hardware acceleration disabled</li> <li>Vulkan: Disabled</li> <li>WebGL: Hardware accelerated</li> <li>WebGL2: Hardware accelerated</li> <li>WebGPU: Disabled</li> </ul>
<python><matplotlib><jupyter-notebook><jupyter><jupyter-lab>
2024-02-17 16:34:06
0
12,343
s5s
78,012,840
243,090
Polars: AWS S3 connection pooling
<p>I want to use Polars to read from Parquet files stored on S3. I'm running my code in AWS Lambda.</p> <p>When using boto3 I would make a client in the global scope so that the connection is reused across invocations (i.e., a client is made for each Lambda cold start, but not for each invocation):</p> <pre class="lang-py prettyprint-override"><code>client = boto3.client(&quot;s3&quot;) def handler(event, context): # Use the client here, ensuring the connection already exists </code></pre> <p>The Polars documentation says that Polars can connect to S3 for me by looking at the location of the file I'm reading:</p> <pre class="lang-py prettyprint-override"><code>df = pl.read_parquet(&quot;s3://path/to/file.parquet&quot;) </code></pre> <p>However, if I put this inside the handler, I assume the connection is re-created for each Lambda invocation. I really want to be able to pass a connection into <code>read_parquet</code> (or <code>scan_parquet</code>, or other IO methods) like:</p> <pre class="lang-py prettyprint-override"><code>df = pl.read_parquet(&quot;s3://path/to/file.parquet&quot;, connection_options={&quot;aws&quot;: {&quot;client&quot;: client}}) </code></pre> <p>My reading of the docs is that the <code>client</code> config is simply about <em>how</em> it connects, not a client object.</p> <p>If I'm wrong and I can pass a client, what sort of object should it be? Am I wrong in assuming that a connection pool is useful here, or is the underlying API doing this for me, in some way?</p>
<python><amazon-s3><aws-lambda><python-polars>
2024-02-17 16:30:30
1
537
Sym
78,012,759
3,232,982
Retry on "socket.send() raised exception"
<p>My code:</p> <pre><code>import asyncio import time valueTime = 3 async def telnet_client(host: str, port: int) -&gt; None: while True: try: reader, writer = await asyncio.open_connection(host, port) print(f&quot;connected to ({host}, {port})&quot;) while True: writer.write(&quot;hello\n&quot;.encode()) time.sleep(valueTime) except: print(&quot;error: inner&quot;) try: asyncio.run(telnet_client(&quot;192.168.1.126&quot;, 23)) except: print(&quot;error: main&quot;) </code></pre> <p>I want to implement the following scenario: when there is a communication problem, the client should try to reconnect once communication is restored, instead of logging &quot;socket.send() raised exception.&quot; forever.</p> <p>I tried adding try..except, but none of them caught the socket.send() exception.</p>
<python><python-asyncio>
2024-02-17 16:07:09
1
413
Roni Hacohen
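In the question above, `time.sleep()` blocks the event loop, the writes are never drained (so send failures surface later as the logged "socket.send() raised exception"), and the bare `except` hides where things failed. A hedged reconnecting sketch (interval and messages illustrative) that awaits `drain()` so errors are raised where they can trigger a reconnect:

```python
import asyncio

async def telnet_client(host: str, port: int, interval: float = 3.0) -> None:
    """Keep a connection alive; on any socket error, wait and reconnect."""
    while True:
        try:
            reader, writer = await asyncio.open_connection(host, port)
            print(f"connected to ({host}, {port})")
            try:
                while True:
                    writer.write(b"hello\n")
                    await writer.drain()            # send errors surface here...
                    await asyncio.sleep(interval)   # ...and this never blocks the loop
            finally:
                writer.close()
        except OSError as exc:                      # refused, reset, unreachable, ...
            print(f"connection error: {exc}; retrying in {interval}s")
            await asyncio.sleep(interval)
```

Catching `OSError` (the base of `ConnectionRefusedError`, `ConnectionResetError`, etc.) instead of a bare `except` keeps `CancelledError` propagating, so the task can still be shut down cleanly.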
78,012,269
1,820,480
Nextflow Execution Environment Differs Between Processes
<p>I am defining two nextflow processes. The first one, scatter(), creates two files. Then, parallel() is spawned twice, once for each file.</p> <p>Here is my setup.</p> <pre><code>// bug.nf nextflow.enable.dsl = 2 workflow { main: scatter(params.config) scatter.out.configs | flatten | parallel } process scatter { container &quot;python:3.11.8&quot; input: path &quot;config.txt&quot; output: path &quot;config*.txt&quot;, emit: configs script: &quot;&quot;&quot; echo $PWD ls -hal /home/alex/my_cool_repo touch config1.txt touch config2.txt &quot;&quot;&quot; } process parallel { container &quot;python:3.11.8&quot; input: path &quot;config.txt&quot; script: &quot;&quot;&quot; echo $PWD ls -hal /home/alex/my_cool_repo &quot;&quot;&quot; } </code></pre> <pre class="lang-bash prettyprint-override"><code>// run command nextflow run nextflow/bug.nf --config /home/alex/my_cool_repo/my_cool_repo/config/bla.txt </code></pre> <p>The <code>ls</code> output from all processes should look the same but it does not.</p> <p>Output from scatter() (truncated):</p> <pre><code>/home/alex/my_cool_repo total 656K drwxrwxr-x 16 1035 1036 4.0K Feb 17 13:20 . drwxr-xr-x 3 root root 4.0K Feb 17 13:20 .. 
-rw-rw-r-- 1 1035 1036 3.3K Feb 17 11:09 .dockerignore -rw-rw-r-- 1 1035 1036 3.2K Feb 6 15:33 .gitignore drwxrwxr-x 4 1035 1036 4.0K Feb 17 13:20 .nextflow -rw-rw-r-- 1 1035 1036 5.4K Feb 17 13:20 .nextflow.log -rw-rw-r-- 1 1035 1036 5 Jan 26 18:18 .python-version drwxrwxr-x 6 1035 1036 4.0K Feb 7 14:20 .venv drwxrwxr-x 2 1035 1036 4.0K Feb 6 13:28 .vscode -rw-rw-r-- 1 1035 1036 848 Feb 17 12:28 Dockerfile -rw-rw-r-- 1 1035 1036 627 Feb 6 15:33 README.md drwxrwxr-x 3 1035 1036 4.0K Feb 17 12:55 nextflow -rw-rw-r-- 1 1035 1036 527K Feb 17 11:45 poetry.lock -rw-rw-r-- 1 1035 1036 32 Jan 26 18:18 poetry.toml -rw-rw-r-- 1 1035 1036 2.2K Feb 16 19:36 pyproject.toml drwxrwxr-x 9 1035 1036 4.0K Feb 6 13:28 my_cool_repo drwxrwxr-x 3 1035 1036 4.0K Feb 17 13:20 work </code></pre> <p>Output from the two parallel() processes:</p> <pre><code>/home/alex/my_cool_repo total 12K drwxr-xr-x 3 root root 4.0K Feb 17 13:20 . drwxr-xr-x 3 root root 4.0K Feb 17 13:20 .. drwxrwxr-x 5 1035 1036 4.0K Feb 17 13:20 work </code></pre> <p>Why are the outputs not the same?</p> <p>Context: Instead of <code>ls</code> I actually would like to run <code>poetry run ...</code> but poetry gives the following error message for the parallel() processes: <code>Poetry could not find a pyproject.toml file in /home/alex/my_cool_repo/work/f3/766313fbc5d6aeeb39f19193956ffd or its parents</code>.</p>
<python><docker><nextflow>
2024-02-17 13:31:39
2
3,196
r0f1
78,012,062
5,285,918
Why does it seem necessary to rotate transformation matrix for mapping coordinates with scikit image?
<p>I have a set of points that are effectively 3 vertices of a 45-45-90 right triangle and some other points, <code>a</code>, that should map to them.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np points = np.array([ ( 90, 416), (398, 390), (374, 84) ]) a = np.array([ (0, 1), # maps to (90, 416) (1, 1), # maps to (398, 390) (1, 0) # maps to (374, 84) ]) </code></pre> <p>I want to find the <a href="https://scikit-image.org/docs/stable/api/skimage.transform.html#skimage.transform.SimilarityTransform" rel="nofollow noreferrer">similarity transformation</a> that properly maps <code>a</code> to <code>points</code>.</p> <pre class="lang-py prettyprint-override"><code>from skimage import transform # transformation that makes sense to me T1 = transform.estimate_transform( ttype=&quot;similarity&quot;, src=a, dst=points ) # invert the rotation for no reason # other than to show that it works T2 = transform.SimilarityTransform( scale=T1.scale, rotation=-T1.rotation, translation=T1.translation ) # apply transformations via matrix multiplication a_T1 = a @ T1.params[:2, :2] + T1.params[:2, 2] a_T2 = a @ T2.params[:2, :2] + T2.params[:2, 2] </code></pre> <p>Why is it that <code>T2</code> (where I just inverted the rotation for no real reason other than I eventually found out that it works) yields the better mapping? Or am I making a dumb mistake in my implementation?</p> <p><a href="https://i.sstatic.net/y3m7U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y3m7U.png" alt="enter image description here" /></a></p>
<python><computer-vision><linear-algebra><scikit-image>
2024-02-17 12:18:28
1
5,384
lanery
78,012,016
1,485,877
Equivalent to dlopen(None) in Windows
<p>I am using Python CFFI and want to open a library where I can call <code>free</code> on some data I get back. In Linux and Mac, I put <code>free</code> in the cdefs and load a base C library with <code>dlopen(None)</code>.</p> <pre class="lang-py prettyprint-override"><code>from cffi import FFI header = &quot;&quot;&quot; typedef struct { int32_t* members; } container_t; void free(void *ptr); &quot;&quot;&quot; cdefs = FFI() cdefs.cdef(header) cdefs.set_source(&quot;_main&quot;, &quot;&quot;) # So that the header can be `include`d externally lib = cdefs.dlopen(None) def free_members(container) -&gt; None: lib.free(container.members) </code></pre> <p>This fails on Windows with the well-documented error:</p> <pre><code>OSError: dlopen(None) cannot work on Windows for Python 3 (see http://bugs.python.org/issue23606) </code></pre> <p>What gives the equivalent behavior on Windows? I just want <code>free</code>, which should be available on any old DLL anywhere.</p>
<python><windows><python-cffi>
2024-02-17 12:01:14
1
9,852
drhagen
78,011,941
887,651
Django application to manage a small Inventory
<p>I have created a very small Django application to manage a very small Inventory.</p> <p>My <strong>models.py</strong> code is:</p> <pre><code>class Inventory(models.Model): account = models.ForeignKey( &quot;accounts.Account&quot;, on_delete=models.DO_NOTHING, null=False, blank=False ) class InventoryProduct(models.Model): inventory = models.ForeignKey(&quot;Inventory&quot;, on_delete=models.CASCADE) sku = models.ForeignKey( &quot;products.SKU&quot;, on_delete=models.DO_NOTHING, null=False, blank=False ) quantity = models.PositiveIntegerField( default=0 ) class Transaction(models.Model): IN = 1 OUT = 0 TYPE_CHOICES = ( (IN, &quot;Incoming&quot;), # Carico / Entrata (OUT, &quot;Outgoing&quot;), # Scarico / Uscita ) inventory = models.ForeignKey(&quot;Inventory&quot;, on_delete=models.CASCADE) transferred_to = models.ForeignKey( &quot;Inventory&quot;, on_delete=models.CASCADE, blank=True, null=True ) code = models.UUIDField(default=uuid.uuid4) transaction_type = models.PositiveSmallIntegerField( choices=TYPE_CHOICES, default=IN ) transaction_date = models.DateTimeField(auto_now_add=True) notes = models.TextField(null=True, blank=True) class TransactionItem(models.Model): transaction = models.ForeignKey(&quot;Transaction&quot;, on_delete=models.CASCADE) item = models.ForeignKey(&quot;InventoryProduct&quot;, on_delete=models.CASCADE) quantity = models.IntegerField() def save(self, *args, **kwargs): super().save(*args, **kwargs) self.item.quantity += self.quantity self.item.save() </code></pre> <p>The code is quite self-explanatory: I basically have an <strong>Inventory</strong> per account, and the <strong>Inventory</strong> has products that I add in the related model <strong>InventoryProduct</strong>. Each product in <strong>InventoryProduct</strong> has a quantity that will be updated during a <strong>Transaction</strong>.</p> <p>A Transaction has a type IN/OUT to understand if I have to add or remove Items from the Inventory. 
Lastly, as you can surely understand, the TransactionItem holds all the Items with the quantity (to add or remove) inside a Transaction.</p> <p>It is quite simple; I would really appreciate your opinion on whether I can improve the modelling somehow.</p> <p>My questions are:</p> <ol> <li><p>I want to add a way to transfer products from one Inventory to another (I think I need an OUT Transaction and then an IN Transaction), but how can I keep track of this &quot;movement&quot; from one Inventory to another? I would like to record which Inventory the products are coming from.</p> </li> <li><p>This question is also related to the first one. Products can be added to an Inventory from an <strong>Order</strong> or by a transfer from another Inventory. How can I also include the <strong>Order</strong> &quot;concept&quot;?</p> </li> </ol>
<python><django><django-models>
2024-02-17 11:34:39
2
4,644
Dail
78,011,917
22,437,609
Kivy Buildozer: aiohttp asyncio Issue, App Crashes
<p>I will try to explain my problem exactly. First of all, my Kivy app and all its sections work in VSCode. It is a calculator app that scrapes some data and calculates it.</p> <p><strong>Python version: 3.11.7 Kivy version: 2.3.0</strong></p> <p>I have two functions in my code that I want to run at the same time, so I used aiohttp and asyncio. Let's start.</p> <p><strong>Libraries that I used in my app</strong></p> <pre><code>from kivy.app import App from kivy.uix.boxlayout import BoxLayout from kivy.uix.label import Label from kivy.uix.image import Image from kivy.uix.behaviors import ButtonBehavior from kivy.metrics import dp from kivy.clock import mainthread from kivy.uix.popup import Popup from kivy.factory import Factory from kivy.properties import ObjectProperty from time import sleep from unidecode import unidecode from bs4 import BeautifulSoup import aiohttp import asyncio import requests import threading import re </code></pre> <p><strong>Code example of aiohttp / asyncio</strong></p> <pre><code>@mainthread def calculate(self, lig, *args): async def fetch_stats_home(session, url, match_codes_home): try: querystring = {} headers = {} async with session.get(url, params=querystring, headers=headers) as response: soup = BeautifulSoup(await response.text(), &quot;html.parser&quot;) except: pass async def fetch_stats_away(session, url, match_codes_away): try: querystring = {} headers = {} async with session.get(url, params=querystring, headers=headers) as response: soup = BeautifulSoup(await response.text(), &quot;html.parser&quot;) except: pass async def main(): url = &quot;https://Handler.aspx&quot; match_codes_home = evsahibi_evindeki_maclar_kodlar match_codes_away = deplasman_deplasmandki_maclar_kodlar async with aiohttp.ClientSession() as session: tasks_home = [fetch_stats_home(session, url, match_code) for match_code in match_codes_home] tasks_away = [fetch_stats_away(session, url, match_code) for match_code in match_codes_away] await asyncio.gather(*tasks_home, 
*tasks_away) asyncio.run(main()) </code></pre> <p><strong>VSCODE TESTS &amp; Terminal Logs (APP Works Without a Problem)</strong></p> <pre><code>[DEBUG ] [Using proactor] IocpProactor [DEBUG ] [Starting new HTTPS connection (1)] .com:443 [DEBUG ] [https ]//.com:443 &quot;GET /Takim/451/ HTTP/1.1&quot; 200 27720 [DEBUG ] [https ]//.com:443 &quot;GET /Takim/570/ HTTP/1.1&quot; 200 28575 [DEBUG ] [Using proactor] IocpProactor [DEBUG ] [Starting new HTTPS connection (1)] arsiv.mackolik.com:443 [DEBUG ] [https ]//.com:443 &quot;GET /Takim/3/ HTTP/1.1&quot; 200 29920 [DEBUG ] [https ]//:443 &quot;GET /Takim/447/ HTTP/1.1&quot; 200 28574 [DEBUG ] [Using proactor] IocpProactor [DEBUG ] [Starting new HTTPS connection (1)] arsiv.mackolik.com:443 [DEBUG ] [https ]//:443 &quot;GET /Takim/8/ HTTP/1.1&quot; 200 28140 [DEBUG ] [https ]//:443 &quot;GET /Takim/448/ HTTP/1.1&quot; 200 28235 [DEBUG ] [Using proactor] IocpProactor Scroll 1 APP WORKS </code></pre> <p><strong>BUILDOZER: i compiled app in Buildozer, when i click Calculate button then APP crashes.</strong></p> <p><strong>Here is the log</strong> : <strong>adb logcat -s python</strong></p> <p>As you can see, at the first <strong>[Using proactor] IocpProactor</strong> , APP Crashes.</p> <pre><code>Starting new HTTPS connection (1)] arsiv.mackolik.com:443 02-17 13:59:55.087 12778 13111 I python : [DEBUG ] [https ]//arsiv.mackolik.com:443 &quot;GET /Takim/574/ HTTP/1.1&quot; 200 27720 02-17 13:59:56.036 12778 13111 I python : [DEBUG ] [https ]//arsiv.mackolik.com:443 &quot;GET /Takim/446/ HTTP/1.1&quot; 200 28575 02-17 13:59:56.346 12778 13111 I python : [DEBUG ] [Using selector] EpollSelector 02-17 13:59:56.762 12778 13111 I python : [INFO ] [Base ] Leaving application in progress... 
02-17 13:59:56.763 12778 13111 I python : Traceback (most recent call last): 02-17 13:59:56.763 12778 13111 I python : File &quot;/home/seo/Desktop/APP/.buildozer/android/app/main.py&quot;, line 825, in &lt;module&gt; 02-17 13:59:56.763 12778 13111 I python : File &quot;/home/seo/Desktop/APP/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/python-installs/footballpredictor/arm64-v8a/kivy/app.py&quot;, line 956, in run 02-17 13:59:56.763 12778 13111 I python : File &quot;/home/seo/Desktop/APP/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/python-installs/footballpredictor/arm64-v8a/kivy/base.py&quot;, line 574, in runTouchApp 02-17 13:59:56.764 12778 13111 I python : File &quot;/home/seo/Desktop/APP/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/python-installs/footballpredictor/arm64-v8a/kivy/base.py&quot;, line 339, in mainloop 02-17 13:59:56.764 12778 13111 I python : File &quot;/home/seo/Desktop/APP/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/python-installs/footballpredictor/arm64-v8a/kivy/base.py&quot;, line 379, in idle 02-17 13:59:56.764 12778 13111 I python : File &quot;/home/seo/Desktop/APP/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/python-installs/footballpredictor/arm64-v8a/kivy/clock.py&quot;, line 733, in tick 02-17 13:59:56.765 12778 13111 I python : File &quot;/home/seo/Desktop/APP/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/python-installs/footballpredictor/arm64-v8a/kivy/clock.py&quot;, line 776, in post_idle 02-17 13:59:56.765 12778 13111 I python : File &quot;kivy/_clock.pyx&quot;, line 620, in kivy._clock.CyClockBase._process_events 02-17 13:59:56.765 12778 13111 I python : File &quot;kivy/_clock.pyx&quot;, line 653, in kivy._clock.CyClockBase._process_events 02-17 13:59:56.765 12778 13111 I python : File &quot;kivy/_clock.pyx&quot;, line 649, in kivy._clock.CyClockBase._process_events 02-17 13:59:56.766 12778 13111 I python : File 
&quot;kivy/_clock.pyx&quot;, line 218, in kivy._clock.ClockEvent.tick 02-17 13:59:56.766 12778 13111 I python : File &quot;/home/seo/Desktop/APP/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/python-installs/footballpredictor/arm64-v8a/kivy/clock.py&quot;, line 1095, in callback_func 02-17 13:59:56.766 12778 13111 I python : File &quot;/home/seo/Desktop/APP/.buildozer/android/app/main.py&quot;, line 598, in calculate 02-17 13:59:56.766 12778 13111 I python : ZeroDivisionError: division by zero 02-17 13:59:56.766 12778 13111 I python : Python for android ended. </code></pre> <p><strong>Buildozer.spec</strong></p> <pre><code>requirements = python3==3.11.7,kivy==2.3.0,aiohttp==3.9.3,aiosignal==1.3.1,attrs==23.2.0,beautifulsoup4==4.12.3,certifi==2024.2.2,charset-normalizer==3.3.2,docutils==0.20.1,frozenlist==1.4.1,idna==3.6,multidict==6.0.5,requests==2.31.0,soupsieve==2.5,Unidecode==1.3.8,urllib3==2.2.0,yarl==1.9.4 osx.python_version = 3.11.7 osx.kivy_version = 2.3.0 </code></pre> <p><strong>UPDATE: Found the problem but can't fix it</strong></p> <p>When I create the APK via Buildozer, I checked with &quot;buildozer android debug deploy run logcat&quot; that the <strong>'async with session.get(url, params=querystring, headers=headers) as response:'</strong> sections do not work, and BeautifulSoup inside them does not work either, so the data is not scraped and we get the ZeroDivisionError.</p> <p>But in VSCode there is no problem.</p> <p>Why does the <strong>'async with session.get(url, params=querystring, headers=headers) as response:'</strong> part not work in Kivy?</p> <p>Thanks</p>
<python><python-3.x><kivy><kivymd><buildozer>
2024-02-17 11:25:37
1
313
MECRA YAVCIN
78,011,829
5,025,650
Inconsistent sending and reading of integers via Python serial to an Arduino
<p>I am trying to send integers via Python and serial communication to an Arduino, read them there, and send them back (solely for debugging) to print via Python.</p> <p>I often get either zeros sent back, or the integers in the wrong order. I am hitting a wall. I went back to a minimal example and still can't see my mistake.</p> <p>Arduino code:</p> <pre class="lang-cpp prettyprint-override"><code>uint32_t val1; uint32_t val2; uint32_t val3; void setup() { Serial.begin(115200); } void loop() { if (Serial.available() &gt; 0) { // read values from Python val1 = Serial.parseInt(); val2 = Serial.parseInt(); val3 = Serial.parseInt(); } if (Serial.available() &gt; 0) { // Print received values for reading back to Python Serial.println(val1); Serial.println(val2); Serial.println(val3); } } </code></pre> <p>Python code:</p> <pre class="lang-py prettyprint-override"><code>import serial import time port = '/dev/cu.usbmodem14301' ser = serial.Serial(port, baudrate=115200, timeout=None) # Wait for Arduino to initialize time.sleep(2) # flush input buffer ser.reset_input_buffer() # define three parameters/integers to send in a list numbers = [123, 60000, 789] # max integer is 2^15-1 for a normal int in Arduino; we use uint32_t in Arduino, which is an unsigned 32 bit -&gt; 2^32-1 # send to Arduino as csv for num in numbers: ser.write(str(num).encode()) ser.write(b',') # Send comma as delimiter # Read the response from Arduino response = [] for _ in range(len(numbers)): data = ser.readline().strip() response.append(data.decode()) # Close the serial port ser.close() # Print the response print(&quot;Response from Arduino:&quot;, response) </code></pre> <p>Also, is there a possibility to read the integers from Python only once, outside the loop, and declare them globally? I failed at that, too.</p>
<python><arduino><serial-port>
2024-02-17 10:58:07
1
321
Nikolaij
78,011,805
5,695,336
Why does boolean indexing reverse the order of axis?
<pre><code>import numpy as np x = np.zeros((5,4,3,2)) i = np.array([[True, False], [False, False], [False, True]]) print(x[4,:,i].shape) # (2, 4) </code></pre> <p>Why is <code>x[4,:,i].shape</code> (2, 4) instead of (4, 2)?</p> <p>The indexing array with the 2 <code>True</code> values is after <code>:</code> which corresponding to the axis with 4 values. (4, 2) seems to make more sense.</p>
<python><numpy>
2024-02-17 10:52:41
0
2,017
Jeffrey Chen
78,011,718
13,798,993
Does jax save the jaxpr of jit compiled functions?
<p>Consider the following example:</p> <pre><code>import jax import jax.numpy as jnp @jax.jit def test(x): if x.shape[0] &gt; 4: return 1 else: return -1 print(test(jnp.ones(8,))) print(test(jnp.ones(3,))) </code></pre> <p>The output is</p> <pre><code>1 -1 </code></pre> <p>However, I thought that on the first call jax compiles a function to use in subsequent calls. Shouldn't this then give the output 1 and 1, because jax traces through an if and does not use a conditional here? In the jaxpr of the first call is no conditional:</p> <pre><code>{ lambda ; a:f32[8]. let b:i32[] = pjit[name=test jaxpr={ lambda ; c:f32[8]. let in (1,) }] a in (b,) } </code></pre> <p>So how exactly does this work under the hood. Is the jaxpr unique for every call. Does jax only reuse jaxprs if the shape matches? Does jax recompile functions if the shape is different?</p>
<python><jax><google-jax>
2024-02-17 10:22:53
1
689
Quasi
78,011,566
309,917
How to delete a Datastore database with Google App Engine search data in it without Python 2.7
<p>Trying to delete the <code>(default)</code> Datastore database I get this message:</p> <pre><code>400: Database contains Google App Engine search data. To delete this database you must first remove the search data. </code></pre> <p>This data is there, unused, from years and years ago when this project used the Python 2.7 App Engine just for a while. It's a negligible amount of data, so I never bothered deleting it.</p> <p>Going to <a href="https://console.cloud.google.com/appengine/search?project=" rel="nofollow noreferrer">https://console.cloud.google.com/appengine/search?project=</a>&lt;PROJECT_ID&gt; I see there's no option to remove the search data.</p> <p>In the <a href="https://cloud.google.com/appengine/docs/legacy/standard/python/search#deleting_an_index" rel="nofollow noreferrer">App Engine Search API docs</a> there is some Python code to delete an index, for which I probably need Python 2.7, which I don't even have installed anymore. There are deprecation warnings everywhere.</p> <p>Using a new project is <strong>NOT</strong> an option as this is a production environment which is tied to so many other systems that it would be a pain to change all the references to a new project — or at least it's not an option for such a meaningless constraint.</p> <p>I also exclude using a database other than <code>(default)</code>: I would lose the free tier quota, and the application code would have to be adapted to use another db.</p> <p>This also prevents me from being able to do a disaster recovery using backups created with the recently added <code>gcloud alpha firestore backups</code> feature.</p> <h2>Edit:</h2> <p>Trying to follow the <a href="https://github.com/GoogleCloudPlatform/appengine-python-standard" rel="nofollow noreferrer">documentation</a> I've been pointed to and also this <a href="https://cloud.google.com/appengine/docs/standard/python3/services/access#installing" rel="nofollow noreferrer">Installing the App Engine services SDK</a> guide, I tried to modify my existing Flask app so I 
could access <code>search</code> from a <code>flask shell</code> prompt.</p> <p>I added this to my <code>requirements.txt</code></p> <pre><code>appengine-python-standard&gt;=1.0.0 </code></pre> <p>Then, after creating a fresh new <code>virtualenv</code> including the above, I added the wsgi_app lines to my flask app:</p> <pre><code>from flask import Flask from google.appengine.api import wrap_wsgi_app app = Flask(__name__) app.wsgi_app = wrap_wsgi_app(app.wsgi_app) </code></pre> <p>And in <code>flask shell</code> I tried:</p> <pre><code>In [1]: from google.appengine.api import search In [2]: index = search.Index('general-index') In [3]: index.get_range(ids_only=True) --------------------------------------------------------------------------- RPCFailedError Traceback (most recent call last) Cell In[3], line 1 ----&gt; 1 index.get_range(ids_only=True) [...] RPCFailedError: Attempted RPC call without active security ticket </code></pre> <p>It seems I have to bring in also the <a href="https://cloud.google.com/appengine/docs/standard/tools/using-local-server?tab=python#set-up" rel="nofollow noreferrer">local development server</a></p> <pre><code>gcloud components install app-engine-python </code></pre> <p>which on my linux box can be done with:</p> <pre><code>You cannot perform this action because the Google Cloud CLI component manager is disabled for this installation. You can run the following command to achieve the same result for this installation: sudo apt-get install google-cloud-sdk-app-engine-python </code></pre> <p>which is bringing in Python2.7 as well</p> <pre><code>$ sudo apt-get install google-cloud-sdk-app-engine-python Reading package lists... Done Building dependency tree... Done Reading state information... 
Done The following additional packages will be installed: google-cloud-sdk libpython2.7-minimal libpython2.7-stdlib python2.7 python2.7-minimal Suggested packages: google-cloud-sdk-app-engine-java google-cloud-sdk-pubsub-emulator google-cloud-sdk-bigtable-emulator google-cloud-sdk-datastore-emulator kubectl python2.7-doc binfmt-support The following packages will be REMOVED: google-cloud-cli The following NEW packages will be installed: google-cloud-sdk google-cloud-sdk-app-engine-python libpython2.7-minimal libpython2.7-stdlib python2.7 python2.7-minimal 0 upgraded, 6 newly installed, 1 to remove and 30 not upgraded. Need to get 117 MB of archives. After this operation, 60,7 MB of additional disk space will be used. </code></pre> <p><strong>All this to delete 13,4kb worth of search index...</strong></p> <p>Also I suspect that running the <code>delete_index</code> routine in <code>dev_appserver.py</code> would eventually delete only <strong>local</strong> (inexisting) search indexes leaving the one in cloud I want to remove untouched.</p> <p>Can someone from Google shed a light on this please???</p>
<python><google-app-engine><google-cloud-datastore>
2024-02-17 09:22:33
2
12,525
neurino
78,011,332
11,748,924
Construct list of possible sentences from given alphabet quota and wordlist.txt
<p>I have a <code>wordlist.txt</code> file whose words are separated by newlines.</p> <p>Suppose I specify a quota for each letter, for example:</p> <pre><code>n: 1 e: 1 w: 1 b: 1 o: 2 k: 1 Remaining letter quota is 0. </code></pre> <p>How can I construct a sentence from the given letter quotas, which must all be spent down to zero, using only words defined in wordlist.txt?</p> <p>For example, given the letter quotas above, it would return &quot;new book&quot; or &quot;book new&quot;. Word order doesn't matter.</p> <p>Here &quot;new&quot; and &quot;book&quot; already exist in <code>wordlist.txt</code>.</p> <p>So the list of possible sentences might look like this:</p> <pre><code>new book book new bow neko neko bow </code></pre>
<python><text><anagram>
2024-02-17 07:52:41
1
1,252
Muhammad Ikhwan Perwira
78,011,008
8,760,028
How to return multiline sql queries in Python
<p>I want to run an SQL query on a server. The query is too long, and I am not able to figure out how it should be broken down.</p> <p>The query looks something like this:</p> <pre><code>return(&quot;usr/&lt;path&gt; -e 'select val1 from table_name where val2 = (select val3 f rom another_table where val = 'abc')'&quot;) </code></pre> <p>Also, I can't post the exact query as it's related to my work. So now the issue is: if I run a select query which fits on a single line of the return statement, then the value gets returned successfully. But in the case of this long query, I am getting syntax errors as it gets pushed to the next line. Also, the code has to be compatible with both Python versions 2 and 3. Can someone please help?</p>
<python>
2024-02-17 05:11:01
1
1,435
pranami
78,010,889
12,461,032
Pandarallel cannot apply two transformations
<p>I have a pandas data frame (with ~57 million rows of floats) that I want to undergo two transformations.</p> <p>These are the two functions to apply the transformations:</p> <pre><code>def apply_feature_aggregation(df, weights, scale, shift, cores): #This runs without a problem t1 = time.time() pandarallel.initialize(nb_workers=cores, progress_bar=False) new_df = df[['id']].copy() new_df['weight'] = df.parallel_apply(calculate_weight, axis=1, weights=weights, scale=scale, shift=shift) t2 = time.time() print(f'init weights {t2-t1}') return new_df def apply_weight_scaling(df, cores): #The apply section of it only works if I click run pandarallel.initialize(nb_workers=cores, progress_bar=False) # initialize(36) or initialize(os.cpu_count()-1) t2 = time.time() new_df, second_min, current_max = interval_mapping_preprocessing(df, 'weight') t3 = time.time() print(f'initial mapping preprocessing finished {t3-t2}') new_df['weight'] = new_df.parallel_apply(apply_linear_transformation, axis=1, second_min=second_min, current_max= current_max) #This is not run t4 = time.time() print(f'I calculated second weights {t4-t3}') #This is not printed </code></pre> <p>The problem is whenever I'm running my code on PyCharm by clicking execute, the two transformations are applied successfully. But whenever I try to run with nohup, although on <code>top</code> command I can see parallel workers twice, but the second run never ends.</p> <p>My question is how to run two subsequent transformations? I even tried to have the two transformations on the same wrapper function, but I encountered the same problem.</p> <p>This is the output I get in nohup:</p> <pre><code>INFO: Pandarallel will run on 36 workers. INFO: Pandarallel will use Memory file system to transfer data between the main process and workers. init weights 135.39842891693115 INFO: Pandarallel will run on 36 workers. INFO: Pandarallel will use Memory file system to transfer data between the main process and workers. 
initial mapping preprocessing finished 2.7095065116882324 </code></pre> <p>This is the output when I run it with PyCharm:</p> <pre><code>INFO: Pandarallel will run on 36 workers. INFO: Pandarallel will use Memory file system to transfer data between the main process and workers. init weights 143.19737672805786 INFO: Pandarallel will run on 36 workers. INFO: Pandarallel will use Memory file system to transfer data between the main process and workers. initial mapping preprocessing finished 2.6010115146636963 I calculated second weights 117.14078521728516 </code></pre> <p>Once the parallel executors of the second function finish with the nohup case, there is only one memory-intensive job and nothing else happens</p> <p>Thanks.</p>
<python><pandas><pandarallel>
2024-02-17 03:41:55
0
472
m0ss
78,010,857
2,200,963
How can Metaflow FlowSpec instance be wrapped in a context manager without the context manager getting called twice?
<p>I'm trying to use a context manager to open an <a href="https://sshtunnel.readthedocs.io/en/latest/#example-1" rel="nofollow noreferrer">SSH tunnel forwarder</a> when Metaflow FlowSpec subclass instances are run locally, but somehow instantiating a FlowSpec instance in the context manager calls the context manager twice causing the instantiation of the FlowSpec class to fail because the ssh tunnel is already in use. How can I prevent Metaflow from causing the context manager to get called twice?</p> <p>Running the following flow fails because two attempts to open the ssh tunnel are made despite the context manager only getting called once.</p> <h2>Failing Code</h2> <pre class="lang-py prettyprint-override"><code>from metaflow import FlowSpec from metaflow import step from my_pkg.library.databases import initialize_database from my_pkg.library.ssh import SshTunnel from my_pkg.settings.databases import DatabaseConfiguration class TunnelingPipelineExample(FlowSpec): @step def start(self): self.next(self.query_tables) @step def query_tables(self): db, cursor = initialize_database() cursor.execute(&quot;USE some_database;&quot;) cursor.execute(&quot;SELECT * FROM some_table;&quot;) for result in cursor.fetchall(): print(f&quot;RESULT: {result}&quot;) self.next(self.end) @step def end(self): print(f&quot;Completed pipeline!&quot;) if __name__ == &quot;__main__&quot;: with SshTunnel(DatabaseConfiguration()): TunnelingPipelineExample() </code></pre> <h2>Working Code</h2> <pre class="lang-py prettyprint-override"><code>from my_pkg.library.databases import initialize_database from my_pkg.library.ssh import SshTunnel from my_pkg.settings.databases import ApplicationDatabaseUsa if __name__ == &quot;__main__&quot;: with SshTunnel(DatabaseConfiguration()): db, cursor = initialize_database() cursor.execute(&quot;USE some_database;&quot;) cursor.execute(&quot;SELECT * FROM some_table;&quot;) for result in cursor.fetchall(): print(f&quot;RESULT: {result}&quot;) </code></pre> 
<p>The same code succeeds without Metaflow, so I assume Metaflow is somehow causing the context manager to get called twice, given that the error message says the tunnel is already in use.</p>
<python><ssh-tunnel><contextmanager><netflix-metaflow>
2024-02-17 03:25:12
0
764
datasmith
78,010,770
826,112
How do I pass only a multiprocessing.Array?
<p>When using multiprocessing, why does this, from the <a href="https://docs.python.org/3/library/multiprocessing.html#sharing-state-between-processes" rel="nofollow noreferrer">docs</a>, work...</p> <pre><code>from multiprocessing import Process, Value, Array def f(n, a): n.value = 3.1415927 for i in range(len(a)): a[i] = -a[i] if __name__ == '__main__': num = Value('d', 0.0) arr = Array('i', range(10)) p = Process(target=f, args=(num, arr)) p.start() p.join() print(num.value) print(arr[:]) </code></pre> <p>But this variation,</p> <pre><code>from multiprocessing import Process, Value, Array def f(a): for i in range(len(a)): a[i] = -a[i] if __name__ == '__main__': arr = Array('i', range(10)) p = Process(target=f, args=(arr)) p.start() p.join() print(arr[:]) </code></pre> <p>produces a TypeError?</p> <pre><code>TypeError: f() takes 1 positional argument but 10 were given </code></pre> <p>How do I pass just the array to the function?</p>
<python><python-3.x><python-multiprocessing>
2024-02-17 02:28:54
1
536
Andrew H
78,010,668
3,788,939
Docker fastapi unable to connect to mysql
<p>I'm new to docker and fastapi and I'm trying to connect fastapi to the mysql service, but it always says that it cannot connect to mysql in the logs. What am I missing here?</p> <p><strong>.env file</strong></p> <pre><code> API_PATH='api' API_VERSION='v1' FASTAPI_CONTAINER_NAME='app_fastapi' FASTAPI_PORT=8000 REDIS_CONTAINER_NAME='app_redis' REDIS_HOST='app_redis' REDIS_PORT=8001 MYSQL_CONTAINER_NAME='app_mysql' MYSQL_DATABASE_NAME='fastapi_db' MYSQL_ROOT_PASSWORD='P@ssw0rd' MYSQL_USER='fastapi_mysql_admin' MYSQL_PASSWORD='P@ssw0rd' MYSQL_HOST='app_mysql' MYSQL_PORT=3306 </code></pre> <p><strong>docker-compose file</strong></p> <pre><code> version: &quot;3.9&quot; # Specify a compatible Docker Compose version services: fastapi: build: context: ./fastapi # Build your FastAPI app dockerfile: Dockerfile # Use your app's Dockerfile container_name: ${FASTAPI_CONTAINER_NAME} ports: - &quot;${FASTAPI_PORT}:${FASTAPI_PORT}&quot; # Expose port 8000 for the FastAPI app volumes: - .:/app env_file: - .env depends_on: # Ensure both mysql and redis start first mysql: condition: service_healthy redis: condition: service_healthy restart: always tty: true networks: - backend mysql: build: context: ./mysql # Build from the current directory dockerfile: Dockerfile # Use the specific Dockerfile container_name: ${MYSQL_CONTAINER_NAME} ports: - &quot;${MYSQL_PORT}:${MYSQL_PORT}&quot; # Expose port 3306 for database access env_file: - .env healthcheck: test: [&quot;CMD&quot;, &quot;mysqladmin&quot;, &quot;ping&quot;, &quot;-h&quot;, &quot;localhost&quot;, &quot;-p${MYSQL_PASSWORD}&quot;] interval: 10s timeout: 5s retries: 3 start_period: 7s tty: true restart: always networks: - backend redis: build: context: ./redis # Build from the current directory dockerfile: Dockerfile # Use the specific Dockerfile container_name: ${REDIS_CONTAINER_NAME} ports: - &quot;${REDIS_PORT}:${REDIS_PORT}&quot; # Expose Redis port env_file: - .env healthcheck: test: [&quot;CMD&quot;, &quot;redis-cli&quot;,
&quot;ping&quot;] interval: 10s timeout: 5s retries: 3 start_period: 7s tty: true restart: always networks: - backend networks: backend: </code></pre> <p><strong>database.py</strong></p> <pre><code> import sqlalchemy from sqlalchemy import create_engine from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.exc import OperationalError from sqlalchemy.orm import sessionmaker from os import environ as env user = env['MYSQL_USER'] password = env['MYSQL_PASSWORD'] host = env['MYSQL_HOST'] port = env['MYSQL_PORT'] database = env['MYSQL_DATABASE_NAME'] DATABASE_URL = 'mysql+pymysql://{}@{}/{}:{}?charset=utf8'.format(user, host, database, port) engine = create_engine(DATABASE_URL) SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine) Base = declarative_base() def get_db(): db = SessionLocal() try: yield db finally: db.close() </code></pre> <p><strong>main.py</strong></p> <pre><code>from fastapi import FastAPI from os import environ as env from app.modules.todo import router as TodoRouter app = FastAPI() apiPath = '/' + env['API_PATH'] + '/' + env['API_VERSION'] # routers app.include_router(TodoRouter.router, prefix=apiPath + '/todo', tags=['Todo']) </code></pre> <p><strong>todo/router.py</strong></p> <pre><code>from fastapi import APIRouter, Depends from sqlalchemy.orm import Session from app.database import get_db from app.modules.todo.data_models import TodoRequestModel router = APIRouter() @router.post('/') def create(todo: TodoRequestModel, db: Session = Depends(get_db)): return TodoController.create(todo, db) </code></pre> <p><strong>On docker start</strong></p> <p><a href="https://i.sstatic.net/a2OZq.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/a2OZq.jpg" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/PME94.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PME94.jpg" alt="enter image description here" /></a></p> <p><strong>Docker container log when /post endpoint 
is hit</strong></p> <pre><code>- INFO: 172.31.0.1:37760 - &quot;POST /api/v1/todo/ HTTP/1.1&quot; 500 Internal Server Error - ERROR: Exception in ASGI application - Traceback (most recent call last): - File &quot;/usr/local/lib/python3.10/site-packages/pymysql/connections.py&quot;, line 796, in _connect - self._get_server_information() - File &quot;/usr/local/lib/python3.10/site-packages/pymysql/connections.py&quot;, line 994, in _get_server_information - self.server_charset = charset_by_id(lang).name - File &quot;/usr/local/lib/python3.10/site-packages/pymysql/charset.py&quot;, line 34, in by_id - return self._by_id[id] - KeyError: 255 - - During handling of the above exception, another exception occurred: - - Traceback (most recent call last): - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py&quot;, line 3250, in _wrap_pool_connect - return fn() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 310, in connect - return _ConnectionFairy._checkout(self) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 868, in _checkout - fairy = _ConnectionRecord.checkout(pool) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 476, in checkout - rec = pool._do_get() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/impl.py&quot;, line 145, in _do_get - with util.safe_reraise(): - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py&quot;, line 70, in __exit__ - compat.raise_( - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/util/compat.py&quot;, line 207, in raise_ - raise exception - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/impl.py&quot;, line 143, in _do_get - return self._create_connection() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 256, in _create_connection - return _ConnectionRecord(self) - File 
&quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 371, in __init__ - self.__connect() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 665, in __connect - with util.safe_reraise(): - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py&quot;, line 70, in __exit__ - compat.raise_( - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/util/compat.py&quot;, line 207, in raise_ - raise exception - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 661, in __connect - self.dbapi_connection = connection = pool._invoke_creator(self) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/create.py&quot;, line 590, in connect - return dialect.connect(*cargs, **cparams) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py&quot;, line 597, in connect - return self.dbapi.connect(*cargs, **cparams) - File &quot;/usr/local/lib/python3.10/site-packages/pymysql/__init__.py&quot;, line 88, in Connect - return Connection(*args, **kwargs) - File &quot;/usr/local/lib/python3.10/site-packages/pymysql/connections.py&quot;, line 634, in __init__ - self._connect() - File &quot;/usr/local/lib/python3.10/site-packages/pymysql/connections.py&quot;, line 817, in _connect - raise OperationalError( - pymysql.err.OperationalError: (2003, &quot;Can't connect to MySQL server on 'mysql' (255)&quot;) - - The above exception was the direct cause of the following exception: - - Traceback (most recent call last): - File &quot;/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py&quot;, line 404, in run_asgi - result = await app( # type: ignore[func-returns-value] - File &quot;/usr/local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py&quot;, line 84, in __call__ - return await self.app(scope, receive, send) - File 
&quot;/usr/local/lib/python3.10/site-packages/fastapi/applications.py&quot;, line 1054, in __call__ - await super().__call__(scope, receive, send) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/applications.py&quot;, line 123, in __call__ - await self.middleware_stack(scope, receive, send) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py&quot;, line 186, in __call__ - raise exc - File &quot;/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py&quot;, line 164, in __call__ - await self.app(scope, receive, _send) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py&quot;, line 62, in __call__ - await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py&quot;, line 64, in wrapped_app - raise exc - File &quot;/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py&quot;, line 53, in wrapped_app - await app(scope, receive, sender) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/routing.py&quot;, line 762, in __call__ - await self.middleware_stack(scope, receive, send) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/routing.py&quot;, line 782, in app - await route.handle(scope, receive, send) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/routing.py&quot;, line 297, in handle - await self.app(scope, receive, send) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/routing.py&quot;, line 77, in app - await wrap_app_handling_exceptions(app, request)(scope, receive, send) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py&quot;, line 64, in wrapped_app - raise exc - File &quot;/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py&quot;, line 53, in wrapped_app - await app(scope, receive, sender) - File 
&quot;/usr/local/lib/python3.10/site-packages/starlette/routing.py&quot;, line 72, in app - response = await func(request) - File &quot;/usr/local/lib/python3.10/site-packages/fastapi/routing.py&quot;, line 299, in app - raise e - File &quot;/usr/local/lib/python3.10/site-packages/fastapi/routing.py&quot;, line 294, in app - raw_response = await run_endpoint_function( - File &quot;/usr/local/lib/python3.10/site-packages/fastapi/routing.py&quot;, line 193, in run_endpoint_function - return await run_in_threadpool(dependant.call, **values) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/concurrency.py&quot;, line 40, in run_in_threadpool - return await anyio.to_thread.run_sync(func, *args) - File &quot;/usr/local/lib/python3.10/site-packages/anyio/to_thread.py&quot;, line 56, in run_sync - return await get_async_backend().run_sync_in_worker_thread( - File &quot;/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py&quot;, line 2134, in run_sync_in_worker_thread - return await future - File &quot;/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py&quot;, line 851, in run - result = context.run(func, *args) - File &quot;/app/fastapi/app/modules/todo/router.py&quot;, line 23, in create - return TodoController.create(todo, db) - File &quot;/app/fastapi/app/modules/todo/controller.py&quot;, line 17, in create - return TodoService.create(student, db) - File &quot;/app/fastapi/app/modules/todo/repository.py&quot;, line 62, in create - db.commit() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py&quot;, line 1431, in commit - self._transaction.commit(_to_root=self.future) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py&quot;, line 829, in commit - self._prepare_impl() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py&quot;, line 808, in _prepare_impl - self.session.flush() - File 
&quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py&quot;, line 3363, in flush - self._flush(objects) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py&quot;, line 3502, in _flush - with util.safe_reraise(): - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py&quot;, line 70, in __exit__ - compat.raise_( - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/util/compat.py&quot;, line 207, in raise_ - raise exception - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py&quot;, line 3463, in _flush - flush_context.execute() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py&quot;, line 456, in execute - rec.execute(self) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py&quot;, line 630, in execute - util.preloaded.orm_persistence.save_obj( - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py&quot;, line 211, in save_obj - for ( - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py&quot;, line 372, in _organize_states_for_save - for state, dict_, mapper, connection in _connections_for_states( - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py&quot;, line 1711, in _connections_for_states - connection = uowtransaction.transaction.connection(base_mapper) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py&quot;, line 626, in connection - return self._connection_for_bind(bind, execution_options) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py&quot;, line 735, in _connection_for_bind - conn = self._parent._connection_for_bind(bind, execution_options) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py&quot;, line 747, in _connection_for_bind - conn = bind.connect() - File 
&quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py&quot;, line 3204, in connect - return self._connection_cls(self, close_with_result=close_with_result) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py&quot;, line 96, in __init__ - else engine.raw_connection() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py&quot;, line 3283, in raw_connection - return self._wrap_pool_connect(self.pool.connect, _connection) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py&quot;, line 3253, in _wrap_pool_connect - Connection._handle_dbapi_exception_noconnection( - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py&quot;, line 2100, in _handle_dbapi_exception_noconnection - util.raise_( - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/util/compat.py&quot;, line 207, in raise_ - raise exception - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py&quot;, line 3250, in _wrap_pool_connect - return fn() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 310, in connect - return _ConnectionFairy._checkout(self) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 868, in _checkout - fairy = _ConnectionRecord.checkout(pool) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 476, in checkout - rec = pool._do_get() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/impl.py&quot;, line 145, in _do_get - with util.safe_reraise(): - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py&quot;, line 70, in __exit__ - compat.raise_( - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/util/compat.py&quot;, line 207, in raise_ - raise exception - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/impl.py&quot;, line 143, in _do_get - return 
self._create_connection() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 256, in _create_connection - return _ConnectionRecord(self) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 371, in __init__ - self.__connect() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 665, in __connect - with util.safe_reraise(): - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py&quot;, line 70, in __exit__ - compat.raise_( - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/util/compat.py&quot;, line 207, in raise_ - raise exception - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 661, in __connect - self.dbapi_connection = connection = pool._invoke_creator(self) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/create.py&quot;, line 590, in connect - return dialect.connect(*cargs, **cparams) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py&quot;, line 597, in connect - return self.dbapi.connect(*cargs, **cparams) - File &quot;/usr/local/lib/python3.10/site-packages/pymysql/__init__.py&quot;, line 88, in Connect - return Connection(*args, **kwargs) - File &quot;/usr/local/lib/python3.10/site-packages/pymysql/connections.py&quot;, line 634, in __init__ - self._connect() - File &quot;/usr/local/lib/python3.10/site-packages/pymysql/connections.py&quot;, line 817, in _connect - raise OperationalError( - sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, &quot;Can't connect to MySQL server on 'mysql' (255)&quot;) - (Background on this error at: https://sqlalche.me/e/14/e3q8) </code></pre> <p>I've tried adding a password in the database connection string but gives me a different error like the following</p> <pre><code>DATABASE_URL = &quot;mysql+pymysql://{}:{}@{}:{}/{}&quot;.format( user, password, host, port, 
database ) </code></pre> <p><strong>new log</strong></p> <pre><code>- INFO: 172.31.0.1:52082 - &quot;POST /api/v1/todo/ HTTP/1.1&quot; 500 Internal Server Error - ERROR: Exception in ASGI application - Traceback (most recent call last): - File &quot;/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py&quot;, line 404, in run_asgi - result = await app( # type: ignore[func-returns-value] - File &quot;/usr/local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py&quot;, line 84, in __call__ - return await self.app(scope, receive, send) - File &quot;/usr/local/lib/python3.10/site-packages/fastapi/applications.py&quot;, line 1054, in __call__ - await super().__call__(scope, receive, send) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/applications.py&quot;, line 123, in __call__ - await self.middleware_stack(scope, receive, send) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py&quot;, line 186, in __call__ - raise exc - File &quot;/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py&quot;, line 164, in __call__ - await self.app(scope, receive, _send) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py&quot;, line 62, in __call__ - await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py&quot;, line 64, in wrapped_app - raise exc - File &quot;/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py&quot;, line 53, in wrapped_app - await app(scope, receive, sender) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/routing.py&quot;, line 762, in __call__ - await self.middleware_stack(scope, receive, send) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/routing.py&quot;, line 782, in app - await route.handle(scope, receive, send) - File 
&quot;/usr/local/lib/python3.10/site-packages/starlette/routing.py&quot;, line 297, in handle - await self.app(scope, receive, send) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/routing.py&quot;, line 77, in app - await wrap_app_handling_exceptions(app, request)(scope, receive, send) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py&quot;, line 64, in wrapped_app - raise exc - File &quot;/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py&quot;, line 53, in wrapped_app - await app(scope, receive, sender) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/routing.py&quot;, line 72, in app - response = await func(request) - File &quot;/usr/local/lib/python3.10/site-packages/fastapi/routing.py&quot;, line 299, in app - raise e - File &quot;/usr/local/lib/python3.10/site-packages/fastapi/routing.py&quot;, line 294, in app - raw_response = await run_endpoint_function( - File &quot;/usr/local/lib/python3.10/site-packages/fastapi/routing.py&quot;, line 193, in run_endpoint_function - return await run_in_threadpool(dependant.call, **values) - File &quot;/usr/local/lib/python3.10/site-packages/starlette/concurrency.py&quot;, line 40, in run_in_threadpool - return await anyio.to_thread.run_sync(func, *args) - File &quot;/usr/local/lib/python3.10/site-packages/anyio/to_thread.py&quot;, line 56, in run_sync - return await get_async_backend().run_sync_in_worker_thread( - File &quot;/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py&quot;, line 2134, in run_sync_in_worker_thread - return await future - File &quot;/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py&quot;, line 851, in run - result = context.run(func, *args) - File &quot;/app/fastapi/app/modules/todo/router.py&quot;, line 23, in create - return TodoController.create(todo, db) - File &quot;/app/fastapi/app/modules/todo/controller.py&quot;, line 17, in create - return TodoService.create(student, db) 
- File &quot;/app/fastapi/app/modules/todo/repository.py&quot;, line 62, in create - db.commit() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py&quot;, line 1431, in commit - self._transaction.commit(_to_root=self.future) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py&quot;, line 829, in commit - self._prepare_impl() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py&quot;, line 808, in _prepare_impl - self.session.flush() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py&quot;, line 3363, in flush - self._flush(objects) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py&quot;, line 3502, in _flush - with util.safe_reraise(): - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py&quot;, line 70, in __exit__ - compat.raise_( - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/util/compat.py&quot;, line 207, in raise_ - raise exception - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py&quot;, line 3463, in _flush - flush_context.execute() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py&quot;, line 456, in execute - rec.execute(self) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py&quot;, line 630, in execute - util.preloaded.orm_persistence.save_obj( - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py&quot;, line 211, in save_obj - for ( - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py&quot;, line 372, in _organize_states_for_save - for state, dict_, mapper, connection in _connections_for_states( - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py&quot;, line 1711, in _connections_for_states - connection = uowtransaction.transaction.connection(base_mapper) - File 
&quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py&quot;, line 626, in connection - return self._connection_for_bind(bind, execution_options) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py&quot;, line 735, in _connection_for_bind - conn = self._parent._connection_for_bind(bind, execution_options) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py&quot;, line 747, in _connection_for_bind - conn = bind.connect() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py&quot;, line 3204, in connect - return self._connection_cls(self, close_with_result=close_with_result) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py&quot;, line 96, in __init__ - else engine.raw_connection() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py&quot;, line 3283, in raw_connection - return self._wrap_pool_connect(self.pool.connect, _connection) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py&quot;, line 3250, in _wrap_pool_connect - return fn() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 310, in connect - return _ConnectionFairy._checkout(self) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 868, in _checkout - fairy = _ConnectionRecord.checkout(pool) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 476, in checkout - rec = pool._do_get() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/impl.py&quot;, line 145, in _do_get - with util.safe_reraise(): - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py&quot;, line 70, in __exit__ - compat.raise_( - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/util/compat.py&quot;, line 207, in raise_ - raise exception - File 
&quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/impl.py&quot;, line 143, in _do_get - return self._create_connection() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 256, in _create_connection - return _ConnectionRecord(self) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 371, in __init__ - self.__connect() - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 665, in __connect - with util.safe_reraise(): - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py&quot;, line 70, in __exit__ - compat.raise_( - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/util/compat.py&quot;, line 207, in raise_ - raise exception - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py&quot;, line 661, in __connect - self.dbapi_connection = connection = pool._invoke_creator(self) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/create.py&quot;, line 590, in connect - return dialect.connect(*cargs, **cparams) - File &quot;/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py&quot;, line 597, in connect - return self.dbapi.connect(*cargs, **cparams) - File &quot;/usr/local/lib/python3.10/site-packages/pymysql/__init__.py&quot;, line 88, in Connect - return Connection(*args, **kwargs) - TypeError: Connection.__init__() got an unexpected keyword argument 'password' </code></pre>
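Two things stand out in `database.py` independently of any Docker networking issue: the format string assembles the URL components out of order (`user@host/database:port`), and the password contains an `@`, which must be percent-encoded or the URL parser treats everything after the first `@` as the host. (The `KeyError: 255` during the handshake is also commonly reported with an outdated PyMySQL that doesn't know MySQL 8's default `utf8mb4` collation, id 255; upgrading PyMySQL may help.) A sketch of building the URL, using the values from the question's `.env`:

```python
from urllib.parse import quote_plus

user = 'fastapi_mysql_admin'
password = 'P@ssw0rd'          # contains '@', so it must be URL-encoded
host, port = 'app_mysql', 3306
database = 'fastapi_db'

# SQLAlchemy URLs follow dialect+driver://user:password@host:port/dbname
DATABASE_URL = 'mysql+pymysql://{}:{}@{}:{}/{}'.format(
    user, quote_plus(password), host, port, database
)
print(DATABASE_URL)
# -> mysql+pymysql://fastapi_mysql_admin:P%40ssw0rd@app_mysql:3306/fastapi_db
```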
<python><mysql><docker><docker-compose><fastapi>
2024-02-17 01:29:10
1
2,108
PenAndPapers
78,010,560
9,582,542
Selenium driver with proxy won't connect to page: Not Secure error
<p>Using Selenium to connect to this page results in an Internal Server Error. Next to the link in the address bar it says &quot;Not Secure&quot;, and <strong>https</strong> is crossed out.</p> <pre><code>from zyte_smartproxy_selenium import webdriver browser = webdriver.Edge(spm_options={'spm_apikey': '&lt;MY API KEY HERE&gt;'}) browser.get('https://www.somedomain.com/accounts/login') </code></pre> <p>This is the result I get on the page:</p> <pre><code>Internal Server Error </code></pre>
<python><selenium-webdriver><proxy>
2024-02-17 00:26:41
1
690
Leo Torres
78,010,494
9,582,542
Selenium driver closed when used with a Proxy
<p>I am trying to use Selenium with a proxy to log into a company site. If I don't use the proxy, the driver and code work perfectly fine. When I use the proxy, the page opens and immediately closes.</p> <pre><code>from selenium import webdriver HOSTNAME = 'proxy.froxy.com' PORT = '9000' HEADLESS_PROXY = HOSTNAME + &quot;:&quot; + PORT webdriver.DesiredCapabilities.EDGE['proxy'] = { &quot;httpProxy&quot;: HEADLESS_PROXY, &quot;sslProxy&quot;: HEADLESS_PROXY, &quot;proxyType&quot;: &quot;MANUAL&quot;, } with webdriver.Edge() as driver: driver.get(&quot;https://www.SomeDomain.com/account/login&quot;) </code></pre> <p>I can keep the page open if I add this after <code>driver.get</code>:</p> <pre><code>while True: pass </code></pre> <p>But with that line added, the Python interpreter does not come back, so I cannot run any more commands unless I kill the process, at which point my Selenium session is destroyed and I can't continue anyway.</p>
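Part of the described behavior is the context-manager semantics of `with webdriver.Edge() as driver:` itself: Selenium's WebDriver calls `quit()` in `__exit__`, so the browser closes as soon as the indented block ends. This can be illustrated without Selenium at all (`FakeDriver` below is a stand-in, not Selenium's API):

```python
# FakeDriver only mimics the context-manager protocol that WebDriver
# implements; events records what happens and in what order.
events = []

class FakeDriver:
    def __enter__(self):
        events.append("opened")
        return self

    def get(self, url):
        events.append("got " + url)

    def __exit__(self, *exc):
        # Selenium's WebDriver calls quit() here, closing the browser
        # the moment the indented block ends.
        events.append("closed")

with FakeDriver() as driver:
    driver.get("https://www.SomeDomain.com/account/login")
# By this line the "browser" is already closed.
print(events)
# -> ['opened', 'got https://www.SomeDomain.com/account/login', 'closed']
```

Rather than `while True: pass`, keeping the follow-up commands inside the `with` block (or dropping the `with` and calling `driver.quit()` explicitly when finished) keeps the session alive.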
<python><selenium-webdriver><proxy>
2024-02-16 23:52:21
1
690
Leo Torres
78,010,354
1,783,732
Not able to detect a key press in Python on macOS without echo
<p>In Python, I want to detect when a key is pressed, without using non-standard third-party libs, on macOS. Here is my test code:</p> <pre><code>import select import sys import termios from time import sleep def is_key_pressed(): fd = sys.stdin.fileno() old_settings = termios.tcgetattr(fd) try: new_settings = termios.tcgetattr(fd) new_settings[3] = new_settings[3] &amp; ~(termios.ECHO | termios.ICANON) termios.tcsetattr(fd, termios.TCSADRAIN, new_settings) ready, _, _ = select.select([sys.stdin], [], [], 0) return sys.stdin in ready finally: termios.tcsetattr(fd, termios.TCSAFLUSH, old_settings) def main(): print(&quot;Testing select:&quot;) sleep_time = 1 / 60 while True: sleep(sleep_time) if is_key_pressed(): break if __name__ == &quot;__main__&quot;: main() </code></pre> <p>It does detect the key press. But there is a problem: the input char is echoed. I tried to disable the echo by clearing <code>termios.ECHO</code>, but it doesn't seem to work.</p> <p>Here is the output of the test on macOS:</p> <pre><code>$ python test.py Testing select: 1% </code></pre> <p>(I pressed the key '1' during the test.) A second question: why is there a <code>%</code> shown at the end?</p> <p>What I want: no echo of '1', nor '%'.</p>
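A likely cause: `ECHO` is only cleared for the brief moment inside `is_key_pressed`, so the key is usually typed while the restored old settings (echo on) are in effect, and the character is never actually read. The trailing `%` is probably zsh's marker for output that did not end in a newline. A sketch that holds cbreak mode (no echo, no line buffering) for the whole wait and consumes the key; it is only defined here, since calling it needs an interactive TTY:

```python
import select
import sys
import termios
import tty

def wait_for_key(poll_interval=1 / 60):
    """Block until a key is pressed; return it without echoing it."""
    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)
    try:
        # cbreak mode clears both ICANON and ECHO for the whole wait,
        # instead of toggling the terminal settings on every poll.
        tty.setcbreak(fd)
        while True:
            ready, _, _ = select.select([sys.stdin], [], [], poll_interval)
            if ready:
                # Reading the character consumes it, so nothing is left
                # in the buffer to be echoed once settings are restored.
                return sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
```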
<python><macos><terminal>
2024-02-16 22:56:29
2
1,909
user1783732
78,010,311
1,132,544
Why is the pandas package unknown in Jupyter?
<p>I've installed Jupyter with <code>brew install jupyterlab</code> on my M1 Mac. I started <code>jupyter lab</code> and created a new notebook. As the kernel I've chosen the only available one, IPython3. I then tried to import pandas and got an unknown-module error. Afterwards I tried <code>!pip install pandas</code> and was told it's already installed.</p> <p>Executing <code>python3 --version</code> outputs <code>3.11.3</code>, while <code>print(sys.version)</code> gives <code>3.12.2</code>.</p> <p>I have no idea what the reason is or how to solve this issue.</p> <p>BR</p>
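The version mismatch (3.11.3 on the command line vs 3.12.2 in the kernel) suggests that `!pip` and the kernel point at different interpreters, so pandas was installed into an environment the kernel never sees. Inside a cell you can check, and target, the kernel's own interpreter:

```python
import sys

# The interpreter the notebook kernel is actually running.
print(sys.executable)

# In a notebook cell, prefer the %pip magic, which installs into the
# kernel's environment rather than whatever `pip` is first on PATH:
#   %pip install pandas
# or, equivalently, target the kernel's interpreter explicitly:
#   !{sys.executable} -m pip install pandas
```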
<python><jupyter-notebook><jupyter><ipython>
2024-02-16 22:39:42
0
2,707
Gerrit
78,010,221
3,261,292
Encoding issue with parsing the same text using lxml.etree
<p>I am parsing an HTML document using the lxml.etree library. I am facing a weird issue: when I parse the exact same document and retrieve the content using a different XPath, the encoding of the retrieved text differs.</p> <p>Here is a reproducible example:</p> <pre><code>from lxml import etree from io import StringIO def parse_html(html_, xpath): html_parser = etree.HTMLParser() listing_page_parsed = etree.parse(StringIO(html_), html_parser).xpath(xpath) listing_page_parsed = [etree.tostring(item, encoding='unicode') for item in listing_page_parsed] return listing_page_parsed listing_page = &quot;&quot;&quot;&lt;div class=&quot;positions-container&quot; style=&quot;position:_|_relative;_|_height:_|_4309.41px;&quot; a=&quot;a&quot;&gt; &lt;div class=&quot;item-container_|_software_development_and_architecture_|_remote_work_|_helsinki_|_jyväskylä_|_lahti_|_oulu_|_rauma_|_tampere_|_turku_|_vaasa&quot; style=&quot;position:_|_absolute;_|_left:_|_0px;_|_top:_|_0px;&quot; a=&quot;a&quot;&gt; &lt;/div&gt; &lt;/div&gt;&quot;&quot;&quot; xpath_1 = &quot;//*&quot; xpath_2 = '//div[@class=&quot;positions-container&quot;]//div[contains(@class,&quot;item-container&quot;)]' result_1 = parse_html(listing_page, xpath_1)[0] print(&quot;Parsing using xpath_1:&quot;, f&quot;...{result_1[190:220]}...&quot;) result_2 = parse_html(listing_page, xpath_2)[0] print(&quot;Parsing using xpath_2:&quot;, f&quot;...{result_2[85:125]}...&quot;) </code></pre> <p>Outputs:</p> <pre><code>Parsing using xpath_1: ...lsinki_|_jyväskylä_|_lahti_|_o... Parsing using xpath_2: ...lsinki_|_jyv&amp;#xE4;skyl&amp;#xE4;_|_lahti_|_o... </code></pre>
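Whatever triggers the difference between the two XPath calls, the `&#xE4;` form itself is what a serializer produces when it writes non-ASCII characters under an ASCII-safe target encoding, while `encoding='unicode'` normally keeps the characters literal. That half is easy to demonstrate without lxml, using the stdlib `xml.etree.ElementTree` (which emits decimal rather than hex references):

```python
import xml.etree.ElementTree as ET

el = ET.fromstring('<div class="jyväskylä"/>')

# Default serialization targets US-ASCII, so non-ASCII characters in
# attribute values become numeric character references:
print(ET.tostring(el))                      # bytes containing &#228;

# Requesting a str result keeps the characters literal:
print(ET.tostring(el, encoding='unicode'))  # str containing ä
```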
<python><html><character-encoding><lxml><elementtree>
2024-02-16 22:05:52
0
5,527
Minions
78,010,154
6,466,366
OSError: [WinError 127] after gdal installation
<p>I have just finished installing gdal 3.6 on Windows, and it looks like my Django project is getting the file correctly. But when I run Django's <code>manage.py check</code>, it gives me the following output:</p> <pre><code>File &quot;C:\users\rodri\dev\mymap\.mymap\Lib\site-packages\django\contrib\gis\gdal\libgdal.py&quot;, line 69, in &lt;module&gt; lgdal = CDLL(lib_path) ^^^^^^^^^^^^^^ File &quot;C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.752.0_x64__qbz5n2kfra8p0\Lib\ctypes\__init__.py&quot;, line 379, in __init__ self._handle = _dlopen(self._name, mode) ^^^^^^^^^^^^^^^^^^^^^^^^^ OSError: [WinError 127] Não foi possível encontrar o procedimento especificado </code></pre> <p>(The Portuguese error message translates to &quot;The specified procedure could not be found&quot;.)</p> <p>Has anyone already gotten through this, or does anyone know how to fix it?</p> <p>-- Edited -- It seems a number of libs are missing. OSGeo4W is properly installed. So what could be missing?</p> <p>Full log:</p> <pre><code>... File &quot;C:\Users\rodri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\django\contrib\admin\apps.py&quot;, line 27, in ready self.module.autodiscover() File &quot;C:\Users\rodri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\django\contrib\admin\__init__.py&quot;, line 50, in autodiscover autodiscover_modules(&quot;admin&quot;, register_to=site) File &quot;C:\Users\rodri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\django\utils\module_loading.py&quot;, line 58, in autodiscover_modules import_module(&quot;%s.%s&quot; % (app_config.name, module_to_search)) File &quot;C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.752.0_x64__qbz5n2kfra8p0\Lib\importlib\__init__.py&quot;, line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1387, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1360, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1331, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 935, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 995, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 488, in _call_with_frames_removed File &quot;C:\Users\rodri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\django\contrib\gis\admin\__init__.py&quot;, line 14, in &lt;module&gt; from django.contrib.gis.admin.options import GeoModelAdmin, GISModelAdmin, OSMGeoAdmin File &quot;C:\Users\rodri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\django\contrib\gis\admin\options.py&quot;, line 4, in &lt;module&gt; from django.contrib.gis.admin.widgets import OpenLayersWidget File &quot;C:\Users\rodri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\django\contrib\gis\admin\widgets.py&quot;, line 5, in &lt;module&gt; from django.contrib.gis.gdal import GDALException File &quot;C:\Users\rodri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\django\contrib\gis\gdal\__init__.py&quot;, line 28, in &lt;module&gt; from django.contrib.gis.gdal.datasource import DataSource File &quot;C:\Users\rodri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\django\contrib\gis\gdal\datasource.py&quot;, line 40, in &lt;module&gt; from django.contrib.gis.gdal.driver import Driver File 
&quot;C:\Users\rodri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\django\contrib\gis\gdal\driver.py&quot;, line 5, in &lt;module&gt; from django.contrib.gis.gdal.prototypes import ds as vcapi File &quot;C:\Users\rodri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\django\contrib\gis\gdal\prototypes\ds.py&quot;, line 9, in &lt;module&gt; from django.contrib.gis.gdal.libgdal import lgdal File &quot;C:\Users\rodri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\django\contrib\gis\gdal\libgdal.py&quot;, line 71, in &lt;module&gt; lgdal = CDLL(lib_path) ^^^^^^^^^^^^^^ File &quot;C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.752.0_x64__qbz5n2kfra8p0\Lib\ctypes\__init__.py&quot;, line 379, in __init__ self._handle = _dlopen(self._name, mode) ^^^^^^^^^^^^^^^^^^^^^^^^^ OSError: [WinError 127] Não foi possível encontrar o procedimento especificado </code></pre>
<python><django><pip><gdal><geodjango>
2024-02-16 21:49:12
1
656
rdrgtec
78,010,048
11,665,178
How to make a successful http request in Flutter web from a Python Cloud Function?
<p>I have a very basic Python Cloud Function (with CORS enabled) that looks like:</p> <pre><code>@functions_framework.http def function_with_cors_enabled(event): method = event.method endpoint = event.path route_key = method + &quot; &quot; + endpoint logging.info(f&quot;Route key : {route_key}&quot;) if method == &quot;OPTIONS&quot;: # Allows GET requests from any origin with the Content-Type # header and caches preflight response for an 3600s headers = { 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Methods': 'POST, GET', 'Access-Control-Allow-Headers': 'Content-Type, X-AppCheck', 'Access-Control-Max-Age': '3600' } return '', 204, headers # Set CORS headers for the main request headers = { 'Access-Control-Allow-Origin': '*', } if route_key == &quot;GET /test&quot;: return test_call(), 200, headers return {&quot;error&quot;: &quot;not allowed&quot;}, 401 def test_call(): logging.info(&quot;Test !&quot;) return {&quot;message&quot;, &quot;success&quot;} </code></pre> <p>I have tried all combinations of return:</p> <ul> <li><code>return test_call(), 200, headers</code> to match the <a href="https://cloud.google.com/functions/docs/writing/write-http-functions#cors" rel="nofollow noreferrer">documentation</a></li> <li><code>return test_call(), headers</code> with <code>test_call()</code> returning the status code in addition to the <code>dict</code></li> <li><code>return test_call()</code> without <code>headers</code></li> <li>And so on...</li> </ul> <p>When I check my Cloud Logging logs, I do see everything running successfully in Python, including the <code>Test !</code> log with return status code 200.</p> <p>In Flutter web I see an undefined error:</p> <pre><code>ClientException: XMLHttpRequest error., uri=https://mycloud_function_endpoint </code></pre> <p>How do I make a successful HTTP request in Flutter web from a Python Cloud Function?</p>
<python><flutter><google-cloud-functions>
2024-02-16 21:18:33
1
2,975
Tom3652
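Two details in the function above are worth checking (hedged guesses from reading the snippet, not verified against the deployed function): `{"message", "success"}` uses a comma instead of a colon, so it builds a Python set — which is not JSON-serializable and will make the response fail server-side — rather than a dict; and the 401 branch returns without the CORS headers, so the browser surfaces only a generic XMLHttpRequest error. A stdlib sketch of the set-vs-dict distinction:

```python
import json

as_set = {"message", "success"}    # comma: a set literal
as_dict = {"message": "success"}   # colon: a dict literal

# A dict serializes cleanly to JSON...
ok = json.dumps(as_dict)

# ...while a set raises TypeError, which inside a web framework typically
# becomes a 500 response with no CORS headers attached.
try:
    json.dumps(as_set)
    set_serializable = True
except TypeError:
    set_serializable = False
```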
78,009,928
1,847,702
Use the google.cloud Python library inside GitHub Actions
<p>I need to use the Google Cloud SDK from Python. The code works well in my local environment.</p> <pre class="lang-py prettyprint-override"><code>from google.cloud import translate client = translate.TranslationServiceClient() response = client.translate_text( request={ &quot;parent&quot;: projectId, &quot;contents&quot;: [text], &quot;mime_type&quot;: mime_type, &quot;source_language_code&quot;: &quot;en-US&quot;, &quot;target_language_code&quot;: &quot;es&quot;, } ) </code></pre> <p>But I'm not sure how to integrate it inside a GitHub Action.</p> <pre class="lang-yaml prettyprint-override"><code>jobs: update-dep: permissions: contents: &quot;read&quot; id-token: &quot;write&quot; runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - uses: &quot;google-github-actions/auth@v2&quot; name: &quot;Authenticate to Google Cloud&quot; with: project_id: &quot;project-12345&quot; workload_identity_provider: &quot;projects/12345/locations/global/workloadIdentityPools/github/providers/project&quot; - name: PythonScript run: | python ./script.py </code></pre> <p>Looks like <code>google-github-actions/auth@v2</code> is working fine; the output is:</p> <p><code>Created credentials file at &quot;/home/runner/foo/bar/gha-creds-b8701ebd3eb851cb.json&quot;</code></p> <p>But I don't know if I need to do something with that JSON file or if I need to add additional code to my Python script.</p> <p>The output from the <code>translate_text</code> call is:</p> <pre><code> raise exceptions.from_grpc_error(exc) from exc google.api_core.exceptions.Unknown: None Authentication backend unknown error. </code></pre> <p>Any hint?</p>
<python><google-cloud-platform><github-actions>
2024-02-16 20:44:12
1
833
Carlos Garces
78,009,785
1,616,528
GitHub Actions linkcheck not passing due to Stack Overflow links
<p>In our documentation, compiled through Sphinx, we have a few links to Stack Overflow answers (to acknowledge the good suggestions found there). We run a link check Action to verify that we don't have broken links, and today all Stack Overflow links started being reported broken, with errors like:</p> <pre><code>Warning, treated as error: /home/runner/work/stingray/stingray/docs/api.rst:27:broken link: http://stackoverflow.com/questions/4494404/find-large-number-of-consecutive-values-fulfilling-condition-in-a-numpy-array (403 Client Error: Forbidden for url: https://stackoverflow.com/questions/4494404/find-large-number-of-consecutive-values-fulfilling-condition-in-a-numpy-array) ( api: line 27) linkcheck: exit 2 (63.54 seconds) /home/runner/work/stingray/stingray/docs&gt; sphinx-build -W -b linkcheck . </code></pre> <p>But I verified by hand that these links are not broken. Is Stack Overflow limiting links somehow? Is there a way to put links to SO in our documentation?</p>
<python><github><github-actions><documentation><broken-links>
2024-02-16 20:12:00
0
329
matteo
78,009,635
13,086,128
Selecting a single column from a polars DataFrame gives a DataFrame?
<p>Does selecting a single column from a polars DataFrame using <strong><code>select</code></strong> give a DataFrame? If so how to get the series?</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame( { &quot;A&quot;: [1, 2, 3], &quot;B&quot;: [6, 7, 8], &quot;C&quot;: [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;], } ) df.select(&quot;A&quot;) </code></pre> <p>Output</p> <pre><code>shape: (3, 1) ┌─────┐ │ A │ │ --- │ │ i64 │ ╞═════╡ │ 1 │ │ 2 │ │ 3 │ └─────┘ </code></pre>
<python><dataframe><python-polars>
2024-02-16 19:35:37
1
30,560
Talha Tayyab
78,009,621
1,961,574
run_in_executor causes a TimeoutError that was never retrieved
<p>I've created a program that processes a stream of data coming from a device and writes it to a websocket for consumption by a web app. The library I've written to read from said device and <code>yield</code> its computed values is a synchronous (ergo blocking) library. I don't want to re-write it in async, so I am using the asyncio <code>run_in_executor</code> function to run it in a thread (to be precise: I'm using tornado because the program will also accept web requests).</p> <p>While the code works, I get frequent errors saying that Future exception was never retrieved. This exception is a Timeout error related to the code for running the blocking function in an executor (error below code block).</p> <p><strong>Note</strong> that I could not properly run the function in the executor without setting the asyncio event policy to <code>tornado.platform.asyncio.AnyThreadEventLoopPolicy()</code>. If I do not do that, I am constantly getting the error that &quot;there is no current event loop in ThreadExecutor&quot;, and I've not found a way around it.</p> <pre class="lang-py prettyprint-override"><code>class ClientHandler(tornado.websocket.WebSocketHandler): clients = set() def __init__(self, *args, **kwargs): self.config = kwargs.pop(&quot;config&quot;) super().__init__(*args, **kwargs) self.started = False def on_message(self, message): if message == &quot;data&quot;: WebClientStreamHandler.clients.add(self) self.start_client() def write_data(self, message: str): for client in WebClientStreamHandler.clients: client.write_message(message) def start_client(self): if self.started: return asyncio.set_event_loop_policy(tornado.platform.asyncio.AnyThreadEventLoopPolicy()) sync_func = partial(run_client_forever, self.config) loop = tornado.ioloop.IOLoop.current() loop.run_in_executor(None, sync_func) self.started = True </code></pre> <p>ERROR (abridged):</p> <pre><code>ERROR:asyncio:Future exception was never retrieved future: &lt;Future finished 
exception=TimeoutError('timed out') created at /usr/lib/python3.11/asyncio/base_events.py:427&gt; Traceback: ... File &quot;/path/src/cps_demo/web_stream_handler.py&quot;, line 118, in start_client loop.run_in_executor(None, sync_func) File &quot;/path/venv/lib/python3.11/site-packages/tornado/platform/asyncio.py&quot;, line 266, in run_in_executor return self.asyncio_loop.run_in_executor(executor, func, *args) File &quot;/usr/lib/python3.11/asyncio/base_events.py&quot;, line 828, in run_in_executor return futures.wrap_future( File &quot;/usr/lib/python3.11/asyncio/futures.py&quot;, line 417, in wrap_future new_future = loop.create_future() File &quot;/usr/lib/python3.11/asyncio/base_events.py&quot;, line 427, in create_future return futures.Future(loop=self) TimeoutError: timed out </code></pre>
<python><python-asyncio><tornado>
2024-02-16 19:32:47
1
2,712
bluppfisk
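The "Future exception was never retrieved" warning itself is separate from the timeout: `run_in_executor` returns a future, and if nothing ever awaits it or reads its exception, asyncio complains when the future is garbage-collected. A stdlib-only sketch (hedged: plain `asyncio` rather than Tornado, with a stand-in for the blocking client function) showing both retrieval patterns:

```python
import asyncio

def blocking_client(config):
    # Stand-in for the synchronous, blocking library that reads the device.
    if config.get("fail"):
        raise TimeoutError("timed out")
    return "stream finished"

def log_outcome(fut):
    # Reading fut.exception() is what counts as "retrieving" it, which
    # silences asyncio's "Future exception was never retrieved" warning.
    exc = fut.exception()
    if exc is not None:
        print(f"background client died: {exc!r}")

async def main():
    loop = asyncio.get_running_loop()

    # Pattern 1: await the future -- the result/exception is retrieved.
    result = await loop.run_in_executor(None, blocking_client, {"fail": False})

    # Pattern 2: fire-and-forget, but keep a reference and attach a callback
    # that retrieves the exception instead of dropping the future.
    fut = loop.run_in_executor(None, blocking_client, {"fail": True})
    fut.add_done_callback(log_outcome)
    await asyncio.wait({fut})  # wait for completion without re-raising
    return result, fut

result, fut = asyncio.run(main())
```

In the handler above, keeping the future returned by `loop.run_in_executor(None, sync_func)` and attaching such a callback would at least surface where the underlying `TimeoutError` comes from.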
78,009,526
2,107,632
pandas pythonic way of handling dependency on a previous row
<p>I'm facing (yet another time) a situation in <code>pandas</code> where, in a given column, a value in a row <code>n+1</code> depends on a value from the row <code>n</code>. This is quite common and easily handled in Excel, as in the picture attached, where the value in P8 builds on the value from P7.</p> <p>My question is what is the pythonic way of implementing in <code>pandas</code> something like in the attached example? For as long as components of what needs to be computed all sit in one row (e.g., all corresponding to one day), the vectorization offered by <code>pandas</code> works great, but what to do if something from a previous row (e.g., yesterday) contributes to the quantity we compute in this row (e.g. today)?</p> <p><a href="https://i.sstatic.net/OmnK7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OmnK7.png" alt="Screenshot of Excel showing calculation that depends on the previous row value: P8 &quot;=P7*(1+O8)&quot;" /></a></p>
<python><pandas><dataframe>
2024-02-16 19:09:46
2
4,998
Simon Righley
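For the specific Excel pattern shown (`P8 = P7*(1+O8)`), the recurrence unrolls into a cumulative product, which is vectorizable: in pandas that would be roughly `df["P"] = start * (1 + df["O"]).cumprod()`. The same idea in stdlib-only Python (hedged: the sample rates and seed value are made up) with `itertools.accumulate` (the `initial=` keyword needs Python 3.8+):

```python
import itertools
import operator

rates = [0.02, -0.01, 0.03]   # column O: per-row growth rates (made up)
start = 100.0                 # the seed value from the row above the data

# P[n] = P[n-1] * (1 + O[n])  ==>  P[n] = start * prod(1 + O[i] for i <= n)
factors = (1.0 + r for r in rates)
values = list(itertools.accumulate(factors, operator.mul, initial=start))[1:]
print(values)
```

For recurrences that do not reduce to a running sum or product, the usual escape hatches are an explicit Python loop over `itertuples()`, `numpy.frompyfunc(...).accumulate`, or numba — full row-by-row vectorization is genuinely not available in those cases.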
78,009,460
5,392,289
Add hyperlink to footer that directs to Table of Contents using ReportLab
<p>The following ReportLab code generates a starting template that I am (almost) pleased with. The missing piece of the puzzle is to add clickable &quot;Back To Contents&quot; text at the left hand side of the footer, on every page.</p> <p>Any footer wizards out there?</p> <pre><code>from datetime import datetime from reportlab.pdfgen import canvas from reportlab.platypus.doctemplate import PageTemplate, BaseDocTemplate from reportlab.platypus.frames import Frame from reportlab.lib.units import cm from reportlab.platypus import Paragraph, Spacer, PageBreak from reportlab.platypus.tableofcontents import TableOfContents from reportlab.lib.styles import getSampleStyleSheet, ParagraphStyle from reportlab.lib import colors from reportlab.lib.pagesizes import LETTER class FooterCanvas(canvas.Canvas): def __init__(self, *args, **kwargs): canvas.Canvas.__init__(self, *args, **kwargs) self.pages = [] def showPage(self): self.pages.append(dict(self.__dict__)) self._startPage() def save(self): page_count = len(self.pages) for page in self.pages: self.__dict__.update(page) if (self._pageNumber &gt; 1): self.draw_canvas(page_count) canvas.Canvas.showPage(self) canvas.Canvas.save(self) def draw_canvas(self, page_count): page = f&quot;Page {self._pageNumber} of {page_count}&quot; x = 128 self.saveState() self.setStrokeColorRGB(0, 0, 0) self.setLineWidth(0.5) self.line(66, 78, LETTER[0] - 66, 78) self.setFont('Helvetica', 10) self.drawString(LETTER[0]-x, 65, page) self.restoreState() class MyDocTemplate(BaseDocTemplate): def __init__(self, filename, **kw): self.allowSplitting = 0 BaseDocTemplate.__init__(self, filename, **kw) template = PageTemplate('normal', [Frame(2.5*cm, 2.5*cm, 15*cm, 25*cm, id='F1')]) self.addPageTemplates(template) def afterFlowable(self, flowable): &quot;Registers TOC entries.&quot; if flowable.__class__.__name__ == 'Paragraph': text = flowable.getPlainText() style = flowable.style.name if style == 'Heading1': self.notify('TOCEntry', (0, text, self.page)) 
if style == 'Heading2': key = 'h2-%s' % self.seq.nextf('heading2') self.canv.bookmarkPage(key) self.notify('TOCEntry', (1, text, self.page, key)) if style == 'Heading3': key = 'h3-%s' % self.seq.nextf('heading3') self.canv.bookmarkPage(key) self.notify('TOCEntry', (2, text, self.page, key)) timestamp_now = datetime.now().strftime(&quot;%Y-%m-%dT%H_%M_%S&quot;) date_now = datetime.now().strftime(&quot;%d.%m.%Y&quot;) doc = MyDocTemplate( filename=&quot;mydoc.pdf&quot;, author=&quot;Person A&quot; ) # Define styles styles = getSampleStyleSheet() content = [] # Add title to the PDF title = Paragraph(text=&quot;My Document&quot;, style=styles[&quot;Title&quot;]) content.append(title) content.append(Spacer(width=0 * cm, height=15 * cm)) content.append(Paragraph(text=&quot;Authors&quot;, style=styles['h4'])) for author in [&quot;Person A&quot;, &quot;Person B&quot;, &quot;Person C&quot;]: author_para = Paragraph(author, styles[&quot;Italic&quot;]) content.append(author_para) content.append(Spacer(width=0 * cm, height=4 * cm)) date_para = Paragraph( text=f&quot;Document Compiled on {date_now}&quot;, style=ParagraphStyle(name='centered_date', parent=styles[&quot;h4&quot;], alignment=1) ) content.append(date_para) # Add a table of contents content.append(PageBreak()) toc = TableOfContents() toc.levelStyles = [ toc.getLevelStyle(0), ParagraphStyle(name='blue_hyperlinks', parent=toc.getLevelStyle(1), textColor=colors.blue), ParagraphStyle(name='blue_hyperlinks', parent=toc.getLevelStyle(2), textColor=colors.blue) ] content.append(toc) content.append(PageBreak()) content.append(Paragraph(text=&quot;This is an h1 heading&quot;, style=styles['Heading1'])) content.append(Paragraph(text=&quot;This is an h2 heading&quot;, style=styles['Heading2'])) content.append(PageBreak()) content.append(Paragraph(text=&quot;This is an h3 heading&quot;, style=styles['Heading3'])) content.append(PageBreak()) content.append(Paragraph(text=&quot;This is an h2 heading&quot;, 
style=styles['Heading2'])) content.append(Paragraph(text=&quot;This is an h3 heading&quot;, style=styles['Heading3'])) doc.multiBuild(story=content, canvasmaker=FooterCanvas) </code></pre>
<python><pdf><footer><reportlab>
2024-02-16 18:56:28
0
1,305
Oliver Angelil
78,008,875
4,734,563
Jupyter AI: There seems to be a problem with the Chat backend
<p>When I install <a href="https://github.com/jupyterlab/jupyter-ai" rel="nofollow noreferrer">Jupyter AI</a>, the <code>%%ai</code> magic command is working in the code cell, but the Chat user interface is not working and it shows the following message</p> <blockquote> <p>There seems to be a problem with the Chat backend, please look at the JupyterLab server logs or contact your administrator to correct this problem.</p> </blockquote> <p>Here are the errors at the jupyterlab terminal:</p> <pre><code>[I 2024-02-19 17:33:13.711 AiExtension] Configured provider allowlist: None [I 2024-02-19 17:33:13.711 AiExtension] Configured provider blocklist: None [I 2024-02-19 17:33:13.711 AiExtension] Configured model allowlist: None [I 2024-02-19 17:33:13.711 AiExtension] Configured model blocklist: None [I 2024-02-19 17:33:13.711 AiExtension] Configured model parameters: {} [I 2024-02-19 17:33:13.739 AiExtension] Registered model provider `ai21`. [I 2024-02-19 17:33:13.739 AiExtension] Registered model provider `bedrock`. [I 2024-02-19 17:33:13.739 AiExtension] Registered model provider `bedrock-chat`. [I 2024-02-19 17:33:13.740 AiExtension] Registered model provider `anthropic`. [I 2024-02-19 17:33:13.740 AiExtension] Registered model provider `anthropic-chat`. [I 2024-02-19 17:33:13.740 AiExtension] Registered model provider `azure-chat-openai`. [I 2024-02-19 17:33:13.740 AiExtension] Registered model provider `cohere`. [I 2024-02-19 17:33:13.741 AiExtension] Registered model provider `gpt4all`. [I 2024-02-19 17:33:13.741 AiExtension] Registered model provider `huggingface_hub`. [W 2024-02-19 17:33:13.742 AiExtension] Unable to load model provider `nvidia-chat`. Please install the `langchain_nvidia_ai_endpoints` package. [I 2024-02-19 17:33:13.742 AiExtension] Registered model provider `openai`. [I 2024-02-19 17:33:13.742 AiExtension] Registered model provider `openai-chat`. [I 2024-02-19 17:33:13.742 AiExtension] Registered model provider `qianfan`. 
[I 2024-02-19 17:33:13.742 AiExtension] Registered model provider `sagemaker-endpoint`. [I 2024-02-19 17:33:13.758 AiExtension] Registered embeddings model provider `bedrock`. [I 2024-02-19 17:33:13.758 AiExtension] Registered embeddings model provider `cohere`. [I 2024-02-19 17:33:13.758 AiExtension] Registered embeddings model provider `gpt4all`. [I 2024-02-19 17:33:13.758 AiExtension] Registered embeddings model provider `huggingface_hub`. [I 2024-02-19 17:33:13.758 AiExtension] Registered embeddings model provider `openai`. [I 2024-02-19 17:33:13.758 AiExtension] Registered embeddings model provider `qianfan`. [I 2024-02-19 17:33:13.774 AiExtension] Registered providers. [I 2024-02-19 17:33:13.774 AiExtension] Registered jupyter_ai server extension [W 2024-02-19 17:33:13.789 ServerApp] jupyter_ai | extension failed loading with message: ValidationError(model='CohereEmbeddingsProvider', errors=[{'loc': ('__root__',), 'msg': 'Could not import cohere python package. Please install it with `pip install cohere`.', 'type': 'value_error'}]) Traceback (most recent call last): File &quot;C:\Users\user\AppData\Local\miniconda3\Lib\site-packages\jupyter_server\extension\manager.py&quot;, line 359, in load_extension extension.load_all_points(self.serverapp) File &quot;C:\Users\user\AppData\Local\miniconda3\Lib\site-packages\jupyter_server\extension\manager.py&quot;, line 231, in load_all_points return [self.load_point(point_name, serverapp) for point_name in self.extension_points] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\user\AppData\Local\miniconda3\Lib\site-packages\jupyter_server\extension\manager.py&quot;, line 231, in &lt;listcomp&gt; return [self.load_point(point_name, serverapp) for point_name in self.extension_points] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\user\AppData\Local\miniconda3\Lib\site-packages\jupyter_server\extension\manager.py&quot;, line 222, in load_point return 
point.load(serverapp) ^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\user\AppData\Local\miniconda3\Lib\site-packages\jupyter_server\extension\manager.py&quot;, line 150, in load return loader(serverapp) ^^^^^^^^^^^^^^^^^ File &quot;C:\Users\user\AppData\Local\miniconda3\Lib\site-packages\jupyter_server\extension\application.py&quot;, line 474, in _load_jupyter_server_extension extension.initialize() File &quot;C:\Users\user\AppData\Local\miniconda3\Lib\site-packages\jupyter_server\extension\application.py&quot;, line 435, in initialize self._prepare_settings() File &quot;C:\Users\user\AppData\Local\miniconda3\Lib\site-packages\jupyter_server\extension\application.py&quot;, line 315, in _prepare_settings self.initialize_settings() File &quot;C:\Users\user\AppData\Local\miniconda3\Lib\site-packages\jupyter_ai\extension.py&quot;, line 238, in initialize_settings learn_chat_handler = LearnChatHandler(**chat_handler_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\user\AppData\Local\miniconda3\Lib\site-packages\jupyter_ai\chat_handlers\learn.py&quot;, line 66, in __init__ self._load() File &quot;C:\Users\user\AppData\Local\miniconda3\Lib\site-packages\jupyter_ai\chat_handlers\learn.py&quot;, line 70, in _load embeddings = self.get_embedding_model() ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\user\AppData\Local\miniconda3\Lib\site-packages\jupyter_ai\chat_handlers\learn.py&quot;, line 295, in get_embedding_model return em_provider_cls(**em_provider_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\user\AppData\Local\miniconda3\Lib\site-packages\jupyter_ai_magics\embedding_providers.py&quot;, line 68, in __init__ super().__init__(*args, **kwargs, **model_kwargs) File &quot;C:\Users\user\AppData\Local\miniconda3\Lib\site-packages\pydantic\v1\main.py&quot;, line 341, in __init__ raise validation_error pydantic.v1.error_wrappers.ValidationError: 1 validation error for CohereEmbeddingsProvider __root__ Could not import cohere python package. 
Please install it with `pip install cohere`. (type=value_error) </code></pre> <p>Then after installing the two requested packages:</p> <pre><code>pip install langchain_nvidia_ai_endpoints #and pip install cohere </code></pre> <p>I am still getting this error</p> <pre><code>[E 2024-02-19 17:46:26.012 AiExtension] Could not load vector index from disk. </code></pre> <p>I have another machine where I am not getting this problem. I am assuming that this might be due to firewall or some other issues. Any suggestions on how to solve this problem?</p>
<python><jupyter><jupyter-lab>
2024-02-16 16:54:26
1
1,890
ASE
78,008,810
11,318,930
polars - processing of 7 GB parquet aborts with segmentation fault
<p>I have a 7 GB <code>.parquet</code> file with 128M rows of accounting data (which I cannot share), split into 53 row groups. My task is to 'clean' the data by retaining in each cell only specific words from a dictionary. The reading of the file is trouble-free. The processing of the file ends in a segmentation fault on a 20-core, 128GB RAM Ubuntu O/S desktop.</p> <p>Using Python and the Polars library, I convert the data into a Polars DataFrame with the following columns:</p> <pre><code>['rowid', 'txid', 'debit', 'credit', 'effective_date', 'entered_date', 'user_id', 'transaction', 'memo', 'type', 'account', 'total_amt'] </code></pre> <p>The columns to be cleaned in this file are <code>memo</code>, <code>type</code>, and <code>account</code>. My approach is to take each of those columns and apply a <code>filter_field</code> method to them, and the problem occurs in the loop I made to do that:</p> <pre><code> # two-step cleaning: first memo/account fields for memo_field in memo_columns+account_columns: print('memo_field:', memo_field) data = data.with_columns( (pl.col(memo_field).map_elements(lambda x: self.filter_field(text=x, word_dict=word_dict))).alias('clean_' + memo_field) # baseline ) data = data.drop(memo_field) #drop cleaned column </code></pre> <p>In each loop, I'm actually creating a new &quot;clean&quot; column and then dropping the original.</p> <p>I know the filtering is sound because everything runs fine on near-identical files of up to 78M rows. When I run this larger file I see the memory consumption climb steadily until the seg fault. 
Just before the seg fault I noticed in <code>htop</code> that about a dozen processes seemingly identical to the main python process are spawned but it happened too fast for me to see what exactly happened there.</p> <p>What I would like to know:</p> <ol> <li>is there a better approach than the looping/mapping I'm using;</li> <li>are there improvements I could make for memory management; or</li> <li>is this simply a case of needing more resources</li> </ol> <p>Update:</p> <p>Changed the code so that instead of creating a new column and dropping the old, I just change the column (like Pandas 'inplace'?). This allowed processing of the entire file. Each column takes ~1500 sec to be cleaned.</p> <p>HOWEVER, a write error now appears: <code>Parquet cannot store strings with size 2GB or more</code>. This does not make sense to me as the data can only shrink in size through the cleaning process.</p>
<python><python-polars>
2024-02-16 16:41:03
0
1,287
MikeB2019x
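`map_elements` calls a Python function once per cell, which forces every string through the Python interpreter; polars' native string expressions keep the work in Rust and are usually both faster and lighter on memory. The question does not show `filter_field`, so the sketch below is a guessed pure-Python equivalent (keep only whitespace-separated words found in a dictionary), with the shape of a hypothetical polars-native rewrite in a comment — both are assumptions, not the author's code:

```python
def filter_field(text, word_set):
    """Keep only the whitespace-separated tokens present in word_set."""
    if text is None:
        return None
    return " ".join(w for w in text.split() if w in word_set)

# Hypothetical polars-native equivalent, avoiding map_elements entirely
# (untested sketch; word_list would be a plain Python list of allowed words):
#   data = data.with_columns(
#       pl.col(memo_field)
#         .str.split(" ")
#         .list.eval(pl.element().filter(pl.element().is_in(word_list)))
#         .list.join(" ")
#   )

print(filter_field("pay vendor invoice 123", {"vendor", "invoice"}))
```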
78,008,758
17,800,932
How do I get multiple OID values in SNMP?
<p>I am using <code>pysnmp</code> installed by the <a href="https://github.com/lextudio/pysnmp" rel="nofollow noreferrer"><code>pysnmp-lextudio</code></a>. I chose this package because it is pure Python and thus cross-platform. Other libraries were either too hard for me to understand or not cross-platform or required external system dependencies to be installed.</p> <p>Currently, I am talking to a <a href="https://www.cyberpowersystems.com/products/pdu/" rel="nofollow noreferrer">CyberPower PDU</a>, which is basically a power supply with controllable outlets, with the following <code>get_data</code> command:</p> <pre class="lang-py prettyprint-override"><code>def get_data(ip_address: str, object_identity: str) -&gt; int | str: &quot;&quot;&quot;Get the OID's value. Only integer and string values are currently supported.&quot;&quot;&quot; iterator = getCmd( SnmpEngine(), CommunityData(&quot;public&quot;, mpModel=0), UdpTransportTarget(transportAddr=(ip_address, 161), timeout=1, retries=0), ContextData(), ObjectType(ObjectIdentity(object_identity)), ) error_indication, error_status, error_index, variable_bindings = next(iterator) if error_indication: raise RuntimeError(str(error_indication)) elif error_status: raise RuntimeError(str(error_status)) else: [variable_binding] = variable_bindings [_oid, value] = variable_binding return convert_snmp_type_to_python_type(value) </code></pre> <p>For a 16-port PDU, calling <code>get_data</code> 16 times takes a little over a second. Each call of <code>get_data</code> takes around 70ms. This is problematic because it makes keeping a GUI responsive to the actual state of an outlet difficult. I want a sub-process effectively looping at a cadence of 1 Hz to 2 Hz getting the state of all the outlets. 
This is because an outlet could be turned off or on by something external to the GUI, so it needs to be able to accurately show the actual state.</p> <p>So I tried adjusting my command to something like this:</p> <pre class="lang-py prettyprint-override"><code>def get_multiple_data(ip_address: str, object_identities: list[str]) -&gt; int | str: &quot;&quot;&quot;Get the OID's value. Only integer and string values are currently supported.&quot;&quot;&quot; # The OID for retrieving an outlet's state is hardcoded for debugging purposes ids = [&quot;.1.3.6.1.4.1.3808.1.1.3.3.5.1.1.4.{}&quot;.format(outlet) for outlet in range(1, 17)] oids = [ObjectType(ObjectIdentity(id)) for id in ids] print(&quot;OIDs: &quot; + str(oids)) iterator = getCmd( SnmpEngine(), CommunityData(&quot;public&quot;, mpModel=0), UdpTransportTarget(transportAddr=(ip_address, 161), timeout=10, retries=0), ContextData(), *oids, ) error_indication, error_status, error_index, variable_bindings = next(iterator) ... </code></pre> <p>This seems to work for when the <code>oids</code> list is only one or two elements, but it times out for the full list of 16 OIDs for the 16 outlets. And it times out even if I wait like 5 seconds. So I'm not sure what's going on.</p> <p>I realize there's also a <code>bulkCmd</code>, but I'm not entirely sure how to use it, as SNMP is new to me and quite arcane.</p> <hr /> <p><strong>Summary</strong>: I have the list of OIDs:</p> <pre class="lang-py prettyprint-override"><code>ids = [&quot;.1.3.6.1.4.1.3808.1.1.3.3.5.1.1.4.{}&quot;.format(outlet) for outlet in range(1, 17)] </code></pre> <p>I am looking for the fastest way to query these such that the response time is well-below a second for the state of all 16 outlets. Ideally the solution uses the <code>pysnmp</code> package, but I am open to others as long as they are cross-platform and require no external system dependencies.</p>
<python><snmp><net-snmp>
2024-02-16 16:32:13
1
908
bmitc
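One observation worth adding to the question above, independent of pysnmp's API: each `get_data` call is dominated by network round-trip time rather than CPU, so the 16 requests can be overlapped. A minimal sketch using a thread pool, where `fetch_outlet_state` is a hypothetical stand-in that simulates a ~70 ms blocking request (it is not pysnmp code):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_outlet_state(outlet: int) -> tuple[int, str]:
    """Stand-in for a per-OID SNMP get: each call blocks ~70 ms on I/O."""
    time.sleep(0.07)
    return outlet, "on"

def get_all_outlet_states(outlets: range) -> dict[int, str]:
    # Because the work is I/O-bound, threads overlap the waits: 16 requests
    # complete in roughly the time of one, not the sum of all sixteen.
    with ThreadPoolExecutor(max_workers=16) as pool:
        return dict(pool.map(fetch_outlet_state, outlets))

start = time.perf_counter()
states = get_all_outlet_states(range(1, 17))
elapsed = time.perf_counter() - start
```

Sequentially these 16 simulated calls would take over a second; overlapped, they finish in roughly a single round-trip, which would keep a 1–2 Hz polling loop feasible even without a bulk SNMP command.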
78,008,755
18,150,609
Is it safe to use the variable from dictionary comprehension during error handling?
<p>Consider the following example:</p> <pre><code>config_parameters = ['PARAM1', 'PARAM2', 'PARAM3'] sample_config = { 'PARAM1': 'Value1', 'PARAM2': 'Value2' } try: # Collects param values from the sample config config = {k: sample_config[k] for k in config_parameters} except KeyError as e: # Tries using comprehension var `k` to advise which variable was missing raise KeyError(f'Value: &quot;{k}&quot; missing from config file.') </code></pre> <p>In error handling, it tries to catch key errors that arise during a strict dict lookup performed via dict comprehension. For any caught errors, it attempts to use a variable that was assigned during the dict comprehension.</p> <p>Is this considered safe in Python, or is there any reason to expect that <code>k</code> may not indicate the same value which caused the error?</p>
<python>
2024-02-16 16:31:39
1
364
MrChadMWood
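A small executable check of the scoping question above: in Python 3 the comprehension variable lives in its own scope, so `k` is not visible in the surrounding `except` block at all (this sketch assumes no other binding of `k` exists in the enclosing scope). A plain loop, by contrast, keeps the failing key in scope:

```python
config_parameters = ['PARAM1', 'PARAM2', 'PARAM3']
sample_config = {'PARAM1': 'Value1', 'PARAM2': 'Value2'}

# 1) The comprehension variable does NOT leak in Python 3.
outcome = None
try:
    config = {k: sample_config[k] for k in config_parameters}
except KeyError:
    try:
        k  # raises NameError: k exists only inside the comprehension
    except NameError:
        outcome = "k is not defined outside the comprehension"

# 2) A plain for-loop keeps the failing key available for the error message.
missing = None
config = {}
try:
    for key in config_parameters:
        config[key] = sample_config[key]
except KeyError:
    missing = key
```

Alternatively, the caught exception itself already carries the key: `except KeyError as e` gives the missing key as `e.args[0]`, with no reliance on loop-variable scoping.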
78,008,563
11,167,163
.yml does not install packages from .requirements.txt in the dedicated environment
<p>I have the following which should install packages from <code>requirements.txt</code> in the environment <code>Env_Name</code> which has just been created</p> <pre><code>- script: | python -m venv Env_Name cd Env_Name .\scripts\Activate displayName: 'Create &amp; Activate Env_Name ' - script: | python -m pip install --upgrade pip displayName: 'Install dependencies' - script: | pip install wheel displayName: 'Wheel install' - script: | pip install python_ldap-3.4.0-cp311-cp311-win_amd64.whl displayName: 'Install ldap from wheel' - script: | type requirements.txt displayName: 'display the content of the requirements file' - script: | pip install -r requirements.txt -v displayName: 'packages installation' - script: | pip list displayName: 'Display packages' - script: | python setup.py sdist displayName: 'Artifact creation' - task: CopyFiles@2 inputs: targetFolder: $(Build.ArtifactStagingDirectory) - task: PublishBuildArtifacts@1 inputs: PathtoPublish: '$(Build.ArtifactStagingDirectory)' ArtifactName: 'My_App' publishLocation: 'Container' </code></pre> <hr /> <p>display packages does show that packages have been installed :</p> <pre><code>pip list ========================== Starting Command Output =========================== &quot;C:\Windows\system32\cmd.exe&quot; /D /E:ON /V:OFF /S /C &quot;CALL &quot;D:\a\_temp\f53d2436-67e2-4d6f-bf00-e90e1dfcb18c.cmd&quot;&quot; Package Version -------------------- ------------ asgiref 3.7.2 beautifulsoup4 4.12.2 certifi 2023.7.22 cffi 1.15.1 charset-normalizer 3.2.0 contourpy 1.1.1 cryptography 41.0.3 cx_Oracle 8.3.0 cycler 0.11.0 Django 4.2.5 django-adminlte3 0.1.6 django-auth-ldap 4.6.0 django-mssql-backend 2.8.1 et-xmlfile 1.1.0 fonttools 4.42.1 idna 3.4 joblib 1.3.2 kiwisolver 1.4.5 lxml 4.9.3 </code></pre> <p>but looking into the env folder i can only see in site-packages :</p> <p><a href="https://i.sstatic.net/NmE4Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NmE4Z.png" alt="enter image description here" 
/></a></p> <p>As we can see, there are no packages from requirements.txt. So the question is:</p> <p>Where are all these packages installed? Is there a script which can be used to show the path of the folder where they are?</p> <p>How do I get them installed in <code>Env_Name</code>?</p>
<python><azure-devops><continuous-integration>
2024-02-16 15:56:06
2
4,464
TourEiffel
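One likely cause for the pipeline above, offered as a hypothesis rather than a confirmed diagnosis: every `script:` step in an Azure DevOps pipeline runs in a fresh shell process, so the activation (and the `cd Env_Name`) performed in the first step is gone by the time `pip install` runs, and the packages land in the agent's global interpreter instead. The process isolation can be illustrated directly:

```python
import os
import subprocess
import sys

# "Step 1": an environment change made for one child process...
env = dict(os.environ)
env["MY_ENV"] = "activated"
subprocess.run([sys.executable, "-c", "import os; os.environ"],
               env=env, check=True)

# "Step 2": ...is invisible to the next, brand-new process,
# just as each `script:` step starts with a clean shell.
out = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ.get('MY_ENV', 'not activated'))"],
    capture_output=True, text=True, check=True,
)
step2_sees = out.stdout.strip()
```

A common fix is to put activation and installation in one multi-line `script:` step, or to invoke the environment's interpreter explicitly (e.g. `Env_Name\Scripts\python -m pip install -r requirements.txt`), so no cross-step state is needed.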
78,008,454
14,045,537
How to order angular & radial axis labels in Plotly scatter_polar chart?
<p>I have a time series data from which I'm extracting weekday hourly counts. And to visualize the data I came across an <a href="https://echarts.apache.org/examples/en/editor.html?c=scatter-polar-punchCard" rel="nofollow noreferrer">Echart example</a>. Is it possible to create something like the echart example using Plotly?</p> <p>Sample data</p> <pre class="lang-py prettyprint-override"><code>timeData = ['2009/6/12 5:00', '2009/6/12 7:00', '2009/6/12 9:00', '2009/6/12 13:00', '2009/6/12 15:00', '2009/6/12 17:00', '2009/6/12 21:00', '2009/6/13 1:00', '2009/6/13 5:00', '2009/6/13 7:00', '2009/6/13 9:00', '2009/6/13 13:00', '2009/6/13 15:00', '2009/6/13 17:00', '2009/6/13 21:00', '2009/6/14 1:00', '2009/6/14 5:00', '2009/6/14 7:00', '2009/6/14 9:00', '2009/6/14 13:00', '2009/6/14 15:00', '2009/6/14 17:00', '2009/6/14 21:00', '2009/6/15 1:00', '2009/6/15 5:00', '2009/6/15 7:00', '2009/6/15 9:00', '2009/6/15 13:00', '2009/6/15 15:00', '2009/6/15 17:00', '2009/6/15 21:00', '2009/6/16 1:00', '2009/6/16 5:00', '2009/6/16 7:00', '2009/6/16 9:00', '2009/6/16 13:00', '2009/6/16 15:00', '2009/6/16 17:00', '2009/6/16 21:00', '2009/6/17 1:00', '2009/6/17 5:00', '2009/6/17 7:00', '2009/6/17 9:00', '2009/6/17 13:00', '2009/6/17 15:00', '2009/6/17 17:00', '2009/6/17 21:00', '2009/6/18 1:00', '2009/6/18 5:00', '2009/6/18 7:00', '2009/6/18 9:00', '2009/6/18 13:00', '2009/6/18 15:00', '2009/6/18 17:00', '2009/6/18 21:00', '2009/6/19 1:00', '2009/6/19 5:00', '2009/6/19 7:00', '2009/6/19 9:00', '2009/6/19 13:00', '2009/6/19 15:00', '2009/6/19 17:00', '2009/6/19 21:00', '2009/6/20 1:00', '2009/6/20 5:00', '2009/6/20 7:00', '2009/6/20 9:00', '2009/6/20 13:00', '2009/6/20 15:00', '2009/6/20 17:00', '2009/6/20 21:00', '2009/6/21 1:00', '2009/6/21 5:00', '2009/6/21 7:00', '2009/6/21 9:00', '2009/6/21 13:00', '2009/6/21 15:00', '2009/6/21 17:00', '2009/6/21 21:00', '2009/6/22 1:00', '2009/6/22 5:00', '2009/6/22 7:00', '2009/6/22 9:00', '2009/6/22 13:00', '2009/6/22 15:00', '2009/6/22 
17:00', '2009/6/22 21:00', '2009/6/23 1:00', '2009/6/23 7:00', '2009/6/23 9:00', '2009/6/23 11:00', '2009/6/23 15:00', '2009/6/23 17:00', '2009/6/23 19:00', '2009/6/23 23:00', '2009/6/24 5:00', '2009/6/24 9:00', '2009/6/24 11:00', '2009/6/24 13:00', '2009/6/24 17:00', '2009/6/24 19:00', '2009/6/24 21:00', '2009/6/25 1:00', '2009/6/25 7:00', '2009/6/25 11:00', '2009/6/25 13:00', '2009/6/25 15:00', '2009/6/25 19:00', '2009/6/25 21:00', '2009/6/25 23:00', '2009/6/27 5:00', '2009/6/27 9:00', '2009/6/27 13:00', '2009/6/27 15:00', '2009/6/27 17:00', '2009/6/27 21:00', '2009/6/27 23:00', '2009/6/28 1:00', '2009/6/28 5:00', '2009/6/28 9:00', '2009/6/28 13:00', '2009/6/28 15:00', '2009/6/28 17:00', '2009/6/28 21:00', '2009/6/28 23:00', '2009/6/29 1:00', '2009/6/29 5:00', '2009/6/29 9:00', '2009/6/29 13:00', '2009/6/29 15:00', '2009/6/29 17:00', '2009/6/29 21:00', '2009/6/29 23:00', '2009/6/30 1:00', '2009/6/30 5:00', '2009/6/30 9:00', '2009/6/30 13:00', '2009/6/30 15:00', '2009/6/30 17:00', '2009/6/30 21:00', '2009/6/30 23:00', '2009/7/2 1:00', '2009/7/2 5:00', '2009/7/2 9:00', '2009/7/2 13:00', '2009/7/2 15:00', '2009/7/2 17:00', '2009/7/2 21:00', '2009/7/2 23:00', '2009/7/3 1:00', '2009/7/3 5:00', '2009/7/3 9:00', '2009/7/3 13:00', '2009/7/3 15:00', '2009/7/3 17:00', '2009/7/3 21:00', '2009/7/3 23:00', '2009/7/5 1:00', '2009/7/5 5:00', '2009/7/5 9:00', '2009/7/5 13:00', '2009/7/5 15:00', '2009/7/5 17:00', '2009/7/5 21:00', '2009/7/5 23:00', '2009/7/6 1:00', '2009/7/6 5:00', '2009/7/6 9:00', '2009/7/6 13:00', '2009/7/6 15:00', '2009/7/6 17:00', '2009/7/6 21:00', '2009/7/6 23:00', '2009/7/7 1:00', '2009/7/7 5:00', '2009/7/7 9:00', '2009/7/7 13:00', '2009/7/7 15:00', '2009/7/7 17:00', '2009/7/7 21:00', '2009/7/7 23:00', '2009/7/8 1:00', '2009/7/8 5:00', '2009/7/8 9:00', '2009/7/8 13:00', '2009/7/8 15:00', '2009/7/8 17:00', '2009/7/8 21:00', '2009/7/8 23:00', '2009/7/9 1:00', '2009/7/9 5:00', '2009/7/9 9:00', '2009/7/9 13:00', '2009/7/9 15:00', '2009/7/9 17:00', '2009/7/9 
21:00', '2009/7/9 23:00', '2009/7/10 1:00', '2009/7/10 5:00', '2009/7/10 9:00', '2009/7/10 13:00', '2009/7/10 15:00', '2009/7/10 17:00', '2009/7/10 21:00', '2009/7/10 23:00', '2009/7/11 1:00', '2009/7/11 5:00', '2009/7/11 9:00', '2009/7/11 13:00', '2009/7/11 15:00', '2009/7/11 17:00', '2009/7/11 21:00', '2009/7/11 23:00', '2009/7/12 1:00', '2009/7/12 5:00', '2009/7/12 9:00', '2009/7/12 13:00', '2009/7/12 15:00', '2009/7/12 17:00', '2009/7/12 21:00', '2009/7/12 23:00', '2009/7/13 1:00', '2009/7/13 5:00', '2009/7/13 9:00', '2009/7/13 13:00', '2009/7/13 15:00', '2009/7/13 17:00', '2009/7/13 21:00', '2009/7/13 23:00', '2009/7/14 1:00', '2009/7/14 5:00', '2009/7/14 9:00', '2009/7/14 13:00', '2009/7/14 15:00', '2009/7/14 17:00', '2009/7/14 21:00', '2009/7/14 23:00', '2009/7/15 1:00', '2009/7/15 5:00', '2009/7/15 9:00', '2009/7/15 13:00', '2009/7/15 15:00', '2009/7/15 17:00', '2009/7/15 21:00', '2009/7/15 23:00', '2009/7/16 1:00', '2009/7/16 5:00', '2009/7/16 9:00', '2009/7/16 13:00', '2009/7/16 15:00', '2009/7/16 17:00', '2009/7/16 21:00', '2009/7/16 23:00', '2009/7/17 1:00', '2009/7/17 5:00', '2009/7/17 9:00', '2009/7/17 13:00', '2009/7/17 15:00', '2009/7/17 17:00', '2009/7/17 21:00', '2009/7/17 23:00', '2009/7/18 1:00', '2009/7/18 5:00', '2009/7/18 9:00', '2009/7/18 13:00', '2009/7/18 15:00', '2009/7/18 17:00', '2009/7/18 21:00', '2009/7/18 23:00', '2009/7/19 1:00', '2009/7/19 5:00', '2009/7/19 9:00', '2009/7/19 13:00', '2009/7/19 15:00', '2009/7/19 17:00', '2009/7/19 21:00', '2009/7/19 23:00', '2009/7/20 1:00', '2009/7/20 5:00', '2009/7/20 9:00', '2009/7/20 13:00', '2009/7/20 15:00', '2009/7/20 17:00', '2009/7/20 21:00', '2009/7/20 23:00', '2009/7/21 1:00', '2009/7/21 6:00', '2009/7/21 10:00', '2009/7/21 14:00', '2009/7/21 16:00', '2009/7/21 18:00', '2009/7/21 22:00', '2009/7/22 0:00', '2009/7/22 3:00', '2009/7/22 7:00', '2009/7/22 11:00', '2009/7/22 15:00', '2009/7/22 17:00', '2009/7/22 19:00', '2009/7/22 23:00', '2009/7/23 1:00', '2009/7/23 4:00', '2009/7/23 8:00', 
'2009/7/23 12:00', '2009/7/23 16:00', '2009/7/23 18:00', '2009/7/23 20:00', '2009/7/24 0:00', '2009/7/24 3:00', '2009/7/24 5:00', '2009/7/24 9:00', '2009/7/24 13:00', '2009/7/24 17:00', '2009/7/24 19:00', '2009/7/24 21:00', '2009/7/25 1:00', '2009/7/25 4:00', '2009/7/25 6:00', '2009/7/25 10:00', '2009/7/25 14:00', '2009/7/25 18:00', '2009/7/25 20:00', '2009/7/25 22:00', '2009/7/26 3:00', '2009/7/26 5:00', '2009/7/26 7:00', '2009/7/26 11:00', '2009/7/26 15:00', '2009/7/26 19:00', '2009/7/26 21:00', '2009/7/26 23:00', '2009/7/27 3:00', '2009/7/27 5:00', '2009/7/27 7:00', '2009/7/27 11:00', '2009/7/27 15:00', '2009/7/27 19:00', '2009/7/27 21:00', '2009/7/27 23:00', '2009/7/28 3:00', '2009/7/28 5:00', '2009/7/28 7:00', '2009/7/28 11:00', '2009/7/28 15:00', '2009/7/28 19:00', '2009/7/28 21:00', '2009/7/28 23:00', '2009/7/29 3:00', '2009/7/29 5:00', '2009/7/29 7:00', '2009/7/29 11:00', '2009/7/29 15:00', '2009/7/29 19:00', '2009/7/29 21:00', '2009/7/29 23:00', '2009/7/30 3:00', '2009/7/30 5:00', '2009/7/30 7:00', '2009/7/30 11:00', '2009/7/30 15:00', '2009/7/30 19:00', '2009/7/30 21:00', '2009/7/30 23:00', '2009/7/31 3:00', '2009/7/31 5:00', '2009/7/31 7:00', '2009/7/31 11:00', '2009/7/31 15:00', '2009/7/31 19:00', '2009/7/31 21:00', '2009/7/31 23:00', '2009/8/1 3:00', '2009/8/1 5:00', '2009/8/1 7:00', '2009/8/1 11:00', '2009/8/1 15:00', '2009/8/1 19:00', '2009/8/1 21:00', '2009/8/1 23:00', '2009/8/2 3:00', '2009/8/2 5:00', '2009/8/2 7:00', '2009/8/2 11:00', '2009/8/2 15:00', '2009/8/2 19:00', '2009/8/2 21:00', '2009/8/2 23:00', '2009/8/3 3:00', '2009/8/3 5:00', '2009/8/3 7:00', '2009/8/3 11:00', '2009/8/3 15:00', '2009/8/3 19:00', '2009/8/3 21:00', '2009/8/3 23:00', '2009/8/4 3:00', '2009/8/4 5:00', '2009/8/4 7:00', '2009/8/4 11:00', '2009/8/4 15:00', '2009/8/4 19:00', '2009/8/4 21:00', '2009/8/4 23:00', '2009/8/5 3:00', '2009/8/5 5:00', '2009/8/5 7:00', '2009/8/5 11:00', '2009/8/5 15:00', '2009/8/5 19:00', '2009/8/5 21:00', '2009/8/5 23:00', '2009/8/6 3:00', '2009/8/6 
5:00', '2009/8/6 7:00', '2009/8/6 11:00', '2009/8/6 15:00', '2009/8/6 19:00', '2009/8/6 21:00', '2009/8/6 23:00', '2009/8/7 3:00', '2009/8/7 5:00', '2009/8/7 7:00', '2009/8/7 11:00', '2009/8/7 15:00', '2009/8/7 19:00', '2009/8/7 21:00', '2009/8/7 23:00', '2009/8/8 3:00', '2009/8/8 5:00', '2009/8/8 7:00', '2009/8/8 11:00', '2009/8/8 15:00', '2009/8/8 19:00', '2009/8/8 21:00', '2009/8/8 23:00', '2009/8/9 3:00', '2009/8/9 5:00', '2009/8/9 7:00', '2009/8/9 11:00', '2009/8/9 15:00', '2009/8/9 19:00', '2009/8/9 21:00', '2009/8/9 23:00', '2009/8/10 3:00', '2009/8/10 5:00', '2009/8/10 7:00', '2009/8/10 11:00', '2009/8/10 15:00', '2009/8/10 19:00', '2009/8/10 21:00', '2009/8/10 23:00', '2009/8/11 3:00', '2009/8/11 5:00', '2009/8/11 7:00', '2009/8/11 11:00', '2009/8/11 15:00', '2009/8/11 19:00', '2009/8/11 21:00', '2009/8/11 23:00', '2009/8/12 3:00', '2009/8/12 5:00', '2009/8/12 7:00', '2009/8/12 11:00', '2009/8/12 15:00', '2009/8/12 19:00', '2009/8/12 21:00', '2009/8/12 23:00', '2009/8/13 3:00', '2009/8/13 5:00', '2009/8/13 7:00', '2009/8/13 11:00', '2009/8/13 15:00', '2009/8/13 19:00', '2009/8/13 21:00', '2009/8/13 23:00', '2009/8/14 3:00', '2009/8/14 5:00', '2009/8/14 7:00', '2009/8/14 11:00', '2009/8/14 15:00', '2009/8/14 19:00', '2009/8/14 21:00', '2009/8/14 23:00', '2009/8/15 3:00', '2009/8/15 5:00', '2009/8/15 7:00', '2009/8/15 11:00', '2009/8/15 15:00', '2009/8/15 19:00', '2009/8/15 21:00', '2009/8/15 23:00', '2009/8/16 3:00', '2009/8/16 5:00', '2009/8/16 7:00', '2009/8/16 11:00', '2009/8/16 15:00', '2009/8/16 19:00', '2009/8/16 21:00', '2009/8/16 23:00', '2009/8/17 3:00', '2009/8/17 5:00', '2009/8/17 7:00', '2009/8/17 11:00', '2009/8/17 15:00', '2009/8/17 19:00', '2009/8/17 21:00', '2009/8/17 23:00', '2009/8/18 3:00', '2009/8/18 5:00', '2009/8/18 7:00', '2009/8/18 11:00', '2009/8/18 15:00', '2009/8/18 19:00', '2009/8/18 21:00', '2009/8/18 23:00', '2009/8/19 3:00', '2009/8/19 5:00', '2009/8/19 7:00', '2009/8/19 11:00', '2009/8/19 15:00', '2009/8/19 19:00', '2009/8/19 
21:00', '2009/8/19 23:00', '2009/8/20 3:00', '2009/8/20 5:00', '2009/8/20 7:00', '2009/8/20 11:00', '2009/8/20 15:00', '2009/8/20 19:00', '2009/8/20 21:00', '2009/8/20 23:00', '2009/8/21 3:00', '2009/8/21 5:00', '2009/8/21 7:00', '2009/8/21 11:00', '2009/8/21 15:00', '2009/8/21 19:00', '2009/8/21 21:00', '2009/8/21 23:00', '2009/8/22 3:00', '2009/8/22 5:00', '2009/8/22 7:00', '2009/8/22 11:00', '2009/8/22 15:00', '2009/8/22 19:00', '2009/8/22 21:00', '2009/8/22 23:00', '2009/8/23 3:00', '2009/8/23 5:00', '2009/8/23 7:00', '2009/8/23 11:00', '2009/8/23 15:00', '2009/8/23 19:00', '2009/8/23 21:00', '2009/8/23 23:00', '2009/8/24 3:00', '2009/8/24 5:00', '2009/8/24 7:00', '2009/8/24 11:00', '2009/8/24 15:00', '2009/8/24 19:00', '2009/8/24 21:00', '2009/8/24 23:00', '2009/8/25 3:00', '2009/8/25 5:00', '2009/8/25 7:00', '2009/8/25 11:00', '2009/8/25 15:00', '2009/8/25 19:00', '2009/8/25 21:00', '2009/8/25 23:00', '2009/8/26 3:00', '2009/8/26 5:00', '2009/8/26 7:00', '2009/8/26 11:00', '2009/8/26 15:00', '2009/8/26 19:00', '2009/8/26 21:00', '2009/8/26 23:00', '2009/8/27 3:00', '2009/8/27 5:00', '2009/8/27 7:00', '2009/8/27 11:00', '2009/8/27 15:00', '2009/8/27 19:00', '2009/8/27 21:00', '2009/8/27 23:00', '2009/8/28 3:00', '2009/8/28 5:00', '2009/8/28 7:00', '2009/8/28 11:00', '2009/8/28 15:00', '2009/8/28 19:00', '2009/8/28 21:00', '2009/8/28 23:00', '2009/8/29 3:00', '2009/8/29 5:00', '2009/8/29 7:00', '2009/8/29 11:00', '2009/8/29 15:00', '2009/8/29 19:00', '2009/8/29 21:00', '2009/8/29 23:00', '2009/8/30 3:00', '2009/8/30 5:00', '2009/8/30 7:00', '2009/8/30 11:00', '2009/8/30 15:00', '2009/8/30 19:00', '2009/8/30 21:00', '2009/8/30 23:00', '2009/8/31 3:00', '2009/8/31 5:00', '2009/8/31 7:00', '2009/8/31 11:00', '2009/8/31 15:00', '2009/8/31 19:00', '2009/8/31 21:00', '2009/8/31 23:00', '2009/9/1 3:00', '2009/9/1 5:00', '2009/9/1 7:00', '2009/9/1 11:00', '2009/9/1 15:00', '2009/9/1 19:00', '2009/9/1 21:00', '2009/9/1 23:00', '2009/9/2 3:00', '2009/9/2 5:00', '2009/9/2 
7:00', '2009/9/2 11:00', '2009/9/2 15:00', '2009/9/2 19:00', '2009/9/2 21:00', '2009/9/2 23:00', '2009/9/3 3:00', '2009/9/3 5:00', '2009/9/3 7:00', '2009/9/3 11:00', '2009/9/3 15:00', '2009/9/3 19:00', '2009/9/3 21:00', '2009/9/3 23:00', '2009/9/4 3:00', '2009/9/4 5:00', '2009/9/4 7:00', '2009/9/4 11:00', '2009/9/4 15:00', '2009/9/4 19:00', '2009/9/4 21:00', '2009/9/4 23:00', '2009/9/5 3:00', '2009/9/5 5:00', '2009/9/5 7:00', '2009/9/5 11:00', '2009/9/5 15:00', '2009/9/5 19:00', '2009/9/5 21:00', '2009/9/5 23:00', '2009/9/6 3:00', '2009/9/6 5:00', '2009/9/6 7:00', '2009/9/6 11:00', '2009/9/6 15:00', '2009/9/6 19:00', '2009/9/6 21:00', '2009/9/6 23:00', '2009/9/7 3:00', '2009/9/7 5:00', '2009/9/7 7:00', '2009/9/7 11:00', '2009/9/7 15:00', '2009/9/7 19:00', '2009/9/7 21:00', '2009/9/7 23:00', '2009/9/8 3:00', '2009/9/8 5:00', '2009/9/8 7:00', '2009/9/8 11:00', '2009/9/8 15:00', '2009/9/8 19:00', '2009/9/8 21:00', '2009/9/8 23:00', '2009/9/9 3:00', '2009/9/9 5:00', '2009/9/9 7:00', '2009/9/9 11:00', '2009/9/9 15:00', '2009/9/9 19:00', '2009/9/9 21:00', '2009/9/9 23:00', '2009/9/10 3:00', '2009/9/10 5:00', '2009/9/10 7:00', '2009/9/10 11:00', '2009/9/10 15:00', '2009/9/10 19:00', '2009/9/10 21:00', '2009/9/10 23:00', '2009/9/11 3:00', '2009/9/11 5:00', '2009/9/11 7:00', '2009/9/11 11:00', '2009/9/11 15:00', '2009/9/11 19:00', '2009/9/11 21:00', '2009/9/11 23:00', '2009/9/12 3:00', '2009/9/12 5:00', '2009/9/12 7:00', '2009/9/12 11:00', '2009/9/12 15:00', '2009/9/12 19:00', '2009/9/12 21:00', '2009/9/12 23:00', '2009/9/13 3:00', '2009/9/13 5:00', '2009/9/13 7:00', '2009/9/13 11:00', '2009/9/13 15:00', '2009/9/13 19:00', '2009/9/13 21:00', '2009/9/13 23:00', '2009/9/14 3:00', '2009/9/14 5:00', '2009/9/14 7:00', '2009/9/14 11:00', '2009/9/14 15:00', '2009/9/14 19:00', '2009/9/14 21:00', '2009/9/14 23:00', '2009/9/15 3:00', '2009/9/15 5:00', '2009/9/15 7:00', '2009/9/15 11:00', '2009/9/15 15:00', '2009/9/15 19:00', '2009/9/15 21:00', '2009/9/15 23:00', '2009/9/16 3:00', 
'2009/9/16 5:00', '2009/9/16 7:00', '2009/9/16 11:00', '2009/9/16 15:00', '2009/9/16 19:00', '2009/9/16 21:00', '2009/9/16 23:00', '2009/9/17 3:00', '2009/9/17 5:00', '2009/9/17 7:00', '2009/9/17 11:00', '2009/9/17 15:00', '2009/9/17 19:00', '2009/9/17 21:00', '2009/9/17 23:00', '2009/9/18 3:00', '2009/9/18 5:00', '2009/9/18 7:00', '2009/9/18 11:00', '2009/9/18 15:00', '2009/9/18 19:00', '2009/9/18 21:00', '2009/9/18 23:00', '2009/9/19 3:00', '2009/9/19 5:00', '2009/9/19 7:00', '2009/9/19 11:00', '2009/9/19 15:00', '2009/9/19 19:00', '2009/9/19 21:00', '2009/9/19 23:00', '2009/9/20 3:00', '2009/9/20 5:00', '2009/9/20 7:00', '2009/9/20 11:00', '2009/9/20 15:00', '2009/9/20 19:00', '2009/9/20 21:00', '2009/9/20 23:00', '2009/9/21 3:00', '2009/9/21 5:00', '2009/9/21 7:00', '2009/9/21 11:00', '2009/9/21 15:00', '2009/9/21 19:00', '2009/9/21 21:00', '2009/9/21 23:00', '2009/9/22 3:00', '2009/9/22 5:00', '2009/9/22 7:00', '2009/9/22 11:00', '2009/9/22 15:00', '2009/9/22 19:00', '2009/9/22 21:00', '2009/9/22 23:00', '2009/9/23 3:00', '2009/9/23 5:00', '2009/9/23 7:00', '2009/9/23 11:00', '2009/9/23 15:00', '2009/9/23 19:00', '2009/9/23 21:00', '2009/9/23 23:00', '2009/9/24 3:00', '2009/9/24 5:00', '2009/9/24 7:00', '2009/9/24 11:00', '2009/9/24 15:00', '2009/9/24 19:00', '2009/9/24 21:00', '2009/9/24 23:00', '2009/9/25 3:00', '2009/9/25 5:00', '2009/9/25 7:00', '2009/9/25 11:00', '2009/9/25 15:00', '2009/9/25 19:00', '2009/9/25 21:00', '2009/9/25 23:00', '2009/9/26 3:00', '2009/9/26 5:00', '2009/9/26 7:00', '2009/9/26 11:00', '2009/9/26 15:00', '2009/9/26 19:00', '2009/9/26 21:00', '2009/9/26 23:00', '2009/9/27 3:00', '2009/9/27 5:00', '2009/9/27 7:00', '2009/9/27 11:00', '2009/9/27 15:00', '2009/9/27 19:00', '2009/9/27 21:00', '2009/9/27 23:00', '2009/9/28 3:00', '2009/9/28 5:00', '2009/9/28 7:00', '2009/9/28 11:00', '2009/9/28 15:00', '2009/9/28 19:00', '2009/9/28 21:00', '2009/9/28 23:00', '2009/9/29 3:00', '2009/9/29 5:00', '2009/9/29 7:00', '2009/9/29 11:00', 
'2009/9/29 15:00', '2009/9/29 19:00', '2009/9/29 21:00', '2009/9/29 23:00', '2009/9/30 3:00', '2009/9/30 5:00', '2009/9/30 7:00', '2009/9/30 11:00', '2009/9/30 15:00', '2009/9/30 19:00', '2009/9/30 21:00', '2009/9/30 23:00', '2009/10/1 3:00', '2009/10/1 5:00', '2009/10/1 7:00', '2009/10/1 11:00', '2009/10/1 15:00', '2009/10/1 19:00', '2009/10/1 21:00', '2009/10/1 23:00', '2009/10/2 3:00', '2009/10/2 5:00', '2009/10/2 7:00', '2009/10/2 11:00', '2009/10/2 15:00', '2009/10/2 19:00', '2009/10/2 21:00', '2009/10/2 23:00', '2009/10/3 3:00', '2009/10/3 5:00', '2009/10/3 7:00', '2009/10/3 11:00', '2009/10/3 15:00', '2009/10/3 19:00', '2009/10/3 21:00', '2009/10/3 23:00', '2009/10/4 3:00', '2009/10/4 5:00', '2009/10/4 7:00', '2009/10/4 11:00', '2009/10/4 15:00', '2009/10/4 19:00', '2009/10/4 21:00', '2009/10/4 23:00', '2009/10/5 3:00', '2009/10/5 5:00', '2009/10/5 7:00', '2009/10/5 11:00', '2009/10/5 15:00', '2009/10/5 19:00', '2009/10/5 21:00', '2009/10/5 23:00', '2009/10/6 3:00', '2009/10/6 5:00', '2009/10/6 7:00', '2009/10/6 11:00', '2009/10/6 15:00', '2009/10/6 19:00', '2009/10/6 21:00', '2009/10/6 23:00', '2009/10/7 3:00', '2009/10/7 5:00', '2009/10/7 7:00', '2009/10/7 11:00', '2009/10/7 15:00', '2009/10/7 19:00', '2009/10/7 21:00', '2009/10/7 23:00', '2009/10/8 3:00', '2009/10/8 5:00', '2009/10/8 7:00', '2009/10/8 11:00', '2009/10/8 15:00', '2009/10/8 19:00', '2009/10/8 21:00', '2009/10/8 23:00', '2009/10/9 3:00', '2009/10/9 5:00', '2009/10/9 7:00', '2009/10/9 11:00', '2009/10/9 15:00', '2009/10/9 19:00', '2009/10/9 21:00', '2009/10/9 23:00', '2009/10/10 3:00', '2009/10/10 5:00', '2009/10/10 7:00', '2009/10/10 11:00', '2009/10/10 15:00', '2009/10/10 19:00', '2009/10/10 21:00', '2009/10/10 23:00', '2009/10/11 3:00', '2009/10/11 5:00', '2009/10/11 7:00', '2009/10/11 11:00', '2009/10/11 15:00', '2009/10/11 19:00', '2009/10/11 21:00', '2009/10/11 23:00', '2009/10/12 3:00', '2009/10/12 5:00', '2009/10/12 7:00', '2009/10/12 11:00', '2009/10/12 15:00', '2009/10/12 19:00', 
'2009/10/12 21:00', '2009/10/12 23:00', '2009/10/13 3:00', '2009/10/13 5:00', '2009/10/13 7:00', '2009/10/13 11:00', '2009/10/13 15:00', '2009/10/13 19:00', '2009/10/13 21:00', '2009/10/13 23:00', '2009/10/14 3:00', '2009/10/14 5:00', '2009/10/14 7:00', '2009/10/14 11:00', '2009/10/14 15:00', '2009/10/14 19:00', '2009/10/14 21:00', '2009/10/14 23:00', '2009/10/15 3:00', '2009/10/15 5:00', '2009/10/15 7:00', '2009/10/15 11:00', '2009/10/15 15:00', '2009/10/15 19:00', '2009/10/15 21:00', '2009/10/15 23:00', '2009/10/16 3:00', '2009/10/16 5:00', '2009/10/16 7:00', '2009/10/16 11:00', '2009/10/16 15:00', '2009/10/16 19:00', '2009/10/16 21:00', '2009/10/16 23:00', '2009/10/17 3:00', '2009/10/17 5:00', '2009/10/17 7:00', '2009/10/17 11:00', '2009/10/17 15:00', '2009/10/17 19:00', '2009/10/17 21:00', '2009/10/17 23:00', '2009/10/18 3:00', '2009/10/18 5:00', '2009/10/18 7:00'] </code></pre> <p><strong>Data Manipulation</strong></p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'Timestamp': timeData}) df[&quot;Timestamp&quot;] = pd.to_datetime(df[&quot;Timestamp&quot;], format=&quot;%Y/%m/%d %H:%M&quot;) df['hours'] = df[&quot;Timestamp&quot;].dt.strftime('%I%p').str.lower().str.lstrip('0') df['days'] = df[&quot;Timestamp&quot;].dt.day_name() df = pd.crosstab(df['days'], df['hours']).reindex(index=df['days'].unique(), columns=df['hours'].unique()) df.reset_index(inplace=True) df_melt = df.melt(id_vars='days', value_vars=df.columns.to_list()) </code></pre> <pre><code>df_melt.head() days hours value 0 Friday 5am 18 1 Saturday5am 17 2 Sunday 5am 19 3 Monday 5am 18 4 Tuesday 5am 16 </code></pre> <pre class="lang-py prettyprint-override"><code>fig = px.scatter_polar(df_melt, r=&quot;days&quot;, theta=&quot;hours&quot;, size=&quot;value&quot;, color=&quot;days&quot;, ) fig.show() </code></pre> <p><a href="https://i.sstatic.net/Eny16.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Eny16.png" alt="enter image description here" /></a></p> 
<p>required order:</p> <pre class="lang-py prettyprint-override"><code>angular_order = [&quot;1am&quot;, &quot;2am&quot;, &quot;3am&quot;, &quot;4am&quot;, &quot;5am&quot;,&quot;6am&quot;, &quot;7am&quot;, &quot;8am&quot;, &quot;9am&quot;, &quot;10am&quot;, &quot;11am&quot;, &quot;12pm&quot;, &quot;1pm&quot;, &quot;2pm&quot;, &quot;3pm&quot;, &quot;4pm&quot;, &quot;5pm&quot;,&quot;6pm&quot;, &quot;7pm&quot;, &quot;8pm&quot;, &quot;9pm&quot;, &quot;10pm&quot;, &quot;11pm&quot;] radial_order = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'] </code></pre> <p>How can I order angular &amp; radial axis labels in scatter_polar chart with Plotly Python?</p>
<python><plotly><visualization><polar-chart>
2024-02-16 15:36:06
1
3,025
Ailurophile
78,008,394
3,224,483
How do I get the line numbers of a saxonc XPath match?
<p>I'm building a report that will show the line numbers of XML elements that match a set of XPaths. I need to support XPath 2.0. Sending the XML to a separate web based processor written in Java or C# is a valid solution, but one I'm avoiding because my entire team works in Python, I want my tool to still work offline, and maintaining another web service is a lot of work.</p> <p>Saxonche supports XPath 2.0. The <a href="https://www.saxonica.com/saxon-c/doc12/html/saxonc.html#PyDocumentBuilder" rel="nofollow noreferrer">documentation</a> describes multiple options for enabling line numbers, but never explains how to get the line numbers out once you have enabled them.</p> <p>Here's my code:</p> <pre><code>input_file_path = 'test.xml' # Contents below input_xpath = './/foo' with PySaxonProcessor(license=False) as saxon_proc: # Attempt #1 to enable line numbers saxon_proc.set_configuration_property('l', 'on') doc_builder = saxon_proc.new_document_builder() # Attempt #2 to enable line numbers doc_builder.set_line_numbering(True) xml_tree = doc_builder.parse_xml(xml_file_name=input_file_path) xpath_processor = saxon_proc.new_xpath_processor() xpath_processor.set_context(xdm_item=xml_tree) foo_elements = xpath_processor.evaluate(input_xpath) # Do not see any line numbers on foo_elements in the debugger </code></pre> <p>I inspected the result of <code>evaluate()</code> in the debugger, but I don't see anything that looks like a line number.</p> <p><a href="https://i.sstatic.net/UwhIp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UwhIp.png" alt="debugger inspect" /></a></p> <p>Both <code>PySaxonProcessor</code> and <code>PyDocumentBuilder</code> have a <code>parse_xml()</code> method. 
In my code I am using <code>PyDocumentBuilder</code>, but I tried both and didn't notice any differences.</p> <p>test.xml</p> <pre><code>&lt;root&gt; &lt;foo&gt;fah&lt;/foo&gt; &lt;/root&gt; </code></pre> <p>Apparently there are <a href="https://stackoverflow.com/a/58455579/3224483">wrong ways to feed your XML to Saxon</a> that can result in no line numbers, but all of the information I found about that is in other languages.</p> <p>Any ideas about what I am doing wrong?</p>
<python><xpath><saxon><xpath-2.0><saxon-c>
2024-02-16 15:24:06
1
3,659
Rainbolt
78,008,188
7,470,925
Best practice to write Python exception in a log file
<p>I defined a <code>log_text()</code> function to print custom messages and write them to a log file. My project architecture is quite simple, with the main script calling functions from another file:</p> <pre><code>main.py | |--- functions.py -- log_text() </code></pre> <p>I would like to catch any Python exception and print it in the log file, to help future users. For now I embedded almost all operations of main in a try/except block like this:</p> <pre><code>import #... class parameters(): log_file_path = #path to the log file prm = parameters() try: #main code except Exception as e: log_text(log_file_path, e) print(e) </code></pre> <p>I am used to doing this for small portions of code, but not the whole script like that. Is this common practice? Is there a better way to write any exception to a log file?</p>
<python><logging>
2024-02-16 14:50:32
1
1,226
jeannej
78,008,134
10,353,865
What is the difference between these three ways of changing an element in a dataframe?
<p>In the code below please uncomment exactly one line with a hash. Do this for all three possible ways/cases and notice the following:</p> <ul> <li>All cases seem to imply that elements in the original dataframe are changed</li> <li>In my opinion they accomplish the same thing (except for the fact that the selected elements differ in the three cases - but that's not my point)</li> <li>Yet: only in case 2 do I get a warning: &quot;SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame&quot;</li> </ul> <p>So here is my question: Why is case 2 particularly bad in comparison to cases 1 and 3? I don't understand why I get a warning. More specifically: even if I accept the rationale behind the warning, I don't understand why it is not issued in the other cases. Can someone explain?</p> <pre><code>import pandas as pd df = pd.DataFrame({&quot;foo&quot;: [1, 2, 3], &quot;bar&quot;: [4, 5, 6]}) # subset = df[&quot;foo&quot;] # case 1 # subset = df.loc[0:1,:] # case 2 # subset = df.loc[0,:] # case 3 subset.iloc[0] = 100 </code></pre>
<python><pandas>
2024-02-16 14:42:13
0
702
P.Jo
78,008,086
5,661,667
Numpy slicing given row for each column
<p>I have an ... x n x m array, say <code>a</code>, where ... stands for an arbitrary number of additional dimensions. Let's call the n-dimension &quot;rows&quot; and the m-dimension &quot;columns&quot; for simplicity, even though the array is higher-dimensional.</p> <p>I also have a vector <code>v</code> of length n which contains indices for the last dimension (ranging from 0 to m-1). I would like to create an array <code>b</code> that uses this vector to extract the indicated column for each row.</p> <p>One can easily do this using a loop. Here is a minimal working example:</p> <pre><code>import numpy as np a = np.round(np.random.rand(2,3,4)*10) v = [0, 2, 1] print(a) &quot;&quot;&quot;[[[ 1. 6. 9. 9.] [ 1. 8. 4. 10.] [ 0. 0. 5. 3.]] [[ 7. 8. 1. 10.] [ 7. 9. 7. 8.] [ 3. 4. 8. 7.]]] &quot;&quot;&quot; b = [] for i in range(len(v)): b.append(a.take(i, axis=-2).take(v[i], axis=-1)) b = np.asarray(b) print(b) &quot;&quot;&quot; [[1. 7.] [4. 7.] [0. 4.]] &quot;&quot;&quot; </code></pre> <p>Is there any smarter way to do this kind of indexing without looping?</p>
<python><arrays><numpy><indexing>
2024-02-16 14:32:27
2
1,321
Wolpertinger
78,008,025
11,170,382
How to use a config from pyproject.toml inside pre-commit-config.yaml
<p>I have a Python repository which has a pyproject.toml and a pre-commit-config.yaml</p> <p>The pyproject.toml looks something like this</p> <pre class="lang-ini prettyprint-override"><code>[tool.poetry] name = &quot;projectname&quot; description = &quot;My project's description&quot; authors = [&quot;My Name &lt;my-name@example.com&gt;&quot;] version = &quot;0.1.0&quot; </code></pre> <p>Now I want to use the name field from pyproject.toml instead of hardcoding it in the following custom hook in my pre-commit config</p> <pre class="lang-yaml prettyprint-override"><code> - repo: local hooks: - id: check-for-utility-imports name: check-for-utility-imports entry: ^from &lt;PROJECT-NAME-COMES-HERE&gt;.utility.* import files: ^&lt;PROJECT-NAME-COMES-HERE&gt;/contracts/.*.py$ language: pygrep types: [ python ] </code></pre> <p>Is there any way to do this?</p>
<python><pre-commit-hook><pre-commit><pre-commit.com><pyproject.toml>
2024-02-16 14:24:38
1
453
Saroopashree Kumaraguru
78,008,017
8,792,159
Can scipy.stats.bootstrap be used to compute confidence intervals for feature weighs in regression or classification tasks?
<p>I am interested in computing confidence intervals for my feature weights using a bootstrap approach. Is <code>scipy.stats.bootstrap</code> able to do this? Consider this classification task as an example (but same idea for regression tasks). We can get <code>coefficients</code> which will return a vector of feature weights.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from sklearn.discriminant_analysis import LinearDiscriminantAnalysis X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]]) y = np.array([1, 1, 1, 2, 2, 2]) clf = LinearDiscriminantAnalysis() clf.fit(X, y) coefficients = clf.coef_ </code></pre> <p>The idea would be to draw samples (with replacement) n-times from <code>X</code> and <code>y</code>, fit the classifier on these batches, get the coefficients and finally compute confidence intervals using the coefficients from all resampling trials.</p>
<python><scipy><bootstrapping>
2024-02-16 14:23:06
1
1,317
Johannes Wiesner
78,007,995
11,861,874
Clear Content of Excel Using Python
<p>I would like to clear the content of the Excel file using Python and re-write it at the end with different values. I would like to do it while Excel remains open and don't want to close it before re-writing it again.</p> <pre><code>import pandas as pd Tab = pd.read_excel(&quot;abc.xls&quot;,sheet_name=&quot;Sheet1&quot;) Col_A = Tab['Col A'] </code></pre> <p>Col_A in Excel currently has the following values. I would like to clear those values except the heading 'Col A' and update them with new values.</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>Old Values</th> </tr> </thead> <tbody> <tr> <td>100</td> </tr> <tr> <td>200</td> </tr> <tr> <td>300</td> </tr> <tr> <td>400</td> </tr> <tr> <td>500</td> </tr> </tbody> </table></div> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>New Values</th> </tr> </thead> <tbody> <tr> <td>600</td> </tr> <tr> <td>700</td> </tr> <tr> <td>800</td> </tr> <tr> <td>900</td> </tr> <tr> <td>1000</td> </tr> </tbody> </table></div> <p>I would like to keep Excel open and save the file after updating new values in Col A.</p>
<python><excel><openpyxl><xlwings>
2024-02-16 14:19:54
1
645
Add
78,007,939
1,023,390
Create numpy array of unit matrices
<p>I have a function that takes a numpy array as an argument to represent a physical 3D vector (with <code>shape=(3,)</code>) or an array of such vectors (with <code>shape=(3,n)</code>). In that function I need to form the 3x3 unit matrix or an array of 3x3 unit matrices, depending on the input. Thus</p> <pre><code>def func(x): if x.ndim == 1: unit = numpy.identity(3) elif x.ndim == 2: unit = ??? # should create 3D array of shape = (d,d,n) where x.shape=(d,n) # with entries unit[i,i,k]=1 and unit[i,j,k]=0 for i≠j </code></pre> <p>(needless to say, I want an efficient way to do this. Doing it by hand, i.e.</p> <pre><code>unit = np.zeros((3,3,n)) unit[0,0,:] = 1 unit[1,1,:] = 1 unit[2,2,:] = 1 </code></pre> <p>is not what I want)</p>
<python><numpy>
2024-02-16 14:09:55
6
45,824
Walter
78,007,916
7,422,392
Django i18n get_language_info_list and get_available_languages provide different language code
<p>With <code>{% get_current_language as CURRENT_LANGUAGE %} </code>, <code>{{ CURRENT_LANGUAGE }}</code> shows <code>nl-nl</code></p> <p>With <code>{% get_available_languages as AVAILABLE_LANGUAGES %} </code>, <code>{{ AVAILABLE_LANGUAGES }}</code> shows <code>[('nl-NL', 'Nederlands'), ('en', 'Engels')]</code></p> <p>With <code>{% get_language_info_list for AVAILABLE_LANGUAGES as LANGUAGES %} </code>, <code>{{ LANGUAGES }}</code> shows <code>[{'bidi': False, 'code': 'nl', 'name': 'Dutch', 'name_local': 'Nederlands', 'name_translated': 'Nederlands'}, {'bidi': False, 'code': 'en', 'name': 'English', 'name_local': 'English', 'name_translated': 'Engels'}]</code></p> <p>Settings:</p> <pre><code># https://docs.djangoproject.com/en/dev/ref/settings/#language-code LANGUAGE_CODE = &quot;nl-NL&quot; # https://docs.djangoproject.com/en/dev/ref/settings/#languages LANGUAGES = [ ('nl-NL', _('Dutch')), ('en', _('English')), ] </code></pre> <p>In <code>{{ LANGUAGES }}</code> I expect <code>code</code> to be <code>nl-NL</code> and not <code>nl</code>. Anyone that can explain this discrepancy?</p> <p>Django version: <code>Django==4.2.10</code></p>
<python><django>
2024-02-16 14:05:39
1
1,006
sitWolf
78,007,899
2,386,113
Converting flat indices to 3D indices in Python
<p>I have an array of flat indices and I want to get the corresponding 3D indices. I want to avoid using a for-loop to convert each flat index to a 3D index one by one.</p> <p>I tried to use numpy's <code>np.unravel_index()</code> method to compute the 3D indices as shown below:</p> <pre><code> import numpy as np # Column vector of flat indices test_flat_indices = np.array([[3957], [8405], [9161], [11105], [969]]) # Shape of 3D array num_rows = 51 num_cols = 51 num_frames = 8 # Convert flat indices to 3D indices indices_3d = np.unravel_index(test_flat_indices, (num_rows, num_cols, num_frames)) # Format the result to [row, col, frame] format indices_3d = np.column_stack(np.array(indices_3d)) print(indices_3d) </code></pre> <p>The above code produces the following <strong>output</strong>:</p> <pre><code> array([[ 9, 35, 5], [20, 30, 5], [22, 23, 1], [27, 11, 1], [ 2, 19, 1]], dtype=int64) </code></pre> <p><strong>Problem:</strong> The above output seems wrong, because if I try to convert the 3D indices above back to flat indices (for verification), the values do not match. For example, the output value <code>[9, 35, 5]</code> should represent 9th-row, 35th-column and 5th-frame, which would actually result in <code>(5x51x51 + 9x51 + 35) = 13499</code>, which is wrong (the correct value should be 3957).</p> <p><strong>NOTE:</strong> If I change the <code>unravel_index()</code> method arguments to <code>np.unravel_index(test_flat_indices, (num_frames, num_rows, num_cols))</code> then, the output is correct, which is:</p> <pre><code> array([[ 1, 26, 30], [ 3, 11, 41], [ 3, 26, 32], [ 4, 13, 38], [ 0, 19, 0]], dtype=int64) </code></pre> <p>but why do I need to put the number of frames first?</p>
<python><numpy><indices>
2024-02-16 14:02:44
0
5,777
skm
78,007,858
1,162,465
PBKDF AES Encryption converting Java to python
<p>I have Java code that does encryption and decryption, but there is some issue in getting the derivedKEK.</p> <p>In Python it is done as</p> <pre><code>derivedKEK = PBKDF2(ENC_KEY, salt, dkLen=32, count=1000, prf=lambda p, s: HMAC.new(p, s, SHA512).digest()) </code></pre> <p>The value I get is of length 32 and the data format is</p> <pre><code>b'K27\xd7\xfc\xbf\xd0d\xec\x8f\xe4\\\x02\x95:0zn\xaf\x8c\xca\xf9\xf8:^(\xf4)\x9b\xd2p\xb8' </code></pre> <p>When the original Java code is used</p> <pre><code> import javax.crypto.Cipher; import javax.crypto.KeyGenerator; import javax.crypto.SecretKey; import javax.crypto.SecretKeyFactory; import javax.crypto.spec.GCMParameterSpec; import javax.crypto.spec.PBEKeySpec; PBEKeySpec pbeKeySpec = new PBEKeySpec(ENC_KEY.toCharArray(), salt, 1000, 256); // derive a Key-Encrypting Key (KEK) based on password and other PBEKeySpec attributes SecretKeyFactory keyFactory = SecretKeyFactory.getInstance(&quot;PBKDF2WithHmacSHA512&quot;); SecretKey derivedKEK = keyFactory.generateSecret(pbeKeySpec); </code></pre> <p>When I sysout derivedKEK.getEncoded(), I get something in this format</p> <pre><code>'[B@5123a213' </code></pre> <p>as a byte object. Here also the length is the same 32 bytes. How can I adjust the Python code to get the same format of derived KEK as Java?</p>
<python><java><encryption>
2024-02-16 13:56:38
0
537
slaveCoder
78,007,791
1,559,401
EOFError while parsing .netrc in CI job for accessing GitLab PyPI package registry
<p>I have a CI job in repo <strong>A</strong> that builds an image (<code>Dockerfile</code> via Kaniko), which requires a package from a package repository from repo <strong>B</strong>.</p> <p>Since this is a task that will repeat in the future quite a bit, I created a GAT (group access token, rights: <strong>api</strong>, role: <strong>Developer</strong>) called <code>CI_GROUP_API</code> with a corresponding group CICD variable with the same name and value, so that my CI job can access all other repos in that group by using it.</p> <p>I use the GAT to generate a <code>.netrc</code> on-the-fly in my CI job and I'm also adjusting my <code>requirements.txt</code> to add the package (will make this change permanent in the future). The files that are required for the CI job can be seen after the error log below. When I run my job I receive:</p> <pre><code>Requirement already satisfied: pip in /usr/local/lib/python3.10/site-packages (24.0) WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. 
It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv Looking in indexes: https://gitlab.company.com/api/v4/projects/58838/packages/pypi/simple User for gitlab.company.com: ERROR: Exception: Traceback (most recent call last): File &quot;/usr/local/lib/python3.10/site-packages/pip/_internal/cli/base_command.py&quot;, line 180, in exc_logging_wrapper status = run_func(*args) File &quot;/usr/local/lib/python3.10/site-packages/pip/_internal/cli/req_command.py&quot;, line 245, in wrapper return func(self, options, args) File &quot;/usr/local/lib/python3.10/site-packages/pip/_internal/commands/install.py&quot;, line 377, in run requirement_set = resolver.resolve( File &quot;/usr/local/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/resolver.py&quot;, line 95, in resolve result = self._result = resolver.resolve( File &quot;/usr/local/lib/python3.10/site-packages/pip/_vendor/resolvelib/resolvers.py&quot;, line 546, in resolve state = resolution.resolve(requirements, max_rounds=max_rounds) File &quot;/usr/local/lib/python3.10/site-packages/pip/_vendor/resolvelib/resolvers.py&quot;, line 397, in resolve self._add_to_criteria(self.state.criteria, r, parent=None) File &quot;/usr/local/lib/python3.10/site-packages/pip/_vendor/resolvelib/resolvers.py&quot;, line 173, in _add_to_criteria if not criterion.candidates: File &quot;/usr/local/lib/python3.10/site-packages/pip/_vendor/resolvelib/structs.py&quot;, line 156, in __bool__ return bool(self._sequence) File &quot;/usr/local/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py&quot;, line 155, in __bool__ return any(self) File &quot;/usr/local/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py&quot;, line 143, in &lt;genexpr&gt; return (c for c in iterator if id(c) not in self._incompatible_ids) File &quot;/usr/local/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py&quot;, 
line 44, in _iter_built for version, func in infos: File &quot;/usr/local/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/factory.py&quot;, line 297, in iter_index_candidate_infos result = self._finder.find_best_candidate( File &quot;/usr/local/lib/python3.10/site-packages/pip/_internal/index/package_finder.py&quot;, line 890, in find_best_candidate candidates = self.find_all_candidates(project_name) File &quot;/usr/local/lib/python3.10/site-packages/pip/_internal/index/package_finder.py&quot;, line 831, in find_all_candidates page_candidates = list(page_candidates_it) File &quot;/usr/local/lib/python3.10/site-packages/pip/_internal/index/sources.py&quot;, line 194, in page_candidates yield from self._candidates_from_page(self._link) File &quot;/usr/local/lib/python3.10/site-packages/pip/_internal/index/package_finder.py&quot;, line 791, in process_project_url index_response = self._link_collector.fetch_response(project_url) File &quot;/usr/local/lib/python3.10/site-packages/pip/_internal/index/collector.py&quot;, line 461, in fetch_response return _get_index_content(location, session=self.session) File &quot;/usr/local/lib/python3.10/site-packages/pip/_internal/index/collector.py&quot;, line 364, in _get_index_content resp = _get_simple_response(url, session=session) File &quot;/usr/local/lib/python3.10/site-packages/pip/_internal/index/collector.py&quot;, line 135, in _get_simple_response resp = session.get( File &quot;/usr/local/lib/python3.10/site-packages/pip/_vendor/requests/sessions.py&quot;, line 602, in get return self.request(&quot;GET&quot;, url, **kwargs) File &quot;/usr/local/lib/python3.10/site-packages/pip/_internal/network/session.py&quot;, line 520, in request return super().request(method, url, *args, **kwargs) File &quot;/usr/local/lib/python3.10/site-packages/pip/_vendor/requests/sessions.py&quot;, line 589, in request resp = self.send(prep, **send_kwargs) File 
&quot;/usr/local/lib/python3.10/site-packages/pip/_vendor/requests/sessions.py&quot;, line 710, in send r = dispatch_hook(&quot;response&quot;, hooks, r, **kwargs) File &quot;/usr/local/lib/python3.10/site-packages/pip/_vendor/requests/hooks.py&quot;, line 30, in dispatch_hook _hook_data = hook(hook_data, **kwargs) File &quot;/usr/local/lib/python3.10/site-packages/pip/_internal/network/auth.py&quot;, line 500, in handle_401 username, password, save = self._prompt_for_password(parsed.netloc) File &quot;/usr/local/lib/python3.10/site-packages/pip/_internal/network/auth.py&quot;, line 455, in _prompt_for_password username = ask_input(f&quot;User for {netloc}: &quot;) if self.prompting else None File &quot;/usr/local/lib/python3.10/site-packages/pip/_internal/utils/misc.py&quot;, line 251, in ask_input return input(message) EOFError: EOF when reading a line error building image: error building stage: failed to execute command: waiting for process to exit: exit status 2 </code></pre> <hr /> <p>My CI job is defined as follows:</p> <pre><code>build-service: stage: build image: name: gcr.io/kaniko-project/executor:debug entrypoint: [''] tags: - asprunner rules: # Build base image only if the Dockerfile or CICD config file have changed and been pushed to the cloud branch - if: '$CI_PIPELINE_SOURCE == &quot;push&quot; &amp;&amp; $CI_COMMIT_BRANCH == &quot;dev&quot;' when: on_success script: - echo &quot;Building image for service X using Kaniko&quot; - | export BASE_REGISTRY_URL=registry.gitlab.company.com/mygroup/base-service export BASE_TOKEN_NAME=CI_GROUP_API export BASE_TOKEN_VALUE=$CI_GROUP_API cat &lt;&lt;EOT &gt;&gt; .netrc machine gitlab.company.com login $BASE_TOKEN_NAME password $BASE_TOKEN_VALUE EOT cat &lt;&lt;EOT &gt;&gt; requirements.txt --index-url https://gitlab.company.com/api/v4/projects/58838/packages/pypi/simple base-service EOT echo &quot;{\&quot;auths\&quot;:{\&quot;${CI_REGISTRY}\&quot;:{\&quot;auth\&quot;:\&quot;$(printf &quot;%s:%s&quot; 
&quot;${CI_REGISTRY_USER}&quot; &quot;${CI_JOB_TOKEN}&quot; | base64 | tr -d '\n')\&quot;},\&quot;${BASE_REGISTRY_URL}\&quot;:{\&quot;username\&quot;:\&quot;${BASE_TOKEN_NAME}\&quot;,\&quot;password\&quot;:\&quot;${BASE_TOKEN_VALUE}\&quot;}}}&quot; &gt; /kaniko/.docker/config.json - /kaniko/executor --context . --dockerfile Dockerfile --insecure --skip-tls-verify --skip-tls-verify-pull --insecure-pull --destination &quot;${CI_REGISTRY_IMAGE}:v0.2&quot; </code></pre> <p>My corresponding <code>Dockerfile</code> can be seen below:</p> <pre><code>FROM registry.company.com/mygroup/base-service:v1.0 ENV DEBIAN_FRONTEND noninteractive COPY ./requirements.txt requirements.txt COPY ./.netrc .netrc RUN echo &quot;----------- requirements.txt -----------&quot; &amp;&amp; cat requirements.txt RUN echo &quot;----------- .netrc -----------&quot; &amp;&amp; cat .netrc RUN pip3 install --upgrade pip &amp;&amp; pip3 install --no-cache -r requirements.txt RUN mkdir -p service COPY main.py service/ COPY gltf_transformer service/ RUN rm requirements.txt .netrc ENTRYPOINT [&quot;sh&quot;, &quot;python3 main.py&quot;] </code></pre> <p>The resulting <code>pip</code>-related files look like this:</p> <p><strong>requirements.txt</strong></p> <pre><code>pygltflib --index-url https://gitlab.company.com/api/v4/projects/58838/packages/pypi/simple x-base-service </code></pre> <p><strong>.netrc</strong></p> <pre><code>machine gitlab.company.com login CI_GROUP_API password [MASKED] </code></pre> <p>I thought that perhaps the <code>[MASKED]</code>, which is how GitLab hides masked variables in the CI job's logs, might have been written to the <code>.netrc</code> as the actual value. 
However, first, the error occurs while reading the user, and second, I added</p> <pre><code>artifacts: paths: - requirements.txt - .netrc when: on_failure</code></pre> <p>to my job to get the files and verified that the token's value is there.</p> <p>I have also downloaded both files locally to my system and ran pip for the <code>requirements.txt</code> using the <code>.netrc</code> from the job's artifacts. It worked just fine.</p>
<python><gitlab><continuous-integration><.netrc>
2024-02-16 13:48:09
1
9,862
rbaleksandar
78,007,786
9,722,279
set location of .snakemake directory
<p>Is there a way to define where snakemake creates the <em>.snakemake</em> folder? I could change the whole working dir with <code>--directory</code>, however this also changes the storage location of every relative path.</p> <p>I thought <code>--shadow-prefix</code> is what I was hoping for, but <em>.snakemake/scripts</em> for example is still created in the working dir.</p> <p>The reason is that I am working on a shared CIFS/SMB drive (which I have no control over). In theory I have read/write access, but for some actions snakemake performs in the .snakemake folder it refuses to give permission (<a href="https://stackoverflow.com/questions/77990826/cifs-smb-with-inconsistent-read-write-permissions">don't know why</a>). So my hope is that I could place <em>.snakemake</em> on a small local drive, but the data of the workflow is still written to the CIFS/SMB.</p>
<python><shell><symlink><snakemake>
2024-02-16 13:47:39
1
598
kEks
78,007,663
6,224,975
Pytorch reserving way more data than needed
<p>I'm trying to finetune a SentenceTransformer.</p> <p>The issue is that I'm running into an OOM-error (I'm using google-cloud to train the model).</p> <p>I keep getting that PyTorch reserves ~13GB (there's ~14GB available), thus there's no room for any batch.</p> <p>If I try to calculate the actual memory used, it's around 1.3 GB.</p> <pre class="lang-py prettyprint-override"><code>import torch from sentence_transformers import SentenceTransformer, models model_name = &quot;alexandrainst/scandi-nli-large&quot; word_embedding_model = models.Transformer(model_name, max_seq_length=512) pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension()) model = SentenceTransformer(modules=[word_embedding_model, pooling_model]) model.to(&quot;cuda&quot;) param_size = 0 for param in model.parameters(): param_size += param.nelement() * param.element_size() buffer_size = 0 for buffer in model.buffers(): buffer_size += buffer.nelement() * buffer.element_size() size_all_mb = (param_size + buffer_size) / 1024 ** 2 print('model size: {:.3f}MB'.format(size_all_mb)) # ~1300 torch.cuda.memory_reserved()/(1024**2) # ~1300 </code></pre> <p>I have tried to call <code>torch.cuda.empty_cache()</code> and set <code>os.environ[&quot;PYTORCH_CUDA_ALLOC_CONF&quot;] = &quot;max_split_size_mb:128&quot;</code> but the same error occurs.</p> <p>Isn't there a way to keep PyTorch from reserving memory (or at least reduce it) and just use the memory which is needed?</p>
<python><memory><pytorch><sentence-transformers>
2024-02-16 13:24:34
1
5,544
CutePoison
78,007,647
9,506,773
Cannot mimic manual document split in Azure, programatically, using Azure SplitSkill
<p>I am going from a manual setup of my RAG solution in Azure to setting up everything programmatically using the Azure Python SDK. I have a container with a single <code>pdf</code>. When setting up manually, I see that the Document count under the created index is 401 when setting the chunking to 256. When using my custom skillset:</p> <pre class="lang-py prettyprint-override"><code>split_skill = SplitSkill( name=&quot;split&quot;, description=&quot;Split skill to chunk documents&quot;, context=&quot;/document&quot;, text_split_mode=&quot;pages&quot;, default_language_code=&quot;en&quot;, maximum_page_length=300, # why cannot this be set to 256 if I can do this with a manual setup? page_overlap_length=30, inputs=[ InputFieldMappingEntry(name=&quot;text&quot;, source=&quot;/document/content&quot;), ], outputs=[ OutputFieldMappingEntry(name=&quot;textItems&quot;, target_name=&quot;pages&quot;) ], ) </code></pre> <p>I get 271. I want to mimic my manual chunking setup as much as possible, as I already have good performance. What am I missing? Alternatively, could somebody point me to the default setup for chunking when it is performed by hand?</p> <h1>22 FEB EDIT</h1> <p>Answering @JayashankarGS</p> <blockquote> <p>According to this doc the minimum value you need give is 300. learn.microsoft.com/en-us/azure/search/… Chunking in RAG is not as same as maximumPageLength in split skillset.</p> </blockquote> <p>To me it looks like <code>maximum_page_length</code> is exactly <code>chunking_size</code>. But you are right, as of today, <a href="https://learn.microsoft.com/en-us/azure/search/cognitive-search-skill-textsplit#skill-parameters" rel="nofollow noreferrer">there is nothing to do regarding selecting a chunk size of less than 300 using SplitSkill</a>...</p> <p><a href="https://i.sstatic.net/22GJy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/22GJy.png" alt="enter image description here" /></a></p>
<python><azure-cognitive-services><azure-python-sdk>
2024-02-16 13:22:05
1
3,629
Mike B
78,007,627
17,778,275
Combining multiple sheets in excel file to one
<p>I have a use case wherein I have multiple sheets in one Excel file [there can be any number of files]; the columns in each sheet can be anything, but I want only &quot;Roll Number&quot; and &quot;Brief&quot; from all the sheets combined, with each sheet as a column in the new one.</p> <p>For eg.</p> <p>If &quot;ClassA&quot; is the sheet name and the contents are</p> <pre><code>Roll Number Brief email 11 Maths 11 abc@abc 11 Science 12 abc@abc 12 History </code></pre> <p>If &quot;ClassB&quot; is the sheet name and the contents are</p> <pre><code>Roll Number Brief email 11 Art 71 abc@abc 13 Science 12 abc@abc 12 Maths </code></pre> <p>The output I need is</p> <pre><code>Roll Number ClassA ClassB 11 Maths 11 Art 71 11 Science 12 12 History Maths 13 Science 12 </code></pre> <p>So I need to merge the df by roll number and want the subsequent &quot;Brief&quot; column information.</p> <p>I tried</p> <pre><code>import pandas as pd xls = pd.ExcelFile(inputfile) sheet_names = xls.sheet_names print(sheet_names) combined_df = None for sheet in sheet_names: df = pd.read_excel(xls, sheet_name=sheet) df = df[['Roll Number', 'Brief']] df = df.rename(columns={'Brief': sheet}) if combined_df is None: combined_df = df else: combined_df = pd.merge(combined_df, df, on='Roll Number', how='outer') print(combined_df) combined_df.to_excel('combined2.xlsx', index=False) </code></pre> <p>This is creating duplicates or is somewhat irregular, for example</p> <pre><code>Roll Number ClassA ClassB 11 Maths 11 Art 71 11 Science 12 Art 71 12 History Maths 13 Science 12 </code></pre>
<python><pandas><dataframe><group-by><merge>
2024-02-16 13:19:14
1
354
spd
78,007,405
6,485,881
Aggregating over subsets of columns in Numpy array
<p>I'm trying to compute the aggregate metrics (e.g. mean, median, sum) of column subsets in a numpy array.</p> <p>Take this array for example:</p> <pre><code>1 6 3 4 2 3 4 5 1 4 5 6 3 5 6 7 </code></pre> <p>I have a set of clusters, which are lists of column indices like this:</p> <pre class="lang-py prettyprint-override"><code>clusters = [[0, 1], [2], [3]] </code></pre> <p>Both the array and the list of cluster indices can be large and I can guarantee that each column in the array belongs to exactly one cluster, i.e. there are no duplicates in the clusters list. The indexes in the list aren't necessarily ordered, i.e. <code>[[0, 3], [2, 1]]</code> would also be a valid cluster.</p> <p>What I'm looking for is for example summing up the values per cluster - the result for the example above would look like this:</p> <pre><code>[25, 18, 22] </code></pre> <p>A simple implementation in python could look like this:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np arr = np.array([ [1, 6, 3, 4], [2, 3, 4, 5], [1, 4, 5, 6], [3, 5, 6, 7], ]) clusters = [[0, 1], [2], [3]] result = np.array([arr[:,c_indices].sum() for c_indices in clusters]) # array([25, 18, 22]) </code></pre> <p>My problem is that the matrix and number of clusters can grow quite large and I'd like to avoid looping in Python and instead keeping as much of this as possible in numpy's C-implementation for performance reasons.</p> <p><strong>Are there more efficient ways of doing this?</strong> (Ideally compatible with min, max, median, mean, and sum)</p>
<python><numpy>
2024-02-16 12:39:50
1
13,322
Maurice
78,007,299
1,009,823
How can I sample from a table with weights
<p>I have a table that looks like this:</p> <pre><code>view weight A 1 B 1 C 2 D 1 E 1 F 1 G 3 </code></pre> <p>I would like to sample from this table, but use the <code>weight</code> column to determine the probability of being chosen. For the above table, A, B, D, E, and F would all be chosen with probability 0.1, C would be chosen with probability 0.2, and G with probability 0.3.</p> <p>Answers in Python or SQL would be great.</p>
<python><mysql>
2024-02-16 12:22:18
2
7,617
Robert Long
78,007,071
13,803,549
Django ManyToMany not saving
<p>I have a ModelMultipleChoiceField that I try to save as ManyToMany relationships. I have tried almost every solution on the internet, but for some reason I can't get mine to save. I don't get any errors, and the rest of the form submits, just not the ManyToMany.</p> <pre class="lang-py prettyprint-override"><code> Models.py class Game(models.Model): players = models.ManyToManyField(Student, blank=True) other fields... class Student(models.Model): user_name = models.CharField(max_length=150, null=False, blank=False) </code></pre> <pre class="lang-py prettyprint-override"><code> Forms.py class Game(forms.ModelForm): players = forms.ModelMultipleChoiceField(widget=forms.CheckboxSelectMultiple(), queryset=Student.objects.all(), label=&quot;Players&quot;, required=False) </code></pre> <pre class="lang-py prettyprint-override"><code> Views.py def creategame(request): if request.method == &quot;POST&quot;: form = Game(request.POST) if form.is_valid(): form.save() return redirect(reverse('management')) else: ... </code></pre> <p>I tried multiple solutions like this, but none of them worked.</p> <pre class="lang-py prettyprint-override"><code> Views.py def creategame(request): if request.method == &quot;POST&quot;: form = Game(request.POST) if form.is_valid(): new_game = form.save(commit=False) new_game.save() form.save_m2m() return redirect(reverse('management')) else: ... </code></pre>
<python><django><django-forms><many-to-many>
2024-02-16 11:39:14
1
526
Ryan Thomas
78,007,063
1,506,763
Chained expressions in polars not working
<p>I'm trying to learn a bit about using polars so I had a simple problem to test it out where I ultimately want to run a <code>group_by</code> operation on the data.</p> <p>While analysing the data I create a few extra series from the initial data by adding and then cumulating.</p> <p>I understand that when you want to use a newly created variable with an expression, it needs to be in another chained <code>with_columns</code> but I can't seem to make it work.</p> <p>I have the following example code which I believe should be correct, but it fails. Here's the code:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import polars as pl data = np.random.random((50,5)) df = pl.from_numpy(data, schema=[&quot;id&quot;, &quot;sampling_time&quot;, &quot;area&quot;, &quot;val1&quot;, &quot;area_corr&quot;]) (df .with_columns([ pl.col(&quot;id&quot;).cast(pl.Int32), pl.Series(name=&quot;total_area&quot;, values=df.select(pl.col(&quot;area&quot;) + pl.col(&quot;area_corr&quot;))), ]) .with_columns([ pl.Series(name=&quot;cumulative_area&quot;, values=df.select(pl.cum_sum(&quot;total_area&quot;)) / 0.15), ]) .with_columns([ pl.Series(name=&quot;parcel_id&quot;, values=df.select(pl.col(&quot;cumulative_area&quot;).cast(pl.Int32))), ]) ) </code></pre> <p>However the snippet fails with the following stacktrace:</p> <pre class="lang-py prettyprint-override"><code>Traceback (most recent call last): File &quot;C:\Users\xxx\anaconda3\envs\py38\lib\site-packages\IPython\core\interactiveshell.py&quot;, line 3508, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File &quot;&lt;ipython-input-8-8e61e84b3c85&gt;&quot;, line 7, in &lt;module&gt; pl.Series(name=&quot;cumulative_area&quot;, values=df.select(pl.cum_sum(&quot;total_area&quot;)) / 0.15), File &quot;C:\Users\xxx\anaconda3\envs\py38\lib\site-packages\polars\dataframe\frame.py&quot;, line 8142, in select return self.lazy().select(*exprs, **named_exprs).collect(_eager=True) File 
&quot;C:\Users\xxx\anaconda3\envs\py38\lib\site-packages\polars\lazyframe\frame.py&quot;, line 1940, in collect return wrap_df(ldf.collect()) polars.exceptions.ColumnNotFoundError: total_area Error originated just after this operation: DF [&quot;id&quot;, &quot;sampling_time&quot;, &quot;area&quot;, &quot;val1&quot;]; PROJECT */5 COLUMNS; SELECTION: &quot;None&quot; </code></pre> <p>I don't understand why the newly created <code>total_area</code> series is not found.</p> <p>I'm on polars 0.20.7 with python 3.8.18</p>
<python><python-polars>
2024-02-16 11:37:42
1
676
jpmorr
78,006,992
4,634,965
very high level embedded Python in C / C++ application is not using Python from activated venv environment
<p>I followed the official Python documentation to do a very high-level Python runtime embedding in my C or C++ application.</p> <ul> <li><a href="https://docs.python.org/3/extending/embedding.html#very-high-level-embedding" rel="nofollow noreferrer">https://docs.python.org/3/extending/embedding.html#very-high-level-embedding</a></li> </ul> <pre class="lang-c prettyprint-override"><code>#define PY_SSIZE_T_CLEAN #include &lt;Python.h&gt; int main(int argc, char *argv[]) { wchar_t *program = Py_DecodeLocale(argv[0], NULL); if (program == NULL) { fprintf(stderr, &quot;Fatal error: cannot decode argv[0]\n&quot;); exit(1); } Py_SetProgramName(program); /* optional but recommended */ Py_Initialize(); PyRun_SimpleString(&quot;import sys\n&quot; &quot;print(sys.path)\n&quot;); if (Py_FinalizeEx() &lt; 0) { exit(120); } PyMem_RawFree(program); return 0; } </code></pre> <p>The problem is, the basic Python installation from the operating system always gets loaded, even when I activate a (venv) Python environment before I compile and run my C or C++ code.</p> <p>I am not sure if the changes I have to make are in the embedding code itself, or in the compile command below.</p> <p>This is the compile command for the C++ code I embedded:</p> <pre class="lang-bash prettyprint-override"><code>g++ -march=native -O3 -fomit-frame-pointer -mfpmath=both -fopenmp -m64 -std=c++11 -o project `python3-config --cflags --embed` main.cpp `python3-config --ldflags --embed` </code></pre> <p>This is the minimal C compiler command that causes the same trouble. 
For a demonstration of the problem, please activate your python environment and compile the code from above.</p> <pre class="lang-bash prettyprint-override"><code>gcc `python3-config --cflags --embed` main.c `python3-config --ldflags --embed` </code></pre> <p>Run the code:</p> <pre class="lang-bash prettyprint-override"><code>./a.out </code></pre> <p>Now, fire up a python shell, run the same command.</p> <pre class="lang-py prettyprint-override"><code>import sys print(sys.path) </code></pre> <p>You will realize that the path output differs.</p> <p>Any help on how to overcome this issue is welcome.</p>
<python><c++><c>
2024-02-16 11:25:02
0
693
bue
78,006,930
3,961,495
Matplotlib not plotting figure (not responding) on Windows 10 powershell when using "plt.ion()"
<p>I'm working on Windows 10 and I have encountered an issue that I don't have on Ubuntu:</p> <p>From the command line in powershell I run <code>python myscript.py</code> where the script has the following code:</p> <pre><code>from matplotlib import pyplot as plt plt.plot([1,2], [1,1]) plt.ion() plt.show() import pdb pdb.set_trace() pass </code></pre> <p>The following happens: The plot appears, but very quickly the window gives the &quot;thinking&quot; icon. In the task bar the message &quot;Not responding&quot; is shown. The thinking icon appears after having entered something on the command line.</p> <p>I've tried:</p> <ul> <li>various matplotlib backends</li> <li>ipdb (as opposed to pdb). Now only a little window appears, with the thinking icon and &quot;Not responding&quot; in the task bar.</li> <li>Adding <code>plt.pause(1)</code> after <code>plt.show()</code> (and after <code>plt.ion()</code>)</li> <li>Writing <code>plt.pause(1)</code> <em>instead</em> of <code>plt.show()</code> --&gt; Some progress, see below:</li> </ul> <p><strong>Edit</strong>:</p> <p>Thanks to Sergei Kox I found a work-around for my original question.</p> <p>However, I usually run powershell inside an inferior emacs shell. I did not mention that, as I thought it was a powershell/windows issue.</p> <p>Unfortunately, the workaround does not work as well when powershell runs inside emacs. The difference is like this:</p> <p><em>Just Powershell (not inside emacs):</em></p> <p>Using <code>plt.pause(1)</code> instead of <code>plt.show()</code> the figure becomes responsive and mostly stays responsive! (I also had a few cases where I had to re-type <code>plt.pause(1)</code> on the command line). So this is actually a (clunky) work-around, thanks @Sergei Kox.</p> <p><em>Powershell inside emacs inferior shell:</em></p> <p>The figure becomes responsive only during the pause. Once the pause has elapsed, the figure becomes immediately unresponsive. 
Typing again <code>plt.pause(1)</code> the figure becomes responsive again, but only during the pausing ...</p> <p>... can this be solved!?</p>
<python><matplotlib><emacs><windows-10>
2024-02-16 11:13:05
1
3,127
Ytsen de Boer
78,006,685
19,556,055
Second kivy screen ScreenManager accepts only Screen Widget error
<p>I'm starting out with kivy and trying to write some simple little code to follow along with a video, but already I'm getting errors that he is not. I have the following code...</p> <p><strong>my_tours.kv</strong></p> <pre><code>&lt;MyTours&gt;: Button: text: &quot;Planned trips&quot; on_release: app.change_screen(&quot;settings&quot;) </code></pre> <p><strong>settings.kv</strong></p> <pre><code>&lt;Settings&gt;: Button: text: &quot;Settings&quot; </code></pre> <p><strong>main.kv</strong></p> <pre><code>#:include kv/my_tours.kv #:include kv/settings.kv GridLayout: cols: 1 ScreenManager: id: screen_manager MyTours: name: &quot;my_tours&quot; id: my_tours Settings: name: &quot;settings&quot; id: settings </code></pre> <p><strong>main.py</strong></p> <pre><code>from kivy.app import App from kivy.lang import Builder from kivy.uix.screenmanager import Screen class MyTours(Screen): pass class Settings(Screen): pass gui = Builder.load_file(&quot;main.kv&quot;) class MainApp(App): def build(self): return gui def change_screen(self, screen_name): # Get the screen manager from the kv file screen_manager = self.root.ids[&quot;screen_manager&quot;] screen_manager.current = screen_name MainApp().run() </code></pre> <p>If I only include the MyTours screen in the main.kv file, everything works fine. For some reason, the Settings screen does not work, on its own or together with MyTours. Perhaps I'm overlooking something, but I cannot find a difference between the files or code of the two screens. The error is <code>kivy.uix.screenmanager.ScreenManagerException: ScreenManager accepts only Screen widget.</code> What am I doing wrong here?</p>
<python><kivy>
2024-02-16 10:32:30
1
338
MKJ
78,006,568
10,334,846
Bash command calculates wrong time
<p>I am using a bash script and want to do a for loop of time from 3 hours ago to 3 hours later. For example, the pseudocode follows:</p> <pre><code>time_end = 2000010100 #YYYYMMDDHH time_beg = 1996010100 #YYYYMMDDHH for time_beg -le time_end # while time_beg&lt;=time_end time_tmp = time_beg + 3 hours do some calculation from time_beg to time_tmp time_beg = time_tmp end for </code></pre> <p>And the part <code>do some calculation from time_beg to time_tmp</code> is a model to simulate something from <code>1996-01-01-00</code> to <code>1996-01-01-03</code> (for example). Then the next run would start from <code>1996-01-01-03</code> to <code>1996-01-01-06</code>.</p> <p>To do the part of <code>time_tmp = time_beg + 3 hours</code>, I use the following code:</p> <pre><code>YMDH=$YYYYMMDDHH time_resolution=3 INC=+${time_resolution}hours YMDH_newend=`date +%Y%m%d%H -d &quot;${YMDH::8} ${YMDH:8:2} ${INC}&quot;` echo ${YMDH_newend} </code></pre> <p>I tested it and it worked well, but got an error when <code>$YYYYMMDDHH=1996033101</code>. Running the code above yields <code>$YYYYMMDDHH=1996033105</code>, which is wrong and should be <code>$YYYYMMDDHH=1996033104</code>...</p> <p>So what happened? Or is there a smarter way to do this, like using Python? Something like <code>$(python -c &quot;print(blablabla about YYYYMMDDHH)&quot;)</code>?</p> <p>Thanks!</p>
<python><linux><bash><shell><datetime>
2024-02-16 10:12:20
0
325
Xu Shan
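The jump from 01 to 05 happens because GNU `date` does its arithmetic in the local timezone, and on 1996-03-31 many European timezones skip an hour for daylight saving. Two ways out: pass `-u` to `date` so the arithmetic runs in UTC, or do it in Python with naive datetimes, which have no DST at all. A minimal sketch of the Python route (the function name `step_hours` is mine):

```python
from datetime import datetime, timedelta

def step_hours(ymdh: str, hours: int = 3) -> str:
    """Add `hours` to a YYYYMMDDHH stamp; naive datetimes have no DST."""
    t = datetime.strptime(ymdh, "%Y%m%d%H") + timedelta(hours=hours)
    return t.strftime("%Y%m%d%H")

print(step_hours("1996033101"))  # 1996033104, not 1996033105
```

In bash, `date -u +%Y%m%d%H -d "..."` should give the same DST-free behaviour, since UTC has no transitions.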
78,006,473
11,629,296
Find total no of days spent by people from entry and exit day in pandas/python
<p>I have a data frame like this,</p> <pre><code>day entry_persons exit_persons 1 4 0 2 2 1 3 3 4 4 5 0 5 0 1 </code></pre> <p>So basically, for 5 days people are entering and exiting a campus, and I want to calculate the total number of days people stayed inside. For example,</p> <pre><code>day 1 -- 4 people day 2 -- 4(from previous day)+2(current day)-1(exit), so 5 day 3 -- 5+3-4 so 4 day 4 -- 4+5 so 9 day 5 -- 9-1 so 8 so the output will be total number of days spent by all people = 4+5+4+9+8 = 30 </code></pre> <p>Looking for some pandas trick to do this efficiently.</p>
<python><pandas><dataframe>
2024-02-16 09:56:18
1
2,189
Kallol
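One pandas approach, sketched with the question's own numbers: the number of people inside at the end of each day is the running total of entries minus exits, and the answer is the sum of that running total.

```python
import pandas as pd

df = pd.DataFrame({
    "day": [1, 2, 3, 4, 5],
    "entry_persons": [4, 2, 3, 5, 0],
    "exit_persons": [0, 1, 4, 0, 1],
})

# people inside at the end of each day = running total of entries minus exits
inside = (df["entry_persons"] - df["exit_persons"]).cumsum()

# summing the daily headcounts gives the total person-days
total_person_days = int(inside.sum())
print(total_person_days)  # 30 = 4 + 5 + 4 + 9 + 8
```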
78,006,347
4,883,949
Python program freezes when multiprocessing call follows sequential call
<p>I encountered a strange problem with a Python program processing images with the <code>pyvips</code> library. Here is a simplified version of my problem.</p> <p>I try to create a numpy array (coming from zeros in my simple example) several times (2 times in the example).</p> <p>When I do it sequentially, everything is fine. When I use the <code>multiprocessing</code> module, it also works as it should. But during my tests I noticed <strong>the program freezes when I perform these two runs rapidly one after another</strong>. Hence the program below. When I run it, it freezes before the first &quot;step 3&quot; in the multiprocessing part.</p> <p>I think it is linked to a garbage collection mechanism but I am lost here.</p> <p>I'm using python 3.11.3 and pyvips 2.2.2 (with vips-8.15.1). I use Spyder with its IPython console on a linux machine (a Ubuntu 22.04 based distribution).</p> <pre><code>import multiprocessing import numpy as np import pyvips def myfunction(useless): print(&quot;step 1&quot;) image = pyvips.Image.new_from_array(np.zeros((8, 8, 3)), interpretation=&quot;rgb&quot;) print(&quot;step 2&quot;) res = np.asarray(image) print(&quot;step 3&quot;) return res def main(nbpool): srcrange = range(2) if nbpool == 0: res = list() for srcid in srcrange: res.append(myfunction(srcid)) return res else: pool = multiprocessing.Pool(nbpool) return pool.map(myfunction, srcrange) if __name__ == &quot;__main__&quot;: res1 = main(0) print(&quot;Now with multiprocessing&quot;) res2 = main(1) </code></pre>
<python><multiprocessing><vips>
2024-02-16 09:33:14
0
938
proprit
78,006,246
8,462,809
Pandas dataframe to BQ fail with timestamp column
<p>I encountered a problem with the timestamp format.</p> <p>I want to retrieve the current timestamp with a function and use it in a pandas dataframe as below:</p> <pre><code>df['CURRENT_TS'] = pd.Timestamp.now() </code></pre> <p>After that, the DF will be converted to CSV on a GCS bucket. Following this, the csv will be ingested into BQ using <strong>load_table_from_uri</strong>:</p> <pre><code>Error:load_table_from_uri() : POST https://bigquery.googleapis.com/bigquery/v2/...: Invalid value for mode: TIMESTAMP is not a valid value 'message': 'Invalid value for mode: TIMESTAMP is not a valid value', 'domain': 'global', 'reason': 'invalid' </code></pre>
<python><pandas><dataframe><google-bigquery>
2024-02-16 09:17:14
1
459
Jonito
78,006,234
1,559,401
Creating Python module from Fortan using CMake and make yields no errors but importing module in Python fails due to undefined symbol
<p>I am receiving</p> <pre><code>Python 3.12.0 | packaged by conda-forge | (main, Oct 3 2023, 08:43:22) [GCC 12.3.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import foo Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; ImportError: /home/.../myproject/build/foo.cpython-312-x86_64-linux-gnu.so: undefined symbol: f2pywrapfoo_ </code></pre> <p>when trying to load a Python module created using CMake from Fortran code using <code>numpy.f2py</code>.</p> <hr /> <p>I am following the <a href="https://numpy.org/doc/stable/f2py/buildtools/cmake.html" rel="nofollow noreferrer">official guide</a> for using <code>numpy.f2py</code> in CMake. I am using the latest version of NumPy (1.26), CMake 3.22 and the <code>gfortran</code> and <code>gcc</code> compilers from the Xubuntu 22.04 repos (11.4 and 12.3 respectively).</p> <p>My Fortran function is even simpler (since I am very new to the language) than the example in the documentation, namely</p> <pre><code>function foo(a) result(b) implicit none real(kind=8), intent(in) :: a(:,:) complex(kind=8) :: b(size(a,1),size(a,2)) b = exp((0,1)*a) end function foo </code></pre> <p>The part of my CMake that handles the module generation is</p> <pre><code>if(PYTHON_F2PY) # https://numpy.org/doc/stable/f2py/buildtools/cmake.html # https://numpy.org/doc/stable/f2py/usage.html message(&quot;Creating Python module from Fortran code enabled&quot;) # Example for interfacing with Python using f2py # Check if Python with the required version and components is available find_package(Python 3.12 REQUIRED COMPONENTS Interpreter Development.Module NumPy) # Grab the variables from a local Python installation # F2PY headers execute_process( COMMAND &quot;${Python_EXECUTABLE}&quot; -c &quot;import numpy.f2py; print(numpy.f2py.get_include())&quot; OUTPUT_VARIABLE F2PY_INCLUDE_DIR OUTPUT_STRIP_TRAILING_WHITESPACE ) 
#message(&quot;${F2PY_INCLUDE_DIR}&quot;) # Print out the discovered paths include(CMakePrintHelpers) cmake_print_variables(Python_INCLUDE_DIRS) cmake_print_variables(F2PY_INCLUDE_DIR) cmake_print_variables(Python_NumPy_INCLUDE_DIRS) # Common variables set(f2py_module_name &quot;foo&quot;) set(fortran_src_file &quot;${CMAKE_SOURCE_DIR}/src/foo.f90&quot;) set(f2py_module_c &quot;${f2py_module_name}module.c&quot;) # Generate sources add_custom_target( genpyf DEPENDS &quot;${CMAKE_CURRENT_BINARY_DIR}/${f2py_module_c}&quot; ) add_custom_command( OUTPUT &quot;${CMAKE_CURRENT_BINARY_DIR}/${f2py_module_c}&quot; COMMAND ${Python_EXECUTABLE} -m &quot;numpy.f2py&quot; &quot;${fortran_src_file}&quot; -m &quot;${f2py_module_name}&quot; --lower # Important DEPENDS &quot;src/foo.f90&quot; # Fortran source ) # Set up target Python_add_library(foo MODULE WITH_SOABI &quot;${CMAKE_CURRENT_BINARY_DIR}/${f2py_module_c}&quot; # Generated &quot;${F2PY_INCLUDE_DIR}/fortranobject.c&quot; # From NumPy &quot;${fortran_src_file}&quot; # Fortran source(s) #&quot;foo-f2pywrappers2.f90&quot; ) # Depend on sources target_link_libraries(foo PRIVATE Python::NumPy) add_dependencies(foo genpyf) target_include_directories(foo PRIVATE &quot;${F2PY_INCLUDE_DIR}&quot;) endif(PYTHON_F2PY) </code></pre> <p>Inside my building directory I run</p> <pre><code>cmake -Wno-dev -DPYTHON_F2PY=1 .. </code></pre> <p>to generate the project, followed by</p> <pre><code>make -j10 </code></pre> <p>to build it.</p> <p>Among others I get the following files:</p> <ul> <li><p><code>foo.cpython-312-x86_64-linux-gnu.so</code> - the shared library that I can load in Python</p> </li> <li><p><code>foo-f2pywrappers.f</code> - an empty Fortran file</p> </li> <li><p><code>foo-f2pywrappers2.f90</code> - Fortran file containing some wrapper code</p> <pre><code>! -*- f90 -*- ! This file is autogenerated with f2py (version:1.26.4) ! It contains Fortran 90 wrappers to fortran functions. 
subroutine f2pywrapfoo (foof2pywrap, a, f2py_a_d0, f2py_a_d1) integer f2py_a_d0 integer f2py_a_d1 real(kind=8) a(f2py_a_d0,f2py_a_d1) complex(kind=8) foof2pywrap(size(a, 1),size(a, 2)) interface function foo(a) result (b) real(kind=8), intent(in),dimension(:,:) :: a complex(kind=8), dimension(size(a,1),size(a,2)) :: b end function foo end interface foof2pywrap = foo(a) end </code></pre> </li> <li><p><code>foomodule.c</code> - the C code generated for my module that will is used to build shared the library</p> </li> </ul> <p>The error message</p> <pre><code>Python 3.12.0 | packaged by conda-forge | (main, Oct 3 2023, 08:43:22) [GCC 12.3.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import foo Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; ImportError: /home/.../myproject/build/foo.cpython-312-x86_64-linux-gnu.so: undefined symbol: f2pywrapfoo_ </code></pre> <p>points as <code>f2pywrapfoo_</code>. As you can see above, the <code>foo-f2pywrappers2.f90</code> file contains the function (albeit without the <code>_</code> suffix).</p> <p>What I did is to add that wrapper to the list of Fortran source files</p> <pre><code># Set up target Python_add_library(foo MODULE WITH_SOABI &quot;${CMAKE_CURRENT_BINARY_DIR}/${f2py_module_c}&quot; # Generated &quot;${F2PY_INCLUDE_DIR}/fortranobject.c&quot; # From NumPy &quot;${fortran_src_file}&quot; &quot;foo-f2pywrappers2.f90&quot; # Fortran source(s) ) </code></pre> <p>I run CMake and <code>make</code> again. When I repeat the steps for importing the module, now it works:</p> <pre><code>Python 3.12.0 | packaged by conda-forge | (main, Oct 3 2023, 08:43:22) [GCC 12.3.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. 
&gt;&gt;&gt; from foo import foo &gt;&gt;&gt; import numpy as np &gt;&gt;&gt; a = np.array([[1,2,3,4], [5,6,7,8]], order='F') &gt;&gt;&gt; foo(a) array([[ 0.54030231+0.84147098j, -0.41614684+0.90929743j, -0.9899925 +0.14112001j, -0.65364362-0.7568025j ], [ 0.28366219-0.95892427j, 0.96017029-0.2794155j , 0.75390225+0.6569866j , -0.14550003+0.98935825j]]) </code></pre> <p>The problem is that <code>foo-f2pywrappers2.f90</code> is not available during the first run of CMake and <code>make</code>. It is created only after <code>make</code> is executed. So I cannot really add it as a dependency for the library building stage in the <code>CMakeLists.txt</code>.</p> <p>Any ideas what to change in order to make this work?</p>
<python><c><cmake><fortran><f2py>
2024-02-16 09:14:58
1
9,862
rbaleksandar
78,006,150
4,704,065
Check element from 1 column is present in another dataframe with different lengths and names
<p>I have 2 DFs with different names and lengths, but a few elements of one column are common to both.</p> <p>I want to check if a particular element is present in the second DF or not, and if present, copy the elements of that DF into the first DF as a new column.</p> <p>The <strong>result</strong> column in the first DF indicates that <strong>iTOW</strong> of the first DF is present in the second DF.</p> <p>DF1: <a href="https://i.sstatic.net/WZ5go.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WZ5go.png" alt="enter image description here" /></a></p> <p>DF2: <a href="https://i.sstatic.net/fVlOZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fVlOZ.png" alt="enter image description here" /></a></p> <p>I need to check if <strong>iTOW</strong> of the first DF is present in the second DF: <strong>_iTOW</strong>. If yes, then copy that row into the first DF as a new column. I should then remove the rows from the first DF where result==0 / elements which are not common.</p> <p>Expected DF: <a href="https://i.sstatic.net/eFuD2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eFuD2.png" alt="enter image description here" /></a></p> <p>I tried the code below but was not able to copy the common elements of the second DF to the first DF:</p> <p>DF1=DF2.assign(result=source_df['iTOW'].isin(sv_df['_iTOW']).astype(int))</p> <p>result_df=DF1.loc[DF1['result']==1]</p> <p>df_common = sv_df.loc[result_df['iTOW'].isin(sv_df['_iTOW'])]</p>
<python><dataframe>
2024-02-16 09:02:07
1
321
Kapil
78,005,989
10,200,497
How to groupby a dataframe by using a column and the last row of the group?
<p>This is my DataFrame:</p> <pre><code>import pandas as pd df = pd.DataFrame( { 'x': ['a', 'b', 'c', 'c', 'e', 'f', 'd', 'a', 'b', 'c', 'c', 'e', 'f', 'd'], 'y': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'f', 'f', 'f', 'f', 'g', 'g', 'g'], } ) </code></pre> <p>And this is the output that I want:</p> <pre><code> x y 0 a a 1 b a 2 c a 3 c a 7 a f 8 b f 9 c f 10 c f x y 4 e b 5 f b 6 d b 11 e g 12 f g 13 d g </code></pre> <p>These are the steps that are needed:</p> <p>a) Groupby <code>y</code></p> <p>b) Groupby last row of <code>x</code></p> <p>Basically the groups are:</p> <pre><code>df1 = df.groupby('y').filter(lambda g: g.x.iloc[-1] == 'c') df2 = df.groupby('y').filter(lambda g: g.x.iloc[-1] == 'd') </code></pre> <p>In this example I know I have two different values in the last rows, which are <code>c</code> and <code>d</code>, which is why I could <code>filter</code> them. But in my data I do not know that.</p>
<python><pandas><dataframe>
2024-02-16 08:25:27
2
2,679
AmirX
78,005,759
14,388,247
Efficient way to combine two extremely long lists in python
<p>In python, we have two long lists each sorted and with more than around 20 million items and are <strong>different</strong> in <strong>only a few items</strong>.</p> <pre class="lang-py prettyprint-override"><code>list_1 = [ ..., ## 20 million items common between the two lists 'item_a', 'item_b', ] list_2 = [ ..., ## 20 million items common between the two lists 'item_c', 'item_d', ] </code></pre> <p><br>What I have is mind is to combine these two to get:</p> <pre class="lang-py prettyprint-override"><code>combined = sorted([ ..., ## 20 million items common between the two lists 'item_a', 'item_b', 'item_c', 'item_d', ]) </code></pre> <p>What I've tried is:</p> <pre class="lang-py prettyprint-override"><code>combined = sorted( set([ *list_1, *list_2, ]) ) </code></pre> <p>but the problem is it takes too long to finish (for me it took around 50 minutes). <br> What's a more efficient way to do so?</p> <hr> Updates:<br> I tried these solutions provided on this page: <pre class="lang-py prettyprint-override"><code>## 1. by Cow import itertools all_ = [list_1, list_2] combined = sorted(set(itertools.chain.from_iterable(all_))) ## Results: no improvements in time </code></pre> <pre class="lang-py prettyprint-override"><code>## 2. by Joooeey combined = sorted(set(list_1) | set(list_2)) ## Results: no improvements in time </code></pre>
<python><list>
2024-02-16 07:35:54
3
706
nino
78,005,435
14,250,641
How to randomly sample very large pyArrow dataset
<p>I have a very large arrow dataset (181GB, 30m rows) from the huggingface framework I've been using. I want to randomly sample with replacement 100 rows (20 times), but after looking around, I cannot find a clear way to do this. I've tried converting to a pd.DataFrame so that I can use df.sample(), but python crashes every time (presumably due to the large dataset). So, I'm looking for something built-in within pyarrow.</p> <pre><code>df = Dataset.from_file(&quot;embeddings_job/combined_embeddings_small/data-00000-of-00001.arrow&quot;) df=df.to_table().to_pandas() #crashes at this line random_sample = df.sample(n=100) </code></pre> <p>Some ideas: not sure if this is w/replacement</p> <pre><code>import numpy as np random_indices = np.random.randint(0, len(df), size=100) # Take the samples from the dataset sampled_table = df.select(random_indices) </code></pre> <p>Using huggingface shuffle</p> <pre><code> sample_size = 100 # Shuffle the dataset shuffled_dataset = df.shuffle() # Select the first 100 rows sampled_dataset = df.select(range(sample_size)) </code></pre> <p>Is the only other way through terminal commands? Would this be correct:</p> <pre><code>for i in {1..30}; do shuf -n 1000 -r file &gt; sampled_$i.txt; done </code></pre> <p>After getting each chunk, the plan is to run each chunk through a random forest algorithm. What is the best way to go about this?</p> <p>Also, I would like to note that whatever the solution, it should make sure the indices do not get reset when I get the output subset.</p>
<python><pandas><large-data><pyarrow><huggingface-datasets>
2024-02-16 06:09:20
1
514
youtube
78,005,434
9,112,151
How to implement __init_subclass__ magic method properly?
<p>I've created a base class:</p> <pre><code>class BaseNewAdapter: def __init_subclass__(cls, /, base_url: str) -&gt; None: cls._base_url = base_url </code></pre> <p>But PyCharm gives me a warning:</p> <blockquote> <p>Signature of method 'BaseNewAdapter.<strong>init_subclass</strong>()' does not match signature of the base method in class 'object'</p> </blockquote> <p><a href="https://i.sstatic.net/O6tYP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O6tYP.png" alt="enter image description here" /></a></p> <p>To me the code seems OK. What's wrong?</p>
<python>
2024-02-16 06:09:01
0
1,019
Альберт Александров
78,005,393
395,857
How can I find out the GPT-4 model version when using openai Python library and Azure OpenAI?
<p>I use GPT-4 via <code>openai</code> Python library and Azure OpenAI. How can I find out the GPT-4 model version by using the <code>openai</code> Python library (and not looking at <a href="https://portal.azure.com/" rel="nofollow noreferrer">https://portal.azure.com/</a> because for some Azure OpenAI instances I only have the credentials to use the API but I can't view them on <a href="https://portal.azure.com/" rel="nofollow noreferrer">https://portal.azure.com/</a>)?</p> <p>I read:</p> <p><a href="https://platform.openai.com/docs/models/continuous-model-upgrades" rel="nofollow noreferrer">https://platform.openai.com/docs/models/continuous-model-upgrades</a>:</p> <blockquote> <p>You can verify this by looking at the response object after sending a request. The response will include the specific model version used (e.g. gpt-3.5-turbo-0613).</p> </blockquote> <p><a href="https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo" rel="nofollow noreferrer">https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo</a>:</p> <blockquote> <p>gpt-4 currently points to <code>gpt-4-0613</code>.</p> </blockquote> <hr /> <p>However, I tried calling gpt-4 version 0314 and gpt-4 version 0125-preview: for both of them, the response object after sending a request only contains <code>gpt-4</code>:</p> <pre class="lang-py prettyprint-override"><code>ChatCompletion( id='chatcmpl-8slN5Cbbsdf16s51sdf8yZpRXZM1R', choices=[ Choice( finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage( content='blahblah', role='assistant', function_call=None, tool_calls=None ), content_filter_results={ 'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'} } ) ], created=1708062499, model='gpt-4', object='chat.completion', system_fingerprint='fp_8absdfsdsfs', usage=CompletionUsage( completion_tokens=185, prompt_tokens=4482, 
total_tokens=4667 ), prompt_filter_results=[ { 'prompt_index': 0, 'content_filter_results': { 'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'} } } ] ) </code></pre> <p>How can I find out the GPT-4 model version when using <code>openai</code> Python library and Azure OpenAI?</p>
<python><version><openai-api><azure-openai><gpt-4>
2024-02-16 05:56:39
1
84,585
Franck Dernoncourt
78,005,200
1,712,287
How to solve scipy pip installation error in Python
<p>When I went to install scipy following error has been found in python. Do you know any way how to solve this problem?</p> <pre><code>pip install scipy </code></pre> <pre class="lang-none prettyprint-override"><code>Collecting scipy Downloading scipy-1.12.0-cp312-cp312-win_amd64.whl.metadata (60 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 60.4/60.4 kB 14.5 kB/s eta 0:00:00 Requirement already satisfied: numpy&lt;1.29.0,&gt;=1.22.4 in c:\users\hp\appdata\local\programs\python\python312\lib\site-packages (from scipy) (1.26.4) Downloading scipy-1.12.0-cp312-cp312-win_amd64.whl (45.8 MB) ━━━━━━━━━━━━━╸━━━━━━━━━━━━━━━━━━━━━━━━━━ 15.7/45.8 MB 115.5 kB/s eta 0:04:21 ERROR: Exception: Traceback (most recent call last): File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\urllib3\response.py&quot;, line 438, in _error_catcher yield File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\urllib3\response.py&quot;, line 561, in read data = self._fp_read(amt) if not fp_closed else b&quot;&quot; ^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\urllib3\response.py&quot;, line 527, in _fp_read return self._fp.read(amt) if amt is not None else self._fp.read() ^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\cachecontrol\filewrapper.py&quot;, line 98, in read data: bytes = self.__fp.read(amt) ^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\http\client.py&quot;, line 479, in read s = self.fp.read(amt) ^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\socket.py&quot;, line 707, in readinto return self._sock.recv_into(b) ^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\ssl.py&quot;, line 1253, in recv_into return self.read(nbytes, buffer) ^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\ssl.py&quot;, line 1105, in read return self._sslobj.read(len, buffer) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TimeoutError: The read operation timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\cli\base_command.py&quot;, line 180, in exc_logging_wrapper status = run_func(*args) ^^^^^^^^^^^^^^^ File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\cli\req_command.py&quot;, line 245, in wrapper return func(self, options, args) ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\commands\install.py&quot;, line 377, in run requirement_set = resolver.resolve( ^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py&quot;, line 179, in resolve self.factory.preparer.prepare_linked_requirements_more(reqs) File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\operations\prepare.py&quot;, line 552, in prepare_linked_requirements_more self._complete_partial_requirements( File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\operations\prepare.py&quot;, line 467, in _complete_partial_requirements for link, (filepath, _) in batch_download: File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\network\download.py&quot;, line 183, in __call__ for chunk in chunks: File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\cli\progress_bars.py&quot;, line 53, in _rich_progress_bar for chunk in iterable: File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\network\utils.py&quot;, 
line 63, in response_chunks for chunk in response.raw.stream( File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\urllib3\response.py&quot;, line 622, in stream data = self.read(amt=amt, decode_content=decode_content) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\urllib3\response.py&quot;, line 560, in read with self._error_catcher(): File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\contextlib.py&quot;, line 158, in __exit__ self.gen.throw(value) File &quot;C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\urllib3\response.py&quot;, line 443, in _error_catcher raise ReadTimeoutError(self._pool, None, &quot;Read timed out.&quot;) pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. </code></pre>
<python>
2024-02-16 04:39:59
1
1,238
Asif Iqbal
78,005,051
14,771,666
ModuleNotFoundError: No module named 'llama_index.graph_stores'
<p>I am trying to use the <code>NebulaGraphStore</code> class from <code>llama_index</code> via <code>from llama_index.graph_stores.nebula import NebulaGraphStore</code> as suggested by the <a href="https://docs.llamaindex.ai/en/stable/examples/index_structs/knowledge_graph/NebulaGraphKGIndexDemo.html" rel="nofollow noreferrer">llama_index documentation</a>, but the following error occurred:</p> <pre><code>ModuleNotFoundError Traceback (most recent call last) Cell In[2], line 1 ----&gt; 1 from llama_index.graph_stores.nebula import NebulaGraphStore ModuleNotFoundError: No module named 'llama_index.graph_stores' </code></pre> <p>I tried updating <code>llama_index</code> (version 0.10.5) with <code>pip install -U llama-index</code> but it doesn't work. How can I resolve this?</p>
<python><langchain><large-language-model><llama-index><nebula-graph>
2024-02-16 03:32:18
2
368
Kaihua Hou
78,004,832
6,466,366
Could not find module 'C:\OSGeo4W\bin\gdal308.dll' (or one of its dependencies)
<p>I'm trying to use GeoDjango + gdal + PostGis in order to create a map application.</p> <p>OSGeo4W is completely installed, and all the dll files are there.</p> <p>When I run this command:</p> <pre><code>python3 manage.py check </code></pre> <p>I get the following result:</p> <pre><code> ^^^^^^^^^^^^^^^^^^^^^^^^^ FileNotFoundError: Could not find module 'C:\OSGeo4W\bin\gdal308.dll' (or one of its dependencies). Try using the full path with constructor syntax. </code></pre> <p>However, the files do exist at that location. Here is what my settings file looks like, for the gdal part:</p> <pre><code>if os.name == 'nt': OSGEO4W = r&quot;C:\\OSGeo4W&quot; # if '64' in platform.architecture()[0]: # OSGEO4W += &quot;64&quot; assert os.path.isdir(OSGEO4W), &quot;Directory does not exist: &quot; + OSGEO4W os.environ['OSGEO4W_ROOT'] = OSGEO4W os.environ['GDAL_DATA'] = OSGEO4W + r&quot;\\share\\gdal&quot; os.environ['PROJ_LIB'] = OSGEO4W + r&quot;\\share\\proj&quot; os.environ['PATH'] = OSGEO4W + r&quot;\\bin;&quot; + os.environ['PATH'] GDAL_LIBRARY_PATH = OSGEO4W + r&quot;\\bin\\gdal308.dll&quot; </code></pre> <p>Could someone who has gone through this please help me shed some light on it?</p>
<python><django><postgis><gdal><geodjango>
2024-02-16 01:48:22
1
656
rdrgtec
78,004,822
1,036,582
PyTorch Model always returns Zero Accuracy
<p>I wanted to create a neural network to predict the hypotenuse of a triangle given the other two sides. For this, I use the Pythagorean theorem to create 10,000 values which are used to train the model. The problem is that even though my average loss is 0.18, the accuracy is 0%. What am I doing wrong?</p> <pre><code>class SimpleMLP(nn.Module):
    def __init__(self, num_of_classes=10):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(2, 64),
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            # Output matches input and number of classes
            nn.Linear(64, num_of_classes),
        )

    def forward(self, x):
        return self.layers(x)


class PythagoreanDataset(Dataset):
    def __init__(self, transform=None):
        self.values = self._get_pythagorean_values()

    def __getitem__(self, index):
        a, b, c = self.values[index]
        label = torch.as_tensor([c], dtype=torch.float)
        data = torch.as_tensor([a, b], dtype=torch.float)
        return data, label

    def __len__(self):
        return len(self.values)

    def _get_pythagorean_values(self, array_size: int = 10000) -&gt; list:
        values = []
        for i in range(array_size):
            a = float(randint(1, 500))
            b = float(randint(1, 500))
            c = math.sqrt(pow(a, 2) + pow(b, 2))
            values.append((a, b, c))
        return values


def _correct(output, target):
    predicted_digits = output.argmax(1)  # pick digit with largest network output
    correct_ones = (predicted_digits == target).type(torch.float)  # 1.0 for correct, 0.0 for incorrect
    return correct_ones.sum().item()


def train(
    data_loader: DataLoader,
    model: torch.nn.Module,
    criterion: torch.nn.Module,
    optimizer: torch.optim.Optimizer,
    device: torch.device,
):
    model.train()
    num_batches = len(data_loader)
    num_items = len(data_loader.dataset)

    train_loss = 0
    total_loss = 0
    total_correct = 0
    for data, target in data_loader:
        # Copy data and targets to device
        data = data.to(device)
        target = target.to(device)

        # Do a forward pass
        output = model(data)

        # Calculate the loss
        loss = criterion(output, target)
        total_loss += loss

        # Count number of correct digits
        total_correct += _correct(output, target)

        # Backpropagation
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    train_loss = float(total_loss / num_batches)
    accuracy = total_correct / num_items
    print(f&quot;Train accuracy: {accuracy:.2%}, Average loss: {train_loss:7f}&quot;)
    return train_loss


def test(
    test_loader: DataLoader,
    model: torch.nn.Module,
    criterion: torch.nn.Module,
    device: torch.device,
):
    model.eval()
    num_batches = len(test_loader)
    num_items = len(test_loader.dataset)

    test_loss = 0
    total_correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            # Copy data and targets to GPU
            data = data.to(device)
            target = target.to(device)

            # Do a forward pass
            output = model(data)

            # Calculate the loss
            loss = criterion(output, target)
            test_loss += loss.item()

            # Count number of correct digits
            total_correct += _correct(output, target)

    test_loss = test_loss / num_batches
    accuracy = total_correct / num_items
    print(f&quot;Test accuracy: {100*accuracy:&gt;0.1f}%, average loss: {test_loss:&gt;7f}&quot;)
    return test_loss


def main():
    device = &quot;cpu&quot;
    dataset = PythagoreanDataset()

    # Creating data indices for training and validation splits:
    validation_split = 0.2
    dataset_size = len(dataset)
    indices = list(range(dataset_size))
    split = int(np.floor(validation_split * dataset_size))
    train_indices, val_indices = indices[split:], indices[:split]

    # Creating PT data samplers and loaders:
    train_sampler = SubsetRandomSampler(train_indices)
    valid_sampler = SubsetRandomSampler(val_indices)
    train_loader = DataLoader(dataset, batch_size=BATCH_SIZE, sampler=train_sampler)
    test_loader = DataLoader(dataset, batch_size=BATCH_SIZE, sampler=valid_sampler)

    model = SimpleMLP(num_of_classes=1).to(device)
    print(model)

    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters())

    epochs = 500
    losses = []
    for epoch in tqdm(range(epochs)):
        print(f&quot;Training epoch: {epoch+1}&quot;)
        train_loss = train(train_loader, model, criterion, optimizer, device=device)
        test_loss = test(test_loader, model, criterion, device=device)
        losses.append((train_loss, test_loss))
    plot_loss_curves(losses=losses)

    # Example prediction
    test_input = torch.tensor([[3, 4]], dtype=torch.float32)
    predicted_output = model(test_input)
    print(&quot;Predicted hypotenuse:&quot;, predicted_output.item())
</code></pre> <p><a href="https://i.sstatic.net/Bb7sI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Bb7sI.png" alt="enter image description here" /></a></p>
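For context on why the accuracy stays at 0%: `_correct` computes classification accuracy with `output.argmax(1)`, but this network has a single regression output, so the argmax is always index 0 and never equals a real-valued hypotenuse target. A hedged sketch of one alternative (a relative-tolerance "accuracy" for regression; the 5% tolerance is an arbitrary illustrative choice, shown in plain Python so it is easy to test):

```python
def regression_accuracy(preds, targets, rel_tol=0.05):
    # Fraction of predictions within rel_tol (relative) of the true value.
    # argmax-based accuracy only makes sense for classification outputs.
    hits = sum(1 for p, t in zip(preds, targets) if abs(p - t) <= rel_tol * abs(t))
    return hits / len(targets)

preds = [5.1, 12.9, 40.0]     # last prediction is far off
targets = [5.0, 13.0, 25.0]   # true hypotenuses
print(regression_accuracy(preds, targets))  # 2 of 3 within 5% -> 0.666...
```

The same idea translates to tensors with `torch.isclose(output, target, rtol=rel_tol).float().mean()`.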
<python><machine-learning><pytorch><neural-network>
2024-02-16 01:41:18
1
373
Movieboy
78,004,735
12,704,700
Thoughts on improving the existing aggregate pyspark code
<p>Here is a Spark dataframe I have:</p> <pre><code>+---+----------------------------------+----------+----------+
|id |timestamp                         |Fname     |Lname     |
+---+----------------------------------+----------+----------+
|1  |2024-01-19T11:52:44.775205Z       |Robert    |Albert    |
|1  |2024-01-20T11:52:44.775205Z       |Remo      |Lergos    |
|2  |2024-01-21T11:52:44.775205Z       |Charlie   |Jameson   |
|2  |2024-01-22T11:52:44.775205Z       |Anastacio |Sporer    |
|2  |2024-01-23T11:52:44.775205Z       |Luz       |Toy       |
|3  |2024-01-24T11:52:44.775205Z       |Crystal   |Hills     |
|3  |2024-01-25T11:52:44.775205Z       |Nicholas  |Johnson   |
+---+----------------------------------+----------+----------+
</code></pre> <p>Below are the steps involved:</p> <ol> <li>Group by the &quot;id&quot; values.</li> <li>Get the first name and latest name as a list of dictionaries.</li> <li>Based on the most recent timestamp for the &quot;id&quot;, get the Fname and Lname and store them as a separate column as a dictionary.</li> <li>The most recent timestamp used in step 3 should also be stored as a separate column.</li> </ol> <p>Based on the above steps, I am trying to get to the resultant dataframe below:</p> <pre><code>+----+--------------------------------------+----------------------------+-----------------------------------------------------------------------------------------------------+
|id  |latest_names                          |latest_timestamp            |all_names                                                                                            |
+----+--------------------------------------+----------------------------+-----------------------------------------------------------------------------------------------------+
|1   |{&quot;Fname&quot;:&quot;Remo&quot;,&quot;Lname&quot;:&quot;Lergos&quot;}    |2024-01-20T11:52:44.775205Z |[{&quot;Fname&quot;:&quot;Remo&quot;,&quot;Lname&quot;:&quot;Lergos&quot;},{&quot;Fname&quot;:&quot;Remo&quot;,&quot;Lname&quot;:&quot;Lergos&quot;}]                               |
|2   |{&quot;Fname&quot;:&quot;Luz&quot;,&quot;Lname&quot;:&quot;Toy&quot;}        |2024-01-23T11:52:44.775205Z |[{&quot;Fname&quot;:&quot;Luz&quot;,&quot;Lname&quot;:&quot;Toy&quot;},{&quot;Fname&quot;:&quot;Remo&quot;,&quot;Lname&quot;:&quot;Lergos&quot;},{&quot;Fname&quot;:&quot;Remo&quot;,&quot;Lname&quot;:&quot;Lergos&quot;}]|
|3   |{&quot;Fname&quot;:&quot;Nicholas&quot;,&quot;Lname&quot;:&quot;Johnson&quot;}|2024-01-25T11:52:44.775205Z |[{&quot;Fname&quot;:&quot;Nicholas&quot;,&quot;Lname&quot;:&quot;Johnson&quot;},{&quot;Fname&quot;:&quot;Remo&quot;,&quot;Lname&quot;:&quot;Lergos&quot;}]                          |
+----+--------------------------------------+----------------------------+-----------------------------------------------------------------------------------------------------+
</code></pre> <p>I tried the following PySpark code with a window spec ordered by descending timestamp to get the first elements:</p> <pre><code>windowspec = Window.partitionBy(&quot;id&quot;).orderBy(df[&quot;timestamp&quot;].desc())
columns_names = [&quot;Fname&quot;, &quot;Lname&quot;]

df.withColumn(
    &quot;all_names&quot;,
    F.to_json(F.struct(&quot;Fname&quot;, &quot;Lname&quot;)),
).withColumn(
    &quot;latest_names&quot;,
    F.to_json(
        F.struct(*[F.first(field).over(windowspec).alias(field) for field in columns_names])
    ),
).withColumn(
    &quot;latest_timestamp&quot;, F.first(&quot;timestamp&quot;).over(windowspec)
).groupBy(&quot;id&quot;).agg(
    F.collect_set(&quot;all_names&quot;).alias(&quot;all_names&quot;),
    F.first(&quot;latest_names&quot;).alias(&quot;latest_names&quot;),
    F.first(&quot;latest_timestamp&quot;).alias(&quot;latest_timestamp&quot;),
)
</code></pre> <p>I am able to achieve the result, but I would like to know if there is a better way of doing this. Besides (Fname, Lname) I have other columns (address1, address2, address3) where I want to perform the same operation to get the latest address. I was using a single window spec for that operation, but is there a better way of doing this?</p>
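One alternative to a separate window pass per column set is a single "row with the maximum timestamp" per group. In Spark this can be written once, e.g. with `F.max_by(F.struct(*cols), F.col("timestamp"))` (available since Spark 3.3) or `F.max(F.struct("timestamp", *cols))`. The plain-Python sketch below only illustrates the logic that aggregation implements, using sample rows from the question:

```python
from collections import defaultdict

rows = [
    {"id": 1, "timestamp": "2024-01-19T11:52:44.775205Z", "Fname": "Robert", "Lname": "Albert"},
    {"id": 1, "timestamp": "2024-01-20T11:52:44.775205Z", "Fname": "Remo", "Lname": "Lergos"},
    {"id": 2, "timestamp": "2024-01-21T11:52:44.775205Z", "Fname": "Charlie", "Lname": "Jameson"},
    {"id": 2, "timestamp": "2024-01-22T11:52:44.775205Z", "Fname": "Anastacio", "Lname": "Sporer"},
    {"id": 2, "timestamp": "2024-01-23T11:52:44.775205Z", "Fname": "Luz", "Lname": "Toy"},
]

groups = defaultdict(list)
for row in rows:
    groups[row["id"]].append(row)

result = []
for id_, grp in sorted(groups.items()):
    # ISO-8601 timestamps with a fixed format sort correctly as strings
    latest = max(grp, key=lambda r: r["timestamp"])
    result.append({
        "id": id_,
        "latest_names": {"Fname": latest["Fname"], "Lname": latest["Lname"]},
        "latest_timestamp": latest["timestamp"],
        "all_names": [{"Fname": r["Fname"], "Lname": r["Lname"]} for r in grp],
    })

print(result[0]["latest_names"])  # {'Fname': 'Remo', 'Lname': 'Lergos'}
```

For the address columns, the same `max_by` struct trick can extract all "latest" columns in one aggregation instead of one window per column group.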
<python><performance><apache-spark><pyspark><aggregate>
2024-02-16 01:05:52
1
2,505
Sundeep
78,004,569
11,907,420
Getting exception when trying to delete entity in Milvus vector database using pymilvus
<p>I am working on a Python application and I am storing data in a Milvus vector database. I have written the following code to delete an entity in the Milvus vector database using the pymilvus library.</p> <pre><code>def delete_entities(self, collection_name, entity_id):
    expr = &quot;obj_id in [&quot; + entity_id + &quot;]&quot;
    collection = Collection(collection_name)
    collection.delete(expr)
</code></pre> <p>When I execute the code, I am getting this error:</p> <pre><code>RPC error: [delete], &lt;MilvusException: (code=65535, message=failed to create expr plan, expr = obj_id in [65ce930d39989b871863b5dd])&gt;, &lt;Time:{'RPC start': '2024-02-16 00:53:20.651576', 'RPC error': '2024-02-16 00:53:20.653786'}&gt;
Exception in thread Thread-9:
Traceback (most recent call last):
  File &quot;C:\Users\usajid\AppData\Local\Programs\Python\Python39\lib\threading.py&quot;, line 980, in _bootstrap_inner
    self.run()
  File &quot;C:\Users\usajid\AppData\Local\Programs\Python\Python39\lib\threading.py&quot;, line 917, in run
    self._target(*self._args, **self._kwargs)
  File &quot;C:\Users\usajid\Desktop\semanticsearch\src\backend\components\consumer.py&quot;, line 61, in subscribe_topic
    self.COOPERANTS.delete_entities(collection_name, payload['documentKey']['_id']['$oid'])
  File &quot;C:\Users\usajid\Desktop\semanticsearch\src\backend\components\clients.py&quot;, line 50, in delete_entities
    collection.delete(expr)
  File &quot;c:\Users\usajid\Desktop\semanticsearch\.venv\lib\site-packages\pymilvus\orm\collection.py&quot;, line 573, in delete
    res = conn.delete(self._name, expr, partition_name, timeout=timeout, **kwargs)
  File &quot;c:\Users\usajid\Desktop\semanticsearch\.venv\lib\site-packages\pymilvus\decorators.py&quot;, line 135, in handler
    raise e from e
  File &quot;c:\Users\usajid\Desktop\semanticsearch\.venv\lib\site-packages\pymilvus\decorators.py&quot;, line 131, in handler
    return func(*args, **kwargs)
  File &quot;c:\Users\usajid\Desktop\semanticsearch\.venv\lib\site-packages\pymilvus\decorators.py&quot;, line 170, in handler
    return func(self, *args, **kwargs)
  File &quot;c:\Users\usajid\Desktop\semanticsearch\.venv\lib\site-packages\pymilvus\decorators.py&quot;, line 110, in handler
    raise e from e
  File &quot;c:\Users\usajid\Desktop\semanticsearch\.venv\lib\site-packages\pymilvus\decorators.py&quot;, line 74, in handler
    return func(*args, **kwargs)
  File &quot;c:\Users\usajid\Desktop\semanticsearch\.venv\lib\site-packages\pymilvus\client\grpc_handler.py&quot;, line 602, in delete
    raise err from err
  File &quot;c:\Users\usajid\Desktop\semanticsearch\.venv\lib\site-packages\pymilvus\client\grpc_handler.py&quot;, line 596, in delete
    check_status(response.status)
  File &quot;c:\Users\usajid\Desktop\semanticsearch\.venv\lib\site-packages\pymilvus\client\utils.py&quot;, line 54, in check_status
    raise MilvusException(status.code, status.reason, status.error_code)
pymilvus.exceptions.MilvusException: &lt;MilvusException: (code=65535, message=failed to create expr plan, expr = obj_id in [65ce930d39989b871863b5dd])&gt;
</code></pre> <p>Can anyone tell me how to fix this issue?</p>
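One likely cause, assuming `obj_id` is a VARCHAR field: the id is interpolated into the expression without quotes, so the parser sees `obj_id in [65ce930d39989b871863b5dd]` with an unquoted, malformed literal instead of a string. A sketch of building a quoted expression (the helper name is illustrative, not a pymilvus API):

```python
def build_delete_expr(entity_id):
    # String primary keys must be quoted inside the boolean expression;
    # without quotes, Milvus fails to build an expr plan.
    return f'obj_id in ["{entity_id}"]'

print(build_delete_expr("65ce930d39989b871863b5dd"))
# obj_id in ["65ce930d39989b871863b5dd"]
```

In `delete_entities`, that would mean passing `collection.delete(build_delete_expr(entity_id))` instead of concatenating the raw id.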
<python><milvus>
2024-02-16 00:02:04
1
329
Usman Sajid
78,004,262
1,183,804
Loading PyTorch model in C# with BinaryReader
<p>I want to read and study the basic model structures of some of the PyTorch models on HuggingFace.</p> <p>NOTE: I do not want to convert the model to Onnx; this is not what I want.</p> <p>I have set <code>gcAllowVeryLargeObjects</code> so I can load objects larger than 4&nbsp;GB.</p> <pre><code>&lt;configuration&gt;
  &lt;runtime&gt;
    &lt;gcAllowVeryLargeObjects enabled=&quot;true&quot; /&gt;
  &lt;/runtime&gt;
&lt;/configuration&gt;
</code></pre> <p>Opening and reading the models is easy:</p> <pre><code>public static void Load(string modelPath)
{
    List&lt;byte&gt; bytes = new List&lt;byte&gt;();

    using (FileStream fileStream = File.OpenRead(modelPath))
    using (BinaryReader binaryReader = new BinaryReader(fileStream))
    {
        // Read in all Bytes.
        while (binaryReader.BaseStream.Position != binaryReader.BaseStream.Length)
        {
            bytes.Add(binaryReader.ReadByte());
        }
    }
}
</code></pre> <p>Some of the model can be read as a string; other parts are different types.</p> <p>My question is: how do we convert all of the types necessary, without knowing what's coming next?</p> <pre><code>string JsonString = Encoding.ASCII.GetString(bytes.ToArray());
</code></pre> <p>Is this enough to properly read in the model as a PyTorch graph representation?
Is there a better, faster way to do it?</p> <blockquote> <p>&quot;??\0\0\0\0\0\0{&quot;<strong>metadata</strong>&quot;:{&quot;format&quot;:&quot;pt&quot;},&quot;model.embed_tokens.weight&quot;:{&quot;dtype&quot;:&quot;F16&quot;,&quot;shape&quot;:[51200,2560],&quot;data_offsets&quot;:[0,262144000]},&quot;model.layers.0.input_layernorm.bias&quot;:{&quot;dtype&quot;:&quot;F16&quot;,&quot;shape&quot;:[2560],&quot;data_offsets&quot;:[262144000,262149120]},&quot;model.layers.0.input_layernorm.weight&quot;:{&quot;dtype&quot;:&quot;F16&quot;,&quot;shape&quot;:[2560],&quot;data_offsets&quot;:[262149120,262154240]},&quot;model.layers.0.mlp.fc1.bias&quot;:{&quot;dtype&quot;:&quot;F16&quot;,&quot;shape&quot;:[10240],&quot;data_offsets&quot;:[262154240,262174720]},&quot;model.layers.0.mlp.fc1.weight&quot;:{&quot;dtype&quot;:&quot;F16&quot;,&quot;shape&quot;:[10240,2560],&quot;data_offsets&quot;:[262174720,314603520]},&quot;model.layers.0.mlp.fc2.bias&quot;:{&quot;dtype&quot;:&quot;F16&quot;,&quot;shape&quot;:[2560],&quot;data_offsets&quot;:[314603520,314608640]},&quot;model.layers.0.mlp.fc2.weight&quot;:{&quot;dtype&quot;:&quot;F16&quot;,&quot;shape&quot;:[2560,10240],&quot;data_offsets&quot;:[314608640,367037440]},&quot;model.layers.0.self_attn.dense.bias&quot;:{&quot;dtype&quot;:&quot;F16&quot;,&quot;shape&quot;:[2560],&quot;data_offsets&quot;:[367037440,367042560]},&quot;model.layers.0.self_attn.dense.weight&quot;:{&quot;dtype&quot;:&quot;F16&quot;,&quot;shape&quot;:[2560,2560],&quot;data_offsets&quot;:[367042560,380149760]},&quot;model.layers.0.self_attn.k_proj.bias&quot;:{&quot;dtype&quot;:...&quot;</p> </blockquote> <p>Should we expect a JSON string format? My early experiments show some of the models seem to be in JSON string format.</p>
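The quoted dump (a few leading bytes followed by a JSON map of tensor names to dtype/shape/data_offsets) matches the safetensors container format rather than a pickled PyTorch checkpoint: 8 bytes of little-endian unsigned header length, then that many bytes of UTF-8 JSON, then the raw tensor data. A Python sketch of reading just the header, shown here because the layout is easy to prototype before porting (in C#, `BinaryReader.ReadUInt64` plus a JSON parser would mirror it):

```python
import json
import struct
import tempfile

def read_safetensors_header(path):
    # safetensors layout: 8-byte little-endian header length N, then N
    # bytes of JSON describing every tensor; raw tensor bytes follow.
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len).decode("utf-8"))

# Demo with a tiny fake header (a real file appends tensor bytes after it)
header = {
    "__metadata__": {"format": "pt"},
    "model.embed_tokens.weight": {"dtype": "F16", "shape": [4, 2], "data_offsets": [0, 16]},
}
payload = json.dumps(header).encode("utf-8")
with tempfile.NamedTemporaryFile(suffix=".safetensors", delete=False) as f:
    f.write(struct.pack("<Q", len(payload)) + payload)
    path = f.name

print(read_safetensors_header(path)["__metadata__"])  # {'format': 'pt'}
```

So there is no need to guess "what's coming next": the header tells you each tensor's dtype, shape, and byte range up front.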
<python><c#><machine-learning><pytorch>
2024-02-15 22:25:05
0
2,710
Rusty Nail
78,004,175
2,658,228
Python Folium - Combine states in choropleth map of USA
<p>I want to create a custom choropleth map for the USA in which, instead of showing states individually, I'd like to combine some states together, e.g. Louisiana, Mississippi, Alabama, and Arkansas.</p> <p>Below is some sample code with data from the Folium GitHub <a href="https://github.com/python-visualization/folium/tree/main/examples/data" rel="nofollow noreferrer">page</a>. I tried to add a column named &quot;region&quot; to <code>state_data</code>, which was set to unique values for all states except the ones indicated above, and changed the <code>columns</code> argument in <code>folium.Choropleth</code> to <code>region</code>, but that didn't work either. Open to using another package besides <code>folium</code> (<code>plotly</code> etc.).</p> <p>Sample code:</p> <pre><code>import folium
import pandas as pd

sample_map = folium.Map(location=[40, -95], zoom_start=4)

url = (
    &quot;https://raw.githubusercontent.com/python-visualization/folium/main/examples/data&quot;
)
state_geo = f&quot;{url}/us-states.json&quot;
state_unemployment = f&quot;{url}/US_Unemployment_Oct2012.csv&quot;
state_data = pd.read_csv(state_unemployment)

folium.Choropleth(
    geo_data=state_geo,
    name=&quot;choropleth&quot;,
    data=state_data,
    columns=[&quot;Region&quot;, &quot;Unemployment&quot;],
    key_on=&quot;feature.id&quot;,
    fill_color=&quot;YlGn&quot;,
    fill_opacity=0.7,
    line_opacity=.1,
    legend_name=&quot;Unemployment Rate (%)&quot;,
).add_to(sample_map)

folium.LayerControl().add_to(sample_map)
sample_map.save('test.html')
</code></pre>
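One way to approximate merged states without producing new geometry, as a sketch: re-tag each member state's GeoJSON feature `id` with a shared region id and aggregate the data by region, so `key_on="feature.id"` colors every member identically (internal state borders remain visible; truly dissolving the borders needs something like `geopandas.GeoDataFrame.dissolve`). The region mapping below is illustrative:

```python
# Map the states to be combined onto one shared region id;
# states not listed keep their own id and behave as before.
REGION = {"LA": "South", "MS": "South", "AL": "South", "AR": "South"}

def regionalize(geojson, region_map):
    # Rewrite feature ids in place so the choropleth keys on regions.
    for feat in geojson["features"]:
        feat["id"] = region_map.get(feat["id"], feat["id"])
    return geojson

# Tiny stand-in for the us-states.json structure
gj = {"features": [{"id": "LA"}, {"id": "MS"}, {"id": "NY"}]}
out = regionalize(gj, REGION)
print([f["id"] for f in out["features"]])  # ['South', 'South', 'NY']
```

The unemployment values would then need the same grouping on the pandas side (e.g. `state_data.groupby(state_data["State"].map(REGION).fillna(state_data["State"])).mean()`), so each region id appears once in the `columns` data.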
<python><python-3.x><folium><choropleth>
2024-02-15 22:04:29
3
2,763
Gautam
78,004,105
746,100
How do I fix a PyQt6 import problem on macOS?
<p>I'm trying to run my Python 3 .pyw script on macOS, which used to run okay, but now it gets an error.</p> <p>(Could not find a solution by Googling.)</p> <p>THE CODE IN MY .pyw FILE ...</p> <pre><code>import os
import signal
from PyQt6.QtGui import QIcon
from PyQt6.QtWidgets import QApplication
import sys
from include.BaseDialog import BaseDialog

if __name__ == '__main__':
    qapp = QApplication(sys.argv)
    qapp.setWindowIcon(QIcon(f'{sys.path[0]}/include/Icon.png'))
    styleSheet = &quot;&quot;&quot;
    &quot;&quot;&quot;
    qapp.setStyleSheet(styleSheet)
    app = BaseDialog()
    qapp.aboutToQuit.connect(app.close_program)
    signal.signal(signal.SIGINT, app.close_program)
    signal.signal(signal.SIGTERM, app.close_program)
    sys.exit(qapp.exec())
</code></pre> <p>THE ERROR ...</p> <pre><code>root@Dougs-MacBook-2023 softwarehubgui # Python3 SoftwareHubGUI.pyw
Traceback (most recent call last):
  File &quot;/Users/me/PRIMARY/WORK/MOJO/Software/softwarehubgui/SoftwareHubGUI.pyw&quot;, line 4, in &lt;module&gt;
    from PyQt6.QtGui import QIcon
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/PyQt6/QtGui.abi3.so, 0x0002): Symbol not found: __ZN13QRasterWindow11resizeEventEP12QResizeEvent
  Referenced from: &lt;C8D7E625-2A13-3C34-9DFF-B6656A6F86E7&gt; /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/PyQt6/QtGui.abi3.so
  Expected in: &lt;FC67C721-05AD-33BB-A2A8-F70FC3403D7A&gt; /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/PyQt6/Qt6/lib/QtGui.framework/Versions/A/QtGui
</code></pre>
<python><pyqt><pyqt6>
2024-02-15 21:50:38
1
8,387
Doug Null
78,003,863
3,579,198
Dockerfile for azure-search-openai-demo App
<p>I am trying to Dockerise an app which uses the <a href="https://pypi.org/project/Quart/" rel="nofollow noreferrer">Quart</a> package.</p> <p><code>Dockerfile</code>:</p> <pre><code>FROM python:3.11 AS backend-build

COPY backend /app/backend
WORKDIR /app/backend
RUN pip install -r requirements.txt

EXPOSE 5000

CMD [&quot;python&quot;, &quot;-m&quot;, &quot;quart&quot;, &quot;--app&quot;, &quot;main:app&quot;, &quot;run&quot;, &quot;--port&quot;, &quot;5000&quot;, &quot;--host&quot;, &quot;localhost&quot;, &quot;--reload&quot;]
</code></pre> <p>The Docker image builds, but when I start it and try to access the app at http://localhost:5000/ I see <code>HTTP ERROR 403</code>.</p>
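A common cause of an unreachable or misbehaving containerized server: `--host localhost` binds the app to the container's own loopback interface, which published ports (`docker run -p 5000:5000`) cannot reach. A hedged sketch of the same Dockerfile bound to all interfaces instead (paths and module name kept from the question; this assumes the 403 comes from the binding, not from app-level auth):

```dockerfile
FROM python:3.11 AS backend-build

COPY backend /app/backend
WORKDIR /app/backend
RUN pip install -r requirements.txt

EXPOSE 5000

# Bind to 0.0.0.0 so the port published by `docker run -p 5000:5000`
# reaches the server; --reload is usually dropped outside development
CMD ["python", "-m", "quart", "--app", "main:app", "run", "--port", "5000", "--host", "0.0.0.0"]
```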
<python><docker><dockerfile><quart>
2024-02-15 20:53:42
1
7,098
rp346
78,003,829
2,947,435
Python sending data to BigQuery returns a field mismatch
<p>I'm using Python and pandas to push data to BigQuery. The script runs but fails with an error:</p> <pre><code>object of type &lt;class 'str'&gt; cannot be converted to int
  File &quot;/**/bq.py&quot;, line 71, in post
    job = self.client.load_table_from_dataframe(df,
  File &quot;/**/tobq.py&quot;, line 99, in &lt;module&gt;
    bq.post(data, target_table=&quot;tablemane&quot;)
pyarrow.lib.ArrowTypeError: object of type &lt;class 'str'&gt; cannot be converted to int
</code></pre> <p>I understand the error, but there is no way I can localize it: my data, <code>df.dtypes</code>, and the BQ schema all look coherent. As an example, here I'm trying to push only one single line.</p> <p>My code:</p> <pre><code>df = pd.DataFrame(data)
df = df.reset_index(drop=True)
df['crawl_date'] = pd.to_datetime(df['crawl_date']).dt.date
df['r_index'] = df['rank_index'].astype('float')
df['v_index'] = df['visibility_index'].astype('float')
df['s_var'] = df['serp_var'].astype('float')
df['kd'] = df['kd'].astype('float')
df['camp_id'] = df['camp_id'].astype('int64')
print(df.head())
print(df.isnull().sum())

table_id = f&quot;{os.getenv('GCP_DATASET_NAME')}.{table_name}&quot;
print(df.dtypes)
df.to_gbq(destination_table=table_id,
          table_schema=schema_path,
          project_id=os.getenv('GCP_PROJECT_NAME'),
          if_exists='append')
</code></pre> <p>My data:</p> <pre><code>[{'crawl_date': '2021-03-22', 'domain': 'www.example.com', 'categ': 't1',
  'position': 1, 'position_spread': 'TOP_5', 'position_change': 0,
  'v_index': 100, 'r_index': 100, 'estimated_traffic': 101881,
  'traffic_change': 0, 'max_traffic': 0, 'device': 'desktop',
  'top_rank': 1, 's_var': 0, 'kwd': '****** pro', 'volume': 461000,
  'kd': 0, 'camp_id': 2, 'camp_name': '******'}]
</code></pre> <p><code>print(df.dtypes)</code>:</p> <pre><code>crawl_date            object
domain                object
categ                 object
position               int64
position_spread       object
position_change        int64
v_index              float64
r_index              float64
estimated_traffic      int64
traffic_change         int64
max_traffic            int64
device                object
top_rank               int64
s_var                float64
kwd                   object
volume                 int64
kd                   float64
camp_id                int64
camp_name             object
</code></pre> <p>Finally, my BQ schema:</p> <pre><code>[
  {&quot;name&quot;: &quot;crawl_date&quot;, &quot;mode&quot;: &quot;NULLABLE&quot;, &quot;type&quot;: &quot;DATE&quot;, &quot;description&quot;: null, &quot;fields&quot;: []},
  {&quot;name&quot;: &quot;domain&quot;, &quot;mode&quot;: &quot;NULLABLE&quot;, &quot;type&quot;: &quot;STRING&quot;, &quot;description&quot;: null, &quot;fields&quot;: []},
  {&quot;name&quot;: &quot;categ&quot;, &quot;mode&quot;: &quot;NULLABLE&quot;, &quot;type&quot;: &quot;STRING&quot;, &quot;description&quot;: null, &quot;fields&quot;: []},
  {&quot;name&quot;: &quot;position&quot;, &quot;mode&quot;: &quot;NULLABLE&quot;, &quot;type&quot;: &quot;INTEGER&quot;, &quot;description&quot;: null, &quot;fields&quot;: []},
  {&quot;name&quot;: &quot;position_spread&quot;, &quot;mode&quot;: &quot;NULLABLE&quot;, &quot;type&quot;: &quot;STRING&quot;, &quot;description&quot;: null, &quot;fields&quot;: []},
  {&quot;name&quot;: &quot;position_change&quot;, &quot;mode&quot;: &quot;NULLABLE&quot;, &quot;type&quot;: &quot;INTEGER&quot;, &quot;description&quot;: null, &quot;fields&quot;: []},
  {&quot;name&quot;: &quot;v_index&quot;, &quot;mode&quot;: &quot;NULLABLE&quot;, &quot;type&quot;: &quot;FLOAT&quot;, &quot;description&quot;: null, &quot;fields&quot;: []},
  {&quot;name&quot;: &quot;r_index&quot;, &quot;mode&quot;: &quot;NULLABLE&quot;, &quot;type&quot;: &quot;FLOAT&quot;, &quot;description&quot;: null, &quot;fields&quot;: []},
  {&quot;name&quot;: &quot;estimated_traffic&quot;, &quot;mode&quot;: &quot;NULLABLE&quot;, &quot;type&quot;: &quot;INTEGER&quot;, &quot;description&quot;: null, &quot;fields&quot;: []},
  {&quot;name&quot;: &quot;traffic_change&quot;, &quot;mode&quot;: &quot;NULLABLE&quot;, &quot;type&quot;: &quot;INTEGER&quot;, &quot;description&quot;: null, &quot;fields&quot;: []},
  {&quot;name&quot;: &quot;max_traffic&quot;, &quot;mode&quot;: &quot;NULLABLE&quot;, &quot;type&quot;: &quot;INTEGER&quot;, &quot;description&quot;: null, &quot;fields&quot;: []},
  {&quot;name&quot;: &quot;device&quot;, &quot;mode&quot;: &quot;NULLABLE&quot;, &quot;type&quot;: &quot;STRING&quot;, &quot;description&quot;: null, &quot;fields&quot;: []},
  {&quot;name&quot;: &quot;top_rank&quot;, &quot;mode&quot;: &quot;NULLABLE&quot;, &quot;type&quot;: &quot;INTEGER&quot;, &quot;description&quot;: null, &quot;fields&quot;: []},
  {&quot;name&quot;: &quot;s_var&quot;, &quot;mode&quot;: &quot;NULLABLE&quot;, &quot;type&quot;: &quot;FLOAT&quot;, &quot;description&quot;: null, &quot;fields&quot;: []},
  {&quot;name&quot;: &quot;kwd&quot;, &quot;mode&quot;: &quot;NULLABLE&quot;, &quot;type&quot;: &quot;STRING&quot;, &quot;description&quot;: null, &quot;fields&quot;: []},
  {&quot;name&quot;: &quot;volume&quot;, &quot;mode&quot;: &quot;NULLABLE&quot;, &quot;type&quot;: &quot;INTEGER&quot;, &quot;description&quot;: null, &quot;fields&quot;: []},
  {&quot;name&quot;: &quot;kd&quot;, &quot;mode&quot;: &quot;NULLABLE&quot;, &quot;type&quot;: &quot;FLOAT&quot;, &quot;description&quot;: null, &quot;fields&quot;: []},
  {&quot;name&quot;: &quot;camp_id&quot;, &quot;mode&quot;: &quot;NULLABLE&quot;, &quot;type&quot;: &quot;INTEGER&quot;, &quot;description&quot;: null, &quot;fields&quot;: []},
  {&quot;name&quot;: &quot;camp_name&quot;, &quot;mode&quot;: &quot;NULLABLE&quot;, &quot;type&quot;: &quot;STRING&quot;, &quot;description&quot;: null, &quot;fields&quot;: []}
]
</code></pre>
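One way to localize this kind of pyarrow type error is to check the outgoing records against the schema before loading. The sketch below uses a deliberately simplified type map, and `find_mismatches` is a hypothetical helper, not a BigQuery or pandas API:

```python
def find_mismatches(rows, schema):
    # Compare each record's Python types against the BigQuery schema.
    # Rough illustrative mapping, not the full BigQuery compatibility matrix.
    compat = {"INTEGER": (int,), "FLOAT": (int, float), "STRING": (str,), "DATE": (str,)}
    bad = []
    for field in schema:
        ok_types = compat.get(field["type"])
        if ok_types is None:
            continue
        for row in rows:
            value = row.get(field["name"])
            if value is not None and not isinstance(value, ok_types):
                bad.append((field["name"], field["type"], type(value).__name__))
                break
    return bad

rows = [{"camp_id": "2", "volume": 461000}]   # camp_id accidentally a string
schema = [{"name": "camp_id", "type": "INTEGER"},
          {"name": "volume", "type": "INTEGER"}]
print(find_mismatches(rows, schema))  # [('camp_id', 'INTEGER', 'str')]
```

The equivalent check on the dataframe side is to compare `df.dtypes` per column to the schema types; a column that prints as `object` despite an `.astype('int64')` elsewhere (or a column that was never converted because the source key differed, as with `rank_index` vs `r_index`) is the usual culprit.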
<python><pandas><google-bigquery>
2024-02-15 20:45:34
1
870
Dany M
78,003,706
4,398,966
How to add a print statement in a complicated expression?
<p>I have the following code:</p> <pre><code>prime_numbers = [
    number
    for number in range(2, 101)
    if all(number % div != 0 for div in range(2, int(number**0.5) + 1))
]
</code></pre> <p>How would I add a print statement for number?</p>
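Comprehensions only accept expressions, so a bare `print` statement cannot be dropped in directly. One sketch is an identity wrapper that prints as a side effect (here it prints each number that survives the filter):

```python
def debug(value):
    # Identity function with a print side effect: usable anywhere an
    # expression is allowed, unlike a print statement.
    print(value)
    return value

prime_numbers = [
    debug(number)
    for number in range(2, 101)
    if all(number % div != 0 for div in range(2, int(number**0.5) + 1))
]
```

Moving `debug(...)` into the condition (e.g. around `number % div`) would instead trace the intermediate divisibility checks.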
<python>
2024-02-15 20:14:10
2
15,782
DCR