Dataset schema (each record below lists these fields in this order):

Column              Type         Min                  Max
QuestionId          int64        74.8M                79.8M
UserId              int64        56                   29.4M
QuestionTitle       string       15 chars             150 chars
QuestionBody        string       40 chars             40.3k chars
Tags                string       8 chars              101 chars
CreationDate        date string  2022-12-10 09:42:47  2025-11-01 19:08:18
AnswerCount         int64        0                    44
UserExpertiseLevel  int64        301                  888k
UserDisplayName     string       3 chars              30 chars
78,183,198
13,944,524
What is the benefit of using import X as X?
<p>I know what the <code>as</code> keyword does in an import statement: we use it to give an object <em>a different</em> name in the module's namespace. It can be a simplified name (like <code>np</code> for numpy) or a completely different name that avoids clashes with other names present in the module.</p> <p>But I've seen many libraries that give the object the exact same name, like:</p> <pre class="lang-py prettyprint-override"><code>from package.module import FOO as FOO </code></pre> <p>Why? Is it different from <code>from package.module import FOO</code>?</p> <p>A couple of examples:</p> <p><a href="https://github.com/tiangolo/fastapi/blob/master/fastapi/__init__.py#L7" rel="nofollow noreferrer">FastAPI</a>:</p> <pre class="lang-py prettyprint-override"><code>from .applications import FastAPI as FastAPI </code></pre> <p><a href="https://github.com/sqlalchemy/sqlalchemy/blob/main/lib/sqlalchemy/__init__.py#L13" rel="nofollow noreferrer">SQLAlchemy</a>:</p> <pre class="lang-py prettyprint-override"><code>from .engine import AdaptedConnection as AdaptedConnection </code></pre> <p>and so many others.</p>
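At runtime the redundant alias changes nothing — both forms bind the same object; its significance is for type checkers, which (under mypy's `--no-implicit-reexport`, implied by `--strict`) treat `import X as X` as an explicit re-export. A stdlib-only sketch, using a throwaway package with hypothetical names written to a temp directory, shows the runtime equivalence:

```python
import os
import sys
import tempfile

# Build a throwaway package (hypothetical names: pkg, pkg.mod, FOO)
# whose __init__ uses the redundant-alias form.
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "pkg"))
with open(os.path.join(tmp, "pkg", "mod.py"), "w") as f:
    f.write("FOO = 42\n")
with open(os.path.join(tmp, "pkg", "__init__.py"), "w") as f:
    # "as FOO" is redundant at runtime; it only marks FOO as an
    # explicit re-export for static type checkers.
    f.write("from .mod import FOO as FOO\n")

sys.path.insert(0, tmp)
import pkg
from pkg.mod import FOO

print(pkg.FOO == FOO == 42)  # True: same object either way
```

So the pattern in FastAPI and SQLAlchemy is about making the public API of `__init__.py` explicit to tooling, not about changing import behavior.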
<python><python-import>
2024-03-18 21:54:13
0
17,004
S.B
78,183,138
4,999,991
Visual Studio Code Not Recognizing findent Installation for Modern Fortran Extension Despite Correct Python Interpreter and PATH Configuration
<p>I am working with the <a href="https://marketplace.visualstudio.com/items?itemName=fortran-lang.linter-gfortran" rel="nofollow noreferrer">Modern Fortran extension</a> in Visual Studio Code on Windows and keep encountering a persistent issue. Despite having correctly installed <code>findent</code>, <code>fortran-language-server</code>, and <code>fprettify</code> using a specific Python interpreter and updating my user <code>settings.json</code> in VS Code to include the necessary path, the extension does not seem to recognize <code>findent</code>, and I continue to receive the following message:</p> <blockquote> <p>Installing findent.exe through pip with --user option<br/>Source: Modern Fortran</p> </blockquote> <p><a href="https://i.sstatic.net/B7Qww.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B7Qww.png" alt="VS Code notification: Installing findent.exe through pip with --user option" /></a></p> <p>Here are the steps I've taken:</p> <ol> <li>Selected the Python interpreter inside Visual Studio Code, explicitly set to <code>C:\Users\&lt;userName&gt;\AppData\Local\Programs\Python\Python312\python.exe</code>.</li> <li>Installed the required packages with the commands: <ul> <li><code>C:\Users\&lt;userName&gt;\AppData\Local\Programs\Python\Python312\Scripts\pip.exe install findent --user</code></li> <li><code>C:\Users\&lt;userName&gt;\AppData\Local\Programs\Python\Python312\Scripts\pip.exe install fortran-language-server</code></li> <li><code>C:\Users\&lt;userName&gt;\AppData\Local\Programs\Python\Python312\Scripts\pip.exe install fprettify</code></li> </ul> </li> <li>Added the following to my <code>settings.json</code> to ensure the VS Code terminal recognizes the scripts installed by <code>pip</code>: <pre class="lang-json prettyprint-override"><code>&quot;terminal.integrated.env.windows&quot;: {
    &quot;PATH&quot;: &quot;${env:PATH};C:\\Users\\&lt;userName&gt;\\AppData\\Roaming\\Python\\Scripts;&quot;
}
</code></pre> </li> </ol> <p>Despite these configurations, indentation, beautifying, and prettifying for Fortran code do not work, and I keep seeing the prompt to install <code>findent</code>.</p> <p>Has anyone faced a similar issue, or can you offer insight into what might be missing or incorrectly configured in my setup?</p>
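One thing worth checking — this is a guess at the mismatch, not a confirmed diagnosis: on Windows, `pip install --user` places console scripts in a version-specific directory (e.g. `...\Roaming\Python\Python312\Scripts`), which would not match the unversioned `...\Roaming\Python\Scripts` added to the PATH above. The interpreter itself can report the exact per-user paths:

```python
import os
import site
import sysconfig

# Where does "pip install --user" actually put things?
# Assumption: pip uses the "scripts" path of the per-user install scheme
# (nt_user on Windows, posix_user elsewhere), which is version-specific.
print("user base   :", site.getuserbase())
print("user site   :", site.getusersitepackages())
print("user scripts:", sysconfig.get_path("scripts", os.name + "_user"))
```

Comparing the printed `user scripts` directory against the PATH entry in `settings.json` would confirm or rule out this mismatch. Note also that `findent` ships an executable, so the script directory (not `site-packages`) is what the extension must find.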
<python><visual-studio-code><pip><environment-variables><fortran>
2024-03-18 21:38:21
1
14,347
Foad S. Farimani
78,183,125
1,174,102
Scrolling causes click (on_touch_up) event on widgets in Kivy RecycleView
<p>Why does scrolling call <code>on_touch_up()</code> in widgets in this Kivy RecycleView?</p> <p>I created a custom <a href="https://kivy.org/doc/stable/api-kivy.uix.settings.html#kivy.uix.settings.SettingItem" rel="nofollow noreferrer">SettingItem</a> for use in Kivy's <a href="https://kivy.org/doc/stable/api-kivy.uix.settings.html" rel="nofollow noreferrer">Settings Module</a>. It's similar to the built-in Kivy <a href="https://kivy.org/doc/stable/api-kivy.uix.settings.html#kivy.uix.settings.SettingOptions" rel="nofollow noreferrer">SettingOptions</a>, except it opens a new screen that lists all of the options. This is more in-line with <a href="https://m1.material.io/patterns/settings.html#settings-usage" rel="nofollow noreferrer">Material Design</a>, and it allows us to display a description of each option. I call it a <code>ComplexOption</code>.</p> <p>Recently I had to create a <code>ComplexOption</code> that included thousands of options: a font picker. Displaying thousands of widgets in a ScrollView caused the app to crash, so I switched to a <a href="https://kivy.org/doc/stable/api-kivy.uix.recycleview.html" rel="nofollow noreferrer">RecycleView</a>. Now there is no performance degradation, but I did notice a strange effect:</p> <h1>The Problem</h1> <p><strong>If a user scrolls &quot;to the end&quot;, the scroll event registers as a click event</strong>. 
This happens in all 4 directions:</p> <ol> <li><p>If a user is at the very &quot;top&quot; and they scroll up</p> </li> <li><p>If a user scrolls &quot;to the left&quot;</p> </li> <li><p>If a user scrolls &quot;to the right&quot;</p> </li> <li><p>If a user is at the very &quot;bottom&quot; and they scroll down</p> </li> </ol> <p>In each case, whatever widget the cursor is over registers a click event: <code>on_touch_up()</code> is called, and therefore my app updates the Config to the font that was under the cursor at the time of the scroll (as if the user had clicked on it).</p> <h1>The code</h1> <p>I've tried my best to reduce the size of my app to a simple example of this behaviour for the purposes of this question. 
Consider the following files</p> <h3>Settings JSON</h3> <p>The following file named <code>settings_buskill.json</code> defines the Settings Panel</p> <pre><code>[ { &quot;type&quot;: &quot;complex-options&quot;, &quot;title&quot;: &quot;Font Face&quot;, &quot;desc&quot;: &quot;Choose the font in the app&quot;, &quot;section&quot;: &quot;buskill&quot;, &quot;key&quot;: &quot;gui_font_face&quot;, &quot;options&quot;: [] } ] </code></pre> <p>Note that the <code>options</code> list gets filled at runtime with the list of fonts found on the system (see <code>main.py</code> below)</p> <h3>Kivy Language (Design)</h3> <p>The following file named <code>buskill.kv</code> defines the app layout</p> <pre><code>&lt;-BusKillSettingItem&gt;: size_hint: .25, None icon_label: icon_label StackLayout: pos: root.pos orientation: 'lr-tb' Label: id: icon_label markup: True # mdicons doesn't have a &quot;nbsp&quot; icon, so we hardcode the icon to # something unimportant and then set the alpha to 00 if no icon is # defined for this SettingItem #text: ('[font=mdicons][size=40sp][color=ffffff00]\ue256[/color][/size][/font]' if root.icon == None else '[font=mdicons][size=40sp]' +root.icon+ '[/size][/font]') text: 'A' size_hint: None, None height: labellayout.height Label: id: labellayout markup: True text: u'{0}\n[size=13sp][color=999999]{1}[/color][/size]'.format(root.title or '', root.value or '') size: self.texture_size size_hint: None, None # set the minimum height of this item so that fat fingers don't have # issues on small touchscreen displays (for better UX) height: max(self.height, dp(50)) &lt;BusKillOptionItem&gt;: size_hint: .25, None height: labellayout.height + dp(10) radio_button_label: radio_button_label StackLayout: pos: root.pos orientation: 'lr-tb' Label: id: radio_button_label markup: True #text: '[font=mdicons][size=18sp]\ue837[/size][/font] ' text: 'B' size: self.texture_size size_hint: None, None height: labellayout.height Label: id: labellayout markup: True text: 
u'{0}\n[size=13sp][color=999999]{1}[/color][/size]'.format(root.value or '', root.desc or '') font_size: '15sp' size: self.texture_size size_hint: None, None # set the minimum height of this item so that fat fingers don't have # issues on small touchscreen displays (for better UX) height: max(self.height, dp(80)) &lt;ComplexOptionsScreen&gt;: color_main_bg: 0.188, 0.188, 0.188, 1 content: content rv: rv # sets the background from black to grey canvas.before: Color: rgba: root.color_main_bg Rectangle: pos: self.pos size: self.size BoxLayout: size: root.width, root.height orientation: 'vertical' RecycleView: id: rv viewclass: 'BusKillOptionItem' container: content bar_width: dp(10) RecycleGridLayout: default_size: None, dp(48) default_size_hint: 1, None size_hint_y: None height: self.minimum_height orientation: 'vertical' id: content cols: 1 size_hint_y: None height: self.minimum_height &lt;BusKillSettingsScreen&gt;: settings_content: settings_content # sets the background from black to grey canvas.before: Rectangle: pos: self.pos size: self.size BoxLayout: size: root.width, root.height orientation: 'vertical' BoxLayout: id: settings_content </code></pre> <h3>main.py</h3> <p>The following file creates the Settings screen, populates the fonts, and displays the RecycleView when the user clicks on the <code>Font Face</code> setting</p> <pre><code>#!/usr/bin/env python3 ################################################################################ # IMPORTS # ################################################################################ import os, operator import kivy from kivy.app import App from kivy.core.text import LabelBase from kivy.core.window import Window Window.size = ( 300, 500 ) from kivy.config import Config from kivy.uix.floatlayout import FloatLayout from kivy.uix.screenmanager import ScreenManager, Screen from kivy.uix.settings import Settings, SettingSpacer from kivy.properties import ObjectProperty, StringProperty, ListProperty, BooleanProperty, 
NumericProperty, DictProperty from kivy.uix.recycleview import RecycleView ################################################################################ # CLASSES # ################################################################################ # recursive function that checks a given object's parent up the tree until it # finds the screen manager, which it returns def get_screen_manager(obj): if hasattr(obj, 'manager') and obj.manager != None: return obj.manager if hasattr(obj, 'parent') and obj.parent != None: return get_screen_manager(obj.parent) return None ################### # SETTINGS SCREEN # ################### # We heavily use (and expand on) the built-in Kivy Settings modules in BusKill # * https://kivy-fork.readthedocs.io/en/latest/api-kivy.uix.settings.html # # Kivy's Settings module does the heavy lifting of populating the GUI Screen # with Settings and Options that are defined in a json file, and then -- when # the user changes the options for a setting -- writing those changes to a Kivy # Config object, which writes them to disk in a .ini file. # # Note that a &quot;Setting&quot; is a key and an &quot;Option&quot; is a possible value for the # Setting. # # The json file tells the GUI what Settings and Options to display, but does not # store state. The user's chosen configuration of those settings is stored to # the Config .ini file. 
# # See also https://github.com/BusKill/buskill-app/issues/16 # We define our own BusKillOptionItem, which is an OptionItem that will be used # by the BusKillSettingComplexOptions class below class BusKillOptionItem(FloatLayout): title = StringProperty('') desc = StringProperty('') value = StringProperty('') parent_option = ObjectProperty() manager = ObjectProperty() def __init__(self, **kwargs): super(BusKillOptionItem, self).__init__(**kwargs) # this is called when the 'manager' Kivy Property changes, which will happen # some short time after __init__() when RecycleView creates instances of # this object def on_manager(self, instance, value): self.manager = value def on_parent_option(self, instance, value): if self.parent_option.value == self.value : # this is the currenty-set option # set the radio button icon to &quot;selected&quot; self.radio_button_label.text = '[size=80sp][sup]\u2022[sup][/size][/font] ' else: # this is not the currenty-set option # set the radio button icon to &quot;unselected&quot; self.radio_button_label.text = '[size=30sp][sub]\u006f[/sub][/size][/font] ' # this is called when the user clicks on this OptionItem (eg choosing a font) def on_touch_up( self, touch ): print( &quot;called BusKillOptionItem().on_touch_up() !!&quot; ) print( touch ) print( &quot;\t&quot; +str(dir(touch)) ) # skip this touch event if it wasn't *this* widget that was touched # * https://kivy.org/doc/stable/guide/inputs.html#touch-event-basics if not self.collide_point(*touch.pos): return # skip this touch event if they touched on an option that's already the # enabled option if self.parent_option.value == self.value: msg = &quot;DEBUG: Option already equals '&quot; +str(self.value)+ &quot;'. 
Returning.&quot; print( msg ) return # enable the option that the user has clicked-on self.enable_option() # called when the user has chosen to change the setting to this option def enable_option( self ): # write change to disk in our persistant buskill .ini Config file key = str(self.parent_option.key) value = str(self.value) msg = &quot;DEBUG: User changed config of '&quot; +str(key) +&quot;' to '&quot; +str(value)+ &quot;'&quot; print( msg ); Config.set('buskill', key, value) Config.write() # change the text of the option's value on the main Settings Screen self.parent_option.value = self.value # loop through every available option in the ComplexOption sub-Screen and # change the icon of the radio button (selected vs unselected) as needed for option in self.parent.children: # is this the now-currently-set option? if option.value == self.parent_option.value: # this is the currenty-set option # set the radio button icon to &quot;selected&quot; option.radio_button_label.text = '[size=80sp][sup]\u2022[sup][/size][/font] ' else: # this is not the currenty-set option # set the radio button icon to &quot;unselected&quot; option.radio_button_label.text = '[size=30sp][sub]\u006f[/sub][/size][/font] ' # We define our own BusKillSettingItem, which is a SettingItem that will be used # by the BusKillSettingComplexOptions class below. Note that we don't have code # here because the difference between the SettingItem and our BusKillSettingItem # is what's defined in the buskill.kv file. that's to say, it's all visual class BusKillSettingItem(kivy.uix.settings.SettingItem): pass # Our BusKill app has this concept of a SettingItem that has &quot;ComplexOptions&quot; # # The closeset built-in Kivy SettingsItem type is a SettingOptions # * https://kivy-fork.readthedocs.io/en/latest/api-kivy.uix.settings.html#kivy.uix.settings.SettingOptions # # SettingOptions just opens a simple modal that allows the user to choose one of # many different options for the setting. 
For many settings, # we wanted a whole new screen so that we could have more space to tell the user # what each setting does # Also, the whole &quot;New Screen for an Option&quot; is more # in-line with Material Design. # * https://m1.material.io/patterns/settings.html#settings-usage # # These are the reasons we create a special BusKillSettingComplexOptions class class BusKillSettingComplexOptions(BusKillSettingItem): # each of these properties directly cooresponds to the key in the json # dictionary that's loaded with add_json_panel. the json file is what defines # all of our settings that will be displayed on the Settings Screen # options is a parallel array of short names for different options for this # setting (eg 'lock-screen') options = ListProperty([]) def on_panel(self, instance, value): if value is None: return self.fbind('on_release', self._choose_settings_screen) def _choose_settings_screen(self, instance): manager = get_screen_manager(self) # create a new screen just for choosing the value of this setting, and # name this new screen &quot;setting_&lt;key&gt;&quot; screen_name = 'setting_' +self.key # did we already create this sub-screen? 
if not manager.has_screen( screen_name ): # there is no sub-screen for this Complex Option yet; create it # create new screen for picking the value for this ComplexOption setting_screen = ComplexOptionsScreen( name = screen_name ) # determine what fonts are available on this system option_items = [] font_paths = set() for fonts_dir_path in LabelBase.get_system_fonts_dir(): for root, dirs, files in os.walk(fonts_dir_path): for file in files[0:10]: if file.lower().endswith(&quot;.ttf&quot;): font_path = str(os.path.join(root, file)) font_paths.add( font_path ) print( &quot;Found &quot; +str(len(font_paths))+ &quot; font files.&quot; ) # create data for each font to push to RecycleView for font_path in font_paths: font_filename = os.path.basename( font_path ) option_items.append( {'title': 'title', 'value': font_filename, 'desc':'', 'parent_option': self, 'manager': manager } ) # sort list of fonts alphabetically and add to the RecycleView option_items.sort(key=operator.itemgetter('value')) setting_screen.rv.data.extend(option_items) # add the new ComplexOption sub-screen to the Screen Manager manager.add_widget( setting_screen ) # change into the sub-screen now manager.current = screen_name # We define BusKillSettings (which extends the built-in kivy Settings) so that # we can add a new type of Setting = 'commplex-options'). The 'complex-options' # type becomes a new 'type' that can be defined in our settings json file class BusKillSettings(kivy.uix.settings.Settings): def __init__(self, *args, **kargs): super(BusKillSettings, self).__init__(*args, **kargs) super(BusKillSettings, self).register_type('complex-options', BusKillSettingComplexOptions) # Kivy's SettingsWithNoMenu is their simpler settings widget that doesn't # include a navigation bar between differnt pages of settings. 
We extend that # type with BusKillSettingsWithNoMenu so that we can use our custom # BusKillSettings class (defined above) with our new 'complex-options' type class BusKillSettingsWithNoMenu(BusKillSettings): def __init__(self, *args, **kwargs): self.interface_cls = kivy.uix.settings.ContentPanel super(BusKillSettingsWithNoMenu,self).__init__( *args, **kwargs ) def on_touch_down( self, touch ): print( &quot;touch_down() of BusKillSettingsWithNoMenu&quot; ) super(BusKillSettingsWithNoMenu, self).on_touch_down( touch ) # The ComplexOptionsScreen is a sub-screen to the Settings Screen. Kivy doesn't # have sub-screens for defining options, but that's what's expected in Material # Design. We needed more space, so we created ComplexOption-type Settings. And # this is the Screen where the user transitions-to to choose the options for a # ComplexOption class ComplexOptionsScreen(Screen): pass # This is our main Screen when the user clicks &quot;Settings&quot; in the nav drawer class BusKillSettingsScreen(Screen): def on_pre_enter(self, *args): # is the contents of 'settings_content' empty? if self.settings_content.children == []: # we haven't added the settings widget yet; add it now # kivy's Settings module is designed to use many different kinds of # &quot;menus&quot; (sidebars) for navigating different sections of the settings. # while this is powerful, it conflicts with the Material Design spec, # so we don't use it. 
Instead we use BusKillSettingsWithNoMenu, which # inherets kivy's SettingsWithNoMenu and we add sub-screens for # &quot;ComplexOptions&quot;; s = BusKillSettingsWithNoMenu() s.root_app = self.root_app # create a new Kivy SettingsPanel using Config (our buskill.ini config # file) and a set of options to be drawn in the GUI as defined-by # the 'settings_buskill.json' file s.add_json_panel( 'buskill', Config, 'settings_buskill.json' ) # our BusKillSettingsWithNoMenu object's first child is an &quot;interface&quot; # the add_json_panel() call above auto-pouplated that interface with # a bunch of &quot;ComplexOptions&quot;. Let's add those to the screen's contents self.settings_content.add_widget( s ) class BusKillApp(App): # copied mostly from 'site-packages/kivy/app.py' def __init__(self, **kwargs): super(App, self).__init__(**kwargs) self.built = False # instantiate our scren manager instance so it can be accessed by other # objects for changing the kivy screen manager = ScreenManager() def build_config(self, config): Config.read( 'buskill.ini' ) Config.setdefaults('buskill', { 'gui_font_face': None, }) Config.write() def build(self): screen = BusKillSettingsScreen(name='settings') screen.root_app = self self.manager.add_widget( screen ) return self.manager ################################################################################ # MAIN BODY # ################################################################################ if __name__ == '__main__': BusKillApp().run() </code></pre> <h1>To Reproduce</h1> <p>To reproduce the issue, create all three of the above files in the same directory on a system with python3 and python3-kivy installed</p> <pre><code>user@host:~$ ls buskill.kv main.py settings_buskill.json user@host:~$ </code></pre> <p>Then execute <code>python3 main.py</code></p> <pre><code>user@host:~$ python3 main.py [INFO ] [Logger ] Record log in /home/user/.kivy/logs/kivy_24-03-18_55.txt [INFO ] [Kivy ] v1.11.1 [INFO ] [Kivy ] Installed at 
&quot;/tmp/kivy_appdir/opt/python3.7/lib/python3.7/site-packages/kivy/__init__.py&quot; [INFO ] [Python ] v3.7.8 (default, Jul 4 2020, 10:00:57) [GCC 9.3.1 20200408 (Red Hat 9.3.1-2)] ... </code></pre> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: center;"><a href="https://i.sstatic.net/a1Yf5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/a1Yf5.png" alt="Screenshot of a simple kivy app displaying a clickable button with the text &quot;Font Face&quot;" /></a></th> <th style="text-align: center;"><a href="https://i.sstatic.net/JbDft.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JbDft.png" alt="Screenshot of a simple kivy app showing a list of font files on a scrollable screen" /></a></th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">Click on the <code>Font Face</code> Setting to change screens to the list of fonts to choose-from</td> <td style="text-align: center;">Scrolling &quot;left&quot; over the <code>Arimo-Italic.ttf</code> font label will erroneously &quot;click&quot; it</td> </tr> </tbody> </table></div> <p>In the app that opens:</p> <ol> <li>Click on the <code>Font Face</code> Setting</li> <li>Hover over any font, and scroll-up</li> <li>Note that the font is erroneously &quot;selected&quot; (as if you clicked on it)</li> <li>Hover over any other font, and scroll to the left</li> <li>Note that the font is erroneously &quot;selected&quot; (as if you clicked on it)</li> <li>Hover over any other font, and scroll to the right</li> <li>Note that the font is erroneously &quot;selected&quot; (as if you clicked on it)</li> </ol> <blockquote> <p><strong>Note</strong> For simplicity, I've replaced the Material Design Icons used to display checked &amp; unchecked radio box icons with simple unicode in the built-in (Roboto) font.</p> <p>So the hollow circle is a crude &quot;unchecked radio box&quot; and the filled-in circle is a crude &quot;checked radio box&quot;</p> 
</blockquote> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: center;"><a href="https://i.sstatic.net/6Lb6K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6Lb6K.png" alt="Screenshot of a Kivy app that looks like an Android app following the Material Design Spec" /></a></th> <th style="text-align: center;"><a href="https://i.sstatic.net/nfyul.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nfyul.png" alt="Screenshot of a Kivy app with many fonts listed on a scrollable screen, including proper material design radio buttons next to each font" /></a></th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">The original app's Settings screen, with icons from the Material Design font</td> <td style="text-align: center;">The original app's font list, with Material Design radio buttons next to each font</td> </tr> </tbody> </table></div> <p>Why does the above app call <code>on_touch_up()</code> when a user scrolls over a widget in the RecycleView?</p>
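One pattern worth noting here — I can't confirm it is the cause in this app, but it matches the symptom: under Kivy's mouse input provider, wheel scrolls are delivered as ordinary touch events whose `button` value is `scrollup`/`scrolldown`/`scrollleft`/`scrollright`, so an `on_touch_up` handler that doesn't filter them will treat a scroll as a click. A minimal stdlib sketch of the filter, with a stand-in touch class since this isn't a runnable Kivy app:

```python
SCROLL_BUTTONS = {"scrollup", "scrolldown", "scrollleft", "scrollright"}

class FakeTouch:
    """Stand-in for a Kivy MotionEvent carrying a mouse 'button' value."""
    def __init__(self, button):
        self.button = button

def is_scroll(touch):
    # Kivy mouse touches expose touch.button; wheel events report a
    # scroll* value instead of 'left'/'right'/'middle'.
    return getattr(touch, "button", "") in SCROLL_BUTTONS

# In BusKillOptionItem.on_touch_up, the guard would go first:
#     if is_scroll(touch):
#         return  # ignore wheel scrolls; only real clicks select a font

print(is_scroll(FakeTouch("scrollup")))  # True
print(is_scroll(FakeTouch("left")))      # False
```

The `getattr` default matters because touches from non-mouse providers (e.g. touchscreens) may not carry a `button` attribute at all.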
<python><android-recyclerview><scroll><kivy><click>
2024-03-18 21:36:56
1
2,923
Michael Altfield
78,183,052
1,028,270
How do I set environment variables in a pytest fixture with the MonkeyPatch context manager?
<p>I'm not using classes or test cases; I'm just using pytest functions (and want to keep it that way).</p> <p>This does not work:</p> <pre><code>@pytest.fixture(scope=&quot;function&quot;)
def set_env():
    with MonkeyPatch.context() as mp:
        mp.setenv(&quot;VAR_ONE&quot;, &quot;123&quot;)
        mp.setenv(&quot;VAR_TWO&quot;, &quot;test&quot;)


def test_blah(set_env):
    print(os.environ[&quot;VAR_ONE&quot;])
    print(os.environ[&quot;VAR_TWO&quot;])
</code></pre> <p>This does:</p> <pre><code>@pytest.fixture(scope=&quot;function&quot;)
def set_env(monkeypatch):
    monkeypatch.setenv(&quot;VAR_ONE&quot;, &quot;123&quot;)
    monkeypatch.setenv(&quot;VAR_TWO&quot;, &quot;test&quot;)


def test_blah(monkeypatch, set_env):
    print(os.environ[&quot;VAR_ONE&quot;])
    print(os.environ[&quot;VAR_TWO&quot;])
</code></pre> <p>I was hoping to avoid passing around monkeypatch fixtures like this. I thought I could use <code>MonkeyPatch</code> to abstract this behind a single fixture. Am I misunderstanding <code>MonkeyPatch</code> as a context manager?</p> <p>The whole pytest fixture magic doesn't play nice with type hinting, so I really want to minimize the fixtures I need to pass around (while still not using test cases and classes).</p>
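The mechanics behind the failing version are generic: a plain fixture function returns before the test body runs, so the `with MonkeyPatch.context()` block has already exited and the env vars are already restored; a generator fixture that yields *inside* the `with` keeps the context open until teardown. A stdlib-only sketch of those two shapes, with a hand-rolled stand-in for `MonkeyPatch.context()`:

```python
import os
from contextlib import contextmanager

@contextmanager
def set_env_ctx(**vars):
    """Stand-in for MonkeyPatch.context(): set env vars, restore on exit."""
    old = {k: os.environ.get(k) for k in vars}
    os.environ.update(vars)
    try:
        yield
    finally:
        for k, v in old.items():
            if v is None:
                os.environ.pop(k, None)
            else:
                os.environ[k] = v

def broken_fixture():
    with set_env_ctx(DEMO_VAR_ONE="123"):
        pass
    # the function returns here -> the context has already cleaned up

def working_fixture():
    with set_env_ctx(DEMO_VAR_ONE="123"):
        yield  # pytest resumes the generator after the test, running cleanup

broken_fixture()
print("DEMO_VAR_ONE" in os.environ)  # False: patch undone before any test ran

gen = working_fixture()
next(gen)                            # setup: the test body would run now
print(os.environ["DEMO_VAR_ONE"])    # 123
try:
    next(gen)                        # teardown
except StopIteration:
    pass
print("DEMO_VAR_ONE" in os.environ)  # False: restored at teardown
```

So the first fixture above should work once it becomes `def set_env(): with MonkeyPatch.context() as mp: ... yield mp` — no `monkeypatch` argument needs to be passed around.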
<python><pytest>
2024-03-18 21:15:38
1
32,280
red888
78,183,024
11,670,196
How to solve RuntimeError: Couldn't find appropriate backend to handle uri dataset/data/0.wav and format None
<p>The problem is that if I try to run <code>metadata = torchaudio.info(path)</code> I get the error message <code>RuntimeError: Couldn't find appropriate backend to handle uri dataset/data/0.wav and format None.</code> And if I run <code>print(str(torchaudio.list_audio_backends()))</code> it returns an empty list.</p> <p>I looked at both the documentation and similar questions, like these: <a href="https://stackoverflow.com/questions/78097861/how-to-solve-runtimeerror-couldnt-find-appropriate-backend-to-handle-uri-in-py/78103260#78103260">How to solve RuntimeError: Couldn&#39;t find appropriate backend to handle uri in python</a>, <a href="https://superuser.com/questions/1819222/how-to-install-sox-for-pytorch-audio/1819866#1819866">https://superuser.com/questions/1819222/how-to-install-sox-for-pytorch-audio/1819866#1819866</a>, and <a href="https://stackoverflow.com/questions/62543843/cannot-import-torch-audio-no-audio-backend-is-available">cannot import torch audio &#39; No audio backend is available.&#39;</a>. According to them I just need sox and libsox, but none of the install commands from the answers helped me.</p> <p>I have installed both sox and libsox-dev. Here are the versions:</p> <ul> <li><code>pip show torchaudio</code> -&gt; <code>... Version: 2.2.1 ...</code></li> <li><code>sox --version</code> -&gt; <code>sox: SoX v14.4.2</code></li> <li><code>ldd $(which sox) | grep libsox</code> -&gt; <code>libsox.so.3 =&gt; /lib/x86_64-linux-gnu/libsox.so.3</code></li> </ul> <p>I have no idea what's wrong and would appreciate any help.</p> <p>P.S. I am using Ubuntu.</p>
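For what it's worth — this is an assumption about the cause, not a confirmed diagnosis: torchaudio 2.x dispatches I/O to optional Python-level backends rather than shelling out to the system `sox` binary, so a system-wide SoX install may have no effect on what `list_audio_backends()` reports, and a missing `soundfile` package is a common reason for the empty list. A quick stdlib check of which candidate packages are importable in the current environment:

```python
import importlib.util

# Hypothetical check list: Python packages whose presence can give
# torchaudio 2.x a usable audio backend (the system `sox` binary is
# deliberately not on this list).
for pkg in ("soundfile", "torchaudio"):
    spec = importlib.util.find_spec(pkg)
    status = "importable" if spec else "missing (try: pip install %s)" % pkg
    print(pkg, "->", status)
```

If `soundfile` turns out to be missing, `pip install soundfile` (plus its `libsndfile` system dependency, which the wheel normally bundles) is worth trying before any further SoX troubleshooting.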
<python><pytorch><sox><libsox>
2024-03-18 21:09:29
1
383
Tobias
78,183,014
3,124,181
Can I use Azure DocumentAnalysisClient with no credentials?
<p>I am trying to use Azure's document analysis client, but I don't use credentials for my Form Recognizer. I can make a simple request call to it and it works fine, like so:</p> <pre><code>import requests

my_endpoint = &quot;http://form_recognizer...?api-version=2022-08-31&quot;
data = &quot;some data&quot;
params = &quot;some params&quot;

requests.post(my_endpoint, headers={'Content-Type': 'application/octet-stream'}, data=data, params=params)
</code></pre> <p>The request works and gives me the results I need. However, I would prefer to use <code>DocumentAnalysisClient</code> because it comes with several methods that would save me weeks of coding. The problem is that this class requires a <code>credential</code> argument, but I am not using credentials for this endpoint. It is an open HTTP endpoint, so how exactly can I use the class without providing any credentials?</p> <pre><code>from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(endpoint=my_endpoint, credential=...)
</code></pre>
<python><azure-form-recognizer>
2024-03-18 21:06:20
1
903
user3124181
78,182,894
6,769,082
Get the name of the group inside pandas groupby transform
<p>Here is what I am trying to do. I have the following DataFrame in pandas:</p> <pre><code>import numpy as np
import pandas as pd

n_cols = 3
n_samples = 4
df = pd.DataFrame(np.arange(n_samples * n_cols).reshape(n_samples, n_cols), columns=list('ABC'))
print(df)
</code></pre> <p>output:</p> <pre><code>   A   B   C
0  0   1   2
1  3   4   5
2  6   7   8
3  9  10  11
</code></pre> <p>I have a category to which each sample (row) belongs:</p> <pre><code>cat = pd.Series([1,1,2,2])
</code></pre> <p>And I have a reference row for each category:</p> <pre><code>df_ref = pd.DataFrame(np.zeros((2, n_cols)), index=[1,2], columns=list('ABC'))
df_ref.loc[1] = 10
print(df_ref)
</code></pre> <p>output:</p> <pre><code>      A     B     C
1  10.0  10.0  10.0
2   0.0   0.0   0.0
</code></pre> <p>How do I do the following in a more elegant way (e.g., using groupby and transform)?</p> <pre><code>result = df.copy()
for i in range(n_samples):
    result.iloc[i] = df.iloc[i] - df_ref.loc[cat[i]]
print(result)
</code></pre> <p>output:</p> <pre><code>    A   B   C
0 -10  -9  -8
1  -7  -6  -5
2   6   7   8
3   9  10  11
</code></pre> <p>I thought something like this should work:</p> <pre><code>df.groupby(cat).transform(lambda x: x - df_ref.loc[x.GROUP_NAME])
</code></pre> <p>where <code>x.GROUP_NAME</code> is accessing the name of the group on which transform is operating. In the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.transform.html#pandas.core.groupby.DataFrameGroupBy.transform" rel="nofollow noreferrer">pandas documentation about transform</a> it is written: &quot;Each group is endowed the attribute ‘name’ in case you need to know which group you are working on.&quot; I tried to access <code>x.name</code>, but that gives the name of a column, not the name of the group. So I don't understand what this documentation is referring to.</p>
<python><pandas><dataframe><group-by>
2024-03-18 20:37:08
2
481
Chachni
78,182,856
6,115,999
How do I add to a particular column in an association table in Flask?
<p>I have a table called SetList which holds the setlists that a user may have. I want users to put songs in the setlist. It's a many-to-many relationship, so here is my SetList class:</p> <pre><code>class SetList(db.Model, UserMixin): id = db.Column(db.Integer, primary_key = True) name = db.Column(db.String(75), nullable=False) owner = db.Column(db.Integer, db.ForeignKey('user.id'), nullable=False) songs = db.relationship('Song', secondary=&quot;setlist_song&quot;, back_populates='setlists') </code></pre> <p>I needed to create an association table to my SetList table. I also want users to be able to rearrange the songs within the setlist as they see fit. So I thought I would add an &quot;order&quot; column in the association table. It looks like this:</p> <pre><code>setlist_song = db.Table('setlist_song', db.Column('song_id', db.Integer, db.ForeignKey('song.id')), db.Column('setlist_id', db.Integer, db.ForeignKey('set_list.id')), db.Column('order', db.Integer, nullable=True) ) </code></pre> <p>I know how to add songs to a setlist, like this:</p> <pre><code>song_id = 1 songtoadd = Song.query.get(1) setlist.songs.append(songtoadd) </code></pre> <p>and it successfully adds it, but this has no effect on the order column. I tried this</p> <pre><code>song_id = 1 order = 0 songtoadd=[Song.query.get(song_id), order] setlist.songs.append(songtoadd) </code></pre> <p>and I get this error:</p> <p><code>AttributeError: 'list' object has no attribute '_sa_instance_state'</code></p> <p>What's the proper way to set the order column within an association table, or am I supposed to do this some other way?</p>
<python><flask><flask-sqlalchemy>
2024-03-18 20:29:39
0
877
filifunk
78,182,788
547,231
How to generate jacobian of a tensor-valued function using torch.autograd?
<p>Computing the jacobian of a function f : R^d -&gt; R^d is not too hard:</p> <pre><code>def jacobian(y, x): k, d = x.shape jacobian = list() for i in range(d): v = torch.zeros_like(y) v[:, i] = 1. dy_dx = torch.autograd.grad(y, x, grad_outputs = v, retain_graph = True, create_graph = True, allow_unused = True)[0] # shape [k, d] jacobian.append(dy_dx) jacobian = torch.stack(jacobian, dim = 1).requires_grad_() return jacobian </code></pre> <p>Above, <code>jacobian</code> is invoked with <code>y = f(x)</code>. However, now I have a function <code>g = g(t, x)</code>, where <code>t</code> is a <code>torch.tensor</code> of shape <code>k</code> and <code>x</code> is a <code>torch.tensor</code> of shape <code>(k, d1, d2, d3)</code>. The result of <code>g</code> is again a <code>torch.tensor</code> of shape <code>(k, d1, d2, d3)</code></p> <p>I've tried to use my already existing <code>jacobian</code> function. What I did was</p> <pre><code>y = g(t, x) x = x.flatten(1) y = y.flatten(1) jacobian(y, x) </code></pre> <p>The problem is that all the time <code>dy_dx</code> is <code>None</code>. The only explanation I have for this is that most probably the dependency graph is broken after the <code>flatten(1)</code> call.</p> <p>So, what can I do here? I should remark that what I actually want to compute is the divergence. That is, the trace of the jacobian. If there is a more performant solution for that specific case available, I'd be interested in that one.</p>
<python><pytorch><autograd><automatic-differentiation>
2024-03-18 20:11:24
1
18,343
0xbadf00d
78,182,726
2,188,011
Saving yolo Model Result as String
<p>I have a model, <code>best.pt</code>, that I'd like to run. It takes an image as input. It classifies an object in this image, a fruit.</p> <pre><code>from PIL import Image from ultralytics import YOLO # Load the pre-trained model model = YOLO('best.pt') # Load the input image input_image = Image.open('fruit.jpeg') # Pass the image through the model output = model(input_image) </code></pre> <p>Running this code outputs:</p> <pre><code>0: 640x640 1 Banana, 70.7ms Speed: 2.3ms preprocess, 70.7ms inference, 0.7ms postprocess per image at shape (1, 3, 640, 640) </code></pre> <p><strong>I'd like to save <code>Banana</code> to a string.</strong></p> <p><code>output</code> is an object of type <code>list</code> with one item in the list, which is an instance of <code>Results</code> from the <code>ultralytics</code> library. When trying to print <code>output</code>, I get the following:</p> <pre><code>ultralytics.engine.results.Results object with attributes: boxes: ultralytics.engine.results.Boxes object keypoints: None masks: None names: {0: 'Apple', 1: 'Banana', 2: 'Blueberry'} obb: None orig_img: array([[[255, 255, 255], [255, 255, 255], [255, 255, 255], ..., [255, 255, 255], [255, 255, 255], [255, 255, 255]], [[255, 255, 255], [255, 255, 255], [255, 255, 255], ..., [255, 255, 255], [255, 255, 255], [255, 255, 255]], [[255, 255, 255], [255, 255, 255], [255, 255, 255], ..., [255, 255, 255], [255, 255, 255], [255, 255, 255]], ..., [[255, 255, 255], [255, 255, 255], [255, 255, 255], ..., [255, 255, 255], [255, 255, 255], [255, 255, 255]], [[255, 255, 255], [255, 255, 255], [255, 255, 255], ..., [255, 255, 255], [255, 255, 255], [255, 255, 255]], [[255, 255, 255], [255, 255, 255], [255, 255, 255], ..., [255, 255, 255], [255, 255, 255], [255, 255, 255]]], dtype=uint8) orig_shape: (225, 225) path: 'fruit.jpeg' probs: None save_dir: 'runs/detect/predict' speed: {'preprocess': 2.3441314697265625, 'inference': 70.66583633422852, 'postprocess': 0.6880760192871094} 
</code></pre> <p>Trying to print <code>output.boxes</code> gives me:</p> <pre><code>ultralytics.engine.results.Boxes object with attributes: cls: tensor([0.]) conf: tensor([0.9832]) data: tensor([[ 63.5629, 44.9124, 153.1798, 183.1818, 0.9832, 0.0000]]) id: None is_track: False orig_shape: (225, 225) shape: torch.Size([1, 6]) xywh: tensor([[108.3714, 114.0471, 89.6169, 138.2694]]) xywhn: tensor([[0.4817, 0.5069, 0.3983, 0.6145]]) xyxy: tensor([[ 63.5629, 44.9124, 153.1798, 183.1818]]) xyxyn: tensor([[0.2825, 0.1996, 0.6808, 0.8141]]) </code></pre> <p>Nowhere in any of these outputs can I determine that the result was <code>Banana</code>.</p> <p>How do I save the result (<code>Banana</code>) to a string?</p>
<python><yolo><ultralytics>
2024-03-18 19:55:45
1
1,293
Fares K. A.
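For what it's worth, a hedged sketch of extracting the class name: assuming the `ultralytics` `Results` API exposes `names` (an id-to-label dict) and `boxes.cls` (a tensor of detected class ids), as the printouts above suggest, the label is a dict lookup. The helper below is demonstrated with plain-Python stand-ins so it runs without the library.

```python
def top_label(names, cls_indices):
    """Return the human-readable name of the first detection, or None.

    names: mapping like {0: 'Apple', 1: 'Banana', 2: 'Blueberry'}
    cls_indices: sequence of class ids (ultralytics exposes this as a tensor)
    """
    if len(cls_indices) == 0:
        return None
    return names[int(cls_indices[0])]

# With a real ultralytics result this would presumably be:
#   result = output[0]
#   label = top_label(result.names, result.boxes.cls)
names = {0: 'Apple', 1: 'Banana', 2: 'Blueberry'}
label = top_label(names, [1.0])  # stand-in for tensor([1.])
```

`int(...)` is needed because the class ids come back as float tensor elements, not plain integers.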
78,182,723
6,197,439
Does _memimporter still exist in py2exe?
<p>I have only recently started with py2exe, and I would like to use py2exe with MINGW64 Python3 programs (and corresponding libraries). However, the first example I tried failed to build.</p> <p>After that, I found the <a href="https://www.py2exe.org/index.cgi/TroubleshootingImportErrors" rel="nofollow noreferrer">https://www.py2exe.org/index.cgi/TroubleshootingImportErrors</a> page, where it is noted:</p> <blockquote> <p>Check that zipextimporter works on your system</p> <p>zipextimporter is the starting component of py2exe that may cause problems. To debug it you will need _memimporter.pyd binary module. These modules can be found in binary py2exe distributions for your Python version (I unpack .exe distribution with 7Zip).</p> </blockquote> <p>There is also a test script, but it is in Python 2 (last update of that page is 2011-01-07); so I converted it to Python 3 syntax:</p> <pre class="lang-python prettyprint-override"><code>import zipextimporter zipextimporter.install() import sys sys.path.insert(0, &quot;lib.zip&quot;) import _socket print(_socket) # &lt;module '_socket' from 'lib.zip\_socket.pyd'&gt; print(_socket.__file__) # 'lib.zip\\_socket.pyd' print(_socket.__loader__) # &lt;ZipExtensionImporter object 'lib.zip'&gt; # Reloading also works correctly: print(_socket is reload(_socket)) # True </code></pre> <p>But when I run it, I get:</p> <pre class="lang-none prettyprint-override"><code>$ python3 test_zipextimporter.py Traceback (most recent call last): File &quot;D:/msys64/tmp/test_zipextimporter.py&quot;, line 1, in &lt;module&gt; import zipextimporter File &quot;D:/msys64/mingw64/lib/python3.11/site-packages/zipextimporter.py&quot;, line 51, in &lt;module&gt; import _memimporter ModuleNotFoundError: No module named '_memimporter' </code></pre> <p>So, as the note above says, I need a <code>_memimporter.pyd</code> binary module; there is no such file in the package directly:</p> <pre class="lang-none prettyprint-override"><code>$ pacman -Ql 
mingw-w64-x86_64-python-py2exe | grep _mem $ </code></pre> <p>However, the note also states &quot;These modules can be found in binary py2exe distributions for your Python version (I unpack .exe distribution with 7Zip).&quot; Currently there are these binary files in the package:</p> <pre class="lang-none prettyprint-override"><code>$ pacman -Ql mingw-w64-x86_64-python-py2exe | grep '\.exe\|\.dll' mingw-w64-x86_64-python-py2exe /mingw64/lib/python3.11/site-packages/py2exe/resources.dll mingw-w64-x86_64-python-py2exe /mingw64/lib/python3.11/site-packages/py2exe/run-py311-mingw_x86_64.exe mingw-w64-x86_64-python-py2exe /mingw64/lib/python3.11/site-packages/py2exe/run_ctypes_dll-py311-mingw_x86_64.dll mingw-w64-x86_64-python-py2exe /mingw64/lib/python3.11/site-packages/py2exe/run_w-py311-mingw_x86_64.exe </code></pre> <p>None of these can be listed with say <code>unzip -l</code>; but they can be listed with <code>7z l</code> - unfortunately, I see nothing there resembling <code>_memimporter.pyd</code> - both .exes show this structure of files:</p> <pre class="lang-none prettyprint-override"><code>$ 7z l /mingw64/lib/python3.11/site-packages/py2exe/run_w-py311-mingw_x86_64.exe ... Date Time Attr Size Compressed Name ------------------- ----- ------------ ------------ ------------------------ 2023-12-27 10:55:20 ..... 24064 24064 .text 2023-12-27 10:55:20 ..... 1024 1024 .data 2023-12-27 10:55:20 ..... 6656 6656 .rdata 2023-12-27 10:55:20 ..... 2048 2048 .pdata 2023-12-27 10:55:20 ..... 2048 2048 .xdata 2023-12-27 10:55:20 ..... 0 0 .bss 2023-12-27 10:55:20 ..... 2048 2048 .edata 2023-12-27 10:55:20 ..... 4096 4096 .idata 2023-12-27 10:55:20 ..... 512 512 .CRT 2023-12-27 10:55:20 ..... 512 512 .tls ..... 766 744 .rsrc/1033/ICON/1.ico ..... 20 20 .rsrc/1033/GROUP_ICON/1 ..... 1167 1167 .rsrc/0/MANIFEST/1 2023-12-27 10:55:20 ..... 
512 512 .reloc ------------------- ----- ------------ ------------ ------------------------ 2023-12-27 10:55:20 45473 45451 14 files </code></pre> <p>I've tried to compare with the vanilla py2exe download, and results are similar:</p> <pre class="lang-none prettyprint-override"><code>$ wget https://files.pythonhosted.org/packages/b1/07/f45b201eb8c3fea1af6a9bd9f733479aa9d009139ce2396e06db7aa778c8/py2exe-0.13.0.1-cp311-cp311-win_amd64.whl # ... $ mkdir py2exe_0.13.0.1 $ (cd py2exe_0.13.0.1; unzip ../py2exe-0.13.0.1-cp311-cp311-win_amd64.whl) # ... $ 7z l py2exe_0.13.0.1/py2exe/run-py3.11-win-amd64.exe # ... Date Time Attr Size Compressed Name ------------------- ----- ------------ ------------ ------------------------ 2023-10-07 18:15:11 ..... 20480 20480 .text 2023-10-07 18:15:11 ..... 11264 11264 .rdata 2023-10-07 18:15:11 ..... 512 512 .data 2023-10-07 18:15:11 ..... 2048 2048 .pdata ..... 766 744 .rsrc/ICON/1.ico ..... 20 20 .rsrc/GROUP_ICON/1 ..... 381 381 .rsrc/MANIFEST/1 2023-10-07 18:15:11 ..... 512 512 .reloc ------------------- ----- ------------ ------------ ------------------------ 2023-10-07 18:15:11 35983 35961 8 files </code></pre> <p>I guess this <code>_memimporter</code> is still a thing, because after all my test script fails with &quot;No module named '_memimporter'&quot;; and also there are still references in the Python code of the package:</p> <pre class="lang-none prettyprint-override"><code>$ grep -rI _memimporter /mingw64/lib/python3.11/site-packages/py2exe /mingw64/lib/python3.11/site-packages/py2exe/distutils_buildexe.py:## self.excludes.append(&quot;_memimporter&quot;) # builtin in run_*.exe and run_*.dll /mingw64/lib/python3.11/site-packages/py2exe/hooks.py:# _memimporter can be excluded because it is built into the run-stub. /mingw64/lib/python3.11/site-packages/py2exe/hooks.py:_memimporter </code></pre> <p>... but, I have to ask - is this <code>_memimporter.pyd</code> still a thing - and if so, where do I find it?</p>
<python><python-3.x><py2exe><mingw-w64>
2024-03-18 19:55:29
1
5,938
sdbbs
78,182,561
921,527
Minimum cases of n choose k with respect to n choose q
<p>I have a list</p> <pre><code>people = ['P1', 'P2', 'P3', 'P4', 'P5', 'P6', 'P7'] allComb4 = list(itertools.combinations(people,4)) # n choose k #[('P1', 'P2', 'P3', 'P4'), ('P1', 'P2', 'P3', 'P5'), ('P1', 'P2', 'P3', 'P6'), ('P1', 'P2', 'P3', 'P7'), ('P1', 'P2', 'P4', 'P5'), ('P1', 'P2', 'P4', 'P6'), ('P1', 'P2', 'P4', 'P7'), ('P1', 'P2', 'P5', 'P6'), ('P1', 'P2', 'P5', 'P7'), ('P1', 'P2', 'P6', 'P7'), ('P1', 'P3', 'P4', 'P5'), ('P1', 'P3', 'P4', 'P6'), ('P1', 'P3', 'P4', 'P7'), ('P1', 'P3', 'P5', 'P6'), ('P1', 'P3', 'P5', 'P7'), ('P1', 'P3', 'P6', 'P7'), ('P1', 'P4', 'P5', 'P6'), ('P1', 'P4', 'P5', 'P7'), ('P1', 'P4', 'P6', 'P7'), ('P1', 'P5', 'P6', 'P7'), ('P2', 'P3', 'P4', 'P5'), ('P2', 'P3', 'P4', 'P6'), ('P2', 'P3', 'P4', 'P7'), ('P2', 'P3', 'P5', 'P6'), ('P2', 'P3', 'P5', 'P7'), ('P2', 'P3', 'P6', 'P7'), ('P2', 'P4', 'P5', 'P6'), ('P2', 'P4', 'P5', 'P7'), ('P2', 'P4', 'P6', 'P7'), ('P2', 'P5', 'P6', 'P7'), ('P3', 'P4', 'P5', 'P6'), ('P3', 'P4', 'P5', 'P7'), ('P3', 'P4', 'P6', 'P7'), ('P3', 'P5', 'P6', 'P7'), ('P4', 'P5', 'P6', 'P7')] allComb2 = list(itertools.combinations(people,2)) # n choose q # [('P1', 'P2'), ('P1', 'P3'), ('P1', 'P4'), ('P1', 'P5'), ('P1', 'P6'), ('P1', 'P7'), ('P2', 'P3'), ('P2', 'P4'), ('P2', 'P5'), ('P2', 'P6'), ('P2', 'P7'), ('P3', 'P4'), ('P3', 'P5'), ('P3', 'P6'), ('P3', 'P7'), ('P4', 'P5'), ('P4', 'P6'), ('P4', 'P7'), ('P5', 'P6'), ('P5', 'P7'), ('P6', 'P7')] </code></pre> <p>I need to find in <code>allComb4</code> minimum number of elements with respect of <code>allComb2</code>. Desired result like bellow.</p> <pre><code>output = [['P1', 'P2', 'P5', 'P6'], ['P1', 'P3', 'P4', 'P7'], ['P2', 'P3', 'P5', 'P7'], ['P2', 'P4', 'P5', 'P6'], ['P3', 'P4', 'P6', 'P7']] </code></pre> <p>That means, any pair I pick up from <code>allComb2</code> I will find that pair elements in one element of <code>output</code>. How can I do that?</p> <p>LE: Always q &lt; k</p>
<python><python-itertools><combinatorics><set-cover>
2024-03-18 19:16:04
2
509
Ciprian
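A sketch of one practical approach to the covering question above: finding a true minimum cover is a set-cover problem (NP-hard in general), but a greedy heuristic — repeatedly take the k-subset that covers the most still-uncovered q-subsets — is simple and often optimal at this size. Note this is a heuristic, not a guaranteed minimum.

```python
from itertools import combinations

people = ['P1', 'P2', 'P3', 'P4', 'P5', 'P6', 'P7']
k, q = 4, 2

quads = list(combinations(people, k))        # n choose k candidates
uncovered = set(combinations(people, q))     # n choose q targets

# Greedy set cover: at each step pick the quad covering the most
# still-uncovered pairs, then mark those pairs as covered.
cover = []
while uncovered:
    best = max(quads, key=lambda s: len(uncovered & set(combinations(s, q))))
    cover.append(best)
    uncovered -= set(combinations(best, q))
```

Every pair from `allComb2` then appears inside at least one quad of `cover`.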
78,182,306
14,923,149
Extracting NCBI RefSeq and Submitted GenBank assembly accession numbers using Selenium and BeautifulSoup
<p><a href="https://stackoverflow.com/questions/78178650/title-difficulty-extracting-genbank-accession-number-using-species-and-strain-n/78179474#78179474">Title: Difficulty Extracting GenBank Accession Number Using Species and Strain Name, using webscraping (Using BeautifulSoup or Selenium)</a> Following this post, I'm attempting to extract NCBI RefSeq and Submitted GenBank assembly accession numbers from a webpage using Selenium and BeautifulSoup in Python. However, I'm encountering an issue where the previous code doesn't work for genomes with a single assembly, as it opens a different page.</p> <p>To address this, I've tried a different approach:</p> <p>This codes</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.common.exceptions import TimeoutException, NoSuchElementException # Define the search term search_term = &quot;Streptomyces anthocyanicus NBC 01687&quot; # Open a Chrome browser driver = webdriver.Chrome() # Construct the search URL for assembly search_url = f&quot;https://www.ncbi.nlm.nih.gov/assembly/?term={search_term.replace(' ', '+')}&quot; # Navigate to the search URL driver.get(search_url) try: # Wait for the main content to be visible main_content = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.ID, &quot;maincontent&quot;))) # Find the assembly information assembly_info = main_content.text if main_content else &quot;Assembly information not found&quot; #print(assembly_info) # Extract GenBank and RefSeq assembly IDs if the assembly widget is present try: assembly_table = driver.find_element(By.CLASS_NAME, &quot;assembly-widget&quot;) rows = assembly_table.find_elements(By.TAG_NAME, &quot;tr&quot;) for row in rows: cells = row.find_elements(By.TAG_NAME, &quot;td&quot;) if len(cells) == 3: label = cells[1].text.strip() assembly_id = 
cells[2].text.strip() if label == &quot;NCBI RefSeq assembly&quot;: print(&quot;NCBI RefSeq assembly:&quot;, assembly_id) elif label == &quot;Submitted GenBank assembly&quot;: print(&quot;Submitted GenBank assembly:&quot;, assembly_id) except NoSuchElementException: print(&quot;Assembly information widget not found.&quot;) except TimeoutException: print(&quot;Elements not found or timed out waiting for them to appear.&quot;) # Initialize variables to store assembly IDs genbank_assembly = None refseq_assembly = None # Split the assembly information into lines and iterate over them lines = assembly_info.split(&quot;\n&quot;) for i in range(len(lines)): if &quot;NCBI RefSeq assembly&quot; in lines[i]: refseq_assembly = lines[i+1].strip() elif &quot;Submitted GenBank assembly&quot; in lines[i]: genbank_assembly = lines[i+1].strip() # Print the assembly IDs if found if refseq_assembly: print(&quot;NCBI RefSeq assembly:&quot;, refseq_assembly) if genbank_assembly: print(&quot;Submitted GenBank assembly:&quot;, genbank_assembly) # Close the browser driver.quit() </code></pre> <p>output is</p> <pre><code>Assembly information widget not found. 
NCBI RefSeq assembly: GCF_036226945.1 Submitted GenBank assembly: GCA_036226945.1 </code></pre> <p>However, this code scrapes the entire page text, from which I then parse out the RefSeq and GenBank accession numbers.</p> <p>But I think this is not a good way; there should be a more correct way to achieve it.</p> <p>Another way I found is:</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.common.exceptions import NoSuchElementException # Define the search term search_term = &quot;Streptomyces anthocyanicus NBC 01687&quot; # Open a Chrome browser driver = webdriver.Chrome() try: # Construct the search URL for assembly search_url = f&quot;https://www.ncbi.nlm.nih.gov/assembly/?term={search_term.replace(' ', '+')}&quot; # Navigate to the search URL driver.get(search_url) # Find elements containing the organism name elements = driver.find_elements(By.XPATH, &quot;//*[contains(text(), 'NCBI RefSeq assembly')]&quot;) #{search_term} if elements: print(f&quot;Text '{search_term}' found on the webpage.&quot;) # Loop through elements containing the organism name for element in elements: # Find the parent element of the matched element parent_element = element.find_element(By.XPATH, &quot;..&quot;) # for sibling&quot;following-sibling::*[1]&quot; #for parents &quot;..&quot; and for grand parents &quot;../..&quot; # Print the text content of the parent element print(&quot;Parent element:&quot;) print(parent_element.text) else: print(f&quot;Text '{search_term}' not found on the webpage.&quot;) except Exception as e: print(&quot;An error occurred:&quot;, e) finally: # Quit the browser driver.quit() </code></pre> <p>But I want to do it the same way as I did in the previous question <a href="https://stackoverflow.com/questions/78178650/title-difficulty-extracting-genbank-accession-number-using-species-and-strain-n/78179474#78179474">Title: Difficulty Extracting GenBank Accession Number Using Species and Strain Name, using webscraping (Using BeautifulSoup or Selenium)</a> for
this page, so I can collect all the information in one script. Could someone please suggest the proper code to achieve this?</p> <p>Thank you in advance!</p>
<python><selenium-webdriver><beautifulsoup><biopython>
2024-03-18 18:21:42
1
504
Umar
78,182,041
400,691
How to create a context manager which is NOT a decorator?
<p>I have a function which looks something like this:</p> <pre class="lang-py prettyprint-override"><code>import contextlib @contextlib.contextmanager def special_context(...): ... yield ... </code></pre> <p>It is appropriate for this to be used as a context manager, like this:</p> <pre class="lang-py prettyprint-override"><code>with special_context(...): ... </code></pre> <p>... but it is <em>not</em> appropriate for it to be used as a decorator:</p> <pre class="lang-py prettyprint-override"><code># Not OK: @special_context(...) def foo(): ... </code></pre> <p>I understand that Python 3.2 added decorator support to <code>contextlib.contextmanager</code>, but in my API it indicates a mistake which causes bugs. I like the ergonomics of <code>contextlib.contextmanager</code>, but I would like to prevent the API from being misused.</p> <p>Is there a similar construct available (ideally in the standard libs) which would make <code>special_context</code> a context manager, but not a decorator?</p> <p>Specifically, I want something like this:</p> <pre class="lang-py prettyprint-override"><code>@contextmanager_without_decorator def special_context(...): ... yield ... </code></pre> <p>Please help me to find or define <code>contextmanager_without_decorator</code>.</p>
<python><python-decorators>
2024-03-18 17:34:18
3
9,184
meshy
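One way to define the requested `contextmanager_without_decorator` (a sketch; the wrapper names are illustrative): delegate to `contextlib.contextmanager`, but hand back a plain object that implements only `__enter__`/`__exit__`. Because that object is not callable, accidental use as a decorator fails immediately with a `TypeError`.

```python
import contextlib
import functools


def contextmanager_without_decorator(func):
    """Like contextlib.contextmanager, but the result only supports `with`."""
    factory = contextlib.contextmanager(func)

    class _WithOnly:
        # Exposes only the context-manager protocol; deliberately NOT callable,
        # so @special_context(...) applied to a function raises TypeError.
        def __init__(self, cm):
            self._cm = cm

        def __enter__(self):
            return self._cm.__enter__()

        def __exit__(self, *exc):
            return self._cm.__exit__(*exc)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return _WithOnly(factory(*args, **kwargs))

    return wrapper


@contextmanager_without_decorator
def special_context(value):
    yield value * 2
```

`with special_context(21) as v:` works as before, while `@special_context(...)` fails at definition time because decorating calls the `_WithOnly` instance, which is not callable.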
78,181,866
2,188,011
Simple Torch Model Test: ModuleNotFoundError: No module named 'ultralytics.yolo'
<p>I have a model, <code>best.pt</code>, that I'd like to run. It takes an image as input, and outputs a string.</p> <p>I have <code>ultralytics</code>, <code>torch</code> and <code>torchvision</code> installed.</p> <p>My code is simple:</p> <pre><code>import torch from PIL import Image # Load the pre-trained model model = torch.load('best.pt') # Load the input image input_image = Image.open('input_image.jpg') # Pass the image through the model output = model(input_image) # Print the output print(output) </code></pre> <p>The result is as follows:</p> <pre><code>Traceback (most recent call last): File &quot;/Users/fares/project/model/main.py&quot;, line 5, in &lt;module&gt; model = torch.load('best.pt') ^^^^^^^^^^^^^^^^^^^^^ File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/serialization.py&quot;, line 1026, in load return _load(opened_zipfile, ^^^^^^^^^^^^^^^^^^^^^ File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/serialization.py&quot;, line 1438, in _load result = unpickler.load() ^^^^^^^^^^^^^^^^ File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/serialization.py&quot;, line 1431, in find_class return super().find_class(mod_name, name) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ModuleNotFoundError: No module named 'ultralytics.yolo' </code></pre> <p>What am I doing wrong?</p>
<python><pytorch><yolov5><ultralytics>
2024-03-18 17:04:49
1
1,293
Fares K. A.
78,181,822
11,235,680
Return Parent Child json serialized object with SQLAlchemy - lazy loading issue
<p>I'm trying to use a generic query to select a parent class with all its child classes.</p> <p>The query looks like this:</p> <pre><code>def get_data(session: Session, table, data_filter): try: data = session.query(table).filter_by(**data_filter).first() return data except SQLAlchemyError as e: raise e </code></pre> <p>I'm not sure if the issue is the lazy loading but whenever I run the server I only get the data regarding the parent class. Here's an example of the model I'm using:</p> <pre><code>class User(Base): __tablename__ = &quot;user_account&quot; email: Mapped[str] = mapped_column(String(60), primary_key=True) username: Mapped[str] = mapped_column(String(30)) password: Mapped[str] = mapped_column(BYTEA(60)) firstname: Mapped[str] = mapped_column(String(30)) lastname: Mapped[str] = mapped_column(String(30)) domainName = relationship(&quot;DomainName&quot;, cascade=&quot;all, delete-orphan&quot;, backref=&quot;user_account&quot;, lazy=False) class DomainName(Base): __tablename__ = &quot;user_domain&quot; domain_id: Mapped[int] = mapped_column(primary_key=True) user_id: Mapped[int] = mapped_column(ForeignKey(&quot;user_account.email&quot;)) domain_name: Mapped[str] = mapped_column(String(30)) </code></pre> <p>My service just returns the result of the get_data call. If I use the debugger I can see all the data (in the user object the &quot;domainName&quot; child object is populated without even interacting with the IDE). But when returned to Postman, I only have the User fields (minus DomainName):</p> <pre><code>{ &quot;email&quot;: &quot;r@tester.com&quot;, &quot;username&quot;: &quot;r&quot;, &quot;password&quot;: &quot;&quot;, &quot;firstname&quot;: &quot;r&quot;, &quot;lastname&quot;: &quot;o&quot; } </code></pre>
<python><sqlalchemy><fastapi>
2024-03-18 16:57:47
1
316
Bouji
78,181,726
13,176,726
Django Admin "Export selected" button not showing in Django Admin
<p>I'm trying to enable the &quot;Export selected&quot; button in the Django admin for users to download data as an Excel sheet. I'm using django-import-export but the button isn't appearing.</p> <p><em><strong>Here's what I've done:</strong></em> Installed django-import-export (pip install django-import-export).</p> <p><strong>Trial 1:</strong></p> <pre><code>class UserAdmin(ImportExportModelAdmin): list_display = ('username', 'email'....) admin.site.unregister(User) admin.site.register(User, ImportExportModelAdmin) </code></pre> <p><strong>Trial 2:</strong></p> <pre><code>class UserAdmin(ExportMixin, admin.ModelAdmin): list_display = ('username', 'email'.....) admin.site.unregister(User) admin.site.register(User, UserAdmin) </code></pre> <p>Restarted the development server.</p> <p>django-import-export is in INSTALLED_APPS in settings.py</p> <p>Expected behavior: The &quot;Export selected&quot; button should appear in the Django admin user list view.</p> <p>Actual behavior: The button is not displayed.</p> <p><strong>My Question</strong>: Why is the button not showing, and how can I fix it?</p> <p>Any suggestions or insights into why the button might not be showing would be greatly appreciated.</p>
<python><django><django-import-export>
2024-03-18 16:42:01
1
982
A_K
78,181,708
181,783
Coverage of process spawned by pytest
<p>I am trying to get coverage on a Python process spawned by pytest. Here are the steps I took:</p> <ol> <li>Create a sitecustomize.py module in my local site packages directory</li> </ol> <pre class="lang-py prettyprint-override"><code>#/home/Olumide/.local/lib/python3.10/site-packages/sitecustomize.py import coverage coverage.process_startup() </code></pre> <ol start="2"> <li>Set the <code>COVERAGE_PROCESS_START</code> environment variable as follows <code>export COVERAGE_PROCESS_START=True</code></li> <li>Run the test <code>coverage run --rcfile=.coveragerc -m pytest tests/gui/test_screenshots.py</code> where the current directory contains the the .coveragerc file</li> </ol> <pre><code>[run] source = src/ parallel = True relative_files = True omit = **/tests/* </code></pre> <p>Note that the script <code>tests/gui/test_screenshots.py</code> launches an external python application that I want coverage on.</p> <p>Unfortunately I'm still getting the warning:</p> <pre><code>/home/Olumide/repos/app/3.10_env/lib/python3.10/site-packages/coverage/control.py:887: CoverageWarning: No data was collected. (no-data-collected) self._warn(&quot;No data was collected.&quot;, slug=&quot;no-data-collected&quot;) </code></pre> <hr /> <p><strong>Update</strong></p> <p>Found a 53248 byte .coverage file (called .coverage.Ubuntu-22.5371.XcjyqXNx) in the directory from which I ran the test. I can generate an HTML report from this file via the command:</p> <p><code>coverage html --data-file=.coverage.Ubuntu-22.5371.XcjyqXNx</code>.</p> <p>So it looks like I've got coverage! Oddly though, only the <code>__init__.py</code> files have coverage stats (100%). I wonder whether this is what the warning message meant.</p>
<python><pytest><coverage.py>
2024-03-18 16:38:58
1
5,905
Olumide
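One detail worth checking in the steps above (hedged, based on coverage.py's documentation on measuring sub-processes): `COVERAGE_PROCESS_START` is expected to hold the path of the configuration file, not a boolean, so `export COVERAGE_PROCESS_START=True` would make the spawned process look for a config file literally named `True`. A sketch:

```shell
# Point the variable at the rc file itself; an absolute path is safest
# because the spawned process may run from a different working directory.
export COVERAGE_PROCESS_START="$PWD/.coveragerc"
printf '%s\n' "$COVERAGE_PROCESS_START"
# then run the test exactly as before:
#   coverage run --rcfile=.coveragerc -m pytest tests/gui/test_screenshots.py
```

With `parallel = True` set, each process writes its own `.coverage.*` data file, which matches the `.coverage.Ubuntu-22.5371.XcjyqXNx` file found in the update; `coverage combine` merges them before reporting.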
78,181,604
11,628,437
How to subtract pandas columns for specific groups in a multi-index dataframe?
<p>I'd like to subtract the average row of every group with it's corresponding sub_column. This implies I need to difference the <code>Dribbling_Speed_Team_Blue</code> with <code>Dribbling_Speed</code> corresponding to <code>Best Player Statistics</code> Therefore, the final row (<code>Difference</code>) will have the following values -</p> <p><a href="https://i.sstatic.net/v4JbK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v4JbK.png" alt="enter image description here" /></a></p> <p>This is my prior work to generate this dataframe -</p> <p>I created a multi-index panda dataframe using a nested dictionary -</p> <pre><code>import pandas as pd nested_dict = { 'Game':{ 'Basketball': { 'Player Statistics': { 'Dribbling_Speed_Team_Blue': { 'Player_A': 1, 'Player_B': 3 }, 'Dribbling_Speed_Team_Red': { 'Player_A': 2, 'Player_B': 4 } }, 'Best Player Statistics': { 'Dribbling_Speed': { 'Player': 20, } } }, 'Football': { 'Best Player Statistics': { 'Kicking_Power': { 'Player_A': 12, 'Player_B': 8 } }, 'Player Statistics': { 'Kicking_Power_Team_Blue': { 'Player': 40, }, 'Kicking_Power_Team_Red': { 'Player': 40, } } }, } } </code></pre> <p>Then I performed the following operations on it -</p> <pre><code>out = pd.json_normalize(nested_dict) out.columns = out.columns.str.split('.', expand=True) sum_data = out.groupby(level=[0, 1,2,3], axis = 1).sum() count_data = out.groupby(level=[0, 1,2,3], axis = 1).count() result_df = pd.concat([sum_data, count_data], axis=0, keys=['Sum', 'Count']) result_df.index = result_df.index.droplevel(-1) result_df.loc['avg'] = result_df.loc['Sum']/result_df.loc['Count'] </code></pre> <p>Doing this gave the following result -</p> <p><a href="https://i.sstatic.net/jzO8b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jzO8b.png" alt="enter image description here" /></a></p> <p>Please let me know if something is unclear. I am quite new to the process and therefore don't know where to start.</p>
<python><pandas>
2024-03-18 16:21:45
1
1,851
desert_ranger
78,181,518
6,151,828
Does scikit-learn train_test_split copy data?
<p>Does the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html" rel="nofollow noreferrer"><code>train_test_split</code></a> method of scikit-learn duplicate the data? In other words, if I work with a large dataset, <code>X, y</code>, does it mean that after performing something like</p> <pre><code>X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=2023) </code></pre> <p>my data uses twice as much memory as the original dataset? Or is there some scikit-learn (or basic Python) magic that prevents it? (E.g., as using <a href="https://stackoverflow.com/questions/62304176/how-to-find-out-dataframe-to-numpy-did-not-create-a-copy"><code>.to_numpy()</code> does not necessarily lead to data duplication</a>)</p> <p>If the memory use does double, what is the best practical way around this problem? Perhaps, something like</p> <pre><code>X, X_test, y, y_test = train_test_split(X, y, test_size=0.2, random_state=2023) </code></pre> <p>?</p> <p><strong>Remark</strong><br /> Using <code>np.shares_memory(X_train, X)</code> suggests that the data is indeed duplicated.</p>
<python><machine-learning><scikit-learn><training-data>
2024-03-18 16:08:22
0
803
Roger V.
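On the copying question above, a sketch of why doubling is expected: `train_test_split` selects rows with integer index arrays, and NumPy fancy indexing always materializes a copy (unlike basic slicing, which returns a view). The demonstration below uses NumPy only; the shuffling mimics, but is not, scikit-learn's internal code.

```python
import numpy as np

rng = np.random.default_rng(2023)
X = rng.random((1000, 5))

# train_test_split ultimately performs row selection like this:
perm = rng.permutation(len(X))
n_test = int(0.2 * len(X))
X_test = X[perm[:n_test]]    # fancy indexing -> always a copy
X_train = X[perm[n_test:]]   # fancy indexing -> always a copy

# Basic slicing, by contrast, returns a view on the same buffer:
head = X[:200]
```

So train and test together roughly duplicate the data; the practical workaround when memory is tight is to keep only the index arrays and rebind (or delete) the original `X` so it can be garbage-collected.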
78,181,494
6,195,489
Pandas read_csv works but pyarrow doesn't
<p>I have a CSV file, which is tab-separated. The following code:</p> <pre><code>import numpy as np import sys import pyarrow.csv as pa_csv import pandas as pd df = pd.read_csv(sys.argv[1],sep='\t',header=0,dtype='object') parse_options = pa_csv.ParseOptions(delimiter='\t') data = pa_csv.read_csv(sys.argv[1], parse_options=parse_options) </code></pre> <p>fails on the pyarrow read.</p> <p>Having looked at the data I have been given, it seems the number of columns varies:</p> <pre><code>awk '{print NF}' data.csv: 200651 200651 200651 200653 200651 200651 200651 </code></pre> <p>How does pandas handle this case, and why doesn't pyarrow do the same?</p> <p>Can pyarrow be forced to behave in the same way?</p> <p><strong>EDIT</strong></p> <p>The number of columns doesn't vary. I didn't pass the tab as a delimiter to awk.</p> <pre><code>awk -F'\t' '{print NF}' 200669 200669 200669 200669 200669 200669 200669 200669 </code></pre> <p>So what is causing the error?</p> <p><strong>Update</strong></p> <p>Adding</p> <pre><code>read_options=pa_csv.ReadOptions(block_size=1e9) </code></pre> <p>solved the issue. I guess it is down to the number of columns being large.</p>
<python><pandas><csv><pyarrow>
2024-03-18 16:05:12
0
849
abinitio
78,181,458
10,908,375
How do I get the rolling proportion between multiple columns?
<p>For every row, I want to have a proportion of the total values (sales). For instance, for some row, we would take the total of the 2 past values of two columns, and compute the proportion of each column.</p> <p>Let's say we have the following dataset:</p> <pre><code>import pandas as pd df = pd.DataFrame({ 'factory1sales': [0, 1, 2, 3, 4], 'factory2sales': [5, 6, 7, 8, 9] }) </code></pre> <pre><code> factory1sales factory2sales rolling_proportion_factory1 rolling_proportion_factory2 0 0 5 1 1 6 2 2 7 3 3 8 4 4 9 0.25 0.75 </code></pre> <p>The rolling proportion of sales for <code>factory1</code> would be (with a window of 2):</p> <pre><code>(2 + 3) / ((2 + 3) + (7 + 8)) = 0.25 </code></pre> <p>How can I achieve this? I know it's probably going to be a combination of <code>pd.shift</code>, <code>pd.rolling</code>, etc.</p>
<python><pandas>
2024-03-18 16:00:01
2
36,924
Nicolas Gervais
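A sketch of the `shift`/`rolling` combination the question anticipates, assuming (as in the worked example) that the window covers the two *past* rows, so the current row is excluded via `shift(1)`:

```python
import pandas as pd

df = pd.DataFrame({
    'factory1sales': [0, 1, 2, 3, 4],
    'factory2sales': [5, 6, 7, 8, 9],
})

# Sum of the 2 *past* values: shift(1) keeps the current row out of the window.
past = df.shift(1).rolling(2).sum()
total = past['factory1sales'] + past['factory2sales']

df['rolling_proportion_factory1'] = past['factory1sales'] / total
df['rolling_proportion_factory2'] = past['factory2sales'] / total

# Last row: (2 + 3) / ((2 + 3) + (7 + 8)) = 0.25
print(df[['rolling_proportion_factory1', 'rolling_proportion_factory2']].iloc[-1].tolist())
```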
78,181,389
8,684,461
getting tickers from interactive brokers using post requests
<p>Hi all, for some reason Interactive Brokers doesn't make it easy to get tickers from their site. I currently get them via their exchange pages using a normal request query. However, this is becoming a bit less reliable. I am trying to implement an imitation of their product search <a href="https://www.interactivebrokers.co.uk/en/trading/products-exchanges.php#/" rel="nofollow noreferrer">https://www.interactivebrokers.co.uk/en/trading/products-exchanges.php#/</a></p> <p>However, I am having some problems getting it to work as I am newish to this sort of web scraping.</p> <p>This is my current code</p> <pre><code>url = &quot;https://www.interactivebrokers.co.uk/IBSales/servlet/exchange?apiPath=getProductsByFilters&quot; payload = {&quot;pageNumber&quot;:1,&quot;pageSize&quot;:&quot;100&quot;,&quot;sortField&quot;:&quot;symbol&quot;,&quot;sortDirection&quot;:&quot;ASC&quot;,&quot;product_country&quot;:[&quot;GB&quot;],&quot;product_symbol&quot;:&quot;&quot;,&quot;new_product&quot;:&quot;all&quot;,&quot;product_type&quot;:[&quot;STK&quot;],&quot;domain&quot;:&quot;uk&quot;} headers ={'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.28 Safari/537.36'} response = requests.post(url, data=payload, headers=headers) print(response.text) </code></pre> <p>However, it returns the following</p> <pre><code>&lt;!DOCTYPE HTML PUBLIC &quot;-//IETF//DTD HTML 2.0//EN&quot;&gt; &lt;html&gt;&lt;head&gt; &lt;title&gt;400 400&lt;/title&gt; &lt;/head&gt;&lt;body&gt; &lt;h1&gt;400&lt;/h1&gt; &lt;p&gt;Your browser sent a request that this server could not understand.&lt;br /&gt; &lt;/p&gt; &lt;/body&gt;&lt;/html&gt; </code></pre> <p>So clearly I am not doing it correctly. I was wondering if anyone could help me make this work.</p> <p>Cheers.</p>
<python><web-scraping><post><python-requests>
2024-03-18 15:50:07
1
789
JPWilson
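One likely culprit, offered as an assumption since the endpoint is undocumented: `data=payload` sends a form-encoded body, while this kind of API endpoint typically expects JSON, i.e. `json=payload`. The difference is visible offline from the prepared request, without touching the network:

```python
import json
import requests

payload = {"pageNumber": 1, "domain": "uk"}

# data= produces a form-encoded body; json= serialises to JSON and sets the header.
form = requests.Request("POST", "https://example.com", data=payload).prepare()
js = requests.Request("POST", "https://example.com", json=payload).prepare()

print(form.headers["Content-Type"])   # application/x-www-form-urlencoded
print(js.headers["Content-Type"])     # application/json
print(json.loads(js.body))            # the payload round-trips intact
```

Swapping `data=payload` for `json=payload` in the original `requests.post` call is therefore the first thing to try against the 400 response.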
78,181,354
4,784,914
Define relationship through double belongs-to-many in SQLAlchemy
<p>I have three tables: <code>Item</code>, <code>Shelve</code> and <code>Cabinet</code>.</p> <p>A <code>Cabinet</code> has-many <code>Shelve</code>s and a <code>Shelve</code> has-many <code>Item</code>s:</p> <pre><code>Item id: int shelve_id: int Shelve id: int cabinet_id: int Cabinet: id: id </code></pre> <p>I am looking to make the convenient relationship of <code>Cabinet</code> to a list of <code>Item</code>s:</p> <pre><code>class Cabinet(DeclarativeBase): # ... items: Mapped[List[&quot;Item&quot;]] = relationship() </code></pre> <p>However, this gives an error that no relationship between <code>Cabinet</code> and <code>Item</code> could be found. Which I can understand because the relationship is not obvious, and <code>Shelve</code> will act as a sort of join table.</p> <p><em>How can I accomplish this in SQLAlchemy?</em></p> <p>I was reading the docs on <a href="https://docs.sqlalchemy.org/en/20/orm/basic_relationships.html" rel="nofollow noreferrer">relationships</a> but I'm not sure which applies. Do I need a secondary join?<br /> Other questions I found involve a double many-to-many relation, which sounds different: <a href="https://stackoverflow.com/questions/45987959/sqlalchemy-relationship-through-2-many-to-many-tables">SQLAlchemy relationship through 2 many-to-many tables</a></p>
<python><sqlalchemy>
2024-03-18 15:43:52
1
1,123
Roberto
78,181,346
5,618,856
FastAPI in docker - module not found error from main
<p>I have a (locally) working fastAPI-app. Now I intend to bring it to docker. I followed the instruction as <a href="https://fastapi.tiangolo.com/deployment/docker/" rel="nofollow noreferrer">in the docs</a>. When starting up the container it stops. The log tells</p> <pre><code> File &quot;/code/./app/main.py&quot;, line 12, in &lt;module&gt; from database import get_session ModuleNotFoundError: No module named 'database' </code></pre> <p>This is my Dockerfile:</p> <pre><code># FROM python:3.10 WORKDIR /code COPY ./requirements.txt /code/requirements.txt RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt COPY ./app /code/app CMD [&quot;uvicorn&quot;, &quot;app.main:app&quot;, &quot;--proxy-headers&quot;, &quot;--host&quot;, &quot;0.0.0.0&quot;, &quot;--port&quot;, &quot;80&quot;] </code></pre> <p>If I check the existence of the files with <code>sudo docker run -it fastapi bash</code> I see all files in place:</p> <pre><code>root@d6d938b2a0da:/code/app# ls -la total 40 drwxr-xr-x 3 root root 4096 Mar 17 12:28 . drwxr-xr-x 1 root root 4096 Mar 17 12:31 .. -rw-rw-r-- 1 root root 0 Mar 16 19:58 __init__.py -rw-rw-r-- 1 root root 744 Mar 16 19:58 database.py -rw-rw-r-- 1 root root 12288 Mar 17 12:26 db.sqlite3 -rw-rw-r-- 1 root root 1170 Mar 16 19:58 main.py -rw-rw-r-- 1 root root 602 Mar 16 19:58 models.py -rw-rw-r-- 1 root root 2942 Mar 16 19:58 populate.py drwxrwxr-x 2 root root 4096 Mar 16 19:58 templates </code></pre> <p>Why can't main.py load from database.py in the docker?</p> <p>The dot in &quot;/code/./app/main.py&quot; looks suspicious. But main.py starts...</p>
<python><docker><fastapi>
2024-03-18 15:42:47
1
603
Fred
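The dot in the path is harmless. The likely cause: `uvicorn app.main:app` imports `main` as a submodule of the `app` package (package root `/code`), so a bare `import database` is never found — `database` lives inside `app`. A relative (or package-qualified) import fixes it, reproducible without Docker (file contents here are stand-ins):

```python
import os
import subprocess
import sys
import tempfile

# Recreate the container layout: <root>/app/{__init__.py, database.py, main.py}
root = tempfile.mkdtemp()
pkg = os.path.join(root, "app")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "database.py"), "w") as f:
    f.write("def get_session():\n    return 'session'\n")
with open(os.path.join(pkg, "main.py"), "w") as f:
    # Relative import, since main runs as a submodule of the "app" package.
    f.write("from .database import get_session\nprint(get_session())\n")

# "uvicorn app.main:app" imports app.main much like "python -m app.main" does.
result = subprocess.run([sys.executable, "-m", "app.main"], cwd=root,
                        capture_output=True, text=True)
print(result.stdout.strip())  # session
```

Equivalently, `from app.database import get_session` works, since `/code` is the working directory the server starts from.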
78,181,094
172,131
Change record title in a StackedInline in Django
<p>I am trying to either change or remove the title for each record in an inline, but have not been able to find a way to do it in the docs or by overriding <code>get_formset</code>. Specifically, I want to change or remove the title highlighted in the attached image. Any ideas how to do it please? Preferably without overriding CSS etc. <a href="https://i.sstatic.net/Gy4lc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gy4lc.png" alt="enter image description here" /></a></p>
<python><django><django-forms>
2024-03-18 15:01:36
1
20,218
RunLoop
78,180,968
7,456,923
Can I force a numpy array to keep its uint32 type?
<p>I would like to reproduce C behavior in Python, presumably using numpy, but I'm running into this issue :</p> <pre><code>&gt;&gt;&gt; import numpy &gt;&gt;&gt; a = numpy.uint32(4294967295) &gt;&gt;&gt; type(a) &lt;class 'numpy.uint32'&gt; &gt;&gt;&gt; a += 1 &gt;&gt;&gt; a 4294967296 &gt;&gt;&gt; type(a) &lt;class 'numpy.int64'&gt; </code></pre> <p>In C, with uint32, I'd get <code>4294967295 + 1 = 0</code></p> <p>Can I force my array a to remain a <code>numpy.uint32</code> array in order to get <code>0</code> at the end of my script ?</p> <p>Related to this other question of mine: <a href="https://stackoverflow.com/questions/78180950/does-numpy-exactly-reproduce-all-c-behaviors-on-usual-operations">Does numpy exactly reproduce all C behaviors on usual operations?</a></p>
<python><numpy><uint32>
2024-03-18 14:37:02
1
6,220
gdelab
78,180,950
7,456,923
Does numpy exactly reproduce all C behaviors on usual operations?
<p>I'm designing an algorithm in python and know I'll want to translate it to C later.</p> <p>However, mathematical operations in Python might not yield the same result as in C, for instance <code>4294967295 + 1 = 0</code> in C for unsigned integers, but not with plain Python integers operations. Therefore, I should not use Python integers in my design.</p> <p>Can I safely and easily use Numpy to reproduce C behavior ? That is, if I perform usual operations (+, -, *, /, %, casting from float to int or the other way around) on arrays with types <code>np.uint32</code> or <code>np.float64</code> for instance, am I guaranteed (or can I get this guarantee somehow) to get the same result as a C program with <code>uint32_t</code> and <code>float64_t</code> ?</p> <p>I'm only interested of what's part of the C &quot;official behavior&quot;, anything that is allowed to depend on the compiler or processor in C can also differ with numpy as if it was another compiler/processor. I'm asking in particular since numpy has a Nan that is not always in C.</p> <p><strong>EDIT after comments :</strong></p> <p>I'm looking more particularly at this set of operations : (+, -, *, /, %, casting from float to int or the other way around).</p> <p>I've tried to look at numpy documentation to no avail, and have run a few tests myself, for instance :</p> <p><strong>TEST 1 : int32 overflow (<code>(uint32_t) 4294967295 + (uint32_t) 1 == 0</code> in C)</strong></p> <p>It does not seem to work with numpy scalars</p> <pre><code>&gt;&gt;&gt; import numpy &gt;&gt;&gt; a = numpy.uint32(4294967295) &gt;&gt;&gt; type(a) &lt;class 'numpy.uint32'&gt; &gt;&gt;&gt; a += 1 &gt;&gt;&gt; a 4294967296 &gt;&gt;&gt; type(a) &lt;class 'numpy.int64'&gt; </code></pre> <p>But it does with numpy arrays :</p> <pre><code>import numpy a = numpy.array([4294967295], dtype='uint32') a += 1 print(a) print(a.dtype) </code></pre> <p>Output:</p> <pre><code>[0] uint32 </code></pre> <p>But this specific case does not give me any 
assurance that it always works with arrays.</p> <p><strong>TEST 2 : negative integer division :</strong></p> <p><code>-1/2 == 0</code> in C for int32.</p> <p>But in &quot;plain&quot; numpy :</p> <pre><code>two = np.int64(2) mone = np.int64(-1) print(mone / two) print(mone // two) </code></pre> <p>Gives :</p> <pre><code>-0.5 -1 </code></pre> <p>I'm wondering whether there is some kind of &quot;switch&quot; to numpy, or operands that I could use, so that numpy would give me 0 in the above case, for instance.</p>
<python><c><numpy>
2024-03-18 14:33:34
1
6,220
gdelab
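For the division case there is no global switch, but C's truncate-toward-zero quotient can be recovered exactly from floor division — bump the quotient by one whenever the remainder is nonzero and the operand signs differ. The helper name below is invented for this sketch:

```python
import numpy as np

def c_idiv(a, b):
    """Integer quotient rounded toward zero, as C does; a, b may be arrays."""
    q, r = np.divmod(a, b)
    # Floor division rounds toward -inf; adjust where it disagrees with C.
    return q + ((r != 0) & ((a < 0) != (b < 0)))

print(c_idiv(np.int64(-1), np.int64(2)))   # 0, matching C's -1/2
print(c_idiv(np.int64(-7), np.int64(2)))   # -3, not floor's -4
print(c_idiv(np.int64(7), np.int64(-2)))   # -3
```

Unlike `np.trunc(a / b)`, this stays in integer arithmetic, so it is exact even for large int64 values that a float64 quotient could not represent.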
78,180,814
5,344,240
Decorating an instance method of a class with a decorating function
<p>I am using Python 3.10. Consider this toy example of a cache that caches the very first call to an instance method and then returns the cached value on subsequent calls:</p> <pre><code>import functools def cache(func): @functools.wraps(func) # for __name__ def wrapper(*args, **kwargs): if not wrapper.cache: print(&quot;caching...&quot;) wrapper.cache = func(*args, **kwargs) return wrapper.cache wrapper.cache = None return wrapper class Power: def __init__(self, exponent): self.exponent = exponent @cache def of(self, base): return base ** self.exponent # test &gt;&gt;&gt; cube = Power(3) &gt;&gt;&gt; cube.of(2) caching... 8 &gt;&gt;&gt; cube.of.cache 8 &gt;&gt;&gt; cube.of.__dict__ {'__wrapped__': &lt;function __main__.Power.of(self, base)&gt;, 'cache': 8} &gt;&gt;&gt; cube.of.cache = None ... AttributeError: 'method' object has no attribute 'cache' </code></pre> <p>I have two questions:</p> <p>1.) The accepted answer <a href="https://stackoverflow.com/questions/3401421/decorating-a-method?rq=3">here</a> says that the <code>@cache</code> decorator runs when the <code>Power</code> class is constructed and it will be passed an <em>unbound</em> method (<code>of</code> in my case). I guess this claim is true only when you decorate an instance method with a class decorator. There it is an issue that you would need a reference of the <code>cube</code> object to be stored in the decorating class instance construction, but that <code>cube</code> instance is not defined yet. I am having trouble reconciling this claim with the fact that my example works; the decorated <code>of</code> method is passed a tuple with the first element being the <code>cube</code> instance and the second the <code>base=2</code> parameter</p> <p>2.) I can access the <code>.cache</code> attribute but why can't I reset it? It gives <code>AttributeError</code>.</p>
<python><python-decorators>
2024-03-18 14:10:48
2
455
Andras Vanyolos
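On question 2: *reading* `cube.of.cache` works because attribute lookup on a bound method falls through to the underlying function, but *assignment* does not — the method object rejects it. Writing through `__func__` (or through `Power.of`, which is a plain function in Python 3) reaches the function. A self-contained sketch of the same cache, with the truthiness test replaced by an explicit `is None` so a cached falsy value would stay cached:

```python
import functools

def cache(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if wrapper.cache is None:   # explicit check, unlike `not wrapper.cache`
            wrapper.cache = func(*args, **kwargs)
        return wrapper.cache
    wrapper.cache = None
    return wrapper

class Power:
    def __init__(self, exponent):
        self.exponent = exponent

    @cache
    def of(self, base):
        return base ** self.exponent

cube = Power(3)
print(cube.of(2))            # 8, now cached
print(cube.of.cache)         # 8 — reads fall through to the function

cube.of.__func__.cache = None   # writes must target the function itself
print(cube.of(3))            # 27 — recomputed after the reset
```

`Power.of.cache = None` works too, since the class attribute is the raw function; note the cache is shared by all `Power` instances, which is also why the decorator sees `cube` simply as the first positional argument.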
78,180,808
7,352,883
How to collect performance trace via CDP commands through Selenium Python?
<p>I want to collect Chrome Profiler trace dump via Selenium - Python similar to <a href="https://zchandikaz.medium.com/start-chrome-profile-recording-from-selenium-java-a2fee0351396" rel="nofollow noreferrer">this approach in JAVA</a></p> <p>Appropriate <a href="https://chromedevtools.github.io/devtools-protocol/tot/Tracing/" rel="nofollow noreferrer">CDP</a> commands are -</p> <pre><code>Tracing.start Tracing.end Tracing.dataCollected </code></pre> <p>I saw some examples of <a href="https://www.selenium.dev/documentation/webdriver/bidirectional/chrome_devtools/bidi_api/" rel="nofollow noreferrer">BiDi Api</a> and tried using them but unable to replicate my use case in Python.</p> <p>Please help with an example of listening to <code>Tracing.dataCollected</code> event asynchronously and collecting Trace dump via Selenium-Python.</p>
<python><selenium-webdriver>
2024-03-18 14:10:30
1
1,449
Shivam Mishra
78,180,770
5,725,780
What's the function object alternative to 1D linear interpolation with SciPy/NumPy?
<p>I'm looking for a way to create a &quot;functor&quot; for linear interpolation of time,value pairs using SciPy (or NumPy) but according to the <a href="https://docs.scipy.org/doc/scipy/tutorial/interpolate.html" rel="nofollow noreferrer">SciPy tutorial</a> there is none! (Kind of the opposite of <a href="https://stackoverflow.com/questions/46207821">Trying to understand scipy and numpy interpolation</a>)</p> <p>The natural method would be <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html#scipy.interpolate.interp1d" rel="nofollow noreferrer">interp1d</a> but that has a warning:</p> <blockquote> <p><strong>Legacy</strong> This class is considered legacy and will no longer receive updates. […] For a guide to the intended replacements for interp1d see <a href="https://docs.scipy.org/doc/scipy/tutorial/interpolate/1D.html#tutorial-interpolate-1dsection" rel="nofollow noreferrer">1-D interpolation</a>.</p> </blockquote> <p>Following the link takes me to a page that tells me:</p> <blockquote> <p>If all you need is a linear (a.k.a. broken line) interpolation, you can use the <a href="https://numpy.org/devdocs/reference/generated/numpy.interp.html#numpy.interp" rel="nofollow noreferrer">numpy.interp</a> routine.</p> </blockquote> <p>The problem is, these are not at all equivalent. 
<code>numpy.interp</code> requires me to know the points beforehand, and does not return a function that can be used to look up the interpolated values.</p> <p>Meanwhile, SciPy has a number of other interpolation methods that all return a <em>function</em> (or a function object) such as <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.CubicSpline.html#scipy.interpolate.CubicSpline" rel="nofollow noreferrer"><code>CubicSpline</code></a> or <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.PchipInterpolator.html#scipy.interpolate.PchipInterpolator" rel="nofollow noreferrer"><code>PchipInterpolator</code></a>.</p> <p>What's the easy way to construct a function or object similar to what <code>PchipInterpolator</code> returns, but for simple linear interpolation now that <code>interp1d</code> is deprecated?</p>
<python><numpy><scipy><linear-interpolation>
2024-03-18 14:04:08
2
721
pipe
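A lightweight way to get a callable out of `numpy.interp` is to freeze the sample points with `functools.partial`; the result can be passed around like the object `interp1d` used to return. (SciPy's `make_interp_spline(x, y, k=1)` is the documented object-returning route if the SciPy dependency is acceptable; the sketch below stays NumPy-only.)

```python
from functools import partial

import numpy as np

xp = np.array([0.0, 1.0, 2.0])    # sample points (must be increasing)
fp = np.array([0.0, 10.0, 20.0])  # values at those points

# Freeze the data, leaving only the query points as the argument.
f = partial(np.interp, xp=xp, fp=fp)

print(f(0.5))          # 5.0
print(f([0.25, 1.5]))  # [ 2.5 15. ]
```

One behavioral difference to keep in mind: `np.interp` clamps at the end points instead of raising like `interp1d` did by default; pass `left=`/`right=` if you need sentinel values outside the range.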
78,180,518
1,422,096
Grammar for combinations of Numpy arrays
<p>For a specific application, I do a GUI to manipulate some data (internally: numpy 1D arrays), and plot them.</p> <p>The end-user can choose in the UI to plot various series <code>a</code>, <code>b</code>, <code>c</code>.</p> <p>Now I also need to allow a <strong>&quot;custom combination&quot; of <code>a</code>, <code>b</code>, <code>c</code></strong>. More precisely, the user (who doesn't know Python/Numpy, but can learn a few keywords) should enter in a GUI textbox a &quot;formula&quot;, and then my program should transcribe this into real numpy code (probaly using <code>eval(...)</code>, here few security problem because the end-user is the only user), and plot the data.</p> <p>Examples of end-user input:</p> <p><code>a * 3 + 1.234 * c - d</code><br /> <code>a + b.roll(2)</code><br /> <code>a + b / b.max() * a.max()</code></p> <p>For example, the allowed syntax is: basic arithmetic (+ * - / and parentheses), float numbers, <code>a.max()</code>, and <code>a.roll(3)</code> to shift the arrays.</p> <p>Question: <strong>is there a function inside Numpy or Scipy to provide such a way to interpret combinations of arrays with a basic arithmetic grammar?</strong></p>
<python><numpy><eval><grammar>
2024-03-18 13:21:24
1
47,388
Basj
78,180,508
19,499,853
Transform Postgres recursive query to Python Pandas Dataframe
<p>I've got a recursive query written in a PostgreSQL database.</p> <pre><code>with recursive relations_recurs( pos_id, boss_pos_id, level_num, link_type, link_type_array, pos_id_array ) as ( select l.pos_id, l.boss_pos_id, 1 as level_num, l.link_type, l.link_type_array, l.pos_id_array from temp_loop l union all select l.pos_id, l.boss_pos_id, (r.level_num + 1) as level_num, l.link_type, (r.link_type_array || l.link_type) as link_type_array, (r.pos_id_array || l.pos_id) as pos_id_array from temp_pos_boss_with_min_link l join relations_recurs r on l.pos_id = r.boss_pos_id and l.pos_id &lt;&gt; all (r.pos_id_array) ) select distinct pos_id_array[1] as pos_id, boss_pos_id as boss_pos_id, level_num as level_id, pos_id as pos_original_id, (case when array[1, 2] &lt;@ t.link_type_array then 0 when array[1] &lt;@ t.link_type_array then 1 else 2 end) link_type from relations_recurs t; </code></pre> <p>This is only part of a whole procedure in pl/pgsql. I want to move the recursive part of the code to another procedure with LANGUAGE 'plpython3u'. I think Python in memory can be quicker, but I'm new to Python. How can I transform the code from Postgres to Python? I started it like this...</p> <pre><code>import pandas as pd sql_query_temp_loop = f''' select l.pos_id, l.boss_pos_id, 1 as level_num, l.link_type, l.link_type_array, l.pos_id_array from temp_loop l ''' sql_query_temp_pos_boss_with_min_link = f''' select pos_id , boss_pos_id , min(link_type) as link_type from temp_pos_boss_with_min_link group by 1,2 ''' temp_loop = pd.DataFrame.from_records(plpy.execute(sql_query_temp_loop)) temp_pos_boss_with_min_link = pd.DataFrame.from_records(plpy.execute(sql_query_temp_pos_boss_with_min_link)) </code></pre> <p>Can you help me implement the recursion in Python?</p> <p>Something is wrong with my Python recursion. I get only the last dataframe, but I can't make it append every time. 
On every circle of recursion I see dataframe,which I need to be merged with others</p> <pre><code>CREATE OR REPLACE PROCEDURE dm_sp_land.sp_load_relations_all_py() LANGUAGE plpython3u AS $procedure$ from itertools import cycle, islice from tabulate import tabulate import pandas as pd import numpy as np sql_query_temp_pos_boss_with_min_link = f''' select pos_id , boss_pos_id , link_type from temp_pos_boss_with_min_link ''' df = pd.DataFrame.from_records(plpy.execute(sql_query_temp_pos_boss_with_min_link)) emptydf = pd.DataFrame(columns = df.columns) emptydf = emptydf.assign(level=1) emptydf = emptydf.assign(pos_id_array=[]) emptydf = emptydf.assign(link_type_arrays=[]) rowlevel = 0 pos_id_arrays = np.array([]) link_type_arrays = np.array([]) def getLevel(mgrid,rowlevel,pos_id_arrays,link_type_arrays, emptydf): if mgrid in pos_id_arrays: return else: rowlevel += 1 pos_id_arrays = np.append(pos_id_arrays, [mgrid]) length_of_array = len(pos_id_arrays) length_of_array_link = len(pos_id_arrays) link_type_arrays = np.append(link_type_arrays, [df[df['pos_id'] == mgrid]['link_type'].values]) childs = df[df['boss_pos_id'] == mgrid] row = df[df['pos_id'] == mgrid] length_of_dataframe = len(row) pos_id_arrayska = list(map(np.copy, [pos_id_arrays] * length_of_dataframe)) link_type_arrayska = list(map(np.copy, [link_type_arrays] * length_of_dataframe)) row.loc[df['pos_id'] == mgrid, 'level'] = rowlevel row.loc[df['pos_id'] == mgrid, 'pos_id_array'] = pd.Series(pos_id_arrayska,index=row.loc[df['pos_id'] == mgrid].index) row.loc[df['pos_id'] == mgrid, 'link_type_arrays'] = pd.Series(link_type_arrayska,index=row.loc[df['pos_id'] == mgrid].index) emptydf = pd.concat([row, emptydf], ignore_index=True) for ind in childs.index: getLevel(childs['pos_id'][ind],rowlevel,pos_id_arrays,link_type_arrays,emptydf) return emptydf plpy.notice(tabulate(getLevel(99900445810,rowlevel,pos_id_arrays,link_type_arrays, emptydf), headers='keys', tablefmt='psql')) $procedure$ ; </code></pre> <p>Why 
doesn't <code>emptydf</code> get merged each time?</p>
<python><pandas><postgresql><algorithm><recursion>
2024-03-18 13:20:11
0
309
Gerzzog
78,180,479
10,722,752
How to view histograms juxtaposed using matplotlib
<p>I am trying to visualize how the distributions differ based on the flag column:</p> <p>Sample Data:</p> <pre><code>np.random.seed(0) df = pd.DataFrame({'col1' : np.random.uniform(size = 100), 'col2' : np.random.uniform(size = 100), 'col3' : np.random.uniform(size = 100), 'flag' : np.random.choice([0,1], 100)}) df col1 col2 col3 flag 0 0.548814 0.677817 0.311796 0 1 0.715189 0.270008 0.696343 0 2 0.602763 0.735194 0.377752 1 3 0.544883 0.962189 0.179604 1 4 0.423655 0.248753 0.024679 1 ... ... ... ... ... 95 0.183191 0.490459 0.224317 0 96 0.586513 0.227415 0.097844 0 97 0.020108 0.254356 0.862192 0 98 0.828940 0.058029 0.972919 1 99 0.004695 0.434417 0.960835 0 </code></pre> <p>I can view the histograms using 2 <code>for</code> loops one each for <code>flag == 0</code> and <code>flag == 1</code> using:</p> <pre><code>for col in df.loc[df['flag'] == 0, ['col1', 'col2', 'col3']].columns: plt.hist(df[col]) plt.title(col) plt.show() </code></pre> <p>Could someone please let me know if I can generate visualizations wherein the histograms for each column is placed side by side, one each for different <code>flag</code> columns.</p>
<python><pandas><matplotlib>
2024-03-18 13:16:07
1
11,560
Karthik S
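One way to juxtapose them, assuming the goal is one axes per column with the two `flag` distributions overlaid — a single `plt.subplots` row replaces the two loops:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame({'col1': np.random.uniform(size=100),
                   'col2': np.random.uniform(size=100),
                   'col3': np.random.uniform(size=100),
                   'flag': np.random.choice([0, 1], 100)})

cols = ['col1', 'col2', 'col3']
fig, axes = plt.subplots(1, len(cols), figsize=(12, 4), sharey=True)
for ax, col in zip(axes, cols):
    # Overlay one semi-transparent histogram per flag value on the same axes.
    for flag, sub in df.groupby('flag'):
        ax.hist(sub[col], alpha=0.5, label=f'flag={flag}')
    ax.set_title(col)
    ax.legend()
fig.tight_layout()
```

If separate panels per flag are preferred instead of overlays, `plt.subplots(2, len(cols))` with one row per flag value follows the same pattern.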
78,180,462
165,753
How to share downloaded huggingface models among users?
<p>I'd like several users to share downloaded models, such that when any of the users downloads a model, e.g. using</p> <pre><code>tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) </code></pre> <p>the other users would be able to use it as well for inference, without having to download it again. The users would all be using linux, but may use different hosts, have different python environments and package versions, etc.</p> <p>I'd like to know if the following solution will work: Creating a shared NFS mount, e.g. <code>/models</code> and mount it on all hosts. Then, for each user, symlink their HF cache hub dir to a shared path. E.g. <code>ln -s /models ~/.cache/huggingface/hub</code>.</p> <p>I don't want to symlink <code>~/.cache/huggingface/</code>, since it also contains a personal HF token, and custom code in <code>modules</code>.</p> <p>Assuming we can configure file permissions properly, will this work as expected or could there still be issues? E.g.:</p> <ul> <li>conflicts between different versions of packages, virtualenv/conda envs etc. between users</li> <li>file locking issues</li> </ul>
<python><huggingface-transformers><nfs><huggingface-hub>
2024-03-18 13:12:25
1
7,729
dimid
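The symlink plan targets the right directory, but `huggingface_hub` also exposes this as configuration: pointing `HF_HUB_CACHE` at the mount redirects only the hub cache, so no symlinks are needed and the token and `modules` under `~/.cache/huggingface` stay per-user (the mount path is assumed from the question):

```shell
# In each user's shell profile: redirect only the hub cache, not all of
# ~/.cache/huggingface (token and custom modules remain personal).
export HF_HUB_CACHE=/models
```

Concurrent downloads are coordinated with lock files inside the cache, so the NFS mount should be configured with working file locking; package-version conflicts are not an issue because the cache stores model files keyed by repo and revision, independent of any Python environment.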
78,180,325
7,480,820
Can you use a function's return type as a type elsewhere?
<p>I have a callback that takes the result of another function as input. Is there a way to directly reference that function's return type? Currently I have the return type defined as a type alias that I can use in both places but that doesn't seem ideal. Does something like C++'s <a href="https://en.cppreference.com/w/cpp/types/result_of" rel="nofollow noreferrer">std::result_of</a> exist in python?</p> <h3>Current Code</h3> <pre class="lang-py prettyprint-override"><code>from typing import Tuple func_return = Tuple[int, int] def func() -&gt; func_return: return 1, 1 def call_back(arg: func_return) -&gt; int: a, b = arg return a + b </code></pre> <h3>Ideal Code</h3> <pre class="lang-py prettyprint-override"><code>from typing import Tuple def func() -&gt; Tuple[int, int]: return 1, 1 def call_back(arg: return_type(func)) -&gt; int: a, b = arg return a + b </code></pre>
<python><python-typing>
2024-03-18 12:49:24
1
1,282
Philip Nelson
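At the static-typing level Python has no `result_of`/`typeof`, so the shared alias in the current code is the idiomatic answer. At runtime, though, the annotation can be read back off the function object, which may be enough for non-checked uses:

```python
from typing import Tuple, get_type_hints

def func() -> Tuple[int, int]:
    return 1, 1

# Runtime lookup of the declared return type (invisible to static checkers).
ReturnType = get_type_hints(func)["return"]
print(ReturnType)  # typing.Tuple[int, int]

def call_back(arg: "ReturnType") -> int:  # string annotation; mypy won't follow it
    a, b = arg
    return a + b

print(call_back(func()))  # 2
```

For checker-visible typing, a shared alias (`FuncReturn = Tuple[int, int]`) remains the usual pattern; modern checkers also understand `type FuncReturn = tuple[int, int]` on Python 3.12+.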
78,180,153
10,574,250
VS Code - An Invalid Python interpreter is selected, please try changing it to enable features such as IntelliSense, linting, and debugging
<p>I am trying to select my python interpreter in VS Code using a venv that I have created. I have tried everything but it doesn't work.</p> <p>My folder structure looks like this:</p> <pre class="lang-none prettyprint-override"><code>- practice - venv - Scripts - python.exe - all other associated venv files. </code></pre> <p>I then use <code>Python: Select interpreter</code> and use path <code>practice\venv\Scripts</code> I have also tried <code>practice\venv\Scripts\python.exe</code></p> <p>However I get the following error:</p> <pre class="lang-none prettyprint-override"><code>An Invalid Python interpreter is selected, please try changing it to enable features such as IntelliSense, linting, and debugging. </code></pre> <p>Why is this an invalid python interpreter?</p>
<python><visual-studio-code>
2024-03-18 12:18:27
0
1,555
geds133
78,180,139
4,578,454
django custom datetime format not working with form fields
<p>I'm working on a Django project where I wanted to add a date format for local usage. As per the documentation, I have updated the settings to use local date time format : <a href="https://docs.djangoproject.com/en/5.0/ref/settings/#std-setting-DATETIME_INPUT_FORMATS" rel="nofollow noreferrer">Link</a></p> <p>Settings are as below:</p> <pre><code> DATE_INPUT_FORMATS = [ &quot;%Y-%m-%d&quot;, &quot;%m/%d/%Y&quot;, &quot;%m/%d/%y&quot;, &quot;%b %d %Y&quot;, &quot;%b %d, %Y&quot;, &quot;%d %b %Y&quot;, &quot;%d %b, %Y&quot;, &quot;%B %d %Y&quot;, &quot;%B %d, %Y&quot;, &quot;%d %B %Y&quot;, &quot;%d %B, %Y&quot;, '%d-%m-%Y', '%d/%m/%Y' ] DATETIME_INPUT_FORMATS = [ &quot;%Y-%m-%d %H:%M:%S&quot;, &quot;%Y-%m-%d %H:%M:%S.%f&quot;, &quot;%Y-%m-%d %H:%M&quot;, &quot;%m/%d/%Y %H:%M:%S&quot;, &quot;%m/%d/%Y %H:%M:%S.%f&quot;, &quot;%m/%d/%Y %H:%M&quot;, &quot;%m/%d/%y %H:%M:%S&quot;, &quot;%m/%d/%y %H:%M:%S.%f&quot;, &quot;%m/%d/%y %H:%M&quot;, '%d-%m-%Y %H:%M:%S', '%d/%m/%Y %H:%M:%S' ] </code></pre> <p>But for the form fields, it's still using the default date time formats and not taking the date time formats added in the settings file.</p> <p>In my form, the form throws validation error saying &quot;Enter a valid date/time&quot;</p> <pre><code>class CDetailsForm(forms.ModelForm): class Meta: model = CDetails fields = ('c_date',) widgets = { 'c_date': forms.DateTimeInput( attrs={'class': 'datetimepicker-input datetime_stamp form-control', 'readonly': 'readonly'}), } def is_valid(self): import pdb pdb.set_trace() result = super(CDetailsForm, self).is_valid() return result </code></pre> <p>DateTimeInput widget code says it refers the <strong>DATETIME_INPUT_FORMATS</strong> settings</p> <pre><code>class DateTimeInput(DateTimeBaseInput): format_key = 'DATETIME_INPUT_FORMATS' template_name = 'django/forms/widgets/datetime.html' </code></pre> <p>But in the form field validation the settings is completely ignored</p> <pre><code>&gt; 
&lt;project&gt;/env/lib/python3.9/site-packages/django/forms/fields.py(384)to_python() -&gt; for format in self.input_formats: (Pdb) self.input_formats ['%Y-%m-%d %H:%M:%S', '%Y-%m-%d %H:%M:%S.%f', '%Y-%m-%d %H:%M', '%Y-%m-%d', '%m/%d/%Y %H:%M:%S', '%m/%d/%Y %H:%M:%S.%f', '%m/%d/%Y %H:%M', '%m/%d/%Y', '%m/%d/%y %H:%M:%S', '%m/%d/%y %H:%M:%S.%f', '%m/%d/%y %H:%M', '%m/%d/%y'] (Pdb) self &lt;django.forms.fields.DateTimeField object at 0x128049c10&gt; (Pdb) value '19-03-2024 17:25:17' (Pdb) </code></pre> <p>Looking forward to suggestions or feedback.</p>
<python><django><datetime>
2024-03-18 12:16:54
0
4,667
silverFoxA
78,180,128
6,943,622
Build Palindrome from two strings
<p>I want to write a python function that does this efficiently:</p> <p>The function will take two strings, 'a' and 'b', and attempt to find the longest palindromic string that can be formed such that it is a concatenation of a non-empty substring of 'a' and a non-empty substring of 'b'. If there are multiple valid answers, it will return the lexicographically smallest one. If no such string can be formed, it will return '-1'.</p> <p>I have an inefficient solution that generates all the substrings of both strings, and then creates all possible concatenations whle tracking the longest which is a valid palindrome:</p> <pre><code>def is_palindrome(word): &quot;&quot;&quot;Check if a word is a palindrome.&quot;&quot;&quot; reversed_word = word[::-1] return word == reversed_word def all_substrings_of_word(word): &quot;&quot;&quot;Generate all possible non-empty substrings of a given string.&quot;&quot;&quot; substrings = [] for sub_string_length in range(1, len(word) + 1): for i in range(len(word) - sub_string_length + 1): new_word = word[i:i + sub_string_length] substrings.append(new_word) return substrings def buildPalindrome(a, b): &quot;&quot;&quot;Attempt to find the longest palindromic string created by concatenating a substring of `a` with a substring of `b`.&quot;&quot;&quot; sub_strings_a = all_substrings_of_word(a) sub_strings_b = all_substrings_of_word(b) # Generate all possible concatenations of substrings from `a` and `b` multiplexed_array = [ word_a + word_b for word_a in sub_strings_a for word_b in sub_strings_b] # Find the best palindrome (longest, then lexicographically smallest) best_palindrome = &quot;&quot; for word in multiplexed_array: if is_palindrome(word): if len(word) &gt; len(best_palindrome): best_palindrome = word elif len(word) == len(best_palindrome) and word &lt; best_palindrome: best_palindrome = word return best_palindrome if best_palindrome else &quot;-1&quot; print(buildPalindrome(&quot;bac&quot;, &quot;bac&quot;)) # EXPECTED OUTPUT 
-- aba print(buildPalindrome(&quot;abc&quot;, &quot;def&quot;)) # EXPECTED OUTPUT -- -1 print(buildPalindrome(&quot;jdfh&quot;, &quot;fds&quot;)) # EXPECTED OUTPUT -- dfhfd </code></pre> <p>Can I please get an explanation on how this can be improved?</p>
<python><algorithm><palindrome>
2024-03-18 12:16:01
1
339
Duck Dodgers
78,179,966
14,989,571
Remove background of image using sobel edge detection
<p>I have a bunch of images representing coins, some of which have a noisy background (e.g. letters or different background color). I'm trying to remove the background of each coin image to leave only the coin itself, but I cannot get the <code>cv2.findContours</code> function from OpenCV to only detect the main contour of the coin, it erases some other parts as well or it leaves some extra noise from the background.</p> <p>The following is the code that I'm using, the process I'm following is:</p> <ol> <li>Read image as numpy array from bytes object.</li> <li>Decode it as color image.</li> <li>Convert to gray-scale image.</li> <li>Add Gaussian Blur to remove noise.</li> <li>Detect edges in the image applying a sobel filter <code>edgedetect()</code>. Here it computes the X and Y sobels and converts to threshold by applying Otsu thresholding.</li> <li>Computes the mean from the image and zeroes any value below it to remove noise.</li> <li>Find significant contours (<code>findSignificantContours()</code>).</li> <li>Creates mask from contours, inverts and removes it to get background.</li> <li>Set mask to 255 to remove the background in the original image.</li> </ol> <pre class="lang-py prettyprint-override"><code>import cv2 import numpy as np from google.colab.patches import cv2_imshow def edgedetect(channel): sobelX = cv2.Sobel(channel, cv2.CV_64F, 1, 0, ksize = 3, scale = 1) sobelY = cv2.Sobel(channel, cv2.CV_64F, 0, 1, ksize = 3, scale = 1) sobel = np.hypot(sobelX, sobelY) sobel = cv2.convertScaleAbs(sobel) sobel[sobel &gt; 255] = 255 # Some values seem to go above 255. 
However RGB channels has to be within 0-255 _, sobel_binary = cv2.threshold(sobel, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU) return cv2.bitwise_not(sobel_binary) def findSignificantContours (img, edgeImg): print(f'edgeimg:') cv2_imshow(edgeImg) contours, hierarchy = cv2.findContours(edgeImg, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) # Find level 1 contours level1 = [] for i, tupl in enumerate(hierarchy[0]): # Each array is in format (Next, Prev, First child, Parent) # Filter the ones without parent if tupl[3] == -1: tupl = np.insert(tupl, 0, [i]) level1.append(tupl) # From among them, find the contours with large surface area. significant = [] tooSmall = edgeImg.size * 5 / 100 # If contour isn't covering 5% of total area of image then it probably is too small for tupl in level1: contour = contours[tupl[0]] area = cv2.contourArea(contour) if area &gt; tooSmall: significant.append([contour, area]) # Draw the contour on the original image cv2.drawContours(img, [contour], 0, (0, 255, 0), 2, cv2.LINE_8) significant.sort(key = lambda x: x[1]) return [x[0] for x in significant] def remove_background(bytes_data): # Read image. image = np.asarray(bytearray(bytes_data.read()), dtype = &quot;uint8&quot;) img = cv2.imdecode(image, cv2.IMREAD_COLOR) print(f'Original:') cv2_imshow(img) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) print(f'Gray:') cv2_imshow(gray) blurred_gray = cv2.GaussianBlur(gray, (3, 3), 0) # Remove noise. print(f'Blurred Gray:') cv2_imshow(blurred_gray) edgeImg = np.max( np.array([edgedetect(blurred_gray[:, :])]), axis = 0) mean = np.mean(edgeImg) # Zero any value that is less than mean. This reduces a lot of noise. edgeImg[edgeImg &lt;= mean] = 0 edgeImg_8u = np.asarray(edgeImg, np.uint8) # Find contours. significant = findSignificantContours(img, edgeImg_8u) # Mask. mask = edgeImg.copy() mask[mask &gt; 0] = 0 cv2.fillPoly(mask, significant, 255) mask = np.logical_not(mask) # Invert mask to get the background. # Remove the background. 
img[mask] = 255; print(f'FINAL:') cv2_imshow(img) return img if __name__ == '__main__': imgUrl = 'http://images.numismatics.org/archivesimages%2Farchive%2Fschaefer_clippings_output_383_06_od.jpg/2648,1051,473,453/full/0/default.jpg' obvPage = requests.get(imgUrl, stream = True, verify = False, headers = header) img_final = remove_background(obvPage.raw) </code></pre> <p>As representation, here is the original image, as you can see it has some letters written on the right side which is what I'm trying to remove. The rest of the images are similar although some have different background color not just white.</p> <p><a href="https://i.sstatic.net/1NHwt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1NHwt.png" alt="enter image description here" /></a></p> <p>The following image is the image of the edges after performing the <code>edgedetect()</code> function using the sobels.</p> <p><a href="https://i.sstatic.net/QaegU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QaegU.png" alt="enter image description here" /></a></p> <p>And the last one is the final image with the 'removed' background, sadly it still contains some of the letters there and I don't know what I'm doing wrong or how could I improve my code to achieve what I want. Could someone help me with this?</p> <p><a href="https://i.sstatic.net/MDvcE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MDvcE.png" alt="enter image description here" /></a></p>
<python><opencv><computer-vision><semantic-segmentation>
2024-03-18 11:47:23
1
2,489
Shunya
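One way to attack the leftover letters in the question above is to keep only the largest connected blob of the foreground mask before building the final mask, instead of tracking every contour. The sketch below is a plain-NumPy illustration of that idea (in a real pipeline `cv2.connectedComponentsWithStats` does the same job much faster); it assumes the coin is the biggest foreground region, which holds for these images.

```python
from collections import deque

import numpy as np


def largest_component_mask(binary: np.ndarray) -> np.ndarray:
    """Return a boolean mask keeping only the largest 4-connected True
    region of `binary`.  Stray blobs such as letters or specks vanish."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    sizes = {}
    current = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                # Breadth-first flood fill of one connected component.
                current += 1
                labels[sy, sx] = current
                q = deque([(sy, sx)])
                count = 0
                while q:
                    y, x = q.popleft()
                    count += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            q.append((ny, nx))
                sizes[current] = count
    if not sizes:
        return np.zeros_like(binary, dtype=bool)
    biggest = max(sizes, key=sizes.get)
    return labels == biggest
```

With the question's variables, something like `mask = largest_component_mask(edgeImg_8u > 0)` followed by `img[~mask] = 255` would replace the per-contour bookkeeping; a morphological close beforehand helps seal gaps in the coin's edge.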
78,179,864
3,468,067
Is there a shorter way to tell Mypy that a given optional chaining is fine?
<p>I am working on Python code where the domain logic makes it natural to have a class with an optional field of a second class, which itself has an optional field of a third class. Boiling it down to a minimum working example, this is what I mean:</p> <pre><code>class C: def __init__(self, number: int) -&gt; None: self.number = number class B: def __init__(self, c: C | None) -&gt; None: self.c = c class A: def __init__(self, b: B | None) -&gt; None: self.b = b </code></pre> <p>At some point in our code, we are receiving an object <code>a</code> of class <code>A</code> where we know that the field <code>a.b</code> is not <code>None</code> and furthermore that that the nested field <code>a.b.c</code> is not <code>None</code>. Of course, mypy does not know this and sensibly informs us that we might be doing something stupid:</p> <pre><code># Here is an object where it is guaranteed that we won't encounter None values a = A( B( C(1337) ) ) print(a.b.c.number) # mypy sensibly returns two errors: # Item &quot;None&quot; of &quot;B | None&quot; has no attribute &quot;c&quot; # Item &quot;None&quot; of &quot;C | Any | None&quot; has no attribute &quot;number&quot; # This satisfies mypy: assert (a.b is not None) and (a.b.c is not None) print(a.b.c.number) </code></pre> <p>This <code>assert</code> statement works perfectly well, but it looks cumbersome when our fields names are longer and the nesting goes deeper. Is there a neater and shorter way to satisfy mypy?</p>
<python><mypy><python-typing><optional-chaining>
2024-03-18 11:28:39
0
383
Erlend Magnus Viggen
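A common way to shorten the repeated asserts in the question above is a tiny helper (the name `unwrap` is hypothetical, borrowed from Rust) that narrows `T | None` to `T` in one call and still fails loudly if the guarantee is violated:

```python
from typing import Optional, TypeVar

T = TypeVar("T")


def unwrap(value: Optional[T]) -> T:
    """Narrow `T | None` to `T` for the type checker, raising at
    runtime if the 'not None' guarantee turns out to be wrong."""
    if value is None:
        raise ValueError("unexpectedly None")
    return value
```

The deep access then becomes `print(unwrap(unwrap(a.b).c).number)`, which mypy accepts. `typing.cast(C, a.b.c)` is the zero-runtime-cost alternative, but it performs no check at all, so the helper is usually the safer trade.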
78,179,759
1,766,088
Read .accdb database in Python app running on Docker container (Alpine)
<p>I am trying and failing to read a local .accdb file in my Python 3.11 app, which is running in an <code>python:3.11-alpine</code> container.</p> <p>My Dockerfile executes without errors:</p> <pre class="lang-dockerfile prettyprint-override"><code>FROM python:3.11-alpine EXPOSE 5001 ENV PYTHONDONTWRITEBYTECODE=1 ENV PYTHONUNBUFFERED=1 RUN apk update &amp;&amp; apk add --no-cache gcc g++ musl-dev unixodbc-dev flex bison gawk COPY requirements.txt . RUN python -m pip install -r requirements.txt RUN apk add --no-cache git autoconf automake libtool gettext-dev make RUN git clone https://github.com/mdbtools/mdbtools.git WORKDIR /mdbtools RUN autoreconf -i -f RUN ./configure --with-unixodbc=/usr --disable-dependency-tracking RUN make RUN make install RUN echo -e &quot;\n[MDBTools]\nDescription=MDBTools Driver\nDriver=/usr/local/lib/odbc/libmdbodbc.so&quot; &gt;&gt; /etc/odbcinst.ini RUN apk add --no-cache nano WORKDIR /app COPY . /app RUN adduser -u 5678 --disabled-password --gecos &quot;&quot; appuser &amp;&amp; chown -R appuser /app USER appuser CMD [&quot;python&quot;, &quot;server.py&quot;] </code></pre> <p>My Python script (<code>accdb_test.py</code>):</p> <pre><code>import pyodbc import argparse parser = argparse.ArgumentParser(description='Connect to an Access database.') parser.add_argument('db_path', type=str, help='The path to the Access database') args = parser.parse_args() conn_str = ( r'DRIVER={MDBTools};' r'DBQ=' + args.db_path + ';' ) try: conn = pyodbc.connect(conn_str) print(&quot;Connection successful!&quot;) except pyodbc.Error as e: print(&quot;Failed to connect to the database:&quot;, e) </code></pre> <p>I build the container connect to its terminal, than I run the script with this result:</p> <pre class="lang-bash prettyprint-override"><code>/app $ python accdb_test.py /app/input_examples/caesar/MODEL_13-16_R01.ACCDB ['MDBTools'] File not found File not found Unable to locate database Failed to connect to the database: ('HY000', 'The driver did 
not supply an error!') </code></pre> <p>The path to the <code>.accdb</code> file is correct, I checked:</p> <pre class="lang-bash prettyprint-override"><code>/app $ ls -l /app/input_examples/caesar/MODEL_13-16_R01.ACCDB -rwxrwxrwx 1 appuser root 47116288 Mar 18 09:29 /app/input_examples/caesar/MODEL_13-16_R01.ACCDB </code></pre>
<python><docker><ms-access><alpine-linux><mdbtools>
2024-03-18 11:09:03
2
675
asdf
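Since the Dockerfile above already builds mdbtools, one fallback worth sketching is to bypass pyodbc entirely and shell out to the `mdb-export` CLI, which dumps a table as CSV with a header row. This is a sketch under assumptions (table name and path are illustrative, and `mdb-export` must be on `PATH`), not a fix for the ODBC driver itself:

```python
import csv
import io
import subprocess
from typing import Dict, List


def parse_csv_rows(text: str) -> List[Dict[str, str]]:
    """Parse mdb-export's CSV output (header row + data rows)
    into a list of row dictionaries."""
    return list(csv.DictReader(io.StringIO(text)))


def read_access_table(db_path: str, table: str) -> List[Dict[str, str]]:
    """Export `table` from an Access file via the mdbtools CLI.
    Assumes mdb-export is installed, as in the Dockerfile above."""
    out = subprocess.run(
        ["mdb-export", db_path, table],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_csv_rows(out)
```

Usage would be roughly `read_access_table("/app/input_examples/caesar/MODEL_13-16_R01.ACCDB", "SomeTable")`, with `mdb-tables -1 <file>` listing the table names first.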
78,179,714
3,973,269
Extract photos from a photo album using Python
<p>I have a physical photo album, for which each page might contain one or more photos glued on it.</p> <p>I took a picture of each individual page, containing multiple photos. Now, I placed all the pictures that I took into a single folder, and I would like to iterate over it with Python to extract all photos that were glued on that page.</p> <p>I have the following Python script, but the downside of this script is that it finds way too many contours (on the pictures itself as well).</p> <p>What is a good (alternative) method for getting the contrasts right when the page's background is white?</p> <pre><code># Read the image img = cv2.imread(&quot;images/&quot; + image) # Convert the image to grayscale gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Show gray image cv2.imshow('Gray Image', gray) cv2.waitKey(0) blurred = cv2.GaussianBlur(gray, (5, 5), 0) # Apply edge detection using the Canny edge detector edged = cv2.Canny(blurred, 50, 150) contours, _ = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) min_area = 50000 filtered_contours = [cnt for cnt in contours if min_area &lt; cv2.contourArea(cnt)] extracted_photos = [] for i, contour in enumerate(filtered_contours): x, y, w, h = cv2.boundingRect(contour) extracted_photos.append(img[y:y+h, x:x+w]) # Uncomment the following line to save individual photos # cv2.imwrite(f'photo_{i}.jpg', image[y:y+h, x:x+w]) # Show the extracted photos cv2.imshow('Original Image', img) cv2.waitKey(0) for i, photo in enumerate(extracted_photos): cv2.imshow(f'Photo {i}', photo) cv2.waitKey(0) cv2.destroyAllWindows() </code></pre> <h1>Original photo</h1> <p><a href="https://i.sstatic.net/dfhZK.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dfhZK.jpg" alt="Original" /></a></p> <h1>Grayscale photo</h1> <p><a href="https://i.sstatic.net/iM7jn.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iM7jn.jpg" alt="Grayscale" /></a></p> <h1>Contours</h1> <p><a href="https://i.sstatic.net/Ggblc.jpg" 
rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ggblc.jpg" alt="Contours" /></a></p>
<python><opencv><contour><canny-operator>
2024-03-18 11:02:23
1
569
Mart
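For a white album page, thresholding directly (e.g. `cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)`) before `findContours` usually beats Canny, because the page becomes background and each photo a solid blob. The remaining "way too many contours" can then be collapsed by merging overlapping bounding rectangles. A possible stdlib sketch of that merging step (the `pad` value is a guess to tune per scan resolution):

```python
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, w, h) as from cv2.boundingRect


def _overlap(a: Box, b: Box, pad: int) -> bool:
    """True if the two boxes intersect once grown by `pad` pixels."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return not (ax + aw + pad < bx or bx + bw + pad < ax or
                ay + ah + pad < by or by + bh + pad < ay)


def merge_boxes(boxes: List[Box], pad: int = 10) -> List[Box]:
    """Greedily merge boxes that overlap (within `pad` pixels), so many
    small contours inside one photo collapse to a single rectangle."""
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        out: List[Box] = []
        while boxes:
            cur = boxes.pop()
            i = 0
            while i < len(boxes):
                if _overlap(cur, boxes[i], pad):
                    x1 = min(cur[0], boxes[i][0])
                    y1 = min(cur[1], boxes[i][1])
                    x2 = max(cur[0] + cur[2], boxes[i][0] + boxes[i][2])
                    y2 = max(cur[1] + cur[3], boxes[i][1] + boxes[i][3])
                    cur = (x1, y1, x2 - x1, y2 - y1)
                    boxes.pop(i)
                    merged = True
                    i = 0  # restart: cur grew, earlier boxes may now touch
                else:
                    i += 1
            out.append(cur)
        boxes = out
    return boxes
```

In the question's loop, `merge_boxes([cv2.boundingRect(c) for c in contours])` would yield one rectangle per photo, after which the area filter can stay as-is.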
78,179,471
22,418,446
mdates locators show non-existent time intervals in my graph
<p>I'm trying to build a Streamlit dashboard for the stock market. My stock data runs from 9am to 4pm. I'm using locators for the x-axis ticks. However, the chart shows non-existent time intervals: there is a big gap between ticks in the graph, and a straight line is drawn across the 4pm-to-9am overnight stretch. I want to remove this gap and make the graph continuous. How can I solve this problem? I tried to filter my data but it didn't work.</p> <p>Here is my code and some example outputs.</p> <pre><code>def create_plot(name:str,label:str,title_info:str,data,period_var:str): fig, ax1 = plt.subplots(figsize=(16,8)) ax1.set_ylabel(name) plt.xlabel('Datetime') plt.ylabel(label) plt.title(title_info) if period_var in [&quot;1d&quot;]: date_formatter = DateFormatter('%H:%M') ax1.xaxis.set_major_formatter(date_formatter) ax1.xaxis.set_major_locator(mdates.MinuteLocator(interval=30)) elif period_var in [&quot;3d&quot;,&quot;5d&quot;]: date_formatter = DateFormatter(&quot;%d-%H&quot;) ax1.xaxis.set_major_formatter(date_formatter) ax1.xaxis.set_major_locator(mdates.HourLocator(byhour=range(9,16),interval=2)) elif period_var in [&quot;1mo&quot;, &quot;3mo&quot;, &quot;6mo&quot;]: date_formatter = DateFormatter(&quot;%Y - %b&quot;) ax1.xaxis.set_major_formatter(date_formatter) ax1.xaxis.set_major_locator(mdates.DayLocator(interval=int(period_var[0])*2)) elif period_var in [&quot;1y&quot;, &quot;2y&quot;, &quot;5y&quot;]: date_formatter = DateFormatter(&quot;%Y-%b&quot;) ax1.xaxis.set_major_formatter(date_formatter) ax1.xaxis.set_major_locator(mdates.MonthLocator(interval=int(period_var[0]))) else: date_formatter = DateFormatter(&quot;%Y&quot;) ax1.xaxis.set_major_formatter(date_formatter) ax1.xaxis.set_major_locator(mdates.YearLocator()) ax1.plot(data.Date, data[name], label=name, marker='o', markersize=1) plt.xticks(rotation=45) plt.legend() plt.grid(True) plt.tight_layout() return fig </code></pre> <p><a href="https://i.sstatic.net/YbGTD.png" rel="nofollow noreferrer"><img 
src="https://i.sstatic.net/YbGTD.png" alt="enter image description here" /></a> I'd be happy if anyone can help with that.</p> <p>I tried to filter my data (<code>data[&quot;Date&quot;]</code>) but it doesn't work. I want to get rid of the non-existent time intervals that aren't present in my dataframe.</p>
<python><pandas><matplotlib><streamlit>
2024-03-18 10:24:45
2
1,160
msamedozmen
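Matplotlib's date locators always lay ticks on a continuous time axis, so overnight gaps are unavoidable when plotting against real datetimes. The usual workaround is to plot against the row index and relabel the ticks from the Date column. A minimal sketch of the relabeling helper (the function name is hypothetical):

```python
from typing import List, Sequence, Tuple


def index_ticks(dates: Sequence, every: int,
                fmt: str = "%H:%M") -> Tuple[List[int], List[str]]:
    """Tick positions/labels for plotting against the row number
    instead of real time, which removes the empty overnight stretches.

    `dates` is any sequence of datetime-like objects; a tick is placed
    every `every` rows and labelled with the row's formatted date."""
    positions = list(range(0, len(dates), every))
    labels = [dates[i].strftime(fmt) for i in positions]
    return positions, labels
```

In `create_plot` this would look roughly like `ax1.plot(range(len(data)), data[name])`, then `pos, lab = index_ticks(list(data.Date), 30)` followed by `ax1.set_xticks(pos)` and `ax1.set_xticklabels(lab, rotation=45)` in place of the locator/formatter calls; the `fmt` string would vary per `period_var` just as the `DateFormatter` does now.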
78,179,350
1,613,983
How do I generate ngroups from a comparison function?
<p>Suppose I have a function that compares rows in a dataframe:</p> <pre><code>def comp(lhs: pandas.Series, rhs: pandas.Series) -&gt; bool: if lhs.id == rhs.id: return True if abs(lhs.val1 - rhs.val1) &lt; 1e-8: if abs(lhs.val2 - rhs.val2) &lt; 1e-8: return True return False </code></pre> <p>Now I have a dataframe containing <code>id</code>, <code>val1</code> and <code>val2</code> columns and I want to generate group ids such that any two rows for which <code>comp</code> evaluates to true have the same group number. How do I do this with pandas? I've been trying to think of a way to get <code>groupby</code> to achieve this but haven't found one.</p> <p>MRE:</p> <pre><code>example_input = pandas.DataFrame({ 'id' : [0, 1, 2, 2, 3], 'value1' : [1.1, 1.2, 1.3, 1.4, 1.1], 'value2' : [2.1, 2.2, 2.3, 2.4, 2.1] }) example_output = example_input.copy() example_output.index = [0, 1, 2, 2, 0] example_output.index.name = 'groups' </code></pre>
<python><pandas>
2024-03-18 10:01:29
2
23,470
quant
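Because the tolerance-based `comp` above is not transitive, a pairwise union-find is one defensible reading of "same group": rows related directly or through intermediates share an id. A sketch that works on any sequence of row-like objects; with pandas you could pass `list(df.itertuples(index=False))` and a `comp` adapted to those tuples. Note the O(n²) comparison cost:

```python
from typing import Callable, List, Sequence


def group_ids(rows: Sequence, comp: Callable) -> List[int]:
    """Assign a group id to each row so that rows related by `comp`,
    directly or transitively, share an id (like groupby().ngroup())."""
    n = len(rows)
    parent = list(range(n))

    def find(i: int) -> int:
        # Find the set representative with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if comp(rows[i], rows[j]):
                parent[find(i)] = find(j)  # union the two sets

    # Renumber roots in order of first appearance.
    seen: dict = {}
    return [seen.setdefault(find(i), len(seen)) for i in range(n)]
```

The result can then be assigned with `df.index = group_ids(...)` to reproduce `example_output`. Be aware that near-equality with a tolerance can chain: if A≈B and B≈C but A and C differ by more than 1e-8, union-find still puts all three in one group, which may or may not be what you want.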
78,179,269
6,435,921
Why Pycharm's struggles with Scipy functions that can return multiple outputs?
<p>Scipy is one of the most used scientific packages in Python. Most of its functions have a common interface: they either return only a value (which can be <code>float/np.ndarray</code>) or a <code>tuple</code> whose second term is a <code>boolean</code> or a <code>dict</code>. Something like this:</p> <pre><code>value = some_scipy_function(*args, full_output=False) value, flag = some_scipy_function(*args, full_output=True) </code></pre> <p>Yet, if you use PyCharm, it <strong>constantly</strong> raises warnings for SciPy functions because it does not understand that these functions will indeed return a single value if <code>full_output</code> is <code>False</code>.</p> <hr /> <blockquote> <p>Why is it so difficult for PyCharm to do this? This information is not only available in the docstrings for the SciPy functions, but also obvious from the code, before execution. Additionally, it most often does not allow you to suppress these warnings, unlike others, perhaps because they are seen as major warnings (it would be impossible to subtract from a tuple).</p> </blockquote> <p>Frustration aside, I feel that it is a bit odd that if you use Python's most used scientific computing library, your file is filled with unsuppressable warnings because of this. Am I the only one who has been frustrated by this?</p> <hr /> <h1>Examples</h1> <ul> <li><code>logsumexp</code> (or generally most linear algebra functions)</li> <li><code>brentq</code> (or generally most optimization functions)</li> </ul> <pre><code>import numpy as np from scipy.special import logsumexp logw = np.log(np.random.rand(1000)) lse_plus_value = 1.0 + np.exp(logsumexp(logw, return_sign=False)) # raises a warning </code></pre>
<python><scipy><pycharm>
2024-03-18 09:47:19
1
3,601
Euler_Salter
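A checker can only pick the right return type per call site when the signature spells the dependency out with `typing.overload` and `Literal`; where the available stubs declare a plain `bool` flag, PyCharm has to assume the union of both return shapes. A toy illustration of the pattern that would resolve the warning (`fake_logsumexp` is a hypothetical stand-in, not SciPy's actual stub):

```python
import math
from typing import List, Literal, Tuple, Union, overload


@overload
def fake_logsumexp(x: List[float],
                   return_sign: Literal[False] = ...) -> float: ...
@overload
def fake_logsumexp(x: List[float],
                   return_sign: Literal[True]) -> Tuple[float, float]: ...


def fake_logsumexp(x, return_sign=False):
    """Toy stand-in for scipy.special.logsumexp.  The @overload pairs
    above let a type checker infer float vs. tuple per call site,
    instead of the Union a plain `return_sign: bool` forces."""
    s = math.log(sum(math.exp(v) for v in x))
    return (s, 1.0) if return_sign else s
```

With such overloads, `1.0 + math.exp(fake_logsumexp(logw))` type-checks cleanly, because `return_sign=False` (or omitted) selects the `float` overload.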
78,178,650
14,923,149
Difficulty Extracting GenBank Accession Number Using Species and Strain Name, using webscraping (Using BeautifulSoup or Selenium)
<p>I need to extract specific information from a webpage using BeautifulSoup and / or Selenium. I'm trying to extract information related to a particular organism from a webpage, but I'm encountering difficulties.</p> <p>I tried this</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC # Define the search term search_term = &quot;Streptomyces anthocyanicus JCM 5058&quot; # Open a Chrome browser driver = webdriver.Chrome() # Construct the search URL for assembly search_url = f&quot;https://www.ncbi.nlm.nih.gov/assembly/?term={search_term.replace(' ', '+')}&quot; # Navigate to the search URL driver.get(search_url) from selenium.webdriver.common.by import By # Find elements containing the text &quot;JCM 5058&quot; elements = driver.find_elements(By.XPATH, &quot;//*[contains(text(), 'JCM 5058')]&quot;) if elements: print(&quot;Text 'JCM 5058' found on the webpage.&quot;) # Loop through elements and extract text text_to_print = &quot;&quot; for element in elements: text_to_print += element.text + &quot;\n&quot; # Add newline for readability # Print the extracted text print(text_to_print) else: print(&quot;Text 'JCM 5058' not found on the webpage.&quot;) </code></pre> <p>and I got like this</p> <pre><code>Text 'JCM 5058' found on the webpage. JCM 5058 (&quot;Streptomyces anthocyanicus&quot;[Organism] AND (&quot;Streptomyces anthocyanicus&quot;[Organism] OR JCM 5058[All Fields])) AND (latest[filter] AND all[filter] NOT anomalous[filter]) Streptomyces anthocyanicus JCM 5058 AND (latest[filter] AND all[f... 
(6) </code></pre> <p>but Matched section look like this in web page</p> <pre><code>ASM1465115v1 Organism: Streptomyces anthocyanicus (high G+C Gram-positive bacteria) Infraspecific name: Strain: JCM 5058 Submitter: WFCC-MIRCEN World Data Centre for Microorganisms (WDCM) Date: 2020/09/12 Assembly level: Scaffold Genome representation: full Relation to type material: assembly from type material GenBank assembly accession: GCA_014651155.1 (latest) RefSeq assembly accession: GCF_014651155.1 (latest) IDs: 8121141 [UID] 22194358 [GenBank] 22446388 [RefSeq] </code></pre> <p>I want to extract or print all this information as such or in a table.</p>
<python><selenium-webdriver><beautifulsoup><biopython>
2024-03-18 07:45:21
0
504
Umar
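Rather than matching raw text nodes, two steadier routes for the question above are NCBI's E-utilities (e.g. `Bio.Entrez.esearch(db="assembly", term=...)` followed by `esummary`, which return structured records) or grabbing each result card's full text and parsing its `Field: value` lines. A sketch of the parsing half, using the sample block from the question; the `"name"` key for the colon-less first line is an assumption of this sketch:

```python
import re
from typing import Dict


def parse_assembly_block(text: str) -> Dict[str, str]:
    """Turn NCBI's 'Field: value' assembly summary text into a dict.
    The first line without a colon (the assembly name, e.g.
    'ASM1465115v1') is stored under the key 'name'."""
    info: Dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        m = re.match(r"([^:]+):\s*(.+)", line)
        if m:
            info[m.group(1).strip()] = m.group(2).strip()
        elif "name" not in info:
            info["name"] = line
    return info
```

In the Selenium code this would be applied to the text of each result container (e.g. the elements with class `rslt` on the assembly search page; the selector is something to verify against the live HTML), giving a list of dicts that drops straight into a table.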
78,178,479
424,957
how to load page by python selenium?
<p>I can view the page in any browser, but I see only a blank page when I open it with Selenium. I found the JavaScript below on the page; I guess I need to run this code. What can I do next?</p> <pre><code>&lt;body&gt; &lt;script&gt; !function(){ var e=document.createElement("iframe"); function n(){ var n=e.contentDocument||e.contentWindow.document; if(n){ var t=n.createElement("script"); t.nonce="",t.innerHTML="window['__CF$cv$params']={r:'7e88f63d224cc37d',m:'JpppFqs3NGPEA327hy7XHbhTbKXvZxzVh_Th60Z2NO4-1689664824-0-AdsfsWHmOM/vJvQfOZX4DHS1zxskac6BpgnnirFZJp3k'}; _cpo=document.createElement('script'); _cpo.nonce='',_cpo.src='/cdn-cgi/challenge-platform/scripts/invisible.js',document.getElementsByTagName('head')[0].appendChild(_cpo);",n.getElementsByTagName("head")[0].appendChild(t)}}if(e.height=1,e.width=1,e.style.position="absolute",e.style.top=0,e.style.left=0,e.style.border="none",e.style.visibility="hidden",document.body.appendChild(e),"loading"!==document.readyState)n(); else if(window.addEventListener) document.addEventListener("DOMContentLoaded",n); else{ var t=document.onreadystatechange||function(){}; document.onreadystatechange=function(e){ t(e),"loading"!==document.readyState&amp;&amp;(document.onreadystatechange=t,n()) } } }(); &lt;/script&gt; ... &lt;/body&gt; </code></pre>
<javascript><python><selenium-webdriver>
2024-03-18 07:09:48
1
2,509
mikezang
78,178,330
2,862,945
Updating multiple plots in Jupyter notebook when a slider value changes
<p>I want to update multiple <code>imshow</code> plots in a jupyter notebook when an <code>IntSlider</code> value changes. What is wrong with by code?</p> <p>Those are the versions I am using</p> <pre><code>import ipywidgets as widgets import matplotlib.pyplot as plt import matplotlib import numpy as np print( 'versions: ipywidgets = ', widgets.__version__) print( ' matplotlib = ', matplotlib.__version__) print( ' numpy = ', np.__version__) </code></pre> <p>This is the corresponding output</p> <blockquote> <pre><code>versions: ipywidgets = 8.0.4 matplotlib = 3.5.0 numpy = 1.20.3 </code></pre> </blockquote> <p>And here is the code</p> <pre><code>def plot_image(ax, seed=0): np.random.seed(0) data2plot = np.random.rand(5,5) img = ax.imshow(data2plot) fig = plt.figure( figsize=(12,6) ) ax1 = fig.add_subplot(1,2,1) ax2 = fig.add_subplot(1,2,2) plot_image(ax1) plot_image(ax2) plt.show() slider = widgets.IntSlider(value=0, min=0, max=100, step=1) # callback function for the slider widget def update(change): plot_image(ax1, seed=0) plot_image(ax2, seed=change.new) fig.canvas.draw() # connect update function to slider widget using the .observe() method, observing changes in value attribute slider.observe(update 'value') slider </code></pre> <p>There is a slider, see the screenshot, and I can change its value, but it has no effect. What am I missing?</p> <p><a href="https://i.sstatic.net/YM3MS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YM3MS.png" alt="Screenshot" /></a></p>
<python><matplotlib><jupyter-notebook><jupyter-lab>
2024-03-18 06:29:20
2
2,029
Alf
78,177,927
336,489
Azure Functions in Python and GnuPG invocation
<p>I have an Azure Function in Python and I am trying to use the python-gnupg wrapper to invoke a GnuPG binary while doing local development.</p> <p>This is the code I am trying out inside of the Azure Function with a HTTP Trigger.</p> <pre><code>import gnupg import tempfile import subprocess import azure.functions as func import logging @app.route(route=&quot;PGPOne&quot;) def PGPOne(req: func.HttpRequest, context: func.Context) -&gt; func.HttpResponse: logging.info('Python HTTP trigger function processed a request.') # Correctly obtaining the GPG binary path gpg_path = r'C:\PGPD\dependencies\gpg.exe' # Testing the GPG binary works result = subprocess.run([gpg_path, '--version'], capture_output=True, text=True) print(result.stdout) # Creating a temporary directory for GPG home temp_dir = tempfile.mkdtemp() print(f&quot;Temporary GPG home directory: {temp_dir}&quot;) # Initializing GPG with the temporary home directory gpg = gnupg.GPG(homedir=temp_dir, binary=gpg_path) name = req.params.get('name') if not name: try: req_body = req.get_json() except ValueError: pass else: name = req_body.get('name') if name: return func.HttpResponse(f&quot;Hello, {name}. This HTTP triggered function executed successfully.&quot;) else: return func.HttpResponse( &quot;This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.&quot;, status_code=200 </code></pre> <p>The code blocks [Testing the GPG binary works] and [Creating a temporary directory for GPG home] both work as expected and I get the following outputs for the respective print statements.</p> <pre><code>Temporary directory: C:\Users\&lt;myusername&gt;\AppData\Local\Temp\tmpxacvv8_i GPG binary path: C:\PGPD\dependencies\gpg.exe </code></pre> <p>But the invocation of</p> <pre><code>gnupg.GPG(homedir=temp_dir, binary=gpg_path) </code></pre> <p>results in an error starting with -</p> <pre><code>Python HTTP trigger function processed a request. 
[2024-03-18T04:13:19.620Z] Creating directory: C:\PGPD\'C:\Users\&lt;myusername&gt;\AppData\Local\Temp\tmpxacvv8_i' [2024-03-18T04:13:19.658Z] [WinError 123] The filename, directory name, or volume label syntax is incorrect: &quot;C:\\PGPD\\'C:&quot; </code></pre> <p>Why is this part being prefixed in the invocation while Creating directory:</p> <pre><code> C:\PGPD\' </code></pre> <p>What am I doing wrong and how to correct this?</p> <hr /> <p>This is while debugging the function locally using Function Core Tools and using Python 3.10 in a virtual env setting within VS Code.</p> <p>And I have brought in the GnuPG binary dependency into the code folder structure as recommended by Microsoft docs.</p>
<python><azure><azure-functions><gnupg><python-gnupgp>
2024-03-18 04:25:12
1
5,130
GilliVilla
78,177,866
5,794,617
How to get number of cores from inside a pod
<p>Kubernetes allows <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits" rel="nofollow noreferrer">to set CPU &amp; memory limits for a pod</a>. Is there a way to get CPU request/limits from inside a POD without using <code>kubectl</code>?</p>
<python><kubernetes><cgroups>
2024-03-18 03:53:04
1
2,453
Artavazd Balayan
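Inside the container, the kubelet's CPU limit surfaces through cgroups, so no `kubectl` call is needed. A sketch for cgroup v2 (`/sys/fs/cgroup/cpu.max`) with a cgroup v1 fallback; the paths assume a standard mount, and the Downward API (exposing `resourceFieldRef: limits.cpu` as an environment variable in the pod spec) is the portable alternative when you control the manifest:

```python
from typing import Optional


def parse_cpu_max(text: str) -> Optional[float]:
    """Parse cgroup v2 cpu.max content ('<quota> <period>' or
    'max <period>') into a CPU-core limit, or None when unlimited."""
    quota, period = text.split()
    if quota == "max":
        return None
    return int(quota) / int(period)


def cpu_limit() -> Optional[float]:
    """Best-effort CPU limit (in cores) of the current container."""
    try:  # cgroup v2 layout
        with open("/sys/fs/cgroup/cpu.max") as f:
            return parse_cpu_max(f.read())
    except FileNotFoundError:
        pass
    try:  # cgroup v1 layout
        with open("/sys/fs/cgroup/cpu/cpu.cfs_quota_us") as f:
            quota = int(f.read())
        with open("/sys/fs/cgroup/cpu/cpu.cfs_period_us") as f:
            period = int(f.read())
        return None if quota < 0 else quota / period
    except FileNotFoundError:
        return None
```

A pod with `limits: cpu: "500m"` would report `0.5` here; note this is the *limit*, while the *request* is only visible via the Downward API, since requests don't set a cgroup quota.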
78,177,751
6,587,318
Is there any way to have re.sub report out on every replacement it makes?
<p>TL;DR: How to get <code>re.sub</code> to print out what substitutions it makes, including when using groups?</p> <p>Kind of like having a verbose option, is it possible to have <code>re.sub</code> print out a message every time it makes a replacement? This would be very helpful for testing how multiple lines of <code>re.sub</code> is interacting with large texts.</p> <p>I've managed to come up with this workaround for simple replacements utilizing the fact that the <code>repl</code> argument can be a function:</p> <pre><code>import re def replacer(text, verbose=False): def repl(matchobj, replacement): if verbose: print(f&quot;Replacing {matchobj.group()} with {replacement}...&quot;) return replacement text = re.sub(r&quot;[A-Z]+&quot;, lambda m: repl(m, &quot;CAPS&quot;), text) text = re.sub(r&quot;\d+&quot;, lambda m: repl(m, &quot;NUMBER&quot;), text) return text replacer(&quot;this is a 123 TEST 456&quot;, True) # Log: # Replacing TEST with CAPS... # Replacing 123 with NUMBER... # Replacing 456 with NUMBER... </code></pre> <p>However, this doesn't work for groups--it seems <code>re.sub</code> automatically escapes the return value of <code>repl</code>:</p> <pre><code>def replacer2(text, verbose=False): def repl(matchobj, replacement): if verbose: print(f&quot;Replacing {matchobj.group()} with {replacement}...&quot;) return replacement text = re.sub(r&quot;([A-Z]+)(\d+)&quot;, lambda m: repl(m, r&quot;\2\1&quot;), text) return text replacer2(&quot;ABC123&quot;, verbose=True) # returns r&quot;\2\1&quot; # Log: # Replacing ABC123 with \2\1... </code></pre> <p>Of course, a more sophisticated <code>repl</code> function can be written that actually checks for groups in <code>replacement</code>, but at that point that solution seems too complicated for the goal of just getting <code>re.sub</code> to report out on substitutions. 
Another potential solution would be to just use <code>re.search</code>, report out on that, then use <code>re.sub</code> to make the replacement, potentially using the <code>Pattern.sub</code> variant in order to specify <code>pos</code> and <code>endpos</code> to save the <code>sub</code> function from having to search the whole string again. Surely there's a better way than either of these options?</p>
<python><regex><python-re>
2024-03-18 02:59:40
2
326
Zachary
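The escaping seen in `replacer2` above happens because a string returned from a callable `repl` is inserted literally; it is only string *templates* passed directly to `re.sub` that get `\1`/`\g<name>` processing. `re.Match.expand()` applies exactly that template processing on demand, which makes a group-aware verbose wrapper short:

```python
import re


def verbose_sub(pattern, template, text, verbose=False):
    """Like re.sub(pattern, template, text), but logs every
    replacement.  Match.expand() resolves group references in
    `template`, so raw strings such as r'\2\1' work as usual."""
    def repl(m):
        result = m.expand(template)
        if verbose:
            print(f"Replacing {m.group()!r} with {result!r}...")
        return result
    return re.sub(pattern, repl, text)
```

So `verbose_sub(r"([A-Z]+)(\d+)", r"\2\1", "ABC123", verbose=True)` logs the substitution and returns `"123ABC"`, with no manual group handling in the wrapper.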
78,177,697
1,174,102
How to access Kivy Properties from within its "self" on __init__()
<p>How can I access a <a href="https://kivy.org/doc/stable/api-kivy.properties.html" rel="nofollow noreferrer">Kivy Property</a> within a widget's own <code>__init__()</code> function?</p> <p>I wrote a custom <a href="https://kivy.org/doc/stable/api-kivy.uix.widget.html" rel="nofollow noreferrer">widget in Kivy</a>. I need to display several thousand instances of this widget object in a <a href="https://kivy.org/doc/stable/api-kivy.uix.gridlayout.html" rel="nofollow noreferrer">Grid</a> on a kivy <a href="https://kivy.org/doc/stable/api-kivy.modules.screen.html" rel="nofollow noreferrer">screen</a>. Doing so crashes the system, so I'm wrapping the GridLayout in a <a href="https://kivy.org/doc/stable/api-kivy.uix.recycleview.html" rel="nofollow noreferrer">RecycleView</a>. Doing so makes it render immediately and without any lag. Great!</p> <p>Previously, I had positional arguments passed to my custom widget's <code>__init__()</code> function, which I used to set up a few instance fields that would be used to determine what would appear (and how) in the widget. Unfortunately, RecycleView has forced me to replace the positional arguments with Kivy Properties. And I can't seem to access the values of those properties within the object's <code>__init__()</code> function.</p> <p>For simplicity, let's consider this minimal example of the issue, taken from <a href="https://groups.google.com/g/kivy-users/c/4_xaX7xtL_s" rel="nofollow noreferrer">this question</a> on the Kivy Mailing list.</p> <pre><code>#!/usr/bin/env python3 from kivy.app import App from kivy.lang import Builder from kivy.uix.recycleview import RecycleView from kivy.uix.boxlayout import BoxLayout from kivy.properties import StringProperty, ListProperty kv = ''' &lt;TwoButtons&gt;: # This class is used as the viewclass in the RecycleView # This means this widget will be instanced to view one element of data from the data list. # The RecycleView data list is a list of dictionaries. 
The keys in the dictionary specify the
    # attributes of the widget.
    Button:
        text: root.left_text
        on_release: print(f'Button {self.text} pressed')
    Button:
        text: root.right_text
        on_release: print(f'Button {self.text} pressed')

BoxLayout:
    orientation: 'vertical'
    Button:
        size_hint_y: None
        height: 48
        text: 'Add widget to RV list'
        on_release: rv.add()
    RV:  # A Reycleview
        id: rv
        viewclass: 'TwoButtons'  # The view class is TwoButtons, defined above.
        scroll_type: ['bars', 'content']
        bar_width: 10
        RecycleBoxLayout:  # This layout is used to hold the Recycle widgets
            default_size: None, dp(48)  # This sets the height of the BoxLayout that holds a TwoButtons instance.
            default_size_hint: 1, None
            size_hint_y: None
            height: self.minimum_height  # To scroll you need to set the layout height.
            orientation: 'vertical'
'''


class TwoButtons(BoxLayout):  # The viewclass definitions, and property definitions.
    left_text = StringProperty()
    right_text = StringProperty()
    print(&quot;TwoButtons top&quot;)

    def __init__(self, **kwargs):
        print(&quot;self.left_text:|&quot; + str(self.left_text) + &quot;|&quot;)
        super().__init__(**kwargs)
        print(&quot;self.left_text:|&quot; + str(self.left_text) + &quot;|&quot;)


class RV(RecycleView):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.data = [{'left_text': f'Left {i}', 'right_text': f'Right {i}'} for i in range(2)]

    def add(self):
        l = len(self.data)
        self.data.extend(
            [{'left_text': f'Added Left {i}', 'right_text': f'Added Right {i}'} for i in range(l, l + 1)])


class RVTwoApp(App):
    def build(self):
        return Builder.load_string(kv)


RVTwoApp().run()
</code></pre> <p>In the above code segment, the custom widget that we're using <code>RecycleView</code> to instantiate many instances of is called <code>TwoButtons</code>.</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: center;"><a href="https://i.sstatic.net/nGzT4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nGzT4.png" alt="Screenshot of a CLI terminal showing the execution of the above code and a GUI window with an array of button widgets displayed" /></a></th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">The buttons in the GUI appear with the text <code>Left 0</code> and <code>Left 1</code>, as desired -- but attempting to access their values from within <code>__init__()</code> results in the empty string (<code>self.left_text:||</code>)</td> </tr> </tbody> </table></div> <p>The <code>TwoButtons</code> class has two Kivy Properties:</p> <ol> <li>StringProperty <code>left_text</code> and</li> <li>StringProperty <code>right_text</code></li> </ol> <p>If you execute the app, you can clearly see that <code>RecycleView</code> is able to pass the data into the <code>TwoButtons</code> instances' Properties, as the text appears in the buttons as expected.</p> <p><strong>The problem is that the value of the <code>left_text</code> Property is the empty string inside <code>__init__()</code>.</strong></p> <p>Consider the following execution of the above program:</p> <pre><code>user@buskill:~/tmp/rv$ python3 rv.py
[INFO   ] [Logger      ] Record log in /home/user/.kivy/logs/kivy_24-03-17_23.txt
[INFO   ] [Kivy        ] v2.1.0
[INFO   ] [Kivy        ] Installed at &quot;/home/user/.local/lib/python3.9/site-packages/kivy/__init__.py&quot;
[INFO   ] [Python      ] v3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
...
[INFO   ] [Text        ] Provider: sdl2
[INFO   ] [Base        ] Start application main loop
[INFO   ] [GL          ] NPOT texture support is available
self.left_text:||
self.left_text:||
self.left_text:||
self.left_text:||
</code></pre> <p>As you can see, the <code>print()</code> statements inside the <code>__init__()</code> function of <code>TwoButtons</code> print an empty string for <code>left_text</code> (even though the actual <code>left_text</code> in the buttons in the GUI appears as <code>Left 0</code> and <code>Left 1</code>, as desired).</p> <p>Moreover, if you click the <code>Add widget to RV list</code> button to add a third row of buttons with <code>Added Left 2</code> and <code>Added Right 2</code> text, then the following new lines are <code>print()</code>ed again:</p> <pre><code>self.left_text:||
self.left_text:||
</code></pre> <p>How can I actually access a given object's Properties' values from within the object's <code>__init__()</code> function?</p>
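<p>To convince myself this is a sequencing issue, I wrote a plain-Python stand-in. This is an assumption on my part -- Kivy's real property machinery is far more involved -- but my mental model is that the RecycleView adapter constructs the viewclass instance first and only afterwards applies the data dictionary as attribute assignments:</p>

```python
# Hypothetical stand-in for the widget/adapter sequencing; NOT real Kivy code.
class TwoButtonsStandIn:
    left_text = ''  # class-level default, playing the role of StringProperty()

    def __init__(self, **kwargs):
        # at construction time only the default value is visible
        print(f"in __init__: left_text={self.left_text!r}")


def apply_data(widget, data):
    # the adapter applies the data dict *after* construction
    for key, value in data.items():
        setattr(widget, key, value)


w = TwoButtonsStandIn()                   # prints the empty default
apply_data(w, {'left_text': 'Left 0'})    # data arrives only now
print(f"after data applied: left_text={w.left_text!r}")
```

<p>If that model is right, the values simply don't exist yet at <code>__init__()</code> time, and they'd have to be read afterwards (e.g. from a zero-delay <code>Clock.schedule_once()</code> callback or a property observer) rather than inside <code>__init__()</code> itself.</p>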
<python><android-recyclerview><kivy>
2024-03-18 02:32:51
1
2,923
Michael Altfield
78,177,620
661,424
Scrolling a tk.Text with the yview_scroll method sometimes results in a glitch
<p>I have a <code>tk.Text</code> widget inside a frame, which is inside a <code>Notebook</code>. Inside the tk.Text I embed another tk.Frame that has a tk.Frame header and a tk.Text content. The problem is that when I try to scroll the main tk.Text from bottom to top, a pure black background box appears as I scroll up, and it only disappears after more scrolling. It doesn't happen when scrolling down, and it only happens when <code>self.yview_scroll(-2, &quot;units&quot;)</code> is used. If I scroll by its normal events with the mousewheel or scrollbar it doesn't happen. But I need to scroll programmatically sometimes. Any ideas on what the problem might be?</p> <p><a href="https://i.sstatic.net/q3lMs.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q3lMs.jpg" alt="enter image description here" /></a></p> <p>In this image the embedded frame is below, the header being <code>Language: rust</code>. This happens when I scroll up from the bottom. Also, this only happens at certain window sizes, for example <code>--width 900 --height 1000</code>.</p> <p>This is how I add the embedded (Snippet) frame:</p> <pre class="lang-py prettyprint-override"><code>def format_snippets(self) -&gt; None:
    from .snippet import Snippet

    start_index = self.position
    text = self.get(start_index, &quot;end-1c&quot;)
    pattern = r&quot;^```([\w#]*)\n(.*?)\n```$&quot;
    matches = []

    for match in re.finditer(pattern, text, flags=re.MULTILINE | re.DOTALL):
        language = match.group(1)
        content_start = match.start(2)
        content_end = match.end(2)
        matches.append((content_start, content_end, language))

    for content_start, content_end, language in reversed(matches):
        start_line_col = self.index_at_char(content_start, start_index)
        end_line_col = self.index_at_char(content_end, start_index)
        snippet_text = self.get(start_line_col, end_line_col)
        self.delete(f&quot;{start_line_col} - 1 lines linestart&quot;, f&quot;{end_line_col} + 1 lines lineend&quot;)
        snippet = Snippet(self, snippet_text, language)
        self.window_create(f&quot;{start_line_col} - 1 lines&quot;, window=snippet)
        self.snippets.append(snippet)
</code></pre> <p>For now I'm using this as a workaround:</p> <pre class="lang-py prettyprint-override"><code>def get_fraction(self, num_lines: int = 1) -&gt; float:
    total_lines = self.count(&quot;1.0&quot;, &quot;end-1c lineend&quot;, &quot;displaylines&quot;)[0]
    fraction = num_lines / total_lines
    return fraction

def scroll_up(self, check: bool = False) -&gt; None:
    fraction = self.get_fraction()
    self.yview_moveto(self.yview()[0] - fraction)
</code></pre> <p>It calculates the percentage that a single line occupies.</p>
<python><tkinter>
2024-03-18 01:54:30
0
4,073
madprops
78,177,595
2,504,762
nox not able to find python 3.7 interpreter
<p>I am trying to contribute to an open source repo, and I realized that my test is failing on the Python 3.7 interpreter while passing on 3.8, 3.9 and 3.10. I am trying to run it locally so I can fix it.</p> <p>However, when I try to run it, it shows me the following error:</p> <pre><code>❯ nox -R -s unit-3.7 -- -k test_load_table_from_dataframe_w_datatype_mismatch
nox &gt; Running session unit-3.7
nox &gt; Missing interpreters will error by default on CI systems.
nox &gt; Session unit-3.7 skipped: Python interpreter 3.7 not found.
</code></pre> <p>I have Python 3.7 installed using <code>pyenv</code>:</p> <pre><code>❯ pyenv versions
  system
  3.7.10
* 3.7.12 (set by /Users/fki/Documents/git/python-bigquery/.python-version)
  3.8.10
  3.8.10/envs/airflow-env
  3.8.18
  airflow-env --&gt; /Users/fki/.pyenv/versions/3.8.10/envs/airflow-env
</code></pre> <p>I have no issues running tests on Python 3.10, even though it's not installed:</p> <pre><code>❯ nox -R -s unit-3.10 -- -k test_load_table_from_dataframe_w_datatype_mismatch
nox &gt; Running session unit-3.10
nox &gt; Re-using existing virtual environment at .nox/unit-3-10.
nox &gt; py.test --quiet --cov=google/cloud/bigquery --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 --capture=tee-sys tests/unit -k test_load_table_from_dataframe_w_datatype_mismatch
.                                                                       [100%]
====================================================================================================== warnings summary ======================================================================================================
.nox/unit-3-10/lib/python3.10/site-packages/_pytest/mark/structures.py:357
  /Users/fki/Documents/git/python-bigquery/.nox/unit-3-10/lib/python3.10/site-packages/_pytest/mark/structures.py:357: PytestRemovedIn9Warning: Marks applied to fixtures have no effect
    See docs: https://docs.pytest.org/en/stable/deprecations.html#applying-a-mark-to-a-fixture-function

    store_mark(func, self.mark)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
1 passed, 2291 deselected, 1 warning in 2.22s
nox &gt; Session unit-3.10 was successful.
</code></pre> <p>Could someone explain where nox looks for its interpreters?</p>
<python><python-3.x><unit-testing><nox>
2024-03-18 01:44:33
0
13,075
Gaurang Shah
78,177,570
11,628,437
How to `pd.concat` nested dictionaries?
<p>I am trying to concatenate multiple DataFrames using <code>pd.concat</code>. Basically I am trying to follow the instructions from <a href="https://stackoverflow.com/questions/78175233/can-i-create-a-nested-column-pandas-dataframe-using-a-nested-dictionary?noredirect=1#comment137819708_78175233">this</a> post for a four-level nested dictionary.</p> <p>Here is my attempt with a minimal example -</p> <pre><code>import pandas as pd

nested_dict = {
    'level1': {
        'level2': {
            'level3': {
                'level4': 'value'
            }
        }
    }
}

for key0, value0 in nested_dict.items():
    for key1, value1 in value0.items():
        for key2, value2 in value1.items():
            for key3, value3 in value2.items():
                out = pd.concat(key3:{pd.DataFrame(key2:{pd.DataFrame(key1:{pd.DataFrame({key0: pd.DataFrame(value0)})})})}, axis = 1)
</code></pre> <p>Unfortunately, I get the error -</p> <pre><code>    out = pd.concat(key3:{pd.DataFrame(key2:{pd.DataFrame(key1:{pd.DataFrame({key0: pd.DataFrame(value0)})})})}, axis = 1)
                        ^
SyntaxError: invalid syntax
</code></pre> <p>This is the output that I am looking for -</p> <pre><code>  level1
  level2
  level3
  level4
0  value
</code></pre> <p>Edit -</p> <p>I followed the instructions given in the answer -</p> <pre><code>for key0, value0 in nested_dict.items():
    for key1, value1 in value0.items():
        for key2, value2 in value1.items():
            for key3, value3 in value2.items():
                out = pd.concat({key3:pd.DataFrame({key2:pd.DataFrame({key1:pd.DataFrame({key0: pd.DataFrame(value0)})})})}, axis = 1)
</code></pre> <p>Now, I get the following error -</p> <pre><code>Traceback (most recent call last):
  File &quot;/home/thoma/.config/JetBrains/PyCharmCE2023.2/scratches/scratch_14.py&quot;, line 16, in &lt;module&gt;
    out = pd.concat({key3:pd.DataFrame({key2:pd.DataFrame({key1:pd.DataFrame({key0: pd.DataFrame(value0)})})})}, axis = 1)
  File &quot;/home/thoma/anaconda3/envs/benchmark/lib/python3.8/site-packages/pandas/core/frame.py&quot;, line 663, in __init__
    mgr = dict_to_mgr(data, index, columns, dtype=dtype, copy=copy, typ=manager)
  File &quot;/home/thoma/anaconda3/envs/benchmark/lib/python3.8/site-packages/pandas/core/internals/construction.py&quot;, line 494, in dict_to_mgr
    return arrays_to_mgr(arrays, columns, index, dtype=dtype, typ=typ, consolidate=copy)
  File &quot;/home/thoma/anaconda3/envs/benchmark/lib/python3.8/site-packages/pandas/core/internals/construction.py&quot;, line 119, in arrays_to_mgr
    index = _extract_index(arrays)
  File &quot;/home/thoma/anaconda3/envs/benchmark/lib/python3.8/site-packages/pandas/core/internals/construction.py&quot;, line 657, in _extract_index
    raise ValueError(&quot;If using all scalar values, you must pass an index&quot;)
ValueError: If using all scalar values, you must pass an index
</code></pre> <p>Edit 2: I tried <code>out = pd.concat([{key3:pd.DataFrame([{key2:pd.DataFrame([{key1:pd.DataFrame([{key0: pd.DataFrame(value0)}])}])}])}], axis = 1)</code></p> <p>But now I get the error -</p> <pre><code>Traceback (most recent call last):
  File &quot;/home/thoma/.config/JetBrains/PyCharmCE2023.2/scratches/scratch_14.py&quot;, line 16, in &lt;module&gt;
    out = pd.concat([{key3:pd.DataFrame([{key2:pd.DataFrame([{key1:pd.DataFrame([{key0: pd.DataFrame(value0)}])}])}])}], axis = 1)
  File &quot;/home/thoma/anaconda3/envs/benchmark/lib/python3.8/site-packages/pandas/util/_decorators.py&quot;, line 317, in wrapper
    return func(*args, **kwargs)
  File &quot;/home/thoma/anaconda3/envs/benchmark/lib/python3.8/site-packages/pandas/core/reshape/concat.py&quot;, line 369, in concat
    op = _Concatenator(
  File &quot;/home/thoma/anaconda3/envs/benchmark/lib/python3.8/site-packages/pandas/core/reshape/concat.py&quot;, line 459, in __init__
    raise TypeError(msg)
TypeError: cannot concatenate object of type '&lt;class 'dict'&gt;'; only Series and DataFrame objs are valid
</code></pre>
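<p>For reference, I can produce the shape I'm after by first flattening the dict into key-path tuples and building a MultiIndex, so my question is really whether the nested <code>pd.concat</code> approach can be made to work:</p>

```python
import pandas as pd

nested_dict = {'level1': {'level2': {'level3': {'level4': 'value'}}}}


def flatten(d, parents=()):
    # walk the nested dict, yielding (key-path tuple, leaf value) pairs
    for k, v in d.items():
        if isinstance(v, dict):
            yield from flatten(v, parents + (k,))
        else:
            yield parents + (k,), v


items = dict(flatten(nested_dict))
out = pd.DataFrame([list(items.values())],
                   columns=pd.MultiIndex.from_tuples(items.keys()))
print(out)
```

<p>This generalizes to any nesting depth, since each key path simply becomes one column tuple of the MultiIndex.</p>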
<python><pandas>
2024-03-18 01:32:27
1
1,851
desert_ranger
78,177,372
268,847
Allow a Python typer option to appear anywhere on the command line
<p>Consider the following typer-based Python program:</p> <pre class="lang-python prettyprint-override"><code>#!/usr/bin/env python

import typer
app = typer.Typer()

from typing_extensions import Annotated

VERBOSE = False

books_app = typer.Typer()
app.add_typer(books_app, name=&quot;books&quot;)

authors_app = typer.Typer()
app.add_typer(authors_app, name=&quot;authors&quot;)

###

@books_app.command(&quot;list&quot;)
def books_list() -&gt; str:
    if (VERBOSE):
        print(&quot;entering 'books_list'&quot;)

@books_app.command(&quot;delete&quot;)
def books_delete(
    book_name: Annotated[str, typer.Argument(help=&quot;name of book to delete&quot;)],
) -&gt; str:
    if (VERBOSE):
        print(&quot;entering 'books_delete'&quot;)

###

@app.callback(no_args_is_help=True, invoke_without_command=True)
def main(ctx: typer.Context,
         verbose: Annotated[bool, typer.Option(&quot;--verbose&quot;, &quot;-v&quot;, help=&quot;runs in verbose mode&quot;)] = False
         ) -&gt; None:
    global VERBOSE
    if (verbose):
        VERBOSE = True

if __name__ == &quot;__main__&quot;:
    app()
</code></pre> <p>I would like to be able to put the &quot;global&quot; option <code>--verbose</code> anywhere on the command line. That is, I want each of the following to work the same way:</p> <pre class="lang-none prettyprint-override"><code>$ ./myapp --verbose books list
$ ./myapp books --verbose list
$ ./myapp books list --verbose
</code></pre> <p>As the above program is written, only the first of the above three invocations works.</p> <p>One way to accomplish this would be to add the <code>--verbose</code> option to each command/subcommand function definition. However, that seems inefficient, especially if I have many commands and subcommands.</p> <p>Is there a better way to do this?</p>
<python><typer>
2024-03-17 23:50:49
0
7,795
rlandster
78,177,175
10,755,782
Difficulty Finding Input Field Using Selenium XPath in Python
<p>I'm currently learning Selenium automation and I'm trying to automate a simple task:</p> <ol> <li>navigating to a webpage (<a href="https://sri-gpt.github.io" rel="nofollow noreferrer">https://sri-gpt.github.io</a>),</li> <li>locating an input field,</li> <li>entering text into it, and submitting the form.</li> </ol> <p>To achieve this, I'm using Python and Selenium WebDriver.</p> <p>Here's the code I've written:</p> <pre class="lang-py prettyprint-override"><code>from selenium import webdriver

driver = webdriver.Chrome()
driver.get(&quot;https://sri-gpt.github.io&quot;)

# Find the form field using its XPath
form_field = driver.find_element(&quot;xpath&quot;, '/html/body/flutter-view/flt-text-editing-host/form/input[1]')
form_field.send_keys(&quot;MyInput_123&quot;)

driver.quit()
</code></pre> <p>However, when I execute this code, I encounter the following error:</p> <pre><code>selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {&quot;method&quot;:&quot;xpath&quot;,&quot;selector&quot;:&quot;/html/body/flutter-view/flt-text-editing-host/form/input[1]&quot;}
</code></pre> <p>It seems that my <code>XPath</code> expression to locate the input field is incorrect. I've tried different variations, but none seem to work. Can anyone help me correct my XPath expression to successfully locate the input field on the webpage?</p> <p>Additionally, if there are any best practices or alternative methods for locating elements using Selenium WebDriver, I'd appreciate any advice or suggestions.</p> <pre><code>Selenium version: 4.18.1
Python version: 3.10.12
Ubuntu version: 22.04
Chrome version: 122.0.6261.128 (Official Build) (64-bit)
</code></pre> <p>Thank you!</p>
<python><google-chrome><selenium-webdriver><selenium-chromedriver>
2024-03-17 22:30:13
2
660
brownser
78,177,066
814,354
Jupyterlab &quot;Too many open files&quot; caused by duplicate loads of libraries
<p>I repeatedly get jupyter lab sessions that start to hang while spamming tracebacks to the terminal that end with:</p> <pre><code>zmq.error.ZMQError: Too many open files
</code></pre> <p>An example complete traceback is:</p> <pre><code>[E 2024-03-17 17:40:36.843 ServerApp] Uncaught exception GET /api/kernels/059a770b-436f-40b5-b41b-95a1748ef7fd/channels?session_id=f60b1486-f926-4c96-924d-692a4e50d1e4 (172.16.206.48) HTTPServerRequest(protocol='http', host='localhost:23456', method='GET', uri='/api/kernels/059a770b-436f-40b5-b41b-95a1748ef7fd/channels?session_id=f60b1486-f926-4c96-924d-692a4e50d1e4', version='HTTP/1.1', remote_ip='172.16.206.48')
    Traceback (most recent call last):
      File &quot;/orange/adamginsburg/miniconda3/envs/python310/lib/python3.10/site-packages/tornado/websocket.py&quot;, line 944, in _accept_connection
        await open_result
      File &quot;/orange/adamginsburg/miniconda3/envs/python310/lib/python3.10/site-packages/jupyter_server/services/kernels/websocket.py&quot;, line 77, in open
        await self.connection.connect()
      File &quot;/orange/adamginsburg/miniconda3/envs/python310/lib/python3.10/site-packages/jupyter_server/services/kernels/connection/channels.py&quot;, line 363, in connect
        self.create_stream()
      File &quot;/orange/adamginsburg/miniconda3/envs/python310/lib/python3.10/site-packages/jupyter_server/services/kernels/connection/channels.py&quot;, line 154, in create_stream
        self.channels[channel] = stream = meth(identity=identity)
      File &quot;/orange/adamginsburg/miniconda3/envs/python310/lib/python3.10/site-packages/jupyter_client/ioloop/manager.py&quot;, line 25, in wrapped
        socket = f(self, *args, **kwargs)
      File &quot;/orange/adamginsburg/miniconda3/envs/python310/lib/python3.10/site-packages/jupyter_client/connect.py&quot;, line 664, in connect_iopub
        sock = self._create_connected_socket(&quot;iopub&quot;, identity=identity)
      File &quot;/orange/adamginsburg/miniconda3/envs/python310/lib/python3.10/site-packages/jupyter_client/connect.py&quot;, line 654, in _create_connected_socket
        sock = self.context.socket(socket_type)
      File &quot;/orange/adamginsburg/miniconda3/envs/python310/lib/python3.10/site-packages/zmq/sugar/context.py&quot;, line 362, in socket
        s: ST = socket_class(  # set PYTHONTRACEMALLOC=2 to get the calling frame
      File &quot;/orange/adamginsburg/miniconda3/envs/python310/lib/python3.10/site-packages/zmq/sugar/socket.py&quot;, line 159, in __init__
        super().__init__(
      File &quot;zmq/backend/cython/socket.pyx&quot;, line 332, in zmq.backend.cython.socket.Socket.__init__
    zmq.error.ZMQError: Too many open files
</code></pre> <p>This lab session is only running a single notebook. While that notebook is large, it's not extremely large.</p> <p>I've investigated some, and <code>lsof</code> reveals that there are &gt;30 copies of various library files being opened. For example:</p> <pre><code>$ lsof | grep scipy/sparse/_sparsetools.cpython-310-x86_64-linux-gnu.so | wc
     33     361    8613
$ lsof | grep indexing.cpython-310-x86_64-linux-gnu.so | wc
     33     361    8481
</code></pre> <p>These individually look like this:</p> <pre><code>COMMAND      PID      TID TASKCMD      USER          FD  TYPE      DEVICE SIZE/OFF               NODE NAME
python   1325008              adamginsburg           mem  REG 2445,764964  4384216 180149904096755221 /blue/adamginsburg/adamginsburg/miniconda3/envs/python310/lib/python3.10/site-packages/scipy/sparse/_sparsetools.cpython-310-x86_64-linux-gnu.so
python   1325008  1325015 ZMQbg/Rea adamginsburg     mem  REG 2445,764964  4384216 180149904096755221 /blue/adamginsburg/adamginsburg/miniconda3/envs/python310/lib/python3.10/site-packages/scipy/sparse/_sparsetools.cpython-310-x86_64-linux-gnu.so
python   1325008  1325016 ZMQbg/IO/ adamginsburg     mem  REG 2445,764964  4384216 180149904096755221 /blue/adamginsburg/adamginsburg/miniconda3/envs/python310/lib/python3.10/site-packages/scipy/sparse/_sparsetools.cpython-310-x86_64-linux-gnu.so
python   1325008  1325017 python    adamginsburg     mem  REG 2445,764964  4384216 180149904096755221 /blue/adamginsburg/adamginsburg/miniconda3/envs/python310/lib/python3.10/site-packages/scipy/sparse/_sparsetools.cpython-310-x86_64-linux-gnu.so
python   1325008  1325018 python    adamginsburg     mem  REG 2445,764964  4384216 180149904096755221 /blue/adamginsburg/adamginsburg/miniconda3/envs/python310/lib/python3.10/site-packages/scipy/sparse/_sparsetools.cpython-310-x86_64-linux-gnu.so
</code></pre> <p>i.e., they are all loaded by the same <code>PID</code> but different <code>TID</code>s.</p> <p>So, my question: What would cause a jupyter python session to load 30 copies of the libraries, and how do I prevent it?</p>
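<p>As a stopgap I'm raising the soft <code>RLIMIT_NOFILE</code> in the process that launches jupyter. This only delays the failure rather than explaining the duplicate loads, and the <code>4096</code> target below is just a value I picked:</p>

```python
import resource

# read the current per-process open-file limits (Unix only)
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft}, hard={hard}")

# raise the soft limit toward the hard limit; the soft limit can never
# exceed the hard limit for an unprivileged process
new_soft = 4096 if hard == resource.RLIM_INFINITY else min(4096, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (max(soft, new_soft), hard))
print(resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```

<p>The same effect can usually be had from the launching shell with <code>ulimit -n 4096</code> before starting <code>jupyter lab</code>.</p>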
<python><jupyter><jupyter-lab>
2024-03-17 21:46:42
0
19,445
keflavich
78,177,061
6,031,223
Error on Hybrid Search using Azure AI Search
<p>I've been getting the following error using the Python SDK and the front end, although I believe I have my schema set up as &quot;Searchable&quot; correctly. Please advise, thank you.</p> <blockquote> <p>&quot;The 'search' parameter requires at least 1 searchable text field in the index.\r\nParameter name: searchFields&quot;</p> </blockquote> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>def simple_hybrid_search():
    # [START simple_hybrid_search]
    query = "classify earthworks"

    search_client = SearchClient(endpoint, index_name, AzureKeyCredential(key))
    vector_query = VectorizedQuery(vector=generate_embeddings(query), k_nearest_neighbors=3, fields="vector")

    results = search_client.search(
        search_text=query,
        vector_queries=[vector_query],
        select=["metadata"],
    )

    for result in results:
        print(result)</code></pre> </div> </div> </p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>fields = [
    SimpleField(name="id", type=SearchFieldDataType.String, key=True, searchable=True),
    SearchField(name="vector", type=SearchFieldDataType.Collection(SearchFieldDataType.Single), searchable=True, vector_search_dimensions=1536, vector_search_profile_name="uniclass-rag-vector-config",),
    ComplexField(name="metadata", fields=[
        SearchableField(name="code", type=SearchFieldDataType.String, nullable=True, searchable=True, filterable=True, sortable=True, facetable=True),
        SearchableField(name="group", type=SearchFieldDataType.String, nullable=True, searchable=True, filterable=True, sortable=True, facetable=True),
        SearchableField(name="group_title", type=SearchFieldDataType.String, nullable=True, searchable=True, filterable=True, sortable=True, facetable=True),
        SearchableField(name="sub_group", type=SearchFieldDataType.String, nullable=True, searchable=True, filterable=True, sortable=True, facetable=True),
        SearchableField(name="sub_group_title", type=SearchFieldDataType.String, nullable=True, searchable=True, filterable=True, sortable=True, facetable=True),
        SearchableField(name="sub_object_title", type=SearchFieldDataType.String, nullable=True, searchable=True, filterable=True, sortable=True, facetable=True),
        SearchableField(name="title", type=SearchFieldDataType.String, nullable=True, searchable=True, filterable=True, sortable=True, facetable=True),
        SearchableField(name="object_title", type=SearchFieldDataType.String, nullable=True, searchable=True, filterable=True, sortable=True, facetable=True),
        SearchableField(name="classification_title", type=SearchFieldDataType.String, nullable=True, searchable=True, filterable=True, sortable=True, facetable=True),
</code></pre> </div> </div> </p> <p><a href="https://i.sstatic.net/g2ecr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g2ecr.png" alt="enter image description here" /></a></p>
<python><azure><azure-cognitive-search><azure-python-sdk><azure-ai-search>
2024-03-17 21:44:38
1
333
ROAR.L
78,176,968
11,628,437
How can I add the values for the same groups up to the second-to-last index level of a nested Pandas dataframe
<p>I've created a minimal example of a nested pandas dataframe from a nested dictionary, based on the instructions given in this <a href="https://stackoverflow.com/questions/78175233/can-i-create-a-nested-column-pandas-dataframe-using-a-nested-dictionary/78175296#78175296">post</a>.</p> <pre><code>nested_dict = {
    'Full_Grades': {
        'Science_Marks': {
            'Physics': {
                'Theo': 99,
                'Prac': 100
            },
            'Biology': {
                'Theo': 89,
                'Prac': 100
            }
        },
        'Finance_Marks': {
            'Economics': {
                'Theo': 99,
                'Prac': 100
            },
            'Accounting': {
                'Theo': 89,
                'Prac': 100
            }
        }
    }
}
</code></pre> <pre><code>import pandas as pd

out = pd.concat({k: pd.concat({k2: pd.DataFrame(v2) for k2, v2 in v.items()}, axis = 1)
                 for k, v in nested_dict.items()}, axis = 1).unstack().to_frame().T
print(out)
</code></pre> <p>Here is the output -</p> <pre><code>  Full_Grades    Full_Grades    Full_Grades    Full_Grades    Full_Grades    Full_Grades    Full_Grades    Full_Grades
  Science_Marks  Science_Marks  Science_Marks  Science_Marks  Finance_Marks  Finance_Marks  Finance_Marks  Finance_Marks
  Physics        Physics        Biology        Biology        Economics      Economics      Accounting     Accounting
  Theo           Prac           Theo           Prac           Theo           Prac           Theo           Prac
0 99             100            89             100            99             100            89             100
</code></pre> <p>Can anyone suggest a technique to add the numbers within the same group up to the second-to-last index level? For instance, the total for <code>Physics</code> (under the group <code>Full_Grades-Science_Marks-Physics</code>) would be 199. It's okay if the last index level names are different (<code>Theo</code> and <code>Prac</code>).</p> <p>For this post I don't have any work to show, as I really don't know how to even begin. Also, apologies if the question title or contents aren't clear. I wrote them to the best of my ability. Let me know if any further clarification is required.</p> <p>Edit 1: Here is the output I am looking for -</p> <pre><code>    Full_Grades    Full_Grades    Full_Grades    Full_Grades    Full_Grades    Full_Grades    Full_Grades    Full_Grades
    Science_Marks  Science_Marks  Science_Marks  Science_Marks  Finance_Marks  Finance_Marks  Finance_Marks  Finance_Marks
    Physics        Physics        Biology        Biology        Economics      Economics      Accounting     Accounting
    Theo           Prac           Theo           Prac           Theo           Prac           Theo           Prac
0   99             100            89             100            99             100            89             100
Sum 199                           189                           199                           189
</code></pre> <p>Also, I'd appreciate a way to access the values corresponding to row <code>0</code> so that I can do various analyses such as sum, average, etc.</p>
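<p>The requested sums can be computed by transposing so the four column levels become index levels, grouping on the first three levels, and transposing back. I'm not sure this is the idiomatic way, but it gives one total per <code>Full_Grades-..._Marks-Subject</code> group:</p>

```python
import pandas as pd

nested_dict = {
    'Full_Grades': {
        'Science_Marks': {
            'Physics': {'Theo': 99, 'Prac': 100},
            'Biology': {'Theo': 89, 'Prac': 100},
        },
        'Finance_Marks': {
            'Economics': {'Theo': 99, 'Prac': 100},
            'Accounting': {'Theo': 89, 'Prac': 100},
        },
    }
}

# rebuild the 1-row frame with 4-level columns as in the question
out = pd.concat(
    {k: pd.concat({k2: pd.DataFrame(v2) for k2, v2 in v.items()}, axis=1)
     for k, v in nested_dict.items()},
    axis=1,
).unstack().to_frame().T

# group the 4-level columns by their first three levels and sum Theo + Prac
sums = out.T.groupby(level=[0, 1, 2], sort=False).sum().T
print(sums)

# individual row-0 values stay accessible via full column tuples, e.g.:
physics_theo = out[('Full_Grades', 'Science_Marks', 'Physics', 'Theo')].iloc[0]
```
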
<python><pandas>
2024-03-17 21:13:45
1
1,851
desert_ranger
78,176,948
2,195,440
How to resolve types in Python code with Tree-sitter?
<p>I'm using Tree-sitter to parse Python code and extract ASTs, and trying to manually traverse the ASTs to infer types based on assignments and function definitions.</p> <p>But I'm struggling to accurately resolve types (variables, functions, classes) due to Python's dynamic typing. Specifically, challenges arise with:</p> <ul> <li>Inferring types in dynamically typed contexts.</li> <li>Handling types from external modules/packages.</li> <li>Leveraging Python's type annotations for improved type resolution.</li> </ul> <p>I just need to resolve types that I have defined in my repository.</p> <p>For example:</p> <pre><code># car.py
class Car:
    # ...
</code></pre> <p>Now in a different file:</p> <pre><code>from car import Car

car = Car()
# ...
bmw = car
# ...
</code></pre> <p>I need to know that <code>bmw</code> is a <code>Car</code>.</p> <p>How can I successfully navigate type resolution in Python with Tree-sitter?</p> <p>What approaches or algorithms can I use to accurately resolve types defined in external modules without executing the Python code?</p> <p>I also need to tackle def-use (definition-use) chains to track variable assignments and their types across the codebase:</p> <ul> <li>Resolve identifiers to their definitions.</li> <li>Determine the type of each identifier (class, module, etc.).</li> <li>Extract the name of the class or module the identifier refers to.</li> </ul> <p>I also think it needs to handle control flow as well.</p> <p>Do we need to implement some form of scope graph to achieve this?</p> <p>Also, it needs to implement Python's LEGB scoping logic somehow.</p>
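<p>To make the goal concrete, here is the kind of def-use propagation I mean, prototyped with Python's stdlib <code>ast</code> module instead of Tree-sitter. This is only an illustration of the expected result (single scope, simple assignments, no control flow), not a Tree-sitter solution:</p>

```python
import ast

source = """
from car import Car
car = Car()
bmw = car
"""

imported = set()
types = {}  # variable name -> inferred class name

# walk top-level statements in order and propagate types through assignments
for node in ast.parse(source).body:
    if isinstance(node, ast.ImportFrom):
        imported.update(alias.name for alias in node.names)
    elif isinstance(node, ast.Assign) and isinstance(node.targets[0], ast.Name):
        target, value = node.targets[0].id, node.value
        if isinstance(value, ast.Call) and isinstance(value.func, ast.Name) and value.func.id in imported:
            types[target] = value.func.id      # car = Car()
        elif isinstance(value, ast.Name) and value.id in types:
            types[target] = types[value.id]    # bmw = car

print(types)
```

<p>The Tree-sitter version would do the same walk over <code>import_from_statement</code> and <code>assignment</code> nodes, but would additionally need per-scope symbol tables (the LEGB part) and a merge step at control-flow joins.</p>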
<python><treesitter><type-resolution>
2024-03-17 21:06:35
0
3,657
Exploring
78,176,906
640,205
Why is there a file handle after opening and closing an xlsx file workbook with openpyxl with read_only=True?
<p>Can I get your help troubleshooting a file handle issue with the Python package openpyxl version 3.0.7? If the load_workbook 'read_only' parameter is set to False, this does not occur. It only occurs when set to True. If you call these load_workbook and close functions multiple times (on the same file), this will eventually happen. I believe I narrowed down the source code opening the file handle. The problem is that it isn't removed. The exception is thrown when calling <code>shutil.move(source_file, target_file)</code> after opening/closing the same workbook multiple times. I'm going to try to avoid it by opening and closing one time, but I'll need to build a data structure to store everything because the workbook has 23 worksheets. But this seems like an issue regardless. If I set read_only=False, the performance is terrible! So it takes about an hour+ longer to run.</p> <pre><code>import openpyxl  # openpyxl 3.0.7

# repeat open/close multiple times
wb_source = openpyxl.load_workbook(file_path, read_only=True)
ws_source = wb_source[worksheet_name]
for row in ws_source.rows:
    for cell in # cells
        # ...
wb_source.close()

shutil.move(file_path, file_path_archive)
</code></pre> <p><strong>Here is the exception:</strong></p> <pre><code>Traceback (most recent call last):
  File &quot;C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.2544.0_x64__qbz5n2kfra8p0\lib\shutil.py&quot;, line 566, in move
    os.rename(src, real_dst)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Python\\...file.xlsx' -&gt; 'C:\\Python\\...file.xlsx'
</code></pre> <p><code>.\venv\Lib\site-packages\openpyxl\reader\excel.py</code></p> <pre><code># Python stdlib imports
from zipfile import ZipFile, ZIP_DEFLATED, BadZipfile
from sys import exc_info
from io import BytesIO
import os.path
import warnings

# ...

        if self.read_only:
            ws = ReadOnlyWorksheet(self.wb, sheet.name, rel.target, self.shared_strings)
            ws.sheet_state = sheet.state
            self.wb._sheets.append(ws)
            continue
        else:
            fh = self.archive.open(rel.target)
            ws = self.wb.create_sheet(sheet.name)
            ws._rels = rels
            ws_parser = WorksheetReader(ws, fh, self.shared_strings, self.data_only)
            ws_parser.bind_all()
</code></pre> <p><code>.\venv\Lib\site-packages\openpyxl\packaging\manifest.py</code></p> <pre><code>mimetypes = MimeTypes()
</code></pre> <p><code>C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.2544.0_x64__qbz5n2kfra8p0\Lib\mimetypes.py</code></p> <pre><code>class MimeTypes:

def init(files=None):
    global suffix_map, types_map, encodings_map, common_types
    global inited, _db
    inited = True  # so that MimeTypes.__init__() doesn't call us again
    if files is None or _db is None:
        db = MimeTypes()
        if _winreg:
            db.read_windows_registry()
        if files is None:
            files = knownfiles
        else:
            files = knownfiles + list(files)
    else:
        db = _db
    for file in files:
        if os.path.isfile(file):
            db.read(file)  # &lt;-------------------------------------- read file
    encodings_map = db.encodings_map
    suffix_map = db.suffix_map
    types_map = db.types_map[True]
    common_types = db.types_map[False]
    # Make the DB a global variable now that it is fully initialized
    _db = db
</code></pre> <pre><code>def read(self, filename, strict=True):
    &quot;&quot;&quot;
    Read a single mime.types-format file, specified by pathname.

    If strict is true, information will be added to
    list of standard types, else to the list of non-standard
    types.
    &quot;&quot;&quot;
    with open(filename, encoding='utf-8') as fp:
        self.readfp(fp, strict)
</code></pre> <pre><code>def readfp(self, fp, strict=True):
    &quot;&quot;&quot;
    Read a single mime.types-format file.

    If strict is true, information will be added to
    list of standard types, else to the list of non-standard
    types.
    &quot;&quot;&quot;
    while 1:
        line = fp.readline()
        if not line:
            break
        words = line.split()
        for i in range(len(words)):
            if words[i][0] == '#':
                del words[i:]
                break
        if not words:
            continue
        type, suffixes = words[0], words[1:]
        for suff in suffixes:
            self.add_type(type, '.' + suff, strict)
</code></pre>
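<p>For now I'm working around the <code>shutil.move</code> failure with a retry wrapper. This assumes the stale handle is eventually released, which may not hold here; it doesn't address the underlying leak:</p>

```python
import os
import shutil
import tempfile
import time


def move_with_retry(src, dst, attempts=5, delay=0.5):
    # retry shutil.move a few times in case a lingering handle blocks the rename
    for _ in range(attempts):
        try:
            shutil.move(src, dst)
            return True
        except PermissionError:
            time.sleep(delay)
    return False


# small self-test with a temporary file standing in for the workbook
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, 'file.xlsx')
dst = os.path.join(tmpdir, 'archive.xlsx')
open(src, 'wb').close()
print(move_with_retry(src, dst))
```
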
<python><openpyxl>
2024-03-17 20:49:04
1
19,120
JustBeingHelpful
78,176,717
2,340,002
How to bind functions returning references with pybind11?
<p>When binding C++ with <code>pybind11</code>, I ran into an issue regarding a couple of class members that return (const or non-const) references; considering the following snippet:</p> <pre><code>struct Data { double value1; double value2; }; class Element { public: Element() = default; Element(Data values) : data(values) { } Element(const Element&amp; other) : data(other.data) { printf(&quot;copying from %p\n&quot;, &amp;other); } Data data; }; class Container { public: Container() : data(10, Element(Data{0.1, 0.2})) {}; Element&amp; operator[](size_t idx) { return data[idx]; } const Element&amp; operator[](size_t idx) const { return data[idx]; } protected: std::vector&lt; Element &gt; data; }; </code></pre> <p>which is bound to a Python module with:</p> <pre><code>py::class_&lt; Data &gt; (module, &quot;Data&quot;) .def(py::init&lt; double, double &gt;(), &quot;Constructs a new instance&quot;, &quot;v1&quot;_a, &quot;v2&quot;_a) .def_readwrite(&quot;value1&quot;, &amp;Data::value1) .def_readwrite(&quot;value2&quot;, &amp;Data::value2); py::class_&lt; Element &gt; (module, &quot;Element&quot;) .def(py::init&lt; Data &gt;(), &quot;Constructs a new instance&quot;, &quot;values&quot;_a) .def_readwrite(&quot;data&quot;, &amp;Element::data) .def(&quot;__repr__&quot;, [](const Element&amp; e){ return std::to_string(e.data.value1); }); py::class_&lt; Container &gt; (module, &quot;Container&quot;) .def(py::init&lt; &gt;(), &quot;Constructs a new instance&quot;) .def(&quot;__getitem__&quot;, [](Container&amp; c, size_t idx) { return c[idx]; }, &quot;idx&quot;_a) .def(&quot;__setitem__&quot;, [](Container&amp; c, size_t idx, Element e) { c[idx] = e; }, &quot;idx&quot;_a, &quot;val&quot;_a); </code></pre> <p>I am having trouble getting the <code>[]</code> operator to work on <code>Container</code> class</p> <pre><code>print(&quot;-------------&quot;) foo = module.Data(0.9, 0.8) print(foo.value2) foo.value2 = 0.7 # works print(foo.value2) print(&quot;-------------&quot;) e 
= module.Element(module.Data(0.3, 0.2)) print(e.data.value1) e.data.value2 = 0.6 # works print(e.data.value2) e.data = foo # works print(e.data.value2) print(&quot;-------------&quot;) c = module.Container() print(c[0].data.value1) c[0] = e # works print(c[0].data.value1) c[0].data = foo # does not work (!) print(c[0].data.value1) c[0].data.value1 = 0.0 # does not work (!) print(c[0].data.value1) </code></pre> <p>While the <code>__getitem__</code> (<code>[]</code>) function does seem to be working as intended, it seems to fail when accessing members on the returned object; instead, a temporary copy is created from the returned reference, and any changes to that instance are not applied. I've tried 1) declaring a <code>std::shared_ptr&lt;Element&gt;</code> holder type when binding the <code>Element</code> class; 2) defining specific return value policy <code>py::return_value_policy::reference</code> and <code>py::return_value_policy::reference_internal</code> on <code>__getitem__</code>; and 3) defining specific call policies <code>py::keep_alive&lt;0,1&gt;()</code> and <code>py::keep_alive&lt;1,0&gt;()</code>; but none of these solutions worked.</p> <p>Any hints on how to solve this issue?</p>
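For intuition, here is a pure-Python analogue (hypothetical class names, no pybind11) of what the binding appears to be doing: when `__getitem__` hands back a copy of the stored element rather than the element itself, mutations on the returned temporary never reach the container — which matches the symptoms described.

```python
import copy

class Data:
    def __init__(self, value1):
        self.value1 = value1

class Container:
    def __init__(self):
        self._items = [Data(0.1) for _ in range(3)]

    def __getitem__(self, idx):
        # Mimics a binding that copies the returned reference:
        # the caller receives a temporary, not the stored element.
        return copy.copy(self._items[idx])

c = Container()
c[0].value1 = 9.9      # mutates the temporary copy only
print(c[0].value1)     # still 0.1
```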
<python><c++><reference><pybind11>
2024-03-17 19:55:58
1
1,767
joaocandre
78,176,587
884,463
Python's file.truncate() unexpectedly does not truncate
<p>I have this very simple Python program:</p> <pre><code>def print_file(filename): with open(filename,'r') as read_file: print(read_file.read()) def create_random_file(filename,count): with open(filename,'w+', encoding='utf-8') as writefile: for row_num in range(count): writefile.write(f'{row_num}: fo bar baz\n') def truncate_file_after_first_line(file,read_a_line): file.seek(0,0) # go to start of file print(f&quot;After seeking to 0, at position {file.tell()}&quot;); if (read_a_line): header = file.readline() print(f&quot;After reading a line, at position {file.tell()}&quot;); print(f&quot;Found header '{header.rstrip()}'\n&quot;) file.write('TRUNCATE AFTER THIS\n') print(f&quot;After writing marker, at position {file.tell()}&quot;); file.truncate() def mangle_file(filename,read_a_line): with open(filename,'r+') as file: truncate_file_after_first_line(file,read_a_line) # ---- filename = 'testpy.txt' read_a_line = True create_random_file(filename,5) print(&quot;Original file:&quot;) print_file(filename) mangle_file(filename,read_a_line) print(&quot;Truncated file:&quot;) print_file(filename) </code></pre> <p>So, I:</p> <ul> <li>I create a file with 5 lines (and print it to stdout too).</li> <li>Then, in <code>mangle_file()</code>: <ul> <li>I open the file with <a href="http://www.manpagez.com/man/3/fopen/" rel="nofollow noreferrer">the <code>r+</code> option</a>, i.e. <em>Open for reading and writing. The stream is positioned at the beginning of the file.</em></li> <li>Depending on bool <code>read_a_line</code>, I then either <ul> <li>a) Seek to position 0, <strong>read a line</strong>, write the marker <code>TRUNCATE AFTER THIS\n</code>, then truncate the file.</li> <li>b) Seek to position 0, write the marker <code>TRUNCATE AFTER THIS\n</code>, then truncate the file.</li> </ul> </li> <li>Finally I close the file</li> </ul> </li> <li>And then re-read it to print it.</li> </ul> <p>Sounds straightforward but for a) where the first line in the file (i.e. 
<code>0: fo bar baz</code>) is read before truncation, the resulting file is:</p> <pre><code>0: fo bar baz 1: fo bar baz 2: fo bar baz 3: fo bar baz 4: fo bar baz TRUNCATE AFTER THIS </code></pre> <p>i.e. <strong>the <code>truncate()</code> did nothing</strong>, the marker has been appended to the untruncated file. Whereas I would expect truncation after the first line read:</p> <pre><code>0: fo bar baz TRUNCATE AFTER THIS </code></pre> <p>For b), as expected, the resulting file is</p> <pre><code>TRUNCATE AFTER THIS </code></pre> <p>What do I get wrong about <code>truncate()</code>?</p> <p><strong>Update: Added some <code>tells</code></strong></p> <p>With <code>read_a_line = True</code></p> <pre><code>Original file: 0: fo bar baz 1: fo bar baz 2: fo bar baz 3: fo bar baz 4: fo bar baz After seeking to 0, at position 0 After reading a line, at position 14 Found header '0: fo bar baz' After writing marker, at position 90 Truncated file: 0: fo bar baz 1: fo bar baz 2: fo bar baz 3: fo bar baz 4: fo bar baz TRUNCATE AFTER THIS </code></pre> <p>With <code>read_a_line = False</code></p> <pre><code>Original file: 0: fo bar baz 1: fo bar baz 2: fo bar baz 3: fo bar baz 4: fo bar baz After seeking to 0, at position 0 After writing marker, at position 20 Truncated file: TRUNCATE AFTER THIS </code></pre>
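For reference, a minimal stdlib-only sketch (temporary file, same sample content) of the usual remedy: `truncate()` cuts at the current stream position, and switching from reading to writing on an `r+` stream without an intervening `seek()` leaves that position at the mercy of the buffered read-ahead. An explicit `f.seek(f.tell())` between the `readline()` and the `write()` pins the position:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "testpy.txt")
with open(path, "w", encoding="utf-8") as f:
    for row_num in range(5):
        f.write(f"{row_num}: fo bar baz\n")

with open(path, "r+") as f:
    header = f.readline()   # the buffered reader may consume far more than one line
    f.seek(f.tell())        # pin the position before switching from reading to writing
    f.write("TRUNCATE AFTER THIS\n")
    f.truncate()            # truncates at the current position

with open(path) as f:
    content = f.read()
print(content)
```

On CPython this leaves only the first line plus the marker; without the `seek()`, the write lands wherever the buffered text stream happens to sit (in practice end-of-file, which is what the `tell()` reading of 90 in the update indicates).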
<python>
2024-03-17 19:15:14
2
15,375
David Tonhofer
78,176,517
386,861
Overlaid boundary and point Altair plots are not aligning
<p>I've got a layer map plot in Altair of London that I was struggling to flip - <a href="https://stackoverflow.com/questions/78175314/solving-upside-plot-and-projection-problems-in-geopandas-and-altair">Solving upside plot and projection problems in Geopandas and Altair</a></p> <p>However, a more troubling problem is that I've got a dataframe with shapefiles that look a bit like shape_data and the points are the points.</p> <pre><code> Postcode Ward lat long geometry 0 N16 5RF Springfield 51.572046 -0.075572 POINT (-0.07557 51.57205) 1 N16 5RF Springfield 51.572046 -0.075572 POINT (-0.07557 51.57205) 2 N16 5RF Stamford Hill West 51.572046 -0.075572 POINT (-0.07557 51.57205) 3 N16 5RF Stamford Hill West 51.572046 -0.075572 POINT (-0.07557 51.57205) </code></pre> <p>The shape_data looks a bit like this:</p> <pre><code> NAME GSS_CODE DISTRICT LAGSSCODE HECTARES NONLD_AREA geometry 555 Hoxton East &amp; Shoreditch E05009377 Hackney E09000012 102.355 0.0 POLYGON ((532942.69738 -182547.89554, 532938.50065 -182560.80185, 532968.49618 -182570.49907, 533030.40010 -182589.10375, 533033.09623 -182616.59588, 533034.39895 -182629.40221, 533033.89601 -182758.79517, 533032.79941 -182788.39670, 533031.99964 -182826.59577, 533031.29881 -182869.10360, 533033.69812 -182877.70114, 533035.99850 -182888.59802, 533039.40371 -182920.99874, 533042.70173 -182947.60113, 533055.00336 -182961.49715, 533061.50047 -182969.10497, 533063.99872 -182973.60369, 533068.69841 -182994.69765, 533071.89749 -183013.50227, 533073.99998 -183036.29574, 533074.40399 -183054.20062, 533072.90339 -183085.20174, 533073.80210 -183093.49937, 533070.80090 -183130.69872, 533066.39804 -183178.80495, 533061.89623 -183225.90147, 533060.50282 -183259.90173, 533060.19775 -183329.40184, 533026.50019 -183325.80287, 532990.79907 -183322.10393, 532960.69635 -183321.50410, 532923.89864 -183320.50439, 532838.90031 -183317.69519, 532835.99805 -183357.40383, 532833.59874 -183382.29670, 532829.59988 -183384.19616, 532824.19937 
-183382.29670, 532809.80349 -183375.29870, 532736.39771 -183338.09935, 532718.10192 -183373.39925, 532688.60109 -183433.50204, 532625.60058 -183559.99583, 532590.30347 -183629.69588, 532627.70307 -183662.39652, 532660.09791 -183691.59816, 532691.50335 -183718.20055, 532725.29986 -183746.50245, 532745.20344 -183759.49873, 532767.39914 -183769.29592, 532786.00000 -183775.10426, 532799.80223 -183774.00457, 532801.50071 -183770.39561, 532815.69871 -183774.50443, 532869.70386 -183787.80063, 532922.90099 -183798.69751, 532939.10254 -183801.89659, 532951.99781 -183804.19593, 532964.00262 -183804.49585, 532992.59649 -183799.49728, 533054.09640 -183786.70094, 533142.49995 -183769.39589, 533255.49851 -183747.40219, 533293.40106 -183740.90405, 533369.70085 -183724.99860, 533386.70217 -183721.99946, 533449.39761 -183711.00261, 533476.69701 -183705.40421, 533475.40254 -183547.39944, 533473.20111 -183489.89590, 533470.49673 -183412.99791, 533467.60271 -183358.40354, 533467.29764 -183337.19961, 533466.70400 -183324.70319, 533465.59916 -183312.99654, 533471.89839 -183259.10196, 533475.30360 -183235.09883, 533478.79950 -183205.89719, 533479.69821 -183182.70383, 533479.20351 -183162.79953, 533478.40374 -183142.09546, 533477.00208 -183129.79898, 533474.70171 -183093.19945, 533472.79710 -183073.79501, 533471.99733 -183066.29715, 533471.09862 -183056.20004, 533470.80179 -183051.40142, 533463.19985 -182999.59625, 533460.00076 -182978.50228, 533452.29988 -182940.20325, 533447.09724 -182915.50032, 533440.30331 -182892.29696, 533436.09833 -182878.80082, 533429.60122 -182847.19987, 533432.89925 -182846.50007, 533443.99710 -182843.30099, 533460.39653 -182839.40210, 533470.49673 -182836.70287, 533487.30016 -182832.80399, 533501.69604 -182828.99508, 533524.30399 -182823.29671, 533540.59624 -182819.89768, 533558.70239 -182815.69889, 533567.40093 -182813.39954, 533575.10182 -182807.90112, 533580.60127 -182803.80229, 533560.10405 -182777.69976, 533539.30176 -182749.59781, 533502.99876 
-182707.19994, 533478.29655 -182683.59670, 533543.80356 -182668.60099, 533541.70107 -182637.99975, 533530.10027 -182597.50134, 533523.80105 -182583.69530, 533523.80105 -182570.99893, 533523.99893 -182557.90268, 533525.40059 -182539.69789, 533528.09672 -182510.69619, 533532.10382 -182466.29890, 533543.30062 -182350.40208, 533579.80150 -182355.20070, 533590.80041 -182303.69545, 533595.59904 -182273.90397, 533595.30222 -182262.49724, 533593.19973 -182228.29703, 533590.50359 -182212.80146, 533589.10193 -182204.90372, 533587.60133 -182196.69607, 533578.10302 -182158.19709, 533574.09592 -182138.80265, 533568.80259 -182118.89834, 533567.30199 -182113.39992, 533549.20408 -182097.40450, 533530.29815 -182119.09829, 533501.90216 -182107.50161, 533447.50125 -182086.89750, 533425.99813 -182076.50048, 533411.89908 -182041.60047, 533410.70354 -182037.90153, 533320.19751 -182053.39709, 533250.89777 -182075.90065, 533230.49949 -182082.49876, 533215.79854 -182024.09548, 533198.59935 -181977.99868, 533184.59923 -181948.19721, 533182.39780 -181945.39801, 533167.49898 -181925.30376, 533146.80387 -181897.90160, 533119.19941 -181869.79965, 533078.89755 -181840.49803, 532946.10259 -181894.90246, 532955.69985 -181928.00299, 532965.80005 -181968.70134, 532978.10168 -182037.30170, 532989.19953 -182089.29682, 532993.70133 -182136.00345, 532998.20314 -182237.50439, 532999.20079 -182259.79801, 533004.70024 -182293.29842, 532983.60113 -182292.39868, 532941.09784 -182294.29814, 532941.60079 -182350.20213, 532918.30025 -182358.99961, 532920.40274 -182368.09701, 532929.29916 -182396.79880, 532944.89882 -182436.89732, 532963.69755 -182460.80047, 532951.39592 -182511.79588, 532942.69738 -182547.89554)) </code></pre> <p>I can write the code in Python's Altair to plot the data</p> <pre><code>london_wards_map = alt.Chart(london_wards_gpd).mark_geoshape( fill=None, # No fill stroke='darkgray', # Black stroke strokeWidth=1 # Stroke width ).encode( tooltip='NAME:N' # Replace 'NAME' with the actual name of 
the column that contains the ward names ).properties( width=800, height=600 ).project( type='identity') postcode_meals.crs = 4326 points = alt.Chart(postcode_meals).mark_circle(color='#008751').encode( longitude='long:Q', latitude='lat:Q', size=alt.Size('count:Q', scale=alt.Scale(domain=[postcode_meals['count'].min(), postcode_meals['count'].max()], range=[10, 1000])), # Adjust the range as needed tooltip=['Ward', 'Postcode', 'count', 'long', 'lat'] ).properties(title='Map of vouchers by postcode') text = alt.Chart(postcode_meals).mark_text(dy=-5).encode( longitude='long:Q', latitude='lat:Q', text='Postcode' ) shape_data = {'NAME': {555: 'Hoxton East &amp; Shoreditch', 556: 'Haggerston', 557: 'De Beauvoir', 558: 'London Fields', 559: 'Hackney Wick'}, 'GSS_CODE': {555: 'E05009377', 556: 'E05009375', 557: 'E05009371', 558: 'E05009381', 559: 'E05009374'}, 'DISTRICT': {555: 'Hackney', 556: 'Hackney', 557: 'Hackney', 558: 'Hackney', 559: 'Hackney'}, 'LAGSSCODE': {555: 'E09000012', 556: 'E09000012', 557: 'E09000012', 558: 'E09000012', 559: 'E09000012'}, 'HECTARES': {555: 102.355, 556: 86.724, 557: 59.347, 558: 101.739, 559: 163.387}, 'NONLD_AREA': {555: 0.0, 556: 0.0, 557: 0.0, 558: 0.0, 559: 0.0}, 'geometry': {555: &lt;POLYGON ((532942.697 -182547.896, 532938.501 -182560.802, 532968.496 -18257...&gt;, 556: &lt;POLYGON ((534479.503 -183624.697, 534466.698 -183600.304, 534458.098 -18357...&gt;, 557: &lt;POLYGON ((533479.5 -183988.203, 533451.096 -183988.003, 533450.7 -183978.79...&gt;, 558: &lt;POLYGON ((533479.5 -183988.203, 533479.5 -184005.498, 533479.698 -184015.39...&gt;, 559: &lt;POLYGON ((537572.901 -185494.702, 537509.497 -185492.903, 537446.1 -185494....&gt;}} points = {'Postcode': {0: 'N16 5RF', 1: 'N16 5RF', 2: 'N16 5RF', 3: 'N16 5RF'}, 'Ward': {0: 'Springfield', 1: 'Springfield', 2: 'Stamford Hill West', 3: 'Stamford Hill West'}, 'lat': {0: 51.572046, 1: 51.572046, 2: 51.572046, 3: 51.572046}, 'long': {0: -0.075572, 1: -0.075572, 2: -0.075572, 3: -0.075572}, 
'geometry': {0: &lt;POINT (-0.076 51.572)&gt;, 1: &lt;POINT (-0.076 51.572)&gt;, 2: &lt;POINT (-0.076 51.572)&gt;, 3: &lt;POINT (-0.076 51.572)&gt;}} </code></pre> <p>But when I tried that, I get this:</p> <p><a href="https://i.sstatic.net/ILe5d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ILe5d.png" alt="enter image description here" /></a></p> <p>The CRS setting is right, but for some reason the inner coords of the boundaries are several orders of magnitude beyond typical latitudes and longitudes.</p>
<python><altair>
2024-03-17 18:52:31
1
7,882
elksie5000
78,176,515
3,231,778
Azure Machine Learning dataset creation hangs forever
<p>I'm trying to create a <code>Dataset</code> from a datastore using Azure ML, however, the execution hangs forever and never finishes.</p> <p>This is the code I'm running which I've adapted from the Msft documentation:</p> <pre class="lang-py prettyprint-override"><code>import azureml.core from azureml.core import Workspace, Datastore, Dataset ws = Workspace.from_config() datastore = Datastore.get(ws, datastore_name='blobs') data_path=[(datastore,&quot;contacts.csv&quot;)] Dataset.File.from_files(path=data_path) # &lt;-- This method never finishes </code></pre> <p>Here we can see that the command never completes:</p> <p><a href="https://i.sstatic.net/DjFLg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DjFLg.png" alt="enter image description here" /></a></p> <p>There is only 1 test file <code>contacts.csv</code> in the storage. The storage is a Blob container, but I've tested with a data lake (DSL) container and got the same issue. It looks like a similar problem shared in <a href="https://stackoverflow.com/q/68556675/3231778">this other question</a>.</p> <p>I must add that <strong>outbound rules are configured to use private endpoints</strong>.</p> <p>As part of my troubleshooting steps, I've confirmed that network connectivity to the storage looks OK - not only by testing via SSH inside the Azure ML instance and it resolves to a private IP, but also using other SDKs such as with the <code>Datastore.download()</code> method.</p> <p>Here I show how using a <code>download</code> approach I can reach the file from the same datastore. This tells me that network and authentication are properly configured, and something is wrong with my <code>Dataset</code> code? 
Same infrastructure, just changed the code a bit.</p> <pre class="lang-py prettyprint-override"><code>import os import azureml.core from azureml.core import Workspace, Datastore, Dataset ws = Workspace.from_config() datastore = Datastore.get(ws, datastore_name='blobs') datastore.download(target_path=&quot;./output&quot;, prefix=&quot;contacts.csv&quot;, overwrite=False) arr = os.listdir('./output') print(arr) file = open(&quot;./output/contacts.csv&quot;, &quot;r&quot;).read() print(file) </code></pre> <p><a href="https://i.sstatic.net/NePkf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NePkf.png" alt="enter image description here" /></a></p>
<python><azure><azure-machine-learning-service><azureml-python-sdk><azuremlsdk>
2024-03-17 18:52:19
1
15,302
Evandro Pomatti
78,176,271
11,628,437
How does `pandas.concat` work when the input is a dictionary?
<p>I am struggling to understand how <code>pd.concat</code> works when the input is a dictionary.</p> <p>Let's say we have the following pandas dataframe -</p> <pre><code># Import pandas library import pandas as pd # initialize list of lists data = [['tom', 10], ['nick', 15], ['juli', 14]] # Create the pandas DataFrame df = pd.DataFrame(data, columns=['Name', 'Age']) </code></pre> <p>Then, we do the following concatenation operation -</p> <pre><code>z = pd.concat({&quot;z&quot;:df}, axis = 1) print(z) </code></pre> <p>The output comes out to be -</p> <pre><code> z Name Age 0 tom 10 1 nick 15 2 juli 14 </code></pre> <p>It seems like the key <code>z</code> was stacked on top of the dataframe <code>df</code>. But this doesn't make sense as the axis specified was <code>1</code> and therefore, the stacking (if that's what occurred) should've been across columns.</p>
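A quick way to see what happened (a minimal sketch): with `axis=1` the dict key does go across columns — it becomes the outer level of a column `MultiIndex`, which merely *renders* as a row stacked above the column names. No row of data is added.

```python
import pandas as pd

df = pd.DataFrame([['tom', 10], ['nick', 15], ['juli', 14]],
                  columns=['Name', 'Age'])

z = pd.concat({"z": df}, axis=1)

# The key "z" is the outer level of a column MultiIndex, not extra data.
print(z.columns.tolist())   # [('z', 'Name'), ('z', 'Age')]
print(z.shape)              # (3, 2): same rows, same values as df
```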
<python><pandas>
2024-03-17 17:37:18
1
1,851
desert_ranger
78,176,092
17,800,932
Wrapping and integrating an existing Python `socket`-based class with `asyncio`
<p>I have a use-case where there are a lot of TCP/IP clients being bundled up into a single Python process. The current desire is to use <code>asyncio</code> to provide concurrency for the program. For all <em>new</em> TCP/IP clients, <a href="https://docs.python.org/3/library/asyncio-stream.html" rel="nofollow noreferrer"><code>asyncio</code> streams</a> will be used, and for all new HTTP clients, <a href="https://docs.aiohttp.org/en/stable/" rel="nofollow noreferrer"><code>aiohttp</code></a> will be used.</p> <p>However, there are several <em>existing</em> clients that are written using Python's <a href="https://docs.python.org/3/library/socket.html" rel="nofollow noreferrer"><code>socket</code></a> module. My question is: how should you &quot;wrap&quot; the existing <code>socket</code>-based classes and methods with <code>async</code>?</p> <hr /> <p>Example existing TCP/IP client:</p> <pre class="lang-py prettyprint-override"><code>import socket class ExistingClient: def __init__(self, host: str, port: int) -&gt; None: self.__host = host self.__port = port self.__socket: socket.socket | None = None def initialize(self) -&gt; None: self.__socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.__socket.connect((self.__host, self.__port)) def get_status(self) -&gt; int: self.__socket.send(&quot;2\n&quot;.encode()) data: str = str(self.__socket.recv(1024).decode()).strip() return int(data) def close(self) -&gt; None: self.__socket.close() </code></pre> <h4>Example TCP/IP server</h4> <p>You can run this server script to serve as an example for the below client script. 
Save in <code>server.py</code>.</p> <pre class="lang-py prettyprint-override"><code>import asyncio import functools class State: def __init__(self) -&gt; None: self.__count: int = 0 def add_to_count(self, count: int) -&gt; None: self.__count = self.__count + count @property def count(self) -&gt; int: return self.__count async def handle_echo( reader: asyncio.StreamReader, writer: asyncio.StreamWriter, state: State ): # Ignore the use of the infinite while loop for this example. # Controlling the loop would be handled in a more sophisticated way. while True: data = await reader.readline() message: int = int(data.decode()) state.add_to_count(message) addr = writer.get_extra_info(&quot;peername&quot;) writer.write(f&quot;{state.count}\n&quot;.encode()) await writer.drain() writer.close() await writer.wait_closed() async def server(port: int): state = State() partial = functools.partial(handle_echo, state=state) server = await asyncio.start_server(partial, &quot;127.0.0.1&quot;, port) address = &quot;, &quot;.join(str(sock.getsockname()) for sock in server.sockets) print(f&quot;Serving on {address}&quot;) async with server: await server.serve_forever() async def main(): await asyncio.gather( server(8888), server(8889), ) asyncio.run(main()) </code></pre> <h4>Possible solution 1</h4> <p>To me, one solution seems to be that the existing class(es) could be editing with <code>async</code> methods that simply defer down to the blocking I/O <code>socket</code> calls. 
For example:</p> <pre class="lang-py prettyprint-override"><code>import socket class ExistingClient: def __init__(self, host: str, port: int) -&gt; None: self.__host = host self.__port = port self.__socket: socket.socket | None = None def initialize(self) -&gt; None: self.__socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.__socket.connect((self.__host, self.__port)) async def async_initialize(self) -&gt; None: self.initialize() def get_status(self) -&gt; int: self.__socket.send(&quot;2\n&quot;.encode()) data: str = str(self.__socket.recv(1024).decode()).strip() return int(data) async def async_get_status(self) -&gt; int: return self.get_status() def close(self) -&gt; None: self.__socket.close() async def async_close(self) -&gt; None: self.close() </code></pre> <p>Does this work in the sense of not blocking the <code>asyncio</code> event loop such that it behaves very similarly to <code>asyncio</code> streams calls? As an example of how I envision something like this getting integrated with normal <code>asyncio</code> code (saved in the same file as the directly above modified <code>ExistingClient</code> with the <code>async</code> methods in say <code>client.py</code>):</p> <pre class="lang-py prettyprint-override"><code>import asyncio async def streams_tcp_client(): reader, writer = await asyncio.open_connection(&quot;127.0.0.1&quot;, 8888) writer.write(&quot;1\n&quot;.encode()) await writer.drain() data: bytes = await reader.readline() print(f&quot;asyncio streams data: {data.decode().strip()}&quot;) writer.close() await writer.wait_closed() async def existing_tcp_client1(): existing_client = ExistingClient(&quot;127.0.0.1&quot;, 8889) await existing_client.async_initialize() data: int = await existing_client.async_get_status() print(f&quot;Existing client data: {data}&quot;) await existing_client.async_close() async def main(): await asyncio.gather( streams_tcp_client(), existing_tcp_client1(), ) </code></pre> <p>Does this work as one might expect 
it to such that the <code>ExistingClient</code> <code>async</code> calls that contain blocking I/O <em>do not block</em> the <code>asyncio</code> event loop?</p> <p>I have run this code, and it runs and prints out the expected data. But it is unclear how to test if the event loop is running as expected or desired.</p> <hr /> <h4>Possible solution 2</h4> <p>I have seen some mention of <a href="https://docs.python.org/3/library/asyncio-task.html#asyncio.to_thread" rel="nofollow noreferrer"><code>asyncio.to_thread</code></a>. In the documentation, it states:</p> <blockquote> <p>This coroutine function is primarily intended to be used for executing IO-bound functions/methods that would otherwise block the event loop if they were run in the main thread.</p> </blockquote> <p>However, this does not really explain anything. And it doesn't explain why simply defining</p> <pre class="lang-py prettyprint-override"><code>async def async_blocking_io(): blocking_io() </code></pre> <p>is not enough to not block the event loop. Because the actual blocking I/O, such as a TCP/IP write and read operation, shouldn't block the event loop's thread, correct?</p> <p>Would the way to use this be the following:</p> <pre class="lang-py prettyprint-override"><code>async def existing_tcp_client2(): existing_client = ExistingClient(&quot;127.0.0.1&quot;, 8889) await asyncio.to_thread(existing_client.initialize) data: int = await asyncio.to_thread(existing_client.get_status) print(f&quot;Existing client data with to_thread: {data}&quot;) await asyncio.to_thread(existing_client.close) </code></pre> <p>This runs if I modify <code>main</code> to be:</p> <pre class="lang-py prettyprint-override"><code>async def main(): await asyncio.gather( streams_tcp_client(), existing_tcp_client2(), ) </code></pre> <hr /> <h3>Final thoughts</h3> <p>What is the difference between making <code>async def async_&lt;existing_method&gt;</code> methods vs the <code>asyncio.to_thread</code> method?
The <code>asyncio.to_thread</code> method is concerning, because each call would be ran in a new thread? That could be an issue for thread-unsafe classes and also creates overhead by constantly spawning new threads.</p> <p>What are the other solutions to this problem?</p>
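One empirical check for the open question above — whether the loop is actually blocked — is to time two concurrent invocations of a blocking call (here a `time.sleep` standing in for a blocking socket operation; a rough sketch, timings approximate). A plain `async def` wrapper still executes the blocking call on the event-loop thread, so the two calls serialize; `asyncio.to_thread` hands each call to the loop's default `ThreadPoolExecutor`, so they overlap:

```python
import asyncio
import time

def blocking_io():
    # Stand-in for a blocking socket send/recv.
    time.sleep(0.2)

async def wrapped_only():
    blocking_io()  # runs on the event-loop thread: blocks it

async def main():
    t0 = time.perf_counter()
    await asyncio.gather(wrapped_only(), wrapped_only())
    naive = time.perf_counter() - t0       # ~0.4 s: the calls serialized

    t0 = time.perf_counter()
    await asyncio.gather(asyncio.to_thread(blocking_io),
                         asyncio.to_thread(blocking_io))
    threaded = time.perf_counter() - t0    # ~0.2 s: the calls overlapped
    return naive, threaded

naive, threaded = asyncio.run(main())
print(f"plain async def: {naive:.2f}s, to_thread: {threaded:.2f}s")
```

Note that `to_thread` dispatches to a pooled executor whose threads are reused rather than spawned per call, although the thread-safety concern for shared, non-thread-safe objects still applies.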
<python><tcp><python-asyncio><python-sockets>
2024-03-17 16:44:26
1
908
bmitc
78,175,724
1,537,003
ChromaDB: How to check if collection exists?
<p>I want to create a script that recreates a ChromaDB collection: delete the previous version and create a new one from scratch.</p> <pre><code>client.delete_collection(name=COLLECTION_NAME) collection = client.create_collection( name=COLLECTION_NAME, embedding_function=embedding_func, metadata={&quot;hnsw:space&quot;: &quot;cosine&quot;}, ) </code></pre> <p>However, if the collection does not exist, I receive an error:</p> <pre><code> File &quot;*/lib/python3.11/site-packages/chromadb/api/segment.py&quot;, line 347, in delete_collection raise ValueError(f&quot;Collection {name} does not exist.&quot;) </code></pre> <p>ValueError: Collection operations does not exist.</p> <p>Is there any command to check if a collection exists? I haven't found any in the documentation.</p>
<python><chromadb>
2024-03-17 14:58:58
4
2,059
Michal
78,175,707
3,577,054
How to find a particular exception inside the traceback using pytest
<p>Having a <code>test_raises</code> test like this, which checks that <code>ValueError</code> was raised using <a href="https://docs.pytest.org/en/4.6.x/reference.html#pytest-raises" rel="nofollow noreferrer"><code>pytest.raises</code></a>:</p> <pre class="lang-py prettyprint-override"><code>import pytest def foo(): raise RuntimeError(&quot;Foo&quot;) def bar(): try: foo() except RuntimeError: raise ValueError(&quot;Bar&quot;) def test_raises(): with pytest.raises(ValueError, match=&quot;Bar&quot;): bar() </code></pre> <p>How can I check within the test that <code>RuntimeError</code> was also raised at some point and that it was raised with the <code>&quot;Foo&quot;</code> message?</p> <p>It seems like pytest allows you to <code>with pytest.raises(ValueError) as exc_info:</code>, but not sure which is the best way to traverse the <a href="https://docs.pytest.org/en/4.6.x/reference.html#_pytest._code.ExceptionInfo" rel="nofollow noreferrer"><code>ExceptionInfo</code></a> in order to find the <code>RuntimeError</code>.</p>
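Because the `ValueError` is raised inside an `except RuntimeError:` block without `from`, Python attaches the original exception as implicit context, so it can be reached via `__context__` (for explicit `raise ... from ...` chains, check `__cause__` instead). A stdlib-only sketch of the traversal:

```python
def foo():
    raise RuntimeError("Foo")

def bar():
    try:
        foo()
    except RuntimeError:
        raise ValueError("Bar")

try:
    bar()
except ValueError as caught:
    chained = caught.__context__   # the exception active when ValueError was raised

print(type(chained).__name__, chained)   # RuntimeError Foo
```

With `pytest.raises(...) as exc_info`, the same attributes hang off `exc_info.value`, e.g. `assert isinstance(exc_info.value.__context__, RuntimeError)`.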
<python><pytest>
2024-03-17 14:53:16
1
15,051
Peque
78,175,561
14,364,775
How to process data in chunks (Pandas)?
<p>I have a function:</p> <pre><code>def extract_named_entities(note): &quot;&quot;&quot; Extract the named entities identified in a given note. &quot;&quot;&quot; doc = nlp(note) return [ent.text for ent in doc.ents] df['named_entities'] = df['NOTE'].apply(extract_named_entities) e_df = df.explode('named_entities').reset_index(drop=True) </code></pre> <p>Each row of <code>df['NOTE']</code> contains a <strong>2500</strong> word paragraph. I want to optimize this function, as it works quickly for 5-10 rows. But I have 6400 rows and it is taking a lot of time.</p> <p>Is it possible to apply chunks or any other optimization techniques, or is it possible to avoid the usage of lists?</p>
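A sketch of the chunking pattern (the batch extractor below is a hypothetical stand-in — with spaCy the real win is passing each chunk of texts to `nlp.pipe(chunk)`, which batches documents through the pipeline instead of invoking `nlp()` once per row):

```python
import numpy as np
import pandas as pd

def extract_entities_batch(notes):
    # Hypothetical stand-in for:
    #   [[ent.text for ent in doc.ents] for doc in nlp.pipe(notes)]
    return [note.split()[:2] for note in notes]

df = pd.DataFrame({"NOTE": [f"note {i} alpha beta" for i in range(6400)]})

results = []
for chunk in np.array_split(df["NOTE"].to_numpy(), 100):  # 100 chunks of 64 rows
    results.extend(extract_entities_batch(list(chunk)))

df["named_entities"] = results
print(len(results), results[0])   # 6400 ['note', '0']
```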
<python><python-3.x><pandas>
2024-03-17 14:12:08
1
1,018
Rikky Bhai
78,175,528
2,338,792
Getting ImportError when running EXE created by PyInstaller
<p>I'm using numpy in a python code, it runs successfully when running it as python. But when converting it to executable using pyInstaller, I get the following error:</p> <pre><code>Traceback (most recent call last): File &quot;numpy\core\__init__.py&quot;, line 24, in &lt;module&gt; File &quot;PyInstaller\loader\pyimod03_importers.py&quot;, line 540, in exec_module File &quot;numpy\core\multiarray.py&quot;, line 10, in &lt;module&gt; File &quot;PyInstaller\loader\pyimod03_importers.py&quot;, line 540, in exec_module File &quot;numpy\core\overrides.py&quot;, line 8, in &lt;module&gt; ImportError: DLL load failed while importing _multiarray_umath: The specified module could not be found. During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;numpy\__init__.py&quot;, line 159, in &lt;module&gt; File &quot;PyInstaller\loader\pyimod03_importers.py&quot;, line 540, in exec_module File &quot;numpy\__config__.py&quot;, line 4, in &lt;module&gt; File &quot;PyInstaller\loader\pyimod03_importers.py&quot;, line 540, in exec_module File &quot;numpy\core\__init__.py&quot;, line 50, in &lt;module&gt; ImportError: IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE! Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed. We have compiled some common reasons and troubleshooting tips at: https://numpy.org/devdocs/user/troubleshooting-importerror.html Please note and check the following: * The Python version is: Python3.9 from &quot;c:\test_numpy.exe&quot; * The NumPy version is: &quot;1.26.4&quot; and make sure that they are the versions you expect. Please carefully study the documentation linked above for further help. Original error was: DLL load failed while importing _multiarray_umath: The specified module could not be found. 
The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;C:\test_numpy.py&quot;, line 1, in &lt;module&gt; import numpy as np File &quot;PyInstaller\loader\pyimod03_importers.py&quot;, line 540, in exec_module File &quot;numpy\__init__.py&quot;, line 164, in &lt;module&gt; ImportError: Error importing numpy: you should not try to import numpy from its source directory; please exit the numpy source tree, and relaunch your python interpreter from there. [58772] Failed to execute script test_numpy </code></pre> <p>When doing it from a different PC, the EXE runs successfully.</p>
<python><numpy><pyinstaller>
2024-03-17 13:59:50
0
2,354
DavidS
78,175,445
17,889,328
change pydantic 'extra=' behaviour per call
<p>I want to define a pydantic BaseModel with extra='forbid', and then in specific places validate objects I know will have extras, while still disallowing them if not specified - is this possible?</p> <p>If not, what's the most concise or generally recommended approach? An intermediate class? A classmethod on the model? A TypeAdapter? A utility converter function?</p> <p>I'm interested in general, although my specific case was handling pydantic and SQLModel objects in and out of a database, with and without related models.</p> <p>I hoped for something like:</p> <pre class="lang-py prettyprint-override"><code>class NoExtras(BaseModel): model_config = ConfigDict( extra='forbid', ) name: str somedict = dict(name='aname', extra_field='astring') validated = NoExtras.model_validate(somedict, allow_extra=True) </code></pre>
<python><pydantic>
2024-03-17 13:34:03
1
704
prosody
78,175,314
386,861
Solving upside plot and projection problems in Geopandas and Altair
<p>I'm trying to plot a multi-layered map of London using some data from the ONS.</p> <pre><code>import geopandas as gpd from shapely import wkt # Convert the 'geometry' column to shapely geometry objects london_wards_shp['geometry'] = london_wards_shp['geometry'].apply(wkt.loads) london_wards_gpd = gpd.GeoDataFrame(london_wards_shp, geometry='geometry') london_wards_gpd = london_wards_gpd.set_crs(epsg=4326) london_wards_gpd.plot() </code></pre> <p>That came back with an error:</p> <pre><code>ValueError: aspect must be finite and positive </code></pre> <p>I found a solution to plotting:</p> <pre><code>london_wards_gpd.plot(aspect=1) </code></pre> <p><a href="https://i.sstatic.net/uiINP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uiINP.png" alt="enter image description here" /></a></p> <p>But then I wanted to use Altair to build up layers, one with the whole map of wards.</p> <pre><code>london_wards_map = alt.Chart(london_wards_gpd).mark_geoshape( fill=None, # No fill stroke='darkgray', # Black stroke strokeWidth=1 # Stroke width ).encode( tooltip='NAME:N' # Replace 'NAME' with the actual name of the column that contains the ward names ).properties( width=800, height=600 ).project( type='identity' ) hackney_wards = london_wards_gpd[london_wards_gpd['DISTRICT'] =='Hackney'] #hackney_wards = gpd.GeoDataFrame(hackney_wards, geometry='geometry') # Convert DataFrame to GeoDataFrame #hackney_wards = hackney_wards.set_crs(epsg=4326) hackney_layer = alt.Chart(hackney_wards).mark_geoshape( fill='lightgray', # No fill stroke='darkgray', # Black stroke strokeWidth=1 # Stroke width ).encode( tooltip='NAME:N' # Replace 'NAME' with the actual name of the column that contains the ward names ).properties( width=800, height=600 ).project( type='identity' ) </code></pre> <p>london_wards_map + hackney_layer</p> <p><a href="https://i.sstatic.net/6VfzV.png"
rel="nofollow noreferrer"><img src="https://i.sstatic.net/6VfzV.png" alt="enter image description here" /></a></p> <p>So why is it upside down?</p> <p>Not quite sure how to diagnose the projection issue here.</p>
<python><geopandas><altair>
2024-03-17 12:53:14
2
7,882
elksie5000
78,175,287
8,964,393
How to train a linear regression for each pandas dataframe row and generate the slope
<p>I have created the following pandas dataframe:</p> <pre><code>import numpy as np import pandas as pd ds = {'col1' : [11,22,33,24,15,6,7,68,79,10,161,12,113,147,115]} df = pd.DataFrame(data=ds) predFeature = [] for i in range(len(df)): predFeature.append(0) predFeature[i] = predFeature[i-1]+1 df['predFeature'] = predFeature arrayTarget = [] arrayPred = [] target = np.array(df['col1']) predFeature = np.array(df['predFeature']) for i in range(len(df)): arrayTarget.append(target[i-4:i]) arrayPred.append(predFeature[i-4:i]) df['arrayTarget'] = arrayTarget df['arrayPred'] = arrayPred </code></pre> <p>Which looks like this:</p> <pre><code> col1 predFeature arrayTarget arrayPred 0 11 1 [] [] 1 22 2 [] [] 2 33 3 [] [] 3 24 4 [] [] 4 15 5 [11, 22, 33, 24] [1, 2, 3, 4] 5 6 6 [22, 33, 24, 15] [2, 3, 4, 5] 6 7 7 [33, 24, 15, 6] [3, 4, 5, 6] 7 68 8 [24, 15, 6, 7] [4, 5, 6, 7] 8 79 9 [15, 6, 7, 68] [5, 6, 7, 8] 9 10 10 [6, 7, 68, 79] [6, 7, 8, 9] 10 161 11 [7, 68, 79, 10] [7, 8, 9, 10] 11 12 12 [68, 79, 10, 161] [8, 9, 10, 11] 12 113 13 [79, 10, 161, 12] [9, 10, 11, 12] 13 147 14 [10, 161, 12, 113] [10, 11, 12, 13] 14 115 15 [161, 12, 113, 147] [11, 12, 13, 14] </code></pre> <p>I need to generate a new column called <code>slope</code>, which corresponds to the coefficient of a linear regression trained for each row and for which:</p> <ul> <li>target = each array contained in <code>arrayTarget</code></li> <li>predictive features = each array contained in <code>arrayPred</code></li> </ul> <p>For example:</p> <ul> <li><p>the <code>slope</code> for the first 4 rows is <code>null</code>.</p> </li> <li><p>the slope for the 5th row is given by the coefficient of the linear regression which considers the following values:</p> <ul> <li>independent (or predictive) values: <code>[1, 2, 3, 4]</code></li> <li>dependent (or predicted) values: <code>[11, 22, 33, 24]</code> The result would be: <code>0.10204081632653061</code>.</li> </ul> </li> <li><p>the slope for the 6th row is given by the 
coefficient of the linear regression which considers the following values:</p> <ul> <li>independent (or predictive) values: <code>[2, 3, 4, 5]</code></li> <li>dependent (or predicted) values: <code>[22, 33, 24, 15]</code> The result would be: <code>-0.09090909090909091</code>.</li> </ul> </li> </ul> <p>And so on.</p> <p>Can anyone help me, please?</p>
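One way to get the slopes without a full model object is `np.polyfit` (degree 1) on each non-empty pair of arrays. Note that a plain least-squares fit of `arrayTarget` on `arrayPred` gives 5.0 for the 5th row; the quoted values (0.10204…, −0.0909…) are what the fit returns with the roles swapped, i.e. the predictor regressed on the target, so the sketch below computes both orientations:

```python
import numpy as np
import pandas as pd

# rebuild the frame from the question
ds = {'col1': [11, 22, 33, 24, 15, 6, 7, 68, 79, 10, 161, 12, 113, 147, 115]}
df = pd.DataFrame(data=ds)
df['predFeature'] = np.arange(1, len(df) + 1)

target = df['col1'].to_numpy()
pred = df['predFeature'].to_numpy()
df['arrayTarget'] = [target[i - 4:i] for i in range(len(df))]
df['arrayPred'] = [pred[i - 4:i] for i in range(len(df))]

def row_slope(x, y):
    """Degree-1 least-squares slope of y on x; NaN for the empty warm-up rows."""
    if len(x) < 2:
        return np.nan
    return np.polyfit(x, y, 1)[0]

# slope of the target regressed on the predictive feature
df['slope'] = [row_slope(p, t) for p, t in zip(df['arrayPred'], df['arrayTarget'])]
# the orientation matching the figures quoted in the question
df['slope_swapped'] = [row_slope(t, p) for p, t in zip(df['arrayPred'], df['arrayTarget'])]
```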
<python><pandas><dataframe><linear-regression><coefficients>
2024-03-17 12:45:19
1
1,762
Giampaolo Levorato
78,175,235
710,734
Get the sorted elements using pre-computed sort only
<p>Looking for the following algorithm.</p> <p>Given for example the following unordered list:</p> <pre><code>main_list = np.array([100,200,400,1000,800,900,700,600,500,300]) </code></pre> <p>and given the query list elements <code>q = np.array([2,5,7,9])</code>, which correspond to the list <code>np.array([400, 900, 600, 300])</code>, retrieve the sorted elements from <code>main_list</code> and the sorted positions directly:</p> <pre><code>np.array([300, 400, 600, 900]) np.array([9,2,7,5]) </code></pre> <p>The main condition: use only pre-computed sorted arrays, to avoid a sorting operation each time I get a new query.</p> <p>EDIT: Solution in numpy (thanks @cary-swoveland!)</p> <p>First pre-compute the argsort of the argsort. This is an offline step:</p> <pre><code>argsort = np.argsort(main_list) argsort_argsort = np.argsort(argsort) </code></pre> <p>Then at query time:</p> <pre><code>q = np.array([2,5,7,9]) new_array = np.full(main_list.shape[0], -1) new_array[argsort_argsort[q]] = q sorted_q = new_array[new_array != -1] sorted_values = main_list[sorted_q] print(sorted_q) print(sorted_values) </code></pre> <p>results:</p> <pre><code>[9 2 7 5] [300 400 600 900] </code></pre>
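An equivalent formulation avoids the full-length scratch array: the pre-computed `argsort_argsort` assigns every position its rank in sorted order, so ordering the k query ranks recovers the answer. This does sort k small integers per query (cheap for small queries, and it never touches the values themselves), versus the O(n) scatter in the solution above; a sketch:

```python
import numpy as np

main_list = np.array([100, 200, 400, 1000, 800, 900, 700, 600, 500, 300])

# offline pre-computation: rank of each position in the sorted order
argsort = np.argsort(main_list)
ranks = np.argsort(argsort)  # same as argsort_argsort above

def query_sorted(q):
    """Return the query indices and their values in sorted-value order."""
    order = np.argsort(ranks[q])  # orders k precomputed ranks, not the values
    sorted_q = q[order]
    return sorted_q, main_list[sorted_q]

sorted_q, sorted_values = query_sorted(np.array([2, 5, 7, 9]))
```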
<python><algorithm><numpy><sorting>
2024-03-17 12:29:15
2
3,124
Miguel
78,175,233
11,628,437
Can I create a nested column pandas dataframe using a nested dictionary?
<p>I am trying to think out of the box here, and my idea might be really bad. Feel free to point out better alternatives. I want to create a nested column pandas dataframe, both for visualization and analysis purposes. The output should look like this -</p> <pre><code> Marks Physics | Biology Theo|Prac | Theo|Prac 99 | 100 | 89 | 100 </code></pre> <p>My data is stored in the form of a nested dictionary -</p> <pre><code>nested_dict = { 'Marks': { 'Physics': { 'Theo': 99, 'Prac': 100 }, 'Biology': { 'Theo': 89, 'Prac': 100 } } } </code></pre> <p>I think the above table looks great for visualization but I am not sure if it'll make things easy for analysis. For analysis, I'd need to do operations on the subgroups, For eg: Physics percentage = (Theo + Prac)/200*100. Is a nested panda dataframe the best way to do the analysis?</p> <p>Is there a way I can do that? Doing <code>pd.DataFrame.from_dict(nested_dict)</code> doesn't seem to work. This is what I get -</p> <pre><code> Marks Biology {'Theo': 89, 'Prac': 100} Physics {'Theo': 99, 'Prac': 100} </code></pre>
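`pd.DataFrame.from_dict` stops one level short because it only unpacks the outer dict; flattening the nested dict into tuple keys gives pandas the MultiIndex columns directly, since tuple column keys become a column MultiIndex. Subgroup analysis then works by selecting a level; a sketch:

```python
import pandas as pd

nested_dict = {
    'Marks': {
        'Physics': {'Theo': 99, 'Prac': 100},
        'Biology': {'Theo': 89, 'Prac': 100},
    }
}

# flatten to {(level0, level1, level2): value}
flat = {
    (k0, k1, k2): v
    for k0, d0 in nested_dict.items()
    for k1, d1 in d0.items()
    for k2, v in d1.items()
}
df = pd.DataFrame(flat, index=[0])  # tuple keys become a 3-level column MultiIndex

# subgroup analysis: e.g. Physics percentage = (Theo + Prac) / 200 * 100
physics_pct = df['Marks']['Physics'].sum(axis=1) / 200 * 100
```

Printing `df` shows the nested header layout from the question, and level selection (`df['Marks']`, `df['Marks']['Physics']`) covers the per-subject operations.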
<python><pandas>
2024-03-17 12:29:00
1
1,851
desert_ranger
78,175,064
20,830,264
Error on getting the xref of an image with PyMuPDF using page.get_text("dict")["blocks"]
<p>With the following Python function I'm trying to extract text and images from a pdf document. Also, I want to put a label like <code>f&quot;&lt;&lt;&lt;image_{image_counter}&gt;&gt;&gt;&quot;</code> in the extracted text at the exact location of the corresponding image. This is the Python function I have:</p> <pre class="lang-py prettyprint-override"><code>def extract_text_and_save_images_not_working(pdf_path): doc = fitz.open(pdf_path) full_text = &quot;&quot; image_counter = 1 # Initialize the image counter before iterating through pages for page_num in range(len(doc)): # Iterate through each page of the pdf document page = doc.load_page(page_num) # Load the pdf page blocks = page.get_text(&quot;dict&quot;)[&quot;blocks&quot;] # The list of block dictionaries for block in blocks: # Iterate through each block if block['type'] == 0: # If the block is a text block for line in block[&quot;lines&quot;]: # Iterate through lines in the block for span in line[&quot;spans&quot;]: # Iterate through spans in the line full_text += span[&quot;text&quot;] + &quot; &quot; # Append text to full_text full_text += &quot;\n&quot; # Add newline after each block elif block['type'] == 1: # If the block is an image block image_label = f&quot;&lt;&lt;&lt;image_{image_counter}&gt;&gt;&gt;&quot; # Label to insert in the extracted text in place of the corresponding image full_text += f&quot;{image_label}\n&quot; # Insert image label at the image location img = block['image'] xref = img[0] print() print(xref) print() base_image = doc.extract_image(xref) # Attempt to extract image image_bytes = base_image[&quot;image&quot;] # Get the image bytes image_filename = f&quot;image_{image_counter}.png&quot; with open(image_filename, &quot;wb&quot;) as img_file: # Save the image img_file.write(image_bytes) image_counter += 1 # Increment counter for next image regardless of extraction success doc.close() # Close the pdf document return full_text </code></pre> <p>Basically the function extract the 
block dictionaries of each page using this function <code>blocks = page.get_text(&quot;dict&quot;)[&quot;blocks&quot;]</code> and for each block checks if it is a text block (<code>block['type'] == 0</code>) or an image block (<code>block['type'] == 1</code>). If the block is an image, then the function saves the image in the same directory of the running script with this name <code>f&quot;image_{image_counter}.png&quot;</code> and adds a label (<code>f&quot;&lt;&lt;&lt;image_{image_counter}&gt;&gt;&gt;&quot;</code>) in the extracted text at the line that identifies the position of the image in the pdf. Now, when I run this function, I get the following error:</p> <pre><code>Traceback (most recent call last): File &quot;c:\Users\xxxx\Desktop\X_Project\extract_images_from_pdf\extract_text_and_images_from_pdf.py&quot;, line 93, in &lt;module&gt; extracted_text = extract_text_and_save_images_not_working(pdf_path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;c:\Users\xxxx\Desktop\X_Project\extract_images_from_pdf\extract_text_and_images_from_pdf.py&quot;, line 76, in extract_text_and_save_images_not_working base_image = doc.extract_image(xref) # Attempt to extract image ^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\xxxx\Desktop\X_Project\extract_images_from_pdf\venv\Lib\site-packages\fitz\__init__.py&quot;, line 3894, in extract_image raise ValueError( MSG_BAD_XREF) ValueError: bad xref </code></pre> <p>Which makes sense this error because in the variable <code>xref</code> I should get an integer number representing the cross reference number of the image, but instead I get another integer number that doesn't represents the correct cross reference number. In other words, in my exercise for the specific document pdf I'm using, I expect <code>xref</code> = 52 but instead I get <code>xref</code> = 137.</p>
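The `bad xref` follows from a misreading of the block dict: in `page.get_text("dict")`, an image block's `"image"` entry holds the raw image bytes themselves, not a list of xrefs (xrefs come from `page.get_images()` instead), so `img[0]` is just the first byte of the stream. The bytes can be written out directly with no `extract_image` call. A sketch of the block walk, factored as a pure function so it can be exercised without a PDF:

```python
def render_blocks(blocks, save_image):
    """Walk page.get_text("dict")["blocks"], returning text with image markers.

    save_image(counter, data) receives the raw bytes found in block["image"].
    """
    parts = []
    counter = 1
    for block in blocks:
        if block["type"] == 0:  # text block
            for line in block["lines"]:
                parts.append(" ".join(span["text"] for span in line["spans"]))
        elif block["type"] == 1:  # image block: block["image"] is bytes, not an xref
            parts.append(f"<<<image_{counter}>>>")
            save_image(counter, block["image"])
            counter += 1
    return "\n".join(parts)
```

In the original function this would replace the inner loops, e.g. `full_text += render_blocks(page.get_text("dict")["blocks"], lambda n, data: open(f"image_{n}.png", "wb").write(data))`; note the block dict also reports the real file extension under `block["ext"]`, so hard-coding `.png` is an assumption.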
<python><pdf><pymupdf>
2024-03-17 11:36:39
1
315
Gregory
78,174,891
4,262,057
How do I generate embeddings for dicts (not text) for Vertex AI Search?
<p>I am trying to generate and store vector embeddings in my GCS bucket such that they can be accessed by Vector AI Search to find the most similar items.</p> <p>Following <a href="https://cloud.google.com/vertex-ai/docs/vector-search/overview" rel="nofollow noreferrer">this official tutorial</a>, they mention that the first step is to generate an embedding, and that this can be done by <a href="https://cloud.google.com/vertex-ai/generative-ai/docs/embeddings/get-text-embeddings" rel="nofollow noreferrer">generating text embeddings</a>.</p> <p>If we look at the referenced code, with Python one would do the following:</p> <pre><code>from vertexai.language_models import TextEmbeddingModel def text_embedding() -&gt; list: &quot;&quot;&quot;Text embedding with a Large Language Model.&quot;&quot;&quot; model = TextEmbeddingModel.from_pretrained(&quot;textembedding-gecko@001&quot;) embeddings = model.get_embeddings([&quot;What is life?&quot;]) for embedding in embeddings: vector = embedding.values print(f&quot;Length of Embedding Vector: {len(vector)}&quot;) return vector </code></pre> <p>Now there is <a href="https://cloud.google.com/spanner/docs/vector-search-embeddings" rel="nofollow noreferrer">another tutorial</a> where they basically generate and store vector embeddings, also using <code>textembedding-gecko</code> model from Spanner to Vector Search. Obviously, data stored in Spanner is not stored as a text and has a row/column or key/value dict structure.</p> <p>With the code above, which is pointing on generating text embeddings, this format is not supported. How do I therefore go from a dict to embedding?</p> <p>Other resources I looked at:</p> <ul> <li>Generating text embeddings for Stackoverflow data. Even here they don't explain how to go from column format to generate an embedding, as they only use the column &quot;title&quot;. 
See: <a href="https://github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/intro-textemb-vectorsearch.ipynb" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/intro-textemb-vectorsearch.ipynb</a></li> </ul> <p>Other comments:</p> <ul> <li>In case of product images as the initial tutorial shows, one would indeed expect the object to have multiple attributes, not just texts.</li> <li>In the future, I would also like to explore overweighting and underweighting some attributes if possible.</li> </ul>
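The embedding models only accept text, so the usual bridge is to serialize each dict/row into a single string (concatenating the attributes that matter) and embed that; the Spanner tutorial effectively does the same by building a text expression over columns before calling the model. A sketch of the serialization side, where the field names, separator, and the `product` example are all made up for illustration; repeating a field name is a crude way to overweight it:

```python
def record_to_text(record, fields=None, sep="; "):
    """Serialize a dict/row into one string suitable for a text-embedding model.

    fields: optional subset/ordering of keys; listing a field twice is a
    simple (if blunt) way to overweight that attribute.
    """
    if fields is None:
        fields = list(record)
    return sep.join(f"{name}: {record[name]}" for name in fields)

product = {"title": "Trail shoe", "brand": "Acme", "color": "red"}
text = record_to_text(product, fields=["title", "brand", "color"])
# `text` would then be passed to TextEmbeddingModel.get_embeddings([text])
# exactly as in the snippet above
```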
<python><google-cloud-platform><google-cloud-vertex-ai><vector-search><google-generativeai>
2024-03-17 10:41:22
1
7,054
WJA
78,174,526
1,753,273
Assertion Failure in SWI-Prolog When Using pyswip to Consult a Prolog File
<p>I'm working on a project where I use pyswip to integrate SWI-Prolog with Python. As someone new to Prolog programming, I've encountered an assertion failure when attempting to consult a Prolog file using pyswip. The error occurs specifically at the prolog.consult(&quot;/mnt/data/Dynamic_aci_assignment_PS2.pl&quot;) line in my Python script. My environment uses SWI-Prolog version 9.2.2 and pyswip version 0.2.10 on a Linux system.</p> <pre><code>from pyswip import Prolog prolog = Prolog() prolog.consult(&quot;test.pl&quot;) </code></pre> <p>test.pl file contains -</p> <pre><code>% test.pl likes(john, pizza). </code></pre> <p>I am getting the folllowing error -</p> <pre><code>/home/agaonsindhe/Downloads/yes/bin/python /home/agaonsindhe/Desktop/codebase/python/mtech/aci/assignment_2/sample_prolog_run.py [Thread 1 (main) at Sun Mar 17 13:48:59 2024] ./src/pl-fli.c:2674: PL_put_chars: Assertion failed: 0 C-stack trace labeled &quot;assert_fail&quot;: [0] PL_changed_cwd() at ??:? [0x75f3db6dcb3b] [1] _PL_atoms() at ??:? [0x75f3db6b989e] [2] PL_put_chars() at ??:? [0x75f3db6cbe19] [3] ffi_call_unix64() at :? [0x75f3dc217052] [4] ffi_call_int() at ffi64.c:? [0x75f3dc215925] [5] ffi_call() at ??:? 
[0x75f3dc21606e] [6] _call_function_pointer() at /usr/local/src/conda/python-3.11.5/Modules/_ctypes/callproc.c:944 [0x75f3dbfd52e4] [7] PyCFuncPtr_call() at /usr/local/src/conda/python-3.11.5/Modules/_ctypes/_ctypes.c:4201 [0x75f3dbfde4ce] [8] /home/agaonsindhe/Downloads/yes/bin/python(_PyObject_MakeTpCall+0x254) [0x502d54] [9] /home/agaonsindhe/Downloads/yes/bin/python(_PyEval_EvalFrameDefault+0x755) [0x50f025] [10] /home/agaonsindhe/Downloads/yes/bin/python(+0x19b8c7) [0x59b8c7] [11] /home/agaonsindhe/Downloads/yes/bin/python(+0x12bc6b) [0x52bc6b] [12] /home/agaonsindhe/Downloads/yes/bin/python(PyObject_Vectorcall+0x31) [0x51bff1] [13] /home/agaonsindhe/Downloads/yes/bin/python(_PyEval_EvalFrameDefault+0x755) [0x50f025] [14] /home/agaonsindhe/Downloads/yes/bin/python(+0x1c82ce) [0x5c82ce] [15] /home/agaonsindhe/Downloads/yes/bin/python(PyEval_EvalCode+0x9f) [0x5c79cf] [16] /home/agaonsindhe/Downloads/yes/bin/python(+0x1e8807) [0x5e8807] [17] /home/agaonsindhe/Downloads/yes/bin/python(+0x1e4e40) [0x5e4e40] [18] /home/agaonsindhe/Downloads/yes/bin/python(+0x1f9132) [0x5f9132] [19] /home/agaonsindhe/Downloads/yes/bin/python(_PyRun_SimpleFileObject+0x19f) [0x5f871f] [20] /home/agaonsindhe/Downloads/yes/bin/python(_PyRun_AnyFileObject+0x43) [0x5f8473] [21] /home/agaonsindhe/Downloads/yes/bin/python(Py_RunMain+0x2ee) [0x5f2fee] [22] /home/agaonsindhe/Downloads/yes/bin/python(Py_BytesMain+0x39) [0x5b6e19] [23] __libc_start_call_main() at ./csu/../sysdeps/x86/libc-start.c:74 [0x75f3dc028150] [24] call_init() at ./csu/../csu/libc-start.c:128 [0x75f3dc028209] [25] /home/agaonsindhe/Downloads/yes/bin/python(+0x1b6c6f) [0x5b6c6f] Prolog stack: (null) Process finished with exit code 134 (interrupted by signal 6:SIGABRT) </code></pre> <p>What I've tried:</p> <ol> <li>Ensuring SWI-Prolog and pyswip are properly installed and accessible in my PATH.</li> <li>Verifying the file path and permissions for the Prolog file.</li> <li>Running basic pyswip commands to ensure general 
functionality, which worked without issues.</li> </ol> <p>What could be causing this assertion failure in SWI-Prolog when consulting a Prolog file via pyswip? As I am new to Prolog programming, any insights on potential fixes, workarounds, or even general advice on integrating Prolog with Python would be greatly appreciated.</p>
<python><list><prolog><swi-prolog>
2024-03-17 08:25:20
1
594
agaonsindhe
78,174,454
5,102,848
How to implement continuous scroll using Selenium + Python
<p>Using Selenium in Python, I would like to load the entirety of a JS generated list from this webpage: <a href="https://partechpartners.com/companies" rel="nofollow noreferrer">https://partechpartners.com/companies</a>. There is a 'LOAD MORE' button at the bottom.</p> <p>The code I've written to press the button (it just does it once currently, I know I'll need to extend it to be able to do it multiple times with a <code>while</code>):</p> <pre><code>from selenium import webdriver #The Selenium webdriver from selenium.webdriver.common.action_chains import ActionChains from selenium.webdriver.common.keys import Keys from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.chrome.options import Options from selenium.common.exceptions import NoSuchElementException, StaleElementReferenceException, WebDriverException from time import sleep chrome_options = Options() chrome_options.add_argument(&quot;--headless&quot;) driver = webdriver.Chrome(options=chrome_options) url = 'https://partechpartners.com/companies' driver.get(url) sleep(2) load_more = driver.find_element('xpath','//*[ text() = &quot;LOAD MORE&quot;]') sleep(2) try: ActionChains(driver).move_to_element(load_more).click(load_more).perform() print(&quot;Element was clicked&quot;) except Exception as e: print(&quot;Element wasn't clicked&quot;) </code></pre> <p>The code returns <code>Element was clicked</code>. However, when I add the following code to the bottom of the above script I only get 30 items returned, which is the number if the button hadn't been clicked, and the relative Xpath is the same for the elements pre and post button click, so I know it's not that:</p> <pre><code>len(driver.find_elements('xpath','//h2')) </code></pre> <p>I've also tried commenting out <code>chrome_options.add_argument(&quot;--headless&quot;)</code> to see if it works not asa headless browser and to follow the clicks. 
An accept cookies button appears that I can't get rid of, but that doesn't seem to matter because it still returns elements when I run the script above. What could I do to ensure the webdriver browser is actually loading the page?</p>
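Two things commonly cause this: the cookie banner intercepting the click, and counting the elements before the new batch has rendered. A robust pattern is to wait, after each click, until the number of result elements actually increases, and stop when it doesn't. The loop below is a sketch with injected callables so the control flow is checkable without a browser; with Selenium, `wait_for_growth` would wrap something like `WebDriverWait(driver, 10).until(lambda d: len(d.find_elements('xpath', '//h2')) > previous_count)`:

```python
def load_all(count_items, click_load_more, wait_for_growth, max_clicks=100):
    """Click 'LOAD MORE' until the item count stops growing.

    count_items() -> int; click_load_more() -> bool (False once the button
    is gone); wait_for_growth(previous_count) -> bool (False on timeout).
    """
    count = count_items()
    for _ in range(max_clicks):
        if not click_load_more():
            break
        if not wait_for_growth(count):
            break  # nothing new rendered: banner in the way, or end of list
        count = count_items()
    return count
```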
<python><selenium-webdriver>
2024-03-17 07:57:27
1
410
tktk234
78,174,202
130,208
in python code, Is this a proper place to use dependency injection -- if so, how
<p>The code setup is as follows:</p> <ul> <li>Module ui_theme.py defines a theme and variant selector.</li> <li>variant_selector has an on_change event handler.</li> <li>Module cards_page.py imports ui_theme.py and has a handler on_variant_change.</li> </ul> <p>Now, what I want to achieve is that when the ui_theme.on_change event is invoked, it should somehow call the cards_page.on_variant_change handler.</p> <p>Constraint: I definitely do not want to create a function to generate variant_selector. That makes the code organization a bit messy. I also cannot set the event handler after initialization.</p> <p>My current solution is as follows:</p> <ul> <li>ui_theme.py</li> </ul> <pre><code>on_change_variant_callback = None def on_change_variant_click(dbref, msg, to_ms): print (&quot;button clicked:&quot;, msg.value) if on_change_variant_callback: on_change_variant_callback(dbref, msg, to_ms) pass </code></pre> <ul> <li>in cards.py</li> </ul> <pre><code>import ui_theme def on_variant_select(): pass ui_theme.on_change_variant_callback = on_variant_select </code></pre> <p>Seems to me that there should be a better way -- probably this is where dependency injection can help, although I don't understand that concept well enough.</p>
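The module-level callback works, but the cleaner version of the same idea is a small event/observer object owned by `ui_theme`: subscribers register themselves, the publisher never needs to know who they are, and several pages can listen at once. This is dependency inversion in miniature; a sketch (names mirror the question, `received` is just a stand-in for real handler work):

```python
class Event:
    """Minimal publish/subscribe holder, e.g. a module-level object in ui_theme.py."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, fn):
        self._subscribers.append(fn)
        return fn  # returning fn lets subscribe double as a decorator

    def fire(self, *args, **kwargs):
        for fn in list(self._subscribers):
            fn(*args, **kwargs)


# --- ui_theme.py side: owns and fires the event ---
on_change_variant = Event()

def on_change_variant_click(dbref, msg, to_ms):
    print("button clicked:", msg)
    on_change_variant.fire(dbref, msg, to_ms)


# --- cards_page.py side: registers at import time, no post-init wiring needed ---
received = []

@on_change_variant.subscribe
def on_variant_select(dbref, msg, to_ms):
    received.append(msg)
```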
<python><dependency-injection><callback><eventhandler>
2024-03-17 05:52:49
1
2,065
Kabira K
78,174,069
1,736,389
How to expand macros with python and libclang
<p>Say I have the following C code.</p> <pre class="lang-c prettyprint-override"><code>#define A 0x1800 #define MACRO_FUNC(in) (A | (in)) #define B 6 #define MY_MACRO MACRO_FUNC(B) </code></pre> <p>How would I use the <code>libclang</code> Python bindings to expand <code>MY_MACRO</code> to <code>(0x1800 | (6))</code>, or preferably to <code>0x1806</code>?</p>
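libclang will show each `#define` as a `MACRO_DEFINITION` cursor (when parsing with `TranslationUnit.PARSE_DETAILED_PROCESSING_RECORD`) and can tokenize its extent, but it does not perform macro expansion for you; the usual options are to run the real preprocessor (`clang -E` on a snippet) or to expand the tokens yourself. For macros as simple as these, a hand-rolled expander is enough. A minimal sketch, handling only simple object- and function-like macros (no `#`/`##`, no recursion guards, no line continuations):

```python
import re

def parse_defines(source):
    """Collect simple object- and function-like #define directives."""
    defines = {}
    pattern = re.compile(r'#define\s+(\w+)(?:\(([^)]*)\))?[ \t]+(.+)')
    for m in pattern.finditer(source):
        name, params, body = m.groups()
        params = [p.strip() for p in params.split(',')] if params is not None else None
        defines[name] = (params, body.strip())
    return defines

def expand(text, defines):
    """Substitute known macros until the text stops changing."""
    prev = None
    while text != prev:
        prev = text
        for name, (params, body) in defines.items():
            if params is None:
                # object-like: replace whole-word occurrences of the name
                text = re.sub(rf'\b{name}\b', lambda _m, b=body: b, text)
            else:
                # function-like: replace NAME(arg1, ...) with the substituted body
                def repl(m, params=params, body=body):
                    args = [a.strip() for a in m.group(1).split(',')]
                    out = body
                    for p, a in zip(params, args):
                        out = re.sub(rf'\b{p}\b', lambda _m, a=a: a, out)
                    return out
                text = re.sub(rf'\b{name}\(([^()]*)\)', repl, text)
    return text

src = """
#define A 0x1800
#define MACRO_FUNC(in) (A | (in))
#define B 6
#define MY_MACRO MACRO_FUNC(B)
"""
defines = parse_defines(src)
expanded = expand("MY_MACRO", defines)
```

Evaluating the expanded expression (here a pure integer expression) then yields the numeric value `0x1806`.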
<python><c><libclang>
2024-03-17 04:29:59
0
741
Sam P
78,174,062
1,606,657
How to use a generic class type in combination with a function return type?
<p>Why is this type hint not working?</p> <pre><code>from typing import Generic, TypeVar from dataclasses import dataclass V = TypeVar('V', int, str) @dataclass class Test(Generic[V]): a: V class Base(Generic[V]): def test(self) -&gt; Test[V]: t = '1' return Test[V](t) b = Base[str]() b.test() </code></pre> <p><code>mypy</code> shows:</p> <pre><code>test.py:16: error: Argument 1 to &quot;Test&quot; has incompatible type &quot;str&quot;; expected &quot;int&quot; [arg-type] </code></pre> <p>My expectation would be that creating the instance <code>Base</code> with the specified type <code>str</code> it is used for <code>V</code> which should be compatible in the <code>test()</code> return value as it converts to <code>Test[str](t)</code>.</p>
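mypy's complaint is sound: with a constrained TypeVar, the body of `Base` is checked once per constraint, so for the `V = int` case `Test[V]('1')` passes a `str` where an `int` is required. The method can only return `Test[V]` if it gets a value of type `V` from somewhere the instance controls, e.g. the constructor. A sketch that type-checks under the same constrained `V` (runtime assertions below; the mypy claim is about the annotations):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

V = TypeVar('V', int, str)

@dataclass
class Test(Generic[V]):
    a: V

class Base(Generic[V]):
    def __init__(self, initial: V) -> None:
        self._initial = initial

    def test(self) -> Test[V]:
        # self._initial already has type V, so Test(...) is
        # valid for every constraint of V
        return Test(self._initial)

b = Base('1')
result = b.test()
```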
<python><mypy><python-typing>
2024-03-17 04:26:32
1
6,352
wasp256
78,174,010
829,782
What is the most efficient method to get the last modification time of every file in a git revision?
<p>I want to programmatically list the name and last modification time of every file in a certain revision. Running <code>git log</code> for every file, <a href="https://serverfault.com/questions/401437/how-to-retrieve-the-last-modification-date-of-all-files-in-a-git-repository">as suggested here</a> is very slow. Is there a faster way to accomplish this?</p> <p>Running the script below on a non-trivial repo (<a href="https://github.com/libsdl-org/SDL" rel="nofollow noreferrer">SDL</a>) takes 59s on my machine.</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python import datetime import subprocess import time commit = &quot;HEAD&quot; start = time.time() file_names = subprocess.check_output([&quot;git&quot;, &quot;ls-tree&quot;, &quot;--name-only&quot;, &quot;-r&quot;, commit], text=True).strip().split(&quot;\n&quot;) print(f&quot;[{time.time() - start:.4f}] git ls-tree finished&quot;) file_times = list(datetime.datetime.fromisoformat(subprocess.check_output([&quot;git&quot;, &quot;log&quot;, &quot;-1&quot;, &quot;--pretty=format:%cI&quot;, commit, &quot;--&quot;, name], text=True).strip()) for name in file_names) print(f&quot;[{time.time() - start:.4f}] git info finished&quot;) </code></pre>
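A single history walk is much faster than one `git log` per file: `git log --pretty=format:%cI --name-only` prints each commit's timestamp followed by the paths it touched, newest first, so the first time a path appears is its last modification. The parsing half is pure Python and sketched below; a path that itself parses as an ISO timestamp would confuse this simple parser (a sentinel prefix in the pretty format would harden it):

```python
import datetime

def last_modified_from_log(log_text):
    """Map path -> last modification time, given the output of
    `git log --pretty=format:%cI --name-only` (newest commit first)."""
    times = {}
    current = None
    for line in log_text.splitlines():
        if not line:
            continue
        try:
            # commit header line: an ISO-8601 timestamp
            current = datetime.datetime.fromisoformat(line)
        except ValueError:
            # file path line: keep only the first (newest) occurrence
            if current is not None and line not in times:
                times[line] = current
    return times
```

Driving it: `log_text = subprocess.check_output(["git", "log", "--pretty=format:%cI", "--name-only", commit], text=True)`, then intersect the result with the `git ls-tree` names, since the log also mentions files deleted or renamed earlier in history.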
<python><git><time>
2024-03-17 03:55:29
3
386
maarten
78,173,977
12,139,738
How Can I Apply Custom Corner Radius to an Image Inside a CTkFrame Using customtkinter in Python?
<p>I'm currently working on a Python project using <code>customtkinter</code>, and I'm facing an issue with applying a custom corner radius to an image inside a <code>CTkFrame</code>. Here's a minimal example of my code:</p> <pre class="lang-py prettyprint-override"><code>import customtkinter from PIL import Image image_path = #image path class App(customtkinter.CTk): def __init__(self): super().__init__() frame = customtkinter.CTkFrame(self, width=200, height=111, corner_radius=5) frame.pack() image = customtkinter.CTkImage(Image.open(image_path), size=(200, 111)) image_label = customtkinter.CTkLabel(frame, image=image, text=&quot;&quot;) image_label.pack() if __name__ == &quot;__main__&quot;: app = App() app.mainloop() </code></pre> <p>In this code, I'm trying to display an image inside a <code>CTkFrame</code> with a custom corner radius. However, the image doesn't follow the corner radius of the parent frame and appears with sharp corners.</p> <p>I attempted to use a canvas to mask the image with rounded corners, but this approach results in low resolution corners and the corners don't always match the background.</p> <p>How can I properly apply the custom corner radius to the image inside the <code>CTkFrame</code>?</p> <p>Any insights or alternative approaches would be greatly appreciated. Thank you!</p>
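`CTkImage` has no corner-radius option, so the usual workaround is to round the image itself with Pillow before handing it over: draw the rounded-rectangle alpha mask at several times the target size and downscale, which removes the jagged low-resolution corners, and keep the alpha channel so the frame's own background shows through instead of a mismatched fill. A sketch (the supersampling factor is a tunable guess, and the solid red source image stands in for `Image.open(image_path)`):

```python
from PIL import Image, ImageDraw

def round_corners(img, radius, supersample=4):
    """Return an RGBA copy of img with anti-aliased rounded corners."""
    w, h = img.size
    big = (w * supersample, h * supersample)
    # draw the mask oversized, then shrink it: the resize smooths the arc
    mask = Image.new("L", big, 0)
    ImageDraw.Draw(mask).rounded_rectangle(
        (0, 0, big[0] - 1, big[1] - 1), radius=radius * supersample, fill=255)
    mask = mask.resize((w, h), Image.LANCZOS)
    out = img.convert("RGBA")
    out.putalpha(mask)
    return out

src = Image.new("RGB", (200, 111), "red")  # stand-in for the real image
rounded = round_corners(src, radius=8)
```

`customtkinter.CTkImage(rounded, size=(200, 111))` then shows the frame's rounded background through the transparent corners; matching `radius` to the frame's `corner_radius` is the part to tune by eye.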
<python><tkinter><customtkinter>
2024-03-17 03:29:15
2
391
DYD
78,173,925
525,865
BeautifulSoup-scraper runs well and robust some times - but otherwhile it fails :: probably some more exception-handling needed here?
<p>For some reason this clutch.co scraper is working properly if I run it on one site</p> <ol> <li><strong>a.</strong> <a href="https://clutch.co/us/web-developers" rel="nofollow noreferrer">https://clutch.co/us/web-developers</a> - the US category: it works awesome</li> <li><strong>b.</strong> <a href="https://clutch.co/il/web-developers" rel="nofollow noreferrer">https://clutch.co/il/web-developers</a> - the Israel category: it does not work</li> </ol> <p>So when I run this code it'll only get information from the first page and then close itself. I added in waits to allow the page to load but it hasn't helped. When watching the browser you can see it scrolls to the bottom of the page but then closes itself.</p> <p>Well, this runs for me (see below), but only for the US site, not for others, e.g. the Israel site: <strong>a.</strong> <a href="https://clutch.co/us/web-developers" rel="nofollow noreferrer">https://clutch.co/us/web-developers</a> - this runs great. <strong>b.</strong> <a href="https://clutch.co/il/web-developers" rel="nofollow noreferrer">https://clutch.co/il/web-developers</a> - it stops and gives a whole lot of errors back.</p> <p>It seems like there might sometimes be an issue locating the elements with the class name 'provider-info': I guess this could be due to changes in the structure of the clutch.co site, or otherwise some timing issues.
I think that there a handling of potential exceptions should set in; This one works for me:</p> <pre><code>import pandas as pd from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.common.exceptions import TimeoutException, NoSuchElementException import pandas as pd import time website = &quot;https://clutch.co/us/web-developers&quot; options = webdriver.ChromeOptions() options.add_experimental_option(&quot;detach&quot;, False) driver = webdriver.Chrome(options=options) driver.get(website) wait = WebDriverWait(driver, 10) # Function to handle page navigation def navigate_to_next_page(): try: next_page = driver.find_element(By.XPATH, '//li[@class=&quot;page-item next&quot;]/a[@class=&quot;page-link&quot;]') np = next_page.get_attribute('href') driver.get(np) time.sleep(6) return True except: return False company_names = [] taglines = [] locations = [] costs = [] ratings = [] current_page = 1 last_page = 250 while current_page &lt;= last_page: try: company_elements = wait.until(EC.presence_of_all_elements_located((By.CLASS_NAME, 'provider-info'))) except TimeoutException: print(&quot;Timeout Exception occurred while waiting for company elements.&quot;) break for company_element in company_elements: try: company_name = company_element.find_element(By.CLASS_NAME, &quot;company_info&quot;).text company_names.append(company_name) tagline = company_element.find_element(By.XPATH, './/p[@class=&quot;company_info__wrap tagline&quot;]').text taglines.append(tagline) rating = company_element.find_element(By.XPATH, './/span[@class=&quot;rating sg-rating__number&quot;]').text ratings.append(rating) location = company_element.find_element(By.XPATH, './/span[@class=&quot;locality&quot;]').text locations.append(location) cost = company_element.find_element(By.XPATH, './/div[@class=&quot;list-item block_tag 
custom_popover&quot;]').text costs.append(cost) except NoSuchElementException: print(&quot;Element not found while extracting company details.&quot;) continue current_page += 1 if not navigate_to_next_page(): break driver.close() data = {'Company_Name': company_names, 'Tagline': taglines, 'location': locations, 'Ticket_Price': costs, 'Rating': ratings} df = pd.DataFrame(data) df.to_csv('companies_test1.csv', index=False) print(df) </code></pre> <p>which gives back the following</p> <pre><code> import pandas as pd Timeout Exception occurred while waiting for company elements. Company_Name ... Rating 0 Hyperlink InfoSystem ... 4.9 1 Plego Technologies ... 5.0 2 Azuro Digital ... 4.9 3 Savas Labs ... 5.0 4 The Gnar Company ... 4.8 5 Sunrise Integration ... 5.0 6 Baytech Consulting ... 5.0 7 Inventive Works ... 4.9 8 Utility ... 4.8 9 Busy Human ... 5.0 10 Rootstrap ... 4.8 11 micro1 ... 4.9 12 ChopDawg.com ... 4.8 13 Emergent Software ... 4.9 14 Beehive Software Inc. ... 5.0 15 3 Media Web ... 4.9 16 Webstacks ... 5.0 17 Mutually Human ... 5.0 18 AnyforSoft ... 4.8 19 NL Softworks ... 5.0 20 OpenSource Technologies Inc. ... 4.8 21 Marcel Digital ... 4.8 22 Twin Sun ... 5.0 23 SPARK Business Works ... 4.9 24 Darwin ... 4.9 25 Perrill ... 5.0 26 Nimi ... 4.9 27 Scopic ... 4.9 28 Interactive Strategies ... 4.9 29 Unleashed Technologies ... 4.9 30 Oyova ... 4.9 31 BrandExtract ... 4.9 32 The Brick Factory ... 4.9 33 My Web Programmer ... 5.0 34 PureLogics LLC ... 4.9 35 Social Driver ... 4.9 36 Calibrate Software ... 4.9 37 VisualFizz ... 5.0 38 Camber Creative ... 4.9 39 Susco Solutions ... 4.9 40 Lunarbyte.io ... 5.0 41 thoughtbot ... 4.9 42 CR Software Solutions ... 5.0 43 Solwey Consulting ... 5.0 44 Ambaum ... 4.9 45 Pacific Codeline LLC ... 5.0 46 PERC ... 5.0 47 Beesoul LLC ... 4.9 48 Novalab Tech ... 5.0 49 Dragon Army ... 
5.0 [50 rows x 5 columns] </code></pre> <p>and the following data that is stored:</p> <p>Process finished with exit code 0</p> <pre><code>Company_Name,Tagline,Location,Ticket_Price,Rating,Website_Name,URL Hyperlink InfoSystem,&quot;#1 Mobile App, Web, &amp; Software Development Company&quot;,&quot;Jersey City, NJ&quot;,&quot;$25,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers Plego Technologies,Shaping the Future of Technology,&quot;Downers Grove, IL&quot;,&quot;$10,000+&quot;,5.0,Top Web Developers in the United States,https://clutch.co/us/web-developers Azuro Digital,&quot;Award-Winning Web Design, Development &amp; SEO&quot;,&quot;New York, NY&quot;,&quot;$10,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers App Makers USA,Top US Mobile &amp; Web App Development Agency,&quot;Los Angeles, CA&quot;,&quot;$10,000+&quot;,5.0,Top Web Developers in the United States,https://clutch.co/us/web-developers ChopDawg.com,Dreams Delivered Since 2009. Let's Make It App'n!®,&quot;Philadelphia, PA&quot;,&quot;$5,000+&quot;,4.8,Top Web Developers in the United States,https://clutch.co/us/web-developers Savas Labs,Designing and developing elegant web products.,&quot;Raleigh, NC&quot;,&quot;$25,000+&quot;,5.0,Top Web Developers in the United States,https://clutch.co/us/web-developers The Gnar Company,Solving Gnarly Software Problems. 
Faster.,&quot;Boston, MA&quot;,&quot;$25,000+&quot;,4.8,Top Web Developers in the United States,https://clutch.co/us/web-developers Sunrise Integration,Enterprise Solutions &amp; Ecommerce Apps,&quot;Los Angeles, CA&quot;,&quot;$10,000+&quot;,5.0,Top Web Developers in the United States,https://clutch.co/us/web-developers Baytech Consulting,TRANSLATING YOUR VISION INTO SOFTWARE,&quot;Irvine, CA&quot;,&quot;$25,000+&quot;,5.0,Top Web Developers in the United States,https://clutch.co/us/web-developers Inventive Works,Custom Software Product Development,&quot;Manor, TX&quot;,&quot;$25,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers Utility,AWARD-WINNING MOBILE DESIGN &amp; DEVELOPMENT AGENCY,&quot;New York, NY&quot;,&quot;$50,000+&quot;,4.8,Top Web Developers in the United States,https://clutch.co/us/web-developers Busy Human,Making life more user-friendly,&quot;Orem, UT&quot;,&quot;$1,000+&quot;,5.0,Top Web Developers in the United States,https://clutch.co/us/web-developers Rootstrap,Outcome-driven development. 
At any scale.,&quot;Beverly Hills, CA&quot;,&quot;$50,000+&quot;,4.8,Top Web Developers in the United States,https://clutch.co/us/web-developers micro1,&quot;World-class software engineers, powered by AI&quot;,&quot;Los Angeles, CA&quot;,&quot;$25,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers Emergent Software,Your Full-Stack Technology Partner,&quot;Saint Paul, MN&quot;,&quot;$25,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers 3 Media Web,Award-Winning Digital Experience Agency 🏆🏆🏆,&quot;Marlborough, MA&quot;,&quot;$50,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers Beehive Software Inc.,Software reinvented,&quot;Los Gatos, CA&quot;,&quot;$10,000+&quot;,5.0,Top Web Developers in the United States,https://clutch.co/us/web-developers Webstacks,&quot;The website is a product, not a project.&quot;,&quot;San Diego, CA&quot;,&quot;$10,000+&quot;,5.0,Top Web Developers in the United States,https://clutch.co/us/web-developers Mutually Human,Custom Software Development and Design,&quot;Ada, MI&quot;,&quot;$25,000+&quot;,5.0,Top Web Developers in the United States,https://clutch.co/us/web-developers AnyforSoft,Amplify digital excellence with AnyforSoft,&quot;Sarasota, FL&quot;,&quot;$50,000+&quot;,4.8,Top Web Developers in the United States,https://clutch.co/us/web-developers NL Softworks,Website Design &amp; Development Made to Convert,&quot;Boston, MA&quot;,&quot;$5,000+&quot;,5.0,Top Web Developers in the United States,https://clutch.co/us/web-developers OpenSource Technologies Inc.,Web &amp; Mobile APP | Digital Marketing | Cloud,&quot;Lansdale, PA&quot;,&quot;$25,000+&quot;,4.8,Top Web Developers in the United States,https://clutch.co/us/web-developers Twin Sun,Trustworthy partners that deliver results,&quot;Nashville, TN&quot;,&quot;$25,000+&quot;,5.0,Top Web Developers in the United States,https://clutch.co/us/web-developers Marcel 
Digital,Changing the Idea of What an Agency Is And Can Be,&quot;Chicago, IL&quot;,&quot;$5,000+&quot;,4.7,Top Web Developers in the United States,https://clutch.co/us/web-developers Darwin,We create incredible digital experiences,&quot;Reston, VA&quot;,&quot;$10,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers SPARK Business Works,Award-winning custom software dev &amp; web design,&quot;Kalamazoo, MI&quot;,&quot;$5,000+&quot;,4.8,Top Web Developers in the United States,https://clutch.co/us/web-developers Nimi,&quot;Bring your product ideas to life, to Grow Today.&quot;,&quot;Oakland, CA&quot;,&quot;$10,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers Scopic,&quot;Your Cross-continental, Digital Innovation Partner&quot;,&quot;Rutland, MA&quot;,&quot;$5,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers Interactive Strategies,&quot;Full Service Digital Design, Dev &amp; Marketing&quot;,&quot;Washington, DC&quot;,&quot;$100,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers Unleashed Technologies,Unleash Your Potential®,&quot;Ellicott City, MD&quot;,&quot;$10,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers Social Driver,Experience digital with us.,&quot;Washington, DC&quot;,&quot;$25,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers Oyova,More Business For Your Business Is Our Business.™,&quot;Jacksonville Beach, FL&quot;,&quot;$5,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers The Brick Factory,A DC-based digital agency.,&quot;Washington, DC&quot;,&quot;$10,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers My Web Programmer,→Top-Quality Custom Software &amp; Web Development Co.,&quot;Atlanta, GA&quot;,&quot;$1,000+&quot;,5.0,Top Web 
Developers in the United States,https://clutch.co/us/web-developers PureLogics LLC,No Magic. Just Logic.,&quot;New York, NY&quot;,&quot;$10,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers BrandExtract,&quot;We inspire people to create, transform, and grow.&quot;,&quot;Houston, TX&quot;,&quot;$25,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers Calibrate Software,We craft digital experiences that spark joy 🎉,&quot;Chicago, IL&quot;,&quot;$25,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers Camber Creative,Things worth building are worth building well.,&quot;Orlando, FL&quot;,&quot;$25,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers VisualFizz,Impactful Marketing for Industry-Leading Brands,&quot;Chicago, IL&quot;,&quot;$5,000+&quot;,5.0,Top Web Developers in the United States,https://clutch.co/us/web-developers Susco Solutions,Solve Together | Developing Intuitive Software,&quot;Harvey, LA&quot;,&quot;$50,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers Lunarbyte.io,Launching big ideas with startups &amp; enterprises,&quot;Seattle, WA&quot;,&quot;$25,000+&quot;,5.0,Top Web Developers in the United States,https://clutch.co/us/web-developers CR Software Solutions,Innovative Digital Solutions For Your Business,&quot;Canton, MI&quot;,&quot;$5,000+&quot;,5.0,Top Web Developers in the United States,https://clutch.co/us/web-developers Ambaum,Ambaum is your Shopify Plus Agency,&quot;Burien, WA&quot;,&quot;$5,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers Solwey Consulting,Custom software solutions to elevate your business,&quot;Austin, TX&quot;,&quot;$10,000+&quot;,5.0,Top Web Developers in the United States,https://clutch.co/us/web-developers Pacific Codeline LLC,&quot;Reliable, Experienced, 100% U.S. 
based.&quot;,&quot;San Clemente, CA&quot;,&quot;$1,000+&quot;,5.0,Top Web Developers in the United States,https://clutch.co/us/web-developers Novalab Tech,Your Trusted IT Partner,&quot;San Francisco, CA&quot;,&quot;$10,000+&quot;,5.0,Top Web Developers in the United States,https://clutch.co/us/web-developers Dragon Army,A purpose-driven digital engagement company.,&quot;Atlanta, GA&quot;,&quot;$25,000+&quot;,5.0,Top Web Developers in the United States,https://clutch.co/us/web-developers CodigoDelSur,Rockstar coders for rockstar companies,&quot;Montevideo, Uruguay&quot;,&quot;$75,000+&quot;,4.8,Top Web Developers in the United States,https://clutch.co/us/web-developers Brainhub,Top 1.36% engineering team - onboarding in 10 days,&quot;Gliwice, Poland&quot;,&quot;$50,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers Curotec,Your digital product engineering department,&quot;Philadelphia, PA&quot;,&quot;$10,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers TekRevol,Creative Web | App | Software Development Company,&quot;Houston, TX&quot;,&quot;$25,000+&quot;,4.8,Top Web Developers in the United States,https://clutch.co/us/web-developers XWP,Building a better web at enterprise scale,&quot;New York, NY&quot;,&quot;$50,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers Five Jars,⭐️⭐️⭐️⭐️⭐️ OUTSTANDING WEB DESIGN &amp; DEVELOPMENT,&quot;Brooklyn, NY&quot;,&quot;$10,000+&quot;,4.9,Top Web Developers in the United States,https://clutch.co/us/web-developers </code></pre> <p>hmm - but wait: it does not work here - if we choose another base-url <a href="https://clutch.co/il/web-developers" rel="nofollow noreferrer">https://clutch.co/il/web-developers</a></p> <pre><code>company details. Element not found while extracting company details. Element not found while extracting company details. Timeout Exception occurred while waiting for company elements. 
Traceback (most recent call last):
  File &quot;/home/ubuntu/.config/JetBrains/PyCharmCE2023.3/scratches/scratch.py&quot;, line 74, in &lt;module&gt;
    df = pd.DataFrame(data)
         ^^^^^^^^^^^^^^^^^^
  File &quot;/home/ubuntu/PycharmProjects/clutch_scraper_2/.venv/lib/python3.11/site-packages/pandas/core/frame.py&quot;, line 767, in __init__
    mgr = dict_to_mgr(data, index, columns, dtype=dtype, copy=copy, typ=manager)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/home/ubuntu/PycharmProjects/clutch_scraper_2/.venv/lib/python3.11/site-packages/pandas/core/internals/construction.py&quot;, line 503, in dict_to_mgr
    return arrays_to_mgr(arrays, columns, index, dtype=dtype, typ=typ, consolidate=copy)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/home/ubuntu/PycharmProjects/clutch_scraper_2/.venv/lib/python3.11/site-packages/pandas/core/internals/construction.py&quot;, line 114, in arrays_to_mgr
    index = _extract_index(arrays)
            ^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/home/ubuntu/PycharmProjects/clutch_scraper_2/.venv/lib/python3.11/site-packages/pandas/core/internals/construction.py&quot;, line 677, in _extract_index
    raise ValueError(&quot;All arrays must be of the same length&quot;)
ValueError: All arrays must be of the same length

Process finished with exit code 1
</code></pre> <p>Well, I think that this has to do with some exceptions:</p> <pre><code>import pandas as pd
Element not found while extracting company details.
Element not found while extracting company details.
Element not found while extracting company details.
Element not found while extracting company details.
Element not found while extracting company details.
Element not found while extracting company details.
Element not found while extracting company details.
Timeout Exception occurred while waiting for company elements.
</code></pre> <p>Well, I think there may be a couple of issues:</p> <p>First of all, there were some &quot;Element not found while extracting company details&quot; messages. This indicates that some elements were not found while extracting details for certain companies, which could be due to variations in the structure of the website or changes in the layout. I guess we can handle this; therefore we should include additional error handling or refine our XPath expressions.</p> <p>During several trials and attempts, a &quot;Timeout Exception occurred while waiting for company elements&quot; also appeared. This suggests that the script timed out while waiting for elements to load on the page.</p> <p>Last but not least, I also had <code>ValueError: All arrays must be of the same length</code>. This error occurs because the arrays used to construct the DataFrame are of different lengths, which typically happens when one or more data points are not collected properly.</p> <p>See below the code I used:</p> <pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException, NoSuchElementException
import pandas as pd
import time

website = &quot;https://clutch.co/il/it-services&quot;
options = webdriver.ChromeOptions()
options.add_experimental_option(&quot;detach&quot;, False)
driver = webdriver.Chrome(options=options)
driver.get(website)
wait = WebDriverWait(driver, 20)

# Function to handle page navigation
def navigate_to_next_page():
    try:
        next_page = driver.find_element(By.XPATH, '//li[@class=&quot;page-item next&quot;]/a[@class=&quot;page-link&quot;]')
        np = next_page.get_attribute('href')
        driver.get(np)
        time.sleep(6)
        return True
    except:
        return False

company_names = []
taglines = []
locations = []
costs = []
ratings = []
websites = []

current_page = 1
last_page = 250

while current_page &lt;= last_page:
    try:
        company_elements = wait.until(EC.presence_of_all_elements_located((By.CLASS_NAME, 'provider-info')))
    except TimeoutException:
        print(&quot;Timeout Exception occurred while waiting for company elements.&quot;)
        break

    for company_element in company_elements:
        try:
            company_name = company_element.find_element(By.CLASS_NAME, &quot;company_info&quot;).text
            company_names.append(company_name)

            tagline = company_element.find_element(By.XPATH, './/p[@class=&quot;company_info__wrap tagline&quot;]').text
            taglines.append(tagline)

            rating = company_element.find_element(By.XPATH, './/span[@class=&quot;rating sg-rating__number&quot;]').text
            ratings.append(rating)

            location = company_element.find_element(By.XPATH, './/span[@class=&quot;locality&quot;]').text
            locations.append(location)

            cost = company_element.find_element(By.XPATH, './/div[@class=&quot;list-item block_tag custom_popover&quot;]').text
            costs.append(cost)

            # Extracting website URL
            website_element = company_element.find_element(By.XPATH, './/a[@class=&quot;website-link&quot;]')
            website_url = website_element.get_attribute('href')
            websites.append(website_url)
        except NoSuchElementException:
            print(&quot;Element not found while extracting company details.&quot;)
            continue

    current_page += 1
    if not navigate_to_next_page():
        break

driver.close()

# Ensure all arrays have the same length
min_length = min(len(company_names), len(taglines), len(locations), len(costs), len(ratings), len(websites))
company_names = company_names[:min_length]
taglines = taglines[:min_length]
locations = locations[:min_length]
costs = costs[:min_length]
ratings = ratings[:min_length]
websites = websites[:min_length]

data = {'Company_Name': company_names,
        'Tagline': taglines,
        'Location': locations,
        'Ticket_Price': costs,
        'Rating': ratings,
        'Website': websites}

df = pd.DataFrame(data)

# Check if DataFrame is empty
if not df.empty:
    df.to_csv('companies_test10.csv', index=False)
    print(df)
else:
    print(&quot;DataFrame is empty. No data to save.&quot;)
</code></pre>
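<p>The length mismatch comes from appending to some lists but not others when a lookup fails mid-row. As an illustrative sketch (a hypothetical helper, not part of the original script), the per-column lists can be padded to a common length before building the DataFrame, so a partially scraped row survives as <code>None</code> instead of being silently truncated:</p>

```python
from itertools import zip_longest

def pad_columns(columns):
    """Pad each list in `columns` with None so all have equal length.

    Returns a dict with the same keys where every value list is as long
    as the longest input list; shorter lists are padded with None.
    """
    rows = list(zip_longest(*columns.values(), fillvalue=None))
    return {key: [row[i] for row in rows] for i, key in enumerate(columns)}

# Hypothetical scrape result where one lookup failed mid-row:
columns = {
    "Company_Name": ["A", "B", "C"],
    "Tagline": ["x", "y"],   # one element short
    "Rating": ["4.9"],       # two elements short
}
padded = pad_columns(columns)
# Every column now has length 3, so pd.DataFrame(padded) would succeed.
```

<p>A cleaner long-term fix is to build one dict per company inside the loop and append <code>None</code> in the <code>except</code> branch, so columns can never drift apart in the first place.</p>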
<python><pandas><dataframe><beautifulsoup><request>
2024-03-17 02:35:40
2
1,223
zero
78,173,697
11,793,491
Replace values with assign in pandas
<p>I have this data frame:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'Address': ['234 JALAN ST KULAR LUMPUR MALAYSIA',
                               '123 BUILDING STREET SINGAPORE',
                               '67 CANNING VALE, HONG KONG',
                               np.nan]})

df
                              Address
0  234 JALAN ST KULAR LUMPUR MALAYSIA
1       123 BUILDING STREET SINGAPORE
2          67 CANNING VALE, HONG KONG
3                                 NaN
</code></pre> <p>And I want to create a new column. In this case I first replace NaN with <code>--</code>, and the rest of the non-NaN rows get 'Yes'. So I tried this:</p> <pre class="lang-py prettyprint-override"><code>df_mod = (
    df
    .assign(
        verify = lambda x: '--' if x['Address'].isna() else 'Yes'
    )
)
</code></pre> <p>I want to do it in a chain using <code>assign</code> because there are more columns in the dataset. But I get this error: <code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</code></p> <p>This is the expected result:</p> <pre class="lang-py prettyprint-override"><code>df
                              Address Verify
0  234 JALAN ST KULAR LUMPUR MALAYSIA    Yes
1       123 BUILDING STREET SINGAPORE    Yes
2          67 CANNING VALE, HONG KONG    Yes
3                                 NaN     --
</code></pre> <p>Please, any help to do it using <code>assign</code> will be welcome.</p>
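<p>The error comes from applying a plain Python <code>if</code> to a whole Series. One vectorized way to keep the chained <code>assign</code> style is <code>numpy.where</code>, sketched below (assuming pandas and numpy are available):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Address': ['234 JALAN ST KULAR LUMPUR MALAYSIA',
                               '123 BUILDING STREET SINGAPORE',
                               '67 CANNING VALE, HONG KONG',
                               np.nan]})

# np.where evaluates the boolean Series element-wise, so it works inside a
# chained assign where `'--' if series.isna() else 'Yes'` does not.
df_mod = df.assign(verify=lambda x: np.where(x['Address'].isna(), '--', 'Yes'))
```

<p>An equivalent pandas-only spelling is <code>x['Address'].isna().map({True: '--', False: 'Yes'})</code>; both stay compatible with method chaining.</p>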
<python><pandas>
2024-03-17 00:04:00
2
2,304
Alexis
78,173,436
727,238
Inspecting Python operation: how to get all decorators literally?
<p>I'm inspecting code at runtime. I want to get all decorators a function/method is decorated with. For example, if it is <code>@classmethod</code> I want to get just the string <strong>&quot;@classmethod&quot;</strong>; if it is something like <code>@path(MyClass.myop)</code> I want to get the string <strong>&quot;@path(MyClass.myop)&quot;</strong>. I know I can try to analyze the source code, but that is tricky, and I'm sure all this information must be stored somewhere and should be retrievable. How can it be achieved?</p>
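<p>Decorator expressions are not stored on the resulting object at runtime — after decoration only the returned object remains — so analyzing the source really is the usual route. A minimal sketch with the standard library's <code>ast</code> module (the sample source string here is hypothetical; for a live object the same parse can be fed from <code>inspect.getsource</code>):</p>

```python
import ast

# Hypothetical source containing the decorated function to inspect.
SOURCE = """
@path(MyClass.myop)
@classmethod
def handler(cls):
    pass
"""

def decorator_strings(source):
    """Return each decorator in `source` as a literal '@...' string."""
    node = ast.parse(source).body[0]
    # ast.unparse (Python 3.9+) turns each decorator expression back into code,
    # top-to-bottom in the order they appear above the def.
    return ['@' + ast.unparse(dec) for dec in node.decorator_list]

decos = decorator_strings(SOURCE)
```

<p>For functions defined in importable files, <code>inspect.getsource(func)</code> (dedented with <code>textwrap.dedent</code>) supplies the source; it fails for objects defined in a REPL or implemented in C.</p>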
<python><decorator>
2024-03-16 22:04:48
1
2,071
ardabro
78,173,316
2,386,605
Scholarly package returns cropped output
<p>I try to fetch some papers via scholarly, which works nicely. However, when I run</p> <pre><code>from scholarly import scholarly from pprint import pprint search_query = scholarly.search_pubs(query='Perception of physical stability and center of mass of 3D objects', year_low=2010) pprint(next(search_query)) </code></pre> <p>I receive</p> <pre><code>{'author_id': ['2efgcS0AAAAJ', '3hdOFF0AAAAJ', 'jTnQTBoAAAAJ', ''], 'bib': {'abstract': 'in adulthood, we can perceive physical attributes in ' 'just static For 3D shape Vi, we have four variables: a ' 'shape type ti, and stable in a different scene; and ' 'which of the two objects is', 'author': ['J Wu', 'I Yildirim', 'JJ Lim', 'B Freeman'], 'pub_year': '2015', 'title': 'Galileo: Perceiving physical object properties by ' 'integrating a physics engine with deep learning', 'venue': 'Advances in neural …'}, 'citedby_url': '/scholar?cites=13801231000054551969&amp;as_sdt=5,33&amp;sciodt=0,33&amp;hl=en', 'container_type': 'Publication', 'eprint_url': 'https://proceedings.neurips.cc/paper/2015/file/d09bf41544a3365a46c9077ebb5e35c3-Paper.pdf', 'filled': False, 'gsrank': 1, 'num_citations': 398, 'pub_url': 'https://proceedings.neurips.cc/paper/2015/hash/d09bf41544a3365a46c9077ebb5e35c3-Abstract.html', 'source': &lt;PublicationSource.PUBLICATION_SEARCH_SNIPPET: 'PUBLICATION_SEARCH_SNIPPET'&gt;, 'url_add_sclib': '/citations?hl=en&amp;xsrf=&amp;continue=/scholar%3Fq%3DPerception%2Bof%2Bphysical%2Bstability%2Band%2Bcenter%2Bof%2Bmass%2Bof%2B3D%2Bobjects%26hl%3Den%26as_sdt%3D0,33%26as_ylo%3D2010&amp;citilm=1&amp;update_op=library_add&amp;info=oVVc9XjSh78J&amp;ei=zwj2Za2PCNGcy9YP4-WF0A8&amp;json=', 'url_related_articles': '/scholar?q=related:oVVc9XjSh78J:scholar.google.com/&amp;scioq=Perception+of+physical+stability+and+center+of+mass+of+3D+objects&amp;hl=en&amp;as_sdt=0,33&amp;as_ylo=2010', 'url_scholarbib': '/scholar?hl=en&amp;q=info:oVVc9XjSh78J:scholar.google.com/&amp;output=cite&amp;scirp=0&amp;hl=en'} </code></pre> <p>Hence, 
in <code>bib</code>, both the <code>abstract</code> and the <code>venue</code> are cropped.</p> <p>Do you know how to fix that?</p>
<python><python-3.x><google-scholar>
2024-03-16 21:07:47
0
879
tobias
78,173,209
547,231
How to save an `exr` from a pytorch tensor in Python?
<p>Previously, there was a function <code>torchvision.utils.save_float_image</code> with which it was possible to store an <code>.exr</code> file from a PyTorch tensor. This function is gone in the current release (0.17). Now, there is only the function <code>torchvision.utils.save_image</code> (<a href="https://pytorch.org/vision/stable/generated/torchvision.utils.save_image.html#torchvision.utils.save_image" rel="nofollow noreferrer">https://pytorch.org/vision/stable/generated/torchvision.utils.save_image.html#torchvision.utils.save_image</a>). But when I try to execute</p> <pre><code>with open(os.path.join('folder/', 'foo.exr'), &quot;wb&quot;) as fout:
    save_image(tensor, fout)
</code></pre> <p>I'm receiving the error &quot;unknown file extension *.exr&quot;. So, what can we do now?</p>
<python><python-3.x><pytorch>
2024-03-16 20:26:42
1
18,343
0xbadf00d
78,173,203
547,231
Increment a float pointer in Python
<p>I have a <code>float</code> pointer <code>p</code> from a C++ library which I want to increment in a Python module. When I try to write <code>p + distance</code>, I receive the error that the operator <code>+</code> is undefined for <code>LP_c_float</code>.</p> <p>A possible solution is to use <code>advance(p, distance)</code> instead, where</p> <pre><code>def advance(pointer, distance, type=ctypes.c_float):
    return ctypes.cast(ctypes.cast(pointer, ctypes.c_voidp).value + distance,
                       ctypes.POINTER(type))
</code></pre> <p>However, is the cast to <code>ctypes.c_voidp</code> really necessary? Why is addition not defined for <code>LP_c_float</code> (or why doesn't it have a similar <code>value</code> field)?</p>
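<p>For what it's worth, <code>LP_c_float</code> deliberately has no <code>+</code> (ctypes pointers are not meant for arithmetic), but it does support indexing, and <code>ctypes.byref</code> accepts a byte offset, which avoids the round-trip through a void pointer. A sketch against a locally created array standing in for the pointer from the C++ library:</p>

```python
import ctypes

# A small float array and a pointer to its first element, standing in for
# the pointer handed back by the C++ library.
arr = (ctypes.c_float * 4)(1.0, 2.0, 3.0, 4.0)
p = ctypes.cast(arr, ctypes.POINTER(ctypes.c_float))

# LP_c_float supports indexing, which is often all that's needed instead of
# advancing the pointer itself:
third = p[2]

def advance(pointer, distance, ctype=ctypes.c_float):
    """Return a new pointer `distance` elements past `pointer`.

    byref() takes a byte offset, so no cast through c_void_p is needed.
    """
    offset = distance * ctypes.sizeof(ctype)
    return ctypes.cast(ctypes.byref(pointer.contents, offset),
                       ctypes.POINTER(ctype))

q = advance(p, 2)
```

<p>Note that the element count must be scaled by <code>ctypes.sizeof</code>; a raw <code>+ distance</code> on an address would advance by bytes, not elements.</p>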
<python><python-3.x><ctypes>
2024-03-16 20:23:30
1
18,343
0xbadf00d
78,173,150
2,382,141
AttributeError: module 'keras.src.backend' has no attribute 'Variable' with Dropout layer
<p>I'm trying to re-use a neural network for sound classification, but Keras gives an error: AttributeError: module 'keras.src.backend' has no attribute 'Variable'. May it be a compatibility problem? I'm using Keras v3.0.5. This is my code:</p> <pre><code>import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.optimizers import Adam
# from keras.utils import np_utils
from sklearn import metrics
from tensorflow.keras import layers
import keras
from keras import backend

num_labels = yy.shape[1]
filter_size = 2

# Construct model
model = Sequential()
model.add(Dense(256, input_shape=(40,)))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_labels))
model.add(Activation('softmax'))
model.summary()
</code></pre> <p>According to the Keras documentation, if I use a dropout layer:</p> <pre><code>layers.Dropout(0.5, noise_shape=None, seed=None),
</code></pre> <p>it gives the same error. Can someone help me? Thanks.</p>
<python><tensorflow><keras><neural-network>
2024-03-16 20:00:18
1
348
Giuseppe Ricci
78,173,000
11,748,924
Numpythonic way to perform vector subtraction where the operands have different shapes (a,n) - (b,n)
<p>I have two matrix operands like these:</p> <pre><code>a = np.array([[1,2],
              [3,4],
              [5,6],
              [7,8]])

b = np.array([[9,10],
              [11,12],
              [13,14]])
</code></pre> <p>If we debug <code>a</code> and <code>b</code>, they will look like these:</p> <pre><code>[[1 2]
 [3 4]
 [5 6]
 [7 8]]
(4, 2) int32

[[ 9 10]
 [11 12]
 [13 14]]
(3, 2) int32
</code></pre> <p>I can achieve what I want this way, where <code>c</code> is the result:</p> <pre><code>c = []
for i in range(b.shape[0]):
    c.append(b[i] - a)
c = np.array(c)
</code></pre> <p>Now <code>c</code> looks like this:</p> <pre><code>[[[ 8  8]
  [ 6  6]
  [ 4  4]
  [ 2  2]]

 [[10 10]
  [ 8  8]
  [ 6  6]
  [ 4  4]]

 [[12 12]
  [10 10]
  [ 8  8]
  [ 6  6]]]
(3, 4, 2) int32
</code></pre> <p>As you can see, the subtraction still uses a <code>for</code> loop. Is there a numpythonic way to subtract without the loop, so that I can utilize numpy's optimization for better performance, since numpy is implemented in the <code>C language</code>?</p>
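<p>The loop can be replaced by broadcasting: inserting a length-1 axis aligns the (3, 2) and (4, 2) arrays into a (3, 4, 2) result, which is exactly the computation the loop performs, done in C inside numpy:</p>

```python
import numpy as np

a = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])   # shape (4, 2)
b = np.array([[9, 10], [11, 12], [13, 14]])      # shape (3, 2)

# (3, 1, 2) - (1, 4, 2) broadcasts to (3, 4, 2): c[i, j] == b[i] - a[j].
c = b[:, None, :] - a[None, :, :]
```

<p><code>b[:, None]</code> (trailing axes may be left implicit) is a common shorthand for the same insertion of a new axis.</p>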
<python><numpy><multidimensional-array><tensor>
2024-03-16 19:06:22
2
1,252
Muhammad Ikhwan Perwira
78,172,715
7,846,884
How to generate linear X-Y data for regression practice in Python
<p>Question: how can I get a linear relation between each feature/independent variable (X) and the target (Y)? Both X and Y must be real-valued.</p> <p>But my plot shows only 1/10 features linearly associated with my target.</p> <p>Please see the code below, which generates data with true coefficients to test linear regression with SGD. I found that the scatter of each feature vs. the target may not be linearly correlated.</p> <pre><code>import numpy as np

## generate data
np.random.seed(0)  # for reproducibility

def rand_X_y_LR(nsamples=None, nfeatures=None, plot_XY=False):
    # Define the number of samples and features
    num_samples = nsamples
    num_features = nfeatures

    # Generate a random design matrix X with values between 0 and 1
    X = np.random.rand(num_samples, num_features)
    print(f&quot;shape of random X; {X.shape}&quot;)

    ones_column = np.ones((len(X), 1))
    print(f&quot;shape ones_column, {ones_column.shape}&quot;)
    X_plusOnes = np.hstack([ones_column, X])
    print(f&quot;shape of X_plusOnes_column, {X_plusOnes.shape}&quot;)

    # Generate random coefficients for the features
    true_coefficients = np.random.normal(loc=0, scale=1, size=(num_features+1))
    print(f&quot;shape of true_coefficients; {true_coefficients.shape}&quot;)

    # Generate random noise for the target variable
    noise = np.random.normal(loc=0, scale=1)

    # Calculate the target variable y using a linear combination of X and coefficients
    #y = np.dot(X_plusOnes, true_coefficients) + noise  # X dot B
    y = X_plusOnes @ true_coefficients + noise  # X@B
    print(f&quot;y.shape; {y.shape}&quot;)

    if plot_XY == True:
        # plot each X column against target
        fig, axes = plt.subplots(nrows=round(num_features/2), ncols=2, figsize=(9, 7))
        for i, ax in enumerate(axes.flat):
            ax.scatter(Xdata_in[:, i], y_data_in)
            ax.set_xlabel(f&quot;Xdata_in_col {i}&quot;)
            ax.set_ylabel('Target')
            ax.set_title(f&quot;Xdata_in_col {i} vrs Target&quot;)
        plt.tight_layout()
        plt.show()

        #fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(9, 7))
        plt.figure(figsize=(4,5))
        plt.hist(y_data_in)
        plt.title('Target distribution')
        plt.show()

        plt.figure(figsize=(4,5))
        plt.hist(true_coefficients)
        plt.title('true_coefficients')
        plt.show()

    return X, y, true_coefficients

# Generating random dataset of size 1000x10 for X
Xdata_in, y_data_in, true_beta = rand_X_y_LR(nsamples=1000, nfeatures=10)
</code></pre>
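<p>Two details in the generator work against visibly linear scatter plots: <code>noise</code> is drawn as a single scalar (so the same offset is added to every sample, effectively just shifting the intercept), and each feature's marginal scatter against y is blurred by the variance contributed by the other nine features. A minimal rewrite (hypothetical names, one noise draw per sample, noise kept small relative to the coefficients) could look like:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def make_linear_data(n_samples=1000, n_features=10, noise_scale=0.1):
    """Generate y as an exact linear combination of X plus small per-sample noise.

    Each column of X is linearly related to y by construction; a per-feature
    scatter plot only *looks* linear when that feature's coefficient is large
    relative to the spread contributed by the other features and the noise.
    """
    X = rng.random((n_samples, n_features))
    beta = rng.normal(size=n_features + 1)                  # intercept + slopes
    noise = rng.normal(scale=noise_scale, size=n_samples)   # one draw per sample
    y = beta[0] + X @ beta[1:] + noise
    return X, y, beta

X, y, beta = make_linear_data()
```

<p>To see clean marginal scatter, plot one feature against <code>y</code> minus the other features' contributions, or generate a single-feature dataset.</p>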
<python><numpy><linear-regression>
2024-03-16 17:27:24
1
473
sahuno
78,172,548
1,916,588
Mock only one attribute of all instances of a class
<p>I have these 3 modules:</p> <pre class="lang-py prettyprint-override"><code># config.py
class Config:
    def __init__(self):
        self.some_attribute = &quot;some value&quot;  # &lt;-- I want to mock this attribute
        self.another_attribute = 123
</code></pre> <pre class="lang-py prettyprint-override"><code># some_module.py
from config import Config

def method_that_uses_config():
    print(Config().another_attribute)  # This should not be mocked
    return Config().some_attribute  # This should be mocked
</code></pre> <pre class="lang-py prettyprint-override"><code># test_some_module.py
from unittest.mock import patch
from some_module import method_that_uses_config

class TestConfig:
    @patch(&quot;some_module.Config&quot;)
    def test_method_that_uses_config(self, mock_config):
        mock_config.return_value.some_attribute = &quot;mocked value&quot;
        assert method_that_uses_config() == &quot;mocked value&quot;
</code></pre> <p>This works only in part. The Config class is now completely mocked, while I'd like to only mock one specific attribute and leave the remaining attributes untouched:</p> <pre class="lang-py prettyprint-override"><code>Config().some_attribute     # 'mocked value'
Config().another_attribute  # &lt;MagicMock name='Config().another_attribute' id='4329847393'&gt;
</code></pre> <p>I'd like <code>Config().another_attribute</code> to return its original value (<code>123</code>) instead. I basically want the Config instance to behave as it would normally do, with the only exception of the mocked attribute.</p> <p>I think this should be quite basic, but I'm probably missing something.</p>
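<p>One way to get this behavior (a sketch, with <code>Config</code> and the caller inlined here to stay self-contained) is to wrap the real <code>__init__</code> so instances are built normally, then overwrite just the one attribute afterwards:</p>

```python
from unittest.mock import patch

class Config:
    def __init__(self):
        self.some_attribute = "some value"
        self.another_attribute = 123

def method_that_uses_config():
    # Adapted from the question: return both attributes for inspection.
    return Config().some_attribute, Config().another_attribute

# Wrap the real __init__ so every new instance behaves normally, then
# overwrite only the attribute under test.
_real_init = Config.__init__

def _patched_init(self):
    _real_init(self)
    self.some_attribute = "mocked value"

with patch.object(Config, "__init__", _patched_init):
    mocked = method_that_uses_config()

# Outside the context manager, Config is fully restored.
unmocked = method_that_uses_config()
```

<p>Because the attribute is set in <code>__init__</code> (not a class attribute or property), patching <code>Config.some_attribute</code> directly would be shadowed by the instance attribute; wrapping <code>__init__</code> sidesteps that. If it were a property, <code>PropertyMock</code> would be the usual tool instead.</p>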
<python><unit-testing><mocking>
2024-03-16 16:38:09
1
12,676
Kurt Bourbaki
78,172,478
4,639,580
How to obtain data from two tables and that two table join using Foreign-key in the Django rest framework
<p>I am quite new to Django. I'm writing an app using Vue3 in the frontend and Django5 in the backend. I've implemented login already, but I want the returned user to contain the role, which happens to be in another table; I just get the id.</p> <p>My Django models:</p> <pre class="lang-py prettyprint-override"><code>class Role(models.Model):
    id_role = models.AutoField(primary_key=True)
    name = models.TextField(max_length=100)
    permissions = models.ManyToManyField(Permission)

    def __str__(self):
        return self.name


class User(models.Model):
    id_user = models.AutoField(primary_key=True)
    email = models.TextField(max_length=100)
    password = models.CharField(max_length=255)
    role_id = models.ForeignKey(Role, on_delete=models.PROTECT)
    locale = models.TextField(max_length=5)

    def __str__(self):
        return self.email
</code></pre> <p>Serializers:</p> <pre class="lang-py prettyprint-override"><code>class RoleSerializer(ModelSerializer):
    class Meta:
        model = Role
        fields = (
            'id_role',
            'name',
            'permissions'
        )


class UserSerializer(ModelSerializer):
    class Meta:
        model = User
        fields = (
            'id_user',
            'email',
            'password',
            'role_id',
            'locale'
        )
</code></pre> <p>This is my viewset, where I have the login method, and I want it to return the role name instead of the role_id:</p> <pre class="lang-py prettyprint-override"><code>class UserViewSet(viewsets.ModelViewSet):
    serializer_class = UserSerializer
    queryset = User.objects.all()

    @action(detail=False, methods=['post'])
    def login(self, request, pk=None):
        user = serializers.serialize('json', self.queryset.filter(email=request.data[&quot;email&quot;], password=request.data[&quot;password&quot;]))
        if (user and user != &quot;[]&quot;):
            return Response(user)
        else:
            return Response({&quot;error&quot;: &quot;The user does not exist&quot;}, status=status.HTTP_400_BAD_REQUEST)
</code></pre> <p>I've already looked at SO questions (<a href="https://stackoverflow.com/questions/48128714/how-to-make-an-inner-join-in-django">How to make an Inner Join in django?</a>) and tried using this code:</p> <pre class="lang-py prettyprint-override"><code>@action(detail=False, methods=['post'])
def login(self, request, pk=None):
    queryset = User.objects.select_related('roles')
    user = serializers.serialize('json', queryset.filter(email=request.data[&quot;email&quot;], password=request.data[&quot;password&quot;]))
    if (user and user != &quot;[]&quot;):
        return Response(user)
    else:
        return Response({&quot;error&quot;: &quot;The user does not exist&quot;}, status=status.HTTP_400_BAD_REQUEST)
</code></pre> <p>but I just get this error:</p> <pre><code>File &quot;/usr/local/lib/python3.12/site-packages/django/db/models/sql/compiler.py&quot;, line 1367, in get_related_selections
    raise FieldError(
django.core.exceptions.FieldError: Invalid field name(s) given in select_related: 'roles'. Choices are: role_id
</code></pre> <p>When I try using role_id I just get the same response as in the beginning. I don't know what I'm doing wrong.</p> <hr /> <p>EDIT: I just tried this answer also (<a href="https://stackoverflow.com/a/48341567/4639580">https://stackoverflow.com/a/48341567/4639580</a>), but I got:</p> <pre><code>FieldError at /api/users/login/
Cannot resolve keyword 'role' into field. Choices are: email, id_user, locale, password, person, role_id, role_id_id

Request Method: POST
Request URL: http://localhost:8000/api/users/login/
Django Version: 5.0.2
Exception Type: FieldError
Exception Value: Cannot resolve keyword 'role' into field. Choices are: email, id_user, locale, password, person, role_id, role_id_id
Exception Location: /usr/local/lib/python3.12/site-packages/django/db/models/sql/query.py, line 1772, in names_to_path
</code></pre>
<python><django><django-rest-framework>
2024-03-16 16:14:31
1
619
Dairelys García Rivas
78,172,031
339,144
How to obtain an exception with a `__traceback__` attribute that contains the stack outside a `try`
<p>It seems that in Python (3.10) an exception that is raised inside a <code>try</code> contains a traceback that does not extend to the calling location of the <code>try</code>. This is somewhat surprising to me, and more importantly, not what I want.</p> <p>Here's a short program that illustrates the problem:</p> <pre><code># short_tb.py
import traceback


def caller():
    somefunction()


def somefunction():
    try:
        raise ValueError(&quot;This is a value error&quot;)
    except ValueError as e:
        # in my actual code, e is passed to a function, and __traceback__ is pulled from it for logging at this other
        # location; the code below simply demonstrates the lack of frames in the traceback object
        print(&quot;&quot;.join(traceback.format_tb(e.__traceback__)))


print(&quot;This will produce a traceback with only one frame, the one in somefunction()&quot;)
caller()


def caller2():
    somefunction2()


def somefunction2():
    raise ValueError(&quot;This is a value error&quot;)


print(&quot;This will produce a traceback with all relevant frames (the default behavior of the python interpreter)&quot;)
caller2()
</code></pre> <p>output:</p> <pre><code>This will produce a traceback with only one frame, the one in somefunction()
  File &quot;.../short_tb.py&quot;, line 10, in somefunction
    raise ValueError(&quot;This is a value error&quot;)

This will produce a traceback with all relevant frames (the default behavior of the python interpreter)
Traceback (most recent call last):
  File &quot;.../short_tb.py&quot;, line 30, in &lt;module&gt;
    caller2()
  File &quot;.../short_tb.py&quot;, line 22, in caller2
    somefunction2()
  File &quot;.../short_tb.py&quot;, line 26, in somefunction2
    raise ValueError(&quot;This is a value error&quot;)
ValueError: This is a value error
</code></pre> <p>What I want is for <code>__traceback__</code> to contain all the information in the second example. I'm happy to overwrite the variable and I can do so at the exception-handling location... 
but how do I get an object to use for that purpose?</p> <p>In this question there are many answers about tracebacks, but none of them seem to be about <code>traceback objects</code>: <a href="https://stackoverflow.com/questions/3702675/catch-and-print-full-python-exception-traceback-without-halting-exiting-the-prog">Catch and print full Python exception traceback without halting/exiting the program</a></p>
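<p>Since <code>__traceback__</code> only records frames from the raise point down to the handler, the frames above the <code>try</code> have to be collected separately — they are still reachable from the traceback's own frame via <code>tb_frame.f_back</code>. A sketch that stitches the two halves into one formatted stack:</p>

```python
import traceback

def full_format_tb(exc):
    """Format exc.__traceback__ plus the stack *above* the frame that
    handled it, mimicking what the interpreter prints for an uncaught error.
    """
    tb = exc.__traceback__
    # Frames above the `try` are reachable from the traceback's own frame.
    above = traceback.StackSummary.extract(
        traceback.walk_stack(tb.tb_frame.f_back))
    above.reverse()                 # walk_stack yields innermost frame first
    below = traceback.extract_tb(tb)
    return "".join(traceback.format_list(list(above) + list(below)))

def caller():
    return somefunction()

def somefunction():
    try:
        raise ValueError("This is a value error")
    except ValueError as e:
        return full_format_tb(e)

formatted = caller()
# `formatted` now lists the module frame, caller(), and somefunction().
```

<p>This returns a formatted string rather than a traceback object; for logging purposes the combined <code>FrameSummary</code> list is usually what's needed anyway.</p>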
<python><python-3.x>
2024-03-16 13:55:47
2
2,577
Klaas van Schelven
78,171,843
525,865
OpenStreetMap: concatenate a request into a loop that iterates over each 3166 country code, parse response into DF with Python
<p>i am currently working on a combined request that runs on the API-end of Overpass-Turbo: the aim is to concatenate a request like the following;</p> <pre><code>[out:csv(::id,::type,&quot;name&quot;,&quot;addr:postcode&quot;,&quot;addr:city&quot;,&quot;addr:street&quot;,&quot;addr:housenumber&quot;,&quot;website&quot;,&quot; contact:email=*&quot;)][timeout:600]; area[&quot;ISO3166-1&quot;=&quot;NL&quot;]-&gt;.a; ( node(area.a)[amenity=childcare]; way(area.a)[amenity=childcare]; rel(area.a)[amenity=childcare];); out; </code></pre> <p>with the key of the ISO3166-1 - see <a href="https://de.wikipedia.org/wiki/ISO-3166-1-Kodierliste" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/ISO-3166-1-Kodierliste</a> note - i want to run e.g. in Python - encoded with the various country-codes for</p> <pre><code>Netherlands, Germany, Austria, Switzerland, France, </code></pre> <p>and so forth - how to encode this - so that we can run all in a single request - in python.. so that all comes in a dataframe - in comma separated values</p> <p>Well i think that to concatenate multiple requests with different ISO3166-1 country codes and run them in a single request in Python, we need to construct a loop to iterate over the above mentioned country codes, modify the request accordingly, and then merge the results into one complete and single DataFrame. 
Using the <code>requests</code> library for the HTTP calls and pandas for the data handling seems appropriate:</p> <pre><code>import requests import pandas as pd from io import StringIO # List of ISO3166-1 country codes country_codes = [&quot;NL&quot;, &quot;DE&quot;, &quot;AT&quot;, &quot;CH&quot;, &quot;FR&quot;] # Add more country codes as needed # Base request template base_request = &quot;&quot;&quot; [out:csv(::id,::type,&quot;name&quot;,&quot;addr:postcode&quot;,&quot;addr:city&quot;,&quot;addr:street&quot;,&quot;addr:housenumber&quot;,&quot;website&quot;,&quot; contact:email=*&quot;)][timeout:600]; area[&quot;ISO3166-1&quot;=&quot;{}&quot;]-&gt;.a; ( node(area.a)[amenity=childcare]; way(area.a)[amenity=childcare]; rel(area.a)[amenity=childcare];); out; &quot;&quot;&quot; # List to store individual DataFrames dfs = [] # Loop through each country code for code in country_codes: # Construct the request for the current country request = base_request.format(code) # Send the request to the Overpass API response = requests.post(&quot;https://overpass-api.de/api/interpreter&quot;, data=request) # Check if the request was successful if response.status_code == 200: # Parse the response as CSV and convert it to DataFrame try: df = pd.read_csv(StringIO(response.text), error_bad_lines=False) except pd.errors.ParserError as e: print(f&quot;Error parsing CSV data for {code}: {e}&quot;) continue # Add country code as a new column df['country_code'] = code # Append the DataFrame to the list dfs.append(df) else: print(f&quot;Error retrieving data for {code}&quot;) # Merge all DataFrames into a single DataFrame result_df = pd.concat(dfs, ignore_index=True) # Save the DataFrame to a CSV file or perform further processing result_df.to_csv(&quot;merged_childcare_data.csv&quot;, index=False) </code></pre> <p>I run this on <strong>Google Colab</strong>.</p> <p>What I want this code to achieve:</p> <p>a. gets the country_codes list, i.e. 
that it contains the ISO3166-1 country codes for the countries we want to query. b. base_request, which is the base template of the Overpass API request with a placeholder {} for the country code.</p> <p>Looping: the loop should iterate over each country code, substitute it into the base request, send the request, parse the response into a DataFrame, and append it to the dfs list.</p> <p>Finally, all DataFrames in dfs should be concatenated into a single DataFrame result_df, which we can then save to a CSV file or process further.</p> <p>But at the moment I run into some errors on Google Colab, see here:</p> <pre><code>&lt;ipython-input-3-67ee61d1e734&gt;:33: FutureWarning: The error_bad_lines argument has been deprecated and will be removed in a future version. Use on_bad_lines in the future. df = pd.read_csv(StringIO(response.text), error_bad_lines=False) Skipping line 337: expected 1 fields, saw 2 Skipping line 827: expected 1 fields, saw 2 &lt;ipython-input-3-67ee61d1e734&gt;:33: FutureWarning: The error_bad_lines argument has been deprecated and will be removed in a future version. Use on_bad_lines in the future. 
df = pd.read_csv(StringIO(response.text), error_bad_lines=False) Skipping line 27: expected 1 fields, saw 2 Skipping line 132: expected 1 fields, saw 2 Skipping line 366: expected 1 fields, saw 2 Skipping line 539: expected 1 fields, saw 2 Skipping line 633: expected 1 fields, saw 2 Skipping line 881: expected 1 fields, saw 2 Skipping line 1394: expected 1 fields, saw 2 Skipping line 1472: expected 1 fields, saw 2 Skipping line 1555: expected 1 fields, saw 4 Skipping line 1580: expected 1 fields, saw 2 Skipping line 1630: expected 1 fields, saw 2 Skipping line 1649: expected 1 fields, saw 2 Skipping line 1766: expected 1 fields, saw 2 Skipping line 1843: expected 1 fields, saw 2 Skipping line 2067: expected 1 fields, saw 2 Skipping line 2208: expected 1 fields, saw 2 Skipping line 2349: expected 1 fields, saw 3 Skipping line 2414: expected 1 fields, saw 2 Skipping line 2419: expected 1 fields, saw 2 Skipping line 2423: expected 1 fields, saw 2 Skipping line 2464: expected 1 fields, saw 2 Skipping line 2515: expected 1 fields, saw 2 Skipping line 2581: expected 1 fields, saw 2 Skipping line 2855: expected 1 fields, saw 2 Skipping line 2899: expected 1 fields, saw 2 Skipping line 2950: expected 1 fields, saw 2 &lt;ipython-input-3-67ee61d1e734&gt;:33: FutureWarning: The error_bad_lines argument has been deprecated and will be removed in a future version. Use on_bad_lines in the future. df = pd.read_csv(StringIO(response.text), error_bad_lines=False) &lt;ipython-input-3-67ee61d1e734&gt;:33: FutureWarning: The error_bad_lines argument has been deprecated and will be removed in a future version. Use on_bad_lines in the future. 
df = pd.read_csv(StringIO(response.text), error_bad_lines=False) Skipping line 114: expected 1 fields, saw 2 Skipping line 212: expected 1 fields, saw 2 Skipping line 339: expected 1 fields, saw 2 Skipping line 340: expected 1 fields, saw 4 Skipping line 351: expected 1 fields, saw 3 Skipping line 357: expected 1 fields, saw 2 Skipping line 359: expected 1 fields, saw 3 Skipping line 510: expected 1 fields, saw 6 Skipping line 535: expected 1 fields, saw 2 Skipping line 546: expected 1 fields, saw 3 Skipping line 590: expected 1 fields, saw 4 Skipping line 596: expected 1 fields, saw 4 Skipping line 602: expected 1 fields, saw 3 Skipping line 659: expected 1 fields, saw 3 Skipping line 764: expected 1 fields, saw 2 Skipping line 836: expected 1 fields, saw 2 Skipping line 838: expected 1 fields, saw 2 &lt;ipython-input-3-67ee61d1e734&gt;:33: FutureWarning: The error_bad_lines argument has been deprecated and will be removed in a future version. Use on_bad_lines in the future. df = pd.read_csv(StringIO(response.text), error_bad_lines=False) Skipping line 50: expected 1 fields, saw 3 Skipping line 302: expected 1 fields, saw 2 Skipping line 303: expected 1 fields, saw 2 Skipping line 740: expected 1 fields, saw 2 Skipping line 758: expected 1 fields, saw 2 Skipping line 1440: expected 1 fields, saw 2 Skipping line 1476: expected 1 fields, saw 3 Skipping line 1680: expected 1 fields, saw 3 Skipping line 1687: expected 1 fields, saw 2 Skipping line 1954: expected 1 fields, saw 3 </code></pre>
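A likely explanation (my reading, not confirmed in the post): the "expected 1 fields, saw 2" skips suggest a delimiter mismatch. Overpass's <code>[out:csv]</code> output is tab-separated by default unless a separator is set explicitly in the <code>out:csv</code> settings, so pandas' default comma parsing collapses most rows into a single field. Passing <code>sep="\t"</code>, plus replacing the deprecated <code>error_bad_lines=False</code> with <code>on_bad_lines="skip"</code>, should address both messages. A minimal sketch on a made-up stand-in for <code>response.text</code>:

```python
import pandas as pd
from io import StringIO

# Hypothetical stand-in for response.text: Overpass [out:csv] emits
# tab-separated rows by default.
sample = (
    "@id\t@type\tname\n"
    "123\tnode\tExample Childcare\n"
    "456\tway\tAnother Childcare\n"
)

# sep="\t" matches Overpass's default delimiter; on_bad_lines="skip" is the
# modern replacement for the deprecated error_bad_lines=False.
df = pd.read_csv(StringIO(sample), sep="\t", on_bad_lines="skip")
print(df.shape)  # (2, 3)
```

With the real responses, the same two keyword arguments would go into the <code>pd.read_csv</code> call inside the loop; alternatively, the Overpass query itself can request a comma separator in its <code>out:csv</code> settings.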
<python><pandas><dataframe><request><openstreetmap>
2024-03-16 12:55:58
1
1,223
zero