Column summary (name: type, min–max):
QuestionId: int64, 74.8M–79.8M
UserId: int64, 56–29.4M
QuestionTitle: string, lengths 15–150
QuestionBody: string, lengths 40–40.3k
Tags: string, lengths 8–101
CreationDate: date, 2022-12-10 09:42:47 – 2025-11-01 19:08:18
AnswerCount: int64, 0–44
UserExpertiseLevel: int64, 301–888k
UserDisplayName: string, lengths 3–30
76,043,774
4,045,275
How to add multiple empty columns (no names) to a dataframe? To export to Excel
<h1>The issue</h1> <p>I have a dataframe which summarises certain data, and I am exporting it to Excel.</p> <p>There is a logical order to the columns, whereby it would be useful to separate certain blocks: e.g. first 3 columns related to a, then an empty column as separator, then 2 columns related to b, then an empty column as separator, then the block related to c, etc.</p> <p>I can do <code>df[' '] = None</code> and it works: I have an empty column which acts as a separator. But if I repeat the command, Python will replace the column I had already created; it won't add a new one.</p> <p>I would like to create a dataframe which would look like this when exported to Excel (note the 4th and 6th columns are blank and act as separators):</p> <p><a href="https://i.sstatic.net/iD0hy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iD0hy.png" alt="Excel screenshot" /></a></p> <h1>Why I need it</h1> <p>I need to export the results in the dataframe to Excel. The output is a kind of report, not, say, a file which will be imported into a database and which therefore cannot have empty columns. Readability is key, and separating the blocks with empty columns helps readability because it makes it easier to see where one block ends and another starts.</p> <h1>My partial solution</h1> <p>The only solution I have found is to manually keep track of how many empty columns I have already added, and create columns whose name is a certain number of white spaces. E.g.</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame()
df['a'] = np.arange(0, 10)
df[' '] = None   # 1 white space
df['b'] = 5
df['  '] = None  # 2 white spaces
df['c'] = 10
</code></pre> <p>It works, but the problem is that it is prone to errors, because I often have to reshuffle the blocks around, and in reality I have many, many blocks, not just 3 as in this toy example.</p> <h1>What I have found</h1> <p>I understand that pandas allows non-unique columns (<a href="https://pandas.pydata.org/docs/user_guide/duplicates.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/user_guide/duplicates.html</a>). I can create such a dataframe if I specify the columns upfront: <code>df = pd.DataFrame(columns = ['a','b','c','b'])</code>, but I have not figured out how to add duplicate columns one by one. That would be useful because I'm often reshuffling the order of the columns, so it's easier for me to have</p> <pre><code>df['a'] = something
df['c'] = something_else
</code></pre> <p>than to define everything in one statement.</p> <h1>The questions</h1> <ol> <li>Is there a better way than remembering how many separator columns I have created, and first creating one with 1 whitespace as column name, then one with 2, then one with 3, etc.?</li> <li>Is there a way to add a duplicate column after having already created the dataframe?</li> <li>Anything else you can think of?</li> </ol>
<python><pandas><dataframe>
2023-04-18 11:03:40
2
9,100
Pythonista anonymous
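A possible answer to the question above, without counting whitespace in column names: `DataFrame.insert` accepts `allow_duplicates=True`, so several separator columns can share the same (even empty) name and be added one at a time. A minimal sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame()
df["a"] = np.arange(0, 10)
df["b"] = 5
df["c"] = 10

# Insert empty separator columns by position; allow_duplicates=True
# lets every separator use the same empty name.
df.insert(1, "", None, allow_duplicates=True)  # after block "a"
df.insert(3, "", None, allow_duplicates=True)  # after block "b"

print(list(df.columns))  # ['a', '', 'b', '', 'c']
```

Because the separators are inserted by position after the data columns exist, reshuffling blocks only means changing the `insert` positions, not renaming anything.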
76,043,761
5,180,644
How can I serialize/deserialize SQLAlchemy.Column object
<p>I searched around and couldn't find an answer; forgive me if I missed something obvious. Any pointers will be helpful. :)</p> <p>The issue is around the serialization of the <code>sqlalchemy.Column</code> object. I'm representing a database table in the class below:</p> <pre><code>from typing import Optional

from sqlalchemy import Column


class Table:
    name: Optional[str] = None
    columns: Optional[list[Column]] = None
</code></pre> <p>I need to serialize the <code>Table</code> class object, and that means both the <code>name</code> and <code>columns</code> attributes must be serializable. However, the problem lies with serializing <code>columns</code>. Is there any way we can do this reliably?</p> <p>And we cannot use Python's pickle.</p>
<python><sqlalchemy>
2023-04-18 11:01:59
1
367
Utkarsh Sharma
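One hedged approach to the question above, assuming only the column *metadata* needs to survive (not bound engine state, constraints, or defaults): convert each `Column` to a plain dict and rebuild it from `sqlalchemy.types`. The helper names here are illustrative, not SQLAlchemy API:

```python
import sqlalchemy.types
from sqlalchemy import Column, Integer


def column_to_dict(col: Column) -> dict:
    # Keep only the JSON-friendly metadata; foreign keys, defaults and
    # constraints would need the same treatment if you use them.
    return {
        "name": col.name,
        "type": type(col.type).__name__,
        "nullable": col.nullable,
        "primary_key": col.primary_key,
    }


def column_from_dict(d: dict) -> Column:
    # Look the type class up by name in sqlalchemy.types and re-instantiate it.
    type_cls = getattr(sqlalchemy.types, d["type"])
    return Column(d["name"], type_cls(),
                  nullable=d["nullable"], primary_key=d["primary_key"])


spec = column_to_dict(Column("id", Integer(), primary_key=True))
restored = column_from_dict(spec)
```

Parametrized types (e.g. `String(50)`) would additionally need their arguments stored; `col.type` carries them, but extracting them generically is type-specific.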
76,043,689
1,307,905
pkg_resources is deprecated as an API
<p>When I try to install from a .tar.gz package, while making warnings into errors:</p> <pre><code>python -W error -m pip install /some/path/nspace.pkga-0.1.0.tar.gz </code></pre> <p>I get this error:</p> <pre><code>ERROR: Exception: Traceback (most recent call last): File &quot;/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/cli/base_command.py&quot;, line 169, in exc_logging_wrapper status = run_func(*args) ^^^^^^^^^^^^^^^ File &quot;/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/cli/req_command.py&quot;, line 248, in wrapper return func(self, options, args) ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/commands/install.py&quot;, line 324, in run session = self.get_default_session(options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/cli/req_command.py&quot;, line 98, in get_default_session self._session = self.enter_context(self._build_session(options)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/cli/req_command.py&quot;, line 125, in _build_session session = PipSession( ^^^^^^^^^^^ File &quot;/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/network/session.py&quot;, line 342, in __init__ self.headers[&quot;User-Agent&quot;] = user_agent() ^^^^^^^^^^^^ File &quot;/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/network/session.py&quot;, line 175, in user_agent setuptools_dist = get_default_environment().get_distribution(&quot;setuptools&quot;) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/metadata/importlib/_envs.py&quot;, line 188, in get_distribution return next(matches, None) ^^^^^^^^^^^^^^^^^^^ File &quot;/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/metadata/importlib/_envs.py&quot;, line 183, in &lt;genexpr&gt; matches = ( ^ File 
&quot;/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/metadata/base.py&quot;, line 612, in iter_all_distributions for dist in self._iter_distributions(): File &quot;/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/metadata/importlib/_envs.py&quot;, line 176, in _iter_distributions for dist in finder.find_eggs(location): File &quot;/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/metadata/importlib/_envs.py&quot;, line 144, in find_eggs yield from self._find_eggs_in_dir(location) File &quot;/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/metadata/importlib/_envs.py&quot;, line 111, in _find_eggs_in_dir from pip._vendor.pkg_resources import find_distributions File &quot;/opt/util/nspace1/lib/python3.11/site-packages/pip/_vendor/pkg_resources/__init__.py&quot;, line 121, in &lt;module&gt; warnings.warn(&quot;pkg_resources is deprecated as an API&quot;, DeprecationWarning) DeprecationWarning: pkg_resources is deprecated as an API </code></pre> <p>pip seems to have vendored in a deprecated package. The pip code responsible is in <code>pip/_internal/metadata/importlib/_envs.py</code> in class <code>Environment</code>:</p> <pre><code> def _iter_distributions(self) -&gt; Iterator[BaseDistribution]: finder = _DistributionFinder() for location in self._paths: yield from finder.find(location) for dist in finder.find_eggs(location): # _emit_egg_deprecation(dist.location) # TODO: Enable this. yield dist # This must go last because that's how pkg_resources tie-breaks. 
yield from finder.find_linked(location) </code></pre> <p>If I comment out the nested for loop (the one doing the <code>find_eggs</code>), things work fine: I get no error and a working package installed.</p> <p>How do I monkeypatch that <code>Environment</code> instance from my setup.py file?</p> <p>This is Python 3.11.3 (so it should be using importlib.metadata and not pkg_resources) on macOS, with pip==23.1 and setuptools==67.6.1.</p> <p>Background: I am just trying this out on an example package. The reason for this is a bug report for my <code>ruamel.yaml</code> package, where it is built in such a less forgiving environment. I could of course say &quot;don't use <code>-W error</code>&quot;, but I would rather solve this by not invoking the offending, unused code in the first place.</p>
<python><pip><setuptools><deprecation-warning>
2023-04-18 10:53:23
4
78,248
Anthon
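Rather than monkeypatching pip's `Environment`, one workaround is to neutralize just that one warning while keeping `-W error` in force for everything else. The stdlib filtering mechanics are sketched below (whether this reaches pip's own process depends on how pip is invoked; from the command line the equivalent would be something like `PYTHONWARNINGS='error,ignore:pkg_resources is deprecated as an API:DeprecationWarning'`, though the exact escaping is an assumption worth testing):

```python
import warnings

raised = False
other_raised = False
with warnings.catch_warnings():
    warnings.simplefilter("error")  # emulate `python -W error`
    # A more specific filter added afterwards is consulted first,
    # so only this one message is silently ignored.
    warnings.filterwarnings(
        "ignore",
        message="pkg_resources is deprecated as an API",
        category=DeprecationWarning,
    )
    try:
        warnings.warn("pkg_resources is deprecated as an API", DeprecationWarning)
    except DeprecationWarning:
        raised = True
    try:
        warnings.warn("some other deprecation", DeprecationWarning)
    except DeprecationWarning:
        other_raised = True
```

The targeted message is swallowed while every other warning still escalates to an error, which is the behavior the question wants from `-W error`.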
76,043,027
2,562,137
Load Python Tensorflow saved_model with Tensorflow.js
<p>I've saved a model I created with Python Tensorflow. Is it possible to load this with Tensorflow.js?</p> <p><strong>Save</strong></p> <pre><code>tf.saved_model.save(translator, 'ipa_translator', signatures={'serving_default': translator.tf_translate}) </code></pre> <p><strong>Load</strong></p> <pre><code>reloaded = tf.saved_model.load('ipa_translator') result = reloaded.tf_translate(input_text) </code></pre> <p>I've found <a href="https://www.tensorflow.org/js/tutorials/conversion/import_saved_model" rel="nofollow noreferrer">Tensorflow import_saved_model</a> which looks like it'll handle the conversion, but I can't figure out how it's meant to be used.</p> <p>I also tried following this <a href="https://codelabs.developers.google.com/codelabs/tensorflowjs-convert-python-savedmodel#4" rel="nofollow noreferrer">guide</a>, but got errors about mismatching model types.</p> <p>There appears to be a wizard script <code>pip3 install tensorflowjs[wizard]</code>, when I try to install it I get <code>no matches found: tensorflowjs[wizard]</code></p> <p>This is the model that I want to convert if that helps: <a href="https://colab.research.google.com/drive/16ge-HE2RZ6TmG9_zS2_L4dLKOO00mVRB?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/16ge-HE2RZ6TmG9_zS2_L4dLKOO00mVRB?usp=sharing</a></p> <p>I would appreciate any guidance at all on getting this running.</p>
<python><tensorflow><tensorflow.js><tensorflowjs-converter>
2023-04-18 09:37:57
1
3,898
OrderAndChaos
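The `no matches found: tensorflowjs[wizard]` part of the question above is most likely not a pip problem at all: zsh treats unquoted square brackets as a glob pattern. Quoting the requirement passes it through verbatim. A sketch of the quoting fix (the subsequent converter invocation follows the tfjs docs, but the exact flags depend on the model):

```shell
# zsh: unquoted [wizard] is a glob -> "no matches found: tensorflowjs[wizard]"
# Quoting (or backslash-escaping) hands the literal string to pip.
req='tensorflowjs[wizard]'
echo "pip3 install $req"   # prints: pip3 install tensorflowjs[wizard]
```

Once installed, the non-wizard converter is invoked roughly as `tensorflowjs_converter --input_format=tf_saved_model ipa_translator web_model`; note that models with custom signatures or ops (like the `tf_translate` signature here) may still fail conversion, which would match the "mismatching model types" errors reported.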
76,042,956
10,844,937
What kind of lock should I use when update and insert the same table?
<p>I use MySQL's default engine <code>innodb</code> and MySQL's default transaction <code>REPEATABLE READS</code>.</p> <p>My table <code>Job</code> has two fields, <code>job_id</code> and <code>queue</code>. Here are the data in it.</p> <pre><code>+--------------------------------------+-------+ | job_id | queue | +--------------------------------------+-------+ | 9f3ab652-dd2d-11ed-8cc5-d5f95fee8efd | 1 | | c41fc700-dd2d-11ed-8cc5-d5f95fee8efd | 2 | | d86ec674-dd2e-11ed-8cc5-d5f95fee8efd | 3 | | 8e2b9c46-ddc8-11ed-a8b4-acde48001122 | 4 | +--------------------------------------+-------+ </code></pre> <p>Here I have <code>threads</code> modify the table. <code>update_func</code> is to reduce every row's queue value by one. <code>insert_func</code> is to insert a record based on the max queue value.</p> <pre><code>import uuid import MySQLdb import threading conn = MySQLdb.connect(&quot;conn_info&quot;) cursor = conn.cursor() def update_func(): cursor.execute(&quot;UPDATE job SET queue = queue - 1&quot;) conn.commit() conn.close() def insert_func(): cursor.execute(&quot;SELECT max(queue) from job&quot;) max_queue = cursor.fetchone()[0] job_id, queue = str(uuid.uuid1()), max_queue + 1 cursor.execute(&quot;INSERT INTO job VALUES(%s,%s)&quot;, (job_id, queue)) conn.commit() conn.close() if __name__ == '__main__': t1 = threading.Thread(target=update_func) t2 = threading.Thread(target=insert_func) t1.start() t2.start() </code></pre> <p>After executing it, I got the following.</p> <pre><code>+--------------------------------------+-------+ | job_id | queue | +--------------------------------------+-------+ | 9f3ab652-dd2d-11ed-8cc5-d5f95fee8efd | 0 | | c41fc700-dd2d-11ed-8cc5-d5f95fee8efd | 1 | | d86ec674-dd2e-11ed-8cc5-d5f95fee8efd | 2 | | 8e2b9c46-ddc8-11ed-a8b4-acde48001122 | 3 | | ed910f36-ddc8-11ed-be60-acde48001122 | 5 | +--------------------------------------+-------+ </code></pre> <p>What I want is the queue are <code>0~4</code>. 
This raises two questions.</p> <ol> <li>What kind of <code>lock</code> should I use?</li> <li>Do I have to use a <code>transaction</code> when using a <code>lock</code> as well?</li> </ol>
<python><mysql>
2023-04-18 09:29:46
0
783
haojie
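One common pattern for the race above: in InnoDB a plain `SELECT` is a non-locking consistent read, so the insert thread can compute `MAX(queue)` from a snapshot the update thread is about to change. Making the read a locking read (`SELECT ... FOR UPDATE`) inside the same transaction as the `INSERT` forces the two transactions to serialize on the row/next-key locks. A sketch (the `UUID()`/user-variable details are illustrative):

```sql
-- insert side: read and insert atomically, blocking against the UPDATE
START TRANSACTION;
SELECT MAX(queue) INTO @max_queue FROM job FOR UPDATE;  -- locking read
INSERT INTO job VALUES (UUID(), @max_queue + 1);
COMMIT;
```

As for the second question: yes, a transaction is effectively required. Locks taken by `FOR UPDATE` are held until the transaction ends, so with autocommit each statement would release its locks immediately and the race would return.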
76,042,924
16,389,095
Python/Kivy - How to add a drop down item when a button is pressed: AttributeError 'super' object has no attribute '__getattr__'
<p>I developed a simple UI in Kivy/KivyMD - Python. When a button is pressed a dropdown item should be visualized <a href="https://kivymd.readthedocs.io/en/1.1.1/components/dropdownitem/" rel="nofollow noreferrer">https://kivymd.readthedocs.io/en/1.1.1/components/dropdownitem/</a>. Whilst the majority of examples consider a dropdown item already designed in the UI, I need to add it in the Py code, specifically in the <em>on-release</em> event associated to the button.</p> <p>Here is my code:</p> <pre><code>from kivy.lang import Builder from kivymd.app import MDApp from kivymd.uix.screen import MDScreen from kivymd.uix.menu import MDDropdownMenu from kivymd.uix.dropdownitem.dropdownitem import MDDropDownItem Builder.load_string( &quot;&quot;&quot; &lt;View&gt;: MDGridLayout: rows: 3 id: layout padding: 100, 50, 100, 50 MDRaisedButton: id: button text: 'CREATE DDI' on_release: root.Button_On_Click() &quot;&quot;&quot;) class View(MDScreen): def __init__(self, **kwargs): super(View, self).__init__(**kwargs) def Button_On_Click(self): myDdi = MDDropDownItem( # size_hint_x = None, # width = dp(100), # pos_hint = {&quot;right&quot;: 1, &quot;center_y&quot;: 0.5}, text = 'SELECT POSITION') myMenu, scratch = Create_DropDown_Widget(myDdi, ['POS 1', 'POS 2', 'POS 3'], width=4) myDdi.on_release = myMenu.open() self.ids.layout.add_widget(myDdi) def Create_DropDown_Widget(self, drop_down_item, item_list, width): items_collection = [ { &quot;viewclass&quot;: &quot;OneLineListItem&quot;, &quot;text&quot;: item_list[i], &quot;height&quot;: dp(56), &quot;on_release&quot;: lambda x = item_list[i]: self.Set_DropDown_Item(drop_down_item, menu, x), } for i in range(len(item_list)) ] menu = MDDropdownMenu(caller=drop_down_item, items=items_collection, width_mult=width) menu.bind() return menu, items_collection def Set_DropDown_Item(self, drop_down_item, menu, text_item): drop_down_item.set_item(text_item) menu.dismiss() class MainApp(MDApp): def __init__(self, **kwargs): 
super().__init__(**kwargs) self.View = View() def build(self): self.title = ' DROP DOWN ITEM ADDED DYNAMICALLY' return self.View if __name__ == '__main__': MainApp().run() </code></pre> <p>Trying to execute the code, when I press the button I get this error:</p> <pre><code>line 49, in Button_On_Click text = 'SELECT POSITION') File &quot;C:\Users\\Miniconda3\lib\site-packages\kivymd\uix\behaviors\declarative_behavior.py&quot;, line 311, in __init__ super().__init__(**kwargs) File &quot;C:\Users\\Miniconda3\lib\site-packages\kivymd\theming.py&quot;, line 1668, in __init__ super().__init__(**kwargs) File &quot;C:\Users\\Miniconda3\lib\site-packages\kivy\uix\behaviors\button.py&quot;, line 121, in __init__ super(ButtonBehavior, self).__init__(**kwargs) File &quot;C:\Users\\Miniconda3\lib\site-packages\kivy\uix\boxlayout.py&quot;, line 145, in __init__ super(BoxLayout, self).__init__(**kwargs) File &quot;C:\Users\\Miniconda3\lib\site-packages\kivy\uix\layout.py&quot;, line 76, in __init__ super(Layout, self).__init__(**kwargs) File &quot;C:\Users\\Miniconda3\lib\site-packages\kivy\uix\widget.py&quot;, line 357, in __init__ super(Widget, self).__init__(**kwargs) File &quot;kivy\_event.pyx&quot;, line 262, in kivy._event.EventDispatcher.__init__ File &quot;kivy\properties.pyx&quot;, line 520, in kivy.properties.Property.__set__ File &quot;kivy\properties.pyx&quot;, line 567, in kivy.properties.Property.set File &quot;kivy\properties.pyx&quot;, line 606, in kivy.properties.Property._dispatch File &quot;kivy\_event.pyx&quot;, line 1307, in kivy._event.EventObservers.dispatch File &quot;kivy\_event.pyx&quot;, line 1213, in kivy._event.EventObservers._dispatch File &quot;C:\Users\\Miniconda3\lib\site-packages\kivymd\uix\dropdownitem\dropdownitem.py&quot;, line 96, in on_text self.ids.label_item.text = text_item File &quot;kivy\properties.pyx&quot;, line 964, in kivy.properties.ObservableDict.__getattr__ **AttributeError: 'super' object has no attribute '__getattr__'** 
</code></pre>
<python><kivy><kivy-language><kivymd>
2023-04-18 09:25:59
1
421
eljamba
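Setting the traceback aside, two separate bugs are visible in the code above: `Create_DropDown_Widget(...)` is called without `self.`, and `myDdi.on_release = myMenu.open()` *calls* `open` immediately and assigns its return value (`None`) instead of binding the method. The callable-vs-call distinction can be shown without Kivy (the class below is a stand-in, not KivyMD API):

```python
class FakeMenu:
    """Stand-in for MDDropdownMenu; just counts open() calls."""

    def __init__(self):
        self.open_count = 0

    def open(self):
        self.open_count += 1


menu = FakeMenu()

# Buggy: open() runs right now, and the handler becomes its return value (None).
buggy_handler = menu.open()

# Fixed: bind the bound method itself; it only runs when the event fires.
fixed_handler = menu.open
fixed_handler()  # simulate the on_release event firing later
```

In the question's code the fix would read `myDdi.on_release = myMenu.open` (no parentheses), plus `self.Create_DropDown_Widget(...)` for the method call.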
76,042,888
15,452,168
scraping reviews from playstore using google_play_scraper
<p>I am trying to scrap the reviews for an app using google_play_scraper, the play store says 16thousand reviews but I am only able to scrap 4869 reviews. what could be the reason?</p> <p>As you can see I am trying to use all the languages available via a function.</p> <p>I am using the below code</p> <pre><code># Import necessary libraries import pandas as pd import numpy as np from google_play_scraper import app, Sort, reviews_all # Define function to scrape reviews for a specific country and language combination def scrape_reviews(country, lang): reviews = reviews_all( 'com.canda.mobileapp', sleep_milliseconds=0, lang=lang, country=country, sort=Sort.NEWEST ) reviews_df = pd.DataFrame(reviews) try: reviews_df = reviews_df[['reviewId', 'userName', 'userImage', 'content', 'score', 'thumbsUpCount', 'reviewCreatedVersion', 'at', 'replyContent', 'repliedAt']].assign(Country_name=df.loc[df['country'] == country, 'Country_name'].iloc[0], Language=lang) except KeyError: reviews_df = pd.DataFrame() return reviews_df # Define dataframe containing country and language combinations data = {'Country_name': ['Belgium', 'Belgium', 'Belgium', 'Belgium', 'Czechia', 'Czechia', 'Denmark', 'Denmark', 'Germany', 'Germany', 'Greece', 'Greece', 'Spain', 'Spain', 'France', 'France', 'Croatia', 'Croatia', 'Italy', 'Italy', 'Hungary', 'Hungary', 'Netherlands', 'Netherlands', 'Austria', 'Austria', 'Poland', 'Poland', 'Portugal', 'Portugal', 'Romania', 'Romania', 'Switzerland', 'Switzerland', 'Switzerland', 'Switzerland', 'Switzerland', 'Slovakia', 'Slovakia', 'Slovenia', 'Slovenia', 'Sweden', 'Sweden', 'Finland', 'Finland'], 'country': ['BE', 'BE', 'BE', 'BE', 'CZ', 'CZ', 'DK', 'DK', 'DE', 'DE', 'GR', 'GR', 'ES', 'ES', 'FR', 'FR', 'HR', 'HR', 'IT', 'IT', 'HU', 'HU', 'NL', 'NL', 'AT', 'AT', 'PL', 'PL', 'PT', 'PT', 'RO', 'RO', 'CH', 'CH', 'CH', 'CH', 'CH', 'SK', 'SK', 'SI', 'SI', 'SE', 'SE', 'FI', 'FI'], 'Language': ['Dutch', 'French', 'German', 'English', 'Czech', 'English', 'Danish', 
'English', 'German', 'English', 'Greek', 'English', 'Spanish', 'English', 'French', 'English', 'Croatian', 'English', 'Italian', 'English', 'Hungarian', 'English', 'Dutch', 'English', 'German', 'English', 'Polish', 'English', 'Portuguese', 'English', 'Romanian', 'English', 'German', 'French', 'Italian', 'Romansh', 'English', 'Slovak', 'English', 'Slovenian', 'English', 'Swedish', 'English', 'Finnish', 'English'], 'lang': ['nl', 'fr', 'de', 'en', 'cs', 'en', 'da', 'en', 'de', 'en', 'el', 'en', 'es', 'en', 'fr', 'en', 'hr', 'en', 'it', 'en', 'hu', 'en', 'nl', 'en', 'de', 'en', 'pl', 'en', 'pt', 'en', 'ro', 'en', 'de', 'fr', 'it', 'rm', 'en', 'sk', 'en', 'sl', 'en', 'sv', 'en', 'fi', 'en'] } df = pd.DataFrame(data) reviews_dfs = df.apply(lambda row: scrape_reviews(row['country'], row['lang']), axis=1) combined_reviews_df = pd.concat(reviews_dfs.to_list(), ignore_index=True) #remove duplicates combined_reviews_df = combined_reviews_df.groupby(['content', 'Language']).first().reset_index() combined_reviews_df </code></pre>
<python><pandas><web-scraping><google-play-services>
2023-04-18 09:21:45
2
570
sdave
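Two things plausibly explain the gap in the question above: the Play Store's headline figure counts *ratings* (including rating-only entries with no text) across all languages and countries, while the scraper returns only written reviews for the requested combinations; and deduplicating with `groupby(['content', 'Language']).first()` silently merges distinct reviews that happen to share the same short text. Deduplicating on `reviewId` keeps those. A sketch with toy data:

```python
import pandas as pd

# Toy frame: two different reviewers both wrote "Good" in the same language,
# and r3 appears twice (a true duplicate from overlapping country scrapes).
reviews = pd.DataFrame({
    "reviewId": ["r1", "r2", "r3", "r3"],
    "content": ["Good", "Good", "Nice app", "Nice app"],
    "Language": ["English", "English", "German", "German"],
})

by_content = reviews.groupby(["content", "Language"]).first().reset_index()
by_id = reviews.drop_duplicates(subset="reviewId")

print(len(by_content))  # 2 -> one genuine "Good" review was lost
print(len(by_id))       # 3 -> only the true duplicate of r3 was dropped
```

This won't recover reviews the API never returns, but it stops the post-processing from shrinking the count further.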
76,042,884
18,091,040
Pandas DataFrame.to_csv creating a new line when saving a dataframe to file
<p>I have a dataframe <code>df_new</code> which looks like:</p> <p><a href="https://i.sstatic.net/25Usn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/25Usn.png" alt="enter image description here" /></a></p> <p>And when I use the command:</p> <pre><code>df_new.to_csv(csv+'_filtered.csv', index=False) </code></pre> <p>I generate a csv file that suddenly breaks the row 24 before the 2 last columns, generating a new undesirable row which is completely empty, except the data of the last 2 columns that should be in the previous row:</p> <p><a href="https://i.sstatic.net/1BGFl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1BGFl.png" alt="enter image description here" /></a></p> <p>Am I doing something wrong to save this file?</p>
<python><pandas><csv>
2023-04-18 09:20:59
1
640
brenodacosta
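The symptom above usually means a cell contains an embedded newline: `to_csv` quotes it correctly, but Excel's plain "open" (or a naive viewer) renders it as a broken row. If the newlines are not wanted in the report, strip them before export. A sketch:

```python
import pandas as pd

df_new = pd.DataFrame({
    "name": ["ok", "bad\nvalue"],  # second cell carries a stray newline
    "x": [1, 2],
})

# Replace CR/LF inside string cells with a space before writing the csv.
cleaned = df_new.replace({r"\r?\n": " "}, regex=True)
print(cleaned.loc[1, "name"])  # 'bad value'
```

Alternatively, keep the newlines and import the file with a CSV-aware reader (Excel's Data > From Text/CSV honors quoted fields with embedded line breaks).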
76,042,579
12,242,085
How to compare values between columns in 2 DataFrames in Python Pandas?
<p>I have 2 DataFrames in Python Pandas like below:</p> <p><strong>Input:</strong></p> <pre><code>df1 = pd.DataFrame({&quot;col1&quot;: [&quot;APPLE&quot;, &quot;BANANA&quot;, &quot;ORANGE&quot;]})
df2 = pd.DataFrame({&quot;col_x&quot;: [&quot;APPLEXX&quot;, &quot;BANANA&quot;, &quot;CARROT&quot;]})
</code></pre> <p>df1:</p> <pre><code>col1
------
APPLE
ORANGE
BANANA
</code></pre> <p>df2:</p> <pre><code>col_x
--------
APPLEXX
BANANA
CARROT
</code></pre> <p><strong>Requirements:</strong></p> <p>I need to print only the rows from df2 (<code>col_x</code>) whose value contains a value from df1 (<code>col1</code>) as a substring (or matches it exactly), together with the matching value from df1 (<code>col1</code>).</p> <p><strong>Desired output:</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>col1</th> <th>col_x</th> </tr> </thead> <tbody> <tr> <td>APPLE</td> <td>APPLEXX</td> </tr> <tr> <td>BANANA</td> <td>BANANA</td> </tr> </tbody> </table> </div> <p>How can I do that in Python Pandas?</p>
<python><pandas><dataframe><merge><isin>
2023-04-18 08:47:33
1
2,350
dingaro
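Since the match rule above is "the df1 value is a substring of the df2 value", a cross join followed by a row-wise containment test reproduces the desired output (requires pandas >= 1.2 for `how="cross"`):

```python
import pandas as pd

df1 = pd.DataFrame({"col1": ["APPLE", "BANANA", "ORANGE"]})
df2 = pd.DataFrame({"col_x": ["APPLEXX", "BANANA", "CARROT"]})

# Pair every row of df1 with every row of df2, then keep substring matches.
pairs = df1.merge(df2, how="cross")
mask = pairs.apply(lambda r: r["col1"] in r["col_x"], axis=1)
out = pairs[mask].reset_index(drop=True)
# -> the APPLE/APPLEXX and BANANA/BANANA pairs
```

The row-wise `apply` is fine for small frames; for large ones, a vectorized alternative is to build a regex from the df1 values and use `df2["col_x"].str.extract(...)` to recover the matching `col1`.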
76,042,566
12,903,729
Run function when a file is externally closed python
<p>I am trying to run a function when a .txt file is closed. This file is opened in Notepad using <code>os.startfile()</code>. Before using this I tried using <code>subprocess</code> like this:</p> <pre><code>import subprocess

def on_file_closed():
    print(&quot;The external file has been closed&quot;)

path2file = &quot;path/to/your/file.txt&quot;
process = subprocess.Popen(['open', path2file])
process.wait()
on_file_closed()
</code></pre> <p>This code returns the error <code>[WinError 193] %1 is not a valid Win32 application</code>, apparently because the file is not a .exe file. I then tried the following code, which I believe is more promising:</p> <pre><code>import os
import win32file
import win32con

def on_file_closed():
    print(&quot;The external file has been closed&quot;)

path2file = &quot;path/to/your/file.txt&quot;
os.startfile(path2file)

file_handle = win32file.CreateFile(
    path2file,
    win32con.GENERIC_READ,
    win32con.FILE_SHARE_DELETE | win32con.FILE_SHARE_READ | win32con.FILE_SHARE_WRITE,
    None,
    win32con.OPEN_EXISTING,
    win32con.FILE_ATTRIBUTE_NORMAL,
    None
)

while True:
    results = win32file.ReadDirectoryChangesW(
        file_handle,
        4096,
        True,
        win32con.FILE_NOTIFY_CHANGE_FILE_NAME |
        win32con.FILE_NOTIFY_CHANGE_DIR_NAME |
        win32con.FILE_NOTIFY_CHANGE_LAST_WRITE,
        None,
        None
    )
    if results[0][0] == 0x00000002 or results[0][0] == 0x00000003:
        on_file_closed()
        break
</code></pre> <p>This code returns the error <code>(87, 'ReadDirectoryChangesW', 'The parameter is incorrect.')</code>. Is there a way for me to fix this error, or is there another, simpler method I can use? I'd appreciate any help.</p>
<python><file><winapi><operating-system>
2023-04-18 08:46:05
2
1,212
Thomas
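On the first approach in the question: `open` is a macOS launcher command, which is why Windows reports `WinError 193`. Launching the editor executable directly and waiting on it does work, with the caveat that classic Notepad blocks until closed while many modern editors hand the file to an existing process and return immediately. A generic sketch (the Windows command would be e.g. `["notepad.exe", path2file]`; the demo below uses a trivial subprocess so it runs anywhere):

```python
import subprocess
import sys


def open_and_wait(command, callback):
    # Block until the launched program exits, then fire the callback.
    proc = subprocess.Popen(command)
    proc.wait()
    callback()


closed = []
# Demo: a short-lived Python subprocess stands in for the editor.
open_and_wait([sys.executable, "-c", "pass"], lambda: closed.append(True))
```

For editors that detach, watching the file's parent *directory* with a library like `watchdog` (reacting to the last write) is usually more robust than handle-based tricks.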
76,042,527
4,417,586
GitHub Actions workflow for Python project on Google Cloud Build/Run
<p>I have a Python/Django project deployed on GCP Cloud Run. To deploy it, I use a gcloud config file <code>cloudbuild.yaml</code> which builds then pushes the image using <code>gcr.io/cloud-builders/docker</code>, then executes some project-specific Python commands using <code>gcr.io/google-appengine/exec-wrapper</code>.</p> <p>I'm trying to convert that as a GitHub Actions workflow but I don't know about the steps to execute project-specific Python commands in the container built by Google Cloud.</p> <p>What would be the equivalent steps than the ones using <code>gcr.io/google-appengine/exec-wrapper</code> for that in a GitHub Actions YAML workflow?</p> <p>The gcloud config file <code>cloudbuild.yaml</code> looks like this:</p> <pre><code># [START cloudrun_django_cloudmigrate] steps: - id: &quot;build image&quot; name: &quot;gcr.io/cloud-builders/docker&quot; args: [&quot;build&quot;, &quot;-t&quot;, &quot;gcr.io/${PROJECT_ID}/${_SERVICE_NAME}&quot;, &quot;.&quot;] - id: &quot;push image&quot; name: &quot;gcr.io/cloud-builders/docker&quot; args: [&quot;push&quot;, &quot;gcr.io/${PROJECT_ID}/${_SERVICE_NAME}&quot;] - id: &quot;apply migrations&quot; name: &quot;gcr.io/google-appengine/exec-wrapper&quot; args: [ &quot;-i&quot;, &quot;gcr.io/$PROJECT_ID/${_SERVICE_NAME}&quot;, &quot;-s&quot;, &quot;${PROJECT_ID}:${_REGION}:${_INSTANCE_NAME}&quot;, &quot;--&quot;, &quot;python&quot;, &quot;manage.py&quot;, &quot;migrate&quot;, ] secretEnv: ['ENVIRONMENT', 'SECRET_KEY', 'DATABASE_URL', 'GS_BUCKET_NAME'] - id: &quot;collect static&quot; name: &quot;gcr.io/google-appengine/exec-wrapper&quot; args: [ &quot;-i&quot;, &quot;gcr.io/$PROJECT_ID/${_SERVICE_NAME}&quot;, &quot;-s&quot;, &quot;${PROJECT_ID}:${_REGION}:${_INSTANCE_NAME}&quot;, &quot;--&quot;, &quot;python&quot;, &quot;manage.py&quot;, &quot;collectstatic&quot;, &quot;--verbosity&quot;, &quot;2&quot;, &quot;--no-input&quot;, ] secretEnv: ['ENVIRONMENT', 'SECRET_KEY', 'DATABASE_URL', 'GS_BUCKET_NAME'] 
availableSecrets: secretManager: - versionName: projects/$PROJECT_ID/secrets/ENVIRONMENT/versions/latest env: 'ENVIRONMENT' - versionName: projects/$PROJECT_ID/secrets/SECRET_KEY/versions/latest env: 'SECRET_KEY' - versionName: projects/$PROJECT_ID/secrets/DATABASE_URL/versions/latest env: 'DATABASE_URL' - versionName: projects/$PROJECT_ID/secrets/GS_BUCKET_NAME/versions/latest env: 'GS_BUCKET_NAME' substitutions: _INSTANCE_NAME: my-db-instance _REGION: europe-west9 _SERVICE_NAME: my-project images: - &quot;gcr.io/${PROJECT_ID}/${_SERVICE_NAME}&quot; # [END cloudrun_django_cloudmigrate] </code></pre>
<python><google-cloud-platform><continuous-integration><github-actions><gcloud>
2023-04-18 08:42:02
0
1,152
bolino
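There is no direct `exec-wrapper` equivalent among stock GitHub Actions steps: it runs a command inside the built image with a Cloud SQL connection wired up. Two hedged options are (a) replicate it with `docker run` plus the Cloud SQL Auth Proxy, or (b) keep `cloudbuild.yaml` unchanged and have the workflow merely trigger Cloud Build, which keeps the existing `exec-wrapper` and secret wiring. A sketch of option (b); action versions and secret names are placeholders:

```yaml
# hypothetical job steps in .github/workflows/deploy.yml
- uses: actions/checkout@v4
- uses: google-github-actions/auth@v2
  with:
    credentials_json: ${{ secrets.GCP_SA_KEY }}
- uses: google-github-actions/setup-gcloud@v2
# Reuse the existing cloudbuild.yaml, exec-wrapper steps included.
- run: gcloud builds submit --config cloudbuild.yaml
```

Option (b) is the smallest migration: GitHub Actions handles triggering and credentials, while the build, migrations, and `collectstatic` still run in Cloud Build exactly as before.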
76,042,472
14,896,591
Invalid redirect_uri in instagram oauth
<p>I am using python django and django-allauth to implement oauth to instagram. I think I have set all the required things in instagram app setting and got the application id and secret. <a href="https://i.sstatic.net/GdYxB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GdYxB.png" alt="enter image description here" /></a></p> <p>This is my app setting and I have added in my project to work with this configuration. This is my views.py</p> <pre><code>from django.views.generic import TemplateView from allauth.socialaccount.models import SocialAccount class ConnectInstagram(TemplateView): template_name = 'connect.html' </code></pre> <p>And connect.html file looks like this</p> <pre><code>{% load socialaccount %} &lt;a href=&quot;{% provider_login_url 'instagram' %}&quot;&gt; Connect with Instagram &lt;/a&gt; </code></pre> <p>I have already registered <code>social applications</code> in django admin site with site and application id and secret. This is my settings.py file</p> <pre><code>import os from dotenv import load_dotenv from pathlib import Path load_dotenv() # Build paths inside the project like this: BASE_DIR / 'subdir'. BASE_DIR = Path(__file__).resolve().parent.parent INSTAGRAM_APP_ID = os.getenv('INSTAGRAM_APP_ID') INSTAGRAM_APP_SECRET = os.getenv('INSTAGRAM_APP_SECRET') # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/4.2/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = os.getenv('SECRET_KEY') # SECURITY WARNING: don't run with debug turned on in production! 
DEBUG = True ALLOWED_HOSTS = ['41b3-107-155-105-218.ngrok-free.app'] CSRF_TRUSTED_ORIGINS = ['https://41b3-107-155-105-218.ngrok-free.app'] # Application definition INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', # 3rd party 'django.contrib.sites', 'allauth', 'allauth.account', 'allauth.socialaccount', 'allauth.socialaccount.providers.instagram', # installed app 'crafter.apps.CrafterConfig' ] MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] AUTHENTICATION_BACKENDS = [ 'django.contrib.auth.backends.ModelBackend', 'allauth.account.auth_backends.AuthenticationBackend', ] SITE_ID = 1 ROOT_URLCONF = 'main.urls' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [os.path.join(BASE_DIR, 'templates')], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'main.wsgi.application' # Database # https://docs.djangoproject.com/en/4.2/ref/settings/#databases DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': os.getenv('DB_NAME'), 'USER': os.getenv('DB_USER'), 'PASSWORD': os.getenv('DB_PASSWORD'), 'HOST': os.getenv('DB_HOST'), 'PORT': os.getenv('DB_PORT'), } } # Password validation # https://docs.djangoproject.com/en/4.2/ref/settings/#auth-password-validators AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 
'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] # Internationalization # https://docs.djangoproject.com/en/4.2/topics/i18n/ LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_TZ = True # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/4.2/howto/static-files/ STATIC_URL = 'static/' # Default primary key field type # https://docs.djangoproject.com/en/4.2/ref/settings/#default-auto-field DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField' ACCOUNT_EMAIL_VERIFICATION = 'none' SOCIALACCOUNT_PROVIDERS = { 'instagram': { 'SCOPE': [ 'user_profile', 'user_media' ], } } </code></pre> <p>After I login to instagram using the url in django, it goes to instagram oauth link but it says <code>{&quot;error_type&quot;: &quot;OAuthException&quot;, &quot;code&quot;: 400, &quot;error_message&quot;: &quot;Invalid redirect_uri&quot;}</code></p> <p>Could anyone help me? Appreciate.</p>
<python><django><instagram><django-allauth>
2023-04-18 08:35:40
0
315
Fortdev
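`Invalid redirect_uri` from Instagram generally means the redirect URI registered in the Meta/Instagram app settings does not match, byte for byte, the one allauth sends (scheme, host, path, and trailing slash all count). django-allauth serves provider callbacks at `/accounts/<provider>/login/callback/`, so for the ngrok host in the question the value to whitelist would presumably be:

```
https://41b3-107-155-105-218.ngrok-free.app/accounts/instagram/login/callback/
```

It is also worth confirming that the `Site` row referenced by `SITE_ID = 1` has its domain set to the ngrok host, since a mismatch there can produce a callback URL pointing at `example.com`.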
76,042,419
5,931,672
Generator map methods implementation
<p>So I have a generator of a sine wave that returns two values like <code>yield time, sine</code>. I would like to be able to use point functions to add stuff to this generator like the following:</p> <pre><code>my_generator.add_noise(mean=0, std=1).shuffle().to_pandas() </code></pre> <p>Where <code>add_noise</code> will for example add a random uniform noise to the <code>sine</code> value only while leaving the <code>time</code> untouched. The output of <code>my_generator.add_noise(mean=0, std=1)</code> would be another generator but with a noisy <code>sine</code>.</p> <p>My idea is to use it incrementally in a similar way to TensorFlow Dataset. However I don't find how to do it probably due to ignorance of words to google.</p> <p>Also, is this good practice? or is it a better method? I am doing a dataset generator to try some algorithms and I want it to be escalable. So if I change the generator to a logarithmic generator I dont need to change the noise function for example.</p> <p>I had a partial solution like this:</p> <pre><code>import math import random import pandas as pd class SineWaveGenerator: def __init__(self, freq, amplitude, sampling_rate, num_samples): self.freq = freq self.amplitude = amplitude self.sampling_rate = sampling_rate self.num_samples = num_samples def __iter__(self): for i in range(self.num_samples): time = i / self.sampling_rate sine = self.amplitude * math.sin(2 * math.pi * self.freq * time) yield time, sine def add_noise(self, noise_amplitude): for time, sine in self: noisy_sine = sine + noise_amplitude * random.uniform(-1, 1) yield time, noisy_sine def to_pandas(self): return pd.DataFrame(list(self), columns=[&quot;Time&quot;, &quot;Sine&quot;]) </code></pre> <p>This works with:</p> <pre><code>sin_generator = SineWaveGenerator(freq=10, amplitude=1, sampling_rate=1000, num_samples=1000) df = sin_generator.as_pandas() </code></pre> <p>or</p> <pre><code>sin_generator = SineWaveGenerator(freq=10, amplitude=1, sampling_rate=1000, 
num_samples=1000) noisy_sine_wave = sin_generator.add_noise(noise_amplitude=0.1) </code></pre> <p>but the following breaks:</p> <pre><code>sin_generator = SineWaveGenerator(freq=10, amplitude=1, sampling_rate=1000, num_samples=1000) df_noisy_sine_wave = sin_generator.add_noise(noise_amplitude=0.1).to_pandas() </code></pre> <p>Saying: <code>AttributeError: 'generator' object has no attribute 'to_pandas'</code></p>
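One possible way to get the chainable style the question asks for is to have every transform return a new wrapper object instead of a bare generator, so the result still exposes the same methods. This is a hedged sketch, not the asker's code; `to_list` stands in for `to_pandas` to keep the example dependency-free:

```python
import math
import random

class SignalPipeline:
    """Illustrative chainable wrapper: each transform wraps a fresh
    generator in a new SignalPipeline, so chaining keeps working."""

    def __init__(self, source):
        self._source = source  # any iterable of (time, value) pairs

    @classmethod
    def sine(cls, freq, amplitude, sampling_rate, num_samples):
        def gen():
            for i in range(num_samples):
                t = i / sampling_rate
                yield t, amplitude * math.sin(2 * math.pi * freq * t)
        return cls(gen())

    def __iter__(self):
        return iter(self._source)

    def add_noise(self, noise_amplitude):
        def gen():
            for t, v in self._source:
                yield t, v + noise_amplitude * random.uniform(-1, 1)
        return SignalPipeline(gen())  # re-wrap so chaining keeps working

    def to_list(self):
        # with pandas installed this could instead return
        # pd.DataFrame(list(self), columns=["Time", "Sine"])
        return list(self._source)
```

With this structure, `SignalPipeline.sine(...).add_noise(0.1).to_list()` chains as intended, and swapping the sine source for a logarithmic one leaves `add_noise` untouched.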
<python><generator>
2023-04-18 08:30:40
1
4,192
J Agustin Barrachina
76,042,381
4,045,275
PyCharm: formatting dataframes in SciView: what are the options? Can I separate the thousands with a comma?
<p>The Scientific View mode of PyCharm displays pandas dataframes and numpy arrays.</p> <p>At the bottom right there is an option to change the format (see screenshot below). E.g.</p> <p><code>%.2f</code> = 2 decimal digits</p> <p><code>%.3e</code> = scientific notation with 3 decimal digits. e.g. 3.456e-2 for 3.456%</p> <p>I have 3 questions:</p> <ol> <li>What are all the options, are they documented anywhere? I couldn't find anything on the PyCharm website nor online.</li> <li>Is there an option to separate the thousands with a comma? In Python I'd use <code>{0:,.2f}</code> to separate the thousands with a comma and show 2 decimal digits. I have tried many variants of this, none seems to work</li> <li>Is there an option for percentages? <code>%.2%</code> does not work</li> <li>Is it possible to format different columns of a dataframe differently?</li> </ol> <p>To be clear, I am talking about formatting the display in PyCharm only. I am not talking about how to modify the dataframe so that the number 1000.152 becomes the string &quot;1,000.15&quot;</p> <p><a href="https://i.sstatic.net/vfMLV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vfMLV.png" alt="Data view screenshot" /></a></p>
<python><pandas><pycharm>
2023-04-18 08:25:38
0
9,100
Pythonista anonymous
76,042,303
7,767,306
How to edit claims in ID Token supplied with access token in django Oauth toolkit?
<p>When working with Django OAuth Toolkit, using OIDC if you supply <code>openid</code> as the <code>scope</code> in request you get a <code>id_token</code> with access token. This ID Token can initially be used to identify the user you have got access token for and also create a session.</p> <p><a href="https://i.sstatic.net/qE3Bt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qE3Bt.png" alt="This is what i am getting in JWT ID Token" /></a></p> <pre><code>{ &quot;aud&quot;: &quot;audience&quot;, &quot;iat&quot;: 1681801110, &quot;at_hash&quot;: &quot;hash of included access token&quot;, &quot;sub&quot;: &quot;1&quot;, &quot;given_name&quot;: &quot;Dushyant&quot;, &quot;preferred_username&quot;: &quot;dsh&quot;, &quot;iss&quot;: &quot;authEndpointurl&quot;, &quot;exp&quot;: 1681837110, &quot;auth_time&quot;: 1681567804, &quot;jti&quot;: &quot;tokenid&quot; } </code></pre> <p>But I wish to modify the claims in this JWT ID Token, it reveals the primary key of the database of Authorization Server as the <code>unique id</code> of the user which i don't want in the claim called <code>sub</code>. I want to use another unique key as the value of <code>sub</code> claim. I tried overriding the response a Custom Backend:</p> <pre><code>class CustomOAuthBackend(OAuthLibCore): def __init__(self, server): super().__init__(server) def create_token_response(self, request): response_data = super().create_token_response(request) #Modify the response here return response_data </code></pre> <p>And added this as the Custom Backend in the settings. <code>&quot;OAUTH2_BACKEND_CLASS&quot;:&quot;pathto.CustomOAuthBackend&quot;,</code></p> <p>I haven't written any modification code for the response here since the token here is already created When I am calling the original create_token_response.</p> <p><strong>I want to override it in a way that I can modify the claims dict, I am not sure where it is getting created. We get the id_token all prepared. 
I want to override the process of creating id_token and change <code>sub</code> claim value.</strong></p> <p>Let me know if any more information is required.</p> <p>Update 1: While looking for a possible solution, found oauth2_provider.oauth2_validators have methods <code>get_oidc_claims()</code> and <code>get_id_token_dictionary()</code> which looks like could lead to a possible solution. Figuring out how to use it.</p> <p>There is a write up in the code</p> <blockquote> <p>This method is OPTIONAL and is NOT RECOMMENDED. <code>finalize_id_token</code> SHOULD be implemented instead. However, if you want a full control over the minting of the <code>id_token</code>, you MAY want to override <code>get_id_token</code> instead of using <code>finalize_id_token</code>.</p> </blockquote> <p>This write up talks about <code>get_id_token()</code> method. Going through the code I see that <code>finalize_id_token()</code> is the method where JWT is created from claims. This method calls <code>get_id_token_dictionary</code> then this internally calls <code>get_oidc_claims()</code> then this calls <code>get_claim_dict(request)</code>. get_claim_dict(request) only takes a request while others take token etc also. This method looks correct to override as this actually adds that <code>sub</code> claim to claims.</p> <pre><code>def get_claim_dict(self, request): if self._get_additional_claims_is_request_agnostic(): claims = {&quot;sub&quot;: lambda r: str(r.user.id)} else: claims = {&quot;sub&quot;: str(request.user.id)} # https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims if self._get_additional_claims_is_request_agnostic(): add = self.get_additional_claims() else: add = self.get_additional_claims(request) claims.update(add) return claims </code></pre> <p>I can get claims from this modify it and return the new updated claims.</p>
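To make the override pattern concrete, here is a minimal, self-contained sketch; the base class below is only a stand-in mock for `oauth2_provider.oauth2_validators.OAuth2Validator`, and `public_id` is a hypothetical opaque field on the user model (adjust to your schema):

```python
class BaseValidator:
    """Mock of django-oauth-toolkit's OAuth2Validator, for illustration only."""

    def get_claim_dict(self, request):
        # mirrors the default behaviour quoted above: "sub" is the DB primary key
        return {"sub": str(request.user.id)}


class CustomValidator(BaseValidator):
    def get_claim_dict(self, request):
        claims = super().get_claim_dict(request)
        # swap the database primary key for an opaque identifier
        claims["sub"] = str(request.user.public_id)  # hypothetical field
        return claims
```

With the real library, the `OAuth2Validator` subclass is registered via the `OAUTH2_PROVIDER["OAUTH2_VALIDATOR_CLASS"]` setting; note that in the request-agnostic mode the claims dict stores callables rather than plain strings, so the real override should account for `_get_additional_claims_is_request_agnostic()`.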
<python><django><django-oauth-toolkit>
2023-04-18 08:17:17
1
407
Dushyant Deshwal
76,042,259
784,433
high F1 score and low values in confusion matrix
<p>Consider that I have 2 classes of data and I am using sklearn for classification:</p> <pre class="lang-py prettyprint-override"><code>def cv_classif_wrapper(classifier, X, y, n_splits=5, random_state=42, verbose=0): ''' cross validation wrapper ''' cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=random_state) scores = cross_validate(classifier, X, y, cv=cv, scoring=[ 'f1_weighted', 'accuracy', 'recall_weighted', 'precision_weighted']) if verbose: print(f&quot;=====================&quot;) print(f&quot;Accuracy: {scores['test_accuracy'].mean():.3f} (+/- {scores['test_accuracy'].std()*2:.3f})&quot;) print(f&quot;Recall: {scores['test_recall_weighted'].mean():.3f} (+/- {scores['test_recall_weighted'].std()*2:.3f})&quot;) print(f&quot;Precision: {scores['test_precision_weighted'].mean():.3f} (+/- {scores['test_precision_weighted'].std()*2:.3f})&quot;) print(f&quot;F1: {scores['test_f1_weighted'].mean():.3f} (+/- {scores['test_f1_weighted'].std()*2:.3f})&quot;) return scores </code></pre> <p>and I call it with:</p> <pre class="lang-py prettyprint-override"><code>scores = cv_classif_wrapper(LogisticRegression(), Xs, y0, n_splits=5, verbose=1) </code></pre> <p>Then I calculate the confusion matrix with this:</p> <pre class="lang-py prettyprint-override"><code>model = LogisticRegression(random_state=42) y_pred = cross_val_predict(model, Xs, y0, cv=5) cm = sklearn.metrics.confusion_matrix(y0, y_pred) </code></pre> <p>The question is: I am getting 0.95 for the F1 score, but the confusion matrix is <a href="https://i.sstatic.net/DXegw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DXegw.png" alt="enter image description here" /></a></p> <p>Is this consistent with <code>F1 score=0.95</code>? If something is wrong, where is it? Note that there are 35 subjects in class 0 and 364 in class 1.</p> <pre><code>Accuracy: 0.952 (+/- 0.051) Recall: 0.952 (+/- 0.051) Precision: 0.948 (+/- 0.062) F1: 0.947 (+/- 0.059) </code></pre>
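The pattern described here, a high weighted F1 alongside a confusion matrix with weak minority-class cells, is exactly what class imbalance can produce, because `f1_weighted` weights each class's F1 by its support. A small sketch with a hypothetical confusion matrix using the question's class sizes (35 vs 364) shows the effect:

```python
def per_class_f1(cm):
    """cm[i][j] = number of true-class-i samples predicted as class j."""
    n = len(cm)
    f1s, support = [], []
    for c in range(n):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(n)) - tp
        fn = sum(cm[c]) - tp
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1s.append(f1)
        support.append(sum(cm[c]))
    return f1s, support

# hypothetical matrix: 35 samples of class 0, 364 of class 1
cm = [[20, 15],
      [4, 360]]
f1s, support = per_class_f1(cm)
weighted_f1 = sum(f * s for f, s in zip(f1s, support)) / sum(support)
```

Here class 0's F1 is only about 0.68, yet the weighted F1 is about 0.95 because class 1 carries 364 of the 399 samples. So such a matrix can indeed be consistent with `f1_weighted ≈ 0.95`; checking `f1_score(..., average=None)` or `average='macro'` exposes the minority-class performance.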
<python><scikit-learn>
2023-04-18 08:13:20
2
1,237
Abolfazl
76,042,258
8,973,609
Counting consecutive values without interruption in pandas/Python
<p>I am working on my pandas skills and have been stuck on an exercise for a while:</p> <p>I have created a DF with bike test data. Tests are ordered by time (ascending test_id). I would like to get the most recent fail test (max <code>test_id</code>) for each <code>bike</code> + <code>test_type</code> group and count how many consecutive fails appeared in total (without interruption from a pass test for same group). Additionally, I would like to have another <code>ends_with_pass</code> column which tells us if a fail <code>bike</code> + <code>test_type</code> group is followed up by a pass test. How can I achieve this? I have the following code but don't know how to proceed:</p> <pre><code>fail = data[data[&quot;test_result&quot;] == &quot;fail&quot;] max_fail_tests = fail.groupby([&quot;bike&quot;, &quot;test_type&quot;])[&quot;test_id&quot;].max().reset_index() res = pd.merge(max_failed_tests, data, on=[&quot;bike&quot;, &quot;test_type&quot;]) res[&quot;fails_in_a_row&quot;] = res.groupby( [&quot;bike&quot;, &quot;test_type&quot;] )[&quot;test_result&quot;].apply( lambda x: ( x.eq(&quot;fail&quot;) &amp; x.shift().ne(&quot;pass&quot;) ).cumsum() ) </code></pre> <p>Given input:</p> <pre><code>| test_id | bike | test_type | test_result | |---------|------|-----------|-------------| | 1 | a | slow | pass | | 1 | a | fast | pass | | 15 | c | fast | pass | | 15 | c | slow | pass | | 34 | b | slow | fail | &lt;- | 34 | b | fast | fail | &lt;- 1st fail for b | 36 | a | slow | pass | | 36 | a | fast | pass | | 37 | c | fast | fail | &lt;- | 37 | c | slow | fail | &lt;- 1st fail for c | 87 | c | fast | fail | &lt;- | 87 | c | slow | fail | &lt;- 2nd consecutive fail for c | 99 | b | slow | fail | &lt;- | 99 | b | fast | fail | &lt;- 2nd consecutive fail for b. 
Followed by pass, therefore `ends_with_pass` = `yes` | 124 | b | slow | pass | | 124 | b | fast | pass | </code></pre> <p>Expected output:</p> <pre><code>| bike | test_type | fails_in_a_row | ends_with_pass | |------|-----------|----------------|----------------| | b | fast | 2 | yes | | b | slow | 2 | yes | | c | fast | 2 | no | | c | slow | 2 | no | </code></pre>
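One way to get the expected output is to sort each `bike` + `test_type` group by `test_id`, strip any trailing passes, and count the trailing run of fails. This is a hedged sketch using plain group iteration rather than a fully vectorised solution:

```python
import pandas as pd

data = pd.DataFrame({
    "test_id": [1, 1, 15, 15, 34, 34, 36, 36, 37, 37, 87, 87, 99, 99, 124, 124],
    "bike": ["a", "a", "c", "c", "b", "b", "a", "a", "c", "c", "c", "c", "b", "b", "b", "b"],
    "test_type": ["slow", "fast", "fast", "slow", "slow", "fast", "slow", "fast",
                  "fast", "slow", "fast", "slow", "slow", "fast", "slow", "fast"],
    "test_result": ["pass", "pass", "pass", "pass", "fail", "fail", "pass", "pass",
                    "fail", "fail", "fail", "fail", "fail", "fail", "pass", "pass"],
})

rows = []
for (bike, test_type), g in data.groupby(["bike", "test_type"]):
    results = g.sort_values("test_id")["test_result"].tolist()
    ends_with_pass = "yes" if results[-1] == "pass" else "no"
    i = len(results)
    while i and results[i - 1] == "pass":   # skip trailing passes
        i -= 1
    fails = 0
    while i and results[i - 1] == "fail":   # count the final fail streak
        fails += 1
        i -= 1
    if fails:
        rows.append({"bike": bike, "test_type": test_type,
                     "fails_in_a_row": fails, "ends_with_pass": ends_with_pass})

result = pd.DataFrame(rows)
```

On the sample data this yields the four expected rows (`b`/`c` × `fast`/`slow`, each with `fails_in_a_row = 2`).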
<python><pandas>
2023-04-18 08:13:14
3
507
konichiwa
76,042,113
1,734,097
GSpread error when inserting to Google Sheet: Out of range float values are not JSON compliant
<p>i have the following functions:</p> <pre><code>import os,sys import configparser from mystrings import * from mypandas import * import gspread import gspread_dataframe as gd from oauth2client.service_account import ServiceAccountCredentials def send_data_gsheet(file_id,worksheet_name,df,mode='a'): gc = gspread_connect() sh = gc.open_by_key(file_id) ws = None try: ws = sh.worksheet(worksheet_name) except: #create worksheet if worksheet isn't exist print(&quot;Worksheet '{}' is not found. Creating one...&quot;.format(worksheet_name)) sh.add_worksheet(worksheet_name,1000,1000) ws = sh.worksheet(worksheet_name) print(&quot;Worksheet '{}' created.&quot;.format(worksheet_name)) print_text_line(1,&quot;Current worksheet is: '{}'&quot;.format(ws.title)) if(mode=='w'): ws.clear() gd.set_with_dataframe(worksheet=ws,dataframe=df,include_index=False,include_column_header=True,resize=True) return True elif(mode=='a'): ws.add_rows(df.shape[0]) gd.set_with_dataframe(worksheet=ws ,dataframe=df ,include_index=False ,include_column_header=False ,row=ws.row_count+1 ,resize=False) return True return False </code></pre> <p>I am able to connect with <code>gspread</code>. In the functions above i want to send dataframes to Google Sheets and create a sheet automatically if worksheet is not found. I am able to call the functions using the following commands:</p> <pre><code>send_data_gsheet(file_id = '1qDdOpKoYQ5R4VlP612_T845RJyQni2-V9IuJ676E07c' ,worksheet_name='brand2' ,df=df ,mode='w') </code></pre> <p>I am also able to create the new file using <code>add_worksheet()</code>. Now i am wondering why wouldn't it send the dataframes into Google Sheet? 
it returned the following error:</p> <blockquote> <p>Out of range float values are not JSON compliant</p> </blockquote> <p>How do i resolve this?</p> <p>Update: My top 5 dataframe:</p> <pre><code>Brand 0 A Plus 1 A Plus G-Strength 2 A Tube 3 A1 (A One) 4 AA </code></pre> <p>Replicate:</p> <p>here's my <code>gspread_connect</code> function:</p> <pre><code>def gspread_connect(): gc = None json_key = os.path.dirname(os.getcwd())+'\\keyz\gsheet-automations-383508-f832c5a2f142.json' print(&quot;json key: {}&quot;.format(json_key)) scope = ['https://spreadsheets.google.com/feeds','https://www.googleapis.com/auth/drive'] if os.path.exists(json_key): print_text_line(1,&quot;json key exists. Creating credentials...&quot;) credentials = ServiceAccountCredentials.from_json_keyfile_name(json_key, scope) print_text_line(1,&quot;credentials created. Authorizing GSpread...&quot;) try: gc = gspread.authorize(credentials) print('Gspread authorized') except Exception as e: print_text_line(1,&quot;Authorization with gspread failed.\n{}&quot;.format(e)) else: print_text_line(1,'Failed to get credentials. 
Key is not found.\nLocation: {}'.format(json_key)) return gc </code></pre> <p>here's my <code>get_data_gsheet</code> function:</p> <pre><code>def get_data_gsheet(file_id,worksheet_name): df = pd.DataFrame() try: gc = gspread_connect() sh = gc.open_by_key(file_id) try: ws = sh.worksheet(worksheet_name) print_text_line(1,&quot;Getting data from '{}' in '{}'...&quot;.format(worksheet_name,file_id)) data = ws.get_all_records() df = pd.DataFrame(data) print(&quot;Getting data completed.\n Total row: {} row(s)&quot;.format(df.shape[0])) except Exception as e: print(&quot;Failed to get data from '{}'&quot;.format(worksheet_name)) print(e) except Exception as e: print(&quot;Failed to connect with file_id '{}'&quot;.format(file_id)) print(e) return df </code></pre> <p>the functions above are located in <code>mygsuite.py</code>.</p> <p>Here's my source code that return error <code>InvalidJSONError: Out of range float values are not JSON compliant</code>:</p> <pre><code>import os,sys sys.path.append(os.path.dirname(os.getcwd())+'\\commons') from mygsuite import * from mypandas import * from mystrings import * import re </code></pre> <p>basic prerequisites to simplify my source code.</p> <pre><code>df = get_data_gsheet(file_id='1qDdOpKoYQ5R4VlP612_T845RJyQni2-V9IuJ676E07c' ,worksheet_name='brand') df.head() </code></pre> <p>Getting data from google sheet.</p> <pre><code>send_data_gsheet(file_id = '1tOXEnyNfEh9QC7f8FXJuEORIw4PpjR3sfp8tRmxmjT0' ,worksheet_name='brand2' ,df=df ,mode='w')? </code></pre> <p>Objective:</p> <ol> <li>i want to read and write from and to the same spreadsheet</li> <li>Read from <code>brand</code> sheet,</li> <li>write to <code>brand2</code> sheet. Create <code>brand2</code> sheet if it's not exist.</li> </ol>
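For context, this error is typically raised when the JSON sent to the Sheets API contains `NaN` or infinite floats (blank cells read back through pandas often become `NaN`), because the HTTP layer serialises with `allow_nan=False`. A hedged, dependency-free sketch of the failure and a cleanup step:

```python
import json
import math

# reproducing the error: NaN is not valid JSON when allow_nan is False
try:
    json.dumps([[float("nan")]], allow_nan=False)
except ValueError as exc:
    error_message = str(exc)  # matches the message quoted in the question

def sanitize(rows):
    """Replace non-finite float cell values with '' before upload (sketch)."""
    def fix(value):
        if isinstance(value, float) and not math.isfinite(value):
            return ""
        return value
    return [[fix(v) for v in row] for row in rows]

clean = sanitize([["A Plus", float("nan")], ["A Tube", 1.5]])
```

With pandas, the equivalent pre-upload cleanup would be something like `df.replace([np.inf, -np.inf], np.nan).fillna("")` before calling `set_with_dataframe`.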
<python><google-sheets><gspread>
2023-04-18 07:57:05
1
1,099
Cignitor
76,041,993
2,147,823
Running own Telegram bot based on PTB 13.x and local Telegram Bot API behind nginx
<p>Try to use local Telegram bot API to take benefits of larger files for my bot serving and helping users in supergroup with files <a href="https://core.telegram.org/bots/api#using-a-local-bot-api-server" rel="nofollow noreferrer">as described here</a> Build stack with Telegram Bot API, nginx as reverse proxy and my bot works ok, based on <a href="https://github.com/aiogram/telegram-bot-api/tree/master/example" rel="nofollow noreferrer">docker-compose template</a>:</p> <pre><code>version: '3.7' networks: int: name: internal driver: bridge services: api: hostname: local-api image: aiogram/telegram-bot-api:latest restart: always networks: - int environment: TELEGRAM_API_ID: BOT_ID TELEGRAM_API_HASH: BOT_HASH TELEGRAM_LOCAL: &quot;true&quot; TELEGRAM_HTTP_IP_ADDRESS: 0.0.0.0 TELEGRAM_HTTP_PORT: 8081 TELEGRAM_MAX_FILESIZE: 1000000000 # задава максимален размер на файла на 1GB volumes: - telegram-bot-api-data:/var/lib/telegram-bot-api nginx: hostname: nginx image: nginx:1.19-alpine restart: always depends_on: - api volumes: - telegram-bot-api-data:/var/lib/telegram-bot-api - ./nginx:/etc/nginx/conf.d/ ports: - &quot;88:88&quot; networks: - int telegram-bot: depends_on: - api - nginx hostname : telegram-bot networks: - int restart: unless-stopped build: context: /root/docker/telegram-bot dockerfile: Dockerfile image: telegram-bot/latest environment: LOCAL_API_URL: http://nginx:88/bot LOCAL_API_FILES: http://nginx:88/file/bot labels: - &quot;com.centurylinklabs.watchtower.enable=false&quot; volumes: - /root/docker/telegram-bot/:/app/ volumes: telegram-bot-api-data: </code></pre> <p>And accoring nginx config with redirects:</p> <pre><code># use $sanitized_request instead of $request to hide Telegram token log_format token_filter '$remote_addr - $remote_user [$time_local] ' '&quot;$sanitized_request&quot; $status $body_bytes_sent ' '&quot;$http_referer&quot; &quot;$http_user_agent&quot;'; upstream telegram-bot-api { server local-api:8081; } server { listen 88; 
chunked_transfer_encoding on; proxy_connect_timeout 600; proxy_send_timeout 600; proxy_read_timeout 600; send_timeout 600; client_max_body_size 2G; client_body_buffer_size 30M; keepalive_timeout 0; proxy_http_version 1.1; rewrite_log on; set $sanitized_request $request; if ( $sanitized_request ~ (\w+)\s(\/bot\d+):[-\w]+\/(\S+)\s(.*) ) { set $sanitized_request &quot;$1 $2:&lt;hidden-token&gt;/$3 $4&quot;; } access_log /var/log/nginx/access.log token_filter; error_log /var/log/nginx/error.log notice; location ~* \/file\/bot\d+:(.*) { rewrite ^/file\/bot(.*) /$1 break; proxy_http_version 1.1; try_files $uri @files; } location / { try_files $uri @api; proxy_http_version 1.1; } location @files { root /var/lib/telegram-bot-api; gzip on; gzip_vary on; gzip_proxied any; gzip_comp_level 6; gzip_buffers 64 8k; gzip_http_version 1.1; gzip_min_length 1100; } location @api { proxy_pass http://telegram-bot-api; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Host $server_name; } } </code></pre> <p>At last Telegram bot is setup to work with local Telegram Bot API:</p> <pre><code>.env LOCAL_API=True LOCAL_API_URL=http://nginx:88/bot LOCAL_API_FILES=http://nginx:88/file/bot </code></pre> <p>PTB telegram bot setup</p> <pre><code>updater = Updater(config[&quot;BOT_TOKEN&quot;], use_context = True, base_url = config[&quot;LOCAL_API_URL&quot;], base_file_url = config[&quot;LOCAL_API_FILES&quot;], arbitrary_callback_data = True) </code></pre> <p>Things works fine but file redirect wont work as expect - the limit is still on and can't get local link for received from bot file</p> <blockquote> <p>2023/04/04 11:15:52 [error] 30#30: *15 open() &quot;/var/lib/telegram-bot-api/TOKEN_ID:TOKEN_HASH/var/lib/telegram-bot-api/TOKEN_ID:TOKEN_HASH/documents/file_16.apk&quot; failed (2: No such file or directory), client: 172.19.0.4, server: _, request: &quot;GET 
/file/botTOKEN_ID%3ATOKEN_HASH//var/lib/telegram-bot-api/TOKEN_ID%3ATOKEN_HASH/documents/file_16.apk HTTP/1.1&quot;, host: &quot;nginx:81&quot; 172.19.0.4 - - [04/Apr/2023:11:15:52 +0000] &quot;GET /file/botTOKEN_ID%3ATOKEN_HASH//var/lib/telegram-bot-api/TOKEN_ID%3ATOKEN_HASH/documents/file_16.apk HTTP/1.1&quot; 404 154 &quot;-&quot; &quot;Python Telegram Bot (<a href="https://github.com/python-telegram-bot/python-telegram-bot" rel="nofollow noreferrer">https://github.com/python-telegram-bot/python-telegram-bot</a>)&quot;</p> </blockquote> <p>There is a &quot;duplicate path&quot; in the logs: <strong>/var/lib/telegram-bot-api/TOKEN_ID:TOKEN_HASH/var/lib/telegram-bot-api/TOKEN_ID:TOKEN_HASH/documents/file_16.apk</strong></p> <p>But despite that, the curl request works as expected:</p> <pre><code>curl -i http://nginx:81/file/botTOKEN_ID:TOKEN_HASH/documents/file_16.apk </code></pre> <blockquote> <p>HTTP/1.1 200 OK Server: nginx/1.19.10 Date: Tue, 04 Apr 2023 14:31:01 GMT Content-Type: application/octet-stream Content-Length: 10066456 Last-Modified: Tue, 04 Apr 2023 06:26:47 GMT Connection: close ETag: &quot;642bc327-999a18&quot; Accept-Ranges: bytes</p> <p>Warning: Binary output can mess up your terminal. Use &quot;--output -&quot; to tell Warning: curl to output it to your terminal anyway, or consider &quot;--output Warning: &quot; to save to a file.</p> </blockquote> <p>I can't figure out where the error comes from: the bot's own request to the local API, or the nginx redirects? Rewriting the bot for PTB 20.x is not an option; I don't have enough time for this pro bono project. Any ideas are welcome!</p> <p>Edit: <em><strong>The wrong path comes from <code>context.bot.get_file(file_id)</code> and the <code>file_path</code> it returns!</strong></em></p>
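The duplicated prefix in the log suggests that with `TELEGRAM_LOCAL` enabled, `get_file(...).file_path` comes back as an absolute path on the API server, which the bot then appends to the download base URL. One hedged workaround (paths and layout assumed from the log above, not from PTB documentation) is to strip the data-dir and token components before building the URL:

```python
def to_relative_file_path(file_path: str,
                          data_dir: str = "/var/lib/telegram-bot-api") -> str:
    """Turn an absolute local-Bot-API path like
    /var/lib/telegram-bot-api/<token>/documents/file_16.apk
    into the relative part expected by the /file/bot<token>/ route (sketch)."""
    if file_path.startswith(data_dir):
        remainder = file_path[len(data_dir):].lstrip("/")
        # drop the leading "<token>" directory component
        if "/" in remainder:
            return remainder.split("/", 1)[1]
        return remainder
    return file_path
```

The relative path can then be appended to `LOCAL_API_FILES` plus the token, mirroring the curl request that already works.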
<python><nginx><telegram><telegram-bot><python-telegram-bot>
2023-04-18 07:42:31
1
540
Topper
76,041,978
5,567,893
Does a VS Code extension affect the remote server?
<p>I'm wondering if installing a VS Code extension affects the remote server.</p> <p>I connected to the server using Remote SSH and tried debugging a Python file.<br /> When I tried to run the Python code in VS Code, an error message said I should install the Python extension to proceed.<br /> But I'm worried about whether the Python extension affects the version of Python on the remote server, because I share the server with others.<br /> Is the Python extension in VS Code independent of the Python installed on the system?</p> <p><a href="https://i.sstatic.net/0Vry7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0Vry7.png" alt="enter image description here" /></a></p>
<python><visual-studio-code><vscode-remote>
2023-04-18 07:40:17
1
466
Ssong
76,041,926
19,989,634
Using http methods with django rest framework
<p>I'm looking for some advice on how to correctly use my DRF API I have built, specifically the PATCH method at present. I am trying to formulate a script to patch the quantity of a Product / Cart Item that has been successfully added to the cart but I cannot get it to work.</p> <p>For context of my project: My goal of the scripts are to be able to create a Cart, use a button to either add and Item or remove an Item (-1 quantity), Once the customer clicks a proceed to payment button the cart items are then transfered into the Order model So I can then process the order. Once order status is complete, cart is deleted and recreated.</p> <p>Here is my current script:</p> <pre><code>const updateBtns = document.querySelectorAll(&quot;.update-cart&quot;); const user = &quot;{{request.user}}&quot;; for (let i = 0; i &lt; updateBtns.length; i++) { updateBtns[i].addEventListener(&quot;click&quot;, function () { event.preventDefault(); // prevent page from refreshing const productId = this.dataset.product; const action = this.dataset.action; console.log(&quot;productId:&quot;, productId, &quot;action:&quot;, action); console.log(&quot;USER:&quot;, user); createCart(productId, action); }); } function createCart(productId, action, cartId) { var csrftoken = getCookie(&quot;csrftoken&quot;); console.log(&quot;User is logged in, sending data...&quot;); fetch(&quot;/get_cart_id/&quot;) .then((response) =&gt; response.json()) .then((data) =&gt; { const cartId = data.cart_id; console.log(&quot;Cart ID:&quot;, cartId); // log cartId in console let method, url; if (action === &quot;add&quot;) { method = &quot;POST&quot;; url = `/api/carts/${cartId}/items/`; } else if (action === &quot;remove&quot;) { method = &quot;PATCH&quot;; url = `/api/carts/${cartId}/items/${THIS_IS_WHERE_I_NEED_ITEM_ID}/`; } else { console.log(`Invalid action: ${action}`); return; } fetch(url, { method: method, headers: { &quot;Content-Type&quot;: &quot;application/json&quot;, &quot;X-CSRFToken&quot;: 
csrftoken, }, body: JSON.stringify({ product_id: productId, quantity: 1, }), }) .then((response) =&gt; response.json()) .then((data) =&gt; { console.log(`Item ${action}ed in cart:`, data); }); }); } </code></pre> <p>Here are my serializers relating to my cart (please ask if you require more info to assist me):</p> <pre><code>class CartItemSerializer(serializers.ModelSerializer): product = SimpleProductSerializer() total_price = serializers.SerializerMethodField() def get_total_price(self, cart_item:CartItem): return cart_item.quantity * cart_item.product.price class Meta: model = CartItem fields = ['id', 'product', 'quantity', 'total_price'] class CartSerializer(serializers.ModelSerializer): id = serializers.UUIDField(read_only=True) items = CartItemSerializer(many=True, read_only=True) total_price = serializers.SerializerMethodField() def get_total_price(self, cart): return sum([item.quantity * item.product.price for item in cart.items.all()]) class Meta: model = Cart fields = ['id', 'items', 'total_price'] class AddCartItemSerializer(serializers.ModelSerializer): product_id = serializers.IntegerField() def validate_product_id(self, value): if not Product.objects.filter(pk=value).exists(): raise serializers.ValidationError('No product with the given ID was found.') return value def save(self, **kwargs): cart_id = self.context['cart_id'] product_id = self.validated_data['product_id'] quantity = self.validated_data['quantity'] try: cart_item = CartItem.objects.get(cart_id=cart_id, product_id=product_id) cart_item.quantity += quantity cart_item.save() self.instance = cart_item except CartItem.DoesNotExist: self.instance = CartItem.objects.create(cart_id=cart_id, **self.validated_data) return self.instance class Meta: model = CartItem fields = ['id', 'product_id', 'quantity'] class UpdateCartItemSerializer(serializers.ModelSerializer): class Meta: model = CartItem fields = ['quantity'] </code></pre> <p>Finally here are my viewsets:</p> <pre><code>class
CartViewSet(CreateModelMixin, RetrieveModelMixin, DestroyModelMixin, GenericViewSet): queryset = Cart.objects.prefetch_related('items__product').all() serializer_class = CartSerializer class CartItemViewSet(ModelViewSet): http_method_names = ['get', 'post', 'patch', 'delete'] def get_serializer_class(self): if self.request.method == 'POST': return AddCartItemSerializer elif self.request.method == 'PATCH': return UpdateCartItemSerializer return CartItemSerializer def get_serializer_context(self): return {'cart_id': self.kwargs['cart_pk']} def get_queryset(self): return CartItem.objects \ .filter(cart_id=self.kwargs['cart_pk']) \ .select_related('product') </code></pre> <p>This is my view that I created to retrieve the cart that is related to the current user which is created when they enter the home/gallery/product page:</p> <pre class="lang-py prettyprint-override"><code>def get_cart_id(request): if request.user.is_authenticated: cart = Cart.objects.get(user=request.user) cart_id = cart.id return JsonResponse({'cart_id': cart_id}) else: return JsonResponse({'error': 'User is not authenticated'}) </code></pre>
<javascript><python><django-rest-framework><django-rest-viewsets>
2023-04-18 07:33:22
1
407
David Henson
76,041,810
6,525,686
Spark Transitive Equality Mapping Usecase
<p>I have the following use case where based on the Comp_Key Column, we will have multiple Comp_Name's and their associated Comp_Value's and ID column, which is unique for a given Comp_Name - Comp_Value combination.</p> <p>The task is to reassign the generated ID column based on the combination of the Comp_value across the Comp_Names, where the transitive equality (if A=B and B=C, then A=c) rule applies.</p> <p>For ex, Comp_value '123' is assigned to multiple Comp_Name's - abc.com/Collin.com/Boyd.com/clad.com. In the same way, Comp_value '356' is assigned to both Comp_Name's - MB.com and Collin.com. so we need to treat all of these comp_names as a single entity and need to assign the same ID value and can take any of the ID's of the three Comp_Names and so can assign ID = 1.</p> <p>Similarly, Comp_value = 435 has matching Comp_Names - miro.com and xyz.com, so we can treat these two comp_names as same and assign any of the ID values and take 7 for both of these.</p> <p>Even though the Comp_Names Denton.com/Insta.com have the same Comp_key = A, we dont consider these in the above set, as they don't have any matching Comp_values w.r.t any of the above sets.</p> <pre><code>|Comp_KEY| Comp_NAME | Comp_Value | ID| |--------|------------|------------|---| | A | abc.com | 123 | 1 | | A | abc.com | | 1 | | A | MB inc | 356 | 2 | | A | Collin.com | 123 | 3 | | A | Collin.com | 356 | 3 | | A | Boyd.com | 123 | 4 | | A | Boyd.com | 790 | 4 | | A | clad.com | 123 | 5 | | A | clad.com | 2324 | 5 | | A | Denton.com | 555 | 6 | | A | Denton.com | 666 | 6 | | A | Micro.com | 435 | 7 | | A | Micro.com | 987 | 7 | | A | XYZ.com | 435 | 8 | | A | XYZ.com | 334 | 8 | | A | Insta.com | 777 | 9 | |--------|------------|------------|---| </code></pre> <p>The expected output with the new ID column will be..</p> <pre><code>|Comp_KEY| Comp_NAME | Comp_Value | ID| |--------|------------|------------|---| | A | abc.com | 123 | 1 | | A | abc.com | | 1 | | A | MB inc | 356 | 1 | | A | Collin.com 
| 123 | 1 | | A | Collin.com | 356 | 1 | | A | Boyd.com | 123 | 1 | | A | Boyd.com | 790 | 1 | | A | clad.com | 123 | 1 | | A | clad.com | 2324 | 1 | | A | Denton.com | 555 | 6 | | A | Denton.com | 666 | 6 | | A | Micro.com | 435 | 7 | | A | Micro.com | 987 | 7 | | A | XYZ.com | 435 | 7 | | A | XYZ.com | 334 | 7 | | A | Insta.com | 777 | 9 | |--------|------------|------------|---| </code></pre> <p>any help is appreciated. Thank You -^-.</p>
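This is a connected-components problem over a bipartite graph of `Comp_Name` and `Comp_Value` nodes; in Spark it is typically solved with GraphFrames' `connectedComponents`. The grouping logic itself can be sketched with a plain union-find over the sample rows above:

```python
def connected_name_groups(pairs):
    """pairs: (comp_name, comp_value) rows; names that share any value,
    directly or transitively, end up in the same group (union-find sketch)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for name, value in pairs:
        if value:  # ignore empty Comp_Value cells
            union(("name", name), ("value", value))

    groups = {}
    for name, _ in pairs:
        groups.setdefault(find(("name", name)), set()).add(name)
    return list(groups.values())

pairs = [
    ("abc.com", "123"), ("abc.com", ""), ("MB inc", "356"),
    ("Collin.com", "123"), ("Collin.com", "356"),
    ("Boyd.com", "123"), ("Boyd.com", "790"),
    ("clad.com", "123"), ("clad.com", "2324"),
    ("Denton.com", "555"), ("Denton.com", "666"),
    ("Micro.com", "435"), ("Micro.com", "987"),
    ("XYZ.com", "435"), ("XYZ.com", "334"),
    ("Insta.com", "777"),
]
groups = connected_name_groups(pairs)
```

Once the groups are known, the new `ID` can be assigned as, say, the minimum existing `ID` within each group and broadcast back to the rows with a join.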
<python><scala><apache-spark><pyspark><bigdata>
2023-04-18 07:19:52
1
319
marc
76,041,346
1,643,537
For loop overwrites value when appending to list in Python
<p>I am trying to create a nested dictionary out of a list of postcodes and locations. Somehow I keep overwriting the value in the loop leaving only the last value on the list.</p> <p>Code snippet</p> <pre><code>postcode_dict = {} postcode_list = [['01000', 'location1', 'district1', 'state1'], ['01000', 'location2', 'district1', 'state1'], ['01200', 'location3', 'district2', 'state2'], ['01200', 'location4', 'district2', 'state2']] for row in postcode_list: state = row[3] state_dict = {} if state not in state_dict: district_loc = [] temp = [{row[1]: row[0]}] district_loc.append(temp) state_dict[row[2]] = district_loc postcode_dict[row[3]] = {district: district_loc} print(postcode_dict) </code></pre> <p>Results:</p> <pre><code>{'state1': {'district1': [{'location2': '01000'}]}, 'state2': {'district2': [{'location4': '01200'}]}} </code></pre> <p>Expected Results:</p> <pre><code>{'state1': {'district1': [{'location1': '01000'}, {'location2': '01000'}]}, 'state2': {'district2': [{'location3': '01200'}, {'location4': '01200'}]}} </code></pre>
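For reference, the overwriting comes from re-creating `state_dict` (and the inner list) on every iteration. One common pattern that avoids this is `dict.setdefault`, which keeps existing entries and only creates the nested containers the first time a key is seen:

```python
postcode_list = [
    ["01000", "location1", "district1", "state1"],
    ["01000", "location2", "district1", "state1"],
    ["01200", "location3", "district2", "state2"],
    ["01200", "location4", "district2", "state2"],
]

postcode_dict = {}
for code, location, district, state in postcode_list:
    # create the state dict / district list only if missing, then append
    postcode_dict.setdefault(state, {}).setdefault(district, []).append({location: code})
```

This produces the expected nested structure shown in the question.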
<python><dictionary><for-loop>
2023-04-18 06:18:19
4
3,205
Cryssie
76,041,282
4,688,722
How to use view control in non-blocking visualization for zoom in?
<p>I followed <a href="https://stackoverflow.com/questions/70842338/playing-sequence-of-ply-files-in-open3d">this link</a> to visualize multiple .pcd as a video via open3d. It is working fine. However, I am unable to zoom in the generated output.</p> <p>I have tried to use <code>o3d.visualization.Visualizer.get_view_control()</code> but it doesn't zoom at all.</p> <p>My code is as follows</p> <pre><code>import os import fnmatch import numpy as np if __name__ == &quot;__main__&quot;: pcd_directory = './2023-04-07-17-17-59/2023-04-07-17-17-59_Clouds' # List all files in the directory files = os.listdir(pcd_directory) # Filter the .pcd files pcd_files = fnmatch.filter(files, '*.pcd') # Sort the .pcd files pcd_files.sort() # Creating an object of class visualizer vis = o3d.visualization.Visualizer() vis.create_window() # Setting the colors of visualizer ropt = vis.get_render_option() ropt.point_size = 1.0 ropt.background_color = np.asarray([0, 0, 0]) ropt.light_on = False # Reading first Pcd and adding to geometry pcd = o3d.io.read_point_cloud(f'./2023-04-07-17-17-59/2023-04-07-17-17-59_Clouds/0000000001.pcd') vis.add_geometry(pcd) # zoom the view ctr = vis.get_view_control() ctr.set_zoom(4) for i in range(1, len(pcd_files) - 550): pcd.points = o3d.io.read_point_cloud(f'./2023-04-07-17-17-59/2023-04-07-17-17-59_Clouds/{i:010d}.pcd').points vis.update_geometry(pcd) vis.poll_events() vis.update_renderer() vis.destroy_window() </code></pre> <p>I have attached my current results and expected results below.</p> <p><a href="https://i.sstatic.net/d3UG8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d3UG8.png" alt="Current results" /></a></p> <p><a href="https://i.sstatic.net/SZlxJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SZlxJ.png" alt="expected results" /></a></p>
<python><3d><open3d>
2023-04-18 06:07:15
1
926
Ammar Ul Hassan
76,041,179
2,981,639
python -m build including additional folders
<p>I have a src-layout package with pyproject.toml and setup.cfg which I'm building using <code>python -m build</code></p> <p>It builds and installs fine, but when I open the archive file it includes the contents of a bunch of additional folders that I don't want, i.e.</p> <p>my project has the following structure</p> <pre><code>project_root_directory ├── pyproject.toml # AND/OR setup.cfg, setup.py ├── datasets/ ├── model/ ├── ... └── src/ └── mypkg/ ├── __init__.py ├── ... ├── module.py </code></pre> <p>My setup.cfg is</p> <pre><code>[options] packages = find: package_dir = =src zip_safe = False install_requires = torch==2.0.0 ... [options.packages.find] where = src include = mypkg </code></pre> <p>pyproject.toml</p> <pre><code>[build-system] requires = [&quot;setuptools&gt;=40.8.0&quot;, &quot;wheel&quot;, &quot;setuptools_scm[toml]&gt;=6.0&quot;] build-backend = &quot;setuptools.build_meta&quot; [tool.setuptools_scm] write_to = &quot;src/warpspeed_multiclass/_version.py&quot; </code></pre> <p>setup.py</p> <pre><code>from setuptools import setup if __name__ == '__main__': setup() </code></pre> <p>As well as the package, all the files/folders from the <code>project_root_directory</code> are included, i.e. model, data etc. I don't want this, they're large and I'm deploying to sagemaker so I only want the source - the model is loaded from s3 and the data is no longer required (and in general might be sensitive)</p> <p>I've tried to add <code>exclude</code> to <code>setup.cfg</code> but my attempt failed. How do I ensure I only get the contents of <code>mypkg</code> and the associated metadata in the <code>tar.gz</code> produced by <code>python -m build</code>?</p>
<python><setup.py>
2023-04-18 05:54:09
1
2,963
David Waterworth
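A likely contributor to the behaviour above: with `setuptools_scm` enabled, every file tracked by git is pulled into the sdist by default. One common remedy (a sketch, assuming the folder names shown in the question's tree) is a `MANIFEST.in` that prunes the unwanted directories:

```
# MANIFEST.in (illustrative; adjust to the real folder names)
prune datasets
prune model
```

This affects only the `tar.gz` sdist; the wheel is driven by the `packages`/`package_dir` settings already shown.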
76,041,149
5,677,298
Generators seem to skip a step when sending data
<p>I am trying to understand why the below is happening.</p> <p>Let us assume that we have this simple <code>is_even</code> function:</p> <pre class="lang-py prettyprint-override"><code> def is_even(num): return num % 2 == 0 </code></pre> <p>And then we also have this generator function:</p> <pre class="lang-py prettyprint-override"><code> def get_evens(): start_value = 0 while True: if is_even(start_value): incoming = yield start_value if incoming is not None: start_value = incoming start_value += 1 </code></pre> <p>The above should simply yield the next available even number.</p> <p>So if we do something like this:</p> <pre class="lang-py prettyprint-override"><code> evens_gen = get_evens() print(next(evens_gen)) print(next(evens_gen)) print(next(evens_gen)) print(next(evens_gen)) </code></pre> <p>The output would be:</p> <pre><code>0 2 4 6 </code></pre> <p>Now assume that we want to send some data to the generator in order to move to a specific number check. e.g:</p> <pre class="lang-py prettyprint-override"><code> evens_gen.send(500) </code></pre> <p>And then we attempt to call <code>next</code> again</p> <pre class="lang-py prettyprint-override"><code> print(next(evens_gen)) </code></pre> <p>The output would be</p> <pre><code>504 </code></pre> <p>What I noticed while debugging is that the value of <code>incoming</code> is still set to 500 when the loop is reaching the check for 502. Then it turns to None.</p> <p>Still I cannot grasp why this is happening. Should not the first <code>next</code> after <code>send</code> be 502?</p> <p>Edit: Forgot to mention that this is checked with Python versions 3.8 and 3.10</p>
<python><python-3.x><generator>
2023-04-18 05:49:41
0
702
kakou
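What the debugger shows is consistent with `send` itself resuming the generator: `send(500)` returns the next yielded value (502), and only the following `next` produces 504. Rerunning the question's own code makes this visible:

```python
def is_even(num):
    return num % 2 == 0

def get_evens():
    start_value = 0
    while True:
        if is_even(start_value):
            incoming = yield start_value
            if incoming is not None:
                start_value = incoming
        start_value += 1

evens_gen = get_evens()
first_four = [next(evens_gen) for _ in range(4)]   # [0, 2, 4, 6]
# send() resumes the generator AND returns the next yielded value:
sent_result = evens_gen.send(500)                  # 502 is consumed right here
after_send = next(evens_gen)                       # so the next next() gives 504
```

So 502 is not skipped; it is the return value of the `send(500)` call, which the original snippet discards.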
76,041,127
14,103,418
Why is version number 0.10.0 considered older than 0.9.0 in setuptools_scm?
<p>I have a Python package that is going through frequent changes, which has brought us to version <code>0.9.3</code> currently. My team is not confident enough to bump it to <code>1.0.0</code> yet.</p> <p>The team agreed on version number <code>0.10.0</code>, but why does <code>setuptools_scm</code> seem to consider <code>0.10.0</code> to be earlier than <code>0.9.3</code>?</p> <p>I tried tagging with <code>git tag</code> and checked the list:</p> <pre><code>$ git tag 0.10.0 $ git tag --list 0.10.0 0.2.0 0.2.1 0.2.2 0.3.0 0.5.0 0.7.0 0.7.1 0.8.0 0.8.1 0.9.0 0.9.1 0.9.2 0.9.3 </code></pre> <p>I was expecting <code>0.10.0</code> to be listed after <code>0.9.3</code>.</p>
<python><version-control><git-tag><setuptools-scm>
2023-04-18 05:46:09
2
380
yuenherny
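The listing above is plain string (lexicographic) ordering, which `git tag --list` uses by default: as text, `"1"` sorts before `"9"`, so `0.10.0` lands before `0.9.3`. A small sketch contrasting string order with numeric version order:

```python
tags = ["0.9.3", "0.10.0", "0.2.0"]

# default string sort: character by character, so "0.10.0" < "0.2.0" < "0.9.3"
lexicographic = sorted(tags)

# numeric tuples give the intended version ordering
numeric = sorted(tags, key=lambda t: tuple(int(p) for p in t.split(".")))
```

On the git side, `git tag --list --sort=v:refname` sorts by version number instead of by string.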
76,040,957
610,569
How to use pipeline for multiple target language translations with M2M model in Huggingface?
<p>The <a href="https://huggingface.co/facebook/m2m100_418M" rel="nofollow noreferrer">M2M model</a> is trained on ~100 languages and able to translate different languages, e.g.</p> <pre><code>from transformers import pipeline m2m100 = pipeline('translation', 'facebook/m2m100_418M', src_lang='en', tgt_lang=&quot;de&quot;) m2m100([&quot;hello world&quot;, &quot;foo bar&quot;]) </code></pre> <p>[out]:</p> <pre><code>[{'translation_text': 'Hallo Welt'}, {'translation_text': 'Die Fu Bar'}] </code></pre> <p>But to enable multiple target translations, user have to initialize multiple pipelines:</p> <pre><code>from transformers import pipeline m2m100_en_de = pipeline('translation', 'facebook/m2m100_418M', src_lang='en', tgt_lang=&quot;de&quot;) m2m100_en_fr = pipeline('translation', 'facebook/m2m100_418M', src_lang='en', tgt_lang=&quot;fr&quot;) print(m2m100_en_de([&quot;hello world&quot;, &quot;foo bar&quot;])) print(m2m100_en_fr([&quot;hello world&quot;, &quot;foo bar&quot;])) </code></pre> <p>[out]:</p> <pre><code>[{'translation_text': 'Hallo Welt'}, {'translation_text': 'Die Fu Bar'}] [{'translation_text': 'Bonjour Monde'}, {'translation_text': 'Le bar Fou'}] </code></pre> <h3>Is there a way to use a single pipeline for multiple target languages and/or source languages for the M2M model?</h3> <p>I've tried this:</p> <pre><code>from transformers import pipeline m2m100_en_defr = pipeline('translation', 'facebook/m2m100_418M', src_lang='en', tgt_lang=[&quot;de&quot;, &quot;fr&quot;]) print(m2m100_en_defr([&quot;hello world&quot;, &quot;foo bar&quot;])) </code></pre> <p>But it throws the error:</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) /tmp/ipykernel_28/3374873260.py in &lt;module&gt; 3 m2m100_en_defr = pipeline('translation', 'facebook/m2m100_418M', src_lang='en', tgt_lang=[&quot;de&quot;, &quot;fr&quot;]) 4 ----&gt; 5 print(m2m100_en_defr([&quot;hello world&quot;, &quot;foo 
bar&quot;])) /opt/conda/lib/python3.7/site-packages/transformers/pipelines/text2text_generation.py in __call__(self, *args, **kwargs) 364 token ids of the translation. 365 &quot;&quot;&quot; --&gt; 366 return super().__call__(*args, **kwargs) /opt/conda/lib/python3.7/site-packages/transformers/pipelines/text2text_generation.py in __call__(self, *args, **kwargs) 163 &quot;&quot;&quot; 164 --&gt; 165 result = super().__call__(*args, **kwargs) 166 if ( 167 isinstance(args[0], list) /opt/conda/lib/python3.7/site-packages/transformers/pipelines/base.py in __call__(self, inputs, num_workers, batch_size, *args, **kwargs) 1088 inputs, num_workers, batch_size, preprocess_params, forward_params, postprocess_params 1089 ) -&gt; 1090 outputs = list(final_iterator) 1091 return outputs 1092 else: /opt/conda/lib/python3.7/site-packages/transformers/pipelines/pt_utils.py in __next__(self) 122 123 # We're out of items within a batch --&gt; 124 item = next(self.iterator) 125 processed = self.infer(item, **self.params) 126 # We now have a batch of &quot;inferred things&quot;. /opt/conda/lib/python3.7/site-packages/transformers/pipelines/pt_utils.py in __next__(self) 122 123 # We're out of items within a batch --&gt; 124 item = next(self.iterator) 125 processed = self.infer(item, **self.params) 126 # We now have a batch of &quot;inferred things&quot;. 
/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self) 626 # TODO(https://github.com/pytorch/pytorch/issues/76750) 627 self._reset() # type: ignore[call-arg] --&gt; 628 data = self._next_data() 629 self._num_yielded += 1 630 if self._dataset_kind == _DatasetKind.Iterable and \ /opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self) 669 def _next_data(self): 670 index = self._next_index() # may raise StopIteration --&gt; 671 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 672 if self._pin_memory: 673 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) /opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index) 56 data = self.dataset.__getitems__(possibly_batched_index) 57 else: ---&gt; 58 data = [self.dataset[idx] for idx in possibly_batched_index] 59 else: 60 data = self.dataset[possibly_batched_index] /opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in &lt;listcomp&gt;(.0) 56 data = self.dataset.__getitems__(possibly_batched_index) 57 else: ---&gt; 58 data = [self.dataset[idx] for idx in possibly_batched_index] 59 else: 60 data = self.dataset[possibly_batched_index] /opt/conda/lib/python3.7/site-packages/transformers/pipelines/pt_utils.py in __getitem__(self, i) 17 def __getitem__(self, i): 18 item = self.dataset[i] ---&gt; 19 processed = self.process(item, **self.params) 20 return processed 21 /opt/conda/lib/python3.7/site-packages/transformers/pipelines/text2text_generation.py in preprocess(self, truncation, src_lang, tgt_lang, *args) 313 if getattr(self.tokenizer, &quot;_build_translation_inputs&quot;, None): 314 return self.tokenizer._build_translation_inputs( --&gt; 315 *args, return_tensors=self.framework, truncation=truncation, src_lang=src_lang, tgt_lang=tgt_lang 316 ) 317 else: /opt/conda/lib/python3.7/site-packages/transformers/models/m2m_100/tokenization_m2m_100.py in 
_build_translation_inputs(self, raw_inputs, src_lang, tgt_lang, **extra_kwargs) 351 self.src_lang = src_lang 352 inputs = self(raw_inputs, add_special_tokens=True, **extra_kwargs) --&gt; 353 tgt_lang_id = self.get_lang_id(tgt_lang) 354 inputs[&quot;forced_bos_token_id&quot;] = tgt_lang_id 355 return inputs /opt/conda/lib/python3.7/site-packages/transformers/models/m2m_100/tokenization_m2m_100.py in get_lang_id(self, lang) 379 380 def get_lang_id(self, lang: str) -&gt; int: --&gt; 381 lang_token = self.get_lang_token(lang) 382 return self.lang_token_to_id[lang_token] 383 /opt/conda/lib/python3.7/site-packages/transformers/models/m2m_100/tokenization_m2m_100.py in get_lang_token(self, lang) 376 377 def get_lang_token(self, lang: str) -&gt; str: --&gt; 378 return self.lang_code_to_token[lang] 379 380 def get_lang_id(self, lang: str) -&gt; int: TypeError: unhashable type: 'list' </code></pre> <p>One would have expected the output to look something like this instead:</p> <pre><code>{&quot;de&quot;: [{'translation_text': 'Hallo Welt'}, {'translation_text': 'Die Fu Bar'}] &quot;fr&quot;: [{'translation_text': 'Bonjour Monde'}, {'translation_text': 'Le Foo Bar'}] } </code></pre> <h3>If we use multiple pipelines, are the model mmap and shared? Will it initialize multiple models with multiple tokenizer pairs? Or will it initialize a single model with multiple tokenizers?</h3>
<python><nlp><huggingface-transformers><machine-translation><large-language-model>
2023-04-18 03:45:45
1
123,325
alvas
76,040,912
2,604,247
How to Concatenate PDFs via Pikepdf and Python without Unnecessary Disk Read-Write?
<p><strong>Current technology stack</strong></p> <ul> <li>img2pdf==0.4.4</li> <li>pikepdf==7.1.2</li> <li>Python 3.10</li> <li>Ubuntu 22.04</li> </ul> <p><strong>The requirement</strong></p> <p>A pdf file (let's call it <code>static.pdf</code>) exists in the disk. Another pdf (let's call it <code>dynamic.pdf</code>) is being generated dynamically in <em>memory</em> with img2pdf library, depending on some user input parameters.</p> <p>The task is to concatenate these two pdfs as a single one (<code>static.pdf</code>, then <code>dynamic.pdf</code>) and send it as an email attachment via the SMTP library.</p> <p><strong>Current Solution I am Employing</strong></p> <p>This is based on the <a href="https://pikepdf.readthedocs.io/en/latest/topics/pages.html#merge-concatenate-pdf-from-several-pdfs" rel="nofollow noreferrer">pikepdf documentation</a>.</p> <ul> <li>Dump <code>dynamic.pdf</code> in the disk</li> <li>Read <code>static.pdf</code> from disk with pikepdf</li> <li>Read <code>dynamic.pdf</code> from disk with pikepdf</li> <li>Concatenate them with list-like API provided by pikepdf, let's call this <code>final.pdf</code>.</li> <li>Dump <code>final.pdf</code> on disk with pikepdf api</li> <li>Read it from disk with <code>open(file='final.pdf', mode='rb')</code> as bytes</li> <li>Attach the bytes to the email message</li> </ul> <p><strong>What I want</strong></p> <p>Remove all the unnecessary disk-I/O when I already have <code>dynamic.pdf</code> in memory, and the final result is needed to be attached to email as bytes (no need to persist on disk). So ideally, the only disk operation should be reading <code>static.pdf</code>.</p> <p>But I cannot find much information on the pikepdf site about in-memory concatenation. 
Moreover, I am also not certain whether a <code>pikepdf.Pdf</code> object can expose the <em>exact same bytes</em> as what I would get if I dump the pdf on disk and then read it using Python's native <code>open</code> function.</p> <p>So any ideas around this would be helpful, even if there are other libraries that allow this functionality. The constraints on other libraries would be:</p> <ul> <li>Plays well with my tech stack (python, Ubuntu and also needs to run on windows)</li> <li>FOSS, and trustworthy enough</li> </ul>
<python><pdf><qpdf><pikepdf>
2023-04-18 03:36:34
1
1,720
Della
76,040,850
610,569
Can mT5 model on Huggingface be used for machine translation?
<p>The <code>mT5</code> model is pretrained on the mC4 corpus, covering 101 languages:</p> <blockquote> <p>Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.</p> </blockquote> <h3>Can it do machine translation?</h3> <p>Many users have tried something like this but it fails to generate a translation:</p> <pre><code>from transformers import MT5ForConditionalGeneration, T5Tokenizer model = MT5ForConditionalGeneration.from_pretrained(&quot;google/mt5-small&quot;) tokenizer = T5Tokenizer.from_pretrained(&quot;google/mt5-small&quot;) article = &quot;translate to french: The capital of France is Paris.&quot; batch = tokenizer.prepare_seq2seq_batch(src_texts=[article], return_tensors=&quot;pt&quot;) output_ids = model.generate(input_ids=batch.input_ids, num_return_sequences=1, num_beams=8, length_penalty=0.1) tokenizer.decode(output_ids[0]) </code></pre> <p>[out]:</p> <pre><code>&gt;&gt;&gt; &lt;pad&gt; &lt;extra_id_0&gt;&lt;/s&gt; </code></pre> <h3>How do we make the mt5 model do machine translation?</h3>
<python><nlp><huggingface-transformers><machine-translation><large-language-model>
2023-04-18 03:20:07
1
123,325
alvas
76,040,803
14,923,024
Python polars dataframe transformation: from flat dataframe to one dataframe per category
<p>I have a flat dataframe representing data in multiple databases, where each database has multiple tables, each table has multiple columns, and each column has multiple values:</p> <pre class="lang-py prettyprint-override"><code>df = pl.DataFrame( { 'db_id': [&quot;db_1&quot;, &quot;db_1&quot;, &quot;db_1&quot;, &quot;db_2&quot;, &quot;db_2&quot;, &quot;db_2&quot;], 'table_id': ['tab_1', 'tab_1', 'tab_2', 'tab_1', 'tab_2', 'tab_2'], 'column_id': ['col_1', 'col_2', 'col_1', 'col_2', 'col_1', 'col_3'], 'data': [[1, 2, 3], [10, 20, 30], [4, 5], [40, 50], [6], [60]] } ) </code></pre> <pre><code>shape: (6, 4) ┌───────┬──────────┬───────────┬──────────────┐ │ db_id ┆ table_id ┆ column_id ┆ data │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str ┆ list[i64] │ ╞═══════╪══════════╪═══════════╪══════════════╡ │ db_1 ┆ tab_1 ┆ col_1 ┆ [1, 2, 3] │ │ db_1 ┆ tab_1 ┆ col_2 ┆ [10, 20, 30] │ │ db_1 ┆ tab_2 ┆ col_1 ┆ [4, 5] │ │ db_2 ┆ tab_1 ┆ col_2 ┆ [40, 50] │ │ db_2 ┆ tab_2 ┆ col_1 ┆ [6] │ │ db_2 ┆ tab_2 ┆ col_3 ┆ [60] │ └───────┴──────────┴───────────┴──────────────┘ </code></pre> <p>As you can see, different databases share some tables, and tables share some columns.</p> <p>I want to extract one dataframe per <code>table_id</code> from the above dataframe, where the extracted dataframe is transposed and exploded, i.e. the extracted dataframe should have as its columns the set of <code>column_id</code>s corresponding to the specific <code>table_id</code> (plus <code>db_id</code>), with values being the corresponding values in <code>data</code>. 
That is, for the above example, the result should be a dictionary with keys &quot;tab_1&quot; and &quot;tab_2&quot;, and values being the following dataframes:</p> <p>tab_1:</p> <pre><code>shape: (5, 3) ┌───────┬───────┬───────┐ │ db_id ┆ col_1 ┆ col_2 │ │ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 │ ╞═══════╪═══════╪═══════╡ │ db_1 ┆ 1 ┆ 10 │ │ db_1 ┆ 2 ┆ 20 │ │ db_1 ┆ 3 ┆ 30 │ │ db_2 ┆ null ┆ 40 │ │ db_2 ┆ null ┆ 50 │ └───────┴───────┴───────┘ </code></pre> <p>tab_2:</p> <pre><code>shape: (3, 3) ┌───────┬───────┬───────┐ │ db_id ┆ col_1 ┆ col_3 │ │ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 │ ╞═══════╪═══════╪═══════╡ │ db_1 ┆ 4 ┆ null │ │ db_1 ┆ 5 ┆ null │ │ db_2 ┆ 6 ┆ 60 │ └───────┴───────┴───────┘ </code></pre> <p>I have a working function that does just that (see below), but it's a bit slow. So, I'm wondering if there is a faster way to achieve this?</p> <p>This is my current solution:</p> <pre class="lang-py prettyprint-override"><code>def dataframe_per_table( df: pl.DataFrame, col_name__table_id: str = &quot;table_id&quot;, col_name__col_id: str = &quot;column_id&quot;, col_name__values: str = &quot;data&quot;, col_name__other_ids: Sequence[str] = (&quot;db_id&quot;, ) ) -&gt; Dict[str, pl.DataFrame]: col_name__other_ids = list(col_name__other_ids) table_dfs = {} for (table_name, *_), table in df.group_by( [col_name__table_id] + col_name__other_ids ): new_table = table.select( pl.col(col_name__other_ids + [col_name__col_id, col_name__values]) ).pivot( on=col_name__col_id, index=col_name__other_ids, values=col_name__values, aggregate_function=None, ).explode( columns=table[col_name__col_id].unique().to_list() ) table_dfs[table_name] = pl.concat( [table_dfs.setdefault(table_name, pl.DataFrame()), new_table], how=&quot;diagonal&quot; ) return table_dfs </code></pre> <h1>Update: Benchmarking/Summary of Answers</h1> <p>On a dataframe with ~2.5 million rows, my original solution takes about <strong>70 minutes</strong> to complete.</p> <p><em>Disclaimer: since the execution 
times were too long, I only timed each solution once (i.e. 1 run, 1 loop), so the margin of error is large.</em></p> <p>However, right after posting the question, I realized I can make it much faster just by performing the <code>concat</code> in a separate loop, so that each final dataframe is created by one <code>concat</code> operation instead of many:</p> <pre class="lang-py prettyprint-override"><code>def dataframe_per_table_v2( df: pl.DataFrame, col_name__table_id: str = &quot;table_id&quot;, col_name__col_id: str = &quot;column_id&quot;, col_name__values: str = &quot;data&quot;, col_name__other_ids: Sequence[str] = (&quot;db_id&quot;, ) ) -&gt; Dict[str, pl.DataFrame]: col_name__other_ids = list(col_name__other_ids) table_dfs = {} for (table_name, *_), table in df.group_by( [col_name__table_id] + col_name__other_ids ): new_table = table.select( pl.col(col_name__other_ids + [col_name__col_id, col_name__values]) ).pivot( on=col_name__col_id, index=col_name__other_ids, values=col_name__values, aggregate_function=None, ).explode( columns=table[col_name__col_id].unique().to_list() ) # Up until here nothing is changed. # Now, instead of directly concatenating, we just # append the new dataframe to a list table_dfs.setdefault(table_name, list()).append(new_table) # Now, in a separate loop, each final dataframe is created # by concatenating all collected dataframes once. for table_name, table_sub_dfs in table_dfs.items(): table_dfs[table_name] = pl.concat( table_sub_dfs, how=&quot;diagonal&quot; ) return table_dfs </code></pre> <p>This reduced the time from 70 min to about <strong>10 min</strong>; much better, but still too long.</p> <p>In comparison, the <a href="https://stackoverflow.com/a/76043979/14923024">answer by @jqurious</a> takes about <strong>5 min</strong>. 
It needs an additional step at the end to remove the unwanted columns and get a dict from the list, but it's still much faster.</p> <p>However, the winner is by far the <a href="https://stackoverflow.com/a/76055210/14923024">answer by @Dean MacGregor</a>, taking only <strong>50 seconds</strong> and directly producing the desired output.</p> <p>Here is their solution re-written as a function:</p> <pre class="lang-py prettyprint-override"><code>def dataframe_per_table_v3( df: pl.DataFrame, col_name__table_id: str = &quot;table_id&quot;, col_name__col_id: str = &quot;column_id&quot;, col_name__values: str = &quot;data&quot;, col_name__other_ids: Sequence[str] = (&quot;db_id&quot;, ) ) -&gt; Dict[str, pl.DataFrame]: table_dfs = { table_id: df.filter( pl.col(col_name__table_id) == table_id ).with_columns( idx_data=pl.int_ranges(pl.col(col_name__values).list.len()) ).explode( [col_name__values, 'idx_data'] ).pivot( on=col_name__col_id, values=col_name__values, index=[*col_name__other_ids, 'idx_data'], aggregate_function='first' ).drop( 'idx_data' ) for table_id in df.get_column(col_name__table_id).unique() } return table_dfs </code></pre>
<python><dataframe><python-polars><data-transform>
2023-04-18 03:08:40
3
457
AAriam
76,040,672
2,605,327
Optional chaining in Python
<p>Javascript has a useful pattern called optional chaining, that lets you safely traverse through a deeply nested object without key errors.</p> <p>For example:</p> <pre><code>const foo= { correct: { key: 'nested' } } const correct = foo?.correct?.key const incorrect = foo?.incorrect?.key </code></pre> <p>You can also do it the old fashioned way like this:</p> <pre><code>const nested = foo &amp;&amp; foo.bar &amp;&amp; foo.bar.key </code></pre> <p>Does Python have a pattern as clean as this for traversing nested dictionaries? I find it particularly verbose when traversing deeply nested objects with long key names.</p> <p>This is the closest I can get in Python:</p> <pre><code>getattr(getattr(foo, 'correct', None), 'key', None) </code></pre> <p>Which is pretty gross and starts getting dumb beyond two or three nestings</p>
<javascript><python>
2023-04-18 02:37:56
0
718
Ucinorn
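Python has no `?.` operator for plain dicts, but a small helper can sketch the same safe traversal (the helper name and signature here are illustrative, not a standard-library API):

```python
from functools import reduce

def get_nested(obj, *keys, default=None):
    """Safely walk nested dicts, roughly analogous to JS optional chaining."""
    return reduce(
        lambda acc, key: acc.get(key, default) if isinstance(acc, dict) else default,
        keys,
        obj,
    )

foo = {"correct": {"key": "nested"}}
correct = get_nested(foo, "correct", "key")      # 'nested'
incorrect = get_nested(foo, "incorrect", "key")  # None instead of a KeyError
```

Note that for dicts the building block is `dict.get`, not `getattr`; `getattr` only works for attribute access on objects.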
76,040,523
13,099,964
Auto-GPT Command evaluate_code returned: Error: The model: `gpt-4` does not exist
<p>I'm working with <a href="https://github.com/Significant-Gravitas/Auto-GPT" rel="noreferrer">auto-gpt</a> and I got this error:</p> <pre><code>Command evaluate_code returned: Error: The model: `gpt-4` does not exist </code></pre> <p>and it seems it can't go any further.</p> <p>What should I do?</p>
<python>
2023-04-18 01:57:23
1
2,299
Raskul
76,040,475
395,857
How can I split the column of a pandas dataframe so that each new column corresponds to a single value in the split column?
<p>I have this panda dataframe:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>System name</th> <th>Rating</th> <th>Count</th> </tr> </thead> <tbody> <tr> <td>system1</td> <td>1</td> <td>12</td> </tr> <tr> <td>system1</td> <td>2</td> <td>156</td> </tr> <tr> <td>system1</td> <td>3</td> <td>16</td> </tr> <tr> <td>systemZ</td> <td>1</td> <td>77</td> </tr> <tr> <td>systemZ</td> <td>2</td> <td>56</td> </tr> <tr> <td>systemZ</td> <td>3</td> <td>66</td> </tr> <tr> <td>systemY</td> <td>1</td> <td>99</td> </tr> <tr> <td>systemY</td> <td>2</td> <td>77</td> </tr> <tr> <td>systemY</td> <td>3</td> <td>99</td> </tr> </tbody> </table> </div> <p>How can I split the <code>Rating</code> column so that each new column correspond to a single value in the split column. I.e., I'd like to obtain:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>System name</th> <th>1</th> <th>2</th> <th>3</th> </tr> </thead> <tbody> <tr> <td>system1</td> <td>12</td> <td>156</td> <td>16</td> </tr> <tr> <td>systemZ</td> <td>77</td> <td>56</td> <td>66</td> </tr> <tr> <td>systemY</td> <td>99</td> <td>77</td> <td>99</td> </tr> </tbody> </table> </div> <p>How can I do this with pandas?</p> <hr /> <p>To create the dataframe:</p> <pre><code>import pandas as pd df = pd.DataFrame(data={ 'System name': ['system1', 'system1', 'system1', 'systemZ', 'systemZ', 'systemZ', 'systemY', 'systemY', 'systemY'], 'Rating': [1, 2, 3, 1, 2, 3, 1, 2, 3], 'Count': [12, 156, 16, 77, 56, 66, 99, 77, 99]}) print(df) </code></pre>
<python><pandas><dataframe>
2023-04-18 01:47:09
0
84,585
Franck Dernoncourt
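The reshape described above is a plain long-to-wide `pivot`; a sketch using the question's own construction code:

```python
import pandas as pd

df = pd.DataFrame(data={
    'System name': ['system1', 'system1', 'system1', 'systemZ', 'systemZ',
                    'systemZ', 'systemY', 'systemY', 'systemY'],
    'Rating': [1, 2, 3, 1, 2, 3, 1, 2, 3],
    'Count': [12, 156, 16, 77, 56, 66, 99, 77, 99]})

# one row per system, one column per Rating value
wide = df.pivot(index='System name', columns='Rating', values='Count')
wide = wide.reset_index()
wide.columns.name = None   # drop the leftover 'Rating' axis label
```

Note that `pivot` sorts the index, so the systems come out in alphabetical order rather than their original order.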
76,040,325
736,312
Pytorch forward hook on FCN-ResNet50 architecture
<p>I am using the FCN-Resnet50 model from Pytorch framework and I would like to extract the features vector of one layer using the <code>register_forward_hook</code> function.<br> I am using the following code to load the model.</p> <pre><code>import torch model = torch.hub.load(&quot;pytorch/vision:v0.10.0&quot;, &quot;fcn_resnet50&quot;, pretrained=True) model.eval() </code></pre> <p>Using the following code gives me the names of each layer.</p> <pre><code>for _item in model.named_modules(): print(_item) </code></pre> <p>The layer that I am interested in is the fourth layer of the backbone which is defined as below.</p> <pre><code>('backbone.layer4.2.conv1', Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)) ('backbone.layer4.2.bn1', BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)) ('backbone.layer4.2.conv2', Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4), bias=False)) ('backbone.layer4.2.bn2', BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)) ('backbone.layer4.2.conv3', Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)) ('backbone.layer4.2.bn3', BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)) ('backbone.layer4.2.relu', ReLU(inplace=True)) </code></pre> <p>I am using the following function to register the forward hook and to get the feature vector of one of the last layer (ReLU).</p> <pre><code>def get_features_vector(_path_img, _model, _layer) -&gt; torch.Tensor: &quot;&quot;&quot; Input: path_img: string, /path/to/image _model: a pretrained torch model Output: my_output: torch.tensor, output of last convolution layer &quot;&quot;&quot; input_image = Image.open(_path_img) preprocess = transforms.Compose([transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])]) input_tensor = preprocess(input_image) input_batch = input_tensor.unsqueeze(0) with 
torch.no_grad(): my_output = None def my_hook(module_, input_, output_): print(f&quot;Output Shape: {output_.shape}&quot;) nonlocal my_output my_output = output_ a_hook = _layer.register_forward_hook(my_hook) _model(input_batch) a_hook.remove() return my_output </code></pre> <p>And I am using the following code to extract the feature vector.</p> <pre><code>_feature_vector = get_features_vector(path_to_image, model, model.backbone.layer4[2].relu) </code></pre> <p>The problem I am having is that about the <code>print</code> output inside the <code>my_hook</code> nested function as it is:</p> <pre><code>Output Shape: torch.Size([1, 512, 60, 80]) Output Shape: torch.Size([1, 512, 60, 80]) Output Shape: torch.Size([1, 2048, 60, 80]) </code></pre> <p>I don't understand why I am getting three lines as I am expecting only one.<br> Please help.</p>
<python><pytorch>
2023-04-18 00:59:51
1
796
Toyo
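A plausible explanation for the three printouts: in torchvision's Bottleneck block, `self.relu` is a single module object that is invoked three times inside one `forward` (after `bn1`, after `bn2`, and after the residual add), so a forward hook registered on it fires once per invocation. A torch-free stand-in (illustrative only, not the real `nn.Module` machinery) showing the mechanics:

```python
class HookedModule:
    """Minimal stand-in for a module that runs forward hooks on every call."""
    def __init__(self):
        self.hooks = []

    def __call__(self, x):
        out = max(x, 0)  # ReLU-like behaviour
        for hook in self.hooks:
            hook(self, x, out)
        return out

relu = HookedModule()        # ONE module object...
seen = []
relu.hooks.append(lambda module, inp, out: seen.append(out))

def bottleneck_forward(x):
    # ...reused three times within a single forward pass
    x = relu(x + 1)
    x = relu(x + 1)
    return relu(x + 1)

result = bottleneck_forward(0)
```

To capture only one activation, hook a module that is called once per forward (e.g. `bn3`), or keep only the final value the hook sees.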
76,040,306
2,348,503
ModuleNotFoundError: No module named 'llama_index.langchain_helpers.chatgpt'
<p>I'd like to use <code>ChatGPTLLMPredictor</code> from <code>llama_index.langchain_helpers.chatgpt</code>, but I got an error below on M1 Macbook Air.</p> <pre><code>ModuleNotFoundError: No module named 'llama_index.langchain_helpers.chatgpt' </code></pre> <p>My code looks like this and line 3 is the problem.</p> <pre class="lang-py prettyprint-override"><code>import csv from llama_index import GPTSimpleVectorIndex, SimpleWebPageReader from llama_index.langchain_helpers.chatgpt import ChatGPTLLMPredictor article_urls = [] with open('article-urls.csv') as f: reader = csv.reader(f) for row in reader: article_urls.append(row[0]) documents = SimpleWebPageReader().load_data(article_urls) index = GPTSimpleVectorIndex(documents=documents, llm_predictor=ChatGPTLLMPredictor() ) index.save_to_disk('index.json') </code></pre> <ul> <li>Python v3.10.10</li> <li>requirements.txt</li> </ul> <pre><code>aiohttp==3.8.4 aiosignal==1.3.1 async-timeout==4.0.2 attrs==23.1.0 cachetools==5.3.0 certifi==2022.12.7 charset-normalizer==3.1.0 dataclasses-json==0.5.7 frozenlist==1.3.3 gptcache==0.1.14 idna==3.4 langchain==0.0.142 llama-index==0.5.16 marshmallow==3.19.0 marshmallow-enum==1.5.1 multidict==6.0.4 mypy-extensions==1.0.0 numexpr==2.8.4 numpy==1.24.2 openai==0.27.4 openapi-schema-pydantic==1.2.4 packaging==23.1 pandas==2.0.0 pydantic==1.10.7 python-dateutil==2.8.2 pytz==2023.3 PyYAML==6.0 regex==2023.3.23 requests==2.28.2 six==1.16.0 SQLAlchemy==1.4.47 tenacity==8.2.2 tiktoken==0.3.3 tqdm==4.65.0 typing-inspect==0.8.0 typing_extensions==4.5.0 tzdata==2023.3 urllib3==1.26.15 yarl==1.8.2 </code></pre> <p>Thanks.</p>
<python><python-3.x><openai-api><chatgpt-api><llama-index>
2023-04-18 00:54:02
1
420
Taishi Kato
76,040,205
3,769,033
How to dynamically inspect a GenericAlias (Type) object to determine the types it contains?
<p>I have a dataclass with typed <code>tuple</code> fields, and I'd like to dynamically see the types of their elements. When I print the types, I see them as containers (whose contents are exactly what I want):</p> <pre><code>from dataclasses import dataclass, fields @dataclass class TupleContainer: tuple2: tuple[float, str] tuple1: tuple[int, ...] for field in fields(TupleContainer): t = field.type print(repr(t)) # prints: # tuple[float, str] # tuple[int, ...] </code></pre> <p>but when I try to iterate through or index into <code>t</code> I get <code>TypeError: 'types.GenericAlias' object is not iterable</code> or <code>TypeError: There are no type variables left in tuple[float, str]</code>.</p> <p>It really seems like this ought to be possible, but I've had no luck reading the PEPs or docs.</p>
<python><python-3.x><python-typing>
2023-04-18 00:23:07
1
1,245
JoshuaF
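For the inspection the question asks about, `typing.get_origin` and `typing.get_args` unwrap a `GenericAlias` without trying to iterate or index it:

```python
from dataclasses import dataclass, fields
from typing import get_args, get_origin

@dataclass
class TupleContainer:
    tuple2: tuple[float, str]
    tuple1: tuple[int, ...]

# get_origin gives the container type, get_args the contained types
kinds = {f.name: (get_origin(f.type), get_args(f.type))
         for f in fields(TupleContainer)}
# kinds == {'tuple2': (tuple, (float, str)), 'tuple1': (tuple, (int, Ellipsis))}
```

Note this relies on `field.type` being the actual annotation object; under `from __future__ import annotations` the field types are strings and would need `typing.get_type_hints` first.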
76,040,150
3,103,957
Python type metaclass __call__() method
<p>The type meta class in Python has a <code>__call__()</code> method. And the same method is used both to create an instance of a class and to create the class itself. For example:</p> <pre><code>class Animal: pass </code></pre> <p>In creating an object <strong>for</strong> the class Animal (given that every class is itself stored as an object), the <code>__call__()</code> method is invoked. And similarly when we create an instance <strong>of</strong> the Animal class (i.e: <code>Animal()</code> ), the same <code>__call__()</code> method is invoked. I am trying to understand how the same method handles both cases. Could someone please share pseudocode for the <code>__call__</code> method?</p> <p>Thanks in advance.</p>
<python><metaprogramming>
2023-04-18 00:12:13
1
878
user3103957
76,040,071
11,475,651
Write a Python function to return a differential equation: cannot assign to operator?
<p>I would like to write a Python function which simply returns a differential equation. Basically, my ODE is a function of wind speed, solar insolation and ambient temperature. I want this first function to take in those values and to produce an equation; I then want to take multiple such equations and solve them simultaneously, but I was hoping to set each equation in a separate function, as below.</p> <pre><code>def define_front_equation(wind_speed, insolation, ambient_temperature): sky_temperature = 0.0552*ambient_temperature**(1.5) param_heat = tau_alpha_param*area_measurement_for_everything*insolation front_convection = convection_param*area_measurement_for_everything*(TempF - ambient_temperature) front_radiation1 = radiation_param*glass_emissivity*view_factor*area_measurement_for_everything front_radiation2 = (TempF**4) - ambient_temperature**4 front_radiation = front_radiation1*front_radiation2 dTempF/dt = (param_Ps + param_heat - front_convection - front_radiation)*(1/glass_mass) return dTempF/dt </code></pre> <p>The third to last line gives me the syntax error &quot;cannot assign to operator&quot;. Am I doing something wrong here? How do I get around this?</p>
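The `dTempF/dt = ...` line fails because `dTempF/dt` is a division expression, and Python can only assign to names, attributes, or subscripts. The usual pattern is to have the outer function return a derivative *function* `f(t, y)` that a solver such as `scipy.integrate.solve_ivp` can call. A sketch with placeholder constants (the constant names and values here are illustrative stand-ins, not the original parameters):

```python
# Placeholder constants, stand-ins for the real physical parameters.
K_HEAT = 0.8       # stand-in for tau_alpha_param * area
K_CONV = 0.05      # stand-in for the convection coefficient
GLASS_MASS = 2.0

def define_front_equation(wind_speed, insolation, ambient_temperature):
    """Return dTempF/dt as a callable f(t, TempF), usable by solve_ivp."""
    def d_tempf_dt(t, temp_f):
        heat_in = K_HEAT * insolation
        convection = K_CONV * wind_speed * (temp_f - ambient_temperature)
        return (heat_in - convection) / GLASS_MASS
    return d_tempf_dt

rhs = define_front_equation(wind_speed=3.0, insolation=500.0, ambient_temperature=290.0)
print(rhs(0.0, 300.0))  # derivative at t=0 for TempF=300
# then e.g.: solve_ivp(rhs, (0, 3600), [300.0])
```

Several such functions can be collected and combined into one vector-valued right-hand side when the equations need to be solved simultaneously.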
<python><ode>
2023-04-17 23:49:22
1
317
Abed
76,040,055
11,277,108
TypeError: boolean value of NA is ambiguous when using np.where() to compare string columns
<p>I'm importing a feather into a pandas dataframe and then looking to compare two string columns using np.where(). However, I'm getting the following error: <code>TypeError: boolean value of NA is ambiguous</code>. An MRE is below:</p> <pre><code>import pandas as pd import numpy as np d = {&quot;col1&quot;: [np.NaN, &quot;x&quot;], &quot;col2&quot;: [np.NaN, &quot;x&quot;]} df = pd.DataFrame(data=d) df.to_feather(&quot;test.feather&quot;) df_f = pd.read_feather(&quot;test.feather&quot;) df_f[&quot;col1&quot;] = df_f[&quot;col1&quot;].astype(&quot;string&quot;) df_f[&quot;col2&quot;] = df_f[&quot;col2&quot;].astype(&quot;string&quot;) df_f[&quot;is_equal&quot;] = np.where(df_f[&quot;col1&quot;] == df[&quot;col2&quot;], 1, 0) </code></pre> <p>I had to manually format the two columns as strings to replicate the format of the actual dataframe when I import it.</p> <p>I've read up on the error and it's to do with the <code>pd.NA</code> values that are created when the columns are converted into strings.</p> <p>I've tried converting these values to <code>np.NaN</code> as per <a href="https://stackoverflow.com/a/65068615/11277108">here</a>:</p> <pre><code>df_f[&quot;col1&quot;].replace({pd.NA: np.NaN}, inplace=True) df_f[&quot;col2&quot;].replace({pd.NA: np.NaN}, inplace=True) </code></pre> <p>But I get the same error.</p> <p>I've tried converting the column to a float as per <a href="https://stackoverflow.com/a/73041862/11277108">here</a>:</p> <pre><code>df_f[&quot;col1&quot;] = df_f[&quot;col1&quot;].astype(&quot;float&quot;) df_f[&quot;col2&quot;] = df_f[&quot;col2&quot;].astype(&quot;float&quot;) </code></pre> <p>But I get <code>ValueError: could not convert string to float</code>.</p> <p>Would anyone have any suggestions as to how I might be able to solve this?</p>
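The error comes from handing `np.where` a nullable-boolean mask that still contains `pd.NA`. One workaround, sketched here under the assumption that a missing-vs-missing comparison should count as *not* equal, is to fill the mask's NAs before `np.where` sees them:

```python
import numpy as np
import pandas as pd

d = {"col1": [pd.NA, "x"], "col2": [pd.NA, "x"]}
df = pd.DataFrame(d, dtype="string")  # nullable StringDtype, as after read_feather + astype

# Comparing two "string" columns yields a nullable boolean mask containing pd.NA;
# fill those NAs so every element has a definite truth value before np.where runs.
mask = (df["col1"] == df["col2"]).fillna(False).astype(bool)
df["is_equal"] = np.where(mask, 1, 0)
print(df["is_equal"].tolist())  # -> [0, 1]
```

As an aside, the original snippet compares `df_f["col1"]` against `df["col2"]` from the pre-feather frame; comparing both columns within `df_f` is probably what was intended.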
<python><pandas><dataframe><numpy>
2023-04-17 23:45:11
2
1,121
Jossy
76,039,831
10,184,783
How to detect a line using the Radon transform in Python?
<p>MATLAB solutions are available elsewhere on the internet, but there is a need for an open-source, Python-based solution.</p> <p>Starter code to create a blank image with a white line.</p> <pre><code>import cv2 import numpy as np from skimage.transform import radon from matplotlib import pyplot as plt blank = np.zeros((100,100)) blank = cv2.line(blank, (25,25), (75,75), (255, 255, 255), thickness=1) plt.imshow(blank, cmap='gray') </code></pre> <p><a href="https://i.sstatic.net/vkOid.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vkOid.png" alt="enter image description here" /></a></p>
<python><image-processing><computer-vision><transform><transformation>
2023-04-17 22:47:22
1
4,763
Abhi25t
76,039,805
6,368,579
How to set __len__ method for an itertools.chain object?
<p>Let's say I'm building an <code>itertools.chain</code> instance as follows:</p> <pre class="lang-py prettyprint-override"><code>from itertools import chain list_1 = list(range(5, 15)) list_2 = list(range(20, 30)) chained = chain(list_1, list_2) </code></pre> <p>Now, since I already know the length of the lists contained in <code>chained</code>, I can easily get the length of <code>chained</code>. How can I add <code>__len__</code> to <code>chained</code>?</p> <p>I tried this:</p> <pre class="lang-py prettyprint-override"><code>full_len = len(list_1) + len(list_2) setattr(chained, '__len__', lambda: full_len) </code></pre> <p>but it fails with the error</p> <pre><code>AttributeError: 'itertools.chain' object has no attribute '__len__' </code></pre> <p>Edit: I need this to be able to display the progress of a long process with <code>tqdm</code>, which relies on the <code>__len__</code> method to show the progress bar.</p>
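`itertools.chain` is a C type without a per-instance `__dict__`, so attributes can't be attached to it. A small wrapper class, sketched below, exposes both iteration and `__len__`:

```python
from itertools import chain

class SizedChain:
    """Chains sized iterables and reports their combined length."""
    def __init__(self, *iterables):
        self._iterables = iterables

    def __iter__(self):
        # a fresh chain per iteration, so the object is re-iterable
        return chain(*self._iterables)

    def __len__(self):
        return sum(len(it) for it in self._iterables)

list_1 = list(range(5, 15))
list_2 = list(range(20, 30))
chained = SizedChain(list_1, list_2)
print(len(chained))  # -> 20
```

For the `tqdm` use case specifically, an even smaller fix is to keep the plain `chain` and pass `total=len(list_1) + len(list_2)` to `tqdm`, since it only needs a count, not a `__len__` on the iterable itself.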
<python><attributeerror>
2023-04-17 22:42:35
2
483
DSantiagoBC
76,039,374
2,791,346
Concatenate tf.range in Keras model
<p>I would like to create a Keras model and add sequential numbers to an input tensor.</p> <p>What I would like to do is something like this:</p> <pre><code>input_layer = Input(shape=(3, 3)) seq = tf.range(3) seq = tf.reshape(seq, (3, 1)) concatenated = Concatenate(axis=-1)([input_layer, seq]) additional_layer = Dense(4, activation=&quot;relu&quot;)(concatenated) ... </code></pre> <p>The problem is that the input layer is of size <code>(none, 3,3)</code> and the seq is of size <code>(3,1)</code>.</p> <p>Even if I transform it to</p> <pre><code>seq = tf.reshape(seq, (1, 3, 1)) </code></pre> <p>the concatenation will give me an error that the shapes don't match.</p> <p>How do I add sequential numbers to every row that will go into the Input layer?</p> <hr /> <h1>Trying the solution from the answer:</h1> <pre><code>a = np.array([ [[10.0,20.0,30.0], [20.0,30.0,40.0], [40.0,50.0,50.0]], [[10.0,20.0,30.0], [20.0,30.0,40.0], [40.0,50.0,50.0]], [[10.0,20.0,30.0], [20.0,30.0,40.0], [40.0,50.0,50.0]] ]) x_tf = tf.convert_to_tensor(a) input_layer = tf.keras.layers.Input(shape=(3, 3)) seq = tf.range(3, dtype=tf.float32) seq = tf.reshape(seq, (1, 3, 1)) concatenated = tf.keras.layers.Lambda(lambda x:tf.concat([x, seq], axis=-1))(input_layer) model = Model(inputs=input_layer, outputs=concatenated) print(model(x_tf)) </code></pre> <p>and I get:</p> <pre><code>InvalidArgumentError: Exception encountered when calling layer 'lambda_5' (type Lambda). {{function_node __wrapped__ConcatV2_N_2_device_/job:localhost/replica:0/task:0/device:CPU:0}} ConcatOp : Dimension 0 in both shapes must be equal: shape[0] = [3,3,3] vs. shape[1] = [1,3,1] [Op:ConcatV2] name: concat Call arguments received by layer 'lambda_5' (type Lambda): • inputs=tf.Tensor(shape=(3, 3, 3), dtype=float32) • mask=None • training=None </code></pre>
<python><tensorflow><keras><concatenation>
2023-04-17 21:22:14
1
8,760
Marko Zadravec
76,039,364
3,130,747
Extract raw sql from SqlAlchemy with replaced parameters
<p>Given a sql query such as</p> <pre><code>query = &quot;&quot;&quot; select some_col from tbl where some_col &gt; :value &quot;&quot;&quot; </code></pre> <p>I'm executing this with sqlalchemy using</p> <pre><code>connection.execute(sa.text(query), {'value' : 5}) </code></pre> <p>Though this does what's expected, I would like to be able to get the raw sql, with replaced parameters. Meaning I would like a way to be able to get</p> <pre><code>select some_column from tbl where some_column &gt; 5 </code></pre> <p>I've tried to echo the sql using:</p> <pre><code> engine = sa.create_engine( '&lt;CONNECTION STRING&gt;', echo=True, ) </code></pre> <p>But this didn't replace the parameters.</p> <p>If there's not a way to do this in sqlalchemy, but is a way using something like psycopg2 (as long as the syntax <code>:value</code> doesn't change) then that would be of interest.</p>
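One way to render the bound values inline is the compiler's `literal_binds` flag, sketched below. A caveat: this is intended for debugging and logging, not for execution, since it does not go through the driver's parameter escaping.

```python
import sqlalchemy as sa

query = """
select some_col
from tbl
where some_col > :value
"""

# Attach the value to the text clause, then ask the compiler to inline it.
stmt = sa.text(query).bindparams(value=5)
compiled = stmt.compile(compile_kwargs={"literal_binds": True})
print(str(compiled))  # the :value placeholder is rendered as 5
```

Compiling against a specific engine's dialect (`stmt.compile(dialect=engine.dialect, ...)`) makes the rendering match that database's quoting rules more closely.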
<python><postgresql><sqlalchemy>
2023-04-17 21:19:52
1
4,944
baxx
76,039,198
1,810,940
Properties with single dispatch
<p>Is it possible to compose a <code>property</code> with <code>singledispatch</code> / <code>singledispatchmethod</code> features? I have tried the obvious patterns (nesting <code>@singledispatchmethod</code> with <code>@prop.setter</code>, etc) and get various errors. Here's a MWE of what I'd like to do, which only allows the property <code>Foo.bar</code> to be set to <code>str</code> values.</p> <pre><code>class Foo(object): def __init__(self): self._bar = 'baz' @property def bar(self): return self._bar @bar.setter def bar(self,value): self._bar = value @singledispatch def _set(value): raise NotImplementedError @_set.register(str) def _(value): return value try: self._bar = _set(value) except NotImplementedError: raise AttributeError(f&quot;Can't set attribute 'bar' to type {type(value).__name__}&quot;) </code></pre> <p>One could, of course, rewrite <code>singledispatch</code> or <code>property</code> in order to facilitate this behavior. However, this seems like an obvious use case that should be a feature (now or eventually)</p>
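One workaround, sketched below, keeps `property` and `singledispatch` separate: dispatch on a module-level coercion helper and call it from the setter. This sidesteps the fact that stacking the decorators directly fails, because `property` wraps the function object before `singledispatch` can manage its registry.

```python
from functools import singledispatch

@singledispatch
def _coerce_bar(value):
    # fallback for any unregistered type
    raise AttributeError(
        f"Can't set attribute 'bar' to type {type(value).__name__}"
    )

@_coerce_bar.register
def _(value: str):
    return value

class Foo:
    def __init__(self):
        self._bar = 'baz'

    @property
    def bar(self):
        return self._bar

    @bar.setter
    def bar(self, value):
        # dispatch happens on type(value); only registered types get through
        self._bar = _coerce_bar(value)

f = Foo()
f.bar = 'qux'
print(f.bar)  # -> qux
```

Registering by annotation (`@_coerce_bar.register` on a `value: str` parameter) assumes Python 3.7+; on older versions, `@_coerce_bar.register(str)` works the same way.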
<python><properties><decorator><functools><single-dispatch>
2023-04-17 20:50:02
0
503
jay
76,039,042
5,111,234
OpenMDAO Specifying DOEDriver Number of Processors
<p>I am trying to use the in-built DOEDriver in OpenMDAO. I know there is a setting called <code>procs_per_model</code> which determines the number of processors given to the model to run. However, is there a way to set the max number of processors that the DOE itself can use? E.g. I have 10 cores on my machine, but I only want to use 8. Is there a way to set the DOEDriver up so that it can run 8 cases at a time instead of 10 (i.e. call my model 8 times independently to run and each model run is only using one processor).</p> <p>I was trying to do this by setting <code>max_procs</code> when adding a subsystem, however I think that has the same effect as <code>procs_per_model</code> in that it is just limiting the number of processors each model run can use.</p>
<python><parallel-processing><openmdao>
2023-04-17 20:25:53
1
679
Jehan Dastoor
76,039,015
2,367,231
Pyscript: How to load a file via HTML input and pass it to Python Pandas
<p>I try to implement a webpage which has a button (file input dialogue) to load an Excel/ODF file and pass it to pandas for processing.</p> <p>The problem is, that I do not have any glue how to pass the file (<code>f in fileList</code>) to <code>pandas.ExcelFile(...)</code>.</p> <p>Here is the not fully working example:</p> <pre class="lang-html prettyprint-override"><code>&lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;meta charset=&quot;UTF-8&quot;&gt; &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1.0&quot;&gt; &lt;meta http-equiv=&quot;X-UA-Compatible&quot; content=&quot;ie=edge&quot;&gt; &lt;link rel=&quot;stylesheet&quot; href=&quot;https://pyscript.net/alpha/pyscript.css&quot; /&gt; &lt;script defer src=&quot;https://pyscript.net/latest/pyscript.js&quot;&gt;&lt;/script&gt; &lt;title&gt;File Example&lt;/title&gt; &lt;py-config type=&quot;json&quot;&gt; { &quot;packages&quot;: [&quot;pandas&quot;] } &lt;/py-config&gt; &lt;/head&gt; &lt;body&gt; &lt;label for=&quot;file_input&quot;&gt;Select a text file:&lt;/label&gt; &lt;input type=&quot;file&quot; id=&quot;file_input&quot; name=&quot;file_input&quot;&gt; &lt;br /&gt; &lt;div id=&quot;print_output&quot;&gt;&lt;/div&gt; &lt;br /&gt; &lt;p&gt;File Content:&lt;/p&gt; &lt;div style=&quot;border:2px inset #AAA;cursor:text;height:120px;overflow:auto;width:600px; resize:both&quot;&gt; &lt;div id=&quot;content&quot;&gt;&lt;/div&gt; &lt;/div&gt; &lt;py-script output=&quot;print_output&quot;&gt; import asyncio import pandas from js import document, FileReader, Uint8Array, console, TextDecoder from pyodide import create_proxy import io async def process_file(event): console.log('process_file()', event) fileList = event.target.files.to_py() for f in fileList: console.log('File:', f) data = Uint8Array.new(await f.arrayBuffer()) data = TextDecoder.new().decode(data) efile = pandas.ExcelFile(data) console.log('Data:', efile.sheet_names) 
document.getElementById(&quot;content&quot;).innerText = data console.log('process_file()', 'end') def main(): console.log('main()', 'start') # Create a Python proxy for the callback function # process_file() is your function to process events from FileReader file_event = create_proxy(process_file) # Set the listener to the callback e = document.getElementById(&quot;file_input&quot;) e.addEventListener(&quot;change&quot;, file_event, False) console.log('main()', 'end') main() &lt;/py-script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre>
<python><pandas><pyscript>
2023-04-17 20:21:53
0
3,975
Alex44
76,038,966
315,168
Type hinting Pandas DataFrame content and columns
<p>I am writing a function that returns a Pandas <code>DataFrame</code> object. I would like to have a type hint that specifies which columns this <code>DataFrame</code> contains, besides just specifying in the docstring, to make it easier for the end user to read the data.</p> <p>Is there a way to type hint <code>DataFrame</code> content like this? Ideally, this would integrate well with tools like Visual Studio Code and PyCharm when editing Python files and Jupyter Notebooks.</p> <p>An example function:</p> <pre class="lang-py prettyprint-override"><code>def generate_data(bunch, of, inputs) -&gt; pd.DataFrame: &quot;&quot;&quot;Massages the input to a nice and easy DataFrame. :return: DataFrame with columns a(int), b(float), c(string), d(us dollars as float) &quot;&quot;&quot; </code></pre>
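The standard library has no way to type individual DataFrame columns, but one lightweight convention, sketched below, is `typing.Annotated` (Python 3.9+): the column description travels with the type as metadata, which some editors surface on hover. It is documentation only, not enforcement; third-party libraries such as pandera go further with actual validation. The alias name and columns here are illustrative.

```python
from typing import Annotated, get_args

import pandas as pd

# Hypothetical alias: the column spec lives in the Annotated metadata.
ReportFrame = Annotated[
    pd.DataFrame,
    "columns: a(int), b(float), c(string), d(us dollars as float)",
]

def generate_data() -> ReportFrame:
    """Massages the input into a nice and easy DataFrame."""
    return pd.DataFrame({"a": [1], "b": [1.0], "c": ["x"], "d": [9.99]})

df = generate_data()
print(get_args(ReportFrame)[1])  # the metadata is introspectable at runtime
```

Because `Annotated[pd.DataFrame, ...]` is still `pd.DataFrame` to a type checker, existing tooling keeps working unchanged.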
<python><pandas><dataframe><python-typing>
2023-04-17 20:15:30
5
84,872
Mikko Ohtamaa
76,038,944
9,883,236
Type Hinting Python 3: How to decorate a function based on whether it's a method or a function, and transform an argument?
<p>Hi I'm trying to have a decorator that takes the first or second argument (depending on whether its in a class), and then transforming it before invoking the function, so the minimum code would be like this:</p> <pre class="lang-py prettyprint-override"><code>from typing import Callable, Concatenate, ParamSpec, TypeVar, overload, Union, cast from inspect import ismethod T = TypeVar(&quot;T&quot;) P = ParamSpec(&quot;P&quot;) class FixedClass: ... class Given: ... class Desired(Given): def __init__(self,): super().__init__() Function = Callable[Concatenate[Desired, P], T] TransformedFunction = Callable[Concatenate[Given, P], T] FunctionTransformer = Callable[[Function[P, T]], TransformedFunction[P, T]] Method = Callable[Concatenate[FixedClass, Desired, P], T] TransformedMethod = Callable[Concatenate[FixedClass, Given, P], T] MethodTransformer = Callable[[Method[P, T]], TransformedMethod[P, T]] def convert_oracle(given: Given) -&gt; Desired: return Desired() def transform() -&gt; Union[FunctionTransformer[P, T], MethodTransformer[P, T]]: @overload def command(func: Function[P, T]) -&gt; TransformedFunction[P, T]: ... @overload def command(func: Method[P, T]) -&gt; TransformedMethod[P, T]: ... 
def command(func: Union[Function[P, T], Method[P, T]]) -&gt; Union[TransformedMethod[P, T], TransformedFunction[P, T]]: if ismethod(func): def transformed_method(self: FixedClass, given: Given, *args: P.args, **kwargs: P.kwargs) -&gt; T: desired: Desired = convert_oracle(given) method = cast(Method[P, T], func) return method(self, desired, *args, **kwargs) return transformed_method def transformed_function(given: Given, *args: P.args, **kwargs: P.kwargs) -&gt; T : desired: Desired = convert_oracle(given) function = cast(Function[P, T], func) return function(desired, *args, **kwargs) return transformed_function return command </code></pre> <p>By itself like this does not raise any issues with mypy or pyright, but if I try using it, as such in these cases:</p> <pre class="lang-py prettyprint-override"><code>from typing import Any from main import transform, FixedClass, Desired @transform() def write_command(desired: Desired) -&gt; None: return None class Test(FixedClass): @transform() def read_command(self, desired: Desired) -&gt; None: return None </code></pre> <p>I get pyright errors:</p> <pre><code> /home/runner/MyPy-Test/test.py:6:2 - error: Argument of type &quot;(desired: Desired) -&gt; None&quot; cannot be assigned to parameter of type &quot;(FixedClass, Desired, ...) -&gt; Unknown&quot; Type &quot;(desired: Desired) -&gt; None&quot; cannot be assigned to type &quot;(FixedClass, Desired, ...) -&gt; Unknown&quot; Function accepts too many positional parameters; expected 1 but received 2 Parameter 1: type &quot;FixedClass&quot; cannot be assigned to type &quot;Desired&quot; &quot;FixedClass&quot; is incompatible with &quot;Desired&quot; (reportGeneralTypeIssues) </code></pre> <pre><code> /home/runner/MyPy-Test/test.py:12:4 - error: Argument of type &quot;(self: Self@Test, desired: Desired) -&gt; None&quot; cannot be assigned to parameter of type &quot;(Desired, ...) 
-&gt; Unknown&quot; Type &quot;(self: Self@Test, desired: Desired) -&gt; None&quot; cannot be assigned to type &quot;(Desired, ...) -&gt; Unknown&quot; Parameter 1: type &quot;Desired&quot; cannot be assigned to type &quot;Self@Test&quot; &quot;Desired&quot; is incompatible with &quot;Test&quot; (reportGeneralTypeIssues) </code></pre> <pre><code> /home/runner/MyPy-Test/test.py:12:4 - error: Argument of type &quot;(self: Self@Test, desired: Desired) -&gt; None&quot; cannot be assigned to parameter of type &quot;(FixedClass, Desired, ...) -&gt; Unknown&quot; Type &quot;(self: Self@Test, desired: Desired) -&gt; None&quot; cannot be assigned to type &quot;(FixedClass, Desired, ...) -&gt; Unknown&quot; Parameter 1: type &quot;FixedClass&quot; cannot be assigned to type &quot;Self@Test&quot; &quot;FixedClass&quot; is incompatible with &quot;Test&quot; (reportGeneralTypeIssues) </code></pre> <p>For some reason is unable to match it to the correct argument type even though its a Union of both callables, so shouldn't either/or function/method be accepted?</p> <p>and mypy errors:</p> <pre><code>test.py:6: error: Argument 1 has incompatible type &quot;Callable[[Desired], None]&quot;; expected &quot;Callable[[Desired, VarArg(Any), KwArg(Any)], Any]&quot; [arg-type] test.py:6: note: This is likely because &quot;write_command&quot; has named arguments: &quot;desired&quot;. Consider marking them positional-only </code></pre> <pre><code>test.py:6: error: Argument 1 has incompatible type &quot;Callable[[Desired], None]&quot;; expected &quot;Callable[[FixedClass, Desired, VarArg(Any), KwArg(Any)], Any]&quot; [arg-type] </code></pre> <pre><code>test.py:12: error: Argument 1 has incompatible type &quot;Callable[[Test, Desired], None]&quot;; expected &quot;Callable[[Desired, VarArg(Any), KwArg(Any)], Any]&quot; [arg-type] test.py:12: note: This is likely because &quot;read_command of Test&quot; has named arguments: &quot;self&quot;. 
Consider marking them positional-only </code></pre> <pre><code>test.py:12: error: Argument 1 has incompatible type &quot;Callable[[Test, Desired], None]&quot;; expected &quot;Callable[[FixedClass, Desired, VarArg(Any), KwArg(Any)], Any]&quot; [arg-type] test.py:12: note: This is likely because &quot;read_command of Test&quot; has named arguments: &quot;self&quot;, &quot;desired&quot;. Consider marking them positional-only Found 4 error </code></pre> <p>I tried searching why mypy cannot match it correctly, and why T isn't inferred (expected Any, rather than None), but was unable to reach an answer...</p>
<python><mypy><python-typing><pyright>
2023-04-17 20:12:03
0
345
YousefZ
76,038,772
5,094,207
One of two flask apps deployed on the same server with unresponsive '/' route
<h1>Problem</h1> <p>I have two flask applications deployed to the same GCP compute engine with an nginx server.</p> <p>The first app on the main domain: <code>myapp.com.</code></p> <p>The second is on a subdomain: <code>example.myapp.com.</code></p> <p>All the routes from the first app on the main domain are accessible and render their views properly, however the '/' route from the second app just hangs, and eventually returns a timeout error. The strange part is, if I visit other routes explicitly on the second app, for example: <code>example.myapp.com/about-example</code> or <code>example.myapp.com/home</code>, the page renders immediately.</p> <p>I tested all the routes on the second app, they all work. The default home route <code>/</code> is the only one causing the issue when I navigate to <code>example.myapp.com</code></p> <h1>Project structure</h1> <p>Apps are deployed on the user directory</p> <pre class="lang-bash prettyprint-override"><code>home └── user1 ├── app1 └── app2 </code></pre> <h1>NGINX</h1> <p>I removed the default conf from <code>/etc/nginx/sites-enabled</code> and added these additional confs for each app</p> <h2>App1 conf</h2> <pre><code>server { server_name myapp.com; access_log /var/log/nginx/app1.access.log; error_log /var/log/nginx/app1.error.log; location / { proxy_pass http://127.0.0.1:8000; include /etc/nginx/proxy_params; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; } location /static { alias /home/user1/app1/app/static; } } </code></pre> <h2>App2 conf</h2> <pre><code>server { server_name example.myapp.com; access_log /var/log/nginx/app2.access.log; error_log /var/log/nginx/app2.error.log; location / { proxy_pass http://127.0.0.1:8001; include /etc/nginx/proxy_params; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; } location /static { alias /home/user1/app2/app/src/static; } } </code></pre> <h1>Deployment</h1> <p>I use gunicorn along with 
supervisor to run the apps. Each application gets a supervisor conf. stored at /etc/supervisor/conf.d/(name-of-app)-supervisor.conf. Verified supervisord is reading in the additional conf properly.</p> <h2>App1 supervisor conf</h2> <pre class="lang-ini prettyprint-override"><code>[program:app1] command=/home/user1/app1/venv/bin/gunicorn personal_website:app -w 2 directory=/home/user1/app1 user=user1 autostart=true autorestart=true stopasgroup=true killasgroup=true stdout_logfile=/var/log/app1/app1.out.log stderr_logfile=/var/log/app1/app1.err.log </code></pre> <h2>App2 supervisor conf</h2> <pre class="lang-ini prettyprint-override"><code>[program:app2] command=/home/user1/app2/venv/bin/gunicorn -b 0.0.0.0:8001 run:app directory=/home/user1/app2 user=user1 autostart=true autorestart=true stdout_logfile=/var/log/supervisor/app2.out.log stderr_logfile=/var/log/supervisor/app2.err.log </code></pre> <h1>Flask routes</h1> <p>Application is organized as a package using blueprints</p> <h2>app2</h2> <p>location: <code>app/src/main/routes.py</code></p> <pre class="lang-py prettyprint-override"><code>&quot;&quot;&quot;Main blueprint module&quot;&quot;&quot; from flask import Blueprint, render_template main = Blueprint(&quot;main&quot;, __name__) @main.route(&quot;/&quot;) @main.route(&quot;/home&quot;) def home(): return render_template(&quot;home.html&quot;, active_page=&quot;home&quot;) </code></pre> <h1>Flask app configuration</h1> <p>There is more stuff going on in this <code>__init__</code> file but removed it for brevity</p> <p>location: <code>app/src/__init__.py</code></p> <pre class="lang-py prettyprint-override"><code>from flask import Flask from app.config import Config app = Flask(__name__) app.config.from_object(Config) def create_app(config_class=Config): app = Flask(__name__) from app.src.users.routes import users from app.src.entries.routes import entries from app.src.main.routes import main app.register_blueprint(users) app.register_blueprint(entries) 
app.register_blueprint(main) return app </code></pre> <p>Appreciate any advice and feedback. I know there are a lot of moving pieces here and this is a bit messy but I am at my wits end here. Let me know if I have missed any pieces to help make the question more understandable. Thanks!</p>
<python><nginx><flask><deployment>
2023-04-17 19:47:03
1
647
null-point-exceptional
76,038,371
5,879,640
Websocket to Serial bridge in python getting stuck
<p>I am working on writing a websocket to serial bridge in Python for which I have used the <code>websockets</code> and <code>pyserial</code> library.</p> <p>My ultimate goal is to have a websocket that forwards incoming messages to the serial device and also is constantly reading the serial port for incoming messages and once it reads one (looking for the newline character) also forwards it to the websocket.</p> <p>My problem is that I cannot make the incoming / outgoing handlers work independantly using asyncio. Reading the serial port is a blocking task which should not be paused, so I run it in an execution pool. This works as I the messages recieved via the serial port are forwarded immediatly to the websocket.</p> <p>However the incoming messages to the websocket are not being forwarded to the serial port in time. It seems that all the processing is being allocated to reading the serial messages even though I attempted to avoid this via the execution pool. After reading a new serial message all the incoming websocket messages are forwarded to the serial port, instead of them being forwarded immediately.</p> <p>My code:</p> <p><code>SerialConnection</code> class</p> <pre><code>from serial import Serial import asyncio from .serial_emulator import SerialEmulator from concurrent.futures import ThreadPoolExecutor class SerialConnection: def __init__(self, port, baudrate=115200, timeout=1): if port == &quot;test&quot;: self.connection = SerialEmulator() else: self.connection = Serial(port, baudrate=baudrate, timeout=timeout) self.reader = asyncio.StreamReader() self.executor = ThreadPoolExecutor() def write(self, data): self.connection.write(data) async def read(self, size: int = 1): return self.connection.read(size) def close(self): self.connection.close() @property def port(self): if self.connection is None: return None return self.connection.port async def read_message(self): if self.connection is None: return async def blocking_read(): message = b&quot;&quot; 
while not message.endswith(b&quot;\n&quot;): data = await self.read() message += data return message.decode().rstrip(&quot;\r\n&quot;) loop = asyncio.get_running_loop() message = await loop.run_in_executor(self.executor, blocking_read) return await message </code></pre> <p><code>WebSocketServer</code> class:</p> <pre><code>import asyncio import websockets class WebSocketServer: def __init__(self, port, handler=None): self.port = port self.server = None self.handler = handler or self.handler async def handler(self, websocket, path): async for message in websocket: print(f&quot;websocket message: {message}, path: {path}&quot;) async def start(self): self.server = await websockets.serve(self.handler, &quot;localhost&quot;, self.port) await self.server.wait_closed() async def stop(self): if self.server: self.server.close() await self.server.wait_closed() def run_forever(self): asyncio.get_event_loop().run_until_complete(self.start()) asyncio.get_event_loop().run_forever() </code></pre> <p><code>main</code></p> <pre><code>import argparse import asyncio from websocket_serial_bridge import WebSocketServer, SerialConnection def main(): parser = argparse.ArgumentParser() parser.add_argument(&quot;-p&quot;, &quot;--websocket-port&quot;, type=int, default=8765) # add optional argument &quot;list&quot; that when present will list all available serial ports then exit parser.add_argument(&quot;-l&quot;, &quot;--list&quot;, &quot;--list-serial&quot;, action=&quot;store_true&quot;) # add optional argument &quot;serial_ports&quot; which will be a list of serial ports to connect to parser.add_argument(&quot;-s&quot;, &quot;--serial-ports&quot;, nargs=&quot;+&quot;, type=str, default=[]) # add optional argument &quot;test&quot; parser.add_argument(&quot;-t&quot;, &quot;--test&quot;, action=&quot;store_true&quot;, help=&quot;Run in test mode (emulate serial ports).&quot;) args = parser.parse_args() def list_serial_ports(): import serial.tools.list_ports ports = 
serial.tools.list_ports.comports() if len(ports) == 0: print(&quot;No serial ports found&quot;) return for port in ports: print(f&quot; - {port.device} - {port.description}&quot;) if args.list: print(&quot;Listing serial ports:&quot;) list_serial_ports() return websocket_port = args.websocket_port serial_ports = args.serial_ports if args.test: if len(serial_ports) &gt; 0: print(&quot;Warning: --test specified, ignoring --serial-ports&quot;) serial_ports = [&quot;test&quot;] # if no serial ports specified, list all available serial ports if len(serial_ports) == 0: print(&quot;No serial ports specified, listing all available serial ports:&quot;) list_serial_ports() return serial_connections = [] for serial_port in serial_ports: serial_connections.append(SerialConnection(serial_port)) async def serial_to_websocket(websocket, serial_connection): while True: serial_msg = await serial_connection.read_message() message = f&quot;PORT:{serial_connection.port}|{serial_msg}&quot; print(message) await websocket.send(serial_msg) async def websocket_to_serial(websocket, serial_connection): async for websocket_msg in websocket: print(f&quot;Received from websocket: {websocket_msg}&quot;) serial_connection.write(websocket_msg.encode() + b&quot;\n&quot;) async def handler(websocket, path): serial_connection = serial_connections[0] await asyncio.gather( asyncio.create_task(websocket_to_serial(websocket, serial_connection)), asyncio.create_task(serial_to_websocket(websocket, serial_connection)), ) server = WebSocketServer(websocket_port, handler=handler) print(f&quot;&quot;&quot;Starting websocket server on &quot;ws://localhost:{websocket_port}&quot;.&quot;&quot;&quot;) loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) loop.create_task(server.start()) loop.run_forever() if __name__ == &quot;__main__&quot;: main() </code></pre>
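One likely culprit in `read_message` above: `blocking_read` is declared `async def` but handed to `run_in_executor`, which expects a plain callable, so the executor just returns a coroutine instead of doing the blocking reads in a worker thread (and `return await message` then awaits a value a second time). A sketch of the intended pattern, with a list standing in for the serial port since no real device is available here:

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_read(source):
    # Plain (non-async) function: it runs in the executor's worker thread,
    # so its blocking reads never stall the event loop.
    message = b""
    while not message.endswith(b"\n"):
        message += source.pop(0)  # stand-in for serial.Serial.read()
        time.sleep(0.01)          # simulate a slow device
    return message.decode().rstrip("\r\n")

async def read_message(executor, source):
    loop = asyncio.get_running_loop()
    # Pass the function and its args; await the executor future exactly once.
    return await loop.run_in_executor(executor, blocking_read, source)

async def main():
    source = [b"hel", b"lo", b"\r\n"]
    with ThreadPoolExecutor() as pool:
        return await read_message(pool, source)

print(asyncio.run(main()))  # -> hello
```

Applied to the original class, that means making `blocking_read` a plain `def` that calls the synchronous `self.connection.read()` directly, and returning the executor result without the extra `await`.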
<python><websocket><python-asyncio><pyserial>
2023-04-17 18:51:02
0
471
Trantidon
76,038,225
9,473,350
Figure created using axes of another figure not dynamically updating axes
<p>If you run the below code in ipython, you will find that the first window that pops up can be resized and the plots all adjust as usual.</p> <p>The second figure is created by trying to copy over the axis objects from the first figure. It seems to work, but the second window that pops up doesn't readjust the size of the plots like the first window did.</p> <p>What can be done so that the second figures axes update just as the first figures do? In case it's relevant, my particular application will pull axis objects from several different figures and replot them in a new figure. I won't be merely recreating the same plot.</p> <pre><code>import matplotlib.gridspec as gridspec import matplotlib.pyplot as plt fig_temp, axlist = plt.subplots(2, 2) axlist[0][0].plot([1, 2, 3]) axlist[0][1].plot([3, 2, 3]) axlist[1][0].plot([3, 2, 1]) axlist[1][1].plot([1, 3, 2]) plt.show() fig = plt.figure(figsize=(16, 16)) gs = gridspec.GridSpec(2, 2) for i, row in enumerate(axlist): for ax, gs_elem in zip(row, [gs[i, 0], gs[i, 1]]): ax.figure = fig ax.set_position(gs_elem.get_position(fig)) ax.set_subplotspec(gs_elem) fig.add_axes(ax) fig.tight_layout() plt.show() </code></pre>
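If re-parenting the Axes objects is not essential, one robust alternative is to rebuild the second figure from the first figure's data, so the new subplots keep their normal resize behavior. The sketch below handles line plots only; other artist types would need their own copying logic.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig_temp, axlist = plt.subplots(2, 2)
axlist[0][0].plot([1, 2, 3])
axlist[0][1].plot([3, 2, 3])
axlist[1][0].plot([3, 2, 1])
axlist[1][1].plot([1, 3, 2])

# Copy the plotted data into a brand-new figure instead of moving Axes:
# subplots created here are fully owned by fig2 and lay out normally.
fig2, new_axes = plt.subplots(2, 2, figsize=(16, 16))
for old_row, new_row in zip(axlist, new_axes):
    for old_ax, new_ax in zip(old_row, new_row):
        for line in old_ax.get_lines():
            new_ax.plot(line.get_xdata(), line.get_ydata())
fig2.tight_layout()
```

For the multi-figure use case, the outer loop can just as easily pull `get_lines()` from Axes belonging to several different source figures.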
<python><matplotlib><matplotlib-gridspec>
2023-04-17 18:34:23
0
321
MaanDoabeDa
76,038,131
3,799,923
Type check ignore not working for multiline code
<p>When type checking with <code>mypy</code>, I want to ignore a line by adding <code>type: ignore</code>. The problem is that it does not work when I break the line of code into multiple lines.</p> <p>This works:</p> <pre class="lang-py prettyprint-override"><code>items[&quot;a&quot;][&quot;b&quot;] = &quot;value&quot; # type: ignore </code></pre> <p>whereas the one below does not:</p> <pre class="lang-py prettyprint-override"><code>items[&quot;a&quot;][ &quot;b&quot; ] = &quot;value&quot; # type: ignore </code></pre>
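mypy matches `# type: ignore` against the line on which it reports the error, which for a broken-up subscript assignment is typically the first physical line of the statement, not the last. A sketch of the usual workaround (the exact reported line can vary between mypy versions, so check the error's line number if this placement does not silence it):

```python
items = {"a": {}}

# Put the ignore comment on the first physical line of the statement,
# which is where mypy usually reports errors for multi-line expressions.
items["a"][  # type: ignore
    "b"
] = "value"

print(items)  # -> {'a': {'b': 'value'}}
```

A comment after the opening bracket is still valid Python, so the runtime behavior is unchanged.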
<python><mypy>
2023-04-17 18:19:50
1
2,888
dhilmathy
76,038,104
11,530,571
Problem with library for quick Wasserstein distance calculation
<p>I need a tool to quickly calculate the Wasserstein distance between two two-dimensional point sets. I have been using Gudhi, but it appears to be too slow, and I need a faster alternative. I found the geomloss library, which appears to be fast enough, but the results differ, e.g.</p> <pre class="lang-py prettyprint-override"><code>from gudhi.wasserstein import wasserstein_distance import numpy as np dgm1 = np.array([[2.7, 3.7],[9.6, 14.],[34.2, 34.974]]) dgm2 = np.array([[2.8, 4.45],[9.5, 14.1]]) wasserstein_distance(dgm1, dgm2, order=1) </code></pre> <p>yields <code>1.2369999999999965</code>, while</p> <pre class="lang-py prettyprint-override"><code>import torch from geomloss import SamplesLoss I1 = torch.Tensor(dgm1) I2 = torch.Tensor(dgm2) I1.requires_grad_() loss = SamplesLoss(loss='sinkhorn', debias=False, p=1, blur=1e-3, scaling=0.999, backend='auto') loss(I1, I2) </code></pre> <p>yields <code>tensor(12.9882, grad_fn=&lt;SelectBackward0&gt;)</code>. I don't expect the two results to match perfectly, but ten-fold difference is a bit too much.</p> <p>I would highly appreciate if anyone could help me with either forcing the geomloss to yield result similar to the gudhi, or finding any alternative (that gives result similar to the gudhi).</p>
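One plausible explanation for the mismatch above: the two libraries compute different quantities. gudhi's `wasserstein_distance` is a *partial matching* between persistence diagrams in which points may be matched to the diagonal, while geomloss's Sinkhorn loss treats both inputs as normalized probability measures over plain point clouds, so the numbers are not comparable. The sketch below reproduces gudhi's value with SciPy's Hungarian solver, assuming gudhi's documented defaults (order 1, L-infinity ground metric, diagonal matching):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def diagram_wasserstein_1(dgm1, dgm2):
    """1-Wasserstein between persistence diagrams, L-inf ground metric,
    with points allowed to match the diagonal (gudhi's convention)."""
    n, m = len(dgm1), len(dgm2)
    # cost of matching point i of dgm1 to point j of dgm2
    cross = np.abs(dgm1[:, None, :] - dgm2[None, :, :]).max(axis=2)
    # cost of matching a point to its projection onto the diagonal
    diag1 = (dgm1[:, 1] - dgm1[:, 0]) / 2.0
    diag2 = (dgm2[:, 1] - dgm2[:, 0]) / 2.0
    # augment each diagram with "diagonal slots" for the other side;
    # diagonal-to-diagonal matches cost 0
    C = np.zeros((n + m, n + m))
    C[:n, :m] = cross
    C[:n, m:] = diag1[:, None]   # dgm1 point -> diagonal
    C[n:, :m] = diag2[None, :]   # diagonal -> dgm2 point
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].sum()

dgm1 = np.array([[2.7, 3.7], [9.6, 14.0], [34.2, 34.974]])
dgm2 = np.array([[2.8, 4.45], [9.5, 14.1]])
dist = diagram_wasserstein_1(dgm1, dgm2)  # matches gudhi's 1.237
```

If speed is the only concern, gudhi also exposes `gudhi.hera.wasserstein_distance`, which is typically much faster than the default backend while computing the same diagram distance.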
<python><pytorch><data-analysis>
2023-04-17 18:16:13
1
456
guest
76,038,058
86,638
Why aren't widgets in the BoxLayout laid out correctly when only using Python?
<p>I am trying to get my head around Kivy and was playing with making some Widgets in Python, to make the structure of the kv file a bit lighter.</p> <p>I have created a small compound widget, with a BoxLayout containing a label, and another label between two buttons.</p> <p>To my confusion if I add this code by itself, the layout is placed at 0,0 with sizes 100,100. So it doesn't look like the layout is performed at all. While adding another widget through a .kv file/string the layout is done.</p> <p>I assume this is not a bug, but that I am missing something in how kivy works. My question is how can I get the layout without having to add an extra widget through a .kv file?</p> <pre><code>import kivy from kivy.app import App from kivy.uix.label import Label from kivy.uix.button import Button from kivy.uix.boxlayout import BoxLayout from kivy.lang import Builder kivy.require('2.1.0') class Example(App): def build(self): return Spinner() class Spinner(BoxLayout): def __init__(self, **kwargs): # make sure we aren't overriding any important functionality super(BoxLayout, self).__init__(**kwargs) self.orientation = &quot;horizontal&quot; label = Label(text=&quot;Counting very long&quot;, size_hint=(0.5, None)) lower = Button(text=&quot;-&quot;, size_hint=(0.1, None)) number = Label(text=&quot;1&quot;, size_hint=(0.2, None)) higher = Button(text=&quot;+&quot;, size_hint=(0.1, None)) self.add_widget(label) self.add_widget(lower) self.add_widget(number) self.add_widget(higher) #If we do not add this, no layout is done Builder.load_string(&quot;&quot;&quot; &lt;Spinner&gt;: Label: text: &quot;Wow, now it works&quot; size_hint: (0.1, None) &quot;&quot;&quot;) if __name__ == &quot;__main__&quot;: Example().run() </code></pre>
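A likely culprit in the code above is the super call: `super(BoxLayout, self).__init__(**kwargs)` starts the MRO lookup *after* `BoxLayout`, so `BoxLayout.__init__` (where Kivy binds the layout triggers) never runs; loading the kv rule later happens to force a relayout, masking the bug. The fix is `super().__init__(**kwargs)` (or `super(Spinner, self)`). A plain-Python demonstration of the MRO behavior, with hypothetical class names standing in for Kivy's:

```python
class Layout:
    def __init__(self, **kwargs):
        self.init_chain = getattr(self, "init_chain", [])
        self.init_chain.append("Layout")

class BoxLayout(Layout):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # in real Kivy, this is where layout triggers get bound
        self.init_chain.append("BoxLayout")

class BadSpinner(BoxLayout):
    def __init__(self, **kwargs):
        # starts the lookup *after* BoxLayout: BoxLayout.__init__ is skipped
        super(BoxLayout, self).__init__(**kwargs)

class GoodSpinner(BoxLayout):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)  # the fix

bad, good = BadSpinner(), GoodSpinner()
```

With the corrected super call, the `size_hint=(x, None)` children may still need explicit heights, but the BoxLayout will at least perform its layout pass without the kv workaround.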
<python><kivy>
2023-04-17 18:10:24
1
6,175
daramarak
76,037,730
10,308,565
How to disable periodic DAG parsing in Apache Airflow
<p>We are facing an issue where dynamically-generated DAGs are periodically disappearing from the UI for some time (usually they are gone for a couple of minutes). DAGs keep being scheduled and executed correctly in this period, so it seems like it only affects the UI.</p> <p>We think this is due to Airflow periodically parsing the DAG files, which in our case takes a long time (we have ~1000 dynamically-generated DAGs from user-provided YAML files).</p> <p>We currently bake-in all of our definitions into the Docker image, so there isn't really a need to periodically parse the DAG definitions after deployment.</p> <p>We tried setting the <a href="https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#dag-dir-list-interval" rel="nofollow noreferrer">dag-dir-list-interval</a> and <a href="https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#min-file-process-interval" rel="nofollow noreferrer">min-file-process-interval</a> parameters to some big value (86400), but that didn't seem to help and DAGs are still periodically missing very frequently.</p> <p>Is there a way to completely disable periodic DAG parsing? Or are we looking in the wrong direction for this issue?</p>
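A hedged note on the question above: Airflow has no documented switch to disable the DAG file processor entirely; large intervals are the standard approach. When DAGs vanish from the UI despite large intervals, the cause is often a per-file parse *timeout* rather than the scan interval itself. The fragment below shows the relevant `airflow.cfg` keys (section placement per the Airflow 2.x configuration reference; verify against your version):

```ini
[scheduler]
# re-scan the DAGs folder for new files rarely
dag_dir_list_interval = 86400
# re-parse already-known files rarely
min_file_process_interval = 86400

[core]
# if parsing a single file exceeds these, its DAGs can temporarily
# disappear from the UI even though scheduling continues
dagbag_import_timeout = 300
dag_file_processor_timeout = 360
```

Since the DAGs are baked into the image, caching the YAML-driven DAG generation (e.g., serializing the generated definitions once at build time) can also cut per-parse cost enough that the timeouts never trigger.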
<python><airflow>
2023-04-17 17:24:42
1
394
Alexander
76,037,675
21,420,742
Managing count grouped by another column in python
<p>I have a dataset and I want to see the amount of Unique IDs go to another ID and create a new column showing the count. Here is a sample.</p> <pre><code> ID Reporting ID 101 103 102 103 103 107 104 103 105 107 106 103 107 110 108 103 109 110 110 </code></pre> <p>Desired output:</p> <pre><code> ID Reporting ID Reporting Count 101 103 0 102 103 0 103 107 5 104 103 0 105 107 0 106 103 0 107 110 2 108 103 0 109 110 0 110 2 </code></pre> <p>I have tried <code>df['Reporting Count'] = df.groupby('Reporting ID')['ID'].nunique()</code></p>
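A sketch answering the question above: the attempted `groupby(...).nunique()` produces a Series indexed by `Reporting ID`, so assigning it directly aligns on the wrong index. Counting how many rows name each ID as their `Reporting ID` and mapping that back onto the `ID` column gives the desired column:

```python
import pandas as pd

# sample data from the question; the last row has no Reporting ID
df = pd.DataFrame({
    "ID": ["101", "102", "103", "104", "105", "106", "107", "108", "109", "110"],
    "Reporting ID": ["103", "103", "107", "103", "107", "103",
                     "110", "103", "110", None],
})

# how many rows name each ID as their Reporting ID
counts = df["Reporting ID"].value_counts()
df["Reporting Count"] = df["ID"].map(counts).fillna(0).astype(int)
```

`value_counts` counts rows; if the same reporter could appear on multiple rows and only unique reporters should count, substitute `df.groupby("Reporting ID")["ID"].nunique()` for `counts` — the `.map(...).fillna(0)` pattern stays the same.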
<python><python-3.x><pandas><dataframe><numpy>
2023-04-17 17:16:32
1
473
Coding_Nubie
76,037,448
710,955
How to use json_each with alias() to query in SQLAlchemy/SQLite?
<p>I'm storing JSON documents in one of the columns of an SQLite table. The following query works fine if executed from the SQLite CLI</p> <pre><code>SELECT jsonfield.value FROM dataset, json_each(dataset.samples) as jsonfield WHERE json_extract(jsonfield.value, '$.instruction') == &quot;intrus2&quot; </code></pre> <p>Is it possible to write this query using SQLAlchemy/SQLite?</p> <p>I try this</p> <pre><code>filter = 'intrus2' query_samples = dbSession.query(Dataset) .select_from(Dataset, alias(func.json_each(Dataset.samples), 'jsonfield')) .filter(text('jsonfield.value ='\&quot;' + filter + '\&quot;')) </code></pre> <p>but I got error</p> <blockquote> <p>alias() got an unexpected keyword argument 'flat'</p> </blockquote> <p>Example Dataset.table</p> <pre><code>id (type INTEGER): 1 samples (type JSON): [{&quot;instruction&quot;: &quot;intrus1&quot;, &quot;input&quot;: &quot;Twitter, Instagram, Telegram&quot;, &quot;output&quot;: &quot;T\u00e9l\u00e9gramme&quot;}, {&quot;instruction&quot;: &quot;intrus2&quot;, &quot;input&quot;: &quot;Twitter, Instagram, Telegram&quot;, &quot;output&quot;: &quot;T\u00e9l\u00e9gramme&quot;}] </code></pre>
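A possible answer to the question above: in SQLAlchemy 1.4+, a table-valued function like `json_each` is expressed with `.table_valued()` rather than `alias()` (which is what triggers the `flat` keyword error). A self-contained sketch against an in-memory SQLite database mirroring the described table:

```python
from sqlalchemy import JSON, Column, Integer, create_engine, func, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Dataset(Base):
    __tablename__ = "dataset"
    id = Column(Integer, primary_key=True)
    samples = Column(JSON)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Dataset(id=1, samples=[
        {"instruction": "intrus1", "input": "Twitter", "output": "T1"},
        {"instruction": "intrus2", "input": "Telegram", "output": "T2"},
    ]))
    session.commit()

    # json_each(...) as a table-valued function (SQLAlchemy 1.4+)
    jsonfield = func.json_each(Dataset.samples).table_valued("value")
    stmt = (
        select(jsonfield.c.value)
        .select_from(Dataset, jsonfield)
        .where(func.json_extract(jsonfield.c.value, "$.instruction")
               == "intrus2")
    )
    rows = session.execute(stmt).all()
```

This renders essentially the same SQL as the CLI query: `SELECT anon.value FROM dataset, json_each(dataset.samples) AS anon WHERE json_extract(...) = ?`.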
<python><sqlite><sqlalchemy><json-extract><json-each>
2023-04-17 16:46:07
0
5,809
LeMoussel
76,037,279
468,455
Python: getting a "bytes-like object is required, not str" error on a Requests PUT
<p>We use Interact as our intranet vendor. I am trying to use their API to update the review date on some of the pages. Each session requires the user to get a token and then use that token for auth on other commands. That works fine, I can get the token and read stuff through the API. But when I try to use PUT to update pages I get this error message which looks like it is being generated from one of the libraries that the Request module is using.</p> <pre><code>Error with updating page data... memoryview: a bytes-like object is required, not 'str' Traceback (most recent call last): File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py&quot;, line 998, in send self.sock.sendall(data) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/ssl.py&quot;, line 1233, in sendall with memoryview(data) as view, view.cast(&quot;B&quot;) as byte_view: TypeError: memoryview: a bytes-like object is required, not 'str' </code></pre> <p>Here's my relevant code:</p> <pre><code>def updatePages(dictPages): for pageId in dictPages: dictPage = dictPages[pageId] jsonPage = json.dumps(dictPage, indent = 4) print(jsonPage) strURL = rootURL + &quot;api/page/&quot; + pageId + &quot;/composer&quot; dictHeader = {&quot;Content-Type&quot;: &quot;application/x-www-form-urlencoded; charset=utf-8&quot;, &quot;accept&quot;: &quot;application/json&quot;, &quot;X-Tenant&quot;:tenantGuid, &quot;Authorization&quot;: &quot;Bearer &quot; + accessToken} try: response = requests.put(strURL, data={jsonPage}, headers=dictHeader) print(response.json()) except Exception as e: print(&quot;Error with updating page data...&quot;) print(e) print(traceback.format_exc()) break </code></pre> <p>here's the json I am sending (which I've filtered through a lint)</p> <pre><code>{ &quot;Transition&quot;: { &quot;State&quot;: &quot;published&quot;, &quot;Message&quot;: &quot;Updating the review date&quot; }, &quot;Page&quot;: { &quot;ContentType&quot;: 
&quot;html&quot;, &quot;TopSectionIds&quot;: 4070, &quot;AuthorId&quot;: 449, &quot;Title&quot;: &quot;How to refer a candidate to an open position&quot;, &quot;Summary&quot;: &quot;Help Recruit a Talented Individual and Collect a Bonus.&quot;, &quot;Features&quot;: { &quot;DefaultToFullWidth&quot;: true, &quot;AllowComments&quot;: true, &quot;IsKeyPage&quot;: false, &quot;Recommends&quot;: { &quot;Show&quot;: true, &quot;MaxContentAge&quot;: 7 } }, &quot;PubStartDate&quot;: &quot;2022-09-20T20:31:12.3+00:00&quot;, &quot;PubEndDate&quot;: &quot;2025-09-20T09:00:00.0000000Z&quot;, &quot;ReviewDate&quot;: &quot;2023-03-20T09:00:00.0000000Z&quot;, &quot;TagIds&quot;: [], &quot;Keywords&quot;: [] } } </code></pre> <p>Full traceback</p> <pre><code>Error with updating page data... memoryview: a bytes-like object is required, not 'str' Traceback (most recent call last): File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py&quot;, line 998, in send self.sock.sendall(data) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/ssl.py&quot;, line 1233, in sendall with memoryview(data) as view, view.cast(&quot;B&quot;) as byte_view: TypeError: memoryview: a bytes-like object is required, not 'str' During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/Users/xxxxxxxxxx/Desktop/xxxxxx/Projects/Intranet/Development/updateReviewDate.py&quot;, line 133, in updatePages response = requests.put(strURL, data={jsonPage}, headers=dictHeader) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/requests/api.py&quot;, line 130, in put return request(&quot;put&quot;, url, data=data, **kwargs) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/requests/api.py&quot;, line 59, in request return session.request(method=method, url=url, **kwargs) File 
&quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/requests/sessions.py&quot;, line 587, in request resp = self.send(prep, **send_kwargs) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/requests/sessions.py&quot;, line 701, in send r = adapter.send(request, **kwargs) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/requests/adapters.py&quot;, line 489, in send resp = conn.urlopen( File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/urllib3/connectionpool.py&quot;, line 703, in urlopen httplib_response = self._make_request( File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/urllib3/connectionpool.py&quot;, line 398, in _make_request conn.request(method, url, **httplib_request_kw) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/urllib3/connection.py&quot;, line 239, in request super(HTTPConnection, self).request(method, url, body=body, headers=headers) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py&quot;, line 1282, in request self._send_request(method, url, body, headers, encode_chunked) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py&quot;, line 1328, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py&quot;, line 1277, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py&quot;, line 1076, in _send_output self.send(chunk) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py&quot;, line 1002, in send self.sock.sendall(d) File 
&quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/ssl.py&quot;, line 1233, in sendall with memoryview(data) as view, view.cast(&quot;B&quot;) as byte_view: TypeError: memoryview: a bytes-like object is required, not 'str' </code></pre> <p>Do I have to convert the dictionary to binary data?</p>
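A probable diagnosis for the traceback above: `data={jsonPage}` wraps the JSON string in a *set* literal, so requests iterates it instead of sending the string, and a raw `str` chunk eventually reaches the SSL layer. Pass the serialized string/bytes directly, or let requests serialize with `json=` (which also sets the right `Content-Type`; the declared `application/x-www-form-urlencoded` header contradicts a JSON body anyway). A network-free sketch using a prepared request (the URL is a placeholder):

```python
import json
import requests

payload = {"Transition": {"State": "published",
                          "Message": "Updating the review date"}}
json_page = json.dumps(payload)

# BUG in the question: {json_page} is a set containing one string
# requests.put(url, data={json_page}, headers=...)

# Fix: either data=json_page (the string itself), or json=payload:
req = requests.Request(
    "PUT",
    "https://intranet.example.com/api/page/123/composer",  # placeholder
    json=payload,  # requests serializes and sets Content-Type for you
)
prepared = req.prepare()
body = (prepared.body if isinstance(prepared.body, bytes)
        else prepared.body.encode("utf-8"))
```

With `json=`, the auth headers from the question (`X-Tenant`, `Authorization`) can still be merged in via the `headers=` argument as before.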
<python><request><put>
2023-04-17 16:24:51
0
6,396
PruitIgoe
76,037,162
10,994,166
Pyspark String Type to Array column
<p>I have a spark df column like this:</p> <pre><code>[&quot;100075010&quot;, &quot;100075010&quot;, &quot;100075010&quot;] </code></pre> <p>It's StringType.</p> <p>I wanna cast this column datatype to Arraytype.</p>
<python><apache-spark><pyspark><apache-spark-sql>
2023-04-17 16:11:26
1
923
Chris_007
76,036,961
6,672,237
Function to add quarters represented as strings in Polars
<p>The main idea is to take string values that represent year+quarter concatenation in format <code>2020Q4</code>, <code>1999Q1</code>...ect and add/subtract an arbitrary number of quarters to/from them. These string values live in a 'polars' DataFrame in a column called 'quarter'. I want to create another column 'quarter_diff' where I subtract the previous row's 'quarter' from the current row's value. The subtraction/addition should be smart enough to see that if it's <code>2020Q4</code>, by adding 1 quarter to it, it would be <code>2021Q1</code> and not <code>2020Q5</code></p> <p>Ideally it should be done with 'polars' own function since they are already optimised for speed.</p> <p>Here is a function that is written in python, therefore not optimised for 'polars's speed or parallelisation and it only works for adding quarters, not subtracting them.</p> <pre><code>def add_quarter(quarter_str): year, quarter = quarter_str[:-2], quarter_str[-1] if quarter == '4': return f&quot;{int(year) + 1}Q1&quot; else: return f&quot;{year}Q{int(quarter) + 1}&quot; </code></pre> <p>Can anyone help in refactoring it for both addition as well as subtraction and for use of internal 'polars' function like maybe <code>offset_by()</code> or other? Thanks</p>
<python><python-polars>
2023-04-17 15:47:36
1
562
kuatroka
76,036,878
3,788
Formatting for separate rows per column using Pandas pivot?
<p>Consider the following data:</p> <pre class="lang-py prettyprint-override"><code> data = [ { &quot;Account&quot;: &quot;Cash&quot;, &quot;amount&quot;: 10, &quot;direction&quot;: &quot;debit&quot;, }, { &quot;Account&quot;: &quot;Cash&quot;, &quot;amount&quot;: 10, &quot;direction&quot;: &quot;credit&quot;, } ] df = pd.DataFrame.from_dict(data) df.pivot(index=&quot;Account&quot;, columns=&quot;direction&quot;, values=&quot;amount&quot;) </code></pre> <p>The value of <code>pivot()</code> outputs the following:</p> <pre><code>direction credit debit Account Cash 10 10 </code></pre> <p>What I actually want, however, is something akin to:</p> <pre><code>direction credit debit Account Cash 10 Cash 10 </code></pre> <p>That is, I want all of the &quot;credit&quot; lines to be on a separate row as &quot;debit&quot; for formatting purposes (ie, when using <code>to_table()</code> or <code>to_csv()</code>).</p> <p>I've tried using <code>to_dict()</code>, manipulating the result, and recreating a dataframe using <code>from_dict()</code> but this has become fairly tricky with multi-indexes.</p> <p>The only way I can think to do this is to take the data as a dictionary and manually format and output. Is there a way to do this separation in pandas itself?</p>
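A sketch of one way to get the row-per-direction layout above without round-tripping through dictionaries: keep the original row number as an extra index level so the pivot cannot collapse rows, then drop that level for display:

```python
import pandas as pd

data = [
    {"Account": "Cash", "amount": 10, "direction": "debit"},
    {"Account": "Cash", "amount": 10, "direction": "credit"},
]
df = pd.DataFrame(data)

# keep the original row number in the index so each input row stays its
# own output row instead of being merged by the pivot
wide = (
    df.set_index(["Account", df.index, "direction"])["amount"]
      .unstack("direction")
      .reset_index(level=1, drop=True)
)

# blank out the NaN holes for report-style to_csv()/to_string() output
display = wide.fillna("")
```

`wide` now has one row per original record, with the `Account` label repeated and the amount landing in either the `credit` or `debit` column, which matches the desired report formatting.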
<python><pandas>
2023-04-17 15:37:09
1
19,469
poundifdef
76,036,699
18,018,869
broadcast arrays to same shape, then get the minimum along one axis
<p>I have 3 numpy arrays with different shapes (one not even an array but a scalar) or dimensions.</p> <pre class="lang-py prettyprint-override"><code>arr1.shape # (3, 3, 11) arr2 # this is the scalar # 1 arr3.shape # (3, 11) </code></pre> <p>I want to compute the minimum along a specific axis.</p> <p>I tried like <a href="https://stackoverflow.com/a/39279912/18018869">this</a>:</p> <pre class="lang-py prettyprint-override"><code>np.minimum.accumulate([arr1, np.expand_dims(arr2, axis=(0,1,2)), np.expand_dims(arr3, axis=1)]) </code></pre> <p>Result is:</p> <blockquote> <p>VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray. <br> ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()</p> </blockquote> <p>I thought that would work because it matches the <a href="https://numpy.org/doc/stable/user/basics.broadcasting.html" rel="nofollow noreferrer">broadcasting rules</a></p> <blockquote> <p>Two dimensions are compatible when <br> 1. they are equal, or <br> 2. one of them is 1. <br></p> </blockquote> <p>Adding, subtracting or any other mathematical operation work, so why not also the minimum function?</p> <pre class="lang-py prettyprint-override"><code># adding works as a proof that broadcasting works arr1 + np.expand_dims(arr2, axis=(0,1,2)) + np.expand_dims(arr3, axis=1) </code></pre> <p>The way I achieve what I want is the following, but it does not feel like it is the correct numpy approach. Why do I have to &quot;manually&quot; broadcast the arrays to same shape, when I do not need to do it with the &quot;usual&quot; mathematical operations like adding and subtracting. 
Am I missing something or is this already the most &quot;clean&quot; way of doing it?</p> <pre class="lang-py prettyprint-override"><code>np.array(np.broadcast_arrays(arr1, np.expand_dims(arr2, axis=(0,1,2)), np.expand_dims(arr3, axis=1))).min(axis=0) </code></pre> <p>I want to add that I know about <code>np.minimum()</code>. But it only allows &quot;comparison&quot; of two arrays. I want to pass multiple arrays.</p>
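A sketch addressing the question above: `np.minimum.accumulate` reduces along an axis of a *single* array, so passing a list of differently-shaped arrays can't broadcast; but the binary ufunc `np.minimum` broadcasts exactly like `+`, so chaining it with `functools.reduce` works without manual broadcasting:

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)
arr1 = rng.normal(size=(3, 3, 11))
arr2 = 1.0                       # scalar
arr3 = rng.normal(size=(3, 11))

# np.minimum is binary but broadcasts like +; chain it over the list
result = reduce(np.minimum, [arr1, arr2, arr3[:, None, :]])

# equivalent: broadcast once to a common shape, then reduce a new axis
stacked = np.stack(np.broadcast_arrays(arr1, arr2, arr3[:, None, :]))
result2 = stacked.min(axis=0)
```

`np.minimum.reduce([...])` also works once every element of the list already has the same shape, which is why the explicit `broadcast_arrays` version in the question succeeded — `reduce(np.minimum, ...)` just skips that manual step.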
<python><numpy><numpy-ndarray><array-broadcasting>
2023-04-17 15:18:10
1
1,976
Tarquinius
76,036,574
5,743,692
Why does the literal string """"""" (seven quotes) give an error?
<p>Processing client's input we often use the <code>strip()</code> method. If we want to remove starting-ending symbols from some specific set we just place all it in the parameter.</p> <p>The code:</p> <pre><code>&quot;.yes' &quot;.strip(&quot;. '&quot;) </code></pre> <p>obviously gives <code>'yes'</code> string as a result.<br /> When I try to remove set <code>' &quot;.</code> The result depends from this symbols order. Variant <code>&quot;.yes' &quot;.strip(&quot;&quot;&quot; .&quot;'&quot;&quot;&quot;)</code> works properly, when variant with symbol <code>&quot;</code> at the end gives the <code>SyntaxError: unterminated string literal (detected at line 1)</code>.</p> <p>Why literal string <code>&quot;&quot;&quot;&quot;&quot;&quot;&quot;</code> (using seven quotes) gives error? It is just the same <code>'&quot;'!&quot;</code>.</p> <p>Let's look documentation:</p> <p><a href="https://docs.python.org/3/library/stdtypes.html#textseq" rel="nofollow noreferrer">Triple quoted: '''Three single quotes''', &quot;&quot;&quot;Three double quotes&quot;&quot;&quot;</a></p> <p>and</p> <p><a href="https://i.sstatic.net/tE9uK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tE9uK.png" alt="Language reference" /></a></p> <p>Click <a href="https://docs.python.org/3/reference/lexical_analysis.html#literals" rel="nofollow noreferrer">here</a> to verify. So</p> <ol> <li><code>longstring</code> i.e. <code>&quot;&quot;&quot;longstringitem&quot;&quot;&quot;</code></li> <li><code>longstringitem</code> may be a single char.</li> </ol> <p>So do we have to rewrite documentation or interpreter?</p> <p>I've register my question on Python documentation issue. Can see <a href="https://github.com/python/cpython/issues/103594" rel="nofollow noreferrer">here</a>.</p>
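A sketch of what the tokenizer actually does with the seven quotes above: after the opening `"""`, the string ends at the *first* unescaped `"""`, so quotes 4-6 close an empty string and the seventh starts a new, unterminated literal. The documentation is consistent with this once escaping is taken into account — a `longstringitem` can be a quote, but three unescaped quotes in a row always close the literal. Escaping the final quote sidesteps it:

```python
s = '".yes\' "'

# clearest spelling: a normal string with the double quote escaped
assert s.strip(" .'\"") == "yes"

# a triple-quoted version also works if the last quote is escaped, so
# the tokenizer does not see a closing """ too early
assert s.strip(""" .'\"""") == "yes"

# for reference, seven bare quotes tokenize as:
#   """"""" -> """""" (an empty triple-quoted string) + " (unterminated)
```

So neither the documentation nor the interpreter needs rewriting; the grammar just requires that a quote character at the end of a triple-quoted literal be escaped.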
<python><string><literals>
2023-04-17 15:03:56
2
451
Vasyl Kolomiets
76,036,558
11,135,962
Convert a string representation of a list to a proper list
<p>I have the following:</p> <pre><code>a = &quot;&quot;&quot;\&quot;[&quot;&quot;123456789&quot;&quot;,&quot;&quot;987654321&quot;&quot;]\&quot;&quot;&quot;&quot; </code></pre> <p>I am trying to convert that to a list of strings. I've tried the following:</p> <pre><code>lst = ast.literal_eval(a) </code></pre> <p>but this returns all the characters as an individual string. What is the mistake I am doing?</p> <p>The expected output: <code>[&quot;123456789&quot;, &quot;987654321&quot;]</code></p>
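A sketch of what goes wrong above: inside the outer quotes, each `""` parses as an adjacent empty string, so Python's implicit string concatenation fuses everything into one string (roughly `"[123456789,987654321]"`) before `literal_eval` ever sees a list. The doubled quotes are CSV-style escaping; undoing that first and then parsing as JSON recovers the list:

```python
import json

a = """\"[""123456789"",""987654321""]\""""
# a == '"[""123456789"",""987654321""]"'  (CSV-style doubled quotes)

inner = a.strip('"').replace('""', '"')   # '["123456789","987654321"]'
lst = json.loads(inner)
```

`ast.literal_eval(inner)` would work equally well on the cleaned string; the key step is normalizing the doubled quotes before parsing.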
<python><string>
2023-04-17 15:01:43
1
3,620
some_programmer
76,036,476
12,760,550
Save dataframe in loop according to unique values in column and password protect it
<p>I am testing the following code but, although generating the files I need (1 file per unique Business Region - which is the index column of the dataframe), it does not password protect it. Any idea?</p> <pre><code>import os import pandas as pd import numpy as np from openpyxl.workbook.protection import WorkbookProtection grouped = list(consolidated_df.groupby(['Business Region'])) passwords = {'AE': 'AE_', 'CH': 'CH', 'CZ': 'CZ', 'DK': 'DK', 'ES': 'ES', 'FI': 'FI', 'FR': 'FR', 'HQ': 'HQ', 'NL': 'NL', 'NO': 'NO', 'PE': 'PE', 'PT': 'PT', 'SE': 'SE', 'SK': 'SK', 'TMC': 'TMC', 'London': 'GB', 'York': 'GB'} for df in grouped: region = df[0] name = str(df[1].index.unique().format()).replace(&quot;'&quot;, '').replace(&quot;[&quot;, '').replace(&quot;]&quot;, '') path = os.path.join(filepathtovalidate, f'{name} DM4 Oracle Load.xlsx') writer = pd.ExcelWriter(path) df[1].to_excel(writer, index=False) workbook = writer.book workbook.security = WorkbookProtection(workbookPassword=passwords.get(region)) writer.save() </code></pre>
<python><pandas><group-by><openpyxl><export-to-excel>
2023-04-17 14:54:49
1
619
Paulo Cortez
76,036,392
1,833,326
Different result in each run (pyspark)
<p>I have a data frame as a result of multiple joins. And I want to investigate for duplicates. But each time when I investigate it the data frame looks different. In particular, the following command leads to different <code>IDs</code> but the number of results stays constant.</p> <pre><code>from pyspark.sql import SparkSession from pyspark.sql.types import StructType, StructField, IntegerType, StringType import pyspark.sql.functions as f from pyspark.sql.functions import lit # Create a Spark session spark = SparkSession.builder.appName(&quot;CreateDataFrame&quot;).getOrCreate() # User input for number of rows n_a = 10 n_a_c = 5 n_a_c_d = 3 n_a_c_e = 4 # Define the schema for the DataFrame schema_a = StructType([StructField(&quot;id1&quot;, StringType(), True)]) schema_a_b = StructType( [ StructField(&quot;id1&quot;, StringType(), True), StructField(&quot;id2&quot;, StringType(), True), StructField(&quot;extra&quot;, StringType(), True), ] ) schema_a_c = StructType( [ StructField(&quot;id1&quot;, StringType(), True), StructField(&quot;id3&quot;, StringType(), True), ] ) schema_a_c_d = StructType( [ StructField(&quot;id3&quot;, StringType(), True), StructField(&quot;id4&quot;, StringType(), True), ] ) schema_a_c_e = StructType( [ StructField(&quot;id3&quot;, StringType(), True), StructField(&quot;id5&quot;, StringType(), True), ] ) # Create a list of rows with increasing integer values for &quot;id1&quot; and a constant value of &quot;1&quot; for &quot;id2&quot; rows_a = [(str(i),) for i in range(1, n_a + 1)] rows_a_integers = [str(i) for i in range(1, n_a + 1)] rows_a_b = [(str(i), str(1), &quot;A&quot;) for i in range(1, n_a + 1)] def get_2d_list(ids_part_1: list, n_new_ids: int): rows = [ [ (str(i), str(i) + &quot;_&quot; + str(j)) for i in ids_part_1 for j in range(1, n_new_ids + 1) ] ] return [item for sublist in rows for item in sublist] rows_a_c = get_2d_list(ids_part_1=rows_a_integers, n_new_ids=n_a_c) rows_a_c_d = get_2d_list(ids_part_1=[i[1] for i in 
rows_a_c], n_new_ids=n_a_c_d) rows_a_c_e = get_2d_list(ids_part_1=[i[1] for i in rows_a_c], n_new_ids=n_a_c_e) # Create the DataFrame df_a = spark.createDataFrame(rows_a, schema_a) df_a_b = spark.createDataFrame(rows_a_b, schema_a_b) df_a_c = spark.createDataFrame(rows_a_c, schema_a_c) df_a_c_d = spark.createDataFrame(rows_a_c_d, schema_a_c_d) df_a_c_e = spark.createDataFrame(rows_a_c_e, schema_a_c_e) # Join everything df_join = ( df_a.join(df_a_b, on=&quot;id1&quot;) .join(df_a_c, on=&quot;id1&quot;) .join(df_a_c_d, on=&quot;id3&quot;) .join(df_a_c_e, on=&quot;id3&quot;) ) # Nested structure # show df_nested = df_join.withColumn(&quot;id3&quot;, f.struct(f.col(&quot;id3&quot;))) for i, index in enumerate([(5, 3), (4, 3), (3, None)]): remaining_columns = list(set(df_nested.columns).difference(set([f&quot;id{index[0]}&quot;]))) df_nested = ( df_nested.groupby(*remaining_columns) .agg(f.collect_list(f.col(f&quot;id{index[0]}&quot;)).alias(f&quot;id{index[0]}_tmp&quot;)) .drop(f&quot;id{index[0]}&quot;) .withColumnRenamed( f&quot;id{index[0]}_tmp&quot;, f&quot;id{index[0]}&quot;, ) ) if index[1]: df_nested = df_nested.withColumn( f&quot;id{index[1]}&quot;, f.struct( f.col(f&quot;id{index[1]}.*&quot;), f.col(f&quot;id{index[0]}&quot;), ).alias(f&quot;id{index[1]}&quot;), ).drop(f&quot;id{index[0]}&quot;) # Investigate for duplicates in id3 (should be unique) df_test = df_nested.select(&quot;id2&quot;, &quot;extra&quot;, f.explode(f.col(&quot;id3&quot;)[&quot;id3&quot;]).alias(&quot;id3&quot;)) for i in range(5): df_test.groupby(&quot;id3&quot;).count().filter(f.col(&quot;count&quot;) &gt; 1).show() </code></pre> <p>The last command prints in one of two my case different results. 
Sometimes:</p> <pre><code>+---+-----+ |id3|count| +---+-----+ |6_4| 2| +---+-----+ </code></pre> <p>And sometimes</p> <pre><code>+---+-----+ |id3|count| +---+-----+ |9_3| 2| +---+-----+ </code></pre> <p>If it helps I use Databricks Runtime Version 11.3 LTS (includes Apache Spark 3.3.0, Scala 2.12)</p> <p>Moreover, there can be no duplicate to my understanding based on the design of the code. The found duplicate seems to be a bug!?</p> <p>Maybe as a potential proof that the join does not result in any duplicates:</p> <pre><code>df_join.groupby(&quot;id3&quot;, &quot;id4&quot;, &quot;id5&quot;).count().filter(f.col(&quot;count&quot;) &gt; 1).show() </code></pre> <p>is empty</p>
<python><apache-spark><pyspark><apache-spark-sql><databricks>
2023-04-17 14:44:37
1
1,018
Lazloo Xp
76,036,228
327,572
How do you implement exec_module for a custom Python module loader?
<p>I want to implement a custom Python module loader, as mentioned <a href="https://docs.python.org/3/reference/import.html" rel="nofollow noreferrer">here</a>. The process seems pretty straightforward, except I don't how to eventually delegate to the default loading process, which I think I want do when I implement <code>exec_module</code>.</p> <p>So for example, I can put this 'Finder' on the <code>sys.meta_path</code> and it gets called when I <code>import</code> a module.</p> <pre class="lang-py prettyprint-override"><code> import sys import importlib.machinery import types class Finder: def find_spec(name, path, target): print(f&quot;find_spec: {name}, path={path}, target={target}&quot;) if not name.startswith('my_prefix.'): return None return importlib.machinery.ModuleSpec(name, Finder, is_package=True) def create_module(spec): print(f&quot;create_module, spec = {spec}&quot;) mod = types.ModuleType(spec.name) return mod def exec_module(mod): print(f&quot;exec_module, mod = {mod}&quot;) # How to use a file here, to define the module? mod.test = 123 sys.meta_path.append(Finder) import my_prefix.foo </code></pre> <p>Those three functions get called as expected. Now let's say I want to load a certain file to define most of the <code>my_prefix.foo</code> module. How do I do that inside <code>exec_module</code>? Do I manually read the code from the file and call the builtin <a href="https://docs.python.org/3/library/functions.html?highlight=exec#exec" rel="nofollow noreferrer">exec function</a>? Or is there a better way to do that when defining modules?</p>
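A sketch answering the question above: yes — read the source, `compile` it with the real filename (so tracebacks point at it), and `exec` into `module.__dict__`. Returning `None` from `create_module` requests the default module creation, which is usually what you want. The `my_prefix` names and temp-directory layout below are illustrative:

```python
import importlib
import importlib.abc
import importlib.machinery
import os
import sys
import tempfile

class MyLoader(importlib.abc.Loader):
    def __init__(self, path):
        self.path = path

    def create_module(self, spec):
        return None  # None = "use the default module creation"

    def exec_module(self, module):
        with open(self.path) as f:
            code = compile(f.read(), self.path, "exec")
        exec(code, module.__dict__)  # run the source in the module namespace

class MyFinder(importlib.abc.MetaPathFinder):
    def __init__(self, root):
        self.root = root

    def find_spec(self, name, path=None, target=None):
        if name == "my_prefix":
            # loader=None + is_package=True -> namespace-style package
            return importlib.machinery.ModuleSpec(name, None, is_package=True)
        if name.startswith("my_prefix."):
            filename = os.path.join(self.root, name.split(".", 1)[1] + ".py")
            return importlib.machinery.ModuleSpec(name, MyLoader(filename))
        return None

root = tempfile.mkdtemp()
with open(os.path.join(root, "foo.py"), "w") as f:
    f.write("test = 123\n")

sys.meta_path.append(MyFinder(root))
foo = importlib.import_module("my_prefix.foo")
```

When the goal is purely "load this file the normal way", an even shorter path is to return a spec whose loader is `importlib.machinery.SourceFileLoader(name, filename)` — then you never implement `exec_module` yourself.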
<python><python-import><python-importlib>
2023-04-17 14:28:09
1
16,719
Rob N
76,036,172
1,028,270
How do I execute a custom filter that returns no output without assigning to a variable?
<p>I have a custom filter that updates a global variable in my code and does some other stuff when called. It returns no output.</p> <p>I just want to call it inside a template without assigning the output to a variable.</p> <p>Doing this works but I want to remove the unnecessary variable assignment:</p> <pre><code>{%- set unused = {'one': 'aaa', &quot;two&quot;: &quot;bbb&quot;, 'three': 'ccc'} | get_env_var -%} </code></pre> <p>I've tried the following:</p> <pre><code>{%- {'one': 'aaa', &quot;two&quot;: &quot;bbb&quot;, 'three': 'ccc'} | get_env_var -%} jinja2.exceptions.TemplateSyntaxError: tag name expected </code></pre> <pre><code>{{'one': 'aaa', &quot;two&quot;: &quot;bbb&quot;, 'three': 'ccc'} | get_env_var} jinja2.exceptions.TemplateSyntaxError: expected token 'end of print statement', got ':' </code></pre> <pre><code>{{{'one': 'aaa', &quot;two&quot;: &quot;bbb&quot;, 'three': 'ccc'} | get_env_var}} yaml.composer.ComposerError: expected a single document in the stream in &quot;&lt;unicode string&gt;&quot;, line 3, column 1: ccc ^ but found another document in &quot;&lt;unicode string&gt;&quot;, line 4, column 1: --- ^ </code></pre> <p>How do I just execute a custom filter and not capture its output?</p>
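A sketch answering the question above: Jinja's `{% do %}` statement (from the bundled `jinja2.ext.do` extension) evaluates an expression purely for its side effects and discards the result — no dummy `set` variable needed. The filter name and payload mirror the question:

```python
from jinja2 import Environment

captured = []

def get_env_var(value):
    captured.append(value)   # side effect only; returns None

env = Environment(extensions=["jinja2.ext.do"])
env.filters["get_env_var"] = get_env_var

# {% do %} evaluates the expression and throws away its output
tmpl = env.from_string(
    "{%- do {'one': 'aaa', 'two': 'bbb', 'three': 'ccc'} | get_env_var -%}ok"
)
out = tmpl.render()
```

If the environment is built by a framework you don't control, check whether it exposes a hook for extensions; `{% if expr %}{% endif %}` is a last-resort way to force evaluation without any extension.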
<python><python-3.x><jinja2>
2023-04-17 14:21:58
1
32,280
red888
76,036,074
5,409,315
Cannot debug test case in VS Code: Found duplicate in "env": PATH
<p>I am using VS Code for developing in Python. I have been able to debug single test cases from the test module, which is very practical. Since recently, it no longer works. After a short waiting time, a dialog pops up: &quot;Invalid message: Found duplicate in &quot;env&quot;: PATH.&quot; with the buttons &quot;Open launch.json&quot; and &quot;Cancel&quot;. Opening launch.json does not help, since I don't manipulate PATH there. I also did not edit PATH recently (and the message seems to point not to a wrong value in PATH, but to PATH being set multiple times).</p> <p>I tried to see the duplicate PATH myself, but failed: Both <code>echo ${env:PATH}</code> at the console and <code>os.environ</code> in Python will only give a single PATH variable - as already implied by the used data structure.</p> <p>I tried applying <a href="https://stackoverflow.com/a/69226515/5409315">this SO answer</a>. It does neither relate directly to VS Code, nor to duplicate PATH variables, but to duplicates <em>in</em> PATH, but it was the closest I found on here. It didn't work.</p>
<python><unit-testing><visual-studio-code><debugging>
2023-04-17 14:09:44
4
604
Jann Poppinga
76,035,887
5,597,037
Selenium Firefox in headless running from a Celery task - Webdriver unexpectedly closes
<p>Running Selenium from a Flask app / with Celery task in headless mode. I get the error message: <code>selenium.common.exceptions.WebDriverException: Message: Process unexpectedly closed with status 127</code> Here is the code:</p> <pre><code>from selenium.webdriver.firefox.service import Service import sys, os from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys from selenium.webdriver.support.ui import Select from selenium.webdriver.firefox.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from time import sleep from dotenv import load_dotenv from check_email import EmailedReferrals, Referrals from sqlalchemy import create_engine, Column, Integer, String, Date, Text from sqlalchemy.orm import declarative_base from sqlalchemy.orm import sessionmaker from datetime import datetime from check_email import EmailedReferrals, Referrals import logging logger = logging.getLogger('celery') logger.setLevel(logging.DEBUG) load_dotenv() env_conf = os.getenv(&quot;CONFIG&quot;) Base = declarative_base() if env_conf == &quot;dev&quot;: driver = webdriver.Firefox() DATABASE_URL = os.getenv('DEV_DATABASE_URL') else: # Set up Firefox options options = Options() # Set window size options.add_argument(&quot;--window-size=1920,1080&quot;) options.add_argument(&quot;-headless&quot;) # Include the path to the Firefox binary firefox_binary_path = '/usr/bin/firefox' geckodriver_path = '/home/fusion/bin/geckodriver' options.binary_location = firefox_binary_path service = Service(executable_path=geckodriver_path) driver = webdriver.Firefox(service=service, options=options) DATABASE_URL = os.getenv('PROD_DATABASE_URL') engine = create_engine(DATABASE_URL) Base.metadata.create_all(engine) Session = sessionmaker(bind=engine) session = Session() def get_row_xpath(row, cell): #pulls xpath of rows and cells from Incoming Referrals 
View ctl = str(row).zfill(2) action = row - 3 id_cell = f&quot;&quot;&quot;/html/body/form/table/tbody/tr[4]/td[2]/table/tbody/tr/td/table/tbody/tr[2]/td/div/table/tbody\ /tr/td[3]/div/table/tbody/tr[2]/td/div/table/tbody/tr[{row}]/td[{cell}]&quot;&quot;&quot; select_element = f&quot;&quot;&quot;//*[@id=&quot;ucViewGrid_dgView_ctl{ctl}_Actions_{action}_0_0_ActionItems&quot;]&quot;&quot;&quot; go_button = f&quot;&quot;&quot;#ucViewGrid_dgView_ctl{ctl}_Actions_{action}_0_0_ActionButton &gt; input:nth-child(1)&quot;&quot;&quot; return id_cell, select_element, go_button def main(referral_id_from_flask, comment): print('hello from main') logger.info(&quot;hello from main&quot;) driver.get(&quot;https://www.example.com&quot;) #Click Account and sign in driver.find_element(By.XPATH, '//*[@id=&quot;UserNameTextBox&quot;]').send_keys(os.getenv('USERNAME')) driver.find_element(By.XPATH, '//*[@id=&quot;PasswordTextBox&quot;]').send_keys(os.getenv('PASSWORD')) driver.find_element(By.XPATH, '//*[@id=&quot;btnLoginHack&quot;]').click() sleep(2) driver.find_element(By.XPATH, '//*[@id=&quot;ctl13&quot;]').click() #loop through table cell = 3 for row in range(3, 52): logger.info(f&quot;Looping: {row}&quot;) print(f&quot;Loop #: {row}&quot;) id_cell = driver.find_element(By.XPATH, get_row_xpath(row, cell)[0]).text referral_id = id_cell.split('\n')[0].strip() #find row with the correct if referral_id == referral_id_from_flask: #Select View Online Referral select_element = driver.find_element(By.XPATH, get_row_xpath(row, cell)[1]) select = Select(select_element) select.select_by_visible_text(&quot;View&quot;) #click go driver.find_element(By.CSS_SELECTOR, get_row_xpath(row, cell)[2]).click() #we are in View Online Referral of the patient #Find select box #select value 0 which is accept patient select_element = driver.find_element(By.XPATH, &quot;&quot;&quot;//*[@id=&quot;dgProviders_ctl02_ddResponse&quot;]&quot;&quot;&quot; ) select = Select(select_element) 
select.select_by_value(&quot;1&quot;) sleep(0.5) #comment box # wait up to 10 seconds for the element to be present and clickable comment_box = WebDriverWait(driver, 10).until( EC.element_to_be_clickable((By.XPATH, '//*[@id=&quot;dgProviders_ctl02_txtProviderComments&quot;]')) ) comment_box.clear() sleep(0.5) # Add a small delay here comment_box.send_keys(comment) sleep(0.5) # Add a small delay here #click send response #driver.find_element(By.XPATH, &quot;&quot;&quot;//*[@id=&quot;ButtonBarSendResponse&quot;]&quot;&quot;&quot;).click() #add to query r = session.query(Referrals).filter(Referrals.referral_id == int(referral_id)).first() r.referral_acceptence_status = &quot;accepted&quot; session.commit() session.close() print('done') break if __name__ == &quot;__main__&quot;: referral_id_from_flask = sys.argv[1] comment = sys.argv[2] main(referral_id_from_flask, comment) </code></pre> <p>And the complete stack trace:</p> <pre><code>[2023-04-17 09:35:13,375: ERROR/ForkPoolWorker-2] Traceback (most recent call last): File &quot;/home/fusion/scrapers/care_portal/accept.py&quot;, line 43, in &lt;module&gt; driver = webdriver.Firefox(service=service, options=options) File &quot;/home/fusion/flaskapp/venv/lib/python3.10/site-packages/selenium/webdriver/firefox/webdriver.py&quot;, line 199, in __init__ super().__init__(command_executor=executor, options=options, keep_alive=True) File &quot;/home/fusion/flaskapp/venv/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py&quot;, line 286, in __init__ self.start_session(capabilities, browser_profile) File &quot;/home/fusion/flaskapp/venv/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py&quot;, line 378, in start_session response = self.execute(Command.NEW_SESSION, parameters) File &quot;/home/fusion/flaskapp/venv/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py&quot;, line 440, in execute self.error_handler.check_response(response) File 
&quot;/home/fusion/flaskapp/venv/lib/python3.10/site-packages/selenium/webdriver/remote/errorhandler.py&quot;, line 245, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.WebDriverException: Message: Process unexpectedly closed with status 127 </code></pre>
<python><selenium-webdriver><celery>
2023-04-17 13:50:15
1
1,951
Mike C.
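Exit status 127 from a spawned process usually means the binary could not be executed at all (missing file, wrong path, or missing shared libraries) — a common difference between a dev shell and the environment a Celery worker runs in. A minimal diagnostic sketch for the question above; the default paths are placeholders taken from the question, not verified locations:

```python
import shutil
import subprocess

def check_browser_env(firefox="/usr/bin/firefox", gecko="geckodriver"):
    """Report whether firefox and geckodriver can actually be executed.

    Launch status 127 usually means the binary (or one of its shared
    libraries) is missing in the worker's environment, not a Selenium bug.
    """
    gecko_path = shutil.which(gecko) or gecko
    results = {}
    for name, path in (("firefox", firefox), ("geckodriver", gecko_path)):
        try:
            proc = subprocess.run([path, "--version"],
                                  capture_output=True, text=True, timeout=15)
            results[name] = (proc.stdout or proc.stderr).strip()
        except OSError as exc:
            results[name] = f"FAILED: {exc}"
    return results
```

Calling this from inside the Celery task (so it inherits the worker's PATH and environment, not the dev shell's) shows whether the worker can see the same binaries the interactive session can.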
76,035,847
3,802,473
Polars / Python Limits Number of Printed Table Output Rows
<p>Does anyone know why Polars (or maybe my PyCharm setup or Python debugger) limits the number of rows in the output? This drives me nuts.</p> <p>Here is the Polars code I am running, but I suspect it's not Polars-specific, as there isn't much out there on Google (and ChatGPT said its info is too old).</p> <pre><code>import polars as pl df = pl.scan_parquet('/path/to/file.parquet') result_df =( df .filter(pl.col(&quot;condition_category&quot;) == 'unknown') .groupby(&quot;type&quot;) .agg( [ pl.col(&quot;type&quot;).count().alias(&quot;counts&quot;), ] ) ).collect() print(result_df) </code></pre> <p><a href="https://i.sstatic.net/EX96O.png" rel="noreferrer"><img src="https://i.sstatic.net/EX96O.png" alt="polars printout of query result" /></a></p>
<python><debugging><python-polars><truncated>
2023-04-17 13:46:32
2
725
theStud54
76,035,817
6,237,395
How do you write to azure with Pandas
<p>You'd think this would be the same as on-prem, but it doesn't appear to be.</p> <p>Writing works fine like so (lifted from <a href="https://learn.microsoft.com/en-us/sql/machine-learning/data-exploration/python-dataframe-pandas?view=sql-server-ver16" rel="nofollow noreferrer">microsoft's page</a>):</p> <pre><code>import pyodbc import pandas as pd server = 'myserver.database.windows.net' database = 'mydatabase' username = 'DefinitelyNotAdmin' password = 'DefinitelyNotPassword' driver= 'ODBC Driver 17 for SQL Server' cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+ password) cursor = cnxn.cursor() sqlcmd = &quot;SELECT TOP (1000) * FROM information_schema.tables&quot; df = pd.read_sql(sqlcmd, cnxn) </code></pre> <p>So far, so good. You can get the data out.</p> <p>...but when you try and put the data back:</p> <pre><code>df.to_sql('test',cnxn,schema = 'lz') </code></pre> <p>You get the following error:</p> <pre><code>DatabaseError: Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': ('42S02', &quot;[42S02] [Microsoft][ODBC SQL Server Driver][SQL Server] Invalid object name 'sqlite_master'. (208) (SQLExecDirectW); [42S02] [Microsoft][ODBC SQL Server Driver][SQL Server]Statement(s) could not be prepared. (8180)&quot;) </code></pre> <p>When you follow microsoft's guide, it takes you to <a href="https://learn.microsoft.com/en-us/sql/machine-learning/data-exploration/python-dataframe-sql-server?view=sql-server-ver16" rel="nofollow noreferrer">Insert Python dataframe into SQL table</a></p> <p>which isn't pandas.</p> <p><strong>Question:</strong> How do you write a pandas dataframe to Azure?</p>
<python><pandas><azure>
2023-04-17 13:42:47
1
386
James
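The `sqlite_master` error above is the giveaway: `DataFrame.to_sql` only accepts a SQLAlchemy connectable or a `sqlite3` connection, so when handed a raw pyodbc connection it falls back to assuming SQLite. A sketch of the pattern with a SQLAlchemy engine — demonstrated against in-memory SQLite so it runs anywhere; for Azure SQL the URL would be along the lines of `mssql+pyodbc://user:pass@server/db?driver=ODBC+Driver+17+for+SQL+Server` (an illustrative, assumed URL, not a verified connection string):

```python
import pandas as pd
from sqlalchemy import create_engine

# For Azure SQL, swap in an mssql+pyodbc URL; sqlite is used here only
# so the sketch is self-contained and runnable.
engine = create_engine("sqlite:///:memory:")

df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})
df.to_sql("test", engine, index=False, if_exists="replace")

round_tripped = pd.read_sql("SELECT * FROM test", engine)
print(round_tripped)
```

The key design point is that pandas delegates dialect-specific SQL generation to SQLAlchemy, which is why `to_sql` needs an engine rather than a bare DB-API connection.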
76,035,783
891,441
Make all fields with nested pydantic models optional recursively
<p>I have a pydantic model with many fields that are also pydantic models. I would like to make recursively all fields optional.</p> <p>For example:</p> <pre class="lang-py prettyprint-override"><code>class MyModelE(BaseModel): f: float class MyModelA(BaseModel): d: str e: MyModelE class MyModel(BaseModel): a: MyModelA b: int </code></pre> <p>I would like to have a metaclass or method to make all fields of <code>MyModel</code> optional, including fields <code>a</code>, <code>b</code>, <code>d</code>, <code>e</code>, and <code>f</code>.</p> <p>The solutions described in this previously asked question only make fields that are non-pydantic models optional.</p> <p><a href="https://stackoverflow.com/questions/67699451/make-every-fields-as-optional-with-pydantic">Make every fields as optional with Pydantic</a></p>
<python><pydantic>
2023-04-17 13:38:08
2
2,435
morfys
76,035,572
9,978,422
How to ignore fragment of f-string in python assert
<p>Is it possible to ignore one fragment of an f-string in a Python assertion? E.g.:</p> <pre><code>assert results.to_dict() == { &quot;comment&quot;: f&quot;Retrying in 3600 seconds (1/5)&quot;, } </code></pre> <p>How can I make the assertion pass for any of the tries 1-5 in this line? Maybe some regex?</p> <pre><code>assert results.to_dict() == { &quot;comment&quot;: f&quot;Retrying in 3600 seconds ({some_regex}/5)&quot;, } </code></pre>
<python><assert><f-string>
2023-04-17 13:14:14
1
322
lukos06
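One way to sketch this: compare the fixed parts exactly and match the variable fragment with `re.fullmatch` instead of string equality (the key name and message format are taken from the question; the helper name is made up for illustration):

```python
import re

def assert_retry_comment(result, max_tries=5):
    # The (N/5) counter varies between runs, so match it with a regex
    # instead of comparing the whole string for equality.
    pattern = rf"Retrying in 3600 seconds \([1-{max_tries}]/{max_tries}\)"
    assert re.fullmatch(pattern, result["comment"]), result["comment"]

assert_retry_comment({"comment": "Retrying in 3600 seconds (3/5)"})
```

Note the parentheses are escaped (`\(` and `\)`) since they are regex metacharacters; the character class `[1-5]` covers the allowed tries for single-digit counts.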
76,035,499
18,806,499
Why does my sorting algo with numpy arrays work slower than with lists?
<p>I was trying to sort a list with sequential bitonic sort and wanted to make it faster by sorting a numpy array instead of a list, but it only became slower. What did I do wrong?</p> <p>Here is the sorting algo:</p> <pre><code>from datetime import datetime import numpy as np def compAndSwap(a, i, j, dire): if (dire == 1 and a[i] &gt; a[j]) or (dire == 0 and a[i] &lt; a[j]): a[i], a[j] = a[j], a[i] def bitonicMerge(a, low, cnt, dire): if cnt &gt; 1: k = cnt // 2 for i in range(low, low + k): compAndSwap(a, i, i + k, dire) bitonicMerge(a, low, k, dire) bitonicMerge(a, low + k, k, dire) def bitonicSort(a, low, cnt, dire): if cnt &gt; 1: k = cnt // 2 bitonicSort(a, low, k, 1) bitonicSort(a, low + k, k, 0) bitonicMerge(a, low, cnt, dire) def sort_(a, N, up): bitonicSort(a, 0, N, up) </code></pre> <p>And here is the part that runs this algo on a list:</p> <pre><code>with open('data.txt') as f: line = f.readline() a = line[1:-2].split(', ') a = list(map(int, a)) n = len(a) up = 1 time1 = datetime.now() sort_(a, n, up) time2 = datetime.now() print(&quot;\nCurrent Time =&quot;, time2-time1) </code></pre> <p>And here is the version for a numpy array:</p> <pre><code> with open('data.txt') as f: line = f.readline() a = np.array(line[1:-2].split(', ')).astype('int32') n = a.size up = 1 time1 = datetime.now() sort_(a, n, up) time2 = datetime.now() print(&quot;\nCurrent Time =&quot;, time2-time1) </code></pre> <p>What did I miss?</p>
<python><algorithm><numpy><sorting>
2023-04-17 13:07:34
1
305
Diana
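The slowdown above is expected: scalar indexing like `a[i]` on a NumPy array allocates a new Python object on every access, so an element-at-a-time algorithm is typically slower on an array than on a list. NumPy pays off only when whole stages are vectorized. A minimal sketch of one compare-and-swap stage done with array operations instead of the Python loop — this replaces just the inner loop of `bitonicMerge`, not the whole recursion:

```python
import numpy as np

def merge_stage(a, low, k, ascending=True):
    """Vectorized compare-and-swap of a[low:low+k] against a[low+k:low+2k]."""
    left = a[low:low + k].copy()
    right = a[low + k:low + 2 * k].copy()
    if ascending:
        a[low:low + k] = np.minimum(left, right)
        a[low + k:low + 2 * k] = np.maximum(left, right)
    else:
        a[low:low + k] = np.maximum(left, right)
        a[low + k:low + 2 * k] = np.minimum(left, right)

arr = np.array([3, 1, 2, 4])
merge_stage(arr, 0, 2)
print(arr)  # [2 1 3 4]
```

Each stage then does O(k) work in C rather than k round trips through the interpreter, which is where the list version was quietly winning.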
76,035,476
9,244,276
How to stream local video using Django
<p>I’m using <a href="https://github.com/hadronepoch/python-ffmpeg-video-streaming" rel="nofollow noreferrer">python-ffmpeg-video-streaming</a> in my Django project. I am able to generate mpd and m3u8 files successfully, but I don’t know how to serve these files using Django views.</p> <p>I tried this approach, but it is not working:</p> <pre><code>def stream_multi_quality_video(request): video_path = &quot;./dashpath/dash.mpd&quot; with open(video_path, &quot;rb&quot;) as mpd_file: mpd_contents = mpd_file.read() # Set the response headers response = HttpResponse(mpd_contents, content_type=&quot;application/dash+xml&quot;) response[&quot;Content-Length&quot;] = len(mpd_contents) return response </code></pre> <p>After this, I got a &quot;chunk file not found&quot; error in the browser console.</p> <p>Here is my frontend code for videojs:</p> <pre class="lang-html prettyprint-override"><code>&lt;div class=&quot;video-container&quot;&gt; &lt;video id=&quot;my-video&quot; class=&quot;video-js&quot; controls preload=&quot;auto&quot; width=&quot;640&quot; height=&quot;264&quot; data-setup=&quot;{}&quot;&gt; &lt;source src=&quot;{% url 'stream_video' %}&quot; type=&quot;application/dash+xml&quot;&gt; &lt;/video&gt; &lt;/div&gt; &lt;script&gt; var player = videojs('my-video'); &lt;/script&gt; </code></pre>
<python><django><video-streaming>
2023-04-17 13:05:28
1
1,308
Sandeep Prasad Kushwaha
76,035,215
3,116,231
retrieve the object_id of the Azure service principal
<p>How can I retrieve the object_id of an Azure service principal programmatically using the Python APIs?</p> <p><a href="https://i.sstatic.net/fNcT0.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fNcT0.jpg" alt="enter image description here" /></a></p>
<python><azure><azure-active-directory><azure-keyvault>
2023-04-17 12:35:31
1
1,704
Zin Yosrim
76,035,200
1,826,893
FAISS IndexHNSW Search throws an error on Linux but works on Windows (when installing from pip)
<p>I trained an HNSW index using faiss on Windows. I wanted to deploy this to an Azure Function App, hence I just used <code>pip</code> to install <code>faiss-cpu==1.7.3</code> rather than the recommended install with <code>conda</code>.</p> <p>The code that searches for the most similar vector,</p> <pre><code>distances, indices = index.search(vector, k) </code></pre> <p>works without issue on Windows, Python 3.10.4, faiss-cpu==1.7.3.</p> <p>However, in the Azure Function App and also on Ubuntu, it throws the following error.</p> <pre><code> File &quot;/home/vboxuser/Documents/VectorDB/src/match_vector.py&quot;, line 30, in get_matching_vectors distances, indices = index.search(vector, k) File &quot;/home/vboxuser/Documents/VectorDB/.venv/lib/python3.10/site-packages/faiss/__init__.py&quot;, line 322, in replacement_search self.search_c(n, swig_ptr(x), k, swig_ptr(D), swig_ptr(I)) File &quot;/home/vboxuser/Documents/VectorDB/.venv/lib/python3.10/site-packages/faiss/swigfaiss.py&quot;, line 5436, in search return _swigfaiss.IndexHNSW_search(self, n, x, k, distances, labels) TypeError: in method 'IndexHNSW_search', argument 3 of type 'float const *' </code></pre> <p>I would love to understand what causes this and whether there is a workaround so that I can run it in the Azure Function App.</p> <p>Thanks</p>
<python><azure-functions><faiss>
2023-04-17 12:33:30
1
1,559
Edgar H
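The `argument 3 of type 'float const *'` TypeError from the SWIG layer usually means the query array is not what faiss expects: a C-contiguous `float32` array of shape `(n, d)`. On one platform the array may happen to come out as `float32`, while on another a default `float64` dtype triggers exactly this error. A sketch of a coercion helper — NumPy-only, so it is runnable without faiss, and the function name is made up for illustration:

```python
import numpy as np

def as_faiss_query(x):
    """Coerce x into the (n, d) C-contiguous float32 array faiss expects."""
    x = np.ascontiguousarray(x, dtype="float32")
    if x.ndim == 1:          # a single vector -> a batch of one
        x = x.reshape(1, -1)
    return x

q = as_faiss_query([0.1, 0.2, 0.3])
print(q.dtype, q.shape)  # float32 (1, 3)
```

The search call would then read `distances, indices = index.search(as_faiss_query(vector), k)`.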
76,035,089
12,860,924
How to get and generate the Labels of the augmented data?
<p>I am working on classification of breast cancer images using the <code>DenseNet121</code> pretrained model. I want to apply data augmentation to the training data. I tried the code below to augment the data using <code>Keras</code> and <code>ImageDataGenerator</code>. The data is augmented, but I don't know how to get the label for each augmented image. Also, I want to save each image with a sequential name like <code>aug_0_0</code>, <code>aug_0_1</code> and <code>aug_0_2</code>, but the images are generated with random numbers in their names, like <code>aug_0_76768</code> and <code>aug_0_23563</code>. I want to stop the numbers that automatically appear in the name of each augmented image.</p> <p>My question is: how do I generate the label file for each augmented image, and how do I rename the files to stop the automatic numbering?</p> <p>Code:</p> <pre><code>import glob import cv2 import os from keras.preprocessing.image import ImageDataGenerator import numpy as np import matplotlib.pyplot as plt string2 = [1000, 137, 166, 220, 226, 42, 49, 51, 55, 66, 68, 750, 800, 850, 900, 950] for f in string2: list2 = [] normal_dir = 'D:\Images\Metodologia\SAUDAGoaVEIS\{}\Segmentadas'.format( f) dir1 = os.path.join(normal_dir, &quot;*.png&quot;) datagen = ImageDataGenerator( rotation_range = 30, zoom_range = 0.02) for img in glob.glob(dir1): cv_img = cv2.imread(img) cv_resize = cv2.resize(cv_img,(200,200)) cv_norm_img = cv_resize/255.0 break cv_norm_img = np.array(cv_norm_img) input_batch = cv_norm_img.reshape((1,*cv_norm_img.shape)) i = 0 for output_batch in datagen.flow(input_batch,batch_size=1, save_to_dir='D:\Images\Metodologia\SAUDAGoaVEIS\{}\Segmentadas'.format( f),save_prefix='aug', save_format='png'): i+=1 if i &gt; 0: break </code></pre>
<python><tensorflow><keras><tf.keras><data-augmentation>
2023-04-17 12:18:30
0
685
Eda
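One way around both problems in the question above is to stop relying on `save_to_dir` entirely: iterate over the augmented batches yourself, save each image under a filename you construct, and append to your own label list at the same time. A generic sketch of that bookkeeping — the augmentation function, label values, and helper name are placeholders; `datagen.flow(...)` and `cv2.imwrite` would plug in where the comments indicate:

```python
import numpy as np

def augment_and_record(images, labels, augment_fn, n_aug=3):
    """Return (filename, label, image) triples with deterministic names."""
    records = []
    for idx, (img, label) in enumerate(zip(images, labels)):
        for j in range(n_aug):
            aug = augment_fn(img)          # e.g. next(datagen.flow(...))[0]
            name = f"aug_{idx}_{j}.png"    # aug_0_0, aug_0_1, ... no random suffix
            # cv2.imwrite(os.path.join(out_dir, name), aug)  # save it yourself
            records.append((name, label, aug))
    return records

recs = augment_and_record([np.zeros((4, 4))], ["malignant"], np.flipud, n_aug=2)
print([(n, l) for n, l, _ in recs])
```

The `(filename, label)` pairs can then be written out as the label CSV, since every augmented copy inherits the label of its source image.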
76,034,809
3,387,716
pandas.concat() second argument?
<p>I freshly installed a program through pip3 and it now fails like this:</p> <pre class="lang-none prettyprint-override"><code>pandas.concat(dataFrameList, 0) TypeError: concat() takes 1 positional argument, but 2 were given </code></pre> <p>I looked at the <code>concat()</code> function prototype in Pandas 0.9, 1.x and 2.x documentation but it appears that this second argument was never allowed; it works with Pandas 1.5 though...</p> <p><strong>What is the <code>0</code> supposed to do? Is it safe to remove it from the code?</strong></p>
<python><pandas>
2023-04-17 11:42:23
1
17,608
Fravadona
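For the question above: the second positional argument was `axis`, so `pandas.concat(dataFrameList, 0)` meant "stack vertically". Pandas 2.0 made every argument after the first keyword-only, which is why the call now raises `TypeError`. Removing the `0` is safe only because `axis=0` is also the default; the explicit, version-proof spelling is sketched below:

```python
import pandas as pd

frames = [pd.DataFrame({"a": [1, 2]}), pd.DataFrame({"a": [3, 4]})]

# Pre-2.0 code:  pd.concat(frames, 0)   -> TypeError on pandas >= 2.0
# The 0 was the axis argument; pass it by keyword instead.
combined = pd.concat(frames, axis=0, ignore_index=True)
print(combined["a"].tolist())  # [1, 2, 3, 4]
```

`ignore_index=True` is optional here; it just renumbers the concatenated rows instead of keeping each frame's original index.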
76,034,614
198,145
How do I handle LabVIEW strings over COM to python
<p>I want to call LabVIEW functions from a python script and try to understand the memory handling of strings and arrays.</p> <p>Let's say that I have a COM API created by LabVIEW with a function that look like this:</p> <pre><code>void Foo(LStrHandle *str) </code></pre> <p>I want to call this from python and use the library ctypes. I give below the code I have written so far:</p> <pre><code>from ctypes import* mydll = cdll.LoadLibrary(&quot;MyDll.dll&quot;) # A typedef for the signature of a LStrHandle LVString = POINTER(POINTER(c_int)) # Cast a LStrHandle to a python string. # LStrHandle is a int32 that tell the number of chars that following # direct after the int32 in memory. def CastLVString(lvValue): if lvValue and lvValue.contents and lvValue.contents.contents: # Get the pointer from the pointer to pointer. lstr = lvValue.contents.contents # Get the value from the pointer that is the count. cnt = lstr.value if cnt &gt; 0: # Get the bytes after cnt. Number of bytes is defined by cnt. str_bytes = cast(addressof(lstr) + sizeof(c_int32), POINTER(c_char * cnt)) # Cast the bytes to a string. return str_bytes.contents.value.decode('utf-8') return &quot;&quot; # Wrapper function for COM function Foo. def Foo(): mydll.Foo.argtypes = [ POINTER(LVString) ] str = LVString() mydll.Foo(byref(str)) # Cast the LStrHandle to a python string and return. return CastLVString(message) </code></pre> <p>This works, but I wonder if it is correct. Aren't there better ways to handle strings (and vectors)? I wonder how the memory management works. I guess I have to tell LabVIEW that it can free the memory after I read the data. If so, how do I do it? I have seen that the function <code>DSDisposePtr</code> should be used if doing it from C, can this function be used in python?</p> <p><strong>Edit</strong></p> <p>The C signature for <code>LStrHandler</code> is as following:</p> <pre><code>/** @brief Long Pascal-style string types. 
*/ typedef struct { int32 cnt; /* number of bytes that follow */ uChar str[1]; /* cnt bytes */ } LStr, *LStrPtr, **LStrHandle; </code></pre>
<python><python-3.x><memory-management><com><labview>
2023-04-17 11:22:45
0
6,283
magol
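A slightly cleaner way to read an `LStr` than the manual pointer arithmetic above is to mirror the C struct with a `ctypes.Structure` and let `ctypes.string_at` pull out exactly `cnt` bytes. The sketch below builds a fake `LStr` block in Python-owned memory to demonstrate the layout; with a real handle you would dereference it to get the block's address first. (Whether LabVIEW's `DSDisposeHandle` must then be called to free the handle, and from which DLL it is exported, is an assumption to verify against the LabVIEW memory-manager documentation — it is not shown here.)

```python
import ctypes

class LStr(ctypes.Structure):
    """Mirror of LabVIEW's LStr: an int32 count followed by cnt bytes."""
    _fields_ = [("cnt", ctypes.c_int32)]

def read_lstr(address):
    lstr = LStr.from_address(address)
    data = ctypes.string_at(address + ctypes.sizeof(ctypes.c_int32), lstr.cnt)
    return data.decode("utf-8")

# Build a fake LStr block to demonstrate the layout.
payload = b"hello"
buf = ctypes.create_string_buffer(ctypes.sizeof(ctypes.c_int32) + len(payload))
ctypes.c_int32.from_address(ctypes.addressof(buf)).value = len(payload)
ctypes.memmove(ctypes.addressof(buf) + ctypes.sizeof(ctypes.c_int32), payload, len(payload))

print(read_lstr(ctypes.addressof(buf)))  # hello
```

This matches the C declaration in the question (`int32 cnt` at offset 0, bytes at offset 4) without any hand-computed casts.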
76,034,506
18,140,022
How to call a PostgreSQL function that does not wait for completion?
<p>I have a data pipeline that, after wrangling data, creates a new table in a PostgreSQL database and inserts data into that table. I also have a PostgreSQL function that I want to call after the process. The Postgres function is pretty heavy and can take up to 10 minutes to finish. I want Python to call the Postgres function but not wait until it resolves.</p> <p>I am using psycopg2-binary, but it waits until the Postgres function is completed:</p> <pre><code> conn = psycopg2.connect(user=configs.USERNAME, password=configs.PASSWORD, host=configs.HOST, port=configs.PORT, dbname=configs.DATABASE) cur = conn.cursor() cur.execute(&quot;SELECT * FROM wrangler_func('new_data')&quot;) conn.commit() </code></pre>
<python>
2023-04-17 11:09:10
1
405
user18140022
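One lightweight option for the question above, sketched below, is to fire the blocking call from a background thread so the pipeline's main thread returns immediately (each thread should open its own psycopg2 connection rather than share one). Alternatives worth looking at, not shown, are psycopg2's asynchronous connection mode or queueing the work inside Postgres itself:

```python
import threading

def run_in_background(fn, *args, **kwargs):
    """Fire-and-forget: run a blocking call on a daemon thread."""
    t = threading.Thread(target=fn, args=args, kwargs=kwargs, daemon=True)
    t.start()
    return t  # keep a reference if you ever want to join() it

def call_wrangler(results):
    # Placeholder for: open a fresh psycopg2 connection, execute
    # "SELECT * FROM wrangler_func('new_data')", commit, close.
    results.append("done")

results = []
handle = run_in_background(call_wrangler, results)
handle.join()  # only for this demo; the real pipeline would not join
print(results)  # ['done']
```

One caveat of the daemon flag: the thread is killed if the main process exits, so a pipeline that terminates right after scheduling the call would cut the function short.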
76,034,294
9,600,253
django.db.utils.OperationalError [Microsoft][ODBC Driver 17 for SQL Server]Client unable to establish connection
<p>I am using mssql server as my database in Django, hosted on Heroku. I am having trouble connecting to that database with ODBC Driver 17 for SQL Server.</p> <pre><code>#requirements.txt asgiref Django==4.0 pytz sqlparse djangorestframework gunicorn python-dotenv django-mssql-backend whitenoise pyodbc </code></pre> <pre><code>Config Vars . . . ENGINE: sql_server.pyodbc </code></pre> <p>MSSQL version: <code>Microsoft SQL Server 2014</code></p> <pre><code>settings.py 'default': { 'ENGINE': os.getenv('ENGINE'), 'NAME': os.getenv('NAME'), 'USER': os.getenv('USER'), 'PASSWORD': os.getenv('PASSWORD'), 'HOST': os.getenv('DATABASE_HOST'), &quot;OPTIONS&quot;: { &quot;driver&quot;: &quot;ODBC Driver 17 for SQL Server&quot;, } }, </code></pre> <p>Aptfile: <code>unixodbc unixodbc-dev</code></p> <p>Buildpacks: <a href="https://i.sstatic.net/TQk2n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TQk2n.png" alt="enter image description here" /></a></p> <p>As for the compatibility check, the SQL server is in-fact compatible with Driver 17. 
<a href="https://learn.microsoft.com/en-us/sql/connect/odbc/windows/system-requirements-installation-and-driver-files?view=sql-server-ver16" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/sql/connect/odbc/windows/system-requirements-installation-and-driver-files?view=sql-server-ver16</a></p> <p>Complete error:</p> <pre><code>File &quot;/app/.heroku/python/lib/python3.11/site-packages/django/db/backends/base/base.py&quot;, line 230, in ensure_connection self.connect() File &quot;/app/.heroku/python/lib/python3.11/site-packages/django/utils/asyncio.py&quot;, line 25, in inner return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/app/.heroku/python/lib/python3.11/site-packages/django/db/backends/base/base.py&quot;, line 211, in connect self.connection = self.get_new_connection(conn_params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/app/.heroku/python/lib/python3.11/site-packages/sql_server/pyodbc/base.py&quot;, line 312, in get_new_connection conn = Database.connect(connstr, ^^^^^^^^^^^^^^^^^^^^^^^^^ pyodbc.OperationalError: ('08001', '[08001] [Microsoft][ODBC Driver 17 for SQL Server]Client unable to establish connection (0) (SQLDriverConnect)') The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;&lt;console&gt;&quot;, line 1, in &lt;module&gt; File &quot;/app/.heroku/python/lib/python3.11/site-packages/django/db/models/query.py&quot;, line 280, in __iter__ self._fetch_all() File &quot;/app/.heroku/python/lib/python3.11/site-packages/django/db/models/query.py&quot;, line 1354, in _fetch_all self._result_cache = list(self._iterable_class(self)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/app/.heroku/python/lib/python3.11/site-packages/django/db/models/query.py&quot;, line 51, in __iter__ results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;/app/.heroku/python/lib/python3.11/site-packages/django/db/models/sql/compiler.py&quot;, line 1189, in execute_sql sql, params = self.as_sql() ^^^^^^^^^^^^^ File &quot;/app/.heroku/python/lib/python3.11/site-packages/sql_server/pyodbc/compiler.py&quot;, line 177, in as_sql supports_offset_clause = self.connection.sql_server_version &gt;= 2012 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/app/.heroku/python/lib/python3.11/site-packages/django/utils/functional.py&quot;, line 48, in __get__ res = instance.__dict__[self.name] = self.func(instance) ^^^^^^^^^^^^^^^^^^^ File &quot;/app/.heroku/python/lib/python3.11/site-packages/sql_server/pyodbc/base.py&quot;, line 400, in sql_server_version with self.temporary_connection() as cursor: File &quot;/app/.heroku/python/lib/python3.11/contextlib.py&quot;, line 137, in __enter__ return next(self.gen) ^^^^^^^^^^^^^^ File &quot;/app/.heroku/python/lib/python3.11/site-packages/django/db/backends/base/base.py&quot;, line 614, in temporary_connection with self.cursor() as cursor: ^^^^^^^^^^^^^ File &quot;/app/.heroku/python/lib/python3.11/site-packages/django/utils/asyncio.py&quot;, line 25, in inner return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/app/.heroku/python/lib/python3.11/site-packages/django/db/backends/base/base.py&quot;, line 270, in cursor return self._cursor() ^^^^^^^^^^^^^^ File &quot;/app/.heroku/python/lib/python3.11/site-packages/sql_server/pyodbc/base.py&quot;, line 218, in _cursor conn = super()._cursor() ^^^^^^^^^^^^^^^^^ File &quot;/app/.heroku/python/lib/python3.11/site-packages/django/db/backends/base/base.py&quot;, line 246, in _cursor self.ensure_connection() File &quot;/app/.heroku/python/lib/python3.11/site-packages/django/utils/asyncio.py&quot;, line 25, in inner return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/app/.heroku/python/lib/python3.11/site-packages/django/db/backends/base/base.py&quot;, line 229, in ensure_connection with self.wrap_database_errors: File 
&quot;/app/.heroku/python/lib/python3.11/site-packages/django/db/utils.py&quot;, line 90, in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value File &quot;/app/.heroku/python/lib/python3.11/site-packages/django/db/backends/base/base.py&quot;, line 230, in ensure_connection self.connect() File &quot;/app/.heroku/python/lib/python3.11/site-packages/django/utils/asyncio.py&quot;, line 25, in inner return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/app/.heroku/python/lib/python3.11/site-packages/django/db/backends/base/base.py&quot;, line 211, in connect self.connection = self.get_new_connection(conn_params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/app/.heroku/python/lib/python3.11/site-packages/sql_server/pyodbc/base.py&quot;, line 312, in get_new_connection conn = Database.connect(connstr, ^^^^^^^^^^^^^^^^^^^^^^^^^ django.db.utils.OperationalError: ('08001', '[08001] [Microsoft][ODBC Driver 17 for SQL Server]Client unable to establish connection (0) (SQLDriverConnect)') </code></pre> <p>Complete log of code being pushed to Heroku</p> <pre><code>Enumerating objects: 5, done. Counting objects: 100% (5/5), done. Delta compression using up to 8 threads Compressing objects: 100% (3/3), done. Writing objects: 100% (3/3), 369 bytes | 184.00 KiB/s, done. Total 3 (delta 1), reused 0 (delta 0), pack-reused 0 remote: Updated 85 paths from aec0419 remote: Compressing source files... done. remote: Building source: remote: remote: -----&gt; Building on the Heroku-22 stack remote: -----&gt; Using buildpacks: remote: 1. https://github.com/heroku/heroku-buildpack-apt.git remote: 2. https://github.com/heroku/heroku-buildpack-python.git remote: 3. 
https://github.com/matt-bertoncello/python-pyodbc-buildpack.git remote: -----&gt; Apt app detected remote: -----&gt; Reusing cache remote: -----&gt; Updating apt caches remote: Hit:1 http://archive.ubuntu.com/ubuntu jammy InRelease remote: Hit:2 http://apt.postgresql.org/pub/repos/apt jammy-pgdg InRelease remote: Hit:3 http://archive.ubuntu.com/ubuntu jammy-security InRelease remote: Hit:4 http://archive.ubuntu.com/ubuntu jammy-updates InRelease remote: Reading package lists... remote: W: http://apt.postgresql.org/pub/repos/apt/dists/jammy-pgdg/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details. remote: -----&gt; Fetching .debs for unixodbc remote: Reading package lists... remote: Building dependency tree... remote: The following additional packages will be installed: remote: libodbc2 libodbcinst2 unixodbc-common remote: Suggested packages: remote: odbc-postgresql tdsodbc remote: The following NEW packages will be installed: remote: libodbc2 libodbcinst2 unixodbc unixodbc-common remote: 0 upgraded, 4 newly installed, 0 to remove and 14 not upgraded. remote: Need to get 0 B/227 kB of archives. remote: After this operation, 719 kB of additional disk space will be used. remote: Download complete and in download only mode remote: -----&gt; Fetching .debs for unixodbc-dev remote: Reading package lists... remote: Building dependency tree... remote: The following additional packages will be installed: remote: libodbc2 libodbccr2 libodbcinst2 unixodbc-common remote: Suggested packages: remote: odbc-postgresql tdsodbc remote: The following NEW packages will be installed: remote: libodbc2 libodbccr2 libodbcinst2 unixodbc-common unixodbc-dev remote: 0 upgraded, 5 newly installed, 0 to remove and 14 not upgraded. remote: Need to get 0 B/465 kB of archives. remote: After this operation, 2,504 kB of additional disk space will be used. 
remote: Download complete and in download only mode remote: -----&gt; Installing libodbc2_2.3.9-5_amd64.deb remote: -----&gt; Installing libodbccr2_2.3.9-5_amd64.deb remote: -----&gt; Installing libodbcinst2_2.3.9-5_amd64.deb remote: -----&gt; Installing unixodbc_2.3.9-5_amd64.deb remote: -----&gt; Installing unixodbc-common_2.3.9-5_all.deb remote: -----&gt; Installing unixodbc-dev_2.3.9-5_amd64.deb remote: -----&gt; Writing profile script remote: -----&gt; Rewrite package-config files remote: -----&gt; Python app detected remote: -----&gt; No Python version was specified. Using the same version as the last build: python-3.11.3 remote: To use a different version, see: https://devcenter.heroku.com/articles/python-runtimes remote: -----&gt; Requirements file has been changed, clearing cached dependencies remote: -----&gt; Installing python-3.11.3 remote: -----&gt; Installing pip 23.0.1, setuptools 67.6.1 and wheel 0.40.0 remote: -----&gt; Installing SQLite3 remote: -----&gt; Installing requirements with pip remote: Collecting asgiref remote: Downloading asgiref-3.6.0-py3-none-any.whl (23 kB) remote: Collecting Django==4.0 remote: Downloading Django-4.0-py3-none-any.whl (8.0 MB) remote: Collecting pytz remote: Downloading pytz-2023.3-py2.py3-none-any.whl (502 kB) remote: Collecting sqlparse remote: Downloading sqlparse-0.4.3-py3-none-any.whl (42 kB) remote: Collecting djangorestframework remote: Downloading djangorestframework-3.14.0-py3-none-any.whl (1.1 MB) remote: Collecting gunicorn remote: Downloading gunicorn-20.1.0-py3-none-any.whl (79 kB) remote: Collecting python-dotenv remote: Downloading python_dotenv-1.0.0-py3-none-any.whl (19 kB) remote: Collecting django-mssql-backend remote: Downloading django_mssql_backend-2.8.1-py3-none-any.whl (52 kB) remote: Collecting whitenoise remote: Downloading whitenoise-6.4.0-py3-none-any.whl (19 kB) remote: Collecting pyodbc remote: Downloading pyodbc-4.0.39-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (352 kB) 
remote: Installing collected packages: pytz, whitenoise, sqlparse, python-dotenv, pyodbc, gunicorn, asgiref, django-mssql-backend, Django, djangorestframework remote: Successfully installed Django-4.0 asgiref-3.6.0 django-mssql-backend-2.8.1 djangorestframework-3.14.0 gunicorn-20.1.0 pyodbc-4.0.39 python-dotenv-1.0.0 pytz-2023.3 sqlparse-0.4.3 whitenoise-6.4.0 remote: -----&gt; Skipping Django collectstatic since the env var DISABLE_COLLECTSTATIC is set. remote: -----&gt; odbc app detected remote: -----&gt; Starting adding ODBC Driver 17 for SQL Server remote: -----&gt; copied libmsodbcsql-17.5.so.2.1 remote: -----&gt; copied msodbcsqlr17.rll remote: -----&gt; copied odbcinst.ini remote: -----&gt; copied profile.d remote: -----&gt; Finished adding ODBC Driver 17 for SQL Server remote: -----&gt; Discovering process types remote: Procfile declares types -&gt; web remote: remote: -----&gt; Compressing... remote: Done: 35.2M remote: -----&gt; Launching... remote: Released v54 remote: https://blood-link-heroku.herokuapp.com/ deployed to Heroku remote: remote: Verifying deploy... done. To https://git.heroku.com/blood-link-heroku.git de79a1c..838b433 main -&gt; main </code></pre> <p>No clue why this error keeps popping up. Tired of it.</p>
<python><sql-server><django><heroku><pyodbc>
2023-04-17 10:43:26
0
316
Musab Gulfam
76,034,280
614,944
Convert windows time to timestamp
<p>I'm using the following code to convert windows time to unix timestamp,</p> <pre><code>def convert_windows_time(windows_time): return datetime.datetime(1601, 1, 1) + datetime.timedelta(microseconds=windows_time / 10) </code></pre> <p>How can I do the same in nodejs? This does not work, it results in a different date:</p> <pre><code>let a = parseInt(129436810067618693) let b = (new Date('1601-01-01').getMilliseconds()) + a / 10 console.log(new Date(b / 10000)) </code></pre> <p>Any ideas what's wrong?</p>
<python><node.js><time>
2023-04-17 10:42:07
1
23,701
daisy
76,034,275
18,756,733
query() and isin() combination is not working in Kaggle notebook
<p>I want to filter a dataframe using the .query() and .isin() functions in a Kaggle notebook.</p> <pre><code>standard_stats=standard_stats.query('`Unnamed: 0_level_0_Player`.isin([&quot;Squad Total&quot;,&quot;Opponent Total&quot;])==False') </code></pre> <p><code>Unnamed: 0_level_0_Player</code> is the name of the column and [&quot;Squad Total&quot;,&quot;Opponent Total&quot;] is the list of values that should not be in the filtered dataframe.</p> <p>After running this code, I get the following error: TypeError: unhashable type: 'numpy.ndarray'.</p> <p>I did not get the error when I ran the code in Jupyter Notebook. How can I resolve the problem?</p>
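As a hedged workaround sketch (toy data, not the asker's file): the same filter expressed with plain boolean indexing avoids `.query()`'s expression engine entirely, which is the part whose behavior differs between environments:

```python
import pandas as pd

# Toy stand-in for the real dataframe
standard_stats = pd.DataFrame(
    {"Unnamed: 0_level_0_Player": ["Player A", "Squad Total", "Player B", "Opponent Total"]}
)

# Same filter as the .query() call, but with boolean indexing
mask = standard_stats["Unnamed: 0_level_0_Player"].isin(["Squad Total", "Opponent Total"])
filtered = standard_stats[~mask]
```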
<python><isin>
2023-04-17 10:41:05
1
426
beridzeg45
76,033,901
5,560,529
How to get a PySpark DataFrame containing all Sundays between the current date and a given number of weeks
<p>Given a constant integer <code>MAX_WEEKS</code>, I want to get all Sundays between today's date and today's date + <code>MAX_WEEKS</code> weeks.</p> <p>For example, take today's date (<code>2023-04-17</code>) at the time of writing and <code>MAX_WEEKS</code> = 5. My desired output would be a PySpark Dataframe looking like this (where all entries can be of the Date Type):</p> <pre><code>Date 2023-04-23 2023-04-30 2023-05-07 2023-05-14 2023-05-21 </code></pre> <p>I tried the following:</p> <pre><code>from pyspark.sql import functions as f current_date = f.current_date() number_of_weeks = 0 dates = [] while number_of_weeks &lt;= MAX_WEEKS: next_date = f.next_day(current_date, &quot;Sunday&quot;) dates.append(next_date) current_date = next_date number_of_weeks += 1 </code></pre> <p>But that gives me an output like this:</p> <pre><code>[Column&lt;'next_day(current_date(), Sunday)'&gt;, Column&lt;'next_day(next_day(current_date(), Sunday), Sunday)'&gt;, Column&lt;'next_day(next_day(next_day(current_date(), Sunday), Sunday), Sunday)'&gt;, Column&lt;'n.... </code></pre> <p>I know the problem lies in the fact that the output of <code>f.current_date()</code> produces a DateType column, and that <code>f.next_day()</code> requires a column as input, but I can't figure out how to get this to work (or find examples that teach me how). Moreover, I think this solution looks way too ugly to be the most elegant and efficient one.</p> <p>Thanks in advance!</p>
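One way to sidestep the Column-vs-date confusion is to compute the dates in plain Python first and only then build the DataFrame. A sketch (the commented Spark call at the end assumes an active `SparkSession` named `spark`):

```python
import datetime

MAX_WEEKS = 5

def upcoming_sundays(start: datetime.date, weeks: int) -> list[datetime.date]:
    # weekday(): Monday=0 ... Sunday=6; like f.next_day, take the *next*
    # Sunday, i.e. strictly after `start`
    days_ahead = (6 - start.weekday()) % 7 or 7
    first_sunday = start + datetime.timedelta(days=days_ahead)
    return [first_sunday + datetime.timedelta(weeks=w) for w in range(weeks)]

dates = upcoming_sundays(datetime.date(2023, 4, 17), MAX_WEEKS)
# df = spark.createDataFrame([(d,) for d in dates], ["Date"])  # DateType column
```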
<python><apache-spark><pyspark>
2023-04-17 09:59:52
0
784
Peter
76,033,717
8,016,168
How can I calculate the multinomial probability values?
<p>I am trying to write a function that finds the multinomial expansion. In order to do that I modified the binomial probability function;</p> <p>From:</p> <p>f(n, k, p) = n! / (k! * (n - k)!) * p<sup>k</sup> * (1 - p)<sup>(n - k)</sup></p> <p>To:</p> <p>f(n, k, p) = n! / (k[0]! * k[1]! * ... * k[x]!) * p[0]<sup>k[0]</sup> * p[1]<sup>k[1]</sup> * ... p[x]<sup>k[x]</sup></p> <p>The codes are:</p> <pre class="lang-py prettyprint-override"><code>from math import prod def factorial(n: int) -&gt; int: r = 1 for i in range(1, n + 1): r *= i return r def combination(n: int, k: list) -&gt; float: return factorial(n) / prod(map(factorial, k)) def multinomial_probability(n: int, k: list, p: list) -&gt; float: return combination(n, k) * prod(map(pow, p, k)) </code></pre> <p>To get <code>the coefficients of binomial expansion</code>, I am calling the <code>multinomial_probability</code> function with the following arguments:</p> <pre class="lang-py prettyprint-override"><code>for n in range(5): print([multinomial_probability(n=n, k=[k, n - k], p=[1/2, 1/2]) for k in range(n + 1)]) </code></pre> <p>The result I get is below and it's the correct result:</p> <pre class="lang-py prettyprint-override"><code>[1.0] [0.5, 0.5] [0.25, 0.5, 0.25] [0.125, 0.375, 0.375, 0.125] [0.0625, 0.25, 0.375, 0.25, 0.0625] </code></pre> <p>Now I am thinking how to find <code>the coefficients of the trinomial expansion</code>.</p> <p>In order to do that, I called the <code>multinomial_probability</code> function with the following arguments, despite I knew that it was not going to give me the correct result.</p> <pre class="lang-py prettyprint-override"><code>for n in range(5): print([multinomial_probability(n=n, k=[m, k - m, n - k], p=[1/3, 1/3, 1/3]) for k in range(n + 1) for m in range(k + 1)]) </code></pre> <p>The above code printed the following result:</p> <pre class="lang-py prettyprint-override"><code>[1.0] [0.3333333333333333, 0.3333333333333333, 0.3333333333333333] [0.1111111111111111, 
0.2222222222222222, 0.2222222222222222, 0.1111111111111111, 0.2222222222222222, 0.1111111111111111] [0.03703703703703703, 0.1111111111111111, 0.1111111111111111, 0.1111111111111111, 0.2222222222222222, 0.1111111111111111, 0.03703703703703703, 0.1111111111111111, 0.1111111111111111, 0.03703703703703703] [0.012345679012345677, 0.0493827160493827, 0.0493827160493827, 0.07407407407407407, 0.14814814814814814, 0.07407407407407407, 0.0493827160493827, 0.14814814814814814, 0.14814814814814814, 0.0493827160493827, 0.012345679012345677, 0.0493827160493827, 0.07407407407407407, 0.0493827160493827, 0.012345679012345677] </code></pre> <p>The number of elements of the lists I would like to get are 1, 3, 5, 9, 11 in order. The code I wrote produces lists with element numbers of 1, 3, 6, 10, 15, and the probabilities are calculated according to the number of elements of these lists.</p> <p>So the code I wrote produces lists with element numbers of 1, 3, 6, 10, 15, and the probabilities are calculated according to the number of elements of these lists. We can't give decimal numbers to k or m either. 
Multiplying two <code>int</code>s does not yield 1, 3, 5, 7, 9 in order.</p> <p>Now, I am changing the code as below:</p> <pre class="lang-py prettyprint-override"><code>desired_outputs = [1, 3, 5, 7, 9] for n in range(5): print([multinomial_probability(n=n, k=[k, n - 2 * k, k], p=[1/3, 1/3, 1/3]) for k in range(desired_outputs[n])]) </code></pre> <p>The output I get is like this:</p> <pre><code>[1.0] [0.3333333333333333, 0.3333333333333333, 0.08333333333333334] [0.1111111111111111, 0.2222222222222222, 0.055555555555555566, 0.006172839506172837, 0.00038580246913580245] [0.03703703703703703, 0.2222222222222222, 0.05555555555555555, 0.0061728395061728366, 0.00038580246913580234, 1.5432098765432092e-05, 4.286694101508915e-07] [0.012345679012345677, 0.14814814814814814, 0.07407407407407407, 0.008230452674897117, 0.0005144032921810698, 2.0576131687242793e-05, 5.71559213534522e-07, 1.166447374560249e-08, 1.8225740227503888e-10] </code></pre> <p>This is not correct either.</p> <p>I change <code>k</code> like this:</p> <pre class="lang-py prettyprint-override"><code>desired_outputs = [1, 3, 5, 7, 9] for n in range(5): print([multinomial_probability(n=n, k=[k, n - k - 1, 1], p=[1/3, 1/3, 1/3]) for k in range(desired_outputs[n])]) </code></pre> <p>And this produces the following output:</p> <pre class="lang-py prettyprint-override"><code>[1.0] [0.3333333333333333, 0.3333333333333333, 0.16666666666666669] [0.2222222222222222, 0.2222222222222222, 0.1111111111111111, 0.037037037037037035, 0.009259259259259257] [0.1111111111111111, 0.2222222222222222, 0.1111111111111111, 0.03703703703703703, 0.009259259259259259, 0.001851851851851851, 0.0003086419753086419] [0.0493827160493827, 0.14814814814814814, 0.14814814814814814, 0.0493827160493827, 0.012345679012345677, 0.0024691358024691353, 0.0004115226337448558, 5.878894767783656e-05, 7.348618459729569e-06] </code></pre> <p>What kind of change I should make in order to get the coefficients of the following output (trinomial 
expansion)?</p> <p><a href="https://i.sstatic.net/SVqT2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SVqT2.png" alt="enter image description here" /></a></p> <p>Thanks in advance...</p>
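A hedged sketch of one way to get the 1, 3, 5, 7, 9 list lengths: enumerate every composition (k0, k1, k2) of n and group the multinomial probabilities by the exponent m = k1 + 2*k2, which is what the trinomial triangle in the image indexes by:

```python
from collections import defaultdict
from math import factorial

def trinomial_probabilities(n: int, p=(1/3, 1/3, 1/3)) -> list[float]:
    # Group every (k0, k1, k2) with k0 + k1 + k2 == n by m = k1 + 2*k2,
    # the exponent of x in (p0 + p1*x + p2*x**2)**n
    out = defaultdict(float)
    for k2 in range(n + 1):
        for k1 in range(n + 1 - k2):
            k0 = n - k1 - k2
            coef = factorial(n) // (factorial(k0) * factorial(k1) * factorial(k2))
            out[k1 + 2 * k2] += coef * p[0] ** k0 * p[1] ** k1 * p[2] ** k2
    return [out[m] for m in range(2 * n + 1)]

for n in range(5):
    print(trinomial_probabilities(n))
```

For n = 2 this yields [1/9, 2/9, 3/9, 2/9, 1/9], i.e. the trinomial coefficients 1, 2, 3, 2, 1 scaled by (1/3)².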
<python><algorithm>
2023-04-17 09:39:47
4
1,342
dildeolupbiten
76,033,376
13,750,668
Prefect deploy in Python with a "no-start" option
<p>I want to deploy a Prefect flow in Python. So far I've been using a script like this:</p> <pre class="lang-py prettyprint-override"><code>deployment = Deployment.build_from_flow( flow=flow_name, name=&quot;deployment_name&quot;, version=1, work_queue_name=&quot;queue_name&quot;, schedule=CronSchedule(cron=&quot;0 23 * * *&quot;) ) deployment.apply() </code></pre> <p>It works fine, but every time I restart the code it will immediately run the flow. I want it to only run when scheduled. I know if you run Prefect from the command line there is a &quot;no-start&quot; option, but so far I haven't been able to find how to implement it in Python.</p>
<python><prefect>
2023-04-17 08:58:13
1
488
Reine Baudache
76,033,365
4,908,900
imaplib finds the full subject string but not a partial one - how to fix?
<p>I'm using imaplib to connect to my Gmail account:</p> <pre><code>import imaplib mail = imaplib.IMAP4_SSL('imap.gmail.com') mail.login('bi@great_company', 'app_password') mail.select('&quot;[Gmail]/All Mail&quot;', readonly=True) </code></pre> <p>this finds an email:</p> <pre><code>mail.search(None, 'SUBJECT &quot;Green Law_Settlement_12242022&quot;') </code></pre> <p>the email subject is: &quot;Green Law_Settlement_12242022.xlsx&quot;</p> <p>but this does not return any match:</p> <pre><code>mail.search(None, 'SUBJECT &quot;Green Law_Settlement_122&quot;') </code></pre> <p>I need all the emails where the subject starts with &quot;Green Law_Settlement_&quot;. Is there any way of doing that?</p>
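Gmail's IMAP search appears to match whole words rather than arbitrary substrings (server-dependent behavior; the IMAP specification itself only describes substring matching on SUBJECT), so one workaround is to search for a complete token and narrow down client-side. A sketch with the live-connection calls left as comments (names from the question):

```python
PREFIX = "Green Law_Settlement_"

def header_has_prefix(raw_subject_header: bytes, prefix: str) -> bool:
    # The fetched header data looks like b"Subject: Green Law_Settlement_...\r\n"
    return prefix in raw_subject_header.decode("utf-8", errors="replace")

# Usage sketch against the connection from the question:
# typ, data = mail.search(None, 'SUBJECT "Green"')  # whole-word token
# for num in data[0].split():
#     typ, hdr = mail.fetch(num, "(BODY.PEEK[HEADER.FIELDS (SUBJECT)])")
#     if header_has_prefix(hdr[0][1], PREFIX):
#         print("match:", num)
```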
<python><imap><imaplib>
2023-04-17 08:57:18
1
3,749
Ezer K
76,033,173
10,844,937
Do I need to lock the table?
<p>I use MySQL's default engine <code>innodb</code> and MySQL's default transaction <code>REPEATABLE READS</code>.</p> <p>My table <code>Job</code> has two fields, <code>job_id</code> and <code>queue</code>. Here I have two tasks, the first task is to insert a record based on the max queue.</p> <pre><code>import uuid import MySQLdb db = MySQLdb.connect(&quot;conn_info&quot;) cursor = db.cursor() cursor.execute(&quot;SELECT max(queue) from job&quot;) max_queue = cursor.fetchone()[0] job_id, queue = str(uuid.uuid1()), max_queue + 1 cursor.execute(&quot;INSERT INTO job VALUES(%s,%s)&quot;, (job_id, queue)) db.commit() db.close() </code></pre> <p>The second task is to reduce every row's <code>queue</code> value by one.</p> <pre><code>import MySQLdb db = MySQLdb.connect(&quot;conn_info&quot;) cursor = db.cursor() cursor.execute(&quot;UPDATE job SET queue = queue - 1&quot;) db.commit() db.close() </code></pre> <p>Here if two tasks happened at the same time. I want to make sure the second task executed after the first task. That's to say, first inserts a value then reduce the value of each row. In this case, do I have to add a <code>lock</code>? If the answer is yes, what kind of <code>lock</code> should I add?</p>
<python><mysql>
2023-04-17 08:34:54
1
783
haojie
76,033,166
3,059,024
Is it possible to synchronize threads between Java and Python?
<p>I have a server written in Java. It accepts http requests as commands to do work. The work takes time.</p> <p>We use Java in the backend and Python in the front end sending the http requests and controlling the work done. In Python, there exists a <code>threading.Event</code> which allows communication between threads. What I would like to do might be difficult because of our language choice, but is it possible to have the same type of communication between Java and Python?</p> <p>That is, Java creates the event and provides a pointer to it which can be accessed and <code>set</code> by Python? I've used this pattern between C++ and Python, but I guess that's easier because Python is written in C.</p> <p>Any help or pointers to resources will be welcome, thanks.</p>
<python><java><multithreading>
2023-04-17 08:34:08
2
7,759
CiaranWelsh
76,032,976
13,518,907
Recover a CSV file in Jupyter that was overwritten
<p>I work with JupyterLab and have accidentally overwritten an existing CSV file in my environment.</p> <p>In my environment, there was a &quot;data.csv&quot; file at first which I had uploaded to my environment. Then I accidentally executed the following line in my Jupyter notebook:</p> <pre><code>df.to_csv(&quot;data.csv&quot;) </code></pre> <p>So the &quot;data.csv&quot; file was overwritten with the content of the &quot;df&quot; variable (the wrong dataframe). Is there a way to get back the old version of the &quot;data.csv&quot; file or is this dataset now lost?</p> <p>Thanks in advance!</p>
<python><csv><jupyter><jupyter-lab>
2023-04-17 08:10:25
1
565
Maxl Gemeinderat
76,032,641
3,685,918
Bloomberg Python API (blpapi) installation error
<p>I would like to use the Bloomberg desktop API for Python. To do this I have to install the package <code>blpapi</code>.</p> <p>My Bloomberg terminal PC environment:</p> <ol> <li>WIN10 (64-bit)</li> <li>Anaconda 4.13.0 and Python 3.9.12</li> <li>No internet environment</li> </ol> <p>Due to having no internet access, I could not use the <code>pip install</code> option.</p> <p>Instead I received the installation files and tried to install with <code>python setup.py install</code>.</p> <p>However I encounter the error below. How can I handle it?</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\bok\Downloads\blpapi-3.18.3\setup.py&quot;, line 49, in &lt;module&gt; assert blpapiRoot or (blpapiIncludesVar and blpapiLibVar), ( AssertionError: BLPAPI_ROOT (or BLPAPI_INCDIR/BLPAPI_LIBDIR) environment variable isn't defined </code></pre>
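The assertion in setup.py says it cannot locate the Bloomberg C++ SDK. A hedged sketch (the path below is hypothetical; it should point at the extracted SDK folder containing the headers and libraries):

```python
import os

# Hypothetical location of the extracted C++ SDK; adjust to the real path
os.environ["BLPAPI_ROOT"] = r"C:\blp\blpapi_cpp"

# With the variable set in the same session/shell, the install can proceed:
#   python setup.py install
```

Alternatively, BLPAPI_INCDIR and BLPAPI_LIBDIR can be set individually, as the assertion message itself suggests.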
<python><bloomberg><blpapi>
2023-04-17 07:29:42
0
427
user3685918
76,032,579
13,803,549
Discord.py create_text_channel overwrites dynamic options
<p>I have a slash command for people to set up a bot in their server. During setup, the bot creates a category and adds channels, but I want to add overwrites to make these channels read-only.</p> <p>The slash command takes in one parameter, 'role', so the user that is setting up the bot can choose which server role will have access to use the bot.</p> <p>My question is... Is there a way to make all the roles of each individual server come up as a choice so the person setting up can just choose instead of typing it in?</p> <p>I know app_commands.choices can't really dynamically take in outside data, but maybe because the data is coming from the Discord server it can somehow?</p> <pre class="lang-py prettyprint-override"><code> @bot.tree.command(name=&quot;setupBot&quot;, description=&quot;Enter role needed for access&quot;) async def setupBot(ctx, role: str,): await ctx.send(&quot;Setting up category and channels!&quot;) </code></pre>
<python><discord.py>
2023-04-17 07:21:21
1
526
Ryan Thomas
76,032,154
4,398,699
Absolute import works with pytest but fails when running with python
<p>I setup my project similar to other projects (like <a href="https://github.com/numpy/numpy" rel="nofollow noreferrer">Numpy</a> and <a href="https://github.com/pandas-dev/pandas" rel="nofollow noreferrer">Pandas</a>, i.e. using the project name as a directory):</p> <p><strong>Directory structure:</strong></p> <pre><code>/project_name project_name/ utils/ __init__.py utils.py tests/ __init__.py test.py __init__.py </code></pre> <p><strong>Contents:</strong></p> <pre><code># /project_name/project_name/__init__.py (empty) # /project_name/project_name/utils/__init__.py from .utils import foo # /project_name/project_name/utils/utils.py def foo(x): return x # /project_name/project_name/tests/test.py from project_name.utils import foo def test_foo(): assert foo(&quot;1&quot;) == &quot;1&quot; </code></pre> <p>My working directory is <code>/project_name</code>, when I run <code>python project_name/tests/test.py</code> I get an ImportError, but interestingly when I use pytest the import works just fine:</p> <pre class="lang-bash prettyprint-override"><code>$ cd /project_name $ python project_name/tests/test.py &gt; ModuleNotFoundError: No module named 'project_name' $ pytest &gt; =================== 1 passed in 1.76s =================== </code></pre> <p>I'm trying to follow the <a href="https://peps.python.org/pep-0008/" rel="nofollow noreferrer">PEP8</a> standard of using absolute imports for scripts and I do not want to have to modify sys.path or PYTHONPATH. I'm a bit confused why running the script causes the import to fail while when using pytest, the import works just fine. My python version is 3.10, how can I structure the project such that the running <code>python project_name/tests/test.py</code> works as expected?</p> <p>Additional follow up, is there a <a href="https://peps.python.org/pep-0008/" rel="nofollow noreferrer">PEP8</a> practice for where to place scripts? 
I recall reading that scripts inside a package are an anti-pattern, but couldn't find any reference to this. If there is a recommended practice for this, how would the imports be written?</p>
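The difference comes down to what lands on `sys.path`: the script form puts `project_name/tests/` there, while `python -m` (and pytest's rootdir discovery) keeps the working directory, which contains the top-level package. A self-contained reproduction sketch that builds the question's layout in a temp directory and runs both invocations:

```python
import pathlib
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as root:
    # Recreate the question's layout
    pkg = pathlib.Path(root, "project_name")
    (pkg / "tests").mkdir(parents=True)
    (pkg / "utils").mkdir()
    (pkg / "__init__.py").write_text("")
    (pkg / "utils" / "__init__.py").write_text("from .utils import foo\n")
    (pkg / "utils" / "utils.py").write_text("def foo(x):\n    return x\n")
    (pkg / "tests" / "__init__.py").write_text("")
    (pkg / "tests" / "test.py").write_text(
        "from project_name.utils import foo\nassert foo('1') == '1'\nprint('ok')\n"
    )

    # Script form: sys.path[0] becomes .../project_name/tests, so the
    # top-level package is invisible and the import fails.
    script = subprocess.run(
        [sys.executable, str(pkg / "tests" / "test.py")],
        cwd=root, capture_output=True, text=True,
    )

    # Module form: the working directory (which contains project_name/) is
    # on sys.path, so the absolute import resolves.
    module = subprocess.run(
        [sys.executable, "-m", "project_name.tests.test"],
        cwd=root, capture_output=True, text=True,
    )

print(script.returncode, module.returncode)
```

So `python -m project_name.tests.test` from the project root behaves like pytest here, without touching `sys.path` or `PYTHONPATH`.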
<python><python-3.x>
2023-04-17 06:14:53
1
2,751
q.Then
76,032,125
11,163,122
np.clip vs np.maximum to limit lower value
<p>Let's say I am trying to codify <code>max(0, x)</code> (the formula for <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)" rel="nofollow noreferrer">ReLU activation</a>) with numpy, where <code>x</code> is a numpy array.</p> <p>I can think of two obvious implementations:</p> <ul> <li><code>np.clip(x, a_min=0, a_max=None)</code></li> <li><code>numpy.maximum(0, x)</code></li> </ul> <p>Which is the better choice, and why?</p>
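For the record, the two expressions produce identical results for this use; the usual tie-breaker is speed, which varies by NumPy version and hardware, so it's worth timing locally rather than taking either as gospel. A quick sketch:

```python
import timeit

import numpy as np

x = np.random.randn(1_000_000)

relu_clip = np.clip(x, a_min=0, a_max=None)
relu_max = np.maximum(0, x)

# Identical results, including exactly at 0
assert np.array_equal(relu_clip, relu_max)

# Micro-benchmark sketch; numbers depend on NumPy version and hardware
t_clip = timeit.timeit(lambda: np.clip(x, a_min=0, a_max=None), number=50)
t_max = timeit.timeit(lambda: np.maximum(0, x), number=50)
print(f"clip: {t_clip:.4f}s  maximum: {t_max:.4f}s")
```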
<python><numpy><machine-learning>
2023-04-17 06:09:07
1
2,961
Intrastellar Explorer
76,032,124
1,610,626
Highlight rows based on thresholds in Streamlit
<p>I can't seem to find any specific answers to my question as I'm not sure if it can be done with <code>apply</code> or <code>applymap</code>.</p> <p>I have a parameter dataframe, lets call it <code>x</code>. I have a raw data frame, lets call it <code>y</code>.</p> <pre><code> y = pd.DataFrame({&quot;a&quot;: {&quot;h&quot;:0.5,&quot;x&quot;: 2, &quot;y&quot;: 1}, &quot;b&quot;: {&quot;h&quot;:15,&quot;x&quot;: 20, &quot;y&quot;: 6}}) x = pd.DataFrame({&quot;thres1&quot;: {&quot;x&quot;: 2, &quot;h&quot;: 1,&quot;y&quot;:3}, &quot;thres2&quot;: {&quot;x&quot;: 10, &quot;h&quot;: 12,&quot;y&quot;:3}}) </code></pre> <p><code>x</code> provides the thresholds where if certain column of raw data exceeds, will require the row to be highlighted. Note the rows in <code>y</code> needs to be compared to the correct rows in <code>x</code> given the row indexes. There can be situations where <code>x</code> will have more rows than <code>y</code>, (but not the other way around, so we need to make sure we <code>.loc</code> to match the correct row.</p> <p>so for example, I want to compare column &quot;b&quot; in raw dataframe <code>y</code> to column 'thres2&quot; in raw dataframe <code>x</code>. for the first row of <code>y</code> in column &quot;b&quot;, its 15. I need to compare 15 to the second row second column of <code>x</code> which is 12. Because its bigger, I need to highlight that.</p> <p><code>apply</code> apply the entire dataframe while <code>applymap</code> does cell by cell. Issue is I need to do .loc before. How would styling the dataframe work for this in streamlit? (preferably without json please)</p> <p>Update: I'm trying to get the row index &quot;B&quot; to highlight but it doesn't. 
Here is an example app that tries to do this:</p> <pre><code>import streamlit as st import pandas as pd import numpy as np def MainDashboard(): st.sidebar.title(&quot;Test&quot;) df = pd.DataFrame([[1,2], [3,4]], index=[&quot;A&quot;, &quot;B&quot;]) def color_b(s): return np.where(s == &quot;B&quot;, &quot;background-color: yellow;&quot;, &quot;&quot;) df.style.apply_index(color_b) st.dataframe(df) if __name__ == '__main__': MainDashboard() </code></pre> <p><a href="https://i.sstatic.net/JeL2G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JeL2G.png" alt="enter image description here" /></a></p>
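One likely issue in the example app above: `Styler` methods return a new object, and the return value of `df.style.apply_index(color_b)` is discarded before `st.dataframe(df)` renders the unstyled frame. A hedged sketch of the styling part in isolation (Streamlit accepts a `Styler` in place of the raw dataframe; rendering the Styler requires jinja2):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[1, 2], [3, 4]], index=["A", "B"])

def color_b(s):
    # One CSS string per index label; the empty string means "no style"
    return np.where(s == "B", "background-color: yellow;", "")

styles = color_b(df.index)

# Keep the Styler and pass *it* to Streamlit:
# styled = df.style.apply_index(color_b)
# st.dataframe(styled)
```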
<python><pandas><styling><streamlit>
2023-04-17 06:09:05
1
23,747
user1234440
76,031,976
17,801,773
How to get the 3 greatest values in a nested defaultdict in Python?
<p>I have a nested defaultdict like this:</p> <pre><code>defaultdict(&lt;function __main__.&lt;lambda&gt;()&gt;, {'A': defaultdict(&lt;function __main__.&lt;lambda&gt;()&gt;, {'a':2, 'b':1, 'd':2, 'f':1} 'B': defaultdict(&lt;function __main__.&lt;lambda&gt;()&gt;, {'a':3, 'c':4, 'e':1}} </code></pre> <p>I want to get an output like this:</p> <pre><code>B,c : 4 B,a : 3 A,a : 2 </code></pre> <p>How can I sort a nested defaultdict like this?</p> <p>Thanks for your help in advance.</p>
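One way to get the three largest leaf values, sketched on the data from the question: flatten the nested dict into ("outer,inner", value) pairs and take `heapq.nlargest` (documented to behave like a stable descending sort, so ties keep their original order):

```python
from collections import defaultdict
from heapq import nlargest

d = defaultdict(lambda: defaultdict(int))
d["A"].update({"a": 2, "b": 1, "d": 2, "f": 1})
d["B"].update({"a": 3, "c": 4, "e": 1})

# Flatten to ("outer,inner", value) pairs
flat = (
    (f"{outer},{inner}", value)
    for outer, inner_dict in d.items()
    for inner, value in inner_dict.items()
)
top3 = nlargest(3, flat, key=lambda kv: kv[1])
for key, value in top3:
    print(f"{key} : {value}")
```

This prints `B,c : 4`, `B,a : 3`, `A,a : 2` (the tie at 2 resolves to the first-seen entry).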
<python><dictionary><sorting><defaultdict>
2023-04-17 05:32:57
4
307
Mina
76,031,835
678,572
How to subset a binary matrix to maximize “uniqueness”?
<p>I’m trying to optimize a matrix by subsetting rows and then performing a calculation on the subsetted rows. The calculation is representing each of the columns exactly once and having as few duplicates as possible.</p> <p>To be clear, the parameters to optimize are the following:</p> <ul> <li>p = Number of columns detected (More is better)</li> <li>q = Number of duplicate rows (Less is better)</li> <li>r = Penalty on including a duplicate row to increase p</li> </ul> <p>Uniqueness would be defined as p - q*r</p> <p>Here is a simple example where there is a clear answer where rows [2,3,4] will be chosen (row 0 is dropped because row 2 is better):</p> <pre><code>A = [ [1,0,0,0,0], [0,0,0,0,0], [1,0,1,0,0], [0,1,0,1,0], [0,0,0,0,1], ] </code></pre> <p>p = 5 (all columns are represented) q = 0 (no duplicates)</p> <p>There will not always be a perfect combination and sometimes it will need to add a penalty (include a duplicate q) to add to the number of columns represented (p). Weighting this will be important which will be done by r.</p> <pre><code>B = [ [1,1,0,1,0], [0,0,0,0,0], [1,0,0,0,1], [0,0,1,0,0], [0,0,0,0,1], ] </code></pre> <p>Best combination is [0,2,3]</p> <p>P = 5 (all columns detected) q = 1 (1 duplicate)</p> <p>Lastly, there will sometimes be 2 or more subsets that have the best combination so it should include all best subsets:</p> <pre><code>C = [ [1,0,0,1,0], [0,1,1,0,1], [0,1,0,0,1], [1,0,0,1,0], [0,0,0,0,1], ] </code></pre> <p>In this one, I spot a few good options: [0,1], [1,3]</p> <p><strong>Other than doing bruteforce of all combinations of rows, can someone help me understand how to start implement this while leveraging any of the algorithms in Scikit-Learn, SciPy, NumPy, or Platypus in Python?</strong></p> <p>More specifically, what algorithms can I use to optimize the rows in a NumPy array that maximizes <code>p</code>, minimize <code>q</code> weighted by <code>r</code> (e.g., <code>score = p - q*r</code>)?</p> <p>Here's some of my test 
code:</p> <pre class="lang-py prettyprint-override"><code>from platypus import NSGAII, Problem, Real A = np.asarray([ [1, 0, 0, 0, 0], [0, 0, 0, 0, 0], [1, 0, 1, 0, 0], [0, 1, 0, 1, 0], [0, 0, 0, 0, 1], ]) def objective(x): x = np.asarray(x) selected_rows = A[x &gt;= 0.5] p = len(set(selected_rows.flatten())) q = len(selected_rows) - len(set([tuple(row) for row in selected_rows])) return [p, q * 0.1] problem = Problem(5, 2) problem.types[:] = [Real(0, 1)] * 5 problem.function = objective problem.directions[:] = [Problem.MAXIMIZE, Problem.MINIMIZE] algorithm = NSGAII(problem) algorithm.run(10000) for solution in algorithm.result: print(f&quot;x = {solution.variables} \tp = {solution.objectives[0]:.2f} \tq*r = {solution.objectives[1]:.2f}&quot;) </code></pre>
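Short of brute force, this problem is close to weighted set cover, so a greedy heuristic is a natural baseline (not optimal; it sidesteps the penalty r by never taking a row that adds no new columns, so q stays 0). A sketch on matrix A from the question:

```python
import numpy as np

def greedy_rows(A: np.ndarray) -> list[int]:
    # Greedy set cover: repeatedly take the row that covers the most
    # still-uncovered columns; stop when no row adds anything new, which
    # also means no duplicate coverage (q = 0) is ever introduced.
    covered = np.zeros(A.shape[1], dtype=bool)
    chosen: list[int] = []
    remaining = list(range(A.shape[0]))
    while remaining:
        gains = [int((A[i].astype(bool) & ~covered).sum()) for i in remaining]
        best = max(range(len(remaining)), key=gains.__getitem__)
        if gains[best] == 0:
            break
        row = remaining.pop(best)
        chosen.append(row)
        covered |= A[row].astype(bool)
    return chosen

A = np.array([
    [1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 0, 0, 1],
])
print(greedy_rows(A))  # [2, 3, 4]
```

This returns [2, 3, 4], the subset the question calls out for matrix A; handling cases like matrix B would mean relaxing the stopping rule to accept a row whenever its new-column gain exceeds r.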
<python><algorithm><matrix><optimization><platypus-optimizer>
2023-04-17 04:59:19
1
30,977
O.rka