Dataset columns (type, observed min to max):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, lengths 15 to 150
QuestionBody: string, lengths 40 to 40.3k
Tags: string, lengths 8 to 101
CreationDate: date string, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, lengths 3 to 30
75,229,875
9,983,652
How to replace part of a string using a regular expression?
<p>I have a dataframe like this:</p> <pre><code>fict={'well':['10B23','10B23','10B23','10B23','10B23','10B23'], 'tag':['15B22|TestSep_OutletFlow','15B22|TestSep_GasOutletFlow','15B22|TestSep_WellNum','15B22|TestSep_GasPresValve','15B22|TestSep_Temp','WHT']} df=pd.DataFrame(fict) df well tag 0 10B23 15B22|TestSep_OutletFlow 1 10B23 15B22|TestSep_GasOutletFlow 2 10B23 15B22|TestSep_WellNum 3 10B23 15B22|TestSep_GasPresValve 4 10B23 15B22|TestSep_Temp 5 10B23 WHT </code></pre> <p>Now I'd like to replace everything before <code>|</code> in the <code>tag</code> column with a string like <code>11A22</code>, so after the replacement the dataframe should look like this:</p> <pre><code>well tag 0 10B23 11A22|TestSep_OutletFlow 1 10B23 11A22|TestSep_GasOutletFlow 2 10B23 11A22|TestSep_WellNum 3 10B23 11A22|TestSep_GasPresValve 4 10B23 11A22|TestSep_Temp 5 10B23 WHT </code></pre> <p>I am thinking of using a regular expression with a group and replacing the group with a string; what I have in mind looks like this:</p> <pre><code>df['tag2']=df['tag'].str.replace(r'([a-z0-9]*)|TestSep_[a-z0-9]*','11A22',regex=True) </code></pre> <p>Then I got this result:</p> <pre><code>well tag tag2 0 10B23 15B22|TestSep_OutletFlow 11A2211A22B11A2211A22|11A2211A2211A22O11A2211A... 1 10B23 15B22|TestSep_GasOutletFlow 11A2211A22B11A2211A22|11A2211A2211A22G11A2211A... 2 10B23 15B22|TestSep_WellNum 11A2211A22B11A2211A22|11A2211A2211A22W11A2211A... 3 10B23 15B22|TestSep_GasPresValve 11A2211A22B11A2211A22|11A2211A2211A22G11A2211A... 4 10B23 15B22|TestSep_Temp 11A2211A22B11A2211A22|11A2211A2211A22T11A2211A22 5 10B23 WHT 11A22W11A22H11A22T11A22 </code></pre> <p>Thanks for your help!</p>
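Editor's note: in the attempted pattern the `|` is unescaped, so the regex is read as an alternation between two sub-patterns that can each match an empty string, which is why `11A22` gets inserted at every position. A minimal sketch of a working pattern, shown with the stdlib `re` module (the same pattern works in pandas' `str.replace(..., regex=True)`):

```python
import re

def retag(tag: str, new_prefix: str = "11A22") -> str:
    # '^[^|]*\|' matches everything up to and including the first '|',
    # so rows without a '|' (like 'WHT') are left untouched
    return re.sub(r"^[^|]*\|", new_prefix + "|", tag)
```

In pandas this becomes `df['tag2'] = df['tag'].str.replace(r'^[^|]*\|', '11A22|', regex=True)`.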
<python><pandas><dataframe>
2023-01-25 05:04:55
1
4,338
roudan
75,229,630
14,297,619
Correct approach to re-raising an exception
<p>I have a code smell in my SonarScan that reports: &quot;Add logic to this except clause or eliminate it and rethrow the exception automatically.&quot;</p> <p>For example, I am trying to raise a FileNotFoundError, and the code works perfectly fine on my server. Am I taking the right approach here?</p> <pre><code>def read_file(): try: with open(&quot;/test/test.txt&quot;, 'r') as f: json_obj = json.load(f) except FileNotFoundError as e: raise e </code></pre>
<python><exception><static-analysis>
2023-01-25 04:13:40
0
334
Farid Arshad
75,229,588
2,961,927
Validation but not cross validation in Python sklearn Lasso
<p>I need to train a LASSO model using <code>sklearn</code>. I am given a pair of <strong>specifically designed</strong> training and validation datasets.</p> <p>The goal is to let the algorithm autogenerate a sequence of <code>alpha</code>s (the L1 penalty strength), and for each <code>alpha</code>, fit a model with the training data, and then evaluate the model on the validation data. Finally, select the model that performs the best on the validation data.</p> <p>How to achieve the above in the most efficient way?</p> <p>I attempted <code>sklearn.linear_model.LassoCV()</code> by binding the training and validation data, and enforced it to do like a 1-fold CV by supplying iterator to argument <code>cv</code>, but the <code>fit()</code> method will eventually use the optimized <code>alpha</code> and the entire merged data to produce the final model. I of course can take the optimized <code>alpha</code> and call <code>sklearn.linear_model.Lasso()</code> again, but this seems too troublesome:</p> <pre><code>import numpy as np from sklearn.linear_model import LassoCV from sklearn.datasets import make_regression X, y = make_regression(noise = 4, random_state = 0) Nrow, Ncol = len(X), len(X[0]) Ntrain = int(np.round(Nrow * 0.7)) Nvalid = Nrow - Ntrain trainInd = np.asarray([i for i in range(Ntrain)]) validInd = np.asarray([i for i in range(Ntrain, Nrow)]) trainValidInd = [(trainInd, validInd)] cvIter = iter(trainValidInd) reg = LassoCV(cv = cvIter, verbose = True).fit(X, y) ''' But .fit() will use the optimized alpha and the entire merged data to train the model. ''' </code></pre> <p>I also attempted <code>sklearn.linear_model.lasso_path()</code>, but how to apply it to a new dataset (the validation set) and make predictions? It also doesn't return the intercept term. 
How can I find it?</p> <p>Thanks!</p> <p>Came up with a &quot;smart&quot; workaround:</p> <pre><code>sampleW = np.asarray([1.0 for i in range(Ntrain)] + \ [1e-200 for i in range(Nvalid)]) reg = LassoCV(cv = cvIter, verbose = True).fit(X, y, sampleW) </code></pre> <p>By lowering the weight on the portion of validation data to almost 0, validation data is effectively excluded from training. Tests have proven its correctness, but it looks ridiculous. It shouldn't be this hard to achieve what I need.</p>
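Editor's note: one way to avoid both the merged-data refit and the sample-weight trick is to generate the alpha grid yourself and loop over plain `Lasso` fits, scoring each on the held-out set. A sketch, under the assumption that LassoCV-style log-spaced alphas (derived from the data's `alpha_max`) are acceptable:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.datasets import make_regression

X, y = make_regression(noise=4, random_state=0)
n_train = int(round(len(X) * 0.7))
X_tr, y_tr = X[:n_train], y[:n_train]
X_va, y_va = X[n_train:], y[n_train:]

# log-spaced grid down from the largest alpha that zeroes all coefficients,
# mimicking how LassoCV builds its path (eps=1e-3, 30 points here)
alpha_max = np.abs(X_tr.T @ (y_tr - y_tr.mean())).max() / len(X_tr)
alphas = np.logspace(np.log10(alpha_max), np.log10(alpha_max * 1e-3), 30)

# fit on training data only; select by validation R^2
models = [Lasso(alpha=a, max_iter=10_000).fit(X_tr, y_tr) for a in alphas]
best = max(models, key=lambda m: m.score(X_va, y_va))
```

`best` is fitted on the training split only, and exposes `intercept_` as usual.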
<python><machine-learning><scikit-learn><lasso-regression>
2023-01-25 04:03:51
1
1,790
user2961927
75,229,453
13,534,073
PyInstaller collect package issue (Google Analytics)?
<p>I have the below imports:</p> <pre><code>from apiclient import discovery from oauth2client.service_account import ServiceAccountCredentials </code></pre> <p>I'm trying to make a single file exe file by running:</p> <pre><code>pyinstaller --onefile -w --icon=icon.ico --add-data client_secrets.json;. main.py --collect-data &quot;google-api-python-client&quot; --collect-data &quot;oauth2client&quot;;. </code></pre> <p>I also tried:</p> <pre><code>pyinstaller --onefile -w --icon=icon.ico --add-data client_secrets.json;. main.py --collect-data google-api-python-client --collect-data oauth2client;. </code></pre> <p>But when i run the exe file I get the error:</p> <pre><code> Failed to execute script 'main' due to unhandled exception: name analytics version: v3 File &quot;main.py&quot;, line 81, in &lt;module&gt; File &quot;main.py&quot;, line 32, in get_service File &quot;googleapiclient\_helpers.py&quot;, line 130, in positional_wrapper File &quot;googleapiclient\discovery.py&quot;, line 287, in build File &quot;googleapiclient\discovery.py&quot;, line 404, in _retrieve_discovery_doc googleapiclient.errors.UnknownApiNameOrVersion: name: analytics version: v3 </code></pre> <p>The script works fine as a python file.</p> <p>How to reproduce:</p> <pre><code>from apiclient import discovery from oauth2client.service_account import ServiceAccountCredentials import os dir_path = os.path.dirname(os.path.realpath(__file__)) def get_service(api_name, api_version, scopes, key_file_location): credentials = ServiceAccountCredentials.from_json_keyfile_name( key_file_location, scopes=scopes) # Build the service object. 
# service = build(api_name, api_version, credentials=credentials) service = discovery.build(api_name, api_version, credentials=credentials) return service if __name__ == '__main__': scope = 'https://www.googleapis.com/auth/analytics.edit' # client_secret from console.cloud.google.com key_file_location = f'{dir_path}/client_secrets.json' # Authenticate and construct service. service = get_service( api_name='analytics', api_version='v3', scopes=[scope], key_file_location=key_file_location) </code></pre>
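Editor's note: two details in the commands above are likely culprits. `--collect-data` expects the *import* name of a package, not the PyPI distribution name, and the discovery document for `analytics` ships as package data inside `googleapiclient`. A hedged sketch of the invocation (the `;` separator in `--add-data` assumes Windows; use `:` on Linux/macOS):

```shell
pyinstaller --onefile -w --icon=icon.ico \
    --add-data "client_secrets.json;." \
    --collect-data googleapiclient \
    --collect-data oauth2client \
    main.py
```

Alternatively, `discovery.build(..., static_discovery=False)` fetches the discovery document over the network at run time instead of from bundled package data.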
<python>
2023-01-25 03:29:26
1
581
squidg
75,229,395
10,748,412
TypeError: '<' not supported between instances of 'torch.device' and 'int'
<pre><code>2023-01-25 08:21:21,659 - ERROR - Traceback (most recent call last): File &quot;/home/xyzUser/project/queue_handler/document_queue_listner.py&quot;, line 148, in __process_and_acknowledge pipeline_result = self.__process_document_type(message, pipeline_input) File &quot;/home/xyzUser/project/queue_handler/document_queue_listner.py&quot;, line 194, in __process_document_type pipeline_result = bill_parser_pipeline.process(pipeline_input) File &quot;/home/xyzUser/project/main/billparser/__init__.py&quot;, line 18, in process bill_extractor_model = MachineGeneratedBillExtractorModel() File &quot;/home/xyzUser/project/main/billparser/models/qa_model.py&quot;, line 25, in __new__ cls.__model = TransformersReader(model_name_or_path=cls.__model_path, use_gpu=False) File &quot;/home/xyzUser/project/.env/lib/python3.8/site-packages/haystack/nodes/base.py&quot;, line 48, in wrapper_exportable_to_yaml init_func(self, *args, **kwargs) File &quot;/home/xyzUser/project/.env/lib/python3.8/site-packages/haystack/nodes/reader/transformers.py&quot;, line 93, in __init__ self.model = pipeline( File &quot;/home/xyzUser/project/.env/lib/python3.8/site-packages/transformers/pipelines/__init__.py&quot;, line 542, in pipeline return task_class(model=model, framework=framework, task=task, **kwargs) File &quot;/home/xyzUser/project/.env/lib/python3.8/site-packages/transformers/pipelines/question_answering.py&quot;, line 125, in __init__ super().__init__( File &quot;/home/xyzUser/project/.env/lib/python3.8/site-packages/transformers/pipelines/base.py&quot;, line 691, in __init__ self.device = device if framework == &quot;tf&quot; else torch.device(&quot;cpu&quot; if device &lt; 0 else f&quot;cuda:{device}&quot;) TypeError: '&lt;' not supported between instances of 'torch.device' and 'int' </code></pre> <p>This is the error message i got after installing a requirement.txt file from my project. I think it is related to torch but also dont know how to fix it. 
I am new to Hugging Face Transformers and don't know if it is a version issue.</p>
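Editor's note: the failing comparison `device < 0` inside transformers assumes an `int` device index, while the installed haystack version passes a `torch.device`, hence the `TypeError`; aligning the pinned haystack/transformers versions, or passing an int index (`-1` for CPU), avoids it. A stdlib-only sketch of that normalization, using a stand-in class instead of the real `torch.device` (an assumption, since torch is not imported here):

```python
class DeviceStandIn:
    """Stand-in for torch.device in this sketch; only `.type` and `.index`
    mirror the real attributes."""
    def __init__(self, type_, index=None):
        self.type, self.index = type_, index

def to_device_index(device):
    """Map int / 'cpu' / 'cuda:N' / device object to the int index that
    older transformers pipelines expect (-1 = CPU, N = cuda:N)."""
    if isinstance(device, int):
        return device
    if isinstance(device, str):
        type_, _, idx = device.partition(":")
        device = DeviceStandIn(type_, int(idx) if idx else None)
    if device.type == "cpu":
        return -1
    return device.index if device.index is not None else 0
```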
<python><machine-learning><huggingface-transformers><torch>
2023-01-25 03:16:10
1
365
ReaL_HyDRA
75,229,357
3,508,811
How to run a command with %?
<p>I am trying to run the command <code>git log origin/master..HEAD --format=format:&quot;%H&quot;</code> in Python as below, but I am running into the error below. I tried to escape <code>%</code>, but that doesn't fix the error. Any idea how to fix it?</p> <pre><code>def runCmd2(cmd): logger.info(&quot;Running command %s&quot;%cmd) proc = Popen(cmd ,universal_newlines = True, shell=True, stdout=PIPE, stderr=PIPE) (output, error) = proc.communicate() return output.strip(),error.strip() def get_local_commits(): &quot;&quot;&quot;Get local commits &quot;&quot;&quot; branch = &quot;master&quot; cmd = &quot;git log origin/%s..HEAD --format=format:\&quot;%H\&quot; &quot;%(branch) output,error = runCmd2(cmd) return output,error </code></pre> <p>ERROR:-</p> <pre><code> File &quot;/Users/gnakkala/jitsuin/wifi-ci/enable_signing.py&quot;, line 45, in get_local_commits cmd = &quot;git log origin/%s..HEAD --format=format:\&quot;%H\&quot; &quot;%(branch) TypeError: not enough arguments for format string </code></pre>
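Editor's note: the `%H` meant for git is being read by Python's `%` operator as an (incomplete) conversion specifier, which is exactly what `not enough arguments for format string` means. Doubling the literal percent, or sidestepping `%`-formatting entirely, fixes it:

```python
branch = "master"

# Option 1: escape the literal % meant for git by doubling it
cmd = 'git log origin/%s..HEAD --format=format:"%%H"' % (branch,)

# Option 2: an f-string gives % no special meaning, so no escaping is needed
cmd_f = f'git log origin/{branch}..HEAD --format=format:"%H"'
```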
<python><string-formatting>
2023-01-25 03:07:41
3
925
user3508811
75,229,333
402,649
Pandas does not have a function "read_sql"
<p>Python 3.6 on RHEL 8, Pandas 1.1.5</p> <p>Trying to use pandas to do a query, <code>pd.read_sql(&lt;my query&gt;, conn).to_dict(orient=&quot;records&quot;)</code></p> <p>Error message is: <code>AttributeError: module 'pandas' has no attribute 'read_sql'</code></p> <p>At the command line, <code>python3 -c &quot;import pandas; print(str(pandas.__dict__))&quot;</code> returns, among other things, <code>'read_sql': &lt;function read_sql at 0x7eff9c08df28&gt;</code> so the function is obviously there and visible to Python. It is using the version of Pandas from the venv in the Pandas distro from pip not some other random local file called pandas or pd.</p> <p>Please advise, why is this happening?</p> <p>Tried uninstalling and reinstalling, no good.</p> <p>Update with new information: This does not occur in the Python test server, but does only with WSGI.</p> <p>My Python Path in WSGI (getting the error):</p> <pre><code>&quot;['/var/www/FLASKAPPS/', '/var/www/FLASKAPPS/myapp', '/usr/lib64/python36.zip', '/usr/lib64/python3.6', '/usr/lib64/python3.6/lib-dynload', '/var/www/FLASKAPPS/myapp/venv/lib64/python3.6/site-packages', '/var/www/FLASKAPPS/myapp/venv/lib/python3.6/site-packages']&quot; </code></pre> <p>This is the WSGI <code>pandas.__dict__</code>:</p> <pre><code>&quot;{'__name__': 'pandas', '__doc__': None, '__package__': 'pandas', '__loader__': &lt;_frozen_importlib_external._NamespaceLoader object at 0x7f56025be668&gt;, '__spec__': ModuleSpec(name='pandas', loader=None, origin='namespace', submodule_search_locations=_NamespacePath(['/var/www/FLASKAPPS/myapp/venv/lib64/python3.6/site-packages/pandas', '/var/www/FLASKAPPS/myapp/venv/lib/python3.6/site-packages/pandas'])), '__path__': _NamespacePath(['/var/www/FLASKAPPS/myapp/venv/lib64/python3.6/site-packages/pandas', '/var/www/FLASKAPPS/myapp/venv/lib/python3.6/site-packages/pandas'])}&quot; </code></pre> <p>My Python Path in the Python test server (no error):</p> <pre><code>&quot;['/var/www/FLASKAPPS/myapp', 
'/usr/lib64/python36.zip', '/usr/lib64/python3.6', '/usr/lib64/python3.6/lib-dynload', '/var/www/FLASKAPPS/myapp/venv/lib64/python3.6/site-packages', '/var/www/FLASKAPPS/myapp/venv/lib/python3.6/site-packages']&quot; </code></pre> <p>And here is a partial of the <code>pandas.__dict__</code> from the working Python test server:</p> <pre><code>origin='/var/www/FLASKAPPS/myapp/venv/lib64/python3.6/site-packages/pandas/__init__.py', submodule_search_locations=['/var/www/FLASKAPPS/myapp/venv/lib64/python3.6/site-packages/pandas']), '__path__': ['/var/www/FLASKAPPS/myapp/venv/lib64/python3.6/site-packages/pandas'], '__file__': '/var/www/FLASKAPPS/myapp/venv/lib64/python3.6/site-packages/pandas/__init__.py' </code></pre> <h2>&quot;Solution&quot;</h2> <p>Not really a solution, but at least I finally found the cause of the issue. Apparently, pandas uses numpy, which is not compatible with mod_wsgi, when running on RHEL 8. Python, in its infinite wisdom, would not include this in the error output.</p> <p>Thanks to everyone that looked at this for me, sorry the answer is not so fulfilling.</p>
<python><python-3.x><pandas><python-3.6>
2023-01-25 03:03:10
0
3,948
Wige
75,229,250
15,632,586
Is there a method to run a Conda environment in Google Colab?
<p>I have a YML file for a Conda environment that runs with Python 3.8.15 (<code>environment.yml</code>). I am currently trying to load that file into my Google Colab, based on this answer: <a href="https://stackoverflow.com/questions/53031430/conda-environment-in-google-colab-google-colaboratory/62346425#62346425">conda environment in google colab [google-colaboratory]</a>.</p> <pre><code>!wget -c https://repo.anaconda.com/archive/Anaconda3-2022.10-Windows-x86_64.exe !chmod +x Anaconda3-2022.10-Windows-x86_64.exe !bash ./Anaconda3-2022.10-Windows-x86_64.exe -b -f -p /usr/local </code></pre> <p>And while the executable file for Anaconda was installed in my Google Drive folder, when I run the code, it turns out that Colab could not execute that file:</p> <pre><code>./Anaconda3-2022.10-Windows-x86_64.exe: ./Anaconda3-2022.10-Windows-x86_64.exe: cannot execute binary file </code></pre> <p>Is there any other method that I could use to install Anaconda to work with it in Google Colab? And furthermore, how should I load my <code>environment.yml</code> file after getting Anaconda to run in Colab?</p>
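Editor's note: the installer being downloaded is the *Windows* build (`...Windows-x86_64.exe`), but Colab VMs run Linux, hence `cannot execute binary file`. A hedged sketch using the Linux installer instead, in Colab's `!` cell syntax (the URL follows Anaconda's repo naming; the `condacolab` package on PyPI is a common alternative that automates this setup):

```shell
# Linux installer (.sh), not the Windows .exe
!wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
!bash Miniconda3-latest-Linux-x86_64.sh -b -f -p /usr/local
# then create/update an environment from the uploaded YAML
!/usr/local/bin/conda env update -n base -f environment.yml
```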
<python><anaconda><conda><google-colaboratory>
2023-01-25 02:43:27
2
451
Hoang Cuong Nguyen
75,229,224
10,970,202
How to download yum packages on Windows or from some website?
<p>When creating a virtual env via <code>python3.7 -m venv myenv</code> I get the following error:</p> <pre><code>Error: Command '[/home/..../python3', '-lm', 'ensurepip', '--upgrade', '--deault-pip'] returned non-zero exit status 1 </code></pre> <p>This seems to be solved by installing 3 Python packages via <code>apt-get/yum install python3.7 python3-venv python3.7-venv</code>, according to <a href="https://stackoverflow.com/questions/24123150/pyvenv-3-4-returned-non-zero-exit-status-1">pyvenv-3.4 returned non-zero exit status 1</a>.</p> <p>However, my Linux server (RHEL 8.4) does not have an internet connection, therefore I need to download the 3 packages mentioned above and then transfer them to the server.</p> <p>Where can I download them?</p> <ul> <li>I know I could spin up the same OS version and then download the files (last resort)</li> <li>I'm aware of websites like pkgs.org and rpmfind.net: I cannot find all 3 packages there.</li> </ul>
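Editor's note: on RHEL 8 the usual route is to resolve and download the RPMs, with dependencies, on an internet-connected machine of the same release and architecture, then copy them over and install from the local files. A sketch with yum/dnf's built-in options (the package names below are the ones quoted in the question; note they are Debian-style names, and on RHEL the venv module ships inside the `python3x` packages, so the exact names may differ):

```shell
# on a connected RHEL 8.4 box (or container) of the same arch:
yum install --downloadonly --downloaddir=/tmp/pkgs python3.7 python3-venv python3.7-venv
# copy /tmp/pkgs/*.rpm to the offline server, then on that server:
yum localinstall /tmp/pkgs/*.rpm
```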
<python><linux><redhat><python-venv>
2023-01-25 02:37:22
0
5,008
haneulkim
75,229,154
874,380
How to set up JupyterHub/JupyterLab with different conda environments correctly?
<p>A little background: I want to use JupyterHub/JupyterLab to allow my students to design notebooks in a collaborative manner. In principle, I have a JupyterLab running with collaboration, which seems to work. However, from what I can tell, JupyterLab does not come with user management, right? So anyone with the link and the password can open a notebook and edit it, right?</p> <p>I therefore also installed JupyterHub, which supports user management based on system user accounts, which seems fine enough. JupyterHub is running, but not in a collaborative manner. I think I'm just too confused about how these work together. For example, I can open the same notebook on 2 different machines using either JupyterHub or JupyterLab, but Real-Time Collaboration (RTC) only works with JupyterLab.</p> <p>I also don't know how to integrate this correctly with virtual environments in conda. It seems I have JupyterLab installed in all my current environments, e.g.,</p> <pre><code>./lib/python3.9/site-packages/jupyterlab ./envs/env_a/lib/python3.9/site-packages/jupyterlab ./envs/env_b/lib/python3.9/site-packages/jupyterlab </code></pre> <p>But JupyterHub only in the &quot;base&quot; environment:</p> <pre><code>./bin/jupyterhub ./lib/python3.9/site-packages/jupyterhub ./share/jupyterhub ./pkgs/jupyterhub-3.1.0-py39h06a4308_0/bin/jupyterhub ./pkgs/jupyterhub-3.1.0-py39h06a4308_0/lib/python3.9/site-packages/jupyterhub ./pkgs/jupyterhub-3.1.0-py39h06a4308_0/share/jupyterhub </code></pre> <p>So things kind of work, but I'm not sure if this is done properly. Is it correct to say that starting JupyterHub in the &quot;base&quot; environment is using the JupyterLab installation in the &quot;base&quot; environment? Do I need a JupyterHub installation for each conda environment?</p> <p>I generally only find basic information about the installation, but not really about how this all works together, particularly when using multiple conda environments.
That being said, within JupyterHub and JupyterLab, I see all environments and I can change the kernels for a notebook.</p>
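Editor's note on the environment question: a single JupyterHub (and the JupyterLab it spawns) in the base environment is generally enough; the other conda environments don't each need JupyterHub or JupyterLab, they only need to be registered as *kernels*. A sketch of the usual registration (the display names are mine):

```shell
conda activate env_a
conda install -y ipykernel
python -m ipykernel install --user --name env_a --display-name "Python (env_a)"
# repeat for env_b; both kernels then appear in Hub- and Lab-started sessions
```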
<python><jupyter-notebook><jupyter><jupyter-lab><jupyterhub>
2023-01-25 02:20:25
0
3,423
Christian
75,229,145
875,295
Can I share a multiprocessing.Manager() instance across 3 processes?
<p>I'm trying to understand if I'm allowed to do the following in Python:</p> <ul> <li>create a <a href="https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Manager" rel="nofollow noreferrer">manager</a> instance in my program</li> <li>fork the existing process N times</li> <li>in my initial process, send data to the manager (to some shared variable)</li> <li>in my forked processes, read data from the manager (from the shared variable)</li> </ul> <p>Based on my understanding, this should be the main use case for managers. However, when I try this with more than 2 processes, I sometimes get pickle errors. With only 2 processes it always works. My question is thus: is this supposed to work or not?</p> <p>I wonder if maybe the Manager system is not based on a file descriptor/socket, so it's only made to communicate 1-to-1, or at least by default.</p>
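Editor's note: a single Manager's proxies can be shared with any number of child processes, provided the proxies reach the children through a supported channel (inherited at fork, or passed as `Process`/`Pool` arguments, so multiprocessing can pickle the proxy's connection info). A minimal sketch with one writer (the parent) and three readers:

```python
import multiprocessing as mp

def reader(shared, out):
    # each child talks to the same manager server over its own connection
    out.put(shared["status"])

def demo(n=3):
    with mp.Manager() as mgr:
        shared = mgr.dict()          # proxy: picklable when passed to Process
        out = mgr.Queue()
        shared["status"] = "ready"   # parent writes
        procs = [mp.Process(target=reader, args=(shared, out)) for _ in range(n)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        return [out.get() for _ in range(n)]
```

Pickle errors typically appear when a proxy is sent through an unrelated channel (e.g. a plain socket, or a process that isn't a descendant sharing the manager's authkey) rather than one of the supported ones.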
<python><multiprocessing>
2023-01-25 02:17:54
1
8,114
lezebulon
75,229,007
11,922,765
pandas plot every Nth index but always include last index
<p>I have a plot, and I want to display only specific values so that the plot looks good and not clumsy. In the below, I want to display values every few years, but I don't want to miss displaying the last value.</p> <pre><code>df = Year Total value 0 2011 11.393630 1 2012 11.379185 2 2013 10.722502 3 2014 10.304044 4 2015 9.563496 5 2016 9.048299 6 2017 9.290901 7 2018 9.470320 8 2019 9.533228 9 2020 9.593088 10 2021 9.610742 # Plot df.plot(x='Year') # Select every third row; these values will be displayed on the chart col_tuple = df[['Year','Total value']][::3] for j, k in col_tuple.itertuples(index=False): plt.text(j, k*1.1, '%.2f' % k) plt.show() </code></pre> <p>How do I pick and show the last value as well?</p>
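Editor's note: the selection itself is just index arithmetic, so it can be checked without pandas. Take every Nth position and append the final one when the stride misses it. A stdlib sketch (in the question's terms, the resulting positions would be used as `df.iloc[positions]`):

```python
def every_nth_with_last(length, step):
    """Positions 0, step, 2*step, ... plus the final position when the
    stride would otherwise skip it."""
    positions = list(range(0, length, step))
    if positions and positions[-1] != length - 1:
        positions.append(length - 1)
    return positions
```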
<python><pandas><dataframe><matplotlib><plot>
2023-01-25 01:48:24
1
4,702
Mainland
75,228,886
6,367,971
Split CSV into multiple files based on column value
<p>I have a poorly-structured CSV file named <code>file.csv</code>, and I want to split it up into multiple CSV files using Python.</p> <pre><code>|A|B|C| |Continent||1| |Family|44950|file1| |Species|44950|12| |Habitat||4| |Species|44950|22| |Condition|Tue Jan 24 00:00:00 UTC 2023|4| |Family|Fish|file2| |Species|Bass|8| |Species|Trout|2| |Habitat|River|3| </code></pre> <p>The new files need to be separated based on everything between the <code>Family</code> rows, so for example:</p> <p><code>file1.csv</code></p> <pre><code>|A|B|C| |Continent||1| |Family|44950|file1| |Species|44950|12| |Habitat||4| |Species|44950|22| |Condition|Tue Jan 24 00:00:00 UTC 2023|4| </code></pre> <p><code>file2.csv</code></p> <pre><code>|A|B|C| |Continent||1| |Family|Fish|file2| |Species|Bass|8| |Species|Trout|2| |Habitat|River|3| </code></pre> <p>What's the best way of achieving this when the number of rows between appearances of <code>Family</code> is not consistent?</p>
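Editor's note: since the grouping rule is simply "a row whose first field is `Family` starts a new block", this needs no pandas. A line-oriented pass collects the header (everything before the first `Family` row) and the blocks, and each output file is header plus block. A sketch (naming output files from the last field of the `Family` row is an assumption based on the example):

```python
def split_blocks(lines, key="Family"):
    """Return (header_lines, blocks); each block starts at a row whose
    first field between the pipes equals `key`."""
    header, blocks = [], []
    for line in lines:
        fields = line.strip().strip("|").split("|")
        if fields and fields[0] == key:
            blocks.append([line])
        elif blocks:
            blocks[-1].append(line)
        else:
            header.append(line)
    return header, blocks
```

Each output would then be written as `header + block`, e.g. to `file1.csv` taken from the `Family` row's last field.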
<python><pandas><csv>
2023-01-25 01:21:25
3
978
user53526356
75,228,819
9,668,218
How to find values in a column of a PySpark DataFrame that don't exist in another DataFrame?
<p>I have two PySpark DataFrames, both have a column named &quot;Country&quot;. One DataFrame is the reference and I want to compare name of the countries in the 2nd DataFrame with the reference DataFrame to find the difference. The output that I want is a list of countries in the 2nd DataFrame that don't exist in the reference DataFrame.</p> <p>Note that the datasets I am working on are large and following is only a sample.</p> <p>I am running code on Databricks.</p> <p>Sample data (the 1st DataFrame is the reference):</p> <pre><code># Prepare Data data_1 = [(1, 'Italy'), \ (2, 'Taiwan'), \ (3, 'USA'), \ (4, 'United Kingdom'), \ (5, 'Japan') ] # Create DataFrame columns = ['Code', 'Country'] df_1 = spark.createDataFrame(data = data_1, schema = columns) df_1.show(truncate=False) </code></pre> <p><a href="https://i.sstatic.net/ftWaC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ftWaC.png" alt="enter image description here" /></a></p> <pre><code># Prepare Data data_2 = [(1, 'Japan'), \ (2, 'China'), \ (3, 'United States'), \ (4, 'italy') ] # Create DataFrame columns = ['Code', 'Country'] df_2 = spark.createDataFrame(data = data_2, schema = columns) df_2.show(truncate=False) </code></pre> <p><a href="https://i.sstatic.net/rbeTJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rbeTJ.png" alt="enter image description here" /></a></p> <p>The expected output will be :</p> <p>['China', 'United States', 'italy']</p>
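Editor's note: in PySpark this is an anti join: `df_2.join(df_1, on='Country', how='left_anti')` keeps exactly the rows of `df_2` with no match in `df_1` (add `.select('Country')` and collect to get the list). The matching is exact, which is why lowercase 'italy' counts as missing. The logic reduces to a set-membership check, sketched here with the sample data in plain Python:

```python
reference = {"Italy", "Taiwan", "USA", "United Kingdom", "Japan"}
candidates = ["Japan", "China", "United States", "italy"]

# same semantics as a left_anti join on the Country column:
# keep candidates with no exact match in the reference
missing = [c for c in candidates if c not in reference]
```

Normalize case first (e.g. `lower(col('Country'))` on both sides) if 'italy' should match 'Italy'.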
<python><dataframe><apache-spark><pyspark><compare>
2023-01-25 01:03:53
1
1,033
Mohammad
75,228,701
13,638,243
Implementing ERGMs with PyMC
<p>I am trying to implement ERGMs with PyMC. <br> <br> I've found <a href="https://socialabstractions-blog.tumblr.com/post/53391947460/exponential-random-graph-models-in-python" rel="nofollow noreferrer">this</a>, <a href="https://gist.github.com/dmasad/78cb940de103edbee699" rel="nofollow noreferrer">this</a>, <a href="https://towardsdatascience.com/building-a-bayesian-logistic-regression-with-python-and-pymc3-4dd463bbb16" rel="nofollow noreferrer">this</a> and <a href="https://computational-communication.com/forum/ergm-python/" rel="nofollow noreferrer">this</a>, but these resources are a bit dated.</p> <p>I have an NxN matrix for each network statistic (<code>density</code>, <code>triangles</code>, <code>istar2</code>, <code>istar3</code> &amp; <code>distance</code>). Each cell in each matrix indicates how the presence of that potential edge would change that statistic, holding the rest of the network constant. <code>am</code> is the adjacency matrix of graph <code>G</code> (<code>nx.to_numpy_array(G)</code>). <br> <br> My model looks like this.</p> <pre class="lang-py prettyprint-override"><code>with pm.Model() as model: density = pm.ConstantData(&quot;density&quot;, density) triangles = pm.ConstantData(&quot;triangles&quot;, triangles) istar2 = pm.ConstantData(&quot;istar2&quot;, istar2) istar3 = pm.ConstantData(&quot;istar3&quot;, istar3) distance = pm.ConstantData(&quot;distance&quot;, distance) β_density = pm.Normal('β_density', mu=0, sigma=100) β_triangles = pm.Normal('β_triangles', mu=0, sigma=100) β_istar2 = pm.Normal('β_istar2', mu=0, sigma=100) β_istar3 = pm.Normal('β_istar3', mu=0, sigma=100) β_distance = pm.Normal('β_distance', mu=0, sigma=100) μ = β_density*density + β_triangles*triangles + β_istar2*istar2 + β_istar3*istar3 + β_distance*distance θ = pm.Deterministic('θ', pm.math.sigmoid(μ)) y = pm.Bernoulli('y', p=θ, observed=am) trace=pm.sample( draws=500, tune=1000, cores=1, ) </code></pre> <p>Am I doing this correctly?</p>
<python><pymc3><pymc>
2023-01-25 00:37:35
0
363
Neotenic Primate
75,228,631
2,383,842
How can I pipe JPEG files into FFMPEG and create an RTSP, H.264 stream?
<p>I have an input RTSP stream that I would like to manipulate on a frame-by-frame basis using openCV. After these changes are applied, I'd like to create a separate RTSP stream from those frames. I'm piping the resulting JPEG images to FFMPEG via STDIN. I need the intermediate frame to be a JPEG.</p> <p>In other words, I must conform to this pattern: RTSP IN -&gt; Create JPEG as input -&gt; manipulation, JPEG out -&gt; RTSP</p> <p>The <strong>PROBLEM</strong> I'm trying to solve deals with a codec at this point. See the last few lines of FFMPEG's output error message.</p> <p>Here is what I have:</p> <pre><code>def open_ffmpeg_stream_process(): args = ( &quot;ffmpeg -re -stream_loop -1 &quot; &quot;-f jpeg_pipe &quot; &quot;-s 512x288 &quot; &quot;-i pipe:0 &quot; &quot;-c:v h264 &quot; &quot;-f rtsp &quot; &quot;rtsp://localhost:8100/out0&quot; ).split() return subprocess.Popen(args, stdin=subprocess.PIPE) video_source = cv2.VideoCapture('rtsp://localhost:9100/in0') frame_cnt = 0 FRAME_SKIP = 30 ffmpeg_process = open_ffmpeg_stream_process() while video_source.isOpened(): frame_cnt += 1 if frame_cnt % FRAME_SKIP: continue else: frame_cnt = 0 _, frame = video_source.read() _, jpg = cv2.imencode('.jpg', frame) # Work on the JPEG occurs here, and the output will be a JPEG ffmpeg_process.stdin.write(jpg.astype(np.uint8).tobytes()) if cv2.waitKey(1) &amp; 0xFF == ord('q'): break video_source.release() </code></pre> <p>Here is FFMPEG's output:</p> <pre><code>ffmpeg version 4.4.2-0ubuntu0.22.04.1 Copyright (c) 2000-2021 the FFmpeg developers built with gcc 11 (Ubuntu 11.2.0-19ubuntu1) configuration: ---- noisy configuration stuff ---- libavutil 56. 70.100 / 56. 70.100 libavcodec 58.134.100 / 58.134.100 libavformat 58. 76.100 / 58. 76.100 libavdevice 58. 13.100 / 58. 13.100 libavfilter 7.110.100 / 7.110.100 libswscale 5. 9.100 / 5. 9.100 libswresample 3. 9.100 / 3. 9.100 libpostproc 55. 9.100 / 55. 
9.100 Input #0, jpeg_pipe, from 'pipe:0': Duration: N/A, bitrate: N/A Stream #0:0: Video: mjpeg, rgb24(bt470bg/unknown/unknown), 512x288, 25 fps, 25 tbr, 25 tbn, 25 tbc Stream mapping: Stream #0:0 -&gt; #0:0 (mjpeg (native) -&gt; h264 (libx264)) [libx264 @ 0x562a1ffc5840] using SAR=1/1 [libx264 @ 0x562a1ffc5840] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2 [libx264 @ 0x562a1ffc5840] profile High, level 3.1, 4:2:0, 8-bit [libx264 @ 0x562a1ffc5840] 264 - core 163 r3060 5db6aa6 - H.264/MPEG-4 AVC codec - Copyleft 2003-2021 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=18 lookahead_threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 [tcp @ 0x562a20826c00] Connection to tcp://localhost:8100?timeout=0 failed: Connection refused Could not write header for output file #0 (incorrect codec parameters ?): Connection refused Error initializing output stream 0:0 -- Conversion failed! </code></pre> <p>How does one create an H.264, RTSP stream from a series of JPEG frames using FFMPEG.</p> <p>Notes: The FFMPEG command/subproccess might need unrelated improvements, feel free to comment on my crappy code.</p> <p>Edit: Oof, I just found out the FFMPEG command doesn't even work stand alone.</p>
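Editor's note: the actual failure in the log is `Connection to tcp://localhost:8100 ... Connection refused`. ffmpeg's `-f rtsp` output *publishes to* an RTSP server, it does not create one, so a server must already be listening on that port. A hedged sketch (mediamtx, formerly rtsp-simple-server, is one common choice; 8554 is its default port):

```shell
# start an RTSP server first, then point ffmpeg's output at it
./mediamtx &                      # listens on rtsp://localhost:8554 by default
ffmpeg -re -f jpeg_pipe -s 512x288 -i pipe:0 \
    -c:v libx264 -pix_fmt yuv420p -f rtsp rtsp://localhost:8554/out0
```

`-pix_fmt yuv420p` keeps the H.264 output broadly decodable; the RGB JPEGs would otherwise encode to a less compatible pixel format.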
<python><opencv><ffmpeg><subprocess><rtsp>
2023-01-25 00:23:13
0
415
Michael Schmidt
75,228,574
448,862
How do you select a specific cell in a QTableView?
<p>In Python, using a QTableView, how do I get a specific cell, and how do I set the current cell?</p>
<python><pyqt><qtableview>
2023-01-25 00:13:54
1
342
QuentinJS
75,228,549
4,451,521
To print or not to print in pytest
<p>When talking about pytest we know two things:</p> <ol> <li>When a test passes, no output is given in principle.</li> <li>Sometimes the assertion failures can have very cryptic messages.</li> </ol> <p>I took a course that solved this by using <code>print</code> to clarify desired outputs and calling pytest as <code>pytest -v -s</code>. I think it is a great solution.</p> <p>Another developer in my company thinks that test code should be as free of &quot;side effects&quot; as possible (and considers prints a side effect). He suggests outputting to a file, which I think is not a good practice. (I think <em>that</em> is an undesirable side effect.)</p> <p>So I would like to hear about this from other developers. How do you solve the two points given at the beginning, and do you use prints in your tests?</p>
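Editor's note: a middle ground between bare `print` (which needs `-s` to appear) and writing to files: pytest already captures stdout and logging per test and replays them automatically for *failing* tests, so prints or `logging` calls cost nothing on green runs, and `pytest --log-cli-level=INFO` streams the log lines live when wanted. A sketch of a test written that way, with a rich assertion message that removes the need for most output:

```python
import logging

logger = logging.getLogger(__name__)

def test_total():
    values = [1, 2, 3]
    total = sum(values)
    logger.info("summing %s -> %s", values, total)  # replayed only on failure
    # a descriptive assertion message often replaces debug output entirely
    assert total == 6, f"expected 6, got {total} from {values}"
```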
<python><pytest>
2023-01-25 00:08:39
2
10,576
KansaiRobot
75,228,471
1,828,539
Why are miniconda envs located in /path/to/miniconda/base/envs/? [homebrew install]
<p>I installed miniconda via homebrew (<a href="https://formulae.brew.sh/cask/miniconda" rel="nofollow noreferrer">cask</a>)</p> <pre><code>brew install miniconda </code></pre> <p>And created a new environment <code>myenv</code></p> <pre><code>conda create -n myenv </code></pre> <p>I noticed that the path for this environment is at</p> <pre><code>/usr/local/Caskroom/miniconda/base/envs/myenv </code></pre> <p>instead of</p> <pre><code>/usr/local/Caskroom/miniconda/base/myenv </code></pre> <p>Curious why it is organized this way, as I feel this is messing with my ability to use this environment with R reticulate.</p>
<python><conda><homebrew><miniconda>
2023-01-24 23:54:48
1
2,376
Carmen Sandoval
75,228,418
3,591,044
Adding line breaks and white space to string programmatically
<p>I have a string with multiple line breaks in it. An example looks as follows:</p> <pre><code>s = &quot;This is a conversation between a Human and person alpha.\n\n1) Human: Hello\n2)person alpha:What's up?\n3)Human:Not much, how about you?4) person alpha: I'm watching TV.\n5) Human: What are you watching?\n6) person alpha: I'm watching a series7)Human: Interesting.&quot; </code></pre> <p>Before each Human and person alpha there should be a line break. In addition, sometimes the space after the colon is missing. The correct string should look as follows:</p> <pre><code>s = &quot;This is a conversation between a Human and person alpha.\n\n1) Human: Hello\n2)person alpha: What's up?\n3) Human: Not much, how about you?\n4) person alpha: I'm watching TV.\n5) Human: What are you watching?\n6) person alpha: I'm watching a series\n7)Human: Interesting.&quot; </code></pre> <p>I've tried to achieve it by using <code>replace(&quot;Human:&quot;, &quot;Human: &quot;)</code> and <code>replace(&quot;person alpha:&quot;, &quot;person alpha: &quot;)</code>, but this introduces additional whitespace when there is already correct whitespace. For the line breaks, I would like to add a &quot;\n&quot; before every x), where x is a number, if it does not yet have one.</p>
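Editor's note: both requirements are one regex each. A lookahead inserts the space only when a non-space follows the colon, and a lookbehind inserts the newline only when one isn't already there. A sketch (assuming the marker is always digits followed by a closing parenthesis):

```python
import re

def normalize(s):
    # add the missing space after the speaker colon, but only when the
    # next character is not already whitespace
    s = re.sub(r"(Human:|person alpha:)(?=\S)", r"\1 ", s)
    # add a newline before every 'x)' marker that does not have one yet
    s = re.sub(r"(?<!\n)(\d+\))", r"\n\1", s)
    return s
```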
<python><string><replace>
2023-01-24 23:43:39
2
891
BlackHawk
75,228,285
9,749,124
ValueError: Input contains NaN, ... when doing fit_transform() in BERTopic
<p>I want to build a BERTopic model with my own clustering algorithm (KMeans) and my own vectorizer (CountVectorizer), but I keep getting this warning and error when I call .fit_transform(data):</p> <p>Warning:</p> <pre><code>/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/bertopic/vectorizers/_ctfidf.py:69: RuntimeWarning: divide by zero encountered in divide </code></pre> <p>And then, error:</p> <pre><code>ValueError Traceback (most recent call last) &lt;ipython-input-104-1f024d22018f&gt; in &lt;module&gt; ----&gt; 1 topics, probs = bert_topic_model.fit_transform(final_df.body) /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/bertopic/_bertopic.py in fit_transform(self, documents, embeddings, y) 368 self._map_representative_docs(original_topics=True) 369 else: --&gt; 370 self._save_representative_docs(documents) 371 372 self.probabilities_ = self._map_probabilities(probabilities, original_topics=True) /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/bertopic/_bertopic.py in _save_representative_docs(self, documents) 3000 bow = self.vectorizer_model.transform(selected_docs) 3001 ctfidf = self.ctfidf_model.transform(bow) -&gt; 3002 sim_matrix = cosine_similarity(ctfidf, self.c_tf_idf_[topic + self._outliers]) 3003 3004 # Extract top 3 most representative documents /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/sklearn/metrics/pairwise.py in cosine_similarity(X, Y, dense_output) 1178 # to avoid recursive import 1179 -&gt; 1180 X, Y = check_pairwise_arrays(X, Y) 1181 1182 X_normalized = normalize(X, copy=True) /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/sklearn/utils/validation.py in inner_f(*args, **kwargs) 61 extra_args = len(args) - len(all_args) 62 if extra_args &lt;= 0: ---&gt; 63 return f(*args, **kwargs) 64 65 # extra_args &gt; 0 
/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/sklearn/metrics/pairwise.py in check_pairwise_arrays(X, Y, precomputed, dtype, accept_sparse, force_all_finite, copy) 144 estimator=estimator) 145 else: --&gt; 146 X = check_array(X, accept_sparse=accept_sparse, dtype=dtype, 147 copy=copy, force_all_finite=force_all_finite, 148 estimator=estimator) /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/sklearn/utils/validation.py in inner_f(*args, **kwargs) 61 extra_args = len(args) - len(all_args) 62 if extra_args &lt;= 0: ---&gt; 63 return f(*args, **kwargs) 64 65 # extra_args &gt; 0 /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/sklearn/utils/validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, estimator) 648 if sp.issparse(array): 649 _ensure_no_complex_data(array) --&gt; 650 array = _ensure_sparse_format(array, accept_sparse=accept_sparse, 651 dtype=dtype, copy=copy, 652 force_all_finite=force_all_finite, /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/sklearn/utils/validation.py in _ensure_sparse_format(spmatrix, accept_sparse, dtype, copy, force_all_finite, accept_large_sparse) 446 % spmatrix.format, stacklevel=2) 447 else: --&gt; 448 _assert_all_finite(spmatrix.data, 449 allow_nan=force_all_finite == 'allow-nan') 450 /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/sklearn/utils/validation.py in _assert_all_finite(X, allow_nan, msg_dtype) 101 not allow_nan and not np.isfinite(X).all()): 102 type_err = 'infinity' if allow_nan else 'NaN, infinity' --&gt; 103 raise ValueError( 104 msg_err.format 105 (type_err, ValueError: Input contains NaN, infinity or a value too large for dtype('float64'). 
</code></pre> <p>This is my full code:</p> <pre><code>features = final_df[&quot;body&quot;] # does not have NaN or Infinite values, I have checked 10 times transformerVectoriser = CountVectorizer(analyzer = 'word', ngram_range = (1, 4), vocabulary = vocab_list) #my vocab list does not have NaN or Infinite values, I have checked 10 times cluster_model = KMeans(n_clusters = 50, init='k-means++', max_iter = 1500, random_state=None) bert_topic_model = BERTopic(hdbscan_model = cluster_model, vectorizer_model = transformerVectoriser, verbose = True, top_n_words = 15) #final_df.body does not have NaN or Infinite values, I have checked 10 times topics, probs = bert_topic_model.fit_transform(final_df.body) #ERROR </code></pre> <p>I really do not know what is the problem, and what is going on. All values in vocab_list are string values and all values in final_df.body are string values</p>
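The traceback bottoms out in a divide-by-zero inside c-TF-IDF, which typically happens when some documents produce an all-zero bag-of-words row, for example because none of their tokens appear in the fixed `vocabulary` passed to `CountVectorizer`. A minimal, library-free sketch of that pre-check (the vocabulary and documents here are hypothetical stand-ins for `vocab_list` and `final_df.body`):

```python
# Hypothetical stand-ins for vocab_list and final_df.body
vocab_list = ["topic", "model", "cluster"]
docs = ["topic model", "completely unrelated text", "cluster analysis"]

vocab = set(vocab_list)
# Indices of documents whose tokens never hit the vocabulary:
# these would vectorize to an all-zero row
empty_docs = [i for i, doc in enumerate(docs)
              if not any(token in vocab for token in doc.lower().split())]
print(empty_docs)  # [1]
```

If any index shows up here, either drop those rows or loosen the vocabulary. A real `CountVectorizer` tokenizes slightly differently (lowercasing, token pattern, n-grams), so treat this `split()`-based check as an approximation of the condition, not an exact reproduction.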
<python><machine-learning><bert-language-model>
2023-01-24 23:22:42
1
3,923
taga
75,228,274
15,724,084
python tkinter module Label widget wrapped in oop
<p>I am practicing OOP with the tkinter module in a personal project. I have written a code block where I first create a Label widget with an imperative line of code, then within a class.</p> <pre><code>from tkinter import * app=Tk() app.geometry('500x300') Button(app,width=13,height=1,text='scrape').pack() var_str=StringVar() var_str='result 001 ...' Label(app,width=33,height=1,text='res',textvariable=var_str).pack() class label(): def __init__(self,master,var_text): self.label=Label(master,width=33,height=1,textvariable=var_text).pack() lbl_one=label(app,var_str) app.mainloop() </code></pre> <p>The strange part is that if I comment out <code>Label(app,width=33,height=1,text='res',textvariable=var_str).pack()</code> then my object instantiation no longer works. I would like a clear answer as to why the <code>lbl_one</code> object shows the same text as the <code>Label(app...)</code> line.</p>
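One detail worth noticing in the snippet: `var_str = StringVar()` followed by `var_str = 'result 001 ...'` does not put text into the Tk variable, it rebinds the Python name to a plain `str`, and the `StringVar` is discarded. A GUI-free sketch of that distinction (using a stand-in class, since a real `StringVar` needs a running `Tk` instance):

```python
class FakeStringVar:
    """Minimal stand-in for tkinter.StringVar (no Tk needed)."""
    def __init__(self):
        self._value = ""
    def set(self, value):
        self._value = value
    def get(self):
        return self._value

var_str = FakeStringVar()
var_str = "result 001 ..."      # rebinds the NAME; the variable object is gone
print(type(var_str).__name__)   # str

var_ok = FakeStringVar()
var_ok.set("result 001 ...")    # the intended way to update a Tk variable
print(var_ok.get())             # result 001 ...
```

A plausible explanation for the observed behaviour (worth verifying, not confirmed here) is that when a plain string reaches `textvariable`, Tk treats it as the name of an underlying Tcl variable, so both labels end up bound to the same one. Also note that `Label(...).pack()` returns `None`, so `self.label` in the class holds `None` rather than the widget; call `pack()` on a separate line if you need the reference.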
<python><oop><tkinter>
2023-01-24 23:20:37
1
741
xlmaster
75,228,151
7,102,346
Why does an inherited, nested class not recognise its nested, parent class in its own method's type hint?
<p>That title is a handful, so let me share a code snippet first...</p> <pre class="lang-py prettyprint-override"><code>class Parent: class GenericNode(): def __init__(self, name: str) -&gt; None: self.name: str = name class SpecificNode(GenericNode): def __init__(self, name: str) -&gt; None: super().__init__(name) self.node_list: list[Parent.GenericNode] = [] def add_node(self, node: Parent.GenericNode) -&gt; None: self.node_list.append(node) </code></pre> <p>Here, in the <code>add_node(self, node: Parent.GenericNode)</code> method, the <code>Parent.GenericNode</code> type is not recognised by the Python interpreter - it throws a <code>NameError</code> when trying to run it. That seems very odd to me, since the exact same type was just used to define the type of the list in the constructor.</p> <p>However, when I take everything one level up - and get rid of the <code>Parent</code> class altogether like this:</p> <pre class="lang-py prettyprint-override"><code>class Node(): def __init__(self, name: str) -&gt; None: self.name: str = name class SpecificNode(Node): def __init__(self, name: str) -&gt; None: super().__init__(name) self.node_list: list[Node] = [] def add_node(self, node: Node) -&gt; None: self.node_list.append(node) </code></pre> <p>... everything miraculously works, and the code executes.</p> <p>I came across this <a href="https://stackoverflow.com/questions/63503512/python-type-hinting-own-class-in-method#63504316">answer</a>, which suggests using a string in place of a type hint like so: <code>add_node(self, node: 'Parent.GenericNode')</code>. Now this code executes and even <a href="https://github.com/python/mypy" rel="nofollow noreferrer"><code>mypy</code></a> manages to successfully type-check it.</p> <p>However, as that answer also mentions, this (or rather the equivalent of using <code>from __future__ import annotations</code>) should have been the default since Python 3.10 and I'm testing this on Python 3.11.
Was it perhaps not made part of the release?</p> <p>I tried searching online but couldn't find any mention of this particular behaviour. Could someone please help me shed some light on this?</p> <p><strong>EDIT:</strong> I should also mention, I find it weird that there would be a problem with the <code>Parent</code> being undefined like many other answers suggest (<a href="https://stackoverflow.com/questions/33533148/how-do-i-type-hint-a-method-with-the-type-of-the-enclosing-class">e.g.</a>), because that very type annotation is used in inside another method on the same level. The oddity to me is that in parameter type hint it does not work, but in a member variable type hint, it does.</p>
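For reference: PEP 563 lazy annotations were deferred and did not become the default in 3.10 or 3.11, which matches the observed `NameError`. By default, signature annotations are evaluated at `def` time, while the `class Parent` statement is still executing and the name `Parent` is not yet bound; the attribute annotation in `__init__` is only reached at call time, after `Parent` exists, which is why that one works. A sketch with the future import enabled:

```python
from __future__ import annotations  # every annotation becomes a lazy string

class Parent:
    class GenericNode:
        def __init__(self, name: str) -> None:
            self.name = name

    class SpecificNode(GenericNode):
        def __init__(self, name: str) -> None:
            super().__init__(name)
            self.node_list: list[Parent.GenericNode] = []

        # Stored as the string "Parent.GenericNode" and never evaluated
        # at definition time, so no NameError is raised.
        def add_node(self, node: Parent.GenericNode) -> None:
            self.node_list.append(node)

root = Parent.SpecificNode("root")
root.add_node(Parent.GenericNode("child"))
print(len(root.node_list))  # 1
```

The future import is equivalent to quoting every annotation by hand, so it fixes exactly the case the linked answer describes.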
<python><oop><type-hinting><python-3.11>
2023-01-24 22:58:00
0
370
Marty Cagas
75,228,061
2,897,989
Jupyter on Kubeflow more unstable than locally - even with better specs
<p>I'm trying to run some SciKit-Learn training in a Jupyter Notebook running on (Charmed) Kubeflow. I've done the same training locally on my Windows laptop also on a Jupyter Notebook, and it works well. I try to run the same on Kubeflow Jupyter (with the notebook instance having better specs than my laptop, the latter having a 4-core CPU and 16GBs of ram, and the KF notebook - not Kubeflow generally, the specific notebook server - having 8 cores and 20GBs assigned to it).</p> <p>From what I've searched around here on SO and Google, the logs showing <code>AsyncIOLoopKernelRestarter: restarting kernel (1/5), keep random ports</code> probably means I'm running out of memory. But on Windows I have less memory, and the training runs fine, just slower.</p> <p>If this question is inappropriate or not formulated correctly, please give feedback in a comment about what to do before reporting, I'll modify or delete accordingly. Thank you!</p> <p>What can cause such behavior (the kernel stopping when I try to re-run the training cell) and why does it work under Windows but not Kubeflow? For reference, this is the code I'm running:</p> <pre><code>from sklearn.model_selection import RandomizedSearchCV if models: del(models) models = [] # Iterate through each target variable for target in target_cols: # Create a random forest classifier rf = RandomForestClassifier(verbose=1) # Set up the hyperparameter search space param_distributions = {'n_estimators': [10, 50, 100, 200], 'max_depth': [None, 10, 20, 30], 'min_samples_split': [2, 5, 10]} # Create a randomized search object search = RandomizedSearchCV(estimator=rf, param_distributions=param_distributions, n_iter=10, cv=5, n_jobs=-1) # Fit the model to the data #with mlflow.start_run() as run: search.fit(X_train, y_train[target]) models.append(search.best_estimator_) print(f&quot;Search complete for column {target}, with a best score of: {search.best_score_}&quot;) </code></pre>
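One hypothesis worth checking before anything else: a notebook pod that exceeds its memory limit is OOM-killed and the kernel silently restarts, whereas a laptop OS can page to swap, so the same job merely slows down there. With `n_jobs=-1`, joblib may hold a working copy of the training data per worker, so a rough back-of-envelope (all numbers here are hypothetical) looks like:

```python
data_gb = 2.0              # assumed in-memory size of X_train / y_train
n_jobs = 8                 # cores assigned to the notebook pod
copies = 1 + n_jobs        # parent plus one working copy per worker (worst case)
peak_estimate_gb = data_gb * copies
print(peak_estimate_gb)    # 18.0, uncomfortably close to a 20 GB pod limit
```

Dropping `n_jobs` to a small fixed number (or `cv` from 5 to 3) is a cheap way to test the hypothesis, and `kubectl describe pod` reporting `OOMKilled` for the notebook container would confirm it.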
<python><jupyter-notebook><kubeflow>
2023-01-24 22:46:50
1
7,601
lte__
75,228,036
4,784,433
How to use ANTLR4 to get a list of functions and classes in string format regardless of the programming language?
<p>Say we have some files &quot;index.js&quot;, &quot;main.java&quot;, &quot;test.rs&quot;, and I want to output a list of functions/classes (along with doc comments) in these files.</p> <p>For example:</p> <pre><code>output: [ &quot;function jsFunction() { console.log(&quot;hello world!&quot;); }&quot;, &quot;class HelloWorld&quot;: [ &quot;// This function prints &quot;Hello World&quot; public void javaFunction() { this.print(); }&quot;, &quot;// This is a private method for printing private void print() { System.out.println(&quot;Hello world&quot;); } &quot; ], &quot;// This is a rust function fn main() { println!(&quot;Hello, world!&quot;); } &quot; ] </code></pre> <p>Is it possible to do this with ANTLR4 and Python? Assuming I have all parsers and lexers for popular languages.</p>
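It is possible, but ANTLR gives you a different parse tree per grammar, so in practice you write one listener/visitor per language that collects the source span and any leading comment of function and class nodes. As a single-language illustration of that walk-and-collect pattern (using Python's stdlib `ast` instead of ANTLR, so this covers only `.py` files):

```python
import ast

source = '''
def greet():
    """Prints a greeting."""
    print("hello world!")

class HelloWorld:
    def shout(self):
        """Doc comment for the method."""
        return "Hello world"
'''

found = []
for node in ast.walk(ast.parse(source)):
    if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
        found.append((node.name, ast.get_docstring(node)))

print(found)
```

When you need the full snippet rather than just the docstring, `ast.get_source_segment(source, node)` recovers the exact function text. An ANTLR listener generated from each grammar would follow the same shape: match the rule for a function/class declaration, then slice the token stream for its span.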
<python><antlr><antlr4>
2023-01-24 22:42:40
2
641
PipEvangelist
75,227,988
2,009,558
why can I model some of these ellipses with skimage and not others?
<p>I am cross-posting this from GitHub as I am not sure whether the issue is a problem with my data or a bug. <a href="https://github.com/scikit-image/scikit-image/issues/6699" rel="nofollow noreferrer">https://github.com/scikit-image/scikit-image/issues/6699</a></p> <p>I have thousands of elliptical features in my microscopy data that I want to fit models to using skimage. The model fails on some for no obvious reason. Here's code to reproduce:</p> <pre><code>import numpy as np from skimage.measure import EllipseModel import plotly.express as px good_x1 = [779.026, 778.125, 776.953, 776.195, 775.617, 775.068, 774.127, 773.696, 773.305, 773.113, 773.088, 773.233, 773.449, 773.913, 774.344, 774.625, 775.179, 775.777, 776.254, 777.039, 777.926, 778.945, 780.023, 781.059, 781.973, 782.777, 783.244, 783.922, 784.995, 785.825, 786.196, 786.486, 786.65, 786.614, 786.482, 786.153, 785.749, 785.507, 784.901, 784.482, 783.879, 782.809, 781.965, 780.998, 780.001, 779.026] good_y1 = [309.143, 309.432, 309.912, 310.35, 310.46, 311.087, 312.099, 312.879, 314.085, 315.012, 315.995, 316.948, 318.166, 319.044, 319.751, 320.283, 320.794, 321.34, 321.505, 321.908, 322.254, 322.478, 322.467, 322.243, 321.929, 321.561, 321.449, 320.891, 319.995, 318.905, 318.07, 316.872, 315.97, 315.037, 313.883, 312.943, 312.17, 311.623, 311.093, 310.477, 310.151, 309.54, 309.18, 309.027, 309.022, 309.143] good_x2 = [434.959, 434.0, 433.012, 432.093, 430.938, 430.279, 429.847, 429.535, 429.257, 429.031, 428.843, 429.0, 429.348, 429.872, 430.313, 431.048, 432.189, 433.043, 434.003, 434.971, 435.769, 436.199, 436.743, 437.263, 437.824, 438.017, 438.018, 437.831, 437.449, 437.29, 436.807, 436.255, 435.776, 434.959] good_y2 = [215.849, 216.001, 215.929, 215.684, 215.09, 214.615, 214.117, 213.631, 212.903, 211.992, 211.017, 210.0, 209.39, 208.857, 208.587, 208.087, 207.57, 207.247, 207.135, 207.2, 207.565, 207.73, 208.248, 208.819, 210.055, 210.998, 212.001, 212.952, 213.687, 214.168, 214.781, 215.333, 
215.49, 215.849] good_x3 = [1666.998, 1666.014, 1665.206, 1664.689, 1664.302, 1663.977, 1663.969, 1664.293, 1664.527, 1665.09, 1665.929, 1667.048, 1668.016, 1668.658, 1669.171, 1669.638, 1669.599, 1668.995, 1667.916, 1666.998] good_y3 = [85.023, 85.07, 85.414, 85.685, 86.245, 86.994, 88.004, 88.835, 89.364, 89.862, 90.302, 90.338, 90.034, 89.491, 89.134, 87.917, 86.807, 86.004, 85.251, 85.023] bad_x1 = [1541.221, 1541.848, 1543.009, 1544.15, 1544.962, 1545.777, 1545.943, 1545.786, 1545.103, 1543.986, 1543.14, 1541.968, 1541.094, 1540.765, 1540.799, 1541.221] bad_y1 = [1254.78, 1255.29, 1255.535, 1255.395, 1254.945, 1253.922, 1253.0, 1252.063, 1250.892, 1250.374, 1250.401, 1250.959, 1252.049, 1252.968, 1254.069, 1254.78] bad_x2 = [1739.079, 1738.567, 1738.392, 1738.118, 1738.17, 1738.782, 1739.302, 1740.179, 1741.013, 1741.999, 1742.997, 1743.423, 1744.178, 1743.811, 1743.735, 1743.595, 1743.308, 1742.834, 1742.342, 1741.813, 1740.998, 1739.995, 1739.079] bad_y2 = [329.807, 329.316, 328.814, 327.989, 327.061, 325.853, 325.22, 324.478, 324.115, 324.078, 324.154, 324.49, 324.753, 325.994, 326.902, 327.679, 328.143, 328.836, 329.41, 329.628, 329.99, 330.067, 329.807] bad_x3 = [992.001, 991.057, 989.879, 989.599, 989.252, 989.286, 989.894, 991.05, 991.983, 992.806, 993.286, 993.846, 994.32, 994.481, 994.088, 992.959, 992.001] bad_y3 = [136.048, 136.19, 136.883, 137.551, 138.053, 138.929, 140.102, 140.767, 140.846, 140.551, 140.416, 139.851, 139.115, 137.938, 136.94, 136.168, 136.048] xy = [(good_x1, good_y1, 'good_1'), (good_x2, good_y2, 'good_2'), (good_x3, good_y3, 'good_3'), (bad_x1, bad_y1, 'bad_1'), (bad_x2, bad_y2, 'bad_2'), (bad_x3, bad_y3, 'bad_3')] for ii in xy: points = list(zip(ii[0], ii[1])) a_points = np.array(points) model = EllipseModel() if model.estimate(a_points) == False: fig = px.line(x= ii[0], y= ii[1], title='model fitting failed for ' + ii[2]) fig.show() try: xc, yc, a, b, theta = model.params print(model.params) ellipse_centre = (xc, yc) 
residuals = model.residuals(a_points) print(residuals) except Exception as e: print(e) else: fig = px.line(x= ii[0], y= ii[1], title='model fitting successful for ' + ii[2]) fig.show() xc, yc, a, b, theta = model.params print(model.params) ellipse_centre = (xc, yc) residuals = model.residuals(a_points) print(residuals) </code></pre> <p>Visually these features all seem elliptical and there is no difference between them in the length of the point lists or other properties. I think it's a bug but would appreciate a 2nd opinion, thanks.</p> <p><strong>Addition 2023-01-25:</strong> There is another implementation of the same algorithm underlying the skimage function, in this blog post: <a href="https://scipython.com/blog/direct-linear-least-squares-fitting-of-an-ellipse/" rel="nofollow noreferrer">https://scipython.com/blog/direct-linear-least-squares-fitting-of-an-ellipse/</a></p> <p>I have tested this on the same data and it variously fits, misfits or throws errors on my sets of points. Interestingly, it does this in a slightly different pattern from the skimage function. I think this problem is a fundamental flaw in the algorithm and not a bug.</p>
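A second opinion in sketch form: the direct least-squares conic fit is known to be numerically fragile when coordinates sit far from the origin, as the ~1000-1700-range "bad" sets here do. An easy experiment (this is an assumption about the failure mode, not a confirmed diagnosis) is to centre each point set before calling `estimate` and shift the fitted centre back afterwards:

```python
def centre_points(points):
    """Translate points so their centroid sits at the origin."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return [(x - cx, y - cy) for x, y in points], (cx, cy)

# A few points taken from the bad_1 contour in the question
pts = [(1541.221, 1254.78), (1545.943, 1253.0), (1543.14, 1250.401)]
centred, (cx, cy) = centre_points(pts)
print(abs(sum(x for x, _ in centred)) < 1e-9)  # True: data is now centred
```

If `model.estimate(a_points - a_points.mean(axis=0))` succeeds where the raw call failed, add `(cx, cy)` back onto the fitted `xc, yc` and keep `a, b, theta` as they are; that would support the conditioning explanation over a plain bug.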
<python><computational-geometry><scikit-image>
2023-01-24 22:35:55
1
341
Ninja Chris
75,227,947
19,130,803
Dash: how to write callback for html.Form component
<p>I am developing a dash app. I am creating a form using the <strong>html.Form</strong> component.</p> <pre><code>html.Form( id=&quot;form_upload&quot;, name=&quot;form_upload&quot;, method=&quot;POST&quot;, action=&quot;/upload&quot;, encType=&quot;multipart/form-data&quot;, # accept=&quot;application/octet-stream&quot;, children=[ dbc.Input(id=&quot;ip_name&quot;, name=&quot;ip_name&quot;, type=&quot;text&quot;), dbc.Input(id=&quot;ip_upload&quot;, name=&quot;ip_upload&quot;, type=&quot;file&quot;), dbc.Input(id=&quot;ip_submit&quot;, name=&quot;ip_submit&quot;, type=&quot;submit&quot;), ], ), </code></pre> <p>But I have no clue how to write a callback for the above form, so that I can access and process the form inputs (request payload), i.e. the name and file contents. I read the official documentation and searched a lot but did not find any demo or example.</p> <p>Please help.</p>
<python><plotly-dash>
2023-01-24 22:30:13
1
962
winter
75,227,885
11,627,201
How can stomp with RabbitMQ be faster than just a normal websocket?
<p>Sending 100 000 messages of 300 characters from a python server to a JavaScript client using RabbitMQ and STOMP takes 10-20 seconds:</p> <p>Python server:</p> <pre class="lang-py prettyprint-override"><code>import stomp PORT = 61613 LOCALHOST = '0.0.0.0' conn = stomp.Connection11([(LOCALHOST, PORT)]) # conn.start() conn.connect('guest','guest') # from time import sleep from random import choice, randint from string import ascii_uppercase from time import time conn.send(body=&quot;start&quot;,destination='/queue/test') t = time() for i in range(100000): conn.send(body=''.join(choice(ascii_uppercase) for i in range(300)),destination='/queue/test') t = time() - t conn.send(body=&quot;end&quot;,destination='/queue/test') print() print(100000, &quot;took &quot;, t*1000, &quot; ms&quot;) conn.disconnect() </code></pre> <p>JavaScript client:</p> <pre class="lang-js prettyprint-override"><code> &lt;script&gt; var client = Stomp.client('ws://localhost:15674/ws'); var test = document.getElementById(&quot;test&quot;) client.debug = null; var date1 = null var date2 = null test.innerText = &quot;READY... 
START!&quot; var sub = function(d) { test.innerText = d.body if(d.body == &quot;start&quot;) { date1 = new Date().valueOf() } if(d.body == &quot;end&quot;) { date2 = new Date().valueOf() console.log(100000, &quot;took &quot;, date2 - date1, &quot; ms&quot;) } } var on_connect = function(x) { id = client.subscribe(&quot;/queue/test&quot;, sub); console.log(&quot;connected&quot;) }; var on_error = function(e) { console.log('error', e); }; client.connect('guest', 'guest', on_connect, on_error, '/'); &lt;/script&gt; </code></pre> <p>However, when I tried to do it with just a plain websocket, this takes at least 22 seconds:</p> <p>Python server:</p> <pre class="lang-py prettyprint-override"><code>import asyncio import websockets import random import string async def echo(websocket): print(&quot;connected&quot;) await websocket.send(&quot;start&quot;) for i in range(100000): await websocket.send(''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(300))) await websocket.send(&quot;end&quot;) print(&quot;sent&quot;) async def main(): async with websockets.serve(echo, &quot;localhost&quot;, 8765): await asyncio.Future() asyncio.run(main()) </code></pre> <p>javascript client:</p> <pre><code>&lt;body&gt; &lt;div id=&quot;test&quot;&gt; READY... SET... GO!! &lt;/div&gt; &lt;script&gt; var time1 = null var time2 = null var socket = new WebSocket('ws://localhost:8765/ws'); socket.onmessage = function(e) { var server_message = e.data; if (server_message == &quot;start&quot;) { time1 = new Date().valueOf() } if (server_message == &quot;end&quot;) { time2 = new Date().valueOf() console.log(&quot;100000 messages took &quot;, (time2 - time1) / 1000, &quot;seconds&quot;) } document.getElementById(&quot;test&quot;).innerText = server_message } &lt;/script&gt; &lt;/body&gt; </code></pre> <p>Is there any reason for this?
I thought STOMP is basically websocket + whatever RabbitMQ is doing (after all the client is receiving information through a websocket), so wouldn't it be slower than just a plain websocket? Or is the problem with asyncio?</p> <p>Thanks!</p>
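One hedged explanation: the broker path may benefit from buffering and frame coalescing along the way, while the plain-websocket loop pays the await-and-frame overhead once per 300-byte message. A library-free sketch of the batching pattern that usually closes such gaps (the `sink` coroutine is a stand-in for a real `websocket.send`):

```python
import asyncio

async def send_batched(sink, messages, batch_size=100):
    """Group small payloads so the per-send overhead is paid once per batch."""
    buf = []
    for msg in messages:
        buf.append(msg)
        if len(buf) == batch_size:
            await sink("\n".join(buf))
            buf.clear()
    if buf:                         # flush the final partial batch
        await sink("\n".join(buf))

async def main():
    sent = []
    async def sink(payload):        # stand-in for websocket.send(payload)
        await asyncio.sleep(0)
        sent.append(payload)
    await send_batched(sink, (f"msg-{i}" for i in range(1000)))
    return sent

frames = asyncio.run(main())
print(len(frames))  # 10 frames instead of 1000 individual sends
```

The client then splits each frame on the delimiter. Batch size is a trade-off between latency and throughput; measuring both loops with the same batching would make the STOMP-vs-websocket comparison fairer.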
<python><websocket><rabbitmq><python-asyncio><stomp>
2023-01-24 22:21:45
0
798
qwerty_99
75,227,868
10,963,057
plotly (python) linechart with changing color
<p>I am looking for a solution to create a plotly linechart, built with go.Scatter like this example:</p> <p><a href="https://i.sstatic.net/yVpFN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yVpFN.png" alt="enter image description here" /></a></p> <p>What I already tried:</p> <pre><code>import plotly.graph_objects as go from plotly.subplots import make_subplots import yfinance as yf df = yf.download(&quot;AAPL MSFT&quot;, start=&quot;2022-01-01&quot;, end=&quot;2022-07-01&quot;, group_by='ticker') df.reset_index(inplace=True) title = 'Price over time' fig = make_subplots(rows=1, cols=1, vertical_spacing=0.05, shared_xaxes=True, subplot_titles=(title, &quot;&quot;)) # AAPL fig.add_trace(go.Scatter(x=df['Date'], y=df[('AAPL', 'Close')], marker_color=df[('AAPL', 'Close')].apply( lambda x: 'green' if 120 &lt;= x &lt;= 150 else 'red' if 151 &lt;= x &lt;= 170 else 'yellow' if 171 &lt;= x &lt;= 190 else 'blue'), mode='lines+markers', showlegend=True, name=&quot;AAPL&quot;, stackgroup='one'), row=1, col=1, secondary_y=False) fig.show() </code></pre> <p>I expect a solution with go.Scatter. The code provided is only a part of the solution, and I would like to add subplots. The colors used should depend on the value of the y-axis (e.g. 'Close' in this case). Two colors are possible; for example, the higher the price, the more green the color. It may be necessary to create an additional column with color codes first.</p>
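For the "higher price, more green" variant, a small helper that builds the colour column ahead of time (a sketch; the bounds and the red-to-green ramp are assumptions):

```python
def value_to_rgb(value, vmin, vmax):
    """Linearly ramp from red at vmin to green at vmax."""
    t = (value - vmin) / (vmax - vmin)
    t = min(1.0, max(0.0, t))            # clamp out-of-range prices
    return f"rgb({round(255 * (1 - t))},{round(255 * t)},0)"

closes = [120.0, 155.0, 190.0]           # hypothetical closing prices
colors = [value_to_rgb(c, 120.0, 190.0) for c in closes]
print(colors)
```

You would build the column once, e.g. `df[('AAPL', 'color')] = df[('AAPL', 'Close')].apply(lambda v: value_to_rgb(v, low, high))`, and pass it to `marker_color`. Note that per-point colour applies to the markers; a single `go.Scatter` trace keeps one line colour, so a line whose colour truly changes along its length needs one segment (or trace) per colour.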
<python><colors><plotly>
2023-01-24 22:20:03
2
1,151
Alex
75,227,661
940,490
Dynamically creating serializable classes in Python
<p>I am trying to update a class that is supposed to be used as a custom type for the <code>multiprocessing.manager</code> and imitate a basic dictionary. All works well on Linux, but things fail on Windows and I understood that the problem lies in a possibly suboptimal creation mechanism that it uses that involves a closure. With forking, Linux gets around serializing something that <code>pickle</code> cannot cope with, while this does not happen on Windows. I am using Python 3.6 and feel like it is better to improve the class rather than force a new package dependency that has more robust serialization than <code>pickle</code>.</p> <p>An example that I think demonstrates this is presented below. It involves a class that is meant to act like a <code>dict</code>, but have an additional method and a class attribute. These are bound in a factory method that the code calls and passes to <code>multiprocessing.manager.register</code>. I get <code>AttributeError: Can't pickle local object 'foo_factory.&lt;locals&gt;.Foo'</code> as a result here.</p> <pre class="lang-py prettyprint-override"><code>import abc import pickle class FooTemplate(abc.ABC, dict): bar = None def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.foo = 'foo' @abc.abstractmethod def special_method(self, arg1, arg2): pass def foo_factory(dynamic_special_method): class Foo(FooTemplate): bar = 'bar' def special_method(self, arg1, arg2): print(self.foo, ' ', self.bar, ' ', dynamic_special_method(arg1, arg2)) return Foo def method_to_pass(a1, a2): return a1 + a2 if __name__ == '__main__': foo = foo_factory(method_to_pass)() pickle.dumps(foo) </code></pre> <p>I attempted to fix the problem by creating a class dynamically, but this throws a new error that I am not sure I understand and it makes things look even worse with all honesty. 
Using the <code>main</code> part from above with the code below produces error <code>_pickle.PicklingError: Can't pickle &lt;class '__main__.Foo'&gt;: attribute lookup Foo on __main__ failed</code>.</p> <pre class="lang-py prettyprint-override"><code>class FooTemplate(dict): bar = None method_map = None def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.foo = 'foo' def special_method(self, arg1, arg2): print(self.foo, ' ', self.bar, ' ', self.method_map[self.bar](arg1, arg2)) def foo_factory(dynamic_special_method): return type('Foo', (FooTemplate,), {'bar': 'bar', 'method_map': {'bar': dynamic_special_method}}) </code></pre> <p>Error above aside, I feel like I am missing something fundamental and that I took a wrong direction. Even if this worked, it feels wrong to introduce a new attribute with a nested structure to simply keep a method which avoids calls to this method as a class method with <code>self</code> in the front...</p> <p>Maybe someone can suggest a better direction how to create a preferably serializable class which imitates a dictionary and that can also get parameters dynamically? An explanation of the error that I get would be very useful too, but I think this is not the biggest problem I am facing here. Thank you for any help in advance.</p>
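Both errors come from the same rule: `pickle` stores classes by importable qualified name, and neither a closure-local `Foo` nor a `type(...)`-built `Foo` that is not bound under that name at module level can be looked up again on unpickling. One direction that sidesteps the factory entirely (a sketch, not the only possible design) is to keep a single module-level class and carry the dynamic method as instance state, since module-level functions also pickle by reference:

```python
import pickle

def method_to_pass(a1, a2):
    # Module-level functions pickle by reference (their qualified name)
    return a1 + a2

class Foo(dict):
    """Single importable class; the 'dynamic' part lives in instance state."""
    def __init__(self, special, bar="bar", *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.foo = "foo"
        self.bar = bar
        self._special = special

    def special_method(self, arg1, arg2):
        return (self.foo, self.bar, self._special(arg1, arg2))

foo = Foo(method_to_pass, a=1)
clone = pickle.loads(pickle.dumps(foo))   # round-trips: class and function
print(clone.special_method(2, 3))          # are both importable by name
print(clone["a"])
```

For `multiprocessing.managers` this means registering the one class once and passing the callable in at construction time, so no per-call class creation is needed. If distinct dynamic classes genuinely must exist, the other standard route is giving their instances a `__reduce__` that returns the factory plus its arguments, so unpickling re-runs the factory instead of looking the class up by name.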
<python><oop><python-multiprocessing><python-class>
2023-01-24 21:51:29
1
1,615
J.K.
75,227,369
12,647,327
Aiokafka consumer process events in parallel
<p>I'm just getting started with Kafka. I have a k8s cluster in which I want to deploy event listeners. When I have one listener running, everything works fine, but with several pods they process events in parallel; I would like each event to be processed only once. How can I achieve this?</p> <p>My listener code:</p> <pre><code>import asyncio, settings, json from aiokafka import AIOKafkaConsumer event_handler = { &quot;table_create&quot;: table_create_event, &quot;table_delete&quot;: table_delete_event, } consumer_config = [ { &quot;name&quot;: &quot;main consumer1&quot;, &quot;topics&quot;: [&quot;storage_create&quot;, &quot;storage_update&quot;, &quot;storage_delete&quot;, &quot;table_create&quot;, &quot;table_update&quot;, &quot;table_delete&quot;, &quot;field_create&quot;, &quot;field_update&quot;, &quot;field_delete&quot;, &quot;value_create&quot;, &quot;value_update&quot;, &quot;value_delete&quot;], &quot;group_id&quot;: &quot;cms_events&quot; }, { &quot;name&quot;: &quot;main consumer2&quot;, &quot;topics&quot;: [&quot;storage_create&quot;, &quot;storage_update&quot;, &quot;storage_delete&quot;, &quot;table_create&quot;, &quot;table_update&quot;, &quot;table_delete&quot;, &quot;field_create&quot;, &quot;field_update&quot;, &quot;field_delete&quot;, &quot;value_create&quot;, &quot;value_update&quot;, &quot;value_delete&quot;], &quot;group_id&quot;: &quot;cms_events&quot; } ] async def consume(topics, group_id): consumer = AIOKafkaConsumer( *topics, bootstrap_servers='localhost:9092', group_id=group_id, auto_offset_reset=&quot;earliest&quot;, metadata_max_age_ms=30000, ) await consumer.start() try: async for msg in consumer: print( &quot;{}:{:d}:{:d}: key={} value={} timestamp_ms={}&quot;.format( msg.topic, msg.partition, msg.offset, msg.key, msg.value, msg.timestamp) ) topic = msg.topic encode_event_body = msg.value decode_event_body = json.loads(encode_event_body) try: await event_handler[topic](decode_event_body) except Exception as exc: print(exc) finally:
await consumer.stop() async def main(): await asyncio.gather(*[ consume(topics=consumer.get(&quot;topics&quot;), group_id=consumer.get(&quot;group_id&quot;)) for consumer in consumer_config ] ) if __name__ == &quot;__main__&quot;: asyncio.run(main()) </code></pre>
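For context on what the group id buys you: within one consumer group, Kafka assigns each partition to exactly one group member, so every record is processed once per group; consumers with different (or missing) group ids each receive the full stream. Since every pod here subscribes with `group_id="cms_events"`, duplicates usually point at the consumers not actually landing in the same group. A toy model of the assignment rule (round-robin, a simplification of the real assignor):

```python
def assign_partitions(partitions, members):
    """Toy stand-in for Kafka's group assignor: inside one consumer
    group, every partition goes to exactly one member."""
    members = sorted(members)
    assignment = {m: [] for m in members}
    for i, p in enumerate(sorted(partitions)):
        assignment[members[i % len(members)]].append(p)
    return assignment

result = assign_partitions(
    ["table_create-0", "table_create-1", "table_delete-0", "table_delete-1"],
    ["pod-a", "pod-b"],
)
print(result)
```

A consequence worth knowing: with single-partition topics, a second pod in the same group simply sits idle as a hot spare, which is expected behaviour, not a bug.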
<python><kubernetes><apache-kafka><aiokafka>
2023-01-24 21:14:53
1
323
Kabiljan Tanaguzov
75,227,205
1,330,381
How to configure thread names for multiprocessing.BaseManager instances
<p>There's some usage of these BaseManager types in some code I'm debugging. <a href="https://docs.python.org/3/library/multiprocessing.html#multiprocessing.managers.BaseManager" rel="nofollow noreferrer">https://docs.python.org/3/library/multiprocessing.html#multiprocessing.managers.BaseManager</a></p> <pre class="lang-py prettyprint-override"><code>from multiprocessing.managers import BaseManager </code></pre> <p>The log configuration for our process has the thread name expressed using the standard <a href="https://docs.python.org/3/library/logging.html#logrecord-attributes" rel="nofollow noreferrer">record attribute</a> <code>%(threadName)s</code>.</p> <p>This results in the typical indexed format you find in multithreaded log traces from threads that do not get names ascribed to them</p> <pre><code>BaseManager-15|MainProcess BaseManager-17|MainProcess BaseManager-38|MainProcess </code></pre> <p>Is there a way to get threads under a BaseManager to have their thread name specified?</p>
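There is no documented BaseManager knob for this, but `%(threadName)s` is read from the live `threading.current_thread().name`, and a thread can rename itself at any time. So one workaround (an assumption about the setup, not a documented BaseManager API) is to rename the current thread at the top of each exposed method, since the manager serves each call on one of those `BaseManager-N` threads. A manager-free sketch of the renaming itself:

```python
import threading

def handle_request(job_id, results):
    # Renaming the serving thread; %(threadName)s picks this up immediately
    threading.current_thread().name = f"worker-{job_id}"
    results.append(threading.current_thread().name)

results = []
threads = [threading.Thread(target=handle_request, args=(i, results))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # ['worker-0', 'worker-1', 'worker-2']
```

Inside a registered manager type, the same two lines at the start of the exposed method would replace the auto-generated `BaseManager-N` names in the log output for that call's thread.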
<python><python-multiprocessing>
2023-01-24 20:55:25
1
8,444
jxramos
75,227,167
8,372,455
pandas df index to list
<p>Im trying to run a pandas df <em><strong>index</strong></em> to list (time series data) with <code>index_list = df.index.values.tolist()</code> and it will output data like this:</p> <pre><code>index_list: [1674576900000000000, 1674577800000000000, 1674578700000000000, 1674579600000000000, 1674580500000000000, 1674581400000000000, 1674582300000000000, 1674583200000000000, 1674584100000000000, 1674585000000000000, 1674585900000000000, 1674586800000000000, 1674587700000000000, 1674588600000000000, 1674589500000000000, 1674590400000000000, 1674591300000000000, 1674592200000000000, 1674593100000000000, 1674594000000000000, 1674594900000000000, 1674595800000000000, 1674596700000000000, 1674597600000000000, 1674598500000000000, 1674599400000000000, 1674600300000000000, 1674601200000000000, 1674602100000000000, 1674603000000000000, 1674603900000000000, 1674604800000000000, 1674605700000000000, 1674606600000000000, 1674607500000000000, 1674608400000000000, 1674609300000000000, 1674610200000000000, 1674611100000000000, 1674612000000000000, 1674612900000000000, 1674613800000000000, 1674614700000000000, 1674615600000000000, 1674616500000000000, 1674617400000000000, 1674618300000000000, 1674619200000000000, 1674620100000000000, 1674621000000000000, 1674621900000000000, 1674622800000000000, 1674623700000000000, 1674624600000000000, 1674625500000000000, 1674626400000000000, 1674627300000000000, 1674628200000000000, 1674629100000000000, 1674630000000000000, 1674630900000000000, 1674631800000000000, 1674632700000000000, 1674633600000000000, 1674634500000000000, 1674635400000000000, 1674636300000000000, 1674637200000000000, 1674638100000000000, 1674639000000000000, 1674639900000000000, 1674640800000000000, 1674641700000000000, 1674642600000000000, 1674643500000000000, 1674644400000000000, 1674645300000000000, 1674646200000000000, 1674647100000000000, 1674648000000000000, 1674648900000000000, 1674649800000000000, 1674650700000000000, 1674651600000000000, 1674652500000000000, 
1674653400000000000, 1674654300000000000, 1674655200000000000, 1674656100000000000, 1674657000000000000, 1674657900000000000, 1674658800000000000, 1674659700000000000, 1674660600000000000, 1674661500000000000, 1674662400000000000] </code></pre> <p>Is it possible in Pandas to output the pandas time series <em><strong>index</strong></em> to a format like this below which is like a list of time tuples if I understand the data structure correctly?</p> <pre><code>index_list= [time(hour=11, minute=14, second=15), time(hour=11, minute=14, second=30), time(hour=11, minute=14, second=45), time(hour=11, minute=15, second=00), time(hour=11, minute=15, second=15), time(hour=11, minute=15, second=30), time(hour=11, minute=15, second=45), time(hour=11, minute=16, second=00), time(hour=11, minute=16, second=15), time(hour=11, minute=16, second=30), time(hour=11, minute=16, second=45), time(hour=11, minute=17, second=00), time(hour=11, minute=17, second=15), time(hour=11, minute=17, second=30), time(hour=11, minute=17, second=45), time(hour=11, minute=18, second=00), time(hour=11, minute=18, second=15), time(hour=11, minute=18, second=30)] </code></pre>
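A possible approach (a sketch, not from the original post): those integers are nanoseconds since the Unix epoch, which `pd.to_datetime` interprets directly, and a `DatetimeIndex` exposes plain `datetime.time` objects via its `.time` attribute.

```python
import pandas as pd

# the index values are nanoseconds since the Unix epoch, 900 s (15 min) apart
index_list = [1674576900000000000, 1674577800000000000, 1674578700000000000]

# integers are interpreted as nanoseconds by default, giving a DatetimeIndex;
# .time then yields plain datetime.time objects (UTC, since the ints are naive)
times = pd.to_datetime(index_list).time.tolist()
print(times)
```

If the frame's index is already a `DatetimeIndex`, `df.index.time.tolist()` gets there without the round-trip through integers.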
<python><pandas>
2023-01-24 20:51:48
1
3,564
bbartling
75,227,108
11,922,765
AttributeError: 'DataFrame' object has no attribute 'to_flat_index'
<p>I am importing an HTML file. It has the data in a weird format and with multi index.</p> <p><a href="https://i.sstatic.net/40iKg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/40iKg.png" alt="enter image description here" /></a></p> <p>I am particularly interested in importing the table 'Photovoltaic' and it starts at line 10 in the big table. The table seems to be of multiindex.</p> <p>code:</p> <pre><code> net_met_cus = 'https://www.eia.gov/electricity/annual/html/epa_04_10.html' net_met = pd.read_html(net_met_cus) print(len(net_met)) net_met_pv = net_met[1] # Photovoltaic table starts at 12 row print(net_met_pv.loc[12]) Unnamed: 0_level_0 Year Photovoltaic Capacity (MW) Residential Photovoltaic Commercial Photovoltaic Industrial Photovoltaic Transportation Photovoltaic Total Photovoltaic Customers Residential Photovoltaic Commercial Photovoltaic Industrial Photovoltaic Transportation Photovoltaic Total Photovoltaic Name: 12, dtype: object # Is it multiindex print(net_met_pv.loc[12].index) MultiIndex([('Unnamed: 0_level_0', 'Year'), ( 'Capacity (MW)', 'Residential'), ( 'Capacity (MW)', 'Commercial'), ( 'Capacity (MW)', 'Industrial'), ( 'Capacity (MW)', 'Transportation'), ( 'Capacity (MW)', 'Total'), ( 'Customers', 'Residential'), ( 'Customers', 'Commercial'), ( 'Customers', 'Industrial'), ( 'Customers', 'Transportation'), ( 'Customers', 'Total')], ) # Okay, let's flaten it net_met_pv.to_flat_index() </code></pre> <p>Present output:</p> <pre><code>AttributeError: 'DataFrame' object has no attribute 'to_flat_index' </code></pre>
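The immediate cause of the error is that `to_flat_index()` is a method of the (Multi)Index, not of the DataFrame. A sketch on a small frame mimicking the `read_html` result (column tuples taken from the question):

```python
import pandas as pd

# a small frame with a MultiIndex on the columns, mimicking the read_html result
df = pd.DataFrame(
    [[2020, 1.0], [2021, 2.0]],
    columns=pd.MultiIndex.from_tuples(
        [("Unnamed: 0_level_0", "Year"), ("Capacity (MW)", "Residential")]
    ),
)

# to_flat_index() lives on the index object, so call it on df.columns
df.columns = df.columns.to_flat_index()   # Index of tuples

# optionally join the tuple levels into single strings
df.columns = ["_".join(level for level in col if level) for col in df.columns]
print(df.columns.tolist())
```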
<python><html><pandas><dataframe>
2023-01-24 20:45:17
1
4,702
Mainland
75,227,024
3,247,006
How to print object's values with "annotate()" and "for loop" in ascending order?
<p>I have <code>Category</code> and <code>Product</code> models below. *I use <strong>Django 3.2.16</strong>:</p> <pre class="lang-py prettyprint-override"><code># &quot;models.py&quot; from django.db import models class Category(models.Model): name = models.CharField(max_length=20) class Product(models.Model): category = models.ForeignKey(Category, on_delete=models.CASCADE) name = models.CharField(max_length=50) </code></pre> <p>Then, when running <code>test</code> view to print <code>id</code>, <code>name</code> and <code>product__count</code> with <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#annotate" rel="nofollow noreferrer">annotate()</a> and <strong>index</strong> as shown below:</p> <pre class="lang-py prettyprint-override"><code># &quot;views.py&quot; from .models import Category from django.http import HttpResponse from django.db.models import Count def test(request): qs = Category.objects.annotate(Count('product')) print(qs[0].id, qs[0].name, qs[0].product__count) print(qs[1].id, qs[1].name, qs[1].product__count) print(qs[2].id, qs[2].name, qs[2].product__count) print(qs[3].id, qs[3].name, qs[3].product__count) return HttpResponse(&quot;Test&quot;) </code></pre> <p>These are printed in ascending order properly as shown below:</p> <pre class="lang-none prettyprint-override"><code>1 Fruits 14 2 Vegetables 10 3 Meat 4 4 Fish 3 </code></pre> <p>But, when running <code>test</code> view to print <code>id</code>, <code>name</code> and <code>product__count</code> with <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#annotate" rel="nofollow noreferrer">annotate()</a> and <strong>for loop</strong> as shown below:</p> <pre class="lang-py prettyprint-override"><code># &quot;views.py&quot; # ... 
def test(request): qs = Category.objects.annotate(Count('product')) for obj in qs: print(obj.id, obj.name, obj.product__count) return HttpResponse(&quot;Test&quot;) </code></pre> <p>These are printed in descending order improperly as shown below:</p> <pre class="lang-none prettyprint-override"><code>4 Fish 3 2 Vegetables 10 3 Meat 4 1 Fruits 14 [25/Jan/202 </code></pre> <p>In addition, when running <code>test</code> view to print <code>id</code> and <code>name</code> with <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#django.db.models.query.QuerySet.all" rel="nofollow noreferrer">all()</a> and <strong>index</strong> as shown below:</p> <pre class="lang-py prettyprint-override"><code># &quot;views.py&quot; # ... def test(request): qs = Category.objects.all() print(qs[0].id, qs[0].name) print(qs[1].id, qs[1].name) print(qs[2].id, qs[2].name) print(qs[3].id, qs[3].name) return HttpResponse(&quot;Test&quot;) </code></pre> <p>And, when running <code>test</code> view to print <code>id</code> and <code>name</code> with <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#django.db.models.query.QuerySet.all" rel="nofollow noreferrer">all()</a> and <strong>for loop</strong> as shown below:</p> <pre class="lang-py prettyprint-override"><code># &quot;views.py&quot; # ... def test(request): qs = Category.objects.all() for obj in qs: print(obj.id, obj.name) return HttpResponse(&quot;Test&quot;) </code></pre> <p>These are printed in ascending order properly as shown below:</p> <pre class="lang-none prettyprint-override"><code>1 Fruits 14 2 Vegetables 10 3 Meat 4 4 Fish 3 </code></pre> <p>So, how can I print object's values with <code>annotate()</code> and <strong>for loop</strong> in ascending order properly?</p>
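One plausible explanation (not stated in the post): the queryset has no ordering, and `annotate()` compiles to a `GROUP BY` query, so without `Meta.ordering` or an explicit `order_by()` the database may hand groups back in any order; the indexed accesses just happened to look sorted. In Django the fix would presumably be `Category.objects.annotate(Count('product')).order_by('id')`. A stdlib-only sketch of the same idea on an in-memory database:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE category (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, category_id INTEGER)")
con.executemany("INSERT INTO category VALUES (?, ?)",
                [(1, "Fruits"), (2, "Vegetables"), (3, "Meat"), (4, "Fish")])
con.executemany("INSERT INTO product (category_id) VALUES (?)",
                [(1,)] * 14 + [(2,)] * 10 + [(3,)] * 4 + [(4,)] * 3)

rows = con.execute(
    "SELECT c.id, c.name, COUNT(p.id) FROM category c "
    "LEFT JOIN product p ON p.category_id = c.id "
    "GROUP BY c.id "
    "ORDER BY c.id"   # the explicit ORDER BY is the whole fix
).fetchall()
print(rows)
```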
<python><django><for-loop><django-queryset><django-annotate>
2023-01-24 20:36:07
1
42,516
Super Kai - Kazuya Ito
75,227,019
130,964
skimage/imageio fails to read grayscale images
<p>I have been reading images using (Python) <code>skimage.imread</code> (or <code>imageio.imread</code>) for months successfully, but now, without changing the code, I get failures when reading grayscale images. My collaborators can read the files. The image properties are:</p> <pre><code> identify test/resources/biosynth1_cropped/text_removed.png test/resources/biosynth1_cropped/text_removed.png PNG 1512x315 1512x315+0+0 8-bit sRGB 48c 10094B 0.000u 0:00.005 </code></pre> <p>and some properties with <code>--verbose</code></p> <pre><code> Type: Grayscale Base type: Undefined Endianness: Undefined Depth: 8-bit Channel depth: Red: 8-bit Green: 8-bit Blue: 8-bit Channel statistics: Pixels: 476280 </code></pre> <p>The code (run in repl) is:</p> <pre><code>from skimage import io import imageio file=&quot;/Users/pm286/workspace/pyamiimage/test/resources/biosynth1_cropped/arrows_removed.png&quot; img = imageio.imread(file) </code></pre> <p>and gives errors:</p> <pre><code>(base) pm286macbook:pyamiimage pm286$ python test_load.py Traceback (most recent call last): File &quot;test_load.py&quot;, line 9, in &lt;module&gt; img = imageio.imread(file) File &quot;/opt/anaconda3/lib/python3.8/site-packages/imageio/core/functions.py&quot;, line 265, in imread reader = read(uri, format, &quot;i&quot;, **kwargs) File &quot;/opt/anaconda3/lib/python3.8/site-packages/imageio/core/functions.py&quot;, line 186, in get_reader return format.get_reader(request) File &quot;/opt/anaconda3/lib/python3.8/site-packages/imageio/core/format.py&quot;, line 170, in get_reader return self.Reader(self, request) File &quot;/opt/anaconda3/lib/python3.8/site-packages/imageio/core/format.py&quot;, line 221, in __init__ self._open(**self.request.kwargs.copy()) File &quot;/opt/anaconda3/lib/python3.8/site-packages/imageio/plugins/pillow.py&quot;, line 298, in _open return PillowFormat.Reader._open(self, pilmode=pilmode, as_gray=as_gray) File 
&quot;/opt/anaconda3/lib/python3.8/site-packages/imageio/plugins/pillow.py&quot;, line 138, in _open as_gray=as_gray, is_gray=_palette_is_grayscale(self._im) File &quot;/opt/anaconda3/lib/python3.8/site-packages/imageio/plugins/pillow.py&quot;, line 689, in _palette_is_grayscale palette = np.asarray(pil_image.getpalette()).reshape((256, 3)) ValueError: cannot reshape array of size 144 into shape (256,3) </code></pre> <p>I have (un/re)installed <code>skimage</code>, <code>imageio</code>, and <code>pillow</code>.</p> <p>(The code seems to read coloured images satisfactorily.)</p> <p>I'd be grateful for pointers to solutions.</p> <p>EDIT.The comment from @Nick ODell seems to have identified the problem. Has the dimensionality of the output changed from (x, y)to (x, y, 3)?</p>
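The traceback's `size 144` is consistent with the `48c` (48-colour) palette `identify` reports: 48 × 3 = 144, while imageio's `_palette_is_grayscale` assumes a full 256 × 3 palette. A hedged workaround (file names here are made up for the demo) is to bypass that code path and let Pillow expand the palette image itself:

```python
import numpy as np
from PIL import Image

# build a palette ("P" mode) PNG of the same flavour as the problem file:
# a palette with far fewer than 256 entries
pal_img = Image.new("P", (4, 4))
pal_img.putpalette([0, 0, 0, 128, 128, 128, 255, 255, 255])  # 3 entries only
pal_img.save("demo_palette.png")

# workaround: read via Pillow, expand the palette, then hand numpy the result
img = np.asarray(Image.open("demo_palette.png").convert("L"))  # 2-D grayscale
# use .convert("RGB") instead for an (H, W, 3) array
print(img.shape, img.dtype)
```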
<python><scikit-image><python-imageio>
2023-01-24 20:35:15
1
38,146
peter.murray.rust
75,227,015
11,909,334
mkdocstrings generation error: "No module named"
<p>I was building a documentation site for my python project using mkdocstrings.</p> <p>For generating the code referece files I followed this instructions <a href="https://mkdocstrings.github.io/recipes/" rel="noreferrer">https://mkdocstrings.github.io/recipes/</a></p> <p>I get these errors:</p> <pre><code> INFO - Building documentation... INFO - Cleaning site directory INFO - The following pages exist in the docs directory, but are not included in the &quot;nav&quot; configuration: - reference\SUMMARY.md - reference_init_.md ... ... - reference\tests\manual_tests.md ERROR - mkdocstrings: No module named ' ' ERROR - Error reading page 'reference/init.md': ERROR - Could not collect ' ' </code></pre> <p>This is my file structure: <a href="https://i.sstatic.net/MWC6H.png" rel="noreferrer"><img src="https://i.sstatic.net/MWC6H.png" alt="project structure" /></a></p> <p>This is my docs folder: <a href="https://i.sstatic.net/2GBKj.png" rel="noreferrer"><img src="https://i.sstatic.net/2GBKj.png" alt="docs folder" /></a></p> <p>I have the same gen_ref_pages.py file shown in the page:</p> <pre><code> from pathlib import Path import mkdocs_gen_files nav = mkdocs_gen_files.Nav() for path in sorted(Path(&quot;src&quot;).rglob(&quot;*.py&quot;)): module_path = path.relative_to(&quot;src&quot;).with_suffix(&quot;&quot;) doc_path = path.relative_to(&quot;src&quot;).with_suffix(&quot;.md&quot;) full_doc_path = Path(&quot;reference&quot;, doc_path) parts = tuple(module_path.parts) if parts[-1] == &quot;__init__&quot;: parts = parts[:-1] elif parts[-1] == &quot;__main__&quot;: continue nav[parts] = doc_path.as_posix() # with mkdocs_gen_files.open(full_doc_path, &quot;w&quot;) as fd: ident = &quot;.&quot;.join(parts) fd.write(f&quot;::: {ident}&quot;) mkdocs_gen_files.set_edit_path(full_doc_path, path) with mkdocs_gen_files.open(&quot;reference/SUMMARY.md&quot;, &quot;w&quot;) as nav_file: # nav_file.writelines(nav.build_literate_nav()) # ``` This is my mkdocs.yml: ``` site_name: CA 
Prediction Docs theme: name: &quot;material&quot; palette: primary: deep purple logo: assets/logo.png favicon: assets/favicon.png features: - navigation.instant - navigation.tabs - navigation.expand - navigation.top # - navigation.sections - search.highlight - navigation.footer icon: repo: fontawesome/brands/git-alt copyright: Copyright &amp;copy; 2022 - 2023 Ezequiel González extra: social: - icon: fontawesome/brands/github link: https://github.com/ezegonmac - icon: fontawesome/brands/linkedin link: https://www.linkedin.com/in/ezequiel-gonzalez-macho-329583223/ repo_url: https://github.com/ezegonmac/TFG-CellularAutomata repo_name: ezegonmac/TFG-CellularAutomata plugins: - search - gen-files: scripts: - docs/gen_ref_pages.py - mkdocstrings nav: - Introduction: index.md - Getting Started: getting-started.md - API Reference: reference.md # - Reference: reference/ - Explanation: explanation.md </code></pre>
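mkdocstrings has to import the modules it documents, so `No module named ' '` (with a blank identifier) usually means either the package isn't importable from where `mkdocs` runs, or a stray module at the top of `src/` produced an empty identifier. Assuming the code lives under `src/` (as the gen script's `Path("src")` glob implies), the mkdocstrings Python handler has a `paths` option for exactly this — a sketch of the relevant `mkdocs.yml` fragment:

```yaml
plugins:
  - search
  - gen-files:
      scripts:
        - docs/gen_ref_pages.py
  - mkdocstrings:
      handlers:
        python:
          paths: [src]   # tell the handler where the importable package lives
```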
<python><documentation><docstring><mkdocs>
2023-01-24 20:35:05
3
637
Ezequiel González Macho
75,226,965
169,992
How is the PyTorch Tensor source code organized?
<p>I am going to see if it's possible to port PyTorch to TypeScript, but not sure how the PyTorch repo is organized. It appears a lot of it is CUDA/C++, and I don't seem to find a <a href="https://github.com/pytorch/pytorch/find/master" rel="nofollow noreferrer">tensor.py</a> anywhere. Where is the source code the <a href="https://pytorch.org/docs/stable/generated/torch.tensor.html" rel="nofollow noreferrer">Torch.Tensor</a> docs reference? That is, where is the tensor source code? If it is scattered across multiple files, could you point out the key ones?</p> <p>For example, the <a href="https://pytorch.org/docs/stable/generated/torch.Tensor.sum.html#torch.Tensor.sum" rel="nofollow noreferrer"><code>tensor.sum</code></a> function is nowhere to be found in the repo, as part of some &quot;tensor&quot; class in python. So now I assume it is a dynamically generated C/C++ binding or something, maybe even GLSL.</p> <p><a href="https://i.sstatic.net/sf9xQ.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sf9xQ.gif" alt="enter image description here" /></a></p>
<python><pytorch>
2023-01-24 20:29:59
0
80,366
Lance Pollard
75,226,632
9,194,965
datetime conversion of a column results in pandas warning
<p>I am trying to convert a column in a pandas dataframe to datetime format as follows:</p> <pre><code>df[&quot;date&quot;] = pd.to_datetime(df[&quot;date&quot;]) </code></pre> <p>Although this works as expected, pandas gives the following warning:</p> <pre><code>A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy if sys.path[0] == '': </code></pre> <p>Is there a better way to to a datetime conversion of a pandas column that does not produce this warning?</p>
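The warning is not about `to_datetime` itself — it fires because `df` was sliced from another frame earlier (not shown in the post), so pandas cannot tell whether the assignment mutates a view. A sketch of the two usual remedies, with an assumed parent frame:

```python
import pandas as pd

full = pd.DataFrame({"date": ["2023-01-24", "2023-01-25"], "flag": [1, 0]})

# either take an explicit copy before mutating...
df = full[full["flag"] == 1].copy()
df["date"] = pd.to_datetime(df["date"])   # no warning: df owns its data

# ...or build the column without in-place assignment:
df2 = full[full["flag"] == 1].assign(date=lambda d: pd.to_datetime(d["date"]))
print(df.dtypes)
```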
<python><pandas><dataframe><datetime>
2023-01-24 19:56:19
1
1,030
veg2020
75,226,607
12,436,050
Get all the rows with max value in a pandas dataframe column in Python
<p>I have a pandas dataframe with following columns.</p> <pre><code>col1 col2 col3 col4 A101 3 LLT 10028980 A101 7 LLT 10028980 A101 7 PT 10028980 A102 5 LLT 10028981 A102 3 PT 10028981 A103 2 PT 10028982 A103 4 LLT 10028982 </code></pre> <p>I would like to extract all those rows where col2 is max for each value of col1. The expected output is:</p> <pre><code>col1 col2 col3 col4 A101 7 LLT 10028980 A101 7 PT 10028980 A102 5 LLT 10028981 A103 4 LLT 10028982 </code></pre> <p>I am using following lines of code but it is filtering the rows where there are multiple rows with max value (row 1 is excluded).</p> <p><code>m = df.notnull().all(axis=1)</code></p> <p><code>df = df.loc[m].groupby('col1').max().reset_index()</code></p> <p>I am getting this output:</p> <pre><code>col1 col2 col3 col4 A101 7 PT 10028980 A102 5 LLT 10028981 A103 4 LLT 10028982 </code></pre>
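One way to keep the tied rows (a sketch, not from the post): `transform('max')` broadcasts each group's maximum back onto every row, so a boolean mask retains all rows that equal it.

```python
import pandas as pd

df = pd.DataFrame({
    "col1": ["A101", "A101", "A101", "A102", "A102", "A103", "A103"],
    "col2": [3, 7, 7, 5, 3, 2, 4],
    "col3": ["LLT", "LLT", "PT", "LLT", "PT", "PT", "LLT"],
    "col4": [10028980, 10028980, 10028980, 10028981, 10028981, 10028982, 10028982],
})

# transform('max') returns a Series aligned with df, one max per group member,
# so rows that tie for the maximum all survive the comparison
out = df[df["col2"] == df.groupby("col1")["col2"].transform("max")]
print(out)
```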
<python><pandas><group-by><max>
2023-01-24 19:53:50
2
1,495
rshar
75,226,465
15,584,917
Installed cd-hit (using conda) for Jupyter Notebook but command not found?
<p>I installed the package cd-hit in a Jupyter notebook. It seems successfully installed.</p> <pre><code>conda install -c bioconda cd-hit </code></pre> <p>However when I try to run cd-hit I get the error <code>command not found</code></p> <p>I am using</p> <pre><code>%%bash cd-hit -i input.fasta -o output.fasta -c .99 </code></pre> <p>I have also tried without the <code>%%bash</code>. Am I missing something? Should I be adding <code>cd-hit</code> to a Path somewhere? Should I be using a conda environment in Jupyter notebook rather than just directly using conda install? Thank you.</p>
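A likely culprit (hedged — it depends on how the kernel was launched): the Jupyter kernel isn't running inside the conda environment where `cd-hit` was installed, so its `bin/` directory isn't on the kernel's `PATH`. A sketch for diagnosing and patching that from within the notebook; the environment path below is an assumption to adjust against `conda env list`:

```python
import os
import shutil

# check what this kernel can actually see on PATH
print(shutil.which("cd-hit"))   # None -> the env's bin/ dir is not on PATH

# hypothetical fix: prepend the conda environment's bin directory
env_bin = os.path.expanduser("~/miniconda3/envs/bio/bin")   # assumed path
os.environ["PATH"] = env_bin + os.pathsep + os.environ["PATH"]
```

The cleaner long-term fix is registering the environment as its own Jupyter kernel so `%%bash` cells inherit the right `PATH`.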
<python><bash><jupyter-notebook><conda>
2023-01-24 19:40:43
0
339
1288Meow
75,226,279
16,169,533
Sort each string in a list alphabetically
<p>I have the following list:</p> <pre><code>strs = [&quot;tea&quot;,&quot;tea&quot;,&quot;tan&quot;,&quot;ate&quot;,&quot;nat&quot;,&quot;bat&quot;] </code></pre> <p>I want to sort the characters of each string in that list, so it becomes:</p> <pre><code>strs = [&quot;aet&quot;,&quot;aet&quot;,&quot;ant&quot;,&quot;aet&quot;,&quot;ant&quot;,&quot;abt&quot;] </code></pre> <p>How can I do this?</p> <p>Thanks</p>
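A one-liner sketch: `sorted()` on a string yields its characters in order, and `"".join` stitches them back together.

```python
strs = ["tea", "tea", "tan", "ate", "nat", "bat"]

# sorted("tea") -> ['a', 'e', 't']; join each sorted character list back up
result = ["".join(sorted(s)) for s in strs]
print(result)   # ['aet', 'aet', 'ant', 'aet', 'ant', 'abt']
```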
<python><arrays><list><sorting>
2023-01-24 19:23:01
2
424
Yussef Raouf Abdelmisih
75,226,158
12,596,824
Checking paths in Python
<p>I have a path like so:</p> <pre><code>S:\Test\Testing\Tested\A\B\C </code></pre> <p>and a list</p> <pre><code>include = ['S:\Test\Testing', 'S:\Domino\Testing', 'S:\Money\tmp'] </code></pre> <p>How do I check if the path I have starts with any of the paths in this list?</p> <p>So in this case the first element would match and it would return True.</p>
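A component-aware sketch (note the literals should be raw strings — in a normal string the `\t` in `'S:\Money\tmp'` is a tab character). Comparing path components rather than using `str.startswith` avoids false positives such as `S:\TestingOther` "matching" `S:\Test`:

```python
from pathlib import PureWindowsPath

# raw strings, so backslashes are taken literally
path = PureWindowsPath(r"S:\Test\Testing\Tested\A\B\C")
include = [r"S:\Test\Testing", r"S:\Domino\Testing", r"S:\Money\tmp"]

def is_under(path, prefixes):
    # PureWindowsPath equality is case-insensitive and separator-normalised
    return any(PureWindowsPath(p) == path or PureWindowsPath(p) in path.parents
               for p in prefixes)

print(is_under(path, include))
```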
<python><path>
2023-01-24 19:09:05
3
1,937
Eisen
75,226,105
11,308,308
Message not received by MQTT client
<p>I'm trying to send a message to a device using mqtt. The sender can publish a message. It is received by the queue which I made sub to the topic. The receiver is subscribed to the topic. Yet it receives no message.</p> <pre><code>class MQTTSub: class MQTTClient: def __init__(self, host: str): self.host = host self.client = mqtt.Client() def connect(self): self.client.on_connect = self.on_connect self.client.on_message = self.on_message self.client.on_subscribe = self.on_subscribe self.client.on_disconnect = self.on_disconnect self.client.on_unsubscribe = self.on_unsubscribe self.client.on_log = self.on_log #bind_address in case of multiple interfaces self.client.connect(self.host, 1883, 60) self.client.loop_forever() def on_log(self, client, userdata, level, buf): logging.debug(f&quot;log| client: {client}, userdata: {userdata}, level: {level}, buf: {buf}&quot;) def on_connect(self, client: mqtt.Client, userdata, flags: dict, rc: int): logging.debug(f&quot;connection| client: {client}, userdata: {userdata}, flags: {flags}, rc: {rc}&quot;) status = self.client.subscribe(topic=&quot;/warehouse&quot;, qos=2) logging.debug(f&quot;status: {status}&quot;) def on_message(self, client: mqtt.Client, userdata, message): logging.debug(f&quot;message| client: {client}, userdata: {userdata}, message: {message.payload}&quot;) def on_subscribe(self, client: mqtt.Client, userdata, mid: int, granted_qos: int): logging.debug(f&quot;subscription| client: {client}, userdata: {userdata}, mid: {mid}, granted_qos: {granted_qos}&quot;) def on_disconnect(self, client: mqtt.Client, userdata, rc: int): logging.debug(f&quot;disconnection| client: {client}, userdata: {userdata}, rc: {rc}&quot;) def on_unsubscribe(self, client: mqtt.Client, userdata, rc:int): logging.debug(f&quot;unsubscription| {client}, userdata: {userdata}, rc: {rc}&quot;) @staticmethod def subscribe_all(): with open(os.path.join(settings.BASE_DIR, &quot;warehouses.json&quot;)) as warehouses_file: warehouses = 
json.load(warehouses_file) for warehouse in warehouses: logging.debug(f&quot;warehouse: {warehouse}&quot;) client = MQTTSub.MQTTClient(warehouse['host']) threading.Thread(target=client.connect).start() </code></pre> <p>My logs</p> <pre><code>DEBUG:root:log| client: &lt;paho.mqtt.client.Client object at 0x7f2f26b28050&gt;, userdata: None, level: 16, buf: Sending CONNECT (u0, p0, wr0, wq0, wf0, c1, k60) client_id=b'' DEBUG:root:log| client: &lt;paho.mqtt.client.Client object at 0x7f2f26b28050&gt;, userdata: None, level: 16, buf: Received CONNACK (0, 0) DEBUG:root:connection| client: &lt;paho.mqtt.client.Client object at 0x7f2f26b28050&gt;, userdata: None, flags: {'session present': 0}, rc: 0 DEBUG:root:log| client: &lt;paho.mqtt.client.Client object at 0x7f2f26b28050&gt;, userdata: None, level: 16, buf: Sending SUBSCRIBE (d0, m1) [(b'/warehouse', 2)] DEBUG:root:status: (0, 1) DEBUG:root:log| client: &lt;paho.mqtt.client.Client object at 0x7f2f26b28050&gt;, userdata: None, level: 16, buf: Received SUBACK DEBUG:root:subscription| client: &lt;paho.mqtt.client.Client object at 0x7f2f26b28050&gt;, userdata: None, mid: 1, granted_qos: (2,) </code></pre> <p>You can see here that the client is subscribed. I tried from the same container (docker) with subscribe.callback it works. I am out of idea to troubleshoot my issue here.</p> <p>Does someone have a clue ?</p>
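The logs show the SUBACK arriving, so the subscription itself succeeded; a common remaining cause (an assumption here, not confirmed by the post) is that publisher and subscriber disagree on the exact topic string or broker host. MQTT compares topics segment by segment, and a leading `/` creates an empty first segment, so `/warehouse` and `warehouse` never match each other. A minimal matcher to illustrate (paho ships the real thing as `paho.mqtt.client.topic_matches_sub`):

```python
# minimal MQTT topic matcher supporting the + and # wildcards
def topic_matches(sub: str, topic: str) -> bool:
    sub_parts = sub.split("/")
    topic_parts = topic.split("/")
    for i, sp in enumerate(sub_parts):
        if sp == "#":                 # multi-level wildcard matches the rest
            return True
        if i >= len(topic_parts) or (sp != "+" and sp != topic_parts[i]):
            return False
    return len(sub_parts) == len(topic_parts)

print(topic_matches("/warehouse", "warehouse"))   # False — a common mismatch
```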
<python><mqtt><mosquitto><paho>
2023-01-24 19:03:26
0
567
SamHuffman
75,225,989
17,696,880
Why does this regex capture a maximum of 2 capture groups and not all those within the input string?
<pre class="lang-py prettyprint-override"><code>import re def verify_need_to_restructure_where_capsule(m): capture_where_capsule = str(m.group(1)) print(capture_where_capsule) return capture_where_capsule input_text = &quot;Rosa está esperándote ((PL_ADVB='saassa')abajo). Estábamos ((PL_ADVB='la casa cuando comenzó el temporal')dentro). Los libros que buscas están ((PL_ADVB='en la estantería de allí arriba')arriba); Conociéndole, quizás ya tenga las cosas preparadas ((PL_ADVB='mente mesa principal, ademas la hemos arreglado')sobre)&quot; list_all_adverbs_of_place = [&quot;adentro&quot;, &quot;dentro&quot;, &quot;arriba de&quot;, &quot;arriba&quot;, &quot;al medio&quot;, &quot;abajo&quot;, &quot;hacía&quot;, &quot;hacia&quot;, &quot;por sobre&quot;, &quot;sobre las&quot;,&quot;sobre la&quot;, &quot;sobre el&quot;, &quot;sobre&quot;] place_reference = r&quot;(?i:\w\s*)+&quot; pattern = re.compile(r&quot;(\(\(PL_ADVB='&quot; + place_reference + r&quot;'\)&quot; + rf&quot;({'|'.join(list_all_adverbs_of_place)})&quot; + r&quot;\))&quot;, re.IGNORECASE) input_text = re.sub(pattern, verify_need_to_restructure_where_capsule, input_text, re.IGNORECASE) </code></pre> <p>Even if you try with several input_text in all cases it is limited (at most) to capture the first 2 matches, but not all the occurrences that actually exist</p> <pre><code>((PL_ADVB='saassa')abajo) ((PL_ADVB='la casa cuando comenzó el temporal')dentro) </code></pre> <p>This should be the correct output, that is, when it succeeds in identifying all occurrences and not just the first 2 matches.</p> <pre><code>((PL_ADVB='saassa')abajo) ((PL_ADVB='la casa cuando comenzó el temporal')dentro) ((PL_ADVB='en la estantería de allí arriba')arriba) ((PL_ADVB='mente mesa principal, ademas la hemos arreglado')sobre) </code></pre> <p>It's quite curious because if I invert the order of the capture groups within the string, the pattern will detect them, but always limited to the first 2. 
It is as if the <code>re.sub()</code> method had passed the parameter to replace n number of times (in this case like 2 times), but in that case I am not indicating that parameter, and even so <code>re.sub()</code> just works a limited number of times.</p> <hr /> <p>EDIT (with findall):</p> <pre class="lang-py prettyprint-override"><code>import re def verify_need_to_restructure_where_capsule(m): capture_where_capsule = str(m.group(1)) print(capture_where_capsule) return capture_where_capsule input_text = &quot;Rosa está esperándote ((PL_ADVB='saassa')abajo). Estábamos ((PL_ADVB='la casa cuando comenzó el temporal')dentro). Los libros que buscas están ((PL_ADVB='en la estantería de allí arriba')arriba); Conociéndole, quizás ya tenga las cosas preparadas ((PL_ADVB='mente mesa principal, ademas la hemos arreglado')sobre)&quot; list_all_adverbs_of_place = [&quot;adentro&quot;, &quot;dentro&quot;, &quot;arriba de&quot;, &quot;arriba&quot;, &quot;al medio&quot;, &quot;abajo&quot;, &quot;hacía&quot;, &quot;hacia&quot;, &quot;por sobre&quot;, &quot;sobre las&quot;,&quot;sobre la&quot;, &quot;sobre el&quot;, &quot;sobre&quot;] place_reference = r&quot;(?i:\w\s*)+&quot; pattern = re.compile(r&quot;(\(\(PL_ADVB='&quot; + place_reference + r&quot;'\)&quot; + rf&quot;({'|'.join(list_all_adverbs_of_place)})&quot; + r&quot;\))&quot;, re.IGNORECASE) print(re.findall(pattern, input_text)) input_text = re.sub(pattern, verify_need_to_restructure_where_capsule, input_text, re.IGNORECASE) </code></pre>
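The fourth positional parameter of `re.sub()` is `count`, not `flags` — and `re.IGNORECASE` has the integer value 2, so the call in the question silently caps the substitution at two replacements, matching the observed behaviour exactly. A small demonstration:

```python
import re

text = "a1 b2 c3 d4"
pat = re.compile(r"[a-z]\d")

# re.sub(pattern, repl, string, count=0, flags=0): passing re.IGNORECASE
# (integer value 2) positionally lands in `count`
limited = re.sub(pat, "X", text, re.IGNORECASE)
print(limited)   # 'X X c3 d4' — only the first two matches replaced

# the pattern is already compiled with its flags, so simply drop the argument
fixed = re.sub(pat, "X", text)
print(fixed)     # 'X X X X'
```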
<python><python-3.x><regex><string><regex-group>
2023-01-24 18:52:33
1
875
Matt095
75,225,969
388,916
How to configure root logger + custom logger without duplicate log entries
<p>I want to configure the root logger and my project's custom logger separately. Here's my current logging configuration:</p> <pre class="lang-py prettyprint-override"><code>logging.config.dictConfig( { &quot;version&quot;: 1, &quot;root&quot;: { &quot;handlers&quot;: [&quot;stdout&quot;], &quot;level&quot;: &quot;WARNING&quot;, }, &quot;loggers&quot;: { &quot;my_project_name&quot;: { &quot;handlers&quot;: [&quot;stdout&quot;], &quot;level&quot;: &quot;DEBUG&quot;, } }, &quot;handlers&quot;: { &quot;stdout&quot;: { &quot;formatter&quot;: &quot;fancy&quot; if sys.stdout.isatty() else &quot;normal&quot;, &quot;class&quot;: &quot;logging.StreamHandler&quot;, &quot;level&quot;: &quot;DEBUG&quot;, }, }, &quot;formatters&quot;: { &quot;normal&quot;: { &quot;format&quot;: &quot;%(asctime)s [%(levelname)s] (%(name)s): %(msg)s&quot;, &quot;datefmt&quot;: &quot;%Y-%m-%d %H:%M:%S.%f&quot;, }, &quot;fancy&quot;: { &quot;format&quot;: &quot;TODO&quot;, &quot;datefmt&quot;: &quot;%Y-%m-%d %H:%M:%S.%f&quot;, } }, } ) </code></pre> <p>Basically, I want warnings and errors to be logged for the root logger (i.e. all loggers), but I want to control the log level for all loggers in my project separately.</p> <p>What I end up with in the above configuration is that any log messages from my project is printed to the terminal twice, since they are handled by both the custom logger config and the root logger.</p> <p>Is there a nice way to achieve this? (i.e. configuring the project's loggers and the root logger separately)</p>
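The duplication happens because records handled by the `my_project_name` logger also propagate up to the root logger's handler. Setting `propagate: False` on the project logger (a sketch of the relevant part of the config, formatters omitted for brevity) keeps each record on exactly one handler:

```python
import logging.config

logging.config.dictConfig({
    "version": 1,
    "root": {"handlers": ["stdout"], "level": "WARNING"},
    "loggers": {
        "my_project_name": {
            "handlers": ["stdout"],
            "level": "DEBUG",
            "propagate": False,   # stop records bubbling up to root's handler
        }
    },
    "handlers": {
        "stdout": {"class": "logging.StreamHandler", "level": "DEBUG"},
    },
})

# project records hit only the project logger's handler; everything else
# still propagates to root and is filtered there at WARNING
logging.getLogger("my_project_name").debug("printed once")
logging.getLogger("some.library").warning("handled by root")
```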
<python><logging><python-logging>
2023-01-24 18:50:45
1
59,976
Hubro
75,225,904
402,649
Python WSGI can't find local modules?
<p>I am trying to get my Flask application to work with WSGI under Apache on RHEL 8. A flat file with no imports works fine, and the development server works fine, but WSGI returns module not found errors. This is my file structure:</p> <pre><code>/var/www/FLASKAPPS/myapp/ utility/ configuration.py logging.py __init__.py </code></pre> <p>Contents of <code>__init__.py</code>:</p> <pre><code>from flask import Flask, Response, request, abort from flask_restful import Resource, Api, reqparse, inputs from utility.configuration import Configuration from utility.logger import Logger app = Flask(__name__) api = Api(app) configuration = Configuration() logger = Logger() class Main(Resource): @staticmethod def get(): return {'message': 'Hello World.'} api.add_resource(Main, '/') if __name__ == &quot;__main__&quot;: app.run() </code></pre> <p>Running the Python test server works fine.</p> <p>Accessing the server via WSGI on Apache gives the error:</p> <pre><code>File &quot;/var/www/FLASKAPPS/myapp/__init__.py&quot;, line 6, in &lt;module&gt; from utility.configuration import Configuration No module named 'utility' </code></pre> <p>What do I need to do for WSGI to be able to see these modules?</p>
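The usual cause: mod_wsgi does not put the application directory on `sys.path` the way running the file from inside the package effectively does, so `utility` is not importable. A sketch of the fix in the `.wsgi` entry script (paths taken from the question, no imports attempted here since the app itself is hypothetical):

```python
import sys

# put both the apps root and the app directory itself on sys.path
# *before* any application imports
APPS_ROOT = "/var/www/FLASKAPPS"
APP_DIR = "/var/www/FLASKAPPS/myapp"        # makes `utility` importable
for d in (APPS_ROOT, APP_DIR):
    if d not in sys.path:
        sys.path.insert(0, d)

# after this, `from utility.configuration import Configuration` can resolve,
# and the Flask object is exposed as:  from myapp import app as application
```

The same effect is available from the Apache side via the `python-path` option of `WSGIDaemonProcess` (or the `WSGIPythonPath` directive).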
<python><python-3.x><mod-wsgi><wsgi>
2023-01-24 18:43:54
1
3,948
Wige
75,225,893
15,176,150
Why is pandas.read_sas() making random mistakes?
<p>I'm trying to work on a SAS dataset using pandas. The dataset is in <code>sas7bdat</code> format, and is encoded in <code>iso-8859-15</code>. The dataset is 230,000 rows long and 1300 columns wide.</p> <p>I'm using the following command to load the data into pandas:</p> <pre><code>pandas.read_sas('my_file.sas7bdat', format='sas7bdat', encoding='iso-8859-15') </code></pre> <p>This works fine for most values, however, some values are read in incorrectly. Values that are read incorrectly often occur along the same row.</p> <p>For example, say <code>column 1</code>'s expected value is <code>000</code>, I instead get <code>0-0</code>. Then, along the same row, the expected value of another column might be <code>-000003</code> but the value in pandas is <code>00003-0</code>.</p> <p>I have absolutely no idea why this is happening. I've tried searching for this problem everywhere but can't find an answer. I've reloaded the dataset into SAS to see if it's an error with SAS's export, but SAS reads the data in fine.</p> <p>The changes are still present when I load the dataset directly into raw bytes, which indicates that it's not a problem with the encoding.</p> <p>Has anyone else come across this problem before? Is there a known issue with <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_sas.html" rel="nofollow noreferrer"><code>pandas.read_sas</code></a> incorrectly reading in data?</p>
<python><pandas><dataframe><encoding><sas>
2023-01-24 18:42:45
1
1,146
Connor
75,225,845
3,417,592
How can I run Django on a subpath on Google Cloud Run with load balancer?
<p>I'll preface by noting that I have a system set up using Google Cloud Run + Load Balancer + IAP to run a number of apps on <a href="https://example.com/app1" rel="nofollow noreferrer">https://example.com/app1</a>, <a href="https://example.com/app2" rel="nofollow noreferrer">https://example.com/app2</a>, etc, and up to now I've only deployed Streamlit apps this way. The load balancer is directing traffic to each app in Cloud Run according to subpath (/app1, ...), and I used the <code>--server.baseUrlPath=app2</code> option of <code>streamlit run</code> with no problems.</p> <p>Now I'm trying to get a 'hello, world' Django 4.1.5 app running on <a href="https://example.com/directory" rel="nofollow noreferrer">https://example.com/directory</a>, and I can't seem to get the routes right.</p> <p>The Dockerfile ends with</p> <pre><code>CMD exec poetry run gunicorn --bind 0.0.0.0:${PORT} --workers 1 --threads 8 --timeout 0 example_dir.wsgi:application </code></pre> <p>I added <code>FORCE_SCRIPT_NAME = &quot;/directory&quot;</code> in <code>settings.py</code>.<br /> Here's <code>example_dir/urls.py</code>:</p> <pre><code>urlpatterns = urlpatterns = [ path(&quot;admin/&quot;, admin.site.urls), path(&quot;&quot;, include(&quot;directory.urls&quot;)), ] </code></pre> <p>and here's <code>directory/urls.py</code>:</p> <pre><code>urlpatterns = [ path(&quot;&quot;, views.hello, name=&quot;hello&quot;), ] </code></pre> <p>Visiting <code>https://example.com/directory</code> returns</p> <pre><code>Page not found (404) Request Method: GET Request URL: http://example.com/directory/directory Using the URLconf defined in example_dir.urls, Django tried these URL patterns, in this order: 1. admin/ 2. [name='hello'] </code></pre> <p>That Request URL is surprising and weird. 
I'd expect the request url to be just <a href="http://example.com/directory" rel="nofollow noreferrer">http://example.com/directory</a>.</p> <p>Adding either <code>USE_X_FORWARDED_HOST = True</code> or <code>SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')</code> or both (per <a href="https://nixhive.com/how-to-run-django-on-a-subpath-via-proxy/" rel="nofollow noreferrer">nixhive.com</a>) did not affect the result.</p>
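The doubled `/directory/directory` in the Request URL suggests the prefix is being applied twice: `FORCE_SCRIPT_NAME` makes Django prepend `/directory` when resolving and reversing, while the path arriving from the load balancer apparently still contains `/directory` as well. One hedged alternative is to drop `FORCE_SCRIPT_NAME` from `settings.py` and hand the prefix to the WSGI layer instead — gunicorn copies a `SCRIPT_NAME` environment variable into the WSGI environ, and Django then strips it from incoming paths and prepends it to generated URLs:

```dockerfile
# sketch (paths from the question): remove FORCE_SCRIPT_NAME from settings.py
# and expose the prefix to gunicorn via the environment instead
ENV SCRIPT_NAME=/directory
CMD exec poetry run gunicorn --bind 0.0.0.0:${PORT} --workers 1 --threads 8 \
    --timeout 0 example_dir.wsgi:application
```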
<python><django><google-cloud-run><google-cloud-load-balancer>
2023-01-24 18:37:58
1
1,951
Nathan Lloyd
75,225,561
2,985,796
Apply 1D array representing index to element translation over 2D array of index values?
<p>I have a 2D array</p> <pre><code>arr = np.array([ [ 1, 2, -1, -1], [ 0, 1, -1, -1], [ 3, 5, -1, -1], [ 7, 8, -1, -1], [ 6, 7, -1, -1], [ 9, 11, -1, -1]]) </code></pre> <p>Its elements are related to the indices of some other array. A <code>-1</code> value represent &quot;no index&quot;. I also have a translation of the elements in <code>arr</code> to some other value (indices of a different array) in the form of</p> <pre><code>trans = np.array([[ 0], [-1], [ 1], [-1], [ 2], [-1], [ 3], [-1], [ 4], [-1], [ 5], [-1]]) </code></pre> <p>Here the <code>n</code>th element of <code>trans</code> denotes the mapping of the element values in <code>arr</code> to the element value of <code>trans</code>. For example, a <code>8</code> in <code>arr</code> should be translated to a value of <code>4</code> (<code>trans[8]</code> == <code>4</code>).</p> <p><strong>How can I apply <code>trans</code> to translate the values of <code>arr</code>?</strong></p> <p>Desired output</p> <pre><code>np.array([ [-1, 1, -1, -1], [0, -1, -1, -1], [-1, -1, -1, -1], [-1, 4, -1, -1], [3, -1, -1, -1], [-1, -1, -1, -1] ]) </code></pre>
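One vectorised sketch: fancy indexing `trans[arr]` does the whole translation in a single operation, and the `-1` "no index" entries are masked off explicitly with `np.where` rather than being allowed to wrap around to the last element.

```python
import numpy as np

arr = np.array([[ 1,  2, -1, -1],
                [ 0,  1, -1, -1],
                [ 3,  5, -1, -1],
                [ 7,  8, -1, -1],
                [ 6,  7, -1, -1],
                [ 9, 11, -1, -1]])
trans = np.array([0, -1, 1, -1, 2, -1, 3, -1, 4, -1, 5, -1]).reshape(-1, 1)

# look up every element of arr in trans at once, but keep -1 as "no index"
out = np.where(arr == -1, -1, trans.ravel()[arr])
print(out)
```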
<python><arrays><indexing>
2023-01-24 18:11:24
2
7,178
KDecker
75,225,556
9,274,940
Group by all the columns except the first one, but aggregate the first column as a list
<p>Let's say, I have this dataframe:</p> <pre><code>df = pd.DataFrame({'col_1': ['yes','no'], 'test_1':['a','b'], 'test_2':['a','b']}) </code></pre> <p>What I want, is to group by all the columns except the first one and aggregate the results where the group by is the same.</p> <p>This is what I'm trying:</p> <pre><code>col_names = df.columns.to_list() df_out = df.groupby([col_names[1:]])[col_names[0]].agg(list) </code></pre> <p>This is my end data frame goal:</p> <pre><code>df = pd.DataFrame({'col_1': [['yes','no']], 'test_1':['a'], 'test_2':['b']}) </code></pre> <p>And, if I have more rows, I want it to behave with the same principle, join in list the groups that are the same based on the column [1:] (from the second till end.</p>
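A sketch of what seems intended: pass the column list itself to `groupby` (wrapping it again, as in `[col_names[1:]]`, makes pandas look for a single tuple-named column). Note that rows only collapse together if they actually share values in the grouping columns, so this sample uses matching keys on both rows:

```python
import pandas as pd

df = pd.DataFrame({'col_1': ['yes', 'no'],
                   'test_1': ['a', 'a'],
                   'test_2': ['b', 'b']})

col_names = df.columns.to_list()
# group on every column except the first, collect the first column as a list
out = df.groupby(col_names[1:])[col_names[0]].agg(list).reset_index()
```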
<python><pandas>
2023-01-24 18:10:37
1
551
Tonino Fernandez
75,225,413
8,981,474
Why does a second call to discover() result in an import error?
<p>I want to run both two separate test suites, using the standard library <code>unittest</code>.</p> <p>Specifically, I want to use the <code>load_tests</code> approach to test discovery, as described in the documentation for <a href="https://docs.python.org/3/library/unittest.html#unittest.TestLoader.discover" rel="nofollow noreferrer"><code>unittest.discover()</code></a>.</p> <p>However, I found that, regardless of the test suite contents, the second test suite would fail to load. I created a MRE like so:</p> <p>Suppose the test suites are called <code>x</code> and <code>y</code>, with a directory structure like so:</p> <pre><code>Main.py test_cases/ __init__.py x/ __init__.py &lt;more individual modules for tests&gt; y/ __init__.py &lt;individual modules for tests&gt; </code></pre> <p>where <code>Main.py</code> is a driver script containing:</p> <pre><code>import unittest if __name__ == &quot;__main__&quot;: x_test_suite = unittest.TestLoader().discover('test_cases/x', pattern='__init__.py') unittest.TextTestRunner(verbosity=1).run(x_test_suite) y_test_suite = unittest.TestLoader().discover('test_cases/y', pattern='__init__.py') unittest.TextTestRunner(verbosity=1).run(y_test_suite) unittest.main() </code></pre> <p>In each <code>__init__.py</code>, I <code>import unittest</code> and then define <code>load_tests</code> to return a <code>unittest.TestSuite</code> instance, like so:</p> <pre><code>import unittest # In the actual code, I import the other test modules and use # them to populate the `TestSuite` that is returned, but this # is not relevant to causing the error. 
def load_tests(loader, tests, pattern): return unittest.TestSuite() </code></pre> <p>The result is that the <code>x</code> suite runs (and, for this MRE, all 0 tests are successful), but the <code>y</code> suite cannot be loaded and an <code>ImportError</code> occurs (paths redacted):</p> <pre><code>$ python Main.py ---------------------------------------------------------------------- Ran 0 tests in 0.000s OK Traceback (most recent call last): File &quot;Main.py&quot;, line 7, in &lt;module&gt; y_test_suite = unittest.TestLoader().discover('test_cases/y', pattern='__init__.py') File &quot;/usr/lib/python3.8/unittest/loader.py&quot;, line 349, in discover tests = list(self._find_tests(start_dir, pattern)) File &quot;/usr/lib/python3.8/unittest/loader.py&quot;, line 405, in _find_tests tests, should_recurse = self._find_test_path( File &quot;/usr/lib/python3.8/unittest/loader.py&quot;, line 458, in _find_test_path raise ImportError( ImportError: '__init__' module incorrectly imported from '/path/to/test_cases/x'. Expected '/path/to/test_cases/y'. Is this module globally installed? </code></pre> <p>If I try swapping the order in <code>Main.py</code> to try loading the <code>y</code> suite first, then it runs successfully while <code>x</code> fails to import.</p> <p>I also tried using <code>pattern='*.py'</code> for the <code>.discover</code> calls and removing the <code>load_tests</code> implementations (to let <code>unittest</code> do its own discovery), but this also doesn't resolve the problem. Neither does reordering <code>Main.py</code> to do both <code>.discover</code> calls first and then both <code>TextTestRunner</code> calls. In fact, having both <code>.discover</code> calls causes the error even without any attempt to run the tests.</p> <p>Why does the second test discovery fail, and how can I fix the problem?</p>
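One workaround consistent with the traceback above: `discover()` imports the file under the module name `__init__` and caches it in `sys.modules`, so the second call finds the cached copy from the wrong directory. Dropping that cache entry between calls lets the second discovery re-import its own `__init__.py`. A self-contained sketch that rebuilds a throwaway version of the tree described in the question (directory names are hypothetical):

```python
import os
import sys
import tempfile
import textwrap
import unittest

# build a throwaway x/ and y/ tree like the one in the question
root = tempfile.mkdtemp()
for pkg in ('x', 'y'):
    os.makedirs(os.path.join(root, pkg))
    with open(os.path.join(root, pkg, '__init__.py'), 'w') as f:
        f.write(textwrap.dedent('''\
            import unittest

            def load_tests(loader, tests, pattern):
                return unittest.TestSuite()
            '''))

x_suite = unittest.TestLoader().discover(os.path.join(root, 'x'),
                                         pattern='__init__.py')
# discover() cached the module under the name '__init__'; removing it lets
# the second discover() import y's copy instead of reusing x's
sys.modules.pop('__init__', None)
y_suite = unittest.TestLoader().discover(os.path.join(root, 'y'),
                                         pattern='__init__.py')
```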
<python><import><python-unittest>
2023-01-24 17:56:03
0
1,559
isakbob
75,225,371
898,042
How to rename a Python script to be used in Lambda - lambda_handler vs main() function name?
<p>initially this was python script on ec2 but now i want it become aws lambda - generated from terraform! Since aws lambda needs lambda_handler function vs &quot;__ main __&quot;.</p> <p>I wonder what to put in my tf code for handler arg.</p> <p>The python script(it gets zipped by terraform then loaded up to aws lambda):</p> <pre><code>#!/usr/bin/env python3 import boto3 import json import logging import sys logging.basicConfig(stream=sys.stdout, level=logging.INFO) queue = boto3.resource( 'sqs', region_name='us-east-1').get_queue_by_name(QueueName=&quot;erjan&quot;) table = boto3.resource('dynamodb', region_name='us-east-1').Table('Votes') def process_message(message): try: payload = message.message_attributes voter = payload['voter']['StringValue'] vote = payload['vote']['StringValue'] logging.info(&quot;Voter: %s, Vote: %s&quot;, voter, vote) update_count(vote) message.delete() except Exception as e: print('-----EXCEPTION-----') def update_count(vote): logging.info('update count....') cur_count = 0 if vote == 'b': response = table.get_item(Key={'voter': 'count'}) item = response['Item'] item['b'] += 1 table.put_item(Item=item) elif vote == 'a': table.update_item( Key={'voter': 'count'}, UpdateExpression=&quot;ADD a :incr&quot;, ExpressionAttributeValues={':incr': 1}) if __name__ == &quot;__main__&quot;: logging.info('--------inside main-------') while True: try: messages = queue.receive_messages(MessageAttributeNames=['vote', 'voter']) except KeyboardInterrupt: logging.info(&quot;Stopping...&quot;) break except: logging.error(sys.exc_info()[0]) continue for message in messages: process_message(message) </code></pre> <p>the tf code:</p> <pre><code>resource &quot;aws_iam_role&quot; &quot;vote_processor_lambda_iam_role&quot; { name = &quot;vote_processor_lambda_iam_role&quot; assume_role_policy = &lt;&lt;EOF { &quot;Version&quot;: &quot;2012-10-17&quot;, &quot;Statement&quot;: [ { &quot;Action&quot;: &quot;sts:AssumeRole&quot;, &quot;Principal&quot;: { 
&quot;Service&quot;: &quot;lambda.amazonaws.com&quot; }, &quot;Effect&quot;: &quot;Allow&quot;, &quot;Sid&quot;: &quot;&quot; } ] } EOF } resource &quot;aws_iam_policy&quot; &quot;vote_processor_dynamodb_policy&quot; { name = &quot;vote_processor_dynamodb_policy&quot; policy = &lt;&lt;EOF { &quot;some json&quot; } } resource &quot;aws_iam_role_policy_attachment&quot; &quot;attach_vote_processor_dynamodb_policy_to_iam_role&quot; { role = aws_iam_role.vote_processor_lambda_iam_role.name policy_arn = aws_iam_policy.vote_processor_dynamodb_policy.arn } resource &quot;aws_iam_policy&quot; &quot;vote_processor_sqs_policy&quot; { name = &quot;vote_processor_sqs_policy&quot; policy = &lt;&lt;EOF { &quot;some json&quot; } EOF } resource &quot;aws_iam_role_policy_attachment&quot; &quot;vote_processor_sqs_access_policy&quot; { role = aws_iam_role.vote_processor_lambda_iam_role.name policy_arn = aws_iam_policy.vote_processor_sqs_policy.arn } data &quot;archive_file&quot; &quot;vote_processor_zip_code&quot; { type = &quot;zip&quot; source_file = &quot;${path.module}/vote_processor.py&quot; output_path = &quot;${path.module}/vote_processor.zip&quot; } #what to put in handler arg? resource &quot;aws_lambda_function&quot; &quot;vote_processor_lambda_backend&quot; { filename = &quot;${path.module}/vote_processor.zip&quot; function_name = &quot;vote_processor&quot; role = aws_iam_role.vote_processor_lambda_iam_role.arn handler = &quot;result.lambda_handler&quot; #should this be __main__? runtime = &quot;python3.9&quot; } </code></pre> <p>should i rename the python script <strong>main</strong> function be &quot;lambda handler&quot;? or vice versa in tf code?</p>
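A hypothetical sketch of how `vote_processor.py` might be restructured for Lambda (assuming an SQS trigger, not part of the original post): SQS delivers messages inside the `event` payload, so the `__main__` polling loop goes away, and the Terraform `handler` argument becomes `"vote_processor.lambda_handler"` (module name matching the `source_file`, then the function name):

```python
# hypothetical vote_processor.py rewritten as a Lambda handler; the
# boto3 setup and update_count() from the original script stay the same
def lambda_handler(event, context):
    processed = 0
    for record in event.get('Records', []):
        attrs = record.get('messageAttributes', {})
        voter = attrs.get('voter', {}).get('stringValue')
        vote = attrs.get('vote', {}).get('stringValue')
        # the process_message/update_count logic from the original
        # script would run here using voter and vote
        processed += 1
    return {'processed': processed}
```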
<python><amazon-web-services><aws-lambda><terraform>
2023-01-24 17:51:53
1
24,573
ERJAN
75,225,115
5,763,413
Get N Smallest values from numpy array with array size potentially less than N
<p>I am trying to use <code>numpy.argpartition</code> to get the <code>n</code> smallest values from an array. However, I cannot guarantee that there will be at least <code>n</code> values in the array. If there are fewer than <code>n</code> values, I just need the entire array.</p> <p>Currently I am handling this with checking the array size, but I feel like I'm missing a native numpy method that will avoid this branching check.</p> <pre class="lang-py prettyprint-override"><code>if np.size(arr) &lt; N: return arr else: return arr[np.argpartition(arr, N)][:N] </code></pre> <p>Minimal reproducible example:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np #Find the 4 smallest values in the array #Arrays can be arbitrarily sized, as it's the result of finding all elements in a larger array # that meet a threshold small_arr = np.array([3,1,4]) large_arr = np.array([3,1,4,5,0,2]) #For large_arr, I can use np.argpartition just fine: large_idx = np.argpartition(large_arr, 4) #large_idx now contains array([4, 5, 1, 0, 2, 3]) #small_arr results in an indexing error doing the same thing: small_idx = np.argpartition(small_arr, 4) #ValueError: kth(=4) out of bounds (3) </code></pre> <p>I've looked through the numpy docs for truncation, max length, and other similar terms, but nothing came up that is what I need.</p>
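A small sketch of one way to avoid the explicit size branch: clamp the `kth` argument so `argpartition` never goes out of bounds (the slice itself is already safe for short arrays). Helper name is illustrative, not a NumPy built-in:

```python
import numpy as np

def n_smallest(arr, n):
    # clamp kth so argpartition never exceeds the array bounds;
    # the trailing slice is a no-op when the array is shorter than n
    if arr.size == 0:
        return arr
    k = min(n, arr.size)
    return arr[np.argpartition(arr, k - 1)][:k]
```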
<python><numpy>
2023-01-24 17:27:46
2
2,125
blackbrandt
75,224,935
7,053,813
Matrix Multiplication with MultiIndex columns using broadcast
<p>I want to multiply two matrices (<code>.dot</code> not <code>.mul</code>), one of which may or may not have 2-D mutli-index columns. I have solved this for the 1-D case and the 2-D case, I feel like there should be a generalization between the two, but I can't figure it out.</p> <h3>Sample Data</h3> <pre><code>&gt;&gt;&gt; A = pd.DataFrame({'b': [1, 0], 'c': [1, 0], 'e': [0, 1]}, index=['a','d']) &gt;&gt;&gt; A b c e 0 1 1 0 1 0 0 1 </code></pre> <pre><code>&gt;&gt;&gt; columns = pd.MultiIndex.from_product([['b', 'c', 'e'], ['metric1', 'metric2']]) &gt;&gt;&gt; B2D = pd.DataFrame( [ [22, 24, 20, 31, 29, 20], [12, 14, 10, 21, 24, 91] ], columns=columns ) &gt;&gt;&gt; B2D b c e metric1 metric2 metric1 metric2 metric1 metric2 0 22 24 20 31 29 20 1 12 14 10 21 24 91 &gt;&gt;&gt; B1D = B2D.xs('metric1', 1, 1) </code></pre> <h3>Desired Result</h3> <pre><code>&gt;&gt;&gt; func(A, B2D) a d metric1 metric2 metric1 metric2 0 42 55 29 20 1 22 35 24 91 &gt;&gt;&gt; func(A, B1D) a d 0 42 29 1 22 24 </code></pre> <h3>My Version</h3> <p>This works for each case but not both. I hope theres a simpler way to combine these into a generalized version...</p> <pre><code>def func1D(a, b): return (a @ b.T).T def func2D(a, b): return (B.stack(level=1) @ A.T).unstack() </code></pre> <h3>Other Relevant Answers</h3> <p><a href="https://stackoverflow.com/questions/65665720/pandas-dot-product-on-each-sub-frame-in-multi-index-data-frame">pandas dot product on each sub frame in multi-index data frame</a></p> <p><a href="https://stackoverflow.com/questions/41493177/pandas-multiply-dataframes-with-multiindex-and-overlapping-index-levels">Pandas multiply dataframes with multiindex and overlapping index levels</a></p>
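One possible generalization (a sketch, not a canonical pandas idiom): branch on whether the columns are a `MultiIndex`, since the 2-D case is the 1-D case with the metric level temporarily stacked down to the index:

```python
import pandas as pd

A = pd.DataFrame({'b': [1, 0], 'c': [1, 0], 'e': [0, 1]}, index=['a', 'd'])
cols = pd.MultiIndex.from_product([['b', 'c', 'e'], ['metric1', 'metric2']])
B2D = pd.DataFrame([[22, 24, 20, 31, 29, 20],
                    [12, 14, 10, 21, 24, 91]], columns=cols)
B1D = B2D.xs('metric1', axis=1, level=1)

def func(a, b):
    # 2-D columns: stack the metric level into the index, multiply, restore it;
    # 1-D columns: plain matrix product
    if isinstance(b.columns, pd.MultiIndex):
        return (b.stack(level=1) @ a.T).unstack()
    return (a @ b.T).T
```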
<python><pandas>
2023-01-24 17:11:40
1
759
Collin Cunningham
75,224,811
11,462,274
Even though the object is defined before calling query on a DataFrame, the error "name is not defined" is returned
<pre><code>def fl_base(df,file_name): columns = ['historic_odds_1','historic_odds_2','historic_odds_3','historic_odds_4','odds'] mean_cols = df[columns].mean(axis=1) df = df.query( f&quot;\ (mean_cols &gt; 0) and \ ((@df['minute_traded']/mean_cols)*100 &gt;= 1000) and \ (@df['minute_traded'] &gt;= 1000)\ &quot; ).reset_index(drop=True) return df </code></pre> <pre><code>name 'mean_cols' is not defined </code></pre> <p>If <code>mean_cols</code> is being created before calling <code>df.query</code>, why is it saying that it has not been defined?</p>
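For context, a sketch of the relevant `query()` rule: bare names inside a query string are resolved as DataFrame column names, while local Python variables need the `@` prefix. A trimmed, self-contained version of the function (shorter column list, hypothetical sample data) illustrating the prefix:

```python
import pandas as pd

def fl_base(df):
    cols = ['historic_odds_1', 'odds']  # shortened column list for this sketch
    mean_cols = df[cols].mean(axis=1)
    # @mean_cols refers to the local Series; minute_traded is a column
    return df.query(
        "(@mean_cols > 0) and "
        "((minute_traded / @mean_cols) * 100 >= 1000) and "
        "(minute_traded >= 1000)"
    ).reset_index(drop=True)
```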
<python><pandas><dataframe>
2023-01-24 17:01:36
1
2,222
Digital Farmer
75,224,524
3,595,907
Pairwise combinations of two image stacks
<p>I have 2 RGB image stacks containing 200 images each. Each image is (300, 300, 3) so each stack is (200, 300, 300, 3).</p> <p>So we have:</p> <pre><code>a_stack[200, 300, 300, 3] b_stack[200, 300, 300, 3] </code></pre> <p>My aim is to calculate the Euclidean distance between every pairwise combination of images in each stack, which I can do using</p> <p><code>measure = dist.euclidean(a_img.flatten(), b_img.flatten())</code></p> <p>My problem is constructing the appropriate iterator to get every pairwise combination between <code>a_stack</code> and <code>b_stack</code></p> <p>I had a look at <a href="https://docs.python.org/3/library/itertools.html#itertools.combinations" rel="nofollow noreferrer">itertools.combinations</a> but this appeared to be for combinations of elements in strings. Is there a similar thing for narrays?</p>
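A sketch of one approach: flatten each image to a vector and let `scipy.spatial.distance.cdist` compute every pairwise distance at once, which avoids both an explicit `itertools.product` loop over index pairs and the huge intermediate a plain broadcast would allocate at the stated sizes. The stack shapes here are toy-sized stand-ins for the 200 × (300, 300, 3) stacks:

```python
import numpy as np
from scipy.spatial.distance import cdist

# toy stacks in place of the real (200, 300, 300, 3) ones
rng = np.random.default_rng(0)
a_stack = rng.random((4, 3, 3, 3))
b_stack = rng.random((5, 3, 3, 3))

# flatten each image to a vector, then take every pairwise distance:
# d[i, j] is the Euclidean distance between a_stack[i] and b_stack[j]
a = a_stack.reshape(len(a_stack), -1)
b = b_stack.reshape(len(b_stack), -1)
d = cdist(a, b)
```

If index pairs are needed explicitly, `itertools.product(range(len(a_stack)), range(len(b_stack)))` yields every `(i, j)` combination.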
<python><combinations><numpy-ndarray><combinatorics>
2023-01-24 16:36:25
0
3,687
DrBwts
75,224,455
15,080,473
Error while receiving data and converting it to f64
<p>I am trying to send data over a TCP connection from rust to python, however while receiving the data in python I am getting the following error when trying to convert it from bytes to f64.</p> <pre><code>Traceback (most recent call last): File &quot;server.py&quot;, line 36, in &lt;module&gt; [x] = struct.unpack('f', data) struct.error: unpack requires a buffer of 4 bytes </code></pre> <p>I am using the follwoing method to convert the data from bytes to f64,</p> <pre><code> [x] = struct.unpack('f', data) print(x) </code></pre> <p>my data looks like this, which I am sending over a tcp</p> <pre><code>x: 0.011399809271097183 (f64 from rust) </code></pre> <p>and getting something like</p> <pre><code>b'?t=*\x00\x00\x00\x00?s\xbd\xc6\x80\x00\x00\x00?q\xd5s\x00\x00\x00\x00?|\xae\x85\x80\x00\x00\x00?e\xb5\xc3\x00\x00\x00\x00?yp;\x80\x00\x00\x00?p\x7f\x98\x00\x00\x00\x00?hG|\x00\x00\x00\x00?o\x8d&amp;\x00\x00\x00\x00?cv[\x00\x00\x00\x00?s\xdf\x97\x80\x00\x00\x00?{\x0e\xde\x80\x00\x00\x00?n\xec\xbf\x00\x00\x00\x00?n\xd8E\x00\x00\x00\x00?y+\xdd\x80\x00\x00\x00?r\xd90\x80\x00\x00\x00?r\xc2\x89\x00\x00\x00\x00?q\xc2i\x00\x00\x00\x00?kq&quot;\x00\x00\x00\x00?t5\xec\x80\x00\x00\x00?|\xaak\x80\x00\x00\x00?z\x10\x9d\x00\x00\x00\x00?o\xeb\xde\x00\x00\x00\x01?m6\xfc\x00\x00\x00\x00' </code></pre>
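A sketch of the likely mismatch: a Rust `f64` is 8 bytes, but struct's `'f'` format is a 4-byte float; `'d'` is the 8-byte double, and `struct.iter_unpack` walks a buffer holding several of them. The big-endian `'>d'` below is an assumption (matching e.g. `f64::to_be_bytes` on the Rust side); use `'<d'` if the sender writes little-endian or native order. The sample buffer here is packed locally for illustration:

```python
import struct

# simulate what the Rust side might send: two f64 values as 8-byte chunks
data = struct.pack('>dd', 0.011399809271097183, 1.5)

# 'd' = 8-byte double; '>' assumes big-endian byte order from the sender
values = [v for (v,) in struct.iter_unpack('>d', data)]
```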
<python><tcp><byte>
2023-01-24 16:30:50
1
524
Ankit Kumar
75,224,450
5,684,405
How to refer to subclass property in abstract shared implementation in abstract class method
<p>For classes:</p> <pre><code>class Base(ABC): def __init__(self, param1): self.param1 = param1 @abstractmethod def some_method1(self): pass # @abstractmethod # def potentially_shared_method(self): # ???? class Child(Base): def __init__(self, param2): self.param1 = param1 self.param2 = param2 def some_method1(self): self.object1 = some_lib.generate_object1(param1, param2) def potentially_shared_method(self): return object1.process() </code></pre> <p>I want to move <code>potentially_shared_method</code> into the abstract class so it can be shared; however, it uses <code>object1</code>, which is initialized in <code>some_method1</code> and needs to stay there.</p>
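A sketch of how this can work: attribute lookup on `self` happens at call time on the concrete instance, so a concrete method on the abstract base can freely use `self.object1` even though only subclasses ever set it (the `DummyObject` stand-in below replaces the hypothetical `some_lib.generate_object1`):

```python
from abc import ABC, abstractmethod

class Base(ABC):
    @abstractmethod
    def some_method1(self):
        """Subclasses are expected to set self.object1 here."""

    def potentially_shared_method(self):
        # resolved on the instance at call time, so the attribute set by
        # the subclass's some_method1 is visible here
        return self.object1.process()

class DummyObject:  # stand-in for whatever some_lib.generate_object1 returns
    def process(self):
        return 'processed'

class Child(Base):
    def some_method1(self):
        self.object1 = DummyObject()
```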
<python><python-3.x>
2023-01-24 16:30:39
1
2,969
mCs
75,224,346
9,749,124
Visualise topics from BERTopic
<p>I am using <code>TOPIC_MODEL.visualize_topics()</code> for visualising my topics on the graph. This is the example:</p> <p><a href="https://i.sstatic.net/E3Dym.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E3Dym.png" alt="enter image description here" /></a></p> <p>My question is, how to see more than 5 words when I hover over blob topics on the graph?</p>
<python><machine-learning><bert-language-model>
2023-01-24 16:21:54
2
3,923
taga
75,224,332
2,662,901
Why does indexing DataFrame by date string affect column return type?
<p>While investigating <a href="https://stackoverflow.com/q/75178105/2662901">why int columns are sometimes returned as floats</a> we found the following surprising behavior:</p> <pre><code>import pandas as pd import numpy as np import datetime df = pd.DataFrame(data=list(zip(*[np.random.randint(1,3,5), np.random.random(5)])), index=pd.date_range('2020-01-01','2020-01-05').tolist(), columns=['a', 'b']) df['c'] = np.ceil(df.a/df.b).astype(int) df.loc['20200101', 'c'] # &lt;&lt; Returns value as float df.loc[datetime.datetime(2020,1,1), 'c'] # &lt;&lt; Returns value as int </code></pre> <p>Why does using the index value <code>&quot;20200101&quot;</code> cause the <code>c</code> value to be returned as a <code>float</code> (even though the column is of type <code>int</code>) but using the index value <code>datetime.datetime(2020,1,1)</code> causes it to be returned as an <code>int</code>?</p>
<python><pandas><dataframe>
2023-01-24 16:21:00
1
3,497
feetwet
75,224,203
5,040,775
How to separate dataframe by consecutive months in column?
<pre><code> Ticker Month Price 0 ABC US EQUITY 1/1/2020 20 1 ABC US EQUITY 2/1/2020 20 2 ABC US EQUITY 3/1/2020 20 3 ABC US EQUITY 4/1/2020 20 4 ABC US EQUITY 5/1/2020 20 5 ABC US EQUITY 6/1/2020 20 6 ABC US EQUITY 7/1/2020 20 7 ABC US EQUITY 8/1/2020 20 8 ABC US EQUITY 9/1/2020 20 9 ABC US EQUITY 10/1/2020 20 10 ABC US EQUITY 1/1/2021 20 11 ABC US EQUITY 2/1/2021 20 12 ABC US EQUITY 3/1/2021 20 13 ABC US EQUITY 4/1/2021 20 14 ABC US EQUITY 5/1/2021 20 15 ABC US EQUITY 6/1/2021 20 16 ABC US EQUITY 7/1/2021 20 17 ABC US EQUITY 8/1/2021 20 18 ABC US EQUITY 9/1/2021 20 </code></pre> <p>As you can see, I have a dataframe as above. I want to separate the dataframe into two by Month; one from 01/01/2020 to 10/01/2020 and the other one from 01/01/2021 to 09/01/2021. The rule should be that the rows with consecutive months should be together. how do I do it using pandas? Thanks.</p>
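A sketch of one grouping trick that matches the ask: turn each date into a running month count, then start a new group wherever the month-to-month difference is not exactly 1. The sample frame below is a shortened, hypothetical version of the one in the question:

```python
import pandas as pd

df = pd.DataFrame({
    'Ticker': ['ABC US EQUITY'] * 5,
    'Month': ['1/1/2020', '2/1/2020', '3/1/2020', '1/1/2021', '2/1/2021'],
    'Price': [20] * 5,
})
df['Month'] = pd.to_datetime(df['Month'])

# months since year zero; any gap other than exactly 1 starts a new run
months = df['Month'].dt.year * 12 + df['Month'].dt.month
parts = [g for _, g in df.groupby(months.diff().ne(1).cumsum())]
```

`parts` is then a list of DataFrames, one per consecutive-month run.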
<python><pandas><dataframe>
2023-01-24 16:09:41
1
3,525
JungleDiff
75,224,099
10,689,857
Merge two lists of dictionaries ignoring None entries
<p>I have the below two lists:</p> <pre><code>a = [{'a1': 1, 'a2': 2}, {'a3': 3}] b = [{'b1': 1, 'b2': 2}, None] </code></pre> <p>I would like to merge them creating an output like the below, ignoring the None elements.</p> <pre><code>desired_output = [{'a1': 1, 'a2': 2, 'b1': 1, 'b2': 2}, {'a3': 3}] </code></pre>
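A compact sketch of one way to do this: substitute an empty dict for each `None` before merging the element pairs with dict unpacking:

```python
a = [{'a1': 1, 'a2': 2}, {'a3': 3}]
b = [{'b1': 1, 'b2': 2}, None]

# `x or {}` swaps a None entry for an empty dict before the merge
merged = [{**(x or {}), **(y or {})} for x, y in zip(a, b)]
```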
<python>
2023-01-24 16:02:29
2
854
Javi Torre
75,224,063
1,230,694
Convert Time Series Data By Column Into Rows
<p>I have output from a system which has multiple readings for a date range, date is one column and then each reading is a column of its own, an example data frame looks like this:</p> <pre><code> Date/Time DEVICE_1 DEVICE_2 01/01 01:00:00 10.141667 8.807851 </code></pre> <p>I would like to convert this into the following format where each column is &quot;flattened&quot; into a row so the output would look something like:</p> <pre><code>Date/Time Name Value 01/01 01:00:00 DEVICE_1 10.141667 01/01 01:00:00 DEVICE_2 8.807851 </code></pre> <p>If there were ten devices then for each row in the current file for a particular timestamp I would need to extract this into ten rows, one for each device with the same timestamp.</p> <p>Is this possible with pandas? I don't want to resort to lots of looping if possible.</p>
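This wide-to-long reshape is exactly what `DataFrame.melt` does, with no looping. A sketch on the sample row from the question:

```python
import pandas as pd

df = pd.DataFrame({'Date/Time': ['01/01 01:00:00'],
                   'DEVICE_1': [10.141667],
                   'DEVICE_2': [8.807851]})

# each device column becomes its own row, keyed by the timestamp
long_df = df.melt(id_vars='Date/Time', var_name='Name', value_name='Value')
```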
<python><pandas><time-series>
2023-01-24 15:59:02
1
3,899
berimbolo
75,224,020
3,128,109
Can I use scipy to check the jacobian of a function?
<p>I have a function for which I know the explicit expression of the jacobian. I would like to check the correctness of this jacobian by comparing it against a finite-element approximation. Scipy has a <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.check_grad.html" rel="nofollow noreferrer">function that does a similar check on the gradient of a function</a> but I haven't found the equivalent for a jacobian (if it existed in scipy, I assume it would be in <a href="https://docs.scipy.org/doc/scipy/reference/optimize.html#utilities" rel="nofollow noreferrer">this listing</a>). I would like a function that similarly takes two callables (the function and the jacobian) and an ndarray (the points to check the jacobian against its approximation) and returns the error between the two.</p> <p>The jacobian of a function can be written in a form that uses the gradients of the components of the function, so the <code>scipy.optimize.check_grad</code> function might be usable to this extent, but I don't know how that might be implemented in practice.</p> <p>Say I have function</p> <pre class="lang-py prettyprint-override"><code>def fun(x, y): return y, x </code></pre> <p>with the jacobian</p> <pre class="lang-py prettyprint-override"><code>from numpy import ndarray, zeros def jac(x, y): result = zeros((2, 2)) result[0, 1] = 1 result[1, 2] = 1 return result </code></pre> <p>How should I go about to separate these variables in order to use the scipy function? The solution must be generalizable to n-dimensional functions. Or is there an existing function to fill this task?</p> <p>If I were limited to 2-dimensional functions, I might do</p> <pre class="lang-py prettyprint-override"><code>from scipy.optimize import check_grad def fun1(x, y): return fun(x, y)[0] def grad1(x, y): return jac(x, y0)[0] check_grad(fun1, grad1, [1.5, -1.5]) ... </code></pre> <p>but this solution isn't trivially extended to functions of higher dimensions.</p>
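A sketch of the component-wise idea from the last paragraph, generalized to n dimensions: row i of the jacobian is the gradient of output component i, so `check_grad` can be applied once per component and the worst error returned (`check_jac` is a hypothetical helper name, not a SciPy function; the example function is the question's `fun` rewritten to take a single vector argument):

```python
import numpy as np
from scipy.optimize import check_grad

def check_jac(fun, jac, x0):
    # check row i of the jacobian as the gradient of the i-th output
    m = len(fun(x0))
    errs = [check_grad(lambda x, i=i: fun(x)[i],
                       lambda x, i=i: jac(x)[i], x0)
            for i in range(m)]
    return max(errs)

# the question's fun(x, y) = (y, x), taking one vector instead of two scalars
fun = lambda x: np.array([x[1], x[0]])
jac = lambda x: np.array([[0.0, 1.0],
                          [1.0, 0.0]])
```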
<python><numpy><scipy><derivative>
2023-01-24 15:55:19
1
2,245
usernumber
75,223,860
5,522,007
Connect to SQL Server db with Azure AD MFA using Python in Docker
<p>I am trying to set up a Python Docker container from which I can connect to several SQL Server databases using Azure Active Directory MFA.</p> <p>I've created the Docker file as shown below. This builds ok and I am using it in a VSCode devcontainer but not certain I have got all the sql server/odbc stuff correct as I'm unable to get that far. I can connect to the databases successfully in other applications, but not from within the container.</p> <pre><code># Docker image with Python3, poyodbc, MS ODBC 18 driver (SQL Server) # Use official Python image FROM python:3.10-bullseye # Set working directory WORKDIR /app # Send Python output streams to container log ENV PYTHONUNBUFFERED 1 # Update apt-get RUN apt-get update # Install ggc RUN apt-get install gcc # pyodbc dependencies RUN apt-get install -y tdsodbc unixodbc-dev RUN apt install unixodbc -y RUN apt-get clean -y ADD odbcinst.ini /etc/odbcinst.ini # ODBC driver dependencies RUN apt-get install apt-transport-https RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - RUN curl https://packages.microsoft.com/config/ubuntu/20.04/prod.list &gt; /etc/apt/sources.list.d/mssql-release.list RUN apt-get update # Install ODBC driver RUN ACCEPT_EULA=Y apt-get install msodbcsql18 --assume-yes # Configure ENV for /bin/bash to use MSODBCSQL18 RUN echo 'export PATH=&quot;$PATH:/opt/mssql-tools/bin&quot;' &gt;&gt; ~/.bash_profile RUN echo 'export PATH=&quot;$PATH:/opt/mssql-tools/bin&quot;' &gt;&gt; ~/.bashrc # Upgrade pip RUN pip3 install --upgrade pip3 # Install Python libraries from requirements.txt COPY requirements.txt requirements.txt RUN pip3 install -r requirements.txt # Copy project code into image COPY . . 
</code></pre> <p>Within the container I am running a python script that attempts to connect to a SQL Server database using pyodbc like so:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import pyodbc conn = pyodbc.connect('Driver={ODBC Driver 18 for SQL Server};' 'Server=&lt;my-server-name&gt;;' 'Database=&lt;my-db-name&gt;;' 'Trusted_Connection=yes;') cursor = conn.cursor() </code></pre> <p>I have set <code>Trusted_Connection</code> to 'yes', which should allow use of MFA, but presumably only under Windows. Instead I get the error message:</p> <pre class="lang-bash prettyprint-override"><code>Error: ('HY000', '[HY000] [Microsoft][ODBC Driver 18 for SQL Server]SSPI Provider: No Kerberos credentials available (default cache: FILE:/tmp/krb5cc_0) (851968) (SQLDriverConnect)') </code></pre> <p>I'm not at all familiar with Kerberos, but it seems I should be able to obtain a ticket using <code>kinit</code>. Reading the docs, it looks like I need to install kinit into the container and also create the file <a href="https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html" rel="nofollow noreferrer">krb5.conf</a> but I'm not sure how to figure out what needs to go in this file, or even if this is the right approach at all. Can anyone please point me in the right direction with this authentication step?</p>
<python><sql-server><docker><azure-active-directory><kerberos>
2023-01-24 15:44:10
1
371
Violet
75,223,845
381,281
How to fill a contingency table from the marginal distributions given some optimization constraints?
<p>I need to find the cells of a contingency table given the marginal distributions.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>2574</th> <th>2572</th> <th>3393</th> <th>3768</th> <th>3822</th> <th><em><strong>b</strong></em></th> <th><em><strong>e</strong></em></th> </tr> </thead> <tbody> <tr> <td>x₁₁</td> <td>x₁₂</td> <td>x₁₃</td> <td>x₁₄</td> <td>x₁₅</td> <td><strong>187</strong></td> <td>23846753.74</td> </tr> <tr> <td>x₂₁</td> <td>x₂₂</td> <td>x₂₃</td> <td>x₂₄</td> <td>x₂₅</td> <td><strong>3</strong></td> <td>324024.64</td> </tr> <tr> <td>x₃₁</td> <td>x₃₂</td> <td>x₃₃</td> <td>x₃₄</td> <td>x₃₅</td> <td><strong>13755</strong></td> <td>1489591510.50</td> </tr> <tr> <td>x₄₁</td> <td>x₄₂</td> <td>x₄₃</td> <td>x₄₄</td> <td>x₄₅</td> <td><strong>543</strong></td> <td>76173239.22</td> </tr> <tr> <td>x₅₁</td> <td>x₅₂</td> <td>x₅₃</td> <td>x₅₄</td> <td>x₅₅</td> <td><strong>68</strong></td> <td>8188751.57</td> </tr> <tr> <td>x₆₁</td> <td>x₆₂</td> <td>x₆₃</td> <td>x₆₄</td> <td>x₆₅</td> <td><strong>1332</strong></td> <td>172945247.86</td> </tr> <tr> <td>x₇₁</td> <td>x₇₂</td> <td>x₇₃</td> <td>x₇₄</td> <td>x₇₅</td> <td><strong>361</strong></td> <td>41675606.70</td> </tr> </tbody> </table> </div> <p>xᵢⱼ are non-negative integers I want to find. The column sums are exact. 
The row sums (<em>b</em>) are approximate.</p> <p>There are additional constraints to guide the distribution of the xᵢⱼ:</p> <p>Given constant factors <em>F</em> = (13336.41847153, 102412.73466321, 41811.01724119, 78689.83110577, 282353.66682778)<sup>T</sup> and the expected result <em>e</em> then <em>X·F ≈ e</em></p> <p>Is this a Mixed-Integer Quadratic Programming problem if I want to minimize the deviations of the approximate equalities?</p> <ul> <li><em>b</em> are the column sums</li> <li><em>A</em> is a vector of ones, so that every column gets summed up</li> <li><em>d</em> are the row sums</li> <li><em>C</em> is a vector of ones, such that every row gets summed up</li> <li><em>e</em> is a vector of expected weighted sums</li> <li><em>F</em> is a constant vector of weights</li> </ul> <p>The optimization problem can be formulated as</p> <ul> <li>min (‖ <em>X C – d</em> ‖² + ‖ <em>X F – e</em> ‖²)</li> <li>such that <em>A X = b</em></li> <li><em>x</em> ≥ 0</li> <li><em>x</em> ∈ ℕ</li> </ul> <p>I’ve tried to solve this using <a href="https://www.cvxpy.org/" rel="nofollow noreferrer">cvxpy</a>:</p> <pre><code>import cvxpy as cp import numpy as np F = np.array([13336.41847153, 102412.73466321, 41811.01724119, 78689.83110577, 282353.66682778]) e = [23846753.74, 324024.64, 1489591510.50, 76173239.22, 8188751.57, 172945247.86, 41675606.70] d = [187., 3., 13755., 543., 68., 1332., 361.] 
C = np.ones(len(F)) b = np.array([2574, 2572, 3393, 3768, 3822]) A = np.ones(len(d)) x = cp.Variable((len(e), len(b)), integer=True) cost = cp.sum_squares(x @ C - d) + cp.sum_squares(x @ F - e) objective = cp.Minimize(cost) constraint_gt0 = x &gt;= 0 constraint_eq = A @ x == b problem = cp.Problem(objective, [constraint_gt0, constraint_eq]) solution = problem.solve() </code></pre> <p>But this results in the error message:</p> <p>Either candidate conic solvers (['GLPK_MI', 'SCIPY']) do not support the cones output by the problem (SOC, NonNeg, Zero), or there are not enough constraints in the problem.</p> <p>If I remove the <code>integer=True</code> constraint the method completes without error but doesn’t find a solution.</p> <p>There are many of these tables to solve, which are all independent of each other. So I need a solution in code, not the set of <em>x</em> for the particular given example.</p> <p>My questions:</p> <p>Is this a well-formed problem and solvable?</p> <p>Is it under-constrained like the error message suggests?</p> <p>Why does it result in a Second-Order Cone (SOC) as mentioned in the error message? That sounds unnecessarily complicated. I thought this is a “least squares problem with some extra constraints”.</p> <p>How can I solve the problem with cvxpy or some other Python package?</p> <p>If an approximate, non-integer solution is more feasible, how would I achieve that?</p>
<python><mathematical-optimization><cvxpy><mixed-integer-programming><cvxopt>
2023-01-24 15:43:10
1
2,613
hfs
75,223,799
14,599,244
Is there any way to call a Python script from an Ansible playbook and pass the output back to the playbook as a variable for the next task?
<p>We are trying to pass the return output (JSON) from Firewall Facts (containing 1000+ policies) to a Python script; the Python script will then process that output (e.g. finding the right firewall policy to modify) and pass its result back to the Ansible playbook as a variable for the next task's execution.</p> <p>We have tested this method to convert CSV files to Excel, but we are not sure how it works in terms of variables.</p>
<python><ansible>
2023-01-24 15:40:21
2
347
UME
75,223,777
2,439,905
Extraction of the text content of mixed text/json list of strings using python3
<p>I have a corpus of texts packaged into JSON. I have already stripped of some outer JSON layers and I have now lists like the following one</p> <pre><code>data = [ '“Ми підписали угоду, яка дозволяє громадянам України подорожувати без віз до Монголії. Це означає, що громадяни України можуть в’їжджати до Монголії без віз на 90 днів упродовж 180 днів” - ', { 'type': 'link', 'attributes': { 'href': 'https://www.facebook.com/UkraineMFA/posts/2498886686831903', 'target': '_blank', 'rel': None }, 'content': [ 'повідомляє сторінка МЗС у Facebook' ] }, ' із посиланям на посла України у Польщі Андрія Дещицю.' ] </code></pre> <p>Each list element is either a text string or yet another JSON element encoded as a Python dict and finally I want to reduce it to its content (niftily already named 'content' here).</p> <p>What is the Pythonese way of achieving this task?</p> <p>P.S. The final output should be a line of plain text without any embellishments (neither from Python syntax nor from JSON) left, i.e. for the sample snippet</p> <pre><code>“Ми підписали угоду, яка дозволяє громадянам України подорожувати без віз до Монголії. Це означає, що громадяни України можуть в’їжджати до Монголії без віз на 90 днів упродовж 180 днів” - повідомляє сторінка МЗС у Facebook із посиланям на посла України у Польщі Андрія Дещицю. </code></pre>
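A sketch of a small recursive walker that seems to fit: strings pass through, dicts contribute whatever is under their `content` key, lists recurse, and everything else (e.g. `None` attribute values) contributes nothing. The sample input below is a shortened, hypothetical version of the data in the question:

```python
def extract_text(node):
    # strings pass through; dicts contribute 'content'; lists recurse
    if isinstance(node, str):
        return node
    if isinstance(node, dict):
        return ''.join(extract_text(c) for c in node.get('content', []))
    if isinstance(node, list):
        return ''.join(extract_text(c) for c in node)
    return ''  # None attribute values, numbers, etc.
```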
<python><json><list><text-processing>
2023-01-24 15:38:19
1
665
Sir Cornflakes
75,223,687
12,906,445
Cannot convert PyTorch ML model to TorchScript version
<p>I'm trying to convert <code>PyTorch ml model</code> into <code>TorchScript Version</code> and then convert it to <code>Core ML</code> using <code>coremltools</code>.</p> <p><strong>While trying to convert <code>PyTorch ml model</code> into <code>TorchScript Version</code> my code below keep getting following error:</strong> <code>Dictionary inputs to traced functions must have consistent type. Found Tensor and Tuple[Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor]]</code></p> <p>my code:</p> <pre><code>model = AutoModelForSeq2SeqLM.from_pretrained(&quot;Seungjun/t5-small-finetuned-xsum&quot;) model.eval() decoder_input_ids = output traced_model = torch.jit.trace(model, (inputs['input_ids'], inputs['attention_mask'], output)) out = traced_model(inputs['input_ids']) </code></pre> <p>But all three parameters for <code>torch.jit.trace</code> have same type like follow:</p> <pre><code>inputs['input_ids'].shape torch.Size([1, 219]) inputs['attention_mask'].shape torch.Size([1, 219]) output.shape torch.Size([1, 23]) </code></pre> <p>Does anyone know why this is happening or is there something wrong with my code?</p>
<python><machine-learning><pytorch><coremltools>
2023-01-24 15:30:32
1
1,002
Seungjun
75,223,644
13,762,083
How to plot a rectangle facing in a particular direction in matplotlib?
<p>I would like to plot a rectangle centered at the position <code>(x, y, z)</code>, with length <code>a</code> along the <code>(n1x, n1y, n1z)</code> direction and width <code>b</code> along the <code>(n2x, n2y, n2z)</code> direction. I also want the surface of the rectangle to face a certain direction, the direction being specified by a unit vector perpendicular to the face of the rectangle <code>(nx, ny, nz</code>).</p> <p>How do I create a function that takes in <code>(x, y, z)</code>, <code>a</code>, <code>b</code>, and <code>(nx, ny, nz)</code> which plots a rectangle using matplotlib?</p>
<python><matplotlib><plot>
2023-01-24 15:28:23
1
409
ranky123
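A hedged sketch of the geometry half of the problem (the function name and the seed-vector trick are mine, not from the question): given only the face normal, two orthonormal in-plane directions can be built with cross products, and the four corners follow from stepping a/2 and b/2 along them. The resulting corner array can then be handed to matplotlib's Poly3DCollection for plotting.

```python
import numpy as np

def rectangle_corners(center, a, b, normal):
    # Build two orthonormal in-plane directions u, v from the face
    # normal, then step +/- a/2 along u and +/- b/2 along v.
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    # any vector not parallel to n works as a seed for the cross product
    seed = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(n, seed)
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    c = np.asarray(center, dtype=float)
    return np.array([c + a / 2 * u + b / 2 * v,
                     c - a / 2 * u + b / 2 * v,
                     c - a / 2 * u - b / 2 * v,
                     c + a / 2 * u - b / 2 * v])
```

When explicit length/width directions (n1x, n1y, n1z) and (n2x, n2y, n2z) are available, they can simply replace u and v after normalization.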
75,223,506
5,288,867
Access blob in storage container from function triggered by Event Grid
<p>Just for reference I am coming from AWS so any comparisons would be welcome.</p> <p>I need to create a function which detects when a blob is placed into a storage container and then downloads the blob to perform some actions on the data in it.</p> <p>I have created a storage account with a container in, and a function app with a python function in it. I have then set up a event grid topic and subscription so that blob creation events trigger the event. I can verify that this is working. This gives me the URL of the blob which looks something like <code>https://&lt;name&gt;.blob.core.windows.net/&lt;container&gt;/&lt;blob-name&gt;</code>. However then when I try to download this blob using BlobClient I get various errors about not having the correct authentication or key. Is there a way in which I can just allow the function to access the container in the same way that in AWS I would give a lambda an execution role with S3 permissions, or do I need to create some key to pass through somehow?</p> <p>Edit: I need this to run ASAP when the blob is put in the container so as far as I can tell I need to use EventGrid triggers not the normal blob triggers</p>
<python><amazon-web-services><azure><azure-functions><azure-blob-storage>
2023-01-24 15:15:44
2
3,540
dangee1705
75,223,419
4,047,472
Python import function from another file via argparse
<p>I'm writing a small utility function which takes in input arguments of the location of a Python file, and also a function to call within the Python file</p> <p>For example <code>src/path/to/file_a.py</code></p> <pre><code>def foo(): ... </code></pre> <p>In the utility function, I'm parsing the arguments like so:</p> <pre><code>python ./util.py --path src/path/to/file_a.py --function foo </code></pre> <p>and the <code>foo</code> function needs to be used later in the <code>util.py</code> in another library:</p> <pre><code>def compile(): compiler.compile( function=foo, etc ) </code></pre> <p>What's the best way of importing the <code>foo</code> function via argparse?</p> <hr /> <p>Some initial thoughts:</p> <p><code>util.py</code>:</p> <pre><code>def get_args(): parser = argparse.ArgumentParser() parser.add_argument(&quot;--path&quot;, type=str) parser.add_argument(&quot;--function&quot;, type=str) return parser.parse_args() def compile(path, function): import path compiler.compile( function=path.function, etc ) if __name__ == &quot;__main__&quot;: args = get_args() compile( path=args.path function=args.function ) </code></pre> <p>however importing via argparse, and adding it to the function does not seem to work nicely.</p> <p>There's also the idea of using <code>sys.path.append</code>:</p> <pre><code>def compile(path, function): import sys sys.path.append(path) </code></pre> <p>but then I'm unsure of how to import the <code>foo</code> function from that.</p>
<python><python-3.x><argparse>
2023-01-24 15:07:31
1
7,592
Rekovni
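One stdlib-based sketch (the module name "user_module" is an arbitrary placeholder): argparse only delivers the path and function name as strings, and importlib turns them into a real callable that can be passed on to the compiler.

```python
import importlib.util
import sys

def load_function(path, function_name):
    # Import a module from an arbitrary file path, register it, and
    # fetch the requested function from it by name.
    spec = importlib.util.spec_from_file_location("user_module", path)
    module = importlib.util.module_from_spec(spec)
    sys.modules["user_module"] = module
    spec.loader.exec_module(module)
    return getattr(module, function_name)
```

`compile()` in the question could then receive `load_function(args.path, args.function)` directly, with no `sys.path` manipulation needed.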
75,223,415
7,031,021
python-docx trailing whitespaces not showing correctly
<p><strong>Goal</strong></p> <p>I am trying to add a text to a table cell where the text is a combination of 2 strings and the space between the strings of variable size so that the final text has the same length and it appears as if the second string is right aligned.</p> <p>I can either use format or ljust to combine the strings in python.</p> <pre class="lang-py prettyprint-override"><code>period = &quot;from Monday to Friday&quot; item_text = &quot;Some txt&quot; item_text2 = &quot;Some other txt&quot; t1 = &quot;t1: {:&lt;30}{:0}&quot;.format(item_text,period) t2 = &quot;t2: {:&lt;30}{:0}&quot;.format(item_text2,period) t3 = f&quot;t3: {item_text.ljust(30)}{period}&quot; t4 = f&quot;t4: {item_text2.ljust(30)}{period}&quot; from pprint import pprint pprint(t1) pprint(t2) pprint(t3) pprint(t4) </code></pre> <p><strong>Text in python with variable space length between strings</strong></p> <p><a href="https://i.sstatic.net/ebyBsl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ebyBsl.png" alt="enter image description here" /></a></p> <p>However, if I add this text to a docx table, the space between the strings changes.</p> <pre class="lang-py prettyprint-override"><code>from docx import Document doc = Document() # Creating a table object table = doc.add_table(rows=2, cols=2, style=&quot;Table Grid&quot;) table.rows[0].cells[0].text = f&quot;{item_text.ljust(30)}{period}&quot; table.rows[1].cells[0].text = f&quot;{item_text2.ljust(30)}{period}&quot; def set_col_widths(table): widths = tuple( Cm(val) for val in [15,8]) for row in table.rows: for idx, width in enumerate(widths): row.cells[idx].width = width set_col_widths(table) doc.save(&quot;test_whitespace.docx&quot;) </code></pre> <p><strong>Text in word. 
Space between strings changed.</strong></p> <p><a href="https://i.sstatic.net/GKYlR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GKYlR.png" alt="enter image description here" /></a></p> <p>Note I am aware that I could add a table to the table cell and left adjust the left and right adjust the right but that seems like way more code to write.</p> <p><strong>Question</strong> Why is the spacing changing in the word document and how can I create the text differently to get the desired goal?</p>
<python><python-docx>
2023-01-24 15:07:26
1
510
RSale
75,223,400
5,956,459
Import a module in python as a command line option
<p>Is there a way to import a module through the command line, that gets exposed to the running script (or ideally through all execution)? My use case is to debug: I have a set of debugging utils (to display images, histograms...) that I only want to import while debugging (and that other people on my team do not need to care about, the debugging code doesn't get pushed to the main repo, it doesn't get imported at execution time...).</p> <p>For example, when debugging on my end, I would like to do: <code>python --option &quot;import debug_utils.py&quot; main.py</code>, and my personal debugging functionalities would be visible to the running scripts.</p> <p>At execution time others and me would simply execute, without package/import conflicts: <code>python main.py</code>, and the debugging utils are not imported.</p> <p>Thanks!</p>
<python><debugging><command-line-interface>
2023-01-24 15:06:26
1
313
Manel B
75,223,246
1,564,070
Selenium blocking downloads with Edge?
<p>I have a web automation tool developed with Python and Selenium and using the MS Edge driver. All is working well except when downloading a file. The message &quot;Couldn't download - blocked&quot; appears in the Downloads window.</p> <p>I've tested this same case with a user-launched instance of Edge and it works fine. So I'm guessing the selenium launched instance of Edge has some restriction applied to it. Not sure how to approach this?</p> <p>Environment: Windows 10 Enterprise 19044.2486 (no admin access) Selenium: 4.1.3 Edge: Version 109.0.1518.61 (Official build) (64-bit) Msedgedriver.exe: 109.0.1518.52 Python.exe: 3.10.4150.1013</p>
<python><selenium-webdriver><microsoft-edge>
2023-01-24 14:54:00
1
401
WV_Mapper
75,223,206
5,597,037
Python YFinance - Earnings Calendar no longer works
<p>I am trying to get the next earnings date using Python. Up until now, I've been using:</p> <pre><code>obj = yf.Ticker('TSLA') cal = obj.calendar next_earnings_date = cal.iloc[0][0] </code></pre> <p>I think the API has recently broken. I have also tried:</p> <pre><code>import pandas_datareader as web import pandas as pd df = web.DataReader('AAPL', data_source='yahoo', start='2011-01-01', end='2021-01-12') df.head() import yfinance as yf aapl = yf.Ticker(&quot;AAPL&quot;) print(aapl.earnings) </code></pre> <p>I tried this one also:</p> <pre><code>import yahoo_fin.stock_info as si for ticker_symbol in ['IMGN', 'DIDI']: print('Symbol:', ticker_symbol) try: tickerEarnings = si.get_next_earnings_date(ticker_symbol) print('Earnings:', tickerEarnings) except Exception as ex: print('[Exception]', ex) print('Earnings: skipping this symbol') </code></pre> <p>The reason I think the API is broken is because, in some of the things that I have tried, I'm getting:</p> <pre><code>data = j[&quot;context&quot;][&quot;dispatcher&quot;][&quot;stores&quot;][&quot;HistoricalPriceStore&quot;] TypeError: string indices must be integers </code></pre>
<python><yfinance>
2023-01-24 14:50:30
1
1,951
Mike C.
75,223,099
11,564,487
NA_character_ not identified as NaN after importing it into Python with rpy2
<p>I am using the following code inside a R magic cell:</p> <pre><code>%%R -o df library(tibble) df &lt;- tibble(x = c(&quot;a&quot;, &quot;b&quot;, NA)) </code></pre> <p>However, when I run in another cell (a Python one):</p> <pre><code>df.isna() </code></pre> <p>I get</p> <pre><code> x 1 False 2 False 3 False </code></pre> <p>In fact, the imported dataframe is</p> <pre><code> x 1 a 2 b 3 NA_character_ </code></pre> <p>How can I convert <code>NA_character_</code> to a Python <code>NaN</code>?</p> <p>I have tried</p> <pre><code>df.replace('NA_character_', np.nan) </code></pre> <p>but with no success.</p>
<python><r><rpy2>
2023-01-24 14:41:33
1
27,045
PaulS
75,222,998
9,377,539
Why am I not able to scrape all pages from a website with BeautifulSoup?
<p>I'm trying to get all the data from all pages, i used a counter and cast it to take the page number in the url then looped using this counter but always the same result This is my code :</p> <pre><code> # Scrapping job offers from hello work website #import libraries import random import requests import csv from bs4 import BeautifulSoup from datetime import date #configure user agent for mozilla browser user_agents = [ &quot;Mozilla/5.0 (Windows NT 10.0; rv:91.0) Gecko/20100101 Firefox/91.0&quot;, &quot;Mozilla/5.0 (Windows NT 10.0; rv:78.0) Gecko/20100101 Firefox/78.0&quot;, &quot;Mozilla/5.0 (X11; Linux x86_64; rv:95.0) Gecko/20100101 Firefox/95.0&quot; ] random_user_agent= random.choice(user_agents) headers = {'User-Agent': random_user_agent} </code></pre> <p>here where i have used my counter:</p> <pre><code>i=0 for i in range(1,15): url = 'https://www.hellowork.com/fr-fr/emploi/recherche.html?p='+str(i) print(url) page = requests.get(url,headers=headers) if (page.status_code==200): soup = BeautifulSoup(page.text,'html.parser') jobs = soup.findAll('div',class_=' new action crushed hoverable !tw-p-4 md:!tw-p-6 !tw-rounded-2xl') #config csv csvfile=open('jobList.csv','w+',newline='') row_list=[] #to append list of job try : writer=csv.writer(csvfile) writer.writerow([&quot;ID&quot;,&quot;Job Title&quot;,&quot;Company Name&quot;,&quot;Contract type&quot;,&quot;Location&quot;,&quot;Publish time&quot;,&quot;Extract Date&quot;]) for job in jobs: id = job.get('id') jobtitle= job.find('h3',class_='!tw-mb-0').a.get_text() companyname = job.find('span',class_='tw-mr-2').get_text() contracttype = job.find('span',class_='tw-w-max').get_text() location = job.find('span',class_='tw-text-ellipsis tw-whitespace-nowrap tw-block tw-overflow-hidden 2xsOld:tw-max-w-[20ch]').get_text() publishtime = job.find('span',class_='md:tw-mt-0 tw-text-xsOld').get_text() extractdate = date.today() row_list=[[id,jobtitle,companyname,contracttype,location,publishtime,extractdate]] 
writer.writerows(row_list) finally: csvfile.close() </code></pre>
<python><web-scraping><beautifulsoup><python-requests>
2023-01-24 14:33:49
1
711
K_mns
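One structural issue stands out in the loop above: `open('jobList.csv','w+')` sits inside the page loop, so every page truncates the file and only the last page's rows survive. A sketch of the fixed shape, with the network and parsing parts abstracted away (the iterables of row tuples stand in for the parsed pages):

```python
import csv

def save_jobs(path, pages_of_jobs):
    # Open the file once and write the header once; then append rows
    # from every page. Reopening with 'w+' inside the loop (as in the
    # question) truncates the file on each iteration.
    with open(path, 'w', newline='') as csvfile:
        writer = csv.writer(csvfile)
        writer.writerow(["ID", "Job Title", "Company Name"])
        for jobs in pages_of_jobs:      # one iterable of rows per page
            for job in jobs:            # job = (id, title, company)
                writer.writerow(job)
```

In the question's code, the same pattern means moving the `open()`, `csv.writer`, and header write above the `for i in range(1, 15)` loop, and closing the file after it.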
75,222,929
6,435,921
Remove rows of dataframe contained in list (without using a loop)
<h1>Problem Explanation</h1> <p>I have a dataframe with two columns <code>'A'</code> and <code>'B'</code>. I also have a list of tuples where the first element of the tuple is an element in the column <code>'A'</code>, and the second is in the column <code>'B'</code>. I would like to remove all rows of the dataframe coinciding with the tuples.</p> <p>Of course, I could just use a loop, but I want a smarter solution that would be faster and cleaner.</p> <h1>Minimal Working Example</h1> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.DataFrame( { 'A': ['a', 'b', 'c', 'd', 'a', 'd', 'a', 'c'], 'B': [4, 2, 2, 1, 3, 4, 3, 2], } ) rows_to_remove = [('a', 4), ('c', 2), ('d', 4), ('a', 3)] </code></pre>
<python><pandas><numpy>
2023-01-24 14:27:50
5
3,601
Euler_Salter
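One loop-free sketch (the helper name is mine): build a temporary (A, B) MultiIndex and keep the rows whose pair is not in the removal list.

```python
import pandas as pd

def drop_pairs(df, pairs):
    # Keep rows whose ('A', 'B') combination is absent from `pairs`;
    # MultiIndex.isin accepts a list of tuples directly.
    mask = df.set_index(['A', 'B']).index.isin(pairs)
    return df[~mask]
```

On the minimal working example, only the rows ('b', 2) and ('d', 1) survive.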
75,222,897
6,282,032
Sort dataframe based on minimum value of two columns
<p>Let's assume I have the following dataframe:</p> <pre><code>import pandas as pd d = {'col1': [1, 2,3,4], 'col2': [4, 2, 1, 3], 'col3': [1,0,1,1], 'outcome': [1,0,1,0]} df = pd.DataFrame(data=d) </code></pre> <p>I want this dataframe sorted by col1 and col2 on the minimum value. The order of the indexes should be 2, 0, 1, 3.</p> <p>I tried this with <code>df.sort_values(by=['col2', 'col1'])</code>, but then it takes the minimum of col1 first and then of col2. Is there any way to order by taking the minimum of two columns?</p>
<python><pandas>
2023-01-24 14:24:51
4
854
Tox
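A sketch that reproduces the requested order 2, 0, 1, 3 under one reading of the tie-break (rows 2 and 0 tie on the minimum, so the smaller of the two remaining values decides; the helper-column names are mine):

```python
import pandas as pd

def sort_by_min(df):
    # Order rows by the row-wise minimum of col1/col2, breaking ties
    # on the row-wise maximum; _min/_max are temporary helper columns.
    key = df.assign(_min=df[['col1', 'col2']].min(axis=1),
                    _max=df[['col1', 'col2']].max(axis=1))
    return df.loc[key.sort_values(['_min', '_max']).index]
```

If ties should instead be left in original order, `df.loc[df[['col1','col2']].min(axis=1).sort_values(kind='stable').index]` would give 0, 2, 1, 3.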
75,222,562
7,713,770
How to merge two models into one model with django for admin?
<p>I have two identical models:</p> <pre><code>class AnimalGroup(models.Model): name = models.CharField(max_length=50, unique=True) description = models.TextField(max_length=1000, blank=True) images = models.ImageField(upload_to=&quot;photos/groups&quot;) class AnimalSubGroup(models.Model): name = models.CharField(max_length=50, unique=True) description = models.TextField(max_length=1000, blank=True) images = models.ImageField(upload_to=&quot;photos/groups&quot;) </code></pre> <p>and they have a one-to-many relationship. So AnimalGroup can have multiple AnimalSubGroups.</p> <p>But as you see they have identical fields.</p> <p>Question: how to make one model of this?</p> <p>So that in Admin I can create animalgroups, like:</p> <ul> <li>mammals</li> <li>fish</li> </ul> <p>And then I can create the subgroups. Like</p> <ul> <li>bigCat and then I select mammals.</li> </ul>
<python><django>
2023-01-24 13:59:06
2
3,991
mightycode Newton
75,222,540
5,896,319
How to solve libmagic.dylib - incompatible architecture error?
<p>I'm trying to install my Django project to my M1 Mac laptop but it is giving errors.</p> <pre><code>..OSError: dlopen(/Users/e.celik/Library/Python/3.8/lib/python/site-packages/magic/libmagic/libmagic.dylib, 0x0006): tried: '/Users/e.celik/Library/Python/3.8/lib/python/site-packages/magic/libmagic/libmagic.dylib' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/e.celik/Library/Python/3.8/lib/python/site-packages/magic/libmagic/libmagic.dylib' (no such file), '/Users/e.celik/Library/Python/3.8/lib/python/site-packages/magic/libmagic/libmagic.dylib' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')) </code></pre> <p>I do not know why is this happening? I'm using python 3.8 and last version of pip.</p> <p>How can I fix that?</p>
<python><django>
2023-01-24 13:57:16
1
680
edche
75,222,273
618,579
Grabbing the octet stream data from a Graph API response
<p>I have been working on some code to download a day's worth of Teams usage data from the Graph API. I can successfully send the token and receive the response. The response apparently contains the URL in the header to download the csv file. I can't seem to find the code to grab it though.</p> <p>My code at the moment is as follows.</p> <pre><code>import requests, urllib, json, csv, os client_id = urllib.parse.quote_plus('XXXX') client_secret = urllib.parse.quote_plus('XXXX') tenant = urllib.parse.quote_plus('XXXX') auth_uri = 'https://login.microsoftonline.com/' + tenant \ + '/oauth2/v2.0/token' auth_body = 'grant_type=client_credentials&amp;client_id=' + client_id \ + '&amp;client_secret=' + client_secret \ + '&amp;scope=https%3A%2F%2Fgraph.microsoft.com%2F.default' authorization = requests.post(auth_uri, data=auth_body, headers={'Content-Type': 'application/x-www-form-urlencoded'}) token = json.loads(authorization.content)['access_token'] graph_uri = 'https://graph.microsoft.com/v1.0/reports/getTeamsUserActivityUserDetail(date=2023-01-22)' response = requests.get(graph_uri, data=auth_body, headers={'Content-Type': 'application/json', 'Authorization': 'Bearer ' + token}) print(response.headers) </code></pre> <p>Is there any easy way to parse the URL from the header and to obtain the CSV file?</p> <p>REF: <a href="https://learn.microsoft.com/en-us/graph/api/reportroot-getteamsuseractivityuserdetail?view=graph-rest-beta" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/graph/api/reportroot-getteamsuseractivityuserdetail?view=graph-rest-beta</a></p>
<python><microsoft-graph-api>
2023-01-24 13:37:07
1
2,513
Nathan
75,222,252
9,919,423
ValueError: Can't convert Python sequence with a value out of range for a double-precision float
<p>when I use TensorBoard to record the <code>adaptive learning rate</code> of a neural network, I use this code:</p> <pre><code>summary_writer = tf.summary.create_file_writer(log_dir) with summary_writer.as_default(): tf.summary.scalar('metrics/learning rate', data=float(model.optimizer._hyper['learning_rate']), step=epoch) </code></pre> <p>The problem is the <code>model.optimizer._hyper['learning_rate']</code> goes all the way up to an extremely large number, 4.07e+38. Then, the program yields an error:</p> <pre><code>ValueError: Can't convert Python sequence with a value out of range for a double-precision float. </code></pre> <p>How can I fix it?</p>
<python><tensorflow><deep-learning><neural-network><tensorboard>
2023-01-24 13:35:24
0
412
David H. J.
75,222,239
4,342,608
How to set operation id for custom application insight events?
<p>We have running on Azure a Python Web App (flask). It processes requests and also logs some results in the end via a custom event to Application Insights.</p> <p>However inside Application Insights our end-to-end transaction Operation ID for our Custom Events is 0.</p> <p><a href="https://i.sstatic.net/NaYd2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NaYd2.png" alt="enter image description here" /></a></p> <p>Other event types such as Requests, do have the operation id. How can we get the same operation id in here as the Request event?</p> <pre><code>import logging from opencensus.ext.azure.log_exporter import AzureEventHandler, AzureLogHandler logger = logging.getLogger(__name__) logging.basicConfig(level=logging.INFO) logger.addHandler(AzureEventHandler( connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000')) def track_result(result): properties = {'result': result} logger.info('Result log', extra={'custom_dimensions': properties}) return None </code></pre>
<python><flask><azure-application-insights><azure-webapps>
2023-01-24 13:34:03
1
489
Swifting
75,222,111
11,222,963
How to create a fake but realistic scatter plot showing a relationship?
<p>I would like to generate some dummy data to show a positive relationship in a scatterplot.</p> <p>I have some code below but the output looks too &quot;perfect&quot;:</p> <pre><code>import random import pandas as pd # num_obs = number of observations def x_and_y(num_obs): x_list = [] y_list = [] for i in range(1,num_obs): # between 1 and 10,000 x = round(random.randint(1,10000)) y_ratio = random.uniform(0.15,0.2) # multiply each X by above ratio y = round(x*y_ratio) # add to list x_list.append(x) y_list.append(y) return x_list, y_list # run function x, y = x_and_y(500) # add to dataframe and plot df = pd.DataFrame(list(zip(x, y)), columns =['X', 'Y']) df.plot.scatter(x='X', y='Y') </code></pre> <p>I get this very clean looking relationship:</p> <p><a href="https://i.sstatic.net/EuijP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EuijP.png" alt="enter image description here" /></a></p> <p>Is there anything I can do to make it look more natural / scattered without losing the relationship?</p> <p>Something like this (just a screenshot from google):</p> <p><a href="https://i.sstatic.net/mPO6h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mPO6h.png" alt="enter image description here" /></a></p>
<python><numpy><random>
2023-01-24 13:22:55
1
3,416
SCool
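The cleanness in the question's plot comes from making y a near-constant multiple of x; a multiplicative ratio alone keeps every point inside a narrow wedge. Adding independent Gaussian noise on top loosens the cloud while preserving the trend. A sketch (the noise scale 300 is an arbitrary knob to tune by eye, and 0.175 is just the midpoint of the question's 0.15-0.2 ratio range):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(1, 10_001, size=500)
# additive noise independent of x is what makes the cloud look organic
y = 0.175 * x + rng.normal(0, 300, size=500)
```

`pd.DataFrame({'X': x, 'Y': y}).plot.scatter(x='X', y='Y')` then shows a noisier but still clearly positive relationship; increasing the noise scale weakens the apparent correlation.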
75,222,110
14,843,373
Built in way to convert this data to dict or json in python
<p>So I have an input of</p> <pre><code>s = &quot;['some.dot.seperated.words.here[id=123,rapidId=76,state=CLOSED,name=james_simpson,startDate=2021-10-30T11:16:00.000Z,endDate=2022-12-23T11:16:00.000Z,completeDate=2023-01-02T11:07:43.518Z,activatedDate=2022-10-30T11:20:03.627Z,sequence=33643,goal=do something\nfun\nwith flags, autoStartStop=false]']&quot; </code></pre> <p>and i want to end up with a python dict like</p> <pre><code>example = { &quot;id&quot;:123, &quot;rapidId&quot;:76, &quot;state&quot;:&quot;CLOSED&quot;, &quot;name&quot;:&quot;james_simpson&quot;, &quot;startDate&quot;:&quot;2021-10-30 11:16:00.000&quot;, &quot;endDate&quot;:&quot;2022-12-23 11:16:00.000&quot;, &quot;completeDate&quot;:&quot;2023-01-02 11:07:43.518&quot;, &quot;activatedDate&quot;:&quot;2022-10-30 11:20:03.627&quot;, &quot;sequence&quot;:33643, &quot;goal&quot;:&quot;do something\nfun\nwith flags&quot;, &quot;autoStartStop&quot;:False } </code></pre> <p><strong>The question I have is</strong>: Is there a pre built way to achieve this, similar to json.loads?</p> <p>I understand I could do something like:</p> <pre><code> print(&quot;s\n&quot;, s) s2 = s[s.find(&quot;[&quot;) + 1 : s.find(&quot;]&quot;)] print(&quot;s2\n&quot;, s2) s3 = dict(u.split(&quot;=&quot;) for u in s2.split(&quot;,&quot;)) print(s3) </code></pre> <p>but why would someone store it like that and require such wrangling ... :/</p>
<python>
2023-01-24 13:22:54
1
361
beautysleep
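There is no prebuilt loader for this bracketed key=value format (`json.loads` will not accept it), so some wrangling is unavoidable. A regex-based sketch of the field-splitting step (the helper name is mine, and timestamp reformatting is left out): split only on commas that are directly followed by another `key=` pair, so newlines and loose commas inside values like `goal` survive, then coerce ints and booleans.

```python
import re

def parse_fields(inner):
    # `inner` is the key=value payload between the inner brackets.
    # Split on commas followed by another key=, so commas inside
    # values are preserved; then coerce ints and true/false.
    out = {}
    for pair in re.split(r',\s*(?=\w+=)', inner):
        key, _, value = pair.partition('=')
        if value in ('true', 'false'):
            out[key] = value == 'true'
        else:
            try:
                out[key] = int(value)
            except ValueError:
                out[key] = value
    return out
```

Extracting `inner` from the outer string can be done as in the question, but with `rfind(']')` so the inner bracket pair is captured whole.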
75,222,091
11,251,373
Python: partially initialize instance
<p>Not sure whether it is possible or not in general, but I have the following question:</p> <p>For example, we have class Foo</p> <pre class="lang-py prettyprint-override"><code>class Foo: def __init__(self, a, b, c): self.a = a self.b = b self.c = c def sum(self): return self.a + self.b + self.c </code></pre> <p>What I want to do is to somehow get a partially initialized instance of this class with <strong>a</strong>, <strong>b</strong> initialized and the possibility to initialize <strong>c</strong> later.</p> <pre class="lang-py prettyprint-override"><code># pseudocode partially_initialized = something(Foo, a=1, b=2) fully_initialized = partially_initialized(c=3) fully_initialized.sum() -&gt; 6 </code></pre> <p>Python version: 3.6</p>
<python>
2023-01-24 13:21:04
0
2,235
Aleksei Khatkevich
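`functools.partial` does exactly what the pseudocode's `something(Foo, a=1, b=2)` asks for: it freezes the given keyword arguments and defers the rest until the final call. A sketch (note that the question's `__init__` is missing `self`, added here):

```python
from functools import partial

class Foo:
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c

    def sum(self):
        return self.a + self.b + self.c

partially_initialized = partial(Foo, a=1, b=2)   # a and b fixed
fully_initialized = partially_initialized(c=3)   # c supplied later
```

The partial object is reusable, so several instances can be created from the same frozen a/b pair with different values of c. This works on Python 3.6.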
75,222,031
14,125,436
How to highlight area between 3D lines with texture in Python
<p>I want to fill between two curves in a 3D plot in matplotlib. I used <a href="https://stackoverflow.com/questions/36737053/mplot3d-fill-between-extends-over-axis-limits">this answer</a> to do that. But I want to use a texture for filling rather than a single color. I very much appreciate any help. I used the following syntax but it did not work:</p> <pre><code>ax.add_collection3d(Poly3DCollection([verts],facecolor=&quot;none&quot;, hatch=&quot;x&quot;, edgecolor=&quot;orange&quot;)) </code></pre> <p>The original, working code has only color as an argument:</p> <pre><code>ax.add_collection3d(Poly3DCollection([verts],color='orange')) </code></pre>
<python><matplotlib>
2023-01-24 13:16:02
1
1,081
Link_tester
75,221,888
5,561,875
Fast Savgol Filter on 3D Tensor
<p>I have a tensor of example shape <code>(543, 133, 3)</code>, meaning 543 frames, with 133 points of X,Y,Z</p> <p>I would like to run a <code>savgol_filter</code> on every point in every dimension, however, naively, this is quite slow:</p> <pre class="lang-py prettyprint-override"><code>points, frames, dims = tensor.shape new_data = [] for point in range(points): new_dims = [] for dim in range(dims): new_dims.append(scipy.signal.savgol_filter(data[point, :, dim], 3, 1)) new_data.append(new_dims) tensor = np.array(new_data) </code></pre> <p>On my computer, for this small tensor, this takes 300ms, which is quite a long time.</p> <p>Is there a way to make this faster?</p>
<python><numpy><scipy>
2023-01-24 13:03:10
1
6,344
Amit
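`scipy.signal.savgol_filter` accepts an `axis` argument, so the double loop can likely collapse into a single vectorized call along the frame axis; a sketch (the wrapper name is mine):

```python
from scipy.signal import savgol_filter

def smooth(tensor):
    # Filter along axis 1 (the frame axis in the question's slicing)
    # for every point and every dimension in one vectorized call.
    return savgol_filter(tensor, window_length=3, polyorder=1, axis=1)
```

This should produce the same values as the nested Python loops while doing the per-series work in compiled code, which is where the speedup comes from.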
75,221,879
16,978,074
create an edge list on films that share a genre
<p>Hello everyone, I'm doing a project to analyze a website and build a network graph with Python. I chose the themoviedb.org website. The nodes are the IDs of the movies, and the links between the nodes are the genres that two movies share. For example, node_A and node_B have a link if they have a genre in common. I extracted the nodes and put them in an array: nodes. I have for example:</p> <pre><code>[ {'id': 315162, 'label': 'Puss in Boots: The Last Wish', 'genre_ids_1': '16', 'genre_ids_2': '28'}, {'id': 536554, 'label': 'M3GAN', 'genre_ids_1': '878', 'genre_ids_2': '27'}, {'id': 76600, 'label': 'Avatar: The Way of Water', 'genre_ids_1': '878', 'genre_ids_2': '12'}, {'id': 653851, 'label': 'Devotion', 'genre_ids_1': '10752', 'genre_ids_2': '878'}, {'id': 846433, 'label': 'The Enforcer', 'genre_ids_1': '28', 'genre_ids_2': '53'} ] </code></pre> <p>So I want to make a link, for example, between the movie &quot;Puss in Boots: The Last Wish&quot; and the movie &quot;The Enforcer&quot;, which share the genre 28. I want as a result the edge list:</p> <pre><code>source target genre_ids 315162 846433 28 846433 315162 28 76600 536554 878 76600 653851 878 536554 76600 878 so on... 
</code></pre> <p>this is my code:</p> <pre><code>genres=[28,12,16,35,80,99,18,10751,14,36,27,10402,9648,10749,878,10770,53,10752,37] edges=[] nodes = [{'id': 315162, 'label': 'Puss in Boots: The Last Wish', 'genre_ids_1':'16','genre_ids_2': '28'},{'id': 536554, 'label': 'M3GAN','genre_ids_1':'878','genre_ids_2': '27'},{'id': 76600, 'label': 'Avatar: The Way of Water','genre_ids_1':'878', 'genre_ids_2': '12'},{'id': 653851, 'label': 'Devotion','genre_ids_1': '10752', 'genre_ids_2': '878'},{'id': 846433, 'label': 'The Enforcer','genre_ids_1': '28', 'genre_ids_2': '53'}] dictionary={} def get_edges(): for i in nodes: if i[&quot;genre_ids_1&quot;] in genres: dictionary.setdefault(i['genre_ids_1'], []).append(i['label']) elif i[&quot;genre_ids_2&quot;] in genres: dictionary.setdefault(i['genre_ids_2'], []).append(i['label']) if i[&quot;genre_ids_1&quot;] in dictionary: if i[&quot;label&quot;] not in dictionary[ i[&quot;genre_ids_1&quot;]][0]: edges.append({&quot;source&quot;:i[&quot;label&quot;],&quot;target&quot;:i[&quot;id&quot;],&quot;genre_id&quot;:dictionary[ i[&quot;genre_ids_1&quot;]][0] }) elif i[&quot;genre_ids_2&quot;] in dictionary: if i[&quot;label&quot;] not in dictionary[ i[&quot;genre_ids_2&quot;]][1]: edges.append({&quot;source&quot;:i[&quot;label&quot;],&quot;target&quot;:i[&quot;id&quot;],&quot;genre_id&quot;:dictionary[ i[&quot;genre_ids_2&quot;]][1] }) print(edges) get_edges() </code></pre> <p>How can i do?</p>
<python><arrays><dictionary><themoviedb-api><edge-list>
2023-01-24 13:02:19
1
337
Elly
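One bug worth noting in the posted code: the nodes store genre ids as strings ('28') while the `genres` list holds integers (28), so `i["genre_ids_1"] in genres` never matches. A restructured sketch (names mirror the question's variables; genre ids stay strings as in the nodes): bucket movie ids by genre first, then emit every ordered pair within a bucket.

```python
from collections import defaultdict
from itertools import permutations

def get_edges(nodes):
    # Bucket movie ids by genre, then emit both directions of every
    # pair inside a bucket, tagged with the shared genre id.
    by_genre = defaultdict(list)
    for n in nodes:
        for key in ('genre_ids_1', 'genre_ids_2'):
            by_genre[n[key]].append(n['id'])
    edges = []
    for genre, ids in by_genre.items():
        for src, dst in permutations(ids, 2):
            edges.append({'source': src, 'target': dst, 'genre_ids': genre})
    return edges
```

On the five sample nodes this yields the two genre-28 edges and the six genre-878 edges shown in the desired output.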
75,221,728
4,763,333
pipenv install cannot find any packages
<p>I cannot build a virtual environment using pipenv install; I always get the error &quot;No matching distribution found for aws-lambda-powertools&quot;, and if I remove that package I get the same error for another package I have in there.</p> <p>Stack trace (displayed when I try to build through PyCharm):</p> <pre><code> Pipfile.lock (cb83d8) out of date, updating to (459cf8)... Locking [packages] dependencies... ⠋ Locking... Building requirements... ⠙ Locking... Resolving dependencies... ⠹ Locking... ⠸ Locking... ⠼ Locking...✘ Locking Failed! CRITICAL:pipenv.patched.pip._internal.resolution.resolvelib.factory:Could not find a version that satisfies the requirement aws-lambda-powertools (from versions: none) [ResolutionFailure]: File &quot;/Users/n0359100/Library/Python/3.9/lib/python/site-packages/pipenv/resolver.py&quot;, line 833, in _main [ResolutionFailure]: resolve_packages( [ResolutionFailure]: File &quot;/Users/n0359100/Library/Python/3.9/lib/python/site-packages/pipenv/resolver.py&quot;, line 781, in resolve_packages [ResolutionFailure]: results, resolver = resolve( [ResolutionFailure]: File &quot;/Users/n0359100/Library/Python/3.9/lib/python/site-packages/pipenv/resolver.py&quot;, line 760, in resolve [ResolutionFailure]: return resolve_deps( [ResolutionFailure]: File &quot;/Users/n0359100/Library/Python/3.9/lib/python/site-packages/pipenv/utils/resolver.py&quot;, line 1103, in resolve_deps [ResolutionFailure]: results, hashes, markers_lookup, resolver, skipped = actually_resolve_deps( [ResolutionFailure]: File &quot;/Users/n0359100/Library/Python/3.9/lib/python/site-packages/pipenv/utils/resolver.py&quot;, line 892, in actually_resolve_deps [ResolutionFailure]: resolver.resolve() [ResolutionFailure]: File &quot;/Users/n0359100/Library/Python/3.9/lib/python/site-packages/pipenv/utils/resolver.py&quot;, line 687, in resolve [ResolutionFailure]: raise ResolutionFailure(message=str(e)) [pipenv.exceptions.ResolutionFailure]: Warning: Your dependencies could not be
resolved. You likely have a mismatch in your sub-dependencies. You can use $ pipenv install --skip-lock to bypass this mechanism, then run $ pipenv graph to inspect the situation. Hint: try $ pipenv lock --pre if it is a pre-release dependency. ERROR: No matching distribution found for aws-lambda-powertools </code></pre>
<python><pip><pipenv>
2023-01-24 12:48:07
0
1,425
AnonymousAlias
75,221,726
5,884,126
Reusing a variable from an if clause
<p>Let's say we have the following code:</p> <pre><code>if re.search(r&quot;b.&quot;, &quot;foobar&quot;): return re.search(r&quot;b.&quot;, &quot;foobar&quot;).group(0) </code></pre> <p>This is obviously a redundant call, which can be avoided by assigning the condition to a variable before the if block:</p> <pre><code>match = re.search(r&quot;b.&quot;, &quot;foobar&quot;) if match: return match.group(0) </code></pre> <p>However, this means that the condition is always evaluated. In the example above, that makes no difference, but if the match is only used in an <code>elif</code> block, that's an unnecessary execution.<br /> For example:</p> <pre><code>match = re.search(r&quot;b.&quot;, &quot;foobar&quot;) if somecondition: return &quot;lorem ipsum&quot; elif match: return match.group(0) </code></pre> <p>If <code>somecondition</code> is true and we had the redundant version like in the first code block, we would never call <code>re.search</code>. However, with the variable placed before the if-elif block like this, it would be called unnecessarily. And even with the duplicated call, we're instead executing it twice if <code>somecondition</code> is false.</p> <p>Unfortunately, depending on the use case, evaluating the condition could be very computationally expensive. With the two variants above, if performance is the goal, a choice can be made depending on the likelihood of <code>somecondition</code> evaluating to true. More specifically, if the elif block is called more often than not, declaring the variable before the if block is more performant (unless Python somehow caches the result of two identical stateless function calls), whereas the alternative is better if the elif block is rarely reached.</p> <p>Is there a way to avoid this duplication by reusing the variable from the (el)if block?</p>
<python><if-statement><optimization>
2023-01-24 12:47:55
0
964
PixelMaster
75,221,713
10,192,593
For loop through list of strings
<p>I am trying to save certain information out of xarray in a loop. I keep getting an error message. Here is an example:</p> <pre><code>import numpy as np import pandas as pd import xarray as xr samples = {} samples['first'] = [1,2] samples['second'] = [3,4] samples categories = list(samples.keys()) categories dta = [] for i in range(len(categories)): dta[categories[i]] = samples[categories[i]] dta </code></pre> <p>I get an error saying &quot;TypeError: list indices must be integers or slices, not str&quot;</p>
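The error comes from `dta = []`: a Python list can only be indexed by integers, so `dta[categories[i]] = ...` fails with exactly that `TypeError`. If the goal is to collect key/value pairs, `dta` should be a dict. A sketch of the corrected loop:

```python
samples = {'first': [1, 2], 'second': [3, 4]}
categories = list(samples.keys())

dta = {}                     # dict, not list: string keys are allowed
for cat in categories:
    dta[cat] = samples[cat]

# equivalent one-liners:
#   dta = dict(samples)
#   dta = {c: samples[c] for c in categories}
```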
<python>
2023-01-24 12:47:26
3
564
Stata_user
75,221,654
2,163,392
How to slice and calculate the Pearson correlation coefficient between a big and a small array with overlapping windows
<p>Suppose I have two very simple arrays with numpy:</p> <pre><code>import numpy as np reference=np.array([0,1,2,3,0,0,0,7,8,9,10]) probe=np.zeros(3) </code></pre> <p>I would like to find which slice of array <code>reference</code> has the highest Pearson's correlation coefficient with array <code>probe</code>. To do that, I would like to slice the array <code>reference</code> using some sort of sub-arrays that are overlapped in a for loop, which means I shift one element at a time of <code>reference</code>, and compare it against array <code>probe</code>. I did the slicing using the inelegant code below:</p> <pre><code>from statistics import correlation for i in range(0,len(reference)): #get the slice of the data sliced_data=reference[i:i+len(probe)] #only calculate the correlation when probe and reference have the same number of elements if len(sliced_data)==len(probe): my_rho = correlation(sliced_data, probe) </code></pre> <p>I have one issue and one question about this code:</p> <p>1- once I run the code, I get the error below:</p> <blockquote> <pre><code>my_rho = correlation(sliced_data, probe) File &quot;/usr/lib/python3.10/statistics.py&quot;, line 919, in correlation raise StatisticsError('at least one of the inputs is constant') statistics.StatisticsError: at least one of the inputs is constant </code></pre> </blockquote> <p>2- is there a more elegant way of doing such slicing with Python?</p>
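Two things are going on here. The `StatisticsError` is mathematically unavoidable: `probe = np.zeros(3)` is constant, and Pearson's r is undefined when either input has zero variance. With a non-constant probe, the whole slice-and-correlate loop can also be vectorised with `sliding_window_view` (NumPy ≥ 1.20). A sketch, using an assumed non-constant probe:

```python
import numpy as np

reference = np.array([0, 1, 2, 3, 0, 0, 0, 7, 8, 9, 10], dtype=float)
probe = np.array([8, 9, 10], dtype=float)   # must NOT be constant (np.zeros fails)

# All overlapping length-3 windows as one (n-2, 3) array, no Python loop
windows = np.lib.stride_tricks.sliding_window_view(reference, len(probe))

# Vectorised Pearson r of every window against the probe
w = windows - windows.mean(axis=1, keepdims=True)
p = probe - probe.mean()
denom = np.sqrt((w ** 2).sum(axis=1) * (p ** 2).sum())
with np.errstate(invalid="ignore", divide="ignore"):
    rho = (w @ p) / denom           # constant windows come out as NaN

best = int(np.nanargmax(rho))       # index of the best-matching slice
```

One caveat: any window that is an increasing linear transform of the probe also scores r = 1 (here `[0, 1, 2]` ties with `[8, 9, 10]`), so `nanargmax` returns the first such index.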
<python><arrays><numpy><pearson-correlation><pearson>
2023-01-24 12:42:41
1
2,799
mad
75,221,639
8,771,201
Python MySQL: get value from table before insert into the same table
<p>I have a table with different types of article numbers:</p> <pre><code>Date - Artsup - ArtTest - ArtCombo ------------------------------- 01-01-23 - S1 - T1 - S1T1 01-01-23 - S2 - T2 - S2T2 </code></pre> <p>Now I want to insert a new record in the same table, but first I want to check if I have an 'ArtCombo' available.</p> <p>So now I first read the table like this:</p> <pre><code>cur.execute('SELECT ArtCombo from table1 where ArtTest = %s and Date BETWEEN CURDATE() - INTERVAL 1 DAY AND CURDATE()') val = (varArtTest,) varArtCombo = value-from-select-query </code></pre> <p>In this way I get the ArtCombo value (if available). Now I do a new insert using this ArtCombo value, like this:</p> <pre><code>sql = &quot;INSERT INTO table1 (Date, ArtSup, ArtTest, ArtCombo) VALUES (%s, %s, %s, %s)&quot; val = (datetime.now(tz=None), varArtSup, varArtTest, varArtCombo) </code></pre> <p>This works, but wouldn't it be easier or faster to make this one single query? If so, how can that be achieved?</p>
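MySQL can do both steps in one round trip with <code>INSERT ... SELECT</code>: the SELECT pulls <code>ArtCombo</code> from the matching recent row while the INSERT writes the new record. A hedged sketch (table and column names are taken from the question; the variable values here are illustrative stand-ins, and note the behaviour differs from the two-step version in that nothing is inserted when no recent row matches):

```python
# Stand-ins for the script's real variables
varArtSup, varArtTest = "S3", "T1"

sql = """
    INSERT INTO table1 (Date, ArtSup, ArtTest, ArtCombo)
    SELECT NOW(), %s, %s, ArtCombo
    FROM table1
    WHERE ArtTest = %s
      AND Date BETWEEN CURDATE() - INTERVAL 1 DAY AND CURDATE()
    LIMIT 1
"""
params = (varArtSup, varArtTest, varArtTest)
# cur.execute(sql, params)   # run against the live connection, then commit
```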
<python><mysql>
2023-01-24 12:41:09
1
1,191
hacking_mike
75,221,569
9,102,437
How to monkeypatch a python library class method?
<p>I am trying to modify a <code>better_profanity</code> library to include an additional argument to <code>get_replacement_for_swear_word</code> function. To do so I first import the necessary parts of the library and test its functionality before:</p> <pre class="lang-py prettyprint-override"><code>from better_profanity import profanity, Profanity text = &quot;Nice c0ck&quot; censored = profanity.censor(text) print(censored) </code></pre> <p>Now I get the source code of the class method, modify it and execute it to <code>__main___</code>:</p> <pre class="lang-py prettyprint-override"><code>from inspect import getsource new_hide_swear_words = getsource(profanity._hide_swear_words).replace( 'get_replacement_for_swear_word(censor_char)', 'get_replacement_for_swear_word(censor_char, cur_word)').replace( 'ALLOWED_CHARACTERS', 'self.ALLOWED_CHARACTERS' ) # fixing the indent new_hide_swear_words = '\n'.join(i[4:] for i in new_hide_swear_words.split('\n')) exec(new_hide_swear_words) </code></pre> <p>Now I replace this function inside the class:</p> <pre><code>profanity._hide_swear_words = _hide_swear_words.__get__(profanity, Profanity) </code></pre> <p>Note that I swap <code>ALLOWED_CHARACTERS</code> for <code>self.ALLOWED_CHARACTERS</code>. This is because the author of the library has imported <code>ALLOWED_CHARACTERS</code> in the same file where the class is defined, so when I swap the function and try to run the first piece of code again, it says that this variable is not defined. It just so happens that it is stored in <code>self</code> as well, but there is no such luck with several other imported modules. Any ideas how to tackle this?</p> <p><a href="https://github.com/snguyenthanh/better_profanity/blob/b80ace7eb78addbdf7f625068e8a86f582d918c9/better_profanity/better_profanity.py" rel="nofollow noreferrer">Here</a> is the class definition on github.</p>
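The `NameError` happens because `exec` in `__main__` compiles the new function against `__main__`'s globals, while `ALLOWED_CHARACTERS` (and the other imports) live in the library module's globals. Executing the edited source into the defining module's own `__dict__` keeps every module-level name resolvable, with no per-name rewriting needed. A toy sketch of the idea, where `toylib` stands in for `better_profanity.better_profanity` (in the real case you would pass that module's `__dict__`, then bind with `__get__` exactly as in the question):

```python
import types

# Toy "library" module whose function depends on a module-level constant,
# mirroring the ALLOWED_CHARACTERS situation.
lib = types.ModuleType("toylib")
exec("ALLOWED = set('abc')\n"
     "def check(word):\n"
     "    return set(word) <= ALLOWED\n", lib.__dict__)

# Modified source (what getsource(...).replace(...) would produce)
new_src = ("def check(word):\n"
           "    return set(word.lower()) <= ALLOWED\n")

# Key point: exec with the library's own globals, not __main__'s,
# so ALLOWED keeps resolving inside the patched function.
namespace = {}
exec(new_src, lib.__dict__, namespace)
lib.check = namespace["check"]
```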
<python><python-3.x><monkeypatching>
2023-01-24 12:34:10
1
772
user9102437
75,221,387
4,654,120
Python - How to run a huggingface pipeline offline for the first time?
<p>I have a python code that uses a Huggingface pipeline.</p> <pre><code>from transformers import pipeline import pandas as pd tqa = pipeline(task=&quot;table-question-answering&quot;, model=&quot;google/tapas-base-finetuned-wtq&quot;) table = pd.read_csv(&quot;table.csv&quot;) table = table.astype(str) query = [&quot;who has the highest salary&quot;, &quot;what is Brenden's start date?&quot;] answers = tqa(table=table, query=query) </code></pre> <p>This code works fine from a computer that has unrestricted access to the internet. During the first run, this code downloads the transformers model and caches it; on subsequent runs it does not download the files again.</p> <p>But my problem is I need to make this program run in a restricted server environment that does not have internet connectivity.</p> <p>I was able to install all the required libraries using offline .whl files, but the first-time run is failing as it's not able to download the model.</p> <p>I have tried <code>os.environ['TRANSFORMERS_OFFLINE']='1'</code> but it will only work if those files have been downloaded at least once on the system.</p> <p>How can this be made to work? Can I download those files manually (if yes, how?) and place them somewhere on the target server manually?</p>
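One common approach is to fetch the model once on an internet-connected machine with `save_pretrained`, copy the resulting folder to the restricted server, and point the pipeline at the local path instead of the Hub id. A hedged sketch (the output folder name is arbitrary; the model id is the one from the question):

```python
def export_model(name="google/tapas-base-finetuned-wtq", out="tapas-wtq"):
    """Run ONCE on a machine with internet access: downloads the model
    and tokenizer, then writes everything into the `out` folder."""
    from transformers import AutoTokenizer, AutoModelForTableQuestionAnswering
    AutoTokenizer.from_pretrained(name).save_pretrained(out)
    AutoModelForTableQuestionAnswering.from_pretrained(name).save_pretrained(out)

# After copying the "tapas-wtq" folder to the offline server:
#   from transformers import pipeline
#   tqa = pipeline(task="table-question-answering",
#                  model="tapas-wtq", tokenizer="tapas-wtq")
```

With a local path, `TRANSFORMERS_OFFLINE=1` is no longer strictly needed, though it remains a useful guard against accidental network access.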
<python><machine-learning><huggingface-transformers>
2023-01-24 12:18:45
1
2,099
Bluemarble
75,221,290
9,196,760
Send scheduled message everyday to all users using the slack bot
<p>I am creating a Slack bot project using Django. I would like to send a scheduled message to every user of the workspace: every day, the bot will ask specific questions to all users, but individually. How can I post these questions every day on a specific schedule?</p> <p>Which tool or Django package would be appropriate for this?</p>
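A common stack for this in Django projects is Celery with a beat schedule (e.g. via `django-celery-beat`): beat fires a task once a day, and the task iterates over the workspace members (Slack Web API: `users_list`, `conversations.open`, `chat.postMessage`) to DM each one. A hedged sketch of the schedule's shape — the task path and time are illustrative, and the nested dict stands in for a real `celery.schedules.crontab(hour=9, minute=0)` object:

```python
# Hypothetical Celery beat entry; in a real config this dict is assigned
# to app.conf.beat_schedule on your Celery app instance.
beat_schedule = {
    "daily-questions": {
        "task": "bot.tasks.send_daily_questions",   # your Django/Celery task
        "schedule": {"hour": 9, "minute": 0},       # stand-in for crontab(...)
    },
}
```

Lighter-weight alternatives are a plain cron job calling a Django management command, or the `django-q`/`APScheduler` packages if Celery feels heavy for one daily task.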
<python><django><slack><slack-api>
2023-01-24 12:11:20
0
452
Barun Bhattacharjee