Dataset columns (value ranges observed across rows):

QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, lengths 15 to 150
QuestionBody: string, lengths 40 to 40.3k
Tags: string, lengths 8 to 101
CreationDate: date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string (nullable), lengths 3 to 30
75,306,735
5,024,631
How to quickly convert groups in a pandas df to a list of separate arrays?
<p>I made this function which converts the groups within a pandas dataframe into a separate list of arrays:</p> <pre><code>def convertPandaGroupstoArrays(df): # convert each group to arrays in a list. groups = df['grouping_var'].unique() mySeries = [] namesofmyseries = [] for group in groups: #print(group) single_ts = df[df['grouping_var'] == group] ts_name = single_ts['grouping_var'].unique() ts_name = ts_name[0] namesofmyseries.append(ts_name) single_ts = single_ts[['time_series', 'value']] #set the time columns as index single_ts.set_index('time_series', inplace=True) single_ts.sort_index(inplace=True) mySeries.append(single_ts) return mySeries, namesofmyseries </code></pre> <p>However, my dataframe contains 80 million rows (many groups each containing 400 rows). I've been running the function all morning on just 5 million rows and it never seems to finish. Is there a faster way to do this? Thanks!</p>
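A sketch of a faster approach (not from the original post): the loop above filters the whole dataframe once per group, so its cost grows with `n_groups * n_rows`. A single `groupby` pass visits each row once. Column names `grouping_var`, `time_series`, and `value` are taken from the question.

```python
import pandas as pd

def convert_groups_to_arrays(df):
    """Split df into one sorted sub-frame per group using a single
    groupby pass instead of one boolean filter per group."""
    series_list, names = [], []
    # sort once up front instead of sorting every group individually
    df = df.sort_values(['grouping_var', 'time_series'])
    for name, group in df.groupby('grouping_var', sort=False):
        names.append(name)
        series_list.append(group.set_index('time_series')[['value']])
    return series_list, names
```

With roughly 200k groups of 400 rows each, avoiding the per-group full-table scan is the main win; the per-group `set_index` work is unavoidable if separate indexed frames are required.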
<python><arrays><pandas>
2023-02-01 07:13:13
1
2,783
pd441
75,306,453
13,359,498
Explaining Resnet50/Densenet121 outputs with LIME
<p>I am trying to explain the outputs of my transfer learning models in Keras with LIME. I am following this <a href="https://towardsdatascience.com/interpreting-image-classification-model-with-lime-1e7064a2f2e5" rel="nofollow noreferrer">blog</a>.</p> <p>My model is a multi-class image classifier. I am implementing LIME on my ResNet50 model. There are 4 classes in the dataset. The code snippet of LIME:</p> <pre><code>img = cv2.imread('/content/drive/MyDrive/Dataset/cat/cat129.png') img = cv2.resize(img, (224,224)) img = image.img_to_array(img) img = np.expand_dims(img, axis=0) import lime from lime import lime_image explainer = lime_image.LimeImageExplainer() img[0].shape explanation = explainer.explain_instance(img[0].astype('double'), model.predict, top_labels=3, hide_color=0, num_samples=1000) from skimage.segmentation import mark_boundaries temp_1, mask_1 = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=True ,negative_only=False, num_features=5, hide_rest=True) temp_2, mask_2 = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=False ,negative_only=True, num_features=10, hide_rest=False) fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15,15)) ax1.imshow(mark_boundaries(temp_1, mask_1)) ax2.imshow(mark_boundaries(temp_2, mask_2)) ax1.axis('off') ax2.axis('off') </code></pre> <p>The output I was expecting was something like this: <a href="https://i.sstatic.net/vtk5f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vtk5f.png" alt="enter image description here" /></a></p> <p>But the output I'm getting is something like this: <a href="https://i.sstatic.net/xdwAh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xdwAh.png" alt="enter image description here" /></a></p> <p>I want to know how I can achieve this: which line of code should I modify?</p>
<python><tensorflow><keras><deep-learning><visualization>
2023-02-01 06:36:49
1
578
Rezuana Haque
75,306,441
3,247,006
How to use "Prefetch()" with "filter()" to reduce `SELECT` queries to iterate 3 or more models?
<p>I have <code>Country</code>, <code>State</code> and <code>City</code> models which are chained by foreign keys as shown below:</p> <pre class="lang-py prettyprint-override"><code>class Country(models.Model): name = models.CharField(max_length=20) class State(models.Model): country = models.ForeignKey(Country, on_delete=models.CASCADE) name = models.CharField(max_length=20) class City(models.Model): state = models.ForeignKey(State, on_delete=models.CASCADE) name = models.CharField(max_length=20) </code></pre> <p>Then, when I iterate <code>Country</code>, <code>State</code> and <code>City</code> models with <a href="https://docs.djangoproject.com/en/4.0/ref/models/querysets/#prefetch-related" rel="nofollow noreferrer">prefetch_related()</a> and <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#django.db.models.query.QuerySet.all" rel="nofollow noreferrer">all()</a> as shown below:</p> <pre class="lang-py prettyprint-override"><code> # ↓ Here ↓ for country_obj in Country.objects.all().prefetch_related(&quot;state_set__city_set&quot;): for state_obj in country_obj.state_set.all(): # Here for city_obj in state_obj.city_set.all(): # Here print(country_obj, state_obj, city_obj) </code></pre> <p>3 <code>SELECT</code> queries are run as shown below. 
*I use PostgreSQL and these below are the query logs of PostgreSQL and you can see <a href="https://stackoverflow.com/questions/722221/how-to-log-postgresql-queries#answer-75031321">this answer</a> explaining how to enable and disable the query logs on PostgreSQL:</p> <p><a href="https://i.sstatic.net/6nP8A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6nP8A.png" alt="enter image description here" /></a></p> <p>But, when I iterate with <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#filter" rel="nofollow noreferrer">filter()</a> instead of <code>all()</code> as shown below:</p> <pre class="lang-py prettyprint-override"><code> # Here for country_obj in Country.objects.filter().prefetch_related(&quot;state_set__city_set&quot;): for state_obj in country_obj.state_set.filter(): # Here for city_obj in state_obj.city_set.filter(): # Here print(country_obj, state_obj, city_obj) </code></pre> <p>8 <code>SELECT</code> queries are run as shown below instead of 3 <code>SELECT</code> queries:</p> <p><a href="https://i.sstatic.net/6banW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6banW.png" alt="enter image description here" /></a></p> <p>So, I use <a href="https://docs.djangoproject.com/en/4.0/ref/models/querysets/#prefetch-objects" rel="nofollow noreferrer">Prefetch()</a> with <code>filter()</code> to reduce 8 <code>SELECT</code> queries to 3 <code>SELECT</code> queries as shown below:</p> <pre class="lang-py prettyprint-override"><code>for country_obj in Country.objects.filter().prefetch_related( Prefetch('state_set', # Here queryset=State.objects.filter(), to_attr='state_obj' ), Prefetch('city_set', # Here queryset=City.objects.filter(), to_attr='city_obj' ), ): print(country_obj, country_obj.state_obj, country_obj.city_obj) </code></pre> <p>But, the error below occurs:</p> <blockquote> <p>AttributeError: Cannot find 'city_set' on Country object, 'city_set' is an invalid parameter to prefetch_related()</p> 
</blockquote> <p>So, how can I use <code>Prefetch()</code> with <code>filter()</code> to reduce 8 <code>SELECT</code> queries to 3 <code>SELECT</code> queries?</p>
<python><django><postgresql><django-models><django-prefetch>
2023-02-01 06:35:49
1
42,516
Super Kai - Kazuya Ito
75,306,200
4,095,108
Add the word "cant" to Spacy stopwords
<p>How do I get <code>SpaCy</code> to set words such as &quot;cant&quot; and &quot;wont&quot; as stopwords?<br /> For example, even with tokenisation it will identify &quot;can't&quot; as a stop word, but not &quot;cant&quot;.<br /> When it sees &quot;cant&quot;, it removes &quot;ca&quot; but leaves &quot;nt&quot;. Is this by design? I guess &quot;nt&quot; is not really a word.</p> <p>Here is a sample code:</p> <pre><code>import spacy from spacy.lang.en.stop_words import STOP_WORDS nlp = spacy.load(&quot;en_core_web_sm&quot;) text = &quot;cant can't cannot&quot; doc = nlp(text) for word in doc: print(word,&quot;:&quot;,word.is_stop) ca : True nt : False ca : True n't : True can : True not : True </code></pre>
<python><spacy>
2023-02-01 06:00:57
2
1,685
jmich738
75,306,132
866,082
WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph
<p>I'm saving and loading a Keras model with some custom layers (even though I'm not sure if custom layers have anything to do with the issue). For the record, I'm not saving the model myself but it's done through TFX's Pusher component. At the time of loading, I get a few warnings:</p> <pre><code>WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. For example, in the saved checkpoint object, `model.layer.weight` and `model.layer_copy.weight` reference the same variable, while in the current object these are two different variables. The referenced variables are:(&lt;keras.layers.core.dense.Dense object at 0x7f1d0c557190&gt; and &lt;keras.engine.functional.Functional object at 0x7f1d0c557f10&gt;). WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. For example, in the saved checkpoint object, `model.layer.weight` and `model.layer_copy.weight` reference the same variable, while in the current object these are two different variables. The referenced variables are:(&lt;keras.layers.core.dense.Dense object at 0x7f1d0c5577c0&gt; and &lt;Layers.SingleOutputWithName.SingleOutputWithName object at 0x7f1d0c4cde80&gt;). WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. For example, in the saved checkpoint object, `model.layer.weight` and `model.layer_copy.weight` reference the same variable, while in the current object these are two different variables. The referenced variables are:(&lt;keras.engine.functional.Functional object at 0x7f1d0c4cd7c0&gt; and &lt;keras.saving.legacy.saved_model.load.TensorFlowTransform&gt;TransformFeaturesLayer object at 0x7f1d0c557e20&gt;). WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. For example, in the saved checkpoint object, `model.layer.weight` and `model.layer_copy.weight` reference the same variable, while in the current object these are two different variables. 
The referenced variables are:(&lt;keras.saving.legacy.saved_model.load.TensorFlowTransform&gt;TransformFeaturesLayer object at 0x7f1d0c557e20&gt; and &lt;keras.engine.input_layer.InputLayer object at 0x7f1d0d5e1a00&gt;). WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. For example, in the saved checkpoint object, `model.layer.weight` and `model.layer_copy.weight` reference the same variable, while in the current object these are two different variables. The referenced variables are:(&lt;keras.engine.functional.Functional object at 0x7f1d0c4cd7c0&gt; and &lt;keras.saving.legacy.saved_model.load.TensorFlowTransform&gt;TransformFeaturesLayer object at 0x7f1d0c557e20&gt;). </code></pre> <p>My question is, are these warnings important? Do I need to do something about them? I'm asking this because the model loads as far as I can tell, and it's working. But at the same time, I cannot be sure if these warnings are messing with the output of the model or not.</p> <p>The model itself is rather straightforward. No weight sharing or anything complex. It just includes some custom layers, like this:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf from tensorflow.keras import layers @tf.keras.utils.register_keras_serializable() class SingleOutputWithName(layers.Layer): def __init__(self, label_name: str, **kwargs): super(SingleOutputWithName, self).__init__() self.label_name = label_name def call(self, x): return {self.label_name: x, 'dummy': x} def get_config(self): config = super().get_config() config['label_name'] = self.label_name return config </code></pre> <p>Does anyone know anything about these warnings?</p>
<python><tensorflow><machine-learning><keras><deep-learning>
2023-02-01 05:50:42
0
17,161
Mehran
75,306,070
15,233,108
How do I match the file name from different directories and replace the partial filename with the actual filename?
<p>So I have a slightly complicated issue that I need some help with :(</p> <p>In Directory 1, I have the filenames as follows:</p> <blockquote> <p>00HFP.mp4<br /> 0AMBV.mp4<br /> 2D5GN.mp4<br /> 3HVKR.mp4<br /> 3IJGQ.mp4</p> </blockquote> <p>In Directory 2, I did some processing to the mp4s and got some output files:</p> <blockquote> <p>_0HFP.usd<br /> _AMBV.usd<br /> _D5GN.usd<br /> _HVKR.usd<br /> _IJGQ.usd</p> </blockquote> <p>For some reason, the programme I'm using replaces the first number/character with an underscore for some files. Other files are generally left alone. But I need the filenames to match :( How do I do a mass renaming (over 500 files) based on this partial naming using a Python script? So like for example: <strong>_0HFP.usd</strong> should become <strong>00HFP.usd</strong> since there's a 00HFP.mp4 file in Directory 1.</p> <p>Please help :( Thank you!</p> <p>I'm trying this (as suggested by Corralien), but it still doesn't work for me :(</p> <pre><code>dir1 = pathlib.Path('./mnt/d/Downloads/Charades_v1_480/charades_18Jan/done/') dir2 = pathlib.Path('./mnt/d/Downloads/Charades_v1_480/charades_18Jan_anim/pt-charades-output/') print('i am here') for f1 in dir1.glob('*.mp4'): print(f1) f2 = dir2 / f'_{f1.stem[1:]}.usd' if f2.exists(): f2.rename(dir2 / f'{f1.stem}.usd') </code></pre>
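A sketch of the inverse direction (not from the original post): instead of probing for a `.usd` per `.mp4`, index the mp4 stems by their shared tail (everything after the first character) and walk the underscore-prefixed `.usd` files. The directory paths are hypothetical; pass your own.

```python
from pathlib import Path

def restore_names(mp4_dir, usd_dir):
    """Rename '_XXXX.usd' files to match 'YXXXX.mp4' stems in mp4_dir,
    e.g. '_0HFP.usd' -> '00HFP.usd' because '00HFP.mp4' exists."""
    mp4_dir, usd_dir = Path(mp4_dir), Path(usd_dir)
    # index mp4 stems by their tail so each usd lookup is O(1)
    by_tail = {p.stem[1:]: p.stem for p in mp4_dir.glob('*.mp4')}
    renamed = []
    for usd in list(usd_dir.glob('_*.usd')):   # materialise before renaming
        full_stem = by_tail.get(usd.stem[1:])
        if full_stem is not None:
            target = usd.with_name(f'{full_stem}.usd')
            usd.rename(target)
            renamed.append(target.name)
    return renamed
```

Note the snippet in the question uses `'./mnt/...'` (relative); on WSL the mount is the absolute `/mnt/...`, which is one likely reason the loop never matched anything.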
<python><file><io>
2023-02-01 05:38:57
1
582
Megan Darcy
75,305,784
10,339,757
Replace values in multiple columns based on dictionary map
<p>I have a dataframe that looks similar to -</p> <pre><code>df = DataFrame(data={'ID': ['a','b','c','d'], 'col1':[1,2,3,4], 'col2':[5,6,7,8], 'col3':[9,10,11,12]}) </code></pre> <p>I have a dictionary like this</p> <pre><code>mapper = {'a':100,'d':3} </code></pre> <p>Where the key in the dictionary matches the ID in the dataframe, I want to be able to replace the values in say col1 and col3 with the value in the dictionary. Currently I can do this as such</p> <pre><code>for id, val in mapper.items(): df.loc[df['ID']==id, 'col1']=val df.loc[df['ID']==id, 'col3']=val </code></pre> <p>But I'm wondering if there is a vectorised way to do this outside of a for loop as my dataframe is large.</p>
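A vectorised sketch (not from the original post): map the whole `ID` column through the dictionary once, then assign the matched rows into each target column, so the work no longer scales with the number of dictionary entries.

```python
import pandas as pd

def map_columns(df, mapper, cols):
    """Where df['ID'] is a key of `mapper`, overwrite every column in
    `cols` with the mapped value; other rows are left untouched."""
    mask = df['ID'].isin(mapper)
    values = df.loc[mask, 'ID'].map(mapper)   # only matched rows, so no NaNs
    for col in cols:                          # loops over columns, not keys
        df.loc[mask, col] = values
    return df
```

The remaining loop is over the (small, fixed) list of columns rather than over the dictionary, which is what dominates when the frame is large.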
<python><python-3.x><pandas><dataframe>
2023-02-01 04:49:04
3
371
thefrollickingnerd
75,305,671
16,169,533
Separate duplicates in a string into a list
<p>I have the following string:</p> <pre><code>&quot;TAUXXTAUXXTAUXX&quot; </code></pre> <p>I want to make a list containing the following:</p> <pre><code>lst = [&quot;TAUXX&quot;, &quot;TAUXX&quot;, &quot;TAUXX&quot;] </code></pre> <p>How do I make it, and is there a string library in Python to do it?</p> <p>Thanks in advance.</p> <p>P.S.: I want it in Python.</p>
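A sketch, assuming the repeat length (5 here) is known: fixed-width slicing needs no library, though the standard library's `textwrap.wrap` does the same for whitespace-free strings.

```python
def split_repeats(s, size):
    """Split s into consecutive chunks of `size` characters."""
    return [s[i:i + size] for i in range(0, len(s), size)]
```

`textwrap.wrap("TAUXXTAUXXTAUXX", 5)` gives the same result for a string with no spaces; if the string is not an exact multiple of `size`, the last chunk is simply shorter.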
<python><arrays><string><list>
2023-02-01 04:31:01
2
424
Yussef Raouf Abdelmisih
75,305,603
9,092,563
How do you divide up a list into chunks which vary according to a normal distribution
<p>I want to take a list of thousands of items and group them into 12 chunks, where the number of items found in each chunk correspond to a normal distribution (bell curve) and <strong>no duplicates across chunks - the list must exhaust itself</strong>.</p> <h2>Input data looks like this</h2> <pre><code>['6355ab76f70c5c59749f2018', '6355c797f70c5c5974a1cb15', '6355d256f70c5c5974a36a6c', '6355d270f70c5c5974a37356', '6355d29bf70c5c5974a3810a', '6355d300f70c5c5974a3a202', '6355d31af70c5c5974a3ab03', '6355d36cf70c5c5974a3c103', '6355d371f70c5c5974a3c236', '6355d389f70c5c5974a3c828', '6355d94df70c5c5974a55450', '6355d956f70c5c5974a556c1', '6355d987f70c5c5974a5626d', '6355d99df70c5c5974a566d9', '6355d9b1f70c5c5974a56b5c', '6355d9bbf70c5c5974a56d50', '6355d9d3f70c5c5974a572e1', '6355d9fdf70c5c5974a57c53', '6355da0cf70c5c5974a57f8f', '6355da11f70c5c5974a58065', '6355da19f70c5c5974a58261', '6355da68f70c5c5974a592ca', '6355da6cf70c5c5974a593ab', '6355da80f70c5c5974a597de', '6355da8af70c5c5974a599fa', '6355da93f70c5c5974a59c09', '6355da98f70c5c5974a59d20', '6355daa1f70c5c5974a59ec9', '6355daa7f70c5c5974a59fec', '6355dac5f70c5c5974a5a6dd', '6355dadaf70c5c5974a5ab75', '6355dafcf70c5c5974a5b2dc', '6355db6df70c5c5974a5d24b', '6355dba0f70c5c5974a5dfea', '6355dc16f70c5c5974a5fe14', '6355dc31f70c5c5974a6059d', '6355dc37f70c5c5974a60782', '6355dc3cf70c5c5974a608eb', '6355dc41f70c5c5974a60a99', '6355dc47f70c5c5974a60bb9', '6355dc5cf70c5c5974a611ef', '6355dc67f70c5c5974a61578', '6355dcaaf70c5c5974a62831', '6355dcb4f70c5c5974a62b2c', '6355dcbff70c5c5974a62e73', '6355dcc8f70c5c5974a63113', '6355dcd7f70c5c5974a6355c', '6355dcf3f70c5c5974a63c91', '6355dcf7f70c5c5974a63de9', '6355dd04f70c5c5974a64144', '6355dd0ef70c5c5974a64438', '6355dd53f70c5c5974a65902', '6355dd61f70c5c5974a65cf6', '6355dd6bf70c5c5974a66010', '6355dd70f70c5c5974a66195', '6355dd74f70c5c5974a662f9', '6355dd98f70c5c5974a66d4e', '6355dd9df70c5c5974a66e99', '6355dda2f70c5c5974a66fbd', '6355ddb0f70c5c5974a673e4', 
'6355ddbaf70c5c5974a67638', '6355ddc5f70c5c5974a6796b', '6355ddcef70c5c5974a67bcf', '6355de01f70c5c5974a6892c', '6355de15f70c5c5974a68ecf', '6355de1bf70c5c5974a69023', '6355de3df70c5c5974a699ad', '6355de58f70c5c5974a6a1ab', '6355de62f70c5c5974a6a4df', '6355de6bf70c5c5974a6a787', '6355de9cf70c5c5974a6b5a8', '6355dea0f70c5c5974a6b6ed', '6355deccf70c5c5974a6c3dc', '6355ded4f70c5c5974a6c602', '6355dee8f70c5c5974a6cbd2', '6355e8f1f70c5c5974a9db18', '6355e924f70c5c5974a9ec85', '6355e9dbf70c5c5974aa2b37', '6355eaaef70c5c5974aa7348', '6355ead5f70c5c5974aa81ac', '6355ec02f70c5c5974aaefaa', '6355ec64f70c5c5974ab135d', '6355ec8df70c5c5974ab2157', '6355ecb2f70c5c5974ab2ce7', '6355eccaf70c5c5974ab346f', '6355eccff70c5c5974ab3691', '6355ecd3f70c5c5974ab376b', '6355ece2f70c5c5974ab3ba0', '6355eceef70c5c5974ab3efb', '6355ecfef70c5c5974ab4384', '6355ed03f70c5c5974ab44c3', '6355ed24f70c5c5974ab4f4f', '6355ed4cf70c5c5974ab5b39', '6355ed78f70c5c5974ab6840', '6355ed9ff70c5c5974ab7388', '6355edb1f70c5c5974ab7888', '6355edb3f70c5c5974ab790b'] </code></pre> <h2>What output should look like...</h2> <p>I am looking for output like this, a list of objects with a numerical key corresponding to a number from 0-11, with the chunked list items as the keys:</p> <pre><code>[ { 0: ['6355ab76f70c5c59749f2018', '6355c797f70c5c5974a1cb15', '6355d256f70c5c5974a36a6c' ] }, { 1: ['6355d270f70c5c5974a37356', '6355d29bf70c5c5974a3810a', '6355d300f70c5c5974a3a202', '6355d31af70c5c5974a3ab03', '6355d36cf70c5c5974a3c103', '6355d371f70c5c5974a3c236', '6355d389f70c5c5974a3c828'] }, ... 
] </code></pre> <h4>The output chunks should be along the same gradients as this image, even on both sides and greater near the center, for n size list...</h4> <p><a href="https://i.sstatic.net/bFCux.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bFCux.jpg" alt="enter image description here" /></a> <em>It should lump the input list into even (on both sides) chunks, with incrementally, in a gradient mathematical way, more per chunk leading toward the center of the output list.</em></p> <p>I want it so the list I pass in is divided so that the most amount of items are grouped in the middle (numbers 4-8 roughly) and that it less items are grouped together as they reach the &quot;edges&quot; of the resulting list (numbers 0-3, and numbers 9-12). But everything of the input list must be exhausted so the items are fully distributed in this way.</p> <p>I tried to tackle this with <code>numpy</code> but so far I have not been able to get the output I want.</p> <p>My current code (two different functions):</p> <pre><code> def divide_list_normal(lst): normal_dist = np.random.normal(size=len(lst)) # Generate a normal distribution of numbers sorted_list = [x for _,x in sorted(zip(normal_dist,lst))] # Sort the list according to the normal distribution chunk_size = int(len(lst)/len(normal_dist)) # Divide the list into chunks chunks = [sorted_list[i:i+chunk_size] for i in range(0, len(sorted_list), chunk_size)] return chunks def divide_list_normal_define_chunk_size(lst, n): normal_dist = np.random.normal(size=len(lst)) # Generate a normal distribution of numbers sorted_list = [x for _,x in sorted(zip(normal_dist,lst))] # Sort the list according to the normal distribution chunk_size = int(len(lst)/len(normal_dist)) # Divide the list into chunks chunks = [sorted_list[i:i+chunk_size] for i in range(0, n, chunk_size)] return chunks </code></pre> <p>The output for the first comes out like so:</p> <pre><code>[['63a8d83336756fd65d455c77'], ['6355f7c6f70c5c5974adfbce'], 
['635629c6f70c5c5974bbab53'], ['6355fa8bf70c5c5974aeb70f'], ['6355dcd7f70c5c5974a6355c'], ['63a96dae36756fd65d549333'], ['639245927eeb4e9fd025e397'], ['63562463f70c5c5974ba3b5c'], ['63a8e04736756fd65d4635cf'], ['635629a5f70c5c5974bba1c1'], ['6355f74ef70c5c5974addd2c'],...] </code></pre> <p>The output for the second comes out like so:</p> <pre><code>[['63aa1a9d36756fd65d7566cf'], ['6355ed78f70c5c5974ab6840'], ['63a94e1836756fd65d500d5d'], ['63a8e23e36756fd65d4667ec'], ['63a96c6536756fd65d5463db'], ['63d39021d34efb9c0983d64a'], ['635627a9f70c5c5974bb1573'], ['63b3a4c236756fd65d33750a'], ['63562320f70c5c5974b9e50b'], ['63aa1aec36756fd65d758676'], ['63a9551636756fd65d5111fb'], ['63562443f70c5c5974ba31ed']] </code></pre> <p>Is there a way to divide up a list into chunks which vary according to a normal distribution? If you know how, please share it. Thank you!</p>
<python><list><numpy><sorting><normal-distribution>
2023-02-01 04:17:40
2
692
rom
75,305,569
12,319,746
Getting all build logs from Jenkins API
<p>I need to get logs for all builds from a Jenkins instance.</p> <pre><code>def get_builds(): builds_api_res = session.get('https://build.org.com/api/json?depth=3',stream=True,auth=(username,access_token),).json() for chunk in builds_api_res.iter_content(chunk_size=1024*1024): # Do something with the chunk of data print(chunk) get_builds() </code></pre> <p>The problem is, this returns such a huge response that the Jenkins instance itself runs out of memory.</p> <p>So, the other approach would be to get all builds from each folder individually. That is where I am facing the problem.</p> <p>When I look at the folders, some <em>builds</em> are 2 folder levels down, some are 3 levels down, and there may be builds 4 folder levels down from the root folder as well. I am not sure how to keep on looking for folders inside folders until I find a build.</p> <pre><code>for project in projects_api['jobs']: folder_url = JENKINS_URL+'/job/'+project['name']+'/api/json' folders = session.get(folder_url, auth=(username, access_token)).json() jobs = folders['jobs'] for job in jobs: job_det_url = job['url'] + 'api/json' all_builds = session.get(job_det_url, auth=(username, access_token)).json() latest_build = all_builds['builds'][-1] print(latest_build['url']) log_url = latest_build['url'] +'/consoleText' print(log_url) logs = session.get(log_url, auth=(username, access_token)) </code></pre> <p>This works for builds which are inside the <em>root</em> folder. However, if there are more folders inside that folder then it will fail. How do I locate the builds directly from the parent folder <em>using the API</em>, if there is a way to do that at all? Or is there a better approach to the whole thing?</p>
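A sketch of the recursion (not from the original post): treat any `api/json` payload that carries a `jobs` list as a folder and descend, at any nesting depth; anything else is taken to be a leaf job whose `builds` can be read. Passing the fetch function in keeps the traversal testable; with the question's session it would be `lambda url: session.get(url, auth=(username, access_token)).json()`. The folder/leaf detection via the `jobs` key is an assumption about the payload shape.

```python
def iter_jobs(fetch_json, job_url):
    """Recursively yield (job_url, builds) for every build-holding job
    under job_url. `fetch_json(url)` must return the parsed JSON of
    `<url>api/json`."""
    info = fetch_json(job_url + 'api/json')
    if 'jobs' in info:                     # a folder: descend one level
        for child in info['jobs']:
            yield from iter_jobs(fetch_json, child['url'])
    else:                                  # a leaf job with builds
        yield job_url, info.get('builds', [])
```

Each yielded build's console log would then be fetched from `build['url'] + 'consoleText'`, one request at a time, which also avoids the single giant `depth=3` response.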
<python><jenkins><jenkins-api>
2023-02-01 04:10:54
1
2,247
Abhishek Rai
75,305,478
9,919,423
Can tf.gradienttape() calculate gradient of other library's function
<p>If I include inside the <code>tf.GradientTape()</code> some functions from other Python libraries, like <code>sklearn.decomposition.PCA.inverse_transform()</code>, can TensorFlow calculate gradients from that function?</p> <p>Specifically, can TF automatically differentiate <code>pca_inverse_tranform = pca.inverse_transform(h2)</code>?</p> <pre><code>... from sklearn.decomposition import PCA pca = PCA(n_components=10) pca.fit(x) ... with tf.GradientTape() as tape: h1 = x@w1 + tf.broadcast_to(b1, [x.shape[0], 256]) h1 = tf.nn.relu(h1) h2 = h1@w2 + tf.broadcast_to(b2, [x.shape[0], 10]) h2 = tf.nn.relu(h2) pca_inverse_tranform = pca.inverse_transform(h2) loss = tf.square(pca_inverse_tranform - target) loss = tf.reduce_mean(loss) [dl_dw1, dl_db1, dl_dw2, dl_db2] = tape.gradient(loss, [w1,b1,w2,b2]) </code></pre>
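For context (a sketch, not part of the question): the tape can only differentiate TensorFlow ops, so a NumPy-based call like scikit-learn's `inverse_transform` breaks the gradient path. With the default `whiten=False`, however, `PCA.inverse_transform` is just the affine map `h @ components_ + mean_`, so it can be re-expressed with differentiable tensor ops, e.g. `h2 @ tf.constant(pca.components_) + tf.constant(pca.mean_)` inside the tape. The snippet below verifies the affine identity in NumPy.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 32))
pca = PCA(n_components=10).fit(x)     # whiten=False (the default)

h = rng.normal(size=(5, 10))          # stand-in for the encoder output h2
# inverse_transform is an affine map, so the same computation can be
# written with TF constants/matmuls and stay on the gradient tape:
manual = h @ pca.components_ + pca.mean_
assert np.allclose(manual, pca.inverse_transform(h))
```

With `whiten=True` the components would additionally need to be rescaled by the singular values before the matmul.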
<python><tensorflow><keras><automatic-differentiation><gradienttape>
2023-02-01 03:53:11
1
412
David H. J.
75,305,341
2,579,031
How to add pause in Google Text to Speech?
<p>I am trying to use Google Cloud text to speech module, and I can convert a text to audio using below code. But I am unable to add breaks in the code, like a pause of 5 sec. I have added tags for break in my <strong>synthesis_input variable</strong>. Can anyone help me with that?</p> <pre><code>import os from google.cloud import texttospeech os.environ[&quot;GOOGLE_APPLICATION_CREDENTIALS&quot;]=&quot;G:\service-account-key.json&quot; client = texttospeech.TextToSpeechClient() synthesis_input = texttospeech.SynthesisInput(text=&quot;&lt;speak&gt;You know that facebook is a place where millions of people share their thoughts. &lt;break time=&quot;10s&quot;/&gt; Today I am going to discuss 10 amazing things shared by people on facebook.&lt;/speak&gt;&quot;) voice = texttospeech.VoiceSelectionParams(language_code='en-IN',name=&quot;en-IN-Wavenet-C&quot;,ssml_gender=texttospeech.SsmlVoiceGender.MALE) audio_config = texttospeech.AudioConfig( audio_encoding=texttospeech.AudioEncoding.MP3 ) response = client.synthesize_speech( input=synthesis_input, voice=voice, audio_config=audio_config ) with open(&quot;output.mp3&quot;, &quot;wb&quot;) as out: # Write the response to the output file. out.write(response.audio_content) print('Audio content written to file &quot;output.mp3&quot;') </code></pre>
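Two likely issues in the snippet above (a sketch, not from the original post): the inner double quotes around <code>10s</code> terminate the Python string literal, and SSML has to be passed through the <code>ssml=</code> field of <code>SynthesisInput</code>; the <code>text=</code> field treats the tags as literal text to be read aloud. Building the string with single quotes sidesteps the escaping problem; the actual synthesis call needs credentials and is shown only as a comment.

```python
# Single-quoted Python string so the double quotes in the SSML survive.
ssml = (
    '<speak>You know that facebook is a place where millions of people '
    'share their thoughts. <break time="10s"/> Today I am going to discuss '
    '10 amazing things shared by people on facebook.</speak>'
)

# With credentials configured, the input would be built like this
# (not executed here):
# synthesis_input = texttospeech.SynthesisInput(ssml=ssml)
```

Everything else in the question's code (voice selection, audio config, writing `output.mp3`) stays the same; only the input field changes from `text` to `ssml`.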
<python><google-cloud-platform><google-text-to-speech>
2023-02-01 03:26:19
2
939
Abhishek dot py
75,305,242
1,828,539
How to read a list of h5 objects in a dictionary and assign them to the names in said dictionary?
<p>If I have the following dictionary:</p> <pre><code>input_dict = {'Sample': ['org_1', 'org_2', 'org_3'], 'Location': ['../cellbender/SAM24425933_cellbender_out_filtered.h5', '../cellbender/SAM24425932_cellbender_out_filtered.h5', '../cellbender/SAM24425934_cellbender_out_filtered.h5'] } </code></pre> <p>How can I create the variables</p> <pre><code>org_1 org_2 org_3 </code></pre> <p>where <code>org_1</code> is the object created by reading <code>'../cellbender/SAM24425933_cellbender_out_filtered.h5'</code>, etc.</p> <p>I've tried this but get an error:</p> <pre><code>for sample, location in input_dict.items(): adata = sc.read_hdf(filename = location, key = sample) </code></pre> <blockquote> <p>TypeError: expected str, bytes or os.PathLike object, not list</p> </blockquote>
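A sketch of the fix (not from the original post): `input_dict.items()` yields the two `(column, list)` pairs, which is why a whole list reaches the reader; the sample names and paths need to be paired with `zip`. Collecting the results in a dict keyed by sample name is also safer than creating `org_1`/`org_2`/... variables dynamically. The reader is injected here so the sketch stays runnable; with scanpy you would pass something like `lambda p: sc.read_hdf(p, key=...)`.

```python
def load_samples(input_dict, reader):
    """Pair each entry of input_dict['Sample'] with the matching entry of
    input_dict['Location'] and return {sample_name: loaded_object}."""
    return {
        sample: reader(path)
        for sample, path in zip(input_dict['Sample'], input_dict['Location'])
    }
```

Afterwards the objects are reached as `adatas['org_1']`, `adatas['org_2']`, and so on.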
<python><dictionary>
2023-02-01 03:04:03
1
2,376
Carmen Sandoval
75,305,169
3,843,659
Decoding hidden layer embeddings in T5
<p>I'm new to NLP (pardon the very noob question!), and am looking for a way to perform vector operations on sentence embeddings (e.g., randomization in embedding-space in a uniform ball around a given sentence) and then decode them. I'm currently attempting to use the following strategy with T5 and Huggingface Transformers:</p> <ol> <li>Encode the text with <code>T5Tokenizer</code>.</li> <li>Run a forward pass through the encoder with <code>model.encoder</code>. Use the last hidden state as the embedding. (I've tried <code>.generate</code> as well, but it doesn't allow me to use the decoder separately from the encoder.)</li> <li>Perform any desired operations on the embedding.</li> <li><strong>The problematic step: Pass it through <code>model.decoder</code> and decode with the tokenizer.</strong></li> </ol> <p>I'm having trouble with (4). My sanity check: I set (3) to do nothing (no change to the embedding), and I check whether the resulting text is the same as the input. So far, that check always fails.</p> <p>I get the sense that I'm missing something rather important (something to do with the lack of beam search or some other similar generation method?). 
I'm unsure of whether what I think is an embedding (as in (2)) is even correct.</p> <p><strong>How would I go about encoding a sentence embedding with T5, modifying it in that vector space, and then decoding it into generated text?</strong> Also, might another model be a better fit?</p> <p>As a sample, below is my incredibly broken code, based on <a href="https://huggingface.co/blog/encoder-decoder#encoder-decoder" rel="nofollow noreferrer">this</a>:</p> <pre class="lang-py prettyprint-override"><code>t5_model = transformers.T5ForConditionalGeneration.from_pretrained(&quot;t5-large&quot;) t5_tok = transformers.T5Tokenizer.from_pretrained(&quot;t5-large&quot;) text = &quot;Foo bar is typing some words.&quot; input_ids = t5_tok(text, return_tensors=&quot;pt&quot;).input_ids encoder_output_vectors = t5_model.encoder(input_ids, return_dict=True).last_hidden_state # The rest is what I think is problematic: decoder_input_ids = t5_tok(&quot;&lt;pad&gt;&quot;, return_tensors=&quot;pt&quot;, add_special_tokens=False).input_ids decoder_output = t5_model.decoder(decoder_input_ids, encoder_hidden_states=encoder_output_vectors) t5_tok.decode(decoder_output.last_hidden_state[0].softmax(0).argmax(1)) </code></pre>
<python><machine-learning><nlp><huggingface-transformers><transformer-model>
2023-02-01 02:49:47
1
565
jmindel
75,305,001
10,844,937
How to add Pandas dataframe to an existing xlsx file using to_excel
<p>I have written some content to an xlsx file using <code>xlsxwriter</code>:</p> <pre><code>workbook = xlsxwriter.Workbook(file_name) worksheet = workbook.add_worksheet() worksheet.write(row, col, value) worksheet.close() </code></pre> <p>I'd like to add a dataframe after the existing rows of this file with <code>to_excel</code>:</p> <pre><code> df.to_excel(file_name, startrow=len(existing_content), engine='xlsxwriter') </code></pre> <p>However, this does not seem to work. The dataframe is not inserted into the file. Does anyone know why?</p>
<python><pandas><xlsxwriter>
2023-02-01 02:12:11
2
783
haojie
75,304,991
1,256,757
How to deploy Apache superset using mod_wsgi that is installed in python3-venv?
<p>I have installed Apache Superset and I need to run it through a WSGI server using Apache's mod_wsgi. I'm new to Python and to configuring Apache. I need help on what exactly my superset.wsgi and my superset.conf should look like.</p> <p>This is my conf:</p> <pre><code>&lt;VirtualHost *:80&gt; WSGIDaemonProcess superset python-home=/var/www/superset WSGIScriptAlias / /var/www/superset/superset.wsgi WSGIProcessGroup superset WSGIApplicationGroup %{GLOBAL} &lt;Directory /var/www/&gt; # set permissions as per apache2.conf file Options FollowSymLinks AllowOverride None Require all granted &lt;/Directory&gt; &lt;/VirtualHost&gt; </code></pre> <p>This is my superset.wsgi:</p> <pre><code>import sys import site sys.path.insert(0,'/var/www/superset') sys.path.insert(0,'/var/www/superset/lib/python3.8/site-packages') site.addsitedir('/var/www/superset/lib/python3.8/site-packages') from superset import app as application </code></pre> <p>And this is my error in apache2/error.log:</p> <pre><code> Exception occurred processing WSGI script '/var/www/superset/superset.wsgi'. Traceback (most recent call last): File &quot;/var/www/superset/lib/python3.8/site-packages/werkzeug/local.py&quot;, line 316, in __get__ obj = instance._get_current_object() # type: ignore[misc] File &quot;/var/www/superset/lib/python3.8/site-packages/werkzeug/local.py&quot;, line 513, in _get_current_&gt; raise RuntimeError(unbound_message) from None RuntimeError: Working outside of application context. This typically means that you attempted to use functionality that needed the current application. To solve this, set up an application context with app.app_context(). See the documentation for more information. </code></pre> <p>Please help, thanks.</p>
<python><mod-wsgi><python-venv><apache-superset>
2023-02-01 02:09:33
1
342
butching
75,304,942
15,843,133
xlrd assertion error when opening a .xls file (converted from .xlsx): assert _unused_i == nstrings - 1
<p>I have a script that uses the xlrd library to read and write .xls files. The program works for most .xls', but I found that after I converted an .xlsx to .xls and try to open the workbook, I get the following assertion error:</p> <pre><code>Traceback (most recent call last): File &quot;/home/troublebucket/Projects/script.py&quot;, line 51, in &lt;module&gt; wb = xlrd.open_workbook(bom_table_wb) File &quot;/home/troublebucket/.local/lib/python3.10/site-packages/xlrd/__init__.py&quot;, line 172, in open_workbook bk = open_workbook_xls( File &quot;/home/troublebucket/.local/lib/python3.10/site-packages/xlrd/book.py&quot;, line 104, in open_workbook_xls bk.parse_globals() File &quot;/home/troublebucket/.local/lib/python3.10/site-packages/xlrd/book.py&quot;, line 1211, in parse_globals self.handle_sst(data) File &quot;/home/troublebucket/.local/lib/python3.10/site-packages/xlrd/book.py&quot;, line 1178, in handle_sst self._sharedstrings, rt_runlist = unpack_SST_table(strlist, uniquestrings) File &quot;/home/troublebucket/.local/lib/python3.10/site-packages/xlrd/book.py&quot;, line 1472, in unpack_SST_table assert _unused_i == nstrings - 1 AssertionError </code></pre> <p>I tried commenting out the assertion in book.py's <code>unpack_SST_table()</code>, and the script ran without errors but didn't actually read from the workbook. Any advice would be appreciated!</p>
<python><excel><xlrd>
2023-02-01 01:57:38
1
353
Trouble Bucket
75,304,882
678,188
Terminal on brand new macbook 2023 with Mac OSX Ventura 13.2 gives error "zsh: command not found: python"
<p>Terminal on a brand new 2023 MacBook with an M2 chip and macOS Ventura 13.2 gives the error &quot;zsh: command not found: python&quot;.</p> <p>I haven't been able to find an answer on Google.</p>
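A hedged first check (not from the question author): recent macOS releases no longer ship a `python` command at all, only `python3` via the Xcode Command Line Tools, so the quickest diagnostic is:

```shell
# macOS no longer provides a plain "python" binary; Apple's tools provide python3
python3 --version     # prints the bundled interpreter version, if installed
command -v python3    # shows where it lives (e.g. /usr/bin/python3)
# To make plain "python" work in zsh, an alias can be added to ~/.zshrc:
#   alias python=python3
```

If `python3` itself is missing, running it typically prompts to install the Command Line Tools; alternatively an interpreter can be installed from python.org or Homebrew.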
<python><macos-ventura>
2023-02-01 01:46:21
0
529
esd100
75,304,836
10,049,514
FastAPI Pagination with Redis
<p>Currently I have some cached data in a Redis cluster and the data is being served over an endpoint developed by FastAPI.</p> <p>Below is an example:</p> <pre><code>key_001: [value_1, value_2, value_3, value_4] </code></pre> <p>There is another service that updates this data in real-time. For example this service and delete an existing value or insert a new value into the list.</p> <pre><code>key_001: [value_1, value_2, value_10] </code></pre> <p>The FastAPI endpoint will read this data and paginate it according to the given parameters. For example, the <code>page = 1</code> and the <code>size = 1</code>.</p> <p>What if the next moment, the data changes to a new list? With the same page and size, the data is different? How do I update the API so that it will serve the new content?</p> <p>Thank you!</p>
<python><redis><pagination><fastapi>
2023-02-01 01:36:03
0
1,071
knl
75,304,698
772,649
How to add typing hints of **kwargs?
<p>Here is an example:</p> <pre class="lang-py prettyprint-override"><code>class A: def __init__(self, a=1, b=2, c=3): self.a = a self.b = b self.c = c class B(A): def __init__(self, d=4, **kwargs): super().__init__(**kwargs) self.d = d </code></pre> <p><code>B</code> adds one argument <code>d</code> and passes all key arguments to the super class <code>A</code>. The typing hint of <code>B()</code> shown by pylance is <code>(d: int = 4, **kwargs: Unknown) -&gt; None</code>, How to add typing hint to <code>**kwargs</code>, that can make pylance knows the arguments is <code>(d: int = 4, a: int = 1, b: int = 2, c: int = 3) -&gt; None</code>?</p>
<python><typing><keyword-argument><pylance>
2023-02-01 01:08:41
1
97,797
HYRY
75,304,693
2,175,783
how to set env variable in conda and use it in ansible
<p>I am debugging this ansible call in a shell script (I am a complete beginner in ansible)</p> <pre><code>source /path/.conda/etc/profile.d/conda.sh &amp;&amp; \ conda activate my_ansible &amp;&amp; \ AWS_REGION=us-east-1 \ AWS_SHARED_CREDENTIALS_FILE=acredsfile \ ansible-playbook /path/ansible/init.yml \ -e s3_bucket=${S3_BUCKET} </code></pre> <p>the ansible task gives me a region not found error</p> <pre><code>File &quot;/path/.conda/envs/my_ansible/lib/python3.9/site-packages/botocore/regions.py&quot;, line 260, in _endpoint_for_partition raise NoRegionError() botocore.exceptions.NoRegionError: You must specify a region. </code></pre> <p>I think the way the aws region env is being set is correct but apparently not?</p>
<python><amazon-web-services><ansible><conda>
2023-02-01 01:07:55
0
1,496
user2175783
75,304,615
4,926,165
How do I programmatically create class methods in Python?
<p>Suppose I want to define a Python class whose instances have several members of similar form:</p> <pre><code>class Bar: def __init__(self): self.baz=3 self.x=1 self.name=4 </code></pre> <p>I can instead create all members at once using explicit manipulation of <code>__dict__</code>:</p> <pre><code>class Bar: def __init__(self): self.__dict__.update({name: len(name) for name in (&quot;baz&quot;, &quot;x&quot;, &quot;name&quot;)}) </code></pre> <p>However, if the associated members are <em>class</em> data members, instead of instance members, then I am aware of no analogous way to programmatically mutate <code>Bar.__dict__</code>:</p> <pre><code>class Bar: #Fails with &quot;AttributeError: 'mappingproxy' object has no attribute 'update'&quot; Bar.__dict__.update({name: len(name) for name in (&quot;baz&quot;, &quot;x&quot;, &quot;name&quot;)}) </code></pre> <p>In Javascript, programmatically manipulating the properties of a constructor is possible, because constructors are just ordinary functions, which are just ordinary objects, and therefore just a mutable dictionary mapping properties to more objects. Is it possible to write analogous code in Python?</p>
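For the class-level case, two standard approaches (sketched here with the question's own `len(name)` values) are `setattr` on the class object after creation, or building the class programmatically with the three-argument `type()` call:

```python
# Approach 1: mutate an existing class with setattr
class Bar:
    pass

for name in ("baz", "x", "name"):
    setattr(Bar, name, len(name))

# Approach 2: create the class in one call -- type(name, bases, namespace)
Baz = type("Baz", (), {name: len(name) for name in ("baz", "x", "name")})
```

The read-only `mappingproxy` exists precisely so that class namespaces are mutated through `setattr`/`type()` (which keep internal caches consistent) rather than through direct `__dict__` writes.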
<python><class><namespaces>
2023-02-01 00:51:17
1
730
Jacob Manaker
75,304,567
18,758,062
Get the current step number in a gym.Env
<p>Is there a way to access the current step number of a <code>gym.Env</code> from inside its <code>step</code> method?</p> <p>I'm using a model from <code>stable_baselines3</code> and want to terminate the env when N steps have been taken.</p>
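gym ships `gym.wrappers.TimeLimit` for exactly this use case; if the counter is wanted inside a custom env or wrapper, the pattern is simply an attribute incremented in `step`. A library-free sketch using the classic 4-tuple `step` signature:

```python
class StepLimit:
    """Minimal wrapper sketch: track the step count, end the episode at N."""

    def __init__(self, env, max_steps):
        self.env = env
        self.max_steps = max_steps
        self.steps = 0

    def reset(self):
        self.steps = 0
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.steps += 1                 # current step number is available here
        if self.steps >= self.max_steps:
            done = True
        return obs, reward, done, info
```

The same `self.steps` counter inside a `gym.Env` subclass answers the first question directly: reset it in `reset()`, increment it in `step()`.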
<python><openai-gym><stable-baselines>
2023-02-01 00:42:48
2
1,623
gameveloster
75,304,566
9,194,965
remove commas/quotation marks in column name in pandas or sql
<p>I am trying to pull some columns from a snowflake table using python/sqlalchemy into a pandas dataframe and subsequently do additional operations using Python/Pandas.</p> <p>However, it appears that the resulting dataframe has some quotation marks/commas in the column names.</p> <p>Code follows below:</p> <pre><code> sql = '''SELECT 'concept_name', 'ndc' FROM db.schema.tbl''' df = pd.read_sql(sql, conn) df.columns.to_list() #print out column names </code></pre> <p>This is the output I get for column names: [&quot;'CONCEPT_NAME'&quot;, &quot;'NDC'&quot;]</p> <p>How do I remove the special characters in each column name either in SQL itself or in pandas?</p>
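A likely root cause (an assumption, since the table definition isn't shown): in Snowflake, single quotes denote string literals, so `SELECT 'concept_name', 'ndc'` returns constant columns literally named `'CONCEPT_NAME'` and `'NDC'`. The usual fix is to drop the quotes in the SQL; if a frame already carries quoted names, they can be stripped afterwards:

```python
# Fix the query itself: identifiers unquoted (or double-quoted), not 'single-quoted'
sql = "SELECT concept_name, ndc FROM db.schema.tbl"

# Or clean up column names that already carry stray quotes,
# e.g. df.columns = [c.strip("'\"") for c in df.columns]
cols = ["'CONCEPT_NAME'", "'NDC'"]
cleaned = [c.strip("'\"") for c in cols]
```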
<python><sql><pandas><dataframe><sqlalchemy>
2023-02-01 00:42:38
1
1,030
veg2020
75,304,550
19,831,782
Python byte decode for JPEG EXIF data fails
<p>I am trying to decode JPEG EXIF data using Pillow. When I decode, I either get a parsing error or the decode results in hex instead of actually decoding the bytes.</p> <p>Here is my code based on work by Abdou Rockikz <a href="https://www.thepythoncode.com/article/extracting-image-metadata-in-python" rel="nofollow noreferrer">https://www.thepythoncode.com/article/extracting-image-metadata-in-python</a></p> <pre class="lang-py prettyprint-override"><code>from PIL import Image from PIL.ExifTags import TAGS # path to the image or video imagename = &quot;image.jpg&quot; # read the image data using PIL image = Image.open(imagename) # extract EXIF data exifdata = image.getexif() # iterating over all EXIF data fields for tag_id in exifdata: # get the tag name, instead of human unreadable tag id tag = TAGS.get(tag_id, tag_id) data = exifdata.get(tag_id) # decode bytes if isinstance(data, bytes): data = data.decode() print(f&quot;{tag:25}: {data}&quot;) </code></pre> <p>I have tried <code>.decode('utf-8')</code> (the default), which results in the following error <code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xea in position 1: invalid continuation byte</code>.</p> <p>I have tried <code>latin-1</code> and <code>ISO-8859-1</code> but these result in a huge hex output rather than human readable values such as <code>\x00\x08...</code>. The values are fine when I decode TIF images but JPEGs don't seem to work for some reason. Is there a decoding scheme I need to be aware of or some other process? Thanks!</p>
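Worth noting (hedged, since the failing tag isn't identified): not every EXIF byte field is text — some, such as MakerNote, are opaque vendor binary, so no codec will turn them into readable strings. A common compromise is a lossy decode that substitutes undecodable bytes so the loop keeps working, and printing `repr()` for fields known to be binary. A sketch on a made-up payload:

```python
raw = b"A\xeaB"  # hypothetical EXIF bytes: valid ASCII around one invalid byte

# strict UTF-8 raises UnicodeDecodeError on 0xea; errors="replace"
# substitutes U+FFFD and keeps the rest of the string intact
text = raw.decode("utf-8", errors="replace")
# for opaque binary fields, repr(raw) is often more honest than any decode
```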
<python><jpeg><decode>
2023-02-01 00:39:11
0
395
usagibear
75,304,530
13,114,791
importlib.metadata.PackageNotFoundError: No package metadata was found for djoser pyinstaller
<h2>Context</h2> <p>I made a Django react app. Now I want to make it a desktop application so that the user does not have type <code>python manage.py runserver</code> and also activate the environment every time. I used pyinstaller. I did all the steps mentioned for <a href="https://github.com/pyinstaller/pyinstaller/wiki/Recipe-Executable-From-Django" rel="nofollow noreferrer">django</a>.</p> <h2>Problem</h2> <p>when I run my executable file made from pyinstaller, I got this error</p> <pre><code>File &quot;manage.py&quot;, line 5, in &lt;module&gt; File &quot;PyInstaller\loader\pyimod02_importers.py&quot;, line 499, in exec_module File &quot;djoser\__init__.py&quot;, line 6, in &lt;module&gt; File &quot;importlib\metadata\__init__.py&quot;, line 955, in version File &quot;importlib\metadata\__init__.py&quot;, line 928, in distribution File &quot;importlib\metadata\__init__.py&quot;, line 518, in from_name importlib.metadata.PackageNotFoundError: No package metadata was found for djoser [2200] Failed to execute script 'manage' due to unhandled exception! </code></pre> <h2>What I have done</h2> <p>I have already installed Djoser in the environment and the environment is also activated. I have also tried to add in manage.py file and also in hidden_import lists but nothing changed. I have also tried adding <code>--copy-meta=djoser </code> in the build command but it got even worse error.</p> <p>How do I fix this error and If there are any better alternative solutions out there? Thanks</p>
<python><pyinstaller><djoser><python-exec><django-react>
2023-02-01 00:32:59
2
521
Hasnain Sikander
75,304,404
14,293,020
Python index selection in a 3D list
<p>I have the following 3D list:</p> <pre><code>test = [[[(x,y,z) for x in range(0,5)] for y in range(5,8)] for z in range(0,4)] test[0].append([(0,5),(5,0)]) </code></pre> <p>I want to select all the indices of the first dimension, the 0th index of the 2nd dimension and all the indices of the 3rd dimension. If it was an array I would write <code>array[:,0,:]</code>. However when I write <code>test[:][0][:]</code> it is the same as doing <code>test[0][:][:]</code> which is not what I want.</p> <p>How could I do that ?</p>
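For plain nested lists there is no multi-axis slicing; the `array[:, 0, :]` equivalent is a comprehension that walks the outer axis and picks index 0 of the middle one. (Converting with `np.array(test)` would allow the numpy slice directly, but note the `append` in the question makes the structure ragged, which rules out a clean conversion.) A sketch on the original, un-appended list:

```python
test = [[[(x, y, z) for x in range(0, 5)] for y in range(5, 8)] for z in range(0, 4)]

# all z, y-index 0, all x -- the list equivalent of array[:, 0, :]
middle = [plane[0] for plane in test]
```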
<python><list><list-comprehension><indices>
2023-02-01 00:09:07
1
721
Nihilum
75,304,252
6,467,512
Object detection using fastai
<p>Object detection using fastai Hello,</p> <p>I am looking to create a fast ai object detection model on the deep fashion dataset but do not know how make it predict. How can I make my model output prediction on images. Here is the code:</p> <p>The data is loaded using the COCO format as a json. Then I create the model and run the training like this:</p> <pre><code>def get_train_imgs(noop): return imgs datablock = DataBlock(blocks=(ImageBlock, BBoxBlock, BBoxLblBlock), splitter=RandomSplitter(), get_items=get_train_imgs, getters=getters, item_tfms=item_tfms, batch_tfms=batch_tfms, n_inp=1) model = resnet34() encoder = create_body(model, pretrained=False) get_c(dls) %cd &quot;Practical-Deep-Learning-for-Coders-2.0/Computer Vision&quot; from imports import * arch = RetinaNet(encoder, get_c(dls), final_bias=-4) create_head(124, 4) arch.smoothers arch.classifier arch.box_regressor ratios = [1/2,1,2] scales = [1,2**(-1/3), 2**(-2/3)] crit = RetinaNetFocalLoss(scales=scales, ratios=ratios) def _retinanet_split(m): return L(m.encoder,nn.Sequential(m.c5top6, m.p6top7, m.merges, m.smoothers, m.classifier, m.box_regressor)).map(params) learn = Learner(dls, arch, loss_func=crit, splitter=_retinanet_split) learn.freeze() learn.fit_one_cycle(2, slice(1e-5, 1e-4)) from fastai.vision.core import PILImage image = PILImage.create('./000032.jpg') prediction, _, _ = learn.predict(image) </code></pre> <p>predict is outputing the following error: TypeError: clip_remove_empty() missing 2 required positional arguments: 'bbox' and 'label'</p> <p>I understand that I am not suppose to use .predict but I cant find any reference on what else to use.</p>
<python><model><python-imaging-library><object-detection><fast-ai>
2023-01-31 23:40:44
0
323
AynonT
75,304,110
14,914,517
Keras model predicts different results using the same input
<p>I built a Keras sequential model on the simple dataset. I am able to train the model, however every time I try to get a prediction on the same input I get different values. Anyone knows why? I read through different Stackoverflow here (<a href="https://stackoverflow.com/questions/64321976/why-the-exactly-identical-keras-model-predict-different-results-for-the-same-inp">Why the exactly identical keras model predict different results for the same input data in the same env</a>, <a href="https://stackoverflow.com/questions/49624965/keras-saved-model-predicting-different-values-on-different-session">Keras saved model predicting different values on different session</a>, <a href="https://stackoverflow.com/questions/54744552/different-prediction-after-load-a-model-in-keras">different prediction after load a model in keras</a>), but couldn't find the answer. I tried to set the Tensorflow seed and still getting different results. Here is my code</p> <pre><code>from pandas import concat from pandas import DataFrame # create sequence length = 10 sequence = [i/float(length) for i in range(length)] # create X/y pairs df = DataFrame(sequence) df = concat([df, df.shift(1)], axis=1) df.dropna(inplace=True) print(df) # convert to LSTM friendly format values = df.values X, y = values[:, 0], values[:, 1] X = X.reshape(len(X), 1, 1) print(X.shape, y.shape) </code></pre> <p>output is:</p> <pre><code> 0 0 1 0.1 0.0 2 0.2 0.1 3 0.3 0.2 4 0.4 0.3 5 0.5 0.4 6 0.6 0.5 7 0.7 0.6 8 0.8 0.7 9 0.9 0.8 (9, 1, 1) (9,) </code></pre> <p>Then start building the model</p> <pre><code>#configure network from tensorflow import keras from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM tf.random.set_seed(1337) n_batch = len(X) n_neurons = 10 #design network model = Sequential() model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True)) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') 
model.fit(X,y,epochs=2,batch_size=n_batch,verbose=1,shuffle=False) </code></pre> <p>Now every time I run the following code to get the prediction I get different results as you can see here</p> <pre><code>model.predict(X) ********output************** array([[0.03817442], [0.07164046], [0.10493257], [0.13797525], [0.17069395], [0.20301574], [0.23486984], [0.26618803], [0.29690543]], dtype=float32) </code></pre> <pre><code>model.predict(X) ********output************** array([[0.04415776], [0.08242793], [0.12048437], [0.15823033], [0.19556962], [0.2324073 ], [0.26865062], [0.3042098 ], [0.33899906]], dtype=float32) </code></pre>
<python><tensorflow><keras><prediction>
2023-01-31 23:16:58
3
439
Shahin Shirazi
75,304,086
16,978,074
Read all fields of a CSV file in Python
<p>I have a problem reading a CSV file. The fields on each line are separated by commas. My edge.csv file looks like this:</p> <pre><code>source,target,genre apple,banana,28 strawberry,mango,30 so on..... </code></pre> <p>This is my code for reading the edge.csv file:</p> <pre><code>def read_net(filename): g = nx.Graph() with open(filename,encoding=&quot;ISO-8859-1&quot;,newline=&quot;&quot;) as f: f.readline() for l in f: l = l.split(&quot;,&quot;) g.add_edge(l[0], l[1], l[2]) # my error is here return g read_net(&quot;edge.csv&quot;) </code></pre> <p>My code doesn't work because it can't read the &quot;genre&quot; field of the edge.csv file. It's as if my code only reads the first two fields and not the third. Why? This is my error:</p> <pre><code>TypeError: Graph.add_edge() takes 3 positional arguments but 4 were given </code></pre> <p>How can I read all three fields of the edge.csv file?</p>
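The traceback points at `g.add_edge(l[0], l[1], l[2])`: networkx's `add_edge` takes the two endpoints positionally and any edge attributes as keyword arguments, so the third field must be passed as `genre=l[2]`. A sketch of the parsing side using the `csv` module (which also strips the trailing newline that `split(",")` leaves on the last field), with an in-memory stand-in for the file:

```python
import csv
import io

data = "source,target,genre\napple,banana,28\nstrawberry,mango,30\n"

edges = []
with io.StringIO(data) as f:          # stand-in for open("edge.csv", ...)
    for row in csv.DictReader(f):
        # with networkx this would be:
        #   g.add_edge(row["source"], row["target"], genre=row["genre"])
        edges.append((row["source"], row["target"], {"genre": row["genre"]}))
```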
<python><csv>
2023-01-31 23:12:17
1
337
Elly
75,304,048
9,100,431
How to randomly split grouped dataframe in python
<p>I have the next dataframe:</p> <pre><code>df = pd.DataFrame({ &quot;player_id&quot;:[1,1,2,2,3,3,4,4,5,5,6,6], &quot;year&quot; :[1,2,1,2,1,2,1,2,1,2,1,2], &quot;overall&quot; :[20,16,7,3,8,80,20,12,9,3,2,1]}) </code></pre> <p>what is the easiest way to randomly sort it grouped by player_id, e.g.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>player_id</th> <th>year</th> <th>overall</th> </tr> </thead> <tbody> <tr> <td>4</td> <td>1</td> <td>80</td> </tr> <tr> <td>4</td> <td>2</td> <td>20</td> </tr> <tr> <td>1</td> <td>1</td> <td>20</td> </tr> <tr> <td>1</td> <td>2</td> <td>16</td> </tr> <tr> <td>...</td> <td>...</td> <td>...</td> </tr> </tbody> </table> </div> <p>And then split it 80-20 into a train and testing set where they don't share any player_id.</p>
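One dependency-light approach (scikit-learn's `GroupShuffleSplit` does the same job, if available): shuffle the unique ids, cut at 80%, then filter the frame with `isin`. A sketch of the id split itself, on the example's six players:

```python
import random

player_ids = [1, 2, 3, 4, 5, 6]           # df["player_id"].unique() in practice

rng = random.Random(42)                    # seeded only for reproducibility
shuffled = list(player_ids)
rng.shuffle(shuffled)

cut = int(len(shuffled) * 0.8)
train_ids = set(shuffled[:cut])
test_ids = set(shuffled[cut:])
# then: train = df[df["player_id"].isin(train_ids)], and likewise for test
```

Because the cut is on ids rather than rows, no `player_id` can appear in both sets.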
<python><pandas><dataframe>
2023-01-31 23:03:11
1
660
Diego
75,303,937
7,376,511
Access subtype of type hint
<pre><code>class MyClass: prop: list[str] MyClass.__annotations__ # {'prop': list[str]} </code></pre> <p>How do I access &quot;str&quot;?</p> <p>As a more generic question, given an obnoxiously complex and long type hint, like <code>prop: list[set[list[list[str] | set[int]]]]</code>, how do I access the internal values programmatically?</p>
<python><type-hinting>
2023-01-31 22:46:42
1
797
Some Guy
75,303,877
2,152,371
Fetch Request in Jinjia Include getting CORS Error
<p>So I have a site that is using Flask for the Front and Backend with Jinja templates. Currently testing with localhost (5000 is the backend and 8000 is for the frontend)</p> <p>the page in question</p> <p>main.html</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;UTF-8&quot;&gt; &lt;title&gt;Title&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;div id='container'&gt; {% include &quot;user.html&quot; %} &lt;/div&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>user.html</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;UTF-8&quot;&gt; &lt;title&gt;Title&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;div id=&quot;user&quot;&gt; &lt;h3&gt;User: &lt;span id=&quot;username&quot;&gt;&lt;/span&gt;&lt;/h3&gt; &lt;h4&gt;Winnings: &lt;span id=&quot;winnings&quot;&gt;&lt;/span&gt;&lt;/h4&gt; &lt;h4&gt;Plays: &lt;span id=&quot;credits&quot;&gt;&lt;/span&gt;&lt;/h4&gt; &lt;/div&gt; &lt;script&gt; var token = String(window.location).split('#')[1].split('&amp;')[0].split('=')[1]; fetch('http://localhost:5000/user',{ method:'POST', headers:{ 'Authorization':'bearer '+token, 'Content-Type':'application/json', 'Access-Control-Allow-Origin':'*' } }).then(function(r){ return r.json(); }).then(function(response){ document.getElementById('username').innerHTML = response['username']; document.getElementById('winnings').innerHTML = response['winnings']; document.getElementById('credits').innerHTML = response['credits']; }).catch(function(error){ console.log(error); }); &lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>frontend.py</p> <pre><code>from flask import Flask, render_template, request, redirect, url_for, jsonify from flask_cors import CORS import requests app = Flask(__name__) cors = CORS(app) @app.route('/main',methods=['GET']) def main(): return render_template('main.html') </code></pre> <p>backend.py</p> <pre><code>from flask import Flask, send_file, abort, 
jsonify, request from flask_cors import CORS from functools import wraps from urllib.request import urlopen from flask_sqlalchemy import SQLAlchemy from sqlalchemy import func, Column, Integer, String, create_engine from flask_migrate import Migrate from dotenv import load_dotenv import pandas as pd import os, sys, json, jwt, requests load_dotenv() app = Flask(__name__) cors = CORS(app, origins=['http://localhost:8000']) app.config.from_object('config') db = SQLAlchemy(app) migrate = Migrate(app,db,compare_type=True) def requires_auth(): #auth code here to make sure the token in the header is valid #token includes the user name, which is passed to the database @app.route('/user',methods=['POST']) @requires_auth() def getUser (p): try: user = User.query.filter_by(username=p['user']).first() return jsonify({ 'success':True, 'username':username, 'credits':user.credit, 'winnings':user.score }) except: print(sys.exc_info()) abort(401) </code></pre> <p>When this runs, I get this in the browser, and the user information does not update:</p> <pre><code>Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:5000/user. (Reason: CORS header β€˜Access-Control-Allow-Origin’ missing). Status code: 200. Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:5000/user. (Reason: CORS request did not succeed). Status code: (null). TypeError: NetworkError when attempting to fetch resource. </code></pre> <p>I checked the CORS, and the origin in the response matches the frontend port, as well as referer.</p> <p>I expect the javascript in &quot;user.html&quot; to make the call to the backend. As it is part of &quot;main.html&quot; which is rendered when 'http:localhost:8000/main' is called.</p>
<javascript><python><flask><cors><flask-cors>
2023-01-31 22:38:22
2
470
Miko
75,303,726
8,537,770
AWS CDK EC2 Instance SSH with Keypair Timing out
<p>I've created an ec2 instance with AWS CDK in python. I've added a security group and allowed ingress rules for ipv4 and ipv6 on port 22. The keypair that I specified, with the help of <a href="https://stackoverflow.com/questions/60041500/create-associate-ssh-keypair-to-an-ec2-instance-with-the-cdk/60043612#60043612">this stack question</a> has been used in other EC2 instances set up with the console with no issue.</p> <p>Everything appears to be running, but my connection keeps timing out. I went through the checklist of what usually causes this <a href="https://aws.amazon.com/premiumsupport/knowledge-center/ec2-linux-resolve-ssh-connection-errors/" rel="nofollow noreferrer">provided by amazon</a>, but none of those common things seems to be the problem (at least to me).</p> <p>Why can't I connect with my ssh keypair from the instance I made with AWS CDK? I'm suspecting the KeyName I am overriding is not the correct name in Python, but I can't find it in the cdk docs.</p> <p>Code included below.</p> <pre><code>vpc = ec2.Vpc.from_lookup(self, &quot;VPC&quot;, vpc_name=os.getenv(&quot;VPC_NAME&quot;)) sec_group = ec2.SecurityGroup(self, &quot;SG&quot;, vpc=vpc, allow_all_outbound=True) sec_group.add_ingress_rule(ec2.Peer.any_ipv4(), connection=ec2.Port.tcp(22)) sec_group.add_ingress_rule(ec2.Peer.any_ipv6(), connection=ec2.Port.tcp(22)) instance = ec2.Instance( self, &quot;name&quot;, vpc=vpc, instance_type=ec2.InstanceType.of(ec2.InstanceClass.T2, ec2.InstanceSize.MICRO), machine_image=ec2.AmazonLinuxImage( generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX_2 ), security_group=sec_group, ) instance.instance.add_property_override(&quot;KeyName&quot;, os.getenv(&quot;KEYPAIR_NAME&quot;)) elastic_ip = ec2.CfnEIP(self, &quot;EIP&quot;, domain=&quot;vpc&quot;, instance_id=instance.instance_id) </code></pre>
<python><amazon-web-services><amazon-ec2><aws-cdk>
2023-01-31 22:17:01
1
663
A Simple Programmer
75,303,621
153,612
Set range and label for axis
<p>I'm plotting data from an array A of size 10*10, where each element <code>A[x,y]</code> is calculated by a function <code>f(x,y)</code> with x and y in the range <code>(-3, 3)</code></p> <pre><code>import numpy as np import matplotlib.pyplot as plt def f(x,y): return ... s = 10 a = np.linspace(-3, 3, s) fxy = np.array([f(x,y) for x in a for y in a]).reshape((s, s)) plt.xticks(labels=np.arange(-3, 3), ticks=range(6)) plt.yticks(labels=np.arange(-3, 3), ticks=range(6)) plt.imshow(fxy) </code></pre> <p>So the labels on the xy-axes are not what I want, since x and y are taken from the range <code>(-3, 3)</code> (not from <code>(0, 10)</code>, which is the size of the 2d-array). How can I set these labels correctly?</p>
<python><matplotlib><imshow>
2023-01-31 22:03:55
1
671
Ta Thanh Dinh
75,303,360
2,542,194
PyPdf2 interactive form elements missing after Adobe sign
<p>I have an Acroform pdf that contains a combination of text and interactive fields like dropdowns and checkboxes. I am using PyPDF2 to successfully retrieve all the field values (using get_Fields() and decrypting it with the default '' password), however once the pdf is signed using Acrobat Sign, I cannot access the interactive fields anymore. I read on another SO post that signing a pdf flattens it, however I can still access all the text fields after signing.</p> <p>I have tried both PyPDF2 and the java RUPS iText 5.5.9 desktop app, and neither of them can see any interactive fields (dropdowns, checkboxes, datepickers) after signing the pdf. Is there a way to read interactive fields at all after signing?</p> <p>Thank you.</p>
<python><python-3.x><itext><pypdf><adobe-sign>
2023-01-31 21:32:15
0
737
javapyscript
75,303,328
860,991
Accessing dockerized neo4j using neo4j vs py2neo
<p>I have setup neo4j to run in docker and exposed the http and bolt ports (7474, 7687).</p> <p>This is the setup I used :</p> <pre><code>docker run \ --name testneo4j \ -p7474:7474 -p7687:7687 \ -d \ -v `pwd`/neo4j/data:/data \ -v `pwd`/neo4j/logs:/logs \ -v `pwd`/import:/var/lib/neo4j/import \ -v `pwd`/neo4j/plugins:/plugins \ --env NEO4J_AUTH=neo4j/XXXXXXX \ </code></pre> <p>I am now trying to connect to the graph database using Python</p> <p>Using the <code>py2neo</code> library works fine:</p> <pre><code>In [1]: from py2neo import Graph In [2]: graph=Graph('bolt://localhost:7687',user=&quot;neo4j&quot;, password=&quot;XXXXXXX&quot;) ...: graph.run('MATCH(x) RETURN COUNT(x)') COUNT(x) ---------- 0 </code></pre> <p>But when I use the <code>neo4j</code> module:</p> <pre><code>from neo4j import GraphDatabase, TRUST_ALL_CERTIFICATES trust=TRUST_ALL_CERTIFICATES neo4j_user=&quot;neo4j&quot; neo4j_passwd=&quot;XXXXXXX&quot; uri=&quot;bolt://localhost:7687&quot; driver = GraphDatabase.driver(uri, auth=(neo4j_user, neo4j_passwd), encrypted=False, trust=trust) </code></pre> <p>I get this error:</p> <pre><code> File ~/local/anaconda3/lib/python3.8/site-packages/neo4j/__init__.py:120, in GraphDatabase.driver(cls, uri, **config) 114 @classmethod 115 def driver(cls, uri, **config): 116 &quot;&quot;&quot; Create a :class:`.Driver` object. Calling this method provides 117 identical functionality to constructing a :class:`.Driver` or 118 :class:`.Driver` subclass instance directly. 
119 &quot;&quot;&quot; --&gt; 120 return Driver(uri, **config) File ~/local/anaconda3/lib/python3.8/site-packages/neo4j/__init__.py:161, in Driver.__new__(cls, uri, **config) 159 for subclass in Driver.__subclasses__(): 160 if parsed_scheme in subclass.uri_schemes: --&gt; 161 return subclass(uri, **config) 162 raise ValueError(&quot;URI scheme %r not supported&quot; % parsed.scheme) File ~/local/anaconda3/lib/python3.8/site-packages/neo4j/__init__.py:235, in DirectDriver.__new__(cls, uri, **config) 232 return connect(address, **dict(config, **kwargs)) 234 pool = ConnectionPool(connector, instance.address, **config) --&gt; 235 pool.release(pool.acquire()) 236 instance._pool = pool 237 instance._max_retry_time = config.get(&quot;max_retry_time&quot;, default_config[&quot;max_retry_time&quot;]) File ~/local/anaconda3/lib/python3.8/site-packages/neobolt/direct.py:715, in ConnectionPool.acquire(self, access_mode) 714 def acquire(self, access_mode=None): --&gt; 715 return self.acquire_direct(self.address) File ~/local/anaconda3/lib/python3.8/site-packages/neobolt/direct.py:608, in AbstractConnectionPool.acquire_direct(self, address) 606 if can_create_new_connection: 607 try: --&gt; 608 connection = self.connector(address, error_handler=self.connection_error_handler) 609 except ServiceUnavailable: 610 self.remove(address) File ~/local/anaconda3/lib/python3.8/site-packages/neo4j/__init__.py:232, in DirectDriver.__new__.&lt;locals&gt;.connector(address, **kwargs) 231 def connector(address, **kwargs): --&gt; 232 return connect(address, **dict(config, **kwargs)) File ~/local/anaconda3/lib/python3.8/site-packages/neobolt/direct.py:972, in connect(address, **config) 970 raise ServiceUnavailable(&quot;Failed to resolve addresses for %s&quot; % address) 971 else: --&gt; 972 raise last_error File ~/local/anaconda3/lib/python3.8/site-packages/neobolt/direct.py:964, in connect(address, **config) 962 s = _connect(resolved_address, **config) 963 s, der_encoded_server_certificate = 
_secure(s, host, security_plan.ssl_context, **config) --&gt; 964 connection = _handshake(s, address, der_encoded_server_certificate, **config) 965 except Exception as error: 966 last_error = error File ~/local/anaconda3/lib/python3.8/site-packages/neobolt/direct.py:920, in _handshake(s, resolved_address, der_encoded_server_certificate, **config) 918 if agreed_version == 0: 919 log_debug(&quot;[#%04X] C: &lt;CLOSE&gt;&quot;, local_port) --&gt; 920 s.shutdown(SHUT_RDWR) 921 s.close() 922 elif agreed_version in (1, 2): OSError: [Errno 57] Socket is not connected </code></pre> <p>Does anyone know why the former works but the latter doesn't?</p>
<python><python-3.x><neo4j><cypher>
2023-01-31 21:28:20
1
3,517
femibyte
75,303,230
14,908,234
How to delete line of large CSV file in place for upload to Postgres
<p>I have a 102gb CSV file exported from MongoDB that I'm trying to upload to Postgres. The file contains ~55 million rows. I'm using <code>\copy</code> to upload. However, I get a carriage return error on line 47,867,184:</p> <pre><code>ERROR: unquoted carriage return found in data HINT: Use quoted CSV field to represent carriage return. CONTEXT: COPY reso_facts, line 47867184 </code></pre> <p>To my knowledge, Postgres doesn't allow for skipping bad rows on import. Seems like I need to fix the file. Is there a way to delete a CSV row in-place using Python? I strongly prefer not to write the file to an external hard drive.</p> <p>I found <a href="https://stackoverflow.com/questions/2329417/fastest-way-to-delete-a-line-from-large-file-in-python/2330081#2330081">this</a> elegant solution for txt files:</p> <pre><code>import os from mmap import mmap def removeLine(filename, lineno): f=os.open(filename, os.O_RDWR) m=mmap(f,0) p=0 for i in range(lineno-1): p=m.find('\n',p)+1 q=m.find('\n',p) m[p:q] = ' '*(q-p) os.close(f) </code></pre> <p>But it errors out when fed a CSV: <code>TypeError: a bytes-like object is required, not 'str'</code>. Is there a way to modify the above for CSVs, or is there an alternative method entirely?</p>
<python><python-3.x><postgresql><csv><bigdata>
2023-01-31 21:15:48
0
1,151
mmz
75,303,213
10,194,070
python3 + how to open files while file that opened created with owner and group on the fly
<p>Here is a simple example where I write the list <code>some_list</code> to the file test.txt:</p> <pre><code> with open('/home/moon/test.txt', 'w') as f: print(some_list, file=f) f.close() </code></pre> <p>The file above is created with owner and group <code>root:root</code>.</p> <p>Is it possible to create the same file with a specific owner and group (<code>moon:san</code>) instead of <code>root:root</code>?</p> <p>The other option, of course, is to create the file first under <code>/home/moon/test.txt</code>, then change the owner to moon and the group to san, and then run the code that opens the file.</p>
<python><python-3.x><linux>
2023-01-31 21:14:28
1
1,927
Judy
75,303,020
11,281,877
Label specific points in seaborn based on x-values
<p>I have a dataframe where idividuals have some scores. The idea is to highlight the reference indididual (check) in red and the individuals with a lower score in green. Following similar problem on StackOverflow (<a href="https://stackoverflow.com/questions/46027653/adding-labels-in-x-y-scatter-plot-with-seaborn">Adding labels in x y scatter plot with seaborn</a>), I was able to highlight the check in red. However, I failed to highlight in green the two individuals (id_11, id_17) with a lower score. I got the error &quot;ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().&quot; Please, find below my code. Thank you in advance for your help.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import matplotlib.pyplot as plt import seaborn as sns df = pd.DataFrame( {'Individual Name': ['id_1', 'check', 'id_3', 'id_4', 'id_5', 'id_6', 'id_7', 'id_8', 'id_9', 'id_10', 'id_11', 'id_12', 'id_13', 'id_14', 'id_15', 'id_16', 'id_17', 'id_18', 'id_19', 'id_20', 'id_21', 'id_22', 'id_23', 'id_24', 'id_25', 'id_26', 'id_27', 'id_28', 'id_29', 'id_30'], 'feature': [0.508723818, 0.438733637, 0.718100026, 0.506722786, 0.520924985, 0.69302915, 0.659499198, 0.547989555, 0.714309067, 0.617602669, 0.35364303, 0.534064345, 0.59011931, 0.488031738, 0.511025466, 0.655582175, 0.32029745, 0.594929278, 0.562511802, 0.571763799, 0.681324482, 0.40444921, 0.628999099, 0.497668065, 0.690914914, 0.530561335, 0.798924312, 0.671025127, 0.71243462, 0.539980784], 'score': [91.5, 89.75, 94.25, 91.75, 91.75, 93.5, 93.25, 92.25, 94.0, 93.0, 89.25, 92.0, 92.5, 91.5, 91.5, 93.5, 88.5, 92.25, 92.0, 93.25, 93.25, 90.25, 92.75, 90.75, 94.0, 92.0, 95.75, 93.75, 94.5, 92.0]}) fig, ax = plt.subplots() sns.scatterplot(data=df, x='score', y='feature') plt.text(x=df['score'][df['Individual Name'] == 'check'], y=df['feature'][df['Individual Name'] == 'check'], s='check', color='red') score_of_check = df['score'][ df['Individual Name'] 
== 'check'] # reference value for highlighting individuals that have a lower score print(score_of_check) # label points if score is lower than score_of_check for x in df['score']: if x &lt; score_of_check: print(x) # Even print generates the error plt.text(x=df['score'], y=df['feature'], s=df['Individual Name'], color='green') # Ultimately I would like to label the two individuals, id_11 and id_17, in green plt.show() plt.close() </code></pre>
<python><matplotlib><text><seaborn>
2023-01-31 20:53:43
1
519
Amilovsky
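Editor's note: the `ValueError` in this question arises because `df['score'][df['Individual Name'] == 'check']` is a one-element Series, not a scalar, so `x < score_of_check` ends up comparing against a Series. A minimal sketch of one possible fix, on a trimmed copy of the question's data, with the `plt.text` call left as a comment so the selection logic stands on its own:

```python
import pandas as pd

# Trimmed copy of the question's data.
df = pd.DataFrame({
    'Individual Name': ['check', 'id_11', 'id_17', 'id_27'],
    'feature': [0.438733637, 0.35364303, 0.32029745, 0.798924312],
    'score': [89.75, 89.25, 88.5, 95.75],
})

# .item() turns the one-row selection into a scalar, which avoids
# "The truth value of a Series is ambiguous".
score_of_check = df.loc[df['Individual Name'] == 'check', 'score'].item()

# Filter row-wise instead of comparing whole Series, then label each match.
lower = df[(df['score'] < score_of_check) & (df['Individual Name'] != 'check')]
for _, row in lower.iterrows():
    # plt.text(x=row['score'], y=row['feature'],
    #          s=row['Individual Name'], color='green')  # one call per point
    pass

print(list(lower['Individual Name']))
```

With the question's full data this selects exactly `id_11` and `id_17`.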
75,302,846
7,984,318
Python move file to folder based on both partial of file name and value of another partial of file name
<p>There are 3 files in folder:</p> <pre><code>my_floder: Review Report - 2020-3.20230110151743889.xlsx Review Report - 2020-3.20230110151753535.xlsx Review Report - 2019-4.20230110151744423.xlsx </code></pre> <p>Each of the file name has 3 parts,take the first file as an example:</p> <pre><code>First Part:&quot;Review Report -&quot; Second Part:&quot;2020-3&quot; Third Part:&quot;.20230110151743889&quot; </code></pre> <p>The logic is: if some files have the same second part file names, then only choose the one who has the larger third part value and move it to another folder .The third part of the file name is a time stamp,yyyymmddhhmm...</p> <p>For example the second part of the first 2 files are the same, but since the second file Review Report - 2020-3.20230110151753535.xlsx has a large 3rd part of the file name '20230110151753535',so only the second and third files will be copy to another file, the first file will be skipped.</p> <p>Some helpful script:</p> <pre><code>parts_list=os.path.basename(filename).split(&quot;.&quot;) output is: ['Review Report - 2020-3', '20230110151743889', 'xlsx'] second_part = parts_list[0].split(&quot; - &quot;)[1] output is: '2020-3' thrid_part=parts_list[1] output is: 20230110151743889 </code></pre> <p>The best that I can do:</p> <pre><code> unique = [] for filename in glob.glob(my_floder): parts_list=os.path.basename(filename).split(&quot;.&quot;) second_part = parts_list[0].split(&quot; - &quot;)[1] thrid_part=parts_list[1] if second_part not in unique: unique.append(second_part) else: # here need to compare the value of the third part ,and move the file with larger third part to another folder but I have no idea how to do that </code></pre> <p>any friend can help ?</p>
<python><python-3.x>
2023-01-31 20:34:25
2
4,094
William
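Editor's note: one way to finish the asker's loop is to remember, per second part, the largest third part seen so far. A sketch on the literal names from the question, with the actual move step left as a comment (the destination folder is hypothetical):

```python
import os

filenames = [
    'Review Report - 2020-3.20230110151743889.xlsx',
    'Review Report - 2020-3.20230110151753535.xlsx',
    'Review Report - 2019-4.20230110151744423.xlsx',
]

best = {}  # second part -> (third part, full file name)
for filename in filenames:
    parts_list = os.path.basename(filename).split('.')
    second_part = parts_list[0].split(' - ')[1]
    third_part = parts_list[1]
    # The timestamps are fixed-width digit strings, so comparing them as
    # strings orders them the same way as comparing them as numbers.
    if second_part not in best or third_part > best[second_part][0]:
        best[second_part] = (third_part, filename)

winners = sorted(name for _, name in best.values())
# for name in winners: shutil.move(name, other_folder)  # hypothetical target
print(winners)
```

Only the newest `2020-3` file and the single `2019-4` file survive, matching the expected behaviour.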
75,302,795
9,391,359
How to get text and corresponding tag with BeautifulSoup?
<p>I have a text that contains HTML tags, something like:</p> <pre><code>text = &lt;p&gt;Some text&lt;/p&gt; &lt;h1&gt;Some text&lt;/h1&gt; .... soup = BeautifulSoup(text) </code></pre> <p>I parsed this text using <code>BeautifulSoup</code>. I would like to extract every sentence with its corresponding text and tag. I tried:</p> <pre><code>for sent in soup: print(sent.text) &lt;- ok print(sent.tag) &lt;- **not ok since NavigableString does not have a tag attribute** </code></pre> <p>I also tried <code> soup.find_all()</code> and got stuck at the same point: I have access to the text but not the original tag.</p>
<python><html><web-scraping><beautifulsoup>
2023-01-31 20:30:04
1
941
Alex Nikitin
75,302,631
1,464,160
Installing ssdeep package from PyPi on M1 Macbook
<h1>The Goal</h1> <p>Install <a href="https://pypi.org/project/ssdeep/" rel="nofollow noreferrer">ssdeep</a> PyPi package on a M1 Macbook Pro.</p> <h1>The Problem</h1> <p>When I run <code>pip install ssdeep</code> I get 2 errors</p> <p>The first error is caused because <code>fuzzy.h</code> cannot be found.</p> <pre><code>warnings.warn( running egg_info creating /private/var/folders/0f/4c82tsj50n10zqq89fndcslc0000gn/T/pip-pip-egg-info-ai0atrdv/ssdeep.egg-info writing /private/var/folders/0f/4c82tsj50n10zqq89fndcslc0000gn/T/pip-pip-egg-info-ai0atrdv/ssdeep.egg-info/PKG-INFO writing dependency_links to /private/var/folders/0f/4c82tsj50n10zqq89fndcslc0000gn/T/pip-pip-egg-info-ai0atrdv/ssdeep.egg-info/dependency_links.txt writing requirements to /private/var/folders/0f/4c82tsj50n10zqq89fndcslc0000gn/T/pip-pip-egg-info-ai0atrdv/ssdeep.egg-info/requires.txt writing top-level names to /private/var/folders/0f/4c82tsj50n10zqq89fndcslc0000gn/T/pip-pip-egg-info-ai0atrdv/ssdeep.egg-info/top_level.txt writing manifest file '/private/var/folders/0f/4c82tsj50n10zqq89fndcslc0000gn/T/pip-pip-egg-info-ai0atrdv/ssdeep.egg-info/SOURCES.txt' src/ssdeep/__pycache__/_ssdeep_cffi_a28e5628x27adcb8d.c:266:14: fatal error: 'fuzzy.h' file not found #include &quot;fuzzy.h&quot; ^~~~~~~~~ 1 error generated. 
Traceback (most recent call last): File &quot;/Users/user/proj/.venv/lib/python3.11/site-packages/setuptools/_distutils/unixccompiler.py&quot;, line 186, in _compile self.spawn(compiler_so + cc_args + [src, '-o', obj] + extra_postargs) File &quot;/Users/user/proj/.venv/lib/python3.11/site-packages/setuptools/_distutils/ccompiler.py&quot;, line 1007, in spawn spawn(cmd, dry_run=self.dry_run, **kwargs) File &quot;/Users/user/proj/.venv/lib/python3.11/site-packages/setuptools/_distutils/spawn.py&quot;, line 70, in spawn raise DistutilsExecError( distutils.errors.DistutilsExecError: command '/usr/bin/clang' failed with exit code 1 </code></pre> <p>The second error has to do with setuptools.installer being deprecated. I'm not sure this is all that important though. I think resolving the first error would resolve this one as well.</p> <pre><code>/Users/user/proj/.venv/lib/python3.11/site-packages/setuptools/installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer. </code></pre> <h1>Attempted Solutions</h1> <p><strong>Solution 1:</strong> Install SSDeep with Homebrew <code>brew install ssdeep</code></p> <p><strong>Result:</strong> <code>pip install ssdeep</code> has the same error about fuzzy.h missing</p> <p><strong>Solution 2:</strong> Use the prepackaged version of SSDeep <code>BUILD_LIB=1 pip install ssdeep</code></p> <p><strong>Result</strong>: The error about <code>fuzzy.h</code> goes away but the second error regarding <code>setuptools.installer</code> being deprecated remains.</p> <h1>References</h1> <ul> <li><a href="http://wzr.github.io/blog/2014/10/06/compiling-ssdeep-and-pydeep-on-mac-os-x-10-dot-9-plus/" rel="nofollow noreferrer">Compiling SSDeep and pydeep on MacOS X 10.9+</a> This was pretty out of date though.</li> <li><a href="https://pypi.org/project/ssdeep/" rel="nofollow noreferrer">SSDeep Documentation</a></li> </ul>
<python><pip><apple-m1><pypi>
2023-01-31 20:12:28
3
1,483
HopAlongPolly
75,302,370
7,228,093
Create Flask Restx endpoints without namespaces
<p>I'm working on the redesign of an API with Flask using Flask-restx, but I've a problem: We need a legacy version of the API that accepts the old URLs, for compatibility reasons, but I'm not understanding how to do this since Flask-restx requires a namespace to be declared.</p> <p>Urls should be something like this:</p> <pre><code>{{host}}/api/v1/art/savegallery &lt;- new one {{host}}/savegallery &lt;- legacy </code></pre> <p>In Flask I've something like this:</p> <p><code>app/__init__.py</code></p> <pre><code>db = SQLAlchemy() migrate = Migrate() cors = CORS() def create_app(config_class=DevelopmentConfig): app = Flask(__name__) app.config.from_object(config_class) db.init_app(app=app) migrate.init_app(app=app, db=db) cors.init_app(app=app) from app.api import api_bp, legacy_bp app.register_blueprint(api_bp, url_prefix='/api/v1') app.register_blueprint(legacy_bp) return app </code></pre> <p><code>/app/api/__init__.py</code></p> <pre><code>api_bp = Blueprint('v1', __name__) legacy_bp = Blueprint('legacy', __name__) api_v1 = Api( app=api_bp, version='1.00', title='Art', description=( &quot;API&quot; ), ) api_lgc = Api( app=legacy_bp, version='1.00', title='Art Legacy', description=( &quot;API Legacy&quot; ), ) from app.art.routes import art_ns api_v1.add_namespace(art_ns) api_lgc.add_namespace(art_ns) </code></pre> <p><code>app/art/routes.py</code></p> <pre><code>art_ns = Namespace(name='art', description='Art Storage') #artlegacy_ns = Namespace(name='legacy', description='Art Storage') @art_ns.route('/savegallery') class GalleryAPI(Resource): def get(self): try: #data = request.json data = {} return {&quot;foo&quot;:&quot;bar&quot;}, 200 except Exception as e: print(e) return {&quot;error&quot;: &quot;Something happened&quot;}, 500 </code></pre> <p>With this, I can access <code>{{host}}/api/v1/art/savegallery</code> correctly, but I'm not finding a way to declare the legacy one, since creating a URL this way would require at least the namespace part of the URL. 
Does Flask-restx have a way to declare those URLs and/or redirect the flow to the new ones?</p>
<python><flask><flask-restx>
2023-01-31 19:43:58
1
515
EfraΓ­n
75,302,280
12,695,210
Pytest fixture scope and @pytest.mark.parametrize
<p>I am a little confused about fixture scope in pytest. Say I have a fixture</p> <pre><code>@pytest.fixture(scope=&quot;function&quot;) def data(): data = generate_some_data() yield data teardown() </code></pre> <p>and a test function</p> <pre><code>@pytest.mark.parametrize(&quot;runs&quot;, [&quot;one&quot;, &quot;two&quot;]) def my_test(data, runs): run_some_tests(runs) </code></pre> <p>My understanding is that in this case, the generate_some_data() function will be run for each parametrization, with the fixture being set up then torn down. Is it possible to keep the scope so that the fixture is only set up and torn down once, for all parametrizations?</p>
<python><pytest>
2023-01-31 19:34:13
1
695
Joseph
75,302,259
9,648,665
Sort the products based on the frequency of changes in customer demand
<p>Imagine the following dataframe is given.</p> <pre><code>import pandas as pd products = ['Apple', 'Apple', 'Carrot', 'Egg', 'Egg', 'Egg'] customer_demand_date = ['2023-01-01', '2023-01-07', '2023-01-01', '2023-01-01', '2023-01-07', '2023-01-14'] col_02_2023 = [0, 20, 0, 0, 0, 0] col_03_2023 = [20, 30, 10, 0, 10, 0] col_04_2023 = [10, 40, 50, 30, 40, 10] col_05_2023 = [40, 40, 60, 50, 60, 20] data = {'Products': products, 'customer_demand_date': customer_demand_date, '02_2023': col_02_2023, '03_2023': col_03_2023, '04_2023': col_04_2023, '05_2023': col_05_2023} df = pd.DataFrame(data) print(df) Products customer_demand_date 02_2023 03_2023 04_2023 05_2023 0 Apple 2023-01-01 0 20 10 40 1 Apple 2023-01-07 20 30 40 40 2 Carrot 2023-01-01 0 10 50 60 3 Egg 2023-01-01 0 0 30 50 4 Egg 2023-01-07 0 10 40 60 5 Egg 2023-01-14 0 0 10 20 </code></pre> <p>I have columns products, customer_demand_date (every week there is new customer demand for products per upcoming months) and months with quantity demand. How can I determine which product has experienced the most frequent changes in customer demand over the months, and sort the products in descending order of frequency of change? I have tried to group by product and accumulate the demand quantity, but none of these approaches can analyze the data both horizontally (per customer demand date) and vertically (per months). Desired output:</p> <pre><code>Sorted products Ranking(or %, or count of changes) Egg 1 (or 70% or 13) Apple 2 (or 52% or 8) Carrot 3 (22% or 3) </code></pre> <p>Either ranking or % of change frequency or count of changes.</p> <ul> <li>Note: percentages in desired output are random numbers</li> </ul> <p>I'd really appreciate any clever approach to solve this problem. Thanks</p>
<python><pandas><sorting><group-by><accumulate>
2023-01-31 19:32:22
2
687
Sascha
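Editor's note: "change" admits more than one definition here; one reading — month-over-month differences within each demand snapshot (row), totalled per product — already reproduces the Egg > Apple > Carrot ordering the asker wants. A sketch built from the printed table in the question:

```python
import pandas as pd

# The printed table from the question.
df = pd.DataFrame({
    'Products': ['Apple', 'Apple', 'Carrot', 'Egg', 'Egg', 'Egg'],
    '02_2023': [0, 20, 0, 0, 0, 0],
    '03_2023': [20, 30, 10, 0, 10, 0],
    '04_2023': [10, 40, 50, 30, 40, 10],
    '05_2023': [40, 40, 60, 50, 60, 20],
})
month_cols = ['02_2023', '03_2023', '04_2023', '05_2023']

# Count month-over-month changes inside each row (horizontal direction),
# then total the counts per product across its snapshots (vertical).
changes = df[month_cols].diff(axis=1).iloc[:, 1:].ne(0).sum(axis=1)
ranking = changes.groupby(df['Products']).sum().sort_values(ascending=False)
print(ranking)
```

On this data the ranking comes out Egg (7), Apple (5), Carrot (3); dividing by the number of month transitions per product would turn the counts into percentages.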
75,302,200
1,484,601
python types: Literal of logging level as type?
<p>the following code:</p> <pre class="lang-py prettyprint-override"><code>import logging print(type(1)) print(type(logging.WARNING)) </code></pre> <p>prints:</p> <pre><code>&lt;class 'int'&gt; &lt;class 'int'&gt; </code></pre> <p>yet, according to mypy, the first line of this code snippet is legal, but the second is not (Variable &quot;logging.WARNING&quot; is not valid as a type):</p> <pre class="lang-py prettyprint-override"><code>OneOrTwo = Literal[1,2] # ok WarningOrError = Literal[logging.WARNING, logging.ERROR] # not ok </code></pre> <p>I do not understand why the definition of OneOrTwo is ok but WarningOrError is not.</p> <p>I would like also know what could be done to use a legal equivalent of WarningOrError, i.e. something I could use like this:</p> <pre class="lang-py prettyprint-override"><code>def a(arg: WarningOrError)-&gt;None: pass </code></pre> <p>note: mypy redirects to <a href="https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliases" rel="nofollow noreferrer">https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliases</a> , but this did not clarify things for me.</p>
<python><types><mypy><literals>
2023-01-31 19:25:12
1
4,521
Vince
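Editor's note: mypy rejects the second alias because PEP 586 requires the parameters of `Literal[...]` to be literal expressions written in-line; `logging.WARNING` is an `int` at runtime but a *variable reference* at type-checking time. A sketch of the usual workaround — spelling out the numeric values (assuming the stdlib constants stay 30 and 40):

```python
import logging
from typing import Literal

# PEP 586 only allows in-line literals inside Literal[...], so write the
# values themselves rather than the logging constants.
WarningOrError = Literal[30, 40]  # 30 == logging.WARNING, 40 == logging.ERROR

def a(arg: WarningOrError) -> str:
    return logging.getLevelName(arg)

# Caveat: mypy still types logging.WARNING as plain int, so call sites may
# need a literal (a(30)) or a cast to satisfy the checker.
print(a(30), a(40))
```

At runtime nothing is enforced either way; the alias only constrains what the type checker accepts.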
75,302,161
1,028,237
ECS Fargate shutdown and decrement desired count in task
<p>I would like to shut down a task once I've detected there is no more work. The task is part of fargate target group with autoscaling rules so simply doing an <code>ecs.stop_task(cluster=cluster_arn, task=task_id)</code> won't work as the group will add another task to meet the desired task count. I want something like <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/autoscaling.html#AutoScaling.Client.terminate_instance_in_auto_scaling_group" rel="nofollow noreferrer">auto_scaling.terminate_instance_in_auto_scaling_group(InstanceId=instance_id, ShouldDecrementDesiredCapacity=True)</a> but instance ID seems like an ECS thing and is not present in the task Metadata. I know theres a URI that provides instance ID for EC2, would that work here? Whats the best way to achieve this, specifically with the <code>boto3</code> library?</p> <p>EDIT #1: Is there a way to decrement desired count with ECS?</p>
<python><amazon-web-services><boto3><amazon-ecs>
2023-01-31 19:21:47
0
1,426
Verbal_Kint
75,302,111
2,185,248
using python to load docker image zip file without python docker module
<p>Using python3.10.6 and docker 20.10.23, build 7155243 on Ubuntu 22.04. And trying not to use the <code>docker</code> module but just <code>subprocess</code> to load a docker image</p> <pre><code>def run(params): output = subprocess.run(params, stdout=subprocess.PIPE, stderr=subprocess.PIPE) if output.returncode == 0: print(output.stdout.decode()) else: print(output.stderr.decode()) run(['sudo', 'docker', 'load', '&lt;', 'PATH/OF/AIMAGE.zip']) run(['sudo', 'docker', 'run', '-t', '-d', '-v', PATH_REPO + ':/root/windows-mount', 'IMAGE:v3']) </code></pre> <p>I can see the image is loaded fine by using <code>sudo docker ps</code></p> <pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 96d1f80916ce IMAGE:v3 &quot;/bin/bash&quot; 53 seconds ago Up 52 seconds happy_joliot </code></pre> <p>but the <code>docker load</code> always output an error:</p> <pre><code>&quot;docker load&quot; accepts no arguments. See 'docker load --help'. Usage: docker load [OPTIONS] Load an image from a tar archive or STDIN [docerapp.py:66] </code></pre> <p>Any suggestion to fix the error?</p>
<python><docker>
2023-01-31 19:15:57
1
2,809
r0n9
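Editor's note: `<` is shell redirection; without `shell=True`, `subprocess` passes it to `docker load` as a literal extra argument, which is exactly why docker replies "accepts no arguments". Two hedged alternatives are sketched below (note also that `docker load` expects a tar archive, not a zip — the path here is hypothetical):

```python
import subprocess

archive = 'PATH/OF/AIMAGE.tar'  # hypothetical path; docker load wants a tar

# Option 1: let docker open the file itself via its --input flag.
load_cmd = ['sudo', 'docker', 'load', '--input', archive]
# subprocess.run(load_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# Option 2: stream the archive to docker's stdin ourselves.
def load_image(path):
    with open(path, 'rb') as fh:
        return subprocess.run(['sudo', 'docker', 'load'], stdin=fh,
                              stdout=subprocess.PIPE, stderr=subprocess.PIPE)

print(load_cmd)
```

Either way, no `<` appears in the argument list, so docker no longer sees a stray positional argument.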
75,302,082
1,028,237
Pandas: Group by contiguous time blocks
<p>I have API access logs with a timestamp user ID and request payload. I want to group by user and contiguous requests within 1 minute of each other and aggregate a count within each block. So if I had:</p> <pre><code>@timestamp @data @id 2023-01-21 09:46:33.478 ... Gh8Z4 2023-01-21 09:46:33.690 ... Gh8Z4 2023-01-21 09:46:34.189 ... Gh8Z4 2023-01-21 09:48:28.282 ... Gh8Z4 2023-01-21 09:51:27.652 ... HVtpG 2023-01-21 09:51:28.682 ... Gh8Z4 2023-01-21 09:52:17.412 ... HVtpG </code></pre> <p>I would like to see something like:</p> <pre><code>@id start end count Gh8Z4 2023-01-21 09:46:33.478 2023-01-21 09:46:34.189 3 Gh8Z4 2023-01-21 09:48:28.282 2023-01-21 09:48:28.282 1 HVtpG 2023-01-21 09:51:27.652 2023-01-21 09:52:17.412 2 Gh8Z4 2023-01-21 09:51:28.682 2023-01-21 09:51:28.682 1 </code></pre>
<python><pandas>
2023-01-31 19:13:12
1
1,426
Verbal_Kint
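Editor's note: the usual pattern for this grouping is a per-user `diff`, a boolean "gap exceeds one minute" flag, and a `cumsum` that gives every contiguous block its own label. A sketch on the sample data from the question:

```python
import pandas as pd

df = pd.DataFrame({
    '@timestamp': pd.to_datetime([
        '2023-01-21 09:46:33.478', '2023-01-21 09:46:33.690',
        '2023-01-21 09:46:34.189', '2023-01-21 09:48:28.282',
        '2023-01-21 09:51:27.652', '2023-01-21 09:51:28.682',
        '2023-01-21 09:52:17.412']),
    '@id': ['Gh8Z4', 'Gh8Z4', 'Gh8Z4', 'Gh8Z4', 'HVtpG', 'Gh8Z4', 'HVtpG'],
})

df = df.sort_values(['@id', '@timestamp'])
# A new block starts when the gap to the same user's previous request
# exceeds one minute; cumsum turns the flags into block labels.
gap = df.groupby('@id')['@timestamp'].diff() > pd.Timedelta(minutes=1)
block = gap.cumsum().rename('block')
out = (df.groupby(['@id', block])
         .agg(start=('@timestamp', 'min'), end=('@timestamp', 'max'),
              count=('@timestamp', 'size'))
         .reset_index())
print(out)
```

This reproduces the four rows of the desired output (counts 3, 1, 1 for `Gh8Z4` and 2 for `HVtpG`).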
75,301,743
2,623,317
How to list, concatenate, and evaluate polars expressions?
<p>I would like to store in an object (a list, a dictionary or whatever) many different filters, and then be able to select the ones I want and evaluate them in the <code>.filter()</code> method. Below is an example:</p> <pre><code># Sample DataFrame df = pl.DataFrame( {&quot;col_a&quot;: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], &quot;col_b&quot;: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]} ) # Set a couple of filters filter_1 = pl.col(&quot;col_a&quot;) &gt; 5 filter_2 = pl.col(&quot;col_b&quot;) &gt; 8 # Apply filters: this works fine! df_filtered = df.filter(filter_1 &amp; filter_2) # Concatenate filters filters = [filter_1, filter_2] # This won't work: df.filter((&quot; &amp; &quot;).join(filters)) df.filter((&quot; | &quot;).join(filters)) </code></pre> <p>What would be the correct way of <code>(&quot; &amp; &quot;).join(filters)</code> that will work?</p>
<python><python-polars>
2023-01-31 18:39:50
2
477
Guz
75,301,630
5,675,325
Pass data from the pipeline to views in Django Python Social Auth
<p>I was reading the documentation of Python Social Auth and got curious about the section of <a href="https://github.com/python-social-auth/social-docs/blob/master/docs/developer_intro.rst#interrupting-the-pipeline-and-communicating-with-views" rel="nofollow noreferrer">Interrupting the Pipeline (and communicating with views)</a>.</p> <p>In there, we see the following pipeline code</p> <pre><code>In our pipeline code, we would have: from django.shortcuts import redirect from django.contrib.auth.models import User from social_core.pipeline.partial import partial # partial says &quot;we may interrupt, but we will come back here again&quot; @partial def collect_password(strategy, backend, request, details, *args, **kwargs): # session 'local_password' is set by the pipeline infrastructure # because it exists in FIELDS_STORED_IN_SESSION local_password = strategy.session_get('local_password', None) if not local_password: # if we return something besides a dict or None, then that is # returned to the user -- in this case we will redirect to a # view that can be used to get a password return redirect(&quot;myapp.views.collect_password&quot;) # grab the user object from the database (remember that they may # not be logged in yet) and set their password. (Assumes that the # email address was captured in an earlier step.) 
user = User.objects.get(email=kwargs['email']) user.set_password(local_password) user.save() # continue the pipeline return </code></pre> <p>and the following view</p> <pre><code>def get_user_password(request): if request.method == 'POST': form = PasswordForm(request.POST) if form.is_valid(): # because of FIELDS_STORED_IN_SESSION, this will get copied # to the request dictionary when the pipeline is resumed request.session['local_password'] = form.cleaned_data['secret_word'] # once we have the password stashed in the session, we can # tell the pipeline to resume by using the &quot;complete&quot; endpoint return redirect(reverse('social:complete', args=(&quot;backend_name,&quot;))) else: form = PasswordForm() return render(request, &quot;password_form.html&quot;) </code></pre> <p>Specially interested in the line</p> <pre><code>return redirect(reverse('social:complete', args=(&quot;backend_name,&quot;))) </code></pre> <p>which is used to redirect the user back to the pipeline using an already stablished backend.</p> <p>We can see <a href="https://github.com/python-social-auth/social-docs/blob/master/docs/developer_intro.rst#understanding-the-pipeline" rel="nofollow noreferrer">earlier in that page</a> a condition that's used to check which backend is being used.</p> <pre><code>def my_custom_step(strategy, backend, request, details, *args, **kwargs): if backend.name != 'my_custom_backend': return # otherwise, do the special steps for your custom backend </code></pre> <p>The question is, instead of manually adding it in the <code>args=(&quot;backend_name,&quot;)</code>, how can the pipeline communicate the correct backend to the view?</p>
<python><django><authentication><django-views><python-social-auth>
2023-01-31 18:30:30
1
15,859
Tiago Peres
75,301,389
5,993,616
How to properly plot graph using matplotlib?
<p>I have these two lists:</p> <pre><code>**l1** = ['100.00', '120.33', '140.21', '159.81', '179.25', '183.13', '202.49', '202.89', '204.18', '205.35', '206.44', '207.45', '208.40', '209.30', '210.15', '210.96', '211.73', '212.47', '213.18', '213.87', '214.53', '215.17', '215.79', '216.39', '216.98', '217.54', '218.10', '218.63', '219.16', '219.67', '220.18', '220.67', '221.15'] **l2** = ['13.14', '13.37', '13.53', '13.66', '13.76', '13.77', '20.70', '21.51', '23.85', '26.39', '29.13', '32.06', '35.17', '38.47', '41.95', '45.63', '49.50', '53.59', '57.90', '62.45', '67.25', '72.33', '77.70', '83.40', '89.43', '95.83', '102.65', '109.90', '117.65', '125.95', '134.84', '144.40', '154.71'] </code></pre> <p>I plot l1 against l2 and this is what it's supposed to come out: <a href="https://i.sstatic.net/u1klq.png" rel="nofollow noreferrer">should_be</a></p> <p>I use the following code:</p> <pre><code>fig, ax = plt.subplots(1) ax.plot(l1, l2) plt.show() </code></pre> <p>and this is what comes out <a href="https://i.sstatic.net/SZgtZ.png" rel="nofollow noreferrer">it_is</a> like the step is regular even if the values are not equally distributed. Thanks</p>
<python><matplotlib>
2023-01-31 18:10:25
3
317
drSlump
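Editor's note: the even spacing happens because the lists hold *strings*, so matplotlib plots each value as its own evenly spaced category. Converting to floats first restores the numeric scale; a sketch on a trimmed copy of the lists, with the plotting call left as a comment:

```python
# Strings make matplotlib treat every value as a distinct category;
# floats put the points back on a true numeric axis.
l1 = ['100.00', '120.33', '140.21', '159.81', '179.25']
l2 = ['13.14', '13.37', '13.53', '13.66', '13.76']

x = [float(v) for v in l1]
y = [float(v) for v in l2]
# fig, ax = plt.subplots(1); ax.plot(x, y); plt.show()
print(x[:2], y[:2])
```

With the full lists, the converted values reproduce the expected curve.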
75,301,374
5,510,540
python: cumulative density plot
<p>I have the following dataframe:</p> <pre><code>df = Time_to_event event 0 0 days 443 1 1 days 226 2 2 days 162 3 3 days 72 4 4 days 55 5 5 days 30 6 6 days 36 7 7 days 18 8 8 days 15 9 9 days 14 10 10 days 21 11 11 days 13 12 12 days 10 13 13 days 10 14 14 days 8 </code></pre> <p>I want to produce a cumulative density plot of the sum of the events per days. For example 0 days 443, 1 days = 443 + 226 etc.</p> <p>I am currently trying this code:</p> <pre><code> stat = &quot;count&quot; # or proportion sns.histplot(df, stat=stat, cumulative=True, alpha=.4) </code></pre> <p>but I come up with a pretty terrible plot: <a href="https://i.sstatic.net/CEr8i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CEr8i.png" alt="enter image description here" /></a></p> <p>If I could also come up with a line instead of bars that would be awesome!</p>
<python><cumulative-sum>
2023-01-31 18:08:54
2
1,642
Economist_Ayahuasca
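Editor's note: `sns.histplot` re-bins the raw rows, which is why the picture looks off — the frame already holds per-day counts, so a `cumsum` followed by an ordinary line plot is enough. A sketch on the first rows of the data:

```python
import pandas as pd

df = pd.DataFrame({'Time_to_event': [0, 1, 2, 3, 4],
                   'event': [443, 226, 162, 72, 55]})

# Running total of events; plotting this column gives a line, not bars.
df['cum_events'] = df['event'].cumsum()
# ax = df.plot(x='Time_to_event', y='cum_events')  # line plot
print(df['cum_events'].tolist())
```

Dividing `cum_events` by `df['event'].sum()` would turn the counts into a proportion if that statistic is preferred.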
75,301,344
7,895,542
Install python package for python2.7 with pip
<p>I am trying to install pre-commit for Python 2.7 with pip 8.1.2.</p> <p>If I do <code>pip install --user pre-commit</code> or <code>python -m pip install --user pre-commit</code> it keeps trying to load pre-commit 3.0.2 and failing.</p> <p>So I tried to find the most recent version that still supports Python 2.7 (by manually going through the version history; is there no better way?) and that is 1.21.0.</p> <p>But even when I do <code>pip install --user pre-commit==1.21.0</code> it fails due to</p> <pre><code>Collecting virtualenv&gt;=15.2 (from pre-commit==1.21.0) Using cached https://files.pythonhosted.org/packages/7b/19/65f13cff26c8cc11fdfcb0499cd8f13388dd7b35a79a376755f152b42d86/virtualenv-20.17.1.tar.gz Complete output from command python setup.py egg_info: Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; File &quot;/tmp/pip-build-mjJnKQ/virtualenv/setup.py&quot;, line 4, in &lt;module&gt; raise RuntimeError(&quot;setuptools &gt;= 41 required to build&quot;) RuntimeError: setuptools &gt;= 41 required to build </code></pre> <p>which I assume is because virtualenv 20.17.1 requires Python 3.</p>
<python><python-2.7><pip>
2023-01-31 18:06:25
1
360
J.N.
75,301,282
17,696,880
How to send each of the strings within a list of strings to a function, and then replace that list element with the list that the function generates?
<pre class="lang-py prettyprint-override"><code>import re, spacy def evaluates_if_substring_is_a_verb_func(input_element): #---------------------------------- #nlp = spacy.load('en_core_web_sm') nlp = spacy.load('es_core_news_sm') doc = nlp(input_element) # Your text here list_verbs_in_element = [] for token in doc: if token.pos_ == &quot;VERB&quot;: #Only verbs start = token.idx # Start position of token end = token.idx + len(token) # End position = start + len(token) list_verbs_in_element.append(token.text) #---------------------------------- return(list_verbs_in_this_input) #input_list: list_verbs_in_this_input = ['correr saltar', 'llegamos', 'allΓ­', 'hacΓ­a', 'allΓ‘', 'en', 'el', 'centro', 'habrΓ‘', ''] #call to the function for each element of the list by the return of the function #remove strings that only have whitespace or are empty print(list_verbs_in_this_input) # --&gt; print here to check the result </code></pre> <p>I need to send one by one each of the substrings within the list called <code>list_verbs_in_this_input</code> to the <code>evaluates_if_substring_is_a_verb_func()</code> function, and replace the element(s) of the <code>list_verbs_in_element</code> list that is generated in this function by the element from the list <code>list_verbs_in_this_input</code> that was sent as a parameter to this function.</p> <p><a href="https://i.sstatic.net/LhsSv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LhsSv.png" alt="enter image description here" /></a></p> <p>At the end and removing the elements that contain empty strings or only whitespaces, you should get this when printing the original list:</p> <pre class="lang-py prettyprint-override"><code>['correr', 'saltar', 'llegamos', 'habrΓ‘'] </code></pre>
<python><python-3.x><string><list><spacy>
2023-01-31 18:01:45
1
875
Matt095
75,301,216
777,593
Pandas convert a column containing strings into new columns
<p>I have a dataframe with columns that contains comma separated strings. I would like to create new columns similar to what one hot encoding does.</p> <p>Below is a very simplistic example. In my use case, I have thousands of rows with more columns, and two columns containing comma separated many strings. I could have used apply+lamda function+string contains condition to create each column but that is very tedious as it will be 100s of new columns</p> <p>Input Datafarme</p> <pre><code>ColumnA ColumnB 1 {&quot;alpha&quot;, &quot;bravo&quot;} 2 {&quot;bravo&quot;, &quot;charlie&quot;} 3 {&quot;alpha&quot;, &quot;charlie&quot;,&quot;gama&quot;} 4 {&quot;bravo&quot;, &quot;charlie&quot;,&quot;delta&quot;} </code></pre> <p>Output dataframe</p> <pre><code>ColumnA alpha bravo charlie delta gamma 1 1 1 0 0 0 2 0 1 0 0 0 3 1 0 1 0 1 4 0 1 1 1 0 </code></pre> <p><strong>edit:</strong> I should have mentioned that the data is read from csv file. Column B is is of type str. It contains many strings inside. it looks {&quot;value1&quot; , &quot;value2&quot;....&quot;valueN&quot;} here values could be of the format &quot;XYZ&quot; or &quot;X Y Z&quot; or &quot;X_Y_Z&quot;. The example i have provided is very simplistic version</p>
<python><pandas><dataframe><one-hot-encoding>
2023-01-31 17:57:06
2
2,411
Khurram Majeed
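Editor's note: since `ColumnB` holds plain strings like `{"alpha", "bravo"}`, one approach is to pull the quoted tokens out with a regex, explode to one row per token, and cross-tabulate — which avoids writing one lambda per possible value. A sketch (the pattern assumes every value is double-quoted, which covers the `"XYZ"`, `"X Y Z"`, and `"X_Y_Z"` formats mentioned):

```python
import pandas as pd

df = pd.DataFrame({
    'ColumnA': [1, 2, 3, 4],
    'ColumnB': ['{"alpha", "bravo"}', '{"bravo", "charlie"}',
                '{"alpha", "charlie","gama"}', '{"bravo", "charlie","delta"}'],
})

# One row per quoted token, then pivot the tokens into indicator columns.
tokens = df['ColumnB'].str.findall(r'"([^"]+)"')
exploded = df[['ColumnA']].join(tokens).explode('ColumnB')
dummies = pd.crosstab(exploded['ColumnA'], exploded['ColumnB']).reset_index()
print(dummies)
```

This scales to thousands of distinct values without any per-column code, since the indicator columns are derived from the data itself.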
75,301,196
2,013,056
Getting list of all the URLs in a Closed Issue page in GitHub using Selenium
<p>I am trying to store the links of all the closed issues from a GitHub (<a href="https://github.com/mlpack/mlpack/issues?q=is%3Aissue+is%3Aclosed" rel="nofollow noreferrer">https://github.com/mlpack/mlpack/issues?q=is%3Aissue+is%3Aclosed</a>) project using Selenium. I use the code below:</p> <pre><code>repo_closed_url = [link.find_element(By.CLASS_NAME,'h4').get_attribute('href') for link in driver.find_elements(By.XPATH,'//div[@aria-label=&quot;Issues&quot;]')] </code></pre> <p>However, the above code only returns the first URL. How can I get all the URLs in that page? I iterate through all the pages. So just getting the links from the first page is fine.</p>
<python><selenium><selenium-webdriver><xpath><selenium-chromedriver>
2023-01-31 17:55:07
3
649
Mano Haran
75,301,185
11,462,274
Center align all dataframe headers except two that one of them must align left and the other right
<p>The code below creates a table with the values of all columns centered and the column titles also centered. I align the values in the <code>local_team</code> column to the right and the values in the <code>visitor_team</code> column to the left:</p> <pre class="lang-python prettyprint-override"><code>ef dfi_image(list_dict,name_file): df = pd.DataFrame(list_dict) df = df[['time','competition','local_team','score','visitor_team','channels']] df = df.style.set_table_styles([dict(selector='th', props=[('text-align', 'center'),('background-color', '#40466e'),('color', 'white')])]) df.set_properties(**{'text-align': 'center'}).hide(axis='index') df.set_properties(subset=['local_team'], **{'text-align': 'right'}).hide(axis='index') df.set_properties(subset=['visitor_team'], **{'text-align': 'left'}).hide(axis='index') dfi.export(df, name_file + &quot;.png&quot;) dfi_image(games,'table_dfi.png') </code></pre> <p><a href="https://i.sstatic.net/g16j3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g16j3.png" alt="enter image description here" /></a></p> <p>But the title of <code>local_team</code> doesn't align to the right and neither the title of <code>visitor_team</code> aligns to the left and I need them with equal alignment to the values of these columns.</p> <p>So i try:</p> <pre class="lang-python prettyprint-override"><code>def dfi_image(list_dict,name_file): df = pd.DataFrame(list_dict) df = df[['time','competition','local_team','score','visitor_team','channels']] df = df.style.set_table_styles([ dict(selector='th', props=[('text-align', 'center'), ('background-color', '#40466e'), ('color', 'white')]), dict(selector='th.col_heading.local_team', props=[('text-align', 'right')]), dict(selector='th.col_heading.visitor_team', props=[('text-align', 'left')]) ]) df.set_properties(**{'text-align': 'center'}).hide(axis='index') df.set_properties(subset=['local_team'], **{'text-align': 'right'}).hide(axis='index') df.set_properties(subset=['visitor_team'], 
**{'text-align': 'left'}).hide(axis='index') dfi.export(df, name_file + &quot;.png&quot;) dfi_image(games,'table_dfi.png') </code></pre> <p><a href="https://i.sstatic.net/Opewy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Opewy.png" alt="enter image description here" /></a></p> <p>how to proceed for the result is equal this image:</p> <p><a href="https://i.sstatic.net/U3wMA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U3wMA.png" alt="enter image description here" /></a></p>
<python><pandas>
2023-01-31 17:54:19
1
2,222
Digital Farmer
75,300,936
11,092,636
%%timeit magic command and variable set to global in function
<p>Here is a MRE:</p> <pre class="lang-py prettyprint-override"><code>%%timeit variable = 0 def func(): global variable variable += 1 func() assert (variable == 1) </code></pre> <p>It works perfectly without the magic command <code>%%timeit</code>.</p> <p>I'm not sure I understand why it doesn't work when I add the magic command. It seems that it's due to the fact that the variable <code>variable</code> got set to <code>global</code>.</p> <p>I'm using <code>VSCode</code> and <code>Python 3.11.1</code></p> <p>EDIT after It_is_Chris comment suggesting it was just an <code>AssertionError</code>. This doesn't work either:</p> <pre class="lang-py prettyprint-override"><code>%%timeit variable = 0 def func(): global variable variable += 1 func() print(f&quot;Variable is {variable}&quot;) </code></pre>
<python><visual-studio-code><timeit>
2023-01-31 17:33:26
1
720
FluidMechanics Potential Flows
75,300,900
11,887,287
conda python 3.8 ffmpeg【ffprobe: symbol lookup error: /anaconda3/envs/bin/../lib/./libgnutls.so.30: undefined symbol: mpn_add_1, version HOGWEED_4】
<p><strong>Dependencies:</strong></p> <ul> <li>Ubuntu: 20.04</li> <li>conda: 4.12.0</li> <li>Python: 3.8</li> <li>Pytorch: 1.7.1</li> <li>ffmpeg: 4.2.3 (conda-forge)</li> </ul> <p>I am facing a problem after installing <strong>FFmpeg</strong> from the conda-forge channel with the following commands:</p> <pre><code>$ conda config --add channels conda-forge $ conda install ffmpeg </code></pre> <p>Running <code>$ ffmpeg -version</code> gives this error message:</p> <pre><code>ffprobe: symbol lookup error: /home/user/anaconda3/envs/myenv/bin/../lib/./libgnutls.so.30: undefined symbol: mpn_add_1, version HOGWEED_4 </code></pre> <p>I have also tried</p> <pre><code>$ pip install ffmpeg or $ pip install ffprobe or $ conda install ffmpeg-python </code></pre> <p>But they do not work for me.</p> <p>Could anyone point out how to solve this issue? Thanks in advance!!</p>
<python><ffmpeg><conda><ffprobe>
2023-01-31 17:31:02
3
985
wen
75,300,844
2,186,785
Executing python script with pdflatex using PHP on Ubuntu webserver?
<p>I am trying to execute a Python script, which uses pdflatex, from PHP. Running the Python script via the command line works well.</p> <p>But if I try to call it from PHP, it throws this error:</p> <blockquote> <p>I can't write on file mylatex.log'. (Press Enter to retry, or Control-D to exit; default file extension is `.log') Please type another transcript file name: ! Emergency stop ! ==&gt; Fatal error occurred, no output PDF file produced!</p> </blockquote> <p>So there seems to be a permission error.</p> <p>This is how I call the Python script from PHP:</p> <pre><code>$command = escapeshellcmd('python3 /home/ubuntu/test.py'); $output = shell_exec($command); echo $output; </code></pre> <p>The mylatex.log file has 777 permissions as a test.</p> <p>Is there a way to execute a Python script which uses a tool like pdflatex?</p>
<python><php><webserver><pdflatex>
2023-01-31 17:25:59
1
1,179
JavaForAndroid
75,300,635
6,218,501
Version control with DataFrames
<p>I'm trying to compare two dataframes in order to check what has changed between both of them. This is part of a version control script so I've made a simplified version trying to find a solution:</p> <pre><code>data = {'ID': ['1', '2', '3', '4'], 'Date': ['23-01-2023', '01-12-1995', '03-07-2013', '05-09-2013'], 'Time': ['01:45:08', '02:15:21', '23:57:14', '03:57:15'], 'Path': ['//server/test/File1.txt', '//server/test/File2.txt', '//server/test/File3.txt', '//server/test/File4.txt'], } data2 = {'ID': ['1', '2', '3'], 'Date': ['23-01-2023', '03-07-2013', '01-12-1995', '05-09-2013'], 'Time': ['01:45:08', '23:57:14', '02:17:21', '03:18:31'], 'Path': ['//server/test/File1.txt', '//server/test/File3.txt', '//server/test/File2.txt', '//server/test/File5.txt'], } df = pd.DataFrame(data) df2 = pd.DataFrame(data2) </code></pre> <p>So I have the two dataframes created as follows:</p> <p><strong>DataFrame 1</strong></p> <pre><code> | ID | Date | Time | Path | | 1 | 23-01-2023 | 01:45:08 | //server/test/File1.txt | | 2 | 01-12-1995 | 02:15:21 | //server/test/File2.txt | | 3 | 03-07-2013 | 23:57:14 | //server/test/File3.txt | | 4 | 05-09-2013 | 03:57:15 | //server/test/File4.txt | </code></pre> <p><strong>DataFrame 2</strong></p> <pre><code> | ID | Date | Time | Path | | 1 | 23-01-2023 | 01:45:08 | //server/test/File1.txt | | 2 | 03-07-2013 | 23:57:14 | //server/test/File3.txt | | 3 | 01-12-1995 | 02:17:21 | //server/test/File2.txt | | 4 | 21-11-1991 | 03:18:31 | //server/test/File5.txt | </code></pre> <p>Taking the first one as reference, I know:</p> <ol> <li>File with ID 4 has been removed</li> <li>File 2 has been modified</li> <li>New file has been added (ID 4 in table dataframe 2)</li> </ol> <p>At the end I would like to have the following output:</p> <pre><code> | ID | Date | Time | Path | Status | | 1 | 23-01-2023 | 01:45:08 | //server/test/File1.txt | - | | 2 | 01-12-1995 | 02:15:21 | //server/test/File2.txt | UPDATED | | 3 | 03-07-2013 | 23:57:14 | //server/test/File3.txt | - | | 4 | 05-09-2013 | 03:57:15 | //server/test/File4.txt | DELETED | | 5 | 21-11-1991 | 03:18:31 | //server/test/File5.txt | ADDED | </code></pre> <p>Can that be done using just joins in Pandas?</p>
<python><pandas>
2023-01-31 17:07:29
2
461
Ralk
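A sketch of one way to derive the status column with a single outer merge, assuming `Path` identifies a file and the remaining columns carry its tracked state:

```python
import pandas as pd

df = pd.DataFrame({
    "ID": ["1", "2", "3", "4"],
    "Date": ["23-01-2023", "01-12-1995", "03-07-2013", "05-09-2013"],
    "Time": ["01:45:08", "02:15:21", "23:57:14", "03:57:15"],
    "Path": ["//server/test/File1.txt", "//server/test/File2.txt",
             "//server/test/File3.txt", "//server/test/File4.txt"],
})
df2 = pd.DataFrame({
    "ID": ["1", "2", "3", "4"],
    "Date": ["23-01-2023", "03-07-2013", "01-12-1995", "21-11-1991"],
    "Time": ["01:45:08", "23:57:14", "02:17:21", "03:18:31"],
    "Path": ["//server/test/File1.txt", "//server/test/File3.txt",
             "//server/test/File2.txt", "//server/test/File5.txt"],
})

# indicator=True adds a "_merge" column saying which side each row came from.
merged = df.merge(df2, on="Path", how="outer", indicator=True,
                  suffixes=("_old", "_new"))

def status(row):
    if row["_merge"] == "left_only":
        return "DELETED"          # only in the old snapshot
    if row["_merge"] == "right_only":
        return "ADDED"            # only in the new snapshot
    # present in both: compare the tracked state columns
    if (row["Date_old"], row["Time_old"]) != (row["Date_new"], row["Time_new"]):
        return "UPDATED"
    return "-"

merged["Status"] = merged.apply(status, axis=1)
print(merged[["Path", "Status"]])
```

The `_merge` indicator does the ADDED/DELETED bookkeeping for free; only the "both" rows need a column-by-column comparison.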
75,300,605
7,920,004
Return data from Python's Redshift procedure call
<p><code>redshift_connector</code> is defined to be aligned with <a href="https://peps.python.org/pep-0249/#id24" rel="nofollow noreferrer">https://peps.python.org/pep-0249/#id24</a> but I can't, after calling procedure, retrieve data into a dataframe.</p> <p>Instead I'm getting <code>'mycursor'</code> value. How to overcome this?</p> <p><code>fetch*()</code> methods don't allow passing argument that will allow getting into data in <code>mycursor</code>.</p> <p>I also tried <a href="https://docs.aws.amazon.com/redshift/latest/dg/c_PLpgSQL-structure.html#r_PLpgSQL-record-type" rel="nofollow noreferrer">RECORDS</a> type but no luck.</p> <p><strong>Procedure's body</strong>:</p> <pre><code>--CREATE TABLE reporting.tbl(a int, b int); --INSERT INTO reporting.tblVALUES(1, 4); CREATE OR REPLACE PROCEDURE reporting.procedure(param IN integer, rs_out INOUT refcursor) LANGUAGE plpgsql AS $$ BEGIN OPEN rs_out FOR SELECT a FROM reporting.tbl; END; $$; </code></pre> <p><strong>Python code</strong>:</p> <pre><code>import redshift_connector conn = redshift_connector.connect( host='xyz.xyz.region.redshift.amazonaws.com', database='db', port=5439, user=&quot;user&quot;, password='p@@s' ) cursor = conn.cursor() cursor.execute(&quot;BEGIN;&quot;) res = cursor.callproc(&quot;reporting.procedure&quot;, parameters=[1, 'mycursor']) res = cursor.fetchall() cursor.execute(&quot;COMMIT;&quot;) #returns (['mycursor'],) print(res) </code></pre>
<python><amazon-web-services><amazon-redshift>
2023-01-31 17:05:05
2
1,509
marcin2x4
75,300,476
14,912,118
PySpark GroupBy agg collect_list multiple columns
<p>I am working with the following code.</p> <pre><code>df = spark.createDataFrame(data=[[&quot;john&quot;, &quot;tomato&quot;, 1.99, 1],[&quot;john&quot;, &quot;carrot&quot;, 0.45, 1],[&quot;bill&quot;, &quot;apple&quot;, 0.99, 1],[&quot;john&quot;, &quot;banana&quot;, 1.29, 1], [&quot;bill&quot;, &quot;taco&quot;, 2.59, 1]], schema = [&quot;name&quot;, &quot;food&quot;, &quot;price&quot;, &quot;col_1&quot;]) </code></pre> <pre><code>+----+------+-----+-----+ |name| food|price|col_1| +----+------+-----+-----+ |john|tomato| 1.99| 1| |john|carrot| 0.45| 1| |bill| apple| 0.99| 1| |john|banana| 1.29| 1| |bill| taco| 2.59| 1| +----+------+-----+-----+ </code></pre> <p>I am using collect_list on multiple columns with the code below.</p> <pre><code>df.groupBy('name').agg(collect_list(concat_ws(', ','food','price')).alias('sample')).show(10,False) </code></pre> <p>I am getting the output below.</p> <pre><code>+----+------------------------------------------+ |name|sample | +----+------------------------------------------+ |john|[tomato, 1.99, carrot, 0.45, banana, 1.29]| |bill|[apple, 0.99, taco, 2.59] | +----+------------------------------------------+ </code></pre> <p>But I need to get the output below.</p> <pre><code>+----+------------------------------------------+ |name|sample | +----+------------------------------------------+ |john|tomato 1.99, carrot 0.45, banana 1.29 | |bill|apple 0.99, taco 2.59 | +----+------------------------------------------+ </code></pre> <p>Is there any way to get the above output? I know the code above won't produce it.</p> <p>Could anyone provide the solution I am expecting?</p>
<python><apache-spark><pyspark>
2023-01-31 16:53:38
1
427
Sharma
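Spark isn't available here, so the intended string logic is sketched in plain Python below; the equivalent PySpark expression would presumably be `concat_ws(', ', collect_list(concat_ws(' ', 'food', 'price')))` — join food and price with a space per row, then join the collected rows with `", "`:

```python
from collections import defaultdict

rows = [("john", "tomato", 1.99), ("john", "carrot", 0.45),
        ("bill", "apple", 0.99), ("john", "banana", 1.29),
        ("bill", "taco", 2.59)]

# Per-row: "food price" (inner concat_ws with a space).
# Per-group: join rows with ", " (outer concat_ws on the collected list).
grouped = defaultdict(list)
for name, food, price in rows:
    grouped[name].append(f"{food} {price}")

sample = {name: ", ".join(items) for name, items in grouped.items()}
print(sample["john"])  # tomato 1.99, carrot 0.45, banana 1.29
```

The key change versus the question's code is where each separator goes: a space *inside* the per-row concat, and the comma when flattening the collected list.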
75,300,188
2,371,684
python dictionary update without overwriting
<p>I am learning Python, and I have two JSON files. The data structures in these two JSON files are different.</p> <p>I start by importing both of the JSON files. I want to choose a course from the courses dict, and then add it to a specific education in the educations dict. What I want to solve: via user input, choose a key from the first dict, and then, within a while loop, choose a key from the second dict to be added to the dict chosen from the first dict.</p> <p>I am able to add the dict from the second dict to the first one as a sub-dict as I want to, but with the update method it overwrites all previous values.</p> <p>I have used the dict.update() method so as not to overwrite previous values. I then want to write the updated dict back to the first JSON file.</p> <p>My code works partially: I am able to add a course to an education, but it overwrites all previous courses I chose to add to a specific education.</p> <p>This is the content of the first json file:</p> <pre><code>{ &quot;itsak22&quot;: { &quot;edcuationId&quot;: &quot;itsak22&quot;, &quot;edcuation_name&quot;: &quot;cybersecurityspecialist&quot; }, &quot;feu22&quot;: { &quot;edcuationId&quot;: &quot;feu22&quot;, &quot;edcuation_name&quot;: &quot;frontendutvecklare&quot; } } </code></pre> <p>This is the content of the second json file:</p> <pre><code>{ &quot;sql&quot;: { &quot;courseId&quot;: &quot;itsql&quot;, &quot;course_name&quot;: &quot;sql&quot;, &quot;credits&quot;: 35 }, &quot;python&quot;: { &quot;courseId&quot;: &quot;itpyt&quot;, &quot;course_name&quot;: &quot;python&quot;, &quot;credits&quot;: 30 }, &quot;agile&quot;: { &quot;courseId&quot;: &quot;itagl&quot;, &quot;course_name&quot;: &quot;agile&quot;, &quot;credits&quot;: 20 } } </code></pre> <p>And this is my python code:</p> <pre><code>import json # Load the first JSON file of dictionaries with
open('courses1.json') as f: second_dicts = json.load(f) # Print the keys from both the first and second JSON files print(&quot;All educations:&quot;, first_dicts.keys()) print(&quot;All courses:&quot;, second_dicts.keys()) # Ask for input on which dictionary to add to which first_key = input(&quot;Which education would you like to choose to add courses to? (Enter 'q' to quit): &quot;) while True: second_key = input(&quot;Which course would you like to add to education? (Enter 'q' to quit)&quot;) if second_key == 'q': break # Create a sub-dictionary named &quot;courses&quot; in the specific dictionary of the first file if &quot;courses&quot; not in first_dicts[first_key]: first_dicts[first_key][&quot;courses&quot;] = {} first_dicts[first_key][&quot;courses&quot;].update(second_dicts[second_key]) #first_dicts = {**first_dicts, **second_dicts} #first_dicts.update({'courses': second_dicts}) # Update the first JSON file with the new dictionaries with open('edcuations1.json', 'w') as f: json.dump(first_dicts, f, indent=4) </code></pre>
<python><dictionary>
2023-01-31 16:30:54
2
1,575
user2371684
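A minimal sketch of the likely fix, using trimmed-down copies of the question's data: keep each added course under its own key inside the `"courses"` sub-dict, instead of `update()`-ing the course's *fields* into it (which makes every new course overwrite the previous one's `courseId`/`course_name`/`credits` keys):

```python
educations = {"itsak22": {"edcuationId": "itsak22",
                          "edcuation_name": "cybersecurityspecialist"}}
courses = {"sql": {"courseId": "itsql", "course_name": "sql", "credits": 35},
           "python": {"courseId": "itpyt", "course_name": "python", "credits": 30}}

first_key = "itsak22"
for second_key in ("sql", "python"):      # stands in for the input() loop
    # setdefault creates the sub-dict once and returns it afterwards.
    edu_courses = educations[first_key].setdefault("courses", {})
    # Assign under the course's key -- update(courses[second_key]) would merge
    # the course's fields directly into "courses" and clobber earlier courses.
    edu_courses[second_key] = courses[second_key]

print(list(educations[first_key]["courses"]))  # ['sql', 'python']
```

With this shape, writing `educations` back out with `json.dump` preserves every course that was added.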
75,300,029
5,833,797
Python SQLAlchemy 2.0 non required field types using dataclass_transform
<p>I have just installed SQLAlchemy 2.0 on a new project and I am trying to make my models as type-safe as possible.</p> <p>By using <code>@typing_extensions.dataclass_transform</code>, I have been able to achieve most of what I want to achieve in terms of type checking, however all fields are currently being marked as not required.</p> <p>For example:</p> <pre><code> @typing_extensions.dataclass_transform(kw_only_default=True) class Base(DeclarativeBase): pass class TestModel(Base): __tablename__ = &quot;test_table&quot; name: Mapped[str] id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True) external_id: Mapped[int] = mapped_column( ForeignKey(&quot;external.id&quot;), nullable=False ) def test_test_model(session: Session) -&gt; None: TEST_NAME = &quot;name&quot; external = External() session.add(external) session.commit() model1 = TestModel() # Intellisense shows error because &quot;name&quot; is required model2 = TestModel(name=TEST_NAME, external_id=external.id). # no error session.add(model2) session.commit() # model commits successfully model3 = TestModel(name=TEST_NAME) # No intellisense error, despite &quot;external_id&quot; being required session.add(model3) session.commit(). # error when saving because of missing &quot;external_id&quot; </code></pre> <p>In the example above, how can I set the type of <code>external_id</code> to be required?</p>
<python><sqlalchemy><python-typing><python-dataclasses>
2023-01-31 16:17:07
1
727
Dave Cook
75,299,996
1,446,710
Importing multiple CSV into one DataFrame?
<p>I tried many answers but none of them works for me:</p> <p>For example this: <a href="https://stackoverflow.com/questions/20906474/import-multiple-csv-files-into-pandas-and-concatenate-into-one-dataframe">Import multiple CSV files into pandas and concatenate into one DataFrame</a></p> <pre><code>import pandas as pd import glob import os path = r'C:\DRO\DCL_rawdata_files' # use your path all_files = glob.glob(os.path.join(path , &quot;/*.csv&quot;)) li = [] for filename in all_files: df = pd.read_csv(filename, index_col=None, header=0) li.append(df) frame = pd.concat(li, axis=0, ignore_index=True) </code></pre> <p>I have only 2 csv files:</p> <p>1.csv:</p> <pre><code>1,1 2,1 3,1 4,1 5,1 </code></pre> <p>2.csv:</p> <pre><code>6,1 7,1 8,1 9,1 </code></pre> <p>To be fair, this is my routine for merging:</p> <pre><code>files = glob.glob(&quot;data/*.csv&quot;) df = [] for f in files: csv = pd.read_csv(f, index_col=None, header=0) df.append(csv) df = pd.concat(df, axis=0, ignore_index=True) df.to_csv(&quot;all.csv&quot;) print(df); </code></pre> <p>This is the output (print(df)):</p> <pre><code> 1 1.1 6 0 2 1.0 NaN 1 3 1.0 NaN 2 4 1.0 NaN 3 5 1.0 NaN 4 1 NaN 7.0 5 1 NaN 8.0 6 1 NaN 9.0 </code></pre> <p>And this is the &quot;all.csv&quot;:</p> <pre><code>,1,1.1,6 0,2,1.0, 1,3,1.0, 2,4,1.0, 3,5,1.0, 4,1,,7.0 5,1,,8.0 6,1,,9.0 </code></pre> <p>Whereas I would need all.csv to be:</p> <pre><code>1,1 2,1 3,1 4,1 5,1 6,1 7,1 8,1 9,1 </code></pre> <p>I'm using Python 3.9 with PyCharm 2022.3.1.</p> <p>Why does my all.csv look like that, and how can I simply read multiple CSVs into one dataframe for further processing?</p>
<python><pandas><csv>
2023-01-31 16:14:31
3
2,725
Daniel
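Two things likely bite here: `header=0` treats the first *data* row of each file as column names (these CSVs have no header row), and in the first snippet `os.path.join(path, "/*.csv")` can discard `path` entirely because the second component starts with a separator. A sketch of both fixes, demonstrated with the question's two files recreated in a temporary directory:

```python
import glob
import os
import tempfile

import pandas as pd

with tempfile.TemporaryDirectory() as path:
    with open(os.path.join(path, "1.csv"), "w") as f:
        f.write("1,1\n2,1\n3,1\n4,1\n5,1\n")
    with open(os.path.join(path, "2.csv"), "w") as f:
        f.write("6,1\n7,1\n8,1\n9,1\n")

    # Pattern without a leading "/" so the join keeps the directory prefix.
    files = sorted(glob.glob(os.path.join(path, "*.csv")))
    # header=None: keep the first line of each file as data, not column names.
    frames = [pd.read_csv(f, header=None) for f in files]
    df = pd.concat(frames, ignore_index=True)
    # Write back without the index column or an invented header row.
    df.to_csv(os.path.join(path, "all.csv"), index=False, header=False)

print(df[0].tolist())  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

With `header=None` nothing is swallowed as a header, and `index=False, header=False` keeps `all.csv` in the same bare two-column shape as the inputs.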
75,299,972
1,552,080
Flask + Jinja2: how to replace the occurrence of a value in a table by a icon
<p>I am working on a simple web page showing the content of a pandas <code>DataFrame</code> by using the Flask framework:</p> <pre><code>from flask import Flask, render_template import DatabaseConnector from jinja2 import Environment app = Flask(__name__) app.jinja_options = {'lstrip_blocks': True, 'trim_blocks': True} app.create_jinja_environment() @app.route('/') def index(): connector = DatabaseConnector() df = connector.getAvailableData() return render_template('index.html', tables=[df.to_html(classes='data', header=True)]) </code></pre> <p>From a tutorial I borrowed the following jinja/html code in a file <code>index.html</code> extending <code>base.html</code>:</p> <pre><code>{% extends 'base.html' %} {% block content %} &lt;h1&gt;{% block title %}Status Viewer{% endblock %}&lt;/h1&gt; {% for table in tables %} {{ table|safe }} {% endfor %} {% endblock %} </code></pre> <p>The result it produces is somewhat OK:</p> <p><a href="https://i.sstatic.net/dHKIb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dHKIb.png" alt="Resulting table on web page" /></a></p> <p>Now I would like to make the table a bit mor fancy and replace the True/False status in the columns &quot;Collecting&quot;/&quot;Archiving&quot; with a red/green dot or tick mark icon. How can I do that?</p> <p>I suppose there is more than one approach to this problem, in this sense I don't care if the solution is in CSS, JavaScript, Jinja2, Python.</p>
<javascript><python><html><css><pandas>
2023-01-31 16:12:36
1
1,193
WolfiG
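One simple approach, sketched with made-up status data standing in for the real DataFrame: map the booleans to small HTML snippets before calling `to_html`, and render with `escape=False` so the markup survives. (CSS classes or `df.style` formatters would work as well.)

```python
import pandas as pd

df = pd.DataFrame({"Server": ["s1", "s2"],
                   "Collecting": [True, False],
                   "Archiving": [False, True]})

# Green tick / red cross as HTML entities; could equally be <img> tags.
icons = {True: '<span style="color:green">&#10004;</span>',
         False: '<span style="color:red">&#10008;</span>'}

pretty = df.copy()
for col in ("Collecting", "Archiving"):
    pretty[col] = pretty[col].map(icons)

# escape=False is essential, otherwise to_html would HTML-escape the spans.
html = pretty.to_html(classes="data", header=True, escape=False, index=False)
print("&#10004;" in html)  # True: the tick markup is kept verbatim
```

The resulting string can be passed to `render_template` exactly like the original `df.to_html(...)` output, and `{{ table|safe }}` in the template already prevents double-escaping.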
75,299,946
14,802,285
How to access the gradients of intermediate outputs during the training loop?
<p>Let's say I have following (relatively) small lstm model:</p> <p>First, let's create some pseudo input/target data:</p> <pre><code>import torch # create pseudo input data (features) features = torch.rand(size = (64, 24, 3)) # of shape (batch_size, num_time_steps, num_features) # create pseudo target data targets = torch.ones(size = (64, 24, 1)) # of shape (batch_size, num_time_steps, num_targets) # store num. of time steps num_time_steps = features.shape[1] </code></pre> <p>Now, let's define a simple lstm model:</p> <pre><code># create a simple lstm model with lstm_cell class SmallModel(torch.nn.Module): def __init__(self): super().__init__() # initialize the parent class # define the layers self.lstm_cell = torch.nn.LSTMCell(input_size = features.shape[2], hidden_size = 16) self.fc = torch.nn.Linear(in_features = 16, out_features = targets.shape[2]) def forward(self, features): # initialise states hx = torch.randn(64, 16) cx = torch.randn(64, 16) # empty list to collect final preds a_s = [] b_s = [] c_s = [] for t in range(num_time_steps): # loop through each time step # select features at the current time step t features_t = features[:, t, :] # forward computation at the current time step t hx, cx = self.lstm_cell(features_t, (hx, cx)) out_t = torch.relu(self.fc(hx)) # do some computation with the output a = out_t * 0.8 + 20 b = a * 2 c = b * 0.9 a_s.append(a) b_s.append(b) c_s.append(c) a_s = torch.stack(a_s, dim = 1) # of shape (batch_size, num_time_steps, num_targets) b_s = torch.stack(b_s, dim = 1) c_s = torch.stack(c_s, dim = 1) return a_s, b_s, c_s </code></pre> <p>Instantiating model, loss fun. 
and optimizer:</p> <pre><code># instantiate the model model = SmallModel() # loss function loss_fn = torch.nn.MSELoss() # optimizer optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9) </code></pre> <p>Now, during the training loop, I want to print the gradients of the intermediate (<code>a_s.grad</code>, <code>b_s.grad</code>) outputs for each epoch:</p> <pre><code># number of epochs n_epoch = 10 # training loop for epoch in range(n_epoch): # loop through each epoch # zero out the grad because pytorch accumulates them optimizer.zero_grad() # make predictions a_s, b_s, c_s = model(features) # retain the gradients of intermediate outputs a_s.retain_grad() b_s.retain_grad() c_s.retain_grad() # compute loss loss = loss_fn(c_s, targets) # backward computation loss.backward() # print gradients of outpus at each epoch print(a_s.grad) print(b_s.grad) # update the weights optimizer.step() </code></pre> <p>But I get the following:</p> <pre><code>None None None None None None None None None None None None None None None None None None None None </code></pre> <p>How can I get the actual gradients of the intermediate outputs?</p>
<python><deep-learning><pytorch><lstm><recurrent-neural-network>
2023-01-31 16:10:25
1
3,364
bird
75,299,891
6,930,340
Type hint for numpy.ndarray containing unsignedinteger
<p>I have a numpy array that contains unsigned integers, something like this:</p> <pre><code>arr = np.uint16([5, 100, 2000]) array([ 5, 100, 2000], dtype=uint16) </code></pre> <p>This <code>arr</code> will be the input to a function. I am wondering what the type hint of the function argument should look like:</p> <pre><code>def myfunc(arr: ?): pass </code></pre> <p>I was first thinking it should be <code>arr: np.ndarray</code>. But then <code>mypy</code> complains:<br /> <code>Argument &quot;arr&quot; to &quot;myfunc&quot; has incompatible type &quot;unsignedinteger[_16Bit]&quot;; expected &quot;ndarray[Any, Any]&quot; [arg-type]</code></p> <p>Neither does <code>arr: np.ndarray[np.uint16]</code> work.<br /> <code>error: &quot;ndarray&quot; expects 2 type arguments, but 1 given [type-arg]</code></p>
<python><numpy><mypy>
2023-01-31 16:06:04
2
5,167
Andi
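`numpy.typing` provides the alias this annotation needs: `NDArray[np.uint16]` expands to `np.ndarray[Any, np.dtype[np.uint16]]`, i.e. the two type arguments `ndarray` expects. A short sketch (note the first mypy error also suggests it infers `np.uint16([...])` as a scalar, so constructing via `np.array(..., dtype=np.uint16)` may help on the caller side too):

```python
import numpy as np
import numpy.typing as npt

def myfunc(arr: npt.NDArray[np.uint16]) -> int:
    # Any operation on the array works as usual; the annotation is static-only.
    return int(arr.sum())

arr = np.array([5, 100, 2000], dtype=np.uint16)
print(myfunc(arr))  # 2105
```

`npt.NDArray` is available from numpy 1.21 on; for older numpy, `np.ndarray` without parameters is the usual fallback.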
75,299,817
9,458,342
Plotly px.Scatter or go.Scatter Graph Unique Color/Symbol for specific points (Simple Map)
<p>I am attempting to create a simple site map using Plotly. I can't use scatter_geo because I am using known X, Y coordinates relative the site - not lat/long. px.Scatter or go.Scatter seem to be a viable option for a mostly simple map. So, I have a dataframe from that essentially looks like:</p> <pre><code>Location Tag x_coord y_coord Unit1 1-ABC 12.2 2.2 Unit1 2-DEF -4.2 18 Unit2 3-HIJ 9 2 NaN NaN 11 12 NaN NaN 13 14 NaN NaN 15 16 </code></pre> <p>The NaNs are locations where a tag was NOT found, but the tool we used to do mapping continued to track an X, Y coordinate. Therefore, those points are quite useful as well.</p> <p>Starting out, to create a simple scatter plot, I used:</p> <pre><code>fig = px.scatter(tagLocData, x=tagLocData['x_locations'], y=tagLocData['y_locations']) </code></pre> <p>Which worked fine for basic plotting, but I could not see the different locations. So, I moved to:</p> <pre><code>fig = px.scatter(tagLocData, x=tagLocData['x_locations'], y=tagLocData['y_locations'], color='Location') </code></pre> <p>However, this did not include the coordinates where nothing was found at the location, just the location: <a href="https://i.sstatic.net/asrdT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/asrdT.png" alt="enter image description here" /></a></p> <p>I then tried adding a trace to a go.Figure via:</p> <pre><code>fig = go.Figure(data=go.Scatter(x=tagLocData['x_locations'], y=tagLocData['y_locations'],\ mode='markers')) fig.add_trace(go.Scatter(x=tagLocData['x_locations'], y=tagLocData['y_locations'], fillcolor='Location')) </code></pre> <p>Which yielded an expected error wanting a specific color for location, so I tried making a colorDict.</p> <pre><code>colors = ColorDict() test = dict(zip(tagLocData['Location'].unique(), colors)) fig = go.Figure(data=go.Scatter(x=tagLocData['x_locations'], y=tagLocData['y_locations'],\ mode='markers', marker=test)) </code></pre> <p>This overlayed the X, Y markers over the actual locations. 
I have tried a few other things as well with no luck. There are a lot of related questions out there but nothing that seems to hit on what I want to do.</p> <p>How can I create a nice X, Y scatter plot with this data?</p>
<python><pandas><plotly><nan>
2023-01-31 16:01:32
1
399
Sam Dean
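A hedged sketch of one fix, with made-up coordinates mirroring the question's table: `px.scatter(..., color='Location')` drops rows whose color value is NaN, so giving the unfound points an explicit category before plotting keeps them in the figure as their own legend entry:

```python
import pandas as pd

tagLocData = pd.DataFrame({
    "Location": ["Unit1", "Unit1", "Unit2", None, None, None],
    "x_locations": [12.2, -4.2, 9.0, 11.0, 13.0, 15.0],
    "y_locations": [2.2, 18.0, 2.0, 12.0, 14.0, 16.0],
})

# Replace NaN with a real label so the color mapping keeps those rows.
tagLocData["Location"] = tagLocData["Location"].fillna("Not found")

print(tagLocData["Location"].value_counts().to_dict())
# then, unchanged apart from the cleaned column:
#   px.scatter(tagLocData, x="x_locations", y="y_locations", color="Location")
```

If a distinct marker is also wanted, the same cleaned column can feed `symbol="Location"` alongside `color=`.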
75,299,808
1,681,409
Finding the Plot Coordinates from the Scatter Plot Data
<p>So I want to annotate a plot of points in an ellipse by embedding a graph on top of each of the points. I have done this manually with the first few points via the inset axes function.</p> <p><a href="https://i.sstatic.net/8Fojk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Fojk.png" alt="enter image description here" /></a></p> <p>For instance, I plotted the graph with the line via:</p> <pre><code> ins = ax.inset_axes([0.45,0.015,0.1,0.1]) xxx = numpy.arange(10) yyy = numpy.arange(10) ins.plot(xxx, yyy) </code></pre> <p>I placed all the graphs manually so far, but this is tedious, and if the points change, then my code is invalid. Let's say the green circle in the above plot was plotted using:</p> <pre><code> ax.scatter(green_x, green_y, marker=&quot;o&quot;, s=50, edgecolors=&quot;green&quot;, c=&quot;green&quot;) </code></pre> <p><strong>Question</strong></p> <p>How can I find the ax.inset_axes() coordinates for the green circle in the plot above so that I can overlay my graph automatically?</p>
<python><matplotlib><overlay>
2023-01-31 16:00:42
0
681
The Dude
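One way to automate the placement, sketched under the assumption that the axis limits are final when the conversion runs: go from data coordinates to display (pixel) space with `ax.transData`, then back to the axes-fraction coordinates that `inset_axes()` expects with the inverse of `ax.transAxes`:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)

def data_to_axes_fraction(ax, x, y):
    """Map a data-space point to (0..1, 0..1) axes-fraction coordinates."""
    display = ax.transData.transform((x, y))
    return tuple(ax.transAxes.inverted().transform(display))

# e.g. a small inset centred on a scatter point at data coords (2, 8):
gx, gy = 2.0, 8.0
fx, fy = data_to_axes_fraction(ax, gx, gy)
ins = ax.inset_axes([fx - 0.05, fy - 0.05, 0.1, 0.1])
print(round(fx, 3), round(fy, 3))  # 0.2 0.8 for these limits
```

Looping this over `zip(green_x, green_y)` places one inset per point; just compute the fractions *after* all plotting (or an explicit `set_xlim`/`set_ylim`), since autoscaling changes the mapping.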
75,299,770
14,790,056
I want to groupby, and then, create a new column which takes a value from a different column if a condition is met
<p>I have the following dataframe. I want to create a new column <code>col2</code> which takes a value from the column <code>value</code> after a groupby on ID, if the value from <code>col1</code> is BX, and another new column <code>col3</code> which takes the value from <code>value</code> if the value from <code>col1</code> is AX.</p> <pre><code>ID value col1 A 1 BX A 2 AX B 3 BX B 4 AX C 5 BX C 6 AX </code></pre> <p>desired df</p> <pre><code>ID value col1 col2 col3 A 1 BX 1 2 A 2 AX 1 2 B 3 AX 4 3 B 4 BX 4 3 C 5 BX 5 6 C 6 AX 5 6 </code></pre>
<python><pandas><dataframe>
2023-01-31 15:58:25
2
654
Olive
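A sketch of one way to do this without an explicit groupby loop, assuming at most one BX and one AX row per ID: build a per-ID lookup of the value for each flag, then `map` it back onto every row of the group:

```python
import pandas as pd

df = pd.DataFrame({"ID": ["A", "A", "B", "B", "C", "C"],
                   "value": [1, 2, 3, 4, 5, 6],
                   "col1": ["BX", "AX", "BX", "AX", "BX", "AX"]})

# ID -> value lookups, one per flag.
bx_map = df.loc[df["col1"] == "BX"].set_index("ID")["value"]
ax_map = df.loc[df["col1"] == "AX"].set_index("ID")["value"]

# Broadcast the group-level value onto every row of the group.
df["col2"] = df["ID"].map(bx_map)
df["col3"] = df["ID"].map(ax_map)
print(df)
```

If an ID can lack a BX or AX row, `map` simply leaves NaN there, which may be the desired behaviour anyway.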
75,299,734
4,498,050
Using gymnasium play on CartPole makes the cart go left all the time
<p>I'm trying to play CartPole on Jupyter Notebook using my keyboard. I'm using the following code from Farama documentation</p> <pre><code>import gymnasium as gym from gymnasium.utils.play import play env = gym.make(&quot;CartPole-v1&quot;, render_mode=&quot;rgb_array&quot;) play(env, keys_to_action={&quot;a&quot;: 0, &quot;d&quot;: 1}, fps=2) </code></pre> <p>However, the cart keeps going to the left despite pressing <code>d</code>. How may I solve this?</p>
<python><reinforcement-learning><openai-gym>
2023-01-31 15:55:08
2
610
Moltres
75,299,671
825,489
How can I count # of occurrences of more than one column (e.g. city & country)?
<p>Given the following data ...</p> <pre><code> city country 0 London UK 1 Paris FR 2 Paris US 3 London UK </code></pre> <p>... I'd like a count of each city-country pair</p> <pre><code> city country n 0 London UK 2 1 Paris FR 1 2 Paris US 1 </code></pre> <p>The following works but feels like a hack:</p> <pre><code>df = pd.DataFrame([('London', 'UK'), ('Paris', 'FR'), ('Paris', 'US'), ('London', 'UK')], columns=['city', 'country']) df.assign(**{'n': 1}).groupby(['city', 'country']).count().reset_index() </code></pre> <p>I'm assigning an additional column <code>n</code> of all 1s, grouping on city&amp;country, and then <code>count()</code>ing occurrences of this new 'all 1s' column. It works, but adding a column just to count it feels wrong.</p> <p>Is there a cleaner solution?</p>
<python><pandas><dataframe>
2023-01-31 15:50:58
1
1,269
Bean Taxi
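Two equivalent, cleaner spellings that need no dummy column of ones — `groupby(...).size()` counts rows per group directly, and on pandas β‰₯ 1.1 `DataFrame.value_counts` accepts a column subset:

```python
import pandas as pd

df = pd.DataFrame([("London", "UK"), ("Paris", "FR"),
                   ("Paris", "US"), ("London", "UK")],
                  columns=["city", "country"])

# size() counts rows per (city, country) group; name="n" labels the count.
counts = df.groupby(["city", "country"]).size().reset_index(name="n")
print(counts)

# pandas >= 1.1 alternative (sorted by count descending by default):
counts2 = df.value_counts(["city", "country"]).reset_index(name="n")
```

`count()` in the original counts non-null values per column, which is why it needed the artificial `n` column; `size()` counts group membership and does not.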
75,299,652
6,694,814
Conditional-based geolocator in Python
<p>I am working on the Nominatim geolocator in Python. Unfortunately, some addresses are missing, so I tried a condition-based workaround that falls back to at least the postcode, which resolves successfully in every case. So far I have failed, though. With the following code:</p> <pre><code> import pandas as pd import folium import webbrowser from geopy.geocoders import Nominatim geolocator = Nominatim(timeout=10, user_agent=&quot;Krukarius&quot;) def find_location(row): place = row['Address'] place_data = newstr = place[-8:] location = geolocator.geocode(place) location_overall = geolocator.geocode(place_data) if location != None: return location.latitude, location.longitude else: #return 0,0 return location_overall points = pd.read_csv(&quot;Addresses4.csv&quot;) points[['Lat','Lng']] = points.apply(find_location, axis=&quot;columns&quot;, result_type=&quot;expand&quot;) print(points) points.to_csv('NewAddresses4.csv') </code></pre> <p>I get the following error:</p> <p><strong>ValueError: Location should consist of two numerical values, but '' of type &lt;class 'str'&gt; is not convertible to float.</strong></p>
<python><geocoding><nominatim>
2023-01-31 15:49:44
1
1,556
Geographos
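A likely culprit is that the fallback branch returns `location_overall` — a geopy `Location` object (or `None`) — while `result_type="expand"` expects two numbers from every row. A sketch of a version that normalises every branch to a `(lat, lon)` tuple, exercised here with a stub geocoder standing in for Nominatim:

```python
def find_location(row, geocode):
    place = row["Address"]
    postcode = place[-8:]                 # same last-8-characters heuristic
    for query in (place, postcode):       # full address first, then fallback
        location = geocode(query)
        if location is not None:
            return location.latitude, location.longitude
    return None, None                     # explicit "not found" marker

# Stub geocoder, just to exercise the control flow without network calls.
class Loc:
    def __init__(self, lat, lon):
        self.latitude, self.longitude = lat, lon

fake_db = {"SW1A 1AA": Loc(51.501, -0.142)}
geocode = fake_db.get  # full address misses, postcode hits

print(find_location({"Address": "Buckingham Palace, SW1A 1AA"}, geocode))
# (51.501, -0.142)
```

With the real geolocator, `points.apply(lambda r: find_location(r, geolocator.geocode), ...)` keeps the same shape; rows that resolve nowhere get NaN in both Lat and Lng instead of raising.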
75,299,587
6,766,408
How to add code change to multiple scripts in pycharm
<p>I have developed around 85 automation scripts using Python, Selenium and Robot Framework in PyCharm. I need to add one piece of code to all 85 scripts. Is there a way to do it without opening every script and adding it manually? Thanks!</p>
<python><selenium-webdriver><pycharm><automated-tests><robotframework>
2023-01-31 15:44:37
2
312
ADS KUL
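A hedged sketch of one option: append (or prepend) the snippet to every matching file with a few lines of Python. PyCharm's "Replace in Files" (Edit β†’ Find β†’ Replace in Files) over the project scope is the IDE-native alternative when the insertion point is an existing anchor line; a Robot Framework resource file imported by all suites would avoid the duplication altogether.

```python
import pathlib
import tempfile

snippet = "\n# common teardown hook added everywhere\n"

def append_to_scripts(folder, pattern, text):
    """Append `text` to every file under `folder` matching `pattern`."""
    changed = 0
    for path in pathlib.Path(folder).glob(pattern):
        path.write_text(path.read_text() + text)
        changed += 1
    return changed

# Demo against throwaway files standing in for the 85 scripts:
with tempfile.TemporaryDirectory() as d:
    for name in ("a.robot", "b.robot"):
        (pathlib.Path(d) / name).write_text("*** Test Cases ***\n")
    print(append_to_scripts(d, "*.robot", snippet))  # 2
```

Use `rglob` instead of `glob` for nested folders, and run it on a version-controlled copy first so the bulk change is reviewable.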
75,299,524
50,065
Missing type parameters for generic type "Callable"
<p>What is the correct way to add type hints to the following function?</p> <pre class="lang-py prettyprint-override"><code>from typing import Callable def format_callback(f: Callable) -&gt; Callable: &quot;&quot;&quot;Function to wrap a function to use as a click callback. Taken from https://stackoverflow.com/a/42110044/8056572 &quot;&quot;&quot; return lambda _, __, x: f(x) </code></pre> <p>Now <code>mypy</code> is complaining with <code>Missing type parameters for generic type &quot;Callable&quot;</code></p> <p>The code needs to be compatible with both Python 3.9 and 3.10. I can use <code>typing_extensions</code> if needed.</p> <p><strong>Edit:</strong></p> <p>The following passes <code>mypy</code> but has too many <code>Any</code>'s for my taste. Is there a better way?</p> <pre class="lang-py prettyprint-override"><code>from typing import Any from typing import Callable import click def format_callback(f: Callable[[Any], Any]) -&gt; Callable[[click.Context, dict[str, Any], Any], Any]: &quot;&quot;&quot;Function to wrap a function to use as a click callback. Taken from https://stackoverflow.com/a/42110044/8056572 &quot;&quot;&quot; return lambda _, __, x: f(x) </code></pre>
<python><mypy><python-typing><python-click>
2023-01-31 15:39:13
1
23,037
BioGeek
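Linking the wrapped function's argument and return types with `TypeVar`s removes most of the `Any`s while keeping mypy satisfied; the first two parameters (the click `Context` and the params dict, which the wrapper ignores) can stay loosely typed. A sketch, with `Any` standing in for `click.Context` so the snippet has no click dependency:

```python
from typing import Any, Callable, TypeVar

T = TypeVar("T")
R = TypeVar("R")

def format_callback(f: Callable[[T], R]) -> Callable[[Any, Any, T], R]:
    """Wrap a one-argument function for use as a click callback."""
    return lambda _, __, x: f(x)

# The TypeVars flow through: wrapping str.upper yields a callback whose
# third argument is a str and whose result is a str.
wrapped = format_callback(str.upper)
print(wrapped(None, None, "abc"))  # ABC
```

If the precise click types are wanted, replacing the first two `Any`s with `click.Context` and `dict[str, Any]` should type-check the same way on Python 3.9 and 3.10.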
75,299,298
6,628,988
writing json record from dataframe column to S3 in spark streaming
<p>I have a dataframe, shown in the format below, with records as JSON data (in string format) read from a Kafka topic.</p> <p><a href="https://i.sstatic.net/9mYWp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9mYWp.png" alt="enter image description here" /></a></p> <p>I need to write just the JSON records present in the dataframe to S3.</p> <p>Is there any way I can parse the records, convert the JSON to a dataframe, and write it to S3?</p> <p>Any other solution would be helpful too.</p> <p>I have tried to use foreach but could not convert to a dataframe to write to S3:</p> <pre><code>def foreach_function(self,row): print(&quot;*&quot;*100) print(row[0]) query = df.writeStream.foreach(self.foreach_function).start() query.awaitTermination() </code></pre>
<python><pyspark><apache-kafka><user-defined-functions><spark-structured-streaming>
2023-01-31 15:20:53
1
430
Saranraj K
75,299,245
11,001,493
How to split time series in clusters by different patterns?
<p>This is an example of a larger data with many dataframes similar to this one below (df_final):</p> <pre><code>df1 = pd.DataFrame({&quot;DEPTH (m)&quot;:np.arange(0, 2000, 2), &quot;SIGNAL&quot;:np.random.uniform(low=-6, high=10, size=(1000,))}) df2 = pd.DataFrame({&quot;DEPTH (m)&quot;:np.arange(2000, 3000, 2), &quot;SIGNAL&quot;:np.random.uniform(low=0, high=5, size=(500,))}) for i, row in df2.iterrows(): df2.loc[i, &quot;SIGNAL&quot;] = row[&quot;SIGNAL&quot;] * (i / 100) df_final = pd.concat([df1, df2]) </code></pre> <p>You can see that this signal has two patterns (one &quot;constant&quot; and other increasing):</p> <pre><code>plt.figure() plt.plot(df_final[&quot;SIGNAL&quot;], df_final[&quot;DEPTH (m)&quot;], linewidth=0.5) plt.ylim(df_final[&quot;DEPTH (m)&quot;].max(), df_final[&quot;DEPTH (m)&quot;].min()) plt.xlabel(&quot;SIGNAL&quot;) plt.ylabel(&quot;DEPTH&quot;) </code></pre> <p><a href="https://i.sstatic.net/WEpNE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WEpNE.png" alt="enter image description here" /></a></p> <p>Is there a way I can automatically create a flag/cluster to split this signal? In this example I would have one cluster before depth 2000 and other after it.</p> <p>Another problem is that, in my project, I will have other dataframes with more than two signal patterns and couldn't set it manually for each dataframe as there are many.</p>
<python><pandas><time-series><cluster-analysis><signal-processing>
2023-01-31 15:17:00
2
702
user026
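A rough, assumption-laden sketch of one unsupervised split: when the signal's *spread* changes at the boundary (as in the example, where one band is wide and constant and the other grows from near zero), a jump in a rolling standard deviation can flag the split point. Dedicated change-point libraries (e.g. ruptures) do this far more robustly and handle multiple patterns per series:

```python
import numpy as np

rng = np.random.default_rng(0)
sig = np.concatenate([
    rng.uniform(-6, 10, 1000),                        # wide "constant" band
    rng.uniform(0, 5, 500) * (np.arange(500) / 100),  # growing band
])

win = 50  # non-overlapping windows
stds = np.array([sig[i:i + win].std()
                 for i in range(0, len(sig) - win, win)])
jump = np.argmax(np.abs(np.diff(stds)))   # largest change in spread
split_index = (jump + 1) * win
print(split_index)  # near the true boundary at 1000
```

The window length and the "largest jump" criterion are tuning choices; for many dataframes with unknown numbers of regimes, a penalized change-point search (ruptures' `Pelt`) per `grouping_var` is the more general tool.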
75,299,204
2,163,392
Error in applying Butterworth lowpass filter in Scipy - advice for a good Wn parameter
<p>I am brand new to Digital Signal Processing and I would like to understand how to choose the parameter Wn (the critical frequency) when applying a Butterworth lowpass filter with scipy.signal. To apply the Butterworth lowpass filter, the following Python/scipy code can do it:</p> <pre><code> from scipy.signal import butter from scipy.signal import filtfilt def butter_lowpass_filter(data, cutoff, fs, order): nyq = 0.5 * fs normal_cutoff = cutoff / nyq # Get the filter coefficients b, a = butter(order, normal_cutoff, btype='low', analog=False) y = filtfilt(b, a, data) return y </code></pre> <p>I would like to understand what a sensible value for the <em>cutoff</em> parameter would be in my case. Suppose I have a 1 kHz (1000 Hz) sampling rate in an audio file, and in this audio file I would like to filter out frequencies higher than 600 Hz. If I simply put 600 as the cutoff frequency (the threshold I want), the normalized cutoff would be 600/500 = 1.2, and the following error is returned:</p> <pre><code>ValueError: Digital filter critical frequencies must be 0 &lt; Wn &lt; 1 </code></pre> <p>So, it seems that the cutoff should never be higher than half of the sampling rate. Values lower than 500 would work for the cutoff, but I am not sure which one would be best for my case. So, is there any rule of thumb to select a cutoff value given my requirement of filtering out values higher than 600 Hz? Or does such a value depend only on the data? Is my 600 Hz requirement feasible? Is the Butterworth filter the right solution for this?</p>
<python><scipy><signal-processing><fft><lowpass-filter>
2023-01-31 15:13:18
0
2,799
mad
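For the record above, the numbers themselves answer part of the question: with fs = 1000 Hz the Nyquist frequency is 500 Hz, so a 600 Hz digital cutoff is impossible — the sampled data cannot contain content above 500 Hz at all. A hedged sketch of a valid low-pass (the 400 Hz cutoff and 4th order are illustrative choices, not recommendations):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                  # sampling rate (Hz); Nyquist = fs / 2 = 500 Hz
cutoff = 400.0               # must satisfy 0 < cutoff / (fs / 2) < 1
order = 4

b, a = butter(order, cutoff / (0.5 * fs), btype="low")

# One second of a mixed test signal: 50 Hz (to keep) + 450 Hz (to attenuate)
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 450 * t)
y = filtfilt(b, a, x)        # zero-phase filtering, as in the question

# With a 1-second window, rfft bin k corresponds exactly to k Hz
X, Y = np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(y))
print(Y[450] < X[450])       # True: the 450 Hz component is attenuated
```

Comparing the spectra before and after, as above, is also a practical way to sanity-check any cutoff choice against real audio.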
75,299,191
12,274,651
Global minimum versus local minima solution with Python Gekko
<p>A simple optimization example has 2 local minima at <code>(0,0,8)</code> with objective <code>936.0</code> and <code>(7,0,0)</code> with objective <code>951.0</code>. What are techniques to use local optimizers in Python Gekko (<code>APOPT</code>,<code>BPOPT</code>,<code>IPOPT</code>) to find a global solution?</p> <pre class="lang-py prettyprint-override"><code>from gekko import GEKKO m = GEKKO(remote=False) x = m.Array(m.Var,3,lb=0) x1,x2,x3 = x m.Minimize(1000-x1**2-2*x2**2-x3**2-x1*x2-x1*x3) m.Equations([8*x1+14*x2+7*x3==56, x1**2+x2**2+x3**2&gt;=25]) m.solve(disp=False) res=[print(f'x{i+1}: {xi.value[0]}') for i,xi in enumerate(x)] print(f'Objective: {m.options.objfcnval:.2f}') </code></pre> <p>This produces a local minimum:</p> <pre><code>x1: 7.0 x2: 0.0 x3: 0.0 Objective: 951.00 </code></pre> <p>There are solvers for a global optimum such as <code>BARON</code>, <code>COCOS</code>, <code>GlobSol</code>, <code>ICOS</code>, <code>LGO</code>, <code>LINGO</code>, and <code>OQNLP</code>, but what are some quick strategies that can be used with a local optimizer to search for a global solution? Some industrial applications have highly nonlinear models that haven't been fully tested for global solutions in control and design. Can the strategy be parallelized in Python?</p>
<python><mathematical-optimization><nonlinear-optimization><gekko>
2023-01-31 15:12:39
2
744
TexasEngineer
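The standard quick strategy asked about above is multistart: run a local solver from many random initial points and keep the best feasible result. Gekko isn't needed to illustrate the loop, so the sketch below uses `scipy.optimize.minimize` (SLSQP) on the same problem purely because it is easy to test; the identical loop applies to Gekko by assigning random `x[i].value` initial guesses before each `m.solve()`, and it parallelizes naturally (e.g. `multiprocessing.Pool` over the seeds).

```python
import numpy as np
from scipy.optimize import minimize

def obj(x):
    x1, x2, x3 = x
    return 1000 - x1**2 - 2*x2**2 - x3**2 - x1*x2 - x1*x3

constraints = [
    {"type": "eq",   "fun": lambda x: 8*x[0] + 14*x[1] + 7*x[2] - 56},
    {"type": "ineq", "fun": lambda x: x[0]**2 + x[1]**2 + x[2]**2 - 25},
]
bounds = [(0, None)] * 3

rng = np.random.default_rng(1)
best = None
for _ in range(40):                       # 40 random starting points
    x0 = rng.uniform(0, 10, size=3)
    res = minimize(obj, x0, method="SLSQP",
                   bounds=bounds, constraints=constraints)
    if res.success and (best is None or res.fun < best.fun):
        best = res

print(best.x.round(3), round(best.fun, 2))  # roughly [0, 0, 8] and 936.0
```

Multistart gives no guarantee of globality, only increasing confidence with more starts; deterministic guarantees require the global solvers named in the question.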
75,299,182
13,219,123
Writing to delta table using spark sql
<p>In Python I am trying to create and write to the table <code>TBL</code> in the database <code>DB</code> in Databricks. But I get an exception: <em>A schema mismatch detected when writing to the Delta table</em>. My code is as follows, where <code>df</code> is a pandas dataframe.</p> <pre><code>from pyspark.sql import SparkSession DB = database_name TMP_TBL = temporary_table TBL = table_name sesh = SparkSession.builder.getOrCreate() df_spark = sesh.createDataFrame(df) df_spark.createOrReplaceTempView(TMP_TBL) create_db_query = f&quot;&quot;&quot; CREATE DATABASE IF NOT EXISTS {DB} COMMENT &quot;This is a database&quot; LOCATION &quot;/tmp/{DB}&quot; &quot;&quot;&quot; create_table_query = f&quot;&quot;&quot; CREATE TABLE IF NOT EXISTS {DB}.{TBL} USING DELTA TBLPROPERTIES (delta.autoOptimize.optimizeWrite = true, delta.autoOptimize.autoCompact = true) COMMENT &quot;This is a table&quot; LOCATION &quot;/tmp/{DB}/{TBL}&quot;; &quot;&quot;&quot; insert_query = f&quot;&quot;&quot; INSERT INTO TABLE {DB}.{TBL} select * from {TMP_TBL} &quot;&quot;&quot; sesh.sql(create_db_query) sesh.sql(create_table_query) sesh.sql(insert_query) </code></pre> <p>The code fails at the last line, the <code>insert_query</code> line. When I check, the database and table have been created, but the table is of course empty. So the problem seems to be that <code>TMP_TBL</code> and <code>TBL</code> have different schemas; how and where do I define the schema so they match?</p>
<python><pyspark><apache-spark-sql><azure-databricks>
2023-01-31 15:11:20
1
353
andKaae
75,299,153
732,629
Problems when running python using new environment in spyder/anaconda
<p>I installed new packages using anaconda3 navigator into a new environment called <code>newconda</code>, then I changed the conda environment in Spyder to <code>newconda</code>. When I run the code it displays errors such as &quot;module not found&quot;, even though the module exists in <code>newconda</code>. I restarted the Spyder kernel as suggested in some posts, but now I get a new problem:</p> <blockquote> <p>Error while finding module specification for 'spyder_kernels.console' (ModuleNotFoundError: No module named 'spyder_kernels')</p> </blockquote> <p>Any help solving this problem would be appreciated.</p>
<python><conda><spyder><anaconda3>
2023-01-31 15:09:13
0
333
jojo
75,299,135
8,262,535
Screen freeze when training deep learning model from terminal but not Pycharm
<p>I have an extremely weird issue where if I run PyTorch model training from PyCharm, it works fine, but when I run the same code in the same environment from the terminal, it freezes the screen. All windows become non-interactable. The freeze affects only me, not other users, and for them <code>top</code> shows that the model is no longer training. The issue is consistent and reproducible across machines, users, and GPU slots.</p> <p>All dependencies are installed to a conda environment dl_segm_auto. In PyCharm I have it selected as the interpreter. Parameters are passed through Run-&gt;Edit configuration.</p> <p><a href="https://i.sstatic.net/M5xxW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M5xxW.png" alt="enter image description here" /></a></p> <p>From the terminal, I run</p> <pre><code>conda activate dl_segm_auto python training.py [parameters] </code></pre> <p>After the first epoch the entire remote session freezes.</p> <p>Suggestions greatly appreciated!</p>
<python><pytorch><pycharm><conda><virtualenv>
2023-01-31 15:07:52
1
385
illan
75,299,066
1,132,175
Pyflink kafka topic consumption not working
<p>I have this code consuming from confluent's kafka platform (pageviews) but for some reason, execution just hangs up and I can't get it to work. I am new to flink, my experience is mainly with spark streaming, so this is my learning process with flink.</p> <p>Below you can find the python code and then the exception I found in the logs:</p> <pre class="lang-py prettyprint-override"><code>from pyflink.datastream.connectors.kafka import KafkaSource, KafkaOffsetsInitializer from pyflink.datastream import StreamExecutionEnvironment from pyflink.datastream.formats.json import JsonRowDeserializationSchema from pyflink.common.watermark_strategy import WatermarkStrategy from pyflink.common.typeinfo import Types from pyflink.table import StreamTableEnvironment from pyflink.datastream.connectors import FlinkKafkaConsumer from pyflink.datastream.checkpoint_config import CheckpointingMode import json ######### schema_pageviews = Types.ROW_NAMED( [ &quot;schema&quot;, &quot;payload&quot;, ], [ Types.ROW_NAMED( [&quot;type&quot;, &quot;fields&quot;, &quot;optional&quot;, &quot;name&quot;], [ Types.STRING(), Types.OBJECT_ARRAY( Types.ROW_NAMED([&quot;type&quot;, &quot;optional&quot;, &quot;field&quot;], [Types.STRING(), Types.BOOLEAN(), Types.STRING()]) ), Types.BOOLEAN(), Types.STRING(), ] ), Types.ROW_NAMED( [ &quot;viewtime&quot;, &quot;userid&quot;, &quot;pageid&quot;, ], [ Types.LONG(), Types.STRING(), Types.STRING(), ] ), ], ) stream_env = StreamExecutionEnvironment.get_execution_environment() stream_env.enable_checkpointing(1000) stream_env.get_checkpoint_config().set_checkpointing_mode(CheckpointingMode.EXACTLY_ONCE) stream_env.get_checkpoint_config().set_checkpoint_storage_dir(&quot;file:///tmp&quot;) value_deserializer = ( JsonRowDeserializationSchema.builder() .type_info(schema_pageviews) .build() ) #kafka_source = 
KafkaSource.builder().set_bootstrap_servers(&quot;localhost:9092&quot;).set_group_id(&quot;repl-flink&quot;).set_topics(&quot;page_views&quot;).set_value_only_deserializer(value_deserializer).set_starting_offsets(KafkaOffsetsInitializer.earliest()).build() kafka_consumer = FlinkKafkaConsumer( topics='page_views', deserialization_schema=value_deserializer, properties={'bootstrap.servers': 'localhost:9092', 'group.id': 'repl_flink'} ) #page_views_stream = stream_env.from_source(kafka_source, WatermarkStrategy.no_watermarks(), &quot;pageviews-kafka&quot;) page_views_stream = stream_env.add_source(kafka_consumer) parsed_stream = page_views_stream.map( lambda obj: obj[&quot;payload&quot;], Types.ROW_NAMED( [ &quot;viewtime&quot;, &quot;userid&quot;, &quot;pageid&quot;, ], [ Types.LONG(), Types.STRING(), Types.STRING(), ] ), ) parsed_stream.print() stream_env.execute(&quot;app-name&quot;) </code></pre> <p>Exception</p> <pre><code>Caused by: org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException: java.io.IOException: Cannot run program &quot;/opt/flink/opt/python/pyflink.zip/pyflink/bin/pyflink-udf-runner.sh&quot;: error=20, Not a directory at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4966) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.&lt;init&gt;(DefaultJobBundleFactory.java:451) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.&lt;init&gt;(DefaultJobBundleFactory.java:436) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.forStage(DefaultJobBundleFactory.java:303) ~[flink-python-1.16.0.jar:1.16.0] at 
org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.createStageBundleFactory(BeamPythonFunctionRunner.java:619) ~[flink-python-1.16.0.jar:1.16.0] ... 15 more Caused by: java.io.IOException: Cannot run program &quot;/opt/flink/opt/python/pyflink.zip/pyflink/bin/pyflink-udf-runner.sh&quot;: error=20, Not a directory at java.lang.ProcessBuilder.start(Unknown Source) ~[?:?] at java.lang.ProcessBuilder.start(Unknown Source) ~[?:?] at org.apache.beam.runners.fnexecution.environment.ProcessManager.startProcess(ProcessManager.java:147) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.environment.ProcessManager.startProcess(ProcessManager.java:122) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.environment.ProcessEnvironmentFactory.createEnvironment(ProcessEnvironmentFactory.java:104) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:252) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:231) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952) ~[flink-python-1.16.0.jar:1.16.0] at 
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.&lt;init&gt;(DefaultJobBundleFactory.java:451) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.&lt;init&gt;(DefaultJobBundleFactory.java:436) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.forStage(DefaultJobBundleFactory.java:303) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.createStageBundleFactory(BeamPythonFunctionRunner.java:619) ~[flink-python-1.16.0.jar:1.16.0] ... 
15 more Suppressed: java.lang.NullPointerException: Process for id does not exist: 4701-1 at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:895) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.environment.ProcessManager.stopProcess(ProcessManager.java:172) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.environment.ProcessEnvironmentFactory.createEnvironment(ProcessEnvironmentFactory.java:124) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:252) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:231) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958) ~[flink-python-1.16.0.jar:1.16.0] at 
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.&lt;init&gt;(DefaultJobBundleFactory.java:451) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.&lt;init&gt;(DefaultJobBundleFactory.java:436) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.forStage(DefaultJobBundleFactory.java:303) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.createStageBundleFactory(BeamPythonFunctionRunner.java:619) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.open(BeamPythonFunctionRunner.java:275) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.flink.streaming.api.operators.python.process.AbstractExternalPythonFunctionOperator.open(AbstractExternalPythonFunctionOperator.java:57) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.flink.streaming.api.operators.python.process.AbstractExternalDataStreamPythonFunctionOperator.open(AbstractExternalDataStreamPythonFunctionOperator.java:85) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.flink.streaming.api.operators.python.process.AbstractExternalOneInputPythonFunctionOperator.open(AbstractExternalOneInputPythonFunctionOperator.java:117) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.flink.streaming.api.operators.python.process.ExternalPythonProcessOperator.open(ExternalPythonProcessOperator.java:64) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:107) ~[flink-dist-1.16.0.jar:1.16.0] at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:726) 
~[flink-dist-1.16.0.jar:1.16.0] at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.call(StreamTaskActionExecutor.java:100) ~[flink-dist-1.16.0.jar:1.16.0] at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:702) ~[flink-dist-1.16.0.jar:1.16.0] at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:669) ~[flink-dist-1.16.0.jar:1.16.0] at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935) ~[flink-dist-1.16.0.jar:1.16.0] at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:904) ~[flink-dist-1.16.0.jar:1.16.0] at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728) ~[flink-dist-1.16.0.jar:1.16.0] at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550) ~[flink-dist-1.16.0.jar:1.16.0] at java.lang.Thread.run(Unknown Source) ~[?:?] Caused by: java.io.IOException: error=20, Not a directory at java.lang.ProcessImpl.forkAndExec(Native Method) ~[?:?] at java.lang.ProcessImpl.&lt;init&gt;(Unknown Source) ~[?:?] at java.lang.ProcessImpl.start(Unknown Source) ~[?:?] at java.lang.ProcessBuilder.start(Unknown Source) ~[?:?] at java.lang.ProcessBuilder.start(Unknown Source) ~[?:?] 
at org.apache.beam.runners.fnexecution.environment.ProcessManager.startProcess(ProcessManager.java:147) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.environment.ProcessManager.startProcess(ProcessManager.java:122) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.environment.ProcessEnvironmentFactory.createEnvironment(ProcessEnvironmentFactory.java:104) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:252) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:231) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964) ~[flink-python-1.16.0.jar:1.16.0] at 
org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.&lt;init&gt;(DefaultJobBundleFactory.java:451) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.&lt;init&gt;(DefaultJobBundleFactory.java:436) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.forStage(DefaultJobBundleFactory.java:303) ~[flink-python-1.16.0.jar:1.16.0] at org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.createStageBundleFactory(BeamPythonFunctionRunner.java:619) ~[flink-python-1.16.0.jar:1.16.0] ... 15 more </code></pre> <p>Message from kafka example:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;schema&quot;: { &quot;type&quot;: &quot;struct&quot;, &quot;fields&quot;: [ { &quot;type&quot;: &quot;int64&quot;, &quot;optional&quot;: false, &quot;field&quot;: &quot;viewtime&quot; }, { &quot;type&quot;: &quot;string&quot;, &quot;optional&quot;: false, &quot;field&quot;: &quot;userid&quot; }, { &quot;type&quot;: &quot;string&quot;, &quot;optional&quot;: false, &quot;field&quot;: &quot;pageid&quot; } ], &quot;optional&quot;: false, &quot;name&quot;: &quot;ksql.pageviews&quot; }, &quot;payload&quot;: { &quot;viewtime&quot;: 92871, &quot;userid&quot;: &quot;User_5&quot;, &quot;pageid&quot;: &quot;Page_31&quot; } } </code></pre> <p>Added dependencies to /lib:</p> <pre><code>wget https://repo1.maven.org/maven2/org/apache/flink/flink-connector-kafka/1.16.0/flink-connector-kafka-1.16.0.jar wget https://repo1.maven.org/maven2/org/apache/kafka/kafka-clients/3.3.1/kafka-clients-3.3.1.jar wget https://repo1.maven.org/maven2/org/apache/flink/flink-avro/1.16.0/flink-avro-1.16.0.jar wget https://repo1.maven.org/maven2/org/apache/avro/avro/1.11.1/avro-1.11.1.jar wget https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-core/2.14.1/jackson-core-2.14.1.jar wget 
https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-databind/2.14.1/jackson-databind-2.14.1.jar wget https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-annotations/2.14.1/jackson-annotations-2.14.1.jar wget https://repo1.maven.org/maven2/org/apache/flink/flink-avro-confluent-registry/1.16.0/flink-avro-confluent-registry-1.16.0.jar wget https://repo1.maven.org/maven2/org/apache/kafka/kafka_2.12/3.3.1/kafka_2.12-3.3.1.jar wget https://repo1.maven.org/maven2/org/apache/flink/flink-connector-base/1.16.0/flink-connector-base-1.16.0.jar wget https://repo1.maven.org/maven2/org/apache/flink/flink-core/1.16.0/flink-core-1.16.0.jar wget https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-jdbc/1.16.0/flink-connector-jdbc-1.16.0.jar </code></pre> <p>What does the exception mean? How can I debug this? So far I've been working on <code>pyflink-shell local</code></p> <p>Thanks in advance.</p>
<python><apache-kafka><apache-flink>
2023-01-31 15:02:21
0
597
Jorge Cespedes
75,298,972
4,075,169
ModuleNotFoundError: No module named '_curses' on Ubuntu 22.04
<p>I want to use <code>curses</code> for a personal Python project. However, when I try to import it, I get the following error:</p> <pre class="lang-bash prettyprint-override"><code>&gt;&gt;&gt; import curses Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/home/xxxxxx/.asdf/installs/python/3.9.13/lib/python3.9/curses/__init__.py&quot;, line 13, in &lt;module&gt; from _curses import * ModuleNotFoundError: No module named '_curses' </code></pre> <p>I know this is an identified bug on Windows, solved through the installation of <code>windows-curses</code>, however I can't solve the problem on my Ubuntu machine. Here are all the <code>curses</code>-related packages currently installed on the machine:</p> <pre><code>&gt; apt search --installed curses libncurses-dev 6.3-2 [Ubuntu/jammy main] ├── is installed └── developer's libraries for ncurses libncurses6 6.3-2 [Ubuntu/jammy main] ├── is installed └── shared libraries for terminal handling libncursesw5-dev 6.3-2 [Ubuntu/jammy main] ├── is installed └── transitional package for libncurses-dev libncursesw6 6.3-2 [Ubuntu/jammy main] ├── is installed └── shared libraries for terminal handling (wide character support) libtinfo6 6.3-2 [Ubuntu/jammy main] ├── is installed └── shared low-level terminfo library for terminal handling ncurses-base 6.3-2 [Ubuntu/jammy main] ├── is installed └── basic terminal type definitions ncurses-bin 6.3-2 [Ubuntu/jammy main] ├── is installed └── terminal-related programs and man pages pinentry-curses 1.1.1-1build2 [Ubuntu/jammy main] ├── is installed └── curses-based PIN or pass-phrase entry dialog for GnuPG </code></pre> <p>Any idea?</p>
<python><ubuntu><ncurses><curses><python-curses>
2023-01-31 14:54:38
1
847
Kahsius
75,298,911
8,848,630
AttributeError when using wrds library
<p>Doing the following:</p> <pre><code>import wrds db = wrds.Connection() </code></pre> <p>does throw this error:</p> <pre><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) File ~\anaconda3\envs\playground\lib\site-packages\sqlalchemy\engine\base.py:1410, in Connection.execute(self, statement, parameters, execution_options) 1409 try: -&gt; 1410 meth = statement._execute_on_connection 1411 except AttributeError as err: AttributeError: 'str' object has no attribute '_execute_on_connection' The above exception was the direct cause of the following exception: ObjectNotExecutableError Traceback (most recent call last) Cell In [2], line 2 1 #connect to wrds api ----&gt; 2 db = wrds.Connection() File ~\anaconda3\envs\playground\lib\site-packages\wrds\sql.py:101, in Connection.__init__(self, autoconnect, **kwargs) 99 if (autoconnect): 100 self.connect() --&gt; 101 self.load_library_list() File ~\anaconda3\envs\playground\lib\site-packages\wrds\sql.py:197, in Connection.load_library_list(self) 162 print(&quot;Loading library list...&quot;) 163 query = &quot;&quot;&quot; 164 WITH pgobjs AS ( 165 -- objects we care about - tables, views, foreign tables, partitioned tables (...) 
195 ORDER BY 1; 196 &quot;&quot;&quot; --&gt; 197 cursor = self.connection.execute(query) 198 self.schema_perm = [x[0] for x in cursor.fetchall()] 199 print(&quot;Done&quot;) File ~\anaconda3\envs\playground\lib\site-packages\sqlalchemy\engine\base.py:1412, in Connection.execute(self, statement, parameters, execution_options) 1410 meth = statement._execute_on_connection 1411 except AttributeError as err: -&gt; 1412 raise exc.ObjectNotExecutableError(statement) from err 1413 else: 1414 return meth( 1415 self, 1416 distilled_parameters, 1417 execution_options or NO_OPTIONS, 1418 ) ObjectNotExecutableError: Not an executable object: '\nWITH pgobjs AS (\n -- objects we care about - tables, views, foreign tables, partitioned tables\n SELECT oid, relnamespace, relkind\n FROM pg_class\n WHERE relkind = ANY (ARRAY[\'r\'::&quot;char&quot;, \'v\'::&quot;char&quot;, \'f\'::&quot;char&quot;, \'p\'::&quot;char&quot;])\n),\nschemas AS (\n -- schemas we have usage on that represent products\n SELECT nspname AS schemaname, pg_namespace.oid, array_agg(DISTINCT relkind) AS relkind_a\n FROM pg_namespace\n JOIN pgobjs ON pg_namespace.oid = relnamespace\n WHERE nspname !~ \'(^pg_)|(_old$)|(_new$)|(information_schema)\'\n AND has_schema_privilege(nspname, \'USAGE\') = TRUE\n GROUP BY nspname, pg_namespace.oid\n)\nSELECT schemaname\nFROM schemas\nWHERE relkind_a != ARRAY[\'v\'::&quot;char&quot;] -- any schema except only views\nUNION\n-- schemas w/ views (aka &quot;friendly names&quot;) that reference accessable product tables\nSELECT nv.schemaname\nFROM schemas nv\nJOIN pgobjs v ON nv.oid = v.relnamespace AND v.relkind = \'v\'::&quot;char&quot;\nJOIN pg_depend dv ON v.oid = dv.refobjid AND dv.refclassid = \'pg_class\'::regclass::oid\n AND dv.classid = \'pg_rewrite\'::regclass::oid AND dv.deptype = \'i\'::&quot;char&quot;\nJOIN pg_depend dt ON dv.objid = dt.objid AND dv.refobjid &lt;&gt; dt.refobjid\n AND dt.classid = \'pg_rewrite\'::regclass::oid AND dt.refclassid = 
\'pg_class\'::regclass::oid\nJOIN pgobjs t ON dt.refobjid = t.oid\n AND (t.relkind = ANY (ARRAY[\'r\'::&quot;char&quot;, \'v\'::&quot;char&quot;, \'f\'::&quot;char&quot;, \'p\'::&quot;char&quot;]))\nJOIN schemas nt ON t.relnamespace = nt.oid\nGROUP BY nv.schemaname\nORDER BY 1;\n </code></pre> <p>You need a WRDS account in order to connect to the WRDS API. I have one and I inserted the correct information. Nonetheless I get this error. In fact, in the Jupyter Notebook I am using, it explicitly states <code>&quot;Loading library list...&quot;</code> before the error pops up. Why is that so? I have used the library for ages and this has never occurred to me.</p> <p><strong>EDIT</strong>: I am using version 3.1.2 (the latest version) of the WRDS package. This error does not occur when using version 3.1.1.</p>
<python><wrds-compusat>
2023-01-31 14:49:58
1
335
shenflow
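The error above almost certainly comes from the SQLAlchemy version rather than from wrds itself: starting with SQLAlchemy 2.0, <code>Connection.execute()</code> no longer accepts a raw SQL string, and <code>load_library_list()</code> in wrds 3.1.2 passes one — that is the <code>ObjectNotExecutableError: Not an executable object</code> at the bottom of the traceback, and presumably why 3.1.1 (with older dependency pins) still works. Pinning <code>sqlalchemy&lt;2</code> is the quick fix; code you control can instead wrap queries in <code>text()</code>, which works on both major versions. A sketch against an in-memory SQLite database (no WRDS account needed):

```python
from sqlalchemy import create_engine, text

# In-memory SQLite stands in for the WRDS Postgres connection
engine = create_engine("sqlite://")

with engine.connect() as conn:
    # SQLAlchemy 2.x refuses a plain string here with
    # "ObjectNotExecutableError: Not an executable object: ...".
    # Wrapping the SQL in text() works on both 1.4 and 2.x:
    result = conn.execute(text("SELECT 40 + 2")).scalar()

print(result)  # 42
```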
75,298,904
9,262,339
How to determine the status after a celery task has been completed inside code?
<p>Sample logic</p> <p><strong>logic.py</strong></p> <pre><code>@shared_task def run_create_or_update_google_creative(): return create_or_update_google_creative() def create_or_update_google_creative(): # do some logic def run_db_sinc(): result = run_create_or_update_google_creative.delay() job = CeleryJobResult(job_id=result.task_id, status=result.status) job.save() return 201, job.id </code></pre> <p>This is the structure of the celery task call logic. First I call <code>run_db_sinc</code>; a new celery task is generated and I immediately get the <code>task_id</code> value, which I save in the database and send as a response to the frontend. As long as the status is PENDING, the frontend keeps polling the endpoint, which looks up the status for task_id in the database.</p> <p>My question is how do I know that the task has completed and the status has changed to SUCCESS? At what point, and how, should I do it? I know that it is possible to use a similar function:</p> <pre><code>from celery.result import AsyncResult def get_task_status(task_id): task = AsyncResult(task_id) if task.status == 'SUCCESS': # or task ended already job = CeleryJobResult.objects.get(job_id=task_id) job.status = task.status job.save() return task.status </code></pre> <p>But I can't understand at what point in time and where in my code to call it.</p>
<python><celery>
2023-01-31 14:48:55
1
3,322
Jekson
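For the question above: the status flips to SUCCESS inside the worker process, so the web process only ever *observes* it — typically either by polling `AsyncResult(task_id)` from the same endpoint the frontend already hits, or by updating `CeleryJobResult` from within the task itself (e.g. via Celery's `task_success`/`task_failure` signals). The polling shape, sketched with a tiny stub in place of `AsyncResult` so the example runs without a broker:

```python
# Stub standing in for celery.result.AsyncResult -- real code would
# construct AsyncResult(task_id) instead.
class FakeAsyncResult:
    def __init__(self, states):
        self._states = iter(states)

    @property
    def status(self):
        return next(self._states)

def get_task_status(task, save):
    """Return current status; persist it once the task reaches a terminal state."""
    status = task.status                              # read once per poll
    if status in ("SUCCESS", "FAILURE", "REVOKED"):   # terminal states
        save(status)                                  # e.g. update CeleryJobResult
    return status

saved = []
task = FakeAsyncResult(["PENDING", "PENDING", "SUCCESS"])
history = [get_task_status(task, saved.append) for _ in range(3)]
print(history)  # ['PENDING', 'PENDING', 'SUCCESS']
print(saved)    # ['SUCCESS'] -- written exactly once, on the terminal poll
```

The key design point: the database row is updated inside the same request that polls the status, so no extra background watcher is needed.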
75,298,738
4,125,774
Problem in using conftest.py from a packaged pytest framework
<p>I am working on a pytest-framework that will be packed as a package. The setup file i am using for this is as this:</p> <pre><code>setup( name='MyTestFrameWork', version=&quot;2&quot;, author='my name', author_email='name@gmail.com', description='My test framework', long_description=open('README.md', 'rb').read().decode('utf-8'), url='http://my.test.framework.dk', license=&quot;Free loot&quot;, packages=find_namespace_packages(), python_requires=&quot;&gt;=3.10&quot;, include_package_data=True, install_requires=['pytest'], entry_points={&quot;pytest11&quot;: [&quot;MyTestFrameWork = MyTestFrameWork&quot;]}, ) </code></pre> <p>In this package (in the root of this folder MyTestFrameWork ) I have a conftest.py with some fixtures.</p> <p><strong>MY problem/Question:</strong> When I import my framework from another python project eg: by importing the testframework. I cant get the framework to use the fixtures in the conftest ...... however,....</p> <p>if i move the content from the conftest.py into <code>__init__.py</code> in my framework ie: in MyTestFrameWork folder the fixtures are working as expected.</p> <p>why it is like this ....why cant i have my fixtures in the conftest.py instead of having them in the <code>__init__.py</code> am i missing something ?</p> <p>for better view of my file-structure on the framework:</p> <p><a href="https://i.sstatic.net/OakvK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OakvK.png" alt="enter image description here" /></a></p>
<python><python-3.x><pytest>
2023-01-31 14:35:26
1
307
KapaA
75,298,725
12,065,403
How to populate an AWS Timestream DB?
<p>I am trying to use <a href="https://docs.aws.amazon.com/timestream/latest/developerguide/what-is-timestream.html" rel="nofollow noreferrer">AWS Timestream</a> to store data with timestamps (in Python using <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/index.html" rel="nofollow noreferrer">boto3</a>).</p> <p>The data I need to store corresponds to prices over time of different tokens. Each record has 3 fields: <code>token_address</code>, <code>timestamp</code>, <code>price</code>. I have around 100M records (with timestamps from 2019 to now).</p> <p>I have all the data in a CSV and I would like to populate the DB with it. But I can't find a way to do this in the documentation, as I am limited to 100 writes per query according to the <a href="https://docs.aws.amazon.com/timestream/latest/developerguide/ts-limits.html" rel="nofollow noreferrer">quotas</a>. The only optimization proposed in the documentation is <a href="https://docs.aws.amazon.com/timestream/latest/developerguide/code-samples.write.html" rel="nofollow noreferrer">Writing batches of records with common attributes</a>, but in my case they don't share the same values (they all have the same structure but not the same values, so I cannot define a <code>common_attributes</code> as they do in the example).</p> <p><strong>So is there a way to populate a Timestream DB without writing records in batches of 100?</strong></p>
<python><amazon-web-services><amazon-timestream>
2023-01-31 14:34:21
1
1,288
Vince M
75,298,536
12,125,777
Uploading files using design pattern with python
<p>I upload CSV, Excel, JSON or GeoJSON files into a PostgreSQL database using Python/Django. I noticed that the scripts are redundant and sometimes difficult to maintain when we need to update keys or columns. Is there a way to use a design pattern? I have never used one before. Any suggestions or links would help!</p>
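One common fit here is the strategy (registry) pattern: one loader per file type, registered by extension, so updating keys or columns for a format touches a single function and adding a format is one new entry. The sketch below uses stdlib loaders and made-up data; in a Django project each loader could return model instances or rows ready for <code>bulk_create</code> instead of dicts.

```python
import csv
import io
import json

LOADERS = {}

def register(ext):
    # Decorator that adds a loader to the registry keyed by extension.
    def deco(fn):
        LOADERS[ext] = fn
        return fn
    return deco

@register(".csv")
def load_csv(stream):
    text = io.TextIOWrapper(stream, encoding="utf-8")
    return list(csv.DictReader(text))

@register(".json")
def load_json(stream):
    return json.load(stream)

def load(filename, stream):
    # Dispatch on the extension; unknown types fail loudly.
    ext = "." + filename.rsplit(".", 1)[-1].lower()
    loader = LOADERS.get(ext)
    if loader is None:
        raise ValueError(f"unsupported file type: {ext}")
    return loader(stream)

csv_rows = load("data.csv", io.BytesIO(b"a,b\n1,2\n"))
json_rows = load("data.json", io.BytesIO(b'[{"a": 1}]'))
```

Excel and GeoJSON would slot in the same way (e.g. an `.xlsx` loader built on openpyxl and a `.geojson` loader that is just `load_json` registered under a second key).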
<python><django><design-patterns>
2023-01-31 14:19:10
0
542
aba2s
75,298,403
1,314,503
Highlight inserted,deleted elements/text - Python Docx
<p>I want to <strong>highlight</strong> the text or elements that were inserted or deleted after combining the two versions of a Docx file.</p> <p><a href="https://stackoverflow.com/questions/51361538/python-docx-get-inserted-deteled-revised-paragraphs-elements">Here</a> the answer just returns the values. I tried the following code, but it highlights the full paragraph:</p> <pre><code>def get_accepted_text(p): xml = p._p.xml if &quot;w:del&quot; in xml or &quot;w:ins&quot; in xml: for run in p.runs: run.font.highlight_color = WD_COLOR_INDEX.PINK </code></pre> <p>I need to highlight only the revised text.</p> <p>Note: <a href="https://stackoverflow.com/questions/51361538/python-docx-get-inserted-deteled-revised-paragraphs-elements">here</a>, the values are only <strong>return</strong>ed.</p>
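A possible refinement, sketched on raw WordprocessingML with the stdlib so it runs standalone: the substring test marks the whole paragraph, whereas selecting only the <code>w:r</code> runs nested inside <code>w:ins</code> / <code>w:del</code> isolates the revised text. With python-docx the same match could presumably be done on <code>paragraph._p</code> and the matched <code>w:r</code> elements wrapped as Run objects to set <code>highlight_color</code> -- that mapping is an untested assumption; the XML sample below is made up.

```python
import xml.etree.ElementTree as ET

W = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"

# A paragraph with one unchanged run, one tracked insertion and one
# tracked deletion (illustrative sample, not from a real document).
xml = f"""
<w:p xmlns:w="{W}">
  <w:r><w:t>unchanged </w:t></w:r>
  <w:ins w:id="1"><w:r><w:t>added text</w:t></w:r></w:ins>
  <w:del w:id="2"><w:r><w:delText>removed</w:delText></w:r></w:del>
</w:p>
"""
p = ET.fromstring(xml)

# Only runs *inside* w:ins or w:del -- the unchanged run is skipped.
revised = [r for tag in ("ins", "del")
           for r in p.findall(f"{{{W}}}{tag}/{{{W}}}r")]
revised_text = ["".join(t.text or "" for t in r.iter()) for r in revised]
```

In the python-docx version, each matched element would then get its highlight set individually instead of looping over `p.runs`.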
<python><python-3.x><ms-word><docx><python-docx>
2023-01-31 14:09:26
1
5,746
KarSho
75,298,356
13,174,189
What does [1,2] mean in .mean([1,2]) for a tensor?
<p>I have a tensor with shape <code>torch.Size([3, 224, 225])</code>. When I do <code>tensor.mean([1,2])</code> I get <code>tensor([0.6893, 0.5840, 0.4741])</code>. What does [1,2] mean here?</p>
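The list names the dimensions that get averaged away: for a (3, 224, 225) image tensor that is height (dim 1) and width (dim 2), leaving one mean per channel, hence a result of shape (3,). The same reduction in NumPy, on a small tensor so the numbers are checkable:

```python
import numpy as np

# Shape (3, 4, 5): 3 "channels", each a 4x5 grid.
t = np.arange(3 * 4 * 5, dtype=float).reshape(3, 4, 5)

# Equivalent of torch's tensor.mean([1, 2]): average over dims 1 and 2.
per_channel = t.mean(axis=(1, 2))  # shape (3,)

# Same thing spelled out: flatten each channel and average it.
manual = np.array([t[c].mean() for c in range(3)])
```

Channel 0 holds the values 0..19, so its mean is 9.5, and so on per channel -- one number survives per entry of the dimension that was *not* listed.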
<python><python-3.x><pytorch>
2023-01-31 14:05:28
2
1,199
french_fries
75,298,308
3,130,747
Plot line segments between two dates in matplotlib
<p>Given the following data:</p> <pre class="lang-py prettyprint-override"><code>dt = pd.DataFrame.from_dict( { &quot;thing&quot;: {0: &quot;A&quot;, 1: &quot;B&quot;, 2: &quot;C&quot;}, &quot;min&quot;: { 0: &quot;2021-11-01 00:00:00+00:00&quot;, 1: &quot;2021-11-01 00:00:00+00:00&quot;, 2: &quot;2021-11-01 00:00:00+00:00&quot;, }, &quot;max&quot;: { 0: &quot;2021-11-02 00:00:00+00:00&quot;, 1: &quot;2021-11-05 00:00:00+00:00&quot;, 2: &quot;2021-11-07 00:00:00+00:00&quot;, }, } ).assign( min=lambda x: pd.to_datetime(x[&quot;min&quot;]), max=lambda x: pd.to_datetime(x[&quot;max&quot;]), ) </code></pre> <p>Which looks like:</p> <pre><code>| | thing | min | max | |---:|:--------|:--------------------------|:--------------------------| | 0 | A | 2021-11-01 00:00:00+00:00 | 2021-11-02 00:00:00+00:00 | | 1 | B | 2021-11-01 00:00:00+00:00 | 2021-11-05 00:00:00+00:00 | | 2 | C | 2021-11-01 00:00:00+00:00 | 2021-11-07 00:00:00+00:00 | </code></pre> <p>I would like to create a plot which has <code>thing</code> on the y-axis, and each a line representing the min / max on the x-axis.</p> <p>Eg:</p> <p><a href="https://i.sstatic.net/aeABP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aeABP.png" alt="enter image description here" /></a></p> <p>So the x-axis is the date, and the y-axis represents each 'thing'.</p>
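A sketch using `Axes.hlines`: one horizontal segment per "thing", spanning its min date to its max date on a date x-axis. Plain datetimes are used below so the snippet is self-contained; with the dataframe above you would pass `dt["thing"]`, `dt["min"]` and `dt["max"]` instead. The Agg backend is selected so it runs headless.

```python
import matplotlib
matplotlib.use("Agg")  # no display needed
import matplotlib.pyplot as plt
from datetime import datetime

things = ["A", "B", "C"]
start = [datetime(2021, 11, 1)] * 3
end = [datetime(2021, 11, 2), datetime(2021, 11, 5), datetime(2021, 11, 7)]

fig, ax = plt.subplots()
# One segment per thing: y is categorical, x spans [min, max].
segments = ax.hlines(y=things, xmin=start, xmax=end, linewidth=3)
ax.set_xlabel("date")
fig.autofmt_xdate()  # tilt date tick labels so they do not overlap

n_segments = len(segments.get_segments())
```

`hlines` returns a single `LineCollection`, so styling (color per thing, line width) can be set in one call rather than looping over rows.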
<python><matplotlib><datetime><plot>
2023-01-31 14:02:01
1
4,944
baxx
75,298,299
5,722,716
Python requests stream reads more data than chunk size
<p>I am using python requests library to stream the data from a streaming API.</p> <pre class="lang-py prettyprint-override"><code>response = requests.get('http://server/stream-forever', stream=True) for chunk in response.iter_content(chunk_size=1024): print len(chunk) # prints 1905, 1850, 1909 </code></pre> <p>I have specified the chunk size as 1024. Printing the length of the chunk read gives the chunk size greater than 1024 like 1905, 1850, 1909 and so on.</p> <p>Am I missing something here?</p>
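What is likely happening: `chunk_size` limits the bytes read from the socket, but `iter_content` yields *decoded* data. With `Content-Encoding: gzip`, 1024 compressed bytes can inflate to much more, so larger chunks appear (this is the urllib3 1.x behaviour; urllib3 2.x caps decoded chunks at `chunk_size` instead, so what you see depends on the installed version). The sketch below reproduces the mechanism locally with a fake gzip response -- no server involved, and the payload is made up.

```python
import gzip
import io

import requests
import urllib3

payload = b"x" * 5000  # highly compressible, so gzip shrinks it a lot

# Build a fake urllib3 response whose body is the gzipped payload.
raw = urllib3.HTTPResponse(
    body=io.BytesIO(gzip.compress(payload)),
    headers={"Content-Encoding": "gzip"},
    status=200,
    preload_content=False,
)
resp = requests.Response()
resp.raw = raw
resp.status_code = 200

# iter_content reads chunk_size bytes off the "wire" per step, then
# decompresses -- so the yielded chunk sizes need not equal 1024.
chunks = list(resp.iter_content(chunk_size=1024))
total = b"".join(chunks)  # the full decompressed payload either way
```

If you need the raw, size-exact bytes instead, reading `response.raw` directly (with `decode_content=False`) sidesteps the decompression step.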
<python><python-requests>
2023-01-31 14:01:29
1
683
Prajwal
75,298,179
14,843,068
Convert pandas column with featurecollection to GeoJSON
<p>I downloaded a CSV that contains a column which has a GeoJSON format, and imported it as a pandas dataframe. How can I convert this to a GeoJSON (.geojson)? I have about 10,000 rows, each with information as shown below:</p> <p>This is an example of a cell in the column: {&quot;type&quot;:&quot;FeatureCollection&quot;,&quot;features&quot;:[{&quot;type&quot;:&quot;Feature&quot;,&quot;geometry&quot;:{&quot;type&quot;:&quot;Polygon&quot;,&quot;coordinates&quot;:[[[-0.0903517,9.488375],[-0.0905786,9.488523],[-0.0909767,9.48913],[-0.09122,9.4895258],[-0.0909733,9.4901503],[-0.0908833,9.4906802],[-0.0906984,9.4905612],[-0.0907146,9.4898184],[-0.090649,9.4895175],[-0.0907516,9.489142],[-0.0906146,9.4889654],[-0.0903517,9.488375]]]},&quot;properties&quot;:{&quot;pointCount&quot;:&quot;11&quot;,&quot;length&quot;:&quot;502.9413&quot;,&quot;area&quot;:&quot;8043.091133117676&quot;}}]}</p> <pre><code>Overview of my pandas dataframe print now: site_registration_gps_area ... geometry 11 {&quot;type&quot;:&quot;FeatureCollection&quot;,&quot;features&quot;:[{&quot;type... ... POINT (-76.75880 2.38031) 14 {&quot;type&quot;:&quot;FeatureCollection&quot;,&quot;features&quot;:[{&quot;type... ... POINT (-76.73718 2.33163) 40 {&quot;type&quot;:&quot;FeatureCollection&quot;,&quot;features&quot;:[{&quot;type... ... POINT (-0.15727 9.69560) 42 {&quot;type&quot;:&quot;FeatureCollection&quot;,&quot;features&quot;:[{&quot;type... ... POINT (-0.11686 9.65522) 44 {&quot;type&quot;:&quot;FeatureCollection&quot;,&quot;features&quot;:[{&quot;type... ... POINT (-0.10379 9.65226) </code></pre>
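A sketch of one approach, using a shortened two-row stand-in for the real column: `json.loads` each cell (every cell is already a complete FeatureCollection string), pool all the features, and dump a single FeatureCollection, which is exactly what a `.geojson` file contains.

```python
import json

import pandas as pd

# Two abbreviated cells standing in for the ~10,000 real rows.
df = pd.DataFrame({
    "site_registration_gps_area": [
        '{"type":"FeatureCollection","features":[{"type":"Feature",'
        '"geometry":{"type":"Polygon","coordinates":[[[0,0],[0,1],[1,1],[0,0]]]},'
        '"properties":{"pointCount":"4"}}]}',
        '{"type":"FeatureCollection","features":[{"type":"Feature",'
        '"geometry":{"type":"Point","coordinates":[-76.7588,2.38031]},'
        '"properties":{}}]}',
    ]
})

# Parse each cell and pool every feature into one list.
features = []
for cell in df["site_registration_gps_area"]:
    features.extend(json.loads(cell)["features"])

collection = {"type": "FeatureCollection", "features": features}
geojson_text = json.dumps(collection)
# Saving is then just: open("sites.geojson", "w").write(geojson_text)
# and geopandas can read it back with gpd.read_file("sites.geojson").
```

If you also want the row's other columns on each feature, merge them into that feature's `properties` dict inside the loop before appending.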
<python><json><pandas><geojson><geopandas>
2023-01-31 13:52:03
1
622
CrossLord
75,298,089
3,768,053
Calculate averages over subgroups of data in extremely large (100GB+) CSV file
<p>I have a large semicolon-delimited text file that weighs in at a little over 100GB. It comprises ~18,000,000 rows of data and 772 columns.</p> <p>The columns are: 'sc16' (int), 'cpid' (int), 'type' (str), 'pubyr' (int) and then 767 columns labeled 'dim_0', 'dim_1', 'dim_2' ... 'dim_767', that are all ints.</p> <p>The file is already arranged/sorted by sc16 and pubyr so that each combination of sc16+pubyr are grouped together in ascending order.</p> <p>What I'm trying to do is get the average of each 'dim_' column for each unique combination of sc16 &amp; pubyr, then output the row to a new dataframe and save the final result to a new text file.</p> <p>The problem is that in my script below, the processing gradually gets slower and slower until it's just creeping along by row 5,000,000. I'm working on a machine with 96GB of RAM, and I'm not used to working with a file so large I can't simply load it into memory. This is my first attempt trying to work with something like itertools, so no doubt I'm being really inefficient. 
Any help you can provide would be much appreciated!</p> <pre><code>import itertools import pandas as pd # Step 1: create an empty dataframe to store the mean values mean_df = pd.DataFrame(columns=['sc16', 'pubyr'] + [f&quot;dim_{i}&quot; for i in range(768)]) # Step 2: open the file and iterate through the rows with open('C:\Python_scratch\scibert_embeddings_sorted.txt') as f: counter = 0 total_lines = sum(1 for line in f) f.seek(0) for key, group in itertools.groupby(f, key=lambda x: (x.split(';')[0], x.split(';')[3])): # group by the first (sc16) and fourth (pubyr) column sc16, pubyr = key rows = [row.strip().split(';') for row in group] columns = rows[0] rows = rows[1:] # Step 3: convert the group of rows to a dataframe group_df = pd.DataFrame(rows, columns=columns) # Step 4: calculate the mean for the group mean_row = {'sc16': sc16, 'pubyr': pubyr} for col in group_df.columns: if col.startswith('dim_'): mean_row[col] = group_df[col].astype(float).mean() # Step 5: append the mean row to the mean dataframe mean_df = pd.concat([mean_df, pd.DataFrame([mean_row])], ignore_index=True) counter += len(rows) print(f&quot;{counter} of {total_lines}&quot;) # Step 6: save the mean dataframe to a new file mean_df.to_csv('C:\Python_scratch\scibert_embeddings_mean.txt', sep=';', index=False) </code></pre>
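The likely culprit is the `pd.concat` inside the loop: appending to a DataFrame copies the whole accumulated frame every time, so the cost grows quadratically with the number of groups processed. Since the file is already sorted by (sc16, pubyr), running sums per group need only O(1) memory; collect one result dict per group and build the frame once at the end. The sketch below runs on a tiny in-memory stand-in for the real file (two dim columns instead of 768).

```python
import io
import itertools

import pandas as pd

# Stand-in for the 100GB file: same layout, already sorted by sc16+pubyr.
data = io.StringIO(
    "sc16;cpid;type;pubyr;dim_0;dim_1\n"
    "1;10;a;2020;2;4\n"
    "1;11;b;2020;4;6\n"
    "2;12;a;2021;10;20\n"
)
header = data.readline().strip().split(";")
dim_idx = [i for i, c in enumerate(header) if c.startswith("dim_")]

def key(line):
    parts = line.split(";")
    return parts[0], parts[3]  # (sc16, pubyr)

rows_out = []
for (sc16, pubyr), group in itertools.groupby(data, key=key):
    # Running sums: never materialise the group as a DataFrame.
    sums, n = [0.0] * len(dim_idx), 0
    for line in group:
        parts = line.strip().split(";")
        for j, i in enumerate(dim_idx):
            sums[j] += float(parts[i])
        n += 1
    row = {"sc16": sc16, "pubyr": pubyr}
    row.update({header[i]: s / n for i, s in zip(dim_idx, sums)})
    rows_out.append(row)

mean_df = pd.DataFrame(rows_out)  # built once, at the end
```

Dropping the `total_lines = sum(1 for line in f)` pre-pass also helps: it reads the whole 100GB once just to print progress, before any real work starts.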
<python><pandas><python-itertools>
2023-01-31 13:44:46
1
423
Obed
75,298,035
7,026,806
Narrower types for pytest fixtures that perform actions without returning objects?
<p>Consider a fixture like</p> <pre class="lang-py prettyprint-override"><code>@pytest.fixture def mock_database(monkeypatch: MonkeyPatch) -&gt; None: ... </code></pre> <p>And its use in a test:</p> <pre class="lang-py prettyprint-override"><code>def test_with_mock_database(mock_database: None) -&gt; None: ... </code></pre> <p>What is the type of <code>mock_database</code> in the test argument? It appears to actually be just <code>None</code> at runtime, but this makes it a little hard to narrow down if I want to distinguish between, e.g., a <code>SetupFixture</code> and a <code>TeardownFixture</code>, since there's no such thing as <code>class Fixture(None)</code>, and casting hacks like</p> <pre class="lang-py prettyprint-override"><code>class _FixtureType: pass FixtureType = cast(_FixtureType, None) @pytest.fixture def mock_database(monkeypatch: MonkeyPatch) -&gt; FixtureType: ... </code></pre> <p>kind of break, because I have to <code>return</code> <em>something</em> now.</p> <p>Are there other, more elegant solutions?</p>
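One hedged alternative to the cast hack: `typing.Annotated` keeps the runtime value `None` (so the fixture still returns nothing), while giving each role a distinct, introspectable alias that type checkers treat as plain `None`. The alias names below are illustrative.

```python
from typing import Annotated

import pytest

# Distinct aliases: mypy sees None, tooling can tell them apart.
SetupFixture = Annotated[None, "setup-fixture"]
TeardownFixture = Annotated[None, "teardown-fixture"]

@pytest.fixture
def mock_database() -> SetupFixture:
    ...  # the monkeypatching would happen here

def test_with_mock_database(mock_database: SetupFixture) -> None:
    ...

# The label survives at runtime for introspection:
setup_meta = SetupFixture.__metadata__
teardown_meta = TeardownFixture.__metadata__
```

Unlike the `cast` approach, nothing has to be returned, and the metadata tuple is available at runtime if you ever want a lint rule that checks setup fixtures are only used as setup.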
<python><types><pytest><mypy>
2023-01-31 13:40:35
1
2,020
komodovaran_