Dataset schema (column · dtype · observed minimum/maximum; for string columns the range is length in characters):
QuestionId · int64 · 74.8M – 79.8M
UserId · int64 · 56 – 29.4M
QuestionTitle · string · 15 – 150 chars
QuestionBody · string · 40 – 40.3k chars
Tags · string · 8 – 101 chars
CreationDate · date · 2022-12-10 09:42:47 – 2025-11-01 19:08:18
AnswerCount · int64 · 0 – 44
UserExpertiseLevel · int64 · 301 – 888k
UserDisplayName · string · 3 – 30 chars
77,493,478
9,974,205
How can I identify changes in stock in a pandas dataframe
<p>I am working with a pandas data frame that has three important columns: <code>AmountOfStock</code>, which indicates the number of available units; <code>ProductType</code>, which is the code of the specified product; and <code>DateTime</code>, which indicates the date and time at which the data was sent to the database. The database records the amount of stock of each product every 10 seconds, so some rows would be</p> <pre><code>1-2023-11-16 10:00:00, ProductA, 30 2-2023-11-16 10:00:00, ProductB, 15 3-2023-11-16 10:00:10, ProductA, 29 4-2023-11-16 10:00:10, ProductB, 15 5-2023-11-16 10:00:20, ProductA, 29 6-2023-11-16 10:00:20, ProductB, 14 </code></pre> <p>I want to keep only the rows in which the quantity of a product changes, plus the initial values; in the example above I would want to remove the 4th and 5th rows. How can I do this?</p>
<python><pandas><database><row><delete-row>
2023-11-16 09:06:33
2
503
slow_learner
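One common way to keep only the rows where a product's stock changes (plus each product's first reading) is to compare each value with the previous one within its <code>ProductType</code> group — a sketch, assuming the column names from the question:

```python
import pandas as pd

# Sample data mirroring the rows in the question
df = pd.DataFrame({
    "DateTime": ["2023-11-16 10:00:00", "2023-11-16 10:00:00",
                 "2023-11-16 10:00:10", "2023-11-16 10:00:10",
                 "2023-11-16 10:00:20", "2023-11-16 10:00:20"],
    "ProductType": ["ProductA", "ProductB", "ProductA",
                    "ProductB", "ProductA", "ProductB"],
    "AmountOfStock": [30, 15, 29, 15, 29, 14],
})

# True where the stock differs from the previous reading of the same product;
# each product's first reading compares against NaN, so it is kept as well.
changed = df.groupby("ProductType")["AmountOfStock"].transform(
    lambda s: s.ne(s.shift())
)
result = df[changed]
```

On the sample data this keeps rows 1, 2, 3 and 6 (1-based), dropping the unchanged 4th and 5th rows.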
77,493,349
17,438,511
How to launch `python -m MY_MODULE` in debug mode in pycharm
<p>I have a script that can only be run as <code>python -m path.to.my_script</code>, due to the use of relative imports. Running it as <code>python path/to/my_script.py</code> gives an &quot;Attempted relative import with no known parent package&quot; error.</p> <p>How can I launch such a script in PyCharm's debug mode?</p>
<python><pycharm>
2023-11-16 08:45:45
1
647
Boschie
77,493,160
13,836,083
Why can't a starred target be used alone in an assignment in Python?
<p>I was going through the Python docs on simple assignment and found the following:</p> <blockquote> <p>Assignment of an object to a target list, optionally enclosed in parentheses or square brackets, is recursively defined as follows.</p> <p>If the target list is a single target with no trailing comma, optionally in parentheses, the object is assigned to that target.</p> <p>Else:</p> <p>If the target list contains one target prefixed with an asterisk, called a “starred” target: The object must be an iterable with at least as many items as there are targets in the target list, minus one. The first items of the iterable are assigned, from left to right, to the targets before the starred target. The final items of the iterable are assigned to the targets after the starred target. A list of the remaining items in the iterable is then assigned to the starred target (the list can be empty).</p> <p>Else: The object must be an iterable with the same number of items as there are targets in the target list, and the items are assigned, from left to right, to the corresponding targets.</p> </blockquote> <p>My understanding is that if the target list contains a starred target, the object on the RHS must be an iterable. During assignment, Python first unpacks the object and assigns the items as per the rule above, and the rest of the values are then assigned, as a list, to the starred target.</p> <p>Considering the above, I kept only a starred target and expected the values on the RHS (a tuple) to be assigned to it. However, Python gives a syntax error for this line, and I am still trying to understand what I am missing, as it is nowhere mentioned that a starred target can't be used alone:</p> <pre><code>*a=1,2,3 </code></pre> <p>But the code below works. Why is a lone starred target allowed here?</p> <pre><code>[*a] = 1,2,3 </code></pre>
<python>
2023-11-16 08:13:47
2
540
novice
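The grammar only allows a starred target *inside* a target list; a bare `*a = ...` is rejected with "starred assignment target must be in a list or tuple". Adding a trailing comma, brackets, or parentheses-plus-comma turns the lone name into a one-element target list, which is why all of these spellings work:

```python
# A bare `*a = 1, 2, 3` is a SyntaxError. Any of the following makes
# `a` part of a target list, so the starred target is legal:

*a, = 1, 2, 3        # trailing comma => tuple-style target list
assert a == [1, 2, 3]

[*a] = 1, 2, 3       # bracketed target list
assert a == [1, 2, 3]

(*a,) = 1, 2, 3      # parenthesised target list (the comma is still required)
assert a == [1, 2, 3]
```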
77,493,052
435,093
Testing ansible "to_nice_json" filter within Python - "No filter named 'to_nice_json'"
<p>I want to test an Ansible Jinja2 template using a Python script, based on the answer provided here: <a href="https://stackoverflow.com/questions/35407822/how-can-i-test-jinja2-templates-in-ansible">How can I test jinja2 templates in ansible?</a></p> <p>This code <em>used</em> to work, however I don't remember in which environment. Now, when I run it, I get an error about the filter not being found:</p> <pre><code>#!/usr/bin/env bash # python3 -m pip install ansible python3 &lt;&lt;EOF import ansible import jinja2 print(ansible.__version__) print(jinja2.__version__) output = jinja2.Template(&quot;Hello {{ var | to_nice_json }}!&quot;).render(var=f&quot;{{ 'a': 1, 'b': 2 }}&quot;) print(output) EOF </code></pre> <p>This returns:</p> <pre><code>2.15.6 3.1.2 Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 7, in &lt;module&gt; File &quot;/Users/werner/.pyenv/versions/3.11.6/lib/python3.11/site-packages/jinja2/environment.py&quot;, line 1208, in __new__ return env.from_string(source, template_class=cls) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/werner/.pyenv/versions/3.11.6/lib/python3.11/site-packages/jinja2/environment.py&quot;, line 1105, in from_string return cls.from_code(self, self.compile(source), gs, None) ^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/werner/.pyenv/versions/3.11.6/lib/python3.11/site-packages/jinja2/environment.py&quot;, line 768, in compile self.handle_exception(source=source_hint) File &quot;/Users/werner/.pyenv/versions/3.11.6/lib/python3.11/site-packages/jinja2/environment.py&quot;, line 936, in handle_exception raise rewrite_traceback_stack(source=source) File &quot;&lt;unknown&gt;&quot;, line 1, in template jinja2.exceptions.TemplateAssertionError: No filter named 'to_nice_json'. </code></pre> <p>The <code>jinja2</code> pip dependency is pulled in via <code>ansible</code>.</p> <p>How would I make my Python code find the right filter, or get jinja2 to load the filter?</p>
<python><ansible><jinja2>
2023-11-16 07:52:55
1
39,571
slhck
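The plain `jinja2.Template(...)` constructor builds a default environment that knows nothing about Ansible's filter plugins, so the filter has to be registered on the environment first. With Ansible installed, one way is to pull the filters from `ansible.plugins.filter.core.FilterModule` (shown in the comment below). The sketch itself uses a stand-in filter so it runs without Ansible — the stand-in mimics what `to_nice_json` does, but is not Ansible's actual implementation:

```python
import json

import jinja2

def to_nice_json(value, indent=4):
    # stand-in approximating Ansible's filter: pretty-printed, sorted JSON
    return json.dumps(value, indent=indent, sort_keys=True)

env = jinja2.Environment()
env.filters["to_nice_json"] = to_nice_json
# With Ansible installed, you could register its real filters instead:
#   from ansible.plugins.filter.core import FilterModule
#   env.filters.update(FilterModule().filters())

output = env.from_string("Hello {{ var | to_nice_json }}!").render(var={"a": 1, "b": 2})
```

Note the sketch passes a real dict as `var`; the f-string in the question rendered the braces into a plain string, which would also have broken JSON serialisation.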
77,492,920
16,405,935
Count unique value for each group and subtotal
<p>I have a simple dataframe as below:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'BR_NM': ['HN', 'HN', 'HP'], 'CUS_ID': ['12345', '12345', '12345'], 'ACC_ID': ['12345_1', '12345_2', '12345_3'], 'REGION': ['North', 'North', 'North'], 'CUS_TYPE': ['Individual', 'Individual', 'Individual']}) df BR_NM CUS_ID ACC_ID REGION CUS_TYPE HN 12345 12345_1 North Individual HN 12345 12345_2 North Individual HP 12345 12345_3 North Individual </code></pre> <p>I want to count unique <code>CUS_ID</code> based on <code>BR_NM</code>, then sum it based on <code>REGION</code>. In my case, it's just one customer with three accounts, but I want to count it as two customers. Below is my desired Output:</p> <pre><code>REGION CUS_TYPE North 0 Individual 2 </code></pre> <p>If I use <code>pivot_table</code> and <code>aggfunc = pd.Series.nunique</code>, it just counts it as 1.</p> <pre><code>df2 = pd.pivot_table(df, values='CUS_ID', columns='REGION', index='CUS_TYPE', aggfunc=pd.Series.nunique).reset_index() </code></pre> <p>Thank you.</p>
<python><pandas><pivot>
2023-11-16 07:25:27
2
1,793
hoa tran
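One way to get the desired "count per branch, then subtotal per region" is to take `nunique` at the `BR_NM` level first, then sum those branch-level counts up to `REGION`/`CUS_TYPE` — a sketch using the question's data:

```python
import pandas as pd

df = pd.DataFrame({
    "BR_NM": ["HN", "HN", "HP"],
    "CUS_ID": ["12345", "12345", "12345"],
    "ACC_ID": ["12345_1", "12345_2", "12345_3"],
    "REGION": ["North", "North", "North"],
    "CUS_TYPE": ["Individual", "Individual", "Individual"],
})

# Unique customers per branch first (HN -> 1, HP -> 1),
# then sum the branch counts within each region/customer type.
out = (
    df.groupby(["REGION", "CUS_TYPE", "BR_NM"])["CUS_ID"]
      .nunique()
      .groupby(level=["REGION", "CUS_TYPE"])
      .sum()
)
```

This counts the same customer once per branch, which is why the single customer with accounts at both HN and HP comes out as 2.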
77,492,750
10,200,497
Change the background color of every other group in Excel using Pandas
<p>This is my dataframe:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame( { 'p': ['short', np.nan, 'short', np.nan, np.nan, 'long', 'long', np.nan, np.nan], 's': [13, 13, 14, 15, 100, 1, 1000, 12, 1111] } ) </code></pre> <p>I want to change the background color of every other group (even groups) to a different color in Excel.</p> <p>This is what I want in Excel:</p> <p><a href="https://i.sstatic.net/S7KY3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/S7KY3.png" alt="enter image description here" /></a></p> <p>The groups are defined based on <code>p</code>. That is a value in <code>p</code> and all of the <code>NaN</code> values below it, is one group. It is clear in the image above.</p> <p>This is what I have tried but did not work:</p> <pre><code>from matplotlib import colors def colr(x): y = x.assign(k=x['p'].ne(x['p'].shift()).cumsum()) d = dict(enumerate(colors.cnames)) y[:] = np.broadcast_to(y['k'].map(d).radd('background-color:').to_numpy()[:,None] ,y.shape) return y.drop(&quot;k&quot;,1) df = df.style.apply(colr,axis=None) df.to_excel('file.xlsx', index=False, engine='openpyxl') </code></pre>
<python><pandas>
2023-11-16 06:43:47
1
2,679
AmirX
77,492,156
13,994,829
How to calculate the "Engagement Rate" in GA4 connected to BigQuery?
<h2>Description</h2> <ul> <li>I have connected <a href="https://support.google.com/analytics/answer/9823238?hl=en#zippy=%2Cin-this-article" rel="nofollow noreferrer">GA4 to BigQuery</a>.</li> <li>The GA4 data in BigQuery includes: <code>ga_session_id</code>, <code>page_location</code>, <code>user_pseudo_id</code>, <code>session_engaged</code>, ...</li> <li>I want to use the Python client API to query the data from BigQuery.</li> <li>Then I use the data from BigQuery to calculate the <code>engagement rate</code> for each page, like the GA4 report.</li> </ul> <h2>Try</h2> <p>According to the <a href="https://measureschool.com/ga4-metrics/" rel="nofollow noreferrer">definition of <code>engagement rate</code></a>:</p> <p><code>engagement_rate = engaged_session_nums / session_nums * 100%</code></p> <p>I have created a dataframe which includes <code>ga_session_id</code>, <code>page_location</code>, <code>user_pseudo_id</code>, and <code>session_engaged</code>; <code>session_engaged</code> is a boolean type.</p> <p>I use the following code to calculate the <code>engagement rate</code>:</p> <p>(<strong>I'm not sure how to calculate <code>engaged_session_nums</code></strong>)</p> <pre class="lang-py prettyprint-override"><code># the session = ga_session_id + user_pseudo_id df['pseudo_with_session'] = df['ga_session_id'].astype(str) + df['user_pseudo_id'] # not sure engaged_session_nums how to calculate df['engaged_session_nums'] = df.groupby(['page_location'])['session_engaged'].transform(lambda x: (x == '1').sum()) # calculate the session nums for each page df['total_session_nums'] = df.groupby('page_location')['pseudo_with_session'].transform('nunique') df['engaged_rate'] = df['engaged_session_nums'] / df['total_session_nums'] * 100 </code></pre> <h2>Results</h2> <p>The results get over 100% and are very different from the GA4 report.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>page_location</th> <th>engagement_rate</th> </tr> </thead> <tbody> <tr> <td>/</td> 
<td>26.2%</td> </tr> <tr> <td>/en/</td> <td>117%</td> </tr> <tr> <td>...</td> <td>...</td> </tr> </tbody> </table> </div><h2>Expected (GA4 report)</h2> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>page_location</th> <th>engagement_rate</th> </tr> </thead> <tbody> <tr> <td>/</td> <td>75.22%</td> </tr> <tr> <td>/en/</td> <td>73.34%</td> </tr> <tr> <td>...</td> <td>...</td> </tr> </tbody> </table> </div>
<python><dataframe><google-bigquery><google-analytics><google-analytics-4>
2023-11-16 03:37:43
1
545
Xiang
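The rate exceeds 100% because the numerator counts engaged *events* (`(x == '1').sum()`) while the denominator counts distinct *sessions*; a session that fires several engaged events inflates the ratio. Counting at the session level on both sides fixes this — a sketch on synthetic events (the data below is made up for illustration, not real GA4 output):

```python
import pandas as pd

# synthetic events standing in for the BigQuery export
df = pd.DataFrame({
    "user_pseudo_id": ["u1", "u1", "u2", "u2"],
    "ga_session_id": [1, 1, 7, 7],
    "page_location": ["/", "/", "/", "/en/"],
    "session_engaged": ["1", "0", "0", "1"],
})

sessions = (
    df.assign(
        session=df["user_pseudo_id"] + "-" + df["ga_session_id"].astype(str),
        engaged=df["session_engaged"].eq("1"),
    )
    .groupby(["page_location", "session"])["engaged"]
    .max()  # a session counts as engaged if any of its events says so
    .reset_index()
)
engagement_rate = sessions.groupby("page_location")["engaged"].mean() * 100
```

Each (page, session) pair is reduced to a single engaged/not-engaged flag first, so the mean is a true "engaged sessions / total sessions" per page.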
77,492,144
1,942,868
conditional constraints for django model
<p>I have a database model like this:</p> <pre><code>ActionType = ( (1,&quot;PostBack&quot;), (2,&quot;Uri&quot;), ) class RichMenu(BaseModel): key = m.CharField(max_length=64,unique=True) action_type = m.IntegerField(choices=ActionType,default=1) url = m.CharField(max_length=255,blank=True,null=True) name = m.CharField(max_length=255,blank=True,null=True) </code></pre> <p>Now I would like to add constraints like these:</p> <ul> <li><p>When <code>action_type</code> is 1, <code>url</code> should be null and <code>name</code> should not be null</p> </li> <li><p>When <code>action_type</code> is 2, <code>name</code> should be null and <code>url</code> should not be null</p> </li> </ul> <p>Is it possible to create conditional constraints for this case?</p>
<python><django><model>
2023-11-16 03:34:45
2
12,599
whitebear
77,492,026
10,018,602
Query GPT4All local model with Langchain and many .txt files - KeyError: 'input_variables'
<p><code>python 3.8, Windows 10, neo4j==5.14.1, langchain==0.0.336</code></p> <p>I'm attempting to utilize a local Langchain model (GPT4All) to assist me in converting a corpus of loaded <code>.txt</code> files into a <code>neo4j</code> data structure through querying. I have provided a minimal reproducible example code below, along with the references to the article/repo that I'm attempting to emulate. I have also provided a &quot;context&quot; that should be included in the query, along with all the <code>Document</code> objects. I'm still learning how to use <code>Langchain</code> so I really don't know what I'm doing yet, but the current traceback I'm getting looks like this:</p> <pre><code>Traceback (most recent call last): File &quot;.\neo4jmain.py&quot;, line xx, in &lt;module&gt; prompt_template = PromptTemplate( File &quot;C:\Users\chalu\AppData\Local\Programs\Python\Python38\lib\site-packages\langchain\load\serializable.py&quot;, line 97, in __init__ super().__init__(**kwargs) File &quot;pydantic\main.py&quot;, line 339, in pydantic.main.BaseModel.__init__ File &quot;pydantic\main.py&quot;, line 1102, in pydantic.main.validate_model File &quot;C:\Users\chalu\AppData\Local\Programs\Python\Python38\lib\site-packages\langchain\schema\prompt_template.py&quot;, line 76, in validate_variable_names if &quot;stop&quot; in values[&quot;input_variables&quot;]: KeyError: 'input_variables' </code></pre> <p>As you'll see, I'm literally not defining <code>input_variables</code> anywhere, so I assume this is a default behaviour of Langchain, but again, not sure. I was also getting an error:</p> <pre><code>LLaMA ERROR: The prompt is 5161 tokens and the context window is 2048! ERROR: The prompt size exceeds the context window size and cannot be processed. </code></pre> <p>...which is clearly a result of the query string itself being too big. I want to be able to query my documents for the answer, while providing the model with the Documents to reference. 
How can I do this? The Langchain documentation is not great for noobies in this space, it's all over the place and lacks MANY, SIMPLE use cases for noobies so I'm asking it here.</p> <pre><code># https://medium.com/neo4j/enhanced-qa-integrating-unstructured-and-graph-knowledge-using-neo4j-and-langchain-6abf6fc24c27 # https://github.com/sauravjoshi23/ai/blob/main/retrieval%20augmented%20generation/integrated-qa-neo4j-langchain.ipynb # Script to convert a corpus of many text files into a neo4j graph # Imports import os from langchain.document_loaders import TextLoader from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain.llms.gpt4all import GPT4All from langchain.prompts import PromptTemplate from transformers import AutoTokenizer # Load tokenizer tokenizer = AutoTokenizer.from_pretrained(&quot;bert-base-uncased&quot;) def bert_len(text): &quot;&quot;&quot;Return the length of a text in BERT tokens.&quot;&quot;&quot; tokens = tokenizer.encode(text) return len(tokens) def get_files(path: str) -&gt; list: &quot;&quot;&quot;Return a list of all files in a directory, recursively.&quot;&quot;&quot; files = [] for file in os.listdir(path): file_path = os.path.join(path, file) if os.path.isdir(file_path): files.extend(get_files(file_path)) else: files.append(file_path) return files # Get the text files all_txt_files = get_files('data') raw_txt_files = [] for current_file in all_txt_files: raw_txt_files.extend(TextLoader(current_file, encoding='utf-8').load()) # Create a text splitter object that will help us split the text into chunks text_splitter = RecursiveCharacterTextSplitter( chunk_size = 1024, # 200, chunk_overlap = 128, # 20 length_function = bert_len, separators=['\n\n', '\n', ' ', ''], ) # Split the text into &quot;documents&quot; documents = text_splitter.create_documents([raw_txt_files[0].page_content]) # Utilizing these Document objects, we want to query the GPT4All model to help us create # a JSON object that contains the ontology 
of terms mentioned in the given context, # while mitigating &quot;max_tokens&quot; error. # Create a PromptTemplate object that will help us create the prompt for GPT4All(?) prompt_template = PromptTemplate( template = &quot;&quot;&quot; You are a network graph maker who extracts terms and their relations from a given context. You are provided with a context chunk (delimited by ```). Your task is to extract the ontology of terms mentioned in the given context. These terms should represent the key concepts as per the context. Thought 1: While traversing through each sentence, Think about the key terms mentioned in it. Terms may include object, entity, location, organization, person, condition, acronym, documents, service, concept, etc. Terms should be as atomistic as possible Thought 2: Think about how these terms can have one on one relation with other terms. Terms that are mentioned in the same sentence or the same paragraph are typically related to each other. Terms can be related to many other terms Thought 3: Find out the relation between each such related pair of terms. Format your output as a list of json. 
Each element of the list contains a pair of terms and the relation between them, like the following: [Dict(&quot;node_1&quot;: &quot;A concept from extracted ontology&quot;, &quot;node_2&quot;: &quot;A related concept from extracted ontology&quot;, &quot;edge&quot;: &quot;relationship between the two concepts, node_1 and node_2 in one or two sentences&quot;, ), Dict(&quot;node_1&quot;: &quot;A concept from extracted ontology&quot;, &quot;node_2&quot;: &quot;A related concept from extracted ontology&quot;, &quot;edge&quot;: &quot;relationship between the two concepts, node_1 and node_2 in one or two sentences&quot;, ), Dict(...)] Context Documents: {documents} &quot;&quot;&quot;, variables = { &quot;documents&quot;: documents, } ) # Create a GPT4All object that will help us query the GPT4All model llm = GPT4All( model=r&quot;C:\Users\chalu\AppData\Local\nomic.ai\GPT4All\gpt4all-falcon-q4_0.gguf&quot;, n_threads=3, max_tokens=5162, # &lt;-- attempt to mitigate &quot;max_tokens&quot; error verbose=True, ) # Get the response from GPT-4-All response = llm(prompt_template) print(response) </code></pre>
<python><neo4j><langchain>
2023-11-16 02:51:54
2
1,335
wildcat89
77,491,948
9,500,955
Pass a whole dataset containing multiple files to a HuggingFace function in Palantir
<p>I am using the pre-trained model from HuggingFace (<a href="https://huggingface.co/dslim/bert-base-NER/tree/main" rel="nofollow noreferrer">dslim/bert-base-NER</a>). Normally when working locally or in a Colab notebook, we can use the code below to load the model:</p> <pre class="lang-py prettyprint-override"><code>tokenizer = AutoTokenizer.from_pretrained(&quot;dslim/bert-base-NER&quot;) model = AutoModelForTokenClassification.from_pretrained(&quot;dslim/bert-base-NER&quot;) </code></pre> <p>This code will connect with the HuggingFace server to load the model directly. But on Palantir, we cannot do much unless configuring the security.</p> <p>The workaround approach here is to download all the files and upload them to a dataset. We can pass a folder that contains these files to the HuggingFace function (check this <a href="https://stackoverflow.com/a/64007213/9500955">answer</a>) like below:</p> <pre class="lang-py prettyprint-override"><code>tokenizer = AutoTokenizer.from_pretrained('./local_model_directory/') model = AutoModelForTokenClassification.from_pretrained('./local_model_directory/') </code></pre> <p>For a model that has only a <code>pickle</code> file, we can easily read that file via a dataset called <code>classifier</code>:</p> <pre class="lang-py prettyprint-override"><code>@transform( file_input=Input(&quot;/Users/model/classifier&quot;) ) def load_model(file_input): with file_input.filesystem().open(&quot;classifier.pkl&quot;, &quot;rb&quot;) as f: model = pickle.load(f) return model </code></pre> <p>My question is: How can we open a dataset and pass a whole directory to this function on the Palantir code repository?</p>
<python><palantir-foundry><foundry-code-repositories>
2023-11-16 02:25:18
1
1,974
huy
77,491,941
2,850,913
Llama 2 with Langchain tools
<p>I am trying to follow <a href="https://www.pinecone.io/learn/llama-2/" rel="nofollow noreferrer">this tutorial</a> on using Llama 2 with Langchain tools (you don't have to look at the tutorial all code is contained in this question).</p> <p>My code is very similar to that in the tutorial except I am using a local model rather than connecting to Hugging Face and I am not using bitsandbytes for quantisation since it requires cuda and I am on macOS.</p> <p>I am using the unquantised Meta Llama 2 13b chat model <a href="https://huggingface.co/meta-llama/Llama-2-13b-chat-hf" rel="nofollow noreferrer">meta-llama/Llama-2-13b-chat-hf</a>.</p> <p>The model appears to be outputting JSON correctly but for some reason I am getting &quot;Could not parse LLM output&quot;.</p> <p>Here is the code (added \ before triple backtick due to Stackoverflow code formatting). Note I added the following to the prompt &quot;When Assistant responds with JSON they make sure to enclose the JSON with three back ticks.&quot; This resolves an issue where the model was outputting JSON that was not in the correct format (surrounded with backticks and a 'json' tag).</p> <pre class="lang-py prettyprint-override"><code>import transformers model_id = './Models/Llama_2/llama_2_13b' # initialize the model model = transformers.AutoModelForCausalLM.from_pretrained(model_id) tokenizer = transformers.AutoTokenizer.from_pretrained(model_id) generate_text = transformers.pipeline( model=model, tokenizer=tokenizer, return_full_text=True, # langchain expects the full text task='text-generation', # we pass model parameters here too temperature=0.01, # 'randomness' of outputs, 0.0 is the min and 1.0 the max max_new_tokens=512, # mex number of tokens to generate in the output repetition_penalty=1.1 # without this output begins repeating ) from langchain.llms import HuggingFacePipeline llm = HuggingFacePipeline(pipeline=generate_text) from langchain.memory import ConversationBufferWindowMemory from langchain.agents 
import load_tools memory = ConversationBufferWindowMemory( memory_key=&quot;chat_history&quot;, k=5, return_messages=True, output_key=&quot;output&quot; ) tools = load_tools([&quot;llm-math&quot;], llm=llm) from langchain.agents import initialize_agent # initialize agent agent = initialize_agent( agent=&quot;chat-conversational-react-description&quot;, tools=tools, llm=llm, verbose=True, early_stopping_method=&quot;generate&quot;, memory=memory, handle_parsing_errors=True ) # special tokens used by llama 2 chat B_INST, E_INST = &quot;[INST]&quot;, &quot;[/INST]&quot; B_SYS, E_SYS = &quot;&lt;&lt;SYS&gt;&gt;\n&quot;, &quot;\n&lt;&lt;/SYS&gt;&gt;\n\n&quot; # create the system message sys_msg = &quot;&lt;s&gt;&quot; + B_SYS + &quot;&quot;&quot;Assistant is a expert JSON builder designed to assist with a wide range of tasks. Assistant is able to respond to the User and use tools using JSON strings that contain &quot;action&quot; and &quot;action_input&quot; parameters. All of Assistant's communication is performed using this JSON format. Assistant can also use tools by responding to the user with tool use instructions in the same &quot;action&quot; and &quot;action_input&quot; JSON format. Tools available to Assistant are: - &quot;Calculator&quot;: Useful for when you need to answer questions about math. - To use the calculator tool, Assistant should write like so: ```json {{&quot;action&quot;: &quot;Calculator&quot;, &quot;action_input&quot;: &quot;sqrt(4)&quot;}} ``` When Assistant responds with JSON they make sure to enclose the JSON with three back ticks. Here are some previous conversations between the Assistant and User: User: Hey how are you today? Assistant: ```json {{&quot;action&quot;: &quot;Final Answer&quot;, &quot;action_input&quot;: &quot;I'm good thanks, how are you?&quot;}} \``` User: I'm great, what is the square root of 4? 
Assistant: ```json {{&quot;action&quot;: &quot;Calculator&quot;, &quot;action_input&quot;: &quot;sqrt(4)&quot;}} \``` User: 2.0 Assistant: ```json {{&quot;action&quot;: &quot;Final Answer&quot;, &quot;action_input&quot;: &quot;It looks like the answer is 2!&quot;}} \``` User: Thanks could you tell me what 4 to the power of 2 is? Assistant: ```json {{&quot;action&quot;: &quot;Calculator&quot;, &quot;action_input&quot;: &quot;4**2&quot;}} \``` User: 16.0 Assistant: ```json {{&quot;action&quot;: &quot;Final Answer&quot;, &quot;action_input&quot;: &quot;It looks like the answer is 16!&quot;}} \``` Here is the latest conversation between Assistant and User.&quot;&quot;&quot; + E_SYS new_prompt = agent.agent.create_prompt( system_message=sys_msg, tools=tools ) agent.agent.llm_chain.prompt = new_prompt instruction = B_INST + &quot; Respond to the following in JSON with 'action' and 'action_input' values &quot; + E_INST human_msg = instruction + &quot;\nUser: {input}&quot; agent.agent.llm_chain.prompt.messages[2].prompt.template = human_msg </code></pre> <p>Then I prompt it with;</p> <pre class="lang-py prettyprint-override"><code>agent(&quot;hey how are you today?&quot;) </code></pre> <p>and I get the following;</p> <pre class="lang-none prettyprint-override"><code>&gt; Entering new AgentExecutor chain... Assistant: ```json {&quot;action&quot;: &quot;Final Answer&quot;, &quot;action_input&quot;: &quot;I'm good thanks, how are you?&quot;} \``` &gt; Finished chain. {'input': 'hey how are you today?', 'chat_history': [], 'output': &quot;I'm good thanks, how are you?&quot;} </code></pre> <p>Which is great, but when I prompt it with a question that requires use of a tool;</p> <pre class="lang-py prettyprint-override"><code>agent(&quot;what is 4 to the power of 2.1?&quot;) </code></pre> <p>I get;</p> <pre class="lang-none prettyprint-override"><code>&gt; Entering new AgentExecutor chain... 
Assistant: ```json {&quot;action&quot;: &quot;Calculator&quot;, &quot;action_input&quot;: &quot;4**2.1&quot;} \``` Observation: Answer: 18.37917367995256 Thought:Could not parse LLM output: Observation: Invalid or incomplete response Thought:Could not parse LLM output: Observation: Invalid or incomplete response Thought:Could not parse LLM output: Observation: Invalid or incomplete response </code></pre> <p>and it just gets stuck in a loop repeating &quot;Could not parse LLM output&quot; and &quot;Invalid or incomplete response&quot;</p> <p>Does anyone know how to fix the &quot;Could not parse LLM output&quot; errors?</p> <p>Supposedly this code worked for the authors of the tutorial.</p> <p>I am using Python 3.11.5 with Anaconda, tensorflow 2.15.0, transformers 4.35.2, langchain 0.0.336, on macOS Sonoma.</p> <p>I updated the code to use the output parser from <a href="https://github.com/pinecone-io/examples/blob/master/learn/generation/llm-field-guide/llama-2/llama-2-70b-chat-agent.ipynb" rel="nofollow noreferrer">here</a>.</p> <pre class="lang-py prettyprint-override"><code>from langchain.agents import AgentOutputParser from langchain.agents.conversational_chat.prompt import FORMAT_INSTRUCTIONS from langchain.output_parsers.json import parse_json_markdown from langchain.schema import AgentAction, AgentFinish class OutputParser(AgentOutputParser): def get_format_instructions(self) -&gt; str: return FORMAT_INSTRUCTIONS def parse(self, text: str) -&gt; AgentAction | AgentFinish: try: # this will work IF the text is a valid JSON with action and action_input response = parse_json_markdown(text) action, action_input = response[&quot;action&quot;], response[&quot;action_input&quot;] if action == &quot;Final Answer&quot;: # this means the agent is finished so we call AgentFinish return AgentFinish({&quot;output&quot;: action_input}, text) else: # otherwise the agent wants to use an action, so we call AgentAction return AgentAction(action, action_input, text) except 
Exception: # sometimes the agent will return a string that is not a valid JSON # often this happens when the agent is finished # so we just return the text as the output return AgentFinish({&quot;output&quot;: text}, text) @property def _type(self) -&gt; str: return &quot;conversational_chat&quot; # initialize output parser for agent parser = OutputParser() </code></pre> <p>by adding</p> <pre class="lang-py prettyprint-override"><code>agent_kwargs={&quot;output_parser&quot;: parser} </code></pre> <p>to initialise_agent, and the code no longer produces an error but the final output from the LLM is still empty.</p> <pre class="lang-none prettyprint-override"><code>&gt; Entering new AgentExecutor chain... Assistant: ```json {&quot;action&quot;: &quot;Calculator&quot;, &quot;action_input&quot;: &quot;4**2.1&quot;} \``` Observation: Answer: 18.37917367995256 Thought: &gt; Finished chain. {'input': 'what is 4 to the power of 2.1?', 'chat_history': [HumanMessage(content='hey how are you today?'), AIMessage(content=&quot;I'm good thanks, how are you?&quot;)], 'output': ''} </code></pre>
<python><langchain><llama>
2023-11-16 02:23:05
1
750
tail_recursion
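When the executor feeds the Observation back, chat models often wrap (or omit) the fences around the JSON, and the strict parser gives up. A small stdlib-only helper — a sketch independent of LangChain's own `parse_json_markdown` — shows the kind of lenient extraction the custom `OutputParser` in the question performs:

```python
import json
import re

def extract_action(text):
    """Pull the first JSON object out of an LLM reply, fenced or not."""
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    raw = fenced.group(1) if fenced else None
    if raw is None:
        # fall back: the model may have skipped the back-tick fences entirely
        bare = re.search(r"\{.*?\}", text, re.DOTALL)
        raw = bare.group(0) if bare else None
    if raw is None:
        return None
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return None

reply = 'Assistant: ```json\n{"action": "Calculator", "action_input": "4**2.1"}\n```'
action = extract_action(reply)
```

A parser like this only handles flat, single-object replies; nested JSON objects would need a real JSON scanner rather than a lazy regex.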
77,491,924
699,467
Is it possible to skip loop iterations for itertools.product() if I know the ID of the iteration?
<p>I have the following code:</p> <pre><code>import math import numpy as np import itertools s = [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50] def variate4(l): yield from itertools.product(*([l] * 4)) for i in variate4(s): print(repr(''.join(str(i)))) </code></pre> <p>The result of execution is all possible four-element combinations of numbers from the <code>s</code> array, for example (12, 32, 46, 50). As far as I can see, the results always come in the same order.</p> <p>Is it possible to generate a combination by its ID? I mean result number 4856, for example.</p> <p>I tried this way:</p> <pre><code>j = 0 for i in variate4(s): j = j + 1 if j == 4856: print(repr(''.join(str(i)))) break </code></pre> <p>But I still have to wait until all previous combinations are generated. I'm trying to achieve an instant jump to the desired iteration.</p>
<python><loops>
2023-11-16 02:15:03
1
2,406
Sir D
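Because `itertools.product` enumerates tuples in a fixed mixed-radix order (the last pool varies fastest), the n-th tuple can be computed directly with `divmod`, with no iteration at all — a sketch:

```python
import itertools

def product_at(pools, index):
    """Return the index-th (0-based) tuple of itertools.product(*pools)."""
    result = []
    for pool in reversed(pools):  # last pool is the least-significant "digit"
        index, r = divmod(index, len(pool))
        result.append(pool[r])
    if index:
        raise IndexError("index out of range for this product")
    return tuple(reversed(result))

s = list(range(51))
combo = product_at([s] * 4, 4856)  # instant, no generation of earlier tuples
```

Note the question's loop increments `j` before checking, so its "result number 4856" corresponds to 0-based index 4855 here.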
77,491,831
748,493
Pandas DataFrame from a list of strings and n-th order tuples
<p>I am dealing with data that can contain strings, tuples of strings, tuples of tuples of strings, etc., e.g.</p> <pre><code>values = ['a0'] + [('b0', 'b1')] + [(('c0', 'c1'), ('c2', 'c3'))] </code></pre> <p>and I need to construct a dataframe from these values that has as many columns as the maximum number of strings amongst all list elements, i.e. 4 for the above example, with <code>None</code> in those locations where there are fewer than this maximum, e.g.</p> <p><a href="https://i.sstatic.net/4yuij.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4yuij.png" alt="enter image description here" /></a></p> <p>The <code>pd.DataFrame</code> constructor behaves differently depending on the order of elements in the list.</p> <p>For example, <code>pd.DataFrame(['a0'] + [('b0','b1')])</code> results in</p> <p><a href="https://i.sstatic.net/YJhR2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YJhR2.png" alt="enter image description here" /></a></p> <p>while <code>pd.DataFrame( [('b0','b1')] + ['a0'])</code> produces</p> <p><a href="https://i.sstatic.net/Nrhc3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Nrhc3.png" alt="enter image description here" /></a></p> <p>The desired output, in this case, is</p> <p><a href="https://i.sstatic.net/bCmDr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bCmDr.png" alt="enter image description here" /></a></p> <p>In general, a tuple might contain elements of mixed order, e.g. <code>[('d0', ('d1', ('d2', 'd3')))]</code>.</p>
<python><pandas><dataframe>
2023-11-16 01:42:18
4
522
Confounded
77,491,797
5,363,840
How does PyCharm handle virtual environments and their variables set in the activate script?
<p>I'm running into an issue in PyCharm. I'm using a virtual environment and I have added the following line to my activate script:</p> <pre><code>source /home/me/environment.sh </code></pre> <p>environment.sh only contains some lines with exports for variables, such as:</p> <pre><code>#!/bin/bash export USERNAME=&quot;GUY&quot; </code></pre> <p>When I run the Python interpreter from the terminal in PyCharm, I am able to see these variables by printing out</p> <p><code>os.environ</code></p> <p>However, if I attempt to run any Python script in my project using the actual <em>run</em> button in PyCharm, these variables are not available. I have checked the run/debug configuration and everything looks correct, and it is using my venv interpreter. I see that I can manually enter these variables into the run configuration, but why does it not recognize them from my active virtual environment?</p>
<python><pycharm><python-venv>
2023-11-16 01:30:09
0
462
Dr.Tautology
77,491,665
12,474,157
How to Add Authentication Mock to Pytest in a FastAPI Application?
<p>I am working on testing a FastAPI application using pytest and need to integrate authentication mocking. I have an endpoint <code>/roles</code> which requires authentication, but I'm not sure how to correctly add a mock for the authentication in my pytest setup.</p> <h3>pytest code I have:</h3> <pre class="lang-py prettyprint-override"><code># pytest code for testing /roles endpoint import pytest from fastapi.testclient import TestClient from unittest.mock import patch, AsyncMock from main import app from tests.mock_data import test_roles_data class MockRoles: async def setup(self): pass async def get_json(self): return test_roles_data @pytest.fixture def client(): with TestClient(app) as c: yield c @patch('routers.base.parse_params', return_value=('custom_fields', '2023-01-01', '2023-01-31')) @patch('services.roles.Roles', return_value=MockRoles()) def test_roles_endpoint(mock_parse_params, mock_roles, client): response = client.get(&quot;/roles&quot;, headers={&quot;account&quot;: &quot;test_account&quot;}) assert response.status_code == 200 assert response.json() == test_roles_data </code></pre> <p>I also have a fake authentication function which I need to integrate into the test: I believe this is the correct way to define it</p> <pre class="lang-py prettyprint-override"><code># Fake authentication function async def fake_authenticate( apitoken: Optional[str] = None, account: Optional[str] = None, sessionHost: Optional[str] = None ): return { 'apitoken': 'apitoken', 'account': 'account', 'sessionHost': 'sessionHost', } </code></pre> <p>My goal is to mock the authentication process so that the <code>/roles</code> endpoint can be tested without actual authentication. 
Could someone guide me on how to correctly add this <code>fake_authenticate</code> function to my pytest setup?</p> <p><strong>Additional Info:</strong></p> <ul> <li>The <code>/roles</code> endpoint in the FastAPI app requires authentication.</li> <li>I am looking to test the endpoint's response assuming the authentication is successful.</li> </ul> <h3>/main.py</h3> <pre class="lang-py prettyprint-override"><code>async def authenticate( apitoken: Optional[str] = Header(&quot;&quot;), account: Optional[str] = Header(&quot;&quot;), sessionHost: Optional[str] = Header(&quot;&quot;), ): print(sessionHost) if ( sessionHost and sessionHost[-20:] == &quot;mydomain-staging.com&quot; or sessionHost == &quot;https://apistaging.mydomain.com&quot; ): url = sessionHost + &quot;/api/jml/templates&quot; else: url = PF_API_URL + &quot;/jml/templates&quot; print(url) params = {&quot;per_page&quot;: 1, &quot;page&quot;: 1} accept = &quot;application/vnd.mydomain+json;version=2&quot; auth_headers = { &quot;content_type&quot;: &quot;application/json&quot;, &quot;accept&quot;: accept, &quot;account&quot;: account, &quot;apitoken&quot;: apitoken, } r = await session.get(url, headers=auth_headers, params=params, timeout=30) if r.status_code != 200: raise HTTPException(status_code=401, detail=&quot;Unauthorized&quot;) app.token_cache[apitoken] = { &quot;account&quot;: account, &quot;ts&quot;: datetime.now(), } app.include_router( stats.router, dependencies=[Depends(authenticate)], responses={404: {&quot;description&quot;: &quot;Not found&quot;}}, ) </code></pre> <h3>/routers/stats.py</h3> <pre class="lang-py prettyprint-override"><code>@router.get(&quot;/roles&quot;) async def roles( request: Request, account: str = Header(...), start_date: str = None, end_date: str = None, profile_id: int = 0, ): custom_fields, start_date, end_date = parse_params( request, start_date, end_date ) new_role = Roles( account, start_date, end_date, profile_id, custom_fields=custom_fields ) try: await 
new_role.setup() result = await new_role.get_json() except NoDataException: result = {&quot;Error&quot;: &quot;No Data&quot;} return result </code></pre>
<python><authentication><mocking><pytest><fastapi>
2023-11-16 00:31:16
0
1,720
The Dan
77,491,623
12,474,157
How to Correctly Implement Mocked Authentication in Pytest for FastAPI Application?
<p>I am working with FastAPI and pytest to test an endpoint in my application. I need to mock the authentication dependency, but I'm facing challenges in implementing it correctly. My primary goal is to ensure that the mocked authentication is properly set up so that my tests can run without hitting the actual authentication logic.</p> <h2>My pytest code</h2> <pre class="lang-py prettyprint-override"><code>import os from typing import Optional from fastapi.testclient import TestClient from unittest.mock import patch, AsyncMock from main import app from tests.mock_data import test_roles_data os.environ[&quot;TEST_ENV&quot;] = &quot;test&quot; client = TestClient(app) async def fake_authenticate( apitoken: Optional[str] = None, account: Optional[str] = None, sessionHost: Optional[str] = None ): return { 'apitoken': 'apitoken', 'account': 'account', 'sessionHost': 'sessionHost', } app.dependency_overrides[app.authenticate] = fake_authenticate @patch('routers.base.parse_params', new_callable=AsyncMock) @patch('services.roles.Roles.get_json', new_callable=AsyncMock) def test_roles_endpoint(mock_parse_params, mock_get_json): mock_get_json.return_value = test_roles_data mock_parse_params.return_value = (&quot;custom_fields_value&quot;, &quot;2023-01-01&quot;, &quot;2023-01-31&quot;) response = client.get(&quot;/roles&quot;, headers={ 'apitoken': 'apitoken', 'account': 'account', 'sessionHost': 'sessionHost', }, params={&quot;start_date&quot;: &quot;2023-01-01&quot;, &quot;end_date&quot;: &quot;2023-01-31&quot;}) assert response.status_code == 200 # assert response.json() == test_roles_data # Reset the environment variable after tests del os.environ[&quot;TEST_ENV&quot;] </code></pre> <p>However, when I run the test, I get the following error:</p> <pre><code>AttributeError: 'FastAPI' object has no attribute 'authenticate' </code></pre> <p>This error occurs at the line where I try to override the dependency with the mocked authentication function. 
I am certain that the <code>authenticate</code> function exists in my main application and is used correctly in my routes.</p> <p>I am looking for guidance on how to properly mock the authentication in this pytest setup for my FastAPI application. What is the correct way to implement this so that my endpoint tests can bypass the actual authentication logic?</p> <h2>Additional info</h2> <h3>/main.py</h3> <pre class="lang-py prettyprint-override"><code>async def authenticate( apitoken: Optional[str] = Header(&quot;&quot;), account: Optional[str] = Header(&quot;&quot;), sessionHost: Optional[str] = Header(&quot;&quot;), ): print(sessionHost) if ( sessionHost and sessionHost[-20:] == &quot;mydomain-staging.com&quot; or sessionHost == &quot;https://apistaging.mydomain.com&quot; ): url = sessionHost + &quot;/api/jml/templates&quot; else: url = PF_API_URL + &quot;/jml/templates&quot; print(url) params = {&quot;per_page&quot;: 1, &quot;page&quot;: 1} accept = &quot;application/vnd.mydomain+json;version=2&quot; auth_headers = { &quot;content_type&quot;: &quot;application/json&quot;, &quot;accept&quot;: accept, &quot;account&quot;: account, &quot;apitoken&quot;: apitoken, } r = await session.get(url, headers=auth_headers, params=params, timeout=30) if r.status_code != 200: raise HTTPException(status_code=401, detail=&quot;Unauthorized&quot;) app.token_cache[apitoken] = { &quot;account&quot;: account, &quot;ts&quot;: datetime.now(), } app.include_router( stats.router, dependencies=[Depends(authenticate)], responses={404: {&quot;description&quot;: &quot;Not found&quot;}}, ) </code></pre> <h3>/routers/stats.py</h3> <pre class="lang-py prettyprint-override"><code>@router.get(&quot;/roles&quot;) async def roles( request: Request, account: str = Header(...), start_date: str = None, end_date: str = None, profile_id: int = 0, ): custom_fields, start_date, end_date = parse_params( request, start_date, end_date ) new_role = Roles( account, start_date, end_date, profile_id, 
custom_fields=custom_fields ) try: await new_role.setup() result = await new_role.get_json() except NoDataException: result = {&quot;Error&quot;: &quot;No Data&quot;} return result </code></pre>
<python><unit-testing><mocking><pytest><fastapi>
2023-11-16 00:15:35
1
1,720
The Dan
77,491,414
3,083,830
What does n_jobs=-1 do in XGBClassifier from xgboost?
<p>What does <code>n_jobs=-1</code> do when creating an <code>XGBClassifier</code> object?</p> <pre><code>xgbc = XGBClassifier(n_jobs=-1) </code></pre>
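For context, a hedged illustration of the convention (not XGBoost's exact internal code): in scikit-learn-style estimators, `n_jobs` sets the number of parallel threads, and `-1` is conventionally resolved to the machine's full core count:

```python
import os

def resolve_n_jobs(n_jobs):
    # Convention used by scikit-learn-style estimators: -1 means
    # "use every available CPU core"; a positive value is taken as-is.
    # (Hypothetical helper mirroring the convention, not XGBoost's code.)
    return os.cpu_count() if n_jobs == -1 else n_jobs

print(resolve_n_jobs(4))                       # 4
print(resolve_n_jobs(-1) == os.cpu_count())    # True
```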
<python><scikit-learn><xgboost><xgbclassifier>
2023-11-15 23:09:12
1
641
Ivan Verges
77,491,346
1,934,510
cannot import name 'Job' from partially initialized module 'models' (most likely due to a circular import)
<p>I'm trying to build a Flask app but I'm receiving this error</p> <p>ImportError: cannot import name 'Job' from partially initialized module 'models' (most likely due to a circular import)</p> <p>This my code:</p> <p>app.py</p> <pre><code>from flask import Flask, render_template, redirect, url_for, flash from flask_sqlalchemy import SQLAlchemy from flask_wtf import FlaskForm from wtforms import StringField, TextAreaField, SubmitField from wtforms.validators import DataRequired app = Flask(__name__) app.config[&quot;SQLALCHEMY_DATABASE_URI&quot;] = &quot;sqlite:///jobs.db&quot; app.config[&quot;SECRET_KEY&quot;] = &quot;your_secret_key&quot; db = SQLAlchemy(app) @app.route(&quot;/&quot;) def index(): jobs = Job.query.all() return render_template(&quot;index.html&quot;, jobs=jobs) from models import Job class JobForm(FlaskForm): title = StringField(&quot;Job Title&quot;, validators=[DataRequired()]) description = TextAreaField(&quot;Job Description&quot;, validators=[DataRequired()]) submit = SubmitField(&quot;Post Job&quot;) @app.route(&quot;/post&quot;, methods=[&quot;GET&quot;, &quot;POST&quot;]) def post_job(): form = JobForm() if form.validate_on_submit(): job = Job(title=form.title.data, description=form.description.data) db.session.add(job) db.session.commit() flash(&quot;Job has been posted!&quot;, &quot;success&quot;) return redirect(url_for(&quot;index&quot;)) return render_template(&quot;post_job.html&quot;, title=&quot;Post Job&quot;, form=form) if __name__ == &quot;__main__&quot;: app.run(debug=True) </code></pre> <p>models.py</p> <pre><code>from app import db class Job(db.Model): id = db.Column(db.Integer, primary_key=True) title = db.Column(db.String(100), nullable=False) description = db.Column(db.Text, nullable=False) def __repr__(self): return f&quot;Job('{self.title}', '{self.description}')&quot; </code></pre>
<python><flask>
2023-11-15 22:52:39
1
8,851
Filipe Ferminiano
77,491,336
1,663,528
How to search results of PyMySQL query with IN operator when it returns a tuple of tuples?
<p>I have a MySQL database table of IPv4 addresses that I want to search against. I pull it into Python using a SQL query (via the PyMySQL library) with a single column in the SELECT, followed by <code>cursor.fetchall()</code>.</p> <pre><code>with conn.cursor() as cur: sqlQuery = 'SELECT ipv4_address FROM bad_ips WHERE ipv4_address IS NOT NULL' cur.execute(sqlQuery) bad_ips = cur.fetchall() conn.close() </code></pre> <p>When I loop through it, it appears to return a tuple of tuples, showing this sort of thing:</p> <p>('192.168.1.1',) ('192.168.1.5',)</p> <p>I want it to return just the IP addresses, like this:</p> <p>'192.168.1.1' '192.168.1.5'</p> <p>That way I can look up an IP using:</p> <pre><code>if str(this_ip_address) in bad_ips: </code></pre>
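A short sketch of the usual fix: DB-API cursors return every row as a tuple even for a single-column SELECT, so unpack the first element of each row (a set also makes the membership test O(1)). The literal tuples below stand in for a real `fetchall()` result:

```python
# Simulated result of cur.fetchall() on a single-column SELECT:
# each row is a 1-tuple, as the DB-API specifies.
fetched = (('192.168.1.1',), ('192.168.1.5',))

# Unpack the single column; a set gives fast membership tests.
bad_ips = {row[0] for row in fetched}

print('192.168.1.5' in bad_ips)   # True
print('10.0.0.1' in bad_ips)      # False
```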
<python><pymysql>
2023-11-15 22:50:12
1
8,579
Mr Fett
77,491,117
5,519,012
Using cross join of subquery in python Peewee
<p>I want to convert the following sql query to python peewee -</p> <pre><code>WITH common_subquery AS ( SELECT t1.fly_from, t1.airlines as first_airline, t1.flight_numbers as first_flight_number, t1.link_to as first_link, t1.departure_to, t1.fly_to AS connection_at, t2.airlines as second_airline, t2.flight_numbers as second_flight_number, t2.link_to as second_link, t2.fly_to, t1.arrival_to AS landing_at_connection, t2.departure_to AS departure_from_connection, t2.arrival_to, CAST((julianday(t2.departure_to) - julianday(t1.arrival_to)) * 24 AS INTEGER) AS duration_hours, t1.discount_price + t2.discount_price AS total_price FROM flights AS t1 JOIN flights AS t2 ON t1.flight_hash = t2.flight_hash WHERE (t2.fly_from != t1.fly_from) AND (t1.fly_from != t2.fly_to) ORDER BY total_price ASC ) SELECT t1.fly_from as source, t1.first_airline as source_outbound_airline, t1.first_flight_number as source_outbound_flight_number, t1.first_link as source_outbound_link, t1.departure_to as outbound_departure, t1.landing_at_connection, t1.connection_at as outbound_connection, t1.second_airline as connection_outbound_airline, t1.second_flight_number as connection_outbound_flight_number, t1.second_link as connection_outbound_link, t1.departure_from_connection, t1.arrival_to as destination_arrival, t1.fly_to as destination, t2.first_airline as inbound_connection_airline, t2.first_flight_number as inbound_connection_flight_number, t2.first_link as inbound_connection_link, t2.departure_to as return_departure, t2.landing_at_connection as return_arrival, t2.connection_at as inbound_connection, t2.second_airline as inbound_airline, t2.second_flight_number as inbound_flight_number, t2.second_link as inbound_link, t2.departure_from_connection as return_departure_from, t2.arrival_to as return_destination_arrival, CEIL((t1.total_price + t2.total_price) / 100.0) * 100 AS round_total_price, FLOOR((julianday(t2.departure_from_connection) - julianday(t1.arrival_to))) AS days_in_dest FROM 
common_subquery AS t1 CROSS JOIN common_subquery AS t2 WHERE (julianday(t2.departure_from_connection) - julianday(t1.arrival_to)) BETWEEN 5 AND 8 AND t1.duration_hours &lt; 24 AND t2.duration_hours &lt; 24 AND t1.fly_to = t2.fly_from AND t1.fly_from like '%TLV%' and t1.fly_to like '%PRG%' ORDER BY round_total_price ASC, t1.duration_hours ASC, t2.duration_hours ASC; </code></pre> <p>The peewee model is -</p> <pre><code>class Flights(Model): fly_from = CharField() fly_to = CharField() nights = IntegerField() days_off = IntegerField() price = IntegerField() discount_price = IntegerField() airlines = CharField() flight_numbers = CharField() departure_to = DateTimeField() arrival_to = DateTimeField() departure_from = DateTimeField() arrival_from = DateTimeField() link_to = CharField() link_from = CharField() month = IntegerField() date_of_scan = DateTimeField() holiday_name = CharField() special_date = BooleanField(default=False) is_connection_flight = BooleanField(default=False) flight_hash = CharField(default=&quot;&quot;) class Meta: database = db </code></pre> <p>I was able to recreate the subquery, but the issue is that I am not able to <code>cross join</code> the subquery.</p> <p>for example -</p> <pre><code>subquery = Flight.select(...).order_by(...) </code></pre> <p>I can't figure out how to cross join it in another query with itself. I want to get something like -</p> <pre><code> subquery.select().join(subquery, CROSS) </code></pre> <p>is that possible using peewee? I can do it using python, but I want to have that calculations on the database engine side</p>
<python><sql><peewee>
2023-11-15 21:59:09
1
365
Meir Tolpin
77,491,078
5,036,928
PyVista: 3D Gaussian Smoothing of PolyData
<p>I would like to replicate the example here <a href="https://docs.pyvista.org/version/stable/examples/01-filter/gaussian-smoothing.html" rel="nofollow noreferrer">https://docs.pyvista.org/version/stable/examples/01-filter/gaussian-smoothing.html</a> using my own data but trying to apply the <code>gaussian_smooth()</code> method to my <code>ImageData</code> results in <code>MissingDataError: No data available.</code> (but works for the example). I'm guessing I need to pass my scalar field to <code>ImageData</code> but I'm not sure with what attribute I do this.</p> <p>Some potentially helpful code:</p> <pre><code># create a uniform grid to sample the function with n = 40 x_min, y_min, z_min = [np.min(q) - 0.25*np.absolute(np.min(q)) for q in [tmp[tmp[:,3]==1, 0], tmp[tmp[:,3]==1, 1], tmp[tmp[:,3]==1, 2]]] x_max, y_max, z_max = [np.max(q) + 0.25*np.absolute(np.max(q)) for q in [tmp[tmp[:,3]==1, 0], tmp[tmp[:,3]==1, 1], tmp[tmp[:,3]==1, 2]]] grid = pv.ImageData( dimensions=(n, n, n), spacing=( (x_max - x_min) / n, (y_max - y_min) / n, (z_max - z_min) / n), origin=(x_min, y_min, z_min), ) smooth_grid = grid.gaussian_smooth(std_dev=3.0) </code></pre> <p>My question: How can I successfully perform a <code>gaussian_smooth</code> on my <code>ImageData</code></p>
<python><3d><surface><gaussianblur><pyvista>
2023-11-15 21:51:27
1
1,195
Sterling Butters
77,490,979
9,100,431
Python - Datetime Problem printing '%p' with locale
<p>I have a simple script that fetches a UTC datetime from an API and then tries to format it as</p> <pre><code>&quot;%I:%M:%S %p . %d de %B del %Y&quot; </code></pre> <p>I get the correct format without a locale set (12:35:01 PM. 15 de November del 2023). But when I try to get the Spanish month, the %p suddenly disappears.</p> <pre><code>def get_time(): locale.setlocale(locale.LC_TIME, 'es_ES.utf8') sharepoint_api = api.SharePointAPI() last_time_ran = sharepoint_api.get_last_executed_time() # Change UTC to local timezone utc_format = &quot;%Y-%m-%dT%H:%M:%SZ&quot; utc_time = datetime.strptime(last_time_ran, utc_format) utc_time = utc_time.replace(tzinfo=pytz.UTC) local_tz = pytz.timezone(&quot;America/Monterrey&quot;) local_time = utc_time.astimezone(local_tz) # Format to the desired output output_format = &quot;%I:%M:%S %p . %d de %B del %Y&quot; formatted_date = local_time.strftime(output_format) return formatted_date </code></pre> <p>This prints:</p> <pre><code>12:35:01 . 15 de Noviembre del 2023 </code></pre> <p>Suggested workarounds say I should use a dictionary to translate the months, or format the date before the locale is set to capture the AM/PM and then append it after the locale change.</p> <p>Isn't there an easier way? Is this a datetime library bug?</p>
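For what it's worth, a sketch of the manual workaround: several locales (es_ES among them) define empty AM/PM strings, so `%p` legitimately formats to nothing there rather than being a datetime bug; building the marker and month by hand avoids the locale entirely (the helper name and lowercase month spellings are illustrative):

```python
from datetime import datetime

def spanish_with_ampm(dt):
    # es_ES defines empty AM/PM strings, so %p yields "" under that
    # locale; build the marker by hand instead of relying on %p.
    ampm = 'AM' if dt.hour < 12 else 'PM'
    months = ['enero', 'febrero', 'marzo', 'abril', 'mayo', 'junio',
              'julio', 'agosto', 'septiembre', 'octubre',
              'noviembre', 'diciembre']
    return (dt.strftime('%I:%M:%S ') + ampm
            + f' . {dt.day:02d} de {months[dt.month - 1]} del {dt.year}')

print(spanish_with_ampm(datetime(2023, 11, 15, 12, 35, 1)))
# 12:35:01 PM . 15 de noviembre del 2023
```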
<python><datetime><python-datetime>
2023-11-15 21:28:41
0
660
Diego
77,490,678
11,163,122
SQL Alchemy v2 is after_transaction_end needed for nested session
<p>In <a href="https://aalvarez.me/posts/setting-up-a-sqlalchemy-and-pytest-based-test-suite/" rel="nofollow noreferrer">Setting Up a SQLAlchemy and Pytest Based Test Suite</a> (published May 2022 with SQLAlchemy v1), for the transactional testing, it uses the below code snippet. Note I cleaned it up a bit for this post:</p> <pre class="lang-py prettyprint-override"><code># conftest.py import pytest # Import a session factory in our app code. Possibly created using # `sessionmaker()` from myapp.db import Session @pytest.fixture(autouse=True) def session(connection): transaction = connection.begin() session = Session(bind=connection) session.begin_nested() @event.listens_for(session, &quot;after_transaction_end&quot;) def restart_savepoint(db_session, ended_transaction): if ended_transaction.nested and not ended_transaction._parent.nested: session.expire_all() session.begin_nested() yield session Session.remove() transaction.rollback() </code></pre> <p>I am now working with SQLAlchemy v2 in November 2023, and have three questions around the <code>restart_savepoint</code> inner function:</p> <ul> <li>Can you explain the <code>ended_transaction.nested and not ended_transaction._parent.nested</code> logic?</li> <li>Is this <code>restart_savepoint</code> behavior still necessary with SQL Alchemy v2?</li> <li>Is the <code>Session(bind=connection)</code> still necessary with SQL Alchemy v2?</li> </ul> <p>I am thinking the below is equally viable:</p> <pre class="lang-py prettyprint-override"><code>@pytest.fixture(autouse=True) def session(connection): with Session() as session, session.begin(nested=True): yield session </code></pre>
<python><sqlalchemy>
2023-11-15 20:21:38
1
2,961
Intrastellar Explorer
77,490,677
1,309,005
Can I use a symbol other than the equal sign in self-documenting f-strings
<p>Self-documenting f-strings in Python allow you to write a print statement like this:</p> <pre class="lang-py prettyprint-override"><code>print(f&quot;{result=}&quot;) </code></pre> <p>This works fine, but the <code>=</code> sign is not just used to tell Python you want a self-documenting f-string...it is also the symbol that gets formatted into the output. Is there any way to get Python to use the <code>:</code> character instead of the <code>=</code> character in the <em>output</em> of self-documenting f-strings, maybe something like this:</p> <pre class="lang-py prettyprint-override"><code>print(f&quot;{result:}&quot;) </code></pre>
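A brief sketch of the practical options: the `=` in a self-documenting f-string is fixed syntax (a colon after the expression starts a format spec instead), so the output separator cannot be configured, but writing the label explicitly or post-processing the string produces the `:` form:

```python
result = 42

# The '=' specifier is fixed syntax; these are the usual workarounds.
print(f"{result=}")                        # result=42
print(f"result: {result}")                 # result: 42
print(f"{result=}".replace("=", ": ", 1))  # result: 42
```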
<python><f-string>
2023-11-15 20:21:33
1
1,707
RSW
77,490,543
3,233,017
Can TQDM track the output of a subprocess?
<p>As part of a long pipeline (all orchestrated in Python), I'm calling an external program using the subprocess module. This program takes a while to finish, so I'd like to show the user a nice progress bar, to reassure them that it hasn't frozen.</p> <p>This program also sends a whole lot of information to stdout—much more than I want to show the end user. But the number of lines in this output is consistent.</p> <p>What's the best way to track the number of lines that have been sent to stdout by a subprocess, for the purpose of updating a real-time status bar (via tqdm)?</p>
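One hedged sketch of the line-counting approach: read the child's stdout incrementally and advance once per line; in real use the loop's iterable would be wrapped as `tqdm(proc.stdout, total=expected_lines)` to render the live bar. The child command below is a stand-in that prints a known number of lines:

```python
import subprocess
import sys

EXPECTED_LINES = 5
# Stand-in for the real external program: prints a known number of lines.
child = [sys.executable, '-c',
         'for i in range(5): print("noisy output", i)']

proc = subprocess.Popen(child, stdout=subprocess.PIPE, text=True)
seen = 0
for _line in proc.stdout:   # in real use: for _line in tqdm(proc.stdout, total=EXPECTED_LINES)
    seen += 1               # each stdout line advances the progress bar
proc.wait()
print(seen)  # 5
```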
<python><subprocess><tqdm>
2023-11-15 19:52:04
0
3,547
Draconis
77,490,435
13,086,128
AttributeError: cython_sources
<p>I am using:</p> <pre><code>python: 3.12 OS: Windows 11 Home </code></pre> <p>I tried to install <code>catboost==1.2.2</code></p> <p>I am getting this error:</p> <pre><code>C:\Windows\System32&gt;py -3 -m pip install catboost==1.2.2 Collecting catboost==1.2.2 Downloading catboost-1.2.2.tar.gz (60.1 MB) ---------------------------------------- 60.1/60.1 MB 5.1 MB/s eta 0:00:00 Installing build dependencies ... error error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─&gt; [135 lines of output] Collecting setuptools&gt;=64.0 Using cached setuptools-68.2.2-py3-none-any.whl (807 kB) Collecting wheel Using cached wheel-0.41.3-py3-none-any.whl (65 kB) Collecting jupyterlab Downloading jupyterlab-4.0.8-py3-none-any.whl (9.2 MB) ---------------------------------------- 9.2/9.2 MB 7.8 MB/s eta 0:00:00 Collecting conan&lt;=1.59,&gt;=1.57 Downloading conan-1.59.0.tar.gz (780 kB) -------------------------------------- 781.0/781.0 kB 4.9 MB/s eta 0:00:00 Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting async-lru&gt;=1.0.0 (from jupyterlab) Downloading async_lru-2.0.4-py3-none-any.whl (6.1 kB) Collecting ipykernel (from jupyterlab) Downloading ipykernel-6.26.0-py3-none-any.whl (114 kB) -------------------------------------- 114.3/114.3 kB 6.5 MB/s eta 0:00:00 Collecting jinja2&gt;=3.0.3 (from jupyterlab) Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB) -------------------------------------- 133.1/133.1 kB 7.7 MB/s eta 0:00:00 Collecting jupyter-core (from jupyterlab) Downloading jupyter_core-5.5.0-py3-none-any.whl (28 kB) Collecting jupyter-lsp&gt;=2.0.0 (from jupyterlab) Downloading jupyter_lsp-2.2.0-py3-none-any.whl (65 kB) 
---------------------------------------- 66.0/66.0 kB 3.7 MB/s eta 0:00:00 Collecting jupyter-server&lt;3,&gt;=2.4.0 (from jupyterlab) Downloading jupyter_server-2.10.1-py3-none-any.whl (378 kB) -------------------------------------- 378.6/378.6 kB 4.7 MB/s eta 0:00:00 Collecting jupyterlab-server&lt;3,&gt;=2.19.0 (from jupyterlab) Downloading jupyterlab_server-2.25.1-py3-none-any.whl (58 kB) ---------------------------------------- 59.0/59.0 kB 3.0 MB/s eta 0:00:00 Collecting notebook-shim&gt;=0.2 (from jupyterlab) Downloading notebook_shim-0.2.3-py3-none-any.whl (13 kB) Collecting packaging (from jupyterlab) Downloading packaging-23.2-py3-none-any.whl (53 kB) ---------------------------------------- 53.0/53.0 kB 2.7 MB/s eta 0:00:00 Collecting tornado&gt;=6.2.0 (from jupyterlab) Downloading tornado-6.3.3-cp38-abi3-win_amd64.whl (429 kB) -------------------------------------- 429.2/429.2 kB 9.1 MB/s eta 0:00:00 Collecting traitlets (from jupyterlab) Downloading traitlets-5.13.0-py3-none-any.whl (84 kB) ---------------------------------------- 85.0/85.0 kB 4.7 MB/s eta 0:00:00 Collecting requests&lt;3.0.0,&gt;=2.25 (from conan&lt;=1.59,&gt;=1.57) Downloading requests-2.31.0-py3-none-any.whl (62 kB) ---------------------------------------- 62.6/62.6 kB ? 
eta 0:00:00 Collecting urllib3&lt;1.27,&gt;=1.26.6 (from conan&lt;=1.59,&gt;=1.57) Downloading urllib3-1.26.18-py2.py3-none-any.whl (143 kB) -------------------------------------- 143.8/143.8 kB 4.3 MB/s eta 0:00:00 Collecting colorama&lt;0.5.0,&gt;=0.3.3 (from conan&lt;=1.59,&gt;=1.57) Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB) Collecting PyYAML&lt;=6.0,&gt;=3.11 (from conan&lt;=1.59,&gt;=1.57) Downloading PyYAML-6.0.tar.gz (124 kB) -------------------------------------- 125.0/125.0 kB 3.6 MB/s eta 0:00:00 Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'error' error: subprocess-exited-with-error Getting requirements to build wheel did not run successfully. exit code: 1 [54 lines of output] running egg_info writing lib\PyYAML.egg-info\PKG-INFO writing dependency_links to lib\PyYAML.egg-info\dependency_links.txt writing top-level names to lib\PyYAML.egg-info\top_level.txt Traceback (most recent call last): File &quot;C:\Users\talta\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py&quot;, line 353, in &lt;module&gt; main() File &quot;C:\Users\talta\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py&quot;, line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\talta\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py&quot;, line 118, in get_requires_for_build_wheel return hook(config_settings) ^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\build_meta.py&quot;, line 355, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\build_meta.py&quot;, line 325, in _get_build_requires self.run_setup() File &quot;C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\build_meta.py&quot;, line 341, in run_setup exec(code, locals()) File &quot;&lt;string&gt;&quot;, line 288, in &lt;module&gt; File &quot;C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\__init__.py&quot;, line 103, in setup return distutils.core.setup(**attrs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\core.py&quot;, line 185, in setup return run_commands(dist) ^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\core.py&quot;, line 201, in run_commands dist.run_commands() File &quot;C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\dist.py&quot;, line 969, in run_commands self.run_command(cmd) File &quot;C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\dist.py&quot;, line 989, in run_command super().run_command(command) File &quot;C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\dist.py&quot;, line 988, in run_command cmd_obj.run() File &quot;C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\command\egg_info.py&quot;, line 318, in run self.find_sources() File &quot;C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\command\egg_info.py&quot;, line 326, in find_sources mm.run() File 
&quot;C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\command\egg_info.py&quot;, line 548, in run self.add_defaults() File &quot;C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\command\egg_info.py&quot;, line 586, in add_defaults sdist.add_defaults(self) File &quot;C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\command\sdist.py&quot;, line 113, in add_defaults super().add_defaults() File &quot;C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\command\sdist.py&quot;, line 251, in add_defaults self._add_defaults_ext() File &quot;C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\command\sdist.py&quot;, line 336, in _add_defaults_ext self.filelist.extend(build_ext.get_source_files()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;&lt;string&gt;&quot;, line 204, in get_source_files File &quot;C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\cmd.py&quot;, line 107, in __getattr__ raise AttributeError(attr) AttributeError: cython_sources [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error Getting requirements to build wheel did not run successfully. exit code: 1 See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. [notice] A new release of pip is available: 23.1.2 -&gt; 23.3.1 [notice] To update, run: C:\Users\talta\AppData\Local\Programs\Python\Python312\python.exe -m pip install --upgrade pip [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─&gt; See above for output. 
note: This error originates from a subprocess, and is likely not a problem with pip. </code></pre> <p>Any workaround or solutions?</p> <p>Comments and answers are much appreciated.</p>
<python><python-3.x><pip><catboost><python-3.12>
2023-11-15 19:31:56
1
30,560
Talha Tayyab
77,490,281
11,981,718
How to group 2d spatial grid data based on their elevation using clustering: clusters have to be contiguous
<p><a href="https://i.sstatic.net/W6nfd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W6nfd.png" alt="enter image description here" /></a></p> <p>I have 2d gridded data that I want to group based on elevation, but the clusters have to be contiguous and contain a minimum number of data points (4). A visual representation of the clustering would look like this:</p> <p>Thanks!</p> <p><a href="https://i.sstatic.net/Q8KwT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q8KwT.png" alt="gridded data with clusters" /></a></p>
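Scikit-learn's agglomerative clustering accepts a connectivity constraint built from the grid topology, which forces every cluster to be spatially contiguous. A minimal sketch under stated assumptions — the 10×10 elevation values and the cluster count are placeholders, and the minimum-cluster-size constraint (4) is not supported directly, so it would need a post-processing merge:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.image import grid_to_graph

rng = np.random.default_rng(0)
elevation = rng.random((10, 10))          # placeholder elevation grid

# connectivity links each cell only to its grid neighbours, so merged
# clusters are forced to be contiguous regions
connectivity = grid_to_graph(*elevation.shape)

model = AgglomerativeClustering(n_clusters=5, connectivity=connectivity,
                                linkage="ward")
labels = model.fit_predict(elevation.reshape(-1, 1)).reshape(elevation.shape)
```

Clusters smaller than 4 cells could then be merged into their most similar neighbouring cluster as a second pass.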
<python><scikit-learn><cluster-analysis>
2023-11-15 18:59:21
1
412
tincan
77,490,265
1,497,199
How to run celery workers with sequential task execution with "per-child" bounds
<p>The context is that I'm using kubernetes for parallelism and workload balancing, similar to <a href="https://miguescri.com/post/2021-05-22-celery-hpa/" rel="nofollow noreferrer">this</a>. Given this, it is desirable to have no concurrency within the celery workers -- we get that by k8s horizontal scaling. In addition, limiting each worker to sequential task processing ensures that the memory use is constrained.</p> <p>Given this context, it seems sensible to use the <code>solo</code> pool option when starting the workers. However, I am suspicious that keeping the worker running for an arbitrarily long time and doing all of the work in the same process will result in gradual memory leakage that will cause a task execution to fail.</p> <p>Celery provides options for starting the worker to address this memory (or other resource) leakage: <code>max-tasks-per-child, max-memory-per-child</code>; these restart the child processes (or threads) that actually do the processing work either periodically or when a memory threshold is reached. The latter helps deal with packages/processes that leak memory.</p> <p>The seemingly ideal solution would be to start the worker(s) like this:</p> <pre><code>python -m celery -A worker.app worker -P solo --max-memory-per-child 1048576 --max-tasks-per-child 10 </code></pre> <p>That way, the worker (in a given pod), just processes tasks sequentially, but also restarts periodically (every 10 jobs) or if the memory bound is exceeded (1048576KiB) in order to ensure that all resources are returned to the OS.</p> <p>However, I have my doubts that the &quot;per-child&quot; options have an effect on a solo worker -- Will the solo worker re-start if the bounds are exceeded or are these options only relevant for multi-process or multi-threaded workers? 
[the celery documentation is not helpful]</p> <p>Overall my goals are this:</p> <ul> <li>Use <code>celery</code> for task distribution an processing</li> <li>Run the workers in separate kubernetes pods to take advantage of that system's ability to scale resources</li> <li>Have no concurrency in the celery workers so that the pod memory usage is stable and predictable,</li> <li>Protect against resource (memory) leakage to reduce the chance that tasks fail</li> </ul> <p>how can I set up the celery workers to achieve this?</p> <p>The &quot;per-child&quot; options look like a promising approach if there is a way to confirm that they operate correctly for a <code>solo</code> pool worker.</p>
<python><kubernetes><celery>
2023-11-15 18:55:24
0
8,229
Dave
77,490,225
4,542,117
python matrix traversal expanding outwards concentric squares
<p>Let's say I have a 5x5 matrix as follows:</p> <pre><code>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 </code></pre> <p>Based on some location, I am given an index in both the (x,y) coordinate where I want to start building concentric squares outwards of values. A few examples below:</p> <pre><code>3 3 3 3 3 3 2 2 2 2 3 2 1 1 2 3 2 1 1 2 3 2 2 2 2 </code></pre> <pre><code>2 2 2 3 4 1 1 2 3 4 1 1 2 3 4 2 2 2 3 4 3 3 3 3 4 </code></pre> <p>Is there a more automated / function / libraries that can do this easily instead of hard-coding these sort of values?</p>
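In both examples the rings grow outward from a 2×2 seed block. Reading the given index (i, j) as that block's top-left corner — an assumption inferred from the example matrices — the ring value is just 1 plus the Chebyshev (chessboard) distance to the block, which NumPy computes without loops:

```python
import numpy as np

def rings(shape, i, j):
    """Ring value = 1 + Chebyshev distance to the 2x2 block at (i, j)."""
    rows, cols = np.indices(shape)
    # distance of each cell to the block's rows [i, i+1] and cols [j, j+1]
    dr = np.maximum(np.maximum(i - rows, rows - (i + 1)), 0)
    dc = np.maximum(np.maximum(j - cols, cols - (j + 1)), 0)
    return np.maximum(dr, dc) + 1

print(rings((5, 5), 2, 2))   # reproduces the first example matrix
```

For a single-cell centre, the plain Chebyshev distance `np.maximum(np.abs(rows - i), np.abs(cols - j))` gives concentric squares around one cell.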
<python><numpy>
2023-11-15 18:46:56
1
374
Miss_Orchid
77,490,131
5,840,173
Different headers in the same session
<p>After I ran the following python script,</p> <pre><code>import requests header_1 = {&quot;Authorization&quot;:&quot;Bearer abc123&quot;} header_2 = {&quot;Authorization&quot;:&quot;Bearer def456&quot;} url = &quot;https://notify-api.line.me/api/notify&quot; session = requests.session() a = session.post(url, headers=header_1, files=file, data=data) print(a.json()) b = session.post(url, headers=header_2, files=file, data=data) print(b.json()) </code></pre> <p>I got the following error message.</p> <pre><code>{'status': 200, 'message': 'ok'} {'status': 400, 'message': 'Invalid image.'} </code></pre> <p>I don't know why I can't POST twice using different headers. If I like to do so, how should I do?</p> <p><strong>Another Try 1</strong></p> <p>I have tried to clear the headers between two POSTs. But it doesn't work.</p> <pre><code>import requests header_1 = {&quot;Authorization&quot;:&quot;Bearer abc123&quot;} header_2 = {&quot;Authorization&quot;:&quot;Bearer def456&quot;} url = &quot;https://notify-api.line.me/api/notify&quot; session = requests.session() a = session.post(url, headers=header_1, files=file, data=data) print(a.json()) session.headers.clear() ########## new line b = session.post(url, headers=header_2, files=file, data=data) print(b.json()) </code></pre> <p><strong>Another Try 2</strong></p> <p>I also have tried to define different sessions. This got the same error message</p> <pre><code>import requests header_1 = {&quot;Authorization&quot;:&quot;Bearer abc123&quot;} header_2 = {&quot;Authorization&quot;:&quot;Bearer def456&quot;} url = &quot;https://notify-api.line.me/api/notify&quot; session_1 = requests.session() session_2 = requests.session() a = session_1.post(url, headers=header_1, files=file, data=data) print(a.json()) b = session_2.post(url, headers=header_2, files=file, data=data) print(b.json()) </code></pre> <p><strong>Another Try 3</strong></p> <p>If I changed the order of the POSTs. 
The first POST is ok and the second one got an error.</p> <pre><code>import requests header_1 = {&quot;Authorization&quot;:&quot;Bearer abc123&quot;} header_2 = {&quot;Authorization&quot;:&quot;Bearer def456&quot;} url = &quot;https://notify-api.line.me/api/notify&quot; session_1 = requests.session() session_2 = requests.session() b = session_2.post(url, headers=header_2, files=file, data=data) print(b.json()) a = session_1.post(url, headers=header_1, files=file, data=data) print(a.json()) </code></pre> <p>Is there anyone can help. Thanks.</p>
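Assuming `file` is the same dict of open file objects passed to both calls, the headers are probably a red herring: the first POST reads each file object to its end, so the second POST uploads an empty body — which matches "Invalid image" hitting whichever request runs second, regardless of session or header. A stdlib-only sketch of the effect and the fix:

```python
import io

fh = io.BytesIO(b"fake image bytes")   # stands in for open("pic.png", "rb")

first = fh.read()    # what the first POST uploads
second = fh.read()   # what the second POST uploads: nothing is left

fh.seek(0)           # rewind (or reopen the file) before the second POST
third = fh.read()    # the full bytes again
```

So between the two `session.post` calls, either reopen the file or call `.seek(0)` on every file object in `files`.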
<python><http>
2023-11-15 18:31:10
0
511
bfhaha
77,490,095
688,954
ModuleNotFoundError when running coverage run -m pytest . in GitHub Actions
<p>I have a very simple project like below</p> <pre><code>// ├─ src/ │ ├─ sample/ │ │ ├─ __init__.py │ │ ├─ simple.py ├─ tests/ │ ├─ __init__.py │ ├─ test_simple.py </code></pre> <p>When I tried running the coverage command below in the my local laptop, it runs okay and generates <code>coverage.xml</code></p> <pre><code>coverage run -m pytest . &amp;&amp; coverage xml </code></pre> <p>But when I run in the GitHub actions using the config below, it returns error</p> <pre><code> - uses: actions/setup-python@v4 with: python-version: '3.11' - name: Install dependencies run: | python -m pip install --upgrade pip pip install pytest coverage - name: Run test run: | coverage run -m pytest . coverage xml </code></pre> <p>The error is below</p> <pre><code>Run coverage run -m pytest . ============================= test session starts ============================== platform linux -- Python 3.11.6, pytest-7.4.3, pluggy-1.3.0 rootdir: /home/runner/work/openapi-splitter/openapi-splitter collected 0 items / 1 error ==================================== ERRORS ==================================== ____________________ ERROR collecting tests/test_simple.py _____________________ ImportError while importing test module '/home/runner/work/openapi-splitter/openapi-splitter/tests/test_simple.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /opt/hostedtoolcache/Python/3.11.6/x64/lib/python3.11/importlib/__init__.py:126: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/test_simple.py:7: in &lt;module&gt; from sample.simple import add_one E ModuleNotFoundError: No module named 'sample' =========================== short test summary info ============================ ERROR tests/test_simple.py !!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!! =============================== 1 error in 0.11s =============================== </code></pre> <p>Why did this happen and how to fix this?</p>
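The collection error means `src` is not on `sys.path` in the CI environment — locally it presumably was, for example via an editable install or an IDE-managed path. One fix that needs no workflow changes is pytest's `pythonpath` option (pytest ≥ 7); the filename here is illustrative:

```ini
# pytest.ini (or the [tool.pytest.ini_options] table in pyproject.toml)
[pytest]
pythonpath = src
```

Alternatively, running `pip install -e .` in the workflow installs the package so `sample` becomes importable everywhere.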
<python><pytest><github-actions><code-coverage>
2023-11-15 18:23:00
2
4,033
Petra Barus
77,490,071
6,067,528
How can I divide these two sparse matrices together?
<p>I am trying to move dense matrix operations to be sparse. I was using numpy broadcasting to divide a (591, 432) array by an array of shape (432,) when they were dense, but how can I do this with sparse matrices?</p> <pre><code>&lt;591x432 sparse matrix of type '&lt;class 'numpy.int64'&gt;' with 3876 stored elements in Compressed Sparse Column format&gt; &lt;1x432 sparse matrix of type '&lt;class 'numpy.int64'&gt;' with 432 stored elements in COOrdinate format&gt; </code></pre> <p>When I try with this dummy data below...</p> <pre><code>import numpy as np from sklearn.feature_extraction.text import CountVectorizer matrix = CountVectorizer().fit_transform(raw_documents=[&quot;test sentence.&quot;, &quot;test sent 2.&quot;]).T max_w = np.max(matrix, axis=0) matrix / max_w </code></pre> <p>I get <code>ValueError: inconsistent shapes</code>. How can I divide these?</p>
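Division isn't defined between two sparse matrices, but multiplying by the elementwise reciprocal of the (dense) column maxima broadcasts the same way the dense code did, and keeps the result sparse. A sketch with placeholder random data — the zero-guard is an assumption, in case a column has no stored entries:

```python
import numpy as np
from scipy import sparse

m = sparse.random(591, 432, density=0.02, format="csc", random_state=0)

col_max = m.max(axis=0).toarray()        # dense 1 x 432 row of column maxima
# hypothetical guard: avoid dividing by 0 where a whole column is empty
recip = np.divide(1.0, col_max, out=np.zeros_like(col_max),
                  where=col_max != 0)

scaled = m.multiply(recip).tocsc()       # broadcasts the 1 x N row, stays sparse
```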
<python><numpy><scipy>
2023-11-15 18:18:38
1
1,313
Sam Comber
77,490,002
3,782,911
How to calculate a rolling function of only defined values in pandas/numpy?
<p>I've found <a href="https://stackoverflow.com/questions/69269582/how-to-ignore-nan-when-applying-rolling-with-pandas">How to ignore NaN when applying rolling with Pandas</a></p> <p>But it didn't help.</p> <ul> <li>I have arrays where each line item is a specific value for a specific time index for a specific column.</li> <li>There may be more than one non-NaN entry per row, but certainly not many.</li> <li>I want to calculate a function that calculates the gradient according to the defined values, relative to the index.</li> </ul> <p>Can't work out how to do it with either numpy or pandas.</p> <p>Advice helpful. pandas.drop_na, skip_na didn't help.</p> <p>Template for output</p> <pre><code>fa = np.random.randn(10,4) mask = np.zeros(40, dtype=bool) mask[:15] = True np.random.shuffle(mask) mask = mask.reshape(10,4) fa[mask] = np.nan fa Out[40]: array([[ nan, -0.57681061, nan, 0.23047461], [ 0.26260072, -0.62024175, 0.35678478, nan], [-0.5781359 , -0.17364336, nan, nan], [-0.58982883, nan, 0.07114217, 1.03781762], [-0.03906354, -0.49546887, nan, nan], [-0.3988263 , 0.21794358, nan, -0.04167338], [ 0.35731643, -0.80956629, -0.29624602, 2.59351753], [-0.02804324, nan, nan, nan], [ nan, 0.75344618, -0.52145898, nan], [-0.45565981, 0.26946552, nan, 1.64095417]]) dx = pd.date_range(&quot;2023-01-01&quot;, periods=10, freq=&quot;S&quot;) df = pd.DataFrame(fa, index=idx) ## Apply function df.rolling(3).apply(lambda s: s.sum()) Out[52]: 0 1 2 3 2018-01-01 00:00:00 NaN NaN NaN NaN 2018-01-01 00:00:01 NaN NaN NaN NaN 2018-01-01 00:00:02 NaN -1.370696 NaN NaN 2018-01-01 00:00:03 -0.905364 NaN NaN NaN 2018-01-01 00:00:04 -1.207028 NaN NaN NaN 2018-01-01 00:00:05 -1.027719 NaN NaN NaN 2018-01-01 00:00:06 -0.080573 -1.087092 NaN NaN 2018-01-01 00:00:07 -0.069553 NaN NaN NaN 2018-01-01 00:00:08 -0.126387 NaN NaN NaN 2018-01-01 00:00:09 -0.126387 NaN NaN NaN ## What would be good is to have the output array being based only on the defined values ## as if the NaN's weren't there. 
So, for example in the below array which is made ## from taking the columns and applying dropna to them before running the ## rolling aggregate function. 2018-01-01 00:00:00 NaN NaN NaN NaN 2018-01-01 00:00:01 NaN NaN NaN NaN 2018-01-01 00:00:02 NaN -1.370696 NaN NaN 2018-01-01 00:00:03 -0.589829 -1.289354 NaN NaN 2018-01-01 00:00:04 -0.039064 -0.451169 NaN NaN 2018-01-01 00:00:05 -0.398826 -1.087092 NaN2 1.226619 2018-01-01 00:00:06 0.357316 NaN 0.131681 3.589662 2018-01-01 00:00:07 -0.028043 NaN NaN NaN 2018-01-01 00:00:08 NaN 0.161823 -0.746563 NaN 2018-01-01 00:00:09 -0.455660 0.2133456 NaN 4.192798 </code></pre> <p>The last line has been formed by doing</p> <pre><code>df[n].dropna().rolling(3).apply(lambda s: s.sum()) </code></pre> <p>on each column, and then filled in by hand.</p> <p>Now the actual function I want to run uses the time index as an input as well, so it's a bit more complicated than this (otherwise it would easy -- just swap out all the <code>nan</code>'s with <code>0</code>'s and we're done).</p>
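Because `dropna()` keeps the original (time) index, each column can be rolled over only its defined values and the results placed back at their original timestamps with `reindex` — and since the index the window sees is the real one, index-dependent aggregation functions (including time-based windows like `rolling("3s")`) still work. A small sketch:

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2023-01-01", periods=6, freq="s")
df = pd.DataFrame({"a": [1.0, np.nan, 2.0, np.nan, 3.0, 4.0]}, index=idx)

rolled = df.apply(
    lambda col: col.dropna()            # keep only defined values
                   .rolling(3)          # window over those values
                   .sum()               # or any custom .apply(...)
                   .reindex(col.index)  # place results back on the full index
)
```

Rows that were NaN in the input stay NaN in the output; defined rows get the rolling result over the preceding defined values only.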
<python><pandas><numpy>
2023-11-15 18:07:27
0
2,795
David Boshton
77,489,981
458,742
How should mysql.connector be shared correctly with threads in Django applications?
<p>This concerns a potential solution to <a href="https://stackoverflow.com/questions/77487774/commands-out-of-sync-you-cant-run-this-command-now-on-first-use-of-cursor">my earlier question</a> in which I was getting <code>commands out of sync; you can't run this command now</code> after wrapping mysql cursors in <code>__enter__</code> and <code>__exit__</code>.</p> <p>I have read elsewhere that Django does not spawn new threads to handle multiple requests, but rather multiple Django processes would normally be spawned in order to achieve parallelism. Despite this advice, I began printing <code>Thread.ident</code> and <code>Thread.native_id</code> and noticed that they were changing.</p> <p>Based on this, I speculatively wrote something like:</p> <pre><code>def db_connect (): t = threading.current_thread() if not hasattr (t, 'db_connection'): print (f&quot;New MyDBConnectionClass for {t.ident}~{t.native_id}&quot;) t.db_connection = MyDBConnectionClass () return t.db_connection.get_cursor () # This object has a __exit__ which closes the cursor </code></pre> <p>along with</p> <pre><code>class MyDBConnectionClass (): def __del__(self): t = threading.current_thread() print (f&quot;MyDBConnectionClass deleted for {t.ident}~{t.native_id}&quot;) </code></pre> <p>the view handlers' usage of the cursors is unchanged:</p> <pre><code>with db_connect() as db: results = db.all (query, args) </code></pre> <p>So far this seems to have fixed the <code>commands out of sync</code> error (and the exit code 245 crash mentioned in the original question).</p> <p><code>MyDBConnectionClass.__delete__</code> is not being called, but various instances are created for a few different <code>ident</code> values.</p> <p>My current hypothesis is that there is some sort of thread pool going on, and my application was crashing unpredictably because <em>sometimes</em> (often) a view would be handled in the thread which created the initial connection, but sometimes not. 
This experiment seems to show that giving each thread a distinct connection works, which makes sense, but I am unsatisfied because:</p> <ul> <li>the Thread objects are apparently not being deleted (or else something else is preventing <code>MyDBConnection.__del__</code> from being called)</li> <li>this doesn't agree with the documentation and examples I have read elsewhere</li> <li>it's messy, and Django has a fairly clean design as far as I have seen -- I think I must be missing something</li> </ul> <p>So, what is the correct way to handle mysql connection and cursor objects so that I can do</p> <pre><code>with my_connection_wrapper.get_cursor() as my_cursor_wrapper: my_cursor_wrapper.foo () </code></pre> <p>freely in Django views without leaking resources and without causing thread affinity instability issues (assuming that really is the problem)?</p>
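The standard, less messy form of this per-thread pattern is `threading.local`, which gives each thread its own attribute namespace without hanging attributes on `Thread` objects — a stdlib sketch of the idea (the connection factory is a stand-in for `MyDBConnectionClass`):

```python
import threading

_local = threading.local()

def get_connection(factory):
    # lazily create one connection per thread; other threads never see it
    if not hasattr(_local, "conn"):
        _local.conn = factory()
    return _local.conn

created = []
factory = lambda: created.append("conn") or len(created)

conn_a = get_connection(factory)
conn_b = get_connection(factory)   # cached: the factory does not run again
```

Note that Django's ORM manages exactly this itself — one DB connection per thread, closed at request boundaries — which is why using `django.db.connection` rather than raw `mysql.connector` sidesteps the thread-affinity problem entirely.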
<python><mysql><django><multithreading>
2023-11-15 18:04:28
1
33,709
spraff
77,489,961
4,256,677
pdoc3 incorrectly rendered Args section
<p>Recently migrated from an M1 to M2 mac. Previous successful invocation of pdoc was on M1 with Ventura 13.6, same python version. Is there a prerequisite I'm missing, or maybe need to downgrade a dependency, or is this due to an unreported error in my docstrings somewhere else in the module?</p> <p>Example Source code:</p> <pre class="lang-py prettyprint-override"><code>&quot;&quot;&quot; Args: script_path (str, optional): the full s3 or github uri, or local path to the script you want the service to execute. Defaults to &quot;./script.py&quot;. &quot;&quot;&quot; </code></pre> <p>Correctly rendered <strong>Args</strong> section, from previous invocation e.g.,</p> <pre class="lang-html prettyprint-override"><code>&lt;h2 id=&quot;args&quot;&gt;Args&lt;/h2&gt; &lt;dl&gt; &lt;dt&gt;&lt;strong&gt;&lt;code&gt;script_path&lt;/code&gt;&lt;/strong&gt; :&amp;ensp;&lt;code&gt;str&lt;/code&gt;, optional&lt;/dt&gt; &lt;dd&gt;the full s3 or github uri, or local path to the script you want the service to execute. Defaults to &quot;./script.py&quot;.&lt;/dd&gt; &lt;/dl&gt; </code></pre> <p>Incorrectly rendered Args section in current env:</p> <pre class="lang-html prettyprint-override"><code>&lt;p&gt;Args:&lt;br&gt; script_path (str, optional): the full s3 or github uri, or local path to the script you want the service to execute. Defaults to &quot;./script.py&quot;. &lt;/p&gt; </code></pre> <ul> <li>pdoc version: pdoc <code>0.10.0</code></li> <li>python version: <code>3.9.16</code> (pyenv)</li> <li>M2 mac with os x Ventura <code>13.6.2</code></li> </ul>
<python><docstring><pdoc>
2023-11-15 18:00:23
1
1,179
varontron
77,489,899
8,869,570
How to check that a dataframe consists of all 0 entries?
<p>I know one way is to loop through all the columns, e.g.,</p> <pre><code>for col in df.columns: assert (df[col] != 0).sum() == 0 </code></pre> <p>Is there better approach that can operate on the entire dataframe without looping through each individual column?</p>
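Whole-frame reductions avoid the column loop entirely; two equivalent sketches:

```python
import pandas as pd

df = pd.DataFrame({"a": [0, 0, 0], "b": [0, 0, 0]})

all_zero = (df == 0).all().all()    # reduce over rows, then over columns

# or via the underlying ndarray: any() is False only when every entry is 0
all_zero_np = not df.to_numpy().any()
```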
<python><python-3.x><pandas><dataframe>
2023-11-15 17:47:26
5
2,328
24n8
77,489,728
19,276,472
Helper functions in another file, ModuleNotFoundError when trying to import
<p>I have a simple Python project using scrapy. My file structure looks like this:</p> <pre><code>top_level_folder |-scraper |--spiders |---help_functions.py |---&lt;some more files&gt; |--items.py |--pipelines.py |--settings.py |--&lt;some more files&gt; </code></pre> <p>help_functions.py has a couple functions defined, like <code>add_to_items_buffer</code>.</p> <p>In pipelines.py, I'm attempting to do...</p> <pre><code>from help_functions import add_to_items_buffer ... class BlahPipeline: def process_item(self, item, spider): ... add_to_items_buffer(item) ... </code></pre> <p>When I try to run this, I get <code>ModuleNotFoundError: No module named 'help_functions'</code>. Doing <code>from spiders.help_functions import add_to_items_buffer</code> throws a similar error.</p> <p>What's going on here? I imagine I'm misunderstanding something fundamental about how Python imports work.</p>
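Inside a Scrapy project it is the top-level package (`scraper` here) that's importable, not the individual subfolders, so the import must be anchored there — an illustrative fragment (not runnable outside this project layout):

```python
# in scraper/pipelines.py
from scraper.spiders.help_functions import add_to_items_buffer
# or, relative to the package pipelines.py lives in:
from .spiders.help_functions import add_to_items_buffer
```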
<python><scrapy>
2023-11-15 17:16:44
3
720
Allen Y
77,489,704
2,556,795
Groupby giving same aggregate value for all groups
<p>I am trying to take mean of each group and trying to assign those to a new column in another dataframe but the first group's mean value is populating across all groups.</p> <p>Below is my dataframe <code>df1</code></p> <pre><code>level value CF 5 CF 4 CF 6 EL 2 EL 3 EL 1 EF 4 EF 3 EF 6 </code></pre> <p>I am taking the mean of each group and saving it to a new column in another dataframe <code>df2</code>.</p> <pre class="lang-py prettyprint-override"><code>df2['value'] = df1.groupby(['level'])['value'].transform('mean') </code></pre> <p>But this is giving me below result</p> <pre><code>level value CF 5.0 EL 5.0 EF 5.0 </code></pre> <p>which should actually be</p> <pre><code>level value CF 5.0 EL 2.0 EF 4.333333 </code></pre> <p>I get expected result if I am not saving the values to new columnn. I am not sure if this is correct way of assigning group values to new column.</p>
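The mechanics: `transform` returns one value per row of `df1` (index 0–8), and assigning that Series to a 3-row `df2` aligns on index, so `df2` picks up rows 0, 1, 2 — all `CF`, hence 5.0 everywhere. For one row per group, aggregate instead of transforming:

```python
import pandas as pd

df1 = pd.DataFrame({
    "level": ["CF", "CF", "CF", "EL", "EL", "EL", "EF", "EF", "EF"],
    "value": [5, 4, 6, 2, 3, 1, 4, 3, 6],
})

# one mean per group, kept in first-appearance order
df2 = df1.groupby("level", sort=False)["value"].mean().reset_index()
```

If `df2` already exists with a `level` column, `df2["value"] = df2["level"].map(df1.groupby("level")["value"].mean())` aligns by group label instead of by positional index.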
<python><pandas><group-by>
2023-11-15 17:12:10
2
1,370
mockash
77,489,554
8,443,357
Correct incorrect serialization in Django
<p>After serialization,I tried to call Django-rest api by connecting 3 tables, then I got issue Original exception text was: <strong>'Store' object has no attribute 'Storeimage'.</strong>, I'm expecting to get api by connecting 3 models.</p> <p>My models</p> <pre><code>class Store(models.Model): user = models.ForeignKey(User, default=1, null=True, on_delete=models.SET_NULL) address = models.ForeignKey(Address, null=True,on_delete=models.CASCADE) store_name = models.CharField(max_length=255) store_about = models.CharField(max_length=255,null=True) store_location = models.CharField( max_length=255) #store_address = models.TextField() store_phoneno = models.CharField(max_length=12) class Storeimage(models.Model): store = models.ForeignKey(Store, null=True, on_delete=models.SET_NULL) picture = models.ImageField(upload_to='store_image/%Y/%m/',max_length=255,null=True) class Address(models.Model): street = models.TextField() country = models.ForeignKey(Country, default=1,null=True, on_delete=models.CASCADE) state = models.ForeignKey(State, default=1,null=True, on_delete=models.CASCADE) </code></pre> <p>my serilzation</p> <pre><code>class StoreGetSerialiser(serializers.ModelSerializer): #store_product = StoreproductSerializer(many=True) address = AddressSerializer() Storeimage = StoreImage(many=True) owner = UserPublicSerializer(source='user', read_only=True) class Meta: model = Store fields = [ 'pk', 'owner', 'store_name', 'Storeimage', 'store_location', 'address', 'store_phoneno', 'store_website', ] class StoreImage(serializers.ModelSerializer): class Meta: model = Storeimage fields = '__all__' </code></pre> <p>Myexpected output:</p> <pre><code> { &quot;pk&quot;: 2, &quot;owner&quot;: { &quot;id&quot;: 1 }, &quot;store_name&quot;: &quot;&quot;, &quot;store_location&quot;: &quot;&quot;, &quot;address&quot;: { &quot;id&quot;: 14, &quot;street&quot;: &quot;1&quot;, &quot;country&quot;: 1, &quot;state&quot;: 1, &quot;city&quot;: 1 }, &quot;store_phoneno&quot;: &quot;&quot;, 
&quot;store_image&quot;: [{&quot;pk&quot;:1,&quot;picture&quot;:&quot;/location/abcd&quot;},{&quot;pk&quot;:2,&quot;picture&quot;:&quot;/location/abcd&quot;}] }, </code></pre>
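A serializer field name must either match an attribute on `Store` or declare a `source`; `Store` has no `Storeimage` attribute, but Django exposes the reverse foreign key from `Storeimage` as `storeimage_set` by default (lowercased model name + `_set`, unless `related_name` is set). A sketch of the fix, with the field renamed to match the expected output — note that `StoreImage` must also be defined before the class that references it, or the class body raises `NameError`:

```python
class StoreGetSerialiser(serializers.ModelSerializer):
    address = AddressSerializer()
    owner = UserPublicSerializer(source='user', read_only=True)
    # reverse FK: Django names the accessor <model>_set by default
    store_image = StoreImage(source='storeimage_set', many=True, read_only=True)

    class Meta:
        model = Store
        fields = ['pk', 'owner', 'store_name', 'store_image',
                  'store_location', 'address', 'store_phoneno']
```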
<python><django><serialization><django-rest-framework>
2023-11-15 16:50:35
1
652
selvakumar
77,489,523
1,753,640
Python regex to extract paragraph text between numbers
<p>I have text as follows and I want to extract just the text</p> <pre><code>1. foobar 2. foo 3. bar </code></pre> <p>The result should be <code>[foobar, foo, bar]</code>.</p> <p>What python regex will extract the results I want? I tried the following but no luck</p> <p><code>r'\d+.*?(?=\d|$)'</code></p>
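Capturing the text after the number (rather than matching the whole chunk) yields the list directly — `findall` returns group 1 when the pattern has exactly one group — and `re.MULTILINE` anchors each numbered line:

```python
import re

text = "1. foobar\n2. foo\n3. bar"
items = re.findall(r"^\d+\.\s*(.+)$", text, flags=re.MULTILINE)
print(items)   # ['foobar', 'foo', 'bar']
```

The original attempt had no capture group (so the digits came back with the text) and an unescaped `.` after `\d+`, which matches any character rather than a literal dot.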
<python><regex>
2023-11-15 16:45:40
2
385
user1753640
77,488,946
6,843,153
argparse to accept random arguments
<p>I have a Python application that implements <code>argparse</code> with a set of arguments declared:</p> <pre class="lang-py prettyprint-override"><code>if __name__ == &quot;__main__&quot;: parser = argparse.ArgumentParser() parser.add_argument( &quot;--arg1&quot;, default=&quot;dev&quot;, choices=[&quot;real&quot;, &quot;test&quot;, &quot;dev&quot;], help=&quot;arg 1&quot; ) parser.add_argument(&quot;--arg2&quot;, default=&quot;0&quot;, help=&quot;arg 2&quot;) parser.add_argument( &quot;--arg3&quot;, nargs=&quot;+&quot;, default=[&quot;one&quot;, &quot;two&quot;], choices=[&quot;one&quot;, &quot;two&quot;], help=&quot;arg 3&quot;, ) parser.add_argument(&quot;--arg4&quot;, action=&quot;store_true&quot;, help=&quot;arg 4&quot;) parser.add_argument(&quot;--arg5&quot;, action=&quot;store_true&quot;, help=&quot;arg 5&quot;) parser.add_argument(&quot;--arg6&quot;, action=&quot;store_true&quot;, help=&quot;arg 6&quot;) parser.add_argument(&quot;--arg7&quot;, default=None, help=&quot;arg 7&quot;) args = parser.parse_args() </code></pre> <p>If I send an argument that is not defined in these declarations, I get this exception:</p> <pre><code>error: unrecognized arguments: arg8 value </code></pre> <p>Is it possible to indicate <code>argparse</code> to accept non declared arguments?</p>
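`parse_known_args` does exactly this: declared options are parsed as usual, and anything unrecognised is returned as a list instead of raising an error:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--arg1", default="dev", choices=["real", "test", "dev"])

# undeclared --arg8 is collected into `extras` rather than rejected
args, extras = parser.parse_known_args(["--arg1", "real", "--arg8", "value"])
print(args.arg1)   # real
print(extras)      # ['--arg8', 'value']
```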
<python><argparse>
2023-11-15 15:23:08
1
5,505
HuLu ViCa
77,488,891
2,741,831
Subquery from pypika documentation not working
<pre><code>from pypika import Query, Table, Field, Tables, Order history, customers = Tables('history', 'customers') last_purchase_at = Query.from_(history).select( history.purchase_at ).where(history.customer_id==customers.customer_id).orderby( history.purchase_at, order=Order.desc ).limit(1) q = Query.from_(customers).select( customers.id, last_purchase_at._as('last_purchase_at') ) </code></pre> <p>I have picked the code directly from the documentation of pypika <a href="https://pypika.readthedocs.io/en/latest/2_tutorial.html#joining-tables-and-subqueries" rel="nofollow noreferrer">https://pypika.readthedocs.io/en/latest/2_tutorial.html#joining-tables-and-subqueries</a></p> <p>yet it gives me the following error:</p> <pre><code>Traceback (most recent call last): File &quot;/home/user/trash/pikatest/main.py&quot;, line 10, in &lt;module&gt; customers.id, last_purchase_at._as('last_purchase_at') ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: 'Field' object is not callable </code></pre> <p>Did I do something wrong? Is this just broken? I already tried simplifying the code or using PostgreSQLQuery, but same result.</p>
<python><pypika>
2023-11-15 15:16:33
1
2,482
user2741831
77,488,835
14,723,580
Gradient descent stuck in local minima?
<p>I'm running gradient descent to find a root for a system of nonlinear equations and I am wondering how you might detect if the method is stuck at the local minima, because I believe with the settings I am using this might be the case? my initial values are [-2, -1], tolerance of 10^-2 and 20 iterations. One thing I had read upon was that if the residual begins to flat line or begins to decrease incredibly slowly, it could be an indicator of the method being stuck in the local minima though, I am not entirely sure. I have graphed my residual with its iteration as the values of my iterates for each iteration and I'm wondering how I might know if it's stuck at the local minima.</p> <pre><code>def system(x): F = np.zeros((2,1), dtype=np.float64) F[0] = x[0]*x[0] + 2*x[1]*x[1] + math.sin(2*x[0]) F[1] = x[0]*x[0] + math.cos(x[0]+5*x[1]) - 1.2 return F def jacb(x): J = np.zeros((2,2), dtype=np.float64) J[0,0] = 2*(x[0]+math.cos(2*x[0])) J[0,1] = 4*x[1] J[1,0] = 2*x[0]-math.sin(x[0]+5*x[1]) J[1,1] = -5*math.sin(x[0]+5*x[1]) return J iterates, residuals = GradientDescent('system', 'jacb', np.array([[-2],[-1]]), 1e-2, 20, 0); </code></pre> <p><a href="https://pastebin.com/cmJn3WxC" rel="nofollow noreferrer">FullGradientDescent.py</a> <a href="https://pastebin.com/6HYnfh8P" rel="nofollow noreferrer">GradientDescentWithMomentum</a></p> <p>I'm testing usually with 20 iterations but I did 200 to illustrate the slowing down of the residual <a href="https://i.sstatic.net/wr8pi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wr8pi.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/bzopG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bzopG.png" alt="enter image description here" /></a></p> <p><strong>Marat</strong> suggested using GD with momentum. 
Code changes:</p> <pre><code>dn = 0 gamma = 0.8 dn_prev = 0 while (norm(F,2) &gt; tol and n &lt;= max_iterations): J = eval(jac)(x,2,fnon,F,*fnonargs) residuals.append(norm(F,2)) dn = gamma * dn_prev+2*(np.matmul(np.transpose(J), F)) dn_prev = dn lamb = 0.01 x = x - lamb * dn </code></pre> <p>Residual using GD with momentum <a href="https://i.sstatic.net/PP7lN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PP7lN.png" alt="enter image description here" /></a></p> <p><strong>lastchance</strong> suggested doing a contour plot, this seems to show the behaviour of the algorithm but it still does not converge? <a href="https://i.sstatic.net/QoNSg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QoNSg.png" alt="enter image description here" /></a></p>
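Gradient descent here minimises the merit function ‖F‖², whose stationary points need not be roots — at a non-root local minimum the gradient 2JᵀF vanishes even though F ≠ 0, so a residual that flat-lines above the tolerance is exactly the local-minimum signature. One pragmatic detector checks whether the residual has stopped decreasing over a trailing window (thresholds are illustrative):

```python
import numpy as np

def stalled(residuals, window=5, rel_tol=1e-6):
    """True if the residual barely changed over the last `window` steps."""
    if len(residuals) < window + 1:
        return False   # too few iterations to judge
    recent = np.asarray(residuals[-(window + 1):], dtype=float)
    scale = max(recent[0], 1.0)
    return bool(np.all(np.abs(np.diff(recent)) < rel_tol * scale))
```

When it fires while ‖F‖ is still above the tolerance, restart from a different initial guess or switch to a Newton step rather than continuing to descend.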
<python><numerical-methods><gradient-descent><root-finding>
2023-11-15 15:08:09
2
753
Krellex
77,488,813
4,966,886
Gradients and Laplacian of an image using skimage vs. open-cv
<p>I compare the vertical and horizontal gradients and Laplacian of an image using skimage and cv2 with the following code:</p> <pre><code>import sys import matplotlib.pyplot as plt from matplotlib.image import imread import skimage import cv2 def plot(ax, img, title): ax.imshow(img) # cmap = 'gray' ax.set_title(title) ax.set_xticks([]) ax.set_yticks([]) img = imread(&quot;./strawberry.jpg&quot;) laplacian = cv2.Laplacian(img,cv2.CV_32F) sobelx = cv2.Sobel(img,cv2.CV_32F,1,0,ksize=3) sobely = cv2.Sobel(img,cv2.CV_32F,0,1,ksize=3) fig1 = plt.figure(figsize=(10, 10)) fig1.suptitle('cv2', fontsize=14, fontweight='bold') ax = fig1.add_subplot(221) plot(ax, img, 'Original') ax = fig1.add_subplot(222) plot(ax, laplacian, 'Laplacian') ax = fig1.add_subplot(223) plot(ax, sobelx, 'Sobel X') ax = fig1.add_subplot(224) plot(ax, sobely, 'Sobel Y') fig1.set_tight_layout(True) laplacian = skimage.filters.laplace(img,ksize=5) sobelx = skimage.filters.sobel(img, axis=0) sobely = skimage.filters.sobel(img, axis=1) fig2 = plt.figure(figsize=(10, 10)) fig2.suptitle('skimage', fontsize=14, fontweight='bold') ax = fig2.add_subplot(221) plot(ax, img, 'Original') ax = fig2.add_subplot(222) plot(ax, laplacian, 'Laplacian') ax = fig2.add_subplot(223) plot(ax, sobelx, 'Sobel X') ax = fig2.add_subplot(224) plot(ax, sobely, 'Sobel Y') fig2.set_tight_layout(True) plt.show() </code></pre> <p>Here are the results:</p> <p><a href="https://i.sstatic.net/kPEdO.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kPEdO.jpg" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/xL6gf.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xL6gf.jpg" alt="enter image description here" /></a></p> <p>So, they are vastly different. Even if the kernels would differ, I would not expect such differences. Did I miss something in my script ?</p>
<python><opencv><matplotlib><scikit-image><laplacian>
2023-11-15 15:04:12
1
306
user11634
77,488,627
3,416,725
How to unit test mocking SQLAlchemy engine in Python for an update query
<p>I have a function called <code>update_worker_data</code>. This simply updates a table in PostgreSQL, however if the row does not exist in the db, it then inserts it. This is checked by getting the <code>rowcount</code> after the query is executed:</p> <pre><code>def update_worker_data(db: engine, data: List[dict]) -&gt; int: &quot;&quot;&quot;Update the data for the modifiable table. Args: ---------- * db(engine): db engine * date(List[dict]): data to either be updated or inserted Returns: ---------- * row_count(int): amount of rows updated and inserted &quot;&quot;&quot; update_query = &quot;&quot;&quot; UPDATE worker_data SET &quot;first_col&quot; = %(f_col)s, &quot;second_col&quot; = %(s_col)s, &quot;third_col&quot; = %(t_col)s, &quot;fourth_col&quot; = %(fo_col)s WHERE alt_id = %(a_id)s &quot;&quot;&quot; insert_query = &quot;&quot;&quot; INSERT INTO worker_data(&quot;first_col&quot;, &quot;second_col&quot;, &quot;third_col&quot;, &quot;fourth_col&quot;, &quot;alt_id&quot;) VALUES (%(f_col)s, %(s_col)s, %(t_col)s, %(fo_col)s, %(a_id)s); &quot;&quot;&quot; row_count = 0 for d in data: params = { &quot;f_col&quot;: d[&quot;first_col&quot;], &quot;s_col&quot;: d[&quot;second_col&quot;], &quot;t_col&quot;: d[&quot;third_col&quot;], &quot;fo_col&quot;: d[&quot;fourth_col&quot;], &quot;a_id&quot;: d[&quot;alt_id&quot;] } with db.connect() as conn: has_updated = conn.execute(update_query, params).rowcount has_inserted = 0 if has_updated == 0: has_inserted = conn.execute(insert_query, params).rowcount row_count += (has_inserted + has_updated) return row_count </code></pre> <p>My unit test is currently like so:</p> <pre><code>@patch(&quot;api.db.engine&quot;) def test_update_c_data(mock_engine, update_data_dict): cursor_mock = mock_engine.connect.return_value.__enter__.return_value cursor_mock.execute.return_value.rowcount = 1 actual_row_count = update_substance_store(mock_engine, update_data_dict) assert actual_row_count == 4 </code></pre> <p>When I run this unit 
test, it asserts true; however, this will only ever execute the first query (<code>update_query</code>) and will never enter the <code>has_update == 0</code> block because the return value of <code>.rowcount</code> always equals 1. So according to the <a href="https://docs.python.org/3.12/library/unittest.mock.html#unittest.mock.Mock.side_effect" rel="nofollow noreferrer">mock docs</a> I should use side_effect. I then create a list of values for the side_effect and use parameterize. My desired behaviour would then be to run first the <code>update_query</code> and then the <code>insert_query</code>.</p> <p>I then changed my unit test to use side_effect so it becomes like so:</p> <pre><code>@pytest.mark.parameterize(&quot;expected_row_count&quot;, [ ([1, 0]), ([0, 1]), ]) @patch(&quot;api.db.engine&quot;) def test_update_c_data(mock_engine, expected_row_count, update_data_dict): cursor_mock = mock_engine.connect.return_value.__enter__.return_value cursor_mock.execute.return_value.rowcount.side_effect = expected_row_count actual_row_count = update_substance_store(mock_engine, update_data_dict) assert actual_row_count == 5 </code></pre> <p>However, when I run this test, it just sets the value to a mock object. Is <code>side_effect</code> the correct way to handle this test case?</p>
<python><unit-testing><pytest><pytest-mock>
2023-11-15 14:38:29
0
493
mp252
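A sketch of how `side_effect` is usually wired for the mocking question above (pure `unittest.mock`; the names are hypothetical): it belongs on the callable (`execute`), not on the `.rowcount` attribute chain, so each call returns a mock carrying the next preset rowcount.

```python
from unittest.mock import MagicMock

# Hypothetical sketch: side_effect goes on the callable (execute), so each
# call returns a fresh mock whose .rowcount comes from a preset sequence.
def make_conn_mock(rowcounts):
    conn = MagicMock()
    conn.execute.side_effect = [MagicMock(rowcount=rc) for rc in rowcounts]
    return conn

conn = make_conn_mock([0, 1])  # UPDATE matches nothing, INSERT then adds 1 row
updated = conn.execute("UPDATE ...").rowcount   # first call -> 0
inserted = conn.execute("INSERT ...").rowcount  # second call -> 1
print(updated, inserted)  # 0 1
```

With this shape the production code under test follows the intended path: the zero rowcount from the UPDATE triggers the INSERT branch.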
77,488,626
6,246,426
Reformat long list in PyCharm to have one item per line
<p>When I do the code formatting in Pycharm it reformats the long lists in this manner:</p> <pre class="lang-py prettyprint-override"><code>style1 = [ 'Email:', 'SSN:', 'Address:', 'Home Phone:', 'Mobile Phone: ', 'DOB:', 'Date of Surgery:', 'Date of Service:', 'Facility of Service:', 'Clinic Number:', 'Employer:', 'Work Phone: ', 'Fax: ', 'Type:', 'IPA:', 'Health Plan:', 'ID #:', 'Claims Address:', 'Group #:', 'Claim # / PO #:', 'Phone:', 'Fax:', 'Contact', 'Adjuster Email', 'Util Review Phone', 'Util Review Fax', 'Doctor:', 'NPI #: ', 'Date of Injury: ', 'Body Parts:', 'Body Part Side:', 'Gender:', 'Diagnosis:', 'Diagnosis 2:', 'Procedure:' ] </code></pre> <p>Is there any way to reformat in PyCharm and have one item per line, like so:</p> <pre class="lang-py prettyprint-override"><code>style2 = [ 'Email:', 'SSN:', 'Address:', 'Home Phone:', 'Mobile Phone: ', 'DOB:', 'Date of Surgery:', 'Date of Service:', 'Facility of Service:', 'Clinic Number:', 'Employer:', 'Work Phone: ', 'Fax: ', 'Type:', 'IPA:', 'Health Plan:', 'ID #:', 'Claims Address:', 'Group #:', 'Claim # / PO #:', 'Phone:', 'Fax:', 'Contact', 'Adjuster Email', 'Util Review Phone', 'Util Review Fax', 'Doctor:', 'NPI #: ', 'Date of Injury: ', 'Body Parts:', 'Body Part Side:', 'Gender:', 'Diagnosis:', 'Diagnosis 2:', 'Procedure:' ] </code></pre> <p>I was trying to do it with .editconfig, but have not found a parameter that does the trick yet.</p>
<python><pycharm>
2023-11-15 14:38:04
0
1,208
Victor Di
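Not an `.editorconfig` answer to the PyCharm question above, but a commonly used workaround (an assumption: running Black, or any formatter that honours the "magic trailing comma", from PyCharm is acceptable): a collection literal that ends with a trailing comma is kept exploded to one element per line.

```python
# Hypothetical illustration of the "magic trailing comma" convention:
# formatters such as Black keep a collection at one item per line
# whenever the literal already ends with a comma.
style2 = [
    'Email:',
    'SSN:',
    'Address:',
    'Procedure:',  # trailing comma -> the formatter preserves this layout
]
print(len(style2))  # 4
```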
77,488,611
1,936,538
Jupyter Notebook export to HTML + Crontab doesn't show plots
<p>I am automating the automatic execution and HTML export of a Jupyter Notebook. The Jupyter Notebook was created in VS Code and when I export it from the terminal in Ubuntu (jupyter nbconvert --execute --to html) I get the correct HTML with all the plots. However this does not happen when I execute it in the crontab or when use the terminal connected remotely from another PC with ssh the plots are not shown. The script runs fine and outputs the print commands but not the plot.</p> <p>The code goes like this:</p> <pre><code>for i in range(15): (... ) np.log(pd_df+1).join(pd_df_2).plot() plt.show() plt.close() pd_df.join(pd_df_2).plot() plt.show() plt.close() np.clip(pd_df_3, -1, 1).plot() plt.show() plt.close()` (...) </code></pre> <p>I tried to include the code bellow, I have executed in sh instead of bash.</p> <pre><code>%matplotlib inline # prior to import command </code></pre> <p>I also tried to include the code bellow but it also did not work.</p> <pre><code>import plotly.io as pio pio.renderers.default = 'notebook' </code></pre>
<python><pandas><matplotlib><jupyter-notebook><cron>
2023-11-15 14:35:52
0
403
husvar
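One possible cause of the missing plots in the crontab question above (an assumption: a cron job has no display, so an interactive backend can silently drop figures). A minimal headless sketch — select a non-interactive backend before pyplot is imported and save the figure explicitly:

```python
import matplotlib
matplotlib.use("Agg")  # must happen before pyplot is imported

import matplotlib.pyplot as plt

# A tiny stand-in plot; the point is savefig() instead of plt.show().
fig, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 4, 9])
fig.savefig("plot.png")  # written even with no display attached
```

Inside the notebook itself, keeping `%matplotlib inline` in the first cell (before any `import matplotlib`) is what lets nbconvert capture the figures.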
77,488,578
3,734,059
SQLAlchemy&gt;2.0 does not INSERT or UPSERT into MySQL when using the text() function
<p>We have recently updated to <code>SQLAlchemy==2.0.23</code> which requires the usage of <a href="https://docs.sqlalchemy.org/en/20/core/sqlelement.html#sqlalchemy.sql.expression.text" rel="nofollow noreferrer"><code>sqlalchemy.sql.expression.text</code></a> to format executed queries.</p> <p>In older versions e. g. <code>SQLAlchemy==1.4.49</code> we could <code>INSERT</code> or <code>UPSERT</code> data using <code>session.execute(my_query)</code> which does not work anymore with <code>SQLAlchemy==2.0.23</code> using <code>session.execute(text(my_query))</code>.</p> <p>Here's a minimal example of the problem:</p> <pre><code>from sqlalchemy import create_engine, text from sqlalchemy.orm import create_session my_query = &quot;&quot;&quot; INSERT INTO test (`col_a`, `col_b`) VALUES (4, 7), (5, 8), (6, 9) as nd ON DUPLICATE KEY UPDATE `col_a` = nd.`col_a`, `col_b` = nd.`col_b` &quot;&quot;&quot; constring = &quot;mysql+pymysql://my_user:my_password@my.host:3306/my_db&quot; connection = create_engine(constring) connection.echo = True # enable logging session = create_session(bind=connection) cursor = session.execute(text(my_query)) # does not INSERT or UPSERT </code></pre> <p>Interestingly, if I run the plain query on the database, it works perfectly and the logging also shows a working query.</p> <p>Any hints on how to resolve this error?</p>
<python><mysql><sqlalchemy>
2023-11-15 14:30:52
1
6,977
Cord Kaldemeyer
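For the SQLAlchemy question above, the statement most likely runs but its implicit transaction is rolled back: SQLAlchemy 2.0 removed library-level autocommit, so `execute()` needs an explicit `commit()` (or an `engine.begin()` block). A sketch using in-memory SQLite purely so it runs anywhere — the pattern is the same for the MySQL URL in the question:

```python
from sqlalchemy import create_engine, text

# In-memory SQLite stands in for the real MySQL connection here.
engine = create_engine("sqlite+pysqlite:///:memory:")

# engine.begin() opens a transaction and commits it on success; in 2.0 a
# bare connect()/execute() is rolled back when the block ends.
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE test (col_a INTEGER, col_b INTEGER)"))
    conn.execute(text("INSERT INTO test (col_a, col_b) VALUES (4, 7)"))

with engine.connect() as conn:
    rows = conn.execute(text("SELECT col_a, col_b FROM test")).all()
print(rows)  # [(4, 7)]
```

The equivalent fix for the code in the question is calling `session.commit()` (or `conn.commit()`) after the execute; the echoed SQL looks fine either way, which is why the log is misleading.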
77,488,576
5,309,827
Django UserProfile nested relationship is empty in Django REST
<p>Hello, I have a UserProfile model in Django, and I want to serialize my User model with its UserProfile nested to receive it via AJAX in my template, but it comes back empty.</p> <p>Model UserProfile</p> <pre><code>class UserProfile(models.Model): user = models.OneToOneField(User, related_name='profile', on_delete=models.DO_NOTHING) departamento = models.ForeignKey(Departamento,on_delete=models.PROTECT,related_name=&quot;pertenece_a&quot;) class Meta: db_table = 'UserProfile' def user_profile(sender, instance, signal, *args, **kwargs): Userprofile, new = UserProfile.objects.get_or_create(user=instance) signals.post_save.connect(user_profile, sender=User) </code></pre> <p>Serializers</p> <pre><code>class ProfileSerializer(serializers.Serializer): class Meta: model = UserProfile fields = [&quot;departamento&quot;] class GestionUsuarioSerializer(serializers.ModelSerializer): profile = ProfileSerializer(many=True,read_only=True) class Meta: model = User fields = [&quot;id&quot;,&quot;username&quot;,&quot;email&quot;,&quot;first_name&quot;,&quot;last_name&quot;,&quot;profile&quot;] </code></pre> <p>Result</p> <p><a href="https://i.sstatic.net/yM3Wx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yM3Wx.png" alt="enter image description here" /></a></p>
<python><django><django-rest-framework>
2023-11-15 14:30:38
1
323
Jaime
77,488,275
4,120,777
Filter by list's element in payload
<p>In Qdrant DB I have a payload containing a list. How can I filter results of a search, limiting to the ones where the list contains a specific element?</p> <p>As an example, if the set of points is:</p> <pre class="lang-json prettyprint-override"><code>[ { &quot;id&quot;: 1, &quot;Fruit&quot;: [&quot;apple&quot;, &quot;banana&quot;, &quot;orange&quot;] }, { &quot;id&quot;: 2, &quot;Fruit&quot;: [&quot;pear&quot;, &quot;orange&quot; ] }, { &quot;id&quot;: 3, &quot;test&quot;: &quot;empty&quot; }, { &quot;id&quot;: 4, &quot;Fruit&quot;: [&quot;apple&quot;, &quot;orange&quot;, &quot;pear&quot;] } ] </code></pre> <p>And I want to filter the results containing at least an Apple AND an Orange, i.e. the ones with ID 1 and 4.</p> <p>How can I build such a filter?</p> <p>I already had a look at the documentation <a href="https://qdrant.tech/documentation/concepts/filtering/" rel="nofollow noreferrer">at this link</a>, without success.</p> <p>My case is not covered in the docs. What I tried is:</p> <pre><code>Filter(should=[FieldCondition(key='Fruit', match=MatchValue(value='Banana'), range=None, geo_bounding_box=None, geo_radius=None, values_count=None), FieldCondition(key='Fruit', match=MatchValue(value='Apple'), range=None, geo_bounding_box=None, geo_radius=None, values_count=None)], must=None, must_not=None) </code></pre> <p>But the result that I know is in the DB does not show up.</p> <p>Thank you in advance.</p>
<python><vector-database><qdrant><qdrantclient>
2023-11-15 13:51:39
1
2,136
Vincenzo Lavorini
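For reference on the Qdrant question above: the AND semantics live in `must`, not `should` (`should` is OR), and `MatchValue` is an exact, case-sensitive match, so `'Apple'` will not match a stored `'apple'`. A sketch of the filter as a plain REST-style dict (the same shape the client models serialise to):

```python
# must = AND: a point qualifies only if its Fruit array contains BOTH
# values; note the lowercase spelling matching the stored payload.
query_filter = {
    "must": [
        {"key": "Fruit", "match": {"value": "apple"}},
        {"key": "Fruit", "match": {"value": "orange"}},
    ]
}
print([c["match"]["value"] for c in query_filter["must"]])  # ['apple', 'orange']
```

The `qdrant_client` equivalent would place the two `FieldCondition` objects in `must=[...]` instead of `should=[...]`.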
77,488,178
17,136,258
Calculating the duration of an event
<p>I have a problem. I have a list that is full of events (note: the list does not have to be sorted by date and can contain multiple IDs). I would like to &quot;know&quot; how long an event lasted. The calculation can be represented as follows</p> <pre><code>Duration of Event1 = Take the ID(Timestamp Event 2 - Timestamp Event 1) </code></pre> <p>For example:</p> <pre><code>Cleaning = ID1234 ( 06.11.2023 14:29- 06.11.2023 14:19) = 10 min </code></pre> <p><code>End of Work</code> refers to the end, this should not be measured. I have tried it, but it counts incorrectly. So how can I improve it so that I get the desired output?</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd # Sample DataFrame data = { 'Timestamp': ['06.11.2023 14:19', '06.11.2023 14:29', '06.11.2023 14:37', '06.11.2023 14:41', '06.11.2023 15:00'], 'Event Date': ['06.11.2023', '06.11.2023', '06.11.2023', '06.11.2023', '06.11.2023'], 'ID': ['1234', '1234', '1234', '1234', '1234'], 'Event Category': ['Working Event', 'Working Event', 'Working Event', 'Failure Event', 'Working Event'], 'What did you do?': ['Cleaning', 'Recording', 'Insert', pd.NA, 'End of Work'] } df = pd.DataFrame(data) df['Timestamp'] = pd.to_datetime(df['Timestamp'], format='%d.%m.%Y %H:%M') df = df.sort_values(by=['ID', 'Event Category', 'Timestamp']) df['Duration'] = df.groupby(['ID', 'Event Category'])['Timestamp'].diff().dt.seconds.div(60, fill_value=0) print(df) </code></pre> <p>Dataframe</p> <pre class="lang-py prettyprint-override"><code> Timestamp Event Date ID Event Category What did you do? 
0 06.11.2023 14:19 06.11.2023 1234 Working Event Cleaning 1 06.11.2023 14:29 06.11.2023 1234 Working Event Recording 2 06.11.2023 14:37 06.11.2023 1234 Working Event Insert 3 06.11.2023 14:41 06.11.2023 1234 Failure Event &lt;NA&gt; 4 06.11.2023 15:00 06.11.2023 1234 Working Event End of Work </code></pre> <p>What I have</p> <pre class="lang-py prettyprint-override"><code>[OUT] Timestamp Event Date ID Event Category What did you do? \ 3 2023-11-06 14:41:00 06.11.2023 1234 Failure Event &lt;NA&gt; 0 2023-11-06 14:19:00 06.11.2023 1234 Working Event Cleaning 1 2023-11-06 14:29:00 06.11.2023 1234 Working Event Recording 2 2023-11-06 14:37:00 06.11.2023 1234 Working Event Insert 4 2023-11-06 15:00:00 06.11.2023 1234 Working Event End of Work Duration 3 0.0 0 0.0 1 10.0 2 8.0 4 23.0 </code></pre> <p>What I want</p> <pre class="lang-py prettyprint-override"><code> Timestamp Event Date ID Event Category What did you do? Duration 0 06.11.2023 14:19 06.11.2023 1234 Working Event Cleaning 10 1 06.11.2023 14:29 06.11.2023 1234 Working Event Recording 8 2 06.11.2023 14:37 06.11.2023 1234 Working Event Insert 4 3 06.11.2023 14:41 06.11.2023 1234 Failure Event &lt;NA&gt; 19 4 06.11.2023 15:00 06.11.2023 1234 Working Event End of Work - </code></pre>
<python><pandas><dataframe>
2023-11-15 13:40:28
2
560
Test
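The desired output in the question above is "time until the next event for the same ID", regardless of category, so the diff has to look forward across all rows of an ID rather than within each (ID, Event Category) group. A sketch of that interpretation:

```python
import pandas as pd

data = {
    "Timestamp": ["06.11.2023 14:19", "06.11.2023 14:29", "06.11.2023 14:37",
                  "06.11.2023 14:41", "06.11.2023 15:00"],
    "ID": ["1234"] * 5,
    "What did you do?": ["Cleaning", "Recording", "Insert", pd.NA, "End of Work"],
}
df = pd.DataFrame(data)
df["Timestamp"] = pd.to_datetime(df["Timestamp"], format="%d.%m.%Y %H:%M")
df = df.sort_values(["ID", "Timestamp"])

# shift(-1) looks at the NEXT row per ID: an event lasts until its successor
df["Duration"] = (
    df.groupby("ID")["Timestamp"].shift(-1) - df["Timestamp"]
).dt.total_seconds().div(60)

print(df["Duration"].tolist())  # [10.0, 8.0, 4.0, 19.0, nan]
```

The final "End of Work" row naturally gets `NaN` since it has no successor, matching the "-" in the desired table.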
77,488,173
5,040,775
Can you create some kind of execution file (e.g., exe) to run Python code on a computer that does not have Python?
<p>I am writing Python code to automatically generate a report. Ultimately, I want other people at my firm to be able to run this easily by using some kind of execution file. None of them have Python on their PC and I don't want them to install it (for operational reasons). Is it even possible to do that?</p>
<python>
2023-11-15 13:40:07
1
3,525
JungleDiff
77,488,041
22,538,132
change background color in open3d.visualization.draw_geometries
<p>I want to change the background color in Open3D when I draw geometries using <code>open3d.visualization.draw_geometries</code>, but I can't figure out how to do that, as <a href="http://www.open3d.org/docs/release/python_api/open3d.visualization.draw_geometries.html" rel="nofollow noreferrer">the documentation</a> does not show how to do it.</p> <p>Can you please tell me how I can change the background color, or show a <code>Skymap</code> example? Thanks in advance.</p>
<python><colors><background><open3d>
2023-11-15 13:21:12
1
304
bhomaidan90
77,488,011
22,221,987
How to fill a dynamically expanding HTML file with templates populated with Python data
<p>I have an HTML template for one message (I receive messages via the <code>Slack-API</code>). I want to fill the template with information from the API and then stick the templates together in a new HTML file (the output file will look like a chat wall with every message from the chat).</p> <p>Because chats differ in size, I can't make a full template for the whole chat. So I'm trying to expand my output HTML dynamically with my templates, filled with data.</p> <p>I defined some CSS classes in the output HTML, but they are not so important for this question, so here is a single-message HTML example, which I append to the output HTML:</p> <pre><code>&lt;div class='message_body'&gt; &lt;div class=&quot;message_header&quot;&gt; &lt;strong class=&quot;user_name&quot;&gt;Some UserName&lt;/strong&gt; &lt;span class=&quot;date&quot;&gt;11.12.23 2:22&lt;/span&gt; &lt;/div&gt; &lt;hr&gt; &lt;ol class=&quot;ordered_list&quot;&gt; &lt;li class=&quot;message_text&quot;&gt;Lorem ipsum dolor sit amet, consectetur...&lt;br /&gt;And second line&lt;/li&gt; &lt;/ol&gt; &lt;/div&gt; </code></pre> <p>I thought to save this template as a Python string variable, then use f-strings to add some data to it and then just write every filled template string to <code>output.html</code>.</p> <p>But it doesn't look optimised to me. Are there any better solutions for dynamically creating HTML with Python-parsed data?</p> <p>I've heard about <code>Jinja</code> and <code>airum</code> but I can't choose between them.</p> <p>In addition, are there any other optimised solutions?</p> <p><strong>UPD</strong>: Added a chat HTML example. <a href="https://i.sstatic.net/MNZcb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MNZcb.png" alt="enter image description here" /></a></p> <p>Every message in this example is created with that HTML example. <code>Replies</code> is just an expandable list menu with the same message templates.</p>
<python><html><python-3.x><parsing><jinja2>
2023-11-15 13:16:12
0
309
Mika
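A minimal Jinja2 sketch of the render-per-message-and-join idea from the question above (the template is trimmed and the data is hypothetical):

```python
from jinja2 import Template

# One message template, rendered once per message and concatenated.
message_tpl = Template(
    '<div class="message_body">'
    '<strong class="user_name">{{ user }}</strong> '
    '<span class="date">{{ date }}</span>'
    '<li class="message_text">{{ text }}</li>'
    "</div>"
)

messages = [  # stand-in for data parsed from the Slack API
    {"user": "Some UserName", "date": "11.12.23 2:22", "text": "Lorem ipsum..."},
    {"user": "Another User", "date": "11.12.23 2:25", "text": "A reply"},
]

chat_html = "\n".join(message_tpl.render(m) for m in messages)
print(chat_html.count("message_body"))  # 2
```

Compared with f-strings, Jinja also escapes values for you (with autoescaping on), which matters once message text can contain `<` or `&`.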
77,488,002
14,244,437
Make inner joins by using Q objects in Django
<p>I'm customizing a model's Admin change list page in my project.</p> <p>This model contains a Many to Many field and I want to allow multi-selection filtering.</p> <p>I've used the <a href="https://github.com/JobDoesburg/django-admin-multi-select-filter" rel="nofollow noreferrer">Django Admin Multi Select Filter package</a> to do this but I noticed that the default behaviour in Django's <code>__in</code> lookup is to be inclusive and not exclusive.</p> <p>Imagine I have an object A associated with M2M objects 1 and 2 and object B associated with M2M objects 2 and 3. If I make the following query <code>Model.objects.filter(m2m_model__in=[1,2])</code> I will have both objects A and B being retrieved, since the search will use the following condition:</p> <p><code>Index Cond: (models_model_m2mfield.model_id = ANY ('{1,2}'::bigint[]))</code></p> <p>I was able to customize the behaviour of the search by doing the following:</p> <pre><code> for choice in choices: queryset = queryset.filter(**{self.lookup_kwarg: choice}) </code></pre> <p>This generates inner joins and returns the value I'm waiting for (only objects matching every option).</p> <pre><code>INNER JOIN &quot;models_model_m2mfield&quot; ON (&quot;models_m2mmodel&quot;.&quot;id&quot; = &quot;models_model_m2mfield&quot;.&quot;m2mfield_id&quot;) INNER JOIN &quot;models_model_m2mfield&quot; T4 ON (&quot;models_m2mmodel&quot;.&quot;id&quot; = T4.&quot;m2mfield_id&quot;) WHERE (&quot;models_model_m2mfield&quot;.&quot;model_id&quot; = 4 AND T4.&quot;model_id&quot; = 5) </code></pre> <p>I've tried using Q objects to get the same result, but I couldn't make it work. Is it possible to achieve the same behaviour using the Q class?</p>
<python><django><django-orm>
2023-11-15 13:14:43
1
481
andrepz
77,487,973
1,936,752
How to deal with off-by-one issues in convolution (Python)?
<p>I'm trying to write a function to add two random variables <code>X1</code> and <code>X2</code>. In my case, they are both uniform random variables from <code>0</code> to <code>a1</code> and <code>0</code> to <code>a2</code>. To compute the random variable <code>Y = X1 + X2</code>, I need to take a convolution of the probability distributions of <code>X1</code> and <code>X2</code>.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from scipy.integrate import simps def convolution(f, g, x_range): delta = (x_range[-1] - x_range[0])/len(x_range) result = np.convolve(f(x_range), g(x_range), mode = 'full')*delta return result # Define uniform distribution for some a &gt; 0. This part can be adapted to arbitrary distributions def uniform_dist(x, a): return np.where((x &gt;= 0) &amp; (x &lt;= a), 1/a, 0) # Set the range of x values, y values and constants delta = 0.1 x_lim_low = -5 x_lim_upp = 5 a1 = 1 a2 = 1 x_range = np.arange(x_lim_low,x_lim_upp+delta,delta) y_range = np.arange(2*x_lim_low,2*x_lim_upp+delta,delta) # Perform convolution convolution_pdf = convolution(lambda x: uniform_dist(x, a1), lambda x: uniform_dist(x, a2), x_range) # Find mean of convolution convolution_mean = np.sum(convolution_pdf*y_range)*delta </code></pre> <p>I've tried various combinations but have small errors in the mean. I think this is because the convolution is an array of dimension <code>2*len(x_range) - 1</code> and it's unclear how to deal with this off-by-one error.</p> <p>What is the correct way to convolve two variables such that I can compute the mean of the convolution correctly?</p>
<python><probability><convolution><off-by-one>
2023-11-15 13:10:22
1
868
user1936752
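One way to sidestep the off-by-one issue in the question above: derive the output grid from the 'full' convolution length (2n − 1 points for two length-n inputs, support starting at x[0] + x[0]) and normalise before taking the mean. A sketch with two U(0, 1) variables, whose sum has mean 1:

```python
import numpy as np

# n input points -> 'full' convolution has 2n - 1 points, with support
# starting at x[0] + x[0]; build y from that instead of a separate arange.
x = np.linspace(-5, 5, 101)
delta = x[1] - x[0]                                  # 0.1
eps = 1e-9                                           # guard against float edges
f = ((x >= -eps) & (x <= 1 + eps)).astype(float)     # U(0, 1) pdf
g = f.copy()                                         # second U(0, 1) pdf

pdf = np.convolve(f, g, mode="full") * delta         # length 2*101 - 1 = 201
y = 2 * x[0] + delta * np.arange(len(pdf))           # matching support grid

mean = np.sum(pdf * y) / np.sum(pdf)                 # normalised mean
print(len(pdf), round(mean, 3))  # 201 1.0
```

Dividing by `np.sum(pdf)` also absorbs the small normalisation error a Riemann sum introduces at the endpoints of the uniform density.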
77,487,776
1,608,765
Fitting a Rice distribution using scipy
<p>I'm trying to write a fitter for some rice distributed data that I have, but it is not working for some, probably stupid, reason.</p> <p>The distribution gets created fine, and the fitting routine seems to work from what I am used to with Gaussians. However, when I fit the curve, I just get nonsense. Can't seem to see where I am going wrong.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from scipy.stats import rice from scipy.optimize import curve_fit # Custom Rice PDF function def rice_pdf(x, nu, amplitude, b, scale): return (x / b) * np.exp(-(x**2 + scale**2) / (2 * b**2)) * amplitude # Function to fit a Rice distribution to a histogram using curve_fit def fit_rice_distribution_to_histogram(hist_data, bins): # Calculate bin centers bin_centers = (bins[:-1] + bins[1:]) / 2 # Initial guesses for the parameters (nu, amplitude, b, scale) initial_guess = [8.4, 1.0, 1.0, np.mean(bin_centers)] # Fit the Rice distribution to the histogram data using curve_fit params, covariance = curve_fit(rice_pdf, bin_centers, hist_data, p0=initial_guess) nu, amplitude, b, scale = params # Create the fitted Rice distribution fitted_distribution = rice(nu, loc=scale, scale=np.sqrt(b**2 + scale**2)) return fitted_distribution, nu, amplitude, b, scale # Example usage: if __name__ == &quot;__main__&quot;: # Parameters for the Rice distribution nu = 8.5 sigma = 10.5 sample_size = 100 # Calculate b from nu and sigma b = nu / sigma # Generate random data points from the Rice distribution data = rice.rvs(b=b, scale=sigma, size=sample_size) # Create a histogram of the generated data hist_data, bins, _ = plt.hist(data, bins=20, density=True, alpha=0.5, label=&quot;Generated Data&quot;) plt.xlabel(&quot;Value&quot;) plt.ylabel(&quot;Probability Density&quot;) # Fit a Rice distribution to the histogram using curve_fit fitted_distribution, fitted_nu, amplitude, fitted_b, fitted_scale = fit_rice_distribution_to_histogram(hist_data, bins) # Plot the original histogram and the
fitted distribution x = np.linspace(min(bins), max(bins), 1000) pdf_values = fitted_distribution.pdf(x) plt.plot(x, pdf_values, 'r', label=&quot;Fitted Rice Distribution&quot;) plt.legend() plt.show() # Print fitted parameters print(&quot;Fitted Nu:&quot;, fitted_nu) print(&quot;Fitted Amplitude:&quot;, amplitude) print(&quot;Fitted b:&quot;, fitted_b) print(&quot;Fitted Scale:&quot;, fitted_scale) </code></pre>
<python><scipy><curve-fitting>
2023-11-15 12:38:53
1
2,723
Coolcrab
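An alternative sketch for the Rice-fitting question above that avoids hand-rolling the pdf entirely: `scipy.stats.rice.fit` estimates (b, loc, scale) by maximum likelihood, and pinning `floc=0` matches how the samples were generated. (The exact fitted values depend on the sample, so only a rough check is shown.)

```python
from scipy.stats import rice

# Generate data the same way as in the question, then let scipy fit it.
nu, sigma = 8.5, 10.5
b_true = nu / sigma
data = rice.rvs(b=b_true, scale=sigma, size=5000, random_state=0)

# floc=0 fixes the location at 0, the value implicitly used when sampling.
b_fit, loc_fit, scale_fit = rice.fit(data, floc=0)
print(round(scale_fit, 1))  # should land near 10.5 (exact value varies)
```

Custom `curve_fit` on histogram bins also works, but the pdf then has to be the actual Rice density (the one in the question drops the Bessel-function factor), which is usually where the "nonsense" fits come from.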
77,487,774
458,742
"commands out of sync; you can't run this command now" on first use of cursor
<p>I am using MySQL in Django (Python).</p> <p>The draft version of this code worked and was stable, but it created cursors without closing them, and I just rewrote it to manage the cursor with enter/exit. Now it is unstable.</p> <pre><code>class Cursor: def __init__ (self, db): self.db = db self.cursor = db.DB_connection.cursor () def __enter__ (self): return self def __exit__ (self, ex_type, ex_value, ex_traceback): if ex_value is not None: # log stuff self.cursor.close () self.cursor = None def __del__ (self): if self.cursor is not None: # log stuff self.cursor.close () def one (self, query, args): self.cursor.execute (query, args) return self.cursor.fetchone () class DB: def __init__ (self, my_stuff): self.DB_connection = connect (...) def get (self): return Cursor (self) </code></pre> <p>and in the application:</p> <pre><code>with DB_connection.get() as db: result = db.one (&quot;SELECT ...&quot;, ...) </code></pre> <p>Sometimes this works as before, but sometimes it will randomly fail when calling <code>db.one()</code>:</p> <pre><code>_mysql_connector.MySQLInterfaceError: Commands out of sync; you can't run this command now </code></pre> <p>This exception is seen in Cursor's <code>__exit__</code>.</p> <p>Googling this error tells me that this means an earlier SELECT on that cursor still has un-fetched results. But this is nonsense since <code>with DB_connection.get() as db</code> creates a new cursor.</p> <p>Also sometimes the process simply exits without printing any exception info.
Docker log looks like this</p> <pre><code>www_1 | &quot;GET /test_page HTTP/1.1&quot; 200 16912 docker_www_1 exited with code 245 </code></pre> <p>These crashes are non-deterministic even though the code in the view which creates the cursor is entirely deterministic.</p> <p>I have added some print statements which show that there are only ever 0 or 1 cursors simultaneously in existence during the flow of the application.</p> <p>In the draft version, instead of</p> <pre><code>with DB_connection.get() as db: result = db.one (&quot;SELECT ...&quot;, ...) </code></pre> <p>it would have been something like</p> <pre><code>db = DB_connection.get() result = db.one (&quot;SELECT ...&quot;, ...) </code></pre> <p>And that was very stable, although it leaks the cursor resource. I don't think anything else significantly changed since it was stable, other than wrapping the cursor in enter/exit.</p> <p>Is this the correct way to use the API?</p>
<python><mysql><django><database-cursor>
2023-11-15 12:38:29
0
33,709
spraff
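For reference on the question above, a runnable sketch of the cursor-per-operation pattern, using `sqlite3` so it works anywhere. With `mysql.connector` the usual extra step is a buffered cursor (`conn.cursor(buffered=True)`) or draining every result set, since "commands out of sync" typically means a previous result was left pending on the shared connection — a new cursor does not reset the connection's pending state.

```python
import sqlite3

class Cursor:
    def __init__(self, conn):
        self.cursor = conn.cursor()

    def __enter__(self):
        return self

    def __exit__(self, ex_type, ex_value, ex_tb):
        self.cursor.close()   # always close, not only on error
        self.cursor = None
        return False          # don't swallow exceptions

    def one(self, query, args=()):
        self.cursor.execute(query, args)
        return self.cursor.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (42)")

with Cursor(conn) as cur:
    row = cur.one("SELECT x FROM t WHERE x = ?", (42,))
print(row)  # (42,)
```

Note the `__exit__` here closes unconditionally; in the original, closing only under `if ex_value is not None` would leave cursors (and their unread results) open on the happy path.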
77,487,393
10,522,901
scikit-learn fit function returns immediately with 0 training samples processed
<p>I am experiencing an issue with <code>scikit-learn</code> version <code>1.3.2</code>, where the <code>fit()</code> function of the <code>MLPClassifier</code> returns almost instantly without processing any training samples. This is evident as the model's <code>t_</code> parameter (indicating the number of training samples seen by the solver during fitting) remains at <code>0</code>.</p> <p>This problem occurred while attempting to fit a dataset with over 100,000 training samples, where the <code>fit()</code> function returned in less than 10 milliseconds. The following minimal example replicates the issue:</p> <pre class="lang-py prettyprint-override"><code>from sklearn.neural_network import MLPClassifier X = [[1., 2.], [3., 4.]] y = [1, 0] clf = MLPClassifier(solver=&quot;lbfgs&quot;, alpha=1e-5, hidden_layer_sizes=(5, 2)) clf.fit(X, y) print(clf.t_) # Outputs: 0 </code></pre> <p>I have searched but haven't found similar issues reported. Any insights into why this might be happening would be greatly appreciated.</p> <p>Edit: I observed this behavior consistently across multiple <code>scikit-learn</code> models.</p>
<python><scikit-learn>
2023-11-15 11:38:02
0
316
vegarab
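A possible explanation for the scikit-learn question above (hedged — based on how the solvers are implemented): `t_` is a counter maintained only by the incremental `sgd`/`adam` solvers; with `solver="lbfgs"` scipy's optimizer is used instead, so `t_` can stay 0 even though training happened. Attributes such as `loss_` or `n_iter_` confirm the fit:

```python
from sklearn.neural_network import MLPClassifier

X, y = [[1.0, 2.0], [3.0, 4.0]], [1, 0]

# lbfgs delegates to scipy's optimizer, which never touches t_,
# so t_ == 0 does not mean the model skipped training.
clf = MLPClassifier(solver="lbfgs", alpha=1e-5,
                    hidden_layer_sizes=(5, 2), random_state=0)
clf.fit(X, y)
print(clf.t_, clf.loss_ is not None)  # 0 True -> it trained anyway

# The incremental solvers do maintain the counter.
clf_adam = MLPClassifier(solver="adam", max_iter=50, random_state=0)
clf_adam.fit(X, y)
print(clf_adam.t_ > 0)  # True
```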
77,487,354
9,353,682
How to suppress the `unused-property` error in vulture
<p>I have noticed that the <a href="https://github.com/jendrikseipp/vulture" rel="nofollow noreferrer">vulture</a> package started to report a new error, <code>vulture: unused-property</code>. The <a href="https://github.com/jendrikseipp/vulture/blob/main/README.md" rel="nofollow noreferrer">documentation</a> specifies the following error codes (just like flake8 - <a href="https://flake8.pycqa.org/en/latest/user/error-codes.html" rel="nofollow noreferrer">https://flake8.pycqa.org/en/latest/user/error-codes.html</a>):</p> <ul> <li>F401 - <code>unused-import</code></li> <li>F841 - <code>unused-variable</code></li> </ul> <p>There is no code specified for <code>unused-property</code> though. I would like to have a code so I can suppress just this error (instead of using <code># noqa</code> to suppress all errors on the line), as I use other linters as well.</p>
<python>
2023-11-15 11:31:00
1
722
Maciek
77,487,326
9,110,646
Why doesn't a dict comprehension work when a for-loop does?
<p>When calling the sample function <code>func</code> in this module, why does it throw an exception when I use comprehension (can be toggled with parameter)? Can someone explain the meaning of the exception? <code>cycle</code>seems to be overwritten and I can not wrap my head around it.</p> <h1>Example function with same functionality as loop and comprehension</h1> <pre><code>import pandas as pd def func( x_dict, keys_list, start_cycle, end_cycle, comprehension=True ): x_test_dicts = {} for cycle in range(start_cycle, end_cycle + 1): print(f&quot;cycle = {cycle}&quot;) if comprehension: # Fill the dict with comprehension. x_test_dict = { f&quot;{key}_input&quot;: x_dict[key].query('cycle == @cycle').values for key in keys_list } else: # Fill the dict with normal for loop. x_test_dict = {} for key in keys_list: x_test_dict[f&quot;{key}_input&quot;] = \ x_dict[key].query('cycle == @cycle').values x_test_dicts[cycle] = x_test_dict return x_test_dicts </code></pre> <h1>Creation of test data</h1> <pre><code>import pandas as pd import numpy as np # Create an ID array from 1 to 1000 ids = np.arange(1, 1001) # Calculate cycle as ID divided by 100 cycles = ids // 100 # Generate random integer values for the remaining columns # Assuming a range for random integers (e.g., 0 to 100) col1_int = np.random.randint(0, 101, 1000) col2_int = np.random.randint(0, 101, 1000) col3_int = np.random.randint(0, 101, 1000) # Update the DataFrame with integer values df = pd.DataFrame({ &quot;ID&quot;: ids, &quot;cycle&quot;: cycles, &quot;col1&quot;: col1_int, &quot;col2&quot;: col2_int, &quot;col3&quot;: col3_int }) df.head() # Display the first few rows of the updated DataFrame </code></pre> <h1>Run test cases with functions</h1> <pre><code>import pandas as pd df = df.set_index(['ID', 'cycle']) # Use multi-indexing x_dict = {'Auxin': df} # Create a simple dict with the DataFrame keys_list = ['Auxin'] # Define a list of keys to work with # Define ranges for the loop inside `func` 
start_cycle = 6 end_cycle = 29 # RUNS SUCCESSFULLY WITHOUT LIST COMPREHENSION comprehension = False result = func( x_dict, keys_list, start_cycle, end_cycle, comprehension=comprehension ) print(&quot;Worked without dict comprehension!&quot;) # FAILS WITH LIST COMPREHENSION comprehension = True result = func( x_dict, keys_list, start_cycle, end_cycle, comprehension=comprehension ) print(&quot;Breaks when dict comprehension is used!&quot;) </code></pre> <h1>The error</h1> <pre><code>UndefinedVariableError: local variable 'cycle' is not defined </code></pre>
<python><pandas><dictionary><dictionary-comprehension>
2023-11-15 11:26:51
1
423
Pm740
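A hedged sketch of the usual workaround for the comprehension question above: a comprehension has its own scope, and `query`'s `@`-resolution inspects frame locals, which is where the lookup falls apart; passing the variable explicitly through `local_dict` (or formatting the literal into the expression) avoids frame inspection altogether.

```python
import pandas as pd

df = pd.DataFrame({"cycle": [1, 1, 2], "val": [10, 20, 40]})

# local_dict hands the value to query() directly, so it no longer matters
# which scope the comprehension body runs in.
result = {
    f"key_{c}": int(df.query("cycle == @c", local_dict={"c": c})["val"].sum())
    for c in (1, 2)
}
print(result)  # {'key_1': 30, 'key_2': 40}
```

A plain boolean mask (`df[df["cycle"] == c]`) works equally well and avoids the `query` scoping machinery entirely.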
77,487,292
15,452,601
How do I quiet mypy when testing inheritence from a generic?
<p>The following MWE constructs a mapping between the typevars used in a generic class and their declared values on an instance:</p> <pre class="lang-py prettyprint-override"><code>from typing import Generic, TypeVar, get_args T = TypeVar(&quot;T&quot;) class Derived(Generic[T]): def method(self, val: T) -&gt; T: return val d = Derived[int]() def get_generic_types_mapping(obj: object) -&gt; dict[type, type]: if isinstance(obj, Generic): generic_base = next( origin for origin in obj.__orig_bases__ if hasattr(origin, &quot;__origin__&quot;) and origin.__origin__ is Generic ) return { generic: decorated for generic, decorated in zip( get_args(generic_base), get_args(obj.__orig_class__) ) } else: return {} assert get_generic_types_mapping(d) == {T: int} assert get_generic_types_mapping(object()) == {} </code></pre> <p>This code works fine, but mypy (and pyright) doesn't (don't) like it:</p> <pre class="lang-bash prettyprint-override"><code>$ mypy t.py t.py:15: error: Argument 2 to &quot;isinstance&quot; has incompatible type &quot;&lt;typing special form&gt;&quot;; expected &quot;_ClassInfo&quot; [arg-type] t.py:18: error: &quot;object&quot; has no attribute &quot;__orig_bases__&quot; [attr-defined] t.py:24: error: &quot;object&quot; has no attribute &quot;__orig_class__&quot;; maybe &quot;__class__&quot;? [attr-defined] Found 3 errors in 1 file (checked 1 source file) </code></pre> <p>This makes sense, as <code>Generic</code> objects really <em>don't</em> have an <code>__orig_bases__</code>:</p> <pre class="lang-py prettyprint-override"><code>from typing import Generic assert not hasattr(Generic(), &quot;__orig_bases__&quot;) </code></pre> <p>How do I tell mypy that <code>obj</code> does have <code>__orig_bases__</code>? [What do I read to understand where it comes from?] 
Should I be using something other than <code>isinstance</code>?</p> <p>(I thought of just <code>if hasattr(obj, &quot;__orig_bases__&quot;)</code>, but this doesn't remove the error on <code>__orig_class__</code>, and I actually want to raise an error if a generic <em>doesn't</em> have defined types. I could just check for both, but I feel like I'm missing something conceptually.)</p>
<python><generics><types>
2023-11-15 11:21:10
1
6,024
2e0byo
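One hedged way to quiet the checkers in the question above: describe the dynamically attached attributes yourself (they are set at runtime by the `typing` machinery, which is why `object` knows nothing about them) and `cast` to that description instead of relying on `isinstance`.

```python
from typing import Any, Generic, Protocol, TypeVar, cast, get_args

T = TypeVar("T")

class HasGenericInfo(Protocol):
    # Attached at runtime by the typing machinery; declaring them here
    # gives mypy/pyright something to check against.
    __orig_bases__: tuple[Any, ...]
    __orig_class__: Any

class Derived(Generic[T]):
    pass

d = Derived[int]()
info = cast(HasGenericInfo, d)        # no-op at runtime, typed for the checker
print(get_args(info.__orig_class__))  # (<class 'int'>,)
```

Combining this with a `hasattr` check (raising when a generic lacks `__orig_class__`) keeps the "fail loudly on undeclared type parameters" behaviour the question asks for.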
77,487,139
929,122
pyarrow.lib.ArrowTypeError: Expected bytes, got a 'LOB' object - How can I convert CLOBs to Strings at operator level?
<p>I'm trying to copy data from an Oracle database to GCS using Airflow's OracleToGCSOperator:</p> <pre class="lang-py prettyprint-override"><code>copy_data = OracleToGCSOperator( task_id='copy_data_task', oracle_conn_id='my_conn', sql=&quot;SELECT * FROM MY_TABLE&quot;, bucket=MY_BUCKET, filename=FILEPATH., export_format='PARQUET' ) </code></pre> <p>When executed, I'm getting the error <code>pyarrow.lib.ArrowTypeError: Expected bytes, got a 'LOB' object</code>.</p> <p>MY_TABLE has more than 800 columns and only 2 of them are CLOB. I assume this is what GCS/parquet doesn't like.</p> <p>Is there any way I can convert the CLOB columns to strings at the operator level?</p>
<python><oracle-database><google-cloud-platform><airflow><google-cloud-composer-2>
2023-11-15 10:55:56
1
437
drake10k
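A hedged sketch for the question above of converting at the SQL level, so the operator never receives LOB objects (column names are hypothetical; `DBMS_LOB.SUBSTR` returns a VARCHAR2 of up to 4000 bytes, so longer CLOBs would be truncated):

```python
# Replace SELECT * with an explicit column list, converting only the two
# CLOB columns; with 800+ columns the list could be generated from
# ALL_TAB_COLUMNS rather than typed by hand.
sql = """
SELECT
    DBMS_LOB.SUBSTR(clob_col_1, 4000, 1) AS clob_col_1,
    DBMS_LOB.SUBSTR(clob_col_2, 4000, 1) AS clob_col_2,
    other_col_1,
    other_col_2
FROM MY_TABLE
"""
print("DBMS_LOB.SUBSTR" in sql)  # True
```

The operator itself then needs no changes — only its `sql` argument.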
77,486,881
6,448,412
Flask-CORS returns random Access-Control-Allow-Origin if Origin request header is not provided
<p>I want to enable CORS in my Flask application with a predefined set of allowed origins, as documented <a href="https://flask-cors.corydolphin.com/en/latest/api.html" rel="nofollow noreferrer">here</a>:</p> <pre><code>from flask import Flask from flask_cors import CORS app = Flask(__name__) CORS(app, origins=['http://localhost:3000', 'https://app.my_domain.com']) </code></pre> <p>The problem is that if I don't specify the <code>Origin</code> header in my request to the server, an arbitrary value for the <code>Access-Control-Allow-Origin</code> response header will be returned.</p> <p>So for example, if my web application running on <code>https://app.my_domain.com</code> sends a <code>GET</code> request to the backend without specifying the <code>Origin</code> request header, the backend returns the following response header:</p> <p><code>Access-Control-Allow-Origin: http://localhost:3000</code></p> <p>This seems not correct to me. How is this mechanism intended to be used?</p>
<python><flask><cors><flask-cors>
2023-11-15 10:18:59
1
398
Laugslander
77,486,849
1,521,241
PyDLL does not work across different Python versions
<p>I am using PyDLL function to bind DLL to Python.</p> <p>From Python side:</p> <pre><code>import ctypes as _ct _path = _parent_path(__file__) / &quot;scisuit_pybind&quot; pydll = _ct.PyDLL(str(_path)) pydll.c_root_bisect.argtypes = [_ct.py_object, _ct.c_double, _ct.c_double, _ct.c_double, _ct.c_int, _ct.c_char_p, _ct.c_bool] pydll.c_root_bisect.restype = _ct.py_object </code></pre> <p>And from C++ side:</p> <pre><code>#define EXTERN \ extern &quot;C&quot; DLLPYBIND EXTERN PyObject * c_root_bisect(PyObject * FuncObj, double a, double b, double tol = 1e-5, int maxiter = 100, const char* method = &quot;bf&quot;, bool modified = false); </code></pre> <p>Cases are:</p> <ol> <li><strong>Works:</strong> Compile the DLL with Python 3.10.6 and run it with Python 3.10.6. <strong>Fails</strong> when run with Python 3.11.</li> <li><strong>Works</strong> Compile the DLL with Python 3.11 and run with 3.11. <strong>Fails</strong> when run with Python 3.10.</li> </ol> <p><strong>Error:</strong> <em>self._handle = _dlopen(self._name, mode)</em></p> <p>What am I missing?</p> <hr /> <p><strong>EDIT 1</strong>:</p> <p>A quick fix (maybe not the best one) is to build DLLs (affected one is small, around 170 KB) for 3.x versions (currently I made them for 3.10 and 3.11) and then from Python side:</p> <pre><code>_DLLname= &quot;scisuit_pybind&quot; if sys.version_info.minor == 10: _DLLname= &quot;scisuit_pybind310&quot; _path = _parent_path(__file__) / _DLLname </code></pre>
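The per-version DLL in the edit can be generalised so each new CPython release doesn't need its own hard-coded branch. The naming scheme below is an assumption matching the edit; the underlying cause is likely that the DLL links against a version-specific CPython library, so one binary per minor version is needed unless the stable ABI (`Py_LIMITED_API`) is targeted:

```python
import sys

def dll_basename(base: str = "scisuit_pybind") -> str:
    # e.g. "scisuit_pybind311" on CPython 3.11 (hypothetical naming scheme)
    return f"{base}{sys.version_info.major}{sys.version_info.minor}"
```

The loader then becomes `_path = _parent_path(__file__) / dll_basename()` with no version checks scattered around.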
<python><c++>
2023-11-15 10:15:25
1
1,053
macroland
77,486,675
628,228
How to validate a TextIO argument?
<p>I am just coming to terms with Python type hinting and I am confused about how to implement argument validation for the following function signature:</p> <pre class="lang-py prettyprint-override"><code>def read_file(file: Union[str, PathLike, TextIO]) -&gt; str: </code></pre> <p>My initial attempt at a pythonic implementation was the following:</p> <pre class="lang-py prettyprint-override"><code>def read_file(file: Union[str, PathLike, TextIO]) -&gt; str: try: with open(file) as fileIO: return read_file(fileIO) except TypeError: return file.read() </code></pre> <p>Although this seems like a totally valid pythonic implementation for <code>read_file</code>, the type checker went all over this one. This is understandable since <code>open</code> does not accept any of the possible types except <code>TextIO</code> and there is nothing to narrow the type down (although it is surprising that the <code>except TypeError:</code> is not considered).</p> <p>So I gave up on &quot;asking for forgiveness&quot; and instead tried to explicitly check the argument:</p> <pre class="lang-py prettyprint-override"><code>def read_file(file: Union[str, PathLike, TextIO]) -&gt; str: if isinstance(file, TextIO): return file.read() else: with open(file) as fileIO: return read_file(fileIO) </code></pre> <p>This turns the whole logic around and checks if file is <code>TextIO</code> so the type checker is all happy. The problem is that this actually makes no sense at runtime since <code>TextIO</code> is not actually a base class you can check against, but rather a type that exists only for type hinting.</p> <p>Now this is where I started to get really confused, since I realized I actually had no idea how to go about checking whether something is a <code>TextIO</code> at runtime. I dug through all kinds of rabbit holes for checking if a variable is either a path-like or file-like, but it all feels like I'm missing something fundamental here.
I mean, if the type checker can know ahead of time that something is <code>TextIO</code> then how can it be hard to narrow down the type in the implementation?</p> <p>This must be something that is done in all kinds of libraries, but most implementations I found use vague checks for <code>read</code> and iterable, etc. perhaps for backwards compatibility, but I'm aiming for python 3.9+ so was hoping that by now there might be a better solution.</p> <p><strong>NOTE</strong>: As a clarification given comments and existing answers. I am not asking about what is the correct implementation for opening text files. I am also not asking how to use type hinting <em>in general</em>: I understand that you are supposed to use type narrowing to get the type checker happy.</p> <p><strong>My question is</strong>: what functions or expressions can you use in your code to do type narrowing specifically of <code>TextIO</code> type hints so that the compiler will be happy to discard that type for a variable? <code>try.. except</code> doesn't work, checking if value is instance of <code>TextIOBase</code> or <code>TextIO</code> also doesn't work.</p>
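One well-trodden way out is to narrow on the <em>other</em> members of the union: `str` and `os.PathLike` are runtime-checkable, so the fall-through branch is what the checker infers as the `TextIO` member. A sketch:

```python
from os import PathLike
from typing import TextIO, Union

def read_file(file: Union[str, PathLike, TextIO]) -> str:
    # str and os.PathLike ARE runtime-checkable (PathLike checks for __fspath__),
    # so narrow on those; in the fall-through branch the checker infers TextIO
    if isinstance(file, (str, PathLike)):
        with open(file) as f:
            return f.read()
    return file.read()
```

This keeps the runtime check honest (no `isinstance` against a typing-only name) while giving the checker enough to eliminate the path-like cases.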
<python><python-typing>
2023-11-15 09:49:25
1
4,430
glopes
77,486,577
6,813,417
Stop pyspark aggregation if condition triggers
<p>Let's say I want to check if a pyspark dataframe has any constant column. Let's work with the dataframe from <a href="https://stackoverflow.com/questions/52113821/fastest-way-to-know-if-a-column-has-a-constant-value-in-a-pyspark-dataframe">this question</a>:</p> <pre><code>+----------+----------+ | A | B | +----------+----------+ | 2.0| 0.0| | 0.0| 0.0| | 1.0| 0.0| | 1.0| 0.0| | 0.0| 0.0| | 1.0| 0.0| | 0.0| 0.0| +----------+----------+ </code></pre> <p>Isn't there a way to generate:</p> <pre><code>+----------+----------+ | A | B | +----------+----------+ | False| True| +----------+----------+ </code></pre> <p>Without having to aggregate/filter the whole A column as proposed in that question's solution? (Say, if I detect that two rows aren't equal during aggregation, stop the operation and return False, thus saving time.) Does Spark do this internally?</p>
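Spark doesn't expose a "stop aggregating on first mismatch" hook directly; one commonly suggested approximation is `df.select("A").distinct().limit(2).count() == 1`, since `limit` lets Spark stop once two distinct values have been seen (how early in practice depends on the plan). The short-circuit idea itself, in plain Python terms:

```python
def is_constant(values):
    # stops scanning at the first value that differs from the first one
    it = iter(values)
    try:
        first = next(it)
    except StopIteration:
        return True  # empty column: vacuously constant
    return all(v == first for v in it)  # all() short-circuits on first False

assert is_constant([0.0] * 7) is True          # column B
assert is_constant([2.0, 0.0, 1.0]) is False   # column A: stops at index 1
```

The same early-exit shape is what the `distinct().limit(2)` trick tries to get Spark's planner to do for you.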
<python><apache-spark><pyspark>
2023-11-15 09:35:18
1
1,058
Let's try
77,486,432
8,554,611
`setuptools_scm` includes a committed `.gitignore` to an `sdist` package
<p>I have a flat-layout project like this:</p> <pre><code>├── project_name │ └── ... ├── .gitignore ├── pyproject.toml └── ... </code></pre> <p>I follow <a href="https://setuptools.pypa.io/en/latest/userguide/datafiles.html#exclude-package-data" rel="nofollow noreferrer">the <code>setuptools</code> docs</a> to compose the <code>pyproject.toml</code> like this:</p> <pre class="lang-ini prettyprint-override"><code>[build-system] requires = ['setuptools&gt;=45', 'setuptools_scm[toml]&gt;=6.2'] [project] name = 'project_name' version = '0.0.1' [tool.setuptools.exclude-package-data] &quot;*&quot; = [&quot;.gitignore&quot;] </code></pre> <p>However, when I do</p> <pre class="lang-bash prettyprint-override"><code>python3 -m build --sdist </code></pre> <p>I get the <code>.gitignore</code> file in the resulting <code>*.tar.gz</code> file.</p> <p>I can forcefully exclude the file using a <code>MANIFEST.in</code>:</p> <pre><code>exclude .gitignore </code></pre> <p>What's the use of the <code>[tool.setuptools.exclude-package-data]</code> section, then? Can I do the job without the <code>MANIFEST.in</code> file?</p> <p>Am I misusing the section? From what the building process reports, I guess the <code>'*'</code> there means the <code>project_name</code> directory.
Is there any config key to exclude the <code>.*</code> files from the root of the <code>sdist</code> package?</p> <hr /> <p>An MWE for @sinoroc (to be run in a shell):</p> <pre class="lang-bash prettyprint-override"><code># make a dedicated directory and enter it mkdir package_name pushd package_name # make two empty files (only the first one is mandatory) touch .gitignore README # fill pyproject.toml echo &quot;build-system.requires = ['setuptools', 'setuptools_scm[toml]']&quot; &gt;&gt; pyproject.toml echo &quot;project = {name = 'package_name', version = '0.0.1'}&quot; &gt;&gt; pyproject.toml echo &quot;[tool.setuptools_scm]&quot; &gt;&gt; pyproject.toml # prepare a virtual environment python -m venv venv source venv/bin/activate python -m pip install -U pip setuptools build # commit .gitignore git init git add -f .gitignore git commit -m &quot;add .gitignore&quot; # build a package python -m build --sdist # list the files in the package tar --list -f dist/package_name-0.0.1.tar.gz # exit (optional) deactivate popd </code></pre> <p>This makes the following file tree:</p> <pre><code>├── .git │ └── ... ├── dist │ └── package_name-0.0.1.tar.gz ├── package_name.egg-info │ └── ... ├── venv │ └── ... ├── .gitignore ├── pyproject.toml └── README </code></pre> <p>The <code>pyproject.toml</code> file has the following content:</p> <pre class="lang-ini prettyprint-override"><code>build-system.requires = ['setuptools', 'setuptools_scm[toml]'] project = {name = 'package_name', version = '0.0.1'} [tool.setuptools_scm] </code></pre> <p>The <code>README</code> and <code>.gitignore</code> files are empty. The latter has to be committed for the MWE to work.</p> <p>It looks that it's the <code>setuptools_scm</code> that's to blame for the inclusion of <code>.gitignore</code>. The latter should be committed.</p>
<python><setuptools><pyproject.toml><setuptools-scm>
2023-11-15 09:13:01
0
796
StSav012
77,486,242
21,309,333
Ctypes 2d array of strings in python stores different strings at same memory address
<p>the python code i have is pretty simple:</p> <pre class="lang-py prettyprint-override"><code>from ctypes import * from random import randint class uni(Union): _fields_ = [('p', c_char_p), ('a', c_longlong)] #initializing the array of strings x = ((c_char_p * 3) * 10) () for i in range(10): for j in range(3): x[i][j] = str(randint(100, 999)).encode('utf-8') #it prints what i expect it to print for i in range(10): for j in range(3): print(x[i][j], end = ' ') print() print(&quot;addresses&quot;) for i in range(10): for j in range(3): t = uni() # getting an integer that points to the string to print string's address t.p = x[i][j] print(hex(t.a), end = ' - ') print(string_at(t.a), end = ' | ') print() </code></pre> <p>This outputs the following:</p> <pre><code>b'475' b'912' b'805' b'107' b'986' b'191' b'389' b'525' b'921' b'441' b'869' b'452' b'505' b'788' b'571' b'111' b'974' b'758' b'447' b'975' b'671' b'322' b'633' b'332' b'924' b'633' b'174' b'677' b'611' b'431' addresses 0x7fdfbbbcad80 - b'475' | 0x7fdfbbbcad80 - b'912' | 0x7fdfbbbcad80 - b'805' | 0x7fdfbbbcad80 - b'107' | 0x7fdfbbbcad80 - b'986' | 0x7fdfbbbcad80 - b'191' | 0x7fdfbbbcad80 - b'389' | 0x7fdfbbbcad80 - b'525' | 0x7fdfbbbcad80 - b'921' | 0x7fdfbbbcad80 - b'441' | 0x7fdfbbbcad80 - b'869' | 0x7fdfbbbcad80 - b'452' | 0x7fdfbbbcad80 - b'505' | 0x7fdfbbbcad80 - b'788' | 0x7fdfbbbcad80 - b'571' | 0x7fdfbbbcad80 - b'111' | 0x7fdfbbbcad80 - b'974' | 0x7fdfbbbcad80 - b'758' | 0x7fdfbbbcad80 - b'447' | 0x7fdfbbbcad80 - b'975' | 0x7fdfbbbcad80 - b'671' | 0x7fdfbbbcad80 - b'322' | 0x7fdfbbbcad80 - b'633' | 0x7fdfbbbcad80 - b'332' | 0x7fdfbbbcad80 - b'924' | 0x7fdfbbbcad80 - b'633' | 0x7fdfbbbcad80 - b'174' | 0x7fdfbbbcad80 - b'677' | 0x7fdfbbbcad80 - b'611' | 0x7fdfbbbcad80 - b'431' | </code></pre> <p>How? how does it store different strings at the same address??</p> <p>Note: I found this when I was debugging a program that passes a 2d array of strings to C++ shared object. 
The C++ function is defined as follows:</p> <pre class="lang-cpp prettyprint-override"><code>extern &quot;C&quot; void print2d(char*** arr, int len, int inner_len) { std::cout &lt;&lt; arr &lt;&lt; '\n'; //ok std::cout.flush(); std::cout &lt;&lt; *arr &lt;&lt; '\n'; //ok std::cout.flush(); std::cout &lt;&lt; **arr &lt;&lt; '\n'; //this segfaults } </code></pre> <p>If anyone has any suggestions to fix this, I would be glad to hear them.</p>
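On the segfault: `((c_char_p * 3) * 10)` is one contiguous `char*[10][3]`, not a `char***`, so dereferencing twice on the C side reads a string pointer as if it were another pointer table. The identical addresses are likely a red herring: each `x[i][j]` read materialises a temporary `bytes` object that is freed at the end of the loop iteration, so the allocator keeps reusing the same slot. A pure-ctypes sketch of adding the missing level of indirection (no C library needed to check it):

```python
from ctypes import POINTER, c_char_p, cast

# ((c_char_p * 3) * N) is a contiguous char*[N][3] -- NOT a char***
x = ((c_char_p * 3) * 2)()
for i in range(2):
    for j in range(3):
        x[i][j] = f"{i}{j}".encode()

# a function declared char*** needs an array of row pointers instead:
rows = (POINTER(c_char_p) * 2)()
for i in range(2):
    rows[i] = cast(x[i], POINTER(c_char_p))

# `rows` can now be passed where char*** is expected, e.g.
#   lib.print2d(rows, 2, 3)
assert rows[1][2] == b"12"
```

With `rows` passed instead of `x`, the `**arr` dereference on the C++ side follows a real pointer-to-pointer chain rather than reinterpreting string bytes as an address.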
<python><c++><arrays><string><ctypes>
2023-11-15 08:42:00
0
365
God I Am Clown
77,486,146
7,699,037
Load element as string when it ends with a colon
<p>I have a YAML document where some values end with a colon, something like:</p> <pre class="lang-yaml prettyprint-override"><code>foo: - bar - baz:: </code></pre> <p>When I load the document with <code>yaml.load</code>, the <code>baz::</code> element gets converted to a dictionary <code>{'baz:' : ''}</code>. However, I would like to read it as string.</p> <p>I've tried loading the file with the <code>yaml.BaseLoader</code>, however this did not help. Is there a way to specify that the elements should not be converted to a dict?</p>
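Quoting the scalar in the YAML (`'baz::'`) is the clean fix. If the document can't be changed, the loaded structure can be normalized afterwards; this sketch assumes the shape `yaml.safe_load` typically produces for the snippet (the empty value may load as `None` or `''` depending on the loader):

```python
# what yaml.safe_load gives for the snippet above (assumption: empty value -> None)
data = {"foo": ["bar", {"baz:": None}]}

def restore_trailing_colons(items):
    out = []
    for item in items:
        # a one-entry dict with an empty value was really a scalar ending in ':'
        if isinstance(item, dict) and len(item) == 1:
            key, value = next(iter(item.items()))
            if value in (None, ""):
                out.append(key + ":")
                continue
        out.append(item)
    return out

assert restore_trailing_colons(data["foo"]) == ["bar", "baz::"]
```

The caveat is that a genuinely intended one-key mapping with a null value would also be converted, so this is only safe when the document's lists are known to contain scalars.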
<python><yaml><pyyaml>
2023-11-15 08:20:37
1
2,908
Mike van Dyke
77,486,094
736,662
Printing using on_start and self in Locust
<p>Having this code:</p> <pre><code>class MyUser(HttpUser): host=&quot;http://localhost&quot; def on_start(self): self.data = data.sample().to_dict() self.ts_value_random = random.randrange(10000) @task def send_request(self): print(&quot;Sending request with data: &quot;, self.data) print(&quot;Random_value: &quot;, self.ts_value_random) self.client.get(&quot;/hello&quot;, data=self.data) </code></pre> <p>Why is only &quot;Sending request with data:&quot; printed to console when running i.e. 5 Virtual Users?</p>
<python><locust>
2023-11-15 08:10:08
0
1,003
Magnus Jensen
77,485,951
7,254,635
Can any dynamic language write a program like the datastore from Chapter 6 of the book Type-Driven Development with Idris?
<p>The book's Chapter6's datastore:</p> <pre> module Main import Data.Vect infixr 5 .+. data Schema = SString | SInt | (.+.) Schema Schema SchemaType : Schema -> Type SchemaType SString = String SchemaType SInt = Int SchemaType (x .+. y) = (SchemaType x, SchemaType y) record DataStore where constructor MkData schema : Schema size : Nat items : Vect size (SchemaType schema) addToStore : (store : DataStore) -> SchemaType (schema store) -> DataStore addToStore (MkData schema size store) newitem = MkData schema _ (addToData store) where addToData : Vect oldsize (SchemaType schema) -> Vect (S oldsize) (SchemaType schema) addToData [] = [newitem] addToData (x :: xs) = x :: addToData xs setSchema : (store : DataStore) -> Schema -> Maybe DataStore setSchema store schema = case size store of Z => Just (MkData schema _ []) S k => Nothing data Command : Schema -> Type where SetSchema : Schema -> Command schema Add : SchemaType schema -> Command schema Get : Integer -> Command schema Quit : Command schema parsePrefix : (schema : Schema) -> String -> Maybe (SchemaType schema, String) parsePrefix SString input = getQuoted (unpack input) where getQuoted : List Char -> Maybe (String, String) getQuoted ('"' :: xs) = case span (/= '"') xs of (quoted, '"' :: rest) => Just (pack quoted, ltrim (pack rest)) _ => Nothing getQuoted _ = Nothing parsePrefix SInt input = case span isDigit input of ("", rest) => Nothing (num, rest) => Just (cast num, ltrim rest) parsePrefix (schemal .+. 
schemar) input = case parsePrefix schemal input of Nothing => Nothing Just (l_val, input') => case parsePrefix schemar input' of Nothing => Nothing Just (r_val, input'') => Just ((l_val, r_val), input'') parseBySchema : (schema : Schema) -> String -> Maybe (SchemaType schema) parseBySchema schema x = case parsePrefix schema x of Nothing => Nothing Just (res, "") => Just res Just _ => Nothing parseSchema : List String -> Maybe Schema parseSchema ("String" :: xs) = case xs of [] => Just SString _ => case parseSchema xs of Nothing => Nothing Just xs_sch => Just (SString .+. xs_sch) parseSchema ("Int" :: xs) = case xs of [] => Just SInt _ => case parseSchema xs of Nothing => Nothing Just xs_sch => Just (SInt .+. xs_sch) parseSchema _ = Nothing parseCommand : (schema : Schema) -> String -> String -> Maybe (Command schema) parseCommand schema "add" rest = case parseBySchema schema rest of Nothing => Nothing Just restok => Just (Add restok) parseCommand schema "get" val = case all isDigit (unpack val) of False => Nothing True => Just (Get (cast val)) parseCommand schema "quit" "" = Just Quit parseCommand schema "schema" rest = case parseSchema (words rest) of Nothing => Nothing Just schemaok => Just (SetSchema schemaok) parseCommand _ _ _ = Nothing parse : (schema : Schema) -> (input : String) -> Maybe (Command schema) parse schema input = case span (/= ' ') input of (cmd, args) => parseCommand schema cmd (ltrim args) display : SchemaType schema -> String display {schema = SString} item = show item display {schema = SInt} item = show item display {schema = (y .+. 
z)} (iteml, itemr) = display iteml ++ ", " ++ display itemr getEntry : (pos : Integer) -> (store : DataStore) -> Maybe (String, DataStore) getEntry pos store = let store_items = items store in case integerToFin pos (size store) of Nothing => Just ("Out of range\n", store) Just id => Just (display (index id (items store)) ++ "\n", store) processInput : DataStore -> String -> Maybe (String, DataStore) processInput store input = case parse (schema store) input of Nothing => Just ("Invalid command\n", store) Just (Add item) => Just ("ID " ++ show (size store) ++ "\n", addToStore store item) Just (SetSchema schema') => case setSchema store schema' of Nothing => Just ("Can't update schema when entries in store\n", store) Just store' => Just ("OK\n", store') Just (Get pos) => getEntry pos store Just Quit => Nothing main : IO () main = replWith (MkData (SString .+. SString .+. SInt) _ []) "Command: " processInput </pre> <p>It can define the schema of a &quot;database&quot; with String and Int at runtime:</p> <pre> Command: schema String String Int OK Command: add "Rain Dogs" "Tom Waits" 1985 ID 0 Command: add "Fog on the Tyne" "Lindisfarne" 1971 ID 1 Command: get 1 "Fog on the Tyne", "Lindisfarne", 1971 Command: quit </pre> <p>I want to know whether any dynamic language like Ruby or Python can do the same thing. In a static language, the schema must be defined at compile time.</p> <p>Thanks!</p>
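For comparison, here is a rough Python sketch of the same datastore: the schema is ordinary runtime data, and every guarantee Idris enforces at compile time becomes a runtime check (or a crash). The names are our own; this illustrates the idea rather than claiming equivalence, since the Idris version statically rejects ill-typed `add`s before the program ever runs:

```python
# runtime analogue of the Idris datastore: the "schema" is ordinary data,
# and type errors surface at runtime instead of compile time
PARSERS = {"String": str, "Int": int}

class DataStore:
    def __init__(self):
        self.schema = []
        self.items = []

    def set_schema(self, *types):
        if self.items:
            raise ValueError("Can't update schema when entries in store")
        self.schema = list(types)

    def add(self, *values):
        if len(values) != len(self.schema):
            raise TypeError("wrong number of fields for schema")
        self.items.append(tuple(PARSERS[t](v) for t, v in zip(self.schema, values)))
        return len(self.items) - 1

    def get(self, pos):
        return self.items[pos]

store = DataStore()
store.set_schema("String", "String", "Int")
store.add("Rain Dogs", "Tom Waits", 1985)
assert store.get(0) == ("Rain Dogs", "Tom Waits", 1985)
```

So yes, a dynamic language can express the behaviour; what it cannot express is the `Command schema` dependency, where the type of a valid command changes with the current schema and mismatches are compile-time errors.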
<python><ruby><idris>
2023-11-15 07:39:53
1
1,757
wang kai
77,485,901
3,139,811
Pytest - how to find the test result after a test
<p>After a test has been executed I need to collect the result of that test, but I can't find the result in the FixtureRequest object. I can find the test name and some additional data, but nothing that shows whether the test passed or failed, nor whether there were any exceptions.</p> <p>Example code:</p> <pre><code>class TestSomething: @pytest.mark.test_case_id(99999) def test_example(self, api_interface) -&gt; None: assert 5 == 5 </code></pre> <p>And in another file:</p> <pre><code>@pytest.fixture def api_interface(request: FixtureRequest): </code></pre> <p>I can see that the request has the test name and a node object, but nowhere do I see any result- or assert-like data.</p>
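The outcome isn't stored on the request by default. The pattern described in the pytest documentation ("Making test result information available in fixtures") is a `conftest.py` hookwrapper that attaches each phase's report to the test item; the `rep_*` attribute names below are just a convention:

```python
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # wraps report creation for each phase (setup/call/teardown)
    # and stashes the report on the test item
    outcome = yield
    report = outcome.get_result()
    setattr(item, "rep_" + report.when, report)

@pytest.fixture
def api_interface(request):
    yield
    # teardown runs after the test body, so the call-phase report exists now
    # (only if the hook above is installed in conftest.py)
    report = request.node.rep_call
    print("test passed" if report.passed else "test failed")
```

With the hook in place, `request.node.rep_call.passed`/`.failed` (and `rep_setup`, `rep_teardown`) are readable from any fixture's teardown code.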
<python><pytest>
2023-11-15 07:30:01
2
857
John
77,485,752
4,158,016
Python pandas add column with values based on condition and pattern
<br> My Python pandas dataframe, originally built with the openpyxl engine for Excel processing, can be described in simple form as <pre><code>df1 = pd.DataFrame({&quot;col1&quot;:[&quot;&quot;,99,88,np.nan,66,55,np.nan,11,22],&quot;col2&quot;:['Catg0','Asset1','Other','Catg1','H &amp; F','Large Item','Catg2','Fragile','Delicate item'],&quot;col3&quot;:[&quot;&quot;,0,0,np.nan,99,155,np.nan,83,115]}) col1 col2 col3 0 Catg0 1 99 Asset1 0 2 88 Other 0 3 NaN Catg1 NaN 4 66 H &amp; F 99 5 55 Large Item 155 6 NaN Catg2 NaN 7 11 Fragile 83 8 22 Delicate item 115 </code></pre> <p>I am trying to modify it further by adding a new column (col4) that takes its value from col2 whenever the other columns in that row are empty or NaN, carrying that value forward until the next such row.<br> The header row itself should be removed after the pivot. <br></p> <pre><code> col1 col4 col2 col3 0 99 Catg0 Asset1 0 1 88 Catg0 Other 0 2 66 Catg1 H &amp; F 99 3 55 Catg1 Large Item 155 4 11 Catg2 Fragile 83 5 22 Catg2 Delicate item 115 import pandas as pd import numpy as np df1 = pd.DataFrame({&quot;col1&quot;:[&quot;&quot;,99,88,np.nan,66,55,np.nan,11,22],&quot;col2&quot;:['Catg0','Asset1','Other','Catg1','H &amp; F','Large Item','Catg2','Fragile','Delicate item'],&quot;col3&quot;:[&quot;&quot;,0,0,np.nan,99,155,np.nan,83,115]}) df1.insert(1, &quot;col4&quot;, 'Catg') </code></pre> <p>I am trying to find a way to express this pattern/condition-based logic to populate 'col4' and discard those rows</p>
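A sketch of the "category row becomes a fill-down column" logic against the sample frame, assuming a row is a category header exactly when `col1` is empty or NaN:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({
    "col1": ["", 99, 88, np.nan, 66, 55, np.nan, 11, 22],
    "col2": ["Catg0", "Asset1", "Other", "Catg1", "H & F",
             "Large Item", "Catg2", "Fragile", "Delicate item"],
    "col3": ["", 0, 0, np.nan, 99, 155, np.nan, 83, 115],
})

# header rows: col1 is empty or NaN
is_header = df1["col1"].isna() | (df1["col1"] == "")
# carry each header's col2 value down onto the data rows below it
df1.insert(1, "col4", df1["col2"].where(is_header).ffill())
# drop the header rows themselves
out = df1[~is_header].reset_index(drop=True)

assert list(out["col4"]) == ["Catg0", "Catg0", "Catg1", "Catg1", "Catg2", "Catg2"]
assert list(out["col2"]) == ["Asset1", "Other", "H & F", "Large Item",
                             "Fragile", "Delicate item"]
```

`where(is_header)` blanks out everything except the header rows' `col2`, and `ffill()` propagates each category label until the next header, which matches the desired output's column order `col1, col4, col2, col3`.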
<python><pandas><pivot>
2023-11-15 06:53:48
2
450
itsavy
77,485,192
6,025,866
Unable to run pm.sample() function using PyMC Python library even though I re-installed the libraries
<p>I am trying to run a Bayesian Linear Regression , but I am unable to create a posterior distribution using the sample() function from pymc. The code is as follows</p> <pre><code>import pandas as pd from random import randint # Generate date range dates = pd.date_range(start=&quot;2021-01-01&quot;, end=&quot;2021-01-30&quot;) data = { &quot;date&quot;: dates, &quot;gcm_direct_Impressions&quot;: [randint(10000, 20000) for _ in dates], &quot;tv_grps&quot;: [randint(30, 50) for _ in dates], &quot;tiktok_direct_Impressions&quot;: [randint(10000, 15000) for _ in dates], &quot;sell_out_quantity&quot;: [randint(150, 250) for _ in dates] } df = pd.DataFrame(data) #df.to_csv(&quot;dataset.csv&quot;, index=False) max(df['sell_out_quantity'].values) # Assigning the 'final_data' dataset to a new variable 'data' for further analysis. import pandas as pd data = df # Defining a list of variables for transformation. These include various factors that # might impact the analysis like trends, seasons, holidays, competitor sales, and different # marketing channels. transform_variables = [&quot;gcm_direct_Impressions&quot;,&quot;tv_grps&quot;,&quot;tiktok_direct_Impressions&quot;] # Identifying the channels that have a delay effect. This means the impact of these channels # on the target variable (like revenue) might not be immediate but delayed. delay_channels = [&quot;gcm_direct_Impressions&quot;,&quot;tv_grps&quot;,&quot;tiktok_direct_Impressions&quot;] # Listing the media channels that are part of the analysis. These are the channels through # which advertising or marketing is done. media_channels = [&quot;gcm_direct_Impressions&quot;,&quot;tv_grps&quot;,&quot;tiktok_direct_Impressions&quot;] # Specifying the control variables. These are the factors that need to be controlled or # accounted for in the analysis to isolate the effects of the media channels. 
#control_variables = [&quot;trend&quot;, &quot;season&quot;, &quot;holiday&quot;, &quot;competitor_sales_B&quot;, &quot;events&quot;] # Defining the target variable for the analysis, which in this case is 'revenue'. This is # likely the outcome or the dependent variable the analysis aims to predict or explain. target = &quot;sell_out_quantity&quot; #!pip install scikit-learn from sklearn.preprocessing import MinMaxScaler # Creating a copy of the 'data' dataframe to apply transformations. This ensures that # the original data remains unchanged. data_transformed = data.copy() # Initializing a dictionary to store the MinMaxScaler instances for each feature. # This will be useful for inverse transformations later. numerical_encoder_dict = {} # Looping through each feature in the list of variables to transform. for feature in transform_variables: # Initializing a MinMaxScaler. This scaler transforms each feature to a given range, # usually between 0 and 1, which is helpful for normalization. scaler = MinMaxScaler() # Reshaping the data for the feature into a 2D array, as required by the scaler. original = data[feature].values.reshape(-1, 1) # Applying the scaler to the feature and transforming the data. transformed = scaler.fit_transform(original) # Storing the transformed data back into the 'data_transformed' DataFrame. data_transformed[feature] = transformed # Saving the scaler instance in the dictionary for each feature. # This will be used for reversing the transformation if needed. numerical_encoder_dict[feature] = scaler # Placeholder for a potential dependent variable transformation, not utilized here. dependent_transformation = None # Scaling the target variable 'revenue' by dividing it by 100,000. # This kind of scaling might be done to bring the target variable to a smaller range # or to improve the interpretability of the model's results. 
original = data[target].values data_transformed[target] = original import pymc as pm import numpy as np import pytensor.tensor as tt # Initializing an empty list to store the mean response from different channels and control variables. response_mean = [] # Creating a new PyMC3 model context. All the model definitions inside this block # are part of 'model_2'. with pm.Model() as model_2: # Looping through each channel in the list of delay channels. for channel_name in delay_channels: print(f&quot;Delay Channels: Adding {channel_name}&quot;) # Extracting the transformed data for the current channel. x = data_transformed[channel_name].values # Defining Bayesian priors for the adstock, gamma, and alpha parameters for the current channel. adstock_param = pm.Beta(f&quot;{channel_name}_adstock&quot;, 2, 2) saturation_gamma = pm.Beta(f&quot;{channel_name}_gamma&quot;, 2, 2) saturation_alpha = pm.Gamma(f&quot;{channel_name}_alpha&quot;, 3, 1) transformed_X1 = tt.zeros_like(x) for i in range(1, len(x)): transformed_X1 = tt.set_subtensor(transformed_X1[i], x[i] + adstock_param * x[i - 1]) transformed_X2 = tt.zeros_like(x) for i in range(1,len(x)): transformed_X2 = tt.set_subtensor(transformed_X2[i],(transformed_X1[i]**saturation_alpha)/(transformed_X1[i]**saturation_alpha+saturation_gamma**saturation_alpha)) channel_b = pm.HalfNormal(f&quot;{channel_name}_media_coef&quot;, sigma = 250) response_mean.append(transformed_X2 * channel_b) intercept = pm.Normal(&quot;intercept&quot;,mu = np.mean(data_transformed[target].values), sigma = 3) sigma = pm.HalfNormal(&quot;sigma&quot;, 4) likelihood = pm.Normal(&quot;outcome&quot;, mu = intercept + sum(response_mean), sigma = sigma, observed = data_transformed[target].values) import arviz as az # Continuing the model context defined previously as 'model_2'. with model_2: # Sampling from the posterior distribution of the model. 
# This is the process where PyMC3 generates samples that represent the distribution # of the parameters given the data and the priors. # 'pm.sample()' is the main function to perform this sampling. # The parameters of pm.sample() are set to control the sampling process: # 1000 samples are drawn after a tuning phase of 1000 iterations. # 'target_accept' is set to 0.95, which is the target acceptance rate of the sampler. # A higher acceptance rate can help in achieving better convergence but might slow down the sampling. # 'return_inferencedata=True' makes the function return an InferenceData object, # which is useful for further analysis using ArviZ. trace = pm.sample(1000, tune=1000, target_accept=0.95, return_inferencedata=True) # Summarizing the trace. This generates a summary of the posterior distribution # for each parameter in the model, providing statistics like mean, standard deviation, # and the HPD (highest posterior density) interval. # This summary is useful for understanding the results of the model and for diagnostics. trace_summary = az.summary(trace) </code></pre> <p>After running the above code, I get the following error which I am unable to solve even though I removed and installed jupyter notebook and re-installed all the packages. 
This is on a Mac M1 machine.</p> <p>Libraries and their versions</p> <ul> <li>pymc (5.6.1)</li> <li>numpy (1.23.5)</li> <li>pytensor (2.12.3)</li> </ul> <pre><code> You can find the C code in this temporary file: /var/folders/jw/qb6bs44j0vgfsf52lxw71h300000gn/T/pytensor_compilation_error_v3472mqt --------------------------------------------------------------------------- CompileError Traceback (most recent call last) ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/vm.py in make_all(self, profiler, input_storage, output_storage, storage_map) 1242 thunks.append( -&gt; 1243 node.op.make_thunk(node, storage_map, compute_map, [], impl=impl) 1244 ) ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/op.py in make_thunk(self, node, storage_map, compute_map, no_recycling, impl) 130 try: --&gt; 131 return self.make_c_thunk(node, storage_map, compute_map, no_recycling) 132 except (NotImplementedError, MethodNotDefined): ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/op.py in make_c_thunk(self, node, storage_map, compute_map, no_recycling) 95 raise NotImplementedError(&quot;float16&quot;) ---&gt; 96 outputs = cl.make_thunk( 97 input_storage=node_input_storage, output_storage=node_output_storage ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/basic.py in make_thunk(self, input_storage, output_storage, storage_map, cache, **kwargs) 1199 init_tasks, tasks = self.get_init_tasks() -&gt; 1200 cthunk, module, in_storage, out_storage, error_storage = self.__compile__( 1201 input_storage, output_storage, storage_map, cache ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/basic.py in __compile__(self, input_storage, output_storage, storage_map, cache) 1119 output_storage = tuple(output_storage) -&gt; 1120 thunk, module = self.cthunk_factory( 1121 error_storage, ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/basic.py in cthunk_factory(self, error_storage, 
in_storage, out_storage, storage_map, cache) 1643 cache = get_module_cache() -&gt; 1644 module = cache.module_from_key(key=key, lnk=self) 1645 ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/cmodule.py in module_from_key(self, key, lnk) 1239 location = dlimport_workdir(self.dirname) -&gt; 1240 module = lnk.compile_cmodule(location) 1241 name = module.__file__ ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/basic.py in compile_cmodule(self, location) 1542 _logger.debug(f&quot;LOCATION {location}&quot;) -&gt; 1543 module = c_compiler.compile_str( 1544 module_name=mod.code_hash, ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/cmodule.py in compile_str(module_name, src_code, location, include_dirs, lib_dirs, libs, preargs, py_module, hide_symbols) 2648 # compile_stderr = compile_stderr.replace(&quot;\n&quot;, &quot;. &quot;) -&gt; 2649 raise CompileError( 2650 f&quot;Compilation failed (return status={status}):\n{' '.join(cmd)}\n{compile_stderr}&quot; CompileError: Compilation failed (return status=1): /usr/bin/clang++ -dynamiclib -g -O3 -fno-math-errno -Wno-unused-label -Wno-unused-variable -Wno-write-strings -Wno-c++11-narrowing -fno-exceptions -fno-unwind-tables -fno-asynchronous-unwind-tables -DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION -fPIC -undefined dynamic_lookup -I/Users/adhokshaja/miniconda3/envs/pymc_env/lib/python3.8/site-packages/numpy/core/include -I/Users/adhokshaja/miniconda3/envs/pymc_env/include/python3.8 -I/Users/adhokshaja/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/c_code -L/Users/adhokshaja/miniconda3/envs/pymc_env/lib -fvisibility=hidden -o /Users/adhokshaja/.pytensor/compiledir_macOS-13.2-arm64-arm-64bit-arm-3.8.18-64/tmpjj80nvdi/me35a44294d03835a76b7f9ad569bbbc122b29dc588c89cb224fa59ca0e0ec6cd.so /Users/adhokshaja/.pytensor/compiledir_macOS-13.2-arm64-arm-64bit-arm-3.8.18-64/tmpjj80nvdi/mod.cpp 
/Users/adhokshaja/.pytensor/compiledir_macOS-13.2-arm64-arm-64bit-arm-3.8.18-64/tmpjj80nvdi/mod.cpp:25480:32: fatal error: bracket nesting level exceeded maximum of 256 if (!PyErr_Occurred()) { ^ /Users/adhokshaja/.pytensor/compiledir_macOS-13.2-arm64-arm-64bit-arm-3.8.18-64/tmpjj80nvdi/mod.cpp:25480:32: note: use -fbracket-depth=N to increase maximum nesting level 1 error generated. During handling of the above exception, another exception occurred: CompileError Traceback (most recent call last) &lt;ipython-input-6-412f3306e835&gt; in &lt;cell line: 4&gt;() 13 # 'return_inferencedata=True' makes the function return an InferenceData object, 14 # which is useful for further analysis using ArviZ. ---&gt; 15 trace = pm.sample(1000, tune=1000, target_accept=0.95, return_inferencedata=True) 16 17 # Summarizing the trace. This generates a summary of the posterior distribution ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pymc/sampling/mcmc.py in sample(draws, tune, chains, cores, random_seed, progressbar, step, nuts_sampler, initvals, init, jitter_max_retries, n_init, trace, discard_tuned_samples, compute_convergence_checks, keep_warning_stat, return_inferencedata, idata_kwargs, nuts_sampler_kwargs, callback, mp_ctx, model, **kwargs) 651 652 initial_points = None --&gt; 653 step = assign_step_methods(model, step, methods=pm.STEP_METHODS, step_kwargs=kwargs) 654 655 if nuts_sampler != &quot;pymc&quot;: ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pymc/sampling/mcmc.py in assign_step_methods(model, step, methods, step_kwargs) 231 selected_steps.setdefault(selected, []).append(var) 232 --&gt; 233 return instantiate_steppers(model, steps, selected_steps, step_kwargs) 234 235 ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pymc/sampling/mcmc.py in instantiate_steppers(model, steps, selected_steps, step_kwargs) 132 args = step_kwargs.get(name, {}) 133 used_keys.add(name) --&gt; 134 step = step_class(vars=vars, model=model, **args) 135 
steps.append(step) 136 ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pymc/step_methods/hmc/nuts.py in __init__(self, vars, max_treedepth, early_max_treedepth, **kwargs) 178 `pm.sample` to the desired number of tuning steps. 179 &quot;&quot;&quot; --&gt; 180 super().__init__(vars, **kwargs) 181 182 self.max_treedepth = max_treedepth ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pymc/step_methods/hmc/base_hmc.py in __init__(self, vars, scaling, step_scale, is_cov, model, blocked, potential, dtype, Emax, target_accept, gamma, k, t0, adapt_step_size, step_rand, **pytensor_kwargs) 107 else: 108 vars = get_value_vars_from_user_vars(vars, self._model) --&gt; 109 super().__init__(vars, blocked=blocked, model=self._model, dtype=dtype, **pytensor_kwargs) 110 111 self.adapt_step_size = adapt_step_size ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pymc/step_methods/arraystep.py in __init__(self, vars, model, blocked, dtype, logp_dlogp_func, **pytensor_kwargs) 162 163 if logp_dlogp_func is None: --&gt; 164 func = model.logp_dlogp_function(vars, dtype=dtype, **pytensor_kwargs) 165 else: 166 func = logp_dlogp_func ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pymc/model.py in logp_dlogp_function(self, grad_vars, tempered, **kwargs) 607 if var in input_vars and var not in grad_vars 608 } --&gt; 609 return ValueGradFunction(costs, grad_vars, extra_vars_and_values, **kwargs) 610 611 def compile_logp( ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pymc/model.py in __init__(self, costs, grad_vars, extra_vars_and_values, dtype, casting, compute_grads, **kwargs) 346 inputs = grad_vars 347 --&gt; 348 self._pytensor_function = compile_pymc(inputs, outputs, givens=givens, **kwargs) 349 350 def set_weights(self, values): ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pymc/pytensorf.py in compile_pymc(inputs, outputs, random_seed, mode, **kwargs) 1194 opt_qry = mode.provided_optimizer.including(&quot;random_make_inplace&quot;, 
check_parameter_opt) 1195 mode = Mode(linker=mode.linker, optimizer=opt_qry) -&gt; 1196 pytensor_function = pytensor.function( 1197 inputs, 1198 outputs, ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/compile/function/__init__.py in function(inputs, outputs, mode, updates, givens, no_default_updates, accept_inplace, name, rebuild_strict, allow_input_downcast, profile, on_unused_input) 313 # note: pfunc will also call orig_function -- orig_function is 314 # a choke point that all compilation must pass through --&gt; 315 fn = pfunc( 316 params=inputs, 317 outputs=outputs, ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/compile/function/pfunc.py in pfunc(params, outputs, mode, updates, givens, no_default_updates, accept_inplace, name, rebuild_strict, allow_input_downcast, profile, on_unused_input, output_keys, fgraph) 365 ) 366 --&gt; 367 return orig_function( 368 inputs, 369 cloned_outputs, ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/compile/function/types.py in orig_function(inputs, outputs, mode, accept_inplace, name, profile, on_unused_input, output_keys, fgraph) 1754 ) 1755 with config.change_flags(compute_test_value=&quot;off&quot;): -&gt; 1756 fn = m.create(defaults) 1757 finally: 1758 t2 = time.perf_counter() ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/compile/function/types.py in create(self, input_storage, storage_map) 1647 1648 with config.change_flags(traceback__limit=config.traceback__compile_limit): -&gt; 1649 _fn, _i, _o = self.linker.make_thunk( 1650 input_storage=input_storage_lists, storage_map=storage_map 1651 ) ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/basic.py in make_thunk(self, input_storage, output_storage, storage_map, **kwargs) 252 **kwargs, 253 ) -&gt; Tuple[&quot;BasicThunkType&quot;, &quot;InputStorageType&quot;, &quot;OutputStorageType&quot;]: --&gt; 254 return self.make_all( 255 input_storage=input_storage, 256 
output_storage=output_storage, ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/vm.py in make_all(self, profiler, input_storage, output_storage, storage_map) 1250 thunks[-1].lazy = False 1251 except Exception: -&gt; 1252 raise_with_op(fgraph, node) 1253 1254 t1 = time.perf_counter() ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/utils.py in raise_with_op(fgraph, node, thunk, exc_info, storage_map) 533 # Some exception need extra parameter in inputs. So forget the 534 # extra long error message in that case. --&gt; 535 raise exc_value.with_traceback(exc_trace) 536 537 ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/vm.py in make_all(self, profiler, input_storage, output_storage, storage_map) 1241 # no_recycling here. 1242 thunks.append( -&gt; 1243 node.op.make_thunk(node, storage_map, compute_map, [], impl=impl) 1244 ) 1245 linker_make_thunk_time[node] = time.perf_counter() - thunk_start ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/op.py in make_thunk(self, node, storage_map, compute_map, no_recycling, impl) 129 ) 130 try: --&gt; 131 return self.make_c_thunk(node, storage_map, compute_map, no_recycling) 132 except (NotImplementedError, MethodNotDefined): 133 # We requested the c code, so don't catch the error. 
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/op.py in make_c_thunk(self, node, storage_map, compute_map, no_recycling) 94 print(f&quot;Disabling C code for {self} due to unsupported float16&quot;) 95 raise NotImplementedError(&quot;float16&quot;) ---&gt; 96 outputs = cl.make_thunk( 97 input_storage=node_input_storage, output_storage=node_output_storage 98 ) ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/basic.py in make_thunk(self, input_storage, output_storage, storage_map, cache, **kwargs) 1198 &quot;&quot;&quot; 1199 init_tasks, tasks = self.get_init_tasks() -&gt; 1200 cthunk, module, in_storage, out_storage, error_storage = self.__compile__( 1201 input_storage, output_storage, storage_map, cache 1202 ) ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/basic.py in __compile__(self, input_storage, output_storage, storage_map, cache) 1118 input_storage = tuple(input_storage) 1119 output_storage = tuple(output_storage) -&gt; 1120 thunk, module = self.cthunk_factory( 1121 error_storage, 1122 input_storage, ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/basic.py in cthunk_factory(self, error_storage, in_storage, out_storage, storage_map, cache) 1642 if cache is None: 1643 cache = get_module_cache() -&gt; 1644 module = cache.module_from_key(key=key, lnk=self) 1645 1646 vars = self.inputs + self.outputs + self.orphans ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/cmodule.py in module_from_key(self, key, lnk) 1238 try: 1239 location = dlimport_workdir(self.dirname) -&gt; 1240 module = lnk.compile_cmodule(location) 1241 name = module.__file__ 1242 assert name.startswith(location) ~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/basic.py in compile_cmodule(self, location) 1541 try: 1542 _logger.debug(f&quot;LOCATION {location}&quot;) -&gt; 1543 module = c_compiler.compile_str( 1544 module_name=mod.code_hash, 1545 src_code=src_code, 
~/miniconda3/envs/pymc_env/lib/python3.8/site-packages/pytensor/link/c/cmodule.py in compile_str(module_name, src_code, location, include_dirs, lib_dirs, libs, preargs, py_module, hide_symbols) 2647 # difficult to read. 2648 # compile_stderr = compile_stderr.replace(&quot;\n&quot;, &quot;. &quot;) -&gt; 2649 raise CompileError( 2650 f&quot;Compilation failed (return status={status}):\n{' '.join(cmd)}\n{compile_stderr}&quot; 2651 ) CompileError: Compilation failed (return status=1): </code></pre>
<python><bayesian><pymc>
2023-11-15 04:00:50
0
441
adhok
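The clang note in the traceback above points at a workaround: the generated C file nests deeper than clang's default limit of 256 brackets, and clang itself suggests `-fbracket-depth=N`. A hedged sketch of passing that flag through to PyTensor's C compiler via its environment flags (the script name is a placeholder; flag spelling per PyTensor's config system):

```shell
# Raise clang's bracket-nesting limit for PyTensor-compiled modules.
# PYTENSOR_FLAGS is read by PyTensor at import time; gcc__cxxflags is
# appended to the C++ compile command shown in the error above.
export PYTENSOR_FLAGS="gcc__cxxflags=-fbracket-depth=1024"
python my_pymc_script.py  # placeholder for the failing PyMC script
```

The same setting can also live in `~/.pytensorrc` if a persistent fix is preferred.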
77,485,146
3,214,482
vscode cannot reconnect to existing jupyter notebook server when waking my laptop
<p>I recently switched from browser to vs code for using jupyter notebook and the IDE-like features (e.g. auto-complete, debugging, etc) provided by VS code are really so much better than a browser. But one thing that bothers me is that <strong>whenever I wake my laptop from sleep mode</strong>, it often has issues reconnecting to the existing jupyter notebook server (in my case localhost:8888). When using a browser, I simply refresh the page and it works. But VS code does not seem to be able to &quot;reconnect&quot; to the server. I also tried closing/reopening my ipynb notebook, but still no luck. I could just restart the server, but I would lose all my saved results so it is usually not the best option.</p> <p>Edit:</p> <p>&quot;Help: about&quot; in my VScode returns:</p> <pre><code> Version: 1.84.2 Commit: 1a5daa3a0231a0fbba4f14db7ec463cf99d7768e Date: 2023-11-09T10:52:33.687Z (1 wk ago) Electron: 25.9.2 ElectronBuildId: 24603566 Chromium: 114.0.5735.289 Node.js: 18.15.0 V8: 11.4.183.29-electron.0 OS: Darwin x64 22.3.0 </code></pre> <p>Following are my jupyter extension versions:</p> <pre><code>@ijmbarr/jupyterlab_spellchecker v0.2.0 enabled OK </code></pre>

<python><visual-studio-code><jupyter-notebook>
2023-11-15 03:48:02
0
983
username123
77,485,121
15,155,978
Why is the Python version not shown correctly when using pyenv for Python environments?
<p>I have installed pyenv in MacOS Sonoma 14.1.1 version. I have added the following to <code>~/.zshrc</code> and <code>~/.bashrc</code> files according to this <a href="https://stackoverflow.com/questions/71188577/having-trouble-switching-python-versions-using-pyenv-global-command">answer</a>:</p> <pre><code>export PYENV_ROOT=&quot;$HOME/.pyenv&quot; export PATH=&quot;$PYENV_ROOT/bin:$PATH&quot; export PIPENV_PYTHON=&quot;$PYENV_ROOT/shims/python&quot; plugin=( pyenv ) eval &quot;$(pyenv init -)&quot; eval &quot;$(command pyenv init --path)&quot; eval &quot;$(pyenv virtualenv-init -)&quot; </code></pre> <p>The problem is that when opening a new terminal after activating a pyenv through <code>pyenv activate 3.10.7</code> and asking <code>python --version</code>, I got: <code>zsh: command not found: python</code>. But, if I use these commands:</p> <pre><code>eval &quot;$(command pyenv init -)&quot; eval &quot;$(command pyenv init --path)&quot; </code></pre> <p>When asking again <code>python --version</code>, <code>Python 3.10.7</code> is shown in the terminal.</p> <p>I wonder why this is not working correctly after I added the <code>$PATH</code> command in ~/.zshrc and ~/.bashrc files.</p>
<python><python-3.x><pyenv><macos-sonoma><pyenv-virtualenv>
2023-11-15 03:39:56
0
922
0x55b1E06FF
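For the pyenv question above, one hedged possibility is that on macOS zsh the PATH setup needs to run in the login-shell file while the interactive init runs in `~/.zshrc` (`~/.bashrc` is not read by zsh at all). A sketch of the split that pyenv's install notes suggest — this is a guess at the cause, not a confirmed diagnosis:

```shell
# ~/.zprofile — macOS Terminal opens login shells, so PATH setup goes here
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init --path)"

# ~/.zshrc — interactive-shell init (shims, completions, virtualenv hooks)
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"
```

With this split a new terminal should resolve `python` through the pyenv shims without re-running the `eval` commands by hand.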
77,485,119
3,600,487
Changing the encryption settings of EC2 EBS devices while creating a launch template using AWS CDK with Python
<p>In my AWS environment, I have an unencrypted AMI. When I launch EC2 instances from it, their EBS volumes are unencrypted, as expected. I'm trying to change the encryption settings of the EBS volumes to be encrypted at the launch time from this unencrypted AMI. I am using AWS CDK (version 2.66.0) with Python (3.9).</p> <p>The following code I tested and found that I was able to retrieve the AMI from its name saved in <code>ami_name</code>, its <code>BlockDeviceMappings</code> and save devices by changing the encryption settings to <code>true</code> in to the list variable <code>ami_block_device_mappings</code>. However, after adding this to the launch template, the CloudFormation shows the empty list of devices.</p> <p><strong>Note:</strong> Please change the value of <code>ami_name</code> variable as it applicable for your, if you want to try it.</p> <p><strong>mystack.py</strong>:</p> <pre><code>from aws_cdk import ( aws_ec2 as ec2, Stack, ) from constructs import Construct import boto3 # import json class MyStack(Stack): def __init__(self, scope: Construct, construct_id: str, **kwargs) -&gt; None: super().__init__(scope, construct_id, **kwargs) ami_name = &quot;MY-TEST-WEB-AMI&quot; ec2_client = boto3.client(&quot;ec2&quot;) # Get ami_id from ami_name response = ec2_client.describe_images( Filters=[{&quot;Name&quot;: &quot;name&quot;, &quot;Values&quot;: [ami_name]}], ) image = response[&quot;Images&quot;][0] ami_id = response[&quot;Images&quot;][0][&quot;ImageId&quot;] # Change EBS storage encryption settings, if any ami_block_device_mappings = [] if &quot;BlockDeviceMappings&quot; in image: for mapping in image[&quot;BlockDeviceMappings&quot;]: if &quot;Ebs&quot; in mapping: mapping[&quot;Ebs&quot;][&quot;Encrypted&quot;] = True ami_block_device_mappings.append({&quot;DeviceName&quot;: mapping[&quot;DeviceName&quot;], &quot;Ebs&quot;: mapping[&quot;Ebs&quot;]}) # print(json.dumps(ami_block_device_mappings,indent=4,default=str)) lt = ec2.CfnLaunchTemplate( 
self, &quot;LaunchTemplate&quot;, launch_template_data=ec2.CfnLaunchTemplate.LaunchTemplateDataProperty( image_id=ami_id, block_device_mappings=ami_block_device_mappings ) ) </code></pre> <p><strong>app.py</strong></p> <pre><code>#!/usr/bin/env python3 import os import aws_cdk as cdk from stacks.my_stack import MyStack app = cdk.App() MyStack(app, &quot;MyStack&quot;) app.synth() </code></pre> <p><strong>Generated Launch Template:</strong></p> <pre><code>Resources: LaunchTemplate: Type: AWS::EC2::LaunchTemplate Properties: LaunchTemplateData: BlockDeviceMappings: - {} - {} ImageId: ami-0589c6ad4ac8694587 </code></pre> <p>What mistakes am I making here?</p> <p>If you have a solution using a different <em>class</em> like <code>ec2.LaunchTemplate</code> instead of <code>ec2.CfnLaunchTemplate</code>, it is still fine, but I am looking for a solution in Python CDK.</p>
<python><amazon-ec2><aws-cloudformation><aws-cdk>
2023-11-15 03:36:56
0
1,710
Rafiq
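One thing worth checking for the empty `{}` mappings in the CDK question above: in Python CDK, plain dicts passed in place of L1 property structs use snake_case keys (`device_name`, `ebs`, `encrypted`), while boto3's `describe_images` returns PascalCase keys (`DeviceName`, `Ebs`, `Encrypted`), so every key may be silently dropped during synthesis. This is an assumption rather than a confirmed diagnosis; a sketch of a key converter:

```python
import re

def to_snake(key):
    # "DeviceName" -> "device_name": insert "_" before each interior capital.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", key).lower()

def to_cdk_keys(obj):
    # Recursively convert boto3's PascalCase keys to CDK-style snake_case,
    # descending into nested dicts (e.g. the "Ebs" block) and lists.
    if isinstance(obj, dict):
        return {to_snake(k): to_cdk_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [to_cdk_keys(v) for v in obj]
    return obj

mapping = {"DeviceName": "/dev/xvda", "Ebs": {"Encrypted": True, "VolumeSize": 30}}
print(to_cdk_keys(mapping))
# {'device_name': '/dev/xvda', 'ebs': {'encrypted': True, 'volume_size': 30}}
```

Each converted mapping could then be appended to `ami_block_device_mappings` in place of the raw boto3 dict; using the typed `ec2.CfnLaunchTemplate.BlockDeviceMappingProperty` classes would surface such key mismatches at synth time instead of producing empty objects.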
77,485,108
6,361,531
Force insert statement in sqlalchemy to use quotes around all columns
<p>Is there a parameter in sqlalchemy when doing an insert using the ORM that you can force the python library to quote all columns?</p> <p>For example,</p> <pre><code>INSERT INTO DATABASE_A.SCHEMA_A.TABLE_A (date, col1, &quot;col 2&quot;) values (:data1, :data2, :data3) </code></pre> <p>is the statement that is currently generated if I am using a pandas dataframe with columns <code>date</code>, <code>col1</code> and <code>col 2</code>. Note, <code>col 2</code> is quoted due to the space already. However, I would like to go ahead and quote <code>date</code> and <code>col1</code> because in my database date is a reserved word and needs to be quoted.</p> <p>Is there an engine or dialect parameter that will enable quote of all columns such that the generated insert statement will result in:</p> <pre><code>INSERT INTO DATABASE_A.SCHEMA_A.TABLE_A (&quot;date&quot;, &quot;col1&quot;, &quot;col 2&quot;) values (:data1, :data2, :data3) </code></pre>
<python><sqlalchemy>
2023-11-15 03:32:50
1
154,219
Scott Boston
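For the quoting question above: SQLAlchemy normally quotes only identifiers that need it, but quoting can be forced per identifier with `Column("date", ..., quote=True)` or by wrapping names in `sqlalchemy.sql.quoted_name(name, quote=True)`. As a stdlib-only sketch of the target output (the `:dataN` parameter style mirrors the question; this builds the string directly rather than going through SQLAlchemy):

```python
def quote_ident(name):
    # Standard SQL identifier quoting: wrap in double quotes and
    # double any embedded double quotes.
    return '"' + name.replace('"', '""') + '"'

def render_insert(table, columns):
    # Render an INSERT statement with every column name quoted.
    cols = ", ".join(quote_ident(c) for c in columns)
    params = ", ".join(f":data{i + 1}" for i in range(len(columns)))
    return f"INSERT INTO {table} ({cols}) VALUES ({params})"

print(render_insert("DATABASE_A.SCHEMA_A.TABLE_A", ["date", "col1", "col 2"]))
# INSERT INTO DATABASE_A.SCHEMA_A.TABLE_A ("date", "col1", "col 2") VALUES (:data1, :data2, :data3)
```

If the table is reflected or declared through the ORM, setting `quote=True` on the reserved-word columns is the cleaner fix, since the dialect then emits the quotes for every generated statement.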
77,485,101
704,262
SqlAlchemy extending an existing query with additional select columns from raw sql
<p>I'm quite new to SQLAlchemy and Python, and have to fix some bugs in a legacy environment so please bear with me..</p> <p><strong>Environment</strong>: <br/>Python 2.7.18 <br/>Bottle: 0.12.7 <br/>SQLAlchemy: 1.3.24 <br/>MySQL: 5.7</p> <p><strong>Scenario</strong>: I have this SQL Statement below that pivots a linked table and adds rows dynamically as additional columns in the original table - I constructed it in SQL Workbench and it returns the results I want: All specified columns from table t1 plus the additional columns with values from table t2 appear in the result</p> <pre><code>SET SESSION group_concat_max_len = 1000000; --this is needed to prevent the group_concat function to cut off after 1024 characters, there are quite a few columns involved that easily exceed the original limit SET @sql = NULL; SELECT GROUP_CONCAT(DISTINCT CONCAT( 'SUM( CASE WHEN custom_fields.FIELD_NAME = &quot;', custom_fields.FIELD_NAME, '&quot; THEN custom_fields_tags_link.field_value END) AS &quot;', custom_fields.FIELD_NAME, '&quot;') ) AS t0 INTO @sql FROM tags t1 LEFT OUTER JOIN custom_fields_tags_link t2 ON t1.id = t2.tag_id JOIN custom_fields ON custom_fields.id = t2.custom_field_id; SET @sql = CONCAT('SELECT tags.id AS tags_id, tags.tag_type AS tags_tag_type, tags.name AS tags_name, tags.version AS tags_version, ', @sql, ' FROM tags LEFT OUTER JOIN custom_fields_tags_link ON tags.id = custom_fields_tags_link.tag_id JOIN custom_fields ON custom_fields.id = custom_fields_tags_link.custom_field_id GROUP BY tags.id'); SELECT @sql; PREPARE stmt FROM @sql; EXECUTE stmt; DEALLOCATE PREPARE stmt; </code></pre> <p><strong>Problem</strong>: I have an already existing SQLAlchemy session, that expands a query to be used for pagination. Currently this query returns all my specified columns from table t1, that are joined with tables t2 and custom_fields to get all necessary columns. 
The missing part is the SQLAlchemy representation for the SELECT GROUP_CONCAT part of the above statement, the rest is all taken care of - since I have control and know how the Frontend presenation of this table looks and now also the raw SQL version in SQL Workbench, I tried to work backwards to get the SQLAlchemy / Python part right by consulting <a href="https://docs.sqlalchemy.org/en/14/orm/queryguide.html#orm-queryguide-selecting-text" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/14/orm/queryguide.html#orm-queryguide-selecting-text</a> and <a href="https://docs.sqlalchemy.org/en/14/core/sqlelement.html#sqlalchemy.sql.expression.TextClause.columns" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/14/core/sqlelement.html#sqlalchemy.sql.expression.TextClause.columns</a>, but now I am stuck at how to get this <code>TextClause</code> object converted into a <code>TextualSelect</code> without typing the columns statically in the <code>.column()</code> function cause I don't know what the column names, that the users provide for these custom_fields, will be.</p> <p><strong>Goal</strong>: concat a dynamically created raw SQL statement to my existing SQLAlchemy query to select these dynamically created fields from a linked table so that I have the same result as when I execute this raw SQL statement in a SQL editor</p> <p><strong>Attempts</strong>:</p> <pre><code>#session is a app-wide shared MySQL session object that is created via sqlalchemy.orm's sessionmaker, scoped_session function alchemy = session.query(Tag) try: #mainly the next line will be changed (x) select_custom_fields_columns_stmt = select().from_statement(text( '''SET SESSION group_concat_max_len = 1000000; SET @sql = NULL; SELECT GROUP_CONCAT(DISTINCT CONCAT( 'SUM( CASE WHEN custom_fields.FIELD_NAME = &quot;', custom_fields.FIELD_NAME, '&quot; THEN custom_fields_tags_link.field_value END) AS &quot;', custom_fields.FIELD_NAME, '&quot;')) AS t0 INTO @sql FROM tags t1 LEFT OUTER JOIN 
custom_fields_tags_link t2 ON t1.id = t2.tag_id JOIN custom_fields ON custom_fields.id = t2.custom_field_id;''')) #or here after the statement (y) # next line is my attempt to add the columns that have been generated by the previous function but of course unsuccessful alchemy = alchemy.add_columns(select_custom_fields_columns_stmt) except: logException() joined_query = alchemy.outerjoin(model.tag.CustomFieldTagLink) .join(model.tag.CustomField) </code></pre> <p><em>A</em>: This results in this error: <code>AttributeError: 'Select' object has no attribute 'from_statement' </code></p> <p><em>B</em>: Changing the line (x) above that constructs the query for the additional rows to <code>select_custom_fields_columns_stmt = session.select().from_statement(text(...</code> --&gt; results in: <code>AttributeError: 'Session' object has no attribute 'select'</code></p> <p><em>C</em>: adding a <code>.subquery(&quot;from_custom_fields&quot;)</code> statement at (y) --&gt; results in: <code>AttributeError: 'AnnotatedTextClause' object has no attribute 'alias'</code></p> <p><em>D</em>: other attempts for (x) substituting <code>select()</code> with <code>session.query()</code> or <code>session.query(Tags)</code> also didn't result in additional columns</p> <p>What else can I try? 
Would it be preferable/easier to write the whole raw SQL part in SQLAlchemy and if so, how could I do that?</p> <p>--</p> <p><strong>Update &amp; Examples</strong>:</p> <p>As suggested I have provided a SQLfiddle that has all the relevant information, but doesn't return any results (I am too inexperienced on how to use it):</p> <p><a href="http://sqlfiddle.com/#!9/90a0c87/2" rel="nofollow noreferrer">http://sqlfiddle.com/#!9/90a0c87/2</a></p> <p>Also provided a DB-fiddle example with the exact same information to re-create a minimal example and here also the results are returned:</p> <p><a href="https://dbfiddle.uk/NtPVWU8D" rel="nofollow noreferrer">https://dbfiddle.uk/NtPVWU8D</a></p> <p><strong>ORM definitions</strong></p> <p>for the 3 relevant tables, basically the Tags table and the CustomFields table use a CustomFieldTagLink association table to establish the link between these two.</p> <pre><code>class Tag(Base,Versioned): __tablename__ = 'tags' __table_args__ = {'mysql_engine': 'InnoDB', 'mysql_charset': 'utf8'} #base id = Column(Unicode(255), primary_key=True) tag_type = Column(UnicodeText) barcode_id = Column(UnicodeText) name = Column(UnicodeText) created_at = Column(Integer, index=True) # version gets created automatically # Relationship with the custom fields table - syntactic sugar custom_fields = relationship(&quot;CustomField&quot;, secondary='custom_fields_tags_link', back_populates='linked_tags') custom_field_links = relationship(&quot;CustomFieldTagLink&quot;, back_populates='tag', cascade=&quot;save-update, merge, delete, delete-orphan&quot;) class CustomField(Base, Versioned): __tablename__ = 'custom_fields' __table_args__ = {'mysql_engine': 'InnoDB', 'mysql_charset': 'utf8'} id = Column(Integer, Sequence('custom_template_field_id'), primary_key=True) field_name = Column(Unicode(255), unique=True, nullable=False) field_type = Column(Unicode(255), server_default='text') default_value = Column(TEXT(length=4294967295)) linked_tags = 
relationship(&quot;Tag&quot;, secondary='custom_fields_tags_link', back_populates='custom_fields') tag_links = relationship(&quot;CustomFieldTagLink&quot;, back_populates='custom_field') class CustomFieldTagLink(Base,Versioned): ''' Association Table between Tags and Custom Fields ''' __tablename__ = 'custom_fields_tags_link' __table_args__ = {'mysql_engine': 'InnoDB', 'mysql_charset': 'utf8'} tag_id = Column(Unicode(255), ForeignKey('tags.id'), primary_key=True) custom_field_id = Column(Integer, ForeignKey('custom_fields.id'), primary_key=True) field_value = Column(Unicode(255), nullable=False) tag = relationship(&quot;Tag&quot;, back_populates='custom_field_links') custom_field = relationship(&quot;CustomField&quot;, back_populates='tag_links') </code></pre> <p>I expect the SQL query when executed to return a list of Tags items that are later processed and converted to dicts, however in this instance it is about adding the &quot;pivot table&quot; with the added custom_fields columns to the tags table to the already existing SQLAlchemy query object, as the execution of that statement happens later (filter, sort, paginate etc needs first to be done)</p>
<python><sql><mysql><python-2.7><sqlalchemy>
2023-11-15 03:30:59
0
784
hreimer
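For the dynamic-columns part of the question above, one option is to skip the server-side `GROUP_CONCAT`/`PREPARE` machinery entirely: fetch the custom field names first (e.g. via `session.query(CustomField.field_name)`), build the `SUM(CASE ...)` select-list fragments in Python, and pass the assembled statement to `text(...)`. A hedged sketch of just the fragment builder (the field names `color` and `size` are invented for illustration):

```python
def pivot_columns(field_names):
    # Build the SUM(CASE ...) select-list fragments that the raw SQL's
    # GROUP_CONCAT expression assembles inside MySQL, one per custom field.
    fragments = []
    for name in field_names:
        fragments.append(
            f'SUM(CASE WHEN custom_fields.field_name = "{name}" '
            f'THEN custom_fields_tags_link.field_value END) AS "{name}"'
        )
    return ", ".join(fragments)

print(pivot_columns(["color", "size"]))
```

Because the names are known before the statement is built, `text(...).columns(...)` can then be given one `column(name)` per field, which sidesteps the original problem of not knowing the column list statically. Field names should be validated or escaped before interpolation, since they originate from user input.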
77,484,996
10,620,003
Build a df based on another df which only has 1/0
<p>I have a df which only has 1/0. I want to build a df with the same shape, keeping only the first two 1s in any run of consecutive 1s. For example, if a row contains 0,0,1,1,1,1,0,0 I want to convert it to 0,0,1,1,0,0,0,0. Here is an example,</p> <pre><code>import pandas as pd df_dr = pd.DataFrame() df_dr['0'] = [0,0] df_dr['1'] = [1,0] df_dr['2'] = [1,1] df_dr['3'] = [1,1] df_dr['4'] = [0,1] df_dr['5'] = [1,0] df_dr['6'] = [1,0] df_dr['7'] = [1,0] df_dr['8'] = [1,1] df_dr['9'] = [0,1] df_dr['10'] = [0,1] df_dr['11'] = [0,1] df_dr['12'] = [0,1] </code></pre> <p>and here is the expected output:</p> <pre><code> 0 1 2 3 4 5 6 7 8 9 10 11 12 0 0 1 1 0 0 1 1 0 0 0 0 0 0 1 0 0 1 1 0 0 0 0 1 1 0 0 0 </code></pre> <p>Could you please help me with that? Thanks</p>
<python><pandas>
2023-11-15 03:00:59
1
730
Sadcow
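The rule in the question above — keep only the first two 1s of each consecutive run — reduces to a single pass with a run counter. A minimal sketch on plain lists (the first row of the example df is used as input):

```python
def keep_first_two(values):
    # Keep only the first two 1s of each consecutive run of 1s;
    # the run counter resets whenever a 0 is seen.
    out, run = [], 0
    for v in values:
        if v == 1:
            run += 1
            out.append(1 if run <= 2 else 0)
        else:
            run = 0
            out.append(0)
    return out

row0 = [0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0]
print(keep_first_two(row0))
# [0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0]
```

Applied row-wise it reproduces both expected rows; in pandas this could be wired up with something like `df_dr.apply(lambda r: keep_first_two(r.tolist()), axis=1, result_type="expand")` (untested sketch), or vectorized with `groupby`/`cumsum` run labelling.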
77,484,926
22,371,917
How to scroll in a nav element using seleniumbase in python
<pre class="lang-html prettyprint-override"><code>&lt;nav class=&quot;flex h-full w-full flex-col p-2 gizmo:px-3 gizmo:pb-3.5 gizmo:pt-0&quot; aria-label=&quot;Menu&quot;&gt; </code></pre> <p>This is the nav, although it's a lot longer and is full of divs.</p> <p>I just want to know how to scroll to the end of the menu. Edit: there is a loading element that has to be accounted for</p> <pre class="lang-html prettyprint-override"><code>&lt;svg stroke=&quot;currentColor&quot; fill=&quot;none&quot; stroke-width=&quot;2&quot; viewBox=&quot;0 0 24 24&quot; stroke-linecap=&quot;round&quot; stroke-linejoin=&quot;round&quot; class=&quot;animate-spin text-center&quot; height=&quot;1em&quot; width=&quot;1em&quot; xmlns=&quot;http://www.w3.org/2000/svg&quot;&gt; </code></pre> <p>and under it there are many line elements</p> <p>top of nav xpath:</p> <pre class="lang-html prettyprint-override"><code>/html/body/div[1]/div[1]/div[1]/div/div/div/div/nav </code></pre> <p>top of svg xpath:</p> <pre class="lang-html prettyprint-override"><code>/html/body/div[1]/div[1]/div[1]/div/div/div/div/nav/div[2]/div[2]/div[2]/svg </code></pre> <p>Do you need the scrollbar HTML?</p>
<python><html><selenium-webdriver><scroll><seleniumbase>
2023-11-15 02:35:23
2
347
Caiden
77,484,746
9,905,667
Getting an error when installing tensorflow
<p>I am getting an error when trying to download tensorflow through pip.</p> <pre><code>PS C:\Users\12158&gt; pip install tensorflow ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none) ERROR: No matching distribution found for tensorflow </code></pre> <p>Very strange. Through research there seems to be an issue with newer python versions and installing tensorflow. Is there an option to not have to downgrade python?</p> <pre><code>PS C:\Users\12158&gt; python -c &quot;import struct; print(8 *struct.calcsize('P'))&quot; 64 PS C:\Users\12158&gt; </code></pre> <p>My version is Python 3.12.0.</p> <p>Thanks!</p>
<python><powershell><tensorflow><pip>
2023-11-15 01:32:55
1
726
SantiClaus
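At the time of the TensorFlow question above, TensorFlow did not publish wheels for Python 3.12, which is exactly why pip reports `No matching distribution found`. Rather than downgrading the system interpreter, one hedged option (assuming a 3.11 interpreter is installed side by side, using the Windows `py` launcher visible in the PowerShell prompt):

```shell
# Create a virtual environment on a TensorFlow-supported interpreter
py -3.11 -m venv tf-env
.\tf-env\Scripts\Activate.ps1
python -m pip install --upgrade pip
python -m pip install tensorflow
```

The 3.12 install stays untouched; only the `tf-env` environment runs 3.11 for TensorFlow work, and newer TensorFlow releases that support 3.12 would remove the need for this entirely.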
77,484,738
5,635,892
Loss function doesn't have requires_grad=True in pytorch
<p>Hello I have the following code (simplified version of my code but able to reproduce the error):</p> <pre><code>import numpy as np from numpy import linalg as LA import torch import torch.optim as optim import torch.nn as nn def func(x,pars): a = pars[0] b = pars[1] c = pars[2] d = pars[3] x = x.int() H = torch.tensor([[a,b,1],[2,3,c],[4,d,7]]) eigenvalues, eigenvectors = np.linalg.eigh(H) trans_freq = eigenvalues[x] return torch.tensor(trans_freq) x_index = torch.tensor([1,2]) y_vals = torch.tensor([0.5,12]) params = torch.tensor([1.,2.,3.,4.]) params.requires_grad=True opt = optim.SGD([params], lr=100) mse_loss = nn.MSELoss() for i in range(10): opt.zero_grad() loss = mse_loss(func(x_index,params),y_vals) print(x_index.requires_grad) print(params.requires_grad) print(y_vals.requires_grad) print(loss.requires_grad) loss.backward() opt.step() print(loss) </code></pre> <p>The output is:</p> <pre><code>False True False False </code></pre> <p>and I am getting this error: <code>RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn</code> from this line: <code>loss.backward()</code>. Indeed the loss doesn't have <code>requires_grad=True</code> but why is that the case (setting it manually in the for loop doesn't work either). What should I do? Thank you!</p>
<python><python-3.x><pytorch><gradient><loss-function>
2023-11-15 01:29:06
2
719
Silviu
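For the PyTorch question above, the graph is cut in two places: `np.linalg.eigh` leaves autograd entirely, and re-wrapping the result with `torch.tensor(...)` (plus building `H` via `torch.tensor([[a, b, 1], ...])`, which copies values out of `pars`) detaches the parameters. A sketch that stays in torch end to end — note `torch.linalg.eigh`, like `np.linalg.eigh`, reads only one triangle of `H`:

```python
import torch

def func(x, pars):
    a, b, c, d = pars  # 0-dim views into `pars`, so gradients can flow
    one = torch.ones((), dtype=pars.dtype)
    # Build H with torch.stack instead of torch.tensor so it stays in the graph.
    H = torch.stack([
        torch.stack([a, b, one]),
        torch.stack([2 * one, 3 * one, c]),
        torch.stack([4 * one, d, 7 * one]),
    ])
    eigenvalues, _ = torch.linalg.eigh(H)  # differentiable, unlike np.linalg.eigh
    return eigenvalues[x]

params = torch.tensor([1., 2., 3., 4.], requires_grad=True)
x_index = torch.tensor([1, 2])
y_vals = torch.tensor([0.5, 12.])
loss = torch.nn.functional.mse_loss(func(x_index, params), y_vals)
print(loss.requires_grad)  # True
loss.backward()
```

This is a sketch of the fix, not a drop-in replacement for the whole loop; with a learning rate of 100 the original SGD setup may still diverge, and gradients through `eigh` can be unstable near repeated eigenvalues.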
77,484,622
9,905,667
ValueError: 'editdistance/bycython.pyx' doesn't match any files when downloading keras_ocr
<p>When trying to download keras_ocr through pip, I get the following error:</p> <pre><code>ValueError: 'editdistance/bycython.pyx' doesn't match any files </code></pre> <p>I have tried everything I can think of.</p> <ol> <li>Upgrading pip</li> <li>Upgrading python</li> <li>Installing dependencies</li> </ol>
<python><keras><pip>
2023-11-15 00:47:29
1
726
SantiClaus
77,484,490
11,091,148
Pydantic optional throws `Field required` in nested json list
<p>I have a nested json that I validate with pydantic:</p> <pre><code>app_dict={'apps': [{'app_id': 'a_1', 'group_id': '123', 'report_id': '456', 'principal_id': 'p_1'}, {'app_id': 'a_2', 'group_id': '789', 'report_id': '987'}]} class PbiApps(BaseModel): app_id: ty.Required[pty.StrictStr] group_id: ty.Required[pty.StrictStr] report_id: ty.Required[pty.StrictStr] principal_id: ty.Optional[pty.StrictStr] class PbiMain(BaseModel): apps: ty.Optional[ty.List[PbiApps]] </code></pre> <p>But if I try to parse it into <code>PbiMain</code> it throws a ValidationError for <code>a_2</code></p> <pre><code>PbiMain(**app_dict) ValidationError: 1 validation error for Settings apps.1.principal_id Field required [type=missing, input_value={'app_id': 'a_2', '...test', 'report_id': '789'}, input_type=DictConfig] For further information visit https://errors.pydantic.dev/2.5/v/missing </code></pre> <p>I can set the <code>principal_id: ty.Optional[pty.StrictStr] = None</code> to make it work, but I would rather have the field not present than of type None.</p> <p>Is there a way to achieve this?</p>
<python><python-typing><pydantic>
2023-11-14 23:56:00
2
526
Bennimi
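For the pydantic question above: in pydantic v2 the model attribute must exist, so a `None` default is needed for validation to pass — but the key can still be kept out of the *serialized* output with `exclude_none` (or `exclude_unset`) at dump time. A hedged sketch using plain `typing`/`pydantic` names in place of the question's `ty`/`pty` aliases:

```python
from typing import List, Optional
from pydantic import BaseModel, StrictStr

class PbiApps(BaseModel):
    app_id: StrictStr
    group_id: StrictStr
    report_id: StrictStr
    principal_id: Optional[StrictStr] = None  # default lets validation pass

class PbiMain(BaseModel):
    apps: Optional[List[PbiApps]] = None

app_dict = {'apps': [
    {'app_id': 'a_1', 'group_id': '123', 'report_id': '456', 'principal_id': 'p_1'},
    {'app_id': 'a_2', 'group_id': '789', 'report_id': '987'},
]}

model = PbiMain(**app_dict)
# exclude_none drops None-valued fields, so the key is absent downstream.
dumped = model.model_dump(exclude_none=True)
print(dumped['apps'][1])  # no 'principal_id' key
```

`exclude_unset=True` is the stricter variant if "never explicitly provided" should be distinguished from "explicitly set to None".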
77,484,372
437,456
asyncpg DataError: invalid input for query argument expected str, got int
<p>I admittedly don't understand how asyncpg's codecs work, but it seems to work counter to how I'd expect:</p> <pre><code>import asyncio import asyncpg async def main(): conn = await asyncpg.connect('postgresql://postgres@localhost/test') print(await conn.fetchval(&quot;select $1&quot;, 'a')) # this works: prints 'a' print(await conn.fetchval(&quot;select $1&quot;, 1)) # invalid input for query argument $1: 1 (expected str, got int) asyncio.run(main()) </code></pre> <p>It seems asyncpg wants the parameter to always be a string, but I want it to be an int. Why does this fail?</p>
<python><postgresql><asyncpg>
2023-11-14 23:12:40
1
5,340
DMac the Destroyer
77,484,338
20,235,789
How can I mock an imported dependency from my function file?
<p>I'm attempting to mock a config that is being imported in my security file here:</p> <pre><code>import aiohttp from fastapi import Header, HTTPException from .util.config import config async def get_user_profile_details( user_profile_id: str, ..., ... ): user_profile_url = f&quot;{config.entity.ENTITY_BASE_URL}/.../...&quot; async with aiohttp.ClientSession() as session: async with session.get(user_profile_url, headers=auth_header) as ... .... </code></pre> <p>here is my test setup:</p> <pre><code>import unittest from unittest.mock import patch from aioresponses import aioresponses from my_module.security import get_user_profile_details @patch('my_module.util.config.es', autospec=True) async def test_get_user_profile_details_success(self, mock_config): mock_config.entity.ENTITY_BASE_URL = &quot;http://mocked-entity-url&quot; user_profile_id = &quot;123&quot; ... ... </code></pre> <p>my test is failing even before it gets to this test function. It imports that <code>get_user_profile_details</code>, goes down its imports <code>from .util.config import config</code> , importing the config and failing when its trying to get an es instance(also in the config setup)</p> <pre><code> class Config(BaseSettings): ... ... es = ElasticsearchConfig() #this is what I'm trying to mock entity = EntityAPIConfig() ... config: Config = Config() </code></pre> <p>I've tried several ways.</p> <ul> <li><p>mocking the config like you see now</p> </li> <li><p><code>@patch('my_module.util.config.config', autospec=True)</code></p> </li> <li><p><code>@patch('my_module.util.Config', autospec=True)</code></p> </li> <li><p>tried using a:</p> </li> </ul> <pre><code> async def asyncSetUp(self): self.aiohttp_mock = aioresponses() </code></pre> <p>and mocking the request:</p> <pre><code> self.aiohttp_mock.get(... </code></pre> <p>Here is the error:</p> <pre><code> es = ElasticsearchConfig() E google.api_core.exceptions.PermissionDenied: 403 Permission denied on resource project test. 
</code></pre> <p>What is the correct way to mock that config?</p>
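One detail worth checking: `patch()` must target the name where it is looked up, and because the failing `ElasticsearchConfig()` call runs at import time, any patch has to already be active when the module is first imported (for example, apply the patch and only then import the module under test inside the test body). A self-contained sketch of the look-up-site rule, using an in-memory stand-in module; all names here are hypothetical, not the real project:

```python
import sys
import types
from unittest.mock import patch

# Stand-in for my_module.util.config, built in memory so the sketch runs anywhere.
cfg_mod = types.ModuleType("fake_util_config")
cfg_mod.config = types.SimpleNamespace(
    entity=types.SimpleNamespace(ENTITY_BASE_URL="http://real-url")
)
sys.modules["fake_util_config"] = cfg_mod


def build_profile_url(profile_id):
    # Looks `config` up on the module at call time, so patching the
    # module attribute is visible from here.
    config = sys.modules["fake_util_config"].config
    return f"{config.entity.ENTITY_BASE_URL}/profiles/{profile_id}"


mocked = types.SimpleNamespace(
    entity=types.SimpleNamespace(ENTITY_BASE_URL="http://mocked-entity-url")
)
with patch.object(cfg_mod, "config", mocked):
    assert build_profile_url("123") == "http://mocked-entity-url/profiles/123"
```

For the import-time 403 specifically, the common patterns are to patch `ElasticsearchConfig` in its defining module before the first import, or to feed the settings class harmless values via environment variables in the test setup.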
<python><python-unittest><python-unittest.mock>
2023-11-14 23:02:15
0
441
GustaMan9000
77,484,062
836,026
Read Image from numpy and display it: ValueError: conversion from L to PNG not supported
<p>I have a 2D array of an image (<strong>slice</strong>). The array dimension is (224, 224) and when I read it as an image as shown below, <em><strong>the image mode is shown as &quot;F&quot;</strong></em>, for some reason I need to save as &quot;PNG&quot; and display it. I'm getting the error message &quot;ValueError: conversion from L to PNG not supported&quot;</p> <p>The real image is shown below. See the below code.</p> <pre><code>from matplotlib import pyplot as plt from PIL import Image from matplotlib import cm # reproducible data slice = np.ones([224, 224], dtype = float) print(slice.shape) img_m3= Image.fromarray(slice)#.convert('RGB')#.convert('PNG') print(&quot;img .mode&quot;,img_m3 .mode) if img_m3 .mode != 'PNG': img_m3 = img_m3.convert('PNG') img_m3.save(&quot;/tmp/myimageb.png&quot;, &quot;PNG&quot;) im = Image.open(&quot;/tmp/myimageb.png&quot;) plt.imshow(im )#, vmin=0, vmax=255) plt.show() </code></pre> <p>Update:</p> <p>I also tried saving it as JPG, no errors, but I got a blank image:</p> <pre><code>if img_m3 .mode != 'RGB': img_m3 = img_m3.convert('RGB') img_m3.save(&quot;/tmp/myimageb.jpg&quot;, 'JPEG') #plt.imshow(img , cmap='gray', vmin=0, vmax=255) im = Image.open(&quot;/tmp/myimageb.jpg&quot;) plt.imshow(im )#, vmin=0, vmax=255) plt.show() </code></pre> <p><a href="https://i.sstatic.net/6PAtO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6PAtO.png" alt="original image" /></a></p> <p>Update2:</p> <p>I tried the solution in the answer, now I can see the image. But the quality degraded. 
See below, on top is the original image and bottom is the result.</p> <pre><code>import cv2 from matplotlib import pyplot as plt from PIL import Image print(image[0].shape) print(&quot;slice.max()&quot;,slice.max()) print(&quot;slice.min()&quot;,slice.min()) print(&quot;slice.mean()&quot;,slice.mean()) slice.min() and slice.mean() #img = Image.fromarray((slice*255).astype(np.uint8)).convert('L') img = Image.fromarray(slice.astype(np.uint8)) img.save(&quot;/tmp/myimageb.png&quot;, &quot;PNG&quot;) plt.imshow(img )#, cmap='gray', vmin=0, vmax=255) plt.show() (224, 224) slice.max() 0.5193082 slice.min() -2.0836544 slice.mean() -0.37065452 </code></pre> <p><a href="https://i.sstatic.net/3Gacy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3Gacy.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/xRMOc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xRMOc.png" alt="enter image description here" /></a></p>
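Two things seem to be at play: &quot;PNG&quot; is a file format, not an image mode, which is what `convert('PNG')` trips over; and a float ('F'-mode) array whose values sit in roughly [-2.08, 0.52] collapses to near-black when cast straight to uint8, which would explain the blank and degraded results. A sketch that rescales to the 0-255 range before converting (random data stands in for the real slice):

```python
import os
import tempfile

import numpy as np
from PIL import Image

# Float data in roughly the question's range (min ~ -2.08, max ~ 0.52).
rng = np.random.default_rng(0)
slice_ = rng.uniform(-2.08, 0.52, size=(224, 224)).astype(np.float32)

# Rescale to 0-255 BEFORE the uint8 cast; a straight cast truncates
# everything outside [0, 255] and loses the image.
lo, hi = slice_.min(), slice_.max()
scaled = ((slice_ - lo) / (hi - lo) * 255).astype(np.uint8)

img = Image.fromarray(scaled)                 # mode 'L' (8-bit grayscale)
path = os.path.join(tempfile.gettempdir(), "myimageb.png")
img.save(path)                                # PNG is the format, 'L' is the mode
reloaded = np.array(Image.open(path))         # round-trips losslessly
```

PNG is lossless, so the reloaded array matches `scaled` exactly; only the absolute scale of the original floats is gone, which is unavoidable in an 8-bit file.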
<python><image><matplotlib>
2023-11-14 21:53:44
1
11,430
user836026
77,484,060
9,415,459
efficient iteration & application of a function in pandas, polars or torch? Is lazy possible?
<p><strong>Goal</strong>: Find an efficient/fastest way to iterate over a table by column and run a function on each column, in python or with a python library.</p> <p><strong>Background</strong>: I have been exploring methods to improve the speed of my functions. This is because I have two models/algorithms that I want to run one small, one large (uses torch) and the large is slow. I have been using the small one for testing. The small model is seasonal decomposition of each column.</p> <p><strong>Setup</strong>:</p> <pre><code>Testing environment: ec2, t2 large. X86_64 Python version: 3.11.5 Polars: 0.19.13 pandas: 2.1.1 numpy: 1.26.0 </code></pre> <p>demo data in pandas/polars:</p> <pre><code>rows = 11020 columns = 1578 data = np.random.rand(rows, columns) df = pd.DataFrame(data) # df_p = pl.from_pandas(df) # convert if needed. </code></pre> <p><strong>Pandas</strong></p> <p>pandas and dict:</p> <pre><code>from statsmodels.tsa.seasonal import seasonal_decompose import pandas as pd class pdDictTrendExtractor: def __init__(self, period: int = 365) -&gt; None: self._period = period self._model = 'Additive' def process_col(self, column_data: pd.Series = None) -&gt; torch.Tensor: self.data = column_data result = seasonal_decompose(self.data, model=self._model, period=self._period) trend = result.trend.fillna(0).values return trend @classmethod def process_df(cls, dataframe: pd.DataFrame) -&gt; pd.DataFrame: trend_data_dict = {} for column in dataframe.columns: trend_data_dict[column] = cls().process_col(dataframe[column]) trend_dataframes = pd.DataFrame(trend_data_dict, index=dataframe.index) return trend_dataframes import timeit start = timeit.default_timer() trend_tensor = pdDictTrendExtractor.process_df(df) stop = timeit.default_timer() execution_time = stop - start print(&quot;Program Executed in &quot;+str(execution_time)) </code></pre> <p>Program Executed in 14.349091062998923</p> <p>with list comprehension instead of for loop:</p> <pre><code>class 
pdDict2TrendExtractor: def __init__(self, period: int = 365) -&gt; None: self._period = period self._model = 'Additive' def process_col(self, column_data: pd.Series = None) -&gt; pd.Series: self.data = column_data result = seasonal_decompose(self.data, model=self._model, period=self._period) trend = result.trend.fillna(0).values return trend @classmethod def process_df(cls, dataframe: pd.DataFrame) -&gt; pd.DataFrame: trend_data_dict = {column: cls().process_col(dataframe[column]) for column in dataframe.columns} trend_dataframes = pd.DataFrame(trend_data_dict, index=dataframe.index) return trend_dataframes </code></pre> <p>Program Executed in 14.343959668000025</p> <p>Class using pandas and torch:</p> <pre><code>from statsmodels.tsa.seasonal import seasonal_decompose import torch import pandas as pd class pdTrendExtractor: def __init__(self, period: int = 365) -&gt; None: self._period = period self._model = 'Additive' # Store data as an instance variable def process_col(self, column_data: pd.Series = None) -&gt; torch.Tensor: self.data = column_data result = seasonal_decompose(self.data, model=self._model, period=self._period) trend = result.trend.fillna(0).values return torch.tensor(trend, dtype=torch.float32).view(-1, 1) @classmethod def process_df(cls, dataframe: pd.DataFrame) -&gt; torch.Tensor: trend_dataframes = torch.Tensor() for column in dataframe.columns: trend_data = cls().process_col(dataframe[column]) trend_dataframes = torch.cat((trend_dataframes, trend_data), dim=1) return trend_dataframes start = timeit.default_timer() trend_tensor = pdTrendExtractor.process_df(df_p) stop = timeit.default_timer() execution_time = stop - start print(&quot;Program Executed in &quot;+str(execution_time)) </code></pre> <p>Program Executed in 23.14214362200073</p> <p>with dict, multiprocessing &amp; list comprehension: As suggested by @roganjosh &amp; @jqurious below.</p> <pre><code>from multiprocessing import Pool class pdMTrendExtractor: def __init__(self, period: int 
= 365) -&gt; None: self._period = period self._model = 'Additive' def process_col(self, column_data: pd.Series = None) -&gt; pd.Series: result = seasonal_decompose(column_data, model=self._model, period=self._period) trend = result.trend.fillna(0).values return trend @classmethod def process_df(cls, dataframe: pd.DataFrame) -&gt; pd.DataFrame: with Pool() as pool: trend_data_dict = dict(zip(dataframe.columns, pool.map(cls().process_col, [dataframe[column] for column in dataframe.columns]))) return pd.DataFrame(trend_data_dict, index=dataframe.index) </code></pre> <p>Program Executed in 4.582350738997775, Nice and fast.</p> <p><strong>Polars</strong></p> <p>Polars &amp; torch:</p> <pre><code>class plTorTrendExtractor: def __init__(self, period: int = 365) -&gt; None: self._period = period self._model = 'Additive' # Store data as an instance variable def process_col(self, column_data: pl.Series = None) -&gt; torch.Tensor: self.data = column_data result = seasonal_decompose(self.data, model=self._model, period=self._period) trend = result.trend[np.isnan(result.trend)] = 0 return torch.tensor(trend, dtype=torch.float32).view(-1, 1) @classmethod def process_df(cls, dataframe: pl.DataFrame) -&gt; torch.Tensor: trend_dataframes = torch.Tensor() for column in dataframe.columns: trend_data = cls().process_col(dataframe[column]) trend_dataframes = torch.cat((trend_dataframes, trend_data), dim=1) return trend_dataframes </code></pre> <p>Program Executed in 13.813817326999924</p> <p>polars &amp; lambda:</p> <pre><code>start = timeit.default_timer() df_p = df_p.select([ pl.all().map_batches(lambda x: pl.Series(seasonal_decompose(x, model=&quot;Additive&quot;, period=365).trend)).fill_nan(0) ] ) stop = timeit.default_timer() execution_time = stop - start print(&quot;Program Executed in &quot;+str(execution_time)) </code></pre> <p>Program Executed in 82.5596211330012</p> <p>I suspect this is written poorly, which is the reason it is so slow. 
I have yet to find a better method.</p> <p>So far I have tried apply_many, apply, map, map_batches, and map_elements, as well as with_columns vs. select and a few other combinations.</p> <p>polars only, for loop:</p> <pre><code>class plTrendExtractor: def __init__(self, period: int = 365) -&gt; None: self._period = period self._model = 'Additive' # Store data as an instance variable def process_col(self, column_data: pl.Series = None) -&gt; pl.DataFrame: self.data = column_data result = seasonal_decompose(self.data, model=self._model, period=self._period) # Handle missing values by replacing NaN with 0 result.trend[np.isnan(result.trend)] = 0 return pl.DataFrame({column_data.name: result.trend}) @classmethod def process_df(cls, dataframe: pl.DataFrame) -&gt; pl.DataFrame: trend_dataframes = pl.DataFrame() for column in dataframe.columns: trend_data = cls().process_col(dataframe[column]) trend_dataframes = trend_dataframes.hstack(trend_data) return trend_dataframes </code></pre> <p>Program Executed in 13.34212675299932</p> <p>with list comprehensions:</p> <p>I tried with polars and list comprehension. 
But having difficulty with polars syntax.</p> <p>with a dict &amp; for loop:</p> <p>Program Executed in 13.743039597999996</p> <p>with dict &amp; list comprehension:</p> <pre><code>class plDict2TrendExtractor: def __init__(self, period: int = 365) -&gt; None: self._period = period self._model = 'Additive' def process_col(self, column_data: pl.Series = None) -&gt; pl.Series: self.data = column_data result = seasonal_decompose(self.data, model=self._model, period=self._period) result.trend[np.isnan(result.trend)] = 0 return pl.Series(result.trend) @classmethod def process_df(cls, dataframe: pl.DataFrame) -&gt; pl.DataFrame: trend_data_dict = {column: cls().process_col(dataframe[column]) for column in dataframe.columns} trend_dataframes = pl.DataFrame(trend_data_dict) return trend_dataframes </code></pre> <p>Program Executed in 13.008102383002552</p> <p>with dict, multiprocessing &amp; list comprehension: As suggested by @roganjosh &amp; @jqurious below.</p> <pre><code>from multiprocessing import Pool class plMTrendExtractor: def __init__(self, period: int = 365) -&gt; None: self._period = period self._model = 'Additive' def process_col(self, column_data: pl.Series = None) -&gt; pl.Series: result = seasonal_decompose(column_data, model=self._model, period=self._period) result.trend[np.isnan(result.trend)] = 0 return pl.Series(result.trend) @classmethod def process_df(cls, dataframe: pl.DataFrame) -&gt; pl.DataFrame: with Pool() as pool: trend_data_dict = dict(zip(dataframe.columns, pool.map(cls().process_col, [dataframe[column] for column in dataframe.columns]))) return pl.DataFrame(trend_data_dict) </code></pre> <p>Program Executed in 4.997288776001369, Nice!.</p> <p>With lazyFrame?</p> <p>I can add lazy &amp; collect to the <code>df_p.select()</code> method above but doing this does not improve the time. One of the key issues seems to be that the function that is passed to lazy operations needs to be lazy too. 
I was hoping that it might run each column in parallel.</p> <p><strong>current conclusions &amp; notes</strong></p> <ul> <li>I am getting a second to half a second of variation for some of the runs.</li> <li>Pandas and dict seems to be reasonable. If you care about the index, then this can be a good option.</li> <li>Polars with dict and list comprehension is the &quot;fastest&quot;, but not by much; considering the variation, the difference is even smaller.</li> <li>Both options also have the benefit of not needing additional packages.</li> <li>There seems to be room for improvement in polars in terms of better code, but I am not sure this would improve the time much, as the main compute time is seasonal_decompose, which takes ~0.012 seconds per column if run alone.</li> <li>Open to any feedback on improvements.</li> <li>Warning: I haven't done full output validation <strong>yet</strong> on the functions above.</li> <li>How the variable is returned from process_col does have minor impacts on speed, as expected, and is part of what I was tuning here. For example, with polars, returning a numpy array gave a slower time; returning a numpy array but declaring -&gt; pl.Series seems about the same speed, with one or two trials being faster (than above).</li> </ul> <p><strong>after feedback/added multiprocessing</strong></p> <ul> <li>Surprise surprise: multiprocessing for the win. This seems to be regardless of pandas or polars.</li> </ul>
<python><python-3.x><pandas><pytorch><python-polars>
2023-11-14 21:53:14
1
385
Aaron C
77,484,040
10,664,542
With Python unittest OOP base & child class, when executed the base class is observed to run the constructor & test_ method more times than expected
<p><strong>Technical areas:</strong></p> <ul> <li>Python Programming</li> <li>Python OOP</li> <li>Python unittest</li> </ul> <hr /> <p><strong>Description:</strong></p> <p>With Python OOP (unittest scenario), I am a little confused.</p> <p>I have a base/parent class. I run it and see the constructor called once and a method in the class called once as expected.</p> <pre><code>import unittest import datetime class TestBase(unittest.TestCase): &quot;&quot;&quot; All test classes should inherit this to get: self.config &quot;&quot;&quot; def __init__(self, *args, **kwargs): super(TestBase, self).__init__(*args, **kwargs) self.now = str(datetime.datetime.now()) print('TestBase.__init__(): now=[' + self.now + ']', flush=True) def test_canary_base(self): &quot;&quot;&quot; A test that is known good and will always pass &quot;&quot;&quot; print(&quot;TestBase.test_canary_base()&quot;, flush=True) </code></pre> <hr /> <p>Now I create a child class derived from the base/parent class and I run it.</p> <pre><code>import datetime from test.test_base import TestBase class TestChild(TestBase): &quot;&quot;&quot; &quot;&quot;&quot; def __init__(self, *args, **kwargs): super(TestChild, self).__init__(*args, **kwargs) self.now = str(datetime.datetime.now()) print('TestChild.__init__(): now=[' + self.now + ']') def test_canary_child(self): &quot;&quot;&quot; A test that is known good and will always pass &quot;&quot;&quot; print(&quot;TestChild.test_canary_child()&quot;, flush=True) </code></pre> <p>I see the <strong>base/parent class constructor called three times</strong> and a <strong>test_</strong> method in the base/parent class called <strong>twice</strong>. 
This is unexpected.</p> <hr /> <p>Additionally, I see the <strong>child constructor called twice</strong> and a <strong>test_</strong> method in the class called once.</p> <p>Why is the derived class constructor called twice?</p> <hr /> <p>Why, in both the base/parent class and the child class are the constructor and test_ method not called just once?</p> <hr /> <p>I have a git repo from which actual code can be cloned and run locally to demonstrate the problem (will commit/push in a moment).</p> <p><strong><a href="https://github.com/devlocalca/python-unittest-oop" rel="nofollow noreferrer">https://github.com/devlocalca/python-unittest-oop</a></strong></p> <hr /> <p><strong>To reproduce:</strong></p> <ol> <li>clone the git repo</li> <li>setup IDE project</li> <li>run the base/parent class: test/test_base.py</li> <li>observe the output</li> </ol> <p>You will see the following output:</p> <pre><code>TestBase.__init__(): now=[2023-11-14 14:32:52.052573] TestBase.test_canary_base() </code></pre> <p>now run the child class: test/test_child.py</p> <p>You will see the following output (that I would not expect):</p> <pre><code>TestBase.__init__(): now=[2023-11-14 14:26:02.137958] TestBase.__init__(): now=[2023-11-14 14:26:02.137958] TestChild.__init__(): now=[2023-11-14 14:26:02.137958] TestBase.__init__(): now=[2023-11-14 14:26:02.137958] TestChild.__init__(): now=[2023-11-14 14:26:02.137958] TestBase.test_canary_base() TestBase.test_canary_base() TestChild.test_canary_child() </code></pre> <hr /> <p>now run a <strong>test_grandchild.py</strong> that inherits from <strong>test_child.py</strong></p> <hr /> <p>How would this be fixed so that I would see the expected output of the base/parent constructor and <strong>test_</strong> method called only once (along with the child class being called only once).</p> <p>I believe a working example would help me explore this issue.</p>
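What is happening: unittest builds one TestCase instance per test method, and `TestChild` inherits `test_canary_base`, so running the child's file collects `TestBase` (1 test, 1 instance) plus `TestChild` (2 tests: the inherited one and its own, 2 instances whose `super().__init__` also fires). That is 3 base-constructor calls, 2 child-constructor calls, and `test_canary_base` executing twice, exactly the observed output. The usual fix is to keep shared helpers in a plain mixin that does not subclass `TestCase` (e.g. `class TestChild(BaseMixin, unittest.TestCase)`), so the base contributes no collectible tests of its own. A runnable sketch of the counting:

```python
import unittest


class Base(unittest.TestCase):
    def test_canary_base(self):
        pass


class Child(Base):
    def test_canary_child(self):
        pass


loader = unittest.TestLoader()
# unittest instantiates one TestCase PER test method; Child inherits
# test_canary_base, so Child alone yields two instances:
n_base = loader.loadTestsFromTestCase(Base).countTestCases()
n_child = loader.loadTestsFromTestCase(Child).countTestCases()
# Running the child's module collects both classes: 1 + 2 = 3 base
# __init__ calls, and test_canary_base runs twice.
```

A grandchild compounds this further, since it inherits both canary tests and the imported parent classes are collected alongside it.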
<python><python-3.x><oop><python-unittest>
2023-11-14 21:49:38
0
1,346
user10664542
77,483,936
356,875
How can I convert from string, apply the timezone offset, make naive and convert back to string a Pandas Series Index in Python?
<p>I have a Pandas time-series object with an Index like this:</p> <pre><code>Index(['2023-05-31T00:05:00+0300', '2023-05-31T00:06:00+0300', ... '2023-09-15T13:48:00+0300', '2023-09-15T13:49:00+0300'], dtype='object', length=76106) </code></pre> <p>and I need to convert into this:</p> <pre><code>Index(['2023-05-30T21:05:00', '2023-05-30T21:06:00', ... '2023-09-15T10:48:00', '2023-09-15T10:49:00'], dtype='object', length=76106) </code></pre> <p>What I am trying to do, effectively, is convert the string to datetime, subtract the timezone offset, make it naive and convert it back to string.</p> <p>I know how to do this in a lot of (somewhat complicated) steps which will involve converting the series to dictionary, converting the (datetime) keys one by one, making the dictionary a new series, but is there a way to do this in few steps, probably in-place?</p> <p>Any help will be greatly appreciated.</p>
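One vectorized way, sketched on a two-element index: parse the strings as UTC (which applies the offset), drop the timezone to make them naive, and format back to strings. No dictionary round-trip is needed; the result can be assigned straight back with `series.index = out`.

```python
import pandas as pd

idx = pd.Index(['2023-05-31T00:05:00+0300', '2023-05-31T00:06:00+0300'],
               dtype='object')

# parse -> convert to UTC (subtracting the +0300 offset) -> drop tz -> format
out = (pd.to_datetime(idx, utc=True)
         .tz_localize(None)
         .strftime('%Y-%m-%dT%H:%M:%S'))
```

`utc=True` also handles mixed offsets (e.g. a DST change mid-series) by normalizing everything to UTC before the timezone is stripped.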
<python><pandas><datetime><timezone>
2023-11-14 21:28:59
1
8,468
xpanta
77,483,933
18,020,941
Django base.html block overwriting
<p>I want to define, and append to blocks defined in the base.html template.</p> <p>Say I have the following template</p> <pre class="lang-html prettyprint-override"><code>&lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;My Project&lt;/title&gt; {% block append_to_me %}{% endblock %} &lt;/head&gt; &lt;body&gt; {% block content %}{% endblock content %} &lt;/body&gt; &lt;/html&gt; </code></pre> <p>I would then use the following template for my views, my views render some wagtail components, and those components might want to use the append_to_me block.</p> <p>This goes not only for the <em>wagtail blocks</em>, but also for <strong>plain django template tags</strong></p> <pre class="lang-html prettyprint-override"><code>{% extends &quot;base.html&quot; %} {% block content %} &lt;h2&gt;Content for My App&lt;/h2&gt; &lt;p&gt;Stuff etc etc.&lt;/p&gt; {# I want it to not matter where I use this. #} {% my_custom_tag %} {% endblock content %} </code></pre> <p>Where <code>{% my_custom_tag %}</code> would do something like this:</p> <pre class="lang-py prettyprint-override"><code>@register.simple_tag(takes_context=True) def my_custom_tag(context): objects = Preload.objects.all() for object in objects: append_to_header_block(object.html) </code></pre> <p>Wagtail block example:</p> <pre class="lang-py prettyprint-override"><code>class MyBlock(blocks.StructBlock): title = blocks.CharBlock() content = blocks.RichTextBlock() class Meta: template = 'myapp/myblock.html' </code></pre> <p>myapp/myblock.html</p> <pre class="lang-html prettyprint-override"><code>{% add_to_header IMG &quot;img/my-img.jpeg&quot; %} ... 
</code></pre> <p>I also want to be able to keep the content of previous calls to the <code>add_to_header</code> function, as to not overwrite the previous content.</p> <p>I just cannot figure out how I would implement this, because there are a few issues:</p> <ul> <li>Order of evaluation, I am pretty sure the content of base.html gets rendered before any of the other templates. <ul> <li>Maybe this can be fixed by <em>somehow</em> overwriting the <code>append_to_me</code> block from anywhere; and calling <code>block.super</code> every time? <em>Somehow</em> would be my question.</li> </ul> </li> <li>Wagtail blocks might not even know they are not used in base.html due to their versatility.</li> </ul> <p>Any ideas as to how I would implement this? I am not even sure if this is possible, but I would love to hear your thoughts on this.</p>
<python><django><django-templates><wagtail><templatetags>
2023-11-14 21:28:02
0
1,925
nigel239
77,483,917
2,823,719
Scopes confusion using SMTP to send email using my Gmail account with XOAUTH2
<p>My application has an existing module I use for sending emails that accesses the SMTP server and authorizes using a user (email address) and password. Now I am trying to use Gmail to do the same using my Gmail account, which, for the sake of argument, we say is booboo@gmail.com (it's actually something different).</p> <p>First, I created a Gmail application. On the consent screen, which was a bit confusing, I started to add scopes that were either &quot;sensitive&quot; or &quot;restricted&quot;. If I wanted to make the application &quot;production&quot; I was told that it had to go through a verification process and I had to produce certain documentation. This was not for me as I, the owner of this account, am only trying to connect to it for the sake of sending emails programmatically. I then created credentials for a desktop application and downloaded them to the file <em>credentials.json</em>.</p> <p>Next I acquired an access token with the following code:</p> <pre class="lang-py prettyprint-override"><code>from google_auth_oauthlib.flow import InstalledAppFlow SCOPES = ['https://mail.google.com/'] def get_initial_credentials(*, token_path, credentials_path): flow = InstalledAppFlow.from_client_secrets_file(credentials_path, SCOPES) creds = flow.run_local_server(port=0) with open(token_path, 'w') as f: f.write(creds.to_json()) if __name__ == '__main__': get_initial_credentials(token_path='token.json', credentials_path='credentials.json') </code></pre> <p>A browser window opens up saying that this is not a verified application and I am given a chance to go &quot;back to safety&quot; but I click on the Advanced link and eventually get my token.</p> <p>I then try to send an email with the following code:</p> <pre class="lang-py prettyprint-override"><code>import smtplib from email.mime.text import MIMEText import base64 import json from google.auth.transport.requests import Request from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import 
InstalledAppFlow SCOPES = ['https://www.googleapis.com/auth/gmail.send'] def get_credentials(token_path): with open(token_path) as f: creds = Credentials.from_authorized_user_info(json.load(f), SCOPES) if not creds.valid: creds.refresh(Request()) with open(token_path, 'w') as f: f.write(creds.to_json()) return creds def generate_OAuth2_string(access_token): auth_string = f'user=booboo\1auth=Bearer {access_token}\1\1' return base64.b64encode(auth_string.encode('utf-8')).decode('ascii') message = MIMEText('I need lots of help!', &quot;plain&quot;) message[&quot;From&quot;] = 'booboo@gmail.com' message[&quot;To&quot;] = 'booboo@gmail.com' message[&quot;Subject&quot;] = 'Help needed with Gmail' creds = get_credentials('token.json') xoauth_string = generate_OAuth2_string(creds.token) with smtplib.SMTP('smtp.gmail.com', 587) as conn: conn.starttls() conn.docmd('AUTH', 'XOAUTH2 ' + xoauth_string) conn.sendmail('booboo', ['booboo@gmail.com'], message.as_string()) </code></pre> <p>This works but note that I used a different scope <strong><a href="https://www.googleapis.com/auth/gmail.send" rel="nofollow noreferrer">https://www.googleapis.com/auth/gmail.send</a></strong> instead of the <strong><a href="https://mail.google.com/" rel="nofollow noreferrer">https://mail.google.com/</a></strong> I used to obtain the initial access token.</p> <p>I then edited the application to add the scope <strong><a href="https://www.googleapis.com/auth/gmail.send" rel="nofollow noreferrer">https://www.googleapis.com/auth/gmail.send</a></strong>. That required me to put the application in testing mode. I did not understand the section to add &quot;test users&quot;, that is I had no idea what I could have/should have entered here. I then generated new credentials and a new token as above. Then when I go to send my email, I see (debugging turned on):</p> <pre class="lang-None prettyprint-override"><code>... reply: b'535-5.7.8 Username and Password not accepted. 
Learn more at\r\n' reply: b'535 5.7.8 https://support.google.com/mail/?p=BadCredentials l19-20020ac84a93000000b0041b016faf7esm2950068qtq.58 - gsmtp\r\n' reply: retcode (535); Msg: b'5.7.8 Username and Password not accepted. Learn more at\n5.7.8 https://support.google.com/mail/?p=BadCredentials l19-20020ac84a93000000b0041b016faf7esm2950068qtq.58 - gsmtp' ... </code></pre> <p>But I never sent a password, only the XOAUTH2 authorization string. I don't know whether this occurred because I hadn't added test users. For what it's worth, I do not believe that this new token had expired yet, and therefore it was not refreshed.</p> <p>I didn't try it, but had I made the application &quot;production&quot;, would it have worked? Again, I don't want to have to go through a whole verification process with Gmail. Unfortunately, I don't have a specific question other than I would like to define an application with the more restricted scope and use that, but it seems impossible without going through this verification. Any suggestions?</p>
<python><oauth-2.0><gmail><google-oauth><smtp-auth>
2023-11-14 21:24:30
1
45,536
Booboo
77,483,914
9,021,875
How to maintain a pool of processes
<p>I have a project in which I run multiple processes to do multiple jobs, the jobs are distributed via a queue.</p> <pre><code>queue = multiprocessing.Queue() process = multiprocessing.spawn( run_alg, args=(queue,), nprocs=process_num, join=False, ) process.join() </code></pre> <p>However, it is possible for the processes to stop working or to crash mid-operation. I want to create a pool of x processes where every time a process is shut down, a new process will be created to take its place until the queue is empty.</p> <p>Is it possible, how?</p>
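A sketch of one way to do this with the standard library's `multiprocessing` (the question's `multiprocessing.spawn` signature looks like `torch.multiprocessing.spawn`, but the supervisor idea is the same): a monitor loop that restarts any worker found dead while jobs remain. Caveat: a job a worker had already pulled when it crashed is lost with this scheme, so a truly crash-proof design would re-enqueue in-flight jobs, or lean on `multiprocessing.Pool`, which already replaces workers it retires (see `maxtasksperchild`).

```python
import multiprocessing as mp
import queue


def worker(jobs, results):
    # Pull jobs until the queue stays empty briefly, then exit.
    while True:
        try:
            job = jobs.get(timeout=0.5)
        except queue.Empty:
            return
        results.put(job * 2)          # stand-in for the real run_alg work


def run_supervised(njobs, nprocs=2):
    jobs, results = mp.Queue(), mp.Queue()
    for i in range(njobs):
        jobs.put(i)
    procs = [mp.Process(target=worker, args=(jobs, results))
             for _ in range(nprocs)]
    for p in procs:
        p.start()
    done = []
    for _ in range(100):              # hard cap instead of `while True`
        if len(done) == njobs:
            break
        try:
            done.append(results.get(timeout=1))
        except queue.Empty:
            pass
        # Supervisor step: replace any worker that died while jobs remain.
        for i, p in enumerate(procs):
            if not p.is_alive() and not jobs.empty():
                procs[i] = mp.Process(target=worker, args=(jobs, results))
                procs[i].start()
    for p in procs:
        p.join()
    return sorted(done)
```

The supervisor check runs on every result, so a crashed worker is replaced within about one `results.get` timeout.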
<python><multiprocessing><python-multiprocessing><multiprocessing-manager>
2023-11-14 21:23:28
0
1,839
Yedidya kfir
77,483,889
5,036,928
PyVista: Getting mesh indices (triangles)
<p>I was originally using <a href="https://github.com/pmneila/PyMCubes" rel="nofollow noreferrer">https://github.com/pmneila/PyMCubes</a> which conveniently outputs the mesh points and indices but have since moved to PyVista since for whatever reason PyMCubes applied some sort of weird transformation that isn't ideal. How can I extract the same information from PyVista?</p> <p>For a simple sphere:</p> <pre><code>def sphere(x, y, z): scalar = 8 * np.ones(len(x*y*z)) dist = (x**2 + y**2 + z**2)**(1/2) scalar[(dist &gt;= 1**2) | (dist &lt;= 0.75**2) | (z &lt;= 0)] = 0 return scalar # create a uniform grid to sample the function with n = 100 x_min, y_min, z_min = -2, -2, -2 grid = pv.ImageData( dimensions=(n, n, n), spacing=(abs(x_min) / n * 2, abs(y_min) / n * 2, abs(z_min) / n * 2), origin=(x_min, y_min, z_min), ) x, y, z = grid.points.T values = sphere(x, y, z) fig = go.Figure([go.Scatter3d(x=x, y=y, z=z, name='inner', mode='markers', marker=dict(size=values)),]) # fig.show(renderer='browser') mesh = grid.contour([1], values, method='marching_cubes').smooth() dist = np.linalg.norm(mesh.points, axis=1) mesh.plot(scalars=dist, smooth_shading=True, specular=1, opacity=0.3, cmap=&quot;plasma&quot;, show_scalar_bar=False) triangulated = mesh.extract_surface() triangles = triangulated.surface_indices().reshape(-1, 3) vertices = mesh.points </code></pre> <p>The vertices and mesh points I have grabbed definitely do not correspond to each other since plotting them independently of PyVista generates garbage.</p>
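For what it's worth, `PolyData.faces` is a single flat padded array, `[n_points, i0, i1, ..., n_points, i0, ...]`, so for an all-triangle mesh the usual extraction is `mesh.faces.reshape(-1, 4)[:, 1:]` (after `mesh.triangulate()` if quads may remain), paired with that same mesh's `mesh.points`. `surface_indices()` returns original point IDs rather than cell connectivity, which would explain the garbage when its output is treated as triangles. A numpy-only sketch of the layout:

```python
import numpy as np

# Two triangles in PyVista's padded connectivity layout:
# [3, i0, i1, i2,  3, i0, i1, i2]
faces = np.array([3, 0, 1, 2,
                  3, 2, 3, 0])

counts = faces.reshape(-1, 4)[:, 0]
assert (counts == 3).all()                 # the reshape is only safe if every cell is a triangle
triangles = faces.reshape(-1, 4)[:, 1:]    # drop the leading per-cell count
```

With `triangles` and `mesh.points` coming from the same object, indices and vertices stay in correspondence.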
<python><3d><mesh><pyvista>
2023-11-14 21:16:45
0
1,195
Sterling Butters
77,483,721
8,998,172
Is it possible to continuously subtract values from two different columns in a pandas dataframe?
<p>I have two pandas dataframes; one dataframe contains orders, the other one contains items that are in stock.</p> <pre><code># Sample DF 1 - Orders
# +--------------+------------+---------------+----------------+
# | T-Shirt Size | Ordered By | Shop Location | Order Quantity |
# +--------------+------------+---------------+----------------+
df_1 = pd.DataFrame(
    columns = ['size', 'ordered_by', 'shop_location', 'order_quantity'],
    data = [['L', 'Tom', 'London', 1],
            ['M', 'Alice', 'Manchester', 1],
            ['S', 'Alice', 'Manchester', 1],
            ['S', 'Georgia', 'Newcastle', 1],
            ['L', 'Bart', 'Manchester', 3],
            ['M', 'Bob', 'Manchester', 1],
            ['L', 'Toby', 'London', 2]]
)
</code></pre> <pre><code># Sample DF 2 - Stock
# +--------------+---------------+--------+-------+
# | T-Shirt Size | Shop Location | Stock  | Price |
# +--------------+---------------+--------+-------+
df_2 = pd.DataFrame(
    columns = ['size', 'shop_location', 'stock', 'price'],
    data = [['S', 'London', '5', '7.99'],
            ['M', 'London', '9', '7.99'],
            ['L', 'London', '3', '8.99'],
            ['XL', 'London', '7', '8.99'],
            ['S', 'Manchester', '2', '7.99'],
            ['M', 'Manchester', '2', '7.99'],
            ['L', 'Manchester', '15', '8.99'],
            ['XL', 'Manchester', '8', '8.99'],
            ['S', 'Newcastle', '2', '7.99'],
            ['M', 'Newcastle', '11', '7.99'],
            ['L', 'Newcastle', '4', '8.99'],
            ['XL', 'Newcastle', '1', '8.99']]
)
</code></pre> <p>After merging the dataframes, I would like to deduct the order quantity from the number of items that are in stock in a specific shop location, in an ongoing fashion.</p> <p>I'm getting as far as this:</p> <pre><code>df_merge = pd.merge(
    df_1,
    df_2,
    how='left',
    left_on=['shop_location', 'size'],
    right_on=['shop_location', 'size']
)
df_merge['stock'] = df_merge['stock'].astype(int) - df_merge['order_quantity'].astype(int)
df_merge
</code></pre> <pre><code>  size ordered_by shop_location  order_quantity  stock price
0    L        Tom        London               1      2  8.99
1    M      Alice    Manchester               1      1  7.99
2    S      Alice    Manchester               1      1  7.99
3    S    Georgia     Newcastle               1      1  7.99
4    L       Bart    Manchester               3     12  8.99
5    M        Bob    Manchester               1      1  7.99
6    L       Toby        London               2      1  8.99
</code></pre> <p>Obviously, this does what it is supposed to and simply deducts the value in the <code>order_quantity</code> column from the value in the <code>stock</code> column.</p> <p>However, what I would like to achieve is something like this:</p> <pre><code>  size ordered_by shop_location  order_quantity  stock price
...
1    M      Alice    Manchester               1      1  7.99
...
5    M        Bob    Manchester               1      0  7.99
...
</code></pre>
<python><pandas>
2023-11-14 20:44:10
1
1,005
holger
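One possible way to get the running stock the asker describes (a sketch, not taken from the question's answers): replace the flat subtraction with a per-group cumulative sum of the ordered quantities, so each order row is charged for its own order and every earlier order of the same size at the same shop.

```python
import pandas as pd

# the two sample frames from the question
df_1 = pd.DataFrame(
    columns=['size', 'ordered_by', 'shop_location', 'order_quantity'],
    data=[['L', 'Tom', 'London', 1], ['M', 'Alice', 'Manchester', 1],
          ['S', 'Alice', 'Manchester', 1], ['S', 'Georgia', 'Newcastle', 1],
          ['L', 'Bart', 'Manchester', 3], ['M', 'Bob', 'Manchester', 1],
          ['L', 'Toby', 'London', 2]])
df_2 = pd.DataFrame(
    columns=['size', 'shop_location', 'stock', 'price'],
    data=[['S', 'London', '5', '7.99'], ['M', 'London', '9', '7.99'],
          ['L', 'London', '3', '8.99'], ['XL', 'London', '7', '8.99'],
          ['S', 'Manchester', '2', '7.99'], ['M', 'Manchester', '2', '7.99'],
          ['L', 'Manchester', '15', '8.99'], ['XL', 'Manchester', '8', '8.99'],
          ['S', 'Newcastle', '2', '7.99'], ['M', 'Newcastle', '11', '7.99'],
          ['L', 'Newcastle', '4', '8.99'], ['XL', 'Newcastle', '1', '8.99']])

df_merge = df_1.merge(df_2, how='left', on=['shop_location', 'size'])

# running total of orders per (size, shop) instead of a one-off subtraction
running = df_merge.groupby(['size', 'shop_location'])['order_quantity'].cumsum()
df_merge['stock'] = df_merge['stock'].astype(int) - running
```

With the sample data this gives the desired rows: Alice's M/Manchester row shows 1 left and Bob's later row shows 0 left, and Toby's order of two London "L" shirts drains that stock to 0 instead of the 1 reported by the flat subtraction.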
77,483,713
5,013,143
How can I change, from the interpreter, the value of a variable contained in a module of a Python project?
<p>Say I have a project with a main.py file like this:</p> <pre><code>from pyFiles.module1 import v
from pyFiles.module1 import myClass1
from pyFiles.module2 import myClass2

def main():
    # main function
    print(&quot;start up!&quot;)

if __name__ == &quot;__main__&quot;:
    main()
</code></pre> <p>a folder &quot;pyFiles&quot; with an <code>__init__.py</code> file and two modules:</p> <p>module1.py:</p> <pre><code>x = 2

class myClass1():
    def __init__(self, a, b, c):
        self.a = a + x
        self.b = b
        self.c = c

    def method1(self):
        return self.a + self.b + self.c
</code></pre> <p>module2.py:</p> <pre><code>from .module1 import myClass1

class myClass2(myClass1):
    def __init__(self, a, b, c):
        super().__init__(a, b, c)
        self.a = 2*a
        self.b = 2*b
        self.c = 2*c

    def method2(self):
        return self.a * self.b*self.b
</code></pre> <p>The issue I have is with that <code>x</code> in module1.py, which I want to be changeable by the user from the interpreter, in a manner like this or similar:</p> <p>once the user types</p> <pre><code>x = 12
</code></pre> <p>then that <code>x</code> is changed for ALL modules from 2 to 12. How can I do it?</p>
<python>
2023-11-14 20:43:22
1
7,483
Stefano Fedele
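A hedged sketch of the usual answer to this kind of question (the one-argument `MyClass1` below is a made-up simplification of the question's `myClass1`): Python keeps a single instance of each imported module, so rebinding the attribute on the module object is visible to every function that looks `x` up at call time. A `from pyFiles.module1 import x` copy, by contrast, would not be updated.

```python
import types

# Stand-in for pyFiles/module1.py, built in-memory so the sketch is self-contained.
# In a real project you would just do: import pyFiles.module1 as module1
module1 = types.ModuleType('module1')
exec(
    "x = 2\n"
    "class MyClass1:\n"
    "    def __init__(self, a):\n"
    "        self.a = a + x\n",   # x is resolved in the module globals at call time
    module1.__dict__,
)

module1.x = 12                    # rebind the attribute ON the module object
obj = module1.MyClass1(1)
print(obj.a)                      # 13: every reader of module1.x now sees 12
```

The key design point is that assignment must go through the module object (`module1.x = 12`); a bare `x = 12` in the interpreter only creates a new name in the interactive namespace and leaves the module's `x` untouched.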
77,483,427
267,482
How to implement optional trailing arguments for a command with Typer in Python?
<p>How do I achieve syntax similar to</p> <pre><code>my_app --named_arg1=val1 -- misc_arg1 misc_arg2 ...
</code></pre> <p>or the same without <code>--</code>?</p>
<python><typer>
2023-11-14 19:47:28
1
18,954
bobah
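One way this is commonly done with Typer (a sketch under the assumption that the trailing values are free-form strings; the parameter names below mirror the question and are otherwise made up): declare the named parameters as Options and collect the rest in a variadic Argument. Click, which Typer builds on, treats a literal `--` as the end of options, so everything after it lands in the variadic argument.

```python
from typing import List, Optional

import typer

app = typer.Typer()

@app.command()
def main(
    named_arg1: str = typer.Option("default"),              # becomes --named-arg1 on the CLI
    misc_args: Optional[List[str]] = typer.Argument(None),  # zero or more trailing values
):
    # e.g.  my_app --named-arg1 val1 -- misc_arg1 misc_arg2
    typer.echo(f"named_arg1={named_arg1} misc={list(misc_args or [])}")

if __name__ == "__main__":
    app()
```

Without `--` this also works, as long as none of the trailing values themselves start with a dash; for dash-prefixed values the `--` separator is required so they are not parsed as options.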
77,483,100
3,611,472
TensorFlow terribly slow on Mac Studio M1 Ultra
<p>I have a Mac Studio M1 Ultra and I am trying to train a simple RNN to forecast time series.</p> <p>The code is the following:</p> <pre><code>import numpy as np
import keras
import tensorflow as tf

def generate_time_series(batch_size, n_steps):
    freq1, freq2, offsets1, offsets2 = np.random.rand(4, batch_size, 1)
    time = np.linspace(0, 1, n_steps)
    series = 0.5 * np.sin((time - offsets1) * (freq1 * 10 + 10))   # wave 1
    series += 0.2 * np.sin((time - offsets2) * (freq2 * 20 + 20))  # + wave 2
    series += 0.1 * (np.random.rand(batch_size, n_steps) - 0.5)    # + noise
    return series[..., np.newaxis].astype(np.float32)

np.random.seed(42)

n_steps = 50
series = generate_time_series(10000, n_steps + 1)
X_train, y_train = series[:7000, :n_steps], series[:7000, -1]
X_valid, y_valid = series[7000:9000, :n_steps], series[7000:9000, -1]
X_test, y_test = series[9000:, :n_steps], series[9000:, -1]

# Implementing a simple RNN
np.random.seed(42)
tf.random.set_seed(42)

model = keras.models.Sequential([keras.layers.SimpleRNN(1, input_shape=[None, 1])])
optimizer = tf.keras.optimizers.legacy.Adam(learning_rate=0.005)
model.compile(loss=&quot;mse&quot;, optimizer=optimizer)
history = model.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid))
</code></pre> <p>On my Mac M1, I have installed <code>tensorflow</code> 2.13.0 following the instructions <a href="https://developer.apple.com/metal/tensorflow-plugin/" rel="nofollow noreferrer">here</a> and I have <code>keras</code> 2.13.1. The Python version is 3.11.</p> <p>When I run the code, I get the following output during training:</p> <pre><code>2023-11-14 17:55:24.172843: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:114] Plugin optimizer for device_type GPU is enabled.
Epoch 1/20
219/219 [==============================] - 148s 676ms/step - loss: 0.4310 - val_loss: 0.2155
Epoch 2/20
219/219 [==============================] - 143s 651ms/step - loss: 0.1627 - val_loss: 0.1514
Epoch 3/20
219/219 [==============================] - 142s 649ms/step - loss: 0.1462 - val_loss: 0.1488
Epoch 4/20
219/219 [==============================] - 141s 644ms/step - loss: 0.1474 - val_loss: 0.1475
Epoch 5/20
219/219 [==============================] - 142s 649ms/step - loss: 0.1477 - val_loss: 0.1508
Epoch 6/20
219/219 [==============================] - 142s 649ms/step - loss: 0.1006 - val_loss: 0.0617
...
</code></pre> <p>which shows that it takes ~140 s per epoch.</p> <p>However, if I run the same code on Google Colaboratory, I get</p> <pre><code>Epoch 1/20
219/219 [==============================] - 6s 24ms/step - loss: 0.0485 - val_loss: 0.0173
Epoch 2/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0132 - val_loss: 0.0118
Epoch 3/20
219/219 [==============================] - 4s 17ms/step - loss: 0.0119 - val_loss: 0.0113
Epoch 4/20
219/219 [==============================] - 4s 18ms/step - loss: 0.0116 - val_loss: 0.0110
Epoch 5/20
219/219 [==============================] - 5s 23ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 6/20
219/219 [==============================] - 2s 8ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 7/20
219/219 [==============================] - 2s 8ms/step - loss: 0.0114 - val_loss: 0.0109
</code></pre> <p>which is 100x faster!</p> <p>Why such a difference?</p> <p>I have read that TensorFlow has some performance problems on Apple Silicon. Is there any way I can reach such performance?</p>
<python><tensorflow><keras><apple-m1>
2023-11-14 18:42:16
0
443
apt45
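Not a definitive answer, but a frequently suggested experiment for this symptom: a single-unit `SimpleRNN` is tiny and inherently sequential, so the per-batch dispatch overhead of the Metal GPU plugin can dwarf the actual compute. Two things worth trying (a configuration sketch, with no guarantee of matching Colab) are hiding the GPU so the model runs on the CPU, and raising the batch size to amortize the per-step overhead.

```python
import tensorflow as tf

# Option 1: run on CPU only.
# Must be called before any op has touched the GPU.
tf.config.set_visible_devices([], 'GPU')

# Option 2: amortize the per-batch dispatch cost with a larger batch.
# The question's fit() call uses Keras' default batch_size=32; e.g.:
# history = model.fit(X_train, y_train, epochs=20, batch_size=256,
#                     validation_data=(X_valid, y_valid))
```

Both changes only reduce overhead per training step; for this model size the CPU path is often the faster of the two on Apple Silicon.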