QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
76,675,858 | 5,513,260 | pytest unittest spark java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset | <p>I am running unit tests with pytest for PySpark code; a sample snippet from the code is given below. It looks like Spark or Hadoop runtime libraries are expected, but I thought unit testing does not really need the Spark runtime: the pyspark Python package alone should be enough, because tools like Jenkins won't have a Spark runtime installed. Please guide me.</p>
<pre><code>def read_inputfile_from_ADLS(self):
    try:
        if self.segment == "US":
            if self.input_path_2 is None or self.input_path_2 == "":
                df = self.spark.read.format("delta").load(self.input_path)
            else:
                df = self.spark.read.format("delta").load(self.input_path_2)
    except Exception as e:
        resultmsg = "error reading input file"
</code></pre>
<p>Pytest code</p>
<pre><code>import pytest
from unittest.mock import patch, MagicMock, Mock

class TestInputPreprocessor:
    inpprcr = None
    dataframe_reader = 'pyspark.sql.readwriter.DataFrameReader'

    def test_read_inputfile_from_ADLS(self, spark, tmp_path):
        self.segment = 'US'
        self.input_path_2 = tmp_path
        with patch(f'{self.dataframe_reader}.format',
                   MagicMock(autospec=True)) as mock_adls_read:
            self.inpprcr.read_inputfile_from_ADLS()
            assert mock_adls_read.call_count == 1
</code></pre>
<p>Error:</p>
<pre><code>AssertionError
---------------------------------------------- Captured stderr setup -------------------
---------------------------
23/07/12 23:58:42 WARN Shell: Did not find winutils.exe: java.io.FileNotFoundException:
java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset. -see
https://wiki.apache.org/hadoop/WindowsProblems
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use
setLogLevel(newLevel).
23/07/12 23:58:42 WARN NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
</code></pre>
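For what it's worth, a test can avoid any Spark/Hadoop runtime entirely by mocking the whole SparkSession object instead of patching <code>DataFrameReader</code> inside a real session; starting a real <code>spark</code> fixture is what triggers the winutils.exe / HADOOP_HOME lookup on Windows. A minimal, self-contained sketch (the <code>InputPreprocessor</code> class and its constructor here are hypothetical stand-ins for the asker's class):

```python
from unittest.mock import MagicMock

# Hypothetical stand-in for the class under test; names are illustrative only
class InputPreprocessor:
    def __init__(self, spark, segment, input_path, input_path_2=None):
        self.spark = spark
        self.segment = segment
        self.input_path = input_path
        self.input_path_2 = input_path_2

    def read_inputfile_from_ADLS(self):
        if self.segment == "US":
            path = self.input_path_2 or self.input_path
            return self.spark.read.format("delta").load(path)

def test_read_inputfile_from_ADLS():
    spark = MagicMock()  # no SparkSession, so no JVM, no HADOOP_HOME, no winutils.exe
    prep = InputPreprocessor(spark, "US", "/some/input")
    prep.read_inputfile_from_ADLS()
    spark.read.format.assert_called_once_with("delta")
    spark.read.format.return_value.load.assert_called_once_with("/some/input")

test_read_inputfile_from_ADLS()
```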
| <python><unit-testing><pyspark><pytest><python-unittest> | 2023-07-13 04:10:27 | 1 | 421 | Mohan Rayapuvari |
76,675,825 | 11,146,276 | Hang during queue.join() asynchronously processing a queue | <p>I'm currently using <code>multiprocessing</code> and <code>asyncio</code> to process a huge amount of data. However, my code keeps randomly hanging after it finishes processing a batch of items (200 in my case) and does not do <code>queue.join()</code> to process the next batch.</p>
<p>According to the docs:</p>
<blockquote>
<p>Block until all items in the queue have been gotten and processed. The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer calls task_done() to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, join() unblocks.</p>
</blockquote>
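That contract can be checked in isolation with a minimal sketch (a thread is used here for brevity; <code>JoinableQueue</code>'s <code>join()</code>/<code>task_done()</code> semantics are the same across processes):

```python
import threading
from multiprocessing import JoinableQueue

q = JoinableQueue()
for i in range(5):
    q.put(i)

consumed = []

def consumer():
    for _ in range(5):
        item = q.get()
        consumed.append(item)
        q.task_done()  # exactly one task_done() per successful get()

t = threading.Thread(target=consumer)
t.start()
q.join()   # unblocks only once all five task_done() calls have been made
t.join()
print(sorted(consumed))  # [0, 1, 2, 3, 4]
```

If any <code>get()</code> is not matched by a <code>task_done()</code> (for example because a worker raised after <code>get()</code>), <code>join()</code> blocks forever, which is the classic cause of this kind of hang.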
<p>I made sure to call <code>queue.task_done()</code> for all items in every batch of data per worker, yet it still happens. Am I understanding something incorrectly? What am I doing wrong? What can I improve?</p>
<p>Minimal reproducible code:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import logging
import random
from multiprocessing import JoinableQueue, Process

N_PROCESSES = 4
N_WORKERS = 8

def create_queue(zpids: list[int]) -> JoinableQueue:
    queue = JoinableQueue()
    for zpid in zpids:
        queue.put(zpid)
    return queue

async def worker(i_process: int, queue: JoinableQueue):
    # Process items in batch of 200
    query_size = 200
    while True:
        batch = [queue.get(timeout=0.01) for _ in range(min(queue.qsize(), query_size))]
        if not batch:
            break
        logging.info("Faking some tasks...")
        for _ in batch:
            queue.task_done()

async def async_process(i_process: int, queue: JoinableQueue):
    logging.info(f"p:{i_process} - launching workers...")
    workers = [asyncio.create_task(worker(i_process, queue)) for _ in range(N_WORKERS)]
    await asyncio.gather(*workers, return_exceptions=True)

def async_process_wrapper(i_process: int, zpids: JoinableQueue):
    asyncio.run(async_process(i_process, zpids), debug=True)

def start_processes(queue: JoinableQueue):
    for i in range(N_PROCESSES):
        Process(target=async_process_wrapper, args=(i, queue)).start()
    queue.join()

def main():
    data = [random.randrange(1, 1000) for _ in range(200000)]
    my_queue = create_queue(data)
    start_processes(my_queue)

if __name__ == "__main__":
    main()
</code></pre>
| <python><multithreading><asynchronous><multiprocessing> | 2023-07-13 04:01:09 | 1 | 428 | Firefly |
76,675,766 | 2,866,298 | dataframe replace() is not working inside function | <p>I was replacing some strings (removing whitespaces) inside multiple dataframes manually, then I decided to centralize this code inside a function as follows (the print statements are just for debugging):</p>
<pre><code>def merge_multiword_teams(dfnx, team_lst):
    print(dfnx[dfnx['team'].str.contains('lazer')])
    for s in team_lst:
        c = s.replace(' ', '')
        print(s + c)
        dfnx.replace({s,c}, inplace=True)
    print(dfnx[dfnx['team'].str.contains('lazer')])
    return dfnx
</code></pre>
<p>then calling it</p>
<pre><code>df = merge_multiword_teams(df,['Trail Blazers'])
</code></pre>
<p>the print statement shows that the whitespaces were not replaced</p>
<p>team W L W/L% GB PS/G PA/G SRS year <br />
17 Portland Trail Blazers 49 33 0.598 16.0 105.6 103.0 2.6 2018<br />
52 Portland Trail Blazers 41 41 0.5 26.0 107.9 108.5 -0.23 2017<br />
79 Portland Trail Blazers 44 38 0.537 29.0 105.1 104.3 0.98 2016<br />
109 Portland Trail Blazers 51 31 .622 102.8 98.6 4.41 2015<br />
146 Portland Trail Blazers 54 28 .659 5.0 106.7 102.8 4.44 2014</p>
<p>Trail BlazersTrailBlazers
team W L W/L% GB PS/G PA/G SRS year <br />
17 Portland Trail Blazers 49 33 0.598 16.0 105.6 103.0 2.6 2018<br />
52 Portland Trail Blazers 41 41 0.5 26.0 107.9 108.5 -0.23 2017<br />
79 Portland Trail Blazers 44 38 0.537 29.0 105.1 104.3 0.98 2016<br />
109 Portland Trail Blazers 51 31 .622 102.8 98.6 4.41 2015<br />
146 Portland Trail Blazers 54 28 .659 5.0 106.7 102.8 4.44 2014</p>
<p>What could be wrong with this approach, given that moving the replace statement outside the function works perfectly?</p>
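For reference, two things stand out: <code>{s,c}</code> is a Python set literal rather than a mapping, and <code>DataFrame.replace</code> with a mapping only matches whole cell values by default, so substrings like "Trail Blazers" inside "Portland Trail Blazers" are never touched. A small sketch of the substring variant (data is illustrative):

```python
import pandas as pd

df = pd.DataFrame({'team': ['Portland Trail Blazers', 'Boston Celtics']})

# Whole-cell replace does nothing here, because no cell equals 'Trail Blazers' exactly
df2 = df.replace({'Trail Blazers': 'TrailBlazers'})
print(df2['team'].tolist())   # ['Portland Trail Blazers', 'Boston Celtics']

# Substring replacement works
df['team'] = df['team'].str.replace('Trail Blazers', 'TrailBlazers', regex=False)
print(df['team'].tolist())    # ['Portland TrailBlazers', 'Boston Celtics']
```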
| <python><dataframe><replace> | 2023-07-13 03:47:46 | 1 | 1,906 | osama yaccoub |
76,675,287 | 1,107,474 | Cannot SSH on to AWS EC2 instance created using Boto | <p>I have been creating AWS EC2 instances using Ubuntu and Boto for months. They start up, I issue commands from Python, etc. This has been working fine.</p>
<p>I create the instances using this:</p>
<pre><code>instances = ec2.create_instances(
    ImageId=image_id,
    MinCount=1,
    MaxCount=num_instances,
    InstanceType=instance_type,
    IamInstanceProfile={'Name': 'SSMInstanceProfile'},
    Placement={'AvailabilityZone': availability_zone}
)
</code></pre>
<p>Today I created and downloaded a private key file from EC2 console. I then created another instance using Boto (exactly how I have been doing previously).</p>
<p>I got the public DNS address from the console and tried to ssh on to it from my local machine using:</p>
<pre><code>ssh -i /path/to/private/key.pem ec2-user@1.2.3.4.region.compute.amazonaws.com
</code></pre>
<p>etc</p>
<p>Nothing happened, the command line just hung.</p>
<p>Do I need to change how I create my boto instances, to accept the new <code>.pem</code> key file?</p>
<p>If not, do I need to change something in the EC2 console to map <code>SSMInstanceProfile</code> to be SSH'd using that <code>.pem</code>?</p>
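For reference, <code>create_instances</code> only installs a public key for SSH when a <code>KeyName</code> is passed, and the snippet above does not pass one; the instance also needs a security group that allows inbound TCP 22, otherwise the ssh client simply hangs. A sketch of the parameters involved (every value below is a placeholder, not a real resource ID):

```python
# Placeholder values for illustration only
run_params = {
    "ImageId": "ami-0123456789abcdef0",
    "MinCount": 1,
    "MaxCount": 1,
    "InstanceType": "t3.micro",
    "IamInstanceProfile": {"Name": "SSMInstanceProfile"},
    "KeyName": "my-key-pair",  # name of the key pair whose .pem was downloaded
    "SecurityGroupIds": ["sg-0123456789abcdef0"],  # group must allow inbound TCP 22
}
# instances = ec2.create_instances(**run_params)  # requires a boto3 ec2 resource

# Without KeyName the instance has no authorized key matching the .pem,
# and without an open port 22 the ssh command hangs exactly as described.
assert "KeyName" in run_params and "SecurityGroupIds" in run_params
```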
| <python><amazon-web-services><amazon-ec2><boto3><boto> | 2023-07-13 01:08:45 | 1 | 17,534 | intrigued_66 |
76,675,241 | 9,582,542 | Scraping Angular data with Selenium | <p>Below is some HTML from which I can extract text with the Selenium driver.</p>
<pre><code><td colspan="2"><strong>Owner</strong>
<div ng-class="{'owner-overflow' : property.ownersInfo.length > 4}">
<!-- ngRepeat: owner in property.ownersInfo --><div ng-repeat="owner in property.ownersInfo" class="ng-scope">
<div class="ng-binding">ERROL P BROWN LLC
<!-- &nbsp;&nbsp; <span ng-if="owner.shortDescription != null && owner.shortDescription.length > 0">({{owner.shortDescription}})</span> -->
</div>
</div><!-- end ngRepeat: owner in property.ownersInfo -->
</div>
</td>
<td colspan="2" class="pi_mailing_address"><strong>Mailing Address</strong>
<div>
<span class="ng-binding">1784 NE 163 ST </span>
<span ng-show="property.mailingAddress.address2" class="ng-binding ng-hide"></span>
<span ng-show="property.mailingAddress.address3" class="ng-binding ng-hide"></span>
<span ng-show="property.mailingAddress.city" ng-class="{'inline':property.mailingAddress.city}" class="ng-binding inline">NORTH MIAMI,</span>
<span class="inline ng-binding">FL</span>
<span class="inline ng-binding">33162</span>
<span ng-hide="isCountryUSA(property.mailingAddress.country)" class="ng-binding ng-hide">USA</span>
</div>
</td>
</code></pre>
<p>When I run the code manually, all the fields get picked up with no issue. However, if I run the script in a loop to extract this data, these elements are blank. I am collecting other fields as well and they are not coming up blank. There is no error in processing; it's just that when I save the data to a database these values come up empty. Is there a workaround to keep this from happening?</p>
<p>These are the lines of code:</p>
<pre class="lang-py prettyprint-override"><code>Owner = driver.find_element(By.XPATH, "//strong[text()='Owner']//following::div[1]").text
SubDivision = driver.find_element(By.XPATH, "//strong[text()='Sub-Division:']//following::div[1]").text
Address1 = driver.find_element(By.XPATH, "//strong[text()='Mailing Address']//following::div[1]//following::span[1]").text
Address2 = driver.find_element(By.XPATH, "//strong[text()='Mailing Address']//following::div[1]//following::span[2]").text
Address3 = driver.find_element(By.XPATH, "//strong[text()='Mailing Address']//following::div[1]//following::span[3]").text
city = driver.find_element(By.XPATH, "//strong[text()='Mailing Address']//following::div[1]//following::span[4]").text.replace(",", "")
state = driver.find_element(By.XPATH, "//strong[text()='Mailing Address']//following::div[1]//following::span[5]").text
zipcode = driver.find_element(By.XPATH, "//strong[text()='Mailing Address']//following::div[1]//following::span[6]").text
</code></pre>
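One common cause is reading <code>.text</code> before Angular has filled the <code>ng-binding</code> elements, so waiting until the text is non-empty often helps (Selenium's <code>WebDriverWait</code> exists for this; below is a library-free sketch of the same polling idea with a stubbed fetch function standing in for <code>find_element(...).text</code>):

```python
import time

def wait_for_text(fetch, timeout=10.0, interval=0.25):
    """Poll fetch() until it returns non-empty text, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        text = fetch()
        if text:
            return text
        time.sleep(interval)
    raise TimeoutError("element text stayed empty")

# Stubbed element whose text only appears on the third poll,
# mimicking an Angular binding that renders late
calls = {"n": 0}
def fake_fetch():
    calls["n"] += 1
    return "ERROL P BROWN LLC" if calls["n"] >= 3 else ""

print(wait_for_text(fake_fetch, timeout=2.0, interval=0.01))  # ERROL P BROWN LLC
```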
| <python><angular><selenium-webdriver><xpath><webdriverwait> | 2023-07-13 00:45:02 | 1 | 690 | Leo Torres |
76,675,225 | 2,725,810 | API does not return all activities | <p>I am trying to use pagination to retrieve all the video uploading activities in a YouTube channel:</p>
<pre class="lang-py prettyprint-override"><code>import googleapiclient.discovery

api_key = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

def main():
    api_service_name = "youtube"
    api_version = "v3"
    youtube = googleapiclient.discovery.build(
        api_service_name, api_version, developerKey=api_key)

    next_page_token = 0
    while next_page_token is not None:
        request = youtube.activities().list(
            part="snippet,contentDetails",
            channelId="UCG-KntY7aVnIGXYEBQvmBAQ",
            maxResults=50,
            pageToken=None if next_page_token == 0 else next_page_token
        )
        response = request.execute()
        for item in response['items']:
            try:
                id = item['contentDetails']['playlistItem']['resourceId']['videoId']
                type = 'watched'
            except:
                id = item['contentDetails']['upload']['videoId']
                type = 'upload'
            if type == 'upload':
                print(type, id, item['snippet']['title'])
        next_page_token = response.get('nextPageToken')

if __name__ == "__main__":
    main()
</code></pre>
<p>For some reason, I get only 29 video uploads for a channel that has many more uploads than that.</p>
<p>Why is this happening?</p>
<p><strong>EDIT</strong> I printed the dates of watching activities for my channel, i.e. <code>item['snippet']['publishedAt']</code> and saw something very strange. Namely, activities of 2023 are followed by activities in 2020, which are followed by activities in 2016. It's like it took some recent activities, some activities a few years ago and some of the earliest ones. Somehow it decided to show only 29 activities in total, while it shows 79 for the channel of Thomas Frank (it's actually interesting why I am allowed to know which videos he watched)</p>
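For what it's worth, the pagination loop itself looks correct; here is the same pattern isolated against a stubbed paged API, confirming all pages are collected. If this pattern holds, the cap more likely comes from what the Activities endpoint itself returns (it appears to expose only a limited window of a channel's history) than from a pagination bug:

```python
# Stub of a paged list API: three pages, then no nextPageToken on the last one
PAGES = {
    None: {"items": list(range(0, 50)), "nextPageToken": "p2"},
    "p2": {"items": list(range(50, 100)), "nextPageToken": "p3"},
    "p3": {"items": list(range(100, 120))},  # last page: no token
}

def fetch_all(list_page):
    items, token = [], None
    while True:
        page = list_page(token)
        items.extend(page["items"])
        token = page.get("nextPageToken")
        if token is None:
            return items

all_items = fetch_all(lambda token: PAGES[token])
print(len(all_items))  # 120
```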
| <python><youtube-api><youtube-data-api> | 2023-07-13 00:39:54 | 0 | 8,211 | AlwaysLearning |
76,674,930 | 4,549,682 | Does category-encoders have some randomness built-in to it? | <p>I used the <a href="https://pypi.org/project/category-encoders/" rel="nofollow noreferrer">category-encoders</a> package to target encode some variables. However, upon multiple runs, it seems there are some tiny differences between the encodings. I haven't had time to dig deeper into it, but I suspect there is some randomness in how the encoding is done. Is this correct? If so, can a random seed be set to get deterministic results, or do we need a PR in the repo?</p>
| <python><scikit-learn><category-encoders> | 2023-07-12 22:56:51 | 1 | 16,136 | wordsforthewise |
76,674,790 | 3,120,051 | Difference in the output after converting the code from Python to C | <p>I have this code in Python and it works correctly. The idea of this code is to compute the repetition count of each value in the array, depending on the threshold value.
The output format for the Python code is [value, repetition]:</p>
<pre><code>[1.2, 3, 2.4, 1, 2.5, 1, 2.3, 1, 2.4, 1, 8.5, 2, 8.9, 1, 9.11, 1]
</code></pre>
<pre><code>import math

def dif(x, y, ma1):
    res = 0
    if math.fabs(x - y) <= ma1:
        res = 1
    return res

def enc(text, th):
    coded = []
    coded.clear()
    index = 0
    unt = 1
    while index <= (len(text) - 1):
        if index == (len(text) - 1) or dif(text[index], text[(index + 1)], th) != 1:
            coded.append(text[index])
            coded.append(unt)
            unt = 1
        else:
            unt = unt + 1
        index = index + 1
    return coded

SenList = [1.1, 1.1, 1.2, 2.4, 2.5, 2.3, 2.4, 8.6, 8.5, 8.9, 9.11]
th = 0.1
comm = enc(SenList, th)
print(comm)
</code></pre>
<p>And this is the C code; the output for the C code is:</p>
<pre><code>1.100000 2 1.200000 1 2.500000 2 2.300000 1 2.400000 1
8.600000 1 8.500000 1 8.900000 1 9.110000 1
</code></pre>
<pre><code>int dif(float x, float y, float sigma1){
    int res = 0;
    if (fabsf(x - y) <= sigma1)
        res = 1;
    return res;
}

void RL(){
    float text[] = {1.1, 1.1, 1.2, 2.4, 2.5, 2.3, 2.4, 8.6, 8.5, 8.9, 9.11};
    int n = sizeof(text) / sizeof(text[0]);
    float th = 0.1;
    float num[30]; int nc = 0;
    int cou[30]; int nc1 = 0;
    int index = 0;
    int unt = 1;
    while (index < n){
        if ((index == (n - 1)) || (dif(text[index], text[index + 1], th) != 1)){
            cou[nc] = unt; nc = nc + 1;
            num[nc1] = text[index]; nc1 = nc1 + 1;
            unt = 1;
        }
        else {
            unt = unt + 1;
        }
        index = index + 1;
    }
    for (int i = 0; i < nc; i++){
        printf(" %3f %d \n ", num[i], cou[i]);
    }
}
</code></pre>
<p>Why does the C code group the values differently and compute different repetition counts than the Python code? How can the problem in this code be fixed, please?
Note: the C code works correctly if I use an int array.</p>
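The discrepancy is consistent with single- versus double-precision rounding: Python floats and C <code>double</code> carry about 15-16 significant digits, while C <code>float</code> carries only about 7, so borderline comparisons against the 0.1 threshold flip. This can be reproduced in Python by rounding values through 32-bit floats with a stdlib <code>struct</code> round-trip:

```python
import struct

def as_float32(x):
    """Round a Python float (a C double) to the nearest C float value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# double precision (Python / C double): 1.1 and 1.2 fall inside the threshold
print(abs(1.1 - 1.2) <= 0.1)   # True  -> Python merges 1.1, 1.1, 1.2

# single precision (C float): rounding pushes the same difference just outside
print(abs(as_float32(1.1) - as_float32(1.2)) <= as_float32(0.1))   # False

# ...and flips the 2.4 / 2.5 pair the other way
print(abs(2.4 - 2.5) <= 0.1)                                       # False
print(abs(as_float32(2.4) - as_float32(2.5)) <= as_float32(0.1))   # True
```

If this is the cause, declaring the C array and threshold as <code>double</code> (and using <code>fabs</code>) should reproduce the Python grouping.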
| <python><c><python-3.x> | 2023-07-12 22:15:44 | 1 | 727 | lena |
76,674,745 | 7,619,353 | How to fail unittest test suite even when all test cases pass | <p>I am using the Python unittest framework to run automated test cases. Currently I want the ability to fail the test suite if a condition is not met, even if all the test cases pass. I want the test suite to fail because I want my CI server to report a bad build so users know to investigate why the condition failed. Is there a way to do this elegantly using the unittest framework?</p>
| <python><python-3.x><unit-testing><testing><python-unittest> | 2023-07-12 22:07:58 | 0 | 1,840 | tyleax |
76,674,727 | 14,587,041 | Conditional multi-line string in Python | <p>What I am trying to do is basically:</p>
<pre><code>cond = True
string_ = (cond*'a '
'b '
(not cond)*'c ')
</code></pre>
<p>If the variable <code>cond = True</code>, <code>string_</code> should be <code>'a b '</code>, but if I assign <code>cond = False</code>, the <code>string_</code> variable should be <code>'b c '</code>.</p>
<p>Sure, there are different solutions, but I want to do this in a single expression.</p>
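For reference, the intended behaviour can be written as a single expression with explicit concatenation. Adjacent string literals are merged at compile time, so the snippet above cannot apply the multiplications to the separate pieces; <code>+</code> keeps them separate (and <code>bool</code> is an <code>int</code>, so <code>True * 'a '</code> is <code>'a '</code> and <code>False * 'a '</code> is <code>''</code>):

```python
cond = True
string_ = cond * 'a ' + 'b ' + (not cond) * 'c '
print(repr(string_))  # 'a b '

cond = False
string_ = cond * 'a ' + 'b ' + (not cond) * 'c '
print(repr(string_))  # 'b c '
```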
<p>Thanks in advance</p>
| <python><string><if-statement> | 2023-07-12 22:03:29 | 5 | 2,730 | Samet Sökel |
76,674,718 | 10,969,942 | How to efficiently append same element n times to a non-empty list? | <p>In Python, it's known that the most efficient way to create a list with <code>n</code> repetitions of the same element (let's say the string <code>'s'</code>) is by using list multiplication, as shown below:</p>
<pre class="lang-py prettyprint-override"><code>lst = ['s'] * 1000
</code></pre>
<p>However, when the list is non-empty initially, what would be the most optimal method to append the same element <code>n</code> times?</p>
<p>Here are a couple of methods that come to mind:</p>
<p>Method1:</p>
<pre class="lang-py prettyprint-override"><code>lst = [1, 2, 3]
for _ in range(1000):
    lst.append('s')
</code></pre>
<p>Method2:</p>
<pre class="lang-py prettyprint-override"><code>lst = [1,2,3]
lst.extend(['s'] * 1000)
# or
# lst.extend(['s' for _ in range(1000)])
</code></pre>
<p>But it's worth noting that Method 2 does create a temporary long <code>list</code>, e.g. <code>['s' for _ in range(1000)]</code>.</p>
<p>Are there any alternative approaches that are more efficient, both in terms of time complexity and space usage? Or among the existing methods, which one is deemed the most efficient?</p>
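One alternative worth mentioning: <code>list.extend</code> accepts any iterable, so passing <code>itertools.repeat</code> avoids materialising the temporary <code>['s'] * 1000</code> list while <code>extend</code> can still pre-size the list using the iterator's length hint:

```python
from itertools import repeat

lst = [1, 2, 3]
lst.extend(repeat('s', 1000))   # no intermediate ['s'] * 1000 list is created

print(len(lst))   # 1003
print(lst[:4])    # [1, 2, 3, 's']
```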
| <python><list><performance> | 2023-07-12 22:01:59 | 3 | 1,795 | maplemaple |
76,674,663 | 962,891 | fastapi lifespan closing session raises AttributeError: 'SQLAlchemyUserDatabase' object has no attribute 'close' | <p>I am using fastapi (0.95.0), fastapi-users (10.4.2), fastapi-users-db-sqlalchemy (5.0.0) and SQLAlchemy (2.0.10) in my application.</p>
<p>This is a simplified snippet of my code:</p>
<pre><code>engine = create_async_engine(SQLALCHEMY_DATABASE_URL)
async_session_maker = async_sessionmaker(engine, expire_on_commit=False)

async def get_async_session() -> AsyncGenerator[AsyncSession, None]:
    async with async_session_maker() as session:
        yield session

async def get_user_db(session: AsyncSession = Depends(get_async_session)):
    yield SQLAlchemyUserDatabase(session, UserModel, OAuthAccount)

@asynccontextmanager
async def lifespan(fapp: FastAPI):
    # establish a connection to the database
    fapp.state.async_session = await get_user_db().__anext__()
    yield
    # close the connection to the database
    await fapp.state.async_session.close()
    await fapp.state.async_session.engine.dispose()

app = FastAPI(lifespan=lifespan)

# Add Routes
# ...

if __name__ == '__main__':
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
</code></pre>
<p>When I use Ctrl-C to stop the running uvicorn server, I get the following error trace:</p>
<pre><code>INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
^CINFO: Shutting down
INFO: Waiting for application shutdown.
<class 'fastapi_users_db_sqlalchemy.SQLAlchemyUserDatabase'>
ERROR: Traceback (most recent call last):
File "/path/to/proj/env/lib/python3.10/site-packages/starlette/routing.py", line 677, in lifespan
async with self.lifespan_context(app) as maybe_state:
File "/usr/lib/python3.10/contextlib.py", line 206, in __aexit__
await anext(self.gen)
File "/path/to/proj/src/main.py", line 40, in lifespan
await fastapi_app.state.async_session.close()
AttributeError: 'SQLAlchemyUserDatabase' object has no attribute 'close'
ERROR: Application shutdown failed. Exiting.
INFO: Finished server process [37752]
</code></pre>
<p>Which is strange, because I am calling close on a variable of type <code>AsyncSession</code> not <code>SQLAlchemyUserDatabase</code>, so based on this error message, I change the line statement to reference the <code>session</code> attribute of the <code>SQLAlchemyUserDatabase</code> class, and call <code>close()</code> on the session attribute, as shown below:</p>
<blockquote>
<p>await fapp.state.async_session.session.close()</p>
</blockquote>
<p>Now, I get this even more cryptic error trace:</p>
<pre><code>INFO: Started server process [33125]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
^CINFO: Shutting down
INFO: Waiting for application shutdown.
ERROR: Traceback (most recent call last):
File "/path/to/proj/env/lib/python3.10/site-packages/starlette/routing.py", line 677, in lifespan
async with self.lifespan_context(app) as maybe_state:
File "/path/to/proj/env/lib/python3.10/site-packages/starlette/routing.py", line 569, in __aexit__
await self._router.shutdown()
File "/path/to/proj/env/lib/python3.10/site-packages/starlette/routing.py", line 664, in shutdown
await handler()
File "/path/to/proj/src/main.py", line 88, in shutdown
await app.state.async_session.session.close()
AttributeError: 'Depends' object has no attribute 'close'
ERROR: Application shutdown failed. Exiting.
</code></pre>
<p><code>fapp.state.async_session.session</code> should not be of type Depends.</p>
<p>Why is this type error occurring, and how do I resolve it, so that I can gracefully release resources when the server is shutdown?</p>
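For reference, the <code>Depends</code> in the second traceback follows from how Python default arguments work: FastAPI only resolves <code>Depends(...)</code> when the function is invoked as a request dependency, while calling <code>get_user_db()</code> directly just uses the default value as-is. A stubbed sketch of the mechanics (the classes here are stand-ins, not the real FastAPI/SQLAlchemy types):

```python
import asyncio

class Depends:                      # stand-in for fastapi.Depends
    def __init__(self, dependency):
        self.dependency = dependency

class SQLAlchemyUserDatabase:       # stand-in for the fastapi-users class
    def __init__(self, session):
        self.session = session

def get_async_session():
    return "a real AsyncSession (never created here)"

async def get_user_db(session=Depends(get_async_session)):
    # When FastAPI injects this, `session` is a real AsyncSession.
    # When called directly, `session` is just the Depends default object.
    yield SQLAlchemyUserDatabase(session)

async def demo():
    return await get_user_db().__anext__()

db = asyncio.run(demo())
print(type(db.session).__name__)   # Depends
```

So in the lifespan hook, the session should be created directly from <code>async_session_maker()</code> rather than by driving the dependency generator by hand.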
| <python><sqlalchemy><fastapi><fastapiusers><lifespan> | 2023-07-12 21:49:42 | 2 | 68,926 | Homunculus Reticulli |
76,674,511 | 1,107,474 | Cannot save object to Python multiprocessing shared dictionary | <p>The code below (a self-contained example) passes a list of objects to a function in parallel using multiprocessing. There is a global multiprocessing manager, and I'd like the function to store each object in the global dictionary.</p>
<p>However, I get the error:</p>
<pre><code>AttributeError: Can't get attribute 'MyObj' on <module '__main__' (built-in)>
---------------------------------------------------------------------------
"""The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 37, in <module>
File "/usr/local/lib/python3.11/multiprocessing/pool.py", line 367, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
ERROR!
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/multiprocessing/pool.py", line 774, in get
raise self._value
multiprocessing.managers.RemoteError:
</code></pre>
<p>Is there a way to achieve this?</p>
<p>Self-contained example code:</p>
<pre><code>import multiprocessing as mp
from functools import partial

manager = mp.Manager()
dictionary = manager.dict()

class MyObj:
    def __init__(self, x):
        self.x = 0

def task(lock, obj):
    lock.acquire()
    dictionary["test"] = obj  # Trying to copy object to shared dictionary
    lock.release()

def task_init(output_queue):
    task.output_queue = output_queue

my_list = []
obj = MyObj("")
my_list.append(obj)

output_queue = mp.Queue()
p = mp.Pool(mp.cpu_count(), task_init, [output_queue])

lock = manager.Lock()
func = partial(task, lock)
p.map(func, my_list)
p.close()

print("finished")
</code></pre>
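For comparison, a sketch that does store a custom object in a manager dict: the class is defined at module top level (so both the worker processes and the manager server process can unpickle it) and all process setup happens under the <code>__main__</code> guard. The <code>Can't get attribute 'MyObj' on &lt;module '__main__' (built-in)&gt;</code> error typically appears when <code>MyObj</code> is defined somewhere the child processes cannot import it from, such as an interactive session:

```python
import multiprocessing as mp

class MyObj:
    def __init__(self, x):
        self.x = x   # note: store x rather than a hard-coded 0

def task(d, obj):
    d["test"] = obj   # the dict proxy pickles obj and sends it to the manager

if __name__ == "__main__":
    with mp.Manager() as manager:
        d = manager.dict()
        with mp.Pool(2) as pool:
            pool.starmap(task, [(d, MyObj(42))])
        print(d["test"].x)
```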
| <python><python-3.x> | 2023-07-12 21:17:10 | 1 | 17,534 | intrigued_66 |
76,674,502 | 5,040,775 | Contraining a number of assets to be less than or equal to using cvxpy | <p>I am trying to use cvxpy to minimize the tracking error, defined as the standard deviation of the portfolio return minus the index return, with a few constraints including</p>
<ol>
<li>the sum of weights of all assets chosen has to equal to 100%</li>
<li>the weight of each asset should be >=3% and <=20%</li>
<li>the total number of assets chosen should be less than or equal to 12</li>
</ol>
<p>Right now, I have the following code:</p>
<pre><code>import pandas as pd
import cvxpy as cp
import numpy as np
import os
os.chdir(r'\\Directory')
input_file # 48 x 17
base_weights # 17 x 1
msci_na = input_file['MSCI North America Index'] # 17 x 1
funds = input_file.loc[:, input_file.columns != 'MSCI North America Index'] # 48 X 17
funds_np = funds.to_numpy()
# Generate random data for demonstration
n_funds = len(funds.columns)
weights = cp.Variable(n_funds)
# Define the tracking error objective function
tracking_error = cp.norm(funds_np @ weights - msci_na, 2) / np.sqrt(47)
# Define the constraints
constraints = [cp.sum(weights) == 1, weights >= 0.03, weights <= 0.20]
# Define the problem and solve
problem = cp.Problem(cp.Minimize(tracking_error), constraints)
problem.solve()
# Retrieve the optimal weights
optimal_weights = weights.value
</code></pre>
<p>The input_file has 48 months of returns for 17 assets along with the index. The goal is to choose at most 12 of these 17 assets to minimize the tracking error of the resulting portfolio relative to the index return. I am having difficulty adding the third constraint (as you can see, I already added the first two constraints). I would appreciate your help. Thanks.</p>
| <python><optimization><cvxpy> | 2023-07-12 21:16:05 | 0 | 3,525 | JungleDiff |
76,674,471 | 4,175,822 | How do I fix the mypy error Overloaded function signatures 1 and 2 overlap with incompatible return types? | <p>How do I fix the mypy error Overloaded function signatures 1 and 2 overlap with incompatible return types?</p>
<p>Here are my methods:</p>
<pre><code> @typing.overload
def _find_pets_by_tags(
self,
query_params: typing.Union[
QueryParametersDictInput,
QueryParametersDict
],
security_index: typing.Optional[int] = None,
server_index: typing.Optional[int] = None,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, float, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[False] = False
) -> response_200.ApiResponse: ...
@typing.overload
def _find_pets_by_tags(
self,
query_params: typing.Union[
QueryParametersDictInput,
QueryParametersDict
],
security_index: typing.Optional[int] = None,
server_index: typing.Optional[int] = None,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, float, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[True] = ...
) -> api_response.ApiResponseWithoutDeserialization: ...
def _find_pets_by_tags(
self,
query_params: typing.Union[
QueryParametersDictInput,
QueryParametersDict
],
security_index: typing.Optional[int] = None,
server_index: typing.Optional[int] = None,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, float, typing.Tuple]] = None,
skip_deserialization: bool = False
):
</code></pre>
<p>And mypy complains that:
<code>Overloaded function signatures 1 and 2 overlap with incompatible return types [misc]</code></p>
<p>But</p>
<ul>
<li>How are these methods overlapping? They require two different literal bool inputs.</li>
<li>What makes the return types incompatible? The overloads require that two different class instances be returned, both of which inherit from an api_client.ApiResponse base class.</li>
</ul>
| <python><mypy><python-typing> | 2023-07-12 21:12:30 | 1 | 2,821 | spacether |
76,674,467 | 13,084,288 | 1D K-means implementation with GPU support | <p>I know there are polynomial-time solutions for the global optimum of k-means in 1D (e.g. <a href="https://www.dannyadam.com/blog/2019/07/kmeans1d-globally-optimal-efficient-1d-k-means/" rel="nofollow noreferrer">kmeans1d</a>), but they're usually implemented with dynamic programming in C++. I need a 1D k-means solution for PyTorch so I can utilize CUDA acceleration. Just using an existing library for multidimensional k-means (scikit-learn, kmeans_pytorch, etc.) with <code>dim=1</code> isn't optimal because of the random initialization and it fails to solve problems as simple as:</p>
<pre class="lang-py prettyprint-override"><code>any_kmeans_library([1., 2, 3, 4, 5, 6, 7, 8], k=4) # expected centroids: [1.5, 3.5, 5.5, 7.5]
</code></pre>
<p>Do you know of an implementation that allows this and/or a code snippet of yours that can achieve it?</p>
| <python><machine-learning><scikit-learn><pytorch> | 2023-07-12 21:11:36 | 0 | 691 | John |
76,674,272 | 10,010,623 | Pydantic BaseSettings can't find .env when running commands from different places | <p>So, I'm trying to set up Alembic with FastAPI, and I'm having a problem with Pydantic's BaseSettings: I get a validation error (variables not found) because it doesn't find the .env file.</p>
<p>It can be solved by changing <code>env_file = ".env"</code> to <code>env_file = "../.env"</code> in the <code>BaseSettings</code> <code>class Config</code>, but that makes the error happen when running main.py instead. I tried setting it as an absolute path with <code>env_file = os.path.abspath("../../.env")</code>, but that didn't work.</p>
<p>What should I do?</p>
<p>config.py:</p>
<pre><code>import os
from functools import lru_cache
from pydantic_settings import BaseSettings

abs_path_env = os.path.abspath("../../.env")

class Settings(BaseSettings):
    APP_NAME: str = "AppName"
    SQLALCHEMY_URL: str
    ENVIRONMENT: str

    class Config:
        env_file = ".env"  # Works with uvicorn run command from my-app/project/
        # env_file = "../.env"  # Works with alembic command from my-app/alembic
        # env_file = abs_path_env

@lru_cache()
def get_settings():
    return Settings()
</code></pre>
<p>Project folders:</p>
<pre><code>my-app
├── alembic
│ ├── versions
│ ├── alembic.ini
│ ├── env.py
│ ├── README
│ └── script.py.mako
├── project
│ ├── core
│ │ ├── __init__.py
│ │ └── config.py
│ └── __init__.py
├── __init__.py
├── .env
└── main.py
</code></pre>
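For reference, the relative paths (including the <code>os.path.abspath</code> variant, which resolves at import time) behave differently because they are resolved against the current working directory, which changes between the <code>uvicorn</code> and <code>alembic</code> invocations. Anchoring the path to the config file itself makes it independent of where the command is run. A sketch of the path arithmetic (the layout mirrors the tree above; the absolute prefix is a stand-in):

```python
from pathlib import Path

# config.py lives at my-app/project/core/config.py and .env at my-app/.env,
# so the .env file is two directory levels above config.py's parent
config_file = Path("/somewhere/my-app/project/core/config.py")  # stand-in for Path(__file__)
env_path = config_file.resolve().parents[2] / ".env"
print(env_path)   # /somewhere/my-app/.env
```

In config.py this would become <code>env_file = str(Path(__file__).resolve().parents[2] / '.env')</code>.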
| <python><fastapi><pydantic><alembic> | 2023-07-12 20:34:28 | 4 | 539 | Risker |
76,674,241 | 5,213,015 | Django - How to display search term on results page? | <p>I've made a working form that displays the search results when a user submits it. However, I'm stuck on one new addition to this search functionality.</p>
<p>How do I get to display the search term with my current code?</p>
<p>I'm trying to display the current search term in <code>search_results.html</code>, but can't get the term to display. The only thing that appears is a blank space. I'm guessing I'm doing something wrong in <code>views.py</code> but don't know what exactly.</p>
<p>Any help is gladly appreciated.</p>
<p>Thanks!</p>
<p><strong>search_results.html</strong></p>
<p><code><h1>Results for "search term here {{ title_contains_query }}"</h1></code></p>
<p><strong>base.html (Where the search bar is located)</strong></p>
<pre><code> <div class="search-container">
<form method="get" action="{% url 'search_results' %}">
<div class="input-group">
<input type="text" name="q" class="form-control gs-search-bar" placeholder="Search GameStation Games..." value="">
<span class="search-clear">x</span>
<button type="submit" class="btn btn-primary search-button">
<span class="input-group-addon">
<i class="zmdi zmdi-search"></i>
</span>
</button>
</div>
</form>
</div>
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>def is_valid_queryparam(param):
    return param != '' and param is not None

def BootstrapFilterView(request):
    user_profile = User_Info.objects.all()
    user_profile_games_filter = Game_Info.objects.all()

    title_contains_query = request.GET.get('q')

    if is_valid_queryparam(title_contains_query):
        user_profile_games_filter = user_profile_games_filter.filter(game_title__icontains=title_contains_query)

    if request.user.is_authenticated:
        user_profile = User_Info.objects.filter(user=request.user)
        context = {
            'user_profile': user_profile,
            'user_profile_games_filter': user_profile_games_filter
        }
    else:
        context = {
            'user_profile': user_profile,
            'user_profile_games_filter': user_profile_games_filter
        }

    return render(request, "search_results.html", context)
</code></pre>
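For reference, <code>{{ title_contains_query }}</code> renders blank because the view never puts the search term into the template context. A stub-level sketch of the fix (a plain dict stands in for the Django request/response machinery; the helper name is hypothetical):

```python
def build_search_context(query_params, user_profile, games):
    # Mirrors the view: anything the template should render must be in the context
    q = query_params.get('q') or ''
    return {
        'user_profile': user_profile,
        'user_profile_games_filter': games,
        'title_contains_query': q,   # this key makes {{ title_contains_query }} render
    }

ctx = build_search_context({'q': 'zelda'}, [], [])
print(ctx['title_contains_query'])   # zelda
```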
| <python><django><django-models><django-views><django-forms> | 2023-07-12 20:30:10 | 1 | 419 | spidey677 |
76,674,202 | 14,840,072 | Telethon event handler not firing when used in an asyncio task with another task | <p>I'm trying to get two tasks with infinite loops running simultaneously. One of them involves running the Telethon client indefinitely, the other involves a while loop in order to check for connections on a socket.</p>
<p>The socket task works fine: I can create the socket and connect a client to it. However, the other task, which runs the Telethon client, doesn't seem to respond to the NewMessage event when I fire off an event from my Telegram account. I've seen this working before, so I know it's not the client itself, my account, or the connection to it.</p>
<p>I assume this is just an issue with me not understanding how the asyncio package works. Could someone please point me in the right direction to understanding how I can have both of my tasks run concurrently and respond to both my socket and my message events?</p>
<pre><code>client = TelegramClient('anon', api_id, api_hash)
@client.on(events.NewMessage)
async def onMessage(event):
    # print(event.raw_text) # Event handler implementation
    pass  # a body is required; a comment alone is a SyntaxError
async def run_client():
await client.start()
await client.run_until_disconnected()
# async with client:
# await client.run_until_disconnected()
# await client.loop.run_forever()
async def socket_listener():
global client_socket
while True:
# Accept a client connection
client_socket, client_address = server_socket.accept()
print("Client connected:", client_address)
async def main():
asyncio.gather(run_client(), socket_listener())
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
</code></pre>
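The blocking `server_socket.accept()` call never yields control back to the event loop, which can starve the Telethon task. A minimal runnable sketch of a non-blocking accept using `loop.sock_accept` (the self-connecting client exists only so the example runs standalone):

```python
import asyncio
import socket

async def socket_listener(server_socket: socket.socket):
    """Accept one connection without blocking the event loop."""
    loop = asyncio.get_running_loop()
    server_socket.setblocking(False)       # required by loop.sock_accept
    client, address = await loop.sock_accept(server_socket)
    return client, address

async def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))          # port 0: pick any free port
    server.listen()
    port = server.getsockname()[1]

    accept_task = asyncio.create_task(socket_listener(server))

    # Stand-in for the "other task": connect to ourselves
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    client, address = await accept_task
    writer.close()
    await writer.wait_closed()
    client.close()
    server.close()
    return address[0]

host = asyncio.run(main())
print(host)  # 127.0.0.1
```

In the original program the same idea applies: replace the bare `server_socket.accept()` with `await loop.sock_accept(...)` (or push the blocking call into a thread with `asyncio.to_thread`), and `await` the `asyncio.gather(...)` call in `main()` so both tasks actually run to completion.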
| <python><sockets><python-asyncio><telethon> | 2023-07-12 20:23:33 | 1 | 655 | jm123456 |
76,674,137 | 3,840,700 | Tree Traversal Algorithms | <p>I have this JSON data model as input:</p>
<pre class="lang-json prettyprint-override"><code>{
"nodes": [
{
"text": "Third message, which is a question",
"id": "A",
"from": "bot"
},
{
"text": "A possible answer from the player",
"id": "B",
"from": "player"
},
{
"text": "Another possible answer from the player",
"id": "C",
"from": "player"
},
{
"text": "Let's continue here",
"id": "D",
"from": "bot"
},
{
"text": "Or let's continue there",
"id": "E",
"from": "bot"
},
{
"text": "Final message",
"id": "F",
"from": "bot"
},
{
"text": "Second message",
"id": "G",
"from": "bot"
},
{
"text": "First message",
"id": "H",
"from": "bot"
}
],
"edges": [
{
"id": "986c8f1fa8579f27",
"fromNode": "G",
"toNode": "A"
},
{
"id": "94a3c66c65bd23f7",
"fromNode": "A",
"toNode": "B"
},
{
"id": "7b6cb44465c55e11",
"fromNode": "A",
"toNode": "C"
},
{
"id": "0c6880a58da67e1e",
"fromNode": "B",
"toNode": "D"
},
{
"id": "f4451611cc94a7fe",
"fromNode": "C",
"toNode": "E"
},
{
"id": "e81c1b1cb31a4659",
"fromNode": "D",
"toNode": "F"
},
{
"id": "cda64626f98c3674",
"fromNode": "E",
"toNode": "F"
},
{
"id": "f67f1d371ffbb169",
"fromNode": "H",
"toNode": "G"
}
]
}
</code></pre>
<p>And this is what I want to get as an output:</p>
<pre class="lang-json prettyprint-override"><code>{
"chats": {
"H": {
"messages": [
"First message",
"Second message",
"Third message, which is a question"
],
"answer_propositions": {
"D": "A possible answer from the player",
"E": "Another possible answer from the player"
}
},
"D": {
"messages": [
"Let's continue here",
"Final message"
],
"answer_propositions": {}
},
"E": {
"messages": [
"Or let's continue there",
"Final message"
],
"answer_propositions": {}
}
}
}
</code></pre>
<p>This data structure models a conversation between a bot (in red) and a player (in green):</p>
<p><a href="https://i.sstatic.net/ym32k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ym32k.png" alt="enter image description here" /></a></p>
<ul>
<li>When the player selects a message from <code>answer_propositions</code>, the message's <code>id</code> represents the next hop of the conversation.</li>
<li>If the bot sends multiple messages at once, those are grouped together in a single object with all messages part of an array.</li>
<li>If we reach the end of the conversation, the player cannot provide any more answers, so <code>answer_propositions</code> is empty.</li>
<li>The order of the objects in <code>nodes</code> cannot be used, as the messages in the conversation tree can be created in a different order than the one in which they will be displayed.</li>
<li>Performance is not an issue; it's OK if the solution isn't O(1) or O(log n).</li>
</ul>
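One way to walk this graph, sketched in plain Python (edge ids omitted since they aren't needed): find the root as the only node with no incoming edge, then follow single-child bot chains, starting a new chat block at every player answer:

```python
model = {
    "nodes": [
        {"text": "Third message, which is a question", "id": "A", "from": "bot"},
        {"text": "A possible answer from the player", "id": "B", "from": "player"},
        {"text": "Another possible answer from the player", "id": "C", "from": "player"},
        {"text": "Let's continue here", "id": "D", "from": "bot"},
        {"text": "Or let's continue there", "id": "E", "from": "bot"},
        {"text": "Final message", "id": "F", "from": "bot"},
        {"text": "Second message", "id": "G", "from": "bot"},
        {"text": "First message", "id": "H", "from": "bot"},
    ],
    "edges": [{"fromNode": a, "toNode": b} for a, b in
              [("G", "A"), ("A", "B"), ("A", "C"), ("B", "D"),
               ("C", "E"), ("D", "F"), ("E", "F"), ("H", "G")]],
}

def build_chats(model):
    nodes = {n["id"]: n for n in model["nodes"]}
    children, has_parent = {}, set()
    for e in model["edges"]:
        children.setdefault(e["fromNode"], []).append(e["toNode"])
        has_parent.add(e["toNode"])

    root = next(i for i in nodes if i not in has_parent)  # no incoming edge
    chats, pending = {}, [root]
    while pending:
        start = pending.pop()
        if start in chats:
            continue
        messages, answers = [], {}
        current = start
        while current is not None:
            messages.append(nodes[current]["text"])
            nxt = children.get(current, [])
            if nxt and nodes[nxt[0]]["from"] == "player":
                # a question: each player reply points at the next chat block
                for answer_id in nxt:
                    next_hop = children[answer_id][0]
                    answers[next_hop] = nodes[answer_id]["text"]
                    pending.append(next_hop)
                current = None
            else:
                current = nxt[0] if nxt else None  # continue the bot chain
        chats[start] = {"messages": messages, "answer_propositions": answers}
    return {"chats": chats}

result = build_chats(model)
print(sorted(result["chats"]))  # ['D', 'E', 'H']
```

On the sample data this yields the three chat blocks keyed "H", "D", and "E" with the messages grouped as in the desired output.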
| <python><algorithm><tree> | 2023-07-12 20:10:26 | 1 | 2,490 | Will |
76,674,075 | 9,582,542 | Coaleseing a dataframe by removing Nan values | <p>This is a dataframe that I generate based on the data I have coming in</p>
<pre><code>data = {'col2': ['Previous Sale', 'Price', 'OR Book-Page','Qualification Description','','',np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan],
'col3': [np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,'03/24/2015', 210000, '00000-00000', 'Sales which are qualified','','',np.nan,np.nan,np.nan,np.nan,np.nan,np.nan],
'col4': [np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,'08/03/1995', 121000, '00000-00000', 'Sales which are qualified','','']}
dframe = pd.DataFrame(data)
</code></pre>
<p>I need to transform dataframe to look like this instead</p>
<pre><code>data2 = {'Previous Sale': ['03/24/2015', '08/03/1995'],
'Price': [ 210000, 121000],
'OR Book-Page': [ '00000-00000', '00000-00000'],
'Qualification Description': ['Sales which are qualified','Sales which are qualified'],
'col0': ['',''],
'col1': ['','']
}
dframeFix = pd.DataFrame(data2)
</code></pre>
<p>How could I make this transformation programmatically?
The last 2 columns are blank because they don't always have data.</p>
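A pattern that fits this shape, sketched under the assumption that every value column lines up with the labels once NaNs are dropped (and that blanks keep their `''` value): take the headers from the first column, auto-name the blank ones, and stack the remaining columns as rows:

```python
import numpy as np
import pandas as pd

nan = np.nan
data = {
    'col2': ['Previous Sale', 'Price', 'OR Book-Page',
             'Qualification Description', '', ''] + [nan] * 12,
    'col3': [nan] * 6 + ['03/24/2015', 210000, '00000-00000',
                         'Sales which are qualified', '', ''] + [nan] * 6,
    'col4': [nan] * 12 + ['08/03/1995', 121000, '00000-00000',
                          'Sales which are qualified', '', ''],
}
dframe = pd.DataFrame(data)

# First column holds the labels; dropna keeps the '' placeholders
headers, blanks = [], 0
for label in dframe['col2'].dropna():
    if label == '':                 # unnamed trailing fields become col0, col1, ...
        headers.append(f'col{blanks}')
        blanks += 1
    else:
        headers.append(label)

# Every later column is one record, offset so dropna aligns it with the labels
rows = [dframe[col].dropna().tolist() for col in dframe.columns[1:]]
result = pd.DataFrame(rows, columns=headers)
print(result['Price'].tolist())  # [210000, 121000]
```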
| <python> | 2023-07-12 19:58:59 | 1 | 690 | Leo Torres |
76,673,993 | 4,175,822 | With python mypy how can I pass a class into a generic and use it for deserializing? | <p>With python mypy how can I pass a class into a generic and use it for deserializing?</p>
<p>I have this class with a deserialize method</p>
<pre><code>@dataclasses.dataclass
class OpenApiResponse(typing.Generic[T]):
response_cls: typing.Type[T]
content: typing.Optional[typing.Dict[str, MediaType]] = None
headers: typing.Optional[typing.Dict[str, typing.Type[HeaderParameterWithoutName]]] = None
headers_schema: typing.Optional[typing.Type[schemas.DictSchema]] = None
@classmethod
def deserialize(cls, response: urllib3.HTTPResponse, configuration: schema_configuration_.SchemaConfiguration) -> T:
...
return cls.response_cls(
response=response,
headers=deserialized_headers,
body=deserialized_body
)
</code></pre>
<p>But mypy complains that:</p>
<pre><code>Access to generic instance variables via class is ambiguous
</code></pre>
<p>on my return statement</p>
<p>My response_cls looks like this:</p>
<pre><code>@dataclasses.dataclass
class ApiResponse(api_response.ApiResponse):
response: urllib3.HTTPResponse
body: application_json_schema.SchemaTuple
headers: schemas.Unset = schemas.unset
</code></pre>
<p>Is there a better way to structure my code to do this deserialization?</p>
<ul>
<li>Should I make a <code>get_response_cls() -> T</code> method which returns T and must always be implemented in descendant classes?</li>
</ul>
<p>I prefer all the data to be classvar data and to call the deserialize method.</p>
<p>An example class that uses this is:</p>
<pre><code>@dataclasses.dataclass
class ApiResponse(api_response.ApiResponse):
response: urllib3.HTTPResponse
body: typing.Union[
application_xml_schema.SchemaTuple,
application_json_schema.SchemaTuple,
]
headers: schemas.Unset = schemas.unset
class SuccessfulXmlAndJsonArrayOfPet(api_client.OpenApiResponse[ApiResponse]):
response_cls = ApiResponse
class ApplicationXmlMediaType(api_client.MediaType):
schema: typing_extensions.TypeAlias = application_xml_schema.Schema
class ApplicationJsonMediaType(api_client.MediaType):
schema: typing_extensions.TypeAlias = application_json_schema.Schema
Content = typing_extensions.TypedDict(
'Content',
{
'application/xml': typing.Type[ApplicationXmlMediaType],
'application/json': typing.Type[ApplicationJsonMediaType],
}
)
content: Content = {
'application/xml': ApplicationXmlMediaType,
'application/json': ApplicationJsonMediaType,
}
</code></pre>
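One workaround, sketched on a stripped-down version of the class rather than the full API: make `deserialize` an instance method, so the generic field is reached through `self` instead of through the class, which mypy accepts:

```python
import dataclasses
import typing

T = typing.TypeVar("T")

@dataclasses.dataclass
class OpenApiResponse(typing.Generic[T]):
    response_cls: typing.Type[T]

    def deserialize(self, payload: dict) -> T:
        # Accessed through self, so mypy can tie response_cls to this
        # instance's T instead of the ambiguous class-level generic.
        return self.response_cls(**payload)

@dataclasses.dataclass
class ApiResponse:
    body: str

resp = OpenApiResponse(response_cls=ApiResponse)
obj = resp.deserialize({"body": "hello"})
print(type(obj).__name__, obj.body)  # ApiResponse hello
```

With class-level-only data, the alternative mentioned in the question (an abstract `get_response_cls() -> typing.Type[T]` that subclasses implement) avoids the class-access ambiguity in the same way.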
| <python><mypy><python-typing> | 2023-07-12 19:43:20 | 2 | 2,821 | spacether |
76,673,850 | 9,070,040 | scikit-learn's ConfusionMatrixDisplay() with figsize() | <p>Using <code>figsize()</code> in the following code creates two plots of the confusion matrix, one with the desired size but wrong labels ("Figure 1") and another with the default/wrong size but correct labels ("Figure 2") (image attached below). Second plot is what I want, but with the specified size 8x6in. How do I do this? Thanks!</p>
<pre><code>import matplotlib.pyplot as plt
from sklearn import datasets, svm
from sklearn.metrics import ConfusionMatrixDisplay
# import data
iris = datasets.load_iris()
X, y = iris.data, iris.target
# Run classifier
classifier = svm.SVC(kernel="linear")
y_pred = classifier.fit(X, y).predict(X)
# plot confusion matrix
fig, ax = plt.subplots(figsize=(8, 6))
cmp = ConfusionMatrixDisplay.from_predictions(y, y_pred, normalize="true", values_format=".0%")
cmp.plot(ax=ax)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/NBBsi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NBBsi.png" alt="enter image description here" /></a></p>
| <python><scikit-learn><confusion-matrix> | 2023-07-12 19:22:21 | 1 | 671 | Manojit |
76,673,840 | 5,052,482 | Increase the width of the user input box using ipywidgets | <p>I am trying to increase the width of the user input box generated by the <code>interact</code> function from <code>ipywidgets</code>.</p>
<p>My objective is that a user should be able to change the contents of the string using the interactive box, however I am unable to figure out how to do so.</p>
<pre><code>import re
from ipywidgets import interact
x1 = """The Supreme Court, in the matter of, held that moratorium mentioned in section 141 Chapter XVII of the Act. / उच्चतम न्यायालय ने
Dwarkadhish Sakhar Karkhana Ltd. Vs. Pankaj Joshi and Anr. / द्वारकाधीश
Kalpraj Dharamshi & Anr. Vs. Kotak Investment Advisors Ltd. & Anr. / कल्पराज"""
# remove all non-english characters
x2 = re.sub(r'[^\x00-\x7f]',r'', x1)
def f(x):
return x
w = interact(f, x=x2)
</code></pre>
<p>(Width of the image showing the output reduced for ease of viewing)</p>
<p><a href="https://i.sstatic.net/BEGZI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BEGZI.png" alt="enter image description here" /></a></p>
<p>However, I want the input box to be much larger, so that the user can make a change anywhere in the string using the box itself and have that value stored.</p>
<p>Any help will be appreciated.</p>
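A sketch of one approach: instead of letting `interact` infer a one-line `Text` widget from the string, pass an explicit `Textarea` whose `Layout` sets the size (the 90% width and 120px height are arbitrary choices):

```python
import ipywidgets as widgets

# A multi-line, resizable input box to hand to interact() in place of
# the plain string (which would otherwise become a small Text widget)
box = widgets.Textarea(
    value="The Supreme Court, in the matter of, held that ...",
    layout=widgets.Layout(width='90%', height='120px'),
)
print(box.layout.width)  # 90%
```

Then hook it up with `w = interact(f, x=box)`; the edited string remains available afterwards as `box.value`.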
| <python><jupyter-notebook><ipython><ipywidgets> | 2023-07-12 19:20:43 | 1 | 2,174 | Jash Shah |
76,673,724 | 1,826,360 | RuntimeError when trying to connect to MongoDB using PyMongo in a Flask app from within a Docker container | <p>I have a Flask app that connects to MongoDB to serve data.
As soon as control reaches a statement where a connection to the MongoDB instance is required, it raises a <code>RuntimeError: can't start new thread</code> error.</p>
<pre><code> File "/usr/local/lib/python3.11/site-packages/pymongo/cursor.py", line 1251, in next
if len(self.__data) or self._refresh():
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pymongo/cursor.py", line 1142, in _refresh
self.__session = self.__collection.database.client._ensure_session()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pymongo/mongo_client.py", line 1758, in _ensure_session
return self.__start_session(True, causal_consistency=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pymongo/mongo_client.py", line 1703, in __start_session
self._topology._check_implicit_session_support()
File "/usr/local/lib/python3.11/site-packages/pymongo/topology.py", line 538, in _check_implicit_session_support
self._check_session_support()
File "/usr/local/lib/python3.11/site-packages/pymongo/topology.py", line 554, in _check_session_support
self._select_servers_loop(
File "/usr/local/lib/python3.11/site-packages/pymongo/topology.py", line 242, in _select_servers_loop
self._ensure_opened()
File "/usr/local/lib/python3.11/site-packages/pymongo/topology.py", line 596, in _ensure_opened
self._update_servers()
File "/usr/local/lib/python3.11/site-packages/pymongo/topology.py", line 747, in _update_servers
server.open()
File "/usr/local/lib/python3.11/site-packages/pymongo/server.py", line 49, in open
self._monitor.open()
File "/usr/local/lib/python3.11/site-packages/pymongo/monitor.py", line 79, in open
self._executor.open()
File "/usr/local/lib/python3.11/site-packages/pymongo/periodic_executor.py", line 87, in open
thread.start()
File "/usr/local/lib/python3.11/threading.py", line 957, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
</code></pre>
<p>The MongoDB instance is also running within a docker container.
The connection string I am using is like <code>mongodb://<user>:<password>@192.x.x.x:270xx/?retryWrites=true&w=majority</code></p>
<p>I am able to connect to the same Mongo instance from a Python Shell running on Host System but not from within the Flask App running in Docker.</p>
<p>Also, I ran the following command to get number of PIDs against that container <br/>
<code>docker stats --no-stream --format '{{.PIDs}}\t{{.Name}}'</code></p>
<p>and it returns <br/></p>
<pre><code>PIDs Name
1 flask-app
</code></pre>
<p>I have monitored resource usage on the host and it is negligible; I am hardly using 10% of RAM/CPU at any given time.<br/>
I have also tried stopping all containers except the Flask app and MongoDB, and it still returns the same error.</p>
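Given the uwsgi tag, one common cause worth ruling out (an assumption, since the uWSGI configuration isn't shown): uWSGI does not initialize the Python threading machinery unless told to, so PyMongo's background monitor thread cannot start. A sketch of the relevant setting:

```ini
; uwsgi.ini (sketch): without this, any thread.start() in the app,
; including PyMongo's topology monitor, fails with "can't start new thread"
[uwsgi]
enable-threads = true
```

The equivalent command-line flag is `--enable-threads`; if the app also uses a `threads = N` option, threading is enabled implicitly.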
| <python><mongodb><docker><flask><uwsgi> | 2023-07-12 19:03:55 | 0 | 433 | Muhammad Tauseef |
76,673,704 | 11,898,085 | DearPyGUI script won't start after wrapping it in a class | <p>I have messy code for live sensor data recording, which I rewrote in OOP style with docstrings and comments. The new version runs without errors, but the GUI is invisible or won't start at all. Here is a minimal example: the first script runs and displays the GUI window, but when I wrap it into a class and run it from main, I see nothing. No errors, but also no GUI. Could someone show me my logic error? I know the naming here is non-standard; that's irrelevant. Also, this is the first time I'm building an OOP program, so I might have misconceptions about the <code>__init__()</code> method, too. Feel free to enlighten me. This script runs OK:</p>
<pre class="lang-py prettyprint-override"><code>import dearpygui.dearpygui as dpg
dpg.create_context()
with dpg.window(label="Example Window"):
dpg.add_text("Hello, world")
dpg.add_button(label="Save")
dpg.add_input_text(label="string", default_value="Quick brown fox")
dpg.add_slider_float(label="float", default_value=0.273, max_value=1)
dpg.create_viewport(title='Custom Title', width=600, height=300)
dpg.setup_dearpygui()
dpg.show_viewport()
dpg.start_dearpygui()
dpg.destroy_context()
</code></pre>
<p>But the same script as a class from main does not show anything:</p>
<pre class="lang-py prettyprint-override"><code># this script is saved to same dir with main as dpg_demo.py
import dearpygui.dearpygui as dpg
class dpg_demo():
def __init__(self):
dpg.create_context()
with dpg.window(label="Example Window"):
dpg.add_text("Hello, world")
dpg.add_button(label="Save")
dpg.add_input_text(label="string", default_value="Quick brown fox")
dpg.add_slider_float(label="float", default_value=0.273, max_value=1)
dpg.create_viewport(title='Custom Title', width=600, height=300)
dpg.setup_dearpygui()
dpg.show_viewport()
dpg.start_dearpygui()
dpg.destroy_context()
</code></pre>
<pre class="lang-py prettyprint-override"><code>import dpg_demo as demo
def main():
gui = demo()
</code></pre>
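Two things keep the second version from ever starting: `demo` is the module, so `demo()` is not a call to the class (it would need `demo.dpg_demo()`), and `main()` is defined but never invoked. A stand-in sketch without dearpygui (class and method names are illustrative) showing the working shape:

```python
class DpgDemo:
    """Build the GUI in __init__; start the blocking loop in run()."""

    def __init__(self):
        # stands in for dpg.create_context() and the widget setup
        self.ready = True

    def run(self):
        # stands in for dpg.show_viewport() / dpg.start_dearpygui()
        return "running" if self.ready else "not ready"


def main():
    gui = DpgDemo()   # instantiate the class, not the module it lives in
    return gui.run()


if __name__ == "__main__":  # without this guard, main() never executes
    print(main())  # running
```

Keeping the blocking `start_dearpygui()` call out of `__init__` also makes the class easier to test, since constructing it no longer hangs until the window closes.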
| <python><class><python-import><init><dearpygui> | 2023-07-12 19:00:39 | 2 | 936 | jvkloc |
76,673,630 | 9,576,988 | Can't perform query from databases library's connection pool with async Flask | <p>I want to use Flask[async] to query data asynchronously using the <a href="https://www.encode.io/databases" rel="nofollow noreferrer">databases</a> library. I want to query data from PostgreSQL using a connection from a connection pool. In order to create a single connection pool for my Flask application, like what's described <a href="https://stackoverflow.com/questions/55523299/best-practices-for-persistent-database-connections-in-python-when-using-flask">here</a>, I put the database connection code in its own module.</p>
<p><code>db.py</code></p>
<pre class="lang-py prettyprint-override"><code>from databases import Database
from datetime import datetime
class Postgres:
@classmethod
async def create_pool(cls):
self = Postgres()
connection_url = 'postgresql+asyncpg://USER:PASSWORD@HOST:POST/DATABASE'
self.database = Database(connection_url, min_size=1, max_size=5)
if not self.database.is_connected:
await self.database.connect()
print(f'Connected to database at {datetime.now()}')
return self
</code></pre>
<p><code>main.py</code></p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, render_template
from db import Postgres
import asyncio
from flask_wtf import FlaskForm
from wtforms.fields import StringField, SubmitField
from wtforms.validators import DataRequired, Length
async def create_app():
app = Flask(__name__)
app.db = await Postgres.create_pool()
return app
app = asyncio.run(create_app())
class ItemForm(FlaskForm):
item_id = StringField("Item ID:", validators=[DataRequired(), Length(7, 9)])
submit = SubmitField('Search')
async def get_item(item_id):
query = """select item_name from items where item_id = :item_id;"""
# app.db.database.is_connected --> True
async with app.db.database.transaction(): # AttributeError: 'NoneType' object has no attribute 'send'
item = await app.db.database.fetch_all(query, values={'item_id': item_id})
return item
@app.route('/', methods=['GET', 'POST'])
async def home():
form = ItemForm()
if form.validate_on_submit():
item = await get_item(form.item_id.data)
return render_template('home.html', form=form, item=item)
return render_template('home.html', form=form)
if __name__ == "__main__":
asyncio.run(app.run('0.0.0.0', port=8080, debug=True))
</code></pre>
<p>This prints <code>Connected to database at 2023-07-12 14:42:46.601021</code> and the connection still exists before the database transaction, but I get the error:</p>
<pre><code>AttributeError: 'NoneType' object has no attribute 'send'
</code></pre>
<p>Why do I get this error, and how can I make this work?</p>
<p><strong>Edit</strong>:</p>
<p>I've removed the line <code>async with app.db.database.transaction():</code> since transactions are for writing to the database. That leaves me with the error <code>asyncpg.exceptions._base.InterfaceError: cannot perform operation: another operation is in progress</code>.</p>
<p>I then removed everything so I just have</p>
<pre class="lang-py prettyprint-override"><code># top of script
database = Databases(URL)
# inside get_item()
await database.connect()
await database.fetch_all(...)
</code></pre>
<p>The query runs, but there is no connection pool now...</p>
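A plausible explanation, sketched without Flask or the databases package: `asyncio.run` creates a fresh event loop and closes it on return, so a pool connected inside `asyncio.run(create_app())` is bound to a loop that no longer exists by the time a request handler runs on a different loop. The stand-in below mimics how asyncpg binds its pool to the running loop:

```python
import asyncio

# Not the databases/asyncpg API: a stand-in pool that remembers the loop
# it was connected on, the way asyncpg's pool does.
class FakePool:
    def __init__(self):
        self.loop = None

    async def connect(self):
        self.loop = asyncio.get_running_loop()

    async def fetch(self):
        if asyncio.get_running_loop() is not self.loop:
            raise RuntimeError("pool is bound to another (closed) loop")
        return ["row"]

pool = FakePool()
asyncio.run(pool.connect())   # loop A: asyncio.run closes it on return

try:
    asyncio.run(pool.fetch())  # loop B: the pool's loop is gone
except RuntimeError as exc:
    msg = str(exc)
    print(msg)  # pool is bound to another (closed) loop
```

This suggests connecting lazily, inside the loop that will actually run the queries (for example on the first request, or in a startup hook that shares the app's loop), rather than in a separate `asyncio.run` call at import time.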
| <python><database><flask><asynchronous><database-connection> | 2023-07-12 18:48:32 | 1 | 594 | scrollout |
76,673,610 | 11,395,399 | Obtaining next scheduled run time of a DAG using Airflow 2.2.5 REST API | <p>I used Airflow 2.4.2 REST API and python <code>apache-airflow-client</code> to retrieve the next scheduled run time of a DAG.</p>
<pre class="lang-py prettyprint-override"><code>from airflow_client.client.api import dag_api
with airflow_client.client.ApiClient(conf) as api_client:
api_instance = dag_api.DAGApi(api_client)
dag_id = build_dag_id(name)
response = api_instance.get_dag(dag_id)
next_run = response["next_dagrun"]
</code></pre>
<p>However, now I need to do the same with Airflow 2.2.5 REST API, and it turns out that the <code>next_dagrun</code> value does not exist there. Now I'm looking for an alternative way to obtain this value. I would appreciate any advice.</p>
| <python><python-3.x><rest><airflow><airflow-api> | 2023-07-12 18:46:09 | 1 | 553 | Helen |
76,673,512 | 3,107,798 | how to copy s3 object from one bucket to another using python s3fs | <p>Using <a href="https://s3fs.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">python s3fs</a>, how do you copy an object from one s3 bucket to another? I have found <a href="https://stackoverflow.com/questions/47468148/how-to-copy-s3-object-from-one-bucket-to-another-using-python-boto3">answers using boto3</a>, but could not find anything when looking through the s3fs docs.</p>
| <python><amazon-web-services><amazon-s3><python-s3fs> | 2023-07-12 18:30:41 | 1 | 11,245 | jjbskir |
76,673,440 | 9,985,032 | subprocess check_output throws exception without any error message | <p>When I run <code>clang main.cpp -o myprogram.exe</code> in the terminal I get a meaningful error message:</p>
<pre><code>main.cpp:6:32: error: expected ';' after expression
std::cout << "Hello World!"
</code></pre>
<p>but when I try to reproduce it using python and running</p>
<pre><code>try:
subprocess.check_output('clang main.cpp -o myprogram.exe')
except subprocess.CalledProcessError as e:
print(e)
print(e.output)
print(e.stderr)
print(e.stdout)
</code></pre>
<pre><code>Command 'clang main.cpp -o myprogram.exe' returned non-zero exit status 1.
b''
None
b''
</code></pre>
<p>How can I give my user back the same error message the terminal provides?</p>
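clang writes its diagnostics to stderr, which `check_output` leaves uncaptured unless redirected into stdout. A runnable sketch (a tiny Python subprocess stands in for clang so the example works anywhere):

```python
import subprocess
import sys

# Stand-in for the failing clang invocation: exits 1 and writes to stderr
cmd = [sys.executable, "-c", "import sys; sys.stderr.write('boom'); sys.exit(1)"]
try:
    # stderr=subprocess.STDOUT folds the diagnostics into e.output;
    # text=True decodes it to str instead of bytes
    subprocess.check_output(cmd, stderr=subprocess.STDOUT, text=True)
except subprocess.CalledProcessError as e:
    captured = e.output
    print(captured)  # boom
```

For the real command, pass the arguments as a list (`['clang', 'main.cpp', '-o', 'myprogram.exe']`) so the same call also works without a shell on POSIX systems.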
| <python><subprocess> | 2023-07-12 18:18:54 | 1 | 596 | SzymonO |
76,673,400 | 10,242,059 | PermissionError: [Errno 13] Permission denied: '<project directory>' | <p>I am troubleshooting Airflow for some MLOps work (see <a href="https://towardsdatascience.com/unlocking-mlops-using-airflow-a-comprehensive-guide-to-ml-system-orchestration-880aa9be8cff" rel="nofollow noreferrer">this</a>).
Here's the error log:</p>
<pre><code>2c0cbc30645c
*** Found local files:
*** * /opt/airflow/logs/dag_id=ml_pipeline/run_id=manual__2023-07-12T17:41:32.270307+00:00/task_id=run_feature_pipeline/attempt=1.log
[2023-07-12, 17:41:34 UTC] {taskinstance.py:1103} INFO - Dependencies all met for dep_context=non-requeueable deps ti=<TaskInstance: ml_pipeline.run_feature_pipeline manual__2023-07-12T17:41:32.270307+00:00 [queued]>
[2023-07-12, 17:41:34 UTC] {taskinstance.py:1103} INFO - Dependencies all met for dep_context=requeueable deps ti=<TaskInstance: ml_pipeline.run_feature_pipeline manual__2023-07-12T17:41:32.270307+00:00 [queued]>
[2023-07-12, 17:41:34 UTC] {taskinstance.py:1308} INFO - Starting attempt 1 of 1
[2023-07-12, 17:41:34 UTC] {taskinstance.py:1327} INFO - Executing <Task(_PythonVirtualenvDecoratedOperator): run_feature_pipeline> on 2023-07-12 17:41:32.270307+00:00
[2023-07-12, 17:41:34 UTC] {standard_task_runner.py:57} INFO - Started process 73 to run task
[2023-07-12, 17:41:34 UTC] {standard_task_runner.py:84} INFO - Running: ['***', 'tasks', 'run', 'ml_pipeline', 'run_feature_pipeline', 'manual__2023-07-12T17:41:32.270307+00:00', '--job-id', '9', '--raw', '--subdir', 'DAGS_FOLDER/ml_pipeline_dag.py', '--cfg-path', '/tmp/tmpw2izgy7o']
[2023-07-12, 17:41:34 UTC] {standard_task_runner.py:85} INFO - Job 9: Subtask run_feature_pipeline
[2023-07-12, 17:41:34 UTC] {task_command.py:410} INFO - Running <TaskInstance: ml_pipeline.run_feature_pipeline manual__2023-07-12T17:41:32.270307+00:00 [running]> on host 2c0cbc30645c
[2023-07-12, 17:41:34 UTC] {taskinstance.py:1547} INFO - Exporting env vars: AIRFLOW_CTX_DAG_OWNER='***' AIRFLOW_CTX_DAG_ID='ml_pipeline' AIRFLOW_CTX_TASK_ID='run_feature_pipeline' AIRFLOW_CTX_EXECUTION_DATE='2023-07-12T17:41:32.270307+00:00' AIRFLOW_CTX_TRY_NUMBER='1' AIRFLOW_CTX_DAG_RUN_ID='manual__2023-07-12T17:41:32.270307+00:00'
[2023-07-12, 17:41:34 UTC] {process_utils.py:181} INFO - Executing cmd: /usr/local/bin/python -m virtualenv /tmp/venv69qh8_nn --system-site-packages --python=python3.9
[2023-07-12, 17:41:34 UTC] {process_utils.py:185} INFO - Output:
[2023-07-12, 17:41:37 UTC] {process_utils.py:189} INFO - created virtual environment CPython3.9.2.final.0-64 in 1833ms
[2023-07-12, 17:41:37 UTC] {process_utils.py:189} INFO - creator CPython3Posix(dest=/tmp/venv69qh8_nn, clear=False, no_vcs_ignore=False, global=True)
[2023-07-12, 17:41:37 UTC] {process_utils.py:189} INFO - seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/***/.local/share/virtualenv)
[2023-07-12, 17:41:37 UTC] {process_utils.py:189} INFO - added seed packages: pip==23.1, setuptools==67.6.1, wheel==0.40.0
[2023-07-12, 17:41:37 UTC] {process_utils.py:189} INFO - activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator
[2023-07-12, 17:41:37 UTC] {process_utils.py:181} INFO - Executing cmd: /tmp/venv69qh8_nn/bin/pip install -r /tmp/venv69qh8_nn/requirements.txt
[2023-07-12, 17:41:37 UTC] {process_utils.py:185} INFO - Output:
[2023-07-12, 17:41:38 UTC] {process_utils.py:189} INFO - adding trusted host: '172.17.0.1' (from line 1 of /tmp/venv69qh8_nn/requirements.txt)
[2023-07-12, 17:41:38 UTC] {process_utils.py:189} INFO - Looking in indexes: https://pypi.org/simple, http://172.17.0.1
[2023-07-12, 17:41:39 UTC] {process_utils.py:189} INFO - Collecting feature_pipeline (from -r /tmp/venv69qh8_nn/requirements.txt (line 3))
[2023-07-12, 17:41:39 UTC] {process_utils.py:189} INFO - Downloading
...
[2023-07-12, 17:42:56 UTC] {process_utils.py:189} INFO - Successfully installed Click-8.1.4 Ipython-8.14.0 PyMySQL-1.1.0 altair-4.2.2 asttokens-2.2.1 attrs-23.1.0 avro-1.11.0 backcall-0.2.0 boto3-1.28.2 botocore-1.31.2 certifi-2023.5.7 cffi-1.15.1 charset-normalizer-3.2.0 colorama-0.4.6 confluent-kafka-1.9.0 cryptography-41.0.2 dataclasses-0.6 decorator-5.1.1 entrypoints-0.4 executing-1.2.0 fastavro-1.7.3 fastjsonschema-2.17.1 feature_pipeline-0.1.0 fire-0.5.0 furl-2.1.3 future-0.18.3 great_expectations-0.14.13 greenlet-2.0.2 hopsworks-3.2.0 hsfs-3.2.0 hsml-3.2.0 idna-3.4 importlib-metadata-6.8.0 javaobj-py3-0.4.3 jedi-0.18.2 jinja2-3.0.3 jmespath-1.0.1 jsonpatch-1.33 jsonpointer-2.4 jsonschema-4.18.2 jsonschema-specifications-2023.6.1 jupyter-core-5.3.1 markupsafe-2.0.1 matplotlib-inline-0.1.6 mistune-3.0.1 mock-5.1.0 multidict-6.0.4 nbformat-5.9.1 numpy-1.25.1 orderedmultidict-1.0.1 packaging-23.1 pandas-1.5.3 parso-0.8.3 pexpect-4.8.0 pickleshare-0.7.5 platformdirs-3.8.1 prompt-toolkit-3.0.39 ptyprocess-0.7.0 pure-eval-0.2.2 pyarrow-12.0.1 pyasn1-0.5.0 pyasn1-modules-0.3.0 pycparser-2.21 pycryptodomex-3.18.0 pygments-2.15.1 pyhopshive-0.6.4.1.dev0 pyhumps-1.6.1 pyjks-20.0.0 pyparsing-2.4.7 python-dateutil-2.8.2 python-dotenv-1.0.0 pytz-2023.3 referencing-0.29.1 requests-2.31.0 rpds-py-0.8.10 ruamel.yaml-0.17.17 ruamel.yaml.clib-0.2.7 s3transfer-0.6.1 scipy-1.11.1 six-1.16.0 sqlalchemy-2.0.18 stack-data-0.6.2 termcolor-2.3.0 thrift-0.16.0 toolz-0.12.0 tqdm-4.65.0 traitlets-5.9.0 twofish-0.3.0 typing-extensions-4.7.1 tzlocal-5.0.1 urllib3-1.26.16 wcwidth-0.2.6 yarl-1.9.2 zipp-3.16.0
[2023-07-12, 17:42:56 UTC] {process_utils.py:189} INFO -
[2023-07-12, 17:42:56 UTC] {process_utils.py:189} INFO - [notice] A new release of pip is available: 23.1 -> 23.1.2
[2023-07-12, 17:42:56 UTC] {process_utils.py:189} INFO - [notice] To update, run: /tmp/venv69qh8_nn/bin/python -m pip install --upgrade pip
[2023-07-12, 17:42:57 UTC] {process_utils.py:181} INFO - Executing cmd: /tmp/venv69qh8_nn/bin/python /tmp/venv69qh8_nn/script.py /tmp/venv69qh8_nn/script.in /tmp/venv69qh8_nn/script.out /tmp/venv69qh8_nn/string_args.txt
[2023-07-12, 17:42:57 UTC] {process_utils.py:185} INFO - Output:
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - Traceback (most recent call last):
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - File "/usr/lib/python3.9/pathlib.py", line 1312, in mkdir
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - self._accessor.mkdir(self, mode)
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - FileNotFoundError: [Errno 2] No such file or directory: '/home/hud/projects/energy-forecasting/output'
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO -
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - During handling of the above exception, another exception occurred:
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO -
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - Traceback (most recent call last):
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - File "/usr/lib/python3.9/pathlib.py", line 1312, in mkdir
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - self._accessor.mkdir(self, mode)
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - FileNotFoundError: [Errno 2] No such file or directory: '/home/hud/projects/energy-forecasting'
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO -
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - During handling of the above exception, another exception occurred:
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO -
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - Traceback (most recent call last):
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - File "/usr/lib/python3.9/pathlib.py", line 1312, in mkdir
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - self._accessor.mkdir(self, mode)
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - FileNotFoundError: [Errno 2] No such file or directory: '/home/hud/projects'
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO -
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - During handling of the above exception, another exception occurred:
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO -
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - Traceback (most recent call last):
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - File "/tmp/venv69qh8_nn/script.py", line 89, in <module>
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - res = run_feature_pipeline(*arg_dict["args"], **arg_dict["kwargs"])
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - File "/tmp/venv69qh8_nn/script.py", line 59, in run_feature_pipeline
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - from feature_pipeline import utils, pipeline
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - File "/tmp/venv69qh8_nn/lib/python3.9/site-packages/feature_pipeline/utils.py", line 5, in <module>
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - from feature_pipeline import settings
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - File "/tmp/venv69qh8_nn/lib/python3.9/site-packages/feature_pipeline/settings.py", line 44, in <module>
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - File "/usr/lib/python3.9/pathlib.py", line 1316, in mkdir
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - self.parent.mkdir(parents=True, exist_ok=True)
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - File "/usr/lib/python3.9/pathlib.py", line 1316, in mkdir
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - self.parent.mkdir(parents=True, exist_ok=True)
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - File "/usr/lib/python3.9/pathlib.py", line 1316, in mkdir
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - self.parent.mkdir(parents=True, exist_ok=True)
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - File "/usr/lib/python3.9/pathlib.py", line 1312, in mkdir
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - self._accessor.mkdir(self, mode)
[2023-07-12, 17:42:57 UTC] {process_utils.py:189} INFO - PermissionError: [Errno 13] Permission denied: '/home/hud'
[2023-07-12, 17:42:57 UTC] {taskinstance.py:1824} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/decorators/base.py", line 220, in execute
return_value = super().execute(context)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/operators/python.py", line 374, in execute
return super().execute(context=serializable_context)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/operators/python.py", line 181, in execute
return_value = self.execute_callable()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/operators/python.py", line 578, in execute_callable
result = self._execute_python_callable_in_subprocess(python_path, tmp_path)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/operators/python.py", line 434, in _execute_python_callable_in_subprocess
os.fspath(string_args_path),
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/process_utils.py", line 170, in execute_in_subprocess
execute_in_subprocess_with_kwargs(cmd, cwd=cwd)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/process_utils.py", line 193, in execute_in_subprocess_with_kwargs
raise subprocess.CalledProcessError(exit_code, cmd)
subprocess.CalledProcessError: Command '['/tmp/venv69qh8_nn/bin/python', '/tmp/venv69qh8_nn/script.py', '/tmp/venv69qh8_nn/script.in', '/tmp/venv69qh8_nn/script.out', '/tmp/venv69qh8_nn/string_args.txt']' returned non-zero exit status 1.
[2023-07-12, 17:42:58 UTC] {taskinstance.py:1350} INFO - Marking task as FAILED. dag_id=ml_pipeline, task_id=run_feature_pipeline, execution_date=20230712T174132, start_date=20230712T174134, end_date=20230712T174258
[2023-07-12, 17:42:58 UTC] {standard_task_runner.py:109} ERROR - Failed to execute job 9 for task run_feature_pipeline (Command '['/tmp/venv69qh8_nn/bin/python', '/tmp/venv69qh8_nn/script.py', '/tmp/venv69qh8_nn/script.in', '/tmp/venv69qh8_nn/script.out', '/tmp/venv69qh8_nn/string_args.txt']' returned non-zero exit status 1.; 73)
[2023-07-12, 17:42:58 UTC] {local_task_job_runner.py:225} INFO - Task exited with return code 1
[2023-07-12, 17:42:58 UTC] {taskinstance.py:2653} INFO - 0 downstream tasks scheduled from follow-on schedule check
</code></pre>
<p>To setup airflow, I did this:</p>
<pre><code># Move to the airflow directory.
cd airflow
# Make expected directories and environment variables
mkdir -p ./logs ./plugins
sudo chmod 777 ./logs ./plugins
# It will be used by Airflow to identify your user.
echo -e "AIRFLOW_UID=$(id -u)" > .env
# This shows where the project root directory is located.
echo "ML_PIPELINE_ROOT_DIR=/opt/airflow/dags" >> .env
# Initialize the Airflow database
docker compose up airflow-init
# Start up all services
# Note: You should set up the private PyPi server credentials before running this command.
docker compose --env-file .env up --build -d
</code></pre>
<p>My Dockerfile:</p>
<pre><code>FROM apache/airflow:2.5.2
ARG CURRENT_USER=$USER
USER root
# Install Python dependencies to be able to process the wheels from the private PyPI server.
RUN apt-get -y update && apt-get -y upgrade
RUN apt-get install -y python3.9-distutils python3.9-dev build-essential
USER ${CURRENT_USER}
</code></pre>
<p>My project directory structure:</p>
<p><a href="https://i.sstatic.net/6mxUw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6mxUw.png" alt="blaa" /></a></p>
<p>My issue is most similar to <a href="https://stackoverflow.com/questions/72511791/airflow-docker-permissionerror-errno-13-permission-denied-project-directo">this</a>, but it doesn't seem to be some error on relative imports as I am using absolute paths. I've also tried several different solutions based on <a href="https://stackoverflow.com/questions/62499661/airflow-dockeroperator-fails-with-permission-denied-error">this</a>, but to no avail.</p>
<p>Has anyone encountered such problems before?</p>
| <python><docker><docker-compose><airflow> | 2023-07-12 18:13:08 | 1 | 301 | Hud |
76,673,295 | 3,034,160 | Weird case of Non-alphanum char in element with leading alpha | <p>I got a weird case of "<em>Non-alphanum char in element with leading alpha</em>"</p>
<pre><code>db.collection(u'clientsMap').document(tid).update({
'clients.'+uid+'.status': True
})
</code></pre>
<p>I also tried:</p>
<pre><code>db.collection(u'clientsMap').document(tid).update({
f'clients.{uid}.status': True
})
</code></pre>
<p>For the 1st user id I get:</p>
<blockquote>
<p><em>Non-alphanum char in element with leading alpha: a994f7df-47c5-4e8a-941e-2d080292694a</em></p>
</blockquote>
<p>But for the 2nd user id, <em>086aac5a-9087-40d1-b078-1273d5663067</em>, it works with no issue. What am I missing?</p>
<p>For both of the users, I can use <code>.set({}, merge=True)</code>, and it works.</p>
<p>Do I need to do some kind of string sanitization? What could be the problem here?</p>
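<p>For reference, a field-path element in <code>update()</code> has to be a plain identifier unless it is escaped, which matches the pattern the error message hints at. Below is a sketch of one possible sanitization helper (the names <code>quote_element</code> and <code>field_path</code> are invented here; the Firestore client library also ships a <code>FieldPath</code> helper that performs this escaping):</p>

```python
import re

# A "plain" element, per the rule the error message suggests:
# leading letter/underscore, then letters, digits, underscores only.
_SIMPLE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def quote_element(element: str) -> str:
    """Backtick-quote a field-path element unless it is a plain identifier."""
    if _SIMPLE.match(element):
        return element
    escaped = element.replace("\\", "\\\\").replace("`", "\\`")
    return f"`{escaped}`"

def field_path(*elements: str) -> str:
    """Join elements into a dotted field path, quoting where needed."""
    return ".".join(quote_element(e) for e in elements)

uid = "a994f7df-47c5-4e8a-941e-2d080292694a"
print(field_path("clients", uid, "status"))
# clients.`a994f7df-47c5-4e8a-941e-2d080292694a`.status
```

<p>The quoted path could then be used as the dictionary key passed to <code>update()</code>.</p>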
| <python><python-3.x><google-cloud-firestore> | 2023-07-12 17:56:47 | 0 | 601 | Xao |
76,673,236 | 9,585,520 | Double measurement frequency | <p>I have a pandas dataframe which can be abstracted as</p>
<pre class="lang-py prettyprint-override"><code>>>> pd.DataFrame({'time':[1,2,3], 'data':[10,20,30]})
time data
0 1 10
1 2 20
2 3 30
</code></pre>
<p>I want to "double" the time column by adding rows midway between each pair, and interpolate for the data column.</p>
<p>The expected result for the example data would be</p>
<pre class="lang-py prettyprint-override"><code> time data
0 1 10
1 1.5 15
2 2 20
3 2.5 25
4 3 30
</code></pre>
<p><strong>Note</strong>: "time" will not necessarily be evenly spaced, another equally valid dataset could be</p>
<pre><code>>>> pd.DataFrame({'time':[1,2.1,3], 'data':[10,20,30]})
time data
0 1 10
1 2.1 20
2 3 30
</code></pre>
<p>As such, I don't think something like <code>resample</code> would work.</p>
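<p>A sketch of one way to implement this (the helper name <code>double_frequency</code> is invented here): since each inserted row sits exactly halfway in time between its neighbours, the linearly interpolated value is simply the mean of the two neighbouring rows, which also works for unevenly spaced times.</p>

```python
import pandas as pd

def double_frequency(df: pd.DataFrame) -> pd.DataFrame:
    """Insert one row midway between each consecutive pair of rows."""
    # At the halfway point in time, linear interpolation reduces to the
    # mean of the two neighbouring rows, so a rolling mean suffices.
    mid = df.rolling(2).mean().dropna()
    return (
        pd.concat([df, mid])
        .sort_values("time")
        .reset_index(drop=True)
    )

df = pd.DataFrame({"time": [1, 2.1, 3], "data": [10, 20, 30]})
print(double_frequency(df))
```

<p>For the evenly spaced example this yields times <code>1, 1.5, 2, 2.5, 3</code> with data <code>10, 15, 20, 25, 30</code>; for the uneven one, the midpoint between 1 and 2.1 lands at 1.55 with data 15.</p>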
| <python><pandas><dataframe> | 2023-07-12 17:48:42 | 2 | 472 | Morten Nissov |
76,673,161 | 2,875,308 | rebuild Python project which imports own package run in Docker while in development | <p>I have a python project with file structure as below</p>
<pre><code>- src
|- myproject
|- main.py
- pyproject.toml
</code></pre>
<p>In main.py, it references its own package like this:</p>
<pre class="lang-py prettyprint-override"><code>from myproject.server import MyprojectServiceServicer
</code></pre>
<p>currently, I run the project using Docker with Dockerfile and docker-compose.yml below</p>
<pre><code># syntax = docker/dockerfile:1-labs
FROM python:3.11
WORKDIR /app
COPY --link ./pyproject.toml /app/pyproject.toml
COPY --link ./src /app/src
RUN pip install --no-cache-dir --upgrade -v /app
</code></pre>
<pre><code>version: '3'
services:
myproject:
build:
context: .
dockerfile: Dockerfile.dev
environment:
- PYTHONUNBUFFERED=1
- PYTHONDONTWRITEBYTECODE=1
env_file:
- .env
ports:
- 8080:8080
volumes:
- .:/app
entrypoint: "python /app/src/generative_agents/main.py"
</code></pre>
<p>The problem is that the project references its own package built via <code>pip install --no-cache-dir --upgrade -v /app</code>, which links to the copied "src" directory rather than the local files. So whenever I make some changes to the code and want to test them, I have to rebuild the image, which slows me down.</p>
<p>Is there any way to update the project's package while referencing the local files? Or to only copy the local files and build the package when rebuilding?</p>
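<p>One direction that might avoid the rebuilds (a sketch, not verified against this exact project): install the package in editable mode with <code>pip install -e</code>, so imports resolve to <code>/app/src</code>, which the compose file already bind-mounts from the host.</p>

```dockerfile
# Dockerfile.dev (sketch): editable install so the package tracks the mounted source
FROM python:3.11
WORKDIR /app
COPY --link ./pyproject.toml /app/pyproject.toml
COPY --link ./src /app/src
# -e installs in "editable" mode: site-packages points at /app/src,
# and the `.:/app` volume keeps that directory in sync with the host,
# so code changes are picked up without rebuilding the image
RUN pip install --no-cache-dir -e /app
```

<p>Whether this works cleanly depends on the bind mount fully replacing <code>/app</code> with the host copy at runtime, so it is worth testing with a trivial code change first.</p>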
| <python><docker> | 2023-07-12 17:36:47 | 0 | 481 | user2875308 |
76,673,142 | 8,973,620 | Order of batch, prefetch, shuffle and cache in a Tensorflow dataset | <p>When creating a dataset from a generator, what would be the correct order of the following dataset methods? Or does the order not matter here?</p>
<pre><code>ds = tf.data.Dataset.from_generator(my_generator)
ds = ds.prefetch(tf.data.AUTOTUNE).shuffle(1000).batch(128).cache()
</code></pre>
<p>Here I use <code>prefetch</code> to speed up data generation and <code>cache</code> to avoid calculating after every epoch.</p>
| <python><tensorflow><keras><tensorflow-datasets> | 2023-07-12 17:34:08 | 1 | 18,110 | Mykola Zotko |
76,673,079 | 1,107,474 | Python Cannot see modification to objects mapped over a multiprocessing thread pool | <p>In the code below I create a list of objects and spread the list over a pool to perform a function (which modifies each object's state) on each object in parallel. After the pool finishes, I loop over the objects and print the modification.</p>
<p>Unfortunately it's still printing the initial state.</p>
<p>Is there a way to achieve this?</p>
<pre><code>import multiprocessing as mp
class MyObj:
def __init__(self, x):
self.x = 0
def task_init(output_queue):
task.output_queue = output_queue
def task(obj):
obj.x = 5 # Here I change the value
my_list = []
obj = MyObj("")
my_list.append(obj)
output_queue = mp.Queue()
p = mp.Pool(mp.cpu_count(), task_init, [output_queue])
p.map(task, my_list)
for obj in my_list:
print(str(obj.x)) # Problem, shows the initial value, not 5
</code></pre>
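<p>For context on what the example runs into: <code>mp.Pool</code> spawns separate processes, and each worker receives a pickled copy of the object, so in-place mutations never reach the parent's list. One common pattern (a sketch, simplified from the code above) is to return the modified objects from the task and rebind the list to <code>map</code>'s result:</p>

```python
import multiprocessing as mp

class MyObj:
    def __init__(self, x):
        self.x = x

def task(obj):
    obj.x = 5   # mutates the worker's private copy...
    return obj  # ...so return it to ship the change back to the parent

def run(objs):
    # map() collects the returned (re-pickled) objects from the workers
    with mp.Pool(2) as p:
        return p.map(task, objs)

if __name__ == "__main__":
    my_list = run([MyObj(0), MyObj(0)])
    print([o.x for o in my_list])  # [5, 5]
```

<p>Alternatives include shared state via <code>mp.Manager()</code> proxies, but returning results is usually the simplest fix.</p>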
| <python><python-3.x> | 2023-07-12 17:26:42 | 0 | 17,534 | intrigued_66 |
76,673,069 | 7,713,770 | django docker, how to copy local db data to the docker db? | <p>I have dockerized an existing Django app and an existing Postgres database. The tables are created in the Docker container, but they are empty; there is no data in them.</p>
<p>If I connect to the Docker container and run <code>\d+</code>:</p>
<pre><code>| Schema | Name | Type | Owner | Persistence | Size | Description
| --------+------------------------------------------+----------+---------------+-------------+------------+-------------
| public | hotelWelzijnAdmin_animal | table | hotelwelzijn | permanent | 8192 bytes |
| public | hotelWelzijnAdmin_animal_id_seq | sequence | hotelwelzijn | permanent | 8192 bytes |
| public | hotelWelzijnAdmin_category | table | hotelwelzijn | permanent | 8192 bytes |
| public | hotelWelzijnAdmin_category_id_seq | sequence | hotelwelzijn | permanent | 8192 bytes |
| public | accounts_account | table | hotelwelzijn | permanent | 0 bytes |
| public | accounts_account_groups | table | hotelwelzijn | permanent | 0 bytes |
| public | accounts_account_groups_id_seq | sequence | hotelwelzijn | permanent | 8192 bytes |
| public | accounts_account_id_seq | sequence | hotelwelzijn | permanent | 8192 bytes |
| public | accounts_account_user_permissions | table | hotelwelzijn | permanent | 0 bytes |
| public | accounts_account_user_permissions_id_seq | sequence | hotelwelzijn | permanent | 8192 bytes |
| public | auth_group | table | hotelwelzijn | permanent | 0 bytes |
| public | auth_group_id_seq | sequence | hotelwelzijn | permanent | 8192 bytes |
| public | auth_group_permissions | table | hotelwelzijn | permanent | 0 bytes |
| public | auth_group_permissions_id_seq | sequence | hotelwelzijn | permanent | 8192 bytes |
| public | auth_permission | table | hotelwelzijn | permanent | 8192 bytes |
| public | auth_permission_id_seq | sequence | hotelwelzijn | permanent | 8192 bytes |
</code></pre>
<p>The list of tables is shown, but the tables are empty.</p>
<p>Because if I run, for example:</p>
<pre><code>welzijn-# select * from accounts_account
</code></pre>
<p>no data is shown.</p>
<p>This is the dockerfile:</p>
<pre><code># pull official base image
FROM python:3.9-alpine3.13
# set work directory
WORKDIR /usr/src/app
EXPOSE 8000
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add linux-headers postgresql-dev gcc python3-dev musl-dev
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
COPY ./requirements.dev.txt .
RUN pip install -r requirements.txt
# copy entrypoint.sh
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
# copy project
COPY . .
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
</code></pre>
<p>and dockercompose file:</p>
<pre><code>version: '3.9'
services:
app:
build:
context: .
args:
- DEV=true
ports:
- "8000:8000"
volumes:
- .:/app
command: >
sh -c "python ./manage.py migrate &&
python ./manage.py runserver 0:8000"
env_file:
- ./.env
depends_on:
- db
db:
image: postgres:13-alpine
container_name: postgres
volumes:
- dev-db-data:/var/lib/postgresql/data
env_file:
- ./.env
ports:
- '5432:5432'
volumes:
dev-db-data:
dev-static-data:
</code></pre>
<p>And this is the entrypoint.sh code:</p>
<pre><code>#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $SQL_HOST $SQL_PORT; do
sleep 0.1
done
echo "PostgreSQL started"
fi
python manage.py flush --no-input
python manage.py makemigrations --merge
python manage.py migrate --noinput
exec "$@"
</code></pre>
<p>Question: how do I copy the data from the host database into the database running in the Docker container?</p>
| <python><django><docker><docker-compose><dockerfile> | 2023-07-12 17:25:28 | 2 | 3,991 | mightycode Newton |
76,673,056 | 11,532,220 | Differences in Type and Naming Conventions for UML Class in Different Programming Languages | <p>Are UML class representations different for each programming language, or do they adhere to a standard? I have conducted extensive research but remain confused.</p>
<p>For instance:</p>
<ul>
<li>In Python: set_name(): None</li>
<li>In Java: setName(): void</li>
<li>In Kotlin: setName(): Unit</li>
</ul>
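<p>To make the first row concrete, the Python signature that UML's <code>set_name(): None</code> would map to could be annotated like this (an illustrative sketch; the class name is made up):</p>

```python
class Person:
    def set_name(self, name: str) -> None:  # UML: set_name(name: str): None
        self.name = name

p = Person()
print(p.set_name("Ada"))  # None: the "no return value" that Java spells void and Kotlin spells Unit
```
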
| <python><java><kotlin><uml> | 2023-07-12 17:22:23 | 1 | 444 | Ali Dehkhodaei |
76,673,010 | 12,553,730 | Experiencing a problem with accessing CUDA(version 12) while using Cellbender | <p>CUDA cannot be detected by Cellbender. I have the latest CUDA driver installed, and here is the output that confirms the same:</p>
<pre><code>(CellBender) [user@server]$ nvidia-smi
Tue Jul 11 19:11:19 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.54.03 Driver Version: 535.54.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce GT 1030 Off | 00000000:43:00.0 Off | N/A |
| 35% 28C P8 N/A / 30W | 255MiB / 2048MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 4396 G /usr/libexec/Xorg 63MiB |
| 0 N/A N/A 4521 G /usr/bin/gnome-shell 190MiB |
+---------------------------------------------------------------------------------------+
</code></pre>
<p>I am using the following command to remove the background: <code>cellbender remove-background --input raw_feature_bc_matrix_pDRGN_d0_r1_50.h5 --output output.h5 --cuda --expected-cells 20000 --total-droplets-included 25000 --fpr 0.01 --epochs 150</code></p>
<h4>Error:</h4>
<pre><code>Traceback (most recent call last):
File "/path/to/.conda/envs/CellBender/bin/cellbender", line 33, in <module>
sys.exit(load_entry_point('cellbender', 'console_scripts', 'cellbender')())
File "/user/CellBender/cellbender/base_cli.py", line 98, in main
args = cli_dict[args.tool].validate_args(args)
File "/user/CellBender/cellbender/remove_background/cli.py", line 69, in validate_args
assert torch.cuda.is_available(), "Trying to use CUDA, " \
AssertionError: Trying to use CUDA, but CUDA is not available.
</code></pre>
| <python><pytorch><bioinformatics> | 2023-07-12 17:15:23 | 2 | 309 | nikhil int |
76,672,923 | 13,460,562 | Keras/Tensorflow - Custom loss function is equal to 0 when run_eagerly=None? | <pre><code>custom_loss_layer = CustomLossLayer(params)(inputs, outputs)
model.add_loss(custom_loss_layer)
model.compile(..., loss="categorical_crossentropy", run_eagerly=True)
</code></pre>
<p>Running .evaluate() on the model with run_eagerly=True, I get a loss of 10, higher than the combined losses of all outputs, as expected. However, run_eagerly brings with it a lot of performance problems. If I set run_eagerly to None instead and run .evaluate() again, the loss is only around 3, exactly equal to the combined losses of all outputs. Does anyone have any idea why this might be happening?</p>
<p>I suspect that the nature of the loss function/layer might have something to do with it. In one step, the inputs are converted from one-hot encoding to scalar values via tf.squeeze(tf.argmax(input)), which is then converted to a simple integer used to access list indices. I suspect that since this step reduces a tensor to a primitive value, TensorFlow is unable to track how the tensor is modified in a single execution of the model, which breaks the preprocessing steps that bring the performance benefits of disabling run_eagerly. For the record, the loss function works fine with run_eagerly enabled, as printing shows, and is correctly minimised after a few epochs to only 2 or so (neglecting losses from categorical crossentropy).</p>
<p>Loss function:</p>
<pre><code>class IntervalPenaltyLayer(keras.layers.Layer):
def __init__(self, intelligible_intervals, pitch_mapping, **kwargs):
self._intelligible_intervals = intelligible_intervals
self._pitch_mapping = pitch_mapping
super().__init__(**kwargs)
def call(self, inputs, outputs):
factor = 5
loss = 0
input_current_tone_batch = inputs["tone_0"][:, -1, :]
batch_size = input_current_tone_batch.shape[0]
if batch_size is None: # Check if it is still compiling
return 0
input_next_tone_batch = inputs["tone_1"][:, -1, :]
input_pitch_batch = inputs["pitch"][:, -1, :]
output_pitch_batch = outputs["pitch"] # Output no need
for i in range(batch_size):
current_tone_tensor = input_current_tone_batch[i, :]
next_tone_tensor = input_next_tone_batch[i, :]
current_pitch_tensor = input_pitch_batch[i, :]
predicted_pitch_tensor = output_pitch_batch[i, :]
current_tone_idx = int(tf.squeeze(tf.argmax(current_tone_tensor, axis=-1)))
next_tone_idx = int(tf.squeeze(tf.argmax(next_tone_tensor, axis=-1)))
current_pitch_idx = int(tf.squeeze(tf.argmax(current_pitch_tensor, axis=-1)))
current_pitch = self._pitch_mapping[current_pitch_idx]
key = str(current_tone_idx) + "_" + str(next_tone_idx)
if key not in self._intelligible_intervals: # Rest or whatever, skipping
continue
possible_intervals = self._intelligible_intervals[key]
for j in tf.range(tf.shape(predicted_pitch_tensor)[0]):
new_pitch = self._pitch_mapping[int(j)]
interval = new_pitch - current_pitch
probability = tf.gather(predicted_pitch_tensor, j)
if interval not in possible_intervals:
loss += factor * probability
return math.log(1 + loss)
</code></pre>
| <python><tensorflow><neural-network><loss-function> | 2023-07-12 17:00:52 | 0 | 357 | Jelly Qwerty |
76,672,739 | 194,707 | Can I use Prisma as the ORM inside of Django instead of Django's ORM? | <p>I'd like to use Django + Django Rest Framework as my backend for a bunch of reasons (maturity, great tooling, security, etc.). I'm curious about using Prisma as the ORM layer instead of Django's ORM. Is there anything technically infeasible with this setup, or are there any major gotchas that may not be obvious that would make this a terrible idea?</p>
| <python><django><django-rest-framework><prisma> | 2023-07-12 16:33:17 | 0 | 1,172 | kevlar |
76,672,548 | 4,220,282 | How to support modified data interpretations in NumPy ndarrays? | <p>I am trying to write a Python 3 class that stores some data in a NumPy <code>np.ndarray</code>. However, I want my class to also contain a piece of information about how to interpret the data values.</p>
<p>For example, let's assume the <code>dtype</code> of the <code>ndarray</code> is <code>np.float32</code>, but there is also a "<strong>color</strong>" that modifies the meaning of those floating-point values. So, if I want to add a <strong>red</strong> number and a <strong>blue</strong> number, I must first convert both numbers to <strong>magenta</strong> in order to legally add their underlying <code>_data</code> arrays. The result of the addition will then have <code>_color = "magenta"</code>.</p>
<p>This is just a toy example. In reality, the "color" is not a string (it's better to think of it as an integer), the "color" of the result is mathematically determined from the "color" of the two inputs, and the conversion between any two "colors" is mathematically defined.</p>
<pre class="lang-py prettyprint-override"><code>class MyClass:
def __init__(self, data : np.ndarray, color : str):
self._data = data
self._color = color
# Example: Adding red numbers and blue numbers produces magenta numbers
def convert(self, other_color):
if self._color == "red" and other_color == "blue":
return MyClass(10*self._data, "magenta")
elif self._color == "blue" and other_color == "red":
return MyClass(self._data/10, "magenta")
def __add__(self, other):
if other._color == self._color:
# If the colors match, then just add the data values
return MyClass(self._data + other._data, self._color)
else:
# If the colors don't match, then convert to the output color before adding
new_self = self.convert(other._color)
new_other = other.convert(self._color)
return new_self + new_other
</code></pre>
<p>My problem is that the <code>_color</code> information lives <strong>alongside</strong> the <code>_data</code>. So, I can't seem to define sensible indexing behavior for my class:</p>
<ul>
<li>If I define <code>__getitem__</code> to return <code>self._data[i]</code>, then the <code>_color</code> information is lost.</li>
<li>If I define <code>__getitem__</code> to return <code>MyClass(self._data[i], self._color)</code> then I'm creating a new object that contains a scalar number. This will cause plenty of problems (for example, I can legally index <code>that_object[i]</code>, leading to a certain error).</li>
<li>If I define <code>__getitem__</code> to return <code>MyClass(self._data[i:i+1], self._color)</code> then I'm indexing an array to get an array, which leads to plenty of other problems. For example, <code>my_object[i] = my_object[i]</code> looks sensible, but would throw an error.</li>
</ul>
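<p>For the narrower problem of keeping a single piece of metadata attached through indexing, NumPy's documented subclassing hook <code>__array_finalize__</code> is one option to consider. A sketch only: it propagates the attribute through views and slices, but the arithmetic "color" rules above would still need custom <code>__add__</code> logic on top.</p>

```python
import numpy as np

class ColoredArray(np.ndarray):
    """ndarray subclass that carries one extra attribute through views/slices."""

    def __new__(cls, data, color=None):
        obj = np.asarray(data).view(cls)
        obj.color = color
        return obj

    def __array_finalize__(self, obj):
        # Runs for every new view/slice; inherit the tag from the source array.
        if obj is not None:
            self.color = getattr(obj, "color", None)

a = ColoredArray([1.0, 2.0, 3.0], color="red")
print(a[1:].color)  # red  (slices keep the tag)
print(type(a[0]))   # integer indexing drops to a plain NumPy scalar
```

<p>Note the last line illustrates the scalar-indexing dilemma from the bullets above: basic integer indexing returns an array scalar, so the tag is lost there even with the subclass.</p>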
<p>I then started thinking that what I really want is a different <code>dtype</code> for each different "color". That way, the indexed value would have the "color" information encoded for free in the <code>dtype</code>... but I don't know how to implement that.</p>
<p>The theoretical total number of "colors" is likely to be roughly 100,000. However, fewer than 100 would be used in any single script execution. So, I guess it may be possible to maintain a list/dictionary/? of the used "colors" and how they map to dynamically generated classes ... but Python tends to quietly convert types in ways I don't expect, so that is probably not the right path to go down.</p>
<p>All I know is that I don't want to store the "color" alongside every data value. The data arrays can be ~billions of entries, with one "color" for all entries.</p>
<p>How can I keep track of this "color" information, while also having a usable class?</p>
| <python><arrays><numpy><numpy-ndarray> | 2023-07-12 16:10:27 | 1 | 946 | Harry |
76,672,449 | 6,077,239 | Polars - speedup by using partition_by and collect_all | <p>Example setup</p>
<p><em>Warning: <strong>5gb</strong> memory df creation</em></p>
<pre><code>import time
import numpy as np
import polars as pl
rng = np.random.default_rng(1)
nrows = 50_000_000
df = pl.DataFrame(
dict(
id=rng.integers(1, 50, nrows),
id2=rng.integers(1, 500, nrows),
v=rng.normal(0, 1, nrows),
v1=rng.normal(0, 1, nrows),
v2=rng.normal(0, 1, nrows),
v3=rng.normal(0, 1, nrows),
v4=rng.normal(0, 1, nrows),
v5=rng.normal(0, 1, nrows),
v6=rng.normal(0, 1, nrows),
v7=rng.normal(0, 1, nrows),
v8=rng.normal(0, 1, nrows),
v9=rng.normal(0, 1, nrows),
v10=rng.normal(0, 1, nrows),
)
)
</code></pre>
<p>I have a simple task on hand as follows.</p>
<pre><code>start = time.perf_counter()
res = (
df.lazy()
.with_columns(
pl.col(f"v{i}") - pl.col(f"v{i}").mean().over("id", "id2")
for i in range(1, 11)
)
.group_by("id", "id2")
.agg((pl.col(f"v{i}") * pl.col("v")).sum() for i in range(1, 11))
.collect()
)
time.perf_counter() - start
# 9.85
</code></pre>
<p>This task above completes in ~10s on a 16-core machine.</p>
<p>However, if I first split/partition the <code>df</code> by <code>id</code> and then perform the same calculation as above and call <code>collect_all</code> and <code>concat</code> at the end, I can get a nearly 2x speedup.</p>
<pre><code>start = time.perf_counter()
res2 = pl.concat(
pl.collect_all(
dfi.lazy()
.with_columns(
pl.col(f"v{i}") - pl.col(f"v{i}").mean().over("id", "id2")
for i in range(1, 11)
)
.group_by("id", "id2")
.agg((pl.col(f"v{i}") * pl.col("v")).sum() for i in range(1, 11))
for dfi in df.partition_by("id", maintain_order=False)
)
)
time.perf_counter() - start
# 5.60
</code></pre>
<p>In addition, if I do the partition by <code>id2</code> instead of <code>id</code>, the time it takes will be even faster ~4s.</p>
<p>I also noticed the second approach (either partition by <code>id</code> or <code>id2</code>) has better CPU utilization rate than the first one. Maybe this is the reason why the second approach is faster.</p>
<p>My question is:</p>
<ol>
<li>Why is the second approach faster, with better CPU utilization?</li>
<li>Shouldn't they be the same in terms of performance, since I think window/group_by operations will always be executed in parallel for each window/group and use as many available resources as possible?</li>
</ol>
| <python><python-polars> | 2023-07-12 15:57:26 | 1 | 1,153 | lebesgue |
76,672,447 | 6,068,731 | Histogram of integer values with correct x-axis ticks and labels | <p>I have integer values from <code>0</code> to <code>n</code> (inclusive) and I would like to plot a histogram with <code>n+1</code> bars with the correct x-axis labels. Here is my attempt.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
# Generate integer values
n = 20
values = np.random.randint(low=0, high=(n+1), size=1000)
# Plot histogram
fig, ax = plt.subplots(figsize=(20, 4))
_, bins, _ = ax.hist(values, bins=n+1, edgecolor='k', color='lightsalmon')
ax.set_xticks(bins)
ax.set_xticklabels(bins.astype(int))
plt.show()
</code></pre>
<p>It is almost correct, but the x-axis looks weird.</p>
<p><a href="https://i.sstatic.net/e8FkF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e8FkF.png" alt="enter image description here" /></a></p>
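<p>For what it's worth, the uneven labels come from <code>hist</code> spreading <code>n+1</code> equal-width bins over the data range, so the integers do not sit at bin centers. A sketch of the usual remedy is to pass explicit half-open edges so each bin is centered on an integer (shown with <code>np.histogram</code>; the same <code>edges</code> array can be passed as <code>bins=</code> to <code>ax.hist</code>):</p>

```python
import numpy as np

n = 20
values = np.random.default_rng(1).integers(0, n + 1, 1000)

# One bin per integer value: edges at -0.5, 0.5, ..., n + 0.5
edges = np.arange(n + 2) - 0.5
counts, _ = np.histogram(values, bins=edges)

print(len(edges), counts.sum())  # 22 1000
# With matplotlib: ax.hist(values, bins=edges) and ax.set_xticks(range(n + 1))
```
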
| <python><numpy><matplotlib><histogram> | 2023-07-12 15:57:10 | 1 | 728 | Physics_Student |
76,672,343 | 1,858,264 | OpenAI API, ChatCompletion and Completion give totally different answers with same parameters. Why? | <p>I'm exploring the usage of different prompts on gpt3.5-turbo.</p>
<p>Investigating the differences between "ChatCompletion" and "Completion", some references say that they should be more or less the same, for example: <a href="https://platform.openai.com/docs/guides/gpt/chat-completions-vs-completions" rel="nofollow noreferrer">https://platform.openai.com/docs/guides/gpt/chat-completions-vs-completions</a></p>
<p>Other sources say, as expected, that ChatCompletion is more useful for chatbots, since you have "roles" (system, user and assistant), so that you can orchestrate things like few-shot examples and/or memory of previous chat messages. While Completion is more useful for summarization, or text generation.</p>
<p>But the difference seems to be much bigger. I can't find references where they explain what is happening under the hood.</p>
<p>The following experiment gives me totally different results, even when using the same model with the same parameters.</p>
<h3>With ChatCompletion</h3>
<pre class="lang-py prettyprint-override"><code>import os
import openai
openai.api_type = "azure"
openai.api_version = "2023-03-15-preview"
openai.api_base = ...
openai.api_key = ...
chat_response = openai.ChatCompletion.create(
engine="my_model", # gpt-35-turbo
messages = [{"role":"user","content":"Give me something intresting:\n"}],
temperature=0,
max_tokens=800,
top_p=0.95,
frequency_penalty=0,
presence_penalty=0,
stop=None)
print(chat_response.choices[0]['message']['content'])
</code></pre>
<p>Result is a fact about a war:</p>
<pre><code>Did you know that the shortest war in history was between Britain and Zanzibar in 1896? It lasted only 38 minutes!
</code></pre>
<h3>With Completion</h3>
<pre class="lang-py prettyprint-override"><code>regular_response = openai.Completion.create(
engine="my_model", # gpt-35-turbo
prompt="Give me something intresting:\n",
temperature=0,
max_tokens=800,
top_p=0.95,
frequency_penalty=0,
presence_penalty=0,
stop=None)
print(regular_response['choices'][0]['text'])
</code></pre>
<p>Result is a python code and some explanation of what it does:</p>
<pre><code> ```
import random
import string
def random_string(length):
return ''.join(random.choice(string.ascii_letters) for i in range(length))
print(random_string(10))
```
Output:
```
'JvJvJvJvJv'
```
This code generates a random string of length `length` using `string.ascii_letters` and `random.choice()`. `string.ascii_letters` is a string containing all ASCII letters (uppercase and lowercase). `random.choice()` returns a random element from a sequence. The `for` loop generates `length` number of random letters and `join()` concatenates them into a single string. The result is a random string of length `length`. This can be useful for generating random passwords or other unique identifiers.<|im_end|>
</code></pre>
<h3>Notes</h3>
<ol>
<li>I'm using the same parameters (temperature, top_p, etc). The only difference is the ChatCompletion/Completion api.</li>
<li>The model is the same in both cases, gpt-35-turbo.</li>
<li>I'm keeping the temperature low so I can get more consistent results.</li>
<li>Other prompts also give totally different answers, like if I try something like "What is the definition of song?"</li>
</ol>
<h3>The Question</h3>
<ul>
<li>Why is this happening?</li>
<li>Shouldn't same prompts give similar results given that they are using the same model?</li>
<li>Is there any reference material where OpenAI explains what it is doing under the hood?</li>
</ul>
| <python><openai-api><chatgpt-api><azure-openai> | 2023-07-12 15:43:12 | 2 | 323 | franfran |
76,672,341 | 6,547,083 | Vue Router with Fast api backend | <p>I'm using Flask with a frontend developed in Vue that has Vue Router inside, so when you go to <code>localhost:80/</code>, Flask sends you the HTML bundle produced by the <code>npm run build</code> command. All the API calls live under the <code>api/</code> route: <code>localhost:80/api/YourCall</code>.</p>
<p>The trick is that the frontend has its own router system; for example, <code>localhost:80/getAll</code> returns the webpage generated by Vue, and that webpage calls <code>localhost:80/api/getAll</code>, which retrieves the data and sends it back.</p>
<p>I achieve this functionality with the following code:</p>
<pre><code>@app.route('/',defaults={'path': ''})
@app.route('/<path:path>')
def index(path):
if path != "" and os.path.exists(app.static_folder + '/' + path):
return send_from_directory(app.static_folder, path)
else:
return send_from_directory(app.static_folder, 'index.html')
</code></pre>
<p>Now, I'm trying to migrate to FastAPI, and I found that <code>localhost:80/getAll</code> returns a 404, because the route is handled by FastAPI before the webpage, so obviously the result is 404. I tried to solve this following the answers <a href="https://stackoverflow.com/questions/65419794/serve-static-files-from-root-in-fastapi/68488252#68488252">here</a> and <a href="https://stackoverflow.com/questions/65916537/a-minimal-fastapi-example-loading-index-html">here</a> with no results. I also tried the code from <a href="https://stackoverflow.com/questions/64493872/how-do-i-serve-a-react-built-front-end-on-a-fastapi-backend">this answer</a></p>
<pre><code>from fastapi.staticfiles import StaticFiles
class SPAStaticFiles(StaticFiles):
async def get_response(self, path: str, scope):
response = await super().get_response(path, scope)
if response.status_code == 404:
response = await super().get_response('.', scope)
return response
app.mount('/my-spa/', SPAStaticFiles(directory='folder', html=True), name='whatever')
</code></pre>
<p>Which is similar to my previous way of working, but without positive results. Is what I'm trying to do even possible?</p>
| <python><vue.js><flask><vue-router><fastapi> | 2023-07-12 15:42:54 | 0 | 573 | F.Stan |
76,672,255 | 19,675,781 | How to use same X-axis for barplot and heatmap in seaborn | <p>I want to create a plot with multiple subplots involving heatmaps and barplots.
I am having trouble using a shared X axis across the plots.</p>
<p>This is my code:</p>
<pre><code>fig = plt.figure(figsize=(4, 4))
f,(ax0,ax1,ax2) = plt.subplots(3,1,sharex=True,figsize=[3,8],
gridspec_kw={'height_ratios':[1,0.25,1],'width_ratios':[1]})
ax0.get_shared_x_axes().join(ax1,ax2)
g0 = sns.barplot(data=tpm,color='blue',ax=ax0,lw=lw)
g1 = sns.heatmap(prep(sex),cmap=sex_clr,cbar=False,ax=ax1,linewidths=lw,linecolor='white')
g1.set(xlabel='',ylabel='')
g3 = sns.barplot(data=tmb_val,color='blue',ax=ax2,lw=lw)
</code></pre>
<p>This is my output:</p>
<p><a href="https://i.sstatic.net/EcJ7h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EcJ7h.png" alt="enter image description here" /></a></p>
<p>Here I set <code>sharex=True</code> and also used <code>get_shared_x_axes().join()</code>, but nothing worked.
Can anyone help me with this?</p>
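For reference, a minimal runnable sketch (with made-up bar data and a non-interactive backend) showing that `sharex=True` alone links the axes -- no `get_shared_x_axes().join()` call (deprecated in recent Matplotlib) is needed, and the separate `plt.figure()` call before `plt.subplots()` only creates a stray empty figure:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, for illustration only
import matplotlib.pyplot as plt

# sharex=True already links the x-axis of all three subplots
f, (ax0, ax1, ax2) = plt.subplots(
    3, 1, sharex=True, figsize=[3, 8],
    gridspec_kw={'height_ratios': [1, 0.25, 1]})

ax0.bar([0, 1, 2], [3.0, 1.0, 2.0])      # placeholder for the barplot data
ax2.set_xlim(-0.5, 2.5)                  # changing one shared axis...
print(ax0.get_xlim() == ax2.get_xlim())  # ...moves the others too: True
```

One caveat to verify with real data: `sns.heatmap` draws category *i* as a cell spanning x = i to i+1 (centre i + 0.5), while `sns.barplot` puts it at x = i, so even correctly shared axes can look shifted by half a cell.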
| <python><seaborn><bar-chart><heatmap> | 2023-07-12 15:32:52 | 0 | 357 | Yash |
76,672,126 | 7,175,247 | Reply to reviews using Google my business | <p>I want to publish review replies on google my business page.</p>
<p>Code:</p>
<pre><code>import google.auth.transport.requests
from googleapiclient.discovery import build
from google.oauth2 import service_account
# Load credentials from JSON file
credentials = service_account.Credentials.from_service_account_file(
'///credentials.json')
scoped_credentials = credentials.with_scopes(["https://www.googleapis.com/auth/business.manage", "https://www.googleapis.com/auth/plus.business.manage"])
DISCOVERY_DOC = "https://developers.google.com/static/my-business/samples/mybusiness_google_rest_v4p9.json"
# Build the Google My Business API service
service = build('mybusiness', 'v4', credentials=scoped_credentials, discoveryServiceUrl = DISCOVERY_DOC)
# Define the business account ID and review ID
account_id = '*******'
review_id = '*******'
location_id = *****
# Define the reply text
reply_text = 'Hi! Thank you so much for your positive feedback!'
# Send the review reply
request = service.accounts().locations().reviews().updateReply(
name=f'accounts/{account_id}/locations/{location_id}/reviews/{review_id}',
body={'comment': reply_text}
)
response = request.execute()
# Print the response
print(response)
</code></pre>
<p>Below are the service account credentials
<strong>credentials.json</strong></p>
<pre><code>{
"type": "service_account",
"project_id": "*******",
"private_key_id": "####****######",
"private_key": "-----BEGIN PRIVATE KEY-----*****-----END PRIVATE KEY-----\n",
"client_email": "****@**.iam.gserviceaccount.com",
"client_id": "#############",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "##########.iam.gserviceaccount.com",
"universe_domain": "googleapis.com"
}
</code></pre>
<p>Please help us understand why this error occurs. I don't see anything wrong; I followed the documentation correctly.
I have tried replacing '<strong>mybusinessbusinessinformation</strong>' and '<strong>v1</strong>' with '<strong>mybusiness</strong>' and '<strong>v4</strong>' respectively, but still no success.
I have even tried sending the reply using the API key, but that doesn't work either.</p>
<p><strong>Update</strong></p>
<p>I missed the discovery file, now added it. But now the execute function doesn't work. If I use execute() it gives below error else works fine but no output.</p>
<p>Error:</p>
<pre><code>An error occurred: <HttpError 404 when requesting https://mybusiness.googleapis.com/v4/accounts/****/locations/****/reviews/***/reply?alt=json returned "Requested entity was not found.". Details: "Requested entity was not found.">
</code></pre>
<p>I have tried to solve using Outh2 access token:</p>
<pre><code>import requests
import json
access_token = "ya29.******"
def reply_to_review(account_id, review_id, reply_text, location_id):
endpoint = 'https://mybusiness.googleapis.com/v4/accounts/{}/locations/{}/reviews/{}/reply'.format(
account_id, location_id, review_id)
headers = {
'Content-Type': 'application/json',
'Authorization': 'Bearer {}'.format(access_token)
}
payload = {
'comment': reply_text
}
try:
response = requests.put(endpoint, headers=headers, json=payload)
response.raise_for_status()
print('Reply submitted successfully.')
except requests.exceptions.HTTPError as error:
print(f'Error: {error}')
except Exception as e:
print(f'An error occurred: {str(e)}')
# Example usage:
account_id = '*****'
review_id = '*****'
reply_text = 'Hi jose! Thank you for the review'
location_id = '*******'
reply_to_review(account_id, review_id, reply_text, location_id)
</code></pre>
<p>Still getting the same error:</p>
<pre><code>Error: 404 Client Error: Not Found for url: https://mybusiness.googleapis.com/v4/accounts/****/locations/****/reviews/****/reply
</code></pre>
| <python><google-cloud-platform><google-api> | 2023-07-12 15:16:48 | 2 | 814 | Nagesh Singh Chauhan |
76,672,074 | 3,710,004 | How to use concat in a for loop without using append? | <p>I have the following code that uses a for loop to iteratively create dataframes and append them to one large dataframe. It works, but I get this warning: "FutureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead."</p>
<pre><code>
district_list = [
{'url':'blabla.com', 'name':'Montgomery County Board of Education', 'state':'MD'},
{'url':'blabla.org', 'name':'Cabarrus County Schools', 'state':'NC'},
{'url':'blabla.k12.us', 'name':'Mariposa County Unified', 'state':'CA'}]
districts_df = pd.DataFrame()
for district in district_list:
url = district['url']
district_name = district['name']
state = district['state']
df = pd.DataFrame([{'url': url,
'district_name': district_name,
'state': state}])
districts_df = districts_df.append(df, ignore_index=True)
districts_df
</code></pre>
<p>I redid my code so that it uses <code>concat</code>. However, the code still uses <code>append</code>. I don't understand -- what is the point of switching to <code>concat</code>, and making my code more complicated, when I still have to use <code>append</code> anyway? Why am I no longer getting the warning? Is there another way I should do this that doesn't use <code>append</code> at all?</p>
<pre><code>districts_df = []
for district in district_list:
url = district['url']
district_name = district['name']
state = district['state']
df = pd.DataFrame([{'url': url,
'district_name': district_name,
'state': state}])
districts_df.append(df)
districts_df = pd.concat(districts_df)
districts_df
</code></pre>
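For reference, a sketch of an alternative: the `append` in the loop above is plain `list.append` (not the deprecated `DataFrame.append`), which is exactly the recommended pattern -- but for a list of flat dicts like this one you can skip the loop and the per-row DataFrames entirely (column names here mirror the snippet above):

```python
import pandas as pd

district_list = [
    {'url': 'blabla.com', 'name': 'Montgomery County Board of Education', 'state': 'MD'},
    {'url': 'blabla.org', 'name': 'Cabarrus County Schools', 'state': 'NC'},
    {'url': 'blabla.k12.us', 'name': 'Mariposa County Unified', 'state': 'CA'},
]

# One DataFrame straight from the list of dicts -- no loop, no append, no concat
districts_df = pd.DataFrame(district_list).rename(columns={'name': 'district_name'})
print(districts_df.shape)  # (3, 3)
```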
| <python><pandas><concatenation> | 2023-07-12 15:10:00 | 0 | 686 | user3710004 |
76,671,987 | 21,404,794 | Adding multiple rows to newly created columns in a pandas dataframe | <p>I'm using pandas to store the results of a machine learning model, and I have a dataframe that stores the input data. I want to extend that dataframe with the two outputs that the model returns, but I don't know how to do it.</p>
<p>I've tried doing somethin like this:</p>
<pre class="lang-py prettyprint-override"><code>import pandas
df = pd.DataFrame({'col1':[1,2,3,4,5], 'col2':[1,2,3,4,5]})
df[['col3', 'col4']] = [[1,2,3,4,5],[1,2,3,4,5]]
</code></pre>
<p>But it throws an error</p>
<pre class="lang-py prettyprint-override"><code>Exception has occurred: ValueError
Columns must be same length as key
File "D:\InSilicoOP-FUAM\In Silico OP\src\pruebas.py", line 20, in <module>
df[['col3', 'col4']] = [[1,2,3,4,5],[1,2,3,4,5]]
ValueError: Columns must be same length as key
</code></pre>
<p>I've also tried with
<code>df['col3', 'col4'] = [1,2,3,4,5],[1,2,3,4,5]</code> and <code>df['col3', 'col4'] = [[1,2,3,4,5],[1,2,3,4,5]]</code> and those throw <code>ValueError: Length of values (2) does not match length of index (5)</code></p>
<p>I know I can assign each column separatedly, like so</p>
<pre class="lang-py prettyprint-override"><code>df['col3'] = [1,2,3,4,5]
</code></pre>
<p>But then I'd have to separate the results from the model (which is a big problem on it's own...)</p>
<p>Is there a way to assign multiple columns at once?</p>
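For what it's worth, a sketch of one way this can work on recent pandas versions: the multi-column assignment expects one inner sequence per *row*, so transposing the per-column results first makes the shapes line up:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': [1, 2, 3, 4, 5], 'col2': [1, 2, 3, 4, 5]})
model_outputs = [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]  # one list per new column

# Shape (2, 5) -> (5, 2): now the 2 keys match the 2 columns of the value
df[['col3', 'col4']] = np.array(model_outputs).T
print(df.shape)  # (5, 4)
```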
| <python><pandas><dataframe> | 2023-07-12 15:01:47 | 4 | 530 | David Siret Marqués |
76,671,775 | 6,224,975 | Slice ndarray with indexes defined in another ndarray | <p>Say I have the following:</p>
<pre class="lang-py prettyprint-override"><code>a = np.array([[1,2,3],[4,5,6]])
idx = np.array([[0,2],[1,2]])
</code></pre>
<p>Is there a way to "numpy-slice" the array such that I use the i-th row from <code>idx</code> as index for the i-th row from <code>a</code> i.e <code>a[i,idx[i]]</code>?</p>
<p>This can easily be done by a loop:</p>
<pre class="lang-py prettyprint-override"><code>np.vstack([a[i, idx[i]] for i in range(len(a))])
</code></pre>
<p>But I thought there must be a numpy way of doing it.</p>
<p>The end-result should be:</p>
<pre class="lang-py prettyprint-override"><code>[[1,3],
[5,6]]
</code></pre>
<p>Doing <code>a[idx]</code> does not work. If I do <code>a[:,idx]</code> then I get too many results i.e that gives</p>
<pre class="lang-py prettyprint-override"><code>array([[[1, 3],
[2, 3]],
[[4, 6],
[5, 6]]])
</code></pre>
<p>which is close, but not correct.</p>
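A sketch of two equivalent vectorized forms (both reproduce the loop result for the arrays above): broadcasting a column of row indices against `idx`, or `np.take_along_axis`, which exists for exactly this row-wise gather:

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])
idx = np.array([[0, 2], [1, 2]])

# (2, 1) row indices broadcast against the (2, 2) column indices
out = a[np.arange(len(a))[:, None], idx]
# Purpose-built equivalent
out2 = np.take_along_axis(a, idx, axis=1)
print(out)  # [[1 3]
            #  [5 6]]
```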
| <python><numpy> | 2023-07-12 14:40:58 | 2 | 5,544 | CutePoison |
76,671,774 | 13,520,358 | Could not open requirements file in github actions | <p>I am trying to set up a GitHub workflow (first time)</p>
<p>but keep getting this error</p>
<pre><code>/opt/hostedtoolcache/Python/3.10.12/x64/bin/python -m pip install -r requirements.txt
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
Error: ERROR: Action failed during dependency installation attempt with error: The process '/opt/hostedtoolcache/Python/3.10.12/x64/bin/python' failed with exit code 1
</code></pre>
<p>You can view my directory structure <a href="https://github.com/codewithnick/searchenginepy" rel="nofollow noreferrer">here</a>,
and here is my GitHub workflow file:</p>
<pre><code>name: testing
on: [push,pull_request]
run-name: running tests for searchenginepy
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: setup python
uses: actions/setup-python@v4
with:
python-version: '3.10' # install the python version needed
- name: Install dependencies
uses: py-actions/py-dependency-install@v4
with:
path: "requirements.txt"
- name: run tests # run main.py
working-directory: tests
run: python -m unittest discover
</code></pre>
<p>I have tried</p>
<blockquote>
<p>pip install .
pip install -r .github/workflows/requirements.txt</p>
</blockquote>
<p>I am also not sure about the unittest step, because it needs to be run in the <code>tests</code> folder.</p>
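For reference, one thing the workflow above never does is check out the repository, so the runner's workspace contains no <code>requirements.txt</code> at all. A sketch of the job with a checkout step added (the `discover -s tests` flag replaces the `working-directory` setting; adjust to your layout):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: check out repository   # without this the workspace is empty
        uses: actions/checkout@v3
      - name: setup python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      - name: Install dependencies
        uses: py-actions/py-dependency-install@v4
        with:
          path: "requirements.txt"
      - name: run tests
        run: python -m unittest discover -s tests
```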
| <python><github><github-actions> | 2023-07-12 14:40:55 | 1 | 367 | Nikhil Singh |
76,671,760 | 4,174,701 | Windows Scheduler - display error on python script failure | <p>I have a similar issue to <a href="https://stackoverflow.com/questions/43754488/throw-error-on-task-scheduler-when-python-script-throws-error">the one in this thread</a> - I have a Python script which I'd like to run and, should it fail (by calling <code>exit(<value different from 0>)</code>), display an error in the Windows Scheduler Task History window.
From the comments I understood that it actually is doable somehow - you can get the Python process exit code. I'm wondering: can I somehow make the scheduler display non-zero return codes as failures?</p>
<p>I tried running a <code>.bat</code> file wrapping the Python script and setting the error code (<code>exit /b 123</code>), but again - this does not solve the issue.</p>
<p><a href="https://i.sstatic.net/pLxVt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pLxVt.png" alt="last run result" /></a></p>
<p><a href="https://i.sstatic.net/VpXj2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VpXj2.png" alt="error instead of information" /></a></p>
| <python><windows><windows-task-scheduler> | 2023-07-12 14:39:05 | 0 | 760 | michelson |
76,671,746 | 2,160,936 | How can I initialise a variable type array in python with values from an external config file? | <p>I have a file with some configuration:</p>
<pre><code>[ERU]
refreschtime = 15
forwardToA = test@gmail.com
forwardToB = test1@gmail.com, test2@gmail.com
</code></pre>
<p>Now I want to use <code>forwardToB</code> as an array instead of a single string, so I can iterate over the array members:</p>
<pre><code>for recipient in recipients:
log.info(recipient)
to_recipients.append(Mailbox(email_address=recipient))
</code></pre>
<p>The script works fine for a single recipient. However, when I try to pass a list of recipients it fails, as it takes the whole list as a single item.</p>
<p>This is how I import the config into the script:</p>
<pre><code> try:
forwardToB = [config.get('ERU', 'forwardToB')]
except configparser.NoOptionError:
log.critical('no forwardToB specified in configuration file')
</code></pre>
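For reference, a minimal sketch of one way to get a real list out of `configparser` (the `converters` argument adds a `getlist()` accessor; the config text mirrors the file above):

```python
import configparser

# A 'list' converter gives the parser and every section a getlist() accessor
config = configparser.ConfigParser(
    converters={'list': lambda v: [item.strip() for item in v.split(',')]})
config.read_string("""
[ERU]
refreschtime = 15
forwardToA = test@gmail.com
forwardToB = test1@gmail.com, test2@gmail.com
""")

forwardToB = config.getlist('ERU', 'forwardToB')
print(forwardToB)  # ['test1@gmail.com', 'test2@gmail.com']
```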
| <python><arrays> | 2023-07-12 14:37:20 | 1 | 519 | Sallyerik |
76,671,743 | 6,817,178 | Pika ConnectionResetError(104, 'Connection reset by peer') having 2 connections using threads | <p>I am running into the following error with pika</p>
<pre><code>pika.exceptions.StreamLostError: Stream connection lost: ConnectionResetError(104, 'Connection reset by peer')
</code></pre>
<p>I've read on Stack Overflow and other sources that the origin of this error is missed heartbeats and that threading should be the solution. I've also seen from this <a href="https://www.youtube.com/watch?v=nxQrpLfX3rs&t=397s" rel="nofollow noreferrer">talk</a> that it is recommended to have 2 connections: one for consuming and one for producing. From the pika repository I found <a href="https://github.com/pika/pika/blob/1.0.1/examples/basic_consumer_threaded.py" rel="nofollow noreferrer">this</a> solution for creating threads to do the workload without missing heartbeats. From the <a href="https://pika.readthedocs.io/en/latest/faq.html" rel="nofollow noreferrer">pika FAQ</a> I also know that pika is not thread-safe and thus every connection needs to run in its own thread.</p>
<p>Based on that I designed my service to have a Consumer and a Producer object which run in their own thread and the consumer thread creates a new thread for "doing the work".</p>
<pre class="lang-py prettyprint-override"><code>producer = Producer()
consumer = Consumer(producer, mgr)
producer.start()
consumer.start()
</code></pre>
<pre class="lang-py prettyprint-override"><code>class Producer(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
self.message_queue = Queue()
def run(self):
self.createConnection()
# declare your exchanges and bindings here
self.channel.exchange_declare(exchange=self.exchange, exchange_type='topic')
# sleep 5 seconds
time.sleep(5)
self.add_message(self.exchange, 'hello.example', {'count': 0})
while True:
if not self.message_queue.empty():
message = self.message_queue.get()
self.publish(message['exchange'], message['routing_key'], message['body'])
</code></pre>
<pre class="lang-py prettyprint-override"><code>class Consumer(threading.Thread):
def __init__(self, producer, mgr):
threading.Thread.__init__(self)
def run(self):
self.createConnection()
# declare your exchanges and bindings here
self.channel.exchange_declare(exchange=self.exchange, exchange_type='topic')
# example queue declaration
result = self.channel.queue_declare(queue='', exclusive=True)
queue_name = result.method.queue
self.channel.queue_bind(exchange=self.exchange, queue=queue_name, routing_key='*.example')
self.channel.basic_qos(prefetch_count=1)
threads = []
on_message_callback = functools.partial(self.on_message, args=(self.connection, threads))
self.channel.basic_consume(queue=queue_name, on_message_callback=on_message_callback, auto_ack=False)
logging.info('Start listening to messages... on exchange: {}'.format(self.exchange))
self.channel.start_consuming()
# Wait for all to complete
for thread in threads:
thread.join()
</code></pre>
<p><a href="https://github.com/RaphaelHippe/rabbitmq-producer-consumer" rel="nofollow noreferrer">Here</a> is the full example of my service implementation. It is not my exact service implementation, but the general structure and idea are the same. Locally and in my Docker setup the service has, so far, run fine most of the time. But deployed on my test servers it often runs into the connection reset error.</p>
<p>Are there any flaws with my service implementation? Or could the issue be with my RabbitMQ configuration? (I am using the RabbitMQ Bitnami 3.11.8-0 image, which runs on an AWS EC2 instance. I have not changed any of the default configuration.)</p>
<p>I am a bit stuck and not quite sure what to try next. So I am grateful for any pointers or suggestions.</p>
<p>Thanks!</p>
| <python><multithreading><rabbitmq><pika><connection-reset> | 2023-07-12 14:36:48 | 0 | 4,935 | Raphael Hippe |
76,671,660 | 12,243,638 | Fillna row wise as per change in another column | <p>I have a data frame in which there is a column containing several NaN values. The dataframe looks like this:</p>
<pre><code> col_1 col_2
2022-10-31 99.094 102.498
2022-11-30 99.001 101.880
2022-12-31 NaN 108.498
2023-01-31 NaN 100.500
</code></pre>
<p>I want to fill those NaN based on the simple calculation below:</p>
<pre><code>desired_val = (current value in col_2 - previous value in col_2) + previous value in col_1
</code></pre>
<p>which means,</p>
<p><code>df.loc['2022-12-31', 'col_1']</code> should be = (108.498 - 101.880) + 99.001 = 105.619</p>
<p>and <code>df.loc['2023-01-31', 'col_1']</code> should be = (100.500 - 108.498) + 105.619 = 97.621</p>
<p>I found a solution using a row-by-row operation, but it is slow when the dataset is big.</p>
<pre><code>for row in df.index:
    if pd.isna(df.loc[row, 'col_1']):
        df.loc[row, 'col_1'] = (
            (df['col_2'] - df['col_2'].shift(1))
            + df['col_1'].shift(1)
        ).loc[row]
</code></pre>
<p>Is there any column wise pandas solution for that?</p>
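For reference, a sketch of a vectorized form (worth verifying against real data): because each filled value is the previous `col_1` plus the `col_2` difference, the quantity `col_1 - col_2` stays constant across every NaN run, so forward-filling that difference reconstructs the column in one shot:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {'col_1': [99.094, 99.001, np.nan, np.nan],
     'col_2': [102.498, 101.880, 108.498, 100.500]},
    index=['2022-10-31', '2022-11-30', '2022-12-31', '2023-01-31'])

# col_1 - col_2 is constant over each NaN run -> ffill it, then add col_2 back
df['col_1'] = (df['col_1'] - df['col_2']).ffill() + df['col_2']
print(df['col_1'].round(3).tolist())  # [99.094, 99.001, 105.619, 97.621]
```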
| <python><pandas> | 2023-07-12 14:27:40 | 1 | 500 | EMT |
76,671,641 | 567,059 | Make pytest 'capsys' fixture treat stdout the same regardless of whether or not the -s option is used | <p>I'm using <code>pytest</code> with the <code>capsys</code> fixture to check that text output to <code>stdout</code> is as expected.</p>
<p>My issue is that when I run tests using the <code>-s</code> option, the assertion passes. But when I don't use <code>-s</code>, the assertion fails because some of the text in <code>stdout</code> is wrapped, meaning the actual text is no longer the same as the expected text.</p>
<p>Is there a way of making the <code>capsys</code> fixture capture <code>stdout</code> every time in the same way as when <code>-s</code> is used?</p>
<h3>Example code</h3>
<pre class="lang-py prettyprint-override"><code>import argparse
import pytest
class Args:
class _HelpFormatter(argparse.HelpFormatter):
def __init__(self, prog: str) -> None:
super().__init__(prog, max_help_position=50)
def __init__(self) -> None:
global_parser = argparse.ArgumentParser(add_help=False, prog='wwaz',
formatter_class=self._HelpFormatter)
global_group = global_parser.add_argument_group('Global Options')
global_group.add_argument('--help', action='help',
help='Show this help message and exit.')
global_group.add_argument('--verbose', action='store_true',
default=False, help='Whether to output verbose logs. Also whether to show verbose test output. (default: False)')
self.parser = argparse.ArgumentParser(add_help=False, prog='wwaz',
formatter_class=self._HelpFormatter, parents=[global_parser],
description='Run routine Azure actions.')
@classmethod
def parse(self, args: list=None) -> argparse.Namespace:
return Args().parser.parse_args(args)
@pytest.fixture(scope='module')
def expected():
return (
'usage: wwaz [--help] [--verbose]'
'\n\nRun routine Azure actions.'
'\n\nGlobal Options:'
'\n --help Show this help message and exit.'
'\n --verbose Whether to output verbose logs. Also whether to show verbose test output. (default: False)\n'
)
help_args = [pytest.param(['--help'], id='--help')]
@pytest.mark.parametrize('input_args', help_args)
def test_stdout(capsys, expected, input_args):
capsys.readouterr()
with pytest.raises(SystemExit):
Args.parse(input_args)
actual, _ = capsys.readouterr()
assert actual == expected
</code></pre>
<h3>Successful test run</h3>
<p>Assuming the file containing the example code is ~/src/test_argparse.py -</p>
<pre class="lang-bash prettyprint-override"><code>user@computer:~/src$ pytest test_argparse.py -q -s
.
1 passed in 0.00s
</code></pre>
<h3>Unsuccessful test run</h3>
<p>Assuming the file containing the example code is ~/src/test_argparse.py -</p>
<pre class="lang-bash prettyprint-override"><code>user@computer:~/src$ pytest test_argparse.py -q
F [100%]
============================================================================================= FAILURES =============================================================================================
_______________________________________________________________________________________ test_stdout[--help] ________________________________________________________________________________________
capsys = <_pytest.capture.CaptureFixture object at 0x7f6edf733520>
expected = 'usage: wwaz [--help] [--verbose]\n\nRun routine Azure actions.\n\nGlobal Options:\n --help Show this help message and exit.\n --verbose Whether to output verbose logs. Also whether to show verbose test output. (default: False)\n'
input_args = ['--help']
@pytest.mark.parametrize('input_args', help_args)
def test_stdout(capsys, expected, input_args):
capsys.readouterr()
with pytest.raises(SystemExit):
Args.parse(input_args)
actual, _ = capsys.readouterr()
> assert actual == expected
E AssertionError: assert 'usage: wwaz ...ult: False)\n' == 'usage: wwaz ...ult: False)\n'
E Skipping 192 identical leading characters in diff, use -v to show
E - rbose test output. (default: False)
E + rbose test
E + output. (default: False)
test_argparse.py:51: AssertionError
===================================================================================== short test summary info ======================================================================================
FAILED test_argparse.py::test_stdout[--help] - AssertionError: assert 'usage: wwaz ...ult: False)\n' == 'usage: wwaz ...ult: False)\n'
1 failed in 0.01s
</code></pre>
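For reference, the wrapping difference is consistent with `argparse` reading the terminal width via `shutil.get_terminal_size()`, which honours the `COLUMNS` environment variable -- with `-s`, stdout is attached differently and a different width is detected. A sketch of pinning the width (assuming this is the cause; in a test you would typically use `monkeypatch.setenv('COLUMNS', '500')` before calling `Args.parse` instead of mutating `os.environ` directly):

```python
import argparse
import os
import shutil

os.environ['COLUMNS'] = '500'   # argparse derives its help width from this
print(shutil.get_terminal_size().columns)  # 500

parser = argparse.ArgumentParser(prog='wwaz', add_help=False)
parser.add_argument('--verbose', action='store_true',
                    help='Whether to output verbose logs. Also whether to show '
                         'verbose test output. (default: False)')
help_text = parser.format_help()
# At width 500 the long help line is never wrapped:
print('Also whether to show verbose test output.' in help_text)  # True
```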
| <python><pytest> | 2023-07-12 14:25:42 | 1 | 12,277 | David Gard |
76,671,547 | 4,659,530 | PySpark schema nullable not updated after filtering | <p>Imagine I have a df with a column which can be null. If I apply an operation that filters out the nulls, shouldn't Spark show the dtype as not nullable?</p>
<pre class="lang-py prettyprint-override"><code>df = spark.createDataFrame(
[("a", 1), (None, 2)],
StructType(
[
StructField("col_a", nullable=True, dataType=StringType()),
StructField("col_b", nullable=False, dataType=IntegerType()),
]
),
)
</code></pre>
<pre class="lang-bash prettyprint-override"><code>> df.printSchema()
root
|-- col_a: string (nullable = true)
|-- col_b: integer (nullable = false)
</code></pre>
<pre class="lang-bash prettyprint-override"><code>> df.filter(F.col("col_a").isNotNull()).printSchema()
root
|-- col_a: string (nullable = true)
|-- col_b: integer (nullable = false)
</code></pre>
| <python><apache-spark><pyspark><apache-spark-sql> | 2023-07-12 14:15:50 | 0 | 2,405 | Rahul Kumar |
76,671,395 | 7,447,976 | Keeping the virtual size of drop down menus unchanged in Dash - Python | <p>I have multiple drop-down menus with tens of options to select. As the user selects more and more options, the drop-down menu gets larger. Is there a way to keep the size unchanged and show the selections in a more compact way? I set the max-width parameters in the style option, but it does not work as intended.</p>
<pre><code>from dash import Dash, dcc, html
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = Dash(__name__, external_stylesheets=external_stylesheets)
app.layout = html.Div([
html.Div(className="row", children=[
html.Div(className='row', children=[
html.Div(className='four columns', children=[
dcc.Dropdown(
options=[
{'label': 'North America', 'value': 'NA'},
{'label': 'Europe', 'value': 'EU'},
{'label': 'Asia', 'value': 'AS'}
],
multi=True
)
]),
html.Div(className='four columns', children=[
dcc.Dropdown(
options=[
{'label': 'United States', 'value': 'USA'},
{'label': 'Canada', 'value': 'CAN'},
{'label': 'France', 'value': 'FRA'},
{'label': 'Germany', 'value': 'GER'},
{'label': 'China', 'value': 'CHN'},
{'label': 'India', 'value': 'IND'}
],
multi=True
)
]),
html.Div(className='four columns', children=[
dcc.Dropdown(
options=[
{'label': 'New York City', 'value': 'NYC'},
{'label': 'Montreal', 'value': 'MTL'},
{'label': 'San Francisco', 'value': 'SF'},
{'label': 'London', 'value': 'LDN'},
{'label': 'Tokyo', 'value': 'TKY'},
{'label': 'Mumbai', 'value': 'MUM'}
],
multi=True
)
])
]),
])
])
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p><a href="https://i.sstatic.net/uNmxH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uNmxH.png" alt="enter image description here" /></a></p>
| <python><plotly-dash><dashboard> | 2023-07-12 13:58:34 | 1 | 662 | sergey_208 |
76,671,389 | 12,177,820 | Why is this SHAP summary plot not showing in databricks? | <p>When I run <code>shap.summary_plot(shap_values.values, X[input_cols])</code> or <code>shap.summary_plot(shap_values, X[input_cols])</code>, Databricks outputs</p>
<p><code><Figure size 576x684 with 2 Axes></code></p>
<p>The code used to work, but the kernel restarted and running the same code no longer produces the plot.
I have an imported pipeline model and a dataframe sampled from an RDD, and I run the following code before the above:</p>
<pre><code>import shap
explainer=shap.TreeExplainer(pipelineModel.stages[2])#gradient boosting model from pyspark pipeline
shap_values=explainer(X,check_additivity=False)#X is a dataframe the model predicts on
</code></pre>
<p>I've printed the shap_values and X rows to verify that they contain the desired data, and I'm able to run predictions on the data using the pipeline model. Why is this code now just producing <Figure size 576x684 with 2 Axes> instead of the actual figure? This problem persists whether matplotlib is used or not, and it shows up in places other than SHAP. I've seen other answers for similar questions, but using <code>matplotlib=True</code> on a summary plot just produces an error.</p>
| <python><amazon-web-services><matplotlib><databricks><shap> | 2023-07-12 13:57:39 | 1 | 397 | DrRaspberry |
76,671,271 | 1,627,585 | Filter and sort CSV data and store as PDF file with page breaks after specific rows | <p>I am using a Python script that imports CSV data, filters and sorts it, converts it to HTML and then PDF. I'd like to find a way to add page breaks after specific rows.</p>
<p>Assume the following example:</p>
<p>The data is sorted by column <code>col1</code>, forming "groups". I'd like to add a page break after every group (new value in <code>col1</code>):</p>
<p><em>Input data (CSV table)</em></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>col1</th>
<th>col2</th>
<th>col3</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>x</td>
<td>a</td>
</tr>
<tr>
<td>A</td>
<td>y</td>
<td>b</td>
</tr>
<tr>
<td>B</td>
<td>x</td>
<td>a</td>
</tr>
<tr>
<td>B</td>
<td>x</td>
<td>a</td>
</tr>
<tr>
<td>B</td>
<td>y</td>
<td>b</td>
</tr>
<tr>
<td>B</td>
<td>x</td>
<td>a</td>
</tr>
<tr>
<td>C</td>
<td>x</td>
<td>a</td>
</tr>
<tr>
<td>C</td>
<td>y</td>
<td>b</td>
</tr>
</tbody>
</table>
</div>
<p><em>Output data (table in PDF)</em></p>
<p>(page breaks added, column headings repeated every page)</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>col1</th>
<th>col2</th>
<th>col3</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>x</td>
<td>a</td>
</tr>
<tr>
<td>A</td>
<td>y</td>
<td>b</td>
</tr>
<tr>
<td>pagebreak</td>
<td></td>
<td></td>
</tr>
<tr>
<td>col1</td>
<td>col2</td>
<td>col3</td>
</tr>
<tr>
<td>B</td>
<td>x</td>
<td>a</td>
</tr>
<tr>
<td>B</td>
<td>x</td>
<td>a</td>
</tr>
<tr>
<td>B</td>
<td>y</td>
<td>b</td>
</tr>
<tr>
<td>B</td>
<td>x</td>
<td>a</td>
</tr>
<tr>
<td>pagebreak</td>
<td></td>
<td></td>
</tr>
<tr>
<td>col1</td>
<td>col2</td>
<td>col3</td>
</tr>
<tr>
<td>C</td>
<td>x</td>
<td>a</td>
</tr>
<tr>
<td>C</td>
<td>y</td>
<td>b</td>
</tr>
</tbody>
</table>
</div>
<p>My workflow briefly looks as follows:</p>
<pre><code>df = pd.read_csv(input_filename, encoding="")
filtered_df = df[some_condition]
filtered_df = filtered_df.sort_values(some_other_condition)
html_table = filtered_df.to_html(index=False)
html_string = html_head + html_something + html_table + html_something_else + html_foot
pdfkit.from_string(html_string, outfile_name, options=pdfkit_options)
</code></pre>
<p>I see the following alternative approaches (but don't have a clue how to implement them yet, and I don't like any of them):</p>
<ol>
<li>Parse the data and add "ghost" lines, carrying no data but some <em>magic string token</em> that can be replaced after the HTML conversion by other HTML magic (table row with specific CSS style?). Feels very hacky.</li>
<li>Split the big table into smaller tables (one for every group - but how?). Convert them to HTML separately and put them back afterwards (using some HTML/CSS magic).</li>
<li>Use some pdfkit option or <code>pandas.DataFrame.to_html</code> option I don't know about.</li>
<li>Use a completely different approach.</li>
</ol>
<p>I don't know all the values <code>col1</code> holds in advance, but it's probably easy to find them out once and reuse them for further processing.</p>
<p>Any help is very much appreciated.</p>
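For reference, a sketch of approach 2 (one table per group), which needs no ghost rows: `groupby` yields one sub-frame per `col1` value, each becomes its own HTML table (so the column headers repeat automatically), and a CSS `page-break-after` div between tables is honoured by wkhtmltopdf/pdfkit in the generated PDF:

```python
import pandas as pd

df = pd.DataFrame({'col1': list('AABBBBCC'),
                   'col2': ['x', 'y', 'x', 'x', 'y', 'x', 'x', 'y'],
                   'col3': ['a', 'b', 'a', 'a', 'b', 'a', 'a', 'b']})

page_break = '<div style="page-break-after: always;"></div>'
# One <table> per group; to_html repeats the column headers in each one
tables = [group.to_html(index=False) for _, group in df.groupby('col1', sort=False)]
html_table = page_break.join(tables)  # break between groups, none after the last
print(html_table.count('<table'), html_table.count('page-break'))  # 3 2
```

The resulting `html_table` string can be dropped into the existing `html_head + ... + html_foot` assembly unchanged before calling `pdfkit.from_string`.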
| <python><pandas><numpy><pdfkit> | 2023-07-12 13:44:54 | 2 | 1,077 | Matthias W. |
76,671,238 | 8,947,333 | How to debug Alembic migrations inside VS Code? | <p>Say I have a migration script like this one:</p>
<pre><code># revision identifiers, used by Alembic.
revision = 'b5c9e0aac59b'
down_revision = '4658d3e26bae'
branch_labels = None
depends_on = None
def upgrade():
print("hello world!")
</code></pre>
<p>And I want to place a breakpoint on the line that contains the <code>print</code> statement.</p>
<p>How can I configure VS Code to launch the command <code>alembic -n sandbox upgrade b5c9e0aac59b</code> (or <code>python -m alembic -n sandbox upgrade b5c9e0aac59b</code>), and use VS Code debugging features?</p>
| <python><visual-studio-code><vscode-debugger><alembic> | 2023-07-12 13:41:09 | 1 | 3,008 | Be Chiller Too |
76,671,228 | 16,027,663 | Find First and Second Occurrence of Value Across Two Columns by Group | <p>I have a df that looks like the one below. It is sorted by Ref1 and Seq.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Ref1</th>
<th>EvnNo</th>
<th>P1</th>
<th>P2</th>
<th>Seq</th>
<th>PP1</th>
<th>PP2</th>
</tr>
</thead>
<tbody>
<tr>
<td>aaaa</td>
<td>0</td>
<td>xxx</td>
<td>yyy</td>
<td>1</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>aaaa</td>
<td>0</td>
<td>xxx</td>
<td>yyy</td>
<td>2</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>aaaa</td>
<td>0</td>
<td>xxx</td>
<td>yyy</td>
<td>3</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>aaaa</td>
<td>0</td>
<td>xxx</td>
<td>yyy</td>
<td>4</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>aaaa</td>
<td>1</td>
<td>xxx</td>
<td>yyy</td>
<td>5</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>aaaa</td>
<td>1</td>
<td>xxx</td>
<td>yyy</td>
<td>6</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>aaaa</td>
<td>1</td>
<td>xxx</td>
<td>yyy</td>
<td>7</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>aaaa</td>
<td>1</td>
<td>xxx</td>
<td>yyy</td>
<td>8</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>bbbb</td>
<td>0</td>
<td>xxx</td>
<td>yyy</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>bbbb</td>
<td>0</td>
<td>xxx</td>
<td>yyy</td>
<td>2</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>bbbb</td>
<td>0</td>
<td>xxx</td>
<td>yyy</td>
<td>3</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>bbbb</td>
<td>0</td>
<td>xxx</td>
<td>yyy</td>
<td>4</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>bbbb</td>
<td>1</td>
<td>xxx</td>
<td>yyy</td>
<td>5</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>bbbb</td>
<td>1</td>
<td>xxx</td>
<td>yyy</td>
<td>6</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>bbbb</td>
<td>1</td>
<td>xxx</td>
<td>yyy</td>
<td>7</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>bbbb</td>
<td>1</td>
<td>xxx</td>
<td>yyy</td>
<td>8</td>
<td>0</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
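<p>For reference, the sample data above can be reproduced with (a sketch; the constant P1/P2 columns are omitted since they do not affect the logic):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Ref1": ["aaaa"] * 8 + ["bbbb"] * 8,
    "EvnNo": ([0] * 4 + [1] * 4) * 2,
    "Seq": list(range(1, 9)) * 2,
    "PP1": [0, 0, 1, 0, 0, 1, 1, 0,  0, 0, 0, 0, 0, 0, 1, 0],
    "PP2": [1, 0, 0, 0, 0, 0, 0, 1,  0, 0, 0, 0, 0, 0, 0, 1],
})
```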
<p>I am trying to work out how to do two things:</p>
<ol>
<li><p>count the first occurrence of a 1 in either PP1 or PP2, grouped by Ref1 and EvnNo. There may be no occurrences or there may be multiple occurrences, but there will never be a 1 in both columns on the same row.</p>
</li>
<li><p>after the first occurrence (if any) count if there is a 1 in the other of PP1 or PP2 in the same group. Eg if the first 1 in a group was in PP1 count if the next occurrence of 1 is in PP2. If the next 1 is also in PP1 it should not be counted. There may be no further occurrences of a 1 in either column.</p>
</li>
</ol>
<p>Output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>P1 First Occ</th>
<th>P2 First Occ</th>
<th>P1 Second Occ</th>
<th>P2 Second Occ</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>1</td>
<td>0</td>
<td>1</td>
</tr>
</tbody>
</table>
</div> | <python><pandas> | 2023-07-12 13:39:54 | 1 | 541 | Andy |
76,671,150 | 11,165,214 | How to make a numpy.arange with incremental step size? | <p>I want to process data, where the acquisition process slows down gradually as the amount of acquired data grows. In order to relate the index values to an estimated real time value, I estimate an increment function that approximates the real time delta between any two neighboring acquisition indices.
Hence, I am looking for an efficient way to generate the array of real times out of the array of increments.
As an example, let's assume a linear function for the increments. Then, any time value t(i) could be calculated recursively as <code>t(i) = t(i-1) + (m*i + b)</code>, for linearly changing time increments <code>m*i + b</code>.</p>
<p>Now, I would like to have a time array instead of the index array. A very convenient way would be something like a numpy.arange function with incremental stepwidths; something like:</p>
<pre><code>np.arange(i_0, i_final, step=[m*i+b])
</code></pre>
<p>Unfortunately, numpy doesn't support this. Is there a ready implementation at hand? If not, the simplest solution would be a for-loop over the index array, but since the arrays could be long, I would rather avoid this way, if there are faster ways.</p>
<p>EDIT:</p>
<p>A very simple example would be</p>
<pre><code>i t dt (from i-1 to i)
0 0
1 1 1
2 5 4
3 11 6
4 19 8
5 29 10
6 41 12
</code></pre>
<p>where the increment step size would be simply dt(i) = 2i for any step from index i-1 to i. (However, the step size is usually non-integer.)</p>
| <python><arrays><numpy><increment> | 2023-07-12 13:31:44 | 1 | 449 | Lepakk |
76,671,100 | 5,852,692 | combining two dict_key instances with preserving their order | <p>I have two dicts with different keys. I would like to combine their keys into a list or similar that I can iterate over. However, the order is important, because in some places of the script I need to preserve it for other calculations via <code>enumerate()</code></p>
<p>Here is a small example of what I am trying to do:</p>
<pre><code>ns.keys()
Out[1]: dict_keys([108])
no.keys()
Out[2]: dict_keys([120, 124, 126, 127, 112, 114, 115, 117, 118, 135, 132, 133, 109, 130, 111, 129, 136])
</code></pre>
<p>I want to iterate over both of them like following:</p>
<pre><code>for key in [ns.keys() | no.keys()]:
print(key)
Out[3]: {129, 130, 132, 133, 135, 136, 108, 109, 111, 112, 114, 115, 117, 118, 120, 124, 126, 127}
</code></pre>
<p>The order is important because, I also want to do following:</p>
<pre><code>for i, key in enumerate([ns.keys() | no.keys()]):
print(i, key)
</code></pre>
<p>I want the order of <code>[ns.keys() | no.keys()]</code> to be first <code>ns.keys()</code> then <code>no.keys()</code>. In this example, it should be:</p>
<pre><code>[108, 120, 124, 126, 127, 112, 114, 115, 117, 118, 135, 132, 133, 109, 130, 111, 129, 136]
</code></pre>
<p>The following works: <code>list(ns.keys()) + list(no.keys())</code>. Any other ideas?</p>
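<p>One hedged alternative: since Python 3.7 dicts preserve insertion order, and since 3.9 the dict union operator keeps the left operand's keys first, so merging the dicts themselves (rather than their key views, whose <code>|</code> produces an unordered set) gives the desired order and also de-duplicates:</p>

```python
# Dummy values; only the key order matters here.
ns = {108: "a"}
no = {k: "b" for k in [120, 124, 126, 127, 112, 114, 115, 117, 118,
                       135, 132, 133, 109, 130, 111, 129, 136]}

ordered_keys = list(ns | no)      # Python 3.9+: dict union preserves order
# Pre-3.9 equivalent: list({**ns, **no})
print(ordered_keys[:3])  # [108, 120, 124]
```
<p>If a key existed in both dicts, it would appear once, at its first (left-hand) position.</p>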
| <python><dictionary><key><union> | 2023-07-12 13:25:36 | 3 | 1,588 | oakca |
76,671,078 | 14,380,704 | Array_Split with grouped string indices | <p>I have a dataframe that I would like to create sub-arrays within (i.e. chunk) based on groups of string values within the index. I've read how you can pass a list of string values as the indices variable in np.array_split, but my scenario is a bit more complicated and I'm unsure on best approach.</p>
<p>From the below table/array, I'd like to have 2 sub-arrays: one array which includes index string values "Alpha" and "Bravo", the second with values "Charlie" and "Delta"</p>
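<p>If the label-to-group mapping is known (assumed hand-written here, since the grouping is arbitrary), one hedged sketch is to select rows per group with pandas instead of <code>np.array_split</code>; the dummy frame below mirrors the example table that follows:</p>

```python
import pandas as pd

# Dummy frame mirroring the example table.
df = pd.DataFrame(
    {"Column1": ["sample"] * 5, "Column2": [12, 15, 16, 18, 19]},
    index=["Alpha", "Bravo", "Charlie", "Delta", "Delta"],
)

# Hypothetical mapping: which sub-array each index label belongs to.
groups = {"Alpha": 0, "Bravo": 0, "Charlie": 1, "Delta": 1}
labels = df.index.map(groups)
sub_arrays = [df[labels == g] for g in sorted(set(labels))]
print(len(sub_arrays))  # 2
```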
<p>Example table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Index</th>
<th style="text-align: center;">Column1</th>
<th style="text-align: right;">Column2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Alpha</td>
<td style="text-align: center;">sample</td>
<td style="text-align: right;">12</td>
</tr>
<tr>
<td style="text-align: left;">Alpha</td>
<td style="text-align: center;">sample</td>
<td style="text-align: right;">13</td>
</tr>
<tr>
<td style="text-align: left;">Alpha</td>
<td style="text-align: center;">sample</td>
<td style="text-align: right;">14</td>
</tr>
<tr>
<td style="text-align: left;">Bravo</td>
<td style="text-align: center;">sample</td>
<td style="text-align: right;">15</td>
</tr>
<tr>
<td style="text-align: left;">Charlie</td>
<td style="text-align: center;">sample</td>
<td style="text-align: right;">16</td>
</tr>
<tr>
<td style="text-align: left;">Charlie</td>
<td style="text-align: center;">sample</td>
<td style="text-align: right;">17</td>
</tr>
<tr>
<td style="text-align: left;">Delta</td>
<td style="text-align: center;">sample</td>
<td style="text-align: right;">18</td>
</tr>
<tr>
<td style="text-align: left;">Delta</td>
<td style="text-align: center;">sample</td>
<td style="text-align: right;">19</td>
</tr>
<tr>
<td style="text-align: left;">Delta</td>
<td style="text-align: center;">sample</td>
<td style="text-align: right;">20</td>
</tr>
<tr>
<td style="text-align: left;">Delta</td>
<td style="text-align: center;">sample</td>
<td style="text-align: right;">21</td>
</tr>
</tbody>
</table>
</div> | <python><dataframe><numpy> | 2023-07-12 13:23:37 | 1 | 307 | 2020db9 |
76,671,047 | 1,813,275 | How to configure tox to use multiple index urls? | <p>Currently in tox, we can add a default index url for python packages. This is done like this:</p>
<pre><code>indexserver =
default = <index url I want to add>
</code></pre>
<p><strong>My query:</strong> Can we add multiple index URLs for searching Python packages in tox?</p>
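<p>As far as I know, tox's (legacy, tox 3) <code>indexserver</code> setting accepts several named servers, but each dependency is then pinned to one server with a <code>:name:</code> prefix rather than searched across all of them; a hedged sketch (URLs are placeholders):</p>

```ini
[tox]
indexserver =
    default = https://pypi.org/simple
    internal = https://my.company.example/simple

[testenv]
deps =
    # resolved via the default index
    requests
    # resolved via the "internal" server
    :internal:my-package
```
<p>Since <code>indexserver</code> was removed in tox 4, the more portable route is letting pip itself search several indexes, e.g. <code>setenv = PIP_EXTRA_INDEX_URL = https://my.company.example/simple</code> in the testenv section (again a placeholder URL).</p>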
| <python><tox> | 2023-07-12 13:19:45 | 0 | 363 | Varun Vijaykumar |
76,671,007 | 13,354,437 | How do I expose imported classes outside of the class | <p>I have different classes that represent records in a database</p>
<p>For example</p>
<pre><code># my_records.py:
class TableARecord:
field1: str
field2: str
class TableBRecord:
field1: str
field2: str
</code></pre>
<p>I have another class that represents the entire DB. I want to give access to the record objects as well without importing them directly, and so I have this</p>
<pre><code># my_db.py:
from my_records import TableARecord, TableBRecord
class DB:
TableARecord = TableARecord
TableBRecord = TableBRecord
def some_functionality():
...
...
</code></pre>
<p>And so I can use this syntax:</p>
<pre><code>a_record = DB.TableARecord()
</code></pre>
<p>And this works well.</p>
<p>My question is specifically about these lines:</p>
<pre><code>TableARecord = TableARecord
TableBRecord = TableBRecord
</code></pre>
<p>Is there a better or more pythonic way to "expose" these classes to the outside?</p>
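<p>A hedged sketch for comparison (record classes inlined so it runs standalone): the class-attribute pattern from the question is already idiomatic; an alternative is grouping the records in a namespace so <code>DB</code> stays focused on behaviour:</p>

```python
from types import SimpleNamespace

class TableARecord:
    field1: str
    field2: str

class TableBRecord:
    field1: str
    field2: str

# Option 1: class attributes, as in the question -- this simply rebinds
# the imported names and is perfectly valid Python.
class DB:
    TableARecord = TableARecord
    TableBRecord = TableBRecord

# Option 2: a dedicated namespace object for the record types.
records = SimpleNamespace(TableARecord=TableARecord, TableBRecord=TableBRecord)

print(DB.TableARecord is TableARecord)       # True
print(records.TableBRecord is TableBRecord)  # True
```
<p>In a multi-file layout, another common idiom is re-exporting at module level: keep <code>from my_records import TableARecord, TableBRecord</code> in <code>my_db.py</code> and add <code>__all__ = ["DB", "TableARecord", "TableBRecord"]</code>, so callers can write <code>from my_db import TableARecord</code> directly.</p>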
| <python> | 2023-07-12 13:15:19 | 0 | 1,182 | Almog-at-Nailo |
76,670,972 | 10,380,409 | Selenium ActionChains doesn't move an element | <p>I'm testing a web app that uses the React-dnd library to provide drag and drop to the user.
The app has a list where the user can change the order of the elements via drag and drop.
I'm writing the tests for the app with Selenium 4.10.0 and Python.
I tried to use <code>ActionChains.drag_and_drop</code>, but it doesn't work (there are several questions about that here on Stack Overflow).
I tried dispatching events with a JS script, and that worked until the app was updated to react-dnd; now the event-dispatching script doesn't work anymore.
So I tried again to use ActionChains with a different approach, something like this:</p>
<pre><code> source = self._web_driver.find_element(By.CSS_SELECTOR, "#list_data_0")
target =self._web_driver.find_element(By.CSS_SELECTOR, "#list_data_1")
#self.drag_and_drop_js(source, target) # this was the tentative with script
target_x = _target.location['x']
target_y = _target.location['y']
print (f"targhet location x={target_x} y={target_y}")
source_x = _source.location['x']
source_y= _source.location['y']
print (f"source location x={source_x} y={source_y}")
print ("start to move")
action.click_and_hold(source).pause(2).move_to_element_with_offset(target, 1, 1).release().perform()
source_x = source.location['x']
source_y= source.location['y']
print (f"source after mouve location x={source_x} y={source_y}")
</code></pre>
<p>The result:</p>
<pre><code>targhet location x=10 y=377
source location x=10 y=281
start to move
source after mouve location x=10 y=281
</code></pre>
<p>As you can see, the element didn't move. On screen and in the console I can see that the element is clicked and held, but not moved.</p>
<p>Can this be a problem related to react-dnd?
Why doesn't the element move?</p>
<p>Thanks for any help.</p>
| <javascript><python><selenium-webdriver><react-dnd> | 2023-07-12 13:11:21 | 1 | 826 | Angelotti |
76,670,968 | 8,389,618 | Not able to push the data into Azure event hub | <p>I am trying to push some sample data into the Azure Event Hub, but I am not able to do so.</p>
<pre><code>import logging

import azure.functions as func
from azure.eventhub import EventData
from azure.eventhub.aio import EventHubProducerClient
from azure.identity.aio import DefaultAzureCredential
def main(myblob: func.InputStream):
logging.info(f"Python blob trigger function processed blob \n" f"Name: {myblob.name}\n" f"Blob Size: {myblob.length} bytes")
event_hub_connection_string = "Endpoint=sb://event-hub-namespace/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=sharedaccesskey"
event_hub_name = "event_hub_name"
producer = EventHubProducerClient.from_connection_string(event_hub_connection_string, eventhub_name=event_hub_name)
# event_data = "this is the first message"
event_data = EventData(b'Hello, Event Hub!')
with producer:
producer.send_batch(event_data)
</code></pre>
<p>I am getting the error below, and I am not sure if I am passing the correct connection string either.</p>
<pre><code>Result: Failure Exception: RuntimeError: There is no current event loop in thread 'ThreadPoolExecutor-1_0'. Stack: File "/azure-functions-host/workers/python/3.9/LINUX/X64/azure_functions_worker/dispatcher.py", line 479, in _handle__invocation_request call_result = await self._loop.run_in_executor( File "/usr/local/lib/python3.9/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) File "/azure-functions-host/workers/python/3.9/LINUX/X64/azure_functions_worker/dispatcher.py", line 752, in _run_sync_func return ExtensionManager.get_sync_invocation_wrapper(context, File "/azure-functions-host/workers/python/3.9/LINUX/X64/azure_functions_worker/extension.py", line 215, in _raw_invocation_wrapper result = function(**args) File "/home/site/wwwroot/event-hub-test/__init__.py", line 15, in main producer = EventHubProducerClient.from_connection_string(event_hub_connection_string, eventhub_name=event_hub_name) File "/home/site/wwwroot/.python_packages/lib/site-packages/azure/eventhub/aio/_producer_client_async.py", line 517, in from_connection_string return cls(**constructor_args) File "/home/site/wwwroot/.python_packages/lib/site-packages/azure/eventhub/aio/_producer_client_async.py", line 181, in __init__ ALL_PARTITIONS: self._create_producer() File "/home/site/wwwroot/.python_packages/lib/site-packages/azure/eventhub/aio/_producer_client_async.py", line 354, in _create_producer handler = EventHubProducer( # type: ignore File "/home/site/wwwroot/.python_packages/lib/site-packages/azure/eventhub/aio/_producer_async.py", line 110, in __init__ self._lock = asyncio.Lock(**self._internal_kwargs) File "/usr/local/lib/python3.9/asyncio/locks.py", line 81, in __init__ self._loop = events.get_event_loop() File "/usr/local/lib/python3.9/asyncio/events.py", line 642, in get_event_loop raise RuntimeError('There is no current event loop in thread %r.'
</code></pre>
| <python><azure><azure-eventhub> | 2023-07-12 13:11:11 | 1 | 348 | Ravi kant Gautam |
76,670,873 | 19,130,803 | Pandas: convert different datatypes to pyarrow datatypes using astype() | <p>I have a dataframe with different datatypes like bool, int, float, datetime, and category.
Currently I am converting:</p>
<pre><code># Earlier to pandas 2.0
1. object -> string
2. object -> datetime[ns] # if date
</code></pre>
<p>With new pandas 2.0 or above, I am trying to use pyarrow datatypes for all fields and saving in <code>parquet</code> format.</p>
<p>We can have below:</p>
<pre><code>int8 -> int8[pyarrow] likewise for other int's type
float16 -> float16[pyarrow] likewise for other float's type
string or object -> string[pyarrow]
eg:
df['col_int'] = df['col_int'].astype('int8[pyarrow]')
</code></pre>
<p>I did not find much on how to convert datetime and category using <code>astype()</code> for below:</p>
<pre><code>1. datetime -> timestamp # if date
2. category -> dictionary
eg:
df['col_date'] = df['col_date'].astype(???)
df['col_dictionary'] = df['col_dictionary'].astype(???)
</code></pre>
<p>Please help.</p>
| <python><pandas> | 2023-07-12 13:00:45 | 1 | 962 | winter |
76,670,856 | 12,906,920 | LangChain ConversationalRetrieval with JSONloader | <p>I modified the data loader of this source code <a href="https://github.com/techleadhd/chatgpt-retrieval" rel="nofollow noreferrer">https://github.com/techleadhd/chatgpt-retrieval</a> for ConversationalRetrievalChain to accept data as JSON.</p>
<p>I created a dummy JSON file that, according to the LangChain documentation, fits the expected JSON structure.</p>
<pre><code>{
"reviews": [
{"text": "Great hotel, excellent service and comfortable rooms."},
{"text": "I had a terrible experience at this hotel. The room was dirty and the staff was rude."},
{"text": "Highly recommended! The hotel has a beautiful view and the staff is friendly."},
{"text": "Average hotel. The room was okay, but nothing special."},
{"text": "I absolutely loved my stay at this hotel. The amenities were top-notch."},
{"text": "Disappointing experience. The hotel was overpriced for the quality provided."},
{"text": "The hotel exceeded my expectations. The room was spacious and clean."},
{"text": "Avoid this hotel at all costs! The customer service was horrendous."},
{"text": "Fantastic hotel with a great location. I would definitely stay here again."},
{"text": "Not a bad hotel, but there are better options available in the area."}
]
}
</code></pre>
<p>The code is :</p>
<pre><code>import os
import sys
import openai
from langchain.chains import ConversationalRetrievalChain, RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import DirectoryLoader, TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.indexes import VectorstoreIndexCreator
from langchain.indexes.vectorstore import VectorStoreIndexWrapper
from langchain.llms import OpenAI
from langchain.vectorstores import Chroma
from langchain.document_loaders import JSONLoader
os.environ["OPENAI_API_KEY"] = 'YOUR_API_KEY_HERE'
# Enable to save to disk & reuse the model (for repeated queries on the same data)
PERSIST = False
query = None
if len(sys.argv) > 1:
query = sys.argv[1]
if PERSIST and os.path.exists("persist"):
print("Reusing index...\n")
vectorstore = Chroma(persist_directory="persist", embedding_function=OpenAIEmbeddings())
index = VectorStoreIndexWrapper(vectorstore=vectorstore)
else:
loader = JSONLoader("data/review.json", jq_schema=".reviews[]", content_key='text') # Use this line if you only need data.json
if PERSIST:
index = VectorstoreIndexCreator(vectorstore_kwargs={"persist_directory":"persist"}).from_loaders([loader])
else:
index = VectorstoreIndexCreator().from_loaders([loader])
chain = ConversationalRetrievalChain.from_llm(
llm=ChatOpenAI(model="gpt-3.5-turbo"),
retriever=index.vectorstore.as_retriever()
)
chat_history = []
while True:
if not query:
query = input("Prompt: ")
if query in ['quit', 'q', 'exit']:
sys.exit()
result = chain({"question": query, "chat_history": chat_history})
print(result['answer'])
chat_history.append((query, result['answer']))
query = None
</code></pre>
<p>Some examples of results are:</p>
<pre><code>Prompt: can you summarize the data?
Sure! Based on the provided feedback, we have a mix of opinions about the hotels. One person found it to be an average hotel with nothing special, another person had a great experience with excellent service and comfortable rooms, another person was pleasantly surprised by a hotel that exceeded their expectations with spacious and clean rooms, and finally, someone had a disappointing experience with an overpriced hotel that didn't meet their expectations in terms of quality.
Prompt: how many feedbacks present in the data ?
There are four feedbacks present in the data.
Prompt: how many of them are positive (sentiment)?
There are four positive feedbacks present in the data.
Prompt: how many of them are negative?
There are three negative feedbacks present in the data.
Prompt: how many of them are neutral?
Two of the feedbacks are neutral.
Prompt: what is the last review you can see?
The most recent review I can see is: "The hotel exceeded my expectations. The room was spacious and clean."
Prompt: what is the first review you can see?
The first review I can see is "Highly recommended! The hotel has a beautiful view and the staff is friendly."
Prompt: how many total texts are in the JSON file?
I don't know the answer.
</code></pre>
<p>I can chat with my data but except for the first answer, all other answers are wrong.</p>
<p>Is there a problem with <code>JSONLoader</code> or <code>jq_schema</code>? How can I adapt the code so that it generates the expected output?</p>
| <python><openai-api><langchain><chatgpt-api><py-langchain> | 2023-07-12 12:59:31 | 1 | 1,005 | Utku Can |
76,670,786 | 10,138,470 | XPATH selection in Selenium Webdriver in Python | <p>I'm stuck looking at scraping some data here <a href="https://internet.safaricom.co.ke/faqs/home" rel="nofollow noreferrer">https://internet.safaricom.co.ke/faqs/home</a>. I'm keen on getting the questions and answers in Pandas, thus a Python scraper is my preference. I followed this code:</p>
<pre><code>import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
url = 'https://internet.safaricom.co.ke/5g-wireless/coverage'
driver = webdriver.Chrome()
driver.get(url)
# Wait for the table to be loaded
wait = WebDriverWait(driver, 10)
table = wait.until(EC.visibility_of_element_located((By.XPATH, '//*[@id="5G-coverage"]')))
soup = BeautifulSoup(table.get_attribute('outerHTML'), 'html.parser')
#Extract the table data using Pandas
dfs = pd.read_html(str(soup))
df = dfs[0]
</code></pre>
<p>The code seems to work well in extracting the table details. How can I get the XPath right for this page <a href="https://internet.safaricom.co.ke/faqs/home" rel="nofollow noreferrer">https://internet.safaricom.co.ke/faqs/home</a>, taking care of the collapsible parts? The desired output is a Pandas df with one column for the question and another for the answer of each FAQ. Thanks in advance.</p>
| <python><selenium-webdriver><web-scraping> | 2023-07-12 12:52:38 | 2 | 445 | Hummer |
76,670,756 | 1,520,091 | Area Weighted Average Grid Interpolation | <p>I would like to resize an n-dimensional grid similar to <code>scipy.ndimage.affine_transform</code>.
For the sake of simplicity, I describe my problem with the 2-dimensional case.</p>
<p>I need to downsample an image by an arbitrary non-integer factor greater than 2 in both directions. The area of a new pixel then covers more than the 4 nearest neighbors (more than 2 along each axis). The resulting value should be a weighted average, in which the partially covered border pixels are only partially included.</p>
<p><code>affine_transform</code> with order=1 does bilinear interpolation and only considers the two closest pixels.</p>
<p>Is there a method (e.g. scipy) doing exactly that which I've overlooked?</p>
<p>Thank you in advance.</p>
| <python><numpy><scipy><spatial-interpolation> | 2023-07-12 12:49:06 | 0 | 631 | checkThisOut |
76,670,746 | 5,269,892 | Python error during import referring to a function which is not imported | <p>I get a <code>NameError</code> during import of a function from a module concerning a function which is not even imported itself. Minimal example is:</p>
<p><strong>test.py</strong>:</p>
<pre><code>from test2 import test2_b
test2_b(arg=None)
</code></pre>
<p><strong>test2.py</strong>:</p>
<pre><code>def test2_a(arg=a):
print('HI_a')
def test2_b(arg=b):
print('HI_b')
</code></pre>
<p><strong>Output of <code>python test.py</code></strong>:</p>
<pre><code>Traceback (most recent call last):
File "test.py", line 1, in <module>
from test2 import test2_b
File "/home/test2.py", line 2, in <module>
def test2_a(arg=a):
NameError: name 'a' is not defined
</code></pre>
<p>Why is the <code>NameError</code> referring to the function <code>test2_a</code> (instead of <code>test2_b</code>, for which indeed the error should be raised) if <code>test2_a</code> is not even imported?</p>
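<p>This happens because importing any single name from a module executes the whole module top to bottom, and default argument values are evaluated when each <code>def</code> statement runs, not when the function is called. Since <code>test2_a</code> is defined first, its default <code>a</code> is looked up (and fails) before <code>test2_b</code> is ever reached. A minimal demonstration (using <code>exec</code> so the failing <code>def</code> can be caught):</p>

```python
# Default argument values are evaluated once, when the "def" statement
# itself runs:
def f(arg=len("abc")):  # len("abc") is evaluated right here, at def time
    return arg

print(f())  # 3

# So a def whose default references an undefined name fails immediately,
# even though the function is never called or imported:
try:
    exec("def test2_a(arg=a): pass")
except NameError as e:
    print(e)  # name 'a' is not defined
```
<p>The usual fix is <code>def test2_a(arg=None):</code> and resolving the real default inside the function body.</p>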
| <python><import><nameerror> | 2023-07-12 12:48:38 | 2 | 1,314 | silence_of_the_lambdas |
76,670,735 | 1,293,127 | Cluster a stream of items with constraints | <p>I'm looking to partition a sequence (non-repeatable stream) of items coming in. I'm sure this is some standard k-partite algo from graph theory, but I cannot remember its name / approach – can you help?</p>
<p>Each item is of the form <code>(x1, x2, x3, x4, …, xn)</code>, i.e. a tuple of <code>n</code> components, where <code>x1</code> comes from a set of possible values <code>X1</code>.</p>
<p>Example: <code>n=3</code>, <code>X1={A, B, C, -}</code>, <code>X2={0, 1, 2, -}</code>, <code>X3={green, red, blue, -}</code>. Example items: <code>(A, 0, green)</code>, <code>(C, 2, blue)</code>, <code>(A, 2, green)</code>, <code>(A, -, -)</code>, <code>(-, 1, red)</code>, etc.</p>
<p>The <code>-</code> value is a special value that means "this value is not known yet". Each item has at least one value known, i.e. <code>(-, -, -)</code> is not possible.</p>
<p>Now the clustering must group together all items that have no conflict in their values. For example <code>(A, -, blue)</code> can be grouped with <code>(A, 2, -)</code> because the only overlap is in the first component, where both share the same value <code>A</code> => both items should end up in the same cluster.</p>
<p>But <code>(A, -, blue)</code> cannot be grouped with <code>(B, -, blue)</code> because there's a conflict <code>A != B</code>.</p>
<p>Similarly <code>(A, -, -)</code> cannot be directly grouped with <code>(-, -, green)</code> because they share no known common value.</p>
<p>The clustering must be transitive, in the sense that <code>(A, -, -)</code> and <code>(-, -, green)</code> should end up in the same cluster if there's another item <code>(A, -, green)</code> that connects them. All three should become a part of the same cluster.</p>
<p>In practice, n is ~10 and each <code>Xi</code> set of possible values for component <code>i</code> is in the billions. So the potential set of all items is practically infinite. I have an input stream of a few hundred million of such items that I need to group together quickly, as they come in.</p>
<p>A greedy algo is OK, in case the constraints above don't lead to a unique clustering solution. But these constraints must be satisfied:</p>
<ol>
<li>No two items in a cluster contain conflicting values.</li>
<li>Different clusters differ in at least one conflicting value.</li>
<li>In order to merge two items, they must share at least one value.</li>
</ol>
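<p>A hedged sketch of the connectivity part: treat every known <code>(position, value)</code> pair as a node in a union-find (disjoint-set) structure and union the pairs within each incoming item. Items then share a cluster exactly when they are transitively connected through shared values (constraints 2 and 3), in near-constant amortized time per item. Note this sketch does not enforce constraint 1: a full implementation would additionally keep the set of known values per cluster and reject or flag merges that would introduce a conflict.</p>

```python
class DSU:
    """Disjoint-set union (union-find) over (position, value) nodes."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

dsu = DSU()

def add_item(item):
    """Union all known components of one incoming item; return a key for it."""
    keys = [(i, v) for i, v in enumerate(item) if v != "-"]
    for k in keys[1:]:
        dsu.union(keys[0], k)
    return keys[0]

items = [("A", "-", "-"), ("-", "-", "green"), ("A", "-", "green"), ("C", "2", "blue")]
reps = [add_item(it) for it in items]
clusters = [dsu.find(r) for r in reps]  # items share a cluster iff roots match
print(clusters[0] == clusters[1] == clusters[2])  # True: joined via (A, -, green)
print(clusters[0] == clusters[3])                 # False
```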
| <python><cluster-analysis><partitioning><bipartite> | 2023-07-12 12:47:11 | 1 | 8,721 | user124114 |
76,670,696 | 8,380,724 | Delete all items from an Azure CosmosDB container with python | <p>Is there a way to delete all items from an Azure CosmosDB container (collection)? I tried it so many way, but I got <code>azure.cosmos.errors.CosmosHttpResponseError</code>.</p>
| <python><azure><azure-cosmosdb><azure-cosmosdb-sqlapi> | 2023-07-12 12:44:03 | 1 | 311 | andexte |
76,670,659 | 12,436,050 | Group by and aggregate the columns in pandas dataframe | <p>I have the following dataframe, which I would like to group by a certain column, aggregating the unique values in the other columns of the respective rows with a separator like ' | '. Below are the sample rows:</p>
<pre><code>col1 col2 col3 col4
THREE M SYNDROME 1 {3-M syndrome 1, 273750 (3)} 3-m syndrome 1 {3-M syndrome 1} 273750
THREE M SYNDROME 1 {3-M syndrome 1, 273750 (3)} 3-m syndrome 2 {3-M syndrome 2} 273750
</code></pre>
<p>I would like to group by 'col1' and aggregate the other unique values. The expected df is:</p>
<pre><code>col1 col2 col3 col4
THREE M SYNDROME 1 {3-M syndrome 1, 273750 (3)} 3-m syndrome 1 | 3-m syndrome 2 {3-M syndrome 1} | {3-M syndrome 2} 273750
</code></pre>
<p>I am using following lines of code.</p>
<pre><code>join_unique = lambda x: ' | '.join(x.unique())
df2 = df.groupby(['col1'], as_index=False).agg(join_unique)
</code></pre>
<p>I get output but col4 is not included in the output.</p>
<pre><code>col1 col2 col3
THREE M SYNDROME 1 {3-M syndrome 1, 273750 (3)} 3-m syndrome 1 | 3-m syndrome 2 {3-M syndrome 1} | {3-M syndrome 2}
</code></pre>
<p>Any help is highly appreciated.</p>
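<p>One likely cause (a hedged guess): col4 is numeric, so <code>' | '.join</code> raises inside the lambda, and older pandas then silently drops the column as a "nuisance" column. Casting each group's values to str first makes the aggregation work for every column:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "col1": ["THREE M SYNDROME 1 {3-M syndrome 1, 273750 (3)}"] * 2,
    "col2": ["3-m syndrome 1", "3-m syndrome 2"],
    "col3": ["{3-M syndrome 1}", "{3-M syndrome 2}"],
    "col4": [273750, 273750],
})

# astype(str) makes the join valid for numeric columns such as col4.
join_unique = lambda x: " | ".join(x.astype(str).unique())
df2 = df.groupby("col1", as_index=False).agg(join_unique)

print(df2["col4"].iloc[0])  # 273750
print(df2["col2"].iloc[0])  # 3-m syndrome 1 | 3-m syndrome 2
```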
| <python><pandas><group-by><aggregate> | 2023-07-12 12:40:42 | 1 | 1,495 | rshar |
76,670,595 | 3,157,428 | Load matlab .mat export of signal logs in python | <p>I am exporting signals in matlab R2023a like this:</p>
<pre><code>save logs.mat logsout -v7
</code></pre>
<p>v7 is the recommended version for reading the file in Python (as seen in several threads).</p>
<p>However, the file read by python is different from what I have in matlab. In matlab I have a nice dataset:</p>
<p><a href="https://i.sstatic.net/VrjYJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VrjYJ.png" alt="matlab" /></a></p>
<p>but after reading it in python with following code:</p>
<pre><code>from scipy.io import loadmat # this is the SciPy module that loads mat-files
mat = loadmat('logs.mat')
</code></pre>
<p>this is what I see:</p>
<p><a href="https://i.sstatic.net/DIite.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DIite.png" alt="in python" /></a></p>
<p>Instead of a structure, I get an array of shape <code>(1, 277355408)</code>. What am I missing? I need to process the signals in Python after generating them.</p>
| <python><matlab><scipy><mat-file> | 2023-07-12 12:34:23 | 1 | 2,821 | Ruli |
76,670,556 | 3,146,582 | How can I quickly test SOCKS proxy with python requests? | <p>I have created a working Python script to retrieve supposedly working SOCKS4/SOCKS5 "Elite" proxies, but it seems they are not working at all for me, at least according to my second Python script ;-)</p>
<p>Is the following a proper way to test whether I successfully apply a proxy to my request?</p>
<pre class="lang-py prettyprint-override"><code>import requests
import time
from bs4 import BeautifulSoup
proxies_list = []
with open('../proxies.txt', 'r') as proxies_file:
proxies_list = [line.rstrip() for line in proxies_file]
headers = {"User-Agent": "Mozilla/5.0 (iPad; CPU OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, "
"like Gecko) Mobile/15E148"}
counter = 0
# just testing first few
for index in range(len(proxies_list) - 90):
proxies = {
"http://": proxies_list[index],
"https://": proxies_list[index],
}
try:
# print("Attempting try with", proxies_list[index])
page = requests.get(url='https://ipaddress.my/', proxies=proxies, headers=headers)
soup = BeautifulSoup(page.content, 'html.parser')
detected_ip = soup.find("ul", {"class": "list-inline text-center"}).find("li", recursive=False).find("span", recursive=False).text
# ip in ip:port format
if detected_ip in proxies_list[index]:
counter += 1
# print(f"Tested {proxies_list[index]} and detected {detected_ip}")
time.sleep(2)
except requests.exceptions.ProxyError as error:
pass
print(f'In total, made {counter} connections.')
</code></pre>
<p>Unfortunately, outcome looks like this:</p>
<p><a href="https://i.sstatic.net/Sv3ph.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Sv3ph.png" alt="enter image description here" /></a></p>
<p>Is there a better way to make requests via a proxy, or are my proxies just not working?</p>
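<p>One thing worth checking (an assumption about the cause, before blaming the proxies themselves): requests matches the <code>proxies</code> dict by plain scheme name, so the keys should be <code>"http"</code>/<code>"https"</code> rather than <code>"http://"</code>, and SOCKS proxies need an explicit <code>socks4://</code> or <code>socks5://</code> scheme in the URL plus the <code>requests[socks]</code> extra installed. Otherwise the entries are silently ignored and each request goes out directly, which would explain zero detected proxy connections. A sketch (the address is a placeholder):</p>

```python
# pip install "requests[socks]"   (SOCKS support lives in this extra)

proxy = "socks5://203.0.113.10:1080"  # or socks4://...; socks5h:// for remote DNS

# Plain scheme names as keys -- entries keyed "http://" are not matched
# by requests and are silently ignored.
proxies = {
    "http": proxy,
    "https": proxy,
}

# page = requests.get("https://ipaddress.my/", proxies=proxies, timeout=10)
print(sorted(proxies))  # ['http', 'https']
```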
| <python><python-requests><proxy> | 2023-07-12 12:30:43 | 1 | 751 | Tomek |
76,670,481 | 5,080,858 | How to find the best features efficiently? | <p>I am looking to find the best possible model for predicting a target variable (categorical, 9 classes), using up to 30 available features. I have a dataset with 12k rows.</p>
<p>When I worked on similar problems previously, I had access to high-performance computing clusters, meaning that I didn't have to worry too much about resource constraints when tuning a model. Now, I'm restricted to using a 2021 M1 Macbook Pro, or a less powerful Ubuntu server. This is proving a huge challenge, as everything I try is ending up taking way too long to be feasibly used.</p>
<p>I started the process by running a very basic shoot-out cross-validation between 7 possible classifiers, employing all available features. This led to 3 potential classifiers (SVC-linear, random forest, multinomial logistic regression), all of which have returned mean accuracy values around .73 (which isn't bad, but I'm aiming for >.8).</p>
<p><strong>Now, I want to find the best possible model configuration by a) finding the best feature combo for each model, and b) the best hyperparameters.</strong></p>
<p>I've tried two strategies for feature selection:</p>
<p><strong>One</strong> - <code>mlextend</code>'s <code>SequentialFeatureSelector</code>, utilising all available processor cores. For only one model (SVC), <strong><em>this process ran for >30 hours</em></strong>, and then crashed the entire system. Not a feasible strategy.</p>
<p><strong>Two</strong> - I tried using a more statistical approach, <code>SelectKBest</code>, without having to test every possible feature combination. This is the code that I came up with to do that:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

rnd = RANDOM_STATE
model_feature_performance_df = pd.DataFrame()
for i, clf in enumerate(classifiers):
for f in range(folds):
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, shuffle=True, random_state=rnd)
for k in range(1, len(X.columns)+1):
selector = SelectKBest(chi2, k=k)
selector.fit(X_train, y_train)
X_train_selected = selector.transform(X_train)
X_test_selected = selector.transform(X_test)
clf.fit(X_train_selected, y_train)
y_pred = clf.predict(X_test_selected)
f1 = np.round(f1_score(y_test, y_pred, average='weighted'), 3)
acc = np.round(accuracy_score(y_test, y_pred), 3)
features_used = ', '.join(list(X_train.columns[selector.get_support()]))
tmp_df = pd.DataFrame(
[{
'classifier': clf_names[i],
'fold': f,
'random_state': rnd,
'k': k,
'features': features_used,
'f1': f1,
'acc': acc
}]
)
model_feature_performance_df = pd.concat([model_feature_performance_df, tmp_df])
rnd += 1
</code></pre>
<p>Again, after over 24 hours, it had only completed one fold for the SVC model, and then it crashed without returning anything.</p>
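<p>One optimisation I am considering (a sketch with toy data, not my real features): the loop above refits <code>SelectKBest</code> for every <code>k</code>, but the <code>chi2</code> scores only need computing once per split, and the best-<code>k</code> subsets are then just nested prefixes of the score ranking:</p>

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2

# toy stand-in for my real X, y
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X = X - X.min(axis=0)  # chi2 requires non-negative features

scores, _ = chi2(X, y)            # computed once, not once per k
ranked = np.argsort(scores)[::-1]  # feature indices, best first
for k in range(1, 4):
    X_k = X[:, ranked[:k]]  # the same feature subset SelectKBest(chi2, k=k) would pick
    # ... fit/score the classifier on X_k ...
```

<p>I am not sure how much this alone saves compared to the classifier fits themselves, though.</p>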
<p>I am looking for any advice as to how to make an informed decision on what my best possible model could be within hours, not days.</p>
| <python><machine-learning><scikit-learn><feature-selection> | 2023-07-12 12:21:46 | 1 | 679 | nikUoM |
76,670,445 | 17,530,552 | How to group data from a multiindex column dataframe for split violin- or boxplots | <p>I computed data that I saved into a nested dictionary. Subsequently, I loaded this dictionary into a Pandas DataFrame, called <code>df</code>.</p>
<pre><code>df = pd.DataFrame.from_dict({(i,j): dict_data2[i][j]
for i in dict_data2.keys()
for j in dict_data2[i].keys()},
orient='columns')
</code></pre>
<p>This dataframe is organized and looks as follows when I print it.</p>
<pre><code> rest ... task
V1 V2 V3 ... VMA1 VMA2 VMA3
0 5.166667 5.833333 5.300000 ... 4.766667 4.800000 4.766667
1 5.166667 5.566667 5.266667 ... 4.766667 4.800000 4.733333
2 5.200000 5.633333 5.300000 ... 4.833333 4.900000 4.733333
3 5.000000 5.600000 5.333333 ... 4.966667 5.033333 4.900000
4 4.966667 5.800000 5.333333 ... 5.000000 5.066667 5.033333
.. ... ... ... ... ... ... ...
724 5.300000 6.233333 6.366667 ... 5.233333 5.666667 5.533333
725 5.266667 6.266667 6.366667 ... 5.333333 5.633333 5.633333
726 5.266667 6.266667 6.400000 ... 5.333333 5.500000 5.466667
727 5.333333 6.266667 6.400000 ... 5.366667 5.500000 5.433333
728 5.566667 6.266667 6.366667 ... 5.400000 5.533333 5.400000
[729 rows x 22 columns]
</code></pre>
<p>The dataset has two major groups, <code>rest</code> and <code>task</code>. Both major groups share subgroups, such as <code>V1</code>, over <code>V2</code>, to <code>VMA3</code>. While these subgroups are shared between both <code>rest</code> and <code>task</code>, the data (729 data points per subgroup) is not identical. That is, <code>rest V1</code> does not contain the same values as <code>task V1</code>. Hence, all subgroups exist for both <code>rest</code> and <code>task</code>, but contain different values.</p>
<p><strong>Aim:</strong> I would like to use <code>seaborn</code> to plot violin- or boxplots with the option <code>split=True</code> (<a href="https://seaborn.pydata.org/generated/seaborn.violinplot.html" rel="nofollow noreferrer">https://seaborn.pydata.org/generated/seaborn.violinplot.html</a>), so that one side of the plot should show the <code>rest</code> data, and the other side of the plot should show the <code>task</code> data. Hence, each subregion, say <code>V1</code> should share one violin- or boxplot, but with the left side showing the <code>rest</code> and the right side of the plot showing the <code>task</code> data distribution.</p>
<p><strong>Problem:</strong> I don't understand how one has to format the Pandas DataFrame <code>df</code> so that <code>seaborn</code> can read the actual data as per my aim. The problem is the "nested" data structure in the dataframe.</p>
<p><strong>Question:</strong> Is there a way to format <code>df</code> to achieve my aim, or would I have to switch to another method of organizing my data not using a Pandas DataFrame?</p>
<p>Here is my current code and what the result looks like. Currently, seaborn still plots rest and task violinplots separately, because I do not understand yet how to re-format my dataframe <code>df</code>.</p>
<pre><code>df = pd.DataFrame.from_dict({(i,j): dict_data2[i][j]
for i in dict_data2.keys()
for j in dict_data2[i].keys()},
orient='columns')
colors = ["coral", "gold", "mediumseagreen", "blueviolet",
"mediumorchid", "bisque", "cornflowerblue"]
sns.violinplot(data=df,
orient="h", width=3, linewidth=1,
saturation=1)
</code></pre>
<p><a href="https://i.sstatic.net/KYh9r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KYh9r.png" alt="enter image description here" /></a></p>
| <python><pandas><dataframe><seaborn> | 2023-07-12 12:18:00 | 2 | 415 | Philipp |
76,670,388 | 11,564,487 | Limiting memory use while creating a huge polars dataframe from pickle files | <p>Consider the following code, which tries to create a huge dataframe from a set of pickle files:</p>
<pre><code>import pandas as pd
import polars as pl
import glob
pickle_files = glob.glob("/home/x/pickles/*.pkl.gz")
df_polars = pl.DataFrame()
for file in pickle_files:
df_pandas = pd.read_pickle(file)
df_temp = pl.from_pandas(df_pandas)
df_polars = df_polars.vstack(df_temp)
print(df_polars)
</code></pre>
<p>What I want is to limit the memory use while running this script to, say, 15 GB. Could somebody please guide me?</p>
| <python><python-polars> | 2023-07-12 12:12:04 | 1 | 27,045 | PaulS |
76,670,200 | 6,168,639 | Mock a response to a function that is called by Django middleware in Django test? | <p>I'm struggling with Mock a little bit and am in need of some help/understanding.</p>
<p>I've got a Django application that has a middleware file that calls a function to add a queryset to the response (if needed).</p>
<p>The middleware looks like this:</p>
<pre><code>class WidgetMiddleware(object):
def __init__(self, get_response):
self.get_response = get_response
def __call__(self, request):
# Code to be executed for each request before
# the view (and later middleware) are called.
if resolve(request.path).app_name == "my_app":
request.full_widgets = has_full_widgets(
request.user.id
)
else:
pass
response = self.get_response(request)
# Code to be executed for each request/response after
# the view is called.
return response
</code></pre>
<p>I have a view that looks like this:</p>
<pre><code>@login_required
@permission_required("my_app.view_thing", raise_exception=True)
def cart(request, **kwargs):
if request.full_widgets:
return redirect("my_app:index")
# ... rest of view logic ...
</code></pre>
<p>The <code>has_full_widgets</code> function lives in a utility file (<code>my_app.utils</code>):</p>
<pre><code>def has_full_widgets(user_id):
return Widget.objects.filter(user=user_id, full=True)
</code></pre>
<p>I'm trying to write a test for that view in such a way that the <code>has_full_widgets</code> function returns a QuerySet of widgets, which would trigger the redirect in the view. So I would like to mock the response of that function.</p>
<p>I've got a test that looks like this:</p>
<pre><code>class TestCartView(TestCase):
def setUp(self):
self.client = Client()
self.cart_url = reverse("my_app:cart")
# setup random user without permissions
self.user_no_perms = Person.objects.create_user(
username="testusernoperms", password="12345"
)
self.user = Person.objects.create_user(
username="testuser", password="12345"
)
permission = Permission.objects.get(codename="view_thing")
self.user.user_permissions.add(permission)
# Login super basic user
self.force_login(self.user)
def test_mock_middleware_response(self):
# Mock the has_full_widgets method on the utils module.
with mock.patch.object(utils, "has_full_widgets", return_value=Widget.objects.all()):
# Create a request object.
request = mock.Mock()
request.user = User.objects.create_user("test_user", "test@email.com", "password")
# Call the cart view.
cart(request)
# Assert that the request.full_widgets attribute is the queryset that was returned by the middleware.
assert request.full_widgets == Widget.objects.all()
# Assert that the middleware's has_full_widgets method was called.
utils.has_full_widgets.assert_called_once_with(request.user.id)
</code></pre>
<p>I am getting the following error:</p>
<pre><code>AssertionError: assert <Mock name='mock.full_widgets' id='4512588288'> == <QuerySet [<Widget: 1>, <Widget: 2>, <Widget: 3>, ...
</code></pre>
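<p>One thing I suspect is the patch target: since the middleware and view call <code>has_full_widgets</code> as a bare name (presumably via <code>from my_app.utils import has_full_widgets</code>), patching the <code>utils</code> module may not rebind the name they actually call. This stdlib-only toy (module names are made up) shows the effect:</p>

```python
import types
from unittest import mock

# toy stand-ins for my_app.utils and a module that did `from utils import has_full_widgets`
utils = types.ModuleType('utils')
utils.has_full_widgets = lambda user_id: []

middleware = types.ModuleType('middleware')
middleware.has_full_widgets = utils.has_full_widgets  # bound at "import" time

with mock.patch.object(utils, 'has_full_widgets', return_value=['widget']):
    print(utils.has_full_widgets(1))       # patched
    print(middleware.has_full_widgets(1))  # still the original: []
```

<p>If that is the issue here, I would need to patch the name where it is used, not where it is defined.</p>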
<p>Am I going about this all wrong? My knowledge of mock is very basic at this point and I'm trying to learn, so any and all help is appreciated.</p>
| <python><django><mocking><pytest> | 2023-07-12 11:45:33 | 0 | 722 | Hanny |
76,669,927 | 640,916 | pydantic: How to ignore invalid values when creating model instance | <p>Given a sample model:</p>
<pre><code>from pydantic import BaseModel
from typing import Optional
class Foo(BaseModel):
age: Optional[int]
name: Optional[str]
</code></pre>
<p>I would like my model to be able to digest-but-ignore invalid values to receive an instance in any case. E.g.:</p>
<pre><code>Foo(age="I", name="Jim")
</code></pre>
<p>should instead of raising a <code>ValidationError</code> automatically discard the value for the <code>age</code> field and results in</p>
<pre><code>Foo(age=None, name='Jim')
</code></pre>
<p>I could manually loop over the <code>ValidationErrors</code> and drop the corresponding data or loop over the values and use <code>validate_assignment</code>, but I was thinking I am missing something built-in.</p>
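<p>The manual fallback I am using for now looks roughly like this (a sketch; the retry loop and helper name are my own, and I would prefer something built-in):</p>

```python
from typing import Optional

from pydantic import BaseModel, ValidationError

class Foo(BaseModel):
    age: Optional[int] = None
    name: Optional[str] = None

def foo_ignoring_invalid(data: dict) -> Foo:
    # retry, dropping every field the ValidationError complains about
    while True:
        try:
            return Foo(**data)
        except ValidationError as exc:
            for err in exc.errors():
                data.pop(err['loc'][0], None)

foo = foo_ignoring_invalid({'age': 'I', 'name': 'Jim'})
print(foo)
```
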
| <python><python-typing><pydantic> | 2023-07-12 11:11:53 | 2 | 7,819 | djangonaut |
76,669,826 | 14,820,295 | Adding rows for missing months and Fill na values with last value in a partition Python | <p>I have a pandas dataframe and I need to fill NULL values with the last value on a partition.
Specifically, for each "id" and "month", I need to create and explode last "value" on subsequent months.</p>
<p><em><strong>Example of my dataset:</strong></em></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>month</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2023-01-01</td>
<td>London</td>
</tr>
<tr>
<td>1</td>
<td>2023-02-01</td>
<td>Paris</td>
</tr>
<tr>
<td>2</td>
<td>2023-01-01</td>
<td>New York</td>
</tr>
<tr>
<td>3</td>
<td>2023-02-01</td>
<td>Paris</td>
</tr>
<tr>
<td>4</td>
<td>2023-03-01</td>
<td>NULL</td>
</tr>
</tbody>
</table>
</div>
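<p>For convenience, here is the same example data as code (a minimal reproducible sketch):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'id': [1, 1, 2, 3, 4],
    'month': pd.to_datetime(['2023-01-01', '2023-02-01', '2023-01-01',
                             '2023-02-01', '2023-03-01']),
    'value': ['London', 'Paris', 'New York', 'Paris', None],
})
print(df)
```
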
<p><em><strong>My desired output (exploding the values up to April 2023):</strong></em></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>month</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2023-01-01</td>
<td>London</td>
</tr>
<tr>
<td>1</td>
<td>2023-02-01</td>
<td>Paris</td>
</tr>
<tr>
<td>1</td>
<td>2023-03-01</td>
<td>Paris</td>
</tr>
<tr>
<td>1</td>
<td>2023-04-01</td>
<td>Paris</td>
</tr>
<tr>
<td>2</td>
<td>2023-01-01</td>
<td>New York</td>
</tr>
<tr>
<td>2</td>
<td>2023-02-01</td>
<td>New York</td>
</tr>
<tr>
<td>2</td>
<td>2023-03-01</td>
<td>New York</td>
</tr>
<tr>
<td>2</td>
<td>2023-04-01</td>
<td>New York</td>
</tr>
<tr>
<td>3</td>
<td>2023-02-01</td>
<td>Paris</td>
</tr>
<tr>
<td>3</td>
<td>2023-03-01</td>
<td>Paris</td>
</tr>
<tr>
<td>3</td>
<td>2023-04-01</td>
<td>Paris</td>
</tr>
<tr>
<td>4</td>
<td>2023-03-01</td>
<td>NULL</td>
</tr>
<tr>
<td>4</td>
<td>2023-04-01</td>
<td>NULL</td>
</tr>
</tbody>
</table>
</div>
<p>Thank you!</p>
| <python><pandas><explode><fill> | 2023-07-12 10:59:08 | 1 | 347 | Jresearcher |
76,669,820 | 596,504 | Complex grouping and ordering with pandas | <p>I have a pandas DataFrame like</p>
<pre><code>data = {
'A': ['foo', 'foo', 'bar', 'world', 'world', 'bar', 'foo'],
'B': ['text1', 'text2', 'text1', 'text3', 'text2', 'text3', 'text1'],
'C': [1, 2, 3, 4, 5, 6, 7]
}
</code></pre>
<p>It should be aggregated/grouped/indexed by A and B, and counted by C e.g. <code>df.pivot_table(index=['A', 'B'], values='C', aggfunc='count')</code>.</p>
<p>What I can't achieve is my desired order. The result should look like:</p>
<pre><code> C
A B
world text2 1
text3 1
bar text1 1
text3 1
foo text1 2
text2 1
</code></pre>
<p>A has a fixed order (world > bar > foo) and B is ordered by descending counts in C.</p>
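<p>For context, the closest I have gotten is post-processing the pivot with an ordered <code>Categorical</code> and a stable sort (a sketch; I am not sure this is the idiomatic way):</p>

```python
import pandas as pd

data = {
    'A': ['foo', 'foo', 'bar', 'world', 'world', 'bar', 'foo'],
    'B': ['text1', 'text2', 'text1', 'text3', 'text2', 'text3', 'text1'],
    'C': [1, 2, 3, 4, 5, 6, 7],
}
df = pd.DataFrame(data)

out = df.pivot_table(index=['A', 'B'], values='C', aggfunc='count')
order = ['world', 'bar', 'foo']  # the fixed order for A
out = (
    out.reset_index()
       .assign(A=lambda d: pd.Categorical(d['A'], categories=order, ordered=True))
       .sort_values(['A', 'C'], ascending=[True, False], kind='mergesort')  # stable sort
       .set_index(['A', 'B'])
)
print(out)
```
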
| <python><pandas> | 2023-07-12 10:58:49 | 1 | 548 | Nachtgold |
76,669,799 | 9,681,081 | How to create ForeignKeyConstraint in SQLAlchemy | <p>I'm trying to temporarily drop a foreign key constraint on a table to run efficient inserts, using SQLAlchemy.</p>
<p>I've written the following context manager to handle dropping and recreating:</p>
<pre class="lang-py prettyprint-override"><code>from contextlib import contextmanager
from typing import Iterator
from sqlalchemy import ForeignKeyConstraint
from sqlalchemy.orm import Session
from sqlalchemy.sql.ddl import CreateConstraint, DropConstraint
@contextmanager
def drop_constraint(
session: Session, constraint: ForeignKeyConstraint
) -> Iterator[None]:
session.execute(DropConstraint(constraint))
yield
session.execute(CreateConstraint(constraint))
</code></pre>
<p>Dropping the constraint works well, but I get the following error when re-creating it on exit:</p>
<pre><code>ArgumentError: Executable SQL or text() construct expected,
got <sqlalchemy.sql.ddl.CreateConstraint object at ...>.
</code></pre>
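<p>For what it's worth, <code>sqlalchemy.schema</code> also exports <code>AddConstraint</code>, which (unlike <code>CreateConstraint</code>) does compile to an <code>ALTER TABLE ... ADD CONSTRAINT</code> statement. A standalone sketch with made-up table names:</p>

```python
from sqlalchemy import Column, ForeignKeyConstraint, Integer, MetaData, Table
from sqlalchemy.schema import AddConstraint, DropConstraint

metadata = MetaData()
Table('parent', metadata, Column('id', Integer, primary_key=True))
child = Table('child', metadata,
              Column('id', Integer, primary_key=True),
              Column('parent_id', Integer))
fk = ForeignKeyConstraint(['parent_id'], ['parent.id'], name='fk_child_parent')
child.append_constraint(fk)

print(str(DropConstraint(fk)))  # ALTER TABLE child DROP CONSTRAINT ...
print(str(AddConstraint(fk)))   # ALTER TABLE child ADD CONSTRAINT ...
```

<p>But I am not sure this is the intended 2.0 API for my use case.</p>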
<p>Any idea how I can create this constraint on the fly with pure SQLAlchemy 2.0?</p>
| <python><sqlalchemy><foreign-keys> | 2023-07-12 10:55:19 | 1 | 2,273 | Roméo Després |
76,669,723 | 2,315,911 | How to calculate autocorrelation coefficient | <p>Consider <code>Python</code> first.</p>
<pre><code>import numpy as np
import pandas as pd
x = [0.25, 0.5, 0.2, -0.05]
</code></pre>
<p>The first way to calculate autocorrelation coefficient of <code>x</code>:</p>
<pre><code>pd.Series(x).autocorr()
</code></pre>
<p>The second way:</p>
<pre><code>x0 = x[:-1]
x1 = x[1:]
np.corrcoef(x0, x1)[0,1]
</code></pre>
<p>Both of the above give <code>0.1035526330902407</code>.</p>
<p>Now turn to <code>Julia</code>. I tried the following</p>
<pre><code>using StatsBase
x = [0.25, 0.5, 0.2, -0.05]
autocor(x)[2]
</code></pre>
<p>which gives <code>0.04508196721311479</code>. It is different from what I get from <code>Python</code>.</p>
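<p>For what it's worth, I can reproduce the <code>Julia</code> number in <code>Python</code>: it appears to be the lag-1 autocovariance divided by the full-series variance, both computed with the global mean (sketch):</p>

```python
import numpy as np

x = np.array([0.25, 0.5, 0.2, -0.05])
d = x - x.mean()  # deviations from the mean of the *full* series
acf_lag1 = np.sum(d[:-1] * d[1:]) / np.sum(d * d)
print(acf_lag1)  # matches autocor(x)[2], about 0.0450819672
```
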
<p>What <code>Julia</code> built-in function returns the same value as in <code>Python</code>?</p>
| <python><julia> | 2023-07-12 10:45:24 | 1 | 1,300 | Spring |
76,669,711 | 324,362 | How may I instantiate a class of type hint in python? | <p>I am new to python and am coming from C# and java. I want to instantiate a class of the type provided as type hint <code>R</code> as following</p>
<pre><code>from typing import (TypeVar, Generic)
class BaseParams(object):
def __init__(self) -> None:
self.name = 'set-in-base-class'
class ChildParams(BaseParams):
def __init__(self) -> None:
super().__init__()
self.name = 'set-in-child-class'
R = TypeVar('R', bound= BaseParams)
class MyGeneric(Generic[R]):
def __init__(self) -> None:
super().__init__()
def test(self):
r = R() # how should I instantiate R here
print(r.name)
c = MyGeneric[ChildParams]()
c.test()
</code></pre>
<p>something like the following C# code</p>
<pre><code>class BaseParams
{
public BaseParams()
{
Name = "set-in-base-class";
}
public string Name { get; set; }
}
class ChildParams : BaseParams
{
public ChildParams()
{
Name = "set-in-child-class";
}
}
class MyGenericClass<R> where R : BaseParams, new()
{
public void test()
{
var r = new R();
Console.WriteLine(r.Name);
}
}
</code></pre>
<p>I've searched quite a lot on how to do this in Python, and all the sources refer to situations where the type is provided to a method or something like that. I wonder if it is possible to do this at all.</p>
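<p>For what it's worth, the closest workaround I have found relies on <code>__orig_class__</code> (which is set after construction, so it is not available inside <code>__init__</code>); I am not sure how reliable it is (sketch):</p>

```python
from typing import Generic, TypeVar, get_args

class BaseParams:
    def __init__(self) -> None:
        self.name = 'set-in-base-class'

class ChildParams(BaseParams):
    def __init__(self) -> None:
        super().__init__()
        self.name = 'set-in-child-class'

R = TypeVar('R', bound=BaseParams)

class MyGeneric(Generic[R]):
    def test(self) -> None:
        cls = get_args(self.__orig_class__)[0]  # ChildParams at runtime
        r = cls()
        print(r.name)

MyGeneric[ChildParams]().test()  # set-in-child-class
```
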
<p>Could someone please help me find a workaround for this?</p>
| <python><generics><python-typing> | 2023-07-12 10:44:21 | 1 | 2,554 | anonim |
76,669,647 | 7,713,770 | dockerize django, manage.py runserver: error: unrecognized arguments: | <p>I try to dockerize an existing django application. I am using django with a virtualenv and a pipfile.</p>
<p>So my dockerfile looks:</p>
<pre><code># pull official base image
FROM python:3.9-alpine3.13
# set work directory
WORKDIR /usr/src/app
EXPOSE 8000
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add linux-headers postgresql-dev gcc python3-dev musl-dev
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
COPY ./requirements.dev.txt .
RUN pip install -r requirements.txt
# copy entrypoint.sh
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
# copy project
COPY . .
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
</code></pre>
<p>entrypoint.sh:</p>
<pre><code>#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $SQL_HOST $SQL_PORT; do
sleep 0.1
done
echo "PostgreSQL started"
fi
python manage.py flush --no-input
python manage.py migrate
exec "$@"
</code></pre>
<p>and docker-compose:</p>
<pre><code>version: '3.9'
services:
app:
build:
context: .
args:
- DEV=true
ports:
- "8000:8000"
volumes:
- .:/app
command: >
sh -c "python ./manage.py migrate &&
python ./manage.py runserver 192.168.1.135:8000"
env_file:
- ./.env
depends_on:
- db
db:
image: postgres:13-alpine
container_name: postgres
volumes:
- dev-db-data:/var/lib/postgresql/data
env_file:
- ./.env
ports:
- '5432:5432'
volumes:
dev-db-data:
dev-static-data:
</code></pre>
<p>and settings.py:</p>
<pre><code>ALLOWED_HOSTS = ['192.168.1.135']
CORS_ORIGIN_WHITELIST = [
"http://192.168.1.135:8000"
]
</code></pre>
<p>so the url is on the whitelist.</p>
<p>But when I do docker-compose up. I still get this error:</p>
<pre><code>manage.py runserver: error: unrecognized arguments: 192.168.1.135:8000
</code></pre>
<p>Question: how to dockerize an existing django application?</p>
| <python><django><docker><docker-compose><dockerfile> | 2023-07-12 10:36:05 | 1 | 3,991 | mightycode Newton |
76,669,635 | 6,564,294 | Get chatGPT to respond with a single direct answer | <p>I am querying a text using chatGPT. But I need chatGPT to respond with single direct answers, rather than long stories or irrelevant text. Any way to achieve this?</p>
<p>My code looks like:</p>
<pre><code>from langchain.document_loaders import TextLoader
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.indexes import VectorstoreIndexCreator
loader = TextLoader("path/to/extracted_text.txt")
loaded_text = loader.load()
# Save document text as vector.
index = VectorstoreIndexCreator(
vectorstore_cls=DocArrayInMemorySearch
).from_loaders([loader])
# Query the text
response = index.query("At what time did john come home yesterday?")
print("Loaded text is:", loaded_text)
print("ChatGPT response is:", response)
</code></pre>
<blockquote>
<p>>>> Loaded text is: "< a really long text > + John came home last
night at 11:30pm + < a really long text >"</p>
</blockquote>
<blockquote>
<p>>>> ChatGPT response is: "John came back yesterday at 11:30pm."</p>
</blockquote>
<p>The problem is that I want a concise answer <code>11:30pm</code> rather than a full sentence <code>John came home last night at 11:30pm</code>. Is there a way to achieve this without adding "I need a short direct response" to my query? Can I achieve a more guaranteed concise response by setting a parameter through some other means instead?</p>
| <python><nlp><openai-api><langchain><chat-gpt-4> | 2023-07-12 10:35:00 | 2 | 324 | Chukwudi |
76,669,600 | 4,451,315 | polars - get "levels" of categorical column? | <p>In pandas if we create a categorical and then drop one of the values, we can still see that value from the internal dictionary.</p>
<pre class="lang-py prettyprint-override"><code>In [44]: s = pd.Series(['one', 'two', 'three']).astype('category')[:2]
In [45]: s
Out[45]:
0 one
1 two
dtype: category
Categories (3, object): ['one', 'three', 'two']
In [46]: s.value_counts()
Out[46]:
one 1
two 1
three 0
Name: count, dtype: int64
</code></pre>
<p><code>.value_counts</code> shows that category <code>three</code> has <code>0</code> occurrences, even though <code>three</code> doesn't appear in <code>s</code>. It knows it's part of the levels of that category.
You can get all the categories with:</p>
<pre class="lang-py prettyprint-override"><code>In [52]: s.dtype.categories
Out[52]: Index(['one', 'three', 'two'], dtype='object')
</code></pre>
<p>In polars, can we recover values from the internal dictionary if they've been dropped from the data?</p>
<pre class="lang-py prettyprint-override"><code>In [47]: s = pl.Series('category', ['one', 'two', 'three'], pl.Categorical)[:2]
In [48]: s
Out[48]:
shape: (2,)
Series: 'category' [cat]
[
"one"
"two"
]
In [49]: s.value_counts()
Out[49]:
shape: (2, 2)
┌──────────┬────────┐
│ category ┆ counts │
│ --- ┆ --- │
│ cat ┆ u32 │
╞══════════╪════════╡
│ two ┆ 1 │
│ one ┆ 1 │
└──────────┴────────┘
</code></pre>
<p>Is it even possible to get all the categories from the dtype?</p>
<hr />
<p>Note: this is not the same as <a href="https://stackoverflow.com/questions/76613542/make-a-categorical-column-which-has-categories-a-b-c-in-polars">Make a categorical column which has categories ['a', 'b', 'c'] in Polars</a>. Here, I have already made the categorical data, and want to see which categories it has</p>
| <python><dataframe><python-polars><categorical-data> | 2023-07-12 10:29:48 | 2 | 11,062 | ignoring_gravity |
76,669,572 | 11,311,798 | Efficient library for building, intersecting and measuring 3D meshes | <p>I'm working with Python 3.10.</p>
<p>Say I have a few sets of points in ring shapes, contained in parallel planes.
I would like to:</p>
<ol>
<li>Build a 3D mesh by linking the rings with triangles</li>
</ol>
<p><a href="https://i.sstatic.net/0yXKc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0yXKc.png" alt="like this" /></a></p>
<ol start="2">
<li>Intersect this 3D mesh with another one, for instance a cylinder or sphere</li>
<li>Measure some metrics on this mesh (volume, area, height, etc...)</li>
</ol>
<p>I am already confident I could figure out a way to do this "from scratch"; however, I don't have much time (I'm experimenting with different ideas here) and I would like to get some performance out of it, to be able to see how it fares when used in an optimization algorithm.</p>
<p>Does anyone know a Python library that can do at least some of these steps? I already found some, like <a href="https://pymesh.readthedocs.io/en/latest/" rel="nofollow noreferrer">PyMesh</a>, but (maybe I'm wrong) it seems to do only the "intersection part" of what I'm trying to achieve.</p>
| <python><3d><geometry><mesh> | 2023-07-12 10:25:37 | 1 | 337 | J.M |
76,669,366 | 5,852,692 | Python ctypes TypeError while calling a DLL function | <p>I have a DLL which I load with ctypes CDLL and then I am calling DLL functions via python. Sadly I do not have the original DLL coding, however I have an C header file where the function INPUTS and OUTPUTS and their types can be seen.</p>
<p>When I checked the header file for the following function it says:</p>
<pre><code>INT32 DllExport load_active(
/* OUT */ INT32 *h_nw,
/* IN */ INT32 flags);
</code></pre>
<p>So I am trying to run the <code>load_active</code> function in python via:</p>
<pre><code>import ctypes as ct
dll = ct.CDLL('example.dll')
dll.load_active.argtypes = [ct.POINTER(ct.c_int32), ct.c_int32]
dll.load_active.errcheck = _validate_result
h_nw = ct.POINTER(ct.c_int32)
flags = ct.c_int32(0)
status = dll.load_active(h_nw, flags)
</code></pre>
<p>I am getting the following error:</p>
<pre><code> status = dll.load_active(h_nw, flags)
ctypes.ArgumentError: argument 1: TypeError: expected LP_c_long instance instead of _ctypes.PyCPointerType
</code></pre>
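<p>While debugging I noticed that <code>ct.POINTER(ct.c_int32)</code> is a pointer <em>type</em>, not an instance, which seems to match the message. Is the pattern below what the binding expects? (A sketch; the DLL call itself is commented out, but the type distinction can be seen without it.)</p>

```python
import ctypes as ct

ptr_type = ct.POINTER(ct.c_int32)  # a pointer *type*, which is what I was passing
h_nw = ct.c_int32(0)               # an *instance* for the function to fill in
p = ct.pointer(h_nw)               # an LP_c_int32 *instance*

assert isinstance(p, ptr_type)
assert not isinstance(ptr_type, ptr_type)  # the type itself is not an instance

# so the call would presumably be:
# status = dll.load_active(ct.byref(h_nw), ct.c_int32(0))
# print(h_nw.value)  # handle written by the DLL
```
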
| <python><dll><typeerror><ctypes> | 2023-07-12 10:00:52 | 1 | 1,588 | oakca |
76,669,198 | 5,269,892 | Importing function with arguments with "variable" default values in Python | <p>I want to use a function with a default argument (as in argument with default value) in two separate scripts in which that default argument is set differently. Of course, the function may also be called with the argument set to anything other than the default value.</p>
<p>Below is a minimal dummy example using functions to write a message <code>msg</code> to a list of files, <code>write_list</code>. The functions are defined in <strong>test_write_utils.py</strong> and are imported in multiple different scripts; for this example, in only one script, <strong>test_write_main.py</strong>.</p>
<p><strong>test_write_utils.py:</strong></p>
<pre><code>""" def write_func(msg, write_list=[file1, file2]):
for file in write_list:
print('Write to %s' % file)
# open(file, 'w').write(msg)
"""
def write_func2(msg, write_list=None):
if write_list is None:
write_list = [file1, file2]
for file in write_list:
print('Write to %s' % file)
# open(file, 'w').write(msg)
class writeclass:
# user can (but does not have to) provide a default write list
def __init__(self, default_write_list=[]):
self.default_write_list = default_write_list
def write(self, msg, write_list=None):
if write_list is None:
write_list = self.default_write_list
for file in write_list:
print('Write to %s' % file)
# open(file, 'w').write(msg)
</code></pre>
<p><strong>test_write_main.py</strong>:</p>
<pre><code># from test_write_utils import write_func # NameError: name 'file1' is not defined
from test_write_utils import write_func2, writeclass
file1 = 'file1.txt'
file2 = 'file2.txt'
write_list = [file1, file2]
# write_func('test')
# write_func2('test') # NameError: global name 'file1' is not defined
# define variables in namespace of test_write_utils;
# import statement here instead of beginning (bad practice) only to make structure clear
import test_write_utils
test_write_utils.file1 = file1
test_write_utils.file2 = file2
write_func2('test') # works
mywriter = writeclass(write_list)
mywriter.write('test') # works
</code></pre>
<p><code>write_func</code> (when uncommented) raises an error during import since it must have <code>file1</code> and <code>file2</code> defined at import time. <code>write_func2</code>, with default argument <code>None</code> based on <a href="https://stackoverflow.com/a/69359624/5269892">this post</a>, can be imported, but will raise an error during the function call due to separate namespaces of the scripts. If the variables are defined in the appropriate namespace <code>test_write_utils</code> (I then commented <code>write_func</code> out to avoid the import error), <code>write_func2</code> works. However, the flow of information is obscured, i.e. the user does not see in test_write_utils.py where the variables are actually defined. Using a method of a class <code>writeclass</code> instead of functions also works and the default write list can be independently set in each script during instantiation.</p>
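<p>One more variant I experimented with, binding each script's default once with <code>functools.partial</code> instead of writing into the utility module's namespace (a sketch; the return value here is only for illustration, my real function writes files):</p>

```python
from functools import partial

def write_func3(msg, write_list):
    # no default at all in the utility module; each script binds its own
    return [(f, msg) for f in write_list]

# in test_write_main.py:
script_writer = partial(write_func3, write_list=['file1.txt', 'file2.txt'])
print(script_writer('test'))  # [('file1.txt', 'test'), ('file2.txt', 'test')]
```
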
<p><strong>Question:</strong> Is there a non-class-method-based, recommended way to use a function with a "variable" default argument in different scripts (in which the default argument is set differently)? I'm using Python 2.7.18, in case it is relevant for anything.</p>
| <python><import><default-value> | 2023-07-12 09:42:42 | 1 | 1,314 | silence_of_the_lambdas |
76,669,197 | 5,640,517 | Add custom static/media url in django with variable path | <p>I have a django app that saves games and screenshots.
When I save a game I download the screenshots and save them:</p>
<pre class="lang-py prettyprint-override"><code>class ImageModel(models.Model):
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
img = models.ImageField()
class Game(models.Model):
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
screenshots = models.ManyToManyField(ImageModel)
screenshots_urls = models.JSONField()
</code></pre>
<p>When the game is saved, directories like <code>/games/<game_uuid>/</code> and <code>/games/<game_uuid>/screenshots/</code> are created.</p>
<p>Then the screenshots are downloaded and saved in the DB.</p>
<p>Next, to display them in a view, I was thinking of adding something like this to settings:</p>
<pre class="lang-py prettyprint-override"><code>GAMES_FOLDER = Path('/games/')
STATIC_URL = 'static/'
STATICFILES_DIRS = [os.path.join(BASE_DIR, 'static')]
VENV_PATH = os.path.dirname(BASE_DIR)
STATIC_ROOT = os.path.join(VENV_PATH, 'static_root')
SCREENSHOTS_URL = GAMES_FOLDER / "GAME_UUID" / "screenshots" / "IMAGE_UUID.png"
SCREENSHOTS_ROOT = ""
</code></pre>
<pre class="lang-py prettyprint-override"><code>urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
urlpatterns += static(settings.SCREENSHOTS_URL, document_root=settings.SCREENSHOTS_ROOT)
</code></pre>
<p>Then adding a custom tag for screenshots urls since they're not in the common static folder.</p>
<p>Am I complicating this too much?</p>
<p>Do I even need ImageModel if I'm just saving the images to disk, can't I just make game.screenshots an array of image filenames?</p>
<p>Or is there a reason why I might want images saved as ImageModel?</p>
<h1>Edit:</h1>
<p>I think what I'm looking for is something like</p>
<pre class="lang-py prettyprint-override"><code>urlpatterns += static(settings.SCREENSHOTS_URL+'<uuid:game_id>/<uuid:image_id>.png', document_root=settings.SCREENSHOTS_ROOT)
</code></pre>
<p>Could I work with that to make django search the image in the correct folder?</p>
| <python><django> | 2023-07-12 09:42:35 | 1 | 1,601 | Daviid |
76,669,035 | 567,059 | Use pytest.mark.parametrize both directly and indirectly | <p>Using <code>pytest</code>, how can I parametrise a test function and then use the value both directly and indirectly?</p>
<p>For example, take the very basic test below to check that a function to square a number returns the correct value. As far as I know, I have to parameterise the test function with the same set of arguments twice - once directly and once indirectly.</p>
<p>However, because <code>pytest</code> runs all possible combinations of parameters, this means that the test function will pass twice (<code>1-1</code> and <code>2-2</code>) and fail twice (<code>1-2</code> and <code>2-1</code>).</p>
<p>Is there a way to see the parameter value passed to the <code>square_number</code> fixture in the test function as well?</p>
<hr />
<h3>Example code</h3>
<pre class="lang-py prettyprint-override"><code>import pytest
def calculate_square(num: int):
return num ** 2
args = [1,2]
@pytest.mark.parametrize('number', args)
class TestSomething:
@pytest.fixture(scope='module')
def square_number(self, request):
yield calculate_square(request.param)
@pytest.mark.parametrize('square_number', args, indirect=True)
def test_first_thing(self, number, square_number):
assert number ** 2 == square_number
</code></pre>
<h3>Example code results</h3>
<pre><code>q.py::TestSomething::test_first_thing[1-1] PASSED
q.py::TestSomething::test_first_thing[1-2] FAILED
q.py::TestSomething::test_first_thing[2-1] FAILED
q.py::TestSomething::test_first_thing[2-2] PASSED
</code></pre>
<hr />
<h3>Desired code</h3>
<pre class="lang-py prettyprint-override"><code>
@pytest.mark.parametrize('square_number', args, indirect=True)
def test_first_thing(self, square_number):
number = ??? # Somehow get current 'args' value from 'square_number'
assert number ** 2 == square_number
</code></pre>
<h3>Desired results</h3>
<pre><code>q.py::TestSomething::test_first_thing[1] PASSED
q.py::TestSomething::test_first_thing[2] PASSED
</code></pre>
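<p>One common pattern (a sketch, not the only way) is to let the indirect fixture yield both the input and the derived value as a tuple, so the test no longer needs a second, directly parametrized argument. The fixture below drops the <code>scope='module'</code> from the question only to keep the example minimal:</p>

```python
import pytest


def calculate_square(num: int) -> int:
    return num ** 2


args = [1, 2]


@pytest.fixture
def square_number(request):
    # request.param is the value supplied via indirect parametrization,
    # so the fixture can hand back both the input and the derived value.
    return request.param, calculate_square(request.param)


@pytest.mark.parametrize('square_number', args, indirect=True)
def test_first_thing(square_number):
    number, squared = square_number
    assert number ** 2 == squared
```

<p>Running <code>pytest</code> on this should produce exactly two test IDs, <code>test_first_thing[1]</code> and <code>test_first_thing[2]</code>, matching the desired results above.</p>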
| <python><pytest> | 2023-07-12 09:26:05 | 1 | 12,277 | David Gard |
76,668,997 | 6,703,592 | Python large size array and high time cost | <p>I want to create an <code>np.ndarray</code> as an input for a machine learning model:</p>
<pre><code>array_X = np.array([list(w.values) for w in df[['close', 'volume']].rolling(window=20)][19:-1])
</code></pre>
<p>This is the standard approach in time series, where we use a window of past values as the input to predict a future value. The shape of the array is <code>2 * 20 * 20000000</code>. Building such an array takes a lot of time, and sometimes it fails with an error because the memory consumed by the array is too large.</p>
<p>Is there any way to improve the above (time cost and memory error)?</p>
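<p>One way to avoid both the Python-level list building and the memory blow-up is <code>numpy.lib.stride_tricks.sliding_window_view</code>, which returns a zero-copy view over the underlying buffer instead of materializing every window. A small sketch (the <code>data</code> array below is a hypothetical stand-in for <code>df[['close', 'volume']].to_numpy()</code>, and a window of 5 is used just to keep the demo readable):</p>

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Hypothetical stand-in for df[['close', 'volume']].to_numpy()
data = np.arange(20, dtype=np.float64).reshape(10, 2)

window = 5  # the question uses window=20; 5 keeps the demo small

# Zero-copy view over the data: no per-window Python lists are built.
# The window axis is appended last, so the shape is
# (n_rows - window + 1, n_cols, window).
windows = sliding_window_view(data, window_shape=window, axis=0)

# Rearrange to (n_windows, window, n_cols) if the model expects that layout.
X = windows.transpose(0, 2, 1)
print(X.shape)  # (6, 5, 2)
```

<p>Note that the view itself is free, but any downstream step that forces a contiguous copy (e.g. <code>np.ascontiguousarray</code> or a framework that copies its input) will still allocate the full array, so feeding batches rather than the whole view may be necessary at this scale.</p>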
| <python><pandas><numpy><performance><window-functions> | 2023-07-12 09:20:58 | 1 | 1,136 | user6703592 |
76,668,897 | 1,448,641 | Warning when referencing numpy scalar types | <p>Sphinx emits a warning when processing the following code in nitpicky mode:</p>
<pre><code>import numpy as np
import numpy.typing as npt
def func(x: npt.NDArray[np.double]) -> None:
"""My func docstring."""
</code></pre>
<p>The warning says</p>
<blockquote>
<p>py:class reference target not found: numpy.float64</p>
</blockquote>
<p>Sphinx emits a similar warning for every scalar type.</p>
<p>However, there are references for each scalar type in numpy's objects.inv, for example,:</p>
<blockquote>
<p>numpy.double py:class 1 reference/arrays.scalars.html#$ -</p>
</blockquote>
<blockquote>
<p>numpy.float64 py:attribute 1 reference/arrays.scalars.html#$ -</p>
</blockquote>
<p>One notable thing is that even though I reference <code>np.double</code>, Sphinx looks for <code>np.float64</code>. And as per the warning, it looks for a class, but the object is an attribute. That might be the reason why Sphinx cannot find the reference.</p>
<p>Given that this is indeed the reason for the warning, I have to either force Sphinx to search for the class that I actually reference, or tell it somehow to look for attributes instead of classes in this particular case.</p>
<p>Does anyone know how to do that?</p>
<p>Check out this <a href="https://github.com/Teagum/sphinx-warning" rel="nofollow noreferrer">example repo</a> to reproduce the warning.</p>
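<p>If the mismatch cannot be resolved on the numpy side, a common workaround (sketched here, assuming you are editing your project's <code>conf.py</code>) is to tell Sphinx's nitpicky mode to skip the targets it cannot resolve:</p>

```python
# conf.py -- suppress nitpicky warnings for reference targets that
# Sphinx resolves to the wrong object type (role, target-name) pairs.
nitpick_ignore = [
    ("py:class", "numpy.float64"),
]
```

<p>Sphinx versions that support <code>nitpick_ignore_regex</code> let you cover every numpy scalar alias with a single pattern instead of listing each one, which scales better if many scalar types trigger the warning.</p>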
| <python><numpy><warnings><documentation><python-sphinx> | 2023-07-12 09:08:42 | 0 | 5,519 | MaxPowers |