| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,198,520 | 17,365,694 | Docker build error during deployment on Railway services | <p>Please help: I had previously deployed my Django project on Railway and it worked fine. When I tried to add SendGrid mail functionality using the <code>django-sendgrid-v5</code> package, everything worked well in the development environment, including SendGrid mails such as user signup.</p>
<p>However, when I deployed it on Railway, which uses Nixpacks to manage its default project build, I kept getting a weird error saying that ENV cannot be blank. I followed their deployment procedure for Python, since their deployment infrastructure is similar to Heroku's. I made sure that all the environment variables needed to run the project on their platform were set correctly. I checked my <code>settings.py</code> and <code>.env</code> files to see whether I was missing anything there, but I could not find the error. I even uninstalled <code>django-sendgrid-v5</code>, which I suspected might have introduced the error, but my deployment kept crashing.</p>
<p>Below is the deployment build log, and the error has been persistent.</p>
<pre><code>─────────────────────────────── Nixpacks v1.16.0 ───────────────────────────────
│ setup   │ python310, postgresql, gcc                                          │
────────────────────────────────────────────────────────────────────────────────
│ install │ python -m venv --copies /opt/venv && . /opt/venv/bin/activate       │
│         │ && pip install -r requirements.txt                                  │
────────────────────────────────────────────────────────────────────────────────
│ start   │ python manage.py migrate && gunicorn kester_autos.wsgi              │
────────────────────────────────────────────────────────────────────────────────
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 2.06kB done
#1 DONE 0.0s
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s
Dockerfile:12
--------------------
10 |
11 | ARG DATABASE_URL EMAIL_BACKEND EMAIL_HOST EMAIL_HOST_PASSWORD EMAIL_HOST_USER NIXPACKS_METADATA PYTHONUNBUFFERED RAILWAY_ENVIRONMENT RAILWAY_ENVIRONMENT_ID RAILWAY_ENVIRONMENT_NAME RAILWAY_GIT_AUTHOR RAILWAY_GIT_BRANCH RAILWAY_GIT_COMMIT_MESSAGE RAILWAY_GIT_COMMIT_SHA RAILWAY_GIT_REPO_NAME RAILWAY_GIT_REPO_OWNER RAILWAY_PROJECT_ID RAILWAY_PROJECT_NAME RAILWAY_SERVICE_ID RAILWAY_SERVICE_NAME SECRET_KEY
12 | >>> ENV =$ DATABASE_URL=$DATABASE_URL EMAIL_BACKEND=$EMAIL_BACKEND EMAIL_HOST=$EMAIL_HOST EMAIL_HOST_PASSWORD=$EMAIL_HOST_PASSWORD EMAIL_HOST_USER=$EMAIL_HOST_USER NIXPACKS_METADATA=$NIXPACKS_METADATA PYTHONUNBUFFERED=$PYTHONUNBUFFERED RAILWAY_ENVIRONMENT=$RAILWAY_ENVIRONMENT RAILWAY_ENVIRONMENT_ID=$RAILWAY_ENVIRONMENT_ID RAILWAY_ENVIRONMENT_NAME=$RAILWAY_ENVIRONMENT_NAME RAILWAY_GIT_AUTHOR=$RAILWAY_GIT_AUTHOR RAILWAY_GIT_BRANCH=$RAILWAY_GIT_BRANCH RAILWAY_GIT_COMMIT_MESSAGE=$RAILWAY_GIT_COMMIT_MESSAGE RAILWAY_GIT_COMMIT_SHA=$RAILWAY_GIT_COMMIT_SHA RAILWAY_GIT_REPO_NAME=$RAILWAY_GIT_REPO_NAME RAILWAY_GIT_REPO_OWNER=$RAILWAY_GIT_REPO_OWNER RAILWAY_PROJECT_ID=$RAILWAY_PROJECT_ID RAILWAY_PROJECT_NAME=$RAILWAY_PROJECT_NAME RAILWAY_SERVICE_ID=$RAILWAY_SERVICE_ID RAILWAY_SERVICE_NAME=$RAILWAY_SERVICE_NAME SECRET_KEY=$SECRET_KEY
13 |
14 | # setup phase
--------------------
ERROR: failed to solve: dockerfile parse error on line 12: ENV names can not be blank
Error: Docker build failed
</code></pre>
<p>I have looked up possible solutions, but to no avail. Since Railway builds the project automatically, I don't know how to remove or work around that blank ENV.</p>
<p>Your help sorting this issue out would be really appreciated. Thank you.</p>
| <javascript><python><django><docker><sendgrid> | 2023-09-28 22:09:01 | 1 | 474 | Blaisemart |
77,198,484 | 819,417 | Python regex to match string not preceded by another string, but with other words in between | <p>I tried the accepted answer to <a href="https://stackoverflow.com/questions/72741712/regex-match-word-not-immediately-preceded-by-another-word-but-possibly-preceded">Regex match word not immediately preceded by another word but possibly preceded by that word before</a> but that didn't work.</p>
<pre class="lang-py prettyprint-override"><code>>>> re.search('(?<!nonland) onto the battlefield', "When you cast this spell, reveal the top X cards of your library. You may put a nonland permanent card with mana value X or less from among them onto the battlefield. Then shuffle the rest into your library.")
<re.Match object; span=(144, 165), match=' onto the battlefield'>
>>> re.search('^.*?(?<!nonlaxnd)(?<!\W)\W*\bonto the battlefield\b.*', "When you cast this spell, reveal the top X cards of your library. You may put a nonland permanent card with mana value X or less from among them onto the battlefield. Then shuffle the rest into your library.")
>>> # basically only match good when no bad:
>>> re.search('(?<!bad) good', "other bad other good.")
<re.Match object; span=(15, 20), match=' good'>
</code></pre>
| <python><regex><regex-lookarounds> | 2023-09-28 21:55:45 | 1 | 20,273 | Cees Timmerman |
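One way to express "match `good` only when `bad` does not occur anywhere earlier in the string", without the variable-length lookbehind that Python's `re` module does not support, is an anchored "tempered" scan. This is a sketch of the idea, not necessarily what the linked answer intended:

```python
import re

# Anchor at the start and consume characters one at a time, refusing to
# step past any position where the forbidden word begins. The match then
# succeeds only if the target appears before any occurrence of "bad".
pattern = re.compile(r'^(?:(?!bad).)*?\bgood\b')

print(bool(pattern.search("other good.")))            # True: no "bad" before it
print(bool(pattern.search("other bad other good.")))  # False: "bad" blocks the scan
```

The same shape (`^(?:(?!nonland).)*?onto the battlefield`) would reject the card text above, since `nonland` appears before the target phrase.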
77,198,451 | 12,400,477 | Is it possible to do infix function composition in python without wrapping your functions in some decorator? | <p>Title says it all. Seen lots of answers where folks have implemented <code>f @ g</code>, but this requires wrapping <code>f</code>, <code>g</code> in some <code>infix</code> decorator or class. Is it possible to get this to work? Maybe by patching some class that all functions are an instance of or something?</p>
<p>Basically I'd like to get this to work:</p>
<pre class="lang-py prettyprint-override"><code>f = lambda x: x
g = lambda y: y
def h(a): return a
# <magic>
z = f @ g @ h
assert z(1) == 1
</code></pre>
<p>Key here is that the magic above cannot be specific to/reassign <code>f</code>, <code>g</code>, or <code>h</code>.</p>
<p>Edit: Hey whoever closed this and linked that other question, that isn't what I'm asking at all? I am asking about pointfree function composition. More broadly, the answer to their question is yes, and it appears the answer to my question is no. I don't know how on earth this could be a duplicate if that's the case.</p>
| <python><functional-programming><dsl><infix-notation> | 2023-09-28 21:49:27 | 2 | 615 | Brendan Langfield |
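The "patch a class that all functions are an instance of" idea can be checked directly: CPython refuses attribute assignment on built-in types such as `types.FunctionType`, which is why the answer to the question as posed appears to be no. A quick sketch:

```python
import types

def compose(self, other):
    # hypothetical composition that f @ g would dispatch to, if patching worked
    return lambda x: self(other(x))

try:
    types.FunctionType.__matmul__ = compose
    patched = True
except TypeError:
    # CPython raises: cannot set '__matmul__' attribute of immutable type 'function'
    patched = False

print(patched)  # False
```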
77,198,291 | 525,916 | How do I concatenate column values (all but one) into a list and add it as a column with polars? | <p>I have the input in this format:</p>
<pre><code>import polars as pl
data = {"Name": ['Name_A', 'Name_B','Name_C'], "val_1": ['a',None, 'a'],"val_2": [None,None, 'b'],"val_3": [None,'c', None],"val_4": ['c',None, 'g'],"val_5": [None,None, 'i']}
df = pl.DataFrame(data)
print(df)
shape: (3, 6)
┌────────┬───────┬───────┬───────┬───────┬───────┐
│ Name   ┆ val_1 ┆ val_2 ┆ val_3 ┆ val_4 ┆ val_5 │
│ ---    ┆ ---   ┆ ---   ┆ ---   ┆ ---   ┆ ---   │
│ str    ┆ str   ┆ str   ┆ str   ┆ str   ┆ str   │
╞════════╪═══════╪═══════╪═══════╪═══════╪═══════╡
│ Name_A ┆ a     ┆ null  ┆ null  ┆ c     ┆ null  │
│ Name_B ┆ null  ┆ null  ┆ c     ┆ null  ┆ null  │
│ Name_C ┆ a     ┆ b     ┆ null  ┆ g     ┆ i     │
└────────┴───────┴───────┴───────┴───────┴───────┘
</code></pre>
<p>I want the output as:</p>
<pre><code>shape: (3, 7)
┌────────┬───────┬───────┬───────┬───────┬───────┬──────────────────────┐
│ Name   ┆ val_1 ┆ val_2 ┆ val_3 ┆ val_4 ┆ val_5 ┆ combined             │
│ ---    ┆ ---   ┆ ---   ┆ ---   ┆ ---   ┆ ---   ┆ ---                  │
│ str    ┆ str   ┆ str   ┆ str   ┆ str   ┆ str   ┆ list[str]            │
╞════════╪═══════╪═══════╪═══════╪═══════╪═══════╪══════════════════════╡
│ Name_A ┆ a     ┆ null  ┆ null  ┆ c     ┆ null  ┆ ["a", "c"]           │
│ Name_B ┆ null  ┆ null  ┆ c     ┆ null  ┆ null  ┆ ["c"]                │
│ Name_C ┆ a     ┆ b     ┆ null  ┆ g     ┆ i     ┆ ["a", "b", "g", "i"] │
└────────┴───────┴───────┴───────┴───────┴───────┴──────────────────────┘
</code></pre>
<p>I want to combine all the columns into a list, except the Name column. I have simplified the data for this question, but in reality we have many columns of the val_N format, and generic code where I do not have to list each column name would be great.</p>
| <python><python-polars> | 2023-09-28 21:07:28 | 1 | 4,099 | Shankze |
77,198,268 | 3,270,427 | gRPC - Keep a stream connection alive in Python | <p>I am trying to convert a C# app into a Python app and I am having issues with gRPC.</p>
<p>I need to keep a stream connection alive while the application is running; this app receives broadcast messages from the service.</p>
<p>This is the proto file:</p>
<pre><code>syntax = "proto3";
import "google/protobuf/empty.proto";
option csharp_namespace = "mcGrpcService";
message mcCommandRequest {
int32 id = 1;
string content = 2;
}
message mcCommandResponse {
bool succes = 1;
string message = 2;
}
service grpcCommsService {
rpc XchangeCommand (mcCommandRequest) returns (mcCommandResponse);
rpc XchangeBroadcast (stream google.protobuf.Empty) returns (stream mcCommandRequest);
}
</code></pre>
<p>This is the C# code:</p>
<pre class="lang-cs prettyprint-override"><code>private readonly Channel? _channel;
private readonly grpcCommsService.grpcCommsServiceClient _client;
public McGrpcClient(string host, int port) {
_channel = new Channel(host, port, ChannelCredentials.Insecure);
_client = new grpcCommsService.grpcCommsServiceClient(_channel);
}
public bool Connect(Action<mcCommandRequest> onBroadcastReceived) {
OnBroadcastReceived = onBroadcastReceived;
var token = _cancellationTokenSource.Token;
_broadcastTask = Task.Run(async () => {
var broadcastCall = _client.XchangeBroadcast(cancellationToken: token);
while (await broadcastCall.ResponseStream.MoveNext(token)) {
try {
var broadcastMessage = broadcastCall.ResponseStream.Current;
OnBroadcastReceived?.Invoke(broadcastMessage);
}
catch {
// todo implement error handling
throw;
}
}
}, token);
return true;
}
</code></pre>
<p>And this is my current Python code:</p>
<pre class="lang-py prettyprint-override"><code>async def read_broadcast(request: Iterator[xchange__pb2.mcCommandRequest], stop_event):
while not stop_event.is_set():
msg = await request.next()
print("Broadcast received: ", msg.message)
await asyncio.sleep(0.1)
def connect(stub, stop_event):
iterator = None
request = stub.XchangeBroadcast(iterator)
asyncio.run(
read_broadcast(request, stop_event)
)
if __name__ == '__main__':
channel = grpc.insecure_channel("localhost:9999")
stub = grpcCommsServiceStub(channel)
stop_event = threading.Event()
connect(stub, stop_event)
input()
stop_event.set()
channel.close()
</code></pre>
<p>But I'm getting this error:</p>
<blockquote>
<p>Exception has occurred: _MultiThreadedRendezvous<br />
<_MultiThreadedRendezvous of RPC that terminated with:<br />
status = StatusCode.UNKNOWN<br />
details = "Exception iterating requests!"<br />
debug_error_string = "None"></p>
</blockquote>
<p>I have also tried with <code>for msg in request</code> and got the same error.</p>
<p>Looking at the server logs, it seems the connection is closed right after:</p>
<p><code>request = stub.XchangeBroadcast(iterator)</code></p>
| <python><grpc><grpc-python> | 2023-09-28 21:03:17 | 2 | 10,857 | McNets |
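A likely culprit in the Python code above is that `stub.XchangeBroadcast(iterator)` is handed `None` as the request iterator, so grpc fails with "Exception iterating requests!" as soon as it tries to iterate it. For a bidirectional stream, the client must supply an iterator that stays open, for example a generator fed from a queue. The queue/event plumbing below is illustrative, not taken from the original app:

```python
import queue
import threading

def request_iterator(send_queue, stop_event):
    # Yields outgoing messages; blocks briefly while there is nothing to send,
    # which keeps the client->server side of the stream open until stop is set.
    while not stop_event.is_set():
        try:
            yield send_queue.get(timeout=0.5)
        except queue.Empty:
            continue

# With grpc, the call would then look roughly like:
#   responses = stub.XchangeBroadcast(request_iterator(send_queue, stop_event))
#   for msg in responses:
#       print("Broadcast received:", msg.content)

# Self-contained demonstration of the iterator behaviour:
q, stop = queue.Queue(), threading.Event()
q.put("ping")
it = request_iterator(q, stop)
first = next(it)   # consumes the queued message
stop.set()         # the generator ends on its next loop check
```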
77,198,214 | 6,296,626 | Python Flet have parent expand but child should not | <p>I am using <a href="https://flet.dev/docs/" rel="nofollow noreferrer">Flet</a> as the GUI library for my project. I have the following code:</p>
<pre class="lang-py prettyprint-override"><code>MAIN_GUI = ft.Container(
margin=ft.margin.only(bottom=40),
expand=True,
content=ft.Row([
ft.Card(
elevation=30,
expand=4,
content=ft.Container(
content=ft.Column([
ft.Text("LEFT SIDE, 1st row", size=30, weight=ft.FontWeight.BOLD),
ft.Text("LEFT SIDE 2nd row", size=30, weight=ft.FontWeight.NORMAL)
]),
border_radius=ft.border_radius.all(20),
bgcolor=ft.colors.WHITE24,
padding=45,
)
),
ft.Tabs(
selected_index=0,
animation_duration=300,
expand=3,
tabs=[
ft.Tab(
text="Tab 1",
icon=ft.icons.SEARCH,
content=ft.Container(
content=ft.Card(
elevation=30,
content=ft.Container(
content=ft.Text("Amazing TAB 1 content", size=50, weight=ft.FontWeight.BOLD),
border_radius=ft.border_radius.all(20),
bgcolor=ft.colors.WHITE24,
padding=45,
)
)
),
),
ft.Tab(
text="Tab 2",
icon=ft.icons.SETTINGS,
content=ft.Text("Amazing TAB 2 content"),
),
],
)
])
)
def main(page: ft.Page):
page.padding = 50
page.add(MAIN_GUI)
page.update()
if __name__ == '__main__':
ft.app(target=main)
</code></pre>
<p>This will create a window, which is separated into a left and a right section, where the left section has an <code>ft.Card</code> and the right section has 2 tabs, in which one of them also has an <code>ft.Card</code>.</p>
<p>These Flet widgets have to have <code>expand=True</code> in order for the <code>ft.Tabs</code> to work; however, <strong>I would like the child widgets, specifically the <code>ft.Card</code> widgets, to not expand</strong> and instead have their width and height adapt to the content inside.</p>
| <python><flutter><user-interface><expand><flet> | 2023-09-28 20:53:00 | 1 | 1,479 | Programer Beginner |
77,198,140 | 9,142,198 | TypeError: _named_members() got an unexpected keyword argument 'remove_duplicate' | <p>I'm working on a Super Resolution GAN project and I'm trying to train SRGAN, but I'm getting this error:</p>
<pre><code>[TLX] Linear linear_1: 1 No Activation
[TLX] Conv2d conv1_1: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] Conv2d conv1_2: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] MaxPool2d pool1: kernel_size: (2, 2) stride: (2, 2) padding: SAME return_mask: False
[TLX] Conv2d conv2_1: out_channels : 128 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] Conv2d conv2_2: out_channels : 128 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] MaxPool2d pool2: kernel_size: (2, 2) stride: (2, 2) padding: SAME return_mask: False
[TLX] Conv2d conv3_1: out_channels : 256 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] Conv2d conv3_2: out_channels : 256 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] Conv2d conv3_3: out_channels : 256 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] Conv2d conv3_4: out_channels : 256 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] MaxPool2d pool3: kernel_size: (2, 2) stride: (2, 2) padding: SAME return_mask: False
[TLX] Conv2d conv4_1: out_channels : 512 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] Conv2d conv4_2: out_channels : 512 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] Conv2d conv4_3: out_channels : 512 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] Conv2d conv4_4: out_channels : 512 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] MaxPool2d pool4: kernel_size: (2, 2) stride: (2, 2) padding: SAME return_mask: False
[TLX] Restore pre-trained weights
[TLX] Loading (3, 3, 3, 64) in conv1_1
[TLX] Loading (64,) in conv1_1
Traceback (most recent call last):
File "train.py", line 113, in <module>
VGG = vgg.VGG19(pretrained=True, end_with='pool4', mode='dynamic')
File "D:\E\MtechDAIICT\Sem3\ComputerVision\Project\SRGAN\vgg.py", line 261, in vgg19
restore_model(model, layer_type='vgg19')
File "D:\E\MtechDAIICT\Sem3\ComputerVision\Project\SRGAN\vgg.py", line 175, in restore_model
if len(model.all_weights) == len(weights1):
File "D:\E\anaconda\envs\opencv-env2\lib\site-packages\tensorlayerx\nn\core\core_torch.py", line 149, in all_weights
for name, param in self.named_parameters(recurse=True):
File "D:\E\anaconda\envs\opencv-env2\lib\site-packages\torch\nn\modules\module.py", line 2112, in named_parameters
gen = self._named_members(
TypeError: _named_members() got an unexpected keyword argument 'remove_duplicate'
</code></pre>
<p>link: <a href="https://github.com/tensorlayer/SRGAN/blob/7444cc758ab493f5e29227663e37cb90034b64a1/train.py#L111" rel="nofollow noreferrer">https://github.com/tensorlayer/SRGAN/blob/7444cc758ab493f5e29227663e37cb90034b64a1/train.py#L111</a></p>
<p>Could you please help me? Thank you.</p>
| <python><pytorch><generative-adversarial-network><vgg-net> | 2023-09-28 20:40:57 | 1 | 504 | krishna veer |
77,198,126 | 22,466,650 | How to pad 0s to guarantee the same length while iterating over two lists? | <p>My inputs are:</p>
<pre><code>x = ['A', 'B', 'B', 'C', 'D', 'D', 'D']
y = [ 4 , 3 , 5 , 9 , 1 , 6 , 2 ]
</code></pre>
<p>And I'm just trying to make this dictionary.</p>
<pre><code>{'A': [4, 0, 0], 'B': [3, 5, 0], 'C': [9, 0, 0], 'D': [1, 6, 2]}
</code></pre>
<p>The logic: we collect the values for each letter and pad with <code>0</code> as many times as needed to reach a length equal to the maximum count of any letter (<code>3</code> here, for <code>D</code>).</p>
<p>I made the code below but I got a different result :</p>
<pre><code>from collections import Counter
c = Counter(x)
max_length = c.most_common()[0][-1]
my_dict = {}
for ix, iy in zip(x,y):
my_dict[ix] = [iy] + [0 for _ in range(max_length-1)]
{'A': [4, 0, 0], 'B': [5, 0, 0], 'C': [9, 0, 0], 'D': [2, 0, 0]}
</code></pre>
<p>Can you tell me what's wrong with it, please?</p>
| <python><list><dictionary> | 2023-09-28 20:39:08 | 2 | 1,085 | VERBOSE |
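Two things go wrong in the attempt above: `my_dict[ix] = [...]` overwrites the entry on every duplicate key, so only the last value per letter survives, and `[0 for _ in range(max_length-1)]` always appends the same number of zeros regardless of how many values were collected. Grouping first and padding afterwards fixes both; a minimal sketch:

```python
from collections import defaultdict

x = ['A', 'B', 'B', 'C', 'D', 'D', 'D']
y = [4, 3, 5, 9, 1, 6, 2]

groups = defaultdict(list)
for k, v in zip(x, y):
    groups[k].append(v)          # accumulate instead of overwriting

width = max(len(vals) for vals in groups.values())
result = {k: vals + [0] * (width - len(vals)) for k, vals in groups.items()}
print(result)  # {'A': [4, 0, 0], 'B': [3, 5, 0], 'C': [9, 0, 0], 'D': [1, 6, 2]}
```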
77,198,060 | 8,372,455 | VOLTTRON set_point | <p>I am trying to use the actuator agent to write to a building automation point with <a href="https://volttron.readthedocs.io/en/stable/volttron-api/services/ActuatorAgent/actuator.agent.html#actuator.agent.ActuatorAgent.set_point" rel="nofollow noreferrer">set_point</a>. This is a snippet from my Python code:</p>
<pre><code> _log.info(f'{log_prefix} - meter_topic: {meter_topic}')
_log.info(f'{log_prefix} - self.electric_meter_value: {self.electric_meter_value}')
self.vip.rpc.call('platform.actuator',
'set_point',
self.core.identity,
meter_topic,
self.electric_meter_value
).get(timeout=300)
</code></pre>
<p>This is the traceback in my volttron.log file:</p>
<pre><code>2023-09-28 20:10:26,482 (rtuloadshedagent-0.1 49669) __main__ INFO: [WRITE POWER METER VAL INFO] - meter_topic: 500001/input-power-meter
2023-09-28 20:10:26,482 (rtuloadshedagent-0.1 49669) __main__ INFO: [WRITE POWER METER VAL INFO] - self.electric_meter_value: 0
2023-09-28 20:10:26,484 (platform_driveragent-4.0 49556) volttron.platform.vip.agent.subsystems.rpc ERROR: unhandled exception in JSON-RPC method 'set_point':
Traceback (most recent call last):
File "/home/geb/volttron/volttron/platform/vip/agent/subsystems/rpc.py", line 181, in method
return method(*args, **kwargs)
File "/home/geb/.volttron/agents/21d572a4-52ef-4e75-b603-f5d56bfc9951/platform_driveragent-4.0/platform_driver/agent.py", line 468, in set_point
return self.instances[path].set_point(point_name, value, **kwargs)
KeyError: '500001'
</code></pre>
<p>This is how the points are defined in VOLTTRON, but I am unsure why I get the error, unless it is looking for the longer full string <code>devices/slipstream_internal/slipstream_hq/500001</code>. Any troubleshooting tips appreciated.</p>
<pre><code>vctl config store platform.driver registry_configs/500001.csv registry_configs/500001.csv --csv
vctl config store platform.driver devices/slipstream_internal/slipstream_hq/500001 devices/500001
</code></pre>
| <python><volttron> | 2023-09-28 20:26:57 | 2 | 3,564 | bbartling |
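The `KeyError: '500001'` suggests the platform driver keys its instances by the full device path it was registered under (`devices/slipstream_internal/slipstream_hq/500001` in the `vctl config store` command above), so a topic of just `500001/input-power-meter` cannot be resolved. A hedged guess, reconstructed only from the paths shown above, is to pass the full path (minus the `devices/` prefix) plus the point name:

```python
# Hypothetical reconstruction of the topic from the registration path shown
# above; the campus/building segments come from the vctl command, and the
# point name from the original meter_topic.
campus, building, device = "slipstream_internal", "slipstream_hq", "500001"
point = "input-power-meter"
meter_topic = "/".join([campus, building, device, point])
print(meter_topic)  # slipstream_internal/slipstream_hq/500001/input-power-meter
```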
77,197,977 | 5,197,034 | Azure Functions Blob is triggered 5 times | <p>Since we switched to Azure Functions with the Python V2 programming model, we have experienced a lot more problems during and after deployment than before. One that keeps puzzling me is that a blob trigger function is executed 5 times every time a new file is added to blob storage. The triggers are always 10 minutes apart.</p>
<p>The code starts an Azure ML training job. I can't tell how long the function runs each time, because the runs don't show up in monitoring.</p>
<p>Has anyone else experienced this behavior?</p>
| <python><azure-functions><azure-blob-storage><azure-blob-trigger> | 2023-09-28 20:10:48 | 2 | 2,603 | pietz |
77,197,942 | 901,426 | Python method to multithread two vastly different timed functions for MQTT | <p>I have an application that samples two IOs at vastly different rates, which need to operate within an MQTT reporting loop that sends messages every 5 seconds. One function <strong>must</strong> sample every 10 ms and increments a counter based on its IO port value. The other can run right before the MQTT message is sent, but takes about a full second to get the data it needs, parse it, and compile a message. I'm not sure how to thread the two so that I'm not losing resolution on the critical 10 ms method while the other runs its 1 s loop, ultimately having them both ready to drop their MQTT messages into the queue at the 5-second mark. Frankly, I don't even know where to begin properly, but here's what I've attempted as it relates to this question:</p>
<pre class="lang-py prettyprint-override"><code>import time
import concurrent.futures
import threading
elapsedTimeBeforeTX = 0
while True:
while elapsedTimeBeforeTX < 4.900:
with concurrent.futures.ThreadPoolExecutor() as executor:
if elapsedTimeBeforeTX % .01 == 0: #<-- do this every 10ms
start1 = time.process_time()
tickCount = executor.submit(sampleTickPort, tickPort)
tickTotal += tickCount.result()
end1 = time.process_time()
elapsedTimeBeforeTX += (end1 - start1)
print(f'*** tick sample time: {elapsedTimeBeforeTX} seconds ***')
if 4 > elapsedTimeBeforeTX > 3.5: #<-- trigger this between 3.5 and 4 sec
start2 = time.process_time()
data = executor.submit(getJ1939data)
j1939Data += data.result()
end2 = time.process_time()
loop2 = (end2 - start2)
print(f'*** j1939 sample time: {loop2} seconds ***')
# time to send MQTT message
buildMessage(tickCount, j1939Data)
payload.timestamp = int(round(time.time() * 1000))
sendMQTTMessage(payload)
</code></pre>
<p>When I run this... well... <code>sampleTickPort</code> never stops, <code>getJ1939data</code> never runs, and the fan on my system really starts to howl. O_o That's when I <em><strong>really</strong></em> know it's not working right. Somewhere my logic is failing. Can someone please point me in the right direction? I also think the place where I increment <code>elapsedTimeBeforeTX</code> is wrong, but I don't know <em>why</em> it's wrong or where to place it so it does what it's supposed to.</p>
<p>Let me know if you need more information.</p>
| <python><multithreading><mqtt> | 2023-09-28 20:01:46 | 0 | 867 | WhiteRau |
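One common pattern for this kind of problem is to give the fast sampler a dedicated thread that sleeps between samples, rather than re-creating a `ThreadPoolExecutor` context on every pass (the `with` block also shuts the pool down each iteration, and `elapsedTimeBeforeTX % .01 == 0` is almost never true for floats, which is part of why the loop misbehaves). A stripped-down sketch of the structure, with the IO reads and MQTT calls stubbed out:

```python
import threading
import time

def run_sampler(stop, counts, interval=0.01):
    # Fast loop: sample the tick port every ~10 ms on its own thread.
    while not stop.is_set():
        counts["ticks"] += 1          # stand-in for sampleTickPort(tickPort)
        time.sleep(interval)

stop = threading.Event()
counts = {"ticks": 0}
sampler = threading.Thread(target=run_sampler, args=(stop, counts), daemon=True)
sampler.start()

# Main loop: in the real app this would sleep until ~3.5 s, collect the slow
# J1939 data (~1 s), then publish both readings at the 5 s mark.
time.sleep(0.3)                       # shortened stand-in for the 5 s cycle
snapshot = counts["ticks"]            # value that would go into the MQTT payload

stop.set()
sampler.join()
print(snapshot)
```

The counter is only touched by one writer thread here; with more writers, a `threading.Lock` around the increment would be needed.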
77,197,889 | 7,800,726 | How create hyperlinks in a loop? Tkinter widget, Python | <p>The following loop unintentionally associates all the links with the last link found in brand_url[i], how can this be corrected?</p>
<pre><code>for i in databrand.index:
brand_name = databrand['Brands'][i]
brand_url = databrand['Url'][i]
link_text = brand_name + " " + brand_url
my_link = tk.Label(right_frame, text= link_text,
fg="blue", cursor="hand2", font=['Times', 22, 'underline'])
my_link.pack()
my_link.bind("<Button-1>",
lambda e: webbrowser.open_new(brand_url))
text = tk.Label(right_frame, text=databrand['Brands'][i], font=['Times', 22, 'bold'])
text.pack()
print(databrand['Brands'][i])
</code></pre>
| <python><tkinter> | 2023-09-28 19:52:41 | 1 | 558 | Ian Gallegos |
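This is the classic late-binding closure pitfall: every `lambda e: webbrowser.open_new(brand_url)` closes over the same `brand_url` variable, which by the time a link is clicked holds the last value of the loop. Binding the current value as a default argument, `lambda e, url=brand_url: webbrowser.open_new(url)`, fixes it. The effect can be shown without Tkinter:

```python
urls = ["https://a.example", "https://b.example"]

late, fixed = [], []
for url in urls:
    late.append(lambda: url)        # closes over the variable: last value wins
    fixed.append(lambda u=url: u)   # default arg captures the value right now

print([f() for f in late])   # ['https://b.example', 'https://b.example']
print([f() for f in fixed])  # ['https://a.example', 'https://b.example']
```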
77,197,792 | 3,398,271 | How to debug python tests with vimspector? | <h3>Sample code</h3>
<p><strong>main.py</strong></p>
<pre><code>#!/usr/bin/python3
if __name__ == "__main__":
test = "hello, world"
print(test)
</code></pre>
<p><strong>sometest.py</strong></p>
<pre><code>import pytest
def test_something():
print("running da test...")
foo = 42
assert foo==42
assert 1==2
</code></pre>
<p><strong>.vimspector.json</strong></p>
<pre><code>{
"configurations": {
"main":{
"adapter": "debugpy",
"configuration": {
"name": "run the executable",
"type": "python",
"request": "launch",
"python.pythonPath": "/usr/bin/python3",
"program": "~/p/pydebugtest/main.py"
}
},
"test":{
"adapter": "debugpy",
"configuration": {
"name": "run the test",
"module": "unittest",
"type": "python",
"request": "launch",
"python": "pytest",
"args": [
"-q",
"${Test}"
]
}
}
}
}
</code></pre>
<h3>Problem</h3>
<p>Probably, the <code>test</code> configuration in <code>.vimspector.json</code> is wrong, because I cannot start the debugging session in Vim. This is what happens:</p>
<ol>
<li>Go to line 4 in <code>sometest.py</code></li>
<li><code>:call vimspector#ToggleBreakpoint()</code></li>
<li><code>:call vimspector#Launch()</code></li>
<li>Prompt asks "Which launch configuration?", type <code>2</code></li>
<li>Prompt asks "Enter value for Test", type <code>sometest.py</code></li>
<li>After some time, the following error message appears:</li>
</ol>
<pre><code>βββββββββββββββββββββββββββββββββββββββββX
β Initialize for session test (0) Failed β
β β
β Timeout β
β β
β Use :VimspectorReset to close β
ββββββββββββββββββββββββββββββββββββββββββ
</code></pre>
<p>And the following message in the status bar:</p>
<p><code>Protocol error: duplicate response for request 1</code></p>
<h3>Question</h3>
<p>What would be the right way to configure Vimspector so that I can debug the sample test within Vim?</p>
<h3>Remarks</h3>
<p>1.) Running the test from the terminal works fine:</p>
<pre><code>$ pytest -q sometest.py
F [100%]
================================================================= FAILURES ==================================================================
______________________________________________________________ test_something _______________________________________________________________
def test_something():
print("running da test...")
foo = 42
assert foo==42
> assert 1==2
E assert 1 == 2
sometest.py:7: AssertionError
----------------------------------------------------------- Captured stdout call ------------------------------------------------------------
running da test...
========================================================== short test summary info ==========================================================
FAILED sometest.py::test_something - assert 1 == 2
1 failed in 0.06s
</code></pre>
<p>2.) Debugging <code>main.py</code> works fine as well with the following steps:</p>
<ol>
<li>Go to line 4</li>
<li><code>:call vimspector#ToggleBreakpoint()</code></li>
<li><code>:call vimspector#Launch()</code></li>
<li>Prompt asks "Which launch configuration?", type <code>1</code></li>
<li>Type default answers for the handling of exceptions.</li>
<li>At this point, Vimspector opens and stops at line 4 as expected.</li>
</ol>
<p>3.) In case that matters, I can provide the relevant Vim configuration (<code>vim --version</code>), but I believe it is not important, given that I can debug the main program with Vimspector, so it must be an issue with the json configuration.</p>
| <python><debugging><vim><pytest><vimspector> | 2023-09-28 19:36:31 | 1 | 1,732 | Attilio |
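For reference, one configuration shape commonly reported to work for pytest under debugpy sets `python` to the interpreter (not to `pytest`, as in the `test` block above) and runs pytest as a module; whether this resolves the timeout in this particular setup is untested here:

```json
{
    "test": {
        "adapter": "debugpy",
        "configuration": {
            "name": "run the test",
            "type": "python",
            "request": "launch",
            "python": "/usr/bin/python3",
            "module": "pytest",
            "args": [ "-q", "${Test}" ]
        }
    }
}
```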
77,197,753 | 5,032,387 | Why is ground_truth_key in AzureML text generation component required | <p>I'm not following why the <a href="https://github.com/Azure/azureml-assets/blob/main/assets/training/finetune_acft_hf_nlp/components/pipeline_components/text_generation/README.md" rel="nofollow noreferrer">ground_truth_key</a> in AzureML's text generation pipeline component is a required argument. I understand it makes sense in summarization, translation, question_answering scenarios, but for text generation, which is what I'm using it for, just the input field should suffice.</p>
<p><a href="https://i.sstatic.net/0BhFX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0BhFX.png" alt="enter image description here" /></a></p>
<p>What am I missing?</p>
| <python><azure-machine-learning-service><azure-ml-pipelines><azure-ml-component> | 2023-09-28 19:27:25 | 0 | 3,080 | matsuo_basho |
77,197,748 | 22,466,650 | Aren't the values supposed to sum up for each bar? | <p>I was expecting, for example, the <code>F</code> bar to show <code>8+9=17</code> and not just <code>9</code> (the last value for <code>F</code>).</p>
<pre><code>import matplotlib.pyplot as plt
x = ['A', 'B', 'C', 'D', 'D', 'D', 'D', 'E', 'F', 'F']
y = [ 5 , 8 , 7, 9, 9, 2, 7, 8, 8, 9 ]
fig, ax = plt.subplots()
ax.bar(x, y)
plt.show();
</code></pre>
<p>Can someone explain the logic, please?</p>
<p><a href="https://i.sstatic.net/oTBqJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTBqJ.png" alt="enter image description here" /></a></p>
| <python><matplotlib> | 2023-09-28 19:26:44 | 1 | 1,085 | VERBOSE |
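`ax.bar` does not aggregate: each (x, y) pair draws its own rectangle, and bars that share a category are painted on top of one another, so the visible height at `F` is the tallest of 8 and 9, not their sum. Summing has to happen before plotting, for example:

```python
from collections import defaultdict

x = ['A', 'B', 'C', 'D', 'D', 'D', 'D', 'E', 'F', 'F']
y = [ 5 ,  8 ,  7 ,  9 ,  9 ,  2 ,  7 ,  8 ,  8 ,  9 ]

totals = defaultdict(int)
for cat, val in zip(x, y):
    totals[cat] += val            # aggregate duplicate categories first

print(dict(totals))  # {'A': 5, 'B': 8, 'C': 7, 'D': 27, 'E': 8, 'F': 17}
# ax.bar(list(totals), list(totals.values())) would then plot the summed heights
```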
77,197,729 | 2,410,605 | Trying to set the Chrome download directory for Python Seleniumbase but it continues to use the default | <p>I'm still learning how to use Selenium Python and am having an issue changing the default download directory. I'm hoping a fresh or more experienced set of eyes can help.</p>
<p>I'm rocking with Python 3.10.11 / Selenium 4.10 / Chrome (whatever latest version is)</p>
<p>Below is the relevant code. The entire script runs with no errors, but instead of using the directory path I supply, it just creates a subfolder called "downloaded_files" under whatever directory the script is running from and dumps the files there.</p>
<pre><code>from selenium import webdriver
from seleniumbase import Driver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
download_path = "\\\E275proddwdb\kyote"
# Set download folder
op = webdriver.ChromeOptions()
config = {"download.default_directory":download_path, "safebrowsing.enabled":"false"}
op.add_experimental_option("prefs", config)
op.headless = False
##Call Chrome Browser
service = Service()
options = webdriver.ChromeOptions()
browser = Driver(browser="chrome", headless=False)
browser.get("https://www.kyote.org/mc/login.aspx?url=kplacementMgmt.aspx")
</code></pre>
<p>Any help in figuring out how I messed this up would be greatly, greatly appreciated!</p>
| <python><selenium-webdriver><selenium-chromedriver><seleniumbase> | 2023-09-28 19:23:29 | 1 | 657 | JimmyG |
77,197,723 | 7,120,031 | How do I stream HuggingFacePipeline output to a LangChain Dataframe Agent? | <p>I'm trying to mimic the LangChain Agent + Streamlit demo outlined <a href="https://python.langchain.com/docs/integrations/callbacks/streamlit" rel="nofollow noreferrer">in this documentation</a>, except with a local HuggingFace model using the <code>HuggingFacePipeline</code> and Langchain Dataframe Agent.</p>
<p>I am very close to matching the original functionality, save for one thing: I cannot figure out how to stream the model's thoughts and actions. My Streamlit page stops at <code>Thinking...</code> until the model completes its inference, after which the result of that Agent-Model cycle appears and the next cycle begins.</p>
<p>I tried using <code>TextStreamer</code> and <code>TextIteratorStreamer</code>, but they don't seem to work correctly with the Agent. I also tried to put the <code>StreamingStdOutCallbackHandler()</code> callback into the <code>HuggingFacePipeline</code> which had no effect.</p>
<p>Here is the relevant code:</p>
<pre class="lang-py prettyprint-override"><code>@st.cache_resource
def getLangChainPipeline(model_config: dict):
    # model_config simply contains the name, path, etc of the HF model
    # Loading the model and tokenizer
    USE_CUSTOM_MODEL = True
    CUSTOM_MODEL_PATH = model_config['path']
    MODEL_PATH = (
        CUSTOM_MODEL_PATH if USE_CUSTOM_MODEL else model_config['name']
    )
    extra_args = {}
    if cuda_available:
        extra_args['device_map'] = 'auto'
    tokenizer = AutoTokenizer.from_pretrained(model_config['name'], trust_remote_code=model_config['trust_remote_code'])
    model = AutoModelForCausalLM.from_pretrained(MODEL_PATH,
                                                 torch_dtype=torch.float16,
                                                 trust_remote_code=model_config['trust_remote_code'],
                                                 **extra_args)
    # for inference
    model.eval()
    # tried using TextStreamer but that didn't work
    streamer = TextStreamer(tokenizer, skip_prompt=True)
    pipe = pipeline(
        "text-generation",
        model=model,
        tokenizer=tokenizer,
        # streamer=streamer,
        max_new_tokens=200,
        trust_remote_code=model_config['trust_remote_code'],
    )
    # tried using callbacks=[StreamingStdOutCallbackHandler()] below which didn't work either
    return HuggingFacePipeline(pipeline=pipe)

# load pipeline
with st.spinner('Loading model...'):
    hf = getLangChainPipeline(codellama_13b_instruct)

# create agent
agent = create_pandas_dataframe_agent(hf, df, verbose=True)

# form for using to type prompt
with st.form(key="form"):
    "Working with 50000 rows of sample data."
    user_input = st.text_input("Query")
    submit_clicked = st.form_submit_button("Submit Question")

# run Agent when submit is clicked
if submit_clicked:
    with st.empty():
        st.chat_message("user").write(user_input)
        with st.chat_message("assistant"):
            st_callback = StreamlitCallbackHandler(st.container())
            response = agent.run(user_input, callbacks=[st_callback])
            st.info(response)
</code></pre>
<p>I would appreciate any help figuring this out.</p>
| <python><huggingface-transformers><streamlit><langchain><huggingface> | 2023-09-28 19:23:11 | 0 | 808 | Burhan |
77,197,671 | 7,472,392 | Splitting a column with delimiter and place a value in the right column | <p>I have a data frame with a column that can contain any combination of three values (a, b, and/or c), separated by a comma delimiter.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'col1':['a,b,c', 'b', 'a,c', 'b,c', 'a,b']})
</code></pre>
<p>I want to split this column based on ','</p>
<pre><code>df['col1'].str.split(',', expand=True)
</code></pre>
<p>The problem with this is that the new columns are filled in order of appearance, from left to right, whereas I want each value placed in a fixed column based on the value itself.</p>
<p>For example, all a's in the first column, b's in the second column, and c's in the third column.</p>
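<p>For illustration (a sketch of one possible approach, not necessarily the intended answer), the value-based placement can be expressed with pandas' <code>Series.str.get_dummies</code>, which creates one indicator column per distinct value regardless of position; each flag can then be mapped back to its letter:</p>

```python
import pandas as pd

df = pd.DataFrame({'col1': ['a,b,c', 'b', 'a,c', 'b,c', 'a,b']})

# one column per distinct value, in sorted order (a, b, c), with 1/0 flags
dummies = df['col1'].str.get_dummies(sep=',')

# turn each flag back into the letter itself (or None when absent)
result = dummies.apply(lambda col: col.map({1: col.name, 0: None}))
print(result)
```

<p>Here <code>result</code> has columns <code>a</code>, <code>b</code>, <code>c</code> containing the letter where present and <code>None</code> where absent.</p>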
| <python><pandas><dataframe> | 2023-09-28 19:14:03 | 3 | 1,511 | Yun Tae Hwang |
77,197,648 | 21,575,627 | 'from . import XYZ' | <p>I've run into the above line. I'm searching a codebase, <a href="https://github.com/gem5/gem5" rel="nofollow noreferrer">gem5</a>, to find where occurrences of <code>m5.defines</code> are coming from. Inside a directory named <code>m5</code>, in the <code>__init__.py</code>, I see <code>from . import defines</code>, but there is no file or directory named <code>defines</code> there. <code>grep</code> also yields nothing useful.</p>
<p>Where is this coming from?</p>
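<p>For illustration, <code>from . import defines</code> is a package-relative import: it needs a module named <code>defines</code> to be importable from the <code>m5</code> package at runtime, but that module does not have to exist as a checked-in file; it can be generated into a build tree (gem5's build system generates <code>defines.py</code> during compilation, which would explain why <code>grep</code> over the source finds nothing). A toy package, not gem5 itself:</p>

```python
import os
import sys
import tempfile

# build a throwaway package whose __init__.py does 'from . import defines',
# with defines.py "generated" next to it rather than checked into the source
pkg_root = tempfile.mkdtemp()
os.makedirs(os.path.join(pkg_root, "m5"))
with open(os.path.join(pkg_root, "m5", "__init__.py"), "w") as f:
    f.write("from . import defines\n")
with open(os.path.join(pkg_root, "m5", "defines.py"), "w") as f:
    f.write("buildEnv = {'generated': True}\n")

sys.path.insert(0, pkg_root)
import m5  # succeeds only because defines.py is importable from the package

print(m5.defines.buildEnv)
```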
| <python> | 2023-09-28 19:09:43 | 1 | 1,279 | user129393192 |
77,197,466 | 1,471,980 | Omit data types when inserting Pandas data frame to SQL Server table? | <p>I am loading data from Excel into a SQL Server table. All of the SQL table columns' data types are set to <code>nvarchar</code>.</p>
<p>In case I have "na" or empty cells, I am setting them to 'NULL' values in Pandas as below:</p>
<pre><code>df=df.fillna("NULL")
</code></pre>
<p>It inserts some of the rows, but it fails on others; the error message is below:</p>
<blockquote>
<p>SQL Server Conversion failed when converting the nvarchar value 'NULL' to data type int.</p>
</blockquote>
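<p>A minimal sketch of what is likely going wrong: <code>fillna("NULL")</code> stores the literal four-character string <code>'NULL'</code>, which SQL Server then tries (and fails) to convert wherever a numeric value is expected, whereas a Python <code>None</code> is what database drivers such as pyodbc translate into a real SQL NULL:</p>

```python
import pandas as pd

df = pd.DataFrame({"a": [1, None]})

# fillna("NULL") produces the literal string 'NULL', not a database NULL
print(df.fillna("NULL")["a"].tolist())   # [1.0, 'NULL']

# Python None is what most drivers send to the database as SQL NULL
df2 = df.astype(object).where(df.notnull(), None)
print(df2["a"].tolist())                 # [1.0, None]
```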
<p>Any ideas how I could address this error?</p>
| <python><sql-server><pandas> | 2023-09-28 18:38:48 | 1 | 10,714 | user1471980 |
77,197,439 | 4,431,535 | What is the correct mypy annotation for a tuple where one member's type is conditioned on another member's value? | <p><strong>Update (2023-08-28 2:48 PM):</strong> Responding to Jason Harper's logical-but-sadly-not-working suggestion about using literals with more details and a better example.</p>
<hr />
<p>I'm trying to create a rust-esque expectation tuple with two possible return values that will be returned from functions manipulating a finite state machine:</p>
<ul>
<li><code>(False, None)</code> on failure</li>
<li><code>(True, int)</code> on success.</li>
</ul>
<p>How do I define a mypy type such that mypy will correctly narrow to
the expected type for the second value, given the first? Using <code>Literal[False]</code> and <code>Literal[True]</code> doesn't seem to work, as illustrated in the below toy example:</p>
<p>Example use:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Literal, Optional, Tuple, Union
from typing import TYPE_CHECKING

StatusType = Union[Tuple[Literal[False], None], Tuple[Literal[True], int]]


def try_to_check_status(test_val: int) -> StatusType:
    """Check if test_val is positive"""
    if test_val < 0:
        return False, None
    return True, test_val


def main() -> None:
    """Entry point"""
    ok, status = try_to_check_status(-1)
    if TYPE_CHECKING:
        reveal_type(ok)      # mypy should reveal that this is a bool
        reveal_type(status)  # mypy should reveal that this is None
    print(f"{ok}, {status}")

    ok, status = try_to_check_status(1)
    if TYPE_CHECKING:
        reveal_type(ok)      # mypy should reveal that this is a bool
        reveal_type(status)  # mypy should reveal that this is an int
    print(f"{ok}, {status}")


if __name__ == "__main__":
    main()
</code></pre>
<p>When run with mypy:</p>
<pre class="lang-bash prettyprint-override"><code>$ mypy status_check.py
status_check.py:21: note: Revealed type is "builtins.bool"
status_check.py:22: note: Revealed type is "Union[None, builtins.int]" # I want this to be None!
status_check.py:27: note: Revealed type is "builtins.bool"
status_check.py:28: note: Revealed type is "Union[None, builtins.int]" # I want this to be builtins.int!
Success: no issues found in 1 source file
</code></pre>
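<p>One observation worth adding (about mypy behavior, not guaranteed for every version): narrowing a tagged union of tuples can work when the tuple is kept intact and discriminated by an index check, while unpacking into two separate names severs the relationship mypy tracks between them, which matches the <code>Union[None, builtins.int]</code> results above. A runtime-checkable sketch:</p>

```python
from typing import Literal, Tuple, Union

StatusType = Union[Tuple[Literal[False], None], Tuple[Literal[True], int]]

def try_to_check_status(test_val: int) -> StatusType:
    if test_val < 0:
        return False, None
    return True, test_val

# keeping the tuple intact lets mypy discriminate on result[0];
# 'ok, status = ...' creates two independent variables instead
result = try_to_check_status(1)
if result[0]:
    print(result[1] + 1)  # recent mypy versions can narrow result[1] to int here
```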
<p>What's the right annotation here?
Thanks so much for your advice!</p>
| <python><types><mypy> | 2023-09-28 18:34:55 | 0 | 514 | pml |
77,197,198 | 1,889,720 | Why do I get an `npm ERR!` when running the Python quick start tutorial in Cloud Run? | <p>I am building my first job in Cloud Run. I am using Python. I am following the <a href="https://cloud.google.com/run/docs/quickstarts/jobs/build-create-python" rel="nofollow noreferrer">Quickstart</a>.</p>
<p>I am able to successfully deploy and run the job from source, but the logs show me <code>npm ERR!</code>s that make no sense to me. I tried deploying a simpler HelloWorld as well, but get the same errors.</p>
<pre><code>npm ERR! Missing script: "start"
npm ERR!
npm ERR! Did you mean one of these?
npm ERR! npm star # Mark your favorite packages
npm ERR! npm stars # View packages marked as favorites
npm ERR!
npm ERR! To see a list of scripts, run:
npm ERR! npm run
npm ERR! A complete log of this run can be found in: /home/cnb/.npm/_logs/2023-09-28T07_27_51_794Z-debug-0.log
Container called exit(1).
</code></pre>
<p>I have no idea where it wants me to run <code>npm run</code>. The deployment and running of the job appear to work just fine, although I don't see any of my code actually run. I see nothing about npm in the Quickstart guide.</p>
<pre><code>Job [job-quickstart] has successfully been deployed.
Execution [job-quickstart-6n4mj] has successfully started running.
</code></pre>
<p>I installed the <a href="https://cloud.google.com/sdk/docs/install" rel="nofollow noreferrer">gcloud CLI</a> like they said, and put it in the home directory like they preferred. My job's code is in another directory, and I am deploying from that other directory.</p>
<p>Any ideas what I am doing wrong?</p>
| <python><npm><google-cloud-platform><google-cloud-run> | 2023-09-28 17:48:29 | 1 | 7,436 | Evorlor |
77,196,777 | 4,701,426 | x-axis shows 1970 as the starting year of time series dates | <p>The dataframe <code>df</code> has a datetime index ranging from 2019-12-02 to 2020-01-30 and I can plot it fine using mplfinance's plot function:</p>
<p><a href="https://i.sstatic.net/S7RjK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/S7RjK.png" alt="enter image description here" /></a></p>
<p>But as soon as I set the tick locators, the tick values get messed up and the years only show "1970."</p>
<p><a href="https://i.sstatic.net/v29gB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v29gB.png" alt="enter image description here" /></a></p>
<p>I read <a href="https://stackoverflow.com/questions/69083927/matplot-lib-generates-date-in-x-axis-from-jan-1970">this</a>, <a href="https://stackoverflow.com/questions/64919511/dateformatter-is-bringing-1970-as-year-not-the-original-year-in-the-dataset">this</a>, and <a href="https://stackoverflow.com/questions/69101233/using-dateformatter-resets-starting-date-to-1970">this</a> but still can't figure it out because I'm actually using matplotlib's pyplot to make the plot (instead of pandas), my dates are actually DateTimeIndex, and I already added Year as a separate x-axis.</p>
<p><strong>Goal</strong>: The format of the second plot above is exactly what I want but the days, months, and years are obviously incorrect. What am I doing wrong?
Here's reproducible code:</p>
<pre><code>import yfinance as yf
import mplfinance as mpf
from datetime import datetime
import matplotlib.pylab as plt
from matplotlib.dates import DateFormatter
import matplotlib.dates as mdates
# downloading some sample stock price data from Yahoo Finance
start_date = datetime(2019, 12, 1)
end_date = datetime(2020, 1, 31)
df = yf.download('AAPL', start=start_date, end=end_date)
# using the mplfinance library to make a candle chart out of the data
fig, ax = plt.subplots(figsize = (20, 10))
mpf.plot(df, ax = ax, type='candle', style = 'charles')
# plotting months as major ticks
months = mdates.MonthLocator()
months_fmt = mdates.DateFormatter('%b')
ax.xaxis.set_major_locator(months)
ax.xaxis.set_major_formatter(months_fmt)
# plotting days as minor ticks
days = mdates.DayLocator(interval=1)
days_fmt = mdates.DateFormatter('%d')
ax.xaxis.set_minor_locator(days)
ax.xaxis.set_minor_formatter(days_fmt)
plt.tick_params(pad=10)
# create a second x-axis beneath the first x-axis to show the year in YYYY format
years = mdates.YearLocator()
years_fmt = mdates.DateFormatter('%Y')
sec_xaxis = ax.secondary_xaxis(-0.08)
sec_xaxis.xaxis.set_major_locator(years)
sec_xaxis.xaxis.set_major_formatter(years_fmt)
# Hide the second x-axis spines and ticks
sec_xaxis.spines['bottom'].set_visible(False)
sec_xaxis.tick_params(length=0, pad=10, labelsize=15)
</code></pre>
| <python><matplotlib><datetime><xticks><mplfinance> | 2023-09-28 16:31:53 | 0 | 2,151 | Saeed |
77,196,766 | 3,177,186 | could not find a writer for the specified extension in function 'cv::imwrite_' - filename HAS the correct extension - Python, CV2 | <p>I originally tried holding the thumbnails in memory, but that didn't work, so now I'm creating temporary thumbnail files in a nearby directory. Anyway, the image creation works for photos, but it's failing for mp4 video and I'm not sure why. This is the error:</p>
<pre><code>'D:\\Stuff\\Pictures\\Transit\\20230729_020700.mp4'
'D:\\Stuff\\Pictures\\Transit\\thumbs\\20230729_020700.mp4'
{'name': '20230729_020700', 'ext': 'mp4', 'name_ext': '20230729_020700.mp4', 'type': 'movie', 'dtime': '20230729_140700', 'width': 1920, 'height': 1080}
Traceback (most recent call last):
File "C:\stuff\Working\htdocs\PSSort\picsort\main.py", line 179, in <module>
list_dest_dir()
File "C:\stuff\Working\htdocs\PSSort\picsort\main.py", line 108, in list_dest_dir
this_file = get_file_deets(file)
^^^^^^^^^^^^^^^^^^^^
File "C:\stuff\Working\htdocs\PSSort\picsort\main.py", line 71, in get_file_deets
cv2.imwrite(thumb_path,frame)
cv2.error: OpenCV(4.8.1) D:\a\opencv-python\opencv-python\opencv\modules\imgcodecs\src\loadsave.cpp:696: error: (-2:Unspecified error) could not find a writer for the specified extension in function 'cv::imwrite_'
</code></pre>
<p>In my code, I've put prints to show that the input file and output file names and paths are correct AND have the extension (which I stress because every post I could find on this error was always because someone forgot the extension). So what do I do when the extension is fine?</p>
<p>Code is below:</p>
<pre class="lang-py prettyprint-override"><code>#input a filename. Something.jpg, thisisamov.mp4, etc.
#output adds an entry
def get_file_deets(name):
    if settings['dest_dir'][-1] == '\\':
        full_path_filename = settings['dest_dir']+name
        thumb_path = settings['dest_dir']+'thumbs\\'+name
    else:
        full_path_filename = settings['dest_dir']+'\\'+name
        thumb_path = settings['dest_dir']+'\\thumbs\\'+name
    pre_r(full_path_filename)
    pre_r(thumb_path)
    temp = {}
    temp['name'], temp['ext'] = name.rsplit('.')
    # keep track of the whole name
    temp['name_ext'] = name
    if temp['ext'] != 'mp4':
        image = Image.open(full_path_filename)
        temp['type'] = 'image'
        # Grab it's "taken on" date
        exif = image.getexif();
        temp['dtime'] = exif.get(306).replace(":", "").replace(' ','_')
        temp['width'],temp['height'] = image.size
        # first make a thumbnail, then B64 THAT (not the whole image)
        image.thumbnail((100,100))
        image.save(thumb_path)
        image.close()
    else:
        temp['type'] = 'movie'
        temp['dtime'] = datetime.fromtimestamp(os.path.getmtime(full_path_filename)).strftime("%Y%m%d %H%M%S").replace(' ','_')
        probe = ffmpeg.probe(full_path_filename)
        video_streams = [stream for stream in probe["streams"] if stream["codec_type"] == "video"][0]
        temp['width'], temp['height'] = video_streams['width'], video_streams['height']
        pre_r(temp)
        # Time in seconds where you want to capture the thumbnail (e.g., 10 seconds)
        thumbnail_time = 10
        cap = cv2.VideoCapture(full_path_filename)
        cap.set(cv2.CAP_PROP_POS_MSEC, thumbnail_time * 1000)
        ret, frame = cap.read()
        if ret:
            cv2.imwrite(thumb_path,frame)
        cap.release()
    temp['size'] = f"{os.path.getsize(full_path_filename)/1000000:.2f}MBs"
    return temp
</code></pre>
<p>Bonus: I'd love to know a cleaner way to do that if statement at the beginning. I toyed with a ternary, but it was ugly so I gave up on that.</p>
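<p>On the bonus question, and tied to the error itself: <code>os.path.join</code> absorbs a trailing separator, which replaces the whole if/else, and since <code>cv2.imwrite</code> chooses its encoder from the file extension, a thumbnail taken from a video frame needs an image extension (OpenCV has no <code>.mp4</code> image writer). A sketch (the helper name is hypothetical):</p>

```python
import os

def build_paths(dest_dir, name):
    # os.path.join handles a present-or-absent trailing separator,
    # replacing the manual backslash check
    full_path = os.path.join(dest_dir, name)
    # cv2.imwrite picks its writer from the extension, so a video's
    # thumbnail must be saved under an image format such as .jpg
    stem, _ = os.path.splitext(name)
    thumb_path = os.path.join(dest_dir, "thumbs", stem + ".jpg")
    return full_path, thumb_path

print(build_paths("D:/Pictures/Transit/", "20230729_020700.mp4"))
print(build_paths("D:/Pictures/Transit", "20230729_020700.mp4"))
```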
| <python><opencv><mp4> | 2023-09-28 16:29:57 | 1 | 2,198 | not_a_generic_user |
77,196,501 | 4,551,325 | How to extract paragraphs from pdf? | <p>I have a pdf downloaded from SSRN here: <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4337484" rel="nofollow noreferrer">https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4337484</a>. The default file name is "SSRN-id4337484.pdf".</p>
<p>I then extract the text from the downloaded file using the <code>pdftotext</code> package:</p>
<pre><code>from pdftotext import PDF
with open("SSRN-id4337484.pdf", "rb") as f:
    pdf = PDF(f)
</code></pre>
<p>Each page is lumped into one long string, with "\r\n" sequences indicating line breaks. For example, if I print page 2:</p>
<pre><code>[In ]: pdf[2]
[Out]:
'Journal of AI\r\narticles which have been published in English Language.\r\n\r\n3. WHAT IS CHATGPT?\r\nTo begin, we asked ChatGPT to explain itself. The prompt we started with was "What is ChatGPT?". ChatGPT\r\ngenerated this response.\r\n"ChatGPT is a large language model trained by OpenAI. It is designed to generate human-like text based on a\r\ngiven prompt or context. It can be used for a variety of natural language processing tasks, such as text\r\ncompletion, conversation generation, and language translation". (By ChatGPT January 23, 2023).\r\nAs a follow up, we used another prompt "How does it work?" To know how conversational it is, we\r\nintentionally wrote "How does it work?" instead of "How does ChatGPT work?". It returned with the response\r\nbelow.\r\n"ChatGPT is based on a transformer architecture, a type of neural network that has been shown to be very\r\neffective for natural language processing tasks. It is trained on a large dataset of text, such as books and\r\narticles, and learns to generate text that is similar to the text it was trained on. When given a prompt or\r\ncontext, the model processes the input and generates a response. The response is generated one word at a\r\ntime, with the model predicting the next word based on the input and the words it has generated so far. The\r\nmodel uses attention mechanisms to focus on the most relevant parts of the input and generate a response\r\nthat is coherent and appropriate for the given context. After the training, the model can be fine-tuned on a\r\nspecific task such as question answering or dialogue generation by providing it with task-specific examples\r\nand fine-tuning the model on this data.
It can also generate text in multiple languages by fine-tuning the\r\nmodel on the multilingual dataset or by providing the model with the language code" (By ChatGPT January\r\n23, 2023).\r\nThen the topic of conversation was changed to discuss COVID-19 and impacts on educational deliveries using\r\nthe prompt "Now discuss how COVID-19 has changed educational pedagogies." Within seconds it wrote the\r\npassage below.\r\nThe COVID-19 pandemic has had a significant impact on the way education is delivered, with many schools\r\nand universities shifting to online or remote learning in order to comply with social distancing guidelines. This\r\nhas resulted in a rapid shift to digital and online pedagogy, as educators have had to quickly adapt to this\r\nnew mode of instruction.\r\nOne of the major changes that has occurred is the increased use of technology for teaching and learning. This\r\nhas included the use of online learning platforms, such as Zoom, Google Classroom and other video\r\nconferencing tools, as well as the use of digital materials, such as e-books, videos, and interactive activities.\r\nThis has allowed for increased flexibility and accessibility in the delivery of education, as students can now\r\naccess materials and participate in classes from anywhere with internet access.\r\nAdditionally, there has been a shift towards more self-directed and asynchronous learning, where students\r\nare given more autonomy in their learning and are able to complete coursework at their own pace. This\r\napproach allows students to work at a time that is convenient for them, and it can be beneficial for students\r\nwho may have other responsibilities or who may have difficulty attending live classes. However, the shift to\r\nonline learning has also highlighted the digital divide, where students in low-income or rural areas may not\r\nhave access to the technology or internet connectivity needed to fully participate in remote learning.
And it\r\nalso brought challenges such as lack of interaction, lack of motivation and lack of accountability.\r\nOverall, COVID-19 has accelerated the adoption of technology and digital pedagogy in education, but it has\r\nalso highlighted the need for equity and accessibility in the delivery of education. The education system will\r\nhave to continue to adapt and evolve to meet the needs of students, teachers and parents during these\r\n\r\n54\r\n\r\n\x0c'
</code></pre>
<p>I can of course print it nicely using the line breaks:</p>
<pre><code>[In ]: pdf[2].split('\r\n')
[Out]:
['Journal of AI',
'articles which have been published in English Language.',
'',
'3. WHAT IS CHATGPT?',
'To begin, we asked ChatGPT to explain itself. The prompt we started with was "What is ChatGPT?". ChatGPT',
'generated this response.',
'"ChatGPT is a large language model trained by OpenAI. It is designed to generate human-like text based on a',
'given prompt or context. It can be used for a variety of natural language processing tasks, such as text',
'completion, conversation generation, and language translation". (By ChatGPT January 23, 2023).',
'As a follow up, we used another prompt "How does it work?" To know how conversational it is, we',
'intentionally wrote "How does it work?" instead of "How does ChatGPT work?". It returned with the response',
'below.',
'"ChatGPT is based on a transformer architecture, a type of neural network that has been shown to be very',
'effective for natural language processing tasks. It is trained on a large dataset of text, such as books and',
'articles, and learns to generate text that is similar to the text it was trained on. When given a prompt or',
'context, the model processes the input and generates a response. The response is generated one word at a',
'time, with the model predicting the next word based on the input and the words it has generated so far. The',
'model uses attention mechanisms to focus on the most relevant parts of the input and generate a response',
'that is coherent and appropriate for the given context. After the training, the model can be fine-tuned on a',
'specific task such as question answering or dialogue generation by providing it with task-specific examples',
'and fine-tuning the model on this data. It can also generate text in multiple languages by fine-tuning the',
'model on the multilingual dataset or by providing the model with the language code" (By ChatGPT January',
'23, 2023).',
'Then the topic of conversation was changed to discuss COVID-19 and impacts on educational deliveries using',
'the prompt "Now discuss how COVID-19 has changed educational pedagogies." Within seconds it wrote the',
'passage below.',
'The COVID-19 pandemic has had a significant impact on the way education is delivered, with many schools',
'and universities shifting to online or remote learning in order to comply with social distancing guidelines. This',
'has resulted in a rapid shift to digital and online pedagogy, as educators have had to quickly adapt to this',
'new mode of instruction.',
'One of the major changes that has occurred is the increased use of technology for teaching and learning. This',
'has included the use of online learning platforms, such as Zoom, Google Classroom and other video',
'conferencing tools, as well as the use of digital materials, such as e-books, videos, and interactive activities.',
'This has allowed for increased flexibility and accessibility in the delivery of education, as students can now',
'access materials and participate in classes from anywhere with internet access.',
'Additionally, there has been a shift towards more self-directed and asynchronous learning, where students',
'are given more autonomy in their learning and are able to complete coursework at their own pace. This',
'approach allows students to work at a time that is convenient for them, and it can be beneficial for students',
'who may have other responsibilities or who may have difficulty attending live classes. However, the shift to',
'online learning has also highlighted the digital divide, where students in low-income or rural areas may not',
'have access to the technology or internet connectivity needed to fully participate in remote learning. And it',
'also brought challenges such as lack of interaction, lack of motivation and lack of accountability.',
'Overall, COVID-19 has accelerated the adoption of technology and digital pedagogy in education, but it has',
'also highlighted the need for equity and accessibility in the delivery of education. The education system will',
'have to continue to adapt and evolve to meet the needs of students, teachers and parents during these',
'',
'54',
'',
'\x0c']
</code></pre>
<p>My question is: how do I extract the paragraphs from those long strings into a list, so that each paragraph is one element of that list? Note that a paragraph can sometimes span two pages.</p>
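<p>A heuristic sketch of one possible approach (assumptions: blank lines, i.e. <code>\r\n\r\n</code>, separate blocks; <code>\x0c</code> marks a page end; a block that follows a paragraph lacking sentence-final punctuation continues that paragraph; page-number lines such as <code>'54'</code> would need to be filtered out first):</p>

```python
import re

def extract_paragraphs(pages):
    # join pages so a paragraph cut by a page break can be re-merged
    text = "".join(pages).replace("\x0c", "")
    blocks = re.split(r"(?:\r\n){2,}", text)
    # collapse intra-block line breaks into spaces
    blocks = [" ".join(b.split("\r\n")).strip() for b in blocks if b.strip()]

    paragraphs = []
    for block in blocks:
        # a block that starts mid-sentence continues the previous paragraph
        if paragraphs and not paragraphs[-1].rstrip().endswith((".", "?", "!", ":")):
            paragraphs[-1] = paragraphs[-1] + " " + block
        else:
            paragraphs.append(block)
    return paragraphs

pages = [
    "First paragraph line one\r\nline two.\r\n\r\nA paragraph that is cut\r\n\r\n\x0c",
    "by the page break ends here.\r\n",
]
print(extract_paragraphs(pages))
```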
| <python><parsing><pdf><nlp> | 2023-09-28 15:54:45 | 1 | 1,755 | data-monkey |
77,196,497 | 8,037,595 | How to connect from VS Code dev Container to Postgres in Docker | <p>I am trying to connect to a dockerized Postgres from Python. However, I have not managed to make it work the way I believe it should. As I am just a beginner with Docker, I first used the postgres-pgadmin yaml from the <a href="https://github.com/docker/awesome-compose/tree/master/postgre" rel="nofollow noreferrer">awesome-compose GitHub repo</a>, as follows:</p>
<pre><code>version: '3.1'

services:

  postgres:
    image: postgres:16.0
    container_name: postgresdb
    restart: always
    ports:
      - "5432:5432"
    environment:
      - PGDATA=/var/lib/postgresql/data/pgdata
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - d:/DB/dev:/var/lib/postgresql/data

  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4:latest
    environment:
      - PGADMIN_DEFAULT_EMAIL=myemail@some.com
      - PGADMIN_DEFAULT_PASSWORD=pass
    ports:
      - "5050:80"
    restart: always
</code></pre>
<p>With this, I can access pgadmin using localhost:5050, and I see that postgres is up and running without errors.</p>
<p>Now, the only way to set up the Postgres server in pgAdmin that works for me is to use postgresdb as the host. Using 127.0.0.1, localhost, or the IP shown in inspect does not work. Also, no matter what I try, I can't connect at all from Python. Here is the Python code:</p>
<pre><code>import psycopg2
connection = psycopg2.connect(database="postgres", user='postgres', password='postgres', host='127.0.0.1', port=5432)
</code></pre>
<p>I also checked whether port 5432 is listening using <code>netstat -ad</code> and I can see both</p>
<pre><code>0.0.0.0:5050
0.0.0.0:5432
</code></pre>
<p>These two entries don't show up when I take the containers down, so I strongly assume they come from postgres and pgadmin. So, I don't understand why I can't connect to Postgres through Python.</p>
<p>Then I thought there would be some "oddity" (from a beginner perspective at least) with docker compose, so I spun up another container on the postgres image (using docker desktop directly, not docker compose). This initially also did not work, until I changed the host port to 5433 (i.e. used the <code>5433:5432</code> mapping). And now I was able to connect to the server via python, but had to use the "docker" IP, i.e. something like <code>172.17.0.3</code> AND it only worked with port 5432, despite the mapping to 5433.</p>
<p>I would have expected that I could use something like <code>127.0.0.1:5433</code> to connect, but that does not work.</p>
<p>As for the system, I'm on Windows 11 with WSL2, and docker desktop 4.23.0. I have the suspicion that there is something in Windows 11 that causes the issues, but I don't know where to even start. Btw, did turn off virus and firewall, but to no avail.</p>
| <python><postgresql><docker><visual-studio-code> | 2023-09-28 15:54:22 | 2 | 597 | Peter K. |
77,196,429 | 747,228 | How to await on Future from a different thread? | <p>I have code that returns a <code>Twisted</code> <code>Deferred</code>, and I need to await that Deferred so I can use the usual <code>async-await</code> pattern further on. So I am converting the <code>Deferred</code> returned from <code>Twisted</code> to a <code>Future</code>, like this:</p>
<pre><code>future = twisted_deferred.asFuture(asyncio.get_event_loop())
result = await future
</code></pre>
<p>After that, the code hangs on the <code>await</code> expression, even though I can see in the debugger that <code>set_result</code> is called on this particular future. After additional debugging, I discovered I am calling <code>await</code> on the Future on Thread1,
and in the <code>Deferred</code> result callback (where <code>set_result</code> is called) I end up on the MainThread, so I assume this is the problem. The event loop is the same as the one the Future was created with, though.</p>
<p>Could someone explain why <code>await</code> is never returned? And how to properly implement the workflow so it actually awaits and returns the result when its ready?</p>
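<p>For reference, an <code>asyncio.Future</code> is not thread-safe: <code>set_result</code> must run on the thread that runs the event loop, otherwise the loop may never be woken up to resume the <code>await</code>. The standard way to resolve a future from another thread is <code>loop.call_soon_threadsafe</code>; a minimal sketch (with a plain thread standing in for the Twisted side):</p>

```python
import asyncio
import threading

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()

    def from_other_thread():
        # set_result must run on the loop's thread; call_soon_threadsafe
        # marshals the call over and wakes the loop up
        loop.call_soon_threadsafe(fut.set_result, 42)

    threading.Thread(target=from_other_thread).start()
    return await fut

print(asyncio.run(main()))  # 42
```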
<p>Thanks in advance.</p>
| <python><async-await><python-asyncio><twisted> | 2023-09-28 15:43:31 | 1 | 2,028 | unresolved_external |
77,196,410 | 421,398 | How can I set authentication options for an azure container app via Python SDK? | <p>We're using the Python ContainerAppsAPIClient library to deploy a container app to our Azure estate, and it works great; however, I can't find any documentation on how to set up authentication on the container app, either during or after creation. In the portal it's super easy to do, and I have found some models that appear to support it, but I'm not sure what other model I need to inject them into (if any).</p>
<p>We're creating the ContainerApp in this kind of fashion:</p>
<pre><code>container_app = ContainerApp(
    location=container_location,
    tags=tags,
    environment_id=f"/subscriptions/{subscription_id}/resourceGroups/{shared_infra_resource_group_name}/providers/Microsoft.App/managedEnvironments/{container_app_environment}",
    configuration=Configuration(
        active_revisions_mode="Single",
        secrets=secrets_config,
        registries=[registry_credentials],
        ingress=ingress,
    ),
    template=template,
    identity=identity,
)
</code></pre>
<p>Possible models I've found to use were <code>AzureActiveDirectoryLogin</code>, <code>AuthConfig</code>, etc., but I have no idea where to put them; the documentation is pretty much non-existent around this.</p>
<p>More specifically, we want to put the container app behind our Azure Active Directory login (on the same subscription), using the SDK. Below is what I did manually in the portal that I'd like to recreate using the SDK:</p>
<p><a href="https://i.sstatic.net/lU3kB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lU3kB.png" alt="A screenshot of the azure portal" /></a></p>
<p>I've tried the following code:</p>
<pre><code>client.container_apps_auth_configs.create_or_update(
    resource_group_name=resource_group_name,
    container_app_name=container_app_name,
    auth_config_name="current",  # Code: AuthConfigInvalidName. Message: The name 'label-studio' is disallowed for authconfigs, please use the name 'current'.
    auth_config_envelope=AuthConfig(
        platform=AuthPlatform(
            enabled=True
        ),
        global_validation=GlobalValidation(
            unauthenticated_client_action="Return401"
        ),  # Some more settings for Auth if you want 'em
        identity_providers=IdentityProviders(
            azure_active_directory=AzureActiveDirectory(
                enabled=True,
                registration=AzureActiveDirectoryRegistration(
                    open_id_issuer="https://sts.windows.net/REDACTED-UUID/v2.0"  # The azure AD app registration uri
                ),
                login=AzureActiveDirectoryLogin(),
            )
        ),
        login=Login(),
        http_settings=HttpSettings()
    )
)
</code></pre>
<p>Except that this results in the portal showing this on the auth page:</p>
<pre><code>All traffic is blocked, and requests will receive an HTTP 401 Unauthorized. This is because there is an authentication requirement, but no identity provider is configured. Click 'Remove authentication' to disable this feature and remove the access restriction. Or click 'Add identity provider' to configure a way for clients to authenticate themselves.
</code></pre>
<p>No idea why, as it looks like I did provide an identity provider.</p>
| <python><azure><active-directory><azure-container-apps> | 2023-09-28 15:40:58 | 1 | 4,122 | John Hunt |
77,196,392 | 3,555,115 | Find all lines that match specific format | <p>I have an output that looks like the below after reading the result of a command:</p>
<pre><code>fp2 = os.popen(command)
data = fp2.read()
fp2.close()
print data
data output:
LineEntry: [0x0000000002758261-0x0000000002758268): /a/b/c:7921:14
LineEntry: [0x0000000002756945-0x0000000002756960): /f/b/c:6545:10
LineEntry: [0x00000000027562c9-0x00000000027562d0): /k/b/c
LineEntry: [0x00000000027562c9-0x00000000027562d0): /c/d/f
....
....
</code></pre>
<p>I am interested only in strings that look like the first two entries.</p>
<pre><code>LineEntry: [0x0000000002758261-0x0000000002758268): /a/b/c:7921:14
LineEntry: [0x0000000002756945-0x0000000002756960): /f/b/c:6545:10
</code></pre>
<p>I tried</p>
<pre><code>k = re.findall(r'[^:]+:[^:]+:[^:]+:[^:]+:[^:]+', data)
</code></pre>
<p>But it returns no output.</p>
<p>Are there any effective ways of filtering lines that look exactly like the first two line entries?</p>
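<p>A sketch of one possible pattern (an assumption: the trailing <code>:line:column</code> suffix is what distinguishes the wanted entries), anchored per line with <code>re.M</code>:</p>

```python
import re

data = """LineEntry: [0x0000000002758261-0x0000000002758268): /a/b/c:7921:14
LineEntry: [0x0000000002756945-0x0000000002756960): /f/b/c:6545:10
LineEntry: [0x00000000027562c9-0x00000000027562d0): /k/b/c
LineEntry: [0x00000000027562c9-0x00000000027562d0): /c/d/f
"""

# keep only lines that end in :<line>:<column>
wanted = re.findall(
    r"^LineEntry: \[0x[0-9a-f]+-0x[0-9a-f]+\): \S+:\d+:\d+$",
    data,
    re.M,
)
for line in wanted:
    print(line)
```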
| <python> | 2023-09-28 15:38:35 | 2 | 750 | user3555115 |
77,196,314 | 317,779 | TypeError: HybridTableMaskedLM.__init__() takes 1 positional argument but 2 were given | <p>Here I am asking about this recurrent, sometimes obvious, issue. I'm executing this <a href="https://colab.research.google.com/drive/1WTg-YnfNVX4M0P0m1mEJYDQVkWr4HKXl" rel="nofollow noreferrer">notebook</a>. However, I get the error:</p>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-24-7a312b60bb5c> in <cell line: 3>()
1 config = PretrainedConfig.from_pretrained("./config.json")
2 print(config)
----> 3 model = HybridTableMaskedLM(config)
4 tinybert_model = AutoModelForMaskedLM.from_pretrained("huawei-noah/TinyBERT_General_4L_312D")
5 model.load_pretrained(tinybert_model.state_dict())
1 frames
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in __init__(self, *args, **kwargs)
447
448 if self.call_super_init is False and bool(args):
--> 449 raise TypeError("{}.__init__() takes 1 positional argument but {} were"
450 " given".format(type(self).__name__, len(args) + 1))
451
TypeError: HybridTableMaskedLM.__init__() takes 1 positional argument but 2 were given
</code></pre>
<p>when executing the line:</p>
<pre><code>model = HybridTableMaskedLM(PretrainedConfig.from_pretrained("./config.json"))
</code></pre>
<p>The definition of HybridTableMaskedLM in the previous block is :</p>
<pre><code>class HybridTableMaskedLM(nn.Module):
def __init__(self, config):
super(HybridTableMaskedLM, self).__init__(config)
self.table = HybridTableModel(config)
self.cls = TableMLMHead(config)
self.init_weights()
</code></pre>
<p>and the one in model.py is</p>
<pre><code>class HybridTableMaskedLM(BertPreTrainedModel):
def __init__(self, config):
super(HybridTableMaskedLM, self).__init__(config)
self.table = HybridTableModel(config)
self.cls = TableMLMHead(config)
self.init_weights()
</code></pre>
<p>Thank you for your help.</p>
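<p>The traceback points into <code>torch/nn/modules/module.py</code>, which suggests the redefined class (the one based on <code>nn.Module</code>) is the one being instantiated: <code>nn.Module.__init__</code> takes no positional arguments, so forwarding <code>config</code> to it via <code>super().__init__(config)</code> fails, whereas <code>BertPreTrainedModel.__init__</code> does accept a config. A torch-free stand-in showing the difference (either keep the <code>BertPreTrainedModel</code> base, or stop passing <code>config</code> up when the base is <code>nn.Module</code>):</p>

```python
class Module:                      # stand-in for torch.nn.Module: __init__ takes no config
    def __init__(self):
        self.initialized = True

class Broken(Module):
    def __init__(self, config):
        super().__init__(config)   # raises TypeError: base takes no positional args

class Fixed(Module):
    def __init__(self, config):
        super().__init__()         # pass nothing up to the plain-Module base
        self.config = config

try:
    Broken({"hidden_size": 312})
except TypeError as err:
    print("Broken:", err)

print("Fixed:", Fixed({"hidden_size": 312}).config)
```

<p>With the real classes, the minimal change would be <code>super().__init__()</code> in the <code>nn.Module</code> version, though then <code>self.init_weights()</code> (a <code>BertPreTrainedModel</code> method) would also need attention.</p>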
| <python> | 2023-09-28 15:26:53 | 0 | 787 | Rafael Angarita |
77,196,148 | 4,716,625 | Mutually Exclusive Geo-Spatial Markers with folium.LayerControl() in Python | <h2>Goal</h2>
<p>I want to use a radio button to toggle the color and the pop-up of geo-spatial markers in a folium plot, using Python 3.6 and folium version <code>0.9.0</code>. I am attempting this by creating two separate data frames that have different color values and adding them both to their own <code>folium.FeatureGroup()</code>.</p>
<h2>Problem</h2>
<p>When the plot is generated, all of the radio button options just take on the last <code>folium.FeatureGroup()</code>'s values.</p>
<br/>
<h2>Reproducible Example</h2>
<p>Create two separate data frames with the same location names and geo-coordinates, but different categories and their own color values:</p>
<pre><code>import pandas as pd
from branca.colormap import LinearColormap
# Create base data frame
df = pd.DataFrame({'loc_name': ['Palmdale', 'Pacoima', 'West L.A.', 'Metro L.A.', 'Lincoln Heights', 'El Monte',
'Pomona', 'Inglewood', 'South L.A.', 'East L.A.', 'Lynwood', 'Norwalk', 'Wilmington',
'Long Beach'],
'loc_lat': [34.5800111, 34.2662363, 34.0364075, 34.0487368, 34.0735519, 34.0733908,
34.0620289, 33.930828, 33.990596, 34.021968, 33.930476, 33.902052, 33.7814812,
33.857974],
'loc_lon': [-118.0915039, -118.4224082, -118.4364343, -118.3091383, -118.2161354, -118.0418393,
-117.7610335, -118.325139, -118.3311216, -118.164376, -118.175329, -118.083686, -118.2626444,
-118.185061]})
# Create df1 and its variable coloring, based on its values
# ------------------------------------------------
df1 = df.copy()
df1['category'] = 'volume'
df1['value'] = [23148, 78629, 38670, 176483, 136961, 64221, 29217,
131525, 172852, 113231, 144485, 63213, 51900, 88446]
df1_colormap = LinearColormap(colors=['blue', 'red'],
vmin=df1['value'].min(),
vmax=df1['value'].max())
def df1_color_tag(row):
return df1_colormap(row['value'])
df1['color'] = df1.apply(df1_color_tag, axis=1)
# Create df2 and its variable coloring, based on its values
# ------------------------------------------------
df2 = df.copy()
df2['category'] = 'index'
df2['value'] = range(1, 15)
df2_colormap = LinearColormap(colors=['green', 'yellow'],
vmin=df2['value'].min(),
vmax=df2['value'].max())
def df2_color_tag(row):
return df2_colormap(row['value'])
df2['color'] = df2.apply(df2_color_tag, axis=1)
</code></pre>
<p>Create a <code>folium</code> plot, setting <code>df1</code> and <code>df2</code> values in <code>folium.FeatureGroups()</code>:</p>
<pre><code>import folium
from folium import plugins
# Create a map
la_map = folium.Map(location=[34.24, -118.091233], zoom_start=9)
category_feature_groups = {}
# Create category feature group markers for df1 dataframe
# ----------------------------------------------------------------------
feature_group_df1 = folium.FeatureGroup(name='volume', overlay=False)
folium.TileLayer(tiles='OpenStreetMap').add_to(feature_group_df1)
feature_group_df1.add_to(la_map)
#Loop through each row of crc data to plot CRC location markers
for i,row in df1.iterrows():
custom_icon = folium.DivIcon(
icon_size=(40, 40),
icon_anchor=(20, 20), # Position of the icon center
html=f"""
<div style="width: 40px;
height: 40px;
background-color: {row['color']};
border-radius: 50%;
display: flex;
justify-content: center;
align-items: center;
color: white;
font-weight: bold;">
</div>
"""
)
#Setup the content of the popup
iframe = folium.IFrame(row['loc_name'] + '<br/><br/>' + \
row['category'] + ': ' + str(row['value']))
#Initialise the popup using the iframe
popup = folium.Popup(iframe, min_width=200, max_width=200)
#Add each row to the map
folium.Marker(location=[row['loc_lat'],row['loc_lon']],
popup = popup,
icon = custom_icon).add_to(la_map)
feature_group_df1.add_to(la_map)
category_feature_groups['volume'] = feature_group_df1
# Create category feature group markers for df2 dataframe
# ----------------------------------------------------------------------
feature_group_df2 = folium.FeatureGroup(name='index', overlay=False)
folium.TileLayer(tiles='OpenStreetMap').add_to(feature_group_df2)
feature_group_df2.add_to(la_map)
#Loop through each row of crc data to plot CRC location markers
for i,row in df2.iterrows():
custom_icon = folium.DivIcon(
icon_size=(40, 40),
icon_anchor=(20, 20), # Position of the icon center
html=f"""
<div style="width: 40px;
height: 40px;
background-color: {row['color']};
border-radius: 50%;
display: flex;
justify-content: center;
align-items: center;
color: white;
font-weight: bold;">
</div>
"""
)
#Setup the content of the popup
iframe = folium.IFrame(row['loc_name'] + '<br/><br/>' + \
row['category'] + ': ' + str(row['value']))
#Initialise the popup using the iframe
popup = folium.Popup(iframe, min_width=200, max_width=200)
#Add each row to the map
folium.Marker(location=[row['loc_lat'],row['loc_lon']],
popup = popup,
icon = custom_icon).add_to(la_map)
feature_group_df2.add_to(la_map)
category_feature_groups['index'] = feature_group_df2
# Add Layer Control
folium.LayerControl(collapsed=False, overlay=True).add_to(la_map)
la_map
</code></pre>
<p>The problem is that the output displays the <code>df2</code> (i.e., last defined <code>folium.FeatureGroup()</code>) values for each of the radio button selections:</p>
<p><a href="https://i.sstatic.net/Jv1Ii.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jv1Ii.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/06mH9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/06mH9.png" alt="enter image description here" /></a></p>
| <python><folium> | 2023-09-28 15:02:59 | 1 | 1,223 | bshelt141 |
77,196,111 | 1,380,613 | Convert python post request to php/curl | <p>I have a python script I have modified and it is working correctly. It just posts an xml file to a url (using basic authentication) and saves the xml response.</p>
<p>The environment that I need it on does not support python in that way (no CGI support, no system-level access), so I am attempting to convert it to PHP. I think I am close, but I am just getting an error 500 from the API I am posting to, with no other details.</p>
<p>I have intentionally entered a different password and the response changes to tell me it is not authorized, so I know the authorization is working correctly.</p>
<p>I've obviously changed the URL to something generic in this post.</p>
<p><strong>My working python:</strong></p>
<pre><code>#!/usr/bin/env python3
import requests
def handle_failure(response):
if response.status_code == 401:
print("Export failed: invalid authentication. "
"Please check username and password")
elif response.status_code == 400 and "NO REPORTS FOUND" in response.text:
print("No reports matched the search parameters.")
else:
print("Export failed.")
print("HTTP code:", response.status_code)
print("Response body:", response.text)
def export(username, password, filename=None):
files = {
"xmlRequest": open("uuid.xml", "rb")
}
url = "https://url.com/rest-service/report/export/xml"
try:
response = requests.post(
url,
auth=requests.auth.HTTPBasicAuth(username, password),
files=files)
except requests.exceptions.ConnectionError:
print("Error: can't connect to", url)
return
if response.ok:
if filename is None:
print(response.text)
else:
with open(filename, "w") as f:
f.write(response.text)
print("Success. Response written to " + filename + ".")
else:
handle_failure(response)
if __name__ == "__main__":
export("username", "password", "thereport.xml")
</code></pre>
<p><strong>My Current PHP:</strong></p>
<pre><code><?php
$username = 'username';
$password = 'password';
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, "https://url.com/rest-service/report/export/xml");
curl_setopt($curl, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
curl_setopt($curl, CURLOPT_USERPWD, "$username:$password");
curl_setopt($curl, CURLOPT_POST, true);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
$data = '
<?xml version="1.0" encoding="UTF-8"?>
<request>
<export-report>
<report mode="original"/>
<search path="report.uuid" value="43109950-adsfa-4524f6-bfc4-adsfafeasd" operator="eq" datatype="string" conjunction="true" />
</export-report>
</request>
';
curl_setopt($curl, CURLOPT_POSTFIELDS, $data);
$resp = curl_exec($curl);
$info = curl_getinfo($curl);
curl_close($curl);
print_r($info);
echo $resp;
echo 'complete';
?>
</code></pre>
<p><strong>The Curl response from PHP:</strong></p>
<pre><code>Array
(
[url] => https://url.com/rest-service/report/export/xml
[content_type] => text/html;charset=UTF-8
[http_code] => 500
[header_size] => 859
[request_size] => 210
[filetime] => -1
[ssl_verify_result] => 0
[redirect_count] => 0
[total_time] => 0.797376
[namelookup_time] => 0.049275
[connect_time] => 0.05067
[pretransfer_time] => 0.358989
[size_upload] => 270
[size_download] => 346
[speed_download] => 433
[speed_upload] => 338
[download_content_length] => 346
[upload_content_length] => 270
[starttransfer_time] => 0.359038
[redirect_time] => 0
[redirect_url] =>
[certinfo] => Array
(
)
[primary_port] => 443
[local_port] => 54178
[http_version] => 3
[protocol] => 2
[ssl_verifyresult] => 0
[scheme] => HTTPS
[appconnect_time_us] => 358591
[connect_time_us] => 50670
[namelookup_time_us] => 49275
[pretransfer_time_us] => 358989
[redirect_time_us] => 0
[starttransfer_time_us] => 359038
[total_time_us] => 797376
)
</code></pre>
<p><strong>This curl command works, so I basically just need to recreate this in PHP:</strong></p>
<pre><code>curl -X 'POST' \
'https://url.com/rest-service/report/export/xml' \
-H 'accept: text/xml' \
-H 'Content-Type: multipart/form-data' \
-u "username:password" \
-F 'xmlRequest=@uuid.xml;type=text/xml'
</code></pre>
| <python><php><curl> | 2023-09-28 14:57:02 | 1 | 412 | Developer Gee |
77,196,067 | 4,173,663 | Is there a way to check all the python modules that are currently installed that can be accessed directly from the terminal? | <p>Some python modules like <code>pip, venv, streamlit, etc</code> can be accessed by directly typing them and hitting enter in the terminal.</p>
<p>I recently started using <code>pip-tools</code> and although <code>pip-tools</code> can be seen listed in the output of <code>pip freeze</code>, the actual commands that can be accessed from the terminal directly are <code>pip-compile</code> and <code>pip-sync</code> and not <code>pip-tools</code>. Neither are <code>pip-compile</code> and <code>pip-sync</code> listed in the output of <code>pip freeze</code>.</p>
<p>This has me wondering how to list all such python modules that can be accessed directly from the terminal, because clearly <code>pip freeze</code> does not list everything. I did some basic digging on the internet but only found advice to run <code>pip freeze</code>.
So, how do I check what options I have when it comes to accessing python modules directly from the terminal?</p>
<p><strong>TL;DR:</strong>
I tried <code>pip freeze</code>, but <code>pip-compile</code> and <code>pip-sync</code> are not listed (only <code>pip-tools</code> is), yet <code>pip-compile</code> and <code>pip-sync</code> are the commands you can actually run directly in the terminal.</p>
<p>This is not a question about how to use <code>pip-tools</code> but what is the way to list down all the available python modules that can be accessed from the terminal directly in a particular environment.</p>
| <python><pip><module><package> | 2023-09-28 14:49:48 | 1 | 341 | Piyush Ranjan |
77,196,016 | 22,437,609 | How can I sort this list by date? | <p>I am having a problem sorting my list by date.</p>
<p>Here is the code.</p>
<pre><code>a = ['28/09 FenerbahΓ§e BaΕakΕehir FK : 3.0 Normal', '29/09 Samsunspor Gaziantep FK : 2.25 Normal', '30/09 Sivasspor Hatayspor : 2.33 Normal', '30/09 Δ°stanbulspor Antalyaspor : 2.17 Alt Degerli', '30/09 Trabzonspor Pendikspor : 2.67 Normal', '30/09 Galatasaray MKE AnkaragΓΌcΓΌ : 3.8 Normal', '01/10 Fatih KaragΓΌmrΓΌk KasΔ±mpaΕa : 1.8 Alt Degerli', '01/10 FenerbahΓ§e Γaykur Rizespor : 3.8 Normal', '01/10 Adana Demirspor Alanyaspor : 2.8 Normal', '01/10 Konyaspor BeΕiktaΕ : 2.83 Normal', '29/09 Hoffenheim Borussia Dortmund : 3.75 Normal', '30/09 Bochum MΓΆnchengladbach : 4.5 Normal', '30/09 Heidenheim Union Berlin : 4.75 Normal', '30/09 KΓΆln Stuttgart : 4.25 Normal', '30/09 Mainz 05 Bayer Leverkusen : 3.25 Normal', '30/09 Wolfsburg Eintracht Frankfurt : 2.25 Alt Degerli', '30/09 RB Leipzig Bayern MΓΌnih : 4.0 Normal', '01/10 Darmstadt 98 Werder Bremen : 4.5 Normal', '01/10 Freiburg Augsburg : 3.5 Normal', '28/09 Frosinone Fiorentina : 4.0 Normal', '28/09 Monza Bologna : 1.5 Normal', '28/09 Genoa Roma : 3.5 UST Degerli', '30/09 Lecce Napoli : 2.33 Normal', '30/09 Milan Lazio : 3.2 UST Degerli', '30/09 Salernitana Inter : 2.0 Alt Degerli', '01/10 Bologna Empoli : 2.8 Normal', '01/10 Udinese Genoa : 1.33 Normal', '01/10 Atalanta Juventus : 3.2 Normal', '01/10 Roma Frosinone : 3.2 Normal', '28/09 Celta Vigo Deportivo Alaves : 1.33 Normal', '28/09 Granada Real Betis : 4.5 UST Degerli', '28/09 Osasuna Atletico Madrid : 2.5 UST Degerli', '29/09 Barcelona Sevilla : 3.8 Normal', '30/09 Getafe Villarreal : 2.17 Normal', '30/09 Rayo Vallecano Mallorca : 3.86 UST Degerli', '30/09 Girona Real Madrid : 3.29 Normal', '30/09 Real Sociedad Athletic Bilbao : 3.29 UST Degerli', '01/10 Almeria Granada : 4.0 Normal', '01/10 Deportivo Alaves Osasuna : 3.33 UST Degerli', '30/09 Aston Villa Brighton & Hove Albion : 4.25 Normal', '30/09 Bournemouth Arsenal : 1.2 Alt Degerli', '30/09 Everton Luton Town : 2.0 Normal', '30/09 Manchester United Crystal Palace : 2.83 Normal', '30/09 
Newcastle United Burnley : 3.0 Normal', '30/09 West Ham United Sheffield United : 3.5 Normal', '30/09 Wolverhampton Manchester City : 3.8 Normal', '30/09 Tottenham Liverpool : 2.8 Normal', '01/10 Nottingham Forest Brentford : 2.25 Normal', '02/10 Fulham Chelsea : 2.0 Normal']
def get_first_5_chars(item):
return item[:5]
sorted_data = sorted(a, key=get_first_5_chars)
for item in sorted_data:
print(item)
</code></pre>
<p>Results:</p>
<pre><code>01/10 Fatih KaragΓΌmrΓΌk KasΔ±mpaΕa : 1.8 Alt Degerli
01/10 FenerbahΓ§e Γaykur Rizespor : 3.8 Normal
01/10 Adana Demirspor Alanyaspor : 2.8 Normal
01/10 Konyaspor BeΕiktaΕ : 2.83 Normal
01/10 Darmstadt 98 Werder Bremen : 4.5 Normal
01/10 Freiburg Augsburg : 3.5 Normal
01/10 Bologna Empoli : 2.8 Normal
01/10 Udinese Genoa : 1.33 Normal
01/10 Atalanta Juventus : 3.2 Normal
01/10 Roma Frosinone : 3.2 Normal
01/10 Almeria Granada : 4.0 Normal
01/10 Deportivo Alaves Osasuna : 3.33 UST Degerli
01/10 Nottingham Forest Brentford : 2.25 Normal
02/10 Fulham Chelsea : 2.0 Normal
28/09 FenerbahΓ§e BaΕakΕehir FK : 3.0 Normal
28/09 Frosinone Fiorentina : 4.0 Normal
28/09 Monza Bologna : 1.5 Normal
28/09 Genoa Roma : 3.5 UST Degerli
28/09 Celta Vigo Deportivo Alaves : 1.33 Normal
28/09 Granada Real Betis : 4.5 UST Degerli
28/09 Osasuna Atletico Madrid : 2.5 UST Degerli
29/09 Samsunspor Gaziantep FK : 2.25 Normal
29/09 Hoffenheim Borussia Dortmund : 3.75 Normal
29/09 Barcelona Sevilla : 3.8 Normal
30/09 Sivasspor Hatayspor : 2.33 Normal
30/09 Δ°stanbulspor Antalyaspor : 2.17 Alt Degerli
30/09 Trabzonspor Pendikspor : 2.67 Normal
30/09 Galatasaray MKE AnkaragΓΌcΓΌ : 3.8 Normal
30/09 Bochum MΓΆnchengladbach : 4.5 Normal
30/09 Heidenheim Union Berlin : 4.75 Normal
30/09 KΓΆln Stuttgart : 4.25 Normal
30/09 Mainz 05 Bayer Leverkusen : 3.25 Normal
30/09 Wolfsburg Eintracht Frankfurt : 2.25 Alt Degerli
30/09 RB Leipzig Bayern MΓΌnih : 4.0 Normal
30/09 Lecce Napoli : 2.33 Normal
30/09 Milan Lazio : 3.2 UST Degerli
30/09 Salernitana Inter : 2.0 Alt Degerli
30/09 Getafe Villarreal : 2.17 Normal
30/09 Rayo Vallecano Mallorca : 3.86 UST Degerli
30/09 Girona Real Madrid : 3.29 Normal
30/09 Real Sociedad Athletic Bilbao : 3.29 UST Degerli
30/09 Aston Villa Brighton & Hove Albion : 4.25 Normal
30/09 Bournemouth Arsenal : 1.2 Alt Degerli
30/09 Everton Luton Town : 2.0 Normal
30/09 Manchester United Crystal Palace : 2.83 Normal
30/09 Newcastle United Burnley : 3.0 Normal
30/09 West Ham United Sheffield United : 3.5 Normal
30/09 Wolverhampton Manchester City : 3.8 Normal
30/09 Tottenham Liverpool : 2.8 Normal
</code></pre>
<p>As you can see, there is a problem with the sorted list.
For example, 01/10 is at the top and 30/09 is at the bottom.
I want the list sorted by the actual dates.</p>
<p>Thanks very much</p>
<p>What I tried:
I found some code via ChatGPT and used it, but it did not work.</p>
<pre><code>def get_first_5_chars(item):
return item[:5]
sorted_data = sorted(a, key=get_first_5_chars)
for item in sorted_data:
print(item)
</code></pre>
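<p>A sketch of one fix: the first five characters sort as plain strings, so <code>"01/10"</code> comes before <code>"28/09"</code>. Parsing the <code>dd/mm</code> prefix into a real date puts September before October (this assumes all fixtures fall in the same year, i.e. no December/January wrap in the data):</p>

```python
from datetime import datetime

a = ['01/10 Fulham Chelsea : 2.0 Normal',
     '28/09 Genoa Roma : 3.5 UST Degerli',
     '30/09 Milan Lazio : 3.2 UST Degerli',
     '02/10 Example Match : 1.5 Normal']

def by_date(item):
    # parse the leading "dd/mm" into a datetime so months sort correctly
    return datetime.strptime(item[:5], '%d/%m')

for item in sorted(a, key=by_date):
    print(item)
```
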
| <python><python-3.x> | 2023-09-28 14:43:13 | 3 | 313 | MECRA YAVCIN |
77,195,991 | 9,342,193 | Add prefix to duplicate values AND add new rows with these values to certain groups in pandas | <p>I have a df such as</p>
<pre><code>Type Species Value
Dog Species2 100
Dog Species1 200
Dog Species3 300
Dog Species3 900
ALL_ Species1 400
ALL_ Species2 500
ALL_ Species3 600
</code></pre>
<p>How can I add rows to <code>ALL_</code> for each duplicated Species, while also adding a <code>suffix_number</code> to the duplicated Species in <code>ALL_</code> and <code>Dog</code>?</p>
<p>I should then get :</p>
<pre><code>Type Species Value
Dog Species2 100
Dog Species1 200
Dog Species3_1 300
Dog Species3_2 900
ALL_ Species1 400
ALL_ Species2 500
ALL_ Species3_1 600
ALL_ Species3_2 600
</code></pre>
<p>This is not a duplicate of <a href="https://stackoverflow.com/questions/66914097/pandas-dataframe-add-suffix-to-column-value-only-if-it-is-repeated">Pandas dataframe - add suffix to column value only if it is repeated</a> since I also need to add new rows for duplicated Species only in groups <code>ALL_</code></p>
<p>Here is the dataframe constructor, if it helps:</p>
<pre><code>data = {'Type': ['Mammuthus', 'Mammuthus', 'Mammuthus', 'Mammuthus', 'ALL_', 'ALL_', 'ALL_'],
'Species': ['Species2', 'Species1', 'Species3', 'Species3', 'Species1', 'Species2', 'Species3'],
'Value': [100, 200, 300, 900, 400, 500, 600]}
</code></pre>
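<p>A sketch of one way to do this with <code>groupby</code>/<code>cumcount</code> (values from the question's table; the replication of <code>ALL_</code> rows uses an explicit loop for clarity rather than the fastest vectorised form):</p>

```python
import pandas as pd

df = pd.DataFrame({'Type': ['Dog', 'Dog', 'Dog', 'Dog', 'ALL_', 'ALL_', 'ALL_'],
                   'Species': ['Species2', 'Species1', 'Species3', 'Species3',
                               'Species1', 'Species2', 'Species3'],
                   'Value': [100, 200, 300, 900, 400, 500, 600]})

# Which Species occur more than once outside ALL_?
counts = df.loc[df['Type'] != 'ALL_', 'Species'].value_counts()
dupes = counts[counts > 1]

# Suffix the duplicated rows in the non-ALL_ groups: _1, _2, ...
mask = (df['Type'] != 'ALL_') & df['Species'].isin(dupes.index)
suffix = df[mask].groupby('Species').cumcount().add(1).astype(str)
df.loc[mask, 'Species'] = df.loc[mask, 'Species'] + '_' + suffix

# Replicate each matching ALL_ row once per duplicate, with the same suffix.
rows = []
for _, row in df[df['Type'] == 'ALL_'].iterrows():
    n = dupes.get(row['Species'], 1)
    for i in range(n):
        r = row.copy()
        if n > 1:
            r['Species'] = f"{row['Species']}_{i + 1}"
        rows.append(r)

out = pd.concat([df[df['Type'] != 'ALL_'], pd.DataFrame(rows)],
                ignore_index=True)
print(out)
```
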
| <python><python-3.x><pandas> | 2023-09-28 14:38:47 | 1 | 597 | Grendel |
77,195,870 | 9,166,673 | Gradio HTML component display mounted on FastAPI | <p>I am trying to display an HTML message on a webpage using Gradio mounted on FastAPI.</p>
<pre class="lang-py prettyprint-override"><code>import gradio as gr
from fastapi import FastAPI
from starlette.middleware.sessions import SessionMiddleware
import uuid
app = FastAPI()
def get_user_info(request: gr.Request):
return f"<b>Welcome {request.request.session.get('username', 'User')}!<b/>"
with gr.Blocks(title="test") as demo:
gr.Markdown("# Test App")
with gr.Row():
with gr.Column():
user_info = gr.components.HTML(get_user_info)
app = gr.mount_gradio_app(app, demo, "/home")
app.add_middleware(SessionMiddleware, secret_key=uuid.uuid4().hex)
</code></pre>
<p>The above code is throwing error as <code>TypeError: get_user_info() missing 1 required positional argument: 'request'</code></p>
<p>If I write this code again using a button click, it works fine.</p>
<pre class="lang-py prettyprint-override"><code>import gradio as gr
from fastapi import FastAPI
from starlette.middleware.sessions import SessionMiddleware
import uuid
app = FastAPI()
def get_user_info(request: gr.Request):
return f"<b>Welcome {request.request.session.get('username', 'User')}!<b/>"
with gr.Blocks(title="test") as demo:
gr.Markdown("# Test App")
with gr.Row():
with gr.Column():
user_info = gr.components.HTML()
b2 = gr.Button(value="Logging")
b2.click(get_user_info, outputs=user_info)
app = gr.mount_gradio_app(app, demo, "/home")
app.add_middleware(SessionMiddleware, secret_key=uuid.uuid4().hex)
</code></pre>
<p>Output before click</p>
<p><a href="https://i.sstatic.net/LduqA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LduqA.png" alt="enter image description here" /></a></p>
<p>Output after click</p>
<p><a href="https://i.sstatic.net/ba8vq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ba8vq.png" alt="enter image description here" /></a></p>
<p>How can I achieve the same with the first code, without using a button? How can I resolve the above error?</p>
| <python><fastapi><gradio> | 2023-09-28 14:22:00 | 1 | 845 | Shubhank Gupta |
77,195,789 | 9,068,493 | Python unions with typevars | <pre class="lang-py prettyprint-override"><code>import typing as t
T = t.TypeVar("T")
v = t.Union[int , T]
a: v = "g"
t.reveal_type(a) # Unknown | int
</code></pre>
<p>The above code passes the Pylance checks; however, it seems <em>illogical</em>. Did I discover another "feature" of Pylance, OR <strong>is</strong> the code above really correct, with some real-world applications?</p>
<p>From what I understand, in Python's type hints TypeVars need to be <em>constrained</em>, or in other words, used twice in a function or class declaration.</p>
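<p>For contrast, a sketch where the TypeVar is actually solvable: used in a bare alias like <code>v</code>, the free <code>T</code> has nothing to bind against, so checkers treat it as Unknown/Any and <code>a: v = "g"</code> passes. Bound in two positions of one signature, it carries real information:</p>

```python
import typing as t

T = t.TypeVar("T")

# T links the element type of the argument to the return type, so a
# checker can solve it: first(["a", "b"]) is revealed as int | str.
def first(items: t.Sequence[T], default: t.Union[int, T] = 0) -> t.Union[int, T]:
    return items[0] if items else default

print(first(["a", "b"]))
print(first([]))           # falls back to the int default
```
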
| <python><python-typing> | 2023-09-28 14:13:19 | 1 | 811 | Felix.leg |
77,195,453 | 18,377,883 | Pixel Art Square | <p>I am creating pixel art from a 64x64 pixel image in my terminal using Python. The problem is that it is ASCII, and because of that the image always comes out as a rectangle rather than a square, as you can see in the image. (I have checked, and the characters really are 64x64.)</p>
<p><a href="https://i.sstatic.net/ENs7l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ENs7l.png" alt="see here" /></a></p>
<p>My question is: is there a known best-practice way to deal with this, or are there other options, like using Unicode instead of ASCII, or finding a perfectly square character to print the image with?</p>
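<p>One common trick, sketched below: most terminal cells are roughly twice as tall as they are wide, so printing each pixel as two characters (for example a doubled Unicode full block) restores a square aspect ratio.</p>

```python
# Tiny 3x3 stand-in image: 1 = lit pixel, 0 = background.
image = [[1, 0, 1],
         [0, 1, 0],
         [1, 0, 1]]

# Two characters per pixel: doubled full blocks for "on", two spaces for "off".
rendered = [''.join('\u2588' * 2 if px else '  ' for px in row) for row in image]
print('\n'.join(rendered))
```

<p>An alternative that keeps one character per pixel horizontally is the half-block character <code>\u2580</code>, rendering two image rows per terminal line (upper half for one row, lower half for the next).</p>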
| <python><unicode><ascii><pixel> | 2023-09-28 13:36:22 | 1 | 1,681 | vince |
77,195,372 | 7,112,039 | python logging: replace values in dictconfig file using environment variables | <p>I am running a FastAPI app using uvicorn as the server.
To use as little code as possible and to rely on tools my environment already provides, I pass the logging configuration to uvicorn with a shell parameter at startup, as follows:</p>
<pre><code>uvicorn --reload --log-config=logging_conf.yaml main:app
</code></pre>
<p>where logging_conf.yaml is just dictconfig file written in YAML format.</p>
<p>This approach is convenient, because in the app you can just get the logger, without configuring it, as follows:</p>
<pre><code>logger = logging.getLogger(__name__)
</code></pre>
<p>Now I want to dockerize the application, and I would like to change the log level of a particular logger using only environment variables, something like:</p>
<pre><code>loggers:
uvicorn.access:
level: os.getenv("LOG_LEVEL")
handlers:
- access
propagate: no
</code></pre>
<p>Considering that the file is handled as a dict object, I would guess that something like this is possible.
Unfortunately, I have not found a way to achieve it.</p>
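<p><code>dictConfig</code> has no built-in environment-variable substitution, but since the file is just text before it becomes a dict, one approach (a sketch, assuming PyYAML is available) is to expand <code>${VAR}</code> placeholders in the raw text first:</p>

```python
import logging.config
import os
import string

import yaml  # PyYAML, assumed available since the config is already YAML

os.environ.setdefault("LOG_LEVEL", "WARNING")

config_text = """
version: 1
disable_existing_loggers: false
loggers:
  uvicorn.access:
    level: ${LOG_LEVEL}
"""

# Expand ${VAR} placeholders from the environment, then parse the YAML.
expanded = string.Template(config_text).substitute(os.environ)
config = yaml.safe_load(expanded)
logging.config.dictConfig(config)
print(config["loggers"]["uvicorn.access"]["level"])
```

<p>The catch is that <code>--log-config</code> makes uvicorn load the file itself, before you can substitute anything; running the server programmatically with <code>uvicorn.run(app, log_config=config)</code>, which accepts a dict, sidesteps that.</p>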
| <python><fastapi><uvicorn> | 2023-09-28 13:25:10 | 1 | 303 | ow-me |
77,195,269 | 4,489,082 | Storing a list in a pandas DataFrame | <p>I am trying to save a list in a pandas DataFrame, but I cannot always achieve it.</p>
<p>I will try to explain the issue with code.</p>
<p>With the following code I can save a list in pandas DataFrame</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame(index = ['R1'], columns = ['C1'])
df1.loc['R1', 'C1'] = [1, 2]
print(df1.loc['R1', 'C1'])
</code></pre>
<p>but when I save the dataframe to parquet or CSV and then read the dataframe back, I am not able to store a list in it:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame(index = ['R1'], columns = ['C1'])
df1.loc['R1', 'C1'] = 1
df1.to_parquet('exampleTable.parquet')
df2 = pd.read_parquet('exampleTable.parquet')
df2.loc['R1', 'C1'] = [1, 2]
</code></pre>
<p>For the above code, I receive the following error-</p>
<pre><code>TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/ipykernel_73084/2822717306.py", line 9, in <cell line: 9>
df2.loc['R1', 'C1'] = [1, 2]
File "/home/p/Documents/james-python-env/lib/python3.8/site-packages/pandas/core/indexing.py", line 849, in __setitem__
iloc._setitem_with_indexer(indexer, value, self.name)
File "/home/p/Documents/james-python-env/lib/python3.8/site-packages/pandas/core/indexing.py", line 1830, in _setitem_with_indexer
self._setitem_single_block(indexer, value, name)
File "/home/p/Documents/james-python-env/lib/python3.8/site-packages/pandas/core/indexing.py", line 2070, in _setitem_single_block
self.obj._mgr = self.obj._mgr.setitem(indexer=indexer, value=value)
File "/home/p/Documents/james-python-env/lib/python3.8/site-packages/pandas/core/internals/managers.py", line 394, in setitem
return self.apply("setitem", indexer=indexer, value=value)
File "/home/p/Documents/james-python-env/lib/python3.8/site-packages/pandas/core/internals/managers.py", line 352, in apply
applied = getattr(b, f)(**kwargs)
File "/home/p/Documents/james-python-env/lib/python3.8/site-packages/pandas/core/internals/blocks.py", line 1065, in setitem
values[indexer] = casted
ValueError: setting an array element with a sequence.
</code></pre>
<p>I tried changing the data type of the dataframe, but that did not help:</p>
<pre><code>df2 = df2.astype(int)
</code></pre>
<ol>
<li>What is causing the error when I load a table from parquet and is there a way to fix it?</li>
<li>My objective is to save data structures (such as lists, tuples, sets, dictionaries, numpy array, DataFrame) in a tabular form. If there an alternate solution to that, I am open to it.</li>
</ol>
<p>Thanks in advance</p>
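<p>A likely explanation, with a workaround sketch: after the parquet round trip, <code>C1</code> comes back typed as <code>int64</code>, and an integer column cannot hold a list (the freshly constructed frame worked because its empty column was <code>object</code> dtype). Casting the column to <code>object</code> first makes the assignment work; the one-row frame below stands in for the reloaded dataframe:</p>

```python
import pandas as pd

# Stand-in for the frame read back from parquet: C1 was inferred as int64.
df2 = pd.DataFrame({'C1': [1]}, index=['R1'])
assert df2['C1'].dtype == 'int64'

# An int64 column rejects sequences; widen it to object dtype first.
df2['C1'] = df2['C1'].astype(object)
df2.at['R1', 'C1'] = [1, 2]
print(df2.loc['R1', 'C1'])
```

<p>On the second question: for persisting arbitrary per-cell structures (lists, dicts, arrays), parquet is a poor fit because it enforces column types; <code>DataFrame.to_pickle</code>/<code>read_pickle</code> preserve arbitrary Python objects, at the cost of portability.</p>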
| <python><pandas> | 2023-09-28 13:10:11 | 2 | 793 | pkj |
77,195,224 | 8,741,562 | How to get the cost details for a particular resource group from azure using python? | <p>I am using the code below to get the list of resource groups, but I also need to check the cost details for each of the resource groups returned. Which module supports this, and what permissions are needed to achieve it?</p>
<pre><code>from azure.identity import ClientSecretCredential
import requests
subscription_id = 'MYSUBID'
client_id = 'MYCLIENTID'
client_secret = 'MYSECRETVALUE'
tenant_id = 'MYTENANTID'
# Create a ClientSecretCredential object
credential = ClientSecretCredential(tenant_id=tenant_id, client_id=client_id, client_secret=client_secret)
url = f"https://management.azure.com/subscriptions/{subscription_id}/resourcegroups?api-version=2021-04-01"
# Get an access token for the Azure Management API
access_token = credential.get_token("https://management.azure.com/.default")
# Make the GET request to retrieve a list of resource groups
headers = {
"Authorization": f"Bearer {access_token}"
}
response = requests.get(url, headers=headers)
if response.status_code == 200:
resource_groups = response.json()
for rg in resource_groups['value']:
print(rg['name'])
else:
print(response.status_code, "-" ,response.text)
</code></pre>
| <python><python-3.x><azure><azure-active-directory><azure-web-app-service> | 2023-09-28 13:03:33 | 2 | 1,070 | Navi |
77,194,999 | 1,800,515 | Google sheets API: Service account authentication: Updating cell value fails with a 403 response, only reading is successful | <pre><code>from apiclient import discovery
from google.oauth2 import service_account
credentials = service_account.Credentials.from_service_account_file(
os.path.join(os.getcwd(), 'app3-c1824-91dfa420b4a8.json'),
scopes=['https://www.googleapis.com/auth/spreadsheets']
)
apiService = discovery.build('sheets', 'v4', credentials=credentials)
values = apiService.spreadsheets().values().get(
spreadsheetId='1fV0fLiLidUEfOQsmZuytRPW6FapkYkFN-NOiZOftLok',
range='A1'
).execute()
print(f"READ response: {values}") # READ is successful
res = apiService.spreadsheets().values().update(
spreadsheetId='1fV0fLiLidUEfOQsmZuytRPW6FapkYkFN-NOiZOftLok',
range='Sheet1!A2',
valueInputOption='RAW',
body={
'values': [['test']]
}
)
print(f"UPDATE response: {res}") # UPDATE fails with 403
</code></pre>
<p>Log output for the READ request:</p>
<pre><code>2023-09-28 17:45:28 [googleapiclient.discovery_cache] INFO: file_cache is only supported with oauth2client<4.0.0
2023-09-28 17:45:28 [googleapiclient.discovery_cache] INFO: file_cache is only supported with oauth2client<4.0.0
2023-09-28 17:45:28 [googleapiclient.discovery] DEBUG: URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/1fV0fLiLidUEfOQsmZuytRPW6FapkYkFN-NOiZOftLok/values/A1?alt=json
2023-09-28 17:45:28 [google_auth_httplib2] DEBUG: Making request: POST https://oauth2.googleapis.com/token
READ response: {'range': 'Sheet1!A1', 'majorDimension': 'ROWS', 'values': [['prashan']]}
</code></pre>
<p>Log output for the UPDATE request:</p>
<pre><code>2023-09-28 17:45:28 [googleapiclient.discovery] DEBUG: URL being requested: PUT https://sheets.googleapis.com/v4/spreadsheets/1fV0fLiLidUEfOQsmZuytRPW6FapkYkFN-NOiZOftLok/values/Sheet1%21A2?valueInputOption=RAW&alt=json
UPDATE response: <googleapiclient.http.HttpRequest object at 0x106c1ef50>
</code></pre>
<p>When I open up the returned <code>res</code> object from the UPDATE request:</p>
<pre><code>res
<googleapiclient.http.HttpRequest object at 0x106c1ef50>
special variables:
function variables:
body: '{"values": [["test"]]}'
body_size: 22
headers: {'accept': 'application/json', 'accept-encoding': 'gzip, deflate', 'user-agent': '(gzip)', 'x-goog-api-client': 'gdcl/2.96.0 gl-python/3.11.3', 'content-type': 'application/json'}
http: <google_auth_httplib2.AuthorizedHttp object at 0x108628590>
method: 'PUT'
methodId: 'sheets.spreadsheets.values.update'
response_callbacks: []
resumable: None
resumable_progress: 0
resumable_uri: None
uri: 'https://sheets.googleapis.com/v4/spreadsheets/1fV0fLiLidUEfOQsmZuytRPW6FapkYkFN-NOiZOftLok/values/Sheet1%21A2?valueInputOption=RAW&alt=json'
_in_error_state: False
_process_response: <bound method HttpRequest._process_response of <googleapiclient.http.HttpRequest object at 0x106c1ef50>>
_rand: <built-in method random of Random object at 0x14e0a9c20>
_sleep: <built-in function sleep>
</code></pre>
<p>When I visit the <code>res.uri</code>, I get the following JSON:</p>
<pre><code>{
"error": {
"code": 403,
"message": "The request is missing a valid API key.",
"status": "PERMISSION_DENIED"
}
}
</code></pre>
<p>I have properly enabled the Sheets API for the app3 project in which the service account resides:</p>
<p><a href="https://i.sstatic.net/zLeLx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zLeLx.png" alt="enter image description here" /></a></p>
<p>The service account has been given "Editor" access to the sheet in question:</p>
<p><a href="https://i.sstatic.net/nvaAH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nvaAH.png" alt="enter image description here" /></a></p>
<p>One thing that's interesting is that the UPDATE request doesn't explicitly fail: the log output from the UPDATE request is neutral at best, and the <code>A2</code> value of the Google sheet remains blank. I have to open up the <code>res</code> object to even get to the special JSON resource that shows me the 403 error. It's complaining about a missing API key, but I am using service account authentication. (The Google Sheets API does not support write operations with API key authentication anyway.)</p>
| <python><google-sheets><google-sheets-api><service-accounts> | 2023-09-28 12:35:24 | 1 | 2,783 | PrashanD |
77,194,759 | 17,092,778 | How to protect /docs endpoint in FastAPI using Azure AD (fastapi_msal)? | <p>How can I connect the Authorize button of the protected <code>/docs</code> endpoint in a FastAPI application to Azure AD?</p>
<p>Is it possible to make the Authorize button trigger a popup for the sign-in window, similar to how it's done in this <a href="https://learn.microsoft.com/en-us/azure/active-directory/develop/tutorial-single-page-app-react-sign-in-users" rel="nofollow noreferrer">example</a> for NodeJS?</p>
<pre><code>if (loginType === "popup") {
instance.loginPopup(loginRequest).catch((e) => {
// error code
  });
}
</code></pre>
<p>So the token is generated, stored in the browser's local storage, and sent on each request.</p>
<p>I have protected my endpoints so that each API request is checked for a valid access token sent as an authorization bearer token, using <code>OAuth2PasswordBearer</code> from FastAPI. This works when the token is sent from the frontend. However, I have not been able to obtain this token myself from Azure AD in the FastAPI application, and I have not found any way to trigger the popup in the FastAPI documentation.</p>
<p>I assume that I should be able to use:
<a href="https://pypi.org/project/fastapi_msal/" rel="nofollow noreferrer">https://pypi.org/project/fastapi_msal/</a></p>
<p>Here is the sign-in popup (used on many sites):
<a href="https://i.sstatic.net/1tJA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1tJA4.png" alt="enter image description here" /></a></p>
| <python><swagger><fastapi><openapi><azure-ad-msal> | 2023-09-28 12:00:36 | 0 | 313 | starking |
77,194,746 | 16,895,640 | Transform data by grouping column value then getting all its grouped data using Pandas DataFrame | <p>I have this structure of data from a CSV file:</p>
<pre><code>string | group | datetime
some_string1 apple 2023-09-20 17:20:19
some_string2 apple 2023-09-25 17:20:19
some_string3 apple 2023-09-21 17:20:19
some_string4 banana 2023-09-23 17:20:19
some_string5 cherry 2023-09-23 17:20:19
some_string6 apple 2023-09-22 17:20:19
</code></pre>
<p>I want to transform this into:</p>
<pre><code> apple | banana | cherry
datetime | string | datetime | string | datetime | string
2023-09-20 17:20:19 some_string1 2023-09-23 17:20:19 some_string4 2023-09-23 17:20:19 some_string5
2023-09-21 17:20:19 some_string3 somedatime some_string(n) somedatime some_string(n)
2023-09-22 17:20:19 some_string6 somedatime some_string(n) somedatime some_string(n)
2023-09-25 17:20:19 some_string2 somedatime some_string(n) somedatime some_string(n)
</code></pre>
<p>As you can see above, the data is grouped by the values of the <strong>group</strong> column, and each group has <em>datetime</em> and <em>string</em> subcolumns.</p>
<p>I have already built the headers (it's not perfect, but workable) using the snippet below, but I'm blocked on assigning the values under each column. Note that every distinct value of the fruits column should become a new column.</p>
<pre><code> data = {}
headers = []
with open(r'somecsv.csv') as csv_file:
reader = csv.reader(csv_file)
for n, row in enumerate(reader):
if not n:
# Skip header row (n = 0).
continue
string, fruits, datetime = row
if fruits not in data:
data[fruits] = list()
data[fruits].append((datetime, string))
# print(datetime)
if fruits not in headers:
headers.append(fruits)
cols = pd.MultiIndex.from_product([headers, ['datetime', 'string']])
df = pd.DataFrame(data, columns=cols)
df.to_csv('file_name.csv')
</code></pre>
<p>This does create a CSV with the headers and subcolumns, but no values below them. I'm blocked and need some help. Thank you.</p>
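<p>For reproducibility, the sample data above can be built directly as a DataFrame (skipping the CSV step):</p>

```python
import pandas as pd

# Same rows as in the sample CSV above.
df = pd.DataFrame({
    "string": [f"some_string{i}" for i in range(1, 7)],
    "group": ["apple", "apple", "apple", "banana", "cherry", "apple"],
    "datetime": pd.to_datetime([
        "2023-09-20 17:20:19", "2023-09-25 17:20:19", "2023-09-21 17:20:19",
        "2023-09-23 17:20:19", "2023-09-23 17:20:19", "2023-09-22 17:20:19",
    ]),
})
print(sorted(df["group"].value_counts().to_dict().items()))
# [('apple', 4), ('banana', 1), ('cherry', 1)]
```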
| <python><pandas><dataframe><list><dictionary> | 2023-09-28 11:59:06 | 1 | 4,139 | Marc Anthony B |
77,194,570 | 5,663,844 | "Warm Start" in combination with new data leads to broadcasting error when predicting with Random Forest | <p>I am trying to train a random forest model with <code>sklearn</code>. I have some original data (<code>x</code>, <code>y</code>) that I initially train the RF with.</p>
<pre><code>from sklearn.ensemble import RandomForestClassifier
import numpy as np
x = np.random.rand(30,20)
y = np.round(np.random.rand(30))
rf = RandomForestClassifier()
rf.fit(x,y)
</code></pre>
<p>Now I get some new data that I want to use to retrain the model, but I want to keep the already existing trees in the <code>rf</code> untouched. So I set <code>warm_start=True</code> and add additional trees.</p>
<pre><code>x_new = np.random.rand(5,20)
y_new = np.round(np.random.rand(5))
rf.n_estimators +=100
rf.warm_start = True
rf.fit(x_new,y_new)
</code></pre>
<p>So far so good. Everything works.
But when I make predictions I get an error:</p>
<pre><code>rf.predict(x)
>>> ValueError: non-broadcastable output operand with shape (30,1) doesn't match the broadcast shape (30,2)
</code></pre>
<p>Why does this happen?</p>
| <python><machine-learning><scikit-learn><random-forest> | 2023-09-28 11:33:37 | 1 | 480 | Janosch |
77,194,543 | 6,401,758 | FastAPI: how to return JSONResponse and StreamingResponse | <p>Is it possible to return both a JSON response and a zip file at the same time with FastAPI?
This is the tentative endpoint but it doesn't work:</p>
<pre class="lang-py prettyprint-override"><code>@app.post("/search")
async def structure_search():
json_response = {"item1": result1, "item2": result2}
zip_io = BytesIO()
with zipfile.ZipFile(zip_io, mode="w", compression=zipfile.ZIP_DEFLATED) as temp_zip:
for fpath in glob.glob("directory/*"):
temp_zip.write(fpath)
return JSONResponse(content=json_response),
StreamingResponse(iter([zip_io.getvalue()]),
media_type="application/x-zip-compressed",
headers={"Content-Disposition": f"attachment; filename=results.zip"},
)
</code></pre>
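<p>For what it's worth, the in-memory zip part works fine on its own. A stripped-down, self-contained version of just that step (with a hard-coded payload instead of <code>glob</code>) is:</p>

```python
import zipfile
from io import BytesIO

# Build the archive entirely in memory, as in the endpoint above.
zip_io = BytesIO()
with zipfile.ZipFile(zip_io, mode="w", compression=zipfile.ZIP_DEFLATED) as temp_zip:
    temp_zip.writestr("results/answer.txt", "42")

payload = zip_io.getvalue()
print(payload[:2])  # zip files start with the b"PK" magic bytes
```

<p>So the question is only about returning this payload <em>and</em> the JSON from a single endpoint.</p>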
| <python><python-3.x><fastapi> | 2023-09-28 11:30:34 | 0 | 415 | Gabriel Cretin |
77,194,538 | 128,618 | How can I merge two lists of dictionaries into 1, where numeric values for identical keys get added? | <p>I have two lists of dictionaries:</p>
<pre><code>list_of_dicts = [{'a': 1, 'name1': "jane"}, {'b': 2, 'name1': 'jack'}, {'c': 3, 'name1': 'nak'},{'d': 3, 'name1': 'nak'}]
dict_to_check = [{'a': 1, 'name2': "jone"}, {'b': 2, 'name2': 'doe'}, {'d': 3, 'name2': 'nick'}]
</code></pre>
<p>I want this output; please help me with some solutions:</p>
<pre><code>[
{'a': 2, 'name1': "jane", 'name2': "jone"},
{'b': 4, 'name1': 'jack','name2': 'doe'},
{'c': 3, 'name1': 'nak'},
{'d': 6, 'name1': 'nak','name2': 'nick'}
]
</code></pre>
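<p>To make the merge rule concrete, here is a literal, hand-rolled sketch of what I mean (it assumes the numeric key is always the single key that isn't a <code>name*</code> field); I'm hoping there is something cleaner than this:</p>

```python
list_of_dicts = [{'a': 1, 'name1': "jane"}, {'b': 2, 'name1': 'jack'},
                 {'c': 3, 'name1': 'nak'}, {'d': 3, 'name1': 'nak'}]
dict_to_check = [{'a': 1, 'name2': "jone"}, {'b': 2, 'name2': 'doe'},
                 {'d': 3, 'name2': 'nick'}]

def numeric_key(d):
    # the single key that isn't a name field ('a', 'b', 'c', 'd', ...)
    return next(k for k, v in d.items() if not k.startswith('name'))

lookup = {numeric_key(d): d for d in dict_to_check}

merged = []
for d in list_of_dicts:
    k = numeric_key(d)
    out = dict(d)
    if k in lookup:
        out[k] += lookup[k][k]             # add the numeric values
        out['name2'] = lookup[k]['name2']  # carry over the second name
    merged.append(out)
```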
| <python><python-3.x><algorithm> | 2023-09-28 11:29:52 | 3 | 21,977 | tree em |
77,194,300 | 9,139,930 | Identify symlinked python files for import in VS Code | <p>I am working with a complex body of code where some python files get symlinked to a directory, from which I would like to import them. For example:</p>
<pre><code>accessible_module
| __init__.py
| sourceA.py
</code></pre>
<p>The files <code>__init__.py</code> and <code>sourceA.py</code> are symlinked to source files that live elsewhere. I have another file with an import statement</p>
<pre class="lang-py prettyprint-override"><code>from accessible_module import sourceA
</code></pre>
<p>My code works at runtime, but I would like VS Code to be able to resolve the import for linting and autocompletion. At present, I get an import error from pylance. I have verified that the <code>accessible_module</code> is present in the PYTHONPATH, and I have checked that the import statements work if I replace the <code>__init__.py</code> and <code>sourceA.py</code> symlinks with their actual source files (however, this is not a viable long-term solution for me).</p>
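<p>For concreteness, the layout is produced with something like this (the paths are made up for illustration):</p>

```shell
# recreate the symlinked module layout in a scratch directory
mkdir -p /tmp/proj/real_sources /tmp/proj/accessible_module
printf 'x = 1\n' > /tmp/proj/real_sources/sourceA.py
: > /tmp/proj/real_sources/__init__.py
ln -sf /tmp/proj/real_sources/sourceA.py /tmp/proj/accessible_module/sourceA.py
ln -sf /tmp/proj/real_sources/__init__.py /tmp/proj/accessible_module/__init__.py
ls -l /tmp/proj/accessible_module
```

<p>Python itself resolves these symlinks fine at runtime; it is only Pylance that reports the import error.</p>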
<p>Is there a way to get VS Code to correctly recognize these symlinked imports?</p>
| <python><visual-studio-code><import><symlink><pylance> | 2023-09-28 10:47:50 | 1 | 367 | book_kees |
77,194,220 | 6,619,692 | Validate against enum values in Python | <p><strong>How can I validate a parameter passed by a user by ensuring it is one of the <em>values</em> of the members of an integer enum that already exists in my codebase?</strong></p>
<p>I have the following <code>SamplingRateEnum</code>:</p>
<pre class="lang-py prettyprint-override"><code>from enum import IntEnum
class SamplingRateEnum(IntEnum):
SR_22050 = 22_050
SR_44100 = 44_100
SR_80000 = 88_000 # NOTE *not* 88_200
</code></pre>
<p>I could validate the user input as follows, getting the desired result in <code>valid_flag</code>, but this seems like a bad way to achieve this outcome:</p>
<pre class="lang-py prettyprint-override"><code>user_input = 22_050
valid_flag = user_input in SamplingRateEnum.__members__.values()
</code></pre>
<p>In the related <a href="https://stackoverflow.com/q/35084518/6619692">Validate against enum members in Python</a>, data had to be validated based on members' keys, and not their values. I want to validate data based on the latter.</p>
<p>Is there a better idiom or tool for this in the Python standard library? (I would like to avoiding an additional dependency just for this purpose.)</p>
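<p>For comparison, the only other idiom I could come up with is round-tripping through the constructor and catching <code>ValueError</code>, which at least avoids reaching into <code>__members__</code>, but I don't know whether it is considered more idiomatic:</p>

```python
from enum import IntEnum

class SamplingRateEnum(IntEnum):
    SR_22050 = 22_050
    SR_44100 = 44_100
    SR_80000 = 88_000  # NOTE *not* 88_200

def is_valid_sampling_rate(value: int) -> bool:
    # The IntEnum constructor raises ValueError for non-member values.
    try:
        SamplingRateEnum(value)
    except ValueError:
        return False
    return True

print(is_valid_sampling_rate(22_050))  # True
print(is_valid_sampling_rate(88_200))  # False
```

<p>Is one of these (or something else entirely) the preferred idiom?</p>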
| <python><enums> | 2023-09-28 10:38:33 | 1 | 1,459 | Anil |
77,194,214 | 21,346,793 | How to fix CSV encoding in Python | <p>I have a program that generates a report as a CSV file in Python.
There is a function to do it:</p>
<pre><code>def download_report(request, date):
food_orders = FoodOrder.objects.filter(order__date__date=date)
response = HttpResponse(
content_type="text/csv",
headers={"Content-Disposition": 'attachment; filename="report.csv"'}
)
writer = csv.writer(response)
writer.writerow([date])
total_cost_of_all_orders = 0
for order in food_orders:
order_time = order.order.date.strftime('%H:%M')
customer_name = order.order.employee.name
dishes = '; '.join([str(dish) for dish in order.food.all()])
order_cost = order.total_cost
writer.writerow([f'Time: {order_time}, Name: {customer_name}, Food: {dishes}, Cost: {order_cost} rub.'])
total_cost_of_all_orders += order_cost
writer.writerow(['', '', '', 'Total cost of day:', total_cost_of_all_orders])
return response
</code></pre>
<p>It works well, but when I open my report.csv I see some incomprehensible symbols:<a href="https://i.sstatic.net/zyoZB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zyoZB.png" alt="enter image description here" /></a></p>
<p>How can I fix this?</p>
| <python><excel><csv> | 2023-09-28 10:37:16 | 0 | 400 | Ubuty_programmist_7 |
77,193,917 | 2,467,772 | Error running a C application with Python | <p>I am running a C application from Python; my code is as follows.</p>
<pre><code>import subprocess
import os
sourcepath = '/home/atic/deepstream/videos'
destpath = '/home/atic/deepstream/jsons'
avi_files = [f for f in os.listdir(sourcepath) if f.endswith('.avi')]
for file_ in avi_files:
avi_file = os.path.join(sourcepath, file_)
json_file = os.path.join(destpath, file_.split('.avi')[0]+'.json')
subprocess.run(['./deepstream-pose-estimation-app', '--input', avi_file, '--focal', '800.0', '--width', '1280', '--height', '720', '--fps', '--save-pose', json_file])
</code></pre>
<p>This line runs if I set</p>
<pre><code>subprocess.run(['./deepstream-pose-estimation-app', '--input', 'tst.avi', '--focal', '800.0', '--width', '1280', '--height', '720', '--fps', '--save-pose', 'tst.json']
</code></pre>
<p>But the video file name and JSON file name change on every iteration, so I build them as string variables, and then the application rejects the input.</p>
<p>The error is</p>
<pre><code>--input value is not a valid URI address. Exiting...
</code></pre>
<p>How can I fix this?</p>
<p>EDIT</p>
<pre><code>import subprocess
import os
sourcepath = '/home/atic/deepstream/videos'
destpath = '/home/atic/deepstream/jsons'
avi_files = [f for f in os.listdir(sourcepath) if f.endswith('.avi')]
for file_ in avi_files:
avi_file = os.path.join(sourcepath, file_)
json_file = os.path.join(destpath, file_.split('.avi')[0]+'.json')
print(avi_file)
print(json_file)
subprocess.run(['./deepstream-pose-estimation-app', '--input', avi_file, '--focal', '800.0', '--width', '1280', '--height', '720', '--fps', '--save-pose', json_file])
</code></pre>
<p>In particular, I print out these two values:</p>
<pre><code>print(avi_file)
print(json_file)
</code></pre>
<p>Now the output is</p>
<pre><code>/home/atic/deepstream/videos/sit_1059.avi
/home/atic/deepstream/jsons/sit_1059.json
--input value is not a valid URI address. Exiting...
Usage:
deepstream-pose-estimation-app [OPTION?] Deepstream BodyPose3DNet App
Help Options:
-h, --help Show help options
--help-all Show all help options
--help-gst Show GStreamer Options
Application Options:
-v, --version Print DeepStreamSDK version.
--version-all Print DeepStreamSDK and dependencies version.
--input [Required] Input video address in URI format by starting with "rtsp://" or "file://".
--output Output video address. Either "rtsp://" or a file path or "fakesink" is acceptable. If the value is "rtsp://", then the result video is published at "rtsp://localhost:8554/ds-test".
--save-pose The file path to save both the pose25d and the recovered pose3d in JSON format.
--conn-str Connection string for Gst-nvmsgbroker, e.g. <ip address>;<port>;<topic>.
--publish-pose Specify the type of pose to publish. Acceptable value is either "pose3d" or "pose25d". If not specified, both "pose3d" and "pose25d" are published to the message broker.
--tracker Specify the NvDCF tracker mode. The acceptable value is either "accuracy" or "perf". The default value is "perf" "accuracy" mode requires DeepSORT model to be installed. Please refer to [Setup Official Re-ID Model](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvtracker.html) section for details.
--fps Print FPS in the format of current_fps (averaged_fps).
--fps-interval Interval in seconds to print the fps, applicable only with --fps flag.
--width Input video width in pixels. The default value is 1280.
--height Input video height in pixels. The default value is 720.
--focal Camera focal length in millimeters. The default value is 800.79041.
--osd-process-mode OSD process mode CPU - 0 or GPU 1.
</code></pre>
| <python><subprocess> | 2023-09-28 09:50:00 | 1 | 7,346 | batuman |
77,193,700 | 4,391,249 | Why is my recursion limit not being respected? | <p>Consider this example:</p>
<pre class="lang-py prettyprint-override"><code>import math
import sys
sys.setrecursionlimit(100)
max_items = 10
def foo(ls):
if -(len(ls) // -max_items) >= sys.getrecursionlimit():
        raise ValueError("List is too long. You'll hit the recursion limit")
ls, ls_ = ls[:max_items], ls[max_items:]
print(len(ls_))
if len(ls_) > 0:
foo(ls_)
foo(list(range(970)))
</code></pre>
<p>I would expect that my logic for raising a <code>ValueError</code> before the recursion limit is reached is correct. That is, I'm calculating the total number of times <code>foo</code> would need to be called (including the initial call) to process the list, and checking if that is equal to or above the recursion limit.</p>
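<p>As a sanity check on the ceiling-division trick itself (just the arithmetic, independent of the recursion question), it does count calls the way I expect:</p>

```python
max_items = 10

# -(n // -max_items) is ceiling division: the number of chunks of
# size max_items needed to cover n items, i.e. the number of calls.
for n, expected_calls in [(1, 1), (10, 1), (11, 2), (100, 10), (970, 97)]:
    assert -(n // -max_items) == expected_calls
print("ceiling division checks out")
```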
<p>But for some reason even calling <code>foo(list(range(970)))</code> raises <code>RecursionError: maximum recursion depth exceeded while calling a Python object</code>. Maybe I'm not understanding what the argument of <code>setrecursionlimit</code> is?</p>
<p><strong>EDIT</strong>: I had the recursion limit as 5 before but that's not working on many people's machines so I changed it to be much higher to avoid confusion about what the source of the error is.</p>
| <python><recursion> | 2023-09-28 09:18:54 | 3 | 3,347 | Alexander Soare |
77,193,676 | 15,825,321 | Pandas merge data frames as of certain index | <p>I have two pandas data frames:</p>
<p><strong>x_axis:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>date</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>01-01-2023</td>
</tr>
<tr>
<td>2</td>
<td>02-01-2023</td>
</tr>
<tr>
<td>3</td>
<td>03-01-2023</td>
</tr>
<tr>
<td>4</td>
<td>04-01-2023</td>
</tr>
<tr>
<td>5</td>
<td>05-01-2023</td>
</tr>
<tr>
<td>6</td>
<td>06-01-2023</td>
</tr>
<tr>
<td>7</td>
<td>07-01-2023</td>
</tr>
<tr>
<td>8</td>
<td>08-01-2023</td>
</tr>
<tr>
<td>9</td>
<td>09-01-2023</td>
</tr>
</tbody>
</table>
</div>
<p><strong>df:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>snap_date</th>
<th>some_data</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>03-01-2023</td>
<td>12</td>
</tr>
<tr>
<td>2</td>
<td>04-01-2023</td>
<td>85</td>
</tr>
<tr>
<td>3</td>
<td>05-01-2023</td>
<td>46</td>
</tr>
<tr>
<td>4</td>
<td>06-01-2023</td>
<td>74285</td>
</tr>
<tr>
<td>5</td>
<td>0</td>
<td>427</td>
</tr>
<tr>
<td>6</td>
<td>0</td>
<td>452</td>
</tr>
</tbody>
</table>
</div>
<p>and I want to get such merge/concatination:</p>
<p><strong>desired_df:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>date</th>
<th>index_y</th>
<th>snap_date</th>
<th>some_data</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>01-01-2023</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>2</td>
<td>02-01-2023</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>3</td>
<td>03-01-2023</td>
<td>1</td>
<td>03-01-2023</td>
<td>12</td>
</tr>
<tr>
<td>4</td>
<td>04-01-2023</td>
<td>2</td>
<td>04-01-2023</td>
<td>85</td>
</tr>
<tr>
<td>5</td>
<td>05-01-2023</td>
<td>3</td>
<td>05-01-2023</td>
<td>46</td>
</tr>
<tr>
<td>6</td>
<td>06-01-2023</td>
<td>4</td>
<td>06-01-2023</td>
<td>74285</td>
</tr>
<tr>
<td>7</td>
<td>07-01-2023</td>
<td>5</td>
<td>0</td>
<td>427</td>
</tr>
<tr>
<td>8</td>
<td>08-01-2023</td>
<td>6</td>
<td>0</td>
<td>452</td>
</tr>
<tr>
<td>9</td>
<td>09-01-2023</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>Basically, I want to concatenate <strong>df</strong> to <strong>x_axis</strong> starting at the first match between <em>date</em> and <em>snap_date</em>, but I don't want to join on the dates, because rows index_y 5 and 6 (whose <em>snap_date</em> is 0) should also be included in <strong>desired_df</strong>.
Further info:</p>
<ul>
<li><em>snap_date</em> is always a subset of <em>date</em></li>
<li>Both <em>snap_date</em> and <em>date</em> are evenly spaced, sorted, unique, and free of NaNs; basically standard pandas date series.</li>
</ul>
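<p>For convenience, here is how the two frames can be constructed (reading the dates as 1 to 9 January 2023):</p>

```python
import pandas as pd

# x_axis: daily dates, 1-based index.
x_axis = pd.DataFrame(
    {"date": pd.date_range("2023-01-01", periods=9, freq="D")},
    index=range(1, 10),
)

# df: four matching snap_dates, then two rows with snap_date == 0.
df = pd.DataFrame(
    {
        "snap_date": list(pd.date_range("2023-01-03", periods=4, freq="D")) + [0, 0],
        "some_data": [12, 85, 46, 74285, 427, 452],
    },
    index=range(1, 7),
)
print(len(x_axis), len(df))  # 9 6
```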
<p>Thank you already!</p>
| <python><pandas><dataframe><join><concatenation> | 2023-09-28 09:16:13 | 2 | 303 | Paul1911 |
77,193,617 | 15,341,457 | Replace ASCII HTML characters when loading JSON | <p>I'm loading a JSON file made up of Yelp restaurant reviews like this, with <code>strict=False</code> so that embedded control characters don't break the parse:</p>
<pre><code>def parse_yelp_restaurant_api(self, response):
jsonresponse = json.loads(response.text, strict=False)
</code></pre>
<p>Now I would like to also remove ASCII HTML characters. My JSON file is full of '&#39', '&#34', etc.</p>
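<p><code>html.unescape</code> from the standard library looks related; here is a toy sketch of the kind of payload and the result I'm after (the field name is made up):</p>

```python
import html
import json

raw = '{"review": "Don&#39;t miss the &#34;carnitas&#34; tacos"}'
decoded = json.loads(raw)  # the entities survive json.loads as literal text
cleaned = {k: html.unescape(v) for k, v in decoded.items()}
print(cleaned["review"])  # Don't miss the "carnitas" tacos
```

<p>What I'm unsure about is the right place to hook this in inside <code>parse_yelp_restaurant_api</code>, since the real response is presumably nested rather than a flat dict.</p>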
| <python><html><json><unicode><ascii> | 2023-09-28 09:08:38 | 1 | 332 | Rodolfo |
77,193,572 | 1,128,648 | Python logging configuration issue | <p>I have a main script called data.py, which will call <code>logger.py</code> (for log configuration) and <code>login.py</code> (for login)</p>
<pre><code># data.py
from logger import configure_logger
from login import *
if __name__ == "__main__":
script_name = (f"C:\\datalog") #save logfile in a different directory than script location
logger = configure_logger(script_name)
logger.info(f"This log is from data.py")
</code></pre>
<pre><code># logger.py
import logging
from datetime import datetime as dt
def configure_logger(script_name):
py_name = ((__file__.split('\\')[-1]).split('.')[0]).upper()
log_format = f'%(asctime)s - %(levelname)s - {py_name} - %(message)s'
logging.basicConfig(filename=f"{script_name}_{dt.now().strftime('%Y%m%d-%H_%M_%S')}.log", level=logging.INFO, format=log_format)
logger = logging.getLogger()
return logger
</code></pre>
<pre><code># login.py
from logger import configure_logger
import os
import sys
script_name = os.path.splitext(os.path.basename(sys.argv[0]))[0]
logger = configure_logger(script_name)
logger.info(f"This log is from login.py")
</code></pre>
<p>My expected output in <code>datalog_<date_timestamp>.log</code>:</p>
<pre><code>2023-09-28 14:20:27,767 - INFO - LOGIN - This log is from login.py
2023-09-28 14:20:27,768 - INFO - DATA - This log is from data.py
</code></pre>
<p>But the above script is producing output like below to <code>data_<date_timestamp>.log</code> (neither the filename nor the name in the logfile is what I expected):</p>
<pre><code>2023-09-28 14:20:27,767 - INFO - LOGGER - This log is from login.py
2023-09-28 14:20:27,768 - INFO - LOGGER - This log is from data.py
</code></pre>
<p>My login and logger modules are shared and will be called from multiple main scripts like data.py.
I need to create the logfile based on the name given in my main script (in this case, data.py), and each log entry should include the name of the script that actually emits it. For example, if the entry comes from <code>login.py</code>, the name should be <code>LOGIN</code>, and if it comes from <code>data.py</code>, it should be <code>DATA</code>.</p>
<p>How can I achieve this?</p>
| <python><python-logging> | 2023-09-28 09:02:40 | 1 | 1,746 | acr |
77,193,458 | 1,256,495 | Download files from url using Python requests | <p>I have a generated URL that, when opened in the browser, downloads a file from the server. I am trying to simulate this download using the Python requests library:</p>
<pre><code>import requests

url = r'https://www.filedownloadserver.com?docId=7700915281958&projectNumber=aaa'
resp = requests.get(url, verify=False)
with open('test.pdf', 'wb') as file:
    file.write(resp.content)
</code></pre>
<p>The output PDF file is incorrect; <code>resp.content</code> contains JavaScript that looks like this:</p>
<pre><code>//<script>location.href=('https://login.fileserver.com/login1/?redirect='+encodeURIComponent(location.href));</script> location.href=('https://login.fileserver.com/login1/?redirect='+encodeURIComponent(location.href));
</code></pre>
<p>Is there any way I can get the actual file, given the above content?</p>
| <python><python-requests> | 2023-09-28 08:46:27 | 0 | 559 | ReverseEngineer |
77,193,397 | 9,756,752 | Pandas read_csv with seperators in header names | <p>I've got a text file similar to this:</p>
<pre><code>@ some comment
@ some comment
@ [...]
@ some comment
* NAME S BX BY
bla foo bar foo
"ACF" 1 2 3
"BGB" 4 5 6
"CSD" 7 8 9
</code></pre>
<p>I'm using the following to read in the file. Automatic detection of the header seems impossible because the first field, <code>* NAME</code>, contains the column separator in its name.</p>
<pre><code>import pandas as pd
df=pd.read_csv('test.txt',sep="\s+|\t+|\s+\t+|\t+\s+",names=["Name","S","BX","BY"],skiprows=4)
</code></pre>
<ol>
<li>How to automatically detect the header names?</li>
<li>How to remove comments and the <code>bla...</code> line below the header?</li>
</ol>
| <python><pandas> | 2023-09-28 08:36:31 | 3 | 705 | Marvin Noll |
77,193,349 | 507,852 | Python TLS socket: how to detect connection closed by server due to certificate failure? | <p>I'm trying to write a TLS client using Python 3, and I can't figure out the proper way to detect and handle a connection rejected by the server.</p>
<p>If the TLS server requires client certificate but the client didn't call <code>SSLContext.load_cert_chain()</code> or assigned wrong certificate, the connection will be terminated by server. The problem is, client side will not detect it until the next call to <code>SSLSocket.recv()</code>.</p>
<p>For example, the code below creates a client TLS socket, connects to the server, and keeps polling it for data:</p>
<pre><code>import ssl
import socket
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.verify_mode = ssl.CERT_REQUIRED
context.check_hostname = False
# context.load_cert_chain(certfile='client.crt', keyfile='client.key')
context.load_verify_locations(cafile='ca.crt')
with socket.create_connection(('127.0.0.1', 12345)) as client:
with context.wrap_socket(client, server_hostname='example.com') as ssock:
ssock.sendall(b'Hello, world\n')
while 1:
data = ssock.recv(1024)
if not data:
ssock.close()
exit()
do_something(data)
</code></pre>
<p>Suppose the server side is running in OpenSSL with <code>-Verify=1</code>:</p>
<pre><code>openssl s_server -port 12345 -CAfile ca.crt -cert server.crt -key server.key -Verify 1
</code></pre>
<p>With the <code>context.load_cert_chain()</code> call commented out in the client, the connection is rejected and dropped by the server:</p>
<pre><code>ERROR
00A77D53F87F0000:error:0A0000C7:SSL routines:tls_process_client_certificate:peer did not return a certificate:ssl/statem/statem_srvr.c:3511:
shutting down SSL
CONNECTION CLOSED
</code></pre>
<p>But the client side will not know this until the first call to <code>ssock.recv()</code> in the <code>while 1</code> loop:</p>
<pre><code>Traceback (most recent call last):
File "client.py", line 13, in <module>
data = ssock.recv(1024)
^^^^^^^^^^^^^^^^
File "***/python3.11/ssl.py", line 1263, in recv
return self.read(buflen)
^^^^^^^^^^^^^^^^^
File "***/python3.11/ssl.py", line 1136, in read
return self._sslobj.read(len)
^^^^^^^^^^^^^^^^^^^^^^
ssl.SSLError: [SSL: TLSV13_ALERT_CERTIFICATE_REQUIRED] tlsv13 alert certificate required (_ssl.c:2576)
</code></pre>
<p>Is there any way to detect that the server closed the connection before entering the <code>while 1</code> loop?</p>
| <python><ssl> | 2023-09-28 08:27:41 | 0 | 1,982 | RichardLiu |
77,193,320 | 4,399,016 | Applying a function to a Pandas Dataframe | <p>I have <a href="https://github.com/voice32/stock_market_indicators/blob/master/indicators.py" rel="nofollow noreferrer">this code</a>, and I need to pass a pandas DataFrame to it as a parameter. It returns errors.</p>
<p>The <a href="https://download.esignal.com/products/workstation/help/charts/studies/acc_dist.htm" rel="nofollow noreferrer">Logic for the Technical Analysis Indicator is this</a></p>
<pre><code>def williams_ad(data, high_col='High', low_col='Low', close_col='Close'):
data['williams_ad'] = 0.
for index,row in data.iterrows():
if index > 0:
prev_value = data.at[index-1, 'williams_ad']
prev_close = data.at[index-1, close_col]
if row[close_col] > prev_close:
ad = row[close_col] - min(prev_close, row[low_col])
elif row[close_col] < prev_close:
ad = row[close_col] - max(prev_close, row[high_col])
else:
ad = 0.
data.set_value(index, 'williams_ad', (ad+prev_value))
return data
</code></pre>
<p>Documentation for the above code.</p>
<pre><code>William's Accumulation/Distribution
Source: https://www.metastock.com/customer/resources/taaz/?p=125
Params:
data: pandas DataFrame
high_col: the name of the HIGH values column
low_col: the name of the LOW values column
close_col: the name of the CLOSE values column
Returns:
copy of 'data' DataFrame with 'williams_ad' column added
</code></pre>
<p>What is the right way to use this?</p>
<pre><code>williams_ad()
</code></pre>
<p>I tried several approaches but was unable to debug it.</p>
<pre><code>import pandas as pd
import yfinance as yF
import datetime
df = yF.download(tickers = "SPY", # list of tickers
period = "5y", # time period
interval = "1d", # trading interval
prepost = False, # download pre/post market hours data?
repair = True) # repair obvious price errors e.g. 100x?
</code></pre>
<p>Now I tried to pass df as an argument in place of data.</p>
<pre><code>williams_ad(df)
</code></pre>
<p>I get a type error:</p>
<pre><code>TypeError: '>' not supported between instances of 'Timestamp' and 'int'
</code></pre>
<p>The default index of my pandas DataFrame is a date Timestamp. The first <code>if</code> condition checks whether <code>index > 0</code>, and this comparison raises the error.
How can I overcome this issue?</p>
<p>As requested in the comments, the dtypes:</p>
<pre><code>Open float64
High float64
Low float64
Close float64
Adj Close float64
Volume int64
williams_ad float64
</code></pre>
| <python><pandas><technical-indicator> | 2023-09-28 08:23:01 | 1 | 680 | prashanth manohar |
77,193,310 | 17,160,160 | Insert series ending at last valid index in dataframe column. Pandas | <p>Given a dataframe that contains a combination of null and numeric values in which each series of numeric values is always located together and is never interspersed with nulls. Such as:</p>
<pre><code>df1 = pd.DataFrame({
'A': [1, 2, 3, np.nan, np.nan],
'B': [np.nan, np.nan, 1, 2, 3],
'C': [np.nan, 1, 2, 3, np.nan]
})
A B C
0 1.0 NaN NaN
1 2.0 NaN 1.0
2 3.0 1.0 2.0
3 NaN 2.0 3.0
4 NaN 3.0 NaN
</code></pre>
<p><strong>Desired Output</strong><br />
I'd like to create a second data frame with identical index and columns in which a defined series is inserted so that it ends at the index of the last non-null value for each column in df1.</p>
<p>Note that the length of the defined series will differ to the length of non-null values in each column. i.e.</p>
<pre><code>new_data = ['A','B']
A B C
0 NaN NaN NaN
1 A NaN NaN
2 B NaN A
3 NaN A B
4 NaN B NaN
</code></pre>
<p><strong>Current Approach</strong><br />
My current approach achieves this by creating an empty dataframe, looping through each column, defining the index range and assigning the new data:</p>
<pre><code>new_data = ['A','B']
df2 = pd.DataFrame(columns = df1.columns, index = df1.index)
for col in df2:
end = df1[col].last_valid_index()+1
start = end - len(data)
df2[col][start:end] = new_data
A B C
0 NaN NaN NaN
1 A NaN NaN
2 B NaN A
3 NaN A B
4 NaN B NaN
</code></pre>
<p>While this works, it feels somewhat brute force and I hoped to find a more elegant solution please.</p>
| <python><pandas> | 2023-09-28 08:21:27 | 1 | 609 | r0bt |
77,193,264 | 10,916,136 | How to make a tkinter label appear and disappear based on a time condition | <p>The question is related to these questions:</p>
<p><a href="https://stackoverflow.com/questions/22485225/how-do-i-get-rid-of-a-label-in-tkinter">How do I get rid of a label in TkInter?</a></p>
<p><a href="https://stackoverflow.com/questions/34276672/how-to-make-a-label-appear-then-disappear-after-a-certain-amount-of-time-in-pyth">How to make a Label appear then disappear after a certain amount of time in python tkinter</a></p>
<p>I am trying to create a tkinter object where there are two labels.</p>
<p>The first label shows static text.
The second label appears to show a message at every configured time interval (e.g., a pomodoro reminder every 20 minutes).</p>
<p>I am struggling to hide the label and then make it appear again.</p>
<p>One option is to keep the label but set its text to blank, but that is not useful in my case; I need the label itself to disappear completely.</p>
<p>Is it possible to add a label at a set time and then make it disappear again?</p>
<p>In the code below I have taken a simple example: show the current seconds value when it is a multiple of 5, and otherwise hide the label. I will adapt it to other conditions once I understand the concept.</p>
<p>I have tried the approach below, but it either results in a "variable referenced before assignment" error or keeps adding labels one below the other without any disappearing.</p>
<p>Code:</p>
<pre><code>from tkinter import Tk, Label
import time
app = Tk()
label1 = Label(app)
label1.pack()
label1.config(text='Static Text')
label2 = Label(app)
label2.pack()
def draw():
    #label2 = Label(app) #create it again after destroy()?
    #label2.pack() #pack it again after pack_forget?
    t = time.strftime('%S')
    if int(t) % 5 == 0:
        label2.config(text=t)
    else:
        pass #remove label code e.g. label2.pack_forget() or label2.destroy()
    label1.after(1000, draw)

draw()
app.mainloop()
</code></pre>
<p>Please help me make the question more helpful for this community, if it isn't in the requisite format.</p>
| <python><tkinter> | 2023-09-28 08:15:05 | 1 | 571 | Veki |
77,193,088 | 292,502 | How to perform inference with a Llava Llama model deployed to SageMaker from Huggingface? | <p>I deployed a Llava Llama Huggingface model (<a href="https://huggingface.co/liuhaotian/llava-llama-2-13b-chat-lightning-preview/discussions/3" rel="nofollow noreferrer">https://huggingface.co/liuhaotian/llava-llama-2-13b-chat-lightning-preview/discussions/3</a>) to a SageMaker Domain + Endpoint by using the deployment card provided by Huggingface:</p>
<pre><code>import sagemaker
import boto3
from sagemaker.huggingface import HuggingFaceModel

try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client('iam')
    role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']

# Hub Model configuration. https://huggingface.co/models
hub = {
    'HF_MODEL_ID': 'liuhaotian/llava-llama-2-13b-chat-lightning-preview',
    'HF_TASK': 'text-generation'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    transformers_version='4.26.0',
    pytorch_version='1.13.1',
    py_version='py39',
    env=hub,
    role=role,
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,  # number of instances
    instance_type='ml.m5.xlarge'  # ec2 instance type
)
</code></pre>
<p>The deployment sets the <code>HF_TASK</code> as <code>text-generation</code>. Llava Llama, however, is a multimodal text + image model. So the big question is how I'd perform an inference / prediction. I'd need to pass the image and also the text prompt. Other image + text APIs such as Imagen or Chooch accept base64-encoded image data. I know I need to do more than that, since, for example, the models are trained on a dataset of a specific dimension (I think for the Llava Llama model it might be 336x336), and Imagen or Chooch as PaaS services do the cropping / resizing / padding.</p>
<p>Llava Llama has a demo page <a href="https://llava-vl.github.io/" rel="nofollow noreferrer">https://llava-vl.github.io/</a> which uses Gradio user interface. So I cannot tell where and how is the model hosted. However we might be able to decipher the solution from the source code. This <code>get_image</code> function is I think important, it does the resize / crop / pad: <a href="https://github.com/haotian-liu/LLaVA/blob/a4269fbf014af3cab1f1d172914493fae8b74820/llava/conversation.py#L109" rel="nofollow noreferrer">https://github.com/haotian-liu/LLaVA/blob/a4269fbf014af3cab1f1d172914493fae8b74820/llava/conversation.py#L109</a> and that is invoked from <a href="https://github.com/haotian-liu/LLaVA/blob/a4269fbf014af3cab1f1d172914493fae8b74820/llava/serve/gradio_web_server.py#L138" rel="nofollow noreferrer">https://github.com/haotian-liu/LLaVA/blob/a4269fbf014af3cab1f1d172914493fae8b74820/llava/serve/gradio_web_server.py#L138</a></p>
<p>We can see that there will be some magic tokens which will mark the beginning and end of the image and separate the text prompt (<a href="https://github.com/haotian-liu/LLaVA/blob/a4269fbf014af3cab1f1d172914493fae8b74820/llava/serve/gradio_web_server.py#L154" rel="nofollow noreferrer">https://github.com/haotian-liu/LLaVA/blob/a4269fbf014af3cab1f1d172914493fae8b74820/llava/serve/gradio_web_server.py#L154</a>). We can see that the text prompt is truncated to 1536 tokens (?) for text to image generation mode and 1200 tokens for image QnA mode. A compound prompt is assembled with the help of these tokens (<a href="https://github.com/haotian-liu/LLaVA/blob/a4269fbf014af3cab1f1d172914493fae8b74820/llava/conversation.py#L287" rel="nofollow noreferrer">https://github.com/haotian-liu/LLaVA/blob/a4269fbf014af3cab1f1d172914493fae8b74820/llava/conversation.py#L287</a>) and templates (<a href="https://github.com/haotian-liu/LLaVA/blob/a4269fbf014af3cab1f1d172914493fae8b74820/llava/conversation.py#L71" rel="nofollow noreferrer">https://github.com/haotian-liu/LLaVA/blob/a4269fbf014af3cab1f1d172914493fae8b74820/llava/conversation.py#L71</a>). The image is also appended as a base64 string, in PNG format: <a href="https://github.com/haotian-liu/LLaVA/blob/a4269fbf014af3cab1f1d172914493fae8b74820/llava/conversation.py#L154" rel="nofollow noreferrer">https://github.com/haotian-liu/LLaVA/blob/a4269fbf014af3cab1f1d172914493fae8b74820/llava/conversation.py#L154</a></p>
<p>When I try to invoke the endpoint for an inference / prediction:</p>
<pre><code>from sagemaker.predictor import Predictor
from base64 import b64encode

endpoint = 'huggingface-pytorch-inference-2023-09-23-08-55-26-117'
ENCODING = "utf-8"
IMAGE_NAME = "eiffel_tower_336.jpg"

payload = {
    "inputs": "Describe the content of the image in great detail ",
}

with open(IMAGE_NAME, 'rb') as f:
    byte_content = f.read()
    base64_bytes = b64encode(byte_content)
    base64_string = base64_bytes.decode(ENCODING)

predictor = Predictor(endpoint)
inference_response = predictor.predict(data=payload)
print(inference_response)
</code></pre>
<p>I get this error: <code>ParamValidationError: Parameter validation failed: Invalid type for parameter Body, value: {'inputs': 'Describe the content of the image in great detail '}, type: <class 'dict'>, valid types: <class 'bytes'>, <class 'bytearray'>, file-like object</code></p>
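<p>That particular error is only about the transport type: the request body must be bytes, not a Python dict. Below is a minimal sketch of serializing the payload first; this only clears the validation error, and whether the endpoint then understands the format is the separate, bigger question:</p>

```python
import json

# The payload from above; whether this schema is what the model
# actually expects is exactly what I'm unsure about.
payload = {"inputs": "Describe the content of the image in great detail "}

# Serialize the dict to JSON bytes, the type the Body parameter accepts.
body = json.dumps(payload).encode("utf-8")
print(type(body).__name__)  # bytes

# With the sagemaker SDK one could instead attach a serializer, e.g.:
#   from sagemaker.serializers import JSONSerializer
#   predictor.serializer = JSONSerializer()
#   predictor.predict(payload)
```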
<p>This HuggingFace discussion says (<a href="https://discuss.huggingface.co/t/can-text-to-image-models-be-deployed-to-a-sagemaker-endpoint/20120" rel="nofollow noreferrer">https://discuss.huggingface.co/t/can-text-to-image-models-be-deployed-to-a-sagemaker-endpoint/20120</a>) that an <code>inference.py</code> needs to be created. I don't know what the Llava Llama model ships with, though. I tried to look at the files of the model, but I don't see relevant metadata about this.</p>
<p>This StackOverflow entry <a href="https://stackoverflow.com/questions/76197446/how-to-do-model-inference-on-a-multimodal-model-from-hugginface-using-sagemaker">How to do model inference on a multimodal model from hugginface using sagemaker</a> is about a serverless deployment case, but it uses a custom TextImageSerializer serializer. Should I try to use something like that?</p>
<p>This Redditor suggests (<a href="https://www.reddit.com/r/LocalLLaMA/comments/16pzn88/how_to_parametrize_a_llava_llama_model/" rel="nofollow noreferrer">https://www.reddit.com/r/LocalLLaMA/comments/16pzn88/how_to_parametrize_a_llava_llama_model/</a>) some kind of CLIP encoding. I'm uncertain whether I really need to do that myself or whether the model can do the encoding.</p>
<p>Other references:</p>
<ul>
<li>Asking at the model: <a href="https://huggingface.co/liuhaotian/llava-llama-2-13b-chat-lightning-preview/discussions/3" rel="nofollow noreferrer">https://huggingface.co/liuhaotian/llava-llama-2-13b-chat-lightning-preview/discussions/3</a></li>
<li>GitHub Discussion: <a href="https://github.com/haotian-liu/LLaVA/discussions/454" rel="nofollow noreferrer">https://github.com/haotian-liu/LLaVA/discussions/454</a></li>
<li>HuggingFace Discussion: <a href="https://discuss.huggingface.co/t/how-to-use-llava-with-huggingface/52315" rel="nofollow noreferrer">https://discuss.huggingface.co/t/how-to-use-llava-with-huggingface/52315</a></li>
<li>Reddit: <a href="https://www.reddit.com/r/LocalLLaMA/comments/16pzn88/how_to_parametrize_a_llava_llama_model/" rel="nofollow noreferrer">https://www.reddit.com/r/LocalLLaMA/comments/16pzn88/how_to_parametrize_a_llava_llama_model/</a></li>
</ul>
| <python><amazon-sagemaker><huggingface><huggingface-hub> | 2023-09-28 07:48:32 | 3 | 10,879 | Csaba Toth |
77,193,060 | 3,871,575 | Parsing string that looks like a list using ConfigArgParse | <p>I am using Python 3.9.16 and <code>ConfigArgParse==1.7</code>.</p>
<p>I have conf file like this:</p>
<pre class="lang-ini prettyprint-override"><code>[conf]
example = [something_in_brackets]
</code></pre>
<p>I am trying to parse the config like this:</p>
<pre><code>import configargparse
p = configargparse.ArgParser(default_config_files=['conf.ini'])
p.add('--example')
conf = p.parse_args()
print(conf.example)
</code></pre>
<p>I want to read a certain config value as a string, but sometimes the value will be in brackets, making it look like a list.
When this happens, <code>ConfigArgParse</code> gives the following error:</p>
<pre><code>parse.py: error: example can't be set to a list '['something_in_brackets']' unless its action type is changed to 'append' or nargs is set to '*', '+', or > 1
</code></pre>
<p>Quoting the value in the ini file does not change the behaviour.</p>
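<p>For what it's worth, reading the same file with the stdlib <code>configparser</code> (a diagnostic sketch, using a temporary file instead of my real <code>conf.ini</code>) keeps the raw bracketed string, so the list coercion seems to happen inside <code>ConfigArgParse</code> itself:</p>

```python
import configparser
import os
import tempfile

# Recreate the conf file from the question in a temporary location.
with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
    f.write("[conf]\nexample = [something_in_brackets]\n")
    path = f.name

cp = configparser.ConfigParser()
cp.read(path)
value = cp["conf"]["example"]
print(value)  # [something_in_brackets]
os.unlink(path)
```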
<p>Using the <code>append</code> action type or the <code>nargs</code> values suggested in the error message changes my config value to an undesired form: <code>['something_in_brackets']</code>, while it should be <code>[something_in_brackets]</code>.</p>
<p>I have also experimented with options of <code>p.add</code> such as <code>type=str</code> but I could not find a way to reach desired result.</p>
<p>Is it possible to parse config values like <code>[example]</code> with <code>ConfigArgParse</code> without having them turn into lists?</p>
| <python> | 2023-09-28 07:43:40 | 1 | 568 | Madoc Comadrin |
77,193,055 | 2,153,235 | Spyder Startup command creates SparkSession object and *two* Web UIs | <p>In my Spyder preferences, I have the following IPython console Startup command:</p>
<pre><code>from pyspark.sql import SparkSession; spark = SparkSession.builder.appName("SparkExamples.com").getOrCreate()
</code></pre>
<p>This launches <em>two</em> Web UIs on ports 4040 and 4041.</p>
<p>When I comment out the Startup and issue the same command from the IPython prompt in Spyder, I only get <em>one</em> Web UI on port 4040.</p>
<p>When the Startup command is not commented out, I find that the following
steps are needed to kill the two resulting Web UIs:</p>
<ul>
<li>Issuing spark.stop() kills the 2nd one only</li>
<li>Issuing sys.exit() (doesn't restart kernel) leaves first one still running</li>
<li>Issuing exit() restarts the kernel and kills the first one</li>
</ul>
<p><em><strong>Why does the Startup command create <em>two</em> Web UIs?</strong></em> I only have one Spyder console,
and hence, presumably one kernel.</p>
<p>I installed Python, Spyder, Java, and PySpark using Anaconda on Windows 10.</p>
<p>Depending on how unique my problem is, it could be difficult for others to comment on the reason and remedy. However, the <em>apparent</em> cause doesn't seem all that unorthodox, at least to my newbie eyes, i.e., the Startup command above. Therefore, <em><strong>it would be helpful if others could corroborate the problem</strong></em> in their Spyder/Spark setup, or the absence thereof. That is, two Web UIs when the Startup command is set, but only one Web UI if the same command is issued at the Spyder console instead. Thanks.</p>
<h2>Annex: Why restart IPython console so often?</h2>
<p>I get heartbeat timeout warnings that I <a href="https://stackoverflow.com/questions/76848115">posted about
before</a>. A side-effect of these timeouts, however, is that many
invocations to objects in other namespaces yield the error
<code>ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it</code>:</p>
<pre><code># works
spark.read? # The "?" suffix pulls up the doc string in Spyder
# Don't work
spark.read.csv?
df.show() # df is a DataFrame object
df = spark.read.csv(
    r"C:\cygwin64\home\User.Name\tmp\zipcodes.csv",
    header=True)
spark.stop()
</code></pre>
<p>After restarting the kernel using <code>exit()</code> and recreating all
objects, the above commands work again.</p>
| <python><apache-spark><pyspark><spyder> | 2023-09-28 07:43:01 | 1 | 1,265 | user2153235 |
77,192,757 | 9,166,673 | Gradio pop up display on success | <p>Here is sample code for a Gradio app mounted on a FastAPI app.</p>
<pre class="lang-py prettyprint-override"><code>import gradio as gr
from fastapi import FastAPI
from starlette.responses import RedirectResponse
from starlette.requests import Request

app = FastAPI()

def submit(message):
    print(f"Saving message: {message}")

@app.get('/')
async def homepage(request: Request):
    return RedirectResponse(url='/home')

with gr.Blocks(title="TEST") as demo:
    gr.Markdown("TEST APP NAME")
    with gr.Row():
        with gr.Column():
            message = gr.components.Textbox(label="Message", interactive=True)
            btn3 = gr.Button("Save")
            btn3.click(
                submit,
                inputs=[message],
                outputs=None
            ).success(None, _js="window.location.reload()")

gradio_app = gr.mount_gradio_app(app, demo, "/home")
</code></pre>
<p>Output of above sample code:
<a href="https://i.sstatic.net/bcq4c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bcq4c.png" alt="enter image description here" /></a></p>
<p>How can I show a success pop-up message, e.g. "Submit Successful", when the Save button is pressed?</p>
| <python><fastapi><gradio> | 2023-09-28 06:56:14 | 2 | 845 | Shubhank Gupta |
77,192,718 | 8,741,562 | How to generate an auth token and list out the resource groups in azure using python? | <p>I have tried with the below code:</p>
<pre><code>from azure.identity import ClientSecretCredential
import requests

subscription_id = 'MYSUBID'
client_id = 'MYCLIENTID'
client_secret = 'MYSECRETVALUE'
tenant_id = 'MYTENANTID'

# Create a ClientSecretCredential object
credential = ClientSecretCredential(tenant_id=tenant_id, client_id=client_id, client_secret=client_secret)

url = f"https://management.azure.com/subscriptions/{subscription_id}/resourcegroups?api-version=2021-04-01"

# Get an access token for the Azure Management API
access_token = credential.get_token("https://management.azure.com/.default")

# Make the GET request to retrieve a list of resource groups
headers = {
    "Authorization": f"Bearer {access_token}"
}
response = requests.get(url, headers=headers)

if response.status_code == 200:
    resource_groups = response.json()
    for rg in resource_groups['value']:
        print(rg['name'])
else:
    print(response.status_code, "-", response.text)
</code></pre>
<p>So this code gives me the below error:</p>
<p>403 - {"error":{"code":"AuthorizationFailed","message":"The client 'f89e9744-3f48-444c-bf6f-525d15974a46' with object id 'f89e9744-3f48-444c-bf6f-525d15974a46' does not have authorization to perform action 'Microsoft.Resources/subscriptions/resourcegroups/read' over scope '/subscriptions/MYSUBID' or the scope is invalid. If access was recently granted, please refresh your credentials."}}</p>
<p>But when I used this website to list them: <a href="https://learn.microsoft.com/en-us/rest/api/resources/resource-groups/list?tryIt=true&source=docs#code-try-0" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/rest/api/resources/resource-groups/list?tryIt=true&source=docs#code-try-0</a></p>
<p>It successfully lists the resource groups, which makes me think the bearer/auth token it uses is different from mine.</p>
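<p>One detail worth double-checking (an observation about the <code>azure-identity</code> API, not necessarily the cause of the 403): <code>get_token()</code> returns an <code>AccessToken</code> object, not a plain string, so the f-string in the code above embeds the whole object rather than the JWT, which lives on its <code>.token</code> attribute. Simulated here with a named tuple of the same shape so it runs without credentials:</p>

```python
from collections import namedtuple

# Stand-in for azure.core.credentials.AccessToken, which is a named
# tuple of (token, expires_on); the token value here is fake.
AccessToken = namedtuple("AccessToken", ["token", "expires_on"])
access_token = AccessToken(token="eyJfake", expires_on=1700000000)

wrong_header = f"Bearer {access_token}"        # embeds the tuple repr
right_header = f"Bearer {access_token.token}"  # just the token string

print(wrong_header)
print(right_header)  # Bearer eyJfake
```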
<p>Any help resolving this issue would be appreciated.</p>
| <python><python-3.x><azure><azure-active-directory><azure-web-app-service> | 2023-09-28 06:49:07 | 1 | 1,070 | Navi |
77,192,651 | 19,130,803 | Prevent ReferenceError in multi-page dash application | <p>I am developing a <code>multi-page</code> dash application. My project structure is as below:</p>
<pre><code>- project/
- pages/
- home.py
- graph.py
- app.py
- index.py
</code></pre>
<p>On the <code>app.py</code> page, I have a theme switch button. The app loads, and when I change the theme color, the theme is reflected on the current page. But I am getting an error as below:</p>
<pre><code>ReferenceError: A nonexistent object was used in an `Input` of a Dash callback. The id of this object is `some_component` and the property is `some_property`. The string ids in the current layout are:
[some ids]
</code></pre>
<p>It looks like the theme switch, which is also an <code>Input</code> in <code>graph.py</code>, triggers that page's callback even though the page is not loaded.</p>
<p>I have tried:</p>
<ul>
<li><code>prevent_initial_call=True</code> in the callbacks</li>
<li><code>suppress_callback_exceptions=True</code> in the Dash constructor</li>
<li>a try-except block, which fails to catch the exception, e.g. in <code>graph.py</code>:</li>
</ul>
<pre><code>if triggered_id == "some_id":
    try:
        some code
    except Exception:
        raise PreventUpdate
</code></pre>
<p>But I am still getting the error. Is there a way to avoid this or catch this exception?</p>
| <python><plotly><plotly-dash> | 2023-09-28 06:36:27 | 0 | 962 | winter |
77,192,262 | 5,131,394 | Heroku Deployment: ocrmypdf.exceptions.MissingDependencyError: tesseract | <p>I'm trying to deploy a FastAPI application to Heroku that uses the ocrmypdf package for OCR (Optical Character Recognition). The application works fine locally, but on Heroku, I get a missing dependency error for tesseract.</p>
<p>Here are the relevant logs:</p>
<pre><code> 2023-09-28T04:57:02.190892+00:00 heroku[web.1]: State changed from starting to up
2023-09-28T04:57:04.351961+00:00 app[web.1]: [2023-09-28 04:57:04 +0000] [10] [INFO] Started server process [10]
2023-09-28T04:57:04.352002+00:00 app[web.1]: [2023-09-28 04:57:04 +0000] [10] [INFO] Waiting for application startup.
2023-09-28T04:57:04.352226+00:00 app[web.1]: [2023-09-28 04:57:04 +0000] [10] [INFO] Application startup complete.
2023-09-28T04:57:04.352573+00:00 app[web.1]: [2023-09-28 04:57:04 +0000] [8] [INFO] Started server process [8]
2023-09-28T04:57:04.352646+00:00 app[web.1]: [2023-09-28 04:57:04 +0000] [8] [INFO] Waiting for application startup.
2023-09-28T04:57:04.352835+00:00 app[web.1]: [2023-09-28 04:57:04 +0000] [8] [INFO] Application startup complete.
2023-09-28T04:57:04.353501+00:00 app[web.1]: [2023-09-28 04:57:04 +0000] [9] [INFO] Started server process [9]
2023-09-28T04:57:04.353548+00:00 app[web.1]: [2023-09-28 04:57:04 +0000] [9] [INFO] Waiting for application startup.
2023-09-28T04:57:04.353743+00:00 app[web.1]: [2023-09-28 04:57:04 +0000] [9] [INFO] Application startup complete.
2023-09-28T04:57:04.353866+00:00 app[web.1]: [2023-09-28 04:57:04 +0000] [7] [INFO] Started server process [7]
2023-09-28T04:57:04.353923+00:00 app[web.1]: [2023-09-28 04:57:04 +0000] [7] [INFO] Waiting for application startup.
2023-09-28T04:57:04.354135+00:00 app[web.1]: [2023-09-28 04:57:04 +0000] [7] [INFO] Application startup complete.
2023-09-28T04:57:04.420648+00:00 app[web.1]: 102.38.199.5:0 - "POST /upload/ HTTP/1.1" 500
2023-09-28T04:57:04.425146+00:00 app[web.1]: [2023-09-28 04:57:04 +0000] [10] [ERROR] Exception in ASGI application
2023-09-28T04:57:04.425147+00:00 app[web.1]: Traceback (most recent call last):
2023-09-28T04:57:04.425147+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
2023-09-28T04:57:04.425148+00:00 app[web.1]: result = await app( # type: ignore[func-returns-value]
2023-09-28T04:57:04.425148+00:00 app[web.1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-28T04:57:04.425148+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
2023-09-28T04:57:04.425149+00:00 app[web.1]: return await self.app(scope, receive, send)
2023-09-28T04:57:04.425149+00:00 app[web.1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-28T04:57:04.425168+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/fastapi/applications.py", line 292, in __call__
2023-09-28T04:57:04.425169+00:00 app[web.1]: await super().__call__(scope, receive, send)
2023-09-28T04:57:04.425169+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
2023-09-28T04:57:04.425169+00:00 app[web.1]: await self.middleware_stack(scope, receive, send)
2023-09-28T04:57:04.425169+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
2023-09-28T04:57:04.425170+00:00 app[web.1]: raise exc
2023-09-28T04:57:04.425170+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
2023-09-28T04:57:04.425170+00:00 app[web.1]: await self.app(scope, receive, _send)
2023-09-28T04:57:04.425171+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/starlette/middleware/cors.py", line 91, in __call__
2023-09-28T04:57:04.425171+00:00 app[web.1]: await self.simple_response(scope, receive, send, request_headers=headers)
2023-09-28T04:57:04.425172+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/starlette/middleware/cors.py", line 146, in simple_response
2023-09-28T04:57:04.425172+00:00 app[web.1]: await self.app(scope, receive, send)
2023-09-28T04:57:04.425172+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
2023-09-28T04:57:04.425172+00:00 app[web.1]: raise exc
2023-09-28T04:57:04.425172+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
2023-09-28T04:57:04.425173+00:00 app[web.1]: await self.app(scope, receive, sender)
2023-09-28T04:57:04.425173+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
2023-09-28T04:57:04.425173+00:00 app[web.1]: raise e
2023-09-28T04:57:04.425173+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
2023-09-28T04:57:04.425173+00:00 app[web.1]: await self.app(scope, receive, send)
2023-09-28T04:57:04.425173+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
2023-09-28T04:57:04.425174+00:00 app[web.1]: await route.handle(scope, receive, send)
2023-09-28T04:57:04.425174+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
2023-09-28T04:57:04.425174+00:00 app[web.1]: await self.app(scope, receive, send)
2023-09-28T04:57:04.425174+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
2023-09-28T04:57:04.425174+00:00 app[web.1]: response = await func(request)
2023-09-28T04:57:04.425175+00:00 app[web.1]: ^^^^^^^^^^^^^^^^^^^
2023-09-28T04:57:04.425175+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/fastapi/routing.py", line 273, in app
2023-09-28T04:57:04.425175+00:00 app[web.1]: raw_response = await run_endpoint_function(
2023-09-28T04:57:04.425175+00:00 app[web.1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-28T04:57:04.425176+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/fastapi/routing.py", line 190, in run_endpoint_function
2023-09-28T04:57:04.425176+00:00 app[web.1]: return await dependant.call(**values)
2023-09-28T04:57:04.425176+00:00 app[web.1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-28T04:57:04.425176+00:00 app[web.1]: File "/app/app/main.py", line 109, in upload_files
2023-09-28T04:57:04.425176+00:00 app[web.1]: ocrmypdf.ocr(temp_pdf_path, output_pdf_path, deskew=True, force_ocr=True)
2023-09-28T04:57:04.425176+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/ocrmypdf/api.py", line 352, in ocr
2023-09-28T04:57:04.425177+00:00 app[web.1]: check_options(options, plugin_manager)
2023-09-28T04:57:04.425177+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/ocrmypdf/_validation.py", line 245, in check_options
2023-09-28T04:57:04.425177+00:00 app[web.1]: _check_plugin_options(options, plugin_manager)
2023-09-28T04:57:04.425177+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/ocrmypdf/_validation.py", line 238, in _check_plugin_options
2023-09-28T04:57:04.425177+00:00 app[web.1]: plugin_manager.hook.check_options(options=options)
2023-09-28T04:57:04.425177+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/pluggy/_hooks.py", line 493, in __call__
2023-09-28T04:57:04.425177+00:00 app[web.1]: return self._hookexec(self.name, self._hookimpls, kwargs, firstresult)
2023-09-28T04:57:04.425178+00:00 app[web.1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-28T04:57:04.425178+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/pluggy/_manager.py", line 115, in _hookexec
2023-09-28T04:57:04.425178+00:00 app[web.1]: return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
2023-09-28T04:57:04.425178+00:00 app[web.1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-28T04:57:04.425180+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/pluggy/_callers.py", line 113, in _multicall
2023-09-28T04:57:04.425180+00:00 app[web.1]: raise exception.with_traceback(exception.__traceback__)
2023-09-28T04:57:04.425181+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/pluggy/_callers.py", line 77, in _multicall
2023-09-28T04:57:04.425181+00:00 app[web.1]: res = hook_impl.function(*args)
2023-09-28T04:57:04.425182+00:00 app[web.1]: ^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-28T04:57:04.425182+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/ocrmypdf/builtin_plugins/tesseract_ocr.py", line 139, in check_options
2023-09-28T04:57:04.425182+00:00 app[web.1]: check_external_program(
2023-09-28T04:57:04.425183+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.11/site-packages/ocrmypdf/subprocess/__init__.py", line 340, in check_external_program
2023-09-28T04:57:04.425183+00:00 app[web.1]: raise MissingDependencyError(program)
2023-09-28T04:57:04.425183+00:00 app[web.1]: ocrmypdf.exceptions.MissingDependencyError: tesseract
2023-09-28T04:57:04.425738+00:00 heroku[router]: at=info method=POST path="/upload/" host=legal-tools-backend-036eb0ac010e.herokuapp.com request_id=5c6e9753-9172-4962-9196-fec0d86d0205 fwd="102.38.199.5" dyno=web.1 connect=0ms service=543ms status=500 bytes=193 protocol=https
</code></pre>
<p>I've already tried:</p>
<ul>
<li>Added <a href="https://elements.heroku.com/buildpacks/pathwaysmedical/heroku-buildpack-tesseract" rel="nofollow noreferrer">the Tesseract buildpack</a> to my Heroku app.</li>
<li>Included an Aptfile with tesseract-ocr listed.</li>
<li>doing this in Procfile: <code>web: TESSDATA_PREFIX=./.apt/usr/share/tesseract-ocr/4.00/tessdata gunicorn -w 4 -k uvicorn.workers.UvicornWorker app.main:app</code></li>
</ul>
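<p>For debugging, one generic check (not specific to ocrmypdf) that might help is confirming from a one-off dyno (<code>heroku run python</code>) whether the <code>tesseract</code> binary is visible on <code>PATH</code> at all:</p>

```python
import os
import shutil

# shutil.which returns the full path to the binary if it is on PATH,
# or None if this process cannot see it at all.
print(shutil.which("tesseract"))

# The PATH the dyno actually uses -- the buildpack's bin directory
# (e.g. something under /app/.apt or /app/vendor) should appear here.
print(os.environ.get("PATH", ""))
```

<p>If the first line prints <code>None</code>, the buildpack's install location isn't on <code>PATH</code> for the web process, which would explain the <code>MissingDependencyError</code>.</p>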
<p>I also tried setting the path Heroku gave me (found via bash on the dyno) explicitly:</p>
<pre><code>ocrmypdf.ocr(temp_pdf_path, output_pdf_path, deskew=True, force_ocr=True, tesseract_config={'tesseract_path': '/app/vendor/tesseract-ocr/bin/tesseract'})
</code></pre>
<p>Any ideas? It's driving me nuts.</p>
| <python><heroku><ocr><fastapi><tesseract> | 2023-09-28 05:05:18 | 1 | 435 | Norbert |
77,192,092 | 1,100,652 | Python Azure Function deployment (Oryx build) hangs on "Running pip install..." step | <p>I am attempting to deploy an Azure Function to production.</p>
<p>Local environment:</p>
<ul>
<li>Windows 10</li>
<li>VS Code 1.82.2</li>
<li>Python 3.9.10</li>
<li>Azure Function Core Tools 4.0.5390</li>
</ul>
<p>Azure environment:</p>
<ul>
<li>Two function apps (dev/prod)</li>
<li>Both using runtime version ~4</li>
<li>Both running on App Service Plan which is P2v2.</li>
</ul>
<p>The issue: when deploying from my local environment to Azure, the VS Code output window shows that the build progresses to the "Running pip install..." step, but no further. It never completes:</p>
<pre><code>11:53:56 PM <function name removed>: Starting deployment...
11:53:56 PM <function name removed>: Creating zip package...
11:54:00 PM <function name removed>: Zip package size: 17.6 MB
11:54:05 PM <function name removed>: Fetching changes.
11:54:06 PM <function name removed>: Cleaning up temp folders from previous zip deployments and extracting pushed zip file <removed>.zip (16.84 MB) to /tmp/zipdeploy/extracted
11:54:09 PM <function name removed>: Updating submodules.
11:54:10 PM <function name removed>: Preparing deployment for commit id <removed>.
11:54:10 PM <function name removed>: PreDeployment: context.CleanOutputPath False
11:54:10 PM <function name removed>: PreDeployment: context.OutputPath /home/site/wwwroot
11:54:10 PM <function name removed>: Repository path is /tmp/zipdeploy/extracted
11:54:10 PM <function name removed>: Running oryx build...
11:54:10 PM <function name removed>: Command: oryx build /tmp/zipdeploy/extracted -o /tmp/build/expressbuild --platform python --platform-version 3.9.7 -i <removed> -p packagedir=.python_packages/lib/site-packages
11:54:11 PM <function name removed>: Operation performed by Microsoft Oryx, https://github.com/Microsoft/Oryx
11:54:11 PM <function name removed>: You can report issues at https://github.com/Microsoft/Oryx/issues
11:54:11 PM <function name removed>: Oryx Version: 0.2.20230508.1, Commit: 7fe2bf39b357dd68572b438a85ca50b5ecfb4592, ReleaseTagName: 20230508.1
11:54:11 PM <function name removed>: Build Operation ID: <removed>
11:54:11 PM <function name removed>: Repository Commit : <removed>
11:54:11 PM <function name removed>: OS Type : bullseye
11:54:11 PM <function name removed>: Image Type : githubactions
11:54:11 PM <function name removed>: Detecting platforms...
11:54:12 PM <function name removed>: Detected following platforms:
11:54:12 PM <function name removed>: python: 3.9.7
11:54:12 PM <function name removed>: Using intermediate directory <removed>.
11:54:12 PM <function name removed>: Copying files to the intermediate directory...
11:54:12 PM <function name removed>: Done in 0 sec(s).
11:54:12 PM <function name removed>: Source directory : <removed>
11:54:12 PM <function name removed>: Destination directory: /tmp/build/expressbuild
11:54:13 PM <function name removed>: Python Version: /tmp/oryx/platforms/python/3.9.7/bin/python3.9
11:54:13 PM <function name removed>: Creating directory for command manifest file if it does not exist
11:54:13 PM <function name removed>: Removing existing manifest file
11:54:13 PM <function name removed>: Running pip install...
</code></pre>
<p>I have confirmed this is the behavior both when deploying to the dev and the prod app in Azure.</p>
<p>Things I have tried with no luck:</p>
<ul>
<li>Upgrade VS Code to latest version</li>
<li>Upgrade Azure Function Core Tools to latest version</li>
<li>Initiate deployment via "External Git" connection in Deployment Center in Azure Function blade of Azure Portal. When doing this, the deployment logs in the Azure Portal likewise show the build progressing to "Running pip install..." but no further.</li>
<li>Remove the latest changes to requirements.txt thus returning it to a known working state</li>
</ul>
<p>Any ideas?</p>
| <python><azure><azure-functions><devops><azure-functions-core-tools> | 2023-09-28 04:06:59 | 2 | 415 | George |
77,191,992 | 1,854,821 | pydeck icon layer - are folium-style clustered icons possible? | <p>My question: is it possible to do folium-style icon clustering using pydeck?</p>
<p>I'm making map visualizations for a dataset in which we've made measurements at a number of locations, returning to some locations many times across the past five years. I've mapped the measurement locations using folium's MarkerClusters. Markers cluster when the map is zoomed out:</p>
<p><a href="https://i.sstatic.net/QBkgW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QBkgW.png" alt="zoomed out cluster" /></a></p>
<p>And then resolve to individual measurements upon zooming in:</p>
<p><a href="https://i.sstatic.net/0OtHy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0OtHy.png" alt="enter image description here" /></a></p>
<p>I've been playing around with implementing my visualizations using pydeck. The <a href="https://deck.gl/examples/icon-layer/" rel="nofollow noreferrer">deck.gl documentation</a> suggests that some degree of clustering is possible, but (1) I don't see how to implement that example, which uses javascript, using pydeck and more importantly (2) it seems the icons stack on top of one another at some zoom level when some icons share the same location.</p>
<p>Is this kind of thing doable using pydeck?</p>
| <python><deck.gl><pydeck> | 2023-09-28 03:19:25 | 0 | 473 | Timothy W. Hilton |
77,191,878 | 4,443,378 | How to select columns from a dataframe whose names are contained in another series? | <p>I have a dataframe <code>A</code> with a single <code>Animal</code> column that looks like:</p>
<pre><code>import pandas as pd

data = {'Animal':['a.Bear', 'b.Elephant', '123.Giraffe', 'Kangaroo']}
A = pd.DataFrame(data)
Animal
0 a.Bear
1 b.Elephant
2 123.Giraffe
3 Kangaroo
</code></pre>
<p>And a dataframe <code>df</code> like:</p>
<pre><code>import random

column_names = ['Lion', 'Tiger', 'Bear', 'Elephant', 'Giraffe', 'Kangaroo', 'Rhino', 'Cat', 'Dog']
data = {animal: [random.random() for _ in range(10)] for animal in column_names}
df = pd.DataFrame(data)
Lion Tiger Bear Elephant Giraffe Kangaroo Rhino \
0 0.435419 0.139088 0.799243 0.095464 0.252427 0.300750 0.537184
1 0.536742 0.798354 0.359454 0.962717 0.900115 0.192034 0.255388
2 0.400937 0.999050 0.464974 0.082873 0.807442 0.152231 0.888681
3 0.962247 0.585496 0.826572 0.964859 0.061535 0.661318 0.626811
4 0.315054 0.241821 0.183458 0.767684 0.932423 0.605995 0.121704
5 0.975635 0.321856 0.640700 0.269786 0.603920 0.451022 0.202050
6 0.281994 0.790526 0.074202 0.318642 0.825572 0.006433 0.376935
7 0.002314 0.599871 0.883832 0.838671 0.193689 0.983202 0.365913
8 0.488496 0.226901 0.318186 0.527369 0.722069 0.152814 0.181855
9 0.059592 0.483801 0.419581 0.378362 0.064484 0.263958 0.183479
Cat Dog
0 0.457674 0.930943
1 0.171235 0.465397
2 0.230023 0.732982
3 0.094517 0.373322
4 0.885030 0.852047
5 0.759202 0.521539
6 0.683882 0.520186
7 0.635325 0.832302
8 0.950867 0.395677
9 0.929706 0.858686
</code></pre>
<p>I want to select only the columns from <code>df</code> whose names are contained in the series <code>A</code>.</p>
<p>I tried:</p>
<pre><code>df.loc[:, A['Animal'].str.contains(df.columns)]
</code></pre>
<p>But I get error:</p>
<pre><code>TypeError: unhashable type: 'Index'
</code></pre>
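<p>One possible approach (a sketch, assuming the goal is to match column names against the part of each <code>Animal</code> entry after the dot): strip the prefixes and then select with <code>columns.isin</code> instead of <code>str.contains</code>.</p>

```python
import pandas as pd

A = pd.DataFrame({'Animal': ['a.Bear', 'b.Elephant', '123.Giraffe', 'Kangaroo']})
df = pd.DataFrame({'Bear': [1], 'Elephant': [2], 'Lion': [3], 'Kangaroo': [4]})

# Keep only the part after the last dot ('a.Bear' -> 'Bear', 'Kangaroo' stays)
names = A['Animal'].str.split('.').str[-1]

# Boolean mask over df's columns, then select
selected = df.loc[:, df.columns.isin(names)]
print(selected.columns.tolist())  # ['Bear', 'Elephant', 'Kangaroo']
```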
| <python><pandas><dataframe> | 2023-09-28 02:38:22 | 3 | 596 | Mitch |
77,191,791 | 5,003,606 | Google Cloud Run Jobs logging splits Exception MESSAGES with multiple lines into multiple log entries | <p>Several days ago, a Python RuntimeError was raised in one of my company's Cloud Run jobs which stopped it.</p>
<p>Cloud Run's logging, unfortunately, handled that RuntimeError in a bad way.</p>
<p>It correctly put all the lines from the RuntimeError's stack trace in the first log entry.</p>
<p>But the RuntimeError's message, which had multiple lines of text (each one carrying important diagnostic information) was mutilated. The first line of that message was in the first log entry that contained the stack trace. But each of the remaining lines (except for blank ones) was put into its own subsequent log entry.</p>
<p>Below is a screenshot from the Google Cloud Run LOGS tab for the job that shows this. In the first log entry (the top one), you can see the full stack trace plus the 1st line of the RuntimeError's message ("Caught some Exception (see cause); was processing...") But after that come many log entries, each one of them being a single subsequent line from the RuntimeError's message. The screenshot only includes the first 4 of those subsequent lines, the first one being the text "{'broker_order_id': '196056769652',".</p>
<p><a href="https://i.sstatic.net/HV4lD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HV4lD.png" alt="enter image description here" /></a></p>
<p>That RuntimeError message handling is obviously a disaster: you have to know that the subsequent lines come later (I first thought they did not print at all), it is hard to read them, their log level is no longer ERROR but is absent, etc.</p>
<p>Does anyone know if</p>
<ol>
<li>there is a cure</li>
<li>we are not doing Cloud Run logging correctly</li>
<li>this is a known bug, or a bug that I need to report to Google</li>
</ol>
<p>?</p>
<hr />
<p>Before submitting this question, I did web searches.</p>
<p>I found many people reporting that Exception stack traces were printing on multiple lines up thru 2022: <a href="https://github.com/firebase/firebase-functions/issues/1215" rel="nofollow noreferrer">Python</a>, <a href="https://stackoverflow.com/questions/69648011/why-multi-line-stacktraces-are-shown-as-individual-logs-on-gcp-log-explorer">Java</a>, <a href="https://www.javacodegeeks.com/2022/05/google-cloud-structured-logging-for-java-applications.html" rel="nofollow noreferrer">Java</a>.</p>
<p>But the stack trace multi line/multi log entry issue reported in those links seems to have been solved by now. The problem that I am reporting is if your Exception's text message, not its stack trace, has multiple lines.</p>
<hr />
<p>My company set up Cloud Run Jobs logging > 1 year ago, back when Cloud Run Jobs was in beta, and not fully supported by the Cloud logging facility.</p>
<p>In abbreviated form, our Cloud Run Jobs logging configuration is like the Python code shown below.</p>
<p>Is it possible that this logging config is out of date and causing this problem?</p>
<pre><code>import os
import time
from logging import Formatter, Logger, getLogger

from google.cloud.logging import Client, Resource
from google.cloud.logging.handlers import CloudLoggingHandler, setup_logging

# (retrieve_metadata_server, _REGION_ID and _PROJECT_NAME are our own
# metadata-server helpers, omitted here for brevity)

LOG_FORMAT: str = (
"%(asctime)s.%(msecs)03dZ "
"%(levelname)s "
"%(name)s.%(funcName)s "
"#%(lineno)d "
"- "
"%(message)s"
)
DATE_FORMAT: str = "%Y-%m-%d %H:%M:%S"
_is_logging_configured: bool = False

def get_logger(name: str) -> Logger:
    config_logging()
    return getLogger(name)

def config_logging() -> None:
    global _is_logging_configured
    if _is_logging_configured:
        return
    config_gcp_cloud_run_job_logging()
    _is_logging_configured = True

def config_gcp_cloud_run_job_logging() -> None:
    root_logger = getLogger()
    root_logger.setLevel(os.environ.get("LOG_LEVEL", "WARNING"))
    formatter = get_logging_formatter()
    # get metadata about the execution environment
    region = retrieve_metadata_server(_REGION_ID)
    project = retrieve_metadata_server(_PROJECT_NAME)
    # build a manual resource object
    cr_job_resource = Resource(
        type="cloud_run_job",
        labels={
            "job_name": os.environ.get("CLOUD_RUN_JOB", "unknownJobId"),
            "location": region.split("/")[-1] if region else "",
            "project_id": project,
        },
    )
    # configure library using CloudLoggingHandler with custom resource
    client = Client()
    # use labels to assign logs to execution
    labels = {"run.googleapis.com/execution_name": os.environ.get("CLOUD_RUN_EXECUTION", "unknownExecName")}
    handler = CloudLoggingHandler(client, resource=cr_job_resource, labels=labels)
    handler.setFormatter(formatter)
    setup_logging(handler)

def get_logging_formatter() -> Formatter:
    formatter = Formatter(fmt=LOG_FORMAT, datefmt=DATE_FORMAT)
    Formatter.converter = time.gmtime
    return formatter
</code></pre>
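<p>One commonly suggested direction for this class of problem (a sketch, not verified against this exact setup) is to emit structured JSON to stdout instead of a plain-text format: Cloud Logging treats each JSON line as a single entry, and <code>json.dumps</code> escapes embedded newlines, so a multi-line exception message stays in one record with its severity intact. The <code>severity</code>/<code>message</code> keys follow Cloud Logging's structured-log conventions.</p>

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON line; json.dumps escapes embedded newlines."""
    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "severity": record.levelname,    # Cloud Logging reads this as the log level
            "message": record.getMessage(),  # newlines inside stay escaped as \n
            "logger": record.name,
        }
        if record.exc_info:
            payload["exception"] = self.formatException(record.exc_info)
        return json.dumps(payload)

# Attach to a stdout StreamHandler instead of (or alongside) CloudLoggingHandler
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
```

<p>Whether this is preferable to <code>CloudLoggingHandler</code> for Cloud Run Jobs is a judgment call; the point is that a single-line JSON payload cannot be split across log entries.</p>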
| <python><exception><logging><google-cloud-run><multiline> | 2023-09-28 02:05:31 | 1 | 951 | HaroldFinch |
77,191,734 | 1,070,833 | mypy raises Cannot assign multiple types to name | <p>This looks like a bug in mypy and a false positive. Below is a standard case where we pass an inherited class to a function that processes it. The code is very explicit and simple, yet mypy fails to infer the type and forces us to declare it before the <code>if</code> statement, as in C++.</p>
<p>the code below raises:</p>
<p><code>25: error: Cannot assign multiple types to name "syncer" without an explicit "Type[...]" annotation [misc]</code></p>
<pre><code>from typing import Type

class Base():
    pass

class A(Base):
    pass

class B(A):
    pass

def executeSync(syncer: Type[Base]):
    pass

something = True
if something:
    syncer = A
else:
    syncer = B

executeSync(syncer)
</code></pre>
<p>we need to change it to:</p>
<pre><code>something = True
syncer: Type[Base]
if something:
    syncer = A
else:
    syncer = B
</code></pre>
<p>I feel like the extra declaration in this case is totally unnecessary. Is there a way to change the mypy configuration to avoid this? PyCharm seems to be more relaxed and does not complain about it.</p>
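<p>One workaround that avoids the explicit annotation (a sketch; whether it is acceptable is a matter of taste) is a conditional expression. In my understanding, mypy then sees a single assignment and joins the two branches into one inferred type:</p>

```python
from typing import Type

class Base: ...
class A(Base): ...
class B(A): ...

def executeSync(syncer: Type[Base]) -> None: ...

something = True
# A single assignment: mypy infers one type for the whole expression
syncer = A if something else B
executeSync(syncer)
```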
| <python><mypy> | 2023-09-28 01:44:44 | 0 | 1,109 | pawel |
77,191,601 | 6,296,626 | Have tabs in a row in Python Flet | <h1>The issue</h1>
<p>I am using <a href="https://flet.dev/docs" rel="nofollow noreferrer">Flet</a> as the GUI library for my project. I am trying to create a page that is separated into two halves (left and right): on the left side there is some content, and on the right side there are tabs with more content inside them (see image below).</p>
<p><a href="https://i.sstatic.net/PYWwg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PYWwg.png" alt="the UI I am aiming for" /></a></p>
<p>The issue is that the UI breaks and is unresponsive when I try to add <code>ft.Tabs</code> under the <code>ft.Row</code>.</p>
<p>The question is whether it's a bug or intended. If it's intended, what can I do so I can build a UI where the tab section takes just half of the window (right side)? Should I use something else other than <code>ft.Row</code> to do so?</p>
<h1>Code to replicate the issue</h1>
<pre class="lang-py prettyprint-override"><code>import flet as ft

MAIN_GUI = ft.Container(
margin=ft.margin.only(bottom=40),
content=ft.Row([
ft.Card(
elevation=30,
content=ft.Container(
content=ft.Text("Amazing LEFT SIDE Content Here", size=50, weight=ft.FontWeight.BOLD),
border_radius=ft.border_radius.all(20),
bgcolor=ft.colors.WHITE24,
padding=45,
)
),
ft.Tabs(
selected_index=1,
animation_duration=300,
tabs=[
ft.Tab(
text="Tab 1",
icon=ft.icons.SEARCH,
content=ft.Container(
content=ft.Card(
elevation=30,
content=ft.Container(
content=ft.Text("Amazing TAB 1 content", size=50, weight=ft.FontWeight.BOLD),
border_radius=ft.border_radius.all(20),
bgcolor=ft.colors.WHITE24,
padding=45,
)
)
),
),
ft.Tab(
text="Tab 2",
icon=ft.icons.SETTINGS,
content=ft.Text("Amazing TAB 2 content"),
),
],
)
])
)
def main(page: ft.Page):
    page.padding = 50
    page.add(MAIN_GUI)
    page.update()

if __name__ == '__main__':
    ft.app(target=main)
</code></pre>
<p>As mentioned, what I am trying to do is to have the window separated to the left half and the right half, where the tabs would only be on the right half.</p>
<p>However, when running that code, only the tabs are visible on the right side and nothing is interactable.</p>
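<p>For reference, the usual suggestion for controls that need a bounded size inside a <code>Row</code> is to give them <code>expand</code>. A sketch of the idea (here <code>left_card</code> stands for the <code>ft.Card</code> defined above; the tab contents are elided):</p>

```python
ft.Row(
    expand=True,
    controls=[
        left_card,                      # the Card from the snippet above
        ft.Tabs(expand=1, tabs=[...]),  # let Tabs take the remaining width
    ],
)
```

<p>Without <code>expand</code> (or an explicit width/height), <code>ft.Tabs</code> has no size constraint inside the <code>Row</code>, which matches the broken rendering described above.</p>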
| <python><flutter><user-interface><tabs><flet> | 2023-09-28 00:42:00 | 1 | 1,479 | Programer Beginner |
77,191,349 | 22,407,544 | How to make Django redirect after validating an uploaded file? | <p>When I submit the form, the page just reloads instead of redirecting. Nothing shows up in my server logs, which tells me that either the file isn't uploading or my <code>views.py</code> has a problem, but I'm not sure what that could be.</p>
<p>Here is my code:</p>
<p>HTML:</p>
<pre><code><form method="post" action="{% url 'transcribeSubmit' %}" enctype="multipart/form-data">
    {% csrf_token %}
    <label for="transcribe-file" class="transcribe-file-label">
        <input id="transcribe-file" name="audio-video" type="file" accept="audio/*, video/*" hidden>
        <button class="upload" id="transcribe-submit" type="submit">Submit</button>
    </label>
</form>
</code></pre>
<p>JS:</p>
<pre><code>document.getElementById("transcribe-file").addEventListener("change", function(event){
    document.getElementById("transcribe-submit").click();
});
</code></pre>
<p>views.py:</p>
<pre><code>from django.shortcuts import render, redirect
from django.http import HttpResponse
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_protect
from .models import TranscribedDocument
from .forms import UploadFileForm
from django.core.files.storage import FileSystemStorage

# Create your views here.
@csrf_protect
def transcribeSubmit(request):
    if request.method == 'POST':
        form = UploadFileForm(request.POST, request.FILES)
        try:
            if form.is_valid():
                audio_file = form.save()
                return redirect('/t/')
        except Exception as e:
            print(f"An error occurred: {e}")
            error_message = f"An error occurred: {e}"
            return JsonResponse({'error': error_message}, status=500)
    else:
        form = UploadFileForm()
    return render(request, 'transcribe/transcribe.html', {'form': form})
</code></pre>
<p>forms.py:</p>
<pre><code>from django import forms
import mimetypes
import magic
from django.core.exceptions import ValidationError

def validate_file(file):
    # Validate if no file submitted
    if file is None:
        raise ValidationError("No file submitted")
    else:
        # Check file size
        fileSize = file.size
        maxSize = 5242880  # 5MB in bytes (also: 5 * 1024 * 1024)
        if fileSize > maxSize:
            raise ValidationError("The maximum file size that can be uploaded is 5MB")
        else:
            try:
                # Check the file extension
                file_type = str(file.name).lower().split('.')[-1]
                # add more
                allowed_types = {
                    'm4a': 'audio/mp4',
                    'wav': ['audio/wav', 'audio/x-wav'],
                    'mp3': 'audio/mpeg',
                    'mpeg': 'video/mpeg',
                    'mp4': 'video/mp4',
                    'webm': 'video/webm',
                    'mpga': 'audio/mpeg'
                }
                if file_type not in allowed_types:
                    raise ValidationError(f"Unsupported file type. Allowed types are: {', '.join(allowed_types)}")
                # Create a magic object to check the file MIME type from content
                validator = magic.Magic(uncompress=True, mime=True)
                # Get the MIME type based on content
                mime_type = validator.from_buffer(file.read(1000))
                # Check if the guessed MIME type and content-based MIME type match
                if mime_type not in allowed_types[file_type]:
                    raise ValidationError("incorrect extension")
            except Exception as e:
                raise ValidationError(f"Validation error: {str(e)}")

class UploadFileForm(forms.Form):
    file = forms.FileField(validators=[validate_file])
</code></pre>
<p>I tried submitting the file using AJAX, but there was no difference: the page again reloaded without any change or redirection. Additionally, I checked my <code>urls.py</code> and there is no error.</p>
| <javascript><python><html><django><file-upload> | 2023-09-27 23:10:19 | 1 | 359 | tthheemmaannii |
77,191,297 | 7,339,624 | How to remove margins and padding around the edges of a matplotlib figure | <p>I want to create a grid of <code>matplotlib</code> subplots with no borders or whitespace around the edges of the figure (0 margin). I have tried various suggestions like <code>plt.tight_layout()</code>, <code>fig.subplots_adjust()</code>, and manually setting <code>fig.set_size_inches()</code>, and also <code>frameon=False</code>, but I still have some whitespace padding around the borders as shown in the image below.</p>
<p><a href="https://i.sstatic.net/7vWib.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7vWib.png" alt="enter image description here" /></a></p>
<p>Here is the code to reproduce it:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

rows, columns = 4, 3
images = [np.random.rand(4, 4) for i in range(rows * columns)]

fig, axs = plt.subplots(rows, columns, figsize=(columns, rows))
axs = axs.flatten()

# Display each image on its respective subplot
for img, ax in zip(images, axs):
    ax.matshow(img)
    ax.axis('off')

plt.subplots_adjust(wspace=0.02, hspace=0.02)
plt.show()
</code></pre>
<p>Any suggestions on how to completely remove the whitespace and margins around the grid would be appreciated. I would like the subplots to be packed tightly together.</p>
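<p>A sketch of one way to do this: pass explicit edge positions to <code>subplots_adjust</code> so the subplot grid claims the entire figure area (and avoid calling <code>tight_layout</code> afterwards, since it reintroduces margins):</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for the sketch
import matplotlib.pyplot as plt

rows, columns = 4, 3
images = [np.random.rand(4, 4) for _ in range(rows * columns)]

fig, axs = plt.subplots(rows, columns, figsize=(columns, rows))
for img, ax in zip(images, axs.flatten()):
    ax.matshow(img)
    ax.axis('off')

# Push the subplot grid to the figure edges: no outer margins at all
fig.subplots_adjust(left=0, right=1, top=1, bottom=0, wspace=0.02, hspace=0.02)
fig.savefig("grid.png")
```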
| <python><matplotlib><plot> | 2023-09-27 22:51:28 | 1 | 4,337 | Peyman |
77,191,238 | 4,792,022 | Quickly removing a list of elements from lists in the column of a dataframe | <p>I have a dataframe with a <code>fruit</code> column whose values are lists:</p>
<pre><code>import pandas as pd

data = {
    'fruit': [['apple', 'banana', 'cherry'], ['banana', 'orange'], None, [], ['cherry', 'grape'], ['apple']],
    'location': ['New York', 'Los Angeles', 'Chicago', 'Miami', 'San Francisco', 'Seattle']
}
df = pd.DataFrame(data)
</code></pre>
<p>I want to remove these elements from each list:</p>
<pre><code>gone_off_fruit=['banana','cherry']
</code></pre>
<p>currently, i am using this</p>
<pre><code>def remove_gone_off_fruit(fruit_list):
    if fruit_list:
        return [fruit for fruit in fruit_list if fruit not in gone_off_fruit]
    else:
        return fruit_list

df['fruit'] = df['fruit'].apply(remove_gone_off_fruit)
</code></pre>
<p>But it is very slow. What's the fastest way to do this?</p>
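<p>A sketch of one cheap optimization: membership tests against a list are O(n), so converting <code>gone_off_fruit</code> to a <code>set</code> (O(1) lookups) usually speeds the comprehension up considerably, especially as the removal list grows:</p>

```python
import pandas as pd

data = {
    'fruit': [['apple', 'banana', 'cherry'], ['banana', 'orange'], None, [], ['cherry', 'grape'], ['apple']],
    'location': ['New York', 'Los Angeles', 'Chicago', 'Miami', 'San Francisco', 'Seattle'],
}
df = pd.DataFrame(data)

gone = set(['banana', 'cherry'])  # set: O(1) membership checks

# A plain list comprehension over the column also avoids .apply overhead
df['fruit'] = [
    [f for f in lst if f not in gone] if lst else lst
    for lst in df['fruit']
]
print(df['fruit'].tolist())
# [['apple'], ['orange'], None, [], ['grape'], ['apple']]
```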
| <python><pandas><performance> | 2023-09-27 22:32:56 | 2 | 544 | Abijah |
77,191,218 | 2,051,818 | Building a customized Lasagne layer whose output is a matrix of the elementwise product (input x weight) and not the dot product | <p>I have an input sequence with shape (seq_length(19) x Features(21)), which I feed as input to a neural network.</p>
<p>I need a layer that performs an elementwise multiplication of the input with its weights (not a dot product), so the output shape should be (#units, input_shape). In my case the input shape is (19 x 21), so the per-unit output of that layer is also (19 x 21), and with 8 units the layer output should be (8, 19, 21).</p>
<p>How to do this using Lasagne layers? I checked the Lasagne documentation on how build custom layers, as from <a href="https://lasagne.readthedocs.io/en/latest/user/custom_layers.html#:%7E:text=To%20implement%20a%20custom%20layer%20in%20Lasagne%2C%20you,input%20are%20Theano%20expressions%2C%20so%20they%20are%20symbolic." rel="nofollow noreferrer">link</a>. Following this link, the custom layer is as follows.</p>
<pre><code>class ElementwiseMulLayer(lasagne.layers.Layer):
    def __init__(self, incoming, num_units, W=lasagne.init.Normal(0.01), **kwargs):
        super(ElementwiseMulLayer, self).__init__(incoming, **kwargs)
        self.num_inputs = self.input_shape[1]
        self.num_units = num_units
        self.W = self.add_param(W, (self.num_inputs, num_units), name='W')

    def get_output_for(self, input, **kwargs):
        #return T.dot(input, self.W)
        result = input * self.W
        return result

    def get_output_shape_for(self, input_shape):
        return (input_shape[0], self.num_units, self.num_inputs)
</code></pre>
<p>Here's the NN:</p>
<pre><code>l_in_2 = lasagne.layers.InputLayer(shape=(None, 9*19*21))
l_reshape_l_in_2 = lasagne.layers.ReshapeLayer(l_in_2, (-1, 9,19,21))
l_reshape_l_in_2_EL = lasagne.layers.ExpressionLayer(l_reshape_l_in_2, lambda X: X[:,0,:,:], output_shape='auto')
l_reshape_l_in_2_EL = lasagne.layers.ReshapeLayer(l_reshape_l_in_2_EL, (-1, 19*21))
l_out1 = ElementwiseMulLayer(l_reshape_l_in_2_EL, num_units=8, name='my_EW_layer')
l_out1 = lasagne.layers.ReshapeLayer(l_out1, (-1, 8*399))
l_out = lasagne.layers.DenseLayer(l_out1,
num_units = 19*21,
W = lasagne.init.Normal(),
nonlinearity = lasagne.nonlinearities.rectify)
</code></pre>
<p>It's worth noting that the batch size is 64. The NN summary:</p>
<pre><code>| Layer | Layer_name | output_shape | # parameters |
_____________________________________________________________________________
| 0 | InputLayer | (None, 3591) | 0 |
| 1 | ReshapeLayer | (None, 9, 19, 21) | 0 |
| 2 | ExpressionLayer | (None, 19, 21) | 0 |
| 3 | ReshapeLayer | (None, 399) | 0 |
| 4 | ElementwiseMulLayer | (None, 8, 399) | 3192 |
| 5 | ReshapeLayer | (None, 3192) | 3192 |
| 6 | DenseLayer | (None, 399) | 1277199 |
</code></pre>
<p>Now, when i try to build the NN, I recieved the following error:</p>
<pre><code>ValueError: GpuElemwise. Input dimension mis-match. Input 1 (indices start at 0) has shape[0] == 399, but the output's size on that axis is 64.
Apply node that caused the error: GpuElemwise{mul,no_inplace}(GpuReshape{2}.0, my_dot_layer.W)
Toposort index: 23
Inputs types: [GpuArrayType<None>(float32, matrix), GpuArrayType<None>(float32, matrix)]
Inputs shapes: [(64, 399), (399, 8)]
Inputs strides: [(14364, 4), (32, 4)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[GpuReshape{2}(GpuElemwise{mul,no_inplace}.0, TensorConstant{[ -1 3192]})]]
</code></pre>
<p>I tried to set W as follows:</p>
<pre><code>self.W = self.add_param(W, (self.num_inputs,num_units, self.num_inputs), name='W')
</code></pre>
<p>but then again, received a similar error:</p>
<pre><code>ValueError: GpuElemwise. Input dimension mis-match. Input 1 (indices start at 0) has shape[1] == 8, but the output's size on that axis is 64.
Apply node that caused the error: GpuElemwise{mul,no_inplace}(InplaceGpuDimShuffle{x,0,1}.0, my_EW_layer.W)
Toposort index: 26
Inputs types: [GpuArrayType<None>(float32, (True, False, False)), GpuArrayType<None>(float32, 3D)]
Inputs shapes: [(1, 64, 399), (399, 8, 399)]
Inputs strides: [(919296, 14364, 4), (12768, 1596, 4)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[GpuReshape{2}(GpuElemwise{mul,no_inplace}.0, TensorConstant{[ -1 3192]})]]
</code></pre>
<p>I don't have a clear perception how to overcome this issue?</p>
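<p>The mismatch errors above come from broadcasting: multiplying a (batch, 399) input by a (399, units) weight matrix aligns the wrong axes. Independently of Lasagne/Theano, getting one elementwise copy of the input per unit needs a weight of shape (units, inputs) and an inserted batch axis, as this numpy sketch shows (the same indexing works on Theano tensors):</p>

```python
import numpy as np

batch, num_units, num_inputs = 64, 8, 399
x = np.random.rand(batch, num_inputs).astype('float32')
W = np.random.rand(num_units, num_inputs).astype('float32')  # one weight row per unit

# (batch, 1, inputs) * (units, inputs) broadcasts to (batch, units, inputs)
out = x[:, None, :] * W
print(out.shape)  # (64, 8, 399)
```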
| <python><neural-network><theano><lasagne> | 2023-09-27 22:26:15 | 0 | 371 | HATEM EL-AZAB |
77,191,044 | 1,285,061 | Numpy rotate and save 2D array to a CSV | <p>How can I save a 2D array into a CSV by rotating the array by 90 degrees?</p>
<p>I tried numpy's <code>np.rot90</code>, but it doesn't put the elements in the order I want.</p>
<pre><code>>>> a = np.array([[88,87],[78,77],[68,67],[58,57]])
>>> a
array([[88, 87],
[78, 77],
[68, 67],
[58, 57]])
>>>
</code></pre>
<p>The CSV should look like:</p>
<pre><code>88, 78, 68, 58
87, 77, 67, 57
</code></pre>
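<p>For the record, the output shown above (first column becomes first row, order preserved) is a transpose rather than a <code>rot90</code> rotation, so a sketch with <code>np.savetxt</code>:</p>

```python
import numpy as np

a = np.array([[88, 87], [78, 77], [68, 67], [58, 57]])

# Transposing (not rot90, which also reverses row order) keeps each
# original column as a row, in order
np.savetxt("out.csv", a.T, fmt="%d", delimiter=", ")
```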
| <python><numpy> | 2023-09-27 21:38:15 | 2 | 3,201 | Majoris |
77,190,950 | 7,846,884 | RuntimeWarning: divide by zero encountered in log in max-likelihood estimate Python code | <p>I'm simulating the behaviour of the log maximum-likelihood estimate (for a Binomial distribution) as the number of samples increases.
I chose a true parameter value of 0.6 for the Binomial distribution the responses come from.</p>
<p>But I'm getting the warning even when I remove 0 from the possible parameter values used in my analysis.</p>
<pre><code>fods23/simulations/scripts/Binomial_MLE_simulations.py:19: RuntimeWarning: divide by zero encountered in log
cal_llh = np.log(theta**(x) * (1-theta)**(1-x))
</code></pre>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

############################################################
## Step 1 #
############################################################
# function to calculate likelihood for bernoulli
def logLikelihood(theta, x):
    # cal the log likelihood of each observation in the samples collected
    cal_llh = np.log(theta**(x) * (1-theta)**(1-x))
    tlld = np.prod(cal_llh)  # cal the total likelihood
    return tlld
# function to calculate
def mle_Binom(X_samples, thetas):
    loglikelihood_single_theta = [logLikelihood(theta=t, x=X_samples) for t in thetas]
    # mle_val = thetas[np.argmax(likelihood_single_theta)]  # get the maximum likelihood estimate
    return np.array(loglikelihood_single_theta)

# test the functions
true_params_Bern = 0.6

############################################################
## Step 2 #
############################################################
# how does the likelihood plot change as sample size changes
Bern_Nsamples = np.linspace(start=100, stop=1000, num=100, dtype=int)
response_Bernoulli = np.random.binomial(n=1, p=0.6, size=100)
possible_thetas = np.linspace(start=0.001, stop=1, num=100)
result_theta = np.ma.array(possible_thetas.copy())
beta_for_mle_holder = []

def Bernoulli_optim_nSamples(Bern_stepSize, rand_sets_thetas):
    for n in Bern_stepSize:
        response_Bernoulli = np.random.binomial(n=1, p=0.6, size=n)
        mle_out_Binom = mle_Binom(X_samples=response_Bernoulli, thetas=rand_sets_thetas)  # cal lld of specific theta
        max_theta_Binom = rand_sets_thetas[np.argmax(mle_out_Binom)]  # which theta gave us max lld
        beta_for_mle_holder.append(max_theta_Binom)
    fig, ax = plt.subplots()
    ax.plot(Bern_stepSize, beta_for_mle_holder)
    ax.set_title('Bernoulli dist nSamples vrs MLE')
    ax.hlines(y=0.6, xmin=min(Bern_stepSize), xmax=max(Bern_stepSize), linestyles="dashed", color="red", label="MLE")
    plt.xlabel("nSamples")
    plt.ylabel("MLE")
    plt.show()

Bernoulli_optim_nSamples(Bern_stepSize=Bern_Nsamples, rand_sets_thetas=result_theta)
| <python><numpy><mle> | 2023-09-27 21:16:49 | 1 | 473 | sahuno |
77,190,726 | 4,431,798 | coremltools GPU usage with mlpackage on macOS, very slow inference/prediction | <p>For a project, I converted a YOLOv8 segmentation .pt model to .mlpackage so that I can run it. Everything runs fine and the items of interest are detected in the video, but inference speed is the problem: it takes about 280 ms per frame, which is extremely slow. Running the same unconverted .pt model on Colab or on my laptop takes only a few ms.</p>
<p>I set <code>model.compute_units</code> to ALL, CPU_AND_GPU, and the other options, but still no GPU is used (you can check from the terminal output that the GPU is not active). Here is the code:</p>
<pre><code>import coremltools as ct
import cv2
import numpy as np
from PIL import Image
import time
import os

def is_gpu_active():
    # Run the ioreg command and parse the output
    result = os.popen('ioreg -l | grep "performanceState"').read()
    return "performanceState\" = 2" in result

def letterbox_image(image, size):
    """Resize image with unchanged aspect ratio using padding."""
    ih, iw = image.shape[:2]
    w, h = size
    # Compute scale
    scale = min(w/iw, h/ih)
    nw = int(iw * scale)
    nh = int(ih * scale)
    # Resize the image using the computed scale
    image_resized = cv2.resize(image, (nw, nh))
    # Compute padding values
    top = (h - nh) // 2
    bottom = h - nh - top
    left = (w - nw) // 2
    right = w - nw - left
    # Add padding to make the image square
    image_padded = cv2.copyMakeBorder(image_resized, top, bottom, left, right, cv2.BORDER_CONSTANT, value=[0, 0, 0])
    return image_padded

# Load the Core ML model
model = ct.models.MLModel('vhssegmentation.mlpackage')
# Set the preferred device
model.compute_units = ct.ComputeUnit.ALL

# Open the video file
cap = cv2.VideoCapture('VID_20230927_202037.mp4')

while cap.isOpened():
    print("----")
    ret, frame = cap.read()
    if not ret:
        break

    # Time the letterboxing operation
    start_time = time.time()
    frame = letterbox_image(frame, (640, 640))
    print(f"Letterboxing Time: {time.time() - start_time:.4f} seconds")

    # Time the conversion to PIL Image
    start_time = time.time()
    pil_image = Image.fromarray(frame)
    print(f"Conversion to PIL Image Time: {time.time() - start_time:.4f} seconds")

    # Time the prediction step
    start_time = time.time()
    output = model.predict({'image': pil_image})
    print(f"Prediction Time: {time.time() - start_time:.4f} seconds")

    # Time the post-processing step
    start_time = time.time()
    predictions = output['var_1279']
    mask = np.any(predictions[0, 4:7, :] > 0.5, axis=0)
    filtered_predictions = predictions[0, :, mask]
    for row in filtered_predictions:
        x, y, w, h = row[:4]
        x1 = int(x - w / 2)
        y1 = int(y - h / 2)
        x2 = int(x + w / 2)
        y2 = int(y + h / 2)
        classes = ['class0', 'class1', 'class2']
        detected_class = classes[np.argmax(row[4:7])]
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, detected_class, (int(x), int(y)), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
    print(f"Post-Processing Time: {time.time() - start_time:.4f} seconds")

    # Display the processed frame
    cv2.imshow('Frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

    if is_gpu_active():
        print("GPU is active")
    else:
        print("GPU is not active")

cap.release()
cv2.destroyAllWindows()
</code></pre>
<p>Here is a sample view from the remote Mac I use, showing the YOLO-style detection:
<a href="https://i.sstatic.net/K5wUy.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K5wUy.jpg" alt="enter image description here" /></a></p>
<p>And here are sample times from the terminal output; as you can see, prediction/inference is far too slow:</p>
<pre><code>Letterboxing Time: 0.0009 seconds
Conversion to PIL Image Time: 0.0006 seconds
Prediction Time: 0.2791 seconds
Post-Processing Time: 0.0013 seconds
GPU is not active
Letterboxing Time: 0.0010 seconds
Conversion to PIL Image Time: 0.0006 seconds
Prediction Time: 0.2839 seconds
Post-Processing Time: 0.0015 seconds
GPU is not active
Letterboxing Time: 0.0009 seconds
Conversion to PIL Image Time: 0.0006 seconds
Prediction Time: 0.2821 seconds
Post-Processing Time: 0.0010 seconds
GPU is not active
</code></pre>
<p>Edit: adding the model specs and some hardware information, as asked. <code>model.get_spec().description.metadata</code> info:</p>
<pre><code>shortDescription: "Ultralytics YOLOv8m-seg model trained on /content/dataset.yaml"
versionString: "8.0.153"
author: "Ultralytics"
license: "AGPL-3.0 https://ultralytics.com/license"
userDefined {
key: "batch"
value: "1"
}
userDefined {
key: "com.github.apple.coremltools.source"
value: "torch==2.0.1+cu118"
}
userDefined {
key: "com.github.apple.coremltools.version"
value: "7.0b1"
}
userDefined {
key: "date"
value: "2023-08-13T15:19:08.788039"
}
userDefined {
key: "imgsz"
value: "[640, 640]"
}
userDefined {
key: "names"
value: "{0: \'topvhs\', 1: \'frontvhs\', 2: \'sidevhs\'}"
}
userDefined {
key: "stride"
value: "32"
}
userDefined {
key: "task"
value: "segment"
}
</code></pre>
<p>Some platform and OS info:</p>
<pre><code>Darwin
('10.16', ('', '', ''), 'x86_64')
posix.uname_result(sysname='Darwin', nodename='perceptundrymbp.home', release='22.6.0', version='Darwin Kernel Version 22.6.0: Fri Sep 15 13:39:52 PDT 2023; root:xnu-8796.141.3.700.8~1/RELEASE_X86_64', machine='x86_64')
</code></pre>
<p><code>subprocess.check_output("system_profiler SPDisplaysDataType", shell=True)</code> information about the GPU on the Mac:</p>
<pre><code>Graphics/Displays:
Intel HD Graphics 630:
Chipset Model: Intel HD Graphics 630
Type: GPU
Bus: Built-In
VRAM (Dynamic, Max): 1536 MB
Vendor: Intel
Device ID: 0x591b
Revision ID: 0x0004
Automatic Graphics Switching: Supported
gMux Version: 4.0.29 [3.2.8]
Metal Support: Metal 3
Displays:
Color LCD:
Display Type: Built-In Retina LCD
Resolution: 2880 x 1800 Retina
Framebuffer Depth: 24-Bit Color (ARGB8888)
Main Display: Yes
Mirror: Off
Online: Yes
Automatically Adjust Brightness: Yes
Connection Type: Internal
Radeon Pro 555:
Chipset Model: Radeon Pro 555
Type: GPU
Bus: PCIe
PCIe Lane Width: x8
VRAM (Total): 2 GB
Vendor: AMD (0x1002)
Device ID: 0x67ef
Revision ID: 0x00c7
ROM Revision: 113-C980AJ-927
VBIOS Version: 113-C9801AP-A02
EFI Driver Version: 01.A0.927
Automatic Graphics Switching: Supported
gMux Version: 4.0.29 [3.2.8]
Metal Support: Metal 3
</code></pre>
<p>Some more info about the remote Mac I'm working with:</p>
<p><a href="https://i.sstatic.net/GXfq4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GXfq4.png" alt="enter image description here" /></a></p>
<p>Some Activity Monitor captures while the Python code is running:</p>
<p><a href="https://i.sstatic.net/ZrluC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZrluC.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/8rtzK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8rtzK.png" alt="enter image description here" /></a></p>
<p>Adding more detailed GPU usage, as tadman suggested, with the GPU usage plots:
<a href="https://i.sstatic.net/Xt3lY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Xt3lY.png" alt="enter image description here" /></a></p>
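<p>One thing worth checking (an assumption on my side, not verified on this machine): in coremltools the compute units are fixed when the model is loaded, so assigning <code>model.compute_units</code> after the fact has no effect on the already-loaded model. It has to be passed to the constructor:</p>

```python
import coremltools as ct

# compute_units must be chosen at load time, not assigned afterwards
model = ct.models.MLModel(
    'vhssegmentation.mlpackage',
    compute_units=ct.ComputeUnit.CPU_AND_GPU,
)
```

<p>If the GPU is still idle after that, the fallback to CPU may be a Core ML scheduling decision for this model on an Intel Mac rather than something the script controls.</p>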
| <python><macos><yolo><coremltools><yolov8> | 2023-09-27 20:35:51 | 0 | 441 | SoajanII |
77,190,564 | 616,728 | psycopg3 pool "connection is closed" | <p>I have set up a decorator to provide a db connection using psycopg3. Sometimes, the connection is closed and it is throwing the following error when I try to use it:</p>
<blockquote>
<p>the connection is closed</p>
</blockquote>
<p>Here is my implementation:</p>
<pre class="lang-py prettyprint-override"><code>master_connection_pool = psycopg_pool.ConnectionPool(
    conninfo=masterDbConnectionString,
    min_size=5,
    open=True
)

def provide_db(func):
    """
    Function decorator that provides a session if it isn't provided.
    """
    @wraps(func)
    def wrapper(*args, **kwargs):
        arg_session = 'db'
        func_params = func.__code__.co_varnames
        session_in_args = arg_session in func_params and \
            func_params.index(arg_session) < len(args)
        session_in_kwargs = arg_session in kwargs
        if session_in_kwargs and kwargs[arg_session] is not None:
            return func(*args, **kwargs)
        if session_in_args and args[func_params.index(arg_session)] is not None:
            return func(*args, **kwargs)
        else:
            with master_connection_pool.connection() as conn:
                conn.row_factory = dict_row
                kwargs[arg_session] = conn
                return func(*args, **kwargs)
    return wrapper
</code></pre>
<p>Then in my app I can do this:</p>
<pre class="lang-py prettyprint-override"><code>@provide_db
def do_Stuff(user_id, db):
    db.execute("SELECT * FROM users WHERE id = %s", (user_id,))
    ...
</code></pre>
<p>What is the proper way to reconnect, or to prevent this from happening in the first place? This is a web app, so it's a long-running process.</p>
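<p>A possible direction (a hedged sketch, not your exact psycopg objects): keep checking a connection out of the pool per call, as the decorator already does, and retry once when the checked-out connection turns out to be stale. Recent <code>psycopg_pool</code> releases (3.2+) can also validate connections at checkout via <code>ConnectionPool(..., check=ConnectionPool.check_connection)</code>. In the sketch below, <code>ConnectionClosed</code> stands in for <code>psycopg.OperationalError</code>, and the positional-argument handling from the original decorator is omitted for brevity:</p>

```python
from functools import wraps


class ConnectionClosed(Exception):
    """Stands in for psycopg.OperationalError in this sketch."""


def provide_db(pool, retries=1):
    # Decorator factory: check out a connection per call, and retry once
    # with a fresh checkout if the pooled connection is stale/closed.
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            if kwargs.get("db") is not None:
                return func(*args, **kwargs)
            last_error = None
            for _ in range(retries + 1):
                try:
                    with pool.connection() as conn:
                        kwargs["db"] = conn
                        return func(*args, **kwargs)
                except ConnectionClosed as exc:
                    last_error = exc
                    kwargs.pop("db", None)  # drop the dead connection
            raise last_error
        return wrapper
    return decorator
```

<p>With a real pool you would catch <code>psycopg.OperationalError</code> instead; the retry hands the request a fresh connection rather than the dead one.</p>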
| <python><postgresql><fastapi><psycopg3> | 2023-09-27 20:05:13 | 1 | 2,748 | Frank Conry |
77,190,562 | 1,769,172 | Python Plotly Sunburst Coloring | <p>I have the following code to generate a sunburst plot using plotly in Google colab.</p>
<pre><code>import plotly.express as px
import matplotlib.pyplot as plt
def generate_single_color_sunburst(person, color):
color_hex = "#{:02x}{:02x}{:02x}".format(int(color[0]*255), int(color[1]*255), int(color[2]*255))
# Set everything to grey initially
sample_df['color'] = 'grey'
if person != 'all_greyscale':
# Get the B values associated with the current person
Bs = sample_df[sample_df[person].str.strip() == '1']['B'].tolist()
# Color those specific B values
sample_df.loc[sample_df['B'].isin(Bs), 'color'] = color_hex
# Identify the unique A values for those specific B values
As = sample_df[sample_df['B'].isin(Bs)]['A'].unique()
# Color only those specific A entries
for A in As:
sample_df.loc[(sample_df['A'] == A) & (sample_df['B'] == ''), 'color'] = color_hex
# Create the sunburst diagram with a fixed color mapping
fig = px.sunburst(sample_df,
path=['A', 'B'],
color='color',
color_discrete_map={'grey': 'grey', color_hex: color_hex},
title=f'Sunburst highlighting {person}')
# Remove the temporary 'color' column
sample_df.drop('color', axis=1, inplace=True)
return fig
def get_color_for_person(person_name):
# Map each unique name to a unique number between 0 and 1
unique_names = sample_df.columns[2:].tolist() # Assuming the person columns start at index 2
num_people = len(unique_names)
name_mapping = {name: idx/num_people for idx, name in enumerate(unique_names)}
# Get the color for the person using the viridis colormap
colormap = plt.cm.viridis
return colormap(name_mapping[person_name])
import pandas as pd
# Creating a new sample dataset
data = {
'A': ['Dolor', 'Dolor', 'Sit', 'Sit', 'Amet', 'Amet'],
'B': ['', 'Consectetur', 'Adipiscing', '', 'Elit', 'Sed'],
'Jessica Smith': ['', '1', '', '1', '1', ''],
'Thomas Brown': ['1', '', '1', '', '', '1'],
}
sample_df = pd.DataFrame(data)
# Generate a sunburst for 'Jessica Smith' for debugging
fig = generate_single_color_sunburst('Jessica Smith', get_color_for_person('Jessica Smith'))
fig.show()
</code></pre>
<p>Which produces this plot:</p>
<p><a href="https://i.sstatic.net/7k1RJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7k1RJ.png" alt="enter image description here" /></a></p>
<p>The problem is that I want "Amet" and "Sit" to be purple, too. In other words, when any B is colored in, I want the corresponding A to also be colored in. Thanks for any advice.</p>
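<p>One way to approach this (a hedged sketch on the sample data only; <code>color_hex</code> is a stand-in value): colour <em>every</em> row of an <code>A</code> that contains a highlighted <code>B</code>, not only the rows where <code>B</code> is blank. As I understand it, <code>px.sunburst</code> derives a parent sector's discrete colour from its children, so a parent with mixed grey/coloured children stays grey:</p>

```python
import pandas as pd

sample_df = pd.DataFrame({
    "A": ["Dolor", "Dolor", "Sit", "Sit", "Amet", "Amet"],
    "B": ["", "Consectetur", "Adipiscing", "", "Elit", "Sed"],
    "Jessica Smith": ["", "1", "", "1", "1", ""],
})

color_hex = "#443983"  # stand-in for the viridis colour
sample_df["color"] = "grey"

# Non-blank B values ticked for this person
Bs = [
    b for b in sample_df.loc[sample_df["Jessica Smith"].str.strip() == "1", "B"]
    if b
]
sample_df.loc[sample_df["B"].isin(Bs), "color"] = color_hex

# Colour *all* rows of any A that contains a highlighted B, not only the
# rows where B is blank, so the parent sector inherits the same colour.
As = sample_df.loc[sample_df["B"].isin(Bs), "A"].unique()
sample_df.loc[sample_df["A"].isin(As), "color"] = color_hex
```

<p>The blank-<code>B</code> direct selections (e.g. "Sit" for Jessica) would still be handled by the separate branch in the original function.</p>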
| <python><plotly><google-colaboratory> | 2023-09-27 20:04:25 | 1 | 609 | Ken Reid |
77,190,461 | 10,489,887 | How to divide COCO dataset evenly? | <p>I have the full COCO dataset, which I downloaded from <a href="https://cocodataset.org/" rel="nofollow noreferrer">here</a>. I need to modify the <code>instances_train2017.json</code> file a bit to get the following:</p>
<ul>
<li>modify the annotations file, so it divides the full train dataset into <code>1/3</code> for all classes evenly</li>
<li>meaning that if I have, say, 100 images/annotations for <code>class_1</code>, the modified annotations file should hold <code>100/3</code> of the image objects/dicts in the JSON file</li>
</ul>
<p>I have written code like this, but it takes too much time and the resulting modified file is wrong:</p>
<pre class="lang-py prettyprint-override"><code>
import json
from collections import defaultdict
import random
# Path to your local COCO-format JSON annotation file
original_annotation_file = 'coco/annotations/instances_train2017.json'
output_annotation_file = 'evenly.json'
# Load the local COCO-format dataset from your JSON file
with open(original_annotation_file, 'r') as f:
coco_data = json.load(f)
class_counts = defaultdict(int)
target_class_counts = defaultdict(int)
# Calculate the target count for each class
for ann in coco_data['annotations']:
class_id = ann['category_id']
class_counts[class_id] += 1
for class_id, count in class_counts.items():
target_class_counts[class_id] = count // 3
# Create a list to hold the selected annotations
selected_annotations = []
# Iterate through the annotations and select the subset
for ann in coco_data['annotations']:
class_id = ann['category_id']
# Only include this annotation if we haven't reached the target count for this class
if class_counts[class_id] <= target_class_counts[class_id]:
selected_annotations.append(ann)
# Update the count for this class
class_counts[class_id] += 1
# Create a new COCO-format JSON data structure for the subset
subset_data = {
'info': coco_data['info'],
'licenses': coco_data['licenses'],
'categories': coco_data['categories'],
'images': coco_data['images'],
'annotations': selected_annotations
}
# Shuffle the selected annotations to mix up the classes if desired
random.shuffle(subset_data['annotations'])
# Write the subset data to a new JSON file
with open(output_annotation_file, 'w') as f:
json.dump(subset_data, f)
</code></pre>
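<p>One likely culprit (a hedged reading of the loop): <code>class_counts</code> already holds the full per-class totals when the selection loop starts, so <code>class_counts[class_id] &lt;= target_class_counts[class_id]</code> compares against the wrong baseline from the very first iteration. A sketch with a separate running counter, shuffling first so that the kept third is a random sample (the function name is mine):</p>

```python
import random
from collections import defaultdict


def select_third(coco_data, seed=0):
    # Full per-class totals, counted once and never reused as the running counter
    totals = defaultdict(int)
    for ann in coco_data["annotations"]:
        totals[ann["category_id"]] += 1

    anns = list(coco_data["annotations"])
    random.Random(seed).shuffle(anns)  # shuffle *before* picking

    taken = defaultdict(int)
    selected = []
    for ann in anns:
        cid = ann["category_id"]
        if taken[cid] < totals[cid] // 3:  # separate counter for what we kept
            selected.append(ann)
            taken[cid] += 1
    return selected
```

<p>The subset dict (<code>info</code>, <code>images</code>, ...) can then be assembled exactly as in the question; for a fully consistent file you would also filter <code>images</code> down to the ids referenced by the kept annotations.</p>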
| <python><json> | 2023-09-27 19:45:24 | 1 | 2,184 | mrconcerned |
77,190,372 | 6,599,648 | Cannot access Flask App running in Docker Container in my browser | <p>I'm trying to run a Flask app in a docker container and connect to it using my browser. I am not able to see the app and get an error <code>This site canβt be reached</code> when trying to go to <code>http://127.0.0.1:5000</code>. I've already followed the advice in these two questions (<a href="https://stackoverflow.com/questions/30323224/deploying-a-minimal-flask-app-in-docker-server-connection-issues">1</a>) (<a href="https://stackoverflow.com/questions/73163810/flask-app-in-docker-container-not-accessible">2</a>).</p>
<p>This is my Dockerfile:</p>
<pre><code>FROM python:3.11.5-bookworm
WORKDIR /app
COPY requirements.txt .
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["flask", "run", "--host", "0.0.0.0"]
</code></pre>
<p>and this is my app:</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask
app = Flask(__name__)
@app.route("/")
def home():
return 'hello'
if __name__ == "__main__":
app.run(host="0.0.0.0")
</code></pre>
<p>When I use docker desktop, I can see that the app is running correctly inside the docker container:</p>
<pre><code>2023-09-27 14:14:44 * Debug mode: off
2023-09-27 14:14:44 WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
2023-09-27 14:14:44 * Running on all addresses (0.0.0.0)
2023-09-27 14:14:44 * Running on http://127.0.0.1:5000
2023-09-27 14:14:44 * Running on http://172.17.0.5:5000
2023-09-27 14:14:44 Press CTRL+C to quit
2023-09-27 14:15:14 127.0.0.1 - - [27/Sep/2023 19:15:14] "GET / HTTP/1.1" 200 -
</code></pre>
<p>from the command line in the docker terminal, the output is also as expected:</p>
<pre><code># curl http://127.0.0.1:5000
hello#
</code></pre>
<p>However, when I use my browser to go to localhost (<a href="http://127.0.0.1:5000" rel="nofollow noreferrer">http://127.0.0.1:5000</a>), I get an error: <code>This site canβt be reached</code></p>
<p>In the <a href="https://www.youtube.com/watch?v=gAGEar5HQoU" rel="nofollow noreferrer">tutorial</a> I was watching, it worked, so I'm not sure what I'm doing wrong here...</p>
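<p>One common cause (an assumption, since the <code>docker run</code> invocation isn't shown): the container port was never published to the host. <code>EXPOSE 5000</code> only documents the port; the browser can only reach the app if the run command maps it with <code>-p</code>. A sketch, with an arbitrary image tag:</p>

```shell
# Build the image (the tag "flask-app" is just an example)
docker build -t flask-app .

# Publish container port 5000 on host port 5000; without -p (or -P)
# the host's 127.0.0.1:5000 has nothing listening on it.
docker run -p 5000:5000 flask-app
```

<p>The in-container <code>curl</code> succeeding while the host browser fails is exactly the symptom of a missing port mapping.</p>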
| <python><docker><flask> | 2023-09-27 19:27:21 | 2 | 613 | Muriel |
77,190,294 | 8,675,314 | Pytest-Django Database access not allowed error when importing a form | <p>I've been having a problem trying to run some tests using Pytest in Django.</p>
<p>I've been trying to test some forms, and I've just started by importing pytest and then importing the form to be tested from the forms file.</p>
<p>The form in question has a MultipleChoiceField with a choices parameter that gets populated by querying the database for some options.</p>
<p>So it's something like this:</p>
<pre class="lang-py prettyprint-override"><code>class MyForm(forms.Form):
my_choices=forms.MultipleChoiceField(choices=get_choices_from_db())
</code></pre>
<p>where <code>get_choices_from_db</code> returns <code>[(choice.id, choice.name) for choice in queryset]</code> for some queryset.</p>
<p>My test is still non-existant, I've just written this:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
from project.forms import MyForm
@pytest.mark.django_db
def test_my_form() -> None:
pass
</code></pre>
<p>and it breaks with the following error:</p>
<blockquote>
<p>RuntimeError: Database access not allowed, use the "django_db" mark, or the "db" or "transactional_db" fixtures to enable it.</p>
</blockquote>
<p>along with a stack trace that complains about the form class at the line where it has to access the db to get the choices.</p>
<p>I've seen a couple of similar cases around, like:
<a href="https://stackoverflow.com/questions/56389216/pytest-django-wont-allow-database-access-even-with-mark">pytest-django won't allow database access even with mark</a>
or <a href="https://stackoverflow.com/questions/37697215/django-pytest-database-access-for-data-migration/37704920#37704920">Django pytest database access for data migration</a>
but I did not really understand the solution.</p>
<p>Why would pytest instantiate the form at import time? Isn't it supposed to do such instantiations inside the test, as some might require db access?
The first solution talks about a migration that creates groups with permissions, which I didn't really understand, as I don't know much about Django migrations yet; if it was relevant, I didn't see how.</p>
<p>Does anyone have any ideas/suggestions about this?</p>
<p>Update: importing the form inside the test function works fine, but I wonder why it would try to instantiate the form on import. How does this work?</p>
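<p>A framework-free sketch of what is going on (names are illustrative): a class body, including every field constructor in it, executes when the module is imported, so <code>choices=get_choices_from_db()</code> hits the database at import time, before any test or fixture runs. Django's <code>ChoiceField</code> also accepts a callable for <code>choices</code>, so passing <code>choices=get_choices_from_db</code> (no parentheses) defers the query until the form is actually used:</p>

```python
calls = []


def get_choices_from_db():
    # Pretend this queries the database
    calls.append("db hit")
    return [("1", "one")]


# Class bodies execute at import time, so the call happens immediately:
class EagerForm:
    choices = get_choices_from_db()


assert calls == ["db hit"]  # the "query" ran on import


# Passing the callable itself defers the query until someone invokes it,
# which is what Django does when ChoiceField receives a callable:
class LazyForm:
    choices = get_choices_from_db


assert calls == ["db hit"]  # still only the one import-time hit
```

<p>In the real form that would be <code>forms.MultipleChoiceField(choices=get_choices_from_db)</code>; importing inside the test "works" for the same reason, as it simply moves the import-time evaluation past the point where the <code>django_db</code> mark takes effect.</p>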
| <python><django><pytest><pytest-django> | 2023-09-27 19:13:59 | 1 | 415 | Basil |
77,190,152 | 247,542 | How to change language in a Django+Selenium test? | <p>Why would Selenium not work with multiple languages?</p>
<p>I'm trying to write a Selenium test for a multi-language Django app. Everything works, until I try to change the user's language.</p>
<p>I have a <code>/set-language/</code> path, which calls:</p>
<pre><code>@login_required
def set_language(request):
lang = request.GET.get('l', 'en')
request.session[settings.LANGUAGE_SESSION_KEY] = lang
response = HttpResponseRedirect(request.META.get('HTTP_REFERER', '/'))
response.set_cookie(settings.LANGUAGE_COOKIE_NAME, lang)
return response
</code></pre>
<p>I have Django's internationalization support to pull the language from this session variable. It works perfectly in the dev server. However, when I call it in a Selenium test with:</p>
<pre><code>driver.get('/set-language/?l=ja')
</code></pre>
<p>Selenium throws the error:</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/myproject/.env/lib/python3.11/site-packages/django/test/utils.py", line 461, in inner
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/myproject/src/tests/test_lang.py", line 44, in test_language
driver.get(reverse('set_language') + '?l=ja')
File "/usr/local/myproject/.env/lib/python3.11/site-packages/selenium/webdriver/remote/webdriver.py", line 353, in get
self.execute(Command.GET, {"url": url})
File "/usr/local/myproject/.env/lib/python3.11/site-packages/selenium/webdriver/remote/webdriver.py", line 344, in execute
self.error_handler.check_response(response)
File "/usr/local/myproject/.env/lib/python3.11/site-packages/selenium/webdriver/remote/errorhandler.py", line 229, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: Reached error page: about:neterror?e=connectionFailure&u=https%3A//localhost/admin/&c=UTF-8&d=Firefox%20can%E2%80%99t%20establish%20a%20connection%20to%20the%20server%20at%20localhost.
Stacktrace:
WebDriverError@chrome://remote/content/shared/webdriver/Errors.jsm:186:5
UnknownError@chrome://remote/content/shared/webdriver/Errors.jsm:513:5
checkReadyState@chrome://remote/content/marionette/navigate.js:65:24
onNavigation@chrome://remote/content/marionette/navigate.js:333:39
emit@resource://gre/modules/EventEmitter.jsm:160:20
receiveMessage@chrome://remote/content/marionette/actors/MarionetteEventsParent.jsm:44:25
</code></pre>
<p>and the browser is unable to bring up the page.</p>
<p>Does Selenium not support any languages besides English?</p>
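<p>A hedged guess from the traceback: <code>driver.get()</code> is being handed a bare path, so the browser has to invent a host (it ends up at <code>https://localhost/...</code>, where nothing is listening); the language switching itself is probably fine. Django's live-server test cases expose the real address as <code>self.live_server_url</code>, and every URL handed to Selenium needs that prefix. A tiny sketch (the helper name is mine):</p>

```python
def absolute_url(live_server_url, path):
    # live_server_url looks like "http://localhost:8081"; Selenium needs
    # absolute URLs, unlike Django's test client.
    return live_server_url.rstrip("/") + path


# In a LiveServerTestCase this would be:
#   driver.get(absolute_url(self.live_server_url,
#                           reverse('set_language') + '?l=ja'))
print(absolute_url("http://localhost:8081", "/set-language/?l=ja"))
```
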
| <python><django><selenium-webdriver><internationalization> | 2023-09-27 18:50:34 | 1 | 65,489 | Cerin |
77,190,107 | 2,530,859 | How to properly terminate a process that is running in the background via subprocess? | <p>I have method that runs a bash script in the background via <code>subprocess.Popen()</code>:</p>
<pre><code>def my_method():
try:
process = subprocess.Popen(["bash", "my_script.sh", "&" ],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
except Exception as e:
print(f"Error running the script: {e}")
return process
def caller():
process = my_method()
</code></pre>
<p>Now, I want the <code>caller</code> method to properly terminate the background script. I tried several solutions like <code>process.kill()</code> and <code>process.terminate()</code>, but none of them works. I was wondering what the best solution for this is?</p>
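<p>A sketch of one approach (POSIX-only, and an assumption about the goal): the <code>"&amp;"</code> in the argument list is just passed to <code>my_script.sh</code> as <code>$1</code>, since <code>Popen</code> is already non-blocking, and <code>process.terminate()</code> only signals the <code>bash</code> wrapper, not whatever the script itself spawned. Starting the child in its own process group lets a single signal reach all of them:</p>

```python
import os
import signal
import subprocess


def run_in_background(cmd):
    # start_new_session=True puts bash *and* anything it spawns into a new
    # process group whose id equals the child's pid.
    return subprocess.Popen(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        start_new_session=True,
    )


def stop(process, timeout=5):
    # Signal the whole group, then escalate if it ignores SIGTERM.
    os.killpg(os.getpgid(process.pid), signal.SIGTERM)
    try:
        process.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        os.killpg(os.getpgid(process.pid), signal.SIGKILL)
        process.wait()
```

<p>For the question's case that would be <code>run_in_background(["bash", "my_script.sh"])</code>, with the stray <code>"&amp;"</code> dropped.</p>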
| <python><subprocess><terminate> | 2023-09-27 18:42:43 | 0 | 445 | amin |
77,189,872 | 2,862,945 | Tkinter's checkbutton makes checkmark invisible after changing background color | <p>I am using <code>tkinter</code> to create a simple GUI with Python. I wanted to change the background and foreground color of a <code>Checkbutton</code> and while that works, it makes the checkmark itself invisible. It still works (tested with a simple function), but I seem to be missing something. Here is the code:</p>
<pre><code>import tkinter as tk
def check1Clicked():
if var1.get():
print("clicked")
else:
print("not clicked")
root = tk.Tk()
root.geometry("100x100")
var1 = tk.IntVar()
c1 = tk.Checkbutton(root, text="bla", variable=var1,
onvalue=1, offvalue=0,
bg="red",fg="white",
command=check1Clicked
)
# use "place" to easily center widget
c1.place(relx=.5, rely=.5, anchor="c")
root.mainloop()
</code></pre>
<p>Any suggestions what I am missing would be greatly appreciated!</p>
<p>The versions I am using:</p>
<pre><code>python: 3.8.10
tkinter: 8.6
</code></pre>
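<p>What is most likely missing (hedged, since platform themes vary) is <code>selectcolor</code>: on the classic Tk widgets the checkmark is drawn inside the indicator box, whose fill colour is <code>selectcolor</code> (white by default), while the mark itself follows <code>fg</code>. With <code>fg="white"</code> that is a white mark on a white box. Matching the indicator to the button background keeps the mark visible; this is a drop-in replacement for the <code>Checkbutton</code> call above:</p>

```python
c1 = tk.Checkbutton(root, text="bla", variable=var1,
                    onvalue=1, offvalue=0,
                    bg="red", fg="white",
                    selectcolor="red",        # indicator box fill; white mark now shows
                    activebackground="red",   # optional: keep colours on hover
                    activeforeground="white",
                    command=check1Clicked
                    )
```
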
| <python><tkinter> | 2023-09-27 18:01:08 | 1 | 2,029 | Alf |
77,189,868 | 19,675,781 | How to rename pandas column names by splitting with space | <p>I have a dataframe like this:</p>
<pre><code>index col A col B col C
index1 3 1 2
index2 1 4 9
index3 5 1 2
index4 8 2 2
index5 2 1 6
</code></pre>
<p>I want to rename the columns by splitting them on the space. I don't want to do it manually since I have hundreds of columns. My desired output looks like this:</p>
<pre><code>index A B C
index1 3 1 2
index2 1 4 9
index3 5 1 2
index4 8 2 2
index5 2 1 6
</code></pre>
<p>Can anyone help me with this?</p>
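<p>A short sketch (assuming each column name that needs renaming contains a space and the part you want is the last token): <code>rename</code> with a callable touches every column without listing them:</p>

```python
import pandas as pd

df = pd.DataFrame(
    {"col A": [3, 1, 5], "col B": [1, 4, 1], "col C": [2, 9, 2]},
    index=["index1", "index2", "index3"],
)

# Keep only the part after the last space; a name with no space is left
# unchanged, because split() then returns the whole name as one token.
df = df.rename(columns=lambda c: c.split()[-1])
print(list(df.columns))  # -> ['A', 'B', 'C']
```

<p>If some names contain more than one space and you want everything after the first one, <code>c.split(' ', 1)[-1]</code> would keep the rest intact.</p>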
| <python><pandas><dataframe><multiple-columns> | 2023-09-27 18:00:10 | 3 | 357 | Yash |
77,189,787 | 1,473,517 | How to fix <iv=None> in numba? | <p>I have a dict that takes a tuple of ints as keys and a numpy array as values. Here is a MWE:</p>
<pre><code>import numba as nb
import numpy as np
@nb.njit
def make_dict():
d = {}
A = (np.ones(10))
d[(1, 2, 3)] = A
return d
B = make_dict()
print(repr(B))
</code></pre>
<p>If I then look at B I get</p>
<pre><code>DictType[UniTuple(int64 x 3),array(float64, 1d, C)]<iv=None>({(1, 2, 3): [1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]})
</code></pre>
<p>I think <code>&lt;iv=None&gt;</code> means that the type is unspecified, which I assume means it won't be optimally compiled. How can I fix this?</p>
| <python><numba> | 2023-09-27 17:47:58 | 0 | 21,513 | Simd |
77,189,655 | 3,177,186 | How do I pull a thumbnail from a JPG, convert to base64, then display that in Tkinter (Python) | <p>I'm trying to show a file listing with details of files including size, dimensions, and thumbnail. Most of this is pretty straightforward, but I've tried days of searching for various posts and guides and none of them seem to do the job. This is what I have so far:</p>
<pre class="lang-py prettyprint-override"><code>import subprocess, sys
from pathlib import Path
import json
from functions import *
import os
import glob
from tkinter import *
from PIL import Image, ImageTk, ImageOps
from datetime import datetime
import base64
import io
import ffmpeg
file_deets = []
blank_movie = "/9j/4AAQSkZJRgABAQAAAQABAAD/2wCEAAkGBxMTEhUSEhAVFhIVGBUWGBgXEhgYGBYVGBUYFxUZFxgYHikgGBslHhUVLTEhJSktLi4uGB86ODMsNygtLisBCgoKDg0OFRAQFysdFR0rKystKysrLS0tLSstKystNy0rKystLTcrLS0tKy03NysrKysrLSstLS0rKysrKysrK//AABEIAOEA4QMBIgACEQEDEQH/xAAcAAEAAgMBAQEAAAAAAAAAAAAABgcEBQgDAgH/xABTEAABAwEEBQQJEAcHBQEBAAABAAIDEQQFEiEGBzFBURNhcYEUIjJScpGTobEVGCM1QlNiZXOCpLPB0dLjFzM0VZLC4RZDVIOUorIkY4S000Ul/8QAFgEBAQEAAAAAAAAAAAAAAAAAAAEC/8QAGREBAQEAAwAAAAAAAAAAAAAAAAERAhIh/9oADAMBAAIRAxEAPwC8UREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERARFj223RQtL5pWRsG1z3tY0dbiAgyEUBvvW/dVnqBO6d49zAwu8T3UYepyg95a955XcnYbvGImjTIXSOd0Rx0oesoL2Wtve/7LZRW02qKLgHyNaT0NJqepc/3rbNI7UGm0SzWeN9S0EizCgyILWUe4Z7wdy1Vm0EbXFaLUXE5kRjedvbv2/woLZvnXhdsWUImtDs82MwMrzukofE0qITa4r0thMd32ANNKdpG+0SNrsNaBo620yWw1eaP3YLRyUlkZIXt7R03snbNzphd2uYru3DirkiLI2hrGta0bA0AAdACCq9TusW0WqSWyW94dK2jmPwNY7bhe14aAMjhoabzVW6ua7wb2BpI+mUcsuIc7bQMYA5g91Pmro6yS4mNdxA8e9B7IiICIiAiIgIiICIiAiIgIiICLRX1pjYLJUWi2wscMyzHif5NlXeZQK+9fFijqLNZ5Z3DYXUiYes1d/tQW0vl7wASSABtJNAFzxa9bF82uoskDYmE5OjhxkDeHSSVb10C0druC8bWcVutpIrWkkrpS0/BYDgG/YQgvi+dZV12aofbY3OFRhirKajaDydQ09JCgV86/oxlZLE923tpnhlDu7RmKo+cFlaI6mrudFHPLNNaMTc24hGyuxwoztgQQR3asW59GrDZKdj2SGMjLE2MY+t57Y9ZQUVbtN9ILZUNDrOwjZHGIRQ7xJIcezg5ad2hs8zuUtltxPO01dK8jgXPIpv4q39aNlpgtLNh9jf07WO9I6mqun2rnVEr1davLqka4yROmmjcK8pIcND3JDGYQRk4UNditO77BZ7O3BBBHE3hHG1g68IzVJ6KaQdjWljyaRu7R/DC4jPqND0Aq232spJoxtOrJy9ldhHskfsjOJoO2b1trlxAVMvtXOrjtt4sibjllZG3vnvDR43FUNpXe1mjtMgglEkROJuDMDFmW1yGRqMt1FbMGyivBzHNew0c0hzTwINR6Fc113wLRCyZmx4rTgdjm9RBHUuZzfUjzhiiJcdgzcT1Bb27btvmWPkmPlhhJLqGTkhU5GrR25GWylFJcG916w4Z7NaWuAkwlho4Ym4HB8Zw7fduzpuV16C3kJ7JHINha1w5g4VH2qkLr1RPca2ic1O0MbTPf278z/Crs0Nuk2aMRhuGNrQ0DPYAA2lczkNvOlEkREUBERAREQEXja7VHE0vlkaxg2ue4NaOknIKGX1rauqz1HZPLPHuYGmSvQ/Jn+5BOUVF3vr7kccFisIqe5dM4uJ/wAuOmfzitNHemkd5yCNsr4WvrQAtswAAqdlJHDLZn4kF/XrfdmswxWi0xQg7OUka2vQCczluUFvnXXdkNRE6W0OzHscZa2vO6TDlzgFRW6tQsjzjt1v7Y5uETS4k8eVkpn80qaWDVHdULCBAZJC0tEkzy8gkZOwZMrX4KCvrw132+cltisTG
dT538xFA0DraVo7Sy+7cQLVbXRxuoCHy8myhNCXRQimXO1SG1TOic6ItDCwlpaBQAg0NAN2SwZbYTvQSS5tQUDaG1WySQ5HDExsY6C52IkdFFO7m0Auuy0MViiLhnikBlcDxBkrh6qLE0K0i5eytDnVki9jdxIA7R3WKZ8QVuH2xWQVtptCbPaXMH6t3bs4YTtb1EEdFFGZLXzqwdYViM1mMjRV8FXjiWf3g8QB+aqctF6Mb3TwOaufiGaWYLV1a6Q0c+yuOTqyM8ICjx1ih+a5Tp9sXM1n0r5GVksQJcxwcNwNDmDvoRUHpUil1g3va8rJAI2E5GOIu8cklWjpyVlguW+YmzQyRyEBjmkEnY07Wuz4EA9SoC23oyNzmue0uaS04TiFQaGhGRGW1bJmg952wh1rtLqVrR8jpS08zR2o8YUmubU9DlyhklO8Vwt8TMx/EpborKfSDvWE9Jp5gpNDfd+2ljY4+UjjDWtxBoiqAKV5R9HE03gq47m1fQw9xDGw8Q0B3jFXHrKklm0fjbtzPMKec1PnUHP1k1YWud2O1WmrjSp7aRxHAveQBv4qY3Lqis7aF0bpTxe4keJuFvjqrhisjG7GDp2nxle6CI3VoVHEKNYyNvBrQPM2g863lmuSJu6vm9C2SIPOKFre5aB0BeiIgIiICIiAovrOvGez3ZaZrKaTMa2jgKlrS9okcK7w0uPNRShYl72Btoglgf3Esb43dD2lp9KDnLQjV/ab7Y61T3icLXmMl/KTS4gGuPdkACjm0OI7+CnN4am7DZrM+VvKzyx0eeUfQFre6o1gG7POuxaHUFeboLRbLDJUOoH4e9fE4xyDpONv8CuSW2KyCloZ2RikUbIx8Bob46bV9Wa+HxSMlae2Y4OHPTceY7OtY2lFl7GtD4vc1xM543dz4sx0tK0ctsAzJoOcoOjbLe7ZY2SsPavaHDrGw84+xfEltVL6KaxrPZYXxTue4NOKMRtxEh1cTcyAM88z7orxvLXHI44LJYxU5NMji8n/AC2Uz+cVqYN9rOsmGVtpaO1k7V/yjRketo/2FQGe3tb3TgOk0WTbH33eIwyYmROIOEhsLRTZlk93nWZdeqOR1DPP0hjf530/4rNGFo1p2yxzF9HSMc0tc1uVSM2Gp4HzOKz7VrXt05w2OxtaeZrp3jhSgA8bVNrl1TWZlDyGM8ZKur1Oo3xNU2sGiUbABRrQNwGXiFAPEm0UO+577t36+d7Wn3LpMLSN/sUWXjAW3unU8DTlpnuPBoDB0Z4nHxBXxBdMTfc16fuGSzWRgZAADmFFBWty6sLPFQtszAe+eMRB6X1I6gFLrLozG3b5hXzn7lvkQYsN3Rt2MB6c/TkskBfqICIiAiIgIiICIiAiIgIiICIiDnHS14uvSR07qiGR3KmmdWTsLZTTfR/KGnwQtjeuuaIZWayvf8KVwYOnC3ET4wmudoN/2EEAgssoIOYI7KlrVXNeFw2d8bmts8IcR2p5JndDMblZRyvpNptara5rpOTZgBAEbKZEg5lxJOzjvKj5kJILyTxzzpzVXQZmhGRgjBG7km5HxL6itcLSHchEaEGhjZnzbEFJXfeNijzfYHynb29roPEyMeI1Uru7WTZ4RSO6WtHwbQGg9OGHPrXRFku+ySMbI2zwlrwHD2FmwjoXr6j2b/DQ+SZ9ygoqz66Im/8A5Vf/ADKeiGvnWdFr8a3ubpA/8vPx8ipdpHZIoJy3kIsLu3b7E3Yd2zcQfMtXy8PvMXk2fcrg1XrhPiv6Z+SnrhPiv6Z+SrE0RFlnjc02eEvjOfsTM2nuTs6R1Lfeo9m/w0PkmfcoKd9cJ8V/TPyU9cJ8V/TPyVYml10wMjErLPEMJo6kTO5OQOzcaeNRDl4feYvJs+5Uar1wnxX9M/JT1wnxX9M/JUo0ctVm5drHwQ4ZO1ziZk49zu45danXqPZv8ND5Fn3KCnfXCfFf0z8lPXCfFf0z8lW7arisz2OaLPCCQaHkmZHcd
nFVzK+Jri10EYc0kEcmzIg0I2KjUeuE+K/pn5KeuE+K/pn5K2zbTCCCIYqjP9Uz7lYd3WOyTRMlbZoaPFf1TMjsI2bjUdSCpvXCfFf0z8lPXCfFf0z8lXF6j2b/AA0PkmfcoVpTYoYJv1EWB4xN9ibt2OGzj6QoIl64T4r+mfkp64T4r+mfkra8vD7zF5Nn3KVaHmzTNdG6zwl7Mx7EzNp6tx9IVEA9cJ8V/TPyVn3Fr4imnjimsLomSOazlGziTCXEAEtLG9rnmQa03FWn6j2b/DQ+SZ9yoTXlZmR3vZWxsaxvIQGjWhor2RNnQb8goOiWmuY2L9Wo0XceQA4HLxBbdAREQEREFBa5/b+w+BZf/alV3mVVNru0Ots1qht9kZjEcbGEBzQ6NzJHva/tiAR24/h51Cbz0o0ggZyk1oe1lQK0gOZ2ZAEqwTnWHCYLTjGTJgXjwxlIPGQfnqL+qXOtjd2jt6XrY4Z5b0BY/E9rXQNq0hzmHNoHAp+iW2fvFnkP6oJrqx0g5Rj7MT20fbs8Bx7YdTj/AL1OOVVN3bq1vGCQSw3o1jwCKiDcRQihNCtx/Zy/P32P9Mz7kEj1iWQvs3LNHbwVd/lnu/FkfmlVV6pc6mEmjN9uBab6BBBBBszKEEUIOSj/AOiW2fvFnkP6oMnRHSTkLUxznUjd7G/gGuIz6jQ9AKuXlSqROqS2fvFnkP6qQR6NX2AAL7yAAH/TNOQyGZGaCybUwSMdG/Nr2lp6CKeNUdeMroZXwv7qNxaeemwjmIoetSr+zl+fvsf6Zn3LT3lq1vGeQyzXm10hABd2OATQUFaU3INKLz4GhV1aNX32VZo5q9sRR/M9uTvPn0EKpv0S2z94s8h/VbO6dBb1szSyC9wxrjiIFnBBdQCuddwHiQW1yqrHWTAYZ2zDuJhnzSNoD4xQ9Tl9f2cvz99j/TM+5Yd6aE3taGcnPe7XsBDgDZm5OAIBBGewnxoIz6pc6n2rDSDEZLK45/rGeYPH/E/xKJfoltn7xZ5D+q97BqyvCGRssV5tbI2uFwg2VBB2ngSgublVHtOrGZbK5zRV8XsjeJAHbj+GppxaFFP7OX5++x/pmfcn9nL7/fY/0zPuVEN9UudbLR3STse0Ryk9oDR/gOyd4tvSAvz9Ets/eLPIf1T9Ets/eLPIf1UwXYJuByVC69HVveyfIQf+zMpXZtFb6YxrGX0A1gDWjsdpo0CgFTnsCqi7n229rXHJLJyjouTDnuwtwRh5cBRoGLMupQbTwSjpzRb9T1/YFuVp9GGkQ5jOv2BbhQEREBERB+EVyKrLXxY2Nupxa2h5WLeeJVnKuNf3tS75WH0lBk6omVueyH4Mv18iluFRXVB7TWTwZfr5VLFqD5wphX0iD5wphX0vK1Wlkbcb3BrRvPHcBxPMM0H3hX44ACpIAG87AsJsk8vcN5GPvpG1kI+DHsZ0uNfgr1juiGoMgMzhnilOOh4tae0Z81oTR4+qsJ7h/KbvYmPlz5+TBp1r6Nqf7myTu6o2/WSNW0MwA3ADqACwJr4hG2dnU4H0KaPLsib/AAU38dn/APqgtT/dWSdvVE76uRxXz6u2b39vn+5e8N8wnZOzrcB6UHh6qwju3mPd7Kx8WfNygFepZzQCKggg7CNhWQ2cOGRBB6wR9q18l0Q1JjBhcc8URwVPFzR2j/nNKaMnCmFYLpJ4u7by0ffMbSQD4Uex/S0g/BWXZbSyRuNjg5p3jjvB4HmOao+8KYV9Ig+cKYV9IgMjXO2pD9om8Fn8y6Nj2LnHUh+0TeCz+ZSjpaKMNFGigX2iKAiIgIiICrjX97Uu+Vh9JVjquNf3tS75WH0lBmaoPaayeDL9fKpYolqh9prJ4Mv18qldVqD6RfNV+PkABJNAASTwA2oMe8bcImg0LnuNGMG17uHMOJ3LFsllOISzEPm3d5GDujG7ndtPmWFY5DI8zu2uyYD7iP7zvWLfV9OYeTjNHUq528V2Ac/PzhS0SfEeC
Fx20yVU3zeRAzcXPdvJqQN5zWmst4vjOKORzHcWuIr002qC65iHNLXZtcCCOY5FQm8tE5QSYXB7dwcaPHNnkemo6Fk6J6Sm0B0clOVYK1ApjbWhJAyBBIrTI1Ck0LS7Zs4oK99QbVs5A/xM9OJbK7tE5XEGZwY3eAQ5/myHTU9Cm/JN3vz6h5l5zREZg1HnQfcFGNDGijWgADmC9WvJ2BRrSO/uxosQAdI44WA7K0qXO4gcOJCrm33xLMayyufzE9qOhoyHUEF14jwWvtdlOIywkMm397IBukG/mdtHmVZXNeBJw17YZtNc6cKqXXRfj8QjldiByDj3QO6p3jp2IJTd1uErTkWvaaPYdrHfaOB3rLUdtjzG4Whm1uTx38e/rG0LfskBAINQQCDxB2LUH2i+apVB7R7FzjqQ/aJvBZ/MujYjkuctSH7RN4LP5lKOmURFAREQEREBVxr+9qXfKw+kqx1XGv72pd8rD6SgytUXtNZPBl+vlUqqopqj9prJ4Mv18qlFVqD7qtbf8nsRaPdkN6tp9HnWfVa++W1a08HD0FKMNr6AAbBktbeF2cq8OEgZXJxLS7ZsIAIqd20LPwJgWRW+mdl5G0ljXOewNZR5AoaipoWgCgJPOtVZbNLIaRxPd0NNOs7B1q3cKYUGg0RuR1nBklI5Vww0BqGNrUiu8kgc2SkzLURsK8MC97NJhqKDPfTzHm5kGfGxlBlWu+pz8RWFLaCKtDjhX1ycfOOYPoPEQSvySUYcLWinR589/Og0Wkt19kxYQ4Ne04mE7K0oQeY/YFXNuu+eI0kheOelW9ThkrZwJgQVVo8wyWmFnbBrnta4t2taTRxqQQKAnaFYrblbHKCJsbBn3NHVGypGTukAdC2GFMCD7dJUUOw5LNuB/sWE+4Jb1bR6fMtfgWfczaNceLvsCsGzqlV8VSq0jIhOS501IftE3gs/mXRMByXO2pD9om8Fn8yzVdMoiKAiIgIiICrjX97Uu+Vh9JVjquNf3tS75WH0lB76pPaayeDL9fKpPVRfVKf/AONZfBl+vlUlxLUR91WDfM4ZEXO2BzAeYF4bXqqsvEsK+rOZIJGAVJaaDi4ZtHjAVHk1q/cCj+il/tkIs0hpKG1jJ/vGDaBxc2mzeKHcVKA3xrCsaSjQXOIa1oJJJoA0CpJPAAKAXtrGrVthhY4g91OHVcOMcYI/3GvNuU5v+yvfZ3tZ3VAaDfRwJHmVYzaBz2n2SCPk675DgjdxIFC8dIbQ+dBhO1m29rqOEFe9dZgPQQ7zqT6L6yIp3thtMbYZHENa9riYnOOQDg6roqnYauHEhQK+bqt1lqLTZnGMV7ZzOUjoN4kbXD1kHmWrsUMdoe2GOzudJIcLRHIcyduRBoBvOwBB0bgWtv8AvmCxxGadxDa4WtaKvkfSoawHozJyA2rbWeAtY1pOIhrQSd5AoT1qudclgNIbQYjJFGHsdR5AjLi0hzg3c7CBWu1oG8INPbdalpc6kEEMbdwc10rz0uJDeoNXvY9YdtZQ2htnLe8MOGR3RgcMHSR1FRa5butlpyslldhPu2MwtpzzPy89VMLDqwtLKPldDI7bgEjsj85lHnpICYJro3fjLXGHhjo30qY3EHLvmOyxNqd4B5swTtsCiejl2TttIxxuZgBLqjKhaQAHCrXV5idhU0Ld29BjOavS4rSJI3OHc43Ac4FBXrzUb0sv9sbuxYzWZwJkI/u2UyHhOqKDcCTwrv8ARqzmOzRNO0jEfnkuAPQCB1KwbaqVXxiTEtIyrOciueNSH7RN4LP5l0NZjken7FzzqQ/aJvBZ/Ms1XTKIigIiICIiAq41/e1LvlYfSVY6rjX97Uu+Vh9JQeuqb2msvgy/XyKSVUa1T+01l8GX6+RSKq3EfdUqviqVVFZ6eXEWSY4yWhzscbgSMEm1zajZnmKdXclZuimsVhpZ7x9jlbkJqUaflKdwfhDtTzb5vbrIyVjo5BVrvGOBB3EcVVmlei5jNJAXR
nJkoy27Gu713Mcju3gZsVcAjNA4Ue0ioLc6jccsiOcL8bQ7/v8AEqGum87wu8/9LMXRVqYyMTDtJrGdhJO1hBPFTW6tckDqMt1jcxwyLoqPaOcsdR7OgYlkWNhXhFYI2uL2xMa9woXBjQ4jbQuAqQtbd+md1zU5O8Y2E+5lfyR6MMwB8S38DGvzjnjeOajvO1yDxwL8MdciMiszsN/fN/hP3rynYGZyTxsHPRvnc5B44V8uoNp+/wAS1F4aZ3XDXlLxjeR7mN/KGvDDCCfGole2uSBtWWKxue45B8tGNPOGNq9/QcKuixTGaEntWgVJdlQbznkBzlV3pZrGY2tnu72SV2RmpVoP/br+sd8LuR8LdCL2vW8LwP8A1UxbFtEYGCMbNkYzcQRteSRxW20X0ZMjsMTaAZPlcKhvNzn4I5q02qDM0KuB0stZCXZ45nE1xZ1DKnaXGtTwxHgrXqsG7LAyCMRxjIZkna5x2ucd5P3AUAAWXVbkR91Sq+KpVUZllOR6fsXPepD9om8Fn8y6CshyPT9i591IftE3gs/mWKrplERQEREBERAVca/val3ysPpKsdVxr+9qXfKw+koPvVR7TWXwZfr5FIKqP6pxW57KPgy/XyKS9iniFuI8qpVevYp4hOxTxCo8qr5lYHAtc0OaRQggEEHaCDtC9+xTxCdiniEEKvfQlpq6zPwHvHklnzXZub14uaih173LJH+0Wcho905ocz+MVaPHVXN2KeIX72MeIUsg5+kuOB2YaRXvXH7ahYjtFotzndYafsCvq1aL2eQkvgixHa4NwuPS5tD51rpNAbMe5xt6JHH/AJ4lnFUr/ZltKcq6nDCKelG6LRb3O6g0fYVcn6O4Pf5v4o//AJr1j0Asw7oyO6ZCP+AamCoorihbtaXU75x+ygW3um6HyZWazlwO9jQG9bzRvnqrWsuilmjoWwR1GwuBe4dDn1IWz7GPEK9RB7p0I2OtL6/9uMkD5z8iehoHSVMIImsaGMaGtaKBrQAAOYDYsjsU8QnYp4ha8R5VSq9exTxCdiniEHlVKr17FPEJ2KeIQe1jOR6fsXP+pD9om8Fn8yvS9LxjslnlnmeAyNpcTxNO1aOJJoAOJVI6j7M7lJpKdr2jQeJAc4+IU8YWeSuk0RFkEREBERAUX1k6MPvGwyWaN4ZIS17C6uEuaa0dTMAiualCIOZ4Lk0gsbeRhlkDGEgMbO3C3Mk4Q8gAEknLivvltJPfpfLQfiXSElnY7NzQTxIzXn2BH3gQc58tpJ79L5aD8SctpJ79L5aD8S6M7Aj7wJ2BH3gQc58tpJ79L5aD8SctpJ79L5aD8S6M7Aj7wJ2BH3gQc58tpJ79L5aD8SctpJ79L5aD8S6M7Aj7wJ2BH3gQc58tpJ79L5aD8SctpJ79L5aD8S6M7Aj7wJ2BH3gQc58tpJ79L5aD8SctpJ79L5aD8S6M7Aj7wJ2BH3gQc58tpJ79L5aD8SctpJ79L5aD8S6M7Aj7wJ2BH3gQc58tpJ79L5aD8SctpJ79L5aD8S6M7Aj7wJ2BH3gQc58tpJ79L5aD8SctpJ79L5aD8S6M7Aj7wJ2BH3gQc58tpJ79L5aD8SctpJ79L5aD8S6M7Aj7wJ2BH3gQc2y6LXvb3NFttD8DTsfJylOdsbDhrntqOlW9oLoqyzsbGxtGN2k5kna4uO9x8w8SmgsEfeBZDWgCgFBzIP1ERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQf/9k="
def get_file_deets(name):
global file_deets
global thumbs
full_path_filename = f"{settings['dest_dir']}\\{name}"
temp = {}
temp['name'], temp['ext'] = name.rsplit('.')
# keep track of the whole name
temp['name_ext'] = name
try:
image = Image.open(full_path_filename)
temp['type'] = 'image'
# Grab it's "taken on" date
exif = image.getexif();
temp['dtime'] = exif.get(306).replace(":", "")
temp['width'],temp['height'] = image.size
# first make a thumbnail, then B64 THAT (not the whole image)
thumbnail = ImageOps.fit(image,(100,100))
# Convert the thumbnail to base64
with io.BytesIO() as output_buffer:
thumbnail.save(output_buffer, format="JPEG")
temp['thumb'] = base64.b64encode(output_buffer.getvalue()).decode("utf-8")
except IOError:
vid=True
temp['type'] = ['movie']
temp['dtime'] = datetime.fromtimestamp(os.path.getmtime(full_path_filename)).strftime("%Y%m%d %H%M%S")
temp['thumb'] = blank_movie
probe = ffmpeg.probe(full_path_filename)
video_streams = [stream for stream in probe["streams"] if stream["codec_type"] == "video"][0]
temp['width'], temp['height'] = video_streams['width'], video_streams['height']
temp['size'] = f"{os.path.getsize(full_path_filename)/1000000:.2f}MBs"
file_deets.append(temp)
def list_dest_dir():
global settings
if (not os.path.isdir(settings['dest_dir'])):
status_msg("Output directory doesn't exist. Unable to show files")
return
# clear previous results
#transit_folder.delete(0, END)
folder_contents = os.listdir(settings['dest_dir'])
if not len(folder_contents):
status_msg("No files found - was the phone empty?")
return
folder_sorted=[]
for index,value in enumerate(folder_contents):
if (not os.path.isdir(settings['dest_dir']+'\\'+value)):
folder_sorted.append(folder_contents[index])
folder_sorted = sorted(folder_sorted)
for file in folder_sorted:
get_file_deets(file)
# we store them in a list, but no need to iterate twice, just print the last one added
lastfile = file_deets[-1]
pre_len(lastfile)
temp = Frame(transit_folder)
temp.pack(expand=True,fill=X)
b64Img = PhotoImage(data=lastfile['thumb'])
thumbnail = Label(temp, image=b64Img)
thumbnail.pack(padx=5,pady=5)
#saves it from garbage collection apparently
thumbnail.image = b64Img
# FILE LISTING
file_frame = Frame(root)
file_frame.pack(expand=True, fill=BOTH)
transit_folder = Frame(file_frame)
transit_folder.pack(expand=True, fill=BOTH, padx=10)
list_dest_dir()
root.mainloop()
</code></pre>
<p>In theory, it should pull all files from the folder (which will be photo and movie files from a cellphone). It should pull details from the files like size, dimensions, etc. It sets the thumbnail to either a thumbnail of the pic or a static image of a movie icon (I validated that the b64 image data decodes to the proper pictures using an online converter).</p>
<p>So I have a nice list of files with a dict of details for each file.</p>
<p>Then I go through each and try to display them in my tkinter window. When I try, I get an error:</p>
<pre><code>Traceback (most recent call last):
File "<path>\main.py", line 168, in <module>
list_dest_dir()
File "<path>\main.py", line 102, in list_dest_dir
b64Img = PhotoImage(data=lastfile['thumb'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\t\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 4125, in __init__
Image.__init__(self, 'photo', name, cnf, master, **kw)
File "C:\Users\t\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 4072, in __init__
self.tk.call(('image', 'create', imgtype, name,) + options)
_tkinter.TclError: couldn't recognize image data
</code></pre>
<p>I've tried many different guides and combinations, but nothing seems to work.</p>
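<p>The likely cause (hedged, but consistent with the traceback): <code>tkinter.PhotoImage</code> on Tk 8.6 understands PNG and GIF data, not JPEG, and both the generated thumbnails and the <code>blank_movie</code> constant are JPEG (<code>/9j/</code> is the JPEG base64 signature). Either encode the thumbnail as PNG before base64-ing it, or skip base64 entirely and pass the PIL thumbnail to <code>ImageTk.PhotoImage</code>, which the script already imports. A sketch of the PNG route (the function name is mine):</p>

```python
import base64
import io

from PIL import Image, ImageOps


def thumb_to_base64_png(image, size=(100, 100)):
    # Tk 8.6's PhotoImage reads PNG (and GIF) but not JPEG; encoding the
    # thumbnail as PNG avoids "couldn't recognize image data".
    thumb = ImageOps.fit(image, size)
    with io.BytesIO() as buf:
        thumb.save(buf, format="PNG")
        return base64.b64encode(buf.getvalue()).decode("ascii")
```

<p>The <code>blank_movie</code> placeholder would need the same treatment, converted once to PNG base64.</p>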
| <python><tkinter><base64> | 2023-09-27 17:25:57 | 1 | 2,198 | not_a_generic_user |
77,189,479 | 1,482,271 | How to pass "Any" type parameter in SOAP request using zeep in Python | <p>I have a WSDL that uses the "any" type for the core element (Element) in all SOAP operations. Note that I have trimmed this down as it's quite big:</p>
<pre><code><?xml version="1.0" encoding="utf-8"?>
<definitions targetNamespace="urn:xtk:queryDef" xmlns="http://schemas.xmlsoap.org/wsdl/" xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:tns="urn:xtk:queryDef" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/">
  <types>
    <s:schema elementFormDefault="qualified" targetNamespace="urn:xtk:queryDef">
      <s:complexType name="Element">
        <s:sequence>
          <s:any processContents="lax"/>
        </s:sequence>
      </s:complexType>
      <s:element name="ExecuteQuery">
        <s:complexType>
          <s:sequence>
            <s:element maxOccurs="1" minOccurs="1" name="sessiontoken" type="s:string" />
            <s:element maxOccurs="1" minOccurs="1" name="entity" type="tns:Element" />
          </s:sequence>
        </s:complexType>
      </s:element>
      <s:element name="ExecuteQueryResponse">
        <s:complexType>
          <s:sequence>
            <s:element maxOccurs="1" minOccurs="1" name="pdomOutput" type="tns:Element" />
          </s:sequence>
        </s:complexType>
      </s:element>
    </s:schema>
  </types>
  <message name="ExecuteQueryIn">
    <part element="tns:ExecuteQuery" name="parameters" />
  </message>
  <message name="ExecuteQueryOut">
    <part element="tns:ExecuteQueryResponse" name="parameters" />
  </message>
  <portType name="queryDefMethodsSoap">
    <operation name="ExecuteQuery">
      <input message="tns:ExecuteQueryIn" />
      <output message="tns:ExecuteQueryOut" />
    </operation>
  </portType>
  <binding name="queryDefMethodsSoap" type="tns:queryDefMethodsSoap">
    <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http" />
    <operation name="ExecuteQuery">
      <soap:operation soapAction="xtk:queryDef#ExecuteQuery" style="document" />
      <input>
        <soap:body encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" use="literal" />
      </input>
      <output>
        <soap:body encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" use="literal" />
      </output>
    </operation>
  </binding>
  <service name="XtkQueryDef">
    <port binding="tns:queryDefMethodsSoap" name="queryDefMethodsSoap">
      <soap:address location="https://xxxxxxxxxxxxxx/nl/jsp/soaprouter.jsp" />
    </port>
  </service>
</definitions>
</code></pre>
<p>I want to produce this payload using <code>zeep</code> in Python 3:</p>
<pre><code><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:urn="urn:xtk:queryDef">
  <soapenv:Header/>
  <soapenv:Body>
    <urn:ExecuteQuery>
      <urn:sessiontoken>xxxxxxx</urn:sessiontoken>
      <urn:entity>
        <queryDef schema="nms:recipient" operation="select">
          <select>
            <node expr="@email"/>
            <node expr="@lastName+'-'+@firstName"/>
            <node expr="Year(@birthDate)"/>
          </select>
          <orderBy>
            <node expr="@birthDate" sortDesc="true"/>
          </orderBy>
        </queryDef>
      </urn:entity>
    </urn:ExecuteQuery>
  </soapenv:Body>
</soapenv:Envelope>
</code></pre>
<p>But I cannot for the life of me figure out how to manage the "Any" type in the WSDL:</p>
<pre><code><s:complexType name="Element">
  <s:sequence>
    <s:any processContents="lax"/>
  </s:sequence>
</s:complexType>
</code></pre>
<p>That's the type required for the "entity" node in the XML. Everything I've tried results in exceptions from zeep.</p>
<p>Here's what I have so far:</p>
<pre><code># Executes a query and returns the result set
def execute_query(session_token):
    # Load the WSDL locally - not authorised to get from server
    wsdl_url = os.path.abspath("querydef_dev.wsdl")
    history = HistoryPlugin()
    client = Client(wsdl_url, plugins=[history])
    execute_query_type = client.get_element("ns0:ExecuteQuery")
    entity_type = client.get_type("ns0:Element")
    any_entity = xsd.AnyObject(entity_type, entity_type(_value_1={'queryDef': [{'schema': 'recipients'}]}))
    params = execute_query_type(entity=any_entity, sessiontoken=session_token)
    response = client.service.ExecuteQuery(params)

if __name__ == '__main__':
    execute_query('xxxxxxx')
</code></pre>
<p>That code specifically gives this error:</p>
<pre><code>AttributeError: 'dict' object has no attribute 'value'. Did you mean: 'values'?
</code></pre>
<p>I thought I'd made sense of it, using <code>xsd.AnyObject</code> to set things up.</p>
<p>I've tried a number of combinations with <code>get_type</code>, <code>get_element</code>, and calling the service with <code>**params</code> and <code>params</code>. Everything ends with an exception thrown at <code>client.service.ExecuteQuery()</code>.</p>
<p>Any ideas where I'm going wrong?</p>
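<p>One approach that is often suggested for <code>xs:any</code> slots in zeep is to build the inner XML yourself and pass the raw element as the any-content, rather than trying to express it as nested dicts. The sketch below (an assumption about the asker's setup, not a confirmed fix) builds the <code>queryDef</code> payload with the standard library's <code>ElementTree</code> just to show the shape; zeep itself generally expects an lxml element, so with lxml installed the same construction works via <code>lxml.etree.Element</code>:</p>

```python
import xml.etree.ElementTree as ET

def build_query_def():
    """Build the raw <queryDef> payload meant to fill the xs:any slot."""
    query_def = ET.Element("queryDef", schema="nms:recipient", operation="select")
    select = ET.SubElement(query_def, "select")
    for expr in ("@email", "@lastName+'-'+@firstName", "Year(@birthDate)"):
        ET.SubElement(select, "node", expr=expr)
    order_by = ET.SubElement(query_def, "orderBy")
    ET.SubElement(order_by, "node", expr="@birthDate", sortDesc="true")
    return query_def

# Hypothetical zeep usage (with an lxml element built the same way):
#   entity = entity_type(_value_1=lxml_query_def)
#   client.service.ExecuteQuery(sessiontoken=token, entity=entity)
print(ET.tostring(build_query_def(), encoding="unicode"))
```

<p>The idea is that zeep then serializes the element verbatim inside <code>&lt;urn:entity&gt;</code>, which matches the target payload above.</p>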
| <python><xml><soap><wsdl><zeep> | 2023-09-27 16:55:46 | 1 | 335 | mroshaw |
77,189,430 | 8,189,123 | PyMuPDF: Is it possible to get the face value from a combobox widget? | <p>I am trying to extract the chosen value from a combobox widget using the following Python code:</p>
<pre><code># Extract data from combobox widgets
import fitz

fileIN_Master = "Mypdf.pdf"
with fitz.open(fileIN_Master) as doc:
    for page in doc:
        widgets = page.widgets()
        for widget in widgets:
            if widget.field_type_string == 'ComboBox':
                print('field_name:', widget.field_name, 'field_value:', widget.field_value)
</code></pre>
<p>All I can get is the field name and the export value (<code>field_value</code>). I was wondering if it is possible to get the face value as well.</p>
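<p>In PyMuPDF, <code>widget.choice_values</code> holds the option list of a combobox; when the PDF defines separate export values, each entry may be an <code>[export, display]</code> pair, so the face value can be looked up from the stored <code>field_value</code>. A hedged helper along those lines (the pair layout is an assumption about how the PDF was authored, and the widget usage in the comment is hypothetical):</p>

```python
def face_value(field_value, choice_values):
    """Map a combobox export value to its display (face) value.

    choice_values mirrors PyMuPDF's widget.choice_values: a list whose
    entries are either plain strings or [export, display] pairs.
    """
    for choice in choice_values:
        if isinstance(choice, (list, tuple)) and len(choice) == 2:
            export, display = choice
            if export == field_value:
                return display
        elif choice == field_value:
            return choice  # no separate export value: face == export
    return None

# Hypothetical usage inside the widget loop:
#   print(widget.field_name, face_value(widget.field_value, widget.choice_values))
print(face_value("1", [["1", "Yes"], ["0", "No"]]))  # "Yes" for an export/display pair
```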
| <python><pdf><pymupdf> | 2023-09-27 16:48:07 | 1 | 437 | Camilo |
77,189,381 | 2,386,605 | Docker multi-stage build package unrecognized | <p>I have a Dockerfile where I want to run a multi-stage build, so that I can install Python packages from GitHub (listed in <code>requirements.txt</code>) using <code>slim</code> as the <code>base</code> stage, and then copy everything into an <code>alpine</code> image.</p>
<pre><code># pull official base image
FROM python:3.11-slim AS base
# set working directory
WORKDIR /src
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y git
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
FROM python:3.11-alpine
COPY --from=base /src /src
WORKDIR /src
# add app
COPY . .
</code></pre>
<p>However, when I try to run things in docker-compose via</p>
<pre><code>version: '3.8'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.test
    command: uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000
    volumes:
      - ./src:/usr/src/app
    ports:
      - 8000:8000
</code></pre>
<p>I get:</p>
<pre><code>Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "uvicorn": executable file not found in $PATH: unknown
</code></pre>
<p>Do you have an idea how to fix it?</p>
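<p>Two likely causes, sketched here as assumptions rather than a confirmed diagnosis: the virtualenv lives in <code>/opt/venv</code> but only <code>/src</code> is copied to the final stage (so <code>uvicorn</code> never arrives), and binaries built on <code>slim</code> (glibc) generally won't run on <code>alpine</code> (musl) anyway. The usual pattern keeps both stages on the same base image, copies the venv, and puts it on <code>PATH</code>:</p>

```dockerfile
# build stage: same libc family as the runtime stage
FROM python:3.11-slim AS base
ENV PYTHONDONTWRITEBYTECODE=1 PYTHONUNBUFFERED=1
RUN apt-get update && apt-get install -y --no-install-recommends git
WORKDIR /src
COPY requirements.txt .
RUN python -m venv --copies /opt/venv \
    && . /opt/venv/bin/activate \
    && pip install --upgrade pip \
    && pip install -r requirements.txt

# runtime stage: slim again, not alpine (musl vs glibc)
FROM python:3.11-slim
COPY --from=base /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
WORKDIR /src
COPY . .
```

<p>With the venv on <code>PATH</code>, the compose <code>command: uvicorn ...</code> resolves the executable.</p>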
| <python><docker><docker-compose><dockerfile><uvicorn> | 2023-09-27 16:40:31 | 1 | 879 | tobias |
77,189,266 | 11,829,398 | Ways to check LLM output and retry if incorrect but return output if correct? | <p>I'm giving an LLM 37 categories and asking it to label pieces of text with the categories that apply to it (likely multiple for each text). I ask it to output its response as a markdown table.</p>
<p>Problem: the LLM doesn't always return answers for all the categories.</p>
<p>I want to check if all the categories have been returned, if they have, finish. If they haven't, ask it to classify the categories it forgot about (or, if that's too complicated, ask it to do it again).</p>
<p>I've thought about a <code>RouterChain</code> but am not sure how to handle the default chain. <code>SequentialChain</code> also confuses me since you cannot account for different actions based on Yes/No answer from 'does this contain all the classes?'</p>
<pre class="lang-py prettyprint-override"><code>from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
chat = ChatOpenAI(temperature=0, model_name='gpt-4')
survey_response = 'example response'
classes = ['class 1', 'class 2', 'class 3', ...]
template_string = """Here is a response from a survey question. I am performing
thematic analysis on it and wish to classify it into one of a list of pre-defined
classes.
Response:
####{response}####
Please output whether the response falls under any of these categories.
Classes:
####{classes}####
Output should be formatted as a table with 4 columns: 1) class, 2) is_member
1/0 depending on if the response is a member of the class), 3) confidence_score
(a confidence interval for whether the response does fit into that class,
use low (0-20%), medium (50%-80%) and high (80%+), 4) excerpt (excerpt from the
response that supports the classification).
"""
prompt_template = ChatPromptTemplate.from_template(template_string)
input_message = prompt_template.format_messages(
    response=survey_response,
    classes=classes
)
llm_response = chat(input_message)
</code></pre>
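<p>Rather than wiring the check through a <code>RouterChain</code>, a plain retry loop around the call is often simpler: parse the returned markdown table, compute which classes are missing, and re-prompt only for those. A minimal sketch, where <code>ask_llm</code> is a hypothetical callable standing in for the <code>chat(input_message)</code> call above:</p>

```python
def parse_table_classes(markdown_table):
    """Extract the first-column values (class names) from a markdown table."""
    found = set()
    for line in markdown_table.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        # skip header and |---| separator rows
        if not cells or cells[0].lower() == "class" or set(cells[0]) <= {"-", ":", " "}:
            continue
        if cells[0]:
            found.add(cells[0])
    return found

def classify_with_retry(ask_llm, classes, max_retries=3):
    """Call ask_llm(missing_classes) until every class appears in the output."""
    tables = []
    missing = set(classes)
    for _ in range(max_retries):
        table = ask_llm(sorted(missing))
        tables.append(table)
        missing -= parse_table_classes(table)
        if not missing:
            break
    return "\n".join(tables), missing
```

<p>On each pass the prompt only needs to list <code>sorted(missing)</code>, which keeps retries cheap compared with re-classifying all 37 categories.</p>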
| <python><langchain><large-language-model><py-langchain> | 2023-09-27 16:20:41 | 1 | 1,438 | codeananda |
77,189,224 | 13,717,851 | Modify Streamlit echarts inside loop without adding new chart | <p>I want to update my echart with dynamically fetched data, but when I run the code below, it adds a new chart to the page.
This is unlike <code>t.text(markdown)</code>, which updates the existing text in place; that in-place behavior is what I want for my echarts.</p>
<p>I am using python3.</p>
<p>Is there any argument or API to overwrite the existing echart instead of adding a new one?</p>
<pre><code>t = st.empty()
if url:
    while True:
        time.sleep(1)
        markdown, option = render(url)
        t.text(markdown)
        st_echarts(
            option, width="450px", height="350px", key=str(datetime.datetime.now())
        )
</code></pre>
| <python><frontend><streamlit><echarts> | 2023-09-27 16:14:40 | 2 | 876 | Sayan Dey |
77,189,121 | 4,115,123 | Pip install subprocess to install build dependencies did not run successfully | <p>I've tried a few other questions here tied to updating setuptools first, but no dice.</p>
<p>I have a machine that is ultimately going to be air-gapped, and I'm trying to get the installation process down. As part of that, my current procedure involves going to an identical working machine and using <code>pip freeze</code> to generate a <em>requirements.txt</em> file.</p>
<p>On the target machine, I then add the <em>requirements.txt</em> file. I turned the Internet connection on temporarily for testing and did a <code>pip download</code> on <em>requirements.txt</em> to pull the packages. I then shut the Internet connection off and tried <code>pip install --no-index --find-links=. requirements.txt</code></p>
<p>It installs the first few packages fine, but then it hits some and has</p>
<pre class="lang-none prettyprint-override"><code>Processing ./ansible-vault-2.1.0.tar.gz (from -r requirements.txt (line 3))
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [4 lines of output]
Looking in links: .
Processing ./setuptools-68.2.2-py3-none-any.whl
ERROR: Could not find a version that satisfies the requirement wheel (from versions: none)
ERROR: No matching distribution found for wheel
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
</code></pre>
<p>This happens for a few different packages, but I cannot find anything that would cause it. Can someone point me in the right direction? The few questions I've tried on Stack Overflow and the few Google searches I've run almost all point to making sure the packages are for the right Python version (they must be; I pulled them with the same version I'm trying to install with) and to updating setuptools (I did).</p>
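<p>A likely explanation (stated as an assumption, since the full transcript isn't shown): sdists like <code>ansible-vault</code> are built in an isolated environment, and that environment needs <code>wheel</code>/<code>setuptools</code> to be downloadable offline too, which a plain <code>pip download -r requirements.txt</code> does not guarantee. A sketch of the two usual remedies, with the actual network steps shown as comments:</p>

```shell
# Connected machine: mirror the build-time packages as well
#   pip download -r requirements.txt -d ./pkgs
#   pip download wheel setuptools -d ./pkgs     # the piece the error is missing
# Air-gapped machine:
#   pip install --no-index --find-links=./pkgs -r requirements.txt
# or skip the isolated build env (wheel/setuptools preinstalled there):
#   pip install --no-index --find-links=./pkgs --no-build-isolation -r requirements.txt

# Sanity-check that this pip version supports the relevant flags:
python3 -m pip install --help | grep -q no-build-isolation
python3 -m pip download --help | grep -q no-index
```

<p>Downloading wheels instead of sdists where possible (e.g. <code>pip download --only-binary :all:</code>, when the platform allows it) sidesteps the build step entirely.</p>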
| <python><pip><subprocess> | 2023-09-27 15:58:31 | 1 | 1,057 | Jibril |
77,189,113 | 468,455 | Setting up a Python environment to point to a modules directory on a Mac | <p>I'm trying to set up a local development environment for Python. We use BitBucket for source control of Python modules we've developed for our company. On my machine I have this folder/directory set up:</p>
<pre><code>~/xxxxxxx/Development/Git/xxxxxx/modules/
</code></pre>
<p>This serves as my local repo for the BitBucket repo.</p>
<p>In my .zshrc file I pointed $PYTHONPATH to this directory:</p>
<pre><code>export PYTHONPATH="~/Desktop/xxxxxx/Development/Git/xxxxxxx/modules/"
</code></pre>
<p>In Terminal, when I run <code>echo $PYTHONPATH</code>, I get what I expect:</p>
<pre><code> xxxxxxxxxxx@Steves-MacBook-Pro ~ % echo $PYTHONPATH
~/Desktop/xxxxxx/Development/Git/xxxxxx/modules/
</code></pre>
<p>I then wrote this short script:</p>
<pre><code>import os
import datetime
import json
from jira import JIRA
from jira.client import ResultList
from jira.resources import Issue
import importlib
# import modules from relative paths
colors = importlib.import_module('modules.colors')
mondayManager = importlib.import_module('modules.mondayManager')
jiraManager = importlib.import_module('modules.jiraManager')
appManager = importlib.import_module('modules.appManager')
###############################
# APP STARTS HERE #
###############################
if __name__ == '__main__':
    print("What's up dude?")
</code></pre>
<p>I get an error on the first import:</p>
<pre><code>ModuleNotFoundError: No module named 'modules'
</code></pre>
<p>I am obviously doing something wrong or misunderstanding how this is supposed to function; any help would be appreciated.</p>
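<p>A likely culprit, offered as a guess from the output shown: inside double quotes the shell does not expand <code>~</code>, so <code>PYTHONPATH</code> ends up containing a literal <code>~/...</code> path that Python will never resolve (the <code>echo</code> printing <code>~/Desktop/...</code> verbatim is the tell). On top of that, to import <code>modules.colors</code>, <code>PYTHONPATH</code> has to point at the directory <em>containing</em> <code>modules</code>, not at <code>modules</code> itself. A small demo of the quoting behavior (the export line uses a hypothetical path standing in for the redacted one):</p>

```shell
# In double quotes, ~ is NOT expanded: Python receives a literal "~/..." path.
p="~/Desktop/project/modules"
test "$p" = "~/Desktop/project/modules" && echo "tilde kept literal"

# Use $HOME instead, and point at the parent of the package, e.g.:
#   export PYTHONPATH="$HOME/Desktop/xxxxxx/Development/Git/xxxxxxx"
q="$HOME/Desktop/project"
case "$q" in
  "~"*) echo "unexpected literal tilde" ;;
  *)    echo "HOME expanded: $q" ;;
esac
```

<p>With <code>$HOME</code> and the parent directory on <code>PYTHONPATH</code>, plain <code>import modules.colors</code> should work without <code>importlib.import_module</code>.</p>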
| <python><pythonpath><zshrc> | 2023-09-27 15:57:01 | 0 | 6,396 | PruitIgoe |
77,188,750 | 1,422,096 | How to store metadata into a PNG or JPG with cv2, and restore it later? | <p>How to save a string metadata to a PNG or JPG file written with <code>cv2</code>?</p>
<p>It could be EXIF or any other format, as long as we can retrieve it later, like in the example below.</p>
<p>Note: linked but not duplicate: <a href="https://stackoverflow.com/questions/9542359/does-png-contain-exif-data-like-jpg">Does PNG contain EXIF data like JPG?</a>, <a href="https://stackoverflow.com/questions/56699941/how-can-i-insert-exif-other-metadata-into-a-jpeg-stored-in-a-memory-buffer">How can I insert EXIF/other metadata into a JPEG stored in a memory buffer?</a></p>
<p>Example:</p>
<pre><code>import cv2, numpy as np

x = np.array([[[0, 0, 255], [0, 255, 0], [255, 0, 0]],
              [[0, 0, 255], [0, 255, 0], [255, 0, 0]],
              [[0, 0, 255], [0, 255, 0], [255, 0, 0]]])
metadata = "gain: 0.12345" # how to save this to the files?
cv2.imwrite("x.png", x)
cv2.imwrite("x.jpg", x)
y = cv2.imread("x.png")
print(y)
z = cv2.imread("x.jpg")
print(z)
# how to retrieve the metadata when opening the PNG or JPG files?
</code></pre>
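<p>OpenCV's <code>imwrite</code> has no metadata hook, so a common workaround is to write the pixels however you like and attach the metadata with Pillow afterwards: PNG supports arbitrary key/value <code>tEXt</code> chunks. A sketch assuming Pillow is installed (Pillow is used for both save and load here to keep the example self-contained; for a cv2 BGR array you would convert first, e.g. <code>Image.fromarray(cv2.cvtColor(x.astype("uint8"), cv2.COLOR_BGR2RGB))</code>):</p>

```python
import os
import tempfile
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_png_with_metadata(img, path, metadata):
    """Save a PIL image as PNG with key/value pairs embedded as tEXt chunks."""
    info = PngInfo()
    for key, value in metadata.items():
        info.add_text(key, value)
    img.save(path, pnginfo=info)

def load_png_metadata(path):
    """Read back the tEXt metadata from a PNG as a dict."""
    with Image.open(path) as img:
        return dict(img.text)

path = os.path.join(tempfile.mkdtemp(), "x.png")
save_png_with_metadata(Image.new("RGB", (3, 3), (255, 0, 0)), path, {"gain": "0.12345"})
print(load_png_metadata(path))
```

<p>JPEG has no text chunks, so there the usual route is EXIF (for example via the <code>piexif</code> package), writing the string into a field such as <code>UserComment</code>.</p>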
| <python><opencv><png><jpeg><exif> | 2023-09-27 15:10:35 | 0 | 47,388 | Basj |