| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,076,286
| 610,569
|
Converting JSON/dict to flatten string with indicator tokens
|
<p>Given an input like:</p>
<pre><code>{'example_id': 0,
'query': ' revent 80 cfm',
'query_id': 0,
'product_id': 'B000MOO21W',
'product_locale': 'us',
'esci_label': 'I',
'small_version': 0,
'large_version': 1,
'split': 'train',
'product_title': 'Panasonic FV-20VQ3 WhisperCeiling 190 CFM Ceiling Mounted Fan',
'product_description': None,
'product_bullet_point': 'WhisperCeiling fans feature a totally enclosed condenser motor and a double-tapered, dolphin-shaped bladed blower wheel to quietly move air\nDesigned to give you continuous, trouble-free operation for many years thanks in part to its high-quality components and permanently lubricated motors which wear at a slower pace\nDetachable adaptors, firmly secured duct ends, adjustable mounting brackets (up to 26-in), fan/motor units that detach easily from the housing and uncomplicated wiring all lend themselves to user-friendly installation\nThis Panasonic fan has a built-in damper to prevent backdraft, which helps to prevent outside air from coming through the fan\n0.35 amp',
'product_brand': 'Panasonic',
'product_color': 'White'}
</code></pre>
<p>The goal is to output something that looks like:</p>
<pre><code>Panasonic FV-20VQ3 WhisperCeiling 190 CFM Ceiling Mounted Fan [TITLE] Panasonic [BRAND] White [COLOR] WhisperCeiling fans feature a totally enclosed condenser motor and a double-tapered, dolphin-shaped bladed blower wheel to quietly move air [SEP] Designed to give you continuous, trouble-free operation for many years thanks in part to its high-quality components and permanently lubricated motors which wear at a slower pace [SEP] Detachable adaptors, firmly secured duct ends, adjustable mounting brackets (up to 26-in), fan/motor units that detach easily from the housing and uncomplicated wiring all lend themselves to user-friendly installation [SEP] This Panasonic fan has a built-in damper to prevent backdraft, which helps to prevent outside air from coming through the fan [SEP] 0.35 amp [BULLETPOINT]
</code></pre>
<p>There are a few operations involved in generating the desired output, following these rules:</p>
<ul>
<li>If a value in the dictionary is None, don't add its content to the output string</li>
<li>If a value contains newline characters <code>\n</code>, substitute them with <code>[SEP]</code> tokens</li>
<li>Concatenate the strings in the order the user specified, e.g. the example above follows the order <code>["product_title", "product_brand", "product_color", "product_bullet_point", "product_description"]</code></li>
</ul>
<p>I've tried the following, which kind of works, but the function I've written looks a little too hardcoded in the way it looks through the wanted keys and concatenates and manipulates the strings.</p>
<pre><code>
item1 = {'example_id': 0,
'query': ' revent 80 cfm',
'query_id': 0,
'product_id': 'B000MOO21W',
'product_locale': 'us',
'esci_label': 'I',
'small_version': 0,
'large_version': 1,
'split': 'train',
'product_title': 'Panasonic FV-20VQ3 WhisperCeiling 190 CFM Ceiling Mounted Fan',
'product_description': None,
'product_bullet_point': 'WhisperCeiling fans feature a totally enclosed condenser motor and a double-tapered, dolphin-shaped bladed blower wheel to quietly move air\nDesigned to give you continuous, trouble-free operation for many years thanks in part to its high-quality components and permanently lubricated motors which wear at a slower pace\nDetachable adaptors, firmly secured duct ends, adjustable mounting brackets (up to 26-in), fan/motor units that detach easily from the housing and uncomplicated wiring all lend themselves to user-friendly installation\nThis Panasonic fan has a built-in damper to prevent backdraft, which helps to prevent outside air from coming through the fan\n0.35 amp',
'product_brand': 'Panasonic',
'product_color': 'White'}
item2 = {'example_id': 198,
'query': '# 2 pencils not sharpened',
'query_id': 6,
'product_id': 'B08KXRY4DG',
'product_locale': 'us',
'esci_label': 'S',
'small_version': 1,
'large_version': 1,
'split': 'train',
'product_title': 'AHXML#2 HB Wood Cased Graphite Pencils, Pre-Sharpened with Free Erasers, Smooth write for Exams, School, Office, Drawing and Sketching, Pack of 48',
'product_description': "<b>AHXML#2 HB Wood Cased Graphite Pencils, Pack of 48</b><br><br>Perfect for Beginners experienced graphic designers and professionals, kids Ideal for art supplies, drawing supplies, sketchbook, sketch pad, shading pencil, artist pencil, school supplies. <br><br><b>Package Includes</b><br>- 48 x Sketching Pencil<br> - 1 x Paper Boxed packaging<br><br>Our high quality, hexagonal shape is super lightweight and textured, producing smooth marks that erase well, and do not break off when you're drawing.<br><br><b>If you have any question or suggestion during using, please feel free to contact us.</b>",
'product_bullet_point': '#2 HB yellow, wood-cased pencils:Box of 48 count. Made from high quality real poplar wood and 100% genuine graphite pencil core. These No 2 pencils come with 100% Non-Toxic latex free pink top erasers.\nPRE-SHARPENED & EASY SHARPENING: All the 48 count pencils are pre-sharpened, ready to use when get it, saving your time of preparing.\nThese writing instruments are hexagonal in shape to ensure a comfortable grip when writing, scribbling, or doodling.\nThey are widely used in daily writhing, sketching, examination, marking, and more, especially for kids and teen writing in classroom and home.#2 HB wood-cased yellow pencils in bulk are ideal choice for school, office and home to maintain daily pencil consumption.\nCustomer service:If you are not satisfied with our product or have any questions, please feel free to contact us.',
'product_brand': 'AHXML',
'product_color': None}
def product2str(row, keys):
    key2token = {'product_title': '[TITLE]',
                 'product_brand': '[BRAND]',
                 'product_color': '[COLOR]',
                 'product_bullet_point': '[BULLETPOINT]',
                 'product_description': '[DESCRIPTION]'}
    output = ""
    for k in keys:
        content = row[k]
        if content:
            output += content.replace('\n', ' [SEP] ') + f" {key2token[k]} "
    return output.strip()

product2str(item2, keys=['product_title', 'product_brand', 'product_color',
                         'product_bullet_point', 'product_description'])
</code></pre>
<p>Q: Is there some sort of native CPython JSON-to-str flattening function or recipe that can achieve results similar to the <code>product2str</code> function?</p>
<p>Q: Or is there already some function/pipeline in <code>tokenizers</code> library <a href="https://pypi.org/project/tokenizers/" rel="nofollow noreferrer">https://pypi.org/project/tokenizers/</a> that can flatten a JSON/dict into tokens?</p>
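<p>There is no stdlib one-liner for this, but the hardcoding can be reduced by driving the function from a single (key, token) sequence, so adding a field only means extending that sequence. A minimal sketch (names mirror the question's example, shortened here for illustration):</p>

```python
def product2str(row, key2token):
    """Flatten selected dict fields into one string with indicator tokens."""
    parts = []
    for key, token in key2token:
        content = row.get(key)
        if content:  # skips None and empty strings
            parts.append(content.replace('\n', ' [SEP] ') + f' {token}')
    return ' '.join(parts)

item = {'product_title': 'Fan', 'product_brand': 'Panasonic',
        'product_color': None, 'product_bullet_point': 'line one\nline two'}
order = [('product_title', '[TITLE]'), ('product_brand', '[BRAND]'),
         ('product_color', '[COLOR]'), ('product_bullet_point', '[BULLETPOINT]')]
print(product2str(item, order))
# Fan [TITLE] Panasonic [BRAND] line one [SEP] line two [BULLETPOINT]
```

<p>The ordering and the token mapping now live in one place, so reordering the output only means reordering the list.</p>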
|
<python><json><tokenize><json-flattener>
|
2023-04-21 19:40:41
| 2
| 123,325
|
alvas
|
76,076,234
| 15,763,991
|
How to Remove old Slash Commands from a Discord Bot?
|
<p>I recently used a Mee6 Premium Bot for my Discord server, but now I want to run my own bot with new slash commands (running on the Bot that was the Mee6 Bot). However, the old slash commands from Mee6 are still registered in my bot. When I type "/", all of my commands show up along with the commands from Mee6. How can I get rid of the Mee6 commands? I tried using my sync command to delete the old commands, but it didn't work. Here's my code:</p>
<pre><code>@commands.command()
async def sync(self, ctx) -> None:
    fmt = await ctx.bot.tree.sync()
    await ctx.send(f"{len(fmt)} commands synced.")

def on_ready(self):
    cmdgroup = groupCommands(
        name="demo", description="This is just a Demo")
    self.bot.tree.add_command(cmdgroup)
</code></pre>
<p>Can someone please help me fix my sync command or suggest a solution to remove the Mee6 commands?</p>
<p>Just using another bot isn't an option, since the bot is already in some other servers.</p>
<p>Just to clarify, with Mee6 Premium, you can use your own bot as the Mee6 bot. So the bot that I am currently using was previously running as the Mee6 bot.</p>
|
<python><discord><discord.py>
|
2023-04-21 19:30:06
| 1
| 418
|
EntchenEric
|
76,076,225
| 814,438
|
Release Python Thread Lock or Futex Using GDB
|
<p>I would like to find a way to release a Python thread <code>Lock</code> using GDB on Linux. I am using Ubuntu 18.04, Python 3.6.9, and gdb 8.1.1. I am also willing to use the <code>gdb</code> package in Python.</p>
<p>This is for personal research and not intended for a production system.</p>
<p>Suppose I have this Python script named "m4.py", which produces a deadlock:</p>
<pre><code>import threading
import time
import os

lock1 = threading.Lock()
lock2 = threading.Lock()

def func1(name):
    print('Thread', name, 'before acquire lock1')
    with lock1:
        print('Thread', name, 'acquired lock1')
        time.sleep(0.3)
        print('Thread', name, 'before acquire lock2')
        with lock2:
            print('Thread', name, 'DEADLOCK: This line will never run.')

def func2(name):
    print('Thread', name, 'before acquire lock2')
    with lock2:
        print('Thread', name, 'acquired lock2')
        time.sleep(0.3)
        print('Thread', name, 'before acquire lock1')
        with lock1:
            print('Thread', name, 'DEADLOCK: This line will never run.')

if __name__ == '__main__':
    print(os.getpid())
    thread1 = threading.Thread(target=func1, args=['thread1', ])
    thread2 = threading.Thread(target=func2, args=['thread2', ])
    thread1.start()
    thread2.start()
</code></pre>
<p>My goal is to use gdb to release either lock1 or lock2 or both, so that the "DEADLOCK: This line will never run" message is displayed.</p>
<p>I think the first obstacle is that the program reaches the deadlock almost immediately, so there is no time to set a breakpoint in gdb. Is a breakpoint necessary?</p>
<p>Suppose I attach gdb by PID like this:</p>
<pre><code>sudo gdb -p 121408
</code></pre>
<p>I can see that all threads are blocked with a <code>futex</code>.</p>
<pre><code>(gdb) info threads
Id Target Id Frame
* 1 Thread 0x7f56b324f740 (LWP 121408) "python3" 0x00007f56b2a377c6 in futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x7f56ac000e70) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
2 Thread 0x7f56b1b8d700 (LWP 121409) "python3" 0x00007f56b2a377c6 in futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x1bc3fc0) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
3 Thread 0x7f56b138c700 (LWP 121410) "python3" 0x00007f56b2a377c6 in futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x1bc3f90) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
</code></pre>
<p>The top five frames of the backtrace show the C function calls.</p>
<pre><code>(gdb) thread 1
[Switching to thread 1 (Thread 0x7f56b324f740 (LWP 121408))]
#0 0x00007f56b2a377c6 in futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x7f56ac000e70) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
205 in ../sysdeps/unix/sysv/linux/futex-internal.h
(gdb) bt
#0 0x00007f56b2a377c6 in futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x7f56ac000e70) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1 do_futex_wait (sem=sem@entry=0x7f56ac000e70, abstime=0x0) at sem_waitcommon.c:111
#2 0x00007f56b2a378b8 in __new_sem_wait_slow (sem=0x7f56ac000e70, abstime=0x0) at sem_waitcommon.c:181
#3 0x00000000005aac15 in PyThread_acquire_lock_timed () at ../Python/thread_pthread.h:386
#4 0x00000000004d0ade in acquire_timed (timeout=<optimized out>, lock=0x7f56ac000e70) at ../Modules/_threadmodule.c:68
#5 lock_PyThread_acquire_lock () at ../Modules/_threadmodule.c:151
#6 0x000000000050a335 in _PyCFunction_FastCallDict (kwargs=<optimized out>, nargs=<optimized out>, args=<optimized out>, func_obj=<built-in method acquire of _thread.lock object at remote 0x7f56b1c289e0>)
at ../Objects/methodobject.c:231
#7 _PyCFunction_FastCallKeywords (kwnames=<optimized out>, nargs=<optimized out>, stack=<optimized out>, func=<optimized out>) at ../Objects/methodobject.c:294
#8 call_function.lto_priv () at ../Python/ceval.c:4851
</code></pre>
<p>Here are some of the things I have tried:</p>
<h2>Return</h2>
<p>"When you use return, GDB discards the selected stack frame (and all frames within it)". <a href="https://ftp.gnu.org/old-gnu/Manuals/gdb/html_node/gdb_114.html" rel="nofollow noreferrer">GDB</a></p>
<pre><code>(gdb) return
Can not force return from an inlined function.
</code></pre>
<h2>Access Python <code>release</code> function.</h2>
<p>In this example, Frame 7 is the last frame where <code>py-locals</code> works. I tried accessing the <code>release()</code> method of <code>Lock</code>. As far as I know, it is not possible to invoke a method that is a member of a Python object.</p>
<pre><code>(gdb) frame 7
#7 _PyCFunction_FastCallKeywords (kwnames=<optimized out>, nargs=<optimized out>, stack=<optimized out>, func=<optimized out>) at ../Objects/methodobject.c:294
294 in ../Objects/methodobject.c
(gdb) print lock
$7 = 0
(gdb) print lock.release
Attempt to extract a component of a value that is not a structure.
</code></pre>
<h2>Interpret <code>Lock</code> as <code>PyThread_type_lock</code></h2>
<p>I am not sure that interpreting the object as an opaque pointer is useful.</p>
<pre><code>(gdb) print *((PyThread_type_lock *) 0x7f56ac000e70)
$8 = (PyThread_type_lock) 0x100000000
</code></pre>
<h2>Call <code>void PyThread_release_lock(PyThread_type_lock);</code></h2>
<p>This attempt produces a segmentation fault.</p>
<pre><code>(gdb) print (void)PyThread_release_lock (lock)
Thread 1 "python3" received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
The program being debugged was signaled while in a function called from GDB.
GDB remains in the frame where the signal was received.
To change this behavior use "set unwindonsignal on".
Evaluation of the expression containing the function
(PyThread_release_lock) will be abandoned.
When the function is done executing, GDB will silently stop.
</code></pre>
<h2>Make System Call</h2>
<p>I reran the script because the SIGSEGV killed it. I then adapted code from <a href="https://gist.github.com/openglfreak/715d5ab5902497378f1996061dbbf8ec" rel="nofollow noreferrer">this Gist</a> to make a <code>syscall</code> using the <code>ctypes</code> library in a Python script. In part, the code is this:</p>
<pre><code>import ctypes

def _is_ctypes_obj_pointer(obj):
    return hasattr(obj, '_type_') and hasattr(obj, 'contents')

def _coerce_to_pointer(obj):
    print("obj", obj)
    if obj is None:
        return None
    if _is_ctypes_obj(obj):
        if _is_ctypes_obj_pointer(obj):
            return obj
        return ctypes.pointer(obj)
    return (obj[0].__class__ * len(obj))(*obj)

def _get_futex_syscall():
    futex_syscall = ctypes.CDLL(None, use_errno=True).syscall
    futex_syscall.argtypes = (ctypes.c_long, ctypes.c_void_p, ctypes.c_int,
                              ctypes.c_int, ctypes.POINTER(timespec),
                              ctypes.c_void_p, ctypes.c_int)
    futex_syscall.restype = ctypes.c_int
    futex_syscall_nr = ctypes.c_long(202)

    # pylint: disable=too-many-arguments
    def _futex_syscall(uaddr, futex_op, val, timeout, uaddr2, val3):
        uaddr = ctypes.c_int(uaddr)
        error = futex_syscall(
            futex_syscall_nr,
            _coerce_to_pointer(uaddr),
            ctypes.c_int(futex_op),
            ctypes.c_int(val),
            _coerce_to_pointer(timeout or timespec()),
            _coerce_to_pointer(ctypes.c_int(uaddr2)),
            ctypes.c_int(val3)
        )
        res2 = error, (ctypes.get_errno() if error == -1 else 0)
        print(res2)

# _futex_syscall.__doc__ = getattr(futex, '__doc__', None)
res = _futex_syscall(0x7f5ca8000e70, 1, 99, 0, 0, 0)
print(res)
</code></pre>
<p>I do not know whether it is possible to unlock a <code>futex</code> with GDB. If it is, I would like to understand how.</p>
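<p>To illustrate just the futex syscall mechanics that the Gist wraps, here is a minimal, self-contained FUTEX_WAKE call against a futex word we own (syscall number 202 is an x86-64 Linux assumption; the address in the real scenario would be the lock's futex word from gdb, not a fresh word like this one):</p>

```python
import ctypes

SYS_futex = 202   # x86-64 syscall number (arch-specific assumption)
FUTEX_WAKE = 1    # wake up to `val` waiters on the futex word

libc = ctypes.CDLL(None, use_errno=True)
word = ctypes.c_int(0)  # stand-in for the lock's futex/semaphore address

# same shape as the Gist's _futex_syscall(addr, FUTEX_WAKE, 99, 0, 0, 0);
# the kernel returns the number of waiters actually woken
woken = libc.syscall(SYS_futex, ctypes.byref(word), FUTEX_WAKE, 99,
                     None, None, 0)
print(woken)  # expected 0 here, since nothing waits on this word
```

<p>With no waiters the call returns 0, which is a cheap way to confirm the syscall plumbing before pointing it at the real lock address.</p>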
|
<python><c><multithreading><gdb><futex>
|
2023-04-21 19:27:33
| 1
| 1,199
|
Jacob Quisenberry
|
76,076,212
| 16,267,793
|
External company Sharepoint Excel Access
|
<p>I would like to know if there is any way for me to access data from an Excel file in a directory whose root folder I do not have access to. It is a different directory than mine, belonging to another company.</p>
<p>The only access I can get is through the browser, entering my email address and waiting for the verification code. <strong>Without any password.</strong></p>
<p>I would like to do this with python automatically.</p>
<p>The folder with the file was shared with me.</p>
<p><strong>This is the address of file:</strong></p>
<p><em>https:// [Another Company] .sharepoint.com/:x:/r/personal/ [name of person] /_layouts/15/Doc.aspx?sourcedoc=%7B17AB449A-DCF4-471F-8129-11798DCDF24B%7D&file=2%20-%20Consulta%20Popular%202023-abr_2023.xlsx&action=default&mobileredirect=true</em></p>
<p><strong>This is the address of the folder where the file is:</strong></p>
<p><em>https:// [Another Company] .sharepoint.com/personal/ [name of person] /_layouts/15/onedrive.aspx?id=%2Fpersonal%2Fcarolina%2Dgyenes%5Fspgg%5Frs%5Fgov%5Fbr%2FDocuments%2FConsulta%20Popular%20%2D%20DOF%2DDARP%2DSEFAZ&ga=1</em></p>
<p>I don't have any code attempt to demonstrate, because all the posts I've seen use a password for authentication.</p>
<p>Any help will be welcome.</p>
|
<python><excel><pandas><sharepoint-online>
|
2023-04-21 19:24:20
| 0
| 1,267
|
Wilian
|
76,076,176
| 19,980,284
|
Center formatted y-tick labels in bi-directional bar chart matplotlib
|
<p>I have generated this bar chart:
<img src="https://i.sstatic.net/m0Zf3.png" alt="" /></p>
<p>With the help of @Ken Myers <a href="https://stackoverflow.com/a/76075788/19980284">here</a>. My question is, how to center the formatted y-tick labels like <code>volume</code>, <code>MAP</code>, etc. that act as sort of headers for the variable levels that are plotted. I also suppose I don't really need the y-ticks on each graph, especially not for the bolded header variable names.</p>
<p>Alternatively if I could bring the right graph more to the left, that would help too.</p>
<p>I'm thinking if I can add padding to every third y-axis label, I could move them to the middle, but I'm not sure how to specify that.</p>
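<p>Not the exact figure from the question, but a minimal sketch of the padding idea: all tick labels can be pushed away from the axis with <code>tick_params(pad=...)</code>, and individual "header" labels restyled via <code>get_yticklabels()</code> (the label names and padding value here are illustrative):</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for the sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.barh([0, 1, 2], [3, 1, 2])
ax.set_yticks([0, 1, 2])
ax.set_yticklabels(['1L', 'volume', '2L'])

ax.tick_params(axis='y', pad=40)  # push every label left of the axis
for label in ax.get_yticklabels():
    if label.get_text() == 'volume':       # treat this one as a header
        label.set_fontweight('bold')
        label.set_horizontalalignment('center')
```

<p>The same loop can match "every third label" by index instead of by text, which is closer to the layout in the question.</p>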
|
<python><pandas><matplotlib><bar-chart>
|
2023-04-21 19:18:39
| 1
| 671
|
hulio_entredas
|
76,076,133
| 12,065,150
|
Wait for loading of `grecaptcha` variable in selenium python
|
<p>I am using selenium to automate submitting of a form on a website. Occasionally, I get a failed form submit due to this error.</p>
<pre><code>0.62d2e676.js:1 Uncaught ReferenceError: grecaptcha is not defined
at Object.reply_chat_form_submit (6.b0af483c.js:1:22012)
...
</code></pre>
<p>This variable <code>grecaptcha</code> is usually defined by the Google reCAPTCHA library, which is used to prevent automated form submissions. So, I want something like <code>WebDriverWait</code> and <code>expected_conditions</code> that checks whether that variable has been loaded in the webpage or not. I'm sure anyone who's familiar with the structure of the reCAPTCHA library in web apps would be able to tell where to expect this variable.</p>
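<p><code>WebDriverWait(...).until(...)</code> accepts any callable that takes the driver, so one sketch (untested against a real reCAPTCHA page) is to poll <code>typeof grecaptcha</code> via <code>execute_script</code>. The condition is shown against a stand-in driver so it can run without a browser:</p>

```python
def grecaptcha_loaded(driver):
    # truthy once the reCAPTCHA script has defined the global
    return driver.execute_script("return typeof grecaptcha !== 'undefined';")

# With selenium installed, the wait would look like this (assumption):
#   from selenium.webdriver.support.ui import WebDriverWait
#   WebDriverWait(driver, timeout=15).until(grecaptcha_loaded)

class FakeDriver:
    """Stand-in driver so the condition can be exercised offline."""
    def execute_script(self, script):
        return True  # pretend the page has loaded grecaptcha

print(grecaptcha_loaded(FakeDriver()))  # True
```

<p>Because the custom callable is just a function, the same pattern extends to waiting on <code>grecaptcha.render</code> or any other page global.</p>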
|
<javascript><python><selenium-webdriver><recaptcha><captcha>
|
2023-04-21 19:11:32
| 1
| 4,680
|
Ali Sajjad Rizavi
|
76,076,082
| 3,197,412
|
Overriding of current TracerProvider is not allowed
|
<p>I'm trying to write an OpenTelemetry provider for python-dependency-injector, but for some reason my traces are not sent and I get an error like <code>Failed to export batch. Status code: StatusCode.UNAVAILABLE</code>.</p>
<p>My provider:</p>
<pre class="lang-py prettyprint-override"><code>from dependency_injector import providers
from opentelemetry import trace
from opentelemetry.exporter.jaeger.proto import grpc
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

class OpenTelemetryProvider(providers.Provider):
    def _provide(self, *args, **kwargs):
        trace.set_tracer_provider(TracerProvider())
        tracer = trace.get_tracer(__name__)

        jaeger_exporter = grpc.JaegerExporter(
            collector_endpoint="localhost:14250",
            insecure=True,
        )
        # create a BatchSpanProcessor and add the exporter to it
        span_processor = BatchSpanProcessor(jaeger_exporter)
        # add to the tracer factory
        trace.get_tracer_provider().add_span_processor(span_processor)

        with tracer.start_as_current_span("foo") as span:
            print(span._context.span_id)
            print("Hello world!")

        return tracer
</code></pre>
|
<python><dependency-injection><open-telemetry>
|
2023-04-21 19:04:54
| 1
| 1,012
|
batazor
|
76,076,064
| 5,942,779
|
Pandas plot with Plotly backend and Custom Hover template
|
<p>I am trying to include additional data in the hover template, similar to this <a href="https://stackoverflow.com/questions/69278251/plotly-including-additional-data-in-hovertemplate">Plotly: Including additional data in hovertemplate</a>, but using Plotly plotting backend on Pandas.</p>
<p>I have two 2D Pandas data frames: the first is the population by country and by year, and the second is the percent population growth by country and by year, generated with the following code.</p>
<pre><code>import pandas as pd
pd.options.plotting.backend = 'plotly'
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/gapminderDataFiveYear.csv')
df_pop = df.pivot_table(index='year', columns='country', values='pop').copy()
df_pct_growth = df_pop.pct_change().fillna(0) * 100
</code></pre>
<p><strong>df_pop</strong>:
<a href="https://i.sstatic.net/nYu20.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nYu20.png" alt="enter image description here" /></a></p>
<p><strong>df_pct_growth</strong>:
<a href="https://i.sstatic.net/Q77KT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q77KT.png" alt="enter image description here" /></a></p>
<hr>
<p>I visualized the population for each country as follows. As you can see, the hover text only has the <strong>country</strong>, <strong>year</strong>, and <strong>population</strong>. I want to add the <strong>% growth</strong> from the <strong>df_pct_growth</strong> into the hover text. Does anyone know how to do that?</p>
<pre><code>df_pop.plot(labels=dict(value='Population'))
</code></pre>
<p><a href="https://i.sstatic.net/gkebB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gkebB.png" alt="enter image description here" /></a></p>
|
<python><pandas><plotly>
|
2023-04-21 19:02:52
| 1
| 689
|
Scoodood
|
76,075,818
| 1,711,271
|
create multiple columns at once based on the value of another column
|
<p>I have the following dataframe:</p>
<pre><code>import pandas as pd
import random
# Create the lists for each column
nrows = 5
a = [[random.randint(0, 10), random.randint(0, 10), random.randint(0, 10)] for i in range(nrows)]
b = [[random.randint(0, 10), random.randint(0, 10), random.randint(0, 10)] for i in range(nrows)]
c = [[random.randint(0, 10), random.randint(0, 10), random.randint(0, 10)] for i in range(nrows)]
idx = [random.randint(0, 3) for i in range(nrows)]
# Create the pandas dataframe
df = pd.DataFrame({'a': a, 'b': b, 'c': c, 'idx': idx})
</code></pre>
<p>I want to create 3 more columns, <code>a_des</code>, <code>b_des</code>, <code>c_des</code>, by extracting, for each row, the values of <code>a</code>, <code>b</code>, <code>c</code> corresponding to the value of <code>idx</code> in that row. I could do this with 3 separate <code>apply</code> statements, but it's ugly (code duplication), and the more columns I need to update, the more I need to duplicate code. Is it possible to generate all three columns <code>a_des</code>, <code>b_des</code>, <code>c_des</code> with a single <code>apply</code> statement?</p>
<p><strong>EDIT</strong>: sorry all, I made a mistake. Lists are of different length for different rows. I don't have time to fix this today, but I'll definitely do it tomorrow.</p>
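<p>One hedged sketch (assuming each list is long enough to index with <code>idx</code>): a single row-wise <code>apply</code> with <code>result_type='expand'</code> builds all derived columns at once, so supporting more columns only means extending the <code>cols</code> list:</p>

```python
import pandas as pd

# small fixed frame in place of the random one, so the result is checkable
df = pd.DataFrame({'a': [[1, 2, 3], [4, 5, 6]],
                   'b': [[7, 8, 9], [1, 1, 1]],
                   'c': [[2, 2, 2], [3, 3, 3]],
                   'idx': [0, 2]})

cols = ['a', 'b', 'c']
# one apply over rows: pick row[c][row['idx']] for every listed column
des = df.apply(lambda row: [row[c][row['idx']] for c in cols],
               axis=1, result_type='expand')
des.columns = [f'{c}_des' for c in cols]
df = df.join(des)
print(df[['a_des', 'b_des', 'c_des']])
```

<p>Since lists can have different lengths per row, a guard such as <code>row[c][row['idx']] if row['idx'] &lt; len(row[c]) else None</code> inside the comprehension handles out-of-range indices.</p>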
|
<python><pandas><dataframe><apply>
|
2023-04-21 18:23:12
| 5
| 5,726
|
DeltaIV
|
76,075,777
| 19,504,610
|
FastAPI: Returning a List[ReadSchema] as another schema
|
<p>I have a <code>CategoryEnum</code>:</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum
class CategoryEnum(str, Enum):
    A = "A"
    B = "B"
</code></pre>
<p>I have two tables, <code>Category</code> and <code>Project</code> tables:</p>
<pre><code>from typing import TYPE_CHECKING, List, Optional

from sqlmodel import Field, Relationship, SQLModel

from app.enums import CategoryEnum

if TYPE_CHECKING:
    from app.models.project_model import Project

class Category(SQLModel, table=True):
    __table_name__ = 'category'

    id: Optional[int] = Field(
        primary_key=True,
        index=True,
        nullable=False,
    )
    category: str
    project_id: Optional[int] = Field(
        default=None, foreign_key='project.id', nullable=False)

class Project(SQLModel, table=True):
    __table_name__ = 'project'

    id: Optional[int] = Field(
        primary_key=True,
        index=True,
        nullable=False,
    )
    categories: List[Category] = Relationship(
        back_populates='project',
        sa_relationship_kwargs={
            "lazy": "selectin",
            'cascade': 'all,delete,delete-orphan',
            "primaryjoin": "category.project_id==project.id",
        })
</code></pre>
<p>I have two schemas which are currently like this:</p>
<pre><code>class ICategoryRead(BaseModel):
    category: CategoryEnum

class IProjectRead(BaseModel):
    categories: List[ICategoryRead]
</code></pre>
<p>In FastAPI, when getting a <code>Project</code> record, the <code>categories</code> field of <code>Project</code> looks like this:</p>
<pre><code>"categories": [
{
"category": "A"
},
{
"category": "B"
}
],
</code></pre>
<p>How do I edit the <code>ICategoryRead</code> schema such that the output is:</p>
<pre><code>"categories": ["A", "B"],
</code></pre>
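<p>One approach (a sketch, assuming pydantic v1 as used by SQLModel) is to declare <code>categories</code> as a list of bare enum values on <code>IProjectRead</code> and unwrap each <code>{'category': ...}</code> mapping or ORM row in a pre-validator. The unwrap itself is plain Python and is shown standalone here:</p>

```python
def unwrap_category(v):
    """Turn {'category': 'A'} or an ORM row with .category into 'A'."""
    return v['category'] if isinstance(v, dict) else v.category

# Hypothetical pydantic v1 wiring (untested sketch):
#   class IProjectRead(BaseModel):
#       categories: List[CategoryEnum]
#
#       @validator('categories', pre=True, each_item=True)
#       def _unwrap(cls, v):
#           return unwrap_category(v)

raw = {'categories': [{'category': 'A'}, {'category': 'B'}]}
print([unwrap_category(c) for c in raw['categories']])  # ['A', 'B']
```

<p>With the validator in place, the serialized output becomes <code>"categories": ["A", "B"]</code> without changing the database models.</p>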
|
<python><backend><fastapi><pydantic>
|
2023-04-21 18:15:48
| 1
| 831
|
Jim
|
76,075,727
| 8,849,071
|
How to fix mypy to allow infer types from dictionary with interfaces
|
<p>So we have a really simple injector that just fits our needs. Now we want to have type inference, but we are having quite a hard time making it work with mypy. Here is a minified example reproducing the problem:</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod
from typing import Type, TypeVar, Dict

class Interface(ABC):
    @abstractmethod
    def method(self):
        pass

class Implementation(Interface):
    pass

T = TypeVar("T", covariant=True)

def get_implementation(interface: Type[T], *args, **kwargs) -> T:
    dictionary: Dict[Type[T], T] = {Interface: Implementation}
    return dictionary[interface](*args, **kwargs)

implementation = get_implementation(Interface)
</code></pre>
<p>You can run that code on <a href="https://mypy-play.net/?mypy=0.971&python=3.6" rel="nofollow noreferrer">https://mypy-play.net/?mypy=0.971&python=3.6</a>. You will get the following error:</p>
<pre><code>main.py:15: error: Dict entry 0 has incompatible type "Type[Interface]": "Type[Implementation]"; expected "Type[T]": "T"
main.py:16: error: "object" not callable
main.py:19: error: Only concrete class can be given where "Type[Interface]" is expected
</code></pre>
<p>To be honest, I don't have a clue how to fix them. To me the code seems correct, but clearly mypy doesn't think the same. So any idea how to make this kind of injection type safe?</p>
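<p>The registry's per-entry invariant ("the value is a subclass of the key") is not expressible through <code>Dict</code>'s type parameters, which is what mypy is objecting to. A common workaround (a sketch, not the only option) is to type the registry loosely and assert the invariant at the lookup site with <code>cast()</code>; note the implementation here also overrides the abstract method so it can actually be instantiated:</p>

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, Type, TypeVar, cast

class Interface(ABC):
    @abstractmethod
    def method(self) -> None: ...

class Implementation(Interface):
    def method(self) -> None:  # concrete, so instantiation succeeds
        pass

T = TypeVar("T")

# the subclass relationship between key and value lives outside the type system
_REGISTRY: Dict[Type[Any], Type[Any]] = {Interface: Implementation}

def get_implementation(interface: Type[T], *args: Any, **kwargs: Any) -> T:
    # cast() tells the checker we guarantee the registry invariant ourselves
    return cast(T, _REGISTRY[interface](*args, **kwargs))

impl = get_implementation(Interface)
print(type(impl).__name__)  # Implementation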
|
<python><python-typing><mypy>
|
2023-04-21 18:07:29
| 1
| 2,163
|
Antonio Gamiz Delgado
|
76,075,487
| 15,755,176
|
Shuffle list to maximize minimum distance between new and original positions
|
<p>I am looking for a simple way to re-order the elements in a Python list such that each element's new position is as far as possible from its original position, in a way that takes all element positions into account.</p>
<p>I should also be able to restore the original positions. For this reason, random should not be used.</p>
<p>I tried to solve this by reversing the order of the list but the middle elements are very close to their original positions and that is not optimal.</p>
<p>Instead, I am looking for a calculated way that takes all element positions into account and maximizes the minimum distance between the new positions and the original positions.</p>
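<p>One deterministic, invertible candidate (a sketch, under the assumption that "distance" means absolute index displacement): rotate the list by half its length. The middle element can move at most <code>n//2</code> positions, so no permutation can have a minimum displacement above <code>n//2</code>, and the rotation attains exactly that bound while its inverse is just the complementary rotation:</p>

```python
def shuffle_far(lst):
    """Rotate by n//2: every element moves at least n//2 positions,
    matching the upper bound set by the middle element."""
    k = len(lst) // 2
    return lst[k:] + lst[:k]

def unshuffle(lst):
    """Inverse: rotate back by the complementary amount."""
    k = len(lst) - len(lst) // 2
    return lst[k:] + lst[:k]

a = list(range(7))
s = shuffle_far(a)
print(s)             # [3, 4, 5, 6, 0, 1, 2]
print(unshuffle(s))  # [0, 1, 2, 3, 4, 5, 6]
```

<p>For <code>n = 7</code> the displacements are 3 or 4, so the minimum is <code>7 // 2 == 3</code>, and restoring the list requires no stored randomness.</p>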
|
<python>
|
2023-04-21 17:30:13
| 1
| 376
|
mangotango
|
76,075,483
| 2,161,250
|
python mysql library not establish connection with ssh tunneling server
|
<p>The SSH tunnel connection is established successfully, but connecting to the RDS MySQL database through it fails with: <code>2003 (HY000): Can't connect to MySQL server on '0.0.0.0:52678' (10049)</code>.</p>
<p>The same setup works when I use the <code>pymysql</code> library, but not <code>mysql.connector</code>.</p>
<p>Can anyone help me here? Below is the code:</p>
<pre><code>import mysql.connector
import sshtunnel

# Create an SSH tunnel to the RDS instance via the bastion host
with sshtunnel.SSHTunnelForwarder(
    ssh_config['ssh_host'],
    ssh_username=ssh_config['ssh_username'],
    ssh_private_key=ssh_config['ssh_private_key'],
    remote_bind_address=(rds_config['host'], rds_config['port'])
) as tunnel:
    print("SSH tunnel created successfully")

    # Connect to the RDS instance through the SSH tunnel
    conn = mysql.connector.connect(
        user=rds_config['user'],
        password=rds_config['password'],
        host=tunnel.local_bind_host,
        port=tunnel.local_bind_port,
        database=rds_config['database']
    )
</code></pre>
|
<python><ssh><pymysql><ssh-tunnel>
|
2023-04-21 17:29:51
| 0
| 338
|
Lavish Karankar
|
76,075,405
| 999,355
|
What is type=argparse.FileType('rb') in Python
|
<p>I am looking at other person's Python code:</p>
<pre><code>import argparse
parser = argparse.ArgumentParser(description="Recover files from an NTFS volume")
parser.add_argument('--mft', type=argparse.FileType('rb'), help='Use given file as MFT')
</code></pre>
<p>and find this <code>type</code> argument a bit strange for a classic C++/Delphi/C# developer.</p>
<p>I can imagine what <code>type=int</code> is, but <code>type=argparse.FileType('rb')</code> ... ?!</p>
<p>I can suppose that it is a kind of "all in one" operation that takes a string argument, instantly uses it as a file name to open a file for reading, and returns the resulting file object (descriptor). Am I right?</p>
<p>The documentation doesn't reveal the mechanisms.</p>
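<p>That reading is correct: <code>FileType('rb')</code> is a callable factory; argparse calls it with the raw command-line string and stores the already-opened binary file object in the namespace. A small self-contained demonstration (the throwaway temp file stands in for the MFT path):</p>

```python
import argparse
import os
import tempfile

parser = argparse.ArgumentParser()
parser.add_argument('--mft', type=argparse.FileType('rb'))

# create a throwaway file to stand in for the MFT argument
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'FILE0')
    path = tmp.name

args = parser.parse_args(['--mft', path])
data = args.mft.read()   # args.mft is already an open binary file object
print(data)              # b'FILE0'
args.mft.close()
os.unlink(path)
```

<p>Any callable taking one string works as <code>type=</code>, which is also why <code>type=int</code> works: argparse simply calls it on the argument and keeps the return value.</p>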
|
<python><argparse>
|
2023-04-21 17:18:22
| 1
| 26,768
|
Paul
|
76,075,382
| 2,756,466
|
Send pyttsx3.save_to_file output to an API
|
<p>I am trying to send the output of <code>pyttsx3.save_to_file</code> to an API endpoint using Python. E.g., I create a Flask API endpoint which receives some text from the user, converts the text to speech, and returns the mp3 as a byte array.</p>
<p>How can this be achieved?</p>
|
<python><pyttsx3>
|
2023-04-21 17:14:23
| 1
| 7,004
|
raju
|
76,075,352
| 6,467,567
|
How to deep flatten a list of messy arrays?
|
<p>I have a Python list/array of really messy lists/arrays, each of which contains a list, an array, or an empty list/array. Is there a simple operation I can use to <strong>completely</strong> flatten it and return a 1D array?</p>
<p>Here would be an example that might not be correct</p>
<pre><code>a = np.array([[1,2,3],
[],
[np.array([1,2,3])],
[np.array([1,2,[]])]])
</code></pre>
<p>I say it might not be correct because I can't think of a way to make it even messier. There would only be numbers in the data, no strings etc.</p>
<p>But all I need is to convert it to a 1D array where each entry contains a single value, neither a list nor an array. Is there such a function, or do I need to iterate over the array and do a check on every element?</p>
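<p>A recursive generator handles arbitrary nesting in one pass. A sketch (note that constructing the exact ragged <code>np.array</code> from the question may itself fail on modern NumPy, so a plain list holds the messy data here):</p>

```python
import numpy as np

def deep_flatten(obj):
    """Recursively yield scalar values from nested lists/tuples/arrays."""
    if isinstance(obj, (list, tuple, np.ndarray)):
        for item in obj:
            yield from deep_flatten(item)
    else:
        yield obj

a = [[1, 2, 3], [], [np.array([1, 2, 3])], [1, 2, []]]
flat = np.array(list(deep_flatten(a)))
print(flat)  # [1 2 3 1 2 3 1 2]
```

<p>Empty lists and arrays simply contribute nothing, so no separate emptiness check is needed.</p>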
|
<python><arrays>
|
2023-04-21 17:08:41
| 1
| 2,438
|
Kong
|
76,075,342
| 1,914,781
|
get differ by even odd row and combine with other columns
|
<p>I would like to compute the difference between each even and odd row, and keep the odd rows' <code>ts</code> values alongside the result.</p>
<p>The current implementation computes the differences correctly, but drops the odd rows' <code>ts</code> values.</p>
<pre><code>import pandas as pd
data = [
['04-21 10:45:21.718'],
['04-21 10:45:22.718'],
['04-21 10:45:24.718'],
['04-21 10:45:28.718'],
['04-21 10:45:32.718'],
['04-21 10:45:38.718']
]
df = pd.DataFrame(data,columns=['ts'])
df['ts'] = pd.to_datetime(df['ts'],format="%m-%d %H:%M:%S.%f")
print(df)
df2 = pd.DataFrame(df['ts'].values[1::2] - df['ts'].values[::2],columns=['delta'])
df2['delta'] = df2['delta'].dt.total_seconds()
print(df2)
</code></pre>
<p>Current output:</p>
<pre><code> ts
0 1900-04-21 10:45:21.718
1 1900-04-21 10:45:22.718
2 1900-04-21 10:45:24.718
3 1900-04-21 10:45:28.718
4 1900-04-21 10:45:32.718
5 1900-04-21 10:45:38.718
delta
0 1.0
1 4.0
2 6.0
</code></pre>
<p>Expected output:</p>
<pre><code> ts delta
1 1900-04-21 10:45:22.718 1.0
2 1900-04-21 10:45:28.718 4.0
3 1900-04-21 10:45:38.718 6.0
</code></pre>
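<p>One sketch that keeps the odd rows' <code>ts</code>: slice those rows themselves and attach the delta to them (note the index stays 1, 3, 5 rather than the 1, 2, 3 shown above):</p>

```python
import numpy as np
import pandas as pd

data = ['04-21 10:45:21.718', '04-21 10:45:22.718', '04-21 10:45:24.718',
        '04-21 10:45:28.718', '04-21 10:45:32.718', '04-21 10:45:38.718']
df = pd.DataFrame({'ts': pd.to_datetime(data, format="%m-%d %H:%M:%S.%f")})

# keep the odd rows (they carry the wanted ts), then subtract the even rows positionally
df2 = df.iloc[1::2].copy()
df2['delta'] = (df['ts'].values[1::2] - df['ts'].values[::2]) / np.timedelta64(1, 's')
print(df2)
```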
|
<python><pandas>
|
2023-04-21 17:07:01
| 2
| 9,011
|
lucky1928
|
76,075,244
| 19,980,284
|
Add Specific Labels and Spacing to Matplotlib Bi-Directional Plot
|
<p>I have generated this plot
<a href="https://i.sstatic.net/34DFm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/34DFm.png" alt="enter image description here" /></a></p>
<p>From this df:</p>
<pre><code>variable level margins_fluid margins_vp
0 volume 1L 0.718 0.690
1 volume 2L 0.501 0.808
2 volume 5L 0.181 0.920
3 MAP 64 0.434 0.647
4 MAP 58 0.477 0.854
5 MAP 52 0.489 0.904
6 Exam dry 0.668 0.713
7 exam euvolemic 0.475 0.798
8 exam wet 0.262 0.893
9 history COPD 0.506 0.804
10 history Kidney 0.441 0.778
11 history HF 0.450 0.832
12 Case 1 (PIV) 0.435 0.802
13 Case 2 (CVC) 0.497 0.809
</code></pre>
<p>Using this code:</p>
<pre><code>font_color = '#525252'
hfont = {'fontname':'Calibri'}
facecolor = '#eaeaf2'
index = fluid_vp_1_2.index
column0 = fluid_vp_1_2['margins_fluid']
column1 = fluid_vp_1_2['margins_vp']
title0 = 'Fluids'
title1 = 'Vasopressors'
fig, axes = plt.subplots(figsize=(10,5), facecolor=facecolor, ncols=2, sharey=True)
fig.tight_layout()
axes[0].barh(index, column0, align='center', color='dimgray', zorder=10)
axes[0].set_title(title0, fontsize=18, pad=15, color='black', **hfont)
axes[1].barh(index, column1, align='center', color='lightgray', zorder=10)
axes[1].set_title(title1, fontsize=18, pad=15, color='black', **hfont)
# If you have positive numbers and want to invert the x-axis of the left plot
axes[0].invert_xaxis()
# To show data from highest to lowest
plt.gca().invert_yaxis()
axes[0].set_yticks([])
axes[1].set_yticks([])
</code></pre>
<p>However, I would like it to exactly match the leftmost panel of this drawing:</p>
<p><img src="https://i.ibb.co/YDh2ryZ/figure2-example-fluidvp.jpg" alt="" /></p>
<p>With the variable name appearing once for each set of 3 levels (minus Case, which has two levels), and some added spacing in between each variable group. Any tips?</p>
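<p>A sketch of the positioning bookkeeping only (group names and row counts transcribed from the df above): compute bar y-positions with a gap after each variable group, then place one tick at each group's center via <code>set_yticks</code>:</p>

```python
# rows per variable group, in plotting order (transcribed from the df)
groups = {'volume': 3, 'MAP': 3, 'exam': 3, 'history': 3, 'Case': 2}
gap = 1.0  # extra space between variable groups

positions, group_centers = [], {}
y = 0.0
for name, n in groups.items():
    positions.extend(y + i for i in range(n))   # one bar slot per level
    group_centers[name] = y + (n - 1) / 2       # tick goes at the group's middle
    y += n + gap

# then: axes[0].barh(positions, column0, ...) and
# axes[0].set_yticks(list(group_centers.values()), list(group_centers.keys()))
print(positions, group_centers)
```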
|
<python><pandas><matplotlib><bar-chart>
|
2023-04-21 16:53:24
| 1
| 671
|
hulio_entredas
|
76,075,175
| 10,426,490
|
How to setup `pip` on Windows behind a proxy, without your password in plaintext?
|
<p>How do you setup <code>pip</code> on a Windows computer that is behind a VPN proxy?</p>
<p><strong>I've seen 3-4 different options such as</strong>:</p>
<ol>
<li><a href="https://stackoverflow.com/a/41957788/10426490">Set the <code>http_proxy</code> and <code>https_proxy</code> env variables</a></li>
<li><a href="https://stackoverflow.com/a/37219624/10426490">Create a <code>pip.ini</code> file</a></li>
<li><a href="https://stackoverflow.com/a/11869484/10426490">Use CNTLM program(?)</a></li>
</ol>
<ul>
<li>Etc.</li>
</ul>
<p>Suggestion 1 and 2 above require your password to be typed in plaintext. The password is visible in either the terminal history or the <code>pip.ini</code> file which is unacceptable in my case.</p>
<p>Suggestion 3 involves a program I wouldn't install.</p>
<p>So how are those concerned with security setting up <code>pip</code> to work behind a VPN?</p>
|
<python><security><pip><proxy><vpn>
|
2023-04-21 16:42:05
| 0
| 2,046
|
ericOnline
|
76,075,123
| 3,578,468
|
Overwrite file in MLFlow on Azure ML
|
<p>I have the same question that somebody already asked on the MLFlow GitHub page. But this is not on MLFlow's but on Azure's side of things.</p>
<p>I want to overwrite model checkpoints with the best model checkpoint at the current epoch, and not accumulate old checkpoints. Hence, I log the same thing again with MLFlow if the model has improved. This works locally, but on Azure overwriting is not possible. At least by default. I did not find anything in the documentation on that. Does anybody know if overwriting can be enabled for the Azure Machine Learning Studio model registry?</p>
<p>See this question: <a href="https://github.com/mlflow/mlflow/discussions/7686" rel="nofollow noreferrer">https://github.com/mlflow/mlflow/discussions/7686</a></p>
<p>And see this error:</p>
<pre><code>Traceback (most recent call last):
File "scripts/train.py", line 30, in save_checkpoint
mlflow.pytorch.log_model(model, "model/checkpoint")
File "/opt/miniconda/lib/python3.8/site-packages/mlflow/pytorch/__init__.py", line 310, in log_model
return Model.log(
File "/opt/miniconda/lib/python3.8/site-packages/mlflow/models/model.py", line 487, in log
mlflow.tracking.fluent.log_artifacts(local_path, mlflow_model.artifact_path)
File "/opt/miniconda/lib/python3.8/site-packages/mlflow/tracking/fluent.py", line 810, in log_artifacts
MlflowClient().log_artifacts(run_id, local_dir, artifact_path)
File "/opt/miniconda/lib/python3.8/site-packages/mlflow/tracking/client.py", line 1048, in log_artifacts
self._tracking_client.log_artifacts(run_id, local_dir, artifact_path)
File "/opt/miniconda/lib/python3.8/site-packages/mlflow/tracking/_tracking_service/client.py", line 448, in log_artifacts
self._get_artifact_repo(run_id).log_artifacts(local_dir, artifact_path)
File "/opt/miniconda/lib/python3.8/site-packages/azureml/mlflow/_store/artifact/artifact_repo.py", line 88, in log_artifacts
self.artifacts.upload_dir(local_dir, dest_path)
File "/opt/miniconda/lib/python3.8/site-packages/azureml/mlflow/_client/artifact/run_artifact_client.py", line 97, in upload_dir
result = self._upload_files(
File "/opt/miniconda/lib/python3.8/site-packages/azureml/mlflow/_client/artifact/base_artifact_client.py", line 34, in _upload_files
empty_artifact_content = self._create_empty_artifacts(paths=batch_remote_paths)
File "/opt/miniconda/lib/python3.8/site-packages/azureml/mlflow/_client/artifact/run_artifact_client.py", line 170, in _create_empty_artifacts
raise Exception("\n".join(error_messages))
Exception: UserError: Resource Conflict: ArtifactId ExperimentRun/dcid.99783d0b-d340-4f2c-ab02-f4dd1c9d7dc0/model/checkpoint/python_env.yaml already exists.
UserError: Resource Conflict: ArtifactId ExperimentRun/dcid.99783d0b-d340-4f2c-ab02-f4dd1c9d7dc0/model/checkpoint/requirements.txt already exists.
UserError: Resource Conflict: ArtifactId ExperimentRun/dcid.99783d0b-d340-4f2c-ab02-f4dd1c9d7dc0/model/checkpoint/MLmodel already exists.
UserError: Resource Conflict: ArtifactId ExperimentRun/dcid.99783d0b-d340-4f2c-ab02-f4dd1c9d7dc0/model/checkpoint/conda.yaml already exists.
UserError: Resource Conflict: ArtifactId ExperimentRun/dcid.99783d0b-d340-4f2c-ab02-f4dd1c9d7dc0/model/checkpoint/data/model.pth already exists.
UserError: Resource Conflict: ArtifactId ExperimentRun/dcid.99783d0b-d340-4f2c-ab02-f4dd1c9d7dc0/model/checkpoint/data/pickle_module_info.txt already exists.
</code></pre>
<p>Comes from this line:</p>
<pre><code> mlflow.pytorch.log_model(model, f"model/checkpoint/")
</code></pre>
<p>Yes, I could do something like</p>
<pre><code>mlflow.pytorch.log_model(model, f"model/checkpoint/epoch_{epoch_index}")
</code></pre>
<p>But I don't want to waste disk space for obsolete checkpoints.</p>
|
<python><azure><mlflow>
|
2023-04-21 16:33:03
| 0
| 3,954
|
lo tolmencre
|
76,075,120
| 5,858,752
|
Can only use .dt accessor with datetimelike values after upgrading pandas
|
<p>I recently did a <code>pip3 install --upgrade pandas</code> and I'm now using version <code>1.3.5</code>. Previously, I was using <code>0.24.X</code> IIRC (not sure how to check).</p>
<p>Ever since the update, the simulation code I have is seeing errors like</p>
<pre><code>raise AttributeError("Can only use .dt accessor with datetimelike values")
</code></pre>
<p>Is this a known issue in recent versions of pandas?</p>
<p>The line that's failing is:</p>
<pre><code>df["time"] = df.time.dt.tz_convert(local_timezone)
</code></pre>
<p>I checked the <code>dtype</code> of <code>df</code> and it shows <code>time</code> as <code>object</code>, strangely.</p>
<p>The line prior to the above is a merge:</p>
<pre><code> df = pd.merge(
df,
another_df[["time", some other columns]],
on=["time"],
how="left",
)
</code></pre>
<p>I printed the <code>dtype</code> of <code>df.time</code> and <code>another_df.time</code> before the merge and they're both <code>datetime64[ns, US/Eastern]</code>, so I'm not sure how the merge ended up making <code>time</code> have <code>dtype=object</code>. I think this might be a separate issue from the above, but related.</p>
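<p>A hedged workaround sketch: if the merge leaves <code>time</code> as <code>object</code>, coerce it back to a datetimelike column afterwards (the <code>astype(object)</code> line below only simulates what the merge did):</p>

```python
import pandas as pd

df = pd.DataFrame({"time": pd.to_datetime(["2023-01-01 10:00"]).tz_localize("US/Eastern")})
df["time"] = df["time"].astype(object)  # simulate the post-merge object dtype

# coerce back: via UTC first, then convert to the local timezone
df["time"] = pd.to_datetime(df["time"], utc=True).dt.tz_convert("US/Eastern")
print(df["time"].dtype)  # datetime64[ns, US/Eastern]
```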
|
<python><pandas><datetime>
|
2023-04-21 16:32:46
| 1
| 699
|
h8n2
|
76,075,088
| 14,808,637
|
How to get multi-dimension specific data samples on the basis of list element?
|
<p>I need to evaluate my model's performance with limited training data, so I am randomly selecting a proportion p of the original training data. Assume p is 0.2 in this case. Here are the initial lines of code:</p>
<pre><code>p = p * 100
data_samples = int((data.shape[0] * p) / 100)  # data.shape = (100, 50, 50, 3)
# for randomly selecting data
import random
random.seed(1234)
filter_indices = [random.randrange(0, data.shape[0]) for _ in range(data_samples)]
</code></pre>
<p>This gives me filter indices randomly ranging between 0 and the total data size.</p>
<p>Now, I want to get from <code>data</code> the samples at those filter_indices, keeping all the other dimensions. How can I do that effectively and efficiently?</p>
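<p>With NumPy fancy indexing, indexing the first axis with a list of indices keeps all remaining dimensions automatically; a sketch with random data standing in for <code>data</code>:</p>

```python
import random

import numpy as np

data = np.random.rand(100, 50, 50, 3)
p = 0.2
n_samples = int(data.shape[0] * p)

random.seed(1234)
filter_indices = [random.randrange(0, data.shape[0]) for _ in range(n_samples)]

subset = data[filter_indices]  # fancy indexing on axis 0 keeps the other dims
print(subset.shape)  # (20, 50, 50, 3)
```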
|
<python><numpy><random><numpy-ndarray><lis>
|
2023-04-21 16:27:53
| 1
| 774
|
Ahmad
|
76,075,008
| 12,319,746
|
Connect to Azure SQL from Azure function using pypyodbc
|
<p>I am trying to connect to an Azure SQL database from an Azure Function. I am using <code>pypyodbc</code>, but I am not able to find the correct connection syntax. I have tried</p>
<pre><code>import pypyodbc
def main(req):
# Set up the database connection
connection_string = 'Driver={ODBC Driver 17 for SQL Server};Server=tcp:<server_name>.database.windows.net,1433;Database=<database_name>;Authentication=ActiveDirectoryManagedIdentity;'
connection = pypyodbc.connect(connection_string)
# Execute a SQL query
cursor = connection.cursor()
cursor.execute('SELECT * FROM <table_name>')
rows = cursor.fetchall()
# Close the database connection
connection.close()
# Return the results
return str(rows)
</code></pre>
<p>This gives the error</p>
<blockquote>
<p>(08001, '[08001] [Microsoft][ODBC Driver 17 for SQL Server]Invalid
value specified for connection string attribute Authentication')
Traceback (most recent call last):</p>
</blockquote>
<p>What is the correct syntax? I cannot find it.</p>
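<p>One hedged guess: ODBC Driver 17 may predate the <code>ActiveDirectoryManagedIdentity</code> keyword; older driver versions document <code>ActiveDirectoryMsi</code> for managed identity instead. A sketch (placeholders left as-is, and the keyword is an assumption worth verifying against your driver version):</p>

```python
# assumption: Driver 17 accepts "ActiveDirectoryMsi" rather than
# "ActiveDirectoryManagedIdentity" (which newer drivers recognize)
connection_string = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:<server_name>.database.windows.net,1433;"
    "Database=<database_name>;"
    "Authentication=ActiveDirectoryMsi;"
)
# connection = pypyodbc.connect(connection_string)
print("Authentication=ActiveDirectoryMsi;" in connection_string)  # True
```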
|
<python><azure-functions><odbc>
|
2023-04-21 16:18:34
| 1
| 2,247
|
Abhishek Rai
|
76,074,840
| 4,307,022
|
AttributeError: module 'virtualenv.create.via_global_ref.builtin.cpython.mac_os' has no attribute 'CPython2macOsArmFramework'
|
<p>I was trying to configure pre-commit hooks and while running <code>pre-commit run --all-files</code> I got this error:</p>
<pre><code>[INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
An unexpected error has occurred: CalledProcessError: command: ('/Users/dark_matter88/opt/anaconda3/bin/python3.8', '-mvirtualenv', '/Users/dark_matter88/.cache/pre-commit/repojfgkwtv7/py_env-python3.8')
return code: 1
stdout:
AttributeError: module 'virtualenv.create.via_global_ref.builtin.cpython.mac_os' has no attribute 'CPython2macOsArmFramework'
</code></pre>
<p>I've tried to upgrade pip to resolve this issue <code>pip install --upgrade pip</code> and I received another error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/admin/opt/anaconda3/bin/pip", line 5, in <module>
from pip._internal.cli.main import main
File "/Users/admin/.local/lib/python3.8/site-packages/pip/_internal/cli/main.py", line 9, in <module>
from pip._internal.cli.autocompletion import autocomplete
File "/Users/admin/.local/lib/python3.8/site-packages/pip/_internal/cli/autocompletion.py", line 10, in <module>
from pip._internal.cli.main_parser import create_main_parser
File "/Users/admin/.local/lib/python3.8/site-packages/pip/_internal/cli/main_parser.py", line 9, in <module>
from pip._internal.build_env import get_runnable_pip
File "/Users/admin/.local/lib/python3.8/site-packages/pip/_internal/build_env.py", line 15, in <module>
from pip._vendor.packaging.requirements import Requirement
ModuleNotFoundError: No module named 'pip._vendor.packaging'
</code></pre>
<p>I tried to check the versions of both pip and pip3, and now I'm also getting the same <code>No module named 'pip._vendor.packaging'</code> error. I've tried all the solutions I could find, but nothing has helped.
I wonder if it's related to having several versions of Python installed.</p>
|
<python><macos><pip><modulenotfounderror>
|
2023-04-21 15:55:54
| 3
| 397
|
dark_matter88
|
76,074,785
| 16,512,200
|
Implicit conversion from data type varchar(max) to varbinary(max) is not allowed
|
<p>I have a method called <code>appendTable()</code> which essentially takes the name of the table and the <code>columns=data</code> as keyword arguments. I take the keyword arguments and use that to build a <code>DataFrame</code> object and then I use the <code>dataframe.to_sql()</code> method to append the row to my database table. Shown here:</p>
<pre><code>def appendTable(self, tableName, **kwargs):
dataFrame = pd.DataFrame(data=[kwargs])
print(dataFrame)
with self.connection_handling():
with threadLock:
dataFrame.to_sql(tableName, con=self.connection.dbEngine, schema="dbo", index=False, if_exists='append')
</code></pre>
<p>For example I would use this method like this:</p>
<pre><code>self.appendTable(tableName="Notebook", FormID=ID, CompressedNotes=notebook)
</code></pre>
<p>My table design is in Microsoft SQL Server and looks something like this:</p>
<pre><code>NotebookID | int | primary auto-incrementing key
FormID | int | foreign key to a form table
Notes | varchar(MAX) | allow-nulls : True
CompressedNotes | varbinary(MAX) | allow-nulls : True
</code></pre>
<p>The data I'm passing is coming from a PyQt5 TextEdit (used as a Notebook), which gives me text/images as HTML code, I then encode the data and compress it using <code>zlib.compress()</code> shown here:</p>
<pre><code>notebook_html = self.noteBookTextEdit.toHtml()
notebookData = zlib.compress(notebook_html.encode())
</code></pre>
<p>I print the data type and the dataframe and confirm it's the same data type it has always been. I'm also writing to a database table/server that I've been using for years.</p>
<pre><code>Notebook data type: <class 'bytes'>
FormID CompressedNotes
0 163 b'x\x9c\x03\x00\x00\x00\x00\x01'
</code></pre>
<p>The SQL that gets generated looks like this:</p>
<pre><code>SQL: INSERT INTO dbo.[Notebook] ([FormID], [CompressedNotes]) VALUES (?, ?)
parameters: ('163', b'x\x9c\x03\x00\x00\x00\x00\x01')
</code></pre>
<p>Recently though when I pass binary information for a column that is a <code>VARBINARY(MAX)</code> I am having this error appear:</p>
<pre><code> Could not execute cursor!
Reason: (pyodbc.ProgrammingError) ('42000', '[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Implicit conversion from data type varchar(max) to varbinary(max) is not allowed. Use the CONVERT function to run this query. (257) (SQLExecDirectW);
[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Statement(s) could not be prepared. (8180)')
[SQL: INSERT INTO dbo.[Notebook] ([FormID], [CompressedNotes]) VALUES (?, ?)]
[parameters: ('163', b'x\x9c\x03\x00\x00\x00\x00\x01')]
(Background on this error at: https://sqlalche.me/e/20/f405)
</code></pre>
<p>The only difference that I've made since this issue started was I run the <code>appendTable()</code> method through a <code>QThread()</code> instead of using <code>threading.Thread()</code> because I wanted to have access to some <code>signals</code> and <code>slots</code>. But I still use a thread lock to make sure multiple threads aren't trying to use my database engine at the same time. And I've been doing that for a very long time, but I'm unsure if the thread lock works with QThreads (I thought it did).</p>
<hr />
<p>UPDATE:</p>
<p>When I use my pyodbc cursor to write the SQL statement myself instead of using <code>pandas.DataFrame.to_sql()</code> method to generate what looks like the same statement it all works. I'm passing the exact same variables with the same data types and it works, even without using the CONVERT method that the error explains.</p>
<pre><code>cursor.execute('INSERT INTO Notebook (FormID, CompressedNotes) VALUES (?, ?)', (FormID, notebook))
</code></pre>
<p>Is <code>pandas.DataFrame()</code> converting my <code>class <bytes></code> object into something else or am I just missing something? I'm using <code>python 3.11.2</code> and <code>pandas 1.5.3</code>. Although before putting anything into this <code>QThread()</code> it previously worked with these versions.</p>
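<p>One hedged guess: <code>to_sql</code> is letting the driver infer the parameter type, so pinning the column type with the <code>dtype</code> argument may help. A minimal sketch using an in-memory SQLite stand-in just to demonstrate the parameter; for SQL Server via SQLAlchemy you would pass <code>sqlalchemy.types.VARBINARY</code> (or <code>LargeBinary</code>) instead of the string:</p>

```python
import sqlite3
import zlib

import pandas as pd

payload = zlib.compress(b"<html>notes</html>")
df = pd.DataFrame([{"FormID": 163, "CompressedNotes": payload}])

con = sqlite3.connect(":memory:")
# dtype pins the column type instead of letting it be inferred per-parameter
df.to_sql("Notebook", con, index=False, dtype={"CompressedNotes": "BLOB"})

row = con.execute("SELECT CompressedNotes FROM Notebook").fetchone()
print(zlib.decompress(row[0]))  # b'<html>notes</html>'
```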
|
<python><sql-server><pandas><sqlalchemy><pyqt5>
|
2023-04-21 15:48:53
| 1
| 371
|
Andrew
|
76,074,745
| 21,420,742
|
Finding the previous value and creating a new column in python
|
<p>I have a dataset, and for each group I want to look back and find the previous value that changed, creating a new column with that previous value.</p>
<p>Sample Data:</p>
<pre><code> Group Value
01 10
01 10
01 10
02 5
02 5
02 15
03 20
03 25
03 15
03 15
</code></pre>
<p>Desired Output:</p>
<pre><code>Group Value Previous_Value
01 10
01 10
01 10
02 5
02 5
02 15 5
03 20
03 25 20
03 15 25
03 15
</code></pre>
<p>I have imported pandas and numpy for this. Any suggestions?</p>
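<p>One sketch using <code>groupby().shift()</code> plus a mask, built from the sample data above:</p>

```python
import pandas as pd

df = pd.DataFrame({'Group': ['01'] * 3 + ['02'] * 3 + ['03'] * 4,
                   'Value': [10, 10, 10, 5, 5, 15, 20, 25, 15, 15]})

# previous value within each group
prev = df.groupby('Group')['Value'].shift()
# keep it only where the value actually changed
df['Previous_Value'] = prev.where(df['Value'].ne(prev) & prev.notna())
print(df)
```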
|
<python><python-3.x><pandas><dataframe><numpy>
|
2023-04-21 15:45:16
| 1
| 473
|
Coding_Nubie
|
76,074,708
| 21,420,742
|
Finding and Comparing Previous Rows to current value Python
|
<p>I have a dataset and I want to go back and see the previous value that changed.</p>
<p>Sample Data:</p>
<pre><code> Group Value
01 10
01 10
01 10
02 5
02 5
02 15
03 20
03 25
03 15
03 15
</code></pre>
<p>Desired Output:</p>
<pre><code>Group Value Previous_Value
01 10
01 10
01 10
02 5
02 5
02 15 5
03 20
03 25 20
03 15 25
03 15
</code></pre>
<p>I have imported pandas and numpy for this. Any suggestions?</p>
|
<python><python-3.x><pandas>
|
2023-04-21 15:41:05
| 2
| 473
|
Coding_Nubie
|
76,074,705
| 3,249,000
|
Weighted sampling from lists in polars dataframe
|
<p>I have a dataframe that looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({"group" : ["foo", "bar", "baz"],
"elements" : [
pl.int_range(0, 100, eager=True),
pl.int_range(200, 300, eager=True),
pl.int_range(300, 400, eager=True)
],
"weight": [0.1, 0.5, 0.4]})
print(df)
</code></pre>
<pre><code>┌───────┬───────────────────┬────────┐
│ group ┆ elements ┆ weight │
│ --- ┆ --- ┆ --- │
│ str ┆ list[i64] ┆ f64 │
╞═══════╪═══════════════════╪════════╡
│ foo ┆ [0, 1, … 99] ┆ 0.1 │
│ bar ┆ [200, 201, … 299] ┆ 0.5 │
│ baz ┆ [300, 301, … 399] ┆ 0.4 │
└───────┴───────────────────┴────────┘
</code></pre>
<p>How would I sample e.g. 5 elements from each of the lists in the <code>elements</code> column, such that my dataframe looks something like this?</p>
<pre><code>┌───────┬───────────────────────┬────────┐
│ group ┆ elements ┆ weight │
│ --- ┆ --- ┆ --- │
│ str ┆ list[i64] ┆ f64 │
╞═══════╪═══════════════════════╪════════╡
│ foo ┆ [7,42,19,74,33] ┆ 0.1 │
│ bar ┆ [209,277,222,291,260] ┆ 0.5 │
│ baz ┆ [300,347,312,398,369] ┆ 0.4 │
└───────┴───────────────────────┴────────┘
</code></pre>
<p>If I then wanted to sample a total of 1000 <code>elements</code> from across all <code>groups</code>, weighted according to the <code>weight</code> column, how would I go about doing that?</p>
<p>I've seen this question: <a href="https://stackoverflow.com/questions/72633461/sample-from-each-group-in-polars-dataframe">Sample from each group in polars dataframe?</a> which I think is probably similar, but so far I haven't been able to come up with the combination of expressions that will work.</p>
|
<python><dataframe><python-polars>
|
2023-04-21 15:40:50
| 2
| 2,182
|
Theolodus
|
76,074,582
| 3,231,250
|
efficient way to apply function to column pairs in the dataframe without loop
|
<p>I have a dataframe, say of shape 1000x9000.
I want to take each pair of columns (A,B or B,C etc.) and apply a custom function to these vectors (it returns a single value, not a vector).<br />
In the end I will have a square matrix of size features x features.<br />
<strong>Put another way, I want to apply a custom function before computing the correlation.</strong> (Say I have a static value "S" and my calculation depends on columns A, B and "S".)<br />
Currently I do this with a for loop, but it takes a long time (72 minutes for 9000 features).<br />
Is there a more efficient way to do this?</p>
<pre><code>K = len(df.columns)
corrm = np.empty((K, K), dtype=float)
for i in range(K):
for j in range(K):
if i > j:
continue
if i == j:
pass
else:
x = df.values[:,i]
y = df.values[:,j]
partial_correlation = pcorr(x,y) # returns single value
corrm[i][j] = partial_correlation
</code></pre>
<p>example dataframe:</p>
<pre><code> A B C D E
sample1 -0.553511498 0.010581301 -0.031557156 0.636841226 -0.034175995
sample2 -0.086819765 -0.008343026 -0.269495143 -1.308358179 -0.944317808
sample3 0.270239413 -0.483162928 -0.062806558 -0.12840758 0.179261486
sample4 0.71904635 1.601002004 1.161017558 0.787858056 1.697507859
sample5 1.388226443 1.450427333 1.401653022 1.682112299 1.352427121
sample6 0.559113914 0.98922773 1.018382478 0.756563628 0.304599497
sample7 0.209627821 -0.145606377 -0.029444883 0.456342405 -0.064318077
sample8 0.355275804 0.485999083 1.515089717 1.462177029 2.530383401
...
</code></pre>
<p>If I apply correlation function to this dataframe I will have 9000x9000 square matrix.</p>
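<p>If <code>pcorr</code> can be rewritten in terms of column sums and dot products, all pairs can be computed at once with matrix algebra; this sketch uses plain Pearson correlation as a stand-in for <code>pcorr</code> (an assumption). If it can't, restricting the loop to the upper triangle at least halves the work:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))  # rows = samples, cols = features

# all column pairs at once via matrix products
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
std = np.sqrt(np.diag(cov))
corrm = cov / np.outer(std, std)  # stand-in for the custom pairwise metric

print(np.allclose(corrm, np.corrcoef(X, rowvar=False)))  # True
```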
|
<python><pandas><numpy>
|
2023-04-21 15:24:51
| 0
| 1,120
|
Yasir
|
76,074,477
| 8,188,498
|
Callback with dynamic number of Input
|
<p>I am creating a component in Dash with rows. After a user clicks on a submit button, a figure with multiple lines (each row a line) should get created.</p>
<p>This rows component could have been a table component, but since my component contains buttons in one of its columns, I created it manually with divs; the Dash DataTable does not allow HTML components.</p>
<p><strong>The Important Thing:</strong>
The first two rows contain default values. Also, there is a button that can create a new empty row, allowing the user to add new information. The user may add an arbitrary number of rows, by clicking the button multiple times.</p>
<p>All the new or edited information has to get saved somewhere (state) so that when the user clicks on the submit button, the figure appears.</p>
<p>I am still unsure if this logic is possible with Dash/Plotly. Any ideas?</p>
<p>P.S. I have already tried with three rows, but then I have to hardcode the Inputs and Outputs in the callback, which is not a workable solution.</p>
<p>Here is some sample not working code. Also, some functions would be stored as separate files as they depict some components:</p>
<pre><code>from dash import Dash, html, dcc, Input, Output
app = Dash(__name__, title="Dash App")
def input_container(index: int) -> dcc.Input:
return dcc.Input(id={'type': 'my-input', 'index': index}, value='initial value')
# mutable state
state = [input_container(1)]
def inputs_container(state) -> html.Div:
return html.Div(id='inputs-container', children=state)
def add_input_button(app, state) -> html.Button:
@app.callback(Output('inputs-container', 'children'), Input('add-input', 'n_clicks'), prevent_initial_call=True)
def add_input(n_clicks):
return state.append(input_container(len(state) + 1))
return html.Button('Add Input', id='add-input')
def submit_button(app, state) -> html.Button:
@app.callback(Output('graph', 'figure'), Input('submit-input', 'n_clicks'))
def submit_input(n_clicks):
return '' # create some figure from the state values
return html.Button('Submit Input', id='submit-input')
app.layout = html.Div([
inputs_container(state),
add_input_button(app, state),
submit_button(app, state),
dcc.Graph(id='graph'),
])
if __name__ == '__main__':
app.run_server(debug=True)
</code></pre>
|
<python><pandas><callback><plotly><plotly-dash>
|
2023-04-21 15:10:55
| 2
| 1,037
|
Georgios
|
76,074,473
| 5,852,692
|
Constant value infront of the Gaussian Process Kernel
|
<p>I would like to use GP for a surrogate model for my optimization problem. I found a nice example from <code>scikit learn</code>, which can be found: <a href="https://scikit-learn.org/stable/auto_examples/gaussian_process/plot_gpr_noisy_targets.html#sphx-glr-auto-examples-gaussian-process-plot-gpr-noisy-targets-py" rel="nofollow noreferrer">https://scikit-learn.org/stable/auto_examples/gaussian_process/plot_gpr_noisy_targets.html#sphx-glr-auto-examples-gaussian-process-plot-gpr-noisy-targets-py</a></p>
<p>I mostly understood the code, however I do not understand this constant value(<code>1</code>) [<code>kernel = 1 * RBF(...)</code>] before the RBF definition.</p>
<pre><code>from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
kernel = 1 * RBF(length_scale=1.0, length_scale_bounds=(1e-2, 1e2))
gaussian_process = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=9)
gaussian_process.fit(X_train, y_train)
gaussian_process.kernel_
</code></pre>
<p>My x value is a vector with 250 dimensions, so I think I need to set the attribute as <code>length_scale=250</code>, but what should I do about the <code>1</code>?</p>
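<p>For what it's worth, the <code>1</code> is shorthand for a <code>ConstantKernel</code> (the signal variance), which gets tuned during fitting; this sketch just shows the equivalence. Also, for a 250-dimensional input you can pass a length-scale <em>vector</em> of length 250 (anisotropic RBF) rather than <code>length_scale=250</code>:</p>

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# `1 * RBF(...)` wraps the scalar in a ConstantKernel automatically
k1 = 1 * RBF(length_scale=1.0)
k2 = ConstantKernel(constant_value=1.0) * RBF(length_scale=1.0)
print(repr(k1) == repr(k2))  # True

# anisotropic RBF: one length scale per input dimension
k_aniso = 1 * RBF(length_scale=np.ones(250))
```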
|
<python><optimization><scikit-learn><constants><gaussian-process>
|
2023-04-21 15:10:24
| 0
| 1,588
|
oakca
|
76,074,413
| 1,914,781
|
adjust plotly subplot xlabel and ylabel distance
|
<p>How do I adjust the distance of the subplot's xlabel and ylabel from the graph? Currently the xlabel and ylabel are a bit too far away from the graph.</p>
<pre><code>from plotly.subplots import make_subplots
import plotly.graph_objects as go
def save_fig(fig,pngname):
width = 800
height = 400
fig.write_image(pngname,format="png", width=width, height=height, scale=1)
print("[[%s]]"%pngname)
#fig.show()
return
def plot():
fig = make_subplots(
rows=2, cols=1,
shared_xaxes=True,
vertical_spacing = 0.02,
x_title='x',
y_title='y',
specs=[[{"type": "xy"}],
[{"type": "xy"}]],
)
fig.add_trace(go.Bar(y=[2, 3, 1]),
row=1, col=1)
fig.add_trace(go.Scatter(x=[1,2,4],y=[3,2,5]),
row=2, col=1)
fontsize=10
xpading=.05
fig.update_layout(
margin=dict(l=50,t=40,r=10,b=40),
plot_bgcolor='#ffffff',#'rgb(12,163,135)',
paper_bgcolor='#ffffff',
title_x=0.5,
showlegend=True,
legend=dict(x=.02,y=1.05),
barmode='group',
bargap=0.05,
bargroupgap=0.0,
font=dict(
family="Courier New, monospace",
size=fontsize,
color="black"
),
xaxis=dict(
visible=True,
title_standoff=1,
tickangle=-15,
showline=True,
linecolor='black',
color='black',
linewidth=.5,
ticks='outside',
showgrid=True,
gridcolor='grey',
gridwidth=.5,
griddash='solid',#'dot',
),
yaxis=dict(
title_standoff=1,
showline=True,
linecolor='black',
color='black',
linewidth=.5,
showgrid=True,
gridcolor='grey',
gridwidth=.5,
griddash='solid',#'dot',
zeroline=True,
zerolinecolor='grey',
zerolinewidth=.5,
showticklabels=True,
),
xaxis2=dict(
title_standoff=1,
tickangle=-15,
showline=True,
linecolor='black',
color='black',
linewidth=.5,
ticks='outside',
showgrid=True,
gridcolor='grey',
gridwidth=.5,
griddash='solid',#'dot',
),
yaxis2=dict(
title_standoff=1,
showline=True,
linecolor='black',
color='black',
linewidth=.5,
showgrid=True,
gridcolor='grey',
gridwidth=.5,
griddash='solid',#'dot',
zeroline=True,
zerolinecolor='grey',
zerolinewidth=.5,
showticklabels=True,
),
)
save_fig(fig,"./demo.png")
return
plot()
</code></pre>
<p>Output:
<a href="https://i.sstatic.net/cgguO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cgguO.png" alt="enter image description here" /></a></p>
|
<python><plotly>
|
2023-04-21 15:03:43
| 1
| 9,011
|
lucky1928
|
76,074,399
| 6,734,243
|
how to execute a script that will modify my package at build time?
|
<p>We have the following package structure:</p>
<pre><code>.
├── genreate_source/
│ ├── generate_schema.py
│ ├── generate_source.py
│ └── __init__.py
├── ipyvuetify/
│ ├── template.py
│ ├── Html.py
│ ├── Themes.py
│ ├── Template.py
│ └── _init__.py
├── pyproject.toml
└── setup.py
</code></pre>
<p>the main package is ipyvuetify but its content is generated from the vuetify.js API. To build the source, we need to execute a function that lives in <code>generate_source.py</code>. Until now we were adding these generated sources at release time, but that makes developing in the lib complicated. We are trying to make the generation automatic and included in the <code>python build</code>.</p>
<p>When we try to import the "genreate_source" module in setup.py it doesn't work (ModuleNotFoundError). Is this expected, and can we call this script from setup.py?</p>
|
<python><setuptools>
|
2023-04-21 15:01:20
| 1
| 2,670
|
Pierrick Rambaud
|
76,074,394
| 2,201,603
|
TypeError: 'MultiPolygon' object is not iterable
|
<p>I am trying to run the below script from plotly: <a href="https://plotly.com/python/county-choropleth/" rel="noreferrer">https://plotly.com/python/county-choropleth/</a></p>
<p>I'm receiving this error right out of the gate: <code>TypeError: 'MultiPolygon' object is not iterable</code></p>
<p>I've looked up several posts with similar issues, but I'm skeptical that these are solutions for this particular problem. Option 2 seems the more likely approach, but why would a workaround be needed for simple example code that Plotly itself publishes? It seems I might be missing something in how the code is written.</p>
<p>OPTION 1: <a href="https://stackoverflow.com/questions/65124253/polygon-object-is-not-iterable-ipython-cookbook">'Polygon' object is not iterable- iPython Cookbook</a></p>
<p>OPTION 2: <a href="https://stackoverflow.com/questions/63758107/python-iteration-over-polygon-in-dataframe-from-shapefile-to-color-cartopy-map">Python: Iteration over Polygon in Dataframe from Shapefile to color cartopy map</a></p>
<pre><code>import plotly.figure_factory as ff
import numpy as np
import pandas as pd
df_sample = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/minoritymajority.csv')
df_sample_r = df_sample[df_sample['STNAME'] == 'Florida']
values = df_sample_r['TOT_POP'].tolist()
fips = df_sample_r['FIPS'].tolist()
endpts = list(np.mgrid[min(values):max(values):4j])
colorscale = ["#030512","#1d1d3b","#323268","#3d4b94","#3e6ab0",
"#4989bc","#60a7c7","#85c5d3","#b7e0e4","#eafcfd"]
fig = ff.create_choropleth(
fips=fips, values=values, scope=['Florida'], show_state_data=True,
colorscale=colorscale, binning_endpoints=endpts, round_legend_values=True,
plot_bgcolor='rgb(229,229,229)',
paper_bgcolor='rgb(229,229,229)',
legend_title='Population by County',
county_outline={'color': 'rgb(255,255,255)', 'width': 0.5},
exponent_format=True,
)
fig.layout.template = None
fig.show()
</code></pre>
|
<python><plotly><plotly-dash><geopandas>
|
2023-04-21 15:00:31
| 3
| 7,460
|
Dave
|
76,074,392
| 9,712,270
|
Better way to setup a nested object structure
|
<p>I've recently started with Python and want to write a simple API for data scraping from a web site with Selenium. Now I am having issues keeping the code organized.</p>
<p>Currently it looks something like this</p>
<pre><code>def get_tasks():
# Init driver and open requried page
driver = init()
class TasksView:
def get_task_names():
print("Reads all task names and returns their names using driver.findElements")
def open_task(name):
print("Open task, this will change the DOM structure")
class TaskView:
def get_details():
print("Returns details by using driver.findElements")
def get_creation():
print("Returns creation date by using driver.findElements")
def close():
print("Close task, returning DOM structure to how it was after calling get_tasks")
return TaskView()
return TasksView()
task_view = get_tasks()
task = task_view.open_task("buy_cheese")
print(task.get_details())
task.close()
</code></pre>
<p>How can I make this more readable? It's already very nested and with more complicated data structures it would be a real mess.</p>
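<p>A sketch of one way to flatten the nesting: module-level classes that share the driver via their constructors instead of closures. The names (<code>TasksView</code>, <code>TaskView</code>, <code>get_tasks</code>) mirror the question's hypothetical API, and the Selenium calls are stubbed out:</p>

```python
class TaskView:
    def __init__(self, driver, name):
        self.driver = driver
        self.name = name

    def get_details(self):
        # would use self.driver.find_elements in the real code
        return f"details of {self.name}"

    def close(self):
        print("Close task, restoring the DOM")


class TasksView:
    def __init__(self, driver):
        self.driver = driver

    def get_task_names(self):
        # would use self.driver.find_elements in the real code
        return ["buy_cheese"]

    def open_task(self, name):
        return TaskView(self.driver, name)


def get_tasks(driver=None):
    # in the real code: driver = init()
    return TasksView(driver)
```

<p>Usage stays the same as in the question (<code>get_tasks().open_task("buy_cheese")</code>), but each class can now live in its own module and be tested in isolation.</p>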
|
<python><oop><selenium-webdriver>
|
2023-04-21 15:00:17
| 1
| 1,914
|
magicmn
|
76,074,369
| 2,544,760
|
Efficiently changing values in dataframe
|
<p>My goal is to update one column's values to np.nan where a second column has a certain value. Here's an example of what I mean:</p>
<pre><code> c1 c2
a 1 8
b 2 8
c 3 8
d 4 8
e 32 8
f 32 8
g 2 8
</code></pre>
<p>should become</p>
<pre><code> c1 c2
a 1 8
b 2 8
c 3 8
d 4 8
e 32 nan
f 32 nan
g 2 8
</code></pre>
<p>The solution I have based on reading this and other sites is the following monstrosity:</p>
<pre><code>df = pd.DataFrame(data={'c1':[1,2,3,4,32,32,2], 'c2':[8,8,8,8,8,8,8]}, index=list('abcdefg'))
df.iloc[[df.index.get_loc(x) for x in df.loc[df['c1'] == 32].index.values],df.columns.get_loc(('c2'))] = np.nan
</code></pre>
<p>There must be a proper way to do this that doesn't involve list comprehension, right? Everything 'natural' that I tried resulted in me setting a value on a copy. Thanks so much.</p>
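<p>A sketch of the usual idiom: a single <code>.loc</code> call with a boolean row mask and a column label does this without any index round-trips and without triggering the chained-assignment warning:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(data={'c1': [1, 2, 3, 4, 32, 32, 2],
                        'c2': [8, 8, 8, 8, 8, 8, 8]}, index=list('abcdefg'))

# Boolean mask on c1 selects rows, 'c2' selects the column to write to --
# one .loc call, so pandas writes to the original frame, not a copy.
df.loc[df['c1'] == 32, 'c2'] = np.nan
```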
|
<python><pandas>
|
2023-04-21 14:58:22
| 0
| 358
|
irh
|
76,073,945
| 3,383,318
|
Pad zeros with previous value: is there a faster way?
|
<p>I have a tensor of size (1024x160) and I want to replace zeros with the value before them, in a row-wise fashion, i.e. for each row separately. An example of one row: [1.0, 0.0, 0.0, 0.0, 2.1, 0.0, 3.0] should become [1.0, 1.0, 1.0, 1.0, 2.1, 2.1, 3.0].</p>
<p>I searched stackoverflow and found an excellent answer to solve this problem for one vector. So I used map_fn to do the computation over the 1024 rows.</p>
<pre><code>def ffill_per_row(inputrow):
mask = tf.abs(inputrow)>1.0e-4
values = tf.concat([[0.0], tf.boolean_mask(inputrow, mask)], axis=0)
# Use cumsum over mask to find the index of the non-NaN value to pick
idx = tf.cumsum(tf.cast(mask, tf.int64))
# Gather values
result = tf.gather(values, idx)
return result
def ffill(input):
result = tf.map_fn(fn=ffill_per_row, elems=input)
return result
</code></pre>
<p>However, this solution is very slow. Is there a faster way to solve this problem?</p>
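<p>A sketch of one way to drop the <code>map_fn</code>: the same mask/gather idea can be vectorized across all rows at once. Shown in NumPy for brevity; the TensorFlow translation would use <code>tf.cumsum</code> over the mask and <code>tf.gather(..., batch_dims=1)</code> (an assumption to verify on your TF version):</p>

```python
import numpy as np

def ffill_rows(x, eps=1e-4):
    """Row-wise forward fill of (near-)zero entries, vectorized over all rows."""
    mask = np.abs(x) > eps
    # For every position, the column index of the most recent non-zero
    # entry in its row (positions before the first non-zero map to col 0).
    idx = np.maximum.accumulate(mask * np.arange(x.shape[1]), axis=1)
    # Gather per row with broadcasting fancy indexing.
    return x[np.arange(x.shape[0])[:, None], idx]
```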
|
<python><tensorflow><tensorflow2.0>
|
2023-04-21 14:04:58
| 2
| 304
|
Ruben
|
76,073,851
| 17,596,179
|
Querying s3 parquet files using duckdb
|
<p>I am trying to query my parquet files stored in my s3 bucket. But when I try to query from my s3 path, it appends 's3.amazonaws.com' to my bucket name. This is my code:</p>
<pre><code>import boto3
import duckdb
import os
sts_client = boto3.client('sts')
assumed_role_object = sts_client.assume_role(
RoleArn='arn:aws:iam::012345678901:role/my-role',
RoleSessionName='role'
)
credentials = assumed_role_object['Credentials']
con = duckdb.connect()
os.environ['AWS_ACCESS_KEY_ID'] = credentials['AccessKeyId']
os.environ['AWS_SECRET_ACCESS_KEY'] = credentials['SecretAccessKey']
os.environ['AWS_SESSION_TOKEN'] = credentials['SessionToken']
result = con.execute(query, ('s3://my-bucket/file.parquet',)).fetchall()
print(result)
</code></pre>
<p>So I am assuming the identity of my AWS account so I don't have to deal with the credentials. But upon executing this code I receive this error.</p>
<pre><code>Traceback (most recent call last):
File "D:\School\Academiejaar 3\Semester 2\stage\energy_sellers_dashboard\query.py", line 21, in <module>
result = con.execute(query).fetchall()
duckdb.IOException: IO Error: Unable to connect to URL "https://my-bucket.s3.amazonaws.com/file.parquet": 400 (Bad Request)
</code></pre>
<p>I mean I get why it returns the bad request, because <code>.s3.amazonaws.com</code> should not be concatenated with my bucket string. All help is greatly appreciated!</p>
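<p>A note on the likely cause, offered as a sketch rather than a confirmed diagnosis: the <code>https://my-bucket.s3.amazonaws.com/…</code> rewrite is the expected virtual-hosted-style URL that DuckDB's <code>httpfs</code> extension uses for S3, so the 400 usually points at missing region/credential settings rather than the URL itself. Older DuckDB releases don't read the AWS environment variables automatically, so the settings can be passed explicitly (setting names from the httpfs extension; the region value below is an assumption to replace with your bucket's region):</p>

```sql
INSTALL httpfs;
LOAD httpfs;
SET s3_region = 'eu-west-1';           -- your bucket's region (assumption)
SET s3_access_key_id = '...';
SET s3_secret_access_key = '...';
SET s3_session_token = '...';          -- needed for assumed-role credentials
```

<p>In the Python script these would be run with <code>con.execute(...)</code>, filling in the values from the <code>credentials</code> dict returned by <code>assume_role</code>.</p>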
|
<python><amazon-web-services><amazon-s3><amazon-iam><duckdb>
|
2023-04-21 13:53:27
| 0
| 437
|
david backx
|
76,073,793
| 3,490,622
|
How to get upper/lower parts of a numpy array relative to 45 degree diagonal?
|
<p>I am trying to figure out how to get the upper and lower triangles of a numpy matrix relative to the 45 degree diagonal from lower left to upper right. In other words, if my numpy array is</p>
<pre><code>a = np.array([[1,2,3],[4,5,6],[7,8,9]])
</code></pre>
<p>I want arrays containing [1,2,3],[4,5],[7] in one (with or without the diagonal) and [6],[8,9] in the other.</p>
<p>I know that <code>np.triu</code> and <code>np.tril</code> split the array along the other diagonal (from upper left to lower right), but I can't seem to figure out how to do it along the lower left to upper right diagonal.</p>
<p>Help would be appreciated!</p>
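<p>One sketch of an approach: flip the array left-right, take the ordinary <code>np.triu</code>/<code>np.tril</code> triangles, then flip back, so the main diagonal of the flipped array plays the role of the anti-diagonal:</p>

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# Flip left-right, take the usual triangles, flip back.
upper = np.fliplr(np.triu(np.fliplr(a)))         # anti-diagonal and above
lower = np.fliplr(np.tril(np.fliplr(a), k=-1))   # strictly below it
```

<p>The <code>k</code> offset controls whether the anti-diagonal itself is included, just as it does for <code>np.triu</code>/<code>np.tril</code>.</p>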
|
<python><arrays><numpy>
|
2023-04-21 13:46:22
| 2
| 1,011
|
user3490622
|
76,073,605
| 1,150,683
|
Add py.typed as package data with setuptools in pyproject.toml
|
<p>From what I read, to make sure that the typing information of your code is distributed alongside your code for linters to read, the <code>py.typed</code> file should be part of your distribution.</p>
<p>I found answers for how to add it to <a href="https://stackoverflow.com/a/53034060/1150683">setup.py</a>, but it is not clear to me 1. whether it should be included in pyproject.toml (using setuptools), 2. if so, how it should be added.</p>
<p>Scouring their github repository, it seems that this is <a href="https://github.com/pypa/setuptools/issues/3136" rel="noreferrer">not added automatically</a> so the question remains how I should add it to my pyproject.toml. I found this general discussion about <a href="https://setuptools.pypa.io/en/latest/userguide/datafiles.html#package-data" rel="noreferrer"><code>package_data</code></a> but it includes reference to <code>include_package_data</code> and a <code>MANIFEST.in</code> and it gets confusing from there what should go where.</p>
<p>Tl;dr: how should I include <code>py.typed</code> in pyproject.toml when using setuptools?</p>
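<p>A sketch of the pyproject.toml declaration, per the setuptools <code>package-data</code> configuration ("mypackage" is a placeholder for the actual import package that contains the <code>py.typed</code> marker; no MANIFEST.in is needed when package data is declared this way):</p>

```toml
[tool.setuptools]
# defaults to true for pyproject.toml-based configuration; shown for clarity
include-package-data = true

[tool.setuptools.package-data]
"mypackage" = ["py.typed"]
```

<p>The <code>py.typed</code> file itself goes inside the package directory (next to <code>__init__.py</code>), so it ships inside the wheel where type checkers look for it.</p>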
|
<python><setuptools><pyproject.toml><typed>
|
2023-04-21 13:25:19
| 2
| 28,776
|
Bram Vanroy
|
76,073,586
| 19,980,284
|
Generate bidirectional bar chart in matplotlib with variables in center of chart
|
<p>I have this dataframe:</p>
<pre><code>variable level margins_fluid margins_vp
0 volfluid 1L 0.718 0.690
1 volfluid 2L 0.501 0.808
2 volfluid 5L 0.181 0.920
3 MAP 64 0.434 0.647
4 MAP 58 0.477 0.854
5 MAP 52 0.489 0.904
6 Exam dry 0.668 0.713
7 exam euvolemic 0.475 0.798
8 exam wet 0.262 0.893
9 pmh COPD 0.506 0.804
10 pmh Kidney 0.441 0.778
11 pmh HF 0.450 0.832
12 Case 1 (PIV) 0.435 0.802
13 Case 2 (CVC) 0.497 0.809
</code></pre>
<p>And I want to build a bi-directional bar chart that looks like the left-most figure in this drawing:</p>
<p><img src="https://i.ibb.co/YDh2ryZ/figure2-example-fluidvp.jpg" alt="" /></p>
<p>Where the levels of each variable are listed vertically along the center, and the margins for each are represented as percentages moving away from the center. I have gotten close with the below but not quite there yet:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import matplotlib.pyplot as plt
font_color = '#525252'
hfont = {'fontname':'Calibri'}
facecolor = '#eaeaf2'
index = fluid_vp_1_2.index
column0 = fluid_vp_1_2['margins_fluid']
column1 = fluid_vp_1_2['margins_vp']
title0 = 'Fluids'
title1 = 'Vasopressors'
fig, axes = plt.subplots(figsize=(10,5), facecolor=facecolor, ncols=2, sharey=True)
fig.tight_layout()
axes[0].barh(index, column0, align='center', color='dimgray', zorder=10)
axes[0].set_title(title0, fontsize=18, pad=15, color='black', **hfont)
axes[1].barh(index, column1, align='center', color='lightgray', zorder=10)
axes[1].set_title(title1, fontsize=18, pad=15, color='black', **hfont)
# If you have positive numbers and want to invert the x-axis of the left plot
axes[0].invert_xaxis()
# To show data from highest to lowest
plt.gca().invert_yaxis()
axes[0].set_yticks([])
axes[1].set_yticks([])
</code></pre>
<p><a href="https://i.sstatic.net/R3sjU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R3sjU.png" alt="enter image description here" /></a></p>
<p>But I don't know how to get the variables and their levels to show up in the center of the two graphs. I also need to convert the x-axis to percentages.</p>
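<p>A sketch of one way to get both effects (not the only layout option): widen the gap between the two axes, move the left axis's tick labels onto its right-hand edge so they sit in the gap, and format the x axes with <code>PercentFormatter</code>. A two-row stand-in replaces <code>fluid_vp_1_2</code> so the snippet is self-contained:</p>

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for the sketch
import matplotlib.pyplot as plt
from matplotlib.ticker import PercentFormatter

# Stand-in for fluid_vp_1_2 (two rows for brevity)
fluid_vp_1_2 = pd.DataFrame({'level': ['1L', '2L'],
                             'margins_fluid': [0.718, 0.501],
                             'margins_vp': [0.690, 0.808]})

fig, axes = plt.subplots(figsize=(10, 5), ncols=2, sharey=True)
fig.subplots_adjust(wspace=0.25)  # central gap for the level names

axes[0].barh(fluid_vp_1_2.index, fluid_vp_1_2['margins_fluid'], color='dimgray')
axes[1].barh(fluid_vp_1_2.index, fluid_vp_1_2['margins_vp'], color='lightgray')
axes[0].invert_xaxis()
axes[0].invert_yaxis()

# Put the level names in the gap: tick labels on the right side of the
# left axis, tick marks hidden, padded out toward the centre.
axes[0].set_yticks(fluid_vp_1_2.index)
axes[0].set_yticklabels(fluid_vp_1_2['level'], ha='center')
axes[0].yaxis.tick_right()
axes[0].tick_params(axis='y', length=0, pad=25)

# Values are fractions of 1, so xmax=1 renders 0.718 as 71.8%
for ax in axes:
    ax.xaxis.set_major_formatter(PercentFormatter(xmax=1))
```

<p>For the variable group names ("volfluid", "MAP", …) a second, coarser set of labels can be drawn with <code>ax.text</code> at the group midpoints, since ticks only allow one label per position.</p>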
|
<python><pandas><matplotlib><bar-chart>
|
2023-04-21 13:23:06
| 2
| 671
|
hulio_entredas
|
76,073,297
| 20,220,485
|
Excel Viewer for Visual Studio Code not displaying decimal point in string if it follows numerical digits
|
<p>I've included in this example a mix of strings, integers, and floats. This behaviour is unsuprising for integers and floats, however, I would like to ensure that the decimal at the end of strings is always displayed.</p>
<p>Is there a way to do this?</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'col1': ['a.', '1.'], 'col2': ['1.0.', 1], 'col3': ['11.', 1.]})
display(df)
df.to_csv('test.csv', encoding='utf-8')
>>>
col1 col2 col3
0 a. 1.0. 11.
1 1. 1 1.0
</code></pre>
<p><code>df</code> rendered with Excel Viewer v4.2.57 in Virtual Studio Code v1.77.3 (Universal)</p>
<p><a href="https://i.sstatic.net/jndYS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jndYS.png" alt="enter image description here" /></a></p>
|
<python><csv><visual-studio-code>
|
2023-04-21 12:44:56
| 1
| 344
|
doine
|
76,073,250
| 14,015,493
|
Using JS WebSocket inside a Docker compose network
|
<p>I am currently developing a bidirectional API using Python FastAPI as the backend and ReactJS as the frontend. As the API calls should be scalable, I want to create multiple replicas that can be accessible via an Nginx server that also runs in Docker. Here is my docker compose file:</p>
<pre class="lang-yaml prettyprint-override"><code>version: "3.9"
services:
api:
build:
context: api
ports:
- 8000
load_balance:
image: nginx:latest
volumes:
- ./dimer_api/nginx.conf:/etc/nginx/nginx.conf:rw
depends_on:
- api
ports:
- 8000:8000
gui:
build:
context: gui
ports:
- 80:80
depends_on:
- load_balance
</code></pre>
<p>The FastAPI code looks like:</p>
<pre class="lang-py prettyprint-override"><code>websockets = []
@router.websocket("/api/ws")
async def websocket_endpoint(websocket: WebSocket):
await websocket.accept()
try:
while True:
message = await websocket.receive_text()
data = {"status": "ok", "type": "message", "message": message}
await websocket.send_text(json.dumps(data))
except WebSocketDisconnect:
websockets.remove(message)
def create_app(debug: bool = False) -> FastAPI:
app = FastAPI(debug=debug)
app.include_router(router)
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
return app
</code></pre>
<p>And here is the <code>useEffect</code> hook from my ReactJS script that tries to establish a WebSocket connection.</p>
<pre class="lang-js prettyprint-override"><code>useEffect(() => {
const socket = new WebSocket("ws://localhost:8000/api/ws"); // ERROR
socket.onopen = () => {
console.log("WebSocket connection opened");
};
socket.onmessage = (event) => {
const newData = JSON.parse(event.data);
setData(newData);
};
return () => {
socket.close();
};
}, []);
</code></pre>
<p>To provide all information, here is the configuration file of the API:</p>
<pre><code>events {
worker_connections 1024;
}
http {
server {
listen 8000;
location / {
proxy_pass http://api:8000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
}
</code></pre>
<p>And the .conf file of the ReactJS app which also runs on a Nginx server:</p>
<pre><code>server {
listen 80;
location /api {
proxy_pass http://load_balance:8000;
}
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
</code></pre>
<p>However, I am facing issues with establishing WebSocket connections between the frontend and backend. The important part is the following, where I don't know what to properly connect to:</p>
<pre class="lang-js prettyprint-override"><code>const socket = new WebSocket("ws://localhost:8000/api/ws");
</code></pre>
<p>How can I use the <code>/api</code> proxy from the GUI to connect to the WebSocket of the FastAPI? Or how could I establish this connection?</p>
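<p>A sketch of one likely fix (the helper name is hypothetical): because the browser runs on the host machine, <code>ws://localhost:8000</code> bypasses the GUI's nginx entirely; deriving the URL from the page's own origin sends the request through the <code>/api</code> proxy instead:</p>

```javascript
// Build the WebSocket URL from the page's own origin so the request
// goes through the GUI's nginx (/api proxy) rather than localhost:8000.
function buildWsUrl(protocol, host, path) {
  const wsProto = protocol === "https:" ? "wss:" : "ws:";
  return `${wsProto}//${host}${path}`;
}

// In the component:
// const socket = new WebSocket(
//   buildWsUrl(window.location.protocol, window.location.host, "/api/ws"));
```

<p>Note that the GUI nginx's <code>location /api</code> block would then also need the <code>Upgrade</code>/<code>Connection "upgrade"</code> headers (as in the load_balance config) for the WebSocket handshake to pass through.</p>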
|
<python><docker><nginx><websocket><fastapi>
|
2023-04-21 12:39:14
| 1
| 313
|
kaiserm99
|
76,073,118
| 7,445,528
|
How to parse boolean with PyScript from HTML to Python
|
<p>From a PyScript app, I am providing two radio buttons in my <code>index.html</code>,
allowing a user to decide whether synthesis of isotropic gaussian blobs should be overlapping or not (effectively by toggling the value of <code>Scikit-Learn</code>'s <code>cluster_std</code> value for the <code>make_blobs</code> method).</p>
<p>app reference:
<a href="https://24ec0d6b-0b55-49be-aeb7-a0046c41abf4.pyscriptapps.com/ea775901-75b9-406d-beac-944d26301b09/latest/" rel="nofollow noreferrer">https://24ec0d6b-0b55-49be-aeb7-a0046c41abf4.pyscriptapps.com/ea775901-75b9-406d-beac-944d26301b09/latest/</a></p>
<p>I am currently unable to parse the choice into Python booleans (<code>True</code> or <code>False</code>)
and always receive a <code>True</code> value regardless of which radio button that was clicked.</p>
<p>Experimented a bit with defining the values in <code>HTML</code> as double-, single-, or un-quoted as well as switching between different casings (titled, upper, lower) without luck.
Also had a look at other examples/questions here on SO, e.g. <a href="https://stackoverflow.com/questions/10693630/how-to-pass-a-boolean-from-javascript-to-python">How to pass a boolean from javascript to python?</a></p>
<p>Snippet from the relevant section in the HTML:</p>
<pre><code><div id="input" style="margin: 20px;">
Should the clusters overlap: <br/>
<input py-click="generate_blobs()" type="radio" id="true" name="overlaps" value="true">
<label for="true"> True</label>
<input py-click="generate_blobs()" type="radio" id="false" name="overlaps" value="false">
<label for="false"> False</label>
</div>
</code></pre>
<p>And similarly for the Python snippet:</p>
<pre><code>def generate_blobs():
"""Generate isotropic Gaussian blobs for clustering"""
over_laps = js.document.getElementsByName("overlaps")
for element in over_laps:
if element.checked:
overlap = bool(element.value)
print(overlap, type(overlap))
break
paragraph = Element("Overlap")
paragraph.write(f"overlaps: {overlap}")
n_samples = 4_000 # if overlap else 2_000
cluster_std = 1.5 if overlap else .4
X, y = datasets.make_blobs(
n_samples = n_samples,
n_features = 2,
centers = 5,
cluster_std = cluster_std,
shuffle = True,
)
return X, y
</code></pre>
<p>Result from setting the False option:</p>
<p><a href="https://i.sstatic.net/fUGAX.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fUGAX.jpg" alt="radio button choice: false" /></a></p>
<p>Any suggestions are most welcome as I am running out of ideas to try</p>
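<p>A sketch of the likely culprit: <code>bool()</code> on any non-empty string is <code>True</code> (including <code>bool("false")</code>), so the string has to be compared explicitly rather than cast:</p>

```python
def parse_bool(value: str) -> bool:
    """bool("false") is True in Python, so compare the string itself."""
    return value.strip().lower() == "true"
```

<p>In <code>generate_blobs</code> this would replace <code>overlap = bool(element.value)</code> with <code>overlap = parse_bool(element.value)</code>, and the HTML values can stay lowercase <code>true</code>/<code>false</code>.</p>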
|
<javascript><python><python-3.x><webassembly><pyscript>
|
2023-04-21 12:21:26
| 1
| 4,039
|
Gustav Rasmussen
|
76,072,801
| 9,350,030
|
python jsonschema - get the reason of validation failure
|
<p>I have a json file which I need to compare against the official F5 schema. Everything works, but when I make a mistake it does not show me a path or a specific error; it just tells me the document is not valid under any of the given schemas.</p>
<p>My json file:</p>
<pre class="lang-json prettyprint-override"><code>{
"schema": "https://raw.githubusercontent.com/F5Networks/f5-appsvcs-extension/master/schema/latest/as3-schema.json",
"class": "AS3",
"action": "deploy",
"persist": true,
"declaration": {
"class": "ADC",
"schemaVersion": "3.40.0",
"target": {
"hostname": "test.test.com"
},
"id": "version_21.0",
"label": "211008",
"remark": "CO_version_21.0"
}
}
</code></pre>
<p>Here is the Python script:</p>
<pre class="lang-py prettyprint-override"><code>import json
import requests
from jsonschema import Draft7Validator, ValidationError
def load_json_from_url(url):
response = requests.get(url)
return response.json()
def main():
schema_url = 'https://raw.githubusercontent.com/F5Networks/f5-appsvcs-extension/master/schema/latest/as3-schema.json'
schema = load_json_from_url(schema_url)
schema['$id'] = schema['$id'].replace('urn:uuid:', '')
with open('gtm1.json', 'r') as f:
validJson = json.load(f)
validator = Draft7Validator(schema)
try:
validator.validate(validJson)
print('The JSON file is valid according to the schema.')
except ValidationError as e:
message = e.schema["error_msg"] if "error_msg" in e.schema else e.message
print(f'Invalid data: {message}')
if __name__ == '__main__':
main()
</code></pre>
<p>If I were to change the json example from</p>
<pre class="lang-json prettyprint-override"><code>{
...
"declaration": {
"class": "ADC",
...
}
}
</code></pre>
<p>To:</p>
<pre class="lang-json prettyprint-override"><code>{
...
"declaration": {
"class": "ADC1",
...
}
}
</code></pre>
<p>Inside the script I have tried</p>
<pre class="lang-py prettyprint-override"><code>message = e.schema["error_msg"] if "error_msg" in e.schema else e.message
</code></pre>
<p>as well as <code>e.path</code> and <code>e.json_path</code>, but nothing works.</p>
<p>Here is the output:</p>
<p><code>Invalid data: {'schema': 'https://raw.githubusercontent.com/F5Networks/f5-appsvcs-extension/master/schema/latest/as3-schema.json', 'class': 'AS3', 'action': 'deploy', 'persist': True, 'declaration': {'class': 'ADC1', 'schemaVersion': '3.40.0', 'target': {'hostname': 'test.test.com'}, 'id': 'version_21.0', 'label': '211008', 'remark': 'CO_version_21.0'}} is not valid under any of the given schemas</code></p>
<p>How can I get the path to an item that caused validation to fail? I'd like to obtain something like <code>.declaration.class</code>, because this item is the invalid one (this specific output format is not important, e.g. list of path components would be also fine).</p>
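<p>A sketch of one approach: for <code>anyOf</code>/<code>oneOf</code> schemas the top-level error deliberately reports the whole instance, while the useful leaf errors sit in its <code>context</code>. <code>jsonschema.exceptions.best_match</code> picks the most relevant nested error, whose <code>absolute_path</code> gives the path components. Shown with a tiny stand-in schema, since the real AS3 schema is large:</p>

```python
from jsonschema import Draft7Validator
from jsonschema.exceptions import best_match

# Minimal stand-in for the AS3 schema: an anyOf wrapping an object schema.
schema = {
    "anyOf": [
        {"type": "object",
         "properties": {
             "declaration": {
                 "type": "object",
                 "properties": {"class": {"const": "ADC"}}}}}
    ]
}

validator = Draft7Validator(schema)
instance = {"declaration": {"class": "ADC1"}}

# iter_errors + best_match descends into the anyOf's sub-errors and
# returns the most relevant leaf error instead of the top-level one.
error = best_match(validator.iter_errors(instance))
print(list(error.absolute_path), "-", error.message)
```

<p>In the question's script this would replace the <code>validate()</code>/<code>except</code> pair, e.g. <code>err = best_match(validator.iter_errors(validJson))</code> followed by printing <code>err.absolute_path</code>.</p>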
|
<python><jsonschema>
|
2023-04-21 11:39:38
| 1
| 315
|
sundrys1
|
76,072,664
| 6,241,997
|
Convert PySpark Dataframe to Pandas Dataframe fails on timestamp column
|
<p>I create my pyspark dataframe:</p>
<pre><code>from datetime import datetime

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, BinaryType, ArrayType, StringType, TimestampType

spark = SparkSession.builder.getOrCreate()
input_schema = StructType([
StructField("key", StringType()),
StructField("headers", ArrayType(
StructType([
StructField("key", StringType()),
StructField("value", StringType())
])
)),
StructField("timestamp", TimestampType())
])
input_data = [
("key1", [{"key": "header1", "value": "value1"}], datetime(2023, 1, 1, 0, 0, 0)),
("key2", [{"key": "header2", "value": "value2"}], datetime(2023, 1, 1, 0, 0, 0)),
("key3", [{"key": "header3", "value": "value3"}], datetime(2023, 1, 1, 0, 0, 0))
]
df = spark.createDataFrame(input_data, input_schema)
</code></pre>
<p>I want to use Pandas' <code>assert_frame_equal()</code>, so I want to convert my dataframe to a Pandas dataframe.</p>
<p><code>df.toPandas()</code> will throw <code>TypeError: Casting to unit-less dtype 'datetime64' is not supported. Pass e.g. 'datetime64[ns]' instead.</code></p>
<p>How can I successfully convert the "timestamp" column in order to not lose detail of the datetime value? I need them to remain to <code>2023-01-01 00:00:00</code> and not <code>2023-01-01</code>.</p>
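<p>This error comes from a pandas 2.x change: casting to a unit-less <code>'datetime64'</code> dtype is rejected. Upgrading PySpark (later releases reportedly request a unit-qualified dtype) or pinning pandas below 2.0 are the usual workarounds; the unit-qualified cast itself, shown here in a Spark-free pandas sketch, keeps the full timestamp detail:</p>

```python
from datetime import datetime
import pandas as pd

pdf = pd.DataFrame({"timestamp": [datetime(2023, 1, 1, 0, 0, 0)]})
# 'datetime64[ns]' names the unit explicitly, which pandas 2.x requires;
# the time-of-day component is preserved.
pdf["timestamp"] = pdf["timestamp"].astype("datetime64[ns]")
```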
|
<python><pandas><dataframe><apache-spark><pyspark>
|
2023-04-21 11:21:21
| 5
| 1,633
|
crystyxn
|
76,072,662
| 3,116,231
|
getting ResourceWarning when running unit tests testing Azure keyvault related classes
|
<p>I'm testing Python code which creates Azure key vaults and secrets. I've written a wrapper class for basic Azure keyvault functions, to create and delete keyvaults.</p>
<p>When running my unittests I get following warning:</p>
<pre><code>./opt/homebrew/Cellar/python@3.9/3.9.16/Frameworks/Python.framework/Versions/3.9/lib/python3.9/unittest/suite.py:107: ResourceWarning: unclosed <ssl.SSLSocket fd=7, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('192.168.2.34', 57154), raddr=('51.116.150.70', 443)>
for index, test in enumerate(self):
ResourceWarning: Enable tracemalloc to get the object allocation traceback
./opt/homebrew/Cellar/python@3.9/3.9.16/Frameworks/Python.framework/Versions/3.9/lib/python3.9/unittest/suite.py:84: ResourceWarning: unclosed <ssl.SSLSocket fd=8, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('192.168.2.34', 57183), raddr=('51.116.150.70', 443)>
return self.run(*args, **kwds)
ResourceWarning: Enable tracemalloc to get the object allocation traceback
----------------------------------------------------------------------
Ran 2 tests in 35.453s
OK
</code></pre>
<p>The tests run successfully, but why do I get this warning?</p>
<p>When testing the classes with an ordinary main() function instead of the testing framework, I don't get any warnings. So it seems to be a unittest issue.</p>
<p>Here's the test code:</p>
<pre><code># Testing creation an deletion of an azure keyvault
import unittest
import random
import string
from azure_keyvault import Keyvault, keyvault_client #wrapper
class TestAzureKeyvault(unittest.TestCase):
"""Test creation an deletion of an azure keyvault
Args:
unittest (_type_): _description_
"""
def setUp(self):
# create a random name for the keyvault
rand_str = "".join(random.choices(string.ascii_lowercase, k=10))
self.tresor_name = f"test-kv-{rand_str}"
# create keyvault_client and keyvault objects
self.kv_client = keyvault_client()
self.kv = Keyvault(tresor_name=self.tresor_name, keyvault_client=self.kv_client)
def test_keyvault_create(self):
# create keyvault and check if it is contained in the list of azure keyvaults
self.kv.create()
kv_list = [kv.name for kv in self.kv_client.vaults.list()]
self.assertIn(self.tresor_name, kv_list)
def test_keyvault_delete(self):
    # delete previously created keyvault and check that it's missing from the list of azure keyvaults
kv = Keyvault(tresor_name=self.tresor_name, keyvault_client=self.kv_client)
kv.delete()
kv_list = [kv.name for kv in self.kv_client.vaults.list()]
self.assertNotIn(self.tresor_name, kv_list)
</code></pre>
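<p>A note and a sketch, hedged since the Azure SDK internals aren't shown here: unittest enables <code>ResourceWarning</code> by default (a plain <code>main()</code> run does not), which is why the warning only appears under the test runner. The warning itself points at the SDK client's HTTPS session never being closed; registering a close via <code>addCleanup</code> releases it whether the test passes or fails. Demonstrated with a stand-in client (<code>FakeClient</code> is hypothetical):</p>

```python
import unittest

class FakeClient:
    """Stand-in for an SDK client that owns a network session."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

class TestWithCleanup(unittest.TestCase):
    def setUp(self):
        self.kv_client = FakeClient()
        # Runs after each test, pass or fail, so the socket is released.
        self.addCleanup(self.kv_client.close)

    def test_client_open_during_test(self):
        self.assertFalse(self.kv_client.closed)
```

<p>In the real suite this would mean closing whatever underlying client/session the <code>Keyvault</code> wrapper holds (if it exposes a <code>close()</code> or works as a context manager); alternatively the warning can be silenced with <code>-W ignore::ResourceWarning</code>, though closing is the cleaner fix.</p>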
|
<python><azure><python-unittest><azure-keyvault>
|
2023-04-21 11:20:52
| 0
| 1,704
|
Zin Yosrim
|
76,072,583
| 1,200,914
|
Update sqlite3 in docker image
|
<p>I'm checking my sqlite3 version using:</p>
<pre><code>python -c "import sqlite3; print(sqlite3.sqlite_version)"
</code></pre>
<p>On my local machine, this prints 3.41.1. However, once I build my docker image, if I do:</p>
<pre><code>sudo docker exec -t <docker_image_number> python -c "import sqlite3; print(sqlite3.sqlite_version)"
</code></pre>
<p>I obtain 3.34.1.</p>
<p>To build the Docker image I start from python:3.9.3. I have read that sqlite3 is part of Python's standard library, and that its version depends on the SQLite library the interpreter was built/compiled against. I have tried even python:3.16.0, but I still get 3.34.1. I would like to have at least sqlite3 3.35. How can I update this library? In my Dockerfile I do a <code>RUN apt-get update</code> and a <code>RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt</code>, but requirements.txt is a freeze of my conda environment, where sqlite3 is 3.41.1.</p>
<pre><code>FROM python:3.9.3
# from compose args
ARG BITBUCKET_APP_PASS
# Variable arguments
WORKDIR /code
#Install requirements
#COPY ./.env ./.env
COPY ./requirements.txt /code/requirements.txt
RUN apt-get update
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
#Copy code into the container
COPY ./django_app/ /code/app/src/
COPY ./init_app_in_docker.sh /code/app/src/init_app_in_docker.sh
COPY ./.env ./code/app/src/.env
#Start the app
WORKDIR /code/app/src/
CMD ./init_app_in_docker.sh
</code></pre>
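<p>A sketch of the likely explanation and fix: the stdlib <code>sqlite3</code> module binds to the libsqlite3 that shipped with the base image's Debian release, so pip/conda packages cannot upgrade it; picking a base image built on a newer Debian gets a newer SQLite (the exact version bundled is an assumption to verify in the built image):</p>

```dockerfile
# sqlite3 in the stdlib links against the image's system libsqlite3,
# so choose a base image whose Debian release ships a new enough one.
FROM python:3.9-bookworm
```

<p>If the base image can't change, the <code>pysqlite3-binary</code> pip package bundles its own modern SQLite; a common (hedged) workaround is to install it and alias it over the stdlib module early in startup via <code>sys.modules["sqlite3"] = __import__("pysqlite3")</code>.</p>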
|
<python><docker><sqlite>
|
2023-04-21 11:10:35
| 1
| 3,052
|
Learning from masters
|
76,072,490
| 17,795,398
|
How to set an item with two left icons in a List from a .kv file?
|
<p>I'm learning <code>kivymd</code> and I would like to create a <code>MDList</code> with an <code>OneLineAvatarIconListItem</code> that has two left icons. The <a href="https://kivymd.readthedocs.io/en/1.1.1/components/list/#custom-list-item" rel="nofollow noreferrer">docs</a> only show how to do that with two right icons.</p>
<pre><code>from kivy.lang import Builder
from kivymd.app import MDApp
from kivymd.uix.boxlayout import MDBoxLayout
from kivymd.uix.list import ILeftBodyTouch
KV = '''
OneLineAvatarIconListItem:
text: "One-line item with avatar"
on_size:
self.ids._left_container.width = container.width
self.ids._left_container.x = container.width
YourContainer:
id: container
MDIconButton:
icon: "minus"
MDIconButton:
icon: "plus"
IconRightWidget:
icon: "cog"
'''
class YourContainer(ILeftBodyTouch, MDBoxLayout):
adaptive_width = True
class Example(MDApp):
def build(self):
self.theme_cls.theme_style = "Dark"
return Builder.load_string(KV)
Example().run()
</code></pre>
<p>If I change "right" by "left" everywhere, the icons are misplaced:
<a href="https://i.sstatic.net/afJAx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/afJAx.png" alt="picture1" /></a></p>
<p>I guess I have to do something more...</p>
|
<python><kivymd>
|
2023-04-21 10:59:12
| 1
| 472
|
Abel Gutiérrez
|
76,072,450
| 7,021,642
|
Why a change in routing policy in tornado results in sending Type Error?
|
<p>I am trying to change all the webpages from being served on absolute paths to relative paths, but something goes wrong once the request reaches the back-end tornado 5.0 server.</p>
<p>The javascript is like this:</p>
<pre><code> $.ajax({
url:'../pyctp/get_ctp_info.ctp',
type:'post',
dataType:'json',
data:{"InstrumentID":""},
success:function(response){
</code></pre>
<p>Nginx will proxy_pass anything that ends with <code>.ctp</code> to the server. The <code>url</code> used to be <code>/pyctp/get_ctp_info.ctp</code>, but both cases run into server error code 500.</p>
<p>This happened after I prefixed the original routing url <code>r'/pyctp/(.*)\.(.*)'</code> with <code>(.*)</code>:</p>
<pre><code> app = Application([
...
(r'(.*)/pyctp/(.*)\.(.*)', CTPManageMentHandler),
],
...
</code></pre>
<p>The url is not a pure regular expression, so I tried to mimic what had worked before, allowing some path ahead of <code>/pyctp/get_ctp_info.ctp</code>.</p>
<p>Here is the error recorded by python:</p>
<pre><code>TypeError: write() only accepts bytes, unicode, and dict objects
ERROR:tornado.access:500 POST /pyctp/get_ctp_info.ctp (127.0.0.1) 1.93ms
ERROR:tornado.application:Uncaught exception POST /pyctp/get_ctp_info.ctp (127.0.0.1)
HTTPServerRequest(protocol='http', host='139.198.17.235', method='POST', uri='/pyctp/get_ctp_info.ctp', version='HTTP/1.0', remote_ip='127.0.0.1')
Traceback (most recent call last):
File "/pyenv/quotecli/lib/python3.8/site-packages/tornado/web.py", line 1499, in _stack_context_handle_exception
raise_exc_info((type, value, traceback))
File "<string>", line 4, in raise_exc_info
File "/pyenv/quotecli/lib/python3.8/site-packages/tornado/stack_context.py", line 315, in wrapped
ret = fn(*args, **kwargs)
File "/pyenv/quotecli/lib/python3.8/site-packages/tornado/web.py", line 1711, in future_complete
f.result()
File "/pyenv/quotecli/lib/python3.8/site-packages/tornado/gen.py", line 1113, in run
yielded = self.gen.send(value)
File "/root/QuoteClientMajor/server_router.py", line 118, in post
self.write(result)
File "/pyenv/quotecli/lib/python3.8/site-packages/tornado/web.py", line 739, in write
raise TypeError(message)
</code></pre>
<p>here is the whole <code>CTPManageMentHandler</code> class in <code>server_router.py</code></p>
<pre><code>class CTPManageMentHandler(base.RequestHandler):
executor = ThreadPoolExecutor(8)
@tornado.web.asynchronous
@tornado.gen.coroutine
def get(self, *args, **kwargs):
url_pattern = args[0]
param = get_param(self.request.query_arguments, self.session_id)
result = yield self.get_result(url_pattern, param=param, *args, **kwargs)
self.write(result)
self.finish()
@run_on_executor
def get_result(self, url_type, *args, **kwargs):
result = self.ctp_management( url_type, *args, **kwargs)
return result
@tornado.web.asynchronous
@tornado.gen.coroutine
def post(self, *args, **kwargs):
url_pattern = args[0]
#print(self.request.headers.get("X-Real-IP"))
request_data = self.request.body.decode('utf8')
param = get_param(self.request.query_arguments, self.session_id)
if '&' in request_data or '=' in request_data:
request_data = json.dumps(utils.map_to_json(request_data))
result = yield self.post_result(url_pattern, request_data, param=param, *args, **kwargs)
self.write(result) #【【here is where the error occured】】
self.finish()
@run_on_executor
def post_result(self, url_type, request_data, *args, **kwargs):
result = self.ctp_management(url_type, request_data, *args, **kwargs)
return result
def ctp_management(self, url_type, request_data=None, *args, **kwargs):
if url_type == 'get_ctp_info':
return PyCTPManager.CTPInterface.get_ctp_info(self,request_data)
</code></pre>
<p>So why the TypeError? Or is there a right way to express a url that ends with <code>/pyctp/get_ctp_info.ctp</code> so that tornado routes it?</p>
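<p>A sketch of the likely cause: the new leading capture group shifts every positional argument by one, so <code>url_pattern = args[0]</code> becomes the (empty) prefix instead of <code>'get_ctp_info'</code>, <code>ctp_management</code> falls through its <code>if</code> and returns <code>None</code>, and <code>self.write(None)</code> raises exactly that TypeError. A minimal demonstration of the group shift:</p>

```python
import re

url = '/pyctp/get_ctp_info.ctp'

# Original pattern: args[0] is the handler name.
old = re.match(r'/pyctp/(.*)\.(.*)', url).groups()
print(old)  # ('get_ctp_info', 'ctp')

# New pattern: a leading capture group shifts everything right.
new = re.match(r'(.*)/pyctp/(.*)\.(.*)', url).groups()
print(new)  # ('', 'get_ctp_info', 'ctp')
```

<p>Two possible fixes: read <code>args[1]</code> in the handler, or make the prefix a non-capturing group, <code>r'(?:.*)/pyctp/(.*)\.(.*)'</code>, so the argument positions stay unchanged.</p>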
|
<python><tornado>
|
2023-04-21 10:54:42
| 1
| 535
|
George Y
|
76,072,449
| 11,795,964
|
Create a new pandas column from map of existing column with mixed datatypes
|
<p>I want to categorise an existing pandas series into a new column with 2 values (planned and non-planned) based on codes relating to the admission method of patients coming into a hospital.</p>
<p>The codes fall into two main categories - planned and unplanned (=emergencies).</p>
<p>I cannot get the mapping to work and wonder if this is all due to differing datatypes.</p>
<p>The series has 3 planned codes and 12 unplanned codes. I have created a dictionary; you can see it has integers and strings (I cannot change this coding).</p>
<p><code>adm_dictionary = {'planned': [11, 12, 13], 'non_planned': ['2A', '2B', '2C', '2D', 28, 31, 32, 21, 22, 23, 24, 25]}</code></p>
<p>an example of my dataframe</p>
<p><code>df2 = pd.DataFrame( {"ADMIMETH": ['11', '2D', '22']} )</code></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>ADMIMETH</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>11</td>
</tr>
<tr>
<td>1</td>
<td>2D</td>
</tr>
<tr>
<td>2</td>
<td>22</td>
</tr>
</tbody>
</table>
</div>
<p>Using map.</p>
<p><code>df2['pl_nonpl'] = df2['ADMIMETH'].map(adm_dictionary)</code></p>
<p><code>df2</code></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>ADMIMETH</th>
<th>pl_nonpl</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>11</td>
<td>NaN</td>
</tr>
<tr>
<td>1</td>
<td>2D</td>
<td>NaN</td>
</tr>
<tr>
<td>2</td>
<td>22</td>
<td>NaN</td>
</tr>
</tbody>
</table>
</div>
<p>I have checked the datatypes as far as I can. Perhaps there is an easier method.</p>
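<p>A sketch of the two likely problems and one fix: <code>map</code> expects a code → category mapping, but the dictionary is category → list-of-codes, and the codes mix <code>int</code> and <code>str</code> while the column holds strings. Inverting the dictionary and normalising every code to <code>str</code> handles both:</p>

```python
import pandas as pd

adm_dictionary = {'planned': [11, 12, 13],
                  'non_planned': ['2A', '2B', '2C', '2D', 28, 31, 32,
                                  21, 22, 23, 24, 25]}

# Invert to code -> category, stringifying codes so '11' matches 11.
lookup = {str(code): category
          for category, codes in adm_dictionary.items()
          for code in codes}

df2 = pd.DataFrame({"ADMIMETH": ['11', '2D', '22']})
df2['pl_nonpl'] = df2['ADMIMETH'].astype(str).map(lookup)
```

<p>The <code>astype(str)</code> on the column is defensive, in case some rows arrive as integers.</p>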
|
<python><pandas><dictionary>
|
2023-04-21 10:54:41
| 1
| 363
|
capnahab
|
76,072,435
| 16,389,095
|
Python/Kivy: how to retrieve a widget from its id
|
<p>I developed a simple UI in Python/Kivy. It contains a grid layout of three rows to host three widgets. Two buttons are designed in the kv part and identified by their id:</p>
<pre><code>id: 'button_create'
id: button_get_ids
</code></pre>
<p>whilst the ddi is added to the layout when the create button is pressed, and is identified by:</p>
<pre><code>myDdi.id = 'ddi'
</code></pre>
<p>I would like to retrieve the three widgets in order to print their text (<em>Button_GetIds_On_Click</em>). Here is the full code:</p>
<pre><code>from kivy.lang import Builder
from kivymd.app import MDApp
from kivymd.uix.screen import MDScreen
from kivymd.toast import toast
from kivymd.uix.menu import MDDropdownMenu
from kivymd.uix.dropdownitem.dropdownitem import MDDropDownItem
from kivy.metrics import dp
Builder.load_string(
"""
<View>:
MDGridLayout:
rows: 3
id: layout
padding: 100, 50, 100, 50
spacing: 0, 50
MDRaisedButton:
id: 'button_create'
text: 'CREATE DDI'
on_release: root.Button_CreateDDI__On_Click()
MDRaisedButton:
id: button_get_ids
disabled: True
text: 'GET IDS'
on_release: root.Button_GetIds_On_Click()
""")
class View(MDScreen):
def __init__(self, **kwargs):
super(View, self).__init__(**kwargs)
def Button_CreateDDI__On_Click(self):
myDdi = MDDropDownItem()
myDdi.text = 'SELECT POSITION'
myDdi.id = 'ddi'
myMenu, scratch = self.Create_DropDown_Widget(myDdi, ['POS 1', 'POS 2', 'POS 3'], width=4)
myDdi.on_release = myMenu.open
self.ids.button_get_ids.disabled = False
self.ids.layout.add_widget(myDdi)
def Button_GetIds_On_Click(self):
# GET BUTTON CREATE
buttCreate = self.ids['button_create'] #key error
print(buttCreate.text)
# GET BUTTON GET IDS (instead of using self.ids.button_get_ids)
buttGetIds = self.ids['button_get_ids'] #ok
print(buttGetIds.text)
# GET DDI
ddi = self.ids['ddi'] #key error
print(ddi.text)
def DDI_Selection_Changed(self):
toast('SELECTION CHANGED: ' + self.myDdi.current_item)
def Create_DropDown_Widget(self, drop_down_item, item_list, width):
items_collection = [
{
"viewclass": "OneLineListItem",
"text": item_list[i],
"height": dp(56),
"on_release": lambda x = item_list[i]: self.Set_DropDown_Item(drop_down_item, menu, x),
} for i in range(len(item_list))
]
menu = MDDropdownMenu(caller=drop_down_item, items=items_collection, width_mult=width)
menu.bind()
#menu.open()
return menu, items_collection
def Set_DropDown_Item(self, drop_down_item, menu, text_item):
drop_down_item.set_item(text_item)
menu.dismiss()
class MainApp(MDApp):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.View = View()
def build(self):
self.title = ' DROP DOWN ITEM ADDED DYNAMICALLY'
return self.View
if __name__ == '__main__':
MainApp().run()
</code></pre>
<p>For the first and third widget I get a <strong>key error</strong>. What is the difference between identifying a widget in the kv part with <em>id: 'widgetId'</em> and with <em>id: widgetId</em>? And what is the difference between <em>id</em> and <em>ids</em>?</p>
|
<python><kivy><kivy-language>
|
2023-04-21 10:52:48
| 2
| 421
|
eljamba
|
76,072,426
| 12,436,050
|
PermissionError: [Errno 13] Permission denied: while writing to a file in Python 3
|
<p>I am trying to write the output of a Python script to a text file. The script runs for some time and writes the content to the file, but after a while I get 'PermissionError: [Errno 13] Permission denied'.</p>
<p>Below is my script.</p>
<pre><code>def ancestor_map(id):
row_icd = umls[umls['AUI'] == id]
cui = row_icd.CUI.to_string(index=False)
if cui in df_mdr_pt_llt['CUI'].values:
match = df_mdr_pt_llt[df_mdr_pt_llt['CUI'] == cui]
match = match[["CODE", "STR", "TTY", "SDUI"]].drop_duplicates()
match["icd_code"] = icd_code
match["icd_term"] = icd_term
match["level"] = level
match["is_a"] = 'indirect_rel'
match = match.loc[:, ["icd_code","icd_term","CODE","STR", "TTY", "SDUI", "level", "is_a"]]
match.to_csv(r'indirect_mapping_icd10cm.txt', header=None, index=None, sep='\t', mode='a')
else:
print ('match not found: continue')
</code></pre>
<p>How can I modify my script to write the whole content without any error?</p>
<p>Any help is highly appreciated!</p>
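<p>A common cause on Windows is the target file being held open by another process (Excel, an editor, an antivirus scan) at the moment of one of the appends. A hedged sketch of a retry wrapper around the <code>to_csv</code> append — the retry count and delay are arbitrary illustrative values:</p>

```python
import os
import tempfile
import time

import pandas as pd

def append_with_retry(df, path, retries=5, delay=1.0):
    """Append df to a CSV, retrying when another process has the file locked.

    On Windows a PermissionError often means the file is open elsewhere
    (e.g. in Excel) or briefly locked by an antivirus scan; pausing and
    retrying frequently succeeds.
    """
    for attempt in range(retries):
        try:
            df.to_csv(path, header=False, index=False, sep='\t', mode='a')
            return
        except PermissionError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)

# Demo on a throwaway file: two appends -> four rows.
demo_path = os.path.join(tempfile.mkdtemp(), 'indirect_mapping_demo.txt')
demo_df = pd.DataFrame({'code': [1, 2], 'term': ['a', 'b']})
append_with_retry(demo_df, demo_path)
append_with_retry(demo_df, demo_path)
```

<p>Inside <code>ancestor_map</code>, the <code>match.to_csv(...)</code> call would then become <code>append_with_retry(match, 'indirect_mapping_icd10cm.txt')</code>. If the error happens on every call, check instead that the script has write permission for the directory it runs in.</p>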
|
<python><csv>
|
2023-04-21 10:50:38
| 1
| 1,495
|
rshar
|
76,072,249
| 2,178,942
|
Reading MATLAB dictionaries in Python
|
<p>I have created several dictionaries using MATLAB, and saved them as <code>.mat</code> file using <code>"-v7.3"</code> flag (as it has been suggested as a way to read <code>.mat</code> files in Python)</p>
<p>My code in MATLAB is:</p>
<pre class="lang-matlab prettyprint-override"><code>cnn6 = load("imgnet19k_less_feature_map_20230412_alexnet_cnn6_dict.mat");
dictionary_analyze = cnn6.dictionary_imagenames_features;
keys_in_order = keys(dictionary_analyze);
values_in_order = values(dictionary_analyze);
save("cnn6_keys_in_order_imagenet_19k_v2.mat", "keys_in_order", '-v7.3');
save("cnn6_values_in_order_imagenet_19k_v2.mat", "values_in_order", '-v7.3');
</code></pre>
<p>The dictionaries are structured this way:</p>
<p><a href="https://i.sstatic.net/k3Kwi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/k3Kwi.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/7XfX9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7XfX9.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/Znmfs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Znmfs.png" alt="enter image description here" /></a></p>
<p>In Python, I tried the following code to read these <code>.mat</code> files:</p>
<pre class="lang-py prettyprint-override"><code>import os
import sys
import pickle
import numpy as np
import h5py
import tables
from scipy.io import loadmat
normal_semantic_features_dir = "./normal_semantic_features/"
alexnet_features = ['cnn2', 'cnn4', 'cnn6', 'cnn8']
#dict_cnns = h5py.File(normal_semantic_features_dir + 'cnn2_keys_in_order_imagenet_19k_v2.mat', "r")
f1 = h5py.File(normal_semantic_features_dir + 'cnn8_keys_in_order_imagenet_19k_v2.mat', "r")
f2 = h5py.File(normal_semantic_features_dir + 'cnn8_values_in_order_imagenet_19k_v2.mat', "r")
array_of_keys = f1['keys_in_order']
print(list(f1.keys()))
</code></pre>
<p>results:</p>
<pre class="lang-py prettyprint-override"><code>> ['#refs#', '#subsystem#', 'keys_in_order']
a = f1["keys_in_order"]
type(a)
> h5py._hl.dataset.Dataset
for k, v in annots.items():
print(k,"........" ,annots[k])
print("....")
> __header__ ........ b'MATLAB 5.0 MAT-file, Platform: PCWIN64, Created on: Fri Apr 14 13:32:47 2023' ....
> __version__ ........ 1.0 ....
> __globals__ ........ [] .... None ........ [(b'keys_in_order', b'MCOS', b'string', array([[3707764736],
> [ 2],
> [ 1],
> [ 1],
> [ 1],
> [ 1]], dtype=uint32))] ....
> __function_workspace__ ........ [[ 0 1 73 ... 0 0 0]]
</code></pre>
<p>How should I read the contents of these mat files, and convert them to Python dictionaries?</p>
|
<python><matlab><dictionary><h5py>
|
2023-04-21 10:22:59
| 1
| 1,581
|
Kadaj13
|
76,072,194
| 1,473,517
|
What causes the slowdown in this numba code?
|
<p>I have the following simple function that sums the values in the 2nd row of an array:</p>
<pre><code>@njit('float64(float64[:, ::1], uint64, uint64)', fastmath=True)
def fast_sum(array_2d, start, end):
s = 0.0
for i in range(start, end):
s += array_2d[1][i]
return s
</code></pre>
<p>I time it with:</p>
<pre><code>import numpy as np
from numba import njit
A = np.random.rand(2, 500)
%timeit fast_sum(A, 100, 300)
</code></pre>
<p>This gives me:</p>
<pre><code>%timeit fast_sum(A, 100, 300)
304 ns ± 17.4 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
</code></pre>
<p>In fact I want to include the value at index <code>end</code> so I change the function to be:</p>
<pre><code>@njit('float64(float64[:, ::1], uint64, uint64)', fastmath=True)
def fast_sum_v2(array_2d, start, end):
s = 0.0
end = end + 1
for i in range(start, end):
s += array_2d[1][i]
return s
</code></pre>
<p>The code now runs 40% more slowly!</p>
<pre><code>%timeit fast_sum_v2(A, 100, 299)
423 ns ± 6.93 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
</code></pre>
<p>My guess is that this is because adding 1 to end changes the type of end. But is that really right?</p>
|
<python><numba>
|
2023-04-21 10:15:56
| 2
| 21,513
|
Simd
|
76,071,934
| 1,992,909
|
How to correctly subtype dict so that MyPy recognizes it as generic?
|
<p>I have a subclass of dict:</p>
<pre class="lang-py prettyprint-override"><code>class MyDict(dict):
pass
</code></pre>
<p>Later I use the definition:</p>
<pre class="lang-py prettyprint-override"><code>my_var: MyDict[str, int] = {'a': 1, 'b': 2}
</code></pre>
<p>MyPy complains:</p>
<pre><code>error: "MyDict" expects no type arguments, but 2 given [type-arg]
</code></pre>
<p>How can I define MyDict so that MyPy recognizes it as generic with two type arguments?</p>
<p>I have tried deriving from <code>typing.Dict</code> and adding protocol <code>MutableMapping</code>, both to no avail.</p>
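<p>A minimal sketch of the usual fix: parameterize the subclass explicitly with two <code>TypeVar</code>s, so MyPy knows <code>MyDict</code> keeps <code>dict</code>'s two type parameters:</p>

```python
from typing import Dict, TypeVar

K = TypeVar('K')
V = TypeVar('V')

class MyDict(Dict[K, V]):
    """A dict subclass that keeps dict's two type parameters."""
    pass

# MyPy now accepts the two type arguments.  Note the value must actually
# be a MyDict -- a plain dict literal would still be a type error here.
my_var: MyDict[str, int] = MyDict({'a': 1, 'b': 2})
```

<p>On Python 3.9+ the same can be written as <code>class MyDict(dict[K, V])</code> without importing <code>Dict</code>. Separately, the original assignment of a plain dict literal to a <code>MyDict</code>-annotated variable would be rejected by MyPy regardless, since a <code>dict</code> is not a <code>MyDict</code>.</p>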
|
<python><mypy><python-typing>
|
2023-04-21 09:43:33
| 1
| 1,104
|
A Sz
|
76,071,926
| 13,158,157
|
pyspark isclose alternatvie
|
<p>How can you pass <code>math.isclose</code> to pyspark? Pandas can utilize <code>numpy.isclose</code> but it will not work with pyspark columns.
I suspect I should use broadcasting but do not know what the code would look like.</p>
|
<python><math><pyspark>
|
2023-04-21 09:42:54
| 1
| 525
|
euh
|
76,071,842
| 595,305
|
How to get target from LibreOffice WrappedTargetException?
|
<p>This is about automation of LO Base using Python macros.</p>
<p>Please see <a href="https://ask.libreoffice.org/t/how-to-use-macro-to-open-form-in-base/90723" rel="nofollow noreferrer">this question</a> in the LO forum posed by me yesterday.</p>
<p>As you can see, from the link in my second post, it is trivial to open a form on the <code>OpenDocument</code> event, i.e. when the file is opened, if you use a VisualBasic macro.</p>
<p>However, attempts to open a form programmatically using Python macros always seem to lead to <code>WrappedTargetException</code>. e.g.:</p>
<pre><code>def open_contacts_form(e):
odb = e.Source
container = odb.FormDocuments
obj = container.getByHierarchicalName('kernel.contacts')
obj.open() # causes the WrappedTargetException
</code></pre>
<p>But I can't find out how to access the initial (target) exception. I printed out (to a file) <code>dir(e)</code>, and I don't see the attributes I expect to find from the <a href="https://api.libreoffice.org/docs/idl/ref/exceptioncom_1_1sun_1_1star_1_1lang_1_1WrappedTargetException.html#a603e39cf90de4c040e173bc87d1ae644" rel="nofollow noreferrer">API page for WrappedTargetException</a>, such as <code>TargetException</code>, etc.</p>
<p>I have a suspicion unorthodox thread use could be causing the problem. But I don't understand how to dig into <code>WrappedTargetException</code> for greater enlightenment.</p>
|
<python><exception><libreoffice-basic><libreoffice-base>
|
2023-04-21 09:32:52
| 1
| 16,076
|
mike rodent
|
76,071,801
| 888,842
|
Fit Matplotlib 3D subplot to figsize
|
<p>I have a minimum example of my code as follows:</p>
<pre><code>import matplotlib.pyplot as plt
fig = plt.figure(1, figsize=(16, 9))
ax = fig.add_subplot(projection='3d',elev=-150, azim=110)
plt.tight_layout()
plt.show()
</code></pre>
<p>If I run my .py script I get the following plot with a lot of white space on the left and right. Is it possible to fit the subplot to the defined figsize? In a previous Python version there was no white space, but since I use Python 3.9.4 together with matplotlib 3.5.2 the subplot has white space on the left and right. Hope someone can help me with this issue.</p>
<p><a href="https://i.sstatic.net/NhuTh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NhuTh.png" alt="enter image description here" /></a></p>
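<p>A sketch of one common workaround (the values are assumptions to tune): push the subplot parameters out to the figure edges instead of relying on <code>tight_layout</code>, which often keeps wide margins around 3D axes:</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig = plt.figure(1, figsize=(16, 9))
ax = fig.add_subplot(projection='3d', elev=-150, azim=110)

# Instead of plt.tight_layout(), stretch the subplot area to the full
# figure; pull the values back in slightly if tick labels get clipped.
fig.subplots_adjust(left=0, right=1, bottom=0, top=1)
fig.savefig('cube.png')
```

<p>Calling <code>ax.set_position([0, 0, 1, 1])</code> is an alternative way to claim the same area for a single 3D axes.</p>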
|
<python><matplotlib>
|
2023-04-21 09:28:08
| 1
| 537
|
hannes
|
76,071,772
| 4,451,521
|
Eliminate first and last rows with value 0 from dataframe
|
<p>I have a dataframe with a row with integers such as</p>
<pre><code> value
0
0
0
0
1
2
0
3
0
2
5
6
0
0
0
</code></pre>
<p>Is there a way that I can eliminate the first 0s and last 0s (but not the ones in the middle) to get</p>
<pre><code>1
2
0
3
0
2
5
6
</code></pre>
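<p>One vectorised way (a sketch on the sample data): a non-zero value has been seen at or before a row iff the running maximum of the "is non-zero" mask is <code>True</code>, and reversing the Series gives the same test from the end; keeping rows where both hold trims only the leading and trailing zeros:</p>

```python
import pandas as pd

df = pd.DataFrame({'value': [0, 0, 0, 0, 1, 2, 0, 3, 0, 2, 5, 6, 0, 0, 0]})

nonzero = df['value'].ne(0)
# cummax on a boolean Series: True from the first non-zero onwards;
# the reversed copy marks True up to the last non-zero.
trimmed = df[nonzero.cummax() & nonzero[::-1].cummax()[::-1]]
print(trimmed['value'].tolist())  # [1, 2, 0, 3, 0, 2, 5, 6]
```
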
|
<python><pandas>
|
2023-04-21 09:25:37
| 2
| 10,576
|
KansaiRobot
|
76,071,738
| 12,304,000
|
run airflow dag for previous dates
|
<p>I have an Airflow DAG where I run API queries based on the current date (using Python's date.today() function).</p>
<pre><code>with DAG(
'load_api_date',
description='test',
default_args=args,
start_date=datetime(year=2023, month=4, day=17),
schedule_interval=None,
) as dag:
....
.....
....
def get_data(**kwargs):
delta_date = str(date.today() - timedelta(days=2))
params = {"aggregation": "DAY", "date": delta_date, "returntype": "csv"}
file_date = delta_date.replace('-', '')
....
</code></pre>
<p>When this runs, the <code>delta_date</code> will always consider the current date only. Is it possible to modify the dag or pass any optional date parameters such that if needed, the DAG can also run for past dates? Or pretend that the current date is something else.</p>
<p>For example, if I run this today, the date.today() will be <strong>21-04-2023</strong>. But can I make my dag <strong>pretend that the date.today()</strong> is let's say <strong>18-04-23</strong>.</p>
<p>Does the "start_date" in the DAG initialisation play a part here?</p>
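<p>A sketch of the usual approach (not tested against a live scheduler): Airflow injects the run's logical date into the task context — for a Python callable receiving <code>**kwargs</code>, the 'YYYY-MM-DD' string arrives as <code>kwargs['ds']</code> — so reading it instead of calling <code>date.today()</code> lets a manually triggered or backfilled run "pretend" today is the run's date:</p>

```python
from datetime import date, datetime, timedelta

def get_data(**kwargs):
    """Use the DAG run's logical date when available, else fall back to today.

    In Airflow, kwargs['ds'] is the run's logical date as 'YYYY-MM-DD';
    triggering a run with a past logical date then drives the whole
    calculation from that date.
    """
    ds = kwargs.get('ds')
    base_date = datetime.strptime(ds, '%Y-%m-%d').date() if ds else date.today()
    delta_date = str(base_date - timedelta(days=2))
    params = {"aggregation": "DAY", "date": delta_date, "returntype": "csv"}
    file_date = delta_date.replace('-', '')
    return params, file_date

# Simulating what Airflow would inject for a run dated 2023-04-18:
params, file_date = get_data(ds='2023-04-18')
```

<p>As for <code>start_date</code>: it only tells the scheduler the earliest logical date it may create runs for; it never changes what <code>date.today()</code> returns inside a task.</p>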
|
<python><airflow><directed-acyclic-graphs><airflow-2.x>
|
2023-04-21 09:20:14
| 1
| 3,522
|
x89
|
76,071,706
| 4,451,521
|
using python in a pipe
|
<p>I am reading a text file with cat and piping it through grep, etc. The result is several lines containing numbers.</p>
<p>For the final operation I don't think there is an existing operator that does what I want, so I would like to do it with a Python script.</p>
<p>Is there a way that I can incorporate this python script in the pipe?</p>
<p>For example</p>
<p>If I have</p>
<pre><code>#!/usr/bin/env python
num = input()
print(num)
</code></pre>
<p>If I make this script executable, I can run it by calling it directly, but I cannot put it into a pipe.</p>
<p>It says <code>processa.py: command not found</code></p>
<p>Is there a way that <code>processa.py</code> can be considered a command and take input and output from the console?</p>
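<p>A sketch of how <code>processa.py</code> could read piped input — the <code>command not found</code> error usually just means the script's directory is not on <code>PATH</code>, so invoke it as <code>./processa.py</code> after <code>chmod +x processa.py</code>. The doubling transform below is only a placeholder for the real final operation:</p>

```python
#!/usr/bin/env python3
# processa.py -- example filter; the doubling is a stand-in transform.
import io
import sys

def process(stream):
    """Read numbers, one per line, and yield the transformed values."""
    for line in stream:
        if line.strip():
            yield float(line) * 2

if __name__ == '__main__':
    # In the real filter this would be `source = sys.stdin`, so that
    # `cat file | grep ... | ./processa.py` feeds lines in; a StringIO
    # stands in here so the sketch runs without piped input.
    source = io.StringIO("1\n2\n3\n")
    for value in process(source):
        print(value)
```

<p>With <code>source = sys.stdin</code> and the executable bit set, the script behaves like any other pipe stage: it consumes lines from the upstream command and prints its own output for the next one.</p>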
|
<python><console><pipe>
|
2023-04-21 09:15:15
| 1
| 10,576
|
KansaiRobot
|
76,071,658
| 713,200
|
How to extract particular value from json response using python based on a condition?
|
<p>After doing a GET call I get a JSON response, and I need to get a particular value from it. Here is the JSON response:</p>
<pre><code>{
"identifier": "id",
"items": [
{
"channelGroupId": 26,
"controlMethod": "MANUAL",
"portChannelId": 0,
"ethernetFecMode": "NONE",
"ethernetLoopback": "NONE",
"isSwitchablePort": false,
"negotiationMode": "UNKNOWN",
"configuredDuplexMode": "AUTO_NEGOTIATE",
"isConnectorPresent": "FALSE",
"mtu": 1514,
"delay": 0,
"isPromiscuous": false,
"isSpanMonitored": false,
"ifSpeed": {
"longAmount": 0
},
"isFloatingPep": false,
"isInMaintenance": false,
"adminStatus": "UP",
"operStatus": "DOWN",
"type": "PROPVIRTUAL",
"name": "Bundle-Ether26",
"id": 27423442, #I want to get this value if the channelGroupId = 26
"uuid": "5cc095bb-e67d-4827-8a46-c3d2bf9d450f",
"displayName": "f03d09bc[4427434_10.104.120.141,Bundle-Ether26]"
},
{
"channelGroupId": 36,
"controlMethod": "MANUAL",
"portChannelId": 0,
"ethernetFecMode": "NONE",
"ethernetLoopback": "NONE",
"isSwitchablePort": false,
"negotiationMode": "UNKNOWN",
"configuredDuplexMode": "AUTO_NEGOTIATE",
"isConnectorPresent": "FALSE",
"mtu": 1514,
"delay": 0,
"isPromiscuous": false,
"isSpanMonitored": false,
"ifSpeed": {
"longAmount": 0
},
"isFloatingPep": false,
"isInMaintenance": false,
"adminStatus": "UP",
"operStatus": "DOWN",
"type": "PROPVIRTUAL",
"name": "Bundle-Ether36",
"id": 27423438,
"uuid": "fff16259-4079-4d67-bdf9-c1a4720f01c3",
"displayName": "f03d09bc[4427434_10.104.120.141,Bundle-Ether36]"
},
{
"channelGroupId": 21,
"controlMethod": "MANUAL",
"portChannelId": 0,
"ethernetFecMode": "NONE",
"ethernetLoopback": "NONE",
"isSwitchablePort": false,
"negotiationMode": "UNKNOWN",
"configuredDuplexMode": "AUTO_NEGOTIATE",
"isConnectorPresent": "FALSE",
"mtu": 1514,
"delay": 0,
"isPromiscuous": false,
"isSpanMonitored": false,
"ifSpeed": {
"longAmount": 0
},
"isFloatingPep": false,
"isInMaintenance": false,
"adminStatus": "UP",
"operStatus": "DOWN",
"type": "PROPVIRTUAL",
"name": "Bundle-Ether21",
"id": 27423440,
"uuid": "40c11ffd-34cd-4e80-be1d-58899dba4894",
"displayName": "f03d09bc[4427434_10.104.120.141,Bundle-Ether21]"
}
],
"status": {
"statusCode": 0,
"statusMessage": "Get operation is successful",
"hideDialog": true
}
}
</code></pre>
<p>I want to extract the value of <code>id</code> (which is <code>27423442</code>) for which the <code>channelGroupId</code> value is <code>26</code> in the same dictionary.</p>
<p>I tried the following code, but I'm not able to access the dictionary:</p>
<pre><code> if response.status_code == 200:
if "success" in response.text:
#print(response.text)
full_response = response.json()
list_of_pagp = full_response['items']
print(list_of_pagp)
for i in list_of_pagp:
if(list_of_pagp['channelGroupId'] == 26 ):
instanceId = list_of_pagp[id]
break
print(instanceId)
</code></pre>
<p>Not sure what I'm missing here.</p>
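<p>The loop variable <code>i</code> is already one of the dictionaries, but the body indexes the list (<code>list_of_pagp['channelGroupId']</code>) and uses the builtin <code>id</code> instead of the string key <code>'id'</code>. A corrected sketch, with a trimmed-down copy of the <code>items</code> list standing in for the real response:</p>

```python
def find_id(items, group_id=26):
    """Return the 'id' of the first item whose channelGroupId matches."""
    for item in items:           # item is a dict -- index it, not the list
        if item['channelGroupId'] == group_id:
            return item['id']    # 'id' must be quoted (id is a builtin)
    return None

# Trimmed stand-in for full_response['items']:
items = [
    {'channelGroupId': 26, 'id': 27423442},
    {'channelGroupId': 36, 'id': 27423438},
    {'channelGroupId': 21, 'id': 27423440},
]
print(find_id(items))  # 27423442
```
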
|
<python><json><python-3.x><list><dictionary>
|
2023-04-21 09:09:52
| 1
| 950
|
mac
|
76,071,635
| 11,938,023
|
Is there a fast way to build numpy/pandas diagonal to a matrix
|
<p>OK, I have an array <code>systems = np.array([1, 3, 1, 7, 1, 15, 1, 7, 1, 3, 1], dtype=np.int8)</code>.</p>
<p>I want to know if there is a faster way to build the following dataframe or numpy matrix than the function I'm using now:</p>
<pre><code>
systems = np.array([ 1, 3, 1, 7, 1, 15, 1, 7, 1, 3, 1], dtype=np.int8)
def buildsystem(system):
firstrow = [0]
sysbuild = pd.DataFrame()
for y in range(len(system)):
firstrow.append(system[y] ^ firstrow[y])
sysbuild[0] = firstrow
for y in range(1,len(system)+1):
sysbuild[y] = system[y-1] ^ sysbuild[y-1]
return sysbuild
h= buildsystem(systems)
In [145]: h
Out[145]:
0 1 2 3 4 5 6 7 8 9 10 11
0 0 1 2 3 4 5 10 11 12 13 14 15
1 1 0 3 2 5 4 11 10 13 12 15 14
2 2 3 0 1 6 7 8 9 14 15 12 13
3 3 2 1 0 7 6 9 8 15 14 13 12
4 4 5 6 7 0 1 14 15 8 9 10 11
5 5 4 7 6 1 0 15 14 9 8 11 10
6 10 11 8 9 14 15 0 1 6 7 4 5
7 11 10 9 8 15 14 1 0 7 6 5 4
8 12 13 14 15 8 9 6 7 0 1 2 3
9 13 12 15 14 9 8 7 6 1 0 3 2
10 14 15 12 13 10 11 4 5 2 3 0 1
11 15 14 13 12 11 10 5 4 3 2 1 0
np.diagonal(h, 1)
In [144]: np.diagonal(h, 1)
Out[144]: array([ 1, 3, 1, 7, 1, 15, 1, 7, 1, 3, 1])
</code></pre>
<p>You will notice that <code>systems</code> is exactly the same as the <code>np.diagonal(h, 1)</code> superdiagonal.</p>
<p>Is there a faster way to accomplish this? I will be working with int sizes of up to 64 and wanted a faster way to look these up. Any help will be appreciated. You will notice that <code>buildsystem</code> is not using vectorized numpy or pandas functions.</p>
<p>Thank you</p>
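<p>Column 0 of the build is the prefix XOR of <code>system</code> (with a leading 0), and each later column just XORs in one more element, so every entry of the table is <code>prefix[i] ^ prefix[j]</code>. That means the whole matrix is a single broadcasted XOR — a sketch:</p>

```python
import numpy as np

systems = np.array([1, 3, 1, 7, 1, 15, 1, 7, 1, 3, 1], dtype=np.int8)

def buildsystem_fast(system):
    # prefix[k] = system[0] ^ ... ^ system[k-1], with prefix[0] = 0
    prefix = np.concatenate(([0], np.bitwise_xor.accumulate(system)))
    # Every entry is prefix[i] ^ prefix[j]: one broadcasted XOR builds it all.
    return prefix[:, None] ^ prefix[None, :]

h = buildsystem_fast(systems)
print(np.array_equal(np.diagonal(h, 1), systems))  # True
```

<p>For a length-n input this is O(n²) work with no Python-level loop, and it reproduces the table above — row 0 is <code>[0, 1, 2, 3, 4, 5, 10, 11, 12, 13, 14, 15]</code>, the prefix-XOR sequence.</p>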
|
<python><numpy><matrix>
|
2023-04-21 09:07:47
| 2
| 7,224
|
oppressionslayer
|
76,071,609
| 6,195,489
|
Using both API Key & Basic Auth in Python or Postman
|
<p>I have an API end-point that I have been given an API key for, but it also looks to have basic auth in front of it.</p>
<p>Can someone explain how I can use both basic Auth and an apiKey with Postman, or requests in python?</p>
<p>When I use the authentication section in Postman it will only let me pick one or the other.</p>
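<p>In <code>requests</code> the two mechanisms don't conflict: <code>auth=</code> produces the Basic <code>Authorization</code> header, and the API key travels in whatever header or query parameter the service expects — the <code>X-API-Key</code> name below is an assumption, so check the API's docs. In Postman, likewise, select Basic Auth on the Authorization tab and add the key as a plain row on the Headers tab. A sketch that prepares the request offline without sending it:</p>

```python
import requests
from requests.auth import HTTPBasicAuth

# 'X-API-Key' is a common convention, but the real header (or query
# parameter) name depends on the service -- check its documentation.
req = requests.Request(
    'GET',
    'https://api.example.com/endpoint',  # placeholder URL
    headers={'X-API-Key': 'my-api-key'},
    auth=HTTPBasicAuth('username', 'password'),
)
prepared = req.prepare()

# Both credentials end up on the same request:
print(prepared.headers['X-API-Key'])
print(prepared.headers['Authorization'])  # Basic base64(username:password)
```

<p>With <code>requests.get(...)</code> you would pass the same <code>headers=</code> and <code>auth=</code> arguments directly.</p>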
|
<python><python-requests><postman>
|
2023-04-21 09:05:26
| 1
| 849
|
abinitio
|
76,071,586
| 3,668,129
|
gradio HTML component with javascript code don't work
|
<p>I'm trying to integrate an <code>HTML</code> file (which contains a <code>javascript</code> function) into a <code>gradio</code> HTML component (<code>gr.HTML</code>), and they don't seem to work together:</p>
<p>Gradio app:</p>
<pre><code>import gradio as gr
with open("test.html") as f:
lines = f.readlines()
with gr.Blocks() as demo:
input_mic = gr.HTML(lines)
out_text = gr.Textbox()
if __name__ == "__main__":
demo.launch()
</code></pre>
<p><code>test.html</code> file:</p>
<pre><code><html>
<body>
<script type = "text/JavaScript">
function test() {
document.getElementById('demo').innerHTML = "Hello"
}
</script>
<h1>My First JavaScript</h1>
<button type="testButton" onclick="test()"> Start </button>
<p id="demo"></p>
</body>
</html>
</code></pre>
<p>When I run <code>test.html</code> directly it works fine (after clicking on "Start" I can see "Hello" in the demo id).</p>
<p>The HTML seems weird (the button doesn't look like a button):
<img src="https://user-images.githubusercontent.com/128060257/233591624-9fda8775-eb04-4ae8-b537-47f059bae6fe.png" alt="image" /></p>
<p>And when I click on "start" I'm getting an error on the console:
<code>Uncaught ReferenceError: test is not defined at HTMLButtonElement.onclick</code></p>
<p>How can I integrate the HTML file (with js) to gradio gr.HTML element ?</p>
|
<python><gradio>
|
2023-04-21 09:01:16
| 2
| 4,880
|
user3668129
|
76,071,569
| 10,693,596
|
Is it possible to exit the command window when Holoviz Panel application browser window is closed?
|
<p>I am writing an application that uses Holoviz <code>panel</code>. After interaction with the application is completed, I close the browser window, but the server remains running in the terminal/command line window.</p>
<p>Is it possible to automatically close the terminal/command line window if the browser tab displaying the application is closed?</p>
<p>For example, here's my app:</p>
<pre class="lang-py prettyprint-override"><code>import panel as pn
import param
class Test(param.Parameterized):
test_number = param.Integer(default=0)
test = Test()
app = pn.template.VanillaTemplate()
app.main.append(test.param)
app.servable()
</code></pre>
<p>When I run it with <code>panel serve</code>, I can interact with the application. Once I'm done, I close the browser window. Is there a way to make sure that the <code>panel serve</code> command also exits/stops?</p>
|
<python><powershell><terminal><holoviz><holoviz-panel>
|
2023-04-21 08:58:53
| 1
| 16,692
|
SultanOrazbayev
|
76,071,380
| 5,551,539
|
Stack plots generated in a loop
|
<p>I am running a loop with an <code>if</code> condition and want to stack the resulting plots in a grid. This is my sample code, which generates two random variables and plots a third one under two conditions:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
# Set seed for reproducibility
np.random.seed(42)
# Generate 10 realizations of the uniform distribution between -1 and 1 for x and y
x = np.random.uniform(low=-1, high=1, size=10)
y = np.random.uniform(low=-1, high=1, size=10)
# Create empty list to store valid plots
valid_plots = []
# Initialize an empty list to store the current row of plots
current_row = []
# Loop through each realization of x and y
for i in range(len(x)):
# Check if both x and y are positive
if x[i] > 0 and y[i] < 0:
# Generate 100 values of z
z = np.linspace(-1, 1, 100)
# Compute the function z = xy*z^2
z_func = x[i] * y[i] * z*z
# Plot the function
fig, ax = plt.subplots()
ax.plot(z, z_func)
# If there are now two plots in the current row, append the row to valid_plots and start a new row
if len(current_row) % 2 == 1:
valid_plots.append(current_row)
current_row = []
# Append the current plot to the current row
current_row.append(ax)
# If there is only one plot in the last row, append the row to valid_plots
if len(current_row) > 0 and len(current_row) % 2 == 1:
current_row.append(plt.gca())
valid_plots.append(current_row)
# Create a figure with subplots for each valid plot
num_rows = len(valid_plots)
fig, axes = plt.subplots(num_rows, 2, figsize=(12, 4 * num_rows))
for i, row in enumerate(valid_plots):
for j, ax in enumerate(row):
# Check if the plot has any lines before accessing ax.lines[0]
if len(ax.lines) > 0:
axes[i, j].plot(ax.lines[0].get_xdata(), ax.lines[0].get_ydata())
plt.show()
</code></pre>
<p>The problem with the output is that it generates two empty graphs and then starts stacking up vertically:</p>
<p><a href="https://i.sstatic.net/JDMFa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JDMFa.png" alt="output" /></a></p>
<p>Could you help me out? I would also be interested in more efficient methods of achieving this result.</p>
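<p>One way to avoid both the empty figures and the axes bookkeeping is to collect only the curve data in the loop and create the grid once at the end — a sketch of that restructuring (the Agg backend is used only so the sketch runs headless):</p>

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for a headless run
import matplotlib.pyplot as plt

np.random.seed(42)
x = np.random.uniform(low=-1, high=1, size=10)
y = np.random.uniform(low=-1, high=1, size=10)
z = np.linspace(-1, 1, 100)

# Collect only the data; don't create figures inside the loop.
curves = [x[i] * y[i] * z * z for i in range(len(x)) if x[i] > 0 and y[i] < 0]

ncols = 2
nrows = max(1, (len(curves) + ncols - 1) // ncols)  # ceil division
fig, axes = plt.subplots(nrows, ncols, figsize=(12, 4 * nrows), squeeze=False)
for ax, curve in zip(axes.flat, curves):
    ax.plot(z, curve)
for ax in list(axes.flat)[len(curves):]:  # hide any unused slot at the end
    ax.set_visible(False)
fig.savefig('grid.png')
```

<p>Since the figure is created exactly once, there are no stray empty plots, and the grid shape follows directly from the number of valid curves.</p>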
|
<python><numpy><matplotlib><plots.jl>
|
2023-04-21 08:34:12
| 1
| 301
|
Weierstraß Ramirez
|
76,071,272
| 891,975
|
Changing a node in hdfs
|
<p>We have the code uploading config file to HDFS:</p>
<pre><code>import logging

from hdfs import InsecureClient
def upload_file_to_hdfs(local_path, remote_path):
client = InsecureClient(url='http://hdfs_server:50070', user='dr.who')
try:
ret = client.upload(remote_path, local_path)
except Exception as e:
logging.info(f'Error: {e}')
print(f'File is uploaded: {local_path} -> {remote_path}')
return ret
</code></pre>
<p>Currently the domain name and port are hard-coded. What should we do if the active node is switched?</p>
|
<python><hadoop><hdfs>
|
2023-04-21 08:20:45
| 1
| 1,913
|
Павел Иванов
|
76,071,267
| 5,669,713
|
How to create permeant "cache" directory for AWS Lambda function
|
<p>I have a python script, dockerized and running as a Lambda function in AWS that performs data extraction on an XBRL file with py-xbrl when triggered. Pared down, the important part of the python script is:</p>
<pre class="lang-py prettyprint-override"><code>from xbrl.cache import HttpCache
from xbrl.instance import XbrlParser
def handler(event, context):
sample_schema_url = "https://www.sec.gov/Archives/edgar/data/0000320193/000032019321000105/aapl-20210925.htm"
cache = HttpCache('./cache')
parser = XbrlParser(cache)
    xbrl_instance = parser.parse(sample_schema_url)
return
</code></pre>
<p>Having a local cache of taxonomy links greatly speeds up the read time of other XBRL schemas with similar taxonomies - from 10s of seconds to 10ths of a second in most cases. But the cache must be specified as a file.</p>
<p>Currently, while this works, the cache is created and destroyed each time the Lambda function terminates. I think it remains while it's in a warm state, but the triggers could be quite far apart.</p>
<p>What is the best way to import the cache from somewhere durable (S3 perhaps?) when the Lambda function starts up, and export the cache back to that durable location when it shuts down? I don't believe I can set an S3 bucket to be a filepath target directly.</p>
|
<python><amazon-web-services><aws-lambda>
|
2023-04-21 08:20:12
| 1
| 355
|
Alex Howard
|
76,071,127
| 6,681,932
|
Find spline knots by variable in python
|
<p>When fitting a linear <code>GAM</code> model in <code>python</code> imposing n_splines=5, a piecewise-linear function is fitted:</p>
<pre><code>import statsmodels.api as sm
from pygam import LinearGAM
data = sm.datasets.get_rdataset('mtcars').data
Y = data['mpg']
X = data.drop("mpg",axis=1)
model = LinearGAM(spline_order=1,n_splines=5).fit(X, Y)
</code></pre>
<p>By using <code>.coef</code> from fitted model, the coefficientes for every splines can be recovered for further analysis:</p>
<pre><code>model.coef_
</code></pre>
<p>However, how can we obtain the sections of each of the 5 splines for each variable?</p>
<p>As an example, for <code>cyl</code> variable we would fit the following splines:</p>
<p><a href="https://i.sstatic.net/IRwOZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IRwOZ.png" alt="enter image description here" /></a></p>
<p>The 5 sections are determined by the knots, so, in the plot we would see the variable limits for the computed betas. (i.e.:4-5,5-6,6-7,7-8).</p>
<p>The only thing I find in the documentation is the method <code>model.edge_knots</code>, which is</p>
<blockquote>
<p>array-like of floats of length 2. The minimum and maximum domain of the spline function.</p>
</blockquote>
<p>In this example it corresponds for <code>cyl</code> to [4,8].</p>
|
<python><spline><gam><coefficients>
|
2023-04-21 07:59:57
| 1
| 478
|
PeCaDe
|
76,071,117
| 6,439,229
|
Why do python enum members have an infinite recursion of all members as attributes?
|
<p>Python enum members have all others members of the enum as attributes, including themselves, and those have too. So there is an infinite recursion of all members.</p>
<pre><code>from enum import Enum
class TT(Enum):
m1 = 1
m2 = 2
m3 = 3
t1 = TT['m1']
t1.m2.m3.m3.m1.m2.m2.m1.m3.value
>>> 3
</code></pre>
<p>Why is this?</p>
|
<python><enums>
|
2023-04-21 07:58:05
| 2
| 1,016
|
mahkitah
|
76,071,012
| 10,829,044
|
pandas dynamic date filter and compute 3 new columns based on past
|
<p>I have two dataframes that looks like below</p>
<pre><code> transaction_df = pd.DataFrame({'cust_name': ['ABC','ABC','ABC','ABC','ABC','ABC'],
'partner':['A','A','A','B','C','C'],
'part_no':['P1','P2','P1','P2','P3','P3'],
'transaction_date':['21/05/2021','21/05/2022','21/01/2023','21/09/2020','11/05/2022','18/05/2022'],
'qty':[100,100,600,150,320,410]})
transaction_df['transaction_date'] = pd.to_datetime(transaction_df['transaction_date'])
# Create proj_df
proj_df = pd.DataFrame({'proj_id':[1,2,3,4],
'part_no':['P1','P2','P3','P4'],
'partner':['A','A','F','C'],
'last_purchase_date':['11/07/2022','19/09/2021','20/04/2023','27/09/2020'],
'cust_name': ['ABC','ABC','ABC','ABC']})
proj_df['last_purchase_date'] = pd.to_datetime(proj_df['last_purchase_date'])
</code></pre>
<p>My objective is to do the below</p>
<p>a) Create 3 new columns in proj_df - <code>cust_total_trans</code>, <code>cust_part_total_trans</code> and <code>cust_part_partner_total_trans</code></p>
<p>Each of this column is computed based on the below logic</p>
<p><code>cust_total_trans</code> - Compute total number of transactions for each of the customer before the last_purchase_date provided in the proj_df for each proj_id</p>
<p><code>cust_part_total_trans</code> - Compute total number of transactions for each of the customer and part combination before the last_purchase_date provided in the proj_df for each proj_id</p>
<p><code>cust_part_partner_total_trans</code> - Compute total number of transactions for each of the customer, part and partner combination before the last_purchase_date provided in the proj_df for each proj_id</p>
<p>So, I tried the below</p>
<pre><code>merged_df = transaction_df.merge(proj_df, on="cust_name")
last_purchase_dates = merged_df.groupby("proj_id")["last_purchase_date"].max()
# Compute cust_total_transactions
cust_total_trans = merged_df.groupby("cust_name").size().reset_index(name="cust_total_trans")
# Compute cust_part_total_trans
cust_part_total_trans = merged_df.groupby(["cust_name", "part_no_x"]).size().reset_index(name="cust_part_total_trans")
# Compute cust_part_partner_total_trans
cust_part_partner_total_trans = merged_df.groupby(["cust_name", "part_no_x", "partner_x"]).size().reset_index(name="cust_part_partner_total_trans")
</code></pre>
<p>but this doesn't help me get the expected output. How do I apply the <code>before last purchase date</code> criterion? That is, I need to count a transaction only if it falls before the project's last purchase date.</p>
<p>I expect my output to be like as below. sorted in ascending order of dates column. My real transaction data is a big data with 100K rows and project data is 10K rows. So, any efficient and elegant solution is helpful</p>
<p><a href="https://i.sstatic.net/JhdWR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JhdWR.png" alt="enter image description here" /></a></p>
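<p>A sketch of one approach, shown on the sample data (note <code>dayfirst=True</code> so the <code>dd/mm/yyyy</code> strings parse consistently): merge once on customer, build a single "before the project's last purchase date" mask, and tighten it with the extra column matches for the two finer counts. One merge plus three grouped counts should stay manageable at 100K × 10K rows as long as the per-customer fan-out isn't extreme:</p>

```python
import pandas as pd

transaction_df = pd.DataFrame({
    'cust_name': ['ABC'] * 6,
    'partner':   ['A', 'A', 'A', 'B', 'C', 'C'],
    'part_no':   ['P1', 'P2', 'P1', 'P2', 'P3', 'P3'],
    'transaction_date': ['21/05/2021', '21/05/2022', '21/01/2023',
                         '21/09/2020', '11/05/2022', '18/05/2022'],
    'qty': [100, 100, 600, 150, 320, 410]})
transaction_df['transaction_date'] = pd.to_datetime(
    transaction_df['transaction_date'], dayfirst=True)

proj_df = pd.DataFrame({
    'proj_id':   [1, 2, 3, 4],
    'part_no':   ['P1', 'P2', 'P3', 'P4'],
    'partner':   ['A', 'A', 'F', 'C'],
    'last_purchase_date': ['11/07/2022', '19/09/2021', '20/04/2023', '27/09/2020'],
    'cust_name': ['ABC'] * 4})
proj_df['last_purchase_date'] = pd.to_datetime(
    proj_df['last_purchase_date'], dayfirst=True)

# One merge on customer; transaction columns get the '_t' suffix.
m = proj_df.merge(transaction_df, on='cust_name', suffixes=('', '_t'))
before = m['transaction_date'] < m['last_purchase_date']
same_part = before & (m['part_no'] == m['part_no_t'])
same_part_partner = same_part & (m['partner'] == m['partner_t'])

for col, mask in [('cust_total_trans', before),
                  ('cust_part_total_trans', same_part),
                  ('cust_part_partner_total_trans', same_part_partner)]:
    counts = m.loc[mask].groupby('proj_id').size()
    proj_df[col] = proj_df['proj_id'].map(counts).fillna(0).astype(int)

print(proj_df[['proj_id', 'cust_total_trans',
               'cust_part_total_trans', 'cust_part_partner_total_trans']])
```

<p>Each mask only narrows the previous one, so the three counts are consistent by construction; projects with no qualifying transactions get 0 via the <code>fillna</code>.</p>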
|
<python><pandas><dataframe><list><group-by>
|
2023-04-21 07:44:03
| 1
| 7,793
|
The Great
|
76,070,711
| 13,971,251
|
How to make a window scroll when the turtle hits the edge
|
<p>I made this Python program that uses psutil and turtle to graph a computer's CPU usage in real time. My problem is, when the turtle hits the edge of the window it keeps going, out of view - but I want to make the window scroll right, so the turtle continues graphing the CPU usage while staying at the edge of the window. How can I get the turtle to stay in view?</p>
<pre><code>import turtle
import psutil
import time
# HOW TO MAKE THE DOTS THAT WHEN YOU HOVER OVER THEM IT SHOWS THE PERCENT
# HOW TO MAKE IT CONTINUE SCROLLING ONCE THE LINE HITS THE END
# Set up the turtle
screen = turtle.Screen()
screen.setup(width=500, height=125)
# Set the width to the actual width, -5% for a buffer
width = screen.window_width()-(screen.window_width()/20)
# Set the height to the actual height, -10% for a buffer
height = screen.window_height()-(screen.window_height()/10)
# Create a turtle
t = turtle.Turtle()
t.hideturtle()
t.speed(0)
t.penup()
# Set x_pos to the width of the window/2 (on the left edge of the window)
x_pos = -(width/2)
# Set y_pos to the height of the window/2 (on the bottom of the window)
y_pos = -(height/2)
# Goto the bottom left corner
t.goto(x_pos, y_pos)
t.pendown()
while True:
# Get the CPU %
cpu_percent = psutil.cpu_percent(interval=None)
#Make the title of the Turtle screen the CPU %
screen.title(f"CPU %: {cpu_percent}%")
#Set y_pos as the bottom of the screen, +1% of the height of the screen for each CPU %
y_pos = (-height/2)+((height/100)*cpu_percent)
# Goto the point corresponding with the CPU %
t.goto(x_pos, y_pos)
# Make a dot
t.dot(4, "Red")
    # Add 5 to x_pos, so the next point is farther to the right
x_pos = x_pos+5
</code></pre>
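<p>One way to scroll, sketched under the assumption that turtle's <code>screen.setworldcoordinates()</code> is called each iteration to shift the visible window once the pen passes the right edge. The helper below (hypothetical name <code>world_x_window</code>) is pure math, so it can run and be tested without opening a window:</p>

```python
# A minimal sketch of the scrolling math, assuming
# screen.setworldcoordinates(llx, lly, urx, ury) is used each loop
# iteration to shift the visible window.

def world_x_window(x_pos, width):
    """Return (llx, urx) for a window of the given width that keeps
    x_pos visible: fixed until the pen reaches the right edge, then
    sliding right so the pen stays at the edge."""
    llx, urx = -width / 2, width / 2
    if x_pos > urx:
        shift = x_pos - urx
        llx, urx = llx + shift, urx + shift
    return llx, urx

# In the drawing loop, after updating x_pos, something like:
#   llx, urx = world_x_window(x_pos, width)
#   screen.setworldcoordinates(llx, -height / 2, urx, height / 2)
```

Because <code>setworldcoordinates</code> redefines the coordinate system rather than moving any drawings, previously plotted dots simply slide left out of view as the window advances.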
|
<python><turtle-graphics><python-turtle><psutil>
|
2023-04-21 07:05:11
| 1
| 1,181
|
Kovy Jacob
|
76,070,545
| 12,370,687
|
FastAPI difference between `json.dumps()` and `JSONResponse()`
|
<p>I am exploring FastAPI, and got it working on my Docker Desktop on Windows. Here's my <code>main.py</code> which is deployed successfully in Docker:</p>
<pre><code>#main.py
import fastapi
import json
from fastapi.responses import JSONResponse
app = fastapi.FastAPI()
@app.get('/api/get_weights1')
async def get_weights1():
weights = {'aa': 10, 'bb': 20}
return json.dumps(weights)
@app.get('/api/get_weights2')
async def get_weights2():
weights = {'aa': 10, 'bb': 20}
return JSONResponse(content=weights, status_code=200)
</code></pre>
<p>And I have a simple python file <code>get_weights.py</code> to make requests to those 2 APIs:</p>
<pre><code>#get_weights.py
import requests
import json
resp = requests.get('http://127.0.0.1:8000/api/get_weights1')
print('ok', resp.status_code)
if resp.status_code == 200:
print(resp.json())
resp = requests.get('http://127.0.0.1:8000/api/get_weights2')
print('ok', resp.status_code)
if resp.status_code == 200:
print(resp.json())
</code></pre>
<p>I get the same responses from the 2 APIs, output:</p>
<pre><code>ok 200
{"aa": 10, "bb": 20}
ok 200
{'aa': 10, 'bb': 20}
</code></pre>
<p>The response seems the same whether I use <code>json.dumps()</code> or <code>JSONResponse()</code>. I've read the <a href="https://fastapi.tiangolo.com/advanced/response-directly/" rel="nofollow noreferrer">FastAPI documentation on JSONResponse</a>, but I still have below questions:</p>
<p>May I know if there is any difference between the 2 methods?</p>
<p>If there is a difference, which method is recommended (and why?)?</p>
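<p>The two outputs differ subtly: the first prints with double quotes because <code>resp.json()</code> returned a JSON <em>string</em> (the dict was serialized twice), while the second is a real dict. A sketch with only the standard <code>json</code> module, mimicking what likely happens on the wire:</p>

```python
import json

payload = {'aa': 10, 'bb': 20}

# Endpoint 1: returning json.dumps(payload) makes the framework
# serialize the resulting *string* again, so the wire body is a
# double-encoded JSON string.
body1 = json.dumps(json.dumps(payload))

# Endpoint 2: JSONResponse serializes the dict exactly once.
body2 = json.dumps(payload)

# Decoding each body once, as requests' resp.json() does:
decoded1 = json.loads(body1)   # a str, printed with double quotes
decoded2 = json.loads(body2)   # a dict, printed with single quotes

print(type(decoded1), type(decoded2))
```

So clients of the first endpoint would have to call <code>json.loads()</code> a second time to get a dict back, which is why returning the dict (or a <code>JSONResponse</code>) is the usual choice.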
|
<python><json><rest><fastapi><jsonresponse>
|
2023-04-21 06:42:43
| 2
| 5,679
|
blackraven
|
76,070,398
| 6,552,836
|
Batch Interval Optimisation
|
<p>I'm trying to optimize an advertising budget plan. The advertising budget plan is made up of 2 products, each product having adverts of different durations. The optimizer should achieve two goals:</p>
<ol>
<li>Decide which weeks to place the adverts</li>
<li>How much budget to allocate for each advert, given the product budget</li>
</ol>
<p>Here are the main user inputs:</p>
<pre><code>Total Advertising Budget: £150K
Product-A Budget: £50K
Product-B Budget: £100K
Adverts Duration Table:
|‾‾‾‾‾‾‾‾‾‾‾|‾‾‾‾‾‾‾‾‾‾|‾‾‾‾‾‾‾‾‾‾|
| Advert_No | Duration | Products |
|___________|__________|__________|
| Advert_1 | 2 Weeks | Product_A|
| Advert_2 | 4 Weeks | Product_A|
| Advert_3 | 5 Weeks | Product_B|
| Advert_4 | 3 Weeks | Product_A|
| Advert_5 | 2 Weeks | Product_B|
| Advert_6 | 1 Weeks | Product_A|
| Advert_7 | 3 Weeks | Product_B|
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
</code></pre>
<p>Given a 10-week window, I want the optimizer to allocate the adverts optimally to maximize the objective function (i.e adverts should be optimally placed and budgets should be optimally allocated).</p>
<p>Below is an example of the information I want the optimizer to output:</p>
<pre><code>|‾‾‾‾‾‾‾|‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾|‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾|
| Weeks | Product A | Product B |
|_______|_______________|_______________|
|Week 1 | Advert_4 | Advert_5 |
|Week 2 | (£15k) | (£50k) |
|Week 3 |_______________|‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾|
|Week 4 | Advert_6 (£5k)| Advert_7 |
|Week 5 |‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾| (£15k) |
|Week 6 | Advert_2 |‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾|
|Week 7 | (£10k) | Advert_3 |
|Week 8 |_______________| (£35k) |
|Week 9 | Advert_1 | |
|Week 10| (£20k) | |
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
</code></pre>
<p>Here is my attempt at the solution. As this is a yearly advertising budget plan broken down into weeks, x[i] here is the variable for the budget per week. I have been using the SciPy Optimize library, which I would like to continue using.</p>
<pre><code># Import Libraries
import pandas as pd
import numpy as np
import scipy.optimize as so
import random
# Define Objective function (Maximization)
def obj_func(matrix):
def prod_a_func(x):
# Advert budgets for Prod_a is concave Exponential Function
return (1 - np.exp(-x / 70000)) * 0.2
def prod_b_func(x):
        # Advert budgets for Prod_b is concave Exponential Function
return (1 - np.exp(-x / 200000)) * 0.6
prod_a = prod_a_func(matrix.reshape((-1, 2))[:,0])
prod_b = prod_b_func(matrix.reshape((-1, 2))[:,1])
output_matrix = np.column_stack((prod_a, prod_b))
return np.sum(output_matrix)
# Create optimizer function
def optimizer_result(tot_budget, col_budget_list, bucket_size_list):
# Create constraint 1) - total matrix sum range
constraints_list = [{'type': 'eq', 'fun': lambda x: np.sum(x) - tot_budget},
{'type': 'eq', 'fun': lambda x: (sum(x[i] for i in range(0, 10, 5)) - col_budget_list[0])},
{'type': 'eq', 'fun': lambda x: (sum(x[i] for i in range(1, 10, 5)) - col_budget_list[1])},
{'type': 'eq', 'fun': lambda x, advert_len_list[0]: [item for item in x for i in range(advert_len_list[0])]},
{'type': 'eq', 'fun': lambda x, advert_len_list[1]: [item for item in x for i in range(advert_len_list[1])]}]
    # Create an initial matrix
start_matrix = [random.randint(0, 3) for i in range(0, 10)]
# Run optimizer
optimizer_solution = so.minimize(obj_func, start_matrix, method='SLSQP', bounds=[(0, tot_budget)] * 10,
tol=0.01,
options={'disp': True, 'maxiter': 100}, constraints=constraints_list)
return optimizer_solution
# Initialise constraints
tot_budget = 150000
col_budget_list = [100000, 50000]
advert_len_list = [[2,4,3,1], [5,2,3]]
# Run Optimizer
y = optimizer_result(tot_budget, col_budget_list, advert_len_list)
advert_plan = pd.DataFrame(y['x'].reshape(-1,2),columns=["Product-A", "Product-B"])
</code></pre>
<p><strong>EDIT</strong></p>
<p>Here is a mathematical summary of this problem:</p>
<p>I have a 10x2 matrix which I need to optimize that will give me the largest ROI. Here are the constraints:</p>
<ol>
<li>Sum of all the elements in the matrix is equal to the total budget</li>
<li>Sum of all the elements in a column must be equal to the column budget (for e.g sum of column 0 is equal to Prod_A budget)</li>
<li>Individual week budgets can be set (for e.g element [1,3] is equal to £10k)</li>
<li>Timing of each advertisement does not matter</li>
<li>Each advert's duration must fit within the 10-week window for its product, arranged so as to give maximum ROI</li>
</ol>
|
<python><optimization><scipy-optimize><constraint-programming><scipy-optimize-minimize>
|
2023-04-21 06:18:48
| 1
| 439
|
star_it8293
|
76,070,292
| 6,703,783
|
Why am I not able to create a data frame with correct values when combining 2 dataframes
|
<p>I have 2 <code>dataframes</code> and I want to combine them.</p>
<p><code>dataframe 1</code> has columns <code>content</code> and <code>embeddings</code></p>
<pre><code>myquerycontents_and_embeddings_df.content
0 i live in space
1 i live my life to fullest
2 dogs live in kennel
3 we live to eat and not eat to live
4 cricket lives in heart of every indian
5 live and let live
6 my house is in someplace
7 my office is in someotherplace
8 chair is red
Name: content, dtype: object
myquerycontents_and_embeddings_df.embeddings
0 [0.0016913715517148376, -0.013320472091436386,...
1 [-0.01872972585260868, -0.010366685688495636, ...
2 [8.654659177409485e-05, -0.024498699232935905,...
3 [-0.024393899366259575, -0.008192254230380058,...
4 [-0.021614402532577515, -0.006505827885121107,...
5 [-0.01553483959287405, -0.014875221997499466, ...
6 [0.002573014236986637, -0.005427114199846983, ...
7 [0.013354390859603882, -0.007010389119386673, ...
8 [0.00505671463906765, -0.00909961387515068, -0...
Name: embeddings, dtype: object
</code></pre>
<p><code>dataframe2</code> has column <code>cosinesimilarity</code></p>
<pre><code>similarityvaluedf.cosinesimilarity
0 0.994341
1 0.808836
2 0.818914
3 0.727792
4 0.675430
5 0.802331
6 0.849596
7 0.778798
8 0.776794
Name: cosinesimilarity, dtype: float64
</code></pre>
<p>I want to create a new <code>dataframe</code> which has 3 columns and 8 rows but I am getting <code>NaN</code></p>
<pre><code>combineddf = pd.DataFrame((myquerycontents_and_embeddings_df.content,myquerycontents_and_embeddings_df.embeddings,similarityvaluedf.cosinesimilarity),columns=['content','embeddings','cosine_similarity'])
combineddf
</code></pre>
<p><a href="https://i.sstatic.net/ytEZb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ytEZb.png" alt="enter image description here" /></a></p>
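<p>A likely explanation, sketched with toy data: handing the constructor a tuple of Series makes each Series a <em>row</em> (aligned by its old index, hence the <code>NaN</code>s), whereas <code>pd.concat</code> along <code>axis=1</code> lines them up as columns. The short Series below are stand-ins for the real frames:</p>

```python
import pandas as pd

# Toy stand-ins for the real columns
content = pd.Series(['i live in space', 'chair is red'], name='content')
embeddings = pd.Series([[0.1, 0.2], [0.3, 0.4]], name='embeddings')
cosine = pd.Series([0.99, 0.78], name='cosinesimilarity')

# Concatenate side by side (each Series becomes a column)
combineddf = pd.concat([content, embeddings, cosine], axis=1)
combineddf.columns = ['content', 'embeddings', 'cosine_similarity']
print(combineddf.shape)
```

This assumes the frames share the same row index; if they don't, a <code>reset_index(drop=True)</code> on each Series before concatenating avoids misalignment.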
|
<python><apache-spark>
|
2023-04-21 06:00:50
| 1
| 16,891
|
Manu Chadha
|
76,070,248
| 4,238,247
|
Python and Gekko optimization
|
<p>I'm not familiar with optimization, and I want to achieve the following goal:</p>
<p>Suppose I have 3 lists of constants:</p>
<pre><code>list_1 = [1, 2, 3, 4]
list_2 = [5, 8, 10, 11]
list_3 = [20, 27, 89, 100]
</code></pre>
<p>I want to find 3 indexes <code>i1, i2, i3</code>, corresponding to the 3 lists above, such that the sum will be minimized: <code>min(list_1[i1] + list_2[i2] + list_3[i3])</code></p>
<p>subject to some constraints:</p>
<pre><code>i1 >= 0
i2 >= 0
i3 >= 0
i1 + i2 + i3 == 4
64 * i1 + 72 * i2 + 74 * i3 >= 240
</code></pre>
<p>I tried to write it as:</p>
<pre class="lang-py prettyprint-override"><code>from gekko import GEKKO
m = GEKKO()
x_1_storage = m.Array(m.Const, 4)
x_1_values = [1, 2, 3, 4]
i = 0
for xi in x_1_storage:
xi.value = x_1_values[i]
i += 1
x_2_storage = m.Array(m.Const, 4)
x_2_values = [5, 8, 10, 11]
i = 0
for xi in x_2_storage:
xi.value = x_2_values[i]
i += 1
x_3_storage = m.Array(m.Const, 4)
x_3_values = [20, 27, 89, 100]
i = 0
for xi in x_3_storage:
xi.value = x_3_values[i]
i += 1
x,y,z = m.Array(m.Var,3,integer=True,lb=0)
m.Minimize(x_1_storage[x.value.value] + x_2_storage[y.value.value] + x_3_storage[z.value.value])
m.Equations([x>=0,
y>=0,
z>=0,
x+y+z==4,
64*x + 72*y + 74*z >= 240])
m.options.SOLVER = 1
m.solve()
print('Objective: ', m.options.OBJFCNVAL)
print('x: ', x.value[0])
print('y: ', y.value[0])
print('z: ', z.value[0])
</code></pre>
<p>However, the objective value printed is 26, when the correct objective value should instead be 4 since <code>x_1_storage[4] = 4</code>. Somehow, the first element in each list gets added as the objective value. My impression is something is wrong with the way the objective is expressed:</p>
<pre class="lang-py prettyprint-override"><code>m.Minimize(x_1_storage[x.value.value] + x_2_storage[y.value.value] + x_3_storage[z.value.value])
</code></pre>
<p>However, I'm not quite sure what to do in this case. Any hint is appreciated.</p>
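<p>The search space here is tiny (three indexes, each 0 to 3), so as a sanity check independent of Gekko the optimum can be brute-forced in plain Python. This enumerates every feasible combination and keeps the smallest objective:</p>

```python
from itertools import product

list_1 = [1, 2, 3, 4]
list_2 = [5, 8, 10, 11]
list_3 = [20, 27, 89, 100]

# Enumerate all 4^3 = 64 index combinations and apply the constraints
best = None
for i1, i2, i3 in product(range(4), repeat=3):
    if i1 + i2 + i3 == 4 and 64 * i1 + 72 * i2 + 74 * i3 >= 240:
        value = list_1[i1] + list_2[i2] + list_3[i3]
        if best is None or value < best[0]:
            best = (value, (i1, i2, i3))

print(best)  # (minimum objective, (i1, i2, i3))
```

Whatever the solver reports can be compared against this exhaustive result; a mismatch points at the model formulation (here, indexing an array with a decision variable is not something the algebraic model can express directly).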
|
<python><optimization><gekko>
|
2023-04-21 05:50:16
| 1
| 451
|
edhu
|
76,070,169
| 16,935,119
|
Extract data from csv ignoring spaces in header
|
<p>I have a csv in my project folder (file2.csv). But here the "Name" column has a trailing space, "Name ", as shown. Can the white space be ignored while reading the file?</p>
<pre><code>data2 = {'Name ' : ['Tom', 'Nick', 'Tom'], 'Cat' : ['xxx', 'yyy', 'zzz']}
d2 = pd.DataFrame(data2)
</code></pre>
<p>So while extracting the file using below code</p>
<pre><code>asd = pd.read_csv('datafiles/file2.csv', nrows=1, usecols=["Name"])
ValueError: Usecols do not match columns, columns expected but not found: ['Name']
</code></pre>
<p>I get the above error since "Name" is not there, but "Name " is. So, in spite of the space, can I make sure the column is read even if I pass "Name"?</p>
<p>Note : Actual data is in csv format in my project folder and d2 here is just for your reference</p>
<p>However, I tried the code below and get an error:</p>
<pre><code>In [55]: df = pd.read_csv('file.txt', sep = '|', usecols=lambda x : x.strip() in "Name")
Traceback (most recent call last):
  File "<ipython-input-55-016ee29c65e6>", line 1, in <module>
    df = pd.read_csv('file.txt', sep = '|', usecols=lambda x : x.strip() in "Name")
  File "C:\Program Files\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 562, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "C:\Program Files\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 315, in _read
    parser = TextFileReader(filepath_or_buffer, **kwds)
  File "C:\Program Files\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 645, in __init__
    self._make_engine(self.engine)
  File "C:\Program Files\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 799, in _make_engine
    self._engine = CParserWrapper(self.f, **self.options)
  File "C:\Program Files\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 1213, in __init__
    self._reader = _parser.TextReader(src, **kwds)
  File "pandas\parser.pyx", line 417, in pandas.parser.TextReader.__cinit__ (pandas\parser.c:3977)
TypeError: 'function' object is not iterable
</code></pre>
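<p>A sketch of one approach, assuming a reasonably recent pandas (callable <code>usecols</code> has been supported for years, unlike the old Anaconda2 install in the traceback): pass a callable that strips each header before comparing, then strip the surviving column names. An in-memory CSV stands in for file2.csv:</p>

```python
import io
import pandas as pd

# In-memory stand-in for file2.csv, with the trailing space in "Name "
csv_text = "Name ,Cat\nTom,xxx\nNick,yyy\n"

# The callable receives each raw header; strip before comparing
asd = pd.read_csv(io.StringIO(csv_text),
                  usecols=lambda c: c.strip() == "Name")

# Normalize the remaining column names as well
asd.columns = asd.columns.str.strip()
print(list(asd.columns))
```

Note the comparison uses <code>==</code> rather than <code>in "Name"</code>: substring membership would also match headers like "Na", which is probably not intended.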
|
<python><pandas>
|
2023-04-21 05:32:57
| 3
| 1,005
|
manu p
|
76,070,016
| 6,569,084
|
pytorch simulate dropout when evaluating model against test data
|
<p>We are training a resnet model using the CIFAR 10 dataset and we are trying to do the following:</p>
<p>After we have trained the model, we want to simulate dropout during model evaluation, while feeding the test data to it.
I know it might sound weird because dropout is a regularization mechanism, but we are doing it as part of an experiment.</p>
<p>One option we are considering is to use the <code>state_dict</code>, create a deep copy to keep the original values, and then modify the values manually.</p>
<p>We also saw that <code>net.eval()</code> switches the dropout layer into eval mode instead of training mode; maybe there is a way to use this mechanism to simulate dropout during evaluation?</p>
<p>Are there better ways to achieve what I am trying to do?</p>
|
<python><deep-learning><pytorch>
|
2023-04-21 04:58:55
| 1
| 702
|
Eitank
|
76,069,849
| 19,113,780
|
matplotlib and Axes3D give a blank picture
|
<p>I want to draw a 3D picture with these data using <code>matplotlib</code> and <code>Axes3D</code></p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
data = np.array([[4244.95, 4151.69, 2157.41, 829.769, 426.253, 215.655],
[8263.14, 4282.98, 2024.68, 1014.6, 504.515, 250.906],
[8658.01, 4339.53, 2173.56, 1087.65, 544.069, 544.073]])
x = np.array([1, 2, 4, 8, 16, 32])
y = np.array([2, 4, 8])
x, y = np.meshgrid(x, y)
z = data
fig = plt.figure()
ax = Axes3D(fig)
ax.plot_surface(x, y, z, rstride=1, cstride=1, cmap='rainbow')
ax.set_xlabel('Stride')
ax.set_ylabel('Bitwidth')
ax.set_zlabel('Performance')
plt.show()
</code></pre>
<p>In my computer, it gives a blank picture. But I run this code in other two computers, one is correct, the other one is blank.</p>
<p><code>matplotlib</code>: 3.7.1</p>
<p><code>numpy</code>: 1.24.2</p>
<p>I have tried in windows11 and wsl ubuntu-20.04, but still the blank pictures.</p>
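<p>A likely cause, sketched below: on recent matplotlib, constructing <code>Axes3D(fig)</code> no longer attaches the axes to the figure automatically, which leaves a blank canvas on some installations while older versions still render. Asking the figure for a 3D subplot attaches the axes properly. The Agg backend is used only so the sketch runs headless:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, only for this sketch
import matplotlib.pyplot as plt
import numpy as np

data = np.array([[4244.95, 4151.69, 2157.41, 829.769, 426.253, 215.655],
                 [8263.14, 4282.98, 2024.68, 1014.6, 504.515, 250.906],
                 [8658.01, 4339.53, 2173.56, 1087.65, 544.069, 544.073]])
x, y = np.meshgrid(np.array([1, 2, 4, 8, 16, 32]), np.array([2, 4, 8]))

fig = plt.figure()
# add_subplot attaches the 3D axes to the figure, unlike bare Axes3D(fig)
ax = fig.add_subplot(projection="3d")
surf = ax.plot_surface(x, y, data, rstride=1, cstride=1, cmap="rainbow")
ax.set_xlabel("Stride")
ax.set_ylabel("Bitwidth")
ax.set_zlabel("Performance")
print(len(ax.collections))
```

An alternative that keeps the original construction is <code>Axes3D(fig, auto_add_to_figure=False)</code> followed by <code>fig.add_axes(ax)</code>, but support for that constructor path has been deprecated, so the subplot form above is the safer fix.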
|
<python><matplotlib>
|
2023-04-21 04:15:19
| 2
| 317
|
zhaozk
|
76,069,846
| 21,107,707
|
Register function raising a NameError when referencing the class?
|
<p>I am trying to register various methods within a class using a registry function. This is the relevant parts of the code I have:</p>
<pre class="lang-py prettyprint-override"><code>class Quantity:
_CONVERT_FUNCS = []
def __register_conversion(func):
Quantity._CONVERT_FUNCS.append(func)
return func
@__register_conversion
def function1(a, b):
return a + b
@__register_conversion
def function2(a, b):
return a * b
</code></pre>
<p>When I try to run this, I get the following error:</p>
<pre><code> Quantity._CONVERT_FUNCS.append(func)
^^^^^^^^
NameError: name 'Quantity' is not defined
</code></pre>
<p>How can I fix the auto registry for the functions?</p>
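<p>One minimal fix, sketched below: the name <code>Quantity</code> is only bound after the entire class body has finished executing, so the decorator cannot reference it. A default argument, evaluated at <code>def</code> time, can capture the list that already exists in the class namespace:</p>

```python
class Quantity:
    _CONVERT_FUNCS = []

    # The default argument binds the list at definition time, since the
    # class name "Quantity" does not exist yet while the body executes.
    def __register_conversion(func, _funcs=_CONVERT_FUNCS):
        _funcs.append(func)
        return func

    @__register_conversion
    def function1(a, b):
        return a + b

    @__register_conversion
    def function2(a, b):
        return a * b

print(len(Quantity._CONVERT_FUNCS))
```

Other common patterns are moving the registration decorator outside the class entirely, or registering in <code>__init_subclass__</code>; the default-argument trick is just the smallest change to the original code.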
|
<python><python-3.x>
|
2023-04-21 04:13:16
| 1
| 801
|
vs07
|
76,069,841
| 16,527,596
|
Why can't I call the functions within the class?
|
<p>I just started Python OOP. I have to build a simple class and function, but I don't understand why it works in one case, but doesn't in the other.</p>
<pre class="lang-py prettyprint-override"><code>class Digital_signal_information:
def __init__(self, signal_power: float, noise_power: float, n_bit_mod: int):
self.signal_power = signal_power
self.noise_power = noise_power
self.n_bit_mod = n_bit_mod
class Line:
def __init__(self,loss_coefficient: float, length: int):
self.loss_coefficient = loss_coefficient
self.length = length
def Loss(self, loss):
self.loss = loss_coefficient * length
BPSK = Digital_signal_information(0.001, 0, 1)
QPSK = Digital_signal_information(0.001, 0, 2) # In these 4 cases it works fine.
Eight_QAM = Digital_signal_information(0.001, 0, 3)
Sixteen_QAM = Digital_signal_information(0.001, 0, 4)
# But if I do
a = Line(1.0, 2);
# and try to call a.Loss, nothing is shown.
</code></pre>
<p>In the exercise I have to create the <code>Line</code> class on which I have to use the <code>signal_power</code> and <code>length</code> attributes. I also have to use a function (or property) (correct me if I'm wrong, I'm still getting used to the OOP vocabulary). But as said, I don't understand why it works in one part, but doesn't in the other.</p>
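<p>A corrected sketch of the <code>Line</code> class: inside the method the attributes must be read via <code>self</code> (bare <code>loss_coefficient * length</code> raises <code>NameError</code>), the unused <code>loss</code> parameter can be dropped, calling a method needs parentheses, and returning the value makes the result visible:</p>

```python
class Line:
    def __init__(self, loss_coefficient: float, length: int):
        self.loss_coefficient = loss_coefficient
        self.length = length

    def loss(self):
        # Read the instance attributes through self and return the result
        return self.loss_coefficient * self.length

a = Line(1.0, 2)
print(a.loss())  # calling with () actually runs the method
```

Accessing <code>a.Loss</code> without parentheses merely shows the bound method object; nothing runs and nothing is printed, which matches the behavior described in the question.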
|
<python><oop>
|
2023-04-21 04:10:25
| 2
| 385
|
Severjan Lici
|
76,069,720
| 6,747,697
|
Django ORM distinct over annotated value
|
<p>I have a model which has a <code>DateTimeField</code> timestamp. What would be the best way to see the number of entries for each day (month, hours etc.) just by using Django ORM (not raw SQL)?</p>
|
<python><django><django-orm>
|
2023-04-21 03:38:28
| 2
| 383
|
Dmitry Sazhnev
|
76,069,659
| 20,122,390
|
How do I query a realtime firebase JSON database for a field?
|
<p>I would like to query a realtime Firebase database to find a document according to its source.
These are the fields that the document has:</p>
<pre><code>class Document(BaseModel):
_id: str
series: str
Country: str
Date: str
source: str
</code></pre>
<p>For which I have defined the following function:</p>
<pre><code>def get_source(self,
path: str,
source: str,
):
'''Method responsible only for filtering by document source'
(like google, youtube, etc.)'''
return self.db.reference(path)\
.order_by_child("source")\
.equal_to(source).get()
</code></pre>
<p>Subsequently, I build the endpoint that uses this function:</p>
<pre><code>@router.get("/{source}")
def get_by_source(
source: str,
#only_id: Optional[bool]=False
):
try:
return fb_client.get_source(settings.FB_PATH, source)
except Exception as e:
return {"error": str(e)}
</code></pre>
<p>However when I test the endpoint I get this error:</p>
<blockquote>
<p>{<br />
"error": "Index not defined, add ".indexOn": "source", for path "/news", to the rules"<br />
}</p>
</blockquote>
<p>What does this mean? I am very new to Firebase.</p>
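<p>The error message itself suggests the fix: Realtime Database queries ordered by a child key require an index on that key in the security rules. A sketch of the rules, assuming the data lives under <code>/news</code> exactly as the message says:</p>

```json
{
  "rules": {
    "news": {
      ".indexOn": ["source"]
    }
  }
}
```

After saving this in the Firebase console's Rules tab (merged with whatever read/write rules already exist), the <code>order_by_child("source")</code> query should run without the index error.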
|
<python><database><firebase><firebase-realtime-database><fastapi>
|
2023-04-21 03:22:55
| 0
| 988
|
Diego L
|
76,069,634
| 11,782,991
|
Python 3 and pandas parsing a long date format in Excel
|
<p>I'm trying to use python and pandas to select specific dates from an excel spreadsheet. However, the code below always returns: Empty DataFrame</p>
<p>It does show the correct columns, and I can select other columns. Am I doing anything wrong here? I suspect the date format in the spreadsheet is the problem, e.g. 'Saturday, August 12, 2023'.</p>
<pre><code>#!/usr/bin/env python3
import datetime
import os
import pandas as pd
from pandasql import sqldf
# Load Excel file into DataFrame and use the second row as the column names
url = "~/Master Broadcast Schedule.xlsx"
df = pd.read_excel(url, sheet_name='2023', header=1)
# Convert "Air date" column to datetime format
df['Air date'] = pd.to_datetime(df['Air date'], format="%A, %B %d, %Y")
# Define SQL query to select data for a specific date
date_str = "2023-05-01" # replace with your desired date
date_obj = datetime.datetime.strptime(date_str, '%Y-%m-%d')
query = """
SELECT *
FROM df
WHERE "Air date" = '{}'
""".format(date_obj.strftime('%Y-%m-%d'))
# Execute SQL query on DataFrame
result = sqldf(query)
# Print query result
print(result)
</code></pre>
<p>I was expecting to get the row of the date 2023-05-01.</p>
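<p>A sketch of one possible fix: parse the long format with <code>pd.to_datetime</code> and filter with a boolean mask instead of sqldf, where the <code>datetime64</code> column ends up compared against a plain string. A tiny DataFrame stands in for the spreadsheet:</p>

```python
import pandas as pd

# Toy stand-in for the Excel sheet, using the long date format
df = pd.DataFrame({"Air date": ["Saturday, August 12, 2023",
                                "Monday, May 1, 2023"],
                   "Show": ["A", "B"]})

# Parse the long format into real timestamps
df["Air date"] = pd.to_datetime(df["Air date"], format="%A, %B %d, %Y")

# Compare against a Timestamp, not a string, and filter with a mask
result = df[df["Air date"] == pd.Timestamp("2023-05-01")]
print(result["Show"].tolist())
```

If sticking with sqldf is required, the datetime column would need to be converted to an ISO string column first, since SQLite compares dates as text.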
|
<python><pandas>
|
2023-04-21 03:15:11
| 2
| 332
|
nwood21
|
76,069,561
| 1,326,306
|
Is there a way to print tabular data, fits the terminal size using python?
|
<p>I want to find an approach to print a list as a table. There are many libraries that help with that, but when the list is too long, it's not convenient to view the results.</p>
<pre><code>root@debian:~# print.py
name | score
-----+------
Tim | 99
Ana | 80
Jim | 100
Tim | 100
Tom | 150
... long outputs ...
Lily | 100
Lucy | 120
</code></pre>
<p>We could take advantage of the screen width by repeating the table side by side on the right, reducing the table length.</p>
<p>It's better to print as:</p>
<pre><code>name | score name | score name | score
-----+------ -----+------ -----+------
Tom | 100 Jim | 100 Tim | 99
Ana | 80 Lily | 99
</code></pre>
<p>Can any lib do that?</p>
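<p>A hand-rolled sketch of the idea: take the preformatted body lines, work out how many side-by-side panes fit in the terminal width (obtained in real use from <code>shutil.get_terminal_size().columns</code>), and join the panes row by row. The function name <code>columnize</code> is hypothetical:</p>

```python
import math

def columnize(lines, term_width, gap=2):
    """Arrange preformatted table lines into side-by-side panes that
    fit within term_width columns."""
    width = max(len(l) for l in lines)
    # How many panes of this width (plus a gap) fit on one screen row
    panes = max(1, (term_width + gap) // (width + gap))
    per_pane = math.ceil(len(lines) / panes)
    out = []
    for row in range(per_pane):
        chunk = []
        for p in range(panes):
            i = p * per_pane + row
            if i < len(lines):
                chunk.append(lines[i].ljust(width))
        out.append((" " * gap).join(chunk).rstrip())
    return out
```

A fuller version would prepend the header and separator lines to each pane; alternatively, the <code>rich</code> library's <code>Columns</code> renderable lays arbitrary content out side by side with terminal-width awareness.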
|
<python>
|
2023-04-21 02:54:01
| 2
| 901
|
HUA Di
|
76,069,484
| 5,528,270
|
Obtaining detected object names using YOLOv8
|
<p>We are trying to get the detected object names using Python and YOLOv8 with the following code.</p>
<pre><code>import cv2
from ultralytics import YOLO
def main():
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
model = YOLO("yolov8n.pt")
while True:
ret, frame = cap.read()
result = model(frame, agnostic_nms=True)[0]
print(result)
if cv2.waitKey(30) == 27:
break
cap.release()
cv2.destroyAllWindows()
if __name__ == "__main__":
main()
</code></pre>
<p>The following two types are shown on the log.</p>
<pre><code>0: 384x640 1 person, 151.2ms
Speed: 0.6ms preprocess, 151.2ms inference, 1.8ms postprocess per image at shape (1, 3, 640, 640)
</code></pre>
<p>The second log is the one we displayed using <code>print</code>; how do we extract <code>person</code> from it? Presumably we get <code>person</code> by looking up 0 in <code>names</code>, but where do we get the 0 from?</p>
<pre><code>ultralytics.yolo.engine.results.Results object with attributes:
boxes: ultralytics.yolo.engine.results.Boxes object
keypoints: None
keys: ['boxes']
masks: None
names: {0: 'person', 1: 'bicycle', 2: 'car', 3: 'motorcycle', 4: 'airplane', 5: 'bus', 6: 'train', 7: 'truck', 8: 'boat', 9: 'traffic light', 10: 'fire hydrant', 11: 'stop sign', 12: 'parking meter', 13: 'bench', 14: 'bird', 15: 'cat', 16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear', 22: 'zebra', 23: 'giraffe', 24: 'backpack', 25: 'umbrella', 26: 'handbag', 27: 'tie', 28: 'suitcase', 29: 'frisbee', 30: 'skis', 31: 'snowboard', 32: 'sports ball', 33: 'kite', 34: 'baseball bat', 35: 'baseball glove', 36: 'skateboard', 37: 'surfboard', 38: 'tennis racket', 39: 'bottle', 40: 'wine glass', 41: 'cup', 42: 'fork', 43: 'knife', 44: 'spoon', 45: 'bowl', 46: 'banana', 47: 'apple', 48: 'sandwich', 49: 'orange', 50: 'broccoli', 51: 'carrot', 52: 'hot dog', 53: 'pizza', 54: 'donut', 55: 'cake', 56: 'chair', 57: 'couch', 58: 'potted plant', 59: 'bed', 60: 'dining table', 61: 'toilet', 62: 'tv', 63: 'laptop', 64: 'mouse', 65: 'remote', 66: 'keyboard', 67: 'cell phone', 68: 'microwave', 69: 'oven', 70: 'toaster', 71: 'sink', 72: 'refrigerator', 73: 'book', 74: 'clock', 75: 'vase', 76: 'scissors', 77: 'teddy bear', 78: 'hair drier', 79: 'toothbrush'}
orig_img: array([[[51, 58, 64],
[52, 59, 65],
[54, 59, 65],
...,
[64, 68, 74],
[62, 67, 73],
[62, 67, 73]],
[[51, 58, 64],
[53, 59, 65],
[54, 59, 65],
...,
[63, 68, 74],
[62, 67, 73],
[62, 67, 73]],
[[53, 58, 64],
[53, 58, 64],
[53, 58, 64],
...,
[61, 67, 73],
[61, 67, 73],
[61, 67, 73]],
...,
[[43, 48, 58],
[42, 47, 57],
[41, 46, 56],
...,
[24, 35, 49],
[23, 34, 48],
[23, 34, 48]],
[[44, 48, 59],
[43, 47, 57],
[42, 46, 56],
...,
[26, 35, 49],
[26, 35, 49],
[24, 33, 48]],
[[45, 48, 59],
[43, 45, 56],
[40, 43, 54],
...,
[26, 35, 49],
[26, 35, 49],
[25, 33, 48]]], dtype=uint8)
orig_shape: (720, 1280)
path: 'image0.jpg'
probs: None
speed: {'preprocess': 1.6682147979736328, 'inference': 79.47301864624023, 'postprocess': 1.0020732879638672}
</code></pre>
<p>We would like a solution along these lines, but if that is not possible we can use another method, as long as it combines Python and YOLOv8. We plan to display bounding boxes and object names.</p>
<h2>Additional Information</h2>
<p>I changed the code as follows.</p>
<pre><code> ret, frame = cap.read()
# result = model(frame, agnostic_nms=True)[0]
result = model([frame])[0]
boxes = result.boxes
masks = result.masks
probs = result.probs
print("[boxes]==============================")
print(boxes)
print("[masks]==============================")
print(masks)
print("[probs]==============================")
print(probs)
</code></pre>
<p>Still, the <code>person</code> label is not included in the output below. How should we determine it?</p>
<pre><code>[boxes]==============================
WARNING ⚠️ 'Boxes.boxes' is deprecated. Use 'Boxes.data' instead.
ultralytics.yolo.engine.results.Boxes object with attributes:
boxes: tensor([[4.7356e+01, 7.2858e+00, 1.1974e+03, 7.1092e+02, 8.6930e-01, 0.0000e+00]])
cls: tensor([0.])
conf: tensor([0.8693])
data: tensor([[4.7356e+01, 7.2858e+00, 1.1974e+03, 7.1092e+02, 8.6930e-01, 0.0000e+00]])
id: None
is_track: False
orig_shape: tensor([ 720, 1280])
shape: torch.Size([1, 6])
xywh: tensor([[ 622.4028, 359.1004, 1150.0942, 703.6293]])
xywhn: tensor([[0.4863, 0.4988, 0.8985, 0.9773]])
xyxy: tensor([[ 47.3557, 7.2858, 1197.4500, 710.9150]])
xyxyn: tensor([[0.0370, 0.0101, 0.9355, 0.9874]])
[masks]==============================
None
[probs]==============================
None
</code></pre>
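<p>One way to read the labels, sketched in plain Python so it runs without ultralytics: the printed output shows that <code>names</code> maps class ids to labels and that each box carries its class id in <code>cls</code> (here <code>tensor([0.])</code>). In real code the lookup would be roughly <code>[result.names[int(c)] for c in result.boxes.cls]</code>; the dict and ids below are stand-ins for those objects:</p>

```python
# Stand-ins mimicking the Results object's attributes:
#   names    -> the id-to-label dict from the log
#   cls_ids  -> result.boxes.cls, e.g. tensor([0.]) for one person
names = {0: 'person', 1: 'bicycle', 2: 'car'}
cls_ids = [0.0]

# Cast each class id to int and look it up in the names dict
labels = [names[int(c)] for c in cls_ids]
print(labels)
```

The ids come out of the detector as floats inside the boxes tensor, hence the <code>int()</code> cast before indexing.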
|
<python><python-3.x><object-detection><yolo><yolov8>
|
2023-04-21 02:27:05
| 4
| 1,024
|
Ganessa
|
76,069,337
| 14,315,663
|
How to create a function which applies lag to a numpy array in python?
|
<p>Here are 3 numpy arrays with some numbers:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
arr_a = np.array([[0.1, 0.2, 0.3, 0.4],
[0.2, 0.3, 0.4, 0.5],
[0.3, 0.4, 0.5, 0.6],
[0.4, 0.5, 0.6, 0.7],
[0.5, 0.6, 0.7, 0.8],
[0.6, 0.7, 0.8, 0.9]])
arr_b = np.array([[0.15, 0.25, 0.35, 0.45],
[0.35, 0.45, 0.55, 0.65],
[0.55, 0.65, 0.75, 0.85],
[0.75, 0.85, 0.95, 1.05],
[0.95, 1.05, 1.15, 1.25],
[1.15, 1.25, 1.35, 1.45]])
arr_c = np.array([[0.3, 0.6, 0.9, 1.2],
[0.6, 0.9, 1.2, 1.5],
[0.9, 1.2, 1.5, 1.8],
[1.2, 1.5, 1.8, 2.1],
[1.5, 1.8, 2.1, 2.4],
[1.8, 2.1, 2.4, 2.7]])
</code></pre>
<p>Each array has a shape of (6, 4). Where rows=person and columns=time(seconds), consider each row to represent a unique person and each column to represent acceleration associated with that person for that particular moment in time.</p>
<p>I would like to create a function called <code>calc_acc_change</code> which computes the change in acceleration according to a given lag value. I would like this function to take in an array and a lag value (default=2) which satisfies the following formula: <code>acc_change(t) = acc(t) - acc(t- lag)</code>, where t=time. And I would like the output to remain an array.</p>
<p>I've started my function as follow, but I don't know how to complete it:</p>
<pre class="lang-py prettyprint-override"><code>def calc_acc_change(array, lag=2):
...
return acc_change
</code></pre>
<p>To test the function works, please input <code>arr_a</code>, <code>arr_b</code> and <code>arr_c</code> into the function and print the output.</p>
<p>Any help is much appreciated :)</p>
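<p>A minimal sketch of one way to write <code>calc_acc_change</code>: shift the array by <code>lag</code> columns along the time axis and subtract. The first <code>lag</code> time steps have no earlier sample, so the result has <code>lag</code> fewer columns while remaining a NumPy array. The two rows below stand in for the full arrays:</p>

```python
import numpy as np

def calc_acc_change(array, lag=2):
    """acc_change(t) = acc(t) - acc(t - lag), vectorized over all rows."""
    array = np.asarray(array)
    # Columns lag.. minus columns 0..-lag give the lagged difference
    return array[:, lag:] - array[:, :-lag]

arr_a = np.array([[0.1, 0.2, 0.3, 0.4],
                  [0.2, 0.3, 0.4, 0.5]])
print(calc_acc_change(arr_a))
```

If the output should keep the original shape, the missing leading columns can instead be padded with <code>np.nan</code> via <code>np.concatenate</code>; the slicing version above simply drops them.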
|
<python><arrays><numpy><function><lag>
|
2023-04-21 01:43:57
| 1
| 595
|
kiwi
|
76,069,280
| 11,531,123
|
Tensorflow 2: GPU training is not faster than CPU training
|
<p>Recently I've been fiddling around with tensorflow 2.0. I had not used it in 2-3 years, but I know that I can use my GPU. I had most recently worked with PyTorch and when comparing my computer vs someone else that didn't have a GPU and was using Colab, it was night and day. My machine was so much faster.</p>
<p>But for some reason, as I have been running some small tests, I felt like the training was going too slowly. And then when I checked the speed by switching devices to CPU, the speed was the same.</p>
<p>Some context about my machine. I set up a new conda environment to mirror the one for the Tensorflow Developer Exam. I'm running TF 2.9.0 with Python 3.8.0 on a GeForce RTX 2060. I'm also running this on Windows 10. I did not re-download and update my CUDA libraries from a few years ago, but I checked and tensorflow recognizes my GPU.</p>
<p>Here is the code for loading tensorflow and doing GPU checking</p>
<pre><code>import tensorflow as tf
print(tf.__version__)
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
device_lib.list_local_devices()
</code></pre>
<p>And the result</p>
<pre><code>2.9.0
Num GPUs Available: 1
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 18213716215175288244
xla_global_id: -1,
name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 4160159744
locality {
bus_id: 1
links {
}
}
incarnation: 14308843300195357737
physical_device_desc: "device: 0, name: GeForce RTX 2060, pci bus id: 0000:01:00.0, compute capability: 7.5"
xla_global_id: 416903419]
</code></pre>
<p>As you can see, the graphics card is being recognized. I did a basic NN regression test based on some youtube videos I have been watching. It is insurance data and it's pretty small. Only about 1000 training samples and 11 features after transformation. It's all simple numbers. No images or anything complicated. A very simple regression test.</p>
<p>Here is the data download and initial transformation</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
# Read in the insurance data
insurance = pd.read_csv('https://raw.githubusercontent.com/stedy/Machine-Learning-with-R-datasets/master/insurance.csv')
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from sklearn.model_selection import train_test_split
# Create a column transformer
ct = make_column_transformer(
(MinMaxScaler(), ['age', 'bmi', 'children']),
(OneHotEncoder(), ['sex', 'smoker', 'region'])
)
# Create X and y
X = insurance.drop("charges", axis=1)
y = insurance.charges
# Build train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Fit the column transformer to the training data
ct.fit(X_train)
# Transform training and test data with normalization and one hot encoding
X_train_trans = ct.transform(X_train)
X_test_trans = ct.transform(X_test)
</code></pre>
<p>And here is how I made my neural network. Again, I kept it very simple. Here I used the functional API since I'm trying to learn it, but the speed ends up being the same with the Sequential API.</p>
<pre><code>tf.random.set_seed(42)
inputs = tf.keras.Input(shape=X_train_trans[1].shape)
x = tf.keras.layers.Dense(128, activation='relu')(inputs)
x = tf.keras.layers.Dense(64, activation='relu')(x)
outputs = tf.keras.layers.Dense(1)(x)
ins_model_4 = tf.keras.Model(inputs, outputs)
ins_model_4.compile(loss='mae',
optimizer='adam',
metrics=['mae'])
history = ins_model_4.fit(X_train_trans, y_train, epochs=200, verbose=1)
</code></pre>
<p>As you can see, it's a very shallow model. But for some reason it took 30 seconds to train. That felt too long. It should be blazing fast. So I then ran this with tf.device selected for both cpu and gpu like this.</p>
<pre><code># with gpu selected
with tf.device('/gpu:0'):
history = ins_model_4.fit(X_train_trans, y_train, epochs=200, verbose=1)
# with cpu selected
with tf.device('/cpu:0'):
history = ins_model_4.fit(X_train_trans, y_train, epochs=200, verbose=1)
</code></pre>
<p>And I found the results are the same. What is going on here? I have a few guesses.</p>
<p>Do I need to download the new CUDA files? Is it possible for TF to recognize a GPU but not utilize it? Is there something about the data I am using or the regression problem I have defined that is for some reason slow on my network? Did I code the tf.device stuff wrong? I could really use some help resolving this situation.</p>
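<p>To sanity-check whether the GPU is even the bottleneck, here is the back-of-envelope I ran (all numbers are my assumptions, not measurements; insurance.csv has 1338 rows, so the 80% train split is ~1070 samples, and Keras defaults to batch_size=32):</p>

```python
# Rough estimate of fixed per-step overhead vs. compute for this tiny model.
samples = 1070                             # ~80% of insurance.csv's 1338 rows
steps_per_epoch = -(-samples // 32)        # ceil(1070 / 32) = 34 steps
total_steps = steps_per_epoch * 200        # 6800 steps over 200 epochs

# If the fixed per-step cost (Python loop, op dispatch, kernel launch) is
# around 4 ms -- a guess -- total time is dominated by overhead, not math:
est_seconds = total_steps * 0.004          # ~27 s, on CPU or GPU alike
```

<p>If that estimate is in the right ballpark, identical CPU/GPU timings would be expected for a model this small, regardless of CUDA setup.</p>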
|
<python><tensorflow><gpu>
|
2023-04-21 01:23:54
| 0
| 435
|
Khachatur Mirijanyan
|
76,069,215
| 3,755,861
|
plotly polar plot: axis and background color
|
<p>I have discovered plotly and have a few questions: I would like to change all axes to black (not white). Below is modified example code from their webpage, where I have set both the radial and angular axes to black. I can use <code>linecolor = 'black'</code>, but it seems that then the line going to 0 is thicker than the others?</p>
<pre><code>import plotly.graph_objects as go
import pandas as pd
df = pd.read_csv("https://raw.githubusercontent.com/plotly/datasets/master/hobbs-pearson-trials.csv")
fig = go.Figure()
fig.add_trace(go.Scatterpolargl(
r = df.trial_1_r,
theta = df.trial_1_theta,
name = "Trial 1",
marker=dict(size=10, color="mediumseagreen")
))
# Common parameters for all traces
fig.update_traces(mode="markers", marker=dict(line_color='black', opacity=0.7))
fig.update_layout(
title = "",
font_size = 15,
showlegend = False,
polar = dict(
bgcolor = "rgb(223, 223, 223)",
#bgcolor = "white",
angularaxis = dict(
linewidth = 3,
showline=True,
linecolor='black',
gridcolor = "black"
),
radialaxis = dict(
side = "counterclockwise",
showline = True,
linewidth = 2,
gridcolor = "black",
gridwidth = 2,
)
),
paper_bgcolor = "white"
)
fig.show()
fig.write_image("plotlyPolar.svg", scale=2)
fig.write_image("plotlyPolar.jpg", scale=3)
</code></pre>
|
<python><plotly>
|
2023-04-21 01:03:21
| 1
| 452
|
Pugl
|
76,069,205
| 1,503,005
|
How to detect client disconnection when servicing a streaming endpoint with http.server
|
<p>We have a simple web server written in Python's <code>http.server</code>. One of its endpoints serves streaming data - a series of events, with new events sent as they occur. We've successfully set up this endpoint so that it delivers the data as expected. However, if the client disconnects, the server does not detect this, and will continue delivering data down a connection no one's listening to indefinitely.</p>
<p>The relevant bit of code:</p>
<pre class="lang-py prettyprint-override"><code>import json
import queue
import http.server
from common.log import log
logger = log.get_logger()
class RemoteHTTPHandler(http.server.BaseHTTPRequestHandler):
...
def __stream_events(self, start_after_event_id: int) -> None:
"""Stream events starting after the given ID and continuing as new events become available"""
# Get a queue of events, which will include all existing events from the given starting point,
# and be updated with new events as they become available
logger.info(f"Streaming events from ID {start_after_event_id}")
with self._events_stream_manager.stream_events(start_after_event_id) as events_queue:
self.send_response(200)
self.send_header("Content-type", "application/yaml; charset=utf-8")
self.send_header("Connection", "close")
self.end_headers()
# If the server is shutting down, all ongoing streams should terminate
while not self._stop_streams_event.is_set():
try:
# Get the next event;
# if the queue is empty, will block until an event is added, up to a maximum of 1s
try:
data = events_queue.get(timeout=1)
except queue.Empty:
# Send an empty line to keep the HTTP connection from timing out
self.wfile.write(b"\n")
continue
                    # Send the encoded event plus the separator line
self.wfile.write(json.dumps(data, indent=4).encode('utf-8') + b"\n\n---\n\n")
except BrokenPipeError as ex:
# TODO: This does not reliably detect loss of connection
# Broken pipe means the connection was lost,
# either because the client closed it or there was a network error
logger.info(f"Connection closed: {type(ex).__name__}: {ex}", exc_info=True)
return
def serve():
http.server.ThreadingHTTPServer(("", 8090), RemoteHTTPHandler).serve_forever()
</code></pre>
<p>I wrote this under the expectation that, once the client closed the connection, calling <code>self.wfile.write()</code> would raise BrokenPipeError (possibly after some delay for the TCP connection to time out). However, this does not happen; the server will continue to happily write events to a connection no one is listening to without any error for at least 20 minutes.</p>
<p>What is the correct way to check if the client is still there and listening?</p>
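<p>For reference, here is the kind of check I have been sketching (untested against the real server; assumes a POSIX socket API). The idea is that a connection closed by the peer selects as <em>readable</em>, and a peeked <code>recv()</code> then returns <code>b""</code> (EOF) -- whereas checking <em>writability</em> proves nothing, since a socket with free buffer space always reports writable:</p>

```python
import select
import socket

def peer_closed(sock: socket.socket) -> bool:
    """Return True if the remote side has closed the connection.

    A closed TCP connection becomes readable; a peeked recv() that
    returns b"" signals EOF without consuming any real data.
    """
    readable, _, _ = select.select([sock], [], [], 0)
    if not readable:
        return False          # no data and no EOF pending
    try:
        return sock.recv(1, socket.MSG_PEEK) == b""
    except OSError:
        return True           # connection reset or otherwise dead
```

<p>In the handler this would presumably be called as <code>peer_closed(self.connection)</code> before each write, but I have not verified that with <code>http.server</code>.</p>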
<p>Edit:
Per @Saxtheowl's <a href="https://stackoverflow.com/a/76069253/1503005">suggested solution</a>, I tried setting the socket timeout and using <code>select()</code> to check if the socket was writable:</p>
<pre class="lang-py prettyprint-override"><code>import http.server
import json
import queue
import select
import socket
from common.log import log
logger = log.get_logger()
class RemoteHTTPHandler(http.server.BaseHTTPRequestHandler):
...
def __stream_events(self, start_after_event_id: int) -> None:
"""Stream events starting after the given ID and continuing as new events become available"""
# Get a queue of events, which will include all existing events from the given starting point,
# and be updated with new events as they become available
logger.info(f"Streaming events from ID {start_after_event_id}")
with self._events_stream_manager.stream_events(start_after_event_id) as events_queue:
# Send the headers
self.send_response(200)
self.send_header("Content-type", "application/yaml; charset=utf-8")
self.send_header("Connection", "close")
self.end_headers()
# Set a timeout on the underlying socket
self.connection.settimeout(2)
# If the server is shutting down, all ongoing streams should terminate
while not self._stop_streams_event.is_set():
try:
# Get the next event;
# if the queue is empty, will block until an event is added, up to a maximum of 1s
message: bytes
try:
data = events_queue.get(timeout=1)
except queue.Empty:
# Send an empty line to keep the HTTP connection from timing out
logger.debug(f"Sending blank line") # FIXME: For test
message = b"\n"
else:
                        # Send the encoded event plus the separator line
logger.debug(f"Sending event") # FIXME: For test
message = json.dumps(data, indent=4).encode('utf-8') + b"\n\n---\n\n"
# Check if the client is still connected using select
write_ready_fds: list[socket.socket]
logger.debug(f"Connection check") # FIXME: For test
__, write_ready_fds, __ = select.select([], [self.connection], [], 3)
if not write_ready_fds:
logger.info(f"Connection closed")
return
logger.debug(f"Still connected; sending") # FIXME: For test
self.wfile.write(message)
except (BrokenPipeError, TimeoutError) as ex:
# Broken pipe or socket timeout means the connection was lost,
# either because the client closed it or there was a network error
logger.info(f"Connection closed: {type(ex).__name__}: {ex}", exc_info=True)
return
except BaseException as ex:
# Unexpected error
logger.exception(f"Error sending events: {type(ex).__name__}: {ex}")
raise
</code></pre>
<p>However, this did not help; even several minutes after the client disconnected, no timeout errors were raised and <code>select()</code> still returned immediately claiming the socket was writable.</p>
|
<python><http><sockets><streaming><http.server>
|
2023-04-21 01:00:29
| 0
| 635
|
macdjord
|
76,069,185
| 194,305
|
swig ignore std::enable_shared_from_this
|
<p>I am trying to ignore std::enable_shared_from_this base class.</p>
<p>Below are my files</p>
<p>test.h:</p>
<pre><code>#ifndef _TEST_H_INCLUDED_
#define _TEST_H_INCLUDED_
#include <memory>
class Test : public std::enable_shared_from_this<Test> {
public:
Test() : val_(0) {}
virtual ~Test() {}
int GetVal() const;
void SetVal(int val);
private:
int val_;
};
#endif // _TEST_H_INCLUDED_
</code></pre>
<p>test.cpp:</p>
<pre><code>#include "test.h"
int Test::GetVal() const { return val_; }
void Test::SetVal(int val) { val_ = val; }
</code></pre>
<p>test_module.i:</p>
<pre><code>%module test_module
%{
#include "test.h"
%}
%include <memory>
%ignore std::enable_shared_from_this<Test>;
%template(TestFromThis) std::enable_shared_from_this<Test>;
typedef std::enable_shared_from_this<Test> TestFromThis;
%include "test.h"
</code></pre>
<p>test.py:</p>
<pre><code>#!/usr/bin/env python3
import sys
import test_module
if __name__ == "__main__":
    test = test_module.Test()
    res = test.GetVal()
    print("res={}".format(res))
    sys.exit()
</code></pre>
<p>Makefile:</p>
<pre><code>SWIG = swig4
CCX = g++
LD = g++
all: test_module.py
test_module.py: test.h test.cpp test_module.i
${SWIG} -python -py3 -c++ -cppext cpp -I/usr/include/c++/9 test_module.i
${CCX} -std=c++11 -O2 -fPIC -c test.cpp
${CCX} -std=c++11 -O2 -fPIC -c test_module_wrap.cpp -I/usr/include/python3.8
${LD} -shared test.o test_module_wrap.o -o _test_module.so
run: test_module.py
python3 ./test.py
clean:
rm -rf *~ test_module.py *_wrap.* *.o *.so __pycache__
</code></pre>
<p>When I compile I got the following errors:</p>
<p>test_module.i:10: Error: Template 'enable_shared_from_this' undefined.</p>
<p>test.h:6: Warning 401: Nothing known about base class 'std::enable_shared_from_this< Test >'. Ignored.</p>
<p>test.h:6: Warning 401: Maybe you forgot to instantiate 'std::enable_shared_from_this< Test >' using %template.</p>
<p>If I remove every occurrence of std::enable_shared_from_this it will work without problem.</p>
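<p>For what it's worth, the route I have seen suggested elsewhere (untested here) is to use SWIG's shared_ptr support plus a warning filter, instead of trying to <code>%template</code> a std:: class SWIG has no definition for:</p>

```swig
%module test_module
%{
#include "test.h"
%}

%include <std_shared_ptr.i>
%shared_ptr(Test)          // wrap Test as std::shared_ptr<Test> on the Python side

// Silence "Nothing known about base class" for Test rather than
// instantiating std::enable_shared_from_this<Test>:
%warnfilter(401) Test;

%include "test.h"
```

<p>I cannot confirm this builds with swig4 and the Makefile above; it is only the pattern the SWIG shared_ptr documentation points toward.</p>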
|
<python><c++11><templates><swig>
|
2023-04-21 00:54:39
| 1
| 891
|
uuu777
|
76,069,184
| 457,935
|
Django ORM annotate query with length of JSONField array contents
|
<p>I have a basic Model with a JSONField that defaults to a list:</p>
<pre><code>class Map(models.Model):
id = models.UUIDField(
primary_key=True,
editable=False,
default=uuid.uuid4,
)
locations = models.JSONField(
default=list, blank=True, encoder=serializers.json.DjangoJSONEncoder
)
</code></pre>
<p>In my admin list view I'd like a column displaying the length of <code>locations</code>. That is, <code>annotate</code> the queryset to do the equivalent of <code>len(obj.locations)</code> but I haven't found any examples of this scenario.</p>
<p>I know I could write an accessor for the admin class that does <code>len(obj.locations)</code> to add the column/values but I'd also like to sort/order by this field's value.</p>
|
<python><django>
|
2023-04-21 00:54:24
| 1
| 1,237
|
saschwarz
|
76,069,122
| 5,858,752
|
How to group by an integer column and sum up each row except for maybe the last row
|
<p>I have a dataframe with 3 columns. The first 2 columns are <code>id</code> and <code>cost</code>, which are an integer and a float, respectively. The last column is a boolean flag, <code>omit</code>. Rows with the same <code>id</code> are already arranged contiguously. Most rows have <code>omit = False</code>, but for some <code>id</code>s the last row has <code>omit = True</code>.</p>
<p>I want to compute a 4th column, <code>total_cost</code> that is the sum of the <code>cost</code> for each <code>id</code>, but it neglects the last row if <code>omit = True</code>.</p>
<p>Rather than checking the <code>omit</code> flag for each row, I think it's faster to do a first pass that sums up all the cost without accounting for <code>omit</code>, and then subtract the last row for each <code>id</code> if <code>omit = True</code>, so I use</p>
<pre><code>df["total_cost"] = df.groupby("id")["cost"].transform("sum")
</code></pre>
<p>But I am not sure how to do the "subtract the last row for each <code>id</code> if <code>omit = True</code>" part in a vectorized way?</p>
<p>Example:</p>
<pre><code>id cost omit total_cost
0 1.1 False 2.1
0 1.0 False 2.1
0 2.2 True 2.1
1 3.2 False 3.2
1 4.0 True 3.2
2 0.5 False 1.2
2 0.4 False 1.2
2 0.3 False 1.2
</code></pre>
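<p>A one-pass sketch I am considering (it relies on the stated guarantee that <code>omit</code> can only be True on a group's last row, so zeroing omitted costs before the grouped sum is equivalent to summing everything and subtracting the omitted last row):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "id":   [0, 0, 0, 1, 1, 2, 2, 2],
    "cost": [1.1, 1.0, 2.2, 3.2, 4.0, 0.5, 0.4, 0.3],
    "omit": [False, False, True, False, True, False, False, False],
})

# Replace omitted costs with 0, then take the grouped sum in one pass:
df["total_cost"] = (
    df["cost"].mask(df["omit"], 0.0).groupby(df["id"]).transform("sum")
)
```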
|
<python><pandas><dataframe>
|
2023-04-21 00:35:33
| 2
| 699
|
h8n2
|
76,069,066
| 7,250,111
|
How to draw multiple rectangles using Vispy?
|
<p>I referred to this example : <a href="https://vispy.org/gallery/scene/polygon.html#sphx-glr-gallery-scene-polygon-py" rel="nofollow noreferrer">https://vispy.org/gallery/scene/polygon.html#sphx-glr-gallery-scene-polygon-py</a></p>
<p>I want to draw multiple same shaped rectangles, but I don't understand how to set the coordinates.</p>
<pre><code>cx, cy = (0.2, 0.2)
halfx, halfy = (0.1, 0.1)
poly_coords = [(cx - halfx, cy - halfy),
(cx + halfx, cy - halfy),
(cx + halfx, cy + halfy),
(cx - halfx, cy + halfy)]
</code></pre>
<p>The above draws a rectangle, however, when I add more coordinates as below:</p>
<pre><code>from vispy import app
import sys
from vispy.scene import SceneCanvas
from vispy.scene.visuals import Polygon, Ellipse, Rectangle, RegularPolygon
from vispy.color import Color
white = Color("#ecf0f1")
gray = Color("#121212")
red = Color("#e74c3c")
blue = Color("#2980b9")
orange = Color("#e88834")
canvas = SceneCanvas(keys='interactive', title='Polygon Example',
show=True)
v = canvas.central_widget.add_view()
v.bgcolor = gray
v.camera = 'panzoom'
cx, cy = (0.2, 0.2)
halfx, halfy = (0.1, 0.1)
poly_coords = [(cx - halfx, cy - halfy),
(cx + halfx, cy - halfy),
(cx + halfx, cy + halfy),
(cx - halfx, cy + halfy),
(0.2+cx - halfx, 0.2+cy - halfy),
(0.2+cx + halfx, 0.2+cy - halfy),
(0.2+cx + halfx, 0.2+cy + halfy),
(0.2+cx - halfx, 0.2+cy + halfy)]
poly = Polygon(poly_coords, color=red, border_color=white,border_width=3, parent=v.scene)
if __name__ == '__main__':
if sys.flags.interactive != 1:
app.run()
</code></pre>
<p>I get a strange shaped polygon, instead of 2 rectangles.</p>
<p>How could I set separate rectangle coordinates and draw 2 different rectangles?</p>
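<p>My current understanding (please correct me if wrong): each <code>Polygon</code> visual draws one closed outline, so appending a second rectangle's corners to the same list just produces a single 8-vertex polygon. The sketch below builds one coordinate list per rectangle; the vispy part is commented out since it needs the scene objects from the code above:</p>

```python
# Build one coordinate list per rectangle instead of concatenating corners.
def rect_coords(cx, cy, halfx, halfy):
    return [(cx - halfx, cy - halfy),
            (cx + halfx, cy - halfy),
            (cx + halfx, cy + halfy),
            (cx - halfx, cy + halfy)]

rects = [rect_coords(0.2, 0.2, 0.1, 0.1),
         rect_coords(0.4, 0.4, 0.1, 0.1)]

# In the vispy scene (untested sketch -- reuses v, red, white from above):
# for coords in rects:
#     Polygon(coords, color=red, border_color=white, border_width=3,
#             parent=v.scene)
```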
|
<python><vispy>
|
2023-04-21 00:23:00
| 1
| 2,056
|
maynull
|
76,068,961
| 2,981,639
|
Is there a `pip` flag to upgrade to latest release candidate
|
<p>First I want to confirm if this is expected pip behaviour or not:</p>
<p>We're using <code>Sonatype Nexus Repository Manager</code> as an onsite package repository, and <code>setuptools_scm</code> to version our packages. So when I build a release candidate it obtains a version such as <code>0.4.1rc1</code></p>
<p>I publish this to the repository <code>twine upload -r pypi dist/mypkg-0.4.1rc1-py3-none-any.whl</code></p>
<p>If I switch to another project that has <code>mypkg</code> and try to update (say 0.4.0 is currently installed)</p>
<pre><code>$pip install -U mypkg --index-url https://build.xyz.com/repository/pypi-all/simple/
Requirement already satisfied: mypkg in ./.venv/lib/python3.10/site-packages (0.4.0)
</code></pre>
<p>So it won't upgrade <code>0.4.0</code> to <code>0.4.1-rc1</code> - I'd imagine this is how pip is supposed to work, you need to explicitly specify that you want to upgrade to a non-release version but would like confirmation.</p>
<p>Assuming this is correct behaviour, is there a flag to override so I can automatically upgrade to the latest release candidate? And is there then an option to further control what gets installed (i.e. latest release candidate or latest dev build for example). I'm aware of PEP 440 versioning (which is what <code>setuptools_scm</code> follows)</p>
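<p>For concreteness, here is what I would expect the opt-in to look like, based on my reading of the pip docs (the <code>--pre</code> flag tells pip to also consider pre-release and development versions):</p>

```shell
# Opt in to pre-releases when resolving the latest version:
pip install -U --pre mypkg --index-url https://build.xyz.com/repository/pypi-all/simple/

# Or pin an exact pre-release with a PEP 440 specifier (a specifier that
# names a pre-release is accepted without --pre):
pip install "mypkg==0.4.1rc1" --index-url https://build.xyz.com/repository/pypi-all/simple/
```

<p>As far as I can tell there is no finer-grained flag distinguishing rc builds from dev builds; pinning with version specifiers seems to be the usual workaround, but confirmation would be welcome.</p>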
|
<python><pip>
|
2023-04-20 23:53:59
| 1
| 2,963
|
David Waterworth
|
76,068,934
| 4,450,498
|
How can I get the raw byte offsets of individual datasets in an HDF5 file using Python?
|
<p>The byte offsets of individual (contiguous) datasets in an HDF5 file can be determined using the <a href="https://docs.hdfgroup.org/hdf5/develop/_view_tools_view.html#subsecViewToolsViewContent_h5ls" rel="nofollow noreferrer"><code>h5ls</code> tool</a> distributed as part of the <a href="https://www.hdfgroup.org/downloads/hdf5/" rel="nofollow noreferrer">HDF5 library</a>, but I have been unable to figure how to accomplish this task programmatically using Python (specifically, the <a href="https://docs.h5py.org/en/stable/index.html" rel="nofollow noreferrer"><code>h5py</code> package</a>).</p>
<p>I have found R's <a href="https://www.bioconductor.org/packages/release/bioc/html/rhdf5.html" rel="nofollow noreferrer"><code>rhdf5</code> package</a>, which includes a <a href="https://rdrr.io/bioc/rhdf5/man/h5ls.html" rel="nofollow noreferrer"><code>h5ls</code> function</a>, but <code>h5py</code> does not have anything similar as far as I can tell. I also found <a href="https://stackoverflow.com/questions/75009735/how-can-i-get-raw-byte-offsets-and-bytes-read-from-that-offset-using-the-hdf5-c">How can I get raw byte offsets and bytes read from that offset using the HDF5 C API?</a>, but it has no answers and is also asking about C instead of Python.</p>
<p>I'm not necessarily looking for a one-line solution, just a way to access this information using Python at all. I'm not hard-set on using <code>h5py</code> specifically, either. If there's another package or solution that is able to do this better, I'd be happy to consider it.</p>
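<p>To frame possible answers, here is the kind of round-trip check I am hoping to be able to write. The <code>dset.id.get_offset()</code> call is my guess at the API from skimming h5py's low-level docs (it appears to wrap <code>H5Dget_offset</code> and to return the byte offset for contiguous, allocated datasets, <code>None</code> otherwise):</p>

```python
import os
import tempfile

import h5py
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "offset_demo.h5")
data = np.arange(10, dtype="<f8")

with h5py.File(path, "w") as f:
    dset = f.create_dataset("data", data=data)  # contiguous storage by default
    offset = dset.id.get_offset()               # raw byte offset in the file

# Sanity check: read the raw bytes straight from the file at that offset.
with open(path, "rb") as f:
    f.seek(offset)
    raw = np.frombuffer(f.read(data.nbytes), dtype="<f8")
```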
|
<python><hdf5><h5py>
|
2023-04-20 23:46:32
| 1
| 992
|
L0tad
|
76,068,754
| 758,986
|
Why don't breakpoints work in Tensorflow function (in lazy/graph mode)?
|
<p>I have a code like this that loads data:</p>
<pre class="lang-py prettyprint-override"><code>def parse_tf_record(example_proto: tf.train.Example, symbols: t.List[str], all_indicators: t.List[str], indicators: t.List[str]) -> None:
feature_description = {symbol: tf.io.FixedLenFeature([len(all_indicators)], tf.float32, default_value=0.0) for symbol in symbols}
feature_description[constants.DEF_DAY_ID] = tf.io.FixedLenFeature([], tf.int64, default_value=0)
return tf.io.parse_single_example(example_proto, feature_description)
def get_dataset_from_files(paths: t.List[pl.Path], symbols: t.List[str], all_indicators: t.List[str], indicators: t.List[str]) -> tf.data.TFRecordDataset:
paths: t.List[str] = [str(p) for p in paths]
dataset = tf.data.TFRecordDataset(paths, compression_type=COMPRESSION_TYPE)
dataset = dataset.map(lambda x: parse_tf_record(x, symbols, all_indicators, indicators))
return dataset
</code></pre>
<p>If I set breakpoint in <code>parse_tf_record</code>, PyCharm debugger does not stop there.</p>
<p>I saw many "hints" that this is expected behaviour (<a href="https://stackoverflow.com/questions/51672903/keras-tensorflow-how-to-set-breakpoint-debug-in-custom-layer-when-evaluating">SO</a>, <a href="https://github.com/tensorflow/tensorflow/issues/39320" rel="nofollow noreferrer">github issue</a>), but I have not seen explanation as to why.</p>
<p>So, my question is:</p>
<p><strong>How can it be possible that a function in python is executed</strong> (if I put a <code>print</code>, it prints), <strong>but debugger does not stop there?</strong> How did they achieve this?</p>
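<p>To make the question concrete, here is a toy pure-Python analogue of the "trace once, replay many" model as I understand it -- this is emphatically <em>not</em> TF's actual mechanism, just the shape of it. The decorated body executes exactly once (so a <code>print</code> fires once); later calls replay recorded steps without ever re-entering the body, so a breakpoint placed inside it is never reached again:</p>

```python
# Toy "tracing" decorator: run the body once against a recorder object,
# capture the operations, then replay them on real values.
def trace(fn):
    recorded = []

    class Recorder:
        def __mul__(self, k):
            recorded.append(lambda v: v * k)
            return self
        def __add__(self, k):
            recorded.append(lambda v: v + k)
            return self

    fn(Recorder())          # the Python body runs HERE, exactly once

    def replay(value):
        for step in recorded:   # replaying never re-enters fn's body,
            value = step(value) # so a breakpoint in fn is not hit
        return value
    return replay

@trace
def pipeline(x):
    x = x * 2
    x = x + 3
```

<p>Why the single tracing pass <em>also</em> escapes the PyCharm debugger in the <code>Dataset.map</code> case is exactly what I am asking about.</p>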
|
<python><python-3.x><tensorflow><tensorflow2.0>
|
2023-04-20 22:56:50
| 1
| 2,011
|
DimanNe
|
76,068,692
| 547,231
|
Compiling Mitsuba 2 on Visual Studio 2022 (SCons error)
|
<p>I try to build the <code>mitsuba-msvc2010.sln</code> using Visual Studio 2022. I fail to successfully build this solution; most probably due to some changes in SCons. This is the error I obtain</p>
<pre><code>AttributeError: 'SConsEnvironment' object has no attribute 'has_key':
  File "SConstruct", line 15:
    env = SConscript('build/SConscript.configure')
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\site-packages\SCons\Script\SConscript.py", line 662:
    return method(*args, **kw)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\site-packages\SCons\Script\SConscript.py", line 598:
    return _SConscript(self.fs, *files, **subst_kw)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\site-packages\SCons\Script\SConscript.py", line 285:
    exec(compile(scriptdata, scriptname, 'exec'), call_stack[-1].globals)
  File "build\SConscript.configure", line 111:
    if env.has_key('BOOSTINCLUDE'):
</code></pre>
<p>Was anybody able to build this and/or knows how to fix this problem?</p>
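<p>From the traceback, my suspicion is that this is a Python 2 vs. 3 issue rather than a Visual Studio one: <code>dict.has_key()</code> was removed in Python 3, and newer SCons releases dropped it from <code>SConsEnvironment</code> as well. The replacement at <code>SConscript.configure</code> line 111 would presumably be the <code>in</code> operator, sketched here with a plain dict standing in for the SCons environment:</p>

```python
# Plain dict standing in for the SCons environment object (assumption:
# SConsEnvironment supports membership tests the same way).
env = {"BOOSTINCLUDE": "/usr/include/boost"}

# Python 3 / modern SCons idiom, instead of: if env.has_key('BOOSTINCLUDE'):
if "BOOSTINCLUDE" in env:
    boost_include = env["BOOSTINCLUDE"]
```

<p>Can anyone confirm whether patching the build scripts this way is enough, or whether Mitsuba 2's SCons files need a Python 2 toolchain outright?</p>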
|
<python><python-3.x><visual-studio-2022><python-2.x><scons>
|
2023-04-20 22:36:49
| 1
| 18,343
|
0xbadf00d
|