| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,634,166
| 16,371,459
|
Mel Spectrogram normalization for training neural network for singing voice synthesis
|
<p>What are the recommended mel-spectrogram normalization techniques for training a neural network aimed at singing voice synthesis? My configuration settings are
<code>n_fft = 2048, hop_length = 512, n_mels = 80</code></p>
<p>I have implemented normalization using the code below (taken from the Whisper repo), but it is not yielding satisfactory results.</p>
<pre><code> log_spec = torch.clamp(mel_spec, min=1e-10).log10()
log_spec = torch.maximum(log_spec, log_spec.max() - 8.0)
log_spec = (log_spec + 4.0) / 4.0
</code></pre>
<p>I expected the output to lie between 0 and 1, but it doesn't. Please suggest a suitable mel-spectrogram normalization technique.</p>
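One common alternative worth sketching (not a recommendation specific to singing-voice synthesis): per-utterance min-max normalization of the log-mel spectrogram. The Whisper recipe above shifts by fixed constants, so its output range depends on the spectrogram's absolute level and is not guaranteed to be [0, 1]; min-max scaling lands in [0, 1] by construction.

```python
import torch

def minmax_log_mel(mel_spec: torch.Tensor, eps: float = 1e-10) -> torch.Tensor:
    """Per-utterance min-max normalization of a log-mel spectrogram.

    Unlike the fixed-constant Whisper recipe, this maps each spectrogram
    exactly into [0, 1] regardless of its absolute level.
    """
    log_spec = torch.clamp(mel_spec, min=eps).log10()
    lo, hi = log_spec.min(), log_spec.max()
    return (log_spec - lo) / (hi - lo + 1e-8)
```

Whether per-utterance statistics are appropriate (versus dataset-wide mean/variance normalization) depends on the synthesis model, so treat this as one option to experiment with.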
|
<python><normalization><spectrogram><mel>
|
2023-03-04 07:00:42
| 0
| 318
|
Basir Mahmood
|
75,634,145
| 121,861
|
How to configure pymongo to automatically marshal and unmarshal protobufs
|
<p>I'd like to have protobuf support in pymongo. Specifically, I want to be able to pass protobufs anywhere that I could pass a dict (such as <code>collection.insert_one()</code>) and I want to get the appropriate protobuf instance anywhere I'd otherwise get a dict (such as <code>collection.find_one()</code>).</p>
<p>So far, for my proof of concept, I'm able to do this by converting the protobuf to a Python dict (using <code>json_format.MessageToDict()</code>) and then passing the dict to pymongo, and vice versa. This double-conversion is hacky, inefficient, and I haven't been able to get it to handle all the datatypes I want (such as the binary formats). By "double conversion" I mean that I'm converting first to <code>dict</code> then to the final structure (either <code>protobuf</code> or <code>BSONDocument</code>). The major downside here is that despite calling the function <code>MessageToDict()</code>, the protobuf <code>json_format</code> library converts to a <em>json-safe</em> dict -- <code>bytes</code> are converted to base64, <code>int64</code>s are converted to strings, etc.</p>
<p>I found somebody who seems to have <a href="https://dataform.co/blog/mongodb-protobuf-codec" rel="nofollow noreferrer">done this in golang</a>. (Note that my primary goal is to marshal and unmarshal, not to do the protobuf ID reflection as in his example.) I found the <a href="https://pymongo.readthedocs.io/en/stable/examples/custom_type.html" rel="nofollow noreferrer">pymongo docs on custom types</a> but the way I understand the docs is that the <a href="https://pymongo.readthedocs.io/en/stable/api/bson/codec_options.html#bson.codec_options.TypeCodec" rel="nofollow noreferrer"><code>TypeCodec</code> class</a> is only for individual types (such as <code>Decimal</code>) and <strong>not</strong> for the entire message / document. I also found <code>document_class</code> but that appears to be <a href="https://github.com/mongodb/mongo-python-driver/blob/540562a60630a57d3eb0c06358b19d3882a5de18/bson/codec_options.py#L289" rel="nofollow noreferrer">only for decoding and requires a subclass of <code>MutableMapping</code></a>, which generated protobufs are not.</p>
<p>I found <code>bson.encode()</code> and <code>bson.decode()</code> but, short of monkeypatching, it's not clear how I would override or configure these.</p>
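Absent first-class protobuf support in pymongo, one pattern worth sketching is a thin wrapper collection that centralizes conversion at the driver boundary. Everything here is hypothetical: `to_dict`/`from_dict` are caller-supplied codecs (with real protobufs they might wrap `json_format.MessageToDict`/`ParseDict`, with the json-safety caveats described above, or a hand-written converter that preserves `bytes` and `int64`).

```python
class MessageCollection:
    """Hypothetical wrapper that marshals/unmarshals at the pymongo boundary.

    `collection` is anything with dict-based insert_one/find_one (e.g. a
    pymongo Collection); `to_dict`/`from_dict` are caller-supplied codecs.
    """

    def __init__(self, collection, to_dict, from_dict):
        self._coll = collection
        self._to_dict = to_dict      # message -> dict
        self._from_dict = from_dict  # dict -> message

    def insert_one(self, message):
        return self._coll.insert_one(self._to_dict(message))

    def find_one(self, query=None):
        doc = self._coll.find_one(query)
        return None if doc is None else self._from_dict(doc)
```

This avoids monkeypatching `bson.encode()`/`bson.decode()` at the cost of wrapping each collection method you use.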
|
<python><mongodb><protocol-buffers><pymongo>
|
2023-03-04 06:56:22
| 1
| 3,384
|
James S
|
75,633,851
| 5,694,144
|
Is there an efficient way to determine if a sum of floats will be order invariant?
|
<p>Due to precision limitations in floating point numbers, the order in which numbers are summed can affect the result.</p>
<pre class="lang-py prettyprint-override"><code>>>> 0.3 + 0.4 + 2.8
3.5
>>> 2.8 + 0.4 + 0.3
3.4999999999999996
</code></pre>
<p>This small error can become a bigger problem if the results are then rounded.</p>
<pre class="lang-py prettyprint-override"><code>>>> round(0.3 + 0.4 + 2.8)
4
>>> round(2.8 + 0.4 + 0.3)
3
</code></pre>
<p>I would like to generate a list of random floats such that their rounded sum does not depend on the order in which the numbers are summed. My current brute force approach is O(n!). Is there a more efficient method?</p>
<pre class="lang-py prettyprint-override"><code>import random
import itertools
import math
def gen_sum_safe_seq(func, length: int, precision: int) -> list[float]:
"""
Return a list of floats that has the same sum when rounded to the given
precision regardless of the order in which its values are summed.
"""
invalid = True
while invalid:
invalid = False
nums = [func() for _ in range(length)]
first_sum = round(sum(nums), precision)
for p in itertools.permutations(nums):
if round(sum(p), precision) != first_sum:
invalid = True
print(f"rejected {nums}")
break
return nums
for _ in range(3):
nums = gen_sum_safe_seq(
func=lambda :round(random.gauss(3, 0.5), 3),
length=10,
precision=2,
)
print(f"{nums} sum={sum(nums)}")
</code></pre>
<p>For context, as part of a programming exercise I'm providing a list of floats that model a measured value over time to ~1000 entry-level programming students. They will sum them in a variety of ways. Provided that their code is correct, I'd like for them all to get the same result to simplify checking their code. I do not want to introduce the complexities of floating point representation to students at this level.</p>
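One O(n) alternative (a sketch under the assumption that modest magnitudes are acceptable): draw values that are exact dyadic rationals, i.e. k / 2^m. Each value and every partial sum is then exactly representable in binary floating point, so IEEE addition is exact and every summation order produces a bit-identical result — no permutation check needed.

```python
import random

def gen_dyadic_floats(length: int, scale: int = 1024) -> list[float]:
    """Generate floats whose sum is identical in every summation order.

    Values of the form k/scale (with scale a power of two) are represented
    exactly; as long as the running integer sum of numerators stays well
    below 2**53, every partial sum is also exact, so order cannot matter.
    The randint bounds (a value range of roughly 2.0-4.0) are illustrative.
    """
    return [random.randint(2048, 4096) / scale for _ in range(length)]
```

The trade-off is that the values are quantized to multiples of 1/scale rather than drawn from a continuous distribution, which may or may not matter for the exercise.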
|
<python><algorithm><math>
|
2023-03-04 05:33:51
| 5
| 363
|
John Cole
|
75,633,830
| 16,009,435
|
POST request from python to nodeJS server
|
<p>I am trying to send a POST request from Python to my nodeJS server. I can do this successfully from client-side JS using the fetch API, but how can I achieve it with Python? The code below sends the POST request successfully, but the data/body attached to it never reaches the server. What am I doing wrong and how can I fix it? Thanks in advance.</p>
<p><strong>NOTE:</strong> All my nodeJS routes are set up correctly and work fine!</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>//index.js
'use strict';
const express = require('express')
const cookie = require('cookie-parser') // assumed dependency: cookie() below is otherwise undefined
const app = express()
const PORT = 5000
app.use('/js', express.static(__dirname + '/public/js'))
app.use('/css', express.static(__dirname + '/public/css'))
app.set('view engine', 'ejs')
app.set('views', './views')
app.use(cookie())
app.use(express.json({
limit: '50mb'
}));
app.use('/', require('./routes/pages'))
app.use('/api', require('./controllers/auth'))
app.listen(PORT, '127.0.0.1', function(err) {
if (err) console.log("Error in server setup")
console.log("Server listening on Port", '127.0.0.1', PORT);
})</code></pre>
</div>
</div>
</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>//server file
//served on http://127.0.0.1:5000/api/server
const distribution = async(req, res) => {
//prints an empty object
console.log(req.body)
}
module.exports = distribution;</code></pre>
</div>
</div>
</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>//auth
const express = require('express')
const server = require('./server')
const router = express.Router()
router.post('/server', server)
module.exports = router;</code></pre>
</div>
</div>
</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>//routes
const express = require('express')
const router = express.Router()
router.get('/', loggedIn, (req, res) => {
res.render('test', {
status: 'no',
user: 'nothing'
})
})</code></pre>
</div>
</div>
</p>
<pre><code>#python3
import requests
API_ENDPOINT = "http://127.0.0.1:5000/api/server"
data = '{"test": "testing"}'
response = requests.post(url = API_ENDPOINT, data = data)
print(response)
</code></pre>
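A likely culprit in this setup is the missing Content-Type header: `express.json()` only parses bodies sent as `application/json`, and passing a raw string via `data=` does not set that header, so `req.body` stays an empty object. A sketch of the fix, shown with a prepared request so it runs without the server up:

```python
import requests

API_ENDPOINT = "http://127.0.0.1:5000/api/server"  # local server from the question

# `json=` serializes the dict AND sets Content-Type: application/json,
# which express.json() requires before it will populate req.body.
prepared = requests.Request("POST", API_ENDPOINT, json={"test": "testing"}).prepare()
print(prepared.headers["Content-Type"])
# To actually send it: requests.post(API_ENDPOINT, json={"test": "testing"})
```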
|
<python><node.js>
|
2023-03-04 05:26:43
| 2
| 1,387
|
seriously
|
75,633,823
| 15,239,717
|
Django filter Two Models for ID
|
<p>I am working on a Django application where staff users can add Deposits for registered users, and I want to know whether a user has had a Deposit added in the current month. I also want to check in the HTML, on a button URL, whether the user has a deposit or not, and then decide whether to display the button.</p>
<p>I have tried the code below, but here is the error I am getting: <code>Cannot use QuerySet for "Account": Use a QuerySet for "Profile".</code> And even when I use a queryset for Profile (<code>customer_profile = Profile.objects.all()</code>) it still says I should use a queryset for User, so I am confused.
Here are my models:</p>
<pre><code>class Profile(models.Model):
customer = models.OneToOneField(User, on_delete=models.CASCADE, null = True)
surname = models.CharField(max_length=20, null=True)
othernames = models.CharField(max_length=40, null=True)
gender = models.CharField(max_length=6, choices=GENDER, blank=True, null=True)
address = models.CharField(max_length=200, null=True)
phone = models.CharField(max_length=11, null=True)
image = models.ImageField(default='avatar.jpg', blank=False, null=False, upload_to ='profile_images',
)
#Method to save Image
def save(self, *args, **kwargs):
super().save(*args, **kwargs)
img = Image.open(self.image.path)
#Check for Image Height and Width then resize it then save
if img.height > 200 or img.width > 150:
output_size = (150, 250)
img.thumbnail(output_size)
img.save(self.image.path)
def __str__(self):
return f'{self.customer.username}-Profile'
class Account(models.Model):
customer = models.OneToOneField(User, on_delete=models.CASCADE, null=True)
account_number = models.CharField(max_length=10, null=True)
date = models.DateTimeField(auto_now_add=True, null=True)
def __str__(self):
return f' {self.customer} - Account No: {self.account_number}'
class Deposit(models.Model):
customer = models.ForeignKey(Profile, on_delete=models.CASCADE, null=True)
transID = models.CharField(max_length=12, null=True)
acct = models.CharField(max_length=6, null=True)
staff = models.ForeignKey(User, on_delete=models.CASCADE, null=True)
deposit_amount = models.PositiveIntegerField(null=True)
date = models.DateTimeField(auto_now_add=True)
def get_absolute_url(self):
return reverse('create_account', args=[self.id])
def __str__(self):
return f'{self.customer} Deposited {self.deposit_amount} by {self.staff.username}'
</code></pre>
<p>Here is my View function:</p>
<pre><code>def create_account(request):
customer = Account.objects.all()
deposited_this_month = Deposit.objects.filter(customer__profile=customer, date__year=now.year, date__month=now.month).aggregate(deposited_this_month=Sum('deposit_amount')).get('deposited_this_month') or 0
context = {
'deposited_this_month ':deposited_this_month ,
}
return render(request, 'dashboard/customers.html', context)
</code></pre>
<p>In my HTML below is my code:</p>
<pre><code>{% if deposited_this_month %}
<a class="btn btn-success btn-sm" href="{% url 'account-statement' customer.id %}">Statement</a>
{% else %}
<a class="btn btn-success btn-sm" href="">No Transaction</a>
{% endif %}
</code></pre>
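Stepping back from the ORM for a moment, the aggregation is inherently per-customer; passing a whole queryset (`Account.objects.all()`) where a single Profile is expected is what triggers the error above. A plain-Python sketch of the intended logic (the names are hypothetical; in Django this becomes one `Deposit.objects.filter(customer=profile, date__year=..., date__month=...).aggregate(Sum('deposit_amount'))` per profile):

```python
from datetime import date

def deposited_this_month(deposits, customer_id, today=None):
    """Sum one customer's deposits for the current month (plain-data sketch).

    `deposits` is a list of dicts standing in for Deposit rows; in the real
    view the equivalent filter + Sum aggregation runs in the database.
    """
    today = today or date.today()
    return sum(
        d["amount"]
        for d in deposits
        if d["customer_id"] == customer_id
        and (d["date"].year, d["date"].month) == (today.year, today.month)
    )
```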
|
<python><django>
|
2023-03-04 05:25:11
| 1
| 323
|
apollos
|
75,633,807
| 11,630,148
|
rant_category() got an unexpected keyword argument 'slug'
|
<p>Running into a <code>rant_category() got an unexpected keyword argument 'slug'</code> error on my Django project. Basically, I just need to get the slug of the <code>#category</code> in my app to show it in the URL.</p>
<p>Here's my code:</p>
<p><code>views.py</code></p>
<pre><code>class RantListView(ListView):
model = Rant
context_object_name = "rants"
template_name = "rants/rant_list.html"
class RantDetailView(DetailView):
model = Rant
template_name = "rants/rant_detail.html"
def rant_category(request, category):
rants = Rant.objects.filter(categories__slug__contains=category)
context = {"category": category, "rants": rants}
return render(request, "rants/rant_category.html", context)
</code></pre>
<p><code>models.py</code></p>
<pre><code>class Category(models.Model):
title = models.CharField(max_length=50)
slug = AutoSlugField(populate_from="title", slugify_function=to_slugify)
class Meta:
get_latest_by = "-date_added"
verbose_name = _("Category")
verbose_name_plural = _("Categories")
def get_absolute_url(self):
return reverse("rants:rant-category", kwargs={"slug": self.slug})
def __str__(self):
return self.slug
class Rant(BaseModel, models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE)
post = models.TextField(blank=False)
slug = AutoSlugField(populate_from="post", slugify_function=to_slugify)
categories = models.ManyToManyField(Category, related_name="rants")
class Meta:
get_latest_by = "-date_added"
verbose_name = _("rant")
verbose_name_plural = _("rants")
def get_absolute_url(self):
return reverse("rants:rant-detail", kwargs={"slug": self.slug})
def __str__(self):
return self.slug
</code></pre>
<p><code>html code</code></p>
<pre class="lang-html prettyprint-override"><code> {% for rant in rants %}
{{ rant.post }}
{% for category in rant.categories.all %}
<a href="{% url 'rants:rant-category' category.slug %}">#{{ category.title }}</a>
{% endfor %}
{% endfor %}
</code></pre>
<p>I'm getting an:</p>
<pre><code>TypeError at /rants/category/category 1/
rant_category() got an unexpected keyword argument 'slug'
</code></pre>
<p>I haven't coded in a while, so I based everything on my old tutorial <a href="https://github.com/reyesvicente/cookiecutter-blog-tutorial-learnetto" rel="nofollow noreferrer">https://github.com/reyesvicente/cookiecutter-blog-tutorial-learnetto</a>, but it doesn't seem to be working.</p>
<h3>EDIT</h3>
<p>Here's my <code>urls.py</code> on the app:</p>
<pre class="lang-py prettyprint-override"><code> path("", RantListView.as_view(), name="rant"),
path("<str:slug>/", RantDetailView.as_view(), name="rant-detail"),
path("category/<str:slug>/", rant_category, name="rant-category"),
</code></pre>
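The traceback's mechanism can be reproduced without Django: the pattern `category/<str:slug>/` captures the value under the keyword `slug`, so a view whose parameter is named `category` cannot receive it. A minimal sketch:

```python
def rant_category_old(request, category):
    # parameter name does not match the URL kwarg "slug"
    return f"rants tagged {category}"

def rant_category_fixed(request, slug):
    # Renaming the parameter to match the captured kwarg (or changing the
    # pattern to category/<str:category>/) resolves the TypeError.
    return f"rants tagged {slug}"

try:
    rant_category_old(None, slug="django")  # mirrors how Django calls the view
except TypeError as exc:
    print(exc)  # ... got an unexpected keyword argument 'slug'

print(rant_category_fixed(None, slug="django"))
```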
|
<python><django><django-models><django-views><django-templates>
|
2023-03-04 05:19:24
| 3
| 664
|
Vicente Antonio G. Reyes
|
75,633,662
| 1,601,580
|
How do you load a specific GPU from CUDA_AVAILABLE_DEVICES in PyTorch?
|
<p>I came up with this code but it's resulting in never ending bugs:</p>
<pre><code>import os
import random

import torch


def get_device_via_env_variables(deterministic: bool = False, verbose: bool = True) -> torch.device:
device: torch.device = torch.device("cpu")
if torch.cuda.is_available():
if 'CUDA_VISIBLE_DEVICES' not in os.environ:
device: torch.device = torch.device("cuda:0")
else:
gpu_idx: list[str] = os.environ['CUDA_VISIBLE_DEVICES'].split(',')
if len(gpu_idx) == 1:
gpu_idx: str = gpu_idx[0]
else:
# generate random int from 0 to len(gpu_idx) with import statement
import random
idx: int = random.randint(0, len(gpu_idx) - 1) if not deterministic else -1
gpu_idx: str = gpu_idx[idx]
device: torch.device = torch.device(f"cuda:{gpu_idx}")
if verbose:
print(f'{device=}')
return device
</code></pre>
<p>I have a suspicion that the <code>gpu_idx</code> and <code>CUDA_VISIBLE_DEVICES</code> indices don't actually match. I just want to load the right GPU. How do I do that?</p>
<p>error:</p>
<pre><code>Traceback (most recent call last):aded (0.000 MB deduped)
File "/lfs/ampere1/0/brando9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/experiment_mains/main_experiment_analysis_sl_vs_maml_performance_comp_distance.py", line 1368, in <module>
main_data_analyis()
File "/lfs/ampere1/0/brando9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/experiment_mains/main_experiment_analysis_sl_vs_maml_performance_comp_distance.py", line 1163, in main_data_analyis
args: Namespace = load_args()
File "/lfs/ampere1/0/brando9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/experiment_mains/main_experiment_analysis_sl_vs_maml_performance_comp_distance.py", line 1152, in load_args
args.meta_learner = get_maml_meta_learner(args)
File "/afs/cs.stanford.edu/u/brando9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/data_analysis/common.py", line 272, in get_maml_meta_learner
base_model = load_model_ckpt(args, path_to_checkpoint=args.path_2_init_maml)
File "/afs/cs.stanford.edu/u/brando9/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/mains/common.py", line 265, in load_model_ckpt
base_model, _, _ = load_model_optimizer_scheduler_from_ckpt(args, path_to_checkpoint,
File "/afs/cs.stanford.edu/u/brando9/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/mains/common.py", line 81, in load_model_optimizer_scheduler_from_ckpt
ckpt: dict = torch.load(path_to_checkpoint, map_location=torch.device('cuda:3'))
File "/lfs/ampere1/0/brando9/miniconda/envs/mds_env_gpu/lib/python3.9/site-packages/torch/serialization.py", line 607, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/lfs/ampere1/0/brando9/miniconda/envs/mds_env_gpu/lib/python3.9/site-packages/torch/serialization.py", line 882, in _load
result = unpickler.load()
File "/lfs/ampere1/0/brando9/miniconda/envs/mds_env_gpu/lib/python3.9/site-packages/torch/serialization.py", line 857, in persistent_load
load_tensor(data_type, size, key, _maybe_decode_ascii(location))
File "/lfs/ampere1/0/brando9/miniconda/envs/mds_env_gpu/lib/python3.9/site-packages/torch/serialization.py", line 846, in load_tensor
loaded_storages[key] = restore_location(storage, location)
File "/lfs/ampere1/0/brando9/miniconda/envs/mds_env_gpu/lib/python3.9/site-packages/torch/serialization.py", line 827, in restore_location
return default_restore_location(storage, str(map_location))
File "/lfs/ampere1/0/brando9/miniconda/envs/mds_env_gpu/lib/python3.9/site-packages/torch/serialization.py", line 175, in default_restore_location
result = fn(storage, location)
File "/lfs/ampere1/0/brando9/miniconda/envs/mds_env_gpu/lib/python3.9/site-packages/torch/serialization.py", line 151, in _cuda_deserialize
device = validate_cuda_device(location)
File "/lfs/ampere1/0/brando9/miniconda/envs/mds_env_gpu/lib/python3.9/site-packages/torch/serialization.py", line 142, in validate_cuda_device
raise RuntimeError('Attempting to deserialize object on CUDA device '
RuntimeError: Attempting to deserialize object on CUDA device 3 but torch.cuda.device_count() is 1. Please use torch.load with map_location to map your storages to an existing device.
</code></pre>
<p>This is motivated by the fact that I am trying to use the remaining 40GB alongside my 5CNN with 256 & 512 filters, but it results in memory issues:</p>
<pre><code>Traceback (most recent call last):
File "/lfs/ampere1/0/brando9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/experiment_mains/main_experiment_analysis_sl_vs_maml_performance_comp_distance.py", line 1368, in <module>
main_data_analyis()
File "/lfs/ampere1/0/brando9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/experiment_mains/main_experiment_analysis_sl_vs_maml_performance_comp_distance.py", line 1213, in main_data_analyis
stats_analysis_with_emphasis_on_effect_size(args, hist=True)
File "/afs/cs.stanford.edu/u/brando9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/data_analysis/stats_analysis_with_emphasis_on_effect_size.py", line 74, in stats_analysis_with_emphasis_on_effect_size
results_usl: dict = get_episodic_accs_losses_all_splits_usl(args, args.mdl_sl, loaders)
File "/afs/cs.stanford.edu/u/brando9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/data_analysis/common.py", line 616, in get_episodic_accs_losses_all_splits_usl
losses, accs = agent.get_lists_accs_losses(data, training)
File "/afs/cs.stanford.edu/u/brando9/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/meta_learners/pretrain_convergence.py", line 92, in get_lists_accs_losses
spt_embeddings_t = self.get_embedding(spt_x_t, self.base_model).detach()
File "/afs/cs.stanford.edu/u/brando9/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/meta_learners/pretrain_convergence.py", line 166, in get_embedding
return get_embedding(x=x, base_model=base_model)
File "/afs/cs.stanford.edu/u/brando9/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/meta_learners/pretrain_convergence.py", line 267, in get_embedding
out = base_model.model.features(x)
File "/lfs/ampere1/0/brando9/miniconda/envs/mds_env_gpu/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/lfs/ampere1/0/brando9/miniconda/envs/mds_env_gpu/lib/python3.9/site-packages/torch/nn/modules/container.py", line 139, in forward
input = module(input)
File "/lfs/ampere1/0/brando9/miniconda/envs/mds_env_gpu/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/lfs/ampere1/0/brando9/miniconda/envs/mds_env_gpu/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 443, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/lfs/ampere1/0/brando9/miniconda/envs/mds_env_gpu/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 439, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: CUDA out of memory. Tried to allocate 174.00 MiB (GPU 0; 79.20 GiB total capacity; 54.31 GiB already allocated; 22.56 MiB free; 54.61 GiB reserved in total by PyTorch)
</code></pre>
<p>I want to use GPU 3 but the last error say GPU 0. What am I doing wrong?</p>
<p>cross: <a href="https://discuss.pytorch.org/t/how-do-you-load-a-specific-gpu-from-cuda-available-devices-in-pytorch/174044" rel="nofollow noreferrer">https://discuss.pytorch.org/t/how-do-you-load-a-specific-gpu-from-cuda-available-devices-in-pytorch/174044</a></p>
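A likely explanation for the mismatch: once `CUDA_VISIBLE_DEVICES` takes effect, PyTorch renumbers the visible GPUs starting at 0. Physical GPU 3 exposed via `CUDA_VISIBLE_DEVICES=3` must therefore be addressed as `cuda:0`, and `torch.load(..., map_location='cuda:3')` fails exactly as in the first traceback. A torch-free sketch of the index mapping (the helper name is mine):

```python
import os

def visible_device_index(physical_idx: int) -> int:
    """Map a physical GPU index to its logical index as seen by PyTorch."""
    visible = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    if not visible:
        return physical_idx  # no masking: physical and logical indices match
    ids = [int(x) for x in visible.split(",")]
    return ids.index(physical_idx)  # position within the visible list

os.environ["CUDA_VISIBLE_DEVICES"] = "3"
print(visible_device_index(3))  # prints 0 -> use torch.device("cuda:0")
```

So the sketch suggests passing the *logical* index to `torch.device` / `map_location`, not the physical one from the environment variable.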
|
<python><machine-learning><deep-learning><pytorch>
|
2023-03-04 04:33:57
| 2
| 6,126
|
Charlie Parker
|
75,633,628
| 19,238,204
|
Plotting Solid of Revolution Animation that is Revolved toward y-axis Python 3 and Matplotlib
|
<p>I have this code, modified from:</p>
<p><a href="https://stackoverflow.com/questions/36464982/ploting-solid-of-revolution-in-python-3-matplotlib-maybe">Ploting solid of revolution in Python 3 (matplotlib maybe)</a></p>
<p>But I want to revolve this around the <code>y</code>-axis. How?</p>
<p>This is the code. The region is bounded by y = 1/(x^2 + 4), x = 1, x = 4, and y = 0.</p>
<pre><code>import gif
import numpy as np
import matplotlib.pyplot as plt
import mpl_toolkits.mplot3d.axes3d as axes3d
@gif.frame
def plot_volume(angle):
fig = plt.figure(figsize = (20, 15))
ax2 = fig.add_subplot(1, 1, 1, projection = '3d')
angles = np.linspace(0, 360, 20)
x = np.linspace(1, 4, 60)
v = np.linspace(0, 2*angle, 60)
U, V = np.meshgrid(x, v)
Y1 = (1 / (U**2 + 4 ))*np.cos(V)
Z1 = (1 / (U**2 + 4 ))*np.sin(V)
X = U
ax2.plot_surface(X, Y1, Z1, alpha = 0.2, color = 'blue', rstride = 6, cstride = 6)
ax2.set_xlim(-2,5)
ax2.set_ylim(-0.5,0.5)
ax2.set_zlim(-1,1)
ax2.view_init(elev = 50, azim = 30*angle)
ax2.plot_wireframe(X, Y1, Z1, color = 'black')
ax2._axis3don = False
frames = []
for i in np.linspace(0, 2*np.pi, 20):
frame = plot_volume(i)
frames.append(frame)
gif.save(frames, '/home/browni/LasthrimProjection/Python/solidofrevolution1.gif', duration = 500)
</code></pre>
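For the y-axis case, only the mesh construction needs to change (the animation and plotting scaffolding can stay as in the question): the curve's x-value becomes the radius swept around the y-axis, and the function value stays the height. A sketch of just that part:

```python
import numpy as np

# Revolution about the y-axis: sweep the x-coordinate (as a radius) around y,
# keeping y = f(x) as the height -- i.e. the surface (x cos v, f(x), x sin v).
x = np.linspace(1, 4, 60)
v = np.linspace(0, 2 * np.pi, 60)
U, V = np.meshgrid(x, v)
X = U * np.cos(V)       # radius swept around the y-axis
Z = U * np.sin(V)
Y = 1 / (U**2 + 4)      # height is the function value, unchanged by rotation
# then, inside plot_volume: ax2.plot_surface(X, Y, Z, ...) with wider x/z limits
```

Since the radius now runs from 1 to 4, the x and z axis limits would need to be widened accordingly (roughly -4 to 4) for the surface to fit in view.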
|
<python><numpy><matplotlib>
|
2023-03-04 04:22:16
| 1
| 435
|
Freya the Goddess
|
75,633,585
| 5,366,075
|
unable to install forexconnect python package
|
<p>I have Python 3.9.6.</p>
<p>Neither <code>pip install</code> nor <code>python3 -m pip install</code> is able to install the forexconnect (<a href="https://github.com/gehtsoft/forex-connect" rel="nofollow noreferrer">https://github.com/gehtsoft/forex-connect</a>) package:</p>
<pre><code>$ python3 -m pip install forexconnect
ERROR: Could not find a version that satisfies the requirement forexconnect (from versions: none)
ERROR: No matching distribution found for forexconnect
</code></pre>
<p>The error is the same on macOS and Linux (Ubuntu). Can I please get some help?</p>
|
<python><pip>
|
2023-03-04 04:06:55
| 2
| 1,863
|
usert4jju7
|
75,633,563
| 9,957,175
|
How to compress existing h5py file efficiently using multiprocessing
|
<p>I have a program that streams large amounts of data I need to save for later processing. This program is highly time sensitive, and as such I need to save the data as quickly as possible. To handle large volumes of data, I use <code>h5py</code> and do not use compression to ensure we take as little time as possible to save the data. My program looks something like this:</p>
<pre class="lang-py prettyprint-override"><code># Open the h5py for writing
hf = h5py.File('raw_data.hdf5', 'w')
# Simulates getting 100 sets of data from my application
for i in tqdm(range(100)):
# Simulates getting the data
data = np.random.random(size = (500, 500, 500))
# Save to h5py using no compression to minimize write time
hf.create_dataset('{0:03d}'.format(i), data=data)
# Close the h5py file
hf.close()
</code></pre>
<p>Running this on my computer takes <code>~0.9</code> seconds and creates a <code>~600MB</code> file (my actual program loops over many more iterations and creates much larger files). To avoid wasting space, after collecting the data, I re-compress the file using:</p>
<pre class="lang-py prettyprint-override"><code># Open the raw data file and create a new output compressed file
hf_in = h5py.File('raw_data.hdf5', 'r')
hf_out = h5py.File('compressed_data.hdf5', 'w')
# Simulates getting 100 sets of data from my application
for key in tqdm(hf_in.keys()):
# Load data
data = hf_in.get(key)
# Save compressed data
hf_out.create_dataset('{}'.format(key), data=data, compression="gzip", compression_opts=9)
# Close the h5py file
hf_in.close()
hf_out.close()
</code></pre>
<p>This example takes <code>~15</code> seconds (i.e., 15X longer than my streaming program). <strong>Note:</strong> In this example, the file size won't be reduced significantly, but that's because we are feeding it random data. My actual program gets <code>~10X</code> size reductions, which I need.</p>
<p>I noticed that the compression only uses a single core and hoped to speed it up using <code>multiprocessing</code>. I read that we can use command line tools to do the compression in <a href="https://stackoverflow.com/questions/15903867/compression-of-existing-file-using-h5py">this answer</a>. However, I hope to avoid that, as I want to fully automate this process in my python script.</p>
<p>My question is: Is it possible to use multiprocessing to do multiple reads from the original <code>hdf5</code> file, compress the data, and save it back to a new compressed <code>hdf5</code> file? An example of this would look something like:</p>
<pre class="lang-py prettyprint-override"><code>def write_back_compressed(key):
global h5_in, h5_out
# Load the data
data = h5_in[key]
# Write the data to the file
h5_out.create_dataset(key, data=data, compression="gzip", compression_opts=9)
return None
# Load the file
global h5_in, h5_out
hf_in = h5py.File('raw_data.hdf5', 'r')
hf_out = h5py.File('compressed_parallel_data.hdf5', 'w')
# Create a number of processors
total_processors = int(4)
pool = multiprocessing.Pool(processes=total_processors)
# Call our function total_test_suites times
jobs = []
for key in hf_in.keys():
# Write it back compressed
jobs.append(pool.apply_async(write_back_compressed, args=([key])))
# Get the results
for job in tqdm(jobs):
job.get()
# Close the pool
pool.close()
hf_in.close()
</code></pre>
<p>However, when I run this, I get an error stating that it can't lock the file for writing: <code>OSError: [Errno 0] Unable to create file (unable to lock file, errno = 0, error message = 'No error', Win32 GetLastError() = 33)</code>. <a href="https://stackoverflow.com/questions/54546499/is-it-possible-to-do-parallel-writes-on-one-h5py-file-using-multiprocessing">This answer</a> suggests I can do this with <code>mpi4py</code> however I can't get that working either. Any suggestions?</p>
|
<python><multiprocessing><hdf5><h5py>
|
2023-03-04 03:59:53
| 0
| 740
|
Carl H
|
75,633,471
| 1,039,860
|
Trying to get a date from a custom date picker in Python and tkinter
|
<p>I want to minimize the number of clicks and typing a user has to go through to select a date. This code pops up a date picker with the option of either clicking on a date, clicking the cancel button, or hitting the ESC key. My problem is that when I click on a date, the <code>option_changed</code> function is called as expected, but <code>self.calendar.get_date()</code> always returns today's date.</p>
<pre><code>import datetime
from tkinter import Frame, StringVar, Button, Label, Entry, OptionMenu, Tk
from tkinter import simpledialog
from tkcalendar import Calendar
class DateDialog(simpledialog.Dialog):
def __init__(self, parent, title, selected: datetime.date = datetime.date.today()):
self.calendar = None
self.selected_date = selected
self.parent = parent
self.type = title
self.selected_date = selected
super().__init__(parent, title)
def body(self, frame):
horizontal_frame = Frame(self)
self.calendar = Calendar(frame, selectmode='day',
year=self.selected_date.year, month=self.selected_date.month,
day=self.selected_date.day)
cancel_button = Button(horizontal_frame, text='Cancel', width=5, command=self.cancel_pressed)
self.bind('<Escape>', lambda event: self.cancel_pressed())
for row in self.calendar._calendar:
for lbl in row:
lbl.bind('<1>', self.option_changed)
self.calendar.pack(side='left')
cancel_button.pack(side='right')
horizontal_frame.pack()
frame.pack()
return horizontal_frame
# noinspection PyUnusedLocal
def option_changed(self, *args):
self.selected_date = self.calendar.get_date()
self.destroy()
def cancel_pressed(self):
self.selected_date = None
self.destroy()
def buttonbox(self):
pass
def main():
root = Tk()
test = DateDialog(root, 'testing').selected_date
root.mainloop()
if __name__ == '__main__':
main()
</code></pre>
|
<python><tkinter><calendar>
|
2023-03-04 03:29:18
| 1
| 1,116
|
jordanthompson
|
75,633,334
| 853,113
|
Beautiful Soup - Get all text, and preserve link html?
|
<p>I am parsing multiple HTML pages using beautiful soup. Most of the scenarios work great. I want to include text along with the URL for links.</p>
<p>The current syntax is:</p>
<pre><code> soup = MyBeautifulSoup(''.join(body), 'html.parser')
body_text = self.remove_newlines(soup.get_text())
</code></pre>
<p>I found an online recommendation to override the _all_strings function:</p>
<pre><code>class MyBeautifulSoup(BeautifulSoup):
def _all_strings(self, strip=False, types=(NavigableString, CData)):
for descendant in self.descendants:
# return "a" string representation if we encounter it
if isinstance(descendant, Tag) and descendant.name == 'a':
yield str('<{}> '.format(descendant.get('href', '')))
# skip an inner text node inside "a"
if isinstance(descendant, NavigableString) and descendant.parent.name == 'a':
continue
# default behavior
if (
(types is None and not isinstance(descendant, NavigableString))
or
(types is not None and type(descendant) not in types)):
continue
if strip:
descendant = descendant.strip()
if len(descendant) == 0:
continue
yield descendant
</code></pre>
<p>However, this gives a runtime error:</p>
<pre><code>in _all_strings
(types is not None and type(descendant) not in types)):
TypeError: argument of type 'object' is not iterable
</code></pre>
<p>Is there a way around this error?</p>
<p>Thanks!</p>
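In recent Beautiful Soup releases, `get_text()` passes a sentinel *object* (not a tuple) as `types`, which is what makes the `not in types` membership test raise "argument of type 'object' is not iterable". One hedged workaround: normalize the sentinel back to the intended tuple at the top of the override. A sketch against the code above:

```python
from bs4 import BeautifulSoup
from bs4.element import CData, NavigableString, Tag

class MyBeautifulSoup(BeautifulSoup):
    def _all_strings(self, strip=False, types=(NavigableString, CData)):
        # Newer bs4 passes a sentinel object for "default"; fall back to the
        # tuple we actually want to test membership against.
        if not isinstance(types, (tuple, type(None))):
            types = (NavigableString, CData)
        for descendant in self.descendants:
            # emit the href when we encounter an <a> tag
            if isinstance(descendant, Tag) and descendant.name == 'a':
                yield '<{}> '.format(descendant.get('href', ''))
            # skip the text node inside <a>
            if isinstance(descendant, NavigableString) and descendant.parent.name == 'a':
                continue
            # default filtering behavior
            if types is None and not isinstance(descendant, NavigableString):
                continue
            if types is not None and type(descendant) not in types:
                continue
            if strip:
                descendant = descendant.strip()
                if len(descendant) == 0:
                    continue
            yield descendant

html = '<p>See <a href="https://example.com">this link</a> for details.</p>'
text = MyBeautifulSoup(html, 'html.parser').get_text()
print(text)
```

Note the `isinstance` check also swallows the (rare) case of a single class passed as `types`; if that matters, the sentinel could instead be compared against `bs4.element.PageElement.default` directly.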
|
<python><beautifulsoup>
|
2023-03-04 02:43:32
| 1
| 357
|
VV75
|
75,633,108
| 15,239,717
|
Django get Models ID from Zip_longest() function data on HTML
|
<p>I am working on a Django application where I have used Python's zip_longest function with a for loop to display both the customer deposits and withdrawals in an HTML table, and I want to get the ID of the second zipped list item onto a URL in a button. How do I achieve this?
Here is my model:</p>
<pre><code>class Witdrawal(models.Model):
account = models.ForeignKey(Profile, on_delete=models.CASCADE, null=True)
transID = models.CharField(max_length=12, null=True)
staff = models.ForeignKey(User, on_delete=models.CASCADE, null=True)
withdrawal_amount = models.PositiveIntegerField(null=True)
date = models.DateTimeField(auto_now_add=True)
def __str__(self):
return f'{self.account}- Withdrawn - {self.withdrawal_amount}'
</code></pre>
<p>Here is my first view:</p>
<pre><code>@login_required(login_url='user-login')
def account_statement(request, id):
try:
customer = Account.objects.get(id=id)
#Get Customer ID
customerID = customer.customer.id
except Account.DoesNotExist:
messages.error(request, 'Something Went Wrong')
return redirect('create-customer')
else:
deposits = Deposit.objects.filter(customer__id=customerID).order_by('-date')[:5]
#Get Customer Withdrawal by ID and order by Date minimum 5 records displayed
withdrawals = Witdrawal.objects.filter(account__id=customerID).order_by('-date')[:5]
context = {
"deposits_and_withdrawals": zip_longest(deposits, withdrawals, fillvalue='No Transaction'),
}
return render(request, 'dashboard/statement.html', context)
</code></pre>
<p>Here is my HTML code:</p>
<pre><code>{% if deposits_and_withdrawals %}
<tbody>
{% for deposit, withdrawal in deposits_and_withdrawals %}
<tr>
<td style="background-color:rgba(231, 232, 233, 0.919); color:blue;">Deposit - </td>
<td style="background-color:rgba(231, 232, 233, 0.919)">{{ deposit.acct }}</td>
<td style="background-color:rgba(231, 232, 233, 0.919)">{{ deposit.transID }}</td>
<td style="background-color:rgba(231, 232, 233, 0.919)">N{{ deposit.deposit_amount | intcomma }}</td>
<td style="background-color:rgba(231, 232, 233, 0.919)">{{ deposit.date | naturaltime }}</td>
<th scope="row" style="background-color:rgba(231, 232, 233, 0.919)"><a class="btn btn-success btn-sm" href="{% url 'deposit-slip' deposit.id %}">Slip</a></th>
</tr>
<tr style="color: red;">
<td>Withdrawal - </td>
<td>{{ withdrawal.account.surname }}</td>
<td>{{ withdrawal.transID }}</td>
<td>N{{ withdrawal.withdrawal_amount | intcomma }}</td>
<td>{{ withdrawal.date | naturaltime }}</td>
<th scope="row"><a class="btn btn-success btn-sm" href=" {% url 'withdrawal-slip' withdrawal.withdrawal_id %} ">Slip</a></th>
</tr>
{% endfor %}
</tbody>
{% else %}
<h3 style="text-align: center; color:red;">No Deposit/Withdrawal Found for {{ customer.customer.profile.surname }} {{ customer.customer.profile.othernames }}</h3>
{% endif %}
</code></pre>
<p>Here is my URL path code:</p>
<pre><code>urlpatterns = [
path('witdrawal/slip/<int:id>/', user_view.withdrawal_slip, name = 'withdrawal-slip'),
]
</code></pre>
<p>Please understand that I am trying to get the <strong>withdrawal id</strong> for the withdrawal_slip function URL path, but this is the error I am getting: <strong>NoReverseMatch at /account/1/statement/
Reverse for 'withdrawal-slip' with arguments '('',)' not found. 1 pattern(s) tried: ['witdrawal/slip/(?P[0-9]+)/\Z']</strong>. The error points to the button's URL line in my HTML, but I don't know what the issue is.</p>
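Two things are likely at play here: the Witdrawal model shown above has no <code>withdrawal_id</code> field (the template should use <code>withdrawal.id</code>), and the string fillvalue means padded rows are plain strings, so any attribute lookup on them resolves to an empty string and the <code>{% url %}</code> tag raises NoReverseMatch. A small sketch of the fillvalue behaviour, with made-up stand-ins for the querysets:

```python
from itertools import zip_longest

# Illustrative stand-ins for the Deposit/Withdrawal querysets in the question.
deposits = ['d1', 'd2', 'd3']
withdrawals = ['w1']

# With a string fillvalue, padded slots are plain strings, so a template
# lookup like `withdrawal.id` silently resolves to '' and {% url %} fails.
padded = list(zip_longest(deposits, withdrawals, fillvalue='No Transaction'))

# With the default fillvalue of None, the template can guard each row with
# {% if withdrawal %} ... {% endif %} and skip the URL reversal entirely.
padded_none = list(zip_longest(deposits, withdrawals))
```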
|
<python><django><django-views><django-templates><django-urls>
|
2023-03-04 01:22:43
| 1
| 323
|
apollos
|
75,633,054
| 19,321,677
|
Issues with length mismatch when fitting model on categorical variables using XGB classifier
|
<p>I trained a model with several categorical variables which I encoded using dummies from pandas. Now, when I score the model on new/unseen data, I have fewer categorical values than in the train dataset. Thus, I get a mismatch error.</p>
<p>Instead of using pd.get_dummies, is there a better practice to account for different distinct values of a categorical feature between the train and validation datasets?</p>
<p>Thanks so much</p>
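One common approach, sketched with a made-up single-column example: fit a scikit-learn OneHotEncoder on the training data only, with <code>handle_unknown='ignore'</code>, so unseen categories in validation data become all-zero rows of the same width instead of raising an error.

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

# Toy data: 'green' never appears in the training set.
train = pd.DataFrame({'color': ['red', 'blue', 'red']})
valid = pd.DataFrame({'color': ['green', 'blue']})

# Fit on train only; the learned category list fixes the output width.
enc = OneHotEncoder(handle_unknown='ignore')
enc.fit(train[['color']])

X_train = enc.transform(train[['color']]).toarray()
X_valid = enc.transform(valid[['color']]).toarray()  # unseen 'green' -> all zeros
```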
|
<python><machine-learning><scikit-learn><feature-engineering>
|
2023-03-04 01:07:03
| 1
| 365
|
titutubs
|
75,632,975
| 10,530,575
|
how to align rows in datafrane after pd.concat() Python pandas
|
<p>I have 2 single-column dataframes. After performing a LEFT JOIN using pd.concat, the first column's values don't align with the second's.</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({ 'city':['ABC','NEW','TWIN','KING']})
df2 = pd.DataFrame({ 'city':['NEW','ABC']})
</code></pre>
<p><a href="https://i.sstatic.net/OlqdY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OlqdY.png" alt="enter image description here" /></a></p>
<pre><code>pd.concat([df1, df2], axis=1)
</code></pre>
<p><a href="https://i.sstatic.net/7r0hn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7r0hn.png" alt="enter image description here" /></a></p>
<p>my expected result</p>
<p><a href="https://i.sstatic.net/woibm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/woibm.png" alt="enter image description here" /></a></p>
<p>I know I can sort it, but I want to know how to do something similar to a SQL LEFT JOIN.</p>
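A sketch of the usual answer: <code>pd.concat(axis=1)</code> only aligns on the positional index, while the pandas analogue of a SQL LEFT JOIN is <code>merge(how='left')</code> on the key column (the duplicated <code>city_right</code> column below is just to make the alignment visible).

```python
import pandas as pd

df1 = pd.DataFrame({'city': ['ABC', 'NEW', 'TWIN', 'KING']})
df2 = pd.DataFrame({'city': ['NEW', 'ABC']})

# Duplicate df2's key so the joined value is visible as its own column.
right = df2.assign(city_right=df2['city'])

# LEFT JOIN: keep every df1 row, fill NaN where df2 has no match.
merged = df1.merge(right, on='city', how='left')
```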
|
<python><python-3.x><pandas><dataframe>
|
2023-03-04 00:45:11
| 2
| 631
|
PyBoss
|
75,632,938
| 11,429,035
|
Change coordinate of numpy array?
|
<p>I wonder if there is an elegant way to customize another coordinate system for numpy array? The reason I hope to change the coordinate system is I hope to avoid any potential error in my simulation process.</p>
<p>For example, if I want an object to move upwards, from <code>(2,2)</code> to <code>(2,3)</code> in a normal coordinate system, I have to write it as moving from <code>[2,2]</code> to <code>[1,2]</code> in a numpy array. I am very concerned that I will make such errors in a large simulation.</p>
<p>So I want to know what is the most elegant way so that I can think of the numpy array as a normal coordinate system? For example, for a numpy array <code>A</code> with the shape of <code>(500,500)</code>, I want <code>A(2,2)</code> actually points to <code>A[498, 2]</code>. And I can easily describe an upward movement using <code>A(2,2)</code> to <code>A(2, 20)</code>, which equivalent to numpy array from <code>A[498, 2]</code> to <code>A[480, 2]</code></p>
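One way to get this, sketched as a thin wrapper class that exposes math-style <code>(x, y)</code> coordinates (y=0 at the bottom row) over a plain numpy array; the exact off-by-one convention (<code>shape[0] - 1 - y</code> here, fully 0-based) is an assumption and may need adjusting to match the indexing in the question.

```python
import numpy as np

class Grid:
    """Expose (x, y) math coordinates over a numpy array, y increasing upward."""

    def __init__(self, shape):
        self.a = np.zeros(shape)

    def _idx(self, x, y):
        # flip the row axis so that increasing y moves "up"
        return self.a.shape[0] - 1 - y, x

    def get(self, x, y):
        return self.a[self._idx(x, y)]

    def set(self, x, y, value):
        self.a[self._idx(x, y)] = value

g = Grid((500, 500))
g.set(2, 2, 7.0)          # lands in g.a[497, 2]
g.set(2, 3, g.get(2, 2))  # "move upwards": the row index decreases to 496
```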
|
<python><numpy>
|
2023-03-04 00:35:32
| 1
| 533
|
Xudong
|
75,632,858
| 18,086,775
|
Add columns to a new level in multiindex dataframe
|
<p>My dataframe looks like this:</p>
<pre class="lang-py prettyprint-override"><code>data = {
'WholesalerID': {0: 121, 1: 121, 2: 42, 3: 42, 4: 54, 5: 43, 6: 432, 7: 4245, 8: 4245, 9: 4245, 10: 457},
'Brand': {0: 'Vans', 1: 'Nike', 2: 'Nike', 3: 'Vans',4: 'Vans', 5: 'Nike', 6: 'Puma', 7: 'Vans', 8: 'Nike', 9: 'Puma', 10: 'Converse'},
'Shop 1': {0: 'Yes', 1: 'No', 2: 'Yes', 3: 'Maybe', 4: 'Yes', 5: 'No', 6: 'Yes', 7: 'Yes', 8: 'Maybe', 9: 'Maybe', 10: 'No'}
}
df = pd.DataFrame.from_dict(data)
df = df.assign(count=1)
pivoted_df = pd.pivot_table(
df,
index=["Brand"],
columns=["Shop 1"],
values=["count"],
aggfunc={"count": "count"},
fill_value=0,
margins=True,
margins_name="N",
)
</code></pre>
<p><a href="https://i.sstatic.net/mmBc4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mmBc4.png" alt="enter image description here" /></a></p>
<p>I need to add columns <code>N, Count, Prop</code> on the first level, I am trying the following, but It does not work:</p>
<pre><code>pivoted_df.columns = pd.MultiIndex.from_product(
[pivoted_df.columns, ["N", "count", "prop"]]
)
</code></pre>
<p><strong>Desired output</strong>:<a href="https://i.sstatic.net/T4IMz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/T4IMz.png" alt="enter image description here" /></a></p>
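One way to get a labelled top level, sketched with a tiny made-up count table: build the count table and a proportion table separately, then stack them side by side with <code>pd.concat</code>, whose dict/<code>keys</code> form creates the new top level of the column MultiIndex (rather than <code>MultiIndex.from_product</code>, which multiplies out every combination and mismatches the column count).

```python
import pandas as pd

# Tiny illustrative count table (Brand x response); not the real pivot.
counts = pd.DataFrame({'Maybe': [1, 2], 'No': [1, 1]}, index=['Nike', 'Vans'])
props = counts / counts.to_numpy().sum()

# The dict keys become the new top level of the column MultiIndex.
out = pd.concat({'count': counts, 'prop': props}, axis=1)
```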
|
<python><pandas><dataframe><multi-index>
|
2023-03-04 00:13:33
| 1
| 379
|
M J
|
75,632,776
| 11,824,033
|
Calling function from right parent class with multiple inheritance in Python
|
<p>I have two classes <code>A</code> and <code>B</code> and both have a method <code>get_type()</code> that returns a string and which is called in the <code>__init__()</code> of both classes to set an instance attribute <code>self.type</code>. Now there's a child class that inherits from both <code>A</code> and <code>B</code>, and in its <code>__init__()</code> it takes an instance of either <code>A</code> or <code>B</code> as an argument. Depending on whether it's instantiated with an instance of <code>A</code> or <code>B</code>, I call the corresponding <code>__init__()</code> of the parent. Presumably because of the method resolution order of multiple inheritance, the <code>get_type()</code> method of the <code>A</code> parent is always called, even if <code>C</code> is instantiated with an instance of <code>B</code>. See the code below:</p>
<pre><code>class A:
def __init__(self):
self.type = self.get_type()
def get_type(self):
return 'A'
class B:
def __init__(self):
self.type = self.get_type()
def get_type(self):
return 'B'
class C(A, B):
def __init__(self, instance):
if isinstance(instance, A):
A.__init__(self)
print('Inheriting from A')
else:
B.__init__(self)
print('Inheriting from B')
a = A()
b = B()
c = C(b)
print(a.type, b.type, c.type)
>>> Inheriting from B
>>> A B A
</code></pre>
<p>So I want <code>c.type</code> to be 'B' since it was initialized through the <code>B</code> class. Can I use the <code>super()</code> function to get the behaviour I want?</p>
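The underlying issue is that <code>C</code>'s MRO is <code>(C, A, B)</code>, so <code>self.get_type()</code> always resolves to <code>A.get_type</code> regardless of which parent's <code>__init__</code> ran; <code>super()</code> follows the same MRO and will not help here. One sketch of a workaround is to call the chosen parent's method explicitly (unbound), pinning the lookup to that class:

```python
class A:
    def __init__(self):
        self.type = self.get_type()

    def get_type(self):
        return 'A'

class B:
    def __init__(self):
        self.type = self.get_type()

    def get_type(self):
        return 'B'

class C(A, B):
    def __init__(self, instance):
        # self.get_type() would resolve via C's MRO (A first), so call the
        # chosen parent's get_type unbound to pin the behaviour explicitly.
        parent = A if isinstance(instance, A) else B
        self.type = parent.get_type(self)
```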
|
<python><python-3.x><class><inheritance><multiple-inheritance>
|
2023-03-03 23:52:17
| 3
| 359
|
Pickniclas
|
75,632,773
| 505,188
|
Matplotlib Save Figures Write Over Each Other
|
<p>I have two functions that create saved graphs, but the second graph always has the first graph laid over it; here is the code to reproduce the problem. In this case graph.png will be a combination of dendo.png and what was supposed to be graph.png by itself. Is this just the way Matplotlib works, or is there something I'm missing?</p>
<pre><code>def dendo():
# Create an array
x = np.array([100., 200., 300., 400., 500., 250.,
450., 280., 450., 750.])
Z=linkage(x,'ward')
dendrogram(Z, leaf_rotation=45., leaf_font_size=12.)
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('Cluster Size')
plt.ylabel('Distance')
plt.rcParams["figure.figsize"] = (9,8)
plt.savefig('dendo.png')
def stats():
df = pd.DataFrame({'gmat':[ 580.0, 660.0, 740.0, 590.0, 660.0, 540.0, 690.0, 550.0, 580.0, 620.0, 710.0, 660.0, 780.0, 680.0, 680.0, 550.0, 580.0, 650.0, 620.0, 750.0, 730.0, 690.0, 570.0, 660.0, 690.0, 650.0, 670.0, 690.0, 690.0, 590.0],
'gpa': [2.7, 3.3, 3.3, 1.7, 4.0, 2.7, 2.3, 2.7, 2.3, 2.7, 3.7, 3.3, 4.0, 3.3, 3.9, 2.3, 3.3, 3.7, 3.3, 3.9, 3.7, 1.7, 3.0, 3.7, 3.3, 3.7, 3.3, 3.7, 3.7, 2.3],
'work_experience': [4.0, 6.0, 5.0, 4.0, 4.0, 2.0, 1.0, 1.0, 2.0, 2.0, 5.0, 5.0, 3.0, 4.0, 4.0, 4.0, 1.0, 6.0, 2.0, 4.0, 6.0, 1.0, 2.0, 4.0, 3.0, 6.0, 6.0, 5.0, 5.0, 3.0],
'admitted': [0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0]})
Xtrain = df[['gmat', 'gpa', 'work_experience']]
ytrain = df[['admitted']]
log_reg = sm.Logit(ytrain, Xtrain).fit()
stats1=log_reg.summary()
plt.rc('figure', figsize=(12, 7))
plt.text(0.01, 0.05, str(stats1), {'fontsize': 10}, fontproperties = 'monospace')
plt.axis('off')
plt.tight_layout()
plt.savefig('graph.png')
dendo()
stats()
</code></pre>
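This happens because pyplot keeps drawing on the same implicit "current figure" across calls; a sketch of the usual fix is to create an explicit figure per plot and close it after <code>savefig</code> (the plot contents below are placeholders, and the Agg backend plus temp directory are just so the sketch runs headless):

```python
import os
import tempfile

import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt

outdir = tempfile.mkdtemp()

def save_first():
    fig, ax = plt.subplots(figsize=(9, 8))
    ax.plot([100, 300, 200])                  # placeholder for the dendrogram
    fig.savefig(os.path.join(outdir, 'dendo.png'))
    plt.close(fig)  # closing prevents these axes bleeding into the next figure

def save_second():
    fig, ax = plt.subplots(figsize=(12, 7))
    ax.text(0.01, 0.05, 'regression summary', fontsize=10)
    ax.axis('off')
    fig.savefig(os.path.join(outdir, 'graph.png'))
    plt.close(fig)

save_first()
save_second()
```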
|
<python><matplotlib>
|
2023-03-03 23:51:49
| 1
| 712
|
Allen
|
75,632,764
| 13,014,469
|
How to read audio data (.wav) from azure blob storage using pyspark
|
<p>Hello fellow stackoverflowers.</p>
<p>I am having a problem with reading <strong>audio data</strong> from blob storage using pyspark. I am using databricks in my example.</p>
<p>UPDATED: I have arrived at this solution, which reads the data as raw <strong>binaries</strong>, but it seems that this solution is not optimal. The problem is that pyspark was not designed to work with audio files.</p>
<pre><code># Settings
storage_account_name = "storage_account_name"
blob_name = "blob_name"
storage_account_access_key = "key"
file_name = "audio.wav"
file_type = "binaryFile"
#Spark confinguration and file location
spark.conf.set(
"fs.azure.account.key."+storage_account_name+".blob.core.windows.net",
storage_account_access_key)
file_location = "wasbs://" + blob_name + "@" + storage_account_name + ".blob.core.windows.net/" + file_name
df = spark.read.format(file_type).load(file_location)
</code></pre>
<p>Afterwards you can read the data as following:</p>
<pre><code># Convert spark dataframe to pandas dataframe
df = df.toPandas()
# Sound player
from IPython.display import Audio, display
display(Audio(df['content'].iloc[0]))
</code></pre>
<p><a href="https://i.sstatic.net/XxnLY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XxnLY.png" alt="Audio player" /></a></p>
<p><strong>Please upvote</strong> if you found this helpful, took my time to share it with you.</p>
<p>Thank you!</p>
|
<python><pandas><pyspark><azure-blob-storage><databricks>
|
2023-03-03 23:49:59
| 0
| 781
|
Vojtech Stas
|
75,632,754
| 4,790,871
|
Regex (?J) mode modifier in Python Regex or equivalent ability for named capture groups from different patterns
|
<p>I am trying to capture from two different pattern sequences using a named capture group. <a href="https://stackoverflow.com/questions/58618907/how-to-capture-text-on-either-side-of-a-pattern-into-a-named-capture-group">This SO question</a> solves the problem in PCRE using the mode modifier <code>(?J)</code>, and <a href="https://stackoverflow.com/questions/39029801/how-to-use-a-named-regex-capture-before-or-after-a-non-capturing-group-when-name">this SO question</a> solves a related problem in Python that I haven't succeeded at applying to my use case.</p>
<p>Example test strings:</p>
<pre><code>abc-CAPTUREME-xyz-abcdef
abc-xyz-CAPTUREME-abcdef
</code></pre>
<p>Desired output:</p>
<pre><code>CAPTUREME
CAPTUREME
</code></pre>
<p><code>CAPTUREME</code> appears on either the left or right of the <code>xyz</code> sequence. My initial failed attempt at a regex looked like this:</p>
<pre><code>r'abc-(xyz-(?P<cap>\w+)|(?P<cap>\w+)-xyz)-abcdef'
</code></pre>
<p>But in Python regexes that yields an error <code>(?P<cap> A subpattern name must be unique)</code> and python doesn't support the <code>(?J)</code> modifier that was used in the first answer above to solve the problem.</p>
<p>With a single capture group I can capture <code>CAPTUREME-xyz</code> or <code>xyz-CAPTUREME</code>, but I can't reproduce the example in the 2nd stack overflow article linked above using lookarounds. Every attempt to replicate the 2nd stack overflow article simply doesn't match my string and there are too many differences for me to piece together what's happening.</p>
<pre><code>r'abc-(?P<cap>(xyz-)\w+|\w+(-xyz))-abcdef'
</code></pre>
<p><a href="https://regex101.com/r/NeWrDe/1" rel="nofollow noreferrer">https://regex101.com/r/NeWrDe/1</a></p>
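Since Python's built-in <code>re</code> module rejects duplicate group names, one sketch of a workaround is to give each alternation branch its own name and coalesce them after matching (the names <code>cap_l</code>/<code>cap_r</code> are made up for illustration):

```python
import re

# One named group per alternation branch; only one can match at a time.
pattern = re.compile(r'abc-(?:xyz-(?P<cap_r>\w+)|(?P<cap_l>\w+)-xyz)-abcdef')

def capture(s):
    m = pattern.search(s)
    if not m:
        return None
    # Exactly one branch matched, so the other group is None; coalesce.
    return m.group('cap_r') or m.group('cap_l')
```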
|
<python><regex>
|
2023-03-03 23:47:15
| 1
| 32,449
|
David Parks
|
75,632,726
| 2,893,024
|
SQLAlchemy Mapper Events don't get triggered
|
<p><strong>I can't get <a href="https://docs.sqlalchemy.org/en/14/orm/events.html#mapper-events" rel="nofollow noreferrer">Mapper Events</a> to trigger in SqlAlchemy 2.0</strong></p>
<p><strong>tl;dr:</strong> I am able to perform operations but events never trigger.</p>
<p>Below is the example they have in their <a href="https://docs.sqlalchemy.org/en/14/orm/events.html#mapper-events" rel="nofollow noreferrer">docs</a>:</p>
<pre class="lang-py prettyprint-override"><code>def my_before_insert_listener(mapper, connection, target):
# execute a stored procedure upon INSERT,
# apply the value to the row to be inserted
target.calculated_value = connection.execute(
text("select my_special_function(%d)" % target.special_number)
).scalar()
# associate the listener function with SomeClass,
# to execute during the "before_insert" hook
event.listen(
SomeClass, 'before_insert', my_before_insert_listener)
</code></pre>
<p>And here, is my attempt to implement basically the same thing:</p>
<pre class="lang-py prettyprint-override"><code># models.py
from sqlalchemy.orm import declarative_base
from sqlalchemy import Column, String, DateTime, func, event, MetaData
metadata_obj = MetaData()
Base = declarative_base(metadata=metadata_obj)
class Opportunity(Base):
__tablename__ = "opportunity"
id = Column(String, primary_key=True, default=generate)
created_at = Column(DateTime, server_default=func.now())
updated_at = Column(DateTime, server_default=func.now(), onupdate=func.now())
name = Column(String, nullable=False)
opportunity_table = Opportunity.__table__
</code></pre>
<p>I also have the following listener</p>
<pre class="lang-py prettyprint-override"><code>#models.py
def opportunity_after_update(mapper, connection, target):
print(f"Opportunity {target.id} was updated!")
event.listen(Opportunity, 'after_update', opportunity_after_update)
</code></pre>
<p>The following code successfully updates the table in my database, but the listener never gets called:</p>
<pre class="lang-py prettyprint-override"><code>with Session() as session:
    session.query(Opportunity).filter_by(id=some_id).update({'name': 'New Name'})
session.commit()
</code></pre>
<p>Any clues that may be helpful?</p>
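The likely cause is that <code>Query.update()</code> emits a bulk UPDATE statement that bypasses the ORM unit of work, so mapper events never fire; loading the instance and mutating its attribute goes through the ORM flush and does trigger them. A self-contained sketch against an in-memory SQLite database (the model here is a minimal stand-in for the one above):

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Opportunity(Base):
    __tablename__ = 'opportunity'
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)

fired = []

def after_update(mapper, connection, target):
    fired.append(target.id)

event.listen(Opportunity, 'after_update', after_update)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Opportunity(id=1, name='Old'))
    session.commit()

    # A bulk Query.update() would bypass mapper events entirely.
    # Mutating a loaded instance goes through the unit of work instead:
    opp = session.get(Opportunity, 1)
    opp.name = 'New Name'
    session.commit()  # flush triggers after_update
```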
|
<python><sql><sqlalchemy>
|
2023-03-03 23:39:07
| 1
| 3,576
|
Michael Seltenreich
|
75,632,612
| 11,546,773
|
Fastest way to apply histcount on an array grouped by previous bin result
|
<p>I have 2 large numpy arrays which I need to bin according to some bin values. The first array needs to be binned with the data1Bins values. Then the data in the second array needs to be grouped by the result of the bins on the first array. When this grouping is done, the amount of values in each bin needs to be counted.</p>
<p>The counted result needs to be added as a row to a data frame, and at the end the total sum of the data frame needs to be calculated so each element can be divided by that total.</p>
<p>Despite my working solution, I'm wondering if there isn't a more elegant or faster one. Time matters a lot since this function will be executed many times.</p>
<p>That all said, I'm always happy to hear possible improvements to this small piece of code. The current timing is <code>0.009527206420898438 s</code>.</p>
<p><strong>Current solution:</strong></p>
<pre><code>import pandas as pd
import numpy as np
import time
data1 = np.random.uniform(low=0, high=25, size=(50,))
data2 = np.random.uniform(low=0, high=25, size=(50,))
data1Bins = [0, *np.arange(1.5, 25, 1), 100]
data2Bins = [0, *np.arange(7.5, 360, 15), 360]
# Speed up from here ->
start = time.time()
inds = np.digitize(data1, data1Bins)
df = pd.DataFrame()
# 25 bins
for i in range(0, len(data1Bins)):
binned_data = data2[np.asarray(inds == i).nonzero()[0].tolist()]
count, bin_edges = np.histogram(binned_data, bins=data2Bins)
count = np.array([(count[0] + count[-1]), *count[1:-1]])
df = pd.concat([df, pd.DataFrame(count.reshape(-1, len(count)))])
# Set index
df = df.reset_index(drop=True)
# Get total sum
total_sum = df.sum().sum()
# Divide each element by total sum
df = df/ total_sum
df['Name'] = 'abc'
df['Id'] = 'def'
df['Nr'] = np.arange(df.shape[0])
print(time.time() - start)
print(df)
</code></pre>
<p><strong>End result:</strong></p>
<pre><code> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 Name Id Nr
0 0.000000 0.000000 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 0
1 0.020408 0.020408 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 1
2 0.020408 0.000000 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 2
3 0.020408 0.000000 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 3
4 0.000000 0.020408 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 4
5 0.000000 0.020408 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 5
6 0.061224 0.040816 0.020408 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 6
7 0.000000 0.000000 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 7
8 0.020408 0.081633 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 8
9 0.000000 0.040816 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 9
10 0.081633 0.000000 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 10
11 0.000000 0.040816 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 11
12 0.000000 0.020408 0.020408 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 12
13 0.000000 0.040816 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 13
14 0.000000 0.040816 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 14
15 0.000000 0.020408 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 15
16 0.000000 0.000000 0.020408 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 16
17 0.000000 0.020408 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 17
18 0.020408 0.000000 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 18
19 0.040816 0.000000 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 19
20 0.000000 0.020408 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 20
21 0.020408 0.000000 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 21
22 0.020408 0.040816 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 22
23 0.020408 0.020408 0.020408 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 23
24 0.000000 0.081633 0.000000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 abc def 24
</code></pre>
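A possibly faster approach, sketched with the same bin definitions: <code>np.histogram2d</code> performs the digitize-then-histogram of the loop version in one vectorized call, and folding the last angular column into the first mirrors the <code>count[0] + count[-1]</code> wrap-around. One caveat: the loop above also emits a row for values below the first edge (always empty for non-negative data), so the vectorized result has 25 rows rather than 26.

```python
import numpy as np

rng = np.random.default_rng(0)
data1 = rng.uniform(0, 25, 50)
data2 = rng.uniform(0, 25, 50)

data1_bins = np.array([0, *np.arange(1.5, 25, 1), 100])
data2_bins = np.array([0, *np.arange(7.5, 360, 15), 360])

# 2D histogram: one row per data1 bin, one column per data2 bin.
counts, _, _ = np.histogram2d(data1, data2, bins=[data1_bins, data2_bins])

# Fold the last angular column into the first (wrap-around), as in the loop.
counts = np.column_stack([counts[:, 0] + counts[:, -1], counts[:, 1:-1]])

# Normalise by the grand total.
props = counts / counts.sum()
```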
|
<python><pandas><dataframe><numpy>
|
2023-03-03 23:12:47
| 1
| 388
|
Sam
|
75,632,596
| 3,861,965
|
GCP Billing error when Billing is enabled
|
<p>I have this Python code:</p>
<pre><code>def upload_to_gcs(bucket_name, local_file_path, gcs_file_name):
"""
Uploads local files to Google Cloud Storage (GCS).
:param bucket_name: string
:param local_file_path: string
:param gcs_file_name: string
:return: None
"""
client = storage.Client()
bucket = client.bucket(bucket_name)
blob = bucket.blob(gcs_file_name)
blob.upload_from_filename(local_file_path)
</code></pre>
<p>which is failing with this error:</p>
<pre><code>Exception has occurred: Forbidden
403 POST https://storage.googleapis.com/upload/storage/v1/b/monzo/o?uploadType=multipart: {
"error": {
"code": 403,
"message": "The billing account for the owning project is disabled in state closed",
"errors": [
{
"message": "The billing account for the owning project is disabled in state closed",
"domain": "global",
"reason": "accountDisabled",
"locationType": "header",
"location": "Authorization"
}
]
}
}
</code></pre>
<p>I have confirmed that billing is definitely enabled; see below.</p>
<p><a href="https://i.sstatic.net/mUYaC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mUYaC.png" alt="billing_!" /></a>
<a href="https://i.sstatic.net/hpHQm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hpHQm.png" alt="project billing" /></a></p>
<p>I also went through all possible solutions in <a href="https://stackoverflow.com/questions/62732504/google-cloud-403-error-the-billing-account-for-the-owning-project-is-disabled-in#:%7E:text=The%20%22The%20billing%20account%20for,is%20recommended%20to%20try%20again.">this similar question</a>. Nothing worked, double checked the bucket exists and all that. Finally also tried reaching Google but got stuck in chatbot which didn't help.</p>
<p>EDIT: Tried checking in Cloud Shell and got this back</p>
<pre><code>miguel@cloudshell:~$ gcloud beta billing projects describe miguel-377315
billingAccountName: billingAccounts/0193C3-8A2BD3-44C7DB
billingEnabled: true
name: projects/miguel-377315/billingInfo
projectId: miguel-377315
</code></pre>
<p>Does anyone have an idea?</p>
|
<python><google-cloud-platform><google-cloud-billing>
|
2023-03-03 23:10:10
| 1
| 2,174
|
mcansado
|
75,632,469
| 1,185,242
|
Why does np.astype('uint8') give different results on Windows versus Mac?
|
<p>I have a <code>(1000,1000,3)</code> shaped numpy array (<code>dtype='float32'</code>) and when I cast it to <code>dtype='uint8'</code> I get different results on Windows versus Mac.</p>
<p>Array is available here: <a href="https://www.dropbox.com/s/jrs4n2ayh86s0fn/image.npy?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/jrs4n2ayh86s0fn/image.npy?dl=0</a></p>
<p>On Mac</p>
<pre><code>>>> import numpy as np
>>> X = np.load('image.npy')
>>> X = X.astype('uint8')
>>> X.sum()
167942490
</code></pre>
<p>On Windows</p>
<pre><code>>>> import numpy as np
>>> X = np.load('image.npy')
>>> X = X.astype('uint8')
>>> X.sum()
323510676
</code></pre>
<p>Also reproduces with this array:</p>
<pre><code>import numpy as np
X = np.array([
[[46410., 42585., 32640.],
[45645., 41820., 31875.],
[45390., 41310., 32130.]],
[[44880., 41055., 31110.],
[44115., 40290., 30345.],
[46410., 42330., 33150.]],
[[45390., 41310., 32130.],
[46155., 42075., 32895.],
[42840., 38760., 30090.]]], dtype=np.float32)
print(X.sum(), X.astype('uint8').sum())
</code></pre>
<p>Prints <code>1065135.0 2735</code> on Windows and <code>1065135.0 1860</code> on Mac.</p>
<p>Here are results with different OS and Python and Numpy:</p>
<pre><code>Python 3.8.8 (Win) Numpy 1.22.4 => 1065135.0 2735
Python 3.10.6 (Mac) Numpy 1.24.2 => 1065135.0 2735
Python 3.7.12 (Mac) Numpy 1.21.6 => 1065135.0 1860
</code></pre>
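The root cause is that converting floats outside <code>[0, 255]</code> to <code>uint8</code> is undefined behaviour at the C level, so NumPy may legitimately produce different results per platform, compiler, and NumPy version. A sketch of making the reduction explicit and therefore deterministic (saturating with clip, or wrapping modulo 256, depending on the intent):

```python
import numpy as np

X = np.array([46410., 42585., 32640.], dtype=np.float32)

# Saturate: every out-of-range value pins to 0 or 255.
clipped = np.clip(X, 0, 255).astype(np.uint8)

# Wrap: take the value modulo 256, via an intermediate integer cast.
wrapped = (X.astype(np.int64) % 256).astype(np.uint8)
```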
|
<python><numpy>
|
2023-03-03 22:45:44
| 1
| 26,004
|
nickponline
|
75,632,438
| 3,357,935
|
How do I read a specific line from a string in Python?
|
<p>How can I read a specific line from a string in Python? For example, let's say I want line 2 of the following string:</p>
<pre><code>string = """The quick brown fox
jumps over the
lazy dog."""
line = getLineFromString(string, 2)
print(line) # jumps over the
</code></pre>
<p>There are several questions about <a href="https://stackoverflow.com/q/2081836/3357935">reading specific lines from a file</a>, but how would I read specific lines from a string?</p>
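A minimal sketch of the hypothetical <code>getLineFromString</code> helper, using <code>str.splitlines()</code> with a 1-indexed line number as in the example above:

```python
def get_line_from_string(text, line_number):
    """Return the 1-indexed line_number-th line of a string."""
    return text.splitlines()[line_number - 1]

string = """The quick brown fox
jumps over the
lazy dog."""
```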
|
<python><line>
|
2023-03-03 22:40:49
| 2
| 27,724
|
Stevoisiak
|
75,632,012
| 1,039,860
|
How to tell which OptionMenu was changed using python and tkinter
|
<p>I am building a spreadsheet of sorts. One of the columns has an OptionMenu in each cell. Each OptionMenu has its own class member StringVar (in an array) associated with it. All OptionMenus use a single callback.
How do I know which OptionMenu (and its associated StringVar) was selected?</p>
<pre><code> self.event_option_string_var = []
.
.
.
for col in range(HeaderCol.MAX_COL):
if col == TransactionsGrid.DATE_COL.col:
widget = DateEntry(self.frame_main, selectmode='day')
widget.grid(row=1, column=1, padx=0)
Entry(widget).configure(highlightthickness=0)
elif col == TransactionsGrid.EVENT_COL.col:
string_var = StringVar(self.root)
string_var.set('Select an Event')
self.event_option_string_var.append(string_var)
widget = OptionMenu(self.frame_main, string_var,
string_var,
*TenantEvent.TENANT_EVENTS,
command=self.option_changed)
else:
widget = Entry(self.frame_main)
widget.configure(highlightthickness=0)
button_row.append(widget)
widget.grid(column=col, row=row + 2, sticky="")
.
.
.
def option_changed(self, *args):
self.event_option_string_var['text'] = f'You selected: {self.event_option_string_var.get()}'
</code></pre>
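A sketch of the usual pattern: bind the row index (or the StringVar itself) into each OptionMenu's command with <code>functools.partial</code> at creation time, e.g. <code>OptionMenu(..., command=partial(self.option_changed, row))</code>, so the shared handler knows which widget fired. The idea is modelled below without Tk so it runs headless; the 'Eviction' value is made up.

```python
from functools import partial

selections = {}

def option_changed(row, value):
    # `row` was bound in at widget-creation time; `value` is what Tk
    # would pass when the user picks an option from that menu.
    selections[row] = value

# One partial per OptionMenu, each carrying its own row index.
callbacks = [partial(option_changed, row) for row in range(3)]
callbacks[1]('Eviction')  # simulate the user choosing from menu 1
```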
|
<python><tkinter><optionmenu>
|
2023-03-03 21:26:08
| 1
| 1,116
|
jordanthompson
|
75,631,799
| 2,835,670
|
Finding the values of neurons using keras
|
<p>I have a very simple neural net:</p>
<pre><code>model=Sequential()
model.add(Dense(units=2, activation='relu', input_dim=2))
model.add(Dense(units=1, activation='sigmoid'))
</code></pre>
<p>Basically I have an input layer with 2 values, a hidden layer with 2 neurons, and an output layer with one.</p>
<pre><code>print(model.get_weights())
</code></pre>
<p>gives me the weights, while</p>
<pre><code>print(model.predict(x)
</code></pre>
<p>gives me the value of the output layer.</p>
<p>I have trained the model and my weights are:</p>
<pre><code>[[2.91677 3.9492033]
[2.911809 3.9377801]]
</code></pre>
<p>The bias is:</p>
<pre><code>[-2.9282627 1.0013505]
</code></pre>
<p>I calculated the values of input into the hidden layer for specific values by:</p>
<pre><code>print(numpy.add(numpy.matmul(model.get_weights()[0], [0,0]), model.get_weights()[1]))
print(numpy.add(numpy.matmul(model.get_weights()[0], [1,0]), model.get_weights()[1]))
print(numpy.add(numpy.matmul(model.get_weights()[0], [0,1]), model.get_weights()[1]))
print(numpy.add(numpy.matmul(model.get_weights()[0], [1,1]), model.get_weights()[1]))
</code></pre>
<p>This gives me:</p>
<pre><code>[-2.92826271 1.00135052]
[-0.01149273 3.91315949]
[1.02094054 4.93913066]
[3.93771052 7.85093963]
</code></pre>
<p>So, since I am using "relu", I expected that keras would give me</p>
<pre><code>[0 1.00135052]
[0 3.91315949]
[1.02094054 4.93913066]
[3.93771052 7.85093963]
</code></pre>
<p>but when I do:</p>
<pre><code>model_dense_2 = Model(inputs=model.input, outputs = model.layers[1].input)
pred_dense_2 = model_dense_2.predict([[0,0]], steps = 1)
print(pred_dense_2)
model_dense_2 = Model(inputs=model.input, outputs = model.layers[1].input)
pred_dense_2 = model_dense_2.predict([[1,0]], steps = 1)
print(pred_dense_2)
model_dense_2 = Model(inputs=model.input, outputs = model.layers[1].input)
pred_dense_2 = model_dense_2.predict([[0,1]], steps = 1)
print(pred_dense_2)
model_dense_2 = Model(inputs=model.input, outputs = model.layers[1].input)
pred_dense_2 = model_dense_2.predict([[1,1]], steps = 1)
print(pred_dense_2)
</code></pre>
<p>I get:</p>
<pre><code>[[0. 1.0013505]]
[[0. 4.950554]]
[[0. 4.939131]]
[[2.9003162 8.888334 ]]
</code></pre>
<p>So I am OK with the first vector; the first coordinate of the second one is correct but not the second, the second coordinate of the third one is correct but not the first, and the fourth one is entirely different.</p>
<p>What am I not understanding?</p>
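The discrepancy comes from the weight orientation: Keras stores a Dense layer's kernel with shape <code>(input_dim, units)</code>, so the forward pass is <code>x @ W + b</code>, not <code>W @ x + b</code> as computed above. Redoing the calculation with the right orientation (weights copied from the question) reproduces the model output:

```python
import numpy as np

# Kernel shape is (input_dim, units), so inputs multiply from the left.
W = np.array([[2.91677,  3.9492033],
              [2.911809, 3.9377801]])
b = np.array([-2.9282627, 1.0013505])

def hidden(x):
    return np.maximum(x @ W + b, 0.0)  # dense layer + relu

out = hidden(np.array([1.0, 0.0]))  # matches the keras value ~[0, 4.950554]
```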
|
<python><keras><deep-learning><neural-network>
|
2023-03-03 20:56:33
| 0
| 2,111
|
user
|
75,631,703
| 6,077,239
|
Polars list.to_struct() throws "PanicException: expected known type"
|
<p>This is a new question/issue as a follow up to <a href="https://stackoverflow.com/questions/75516576/how-to-return-multiple-stats-as-multiple-columns-in-polars-grouby-context">How to return multiple stats as multiple columns in Polars grouby context?</a> and <a href="https://stackoverflow.com/questions/75595957/how-to-flatten-split-a-tuple-of-arrays-and-calculate-column-means-in-polars-data/75596769#75596769">How to flatten/split a tuple of arrays and calculate column means in Polars dataframe?</a></p>
<p>Basically, the problem/issue can be easily illustrated by the example below:</p>
<pre class="lang-py prettyprint-override"><code>from functools import partial
import polars as pl
import statsmodels.api as sm
def ols_stats(s, yvar, xvars):
df = s.struct.unnest()
reg = sm.OLS(df[yvar].to_numpy(), df[xvars].to_numpy(), missing="drop").fit()
return pl.Series(values=(reg.params, reg.tvalues), nan_to_null=True)
df = pl.DataFrame(
{
"day": [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
"y": [1, 6, 3, 2, 8, 4, 5, 2, 7, 3],
"x1": [1, 8, 2, 3, 5, 2, 1, 2, 7, 3],
"x2": [8, 5, 3, 6, 3, 7, 3, 2, 9, 1],
}
).lazy()
res = df.group_by("day").agg(
pl.struct("y", "x1", "x2")
.map_elements(partial(ols_stats, yvar="y", xvars=["x1", "x2"]))
.alias("params")
)
res.with_columns(
pl.col("params").list.eval(pl.element().list.explode()).list.to_struct()
).unnest("params").collect()
</code></pre>
<p>After running the code above, the following error is got:</p>
<pre><code>PanicException: expected known type
</code></pre>
<p>But when <code>.lazy()</code> and <code>.collect()</code> are removed from the code above, the code works perfectly as intended. Below are the results (expected behavior) if running in eager mode.</p>
<pre><code>shape: (2, 5)
┌─────┬──────────┬──────────┬──────────┬───────────┐
│ day ┆ field_0 ┆ field_1 ┆ field_2 ┆ field_3 │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ f64 ┆ f64 ┆ f64 ┆ f64 │
╞═════╪══════════╪══════════╪══════════╪═══════════╡
│ 2 ┆ 0.466089 ┆ 0.503127 ┆ 0.916982 ┆ 1.451151 │
│ 1 ┆ 1.008659 ┆ -0.03324 ┆ 3.204266 ┆ -0.124422 │
└─────┴──────────┴──────────┴──────────┴───────────┘
</code></pre>
<p>So, what is the problem and how am I supposed to resolve it?</p>
|
<python><python-polars>
|
2023-03-03 20:43:24
| 2
| 1,153
|
lebesgue
|
75,631,648
| 4,117,869
|
Confluent Kafka - Lowest read offset
|
<p>When consuming from a non-compacted topic, how can I determine the earliest available offset on the topic? In my case the message retention is 7 days. Low watermark does not help in this case because it is 0 and of course not what I am looking for. Is there any alternative to</p>
<pre><code> get_watermark_offsets
</code></pre>
<p>in order to get the earliest available offset to be consumed?</p>
|
<python><apache-kafka><confluent-kafka-python>
|
2023-03-03 20:34:23
| 1
| 482
|
rpd
|
75,631,533
| 13,916,049
|
Retain data types after concatenating pandas dataframes
|
<p>The data type of the last column in all the dataframes is "object".
The remaining columns of the <code>mut</code> dataframe are binary, whereas the remaining columns of the other dataframes are float.</p>
<p>In my code below, the resulting dataframe <code>df_scaled</code> has "object" data type for all the columns.</p>
<pre><code>from sklearn.preprocessing import MinMaxScaler

mms = MinMaxScaler()
X = df.iloc[:, :-1].astype(int)
X_scaled = pd.DataFrame(mms.fit_transform(X.values), columns=X.columns, index=X.index)
y = df.iloc[:, -1:]
df_scaled = pd.concat([X_scaled, y], axis=1)
df_scaled = pd.concat([mut, df_scaled])
</code></pre>
<p>Data:</p>
<p><code>df</code></p>
<pre><code>pd.DataFrame({'TCGA-Y8-A8RY-01A': {'hsa-let-7a-3p': 2.082843013790784,
'hsa-let-7b-5p': 3.5720402468662744,
'hsa-let-7b-3p': 1.454168803064294,
'hsa-let-7c-5p': 3.521051767831394},
'TCGA-Y8-A8RZ-01A': {'hsa-let-7a-3p': 2.124989064575205,
'hsa-let-7b-5p': 3.33033877243824,
'hsa-let-7b-3p': 1.795842048944672,
'hsa-let-7c-5p': 3.0978660073056066},
'TCGA-Y8-A8S0-01A': {'hsa-let-7a-3p': 1.9381691147779496,
'hsa-let-7b-5p': 3.6787575193202096,
'hsa-let-7b-3p': 1.4976013154110766,
'hsa-let-7c-5p': 3.6586682721571377},
'TCGA-Y8-A8S1-01A': {'hsa-let-7a-3p': 2.0583218372956287,
'hsa-let-7b-5p': 3.516734922406729,
'hsa-let-7b-3p': 1.3254164702581286,
'hsa-let-7c-5p': 3.3594612940444466},
'category': {'hsa-let-7a-3p': 'miRNA',
'hsa-let-7b-5p': 'miRNA',
'hsa-let-7b-3p': 'miRNA',
'hsa-let-7c-5p': 'miRNA'}})
</code></pre>
<p><code>mut</code></p>
<pre><code>pd.DataFrame({'TCGA-Y8-A8RY-01A': {'IGF2R': 0, 'NBEA': 0, 'KMT2D': 0, 'HERC2': 0},
'TCGA-Y8-A8RZ-01A': {'IGF2R': 0, 'NBEA': 0, 'KMT2D': 0, 'HERC2': 0},
'TCGA-Y8-A8S0-01A': {'IGF2R': 0, 'NBEA': 0, 'KMT2D': 0, 'HERC2': 0},
'TCGA-Y8-A8S1-01A': {'IGF2R': 0, 'NBEA': 1, 'KMT2D': 0, 'HERC2': 0},
'category': {'IGF2R': 'Mutation',
'NBEA': 'Mutation',
'KMT2D': 'Mutation',
'HERC2': 'Mutation'}})
</code></pre>
|
<python><pandas><types>
|
2023-03-03 20:17:29
| 0
| 1,545
|
Anon
|
75,631,514
| 13,083,700
|
Decoding XMP data read using python from .lrcat
|
<p>I'm reading .lrcat data using a python script and sqlite3.
I have a column in the Adobe_AdditionalMetadata table called xmp with an odd encoding, probably an Adobe Lightroom encoding.
Here's my chunk of code:</p>
<pre><code>import sqlite3
from libxmp import XMPFiles

conn = sqlite3.connect('my_catalog.lrcat')  # path to the Lightroom catalog
cursor = conn.execute('SELECT xmp FROM Adobe_AdditionalMetadata')
row = cursor.fetchone()
xmp_data = row[0]
decoded = xmp_data.decode('utf-8')  # this is where it fails
</code></pre>
<p>I tried <code>.decode('utf-8')</code> and a few other ways of converting the bytes to a string, but none of them worked.
I know there's exiftool, but I don't see any way to decode the xmp_data with it. Apparently it helps with reading .xmp files, but not with the data in the catalog.
Any ideas what I could try? Something with Lightroom APIs maybe?</p>
|
<python><encoding><xmp><lightroom>
|
2023-03-03 20:15:36
| 2
| 470
|
Alex
|
75,631,429
| 1,175,496
|
Python str vs unicode on Windows, Python 2.7, why does 'á' become '\xa0'
|
<p><strong>Background</strong></p>
<p>I'm using a Windows machine. I know Python 2.* is not supported anymore, but I'm still learning Python 2.7.16. I also have Python 3.7.1. I know in Python 3.* <a href="https://stackoverflow.com/a/18034409/1175496">"<code>unicode</code> was renamed to <code>str</code>"</a></p>
<p>I use Git Bash as my main shell.</p>
<p>I read <a href="https://stackoverflow.com/questions/18034272/python-str-vs-unicode-types">this question</a>. I feel like I understand the difference between Unicode (code points) and encodings (different encoding systems; bytes).</p>
<p><strong>Question</strong></p>
<ul>
<li>When I evaluate <code>'á'</code>, I expect to get <code>'\xc3\xa1'</code> <a href="https://stackoverflow.com/a/49138962/1175496">as shown in this answer</a></li>
<li>When I evaluate <code>len('á')</code>, I expect to get <code>2</code>, <a href="https://stackoverflow.com/a/18034409/1175496">as shown in this answer</a></li>
</ul>
<p>But I don't get the expected results.
When running <code>C:\Python27\python.exe</code> from Git Bash:</p>
<pre><code>Python 2.7.16 (v2.7.16:413a49145e, Mar 4 2019, 01:37:19) [MSC v.1500 64 bit (AMD64)] on win32
>>> 'á'
'\xa0'
#'\xc3\xa1' expected
>>> len('á')
1
#2 expected
# one more for reference:
>>> 'à'
'\x85'
#'\xc3\xa0' expected
</code></pre>
<p>Can you help me understand why I get the output shown above?</p>
<p><strong>Specifically why does <code>'á'</code> become <code>'\xa0'</code>?</strong></p>
<p><strong>What I tried</strong></p>
<p>I can use <code>unicode</code> object to get the results I expect:</p>
<pre><code>>>> u'á'.encode('utf-8')
'\xc3\xa1'
>>> len(u'á'.encode('utf-8'))
2
</code></pre>
<p>I can open <strong>IDLE</strong> and I get different results -- not <strong>expected</strong> results, but at least I understand these results.</p>
<pre><code>Python 2.7.16 (v2.7.16:413a49145e, Mar 4 2019, 01:37:19) [MSC v.1500 64 bit (AMD64)] on win32
>>> 'á'
'\xe1'
>>> len('á')
1
>>> 'à'
'\xe0'
</code></pre>
<p>The IDLE results are unexpected but I still understand the results; <a href="https://stackoverflow.com/questions/18034272/python-str-vs-unicode-types#comment26380927_18034277">Martijn Peters explains</a> why <code>'á'</code> become <code>'\xe1'</code> <strong>in the Latin 1 encoding</strong>.</p>
<p>So why does IDLE give different results from running my Git Bash Python 2.7.16 executable directly? In other words, if IDLE is <strong>using Latin-1 encoding</strong> for my input, <strong>what encoding is used by my Git Bash Python 2.7.16 executable, such that <code>'á'</code> becomes <code>'\xa0'</code></strong>?</p>
<p><strong>What I'm wondering</strong></p>
<p>Is my default encoding the problem? I'm too scared to <a href="https://stackoverflow.com/questions/5419/python-unicode-and-the-windows-console#comment36374776_2013263">change the default encoding.</a></p>
<pre><code>>>> import sys; sys.getdefaultencoding()
'ascii'
</code></pre>
<p>I feel like it's my <em>terminal's</em> encoding that's the problem? (I use git bash) Should I try to <a href="https://stackoverflow.com/a/32176732/1175496">change the <code>PYTHONIOENCODING</code> environment variable</a>?</p>
<p>I try to check <a href="https://stackoverflow.com/a/36692549/1175496">the git bash <code>locale</code></a>, the result is:</p>
<pre><code>LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_ALL=
</code></pre>
<p>Also I'm using interactive Python , should I try a file instead, using this?</p>
<pre><code># -*- coding: utf-8 -*- sets the source file's encoding, not the output encoding.
</code></pre>
<p>I know <a href="https://stackoverflow.com/a/4637795/1175496">upgrading to Python 3 is a solution.</a>, but I'm still curious about why my Python 2.7.16 behaves differently.</p>
|
<python><windows><encoding>
|
2023-03-03 20:03:24
| 1
| 21,588
|
Nate Anderson
|
75,631,407
| 12,671,057
|
Fast way to find max number of consecutive 1-bits in million-bit numbers
|
<p>For example, 123456789 in binary is <code>111010110111100110100010101</code>. The max number of consecutive 1-bits is <code>4</code>. I got interested in solving that efficiently for very large numbers (million bits or even more). I came up with this:</p>
<pre><code>def onebits(n):
ctr = 0
while n:
n &= n >> 1
ctr += 1
return ctr
</code></pre>
<p>The <code>n &= n >> 1</code> simultaneously cuts off the top 1-bit of every streak of consecutive 1-bits. I repeat that until each streak is gone, counting how many steps it took. For example (all binary):</p>
<pre><code> 11101011 (start value)
-> 1100001
-> 100000
-> 0
</code></pre>
<p>It took three steps because the longest streak had three 1-bits.</p>
<p>For <em>random</em> numbers, where streaks are short, this is <a href="https://stackoverflow.com/a/75629062/12671057">pretty fast</a> (<code>Kelly3</code> in that benchmark). But for numbers with long streaks, it takes up to O(b²) time, where b is the size of n in bits. Can we do better?</p>
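For reference, a few sanity checks I'm using (the function is repeated here so the snippet runs standalone):

```python
def onebits(n):
    # Each pass of n &= n >> 1 shortens every run of 1-bits by one,
    # so the number of passes until n == 0 is the longest run length.
    ctr = 0
    while n:
        n &= n >> 1
        ctr += 1
    return ctr

print(onebits(123456789))        # 4, as in the example above
print(onebits(0b11101011))       # 3
print(onebits(0))                # 0
print(onebits((1 << 1000) - 1))  # 1000 -- worst case: one solid run of ones
```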
|
<python><performance><binary><biginteger>
|
2023-03-03 20:00:05
| 2
| 27,959
|
Kelly Bundy
|
75,631,378
| 3,713,236
|
sklearn.impute.SimpleImputer: Unable to fill in the most common value for a list of dataframe columns
|
<p>I have a list of columns of a dataframe that have NA's in them (below). The <code>dtype</code> of all these columns is <code>str</code>.</p>
<pre><code>X_train_objects = ['HomePlanet',
'Destination',
'Name',
'Cabin_letter',
'Cabin_number',
'Cabin_letter_2']
</code></pre>
<p>I would like to use <code>SimpleImputer</code> to fill in the NA's with the most common value (mode). However, I am getting a <code>ValueError: Columns must be same length as key</code>. What is the reason for this? My code seems correct to me.</p>
<p>Dataframe sample (called <code>X_train</code>) of the <code>Destination</code> column being <code>np.NA</code>s:</p>
<pre><code>{'PassengerId': {47: '0045_02',
128: '0138_02',
139: '0152_01',
347: '0382_01',
430: '0462_01'},
'HomePlanet': {47: 'Mars',
128: 'Earth',
139: 'Earth',
347: nan,
430: 'Earth'},
'CryoSleep': {47: 1, 128: 0, 139: 0, 347: 0, 430: 1},
'Destination': {47: nan, 128: nan, 139: nan, 347: nan, 430: nan},
'Age': {47: 19.0, 128: 34.0, 139: 41.0, 347: 23.0, 430: 50.0},
'VIP': {47: 0, 128: 0, 139: 0, 347: 0, 430: 0},
'RoomService': {47: 0.0, 128: 0.0, 139: 0.0, 347: 348.0, 430: 0.0},
'FoodCourt': {47: 0.0, 128: 22.0, 139: 0.0, 347: 0.0, 430: 0.0},
'ShoppingMall': {47: 0.0, 128: 0.0, 139: 0.0, 347: 0.0, 430: 0.0},
'Spa': {47: 0.0, 128: 564.0, 139: 0.0, 347: 4.0, 430: 0.0},
'VRDeck': {47: 0.0, 128: 207.0, 139: 607.0, 347: 368.0, 430: 0.0},
'Name': {47: 'Mass Chmad',
128: 'Monah Gambs',
139: 'Andan Estron',
347: 'Blanie Floydendley',
430: 'Ronia Sosanturney'},
'Transported': {47: 1, 128: 0, 139: 0, 347: 0, 430: 0},
'Cabin_letter': {47: 'F', 128: 'E', 139: 'F', 347: 'G', 430: 'G'},
'Cabin_number': {47: '10', 128: '5', 139: '32', 347: '64', 430: '67'},
'Cabin_letter_2': {47: 'P', 128: 'P', 139: 'P', 347: 'P', 430: 'S'}}
</code></pre>
<p>My Code:</p>
<pre><code>imputer = SimpleImputer(missing_values=np.NaN, strategy='most_frequent')
X_train[X_train_objects] = imputer.fit_transform(X_train[X_train_objects].values.reshape(-1,1))[:,0]
</code></pre>
|
<python><pandas><scikit-learn><imputation>
|
2023-03-03 19:56:27
| 1
| 9,075
|
Katsu
|
75,631,363
| 7,376,511
|
Type-hint a nested json api response
|
<p>Let's say I have an api endpoint that returns a complex response.</p>
<pre><code>data = requests.get("https://my.api/garbage").json()
# {
# "a": 1,
# "b": "asd",
# "c": [1,2],
# "d": {
# "e": "dsa",
# "f": 1,
# },
# "g": True
# }
</code></pre>
<p>I pass this response around various places in order to process it. What is the standard way to type-hint this data?
I looked into TypedDict, but it does not seem to allow nested definitions without declaring multiple TypedDicts, which is undesirable for very nested data, and inline declaration is also not allowed.</p>
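To make that concrete, here is a sketch of the boilerplate I mean for just this one payload (the class names are mine):

```python
from typing import List, TypedDict

# One TypedDict class per nesting level, even for this tiny response.
class Inner(TypedDict):
    e: str
    f: int

class Payload(TypedDict):
    a: int
    b: str
    c: List[int]
    d: Inner
    g: bool

data: Payload = {
    "a": 1, "b": "asd", "c": [1, 2],
    "d": {"e": "dsa", "f": 1}, "g": True,
}
```

With this, mypy does see <code>data["d"]["e"]</code> as <code>str</code>, but a realistic response would need many more such classes.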
<p>I similarly looked into attrs/cattrs to serialize this data, but those libraries do not seem to be able to handle nested dictionaries either without a lot of added complexity and declarations. I don't think it's reasonable to have to declare 10+ dataclasses for an api response that can be just summed as <code>dict[str, dict[str, str | int]]</code>, for example.</p>
<p>Could someone provide a concrete example of how I would be able to pass around this data object (or any other complex api response, from any arbitrary endpoint) in python and allow mypy or other typing helpers to recognize the data correctly?</p>
<p>Ideally if I were to pass this object to a function, if properly typed, mypy or my IDE should be able to recognize that <code>data["d"]["e"]</code> (or <code>data.d.e</code>, if the response to this question provides a serialized object) is a string.</p>
|
<python><serialization><python-dataclasses><typeddict>
|
2023-03-03 19:54:10
| 0
| 797
|
Some Guy
|
75,631,315
| 5,212,614
|
Using Natural Language Processing, how can we add our own Stop Words to a list?
|
<p>I am testing the library below, based on this code sample:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
from sklearn.feature_extraction import text
from collections import Counter
df_new = pd.DataFrame(['okay', 'yeah', 'thank', 'im'])
stop_words = text.ENGLISH_STOP_WORDS.union(df_new)
#stop_words
w_counts = Counter(w for w in ' '.join(df['text_without_stopwords']).split() if w.lower() not in stop_words)
df_words = pd.DataFrame.from_dict(w_counts, orient='index').reset_index()
df_words.columns = ['word','count']
import seaborn as sns
# selecting top 20 most frequent words
d = df_words.nlargest(columns="count", n = 25)
plt.figure(figsize=(20,5))
ax = sns.barplot(data=d, x= "word", y = "count")
ax.set(ylabel = 'Count')
plt.show()
</code></pre>
<p>I'm seeing this chart.</p>
<p><a href="https://i.sstatic.net/CJpwt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CJpwt.png" alt="enter image description here" /></a></p>
<p>I'm trying to add these words to stop words: 'okay', 'yeah', 'thank', 'im'</p>
<p>But...they are all coming through!! What's wrong here??</p>
|
<python><python-3.x><nlp>
|
2023-03-03 19:48:20
| 2
| 20,492
|
ASH
|
75,631,259
| 15,915,737
|
Manage github Action Workflow with Prefect Cloud 2
|
<p>I have several GitHub Actions workflows to run. To manage them I want to use Prefect Cloud. I created a deployment and a GitHub block in Prefect. When I schedule a deployment run, it stays in the "Late" state.</p>
<p>My repository structure:</p>
<pre><code>my_git_repo
└─github/workflows
└─prefect_deployment.yml
└─flow_run.yml
flow_test.py
</code></pre>
<p>My Python flow code:</p>
<pre><code>from prefect import flow, get_run_logger
@flow(name="Demo")
def basic_flow():
logger=get_run_logger()
logger.warning("Hello World")
if __name__ == "__main__":
basic_flow()
</code></pre>
<p>My Prefect Block, linking to my repo:
<a href="https://i.sstatic.net/0lAfz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0lAfz.png" alt="enter image description here" /></a></p>
<p>This flow run is another workflow, flow_run.yml:</p>
<pre><code>name: Run Flow
on: [workflow_dispatch]
jobs:
run-flow:
runs-on: ubuntu-latest
env:
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Set-Up-GitHub-Actions-Environment
uses: actions/setup-python@v3
with:
python-version: 3.8
- name: Install-Python-Packages
run: |
pip install prefect==2.8.3
- name: flow-run
run: |
python3 flow_test.py
</code></pre>
<p>My YAML workflow code to create the deployment:</p>
<pre><code>name: Deploy to Prefect Cloud 2
env:
PREFECT_API_KEY: ${{ secrets.PREFECT_API_KEY }}
PREFECT_API_URL: ${{ secrets.PREFECT_API_URL }}
on: [workflow_dispatch]
jobs:
load-flow-to-prefect-cloud:
name: Build and apply deployment
runs-on: ubuntu-latest
timeout-minutes: 45
steps:
- uses: actions/checkout@v3
with:
persist-credentials: false
fetch-depth: 0
- name: Set up Python 3
uses: actions/setup-python@v4
with:
python-version: "3.8"
- name: Install Python Packages
run: |
pip install prefect==2.8.3
- name: Build deployment
run: |
prefect deployment build -n main_deployment flow_test.py:basic_flow \
-sb github/test-block-github\
- name : Apply deployment
run: |
prefect deployment apply basic_flow-deployment.yaml
</code></pre>
<p>The deployment created in Prefect Cloud:
<a href="https://i.sstatic.net/8salt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8salt.png" alt="enter image description here" /></a></p>
<p>When I try to run my flow, it stays in the "Late" state.</p>
<p>What am I missing?</p>
|
<python><deployment><github-actions><prefect>
|
2023-03-03 19:40:01
| 1
| 418
|
user15915737
|
75,631,221
| 275,002
|
IBKR: No security definition has been found for the request, contract:
|
<p>I am new in both IBKR and its API, the code given below is giving error on <code>qualifyContracts</code> method:</p>
<pre><code>Error 200, reqId 308: No security definition has been found for the request, contract: Option(symbol='TSLA', lastTradeDateOrContractMonth='20230303', strike=808.33, right='C', exchange='CBOE', currency='USD')
Error 200, reqId 309: No security definition has been found for the request, contract: Option(symbol='TSLA', lastTradeDateOrContractMonth='20230303', strike=816.67, right='C', exchange='CBOE', currency='USD')
Error 200, reqId 310: No security definition has been found for the request, contract: Option(symbol='TSLA', lastTradeDateOrContractMonth='20230303', strike=825.0, right='C', exchange='CBOE', currency='USD')
Unknown contract: Option(symbol='TSLA', lastTradeDateOrContractMonth='20230303', strike=1.67, right='C', exchange='CBOE', currency='USD')
Unknown contract: Option(symbol='TSLA', lastTradeDateOrContractMonth='20230303', strike=3.33, right='C', exchange='CBOE', currency='USD')
Unknown contract: Option(symbol='TSLA', lastTradeDateOrContractMonth='20230303', strike=5.0, right='C', exchange='CBOE', currency='USD')
Unknown contract: Option(symbol='TSLA', lastTradeDateOrContractMonth='20230303', strike=6.67, right='C', exchange='CBOE', currency='USD')
</code></pre>
<p>The code given below:</p>
<pre><code>from ib_insync import *
from random import getrandbits
if __name__ == '__main__':
ib = IB()
client_id = getrandbits(5)
print('Connecting with the CLIENT ID = ', client_id)
PORT = 4002
ib.connect('127.0.0.1', PORT, clientId=client_id)
# contract = Contract(conId=76792991, exchange='SMART',currency='USD')
contract = Stock('TSLA', 'SMART', 'USD')
print('Setting Market Data Type')
ib.reqMarketDataType(3)
print('1')
ib.qualifyContracts(contract)
print('CON ID = ', contract.conId)
chains = ib.reqSecDefOptParams(contract.symbol, '', contract.secType, contract.conId)
cboe_chains = [c for c in chains if c.exchange == 'CBOE']
# # print(cboe_chains)
# contracts = [Option(cboe_chains[0].underlyingConId, c.symbol, c.secType, c.exchange, c.currency, c.strike, c.right,
# c.lastTradeDateOrContractMonth) for c in cboe_chains]
opts = []
for chain in cboe_chains:
print(chain)
for strike in chain.strikes:
print(strike)
option = Option(symbol=chain.tradingClass, lastTradeDateOrContractMonth=chain.expirations[0], strike=strike,
exchange='CBOE', currency='USD', right='C')
opts.append(option)
#
# qualified_contract = ib.qualifyContracts(option)
# print(qualified_contract)
# print('-----------------------------------')
print(opts)
qualified_contract = ib.qualifyContracts(*opts) # Error comes here
</code></pre>
|
<python><interactive-brokers><tws><ib-insync>
|
2023-03-03 19:35:29
| 1
| 15,089
|
Volatil3
|
75,631,084
| 19,321,677
|
How to create stacked bar chart with given dataframe shape?
|
<p>I have a dataframe and would like to create a stacked bar chart showing, for each date, the relative quantity per product group. This is the current dataframe:</p>
<pre><code>date | product_group | quantity
2021-10-01 | A | 10
2021-10-01 | C | 10
2021-10-01 | Z | 80
2021-11-11 | A | 13
2021-12-12 | B | 5..
</code></pre>
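For reproducibility, a toy version of the dataframe above (the last quantity is truncated in the table; I use 5 here as a stand-in):

```python
import pandas as pd

# Same rows as the table above, with a placeholder for the truncated value.
df = pd.DataFrame({
    "date": ["2021-10-01", "2021-10-01", "2021-10-01", "2021-11-11", "2021-12-12"],
    "product_group": ["A", "C", "Z", "A", "B"],
    "quantity": [10, 10, 80, 13, 5],
})
```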
<p>I am trying to get to this output using either matplotlib or seaborn where I have:</p>
<ul>
<li>quantity on the x-axis (% stack)</li>
<li>date on the y-axis</li>
<li>have quantity stacked for each unique date & product group option. I.e. for date 10-01, we have a stack with A,C,Z and their respective quantities (relative to each other, i.e. A=0.1, C=0.1, Z=0.8)</li>
</ul>
<p>What is the best approach here? Any advise is appreciated. Thanks</p>
|
<python><pandas><matplotlib><seaborn>
|
2023-03-03 19:19:26
| 2
| 365
|
titutubs
|
75,631,069
| 15,766,257
|
How to create file object using filename in Box
|
<p>I need to rename a file in Box using the Python SDK API. I know the filename, but I guess to rename it I need a file object. How do I create a file object when all I have is the name of the file? The file is located in the root folder.</p>
|
<python><box-api>
|
2023-03-03 19:18:12
| 1
| 331
|
Bruce Banner
|
75,630,999
| 15,781,591
|
How to make custom row and column labels in displot
|
<p>I have the following code using the <code>seaborn</code> library in python that plots a grid of histograms from data from within the <code>seaborn</code> library:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns, numpy as np
from pylab import *
penguins = sns.load_dataset('penguins')
sns.displot(penguins, x='bill_length_mm', col='species', row='island', hue='island',height=3,
aspect=2,facet_kws=dict(margin_titles=True, sharex=False, sharey=False),kind='hist', palette='viridis')
plt.show()
</code></pre>
<p>This produces the following grid of histograms:
<a href="https://i.sstatic.net/v3KCi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v3KCi.png" alt="enter image description here" /></a></p>
<p>And so we have histograms for each species-island combination showing the frequency distribution of different penguin bill lengths, organized in a "grid" of histograms, where the columns of this grid of histograms are organized by species and the rows of this grid are organized by island. And so, I see that seaborn automatically names each column label as the "species" by the argument: col=<code>species</code>. I then see seaborn labels each row as "Count" with the rows organized by island, with different representative "hues" from the argument: hue=<code>island</code>.</p>
<p>What I am trying to do is override these default automatic labels to add my own customization. Specifically what I want to do is replace the top axes labels with just "A", "B", and "C" below a "Species" header, and on the left axis, replace each "Count" instance with the names of each island, but all of these labels in much bigger font size.</p>
<p>This is what I am trying to produce:
<a href="https://i.sstatic.net/B8iL5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B8iL5.png" alt="enter image description here" /></a></p>
<p>What I am trying to figure out is, how can I "override" the automatic labelling from the above seaborn arguments so that I can print my custom histogram grid labels, but done in a dynamic way, such that if there were potentially another data set with more islands and more species, the intended labelling organization would still be produced?</p>
|
<python><matplotlib><seaborn><facet-grid><displot>
|
2023-03-03 19:10:09
| 1
| 641
|
LostinSpatialAnalysis
|
75,630,851
| 12,470,058
|
How to replace character $ by using re.sub?
|
<p>Suppose we have the following string and list of numbers:</p>
<p><code>my_string = "We change $ to 10, $ to 22, $ to 120, $ to 230 and $ to 1000."</code></p>
<p><code>nums = [1, 2, 3, 4, 5]</code></p>
<p>By only using <code>re.sub</code>, how to replace the <code>$</code> character in <code>my_string</code> with each of the elements in the list to have:</p>
<p><code>"We change 1 to 10, 2 to 22, 3 to 120, 4 to 230 and 5 to 1000."</code></p>
<p>When I use <code>re.sub(r'\b$', lambda i: str(nums.pop(0)), my_string)</code>, it doesn't work and the reason is that <code>$</code> is a reserved character in <code>re.sub</code> and according to <a href="https://docs.python.org/3/library/re.html" rel="nofollow noreferrer">the documentation</a>:</p>
<blockquote>
<p>Matches the end of the string or just before the newline at the end of the string ...</p>
</blockquote>
<p>So if I want to replace the character <code>$</code> with a constant value by using <code>re.sub</code>, is there any solution for it?</p>
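For concreteness, here is the failing attempt as a runnable snippet. Nothing is replaced, because <code>\b$</code> only looks at the very end of the string, and the final <code>.</code> is not a word character, so <code>\b</code> never matches there:

```python
import re

my_string = "We change $ to 10, $ to 22, $ to 120, $ to 230 and $ to 1000."
nums = [1, 2, 3, 4, 5]

# No position matches r'\b$', so re.sub returns the string unchanged
# and the lambda (and nums.pop) is never called.
result = re.sub(r'\b$', lambda m: str(nums.pop(0)), my_string)
print(result == my_string)  # True -- the string is untouched
print(nums)                 # [1, 2, 3, 4, 5] -- nothing was popped
```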
|
<python><python-3.x>
|
2023-03-03 18:52:01
| 1
| 368
|
Bsh
|
75,630,794
| 12,574,341
|
Python venv pip is always outdated
|
<p>Whenever I create a fresh Python 3.11 virtual environment using <code>venv</code>, the provided <code>pip</code> always prompts me to update to the latest version, even though my base version appears to be up to date</p>
<pre class="lang-bash prettyprint-override"><code>$ python3.11 -m pip --version
pip 23.0.1 from /opt/homebrew/lib/python3.11/site-packages/pip (python 3.11)
$ python3.11 -m venv venv
$ source venv/bin/activate.fish
(venv) $ pip install requests
...
Successfully installed ...
[notice] A new release of pip available: 22.3.1 -> 23.0.1
[notice] To update, run: pip install --upgrade pip
</code></pre>
<p>Why doesn't the virtual environment have the newest release to begin with? Is there a way I can manually set it for future virtual environments? I'd like to not have to deal with this slight inconvenience every time I create a new virtual environment.</p>
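If I understand <code>venv</code> correctly, it seeds pip from <code>ensurepip</code>'s bundled wheel rather than from the pip in site-packages, which would explain the mismatch. A quick probe of the bundled version (just a diagnostic, not a fix):

```python
import subprocess
import sys

# Ask ensurepip which pip wheel it bundles -- this is the version that
# new virtual environments are seeded with.
out = subprocess.run([sys.executable, "-m", "ensurepip", "--version"],
                     capture_output=True, text=True)
print(out.stdout.strip())  # e.g. "pip 22.3.1"
```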
|
<python><pip><python-venv>
|
2023-03-03 18:44:49
| 1
| 1,459
|
Michael Moreno
|
75,630,733
| 1,936,752
|
Leetcode "Decode Ways" problem and recursion error
|
<p>I am trying to solve the Leetcode Decode Ways problem (<a href="https://leetcode.com/problems/decode-ways/" rel="nofollow noreferrer">https://leetcode.com/problems/decode-ways/</a>). Consider a string of digits that encodes a sequence of upper-case English letters:
<code>A</code> maps to <code>1</code>, <code>B</code> maps to <code>2</code>, and so on until <code>Z</code> maps to <code>26</code>. Given a string of digits, how many ways can one decode it into a string of letters?</p>
<p>For example, <code>11106</code> can be mapped into:</p>
<pre><code>"AAJF" with the grouping (1 1 10 6)
"KJF" with the grouping (11 10 6)
</code></pre>
<p>Note that the grouping <code>(1 11 06)</code> is invalid because <code>06</code> cannot be mapped into <code>F</code> since <code>6</code> is different from <code>06</code>.</p>
<p>I run into a maximum recursion depth error in Python, even for very short input strings. My solution is below:</p>
<pre><code>def numDecodings(s):
## Case of empty string
if len(s)== 0:
return 1
## Error cases if a zero is in a weird place
for idx, i in enumerate(s):
if idx == 0 and s[idx]=="0":
return 0
if idx > 1 and s[idx-1] not in "12" and s[idx]==0:
return 0
## Recursion
def subProblem(substring):
if len(substring) == 1:
res = 1
else:
res = subProblem(substring[1:])
if (len(substring) > 1) and (substring[0] == "1") or (substring[0] == "2" and (substring[1] in "0123456")):
res += subProblem(substring[2:])
return res
return subProblem(s)
</code></pre>
<p>What is causing the unbounded recursion?</p>
|
<python><recursion>
|
2023-03-03 18:37:28
| 1
| 868
|
user1936752
|
75,630,570
| 21,113,865
|
How to avoid SSL error when using python requests in a virtual environment?
|
<p>So I have a python script that needs to access webpage content via 'requests'. Due to the environment this script is running in, I need to use a virtual environment. However, this results in the request failing, since it cannot find the certificate from the virtual environment.</p>
<pre><code>raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='...', port=443): Max retries exceeded with url: ... (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)')))
</code></pre>
<p>This issue does not occur when run outside the virtual environment, and I can see the error occurs as a result of where requests is looking for certificates:</p>
<p>in venv (request fails):</p>
<pre><code>>>> print (requests.certs.where())
/home/user/python/venv/lib64/python3.11/site-packages/certifi/cacert.pem
</code></pre>
<p>not in venv (request will certify successfully):</p>
<pre><code>>>> print (requests.certs.where())
/etc/pki/tls/certs/ca-bundle.crt
</code></pre>
<p>I could specify the certificate using <code>requests.get(url, verify='/path/to/cert')</code>, however this script needs to run on any machine, so hardcoding the path will not work.</p>
<p>I could ignore the verification, but this seems very dumb and bad.</p>
<p>Therefore I am wondering if there is a way to instruct python to use the same certification as the underlying environment rather than the venv path?</p>
<p>Is there some workaround where given a url, I could detect which certification will be used and provide that in my python script to the request?</p>
<p>Thank you</p>
|
<python><python-3.x><ssl><python-requests><ssl-certificate>
|
2023-03-03 18:17:09
| 0
| 319
|
user21113865
|
75,630,544
| 2,146,894
|
Can these pairs of regexes be simplified into one?
|
<p>I'm trying to fetch twitter usernames from strings. My current solution looks like this</p>
<pre><code>def get_username(string):
p1 = re.compile(r'twitter\.com/([a-z0-9_\.\-]+)', re.IGNORECASE)
p2 = re.compile(r'twitter[\s\:@]+([a-z0-9_\.\-]+)', re.IGNORECASE)
match1 = re.search(p1, string)
match2 = re.search(p2, string)
if match1:
return match1.group(1)
elif match2:
return match2.group(1)
else:
return None
</code></pre>
<h2>Examples</h2>
<pre><code>get_username("Twitter: https://twitter.com/foo123")
get_username("Twitter: twitter.com/foo123")
get_username("https://twitter.com/foo123")
get_username("https://twitter.com/foo123?blah")
get_username("Twitter foo123")
get_username("Twitter @foo123")
get_username("Twitter: foo123")
get_username("Twitter: foo123 | youtube: ...")
</code></pre>
<p>I'm wondering if my two regexes can be simplified into one. My best attempt was</p>
<pre><code>pattern = re.compile(r'twitter(?:(?:\.com/)|(?:[\s\:@]+))([a-z0-9_\.\-]+)', re.IGNORECASE)
</code></pre>
<p>but this fails on the first example because <code>Twitter: https</code> matches <em>before</em> <code>twitter.com/foo123</code>.</p>
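Demonstrating that failure concretely:

```python
import re

pattern = re.compile(r'twitter(?:(?:\.com/)|(?:[\s\:@]+))([a-z0-9_\.\-]+)', re.IGNORECASE)

# The "Twitter: " prefix matches first, so the capture grabs "https".
print(pattern.search("Twitter: https://twitter.com/foo123").group(1))  # https
# Without the prefix, the same pattern works fine.
print(pattern.search("https://twitter.com/foo123").group(1))           # foo123
```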
|
<python><regex>
|
2023-03-03 18:12:54
| 4
| 21,881
|
Ben
|
75,630,411
| 346,977
|
Python & pandas: Batching data where difference between timestamps < set value
|
<p>I'm trying to create a data set in python (preferably pandas) that groups together all rows where the amount of time between the end_time of the last entry and the start_time of the subsequent one is < 10 minutes.</p>
<p>Example data:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>activity</th>
<th>start_time</th>
<th>end_time</th>
</tr>
</thead>
<tbody>
<tr>
<td>foo</td>
<td>9:08:34am</td>
<td>9:11:27am</td>
</tr>
<tr>
<td>bar</td>
<td>9:12:14am</td>
<td>10:28:41am</td>
</tr>
<tr>
<td>baz</td>
<td>2:38:11pm</td>
<td>2:41:19pm</td>
</tr>
<tr>
<td>bay</td>
<td>2:41:33pm</td>
<td>2:48:53pm</td>
</tr>
</tbody>
</table>
</div>
<p>In the above, the solution would batch together foo/bar rows as one output, and baz/bay rows for another.</p>
<p>Some traits of the data:</p>
<ul>
<li>No times overlap (aka there is at most one entry with start_time before and end_time after any given time)</li>
<li>There may be hundreds/thousands of rows per "batch"</li>
<li>A batch may go through midnight</li>
</ul>
<p>I realize this may well be a common problem, but I can't figure out quite how to (elegantly) solve it, or frankly, quite how to elegantly google it. Any thoughts appreciated</p>
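<p>A common sketch (assuming real datetimes; with time-only strings like these, a batch crossing midnight would need dates attached first): flag rows whose gap from the previous row's <code>end_time</code> exceeds 10 minutes, then <code>cumsum</code> the flags to get a batch id you can <code>groupby</code>.</p>

```python
import pandas as pd

df = pd.DataFrame({
    "activity": ["foo", "bar", "baz", "bay"],
    "start_time": ["9:08:34am", "9:12:14am", "2:38:11pm", "2:41:33pm"],
    "end_time": ["9:11:27am", "10:28:41am", "2:41:19pm", "2:48:53pm"],
})
for col in ("start_time", "end_time"):
    df[col] = pd.to_datetime(df[col], format="%I:%M:%S%p")

# A new batch starts wherever the idle gap since the previous end exceeds 10 minutes.
gap = df["start_time"] - df["end_time"].shift()
df["batch"] = (gap > pd.Timedelta(minutes=10)).cumsum()
```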
|
<python><pandas>
|
2023-03-03 17:58:16
| 1
| 12,635
|
PlankTon
|
75,630,285
| 10,634,126
|
Logging python tenacity retry_state with logger from outer scope
|
<p>I have a module that includes a utility function with a tenacity retry tag</p>
<pre><code>from tenacity import retry, stop_after_attempt, wait_random
def log_attempt_number(logger, retry_state):
logger.info(f"Attempt number {retry_state}")
logger.info("Logger is not recognized here unless we instantiate in this file")
@retry(stop=stop_after_attempt(3), wait=wait_random(min=1, max=3), before=log_attempt_number)
def retry_function(logger, driver):
logger.info("Outer scope logger works fine here")
source = driver.page_source
return source
</code></pre>
<p>Since the <code>retry_function</code> is used in other scripts, I want to be able to use those scripts' existing loggers for output, rather than instantiate a logger within this module. However, I cannot figure out how to pull the outer-scope logger into the <code>log_attempt_number</code> function, since it is only called by Tenacity's <code>@retry</code>, which doesn't seem to pass extra arguments through.</p>
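<p>One stdlib-only approach, assuming (as in recent tenacity versions) that the <code>before</code> hook invokes its callable with just <code>retry_state</code>: bind the caller's logger up front with <code>functools.partial</code>, so each script can hand in its own logger. A sketch, not tested against every tenacity release:</p>

```python
import functools
import logging

def log_attempt_number(logger, retry_state):
    logger.info(f"Attempt number {retry_state}")

# Each calling script binds its own logger; tenacity then only needs to
# supply retry_state, e.g. @retry(..., before=before_cb).
my_logger = logging.getLogger("my_app")
before_cb = functools.partial(log_attempt_number, my_logger)
```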
|
<python><logging><python-logging><python-tenacity><tenacity>
|
2023-03-03 17:43:56
| 1
| 909
|
OJT
|
75,630,249
| 5,235,665
|
Pandas groupby and sum are dropping numeric columns
|
<p>I have the following Python/Pandas code:</p>
<pre><code>standardized_df = get_somehow()
standardized_df['TermDaysAmountProduct'] = standardized_df['TermDays'] * standardized_df['Amount']
standardized_df['DaysToCollectAmountProduct'] = standardized_df['DaysToCollect'] * standardized_df['Amount']
logger.info("standardized_df cols are {}".format(standardized_df.head()))
grouped_df = standardized_df.groupby(["Customer ID"], as_index=False).sum()
logger.info("grouped_df cols are {}".format(grouped_df.head()))
</code></pre>
<p>When this runs it produces the following logs:</p>
<pre><code>standardized_df cols are Customer ID Customer Name ... TermDaysAmountProduct DaysToCollectAmountProduct
grouped_df cols are Customer ID Amount
</code></pre>
<p>So apparently during the groupby, the <code>TermDaysAmountProduct</code> and <code>DaysToCollectAmountProduct</code> columns (which are both numeric and <em>should</em> be summed) are getting removed for some reason. How can I keep these columns in the dataframe <em>after</em> the sum?</p>
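<p>A common cause (I can't see your dtypes, so this is a guess) is that the factor columns arrived as strings, making the product columns <code>object</code> dtype, which older pandas silently drops from <code>sum()</code>. Coercing to numeric before building the products keeps them. A sketch with made-up data:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Customer ID": [1, 1, 2],
    "TermDays": ["30", "45", "30"],        # object dtype, e.g. read from a CSV/report
    "Amount": ["100.0", "200.0", "50.0"],
})

# Coerce to numeric BEFORE building the products, so the product columns
# are numeric and survive the groupby sum.
for col in ("TermDays", "Amount"):
    df[col] = pd.to_numeric(df[col], errors="coerce")

df["TermDaysAmountProduct"] = df["TermDays"] * df["Amount"]
grouped = df.groupby("Customer ID", as_index=False).sum(numeric_only=True)
```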
|
<python><pandas>
|
2023-03-03 17:39:38
| 1
| 845
|
hotmeatballsoup
|
75,630,200
| 4,450,090
|
python aiokafka many consumers to many producers
|
<p>I use aiokafka to consume messages, filter message fields, and produce messages back to Kafka.
I run 4 async consumers which put messages onto an async queue.
Then a single task consumes that queue and produces to an async output_queue.
Multiple producers consume from the async output_queue and send back to Kafka.</p>
<p>I wanted to achieve solution so I would have:</p>
<p>MANY consumers >> processor >> MANY producers.</p>
<p>I would like to solve problem with consumers/producers first before I focus on processor.</p>
<p>The problem I experience is that the code produces slowly, around 50 messages per second.
I have a stream of 100k messages, so I must have some bug in the code.</p>
<p>How can I fix it?</p>
<pre><code>import asyncio
from aiokafka import AIOKafkaProducer, AIOKafkaConsumer
import json
BROKERS = [
"BROKER0:PORT",
"BROKER1:PORT",
"BROKER2:PORT",
]
GROUP_ID = "group_id"
TOPIC_INPUT = "topic_input"
TOPIC_OUTPUT = "topic_output"
async def consume(queue):
consumer = AIOKafkaConsumer(
TOPIC_INPUT,
bootstrap_servers=BROKERS,
value_deserializer=lambda m: json.loads(m.decode('utf-8')),
group_id=GROUP_ID,
auto_offset_reset="latest"
)
await consumer.start()
try:
async for message in consumer:
processed_message = {
"timestamp": message.timestamp,
"col1": message.value["col1"],
"col2": message.value["col2"],
"col3": message.value["col3"],
}
await queue.put(processed_message)
finally:
await consumer.stop()
async def process_message(message):
print(message)
return message
async def process_messages(queue, output_queue):
while True:
message = await queue.get()
processed_message = await process_message(message)
await output_queue.put(processed_message)
queue.task_done()
# async def produce(output_queue):
# producer = AIOKafkaProducer(
# bootstrap_servers=BROKERS,
# value_serializer=lambda m: json.dumps(m).encode('utf-8')
# )
# await producer.start()
# try:
# while True:
# message = await output_queue.get()
# print(message)
# await producer.send_and_wait(TOPIC_OUTPUT, message)
# output_queue.task_done()
# finally:
# await producer.stop()
async def main():
queue = asyncio.Queue(maxsize=1000000)
output_queue = asyncio.Queue(maxsize=1000000)
consumers = [asyncio.create_task(consume(queue)) for i in range(4)]
# producers = [asyncio.create_task(produce(output_queue)) for i in range(3)]
process_task = asyncio.create_task(process_messages(queue, output_queue))
# await asyncio.gather(*consumers, *producers, process_task)
await asyncio.gather(*consumers, process_task)
if __name__ == '__main__':
asyncio.run(main())
</code></pre>
|
<python><consumer><producer><aiokafka>
|
2023-03-03 17:33:29
| 0
| 2,728
|
Dariusz Krynicki
|
75,630,145
| 3,941,935
|
Initialize fsspec DirFileSystem from a URL
|
<p>I want to initialize a <code>fsspec</code> filesystem based on a URL - both the protocol and the root directory.
E.g. I could create a filesystem from <code>gcs://my-bucket/prefix</code> that would use <code>my-bucket</code> on GCS, or <code>file:///tmp/test</code> that would use the <code>/tmp/test</code> directory in the local filesystem.</p>
<p>It can be done easily with following 2-liner:</p>
<pre class="lang-py prettyprint-override"><code>from fsspec.core import url_to_fs
from fsspec.implementations.dirfs import DirFileSystem
URL = 'file:///tmp/test'
root_fs, root_path = url_to_fs(URL)
fs = DirFileSystem(root_path, root_fs)
fs.open('foo') # This opens /tmp/test/foo if URL was file:///tmp/test
</code></pre>
<p>but it feels like there should be an API for that in fsspec directly.</p>
<p>Is there?</p>
|
<python><fsspec>
|
2023-03-03 17:26:02
| 1
| 347
|
Tomasz Sodzawiczny
|
75,630,114
| 19,130,803
|
Multiprocessing and event, type hint issue python
|
<p>I have used multiprocessing module to perform a background task.</p>
<pre><code># module_a.py
from multiprocessing import Event
from multiprocessing import Process
class BackgroundWorker(Process):
"""Create a worker background process."""
def __init__(
self,
name: str,
daemon: bool,
contents: Any,
event: Event,
) -> None:
"""Initialize the defaults."""
self.contents: Any = contents
self._event: Event = event
super().__init__(name=name, daemon=daemon)
def run(self) -> None:
"""Run the target function."""
some code
if self._event.wait(0.4):
some code
if self._event.is_set():
break
some code
</code></pre>
<pre><code># module_b.py
from multiprocessing import Event
event: Event = Event()
def cancel_task():
event.set()
</code></pre>
<p>Following are the errors on running <code>mypy</code></p>
<pre><code>error: Variable "multiprocessing.Event" is not valid as a type [valid-type]
note: See https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliases
error: Variable "multiprocessing.Event" is not valid as a type [valid-type]
note: See https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliases
error: Event? has no attribute "is_set" [attr-defined]
error: Event? has no attribute "wait" [attr-defined]
</code></pre>
<p>Please suggest.</p>
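<p>The usual workaround: <code>multiprocessing.Event</code> is a factory function, not a class, so it can't be used as an annotation; the actual class lives in <code>multiprocessing.synchronize</code>. A sketch:</p>

```python
from multiprocessing import Event
from multiprocessing.synchronize import Event as EventClass

# Annotate with the real class; keep calling the factory to construct.
event: EventClass = Event()

def cancel_task(evt: EventClass) -> None:
    evt.set()
```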
|
<python><multiprocessing><mypy>
|
2023-03-03 17:22:05
| 1
| 962
|
winter
|
75,629,948
| 19,482,605
|
Why does getrefcount increase by 2 when put inside a function?
|
<p>Consider the following code:</p>
<pre class="lang-py prettyprint-override"><code>import sys
a = [1, 2, 3]
def foo(x):
print(sys.getrefcount(x))
foo(a) # prints out 4 -- but why?
</code></pre>
<p>When we invoke <code>foo(a)</code> and when <code>print(sys.getrefcount(x))</code> executes, the array <code>[1, 2, 3]</code> is referenced by:</p>
<ol>
<li>variable <code>a</code></li>
<li>the parameter <code>x</code> of the function <code>foo</code></li>
<li>the parameter of <code>sys.getrefcount</code></li>
</ol>
<p>I expected 3 to be printed out. What have I missed?</p>
|
<python><cpython><reference-counting>
|
2023-03-03 17:03:44
| 1
| 367
|
lamc
|
75,629,940
| 12,469,912
|
How to replace particular characters of a string with the elements of a list in an efficient way?
|
<p>There is a string:</p>
<p><code>input_str = 'The substring of "python" from index @ to index @ inclusive is "tho"'</code></p>
<p>and a list of indices:</p>
<p><code>idx_list = [2, 4]</code></p>
<p>I want to replace the character <code>@</code> in <code>str_input</code> with each element of the <code>idx_list</code> to have the following output:</p>
<p><code>output_str = 'The substring of "python" from index 2 to index 4 inclusive is "tho"'</code></p>
<p>So I have coded it as follows:</p>
<pre><code>def replace_char(input_str, idx_list):
output_str = ""
idx = 0
for i in range(0, len(input_str)):
if input_str[i] == '@':
output_str += str(idx_list[idx])
idx += 1
else:
output_str += input_str[i]
return output_str
</code></pre>
<p>I wonder if there is any shorter and faster way than the concatenation that I have used?</p>
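<p>A shorter alternative is to let <code>re.sub</code> pull each replacement from an iterator, avoiding the per-character concatenation:</p>

```python
import re

def replace_char(input_str, idx_list):
    values = iter(idx_list)
    # Each '@' is replaced by the next element of idx_list, in order.
    return re.sub("@", lambda m: str(next(values)), input_str)
```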
|
<python><python-3.x>
|
2023-03-03 17:02:45
| 4
| 599
|
plpm
|
75,629,863
| 12,597,387
|
How to detect the format of a date string in Python
|
<p>I'm getting the date in a string and it's not always same format, is there a way to detect what format is that date for example:</p>
<p>If the data comes in like <code>2023-12-03</code>,
I want to somehow get the output <code>"%Y-%m-%d"</code>, so I know which format it is using. This is just an example; the date in the string can also come like <code>13/02/2023</code>, where the format would be <code>"%d/%m/%Y"</code>.</p>
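<p>There is no fully reliable general detector (e.g. <code>01/02/2023</code> is ambiguous between day-first and month-first), but with a known set of candidate formats you can try each with <code>strptime</code> and return the first that parses. A sketch:</p>

```python
from datetime import datetime

CANDIDATE_FORMATS = ["%Y-%m-%d", "%d/%m/%Y"]  # extend with the formats you expect

def detect_format(date_str):
    # Return the first candidate format that parses, else None.
    for fmt in CANDIDATE_FORMATS:
        try:
            datetime.strptime(date_str, fmt)
            return fmt
        except ValueError:
            continue
    return None
```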
|
<python><python-3.x>
|
2023-03-03 16:55:33
| 1
| 314
|
GomuGomu
|
75,629,852
| 15,176,150
|
Why use a superclass's __init__ to change it into a subclass?
|
<p>I'm working on replicating the <a href="https://github.com/slundberg/shap" rel="nofollow noreferrer">SHAP package</a> algorithm - an explainability algorithm for machine learning. I've been reading through the author's code, and I've come across a pattern I've never seen before.</p>
<p>The author has created a superclass called <code>Explainer</code>, which is a common interface for all the different model specific implementations of the algorithm. The <code>Explainer</code>'s <code>__init__</code> method accepts a string for the algorithm type and switches itself to the corresponding subclass if called directly. It does this using multiple versions of the following pattern:</p>
<pre><code>if algorithm == "exact":
self.__class__ = explainers.Exact
explainers.Exact.__init__(self, self.model, self.masker, link=self.link, feature_names=self.feature_names, linearize_link=linearize_link, **kwargs)
</code></pre>
<p>I understand that this code sets the superclass to one of its subclasses and initialises the subclass by passing itself to <code>__init__</code>. But why would you do this?</p>
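<p>A minimal toy version of the pattern (names invented for illustration): the superclass acts as its own factory, so users write <code>Explainer(...)</code> once and get back a fully initialised subclass instance, without needing to know which subclass to import.</p>

```python
class Explainer:
    def __init__(self, algorithm="exact"):
        # Only morph when the superclass is instantiated directly.
        if type(self) is Explainer and algorithm == "exact":
            self.__class__ = Exact
            Exact.__init__(self)

class Exact(Explainer):
    def __init__(self):
        self.name = "exact"

e = Explainer("exact")  # behaves as (and IS) an Exact instance
```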
|
<python><design-patterns><subclass><superclass><monkeypatching>
|
2023-03-03 16:55:04
| 1
| 1,146
|
Connor
|
75,629,831
| 11,312,371
|
Pandas Average If Across Multiple Columns
|
<p>In pandas, I'd like to calculate the average age and weight for people playing each sport. I know I can loop, but was wondering what the most efficient way is.</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame([
[0, 1, 0, 30, 150],
[1, 1, 1, 25, 200],
[1, 0, 0, 20, 175]
], columns=[
"Plays Basketball",
"Plays Soccer",
"Plays Football",
"Age",
"Weight"
])
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Plays Basketball</th>
<th>Plays Soccer</th>
<th>Plays Football</th>
<th>Age</th>
<th>Weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>1</td>
<td>0</td>
<td>30</td>
<td>150</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>1</td>
<td>25</td>
<td>200</td>
</tr>
<tr>
<td>1</td>
<td>0</td>
<td>0</td>
<td>20</td>
<td>175</td>
</tr>
</tbody>
</table>
</div>
<p>I tried <code>groupby</code> but it creates a group for every possible combination of sports played. I just need an average age and weight for each sport.</p>
<p>Result should be:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>Age</th>
<th>Weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>Plays Basketball</td>
<td>22.5</td>
<td>187.5</td>
</tr>
<tr>
<td>Plays Soccer</td>
<td>27.5</td>
<td>175.0</td>
</tr>
<tr>
<td>Plays Football</td>
<td>25.0</td>
<td>200.0</td>
</tr>
</tbody>
</table>
</div>
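<p>One way to get exactly this, without looping row by row: mask the frame on each indicator column, take the mean, and stack the resulting Series (roughly what a melt-based approach would give too):</p>

```python
import pandas as pd

df = pd.DataFrame([
    [0, 1, 0, 30, 150],
    [1, 1, 1, 25, 200],
    [1, 0, 0, 20, 175],
], columns=["Plays Basketball", "Plays Soccer", "Plays Football", "Age", "Weight"])

sports = ["Plays Basketball", "Plays Soccer", "Plays Football"]

# For each sport, select the players of that sport and average Age/Weight,
# then stack the per-sport Series into one DataFrame (sports as rows).
result = pd.concat(
    {sport: df.loc[df[sport] == 1, ["Age", "Weight"]].mean() for sport in sports},
    axis=1,
).T
```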
|
<python><pandas>
|
2023-03-03 16:52:29
| 2
| 457
|
Scott Guthart
|
75,629,813
| 2,532,296
|
signed fractional multiplication using q(4,20)
|
<p>I am doing a simple test to understand signed fractional multiplication in ordinary format
and in Q(4,20) format (<a href="https://en.wikipedia.org/wiki/Q_(number_format)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Q_(number_format)</a>). I am using the fxpmath library (<a href="https://github.com/francof2a/fxpmath" rel="nofollow noreferrer">https://github.com/francof2a/fxpmath</a>) to achieve the Q(4,20) conversion.</p>
<p>I have an issue when testing my case 2, details below.</p>
<p><em>my code</em></p>
<pre><code>from fxpmath import Fxp
# data set
data= [-0.003, 1.0, 0.707, -0.707, -0.002121, 0.002121]
print("---------------------------------------------")
print("signed_fractional_value""\t¦", "Q(4,20) value")
print("---------------------------------------------")
for i in range(0,len(data)):
# convert to Q(4,20) format
x = Fxp(data[i], signed=True, n_word=24, n_frac=20)
print("%.6f" % data[i],"\t\t¦", x.hex())
</code></pre>
<p><em>output</em></p>
<pre><code>---------------------------------------------
signed_fractional_value ¦ Q(4,20) value
---------------------------------------------
-0.003000 ¦ 0xFFF3B7
1.000000 ¦ 0x100000
0.707000 ¦ 0x0B4FDF
-0.707000 ¦ 0xF4B021
-0.002121 ¦ 0xFFF750
0.002121 ¦ 0x0008B0
</code></pre>
<h2>case 1 : -0.003 * 1.0 = -0.003</h2>
<p>Now I try the multiplication in Q(4,20) domain for the above numbers by looking at corresponding value in above table and I expect the result to be 0xFFF3B7</p>
<pre><code>>>> hex(0xFFF3B7 * 0x100000)
'0xfff3b700000'
>>> hex((0xFFF3B7 * 0x100000) >> 20)
'0xfff3b7'
</code></pre>
<h2>case 2 : -0.003 * 0.707 = -0.002121</h2>
<p>Now doing the same for the second case,
I am expecting to get 0xFFF750 as shown in the table</p>
<pre><code>>>> hex(0xFFF3B7 * 0x0B4FDF)
'0xb4f5407c569'
>>> hex((0xFFF3B7 * 0x0B4FDF) >> 20)
'0xb4f540'
</code></pre>
<p>I am expecting the value to be 0xFFF750, but what I receive is 0xb4f540, so I am sure something is wrong.</p>
<p>Could someone explain what I am missing here?</p>
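<p>(For reference, the likely culprit: Python integers are unbounded, so <code>0xFFF3B7</code> is read as a large positive number rather than as a negative 24-bit two's-complement value; case 1 only appeared to work because multiplying by 1.0 and shifting happened to preserve the low 24 bits. A sketch that sign-extends before multiplying and wraps the result back to a 24-bit pattern:)</p>

```python
WORD, FRAC = 24, 20

def to_signed(x, bits=WORD):
    # Reinterpret a raw bit pattern as a two's-complement signed integer.
    return x - (1 << bits) if x >= 1 << (bits - 1) else x

def qmul(a, b, frac=FRAC, bits=WORD):
    # Signed Q(4,20) multiply: sign-extend, multiply, shift, wrap.
    prod = (to_signed(a, bits) * to_signed(b, bits)) >> frac
    return prod & ((1 << bits) - 1)
```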
|
<python><signal-processing><signed><fractions>
|
2023-03-03 16:50:58
| 1
| 848
|
user2532296
|
75,629,394
| 13,359,498
|
Why can't I show the confusion matrix of my majority-voting ensemble model?
|
<p>I've built an ensemble model with a majority-voting approach, using pretrained models.
Code snippet:</p>
<pre><code>from sklearn.metrics import classification_report, confusion_matrix
# Make predictions using the ensemble method with max voting
ensemble_predictions = [model1.predict(X_test), model2.predict(X_test), model3.predict(X_test), model4.predict(X_test), model5.predict(X_test)]
ensemble_predictions = np.array(ensemble_predictions)
ensemble_predictions = np.mean(ensemble_predictions, axis=0)
ensemble_predictions = np.round(ensemble_predictions)
# ensemble_predictions = np.argmax(ensemble_predictions)
print(ensemble_predictions.shape)
print(y_test.shape)
# Print the classification report for the ensemble predictions
print(classification_report(y_test, ensemble_predictions))
</code></pre>
<p>This code works fine, but I want to visualize this model with a confusion matrix. But that raises an error:</p>
<pre><code>ValueError: Classification metrics can't handle a mix of multiclass and multilabel-indicator targets
</code></pre>
<p>It seems like this model can't be visualized with a confusion matrix. Is there any way to generate a confusion matrix, or something else I can use to visualize the models' outputs?</p>
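<p>The error usually means the averaged-and-rounded array is still one-hot/probability shaped while <code>y_test</code> holds class labels (or vice versa). For a true majority vote, take each model's <code>argmax</code> first, then the per-sample mode of the hard votes. A sketch with synthetic shapes, since I can't see your models' actual outputs:</p>

```python
import numpy as np

# Pretend stacked outputs: (n_models, n_samples, n_classes) class probabilities.
probs = np.array([
    [[0.1, 0.8, 0.1], [0.9, 0.05, 0.05]],
    [[0.2, 0.7, 0.1], [0.1, 0.1, 0.8]],
    [[0.1, 0.2, 0.7], [0.0, 0.2, 0.8]],
])

votes = probs.argmax(axis=2)  # each model's hard prediction, shape (n_models, n_samples)
# Per-sample mode of the votes = majority decision.
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
# Compare `majority` against integer labels, e.g. y_test.argmax(axis=1) if y_test is one-hot.
```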
|
<python><tensorflow><keras><deep-learning>
|
2023-03-03 16:07:57
| 0
| 578
|
Rezuana Haque
|
75,629,388
| 17,194,313
|
How to update a Python library that is currently in use?
|
<p>I have a few long running processes (FastAPI, Dagster, etc...) and it's not clear to me how I should handle updates.</p>
<p>I can just run <code>pip install {package} --upgrade</code>, and run it until it eventually works.</p>
<p>But is this the best way to handle it?</p>
|
<python>
|
2023-03-03 16:07:25
| 0
| 3,075
|
MYK
|
75,629,102
| 4,451,315
|
"or" in pip constraint
|
<p>On conda-forge, <code>tzdata</code> has version numbers like:</p>
<ul>
<li>2022a</li>
<li>2022b</li>
<li>2022c</li>
</ul>
<p>But on PyPI, the version numbers are like:</p>
<ul>
<li>2022.1</li>
<li>2022.2</li>
<li>2022.3</li>
</ul>
<p>How can I set a requirement, in my <code>pyproject.toml</code> file, to require at least tzdata 2022.1 or 2022a?</p>
<p>I've tried guessing</p>
<pre><code>dependencies = [
"tzdata>=2022a|tzdata>=2022.1"
]
</code></pre>
<p>but that doesn't work, it's not valid syntax. Is there a solution here?</p>
|
<python><pip>
|
2023-03-03 15:38:51
| 1
| 11,062
|
ignoring_gravity
|
75,629,071
| 17,696,880
|
Why are parts of the original string lost when splitting it into a list of substrings without removing the separators?
|
<pre><code>import re
from itertools import chain
def identification_of_nominal_complements(input_text):
pat_identifier_noun_with_modifiers = r"((?:l[oa]s|l[oa])\s+.+?)\s*(?=\(\(VERB\))"
substrings_with_nouns_and_their_modifiers_list = re.findall(pat_identifier_noun_with_modifiers, input_text)
separator_elements = r"\s*(?:,|(,|)\s*y)\s*"
substrings_with_nouns_and_their_modifiers_list = [re.split(separator_elements, s) for s in substrings_with_nouns_and_their_modifiers_list]
substrings_with_nouns_and_their_modifiers_list = list(chain.from_iterable(substrings_with_nouns_and_their_modifiers_list))
substrings_with_nouns_and_their_modifiers_list = list(filter(lambda x: x is not None and x.strip() != '', substrings_with_nouns_and_their_modifiers_list))
print(substrings_with_nouns_and_their_modifiers_list) # --> list output
pat = re.compile(rf"(?<!\(PERS\))({'|'.join(substrings_with_nouns_and_their_modifiers_list)})(?!['\w)-])")
input_text = re.sub(pat, r'((PERS)\1)', input_text)
return input_text
#example 1, it works well:
input_text = "He ((VERB)visto) la maceta de la señora de rojo ((VERB)es) grande. He ((VERB)visto) que la maceta de la señora de rojo y a ((PERS)Lucila) ((VERB)es) grande."
#example 2, it works wrong and gives error:
input_text = "((VERB)Creo) que ((PERS)los viejos gabinetes) ((VERB)estan) en desuso, hay que ((PERS)los viejos gabinetes) ((VERB)hacer) algo con ((PERS)los viejos gabinetes), ya que ((PERS)los viejos gabinetes) son importantes. ((PERS)los viejos gabinetes) ((VERB)quedaron) en el deposito. ((PERS)los candelabros) son brillantes los candelabros ((VERB)brillan). ((PERS)los candelabros) ((VERB)estan) ahi"
input_text = identification_of_nominal_complements(input_text)
print(input_text) # --> string output
</code></pre>
<p>Why does this function, given <code>example 2</code>, cut off the <code>((PERS)</code> prefix from some of the elements of the <code>substrings_with_nouns_and_their_modifiers_list</code> list, while with <code>example 1</code> it doesn't?
Because of this, elements are generated with unbalanced parentheses, which later raises <code>re.error: unbalanced parenthesis</code>, specifically on the line where the <code>re.compile()</code> function is used.</p>
<p>For <code>example 1</code>, the output obtained is correct, they are not removed unnecessarily <code>((PERS)</code> and consequently the error of unbalanced parentheses is not obtained</p>
<pre><code>['la maceta de la señora de rojo', 'la maceta de la señora de rojo', 'a ((PERS)Lucila)']
'He ((VERB)visto) ((PERS)la maceta de la señora de rojo) ((VERB)es) grande. He ((VERB)visto) que ((PERS)la maceta de la señora de rojo) y a ((PERS)Lucila) ((VERB)es) grande.'
</code></pre>
<p><code>Example 2</code> is where the problem lies. Although the string is processed by the same function, for some reason the substring <code>((PERS)</code> is removed from some elements of the <code>substrings_with_nouns_and_their_modifiers_list</code> list, which then triggers an unbalanced-parenthesis error when using <code>re.compile()</code>: in this particular case some substrings contain <code>)</code> but not <code>(</code>, because the <code>((PERS)</code> was removed.</p>
<pre><code>['los viejos gabinetes)', 'los viejos gabinetes)', 'los viejos gabinetes)', 'a que ((PERS)los viejos gabinetes) son importantes. ((PERS)los viejos gabinetes)', 'los candelabros) son brillantes los candelabros', 'los candelabros)']
Traceback (most recent call last):
pat = re.compile(rf"(?<!\(PERS\))({'|'.join(substrings_with_nouns_and_their_modifiers_list)})(?!['\w)-])")
raise source.error("unbalanced parenthesis")
re.error: unbalanced parenthesis at position 56
</code></pre>
<p>And if the <code>identification_of_nominal_complements()</code> function worked correctly, these are the outputs you would get when sending it the <code>example 2</code> string; keeping the <code>((PERS)</code> prefixes intact avoids the unbalanced-parenthesis error in <code>re.compile()</code>. This is the <strong>correct output</strong> for the <code>example 2</code> string:</p>
<pre><code>['((PERS)los viejos gabinetes)', '((PERS)los viejos gabinetes)', '((PERS)los viejos gabinetes)', 'a que ((PERS)los viejos gabinetes) son importantes. ((PERS)los viejos gabinetes)', '((PERS)los candelabros) son brillantes los candelabros', '((PERS)los candelabros)']
'((VERB)Creo) que ((PERS)los viejos gabinetes) ((VERB)estan) en desuso, hay que ((PERS)los viejos gabinetes) ((VERB)hacer) algo con ((PERS)los viejos gabinetes), ya que ((PERS)los viejos gabinetes) son importantes. ((PERS)los viejos gabinetes) ((VERB)quedaron) en el deposito. ((PERS)los candelabros) son brillantes los candelabros ((VERB)brillan). ((PERS)los candelabros) ((VERB)estan) ahi'
</code></pre>
<p>What should I modify in the <code>identification_of_nominal_complements()</code> function so that, when sending it the <code>example 2</code> string, I don't get the unbalanced-parenthesis error and I obtain this correct output?</p>
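<p>Independently of why the prefix gets cut off, the crash itself can be avoided by escaping each alternative before joining, so stray parentheses in the list elements are treated as literals:</p>

```python
import re

def build_pattern(substrings):
    # re.escape makes characters like ')' literal, so joining the
    # alternatives can no longer produce an unbalanced pattern.
    escaped = "|".join(map(re.escape, substrings))
    return re.compile(rf"(?<!\(PERS\))({escaped})(?!['\w)-])")

# Even an element with an unbalanced ')' now compiles fine.
pat = build_pattern(["los viejos gabinetes)"])
```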
|
<python><python-3.x><regex><list><split>
|
2023-03-03 15:35:29
| 1
| 875
|
Matt095
|
75,629,048
| 1,818,059
|
How to calculate the number of cyclic permutations of an array (variable shift)
|
<p>Referring to <a href="https://stackoverflow.com/questions/22615621/how-to-calculate-cyclic-permutation-of-an-array">another question</a>, I would like to print (and count amount of) cyclic permuations of an array. My input to such function would be the array, stride and start value.</p>
<p>My array does not necessarily contain numbers only.</p>
<p>Example: given the array <code>X, 1, 2, 3, Y</code> (5 elements) and a stride of 3, I would have</p>
<pre class="lang-none prettyprint-override"><code>X, 1, 2 // first line
3, Y, X
1, 2, 3
Y, X, 1
2, 3, Y // last line since it would be repeating hereafter.
</code></pre>
<p>The count would be "5" in this case. In many cases the count equals the number of elements, but not always: with 8 elements and stride 4 it's 2; with 8 elements and stride 6 it's 4.</p>
<p><strong>The array may also contain identical values</strong>, such as leadin / leadout and duplicate numbers.</p>
<p>Example: <code>LEADIN, LEADIN, LEADIN, LEADIN, 1, 1, 2, 2, 3, 3, 4, 4, LEADOUT, LEADOUT</code> (for 4 leadin, numbers 1..4 duplicated *2 and 2 leadout. Total element count = 14.</p>
<p>The purpose is to form an endless sequence of subsets, each 1 stride long. There must not be any empty spaces in a subset. All elements must be used, and number must stay the same.</p>
<p>With leadin, trivial example: <code>LI, LI, 1, 2, 3, LO, LO</code> in a stride of 2 will be:</p>
<p><code>LI LI | 1 2 | 3 LO | LO LI | LI 1 | 2 3 | LO LO</code> (7 repeats).</p>
<p>I would probably be using Python for the job. Getting data from a cyclic array is no problem - but I need to find out how many "shifts" I need to do.</p>
<p>Using this simple function, I can "count" the amount, but I would think there is a formula to do this?</p>
<pre class="lang-py prettyprint-override"><code>def getiterations(elements, stride):
# assuming here that elements > stride
lc = 0
lineno = 0
finished = False
while not finished:
lc = lc+stride # simulate getting N numbers
lineno= lineno+1
if (lc %elements)==0:
finished = True
return lineno
</code></pre>
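<p>There is a closed form: the sequence repeats after <code>lcm(elements, stride) / stride</code> lines, which simplifies to <code>elements / gcd(elements, stride)</code>, so no simulation loop is needed:</p>

```python
from math import gcd

def getiterations(elements, stride):
    # Lines repeat once stride * lines is a multiple of elements,
    # i.e. after lcm(elements, stride) // stride == elements // gcd lines.
    return elements // gcd(elements, stride)
```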
|
<python><arrays>
|
2023-03-03 15:33:49
| 2
| 1,176
|
MyICQ
|
75,629,006
| 12,082,289
|
Apache Airflow, Custom Trigger logs not showing up
|
<p>I'm learning about Apache Airflow, and have implemented a simple custom Sensor and Trigger</p>
<pre><code>from datetime import timedelta
from airflow.sensors.base import BaseSensorOperator
from airflow.utils.decorators import apply_defaults
from airflow.triggers.temporal import TimeDeltaTrigger
from triggers.example_trigger import ExampleTrigger
class ExampleSensor(BaseSensorOperator):
@apply_defaults
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def execute(self, context):
self.defer(trigger=ExampleTrigger(), method_name="execute_complete")
def execute_complete(self, context, event=None):
print(f"{type(event)} - {event}")
context["ti"].xcom_push(key="approved", value=event)
# We have no more work to do here. Mark as complete.
return
</code></pre>
<pre><code>from typing import Any, Tuple, Dict
import asyncio
import logging
from airflow.models import Variable
from airflow.triggers.temporal import TimeDeltaTrigger
from datetime import datetime
from airflow.triggers.base import BaseTrigger, TriggerEvent
class ExampleTrigger(BaseTrigger):
def __init__(self):
super().__init__()
def serialize(self) -> Tuple[str, Dict[str, int]]:
return ("triggers.example_trigger.ExampleTrigger", {})
async def run(self):
var_val = str(Variable.get("approval")).lower().strip()
logging.info(f"Approval was: {var_val}")
while var_val != "true" and var_val != "false":
await asyncio.sleep(10)
var_val = str(Variable.get("approval")).lower().strip()
logging.info(f"Approval was: {var_val}")
yield TriggerEvent(var_val == "true")
</code></pre>
<p>This is all working fine, but for some reason the logging just in the Trigger is not working</p>
<pre><code>logging.info(f"Approval was: {var_val}")
</code></pre>
<p>Here are the logs for the <code>ExampleSensor</code></p>
<pre><code>[2023-03-03, 15:17:30 UTC] {taskinstance.py:1300} INFO - Executing <Task(ExampleSensor): approval_sensor> on 2023-03-03 15:17:26.006014+00:00
[2023-03-03, 15:17:30 UTC] {standard_task_runner.py:55} INFO - Started process 412 to run task
[2023-03-03, 15:17:30 UTC] {standard_task_runner.py:82} INFO - Running: ['***', 'tasks', 'run', 'example_flow', 'approval_sensor', 'manual__2023-03-03T15:17:26.006014+00:00', '--job-id', '36', '--raw', '--subdir', 'DAGS_FOLDER/example_flow.py', '--cfg-path', '/tmp/tmp2p2l6n8u']
[2023-03-03, 15:17:30 UTC] {standard_task_runner.py:83} INFO - Job 36: Subtask approval_sensor
[2023-03-03, 15:17:30 UTC] {task_command.py:388} INFO - Running <TaskInstance: example_flow.approval_sensor manual__2023-03-03T15:17:26.006014+00:00 [running]> on host 44f3eeacb622
[2023-03-03, 15:17:30 UTC] {logging_mixin.py:137} INFO - <class 'bool'> - False
[2023-03-03, 15:17:30 UTC] {taskinstance.py:1323} INFO - Marking task as SUCCESS. dag_id=example_flow, task_id=approval_sensor, execution_date=20230303T151726, start_date=20230303T151726, end_date=20230303T151730
[2023-03-03, 15:17:30 UTC] {local_task_job.py:208} INFO - Task exited with return code 0
[2023-03-03, 15:17:30 UTC] {taskinstance.py:2578} INFO - 1 downstream tasks scheduled from follow-on schedule check
</code></pre>
<p>Is there something special that needs to be done in order to get the logs from the Trigger to show up?</p>
<p>Thanks!</p>
|
<python><airflow>
|
2023-03-03 15:29:30
| 1
| 565
|
Jeremy Farmer
|
75,628,777
| 9,859,642
|
Reading 3-dimensional data from many files
|
<p>I have many text files with data written in such a structure:</p>
<pre><code>#ABTMTY
mdjkls 993583.17355
ebgtas 899443.47380
udenhr 717515.59788
paomen 491385.80901
gneavc 275411.91025
wesuii 119744.95306
ploppm 59145.56233
#MNTGHP
mdjkls 5668781.68669
ebgtas 3852468.72569
.
.
.
</code></pre>
<p>and the name of the file ("ang_001", "ang_002", etc.) is the third dimension. I have to make a 3D matrix of values, but I don't know how to do this in an efficient way.</p>
<p>I thought about such an approach:</p>
<ol>
<li>Iterate over each file so I can get filename (variable_1)</li>
<li>Go to each file and find how many times the 6-capital-letter code (variable_2) appears. Then cut out the "table" parts with the small-letter code (variable_3) and value, and paste them into a DataFrame.</li>
<li>Have a series of DataFrames, each corresponding to different variable_1.</li>
</ol>
<p>For now I tried to iterate over a single file. First I count the occurrences of this 6-capital-letter code, as all of them start with "#":</p>
<pre><code>for ang_file in ang_all:
file = open(ang_file, "r")
text = file.read()
count = text.count("#")
</code></pre>
<p>Then I iterate over the data tables in this single file, adding each new table to the main DataFrame. Each table is 101 lines long, and the tables are separated by a single space.</p>
<pre><code>n = 0
for header in range(count):
df_temp = pd.read_csv("ang_001.txt", delim_whitespace=True, skipinitialspace = True, nrows= 101, skiprows = 1 + n*header, names = ["code", "value"])
    df = pd.concat([df, df_temp], axis = 0)
n += 100
</code></pre>
<p>The problem is that there are around 1000 such files, and each of them is above 20 MB. This one short loop already took a lot of time to complete, and I'll still have to work with the data in the DataFrame somehow. Is there a better way to do it? Are there any Python packages that specialize in reading text files efficiently?</p>
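<p>One way to cut the cost is a single pass per file: no counting pass over the whole text, and no repeated <code>read_csv</code> calls with growing <code>skiprows</code>. A sketch (column names are invented); concatenating the per-file frames with a file column, or loading them into an <code>xarray.DataArray</code>, then gives the third dimension:</p>

```python
import pandas as pd

def read_ang(path):
    """Parse one file of '#HEADER' blocks into a single tidy DataFrame."""
    records = []
    block = None
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith("#"):
                block = line[1:]            # e.g. 'ABTMTY'
            elif line:
                code, value = line.split()
                records.append((block, code, float(value)))
    return pd.DataFrame(records, columns=["block", "code", "value"])

# Tiny self-contained demo with the sample structure from the question.
import tempfile
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("#ABTMTY\nmdjkls 993583.17355\nebgtas 899443.47380\n"
            "#MNTGHP\nmdjkls 5668781.68669\n")
df = read_ang(f.name)
```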
|
<python><pandas><dataframe><python-xarray>
|
2023-03-03 15:10:49
| 1
| 632
|
Anavae
|
75,628,737
| 13,950,870
|
Best way to use a single logger in my python project, overwrite root logger or replace import everywhere?
|
<p>Title is a bit vague but I think my code examples make it clear. I want to start using a custom logger in a large project of mine.</p>
<p>Currently, in every file that I use logging, I import logging and use that like this:</p>
<pre><code># file A
import logging
logging.info(...)
</code></pre>
<p>I do this in many files. I want to use my own specific logger:</p>
<pre><code># __init__.py
logger = logging.getLogger('my_logger!')
</code></pre>
<p>There are two approaches I think to using this logger everywhere in my project:</p>
<p>Option 1:
Importing the logger defined above in all of my files instead of the logging module:</p>
<pre><code># file A
from ... import logger
logger.info(...)
</code></pre>
<p>Option 2: Overriding the root logger, so that I do not have to replace anything</p>
<pre><code># __init_
logger = logging.getLogger('my_logger!')
logging.root = logger
</code></pre>
<p>Now I do not have to replace the imports in all my files. What is the best option and why?</p>
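A point worth noting for Option 1: <code>logging.getLogger(name)</code> always returns the same logger object for the same name, so importing the logger object is not strictly necessary; each file can look it up by name, or use a child logger, which propagates records to the parent. A small sketch:

```python
import logging

# Configure the named logger once, e.g. in __init__.py
logger = logging.getLogger("my_logger")
logger.setLevel(logging.INFO)

# In any other file, the same object comes back by name:
same = logging.getLogger("my_logger")

# Child loggers ("my_logger.file_a") propagate to "my_logger",
# so per-module loggers still flow through the one configuration:
child = logging.getLogger("my_logger.file_a")
```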
|
<python><logging><python-logging>
|
2023-03-03 15:06:52
| 1
| 672
|
RogerKint
|
75,628,646
| 8,087,322
|
pyproject.toml in an isolated environment
|
<p>I am trying to use <code>pyproject.toml</code> (and specifically setuptools_scm) in an isolated environment. My minimal <code>pyproject.toml</code> is:</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ["setuptools-scm"]
[tool.setuptools_scm]
write_to = "mypackage/version.py"
</code></pre>
<p>However, when trying to install my package in an isolated environment, I get:</p>
<pre><code>$ pip3 install --no-index -e .
Obtaining file:///home/…/myproject
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [2 lines of output]
ERROR: Could not find a version that satisfies the requirement setuptools>=40.8.0 (from versions: none)
ERROR: No matching distribution found for setuptools>=40.8.0
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>However, setuptools and setuptools_scm are already installed (setuptools 66.1.1, setuptools_scm 7.1.0). There is no legacy <code>setup.py</code>.</p>
<p>How can I ensure my package can be installed without network access (supposing that all dependencies are already resolved)?</p>
|
<python><setuptools><pyproject.toml><setuptools-scm>
|
2023-03-03 14:59:23
| 1
| 593
|
olebole
|
75,628,594
| 2,852,466
|
Python program to load only a certain mega bytes of an excel file into the dataframe and convert it to string
|
<p>I am pretty new to Python and I was going through some of the uses of the <code>pandas</code> library. However, I could not find a way to load only a partial excel file into the memory and play with it. For example, if I set the memory limit as 1MB, the program should be able to read the first 1MB from the excel file of a size larger than 1MB.</p>
<p>From the answer mentioned <a href="https://stackoverflow.com/questions/35747476/in-pandas-whats-the-equivalent-of-nrows-from-read-csv-to-be-used-in-read-ex">here</a>, I see an option to load a certain number of rows. But I would not know the number of rows in the input file. Also, I do not know how many bytes of data has been read by this code.</p>
<p>Is there a way to load rows iteratively, where the number of bytes read can also be calculated in each iteration and cumulatively summed up?</p>
|
<python><pandas><openpyxl>
|
2023-03-03 14:54:05
| 1
| 1,001
|
Ravi
|
75,628,544
| 19,130,803
|
access logger object from decorator with type hints python
|
<p>I have created a <code>log</code> decorator. I have put the decorator on required functions. I also want to access the <code>log</code> object inside the decorated functions.</p>
<p>The code works correctly when I run the program, Later I added type hints to the code.</p>
<p>On running mypy I am getting error as below:</p>
<pre><code>error: Missing named argument "logger" for "main" [call-arg]
</code></pre>
<pre><code># logger.py
def log(*, module_name: str = "") -> Callable[[F], F]:
"""Create the logger."""
def decorator(func: F) -> F:
"""Initialize the logger."""
@wraps(func)
def wrapper(*args: Any, **kwargs: Any) -> Any:
"""Wrap the called function."""
stream: Any = yaml.safe_load(open("log.yaml"))
status: bool = False
logger: logging.Logger | None = None
# msg: str = "Unable to initialize logger"
if stream:
dictConfig(stream)
logger = logging.getLogger(module_name)
# msg = "Logger Initialized.."
status = True
if status:
kwargs["logger"] = logger
value = func(*args, **kwargs)
return value
return cast(F, wrapper)
return decorator
</code></pre>
<p>Another module</p>
<pre><code># common.py
@log(module_name=__name__)
def is_directory_empty(*, dir: Path, logger: logging.Logger) -> tuple[bool, str]:
"""Check whether given directory is empty or not.
Args:
dir(Path): directory path.
Returns:
tuple[bool, str]: (True, msg) if successful, (False, msg) otherwise.
Raises: None
"""
status: bool = False
msg: str = "Directory is empty."
result: tuple[bool, str]
if any(dir.iterdir()):
msg = "Directory is not empty"
status = True
logger.info(msg) # here access log object from decorator
result = (status, msg)
return result
</code></pre>
<p>Main module</p>
<pre><code># main.py
dir: Path = some_path
status, msg = is_directory_empty(dir=dir)
</code></pre>
<p>How can I resolve the mypy issue, or is there a better way to access the <code>log</code> object inside decorated functions?</p>
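One way (a sketch, not necessarily the only fix) to address the <code>call-arg</code> error: give the injected parameter a default of <code>None</code>, so callers may legitimately omit it while the decorator still fills it in. Simplified here without the YAML configuration:

```python
import logging
from functools import wraps
from typing import Any, Callable, Optional, TypeVar, cast

F = TypeVar("F", bound=Callable[..., Any])

def log(*, module_name: str = "") -> Callable[[F], F]:
    def decorator(func: F) -> F:
        @wraps(func)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            # inject the logger; config loading omitted in this sketch
            kwargs["logger"] = logging.getLogger(module_name)
            return func(*args, **kwargs)
        return cast(F, wrapper)
    return decorator

@log(module_name="demo")
def check(*, logger: Optional[logging.Logger] = None) -> str:
    assert logger is not None  # narrows the type for mypy; decorator guarantees it
    logger.info("checked")
    return logger.name

name = check()  # no logger argument needed at the call site, so mypy is satisfied
```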
|
<python><logging><mypy><python-logging>
|
2023-03-03 14:49:24
| 0
| 962
|
winter
|
75,628,536
| 1,057,639
|
Spark ETL Large data transfer - how to parallelize
|
<p>I am looking to move a large amount of data from one db to another, and I have seen that Spark is a good tool for doing this. I am trying to understand the process and the ideology behind Spark's big-data ETLs. I would also appreciate it if someone could explain how Spark goes about parallelizing (or splitting) the data into the various jobs that it spawns. My main aim is to move the data from BigQuery to Amazon Keyspaces, and the data is around 40 GB.</p>
<p>I am putting here the understanding I have gathered already from online.</p>
<p>This is the code to read the data from Bigquery.</p>
<pre><code>from pyspark.sql import SparkSession
spark = SparkSession \
.builder \
.master('yarn') \
.appName('spark-bigquery-ks') \
.getOrCreate()
spark.conf.set("credentialsFile", "./cred.json")
# Load data from BigQuery.
df = spark.read.format('bigquery') \
.option('parentProject','project-id') \
.option('dataset', 'mydataset') \
.option('table', 'mytable') \
.option('query', 'SELECT * from mytable LIMIT 10') \
.load()
print(df.head())
</code></pre>
<p>Now I need to figure out the best way to transform the data (which is super easy and I can do that myself) - but my most important question is regarding the batching and handling of such a large data set. Are there any considerations I need to keep in mind to move such data (which won't fit in memory) to Keyspaces?</p>
<pre><code>from ssl import SSLContext, CERT_REQUIRED, PROTOCOL_TLSv1_2
import boto3
from boto3 import Session
from cassandra_sigv4.auth import AuthProvider, Authenticator, SigV4AuthProvider
from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
ssl_context = SSLContext(PROTOCOL_TLSv1_2)
ssl_context.load_verify_locations('./AmazonRootCA1.pem')
ssl_context.verify_mode = CERT_REQUIRED
boto_session = boto3.Session(aws_access_key_id="accesstoken",
aws_secret_access_key="accesskey",
aws_session_token="token",
region_name="region")
auth_provider = SigV4AuthProvider(boto_session)
cluster = Cluster(['cassandra.region.amazonaws.com'], ssl_context=ssl_context, auth_provider=auth_provider,
port=9142)
session = cluster.connect()
</code></pre>
<p>and finally to push the data to keyspaces would look something like this code.</p>
<pre><code># Write data to Amazon Keyspaces
for index, row in pdf.iterrows():
keyfamilyid = row["keyfamilyid"]
recommendedfamilyid = row["recommendedfamilyid"]
rank = row["rank"]
chi = row["chi"]
recommendationtype = row["recommendationtype"]
title = row["title"]
location = row["location"]
typepriority = row["typepriority"]
customerid = row["customerid"]
insert_query = f"INSERT INTO {keyspace_name}.{table_name} (keyfamilyid, recommendedfamilyid, rank, chi, recommendationtype, title, location, typepriority, customerid) VALUES ('{keyfamilyid}', '{recommendedfamilyid}', {rank}, {chi}, '{recommendationtype}', '{title}', '{location}', '{typepriority}', '{customerid}')"
try:
client.execute(insert_query)
except ClientError as e:
print(f"Error writing data for row {index}: {e.response['Error']['Message']}")
</code></pre>
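Whatever write path is used, per-row <code>INSERT</code> statements issued from a single driver loop will be the bottleneck at 40 GB. The usual pattern is to let each Spark partition open its own session (<code>df.foreachPartition</code>) and write in bounded batches; the batching itself is generic and can be sketched with the standard library (the <code>chunked</code> helper below is illustrative, not part of any of the libraries above):

```python
from itertools import islice
from typing import Iterable, Iterator, List, TypeVar

T = TypeVar("T")

def chunked(rows: Iterable[T], size: int) -> Iterator[List[T]]:
    """Yield successive lists of at most `size` rows."""
    it = iter(rows)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# inside a foreachPartition callback, each batch would become one
# bounded write instead of one INSERT per row
batches = list(chunked(range(10), 4))
```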
|
<python><pyspark><google-bigquery><aws-glue><amazon-keyspaces>
|
2023-03-03 14:48:12
| 1
| 26,831
|
Ganaraj
|
75,628,495
| 9,403,794
|
How to groupby numpy ndarray and return first row from each group. No sort beforehand
|
<p>I have ndarray:</p>
<pre><code>[[1 1]
[0 2]
[0 3]
[1 4]
[1 5]
[0 6]
[1 7]]
</code></pre>
<p>I expect reduced result like that:</p>
<pre><code>[[1 1]
[0 2]
[1 4]
[0 6]
[1 7]]
</code></pre>
<p>The result ndarray should contain the first row from each group.
I build groups on the values from column 0, which are 0 or 1.</p>
<p>Similar problem was resolved in thread: <a href="https://stackoverflow.com/q/38013778/9403794">Is there any numpy group by function?</a>
But there key was sorted and in my case it does not work.</p>
<pre><code>l1 = [1,0,0,1,1,0,1]
l2 = [1,2,3,4,5,6,7]
a = np.array([l1, l2]).T
print(a)
values, indexes = np.unique(a[:, 0], return_index=True)
</code></pre>
<p>In pandas we can achieve this by (solution from stack, but i do not remember owner, sorry for no link):</p>
<pre><code>m1 = ( df['c0'] != df['c0'].shift(1)).cumsum()
df = df.groupby([df['c0'], m1]).head(1)
</code></pre>
<p>How to make it with numpy?</p>
<p>Thank you for solutions.</p>
<p>EDITED:</p>
<p>At the time when mozway wrote solution i created something like that:</p>
<pre><code>import numpy as np
l1 = [1,0,0,1,1,0,1]
l2 = [1,2,3,4,5,6,7]
a = np.array([l1, l2]).T
print("solution")
"shift for numpy"
arr3 = np.array([np.NaN])
arr4 = np.array(a[ :-1, 0])
arr5 = np.concatenate([arr3, arr4])
print('arr5')
print(arr5)
"add shifted column"
a = np.c_[ a, arr5 ]
"diff between column 0 and shofted colum"
dif_col = np.where(a[:, 0] != a[:, 2], True, False)
"add diff column"
a = np.c_[ a, dif_col ]
"select only true"
mask = (a[:, 3] == True)
a = a[mask, :]
"remove unnecessary redundant columns "
a = np.delete(a, 2, 1)
a = np.delete(a, 2, 1)
print(a)
</code></pre>
<p>Output:</p>
<pre><code>[[1. 1.]
[0. 2.]
[1. 4.]
[0. 6.]
[1. 7.]]
</code></pre>
<p>What do you think?</p>
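For comparison, the whole shift-and-compare pipeline collapses into a single boolean mask: keep row 0 plus every row whose key differs from the previous row. This also keeps the integer dtype, avoiding the float conversion that the <code>np.NaN</code> padding forces in the version above (a sketch on the question's data):

```python
import numpy as np

a = np.array([[1, 1], [0, 2], [0, 3], [1, 4], [1, 5], [0, 6], [1, 7]])

# True at row 0 and wherever column 0 changes value (run boundaries)
mask = np.r_[True, a[1:, 0] != a[:-1, 0]]
result = a[mask]
```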
|
<python><pandas><numpy>
|
2023-03-03 14:44:02
| 2
| 309
|
luki
|
75,628,175
| 16,127,735
|
Running Python on a webserver using PHP
|
<p>I'm using a PHP script to execute a Python script on a localhost XAMPP web server:</p>
<pre><code><?php
$python_path = 'C:\Users\alona\AppData\Local\Programs\Python\Python39\python.exe';
$output = shell_exec($python_path . ' my_script.py');
echo $output;
?>
</code></pre>
<p>When the Python script contains simple output like print("Hi"), the PHP code executes the Python program and displays the output correctly.</p>
<p>However, with a more complex Python code like this:</p>
<pre><code>import pytz
from datetime import datetime
city = input("What country do you live in? ")
timezone = pytz.timezone(city)
current_time = datetime.now(timezone)
print("The time in your country is:", current_time.strftime("%H:%M:%S"))
</code></pre>
<p>The PHP page only shows the prompt "What country do you live in?" without allowing the user to enter their country, and the second output never appears. I want the PHP script to function like the command line, but on a web page: just like running <code>python my_script.py</code> on my computer prompts "What country do you live in?" and, after entering "Canada", shows "The time in your country is: 09:52:06", the same functionality should be achieved through my PHP script.</p>
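A web request has no interactive stdin, so <code>input()</code> blocks forever or receives EOF. The usual workaround is to collect the value in an HTML form and pass it to the script as a command-line argument (on the PHP side, something like <code>shell_exec($python_path . ' my_script.py ' . escapeshellarg($country))</code>). The Python side then reads <code>sys.argv</code> instead of calling <code>input()</code>; a sketch with pytz omitted for brevity:

```python
import sys

def main(argv: list) -> str:
    # argv[1] replaces the interactive input() prompt
    country = argv[1] if len(argv) > 1 else "unknown"
    return f"You live in: {country}"

if __name__ == "__main__":
    print(main(sys.argv))
```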
|
<python><php>
|
2023-03-03 14:13:48
| 1
| 1,958
|
Alon Alush
|
75,628,106
| 5,510,540
|
Python: creating plot based on observation dates (not as a time series)
|
<p>I have the following dataset</p>
<pre><code>df
id medication_date
1 2000-01-01
1 2000-01-04
1 2000-01-06
2 2000-04-01
2 2000-04-02
2 2000-04-03
</code></pre>
<p>I would like to first reshape the data set into days after the first observation per patient:</p>
<pre><code>id day1 day2 day3 day4
1 yes no no yes
2 yes yes yes no
</code></pre>
<p>in order to ultimately create a plot from the above table: the days as columns, colored black for "yes" and white for "no".</p>
<p>Any help is really appreciated.</p>
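One possible pandas approach (a sketch on the sample data above): compute each row's day number relative to that patient's first date, then pivot to a wide yes/no table. A black/white image can then come from e.g. <code>plt.imshow(wide == "yes")</code>:

```python
import pandas as pd

# Sample mirroring the question's data
df = pd.DataFrame({
    "id": [1, 1, 1, 2, 2, 2],
    "medication_date": pd.to_datetime([
        "2000-01-01", "2000-01-04", "2000-01-06",
        "2000-04-01", "2000-04-02", "2000-04-03",
    ]),
})

# Day number relative to each patient's first observation (day 1 = first date)
first = df.groupby("id")["medication_date"].transform("min")
df["day"] = (df["medication_date"] - first).dt.days + 1

# Wide yes/no table with one column per day, up to the longest span
wide = (
    df.assign(taken="yes")
      .pivot_table(index="id", columns="day", values="taken", aggfunc="first")
      .reindex(columns=range(1, int(df["day"].max()) + 1))
      .fillna("no")
      .add_prefix("day")
)
```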
|
<python><pandas><plot>
|
2023-03-03 14:07:01
| 1
| 1,642
|
Economist_Ayahuasca
|
75,628,095
| 13,349,539
|
Field not updating in MongoDB using FastAPI
|
<p>I have a document schema that looks like this:</p>
<pre><code>{
"_id": "8a28d1fc-602b-43ba-a017-105a4fff35b3",
"isDeleted": false,
"user": {
"timestamp": "2023-03-03",
"phone": "+012345678912",
"age": 25,
"gender": "male",
"nationality": "smth",
"universityMajor": "ENGINEERING",
"preferences": {
"doesMindHavingPets": true
},
"password": "$2b$12$9dRbQF2N5kAM44JBAphrrOWT23d206TIZ.VWMhZ9m.PwqnHEZKNhO"
}
}
</code></pre>
<p>I am trying to implement an update function using FastAPI, the function looks like this:</p>
<pre><code>async def updateUser (id: str, user: BasicUserUpdate = Body(...)):
new_user = user.dict(exclude_none=True)
user_preferences = None
if "preferences" in new_user:
user_preferences = new_user['preferences']
del new_user ['preferences']
if 'password' in new_user:
new_user['password'] = get_hashed_password(new_user['password'])
print (new_user['password'])
if (len(new_user) + len(user_preferences)) >= 1: # type: ignore
update_result = await dbConnection.find_one_and_update(
{"_id": id, "isDeleted" : False},
[
{ "$set":
{ "user.preferences":
{ "$mergeObjects":
[ "$user.preferences", user_preferences ]
}
}
},
{ "$set":
{ "user":
{ "$mergeObjects":
[ "$user", new_user ]
}
}
}
],
return_document = ReturnDocument.AFTER)
if update_result is not None:
return update_result['user']
else:
raise HTTPException(status_code=404, detail=f"User {id} not found, or has been banned")
</code></pre>
<p>The function <em>get_hashed_password</em> is below:</p>
<pre><code>password_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
def get_hashed_password(password: str) -> str:
return password_context.hash(password)
</code></pre>
<p>All the updating for all the fields works fine except for the password. As you can see, there is a print, <code>print (new_user['password'])</code>, statement right before I update it in MongoDB. That print statement displays the correct new hashed password. However, when MongoDB executes the update statement, everything updates properly except for the password.</p>
<p>Below is an example of a sample body sent to the function:</p>
<pre><code>{
"highPrivacy" : "true",
"password" : "abcd1234",
"preferences" : {
"doesMindHavingPets" : "true",
"isMorningPerson" : "false"
}
}
</code></pre>
<p>When I send this body to the function, a <code>print(new_user)</code> statement will print the following:</p>
<pre><code>{'password': '$2b$12$7256L8YrH1Yz5/S0GutFM.og1fckAoWUboVqZepMLXZ9KnHk4PwrC', 'highPrivacy': True}
</code></pre>
<p>As you can see, the hash is different here, but this never updates in MongoDB, and the password will always remain as <em>$2b$12$9dRbQF2N5kAM44JBAphrrOWT23d206TIZ.VWMhZ9m.PwqnHEZKNhO</em></p>
<p>What is the issue here, the behavior is very weird especially because all the other fields update normally, so there does not seem to be any issue with the MongoDB query logic itself.</p>
|
<python><mongodb><nosql><fastapi>
|
2023-03-03 14:06:21
| 0
| 349
|
Ahmet-Salman
|
75,628,082
| 13,328,195
|
How to get the closest match among three numbers
|
<p>I have a list :</p>
<pre><code>[(1, 49, 47), (11, 44, 6), (24, 16, 31), (11, 29, 47), (41, 14, 24), (40, 29, 1), (32, 49, 44), (41, 14, 14), (24, 21, 49), (19, 24, 6)]
</code></pre>
<p>And a tuple <code>(7,2,3)</code>. I need to choose a tuple from this list such that every element of <code>(7,2,3)</code> is less than or equal to the corresponding element of the selected tuple. For example, if I select <code>(19, 24, 6)</code>, the condition is true as</p>
<pre><code>7 <= 19
2 <= 24
3 <= 6
</code></pre>
<p>How can I select such a tuple so that its individual values are as small as possible?</p>
<p>If I assume the numbers have a higher weightage towards the left, I can sort them based on the first element, then the second...</p>
<p>But is there a better way to do this if the numbers did not have any weightage according to their position?</p>
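If "minimum" is taken as the smallest element-wise total (no positional weighting), a plain filter-then-min works; a sketch on the question's data, where swapping the key for <code>max</code> or any other scalar measure of "small" is trivial:

```python
candidates = [(1, 49, 47), (11, 44, 6), (24, 16, 31), (11, 29, 47),
              (41, 14, 24), (40, 29, 1), (32, 49, 44), (41, 14, 14),
              (24, 21, 49), (19, 24, 6)]
target = (7, 2, 3)

# Keep only tuples that dominate the target element-wise
valid = [c for c in candidates if all(t <= v for t, v in zip(target, c))]

# "Minimal" interpreted here as the smallest sum of components
best = min(valid, key=sum)
```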
|
<python><numbers>
|
2023-03-03 14:04:49
| 1
| 2,719
|
Roshin Raphel
|
75,627,989
| 618,099
|
git-filter-repo loses the remotes
|
<p>I <a href="https://stackoverflow.com/questions/74887239/git-automate-rewording-git-commit-messages-on-branch">previously</a> got this rewrite of history commit messages to work:</p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/bash
# Create a temporary file to store the commit messages
temp_file=$(mktemp)
main_head_hash=$(git rev-parse main)
suffix="⚠️ rebased since!"
# Use git log to retrieve the commit messages and store them in the temporary file
git log --pretty=format:%s $main_head_hash.. | grep '📌 build version' | grep -v "$suffix" > $temp_file
# Create a file to store the replacements
echo > replacements.txt
# Iterate over the commit messages in the temporary file
while read commit_message; do
# Print the replacement message to the replacements.txt file
echo "$commit_message==>$commit_message $suffix" >> replacements.txt
done < $temp_file
# # ⚠️⚠️ Rewriting history ⚠️⚠️
git filter-repo --replace-message replacements.txt --force
# # Remove the temporary files
rm $temp_file
rm replacements.txt
</code></pre>
<p>but I have now begun to notice that it causes my git repo to lose its remotes setup. At first I thought it was some other git tool I was using, but now this script is the prime suspect 🕵️</p>
<p>How do I avoid losing the remotes?</p>
|
<python><bash><git><git-filter-repo>
|
2023-03-03 13:56:39
| 1
| 9,620
|
Norfeldt
|
75,627,943
| 1,387,346
|
Trigger visual update of CSS-specified property after Gtk3 widget state change
|
<p>I have a widget on which I am drawing in a python-gtk3 application, and clicking on it triggers a specific action. I want to show some visual feedback on hover, to indicate the widget is interactive. After figuring out cursors are not the way in Gtk3, I am trying to use a border, similar to what happens when hovering buttons.</p>
<p>After a lot of playing around, what I have seems to work when I query the widgets and CSS for what color they should have -- however the visuals are never updated. Here’s a minimal working example:</p>
<pre class="lang-py prettyprint-override"><code>import gi
import cairo
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, Gdk
# Event handlers
def on_da_click(widget, event):
print('Clicked!')
def on_da_hover(widget, event):
# Set widget state as this does not happen by default for DrawingArea
context = widget.get_style_context()
context.set_state(Gtk.StateFlags.PRELIGHT if event.type == Gdk.EventType.ENTER_NOTIFY else
Gtk.StateFlags.NORMAL)
# Check the color changed
print('Border color should now be', context.get_property('border-color', context.get_state()))
# ----- ISSUE IS HERE ----- #
# Tried various things to get the visual to update, none seem to work:
context.emit('changed')
widget.queue_draw()
widget.show_all()
# Create some widgets
da = Gtk.DrawingArea()
da.add_events(Gdk.EventMask.ENTER_NOTIFY_MASK | Gdk.EventMask.LEAVE_NOTIFY_MASK |
Gdk.EventMask.BUTTON_PRESS_MASK | Gdk.EventMask.BUTTON_RELEASE_MASK)
da.connect('button-release-event', on_da_click)
da.connect('touch-event', on_da_click)
da.connect('enter-notify-event', on_da_hover)
da.connect('leave-notify-event', on_da_hover)
frame = Gtk.AspectFrame()
frame.set(.5, .5, 16/9, False)
frame.set_shadow_type(Gtk.ShadowType.NONE)
frame.get_style_context().add_class('grid-frame')
frame.add(da)
# Create a Window
win = Gtk.Window(title='Hello World')
win.set_default_size(400, 400)
win.connect('destroy', Gtk.main_quit)
win.add(frame)
win.show_all()
# Load a stylesheet for the whole app
css_loader = Gtk.CssProvider.new()
css_loader.load_from_data('''
.grid-frame > * {
border-style: solid;
border-width: 2px;
border-color: @theme_selected_bg_color;
}
.grid-frame > *:hover {
border-color: @theme_bg_color;
}
'''.encode())
Gtk.StyleContext.add_provider_for_screen(Gdk.Screen.get_default(), css_loader,
Gtk.STYLE_PROVIDER_PRIORITY_APPLICATION)
# Run
Gtk.main()
</code></pre>
<p>When I enter/exit the area I get the expected alternating colors in my messages:</p>
<pre><code>Border color should now be Gdk.RGBA(red=0.000000, green=0.000000, blue=0.196078, alpha=1.000000)
Border color should now be Gdk.RGBA(red=0.000000, green=0.000000, blue=0.913725, alpha=1.000000)
Border color should now be Gdk.RGBA(red=0.000000, green=0.000000, blue=0.196078, alpha=1.000000)
Border color should now be Gdk.RGBA(red=0.000000, green=0.000000, blue=0.913725, alpha=1.000000)
</code></pre>
<p>However, visually the border never changes color. How can I trigger the update? The usual <code>queue_draw</code> etc. do not work. My current gtk3 version is 3.24.35, but I’d like this to work across all Gtk 3.x.</p>
|
<python><gtk3><pygobject><gobject-introspection>
|
2023-03-03 13:52:53
| 0
| 11,496
|
Cimbali
|
75,627,855
| 3,352,632
|
How do I mock a function that is defined within another function in python pytest with magic mock framework?
|
<p>Consider the following piece of code:</p>
<pre><code>class X:
    def func_a(a):
        # function does something
        def func_b(b):
            # do something
            ...
        func_b(a)
</code></pre>
<p>How can I mock function func_b(a) in my test?</p>
<pre><code>with patch('x.func_a.func_b', side_effect=mock_func_b)
</code></pre>
<p>Does not work. I am only able to mock func_a. How can I handle this issue?</p>
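A nested <code>func_b</code> is recreated on every call of <code>func_a</code> and has no importable path, so <code>patch('x.func_a.func_b', ...)</code> can never find it: <code>patch</code> only replaces module- or class-level attributes. A hedged sketch of the usual workaround, hoisting the helper to module level so it becomes patchable:

```python
import sys
from unittest.mock import patch

def func_b(b):
    # real implementation now lives at module level, where patch() can see it
    return b * 2

class X:
    def func_a(self, a):
        # looked up through the module namespace at call time,
        # so a patched replacement is picked up
        return func_b(a)

# patch.object on the module avoids hardcoding the module path as a string
with patch.object(sys.modules[__name__], "func_b", return_value="mocked") as mock_b:
    result = X().func_a(3)
```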
|
<python><mocking><pytest><python-mock><pytest-mock>
|
2023-03-03 13:43:56
| 0
| 667
|
user3352632
|
75,627,785
| 19,723,806
|
How do I get indentation of bullet list (python-docx)
|
<p>I don't know what I am doing wrong. Based on the ruler in Word, <code>left_indent</code> is the place where the text begins. So I tried <code>left_indent</code>, but it only works on normal text with no bullet. Then I tried <code>first_line_indent</code>, which doesn't work either.</p>
<p>Btw, here's my code:</p>
<pre><code>import docx
def getText(filename):
doc = docx.Document(filename)
fullText = []
for paragraph in doc.paragraphs:
indent = ''
if paragraph.paragraph_format.left_indent:
indent = ' ' * int(round(paragraph.paragraph_format.left_indent / 100000))
fullText.append(indent + paragraph.text)
return '\n'.join(fullText)
print(getText(r"file.docx"))
</code></pre>
<p>I'm trying to extract Word to string as simple as possible, don't mind my nasty code.</p>
<p>ref: <a href="https://python-docx.readthedocs.io/en/latest/user/styles-using.html" rel="nofollow noreferrer">https://python-docx.readthedocs.io/en/latest/user/styles-using.html</a></p>
|
<python><python-3.x><python-docx>
|
2023-03-03 13:37:48
| 0
| 354
|
Zigatronz
|
75,627,546
| 7,836,976
|
Jasypt equivalent for Python
|
<p>I'm trying to write a python application that must connect to a <code>Postgres</code> database.</p>
<p>I'm trying to use <code>psycopg2</code> for this. Here's my connection:</p>
<pre class="lang-py prettyprint-override"><code>connection = psycopg2.connect(user="username",
password="password",
host="1.6.2.1",
port="5432",
database="mydatabase",
sslmode='require')
</code></pre>
<p>My problem is that I can't save the password in my code or in any kind of properties file or anywhere else that someone could view it.</p>
<p>As a Java developer, I'm used to using <code>Jasypt</code> and saving an encrypted password string in my application.properties. The entry looks like this:</p>
<pre><code>spring.datasource.password=ENC(udfrtIm1ypnfWTTOb29mt2IzvTTZsgwi)
</code></pre>
<p>Is there any way I can use this particular string in my Python application or do something similar in Python?</p>
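There is, as far as I know, no drop-in stdlib equivalent of Jasypt's <code>ENC(...)</code> password-based decryption (third-party options such as the <code>cryptography</code> package exist), but the simplest pattern that keeps the secret out of source and properties files entirely is reading it from the environment at runtime. A sketch; the variable name is arbitrary:

```python
import os

def get_db_password(var_name: str = "DB_PASSWORD") -> str:
    """Fetch the secret injected by the deployment environment."""
    try:
        return os.environ[var_name]
    except KeyError:
        raise RuntimeError(
            f"{var_name} is not set; export it in the service environment"
        ) from None

# usage: psycopg2.connect(user="username", password=get_db_password(), ...)
```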
|
<python><database><encryption><connection-string><jasypt>
|
2023-03-03 13:14:25
| 0
| 7,544
|
runnerpaul
|
75,627,541
| 13,158,157
|
plotly subplots: is it possible to have one subplot occupy multiple columns or rows
|
<p>I am trying to have 3 plots on one figure using plotly subplots.
Naturally, I could just use 3 rows, but I am trying to have the first two figures side by side and the third stretched across the full width. Overall I am trying to get the layout in the picture below (that I made with Paint):
<a href="https://i.sstatic.net/L3PdO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L3PdO.png" alt="enter image description here" /></a>
Is this possible, and if yes, how can I achieve it?</p>
|
<python><plotly>
|
2023-03-03 13:14:14
| 1
| 525
|
euh
|
75,627,463
| 3,861,965
|
Reading JSON with Terraform jsondecode failing for valid JSON file
|
<p>I have a JSON file in S3 which looks like this:</p>
<pre><code>{
"data_sources": [
{
"name": "monzo",
"tables": [
"transactions"
],
"format": "CSV",
"extension": "csv"
}
]
}
</code></pre>
<p>and I am trying to read it in Terraform with this</p>
<pre><code>...
data_sources = jsondecode(data.aws_s3_bucket_object.data_sources.body)
...
</code></pre>
<p>and it is throwing this error:</p>
<pre><code>╷
│ Error: Invalid function argument
│
│ on variables.tf line 41, in locals:
│ 41: data_sources = jsondecode(data.aws_s3_bucket_object.data_sources.body)
│ ├────────────────
│ │ while calling jsondecode(str)
│ │ data.aws_s3_bucket_object.data_sources.body is null
│
│ Invalid value for "str" parameter: argument must not be null.
</code></pre>
<p>However, if I save the file and auto-format it in VSCode, making it look like this:</p>
<pre><code>{
"data_sources": [
{
"name": "monzo",
"tables": ["transactions"],
"format": "CSV",
"extension": "csv"
}
]
}
</code></pre>
<p>Terraform can read it and everything works.</p>
<p>The data is</p>
<pre><code>data "aws_s3_bucket_object" "data_sources" {
bucket = "my_bucket"
key = "data_sources.json"
}
</code></pre>
<p>Why is this happening? And how can I create that file in Python in such a way that Terraform can read it? I'm creating it from a dict (<code>converted_data</code>) like so</p>
<pre><code>json.dumps(converted_data, indent=2)
</code></pre>
|
<python><json><terraform>
|
2023-03-03 13:07:40
| 0
| 2,174
|
mcansado
|
75,627,362
| 260
|
Azure Functions Python error - no job functions
|
<p>I am trying to run locally my Azure Functions using python and the error message I get is the following:</p>
<pre><code>[2023-03-03T11:34:59.832Z] No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup
code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).
</code></pre>
<p>I created my project with this command and did not change anything in the generated code:</p>
<pre><code>func init LocalFunctionProj --python -m V2
</code></pre>
<p>When I try to run the app with this command then I get the error message mentioned above:</p>
<pre><code>func start --verbose
</code></pre>
<p>I am running this on Python v3.10. Also, when I tried to create the V1 Python function, it worked without any problems.</p>
<p>Any ideas why this is happening?</p>
|
<python><azure><azure-functions>
|
2023-03-03 12:55:55
| 6
| 11,562
|
gyurisc
|
75,627,327
| 9,773,920
|
Convert text to float in redshift with same output
|
<p>My view returns text field output since I concatenate my field with '%' symbol. When I push this data into excel using to_excel, it inserts as text and not numbers. Is there a way to fix this problem so that my excel sheets has numbers instead of text?</p>
<p>redshift code:</p>
<pre><code>select concat(((("replace"((myfield)::text, '%'::text, ''::text))::numeric(10,2))::character varying)::text, '%'::text) AS myfield_formatted
from myview
</code></pre>
<p>Python code to push data into excel:</p>
<pre><code> df3 = df3.apply(pd.to_numeric, errors='ignore')
df3.to_excel(writer, sheet_name=comp,startrow=0, startcol= 0, index=False, float_format='%.2f')
</code></pre>
<p>Should I fix this problem at the Redshift level or at the pandas DataFrame level?</p>
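If the view has to keep its text output, the conversion can also happen on the pandas side just before <code>to_excel</code>: strip the <code>%</code>, cast to float, and divide by 100 so that applying Excel's percent number format shows the same value. A sketch with invented data; the column name matches the view's alias:

```python
import pandas as pd

df3 = pd.DataFrame({"myfield_formatted": ["12.50%", "7.00%"]})

# "12.50%" (text) -> 0.125 (number); apply a percent format in Excel afterwards
df3["myfield_formatted"] = (
    df3["myfield_formatted"].str.rstrip("%").astype(float) / 100
)
```

This also explains why <code>pd.to_numeric(..., errors='ignore')</code> left the values as text: the trailing <code>%</code> makes them unparseable as numbers, so they were silently passed through.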
|
<python><pandas><aws-lambda><amazon-redshift><export-to-excel>
|
2023-03-03 12:52:47
| 2
| 1,619
|
Rick
|
75,627,291
| 5,575,597
|
How do I change a pandas row value, for a certain column, for a certrain date (datetimeindex) in a dataframe?
|
<p>I have a pd like this:</p>
<pre><code>DATE delivery
2020-01-01 1
2020-01-01 11
2020-01-01 10
2020-01-01 9
2020-01-01 8
..
2023-03-02 5
2023-03-02 4
2023-03-02 3
2023-03-02 2
2023-03-02 11
</code></pre>
<p>Index is DateTimeIndex but not unique. I have a list (date_adj) of a few dates from the df, and I want to change the 'delivery' column for those dates. I tried:</p>
<pre><code>for i in date_adj:
for x in range(1,11):
df.loc[((df.index == i) & (df.delivery == x)),
[df.delivery]] = (df.delivery + 1)
</code></pre>
<p>I get the following error message: KeyError: "None of [Index([(1, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 6, 9, 8, 7, 5, 4, 3, 2, 10, 11, 4, 1, 5, 6, 7, 8, 1, 9, 10, 11, 3, 2, 2, 7, 3, 4, 5, 6, 8, 9, 10, 11, 1, 1, 3, 4, 5, 6, 7, 8, 9, 10, 2, 11, 2, 4, 5, 6, 7, 8, 9, 10, 11, 3, 11, 10, 9, 8, 7, 2, 5, 4, 3, 1, 2, 6, 1, 3, 7, 4, 5, 6, 8, 10, 11, 9, 4, 9, 8, 7, 6, 5, 4, 3, 1, 5, 11, 9, ...)], dtype='object')] are in the [columns]"</p>
<p>For illustrational purposes, I would like the result to look like this, given the date in the list is '2023-03-02':</p>
<pre><code>DATE delivery
2020-01-01 1
2020-01-01 11
2020-01-01 10
2020-01-01 9
2020-01-01 8
..
2023-03-02 6
2023-03-02 5
2023-03-02 4
2023-03-02 3
2023-03-02 12
</code></pre>
<p>Help would be very appreciated.</p>
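The KeyError comes from the column indexer: <code>.loc</code> expects the column <em>name</em> (<code>"delivery"</code>), not a list containing the Series (<code>[df.delivery]</code>). With that fixed, the inner <code>range</code> loop also collapses into one boolean mask per date. A sketch on a small frame shaped like the question's (values invented):

```python
import pandas as pd

# Hypothetical frame with a non-unique DatetimeIndex, like the question's
df = pd.DataFrame(
    {"delivery": [1, 11, 10, 5, 4, 11]},
    index=pd.to_datetime(["2020-01-01", "2020-01-01", "2020-01-01",
                          "2023-03-02", "2023-03-02", "2023-03-02"]),
)
date_adj = [pd.Timestamp("2023-03-02")]

for d in date_adj:
    # one mask per date; select the column by NAME in .loc
    mask = (df.index == d) & df["delivery"].between(1, 11).to_numpy()
    df.loc[mask, "delivery"] += 1
```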
|
<python><pandas><pandas-loc>
|
2023-03-03 12:48:47
| 1
| 863
|
cJc
|
75,627,197
| 6,156,353
|
Alembic - how to create hypertable
|
<p>How to generate hypertable using Alembic? Some custom call must be added I suppose, but where? I tried event.listen but Alembic does not register it.</p>
|
<python><sqlalchemy><alembic><timescaledb><sqlmodel>
|
2023-03-03 12:40:39
| 2
| 1,371
|
romanzdk
|
75,627,164
| 17,316,080
|
Choose only one from 2 args with Python ArgumentParser
|
<p>I use <code>ArgumentParser</code> to parse some arguments for function.</p>
<p>I want to ensure that the user can use either <code>count</code> or <code>show</code>, but not both.</p>
<p>How can I limit that?</p>
<pre><code>parser = argparse.ArgumentParser(
formatter_class=argparse.RawTextHelpFormatter,
description="""some desc""",
)
parser.add_argument(
"count",
nargs="?",
type=lambda n: max(int(n, 0), 1),
default=1,
)
parser.add_argument(
"--show",
"-s",
action="store_true",
default=False,
)
parser.add_argument(
"--decode",
"-d",
action="store_true",
default=False,
)
</code></pre>
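<code>add_mutually_exclusive_group</code> handles exactly this; an optional positional (<code>nargs="?"</code>) is allowed in such a group because it is not required. A trimmed sketch of the parser above (type converter simplified):

```python
import argparse

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument("count", nargs="?", type=int, default=1)
group.add_argument("--show", "-s", action="store_true")
parser.add_argument("--decode", "-d", action="store_true")

only_show = parser.parse_args(["--show"])
only_count = parser.parse_args(["3"])
```

Passing both (e.g. <code>3 --show</code>) then exits with an error like <code>argument --show/-s: not allowed with argument count</code>.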
|
<python><python-3.x><arguments>
|
2023-03-03 12:37:06
| 2
| 363
|
Kokomelom
|
75,627,137
| 4,338,000
|
How can I subtract from subsequent column in pandas?
|
<p>How can I subtract the column before it from each column, for many columns, without hardcoding it? I can do it by hardcoding, as shown below:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"a":[1,2,3,4],"b":[1,3,5,6],"c":[6,7,8,9]})
df['a_diff'] = df['a']-16
df['b_diff'] = df['b']-df['a']
df['c_diff'] = df['c']-df['b']
</code></pre>
<p>I know there is a way to do it row-wise by using the shift function. Can we also do it column-wise? There are 100 columns I need to apply this technique to, so I would rather do it pythonically instead of hardcoding it. Please note that "a_diff" intentionally subtracts a constant, since I will also have to subtract a constant from that column in my code.</p>
<p>Thank you,</p>
<p>Sam</p>
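<code>DataFrame.diff(axis=1)</code> does exactly this column-wise subtraction in one call; the first column, which has no left neighbour, can then be overwritten with the constant case. A sketch on the sample frame:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3, 4], "b": [1, 3, 5, 6], "c": [6, 7, 8, 9]})

diffs = df.diff(axis=1)        # each column minus the column to its left
diffs["a"] = df["a"] - 16      # first column has no neighbour: use the constant

result = pd.concat([df, diffs.add_suffix("_diff")], axis=1)
```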
|
<python><pandas><dataframe>
|
2023-03-03 12:33:42
| 2
| 1,276
|
Sam
|
75,627,124
| 7,019,073
|
Lineplot with color, line style, and marker style as data dimension
|
<p>I have a pandas data frame that looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>time</th>
<th>performance</th>
<th>attr_c</th>
<th>attr_s</th>
<th>attr_m</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>10</td>
<td>C1</td>
<td>S1</td>
<td>M0</td>
</tr>
<tr>
<td>1</td>
<td>15</td>
<td>C1</td>
<td>S0</td>
<td>M1</td>
</tr>
<tr>
<td>2</td>
<td>9</td>
<td>C1</td>
<td>S1</td>
<td>M2</td>
</tr>
<tr>
<td>3</td>
<td>12</td>
<td>C2</td>
<td>S0</td>
<td>M0</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div>
<p>Time and performance are numbers; all attributes are categories. Each combination of attributes occurs several times.</p>
<p>I would like to use seaborn to plot the performance over time with a line plot. I also want to see the state of all attributes at each data point, to visualize any correlations.
However, with a naive line plot I can, as far as I can see, only use color (<code>hue</code>) and <code>style</code> as data dimensions, whereas I want line style and marker style as two separate dimensions:</p>
<pre class="lang-py prettyprint-override"><code>sns.lineplot(data=df,
x='time',
y='performance',
hue='attr_c',
style='attr_s',
# markers='attr_m' # this does not work!
)
</code></pre>
<p>So in the end I would like, for each unique combination of attributes, a line with a unique combination of color (<code>attr_c</code>), line style (<code>attr_s</code>), and marker style (<code>attr_m</code>). And a legend of course :)</p>
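<p>In case it clarifies what I am after: falling back to plain matplotlib, I can get all three dimensions by looping over the groups myself (the color/style/marker mappings below are my own arbitrary choices) — but I was hoping seaborn could do this directly:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, just for the sketch
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({
    "time": [0, 1, 2, 3],
    "performance": [10, 15, 9, 12],
    "attr_c": ["C1", "C1", "C1", "C2"],
    "attr_s": ["S1", "S0", "S1", "S0"],
    "attr_m": ["M0", "M1", "M2", "M0"],
})

# My own arbitrary mappings from attribute value to visual property.
colors = {"C1": "tab:blue", "C2": "tab:orange"}
styles = {"S0": "-", "S1": "--"}
markers = {"M0": "o", "M1": "s", "M2": "^"}

fig, ax = plt.subplots()
# One line per unique attribute combination, encoding each attribute visually.
for (c, s, m), grp in df.groupby(["attr_c", "attr_s", "attr_m"]):
    ax.plot(grp["time"], grp["performance"],
            color=colors[c], linestyle=styles[s], marker=markers[m],
            label=f"{c}/{s}/{m}")
ax.legend()
```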
|
<python><pandas><seaborn><line-plot>
|
2023-03-03 12:32:26
| 1
| 1,040
|
Seriously
|
75,627,051
| 4,096,572
|
How to use np.cumsum to replicate the output of scipy.stats.expon.cdf?
|
<p>For context, I'm trying to understand how to use <code>np.cumsum</code> to replicate <code>scipy.stats.expon.cdf</code> because I need to write a function compatible with <code>scipy.stats.kstest</code> which is not one of the distributions already available in <code>scipy.stats</code>.</p>
<p>I am having issues with finding resources online which give good guides to numerically computing the CDF, because the majority of resources just point you to in-built methods. I've gotten as far as the <code>custom_exponential_cdf</code> function defined below, which works well in some cases but breaks down in others (for example, the one below).</p>
<p>Does anyone know what I am doing wrong, and how I can modify <code>custom_exponential_cdf</code> to match the output from <code>scipy.stats.expon.cdf</code>?</p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import kstest, expon
def custom_exponential_cdf(x, lamb):
x = x.copy()
x[x < 0] = 0.0
pdf = lamb * np.exp(-lamb * x)
cdf = np.cumsum(pdf * np.diff(np.concatenate(([0], x))))
return cdf
unique_values = np.array([0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.11, 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 0.18, 0.19, 0.20, 0.21, 0.22, 0.23, 0.24, 0.25, 0.26, 0.27, 0.28, 0.29, 0.3, 0.31, 0.32, 0.33, 0.34, 0.35, 0.36, 0.37, 0.38, 0.39, 0.4, 0.41, 0.42, 0.43, 0.44, 0.45, 0.46, 0.47, 0.48, 0.49, 0.5, 0.51, 0.52, 0.53, 0.54, 0.55, 0.56, 0.57, 0.58, 0.59, 0.6, 0.61, 0.62, 0.63, 0.64, 0.65, 0.66, 0.67, 0.68, 0.69, 0.71, 0.72, 0.73, 0.74, 0.75, 0.76, 0.77, 0.78, 0.79, 0.85, 0.87, 0.89])
counts = np.array([1597, 1525, 1438, 1471, 1311, 1303, 1202, 1147, 1002, 918, 893, 801, 713, 680, 599, 578, 478, 430, 409, 353, 350, 292, 245, 211, 224, 182, 171, 151, 125, 111, 94, 85, 72, 73, 57, 36, 53, 35, 35, 27, 19, 20, 15, 10, 20, 12, 10, 13, 11, 10, 17, 15, 8, 3, 3, 3, 5, 6, 6, 2, 3, 3, 4, 6, 1, 1, 3, 1, 2, 1, 3, 1, 1, 2, 2, 2, 2, 2, 1, 1, 2])
x = np.repeat(unique_values, counts)
lamb = 9.23
fig, ax = plt.subplots()
ax.plot(x, expon.cdf(x, 0.0, 1.0 / lamb))
ax.plot(x, custom_exponential_cdf(x, lamb))
print(kstest(x, custom_exponential_cdf, (lamb,)))
print(kstest(x, "expon", (0.0, 1.0 / lamb)))
</code></pre>
<p>The output from the print statements is</p>
<pre><code>KstestResult(statistic=0.08740741955472273, pvalue=6.857709296861777e-145, statistic_location=0.02, statistic_sign=-1)
KstestResult(statistic=0.0988550670723162, pvalue=2.7098163860110364e-185, statistic_location=0.04, statistic_sign=-1)
</code></pre>
<p>The output from the plot is:</p>
<p><a href="https://i.sstatic.net/MBMcp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBMcp.png" alt="A graph illustrating the difference between my function and scipy.stats.expon.cdf" /></a></p>
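<p>My current suspicion about the failure: <code>np.cumsum</code> over the repeated, non-uniformly spaced sample values is not a proper Riemann sum. A sketch that integrates the pdf on its own uniform grid and then interpolates at the query points (function name is mine) does match the analytic CDF:</p>

```python
import numpy as np

def numeric_exponential_cdf(x, lamb):
    # Integrate the pdf on a dense uniform grid, then interpolate at x.
    grid = np.linspace(0.0, max(x.max(), 1.0), 10001)
    step = grid[1] - grid[0]
    pdf = lamb * np.exp(-lamb * grid)
    cdf_grid = np.cumsum(pdf) * step
    cdf_grid -= cdf_grid[0]   # anchor so that cdf(0) == 0
    return np.interp(x, grid, cdf_grid)

x = np.array([0.1, 0.5, 1.0])
lamb = 9.23
analytic = 1.0 - np.exp(-lamb * x)   # known closed form, for comparison
numeric = numeric_exponential_cdf(x, lamb)
```

<p>Is this the right way to think about it, or am I still missing something in the original <code>custom_exponential_cdf</code>?</p>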
|
<python><numpy><scipy><probability-density><probability-distribution>
|
2023-03-03 12:25:16
| 1
| 605
|
John Coxon
|
75,627,001
| 542,270
|
How to convert a single object or a set into a set?
|
<p>There's a method as follows:</p>
<pre><code>def send_message(
content: str,
slack_conn_ids: Union[SlackConnection, Set[SlackConnection]],
send_only_in_production: bool = True,
):
...
if isinstance(slack_conn_ids, set):
set_slack_conn_ids = slack_conn_ids
elif isinstance(slack_conn_ids, SlackConnection):
set_slack_conn_ids = {slack_conn_ids}
else:
        raise ValueError(
            "`slack_conn_ids` should be of type `SlackConnection` or `Set[SlackConnection]`"
        )
...
</code></pre>
<p>Can this code be simplified, implemented more idiomatically?</p>
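<p>The shortest version I can think of collapses the two accepted branches into one expression — at the cost of dropping the explicit error for wrong types (a type checker would still flag those). The stand-in class below is just for illustration:</p>

```python
from typing import Set, Union


class SlackConnection:
    """Minimal stand-in for the real class, just for illustration."""


def to_set(value: Union[SlackConnection, Set[SlackConnection]]) -> Set[SlackConnection]:
    # Pass a set through unchanged, wrap a single connection in a set.
    return value if isinstance(value, set) else {value}
```

<p>Is dropping the runtime <code>ValueError</code> acceptable here, or is there a more idiomatic middle ground?</p>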
|
<python>
|
2023-03-03 12:20:24
| 3
| 85,464
|
Opal
|
75,626,691
| 1,618,893
|
Session handling with pytest and sqlalchemy
|
<p>For every unit test I want to rollback already commited statements using <code>pytest.fixture</code> and <code>scoped_session</code>.</p>
<h3>Setup</h3>
<ul>
<li>python 3.11</li>
<li>sqlalchemy==2.0.3</li>
<li>pytest==7.2.1</li>
<li>factory-boy==3.2.1</li>
<li>fastapi==0.91.0</li>
</ul>
<h3>Implementation</h3>
<h4>conftest.py</h4>
<ul>
<li>The engine is simply created using <code>engine = create_engine(DATABASE_URL)</code></li>
<li><code>init_database</code> creates the database if it doesn't exist using <code>sqlalchemy</code></li>
</ul>
<pre class="lang-py prettyprint-override"><code>import pytest
from sqlalchemy_utils import drop_database
from database_test_setup.manage import Session
@pytest.fixture(scope="session")
def db_engine():
from settings import settings
from database import engine, init_database, DATABASE_URL
init_database(user=settings.DB_USER, pwd=settings.DB_PWD, host=settings.DB_HOST, port=settings.DB_PORT,
db_name=settings.DB_NAME)
Session.configure(bind=engine)
yield engine
drop_database(DATABASE_URL)
engine.dispose()
@pytest.fixture(scope="function", autouse=True)
def db_session(db_engine):
session_ = Session()
session_.begin_nested()
yield session_
session_.rollback()
</code></pre>
<h4>factories.py</h4>
<ul>
<li><code>Order</code> is a simple SQLAlchemy model</li>
</ul>
<pre class="lang-py prettyprint-override"><code>from factory import Sequence
from factory.alchemy import SQLAlchemyModelFactory
from database_test_setup.manage import Session
from order.models import Order
class BaseFactory(SQLAlchemyModelFactory):
"""Base Factory."""
class Meta:
"""Factory configuration."""
abstract = True
sqlalchemy_session = Session
sqlalchemy_session_persistence = "flush"
class OrderFactory(BaseFactory):
id = Sequence(lambda n: n)
quality = "X"
start_date = "2021-09-15T17:53:00"
end_date = "2021-09-15T15:53:00"
class Meta:
model = Order
</code></pre>
<h4>manage.py</h4>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy.orm import scoped_session, sessionmaker
Session = scoped_session(sessionmaker())
</code></pre>
<h3>Problem</h3>
<p>When I use the <code>db_session</code> fixture in my test to commit a statement, the statement will not be affected by <code>session_.rollback</code>.
The same is true if I set <code>sqlalchemy_session_persistence = "commit"</code> in my <code>SQLAlchemyModelFactory</code>.</p>
<p>I understand that this question has been <a href="https://stackoverflow.com/a/58662370/1618893">answered</a> before, but the proposed solution doesn't seem to work for me, even though I use the same setup except using a separate module for creating a <code>scoped_session</code> instead of a fixture to use additionally in my <code>SQLAlchemyModelFactory</code>.</p>
|
<python><unit-testing><sqlalchemy><pytest>
|
2023-03-03 11:46:49
| 1
| 962
|
Roman Purgstaller
|
75,626,660
| 2,107,030
|
pandas read_table with stopping strings to delimit different dataframes to assign
|
<p>I have a csv file of the form :</p>
<pre><code>LINE 1 to SKIP
LINE 2 to SKIP
2.13999987 0.139999986 -0.398405492 1
2.61999989 6.0000062E-2 0.450082362 1
2.74000001 5.99999428E-2 1.04403841 1
2.84000015 4.00000811E-2 6.17375337E-2 1
IGN IGN IGN IGN
21.4200001 0.420000076 1.53572667 1
22.3199997 0.479999542 -0.595370948 1
23.3199997 0.520000458 0.136062101 1
24.3600006 0.519999504 -0.520044923 1
25.3999996 0.520000458 2.45230961 1
26.4399986 0.519999504 -2.08248448 1
27.4799995 0.520000458 -0.263438225 1
IGN IGN IGN IGN
58.6800003 0.520000458 -0.789233088 1
59.7200012 0.520000458 -1.02961564 1
60.7600021 0.51999855 -0.889572859 1
61.7999992 0.520000458 -1.03346229 1
62.8400002 0.520000458 4.94940579E-2 1
</code></pre>
<p>And I would like to read that with pandas like:</p>
<pre><code>df_first = pd.read_table('file.txt', names=names, delimiter=' ', skiprows=3, nrows=4)
</code></pre>
<p>(where <code>names</code> are the name of each column in the file.txt).
I want to assign each series of rows to a <code>df</code> with a given name specified (perhaps with an array of names), until the <code>IGN IGN IGN IGN</code> string is met, and then assign the rest of the rows to the following <code>df</code> again until the the next <code>IGN IGN IGN IGN</code> string is met, till the end of the file.</p>
<p>What is a good way to do that?</p>
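<p>One approach I have been considering (column names below are placeholders): split the raw text on the <code>IGN</code> sentinel lines first, then feed each block to pandas separately:</p>

```python
import io
import pandas as pd

raw = """LINE 1 to SKIP
LINE 2 to SKIP
2.14 0.14 -0.39 1
2.62 0.06 0.45 1
IGN IGN IGN IGN
21.42 0.42 1.53 1
22.32 0.48 -0.59 1
IGN IGN IGN IGN
58.68 0.52 -0.78 1
"""

names = ["t", "dt", "v", "flag"]          # placeholder column names
lines = raw.splitlines()[2:]              # skip the two header lines
blocks, current = [], []
for line in lines:
    if line.startswith("IGN"):
        blocks.append(current)
        current = []
    else:
        current.append(line)
blocks.append(current)                    # last block has no trailing IGN

dfs = {f"df_{i}": pd.read_csv(io.StringIO("\n".join(b)), sep=" ", names=names)
       for i, b in enumerate(blocks)}
```

<p>Is there a cleaner way that avoids reading the whole file into memory twice?</p>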
|
<python><pandas><dataframe><csv>
|
2023-03-03 11:43:47
| 1
| 2,166
|
Py-ser
|
75,626,522
| 6,730,854
|
How to add different background colors to different selection options in ttk.Combobox?
|
<p>I want to add different colors to different selections in Combobox. I found questions about changing the <a href="https://stackoverflow.com/questions/64755118/how-to-change-ttk-combobox-dropdown-colors-dynamically">overall</a> background color, but not per entry.</p>
<p>I'm attaching some examples below.</p>
<p><a href="https://i.sstatic.net/jseuR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jseuR.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/Sw082.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Sw082.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/C3mDp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C3mDp.png" alt="This is an example from your answer @toyota-supra" /></a></p>
|
<python><tkinter><combobox><ttk><ttkwidgets>
|
2023-03-03 11:28:43
| 1
| 472
|
Mike Azatov
|
75,626,508
| 11,202,401
|
How to improve reliability of Django-Q Schedule Tasks
|
<p>My project currently has about 300 scheduled tasks. Some of these run every hour on the hour and some run once a day (at midnight). The tasks make API calls to third party API's.</p>
<p>This is the current set up of my Q_CLUSTER</p>
<pre><code>Q_CLUSTER = {
    'name': 'DataScheduler',
    'workers': 3,
    'timeout': 15,
    'retry': 300,
    'queue_limit': 10000,
    'bulk': 10,
    'max_attempts': 1,
    'orm': 'default'
}
</code></pre>
<p>Of the 300 tasks, about 120 of them fail (different ones every time). If I manually trigger these to run they pass so there isn't anything wrong with the actual request. I believe the q cluster can't process all of the tasks when they are triggered at the same time. I think this can be improved by tweaking the Q_CLUSTER settings but I'm unsure how best to do this.</p>
<p>I've tried running through the documentation <a href="https://django-q.readthedocs.io/en/latest/configure.html" rel="nofollow noreferrer">https://django-q.readthedocs.io/en/latest/configure.html</a></p>
<p>But I'm unsure how to improve the success rates of the scheduled tasks. I've struggled to find any other articles or documentation to explain how best to utilise these settings.</p>
<p>My question is - how can I improve these settings to ensure the tasks pass? With time I will have more scheduled tasks so looking to address now before it gets worse.</p>
<p>Appreciate any advice or help in the right direction for where to go on this.</p>
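<p>To make the question concrete, this is the kind of adjustment I am considering — more workers for I/O-bound API calls, a timeout long enough for slow requests (with <code>retry</code> kept above <code>timeout</code>, as the docs require), a smaller pull per cluster, and retries for transient failures. The specific numbers are guesses that would need tuning:</p>

```python
Q_CLUSTER = {
    'name': 'DataScheduler',
    'workers': 8,          # more concurrency for I/O-bound API calls
    'timeout': 120,        # let slow API calls finish instead of being killed
    'retry': 180,          # must stay larger than timeout
    'queue_limit': 50,     # pull fewer tasks into the cluster at once
    'bulk': 10,
    'max_attempts': 3,     # re-run transient failures
    'orm': 'default',
}
```

<p>Does tuning in this direction make sense, or is there a better lever for burst loads at the top of the hour?</p>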
|
<python><django><django-q>
|
2023-03-03 11:27:28
| 0
| 605
|
nt95
|
75,626,439
| 10,634,362
|
Negative float type value from python (pybind11) cannot cast correctly in C++
|
<p>I would like to pass a negative float value from Python (via pybind11) to a C++ function, where that negative value will be statically cast to <code>uint32_t</code> for further processing. I am facing a very weird issue: after the cast, the result is ZERO.</p>
<ul>
<li>my cpp function is like as follows:</li>
</ul>
<pre class="lang-cpp prettyprint-override"><code>void do_calc(double value){
    // When calling this function from Python, the following line
    // prints the correct negative value, e.g. -6 or -6.1
    std::cout << "value: " << value << std::endl;
    // This is the error case. Calling the C++ function alone
    // prints the correct uint32_t value, but calling it from Python gives zero.
    std::cout << "value: " << static_cast<uint32_t>(value) << std::endl;
}
</code></pre>
<ul>
<li>The pybind code is written in the usual way; I am only giving pseudocode here:</li>
</ul>
<pre class="lang-cpp prettyprint-override"><code>.def("do_calc",<double>(&CPP_CLASS_NAME::do_calc))
</code></pre>
<ul>
<li>I am taking a string input on the Python end, converting it to float, and calling the C++ function with it.</li>
</ul>
<p>Any idea what would be the cause of it?</p>
|
<python><c++><pybind11>
|
2023-03-03 11:21:38
| 0
| 701
|
karim
|
75,626,435
| 1,944,101
|
Common resource file in PyQt/ PySide and resource file location
|
<p>When a resource file is created via Qt Designer in a Form, the python code generated by the Qt Designer includes the following import statement:</p>
<pre><code>import icons_rc
</code></pre>
<p>This import statement is the same irrespective of the qrc file location (say the shared location \Modules\ZA\RES\ or the ui location \Modules\ZA\MDH).</p>
<p>The generated form works only if the generated Python file for the qrc file is in the same location as the form; otherwise it raises the error:</p>
<pre><code> File "S:\...\Modules\ZA\MDH\ui_BObj.py", line 25, in <module>
import icons_rc
ModuleNotFoundError: No module named 'icons_rc'
</code></pre>
<p>This implies saving all images and the compiled qrc file in the same location as the UI/Form folder. I used PySide6 with pyside6-rcc, and I believe this behaviour is the same in PyQt as well.</p>
<p>Does this mean that a qrc file for every UI form must be created in the respective location, even if these forms use the same common icon set?</p>
<p>All documentation/posts on this topic talk about the qrc file format and compiler, but there is no indication of where the resource files should live. Is it not possible to create a shared/common icons qrc file in one location, compile it, and then use it in different UI forms in different locations?</p>
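<p>The workaround I am currently experimenting with (the paths below are assumptions matching my layout) is to put the shared resource folder on <code>sys.path</code> before importing any generated form, so that <code>import icons_rc</code> resolves from the common location — but this feels like it fights the tooling rather than using it:</p>

```python
import sys
from pathlib import Path

# Assumed layout: the compiled resource module lives in Modules/ZA/RES/icons_rc.py,
# while the generated forms live elsewhere (e.g. Modules/ZA/MDH).
res_dir = (Path("Modules") / "ZA" / "RES").resolve()
sys.path.insert(0, str(res_dir))
# From here on, `import icons_rc` inside any generated ui_*.py file
# would resolve to the shared Modules/ZA/RES/icons_rc.py.
```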
|
<python><pyqt><pyside><qt-designer><qt-resource>
|
2023-03-03 11:21:17
| 1
| 410
|
Ajay
|
75,626,220
| 21,295,456
|
Why this regex is not matching the text string?
|
<p>I have a python code as follows:</p>
<pre><code>import re
string = " S/O: fathersName,other details,some other details."
fathersName = re.match(r":.*?,", string).group(0)
</code></pre>
<p>The regex match is supposed to match the <code>fathersName</code> part of the string, but I get an <code>AttributeError</code> saying no match was found.</p>
<p>I even tried with <code>re.match(':', string)</code> and still get no match.</p>
<p>I think it is somehow related to the <code>:</code> symbol, but I'm not sure.</p>
<p>I am using Jupyter Notebook</p>
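<p>For what it's worth, I suspect the anchoring behaviour of <code>re.match</code> (it only matches at the start of the string, and my string starts with a space before <code>S/O:</code>). A sketch with <code>re.search</code> does find the name — is that the whole explanation?</p>

```python
import re

s = " S/O: fathersName,other details,some other details."

# re.match only matches at position 0; re.search scans the whole string.
m = re.search(r":\s*(.*?),", s)
fathers_name = m.group(1)
```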
|
<python><regex><jupyter-notebook>
|
2023-03-03 11:00:50
| 1
| 339
|
akashKP
|
75,626,130
| 4,065,451
|
Length of pySpark is bigger than when using pandas
|
<p>I am experimenting with different ways to load data into dataframes.
One of the frameworks I am looking into is PySpark, but when I load a CSV with 14149 rows and return the length of the df, it returns 14153, while pandas returns 14149.</p>
<pre><code>import pandas as pd
from pyspark.sql import SparkSession

# pandas
df = pd.read_csv("data_file.csv")
print(df.shape)

# spark
spark = SparkSession \
    .builder \
    .appName("Task_1") \
    .getOrCreate()
df = spark.read.csv("data_file.csv")
print(df.toPandas().shape)
</code></pre>
<p>The result is (14149, 5), (14153, 5)</p>
<p>When I am inspecting the spark df, the head looks fine, and also the tail has the correct information, but the id number is off. Where are the extra four rows coming from, and how can I prevent pySpark from adding rows to a df, that are not in the src file?</p>
<p>Link to the file:
<a href="https://anonymfile.com/OEgRK/training.csv" rel="nofollow noreferrer">training.csv</a></p>
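<p>My working theory (unverified): some rows contain embedded newlines inside quoted fields, which a line-based parser splits into extra rows while a CSV-aware parser does not. The stdlib <code>csv</code> module shows the difference on a toy example:</p>

```python
import csv
import io

raw = 'id,text\n1,"first\nrow spans two lines"\n2,plain\n'

physical_lines = raw.count("\n")                   # 4 newline-terminated lines
logical_rows = list(csv.reader(io.StringIO(raw)))  # header + 2 records
```

<p>If that is the cause, I believe the Spark reader can be told to honour quotes across lines with <code>.option("multiLine", "true")</code> (plus matching <code>quote</code>/<code>escape</code> options) — would that bring the counts back in line with pandas?</p>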
|
<python><pandas><pyspark>
|
2023-03-03 10:52:57
| 1
| 344
|
Raavgo
|