text string | meta dict |
|---|---|
Q: Is there a CUDA equivalent to the OpenCL shuffle operation? http://man.opencl.org/shuffle.html
This shuffles the elements of a vector type based on a mask. Is there an equivalent in CUDA?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I use target="blank" in an external link on MDX? I am creating a blog with NextJS and MDX, and I would like to add a target="_blank" to external links.
Normally I do it with regular Markdown using the following format [words](link){:target="blank"} but this is not working properly in MDX.
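MDX does not support kramdown-style inline attribute lists like {:target="blank"}. A common approach is to override the a element through an MDX components mapping (e.g. passing a custom link component for a) and decide per link whether to open in a new tab. The helper below is only an illustrative sketch of that decision logic; the function name and the http(s) test are assumptions, not part of any MDX API:

```javascript
// Sketch of the props a custom MDX `a` component could spread onto its <a>.
// External links (absolute http/https URLs) get target="_blank" plus the
// usual rel attributes; internal links are left alone.
function linkProps(href) {
  const isExternal = /^https?:\/\//.test(href);
  return isExternal
    ? { href, target: "_blank", rel: "noopener noreferrer" }
    : { href };
}

console.log(linkProps("https://example.com"));
// { href: 'https://example.com', target: '_blank', rel: 'noopener noreferrer' }
console.log(linkProps("/about"));
// { href: '/about' }
```

In an MDX setup this logic would live inside the component mapped to a, so every rendered Markdown link goes through it automatically.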
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Python: Create new column that numbers how many occurrences have taken place from another column Just for a basic understanding of what I am referring to: the Frequency column is what I am trying to create, based on the number of times the fruit has appeared up to that given row.
Fruit    Frequency  Date
Apple    1
Banana   1
Orange   1
Apple    2
Apple    3
Orange   2
I tried df['Frequency']=df.groupby['fruit', 'date'].cumcount() but could not get it to work
A: IIUC:
newdf = df.assign(Frequency=df.groupby('Fruit').cumcount() + 1)
>>> newdf
Fruit Frequency Date
0 Apple 1 NaN
1 Banana 1 NaN
2 Orange 1 NaN
3 Apple 2 NaN
4 Apple 3 NaN
5 Orange 2 NaN
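A self-contained, runnable version of the accepted approach (using only the Fruit column, since Date is empty in the example):

```python
import pandas as pd

df = pd.DataFrame({"Fruit": ["Apple", "Banana", "Orange", "Apple", "Apple", "Orange"]})
# cumcount() numbers rows within each group starting at 0, hence the + 1
df["Frequency"] = df.groupby("Fruit").cumcount() + 1
print(df["Frequency"].tolist())  # [1, 1, 1, 2, 3, 2]
```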
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: how to scale a polar plotted sphere up to a different radius? python So I need to scale up the size of a sphere I plotted with polar coordinates, but I am unsure if I'm doing it correctly in a way that scales properly.
#og code
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams["figure.figsize"] = [7.00, 3.50]
plt.rcParams["figure.autolayout"] = True
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
r = 0.05
u, v = np.mgrid[0:2 * np.pi:30j, 0:np.pi:20j]
x = np.cos(u) * np.sin(v)
y = np.sin(u) * np.sin(v)
z = np.cos(v)
ax.plot_surface(x, y, z, cmap=plt.cm.YlGnBu_r)
plt.show()
# No matter what I change r to, it's always .5 by .5 units on a plot. I want it to be a radius of 50.
first sphere
I then tried this and it returns a sphere the size I want, but I am unsure if the z coordinate is scaling the same rate as the rest because it doesn't look as spherical.
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams["figure.figsize"] = [10.00, 40.50]
plt.rcParams["figure.autolayout"] = False
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.axes.set_xlim3d(left=-100, right=100)
ax.axes.set_ylim3d(bottom=-100, top=100)
ax.axes.set_zlim3d(bottom=-100, top=100)
r = 50
u, v = np.mgrid[0:2 * np.pi:300j, 0:np.pi:300j]
x = 100*(np.cos(u) * np.sin(v))
y = 100*(np.sin(u) * np.sin(v))
z = 100*(np.cos(v))
ax.plot_surface(x, y, z, rstride= 5, cstride = 5, cmap=plt.cm.YlGnBu_r)
plt.show()
[second sphere](https://i.stack.imgur.com/qOGaX.png)
I tried to just add more points to plot and scaling them by multiplying by 100. not sure if spherical or tripping.
A: Use ax.set_box_aspect() to make sure all the axes scale at the same rate (it sets the aspect ratio along the X, Y, and Z axes). See the corresponding doc.
You can either pass hard-coded values or do it dynamically by retrieving the total range of x, y, and z values with the np.ptp() function.
[...]
# Hard coded values would be (200, 200, 200)
ax.set_box_aspect((np.ptp(x), np.ptp(y), np.ptp(z)))
ax.plot_surface(x, y, z, rstride= 5, cstride = 5, cmap=plt.cm.YlGnBu_r)
plt.show()
This should give you the following figure
Regarding the radius, you're not using the r variable you define. Scaling x, y and z with r should work:
# Keep same box aspect as previous figure to demonstrate the scaling effect
ax.set_box_aspect((200, 200, 200))
x = r*(np.cos(u) * np.sin(v))
y = r*(np.sin(u) * np.sin(v))
z = r*(np.cos(v))
Here, r=50 will generate the following figure:
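As a quick numerical check (no plotting needed), scaling by r stretches all three coordinates uniformly; with r = 50 each axis spans roughly 100 units:

```python
import numpy as np

r = 50
u, v = np.mgrid[0:2 * np.pi:300j, 0:np.pi:300j]
x = r * np.cos(u) * np.sin(v)
y = r * np.sin(u) * np.sin(v)
z = r * np.cos(v)

# Peak-to-peak extent along each axis: all close to 2*r = 100
print(np.ptp(x), np.ptp(y), np.ptp(z))  # each approximately 100.0
```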
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: ModuleNotFoundError: No module named '"travelApp' I am using Django to create a web app, and when I tried to use the py manage.py runserver command, I got this error:
ModuleNotFoundError: No module named '"travelApp'
This is the full error:
Traceback (most recent call last):
File "C:\Users\n_mac\OneDrive\Desktop\Coding\Python\Django\travelApp\.venv\lib\site-packages\django\core\management\base.py", line 323, in run_from_argv
self.execute(*args, **cmd_options)
File "C:\Users\n_mac\OneDrive\Desktop\Coding\Python\Django\travelApp\.venv\lib\site-packages\django\core\management\commands\runserver.py", line 60, in execute
super().execute(*args, **options)
File "C:\Users\n_mac\OneDrive\Desktop\Coding\Python\Django\travelApp\.venv\lib\site-packages\django\core\management\base.py", line 364, in execute
output = self.handle(*args, **options)
File "C:\Users\n_mac\OneDrive\Desktop\Coding\Python\Django\travelApp\.venv\lib\site-packages\django\core\management\commands\runserver.py", line 67, in handle
if not settings.DEBUG and not settings.ALLOWED_HOSTS:
File "C:\Users\n_mac\OneDrive\Desktop\Coding\Python\Django\travelApp\.venv\lib\site-packages\django\conf\__init__.py", line 79, in __getattr__
self._setup(name)
File "C:\Users\n_mac\OneDrive\Desktop\Coding\Python\Django\travelApp\.venv\lib\site-packages\django\conf\__init__.py", line 66, in _setup
self._wrapped = Settings(settings_module)
File "C:\Users\n_mac\OneDrive\Desktop\Coding\Python\Django\travelApp\.venv\lib\site-packages\django\conf\__init__.py", line 157, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File "C:\Users\n_mac\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named '"travelApp'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\n_mac\OneDrive\Desktop\Coding\Python\Django\travelApp\manage.py", line 21, in <module>
main()
File "C:\Users\n_mac\OneDrive\Desktop\Coding\Python\Django\travelApp\manage.py", line 17, in main
execute_from_command_line(sys.argv)
File "C:\Users\n_mac\OneDrive\Desktop\Coding\Python\Django\travelApp\.venv\lib\site-packages\django\core\management\__init__.py", line 381, in execute_from_command_line
utility.execute()
File "C:\Users\n_mac\OneDrive\Desktop\Coding\Python\Django\travelApp\.venv\lib\site-packages\django\core\management\__init__.py", line 375, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Users\n_mac\OneDrive\Desktop\Coding\Python\Django\travelApp\.venv\lib\site-packages\django\core\management\base.py", line 336, in run_from_argv
connections.close_all()
File "C:\Users\n_mac\OneDrive\Desktop\Coding\Python\Django\travelApp\.venv\lib\site-packages\django\db\utils.py", line 219, in close_all
for alias in self:
File "C:\Users\n_mac\OneDrive\Desktop\Coding\Python\Django\travelApp\.venv\lib\site-packages\django\db\utils.py", line 213, in __iter__
return iter(self.databases)
File "C:\Users\n_mac\OneDrive\Desktop\Coding\Python\Django\travelApp\.venv\lib\site-packages\django\utils\functional.py", line 80, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Users\n_mac\OneDrive\Desktop\Coding\Python\Django\travelApp\.venv\lib\site-packages\django\db\utils.py", line 147, in databases
self._databases = settings.DATABASES
File "C:\Users\n_mac\OneDrive\Desktop\Coding\Python\Django\travelApp\.venv\lib\site-packages\django\conf\__init__.py", line 79, in __getattr__
self._setup(name)
File "C:\Users\n_mac\OneDrive\Desktop\Coding\Python\Django\travelApp\.venv\lib\site-packages\django\conf\__init__.py", line 66, in _setup
self._wrapped = Settings(settings_module)
File "C:\Users\n_mac\OneDrive\Desktop\Coding\Python\Django\travelApp\.venv\lib\site-packages\django\conf\__init__.py", line 157, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File "C:\Users\n_mac\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named '"travelApp'
Running py manage.py migrate fails with the same traceback, ending in:
ModuleNotFoundError: No module named '"travelApp'
I have no idea what is happening. travelApp is not a module, it is my project name, so I don't know why Django is treating it like one.
By the way, here is my settings.py file if that helps.
"""
Django settings for travelApp project.
Generated by 'django-admin startproject' using Django 2.2.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.2/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
CSRF_COOKIE_HTTPONLY = True
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'i1kf*^g1+dt*8n9bgcl80$d!970186x(x(9z2)7dfy1ynlxixn'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = ['127.0.0.1']
EMAIL_HOST_USER = "koolfacts67@gmail.com"
EMAIL_HOST_PASSWORD = "m80222jms"
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'crispy_forms',
'django.contrib.staticfiles',
'sights.apps.SightsConfig',
'itineraries.apps.ItinerariesConfig',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'travelApp.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'static')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
TEMPLATE_DIRS = (
'/home/django/myproject/templates',
)
WSGI_APPLICATION = 'travelApp.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'railway',
'USER': 'postgres',
'PASSWORD': 'dRfHDaVZymxUWsZzCRtq',
'HOST': 'containers-us-west-182.railway.app',
'PORT': '6045',
}
}
# Password validation
# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'US/Central'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.2/howto/static-files/
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, '/static')
STATICFILES_DIRS = [os.path.join(BASE_DIR, "static/")]
Anybody know what is happening?
A: Look closely at the error from the terminal: ModuleNotFoundError: No module named '"travelApp' contains an extra double quote inside the module name. Django is being told to import a settings module whose name literally begins with a " character, most likely because the settings module name is quoted somewhere the quotes become part of the value. Remove the stray quotes and it will start working.
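The quoted module name is easy to reproduce with plain importlib, which shows the failure mode exactly. The candidate culprits in the comments below are illustrative guesses about where the quotes could creep in, not taken from the question:

```python
import importlib

# Typical ways a quote ends up inside the module name (illustrative examples):
#   manage.py:  os.environ.setdefault("DJANGO_SETTINGS_MODULE", '"travelApp.settings"')
#   cmd.exe:    set DJANGO_SETTINGS_MODULE="travelApp.settings"   (cmd keeps the quotes)
try:
    importlib.import_module('"os"')  # the quotes are part of the name being imported
except ModuleNotFoundError as exc:
    print(exc)  # No module named '"os"'
```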
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Problem to capture specific group in regular expression I'm trying to develop a regular expression that matches data with this format:
430::1820-07-27::Vitorino Pinheiro Lacerda::Rodrigo Pinheiro Lacerda::Custodia Maria Alvares::::
430::1873-05-12::Vitorino Teixeira Pires::Jose Teixeira::Ana Martins Pires::Doc.danificado.::
425::1724-09-06::Xavier Araujo Costa::Bernardo Araujo::Angela Costa::::
425::1714-07-30::Xavier Araujo Ferreira::Geraldo Araujo::Ana Ferreira::Jose Araujo Ferreira,Irmao. Proc.21011.::
425::1689-11-02::Xisto Magalhaes Cunha::Francisco Fernandes::Maria Francisca::Doc.danificado.::
426::1898-11-18::Zacarias Rodrigues Mano::Manuel Rodrigues Mano::Felicidade Jesus Tarrio::::
426::1900-11-12::Zacarias Silva Mariz::Luis Silva Mariz::Felicidade Correia Santos::::
426::1785-10-20::Zeferino Antonio Pereira Nobre::Antonio Pereira Nobre::Maria Josefa Garcia::::
425::1809-01-27::Zeferino Antonio Vassalo::Simao Vassalo::Maria Jose::::
For now, I have managed to capture most of the specific groups; however, I am struggling to write an expression for a group that captures the content between a "," and the next occurrence of a ".". An example is the fourth case above, where the content that should be captured is "Irmao".
Here's the regular expression I obtained so far:
(?P<folder>\d*)::(?P<date>\d{4}-\d{2}-\d{2})::(?P<name>.+?)::(?P<father>.+?)::(?P<mother>.+?)::(?P<observations>.*((?:,)?(?P<family>[^\.]*)?))(?:::)?
A: In the named group observations, you can optionally match a comma, then any char except a comma or dot until you match the first dot.
(?P<observations>[^,\n]*(?:,(?P<family>[^,.\n]*)\.)?.*)::
The full pattern:
(?P<folder>\d*)::(?P<date>\d{4}-\d{2}-\d{2})::(?P<name>.+?)::(?P<father>.*?)::(?P<mother>.*?)::(?P<observations>[^,\n]*(?:,(?P<family>[^,.\n]*)\.)?.*)::
Regex demo
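A runnable check of the full pattern against the fourth sample line:

```python
import re

pattern = re.compile(
    r"(?P<folder>\d*)::(?P<date>\d{4}-\d{2}-\d{2})::(?P<name>.+?)::"
    r"(?P<father>.*?)::(?P<mother>.*?)::"
    r"(?P<observations>[^,\n]*(?:,(?P<family>[^,.\n]*)\.)?.*)::"
)

line = ("425::1714-07-30::Xavier Araujo Ferreira::Geraldo Araujo::"
        "Ana Ferreira::Jose Araujo Ferreira,Irmao. Proc.21011.::")
m = pattern.match(line)
print(m.group("family"))        # Irmao
print(m.group("observations"))  # Jose Araujo Ferreira,Irmao. Proc.21011.
```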
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I set the default buttons in the config file for EmulatorJS? How can I set the default buttons in the config file for EmulatorJS?
I know there is a way to set them when the game has loaded, but I don't know how to set them in the configuration.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Adaptive Cards in MS Teams - Header and Notification I am sending an adaptive card to a Microsoft Teams chat using the Graph API. I have been following this in the documentation, but I've changed application/vnd.microsoft.card.thumbnail to application/vnd.microsoft.card.adaptive in my code, and am using a card I designed using https://adaptivecards.io.
I have a couple of questions:
1. When I post a message with an adaptive card, it looks like it's inside another container rather than the whole message just being the card. Where does this header come from? Can I customize it or remove it?
2. When I send an HTML message, the content of the message shows up in the toast notification. Can I set preview or notification content in a card?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Percent change of dataframe I have a DataFrame like this:
Date Close Symbol
0 2018-03-05 44.21 AAPL
1 2018-03-06 44.17 AAPL
2 2018-03-07 43.76 AAPL
3 2018-03-08 44.24 AAPL
4 2018-03-09 44.99 AAPL
5 2018-03-12 45.43 AAPL
6 2018-03-13 44.99 AAPL
7 2018-03-14 44.61 AAPL
8 2018-03-15 44.66 AAPL
9 2018-03-16 44.51 AAPL
...
2506 2023-02-16 50.99 CSCO
2507 2023-02-17 50.77 CSCO
2508 2023-02-21 49.69 CSCO
2509 2023-02-22 49.31 CSCO
2510 2023-02-23 49.21 CSCO
2511 2023-02-24 48.48 CSCO
2512 2023-02-27 48.73 CSCO
2513 2023-02-28 48.42 CSCO
2514 2023-03-01 48.34 CSCO
2515 2023-03-02 48.53 CSCO
I need to take the daily percent change of each Symbol, replace the Close column with that value, and then bring back the result in this Date Close Symbol format.
I have tried groupby following this post, but I can't quite get it to work.
A: If possible, I suggest that you use a new column name for the percentage change. Naming it Close makes things rather confusing.
You can try this:
# or df["Close"] = ...
df["Change"] = df.groupby("Symbol")["Close"].pct_change()
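A runnable version with invented numbers; note the first row of each Symbol becomes NaN because there is no previous close to compare against:

```python
import pandas as pd

df = pd.DataFrame({
    "Date": ["2018-03-05", "2018-03-06", "2018-03-05", "2018-03-06"],
    "Close": [100.0, 110.0, 50.0, 40.0],
    "Symbol": ["AAPL", "AAPL", "CSCO", "CSCO"],
})
# Percent change computed independently within each Symbol group
df["Change"] = df.groupby("Symbol")["Close"].pct_change()
print(df["Change"].tolist())  # approximately [nan, 0.1, nan, -0.2]
```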
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Plotting data with different x-axis on the same graph in Python I have one dataframe that looks a bit like this:
df1
County   2019   2020   2021   2022
Dune     0.20   0.33   0.28   0.56
Clarke   0.22   0.31   0.30   0.63
Adams    0.19   0.30   0.30   0.59
Burke    0.23   0.29   0.29   0.54
James    0.22   0.31   0.23   0.52
and another that looks like this:
df2
Month   2019   2020   2021   2022
1       0.19   0.40   0.28   0.56
2       0.22   0.31   0.99   0.77
3       0.22   0.31   0.53   0.93
4       0.58   0.87   nan    0.62
5       nan    1.11   0.23   1.01
6       nan    0.67   0.55   0.83
7       nan    1.06   0.72   0.92
8       nan    nan    0.88   0.99
9       nan    nan    0.99   1.06
10      nan    nan    nan    1.16
11      nan    nan    nan    1.12
I would like to create a bar chart with the data from df1 with a line chart from df2 in the background. Is this possible? I've tried multiple iterations using matplotlib and seaborn but haven't landed on anything quite like I'm looking for.
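This is possible with a twinned x-axis: draw the df2 lines on a second x-axis that shares the y-axis, then raise the bar axes above it so the lines sit in the background. The sketch below uses truncated stand-ins for df1 and df2 (column names assumed from the tables above); it is one possible approach, not the only one.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line when running interactively
import matplotlib.pyplot as plt
import pandas as pd

# Truncated stand-ins for the real df1/df2 (values from the tables above)
df1 = pd.DataFrame({"County": ["Dune", "Clarke", "Adams", "Burke", "James"],
                    "2022": [0.56, 0.63, 0.59, 0.54, 0.52]})
df2 = pd.DataFrame({"Month": [1, 2, 3, 4, 5],
                    "2022": [0.56, 0.77, 0.93, 0.62, 1.01]})

fig, ax_bars = plt.subplots()
ax_lines = ax_bars.twiny()  # second x-axis, shared y-axis
ax_lines.plot(df2["Month"], df2["2022"], color="gray")
ax_bars.bar(df1["County"], df1["2022"])

# Put the bars in front of the lines: raise the bar axes and make its
# background patch transparent so the line chart stays visible behind it.
ax_bars.set_zorder(ax_lines.get_zorder() + 1)
ax_bars.patch.set_visible(False)

ax_bars.set_xlabel("County")
ax_lines.set_xlabel("Month")
fig.savefig("combined.png")
```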
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: A state in my pomodoro clock doesn't change after certain conditions I am working on building a Pomodoro clock using React, and I have a state variable called onBreak which is initially set to false. My goal is to have onBreak switch to true after the sessionTime is over, at which point displayTime should switch to breakTime. However, I am experiencing a problem where onBreak switches to true and breakTime is displayed, but it never switches back to false and sessionTime is not displayed again.
const handleTimeSwitch = () => {
let second = 1000;
if (!timerOn) {
let interval = setInterval(() => {
setDisplay((currentSecond) => {
if (currentSecond <= 0 && !onBreak) {
handlePlaySound();
setOnBreak(true);
return breakTime;
}
if (currentSecond <= 0 && onBreak) {
setOnBreak(false);
return sessionTime;
}
return currentSecond - 1;
});
}, second);
localStorage.clear();
localStorage.setItem("interval-id", interval);
}
if (timerOn) {
clearInterval(localStorage.getItem("interval-id"));
}
setTimerOn(!timerOn);
};
As you can see, I am a total newbie at this (and at English too; apologies for my grammar mistakes). I really can't find the bug. I hope someone can help me out. Thank you very much!
Here is the full code from CodeSandbox.
I tried to use console.log and the React developer tools to identify the problem as best I can, and I tried to use useEffect to change onBreak back to false after it is set to true, but that doesn't work.
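The behavior described is consistent with a stale closure: the setInterval callback captures onBreak from the render in which it was created, so the `currentSecond <= 0 && onBreak` branch never sees the updated value. A plain-JavaScript sketch of that capture (no React involved; all names here are illustrative):

```javascript
// `onBreak` is frozen at creation time, just like state read inside a
// setInterval callback created in an earlier render.
function makeTicker(onBreak) {
  return function tick(currentSecond) {
    if (currentSecond <= 0 && !onBreak) return "switch-to-break";
    if (currentSecond <= 0 && onBreak) return "switch-to-session";
    return "counting";
  };
}

const tick = makeTicker(false); // created while onBreak === false
// Even after the app's onBreak state later becomes true, this closure
// still sees false, so it can never return "switch-to-session".
console.log(tick(0)); // switch-to-break
console.log(tick(0)); // switch-to-break
```

If this is the cause, common fixes are to keep the break flag in a ref, or to derive the break/session switch from values passed into a functional state update instead of reading onBreak directly inside the interval callback.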
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633178",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Binding raw socket to address with TCP protocol kills process Using the socket2 lib (https://crates.io/crates/socket2), simply binding a raw socket created with the TCP protocol to an address kills the process. If I instead just create the socket and try to send over it, I get an OS error: An invalid argument was supplied. (os error 10022)
Assuming this is not a crate issue but instead a Windows issue, what is its cause?
use socket2::{Domain, Protocol, SockAddr, Socket, Type};
use std::{env, net::SocketAddr, process::{Command, Stdio}, thread, time::Duration, io::{BufReader, BufRead}};
fn main() -> std::io::Result<()> {
let mut args: Vec<String> = std::env::args().collect();
args.remove(0);
if args.len() == 0 {
let file_path = env::current_exe().unwrap();
let file_path = file_path.to_str().unwrap();
let (command, cmd_args) = (
"powershell.exe",
vec![
String::from("-NoProfile"),
String::from("-NoExit"),
String::from("-Command"),
String::from("Start-Process"),
String::from(file_path),
String::from("-Verb"),
String::from("RunAs"),
String::from("-ArgumentList"),
String::from("'bind'"),
],
);
let mut cmd = Command::new(command)
.args(cmd_args)
.stderr(Stdio::piped())
.spawn()
.unwrap();
let err = cmd.stderr.take().unwrap();
let reader = BufReader::new(err);
for line in reader.lines() {
println!("{}", line.unwrap());
}
println!("DONE");
} else {
thread::sleep(Duration::from_secs(5));
let address = SockAddr::from(SocketAddr::new([0, 0, 0, 0].into(), 0));
let send_socket = Socket::new(Domain::IPV4, Type::RAW, Some(Protocol::TCP))?;
send_socket.bind(&address)?;
loop {}
}
Ok(())
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Java switch case throwing NullPointerException I have an enum declared as follows -
public enum Status {
REQ ("URL1"),
NOT ("URL2"),
GET ("URL3");
String getURL;
Status(String getURL) {
this.getURL = getURL;
}
}
And a field in my class:
private Status status;
I have a function in order to retrieve the URL based on the enum type as follows -
public String viewURL() {
switch (status) {
case REQ:
return REQ.getURL;
case NOT:
return NOT.getURL;
case GET:
return GET.getURL;
}
return null;
}
I'm encountering a NullPointerException in this method when status is null.
However when I implement the same functionality using if-statements it works fine -
public String viewURL() {
if (status == REQ) {
return REQ.getURL;
}
if (status == NOT) {
return NOT.getURL;
}
if (status == GET) {
return GET.getURL;
}
return null;
}
Not able to understand where I'm going wrong. Any help would be really appreciated!
Any help on re-factoring also is appreciated!
A: This is an ideal use case for Optional:
public String viewURL() {
return Optional.ofNullable(status)
        .map(s -> s.getURL) // only executes if previous step returns non-null
.orElse(null); // executes if null returned from any step
}
A: If the viewURL method has access to the status variable, you can use this code:
public String viewURL() {
if (status != null)
return status.getURL;
return null;
}
I don't think you have to use a switch statement because we end up using the same code in every case. That is, we return status.getURL in every case.
The only requirement here is that every instance of the Status enum has a URL.
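For completeness, the reason the switch form throws while the if-chain does not: a switch on an enum must dereference status (it effectively calls status.ordinal() to pick a case), so a null status blows up before any case is reached, whereas == comparisons accept null. A small self-contained sketch (the class name is made up):

```java
public class StatusDemo {
    enum Status {
        REQ("URL1"), NOT("URL2"), GET("URL3");
        final String getURL;
        Status(String getURL) { this.getURL = getURL; }
    }

    // Null-safe lookup: one explicit null check, then read the field.
    static String viewURL(Status status) {
        return status == null ? null : status.getURL;
    }

    public static void main(String[] args) {
        System.out.println(viewURL(Status.REQ)); // URL1
        System.out.println(viewURL(null));       // null
    }
}
```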
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: A somewhat complex problem of generic information loss I have a class, 'C'. I want its generic 'T' to be inferred from the instantiation parameters, and its generic 'G' to be derived from 'T' (the 'TypeTo' part here is only for demonstration and has no real meaning).
type Type = 't1' | 't2'
type TypeTo<T extends Type> = T
class C<T extends Type = Type, G = TypeTo<T>> {
g?: T
constructor(readonly p: { t: T }) {
}
}
Secondly, I have a function 'F' that takes a 'Record<string, C>' as its parameter and determines the return type from the parameter type (the 'F' part here is likewise only for demonstration and has no real meaning).
function F<T extends Record<string, C<any, any>>>(p: T) {
return 0 as unknown as { [K in keyof T]: Exclude<T[K]['g'], void> }
}
Next, type test
// A<"t1", "t1">
const a = new C({ t: 't1' } as const)
// {a1: "t1"}
const b = F({ a1: a })
Everything works here, but if I don't want to define the extra 'a' variable and instead use the following form, the generic information is lost:
// {a1: any}
const c = F({ a1: new C({ t: 't1' } as const) })
Several type tools are combined here, and the problem may not occur if any one of them is removed. I am designing a fairly novel model, and for this reason I have tried dozens of ways of writing it; the other approaches have problems of their own. I have been thinking about this for a week, and the code above is a minimal reproduction of the most feasible solution I have found so far.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: ImportError: cannot import name '_check_weights' from 'sklearn.neighbors._base' I am trying to use MissForest as a method for handling missing values in tabular data.
import sklearn
print(sklearn.__version__)
->1.2.1
import sklearn.neighbors._base
import sys
sys.modules['sklearn.neighbors.base'] = sklearn.neighbors._base
!pip install missingpy
from missingpy import MissForest
It was working fine until now, but since yesterday, the following error message has appeared.
ImportError: cannot import name '_check_weights' from 'sklearn.neighbors._base'
I would like to know how to deal with this error.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is there any trick to make the code only run when hydrating html There are three situations with React SSR:
*
*Server-side rendering
*Client-side Hydrating
*Normal client-side rendering
Just like using typeof window === 'undefined' to make code run only on the server side, is there any trick to make code run only in the second situation?
const useSomething = () => {
if (typeof window === 'undefined') {
// when ssr
}
if ( ??? ) {
// when client-side Hydrating
}
// client-side rendering
}
A: When dealing with SSR in React applications, there are three situations (like you mentioned) where the code can be executed: during server-side rendering, during client-side hydration, and during normal client-side rendering.
*
*During server-side rendering, the JavaScript code is executed on the server, where window is not defined, that's why typeof window === 'undefined' works to detect that.
*When the application is loaded on the client, the server-generated HTML is rendered in the browser, and the JavaScript code is executed again. This time, window is defined, but the application's state needs to be hydrated with the data passed from the server. A window.__INITIAL_STATE__ object (the name is a common convention, set by your own server code) contains the initial state values that were passed from the server.
So, to detect whether the code is running during client-side hydration, you can check if window.__INITIAL_STATE__ is defined.
Here's how you can check for that
const useSomething = () => {
if (typeof window === 'undefined') {
// We're in server-side rendering
} else if (window.__INITIAL_STATE__) {
// We're in client-side hydration
} else {
// We're in normal client-side rendering
  }
};
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Webpack is not bundling a dependency I have a Typescript library which uses Axios as following:
import axios from 'axios';
export class Connector { readonly axios = axios.create(); ... }
After publishing, I import the library in a new React project created with create-react-app --template=typescript. When I run the project in the browser and instantiate a new connector such as:
import { Connector } from 'mylib';
const connector = new Connector();
I get an Uncaught TypeError: Cannot read properties of undefined at the axios.create() call. Inspecting in the browser, it seems webpack is not bundling anything related to Axios (which is a dependency from mylib).
Now, If I do import Axios directly in the App.tsx file and call axios.create(), this last call works (but the call done inside the mylib library still does not work).
Basically the axios object is undefined inside mylib, and it seems webpack is producing an import for node_modules/axios/dist/browser/axios.cjs, which does not exist in the bundle for some reason.
What is happening here?
Using webpack 5.
A: This is a bug in Axios 1.1.3+, which is still broken: https://github.com/axios/axios/issues/5154
Downgrade to 1.1.2 and stuff works.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to display a black background with opacity 0.5 behind a new stage I want to display a black background behind the new main stage, but it doesn't show correctly. When you open a new stage, a black background with opacity (for example 0.5) should appear behind it, and when you close the stage, the background should close as well. That's it.
This is my code, and the picture shows the effect I want to achieve:
@FXML
public void showNewStage(ActionEvent actionEvent) throws IOException {
Parent parent = FXMLLoader.load(Objects.requireNonNull(getClass().getResource("newStage.fxml")));
Pane pane = new Pane();
pane.setOpacity(0.5);
pane.setPrefWidth(1000);
pane.maxWidth(1000);
pane.maxHeight(700); // this should be the new black background with opacity 0.5
pane.setPrefHeight(700);
pane.setStyle("-fx-background-color: black;");
Scene scene = new Scene(parent,983,684);
Stage stage = new Stage();
stage.setScene(scene);
paneAnchor.getChildren().add(pane);
stage.show();
}
How should I write the code to show and close it properly?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: Creating an android app using mobile phone I have designed a couple of Android apps of varying levels of complexity, but since my job puts me on the road most days, I am left with no better options than a laptop, tablet, and phone.
Now, the first two are out, due to several unrelated issues.
This leaves me with an Android phone that has only 2GB of RAM. The solution I'm using is Google Colab to write the APK in Python. Because I have had repeated bugs appear, each one following the one I just fixed, I believe there is something inherently wrong with my approach altogether; that's why I am not going to explain my previous or current problems until I can get some feedback on the efficacy of this approach to creating an APK using a mobile device.
Seemingly unconnected issues, one after the other: as I eliminated one problem, a completely different one developed shortly thereafter, from errors in compiling to APIs that suddenly ignore their own configuration files.
A detailed description of all the issues would be 6 or 7 times the text size of this question.
Does anyone know of anything FUNDAMENTAL that I'm doing wrong here?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: When navigating Jetpack Compose there is a recomposition
composable(Screens.Recommendation.route) {
BackHandler(true) {}
RecomendationScreen(
onNavigateToStatistics = { navController.navigate(Screens.Statistics.route) }
)
}
I pass a function in for navigation; when it executes, a recomposition of RecomendationScreen happens. Calling this method does not change any state that would cause a recomposition. The question is whether this is expected out-of-the-box behavior, or whether I should keep looking for the problem.
fun RecomendationScreen(onNavigateToStatistics: () -> Unit) {
    RecomendationContent(onNavigateToStatistics)
}
I tried to fix it by looking through the documentation, but I couldn't find an answer to my question; I also searched for the source of the recomposition, but I couldn't find that either.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Azure DevOps Extension - How to resolve HostAuthorizationNotFound error I get the following error with the getWorkItem call. I currently have the scope VSO.Work_Full. It is already published.
The code looks like this:
/// <reference types="vss-web-extension-sdk" />
import TFS_Wit_WebApi = require('TFS/WorkItemTracking/RestClient');
let client = TFS_Wit_WebApi.getClient();
client.getWorkItem(1).then(wi=>{
console.log(wi.id);
})
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: System.Reflection.Assembly.LoadFile(), AssemblyResolve event and handling 3rd party dependencies I have this class which will hopefully let me essentially hot-load my .dll's, so that I can push changes to them while my coworkers are using them, without needing them to totally restart the application to begin seeing the effects.
I had been using .LoadFrom(), but found it preferred using already-loaded versions of the files, whereas .LoadFile() takes the latest. The catch being that any dependencies the loaded file needs have to be loaded as well. But I've managed to do this by listening to the AppDomain.AssemblyResolve event, and it seems to work for any of the custom .dll's.
But we also have third-party dependencies (Autodesk Inventor 2022 .dll's) which my Visual Studio projects/solution don't generate automatically in my build folder (despite a couple I have had to include), and which I can't find anywhere on my machine. I've tried using .Load() for these, but it doesn't find anything (again, I guess, because they don't seem to be actually on my machine; the first one I consistently encounter is "Autodesk.iLogic.Core.resources.dll").
A 'FileNotFoundException' (not the one in the class) is thrown when the script instantiating the class runs within Inventor, so maybe my AppDomain(?) is the same as Inventor's, and by using .LoadFile() I'm obligated to resolve Inventor's dependencies too, but I really have no idea.
If this is some kind of problem that can't be fixed, are there any other strategies that might work to let me work as though the .dll's aren't locked, while remaining agnostic to the version number?
Imports System.IO
Imports System.Reflection
Public NotInheritable Class LibInterface
Private Const sBuildDir As String = "F:\Lib\Build"
Private Const sReleaseRootDir As String = "F:\Lib\Releases"
Private _oAssembly As System.Reflection.Assembly = Nothing
Public Property oAssembly() As System.Reflection.Assembly
Get
Return _oAssembly
End Get
Protected Set(oVal As System.Reflection.Assembly)
_oAssembly = oVal
End Set
End Property
Private _oType As Type = Nothing
Public Property oType() As Type
Get
Return _oType
End Get
Protected Set(oVal As Type)
_oType = oVal
End Set
End Property
Private Class Initialiser
Public Sub New(
ByVal oParent As LibInterface
)
AddHandler AppDomain.CurrentDomain.AssemblyResolve, AddressOf oParent.HandleResolve
End Sub
End Class
Private _initialiser As New Initialiser(Me)
Public Function HandleResolve(
ByVal sender As Object _
, ByVal args As ResolveEventArgs
) As System.Reflection.Assembly
        Return _GetAssembly(Split(args.Name, ",")(0) + ".dll")
End Function
Private Function _GetAssembly(
ByVal sFileName As String
) As System.Reflection.Assembly
Return System.Reflection.Assembly.LoadFile(_SearchLastBuild(sFileName))
End Function
Private Function _SearchLastBuild(
ByVal sFileName As String
) As String
Dim sPathName = System.IO.Path.Combine(sBuildDir, sFileName)
If Not File.Exists(sPathName) Then
Throw New FileNotFoundException(".dll not found", sPathName)
End If
Dim oVersionInfo As FileVersionInfo = FileVersionInfo.GetVersionInfo(sPathName)
Dim sReleaseDir As String = System.IO.Path.Combine(sReleaseRootDir, oVersionInfo.FileVersion)
Dim sReleasePathName = System.IO.Path.Combine(sReleaseDir, sFileName)
If File.Exists(sReleasePathName) Then
Return sReleasePathName
End If
Directory.CreateDirectory(sReleaseDir)
_CopyFilesRecursively(sBuildDir, sReleaseDir)
Return sReleasePathName
End Function
Private Sub _CopyFilesRecursively(
ByVal sSrcPath As String _
, ByVal sDestPath As String
)
For Each sDirPath As String In Directory.GetDirectories(sSrcPath, "*", SearchOption.AllDirectories)
Directory.CreateDirectory(sDirPath.Replace(sSrcPath, sDestPath))
Next
For Each sOldPath As String In Directory.GetFiles(sSrcPath, "*.*", SearchOption.AllDirectories)
File.Copy(sOldPath, sOldPath.Replace(sSrcPath, sDestPath), True)
Next
End Sub
Public Sub New(
ByVal sFileName As String _
, ByVal sType As String
)
_oAssembly = _GetAssembly(sFileName)
_oType = _oAssembly.GetType(sType, False)
End Sub
End Class
Any advice or guidance is much appreciated
Thanks!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Removing Duplicate Values in MultiComboBox Hi guys, I am trying to learn JSON binding in Fiori. I want to remove duplicate values in this MultiComboBox.
My XML section is here:
<fb:FilterItem name="Side" label="Side">
<fb:control>
<MultiComboBox selectionChange="handleSelectionChange" selectionFinish="handleSelectionFinish" width="" placeholder="Choose Side." required="true" items="{Wolfteam>/Characters}" mergeDuplicates ="true">
<core:Item key="{Wolfteam>Side}" text="{Wolfteam>Side}" />
</MultiComboBox>
</fb:control>
</fb:FilterItem>
My JSON is here
{
"Characters": [{
"Name": "Pedro Gomez",
"Health": "200",
"Side": "Red"
},
{
"Name": "Angela Mao",
"Health": "210",
"Side": "Blue"
},
{
"Name": "Adriana Tenorio",
"Health": "230",
"Side": "Red"
},
{
"Name": "Pedro Gomez",
"Health": "240",
"Side": "Blue"
}
]
}
I tried ChatGPT and the SAP forums but didn't find any solutions.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Using argparse to run associated functions I use argparse to specify multiple command line arguments. How would I run a particular function that is associated with an argparse command line input?
Consider:
py test.py -s google.com
As the argument -s has been entered, a specific function associated only with the -s argument should be run.
A: Here is a very simple example of using argparse options to trigger specific functions.
After adding the -w and -s as arguments, you can test the returned Namespace object to see which arguments were found and call the appropriate functions accordingly.
main.py
import argparse
def sfunc():
print("-s argument was found")
def wfunc():
print("-w argument was found")
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("-w", action="store_true")
parser.add_argument("-s", action="store_true")
ns = parser.parse_args()
if ns.s:
sfunc()
if ns.w:
wfunc()
Here is what the results are from using each of the arguments
/>python main.py -s
-s argument was found
/>python main.py -w
-w argument was found
/>python main.py -w -s
-s argument was found
-w argument was found
There are many other ways to achieve these same results, this is just a very simple example.
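One of those other ways, sketched below with made-up command names, is the subparser idiom: set_defaults(func=...) attaches the handler directly to each command, so dispatch needs no if-chain at all:

```python
import argparse

def sfunc(target):
    # handler associated with the hypothetical "scan" command
    return f"scanning {target}"

def wfunc():
    return "watch handler ran"

def build_parser():
    parser = argparse.ArgumentParser()
    sub = parser.add_subparsers(dest="command")
    scan = sub.add_parser("scan")            # e.g. py test.py scan google.com
    scan.add_argument("target")
    scan.set_defaults(func=lambda args: sfunc(args.target))
    watch = sub.add_parser("watch")
    watch.set_defaults(func=lambda args: wfunc())
    return parser

def main(argv=None):
    args = build_parser().parse_args(argv)
    if hasattr(args, "func"):                # only set when a command was given
        return args.func(args)
    return None

print(main(["scan", "google.com"]))  # scanning google.com
```

The same pattern extends to flag-style options by mapping each parsed Namespace attribute to its handler, as in the answer above.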
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: How can I create an email alert for Windows events with ID 0, using the message as a filter I have an event with event ID 0, which makes creating an email alert based on the event ID infeasible. How can I create an email alert using the message content (in the EventData/Data attributes)?
Can I use PowerShell or some other method?
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="Test Portal"/>
<EventID Qualifiers="0">0</EventID>
<Level>2</Level>
<Task>0</Task>
<Keywords>0x80000000000000</Keywords>
<TimeCreated SystemTime="2023-02-03"/>
<EventRecordID>336</EventRecordID>
<Channel>Application</Channel>
<Computer></Computer>
<Security/>
</System>
<EventData>
<Data>Message: Your Service is not available.</Data>
</EventData>
</Event>
A: You can use PowerShell to find and parse the Windows Eventlog like below.
In this case, it will look for events in the 'Application' log with an ID of 0
For demo I have limited the search to a maximum of 50 items, but you can set your own value of course.
$result = Get-WinEvent -FilterHashtable @{LogName='Application';ID=0} -MaxEvents 50 | ForEach-Object {
# convert the event to XML and grab the Event node
$eventXml = ([xml]$_.ToXml()).Event
# output the values from the XML representation
[PsCustomObject]@{
Provider = $eventXml.System.Provider.Name
Message = $eventXml.EventData.Data #.'#text'
Date = [DateTime]$eventXml.System.TimeCreated.SystemTime
}
}
Now in variable $result you have objects you can use in your email.
When output on screen you will see something like
$result | Format-Table -AutoSize
Provider Message Date
-------- ------- ----
edgeupdate Service stopped 4-3-2023 11:38:52
gupdate Service stopped 4-3-2023 11:38:32
Test Portal Message: Your Service is not available. 4-3-2023 11:38:32
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: SpotifyAPI-NET Giving 'invalid_grant' When Authorizing I'm trying to set up an application using C# on AWS Lambda for the backend, and a React frontend (Currently just a basic locally hosted server).
From the front-end, I begin the OAuth process of requesting an access code from Spotify, then when I have that code I call my backend server.com/api/authenticate?code=SPOTIFYCODE
My backend will then attempt to use the clientID, clientSecret and the code passed in from the frontend to retrieve an access token from Spotify.
I would like to use the SpotifyAPI-NET sdk since that does a lot of the hard work, however it always seems to throw a SpotifyAPI.Web.APIException error with invalid_grant. I would normally assume the request parameters were wrong, but it works fine making the request manually so I'm stumped.
I can successfully grab an access token with the following:
// GET api/authenticate
[HttpGet]
public async Task<ActionResult> Authenticate(string code)
{
// Exchange the authorization code for an access token
var parameters = new Dictionary<string, string>
{
{ "grant_type", "authorization_code" },
{ "code", code },
{ "redirect_uri", RedirectUri },
{ "client_id", ClientId },
{ "client_secret", ClientSecret }
};
var content = new FormUrlEncodedContent(parameters);
var response = await _httpClient.PostAsync(TokenEndpoint, content);
if (!response.IsSuccessStatusCode) return BadRequest("Failed to get access token.");
var responseContent = await response.Content.ReadAsStringAsync();
var tokenResponse = JsonSerializer.Deserialize<TokenResponse>(responseContent);
var accessToken = tokenResponse?.AccessToken;
// Create a new cookie with the access token
if (string.IsNullOrEmpty(accessToken)) return BadRequest("Failed to get access token. Access token is null or empty.");
var cookieOptions = new CookieOptions
{
HttpOnly = true,
Secure = true,
Expires = DateTimeOffset.Now.AddDays(1),
SameSite = SameSiteMode.None,
Domain = Request.Host.Value,
Path = "/api",
};
Response.Cookies.Append("access_token", accessToken, cookieOptions);
return Ok();
}
However if I try to use the same information, but using the sdk following these instructions... It fails.
// GET api/authenticate
[HttpGet]
public async Task<ActionResult> Authenticate(string code)
{
var response = await new OAuthClient().RequestToken(
new AuthorizationCodeTokenRequest(
ClientId,
ClientSecret,
code,
new Uri(RedirectUri))
);
var accessToken = response.AccessToken;
// Create a new cookie with the access token
if (string.IsNullOrEmpty(accessToken)) return BadRequest("Failed to get access token. Access token is null or empty.");
var cookieOptions = new CookieOptions
{
HttpOnly = true,
Secure = true,
Expires = DateTimeOffset.Now.AddDays(1),
SameSite = SameSiteMode.None,
Domain = Request.Host.Value,
Path = "/api",
};
Response.Cookies.Append("access_token", accessToken, cookieOptions);
return Ok();
}
A: Probably this is due to redirect_uri. In your working code you pass a string, but it becomes a Uri instance when you use the SDK.
In the source code of the SDK it converts it to string in this line:
new KeyValuePair<string?, string?>("redirect_uri", request.RedirectUri.ToString())
And when you do this with a base URL, ToString() appends a trailing slash at the end. If your original redirect_uri does not contain a trailing slash, they will mismatch and you get this error.
You can add a trailing slash to your original redirect_uri (when you redirect user to spotify service) to test this case.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to transfer data from worksheet to worksheet while preserving hyperlinks I have two separate Google worksheets. I want to simply transfer the data from the source to the destination. I've always used the code below to do this and it has always worked well:
function dataImport2() {
//@NotOnlyCurrentDoc
var values = SpreadsheetApp.openById('id').
getSheetByName('All').getRange('c1:m').getValues();
SpreadsheetApp.getActive().getSheetByName('All').
getRange(1,1,values.length,values[0].length).setValues(values);
}
In this particular case, the data in the source is a mix of strings and number values, some of which have HYPERLINKS attached.
I realized, this code does not keep the hyperlinks on the cells when transferred.
I'm trying to build code that would transfer all of the info but also keep the links. Doing some reading, it seems like getRichTextValues would help, but I haven't been able to make it work.
This is where I'm at now:
function dataImport() {
var sourceSheet = SpreadsheetApp.openById('id').getSheetByName('All');
var destinationSheet = SpreadsheetApp.getActive().getSheetByName('All');
var range = sourceSheet.getRange('C1:M');
var formulas = range.getFormulas();
var richTextValues = range.getRichTextValues();
for (var i = 0; i < formulas.length; i++) {
for (var j = 0; j < formulas[0].length; j++) {
var formula = formulas[i][j];
var linkUrl = formula.match(/"(.*?)"/);
if (linkUrl !== null) {
var richTextValue = richTextValues[i][j];
richTextValue.setLinkUrl(linkUrl[1]);
destinationSheet.getRange(i+1, j+1).setRichTextValue(richTextValue);
} else {
destinationSheet.getRange(i+1, j+1).setValue(formulas[i][j]);
}
}
}
}
Any thoughts as to how I could accomplish this transfer, while preserving hyperlinks?
A: In your situation, how about the following 2 patterns?
Pattern 1:
In this pattern, the values are copied with setRichTextValues and "Method: spreadsheets.values.update" of the Sheets API. So, please enable the Sheets API at Advanced Google services. The reason for using the Sheets API is to reduce the process cost by avoiding calling setValue and/or setRichTextValue in a loop.
function dataImport() {
var sourceSheet = SpreadsheetApp.openById('id').getSheetByName('All');
var destinationSS = SpreadsheetApp.getActive();
var destinationSheet = destinationSS.getSheetByName('All');
var range = sourceSheet.getRange('C1:M' + sourceSheet.getLastRow());
var values = range.getValues();
var formulas = range.getFormulas();
var richTextValues = range.getRichTextValues();
destinationSheet.getRange(1, 1, richTextValues.length, richTextValues[0].length).setRichTextValues(richTextValues);
Sheets.Spreadsheets.Values.update({ values: values.map((r, i) => r.map((c, j) => richTextValues[i][j].getRuns().some(e => e.getLinkUrl()) ? null : (formulas[i][j] || c))) }, destinationSS.getId(), "All", { valueInputOption: "USER_ENTERED" });
}
Pattern 2:
In this pattern, copyTo is used.
function dataImport() {
var sourceSheet = SpreadsheetApp.openById('id').getSheetByName('All');
var destinationSS = SpreadsheetApp.getActive();
var temp = sourceSheet.copyTo(destinationSS);
var range = temp.getRange('C1:M' + temp.getLastRow());
var destinationSheet = destinationSS.getSheetByName('All');
range.copyTo(destinationSheet.getRange("A1"));
destinationSS.deleteSheet(temp);
}
References:
*
*Method: spreadsheets.values.update
*copyTo(spreadsheet) of Class Sheet
*copyTo(destination) of Class Range
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Analytically calculate integral of a function only under a given condition with Python-SymPy I am trying to analytically calculate the integral of a function using this code:
from sympy import sqrt, symbols, integrate, pi
x=symbols('x',real=True, positive=True)
rho, r=symbols('rho r', real=True, positive=True)
f=sqrt(rho**2-(x-r)**2)
integrate(f,(x,r,r+rho)).simplify()
SymPy tells me the result is the one in the picture:
however I want to evaluate only the second one. How can I do that?
A: According to this comment, you can pass inequalities as an assumption to refine:
print(refine(integrate(f,(x,r,r+rho)).simplify(), (r - x)**2/rho**2 > 1))
# Integral(I*sqrt(r**2 - 2*r*x - rho**2 + x**2), (x, r, r + rho))
(I'm not sure why this is undocumented)
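The relational-assumption form is indeed sparsely documented, but the mechanism itself can be checked with a simpler, well-documented case:

```python
from sympy import Symbol, sqrt, refine, Q

x = Symbol('x')
# refine() rewrites an expression under an assumption; with x assumed
# positive, sqrt(x**2) collapses to x (with only Q.real(x) it would
# instead become Abs(x)).
print(refine(sqrt(x**2), Q.positive(x)))  # x
```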
A: Here is what SymPy gives:
In [10]: i = integrate(f,(x,r,r+rho)).simplify()
In [11]: i
Out[11]:
r + ρ
⌠
⎮ ⎧ 2 2 2 2 4 2 2
⎮ ⎪ ⅈ⋅ρ ⋅(r - x) ⅈ⋅ρ ⅈ⋅ρ ⅈ⋅(r - x) 3⋅ⅈ⋅(r - x) (r - x)
⎮ ⎪─────────────────────── - ─────────────────────────────── - ─────────────────────── - ─────────────────────── + ─────────────────────── for ──────── > 1
⎮ ⎪ 3/2 ____________ ____________ __________________ 3/2 __________________ 2
⎮ ⎪ ⎛ 2 2⎞ 2⋅╲╱ -r - ρ + x ⋅╲╱ -r + ρ + x ╱ 2 2 ⎛ 2 2⎞ ╱ 2 2 ρ
⎮ ⎨2⋅⎝- ρ + (-r + x) ⎠ 2⋅╲╱ - ρ + (-r + x) 2⋅⎝- ρ + (-r + x) ⎠ 2⋅╲╱ - ρ + (-r + x) dx
⎮ ⎪
⎮ ⎪ _______________
⎮ ⎪ ╱ 2 2
⎮ ⎪ ╲╱ ρ - (r - x) otherwise
⎮ ⎩
⌡
r
You say that you want the second case. In general we can extract cases from a Piecewise using .args but we need to bring the Piecewise out to top lvel using piecewise_fold first:
In [12]: piecewise_fold(i)
Out[12]:
⎧r + ρ
⎪ ⌠
⎪ ⎮ ⎛ 2 2 2 2 4 2 ⎞ 2
⎪ ⎮ ⎜ ⅈ⋅ρ ⋅(r - x) ⅈ⋅ρ ⅈ⋅ρ ⅈ⋅(r - x) 3⋅ⅈ⋅(r - x) ⎟ (r - x)
⎪ ⎮ ⎜─────────────────────── - ─────────────────────────────── - ─────────────────────── - ─────────────────────── + ───────────────────────⎟ dx for ──────── > 1
⎪ ⎮ ⎜ 3/2 ____________ ____________ __________________ 3/2 __________________⎟ 2
⎪ ⎮ ⎜ ⎛ 2 2⎞ 2⋅╲╱ -r - ρ + x ⋅╲╱ -r + ρ + x ╱ 2 2 ⎛ 2 2⎞ ╱ 2 2 ⎟ ρ
⎪ ⎮ ⎝2⋅⎝- ρ + (-r + x) ⎠ 2⋅╲╱ - ρ + (-r + x) 2⋅⎝- ρ + (-r + x) ⎠ 2⋅╲╱ - ρ + (-r + x) ⎠
⎪ ⌡
⎨ r
⎪
⎪ r + ρ
⎪ ⌠
⎪ ⎮ _______________
⎪ ⎮ ╱ 2 2
⎪ ⎮ ╲╱ ρ - (r - x) dx otherwise
⎪ ⌡
⎪ r
⎩
In [13]: piecewise_fold(i).args[0][0]
Out[13]:
r + ρ
⌠
⎮ ⎛ 2 2 2 2 4 2 ⎞
⎮ ⎜ ⅈ⋅ρ ⋅(r - x) ⅈ⋅ρ ⅈ⋅ρ ⅈ⋅(r - x) 3⋅ⅈ⋅(r - x) ⎟
⎮ ⎜─────────────────────── - ─────────────────────────────── - ─────────────────────── - ─────────────────────── + ───────────────────────⎟ dx
⎮ ⎜ 3/2 ____________ ____________ __________________ 3/2 __________________⎟
⎮ ⎜ ⎛ 2 2⎞ 2⋅╲╱ -r - ρ + x ⋅╲╱ -r + ρ + x ╱ 2 2 ⎛ 2 2⎞ ╱ 2 2 ⎟
⎮ ⎝2⋅⎝- ρ + (-r + x) ⎠ 2⋅╲╱ - ρ + (-r + x) 2⋅⎝- ρ + (-r + x) ⎠ 2⋅╲╱ - ρ + (-r + x) ⎠
⌡
r
In [14]: piecewise_fold(i).args[1][0]
Out[14]:
r + ρ
⌠
⎮ _______________
⎮ ╱ 2 2
⎮ ╲╱ ρ - (r - x) dx
⌡
r
That seems to be what you wanted. However it is just an unevaluated integral. If that is actually what you want then you should use Integral rather than integrate in the first place:
In [15]: Integral(f,(x,r,r+rho))
Out[15]:
r + ρ
⌠
⎮ ________________
⎮ ╱ 2 2
⎮ ╲╱ ρ - (-r + x) dx
⌡
r
The difference between integrate and Integral is that Integral just represents the integral symbolically whereas integrate will attempt to compute the integral ideally giving an expression that does not involve integrals:
In [16]: Integral(x, x)
Out[16]:
⌠
⎮ x dx
⌡
In [17]: integrate(x, x)
Out[17]:
2
x
──
2
In [18]: Integral(x, x).doit()
Out[18]:
2
x
──
2
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Need help on The kernel appears to have died. It will restart automatically When I import TensorFlow, it says "The kernel appears to have died. It will restart automatically."
import tensorflow as tf
print(tf.__version__)
Need some suggestions.
How can I solve this problem? I am using a Mac.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: Why are my Scala+Spark app Unit Tests run so much faster in IntelliJ vs a regular mvn clean test run? I have a whole set of UTs that perform quite a lot of Spark operations. I've noticed that when I run the test set in InteliiJ IDEA, it finishes in about 10 minutes. When I proceed to build using maven, the process takes almost an hour. If I just run the maven test goal, it takes a over 50 minutes, so most of the time is in the UTs execution.
I compared the execution logs between IntelliJ vs Maven execution and they are all the same (obviously the order of the parallel operations is not), so the execution is functionally equivalent. I'm not sure what to do to find what's causing this huge performance drop when UTs run in Maven.
An example of the time differences using the time reported in logs (grouping and discarding identical lines/times) in one of the test.
Maven: 102 seconds
12:34:50 [ScalaTest...
12:35:01 [ScalaTest...
12:35:19 [Executor...
12:35:20 [ScalaTest...
12:35:25 [ScalaTest...
12:36:06 [Executor...
12:36:08 [ScalaTest...
12:36:16 [ScalaTest...
12:36:24 [ScalaTest...
12:36:32 [ScalaTest...
IntelliJ: 26 seconds
12:49:53 [ScalaTest...
12:49:58 [ScalaTest...
12:50:04 [Executor...
12:50:04 [ScalaTest...
12:50:07 [ScalaTest...
12:50:13 [Executor...
12:50:14 [ScalaTest...
12:50:16 [ScalaTest...
12:50:18 [ScalaTest...
12:50:19 [ScalaTest...
I see the same pattern in every other test where Spark operations are performed. Sometimes the time difference is almost 10x between environments and averaged between all tests, it's around 5x. Seems like a lot of the waits happen when switching to parallel execution in nodes. Any idea how to identify the configuration settings that may cause this? Any spark setting I can apply to have both environments running with similar processing times?
I have already tried with reducing partitions and setting the spark.sql.shuffle.partitions to low values (1, 2, 3...), but I don't see any difference.
Thanks!
A: There could be several reasons:
*
*Parallel test execution: IntelliJ can run unit tests in parallel by default, whereas `mvn clean test` runs tests sequentially by default.
*Different test runner: IntelliJ uses its own test runner, while Maven uses the Surefire test runner. The performance of these test runners may differ.
*Caching: IntelliJ can cache compiled classes and dependencies, which can speed up test execution, while Maven doesn't cache compiled classes.
*There may be configuration differences between your IntelliJ test environment and your Maven test environment.
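If the first point is the cause, the gap can sometimes be closed by enabling parallel execution in Maven as well. A sketch using standard Surefire options (plugin coordinates and option names as in the Surefire documentation; tune threadCount for your machine, and note that if the project runs tests through the scalatest-maven-plugin instead, that plugin's own parallel settings apply):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- run whole test classes concurrently -->
    <parallel>classes</parallel>
    <threadCount>4</threadCount>
  </configuration>
</plugin>
```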
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Firebase functions 2nd generation on Emulator I'm trying to run 2nd generation firebase function onCall with Emulator.
Based on the official documentation, this is the solution:
import { getApp } from "firebase/app";
import { getFunctions, connectFunctionsEmulator } from "firebase/functions";
const functions = getFunctions(getApp());
connectFunctionsEmulator(functions, "localhost", 5001);
but in my case, it's not working. here is my code:
const firebaseConfig = {
  apiKey: '',
  authDomain: '',
  projectId: '',
  storageBucket: '',
  messagingSenderId: '',
  appId: '',
  measurementId: '',
};

// Initialize Firebase
export const app = initializeApp(firebaseConfig);
export const functions = getFunctions(app);
export const firestore = getFirestore(app);
export const auth = getAuth(app);

if (process.env.NODE_ENV === 'development') {
  connectFirestoreEmulator(firestore, 'localhost', 8080);
  connectFunctionsEmulator(functions, 'localhost', 5001);
  connectAuthEmulator(auth, 'http://localhost:9099', { disableWarnings: true });
}

export const helloWorld = httpsCallableFromURL<unknown, THelloResponse>(
  functions,
  'https://hello-cp54mvsyeq-uc.a.run.app'
);
with httpsCallable it's ok.
"firebase": "^9.17.1",
"firebase-admin": "^11.3.0",
"firebase-functions": "^4.1.0",
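A plausible explanation (not confirmed in the question): httpsCallableFromURL always calls the exact URL it is given, so pointing it at the deployed run.app address bypasses connectFunctionsEmulator entirely, whereas httpsCallable resolves the callable by name through the functions instance — which would explain why the name-based form works against the emulator. A sketch (the function name 'hello' is assumed for illustration):

```typescript
import { httpsCallable } from 'firebase/functions';

// Resolved by name through `functions`, so the emulator connection applies
export const helloWorld = httpsCallable<unknown, THelloResponse>(
  functions,
  'hello' // the callable's name, not its deployed URL
);
```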
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: API to do arithmetic operations on pointers (cl_mem) My ultimate goal is to do arithmetic operations on pointers on the host side. I know that we can do the following on the kernels:
// .cl file
__kernel void pointerAdd(__global float* arr, int index, float newVal) {
    *(arr + index) = newVal;
}
// .cpp file
cl_mem arr = clCreateBuffer(...);
opencl.run("pointerAdd", {1} /* GWS */, {1} /* LWS */, arr, 10, 2.2f);
But, can I increment the pointer on the host before passing it to the kernel?
Since OpenCL buffers are represented by objects (i.e., cl_mem), doing arr += 10; on the host is completely wrong. Is there an API that we can use to do pointer arithmetic?
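There is no API for arithmetic on a cl_mem handle itself, but OpenCL (1.1 and later) can express "buffer plus offset" on the host with clCreateSubBuffer, which creates a new cl_mem aliasing a region of the original buffer. A sketch (error handling omitted; the origin must respect the device's base address alignment, CL_DEVICE_MEM_BASE_ADDR_ALIGN):

```c
/* Create a cl_mem that aliases `arr` starting 10 floats in */
cl_buffer_region region;
region.origin = 10 * sizeof(float);   /* byte offset into arr        */
region.size   = 90 * sizeof(float);   /* extent of the aliased range */

cl_int err;
cl_mem arr_plus_10 = clCreateSubBuffer(arr, CL_MEM_READ_WRITE,
                                       CL_BUFFER_CREATE_TYPE_REGION,
                                       &region, &err);
/* arr_plus_10 can now be set as a kernel argument like any other buffer */
```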
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: JavaFX: java.lang.NoClassDefFoundError: org/eclipse/jface/databinding/swt/WidgetProperties So, I tried to create my first JavaFX project and everything went fine until I tried to add a new "JavaFX Document".
When I try to create one I get this message:
An error has occurred. See error log for more details.
org/eclipse/jface/databinding/swt/WidgetProperties
Log:
java.lang.NoClassDefFoundError: org/eclipse/jface/databinding/swt/WidgetProperties
at org.eclipse.fx.ide.ui.wizards.AbstractJDTElementPage.createControl(AbstractJDTElementPage.java:109)
at org.eclipse.jface.wizard.Wizard.createPageControls(Wizard.java:178)
at org.eclipse.jface.wizard.WizardDialog.createPageControls(WizardDialog.java:744)
at org.eclipse.jface.wizard.WizardDialog.setWizard(WizardDialog.java:1182)
at org.eclipse.jface.wizard.WizardDialog.updateForPage(WizardDialog.java:1235)
at org.eclipse.jface.wizard.WizardDialog.lambda$3(WizardDialog.java:1223)
at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:74)
at org.eclipse.jface.wizard.WizardDialog.showPage(WizardDialog.java:1223)
at org.eclipse.ui.internal.dialogs.NewWizardSelectionPage.advanceToNextPageOrFinish(NewWizardSelectionPage.java:73)
at org.eclipse.ui.internal.dialogs.NewWizardNewPage.lambda$0(NewWizardNewPage.java:342)
at org.eclipse.jface.viewers.StructuredViewer$1.run(StructuredViewer.java:780)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:45)
at org.eclipse.jface.util.SafeRunnable.run(SafeRunnable.java:174)
at org.eclipse.jface.viewers.StructuredViewer.fireDoubleClick(StructuredViewer.java:777)
at org.eclipse.jface.viewers.AbstractTreeViewer.handleDoubleSelect(AbstractTreeViewer.java:1542)
at org.eclipse.jface.viewers.StructuredViewer$4.widgetDefaultSelected(StructuredViewer.java:1211)
at org.eclipse.jface.util.OpenStrategy.fireDefaultSelectionEvent(OpenStrategy.java:272)
at org.eclipse.jface.util.OpenStrategy$1.handleEvent(OpenStrategy.java:329)
at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:89)
at org.eclipse.swt.widgets.Display.sendEvent(Display.java:4256)
at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1066)
at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:4054)
at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3642)
at org.eclipse.jface.window.Window.runEventLoop(Window.java:823)
at org.eclipse.jface.window.Window.open(Window.java:799)
at org.eclipse.ui.internal.handlers.WizardHandler$New.executeHandler(WizardHandler.java:263)
at org.eclipse.ui.internal.handlers.WizardHandler.execute(WizardHandler.java:283)
at org.eclipse.ui.internal.handlers.HandlerProxy.execute(HandlerProxy.java:283)
at org.eclipse.ui.internal.handlers.E4HandlerProxy.execute(E4HandlerProxy.java:97)
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
at java.base/java.lang.reflect.Method.invoke(Method.java:578)
at org.eclipse.e4.core.internal.di.MethodRequestor.execute(MethodRequestor.java:58)
at org.eclipse.e4.core.internal.di.InjectorImpl.invokeUsingClass(InjectorImpl.java:317)
at org.eclipse.e4.core.internal.di.InjectorImpl.invoke(InjectorImpl.java:251)
at org.eclipse.e4.core.contexts.ContextInjectionFactory.invoke(ContextInjectionFactory.java:173)
at org.eclipse.e4.core.commands.internal.HandlerServiceHandler.execute(HandlerServiceHandler.java:156)
at org.eclipse.core.commands.Command.executeWithChecks(Command.java:488)
at org.eclipse.core.commands.ParameterizedCommand.executeWithChecks(ParameterizedCommand.java:485)
at org.eclipse.e4.core.commands.internal.HandlerServiceImpl.executeHandler(HandlerServiceImpl.java:213)
at org.eclipse.ui.internal.handlers.LegacyHandlerService.executeCommand(LegacyHandlerService.java:389)
at org.eclipse.ui.internal.actions.CommandAction.runWithEvent(CommandAction.java:142)
at org.eclipse.jface.action.ActionContributionItem.handleWidgetSelection(ActionContributionItem.java:580)
at org.eclipse.jface.action.ActionContributionItem.lambda$4(ActionContributionItem.java:414)
at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:89)
at org.eclipse.swt.widgets.Display.sendEvent(Display.java:4256)
at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1066)
at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:4054)
at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3642)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$5.run(PartRenderingEngine.java:1155)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:338)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.run(PartRenderingEngine.java:1046)
at org.eclipse.e4.ui.internal.workbench.E4Workbench.createAndRunUI(E4Workbench.java:155)
at org.eclipse.ui.internal.Workbench.lambda$3(Workbench.java:643)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:338)
at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:550)
at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:171)
at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:152)
at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:203)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:136)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:402)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:255)
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
at java.base/java.lang.reflect.Method.invoke(Method.java:578)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:659)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:596)
at org.eclipse.equinox.launcher.Main.run(Main.java:1467)
at org.eclipse.equinox.launcher.Main.main(Main.java:1440)
Caused by: java.lang.ClassNotFoundException: org.eclipse.jface.databinding.swt.WidgetProperties cannot be found by org.eclipse.fx.ide.ui_3.8.0.202204150904
... 68 more
In the added picture you can see that I added a databinding jar with a WidgetProperties.class in it. I have no idea what I have to change.
I hope you can help me!
Thanks
Aip
Tried to create a new "JavaFX Document" -> Got nothing
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Python PyQt6 QTreeView, QStandardItem not appending row I am trying to populate a QTreeView() but it's not working. The root item and first row get added, but nothing after that does. Any ideas as to why? Code:
class MainGUI(QMainWindow):
    def __init__(self):
        super().__init__()
        frame_main = QFrame()
        self.setCentralWidget(frame_main)
        grid_main = QGridLayout(frame_main)
        self.tree_main = QTreeView()
        self.model = QStandardItemModel()
        grid_main.addWidget(self.tree_main, 0, 0)
        self.populate_tree(data)
        self.resize(500, 500)

    def populate_tree(self, param):
        model_root = self.model.invisibleRootItem()
        tmp = []
        for index, (key, value) in enumerate(param.items()):
            print(key, value)
            if index == 0:
                sub = QStandardItem(key)
                model_root.appendRow(sub)
            for i, (k, v) in enumerate(value.items()):
                if index == 0:
                    sub.appendRow(QStandardItem(k))
                if v == 'Directory':
                    tmp.append(QStandardItem(k))
                sub_dir = key[key.rfind('/') + 1:]
                item = [c for c in tmp if c.text() == sub_dir]
                if item:
                    item[0].appendRow(QStandardItem(k))
                    item.pop(0)
        self.tree_main.setModel(self.model)
That's an MRE. The code actually involves separate threads and scanning a directory (which is what I am trying to populate the tree with). I use os.scandir() recursively to grab files and folders from my drive. I don't see what's wrong, and net searching is turning up little I can use.
EDIT:
Here's the recursive scan call:
def recursive(path):
    with os.scandir(path) as it:
        results = {}
        filesystem[path] = results
        for entry in it:
            if entry.is_file():
                results[entry.name] = 'File'
            elif entry.is_dir(follow_symlinks=False):
                results[entry.name] = 'Directory'
                recursive(entry.path)

recursive('/home/mark/Downloads')
I call this from a QThread and emit a signal to call populate_tree.
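As an aside, the scan helper is easier to test if it returns its dict instead of writing to a module-level filesystem global — the structure handed to populate_tree can then be inspected directly before any Qt code runs. A stdlib-only sketch:

```python
import os
import tempfile

def scan(path):
    """Build {dir_path: {entry_name: 'File' | 'Directory'}}, like the
    question's recursive(), but returning the dict instead of using a global."""
    filesystem = {}
    def _recurse(p):
        results = {}
        filesystem[p] = results
        with os.scandir(p) as it:
            for entry in it:
                if entry.is_file():
                    results[entry.name] = 'File'
                elif entry.is_dir(follow_symlinks=False):
                    results[entry.name] = 'Directory'
                    _recurse(entry.path)
    _recurse(path)
    return filesystem

# Tiny demo on a throwaway tree: root/a.txt and root/sub/
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, 'sub'))
open(os.path.join(root, 'a.txt'), 'w').close()
fs = scan(root)
print(fs[root] == {'a.txt': 'File', 'sub': 'Directory'})  # True
```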
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: Cannot find corresponding coco.names for Yolo V4 I am trying to follow this tutorial
https://lindevs.com/yolov4-object-detection-using-opencv/
I found Yolo V4 here https://github.com/AlexeyAB/darknet/releases
But I don't see a corresponding coco.names file anywhere with the "yolov4.cfg" and "yolov4.weights" above
This labels file is required by the code
I see many different builds of YOLOv4 all over the internet, and some of them have a coco.names associated with them.
Is there a specific coco.names that I need for the official release of YOLOv4 on this release page?
Also, can I substitute the latest YOLOv7, and can I use the same coco.names?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-3"
} |
Q: Passing an MLMultiArray (or CGImage) to PencilKit so it can be edited Is it possible to initialize my PKCanvasView.drawing from an MLMultiArray or CGImage so that the user can edit it?
I have a semantic segmentation model that outputs the MLMultiArray of 1s (detected) and 0s (not-detected). I want the user to be able to edit the mask so they can submit corrections that could be used to improve the model, but for the life of me I can't understand how to make it so the segmentation mask can be edited. Any help is greatly appreciated. I know that the init method can be called with .init(from: Data), but can't seem to understand what format the Data needs to be in.
My struct is below; the rawArray is as described above. This throws: "Argument type 'Data' does not conform to type 'Decoder'"
struct pencilView: UIViewRepresentable {
    var inputImage: UIImage
    var rawArray: MLMultiArray
    var colorModel: ColorModel
    @Binding var drawingView: PKCanvasView

    func updateUIView(_ uiView: PKCanvasView, context: Context) {
    }

    func makeUIView(context: Context) -> PKCanvasView {
        let length = rawArray.count
        let dblPtr = rawArray.dataPointer.bindMemory(to: Double.self, capacity: length)
        let dataOut = Data.init(bytes: dblPtr, count: length)
        drawingView.drawing = try! .init(from: dataOut)
        drawingView.drawingPolicy = .pencilOnly
        drawingView.tool = PKInkingTool(.pen, color: UIColor(colorModel.overlayColor), width: 15)
        return drawingView
    }
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Correct way to read data from firestore to avoid duplicated data renders I've been building a react based e-commerce platform as a personal portfolio project using firebase as a backend service. I have gotten it set up so far with a user being able to create an account and log in with firebase auth. I was instituting an add to cart feature where the 'add to cart' button is clicked, and it sends the data to a firestore database that I created. I am able to receive all the data in the database so far but when I go to read it it will render to my page in duplicated form. So despite there being one item in the database, two of the items are rendered to my page. I've looked through the docs but since I'm pretty new to the service I'm having troubles finding the pertinent info in them.
I tried to add in an unsubscribe method after finding something similar on here, but it only seems to work occasionally. I would add an item to the cart, then when I reloaded the page it would still be duplicated, but if I returned to the product page and added another item it would re render not duplicated. Looking at the console I can still see the array of products is being duplicated though and I could only get it to trigger the non duplicated version occasionally, which was odd to me. After doing a lot of reading through the day my closest guess was something was occurring with the snapshot method and the way it's being cached in my browser, but I'm having troubles making any sort of progress that is beneficial. Here's the method I was using to get the data from the firestore database:
useEffect(() => {
const auth = getAuth();
const user = auth.currentUser;
const userId = user ? user.uid : null;
// Construct a query to retrieve all cart items for the current user
const cartItemsRef = collection(db, "cartItems");
const q = query(cartItemsRef, where("userId", "==", userId));
const unsubscribe = onSnapshot(q, (snapshot) => {
snapshot.docChanges().forEach((change) => {
if (change.type === "added") {
console.log("New product: ", change.doc.data());
}
if (change.type === "modified") {
console.log("Modified product: ", change.doc.data());
}
if (change.type === "removed") {
console.log("Removed product: ", change.doc.data());
}
});
});
// Retrieve the cart items and log them to the console
getDocs(q)
.then((getDocs) => {
getDocs.forEach((doc) => {
addItemToCart.push(doc.data());
});
// I was using this to log the data to see how it was returning
getDocs.forEach((doc) => {
console.log(doc.id, " => ", doc.data());
});
setCartItems(addItemToCart)
})
.catch((error) => {
console.log("Error getting documents: ", error);
});
}, [])
Any help would be appreciated. Thanks all.
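One pattern worth checking (a sketch, not a verified fix): the effect both subscribes with onSnapshot and fetches with getDocs, and it pushes into a shared addItemToCart array that is never cleared — so any re-run of the effect (React 18's StrictMode mounts effects twice in development) appends the same documents again. Rebuilding the list from each snapshot and returning the unsubscribe function avoids both problems:

```javascript
useEffect(() => {
  const auth = getAuth();
  const userId = auth.currentUser ? auth.currentUser.uid : null;
  const q = query(collection(db, "cartItems"), where("userId", "==", userId));

  // Single source of truth: rebuild the array on every snapshot
  const unsubscribe = onSnapshot(q, (snapshot) => {
    setCartItems(snapshot.docs.map((doc) => ({ id: doc.id, ...doc.data() })));
  });

  return unsubscribe; // detach the listener when the component unmounts
}, []);
```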
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: $and operator not working in findOne mongoose BorrowBookSchema
const borrowBookSchema = new mongoose.Schema({
    member_id: {
        type: String,
        required: true,
    },
    member_name: {
        type: String,
        required: true
    },
    bookdetails: [{
        bookid: String,
        bookname: String,
        todaydate: String,
        returndate: String,
        status: Boolean
    }]
})
The query should only return the bookdetails entry with status: false, but it also returns the one with status: true. How do I solve this?
const returnBorrowBook = async (req, res) => {
    const { memberid } = req.body
    let response = await BorrowBook.findOne({ $and: [{ member_id: memberid }, { "bookdetails.status": false }] })
    console.log(response)
}
Its result looks like:
bookdetails: [
{
bookid: '63f5d00a0effd7323ac20bea',
bookname: 'Atomic Habits',
todaydate: '03/03/2023',
returndate: '03/03/2023',
status: true,
_id: new ObjectId("64016a278023cdb8a881a35a")
},
{
bookid: '63f5d00a0effd7323ac20bea',
bookname: 'Atomic Habits',
todaydate: '01/03/2023',
returndate: null,
status: false,
_id: new ObjectId("64016b365203f7c11528392e")
}
]
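A note on semantics (a sketch, untested against this schema): the filter {"bookdetails.status": false} selects documents in which any array element has status: false — it does not filter the array itself, so a matched document comes back with all of its elements, true and false alike. Limiting the returned array usually takes a projection such as $elemMatch:

```javascript
// Second argument is a projection: return only the first array element
// with status: false (field names taken from the schema above)
let response = await BorrowBook.findOne(
    { member_id: memberid, "bookdetails.status": false },
    { member_id: 1, member_name: 1, bookdetails: { $elemMatch: { status: false } } }
);
```

$elemMatch returns only the first matching element; returning every matching element needs an aggregation with $filter instead.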
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Quarkus maven build throws SSL error: unable to find valid certification path to requested target I have a working Quarkus project on my personal laptop, but when I try to setup the build behind a corporate firewall I get the SSL error "PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target". The following maven command threw this error:
mvn io.quarkus.platform:quarkus-maven-plugin:2.16.4.Final:create -DprojectGroupId=org.acme -DprojectArtifactId=myArtifact -Dextensions="quarkus-amazon-lambda-rest"
This I was able to resolve by passing the argument "-Dmaven.wagon.http.ssl.insecure=true" to instruct maven to ignore SSL in the above command and it works perfectly.
The problem comes with the command "mvnw install" to build the project. The batch command throws the same error, but I can't figure out how to pass the argument "-Dmaven.wagon.http.ssl.insecure=true" to this script and cannot find documentation on how to do this.
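For what it's worth, the mvnw wrapper scripts forward their command-line arguments to Maven unchanged, so the same property should work appended directly, or set once via MAVEN_OPTS (a sketch; property name taken from the question — disabling SSL verification is best treated as a stopgap, the proper fix being to import the corporate root certificate into the JVM truststore):

```shell
# Pass the property straight through the wrapper
./mvnw install -Dmaven.wagon.http.ssl.insecure=true

# Or set it for every wrapper invocation in this shell
export MAVEN_OPTS="-Dmaven.wagon.http.ssl.insecure=true"
./mvnw install
```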
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: React props with map function not working Trying to test my functions and get the items from 'names' to be rendered onto the page using props. Looked over tutorials and unsure why its not showing?
Here is my code:
import React from 'react'

const names = [
    {
        projectname: "Project 1",
        emoji: "happy",
    },
    {
        projectname: "Project 2",
        emoji: "sad",
    },
    {
        projectname: "Project 3",
        emoji: "angry",
    },
    {
        projectname: "Project 4",
        emoji: "confused",
    },
    {
        projectname: "Project 5",
        emoji: "wink",
    },
    {
        projectname: "Project 6",
        emoji: "smile",
    }
]

function cards(props){
    <>
        <h1>{props.projectname}</h1>
        <h2>{props.emoji}</h2>
    </>
}

function Projects(){
    return(
        <>
            <div className="projectsAll">
                <h1> This is a test</h1>
                {names.map((cards) => (
                    <cards
                        projectname={cards.projectname}
                        emoji={cards.emoji}
                    />
                ))}
            </div>
        </>
    )
}

export default Projects
This is my app.js file which is correctly showing the 'this is a test'
import 'bootstrap/dist/css/bootstrap.min.css';
import Header from './pages/Header';
import {BrowserRouter as Router, Route, Routes} from 'react-router-dom';
import Homepage from './pages/Homepage';
import Contact from './pages/Contact';
import About from './pages/AboutMe';
import Projects from './pages/Projects';
function App() {
    return (
        <Router>
            <Header />
            <Routes>
                <Route path="/" element={<Homepage />} />
                <Route path="/aboutme" element={<About />} />
                <Route path="/projects" element={<Projects />} />
                <Route path="/contactme" element={<Contact />} />
            </Routes>
        </Router>
    )
}
export default App;
I want to projectname and emojis to be rendered onto the page,
A: There were a few errors. First, a component function's name must start with a capital letter, so cards becomes Cards — React only treats capitalized tags as components, which is what lets you use the function as a tag in other functions. Second, the Cards function was missing a return statement.
Here is the updated Projects.jsx:
import React from 'react'

const names = [
    {
        projectname: "Project 1",
        emoji: "happy",
    },
    {
        projectname: "Project 2",
        emoji: "sad",
    },
    {
        projectname: "Project 3",
        emoji: "angry",
    },
    {
        projectname: "Project 4",
        emoji: "confused",
    },
    {
        projectname: "Project 5",
        emoji: "wink",
    },
    {
        projectname: "Project 6",
        emoji: "smile",
    }
]

function Cards(props){
    return(
        <>
            <h1>{props.projectname}</h1>
            <h2>{props.emoji}</h2>
        </>
    )
}

function Projects(){
    return(
        <>
            <div className="projectsAll">
                <h1> This is a test</h1>
                {names.map((cards) => (
                    <Cards
                        projectname={cards.projectname}
                        emoji={cards.emoji}
                    />
                ))}
            </div>
        </>
    )
}

export default Projects
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Sharepoint excel hyperlink issue We have an Excel file with SharePoint links. The links work fine when the file is opened in the online version, but when the file is downloaded from the SharePoint site, the hyperlinks change to a C:\Users path instead of the SharePoint links. We tried re-downloading the file and re-adding the hyperlinks, but the issue remains. Please help us find a solution for this.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: How can I assign SPIFFS file name to a variable? I am a relatively beginner programmer, and I've been combing for an answer for a while now. Coming up empty handed, I decided it was time to reach out. This is my first stack overflow post, be gentle.
I am trying to build a program that retrieves jpeg files from SPIFFS and loads the images on the ESP32 TFT. I've managed to access the files in SPIFFS and to display the photos, but the file names are currently hardcoded into the program. I would like the program to read the file names and pass them into drawJpeg(). I am struggling to figure out how to obtain a return value from my getFileName() function. Code below:
#define FS_NO_GLOBALS
#include <Arduino.h>
#include <FS.h>
#include <string.h>
#include <iostream>
// #ifdef ESP32
#include "SPIFFS.h" // ESP32 only
// #endif
#include <TFT_eSPI.h> // Hardware-specific library
TFT_eSPI tft = TFT_eSPI(); // Invoke custom library
// JPEG decoder library
#include <JPEGDecoder.h>
#include "JPEG_functions.h"
// Declaration
void getFileName();
void setup()
{
    Serial.begin(115200);
    delay(10);
    Serial.printf("\nSerial Debugger active!\n");

    // Initialize TFT
    tft.begin();
    tft.setRotation(1);
    tft.fillScreen(TFT_BLUE);
    delay(10);

    // Initialize SPIFFS
    if (!SPIFFS.begin(true))
    {
        Serial.println("SPIFFS initialisation failed!");
    }
    Serial.println("\nSPIFFS Initialisation Complete.\n");

    getFileName();

    tft.setRotation(1);
    tft.fillScreen(TFT_RED);
}

void loop()
{
    drawJpeg("/001.jpg", 0, 0);
    delay(2500);
    drawJpeg("/002.jpg", 0, 0);
    delay(2500);
    drawJpeg("/003.jpg", 0, 0);
    delay(2500);
    drawJpeg("/004.jpg", 0, 0);
    delay(2500);
    drawJpeg("/005.jpg", 0, 0); // 240 x 320 image
    delay(2500);
}

void getFileName()
{
    // Open SPIFFS
    File root = SPIFFS.open("/");
    File file = root.openNextFile();

    // Print SPIFFS file names
    while (file)
    {
        Serial.print("FILE DETECTED\n");
        Serial.print("FILE: ");
        Serial.println(file.name());
        file = root.openNextFile();
    }
}
I've tried a number of changes that result in programming not compiling. I am missing a basic understanding about how to handle the File file variable and call to name().
A: What exactly do you plan to do with those files? If you want to go over each file and display it, then you don't need to involve any functions for determining the file name - the File already does that with method name(). Just do what you want to do in the main loop() and you're done.
void loop()
{
    File root = SPIFFS.open("/");
    File file = root.openNextFile();
    while (file)
    {
        Serial.println(file.name());
        drawJpeg(file.name(), 0, 0);
        delay(2500);
        file = root.openNextFile();
    }
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to add a control to form1 designer by code and make the control to be in the front? When I drag the control in the designer of form1 from the toolbox the control is in the front:
The ChromiumWebBrowser control is on the top of everything when dragging it from the toolbox:
The problem is that I don't want the control to be on top of everything if I'm not running the application and clicking a button. I want the control to be in the form1 designer but not on the top.
so, I thought to create the control by code:
private void btnShowOnGoogleMaps_Click(object sender, EventArgs e)
{
    ChromiumWebBrowser chromiumWebBrowser1 = new ChromiumWebBrowser();
    chromiumWebBrowser1.Size = new Size(500, 500);
    chromiumWebBrowser1.Show();
    chromiumWebBrowser1.Load("D:\\Csharp Projects\\Weather\\map.html");
}
But with this code, when I click the button, the control is in the back no matter what I tried.
I tried to add this line:
chromiumWebBrowser1.BringToFront();
but it didn't change much.
this is the control after clicking the button:
you can see the control (with the map) on the left side of the form1 on the left edge and the control is behind everything else.
I marked with red ellipse the control to show where it is hiding behind.
A: Is there a ZOrder function for this control? Usually you can just do control.ZOrder(0) to bring the control in front.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to allocate different memories to multiple gpus while training? Suppose I have two GPUs, GPU-0 and GPU-1 (they are the same type). I hope to train a simple classification network (e.g. ResNet) on them. For some special reasons, I hope GPU-0 can take more memories.
For example, consider the batch size set to 64, I hope about 40 batches of data are allocated on GPU-0 and the rest 24 batches on GPU-1.
I am guessing this can not be done via nn.DataParallel or nn.DistributedDataParallel, right? To do this, I think I need to copy the model and data manually to GPU-0 and GPU-1, then merge the computed loss together.
I am pretty unfamiliar with distributional training in PyTorch and fail to find a proper tutorial. A related question is raised here, however the objective is quite different.
Could anyone illustrate this problem with an example? Thanks ahead.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: importing global variable in python from different module I have main.py
globvar = 0

def set_globvar_to_one():
    global globvar  # Needed to modify global copy of globvar
    globvar = 1
and cs.py
from main import globvar
from main import set_globvar_to_one

def print_globvar():
    print(globvar)  # No need for global declaration to read value of globvar

set_globvar_to_one()
print_globvar()
when i run cs.py output of above code is 0. where as i was expecting output is 1 as from cs.py i modified the global variable present in main.py .I imported globvar in cs.py ?
Am i missing something ?
A: In the code you provided, cs.py calls set_globvar_to_one() from main.py before calling print_globvar(). However, when main.py is imported, the line globvar = 0 initializes the global variable globvar to 0.
Therefore, when set_globvar_to_one() is called from cs.py, it sets the global variable globvar to 1, but this modification only affects the global copy of globvar within the scope of the main.py module. It does not affect the global variable globvar within the scope of cs.py.
When print_globvar() is called from cs.py, it prints the value of globvar within the scope of cs.py, which is still 0 because it has not been modified within that scope.
To achieve the desired result, you need to modify main.py so that globvar is a mutable object, such as a list or dictionary, and modify the elements of the object instead of reassigning the variable.
You can do the following to fix it:
# main.py
globvar = {'value': 0}
def set_globvar_to_one():
globvar['value'] = 1
return globvar['value']
# cs.py
from main import globvar
from main import set_globvar_to_one
def print_globvar():
print(globvar['value'])
set_globvar_to_one()
print_globvar()
When I ran this, it seems to work.
A: Notice the output of the following code:
globvar = 0
def set_glob_var():
    global globvar
    print("In set_glob_var 1:", id(globvar))
    globvar = 1
    print("In set_glob_var 2:", id(globvar))
and then:
from main import globvar
from main import set_glob_var
print("Outside of main 1:", id(globvar))
set_glob_var()
globvar = 1
print("Outside of main 2:", id(globvar))
It turns out that it will print something like:
Outside of main 1: <id of 0>
In set_glob_var 1: <id of 0>
In set_glob_var 2: <id of 1>
Outside of main 2: <id of 1>
The first two ids are identical because both names still refer to the original object 0. Once a name is rebound to a new object, it gets a different id, but only within the namespace where the assignment happened: rebinding globvar inside main does not change what the globvar name imported into cs.py refers to, and vice versa.
You can explore this further by calling set_glob_var multiple times. The results will be interesting!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Specification '\dropbox-sdk-java\generated_stone_source\main\src' must have a .stone extension After importing the dropbox-sdk-java example from the official GitHub into Android Studio, I get this when building and trying to run it.
Specification 'C:\Users\XXX\StudioProjects\dropbox-sdk-java\dropbox-sdk-java\generated_stone_source\main\src' must have a .stone extension.
Execution failed for task ':dropbox-sdk-java:generateStone'.
Process 'command 'python'' finished with non-zero exit value 1
Android Studio version is Electric Eel
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: ORACLE UPDATE using FULL OUTER JOIN and sub-query I have the following QUERY
SELECT DISTINCT EPS_PROPOSAL.PROPOSAL_NUMBER FROM PROP_ADMIN, EPS_PROPOSAL
FULL OUTER JOIN PROP_ADMIN
ON EPS_PROPOSAL.PROPOSAL_NUMBER = PROP_ADMIN.PROPOSAL_NUMBER
WHERE EPS_PROPOSAL.SPONSOR_CODE = 100728 AND
(EPS_PROPOSAL.STATUS_CODE = 3 OR EPS_PROPOSAL.STATUS_CODE = 6)
PROPOSAL_NUMBER FUNDING_CODE
4214 (null)
3079 (null)
3212 (null)
. .
. .
TOTAL RECORDS: 339
I am attempting to update the FUNDING_CODE to F using the previously used WHERE condition and OUTER JOIN.
UPDATE PROP_ADMIN
SET FUNDING_CODE = 'F'
WHERE PROPOSAL_NUMBER IN(
SELECT DISTINCT EPS_PROPOSAL.PROPOSAL_NUMBER FROM PROP_ADMIN, EPS_PROPOSAL
FULL OUTER JOIN PROP_ADMIN
ON EPS_PROPOSAL.PROPOSAL_NUMBER = PROP_ADMIN.PROPOSAL_NUMBER
WHERE EPS_PROPOSAL.SPONSOR_CODE = 100728 AND
(EPS_PROPOSAL.STATUS_CODE = 3 OR EPS_PROPOSAL.STATUS_CODE = 6))
When I run this, only 1 row is updated from my list above.
PROPOSAL_NUMBER FUNDING_CODE
4214 F
3079 (null)
3212 (null)
. .
. .
How do I make the UPDATE statement execute across all the rows instead of just the first row returned from the sub-query?
A: That is because only PROPOSAL_NUMBER 4214 exists in the PROP_ADMIN table. All the remaining 338 PROPOSAL_NUMBERs come from the EPS_PROPOSAL table.
Please note that you're performing a full outer join to pull those 339 records, so it's not surprising that only a few of them can actually be updated.
Hope it helps!
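The behaviour is easy to reproduce on a toy schema. The sketch below (shown in SQLite; table contents are invented stand-ins for PROP_ADMIN/EPS_PROPOSAL) confirms that an UPDATE only touches rows that actually exist in the target table, no matter how many values the IN-subquery returns:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE prop_admin (proposal_number INTEGER, funding_code TEXT)")
con.execute("CREATE TABLE eps_proposal (proposal_number INTEGER)")

# The subquery side has three proposals...
con.executemany("INSERT INTO eps_proposal VALUES (?)", [(4214,), (3079,), (3212,)])
# ...but only one of them exists in the table being updated.
con.execute("INSERT INTO prop_admin VALUES (4214, NULL)")

cur = con.execute(
    "UPDATE prop_admin SET funding_code = 'F' "
    "WHERE proposal_number IN (SELECT proposal_number FROM eps_proposal)"
)
print(cur.rowcount)  # 1 -- only the overlapping row is updated
```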
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Trouble with retrieving specific node in firebase realtime DB via react native expo This is a new issue that appeared after Grabbing the wrong item from firebase realtime database via react native expo
Part of my database:
{
"Bakery": {
"Bakery Cuisine": {
"Description": "Within North Spine Plaza",
"Halal": "Yes",
"Latitude": 1.34714,
"Location": "50 Nanyang Ave, #01-20 North Spine Plaza, Singapore 639798",
"Longitude": 103.68066,
"OH": "Mon - Sat : 8 AM to 7 PM, Sun Closed",
"ShopNo": 1
}
},
"Beverage": {
"Beverage": {
"Description": "Within the South Spine food court",
"Halal": "No",
"Latitude": 1.34253,
"Location": "21 Nanyang Link, Singapore 637371",
"Longitude": 103.68243,
"OH": "Mon - Fri: 7 30 am to 8 pm, Sat - Sun/PH Closed",
"ShopNo": 2
},
"Beverages": {
"Description": "Within North Spine Koufu",
"Halal": "No",
"Latitude": 1.34708,
"Location": "76 Nanyang Dr, #02-03 North Spine Plaza, Singapore 637331",
"Longitude": 103.68002,
"OH": "Mon - Fri : 7 am to 8 pm, Sat : 7 am to 3 pm, Sun Closed",
"ShopNo": 3
},
"Boost": {
"Description": "Within North Spine Plaza",
"Halal": "No",
"Latitude": 1.34735,
"Location": "50 Nanyang Ave, #01-11 North Spine Plaza, Singapore 639798",
"Longitude": 103.68036,
"OH": "Mon - Fri : 10 am to 9 pm, Sat - Sun: 10 am to 6 pm",
"ShopNo": 4
},
  },
  "Total": 89
}
My code
const SubScreen2 = () => {
const navigation = useNavigation()
const [todoData, setToDoData] = useState([])
useEffect (() => {
var random = 0
get(ref(db, "food/Total")).then(snapshot => {
const count = snapshot.val();
console.log(count)
random = Math.floor((Math.random() * count));
console.log(random)
const rc = query(ref(db, `food/`), orderByChild("ShopNo"), equalTo(random))
get (rc)
.then((querySnapshot) => {
querySnapshot.forEach((shopSnapshot) => {
const shopKey = shopSnapshot.key;
console.log("Hello")
console.log("Randomly selected shop: " + shopKey)
const shopData = shopSnapshot.val();
console.log("Shop data", shopData);
})
.catch(error => {
console.log(error);
})
})
},) })
No errors are reported in the console, but these are the only logs I get:
I tried catching the error, but nothing appeared in my log. I am very confused: why is there no error, yet it cannot grab the node?
Using the same method but with function:
Code
const SubScreen2 = () => {
const navigation = useNavigation()
const [todoData, setToDoData] = useState([])
function getFirstChild(queryRef) {
return get(query(queryRef, limitToFirst(1))) // mix in a query limit
.then((querySnapshot) => {
let firstChild = null;
querySnapshot.forEach((childSnapshot) => {
firstChild = childSnapshot;
});
return firstChild; // DataSnapshot | null
});
}
useEffect (() => {
var random = 0
get(ref(db, "food/Total")).then(snapshot => {
const count = snapshot.val();
console.log(count)
random = Math.floor((Math.random() * count));
console.log(random)
const rc = query(ref(db, `food/`), orderByChild("ShopNo"), equalTo(random))
getFirstChild(rc) // rc being query(ref(db, "food"), orderByChild("ShopNo"), equalTo(random))
.then((shopSnapshot) => {
const shopKey = shopSnapshot.key;
console.log("Randomly selected shop: " + shopKey)
const shopData = shopSnapshot.val();
console.log("Shop data", shopData);
})
})
})
},) })
I got an error of:
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Has the react-native openai-api module been modified to access the ChatGPT API? Has the react-native openai-api module been modified to access ChatGPT API? When using expo, I get the following error :
WARN Possible Unhandled Promise Rejection (id: 0): TypeError: Cannot read property 'create' of undefined
using openai.ChatCompletion.create seems not to work in react-native
code :
import React, { useState, useEffect } from "react";
import { View, TextInput, Button, FlatList, Text } from "react-native";
import OpenAI from "openai-api";
const openai = new OpenAI("YOUR_API_KEY");
export default function App() {
const [inputText, setInputText] = useState("");
const [messages, setMessages] = useState([]);
const [responseText, setResponseText] = useState("");
useEffect(() => {
async function generateResponse() {
if (messages.length > 0) {
const response = await openai.ChatCompletion.create({
model: "gpt-3.5-turbo",
messages,
});
print(response);
setResponseText(response.choices[0].text);
}
}
generateResponse();
}, [messages]);
A: As of 4th March 2023 the openai-api library does not yet support the chat completion functionality. So the ChatCompletion property that you are trying to access does not exist, hence you are getting the Cannot read property 'create' of undefined message.
I would instead strongly advise you to use the official openai library which works in the browser and fully supports the new chat completions.
You can do this by changing your code to the following and replacing the YOUR API KEY string with your own key:
import React, { useState, useEffect } from "react";
import { View, TextInput, Button, FlatList, Text } from "react-native";
import { Configuration, OpenAIApi } from 'openai';
const configuration = new Configuration({
apiKey: "YOUR API KEY",
});
const openai = new OpenAIApi(configuration);
export default function App() {
const [inputText, setInputText] = useState("");
const [messages, setMessages] = useState([]);
const [responseText, setResponseText] = useState("");
useEffect(() => {
async function generateResponse() {
if (messages.length > 0) {
const response = await openai.createChatCompletion({
model: 'gpt-3.5-turbo',
messages,
});
      console.log(response);
      setResponseText(response.data.choices[0].message.content);
}
}
generateResponse();
}, [messages]);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is there a way for a discord.py bot to react to every reaction in an already reacted message? So, let's say I want my bot to watch a message, and when a random user reacts to that message, the bot adds the same reaction to it, just as when someone adds a reaction and I click on it to react as well.
I know how to use message.add_reaction, but I can't find a way to do what I want, and I don't know if there is a way to actually do it.
@bot.event
async def on_message(message):
await bot.process_commands(message)
if "hi" in message.content:
await message.add_reaction("(any reaction)")
A: There's an event called on_reaction_add in discord.py
https://discordpy.readthedocs.io/en/stable/api.html#discord.on_reaction_add
@bot.event
async def on_reaction_add(reaction, user):
message = reaction.message
if "hi" in message.content:
await message.add_reaction(reaction)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Xcode - One of the iOS frameworks is not generating dSYM file One of the 8 frameworks fails to generate a dSYM file.
Debug Information Format is set to DWARF with dSYM File and
Generate Debug Symbols is enabled.
dSYMs are generated for all frameworks but one.
What else should I check to debug this?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Vectorize or optimize several nested for loops R I am looking to optimize code where I am using multiple for loops and if statements.
I have a dataframe:
set.seed(10)
df<-data.frame(val1 = c(1.1,0.2,-1.5,-2.3,2.0),
val2 = c(0.4,0.1,-0.2,0.4,-1.1))
df
val1 val2
1 1.1 0.4
2 0.2 0.1
3 -1.5 -0.2
4 -2.3 0.4
5 2.0 -1.1
Say I also have a list, l
l <- list(
data.frame(val1 = runif(5,-3,3),
val2 = runif(5,-2,2)),
data.frame(val1 = runif(5,-3,3),
val2 = runif(5,-2,2)),
data.frame(val1 = runif(5,-3,3),
val2 = runif(5,-2,2))
)
l
[[1]]
val1 val2
1 0.04486922 -1.0982535
2 -1.15938896 -0.9018779
3 -0.43855400 -0.9107797
4 1.15861249 0.4633172
5 -2.48918419 -0.2813139
[[2]]
val1 val2
1 0.9099340 -0.2847623
2 0.4064265 -1.7923867
3 -2.3189461 -0.9432893
4 0.5755518 -0.4048371
5 -0.8517001 1.3445366
[[3]]
val1 val2
1 2.1883274 0.8265877
2 0.6921145 1.3531507
3 1.6506594 -1.0416435
4 -0.8665879 1.0830861
5 -0.5649002 -0.5764090
Every dataframe within each element of the list has the same number of rows and columns as the dataframe df. What I want to do is: if a number in df is positive, count the number of times the numbers in the corresponding row and column of each dataframe in the list are more positive; conversely, if a number in df is negative, count the number of times the corresponding number in each dataframe's row and column is more negative (i.e. smaller, with a larger magnitude).
For example, the number in row 3, column 2 of df is -0.2. It is negative. Looking at the numbers in row 3, column 2 of each dataframe, we have -0.91, -0.94, -1.04. These are all more negative (smaller), so we should get 3. In row 1, column 1 in df we have 1.1. In row 1, column 1 of all the other dataframes, we have 0.04, 0.91, 2.19. So we should get 1, as only one number is more positive.
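For reference, the counting rule can also be sketched in a few lines of plain Python (values copied from the printed df and l above); each cell counts the entries that are larger in magnitude with the same sign:

```python
# df and the three list entries, transcribed from the printed output above.
df = [[1.1, 0.4], [0.2, 0.1], [-1.5, -0.2], [-2.3, 0.4], [2.0, -1.1]]
l = [
    [[0.04486922, -1.0982535], [-1.15938896, -0.9018779], [-0.43855400, -0.9107797],
     [1.15861249, 0.4633172], [-2.48918419, -0.2813139]],
    [[0.9099340, -0.2847623], [0.4064265, -1.7923867], [-2.3189461, -0.9432893],
     [0.5755518, -0.4048371], [-0.8517001, 1.3445366]],
    [[2.1883274, 0.8265877], [0.6921145, 1.3531507], [1.6506594, -1.0416435],
     [-0.8665879, 1.0830861], [-0.5649002, -0.5764090]],
]

def more_extreme(x, ref):
    # larger magnitude, same direction as the reference value
    return abs(x) > abs(ref) and (x > 0) == (ref > 0)

mat = [
    [sum(more_extreme(m[r][c], df[r][c]) for m in l) for c in range(2)]
    for r in range(5)
]
print(mat)  # [[1, 1], [2, 1], [1, 3], [0, 2], [0, 0]]
```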
I wrote nested for loops and if statements to do this:
mat = matrix(0, nrow=5, ncol = 2)
for(i in 1:3){
for(c in 1:2){
for(r in 1:5){
if(df[r,c]>0){
if(l[[i]][r,c]>df[r,c]){
mat[r,c] = mat[r,c]+1
}
} else
if(l[[i]][r,c]<df[r,c]){
mat[r,c] = mat[r,c]+1
}
}
}
}
mat is the desired output:
mat
[,1] [,2]
[1,] 1 1
[2,] 2 1
[3,] 1 3
[4,] 0 2
[5,] 0 0
Any help in speeding up this process and not relying on these loops would be appreciated.
A: I think you can do this using vectorized comparisons across 3-dim arrays:
library(abind)
a <- abind(l,along = 3)
df3 <- abind(df,df,df,along = 3)
apply(abs(a) > abs(df3) & (sign(a) == sign(df3)),
MARGIN = 1:2,
FUN = sum)
Additionally, you can create the replicated version of df without typing out df multiple times via:
df3 <- abind(replicate(3,df,simplify = FALSE),along = 3)
...or directly just by feeding a repeated vector of values to array:
df3 <- array(rep(c(as.matrix(df)),times = 3),dim = c(5,2,3))
A: We can directly compare the two data frames: the absolute value in the list element must be greater than the absolute value in df, and at the same time their signs must be equal. The result is a logical matrix, and we can use Reduce to sum up the logical matrices.
Reduce(`+`, lapply(l, \(x) {
(abs(x) > abs(df) & sign(x) == sign(df))
}))
val1 val2
[1,] 1 1
[2,] 2 1
[3,] 1 3
[4,] 0 2
[5,] 0 0
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: GMapsFX not displaying but running I am trying to link the GMapsFX API with Java, but I have an issue when trying to use GMapsFX. I am using NetBeans.
So basically, everything seems to be running without errors, but the actual display is "hidden" and I don't know why.
fxmlfile:
<?xml version="1.0" encoding="UTF-8"?>
<?import com.lynden.gmapsfx.GoogleMapView?>
<?import javafx.scene.control.TextField?>
<?import javafx.scene.layout.AnchorPane?>
<AnchorPane id="AnchorPane" prefHeight="443.0" prefWidth="711.0" xmlns="http://javafx.com/javafx/8.0.171" xmlns:fx="http://javafx.com/fxml/1" fx:controller="com.lynden.gmaps.directions.GmapController">
<children>
<GoogleMapView fx:id="mapView" layoutX="-311.0" layoutY="-244.0" prefWidth="490.0" AnchorPane.bottomAnchor="0.0" AnchorPane.leftAnchor="0.0" AnchorPane.rightAnchor="0.0" AnchorPane.topAnchor="0.0" />
<TextField fx:id="fromTextField" prefHeight="27.0" prefWidth="222.0" promptText="From:" AnchorPane.leftAnchor="10.0" AnchorPane.topAnchor="10.0" />
<TextField fx:id="toTextField" layoutX="10.0" layoutY="10.0" onAction="#toTextFieldAction" prefHeight="27.0" prefWidth="222.0" promptText="To:" AnchorPane.leftAnchor="10.0" AnchorPane.topAnchor="50.0" />
</children>
</AnchorPane>
controller:
package tn.esprit.carngo.presentation;
import com.lynden.gmapsfx.GoogleMapView;
import com.lynden.gmapsfx.MapComponentInitializedListener;
import com.lynden.gmapsfx.javascript.object.*;
import com.lynden.gmapsfx.service.directions.*;
import java.net.URL;
import java.util.ResourceBundle;
import javafx.beans.property.SimpleStringProperty;
import javafx.beans.property.StringProperty;
import javafx.event.ActionEvent;
import javafx.fxml.FXML;
import javafx.fxml.Initializable;
import javafx.scene.control.TextField;
public class GmapController implements Initializable, MapComponentInitializedListener, DirectionsServiceCallback {
protected DirectionsService directionsService;
protected DirectionsPane directionsPane;
protected StringProperty from = new SimpleStringProperty();
protected StringProperty to = new SimpleStringProperty();
@FXML
protected GoogleMapView mapView;
@FXML
protected TextField fromTextField;
@FXML
protected TextField toTextField;
@FXML
private void toTextFieldAction(ActionEvent event) {
DirectionsRequest request = new DirectionsRequest(from.get(), to.get(), TravelModes.DRIVING);
directionsService.getRoute(request, this, new DirectionsRenderer(true, mapView.getMap(), directionsPane));
}
@Override
public void directionsReceived(DirectionsResult results, DirectionStatus status) {
}
@Override
public void initialize(URL url, ResourceBundle rb) {
mapView.addMapInializedListener(this);
to.bindBidirectional(toTextField.textProperty());
from.bindBidirectional(fromTextField.textProperty());
}
@Override
public void mapInitialized() {
MapOptions options = new MapOptions();
options.center(new LatLong(47.606189, -122.335842))
.zoomControl(true)
.zoom(12)
.overviewMapControl(false)
.mapType(MapTypeIdEnum.ROADMAP);
GoogleMap map = mapView.createMap(options);
directionsService = new DirectionsService();
directionsPane = mapView.getDirec();
}
}
The output:
here
The displayed window:
window
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Firebase GET request not returning data to client We have an app with a firestore database, using firebase cloud functions. We are trying to get data about each user from an external API. Our firebase cloud function is returning data - I can see it correctly in the logs. However, I cannot see that data in the browser. I'm guessing maybe I'm not using async/await properly?
Here's how we're calling the function from our app (in Vuex):
async retrieveByExternalId({ commit }, payload) {
const retrieveByExternalId = await firebase.functions().httpsCallable('retrieveByExternalId')
retrieveByExternalId({
id: payload
})
.then(result => {
console.log(result.data)
commit('setUserContractorPayProfile', result.data)
})
},
Result.data shows as null
Then, here's the cloud function:
exports.retrieveByExternalId = functions.https.onCall(async (data, context) => {
const id = data.id
axios({
method: "GET",
url: `https://website/api/v2/workers/external/${id}`,
headers: {
accept: '*',
'Access-Control-Allow-Origin': '*',
Authorization: 'API KEY'
}
})
.then(response => {
functions.logger.log("Response", " => ", response.data);
return response.data
})
.catch((error) => {
functions.logger.log("Error", " => ", error);
})
});
In the functions log, I can see everything correctly.
Is it an async/await issue? Or am I returning data wrong?
Thanks!
A: I haven't tried your code but the problem is most probably due to the fact you don't return the promise chain in your Cloud Function.
You should either do:
return axios({ // <====== See return here
// ...
})
.then(response => {
functions.logger.log("Response", " => ", response.data);
return response.data
})
.catch((error) => {
functions.logger.log("Error", " => ", error);
})
or, since you declared the function async, use the await keyword as follows:
exports.retrieveByExternalId = functions.https.onCall(async (data, context) => {
try {
const id = data.id
const axiosResponse = await axios({
method: "GET",
url: `https://website/api/v2/workers/external/${id}`,
headers: {
accept: '*',
'Access-Control-Allow-Origin': '*',
Authorization: 'API KEY'
}
});
functions.logger.log("Response", " => ", axiosResponse.data);
return axiosResponse.data
} catch (error) {
// see https://firebase.google.com/docs/functions/callable#handle_errors
}
});
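The same pitfall exists in any async runtime. Here is a minimal Python/asyncio sketch (function names invented): the broken handler starts the work but never returns it, so the caller receives None:

```python
import asyncio

async def fetch():
    # stand-in for the axios GET call
    await asyncio.sleep(0)
    return {"name": "worker"}

async def handler_broken():
    fetch()  # coroutine created but never awaited/returned -> caller gets None

async def handler_fixed():
    return await fetch()

print(asyncio.run(handler_broken()))  # None
print(asyncio.run(handler_fixed()))   # {'name': 'worker'}
```

Python even prints a "coroutine ... was never awaited" warning for the broken variant; JavaScript silently resolves the outer promise with undefined, which is exactly the null seen by the client.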
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: In springboot project, use RequestContextHolder.getRequestAttributes().getRequest() to get the HttpServletRequest Object but it is null Because I need to get the HttpServletRequest object in a thread from a thread pool, I wrote code like this:
ServletRequestAttributes servletRequestAttributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
RequestContextHolder.setRequestAttributes(servletRequestAttributes,true);
// get the HttpServletRequest
ServletRequestAttributes attributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
HttpServletRequest request = attributes.getRequest();
The result is as expected in my local development environment: it can acquire the HttpServletRequest. But in the production environment, where many Java services run on multiple servers and we use nginx to forward the front-end requests, the above code behaves unexpectedly: when I get the HttpServletRequest object, it is always null.
Do you know why?
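One likely explanation: RequestContextHolder stores the attributes in a ThreadLocal, and a pool worker thread has its own empty slot unless the attributes are explicitly handed over. A minimal Python sketch of the same mechanism (names invented; threading.local plays the role of the ThreadLocal):

```python
import threading

holder = threading.local()      # analogous to RequestContextHolder's ThreadLocal
holder.request = "GET /orders"  # set on the "request" thread

seen = {}

def worker():
    # a pool/worker thread never populated its own slot, so it reads nothing
    seen["request"] = getattr(holder, "request", None)

t = threading.Thread(target=worker)
t.start()
t.join()
print(seen["request"])  # None -- the worker sees an empty slot
```

The usual remedy is to capture the attributes on the request thread and pass them to the task explicitly; setRequestAttributes(attrs, true) relies on an inheritable ThreadLocal, which does not help for pool threads created before the request.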
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Process Pipe Read&Write I am trying to use a two-way pipe to connect a parent and a child process, but something goes wrong.
Here is my code:
Parent
……
uint32_t v[2] = {0};
uint32_t k[4] = {1, 2, 3, 4};
uint32_t delta = 0x9E3779B9;
uint32_t sum;
// After receiving the data, apply XTEA encryption; sum is modified in the process
for (int j = 0; j < 8; j += 2) {
v[0] = data[j];
v[1] = data[j + 1];
sum = 0;
for (int i = 0; i < 32; ++i) {
v[0] += ((v[1] << 4 ^ v[1] >> 5) + v[1]) ^ (sum + k[sum & 3]);
sum += delta;
uint32_t val_net = htonl(sum);
write(fd[1], &val_net, sizeof(val_net));
uint32_t val_net_read;
read(fd1[0], &val_net_read, sizeof(val_net_read));
sum = ntohl(val_net_read);
v[1] += ((v[0] << 4 ^ v[0] >> 5) + v[0]) ^ (sum + k[sum >> 11 & 3]);
}
data[j] = v[0];
data[j + 1] = v[1];
}
……
Child
……
uint32_t val_net_read_1;
uint32_t val_net;
uint32_t sum;
for (int j = 0; j < 8; j += 2) {
for (int i = 0; i < 32; ++i) {
read(fd[0], &val_net_read_1, sizeof(val_net_read_1)); // read the integer data
sum = ntohl(val_net_read_1);
if (check == 0 || check > 0) {
sum ^= 0x12345678;
} else {
sum ^= 0x23456789;
}
uint32_t val_net = htonl(sum);
write(fd1[1], &val_net, sizeof(val_net)); // write the integer data
}
}
……
When I debugged it, I found the result was different from what I expected. For example, when the sum sent to the child process was 0x9E3779B9, the data received back from the child should have been 0x8C032FC1, but it wasn't.
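As a quick sanity check on the expected values: when check >= 0 the child only XORs the received word, so the first round-trip can be verified directly (a minimal sketch of just that transform, not of the pipe I/O):

```python
DELTA = 0x9E3779B9  # XTEA delta, the first value the parent sends
MASK = 0xFFFFFFFF   # keep results in uint32 range

def child_transform(value, check=0):
    # mirrors the child's branch: XOR with one of two constants
    key = 0x12345678 if check >= 0 else 0x23456789
    return (value ^ key) & MASK

print(hex(child_transform(DELTA)))  # 0x8c032fc1, the value the parent expects
```

Since the arithmetic checks out, the bug is more likely in the pipe plumbing itself, e.g. the reads and writes getting out of step between the two processes.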
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Setting custom input type for passing down controller from parent components to useFieldArray in child component in react-hook-form I am trying to create a shared component that accepts useForm props and uses them as part of useFieldArray. However, my intention is that useFieldArray will only accept a FormValues input type.
So far I have:
type FormValues = {
[x: string]: {
key: string,
value: string
}[]
}
interface Props {
prop?: boolean;
}
function PairInput<TFieldValues extends FormValues = FormValues,
TName extends ArrayPath<TFieldValues> = ArrayPath<TFieldValues>>({
prop,
control,
name,
rules,
shouldUnregister
}: Props & UseControllerProps<TFieldValues, TName>) {
const { fields, append, remove } = useFieldArray({
control,
name: name,
});
return (
{/** Controller component here *}
)
}
However, on the TName I have an error:
Type 'TName' does not satisfy the constraint 'Path<TFieldValues>'.
Type 'ArrayPath<TFieldValues>' is not assignable to type 'Path<TFieldValues>'.
How can I enforce that FormValues type on the useFieldArray fields parameter.
Edit:
I have changed the props of the component to:
export default function PairInput<TFieldValues extends FormValues = FormValues,
TFieldArrayName extends ArrayPath<TFieldValues> = ArrayPath<TFieldValues>, TKeyName extends string = "id">({
prop,
control,
name,
rules,
shouldUnregister
}: Props & UseFieldArrayProps<TFieldValues, TFieldArrayName, TKeyName>) {
const { fields, append, remove } = useFieldArray<TFieldValues>({
control,
name: name,
});
return (
<Controller control={control} ... />
)
}
However, now I am getting this error:
Type 'Control<TFieldValues>' is not assignable to type 'Control<FieldValues>'.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Button for staff to approve user identification I'm developing a bot for Discord, where new users identify themselves through a modal form and wait for the approval of a staff member before receiving a role. But I have not been able to make a button for staff to approve.
After the user identifies himself, his information is sent to a log channel for the staff to approve.
import discord
from discord.ext import commands
import discord.ui
from discord import ui
intents = discord.Intents.all()
intents.members = True
bot = commands.Bot(command_prefix="/", intents=intents, help_command=None)
@bot.event
async def on_ready():
print(f"Bot Ligado.")
bot.add_view(Verify())
#----------------------------------------------------
class Questionare(ui.Modal, title="Solicitar SET"):
qra = ui.TextInput(label="QRA:")
patente = ui.TextInput(label="PATENTE:", placeholder="Aluno / Agente")
async def on_submit(self, ctx, interaction: discord.Interaction):
user = interaction.user
guild = interaction.guild
role = guild.get_role(1073616684763578398)
if self.patente.value == "Aluno" or self.patente.value == "aluno":
await user.edit(nick="Aln. "+self.qra.value)
elif self.patente.value == "Agente" or self.patente.value == "agente":
await user.edit(nick="Agt. "+self.qra.value)
channel = interaction.guild.get_channel(1078375134479515708) # canal de logs
embed = discord.Embed( color=discord.Color.green())
embed.set_author(name="Novo Registro", icon_url="https://cdn-icons-png.flaticon.com/512/3804/3804348.png"),
embed.set_thumbnail(url=user.avatar),
embed.add_field(name="QRA: ", value=user.mention, inline=False)
embed.add_field(name="CARGO: ", value=self.patente , inline=False)
await channel.send(embed=embed,view = Aprovar())
await interaction.response.send_message("Sucesso! Seja Bem-Vindo.", ephemeral=True)
#----------------------------------------------------
class Verify(discord.ui.View):
def __init__(self):
super().__init__(timeout = None)
@discord.ui.button(label="Identifique-se", custom_id="Verify", style= discord.ButtonStyle.success)
async def verify(self, interaction: discord.Interaction, button: discord.ui.button):
await interaction.response.send_modal(Questionare())
class Aprovar(discord.ui.View):
def __init__(self):
super().__init__(timeout = None)
@discord.ui.button(label="Aprovar", custom_id="aprove", style= discord.ButtonStyle.success)
async def aprove(self, interaction: discord.Interaction, button: discord.ui.button):
??????
@bot.command()
async def iniciar(ctx):
embed = discord.Embed(
title="INVESTIGAÇÕES PCERJ",
description="Para ter acesso, identifique-se abaixo.",
colour= 14483456
)
embed.set_thumbnail(url="https://i.pinimg.com/originals/8d/a3/ad/8da3ad9575652bba957033720f43434f.png")
await ctx.send(embed = embed , view = Verify())
bot.run("TOKEN")
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: JAVA - Failed to download json-lib I am trying to install JMeter 5.5 and followed this article. I am at step 7, downloading the plugins using cmdrunner-2.3.jar, but I ran into an issue downloading json-lib. Please advise on how to resolve it.
This is the error message I'm getting when executing step 7:
2023-03-04 02:02:43,339 INFO o.j.r.JARSourceHTTP: Downloading: http://search.maven.org/remotecontent?filepath=net/sf/json-lib/json-lib/2.4/json-lib-2.4-jdk15.jar
2023-03-04 02:02:43,339 INFO o.j.r.PluginManagerCMD: Downloading json-lib...
2023-03-04 02:07:04,309 ERROR o.j.r.PluginManager: Failed to download json-lib
java.net.ConnectException: Connection timed out (Connection timed out)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:?]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) ~[?:?]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) ~[?:?]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) ~[?:?]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:?]
at java.net.Socket.connect(Socket.java:609) ~[?:?]
at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:121) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:326) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:605) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:440) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:835) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56) ~[httpclient-4.5.13.jar:4.5.13]
at org.jmeterplugins.repository.JARSourceHTTP.execute(JARSourceHTTP.java:499) ~[plugins-manager.jar:?]
at org.jmeterplugins.repository.JARSourceHTTP.getJAR(JARSourceHTTP.java:389) ~[plugins-manager.jar:?]
at org.jmeterplugins.repository.PluginManager.applyChanges(PluginManager.java:167) ~[plugins-manager.jar:?]
at org.jmeterplugins.repository.PluginManagerCMD.installAll(PluginManagerCMD.java:146) ~[plugins-manager.jar:?]
at org.jmeterplugins.repository.PluginManagerCMD.processParams(PluginManagerCMD.java:78) ~[plugins-manager.jar:?]
at kg.apc.cmdtools.PluginsCMD.processParams(PluginsCMD.java:62) ~[cmdrunner-2.3.jar:?]
at kg.apc.cmdtools.PluginsCMD.processParams(PluginsCMD.java:21) ~[cmdrunner-2.3.jar:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
at kg.apc.cmd.UniversalRunner.main(UniversalRunner.java:117) ~[cmdrunner-2.3.jar:?]
A: The problem is that the url it's trying to download from is timing out.
2023-03-04 02:02:43,339 INFO o.j.r.JARSourceHTTP: Downloading: http://search.maven.org/remotecontent?filepath=net/sf/json-lib/json-lib/2.4/json-lib-2.4-jdk15.jar
...
java.net.ConnectException: Connection timed out (Connection timed out)
pinging search.maven.org comes back with:
% ping search.maven.org -c 5
PING search-maven-org-prod-env.us-east-1.elasticbeanstalk.com (34.195.195.141): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
Request timeout for icmp_seq 3
--- search-maven-org-prod-env.us-east-1.elasticbeanstalk.com ping statistics ---
5 packets transmitted, 0 packets received, 100.0% packet loss
Obviously the url mentioned in step 7 is no longer valid, or it's temporarily down. Whatever the case, that's the problem, and you'll need to find an alternate download url for the file json-lib-2.4-jdk15.jar.
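Before changing the tooling, a quick TCP probe confirms whether a host is reachable from your machine at all (a minimal sketch; for the failing log you would probe search.maven.org on port 80/443, but the demo below uses a throwaway local listener so it is self-contained):

```python
import socket

def reachable(host, port, timeout=3.0):
    # returns True if a TCP connection to host:port succeeds within timeout
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate against a short-lived local listener.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
host, port = server.getsockname()
print(reachable(host, port))               # True while the listener is up
server.close()
print(reachable(host, port, timeout=1.0))  # False once it is gone
```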
Maybe try these steps instead (starting from step #2):
* curl -O https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.5.tgz
* tar -xvf apache-jmeter-5.5.tgz
* rm -rf apache-jmeter-5.5/docs apache-jmeter-5.5/printable_docs
* cd apache-jmeter-5.5/lib ; curl -O https://repo1.maven.org/maven2/kg/apc/cmdrunner/2.3/cmdrunner-2.3.jar
* cd ext/ ; curl -O https://repo1.maven.org/maven2/kg/apc/jmeter-plugins-manager/1.8/jmeter-plugins-manager-1.8.jar
* cd .. ; java -jar cmdrunner-2.3.jar --tool org.jmeterplugins.repository.PluginManagerCMD install-all-except jpgc-hadoop,jpgc-oauth,ulp-jmeter-autocorrelator-plugin,ulp-jmeter-videostreaming-plugin,ulp-jmeter-gwt-plugin,tilln-iso8583
* cd apache-jmeter-5.5/bin ; ./jmeter.sh --version
* cp -r apache-jmeter-5.5 /opt/
* nano .profile
Add Jmeter Home variable at the end of the line
JMETER_HOME="/opt/apache-jmeter-5.5"
Modify PATH variable at the end of the line
PATH="$JMETER_HOME/bin:$PATH"
Reload your bash profile
source ~/.profile
The steps above simply follow those of the article you linked, although swapping the older versions of the plugins manager and cmdrunner for the newest. It could simply be that the url (mentioned above) that cmdrunner is trying to install json-lib-2.4-jdk15.jar from is down.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Exception in thread "main" java.lang.NoSuchMethodError in Lenskit for Java I am researching with Lenskit for Java and want to load data from a CSV file. I am using TextEntitySource to load the source, but I get an error.
import java.io.File;
import java.io.IOException;
import org.lenskit.data.dao.file.TextEntitySource;
public class MainCTF {
public static void main(String[] args) throws IOException {
TextEntitySource entitySource = new TextEntitySource();
File file = new File("D:\\newdocres\\dataset\\item_feature_matrix.csv");
entitySource.setFile(file.toPath());
}
}
Exception in thread "main" java.lang.NoSuchMethodError: 'com.google.common.io.ByteSource org.grouplens.lenskit.util.io.LKFileUtils.byteSource(java.io.File, org.grouplens.lenskit.util.io.CompressionMode)'
at org.lenskit.data.dao.file.TextEntitySource.setFile(TextEntitySource.java:107)
at main.MainCTF.main(MainCTF.java:17)
So how can I load a CSV file, or use JDBC, in Lenskit for Java?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Error while initializing the union of struct member in c I have a struct like this -
struct ArrayAdv{
int size;
int length;
union{
struct{
int *A;
} dynamic;
struct{
int A[20];
} stat;
} item;
};
And I am getting an error when I am trying to initialize the array -
(screenshot of the compiler error)
I am trying to initialize the array like this -
struct ArrayAdv arrAdv;
arrAdv.item.stat.A = {33, 2, 7, 88, 35, 90, 102, 23, 81, 97};
And I am getting the error which I have mentioned; I want the array to be initialized correctly.
A: You're getting an error because you're not actually initializing, but assigning, and you can't assign to an array. An initialization happens at the time a variable is defined.
What you're looking for is:
struct ArrayAdv arrAdv =
{ .item = { .stat = { .A = { 33, 2, 7, 88, 35, 90, 102, 23, 81, 97 } } } };
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Most efficient way to update/create records in rails with slightly different records So I am uploading distribution lists, which contain a variety of item information, to an app I just started. For example, I have Excel sheets, OpenOffice sheets, and CSV files that will have rows containing data such as:
[(itemsource: "Moen", itemnumber: "S7170SRS", description: "Spot Resist Single Mount Kitchen Faucet", handles: "1")]
[(itemsource: "Delta", itemnumber: "DL02GD", description: "Faucet Cartridge", faucetstyle: "vessel")]
Let's assume I convert them all to csv. Then I can use:
file = File.open(file)
csv = CSV.parse(file, headers: true)
csv.each do |row|
Product.create!(row.to_hash.compact)
end
Which is called on my itemcontroller:
def import
file = params[:file]
return redirect_to items_path, notice: "No file selected." if file == nil
return redirect_to items_path, notice: "Import files must be of CSV format." unless file.content_type == "text/csv"
csvimport.new.call(file)
return redirect_to products_path, notice: "File has been successfully uploaded!"
end
Now, the issue I have is that any items that are already in my system will be ignored since the itemnumber is already taken. I need to devise a way to update when in the system, and create when it is new.
The one wrinkle I see is that, since the columns are not identical between the arrays/rows, I cannot just use
item.find_or_create_by!(row.to_hash.compact)
I was thinking of using conditionals with logic such as:
if itemnumber == row("itemnumber") && itemsource == row("itemsource") then use item.find_or_create_by!(row.to_hash.compact)
else use item.create!(row.to_hash.compact)
But this seems less efficient than using a single method that has already been created.
Hopefully this makes sense.
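The update-or-create logic can be sketched in plain Ruby (the CSV content and the [itemsource, itemnumber] key below are illustrative assumptions, not from the original post):

```ruby
require "csv"

data = <<~ROWS
  itemsource,itemnumber,description,handles
  Moen,S7170SRS,Spot Resist Single Mount Kitchen Faucet,1
  Moen,S7170SRS,Updated description,
  Delta,DL02GD,Faucet Cartridge,
ROWS

items = {}
CSV.parse(data, headers: true).each do |row|
  attrs = row.to_h.compact
  key = [attrs["itemsource"], attrs["itemnumber"]]
  # update when the key already exists, create when it is new;
  # merging the compacted hash leaves absent columns untouched
  items[key] = (items[key] || {}).merge(attrs)
end

puts items.size
```

In Rails the loop body would instead look something like item = Product.find_or_initialize_by(itemsource: attrs["itemsource"], itemnumber: attrs["itemnumber"]) followed by item.update!(attrs); the column names are taken from the question and may need adjusting.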
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Set one instance of a SlateJS editor in one page that contains many editors I manage many SlateJS editors in one page, but when I try to update one editor it updates only the last one and never the right one. How can we have many SlateJS instances in one page?
TextEditor:
<Slate
editor={editor}
value={initialValue}
onChange={value => onChangeCurrentNote(value)}
>
<Editable
id={props.id}
key={props.id}
style={{ minHeight: '5vh', padding: '2%' }}
onKeyDown={event => {
onKeyDown(event)
}}
renderElement={renderElement}
/>
</Slate>
How I call it:
{items.filter(item => !item.isDraft).map(function (item, i) {
    return <ItemSchemaUI key={i} item={item}></ItemSchemaUI>;
})}
Here is the ItemSchemaUI:
<Card.Body>
<Col md={{ span: 12 }} ref={messagesEndRef}>
<form>
<div onClick={() => dispatch(noteActions.setCurrentNote(props.item))}>
<TextEditor
isDraft={props.item.isDraft}
id={props.item.id}
initialValue={initialeValue}
/>
</div>
</form>
</Col>
</Card.Body>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: ROOM ORDER BY CASE: Where Can I define the optional parameter?
I would like to insert the Query in the DAO interface like below.
@Query("SELECT * FROM words WHERE childId = :childId ORDER BY " +
"CASE WHEN parameter = 0 THEN dateTime END DESC, " +
"CASE WHEN parameter = 1 THEN dateTime END ASC, " +
"CASE WHEN parameter = 2 THEN wordName END DESC, " +
"CASE WHEN parameter = 3 THEN wordName END ASC")
List<Word> findWord(int childId, int parameter);
But I can't figure out where and how to define the optional parameter used after CASE WHEN (below).
private int parameter;
How can I get it?
Thank you.
A: I believe you could use-
@Query("SELECT * FROM words WHERE childId = :childId ORDER BY " +
"CASE WHEN :parameter = 0 THEN dateTime END DESC, " +
"CASE WHEN :parameter = 1 THEN dateTime END ASC, " +
"CASE WHEN :parameter = 2 THEN wordName END DESC, " +
"CASE WHEN :parameter = 3 THEN wordName END ASC")
List<Word> findWord(int childId, int parameter);
i.e. : to indicate where the parameter value is to be used (bound). Room knows to bind the same value multiple times
Additional re the comment:-
I partially recant my comment: the query might run, but it is not what the OP intends. The OP's intended logic is to order by just one column in ascending or descending order. Your answer (and the OP's original query) are ordering by four columns.
Using slightly adapted Query (to not take into account the childId column (to simplify/reduce insertions to suit)) as per:-
@Query(
"SELECT * FROM words /*WHERE childId = :childId*/ ORDER BY " +
"CASE WHEN :parameter = 0 THEN dateTime END DESC, " +
"CASE WHEN :parameter = 1 THEN dateTime END ASC, " +
"CASE WHEN :parameter = 2 THEN wordName END DESC, " +
"CASE WHEN :parameter = 3 THEN wordName END ASC"
)
fun findWord(/*childId: Int,*/ parameter: Int): List<Words>
and then using the following code in an activity (allowMainThreadQueries for brevity):-
daoDB1.insert(Words(wordName = "C", datetime = (System.currentTimeMillis() / 1000) - (10 * onedayAsLong)))
daoDB1.insert(Words(wordName = "Z", datetime = (System.currentTimeMillis() / 1000) + (8 * onedayAsLong)))
daoDB1.insert(Words(wordName = "A", datetime = (System.currentTimeMillis() / 1000) - (7 * onedayAsLong)))
daoDB1.insert(Words(wordName = "B", datetime = (System.currentTimeMillis() / 1000) - (8 * onedayAsLong)))
daoDB1.insert(Words(wordName = "Y", datetime = (System.currentTimeMillis() / 1000) + (10 * onedayAsLong)))
for (w in daoDB1.findWord(0)) {
Log.d("DBINFO_RSLT1","Word is ${w.wordName} DateTime is ${w.datetime} ID is ${w.childId}")
}
for (w in daoDB1.findWord(1)) {
Log.d("DBINFO_RSLT2","Word is ${w.wordName} DateTime is ${w.datetime} ID is ${w.childId}")
}
for (w in daoDB1.findWord(2)) {
Log.d("DBINFO_RSLT3","Word is ${w.wordName} DateTime is ${w.datetime} ID is ${w.childId}")
}
for (w in daoDB1.findWord(3)) {
Log.d("DBINFO_RSLT4","Word is ${w.wordName} DateTime is ${w.datetime} ID is ${w.childId}")
}
for (w in daoDB1.findWord(99 /*>>>>>>>>>> OOOPS??? <<<<<<<<<<*/)) {
Log.d("DBINFO_RSLT5","Word is ${w.wordName} DateTime is ${w.datetime} ID is ${w.childId}")
}
Then the result in the log is:-
2023-03-04 15:42:56.237 D/DBINFO_RSLT1: Word is Y DateTime is 1686544976 ID is 5
2023-03-04 15:42:56.238 D/DBINFO_RSLT1: Word is Z DateTime is 1684816976 ID is 2
2023-03-04 15:42:56.238 D/DBINFO_RSLT1: Word is A DateTime is 1671856976 ID is 3
2023-03-04 15:42:56.238 D/DBINFO_RSLT1: Word is B DateTime is 1670992976 ID is 4
2023-03-04 15:42:56.238 D/DBINFO_RSLT1: Word is C DateTime is 1669264976 ID is 1
2023-03-04 15:42:56.243 D/DBINFO_RSLT2: Word is C DateTime is 1669264976 ID is 1
2023-03-04 15:42:56.243 D/DBINFO_RSLT2: Word is B DateTime is 1670992976 ID is 4
2023-03-04 15:42:56.243 D/DBINFO_RSLT2: Word is A DateTime is 1671856976 ID is 3
2023-03-04 15:42:56.243 D/DBINFO_RSLT2: Word is Z DateTime is 1684816976 ID is 2
2023-03-04 15:42:56.243 D/DBINFO_RSLT2: Word is Y DateTime is 1686544976 ID is 5
2023-03-04 15:42:56.245 D/DBINFO_RSLT3: Word is Z DateTime is 1684816976 ID is 2
2023-03-04 15:42:56.246 D/DBINFO_RSLT3: Word is Y DateTime is 1686544976 ID is 5
2023-03-04 15:42:56.246 D/DBINFO_RSLT3: Word is C DateTime is 1669264976 ID is 1
2023-03-04 15:42:56.246 D/DBINFO_RSLT3: Word is B DateTime is 1670992976 ID is 4
2023-03-04 15:42:56.246 D/DBINFO_RSLT3: Word is A DateTime is 1671856976 ID is 3
2023-03-04 15:42:56.249 D/DBINFO_RSLT4: Word is A DateTime is 1671856976 ID is 3
2023-03-04 15:42:56.249 D/DBINFO_RSLT4: Word is B DateTime is 1670992976 ID is 4
2023-03-04 15:42:56.249 D/DBINFO_RSLT4: Word is C DateTime is 1669264976 ID is 1
2023-03-04 15:42:56.249 D/DBINFO_RSLT4: Word is Y DateTime is 1686544976 ID is 5
2023-03-04 15:42:56.250 D/DBINFO_RSLT4: Word is Z DateTime is 1684816976 ID is 2
2023-03-04 15:42:56.251 D/DBINFO_RSLT5: Word is C DateTime is 1669264976 ID is 1
2023-03-04 15:42:56.251 D/DBINFO_RSLT5: Word is Z DateTime is 1684816976 ID is 2
2023-03-04 15:42:56.251 D/DBINFO_RSLT5: Word is A DateTime is 1671856976 ID is 3
2023-03-04 15:42:56.251 D/DBINFO_RSLT5: Word is B DateTime is 1670992976 ID is 4
2023-03-04 15:42:56.251 D/DBINFO_RSLT5: Word is Y DateTime is 1686544976 ID is 5
i.e. all appear to be sorted as probably expected (the last set, RSLT5, purposefully not meeting any of the CASE conditions).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I get the result data back synchronously from a promise in javascript? I am working on some javascript code for an extension I'm attempting to write for a platform called Cockpit. In their API documentation they provide the following example:
cockpit.file("/path/to/file").read()
.then((content, tag) => {
...
})
.catch(error => {
...
});
OK, great. This works fine when I have an actual callback function I want to call, but I'm currently trying to set up a file read in a synchronous way so I can set up my initial class state, and this is driving me insane. I have been fighting with this and scouring for solutions for a few hours; nothing is making any sense, and I can't figure any of this out.
I understand that there's supposed to be some way to use an async function definition or an await, or something, but everything I try results in my data being a promise of some kind.
This is what my current attempt looks like:
const versionObj = async () => {
return await cockpit.file("/path/to/version.txt").read()
.then((content) => { return { version: content || "Unknown" } });
};
console.log(versionObj());
I know that this code is wrong, I just don't know how I'm supposed to be doing it and I'm at my wit's end. Can anyone please help me.
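For reference, a promise's value can't be pulled out synchronously; any code that needs the result has to await (or .then) it as well. A minimal self-contained sketch (readFile below is a stand-in assumption for cockpit.file(...).read(), which is not available outside Cockpit):

```javascript
// Stand-in for cockpit.file("/path/to/version.txt").read()
const readFile = () => Promise.resolve("1.2.3");

const versionObj = async () => {
  const content = await readFile();
  return { version: content || "Unknown" };
};

// Calling versionObj() returns a Promise, so the caller awaits it too:
(async () => {
  const obj = await versionObj();
  console.log(obj.version); // logs "1.2.3"
})();
```

The key point is that console.log(versionObj()) logs a pending Promise; the resolved object only exists inside an async context (or a .then callback), so the class-state setup has to move into one.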
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Script RDP Server Port change I am very new and only an amateur; the only scripting I have done before is creating a terminal server.
My company has a problem. The antimalware service is blocking brute-force attacks against RDP, and almost every server gets notifications about blocked inbound connections; svchost.exe blocks from unknown IPs, etc.
This is all very new to me.
After reading through solutions for an hour, I think changing port 3389 for RDP is the fastest fix and resolves 90% of the problem. (Ports 445 and 5985 are getting inbound blocks from the antivirus too, via "svchost.exe".)
I already use a VPN on every server. The next thing I want to ask for is a script where I only need to put the allowed IPs inside the code.
Write-host "NEWPORT " -ForegroundColor Yellow -NoNewline
$RDPPort = Read-Host
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-TCP\" -Name PortNumber -Value $RDPPort
New-NetFirewallRule -DisplayName "NewRDPPort-TCP-In-$RDPPort" -Direction Inbound -LocalPort $RDPPort -Protocol TCP -Action Allow
New-NetFirewallRule -DisplayName "NewRDPPort-UDP-In-$RDPPort" -Direction Inbound -LocalPort $RDPPort -Protocol UDP -Action Allow
Restart-Service termservice -force
Write-host "NEWPORT " -ForegroundColor Magenta
Is the code correct and i dont kill every server or programs cant run normal anymore? Thank you for any response!
I did change the port manually, and the brute-force attacks stopped for that server. Then I allowed an inbound connection for the new random port and
A: Your code looks correct for changing the RDP port (but I've not tested it) - however this is a classic example of "security by obscurity" - it's an improvement but not really a solution.
Yes, this approach will reduce the number of connection attempts, as there are lots of scripts/scrapers/scanners out there that only look for default ports, but it's not hard to perform a port scan and locate your RDP port if it's still exposed to the internet, so this won't solve the problem entirely.
A better solution is to restrict which IPs can attempt an RDP connection in the first place - though this becomes more complicated and is dependent on your skills and available resources to implement (and there's a risk of locking yourself out of your servers!). You mention VPN but don't describe its setup - it's certainly possible to route RDP traffic over a VPN and avoid exposing it to the internet, which would be a better solution. Another common solution is to use a "jump host" - a Linux box that you make an SSH connection to and port-forward your RDP connections through. Your Windows servers then only need to allow the jump host access to RDP, rather than the internet in general.
Also be aware Windows could have numerous other ports open, like RPC and SMB, which are also exploitable and run on known ports (I would suggest you conduct a port scan of your server yourself to see what is open).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Visual diagram tool suggestions in .Net core web application I am in need of a suggestion for either a third-party or open-source tool that can be integrated with a .NET Core web application to build a tool for drawing visual diagrams.
The tool should look similar to Visio, but it needs to be developed for internal purposes.
Can anyone suggest such a drawing tool that I can integrate with my web application and use by dragging and dropping?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do i make discordpy check when a user got a specific role? I'm trying to make a bot that removes a user's role a certain amount of time after it has been added, but I'm having some issues with it
I tried asking ChatGPT, it told me to do it like this
time_left = role.created_at.replace(tzinfo=None) + timedelta(days=role_length) - now
I expected it to check when a user got the role and tell the user the right time, but it checks when the role was made and gives the user a wrong time. How do I fix this?
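The underlying issue is that role.created_at is the time the role itself was created, and discord.py does not record when a member received a role. The usual approach (a sketch with a hypothetical in-memory store; a real bot would persist this in a database) is to save the grant time yourself when the role is added, then compute the remaining time from that:

```python
from datetime import datetime, timedelta, timezone

# (member_id, role_id) -> time the role was granted (hypothetical store)
role_grants = {}

def record_grant(member_id: int, role_id: int) -> None:
    """Call this at the moment the bot adds the role to the member."""
    role_grants[(member_id, role_id)] = datetime.now(timezone.utc)

def time_left(member_id: int, role_id: int, role_length_days: int) -> timedelta:
    """Time remaining before the role should be removed."""
    granted = role_grants[(member_id, role_id)]
    return granted + timedelta(days=role_length_days) - datetime.now(timezone.utc)
```

In the bot, record_grant would run right after member.add_roles(...), and a background loop could remove the role once time_left drops to zero or below.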
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Setup webpack on my basic JS project NO framework I'm trying to set up webpack on my JS project, but it keeps throwing errors
> code@1.0.0 build
> webpack
'Workflows\Code\node_modules\.bin\' is not recognized as an internal or external command,
operable program or batch file.
node:internal/modules/cjs/loader:942
throw err;
^
Error: Cannot find module 'E:\KalBonyanAlmarsos\04 - JavaScript\webpack\bin\webpack.js'
at Module._resolveFilename (node:internal/modules/cjs/loader:939:15)
at Module._load (node:internal/modules/cjs/loader:780:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:77:12)
at node:internal/main/run_main_module:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
Node.js v18.2.0
File structure & webpack config file (screenshot)
Did you face this before?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Is it better to use RouterLink or router.push for a button? I am using Vue Router 4 with the Vue 3 Composition API, and I want to make a back button. I want to know which is the more optimal and better practice for redirecting: a RouterLink or the router with the push method.
RouterLink
<RouterLink :to="{ name: 'clients-index' }">
<AppButton text="Volver" icons="fa-chevron-left" />
</RouterLink>
router.push
<AppButton text="Volver" icons="fa-chevron-left" @click="() => router.push({ name: 'clients' })" />
A: It is clearly written in the documentation that clicking <router-link :to="..."> is the equivalent of calling router.push(...).
With router-link we create anchor tags for declarative navigation, and the same job can be done programmatically using the router instance's push method. To navigate to a different URL we use router.push, and this is the same method called internally when you click a <router-link>.
| Declarative | Programmatic |
|---|---|
| `<router-link :to="...">` | `router.push(...)` |
So use either the declarative or the programmatic approach; both are equivalent.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: error C2451: a conditional expression of type 'main::' is not valid I'm new to programming and I'm trying to review the C++ Primer book, but while checking the lambda part I get a compile error in Visual Studio 2022.
#include <algorithm>
#include <iostream>
using namespace std;
int main()
{
int i = 10;
if ([&]()->bool { while(i) --i; return true; })
cout << "the captured variable is 0" << endl;
return 0;
}
Appreciate your enlightenment...
A: The lambda returns a bool value when called. But the code is not calling the lambda at all. It is instead trying to evaluate the lambda object itself in a boolean context, which is not a valid operation since a lambda's generated type does not define a conversion of a lambda object to a bool.
You need to actually call the lambda first (ie, invoke the lambda type's function call operator) by placing () after the lambda object, and then evaluate its return value afterwards, eg:
int main()
{
int i = 10;
if ([&]()->bool { while(i) --i; return true; }())
// ^ add this !
cout << "the captured variable is 0" << endl;
return 0;
}
Coding a lambda inside an if statement is not a good idea, though. It is legal, but it makes the code harder to read and understand. It would be better to assign the lambda to a variable first, and then call it afterwards, eg:
int main()
{
int i = 10;
auto lambda = [&]() -> bool { while(i) --i; return true; };
if (lambda())
cout << "the captured variable is 0" << endl;
return 0;
}
A: if ([&]()->bool { while(i) --i; return true; })
The lambda is not called there. Try
if ([&]()->bool { while(i) --i; return true; }())
It might be a misprint in the book. I can't find it on C++ Primer Errata.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633322",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Why is my Discord.py bot's command_prefix not updating after I update MySQL unless I restart the bot? I'm trying to use a MySQL database to store prefixes for different servers. When I use a command to update the prefix in the DB it does work as I can see the change in MySQL but when I try to use the new prefix it seems that command_prefix is not updating since the new prefix doesn't work, UNLESS I restart the bot. Please help? Here's the code below:
main.py:
db = mysql.connector.connect(
host='localhost',
user='root',
password='',
database='nashudb'
)
cursor = db.cursor(dictionary=True)
def get_server_prefix(client, message):
cursor.execute(f"SELECT PREFIX from prefixes where ID = {message.guild.id}")
rows = cursor.fetchall()
for row in rows:
return row["PREFIX"]
client = commands.Bot(command_prefix=get_server_prefix, intents=discord.Intents.all())
set prefix command:
@commands.hybrid_command(name="prefix", description="Change prefix Nashu uses!")
async def prefix(self, ctx, *, newprefix: str):
sql = "UPDATE prefixes SET PREFIX = %s WHERE ID = %s"
val = (newprefix, str(ctx.guild.id))
cursor.execute(sql, val)
db.commit()
await ctx.send("Prefix changed! :3")
I've tried using the cursor.fetchone() method instead, since I'm only trying to get one row (I'm new to SQL and not really sure), but that didn't work either. I used a JSON file before and it worked fine. Thanks in advance!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Adjust Axis Scales of Histograms In RStudio, I have a group of histograms with three columns and two rows for each 'type': x, y, z, -x, -y, and -z. Currently, the scales fit each individual graph type. The problem is that some types have a max bin of, say, 30, where others have a max bin of 5, so you cannot visually compare the graphs together.
What I need is to match the histogram scales of the y axis (count) and x axis (load) for types x, y, -x and -y so they are equally comparable. I would also like to match the z and -z type scales.
Is this possible?
I might also be happy if I could get all 6 axis scales be the same.
My guess is to somehow find each type's max bin value, then find the max bin value among all the types' maxes and use that to set the y axis. Then for the x scale, do the same but just find the max 'value' for each type.
This is the code I have that works so far:
library(tidyverse)
library(dplyr)
library(ggplot2)
#Sample data set
max_values <- data.frame(
type = c("x","x","x","x","y","y","y","y","z","z","z","z","z","-x","-x","-x","-x","-y","-y","-y","-y","-z","-z","-z","-z","-z"),
values = c(0.5,1,2,0.3,1,2,3,4,0.5,0.4,0.3,0.2,2,5,5.1,6,4.9,3,10,9.5,9.1,9,9.7,8)
)
# create a list of type values
type_list <- c("x", "y", "z", "-x", "-y", "-z")
# create an empty list to store the plots
plot_list <- list()
# loop through the type values and create a plot for each type
for (i in type_list) {
plot_data <- max_values %>% filter(type == i, scenario == 1)
if(nrow(plot_data) > 0) { # add a check to make sure the data is not empty
plot <- ggplot(plot_data, aes(value)) +
geom_histogram(colour= 1, fill = "white", binwidth=0.5) +
ggtitle(paste("Histogram |", toupper(i), "Max values | Scenario 1")) +
xlab("Load (N) [Bin top 0.5]") +
ylab("Count (@500 samples per sec)")
# add the plot to the plot_list
plot_list[[i]] <- plot
}
}
# display the plots
gridExtra::grid.arrange(
plot_list[["x"]],
plot_list[["y"]],
plot_list[["z"]],
plot_list[["-x"]],
plot_list[["-y"]],
plot_list[["-z"]],
ncol = 3
)
Thanks!
I tried adding things in the loop, outside of the loop, lots of stuff; nothing seemed to work correctly.
Here is something I tried that almost worked, but not quite:
# loop through the type values and create a plot for each type
for (i in type_list) {
plot_data <- max_values %>%
filter(type == i, scenario == 1)
if(nrow(plot_data) > 0 && !all(is.na(plot_data$value))) { # check for non-missing values
# create bins of width 0.5 based on the values in the `value` column
plot_data$binwidth <- cut(plot_data$value, breaks = seq(0, max(plot_data$value), by = 0.5), include.lowest = TRUE)
# add a group column based on the bin width
plot_data$group <- as.factor(match(plot_data$binwidth, unique(plot_data$binwidth)))
# get the max bin width
max_binwidth <- max(as.numeric(plot_data$binwidth))
# create the plot
plot <- ggplot(plot_data, aes(value)) +
geom_histogram(colour=1, fill="white", binwidth=0.5) +
ggtitle(paste("Histogram |", toupper(i), "Max values | Scenario 1")) +
xlab("Load (N) [Bin top 0.5]") +
ylab("Count (@500 samples per sec)") +
facet_wrap(~ group, scales = "free_y") + # group the histograms by the bin width
scale_x_continuous(limits = c(0, max_binwidth), expand = c(0,0)) + # set the x axis limits
scale_y_continuous(limits = c(0, NA), expand = c(0,0)) # set the y axis limits
# add the plot to the plot_list
plot_list[[i]] <- plot
}
}
A: The easiest approach to achieve your desired result would be to simply use faceting instead of creating individual plots. Doing so, you will automatically get identical scales for each of your types:
library(tidyverse)
library(patchwork)
max_values <- data.frame(
type = c("x", "x", "x", "x", "y", "y", "y", "y", "z", "z", "z", "z", "z", "-x", "-x", "-x", "-x", "-y", "-y", "-y", "-y", "-z", "-z", "-z"),
value = c(0.5, 1, 2, 0.3, 1, 2, 3, 4, 0.5, 0.4, 0.3, 0.2, 2, 5, 5.1, 6, 4.9, 3, 10, 9.5, 9.1, 9, 9.7, 8),
scenario = 1
)
type_list <- c("x", "y", "z", "-x", "-y", "-z")
max_values$type <- factor(max_values$type, type_list)
# Facetting
ggplot(max_values, aes(value)) +
geom_histogram(colour = 1, fill = "white", binwidth = 0.5) +
facet_wrap(~type, labeller = as_labeller(function(x) paste("Histogram |", toupper(x), "Max values | Scenario 1"))) +
xlab("Load (N) [Bin top 0.5]") +
ylab("Count (@500 samples per sec)")
To achieve the same result with individual plots requires some more effort. Basically you were on the right track in computing the x and y ranges. However, in the case of geom_histogram, getting the ranges requires taking the binning into account both when computing the ranges and when setting the limits.
Note: Instead of using a for loop I would suggest to use lapply which in general works better with ggplot2. To this end I also use a custom plotting function. Additionally I switched to patchwork to glue the plots.
# Plot Function with fixed limits
plot_fun <- function(i, binwidth = .5) {
# Take account of the binning
xlim <- range(plyr::round_any(max_values$value, binwidth / 2, floor))
ylim <- cut_width(max_values$value, width = binwidth) |>
table(max_values$type) |>
range()
plot_data <- max_values %>% filter(type == i, scenario == 1)
ggplot(plot_data, aes(value)) +
geom_histogram(colour = 1, fill = "white", binwidth = binwidth) +
# Take account of the binning
scale_x_continuous(limits = xlim + binwidth / 2 * c(-1, 1)) +
scale_y_continuous(limits = ylim) +
labs(
title = paste("Histogram |", toupper(i), "Max values | Scenario 1"),
x = "Load (N) [Bin top 0.5]",
y = "Count (@500 samples per sec)"
)
}
plot_list <- lapply(type_list, plot_fun)
wrap_plots(plot_list, ncol = 3)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to make Google oauth work in Python Quickstart I am trying to run a Python script that accesses Google apps using Google's Python Quickstart on Python 3.10 on Windows 11 (originally installed as W10). The early steps are: enable the API, Create OAuth client ID, install the Google Client Library with Pip, and run some sample code that is available on the page.
I do all that and I get the Google account chooser (screenshot omitted).
However when I click on my user id I get an "Access blocked" dialog (screenshot omitted).
This happens up to 30 hours after I create the credential.
From that dialog I am directed to error details:
Error 403: access_denied
Request details: access_type=offline o2v=1 service=lso response_type=code redirect_uri=http://localhost:54834/ state=GFDKQ4HClD7WXqUE1N3rkrnumkpo1Z flowName=GeneralOAuthFlow client_id=956839549735-cr7gen4vnsp1tva07c8lh656j8dovpbj.apps.googleusercontent.com scope=https://www.googleapis.com/auth/spreadsheets.readonly
According to Google's Troubleshoot authentication & authorization issues I should be able to "Advanced > Go to {Project Name} (unsafe)." but no such entry appears on the Access Blocked or Error Details dialogs.
What can I do to get past this?
Is it not recognizing me as the developer? If so, how do I remedy that?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: I get an empty device list when using PcapPlusPlus At the moment, I am interested in networking and, in order to learn more, I decided to use the PcapPlusPlus library to capture packets on my local network. My goal is to understand how packets are structured, and the position and content of the different headers (Ethernet, IP, etc.).
I compile in C++20 using CMake, with vcpkg to manage my dependencies, and I develop in the Visual Studio 2022 IDE on Windows. I'm able to compile and use the library. However, I get an empty list when I try to retrieve the different interfaces from my computer.
Here is the code I am using to list the interfaces :
#include <iostream>
#include <PcapLiveDeviceList.h>
int main()
{
const std::vector<pcpp::PcapLiveDevice*>& devList = pcpp::PcapLiveDeviceList::getInstance().getPcapLiveDevicesList();
if (devList.empty())
{
std::cout << "Empty device list" << std::endl;
return 1;
}
std::cout << "Network interfaces:" << std::endl;
for (std::vector<pcpp::PcapLiveDevice*>::const_iterator iter = devList.begin(); iter != devList.end(); iter++)
{
std::cout << " -> Name: '" << (*iter)->getName() << "' IP address: " << (*iter)->getIPv4Address().toString() << std::endl;
}
return 0;
}
Here is the CMakeLists:
cmake_minimum_required (VERSION 3.8)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
project ("test_pcap")
file(GLOB SOURCES *.cpp)
find_package(unofficial-pcapplusplus CONFIG REQUIRED)
add_executable (${PROJECT_NAME} ${SOURCES})
target_link_libraries(${PROJECT_NAME} PRIVATE
unofficial::pcapplusplus::pcappp
unofficial::pcapplusplus::packetpp
unofficial::pcapplusplus::commonpp
)
target_link_libraries(${PROJECT_NAME} PRIVATE ws2_32)
In order to fix the problem:
*
*I tried to run the program with administrator privileges without success.
*I tried to launch the program on another computer, again without success.
And finally, I ran an example application provided by PcapPlusPlus (Arping), which lists the interfaces, and it worked.
I think I must be missing a dependency, but as I have no error, I have no idea how to solve my problem.
A: It's possible that your program is not able to access the network interfaces because it is running in a sandboxed environment that doesn't have the necessary permissions. One thing you could try is to explicitly specify the network interface you want to capture packets on, using the PcapLiveDeviceList::getInstance().getPcapLiveDeviceByName() method.
const std::string interfaceName = "eth0"; // replace with the name of the interface you want to capture on
pcpp::PcapLiveDevice* dev = pcpp::PcapLiveDeviceList::getInstance().getPcapLiveDeviceByName(interfaceName);
if (dev == nullptr)
{
std::cout << "Failed to open interface '" << interfaceName << "'" << std::endl;
return 1;
}
// capture packets using the 'dev' object
If this works, it means that the issue was with the library not being able to access the network interfaces automatically, possibly due to lack of permissions.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to get the creation date of a table on Firebird 2.5 I'm using Firebird 2.5 and need to get the date of creation of a table.
SELECT DISTINCT RDB$RELATION_NAME
FROM RDB$RELATION_FIELDS
WHERE RDB$SYSTEM_FLAG=0;
I've tried this system table, but there isn't any date field.
A: You can't get the creation date of a table as this information is not recorded anywhere.
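If you need creation dates going forward, one workaround is to record them yourself. This is only a sketch: Firebird 2.5 has no DDL triggers (those arrived in Firebird 3), so the insert has to be done manually in your deploy scripts, and the DDL_LOG table and column names here are invented for illustration:

```sql
-- Hypothetical log table: record table creation dates yourself,
-- since Firebird 2.5's system tables don't store them.
CREATE TABLE DDL_LOG (
  RELATION_NAME VARCHAR(31) NOT NULL,
  CREATED_AT    TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- After each CREATE TABLE in a deploy script:
INSERT INTO DDL_LOG (RELATION_NAME) VALUES ('MY_TABLE');
```

This only covers tables created after the log table exists; dates of existing tables are not recoverable.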
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Beautiful Soup - Get all text, and preserve link html? I am parsing multiple HTML pages using beautiful soup. Most of the scenarios work great. I want to include text along with the URL for links.
The current syntax is:
soup = MyBeautifulSoup(''.join(body), 'html.parser')
body_text = self.remove_newlines(soup.get_text())
I found an online recommendation to override the _all_strings function:
class MyBeautifulSoup(BeautifulSoup):
def _all_strings(self, strip=False, types=(NavigableString, CData)):
for descendant in self.descendants:
# return "a" string representation if we encounter it
if isinstance(descendant, Tag) and descendant.name == 'a':
yield str('<{}> '.format(descendant.get('href', '')))
# skip an inner text node inside "a"
if isinstance(descendant, NavigableString) and descendant.parent.name == 'a':
continue
# default behavior
if (
(types is None and not isinstance(descendant, NavigableString))
or
(types is not None and type(descendant) not in types)):
continue
if strip:
descendant = descendant.strip()
if len(descendant) == 0:
continue
yield descendant
However, this gives a runtime error:
in _all_strings
(types is not None and type(descendant) not in types)):
TypeError: argument of type 'object' is not iterable
Is there a way around this error?
Thanks!
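For what it's worth, a sketch of one possible workaround (this assumes the cause is the one suggested by the traceback): recent Beautiful Soup versions pass a bare sentinel object as the default for `types` when `get_text()` delegates to `_all_strings`, so the membership test `type(descendant) not in types` fails with "argument of type 'object' is not iterable". Coercing `types` back to an explicit tuple before the test avoids the error:

```python
from bs4 import BeautifulSoup
from bs4.element import CData, NavigableString, Tag


class MyBeautifulSoup(BeautifulSoup):
    def _all_strings(self, strip=False, types=(NavigableString, CData)):
        # Newer bs4 releases pass a bare sentinel object as the default for
        # `types`, which breaks `type(descendant) not in types`. Coerce any
        # non-iterable value back to an explicit tuple first.
        if types is not None and not isinstance(types, (tuple, list)):
            types = (NavigableString, CData)
        for descendant in self.descendants:
            # yield the URL when we encounter an <a> tag
            if isinstance(descendant, Tag) and descendant.name == 'a':
                yield '<{}> '.format(descendant.get('href', ''))
            # skip the inner text node of the <a> itself
            if isinstance(descendant, NavigableString) and descendant.parent.name == 'a':
                continue
            # default behavior
            if (
                (types is None and not isinstance(descendant, NavigableString))
                or (types is not None and type(descendant) not in types)
            ):
                continue
            if strip:
                descendant = descendant.strip()
                if len(descendant) == 0:
                    continue
            yield descendant
```

With this in place, `soup.get_text()` should replace link text with the `href` wrapped in angle brackets regardless of which default `get_text()` passes in.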
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Editing row labels in {gt} in R I have created a gt table using the following code:
bovids.summary.carb <-
bovids %>%
mutate(across(Member, factor, levels = c("UpperBurgi", "KBS", "Okote"))) %>%
group_by(Dep_context, Member, Tribe) %>%
summarize(
mean_d13C = mean(d.13.C, na.rm = TRUE),
median_d13C = median(d.13.C, na.rm = TRUE),
range_d13C = max(d.13.C, na.rm = TRUE) - min(d.13.C, na.rm = TRUE))
#table
gt(bovids.summary.carb) %>%
tab_header(title = "Mean, median, and ranges of d.13.C values by Tribe across Depositional Environment and Member")%>%
fmt_number(columns = vars(mean_d13C, median_d13C, range_d13C), decimals = 2)%>%
sub_missing(missing_text = "--")%>%
cols_label(mean_d13C = "Mean", median_d13C = "Median", range_d13C = "Range") %>%
tab_style(style = cell_text(weight = "bold"),
locations = cells_body(columns = "Tribe"))
And I get an output like this:
I've played around with some other tab_style calls, trying to get the Deltaic - Upper Burgi row labels (and equivalents) to be bold instead of the Tribe names. Any suggestions?
Here is a subset of the data:
> dput(bovids)
structure(list(CA = c("41", "131", "131", "131", "131", "131",
"131", "131", "131", "131", "131", "131", "131", "1A", "1A",
"1A", "1A", "1A", "1A", "1A", "1A", "1A", "1A", "1", "1", "1",
"1", "1", "1", "1", "1", "1", "1", "3", "3", "3", "3", "6", "6",
"6", "6", "8", "8", "11", "1A", "1A", "1A", "6/6A", "6/6A", "6/6A",
"6A", "8A", "8A", "8A", "8A", "8A", "8A", "8A", "8A", "8A", "8A",
"8A", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41",
"41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41",
"41", "105", "105", "41", "105", "105", "105", "105", "105",
"105", "105", "105", "105", "8", "103", "131", "131", "100",
"123", "6"), Member = c("Okote", "Okote", "Okote", "Okote", "Okote",
"Okote", "Okote", "Okote", "Okote", "Okote", "Okote", "Okote",
"Okote", "Okote", "Okote", "Okote", "Okote", "Okote", "Okote",
"Okote", "Okote", "Okote", "Okote", "Okote", "Okote", "Okote",
"Okote", "Okote", "Okote", "Okote", "Okote", "Okote", "Okote",
"Okote", "Okote", "Okote", "Okote", "Okote", "Okote", "Okote",
"Okote", "Okote", "Okote", "Okote", "Okote", "Okote", "Okote",
"Okote", "Okote", "Okote", "Okote", "Okote", "Okote", "Okote",
"Okote", "Okote", "Okote", "Okote", "Okote", "Okote", "Okote",
"Okote", "UpperBurgi", "UpperBurgi", "UpperBurgi", "UpperBurgi",
"UpperBurgi", "UpperBurgi", "UpperBurgi", "UpperBurgi", "UpperBurgi",
"UpperBurgi", "UpperBurgi", "UpperBurgi", "UpperBurgi", "UpperBurgi",
"UpperBurgi", "UpperBurgi", "UpperBurgi", "UpperBurgi", "UpperBurgi",
"UpperBurgi", "UpperBurgi", "UpperBurgi", "KBS", "KBS", "UpperBurgi",
"KBS", "KBS", "KBS", "KBS", "KBS", "KBS", "KBS", "KBS", "KBS",
"KBS", "KBS", "KBS", "UpperBurgi", "UpperBurgi", "KBS", "KBS"
), Dep_context = c("Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ",
"Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ",
"Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ",
"Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ",
"Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ",
"Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ",
"Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ",
"Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ",
"Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ",
"Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ",
"Fluvial ", "Fluvial ", "Fluvial ", "Fluvial ", "Deltaic", "Deltaic",
"Deltaic", "Deltaic", "Deltaic", "Deltaic", "Deltaic", "Deltaic",
"Deltaic", "Deltaic", "Deltaic", "Deltaic", "Deltaic", "Deltaic",
"Deltaic", "Deltaic", "Deltaic", "Deltaic", "Deltaic", "Deltaic",
"Deltaic", "Deltaic", "Deltaic", "Deltaic", "Deltaic", "Deltaic",
"Deltaic", "Deltaic", "Deltaic", "Deltaic", "Deltaic", "Deltaic",
"Deltaic", "Deltaic", "Fluvial ", "Fluvial ", "Deltaic", "Deltaic",
"Deltaic", "Lacustrine", "Fluvial "), Family = c("Bovidae", "Bovidae",
"Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae",
"Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae",
"Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae",
"Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae",
"Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae",
"Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae",
"Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae",
"Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae",
"Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae",
"Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae",
"Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae",
"Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae",
"Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae",
"Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae",
"Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae",
"Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae",
"Bovidae", "Bovidae", "Bovidae", "Bovidae", "Bovidae"), Tribe = c("Alcelaphini",
"Alcelaphini", "Alcelaphini", "Alcelaphini", "Bovini", "Tragelaphini",
"Tragelaphini", "Tragelaphini", "Tragelaphini", "Tragelaphini",
"Tragelaphini", "Tragelaphini", "Tragelaphini", "Reduncini",
"Reduncini", "Reduncini", "Reduncini", "Reduncini", "Reduncini",
"Reduncini", "Reduncini", "Reduncini", "Reduncini", "Reduncini",
"Reduncini", "Reduncini", "Reduncini", "Reduncini", "Reduncini",
"Tragelaphini", "Tragelaphini", "Tragelaphini", "Tragelaphini",
"Alcelaphini", "Bovini", "Reduncini", "Reduncini", "Alcelaphini",
"Alcelaphini", "Reduncini", "Tragelaphini", "Alcelaphini", "Alcelaphini",
"Reduncini", "Alcelaphini", "Alcelaphini", "Antilopini", "Alcelaphini",
"Alcelaphini", "Alcelaphini", "Reduncini", "Aepycerotini", "Aepycerotini",
"Alcelaphini", "Alcelaphini", "Alcelaphini", "Reduncini", "Reduncini",
"Reduncini", "Reduncini", "Reduncini", "Tragelaphini", "Aepycerotini",
"Aepycerotini", "Aepycerotini", "Aepycerotini", "Alcelaphini",
"Alcelaphini", "Alcelaphini", "Alcelaphini", "Alcelaphini", "Alcelaphini",
"Alcelaphini", "Alcelaphini", "Reduncini", "Reduncini", "Reduncini",
"Reduncini", "Reduncini", "Alcelaphini", "Alcelaphini", "Alcelaphini",
"Alcelaphini", "Alcelaphini", "Alcelaphini", "Antilopini", "Reduncini",
"Reduncini", "Reduncini", "Reduncini", "Reduncini", "Reduncini",
"Reduncini", "Reduncini", "Reduncini", "Reduncini", "Tragelaphini",
"Tragelaphini", "Tragelaphini", "Tragelaphini", "Tragelaphini",
"Tragelaphini", "Tragelaphini"), Genus = c("", "", "", "", "",
"", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "",
"", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "",
"", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "",
"", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "",
"", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "",
"", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "",
"", ""), d.13.C = c(0.4, 0.7, 0.6, -1.2, -1.2, -6.4, -5, -3,
-2.7, -6.5, -6.2, -2.7, -5.7, -1.2, -0.4, -0.7, 0.8, 0.8, -2,
0.6, 0.4, -1.2, 0.6, 2.8, -0.3, -1, -0.8, 1, -1, -4.8, -5.9,
-3.9, -8.7, -1, 0.8, 1.7, 0, -1.1, 0.3, -2.3, -11.5, -0.3, -1.1,
-1.9, -0.1, 2.2, 1.5, -0.3, 0.2, -1.2, 0.8, 0.9, -2.4, 1.2, 2.4,
0.1, -0.7, -1.7, 0.6, 0.1, -2.4, -1.7, -0.9, 0.3, -3.3, 1.5,
1.7, 1.1, 1.2, 1.3, 1.8, 1.2, 1, 1.5, 1.3, 1, 0.5, 0.7, 1.3,
1.5, 2.7, -0.2, 2.1, 2, 0.6, -3.3, 1.3, 0.8, -3.2, 0.1, -1.2,
0.2, -1.6, -0.4, 0.3, 0.3, -4.2, -2.6, -4.5, -5.3, -8.1, -8.7,
-8.1), d.18.O = c(0.9, 0.8, 3.1, 1, 2.3, 1.5, 1.6, 2.5, 4.3,
1, -2.1, 2.7, 2.5, -0.1, -0.3, -0.6, 1.1, 0.5, -1, 1.3, 0.3,
0, 0.9, -0.6, 1.7, 0.5, 0, 3.9, -3.4, 0.1, -1.1, 0.2, -0.2, 1.6,
4.1, 4.2, 0, 2, 2.9, -0.4, 2.1, 2.5, 0.1, -0.8, 0.7, 1, 3.2,
-1.4, 5.3, 4.1, 2.5, 3.2, 0.4, 1.9, 1.2, 1.7, -0.1, -2.1, 0.5,
0.5, 0.2, 1.8, -2.3, -3.6, -1.8, -0.4, -1.6, -0.7, 0.1, -1.1,
-1, -0.3, 0.1, -0.7, -1.1, -2.3, -0.8, -3.7, -2.2, 2.6, 1.3,
1.1, 1.7, 1.5, 2.2, 1.2, -1.4, 0.3, -0.6, -0.4, 1.5, 0.3, -0.6,
-1.6, 0.6, -4.3, 3.3, 2.4, -1.3, 0.4, 1.4, 3.7, 0), Age.range = c("1.55-1.4",
"1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4",
"1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4",
"1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4",
"1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4",
"1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4",
"1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4",
"1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4",
"1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4",
"1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4",
"1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4", "1.55-1.4",
"1.55-1.4", "2.1-1.9", "2.1-1.9", "2.1-1.9", "2.1-1.9", "2.1-1.9",
"2.1-1.9", "2.1-1.9", "2.1-1.9", "2.1-1.9", "2.1-1.9", "2.1-1.9",
"2.1-1.9", "2.1-1.9", "2.1-1.9", "2.1-1.9", "2.1-1.9", "2.1-1.9",
"", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "",
"", "", "", "", "", "", "", "")), row.names = c(NA, -103L), class = c("tbl_df",
"tbl", "data.frame"))
I've tried to use cell_row_groups() but got a warning message: "Since gt v0.3.0, 'columns = vars(...)' has been deprecated. Please use 'columns = c(...)' instead", and the same bolded table appears. I'm wondering if it has something to do with the order I did group_by() in.
A: To style the row group headers use cells_row_groups() instead of cells_body in tab_style
Using some fake example data based on the gtcars dataset:
library(gt)
library(dplyr)
bovids.summary.carb <- gtcars |>
group_by(ctry_origin, bdy_style) %>%
summarize(
mean_d13C = mean(hp, na.rm = TRUE),
median_d13C = median(hp, na.rm = TRUE),
range_d13C = diff(range(max(hp, na.rm = TRUE)))
)
gt(bovids.summary.carb) %>%
tab_header(title = "Mean, median, and ranges of d.13.C values by Tribe across Depositional Environment and Member") %>%
fmt_number(columns = c(mean_d13C, median_d13C, range_d13C), decimals = 2) %>%
sub_missing(missing_text = "--") %>%
cols_label(mean_d13C = "Mean", median_d13C = "Median", range_d13C = "Range") %>%
tab_style(
style = cell_text(weight = "bold"),
locations = cells_row_groups()
)
| {
"language": "hr",
"url": "https://stackoverflow.com/questions/75633338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Where can I initialize a plugin's Redux store? I have created a plugin that requires the app consuming it to call a function, initializePluginStore(initialState) to initialize the store. Where should the app consuming it call this function?
Plugin code:
/store.ts
let store = null
export const configurePluginStore = (initialState) => {
const { key, reducer, actions } = createThemeSlice(initialState)
const rootReducer = combineReducers({
[key]: reducer,
})
store = configureStore({
reducer: rootReducer,
})
}
export default store
The store is exported so that other files in the plugin can grab data from the store from outside a component. Ex:
/example.ts
import store from './store'
const getTheme = () => {
const { theme } = store.getState()
// Do something with theme and return a function that a component can use
}
My issue here is that when I call initializePluginStore in the consuming app's index.ts, the store is null and I get an error when the function in the plugin calls store.getState()
How do I make sure that the plugin's store is initialized before the function that calls store.getState() is used?
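Not an authoritative answer, but one likely cause worth noting: `export default store` exports the value of `store` at module-evaluation time (null), not a binding that later reassignment inside `configurePluginStore` updates. A common workaround is to export an accessor function and call it at use time instead of import time. This is only a sketch: `makeStore` below is a stand-in for Redux Toolkit's `configureStore` so the sketch stays self-contained, and the `Store` type is simplified.

```typescript
// store.ts (sketch) -- export an accessor, not the initially-null value.
type Store = { getState: () => unknown };

let store: Store | null = null;

// Stand-in for Redux Toolkit's configureStore(), just for this sketch.
const makeStore = (initialState: unknown): Store => ({
  getState: () => initialState,
});

export const configurePluginStore = (initialState: unknown): void => {
  store = makeStore(initialState);
};

// Other plugin files call getStore() when they need the store, which is
// after the consuming app has called configurePluginStore.
export const getStore = (): Store => {
  if (store === null) {
    throw new Error("configurePluginStore(initialState) must be called first");
  }
  return store;
};
```

example.ts would then read `getStore().getState()` inside `getTheme`, so the lookup happens at call time rather than at import time.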
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: How to install toolchain and custom GDB in VSCode to debug remote target I am trying to debug a cross compiled remote target with GDB.
The target OS only has gdbserver so I have to use a custom GDB from my WSL host to remotely attach to it. The target is cross compiled with a custom Yocto distro toolchain, which is only installed in the same shell session I build it in.
I can install the toolchain with a helper script for a shell session, and launch the custom GDB with $GDB.
./toolchain-helper.sh
whereis $GDB
/opt/toolchains/.../sysroots/usr/bin/x86_64-special-linux-gdb
$GDB
(gdb) target remote 192.168.0.100:9999
I'm trying to use the VSCode debugger and its GUI to make debugging easier. I can't figure out how to configure launch.json to run toolchain-helper.sh and use $GDB. Is this possible to do?
VSCode Version: 1.75.1
Native Debug Version: 0.26.1
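Not a definitive answer, but a pattern that often works with the Native Debug extension: wrap the environment setup in a small executable script, e.g. a `gdb-wrapper.sh` containing `. ./toolchain-helper.sh` followed by `exec "$GDB" "$@"`, and point the extension's `gdbpath` at that wrapper. A sketch of the matching launch.json — the executable path, target address, and wrapper location below are assumptions based on the question:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach to remote gdbserver",
      "type": "gdb",
      "request": "attach",
      "executable": "${workspaceFolder}/build/my-program",
      "target": "192.168.0.100:9999",
      "remote": true,
      "cwd": "${workspaceFolder}",
      "gdbpath": "${workspaceFolder}/gdb-wrapper.sh"
    }
  ]
}
```

Because the wrapper `exec`s the toolchain GDB after sourcing the helper script, the debugger inherits the same environment you get in an interactive shell session.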
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Get a COMException from WinUI3 when creating a Grid When I try to create a Grid, it throws a System.Runtime.InteropServices.COMException
var definition = new RowDefinition {Height = GridLength.Auto};
var grid = new Grid { RowDefinitions = { definition, definition } };
StackTrace below:
System.Runtime.InteropServices.COMException (0x800F1000): No installed components were detected.
at WinRT.DelegateExtensions.DynamicInvokeAbi(Delegate del, Object[] invoke_params)
at ABI.System.Collections.Generic.IVectorMethods`1.Append(IObjectReference obj, T value)
at Test..ctor()
...
I tried running the code below in the Immediate Window:
*
*var g1 = new Grid{RowDefinitions = { new RowDefinition {Height = GridLength.Auto} } }; No Exception.
*RowDefinition d = new() {Height = GridLength.Auto}; No Exception.
*var g3 = new Grid{RowDefinitions = {d}}; COMException.
And when I change my code to
RowDefinition Definition() => new RowDefinition {Height = GridLength.Auto};
var grid = new Grid { RowDefinitions = { Definition(), Definition() } };
it runs without an exception.
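For what it's worth, a plausible explanation (an inference from the behavior above, not confirmed documentation): a RowDefinition instance can belong to only one RowDefinitions collection at a time, and the failing collection initializer adds the same definition object twice. Creating a fresh instance per row — which is what the questioner's factory-method fix does — avoids the conflict:

```csharp
// Sketch: one new RowDefinition per row, instead of reusing a single instance.
RowDefinition NewRow() => new RowDefinition { Height = GridLength.Auto };

var grid = new Grid
{
    RowDefinitions = { NewRow(), NewRow() },
};
```

The same reasoning would explain why `new Grid { RowDefinitions = { d } }` fails in the Immediate Window after `d` was already materialized elsewhere.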
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Rubymine cursor control and Enter key do not work on remote host I run the Rubymine IDE (Windows 11) for a remote project (Ubuntu 20) via ssh. Pressing the cursor keys (left/right), Backspace, and, most annoyingly, the Enter key seems to do nothing during text editing, though they work for menu navigation. Any fixes or workarounds? At the moment I use context copy-paste to enter a new line.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to Run Exec Command In Background (PHP & Python) Nginx I have a PHP file (download.php?url=$url); I make an XHR request to that PHP file from my JS code. The url=$url part is the query string, and download.php executes a Python script using $output = exec($command);. The script fetches the file link, returns the response as JSON, and writes it to a file; after the exec, my PHP file reads the JSON file and sends the response to the user.
The problem is that when multiple users click the Download button, it only works for the first user who clicked, and the others get their turn only after the first finishes. Why is this? I want this script to run for everyone who requested it, at the same time. It is not working: I also tried appending > /dev/null 2>&1 &, but then the PHP code after the exec command runs instantly while the Python script is still fetching the link of the requested file and writing it to JSON, resulting in the JSON file not being found.
What I want is to let the Python script finish its process while other users can use it at the same time, with no waiting. I hope you understand what I mean.
Thanks!
I even used random names for the Python file: I write the Python code from inside the PHP file to a $random-name.py file, and after the process completes, the PHP file deletes the Python file at the end.
It still does not work at the same time for all the users who are sending requests.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Is it possible to pack a react app into a distributable and install on any computer? I have a web application with a React front end and a Node backend. I want to pack this whole app into a distributable (exe, msi, etc.) and install it on clients' machines. Once they start the app, it will run the server on a local port, and the React app will run in the browser on another local port. Is this possible? If so, what tools should I use?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is there a way to handle the "AssertionError: No text to speak" in playsound module after the audio is played from an mp3 file? (I am using flask) My Flask application takes input from the user in the form of audio, processes it, and returns an appropriate output in the form of audio.
The audio file ("audiofile.mp3") my Python code produces is accessed through an endpoint ("/audio") and is played using the <audio> tag from home.html.
The output is played in the HTML file, but after speaking everything it throws "AssertionError: No text to speak". I want to overcome that error. Can anyone please help me with that? Thanks!
The error I get is:
File "C:\Users\ricky\AppData\Roaming\Python\Python310\site-packages\flask\app.py", line 2548, in __call__
return self.wsgi_app(environ, start_response)
File "C:\Users\ricky\AppData\Roaming\Python\Python310\site-packages\flask\app.py", line 2528, in wsgi_app
response = self.handle_exception(e)
File "C:\Users\ricky\AppData\Roaming\Python\Python310\site-packages\flask\app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "C:\Users\ricky\AppData\Roaming\Python\Python310\site-packages\flask\app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "C:\Users\ricky\AppData\Roaming\Python\Python310\site-packages\flask\app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "C:\Users\ricky\AppData\Roaming\Python\Python310\site-packages\flask\app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "c:\Users\ricky\OneDrive\Desktop\ark\FlaskApp.py", line 12, in hello_name
**speak(query)**
File "c:\Users\ricky\OneDrive\Desktop\ark\speech_to_text.py", line 14, in speak
**audio = gtts.gTTS(text, lang_check= False)**
File "C:\Users\ricky\AppData\Roaming\Python\Python310\site-packages\gtts\tts.py", line 128, in __init__
assert text, "No text to speak"
AssertionError: No text to speak
Here are my endpoints:
@app.route('/hello')
def hello_name():
query = speech_to_text.proc_queries()
speak(query)
send_audio_file()
return render_template("home.html", query = speech_to_text.proc_queries())
@app.route('/audio')
def send_audio_file():
return send_file("audiofile.mp3", "audio/mp3", True)
def speak(text):
audio = gtts.gTTS(text, lang_check= False)
audio_file = "audiofile.mp3"
audio.save(audio_file)
playsound.playsound(audio_file, block=True)
os.remove(audio_file)
This is where gTTS raises an exception after speaking the audio. Instead of throwing an exception I want it to terminate or pause the speaking.
This is my home.html file
<body>
<h2> The query is: {{query}}</h2>
<audio controls>
<source src="http://127.0.0.1:5000/audio" type="audio/mp3"/>
</audio>
</body>
I tried to terminate the program after deleting the file, yet I was met with the same error.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Having a method run at application start continuously I am having a little bit of trouble with something I am trying to pull off. I have an application that uses Java 19 and Spring Boot and is being hosted locally (but will eventually be hosted elsewhere). I have an SQL database set up on the same machine. I want to be able to send users an email based on the due date of an event they are participating in. I have a few methods that let me query the database using JPA to find these emails; the issue I am having is actually calling these methods. I want these methods to run at intervals of 5 minutes, always (that is to say, I want the methods to query the DB every 5 minutes or so). Is there a way to do this? As of now, any call to a repository is triggered by events on the front-end, and I am uncertain how I could call these methods continuously. Please find the methods below:
import java.util.Timer;
//Main class
public class SchedulerMain {
//Timer for 7 days
public static Boolean sevenDays() throws InterruptedException {
Timer time = new Timer(); // Instantiate Timer Object
ScheduledTaskSevenDays st = new ScheduledTaskSevenDays(); // Instantiate ScheduledTask class
time.schedule(st, 0, 300000); // Create Repetitively task for every 5 minutes
return true;
}
}
public class ScheduledTaskSevenDays extends TimerTask {
@Autowired
private AdminService adminService;
// To run for 7 days
public void run() {
//Going to call on method here to gather participants and send email
adminService.SevenDayNotification();
}
}
public boolean SevenDayNotification() {
System.out.println("Start of email Method");
try {
//Get Survey Participation from DB
List<String> surveyParticipation;
surveyParticipation = surveyParticipationRepository.findAllParticipantEmailSevenDays();
System.out.println("we have a list of participants");
//send emails to all participants in the survey
for(int i = 0; i<surveyParticipation.size(); i++)
{
System.out.println("im in the loop");
emailService.sendMessage(surveyParticipation.get(i), "Notice: Survey will end soon", "This survey is set to end in 7 days, please be sure to complete and submit it before the due date." );
}
System.out.println("we are returning true");
//return true if successful
return true;
}
catch(Exception e){
//return false if unsuccessful
System.out.println("I have failed at sending email");
System.out.println(e.getMessage());
return false;
}
}
A: You can use the @Scheduled annotation.
In your main application class, you need to first enable scheduling.
@SpringBootApplication
@EnableScheduling
public class MyApplication {
// ...
}
Then, add @Scheduled to a method of one of your components.
import org.springframework.stereotype.Component;
import org.springframework.scheduling.annotation.Scheduled;
import java.util.concurrent.TimeUnit;
@Component
public class MyTask {
@Scheduled(fixedDelay = 5, timeUnit = TimeUnit.MINUTES)
public void run() {
// execute periodic task
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Unable to import name 'create_connection' from 'sql' I am trying to use Flask on my pc but I keep getting an error whenever I import from the sql module. The error states:
ImportError: cannot import name 'create_connection' from 'sql' (C:<Directory to sql.py>)
# Imports
import flask
from flask import jsonify
from flask import request
import mysql.connector
from mysql.connector import Error
from sql import create_connection
from sql import execute_read_query
What can I do to fix this?
A: I think it's a file path problem or a package install issue. If you confirm your project directory doesn't have a file of its own named "sql.py", you can try the commands below:
pip uninstall python-sql
then
pip install python-sql
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: MudBlazor Component Sizing & Layout In the example code on MudBlazor's website (https://mudblazor.com/components/select#variants), no sizing or layout is specified, yet the components line up neatly next to each other. When I implement the code, it makes the select boxes 100% of the page width and places them on top of each other instead of next to each other. How would you go about setting the size and layout/placement of the components? I would like the two select boxes side by side and the checkbox and buttons centered.
<MudSelect T="MailboxUser" Placeholder="Select Forward From Mailbox" Variant="Variant.Outlined" Label="Forward From" @bind-Value="@SelectedForwardFrom" AnchorOrigin="Origin.BottomCenter" Dense="true" FullWidth="false">
@foreach (var user in SortedUsers)
{
<MudSelectItem Value="@user" >@user.DisplayName (@user.PrimarySmtpAddress)</MudSelectItem>
}
</MudSelect>
<MudSelect T="MailboxUser" Placeholder="Select Forward To Mailbox" Variant="Variant.Outlined" Label="Forward To" @bind-Value="@SelectedForwardTo" AnchorOrigin="Origin.BottomCenter" Dense="true" FullWidth="false">
@foreach (var user in SortedUsers)
{
<MudSelectItem Value="@user">@user.DisplayName (@user.PrimarySmtpAddress)</MudSelectItem>
}
</MudSelect>
<MudCheckBox @bind-Checked="@SendToBoth" Color="Color.Primary" Label="Deliver To Both Mailboxes"></MudCheckBox>
<MudButton Variant="Variant.Filled" Color="Color.Primary" @onclick="HandleSetForwardAction">Set Forward</MudButton>
<MudButton Variant="Variant.Filled" Color="Color.Primary" @onclick="HandleRefreshAction">Refresh</MudButton>
A: I think what you're looking for is a grid for your form inputs.
In MudBlazor this is the MudGrid. I don't have MudBlazor installed on my travelling machine, so here's some example code I lifted straight from here - https://www.mudblazor.com/components/autocomplete#usage.
The xs, sm,... control the formatting at different screen widths so you can collapse a row into columns on small screens. You'll need to read up on that.
<MudGrid>
<MudItem xs="12" sm="6" md="4">
<MudAutocomplete T="string" Label="US States" @bind-Value="value1" SearchFunc="@Search1"
ResetValueOnEmptyText="@resetValueOnEmptyText"
CoerceText="@coerceText" CoerceValue="@coerceValue" />
</MudItem>
<MudItem xs="12" sm="6" md="4">
<MudAutocomplete T="string" Label="US States" @bind-Value="value2" SearchFunc="@Search2"
ResetValueOnEmptyText="@resetValueOnEmptyText"
CoerceText="@coerceText" CoerceValue="@coerceValue"
AdornmentIcon="@Icons.Material.Filled.Search" AdornmentColor="Color.Primary" />
</MudItem>
<MudItem xs="12" md="12">
<MudText Class="mb-n3" Typo="Typo.body2">
<MudChip>@(value1 ?? "Not selected")</MudChip><MudChip>@(value2 ?? "Not selected")</MudChip>
</MudText>
</MudItem>
<MudItem xs="12" md="12" class="flex-column">
<MudSwitch @bind-Checked="resetValueOnEmptyText" Color="Color.Primary">Reset Value on empty Text</MudSwitch>
<MudSwitch @bind-Checked="coerceText" Color="Color.Secondary">Coerce Text to Value</MudSwitch>
<MudSwitch @bind-Checked="coerceValue" Color="Color.Tertiary">Coerce Value to Text (if not found)</MudSwitch>
</MudItem>
</MudGrid>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Implement routing(nested) with lazy loading In app-routing.module.ts
{
path: '',
canActivate: [AuthGuardService],
loadChildren: () => import('../app/appeals/appeals.module').then((mod) => mod.AppealsModule),
},
{
path: '',
canActivate: [AuthGuardService],
loadChildren: () => import('../app/others/others.module').then((mod) => mod.OtherModule),
},
{
path: '',
redirectTo: '/dashboard',
pathMatch: 'full',
},
{
path: '**',
redirectTo: '/dashboard',
},
In appeals-routing.module.ts
const routes: Routes = [{
path: WebAdminConstants.ROUTES.Appeals,
canActivate: [AuthGuardService],
children: [
{
path: '',
canActivate: [AuthGuardService],
component: AppealsHearingOfficersComponent,
},
{
path: WebAdminConstants.ROUTES.AppealsNotifications,
canActivate: [AuthGuardService],
component: AppealsNotificationsComponent,
},
{
path: WebAdminConstants.ROUTES.AppealsSearch,
canActivate: [AuthGuardService],
component: AppealsSearchComponent,
},
{
path: WebAdminConstants.ROUTES.AppealsScheduler,
component: AppealsSchedulerComponent,
},
{
path: WebAdminConstants.ROUTES.AppealsHearingOfficers,
canActivate: [AuthGuardService],
children: [
{
path: '',
canActivate: [AuthGuardService],
component: AppealsHearingOfficersComponent,
},
{
path: 'create-hearing-officer',
canActivate: [AuthGuardService],
component: CreateNewOfficerComponent,
},
{
path: ':id',
canActivate: [AuthGuardService],
component: AppealsHearingOfficerDetailComponent,
},
],
},
{
path: WebAdminConstants.ROUTES.AppealsCalendar,
canActivate: [AuthGuardService],
component: AppealsCalendarComponent,
},
{
path: WebAdminConstants.ROUTES.AppealsCreateAppeal,
canActivate: [AuthGuardService],
component: CreateAppealComponent,
},
],
},];
@NgModule({
imports: [RouterModule.forChild(routes)],
exports: [RouterModule]
})
I want the empty path to redirect to /dashboard, but I already use the empty path to handle lazy loading above. How can I solve this?
I tried replacing the empty path in app-routing.module.ts with a path name for each lazy-loaded route, but that doesn't look nice in the URL from the user's point of view.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: DynamoDBDocument.put runtime error _b.map is not a function I created a new Lambda function on nodejs18.x, and I understand I need to use the AWS JS SDK v3. However, I get a runtime error even though I'm not referencing v2 anywhere:
"errorType": "TypeError",
"errorMessage": "_b.map is not a function",
"stack": [
"TypeError: _b.map is not a function",
" at /var/runtime/node_modules/@aws-sdk/middleware-user-agent/dist-cjs/user-agent-middleware.js:14:151",
" at /var/runtime/node_modules/@aws-sdk/middleware-logger/dist-cjs/loggerMiddleware.js:6:22",
" at _context (/var/task/account/webpack:/mynode/account/handler.ts:108:15)",
" at Runtime.b [as handler] (/var/task/account/webpack:/mynode/account/handler.ts:83:69)"
Here is my code:
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { PutCommandInput, BatchGetCommandInput, BatchGetCommandOutput, DynamoDBDocument, TranslateConfig, PutCommandOutput } from "@aws-sdk/lib-dynamodb";
let client: DynamoDBClient = new DynamoDBClient({
region: "us-west-2",
endpoint: "https://dynamodb.us-west-2.amazonaws.com",
apiVersion: "2012-08-10",
customUserAgent: <any>{
rejectUnauthorized: true,
secureProtocol: "TLSv1_method",
ciphers: "ALL",
}
});
const marshallOptions = {
// Whether to automatically convert empty strings, blobs, and sets to `null`.
convertEmptyValues: true, // false, by default.
// Whether to remove undefined values while marshalling.
removeUndefinedValues: false, // false, by default.
// Whether to convert typeof object to map attribute.
convertClassInstanceToMap: false, // false, by default.
};
const unmarshallOptions = {
// Whether to return numbers as a string instead of converting them to native JavaScript numbers.
wrapNumbers: false, // false, by default.
};
const translateConfig: TranslateConfig = { marshallOptions, unmarshallOptions };
const docClient = DynamoDBDocument.from(client, translateConfig);
....
const putParam: PutCommandInput = {
TableName: dynamoTable,
Item: dynamoItem,
ReturnValues: "NONE"
};
await docClient.put(putParam);
The last line is the one that fails.
Here is my package.json:
"dependencies": {
"@aws-sdk/client-dynamodb": "^3.284.0",
"@aws-sdk/client-lambda": "^3.282.0",
"@aws-sdk/client-s3": "^3.282.0",
"@aws-sdk/lib-dynamodb": "^3.284.0",
"@aws-sdk/util-utf8-node": "^3.259.0",
},
Any suggestions on what I'm missing here?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: In R's purrr, why does pmap throw Error in FUN(X[[i]], ...) : object '.z' not found when .z is clearly defined? I'm trying to get better with pmap(), but I'm hitting some errors. In this example, I have a simple graphing function that I want to iterate over. My understanding of pmap() is that I can set and define an arbitrary number of parameters. However, I'm confused why it says that .z isn't defined when I so clearly have it defined.
Unless necessary, I'm not interested in changing any of the ways the arguments are defined--I just want to understand and fix why I can't have this 3rd argument, even though .x and .y work fine (even if I switch around what is defined as .x, .y, and .z).
library(purrr)
library(ggplot2)
library(dplyr)
#Plot function
make_chart <- function(data, x, y, xtitle){
require(stringr)
ggplot(data, aes(x = as.factor({{x}}), y = {{y}})) +
geom_col() +
ggtitle(paste0("Number of ", str_to_title({{xtitle}}), " by MPG")) +
xlab({{xtitle}})
}
#Define x variables
x_variables <- c("cyl", "vs", "am", "gear", "carb")
#pmap it--why is .z not found and how do I get it to be?
pmap(list(.x = mtcars %>% dplyr::select(matches(x_variables)),
.y = x_variables,
.z = mtcars %>% dplyr::select(mpg)),
~mtcars %>%
make_chart(x = .x, xtitle = .y, y = .z))
A: from ?pmap
pmap(.l, .f, ..., .progress = FALSE)
.l => A list of vectors. The length of .l determines the number of arguments that .f will be called with. Arguments will be supplied by position if unnamed, and by name, if named.
Either use an anonymous function to supply arguments to make_chart by name from the list (preferred way) or supply arguments by position using formula syntax,
# using anonymous function to supply argument by name
pmap(.l = list(x = mtcars %>% dplyr::select(matches(x_variables)),
y = x_variables,
z = mtcars %>% dplyr::select(mpg)),
.f = \(x, y, z) mtcars %>% make_chart(x = x, xtitle = y, y = z))
or,
# supplying arguments by position
pmap(.l = list(mtcars %>% dplyr::select(matches(x_variables)),
x_variables,
mtcars %>% dplyr::select(mpg)),
.f = ~ mtcars %>% make_chart(x = ..1, xtitle = ..2, y = ..3))
A: Another option besides the ones offered by @shafee would be to pass a named list to pmap using the names of the function arguments. Doing so, we don't need an anonymous function whose only job is to map the names of the list passed to pmap onto the names of the function arguments.
Moreover, at least from a ggplot2 standpoint, the best practice for creating your loop would be to loop over the column names (making use of the .data pronoun) instead of passing vectors to your function. Actually, doing so you could get rid of the xtitle argument and replace xtitle with x in the plotting function.
library(purrr)
library(ggplot2)
library(stringr)
make_chart <- function(data, x, y, xtitle) {
ggplot(data, aes(x = as.factor(.data[[x]]), y = .data[[y]])) +
geom_col() +
ggtitle(paste0("Number of ", str_to_title(xtitle), " by MPG")) +
xlab(xtitle)
}
x_variables <- c("cyl", "vs", "am", "gear", "carb")
pmap(
list(
x = x_variables,
xtitle = x_variables,
y = "mpg"
),
make_chart,
data = mtcars
)
#> [[1]]
#>
#> [[2]]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: TextField cannot dismiss keyboard when set to multiline When the TextField is created like this
TextField(
"Describe...",
text: $text,
axis: .vertical
)
.focused(focused, equals: .prompt)
.textFieldStyle(.roundedBorder)
.lineLimit(2...)
.modifier(ClearButton(text: $text))
.submitLabel(.done)
.onSubmit {
    hideKeyboard()
    print("test")
}
When the keyboard's Done button is pressed, it creates a new line instead. I don't want any new lines in the TextField at all.
.submitLabel is supposed to do this, so I added it, but it doesn't work.
Next I tried adding an .onSubmit callback; that doesn't work either. In fact, "test" is never printed.
A: It is not because of the multiline behavior of the TextField, but because of the axis. If you remove the lineLimit the problem remains, but if you change the axis the TextField now "works" (though it is no longer multiline). The same thing happens if you use a TextEditor. However, if you want a multiline TextField with the ability to dismiss the keyboard, a user-friendly solution could be adding a toolbar. The user will then interpret the TextField as multiline (because of the return button) and still have a way to dismiss the keyboard.
struct TestView: View {
enum Field {
case prompt
}
@FocusState var focus: Field?
@State var text = ""
var body: some View {
VStack {
TextField(
"Describe...",
text: $text,
axis: .vertical
)
.focused($focus, equals: .prompt)
.textFieldStyle(.roundedBorder)
.lineLimit(2...)
.submitLabel(.return) // <- change to return
.toolbar { // adding the toolbar
ToolbarItemGroup(placement: .keyboard) {
Spacer()
Button("Done") {
focus = nil
}
}
}
}
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Why does GLib.File.new_for_uri (url) fail in vala but not whilst using the same url with curl Why does the following fail with uncaught error: HTTP Client Error: Forbidden (g-io-error-quark, 14):
static int main (string[] args) {
string url = "https://www.netfilter.org/projects/iptables/files/";
var page = GLib.File.new_for_uri (url);
var dis = new DataInputStream (page.read ());
return 0;
}
But using curl it works perfectly fine.
curl https://www.netfilter.org/projects/iptables/files/
The expected page is returned.
Other urls work fine with the vala code, it's just this particular url fails.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to read first line of CSV file I'm trying to write a simple Python program that takes in a CSV file and uses the tabulate module to print nicely formatted contents in the terminal. Here is a snippet of my code:
table = []
try:
with open(filename) as csvfile:
reader = csv.DictReader(csvfile)
i = 0
for row in reader:
if (i == 0):
headers = row
i = 1
table.append(row)
except FileNotFoundError:
print("File not found.")
sys.exit()
#output
print(tabulate(table, headers, tablefmt='grid'))
When I run the code, it prints out the table, but it uses the second line of the CSV file for the headers instead of the first line. This makes the first two rows of the output duplicates.
Basically, how do I store the first row of the .csv file as the table headers?
Based on the help provided, I have fixed my code by changing it to the following. Now it works. Thank you.
table = []
try:
with open(filename, newline='') as csvfile:
reader = csv.DictReader(csvfile)
headers = reader.fieldnames
for row in reader:
table.append(list(row.values()))
except FileNotFoundError:
print("File not found.")
sys.exit()
#output
print(tabulate(table, headers, tablefmt='grid'))
A: See the csv.DictReader docs, in particular the second paragraph. If you don't use a fieldnames parameter (which you didn't), the resulting dict's keys (fieldnames) are the values in the first row of the CSV file. You can access the fieldnames in your code using reader.fieldnames.
Your code should look something like this:
table = []
try:
with open(filename) as csvfile:
reader = csv.DictReader(csvfile)
headers = reader.fieldnames
for row in reader:
table.append(row)
except FileNotFoundError:
print("File not found.")
sys.exit()
#output
print(tabulate(table, headers, tablefmt='grid'))
A: The only thing I can think of is that your program somehow assumes the wrong row as headers. Typically, .csv files have the headers as the first line, so what I would do is get rid of if (i == 0) and its contents and just directly append the row.
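A minimal sketch of that suggestion, using a plain csv.reader so the header line is consumed exactly once (the file contents and names here are illustrative, not from the question):

```python
import csv
import io

def read_table(f):
    """Return (headers, rows): the first CSV line and the remaining lines."""
    reader = csv.reader(f)
    headers = next(reader)   # consume the header line once
    rows = list(reader)      # everything after the header
    return headers, rows

# In-memory file standing in for open(filename)
sample = io.StringIO("Fruit,Date\nApple,2023-03-01\nBanana,2023-03-02\n")
headers, table = read_table(sample)
print(headers)  # ['Fruit', 'Date']
print(table)    # [['Apple', '2023-03-01'], ['Banana', '2023-03-02']]
```

Because next(reader) advances the iterator past the header, the following loop only ever sees data rows, so no i flag is needed.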
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I set up env vars locally + in prod in a similar style? I am trying to set up a backend for a web app using Node.js. My goal is to create a system that can both be run locally and deployed on AWS. To deploy, I have a Jenkins pipeline that pulls from my git repo, builds the app, and deploys it using embedded Github + AWS credentials. This all works well.
I am struggling now to come up with a pattern to set up environment variables.
Initially I elected to use .env files to specify separate dev and prod environments. However, this is not ideal as the npm page forbids it. Additionally, this will require me to check my env files into source control, which goes against best practice.
My next idea was to use a .env file locally, and save some sort of environment variables directly in Jenkins separately that would be read into the pipeline’s environment (and thus into process.env) during building. However, I’m not a massive fan of this solution either as I’d prefer a common style of storing these env variables between local and prod to simplify the process of adding new env vars as necessary.
Is there a better solution?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to deploy and configure a Next.js app on a rental server with apache2 installed I'm a beginner at deploying applications. I've seen a lot of articles about deploying a Next.js app on Vercel or other servers that support Next.js, but I'm here because my Next.js app deploys and runs on the server I'm currently renting, yet it only runs on localhost there. So to access the application I need to use a URL like "http://www.example.com:3000". How can I configure the Next.js app to run at "https://www.example.com"?
*
*My server has apache2 and Node.js 16 installed
*Since my server is a rental server, I cannot run sudo commands through ssh
*I have a custom domain and SSL certificate
*Added a "Homepage" property to package.json since it is required when a React app is deployed
*Tried to change the apache config through ssh
*Added a "BasePath" property to next.config.js for routing (this is probably not relevant)
A: There could be an issue with the basePath property in next.config.js; if you have it in there, you should remove it before you deploy your app on that server. This is recommended because it may result in routing issues when running behind a reverse proxy.
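For serving on the default HTTPS port, the usual pattern behind that remark is to have Apache reverse-proxy requests to the Node process. On a shared host without sudo, one sketch of this (assuming the provider has mod_rewrite and mod_proxy enabled, which you would need to confirm) is an .htaccess rule in the document root:

```apache
# Hypothetical .htaccess sketch; requires mod_rewrite and mod_proxy
RewriteEngine On
# Forward all requests to the Next.js server listening on port 3000
RewriteRule ^(.*)$ http://127.0.0.1:3000/$1 [P,L]
```

If the host does not allow the [P] (proxy) flag, this will fail with a 403 or 500 error, in which case the provider's own proxy settings or a Node-capable host would be needed.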
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Unable to fetch base64 response at component level (Angular) I am making a GET request to fetch an image; the response has status 200 and its data is in base64 format. The problem is that I am not able to read the response in my component. My component code is below.
getUserImageById()
{
this.userService.getUserProfileById(this.currentUser.id).subscribe((res : Blob)=>
{
console.log("working"); // after debugging, control never reaches here and there is no error in the console
console.log(res);
this.data = res;
})
}
I know how to convert base64 to image, but I need to get the response in the component first.
Here are my headers in network.
Here is my response
I know I might be sending the wrong headers while making the API call; any suggestions will be much appreciated. Thanks in advance.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Click method is not working for me when I am trying to automate a web browser using Selenium with Python While trying to automate the web browser, I used the click method at the end to click the button. I passed the button's ID to find the element; the automated browser opens, but it does not click the "Start Download" button. I am getting no errors. My code is given below.
from selenium import webdriver
from selenium.webdriver.chrome.service import Service as ChromeService
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
import time
driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()))
driver.get("https://jqueryui.com/resources/demos/progressbar/download.html")
time.sleep(20)
my_element = driver.find_element(By.ID, "downloadButton")
my_element.click()
I tried the click method on the download button, but it's not working properly.
A: The reason for this issue could be that the download button is not yet ready to be clicked when click() is called. You can try using the WebDriverWait class to wait for the button to become clickable before clicking it.
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

my_element = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.ID, "downloadButton")))
my_element.click()
A: To click on the clickable element you need to induce WebDriverWait for the element_to_be_clickable() and you can use either of the following locator strategies:
*
*Using CSS_SELECTOR:
driver.get("https://jqueryui.com/resources/demos/progressbar/download.html")
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button#downloadButton"))).click()
*Using XPATH:
driver.get("https://jqueryui.com/resources/demos/progressbar/download.html")
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//button[@id='downloadButton']"))).click()
*Note: You have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: SQL not accepting JSON data app.post("/register",async(req,res)=>{
const data=JSON.stringify(req.body.data)
console.log(data+"\n")
connection.query(`UPDATE course SET 'registered_courses'= ? WHERE id='${req.body.id}'`,[data],(err, rows, fields) => {
if (err) {
console.log(err.message)
res.send({
success: false
})
}
else {
res.send({
success:true,
data:req.body.data
})
}
})
})
Repeatedly getting this error. Can anyone solve this?
I used JSON.stringify() to convert it into a JSON string, but MySQL says syntax error.
OUTPUT:
[{"id":"CSE7000","name":"JAPANESE","faculties":"20FAC0700","course_type":"THEORY","allowed_slots":"D1"},{"id":"CSE8000","name":"HCI","faculties":"20FAC0800","course_type":"LAB","allowed_slots":"D2"}]
ER_PARSE_ERROR: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near ''registered_courses'= '[{"id":"CSE7000","name":"JAPANESE","faculties...' at line 1
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: How many images per album are there in imgbb? I'm using imgbb to store images for a website. It creates albums that serve as image containers for each post (manga) I make, but I ran into the problem that it can't hold 700 images inside an album; when I try, the count comes up short, and if I try to upload 800 it never loads.
I can't use imgur because I'm from Latin America (Peru) and I need a phone number to create an account.
I thought about paying, but I think it's just not possible. Or is there a way?
If it's not possible, do you know of any other image-hosting service that doesn't have this limitation?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75633397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |