So I am trying to deploy Laravel on Vercel from [php-laravel](https://github.com/juicyfx/vercel-examples/tree/master/php-laravel),
but I am getting the error below:
Downloading user files
11:24:03.561 | Downloading PHP runtime files
11:24:03.564 | Installing Composer dependencies [START]
11:24:03.567 | php: error while loading shared libraries: libssl.so.10: cannot open shared object file: No such file or directory
11:24:03.569 | Error: Exited with 127
11:24:03.569 | at ChildProcess.<anonymous> (/vercel/path1/.vercel/builders/node_modules/vercel-php/dist/utils.js:178:24)
11:24:03.569 | at ChildProcess.emit (node:events:518:28)
11:24:03.571 | at ChildProcess.emit (node:domain:488:12)
11:24:03.571 | at ChildProcess._handle.onexit (node:internal/child_process:294:12)
11:24:03.592 | Error: Command "vercel build" exited with 1
11:24:03.647 | Command "vercel build" exited with 1
Can someone help me figure out how to fix this?
I did try other methods, but I am getting the same error.
I tried putting in custom installation commands, but that didn't work :( |
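One direction sometimes suggested for this class of error (the older runtime links against `libssl.so.10`, which newer build images no longer ship) is pinning a more recent `vercel-php` runtime in `vercel.json`. This is only a hedged sketch; the runtime version and the entry-point path are assumptions, not taken from the example repo:

```json
{
  "functions": {
    "api/index.php": {
      "runtime": "vercel-php@0.6.0"
    }
  }
}
```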
The table is not being created, and I don't want to use `declarative_base` or `sqlalchemy.orm`.
I was told to use a `class`, but I don't know how to arrange it.
```
# Imports this snippet needs (engine and metadata are passed in):
from sqlalchemy import Table, Column, Integer, String

class Tables:
    def __init__(self, engine, metadata, table, name, age, location):
        # Bind the constructor parameters (the original assigned
        # unrelated global names like sqlengine / sql_metadata instead).
        self.engine = engine
        self.metadata = metadata
        self.table = table
        self.name = name
        self.age = age
        self.location = location

    def make_table(self):
        member = Table(
            self.table, self.metadata,
            Column("id", Integer, primary_key=True),
            Column(self.name, String, unique=True),
            Column(self.age, Integer),
            Column(self.location, String),
        )
        # Create the table, then reflect and print what now exists.
        self.metadata.create_all(self.engine)
        self.metadata.reflect(bind=self.engine)
        for table in self.metadata.tables.values():
            print(table)
``` |
How do I use a class in SQLAlchemy? I have been trying to use a class, but the table is not being created |
Based on what you've asked and said in the comments, this sounds like what you need:
int chickens = Convert.ToInt32(Console.ReadLine());

for (int i = 0; i < chickens; ++i)
{
    Console.Write("Eggs:");
}
|
I know there have been a few posts on this, but my case is a little different, and I wanted to get some help.
I have a pandas dataframe `symbol_df` with 1-minute bars for each stock symbol, in the format below:
id Symbol_id Date Open High Low Close Volume
1 1 2023-12-13 09:15:00 4730.95 4744.00 4713.95 4696.40 2300
2 1 2023-12-13 09:16:00 4713.20 4723.70 4717.85 4702.55 1522
3 1 2023-12-13 09:17:00 4716.40 4718.55 4701.00 4701.00 909
4 1 2023-12-13 09:18:00 4700.15 4702.80 4696.70 4696.00 715
5 1 2023-12-13 09:19:00 4696.70 4709.90 4702.00 4696.10 895
... ... ... ... ... ... ... ...
108001 1 2024-03-27 13:44:00 6289.95 6291.95 6289.00 6287.55 989
108002 1 2024-03-27 13:45:00 6288.95 6290.85 6289.00 6287.75 286
108003 1 2024-03-27 13:46:00 6291.25 6293.60 6292.05 6289.10 1433
108004 1 2024-03-27 13:47:00 6295.00 6299.00 6293.20 6293.15 2702
108005 1 2024-03-27 13:48:00 6292.05 6296.55 6291.95 6291.95 983
I would like to calculate the "Relative Volume Ratio" indicator and add the calculated value to `symbol_df` as a new column on a rolling basis.
The "Relative Volume Ratio" indicator is calculated as follows:
today's volume so far is compared with the mean volume of the last 10 days over the same period. To get the ratio, we simply divide "today's volume so far" by "the mean volume of the last 10 days over the same period".
For example, suppose the current bar time is 13:48.
`cumulativeVolumeOfToday` = the `Volume` of the 1-minute bars between 00:00 and 13:48 today, added up.
`averageVolumeOfPrevious10DaysOfSamePeriod` = the average accumulated volume over the same period (00:00 - 13:48) across the last 10 days.
`relativeVolumeRatio = cumulativeVolumeOfToday / averageVolumeOfPrevious10DaysOfSamePeriod`
Add this value as a new column to the dataframe.
Sample data download for the test case:
import yfinance as yf  # pip install yfinance
from datetime import datetime
import pandas as pd
symbol_df = yf.download(tickers="AAPL", period="7d", interval="1m")["Volume"]
symbol_df = symbol_df.reset_index()
symbol_df['Datetime'] = symbol_df['Datetime'].dt.strftime('%Y-%m-%d %H:%M')
symbol_df = symbol_df.rename(columns={'Datetime': 'Date'})
# yfinance only serves 7 days of 1-minute data, so use a 5-day mean for the test calculations
How can I do this in Pandas? |
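Not an authoritative answer to the question above, but one way the described indicator is commonly sketched in pandas: compute the cumulative volume within each day, then, per minute-of-day, average that cumulative figure over the previous N days. Column names follow the sample data above; the 10-day window is a parameter, rows are assumed sorted by `Date`, and multiple symbols would need an extra `Symbol_id` grouping:

```python
import pandas as pd

def add_relative_volume(df, lookback_days=10):
    """Add a RelativeVolumeRatio column: cumulative volume so far today,
    divided by the mean cumulative volume at the same minute of day over
    the previous `lookback_days` days (today excluded)."""
    df = df.copy()
    df["Date"] = pd.to_datetime(df["Date"])
    df["day"] = df["Date"].dt.date
    df["minute"] = df["Date"].dt.strftime("%H:%M")

    # Cumulative volume within each calendar day.
    df["cum_vol"] = df.groupby("day")["Volume"].cumsum()

    # For each minute-of-day, average the cumulative volume over the
    # previous lookback_days days; shift(1) excludes the current day.
    df["avg_cum_vol"] = df.groupby("minute")["cum_vol"].transform(
        lambda s: s.shift(1).rolling(lookback_days, min_periods=1).mean()
    )
    df["RelativeVolumeRatio"] = df["cum_vol"] / df["avg_cum_vol"]
    return df.drop(columns=["day", "minute"])
```

The first day has no history, so its ratio comes out as NaN; `min_periods=1` lets the ratio appear from the second day onward even before a full 10-day window exists.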
Hi, I am trying to pip install the Bloomberg API (blpapi) in the Anaconda prompt, so I tried...
(base) C:\Windows\system32>conda install -c conda-forge blpapi
Collecting package metadata (current_repodata.json): failed
CondaHTTPError: HTTP 000 CONNECTION FAILED for url https://conda.anaconda.org/conda-forge/win-64/current_repodata.json
Elapsed: -
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
'https://conda.anaconda.org/conda-forge/win-64'
Then, as per the Bloomberg API website, I tried...
(base) C:\Windows\system32>pip install --index-url=https://bcms.bloomberg.com/pip/simple blpapi
Looking in indexes: https://bcms.bloomberg.com/pip/simple
Looking in links: https://nexus-devops.income.com.sg/repository/pypi-all
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', ConnectionAbortedError(10053, 'An established connection was aborted by the software in your host machine', None, 10053, None))': /pip/simple/blpapi/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', ConnectionAbortedError(10053, 'An established connection was aborted by the software in your host machine', None, 10053, None))': /pip/simple/blpapi/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', ConnectionAbortedError(10053, 'An established connection was aborted by the software in your host machine', None, 10053, None))': /pip/simple/blpapi/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', ConnectionAbortedError(10053, 'An established connection was aborted by the software in your host machine', None, 10053, None))': /pip/simple/blpapi/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', ConnectionAbortedError(10053, 'An established connection was aborted by the software in your host machine', None, 10053, None))': /pip/simple/blpapi/
ERROR: Could not find a version that satisfies the requirement blpapi
ERROR: No matching distribution found for blpapi
Finally, I tried...
(base) C:\Windows\system32>python -m pip install --index-url=https://bcms.bloomberg.com/pip/simple blpapi
Looking in indexes: https://bcms.bloomberg.com/pip/simple
Looking in links: https://nexus-devops.income.com.sg/repository/pypi-all
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1125)'))': /pip/simple/blpapi/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1125)'))': /pip/simple/blpapi/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1125)'))': /pip/simple/blpapi/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1125)'))': /pip/simple/blpapi/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1125)'))': /pip/simple/blpapi/
Could not fetch URL https://bcms.bloomberg.com/pip/simple/blpapi/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='bcms.bloomberg.com', port=443): Max retries exceeded with url: /pip/simple/blpapi/ (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1125)'))) - skipping
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000027AF2844220>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': /repository/pypi-all
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000027AF2844430>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': /repository/pypi-all
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000027AF2844640>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': /repository/pypi-all
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000027AF2844850>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': /repository/pypi-all
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000027AF2844A60>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': /repository/pypi-all
ERROR: Could not find a version that satisfies the requirement blpapi
ERROR: No matching distribution found for blpapi
It shouldn't be due to the environment, as I can download other packages like xbbg using pip install with no problem.
|
How do I install Bloomberg API blpapi |
|bloomberg| |
Activate the virtual environment by adding the commands below to the deployment script `deploy.sh`:
```sh
source <path_to_virtualenv>/bin/activate  # e.g., source antenv/bin/activate
pip install -r <path_of_requirements.txt>
```
I have created a flask application and used below code:
```py
from dotenv import load_dotenv
load_dotenv()
```
- Created a virtual environment (**env**) to install the packages using the commands:
```sh
py -m venv env
.\env\Scripts\activate
```
**Folder Structure:**
```
│ .gitignore
│ app.py
│ CHANGELOG.md
│ CONTRIBUTING.md
│ LICENSE.md
│ README.md
│ requirements.txt
├───.github
├───env
├───static
└───templates
```
- Deployed the application to Azure App Service using Local GIT.
```
(env) C:\Users\uname\flaskapp>git push azure main:master
Enumerating objects: 125, done.
Counting objects: 100% (125/125), done.
Delta compression using up to 8 threads
Compressing objects: 100% (63/63), done.
Writing objects: 100% (125/125), 785.30 KiB | 39.26 MiB/s, done.
Total 125 (delta 56), reused 125 (delta 56), pack-reused 0 (from 0)
remote: Resolving deltas: 100% (56/56), done.
remote: Deploy Async
remote: Updating branch 'master'.
remote: Updating submodules.
remote: Preparing deployment for commit id '223b3cf48e'.
remote: PreDeployment: context.CleanOutputPath False
remote: PreDeployment: context.OutputPath /home/site/wwwroot
remote: Repository path is /home/site/repository
remote: Running oryx build...
remote: Operation performed by Microsoft Oryx, https://github.com/Microsoft/Oryx
remote: You can report issues at https://github.com/Microsoft/Oryx/issues
remote:
remote: Oryx Version: 0.2.20230508.1, Commit: 7fe2bf39b357dd68572b438a85ca50b5ecfb4592, ReleaseTagName: 20230508.1
remote:
remote: Build Operation ID: 006694baf2ca5e2b
remote: Repository Commit : 223b3cf48eb78f055f670202ead044e98ab0360a
remote: OS Type : bullseye
remote: Image Type : githubactions
remote:
remote: Detecting platforms...
remote: Detected following platforms:
remote: python: 3.11.8
remote: Version '3.11.8' of platform 'python' is not installed. Generating script to install it...
remote:
remote: Using intermediate directory '/tmp/8dc4efcb326576d'.
remote:
remote: Copying files to the intermediate directory...
remote: Done in 0 sec(s).
remote:
remote: Source directory : /tmp/8dc4efcb326576d
remote: Destination directory: /home/site/wwwroot
remote: Downloading and extracting 'python' version '3.11.8' to '/tmp/oryx/platforms/python/3.11.8'...
remote: Detected image debian flavor: bullseye.
remote: Downloaded in 2 sec(s).
remote: Verifying checksum...
remote: Extracting contents...
remote: performing sha512 checksum for: python...
remote: Done in 6 sec(s).
remote:
remote: image detector file exists, platform is python..
remote: OS detector file exists, OS is bullseye..
remote: Python Version: /tmp/oryx/platforms/python/3.11.8/bin/python3.11
remote: Creating directory for command manifest file if it does not exist
remote: Removing existing manifest file
remote: Python Virtual Environment: antenv
remote: Creating virtual environment...
remote: Activating virtual environment...
remote: Running pip install...
remote: [07:57:35+0000] Collecting Flask==2.2.2 (from -r requirements.txt (line 1))
remote: [07:57:35+0000] Downloading Flask-2.2.2-py3-none-any.whl.metadata (3.9 kB)
remote: [07:57:35+0000] Collecting gunicorn (from -r requirements.txt (line 2))
remote: [07:57:35+0000] Downloading gunicorn-21.2.0-py3-none-any.whl.metadata (4.1 kB)
remote: [07:57:35+0000] Collecting Werkzeug==2.2.2 (from -r requirements.txt (line 3))
remote: [07:57:35+0000] Downloading Werkzeug-2.2.2-py3-none-any.whl.metadata (4.4 kB)
remote: [07:57:35+0000] Collecting Jinja2>=3.0 (from Flask==2.2.2->-r requirements.txt (line 1))
remote: [07:57:35+0000] Downloading Jinja2-3.1.3-py3-none-any.whl.metadata (3.3 kB)
remote: [07:57:35+0000] Collecting itsdangerous>=2.0 (from Flask==2.2.2->-r requirements.txt (line 1))
remote: [07:57:35+0000] Downloading itsdangerous-2.1.2-py3-none-any.whl.metadata (2.9 kB)
remote: [07:57:35+0000] Collecting click>=8.0 (from Flask==2.2.2->-r requirements.txt (line 1))
remote: [07:57:35+0000] Downloading click-8.1.7-py3-none-any.whl.metadata (3.0 kB)
remote: [07:57:35+0000] Collecting MarkupSafe>=2.1.1 (from Werkzeug==2.2.2->-r requirements.txt (line 3))
remote: [07:57:35+0000] Downloading MarkupSafe-2.1.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.0 kB)
remote: [07:57:35+0000] Collecting packaging (from gunicorn->-r requirements.txt (line 2))
remote: [07:57:35+0000] Downloading packaging-24.0-py3-none-any.whl.metadata (3.2 kB)
remote: [07:57:35+0000] Downloading Flask-2.2.2-py3-none-any.whl (101 kB)
remote: [07:57:35+0000] ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 101.5/101.5 kB 28.3 MB/s eta 0:00:00
remote: [07:57:35+0000] Downloading Werkzeug-2.2.2-py3-none-any.whl (232 kB)
remote: [07:57:35+0000] ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 232.7/232.7 kB 75.6 MB/s eta 0:00:00
remote: [07:57:35+0000] Downloading gunicorn-21.2.0-py3-none-any.whl (80 kB)
remote: [07:57:35+0000] ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 80.2/80.2 kB 35.6 MB/s eta 0:00:00
remote: [07:57:35+0000] Downloading click-8.1.7-py3-none-any.whl (97 kB)
remote: [07:57:35+0000] ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 97.9/97.9 kB 41.5 MB/s eta 0:00:00
remote: [07:57:35+0000] Downloading itsdangerous-2.1.2-py3-none-any.whl (15 kB)
remote: [07:57:35+0000] Downloading Jinja2-3.1.3-py3-none-any.whl (133 kB)
remote: [07:57:35+0000] ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.2/133.2 kB 51.0 MB/s eta 0:00:00
remote: [07:57:35+0000] Downloading MarkupSafe-2.1.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (28 kB)
remote: [07:57:35+0000] Downloading packaging-24.0-py3-none-any.whl (53 kB)
remote: [07:57:35+0000] ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 53.5/53.5 kB 17.6 MB/s eta 0:00:00
remote: [07:57:35+0000] Installing collected packages: packaging, MarkupSafe, itsdangerous, click, Werkzeug, Jinja2, gunicorn, Flask
remote: [07:57:36+0000] Successfully installed Flask-2.2.2 Jinja2-3.1.3 MarkupSafe-2.1.5 Werkzeug-2.2.2 click-8.1.7 gunicorn-21.2.0 itsdangerous-2.1.2 packaging-24.0
remote: Not a vso image, so not writing build commands
remote: Preparing output...
remote:
remote: Copying files to destination directory '/tmp/_preCompressedDestinationDir'...
remote: Done in 0 sec(s).
remote: Compressing content of directory '/tmp/_preCompressedDestinationDir'...
remote: Copied the compressed output to '/home/site/wwwroot'
remote:
remote: Removing existing manifest file
remote: Creating a manifest file...
remote: Manifest file created.
remote: Copying .ostype to manifest output directory.
remote:
remote: Done in 13 sec(s).
remote: Running post deployment command(s)...
remote:
remote: Generating summary of Oryx build
remote: Parsing the build logs
remote: Found 0 issue(s)
remote:
remote: Build Summary :
remote: ===============
remote: Errors (0)
remote: Warnings (0)
remote:
remote: Triggering recycle (preview mode disabled).
remote: Deployment successful. deployer = deploymentPath =
remote: Deployment Logs : 'https://kpflaskapp.scm.azurewebsites.net/newui/jsonviewer?view_url=/api/deployments/223b3cf48eb78f055f670202ead044e98ab0360a/log'
To https://kpflaskapp.scm.azurewebsites.net:443/kpflaskapp.git
* [new branch] main -> master
```
**App Service Logs:**
```
2024-03-28T09:30:53.366091831Z Documentation: http://aka.ms/webapp-linux
2024-03-28T09:30:53.366095719Z Python 3.11.7
2024-03-28T09:30:53.366099887Z Note: Any data outside '/home' is not persisted
2024-03-28T09:30:53.857082966Z Starting OpenBSD Secure Shell server: sshd.
2024-03-28T09:30:53.876787582Z App Command Line not configured, will attempt auto-detect
2024-03-28T09:30:53.876977196Z Launching oryx with: create-script -appPath /home/site/wwwroot -output /opt/startup/startup.sh -virtualEnvName antenv -defaultApp /opt/defaultsite
2024-03-28T09:30:53.936844682Z Found build manifest file at '/home/site/wwwroot/oryx-manifest.toml'. Deserializing it...
2024-03-28T09:30:53.939444394Z Build Operation ID: 006694baf2ca5e2b
2024-03-28T09:30:53.940956897Z Output is compressed. Extracting it...
2024-03-28T09:30:53.940981173Z Oryx Version: 0.2.20240127.1, Commit: 4b7f2dffcc69c214f9806d67a85ec8926e2393e1, ReleaseTagName: 20240127.1
2024-03-28T09:30:53.942382458Z Extracting '/home/site/wwwroot/output.tar.gz' to directory '/tmp/8dc4efcb326576d'...
2024-03-28T09:30:54.520441476Z App path is set to '/tmp/8dc4efcb326576d'
2024-03-28T09:30:54.553473156Z Detected an app based on Flask
2024-03-28T09:30:54.553544960Z Generating `gunicorn` command for 'app:app'
2024-03-28T09:30:54.555519014Z Writing output script to '/opt/startup/startup.sh'
2024-03-28T09:30:54.584079610Z Using packages from virtual environment antenv located at /tmp/8dc4efcb326576d/antenv.
2024-03-28T09:30:54.584102352Z Updated PYTHONPATH to '/opt/startup/app_logs:/tmp/8dc4efcb326576d/antenv/lib/python3.11/site-packages'
2024-03-28T09:30:54.842338343Z [2024-03-28 09:30:54 +0000] [73] [INFO] Starting gunicorn 21.2.0
2024-03-28T09:30:54.854736676Z [2024-03-28 09:30:54 +0000] [73] [INFO] Listening at: http://0.0.0.0:8000 (73)
2024-03-28T09:30:54.854796869Z [2024-03-28 09:30:54 +0000] [73] [INFO] Using worker: sync
2024-03-28T09:30:54.858336021Z [2024-03-28 09:30:54 +0000] [76] [INFO] Booting worker with pid: 76
2024-03-28T09:30:55.499981679Z 172.16.0.1 - - [28/Mar/2024:09:30:55 +0000] "GET /robots933456.txt HTTP/1.1" 404 207 "-" "HealthCheck/1.0"
2024-03-28T09:30:56.121392969Z Request for index page received
2024-03-28T09:30:56.121424398Z 172.16.0.1 - - [28/Mar/2024:09:30:56 +0000] "GET / HTTP/1.1" 200 1469 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36 Edg/123.0.0.0"
2024-03-28T09:30:56.543593372Z 172.16.0.1 - - [28/Mar/2024:09:30:56 +0000] "GET /static/bootstrap/css/bootstrap.min.css HTTP/1.1" 200 0 "https://kpflaskapp.azurewebsites.net/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36 Edg/123.0.0.0"
2024-03-28T09:30:56.547191577Z 172.16.0.1 - - [28/Mar/2024:09:30:56 +0000] "GET /static/images/azure-icon.svg HTTP/1.1" 200 0 "https://kpflaskapp.azurewebsites.net/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36 Edg/123.0.0.0"
2024-03-28T09:30:57.350264488Z Request for index page received
2024-03-28T09:30:57.350299212Z 172.16.0.1 - - [28/Mar/2024:09:30:57 +0000] "GET / HTTP/1.1" 200 1469 "-" "AlwaysOn"
2024-03-28T09:30:57.896618043Z 172.16.0.1 - - [28/Mar/2024:09:30:57 +0000] "GET /static/favicon.ico HTTP/1.1" 200 0 "https://kpflaskapp.azurewebsites.net/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36 Edg/123.0.0.0"
2024-03-28T09:31:07.346168021Z Request for index page received
2024-03-28T09:31:07.346213867Z 172.16.0.1 - - [28/Mar/2024:09:31:07 +0000] "GET / HTTP/1.1" 200 1469 "-" "AlwaysOn"
```
**Response:**

|
[![vs code suggestion][1]][1]
[1]: https://i.stack.imgur.com/oBDKa.png
How can I disable just this suggestion? |
How do I disable this VS Code suggestion? |
|visual-studio-code| |
|hadoop|hdfs|apache-storm|mapr| |
If you take a snapshot of the kind of processes that are currently in place in a typical software development company, almost all of these are set up and managed by staff-level engineers on behalf of the owners, and we're utilizing available technologies in places where it can speed up tasks, for example a developer using AI to contribute to a part of the system they're not entirely familiar with.
I feel it would be a sort of a natural progression that things get inverted, that instead of owners hiring staff-level engineers they hire an AI model, and that instead of us using AI in small parts to speed up tasks, the AI mostly utilizes other AIs and then hires a human where needed.
While recent events definitely show we're running down a slope into a scientific revolution, it's still really hard to know the best way these systems should be architected, and how we, as humans, will end up contributing.
I think it's going to be a rough ride until we settle many of these concerns, but until that work is done, I definitely think there will be an ongoing need for software engineers to design and implement these systems, to help strike a balance between man and machine, and to shape what the modality of our collective participation is going to be.
Programming isn't dying out, but the face of it will radically change, and what was usually done in large groups with plenty of resources will probably shift towards even smaller groups, with even fewer resources still. |
|python|selenium-webdriver|web-crawler| |
See: https://www.statsmodels.org/dev/generated/statsmodels.regression.linear_model.OLS.fit_regularized.html
There is an `L1_wt` argument that controls the penalty mix: if 0, the fit is a ridge fit; if 1, it is a lasso fit. |
{"OriginalQuestionIds":[11322740],"Voters":[{"Id":4108803,"DisplayName":"blackgreen"}]} |
If you look at some examples you can see the pattern:
| input number | input number in binary | result of `count0()` | result of `count1()` |
| --- | --- | --- | --- |
| 0 | 0 | 1 | 0 |
| 1 | 1 | 0 | 1 |
| 2 | 10 | 1 | 1 |
| 3 | 11 | 0 | 2 |
| 4 | 100 | 2 | 1 |
| 7 | 111 | 0 | 3 |
| 8 | 1000 | 3 | 1 |
| 10 | 1010 | 2 | 2 |
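In other words, `count0()` counts the `0` digits and `count1()` the `1` digits of the number's binary representation (with zero itself written as the single digit `0`). A quick Python sketch to verify the table (the function names are taken from the question):

```python
def count0(n):
    # Zeros in the binary representation; 0 is written as "0".
    return bin(n)[2:].count("0")

def count1(n):
    # Ones in the binary representation.
    return bin(n)[2:].count("1")

for n in (0, 1, 2, 3, 4, 7, 8, 10):
    print(n, bin(n)[2:], count0(n), count1(n))
```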
|
This is a part of my Bitbucket pipeline:
npm_audit: &npm_audit
echo "Running npm audit..."
npm_audit_output=$(npm audit --audit-level=critical)
if [ $? -ne 0 ]; then
echo "Critical vulnerabilities found. They must be fixed manually:"
exit 1
fi
npm_audit_output=$(npm audit)
if [ $? -ne 0 ]; then
echo "Non-critical vulnerabilities found."
echo "$npm_audit_output"
echo "Attempting to fix..."
npm audit fix
else
echo "No vulnerabilities found."
fi
later called like:
- step:
...
script:
- *npm_audit
Bitbucket returns:
+ echo "Running npm audit..." npm_audit_output=$(npm audit --audit-level=critical) if [ $? -ne 0 ]; then echo "Critical vulnerabilities found. They must be fixed manually:" exit 1 fi npm_audit_output=$(npm audit) if [ $? -ne 0 ]; then echo "Non-critical vulnerabilities found." echo "$npm_audit_output" echo "Attempting to fix..." npm audit fix else echo "No vulnerabilities found." fi
bash: bashScript5261110360543614450.sh: line 5: syntax error near unexpected token `then'
Why is there a syntax error? [ShellCheck][1] is green.
And why is the whole script echoed, instead of just the strings I asked for?
[1]: https://www.shellcheck.net/ |
Bitbucket pipeline script failure |
|bash|sh|bitbucket|bitbucket-pipelines| |
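A note on the question above: in a YAML plain (unquoted) multi-line scalar, line breaks are folded into spaces, so the whole script reaches bash as a single line; that would explain both the `then` syntax error and the script being echoed as one long line. A hedged sketch of the same anchor using a literal block scalar (`|`), which preserves newlines (the anchor name is kept from the question; the restructured `if !` form avoids checking `$?` after an assignment and is an assumption, not the only fix):

```yaml
npm_audit: &npm_audit |
  echo "Running npm audit..."
  if ! npm audit --audit-level=critical; then
    echo "Critical vulnerabilities found. They must be fixed manually:"
    exit 1
  fi
  if ! npm_audit_output=$(npm audit); then
    echo "Non-critical vulnerabilities found."
    echo "$npm_audit_output"
    echo "Attempting to fix..."
    npm audit fix
  else
    echo "No vulnerabilities found."
  fi
```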
I am trying to deploy Airflow on Kubernetes with Istio. Here is my VirtualService config:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: myapp-virtualservice
namespace: mynamespace
spec:
hosts:
- "myapp.example.com"
gateways:
- mygateway
http:
- match:
- uri:
prefix: /api/v1/
route:
- destination:
host: backend-service
port:
number: 8000
- match:
- uri:
prefix: /airflow/home/
rewrite:
uri: /home
route:
- destination:
host: airflow-service
port:
number: 8080
- match:
- uri:
prefix: /
route:
- destination:
host: frontend-service
port:
number: 443
So when I access https://myapp.example.com/airflow/home/, it reaches my airflow webserver in the pod, and I can see this log:
10.196.182.95 - - [20/Mar/2024:15:51:39 +0530] "GET /home HTTP/1.1" 302 319 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36 Edg/122.0.0.0"
But then it tries redirecting to the login page based on the `Location` header: `https://myapp.example.com/login/?next=https%3A%2F%2Fmyapp.example.com%2Fhome`, but it cannot find it, I think. It then redirects to `https://myapp.example.com/404?next=https:%2F%2Fmyapp.example.com%2Fhome`, and that's it. I cannot reach the airflow UI at all; I get a 404 error every time.
How can I fix the redirection in this case?
Here is my airflow.cfg for webserver:
[webserver]
base_url = https://myapp.example.com/airflow/
web_server_host = 0.0.0.0
web_server_port = 8080
web_server_worker_timeout = 1200
enable_proxy_fix = True
web_server_ssl_cert = /airflow/cert/tls.crt
web_server_ssl_key = /airflow/cert/tls.key
I tried accessing the webserver without istio:
kubectl port-forward svc/airflow-service 8080:8080
and I was able to reach the airflow UI and the login page on localhost:8080 locally on my machine, so it seems that airflow is set up correctly but something might be wrong with istio. Any ideas?
EDIT: This approach worked, but it's not very clean, and honestly I would prefer something that works properly:
- match:
- uri:
prefix: /airflow/home/
- uri:
prefix: /airflow/home
rewrite:
uri: /home
route:
- destination:
host: airflow-service
port:
number: 8080
- match:
- uri:
prefix: /login
rewrite:
uri: /login
route:
- destination:
host: airflow-service
port:
number: 8080
So in this setup I can go to myapp.example.com/login and log in to airflow first. Then it redirects me to /home (which is not the airflow app, just my base app). But once I am logged in, I can access /airflow/home, and the airflow app no longer redirects me to the /login page, which is what caused the 404 previously.
But it would be nice to have /airflow/login and /airflow/home work straight away. |
> If a Windows username is created in a xxx@domain.com email syntax...
They're more formally referred to as _User Principal Name_, or UPN for short precisely to avoid confusion with identically-formatted SMTP e-mail addresses.
<sub>**Nostalgia time!**: Back in the days of Windows 2000 I remember the mood and expectation was that UPNs would _eventually_ be mapped to a valid e-mail address _and_ they'd replace NT's `DOMAIN\USERNAME`-style logins long before Windows Server 2003 came out - well, that never happened, and despite Microsoft _really_ pushing people to use UPNs in the early 2000s I think they realised it wasn't going to go anywhere by the time Windows 7 came out.</sub>
> How does `Netplwiz` retrieve and display usernames properly in their email syntax?
(_Ghidra speedrun time..._)
I noticed that `netplwiz` uses COM to access and call out into services that are resolved at runtime, so _it's possible_ that `netplwiz` is using some hitherto-unknown COM interface/service to perform some aspects of name translation; the function list here is therefore not exhaustive.
But moving on to what a straightforward static-analysis can tell us:
* Looking at the Imports table, `netplwiz.dll` directly references dozens of Win32 libraries, and from those I saw these imported functions that can perform name translation (either searching by UPN or login-name and resolving it to a SID, or vice-versa; or translating between them):
* [`API-MS-WIN-SECURITY-ACTIVEDIRECTORYCLIENT-L1-1-0.DLL::DsCrackNamesW`][1]
* [`API-MS-WIN-SECURITY-LSALOOKUP-L2-1-0.DLL::LookupAccountNameW`][2]
* `API-MS-WIN-SECURITY-LSALOOKUP-L2-1-0.DLL::LookupAccountSidW`
* `API-MS-WIN-SECURITY-SDDL-L1-1-0.DLL::ConvertSidToStringSidW`
* `DSROLE.DLL::DsRoleGetPrimaryDomainInformation`
* `SAMCLI.DLL::NetLocalGroupEnum`
* `SAMCLI.DLL::NetLocalGroupGetMembers`
* `SAMCLI.DLL::NetUserGetInfo`
* [`SECUR32.DLL::TranslateNameW`][3]
----
There are other parts of Win32 that can do this too, which `netplwiz` doesn't use, like:
* [`NetQueryDisplayInformation`][4]
* [`NetUserEnum`][5]
* [`GetUserNameExW`][6]
-----------------------
> Can that be done for the currently logged in user?
Yes.
From the command-line, just run `whoami /upn`
I poked around in `whoami.exe` _just to be sure_ and it looks like it gets your username in [UPN format using `GetUserNameExW`][6] [by passing `NameUserPrincipal`][7] (i.e. the integer value `8`) as the first argument:
[![enter image description here][8]][8]
[1]: https://learn.microsoft.com/en-us/windows/win32/api/ntdsapi/nf-ntdsapi-dscracknamesw
[2]: https://learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-lookupaccountnamew
[3]: https://learn.microsoft.com/en-us/windows/win32/api/secext/nf-secext-translatenamew
[4]: https://learn.microsoft.com/en-us/windows/win32/api/lmaccess/nf-lmaccess-netquerydisplayinformation
[5]: https://learn.microsoft.com/en-us/windows/win32/api/lmaccess/nf-lmaccess-netuserenum
[6]: https://learn.microsoft.com/en-us/windows/win32/api/secext/nf-secext-getusernameexw
[7]: https://learn.microsoft.com/en-us/windows/win32/api/secext/ne-secext-extended_name_format
[8]: https://i.stack.imgur.com/PnZpY.png |
Hi, I'm new to Next.js (v14). I want to implement a multi-page form using react-hook-form.
However, it seems that `useForm` is executed and the `defaultValues` are re-applied every time a page route happens (clicking `<Link to=XX>`).
How to keep form data across multiple pages?
please help me.
here my code.
_app.tsx
```tsx
return (
<div>
<FormProvider>
<Component {...pageProps} />
</FormProvider>
</div>
);
```
FormProvider.tsx
```tsx
export const FormProvider = ({
children,
}: {
children: React.ReactNode;
}) => {
const defaultValues: LiffForm = {
menuId: '',
numberOfPatient: {
adult: 1,
child: 0,
},
};
const form = useForm({
defaultValues,
resolver: zodResolver(liffFormSchema),
});
const onSubmit = () => console.log("something to do");
return (
<Form {...form}>
<form onSubmit={form.handleSubmit(onSubmit)}>{children}</form>
</Form>
);
};
``` |
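For context, one pattern often suggested for this problem (a framework-free sketch, helper names hypothetical and not from the original post) is to keep the last-entered values in a store that lives outside the component tree, so a remounted provider re-reads them instead of the defaults:

```javascript
// Module-level store: survives component remounts within one page session.
const formStore = { values: null };

// Return the previously saved values when present, otherwise the defaults.
function getInitialValues(defaults) {
  return formStore.values ?? defaults;
}

// Call this (e.g. from a form watcher) whenever the form values change.
function saveValues(values) {
  formStore.values = values;
}
```

In a React context the same idea is usually wired through `form.watch` plus `defaultValues: getInitialValues(...)`, or by persisting to `sessionStorage`.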
I debugged the code using `{{ dd($order->id) }}` and got 100238 as the order id value. But when I type `<form action="{{ route('admin.pos.update_order', ['id' => $order->id]) }}" id='update_order' method="post">`, it's not working. When I write `{{ route('admin.pos.update_order', ['id' => 100238]) }}` instead, it works great.
I cannot solve this abnormal behaviour. Can anyone guide me to the actual issue?
Since debugging shows the order id, `$order->id` should also work in the form action. |
Laravel form action not accepting $order->id but accepting hard coded value |
|php|laravel|routes|controller|laravel-blade| |
null |
using System;
namespace Q1
{
public class Q1
{
static void Main ()
{
// Keep the following line intact
Console.WriteLine( "===========================" );
// Insert your solution here.
const string EMPTY = "You have 0 eggs which equals 0 dozen and 0 eggs left over.";
const string ZEROEGG = "You have 0 eggs which equals 0 dozen and 0 eggs left over.";
const string NEGATIVEGG = "You have 0 eggs which equals 0 dozen and 0 eggs left over.";
const string TWELVEEGG = "You have 12 eggs which equals 1 dozen and 0 eggs left over.";
Console.WriteLine("Enter the number of chickens:");
int chickens = Convert.ToInt32(Console.ReadLine());
{
if (chickens <= 0)
{
Console.WriteLine(EMPTY);
Console.WriteLine( "===========================" );
return;
}
Console.WriteLine("Eggs:");
int egg_ = Convert.ToInt32(Console.ReadLine());
if (chickens == 1)
{
switch (egg_) {
case 0:
Console.WriteLine(ZEROEGG);
break;
case <= -1:
Console.WriteLine(NEGATIVEGG);
break;
case 1:
Console.WriteLine("You have " + egg_ + " egg which equals 0 dozen and " + egg_ + " egg left over.");
break;
case <= 11:
Console.WriteLine("You have " + egg_ + " eggs which equals 0 dozen and " + egg_ + " eggs left over.");
break;
case 12:
Console.WriteLine(TWELVEEGG);
break;
case 13:
Console.WriteLine("You have " + egg_ + " eggs which equals 1 dozen and " + (egg_ - 12) + " egg left over.");
break;
case < 24:
Console.WriteLine("You have " + egg_ + " eggs which equals 1 dozen and " + (egg_ - 12) + " eggs left over.");
break;
case 24:
Console.WriteLine("You have " + egg_ + " eggs which equals 2 dozen and 0 eggs left over.");
break;
case > 24:
Console.WriteLine("You have " + egg_ + " eggs which equals " + (egg_/12) + " dozen " + (egg_ - 12) + " eggs left over.");
break;
}
for (int i = 0; i < chickens; ++i)
{
Console.Write("Eggs:");
egg_ = Convert.ToInt32(Console.ReadLine());
}
}
// I want the code to prompt the user with the writeline statement equal to the number of chickens i.e. chickens = 4, Console.Write("Eggs:") 4 times.
//I've tried while loops and if statements but couldn't figure it out. |
I am trying to test the cryptographic performance of the virtual crypto AES-NI MB PMD using the command below:
```
sudo ./dpdk-test-crypto-perf -c 0xf -n 4 --vdev crypto_aesni_mb -- --ptest throughput --devtype crypto_aesni_mb --optype cipher-then-auth --cipher-algo aes-cbc --cipher-op encrypt --cipher-key-sz 16 --auth-algo sha1-hmac --auth-op generate --auth-key-sz 64 --digest-sz 16 --total-ops 100000 --burst-sz 32 --buffer-sz 2048
```
I followed the steps in the link below to enable the virtual crypto PMD:
https://doc.dpdk.org/guides-20.11/cryptodevs/aesni_mb.html
However, EAL fails to parse the vdev:
```
EAL: failed to parse device "crypto_aesni_mb"
EAL: Unable to parse device 'crypto_aesni_mb'
```
I want to enable virtual crypto
Thanks & Regards
Raizz |
I am testing cryptographic performance using crypto_aesni_mb |
|cryptography|dpdk|aes-ni|dpdk-pmd| |
null |
I stumbled upon https://next-auth.js.org/, which is great for giving users the possibility to sign up for my website with GitHub and Slack, for example.
Since I do not want multiple logins with the same email and different providers (security issue -> see here https://next-auth.js.org/faq), my question is: could there be a way to still use NextAuth, but only for linking accounts, not for login? Or can anyone recommend a library that does the authentication but not the login?
Use case:
1. User registers with Github and is logged in
2. User clicks on connect with Slack (OAuth2 procedure etc.) but this time he is not able to login with that
3. I pull data from Slack periodically and assign it to the user (comments made in a channel, etc.)
|
Solid JS library that can use OAuth2 to gather information from external Accounts |
|oauth-2.0|integration|next-auth| |
Using gdal-async in Node.js, I'm trying to convert vector files from GeoJSON to DXF.
```
const dsGeoJSON2 = gdal.open('./upload2/objects.geojson');
const out2 = gdal.vectorTranslate('./upload2/objects.dxf', dsGeoJSON2, ['-f', 'DXF']);
```
The output file is empty, even with .kml, but when I change to .gpx, for example, it works. |
empty output files when using gdal.vectorTranslate |
|javascript|node.js|gdal|dxf|ogr2ogr| |
null |
Found a number of errors here: First, range in line 25 is calling a sheet, not a range of cells. I have changed it to `e.range` to call on the event object.
Also, I believe you have to use an installable onEdit trigger to utilize the mail service. I changed the name of your function from `onEdit` to `onMyEdit` and set up an onEdit trigger.
[![Screenshot1][1]][1]
[![Screenshot2][2]][2]
[![Screenshot3][3]][3]
Second there are some misspellings:
First, `senEmail()` in line 4 and line 23. Doesn't matter if you don't call that function but you do in the last line of your code.
Second, `getRagne()` in line 37 and line 38.
Additionally, I got the following error after these corrections had been made:
> Exception: The parameters (String,String,String,String,String) don't
> match the method signature for MailApp.sendEmail.
I combined messageBody, messageBody2, and respondent to bring it in line with the method signature.
Finally, you have name and email calling on the same cell. I changed the name to Column 3, but if this assumption is incorrect it can easily be modified.
These changes are reflected in the code below:
```
function onMyEdit(e) {
  addTimeStamp(e);
  sendEmail(e);
}

function addTimeStamp(e) {
  var sheet = SpreadsheetApp.getActive().getSheetByName("Sheet5");
  var range = e.range;
  var row = range.getRow();
  var col = range.getColumn();
  var cellValue = range.getValue();
  if (col == 9 && cellValue === true) {
    sheet.getRange(row, 10).setValue(new Date());
  }
}

function sendEmail(e) {
  var sheet = SpreadsheetApp.getActive().getSheetByName("Sheet5");
  var range = e.range;
  var row = range.getRow();
  var col = range.getColumn();
  let cellValue = range.getValue(); // The value of the edited cell
  // Check if the edit occurred in column 9 and the value is true
  if (col == 9 && cellValue === true) {
    let email = sheet.getRange(row, 2).getValue();
    let name = sheet.getRange(row, 3).getValue();
    let orderID = sheet.getRange(row, 5).getValue();
    let orderDate = sheet.getRange(row, 1).getValue();
    let methodOfPayment = sheet.getRange(row, 4).getValue();
    let numberOfTickets = sheet.getRange(row, 6).getValue();
    let payableTotal = sheet.getRange(row, 7).getValue();
    let html_link = "https://checkout.payableplugins.com/order/" + orderID;
    const subject = "PHLL Raffle Fundraiser Follow-Up on Order ID #" + orderID;
    const messageBody = 'Dear ' + name + ',' + "\n\n" + 'We hope this message finds you well. We wanted to remind you that we have received your order information for the raffle ticket. According to our records, you have selected the cash method for payment, but we have not yet received the payment.' + "\n\n" + 'If you have already provided the payment to the player, please let us know as soon as possible. This will allow us to coordinate with them to ensure that your payment is collected and your raffle ticket is processed accordingly.' + "\n\n" + 'However, if you have not yet provided the payment to the player, we kindly ask you to do so at your earliest convenience. Once the payment is received, we will promptly send you your raffle ticket.' + "\n\n" +
      'Please note that if payment is not received within the specified timeframe, we will have to remove your name from the drawing.' + "\n\n" + 'If you wish to change your method of payment, you may do so by following this link: empty. Should you have any further questions or concerns, please do not hesitate to reach out to us. We are here to assist you in any way we can.' + "\n\n" + 'Best of luck in the raffle drawing!' + "\n\n" + 'Warm regards,' + "\n\n" + 'Treasurer' + "\n\n" + 'EmailAddress' + "\n\n" + 'Paradise Hills Little League' + "\n\n" + 'ORDER DETAILS:' + "\n\n" + 'ORDER DATE:' + orderDate + "\n\n" + 'ORDER ID #' + orderID + "\n\n" + 'METHOD OF PAYMENT:' + methodOfPayment + "\n\n" + 'NUMBER OF TICKETS:' + numberOfTickets + "\n\n" + 'PAYABLE TOTAL:' + payableTotal;
    MailApp.sendEmail(email, subject, messageBody);
    console.log(sendEmail);
  }
}
```
If this is not working for you, let me know.
[1]: https://i.stack.imgur.com/uv2Fy.png
[2]: https://i.stack.imgur.com/F9oaO.png
[3]: https://i.stack.imgur.com/eP4uh.png |
I have to convert a `crow::json::rvalue` to an integer, boolean, or string, but I am unable to; it gives this error:
```
error: inconsistent deduction for auto return type: ‘int’ and then ‘bool’
```
```cpp
auto convertData(const crow::json::rvalue &data) {
std::string type = get_type_str(data.t());
if(type == "Number"){
return int(data.i());
} else if(type == "True" || type == "False") {
return bool(data.b());
}
std::string val = data.s();
return std::string(val);
}
```
Can someone please help? |
How to get a variable and return multiple different datatype variable using auto in c++ |
|c++|std|auto|crow| |
I'm optimizing some work I did moving from Orange to Python code, but I'm having some problems with image embedders.
I'm trying to recreate my work using TensorFlow/Keras, but the outputs of the VGG16 network (the 4096 activations of the penultimate FC layer) differ between Orange and Keras.
[In the Orange documentation it is written:](https://i.stack.imgur.com/X1Kya.png)
For python:
```
model_vgg = VGG16(include_top=True, weights='imagenet', pooling=None, input_shape=(224, 224, 3))
model_vgg16 = Model(inputs=model_vgg.input, outputs=model_vgg.layers[-2].output)
```
[Keras Model](https://i.stack.imgur.com/31umC.png)
To reshape images to 224x224 pixels I use the same package and the code of the function `load_image_or_none` -> https://github.com/biolab/orange3-imageanalytics/blob/master/orangecontrib/imageanalytics/utils/embedder_utils.py
And I get, via the Save Image widget, the same 224x224 resized image that Orange uses for VGG16.
[My resized images and Orange images are the same](https://i.stack.imgur.com/GBO7o.png)
Perhaps I'm making a mistake during preprocessing, since in the Orange documentation it is written that they use the original weights of the model.
To preprocess the images i try the Keras preprocess_input of VGG16, and manually
```
def process_vgg16(imgs):
output = np.zeros(imgs.shape)
VGG_MEAN = np.array([103.939, 116.779, 123.68], dtype=np.float32)
for i in range(0, imgs.shape[0]):
b = np.array(imgs[i,:,:,2], dtype=np.float32)
g = np.array(imgs[i,:,:,1], dtype=np.float32)
r = np.array(imgs[i,:,:,0], dtype=np.float32)
output[i,:,:,0] = b - VGG_MEAN[0]
output[i,:,:,1] = g - VGG_MEAN[1]
output[i,:,:,2] = r - VGG_MEAN[2]
#output = output/255
return output
```
Note: The images are in gray scale, so all channels are the same.
Results:
[First three output of an image in orange (VGG16)](https://i.stack.imgur.com/Jex5Z.png)
[First three output of an image in Keras (VGG16)](https://i.stack.imgur.com/T29ph.png)
Would anyone know the reason? |
I want to know why I get this JDBC connection error. I already added the MySQL driver to the classpath.
When I try to use init parameters:
```
@WebServlet(urlPatterns="/addServlet",initParams= {@WebInitParam(name="dburl",value="com.mysql.cj.jdbc.Driver"),
@WebInitParam(name="dbUser",value="test"),@WebInitParam(name="dbPassword",value="test")})
```
And write this code:
```java
public void init(ServletConfig config) {
try {
Class.forName("com.mysql.cj.jdbc.Driver");
System.out.println("dbURL"+ config.getInitParameter("dburl"));
System.out.println("dbUSER"+config.getInitParameter("dbUser"));
System.out.println(config.getInitParameter("dbPassword"));
con=DriverManager.getConnection(config.getInitParameter("dburl"),config.getInitParameter("dbUser"),config.getInitParameter("dbPassword"));
//con = DriverManager.getConnection("jdbc:mysql://localhost:3306/mydb","test","test");
System.out.println(con);
con=DriverManager.getConnection("jdbc:mysql//localhost/mydb:3306","test","test");
Statement stmt=con.createStatement();
//con = DriverManager.getConnection("jdbc:mysql://localhost/mydb", "test", "test");
con = DriverManager.getConnection("jdbc:mysql://localhost:3306/mydb", "test", " test");
System.out.println(con);
} catch (SQLException e) {
e.printStackTrace();
} catch(ClassNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
```
By normal connection with this `//con = DriverManager.getConnection("jdbc:mysql://localhost:3306/mydb","test","test");` I am able to connect to the database, but if I try using initParmas as below it gives me an error.
```
con=DriverManager.getConnection(config.getInitParameter("dburl"),config.getInitParameter("dbUser"),config.getInitParameter("dbPassword"));
```
Error as below
```none
java.sql.SQLException: No suitable driver found for com.mysql.cj.jdbc.Driver
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:706)
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:229)
at com.test.user.servlet.CreateUserServlet.init(CreateUserServlet.java:44)
at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:944)
at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:901)
at org.apache.catalina.core.StandardWrapper.allocate(StandardWrapper.java:649)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:115)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:482)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:115)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:673)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:391)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:896)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1744)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)
at java.base/java.lang.Thread.run(Thread.java:833)
Mar 21, 2024 4:44:36 PM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet [com.test.user.servlet.CreateUserServlet] in context with path [/UserApp] threw exception
java.lang.NullPointerException: Cannot invoke "java.sql.Connection.createStatement()" because "this.con" is null
```
|
You could create a JSON file with the version of the app and use a Guard to fetch this JSON and compare the version with the latest version of your app (you can store this data in the environment). If the versions differ, you can trigger a refresh in the client's browser to get the latest version. |
Property 'sourceInformation' not found on object of type 'CLLocation' when I install a geolocation library on iOS.
When attempting to retrieve the current location after installing the GeoLibrary library, I encounter the error "Property 'sourceInformation' not found on object of type 'CLLocation'". If I install a version that is two years old, then I encounter the error "'react/config/ReactNativeConfig.h' file not found". |
Property 'sourceInformation' not found on object of type 'CLLocation |
|javascript| |
null |
This code does the job:
```
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType
import json

spark = SparkSession.builder \
    .appName("JSON to DataFrame") \
    .getOrCreate()

json_string = "['a','b','c']"
json_list = json.loads(json_string.replace("'", "\""))

schema = StructType([
    StructField("Column1", StringType(), True)
])

df = spark.createDataFrame([(value,) for value in json_list], schema=schema)
df.show()
```
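The crux is the quote replacement: the input uses single quotes, which are not valid JSON string delimiters. In isolation it behaves like this (note, as my caveat rather than the answer's, that the naive `replace` would corrupt values containing apostrophes):

```python
import json

# Single quotes are not valid JSON, so swap them for double quotes first.
json_string = "['a','b','c']"
json_list = json.loads(json_string.replace("'", '"'))
print(json_list)  # ['a', 'b', 'c']
```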
|
react-hook-form useForm executed every time on Nextjs page routing |
|reactjs|typescript|next.js|react-hook-form| |
null |
Functional programming is a programming paradigm based upon building abstractions using functions, isolating side effects and mutable state. Executing pure functions is thread-safe. |
null |
If you need to access the Elasticsearch service from another pod, consider creating a Kubernetes Service object that targets your Elasticsearch Deployment. This will provide a stable endpoint (the Service’s ClusterIP) that other pods can use to access the Elasticsearch API on port 9200.
Here’s an example of how you might define such a Service:
```
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  ports:
    - protocol: TCP
      port: 9200
      targetPort: 9200
```
With this Service in place, other pods should be able to access the Elasticsearch API via `http://elasticsearch:9200`.
|
I have a USB device that identifies as a HID peripheral. I communicate with it using the `ReadFile` and `WriteFile` Win32 API functions.
Occasionally the device gets "stuck" and does not respond, and needs to be unplugged and replugged. That's not a problem in and of itself, but it causes the call to `WriteFile` to block until the device is unplugged, and that _is_ a problem.
I have read that I can use overlapped I/O to prevent this, but it doesn't work. The call to `WriteFileEx` still blocks!
Code fragment:
```
HANDLE hFile = CreateFileW(devicePath, NULL, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, NULL, NULL);
OVERLAPPED overlapped = {0};
overlapped.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
if (!WriteFileEx(hFile, buffer, length, &overlapped, nullptr))
return FALSE;
for (int elapsed = 0; !HasOverlappedIoCompleted(&overlapped); elapsed++) {
std::this_thread::sleep_for(1ms);
if (elapsed > READ_TIMEOUT_MS)
return FALSE;
}
```
How can I force writes to timeout instead of blocking forever?
(NOTE: The solution needs to work on all versions of Windows from XP onwards.) |
I'd like to update the CocoaPods built into my app, so I ran `pod install` from the terminal.
That's when I got this error:
```lang-none
[!] The `APP_NAME [Debug]` target overrides the `FRAMEWORK_SEARCH_PATHS` build setting defined in `Pods/Target Support Files/Pods/Pods.debug.xcconfig'. This can lead to problems with the CocoaPods installation
- Use the `$(inherited)` flag, or
- Remove the build settings from the target.
```
How do you **Use the $(inherited) flag?**
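From what I understand so far, it means making `$(inherited)` the first entry of the offending build setting in the target's Build Settings. A sketch of the resulting xcconfig-style value (the setting name comes from the warning; the extra search path is purely illustrative):

```
// APP_NAME target → Build Settings → Framework Search Paths
FRAMEWORK_SEARCH_PATHS = $(inherited) "$(PROJECT_DIR)/Frameworks"
```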
|
|cocoapods| |
Make sure you have enabled sign in with apple inside the supabase auth dashboard |
This helped for me as well! But when I run `pod install`, the library search paths get overwritten.
So I made a hook for the Podfile; maybe it will help someone.
```ruby
project_path = './yourProject.xcodeproj'
project = Xcodeproj::Project.open(project_path)
project.targets.each do |target|
if target.name == 'yourProjectTarget'
target.build_configurations.each do |config|
config.build_settings['LIBRARY_SEARCH_PATHS'] = ['$(inherited)']
end
end
end
project.save
```
This hook overwrites **LIBRARY_SEARCH_PATHS** to **$(inherited)** for you each time you run **pod install**.
Place it inside post_install Podfile block
|
>i dont understand it so if it could be provided with proper explanation
As shown in [The Java Tutorials][1] by Oracle Corp, a `for` loop has three parts:
```java
for ( initialization ; termination ; increment )
{
statement(s)
}
```
For example:
```java
for ( int index = 0 ; index < myArray.length ; index ++ )
{
System.out.println( "Index: " + index + ", value: " + myArray[ index ] );
}
```
To reverse the direction:
- The first part, *initialization*, can start at the *last* index of the array rather than at the *first* index. Ex: `int index = ( myArray.length - 1 )`. (Parentheses optional there.)
- The second part, *termination*, can test for going past the first index into negative numbers: `index >= 0`
- The third part, *increment*, can go downwards (negative) rather than upwards (positive):
```java
for ( int index = ( myArray.length - 1 ) ; index >= 0 ; index -- )
{
System.out.println( "Index: " + index + ", value: " + myArray[ index ] );
}
```
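Runnable as a self-contained check (the array contents here are chosen purely for illustration):

```java
public class ReverseLoop {
    // Build a comma-separated string of the array traversed back to front.
    static String joinReversed(int[] values) {
        StringBuilder sb = new StringBuilder();
        // Start at the last index and stop only after index 0 (note >= 0).
        for (int index = values.length - 1; index >= 0; index--) {
            sb.append(values[index]);
            if (index > 0) sb.append(",");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(joinReversed(new int[]{1, 2, 3})); // prints 3,2,1
    }
}
```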
[1]: https://docs.oracle.com/javase/tutorial/java/nutsandbolts/for.html |
null |
null |
null |
null |
null |
```
using System;
namespace Q1
{
public class Q1
{
static void Main ()
{
// Keep the following line intact
Console.WriteLine( "===========================" );
// Insert your solution here.
const string EMPTY = "You have 0 eggs which equals 0 dozen and 0 eggs left over.";
const string ZEROEGG = "You have 0 eggs which equals 0 dozen and 0 eggs left over.";
const string NEGATIVEGG = "You have 0 eggs which equals 0 dozen and 0 eggs left over.";
const string TWELVEEGG = "You have 12 eggs which equals 1 dozen and 0 eggs left over.";
Console.WriteLine("Enter the number of chickens:");
int chickens = Convert.ToInt32(Console.ReadLine());
{
if (chickens <= 0)
{
Console.WriteLine(EMPTY);
Console.WriteLine( "===========================" );
return;
}
Console.WriteLine("Eggs:");
int egg_ = Convert.ToInt32(Console.ReadLine());
if (chickens == 1)
{
switch (egg_) {
case 0:
Console.WriteLine(ZEROEGG);
break;
case <= -1:
Console.WriteLine(NEGATIVEGG);
break;
case 1:
Console.WriteLine("You have " + egg_ + " egg which equals 0 dozen and " + egg_ + " egg left over.");
break;
case <= 11:
Console.WriteLine("You have " + egg_ + " eggs which equals 0 dozen and " + egg_ + " eggs left over.");
break;
case 12:
Console.WriteLine(TWELVEEGG);
break;
case 13:
Console.WriteLine("You have " + egg_ + " eggs which equals 1 dozen and " + (egg_ - 12) + " egg left over.");
break;
case < 24:
Console.WriteLine("You have " + egg_ + " eggs which equals 1 dozen and " + (egg_ - 12) + " eggs left over.");
break;
case 24:
Console.WriteLine("You have " + egg_ + " eggs which equals 2 dozen and 0 eggs left over.");
break;
case > 24:
Console.WriteLine("You have " + egg_ + " eggs which equals " + (egg_/12) + " dozen " + (egg_ - 12) + " eggs left over.");
break;
}
for (int i = 0; i < chickens; ++i)
{
Console.Write("Eggs:");
egg_ = Convert.ToInt32(Console.ReadLine());
}
}
```
// I want the code to prompt the user with the writeline statement equal to the number of chickens i.e. chickens = 4, Console.Write("Eggs:") 4 times.
//I've tried while loops and if statements but couldn't figure it out. |
null |
{"Voters":[{"Id":874188,"DisplayName":"tripleee"},{"Id":10008173,"DisplayName":"David Maze"},{"Id":147356,"DisplayName":"larsks"}],"SiteSpecificCloseReasonIds":[18]} |
I have a firebase project.
I am calling an external api via cloud function in the hope the api key (stored in env variable) will be less discoverable.
While I have managed to get the following code working in the emulator, it doesn't work after deployment.
```
const {onRequest} = require("firebase-functions/v2/https");
const fetch = require("node-fetch");
const makeRequest = async (req, res) => {
const apiKey = process.env.KEY1;
const apiUrl = `[externalAPIurl]&key=${apiKey}`;
try {
const response = await fetch(apiUrl);
const data = await response.json();
res.set("Content-Type", "application/json");
res.status(200).send(JSON.stringify(data));
} catch (error) {
console.error("Error:", error);
res.status(500).send("Internal Server Error");
}
};
exports.helloWorld = onRequest(makeRequest);
```
Postman tells me that it's a lack of authentication being passed in the header (error 403).
I have struggled to adapt existing code (Google samples/tutorials, and community-provided excerpts). For example:
1. The following excludes req, res parameters and isn't obviously amenable to calling an external api (url)
https://cloud.google.com/functions/docs/samples/functions-bearer-token?hl=en
2. The following documents how one might create authenticated functions: https://stackoverflow.com/questions/57589339/how-to-invoke-authenticated-functions-on-gcp-cloud-functions
But the linked documentation suggests the approach is for development purposes (only?): https://cloud.google.com/functions/docs/securing/authenticating
3. This is very close, but it is 1st gen and lacks the external API call: https://github.com/firebase/functions-samples/blob/main/Node-1st-gen/authorized-https-endpoint/functions/index.js
For context, the plan is to anonymously authenticate all users of the website. If anyone can point me in the right direction, that would be sweet. |
I was facing the same issue; after some troubleshooting, I was able to fix it.
- Check the Python version on the local machine and in Lambda (they should match), as some packages are not imported properly when the versions differ.
Create the zip file as shown below:
- Create Virtual environment for python3.
```
python3 -m venv venv
source venv/bin/activate
```
- Create a directory with name "python"
```
mkdir python
cd python
```
- Install your packages in this folder.
```
pip3 install tuya-connector-python -t .
```
- Once installed, check the packages.
- Create ZIP file for python directory
```
cd ..
zip -r deployment_name.zip python
```
- Now you can upload the .zip file to you AWS Lambda Layer.
**Make sure the zip file includes every folder inside the python folder. Don't miss any files or folders; the main package may depend on them.**
----------
**For Windows**
- Using PowerShell or Command Prompt, perform the following steps (admin privileges):
```
mkdir layers
cd layers
mkdir python
cd python
pip3 install pycryptodome -t ./
```
- Now open the python folder in file explorer.
- Zip the python folder along with its child folders.
- Now you are good to go and upload it as layer in AWS.
**It is best if you have a Linux machine, or create an EC2 instance with an Ubuntu AMI and copy the content to S3; then you can upload it from the S3 bucket to the AWS Lambda layer directly.**
Update:
- I have created a similar scenario in my environment:
- I installed the Crypto library on my local machine (Ubuntu) as per the steps mentioned above.
[![Installation Image][1]][1]
- Created a Zip.
[![Zip File][2]][2]
- Uploaded the ZIP file to create layer and added layer to lambda
[![Lamabda Functions][4]][4]
- It worked like a charm:
[![Output][5]][5]
**Make Sure to add the layer to lambda**
**My Python version was 3.10 for local and Lambda's python**
[1]: https://i.stack.imgur.com/XjbIS.png
[2]: https://i.stack.imgur.com/dYFXF.png
[4]: https://i.stack.imgur.com/i08Co.png
[5]: https://i.stack.imgur.com/rczXC.png |
null |
{"Voters":[{"Id":11810933,"DisplayName":"NotTheDr01ds"},{"Id":1007220,"DisplayName":"aynber"},{"Id":874188,"DisplayName":"tripleee"}]} |
Unfortunately, what you're asking for isn't going to be ready made. You will need to create your own sort of plugin for this, and even that would be inconsistent based on your partner's WP version, theme, page builder, PHP server version, level of comfortability within WP, etc.
The only thing I can think of, is creating a plugin that essentially injects the banner via JS in the header or footer and places it on the page outside of any type of builder. That way once your partner installs the plugin, and activates it, everything else is taken care of.
You basically bundle the image within the plugin or have a CDN that can deliver a specific banner/image to a specific partner, so if the banner changes, the partner doesn't have to do anything, it's updated at the CDN/distribution level.
I don't think you're going to get an easily satisfying answer to this question. There are too many variables to account for. |
null |
```
const { EventEmitter } = require('events');
const { execFile } = require('child_process');

class ItemGetter extends EventEmitter {
  constructor() {
    super();
    this.on('item', item => this.handleItem(item));
  }

  handleItem(item) {
    console.log('Receiving Data: ' + item);
  }

  getAllItems() {
    for (let i = 0; i < 15; i++) {
      this.getItem(i).then(item => this.emit('item', item));
    }
    console.log('=== Loop ended ===');
  }

  async getItem(item = '') {
    console.log('Getting data:', item);
    return new Promise((resolve, reject) => {
      // execFile takes the argument list as an array (exec expects a single
      // command string), and rejecting is safer than throwing in a callback.
      execFile('echo', [String(item)], (error, stdout, stderr) => {
        if (error) {
          reject(error);
          return;
        }
        resolve('Receiving Data: ' + item);
      });
    });
  }
}

(new ItemGetter()).getAllItems();
```
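For contrast, a strictly sequential version (a sketch, not part of the original code) awaits each promise inside the loop, so every item is handled before the next one is fetched and '=== Loop ended ===' prints last:

```javascript
// Awaiting inside the loop serializes the work: item i is fully handled
// before item i + 1 is even requested.
async function getAllItemsSequential(getItem, handleItem, count) {
  for (let i = 0; i < count; i++) {
    const item = await getItem(i);
    handleItem(item);
  }
  console.log('=== Loop ended ===');
}

// Demo with a stubbed async getter (no child process involved).
getAllItemsSequential(i => Promise.resolve(i), x => console.log('Receiving Data: ' + x), 3);
```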
Your logic first runs the loop calling every `getItem`, then outputs '=== Loop ended ===', and only after that do the promises resolve. So, if you want the result of each `getItem` execution handled independently of the others, just don't abuse asynchronous logic; frequently the right solution is much simpler than it seems ;)
Note: in this solution you will get the same output, because the loop calling `getItem` runs faster than the promises with `exec`, but in this case each item will be handled exactly after its own promise resolves, instead of waiting for all the promises to resolve. |
I want to locate my iBook files and copy them to Calibre. I found a solution here somewhere and have now lost it. It suggested using the Terminal command above which worked well. I tried to copy the data into a new folder and that's when things went wrong. I can no longer access the folder containing the pdf files that show in iBooks.
In very simple language, how can I reinstate my permissions?
```
dazed2022@dazeds-MacBook-2023 ~ % ~/Library/Containers/com.apple.BKAgentService/Data/Documents/iBooks
zsh: permission denied: /Users/dazed2023/Library/Containers/com.apple.BKAgentService/Data/Documents/iBooks
```
Access was originally granted and I could see the pdf files for the books I have installed. When I tried to copy them into another folder (which was offered to me as an option) I seemed to get a folder within a folder within a folder but no books in any of them.
Trying to repeat the command failed with the permissions message shown.
I have looked at some of the answers here, and I am afraid I don't fully understand them enough to be brave enough to try in case I mess things up even more.
What have I tried?
(In Terminal)

```
dazed2022@dazeds-MacBook-2023 ~ % ~/Library/Containers/com.apple.BKAgentService/Data/Documents/iBooks
zsh: permission denied: /Users/dazed2023/Library/Containers/com.apple.BKAgentService/Data/Documents/iBooks
```
Apparently, this should have restored permissions for me. It didn't.
I am trying to use `yarn4` in a Node.js project, but I failed to set it up with `corepack` following this doc: https://yarnpkg.com/corepack.
Below is `package.json` file:
```
{
"name": "test",
"version": "1.0.0",
"main": "index.js",
"packageManager": "yarn@4.0.2",
"license": "MIT"
}
```
When I run `yarn install` on that folder, I got below error:
```
error This project's package.json defines "packageManager": "yarn@4.0.2". However the current global version of Yarn is 1.22.21.
Presence of the "packageManager" field indicates that the project is meant to be used with Corepack, a tool included by default with all official Node.js distributions starting from 16.9 and 14.19.
Corepack must currently be enabled by running corepack enable in your terminal. For more information, check out https://yarnpkg.com/corepack.
```
I have run `corepack enable` already but I still get the same error. My node version is:
```
node --version
v18.17.0
yarn --version
1.22.21
```
What should I do to use `yarn4`?
I'm optimizing some work by moving it from Orange to Python code, but I'm having some problems with the image embedders.

I'm trying to recreate my work using TensorFlow/Keras, but the outputs of the VGG16 network — the 4096 activations of the penultimate fully connected layer of this architecture — differ between Orange and Keras.
[In the Orange documentation it is written:](https://i.stack.imgur.com/X1Kya.png)
For python:
```
model_vgg = VGG16(include_top=True, weights='imagenet', pooling=None, input_shape=(224, 224, 3))
model_vgg16 = Model(inputs=model_vgg.input, outputs=model_vgg.layers[-2].output)
```
[VGG16](https://i.stack.imgur.com/31umC.png)
To resize the images to 224x224 pixels I use the same package and code as the function load_image_or_none -> https://github.com/biolab/orange3-imageanalytics/blob/master/orangecontrib/imageanalytics/utils/embedder_utils.py

And I get the same 224x224 resized image that Orange feeds to VGG16, exported via the Save Image widget.
[My resized images and Orange images are the same](https://i.stack.imgur.com/GBO7o.png)
Perhaps I'm making a mistake during preprocessing, since the Orange documentation says they use the original weights of the model.

To preprocess the images I tried Keras' `preprocess_input` for VGG16, and also did it manually:
```
import numpy as np

def process_vgg16(imgs):
    output = np.zeros(imgs.shape)
    # ImageNet channel means in BGR order, as used by the original VGG16
    VGG_MEAN = np.array([103.939, 116.779, 123.68], dtype=np.float32)
    for i in range(0, imgs.shape[0]):
        b = np.array(imgs[i, :, :, 2], dtype=np.float32)
        g = np.array(imgs[i, :, :, 1], dtype=np.float32)
        r = np.array(imgs[i, :, :, 0], dtype=np.float32)
        output[i, :, :, 0] = b - VGG_MEAN[0]
        output[i, :, :, 1] = g - VGG_MEAN[1]
        output[i, :, :, 2] = r - VGG_MEAN[2]
    # output = output / 255
    return output
```
Note: the images are grayscale, so all channels are the same.
Results:
[First three output of an image in orange (VGG16)](https://i.stack.imgur.com/Jex5Z.png)
[First three output of an image in Keras (VGG16)](https://i.stack.imgur.com/T29ph.png)
Would anyone know the reason? |
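A small check that may help isolate the preprocessing question: Keras' `preprocess_input` for VGG16 defaults to "caffe" mode, i.e. an RGB→BGR channel flip followed by subtraction of the per-channel ImageNet means — the same mean values as in the manual function above. The sketch below (function and variable names are mine, not Orange's or Keras') reproduces that in plain NumPy so the manual version can be compared against it. Note that if the loader already returns BGR (e.g. OpenCV), the flip is applied twice and the two pipelines will diverge.

```python
import numpy as np

# Standard ImageNet channel means in BGR order, as used by VGG16.
VGG_MEAN_BGR = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def caffe_preprocess(imgs_rgb):
    """Replicates Keras' vgg16.preprocess_input (default 'caffe' mode):
    flip RGB -> BGR, then subtract the per-channel ImageNet mean.
    Expects a float array of shape (N, H, W, 3) with values in 0-255."""
    bgr = imgs_rgb[..., ::-1].astype(np.float32)  # RGB -> BGR channel flip
    return bgr - VGG_MEAN_BGR                     # per-channel mean subtraction

# For a grayscale image (all channels equal, like the asker's data) the flip
# is a no-op, so only the mean subtraction matters:
gray = np.full((1, 224, 224, 3), 128.0, dtype=np.float32)
print(caffe_preprocess(gray)[0, 0, 0])
```

If the outputs still differ once the preprocessing matches, the remaining suspects are the resize interpolation and the weight files themselves.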
How to use Yarn version 4 in a Node.js project
I am facing an issue creating the shape below in CSS. I have no idea how to create the following shape, please help me.
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/KSioh.jpg |
Different shape in CSS
|html|css|shapes| |
I'm trying to write an xUnit test for `HomeController`, and some important configuration information is stored in **Nacos**.
**The problem now is that I can't get the configuration information in nacos.**
# Here is my test class for HomeController
```
using Xunit;
namespace ApiTestProject;

public class HomeControllerTest
{
    private readonly HomeController controller;

    public HomeControllerTest()
    {
        Init(); // register mocked services via DI
        controller = new HomeController();
    }
//in this method, I can not access the nacos config strings
private void Init()
{
var builder = WebApplication.CreateBuilder();
//add appsettings.json
builder.Host.ConfigureAppConfiguration(cbuilder =>
{
cbuilder.AddJsonFile("appsettings.Test.json", optional: false, reloadOnChange: true);
});
//get nacosconfig in appsettings
var nacosconfig = builder.Configuration.GetSection("NacosConfig");
builder.Host.ConfigureAppConfiguration((context, builder) =>
{
// add nacos
builder.AddNacosV2Configuration(nacosconfig);
});
//try to get the "DbConn" in nacos, but connstr is null
string connstr = builder.Configuration["DbConn"];
//other register logic...
}
}
```
# and the appsettings.Test.json file:
```
{
"NacosConfig": {
"Listeners": [
{
"Optional": false,
"DataId": "global.dbconn",
"Group": "DEFAULT_GROUP"
},
{
"Optional": false,
"DataId": "global.redisconfig",
"Group": "DEFAULT_GROUP"
},
{
"Optional": false,
"DataId": "global.baseurlandkey",
"Group": "DEFAULT_GROUP"
}
],
"Namespace": "my-dev",
"ServerAddresses": [
"http://mynacos.url.address/"
],
"UserName": "dotnetcore",
"Password": "123456",
"AccessKey": "",
"SecretKey": "",
"EndPoint": "",
"ConfigUseRpc": false,
"NamingUseRpc": false
}
}
```
**Update:** I've checked in detail to make sure there aren't any spelling mistakes or case-sensitivity issues, and the code in the **Init()** function works fine in **Program.cs** of the API project under test, but in this xUnit project it doesn't work at all.
I have this JSON expression
```json
{
"uid":false,
"token":null,
"ma":null,
"question":[
{
"questionId":47599,
"answerId":190054,
"answer":1
}
],
"iframe":true,
"ref":"<SOMEREFERRER>",
"ts":1711214476
}
```
that a web client (in this case, Google Chrome and Firefox on iOS) wants to send to a web server. It is a reply to a survey form that another server (SOMEREFERRER) has sent. I am rather unfamiliar with JSON and am wondering what exactly this string does when posted to the target server.

I have found that the survey server accepts this string multiple times and counts it as a "vote" each time. This is not supposed to happen.
|
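For reference, the string itself is just a data record, not executable code; parsing it shows the structure (a sketch, with the payload from the question pasted verbatim). Notably, `uid` is `false` and `token` is `null`, so nothing in the body uniquely identifies a submission — which would be consistent with the server having no way to tell repeated posts apart.

```python
import json

# The exact payload from the question, held as a string for parsing.
raw = '''{
  "uid": false,
  "token": null,
  "ma": null,
  "question": [
    { "questionId": 47599, "answerId": 190054, "answer": 1 }
  ],
  "iframe": true,
  "ref": "<SOMEREFERRER>",
  "ts": 1711214476
}'''

payload = json.loads(raw)
# One answered question: answerId 190054 was chosen for questionId 47599.
vote = payload["question"][0]
print(vote["questionId"], vote["answerId"])
```

The `ts` field looks like a Unix timestamp of the submission; unless the server checks it (or a session cookie) for duplicates, replaying the same body is indistinguishable from a fresh vote.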
iBooks folder permissions issue. I had access, now I don't have access. How can I regain access please? |
|macos|permissions| |
The same error occurred for me.

I just went to the directory where my current project's `.keystore` exists, deleted it, and then generated a new `.keystore`:

`keytool -genkey -v -keystore debug.keystore -storepass android -alias androiddebugkey -keypass android -keyalg RSA -keysize 2048 -validity 10000 -dname "CN=Android Debug,O=Android,C=US"`

I used this command for the project and then generated the SHA-1 key; now it's unique, and I use that SHA-1 key in my Firebase console.