Use this tag for questions about analytically integrating a function. |
|windows|assembly|floating-point|x86-64|calling-convention| |
In the DJL example, the AirPassengers dataset is used for prediction, but the code to train the model is not shown, since the example uses an already trained model.
After some googling, I found a Python solution for training that model: https://ts.gluon.ai/v0.11.x/index.html#simple-example, which is very simple code.
DJL does show an example of how to train a time series model, but it seems quite complicated, and I could not translate it to train the air passengers example. Any hint on how to do it?
Thank you very much. |
Training model for AirPassengers dataset |
|java|artificial-intelligence|djl| |
You're using
```rust
mut player_query: Query<(&mut Velocity, &mut Transform, &SpriteSize, &Player), With<PlayerId>>,
wall_query: Query<(&Transform, &SpriteSize), With<Barrier>>,
```
All entities that have `Transform`, `Velocity`, `SpriteSize`, `Player`, `PlayerId`, and `Barrier` components are in both queries.
There is no way for Rust or Bevy to tell that there aren't any such Entities.
Getting mutable references to `Transform` would therefore be [undefined behaviour](https://doc.rust-lang.org/reference/behavior-considered-undefined.html) because it violates the aliasing rules of mutable references.
To fix it, follow one of the suggestions from the error message, for example adding a `Without<Player>` filter to `wall_query` so the two queries are provably disjoint. |
The matrix you've posted is symmetric, and real-valued. (In other words, `A = A.T`, and it has no complex numbers.) This matters because all matrices which are symmetric and real-valued are [normal matrices](https://en.wikipedia.org/wiki/Normal_matrix). [Source](https://en.wikipedia.org/wiki/Symmetric_matrix#Symmetry_implies_normality). If the matrix is normal, then any polar decomposition of it follows `PU = UP`. [Source](https://math.stackexchange.com/questions/3038582/prove-that-the-polar-decomposition-of-normal-matrices-a-su-is-such-that-su).
Any diagonal matrix is also symmetric. However, technically the matrix you have posted is not diagonal - it has entries outside its main diagonal. The matrix is only [tri-diagonal](https://en.wikipedia.org/wiki/Tridiagonal_matrix). These matrices are not necessarily symmetric. However, if your tridiagonal matrix is symmetric and real-valued, then its polar decomposition is commutative.
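A quick numerical illustration of the symmetry-implies-normality step (a sketch using a randomly generated matrix; the seed and size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
A = S + S.T  # force the matrix to be symmetric

# Normal means A commutes with its conjugate transpose; for a
# real-valued matrix the conjugate transpose is just the transpose.
print(np.allclose(A @ A.T, A.T @ A))  # True
```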
In addition to mathematically proving this idea, you can also check it experimentally. The following code generates thousands of matrices, and their polar decompositions, and checks if they are commutative.
```python
import numpy as np
from scipy.linalg import polar

N = 4
iterations = 10000
for i in range(iterations):
    A = np.random.randn(N, N)
    # A = A + A.T
    U, P = polar(A)
    are_equal = np.allclose(U @ P, P @ U)
    if not are_equal:
        print("Matrix A does not have commutative polar decomposition!")
        print("Value of A:")
        print(A)
        break
    if (i + 1) % (iterations // 10) == 0:
        print(f"Checked {i + 1} matrices, all had commutative polar decompositions")
```
If you run this, it will immediately find a counter-example, because the matrix is not symmetric. However, if you uncomment `A = A + A.T`, which forces the random matrix to be symmetric, then all of the matrices work.
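To tie this back to the tridiagonal case specifically, a single symmetric, real tridiagonal matrix can be checked directly (the entry values below are made up for illustration):

```python
import numpy as np
from scipy.linalg import polar

# A symmetric, real, tridiagonal matrix (illustrative values)
A = (np.diag([2.0, 3.0, 2.0, 3.0])
     + np.diag([1.0, 1.0, 1.0], k=1)
     + np.diag([1.0, 1.0, 1.0], k=-1))

U, P = polar(A)                   # right polar decomposition: A = U @ P
print(np.allclose(A, U @ P))      # True: the factors reconstruct A
print(np.allclose(U @ P, P @ U))  # True: symmetric + real => normal => commutes
```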
Lastly, if you need a left-sided polar decomposition, you can use the `side='left'` argument to `polar()` to get that. The [documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.polar.html) explains how to do this. |
I was able to keep using Eclipse, but first: the Google App Engine tools are no longer available, so I have to use Maven to import most of my libraries:
[Maven New Project][1]
Set `java8` to false and `Use App Engine Api` to true:
[Maven Properties][2]
Then in the terminal, I suggest setting the appengine-version to 2.0.24:
[Appengine version][3]
[1]: https://i.stack.imgur.com/Esfmv.png
[2]: https://i.stack.imgur.com/2W3Jp.png
[3]: https://i.stack.imgur.com/nso9X.png |
I have an Excel spreadsheet with multiple sheets. I read the entire workbook into a dictionary where the key is the sheet name and the value is a DataFrame of that sheet's data. I am using the pydantic model below and can't seem to get it to update the DataFrame value to the default. If I input enabled/disabled, it works and validation passes, but I would like to add a default value and have that reflected in the DataFrame. I have played with different ways of doing this and can't figure it out. I am trying to have a `set_default_values` function do a pre-check and set the default values if they are not present, but this does not seem to work.
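For reference, stripped of pydantic and pandas, the pre-check behaviour I'm after boils down to this (the field name and default below are just examples):

```python
# Plain-Python sketch of the intended pre-check: replace empty or
# missing values with a per-field default before validation runs.
DEFAULTS = {"location": "disabled"}

def fill_defaults(row: dict) -> dict:
    filled = dict(row)
    for name, default in DEFAULTS.items():
        if filled.get(name) in ("", None):
            filled[name] = default
    return filled

print(fill_defaults({"mgmt_vip": "10.1.1.1", "location": ""}))
# {'mgmt_vip': '10.1.1.1', 'location': 'disabled'}
```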
```python
import pandas as pd
from pydantic import BaseModel, Field, validator, field_validator, model_validator
from pydantic.networks import IPv4Address
from typing_extensions import Annotated, Literal

ENABLED_DISABLED = Literal["disabled", "enabled"]

class GlobalSchema(BaseModel):
    mgmt_vip: Annotated[IPv4Address, Field(description="Global vip")]
    mgmt_gw: Annotated[IPv4Address, Field(description="Global gw")]
    radius1: Annotated[IPv4Address, Field(description="Primary Radius IP")]
    radius2: Annotated[IPv4Address, Field(description="Backup Radius IP")]
    location: Annotated[ENABLED_DISABLED, Field(description="Location", default="disabled")]

    @model_validator(mode='before')
    def set_default_values(cls, values):
        print(f'type is {type(values)}')
        # Iterate over all fields and set default value if field is empty
        for field_name, field_value in values.items():
            if field_value == "" or field_value is None:
                default_value = cls.__fields__[field_name].default
                values[field_name] = default_value
        # print(f'type is {type(values)}')
        return values

def update_schema_instance(model_instance):
    # Convert model instance to a dictionary
    values = model_instance.dict()
    return GlobalSchema(**values)

def get_dataframe_data(excel_file: str) -> dict:
    """
    Reads all sheets from an Excel file into a dictionary of DataFrames.

    Parameters:
        excel_file (str): Path to the Excel file.

    Returns:
        dict: A dictionary where keys are sheet names and values are DataFrames.
    """
    # Read all sheet names from the Excel file
    sheet_names = pd.ExcelFile(excel_file).sheet_names
    # Create an empty dictionary to store DataFrames
    sheet_name_df = {}
    # Loop through each sheet name
    for sheet_name in sheet_names:
        # Read the sheet into a DataFrame and store it under the sheet name
        sheet_name_df[sheet_name] = pd.read_excel(excel_file, sheet_name=sheet_name, na_filter=False)
    return sheet_name_df

if __name__ == '__main__':
    # EXCEL_FILE and validate_schema are defined elsewhere in my project
    sheets_data = get_dataframe_data(EXCEL_FILE)
    print(validate_schema(sheets_data))
``` |
I have this request, on my node.js application :
```js
return new Promise((resolve, reject) => {
const options = {
hostname: 'py', // docker container
port: 8000,
path: '/resume',
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
file
})
}
const req = http.request(options, (res) => {
const body = [];
console.log('STATUS: ' + res.statusCode);
console.log('HEADERS: ' + JSON.stringify(res.headers));
res.setEncoding('utf8');
res.on('data', (chunk) => {
console.log('BODY: ' + chunk);
body.push(chunk);
});
resolve(body);
});
req.on('error', (error) =>{
console.log('problem with request: ' + error.message);
reject(error)
});
req.end();
})
```
And this, on my python app, that I copied from this topic => https://stackoverflow.com/a/65141532/14317753
```python
@app.post("/resume")
async def resume(request: Request):
da = await request.form()
da = jsonable_encoder(da)
print('file', da)
return da
```
But unfortunately, `da` is an empty string.
Obviously, I've tried everything: changing the GET request to POST and putting the base64 file body param in the GET params, using `request.json()` and `request.body()` instead of `request.form()`, for the fixes I remember.
Thanks for your help. |
I'm working on graphs, each with 140 nodes and edges of positive integer weight. The sum of the weights is 420, so there are at most 420 edges in each graph. Some of the graphs are not fully connected.
The rule for cutting the graph is that each group must have 4 nodes, meaning 35 groups. The objective is to minimize the summed weight of the cut edges (edges that connect nodes of different groups).
Can somebody help me with some algorithm suggestions? (I know that splitting the graph in two and optimizing the split cut each time can give a nice solution, but I'm looking for the strictly optimal solution.)
As you might have guessed, the background of this problem is dorm assignment: each student says which three people they would like to be in the same dorm with, and the teachers find the strictly optimal solution based on student satisfaction. Each dorm must house 4 students.
|
Several minimum cuts on a graph with limited group size |
|algorithm|graph-theory| |
```js
async function getData() {
  const apiEndpoint = process.env.NEXT_AUTH_URL;

  const res = await fetch(`${apiEndpoint}api/posts`, {
    cache: "no-store",
  });

  if (!res.ok) {
    throw new Error("Failed to fetch data");
  }

  return res.json();
}
```
If you use this function in a client component, `await fetch('/api/posts')` also works; but in a server component you should declare it as `` await fetch(`${apiEndpoint}api/posts`) ``.
Another thing: if the component is a server component, you can use an env file variable directly, like `process.env.NEXT_AUTH_URL`. But if it is a client component, you can only access variables defined with the `NEXT_PUBLIC_` prefix. |
pydantic validation does not add default value |
|python|pandas|pydantic| |
We managed to initially create a TLS connection from a server to the Java client listening for incoming connections.
We created a socket using a ServerSocket and then wrapped it into an SSLSocket.
The code below is used:
```java
@Override
boolean openSessionImpl() throws Exception {
    LOGGER.info("Creating a Call Home TLS connection to {}:{}", getProperties().getHost(),
            getProperties().getChTlsPort());
    // create listening socket.
    Socket socket = createSocket(getProperties().getChTlsPort());
    if (socket == null) {
        return false;
    }
    try {
        SSLSocketFactory sslSf = getSSLSocketFactory();
        sslSocket = (SSLSocket) sslSf.createSocket(socket, null, socket.getPort(), false);
        sslSocket.setUseClientMode(true);
        sslSocket.setSoTimeout(1); // same as in remote-cli
    } catch (IOException | GeneralSecurityException e) {
        throw new NetconfException("Could not create a TLS/SSL Socket for Call Home to "
                + socket.getInetAddress().getHostAddress() + ":" + socket.getPort(), e);
    }
    LOGGER.info("Established Call Home TLS connection to {}:{} ", getProperties().getHost(),
            getProperties().getChTlsPort());
    return true;
}
```
This is how we create the socket:
```java
Socket createSocket(int port) {
    Socket socket = null;
    try (ServerSocket serverSocket = new ServerSocket()) {
        serverSocket.bind(new InetSocketAddress(port));
        serverSocket.setSoTimeout(properties.getCallHomeConnectTimeout());
        LOGGER.info("Call Home listening on port [{}]", port);
        socket = serverSocket.accept();
    } catch (Exception e) {
        LOGGER.warn("Failed to create a TCP server socket: ", e);
    }
    return socket;
}
```
We then lose the connection, and when we set up the connection again the same way, we get this error:
> 2024-03-30 18:12:16,364 (Slf4jLogConsumer.java:73) INFO : STDERR: [INF]: LN: Successfully connected to host.testcontainers.internal:4335 over IPv4.
> INFO : STDERR: [ERR]: LN: SSL_accept failed (Success).
Any idea what we are doing wrong, or hints on how to find it out, would be greatly appreciated.
Also, do you think `-Djavax.net.debug=all` is good to use?
br,
//mike |
Reversed TLS re-connection issue |
|java|sockets|ssl| |
In my active project with 3 devs, I got the same error during a code merge.
But the view id the error pointed to was inside the content view, and `setContentView()` was called before `findViewById()`.
This issue was fixed easily after I cleaned the project, rebuilt and reinstalled it. |
My Android app uses Fragments and includes Google signin authorization via Firebase. After signing in and starting a Fragment transaction, the following appears, overlaying the Fragment underneath it:
[![enter image description here][1]][1]
It's as if the gray screen was a Dialog window. When I touch the screen, the gray view disappears:
[![enter image description here][2]][2]
Has anyone here seen this problem or know what's causing it?
It doesn't happen when the user signs in with an email address (passwordless), only with Google signin.
[1]: https://i.stack.imgur.com/H7AYV.jpg
[2]: https://i.stack.imgur.com/Z5sfr.jpg |
Type in terminal
$ swift -v
Output in terminal
> Welcome to Apple Swift version 5.2.4 (swiftlang-1103.0.32.9 clang-1103.0.32.53). |
I'm using `py setup.py bdist_msi` to generate a single Windows executable for my Python application. This works fine on my computer, but it does not work when executed in a GitLab pipeline. Does anyone know how to solve this issue? I'm using Python 3.11.8 on my local machine and in the pipeline. Here is the error:
[![enter image description here][1]][1]
Here is my setup.py file:
```python
from cx_Freeze import setup, Executable
import sys

directory_table = [("ProgramMenuFolder", "TARGETDIR", "."),
                   ("MyProgramMenu", "ProgramMenuFolder", "MYPROG~1|My Program")]

msi_data = {"Directory": directory_table,
            "ProgId": [("Prog.Id", None, None, "This is a description", "IconId", None)],
            "Icon": [("IconId", "matplotlib.ico")]}

files = ['param_list.py', 'mca_ioconfig_parser.py', 'mca_package_manifest_parser.py', 'mca_rti_log_parser.py',
         'ui_interface.py', 'resources_rc.py']

bdist_msi_options = {"add_to_path": False,
                     "data": msi_data,
                     'initial_target_dir': r'[ProgramFilesFolder]%s' % 'yammiX',
                     "upgrade_code": "{96a85bac-52af-4019-9e94-3afcc9e1ad0c}"}

build_exe_options = {"excludes": [], "includes": []}

executables = Executable(script="toolName.py",
                         base="Win32GUI" if sys.platform == "win32" else None,
                         icon="matplotlib.ico",
                         shortcut_name="toolName",
                         shortcut_dir="DesktopFolder")

setup(name="toolName",
      version="0.1.5",
      author="toolName",
      description="",
      executables=[executables],
      options={"build_exe": build_exe_options,
               "bdist_msi": bdist_msi_options})
```
Here is my yml file:
```yaml
image: python:3.11.8

# List of stages for jobs, and their order of execution
stages:
  - build
  - test
  - deploy

# This job runs in the build stage, which runs first.
build-job:
  stage: build
  # Update stuff before building versions
  before_script:
    - apt-get update -q -y
    - apt install -y python3-pip
    - apt install -y python3-venv
    - python3 -m venv .venv
    - source .venv/bin/activate
    - python3 --version
    - python3 -m pip install wheel
    - python3 -m pip install ansi2html
    - python3 -m pip install certifi
    - python3 -m pip install charset-normalizer
    - python3 -m pip install click
    - python3 -m pip install colorama
    - python3 -m pip install comm
    - python3 -m pip install contourpy
    - python3 -m pip install pyinstaller
    - python3 -m pip install pywin32-ctypes
    - python3 -m pip install windows-filedialogs
    - python3 -m pip install -r requirements.txt
  script:
    - python3 setup.py -q bdist_msi
  artifacts:
    paths:
      - build/

# This job runs in the test stage.
unit-test-job:
  stage: test
  script:
    - echo "Running unit tests... This will take a few seconds."
    - echo "Code coverage is 100%"

# This job runs in the deploy stage.
deploy-job:
  stage: deploy
  script:
    - echo "Deploying application..."
    - echo "Application successfully deployed."
```
[1]: https://i.stack.imgur.com/QkIW7.png |
Invalid command 'bdist_msi' when trying to create MSI installer with 'cx_Freeze' in Gitlab CI/CD Pipeline |
Golang == Error: OCI runtime create failed: unable to start container process: exec: "./bin": stat ./bin: no such file or directory: unknown |
|docker|go|kubernetes|google-kubernetes-engine| |
I am attempting to implement Segment Analytics on my `Next.js` site. I am leveraging `NextAuth` for authentication through Google. All of this is working well, but I am now trying to call Segment's [`Identify`](https://segment.com/docs/connections/spec/identify/) method when the user logs in.
However, it does not appear that `NextAuth` emits a client-side event which I can listen for. I have found that I can attach to `events` on the server-side, but that doesn't help me.
`/src/app/api/[...nextauth]/route.js`:
```js
import NextAuth from "next-auth";
import GoogleProvider from "next-auth/providers/google";
export const authOptions = {
providers: [
GoogleProvider({
clientId: process.env.GOOGLE_CLIENT_ID,
clientSecret: process.env.GOOGLE_CLIENT_SECRET,
}),
],
events: {
signIn: async (data) => {
console.log("server-side signIn event detected", data);
},
},
};
export const handler = NextAuth(authOptions);
export { handler as GET, handler as POST };
```
Any clever ideas on how I can push this down to the client, or leverage the client-side `useSession` hook to detect this? I would prefer to not call `identify` on every page load, and only call when the user actually just logged in.
I tried building a client-side function and import it to the server, but got the following error:
> Attempted to call the default export of /SegmentIdentify.js from the server but it's on the client. It's not possible to invoke a client function from the server, it can only be rendered as a Component or passed to props of a Client Component. |
|react-native|tizen|samsung-smart-tv|tizen-web-app| |
I am developing a movie OTT application using React Native Web.
Everything works fine in my application, except that I am unable to play a local video on my landing screen.
On my landing screen I play a video in `.mp4` format; it plays perfectly in the Tizen simulator, but after installing the application on the physical device it does not play.
Please help me with this.
Here is my `config.xml` file privilege:
```
<tizen:privilege name="http://tizen.org/privilege/internet"/>
<access origin="*" subdomains="true"></access>
``` |
This keymap will **duplicate the current line** if nothing is selected or **duplicate the selected words**.
1. Access:
**File/Preferences/Keyboard shortcuts**
2. In the right top corner, click on **Open Keyboard Shortcuts (JSON)**.
3. Insert this keymap:
```json
{
"key": "shift+alt+d",
"command": "editor.action.duplicateSelection"
}
```
4. Use **alt+arrows up/down** to move the line (default behaviour of VS Code). |
You can modify this line:
`If Len(arrData(i, 12)) * Len(arrData(i, 13)) > 0 Then`
to
`If Len(arrData(i, 12)) * Len(arrData(i, 13)) > 0 And rngData.Cells(i, 12).EntireRow.Hidden = False Then`
|
I suggest using the official Vue 2 scaffolding tool, [vue/cli](https://cli.vuejs.org/). If you already have it installed and its version number is >= 5.x (check with the command `vue --version`), it will install webpack 5, so first uninstall it with `npm uninstall -g @vue/cli`. Then reinstall using a version number that will install webpack 4:
```lang-shell
npm install -g @vue/cli@4.5.19
```
Then run the command to create a new Vue project, `vue create my-project`.
This will scaffold a Vue project for you with webpack 4. You can confirm by checking `"node_modules/webpack"` in package.json.
|
Yes, vanilla SwaggerUI supports PKCE!
I run SwaggerUI with PKCE with the following command:
```
docker run --rm -p 80:8080 \
  -v ~/Desktop/SWAGGER_UI:/foo \
  -e SWAGGER_JSON=/foo/my-service-openapi.yml \
  -e OAUTH_CLIENT_ID=my-service-client-id \
  -e OAUTH_SCOPES="openid offline" \
  -e OAUTH_USE_PKCE=true \
  swaggerapi/swagger-ui
```
(Place your OpenAPI spec in "~/Desktop/SWAGGER_UI/my-service-openapi.yml")
You can see the settings [here][1]
In the OpenAPI Spec you can define it like described [here][2]!
You have to change to "authorizationCode":
```yaml
components:
  securitySchemes:
    oAuthSample:
      type: oauth2
      description: This API uses OAuth 2
      flows:
        authorizationCode:
          authorizationUrl: https://your-host.com/oauth2/auth
          tokenUrl: https://your-host.com/oauth2/token
          refreshUrl: https://your-host.com/oauth2/token
          scopes:
            openid: openid scope
            offline: offline scope
```
[1]: https://swagger.io/docs/open-source-tools/swagger-ui/usage/oauth2/
[2]: https://swagger.io/docs/specification/authentication/oauth2/ |
I have a dataframe which has columns as per the screenshot below. I want to add an additional column "all_data" which will hold all the data of the columns in it.
[enter image description here](https://i.stack.imgur.com/PJPeO.png)
Here is what I tried:
```python
from pyspark.sql.functions import collect_list, udf
from pyspark.sql.types import ArrayType, StringType

def read_file_content(file_path):
    content = spark.read.json(file_path).rdd.map(lambda x: x[0]).collect()
    return content

read_file_content_udf = udf(read_file_content, ArrayType(StringType()))
file_with_all_data = daftrame.withColumn("all_data", read_file_content_udf("file_name_input"))
```
However, with the above approach I get this error:
```
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 63.0 failed 4 times, most recent failure: Lost task 0.3 in stage 63.0 (TID 5275) (10.99.0.10 executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/root/.ipykernel/2377/command-3710246798592077-2084292290", line 12, in read_and_collect_data
  File "/databricks/python/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 284, in _modified_open
    return io_open(file, *args, **kwargs)
FileNotFoundError: [Errno 2] No such file or directory: 'abfss://soruce@storage_abs.dfs.core.windows.net/xbyte/keyword_search/2024/03/asdaf-adase2-47217e-31-0150bda34e47_20240308_09-19-35.json'
```
whereas the file is available and I can read it into a separate dataframe.
So the final dataframe would have all the columns, along with the additional column "all_data" holding the data from each file individually in the rows.
The column "file_name_input" has the file location, basically something like "abfss://soruce@storage_abs.dfs.core.windows.net/bite/searc/2024/03/asdaf-adase2-47217e-31-0150bda34e47_20240308_09-19-35.json",
and likewise there are 196 other file names and locations in the column "file_name_input".
Is it possible to read all the files individually and store the data in the additional column "all_data" respectively? |
I want to apply a blurred background effect during a call using the Vidyo SDK for Android. I downloaded their sample application and tried using the `setCameraBackgroundEffect` method. However, the end result is always the same; I get this error: `VIDYO_CONNECTORCAMERAEFFECTERROR_LoadEffectFailed`.
Here's exactly what I did:
1. I cloned this [repo][1].
2. As instructed I downloaded [Vidyo SDK 23.1.1.5][2]
3. I copied VidyoClient.aar and banuba_effect_player_c_api-release.aar to app/libs
4. In activity_video_conference.xml I added an extra button (SET BLUR) that calls setBlur method.
5. I launch the app and click on 'VIDEO CONFERENCE.' The camera preview starts, but when I click on the 'SET BLUR' button, the camera effect is not applied. I receive a 'VIDYO_CONNECTORCAMERAEFFECTERROR_LoadEffectFailed' error via the CameraEffectErrorListener.
```java
private void setBlur() {
    connector.registerCameraEffectErrorListener(connectorCameraEffectError ->
            android.util.Log.d("yomama", "onCameraEffectError: " + connectorCameraEffectError));

    BnbResources bnbResources = BanubaHelpersKt.prepareBnbResources(this);

    ConnectorCameraEffectInfo cameraEffect = new ConnectorCameraEffectInfo();
    cameraEffect.effectType = Connector.ConnectorCameraEffectType.VIDYO_CONNECTORCAMERAEFFECTTYPE_Blur;
    cameraEffect.blurIntensity = 5;
    cameraEffect.pathToEffect = bnbResources.getBackgroundBlurEffect().getAbsolutePath();
    cameraEffect.pathToResources = bnbResources.getRoot().getAbsolutePath();
    cameraEffect.token = "yP3dl+HLWuvbCBP65hfTXjRKfEC8pkAjGbiOTzWJ7EqOv1CPuRAzMXL/FL+QCPM/+L9SaFjkOqgbUjzlCV3HG5IqgIXScOmDG9AFZKaWjzgY9JsbOjP1ryvjz0GY2fS7CmfsNJt8mshflXzNW2pGEEOv1QRxbdMYz4nU1MiT0B54amokYGrzOBjCPgaTVJURMfcgOY1ch7q8Ga6JtgWgEGQZFiieAqb4MinvoBiti3nYNt4c6bzFAoAetuwar2LlzXwmjvRLhL+Ij/tQ4s7jkZQmq1pqg1JK4K3dsdcB3VM9ZHn70K5+f6l74Teu0KE1RF6efLH86HsU5bbTNmzqNftbmYPXhB4SRHRRjmXk2FB8fE8B43S/j15InvN/RHctHcMYmBeyjmv2vJvaMQIWMboo86S8Ati4R147u7JSetkFnFJF1wGAz77DPQUiFdyIdzGI6qxKF8rsLqgqhXRrlZXfnxkupsqjwmA5fbR4pxrhq1xRWGngWQd0xP1Y9xl7GD9fFNcyrFCvGIHb2DaHdsjOYDhtfRouWJcYTD2lE2juHMPIpDforQDjwQG7r0hHE6N0sWafyQ/SbNHrOTVY6mdJGe9CMvG9";

    connector.setCameraBackgroundEffect(cameraEffect);
}
```
What am I doing wrong? How to correctly apply camera background effects?
[1]: https://github.com/Vidyo/VidyoPlatform-Connector-Android
[2]: https://static.vidyo.io/23.1.1.5/package/VidyoClient-AndroidSDK.zip |
|python|gitlab-ci|cx-freeze|gitlab-ci.yml| |
Right now I'm using an x-csrf token, ctype "password" and cvalue (my actual username and password as cvalue and password), a user id which is just some random letters; for the captcha token I wrote in CAPTCHATOKENFROMFUNCAPTCHA, the captcha provider as arkose labs, and "captcha" as the captcha id. Default stuff for headers, nothing Roblox-specific.
What else do I need? Do I need to render an actual captcha?
To bypass CORS I'm using roproxy.
I've tried copying the cookies from the official Roblox login page requests, and trying to get the authority to count as roblox.com. Doesn't seem to work.
The response is just a 403, code 0. |
Why does the Roblox API keep 403'ing me? |
|javascript|api|roblox| |
This error occurs due to dependency issues. To fix this:
Step 1: Uninstall the following libraries using these commands:
```
pip uninstall opencv-contrib-python
pip uninstall opencv-contrib-python-headless
pip uninstall opencv-python
```
Step 2: Install opencv
```
pip install opencv-python
``` |
I've been trying everything but I can't generate the PDF properly; either I get a blank PDF or a text file with gibberish characters.
```js
try {
  const response = await getFunc(getUrl, token, header);
  const url = window.URL.createObjectURL(new Blob([response]));

  // Create a link to download the PDF
  const link = document.createElement("a");
  link.href = url;
  link.setAttribute("download", "miArchivo.pdf"); // Change to .pdf
  document.body.appendChild(link);
  link.click();

  // Clean up after the download
  window.URL.revokeObjectURL(url);
  document.body.removeChild(link);
} catch (error) {
  console.error("Error al descargar el certificado:", error);
}
```
I need the PDF file
By the way, this is how the API gives me the PDF.
Postman View of the response: [Postman][1]
Postman headers that came with the file: [Headers][2]
[1]: https://i.stack.imgur.com/vDDEJ.png
[2]: https://i.stack.imgur.com/y2lMc.png |
How can I let a TradingView strategy in Pine Script v5 go long at the open of the next day after the entry conditions are met, but sell at the close(!) of the very day the exit conditions are met?
Using `process_orders_on_close=true` is no solution, because it prevents TV from buying at the open of the next candle.
The code snippet so far is this:
```
longCondition = up_cross
exitCondition = down_cross
// Buy at Open of Next Candle
strategy.entry("Long", strategy.long, when=longCondition and time >= startDate and time <= endDate ? open : na)
// Sell at Close of Day
strategy.close("Long", when=exitCondition and time >= startDate and time <= endDate ? close : na)
``` |
How to sell at end of candle but buy at open on another in Trading View? |
|pine-script-v5|tradingview-api| |
I tried creating a registration page and connecting it to the localhost database. After I input the data, I get an error, `flutter: FormatException: Unexpected end of input (at character 1)`, and the data is not saved into the database. I have checked whether the data entered is the same, but I still get the same error.
**This is my code. I hope someone can help me with my problem.**
```dart
import 'dart:convert';
import 'package:flutter/material.dart';
import 'package:appproject/pages/main_home.dart';
import 'package:http/http.dart' as http;

class RegisterPage extends StatelessWidget {
  RegisterPage({super.key, required String title});

  TextEditingController nama = TextEditingController();
  TextEditingController email = TextEditingController();
  TextEditingController password = TextEditingController();
  TextEditingController no_hp = TextEditingController();

  Future<void> insertrecord() async {
    if (nama.text != "" || email.text != "" || password.text != "") {
      try {
        String uri = "http://192.168.1.5/hp_api/insert_data.php";
        var res = await http.post(Uri.parse(uri), body: {
          "nama": nama.text,
          "email": email.text,
          "password": password.text
        });
        var response = jsonDecode(res.body);
        if (response["success"] == "true") {
          print("Data Berhasil Disimpan");
        } else {
          print("Data Gagal Disimpan");
        }
      } catch (e) {
        print(e);
      }
    } else {
      print("Dimohon Mengisi Semua Data");
    }
  }

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: const Text('Register'),
        ),
        body: Padding(
          padding: const EdgeInsets.all(16.0),
          child: Column(
            children: [
              Container(
                child: TextFormField(
                  controller: nama,
                  decoration: InputDecoration(hintText: 'Nama Lengkap'),
                ),
              ),
              const SizedBox(height: 16.0),
              Container(
                child: TextFormField(
                  controller: email,
                  decoration: InputDecoration(hintText: 'Email'),
                ),
              ),
              const SizedBox(height: 16.0),
              Container(
                child: TextFormField(
                  controller: password,
                  decoration: InputDecoration(hintText: 'Password'),
                ),
              ),
              const SizedBox(height: 16.0),
              ElevatedButton(
                child: const Text('Register'),
                onPressed: () {
                  insertrecord();
                  Navigator.push(
                    context,
                    MaterialPageRoute(
                      builder: (context) => const MainHome(),
                    ),
                  );
                },
              ),
            ],
          ),
        ),
      ),
    );
  }
}
```
I get the error `flutter: FormatException: Unexpected end of input (at character 1)`; the connection to the DB works, but the data is not inserted into the DB. |
So I'm working on a speed typing game using React, and I have this function that removes the word from the array after it's either skipped or entered correctly. I think there's a better way to do it by creating a new array, without directly removing the word from my words array, but I'm a bit stuck on implementation. What would be the best approach to refactor the remove-word function?
```js
import wordsArray from "./components/wordsArray"; // import at the top

// code inside of export default function App() {

const getRandomWord = () => {
  return wordsArray[Math.floor(Math.random() * wordsArray.length)];
};

const [word, setWord] = useState(getRandomWord());

const removeWord = () => {
  const removedWordIndex = wordsArray.indexOf(word)
  wordsArray.splice(removedWordIndex, 1)
  if (wordsArray.length === 0) {
    setGameOver(true)
  }
}
``` |
best way to remove a word from an array in a react app |
|reactjs|arrays|splice| |
Emulator doesn't have an easy, built-in way to send custom `channelData`. There's a few different ways you can (kind of) do this, though:
### Debug Locally
As @EricDahlvang mentioned (I forgot about this), you can [debug any channel locally](http://web.archive.org/web/20200722110139/https://blog.botframework.com/2017/10/19/debug-channel-locally-using-ngrok/)
### WebChat
Emulator is built on [WebChat](https://github.com/Microsoft/BotFramework-WebChat), so the output will be exactly the same. However, you miss some of the debugging functionality from Emulator.
1. Clone a [WebChat Sample](https://github.com/microsoft/BotFramework-WebChat/tree/master/samples/15.a.backchannel-piggyback-on-outgoing-activities)
2. Edit `index.html` with `http://localhost:3978/api/messages` and your `channelData`
3. Run `npx serve`
4. Navigate to `http://localhost:5000`
### Modify Messages In OnTurnAsync()
This would only be for testing/mocking purposes and you'd want to ensure this doesn't go into production, but you can modify incoming messages inside `OnTurnAsync()` and manually add the `channelData`.
Something like:
public async Task OnTurnAsync(ITurnContext turnContext, CancellationToken cancellationToken = default(CancellationToken))
{
var activity = turnContext.Activity;
activity.ChannelData = new
{
testProperty = "testValue",
};
You could even make it happen with only specific messages, with something like:
if (turnContext.Activity.Text == "change channel data")
{
activity.ChannelData = new
{
testProperty = "testValue",
};
}
There's a lot of different options with this one, you just need to make sure it doesn't go into production. |
Laravel spatie permission many to through? query |
|docker|nginx|scalability| |
|python|machine-learning|data-preprocessing|feature-scaling| |
TL;DR
Just adding an `id` field to the returned user solved the problem for me.
Example:
import nextAuth from "next-auth/next";
import { AuthOptions } from "next-auth";
import CredentialsProvider from "next-auth/providers/credentials";
export const authOptions: AuthOptions = {
providers: [
CredentialsProvider({
credentials: {
email: {},
password: {},
},
async authorize(credentials) {
const user = { id: "hello", name: "jay", password: "dave" };
if (!user || !user.password) return null;
const passwordsMatch = user.password === credentials?.password;
if (passwordsMatch) return user;
return null;
},
}),
],
};
export default nextAuth(authOptions);
How I decided to add `id`: I looked into the types.
This is how the credentials config interface is defined:
export interface CredentialsConfig<
C extends Record<string, CredentialInput> = Record<string, CredentialInput>
> extends CommonProviderOptions {
type: "credentials"
credentials: C
authorize: (
credentials: Record<keyof C, string> | undefined,
req: Pick<RequestInternal, "body" | "query" | "headers" | "method">
) => Awaitable<User | null>
}
As you can see, it returns an awaitable `User`, and `User` is defined like this:
export interface DefaultUser {
id: string
name?: string | null
email?: string | null
image?: string | null
}
/**
* The shape of the returned object in the OAuth providers' `profile` callback,
* available in the `jwt` and `session` callbacks,
* or the second parameter of the `session` callback, when using a database.
*
* [`signIn` callback](https://next-auth.js.org/configuration/callbacks#sign-in-callback) |
* [`session` callback](https://next-auth.js.org/configuration/callbacks#jwt-callback) |
* [`jwt` callback](https://next-auth.js.org/configuration/callbacks#jwt-callback) |
* [`profile` OAuth provider callback](https://next-auth.js.org/configuration/providers#using-a-custom-provider)
*/
export interface User extends DefaultUser {}
As you can see, `id` is a compulsory field.
This is the weirdest thing so far and cost me a few days of debugging. I think there is a bug in the T5 greedy search algorithm.
During training:
The cross-entropy loss becomes smaller and smaller, to the point where, if you do argmax(dim=2) on the logits, you get exactly the same result as the labels.
This means that when you feed the same input string into the model.generate(), you SHOULD get the same output.
However, generate() (inference) produces slightly different output.
This is very annoying. |
First off: Are you sure that you are picking the right kind of model here? Training a regression model for each patient sounds like you want a linear mixed effects model here (LME), not a neural network. In general, the advantages of neural networks fly out the window if you train them on single individuals, especially if you train a high number of parameters.
LMEs are very common if you want to look at patient data. They will allow you to infer the population mean of an effect, as well as its variance across the population. They replace the constant parameters in a traditional regression model with a parameter distribution. Each individual then has its parameter set, following the common distribution.
There is a Python implementation in `statsmodels`, documented here: https://www.statsmodels.org/stable/mixed_linear.html
Now, about the number of rows of your .csv files: I suggest you import them into a `dataframe` with pandas and then use `dataframe.shape` to get the number of rows and columns of each of them. Then you make the number of rows an input to your program.
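To make the LME idea concrete, here is a hedged sketch with `statsmodels` on synthetic "patient" data; all variable names and numbers are invented for illustration, and the formula interface shown is only one way to specify the model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients, n_obs = 20, 15

# Long-format data: one row per (patient, time) observation.
patient = np.repeat(np.arange(n_patients), n_obs)
time = np.tile(np.arange(n_obs, dtype=float), n_patients)

# Each patient gets a random intercept drawn around a population mean of 10,
# while the slope on time (0.5) is shared across the population.
intercepts = rng.normal(10.0, 2.0, n_patients)
y = intercepts[patient] + 0.5 * time + rng.normal(0.0, 1.0, patient.size)

df = pd.DataFrame({"patient": patient, "time": time, "y": y})

# Random intercept per patient; fixed (population-level) effect of time.
model = smf.mixedlm("y ~ time", df, groups=df["patient"])
result = model.fit()
print(result.summary())
```

The fitted output reports both the population-level fixed effects and the estimated variance of the per-patient intercepts ("Group Var"), which is exactly the across-patient variability described above.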
|
It is normal to get a very low use of the compute units and a high use of the memory bandwidth, because this task has a **very low arithmetic intensity** (1 integer operation for 2 x 16-bit items, that is 0.25, not to mention the stores). GPUs are optimized for a much higher arithmetic intensity, at least >10. You can benefit from their high memory bandwidth though.
However, not saturating the RAM is an issue. This is certainly due to the memory accesses, which are not coalesced here. You can load data and shuffle items within warps before computing the subtraction. However, half the threads must be disabled for the stores using basic conditions (and possibly for the subtraction too, but I think it does not matter). It should be enough to saturate memory. I do not think shared memory is useful here with this trick (though it could be used to do this operation, it should be less efficient than the trick because shared memory accesses are more expensive than register accesses).
Doing two kernels is inefficient here: it forces you to store data in memory and then read it again, all with an inefficient memory access pattern. It is better to do the multiplication in the same kernel, still thanks to shuffles (and conditions to disable threads). Only 1/4 of the threads will work for the second part, which is not great, but far better than reading/storing data in memory. You should avoid reading/storing data in memory on GPUs as much as possible if you want your kernel to be fast. In fact, the same thing tends to be true on CPUs too.
Warp shuffle functions are [detailed in the CUDA manual](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#warp-shuffle-functions).
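To make the trick concrete, here is a hedged sketch (illustrative names; assumes the input item count is a multiple of the warp size) of a coalesced load followed by a warp shuffle, so the subtraction happens in registers and only even lanes perform the store:

```cuda
// Illustrative kernel: out[i] = in[2*i] - in[2*i+1] for 16-bit items.
// Assumes n (number of input items) is a multiple of 32 so all lanes
// of a warp are active when __shfl_down_sync is called.
__global__ void pairwise_diff(const short* __restrict__ in,
                              short* __restrict__ out, int n)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n) return;

    int v = in[tid];                              // coalesced load
    // Each even lane fetches its odd neighbour's value within the warp.
    int other = __shfl_down_sync(0xffffffffu, v, 1);

    // Only even lanes hold a complete pair; odd lanes skip the store,
    // which is the "half the threads disabled" condition mentioned above.
    if ((threadIdx.x & 1) == 0)
        out[tid >> 1] = (short)(v - other);
}
```

The loads stay fully coalesced (consecutive threads read consecutive addresses), and the stores from the even lanes, while only half-dense, are still far better than the strided pattern of a naive one-thread-per-pair layout.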
|
I have 3 devices (I call them device A,B, and C) and I want to connect them in a group.
I can do one-to-one connection now and it works fine.
However, while I am trying to connect the third device, the connection between the other two devices fails.
For example, first I connect A and B. After negotiation, A becomes the group owner. Now everything is alright. Then I try to connect A and C. The accept prompt appears successfully on C, but "most of the time" the connection fails, and A and B also disconnect.
Why I say "most of the time" is because it does sometimes work, although the probability is very low (< 10%).
Does anyone happen to know why this happens?
Blurred background using Vidyo SDK for android |
|android|blur|vidyo|banuba| |
The GameplayCameras plugin works well in Blueprint, but when I try to work with it in C++, I still have a problem resolving dependencies.
Simple example:
MyCamereManager.cpp
#include "GameplayCamerasSubsystem.h"
Here I try to get my subsystem:
UGameplayCamerasSubsystem* CamSubInstance = GetWorld()->GetSubsystem<UGameplayCamerasSubsystem>();
In my Build.cs I add:
"GameplayCameras",
Everything seems to work in VS, but when I try to build I get this error:
error LNK2019: external symbol not resolved "private: static class UClass * __cdecl UGameplayCamerasSubsystem::GetPrivateStaticClass(void)" (?GetPrivateStaticClass@UGameplayCamerasSubsystem@@CAPEAVUClass@@XZ)
Has anyone had a problem with this subsystem?
|
Unreal Engine c++ GameplayCamerasSubsystem fatal error LNK1120: 1 unresolved externals |
|c++|unreal-engine5| |
For me the issue was my node version. Upgrading to node v18 from v15 solved the issue. |
As the title says, if I change the number of hidden layers in my PyTorch neural network to anything different from the number of input nodes, it returns the error below.
> RuntimeError: mat1 and mat2 shapes cannot be multiplied (380x10 and 2x10)
I think the architecture is incorrectly coded, but I am relatively new to PyTorch and neural networks, so I can't spot the mistake. Any help is greatly appreciated; I've included the code below.
class FCN(nn.Module):
def __init__(self, N_INPUT, N_OUTPUT, N_HIDDEN, N_LAYERS):
super().__init__()
activation = nn.Tanh
self.fcs = nn.Sequential(*[
nn.Linear(N_INPUT, N_HIDDEN),
activation()])
self.fch = nn.Sequential(*[
nn.Sequential(*[
nn.Linear(N_INPUT, N_HIDDEN),
activation()]) for _ in range(N_LAYERS-1)])
self.fce = nn.Linear(N_INPUT, N_HIDDEN)
def forward(self, x):
x = self.fcs(x)
x = self.fch(x)
x = self.fce(x)
return x
torch.manual_seed(123)
pinn = FCN(2, 2, 10, 8)
If the pinn architecture is defined as `pinn = FCN(2, 2, 2, 8)` no errors are returned but neural network does not perform well. |
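For comparison, here is a hedged sketch of a version where the layer sizes chain consistently (hidden layers map `N_HIDDEN → N_HIDDEN` and the final layer maps to `N_OUTPUT`); this is one possible reading of the intended design, not a definitive fix, and the class name is invented to avoid clashing with the code above:

```python
import torch
import torch.nn as nn

class FCNSketch(nn.Module):
    """Sketch: hidden layers take N_HIDDEN inputs; final layer emits N_OUTPUT."""
    def __init__(self, N_INPUT, N_OUTPUT, N_HIDDEN, N_LAYERS):
        super().__init__()
        activation = nn.Tanh
        # Input layer: N_INPUT -> N_HIDDEN
        self.fcs = nn.Sequential(nn.Linear(N_INPUT, N_HIDDEN), activation())
        # Hidden layers: N_HIDDEN -> N_HIDDEN, repeated
        self.fch = nn.Sequential(*[
            nn.Sequential(nn.Linear(N_HIDDEN, N_HIDDEN), activation())
            for _ in range(N_LAYERS - 1)])
        # Output layer: N_HIDDEN -> N_OUTPUT
        self.fce = nn.Linear(N_HIDDEN, N_OUTPUT)

    def forward(self, x):
        return self.fce(self.fch(self.fcs(x)))

torch.manual_seed(123)
pinn = FCNSketch(2, 2, 10, 8)
out = pinn(torch.randn(380, 2))  # no shape mismatch when N_HIDDEN != N_INPUT
```

With this chaining, the `(380x10 and 2x10)` mismatch disappears because the second layer now expects 10 features rather than 2.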
Changing the number of hidden layers in my NN results in an error |
|python|pytorch|neural-network| |
I was surprised that there **isn't** a lightweight library that does that **cross-platform**, so I decided to create one, since I need the functionality anyway.
Here's the library
https://github.com/Neko-Box-Coder/System2
It works on **POSIX** and **Windows** systems. (It doesn't use popen on Windows, so it should also work in GUI applications.)
It can send **input** to and receive **output** from the command.
It has both **blocking** and **non-blocking** versions.
And it comes in both **header-only** and source versions as well.
I am trying to connect MongoDB with Node.js, but it is not working as I thought.
Mongosh version 2.1.4.
I have installed all the libraries needed, and I organize the code into files.
I made a folder `config`; inside it, a `connection.js` file which contains:
```
const mongoClient = require('mongodb').MongoClient
const state = { db:null }
module.exports.connect=function(done){
const url='mongodb://localhost:27017'
const dbname='shopping'
mongoClient.connect(url,(err,data)=>{
if(err)
return done(err)
state.db=data.db(dbname)
})
done()
}
module.exports.get=function(){
return state.db
}
```
I made another folder `helpers`; inside it, a `product-helpers.js` file which contains:
```
var db = require('../config/connection')
module.exports = {
addProduct:(product,callback)=>{
console.log(product);
db.collection('product').insertOne(product).then((data)=>{
callback(true)
})
}
}
```
And I have a file for the admin panel, `admin.js`, which contains:
```
var express = require('express');
var router = express.Router();
var productHelper = require('../helpers/product-helpers')
/* GET users listing. */
router.get('/', function(req, res, next) {
let products = [
{
name:'Samsung A15',
category:'Mobile',
description:'this is samsung set phone',
image:'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSyMghPEOARrzvgJb9WjhGYSDXA3DuEPIlmGA&usqp=CAU'
},
{
name:'Honor X6',
category:'Mobile',
description:'this is honor set phone',
image:'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTzAofD1NQohCChcMJWQ40h23t1eL8StX_fzA&usqp=CAU'
},
{
name:'Iphone 14',
category:'Mobile',
description:'this is apple set phone',
image:'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcR267jdT1RX_Wsff-KT9MiGy7eSF_FjK-QbWpenIbHiMLK90UBiqpPCREIT24if9Bk50R4&usqp=CAU'
},
{
name:'Vivo Y20',
category:'Mobile',
description:'this is vivo set phone',
image:'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQmGBnugl9D6sCtQblqXMuOiDVmaC_5QYD1MA&usqp=CAU'
}
]
res.render('admin/view-products',{products , admin:true});
});
router.get('/add-product',function(req,res){
res.render('admin/add-product')
});
router.post('/add-product',(req,res)=>{
console.log(req.body);
console.log(req.files.Image);
productHelper.addProduct(req.body,(result)=>{
res.render('admin/add-product')
});
})
module.exports = router;
```
**And in app.js I have:**
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
// view engine setup
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'hbs');
app.engine('hbs', hbs.engine({extname:'hbs',defaultLayout:'layout',layoutsDir:__dirname+'/views/layout/',partialsDir:__dirname+'/views/partials/'}));
app.use(logger('dev'));
app.use(express.json());
app.use(express.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(express.static(path.join(__dirname, 'public')));
app.use(fileupload());
db.connect((err)=>{
if(err) console.log('conn error'+err);
else console.log('database connected to port 27017');
});
app.use('/', userRouter);
app.use('/admin', adminRouter);
<!-- end snippet -->
And when I run it, the terminal shows:
```
GET /admin/add-product 304 22.615 ms - -
{
Name: 'shuhaib',
Category: 'mobile',
Price: '5603.2',
Description: 'super hot model'
}
{
name: 'lol.png',
data: <Buffer 89 50 4e 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52 00 00 02 58 00 00 02 56 08 02 00 00 00 0b 0e 6e fb 00 00 00 09 70 48 59 73 00 00 2e 23 00 00 2e 23 01 ... 96738 more bytes>,
size: 96788,
encoding: '7bit',
tempFilePath: '',
truncated: false,
mimetype: 'image/png',
md5: '155d2f982dfceb23abc6eac6226971da',
mv: [Function: mv]
}
{
Name: 'shuhaib',
Category: 'mobile',
Price: '5603.2',
Description: 'super hot model'
}
POST /admin/add-product 500 55.478 ms - 4227
```
But when I check the database by opening mongosh and running `show dbs`, there is no collection called `product`.
**Why is the database not connecting?**
|
Web scraping doesn't work, even without an error
|python|visual-studio-code|web-scraping|beautifulsoup|python-requests| |
Far from perfect. This one:
* wraps each word in a ``<span>``
* calculates by ``offsetHeight`` if a SPAN spans multiple lines
* if so, it found a hyphenated word
* it then **removes** each _last_ character from the ``<span>``
to find where the word wrapped to a new line

<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-html -->
<style>
.hyphen { background: pink }
.remainder { background: lightblue }
</style>
<process-text lang="en" style="display:inline-block;
overflow-wrap: word-break; hyphens: auto; zoom:1.2; width: 7em">
By using words like
"incomprehensibilities",
we can demonstrate word breaks.
</process-text>
<script>
customElements.define('process-text', class extends HTMLElement {
connectedCallback() {
setTimeout(() => {
let words = this.innerHTML.trim().split(/(\W+)/);
let spanned = words.map(w => `<span>${w}</span>`).join('');
this.innerHTML = spanned;
let spans = [...this.querySelectorAll("span")];
let defaultHeight = spans[0].offsetHeight;
let hyphend = spans.map(span => {
let hyphen = span.offsetHeight > defaultHeight;
console.assert(span.offsetHeight == defaultHeight, span.innerText, span.offsetWidth);
span.classList.toggle("hyphen", hyphen);
if (hyphen) {
let saved = span.innerText;
while (span.innerText && span.offsetHeight > defaultHeight) {
span.innerText = span.innerText.slice(0, -1);
}
let remainder = document.createElement("span");
remainder.innerText = saved.replace(span.innerText, "");
remainder.classList.add("remainder");
span.after(remainder);
}
})
console.log(this.querySelectorAll("span").length, "<SPAN> created" );
}) //setTimeout to read innerHTML
} // connectedCallback
});
</script>
<!-- end snippet -->
The error is "demonstrate" _fits_ when shortened to "demonstr" **-** "ate"

Needs some more JS voodoo |
A quick aside: we can cast to and fro from one table type to another, with restrictions:
/** create a TYPE **/
CREATE TYPE tyNewNames AS ( a int, b int , c int ) ;
SELECT
/** note: order, type & # of columns must match exactly**/
ROW((rec).*)::tyNewNames AS "rec newNames" -- f1,f2,f3 --> a,b,c
, (ROW((rec).*)::tyNewNames).* -- expand the new names
,'<new vs. old>' AS "<new vs. old>"
,*
FROM
(
SELECT
/** inspecting rec: PG assigned stand-in names f1, f2, f3, etc... **/
rec /* a record*/
,(rec).* -- expanded fields f1, f2, f3
FROM (
SELECT ( 1, 2, 3 ) AS rec -- an anon type record
) cte0
)cte1
;
    +--------------+---+---+---+---------------+--------------+----+----+----+
    | rec newnames | a | b | c | <new vs. old> | rec oldnames | f1 | f2 | f3 |
    +--------------+---+---+---+---------------+--------------+----+----+----+
    | (1,2,3)      | 1 | 2 | 3 | <new vs. old> | (1,2,3)      | 1  | 2  | 3  |
    +--------------+---+---+---+---------------+--------------+----+----+----+
A compressed example of this code might look like this:
SELECT ( ( ROW( (rec).* ) )::tyNewNames ).* ;
db fiddle(uk)
[https://dbfiddle.uk/dlTxd8Y3][1]
[1]: https://dbfiddle.uk/dlTxd8Y3
|
|android|wifi-direct| |
You can use `regexpr` to capture the first occurrence of consecutive `1`s followed by values greater than `1`, for example
```
set.seed(0)
my_df %>%
mutate(event = regexpr("1{2}[^1]{2}", do.call(paste0, select(., -ID)))) %>%
mutate(event = ifelse(event > 0, paste0("X", event), NA))
```
gives
```
ID X1 X2 X3 X4 X5 X6 X7 X8 event
1 a 2 3 2 1 2 2 4 3 <NA>
2 b 1 2 2 1 1 2 1 4 <NA>
3 c 4 2 2 1 4 2 3 2 <NA>
4 d 3 3 2 2 3 3 2 2 <NA>
5 e 1 3 3 1 1 4 1 3 <NA>
6 f 2 1 1 1 4 4 4 3 X3
7 g 1 1 3 2 3 4 4 2 X1
8 h 3 1 1 2 2 2 1 2 X2
``` |
I'm fairly new to Python and making a Battleship project from scratch. I just want to know how to identify a SPECIFIC point on the map, let the player choose it, and then print the chosen grid with the "#" replaced with something else, like an "X". Here's the code; thanks in advance.
```
turns = 10
alphabet = ["A","B","C","D","E","F","G","H","I","J","a","b","c","d","e","f","g","h","i","j"]
def StartScreen():
print("Welcome to Battleship!")
print('''
1. Start
2. Exit
''')
choice = int(input("Enter your number: "))
if choice == 1:
GameScreen()
elif choice == 2:
exit()
else:
print("Please choose a valid number.")
StartScreen()
def GameScreen():
print("\n")
print(" A B C D E F G H I J")
row = [" # ", "# ", "# ", "# ", "# ", "# ", "# ", "# ", "# ", "# "]
i = -1
while i != 9:
print(i+1, row[0], row[1], row[2], row[3], row[4], row[5], row[6], row[7], row[8], row[9])
i = i + 1
print("\n")
rowx = input("Choose a letter(A-J): ")
rowy = int(input("Enter a number(0-9): "))
if rowx in alphabet and rowy in range(0,10):
print(f'you chose {rowx.upper()}{rowy}')
else:
print("Please choose a valid point(A-J/0-9)\n")
StartScreen()
``` |
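One way the marking step could look (a hedged sketch with invented helper names, not a drop-in for the code above): keeping the board as a list of lists lets a cell be addressed as `grid[row][col]` and replaced in place.

```python
# Sketch: a 10x10 board as a list of lists, so one cell can be changed
# without touching the rest. Function names here are illustrative.
def make_grid(size=10, fill="#"):
    return [[fill] * size for _ in range(size)]

def mark(grid, letter, number, symbol="X"):
    """letter 'A'-'J' selects the column, number 0-9 the row."""
    col = "ABCDEFGHIJ".index(letter.upper())
    grid[number][col] = symbol

def render(grid):
    lines = ["   " + "  ".join("ABCDEFGHIJ"[:len(grid)])]
    for r, row in enumerate(grid):
        lines.append(f"{r}  " + "  ".join(row))
    return "\n".join(lines)

grid = make_grid()
mark(grid, "C", 4)   # player chose C4
print(render(grid))  # prints the board with one cell replaced by "X"
```

Because the grid persists between turns, each new shot just calls `mark` again and reprints, instead of rebuilding a fresh row of `"#"` every time.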
How to give the player the ability to choose a grid in Battleship? |
|python|list| |
I have written a script (I'm quite new to PHP) which takes a list of elements as input and checks whether each is only a username or contains username+domain (I mean an email).
If it's only a username, it appends '@test.com' to that username and combines all the strings together to send an email notification.
```
$recipients = array("kumar", "ram@test.com", "ravi", "rob@example.com");
$with_domain = array();
$without_domain = array();
$exp = "/^\S+@\S+\.\S+$/";
foreach ($recipients as $user) {
if(preg_match($exp, $user)) {
array_push($with_domain, $user);
} else {
array_push($without_domain, $user);
}
}
$without_domain_new = implode("@test.com", $without_domain)."@test.com";
print_r($with_domain);
print_r($without_domain_new);
echo "<br>";
$r = implode("", $with_domain);
$s = $without_domain_new.$r;
print "Email notification need to be sent to: $s";
```
Here the script works as expected. But is the line `$r = implode("", $with_domain); $s = $without_domain_new.$r;` really required?
Is it possible to merge an array and strings in PHP? I mean, is it possible to merge `$with_domain` and `$without_domain_new` and store the result in `$s` as a string?
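One possible sketch (a suggestion, not the only way): append the domain to each bare username inside the loop, then `array_merge` the two arrays and `implode` once. Note this also inserts a separator between addresses, which the original concatenation did not; drop the `", "` if the glued-together string is actually what you want.

```php
<?php
// Sketch: build full addresses per element, merge both arrays, implode once.
$recipients = array("kumar", "ram@test.com", "ravi", "rob@example.com");
$with_domain = array();
$without_domain = array();
foreach ($recipients as $user) {
    if (preg_match('/^\S+@\S+\.\S+$/', $user)) {
        $with_domain[] = $user;
    } else {
        $without_domain[] = $user . "@test.com"; // append the default domain
    }
}
// array_merge combines both arrays; implode turns the result into one string.
$s = implode(", ", array_merge($without_domain, $with_domain));
print "Email notification need to be sent to: $s";
```

So yes: arrays can be merged with `array_merge` and only converted to a string at the very end, making the extra `$r`/`$s` concatenation step unnecessary.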
I got error with flutter: FormatException: Unexpected end of input (at character 1) |
|flutter| |
So I was given this block of code and I had to give an output in the format
`y:[result]; y: [result]; y:[result]; x:[result]; a:[result]`
```
public class Main {
static int x = 2;
public static void main(String[] args) {
int[] a = {17, 43, 12};
foo(a[x]);
foo(x);
foo(a[x]);
System.out.println("x:" + x);
System.out.println("a:" + Arrays.toString(a));
}
static void foo(int y) {
x = x - 1;
y = y + 2;
if (x < 0) {
x = 5;
} else if (x > 20) {
x = 7;
}
System.out.println("y:" + y);
}
}.
```
I'm not 100% sure on how the call by reference works in some cases and I'm not sure which result is the right one. Anyway here is one: <b3>
1) `foo(a[x])` is called with `a[2]` (which is 12). y becomes 12 + 2 = 14. x is decremented to 1.
`foo(x)` is called with x (which is 1). both x and y point to the value 1 of x. x is decremented to 0. and then x becomes 3 because y=y+2 and y was pointing at the value 1 of x.
`foo(a[x])` is called with `a[3]` (which doesnt exists). x is decremented to 2.
The array a transforms into `17,43,14`.
So the results would be like:
`y : 14; y : 3; y : ?; x : 2; a : 17,43,14`<br>
I think the thing that confuses me the most is in the case of foo(x) does y point at variable x or the value of x at the moment the method is called? <br><br>
Any help is appreciated. Thanks a lot in advance.
|
Call by reference confusion |
|reference|pass-by-reference|call-by-value| |
I have a React frontend application configured with a proxy in the package.json file pointing to my Flask backend running on http://localhost:2371. However, when making requests to fetch data from the backend using fetch("/members"), the frontend seems to be fetching data from localhost:5173 (the address the React site is running on) instead of the expected localhost:2371. I've double-checked the proxy configuration.
(Here is my package.json):
"name": "react-frontend",
"private": true,
"proxy": "http://localhost:2371",
"version": "0.0.0",
"type": "module",
I've ensured that the backend server is running, but I'm still encountering this issue. What could be causing the frontend to fetch data from the unexpected localhost address instead of the configured proxy? The code works if I fetch the whole address ("http://localhost:2371/members"), but it would be simpler to just write "/members". Do I need to import the package.json into my App.tsx to make it work, or is it already somehow connected?
Here is my backend script:
from flask import Flask, jsonify
from flask_cors import CORS
app = Flask(__name__)
CORS(app)
@app.route("/members")
def members():
return jsonify({"members": ["Mem1", "Mem2", "Mem3"]})
if __name__ == "__main__":
app.run(debug=True, port=2371)
Any insights or suggestions for troubleshooting would be greatly appreciated. Thank you! |
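A note on the likely cause, assuming this is a Vite project (port 5173 is Vite's default dev port): Vite does not read the CRA-style `proxy` key in `package.json`. The equivalent is a `server.proxy` entry in `vite.config.ts` — sketched here with the path from the question:

```typescript
// vite.config.ts — sketch assuming a Vite + React setup
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    proxy: {
      // forward fetch("/members") from the dev server to the Flask backend
      "/members": "http://localhost:2371",
    },
  },
});
```

With this in place, `fetch("/members")` hits the Vite dev server, which transparently forwards the request to port 2371, so no import in `App.tsx` is needed.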