Exists not working in Firebase Security Rules |
|firebase|firebase-security| |
```ts
import { A } from "https://example.com/type.d.ts";
type B = A | A
```
[Playground](https://www.typescriptlang.org/play?#code/JYWwDg9gTgLgBAbzgQTgXzgMyhEcBEAFjDGAM4BcA9FQKYAeAhuADa0B0AxrlTAJ5gOAE3Ywy+ANwAofoLgAhOAF4UcAD4ogA)
If you hover on `B` in both TS Playground and VS Code, it will display it as `A` in the case `type B = A` and `any` in the case `type B = A | A`. Regardless of whether A is known or not, I expect they should be consistent. Why is that not the case? |
This is an issue with `mkdir` itself and is due to the `umask` as you mentioned. In the [mkdir spec][1] it's specified that when you use the `-p` flag `chmod` is only called on the full filepath after all the intermediate directories are created.
The Kubernetes [empty dir implementation][2] follows this same logic, calling `os.MkdirAll` which will be impacted by the umask, and then running `os.Chmod` on the full directory path.
As the comment in the Kubernetes source says:
> If the permissions on the created directory are wrong, the kubelet is probably running with a umask set. In order to avoid clearing the umask for the entire process or locking the thread, clearing the umask, creating the dir, restoring the umask, and unlocking the thread, we do a chmod to set the specific bits we need.
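The behavior described in the spec is easy to reproduce directly in a shell (a sketch assuming GNU coreutils `stat`; the directory names are arbitrary):

```shell
tmp=$(mktemp -d)
cd "$tmp"
umask 077

# -p creates the intermediate directories with the umask applied;
# only the final component receives the -m mode.
mkdir -p -m 0755 a/b/c
stat -c '%a' a      # 700: the umask stripped the group/other bits
stat -c '%a' a/b/c  # 755: the requested mode, set after creation

# The kubelet-style fix: chmod the directories explicitly afterwards.
chmod 0755 a a/b
stat -c '%a' a      # 755
```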
I'm not sure what would control the umask on the kubelet process, or if there's any better solution to this problem!
[1]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/mkdir.html
[2]: https://github.com/kubernetes/kubernetes/blob/0e8ef9c353092b4c921429a212ebb4c780f83715/pkg/volume/emptydir/empty_dir.go#L442-L469 |
I wrote a video editing script, but there is a bug in it. |
|opencv|operating-system|tqdm| |
I had the same problem in React and found the simplest way to implement @IvanSanchez's solution.
You just need to make React render the `<div id="leafletmap">` first, before the JavaScript runs.
To do this, put the JavaScript code inside the `useEffect` hook: `useEffect` runs after the component has rendered, so the container element already exists.
Here is my code:
```jsx
import L from 'leaflet';
import 'leaflet/dist/leaflet.css';
import { useEffect } from 'react';

export default function LeafletContainer() {
  useEffect(() => {
    const map = L.map('map').setView([51.505, -0.09], 13);

    L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
      maxZoom: 19,
      attribution: '© <a href="http://www.openstreetmap.org/copyright">OpenStreetMap</a>'
    }).addTo(map);

    // Clean up so the map is not initialized twice (e.g. under StrictMode)
    return () => map.remove();
  }, []); // empty dependency array: run once after the first render

  return (
    <div className="leaflet-container">
      <div id="map"></div>
    </div>
  );
}
```
Don't forget to set the map's height in CSS:
```css
#map {
  height: 180px;
}
```
|
I'm using `df.compare` to diff two CSVs that have the same index/row names. The diff works as expected, but the result labels the rows with integer positions (0, 2, 5, ...) wherever it finds a difference between the CSVs.
What I'm looking for is for `df.compare` to show the row names instead of the row numbers where it finds a diff.
```python
diff_out_csv = old.compare(latest, align_axis=1).rename(columns={'self': 'old', 'other': 'latest'})
```
Current output:

```
      NUMBER1            NUMBER2           NUMBER3
      old      latest    old      latest   old    latest
0   -14.1685  -14.0132  -1.2583  -1.2611   NaN    NaN
2    -9.7875  -12.2739  -0.3532  -0.3541   86.0   100.0
3    -0.0365   -0.0071  -0.0099  -0.0039    6.0     2.0
4    -1.9459   -1.5258  -0.5402  -0.0492   73.0   131.0
```
Requirement:

```
        NUMBER1            NUMBER2           NUMBER3
        old      latest    old      latest   old    latest
JACK  -14.1685  -14.0132  -1.2583  -1.2611   NaN    NaN
JASON  -9.7875  -12.2739  -0.3532  -0.3541   86.0   100.0
JACOB  -0.0365   -0.0071  -0.0099  -0.0039    6.0     2.0
JIMMY  -1.9459   -1.5258  -0.5402  -0.0492   73.0   131.0
```
I was able to replace the column names using `rename(columns={})`, but how do I replace 0, 2, 3, 4 with the text names?
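One way to get that (a sketch with hypothetical data, assuming the row names live in a `NAME` column in both CSVs) is to make that column the index before calling `compare`:

```python
import pandas as pd

# Hypothetical miniature of the two CSVs; both share a NAME column
# identifying each row.
old = pd.DataFrame({"NAME": ["JACK", "JASON"], "NUMBER1": [-14.1685, -9.7875]})
latest = pd.DataFrame({"NAME": ["JACK", "JASON"], "NUMBER1": [-14.0132, -12.2739]})

# Setting NAME as the index before comparing makes compare() label the
# differing rows by name instead of by integer position.
diff = (
    old.set_index("NAME")
       .compare(latest.set_index("NAME"), align_axis=1)
       .rename(columns={"self": "old", "other": "latest"})
)
print(diff)
```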
|
How do I display row names instead of index numbers while doing df.compare()? |
|python|dataframe| |
In object-oriented programming, your program is a set of objects that communicate with one another through their interfaces (by invoking each other's methods).
For your needs, you can use a "regular" association with the object (pass the object into the constructor).
```js
class Greetings {
  hello() {
    return "Hello world!";
  }
}

class ForwardConsole {
  constructor(greetings) {
    this.greetings = greetings;
  }

  print() {
    console.log(this.greetings.hello());
  }
}

class BackwardConsole {
  constructor(greetings) {
    this.greetings = greetings;
  }

  print() {
    console.log(this.greetings.hello().split("").reverse().join(""));
  }
}

let greetings = new Greetings();
let fc = new ForwardConsole(greetings);
let bc = new BackwardConsole(greetings);

fc.print(); // Hello world!
bc.print(); // !dlrow olleH
```
In the code, we have encapsulated (injected) the same object into different objects using a standard constructor. |
I am using boto3 to upload files to an S3 bucket.
Here is the code I am using:
```python
# S3 client initialization
s3_client = boto3.client(
    service_name='s3',
    region_name='us-west-2',
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    # endpoint_url='https://s3.us-west-2.amazonaws.com',
    verify=True
)
```
If `verify=True`, I am getting this error:

```
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1000)
```

If `verify=False`, I am getting this warning:

```
InsecureRequestWarning: Unverified HTTPS request is being made to host 'itd-us-west-2-**-****-*****.s3.us-west-2.amazonaws.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#tls-warnings
  warnings.warn(
```
Just wanted to check here if I am making any mistake.
|
I am using GloVe embeddings in the embedding layer of an LSTM. I wrote a function to build the model as below:
```python
def build_model(hp):
    model = keras.Sequential()
    model.add(Embedding(input_dim=vocab_size,          # size of the vocabulary
                        output_dim=EMBEDDING_DIM,      # length of the vector for each word
                        weights=[embedding_matrix],
                        input_length=MAX_SEQUENCE_LENGTH,
                        trainable=False))
    model.add(SpatialDropout1D(hp.Choice('sdropout_', values=[0.2, 0.3, 0.4, 0.5, 0.6])))
    model.add(LSTM(hp.Int('units_', min_value=30, max_value=70, step=10),
                   kernel_initializer='random_normal',
                   dropout=0.5, recurrent_dropout=0.5))

    counter = 0
    for i in range(hp.Int('num_layers', 1, 5)):
        if counter == 0:
            model.add(layers.Dense(units=hp.Int('units_', min_value=16, max_value=512, step=32),
                                   kernel_initializer='random_normal',
                                   input_shape=y_train.shape,
                                   kernel_regularizer=keras.regularizers.l2(
                                       hp.Choice('l2_value', values=[1e-2, 3e-3, 2e-3, 1e-3, 1e-4])),
                                   activation=hp.Choice('dense_activation_' + str(i),
                                                        values=['relu', 'tanh', 'sigmoid'],
                                                        default='relu')))
            model.add(layers.Dropout(rate=hp.Float('dropout_' + str(i), min_value=0.0,
                                                   max_value=0.6, default=0.25, step=0.05)))
        else:
            model.add(layers.Dense(units=hp.Int('units_' + str(i), min_value=12, max_value=512, step=32),
                                   kernel_initializer='random_normal',
                                   kernel_regularizer=keras.regularizers.l2(
                                       hp.Choice('l2_value', values=[1e-2, 3e-3, 2e-3, 1e-3, 1e-4])),
                                   activation=hp.Choice('dense_activation_' + str(i),
                                                        values=['relu', 'tanh', 'sigmoid'],
                                                        default='relu')))
            model.add(layers.Dropout(rate=hp.Float('dropout_' + str(i), min_value=0.0,
                                                   max_value=0.5, default=0.25, step=0.05)))
        counter += 1

    model.add(layers.Dense(classes, activation='softmax'))

    optimizer = hp.Choice('optimizer', values=['adam'])  # 'sgd', 'rmsprop', 'adadelta' disabled for now
    lr = hp.Choice('learning_rate', values=[1e-2, 1e-3])
    if optimizer == 'adam':
        optimizer = keras.optimizers.Adam(learning_rate=lr)

    model.compile(
        optimizer=optimizer,
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy']
    )
    return model
```
After executing the above function, I built a tuner object.
```python
tuner = RandomSearch(
    build_model,
    objective='val_accuracy',          # metric to optimize
    max_trials=3,                      # maximum number of trials
    executions_per_trial=3,            # executions per trial
    overwrite=True,
    directory='my_dir',                # where the results are stored
    project_name='consumer_complaints'
)

# Display the search space summary
tuner.search_space_summary()
```
But after this step, when I run the code below to search the hyperparameter space,
```python
# Assume pad_data_train, pad_data_test are your data
tuner.search(pad_data_train, y_train,
             epochs=5,
             validation_data=(pad_data_test, y_test),
             batch_size=64)
```
I get the error shown below. I can't seem to figure out what I am doing wrong.
> Traceback (most recent call last): File
> "/opt/conda/lib/python3.10/site-packages/keras_tuner/engine/base_tuner.py",
> line 270, in _try_run_and_update_trial
> self._run_and_update_trial(trial, *fit_args, **fit_kwargs) File "/opt/conda/lib/python3.10/site-packages/keras_tuner/engine/base_tuner.py",
> line 235, in _run_and_update_trial
> results = self.run_trial(trial, *fit_args, **fit_kwargs) File "/opt/conda/lib/python3.10/site-packages/keras_tuner/engine/tuner.py",
> line 287, in run_trial
> obj_value = self._build_and_fit_model(trial, *args, **copied_kwargs) File "/opt/conda/lib/python3.10/site-packages/keras_tuner/engine/tuner.py",
> line 214, in _build_and_fit_model
> results = self.hypermodel.fit(hp, model, *args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/keras_tuner/engine/hypermodel.py",
> line 144, in fit
> return model.fit(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py",
> line 70, in error_handler
> raise e.with_traceback(filtered_tb) from None File "/opt/conda/lib/python3.10/site-packages/tensorflow/python/eager/execute.py",
> line 53, in quick_execute
> tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, tensorflow.python.framework.errors_impl.InvalidArgumentError:
> Graph execution error:
>
> Detected at node 'sequential/embedding/embedding_lookup' defined at
> (most recent call last):
> File "/opt/conda/lib/python3.10/runpy.py", line 196, in _run_module_as_main
> return _run_code(code, main_globals, None,
> File "/opt/conda/lib/python3.10/runpy.py", line 86, in _run_code
> exec(code, run_globals)
> File "/opt/conda/lib/python3.10/site-packages/ipykernel_launcher.py", line
> 17, in <module>
> app.launch_new_instance()
> File "/opt/conda/lib/python3.10/site-packages/traitlets/config/application.py",
> line 1043, in launch_instance
> app.start()
> File "/opt/conda/lib/python3.10/site-packages/ipykernel/kernelapp.py", line
> 736, in start
> self.io_loop.start()
> File "/opt/conda/lib/python3.10/site-packages/tornado/platform/asyncio.py",
> line 195, in start
> self.asyncio_loop.run_forever()
> File "/opt/conda/lib/python3.10/asyncio/base_events.py", line 603, in run_forever
> self._run_once()
> File "/opt/conda/lib/python3.10/asyncio/base_events.py", line 1909, in _run_once
> handle._run()
> File "/opt/conda/lib/python3.10/asyncio/events.py", line 80, in _run
> self._context.run(self._callback, *self._args)
> File "/opt/conda/lib/python3.10/site-packages/ipykernel/kernelbase.py",
> line 516, in dispatch_queue
> await self.process_one()
> File "/opt/conda/lib/python3.10/site-packages/ipykernel/kernelbase.py",
> line 505, in process_one
> await dispatch(*args)
> File "/opt/conda/lib/python3.10/site-packages/ipykernel/kernelbase.py",
> line 412, in dispatch_shell
> await result
> File "/opt/conda/lib/python3.10/site-packages/ipykernel/kernelbase.py",
> line 740, in execute_request
> reply_content = await reply_content
> File "/opt/conda/lib/python3.10/site-packages/ipykernel/ipkernel.py", line
> 422, in do_execute
> res = shell.run_cell(
> File "/opt/conda/lib/python3.10/site-packages/ipykernel/zmqshell.py", line
> 546, in run_cell
> return super().run_cell(*args, **kwargs)
> File "/opt/conda/lib/python3.10/site-packages/IPython/core/interactiveshell.py",
> line 3009, in run_cell
> result = self._run_cell(
> File "/opt/conda/lib/python3.10/site-packages/IPython/core/interactiveshell.py",
> line 3064, in _run_cell
> result = runner(coro)
> File "/opt/conda/lib/python3.10/site-packages/IPython/core/async_helpers.py",
> line 129, in _pseudo_sync_runner
> coro.send(None)
> File "/opt/conda/lib/python3.10/site-packages/IPython/core/interactiveshell.py",
> line 3269, in run_cell_async
> has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
> File "/opt/conda/lib/python3.10/site-packages/IPython/core/interactiveshell.py",
> line 3448, in run_ast_nodes
> if await self.run_code(code, result, async_=asy):
> File "/opt/conda/lib/python3.10/site-packages/IPython/core/interactiveshell.py",
> line 3508, in run_code
> exec(code_obj, self.user_global_ns, self.user_ns)
> File "/tmp/ipykernel_42/3795112731.py", line 2, in <module>
> tuner.search(pad_data_train, y_train,
> File "/opt/conda/lib/python3.10/site-packages/keras_tuner/engine/base_tuner.py",
> line 230, in search
> self._try_run_and_update_trial(trial, *fit_args, **fit_kwargs)
> File "/opt/conda/lib/python3.10/site-packages/keras_tuner/engine/base_tuner.py",
> line 270, in _try_run_and_update_trial
> self._run_and_update_trial(trial, *fit_args, **fit_kwargs)
> File "/opt/conda/lib/python3.10/site-packages/keras_tuner/engine/base_tuner.py",
> line 235, in _run_and_update_trial
> results = self.run_trial(trial, *fit_args, **fit_kwargs)
> File "/opt/conda/lib/python3.10/site-packages/keras_tuner/engine/tuner.py",
> line 287, in run_trial
> obj_value = self._build_and_fit_model(trial, *args, **copied_kwargs)
> File "/opt/conda/lib/python3.10/site-packages/keras_tuner/engine/tuner.py",
> line 214, in _build_and_fit_model
> results = self.hypermodel.fit(hp, model, *args, **kwargs)
> File "/opt/conda/lib/python3.10/site-packages/keras_tuner/engine/hypermodel.py",
> line 144, in fit
> return model.fit(*args, **kwargs)
> File "/opt/conda/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py",
> line 65, in error_handler
> return fn(*args, **kwargs)
> File "/opt/conda/lib/python3.10/site-packages/keras/src/engine/training.py",
> line 1742, in fit
> tmp_logs = self.train_function(iterator)
> File "/opt/conda/lib/python3.10/site-packages/keras/src/engine/training.py",
> line 1338, in train_function
> return step_function(self, iterator)
> File "/opt/conda/lib/python3.10/site-packages/keras/src/engine/training.py",
> line 1322, in step_function
> outputs = model.distribute_strategy.run(run_step, args=(data,))
> File "/opt/conda/lib/python3.10/site-packages/keras/src/engine/training.py",
> line 1303, in run_step
> outputs = model.train_step(data)
> File "/opt/conda/lib/python3.10/site-packages/keras/src/engine/training.py",
> line 1080, in train_step
> y_pred = self(x, training=True)
> File "/opt/conda/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py",
> line 65, in error_handler
> return fn(*args, **kwargs)
> File "/opt/conda/lib/python3.10/site-packages/keras/src/engine/training.py",
> line 569, in __call__
> return super().__call__(*args, **kwargs)
> File "/opt/conda/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py",
> line 65, in error_handler
> return fn(*args, **kwargs)
> File "/opt/conda/lib/python3.10/site-packages/keras/src/engine/base_layer.py",
> line 1150, in __call__
> outputs = call_fn(inputs, *args, **kwargs)
> File "/opt/conda/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py",
> line 96, in error_handler
> return fn(*args, **kwargs)
> File "/opt/conda/lib/python3.10/site-packages/keras/src/engine/sequential.py",
> line 405, in call
> return super().call(inputs, training=training, mask=mask)
> File "/opt/conda/lib/python3.10/site-packages/keras/src/engine/functional.py",
> line 512, in call
> return self._run_internal_graph(inputs, training=training, mask=mask)
> File "/opt/conda/lib/python3.10/site-packages/keras/src/engine/functional.py",
> line 669, in _run_internal_graph
> outputs = node.layer(*args, **kwargs)
> File "/opt/conda/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py",
> line 65, in error_handler
> return fn(*args, **kwargs)
> File "/opt/conda/lib/python3.10/site-packages/keras/src/engine/base_layer.py",
> line 1150, in __call__
> outputs = call_fn(inputs, *args, **kwargs)
> File "/opt/conda/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py",
> line 96, in error_handler
> return fn(*args, **kwargs)
> File "/opt/conda/lib/python3.10/site-packages/keras/src/layers/core/embedding.py",
> line 272, in call
> out = tf.nn.embedding_lookup(self.embeddings, inputs) Node: 'sequential/embedding/embedding_lookup' indices[41,8] = 6402 is not in
> [0, 5201) [[{{node sequential/embedding/embedding_lookup}}]]
> [Op:__inference_train_function_14440]
> --------------------------------------------------------------------------- RuntimeError Traceback (most recent call
> last) Cell In[43], line 2
> 1 # Assume x_train, y_train are your data
> ----> 2 tuner.search(pad_data_train, y_train,
> 3 epochs=5,
> 4 validation_data=(pad_data_test, y_test)
> 5 ,
> 6 batch_size = 64
> 7 )
>
> File
> /opt/conda/lib/python3.10/site-packages/keras_tuner/engine/base_tuner.py:231,
> in BaseTuner.search(self, *fit_args, **fit_kwargs)
> 229 self.on_trial_begin(trial)
> 230 self._try_run_and_update_trial(trial, *fit_args, **fit_kwargs)
> --> 231 self.on_trial_end(trial)
> 232 self.on_search_end()
>
> File
> /opt/conda/lib/python3.10/site-packages/keras_tuner/engine/base_tuner.py:335,
> in BaseTuner.on_trial_end(self, trial)
> 329 def on_trial_end(self, trial):
> 330 """Called at the end of a trial.
> 331
> 332 Args:
> 333 trial: A `Trial` instance.
> 334 """
> --> 335 self.oracle.end_trial(trial)
> 336 # Display needs the updated trial scored by the Oracle.
> 337 self._display.on_trial_end(self.oracle.get_trial(trial.trial_id))
>
> File
> /opt/conda/lib/python3.10/site-packages/keras_tuner/engine/oracle.py:107,
> in synchronized.<locals>.wrapped_func(*args, **kwargs)
> 105 LOCKS[oracle].acquire()
> 106 THREADS[oracle] = thread_name
> --> 107 ret_val = func(*args, **kwargs)
> 108 if need_acquire:
> 109 THREADS[oracle] = None
>
> File
> /opt/conda/lib/python3.10/site-packages/keras_tuner/engine/oracle.py:434,
> in Oracle.end_trial(self, trial)
> 432 if not self._retry(trial):
> 433 self.end_order.append(trial.trial_id)
> --> 434 self._check_consecutive_failures()
> 436 self._save_trial(trial)
> 437 self.save()
>
> File
> /opt/conda/lib/python3.10/site-packages/keras_tuner/engine/oracle.py:386,
> in Oracle._check_consecutive_failures(self)
> 384 consecutive_failures = 0
> 385 if consecutive_failures == self.max_consecutive_failed_trials:
> --> 386 raise RuntimeError(
> 387 "Number of consecutive failures excceeded the limit "
> 388 f"of {self.max_consecutive_failed_trials}.\n"
> 389 + trial.message
> 390 )
>
> RuntimeError: Number of consecutive failures excceeded the limit of 3.
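For context, the failing node reports `indices[41,8] = 6402 is not in [0, 5201)`: some token id in the padded data is at least as large as the Embedding layer's `input_dim`. A tiny sketch of the invariant involved (the `word_index` here is hypothetical):

```python
# A Keras tokenizer numbers words from 1..len(word_index); id 0 is reserved
# for padding. The Embedding layer can only look up ids < input_dim, so the
# safe vocabulary size is len(word_index) + 1.
word_index = {"the": 1, "cat": 2, "sat": 3}   # hypothetical tokenizer output
vocab_size = len(word_index) + 1
assert max(word_index.values()) < vocab_size  # every id is a valid embedding row
```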
|
How do I solve this Error while using keras tuner in building LSTM model? |
|runtime-error|hyperparameters|keras-tuner| |
In my company we initially used the most-voted answer in this thread: https://stackoverflow.com/a/39835908/14204063.
Kudos to its author for coming up with that algorithm.
However, it misses a common case: plurals of words like company -> companies or category -> categories cannot be produced by simply appending a suffix.
So I came up with the following algorithm; please have a look.
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
const maybePluralize = ({ count, noun, suffix = "s", pluralNoun }) => {
if (count <= 1) {
return `${count} ${noun}`;
}
const pluralForm = pluralNoun ? `${pluralNoun}` : `${noun}${suffix}`;
return `${count} ${pluralForm}`;
};
<!-- end snippet -->
Example, using the word "category":

```js
maybePluralize({ count: 1, noun: "category" });                            // "1 category"
maybePluralize({ count: 10, noun: "category", pluralNoun: "categories" }); // "10 categories"
```
Please upvote if you found this useful!
|
Django ships translated in my language, but I don't like some of the translated phrases (personal preference).
So I set up the `locale` folder and ran `python manage.py makemessages -l fa_IR`, which produces `django.po`. But when I open it, the phrases that already come translated with Django are not there.
So it looks like I have to edit (for example) `\venv\Lib\site-packages\django\contrib\auth\locale\fa\LC_MESSAGES\django.po`, but if I ever update the Django package in this project, all my edits will be lost, right? Or is there a way to get all the phrases into my `locale\django.po` and translate them again?
Another challenge is that some of the permissions for built-in models are not marked for translation. Do I have to subclass all of them and add the translation myself?
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/iimDG.png |
I'm using WSL 2 and want to dockerize a Spring Boot Kotlin app with Gradle (Kotlin DSL) and have it all running in Docker so I don't have to install anything locally. But every time I run a `docker-compose` command I get the same message:
`2024-01-13 13:26:19 Error: Unable to access jarfile ./build/libs/app.jar`
and the Spring Boot container stops running. Where do I start looking for what's going wrong, and how do I proceed?
I tried many different Dockerfiles but this is the most recent one:
```dockerfile
# Use the official Gradle image with JDK 17 as the base image
FROM gradle:7.4.1-jdk17 AS builder
WORKDIR /app
COPY . .
RUN ./gradlew clean build -x test
EXPOSE 8080
CMD ["java", "-jar", "./build/libs/app.jar"]
```
docker-compose.yml:

```yaml
version: '3'
services:
  mysql:
    image: 'mysql:latest'
    environment:
      - 'MYSQL_DATABASE=twitch-bot'
      - 'MYSQL_PASSWORD=secret'
      - 'MYSQL_ROOT_PASSWORD=secret'
      - 'MYSQL_HOST=localhost'
    ports:
      - '3306:3306'
  spring-boot-kotlin:
    depends_on:
      - mysql
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '8080:8080'
```
application.properties:

```
spring.datasource.url=jdbc:mysql://localhost:3306/twitch-bot
spring.datasource.username=root
spring.datasource.password=secret
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.jpa.database-platform=org.hibernate.dialect.MySQL5Dialect
spring.jpa.generate-ddl=true
spring.jpa.hibernate.ddl-auto=update
```
Edit:
I ran `docker-compose run yourapp ls build/libs` and was shown `twitch-bot-0.0.1-SNAPSHOT.jar` and so I changed the `app.jar` in the Dockerfile to `twitch-bot-0.0.1-SNAPSHOT.jar` and it started outputting other things.
So now the Dockerfile looks like:
```
# Use the official Gradle image with JDK 17 as the base image
FROM gradle:7.4.1-jdk17 AS builder

WORKDIR /app
COPY . .
RUN ./gradlew clean build -x test

EXPOSE 8080
CMD ["java", "-jar", "./build/libs/twitch-bot-0.0.1-SNAPSHOT.jar"]
```
The `docker build` command outputs:
```
ERROR: "docker buildx build" requires exactly 1 argument.
See 'docker buildx build --help'.

Usage:  docker buildx build [OPTIONS] PATH | URL | -

Start a build
```
I now get errors like this from the spring-boot container:
```
2024-01-13 14:29:53 com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
2024-01-13 14:29:53 2024-01-13T13:29:53.503Z  WARN 1 --- [           main] o.h.e.j.e.i.JdbcEnvironmentInitiator     : HHH000342: Could not obtain connection to query metadata
2024-01-13 14:29:53
2024-01-13 14:29:53 java.lang.NullPointerException: Cannot invoke "org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(java.sql.SQLException, String)" because the return value of "org.hibernate.resource.transaction.backend.jdbc.internal.JdbcIsolationDelegate.sqlExceptionHelper()" is null
```
Just store the data in variables (likely global).

Try not to think about all the things that a real database provides, like persistence. You may have to assume that all the test cases run in the same process and share memory, which again is not normal; it would be fair of them to share those kinds of details.

It is easy to see why this would confuse someone, as this is something you would NEVER do in any real-life coding situation other than these creative interview questions. I don't believe you are even allowed to use files either. With these online coding assessments you REALLY have to think outside the box, which slows you down.

It would have been easy for them to spin up a database container alongside the container that runs the tests, to make the situation less awkward. Also, they don't ask questions like these when practicing on their site. So surprise!
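As a concrete sketch of the trick (the function names and data shapes below are invented, not from any particular assessment): a module-level dict stands in for the whole database, and successive test cases see each other's writes because they share process memory.

```python
# A module-level dict plays the role of the "database".
# Everything lives in process memory; nothing persists between runs.
_store = {}

def put(key, value):
    _store[key] = value

def get(key, default=None):
    return _store.get(key, default)

def delete(key):
    # Returns True if the key existed and was removed.
    return _store.pop(key, None) is not None

# Successive "test cases" see each other's writes because they share memory:
put("user:1", {"name": "Ada"})
print(get("user:1"))           # {'name': 'Ada'}
print(delete("user:1"))        # True
print(get("user:1", "gone"))   # gone
```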
|
I am trying to combine multiple macros into one. How do I add a loop so that it repeats itself 'x' times, with 'x' being the value of Sheet3!A1, and then executes the print preview function?
This is the VBA code I have so far. I can't seem to get the loop to work correctly. I'm trying to work out what I need to add at *"I CANT WORK OUT WHAT TO PUT HERE"*.
```
Sub Calculate()
' Macro to copy data to a new row, run it a specified number of times, and print output.
' Turn off screen updating.
Application.ScreenUpdating = False
' Declarations.
Dim copySheet As Worksheet
Dim pasteSheet As Worksheet
Dim targetRange As Range
Dim loopCount As Long
Dim i As Long
Dim printSheet As Worksheet
' Set variables.
Set copySheet = Worksheets("Sheet5")
Set pasteSheet = Worksheets("Sheet6")
Set printSheet = Worksheets("Sheet6")
' Unlock pasteSheet to allow editing.
pasteSheet.Unprotect Password:="password"
' Set targetRange as the last cell in column C with a value.
Set targetRange = pasteSheet.Cells(pasteSheet.Rows.Count, 3).End(xlUp).Offset(1, 0)
' Set targetRange as the first cell in column C without conditional formatting
' under the last cell in column C with no value.
Do Until targetRange.FormatConditions.Count = 0
Set targetRange = targetRange.Offset(1, 0)
Loop
' Copy range C22:M22.
copySheet.Range("C22:M22").Copy
' Paste the copied range into targetRange.
targetRange.PasteSpecial xlPasteAll
' Lock pasteSheet to prevent further editing.
pasteSheet.Protect Password:="password"
' Get the loop count from Sheet3!A1
loopCount = Sheets("Sheet3").Range("A1").Value
' Loop from 1 to the specified loop count
For i = 1 To loopCount
' # **"I CANT WORK OUT WHAT TO PUT HERE"**
Next i
' Unprotect and unhide printSheet.
printSheet.Visible = xlSheetVisible
printSheet.Unprotect Password:="password"
' Open print preview for the specified range.
printSheet.Range("B1:N46").PrintPreview
' Hide and protect printSheet again.
printSheet.Protect Password:="password"
printSheet.Visible = xlSheetHidden
' Turn off the cut-copy mode.
Application.CutCopyMode = False
' Turn on screen updating.
Application.ScreenUpdating = True
End Sub
```
I've tried copying this, as the task I want to repeat, into *"I CANT WORK OUT WHAT TO PUT HERE"*:
```
' Set variables.
Set copySheet = Worksheets("Sheet5")
Set pasteSheet = Worksheets("Sheet6")
' Unlock pasteSheet to allow editing.
pasteSheet.Unprotect Password:="password"
' Set targetRange as the last cell in column C with a value.
Set targetRange = pasteSheet.Cells(pasteSheet.Rows.Count, 3).End(xlUp).Offset(1, 0)
' Set targetRange as the first cell in column C without conditional formatting
' under the last cell in column C with no value.
Do Until targetRange.FormatConditions.Count = 0
Set targetRange = targetRange.Offset(1, 0)
Loop
' Copy range C22:M22.
copySheet.Range("C22:M22").Copy
' Paste the copied range into targetRange.
targetRange.PasteSpecial xlPasteAll
' Lock pasteSheet to prevent further editing.
pasteSheet.Protect Password:="password"
``` |
You are using the method incorrectly. It should be:

    addEntity: rxMethod<entity>(pipe(
      // ...
    ));

Follow these:

1. https://github.com/ngrx/platform/discussions/3796#rxmethod
2. https://www.angulararchitects.io/blog/the-new-ngrx-signal-store-for-angular-2-1-flavors/
3. https://ngrx.io/guide/signals/signal-store
For this problem, I would like to model the constraint below in CPLEX, but I get an error saying "Operator not available for dvar float+[][Periods] == dvar float+". Is there a way to model this?
```
forall(p in Products,l in Operations,t in Periods)
ct3:
X[p][t-LeadTime[p][l]] == Y[p][l][t];
```
The LeadTime is the estimated lead time of operation l of product p.
I tried expressing it in terms of int, but that just produced a lot more errors.
Thanks in advance. |
Modelling lead time in CPLEX (Production Planning Problem) |
|indexing|cplex| |
null |
null |
{"Voters":[{"Id":3358272,"DisplayName":"r2evans"}],"DeleteType":1} |
I have 5 (soon to be more) clients. For each of them I am developing 1-2 custom apps. Each of these apps requires a react front-end, and functions backend (cloudflare workers). And the apps for a given client roll up into a "container" app, so the route maps can all be combined into a single react and functions deployment. In addition to the 1-2 custom apps for each client, there is 1 shared/common app (auth), a shared UI library, and ideally identical deployment scripts across each of them.
I think a monorepo could work well for this: keep the repo structure and common modules on the main trunk, create forks for each client, push updates to trunk from the common modules, and let all the other clients pull those updates down when they're ready to adopt the most recent version. I've tried this structure so far:
    /apps
      /frontend
      /functions
    /libs
      /modules
        /shared
          /auth
            /frontend
            /functions
        /client-app-A
          /frontend
          /functions
        /client-app-B
          /frontend
          /functions
      /shared
        /other-shared-lib
`/frontend` and `/functions` in each app directory import their respective folders from each of the modules in `/libs`.
The challenge is that with Nx I have no way of incrementally adopting versions or of easily contributing updates to sub-modules or shared libs back to trunk.
Any ideas on how to better go about this? I realize git submodules may be what I need, but online reviews say they come with a lot of foot-guns. And ideally there would be some way to keep all of these as part of the main core repo...
Happy to provide more context if needed... Many many thanks in advance...
|
Multiple customers forking off same monorepo |
|monorepo|nrwl-nx|pnpm|nx-monorepo| |
I'm trying to develop a simple js runtime on top of v8 (rusty_v8) and I'm having some trouble with async stuff.
I have a function that is an entry point from the js realm:
```rust
fn message_from_worker(
scope: &mut v8::HandleScope,
args: v8::FunctionCallbackArguments,
_ret: v8::ReturnValue,
)
// ...
```
This function takes different kinds of messages from the js realm and does different things with them. One of the messages is a request to do a long running task (fetch). I want to be able to send a message back to the js realm when the task is done.
So far this is what I have:
```rust
fn message_from_worker ... {
// ...
let message = args.get(0);
let message = message.to_object(scope).unwrap();
let kind = utils::get(scope, message, "kind").to_rust_string_lossy(scope);
match kind.as_str() {
// ...
"fetch" => {
let request = utils::get(scope, message, "request");
let request = request.to_object(scope).unwrap();
let url = utils::get(scope, request, "url").to_rust_string_lossy(scope);
let callback = utils::get(scope, message, "sendResponse");
let callback = match v8::Local::<v8::Function>::try_from(callback) {
Ok(callback) => callback,
Err(_) => {
utils::throw_type_error(scope, "sendResponse is not a function");
return;
}
};
// We want to perform our async http request here
let response = { /* for now response is a sync mock */ };
let undefined = v8::undefined(scope).into();
callback.call(scope, undefined, &[response.into()]);
}
// ...
```
The naive approach (using a blocking request) doesn't work:
```rust
reqwest::blocking::get(&url).unwrap()
```
`Cannot drop a runtime in a context where blocking is not allowed. This happens when a runtime is dropped from within an asynchronous context.`
-------
So I'm thinking about registering all pending operations in a global state and then poll them from the main thread. I'm not sure how to do this though.
I don't even know if I can poll multiple operations at the same time, handling the resolution of one of them and then continue to poll the rest.
--------
EDIT: I [published the source code][1] to make it easier to reproduce. The current code is on the `fetch` branch; use `cargo run --bin cli scripts/fetch.js --event=fetch` to execute the fetch script and trigger the fetch event.
[1]: https://github.com/max-lt/openworkers-runtime/tree/fetch |
As @Christopher-Renauro pointed out, you currently have a list of length 10.
Setting the diagonal of a matrix to zeros can be done easily with NumPy.

- convert your list to a NumPy array of 10x10 shape:

      new_m = np.array(m).reshape(10, 10)

- use `np.fill_diagonal()` (note the `np.` prefix; it is a module-level function, not an array method):

      np.fill_diagonal(new_m, val=0)
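Putting the two steps together as a small runnable sketch (the flat list `m` below is dummy data standing in for the real values):

```python
import numpy as np

# Dummy flat list of 100 values standing in for the questioner's data.
m = list(range(100))

new_m = np.array(m).reshape(10, 10)  # reshape the flat list into a 10x10 matrix
np.fill_diagonal(new_m, val=0)       # zero the main diagonal, in place

print(new_m.diagonal())  # [0 0 0 0 0 0 0 0 0 0]
```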
|
In my Flutter app, there is a custom drawer that is accessed through a button. In this drawer, there is a widget (InfoCard) which displays some of the user's information. However, this widget uses a FutureBuilder, so every time the drawer is displayed it shows a CircularProgressIndicator until it has fetched all the information from the Firestore server. My question is: can I avoid the FutureBuilder so that, when the app is fully loaded, it already has all the data needed for the widget?
Another problem related to this is that I need to wait until the Firestore data is loaded before returning the InfoCard widget (I don't know why, because all the data is already loaded when the app is launched). So the thing is: either I avoid calling the database, or I call it somehow before building the widget so that I don't need to use the FutureBuilder.
Here is the widget drawer:
class SideDrawer extends ConsumerStatefulWidget {
SideDrawer({
super.key,
required this.setIndex,
});
Function(int index) setIndex;
@override
ConsumerState<SideDrawer> createState() => _SideDrawerState();
}
class _SideDrawerState extends ConsumerState<SideDrawer> {
@override
Widget build(BuildContext context) {
final userP = ref.watch(userStateNotifier);
return Scaffold(
body: Container(
width: MediaQuery.sizeOf(context).width * 3 / 4,
height: double.infinity,
color: Theme.of(context).colorScheme.secondary,
child: SafeArea(
child: Column(
mainAxisAlignment: MainAxisAlignment.spaceBetween,
children: [
FutureBuilder<Widget>(
future: Datas().infocard(userP),
builder: (context, snapshot) {
if (snapshot.connectionState == ConnectionState.done) {
if (snapshot.hasError) {
return Text('Errore: ${snapshot.error}');
}
return snapshot.data!;
} else {
return CircularProgressIndicator();
}
},
),
ColDrawer(setIndex: (index) {
widget.setIndex(index);
},),
],
),
),
),
);
}
}
the infoCard method:
Future<Widget> infocard(AppUser userP) async {
var db = FirebaseFirestore.instance;
QuerySnapshot qn =
await db.collection('users').where('id', isEqualTo: userP.id).get();
return InfoCard(
username: userP.username,
age: userP.age,
weight: userP.measurements
.firstWhere((measure) => measure.title == 'Weight')
.datas
.last
.values
.first,
);
}
If I don't wait for the database to be loaded or if I substitute the FutureBuilder with the InfoCard widget, it throws this error ONLY for the userP.measurements... line:
StateError (Bad state: No element)
Lastly, why do I need to call the database at all, given that all the data is already loaded from Firestore when the app is launched?
Here is my code: https://github.com/d0uble-happiness/DiscogsCsvVue
App.vue
<template>
<div>
<FileUpload @file="setFile" />
<ParseCsvToArray v-if="file" :file="file" />
<ProcessReleaseData />
<FetchRelease />
</div>
</template>
<script lang="ts">
import { defineComponent } from 'vue';
import FileUpload from './components/FileUpload.vue'
import ParseCsvToArray from './components/ParseCsvToArray.vue'
import ProcessReleaseData from './components/ProcessReleaseData.vue'
import FetchRelease from './components/FetchRelease.vue'
export default defineComponent({
name: 'App',
components: {
FileUpload,
ParseCsvToArray,
ProcessReleaseData,
FetchRelease
},
data() {
return {
file: null as null | File,
}
},
methods: {
setFile(file: File) {
console.log("Received file:", file)
this.file = file;
}
},
mounted() {
console.log("mounted");
},
});
</script>
<style></style>
ParseCsvToArray.vue
<template>
<div>
<p v-for="row of parsedData" v-bind:key="row.id">
{{ row }}
</p>
</div>
</template>
<script lang="ts">
import { defineComponent } from 'vue'
import Papa from 'papaparse';
import ROW_NAMES from './RowNames.vue'
export default defineComponent({
name: 'ParseCsvToArray',
props: {
file: File
},
data() {
return {
parsedData: [] as any[],
rowNames: ROW_NAMES
}
},
methods: {
parseCsvToArray(file: File) {
Papa.parse(file, {
header: false,
complete: (results: Papa.ParseResult<any>) => {
console.log('Parsed: ', results.data);
this.parsedData = results.data;
}
});
}
},
mounted() {
if (this.file) {
this.parseCsvToArray(this.file);
}
},
});
</script>
<style></style>
FetchRelease.vue
<template>
<label>Fetch release</label>
</template>
<script lang="ts">
import { DiscogsClient } from '@lionralfs/discogs-client';
import ProcessReleaseData from './ProcessReleaseData.vue'
import { defineComponent } from 'vue'
// import { defineAsyncComponent } from 'vue'
export default defineComponent ({
name: 'FetchRelease',
methods: {
fetchRelease
}
});
const db = new DiscogsClient().database();
// const AsyncComp = defineAsyncComponent(() => {
// return new Promise((resolve, reject) => {
// // ...load component from server
// resolve(/* loaded component */)
// })
// })
async function fetchRelease(releaseId: string): Promise<any[] | { error: string }> {
try {
const { data } = await db.getRelease(releaseId);
return ProcessReleaseData(releaseId, data);
} catch (error) {
return {
error: `Release with ID ${releaseId} does not exist`
};
}
}
</script>
<style></style>
ProcessReleaseData.vue
<template>
<label>Process release data</label>
</template>
<script lang="ts">
import { type GetReleaseResponse } from '@lionralfs/discogs-client/types/types';
export default {
name: 'ProcessReleaseData',
methods: {
processReleaseData
},
data() {
return {
// formattedData
}
},
};
function processReleaseData(releaseId: string, data: GetReleaseResponse) {
const { country = 'Unknown', genres = [], styles = [], year = 'Unknown' } = data;
const artists = data.artists?.map?.(artist => artist.name);
const barcode = data.identifiers.filter(id => id.type === 'Barcode').map(barcode => barcode.value);
const catno = data.labels.map(catno => catno.catno);
const uniqueCatno = [...new Set(catno)];
const descriptions = data.formats.map(descriptions => descriptions.descriptions);
const format = data.formats.map(format => format.name);
const labels = data.labels.map(label => label.name);
const uniqueLabels = [...new Set(labels)];
const qty = data.formats.map(format => format.qty);
const tracklist = data.tracklist.map(track => track.title);
// const delimiter = document.getElementById('delimiter').value || '|';
const delimiter = '|';
const formattedBarcode = barcode.join(delimiter);
const formattedCatNo = uniqueCatno.join(delimiter);
const formattedGenres = genres.join(delimiter);
const formattedLabels = uniqueLabels.join(delimiter);
const formattedStyles = styles.join(delimiter);
const formattedTracklist = tracklist.join(delimiter);
const preformattedDescriptions = descriptions.toString().replace('"', '""').replace(/,/g, ', ');
const formattedDescriptions = '"' + preformattedDescriptions + '"';
const formattedData: any[] = [
releaseId,
artists,
format,
qty,
formattedDescriptions,
formattedLabels,
formattedCatNo,
country,
year,
formattedGenres,
formattedStyles,
formattedBarcode,
formattedTracklist
];
return formattedData;
}
</script>
<style></style>
To summarise: I want to take the `parsedData` (which will be an array of integers) and pass it on to `FetchRelease`, to do API calls, processing the results with `ProcessReleaseData`.
At the moment, `return ProcessReleaseData` is throwing this error:
> Value of type 'DefineComponent<{}, {}, {}, {}, { processReleaseData: (releaseId: string, data: GetReleaseResponse) => any[]; }, ComponentOptionsMixin, ComponentOptionsMixin, ... 5 more ..., {}>' is not callable. Did you mean to include 'new'?
...but VSCode's suggested fix doesn't solve it.
I was told...
> Two ways to go about this - js/ts file that is a shared “helper” or something and exports the function, or include the component, give it a ref and access function through components ref.
...but I really don't know how to do that?
I was wondering if it could possibly be something as simple as...
const AsyncComp = defineAsyncComponent(() => {
return new Promise((resolve, reject) => {
// ...load component from server
component: import('./ProcessReleaseData.vue'),
resolve(ProcessReleaseData)
})
})
...but that throws this error:
> Argument of type 'DefineComponent<{}, {}, {}, {}, { processReleaseData: (releaseId: string, data: GetReleaseResponse) => any[]; }, ComponentOptionsMixin, ComponentOptionsMixin, ... 5 more ..., {}>' is not assignable to parameter of type '{ default: never; } | PromiseLike<{ default: never; }>'.
Any help please? TIA
Edit: the project now won't compile at all, and I am now getting this:
> 14:03:49 [vite] Pre-transform error: Transform failed with 1 error: D:/MY DOCUMENTS/CODE/DISCOGS API CLIENT/DiscogsCsvVue/src/components/FetchRelease.vue:12:1: ERROR: Expected ";" but found ")"
I can't find any such error though, I am completely at a loss at this stage. |
Hello, I have a problem. I am using Kotlin's annotated strings. When building an annotated string, `withStyle` takes two parameters: the first is `style`, where we define the style of the text being created; the second is `block`. It is this second parameter that I cannot figure out how to use. I wonder what it is for, and whether anyone can list the different possible uses of it.
My second question is whether several words can be grouped together so that a style is applied to all of them at once, even when the words are not consecutive. Currently, when I have to apply the same style to several scattered words, I must wrap each word in its own `append` call: I put the first word in `append` with the style's characteristics, then move on to the second word and wrap it in `append` again with the same style, and so on for every word that should share that style.
Is there no way to group all these words together and apply the style to them only once?
|
How can I use the block parameter of withStyle when creating an AnnotatedString in Jetpack Compose? |
|string| |
null |
I have set up a postgres:13.3 Docker container with scram-sha-256 authentication.
Initially, I ran:
```
docker run -d --name my-postgres13 -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=fbp123 -e POSTGRES_DB=mydb -e POSTGRES_HOST_AUTH_METHOD=scram-sha-256 -v pgdata13:/var/lib/postgresql/data postgres:13.3
```
postgresql.conf:
```
password_encryption = scram-sha-256
```
pg_hba.conf:
```
hostnossl all all 0.0.0.0/0 scram-sha-256
local all all scram-sha-256
```
After the above was done and the container restarted, I created a new fbp2 user with the password 'fbp123', and the password seems to be saved in SCRAM form in the pg_authid table:
```
16386 | fbp2 | t | t | f | f | t | f | f | -1 | SCRAM-SHA-256$4096:yw+jyaEzlvlOjZnc/L/flA==$tqPlJIDXv9zueaGd8KpQf11N82IGgAOsK4
Lhb7lPhi4=:+mCXFKb2y5PG6ycIKCz7xaY8U5MNLnkzlPZK8pt3to0= |
```
I use the original plain-text password from within my Java app to connect:
```
hikariConfig = new HikariConfig();
hikariConfig.setUsername("fbp2");
hikariConfig.setPassword("fbp123");
hikariConfig.setJdbcUrl("jdbc:postgresql://%s:%s/%s".formatted("localhost", 5432, "mydb"));
HikariDataSource dataSource = new HikariDataSource(hikariConfig);
return dataSource.getConnection();
```
From logs, this url is used: ``` jdbc:postgresql://localhost:5432/mydb ```
The issue is that password authentication fails, even though I use the same plain-text password that I set on the Postgres server:
```
2024-03-30 14:38:03.372 DEBUG 22440 [ main] c.z.h.u.DriverDataSource : Loaded driver with class name org.postgresql.Driver for jdbcUrl=jdbc:postgresql://localhost:5432/mydb
2024-03-30 14:38:03.601 DEBUG 22440 [ main] c.z.h.p.PoolBase : HikariPool-1 - Failed to create/setup connection: FATAL: password authentication failed for user "fbp2"
2024-03-30 14:38:03.601 DEBUG 22440 [ main] c.z.h.p.HikariPool : HikariPool-1 - Cannot acquire connection from data source
org.postgresql.util.PSQLException: FATAL: password authentication failed for user "fbp2"
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:693)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:203)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:258)
```
Note that if I revert to "trust" and send no password, I get this:
```
org.postgresql.util.PSQLException: The server requested SCRAM-based authentication, but no password was provided.
```
So it seems the server only wants SCRAM. I have tried md5 with no success.
----
Some relevant dependencies:
```
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>42.3.0</version>
</dependency>
<dependency>
<groupId>com.zaxxer</groupId>
<artifactId>HikariCP</artifactId>
<version>5.1.0</version>
</dependency>
```
My Docker Desktop runs on Windows 11.
I use Oracle OpenJDK 20.0.1.
I can connect to mydb as the fbp2 user with no problem via the psql admin tool (after entering the plain-text password):
```
root@a00ccf79f08a:/# psql -h localhost -p 5432 -U fbp2 -d mydb
Password for user fbp2:
psql (13.3 (Debian 13.3-1.pgdg100+1))
Type "help" for help.
mydb=#
``` |
null |
Storing the preferred font-size in localStorage |
The trouble with importing MirageJS seems to be linked to how its modules are resolved in the Jest setup. Since MirageJS originated as an Ember CLI add-on, its module packaging may not work seamlessly with a Jest environment configured for ES modules.
Try simplifying your Jest setup for TypeScript to something like this:
```
import type { Config } from '@jest/types';
// Jest configuration for TypeScript with ES Modules
const config: Config.InitialOptions = {
preset: 'ts-jest/presets/default-esm', // Use the ESM preset for ts-jest
globals: {
'ts-jest': {
useESM: true,
},
},
moduleNameMapper: {
'^(\\.{1,2}/.*)\\.js$': '$1', // Redirect .js imports to their .ts counterparts
'^public-ip$': '<rootDir>/src/__tests__/__mocks__/publicIp.js',
},
transform: {}, // Override to prevent jest from transforming
extensionsToTreatAsEsm: ['.ts'], // Treat .ts files as ES Modules
testEnvironment: 'node', // Specify the environment in which the tests are run
setupFilesAfterEnv: ['<rootDir>/jest.setup.js'], // Setup file for Jest
};
export default config;
``` |
I was searching for a solution to this problem and encountered a number of threads/topics/issues about SSH agent forwarding *not working* through VS Code. My problem is the opposite: I want to prevent VS Code from forwarding the agent to the dev container, and I could not find a solution either through googling or through experiments.
My setup is rather simple. I am connecting to a remote server and have a container set up in there, to which I am attaching (this container is not created by vscode, it is a jupyter container spawned by a jupyterhub).
When attaching to the container, vscode code creates a socket under `/tmp` named `vscode-ssh-auth-<id>.sock`, and it is a remote endpoint to an ssh auth socket forwarded from my local machine, up to my understanding. The problem is, I want to prevent it from doing that for security reasons (I don't want to have agent sock forwarded the whole time I'm working there, as it is a shared machine), and I was not able to figure out how to do it.
I have `ForwardAgent` explicitly set to `no` for the server I'm connecting to (and I've checked that the agent is not forwarded to the server itself). I have `Enable Agent Forwarding` disabled in the vscode settings. I have played around with couple more settings, even checked config dirs for any clue, but found none. I have also checked whether the same happens for newly created dev container (using vscode utils), and it seems so.
I'm not sure whether I'm missing something obvious here, or it is a baked-in behaviour for some reason, but I'd appreciate any clues or clarification on this topic. |
Is there a way to prevent vscode from forwarding ssh agent to remote dev container? |
|visual-studio-code|ssh|vscode-remote| |
I am trying out testing for the first time since I started my journey with Express and TS. I decided to do some testing on a new project with Jest, but I want to use import/export instead of require... but now Jest is complaining that I can't use an import statement outside a module. I looked at different solutions on here, but I might have messed up something previously and that's why it's not working for me. Currently I have these files:
package.json:
{
"name": "backend",
"version": "1.0.0",
"description": "",
"main": "index.ts",
"type": "module",
"scripts": {
"test": "jest",
"start": "node src/index.ts",
"dev": "nodemon src/index.ts",
"build": "tsc"
},
"keywords": [],
"author": "Gabor Adorjani <gadorjani@windowslive.com>",
"license": "ISC",
"devDependencies": {
"@types/express": "^4.17.21",
"@types/jest": "^29.5.12",
"@types/node": "^20.11.30",
"bcrypt": "^5.1.1",
"cors": "^2.8.5",
"crypto": "^1.0.1",
"express": "^4.19.1",
"jest": "^29.7.0",
"jsonwebtoken": "^9.0.2",
"mongodb": "^6.5.0",
"mongoose": "^8.2.3",
"nodemon": "^3.1.0",
"ts-jest": "^29.1.2",
"typescript": "^5.4.3",
"uuid": "^9.0.1"
},
"dependencies": {
"@types/bcryptjs": "^2.4.6",
"@types/supertest": "^6.0.2",
"bcryptjs": "^2.4.3",
"crypto-js": "^4.2.0",
"dotenv": "^16.4.5",
"express-rate-limit": "^7.2.0",
"http": "^0.0.1-security",
"http-proxy-middleware": "^2.0.6",
"mongodb-memory-server": "^9.1.8",
"supertest": "^6.3.4"
}
}
tsconfig.json
{
"compilerOptions": {
"target": "ES2020",
"module": "NodeNext",
"moduleResolution": "NodeNext",
"sourceMap": true,
"resolveJsonModule": true,
"outDir": "dist",
"esModuleInterop": true
},
"include": ["src/**/*"],
"files": ["types.d.ts"],
"ts-node": {
"esm": true,
"compiler": "typescript"
}
}
|
Which datepicker you are using doesn't make any difference, since the behaviour you are looking for comes from RxJS. You can use `pairwise` for that.
HTML
----
<form [formGroup]="dateForm">
<mat-form-field style="margin-right: 10px;">
<input matInput [matDatepicker]="picker" formControlName="startdate" placeholder="Start date"/>
<mat-datepicker-toggle matSuffix [for]="picker"></mat-datepicker-toggle>
<mat-datepicker #picker></mat-datepicker>
</mat-form-field>
<mat-form-field>
<input matInput [matDatepicker]="picker2" formControlName="enddate" placeholder="End date"/>
<mat-datepicker-toggle matSuffix [for]="picker2"></mat-datepicker-toggle>
<mat-datepicker #picker2></mat-datepicker>
</mat-form-field>
</form>
TS
----
dateForm: FormGroup;
constructor(private formBuilder: FormBuilder) {
this.dateForm = this.formBuilder.group({
startdate: [null, Validators.required],
enddate: [null, Validators.required],
});
}
ngOnInit(): void {
this.dateForm.valueChanges.pipe(
startWith({ oldValue: null, newValue: null }),
distinctUntilChanged(),
pairwise(),
map(([oldValue, newValue]) => { return { oldValue, newValue } })
)
.subscribe({
next: (val) => {
console.log(val)
}
});
}
[Here is the stackblitz][1]
[1]: https://stackblitz.com/edit/angular-ivy-pktoy7?file=src%2Fapp%2Fapp.component.html,src%2Fapp%2Fapp.component.ts,src%2Fmain.ts
Bear in mind that this is just a basic example explaining how you can do it, and it is not complete. To begin with, you should add some types (to the form controls and to the RxJS pipeline), you should destroy the subscription when it is no longer needed, etc.
Additionally, I just took an existing project and implemented this there. Although this particular example would be practically identical in the latest versions of Angular, the StackBlitz example uses an old version of Angular, so it is not standalone; you will be able to optimize some things if you use Angular 16-17.
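If the `pairwise` behaviour itself is unclear: it emits the previous and the current emission together as a pair. A plain TypeScript sketch of the same idea outside RxJS (the function below is an illustration of the operator's semantics on an array, not the real operator):

```typescript
// Stand-in for RxJS pairwise semantics, applied to a plain array:
// emit [previous, current] for every element after the first.
function pairwise<T>(values: T[]): Array<[T, T]> {
  const pairs: Array<[T, T]> = [];
  for (let i = 1; i < values.length; i++) {
    pairs.push([values[i - 1], values[i]]);
  }
  return pairs;
}

console.log(pairwise(["2024-01-01", "2024-02-01", "2024-03-01"]));
// [["2024-01-01","2024-02-01"], ["2024-02-01","2024-03-01"]]
```

This is why the answer seeds the stream with `startWith`: without an initial value, the first real form change would produce no pair at all.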
In Mbed TLS, RSA encryption/decryption in a C program works when using separate buffers (for plainText, cipherText, and decryptedText, i.e. the content of plainText and decryptedText ends up the same), but not when using just one buffer to perform in-place encryption/decryption: I get gibberish/incorrectly decrypted data.
Is that a general limitation, or is my code wrong?
Background:
I'm trying to use in-place encryption and decryption with RSA in Mbed TLS in a C program. [Here](https://forums.mbed.com/t/in-place-encryption-decryption-with-aes/4531) it says that "In place cipher is allowed in Mbed TLS, unless specified otherwise.", although I'm not sure whether that only covers AES. From my reading, I didn't see any specification saying otherwise (i.e. it should work) for mbedtls_rsa_rsaes_oaep_decrypt in the Mbed TLS API documentation.
Code:
```
size_t sizeDecrypted;
unsigned char plainText[15000] = "yxcvbnm";
unsigned char cipherText[15000];
unsigned char decryptedText[15000];
rtn = mbedtls_rsa_rsaes_oaep_encrypt(&rsa, mbedtls_ctr_drbg_random, &ctr_drbg, NULL, 0, sizeof("yxcvbnm"), plainText, cipherText);
rtn = mbedtls_rsa_rsaes_oaep_decrypt(&rsa, mbedtls_ctr_drbg_random, &ctr_drbg, NULL, 0, &sizeDecrypted, cipherText, decryptedText, 15000);
// decryptedText afterwards contains the correctly decrypted text, identical to plainText

unsigned char text[15000] = "yxcvbnm";
rtn = mbedtls_rsa_rsaes_oaep_encrypt(&rsa, mbedtls_ctr_drbg_random, &ctr_drbg, NULL, 0, sizeof("yxcvbnm"), text, text);
rtn = mbedtls_rsa_rsaes_oaep_decrypt(&rsa, mbedtls_ctr_drbg_random, &ctr_drbg, NULL, 0, &sizeDecrypted, text, text, 15000);
// text afterwards doesn't contain the correctly decrypted text/has a different content than plainText
// rtn is always 0, i.e. no error is returned
``` |
The data returned from `useSearch` needs to be an array in order for you to map it, so in your `useSearch` hook you need to only get the `value` array if you want to only render the results.
```javascript
// in useSearch hook
// we only take the array of webpages value
// looking from your console.log screenshot
setData(response.data.webPages.value);
```
You are trying to map your `searchData` which is a string. In order for you to map the actual api data you would need to use the data returned from your `useSearch` hook
```javascript
// this is the API data
const { data } = useSearch({ searchTerm: searchData });
// replace searchData with data
{data ? (
<ul>
{data.map((i) => (
<TileItems
key={i.id}
image={i.thumbnailUrl}
title={i.name}
description={i.snippet}
website={i.url}
/>
))}
</ul>
) : (
<p>Loading...</p>
)}
```
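If it helps to see the nesting on its own, here is a standalone sketch; the response shape is an assumption based on your console.log screenshot, not the exact API payload:

```javascript
// hypothetical response shaped like the Bing Web Search payload
const response = {
  data: {
    webPages: {
      value: [
        { id: "1", name: "Example", snippet: "A result", url: "https://example.com", thumbnailUrl: "" },
      ],
    },
  },
};

// response.data alone is an object, not an array, so calling .map on it fails;
// the mappable array lives at response.data.webPages.value
const results = response.data.webPages.value;
console.log(Array.isArray(results)); // true
```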
Lemme know if this fixes your problem
|
I managed to solve my question. I was trying to adapt an old macro and misunderstanding the loop function.
I have removed the following from the above code:
```
' Set targetRange as the last cell in column C with a value.
Set targetRange = pasteSheet.Cells(pasteSheet.Rows.Count, 3).End(xlUp).Offset(1, 0)
' Set targetRange as the first cell in column C without conditional formatting
' under the last cell in column C with no value.
Do Until targetRange.FormatConditions.Count = 0
    Set targetRange = targetRange.Offset(1, 0)
Loop
' Copy range C22:M22.
copySheet.Range("C22:M22").Copy
' Paste the copied range into targetRange.
targetRange.PasteSpecial xlPasteAll
' Lock pasteSheet to prevent further editing.
pasteSheet.Protect Password:="password"
```
And I have amended the loop to be:
```
' Get the loop count from Sheet3!A1
loopCount = Sheets("Sheet3").Range("A1").Value
' Loop from 1 to the specified loop count
For i = 1 To loopCount
    ' Set targetRange as the last cell in column C with a value.
    Set targetRange = pasteSheet.Cells(pasteSheet.Rows.Count, 3).End(xlUp).Offset(1, 0)
    ' Copy range C22:M22.
    copySheet.Range("C22:M22").Copy
    ' Paste the copied range into targetRange.
    targetRange.PasteSpecial xlPasteAll
Next i
```
The above now removes redundant code and allows my desired function to run "X" amount of times, with "X" being the value of Sheet3!A1.
|
Hi, I am new here and I am NOT a tech-savvy guy (coding), so please don't be too harsh with me ;-)
I am trying to add an extra field in the products backend but struggle a lot because I don't understand the coding. I got help from ChatGPT and it works 50%.
What I am trying to accomplish is to have an extra field similar to the SKU field. It has to show (in the BACKEND) on the product page and the quick-edit page for every product. It has to be searchable (in the BACKEND) when I search for products.
Right now my boss is using the SKU field to write the position where the product physically is. So when customers order, we get the order printed, and the first description of every product is the SKU field, which we have used for the position instead of a real SKU number.
It works ... but not ideally. Every time we get a new product, or we need to change the position where to physically find the product, we have to edit the SKU.
I would love to make it much easier by having an extra field that behaves like the SKU field in terms of editing, searchability and so on.
It is NOT visible for customers on the shop page, but it is on the order we print out.
I would like to show the code ChatGPT got me and some pictures I made, because I think it's easier to understand then, but I am new here and don't know whether that's allowed for a newbie. So thanks in advance and I hope to hear from you.
I tried the Reddit WooCommerce group and ChatGPT.
I would hope someone could help with the code I got, which doesn't work properly. |
Using ES Modules with TS and Jest testing (cannot use import statement outside module) |
|testing|jestjs|esmodules| |
How do you calculate the in-tangents and out-tangents of a cubic Bézier curve, knowing the following: start control point (x1, y1), end control point (x2, y2) and end point (x, y)? |
Function for calculating tangents in a cubic Bézier curve |
{"OriginalQuestionIds":[56658553],"Voters":[{"Id":4518341,"DisplayName":"wjandrea","BindingReason":{"GoldTagBadge":"python"}}]} |
This download method is not working for my project. Please, I need help, since I am working with .xaml and .xaml.cs.
```
private void OnDescargarDocumento()
{
    foreach (var item2 in ListadoInfDoc)
    {
        Num_CodDocumentos = item2.Num_CodDocumentos;
        Bin_Imagen = item2.Bin_Imagen;
        Tx_ImagenMIME = item2.Tx_ImagenMIME;
        Tx_NombreArchivo = item2.Tx_NombreArchivo;
    }
    SaveFileDialog saveFileDialog = new SaveFileDialog();
    saveFileDialog.FileName = Tx_NombreArchivo;
    saveFileDialog.Filter = "All files (*.*)|*.*";
    if (saveFileDialog.ShowDialog() == DialogResult.OK)
    {
        File.WriteAllBytes(saveFileDialog.FileName, Bin_Imagen);
    }
}
```
|
File download method in Visual Studio 2017 |
|c#|.net|visual-studio-2017| |
null |
I'm developing a [macOS desktop application](https://github.com/milanvarady/Applite) that interfaces with [Homebrew](https://brew.sh/). It does this by calling actual shell commands. Sometimes brew needs the user's password to complete some tasks and I want to provide a graphical way for the user to enter the password, which can be done with the `SUDO_ASKPASS` environment variable. **I am looking for the most secure way to implement a GUI sudo dialog**. Here are the options I have considered so far:
### 1. AppleScript "do shell script with administrator privileges"
```
osascript -e 'do shell script "{script_here}" with administrator privileges'
```
This method invokes the system prompt and runs the script with admin privileges, which is good, but unfortunately, brew doesn't allow you to run it directly with sudo. So this is not an option.
### 2. Simple AppleScript
```
osascript -e 'text returned of (display dialog "Enter password:" default answer "" with hidden answer)' 2> /dev/null
```
This method uses some simple AppleScript to ask for the password and echoes it out. I don't think this option would be secure at all.
### 3. Simple SwiftUI app
I was thinking of making a simple SwiftUI app; this does essentially the same as the 2nd option, but I'm not sure if it's more secure. Some sample code:
```swift
struct ContentView: View {
@State private var password = ""
var body: some View {
VStack {
Text("Enter your password:")
SecureField("Password", text: $password)
Button("OK") {
print(password)
exit(0)
}
}
.padding()
}
}
```
### 4. Pinentry (GPG Tools)
My application currently uses this option, but it's a bit convoluted. [PINEntry](https://github.com/GPGTools/pinentry) is a passphrase entry dialog from GPG Tools, which should be theoretically secure as far as I understand. This seems to be the best option at the moment, but I would be happy if someone could suggest something better. |
> I expect to first have `hi` in my console then the error and at the end `why`
That's what you'd get **if** you were handling the rejection of the promise, like this:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
const test = new Promise((resolve, reject) => {
console.log("hi");
throw new Error("error");
});
test
.catch((error) => console.error(error))
.finally(() => {
console.log("why")
});
<!-- end snippet -->
...but your code isn't handling rejection, so the rejection isn't reported until the environment's unhandled rejection code kicks in, which isn't until after all the code explicitly handling the promise has been executed.
As you've said in a comment, `new Promise` and the execution of the function you pass it are all synchronous, but the error thrown in that function isn't unhandled, it's handled by the `Promise` constructor and converted to a promise rejection (which, *later*, is unhandled).
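That synchronous-executor behavior can be seen in isolation with a minimal sketch (not your exact code):

```javascript
const order = [];

// the executor runs synchronously, during the new Promise(...) call itself
const p = new Promise(() => {
  order.push("executor ran");
  throw new Error("error"); // converted to a rejection, not an unhandled throw
});

// this line runs before any rejection handling happens
order.push("after new Promise");

p.catch((e) => order.push("caught: " + e.message))
 .finally(() => console.log(order.join(" -> ")));
```

Logging `order` synchronously right after the `new Promise` call shows only the first two entries; the `catch`/`finally` callbacks run later, as microtasks.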
So the sequence is:
1. Create a promise
2. Output `hi`
3. Reject the promise with the error
4. Attach a `finally` handler to it
5. Run any appropriate handlers attached to the promise (since it's settled), in this case your `finally` handler
6. Determine that the rejection is unhandled and report it |
{"Voters":[{"Id":10248678,"DisplayName":"oguz ismail"},{"Id":1765658,"DisplayName":"F. Hauri - Give Up GitHub"},{"Id":874188,"DisplayName":"tripleee"}]} |
Answering my own question: if you refine the postinst script like this, then everything works (it seems):
```
if [ -n "$SUDO_USER" ]; then
user_home=$(getent passwd $SUDO_USER | cut -d: -f6)
else
user_home=$HOME
fi
sed -i "s:%homedir%:$user_home:g" /lib/systemd/system/mydemon.service
``` |
I am using an asp.net core web api with entity framework core (pomelo). I have a MariaDB database. I use Swagger UI to explore my api, as per the template. When I try to use it to delete a row, I get the following error:
```
Microsoft.EntityFrameworkCore.DbUpdateException: An error occurred while saving the entity changes. See the inner exception for details.
---> MySqlConnector.MySqlException (0x80004005): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'RETURNING 1' at line 3
at MySqlConnector.Core.ServerSession.ReceiveReplyAsync(IOBehavior ioBehavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/Core/ServerSession.cs:line 894
at MySqlConnector.Core.ResultSet.ReadResultSetHeaderAsync(IOBehavior ioBehavior) in /_/src/MySqlConnector/Core/ResultSet.cs:line 37
at MySqlConnector.MySqlDataReader.ActivateResultSet(CancellationToken cancellationToken) in /_/src/MySqlConnector/MySqlDataReader.cs:line 130
at MySqlConnector.MySqlDataReader.InitAsync(CommandListPosition commandListPosition, ICommandPayloadCreator payloadCreator, IDictionary`2 cachedProcedures, IMySqlCommand command, CommandBehavior behavior, Activity activity, IOBehavior ioBehavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/MySqlDataReader.cs:line 483
at MySqlConnector.Core.CommandExecutor.ExecuteReaderAsync(CommandListPosition commandListPosition, ICommandPayloadCreator payloadCreator, CommandBehavior behavior, Activity activity, IOBehavior ioBehavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/Core/CommandExecutor.cs:line 56
at MySqlConnector.MySqlCommand.ExecuteReaderAsync(CommandBehavior behavior, IOBehavior ioBehavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/MySqlCommand.cs:line 357
at MySqlConnector.MySqlCommand.ExecuteDbDataReaderAsync(CommandBehavior behavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/MySqlCommand.cs:line 350
at Microsoft.EntityFrameworkCore.Storage.RelationalCommand.ExecuteReaderAsync(RelationalCommandParameterObject parameterObject, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.Storage.RelationalCommand.ExecuteReaderAsync(RelationalCommandParameterObject parameterObject, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.Update.ReaderModificationCommandBatch.ExecuteAsync(IRelationalConnection connection, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
```
The deletion should be handled here in my Controller and Repository, like this:
```
[HttpDelete("alertId")]
public async Task<IActionResult> DeleteAlert(int alertId)
{
var alert = await _dataRepository.GetAlertAsync(alertId);
if(alert is null)
{
return NotFound("Alert not found");
}
await _dataRepository.DeleteAlertAsync(alert);
return NoContent();
}
```
and this
```
public class AlertRepository (IrtsContext context) : IDataRepositoryAlerts
{
readonly IrtsContext _alertContext = context;
public async Task DeleteAlertAsync(Alert entity)
{
if (entity != null)
{
_alertContext.Remove(entity);
await _alertContext.SaveChangesAsync();
}
else
{
throw new NotImplementedException();
}
}
```
I do not understand this. I believe it is my dbContext that handles the "saving the entity changes". How can I have an sql syntax error? I cannot find "Returning 1" anywhere in my code.
I have tried deleting the row manually in my database. That works.
All other operations (GET, POST and PUT) work just fine.
I have tried running this with breakpoints to see where the error occurs, but everything seems to execute without issue.
I am grateful for any hints. I am obviously very new to this ;) |
EntityFrameworkCore.DbUpdateException: Unable to delete row with Swagger, SQL Syntax error |
|asp.net-web-api|entity-framework-core|swagger-ui|dbcontext| |
null |
I have understood and solved the notebook available on Coursera for the Deep Learning Specialization (Sequence Models course) by Andrew Ng.
In the notebook, he provides a detailed walkthrough for building a wake word detection model. However, at the end, **he loads a pre-trained model trained on the word "activate."**
I attempted to use Google Colab and my own data. I collected 369 voices of people saying "Alexa," which are available on Kaggle. However, they have a sample rate of 16000 Hz (16 kHz).
I also used Google voice commands as negative sounds and collected some clips from YouTube that contain various environmental sounds.
I followed all the steps exactly as instructed, and **I'm using Google Colab for training**. But when I try to create the dataset, **the RAM quickly fills up**, and I cannot create 4000 samples as mentioned by Andrew in his notebook.
Here is my code for "create_training_examples":
```
nsamples = 4000
X_train = []
Y_train= []
X_test = []
Y_test = []
train_count = 0
test_count = 0
to_test = False
for i in range(0, nsamples):
if i % 500 == 0:
print(i)
rand = random.randint(0,61)
if i%5 == 0:
x, y = create_data_example(backgrounds_list[rand], alexa_list, negatives_list, Ty, name=str(i),to_test = True)
X_test.append(x.swapaxes(0,1))
Y_test.append(y.swapaxes(0,1))
test_count+=1
else:
x, y = create_data_example(backgrounds_list[rand], alexa_list, negatives_list, Ty, name=str(i),to_test = False)
X_train.append(x.swapaxes(0,1))
Y_train.append(y.swapaxes(0,1))
train_count+=1
print("Number of training samples:", train_count)
print("Number of testing samples:", test_count)
X_train = np.array(X_train)
Y_train = np.array(Y_train)
np.save('XY_train/X_train.npy', X_train)
np.save('XY_train/Y_train.npy', Y_train)
X_test = np.array(X_test)
Y_test = np.array(Y_test)
np.save('XY_test/X_test.npy', X_test)
np.save('XY_test/Y_test.npy', Y_test)
print('done saving')
print('X_train.shape: ',X_train.shape)
print('Y_train.shape: ',Y_train.shape)
```
Here is the model I use:
```
def model(input_shape):
X_input = Input(shape = input_shape)
X = Conv1D(196,15,strides=4)(X_input)
X = BatchNormalization()(X)
X = Activation('relu')(X)
X = Dropout(0.8)(X)
X = GRU(128,return_sequences = True)(X)
X = Dropout(0.8)(X)
X = BatchNormalization()(X)
X = GRU(128,return_sequences=True)(X)
X = Dropout(0.8)(X)
X = BatchNormalization()(X)
X = Dropout(0.8)(X)
X = TimeDistributed(Dense(1, activation = "sigmoid"))(X) # time distributed (sigmoid)
model = Model(inputs = X_input, outputs = X)
return model
```
**and for training:**
```
opt = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, decay=0.01)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=["accuracy"])
model.fit(X, Y, batch_size = 5, epochs=20)
```
I have used the notebook as it is, even following the same method for feature extraction. **The only thing I modified was the training data, using my own data.** However, what happened is that the RAM quickly filled up. And when I downsampled to 1600 Hz or 8000 Hz, **I didn't get good results at all.**
Am I doing something wrong?
Do you have any advice or suggestions please? |
Only really useful if you need to do something specific when the exception is thrown. There's a slight performance cost to catching and throwing exceptions, so you generally only do it if you really have to |
null |
Suppose I have colmap camera poses, is it possible and how to obtain a new view of input image `I` (planar object) from a different viewpoint/camera pose using those poses?
COLMAP camera poses have the following data:
```
extr = cam_extrinsics[key]
intr = cam_intrinsics[extr.camera_id]
height = intr.height
width = intr.width
uid = intr.id
R = np.array(qvec2rotmat(extr.qvec))
T = np.array(extr.tvec)
if intr.model=="SIMPLE_PINHOLE":
focal_length_x = intr.params[0]
FovY = focal2fov(focal_length_x, height)
FovX = focal2fov(focal_length_x, width)
fx = fy = intr.params[0]
cx = intr.params[1]
cy = intr.params[2]
elif intr.model=="PINHOLE":
focal_length_x = intr.params[0]
focal_length_y = intr.params[1]
FovY = focal2fov(focal_length_y, height)
FovX = focal2fov(focal_length_x, width)
fx = intr.params[0]
fy = intr.params[1]
cx = intr.params[2]
cy = intr.params[3]
```
```
class DummyCamera:
def __init__(self, uid, R, T, FoVx, FoVy, K, image_width, image_height):
self.uid = uid
self.R = R
self.T = T
self.FoVx = FoVx
self.FoVy = FoVy
self.K = K
self.image_width = image_width
self.image_height = image_height
self.projection_matrix = getProjectionMatrix(znear=0.01, zfar=100.0, fovX=FoVx, fovY=FoVy).transpose(0,1).cuda()
self.world_view_transform = torch.tensor(getWorld2View2(R, T, np.array([0,0,0]), 1.0)).transpose(0, 1).cuda()
self.full_proj_transform = (self.world_view_transform.unsqueeze(0).bmm(self.projection_matrix.unsqueeze(0))).squeeze(0)
self.camera_center = self.world_view_transform.inverse()[3, :3]
```
COLMAP camera poses are computed on a different flat object; the size of the images used in this computation is different from the size of image `I`.
Going from this:
[enter image description here](https://i.stack.imgur.com/SyQ5q.jpg)
to this after transformation using the COLMAP pose:
[enter image description here](https://i.stack.imgur.com/b6k8V.jpg) |
How to do Perspective Transformation of Image |
|graphics|computer-vision|transformation|perspectivecamera| |
null |
I have a txt file where I store data, and backslash escape characters are stored with it too. I read the content of that file and then substitute that value into a JSON field, so I want to double the backslashes to be able to parse the JSON later from a different programming language.
for eg. create a file named abc.txt with content `Hello\nmate`
This is my shell script used to parse the file
```
#!/bin/bash
expected_output=$( cat abc.txt)
expected_output=${expected_output//\\/\\\\}
JSON_FMT='{"output":"%s"}'
printf "%s" "$expected_output" >> aaa.txt
```
If I read that file from a different programming language, I don't see the backslash being replaced with a double backslash, BUT if I replace H with a, for example, it gets replaced. Why am I not able to replace the \n character?
I have tried to replace the backslash with a double backslash, but it is not working; however, if I replace other characters, it works. I am expecting the backslash to be replaced with a double backslash. |
macOS - Most secure way of a GUI SUDO_ASKPASS |
|macos|shell|sudo| |
You can do it by creating a function that changes the `fontSize` and saves it to `localStorage` every time a person clicks on one of the buttons.
After that you can use `window.onload` to always check, when the page loads, if there is any `fontSize` in `localStorage`, and set it if there is.
Example:
**HTML**
<section class="sec">
<div class="content">
<div class="buttons">
<span class="btn" onclick="changeFontSize('1em')">A</span>
<span class="btn" onclick="changeFontSize('1.25em')">A</span>
<span class="btn" onclick="changeFontSize('1.75em')">A</span>
</div>
<div class="text" id="text">
<h3>i'm an h3</h3>
<p>i'm a paragraph</p>
</div>
</div>
</section>
**JavaScript**
const buttons = document.querySelector('.buttons');
const btn = buttons.querySelectorAll('.btn');
window.onload = function() {
const fontSize = localStorage.getItem('fontSize');
if (fontSize) {
document.getElementById('text').style.fontSize = fontSize;
}
}
function changeFontSize(size) {
localStorage.setItem('fontSize', size);
document.getElementById('text').style.fontSize = size;
}
for(let i = 0; i < btn.length; i++){
btn[i].addEventListener('click', function(){
let current = document.getElementsByClassName('clicked');
current[0].className = current[0].className.replace(" clicked", "");
this.className += " clicked";
})
} |
Sdk 34 WRITE_EXTERNAL_STORAGE not working |
|java|android|bluetooth-lowenergy|android-bluetooth| |
null |
How to receive Bluetooth value in Android app |
```
# Import the AWS PowerShell module
Import-Module AWSPowerShell -Force
# Define the S3 bucket and object key
$bucketName = "bucket-name"
$objectKey = "new ad account/fd new ad account V4.ps1"
# Define the local file path where you want to save the downloaded code
$localFilePath = "D:\Script Test\fd new ad account V4.ps1" # Replace with your desired local file path
# Download the code from S3
try {
Read-S3Object -BucketName $bucketName -Key $objectKey -File $localFilePath -ErrorAction Stop
Write-Host "File downloaded successfully from S3."
}
catch {
Write-Error "Failed to download file from S3: $_"
exit 1
}
# Execute the downloaded code
try {
& $localFilePath
Write-Host "Script executed successfully."
}
catch {
Write-Error "Failed to execute script: $_"
exit 1
}
exit 0
```
Make sure that the IAM role attached to your WorkSpace has the necessary permissions to access the S3 bucket. Your policy looks good but replace "arn:aws:s3:::your-bucket-name/*" with the actual ARN and make sure the role attached to your WorkSpace has the rights.
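For reference, a minimal role policy granting read access might look like this (the bucket name is a placeholder you would replace with your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::bucket-name",
        "arn:aws:s3:::bucket-name/*"
      ]
    }
  ]
}
```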
Also, replace "arn:aws:s3:::your-bucket-name/*" in your bucket policy with the actual ARN. |
The strings are of the form:
```
Item 5. Some text: 48
Item 5E. Some text,
```
The result of the search should produce
4 groups as follows.
```
Group(1) = "#"
Group(2) = "5" or "5E"
Group(3) = "Some text"
Group(4) = "48" or ","
```
I have tried:
```
(a) r"(.*)Group (.*)\.(.+)(?::(.+))|(,)"
(b) r"(.*)Group (.*)\.(.+)(?:(?::(.+))|(,))"
(c) r"(.*)Group (.*)\.(.+)(:(.+))|(,)"
```
I have tried a variety of ways to solve this, but none work as required.
What should the regular expression be? |
{"OriginalQuestionIds":[2299469],"Voters":[{"Id":157882,"DisplayName":"BalusC","BindingReason":{"GoldTagBadge":"jakarta-ee"}}]} |
|c|winapi|operating-system|kill-process| |
|jdbc|servlets| |
I tried to upload my Django project to Render; it contains plenty of HTML files as well as CSS styles. The latter are in a CSS file called 'styles.css' within the static folder. When I run the project, the images are shown, but it's like the styles have not been applied to anything, not even the Django admin panel. The console mentioned something about the stylesheet not being loaded because of its MIME type. The same happens with my main.js file, which is also within the static folder. I have heard about using gunicorn, which I have installed, but I'm not sure if it's actually responsible for causing this, since the images load fine (although their sizes are very different). I want to know if there's a solution to this.
For reference, this is how I added the urls of my media and static in the settings.py:
```
STATIC_URL = 'static/'
STATICFILES_DIRS = [BASE_DIR / 'static']
MEDIA_URL = '/media/'
MEDIA_ROOT = BASE_DIR / 'media'
```
This is my current urls.py for my main project:
```
from django.contrib import admin
from django.urls import path, include
from django.conf import settings
from django.conf.urls.static import static
urlpatterns = [
path('admin/', admin.site.urls),
path('', include('store.urls')),
path('chatbot/', include('chatbot.urls')),
]
urlpatterns += static(settings.STATIC_URL,document_root=settings.STATIC_ROOT)
urlpatterns += static(settings.MEDIA_URL,document_root=settings.MEDIA_ROOT)
```
The base.html I am using, which contains links and scripts tags in its header, is this:
```
{% load static %}
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" />
<link rel="stylesheet" href="{% static 'style.css' %}">
<!-- jquery -->
<script src="https://code.jquery.com/jquery-3.5.1.min.js"></script>
<!-- Bootstrap CSS -->
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/css/bootstrap.min.css" rel="stylesheet"
integrity="sha384-QWTKZyjpPEjISv5WaRU9OFeRpok6YctnYmDr5pNlyT2bRjXh0JMhjY6hW+ALEwIH" crossorigin="anonymous">
<!-- custom css & js -->
<script src="{% static 'main.js' %}" defer></script>
<link rel="shortcut icon" href="{% static 'favicon.ico' %}">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.4/css/all.min.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<title>U-Community Mart</title>
</head>
<body class="body-main">
{% include 'navbar.html' %}
{% if messages %}
{% for message in messages %}
<div class="alert alert-info alert-dismissible fade show" id="message" role="alert">
{{message}}
</div>
{% endfor %}
{% endif %}
<div class="container" style="margin-top: 50px; margin-bottom: 0; margin-left: auto; margin-right: auto;">
{% block content %}
{% endblock content %}
</div>
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/js/bootstrap.bundle.min.js"
integrity="sha384-YvpcrYf0tY3lHB60NNkmXc5s9fDVZLESaAA55NDzOxhy9GkcIdslK1eN7N6jIeHz"
crossorigin="anonymous"></script>
</body>
</html>
``` |
How to globally replace backslash with double backslash in a text file using bash? |
|string|bash|file| |
null |
I'm running a group of tasks in Celery like so:
```python
def run_many(n):
job_id = str(uuid4())
g = group(dummy.s(job_id) for _ in range(n))
    job = g.apply_async(task_id=job_id)
job.save() # so children are accessible later in the GroupResult
@app.task(bind=True, ignore_result=True)
def dummy(self, job_id):
    outputdir = (Path(job_id) / self.request.id).resolve()
print(f"dummy: {self.request.id}")
# do some work and write results into outputdir
```
I monitor the state of the whole deal like this:
```python
def check_job_status(job_id):
job = GroupResult.restore(id=job_id, app=app)
n_total = len(job.children)
n_finished = job.completed_count()
status = "SUCCESS" if n_finished == n_total else "RUNNING"
return status, n_finished/n_total
```
I'd like to be able to get information about an individual task, including the `id` of the job that created it, but I can't figure out how to store that information:
```python
def check_task_status(task_id):
result = AsyncResult(id=task_id, app=app)
```
in that function, `result.parent` is `None` even if I set `self.parent = job_id` in `dummy()`.
Is there some way to store custom metadata for a task that get ends up getting written to the backend and loaded by `AsyncResult`? I know that I can use `update_state(state, meta)`, but creating that `meta` dictionary with a `job_id` key each time I want to update the task state seems cumbersome, particularly if I want to update the state from called functions because then I have to pass the `job_id` around. If I could access a task's current `meta` dictionary from within the task, I could just `update()` the dictionary, but I don't know how to retrieve that dictionary from within the task itself.
Any ideas for me? |
Celery - Specify task parent manually? |
|python|celery| |