|gigya|sap|sap-cloud-identity-services| |
When I pass my ggplot2 object to ggplotly() it throws a strange error:
Error in train(..., self = self) : unused argument (list("b", "a", "c"))
It seems to work if I omit the color/fill arguments from the layer - but that's not what I want. Does anybody else get this error, or is it just me? In the example below, the error occurs when I pass ggp3 to ggplotly().
```r
df <- data.frame(a = c(1:10), b = c(2010:2019), c = c(rep("a", 5), rep("b", 5)))
# as expected
ggp1 <- ggplot(data = df) +
geom_point(aes(y = a,
x = b,
# color = c # without color
))
ggp1
ggp1 %>% plotly::ggplotly()
# as expected
ggp2 <- ggplot(data = df) +
geom_point(aes(y = a,
x = b),
color = "red") # with non aes color
ggp2
ggp2 %>% plotly::ggplotly()
# not as expected
ggp3 <- ggplot(data = df) +
geom_point(aes(y = a,
x = b,
color = c)) # with color
ggp3
ggp3 %>% plotly::ggplotly()
```
Any help much appreciated!
|
Adding colour or fill to ggplot2 layer results in error in ggplotly |
|r|ggplot2|ggplotly| |
Go to "Installed apps" in Windows and remove one of the JDK versions |
I had the same error just now while importing the scanpy library into my Google Colab environment.
I ran
`!pip install scanpy`. The package was installed successfully and I was able to import the scanpy library.
I hope this works for whoever encounters the same problem. |
I found a solution for myself and wanted to share it here even though this thread is so old. Krishnakanth was close; `Microsoft-Windows-DeviceSetupManager/Admin` is the correct event log. In my experience with Windows 10, when I switch to a different monitor using an HDMI switch, the log registers Event ID 100 to show that the DeviceSetupManager service has started working. It then iterates through connected devices and issues Event ID 112 for each device. This is not useful, however, because I don't think Event ID 112 really distinguishes whether a device is actually on and connected. I think it only shows that the device is configured in Device Manager. After scanning all of the devices, the log registers Event 101 to show that it is finished.
Accordingly, what you can do is use a Scheduled Task to watch for `Microsoft-Windows-DeviceSetupManager/Admin::100 (Event ID)` and then run a simple PowerShell script to check the connected monitors:
```powershell
# Get all PnP devices that are monitors
$monitors = Get-CimInstance -ClassName Win32_PnPEntity | Where-Object {$_.Service -eq 'monitor'}
# Display the PNPDeviceID for each monitor
foreach ($monitor in $monitors) {
    Write-Output "PNPDeviceID: $($monitor.PNPDeviceID)"
}
```
This outputs a unique ID for all of the connected monitors, which can be used to carry out whatever operation you have in mind.
One catch: because the DeviceSetupManager takes a while to complete, this method cannot perfectly handle situations where a monitor is rapidly unplugged and plugged in again. Event ID 100 only triggers once and cannot trigger again until Event ID 101 is given about 1 minute later (depending on how many devices it must check). One potential workaround is to watch for Event 101 instead, which means that your script will not trigger right away but will wait until DeviceSetupManager is done.
|
How to fix FileUploadBase$SizeLimitExceededException: the request was rejected because its size (337867) exceeds the configured maximum (200) |
Suppose we have a system with radix 2^32 (x = a0 \* 2^0 + a1 \* 2^32 + a2 \* 2^64 +...) and we store the coefficients in a vector (a0 in position [0] , a1 in position [1] ...).
This function calculates the square of x (for simplicity, the non-zero coefficients of x occupy only the first half of the vector).
#include <iostream>
#include <vector>
#include <cstdlib>
#include <stdint.h>
#include <vector>
#include <omp.h>
const int64_t esp_base_N = 32;
const uint64_t base_N = 1ull << esp_base_N;
void sqr_base_N(std::vector<uint64_t> &X, std::vector<uint64_t> &Sqr , const int64_t len_base_N)
{
std::vector<uint64_t> Sqr_t(len_base_N, 0ull);
uint64_t base_N_m1 = base_N - 1ull;
for (int64_t i = 0; i < (len_base_N / 2); i++)
{
uint64_t x_t = X[i];
for (int64_t j = i + 1; j < (len_base_N / 2); j++)
{
uint64_t xy_t = x_t * X[j];
Sqr_t[i + j] += 2ull * (xy_t & base_N_m1);
Sqr_t[i + j + 1] += 2ull * (xy_t >> esp_base_N);
}
x_t *= x_t;
Sqr_t[2 * i] += x_t & base_N_m1;
Sqr_t[2 * i + 1] += x_t >> esp_base_N;
}
uint64_t mul_t;
mul_t = Sqr_t[0] >> esp_base_N;
Sqr[0] = Sqr_t[0] & base_N_m1;
for (int64_t j = 1; j < len_base_N ; j++)
{
Sqr[j] = (Sqr_t[j] + mul_t) & base_N_m1;
mul_t = (Sqr_t[j] + mul_t) >> esp_base_N;
}
}
I don't know if it is possible to speed up the function using multithreading.
I tried using OpenMP, but the only version that gave no errors is the one below, where the pragma is applied only to the split-off squaring loop; moreover, this function is slower than the serial one.
#pragma omp parallel for
for (int64_t i = 0; i < (len_base_N / 2); i++)
{
uint64_t x_t = X[i];
x_t *= x_t;
Sqr_t[2 * i] += x_t & base_N_m1;
Sqr_t[2 * i + 1] += x_t >> esp_base_N;
}
for (int64_t i = 0; i < (len_base_N / 2); i++)
{
uint64_t x_t = X[i];
for (int64_t j = i + 1; j < (len_base_N / 2); j++)
{
uint64_t xy_t = x_t * X[j];
Sqr_t[i + j] += 2ull * (xy_t & base_N_m1);
Sqr_t[i + j + 1] += 2ull * (xy_t >> esp_base_N);
}
}
Is there an easy way to use multithreading for the function (OpenMP or other) to speed it up?
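One direction that might avoid the race in the cross-product loop is guarding each accumulation into `Sqr_t` with `#pragma omp atomic`. Below is a hypothetical sketch (`sqr_base_N_omp` is my name for it, not from the code above); it is not benchmarked — atomic contention may well keep it slower than the serial version, and without OpenMP enabled the pragmas are simply ignored and the code runs serially:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

const int64_t esp_base_N = 32;
const uint64_t base_N = 1ull << esp_base_N;

// Sketch: same algorithm as sqr_base_N, with the cross-product loop
// parallelized and the racy accumulations guarded by "omp atomic".
// The final carry-propagation pass stays serial, because each step
// depends on the carry produced by the previous one.
void sqr_base_N_omp(std::vector<uint64_t> &X, std::vector<uint64_t> &Sqr,
                    const int64_t len_base_N)
{
    std::vector<uint64_t> Sqr_t(len_base_N, 0ull);
    const uint64_t base_N_m1 = base_N - 1ull;
    #pragma omp parallel for schedule(dynamic)
    for (int64_t i = 0; i < (len_base_N / 2); i++)
    {
        uint64_t x_t = X[i];
        for (int64_t j = i + 1; j < (len_base_N / 2); j++)
        {
            uint64_t xy_t = x_t * X[j];
            #pragma omp atomic
            Sqr_t[i + j] += 2ull * (xy_t & base_N_m1);
            #pragma omp atomic
            Sqr_t[i + j + 1] += 2ull * (xy_t >> esp_base_N);
        }
        x_t *= x_t;
        // The diagonal writes touch positions 2*i and 2*i+1, which can
        // collide with cross-product writes from other iterations, so
        // they need guarding too.
        #pragma omp atomic
        Sqr_t[2 * i] += x_t & base_N_m1;
        #pragma omp atomic
        Sqr_t[2 * i + 1] += x_t >> esp_base_N;
    }
    // Serial carry propagation, identical to the original function.
    uint64_t mul_t = Sqr_t[0] >> esp_base_N;
    Sqr[0] = Sqr_t[0] & base_N_m1;
    for (int64_t j = 1; j < len_base_N; j++)
    {
        Sqr[j] = (Sqr_t[j] + mul_t) & base_N_m1;
        mul_t = (Sqr_t[j] + mul_t) >> esp_base_N;
    }
}
```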
Edit (to test the function):
int main()
{
std::vector<uint64_t> X_t;
X_t.push_back(1286608618ull);
X_t.push_back(2);
int64_t len_base_N = 2 * X_t.size() + 2;
std::vector<uint64_t> Sqr_X(len_base_N, 0ull);
sqr_base_N(X_t, Sqr_X , len_base_N);
for(int64_t i = 0; i < len_base_N - 1; i++)
std::cout << Sqr_X[i] << ", ";
std::cout << Sqr_X[len_base_N - 1] << std::endl;
return 0;
} |
Launch Single Kernel on problem space vs Launch same kernel, multiple times on smaller problem spaces |
|gpu|opencl| |
I am trying to deploy my ASP.NET Core Web API project that I created using Clean Architecture and my Azure Student portal account. The deployment is successful however when I navigate to the API's URL endpoint it doesn't show the JSON object that I was expecting. I also checked the application event logs in Azure but found nothing.
I searched online for other approaches and found options such as using Bicep with Azure Pipelines or Bicep with GitHub Actions, but I'm not sure I'm on the right track. I'm hoping someone can point me in the right direction.
Expected:
[![enter image description here][1]][1]
Actual:
[![enter image description here][2]][2]
The job records are stored in Azure's SQL Server resource, and when I run dotnet run, it lists all the records in the database. When I ran the command below, it added the Release folders for each class library. I'm using the Azure App Service extension in VS Code to deploy the Web API.
Command:
`dotnet publish -c Release`
[![enter image description here][3]][3]
Application Event Log:
[![enter image description here][4]][4]
UPDATED: Added Program.cs
using Infrastructure;
using Application;
using Infrastructure.Persistence;
using Microsoft.EntityFrameworkCore;
using Microsoft.AspNetCore.Identity;
using Domain.Entities;
using Microsoft.Net.Http.Headers;
var builder = WebApplication.CreateBuilder(args);
var worklogCorsPolicy = "WorklogAllowedCredentialsPolicy";
builder.Services.AddCors(options =>
{
options.AddPolicy(worklogCorsPolicy,
policy =>
{
policy.WithOrigins("http://localhost:5173")
.WithMethods("GET", "POST")
.WithHeaders(HeaderNames.ContentType, "x-custom-header",
HeaderNames.Authorization, "true")
.AllowCredentials();
});
});
builder.Services.AddControllers();
builder.Services.AddInfrastructure(builder.Configuration);
builder.Services.AddApplication();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
var app = builder.Build();
if (app.Environment.IsDevelopment())
{
app.UseSwagger();
app.UseSwaggerUI();
}
app.UseRouting();
app.UseHsts();
app.UseHttpsRedirection();
app.UseCors(worklogCorsPolicy);
app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();
using var scope = app.Services.CreateScope();
var services = scope.ServiceProvider;
try
{
var context = services.GetRequiredService<WorklogDbContext>();
var userManager = services
.GetRequiredService<UserManager<AppUser>>();
await context.Database.MigrateAsync();
await Seed.SeedData(context, userManager,
builder.Configuration);
}
catch (Exception ex)
{
var logger = services
.GetRequiredService<ILogger<Program>>();
logger.LogError(ex, "An error occurred during migration.");
}
app.Run();
Other Things I tried:
1. I tried creating a separate .NET Core Web API called Weather that doesn't use the clean architecture approach; after publishing it using the Release build configuration, I deployed it to Azure.
I compared this with my project that uses clean architecture and noticed that the worklog has a Release build configuration for each layer (Domain, Application, Infrastructure and API), as shown in the screenshot below. I suspect this is where the problem lies, but since no error is thrown I couldn't verify it.
[![enter image description here][5]][5]
To deploy the API I'm using Azure App Service Extension as shown below:
[![enter image description here][6]][6]
[1]: https://i.stack.imgur.com/kiDOi.png
[2]: https://i.stack.imgur.com/ZNcuQ.png
[3]: https://i.stack.imgur.com/Bqcm5.png
[4]: https://i.stack.imgur.com/cfWdw.png
[5]: https://i.stack.imgur.com/yYzaM.png
[6]: https://i.stack.imgur.com/CodSA.png
2. Publish profile - I found that in Visual Studio it is possible to deploy the API through importing the publish profile downloaded from Azure; however, since I'm using VS Code, I won't be able to do this. |
We have trained an Ultralytics YOLOv8 model on 1024*1024, 3-channel images, converted it to ONNX, and ran that ONNX model in Visual Studio 2022 (C#, .NET Framework 4.8) with onnxruntime-gpu v1.16.3; it takes around 90 ms on an A5000 GPU.
We also tried different onnxruntime session options, such as Graph Optimization Level, inter_op_num_threads, intra_op_num_threads, Execution mode (ORT_PARALLEL and ORT_SEQUENTIAL), and Optimization Options (enable_mem_pattern), to optimize the model's inference and reduce the inference time.
But still there is no difference in the inference time.
So can anyone suggest if we are missing something or how we can reduce the time further even a bit?
Also, we are using the same version of CUDA (11.2) on both GPUs.
We had an inference time of 35 ms with the RTX 4090, and with the RTX A5000 we are getting 90 ms. We want our inference time to be 35-40 ms when we deploy on the RTX A5000. |
Inference speed problem even if using a high-end Hardware |
|c#|computer-vision|onnx|yolov8|onnxruntime| |
I had the same problem, with my actuators on another port.
One solution can be to use another SecurityFilterChain for your actuators :
1st filterChain (The one you already have) :
@Order(Ordered.HIGHEST_PRECEDENCE)
@Bean
public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
return http
.securityMatcher(new NegatedRequestMatcher(EndpointRequest.toAnyEndpoint()))
...
.authorizeHttpRequests(...).build();
}
The securityMatcher + NegatedRequestMatcher will make this filterChain ignore whatever you pass as an argument: here it's EndpointRequest.toAnyEndpoint(), which corresponds to any actuator you may have. It could also be EndpointRequest.to("actuatorName"), or, for a path, new AntPathRequestMatcher("/path/**").
Then the second SecurityFilterChain which permitAll:
@Bean
public SecurityFilterChain actuatorFilterChain(HttpSecurity http) throws Exception {
http.authorizeHttpRequests(requests -> requests.anyRequest().permitAll());
return http.build();
}
source : https://stackoverflow.com/questions/77710370/actuator-endpoints-returning-500-error-after-upgrade-to-spring-boot-3
EDIT :
If the objective is to authorise everything for Actuators or paths with another port, you can do it with only one filterChain :
From my answer, forget the actuatorFilterChain(), remove the .securityMatcher(...) from filterChain(), and add a WebSecurityCustomizer bean like this:
@Bean
public WebSecurityCustomizer webSecurityCustomizer() {
return web -> web.ignoring()
.requestMatchers(EndpointRequest.toAnyEndpoint());
//.requestMatchers(EndpointRequest.to("actuatorName"));
//.requestMatchers("/path/**");
}
It will ignore security for whatever you specify in the requestMatchers, without the 500 exception caused by RequestMatcherDelegatingAuthorizationManager. |
The mistake is that the result set from that query has _two columns_.
Otherwise, this dataset is a poor choice for demonstrating the JOIN types, because there are no values unique to A or B, so the OUTER join types will always closely resemble the INNER joins. |
I fixed it with
#txa_movieEditComment {
-fx-background: #191919;
}
I don't know why you can't use
#txa_movieEditComment {
-fx-background-color: #191919;
}
but it fixed it for me.
[This is how it's supposed to look][1]
[1]: https://i.stack.imgur.com/fv4nS.png |
I have Ollama running in a Docker container that I spun up from the official image. I can successfully pull models in the container via interactive shell by typing commands at the command-line such as:
`ollama pull nomic-embed-text`
This command pulls in the model: nomic-embed-text.
Now I try to do the same via dockerfile:
```
FROM ollama/ollama
RUN ollama pull nomic-embed-text
# Expose port 11434
EXPOSE 11434
# Set the entrypoint
ENTRYPOINT ["ollama", "serve"]
```
and get
```
Error: could not connect to ollama app, is it running?
------
dockerfile:9
--------------------
7 | COPY . .
8 |
9 | >>> RUN ollama pull nomic-embed-text
10 |
11 | # Expose port 11434
--------------------
ERROR: failed to solve: process "/bin/sh -c ollama pull nomic-embed-text" did not complete successfully: exit code: 1
```
As far as I know, I am doing the same thing but it works in one place and not another. Any help?
Based on the error message, I also tried:
```
FROM ollama/ollama
# Expose port 11434
EXPOSE 11434
RUN ollama serve &
RUN ollama pull nomic-embed-text
```
This ought to launch the ollama service and then pull the model. However, it gave the same error message.
|
I'm working on integrating a Bluetooth-enabled Littmann stethoscope with an Android app. I'm able to pair with the stethoscope and enable characteristic notifications. Once notifications are enabled, I get a continuous stream of data from the stethoscope at a rate of hundreds of characteristic-changed notifications per second. I guess this is because the stethoscope is streaming the audio from its sensor. I need help understanding what audio format I'm receiving and how I can play this audio in the app. Below is a sample of the bytes received on each characteristic-changed notification.
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/9uz84.png
Below is a sample of the code I wrote in getting the characteristics.
public static final UUID stethoscopeServiceUuid = UUID.fromString("##respectiveUUID##");
public static final UUID stethoscopeCharacteristicUuid = UUID.fromString("##respectiveUUID##");
public static final UUID generalDescriptorUuid = UUID.fromString("00002902-0000-1000-8000-00805f9b34fb");
ByteArrayOutputStream stethescopeStream = new ByteArrayOutputStream();
private void handleConnectionStateChange(BluetoothGatt gatt, int newState) {
if (newState == BluetoothProfile.STATE_CONNECTED) {
gatt.discoverServices();
}
}
private void handleServicesDiscovered(BluetoothGatt gatt, int status) {
if (status == BluetoothGatt.GATT_SUCCESS) {
BluetoothGattCharacteristic stethoscopeCharacteristic = gatt.getService(stethoscopeServiceUuid).getCharacteristic(stethoscopeCharacteristicUuid);
gatt.setCharacteristicNotification(stethoscopeCharacteristic, true);
BluetoothGattDescriptor descriptor = stethoscopeCharacteristic.getDescriptor(generalDescriptorUuid);
descriptor.setValue(BluetoothGattDescriptor
.ENABLE_NOTIFICATION_VALUE);
gatt.writeDescriptor(descriptor);
}
}
private void handleCharacteristicChanged(BluetoothGatt gatt, BluetoothGattCharacteristic characteristic, byte[] value){
handleStethoscopeValueReceived(gatt.getDevice().getName(),characteristic, value);
}
private void handleStethoscopeValueReceived(String deviceName, BluetoothGattCharacteristic characteristic, byte[] value) {
//value
//00 42 11 89 99 BB CA AA AA 80 0A 99 9E BB CD CA A9
//01 99 90 08 A9 8A CB A9 10 11 02 20 11 03 53 35 42
//02 32 42 21 43 53 23 34 33 34 33 53 35 43 34 34 33
//03 34 32 34 42 23 34 32 22 32 21 08 91 90 90 90 99
//04 9B BB BD DA BB EB BB BC BB AB CB BC BD BA BB BB
//05 CB BC BA CB AB CB AA 9B B9 BB 9A BB BB AB 99 11
//06 BA 01 BB 91 B0 91 90 BB BA 99 99 A9 99 BA BB A9
//....
stethescopeStream.write(Arrays.copyOfRange(value, 0, value.length));
} |
Terraform: how to create a reusable module to create aws security groups |
|amazon-web-services|terraform|terraform-provider-aws|aws-security-group|infrastructure-as-code| |
You can make use of the below sample PowerShell script to add *Microsoft Graph* and *Azure Service Management* API permissions of the **Delegated** type to an app registration:
```powershell
# Define the list of delegated permissions names
$delegatedPermissions = @(
"AuditLog.Read.All",
"Directory.Read.All",
"User.Read.All",
"offline_access",
"Group.Read.All",
"GroupMember.Read.All",
"GroupMember.ReadWrite.All"
)
# Get Microsoft Graph service principal
$msGraphSP = Get-MgServicePrincipal -Filter "displayName eq 'Microsoft Graph'" -Property Oauth2PermissionScopes | Select -ExpandProperty Oauth2PermissionScopes
$filteredPermissions = $msGraphSP | Where-Object { $delegatedPermissions -contains $_.Value }
# Define Azure Service Management API permission
$azureServicePermission = @{
resourceAppId = "797f4846-ba00-4fd7-ba43-dac1f8f63013"
resourceAccess = @(
@{
id = "41094075-9dad-400e-a0bd-54e686782033"
type = "Scope"
}
)
}
$appObjId = "your_app_reg_ObjectID"
$params = @{
requiredResourceAccess = @(
$azureServicePermission,
@{
resourceAppId = "00000003-0000-0000-c000-000000000000"
resourceAccess = $filteredPermissions | ForEach-Object {
@{
id = $_.Id
type = "Scope"
}
}
}
)
}
Update-MgApplication -ApplicationId $appObjId -BodyParameter $params
```
**Response:**

To confirm that, I checked the same in Azure AD application where **Delegated** permissions added successfully as below:

To add **admin consent** to the *Microsoft Graph* permissions, you can use the below sample script:
```powershell
$params = @{
clientId = "service_principal_ObjID"
consentType = "AllPrincipals"
resourceId = "54858dc8-ace7-47d4-82b2-e74d83062e7b"
scope = "AuditLog.Read.All Directory.Read.All User.Read.All offline_access Group.Read.All GroupMember.Read.All GroupMember.ReadWrite.All"
}
New-MgOauth2PermissionGrant -BodyParameter $params
```
**Response:**

To add **admin consent** to the *Azure Service Management* permissions, you can use the below sample script:
```powershell
$params = @{
clientId = "service_principal_ObjID"
consentType = "AllPrincipals"
resourceId = "65805703-c2cf-48a5-8835-e8d233b234e3"
scope = "user_impersonation"
}
New-MgOauth2PermissionGrant -BodyParameter $params
```
**Response:**

When I checked the same in Portal, **admin consent** granted successfully to all permissions as below:
 |
I need to send a REST request to a REST API service running on on-premises IIS.
The IIS uses NTLM authentication, so I tried to send a GET request using the requests and requests_ntlm libraries, but the REST API service does not work with IWA.
I tried sending a request with requests and requests_ntlm using a Windows username and password, but I got status code 401.
I would like to know how to authenticate with this REST API service using pass-through authentication. |
My data consists of two different lists, shown below:
**List 1**
[![List 1][1]][1]
[1]:https://i.stack.imgur.com/97EtI.png
**List2**
[![List 2][2]][2]
[2]:https://i.stack.imgur.com/THHie.png
In the attached pictures are two lists. I am trying to compare the Date column in List 1 with the Date column in List 2. If they match, then multiply the corresponding cells (NG Rate × Weight) and return the result in the Approved column.
**OBS** List 1 always has unique dates, while List 2 has several cells with the same date. However, I want to multiply the NG Rate for the corresponding date with each Weight in List 2 that has the same date, so the result will be as presented in the picture of List 2.
I will appreciate your help.
BR
I tried VLOOKUP but it didn't work.
|
I am working on a quadratic conic optimization problem, but I have discovered that it would be preferable if the quadratic constraint were linearly approximated. In other words, I need some way to make the quadratic beta variable linear. Is there a good way to do this? The only decision variable here is beta; everything else is given as input to the problem.
[enter image description here](https://i.stack.imgur.com/jyctL.png)
I have thought about maybe using a Taylor expansion, but this needs to be around a point A, so I am not quite sure how it would work. Would I need to divide the original quadratic constraint into piecewise constraints based on the value of beta?
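For what it's worth, the piecewise idea is the standard alternative to a one-point Taylor expansion: pick breakpoints over beta's range and replace beta² by the secant (chord) segments between them; for a convex function the chords never under-estimate, and in an LP they are typically modeled with SOS2 or incremental formulations. A small sketch of evaluating such a secant approximation (the `pwl_square` name and the equal-width-segment choice are mine; for segment width w the worst-case error is w²/4):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Piecewise-linear (secant) approximation of f(b) = b*b on [lo, hi]
// using k equal-width segments. On each segment the chord lies above
// the convex function, so this never under-estimates b*b.
double pwl_square(double b, double lo, double hi, int k)
{
    double w = (hi - lo) / k;                                // segment width
    int i = std::min(k - 1, (int)std::floor((b - lo) / w));  // segment index
    double x0 = lo + i * w;                                  // left breakpoint
    double slope = 2.0 * x0 + w;  // chord slope: (x1^2 - x0^2)/(x1 - x0) = x0 + x1
    return x0 * x0 + slope * (b - x0);
}
```

In the actual model the chords would be added as constraints (or SOS2 weights) rather than evaluated as a function, but the error bookkeeping is the same: more breakpoints near where beta is expected to land means a tighter approximation.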
Linearization of a quadratic constraint |
|linear-programming|quadratic|quadratic-programming| |
In PowerShell 5.1, you can get a list of online computers quickly like this (ResponseTime is the actual property name):
```
$list = Get-ADComputer -Filter * | % name
$uplist = test-connection -count 1 $list -AsJob | receive-job -wait -auto |
? responsetime | % address
$uplist | ForEach-Object {
#Do Something with $_
}
# invoke-command $uplist { 'whatever' }
``` |
I have a sequence of azimuth angles ranging from 0 to 180, then from -170 to -10, totaling 36 angles. In some cases, certain azimuth angles may be unavailable. I aim to choose **N** points from the available set and determine the maximum sum of distances between these points. For instance, if all points are valid, selecting N=4 would yield 0, 90, 180, and -90, ensuring maximum distance between them
How can I find a solution for **N** points in a given valid set of angles?
Maybe it can be solved with an auction algorithm.
I am trying to train an AI on a custom database and I ran into this error.
I tried removing and altering the numbers, activations, etc., but I either get a syntax error or the same ValueError mentioned in the title. The two code snippets are from different tutorials, so I understand if they aren't compatible, but for the most part the code works, except for the last part (starting at model = models.Sequential()).
```
# Imports needed
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator
ds_train = tf.keras.preprocessing.image_dataset_from_directory(
"dataset",
labels="inferred",
label_mode="int", # categorical, binary
color_mode="grayscale",
batch_size=2,
image_size=(28,28), # reshape if not in this size
shuffle=True,
seed=123,
validation_split=0.1,
subset="training",
)
ds_validation = tf.keras.preprocessing.image_dataset_from_directory(
"dataset",
labels="inferred",
label_mode="int",
color_mode="grayscale",
batch_size=2,
image_size=(28, 28), # reshape if not in this size
shuffle=True,
seed=123,
validation_split=0.1,
subset="validation",
)
def augment(x, y):
image = tf.image.random_brightness(x, max_delta=0.05)
return image, y
ds_train = ds_train.map(augment)
# Custom Loops
for epochs in range(10):
for x, y in ds_train:
# train here
pass
model = models.Sequential()
model.add(layers.Conv2D(32,(3,3), activation = 'relu', input_shape = (32,32,3)))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(64,(3,3), activation = 'relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(64,(3,3), activation = 'relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation = 'relu'))
model.add(layers.Dense(10, activation= 'softmax'))
model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(ds_train, epochs=10, verbose=2)
``` |
Did you try:
```
sudo apt-get update && sudo apt-get upgrade && sudo apt-get install build-essential
``` |
Tekton, npm ci, and "npm ERR! EMFILE: too many open files, open '/root/.npm/_cacache/" |
If I've understood your problem statement correctly, this sounds a lot like Russian peasant multiplication. Consider that a randomly generated `long` will have each bit independent and each bit set with probability 50%. Suppose (for the sake of recursion) we have your desired function `f(p)`; then
rand() gives the same distribution as f(0.5)
f(p) & f(q) gives the same distribution as f(p*q)
f(p) | f(q) gives the same distribution as f(p + q - p*q)
So suppose we're looking for `f(0.9)`. We know that
f(0.9) = f(0.5) | f(x) where 0.5 + x - 0.5*x = 0.9, i.e. 0.5x = 0.4
So now we're looking for `f(0.8)`. We know that
f(0.8) = f(0.5) | f(x) where 0.5x = 0.3
f(0.6) = f(0.5) | f(x) where 0.5x = 0.1
f(0.2) = f(0.5) & f(x) where 0.5x = 0.2
f(0.4) = f(0.5) & f(x) where 0.5x = 0.4
f(0.8) = f(0.5) | f(x) where 0.5x = 0.3
[...]
This recursion will never actually finish; but at any point we like, we can bottom out and start re-winding the stack.
f(0.9) = R | R | (R & R & (R | R | (R & R & ...)))
For example, this C code gives a distribution pretty darn close to p=0.9:
```
#define R (rand() % 256)
unsigned get09() {
    unsigned x = R; // 0.5
    x |= R;         // 0.75
    x |= R;         // 0.875
    x &= R;         // 0.4375
    x &= R;         // 0.21875
    x |= R;         // 0.609375
    x |= R;         // 0.8046875
    x |= R;         // 0.90234375
    return x;
}
```
Now, could you get _even closer_ by using productions other than directly with `R` — for example, could we use the fact that `f(0.9) = f(0.684) | f(0.684)`, and then look for ways to produce `f(0.684)`? Sure.
Now, maybe this is the Baader-Meinhof effect in action, but this actually (serendipitously!) feels just like a directed hypergraph "tersest path" problem akin to https://mathoverflow.net/questions/466176/what-is-the-proper-name-for-this-tersest-path-problem-in-infinite-craft — see https://quuxplusone.github.io/blog/2024/03/03/infinite-craft-theory/ and Knuth Volume 2 §4.6.3 "Evaluation of Powers" for some theory. We're looking for the tersest path from $V_0={0.5}$ to `p`, using an operation $E$ which is actually _two_ possible operations: `&` and `|`. So from {0.5} we can reach any of {0.5, 0.25, 0.75} in one step; we can reach any of {0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.1875, 0.8125} in two steps; and so on; and you just want to know how many steps it'll take before we've reached some number within a certain epsilon of `p`.
---
EDIT: It occurs to me that those targets are much better represented in base 2, not base 10. From {0.1} (base 2) we can reach any of {0.1, 0.01, 0.11} in one step; any of {0.1, 0.01, 0.11, 0.001, 0.101, 0.011, 0.111, 0.0011, 0.1101} in two steps; and so on. Of course it's going to be impossible to reach any `p` whose binary representation has an infinite number of 1-bits (such as p=0.9=0.1110011001...<sub>2</sub>); but we can get arbitrarily close.
And it sure looks like the general solution is (just like Russian peasant multiplication) to write the target in binary; then turn the 1s into |s and the 0s into &s. For example, to hit p=0.7, which is 0.101100111...<sub>2</sub>, we'd write this, where reading upward from bottom to top the bitwise operations are `|&||&&|||...` Each bitwise operation adds one more leading bit to the result.
```
#define R (rand() % 256)
unsigned get07() {
unsigned x = R | R; // 0.11
x |= R; // 0.111
x &= R; // 0.0111
x &= R; // 0.00111
x |= R; // 0.100111
x |= R; // 0.1100111
x &= R; // 0.01100111
x |= R; // 0.101100111
return x;
}
```
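The bit-reading rule can be checked mechanically: start from P(bit) = 0.5 (plain `R`, the trailing 1-digit) and consume the remaining target digits right to left, where a 0-digit is `&R` (p → p/2, prepending a 0) and a 1-digit is `|R` (p → 1/2 + p/2, prepending a 1). A small helper (the `achieved` name is mine) that derives both the op sequence, in the order the ops are applied, and the exactly-achieved probability:

```cpp
#include <cassert>
#include <string>

// bits = binary digits of the target after the point, e.g. "101100111"
// for p = 0.701171875. The last digit must be 1 (the initial plain R);
// each earlier digit, consumed right to left, becomes one more &R
// (digit 0, p -> p/2) or |R (digit 1, p -> 0.5 + p/2). All values are
// dyadic rationals, so doubles represent them exactly.
double achieved(const std::string &bits, std::string *ops = nullptr)
{
    double p = 0.5;  // plain R: each bit set with probability 1/2
    if (ops) ops->clear();
    for (int i = (int)bits.size() - 2; i >= 0; --i) {
        if (bits[i] == '1') { p = 0.5 + 0.5 * p; if (ops) *ops += '|'; }
        else                { p = 0.5 * p;       if (ops) *ops += '&'; }
    }
    return p;
}
```

Feeding it "11100111" (= 0.90234375) reproduces the `||&&|||` chain from `get09()` above, and "101100111" reproduces `get07()`.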
|
In my use case, I have a single AD B2C tenant handling user authentication requests coming from a single web application. In the existing login flow, once the user enters a login email and passcode, the next orchestration step makes a backend REST API call to check the user's existence, and that works as expected.
To enhance this login flow in B2C to handle authentication requests coming from two different web applications, we need to make parallel REST API calls based on the request URL.
I assume that, as a first step, B2C needs to look for a query parameter in the request URL to decide which REST API calls to make. I'm not sure if B2C can support parallel REST API calls from a single custom policy.
But how do I handle these authentication requests coming from two different web applications in B2C through a custom policy? Any helpful documentation or examples to support this use case would be greatly appreciated. |
Parallel REST API calls through Azure ADB2C |
|azure-ad-b2c|azure-ad-b2c-custom-policy|azure-ad-b2b|aad-b2c| |
When the project had one data source, native queries ran fine. Now that there are two data sources, Hibernate cannot determine the schema for native queries; non-native queries work fine.
**application.yaml**
```
spring:
autoconfigure:
exclude: org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
flyway:
test:
testLocations: classpath:/db_test/migration,classpath:/db_test/migration_test
testSchemas: my_schema
locations: classpath:/db_ems/migration
baselineOnMigrate: true
schemas: my_schema
jpa:
packages-to-scan: example.example1.example2.example3.example4
show-sql: false
properties:
hibernate.dialect: org.hibernate.dialect.PostgreSQL10Dialect
hibernate.format_sql: false
hibernate.jdbc.batch_size: 50
hibernate.order_inserts: true
hibernate.order_updates: true
hibernate.generate_statistics: false
hibernate.prepare_connection: false
hibernate.default_schema: my_schema
org.hibernate.envers:
audit_table_prefix: log_
audit_table_suffix:
hibernate.javax.cache.uri: classpath:/ehcache.xml
hibernate.cache:
use_second_level_cache: true
region.factory_class: org.hibernate.cache.ehcache.internal.SingletonEhcacheRegionFactory
hibernate:
connection:
provider_disables_autocommit: true
handling_mode: DELAYED_ACQUISITION_AND_RELEASE_AFTER_TRANSACTION
hibernate.ddl-auto: validate
# todo:
open-in-view: false
database-platform: org.hibernate.dialect.H2Dialect
#database connections
read-only:
datasource:
url: jdbc:postgresql://localhost:6432/db
username: postgres
password: postgres
configuration:
pool-name: read-only-pool
read-only: true
auto-commit: false
schema: my_schema
read-write:
datasource:
url: jdbc:postgresql://localhost:6433/db
username: postgres
password: postgres
configuration:
pool-name: read-write-pool
auto-commit: false
schema: my_schema
```
**Datasources config:**
```
@Configuration
public class DataSourceConfig {
@Bean
@ConfigurationProperties("spring.read-write.datasource")
public DataSourceProperties readWriteDataSourceProperties() {
return new DataSourceProperties();
}
@Bean
@ConfigurationProperties("spring.read-only.datasource")
public DataSourceProperties readOnlyDataSourceProperties() {
return new DataSourceProperties();
}
@Bean
@ConfigurationProperties("spring.read-only.datasource.configuration")
public DataSource readOnlyDataSource(DataSourceProperties readOnlyDataSourceProperties) {
return readOnlyDataSourceProperties.initializeDataSourceBuilder().type(HikariDataSource.class).build();
}
@Bean
@ConfigurationProperties("spring.read-write.datasource.configuration")
public DataSource readWriteDataSource(DataSourceProperties readWriteDataSourceProperties) {
return readWriteDataSourceProperties.initializeDataSourceBuilder().type(HikariDataSource.class).build();
}
@Bean
@Primary
public RoutingDataSource routingDataSource(DataSource readWriteDataSource, DataSource readOnlyDataSource) {
RoutingDataSource routingDataSource = new RoutingDataSource();
Map<Object, Object> dataSourceMap = new HashMap<>();
dataSourceMap.put(DataSourceType.READ_WRITE, readWriteDataSource);
dataSourceMap.put(DataSourceType.READ_ONLY, readOnlyDataSource);
routingDataSource.setTargetDataSources(dataSourceMap);
routingDataSource.setDefaultTargetDataSource(readWriteDataSource);
return routingDataSource;
}
@Bean
public BeanPostProcessor dialectProcessor() {
return new BeanPostProcessor() {
@Override
public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
if (bean instanceof HibernateJpaVendorAdapter) {
((HibernateJpaVendorAdapter) bean).getJpaDialect().setPrepareConnection(false);
}
return bean;
}
};
}
}
```
**Routing data sources**
```
public class RoutingDataSource extends AbstractRoutingDataSource {
@Override
protected Object determineCurrentLookupKey() {
return DataSourceTypeContextHolder.getTransactionType();
}
@Override
public void setTargetDataSources(Map<Object, Object> targetDataSources) {
super.setTargetDataSources(targetDataSources);
afterPropertiesSet();
}
}
```
**Depending on whether the transaction is readOnly or not, the datasource is selected**
```
public class DataSourceTypeContextHolder {
private static final ThreadLocal<DataSourceType> contextHolder = new ThreadLocal<>();
public static void setTransactionType(DataSourceType dataSource) {
contextHolder.set(dataSource);
}
public static DataSourceType getTransactionType() {
return contextHolder.get();
}
public static void clearTransactionType() {
contextHolder.remove();
}
}
```
```
@Aspect
@Component
@Slf4j
public class TransactionAspect {
@Before("@annotation(transactional) && execution(* *(..))")
public void setTransactionType(Transactional transactional) {
if (transactional.readOnly()) {
DataSourceTypeContextHolder.setTransactionType(DataSourceType.READ_ONLY);
} else {
DataSourceTypeContextHolder.setTransactionType(DataSourceType.READ_WRITE);
}
}
@AfterReturning("@annotation(transactional) && execution(* *(..))")
public void clearTransactionType(Transactional transactional) {
DataSourceTypeContextHolder.clearTransactionType();
}
}
```
**Error**
```
org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar [UPDATE my_table SET lock_until = timezone('utc', CURRENT_TIMESTAMP) + cast(? as interval), locked_at = timezone('utc', CURRENT_TIMESTAMP), locked_by = ? WHERE my_table.name = ? AND my_table.lock_until <= timezone('utc', CURRENT_TIMESTAMP)]; nested exception is org.postgresql.util.PSQLException: ERROR: relation "shedlock" does not exist
Position: 8
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:235)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:72)
at org.springframework.jdbc.core.JdbcTemplate.translateException(JdbcTemplate.java:1443)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:633)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:862)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:883)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.update(NamedParameterJdbcTemplate.java:321)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.update(NamedParameterJdbcTemplate.java:326)
at net.javacrumbs.shedlock.provider.jdbctemplate.JdbcTemplateStorageAccessor.lambda$execute$0(JdbcTemplateStorageAccessor.java:115)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:140)
at net.javacrumbs.shedlock.provider.jdbctemplate.JdbcTemplateStorageAccessor.execute(JdbcTemplateStorageAccessor.java:115)
at net.javacrumbs.shedlock.provider.jdbctemplate.JdbcTemplateStorageAccessor.updateRecord(JdbcTemplateStorageAccessor.java:81)
at net.javacrumbs.shedlock.support.StorageBasedLockProvider.doLock(StorageBasedLockProvider.java:91)
at net.javacrumbs.shedlock.support.StorageBasedLockProvider.lock(StorageBasedLockProvider.java:65)
at jdk.internal.reflect.GeneratedMethodAccessor328.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:205)
at com.sun.proxy.$Proxy139.lock(Unknown Source)
at net.javacrumbs.shedlock.core.DefaultLockingTaskExecutor.executeWithLock(DefaultLockingTaskExecutor.java:63)
at net.javacrumbs.shedlock.spring.aop.MethodProxyScheduledLockAdvisor$LockingInterceptor.invoke(MethodProxyScheduledLockAdvisor.java:86)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:747)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689)
at example1.example2.example3.example3.example3.example3.example3.scheduler.RpoContentSheduler$$EnhancerBySpringCGLIB$$631d68e1.loadData(<generated>)
at jdk.internal.reflect.GeneratedMethodAccessor320.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:84)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:93)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.postgresql.util.PSQLException: ERROR: relation "my_table" does not exist
```
When I change the native query and specify the schema before the table name, the query runs normally:
UPDATE my_schema.my_table SET lock_until = timezone('utc', CURRENT_TIMESTAMP) + cast(? as interval), locked_at = timezone('utc', CURRENT_TIMESTAMP), locked_by = ? WHERE my_schema.my_table.name = ? AND my_schema.my_table.lock_until <= timezone('utc', CURRENT_TIMESTAMP);
|
|python|pandas|matplotlib| |
I'm using this code to replace `<div id="replaceit"></div>` with content from my dynamic URL,
but it does not update instantly when I change the select option.
How can I make it work?
Thank you!
$(document).ready(function() {
function updatePreview() {
var year = $("#year").val();
var month = $("#month").val();
var dynamicUrl = "print.php?year=" + year + "&month=" + month;
$.get(dynamicUrl, function(data) {
$("#replaceit").replaceWith(data);
}).fail(function() {
alert("Failed to fetch preview content. Please try again.");
});
}
$('#year, #month').on('change', function(){
updatePreview();
});
updatePreview();
}); |
JQuery replaceWith() does not work onChange |
|jquery|ajax| |
Q. Write a program that separates odd and even numbers by receiving a number from the user
- Output 'Cancel Input' when input is cancelled
- Distinguish odd/even if a value has been entered
I made it using an if/else statement, but I can't output 'Cancel Input' when I cancel the input.
I want to handle a cancelled input, and also detect when something that is not a number is entered.
Is it not possible for `if` to recognize null and NaN as false?
So how do you make them recognizable?
<!doctype html>
<html lang="ko">
<head>
<meta charset="utf-8">
<title>JAVASCRIPT</title>
</head>
<body>
<script>
let num = Number(prompt('put a number',15));
let re;
if(num == null){
re = 'no'
}
if(num != Number){
re ='please put a number'
}
if(num % 2 === 0){
re = 'even';
}else if(num % 2 === 1){
re = 'odd';
}
</script>
</body>
</html> |
What does "ValueError: Exception encountered when calling Sequential.call()" mean and how can i fix it |
|python|tensorflow|artificial-intelligence| |
null |
You need to distinguish between kNN (k Nearest Neighbors) and exact search.
With exact search (i.e. brute-force search by using a `script_score` query), if you have 1M vectors, your query vector will be compared against each of them and the results you'll get are the real 10 closest vectors to your query vector.
With kNN search, also called **approximate** nearest neighbors (ANN) it's a bit different, because your 1M vectors will be indexed in a dedicated structure depending on your vector search engine (Inverted File Index, KD trees, Hierarchical Navigable Small Worlds, etc). For Elasticsearch, which is based on Apache Lucene, vectors are indexed in a [Hierarchical Navigable Small Worlds](https://opster.com/guides/opensearch/opensearch-machine-learning/introduction-to-vector-search/#Hierarchical-Navigable-Small-Worlds-(HNSW)) structure. At search time, the HNSW algorithm will try to figure out the k nearest neighbors to your query vector based on their closest distance, or highest similarity. It might find the real ones, or not, hence the **approximate** nature of these search algorithms. In order to decrease the odds of "or not", the idea is to visit a higher amount of vectors, and that's the role of `num_candidates`.
The idea is NOT to pick a value of `num_candidates` that is high enough to visit all vectors in your database, as that would boil down to make an exact search and it would make no sense to use an ANN algorithm for this, just run an exact search, pay the execution price and that's it.
The shard sizing document you are referring to does not pertain to kNN search. kNN search has its own [tuning strategy](https://www.elastic.co/guide/en/elasticsearch/reference/current/tune-knn-search.html) that is different. As the HNSW graph needs to be built per segment and each segment needs to be searched, the ideal situation would be to have a single segment to search, i.e. one shard with one force-merged segment. Depending on your data volume and whether you're constantly indexing new vectors, that might not be feasible. But you should optimize in that direction, i.e. as few shards with as few segments as possible.
Let's say that you manage to get your 1M vectors into a single shard with a single segment, there's no reason to have a high `num_candidates`, because the HNSW search algorithm has a pretty good recall rate and doesn't need to visit more than a certain amount of candidates (to be figured out depending on your use case, constraints, data volume, SLA, etc) in order to find the top k ones.
**Update March 27th, 2024:**
It is worth noting that [as of ES 8.13](https://www.elastic.co/guide/en/elasticsearch/reference/8.13/knn-search-api.html#knn-search-api-request-body), `k` and `num_candidates` have become optional and their respective values are set to sensible defaults, i.e.:
* `k` defaults to the value of `size`
* `num_candidates` defaults to the minimum of `10000` and `1.5 x k`
So by default:
* `size = k = 10`
* `num_candidates = 15` |
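To see why a moderate `num_candidates` is usually enough, here is a toy, pure-Python sketch (not the Elasticsearch API; all names are made up for illustration). It mimics approximate search by scoring only the first `num_candidates` vectors in a fixed random visiting order, so recall against the exact top-k grows with `num_candidates` and reaches 1.0 only when every vector is visited, which is just brute force again.

```python
# Toy illustration of exact vs. approximate nearest-neighbor search.
import random

rng = random.Random(42)
DIM, N, K = 8, 5000, 10
vectors = [tuple(rng.random() for _ in range(DIM)) for _ in range(N)]
query = tuple(rng.random() for _ in range(DIM))

def top_k(candidates, k=K):
    # Exact k nearest neighbors among `candidates` by squared Euclidean
    # distance -- what a brute-force script_score query effectively does.
    return sorted(candidates, key=lambda v: sum((a - b) ** 2 for a, b in zip(query, v)))[:k]

# Fixed random visiting order; taking a prefix of it stands in for
# "visit only num_candidates vectors" in an ANN search.
order = vectors[:]
rng.shuffle(order)

def recall_at(num_candidates):
    # Fraction of the true top-k found when only a prefix is scored.
    true_top = set(top_k(vectors))
    approx_top = set(top_k(order[:num_candidates]))
    return len(true_top & approx_top) / K

print(recall_at(100), recall_at(1000), recall_at(N))
```

Because the candidate sets are nested prefixes of one shuffled list, recall is monotonically non-decreasing in `num_candidates` by construction.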
If you use a `FocusNode`, try deleting it. After that it works fine.
Here's a read-only `Stream` implementation that uses an `IEnumerable<byte>` as input:
public class ByteStream : Stream, IDisposable
{
private readonly IEnumerator<byte> _input;
private bool _disposed;
public ByteStream(IEnumerable<byte> input)
{
_input = input.GetEnumerator();
}
public override bool CanRead => true;
public override bool CanSeek => false;
public override bool CanWrite => false;
public override long Length => 0;
public override long Position { get; set; } = 0;
public override int Read(byte[] buffer, int offset, int count)
{
int i = 0;
for (; i < count && _input.MoveNext(); i++)
buffer[i + offset] = _input.Current;
return i;
}
public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
public override void SetLength(long value) => throw new NotSupportedException();
public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();
// Flush must be a no-op rather than throw: the framework may call it while writing the response.
public override void Flush() { }
void IDisposable.Dispose()
{
if (_disposed)
return;
_input.Dispose();
_disposed = true;
}
}
What you then still need is a function that converts `IEnumerable<string>` to `IEnumerable<byte>`:
public static IEnumerable<byte> Encode(IEnumerable<string> input, Encoding encoding)
{
byte[] newLine = encoding.GetBytes(Environment.NewLine);
foreach (string line in input)
{
byte[] bytes = encoding.GetBytes(line);
foreach (byte b in bytes)
yield return b;
foreach (byte b in newLine)
yield return b;
}
}
And finally, here's how to use this in your controller:
public FileResult GetResult()
{
IEnumerable<string> data = GetDataForStream();
var stream = new ByteStream(Encode(data, Encoding.UTF8));
return File(stream, "text/plain", "Result.txt");
}
|
I have hosted a React app on an IIS server inside wwwroot/{myFolder}. I am currently facing the below error: [Unexpected Application Error! (404 Not Found)](https://i.stack.imgur.com/888re.png)
I already have a web.config file inside wwwroot/{myfolder}, which looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<system.webServer>
<rewrite>
<rules>
<rule name="React Routes" stopProcessing="true">
<match url=".*" />
<conditions logicalGrouping="MatchAll">
<add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
<add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
<add input="{REQUEST_URI}" pattern="^/(api)" negate="true" />
</conditions>
<action type="Rewrite" url="/index.html" />
</rule>
</rules>
</rewrite>
</system.webServer>
</configuration>
And I have also installed URL Rewrite Module.
|
Problem in hosting React App with react-router-dom on IIS Server |
|reactjs|iis|routes|url-rewriting|hosting| |
null |
<!-- language-all: sh -->
The registry key of origin and the comparison operand imply that you're dealing with _version numbers_.
To meaningfully compare version numbers based on their _string representations_, cast them to [`[version]`](https://learn.microsoft.com/en-US/dotnet/api/System.Version):
```
[version] $soft.DisplayVersion -lt [version] '124.0'
```
Note:
* For brevity, you could omit the RHS `[version]` cast, because the LHS cast would implicitly coerce the RHS to `[version]` too.
* For a `[version]` cast to succeed, the input string must have _at least 2_ components (e.g. `[version] '124.0'`) and _at most 4_ (e.g. `[version] '124.0.1.2'`), and each component must be a non-negative decimal integer.
* `[version]` is _not_ capable of parsing _semantic_ version numbers, however, which may contain non-numeric parts.
* Use [`[semver]`](https://learn.microsoft.com/en-US/dotnet/api/System.Management.Automation.SemanticVersion) to parse semantic versions; it is only available in [_PowerShell (Core) 7_](https://github.com/PowerShell/PowerShell/blob/master/README.md), however. Unlike `[version]`, a _single_ component is sufficient; e.g.,
`[semver] '124'` works and creates a version whose full string representation is `'124.0.0'`. |
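Language aside, the pitfall the `[version]` cast guards against is lexicographic string comparison. A small Python illustration of the same idea (not PowerShell; the helper name is made up) expresses the cast as a tuple of integers:

```python
# Comparing version numbers as strings is lexicographic, so '124.0' sorts
# before '9.0'. Converting each component to an integer (what PowerShell's
# [version] cast does) restores the expected numeric ordering.
def parse_version(s):
    # Split '124.0.1.2' into a tuple of ints; tuples compare element-wise.
    return tuple(int(part) for part in s.split("."))

print("124.0" < "9.0")                                # True  -- wrong: string comparison
print(parse_version("124.0") < parse_version("9.0"))  # False -- right: numeric comparison
```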
Here is my project setup
```
project/
app.py
test_app.py
```
app.py
```
from pydantic import BaseModel
from sqlalchemy.orm import Session
from fastapi import Depends, FastAPI, HTTPException
class UserCreateModel(BaseModel):
username: str
password: str
def get_database_session():
yield
class UserRepo:
def create_user(self, user_create_model: UserCreateModel, session: Session):
pass
def get_user_by_id(self, id: int, session: Session):
pass
def create_app():
app = FastAPI()
@app.post("/api/v1/users", status_code=201)
async def create_user(
user_create_model: UserCreateModel, user_repo: UserRepo = Depends(), session=Depends(get_database_session)
):
user = user_repo.create_user(user_create_model=user_create_model, session=session)
return user
@app.get("/api/v1/users/{id}", status_code=200)
async def get_user_by_id(id: int, user_repo: UserRepo = Depends(), session=Depends(get_database_session)):
user = user_repo.get_user_by_id(id=id, session=session)
if not user:
raise HTTPException(status_code=404)
return user
return app
app = create_app()
```
test_app.py
```
from app import UserCreateModel, UserRepo, create_app, get_database_session
from dataclasses import dataclass
from fastapi import FastAPI
from fastapi.testclient import TestClient
from pytest import fixture
from sqlalchemy.orm import Session
@fixture(scope="session")
def app():
def _get_database_session():
return True
app = create_app()
app.dependency_overrides[get_database_session] = _get_database_session
yield app
@fixture(scope="session")
def client(app: FastAPI):
client = TestClient(app=app)
yield client
@fixture(scope="function")
def user_repo(app: FastAPI):
print("created")
@dataclass
class User:
id: int
username: str
password: str
class MockUserRepo:
def __init__(self):
self.database = []
def create_user(self, user_create_model: UserCreateModel, session: Session) -> User | None:
if session:
user = User(
id=len(self.database) + 1,
username=user_create_model.username,
password=user_create_model.password,
)
self.database.append(user)
return user
def get_user_by_id(self, id: int, session: Session) -> User | None:
print(self.database)
if session:
for user in self.database:
if user.id == id:
return user
return None
app.dependency_overrides[UserRepo] = MockUserRepo
yield
def test_create_user(client: TestClient, user_repo):
data = {"username": "mike", "password": "123"}
res = client.post("/api/v1/users", json=data)
assert res.status_code == 201
assert res.json()["username"] == "mike"
assert res.json()["password"] == "123"
assert "id" in res.json()
def test_get_user(client: TestClient, user_repo):
data = {"username": "nick", "password": "123"}
res = client.post("/api/v1/users", json=data)
assert res.status_code == 201
user = res.json()
user_id = user["id"]
res = client.get(f"/api/v1/users/{user_id}")
assert res.status_code == 200
```
For test_get_user(), I first make a post request and append {"username": "nick", "password": "123"} to self.database, then I follow up with a get request, and yet my self.database is still empty even though the fixture is set to function scope. Does anyone know what is happening? |
How to authenticate with REST API service on IIS using pass-through authentication in Python? |
|python|python-3.x|rest|iis|ntlm-authentication| |
null |
{"OriginalQuestionIds":[25204158],"Voters":[{"Id":13061224,"DisplayName":"siggemannen"},{"Id":2029983,"DisplayName":"Thom A","BindingReason":{"GoldTagBadge":"t-sql"}}]} |
I have a json response as below.
['{"accountNumber":"2130005","billDayModelName":"","billDayModelScore":"0","defaultBadWriteOffModelName":"XYZ","defaultBadWriteOffModelScore":600.677286,"accountModelName":"","accountModelScore":"0","customerModelName":"","customerModelScore":"0","accountRiskCode":"","customerRiskCode":""}', '{"ACCTNBR":"2130005","PREV3MOTOTAMT":0,"PREV3CALL":1,"DAY_SYNC":4,"STD_DVTN":79.72,"MNTHCNT":2,"AVG_AMT":0,"PMT_CNT":0}', '[{"ACCT_NBR":2130005,"ACCOUNT_SCORE":0.8,"BHVR_SCR":380}]']
I would like to put all the values into a pandas DataFrame with individual column names, with the values as they come in the response.
Is there any way to get all the columns and values into a single DataFrame? |
inserting the all the responses of an api output to pandas dataframe with indivdual column names |
|json|python-3.x|pandas|dataframe| |
Visual Studio Code displays a gutter line to indicate some files have changed. When you stage the changes, the line disappears.
This is what unstaged changes looks like:
[![Unstaged changes in VS Code][1]][1]
This is what it looks like after running `git add .`:
[![Staged changes in VS Code][2]][2]
Is there someway to make VS Code highlight staged changes in the gutter?
[1]: https://i.stack.imgur.com/FIkcx.png
[2]: https://i.stack.imgur.com/b6y38.png |
Can I make Visual Studio Code highlight staged changes? |
`@SpringBootTest` is used to set up an integration test environment; it enables you to test controllers, services, and repositories together.
If you're writing unit tests for your controller and service classes using Mockito, you'll typically use `@ExtendWith(MockitoExtension.class)` to enable Mockito and then mock any dependencies of the class you're testing.
I've changed the Visual Studio Code settings so that by default it opens Git Bash instead of the default shell.
Now, for a specific project, I'd like to make it always open in a specific subfolder by default, so that I don't have to execute `cd MyFolder` manually every time before moving on to the commands I need for my work.
How do I do that?
I didn't find anything useful on Google, MSDN, or ChatGPT.
How do I specify a default folder for bash terminal in visual studio code? |
|visual-studio-code|settings|git-bash| |
I get a file from a third party. The file seems to contain both ANSI and UTF-8 encoded characters (not sure if my terminology is correct).
Changing the encoding in Notepad++ yields the following:
[![Notepad++ screenshot][1]][1]
[1]: https://i.stack.imgur.com/G0hcd.png
So when using ANSI encoding, Employee2 is incorrect. And when using UTF-8 encoding, Employee1 is incorrect.
Is there a way in C# to set 2 encodings for a file?
Whichever encoding I set in C#, one of the two employees is incorrect:
```csharp
string filetext = "";
filetext = File.ReadAllText(@"C:\TESTFILEx.txt", Encoding.GetEncoding("ISO-8859-1")); // Employee1 is correct, Employee2 is wrong
filetext = File.ReadAllText(@"C:\TESTFILEx.txt", Encoding.GetEncoding("Windows-1252")); // Employee1 is correct, Employee2 is wrong
filetext = File.ReadAllText(@"C:\TESTFILEx.txt", Encoding.UTF7); // Employee1 is correct, Employee2 is wrong
filetext = File.ReadAllText(@"C:\TESTFILEx.txt", Encoding.Default); // Employee1 is correct, Employee2 is wrong
filetext = File.ReadAllText(@"C:\TESTFILEx.txt", Encoding.UTF8); // Employee1 is wrong, Employee2 is correct
```
Has anyone else encountered this and found a solution?
|
I am creating an API that searches for and reverses an entry in the account.move model. I am able to find the correct entry and reverse it using the refund_moves() method. However, whenever I try to confirm the reversed entry using the action_post() method, I get an "Expected singleton: res.company()" error.
I've used the action_post() method before on other models such as sale.order/account.move and it works fine.
Code:
```python
@http.route('/update_invoice', website="false", auth='custom_auth', type='json', methods=['POST'])
#Searching for entry
invoice = request.env['account.move'].sudo().search([('matter_id','=',matterID),('account_id','=',accountID),('move_type','=','out_invoice'),('company_id','=',creditor.id)])
if invoice:
#Create Reversal
move_reversal = request.env['account.move.reversal'].with_context(active_model="account.move", active_ids=invoice.id).sudo().create({
'date': intakeDate,
'reason': 'Balance Adjustment',
'journal_id': invoice.journal_id.id,
})
#Reverse Entry
move_reversal.refund_moves()
#Search for created reversed entry
refundInvoice = request.env['account.move'].sudo().search([('name','=',"/"),('company_id','=',creditor.id),('move_type','=','out_refund')])
if refundInvoice:
_logger.info("Refund Invoice Found")
#Error occurs
refundInvoice.action_post()
```
Custom Authorization:
```python
@classmethod
def _auth_method_custom_auth(cls):
_logger.info("+++++++++++++++++++++++++++++++++++")
access_token = request.httprequest.headers.get('Authorization')
_logger.info(access_token)
if not access_token:
_logger.info('Access Token Missing')
raise BadRequest('Missing Access Token')
if access_token.startswith('Bearer '):
access_token = access_token[7:]
_logger.info(access_token)
user_id = request.env["res.users.apikeys"]._check_credentials(scope='odoo.restapi', key=access_token)
if not user_id:
_logger.info('No user with api key found')
raise BadRequest('Access token Invalid')
request.update_env(user=user_id)
#users = request.env["res.users"].search([])
_logger.info("+++++++++++++++++++++++++++++++++++")
```
Traceback:
```
Traceback (most recent call last):
File "/home/odoo/src/odoo/odoo/models.py", line 5841, in ensure_one
_id, = self._ids
ValueError: not enough values to unpack (expected 1, got 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/odoo/src/odoo/odoo/http.py", line 2189, in __call__
response = request._serve_db()
File "/home/odoo/src/odoo/odoo/http.py", line 1765, in _serve_db
return service_model.retrying(self._serve_ir_http, self.env)
File "/home/odoo/src/odoo/odoo/service/model.py", line 133, in retrying
result = func()
File "/home/odoo/src/odoo/odoo/http.py", line 1792, in _serve_ir_http
response = self.dispatcher.dispatch(rule.endpoint, args)
File "/home/odoo/src/odoo/odoo/http.py", line 1996, in dispatch
result = self.request.registry['ir.http']._dispatch(endpoint)
File "/home/odoo/src/odoo/addons/website/models/ir_http.py", line 235, in _dispatch
response = super()._dispatch(endpoint)
File "/home/odoo/src/odoo/odoo/addons/base/models/ir_http.py", line 222, in _dispatch
result = endpoint(**request.params)
File "/home/odoo/src/odoo/odoo/http.py", line 722, in route_wrapper
result = endpoint(self, *args, **params_ok)
File "/home/odoo/src/user/account_ext/controllers/main.py", line 411, in update_invoice
refundInvoice.action_post()
File "/home/odoo/src/odoo/addons/sale/models/account_move.py", line 63, in action_post
res = super(AccountMove, self).action_post()
File "/home/odoo/src/enterprise/account_accountant/models/account_move.py", line 76, in action_post
res = super().action_post()
File "/home/odoo/src/odoo/addons/account/models/account_move.py", line 4072, in action_post
other_moves._post(soft=False)
File "/home/odoo/src/enterprise/sale_subscription/models/account_move.py", line 13, in _post
posted_moves = super()._post(soft=soft)
File "/home/odoo/src/enterprise/account_asset/models/account_move.py", line 109, in _post
posted = super()._post(soft)
File "/home/odoo/src/odoo/addons/sale/models/account_move.py", line 99, in _post
posted = super()._post(soft)
File "/home/odoo/src/enterprise/account_reports/models/account_move.py", line 48, in _post
return super()._post(soft)
File "/home/odoo/src/enterprise/account_avatax/models/account_move.py", line 15, in _post
res = super()._post(soft=soft)
File "/home/odoo/src/enterprise/account_invoice_extract/models/account_invoice.py", line 262, in _post
posted = super()._post(soft)
File "/home/odoo/src/enterprise/account_inter_company_rules/models/account_move.py", line 14, in _post
posted = super()._post(soft)
File "/home/odoo/src/enterprise/account_external_tax/models/account_move.py", line 53, in _post
return super()._post(soft=soft)
File "/home/odoo/src/enterprise/account_accountant/models/account_move.py", line 68, in _post
posted = super()._post(soft)
File "/home/odoo/src/odoo/addons/account/models/account_move.py", line 3876, in _post
draft_reverse_moves.reversed_entry_id._reconcile_reversed_moves(draft_reverse_moves, self._context.get('move_reverse_cancel', False))
File "/home/odoo/src/odoo/addons/account/models/account_move.py", line 3694, in _reconcile_reversed_moves
lines.with_context(move_reverse_cancel=move_reverse_cancel).reconcile()
File "/home/odoo/src/odoo/addons/account/models/account_move_line.py", line 2935, in reconcile
return self._reconcile_plan([self])
File "/home/odoo/src/odoo/addons/account/models/account_move_line.py", line 2345, in _reconcile_plan
self._reconcile_plan_with_sync(plan_list, all_amls)
File "/home/odoo/src/odoo/addons/account/models/account_move_line.py", line 2492, in _reconcile_plan_with_sync
exchange_diff_values = exchange_lines_to_fix._prepare_exchange_difference_move_vals(
File "/home/odoo/src/odoo/addons/account/models/account_move_line.py", line 2603, in _prepare_exchange_difference_move_vals
accounting_exchange_date = journal.with_context(move_date=exchange_date).accounting_date
File "/home/odoo/src/odoo/odoo/fields.py", line 1207, in __get__
self.compute_value(recs)
File "/home/odoo/src/odoo/odoo/fields.py", line 1389, in compute_value
records._compute_field_value(self)
File "/home/odoo/src/odoo/addons/mail/models/mail_thread.py", line 424, in _compute_field_value
return super()._compute_field_value(field)
File "/home/odoo/src/odoo/odoo/models.py", line 4867, in _compute_field_value
fields.determine(field.compute, self)
File "/home/odoo/src/odoo/odoo/fields.py", line 102, in determine
return needle(*args)
File "/home/odoo/src/odoo/addons/account/models/account_journal.py", line 366, in _compute_accounting_date
journal.accounting_date = temp_move._get_accounting_date(move_date, has_tax)
File "/home/odoo/src/odoo/addons/account/models/account_move.py", line 4358, in _get_accounting_date
lock_dates = self._get_violated_lock_dates(invoice_date, has_tax)
File "/home/odoo/src/odoo/addons/account/models/account_move.py", line 4389, in _get_violated_lock_dates
return self.company_id._get_violated_lock_dates(invoice_date, has_tax)
File "/home/odoo/src/odoo/addons/account/models/company.py", line 369, in _get_violated_lock_dates
self.ensure_one()
File "/home/odoo/src/odoo/odoo/models.py", line 5844, in ensure_one
raise ValueError("Expected singleton: %s" % self)
ValueError: Expected singleton: res.company()
``` |
null |
I was looking at the option of embedding Python into Fortran to add Python functionality to my existing Fortran 90 code. I know that it can be done the other way around, by extending Python with Fortran using f2py from NumPy. But I want to keep my super-optimized main loop in Fortran and add Python to do some additional tasks / evaluate further developments before I implement them in Fortran, and also to ease code maintenance. I am looking for answers to the following questions:
1) Is there an existing library with which I can embed Python into Fortran? (I am aware of f2py, and it does it the other way around)
2) How do we take care of data transfer from Fortran to Python and back?
3) How can we have callback functionality implemented? (Let me describe the scenario a bit... I have my main_fortran program in Fortran, which calls a Func1_Python module in Python. Now, from this Func1_Python, I want to call another function... say Func2_Fortran in Fortran)
4) What would be the impact of embedding the Python interpreter inside Fortran in terms of performance: loading time, running time, sending data (a large array in double precision) across, etc.?
Thanks a lot in advance for your help!!
Edit1: I want to set the direction of the discussion right by adding some more information about the work I am doing. I am into scientific computing, so I work a lot with huge double-precision arrays/matrices and do floating-point operations. There are very few options other than Fortran that can really do that work for me. The reason I want to include Python in my code is that I can use NumPy for some basic computations if necessary, and extend the capabilities of the code with minimal effort. For example, I can use one of the several libraries available to link Python with some other package (say OpenFOAM, using the PyFoam library).
Embed python into Fortran |
I have a JavaFX project. In the main view I have a ListView with some tag items and a delete button. The whole project is connected to a PostgreSQL DB. The tags are stored as (user_id, name, color). When I click the delete button, the tag is deleted from the DB, but I get the error "No results were returned by the query." even though I have items in the DB and the list is updated. Why do I keep getting this error?
```java
public class TagsController extends SQLConnection implements Initializable {
    @FXML
    public Button forest, shop, timeline, tags, rewards, settings, friends, button;
    @FXML
    public VBox menu, vbox;
    @FXML
    public Label gold;
    @FXML
    public ListView<Tag> listView;
    public Stage stage;
    public Scene scene;
    ObservableList<Tag> list = FXCollections.observableArrayList();
    List<Tag> listOfTags;

    {
        try {
            listOfTags = new ArrayList<>(listOfTags());
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }

    User user = new User();

    @Override
    public void initialize(URL url, ResourceBundle resourceBundle) {
        //set the gold value
        gold.setText(String.valueOf(getGold(user.getId())));
        //set the image of the menu button
        InputStream in = getClass().getResourceAsStream("/images/menu.png");
        Image image = new Image(in);
        ImageView imageView = new ImageView(image);
        button.setGraphic(imageView);
        button.setMaxSize(40, 40);
        button.setMinSize(40, 40);
        button.setContentDisplay(ContentDisplay.TOP);
        imageView.fitWidthProperty().bind(button.widthProperty());
        imageView.setPreserveRatio(true);
        //hide the menu buttons
        forest.setVisible(false);
        shop.setVisible(false);
        timeline.setVisible(false);
        tags.setVisible(false);
        rewards.setVisible(false);
        settings.setVisible(false);
        friends.setVisible(false);
        menu.setVisible(false);
        listView.setItems(list);
        listView.setCellFactory(param -> new ListCell<>() {
            private final Button deleteButton = new Button();

            @Override
            protected void updateItem(Tag tag, boolean empty) {
                super.updateItem(tag, empty);
                if (empty || tag == null) {
                    setText(null);
                    setGraphic(null);
                } else {
                    setText(tag.getName());
                    setGraphic(deleteButton);
                    setStyle("-fx-background-color: #deaef4; -fx-padding: 20px; -fx-border-width: 1px; -fx-border-color: #cccccc; -fx-font-family: System Italic; -fx-font-size: 19; -fx-text-fill: #f5f599;");
                    InputStream in1 = getClass().getResourceAsStream("/images/bin.png");
                    Image image1 = new Image(in1);
                    ImageView imageView1 = new ImageView(image1);
                    imageView1.setFitHeight(40);
                    imageView1.setFitWidth(40);
                    deleteButton.setGraphic(imageView1);
                    deleteButton.setStyle("-fx-background-color: #deaef4;");
                    deleteButton.setOnAction(actionEvent -> {
                        removeTag(user.getId(), tag.getName());
                        list.remove(listView.getSelectionModel().getSelectedItem());
                        listView.getItems().remove(tag);
                        refreshTags();
                    });
                }
            }
        });
        refreshTags();
    }

    private void refreshTags() {
        listView.getItems().removeAll();
        list.removeAll();
        try {
            Connection connection = connection();
            Statement statement = connection.createStatement();
            String query = "SELECT * FROM tags WHERE user_id = " + user.getId();
            ResultSet resultSet = statement.executeQuery(query);
            while (resultSet.next()) {
                listView.getItems().add(new Tag(
                        user.getId(),
                        resultSet.getString("name"),
                        resultSet.getString("color"))
                );
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }
}
```
Listview issue in javafx |
|java|postgresql|list|listview|javafx| |
I'm currently developing a project in **React Native** that heavily relies on WebRTC functionality for peer-to-peer communication. In the web environment, I've found packages like **simple-peer** and **peer.js** immensely helpful in simplifying peer connection management.
I've tried integrating both **react-native-simple-peer** and **react-native-peerjs** for peer management, but unfortunately, neither seems to work seamlessly with **react-native-webrtc**. Has anyone come across a similar situation and found a suitable alternative library or package for **managing peers** in React Native WebRTC applications?
If there is an alternative library available, could you please provide some guidance on how to integrate it with **react-native-webrtc** effectively? Any tips or code snippets would be greatly appreciated. Thank you! |
Peer management library/package for React Native WebRTC integration? |
|react-native|webrtc|peerjs|simple-peer|react-native-webrtc| |
Pytest: in memory data doesn't persist through fixture |
|python|unit-testing|pytest|fastapi|fixtures| |
```javascript
const { EventEmitter } = require('events');
// execFile matches the (command, args, callback) call shape used below
const { execFile } = require('child_process');

class ItemGetter extends EventEmitter {
    constructor() {
        super();
        this.on('item', item => this.handleItem(item));
    }

    handleItem(item) {
        console.log('Receiving Data: ' + item);
    }

    getAllItems() {
        for (let i = 0; i < 15; i++) {
            this
                .getItem(i)
                .then(item => this.emit('item', item))
                .catch(console.error);
        }
        console.log('=== Loop ended ===');
    }

    async getItem(item = '') {
        console.log('Getting data:', item);
        return new Promise((resolve, reject) => {
            execFile('echo', [item], (error, stdout, stderr) => {
                if (error) {
                    // reject instead of throwing inside the callback,
                    // so the error actually propagates to .catch()
                    return reject(error);
                }
                resolve(item);
            });
        });
    }
}

(new ItemGetter()).getAllItems();
```
Your logic first runs the loop, calling getItem for every index, then outputs '=== Loop ended ===', and only after that do the promises resolve. So, if you want the result of each getItem execution handled independently of the others, just don't abuse asynchronous logic; frequently the right solution is much simpler than it seems ;)
Note: with this solution you will get the same output, because the loop calling getItem runs faster than the promises wrapping the child-process call, but in this case each item is handled exactly after its own promise resolves, instead of waiting for all promises to resolve.
DynamoDB data structure can be likened to a B-tree. That is to say that an efficient Query reads only a contiguous piece of data.
You have an access pattern where you cannot guarantee that the data is contiguous and stored beside each other on disk. It's for that reason you are struggling.
### Possible Solutions
- Depending on the actual data you store in your sort key, you can use a `between` or `begins_with` condition, e.g. `WHERE SK BETWEEN str1 AND str3`.
- However, your access pattern seems to suggest that you have specific sort keys you need that do not follow a pattern, in which case you should use a `BatchGetItem`, which will allow you to fetch up to 100 items per request.
- If the full value of the sort key is unknown, then you can make an `ExecuteStatement` request, which will execute multiple Query calls under the hood, fulfilling your access pattern.
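If you go the `BatchGetItem` route, note that each request accepts at most 100 keys, so the wanted keys have to be chunked. A minimal Python sketch of that batching (the table name `MyTable` and the key attribute names `PK`/`SK` are placeholder assumptions, not taken from the question):

```python
# Sketch: splitting wanted sort keys into BatchGetItem-sized payloads.
# BatchGetItem accepts at most 100 keys per request.

def build_batch_requests(pk, sort_keys, table="MyTable", batch_size=100):
    """Return a list of RequestItems payloads, one per batch of keys."""
    requests = []
    for i in range(0, len(sort_keys), batch_size):
        chunk = sort_keys[i:i + batch_size]
        requests.append({
            table: {
                "Keys": [{"PK": {"S": pk}, "SK": {"S": sk}} for sk in chunk]
            }
        })
    return requests

# Each payload can then be sent with boto3, e.g.:
#   client = boto3.client("dynamodb")
#   for req in build_batch_requests("user#1", wanted_keys):
#       resp = client.batch_get_item(RequestItems=req)
#       # retry resp["UnprocessedKeys"] if it is non-empty
```

Remember that `BatchGetItem` can return `UnprocessedKeys` under throttling, so those should be retried.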
null |
My Screen component is rendered with the right height, stopping at the top of the tab bar.
One of the child elements (a FlatList) gets rendered with a larger height and ends up behind the tab bar.
I am trying to use a `FlatList` with `pagingEnabled` to show the elements one by one.
However, when I scroll the list, it doesn't 'page' by one element, but by about 1.7. I found out that although I am using `<SafeAreaView>`, the list sits behind the tab bar, which is a problem because `pagingEnabled` uses the list's height to determine an element's height.
Screen in question:
```
<View className="relative bg-[#232323] h-full overflow-none">
<SafeAreaView>
<View className="relative bg-red-500 h-full">
<View className=" bg-[#232323] rounded-b-xl">
</View>
<FlatList
pagingEnabled
data={data}
renderItem={renderDay}
initialScrollIndex={data.currWeek}
getItemLayout={getItemLayout}
initialNumToRender={1}
windowSize={3}
/>
</View>
</SafeAreaView>
</View>
```
_layout.tsx:
```
<SafeAreaProvider initialMetrics={initialWindowMetrics}>
<Tabs screenOptions={{headerShown: false}}>
<Tabs.Screen
name='index'
options={{
title: 'Timetable',
tabBarIcon: ({color}) => <FontAwesome size={28} name="table" color={color}/>,
}}
/>
<Tabs.Screen
name='courses'
options={{
title: 'Courses',
tabBarIcon: ({color}) => <FontAwesome size={28} name="table" color={color}/>,
}}
/>
</Tabs>
</SafeAreaProvider>
```
When I read the SafeAreaInsets, the bottom padding is only 34 pixels...
Anybody with a clue?
I had this problem with the SafeAreaView from react-native; that's why I migrated to Expo Router, which said it was implemented by default...
> Azure Application Gateway Bypass
The Network Security Group attached to the VM's network interface or subnet must allow inbound traffic on the port that your website is using (typically port 80 for HTTP and 443 for HTTPS).
This can be achieved in the following steps.
***Assign a Public IP Address to Your VM:***
Before modifying the NSG, ensure your VM has a public IP address. If not, you must create one and associate it with your VM's network interface.

Navigate to the Azure Portal: Log in to your Azure Portal.
Find the NSG: Locate the Network Security Group associated with your VM and click on the relevant NSG attached to its network interface or subnet.
To create an inbound security rule, select "Any" to allow traffic from all IP addresses or define a specific range to limit access. Typically, you can leave the port setting as "*" to indicate any port, unless a specific source port is required. Choose "IP Addresses" and enter your VM's private IP if it isn't already filled in.
For HTTP traffic, input "80" or use "443" for HTTPS traffic, or specify another port used by your application. Opt for "TCP" as the protocol for HTTP/HTTPS. Set the action to "Allow" to permit the traffic. Assign a priority to the rule, remembering that lower numbers signify higher priority. Confirm that this priority is set lower than any existing block rules to prevent conflicts.

After configuring the NSG, you should test to see if the VM is accessible from the internet using the public IP. Remember, changes to NSG rules can take a few minutes to become effective.
**Note:** Ensure that the Windows Firewall on the VM is configured to allow inbound traffic on the required ports (again, typically 80 and 443).
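If you need to script this instead of clicking through the portal, the same inbound rule can be expressed as a parameter payload for the `azure-mgmt-network` Python SDK. A minimal sketch; the rule name, priority, port, and resource names below are placeholders for your own environment:

```python
# Sketch: the inbound allow rule described above, as an SDK parameter payload.
# Port, priority, and all names are placeholder assumptions.

def build_http_rule(port="80", priority=310):
    """Inbound rule allowing TCP traffic on the given port from any source."""
    return {
        "protocol": "Tcp",
        "direction": "Inbound",
        "access": "Allow",
        "priority": priority,            # lower number = evaluated first
        "source_address_prefix": "*",    # or a specific CIDR to limit access
        "source_port_range": "*",
        "destination_address_prefix": "*",
        "destination_port_range": port,  # "443" for HTTPS
    }

# With azure-identity and azure-mgmt-network installed, the rule can be
# applied roughly like this:
#   client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)
#   client.security_rules.begin_create_or_update(
#       resource_group, nsg_name, "allow-http", build_http_rule()).result()
```

As in the portal, make sure the chosen priority number is lower than any existing deny rule that would otherwise block the traffic.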
***reference:***
https://learn.microsoft.com/en-us/azure/virtual-network/tutorial-filter-network-traffic
https://learn.microsoft.com/en-us/azure/virtual-network/manage-network-security-group?tabs=network-security-group-portal |
null |
I have a sequence of azimuth angles ranging from 0 to 180, then from -170 to -10, totaling 36 angles. In some cases, certain azimuth angles may be unavailable. I aim to choose **N** points from the available set and determine the selection that maximizes the sum of distances between these points. For instance, if all points are valid, selecting N=4 would yield 0, 90, 180, and -90, ensuring maximum distance between them.
How can I find a solution for **N** points in a given valid set of angles that achieves the maximum distance?
Maybe it can be solved with an auction algorithm.
{"OriginalQuestionIds":[17408769],"Voters":[{"Id":466862,"DisplayName":"Mark Rotteveel","BindingReason":{"GoldTagBadge":"java"}}]} |
I have a program that takes the items from a CSV into strings and searches a given area for those files; if found, it zips them up and places them wherever the user chooses.
For whatever reason, a specific file that I KNOW it can pick up is just being skipped over.
The filename that is missing is FP14427P-PL_2.
The CSV contains FP14427P-PL without the _2, but I know it can pick it up, as the search is not strict about matching the text exactly.
code below:
```
import tkinter as tk
from tkinter import ttk, filedialog
import os
import zipfile
from threading import Thread
class FileSearchApp:
def __init__(self, master):
self.master = master
self.master.title("File Search App")
self.master.geometry("400x700") # Set the initial size of the window (increased height)
style = ttk.Style()
style.configure("TButton",
font=("Arial", 12),
padding=10,
foreground="black", # Black text color
background="#4CAF50") # Green background color
self.csv_label = tk.Label(master, text="Select CSV file:")
self.csv_label.pack(pady=10)
self.csv_browse_button = ttk.Button(master, text="Browse CSV", command=self.browse_csv)
self.csv_browse_button.pack(pady=10)
self.path_label = tk.Label(master, text="Select search path:")
self.path_label.pack(pady=10)
self.path_browse_button = ttk.Button(master, text="Browse Path", command=self.browse_path)
self.path_browse_button.pack(pady=10)
self.save_label = tk.Label(master, text="Select save location for zip:")
self.save_label.pack(pady=10)
self.save_entry = tk.Entry(master, width=40)
self.save_entry.pack(pady=10)
self.save_browse_button = ttk.Button(master, text="Browse Save Location", command=self.browse_save_location)
self.save_browse_button.pack(pady=10)
self.search_button = ttk.Button(master, text="Search", command=self.start_search)
self.search_button.pack(pady=10)
self.progress_label = tk.Label(master, text="")
self.progress_label.pack(pady=5)
self.progress_bar = ttk.Progressbar(master, orient="horizontal", length=200, mode="indeterminate")
self.progress_bar.pack(pady=10)
self.result_label = tk.Label(master, text="")
self.result_label.pack(pady=10)
self.csv_path = ""
self.search_path = ""
self.save_location = ""
def browse_csv(self):
file_path = filedialog.askopenfilename(filetypes=[("CSV Files", "*.csv")])
if file_path:
self.csv_path = file_path
self.csv_label.config(text=f"Selected CSV file: {os.path.basename(file_path)}")
def browse_path(self):
directory_path = filedialog.askdirectory()
if directory_path:
self.search_path = directory_path
self.path_label.config(text=f"Selected search path: {directory_path}")
def browse_save_location(self):
save_location = filedialog.askdirectory()
if save_location:
self.save_location = save_location
self.save_entry.delete(0, tk.END)
self.save_entry.insert(0, save_location)
def start_search(self):
self.result_label.config(text="")
self.progress_label.config(text="Searching...")
self.progress_bar.start()
search_thread = Thread(target=self.search_and_zip)
search_thread.start()
def search_and_zip(self):
try:
if not self.csv_path or not self.search_path or not self.save_location:
self.show_result("Please select CSV file, search path, and save location.")
return
search_query = self.search_path
if not search_query:
self.show_result("Please enter a search path.")
return
# Read CSV file and extract filenames
with open(self.csv_path, 'r') as csv_file:
csv_items = {line.strip() for line in csv_file.readlines() if line.strip()}
matches = []
not_found = set(csv_items) # To store names of files not found, initialize with all expected names
for root, _, files in os.walk(search_query):
for file in files:
for item in csv_items:
if item.lower() in file.lower():
matches.append(os.path.join(root, file))
not_found.discard(item)
break
if matches:
zip_filename = os.path.join(self.save_location, 'found_files.zip')
with zipfile.ZipFile(zip_filename, 'w') as zip_file:
for match in matches:
zip_file.write(match, os.path.relpath(match, search_query))
result_text = f"Found {len(matches)} matching files.\n"
result_text += f"Zip file created: {zip_filename}"
self.show_result(result_text)
else:
self.show_result("No matching files found.")
if not_found:
not_found_filename = os.path.join(self.save_location, 'not_found_files.txt')
with open(not_found_filename, 'w') as not_found_file:
not_found_file.write("\n".join(not_found))
print(f"List of not found files saved to: {not_found_filename}")
self.show_result(f"List of not found files saved to: {not_found_filename}")
# Print out filenames for debugging
print("Matched files:")
for match in matches:
print(match)
except Exception as e:
error_message = f"An error occurred: {str(e)}"
print(error_message) # Print error message for debugging
self.show_result(error_message)
finally:
self.progress_bar.stop()
self.progress_label.config(text="")
def show_result(self, message):
self.result_label.config(text=message)
def main():
root = tk.Tk()
app = FileSearchApp(root)
root.mainloop()
if __name__ == "__main__":
main()
```
I've tried debugging to see if maybe there is something weird going on with the regex, or if maybe it's because the name has a hyphen in it, but I'm just not quite sure.