React having to double-click |
|reactjs| |
In a Java EE web application project, there is a DAO annotated as a CDI bean:
```java
@RequestScoped
public class CustomerDAO {

    @PersistenceContext
    private EntityManager em;

    // some persistence operation afterwards
    @Transactional
    public void update() {
        // implementation using the injected em
    }
}
```
The injected `EntityManager` is not thread-safe according to the JPA spec, but the **question** is:
Is the injection of `EntityManager` into this `@RequestScoped` CDI bean thread-safe? If it is not thread-safe, what potential concurrency issues might there be? |
Is EntityManager injected with @PersistenceContext to a @RequestScoped CDI bean thread-safe? |
I want a cron job to run every hour that updates a table in my database.
I have a column called "is_premium" and another called "is_premium_until". If the time stored in "is_premium_until" has passed, then "is_premium" changes from "Premium" to "Basic".
I'm struggling with the logic; can anyone help?
```
<?php
$servername = "*";
$dbusername = "*";
$dbpassword = "*";
$dbname = "*";
$con = mysqli_connect($servername, $dbusername, $dbpassword, $dbname);
$query = "SELECT is_premium_until FROM users";
$result = $con->query($query);
$user = $result->fetch_assoc();
if (strtotime($user["is_premium_until"]) <= time()) {
$query2 = "UPDATE is_premium='Basic'";
$query_run = mysqli_query($con, $query2);
}
else {
    die();
}
?>
```
Various ways of doing it, but I can't seem to get the logic in order. |
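For what it's worth, this kind of job usually doesn't need a per-row check in PHP at all: a single set-based `UPDATE` lets the database do the comparison for every user at once. Below is a sketch of the idea using Python's `sqlite3` purely for illustration (the table and column names are taken from the question; in MySQL, assuming `is_premium_until` is a DATETIME column, the equivalent would be `UPDATE users SET is_premium='Basic' WHERE is_premium='Premium' AND is_premium_until <= NOW()` run via `mysqli_query`):

```python
import sqlite3

# In-memory stand-in for the users table (column names taken from the question).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, is_premium TEXT, is_premium_until INTEGER)")
con.execute("INSERT INTO users VALUES (1, 'Premium', 0), (2, 'Premium', 9999999999)")

# One set-based UPDATE instead of fetching and checking rows one by one.
con.execute(
    "UPDATE users SET is_premium = 'Basic' "
    "WHERE is_premium = 'Premium' "
    "AND is_premium_until <= CAST(strftime('%s', 'now') AS INTEGER)"
)
rows = con.execute("SELECT id, is_premium FROM users ORDER BY id").fetchall()
print(rows)  # user 1 has expired, user 2 is still premium
```

This also sidesteps the bug in the snippet above, which only ever fetches and checks the first row of the result set.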
My program has a main while loop for the main logic, with an inner while loop that runs the logic of the "command function". When inside the inner while loop, I want EOF (<kbd>Ctrl</kbd> + <kbd>D</kbd>) to exit from the inner while loop only and continue with the outer loop (to listen for more commands). However, it's exiting from both the inner "command" loop and the outer main while loop.
```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class Main {
    public static void main(String[] args) {
        String line;
        BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
        try {
            while ((line = reader.readLine()) != null) {
                if (line.compareTo("echo") == 0) {
                    while ((line = reader.readLine()) != null)
                        System.out.println("echo: " + line);
                    // Ctrl+D (EOF) will exit loop
                    System.out.println("Stop echo-ing");
                }
                else System.out.println("Cmd not recognized");
                // No EOF given, should not exit this loop
            }
            System.out.println("Exit main loop");
        } catch (IOException e) {
            System.err.println(e.getMessage());
        }
    }
}
```
To replicate problem:
1. Copy and paste code, and run it
2. Type `echo` to enter into inner while loop
3. Press <kbd>Ctrl</kbd> + <kbd>D</kbd> to provide EOF to standard input
The following is printed:
```shell
^D
Stop echo-ing
Exit main loop
```
"Exit main loop" is printed unexpectedly.
How can I exit only the inner while loop when <kbd>Ctrl</kbd> + <kbd>D</kbd> is pressed? |
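(Background that may explain the behaviour: an EOF condition on standard input is sticky. Once the stream is exhausted, every subsequent read reports EOF again, so after the inner loop sees `null`, the outer loop's very next `readLine()` also returns `null`. A small Python sketch of the same stream behaviour, using `io.StringIO` as a stand-in for `System.in`:)

```python
import io

stream = io.StringIO("echo\nline 1\nline 2\n")  # stand-in for System.in
while stream.readline():
    pass                           # drain everything, reaching EOF
print(repr(stream.readline()))     # '' -> EOF
print(repr(stream.readline()))     # '' again -> EOF is permanent, the outer loop sees it too
```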
When trying to minimise an objective through CVXPY, I have two different optimisation problems. When a parameter alpha is set to 0, both these objectives should give the same minimisation results. But for me it gives two different results.
These are the two problems
Problem 1 :
```
w = cp.Variable(shape = m)
alpha = cp.Parameter(nonneg=True)
w_sb = w[some_edge_indices]
w_ob = w[other_edge_indices]
MKw = MK @ w
MKsbwb = MK_sb @ w_sb
MKobwb = MK_ob @ w_ob
MKswm = MK_some @ w_some
MKowm = MK_other @ w_other
alpha.value = alph
obj1 = cp.sum_squares(MKw)
obj2 = cp.sum_squares(MKsbwb - MKswm)
obj3 = cp.sum_squares(MKobwb - MKowm)
reg = obj2 + obj3
objective = cp.Minimize(obj1 + alpha*(reg))
constraints = [AK@w >= np.ones((n,))]
prob = cp.Problem(objective, constraints)
result = prob.solve()
```
Consider all the unknown variables to be given matrices. Also, `alph` is a given value.
Problem 2:
```
w = cp.Variable(shape = m)
MKw = MK @ w
obj1norm = cp.sum_squares(MKw)
objective = cp.Minimize(obj1norm)
constraints = [AK@w >= np.ones((n,))]
prob = cp.Problem(objective, constraints)
result = prob.solve()
```
Here, as we can see when alpha = 0, both the objectives should return the same w. But it is giving different w values. What could be the reason? |
CVXPY : Minimising with parameter set to 0 and minimising without parameter gives different answers |
|python|mathematical-optimization|cvxpy| |
I am running `flutter build apk` command in my flutter project and receiving the error below
```
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':generateReleaseBuildConfig'.
> Error while evaluating property 'applicationId' of task ':generateReleaseBuildConfig'
> Failed to calculate the value of task ':generateReleaseBuildConfig' property 'applicationId'.
> Failed to query the value of property 'applicationId'.
> Manifest file does not exist: C:\Repos\yousafe\android\src\main\AndroidManifest.xml
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 8s
Running Gradle task 'assembleRelease'... 9.2s
Gradle task assembleRelease failed with exit code 1
```
However, the AndroidManifest.xml file exists at the said location in the Android\app\src\main directory and there's nothing wrong with it.
I could also share the settings.gradle and build.gradle files, but I am sure there's nothing wrong with them.
I ran Flutter Doctor and this was the output;
```
$ flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[√] Flutter (Channel stable, 3.16.7, on Microsoft Windows [Version 10.0.22635.3350], locale en-US)
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[√] Chrome - develop for the web
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.8.5)
[!] Android Studio (not installed)
[√] VS Code (version 1.87.2)
[√] Connected device (3 available)
[√] Network resources
! Doctor found issues in 1 category.
```
and this one issue is because I prefer VS Code to Android Studio, so I uninstalled the latter. Everything else seems to check out. I would appreciate any pointers as to what exactly I am missing.
|
```
SELECT OBJECTID, ACTIVITYID, STATUS,LASTUPDATEDATE,DATAMGMTPOLICY,DCTCOMMAND,ACTIVITYTYPE
FROM ( SELECT OBJECTID, ACTIVITYID, STATUS,LASTUPDATEDATE, DATAMGMTPOLICY,DCTCOMMAND,ACTIVITYTYPE
FROM (SELECT CASE
WHEN DAH.STATUS = 'I' THEN 1
WHEN DAH.STATUS = 'Q' AND DAC.ACTIVITYTYPE = 'U' AND DAH.SCHEDDATETIME < now()::timestamp(0) at time zone 'utc' THEN 2
WHEN DAH.STATUS = 'R' AND V_ISCOMMUNENABLED = 'T' THEN 3
WHEN DAH.STATUS = 'Q' AND DAC.ACTIVITYTYPE = 'F' AND DAH.SCHEDDATETIME < now()::timestamp(0) at time zone 'utc' AND V_ISCOMMUNENABLED = 'T' THEN 4
WHEN DAH.STATUS = 'Q' AND DAC.ACTIVITYTYPE = 'C' AND DAH.SCHEDDATETIME < now()::timestamp(0) at time zone 'utc' AND V_ISCOMMUNENABLED = 'T' THEN 5 END
AS ACTIVITYORDER,DAH.DCTOID,DAH.OBJECTID,DAH.STATUS,DAH.ACTIVITYID,DAH.LASTUPDATEDATE,DEC.DATAMGMTPOLICY,DEC.DCTCOMMAND,DAC.ACTIVITYTYPE
FROM ACTIVITYHEADER DAH
INNER JOIN ACTIVITYCONFIG DAC
ON (DAH.ACTIVITYID = DAC.ACTIVITYID and DAC.vkey = 'X')
LEFT OUTER JOIN EVENTCONFIG DEC
ON (EVENTCONFIGOID = DEC.OBJECTID and DEC.vkey = 'X')
WHERE DCTOID = 1056969173 and DAH.vkey = 'X' ) INLINEVIEW_2
WHERE ACTIVITYORDER IS NOT NULL
ORDER BY ACTIVITYORDER) FIRSTROW LIMIT 1;
Here are the indexes
"ACTIVITYHEADER.activityheader_idx8" btree (vkey, dctoid, status, activityid, scheddatetime)
"ACTIVITYCONFIG.eventconfig_idx1" btree (vkey, activityid)
"ACTIVITYCONFIG.activityconfig_pk" PRIMARY KEY, btree (vkey, activityid)
"EVENTCONFIG.eventconfig_idx1" btree (vkey, activityid)
```
The query above is part of a plpgsql function. Most of the time is being spent on the CASE expression according to the plan, which feeds the filter condition "ACTIVITYORDER IS NOT NULL".
ACTIVITYHEADER has 23.5 million rows,
ACTIVITYCONFIG has 1.2 million rows, and
EVENTCONFIG has 223552 rows.
- The indexes above are the ones being used in the plan. How can this query be optimized?
My thinking is this: if we can push the CASE filter further down the tree, the work spent reading and processing those rows might be saved.
Here is the plan:
https://explain.depesz.com/s/3YLt#stats |
PostgreSQL query performance improvement |
|postgresql| |
Just find another way to do it. `window.getSelection()` is frustratingly poorly designed. It should have been a text-based function, not a node-based function, from day 1.
Let's take something really basic: you want to highlight some text and make that section bold. Great, `window.getSelection()` is fantastic for that... once.
However, since you also want to make part of that text italic... well... now you're stuck. The text you want to highlight is now divided between two or potentially more nodes, so you can't look up the selected phrase in its entirety; you may have several pieces of text that are the same, so looping through the nodes won't help you find where to apply your edits; and the starting point given by `window.getSelection()` only applies to the text, not the formatting... it just becomes a fantastic mess.
Don't use it. It's garbage from an era where bad programmers saw it as their duty to dictate what browser you use or how you should highlight text, and if something was buggy or wrong it was the user's fault for not doing things the way the programmer wanted.
Code something better yourself from scratch instead.
Maybe even something like putting all the letters in an array where each item is a letter, and then deciding the start and end points with onmousedown/onmouseup instead, if you have to.
But this is woefully unfit for 99% of everything you want it to do. |
I'm trying to build a counter with an STM32F103C8T6 and watch the counting variable's value using the Keil uVision debugger.
When I press the button, the variable's value changes, but after that it changes back to zero and doesn't stay saved.
I don't know what to do.
I think that Keil writes the code to the micro repeatedly, so the variable's value doesn't persist, or it doesn't let the micro run the delay that lets the LED blink.
I wrote LED-blinking code and ran it on the micro and it works properly, but when I use debugging to test this code, the code doesn't work and the LED doesn't blink; it is always on. |
keil debugging mode doesn't work properly |
|hex| |
Another option is to define the environment variables of the uvicorn app and get them through Python. I recommend [this amazing tutorial](https://fastapi.tiangolo.com/advanced/settings/#environment-variables), which will explain it better than me. But if you are too lazy to read or prefer a TL;DR explanation, here it goes:
You can configure environment variables with `ADMIN_EMAIL="deadpool@example.com" APP_NAME="ChimichangApp" uvicorn main:app` (for example). And then get the variable from your Python app using:
```Python
import os
name = os.getenv("ADMIN_EMAIL")
print(f"The email is {name}")
```
However, these are not best practices. First of all, it would be better to define the env variables in a `*.env` file and use it with `uvicorn main:app --env-file <my-filename>.env`. This env file will look like this:
```
ADMIN_EMAIL="deadpool@example.com"
APP_NAME="ChimichangApp"
```
Secondly, it would be better to use pydantic to get the env variables. |
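For a flavor of what that buys you, here is a stdlib-only sketch of the same settings-object pattern that pydantic's `BaseSettings` implements (pydantic adds type coercion and validation on top; the variable names are the ones from the example above):

```python
import os
from dataclasses import dataclass, field

@dataclass
class Settings:
    # Read each value from the environment once, with a default as fallback
    # (variable names follow the example above).
    admin_email: str = field(default_factory=lambda: os.getenv("ADMIN_EMAIL", "unset"))
    app_name: str = field(default_factory=lambda: os.getenv("APP_NAME", "DefaultApp"))

os.environ["APP_NAME"] = "ChimichangApp"  # would normally come from the shell or the .env file
settings = Settings()
print(settings.app_name)
```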
I have created my schema and my resolver class, and I implemented GraphQLQueryResolver, but it is not mapping it: [[enter image description here](https://i.stack.imgur.com/0MxRr.jpg)](https://i.stack.imgur.com/B5sGX.jpg)
I tried injecting some dependencies like GraphQL Java Kickstart, and I annotated the method in my resolver class with @QueryMapping, and it's still not working; when I test everything in my GraphiQL interface it returns null. |
GraphQL and springboot resolver mapping problem |
|java|spring-boot|graphql| |
```
try:
    something = response.json['field_name']
except (TypeError, KeyError):
    something = None
``` |
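If the parsed body is a plain dict, a lookup with a default avoids the exception machinery altogether; a small sketch (here `payload` is a stand-in for the parsed JSON body):

```python
payload = {"other_field": 1}            # stand-in for the parsed JSON body
something = payload.get("field_name")   # None instead of a KeyError
print(something)
```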
You get the error because the `JsonDeserializer` uses the constructor in your `AddSkillRequest` class, and the sent JSON request doesn't match the parameters of that constructor.
[Parameterized constructors][1]
> For a class, if the only constructor is a parameterized one, that constructor will be used.
You need to define a constructor that matches your JSON request body parameters for the deserialization with the `[JsonConstructor]` attribute.
```csharp
using System.Text.Json.Serialization;
public class AddSkillRequest
{
[JsonConstructor]
public AddSkillRequest(
JsonDocument? icon,
JsonDocument? thumbnail,
JsonDocument? image,
int? displayOrder,
bool? isActive,
DateTime createdAt,
Guid createdBy,
DateTime updatedAt,
Guid updatedBy)
{
Icon = icon;
Thumbnail = thumbnail;
Image = image;
DisplayOrder = displayOrder;
IsActive = isActive;
CreatedAt = createdAt;
CreatedBy = createdBy;
UpdatedAt = updatedAt;
UpdatedBy = updatedBy;
}
...
}
```
[1]: https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json/immutability#parameterized-constructors |
How to solve Execution failed for task ':generateReleaseBuildConfig'. error in a flutter project |
|android|flutter|gradle|apk| |
These days, the most popular (and very simple) option is the [ElementTree API][3],
which has been included in the standard library since Python 2.5.
The available options for that are:
- ElementTree (Basic, pure-Python implementation of ElementTree. Part of the standard library since 2.5)
- cElementTree (Optimized C implementation of ElementTree. Also offered in the standard library since 2.5. Deprecated and folded into the regular ElementTree as an automatic thing as of 3.3.)
- LXML (Based on libxml2. Offers a rich superset of the ElementTree API as well as XPath, CSS selectors, and more)
Here's an example of how to generate your example document using the in-stdlib ElementTree:
```
import xml.etree.ElementTree as ET

root = ET.Element("root")
doc = ET.SubElement(root, "doc")
ET.SubElement(doc, "field1", name="blah").text = "some value1"
ET.SubElement(doc, "field2", name="asdfasd").text = "some value2"
tree = ET.ElementTree(root)
tree.write("filename.xml")
```
I've tested it and it works, but I'm assuming whitespace isn't significant. If you need "prettyprint" indentation, let me know and I'll look up how to do that. (It may be an LXML-specific option. I don't use the stdlib implementation much)
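(A later note: since Python 3.9 the stdlib can pretty-print on its own via `ET.indent`, so LXML is no longer required for that:)

```python
import xml.etree.ElementTree as ET

root = ET.Element("root")
doc = ET.SubElement(root, "doc")
ET.SubElement(doc, "field1", name="blah").text = "some value1"
ET.indent(root)  # stdlib pretty-printing, available since Python 3.9
xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```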
For further reading, here are some useful links:
- [API docs for the implementation in the Python standard library][3]
- [Introductory Tutorial][1] (From the original author's site)
- [LXML etree tutorial][2]. (With example code for loading the best available option from all major ElementTree implementations)
[1]: https://web.archive.org/web/20201124024954/http://effbot.org/zone/element-index.htm
[2]: https://lxml.de/tutorial.html
[3]: https://docs.python.org/3/library/xml.etree.elementtree.html
As a final note, either cElementTree or LXML should be fast enough for all your needs (both are optimized C code), but in the event you're in a situation where you need to squeeze out every last bit of performance, the benchmarks on the LXML site indicate that:
- LXML clearly wins for serializing (generating) XML
- As a side-effect of implementing proper parent traversal, LXML is a bit slower than cElementTree for parsing. |
We first define two helper functions: `calculateWeights` corresponds to your first four lines of provided code and `getResidualData` reflects the fifth line.
```
library(dplyr)
library(tidyr)
library(ape)
calculateWeights <- function(df, site) {
df |>
filter(Site == site) |>
select(x_coord, y_coord) |>
dist() |>
as.matrix() |>
(\(.) 1 / replace(., . == 0, 1))()
}
getResidualData <- function(df, site, t) {
  df |>
    filter(Site == site) |>
    select(ends_with(paste0("_", t))) |>
    pull()
}
```
Then your desired result can be calculated like this:
```
res <- lapply(unique(data$Site), function(site) {
weight <- calculateWeights(data, site)
resSite <- data.frame()
lapply(1:(sum(grepl(
'Taxa', colnames(data)
))), function(t) {
x <- getResidualData(Resid_data, site, t)
rbind(resSite, list(
site = site,
t = t,
val = Moran.I(x, weight)$observed
))
}) |> bind_rows()
}) |> bind_rows() |>
mutate(Treatment = case_match(substring(site, 1, 1), "I" ~ "Inside", "O" ~ "Outside")) |>
pivot_wider(names_from = t,
names_glue = "MoranI_Taxa_{t}",
values_from = val)
```
This would for example look like
```
> res
# A tibble: 2 × 12
site Treatment MoranI_Taxa_1 MoranI_Taxa_2 MoranI_Taxa_3 MoranI_Taxa_4
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 I1 Inside -0.393 -0.136 0.0111 0.0191
2 O2 Outside -0.116 0.262 -0.153 -0.639
# ℹ 6 more variables: MoranI_Taxa_5 <dbl>, MoranI_Taxa_6 <dbl>, MoranI_Taxa_7 <dbl>,
# MoranI_Taxa_8 <dbl>, MoranI_Taxa_9 <dbl>, MoranI_Taxa_10 <dbl>
```
using the sample data
```
> dput(data)
structure(list(Sample = c("I1S1", "O2S1", "O2S2", "O2S3", "O2S4",
"I1S2", "I1S3", "I1S4"), Site = c("I1", "O2", "O2", "O2", "O2",
"I1", "I1", "I1"), Treatment = c("Inside", "Outside", "Outside",
"Outside", "Outside", "Inside", "Inside", "Inside"), x_coord = c(140,
141, 141.1, 139.9, 139.4, 141.2, 140.5, 139.8), y_coord = c(-29,
-28, -28.1, -28.5, -29.1, -28.9, -28.3, -29.2), Abundance_Taxa_1 = c(42L,
46L, 24L, 93L, 30L, 45L, 100L, 39L), Abundance_Taxa_2 = c(52L,
85L, 53L, 43L, 97L, 26L, 62L, 6L), Abundance_Taxa_3 = c(58L,
23L, 42L, 41L, 60L, 45L, 82L, 85L), Abundance_Taxa_4 = c(33L,
11L, 45L, 14L, 2L, 98L, 35L, 28L), Abundance_Taxa_5 = c(45L,
16L, 80L, 100L, 8L, 72L, 37L, 87L), Abundance_Taxa_6 = c(10L,
60L, 75L, 91L, 23L, 33L, 86L, 15L), Abundance_Taxa_7 = c(68L,
60L, 10L, 72L, 95L, 92L, 45L, 84L), Abundance_Taxa_8 = c(55L,
48L, 8L, 96L, 3L, 99L, 75L, 13L), Abundance_Taxa_9 = c(18L, 85L,
5L, 31L, 56L, 20L, 82L, 67L), Abundance_Taxa_10 = c(8L, 19L,
10L, 79L, 61L, 12L, 35L, 52L)), class = "data.frame", row.names = c(NA,
-8L))
> dput(Resid_data)
structure(list(Site = c("O2", "I1", "I1", "O2", "I1", "I1", "O2",
"O2"), Abundance_Taxa_1 = c(0.77, -0.68, -0.33, -0.19, -0.39,
0, -0.02, -0.59), Abundance_Taxa_2 = c(-0.45, 0.52, 0.66, -0.87,
-0.4, -0.1, -0.27, 0.5), Abundance_Taxa_3 = c(-0.27, 0.48, 0.68,
0.66, -0.31, 0.79, 0.55, -0.93), Abundance_Taxa_4 = c(0.58, -0.11,
0.89, -0.77, -0.64, -0.43, -0.18, -0.17), Abundance_Taxa_5 = c(-0.9,
-0.1, -0.3, 0.83, 0.05, 0.71, 0.17, 0.31), Abundance_Taxa_6 = c(0.58,
0.79, -0.66, 0.88, 0.97, -0.36, 0.75, 0.21), Abundance_Taxa_7 = c(-0.96,
-0.84, -0.58, 0.92, -0.68, -0.19, -0.34, -0.32), Abundance_Taxa_8 = c(-0.62,
0.97, -0.88, -0.68, 0.19, 0.77, 0.37, -0.23), Abundance_Taxa_9 = c(0.05,
-0.03, 0.23, -0.2, -0.99, -0.14, -0.1, -0.61), Abundance_Taxa_10 = c(0.94,
0.61, 0.76, 0.27, 0.17, -0.34, 0.7, -0.43)), class = "data.frame", row.names = c(NA,
-8L))
```
|
I'm trying to upload a file using my Spring Boot app from a pod to Amazon S3 storage. Last year this setup worked without any problems, but now I'm getting an error
```
java.net.UnknownHostException: custom-bucket-name.s3.eu-west-1.amazonaws.com
```
This Spring Boot app works when I run it locally, but once deployed to a Kubernetes cluster it can't get through. When I tried to ping the above address it worked from my local machine, but when I logged into a pod and tried to ping it I got
```
ping custom-bucket-name.s3.eu-west-1.amazonaws.com
ping: bad address "custom-bucket-name.s3.eu-west-1.amazonaws.com"
```
What's interesting is when I try a shorter bucket name or another long address it works, e.g.:
```
ping short-name.s3.eu-west-1.amazonaws.com
64 bytes from 3.5.62.11: seq=0 ttl=63 time=43.670 ms

ping llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch.co.uk
64 bytes from 109.70.148.44: seq=0 ttl=63 time=51.668 ms
```
**Do you know what could be causing this? Is there any way to fix this other than creating a shorter bucket name?**
PS
env: latest Docker Desktop and Kubernetes
Docker Desktop: 4.27.1 (136059)
Engine: 25.0.2
Kubernetes: 1.29.1 |
I can't connect to S3 from the Kubernetes pod when the bucket name is longer than 14 characters |
|kubernetes|amazon-s3| |
It was resolved by adding the line:
```
response.setCharacterEncoding("ISO-8859-1")
``` |
I have a program that is able to expose itself as a self hosted WCF service. It does this when it's started as a process with certain parameters. In the past I've passed in a port for it to host on but I want to change that so it finds an available port and then returns that to the caller. (It also sets up a SessionBound End Point but that's incidental to this question). It self hosts to do this using the following code:-
```csharp
Uri baseAddress = new Uri("net.tcp://localhost:0/AquatorXVService");
m_host = new ServiceHost(typeof(AquatorXV.Server.AquatorXVServiceInstance), baseAddress);

ServiceMetadataBehavior smb = new ServiceMetadataBehavior();
smb.MetadataExporter.PolicyVersion = PolicyVersion.Policy15;
m_host.Description.Behaviors.Add(smb);
m_host.AddServiceEndpoint(typeof(IMetadataExchange), MetadataExchangeBindings.CreateMexTcpBinding(), "mex");

NetTcpBinding endPointBinding = new NetTcpBinding()
{
    MaxReceivedMessageSize = 2147483647,
    MaxBufferSize = 2147483647,
    MaxBufferPoolSize = 2147483647,
    SendTimeout = TimeSpan.MaxValue,
    OpenTimeout = new TimeSpan(0, 0, 20)
};

ServiceEndpoint endPoint = m_host.AddServiceEndpoint(typeof(AquatorXVServiceInterface.IAquatorXVServiceInstance), endPointBinding, baseAddress);
endPoint.ListenUriMode = ListenUriMode.Unique;

ServiceDebugBehavior debug = m_host.Description.Behaviors.Find<ServiceDebugBehavior>();
if (debug == null)
    m_host.Description.Behaviors.Add(new ServiceDebugBehavior() { IncludeExceptionDetailInFaults = true });
else if (!debug.IncludeExceptionDetailInFaults)
    debug.IncludeExceptionDetailInFaults = true;

m_host.Open();
int port = m_host.ChannelDispatchers.First().Listener.Uri.Port;

// Start the session bound factory service
Uri sessionBoundFactoryBaseAddress = new Uri("net.tcp://localhost:" + port.ToString() + "/AquatorXVSessionBoundFactoryService");
m_sessionBoundFactoryHost = new ServiceHost(typeof(AquatorXV.Server.SessionBoundFactory), sessionBoundFactoryBaseAddress);

ServiceMetadataBehavior smbFactory = new ServiceMetadataBehavior();
smbFactory.MetadataExporter.PolicyVersion = PolicyVersion.Policy15;
m_sessionBoundFactoryHost.Description.Behaviors.Add(smbFactory);
m_sessionBoundFactoryHost.AddServiceEndpoint(typeof(IMetadataExchange), MetadataExchangeBindings.CreateMexTcpBinding(), "mex");
m_sessionBoundFactoryHost.AddServiceEndpoint(typeof(AquatorXVServiceInterface.ISessionBoundFactory), new NetTcpBinding(), sessionBoundFactoryBaseAddress);
m_sessionBoundFactoryHost.Open();

return port;
```
Ultimately, port is set as the return value of Main().
On the client side it starts the program as a process. It then needs to access the port number so it can connect to it. Here's what I've been trying to do:-
```csharp
ProcessStartInfo psi = new ProcessStartInfo(exePath, $"/remotingWCF") { UseShellExecute = false };
mProcess = Process.Start(psi);
mProcess.WaitForInputIdle();
mPort = mProcess.ExitCode;
CreateWCFClient(mPort);
```
The problem is that this fails when trying to access mProcess.ExitCode with a message: InvalidOperationException - Process Must Exit before requested information can be determined.
Googling around suggests that I use WaitForExit instead of WaitForInputIdle but that requires the program to actually be shut down before I can access ExitCode. I need to get the port from the running program.
I *think* this means that I won't be able to use ExitCode as a way to get the port back but I can't find a different mechanism. Can anyone suggest a way I can return a value from Process.Start without waiting for the program to close?
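As a general-purpose alternative to the exit code (a sketch of the IPC pattern only, not of the WCF specifics): have the child write the chosen port to its standard output and have the parent read a single line without waiting for exit. In .NET the same shape is `ProcessStartInfo.RedirectStandardOutput = true` plus `mProcess.StandardOutput.ReadLine()`; the Python sketch below uses a dummy child process to show the handshake:

```python
import subprocess
import sys

# Child: prints its "port", then stays alive serving until its stdin closes.
child_code = "import sys; print(12345); sys.stdout.flush(); sys.stdin.read()"

proc = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
port = int(proc.stdout.readline())  # available as soon as the child prints, no exit needed
print(port)
proc.stdin.close()                  # signal the child to shut down
proc.wait()
```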
|
How to return the port from a program that self hosts as a WCF service |
|process.start| |
I tried removing 'fa-meh-blank'. When I clicked it worked, but when I clicked again my icon disappeared completely.
My new JavaScript:
```
icone.addEventListener('click', function(){
    //console.log('icon clicked');
    icone.classList.toggle('happy');
    icone.classList.toggle('fa-meh-blank');
    icone.classList.toggle('fa-smile-wink');
});
```
|
When installing packages with Yarn Berry (v3.6.3), I encountered an error where types couldn't be imported.
Although there were no issues in VS Code, when running the React app with `yarn start`, the following error occurred:
"Module not found: Error: Package path ./dist/types is not exported from package /Users/user/project/.yarn/cache/class-variance-authority-npm-0.7.0-1a63840197-e7fd1fab43.zip/node_modules/class-variance-authority..."
Below is the problematic code.
```
import { ClassValue } from 'class-variance-authority/dist/types';
```
When installed via npm, I could directly import types from 'class-variance-authority', but with yarn berry, I had to go into the dist/types folder to import types. Though, an error occurred when executing the code.
Environment
yarn: 3.6.3
npm: 9.6.7
node: 18.17.1
typescript: 4.4.2
|
Yarn berry can't find type in module |
Exit inner loop only when EOF (Ctrl+D) is given via standard input |
The way to do this is actually easy, but requires some manual steps. Here are the steps for Linux:
1. You create a network:
```
$ docker network create docker_airgap_network
bb7645298697d420dab8eaf1fd4738fccceb8230c783aa93ebe76abec6c40f41
```
2. Fetch the created bridge interface name:
```
$ export IFACE=br-$(docker network inspect docker_airgap_network | jq -r '.[0].Id | .[:12]')
The above will output something like: br-bb7645298697
```
3. Verify this interface is actually there:
```
$ ip -br l | grep -i "$IFACE"
br-bb7645298697 DOWN 01:43:e3:a2:5f:9d <BROADCAST,MULTICAST,DOWN,LOWER_UP>
```
4. Add an `iptables` rule to drop traffic outside the docker subnet
```
$ iptables -I DOCKER-USER -i $IFACE ! -d 172.0.0.0/8 -j DROP
$ iptables -nvL DOCKER-USER
Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
48 4032 DROP 0 -- br-bb7645298697 * 0.0.0.0/0 !172.0.0.0/8
406 34104 0 -- * * 0.0.0.0/0 0.0.0.0/0
430 36120 RETURN 0 -- * * 0.0.0.0/0 0.0.0.0/0
```
5. Bring up your `docker-compose.yml` file with `docker-compose up`:
```
version: '3'
services:
service1:
image: alpine
command: /bin/sh
tty: true
networks:
- docker_airgap_network
service2:
image: alpine
command: /bin/sh
tty: true
networks:
- docker_airgap_network
networks:
docker_airgap_network:
external: true
```
6. Test
```
$ docker-compose exec -it service1 ping -c 1 -W 1 google.com
PING google.com (142.250.184.142): 56 data bytes
--- google.com ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss
$ docker-compose exec -it service1 ping -c 1 -W 1 service2
PING service2 (172.26.0.2): 56 data bytes
64 bytes from 172.26.0.2: seq=0 ttl=64 time=0.077 ms
--- service2 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.077/0.077/0.077 ms
$ docker-compose exec -it service2 ping -c 1 -W 1 google.com
PING google.com (142.250.184.142): 56 data bytes
--- google.com ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss
$ docker-compose exec -it service2 ping -c 1 -W 1 service1
PING service1 (172.26.0.3): 56 data bytes
64 bytes from 172.26.0.3: seq=0 ttl=64 time=0.054 ms
--- service1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.054/0.054/0.054 ms
``` |
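As a sanity check on the rule's logic: the `! -d 172.0.0.0/8` exception in step 4 is what lets the container-to-container pings succeed while everything else is dropped. The container addresses seen in the test output fall inside that range; the external address does not:

```python
import ipaddress

allowed = ipaddress.ip_network("172.0.0.0/8")   # the range exempted from the DROP rule
print(ipaddress.ip_address("172.26.0.2") in allowed)        # service2 address: not dropped
print(ipaddress.ip_address("142.250.184.142") in allowed)   # google.com address: dropped
```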
After deploying to webhosting, it throws me an error on the /graphiql route to the console:
```
Mixed Content: The page at 'https://myweburl.com/graphiql/' was loaded over HTTPS, but requested an insecure resource 'http://myweburl.com/graphql/'. This request has been blocked; the content must be served over HTTPS.
```
I have the `/config/lighthouse.php` file by default
I don't know why lighthouse is asking me for an unsecured url |
Mixed Content HTTP / HTTPS - (GraphQL) Lighthouse is requesting an unsecured http url |
I see two methods for training the model:
- training on one picture at a time, with my true prediction of the decision
- training on one big picture containing a set of pictures, and continuing to train on such sets with my true prediction of the decision

The predict input may be one picture or a set.
If you have suggestions, it would be nice to hear your best practice for:
how to properly provide training input
how to properly provide predict input
Right now I keep training my model with sets and big separate images as inputs.
I receive answers based on one picture or sets as predict input. |
Should I train my model with a set of pictures as one input or crop them into small ones? | PyTorch | Machine Learning |
|python|machine-learning|pytorch| |
The flow is basically as follows:
1. Function ```addTaskToDb()``` writes HTML form data to the spreadsheet (e.g. Task 3);
2. ```google.script.run.withSuccessHandler(_ => loadClientTasks(selectedClient)).newTask(task);``` will load updated tasks onto the active HTML page (e.g. Tasks: 1, 2 and **3**);
3. ```google.script.run.updateFilesWithTask('new', selectedAgency, selectedClient, task);``` is supposed to get the last task added to the spreadsheet and copy it into other files (it should be picking Task 3, but it's getting Task 2). It gets it from the spreadsheet because numbering is applied to that task there.
```
function addTaskToDb() {
var formElements = document.getElementById("form").elements;
var postData = [];
for (var i = 0; i < formElements.length; i++) {//Converts checkboxes' status to sheets
if (formElements[i].type != "submit" && formElements[i].type != 'checkbox') {
postData.push(formElements[i].value);
} else if (formElements[i].type == 'checkbox' && formElements[i].checked == true) {
postData.push(formElements[i].checked);
} else if (formElements[i].type == 'checkbox' && !formElements[i].checked) {
postData.push('false');
}
}
let timeStamp = new Date();
timeStamp = timeStamp.toString();
const agencyPartner = document.getElementById('agencySelect');
const selectedAgency = agencyPartner.options[agencyPartner.selectedIndex].text;
const client = document.getElementById('clientSelect');
const selectedClient = client.options[client.selectedIndex].text;
let dateAssigned = postData[1].toString();
const item = postData[0];
const link = postData[2];
const notes = postData[3];
const requestApproval = postData[4];
let task = [];
task.push(timeStamp, selectedAgency, selectedClient, '', '', dateAssigned, item, link, notes, '', requestApproval, '', '', '')
google.script.run.withSuccessHandler(_ => loadClientTasks(selectedClient)).newTask(task);
google.script.run.updateFilesWithTask('new', selectedAgency, selectedClient, task);
document.getElementById("form").reset();
}
```
I've tried using ```Utilities.sleep(3000);``` within ```updateFilesWithTask()```, but it didn't work.
|
I'm trying to get the other input values, but they're always blank. Here's the code I got:
```
function savePo() {
let table = document.getElementById("dtable");
let [, ...tr] = table.querySelectorAll("tr");
let tableData = [...tr].map(r => {
let td = r.querySelectorAll("td");
return [...td].map((c, j) => j == 9 ? c.querySelectorAll('input[type="checkbox"]')[0].checked : j === 8 ? c.innerText : c.querySelectorAll('input').value)
});
console.log('Table Data: ' + tableData);
}
```
This is the [Fiddle](https://jsfiddle.net/santosonit/1ghryfz9/39/#&togetherjs=C4Vy6oHp2f) in case you feel like putting a finger on the issue.
|
|reactjs|typescript|hadoop-yarn|yarn-berry| |
> CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing `CUDA_LAUNCH_BLOCKING=1`.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
I tried entering the command `export CUDA_LAUNCH_BLOCKING=1` in bash, and it worked. However, I am not sure whether this affects GPU efficiency or not. Can I change some setting to make it the default, so that I do not have to type this command line every time?
[![enter image description here][1]][1]
In my case Mysql could not start due to the below issue (mysql_error.txt):
> 2024-03-27 19:42:10 0 [ERROR] mysqld.exe: Aria recovery failed. Please
> run aria_chk -r on all Aria tables and delete all aria_log.########
> files 2024-03-27 19:42:10 0 [ERROR] Plugin 'Aria' registration as a
> STORAGE ENGINE failed.
**Solution:**
1. Open Shell from xampp control panel.
2. Run `aria_chk -r`.
3. Delete all `aria_log.########` files.
**After following the above steps:**
[![enter image description here][2]][2]
[1]: https://i.stack.imgur.com/CYMlH.png
[2]: https://i.stack.imgur.com/Fgrnb.png |
Try using delegates!
protocol FileImporterDelegate: AnyObject {
func fileImporter(_ importer: FileImporter,
shouldImportFile file: File) -> Bool
func fileImporter(_ importer: FileImporter,
didAbortWithError error: Error)
func fileImporterDidFinish(_ importer: FileImporter)
}
class FileImporter {
weak var delegate: FileImporterDelegate?
} |
I want to remove this border (regardless of what theme is used).
Other borders are removed and only this remains:
[![border to remove][1]][1]
[1]: https://i.stack.imgur.com/1uz0q.png |
How can you remove the border from the top tab bar of Visual Studio Code? |
|visual-studio-code| |
It appears that your `GROUP BY` is grouping on the entirety of each `InvoiceHeader` row, while the other table references are only used to calculate sums for details and payments. In that case, I believe it would be simpler to select directly from `InvoiceHeader` only at the top level and use `CROSS APPLY` subqueries to calculate the various sums.
The other thing I see, and this is a *big red flag*, is that your posted query appears to have **multiple independent one-to-many joins**. This will almost never yield the correct result. If an invoice had three details, two cash payments and two cheque payments, the details would be overstated by a factor of 4 and payments would each be overstated by a factor of 6. The fix is to isolate each one-to-many relationship into a separate `CROSS APPLY` and calculate the totals of each independently.
The `DocumentNumber` can also (optionally) be moved to a `CROSS APPLY`, solely for the purpose of reducing clutter in the main select list.
These `CROSS APPLY` results can then be referenced in the top level select list.
The updated query would be something like:
```
SELECT M.InvoiceID,
M.InvoiceNumber,
M.InvoiceNumber1,
M.IsOther,
M.OtherName,
M.OtherNationalNo,
M.Date,
DSUM.LineTotal + DSUM.Extra - DSUM.Reduction + DSUM.Discount AS SumTotal,
DSUM.Discount,
DSUM.Vat,
DSUM.Tax,
DSUM.LineTotal - DSUM.Vat - DSUM.Tax - DSUM.Extra - DSUM.Reduction AS TotalNet,
M.OnlineInvoiceFlag,
M.RecordType,
M.InvoiceKindFK,
M.StoreFK,
M.AccountFK,
M.PaymentTermFK,
M.DeliverAddress,
DN.DocumentNumber,
M.Time,
M.Description,
M.SubTotal,
M.Reduction,
M.Extra,
M.ProjectFK,
M.CostCenterFK,
M.MarketerAccountFK,
M.MarketingCost,
M.DriverAccountFK,
M.DriverWages,
M.SettelmentDate,
M.DueDate,
M.FinancialPeriodFK,
M.CompanyInfoFK,
M.PrintCount,
M.LetterFK,
M.InvoiceFK,
dbo.getname(M.AccountFK, M.AccountGroupFK, M.FinancialPeriodFK) AS AccountTopic,
M.AccountGroupFK,
RCSUM.ReceivedCash,
RQSUM.ReceivedCheque
FROM Sales.InvoiceHeader M
CROSS APPLY (
SELECT
ISNULL(SUM(ISNULL(D.LineTotal, 0)), 0) AS LineTotal,
ISNULL(SUM(ISNULL(M.Extra, 0)), 0) AS Extra,
ISNULL(SUM(ISNULL(M.Reduction, 0)), 0) AS Reduction,
ISNULL(SUM(ISNULL(D.DiscountAmount, 0)), 0) AS Discount,
ISNULL(SUM(ISNULL(D.VatAmount, 0)), 0) AS Vat,
ISNULL(SUM(ISNULL(D.TaxAmount, 0)), 0) AS Tax
FROM Sales.InvoiceDetail D
WHERE D.InvoiceFK = M.InvoiceID
AND D.InvoiceKindFK = M.InvoiceKindFK
AND D.FinancialPeriodFK = M.FinancialPeriodFK
) DSUM
CROSS APPLY (
SELECT ISNULL(SUM(RC.Price), 0) AS ReceivedCash
FROM Banking.ReceivedCash RC
WHERE RC.SalesInvoiceHeaderFK = M.InvoiceNumber
AND RC.FinancialPeriodFK = M.FinancialPeriodFK
) RCSUM
CROSS APPLY (
SELECT ISNULL(SUM(Banking.ReceivedCheque.Price), 0) AS ReceivedCheque
FROM Banking.ReceivedCheque RQ
WHERE RQ.SalesInvoiceHeaderFK = M.InvoiceNumber
AND RQ.FinancialPeriodFK = M.FinancialPeriodFK
) RQSUM
CROSS APPLY (
SELECT MAX(DocumentFK) AS DocumentNumber
FROM Accounting.DocumentDetail DD
WHERE DD.ItemFK = @Item + CAST(M.InvoiceNumber AS nvarchar(10))
AND DD.documenttypeid = @DocumentTypeFK
AND DD.financialPeriodFK = @FinancialPeriodFK
) DN
WHERE ( (M.InvoiceKindFK = @InvoiceKindFK)
AND (M.FinancialPeriodFK = @FinancialPeriodFK))
ORDER BY M.StoreFK,
M.InvoiceNumber;
```
A `CROSS APPLY` is like an `INNER JOIN` to a subselect. For each usage above, the aggregate functions will always produce a single scalar result, so each should produce exactly one row. (If that was not the case, an `OUTER APPLY` would have been appropriate - equivalent to a `LEFT JOIN` to a subselect.)
I wrapped the `SUM()`s up in additional `ISNULL()` functions to ensure a zero result if no matching rows were found. The inner `ISNULL()` function references could be eliminated if you don't mind the "Null value is eliminated by an aggregate or other SET operation" warnings.
I presume that:
* Every invoice should have at least one and perhaps multiple detail rows.
* Every invoice may have zero, one, or multiple cash payment rows.
* Every invoice may have zero, one, or multiple cheque payment rows.
Be sure to test the final query using all combinations of the above conditions, carefully checking that the calculated sums are correct.
Also check your `InvoiceDetail` join conditions. The referenced columns are not the same as defined in your `FK_InvoiceDetail_InvoiceHeader` foreign key constraint. (The other three table definitions were not posted, but might also be worth a review.) |
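If you want to see the fan-out problem from the multiple independent one-to-many joins in isolation, here is a small sketch of it. This is a toy with made-up table names, written in Python against the built-in `sqlite3` module (SQLite has no `CROSS APPLY`, so correlated scalar subqueries stand in for the same role):

```python
# Toy reproduction of the join fan-out problem (all table names made up).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE invoice(id INTEGER);
CREATE TABLE detail(invoice_id INTEGER, amt REAL);
CREATE TABLE payment(invoice_id INTEGER, amt REAL);
INSERT INTO invoice VALUES (1);
INSERT INTO detail  VALUES (1, 10), (1, 20), (1, 30);  -- true total: 60
INSERT INTO payment VALUES (1, 25), (1, 35);           -- true total: 60
""")

# Two independent one-to-many joins: 3 detail rows x 2 payment rows = 6 rows,
# so details are counted twice and payments three times.
bad = con.execute("""
    SELECT SUM(d.amt), SUM(p.amt)
    FROM invoice i
    JOIN detail d  ON d.invoice_id = i.id
    JOIN payment p ON p.invoice_id = i.id
    GROUP BY i.id
""").fetchone()
print(bad)   # (120.0, 180.0) -- both sums overstated

# Isolating each one-to-many relationship (the role CROSS APPLY plays above):
good = con.execute("""
    SELECT (SELECT SUM(amt) FROM detail d  WHERE d.invoice_id = i.id),
           (SELECT SUM(amt) FROM payment p WHERE p.invoice_id = i.id)
    FROM invoice i
""").fetchone()
print(good)  # (60.0, 60.0) -- correct
```

The same arithmetic applies to the invoice query: with three details and two payments per invoice, the naive join multiplies every detail sum by the payment row count and vice versa.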
I am working with the Azure Mobile Services API. My API runs fine on localhost; I have checked it with the Swagger UI. But when I publish the API to Azure and then access it with Swagger, I get this error.
> 500 : {"Message":"An error has occurred."}
http://xxxxxxxxxxx.azurewebsites.net/swagger/docs/v1
Now if I use this route, `http://xxxxxxxxxxx.azurewebsites.net/tables/doctor?ZUMO-API-VERSION=2.0.0`,
with any table, I get the result.
Why not with Swagger?
Help me to get on the right path. |
|azure|swagger|azure-mobile-services| |
When trying `from osgeo import gdal` I got the error `ModuleNotFoundError: No module named '_gdal'`. I installed gdal 3.5.1 from binary.
|
ModuleNotFoundError: No module named '_gdal' |
I am trying to use a form with two images, and then download both images locally when the form is submitted. But both images are downloaded as the same image.
In HTML I have these fields:
<img src="data:image/jpeg;base64,UklGRoog..." id="foto_muestra__url_val">
<img src="data:image/jpeg;base64,/9j/4AAQ..." id="foto_monton__url_val">
Then I have this JS, called when the form is submitted:
var selectFotoMonton = document.getElementById("foto_monton__url_val");
var selectedFotoMonton_url = selectFotoMonton.src;
var selectFotoMuestra = document.getElementById("foto_muestra__url_val");
var selectedFotoMuestra_url = selectFotoMuestra.src;
$.ajax({
type: "POST",
url: "procesos/procesar_imagenes.php",
data: {
encodedFotoMuestra: encodeURIComponent(selectedFotoMuestra_url),
encodedFotoMonton: encodeURIComponent(selectedFotoMonton_url)
},
contentType: "application/x-www-form-urlencoded;charset=UTF-8",
done: function(data){
//uniqid_code_ = data.responseText;
},
success: function(data){
},
error: function(xhr, status, error) {
var err = eval("(" + xhr.responseText + ")");
alert(err.Message);
}
});
And then, in the file **procesos/procesar_imagenes.php** :
if (ISSET($_POST['encodedFotoMonton'])){
$isFotoMontonSaved = saveImage($_POST['encodedFotoMonton'], true);
$isFotoMuestraSaved = saveImage($_POST['encodedFotoMuestra'], false);
}
And then, I have the function saveImage() before this code in the same file:
function saveImage($base64img_encoded, $isMonton, $uniqid_){
$previousDir = '';
//define('UPLOAD_IMAGES_DIR', $previousDir.'/Imagenes/');
//define('UPLOAD_IMAGES_DIR', '../Imagenes/');
$fileExtension = "jpeg";
$isFileImage = true;
$base64img = rawurldecode($base64img_encoded);
//base64_decode // DEPRECATED FOR BASE64 - FORMAT IN HTML-URL DECODER
if (str_contains($base64img, 'data:image/jpeg')) {
$fileExtension = "jpeg";
$base64img = str_replace('data:image/jpeg;base64,', "", $base64img);
} else if (str_contains($base64img, 'data:image/jpg')) {
$fileExtension = "jpg";
$base64img = str_replace('data:image/jpg;base64,', "", $base64img);
} else if (str_contains($base64img, 'data:image/png')) {
$fileExtension = "png";
$base64img = str_replace('data:image/png;base64,', "", $base64img);
} else if (str_contains($base64img, 'data:image/gif')) {
$fileExtension = "gif";
$base64img = str_replace('data:image/gif;base64,', "", $base64img);
} else if (str_contains($base64img, 'data:image/tiff')) {
$fileExtension = "tiff";
$base64img = str_replace('data:image/tiff;base64,', "", $base64img);
} else if (str_contains($base64img, 'data:image/webm')) {
$fileExtension = "webm";
$base64img = str_replace('data:image/webm;base64,', "", $base64img);
} else {
$isFileImage = false;
}
if ($isFileImage){
$selected_img = 'muestra';
if ($isMonton) {
$selected_img = 'monton';
}
$file_img = '../Imagenes/temp_' . $selected_img . '-' . $uniqid_ . '.' . $fileExtension;
if (file_exists($file_img)) {
unlink($file_img);
}
$file_action = file_put_contents($file_img, $base64img);
if ($file_action){
return $base64img; // Mandar mensaje de Imagen Procesada
} else {
return $base64img; // Mandar mensaje de Imagen no procesada
}
} else {
return 'No se han detectado las imágenes. '.$base64img;
}
}
And the results as images:
[![enter image description here][1]][1]
There's something I am not doing well but I don't know where or why it happens. I always test with different images, for Monton image and Muestra image.
Monton = Mount image
Muestra = Preview image
Thanks in advance!
If anything is badly explained, please tell me.
[1]: https://i.stack.imgur.com/rrCoT.png |
Form with two images is saving both with same content as first in PHP |
|php|image|forms|upload|move| |
I have a usercontrol that raises an event after communicating with a web service. The parent handles this event when it is raised. What I *thought* would be the proper approach is to pass the object returned from the web service to the parent as event args.
If this is the proper way, I can't seem to find instructions on how to do so.
> UserControl
public event EventHandler LoginCompleted;
then later after the service returns biz object:
if (this.LoginCompleted != null)
{
// This is where I would attach / pass my biz object no?
this.LoginCompleted(this, new EventArgs());
}
> Parent
ctrl_Login.LoginCompleted += ctrl_Login_LoginCompleted;
....snip....
void ctrl_Login_LoginCompleted(object sender, EventArgs e)
{
// Get my object returned by login
}
So my question is what would be the "approved" method for getting the user object back to the parent? Create a property class that everything can access and put it there? |
|ios|swift|avfoundation|avaudiosession|avaudiorecorder| |
I have written a C# program that reads WMI instance operation events in a particular namespace. Another third-party piece of software (the SCCM client [1]) is responsible for creating these instances in the background. When running multiple instances of my program, only the first one is able to listen to the events and fire the callback function. The rest of the processes keep looping without recording anything on the console. Furthermore, if I kill the first process and spawn a new one, none of the processes -- those already running or the new one -- are able to read new instance operation events, even though the instances are visible in the WMI Explorer tool. What stops other processes from also listening to these events?
class Program
{
static ManagementEventWatcher watcher;
static void StopListening(object sender, ConsoleCancelEventArgs e)
{
Console.WriteLine("Stopping listener");
watcher.Stop();
watcher.Dispose();
watcher.EventArrived -= WmiEventArrived;
Console.WriteLine("...stopped.");
}
static void Main(string[] args)
{
string ClassName = "CCM_StateMsg";
string WmiQueryAsync = "SELECT * FROM __InstanceOperationEvent WHERE TargetInstance ISA '{0}'";
string query = string.Format(CultureInfo.InvariantCulture, WmiQueryAsync, ClassName);
string scope = @"root\ccm\StateMsg";
watcher = new ManagementEventWatcher(scope, query);
watcher.EventArrived += WmiEventArrived;
Console.CancelKeyPress += StopListening;
Console.WriteLine("Starting watcher");
watcher.Start();
Console.WriteLine("...started");
while (true)
{
Console.WriteLine("Waiting for events...");
Thread.Sleep(10000);
}
}
public static void WmiEventArrived(object sender, EventArrivedEventArgs e)
{
// Choose type of event based on class name
string eventClassName = (string)e.NewEvent.GetPropertyValue("__CLASS");
Console.WriteLine("StateMessageProvider got event from WMI of type: {0}", eventClassName);
ManagementBaseObject currInstance = (ManagementBaseObject)e.NewEvent.GetPropertyValue("TargetInstance");
// RAISE THE EVENT
Console.WriteLine("GOT EVENT: {0}", e.NewEvent.ClassPath.ToString());
}
}
[Configuration Manager WMI namespaces and classes for Configuration Manager reports][1]
[1]: https://learn.microsoft.com/en-us/mem/configmgr/develop/core/understand/sqlviews/wmi-namespaces-classes-configuration-manager-reports |
|laravel|graphql|laravel-lighthouse|graphiql| |
I'm trying to publish a project with Vercel, but I'm getting the following error:

I've tried this but it didn't work:
{"minifySvg": false }
This didn't work also:
module.exports = { "minifySvg":false }
What could be the problem? |
I have this code
```lang-html
<h1><span class="tag">My text to animate</span></h1>
```
```lang-CSS
.tag {
color : #0000;
--g : linear-gradient(beige 0 0) no-repeat;
background : var(--g),var(--g);
background-size : 0% 100%;
-webkit-background-clip : padding-box,text;
background-clip : padding-box,text;
animation:
t 1.2s .5s both,
b 1.2s 1.3s both;
}
@keyframes t{
to { background-size: 150% 100% }
}
@keyframes b {
to { background-position:-200% 0,0 0 }
}
.hidden{
opacity: 0;
}
```
```lang-javascript
const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    console.log(entry)
    if (entry.isIntersecting) {
      entry.target.classList.add('tag');
    } else {
      entry.target.classList.remove('tag');
    }
  })
})
const hiddenElements = document.querySelectorAll("h1");
hiddenElements.forEach((el) => observer.observe(el));
```
I want to run the animation every time the element comes into the viewport. In the main HTML body I have several `<section>` elements and a scroll-snap feature. The animation plays correctly, but only once. I would like it to play every time the element comes into the viewport.
I tried several JS tutorials that play the animation when it comes into the viewport, but none worked.
Edit: I added the observer in JavaScript and referenced the JavaScript in the HTML. The animation plays once. When I scroll up or down, the animation doesn't play and there is only the text. It looks like the animation plays just once and then never again unless I refresh the page.
Why does the container constraints property ignore minimum width?
I am working through the book "[Program Proofs][1]". Specifically, I am doing exercise 5.9.
My attempt is as follows:
function Mult(x: nat, y: nat): nat
{
if y == 0 then 0 else x + Mult(x, y - 1)
}
lemma {:induction false} MultCommutative(x: nat, y: nat)
decreases x + y
ensures Mult(x, y) == Mult(y, x)
{
if x == y {
// Trivial
} else if x == 0 {
MultCommutative(x, y - 1);
} else if y < x {
MultCommutative(y, x);
}
}
Dafny does not verify at `MultCommutative(y, x)`, with the error that the decreases clause might not decrease.
However, I do not understand this.
To my knowledge, Dafny should check whether the following lexicographic tuple comparison holds:
`x + y ≻ y, x`, which it does, as `x + y` exceeds `y`. Therefore, I do not understand why Dafny still says that the decreases clause might not decrease.
[1]: https://program-proofs.com/ |
Dafny: comparing lexicographic tuples |
|dafny| |
I am going through TPM (Trusted Platform Module) and trying to do a task.
How can I store data on the TPM chip? Also, how can I read that data?
|
How to read and store data to TPM Chip? |
null |
I am not sure about `Provider`, but you can surround your `ListTile` item with a `GestureDetector` or `InkWell` widget, and when you tap on your list tile you can send the id of your item to the next page via page parameters, for example:
    GestureDetector(
      onTap: () {
        Navigator.of(context).push(
          new MaterialPageRoute(
            builder: (BuildContext context) =>
                new productDetailsScreen(product: product),
          ),
        );
      },
      child: ProductTile(product: product),
    );
Then create the `productDetailsScreen` page like this:
    import 'package:flutter/material.dart';

    class productDetailsScreen extends StatelessWidget {
      final Product product;

      const productDetailsScreen({Key? key, required this.product})
          : super(key: key);

      @override
      Widget build(BuildContext context) {
        return Scaffold(
            appBar: AppBar(
              // mock code, just to show that you can use a field of your product like this
              title: Text(product.name),
            ),
            body: ...);
      }
    }
|
I think it should always be done if you made the inner type `pub`, as this basically means we have complete transparency. I actually think in this case `Deref`, `DerefMut`, `From`, and `Into` should be implemented automatically by Rust, like `Sync` and `Send` are.
But if it is not `pub`, then never do it, unless the newtype itself is also not `pub`; then it is your own decision.
The question is: is it actually the newtype pattern, if the inner type is not `pub` but the outer one is? |
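As a minimal sketch of the fully transparent case discussed above (the `Meters` type and its inner `f64` are hypothetical examples): a newtype with a `pub` inner field, hand-implementing the `Deref`, `DerefMut`, and `From` that arguably could be derived automatically:

```rust
use std::ops::{Deref, DerefMut};

// Hypothetical newtype: both the wrapper and its inner field are `pub`.
pub struct Meters(pub f64);

impl Deref for Meters {
    type Target = f64;
    fn deref(&self) -> &f64 {
        &self.0
    }
}

impl DerefMut for Meters {
    fn deref_mut(&mut self) -> &mut f64 {
        &mut self.0
    }
}

impl From<f64> for Meters {
    fn from(v: f64) -> Self {
        Meters(v)
    }
}

fn main() {
    let mut m = Meters::from(3.0);
    *m += 0.5; // DerefMut makes the wrapper fully transparent
    println!("{}", m.sqrt()); // auto-deref even exposes f64's methods
}
```

Note that `Into<Meters>` comes for free from the `From` impl via the standard library's blanket implementation, so only three of the four traits need writing by hand.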
null |
It's pretty easy -- just start by selecting your target nodes, then doing the layout using "Selected Only". That will only layout your target nodes. Then repeat the process for your source nodes.
|
Is there a paradigm that gives you a different mindset or have a different take to writing multithreaded applications?
Perhaps something that feels vastly different, like [procedural programming][1] to [functional programming][2].
[1]: https://en.wikipedia.org/wiki/Procedural_programming
[2]: https://en.wikipedia.org/wiki/Functional_programming
|
It can be done in a single animation starting at "`0` rotation", without stacking and without a negative delay, and you were pretty close to that. (Welcome to SO, by the way!)
You just had the easing functions set one frame later, but the progression (`ease-out` - `ease-in-out` - `ease-in`) was correct.
For the POC demo I've changed the "thing" to resemble a pendulum, because I think it is slightly more illustrative for this purpose:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-css -->
@keyframes swing {
/* Starting at the bottom. */
0% {
transform: rotate(0turn); color: red;
animation-timing-function: ease-out;
}
/* From the bottom to the right cusp:
start full speed, end slow (ease-out). */
25% {
transform: rotate(-0.2turn); color: blue;
animation-timing-function: ease-in-out;
}
/* From the right cusp to the left cusp:
start slow, end slow (ease-in-out).
It will effectively cross the bottom
`0turn` point at 50% in full speed.
*/
75% {
transform: rotate(0.2turn); color: green;
animation-timing-function: ease-in;
}
/* From the left cusp to the bottom:
start slow, end full speed (ease-in). */
100% {
transform: rotate(0turn); color: yellow;
animation-timing-function: step-end;
}
/* Back at the bottom.
Arrived here at the full speed.
Animation timing function has no effect here. */
}
div {
animation: swing;
animation-duration: 3s;
/* `animation-timing-function` is set explicitly
(overridden) in concrete keyframes. */
animation-iteration-count: infinite;
animation-direction: normal;
/* `reverse` still obeys "reversed" timing functions from *previous* frames. */
animation-play-state: running;
transform-origin: center top;
margin: auto;
width: 100px;
display: flex;
flex-direction: column;
align-items: center;
pointer-events: none;
&::before,
&::after {
content: '';
background-color: currentcolor;
}
&::before {
width: 1px;
height: 100px;
}
&::after{
width: 50px;
height: 50px;
}
}
#reset:checked ~ div {
animation: none;
}
#pause:checked ~ div {
animation-play-state: paused;
}
<!-- language: lang-html -->
<meta name="color-scheme" content="dark light">
<input type="checkbox" id="pause"><label for="pause">Pause animation</label>,
<input type="checkbox" id="reset"><label for="reset">Remove animation</label>.
<div></div>
<!-- end snippet -->
I must admit it never occurred to me that we can set different timing functions for each keyframe, so such a naturally looking multi-step animation with "bound" easing types is in fact achievable. Another big takeaway for me is that the easing function of the last (`to` / `100%`) keyframe logically doesn't have any effect.
---
Personally I'd most probably go with the terser "back-and-forth" approach (`animation-direction: alternate` between cusp points, `ease-in-out` timing, and a negative half-duration delay shifting the initial state to the "bottom" mid-point, similar to the one proposed in the other answer here), but I definitely see the benefits of this more straightforward approach without a delay.
{"Voters":[{"Id":23179206,"DisplayName":"Lee-xp"}],"DeleteType":1} |
When writing async code, you need to code to the principle not the implementation.
> Would the following program be safe to run without locks?
No, the operation is non-atomic and a race condition. A task could read the variable, any number of other tasks could read and write the value in the meantime, and then the original task could update the variable. Protect the data.
In practice, if you are using a single thread for async, you'll never see the race condition. If you're using multiple threads, you still will not see it, because CPython has the GIL, which a thread would have to release between reading and writing the variable. That won't happen here, *but it is not guaranteed*: for example, you could use a different Python implementation, or interface with C code that releases the GIL.
> Does eviction need to be user-defined?
I am taking eviction to mean that the OS will switch context and let something else run. There is no guarantee. If you're running a CPU-bound task, Python offers no guarantee that it will be interrupted from time to time to check the async event loop, so start long-running, expensive tasks with that expectation. A user-defined check could be appropriate, or something similar to the answer I linked in the comment, https://stackoverflow.com/a/71971903/2067492: create a future that you can include in your asynchronous workflow.
Consider you have two tasks that you've submitted asynchronously. Each tasks has actions A1, A2,...AN and B1, B2, ... BN.
When thinking of real-time execution you have to consider that any order is possible, e.g.:
a1, a2, a3, ... aN b1, b2, ..., bN
That is a common execution order. But it could be:
a1, b1, a2, b2, a3, ... bN, aN.
Or even:
b1, b2, b3, ... bN, a1, a2, ... aN
The thing is async gives you the tools to make sure these tasks execute how you want them. You can have `a3` be an action that waits for `b3` and then our ordering possibilities are greatly reduced.
In your example `shared_counter += 1` would be 3 actions. a1 is read, a2 is value + 1, and a3 is write. |
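To make "protect the data" concrete, here is a minimal sketch using `asyncio.Lock` (the counter and task names are illustrative): the a1/a2/a3 steps become one unit, so even an explicit context switch between the read and the write cannot lose an update.

```python
import asyncio

shared_counter = 0

async def bump(lock):
    global shared_counter
    # a1..a3 now execute as one unit; other tasks wait at the lock.
    async with lock:
        value = shared_counter          # a1: read
        await asyncio.sleep(0)          # a context switch here is now harmless
        shared_counter = value + 1      # a2 + a3: compute and write

async def main():
    lock = asyncio.Lock()
    await asyncio.gather(*(bump(lock) for _ in range(100)))
    print(shared_counter)  # 100

asyncio.run(main())
```

Without the lock, the `sleep(0)` would let other tasks read the stale value and updates would be lost; with it, all 100 increments land.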
Just another solution:
If we want to achieve this with interactive mode, use the `git rebase` command.
Then run:
`git rebase -i HEAD~2`
Note: `~N` is the number of commits we want to see in the interactive window.
There are various options: we can pick only the commits we want, squash all commits into a single commit, or modify commit messages. We can also change the order of commits.
Once you have made the change to the file, save it and run the command below:
`git rebase --continue`
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/zZaW2.png |
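If you want to try the squash workflow in a throwaway repo without driving the editor by hand, here is a sketch (assuming GNU `sed`; `GIT_SEQUENCE_EDITOR` rewrites the todo list that `git rebase -i` would otherwise show you interactively):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "you@example.com"
git config user.name "you"
for n in 1 2 3; do echo "$n" >> f; git add f; git commit -qm "commit $n"; done

# Rewrite the todo list that `git rebase -i HEAD~2` opens: turn the second
# "pick" into "squash", and accept the combined commit message unchanged.
GIT_SEQUENCE_EDITOR="sed -i '2s/^pick/squash/'" GIT_EDITOR=true \
  git rebase -i HEAD~2

git log --oneline   # the last two commits are now one
```

The same trick works for any edit you would make in the interactive window, since the todo list is just a text file handed to the configured editor.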
null |
It's easy: don't use nodemon. Install the latest version of Node and use this command:

    node --watch filename

for example:

    node --watch index.js

It works the same as nodemon, but remember: this needs a recent version of Node (19, I think); if you are on 17 or so it will not work, so make sure to update Node before using it.
My problem is that there is a generic class called `TypedProducer<K, V>` for key and value and I do have to instantiate it from configuration (let's say someone tells me that key is `Integer` and value is `String` in configuration during runtime but it can be any `Object` subtype really) so I don't know the types beforehand.
How can I instantiate `TypedProducer` by passing parameters? Or even create `SourceTypedClassProvider<keyType, valueType>` in the first place from `.class` objects?
public class SourceTypedClassProvider {
public static TypedProducer<?, ?> instantiateTypedProducer(
Class<?> keyType, Class<?> valueType) {
//should return instance of TypedProducer<Integer, String>
}
}
I know there's something like `TypeToken` in Guava, but would it help me at all in this scenario, where the types first have to be obtained from configuration?
EDIT:
To be honest `TypedProducer` implementation shouldn't make a difference (you can treat it as if I were instantiating e.g. a `Map<K, V>`) but if it makes it easier part of the implementation below:
public class TypedProducer<K, V> {
ExternalApiProducer<K, V> externalApiProducer;
public TypedProducer() {
externalApiProducer = new ExternalApiProducer<>();
}
public Map<K, V> produceRecords() {
//some code that calls externalApiProducer to produce records
}
}
|
I have a table which represents sequences of points, and I need to get the sum for all possible combinations. The main problem is how to do it with a minimum of operations, because the real table is huge.
| Col1 | col2 | col3 | col4 | col5 | col6 | ct |
|------|------|------|------|------|------|----|
| Id1  | id2  | id3  | id4  | id5  | id6  | 30 |
| Id8  | id3  | id5  | id2  | id4  | id6  | 45 |
The expected result is
    Id3|id5|75
    Id3|id4|75
    Id3|id6|75
    Id5|id6|75
    Id2|id4|75
    Id2|id6|75
    Id4|id6|75
I would be grateful for any help
|
Summarize all possible combinations in pandas dataset |
|python|pandas|dataframe| |