I'm new to HPX and trying to build it on Ubuntu 22.04. However, after compiling, all the examples terminate with:
```
terminate called after throwing an instance of 'hpx::detail::exception_with_info<hpx::exception>'
what(): failed to insert update_agas_cache_action into typename to id registry.: HPX(invalid_status)
```
I have tried both the release-1.9.X and master branches on GitHub. I have tried compiling with both `HPX_WITH_MALLOC=tcmalloc` and `system`. Does anyone have any idea what I'm doing wrong? |
HPX Examples throwing "failed to insert update_agas_cache_action into typename to id registry" exception |
|hpx| |
I want to send some data from my content.js to the background script in my Chrome extension. I tried `chrome.runtime.sendMessage`, but it is undefined. This happens because of my manifest file, where `"world": "MAIN"` is set for the content script. Here is the code for content.js:
```
listenerFn = (event) => {
  const url = event.target.responseURL;
  if (url.startsWith("https://example.com/api/v2/home/allfeed")) {
    chrome.runtime.sendMessage({
      type: "response",
      data: event.target.responseText,
    });
  }
};

(() => {
  let XHR = XMLHttpRequest.prototype;
  let send = XHR.send;
  XHR.send = function () {
    this.addEventListener("load", listenerFn);
    return send.apply(this, arguments);
  };
})();
```
manifest.json
```
"content_scripts": [
  {
    "matches": ["<all_urls>"],
    "js": ["content.js"],
    "run_at": "document_start",
    "world": "MAIN"
  }
]
```
Is there a way of sending data from a content script to the background script? |
How to send data from content.js to background.js |
|google-chrome-extension| |
Unix text and keyboard event handling has been handled nicely by curses and ncurses for decades, and it works in Python in Termux. This example works as long as you zoom out the Termux window (pinch to zoom out) so it doesn't run into bounding errors while moving the cursor around.
https://medium.com/explorations-in-python/an-introduction-to-curses-in-python-7686b9641190
```
import curses


def main():
    """
    The curses.wrapper function is an optional function that
    encapsulates a number of lower-level setup and teardown
    functions, and takes a single function to run when
    the initializations have taken place.
    """
    curses.wrapper(curses_main)


def curses_main(w):
    """
    This function is called curses_main to emphasise that it is
    the logical if not actual main function, called by curses.wrapper.
    Its purpose is to call several other functions to demonstrate
    some of the functionality of curses.
    """
    w.addstr("-----------------\n")
    w.addstr("| codedrome.com |\n")
    w.addstr("| curses demo |\n")
    w.addstr("-----------------\n")
    w.refresh()

    printing(w)
    moving_and_sleeping(w)
    colouring(w)

    w.addstr("\npress any key to exit...")
    w.refresh()
    w.getch()


def printing(w):
    """
    A few simple demonstrations of printing.
    """
    w.addstr("This was printed using addstr\n\n")
    w.refresh()

    w.addstr("The following letter was printed using addch:- ")
    w.addch('a')
    w.refresh()

    w.addstr("\n\nThese numbers were printed using addstr:-\n{}\n{:.6f}\n".format(123, 456.789))
    w.refresh()


def moving_and_sleeping(w):
    """
    Demonstrates moving the cursor to a specified position before printing,
    and sleeping for a specified period of time.
    These are useful for very basic animations.
    """
    row = 5
    col = 0
    curses.curs_set(False)

    for c in range(65, 91):
        w.addstr(row, col, chr(c))
        w.refresh()
        row += 1
        col += 1
        curses.napms(100)

    curses.curs_set(True)
    w.addch('\n')


def colouring(w):
    """
    Demonstration of setting background and foreground colours.
    """
    if curses.has_colors():
        curses.init_pair(1, curses.COLOR_YELLOW, curses.COLOR_RED)
        curses.init_pair(2, curses.COLOR_GREEN, curses.COLOR_GREEN)
        curses.init_pair(3, curses.COLOR_MAGENTA, curses.COLOR_CYAN)

        w.addstr("Yellow on red\n\n", curses.color_pair(1))
        w.refresh()
        w.addstr("Green on green + bold\n\n", curses.color_pair(2) | curses.A_BOLD)
        w.refresh()
        w.addstr("Magenta on cyan\n", curses.color_pair(3))
        w.refresh()
    else:
        w.addstr("has_colors() = False\n")
        w.refresh()


main()
``` |
Per RFC 9110, the current HTTP/1.1 spec as of 2024-03-31, responses with the following statuses MUST NOT generate content:
- [Informational 1xx](https://datatracker.ietf.org/doc/html/rfc9110#name-informational-1xx)
- [204 No Content](https://datatracker.ietf.org/doc/html/rfc9110#name-204-no-content)
- [205 Reset Content](https://datatracker.ietf.org/doc/html/rfc9110#name-205-reset-content)
- [304 Not Modified](https://datatracker.ietf.org/doc/html/rfc9110#name-304-not-modified) |
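The list above is simple enough to encode in a small helper if you need to branch on it in code. This is just an illustrative sketch; the function name is mine, not from the RFC:

```python
def must_not_generate_content(status: int) -> bool:
    """Return True for status codes whose responses MUST NOT generate
    content per RFC 9110: all 1xx (informational), 204, 205, and 304."""
    return 100 <= status <= 199 or status in (204, 205, 304)

print(must_not_generate_content(204))  # True
print(must_not_generate_content(200))  # False
```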
It should work like this
```
library(terra)
# list the file names (character vectors, not rasters)
red <- list.files(pattern = "red")
nir <- list.files(pattern = "nir")

calc_ndvi <- function(nir, red) {
  (nir - red) / (nir + red)
}

ndvi <- list()
for (i in seq_along(red)) {
  ndvi[[i]] <- calc_ndvi(rast(nir[i]), rast(red[i]))
}
ndvi <- rast(ndvi)
```
But you can also do them all at once
```
ndvi <- calc_ndvi(rast(nir), rast(red))
```
|
I'm the maintainer of [xnec2c][1] and we found that this code was causing xnec2c to beep through the PC speaker (excessively and annoyingly) while resizing a window or rotating an object:
```c
snprintf( txt, sizeof(txt), "%7.2f", Viewer_Gain(proj_params, calc_data.freq_step) );
gtk_entry_set_text( GTK_ENTRY(Builder_Get_Object(builder, widget)), txt );
```
Why would `gtk_entry_set_text` beep?
[1]: https://www.xnec2c.org/ |
Why does GTK beep when resizing a window? |
|c|gtk|gtk3| |
It's possible to write and run JavaScript code in the Safari browser.
Follow these steps:
1. **Enable Safari dev tools:** To open the Safari dev tools, press `Ctrl + Alt + C` on Windows or `Cmd + Option + C` on Mac. Alternatively, enable the developer commands in the menu bar via Safari Settings -> Advanced -> Show features for web developers.
2. Select the Console tab in the dev tools window.
3. Now start writing code in the console. (**Note:** for multiline code, press `Shift` + `Enter` to start a new line.)
4. To run the code, press `Enter`. |
I had the same problem when copying database files from my old PC to my new PC. The solution was to use Backup (to a USB drive) from the application on the old PC, then open the application on the new PC and select Restore from the USB drive. |
After much troubleshooting, we found that the `"%7.2f"` format was producing text that was too long for the text field named by `widget` (NB: `widget` is a `gchar[]` string naming the widget, not a widget object).
The text field's `max-length` was defined in the Glade XML, so `gtk_entry_set_text` would trigger a beep event every time the widget text was updated.
In case it helps someone else, this was the fix:
```diff
- snprintf( txt, sizeof(txt), "%7.2f", Viewer_Gain(proj_params, calc_data.freq_step) );
+ snprintf( txt, sizeof(txt)-1, "%.2f", Viewer_Gain(proj_params, calc_data.freq_step) );
gtk_entry_set_text( GTK_ENTRY(Builder_Get_Object(builder, widget)), txt );
...
- <property name="max-length">6</property>
+ <property name="max-length">9</property>
```
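To make the width problem concrete: Python's `%`-style formatting follows the same conversion rules as C's `printf`, so it can serve as a quick sketch of what `"%7.2f"` produces (the sample values here are made up for illustration):

```python
# In "%7.2f" the 7 is a minimum field width, not a maximum: the result
# is always at least 7 characters, and grows for larger values --
# always longer than a max-length of 6.
print(repr("%7.2f" % 3.14))      # '   3.14'  (7 chars)
print(repr("%7.2f" % 12345.67))  # '12345.67' (8 chars)
```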
and [this is the commit][1] that solved our issue if you wish to review the entire thing.
[1]: https://github.com/KJ7LNW/xnec2c/commit/7bba2b40b9ccaf27f7367d61c48588a141e0a1be |
I use Hibernate for a desktop application and the database server is in another country.
Unfortunately, connection problems are very common at the moment.
These are:

```
1. 2024-03-19 14:08:42 2378 [Warning] Aborted connection 2378 to db: 'CMS_DB' user: 'JOHN' host: 'bba-83-130-102-145.alshamil.net.ae' (Got an error reading communication packets)
2. 2024-03-19 13:44:45 1803 [Warning] Aborted connection 1803 to db: 'CMS_DB' user: 'REMA' host: '188.137.160.92' (Got timeout reading communication packets)
3. 2024-03-19 11:51:08 1526 [Warning] Aborted connection 1526 to db: 'unconnected' user: 'unauthenticated' host: '92.216.164.102' (Got an error reading packet communications)
4. 2024-03-19 11:51:08 1526 [Warning] Aborted connection 1526 to db: 'unconnected' user: 'unauthenticated' host: '92.216.164.102' (This connection closed normally without authentication)
5. 2024-03-19 11:55:26 1545 [Warning] IP address '94.202.229.78' could not be resolved: No such host is known.
```
**In addition, these error messages often appear:**

```
javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: Unable to acquire JDBC Connection
    at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:154)
    at org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1542)
    at org.hibernate.query.Query.getResultList(Query.java:165)
    at
```
**Also this:**

```
Caused by: java.sql.SQLTransactionRollbackException: (conn=9398) Deadlock found when trying to get lock; try restarting transaction
    at org.mariadb.jdbc.internal.util.exceptions.ExceptionFactory.createException(ExceptionFactory.java:76)
    at org.mariadb.jdbc.internal.util.exceptions.ExceptionFactory.create(ExceptionFactory.java:153)
    at org.mariadb.jdbc.MariaDbStatement.executeExceptionEpilogue(MariaDbStatement.java:274)
    at org.mariadb.jdbc.ClientSidePreparedStatement.executeInternal(ClientSidePreparedStatement.java:229)
    at org.mariadb.jdbc.ClientSidePreparedStatement.execute(ClientSidePreparedStatement.java:149)
    at org.mariadb.jdbc.ClientSidePreparedStatement.executeUpdate(ClientSidePreparedStatement.java:181)
    at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeUpdate(NewProxyPreparedStatement.java:1502)
    at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:197)
    ... 41 more
Caused by: org.mariadb.jdbc.internal.util.exceptions.MariaDbSqlException: Deadlock found when trying to get lock; try restarting transaction
    at org.mariadb.jdbc.internal.util.exceptions.MariaDbSqlException.of(MariaDbSqlException.java:34)
    at
```
So far I have had the following c3p0 configuration in my hibernate.cfg.xml:
```
<!-- Related to the connection START -->
<property name="connection.driver_class">org.mariadb.jdbc.Driver</property>
<property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
<!-- Related to Hibernate properties START -->
<property name="hibernate.connection.driver_class">org.mariadb.jdbc.Driver</property>
<property name="hibernate.show_sql">false</property>
<property name="hibernate.format_sql">false</property>
<property name="hibernate.current_session_context_class">thread</property>
<property name="hibernate.temp.use_jdbc_metadata_defaults">false</property>
<property name="hibernate.generate_statistics">true</property>
<property name="hibernate.enable_lazy_load_no_trans">true</property>
<!-- c3p0 Setting -->
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<property name="hibernate.c3p0.min_size">4</property>
<property name="hibernate.c3p0.max_size">15</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">20</property>
<property name="hibernate.c3p0.acquire_increment">3</property>
<property name="hibernate.c3p0.idle_test_period">100</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.unreturnedConnectionTimeout">30</property>
<property name="hibernate.c3p0.debugUnreturnedConnectionStackTraces">true</property>
```
Can someone look into whether the values for the remote connection make sense? Any change recommendation is warmly welcomed!
Thanks in advance! |
I was given this block of code, where `call by reference` had to be used for every method call, and I had to give the output in the format:
```none
y:[result]; y: [result]; y:[result]; x:[result]; a:[result]
```
```java
import java.util.Arrays;

public class Main {

    static int x = 2;

    public static void main(String[] args) {
        int[] a = {17, 43, 12};
        foo(a[x]);
        foo(x);
        foo(a[x]);
        System.out.println("x:" + x);
        System.out.println("a:" + Arrays.toString(a));
    }

    static void foo(int y) {
        x = x - 1;
        y = y + 2;
        if (x < 0) {
            x = 5;
        } else if (x > 20) {
            x = 7;
        }
        System.out.println("y:" + y);
    }
}
```
I'm not 100% sure how call by reference works in some cases, and I'm not sure which result is the right one.
Anyway, here is one attempt:
`foo(a[x])` is called with `a[2]` (which is 12). `y` becomes 12 + 2 = 14. `x` is decremented to 1.
`foo(x)` is called with `x` (which is 1). Both `x` and `y` point to the value 1 of `x`. `x` is decremented to 0, and then `x` becomes 3 because `y = y + 2` and `y` was pointing at the value of `x`.
`foo(a[x])` is called with `a[3]` (which doesn't exist). `x` is decremented to 2.
The array `a` transforms into `17,43,14`.
So, the results would be like:
```none
y : 14; y : 3; y : ?; x : 2; a : 17,43,14
```
I think the thing that confuses me the most is in the case of `foo(x)`. Does `y` point at variable `x` or the value of `x` at the moment the method is called? |
[Anchor positioning](https://developer.chrome.com/blog/introducing-popover-api#anchor_positioning) is coming, but the only major browser with support for it is Chrome, and as it is considered an experimental feature, you have to use a [flag](https://caniuse.com/?search=anchor%20positioning) to switch it on.
Until anchor positioning is widely available, popovers created with the Popover API can only be positioned relative to the viewport (by default they are in the centre).
|
You could change your function to accept a computer name, as below, and use it to invoke the command on the specified machine:
```
function verificaSophos {
    param (
        [string] $ComputerName = $env:COMPUTERNAME
    )

    $resultado = Invoke-Command -ComputerName $ComputerName -ScriptBlock {
        [bool](Get-Process 'Sophos UI' -ErrorAction SilentlyContinue)
    }

    if ($resultado) { 'Instalado' } else { 'Nao instalado' }
}

$Comps    = Get-ADComputer -Filter 'Enabled -eq $true'
$CompList = foreach ($Comp in $Comps) {
    [PSCustomObject] @{
        Name                     = $Comp.Name
        VerificaInstalacaoSophos = verificaSophos $Comp.Name
        DataColeta               = Get-Date -Format "dd/MM/yyyy HH:mm:ss"
    }
}
```
However, the [answer by js2010](https://stackoverflow.com/a/78220294) is probably the easiest way. |
I have this function that updates a user's feed after they follow another user, by getting the post IDs of that user's posts and then adding them to a 'user-feed' collection in the user's document. It worked fine until I added the timestamp field. What I'm trying to do is add the timestamp from each post to the document being created in the user-feed, so that I can order the feed posts by timestamp.
```
exports.updateUserFeedAfterFollow = functions.firestore.document('/following/{currentUid}/user-following/{uid}').onCreate((snap, context) => {
  const currentUid = context.params.currentUid;
  const uid = context.params.uid;
  const db = admin.firestore();
  functions.logger.log('Function called..');
  return db.collection('posts').where('ownerUid', '==', uid).get().then((snapshot) => {
    functions.logger.log('Fetched posts..');
    snapshot.forEach((doc) => {
      const postId = doc.id;
      const timestamp = doc.timestamp;
      functions.logger.log('PostID:', postId);
      const writeResult = db.collection('users').doc(currentUid).collection('user-feed').doc(postId);
      writeResult.set({timestamp: timestamp});
    });
    return null;
  })
  .catch((err) => {
    functions.logger.log(err);
  });
});
```
I tried to add the timestamp of each post to the document that was created, but it doesn't run and gives an error. |
Here is your answer:

```dart
import 'package:flutter/material.dart';
import 'package:flutter_application_1/domain/entities/entities.dart';
import 'package:flutter_application_1/presentation/riverpod/league_provider.dart';
import 'package:flutter_application_1/presentation/riverpod/search/search_form_provider.dart';
import 'package:flutter_application_1/presentation/riverpod/team_provider.dart';
import 'package:flutter_application_1/presentation/widgets/search/card_search_team.dart';
import 'package:flutter_application_1/presentation/widgets/general_appbar.dart';
import 'package:flutter_application_1/presentation/widgets/search/filter.dart';
import 'package:flutter_application_1/presentation/widgets/search/not_found.dart';
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:go_router/go_router.dart';

class SearchTeamView extends ConsumerStatefulWidget {
  const SearchTeamView({super.key});

  @override
  ConsumerState<SearchTeamView> createState() => _SearchTeamViewState();
}

class _SearchTeamViewState extends ConsumerState<SearchTeamView> {
  @override
  Widget build(BuildContext context) {
    final SearchFormState form = ref.watch(searchFormProvider);
    final List<Team> teamList =
        ref.watch(teamProvider.notifier).getTeamsByLeague(form.league);

    return Scaffold(
      appBar: GeneralAppBar.getAppBar('Equipos', () => context.pop()),
      backgroundColor: Colors.transparent,
      body: ListView(
        children: [
          Form(
            child: Padding(
              padding: const EdgeInsets.all(15.0),
              child: Column(
                crossAxisAlignment: CrossAxisAlignment.start,
                children: [
                  //* League
                  const SizedBox(height: 15),
                  const Padding(
                    padding: EdgeInsets.all(10.0),
                    child: Text(
                      'Liga',
                      style: TextStyle(
                          color: Colors.white,
                          fontWeight: FontWeight.bold,
                          fontSize: 20),
                    ),
                  ),
                  Filter(
                    hint: '-- Sin Especificar --',
                    listForm: [
                      '-- Sin Especificar --',
                      ...ref.watch(leagueProvider.notifier).getLeaguesByAll(
                          form.league,
                          form.season,
                          form.category,
                          form.sex,
                          form.delegation)
                    ],
                    onChanged: (value) {
                      value == '-- Sin Especificar --'
                          ? ref
                              .watch(searchFormProvider.notifier)
                              .leagueChanged(null)
                          : ref
                              .watch(searchFormProvider.notifier)
                              .leagueChanged(value);
                      setState(() {});
                    },
                    value: ref.watch(searchFormProvider).league,
                  ),
                ],
              ),
            ),
          ),
          const Expanded(
            child: Text(
              'data',
              style: TextStyle(fontSize: 40, color: Colors.white),
            ),
          )
        ],
      ),
    );
  }
}
```
|
I want to install Solr 9.x on a fresh Linux server in order to run a pretty old application, migrating from Solr 8.x. As of 9.x the data import handler is no longer part of Solr, therefore I installed https://github.com/SearchScale/dataimporthandler to overcome the issue.
Solr is running, but the core will not load due to a config problem. I am unsure whether it is related to the data import handler or to some other issue.
Error log:
```
31/03/2024, 19:11:32  WARN  false  findix  SolrConfig
Couldn't add files from /opt/solr-9.5.0/dist filtered by solr-dataimporthandler-.*\.jar to classpath: java.nio.file.NoSuchFileException: /opt/solr-9.5.0/dist

31/03/2024, 19:11:32  ERROR  false  findix  CoreContainer
SolrCore failed to load on startup
```
The file/folder `dist` does not exist; however, `/opt/solr-9.5.0/` does.
I suspect that the project on GitHub does not support 9.5.0 but only 9.4.0:
https://github.com/SearchScale/dataimporthandler/commit/2c8bee24f14888e1f12732ed4b84581ed2953366
When calling the application, this error message appears:
Solr Error: org.apache.solr.core.SolrCoreInitializationException: SolrCore 'findix' is not available due to init failure: Could not load conf for core findix: Error loading solr config from /var/solr/data/findix/conf/solrconfig.xml at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:2280) at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:2249) at org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:257) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:509) at org.apache.solr.servlet.SolrDispatchFilter.dispatch(SolrDispatchFilter.java:262) at org.apache.solr.servlet.SolrDispatchFilter.lambda$doFilter$0(SolrDispatchFilter.java:219) at org.apache.solr.servlet.ServletUtils.traceHttpRequestExecution2(ServletUtils.java:249) at org.apache.solr.servlet.ServletUtils.rateLimitRequest(ServletUtils.java:215) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:213) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:195) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:210) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1635) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:527) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:131) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:598) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:122) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:223) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1580) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:221) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1384) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:176) at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:484) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1553) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:174) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1306) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:129) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:149) at org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:228) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:141) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:122) at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:301) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:122) at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:822) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:122) at org.eclipse.jetty.server.Server.handle(Server.java:563) at org.eclipse.jetty.server.HttpChannel$RequestDispatchable.dispatch(HttpChannel.java:1598) at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:753) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:501) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:287) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:314) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:100) at org.eclipse.jetty.io.SelectableChannelEndPoint$1.run(SelectableChannelEndPoint.java:53) at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.runTask(AdaptiveExecutionStrategy.java:421) at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.consumeTask(AdaptiveExecutionStrategy.java:390) at 
org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.tryProduce(AdaptiveExecutionStrategy.java:277) at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.run(AdaptiveExecutionStrategy.java:199) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:411) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:969) at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.doRunJob(QueuedThreadPool.java:1194) at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1149) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: org.apache.solr.common.SolrException: Could not load conf for core findix: Error loading solr config from /var/solr/data/findix/conf/solrconfig.xml at org.apache.solr.core.ConfigSetService.loadConfigSet(ConfigSetService.java:278) at org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1707) at org.apache.solr.core.CoreContainer.lambda$loadInternal$12(CoreContainer.java:1057) at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:212) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:299) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ... 
1 more Caused by: org.apache.solr.common.SolrException: Error loading solr config from /var/solr/data/findix/conf/solrconfig.xml at org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:161) at org.apache.solr.core.ConfigSetService.createSolrConfig(ConfigSetService.java:309) at org.apache.solr.core.ConfigSetService.loadConfigSet(ConfigSetService.java:262) ... 9 more Caused by: java.security.AccessControlException: access denied ("java.io.FilePermission" "/usr/share/java" "read") at java.base/java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) at java.base/java.security.AccessController.checkPermission(AccessController.java:897) at java.base/java.lang.SecurityManager.checkPermission(SecurityManager.java:322) at java.base/java.lang.SecurityManager.checkRead(SecurityManager.java:661) at java.base/sun.nio.fs.UnixPath.checkRead(UnixPath.java:818) at java.base/sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:399) at java.base/java.nio.file.Files.newDirectoryStream(Files.java:604) at org.apache.solr.core.SolrResourceLoader.getURLs(SolrResourceLoader.java:289) at org.apache.solr.core.SolrResourceLoader.getFilteredURLs(SolrResourceLoader.java:318) at org.apache.solr.core.SolrConfig.initLibs(SolrConfig.java:969) at org.apache.solr.core.SolrConfig.(SolrConfig.java:243) at org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:153) ... 11 more - also logged in/home/www/logs/solr-errors.log
q=*:*&start=0&rows=30&fq=type:classifieds&fq=country:DE&fq={!tag=white_label_id}white_label_id:0&fq=confirmed:1&fl=*,score&facet=true&facet.mincount=1&facet.limit=100&facet.field={!ex=province}province&facet.field={!ex=area}area&facet.field={!ex=quarter}quarter&facet.field={!ex=details}details&facet.field={!ex=rooms}rooms&facet.field={!ex=sqm}sqm&facet.field={!ex=picture}picture&facet.field={!ex=ad_type}ad_type&facet.field={!ex=xx_cat_1}xx_cat_1&facet.field={!ex=xx_cat_2}xx_cat_2&facet.field={!ex=xx_cat_3}xx_cat_3&stats=true&stats.field={!ex=price}price
What seems to be the problem here? |
I've found that the best way to compare your Axios request to the corresponding Postman request is via the **Postman console**.
In the console you can see (hidden) proxy settings, hidden headers, and other network info.
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/bG4ka.png |
It seems to me you could achieve your goal by means of formulas.
If you insert one column and one row, you could use this formula:

```
=IF(SUM(G$5:G5)<G$2;IF(SUM(F6:$F6)<8;1;"");"")
```
[![enter image description here][1]][1]
Or you can keep your original rows and columns layout and adopt this other formula:

```
=IF(ROW()=5;
    IF(COLUMN()=7;
        1;
        IF(SUM(F5:$G5)<8;1;""));
    IF(SUM(G4:G$5)<G$2;
        IF(COLUMN()=7;
            1;
            IF(SUM(F5:$G5)<8;1;""));
        "")
)
```
[![enter image description here][2]][2]
[1]: https://i.stack.imgur.com/we5Ul.png
[2]: https://i.stack.imgur.com/I4Ouv.png |
I understand that controllers in Argonaut are singletons, therefore I do not want to have a bean instance injected into the controller for use in the controller methods. Instead, I want to get a new instance of the bean on each call. For example:
```java
@Controller("/store")
@Secured(SecurityRule.IS_AUTHENTICATED)
public class StoreController {

    @Post("/get")
    @Produces(MediaType.APPLICATION_JSON)
    public List<Chunk> get(@Body Set<String> ids) {
        Store store ... // <- injected
    }
}
```
Is it possible at all in Argonaut? |
How to get a new instance of a bean in an Argonaut controller method |
|argonaut| |
I was following [How to build PyTorch from Source][1] in order to install PyTorch, as I have an older graphics card which only supports CUDA 11.4. I am working in a Conda environment. At the last stage I get the following error:
```
Building wheel torch-2.4.0a0+git2b1ba0c
-- Building version 2.4.0a0+git2b1ba0c
cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_ARGS=-DCMAKE_AR=/home/sinfinities/.conda/envs/StableD/bin/x86_64-conda-linux-gnu-ar -DCMAKE_CXX_COMPILER_AR=/home/sinfinities/.conda/envs/StableD/bin/x86_64-conda-linux-gnu-gcc-ar -DCMAKE_C_COMPILER_AR=/home/sinfinities/.conda/envs/StableD/bin/x86_64-conda-linux-gnu-gcc-ar -DCMAKE_RANLIB=/home/sinfinities/.conda/envs/StableD/bin/x86_64-conda-linux-gnu-ranlib -DCMAKE_CXX_COMPILER_RANLIB=/home/sinfinities/.conda/envs/StableD/bin/x86_64-conda-linux-gnu-gcc-ranlib -DCMAKE_C_COMPILER_RANLIB=/home/sinfinities/.conda/envs/StableD/bin/x86_64-conda-linux-gnu-gcc-ranlib -DCMAKE_LINKER=/home/sinfinities/.conda/envs/StableD/bin/x86_64-conda-linux-gnu-ld -DCMAKE_STRIP=/home/sinfinities/.conda/envs/StableD/bin/x86_64-conda-linux-gnu-strip -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/home/sinfinities/Downloads/pytorch/torch -DCMAKE_PREFIX_PATH=/home/sinfinities/.conda/envs/StableD/lib/python3.12/site-packages;/home/sinfinities/.conda/envs/StableD:/home/sinfinities/.conda/envs/StableD/x86_64-conda-linux-gnu/sysroot/usr -DNUMPY_INCLUDE_DIR=/home/sinfinities/.conda/envs/StableD/lib/python3.12/site-packages/numpy/core/include -DPYTHON_EXECUTABLE=/home/sinfinities/.conda/envs/StableD/bin/python -DPYTHON_INCLUDE_DIR=/home/sinfinities/.conda/envs/StableD/include/python3.12 -DPYTHON_LIBRARY=/home/sinfinities/.conda/envs/StableD/lib/libpython3.12.a -DTORCH_BUILD_VERSION=2.4.0a0+git2b1ba0c -DUSE_NUMPY=True /home/sinfinities/Downloads/pytorch
-- The CXX compiler identification is GNU 9.5.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - failed
-- Check for working CXX compiler: /home/sinfinities/.conda/envs/StableD/bin/c++
-- Check for working CXX compiler: /home/sinfinities/.conda/envs/StableD/bin/c++ - broken
CMake Error at /home/sinfinities/.conda/envs/StableD/share/cmake-3.26/Modules/CMakeTestCXXCompiler.cmake:60 (message):
  The C++ compiler
    "/home/sinfinities/.conda/envs/StableD/bin/c++"
  is not able to compile a simple test program.
  It fails with the following output:
    Change Dir: /home/sinfinities/Downloads/pytorch/build/CMakeFiles/CMakeScratch/TryCompile-tQ7Kro
    Run Build Command(s):/home/sinfinities/.conda/envs/StableD/bin/ninja -v cmTC_6d41d && [1/2] /home/sinfinities/.conda/envs/StableD/bin/c++ -o CMakeFiles/cmTC_6d41d.dir/testCXXCompiler.cxx.o -c /home/sinfinities/Downloads/pytorch/build/CMakeFiles/CMakeScratch/TryCompile-tQ7Kro/testCXXCompiler.cxx
    [2/2] : && /home/sinfinities/.conda/envs/StableD/bin/c++ CMakeFiles/cmTC_6d41d.dir/testCXXCompiler.cxx.o -o cmTC_6d41d && :
    FAILED: cmTC_6d41d
    : && /home/sinfinities/.conda/envs/StableD/bin/c++ CMakeFiles/cmTC_6d41d.dir/testCXXCompiler.cxx.o -o cmTC_6d41d && :
    /home/sinfinities/.conda/envs/StableD/bin/../lib/gcc/x86_64-conda-linux-gnu/9.5.0/../../../../x86_64-conda-linux-gnu/bin/ld: /home/sinfinities/.conda/envs/StableD/bin/../lib/gcc/x86_64-conda-linux-gnu/9.5.0/libstdc++.so: undefined reference to `memcpy@GLIBC_2.14'
    /home/sinfinities/.conda/envs/StableD/bin/../lib/gcc/x86_64-conda-linux-gnu/9.5.0/../../../../x86_64-conda-linux-gnu/bin/ld: /home/sinfinities/.conda/envs/StableD/bin/../lib/gcc/x86_64-conda-linux-gnu/9.5.0/libstdc++.so: undefined reference to `aligned_alloc@GLIBC_2.16'
    /home/sinfinities/.conda/envs/StableD/bin/../lib/gcc/x86_64-conda-linux-gnu/9.5.0/../../../../x86_64-conda-linux-gnu/bin/ld: /home/sinfinities/.conda/envs/StableD/bin/../lib/gcc/x86_64-conda-linux-gnu/9.5.0/libstdc++.so: undefined reference to `clock_gettime@GLIBC_2.17'
    collect2: error: ld returned 1 exit status
    ninja: build stopped: subcommand failed.
  CMake will not be able to correctly generate this project.
Call Stack (most recent call first):
  CMakeLists.txt:29 (project)
```
My nvidia-smi output is
Sun Mar 31 22:59:04 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.223.02 Driver Version: 470.223.02 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 N/A | N/A |
| 33% 43C P8 N/A / N/A | 53MiB / 1999MiB | N/A Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Wed_Jun__2_19:15:15_PDT_2021
Cuda compilation tools, release 11.4, V11.4.48
Build cuda_11.4.r11.4/compiler.30033411_0
gcc --version
gcc (conda-forge gcc 9.5.0-17) 9.5.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
c++ --version
c++ (conda-forge gcc 9.5.0-17) 9.5.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
I have been going back and forth with this but have not been able to figure out how to fix it. Any guidance will be much appreciated.
[1]: https://github.com/pytorch/pytorch?tab=readme-ov-file#get-the-pytorch-source |
Conda CMAKE CXX Compiler error while compiling Pytorch |
|c++|cmake|pytorch|conda| |
I am trying to accept case-insensitive fields in my API running on a JBoss server.
Currently I have a Spring Boot project, but I have a sprintApplicationContext.xml that is contextConfigured inside web.xml.
When I try the code below for MapperFeature's case-insensitive property, it does not work.
```
@Bean
public ObjectMapper objectMapper() {
ObjectMapper mapper = new ObjectMapper();
mapper.configure(MapperFeature.ACCEPT_CASE_INSENSITIVE_PROPERTIES, true);
return mapper;
}
```
Then I tried the same bean configuration with the Jackson ObjectMapper in XML, like below (with the help of ChatGPT):
```
<bean id="objectMapper" class="com.fasterxml.jackson.databind.ObjectMapper">
<property name="configOverrides">
<bean class="com.fasterxml.jackson.databind.cfg.MapperConfigOverrides">
<!-- Add custom configuration options here -->
<!-- For example, enable a feature -->
<property name="mapperFeatures">
<bean class="com.fasterxml.jackson.databind.cfg.MapperConfig$EnumResolverBuilder" factory-method="construct">
<constructor-arg type="com.fasterxml.jackson.databind.MapperFeature"/>
<constructor-arg>
<set>
<value>ACCEPT_CASE_INSENSITIVE_PROPERTIES</value>
</set>
</constructor-arg>
</bean>
</property>
</bean>
</property>
</bean>
```
|
{"Voters":[{"Id":18470692,"DisplayName":"ouroboros1"},{"Id":16343464,"DisplayName":"mozway"},{"Id":20430449,"DisplayName":"Panda Kim"}]} |
I have access to 2 nodes, each with 2 GPUs. I want to have 4 processes, each with its own GPU. I use `nccl` (if this is relevant).
Here is the Slurm script I tried. I tried different combinations of setup.
It only occasionally works as wanted. Most of the time, it creates 4 processes on 1 node and allocates 2 processes to 1 GPU. This slows down the program, causes out-of-memory errors, and makes `all_gather` fail.
How can I distribute processes correctly?
```
#!/bin/bash
#SBATCH -J jobname
#SBATCH -N 2
#SBATCH --cpus-per-task=10
# version 1
#SBATCH --ntasks-per-node=2
#SBATCH --gres=gpu:2
#SBATCH --gpu-bind=none
# version 2
#SBATCH --ntasks-per-node=2
#SBATCH --gres=gpu:2
# version 3
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=2
#SBATCH --gres=gpu:2
# version 4
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=2
#SBATCH --gres=gpu:2
#SBATCH --gpus-per-task=1
# # version 5
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=2
#SBATCH --gpus-per-task=1
module load miniconda3
eval "$(conda shell.bash hook)"
conda activate gpu-env
nodes=( $( scontrol show hostnames $SLURM_JOB_NODELIST) )
nodes_array=($nodes)
head_node=${nodes_array[0]}
head_node_ip=$(srun --nodes=1 --ntasks=1 -w "$head_node" hostname --ip-address)
echo Node IP: $head_node_ip
export LOGLEVEL=INFO
export NCCL_P2P_LEVEL=NVL
srun torchrun --nnodes 2 --nproc_per_node 2 --rdzv_id $RANDOM --rdzv_backend c10d --rdzv_endpoint $head_node_ip:29678 mypythonscript.py
```
In python script:
```
dist.init_process_group(backend="nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
```
Log:
```
[W socket.cpp:464] [c10d] The server socket has failed to listen on [::]:29678 (errno: 98 - Address already in use).
[2024-03-31 15:46:06,691] torch.distributed.elastic.agent.server.local_elastic_agent: [INFO] log directory set to: /tmp/torchelastic_f6ldgsym/4556_xxbhwnb4
[2024-03-31 15:46:06,691] torch.distributed.elastic.agent.server.api: [INFO] [default] starting workers for entrypoint: python
[2024-03-31 15:46:06,691] torch.distributed.elastic.agent.server.api: [INFO] [default] Rendezvous'ing worker group
[W socket.cpp:464] [c10d] The server socket has failed to bind to 0.0.0.0:29678 (errno: 98 - Address already in use).
[E socket.cpp:500] [c10d] The server socket has failed to listen on any local network address.
```
I am not sure if this is relevant, because for the successful cases, I also see this info. |
I am using the following generic interface to select matching key-value pair types of an interface:
```ts
interface DefaultParams<T> {
screen: keyof T;
params?: T[keyof T];
}
```
and I want to use it with these navigation types so that I can pass the respectively matching values in my navigate function:
```ts
export type RootTabParamList = {
Home: undefined | DefaultParams<HomeTabStackParamList>;
Health: undefined | DefaultParams<HealthTabStackParamList>;
};
export enum TenetCodes {
Energy = "Energy",
Sleep = "Sleep",
}
export type HomeTabStackParamList = {
Dashboard: { tab: 'feed' | 'recommended' };
Activity: undefined;
}
export type HealthTabStackParamList = {
HealthScreen: undefined;
SystemsList: undefined;
SystemDetailsScreen: {
tenetCode: TenetCodes;
tenetResult?: string;
};
SampleSummaryScreen: {
range?: Range;
tenetCode: TenetCodes;
sampleCode: string;
};
};
```
But it allows me to use both `HomeTabStackParamList` and `HealthTabStackParamList` interchangeably between the `Home` and `Health` keys.
The top-rated answer does not work with the new reflection implementation of [JEP 416](https://openjdk.org/jeps/416) in e.g. Java 21, which uses MethodHandles and ignores the flags value on the Field abstraction object.
One solution is to use Unsafe, however with [this JEP](https://openjdk.org/jeps/8323072) Unsafe and the important `long objectFieldOffset(Field f)` and
`long staticFieldOffset(Field f)` methods are getting deprecated for removal so for example this will not work in the future:
```java
final Unsafe unsafe = //..get Unsafe (...and add subsequent --add-opens statements for this to work)
final Field ourField = Example.class.getDeclaredField("changeThis");
final Object staticFieldBase = unsafe.staticFieldBase(ourField);
final long staticFieldOffset = unsafe.staticFieldOffset(ourField);
unsafe.putObject(staticFieldBase, staticFieldOffset, "it works");
```
I do not recommend this but it is possible in Java 21 with the new reflection implementation when making heavy use of the internal API if really needed.
# Java 21+ solution without `Unsafe`
See my answer [here](https://stackoverflow.com/a/77705202/23144795) on how to leverage the internal API to set a final field in Java 21 without Unsafe.
|
For a commercially free open-source task, you need to avoid tools that depend on licensed Ghostscript PDF handling in the background, such as ImageMagick, GraphicsMagick, etc.
If it's for personal use, then consider Ghostscript's sister MuTool. It's generally the fastest method; see: https://stackoverflow.com/questions/73482110/what-is-fastest-way-to-convert-pdf-to-jpg-image/73500232#73500232
So the best FOSS workhorse for this task is **Poppler**, and the means to convert PDF pages into images is **pdftoppm**, which has many output formats, including two types of JPEG. However, I recommend considering PNG as the preferable output for documents. Any difference in size is more than compensated for by the clarity of the pixels.
- For OCR use ppm
- For DOCuments / LineART use PNG
- for Photos use standard JPEG
> -png : generate a PNG file
> -jpeg : generate a JPEG file
> -jpegcmyk : generate a CMYK JPEG file
> -jpegopt : jpeg options, with format `<opt1>=<val1>[,<optN>=<valN>]*`
Typical Windows command line
```
"bin\pdftoppm.exe" -png -r %resolution% "%filename.pdf%" "%output/rootname%"
```
|
In *vobject*, the value of an EXDATE property is a *list* of datetime values.
So, instead of:
ev.instance.vevent.add("exdate").value = vevent.dtstart.value
your code should do something like:
ev.instance.vevent.add("exdate").value.append(vevent.dtstart.value)
This avoids the exception reported during serialization. |
In Prometheus operator ServiceMonitors, you can use `spec.endpoints[*].metricRelabelings` to alter or drop labels on the scraped metrics:
```
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
spec:
  endpoints:
  - interval: 30s
    port: metrics
    scheme: http
    metricRelabelings: # <-- here you shine!
    - action: labeldrop
      regex: (foo|otherlabeltodrop)
```
null |
If you're using the `options` prop of `v-data-table`, you can just set the page back to 1.
```
methods: {
goToFirstPage() {
this.$set(this.options, 'page', 1);
}
}
```
If not, add a `page` data and bind it to the data table so you can easily change the page.
```
<v-data-table :page="page" />
```
```
data: () => ({
page: 1,
}),
methods: {
goToFirstPage() {
this.page = 1;
}
}
``` |
You can try checking your internet connection/proxy, or you can specify a timeout like this:
COMPOSER_PROCESS_TIMEOUT=2000 composer create-project --prefer-dist laravel/laravel yourProjectName
|
Consider turning the problem around from "find out where the browser inserted line breaks" to "tell the browser to make specific line breaks".
This can be easily achieved by inserting [soft hyphens][1] (`­`) at desired locations.
[1]: https://en.wikipedia.org/wiki/Soft_hyphen |
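If you generate the markup programmatically, inserting the character is straightforward. A minimal sketch in Python (the syllable split here is hand-picked for illustration, not real hyphenation rules):

```python
# U+00AD (soft hyphen) marks an allowed break point; it stays invisible
# unless the browser actually breaks the line there.
SHY = "\u00ad"

def shy_join(syllables):
    # join hand-chosen syllable fragments with soft hyphens
    return SHY.join(syllables)

word = shy_join(["in", "com", "pre", "hen", "si", "bil", "i", "ties"])
# stripping the soft hyphens recovers the original word
assert word.replace(SHY, "") == "incomprehensibilities"

# in HTML source you can also write the named entity instead
html = word.replace(SHY, "&shy;")
print(html)  # in&shy;com&shy;pre&shy;hen&shy;si&shy;bil&shy;i&shy;ties
```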
You just need to indent the `.` and `var x` lines one more level so that they're within the second conditional:
```
- var a = 5;
- var b = 3;
script
if (a === 5)
if (b === 2)
.
var x = true;
include script.js
```
If you want `x` to always be declared and be either true or false based on your conditions, another approach would be something like this:
```
- var a = 5
- var b = 3
script
| var x = #{a === 5 && b === 2}
include script.js
```
This will make `x` true if the Pug variables `a` and `b` are `5` and `2` respectively, and `false` if not. |
{"Voters":[{"Id":550094,"DisplayName":"Thierry Lathuille"},{"Id":2395282,"DisplayName":"vimuth"},{"Id":5320906,"DisplayName":"snakecharmerb"}]} |
I can declare a C# positional record like this:
```
public record Box(double Capacity);
```
which results in the definition of a record class named `Box` with a property named `Capacity`, as well as a primary constructor for `Box` that takes a parameter named `Capacity`. Now I want to be able to create independent documentation comments for all four of these items:
1. The record type `Box`.
2. The property `Capacity`.
3. The primary constructor.
4. The constructor parameter `Capacity`.
If I create a documentation comment like this:
```
/// <summary>
/// A container for items.
/// </summary>
/// <param name="Capacity">How much the Box can hold</param>
public record Box(double Capacity);
```
then I get the *same* summary comment for both the `Box` type and the constructor, and the *same* parameter comment for both the `Box` property and the constructor parameter.
As discussed in [this question](https://stackoverflow.com/questions/65341952/documentation-comments-for-properties-of-positional-records-in-c-sharp/65342485#65342485), I can create independent comments for the property and the constructor parameter by explicitly declaring the property:
```
/// <summary>
/// A container for items.
/// </summary>
/// <param name="Capacity">Specifies the size of the box</param>
public record Box(double Capacity) {
/// <summary>
/// Returns the size of the box.
/// </summary>
public double Capacity { get; init; } = Capacity;
}
```
And I can create independent comments for all of the parts by explicitly declaring the constructor as well:
```
/// <summary>
/// A container for items.
/// </summary>
public record Box {
/// <summary>
/// Returns the size of the box.
/// </summary>
public double Capacity { get; init; }
/// <summary>
/// Creates a new Box instance.
/// </summary>
/// <param name="Capacity">Specifies the size of the box</param>
public Box(double Capacity) {
this.Capacity = Capacity;
}
}
```
But now I have completely lost the syntactic elegance of the record type! Is there any way to retain the concise record declaration with positional parameters while generating independent documentation comments for all of its generated parts?
|
While writing python (3.12.2) code to interact with Adobe Illustrator via WIN32COM (Windows 10), I sometimes crash Illustrator with "pure virtual function call" errors.
Here is an example line of code that causes such a crash:
```
font_size = text_frame.TextRange.CharacterAttributes.Size
```
The mere act of trying to access the Size property triggers the crash. However, these two lines of semantically-equivalent code manage to avoid any crashing:
```
text_range = text_frame.TextRange
font_size = text_range.CharacterAttributes.Size
```
I'm hoping someone who knows the intricacies of WIN32COM programming can take me on a medium-depth dive into the technical guts of interacting with COM objects and perhaps explain why breaking up the access chain into two parts makes the code work. I'd like to be able to anticipate when this will happen so I can write working code without having to stumble into this minefield every time I try to access something new from the Illustrator object model.
The full exception traceback is as follows:
```
Traceback (most recent call last):
File "C:\Users\John\PycharmProjects\Adobe API\Short Grain 500 Hex Coordinates.py", line 83, in <module>
main()
File "C:\Users\John\PycharmProjects\Adobe API\Short Grain 500 Hex Coordinates.py", line 21, in main
original = OriginalTextInfo(item)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\John\PycharmProjects\Adobe API\hex_coord.py", line 10, in __init__
self.font_size = item.TextRange.CharacterAttributes.Size
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\John\PycharmProjects\venv\Lib\site-packages\win32com\client\__init__.py", line 585, in __getattr__
return self._ApplyTypes_(*args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\John\PycharmProjects\venv\Lib\site-packages\win32com\client\__init__.py", line 574, in _ApplyTypes_
self._oleobj_.InvokeTypes(dispid, 0, wFlags, retType, argTypes, *args),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pywintypes.com_error: (-2147023170, 'The remote procedure call failed.', None, None)
``` |
I'm switching my app from firebase web (V8) to react-native-firebase. (I figured it would be less work to refactor than going to web V9) But I have run into a problem.
I use Priority values to store a value that indicates the ordering of objects in a realtime database container, and when I read the snapshot the priorities are not there. It seems like the top-level snapshot has a priority value, but when I access the child snapshots with `snap.child('child1')` or `snap.forEach()`, the child snapshots do not contain a `.priority` property.
This is the code that has been working in my app for over 5 years (with added debugging printouts).
```
subscribeFB() {
if (!this.unlistenFB) {
this.unlistenFB = this.user.child('Counternames').on(
'value',
(snap) => {
// Need to also grab the current timestamps when
// Counternames changes.
this.getTimestamps(snap);
},
(error) => {
info('Counternames subscription cancelled because', error);
}
);
}
}
getTimestamps(snap) {
this.user.child('Timestamps').once(
'value',
(tsSnap) => {
setTimeout(
() =>
MultiActions.setCounters({
counterSnap: snap,
timestamps: tsSnap.val(),
}),
0
);
},
(error) => {
info('Could not get timestamps because', error);
}
);
}
// Then in setCounters I use both snaps
setCounters(snaps) {
[...]
if ( timestamps.sequence > this.state.timestamps.sequence) {
debug('Getting new sequence from FB');
// get sequence from FB
var CounternameSnap = snaps.counterSnap;
var cns = CounternameSnap.exportVal();
debug(`CounternameSnap exportVal ${JSON.stringify(cns, null, 2)}`);
var ch = CounternameSnap.child('-MH3NnGU5iM0eakrd7ZS').exportVal();
debug(`Counter 44 exportVal ${JSON.stringify(ch)}`);
CounternameSnap.forEach((ctr) => {
var p = ctr.getPriority();
debug(`Counter priority ${p} has key ${ctr.key}, val ${ctr.val()}`);
// Do something with priority...
});
```
When I change the order of the counters on another device, my `.on` listener triggers and I get the following printout:
```
LOG MultiStore:debug Getting new sequence from FB +2ms
LOG MultiStore:debug CounternameSnap exportVal {
".value": {
"-MH3MqLK32gCi0pqBUg3": "Counter 33",
"-MH4yucF8rmeewGbPAVI": "Counter 55",
"-MH3NnGU5iM0eakrd7ZS": "Counter 44",
"-NuBYkNVlAUIUeoQQCKA": "Counter 66",
"-MGQOBLgEvEekOg6geQI": "Counter 11",
"-MGzzZXdUwpBr8RlLLE-": "Counter 22"
},
".priority": null
} +1ms
LOG MultiStore:debug Counter 44 exportVal {".value":"Counter 44"} +1ms
LOG MultiStore:debug Counter priority undefined has key -MGzzZXdUwpBr8RlLLE-, val Counter 22 +2ms
LOG MultiStore:debug Counter priority undefined has key -MH4yucF8rmeewGbPAVI, val Counter 55 +1ms
LOG MultiStore:debug Counter priority undefined has key -MH3NnGU5iM0eakrd7ZS, val Counter 44 +1ms
LOG MultiStore:debug Counter priority undefined has key -MH3MqLK32gCi0pqBUg3, val Counter 33 +0ms
LOG MultiStore:debug Counter priority undefined has key -MGQOBLgEvEekOg6geQI, val Counter 11 +1ms
LOG MultiStore:debug Counter priority undefined has key -NuBYkNVlAUIUeoQQCKA, val Counter 66 +1ms
```
This shows the top-level snap contains a `.priority` value (null) but the "Counter 44" child does not have the property at all. I can verify that the other device successfully pushed the new priorities to firebase: although the firebase console does not show priorities, I can export and download the container, and it looks like this:
```
{
"-MGQOBLgEvEekOg6geQI": {
".value": "Counter 11",
".priority": 4
},
"-MGzzZXdUwpBr8RlLLE-": {
".value": "Counter 22",
".priority": 0
},
"-MH3MqLK32gCi0pqBUg3": {
".value": "Counter 33",
".priority": 1
},
"-MH3NnGU5iM0eakrd7ZS": {
".value": "Counter 44",
".priority": 2
},
"-MH4yucF8rmeewGbPAVI": {
".value": "Counter 55",
".priority": 3
},
"-NuBYkNVlAUIUeoQQCKA": {
".value": "Counter 66",
".priority": 5
}
}
```
Is this exposing a previously unknown bug in my program, is it a bug in RNFirebase, a change in underlying FB V9+ iOS/Android libraries (It has the same problem on both platforms), or ???
react-native: 0.70.15
react-native-firebase: 19.1.1
[UPDATE:] I wondered if offline persistence might have something to do with this so I enabled persistence and found the problem remains the same either way. |
For this section of code, I am trying to randomly choose either theta or mu to be zero. When one variable is zero, then I need the other one uniformly randomized (and vice versa).
```
N = 10000
random = np.arccos(np.random.uniform(-1, 1, N))
zero = 0
choices = [random, zero]
theta = np.random.choice(choices)
if theta == random:
mu = zero
else:
mu = random
```
I know that `random` and `zero` do not have a homogenous shape. This is why I got the error `ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.` However, I do not know how to fix this (I am still very new to programming). Any thoughts would be appreciated.
|
How do I fix a NumPy ValueError for an inhomogeneous array shape? |
|python|numpy| |
null |
So I'm trying to run this command to turn files into a .chd file:
```
for i in *.cue; do chdman createcd -i "$i" -o "${i%.*}.chd"; done
```
But I get this error:
```
fish: ${ is not a valid variable in fish.
for i in *.cue; do chdman createcd -i "$i" -o "${i%.*}.chd"; done
```
Which is boggling to me because I run this command to convert FLAC to Ogg and it works just fine:
```
find . -name "*flac" -exec sh -c 'oggenc -q 7 "$1" -o ~/vorbis/"${1%.*}".ogg' _ {} \;
```
The second line uses `${` and works fine. So why is the first command giving me that error for `${`? I've searched around and can't find much on the subject. I'd really like to understand so if I use a similar string in the future I'll know what to do. I don't really know much about coding, so this really has my interest. And frustration at the same time.
What I have tried is replacing `${` with `$(`, among other variants, but nothing has worked. The command works fine in zsh, so it's something specific to fish.
I did find [this](https://stackoverflow.com/questions/57882337/resolving-is-not-a-valid-variable-in-fish-error-when-installing-ghc-the-h)
Looks like someone having a similar problem, but... do I really have to install something (ghc) to get this working? Seems like there would be a different work around. |
The man page and [this 2002 email from Linus Torvalds](https://web.archive.org/web/20151201111541/http://article.gmane.org/gmane.linux.kernel/43445) (and [longer thread](https://lists.archive.carbon60.com/linux/kernel/287485)) strongly suggests that all [`write(2)`](https://www.man7.org/linux/man-pages/man2/write.2.html) calls
* to regular, local files
* on "regular UNIX filesystems"
* implemented entirely in-kernel (so no network file systems or FUSE)
* of at most 0x7ffff000 bytes (per the man-page)
are atomic, short of I/O errors (e.g. disk full), or the writing process being SIGKILL'ed (not to mention kernel bugs and power failures).
This both for concurrent writes to the same file descriptor, and for concurrent writes to different file descriptors referring to the same inode.
This is a stronger requirement than what [POSIX guarantees](https://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html#tag_15_09_07), since it also promises that `write` (and `read`) will never return short in these situations, but always write (read) the entire specified amount. |
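A small experiment consistent with these guarantees (a sketch, not a proof — assumes a local Linux filesystem): two threads append fixed-size records through a single `O_APPEND` descriptor, and every record should come out whole and full-length:

```python
# Sketch: two threads share one O_APPEND descriptor to a regular local
# file; per the guarantees above, each write lands whole (no
# interleaving) and is never short.
import os
import tempfile
import threading

REC = 1024        # record size, far below the 0x7ffff000 limit
COUNT = 200       # records per thread

tmp = tempfile.NamedTemporaryFile(delete=False)
path = tmp.name
tmp.close()
fd = os.open(path, os.O_WRONLY | os.O_APPEND)

def writer(ch: bytes) -> None:
    payload = ch * (REC - 1) + b"\n"
    for _ in range(COUNT):
        # write(2) on a regular local file writes the full amount
        assert os.write(fd, payload) == REC

threads = [threading.Thread(target=writer, args=(c,)) for c in (b"A", b"B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
os.close(fd)

data = open(path, "rb").read()
os.remove(path)
assert len(data) == 2 * COUNT * REC            # nothing lost or short
for i in range(0, len(data), REC):             # each record is homogeneous
    rec = data[i : i + REC]
    assert rec == rec[:1] * (REC - 1) + b"\n"
```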
When you want to add R packages to a new OS running R, you need to be aware of any underlying OS packages that are required. In this case, reading `sf`'s [`DESCRIPTION`](https://cran.r-project.org/web/packages/sf/index.html), we see
```
SystemRequirements: GDAL (>= 2.0.1), GEOS (>= 3.4.0), PROJ (>= 4.8.0), sqlite3
```
It's not always intuitive how to know what these labels mean. One place you might look for a hint about this is Posit's [Package Manager](https://packagemanager.posit.co/client/#/repos/cran/packages/overview?search=sf), which lists the following OS requirements on Ubuntu 22.04 (on which `rocker/shiny` is based).
```
apt-get install -y libgdal-dev
apt-get install -y gdal-bin
apt-get install -y libgeos-dev
apt-get install -y libproj-dev
apt-get install -y libsqlite3-dev
apt-get install -y libssl-dev
apt-get install -y libudunits2-dev
```
Many of those are likely already present.
First, let me reproduce the problem:
```bash
$ docker build -t quux .
$ docker run -it --rm quux R --quiet
> library("sf")
Error: package or namespace load failed for ‘sf’ in dyn.load(file, DLLpath = DLLpath, ...):
unable to load shared object '/usr/local/lib/R/site-library/units/libs/units.so':
libudunits2.so.0: cannot open shared object file: No such file or directory
```
Through experience, I know that Jammy (22.04) needs these packages for R's `sf` package. Add these lines somewhere in your `Dockerfile`:
```docker
# ...
RUN apt-get update && \
apt-get install -y libproj22 libudunits2-0 libgdal30 && \
rm -rf /var/lib/apt/lists/*
RUN R -e "install.packages('sf')"
```
Notes:
1. Whether you join those with the previous `apt` and `install.packages(..)` commands is up to you.
2. I encourage you to review https://docs.docker.com/develop/develop-images/instructions/ for "best practices", especially when installing OS-level packages, attempting to balance that with image size (ergo the `rm -rf` command).
It now works:
```bash
$ docker build -t quux .
$ docker run -it --rm quux R --quiet
> library("sf")
Linking to GEOS 3.10.2, GDAL 3.4.1, PROJ 8.2.1; sf_use_s2() is TRUE
```
|
null |
You can achieve this with NTILE(). Assuming your data is actually row-level like this, rather than already grouped, you can use the query below (adapt it with GROUP BY and HAVING if the sales figures are summations). Note that NTILE(4) ordered by ascending sales numbers the quartiles in the reverse of your order, so subtract the quartile from 5 to flip it.

    SELECT
        product,
        5 - NTILE(4) OVER (ORDER BY sales) AS quartile
    FROM
        mytable
    WHERE
        sales > 0
    ORDER BY
        sales DESC
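To sanity-check the quartile direction, here is the same query run through Python's stdlib `sqlite3` (SQLite has supported window functions since 3.25), with made-up product/sales rows:

```python
# Hypothetical data: 8 products with sales 10..80, so each quartile
# holds exactly two rows and the direction is easy to eyeball.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (product TEXT, sales INTEGER)")
con.executemany(
    "INSERT INTO mytable VALUES (?, ?)",
    [(f"p{i}", i * 10) for i in range(1, 9)],
)

quartiles = con.execute("""
    SELECT product, 5 - NTILE(4) OVER (ORDER BY sales) AS quartile
    FROM mytable
    WHERE sales > 0
    ORDER BY sales DESC
""").fetchall()

# highest sales land in quartile 1, lowest in quartile 4
assert quartiles[0] == ("p8", 1)
assert quartiles[-1] == ("p1", 4)
```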
Setting document field value using Firestore Functions |
|firebase|function|google-cloud-firestore| |
null |
{"OriginalQuestionIds":[3654830],"Voters":[{"Id":476,"DisplayName":"deceze","BindingReason":{"GoldTagBadge":"python"}}]} |
The "SLF4J: No SLF4J providers were found" message indicates slf4j-api version 2.0 or later is in use. You need to place a compatible provider on your class path, for example, logback version 1.3 or later. |
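For example, assuming a Maven build, a compatible pairing might look like this (the version numbers are illustrative, not prescriptive):

```xml
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>2.0.12</version>
</dependency>
<!-- logback-classic 1.3+ acts as an SLF4J 2.0 provider -->
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.4.14</version>
</dependency>
```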
My requirement is to have a core app to manage authentication, which should include google authentication (all-auth in django) and django_cas_ng (CAS authentication).
Now I want to be able to use this authentication app for multiple projects (CAS system).
I should register only for one app , and should be able to login to another app using the same username and password. (CAS auth system).
```python
path('', django_cas_ng.views.LoginView.as_view(), name='cas_ng_login'),
path('accounts/logout', django_cas_ng.views.LogoutView.as_view(), name='cas_ng_logout'),
```
I the url cas_ng_login redirects me to `CAS_SERVER_URL = 'http://localhost:8000/accounts/'`, where I have the google authentication implemented.
 |
null |
So I'm only able to really speak to the Go server side of this question as the biggest problem you're going to run into on the JavaScript side is having to access gRPC. There's an [issue](https://github.com/apache/arrow/issues/17325) tracking examples for Flight in ArrowJS which hasn't yet been filled.
That all said, for the server side there's an example server in the [documentation](https://pkg.go.dev/github.com/apache/arrow/go/v16@v16.0.0-20240330012921-9f0101ec1433/arrow/flight).
For your particular example the minimum would be to implement one method for your server: `DoGet`:
```go
type server struct {
flight.BaseFlightServer
}
func (s *server) DoGet(tkt *flight.Ticket, svc flight.FlightService_DoGetServer) error {
// if your server can return more than one stream of data,
// the ticket is how you would determine which to send.
// for your example, we're going to ignore the ticket.
rec := GetPutData()
defer rec.Release()
// create a record stream writer
wr := flight.NewRecordWriter(svc, ipc.WithSchema(rec.Schema()))
defer wr.Close()
// write the record
return wr.Write(rec)
}
```
Your main function could then potentially look like so:
```go
func main() {
srv := flight.NewFlightServer()
// replace "localhost:0" with the hosting addr and port
// using 0 for the port tells it to pick any open port
// rather than specifying one for it.
srv.Init("localhost:0")
// register the flight server object we defined above
srv.RegisterFlightService(&server{})
// you can tell it to automatically shutdown on SIGTERM if you like
srv.SetShutdownOnSignals(syscall.SIGTERM, os.Interrupt)
fmt.Printf("Server listening on %s...\n", srv.Addr())
srv.Serve()
}
```
When you start making the server a bit more complicated you can look into defining the `GetFlightInfo` method along with the other methods.
Hope this helps! Feel free to ask any further questions you have. |
Documentation comments for record types with primary constructors in C# |
|c#|constructor|record|xml-comments| |
null |
I am building a programming language using C++, LLVM, Clang, and LLDB. Users can write `import "@stdio.h"`, which is similar to `#include <stdio.h>`, so now I need to support C-like imports of headers. However, I can't get the paths to the system headers, let alone parse them!
Other answers have gotten old, since the LLVM and Clang APIs have been updated. I tried this code:
```c++
void print_system_header_paths() {
clang::CompilerInstance CI;
auto Invocation = std::make_shared<clang::CompilerInvocation>();
CI.setInvocation(Invocation);
// I can eliminate this line to get rid of an error but other answer suggested creating a preprocessor
CI.createPreprocessor(clang::TranslationUnitKind::TU_Prefix);
const clang::HeaderSearchOptions &HSOpts = CI.getInvocation().getHeaderSearchOpts();
if (HSOpts.SystemHeaderPrefixes.empty()) {
std::cout << "No system header paths found." << std::endl;
} else {
for (const auto &Path : HSOpts.SystemHeaderPrefixes) {
std::cout << Path.Prefix << std::endl;
}
}
}
```
I also tried the command
`clang -v -c -xc++ nul` on Windows; however, I checked all the directories it listed, and none of them contain `stdio.h`.
https://stackoverflow.com/questions/41470241/how-do-i-extract-the-search-paths-for-headers-in-the-standard-library-in-clang
My programming language : https://github.com/Qinetik/chemical |
Get search paths for headers in the standard library in Clang? |
|c++|clang|llvm|clang++|libc| |
{"Voters":[{"Id":80901,"DisplayName":"mjn"},{"Id":1940850,"DisplayName":"karel"},{"Id":354577,"DisplayName":"Chris"}],"SiteSpecificCloseReasonIds":[16]} |
I'm trying to write a simple parser for some yml config files (with serde) which involves some additional parsing for custom formats inside many params. My additional parsers are based on simple Regex'es, and I created a registry so that they are "compiled" only once. The registry is essentially a ```HashMap<String, _>```.
Now, of course, the method registering a regex first checks whether the same pattern was already inserted, by means of the ```HashMap::entry()``` method.
Problem is, if I write something like this:
```
fn register_pattern(&mut self, pattern: &str) {
    if let Vacant(entry) = self.regex_registry.entry(pattern.to_string()) {
        let asc = self.parse_locale_pattern(pattern.to_string(), ParseLocale::Ascii);
        let jap = self.parse_locale_pattern(pattern.to_string(), ParseLocale::Japanese);
        let parse_localized_regex = ...expression involving asc, jap...;
    }
}
```
...the compiler yells at me like this:
```lang-none
error[E0499]: cannot borrow `*self` as mutable more than once at a time
--> src/parsing/parsers/parser_service.rs:65:23
|
63 | if let Vacant(entry) = self.regex_registry.entry(pattern.to_string()) {
| ------------------- first mutable borrow occurs here
64 | let asc = self.parse_locale_pattern(pattern.to_string(), ParseLocale::Ascii);
65 | let jap = self.parse_locale_pattern(pattern.to_string(), ParseLocale::Japanese);
| ^^^^ second mutable borrow occurs here
66 | let parse_localized_regex = ParseLocalized::new(asc, jap);
67 | entry.insert(parse_localized_regex);
| ----- first borrow later used here
```
The solution I've found works, but seems too complex to me:
```
fn register_pattern(&mut self, pattern: &str) {
    if let Vacant(_) = self.regex_registry.entry(pattern.to_string()) {
        let asc = self.parse_locale_pattern(pattern.to_string(), ParseLocale::Ascii);
        let jap = self.parse_locale_pattern(pattern.to_string(), ParseLocale::Japanese);
        if let Vacant(entry) = self.regex_registry.entry(pattern.to_string()) {
            entry.insert(ParseLocalized::new(asc, jap));
        }
    }
}
``` |
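One common restructuring, sketched below with placeholder types since `ParseLocale`/`ParseLocalized` aren't shown in full, is to make each borrow of `self` end before the next one starts: a `contains_key` check, then the two parse calls, then a single insert. This avoids holding the `entry()` borrow across the parse calls entirely:

```rust
use std::collections::HashMap;

// Placeholder for the question's ParseLocalized type: it just holds the
// two per-locale results here.
pub struct ParseLocalized {
    pub asc: String,
    pub jap: String,
}

pub struct Registry {
    pub regex_registry: HashMap<String, ParseLocalized>,
}

impl Registry {
    // Stand-in for parse_locale_pattern (the real one builds a Regex).
    fn parse_locale_pattern(&self, pattern: &str, locale: &str) -> String {
        format!("{}:{}", pattern, locale)
    }

    // Each borrow of `self` ends before the next begins: the contains_key
    // check, the two parse calls, then the insert.
    pub fn register_pattern(&mut self, pattern: &str) {
        if self.regex_registry.contains_key(pattern) {
            return;
        }
        let asc = self.parse_locale_pattern(pattern, "ascii");
        let jap = self.parse_locale_pattern(pattern, "japanese");
        self.regex_registry
            .insert(pattern.to_string(), ParseLocalized { asc, jap });
    }
}
```

The extra hash lookup from `contains_key` plus `insert` is usually negligible next to regex compilation, and the borrow checker accepts it even if `parse_locale_pattern` needs `&mut self`.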
I'm the maintainer of [xnec2c][1] and we found that this code was causing xnec2c to beep through the PC speaker (excessively and annoyingly) while resizing a window or rotating an object:
```c
snprintf( txt, sizeof(txt), "%7.2f", Viewer_Gain(proj_params, calc_data.freq_step) );
gtk_entry_set_text( GTK_ENTRY(Builder_Get_Object(builder, widget)), txt );
```
Why would `gtk_entry_set_text` beep?
[1]: https://www.xnec2c.org/ |
Why does GTK beep when calling `gtk_entry_set_text` (while resizing a window)? |
I'm migrating a Spring Boot web application from a monolith to microservices. There is a WebSocket service, but I can't connect to it after the migration, even though this code still works in the monolith.
Here is the log from the client:
```
Access to XMLHttpRequest at 'http://127.0.0.1:8080/api/ws/info?t=1711855420621' from origin 'http://localhost:3000' has been blocked by CORS policy: The 'Access-Control-Allow-Origin' header contains multiple values 'http://localhost:3000, http://localhost:3000', but only one is allowed.
```
Here is the log from the server (api-gateway):
```
Sorted gatewayFilterFactories: [[GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.RemoveCachedBodyFilter@13067317}, order = -2147483648], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.AdaptCachedBodyGlobalFilter@344683bb}, order = -2147482648], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.NettyWriteResponseFilter@2de8fe55}, order = -1], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.ForwardPathFilter@38c2baec}, order = 0], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.GatewayMetricsFilter@111a419a}, order = 0], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.RouteToRequestUrlFilter@dc085f8}, order = 10000], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.ReactiveLoadBalancerClientFilter@15d560a7}, order = 10150], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.LoadBalancerServiceInstanceCookieFilter@53f9b423}, order = 10151], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.WebsocketRoutingFilter@187cf828}, order = 2147483646], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.NettyRoutingFilter@5e8035ac}, order = 2147483647], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.ForwardRoutingFilter@52b2ecb9}, order = 2147483647]]
2024-03-31T10:26:34.318+07:00 TRACE 15124 --- [api-gateway] [ parallel-8] [ ] o.s.c.g.filter.RouteToRequestUrlFilter : RouteToRequestUrlFilter start
2024-03-31T10:26:34.319+07:00 TRACE 15124 --- [api-gateway] [ parallel-8] [ ] s.c.g.f.ReactiveLoadBalancerClientFilter : ReactiveLoadBalancerClientFilter url before: lb://realtime-service/api/ws/info?t=1711855594299
2024-03-31T10:26:34.320+07:00 TRACE 15124 --- [api-gateway] [ parallel-8] [ ] s.c.g.f.ReactiveLoadBalancerClientFilter : LoadBalancerClientFilter url chosen: http://host.docker.internal:8086/api/ws/info?t=1711855594299
2024-03-31T10:26:34.322+07:00 DEBUG 15124 --- [api-gateway] [ parallel-8] [ ] g.f.h.o.ObservedRequestHttpHeadersFilter : Will instrument the HTTP request headers [Host:"127.0.0.1:8080", sec-ch-ua:""Google Chrome";v="123", "Not:A-Brand";v="8", "Chromium";v="123"", sec-ch-ua-mobile:"?0", User-Agent:"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36", sec-ch-ua-platform:""Windows"", Accept:"*/*", Origin:"http://localhost:3000", Sec-Fetch-Site:"cross-site", Sec-Fetch-Mode:"cors", Sec-Fetch-Dest:"empty", Referer:"http://localhost:3000/", Accept-Encoding:"gzip, deflate, br, zstd", Accept-Language:"en-US,en;q=0.9,vi;q=0.8,la;q=0.7", Forwarded:"proto=http;host="127.0.0.1:8080";for="127.0.0.1:64886"", X-Forwarded-For:"127.0.0.1", X-Forwarded-Proto:"http", X-Forwarded-Port:"8080", X-Forwarded-Host:"127.0.0.1:8080"]
2024-03-31T10:26:34.325+07:00 DEBUG 15124 --- [api-gateway] [ parallel-8] [ ] g.f.h.o.ObservedRequestHttpHeadersFilter : Client observation {name=http.client.requests(null), error=null, context=name='http.client.requests', contextualName='null', error='null', lowCardinalityKeyValues=[http.method='GET', http.status_code='UNKNOWN', spring.cloud.gateway.route.id='eee45cc4-bd46-412d-98a5-f555c3b4eac0', spring.cloud.gateway.route.uri='lb://realtime-service'], highCardinalityKeyValues=[http.uri='http://127.0.0.1:8080/api/ws/info?t=1711855594299'], map=[class io.micrometer.core.instrument.Timer$Sample='io.micrometer.core.instrument.Timer$Sample@290be52c', class io.micrometer.tracing.handler.TracingObservationHandler$TracingContext='TracingContext{span=6608d7eae803b66b027ee92588e83ffe/97ba6f0d15af2bd8}', class io.micrometer.core.instrument.LongTaskTimer$Sample='SampleImpl{duration(seconds)=1.344E-4, duration(nanos)=134400.0, startTimeNanos=82941717160100}'], parentObservation=org.springframework.security.web.server.ObservationWebFilterChainDecorator$PhasedObservation@4072a488} created for the request. New headers are [Host:"127.0.0.1:8080", sec-ch-ua:""Google Chrome";v="123", "Not:A-Brand";v="8", "Chromium";v="123"", sec-ch-ua-mobile:"?0", User-Agent:"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36", sec-ch-ua-platform:""Windows"", Accept:"*/*", Origin:"http://localhost:3000", Sec-Fetch-Site:"cross-site", Sec-Fetch-Mode:"cors", Sec-Fetch-Dest:"empty", Referer:"http://localhost:3000/", Accept-Encoding:"gzip, deflate, br, zstd", Accept-Language:"en-US,en;q=0.9,vi;q=0.8,la;q=0.7", Forwarded:"proto=http;host="127.0.0.1:8080";for="127.0.0.1:64886"", X-Forwarded-For:"127.0.0.1", X-Forwarded-Proto:"http", X-Forwarded-Port:"8080", X-Forwarded-Host:"127.0.0.1:8080", traceparent:"00-6608d7eae803b66b027ee92588e83ffe-97ba6f0d15af2bd8-00"]
2024-03-31T10:26:34.329+07:00 TRACE 15124 --- [api-gateway] [ctor-http-nio-6] [ ] o.s.c.gateway.filter.NettyRoutingFilter : outbound route: e6029e59, inbound: [ee6ad2f0-11]
2024-03-31T10:26:34.334+07:00 DEBUG 15124 --- [api-gateway] [ctor-http-nio-6] [ ] .f.h.o.ObservedResponseHttpHeadersFilter : Will instrument the response
2024-03-31T10:26:34.335+07:00 DEBUG 15124 --- [api-gateway] [ctor-http-nio-6] [ ] .f.h.o.ObservedResponseHttpHeadersFilter : The response was handled for observation {name=http.client.requests(null), error=null, context=name='http.client.requests', contextualName='null', error='null', lowCardinalityKeyValues=[http.method='GET', http.status_code='UNKNOWN', spring.cloud.gateway.route.id='eee45cc4-bd46-412d-98a5-f555c3b4eac0', spring.cloud.gateway.route.uri='lb://realtime-service'], highCardinalityKeyValues=[http.uri='http://127.0.0.1:8080/api/ws/info?t=1711855594299'], map=[class io.micrometer.core.instrument.Timer$Sample='io.micrometer.core.instrument.Timer$Sample@290be52c', class io.micrometer.tracing.handler.TracingObservationHandler$TracingContext='TracingContext{span=6608d7eae803b66b027ee92588e83ffe/97ba6f0d15af2bd8}', class io.micrometer.core.instrument.LongTaskTimer$Sample='SampleImpl{duration(seconds)=0.0095177, duration(nanos)=9517700.0, startTimeNanos=82941717160100}'], parentObservation=org.springframework.security.web.server.ObservationWebFilterChainDecorator$PhasedObservation@4072a488}
2024-03-31T10:26:34.337+07:00 TRACE 15124 --- [api-gateway] [ctor-http-nio-6] [ ] o.s.c.g.filter.NettyWriteResponseFilter : NettyWriteResponseFilter start inbound: e6029e59, outbound: [ee6ad2f0-11]
2024-03-31T10:26:34.339+07:00 TRACE 15124 --- [api-gateway] [ctor-http-nio-6] [ ] o.s.c.g.filter.GatewayMetricsFilter : spring.cloud.gateway.requests tags: [tag(httpMethod=GET),tag(httpStatusCode=200),tag(outcome=SUCCESSFUL),tag(routeId=eee45cc4-bd46-412d-98a5-f555c3b4eac0),tag(routeUri=lb://realtime-service),tag(status=OK)]
```
I initially had a connection issue, which I fixed by adding a custom RouteLocator and some configuration in application.properties, but then I ran into this duplicate-headers issue.
This is the configuration of api-gateway
```
@Bean
public CorsWebFilter corsWebFilter() {
    String clientHostName = environment.getProperty("CLIENT_HOSTNAME");

    CorsConfiguration corsConfig = new CorsConfiguration();
    corsConfig.setAllowedOrigins(Arrays.asList(clientHostName));
    corsConfig.setMaxAge(3000L);
    corsConfig.setAllowedMethods(List.of("PUT", "GET", "POST", "DELETE", "OPTION"));
    corsConfig.setAllowedHeaders(List.of("*"));
    corsConfig.setAllowCredentials(true);

    UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
    source.registerCorsConfiguration("/**", corsConfig);

    return new CorsWebFilter(source);
}

@Bean
public RouteLocator customRouteLocator(RouteLocatorBuilder builder) {
    return builder.routes()
            .route(p -> p
                    .path("/api/ws/**")
                    .uri("lb://realtime-service"))
            .build();
}
```
This is a part of application.properties of api-gateway
```
## Realtime Service Route
spring.cloud.gateway.routes[5].id=realtime-service
spring.cloud.gateway.routes[5].uri=lb://realtime-service
spring.cloud.gateway.routes[5].predicates[0]=Path=/api/realtime
```
This is how I handle the socket service
```
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.enableSimpleBroker("/comment", "/reaction", "/comment-total", "/home", "/profile", "/reply");
        registry.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry
                .addEndpoint("/api/ws")
                .setAllowedOriginPatterns("*")
                .withSockJS();
    }
}
```
This is how I connect to socket server
```
const socket = new SockJS(socketUrl)
const client = Stomp.over(socket)
client.debug = null

client.connect({ Authorization: `Bearer ${ load("access-token") }` }, () => {
    // do something
})
```
I'd appreciate any help!
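A side note on the duplicate-header error, offered as an assumption rather than a confirmed diagnosis: if both the gateway's `CorsWebFilter` and the downstream realtime-service add CORS headers, Spring Cloud Gateway's built-in `DedupeResponseHeader` filter can strip the duplicate on the way back out. In the properties style used above, that would look roughly like:

```
spring.cloud.gateway.default-filters[0]=DedupeResponseHeader=Access-Control-Allow-Origin Access-Control-Allow-Credentials, RETAIN_FIRST
```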
|
|c++|linker|clang|llvm|lld| |
I am trying to create a server and client interaction on an ESP32 microcontroller. It consists of a simple HTTP server and a client: the client sends a `POST` request to the server, and the server does some logic with the data it receives.
And for more context, here is the server-side code:
```c
if (client.available()) {   // if there's bytes to read from the client,
  char c = client.read();   // read a byte, then
  Serial.write(c);          // print it out the serial monitor
  header += c;
  if (c == '\n') {          // if the byte is a newline character
    // if the current line is blank, you got two newline characters in a row.
    // that's the end of the client HTTP request, so send a response:
    if (currentLine.length() == 0) {
      int contentLength = header.indexOf("Content-Length: ");
      if (contentLength != -1) {
        contentLength = header.substring(contentLength + 16).toInt();
        int bodyRead = 0;
        while (bodyRead < contentLength && client.available()) {
          char c = client.read();
          requestBody += c;
          bodyRead++;
        }
      }

      // Separate direction and vehicle id
      int sd = requestBody.indexOf('=');
      int vd = requestBody.indexOf('&');
      String direction = requestBody.substring(sd + 1, vd);
      int pos = requestBody.indexOf('=', vd);
      String vehicle = requestBody.substring(pos + 1);

      // HTTP headers always start with a response code (e.g. HTTP/1.1 200 OK)
      // and a content-type so the client knows what's coming, then a blank line:
      client.println("HTTP/1.1 200 OK");
      client.println("Content-type:text/html");
      client.println("Connection: close");
      client.println("Got it, " + direction + " for : " + vehicle);
      client.println();

      Serial.println("Request Body : " + requestBody);
      Serial.println("Direction : " + direction);
      Serial.println("Vehicle : " + vehicle);

      // Break out of the while loop
      break;
    } else {    // if you got a newline, then clear currentLine
      currentLine = "";
    }
  } else if (c != '\r') {   // if you got anything else but a carriage return character,
    currentLine += c;       // add it to the end of the currentLine
  }
}
```
And here is the client-side code:
```c
if (WiFi.status() == WL_CONNECTED) {
  WiFiClient client;
  HTTPClient http;

  // Your Domain name with URL path or IP address with path
  http.begin(client, serverName);

  // Specify content-type header
  http.addHeader("Content-Type", "application/x-www-form-urlencoded");

  // Data to send with HTTP POST
  String httpRequestData = "from=south&id=military";

  // Send HTTP POST request
  int httpResponseCode = http.POST(httpRequestData);

  Serial.print("HTTP Response code: ");
  Serial.println(httpResponseCode);

  // Free resources
  http.end();
}
else {
  Serial.println("WiFi Disconnected");
}
```
Now, the problem is that the server cannot correctly read the `POST` request.
Before testing my own client, I used API Tester (a mobile application) to test the server, and it worked as expected. The server printed this to the serial monitor:
```none
POST / HTTP/1.1
user-agent: apitester.org Android/7.5(641)
accept: */*
Content-Type: application/x-www-form-urlencoded
Content-Length: 22
Host: 192.168.4.1
Connection: Keep-Alive
Accept-Encoding: gzip
Request Body : from=south&id=military
Direction : south
Vehicle : military
Client disconnected.
```
But when I send the `POST` request from a client, my server doesn't return any data on the serial monitor:
```none
POST / HTTP/1.1
Host: 192.168.4.1
User-Agent: ESP32HTTPClient
Connection: keep-alive
Accept-Encoding: identity;q=1,chunked;q=0.1,*;q=0
Content-Type: application/x-www-form-urlencoded
Content-Length: 23
Request Body :
Direction :
Vehicle :
Client disconnected.
```
I still haven't figured out why this happens. I guess the mistake is on the client side, since the server works with API Tester, but from the many tutorials I have read, my client code should be correct.
Also, because there is no error code or similar output, I don't know where to start fixing this issue. I hope you can help me with this.
[EDIT]:
```
if (client.available()) {   // if there's bytes to read from the client,
  char c = client.read();   // read a byte, then
  Serial.write(c);          // print it out the serial monitor
  header += c;
  if (c == '\n') {          // if the byte is a newline character
    // if the current line is blank, you got two newline characters in a row.
    // that's the end of the client HTTP request, so send a response:
    if (currentLine.length() == 0) {
      // Serial.print("STATUS REPORT <header> : ");
      // Serial.println(header);

      // Find the Content-Length header
      int contentLength = header.indexOf("Content-Length: ");
      if (contentLength != -1) {
        Serial.print("STATUS REPORT <contentLength> : ");
        Serial.println(contentLength);

        // Find the end of the Content-Length line
        int endOfContentLength = header.indexOf("\r\n", contentLength);
        Serial.print("STATUS REPORT <endOfContentLength> : ");
        Serial.println(endOfContentLength);

        if (endOfContentLength != -1) {
          // Extract the Content-Length value as an integer
          contentLength = header.substring(contentLength + 16, endOfContentLength).toInt();
          int bodyRead = 0;
          while (bodyRead < contentLength && client.available()) {
            if (client.available()) {
              char c = client.read();
              requestBody += c;
              bodyRead++;
            }
          }
        }
      }

      // Separate direction and vehicle id
      int sd = requestBody.indexOf('=');
      int vd = requestBody.indexOf('&');
      String direction = requestBody.substring(sd + 1, vd);
      int pos = requestBody.indexOf('=', vd);
      String vehicle = requestBody.substring(pos + 1);

      client.println("HTTP/1.1 200 OK");
      client.println("Content-type:text/plain");
      client.println("Connection: close");
      client.println();
      client.println("Got it, " + direction + " for : " + vehicle);

      Serial.println("Request Body : " + requestBody);
      Serial.println("Direction : " + direction);
      Serial.println("Vehicle : " + vehicle);

      // Break out of the while loop
      break;
    } else {    // if you got a newline, then clear currentLine
      currentLine = "";
    }
  } else if (c != '\r') {   // if you got anything else but a carriage return character,
    currentLine += c;       // add it to the end of the currentLine
  }
}
```
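For what it's worth, one thing I'd rule out on the server side (an assumption, not a confirmed diagnosis): the request body may simply not have arrived yet when the blank line after the headers is seen, so `client.available()` returns 0 and the body loop exits before reading anything. A sketch of a body-read loop that waits with a timeout instead of bailing out, using the same Arduino-style `client`/`millis()` API as the code above:

```
// Wait up to 1000 ms for the body bytes instead of stopping as soon as
// client.available() momentarily returns 0.
int bodyRead = 0;
unsigned long start = millis();
while (bodyRead < contentLength && (millis() - start) < 1000UL) {
  if (client.available()) {
    requestBody += (char)client.read();
    bodyRead++;
  }
}
```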
|
I was trying to run my website, but when I build it with `ng build`, I get this error:
>Prerendered 0 static routes.
Application bundle generation failed. [5.115 seconds]
>
>✘ [ERROR] NG8003: No directive found with exportAs 'ngForm'. [plugin angular-compiler]
>
> src/app/home.component.html:2:56:
2 │ <form (ngSubmit)="onConnexion(loginForm)" #loginForm="ngForm">
╵ ~~~~~~
>
> Error occurs in the template of component HomeComponent.
>
> src/app/home.component.ts:6:14:
6 │ templateUrl:'./home.component.html', // Make sure this path is co...
Here is my code:
**home.component.html**:
```html
<div class="container">
  <form (ngSubmit)="onConnexion(loginForm)" #loginForm="ngForm">
    <div class="avatar">
      <img src="assets/utilisateur.png" alt="Utilisateur">
    </div>
    <h2>se connecter</h2>
    <div class="input-container">
      <input type="text" placeholder="Identifiant" name="identifiant" ngModel required>
    </div>
    <div class="input-container">
      <input type="password" placeholder="Mot de passe" name="password" ngModel required>
    </div>
    <button type="submit">Connexion</button>
    <div class="bottom-section">
      <a href="/moob" class="forgot-password">Mot de passe oublié ?</a>
    </div>
  </form>
</div>
```
**app.module.ts**:
```typescript
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { AppRoutingModule } from './app-routing.module';
import { HomeComponent } from './home.component';

@NgModule({
  declarations: [
    HomeComponent,
    // ... other components ...
  ],
  imports: [
    BrowserModule,
    AppRoutingModule,
    FormsModule // Verify this line
  ],
  providers: [],
  bootstrap: [HomeComponent]
})
export class AppModule { }
```
I have tried:
- updating Angular
- deleting `node_modules` and reinstalling it
- various other solutions
When I enter values in all of the "Rating data" entry fields, all values get printed except the input line voltage and output line voltage. I also get the error message "All Rating data input fields must be filled", because the code does not capture the values for input line voltage and output line voltage. Can somebody explain why? Thanks

```
# Rating Data Labels
labels = ["KVA Rating:", "IT Type:", "Input line voltage:", "Output line voltage:",
          "Frequency:", "Connection type:", "Current Density:", "Flux Density:",
          "Conductor material:"]
for i, label_text in enumerate(labels):
    tk.Label(self.input_frame, text=label_text).grid(row=i + 3, column=0, sticky="w")

# Entry fields
self.entries = {}

# Rating data entry
entry_values = ["KVA Rating", "IT Type", "Input line voltage", "Output line voltage", "Frequency",
                "Connection type", "Current Density", "Flux Density", "Conductor material"]
for i, value in enumerate(entry_values):
    entry = ttk.Entry(self.input_frame)
    entry.grid(row=i + 3, column=1, padx=1, pady=1)
    self.entries["entry_" + str(i)] = entry  # Use unique keys

def calculate(self):
    try:
        # Get values from entry fields
        values = [entry.get() for entry in self.entries.values()]
        print("Values:", values)

        # Perform calculations
        # Check for empty fields in Rating data
        if any(value == '' for value in values[:9]):
            raise ValueError("All Rating data input fields must be filled.")

        # Rating data
        kva_rating = float(values[0])
        isolation_transformer_type = str(values[1])
        input_line_voltage = float(values[2])
        output_line_voltage = float(values[3])
        frequency = float(values[4])
        connection_type = str(values[5])
        current_density = float(values[6])
        flux_density = float(values[7])
        conductor_material = str(values[8])
```

I tried printing the values to check whether all of the entry fields were being captured, and found that two fields come back empty even though I enter values in them.
The matrix you've posted is symmetric, and real-valued. (In other words, `A = A.T`, and it has no complex numbers.) This matters because all matrices which are symmetric and real-valued are [normal matrices](https://en.wikipedia.org/wiki/Normal_matrix). [Source](https://en.wikipedia.org/wiki/Symmetric_matrix#Symmetry_implies_normality). If the matrix is normal, then any polar decomposition of it follows `P @ U = U @ P`. [Source](https://math.stackexchange.com/questions/3038582/prove-that-the-polar-decomposition-of-normal-matrices-a-su-is-such-that-su).
Any diagonal matrix is also symmetric. However, technically the matrix you have posted is not diagonal - it has entries outside its main diagonal. The matrix is only [tri-diagonal](https://en.wikipedia.org/wiki/Tridiagonal_matrix). These matrices are not necessarily symmetric. However, if your tridiagonal matrix is symmetric and real-valued, then its polar decomposition is commutative.
In addition to mathematically proving this idea, you can also check it experimentally. The following code generates thousands of matrices, and their polar decompositions, and checks if they are commutative.
```
import numpy as np
from scipy.linalg import polar

N = 4
iterations = 10000

for i in range(iterations):
    A = np.random.randn(N, N)
    # A = A + A.T
    U, P = polar(A)
    are_equal = np.allclose(U @ P, P @ U)
    if not are_equal:
        print("Matrix A does not have commutative polar decomposition!")
        print("Value of A:")
        print(A)
        break
    if (i + 1) % (iterations // 10) == 0:
        print(f"Checked {i + 1} matrices, all had commutative polar decompositions")
```
If you run this, it will immediately find a counter-example, because the matrix is not symmetric. However, if you uncomment `A = A + A.T`, which forces the random matrix to be symmetric, then all of the matrices work.
Lastly, if you need a left-sided polar decomposition, you can use `polar(A, side='left')` to get that. The [documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.polar.html) explains how to do this. |
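As a quick sanity check of that left-sided variant, the sketch below builds one symmetric matrix and verifies both the factorization and the commutativity, using the same `scipy.linalg.polar` function as above (note SciPy returns `U, P` in that order even for `side='left'`, where the factorization is `A = P @ U`):

```python
import numpy as np
from scipy.linalg import polar

# One random real symmetric (hence normal) 4x4 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = A + A.T

# Left-sided polar decomposition: A = P @ U.
U, P = polar(A, side='left')

print(np.allclose(A, P @ U))      # the factorization holds
print(np.allclose(P @ U, U @ P))  # the factors commute because A is normal
```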
I use hibernate for a desktop application and the database server is in another country.
Unfortunately, connection problems are very common at the moment.
These are excerpts from the log file on the database server:
```
1. 2024-03-19 14:08:42 2378 [Warning] Aborted connection 2378 to db: 'CMS_DB' user: 'JOHN' host: 'bba-83-130-102-145.alshamil.net.ae' (Got an error reading communication packets)
2. 2024-03-19 13:44:45 1803 [Warning] Aborted connection 1803 to db: 'CMS_DB' user: 'REMA' host: '188.137.160.92' (Got timeout reading communication packets)
3. 2024-03-19 11:51:08 1526 [Warning] Aborted connection 1526 to db: 'unconnected' user: 'unauthenticated' host: '92.216.164.102' (Got an error reading packet communications)
4. 2024-03-19 11:51:08 1526 [Warning] Aborted connection 1526 to db: 'unconnected' user: 'unauthenticated' host: '92.216.164.102' (This connection closed normally without authentication)
5. 2024-03-19 11:55:26 1545 [Warning] IP address '94.202.229.78' could not be resolved: No such host is known.
```

**In addition, these error messages often appear on the client side:**

```
javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: Unable to acquire JDBC Connection
	at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:154)
	at org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1542)
	at org.hibernate.query.Query.getResultList(Query.java:165)
	at
```

**Also this:**

```
Caused by: java.sql.SQLTransactionRollbackException: (conn=9398) Deadlock found when trying to get lock; try restarting transaction
	at org.mariadb.jdbc.internal.util.exceptions.ExceptionFactory.createException(ExceptionFactory.java:76)
	at org.mariadb.jdbc.internal.util.exceptions.ExceptionFactory.create(ExceptionFactory.java:153)
	at org.mariadb.jdbc.MariaDbStatement.executeExceptionEpilogue(MariaDbStatement.java:274)
	at org.mariadb.jdbc.ClientSidePreparedStatement.executeInternal(ClientSidePreparedStatement.java:229)
	at org.mariadb.jdbc.ClientSidePreparedStatement.execute(ClientSidePreparedStatement.java:149)
	at org.mariadb.jdbc.ClientSidePreparedStatement.executeUpdate(ClientSidePreparedStatement.java:181)
	at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeUpdate(NewProxyPreparedStatement.java:1502)
	at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:197)
	... 41 more
Caused by: org.mariadb.jdbc.internal.util.exceptions.MariaDbSqlException: Deadlock found when trying to get lock; try restarting transaction
	at org.mariadb.jdbc.internal.util.exceptions.MariaDbSqlException.of(MariaDbSqlException.java:34)
	at
```
So far I had the following c3p0 configuration in my hibernate.cfg.xml.
```
<!-- Related to the connection START -->
<property name="connection.driver_class">org.mariadb.jdbc.Driver</property>
<property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
<!-- Related to Hibernate properties START -->
<property name="hibernate.connection.driver_class">org.mariadb.jdbc.Driver</property>
<property name="hibernate.show_sql">false</property>
<property name="hibernate.format_sql">false</property>
<property name="hibernate.current_session_context_class">thread</property>
<property name="hibernate.temp.use_jdbc_metadata_defaults">false</property>
<property name="hibernate.generate_statistics">true</property>
<property name="hibernate.enable_lazy_load_no_trans">true</property>
<!-- c3p0 Setting -->
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<property name="hibernate.c3p0.min_size">4</property>
<property name="hibernate.c3p0.max_size">15</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">20</property>
<property name="hibernate.c3p0.acquire_increment">3</property>
<property name="hibernate.c3p0.idle_test_period">100</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.unreturnedConnectionTimeout">30</property>
<property name="hibernate.c3p0.debugUnreturnedConnectionStackTraces">true</property>
```
Can someone check whether these values make sense for a remote connection? Any recommended change is warmly welcomed!
Thanks in advance!
|firefox|automated-tests|playwright| |
null |
Execute some code only once for bank wire confirmed payment in WooCommerce |
There appears to be undocumented, native support for CloudFront signing of requests to Lambda Function URL origins. Neither Terraform nor the AWS Console support creating an Origin Access Control with origin type `lambda`, but the AWS CLI will happily create one. I verified with this OAC CloudFront does sign the requests and that Lambda successfully verifies them.
I wrote instructions in a blog post to implement via the CLI or Terraform: https://www.micah.soy/posts/lock-down-lambda-function-access-with-cloudfront/
I also opened an issue with the Terraform AWS Provider to add support for this value in the resource schema: https://github.com/hashicorp/terraform-provider-aws/issues/36660 |
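For reference, creating such an OAC via the CLI looks roughly like the following. The flag names are from the `aws cloudfront create-origin-access-control` command, but the exact config members are written from memory, so check them against the CLI reference:

```
aws cloudfront create-origin-access-control \
  --origin-access-control-config '{
    "Name": "lambda-url-oac",
    "Description": "Sign requests to a Lambda Function URL origin",
    "SigningProtocol": "sigv4",
    "SigningBehavior": "always",
    "OriginAccessControlOriginType": "lambda"
  }'
```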
I have a relatively large PostgreSQL (v12) table storing JSONB data that has unintentional duplicate data (based on a composite unique index on two columns, `aCol` and `bCol`). In order to create that index, I have to first delete all of the accidental duplicates from the DB. This works fine in 2/3 of our DB servers (servers for different environments, i.e. Dev and Prod, all with the same DB schema), but in the 3rd the query will just never finish, and I've let it run for up to 2 hours (in the other environments it takes a matter of minutes).
I've tried configuring the resources and configuration settings to be the same in all environments; for example, the RAM, disk size and ops, CPU, `effective_io_concurrency`, etc. I've also compared the sizes of the tables and I don't think that's the issue, because one of the servers that does complete has ~2 million records for a total of ~9 GB while the one that fails has ~400,000 records for a total of ~2 GB.
The query in question is:
```sql
DELETE FROM aTable t1 USING aTable t2
WHERE t1.aCol = t2.aCol
AND t1.bCol = t2.bCol
AND t1.id != t2.id
AND t1.created_at <= t2.created_at
```
I'm not a DBA or SQL expert by any means so I would definitely appreciate any insight into what else might be causing this query to hang in 1 specific environment, or whether there is a more efficient way to achieve what I'm trying to do that could potentially bypass the issue. Thanks in advance
**Edit:** In response to the comments, here are the explain plans for the queries in the two environments.
Working environment:
```
Delete on aTable a (cost=800504.63..5706702.06 rows=72439076 width=12)
-> Merge Join (cost=800504.63..5706702.06 rows=72439076 width=12)
Merge Cond: ((a.aCol = b.aCol) AND (a.bCol = b.bCol))
Join Filter: ((a.id <> b.id) AND (a.created_at <= b.created_at))
-> Sort (cost=400252.31..404391.52 rows=1655683 width=50)
Sort Key: a.aCol, a.bCol
-> Seq Scan on aTable a (cost=0.00..184763.83 rows=1655683 width=50)
-> Materialize (cost=400252.31..408530.73 rows=1655683 width=50)
-> Sort (cost=400252.31..404391.52 rows=1655683 width=50)
Sort Key: b.aCol, b.bCol
-> Seq Scan on aTable b (cost=0.00..184763.83 rows=1655683 width=50)
```
Failing environment:
```
Delete on aTable a (cost=0.00..1794266.00 rows=59613596 width=12)
-> Nested Loop (cost=0.00..1794266.00 rows=59613596 width=12)
-> Seq Scan on aTable a (cost=0.00..40365.00 rows=395600 width=50)
-> Index Scan using idx_aTable_aCol on aTable b (cost=0.00..4.42 rows=1 width=50)
Index Cond: (aCol = a.aCol)
Filter: ((a.id <> id) AND (a.created_at <= created_at) AND (a.bCol = bCol))
```
I notice that the failing environment's plan seems to be using an index on `aCol` (which I believe is a hash index), even though the tables in both environments have that exact same index. I'm not really sure what to make of that.
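For comparison, a window-function formulation of the same cleanup avoids the self-join entirely, which sidesteps the nested-loop plan the failing environment picks. This is only a sketch (untested against the actual schema), and note a semantic difference: it keeps exactly one row per `(aCol, bCol)` pair, whereas the original join deletes both rows of a `created_at` tie:

```sql
-- Keep the newest row per (aCol, bCol) pair; delete all others.
DELETE FROM aTable
WHERE id IN (
    SELECT id
    FROM (
        SELECT id,
               ROW_NUMBER() OVER (
                   PARTITION BY aCol, bCol
                   ORDER BY created_at DESC, id
               ) AS rn
        FROM aTable
    ) ranked
    WHERE rn > 1
);
```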
I'm building a server-side rendered Blazor app, where in Program.cs I have

```
builder
    .Services
    .AddRazorComponents()
    .AddInteractiveServerComponents();
```

and

```
app.MapRazorComponents<App>()
    .AddInteractiveServerRenderMode();
```
I built a language component like the code below, but it doesn't seem to work; at least, I don't see any change in localization for the other components.

```
@using System.Globalization
@inject IJSRuntime JSRuntime
@inject NavigationManager Nav

<select class="form-control" @onchange="ChangeLanguage">
    @foreach (var language in supportedLanguages)
    {
        <option value="@language">@language.DisplayName</option>
    }
</select>

@code
{
    CultureInfo[] supportedLanguages = new[]
    {
        new CultureInfo("en-US"),
        new CultureInfo("pt-PT"),
        new CultureInfo("fr-FR"),
    };

    private async Task ChangeLanguage(ChangeEventArgs e)
    {
        var culture = e.Value?.ToString();
        Console.WriteLine("culture is " + culture);
        if (!string.IsNullOrEmpty(culture))
        {
            await JSRuntime.InvokeVoidAsync("BlazorCulture.setCulture", culture);
        }
    }
}
```
If the other components are already rendered and I want them to pick up the updated language, how can I do it?
For example, two rendered components use
```IStringLocalizer<App> _localize```
But since I'm not changing pages (this is essentially a one-page website), how can the components be re-rendered to take the changed language into account?
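One direction worth trying, sketched as an assumption based on the usual Blazor Server localization pattern rather than anything confirmed by the code above: persist the culture (as `BlazorCulture.setCulture` presumably does) and then force a full reload, so the circuit restarts and every already-rendered component re-renders under the new `CultureInfo`:

```
private async Task ChangeLanguage(ChangeEventArgs e)
{
    var culture = e.Value?.ToString();
    if (!string.IsNullOrEmpty(culture))
    {
        await JSRuntime.InvokeVoidAsync("BlazorCulture.setCulture", culture);

        // A forced full load restarts the circuit; if startup reads the
        // persisted culture and sets CultureInfo.DefaultThreadCurrentCulture
        // and DefaultThreadCurrentUICulture, IStringLocalizer<App> will pick
        // it up in every component.
        Nav.NavigateTo(Nav.Uri, forceLoad: true);
    }
}
```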
How to set language in a server-side rendering blazor app |
|c#|.net|blazor|blazor-server-side| |
null |
null |