So I'm not sure what the `az ad sp` command is really for, but I just found out that to make service principal client secrets you need to use `az ad app`:

```
az ad app credential reset --id ${CLIENT_ID}
az ad app credential list --id ${CLIENT_ID}
```

These are visible in **Azure Portal > Entra ID (Active Directory) > App Registrations > sp-name > Certificates & Secrets > Client Secrets**. I'm not sure where the ones made by `az ad sp` go or how they can be used.
Relatively new to programming, I've been banging my head against a wall now for several days (on my own time). I need to export data from a list that I've collected through a sliced text and a collect-numbers function. I've come a long way by identifying tables in a Word document, and completing this import/export process for other tables. However, the table I am trying to send data to this time is not well built (snippet included below). Furthermore, in Word it has a "frozen" heading, meaning it repeats itself on the following page (it's a designed template for work, friggin' nightmare). So there are 3 problems I need to solve: 1) alternating goal columns (second/third), 2) merged rows, 3) repeating headings. I'd love some advice on this. Not one for giving up, but it's getting there.

[Goal table (copied from Word to Excel for ease of interpretation), "goal entry" being the column into which I want to enter my list](https://i.stack.imgur.com/eeJan.png)

```
# Open the existing Document object
doc = Document(Word_file)

# Find the '{Størrelser}' table
table_title = '{Størrelser}'
table = None
for tbl in doc.tables:
    if tbl.cell(0, 0).text.strip() == table_title:
        table = tbl
        break

## Add the values to the specific cell
for i, value in enumerate(Kontrollskjema):  # start=0 by default
    # Calculate row index, +1 to start from the second row
    row_idx = i + 1
    # Set column index to 2 for the third column for rows 2,3,4,5 and 8,9,10,11
    # else set it to 1 for the second column
    col_idx = 2 if row_idx in [2, 3, 4, 5, 8, 9, 10, 11] else 1
    # Check if the row exists
    if row_idx < len(table.rows):
        # Check if the row has the required column
        if len(table.rows[row_idx].cells) > col_idx:
            cell = table.cell(row_idx, col_idx)  # cell(row_idx, col_idx), 0-indexed
            cell.text = str(value)
            paragraph = cell.paragraphs[0]
            run = paragraph.runs[0]
            run.font.name = 'Avenir Next LT Pro'
            run.font.size = Pt(10)
            paragraph.alignment = WD_PARAGRAPH_ALIGNMENT.LEFT
        else:
            print(f"Row {row_idx + 1} does not have a column {col_idx + 1}.")
    else:
        print(f"Row {row_idx + 1} does not exist.")

print(f"the length of the variable Kontrollskjema is {len(Kontrollskjema)}")
print(Kontrollskjema)

def print_rows_per_column(table):
    # Initialize a list to hold the count of rows for each column
    column_counts = []
    # Iterate over the rows of the table
    for row in table.rows:
        # Count the number of cells (columns) in the row
        column_count = len(row.cells)
        # If the list of column counts is not long enough, extend it
        while len(column_counts) < column_count:
            column_counts.append(0)
        # Increment the count for each column in the row
        for i in range(column_count):
            column_counts[i] += 1
    # Print the number of rows for each column
    for i, count in enumerate(column_counts, start=1):
        print(f"The number of rows in column {i} is {count}")

# Usage
print_rows_per_column(table)
```

Output:

```
Row 30 does not have a column 2.
Row 31 does not have a column 2.
Row 32 does not have a column 2.
Row 33 does not have a column 2.
Row 34 does not have a column 2.
Row 35 does not exist.
Row 36 does not exist.
the length of the variable Kontrollskjema is 35
[353, 135, 141, 16, 566, 1396, 0.19, 0.93, 0.2, 0.8, 0.06, 2.8, 0.06, 417, 9.0, 19, 2.5, 1.7, 0.0, 0.81, 25, 20.3, 0.0, 'Oppvarming 16/7/52\n', 'Oppvarming 16/7/52\nKjøling 24/7/52\nVentilasjon 24/7/52\nBelysning 16/7/52\nUtsyr 16/7/52\nPersoner 24/7/52\n', 1.95, 1.95, 3.0, 1.8, 3.4, 0.0, 1.5, 0.55, 0.2, '0.74/1.0/0.84/0.78']
The number of rows in column 1 is 29
The number of rows in column 2 is 29
The number of rows in column 3 is 28
The number of rows in column 4 is 28
The number of rows in column 5 is 28
The number of rows in column 6 is 28
```
Inserting data into a non-organized table in word from SMO file
|python|smo|
Unfortunately for Frida the **Name** of an app is not the package name (called Identifier by Frida) but the label shown to the user. You can see the app list and the app **Name** recognized by Frida by executing

```
frida-ps -Ua
```

Example output:

```
 PID  Name      Identifier
----  --------  ---------------------------------
2799  Gmail     com.google.android.gm
2814  Messages  com.google.android.apps.messaging
2218  Settings  com.android.settings
```

So you have the choice to identify the app by its name (`-n` parameter) or PID (`-p` parameter), but the package name does not work.

Edit 2024-02: In recent Frida versions there is a new parameter that allows attaching to an app by its package name: `-N packagename` or `--attach-identifier packagename`
My code looks like this:

```
execBuffer({
  input: encoder.out.getData(),
  bin: 'gifsicle',
  args: ['--output', execBuffer.output, '--lossy=100', '--optimize=03', execBuffer.input]
}).then(data => {
  console.log(data); // Works :-)
  let result = data; // Doesn't work :-(
});

return result;
```

How can I return the result variable from a promise in this case?
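A minimal sketch of the underlying pattern (the function names here are hypothetical stand-ins, not from the original code): a value assigned inside `.then()` cannot be returned synchronously; instead, return the promise chain itself and unwrap the value at the call site.

```javascript
// Hypothetical stand-in for the async call (execBuffer in the original)
function fakeExecBuffer() {
  return Promise.resolve('gif-bytes');
}

// Return the promise chain itself instead of assigning inside .then()
function processImage() {
  return fakeExecBuffer().then(data => data.toUpperCase());
}

// The caller unwraps the value asynchronously with .then() or await
processImage().then(result => console.log(result)); // logs "GIF-BYTES"
```

The same idea applies to a `module.exports` function: export a function that returns the promise, and `await` it (or chain `.then()`) wherever it is consumed.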
How can I return a data variable from a Promise module.exports function using Node.js?
|javascript|node.js|promise|
As I understand it Stata doesn't care one bit what you put after `*!`. It simply looks for lines beginning that way and echoes them given a `which` query. In contrast, I think the routines used for maintenance of .pkg files on SSC standardize on the date format 20230213 for Kit Baum's convenience. In my own commands, I have usually followed the date format I used previously for the same command, whether 13feb2023 or 13 Feb 2023 or 13 February 2023. In 30 years of writing ados I have not heard a single whisper that any date format was problematic to anyone else, and I can not imagine any problems with your format beyond possibly unwillingness for third parties to adopt. I don't use and can't comment on GitHub.
Your mistake is...

```
$(OBJS_DIR)%.o : $(SRC_PREFIXED)  # ... <- here (A)
	@mkdir -p $(OBJS_DIR)
	gcc $(FLAGS) -c $< -o $@      # ... <- and here (B)
```

The pattern rule (A) says that *each* object file `objs/(ft_isalpha.o|ft_strlen.o)` depends on *all* the source files `src/ft_isalpha.c`, `src/ft_strlen.c`. Of course that is false.

Then the command (B) means that each target object file `$@` is to be made by compiling the source file `$<`. Per the [GNU Make manual: 10.5.3 Automatic Variables](https://www.gnu.org/software/make/manual/make.html#Automatic-Variables), `$<` means:

>`$<`
>
>The name of the first prerequisite. ...[cut]...

So each target object file `objs/(ft_isalpha.o|ft_strlen.o)` is made by compiling *the same* source file `src/ft_isalpha.c`. Thus you create a library `libft.a` that contains two object files `ft_isalpha.o`, `ft_strlen.o`, both of which define the same function `ft_isalpha` and neither of which defines `ft_strlen`. Your linkage then fails with an undefined reference to `ft_strlen`.

Your linkage does *not* yield a multiple-definition error for `ft_isalpha` because the linker satisfied your program's reference to `ft_isalpha` by extracting and linking `libft.a(ft_isalpha.o)` and then had no reason to link `libft.a(ft_strlen.o)`: no further unresolved symbols are defined in the library.

When you reverse the order of `ft_isalpha.c` and `ft_strlen.c`, the opposite thing happens, as you observed, because all of the object files are now compiled from `ft_strlen.c`.

To correct this particular mistake, change:

```
$(OBJS_DIR)%.o : $(SRC_PREFIXED)
```

to:

```
$(OBJS_DIR)%.o : $(SRC_DIR)%.c
```

That makes each object file depend only on the *corresponding* source file; the first prerequisite `$<` becomes the *only* prerequisite and each object file will be compiled from the correct source.
|google-chrome|selenium-chromedriver|google-chrome-devtools|chrome-for-testing|
In a DevOps pipeline I would like to use the Build.SourceVersionMessage as the releaseNotes in a NuGet.nuspec file I use to publish my artifact. In the NuGet.nuspec I have this line \<releaseNotes\>$release_notes$\</releaseNotes\>. I found that I need to escape the XML characters, so I use a PowerShell script to handle the escaping like this:

```
- task: PowerShell@2
  displayName: 'Modify NuGet.nuspec with release notes'
  inputs:
    targetType: 'inline'
    script: |
      # I can not find a way to pass a value with semicolons via the -Properties to nuget pack. Therefore we modify the NuGet.nuspec file directly.
      $escapedMsg = [System.Security.SecurityElement]::Escape("$(Build.SourceVersionMessage)")
      Write-Host "Escaped release notes: $escapedMsg"
      (Get-Content "NuGet.nuspec") -replace '\$release_notes\$', $escapedMsg | Set-Content "NuGet.nuspec"
```

However this script still has problems in case the message contains double quotes: the PowerShell script fails because the string inside `Escape(` is malformed. I could use single quotes in the script instead, but then it will not work in case the message has single quotes. Is there any way to get a SourceVersionMessage with any characters into the NuGet.nuspec file? Thanks
Using Build.SourceVersionMessage with ' and " in DevOps pipeline
When using ES modules you must load the **`fs`** module to enable file read and write support, according to this issue: [issue][1]

```javascript
import * as XLSX from 'xlsx'
import * as fs from 'fs';
XLSX.set_fs(fs);
```

  [1]: https://github.com/SheetJS/sheetjs/issues/2634#issuecomment-1231412497
A signal can be connected to another signal or a slot. You tried connecting it to a signal handler (more information here: [The QML Reference, Signal attributes](https://doc.qt.io/qt-6/qtqml-syntax-objectattributes.html#signal-attributes)). The correct syntax would be:

```
buttonRect.buttonClicked.connect(mouseArea.clicked); // not onClicked
```

What you most likely wanted was connecting the other way around (like you did in your answer):

```
mouseArea.buttonClicked.connect(buttonRect.clicked); // specifying buttonRect here is superfluous
```

What you should actually do is avoid the imperative connection altogether and do it declaratively, directly emitting the signal:

```
MouseArea {
    anchors.fill: parent
    cursorShape: Qt.PointingHandCursor
    hoverEnabled: true
    onClicked: buttonRect.clicked()
}
```

I also agree with @Mark that using a `Button` would be preferred.
I tried to make a full-screen dialog using Jetpack Compose using this code: ``` @Preview @Composable fun test() { val showDialog = remember { mutableStateOf(value = false) } Box( contentAlignment = Alignment.Center, modifier = Modifier.fillMaxSize().background(color = Color.White) ) { Box( contentAlignment = Alignment.Center, modifier = Modifier .size(size = 100.dp) .background(color = Color.Red) .clickable( onClick = { showDialog.value = true }, indication = null, interactionSource = remember { MutableInteractionSource() } ) ) { Text(text = "Dialog Button") } } val properties = DialogProperties( usePlatformDefaultWidth = false ) if (showDialog.value) { Dialog( onDismissRequest = { /*TODO*/ }, properties = properties, ) { Box( modifier = Modifier.fillMaxSize().background(color = Color.Blue) ) } } } ``` The running result of the above code: Before clicking the "dialog" button: [image][1] After clicking the "dialog" button: [image][2]. You can see in the images that the blue dialog cannot fill the entire screen, and there are gaps on the left and right sides of the dialog. I have already set `usePlatformDefaultWidth` to `false`, but it didn't work. [1]: https://i.stack.imgur.com/z59ey.png [2]: https://i.stack.imgur.com/EfoIi.png
I am very new to JavaFX and know nothing about it. I have been watching YouTube videos on how to use Scene Builder, but somehow when I'm on this part I can't set the code and this message prompts: [this is the message that prompts](https://i.stack.imgur.com/VDeO2.png). I need a fix for my problem that keeps on occurring.
|nuget|devops|pipeline|
Let's say a PWA is deployed to a sub-directory of a regular website (e.g. `https://example.net/app`). It uses hash mode for its navigation (e.g. `/app/#/login` and `/app/#/dashboard`). Now, there are links inside this PWA which lead to the website, e.g. `https://example.net/product-list`. This link should leave the PWA and open the native default browser of the mobile device. Therefore we set the scope of our manifest.json to `"scope": "https://example.net/app"`, because [these docs][1] say that everything that leaves the sub-directory should be treated as "outside of scope":

```
Finally, the following example limits navigation to a subdirectory of the current site:

"scope": "https://example.com/subdirectory/"
```

Unfortunately that does not work, neither on iOS nor on Android: the website link is still opened inside the PWA's webview. What can be the issue here? From our understanding a request to `https://example.net/product-list` clearly is outside of a scope bound to `https://example.net/app`.

Highly appreciate any clarification on this matter.

  [1]: https://developer.mozilla.org/en-US/docs/Web/Manifest/scope#examples
Manifest for PWA with scope bind to sub-folder
|progressive-web-apps|manifest|workbox|
We recently upgraded the Spring Boot version to 3.2, and Java 21 64-bit. We observed weird behaviour while shutting down the Spring Boot application: the log level automatically changes from INFO to ERROR, and we are not getting the expected log messages. After much research, we found there is a feature called "automatic log level propagation". We have some important INFO messages that should appear during application shutdown. We tried to disable it with a few properties like:

```
management.tracing.enabled = false
spring.main.log-shutdown-info = true
```

But nothing helped; it is still ERROR. Please suggest what we can do to get INFO level at the end of the application. We need the INFO log level while the Spring Boot application is shutting down.

**Shutdown log** when running the application with the argument `-Dlog4j2.debug`:

```
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class jp.co.tec.ngp.broker.device.controller.DeviceAdminApiContoller
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class jp.co.tec.ngp.broker.device.admin.DeviceAdminServiceImpl
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class jp.co.tec.ngp.broker.device.admin.BrokerAdminServiceImpl
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class jp.co.tec.ngp.broker.service.impl.BrokerServiceImpl
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class org.apache.activemq.transport.tcp.TcpTransportFactory
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class org.apache.activemq.util.ServiceSupport
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class org.apache.activemq.transport.TransportSupport
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class org.apache.activemq.transport.tcp.TcpTransport
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class org.apache.activemq.transport.AbstractInactivityMonitor
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class org.apache.activemq.transport.InactivityMonitor
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class org.apache.activemq.transport.WireFormatNegotiator
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class org.apache.activemq.transport.ResponseCorrelator
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class org.apache.activemq.util.IdGenerator
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class org.apache.activemq.thread.SchedulerTimerTask
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class org.apache.activemq.transport.FutureResponse
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class org.apache.activemq.AdvisoryConsumer
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class org.apache.activemq.ActiveMQSession
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class org.apache.activemq.TransactionContext
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class org.apache.activemq.ActiveMQSessionExecutor
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class org.apache.activemq.ActiveMQMessageProducer
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class org.apache.activemq.management.JMSEndpointStatsImpl
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class org.apache.activemq.util.ThreadPoolUtils
TRACE StatusLogger Log4jLoggerFactory.getContext() found anchor class org.apache.activemq.thread.TaskRunnerFactory
DEBUG StatusLogger [AsyncContext@25bbe1b6] AsyncLoggerDisruptor: shutting down disruptor for this context.
TRACE StatusLogger [AsyncContext@25bbe1b6] AsyncLoggerDisruptor: disruptor has been shut down.
DEBUG StatusLogger Stopping LoggerContext[name=AsyncContext@25bbe1b6, org.apache.logging.log4j.core.async.AsyncLoggerContext@45d2ade3]...
WARN StatusLogger [AsyncContext@25bbe1b6] Ignoring log event after log4j was shut down: INFO [jp.co.tec.ngp.broker.device.base.statemachine.BaseStateMachineListener] [stateChanged] -------------------------------------------
TRACE StatusLogger Unregistering 1 MBeans: [org.apache.logging.log4j2:type=AsyncContext@25bbe1b6]
WARN StatusLogger Ignoring log event after log4j was shut down
TRACE StatusLogger Unregistering 1 MBeans: [org.apache.logging.log4j2:type=AsyncContext@25bbe1b6,component=StatusLogger]
WARN StatusLogger [AsyncContext@25bbe1b6] Ignoring log event after log4j was shut down: INFO [jp.co.tec.ngp.broker.device.base.statemachine.BaseStateMachineListener] [stateChanged] Device- CashChanger state change to sub state STOPPED
TRACE StatusLogger Unregistering 1 MBeans: [org.apache.logging.log4j2:type=AsyncContext@25bbe1b6,component=ContextSelector]
WARN StatusLogger Ignoring log event after log4j was shut down
```
|java|selenium-webdriver|selenium-chromedriver|chrome-for-testing|
I am using TypeScript in a Nuxt 3 app and want to use the module `nuxt-pino-log` (`npm i nuxt-pino-log`), which uses pino for logging. However it is not clear how to get a reference to the logger so that one can use it to log. The example uses middleware and defines a default export, which I am assuming will only be executed prior to routing to the next page. Here is the project reference: https://www.npmjs.com/package/nuxt-pino-log?activeTab=readme. How can I use the logger by getting a typed reference to it and then log with it?
Usage of npm i nuxt-pino-log is unclear from example
|logging|nuxt3|pino|
I was given a spreadsheet from a friend with Apps Script macros that we would like to use. The purpose of the spreadsheet is to display the datetime for follow-ups over certain time periods (i.e. in 2 weeks or in 8 hours) in different time zones (PST/MST/CST/EST). All of the macro files in the Apps Script look good and the manifest file also looks right, but when I deploy, it doesn't update any of the cells in the sheets. If I run the individual macros, they update the cell in the sheet that they're supposed to. It is supposed to continuously update the datetimes to the current time, or at least on sheet open. I'm new to Google Apps Script but have an OK coding background, and I feel like this should be something simple. I may be missing a run file or a call to run but I'm not sure where that should be added. Can add any other relevant info if needed.

JSON manifest file sample:

```
{
  "timeZone": "America/Toronto",
  "exceptionLogging": "STACKDRIVER",
  "runtimeVersion": "V8",
  "dependencies": {
    "libraries": [
      {
        "userSymbol": "FollowupTagfilterforBCTeam",
        "libraryId": "186UPce75w8KpinMaHxLVgk494S7nthWonp-Iruh_Yo0PsnXqWb2qtBrl",
        "version": "0",
        "developmentMode": true
      }
    ]
  },
  "webapp": {
    "executeAs": "USER_DEPLOYING",
    "access": "ANYONE_ANONYMOUS"
  },
  "sheets": {
    "macros": [
      {
        "menuName": "updateCSTTime10days",
        "functionName": "updateCSTTime10days"
      },
```

Macro file sample for the above:

```
function updateCSTTime10days() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("CST");
  var cstTime = new Date(new Date().getTime() + (240 * 60 * 60 * 1000)); // Adding 240 hours for CST
  var formattedDate = cstTime.toLocaleString("en-US", {timeZone: "America/Chicago", year: 'numeric', month: '2-digit', day: '2-digit', hour: '2-digit', minute: '2-digit', second: '2-digit'});
  sheet.getRange("B6").setValue(formattedDate);
}
```

Tried running the macros individually and they work. Tried doing triggers, but there are hundreds and some started to fail on open (I also think there is a smarter way to call them all rather than a trigger for each). Tested different deployments and made sure the library lined up.
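For what it's worth, the date arithmetic the macro relies on can be sketched in plain JavaScript outside Apps Script (the 240-hour offset and the time-zone name are taken from the sample above; the helper name and the fixed base date are illustrative):

```javascript
// Offset "now" (here a fixed base date) by 240 hours, as in the sample macro
const HOURS_240_MS = 240 * 60 * 60 * 1000;

function offsetDate(base, offsetMs) {
  // Date stores a UTC timestamp, so the shift is plain millisecond math
  return new Date(base.getTime() + offsetMs);
}

const base = new Date('2024-01-01T00:00:00Z');
const shifted = offsetDate(base, HOURS_240_MS);
console.log(shifted.toISOString()); // 2024-01-11T00:00:00.000Z (240 h = 10 days)

// Rendering into a named time zone is a separate, display-only step
const formatted = shifted.toLocaleString('en-US', {
  timeZone: 'America/Chicago',
  year: 'numeric', month: '2-digit', day: '2-digit',
  hour: '2-digit', minute: '2-digit', second: '2-digit',
});
console.log(formatted);
```

Note the shift itself never changes the stored instant's zone; only `toLocaleString` with `timeZone` does the zone conversion for display.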
Your problem is in this for loop. In each iteration you get a random index `k` between 0 and n-1, but nothing removes the chosen element from `unshuffledCategories`, so there is nothing preventing you from picking the same element again on a subsequent iteration. In theory you could end up picking the same element all 9 times.

```
for (int n = unshuffledCategories.Count; n > 0; --n)
{
    int k = r.Next(n);
    String temp = unshuffledCategories[k];
    shuffledCategories.Add(temp);
}
```
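One common fix is an in-place swap shuffle (Fisher-Yates), which uses every element exactly once. A sketch in JavaScript for illustration (the question's code is C#; variable names are made up):

```javascript
// Fisher-Yates: swap each position with a random earlier-or-equal position.
// Every element ends up used exactly once, so no duplicates can appear.
function shuffle(items) {
  const result = items.slice(); // copy, so the input stays unshuffled
  for (let n = result.length; n > 1; --n) {
    const k = Math.floor(Math.random() * n); // 0 <= k < n
    [result[n - 1], result[k]] = [result[k], result[n - 1]];
  }
  return result;
}

const categories = ['history', 'science', 'sports', 'music'];
console.log(shuffle(categories)); // same 4 items, random order
```

The equivalent C# fix would either swap in place the same way or remove the picked element (`unshuffledCategories.RemoveAt(k)`) after adding it.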
The problem is that when you enter the anonymous function passed to *$watch*, the context changes and *"this"* refers to the *window object* (you can check this by adding a `console.log(this);` in the function). To get around the problem we can use an arrow function, which does not have its own context, so *"this"* will refer to the container object.

From Alpine 3, by adding an *init()* function to the object, you no longer need to specify an *x-init* attribute: the *init()* function, if present, will be called automatically.

```html
<form x-data="config()">
    <label>
        <input type="radio" value="one_date" x-model="type_date" /> Option A
    </label>
    <label>
        <input type="radio" value="multi_date" x-model="type_date"/> Option B
    </label>
</form>

<script>
function config() {
    return {
        type_date: "",
        date_option: {dateFormat: "d-m-Y"},
        init() {
            this.$watch("type_date", (value) => {
                console.log(this.date_option);
            });
        }
    };
}
</script>
```
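The context difference can be reproduced in a minimal standalone sketch (plain JavaScript, independent of Alpine; the object and property names are made up):

```javascript
const container = {
  name: 'container',

  // A classic function expression gets its own `this`,
  // so the enclosing object's properties are not reachable through it.
  classicCallback() {
    return [0].map(function () { return this && this.name; })[0];
  },

  // An arrow function has no own `this`; it keeps the enclosing one.
  arrowCallback() {
    return [0].map(() => this.name)[0];
  },
};

console.log(container.classicCallback()); // not "container" (global/undefined this)
console.log(container.arrowCallback());   // "container"
```

This is exactly why the `$watch` callback above must be an arrow function to keep `this` pointing at the Alpine data object.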
Read the documentation for `within_intercept` (and the contained example) and you find that the function can output a model object with the intercept as well (and not only the intercept) with the argument `return.model = TRUE`.

A fully worked example, incl. stargazer output comparing a FE model with and without intercept, is:

```
library(plm)
data("Hedonic", package = "plm")
mod_fe <- plm(mv ~ age + crim, data = Hedonic, index = "townid")
mod_fe_int <- within_intercept(mod_fe, return.model = TRUE)

stargazer::stargazer(mod_fe, mod_fe_int, type = "text")
#>
#> ============================================================
#>                          Dependent variable:
#>                 --------------------------------------------
#>                           mv                   transY
#>                          (1)                    (2)
#> ------------------------------------------------------------
#> age                   -0.004***              -0.004***
#>                        (0.001)                (0.001)
#>
#> crim                  -0.009***              -0.009***
#>                        (0.002)                (0.002)
#>
#> Constant                                     10.260***
#>                                               (0.049)
#>
#> ------------------------------------------------------------
#> Observations             506                    506
#> R2                      0.147                  0.147
#> Adjusted R2            -0.046                  0.143
#> F Statistic     35.431*** (df = 2; 412) 35.431*** (df = 2; 503)
#> ============================================================
#> Note:                           *p<0.1; **p<0.05; ***p<0.01
```

NB: the model object returned by `within_intercept` in this case identifies itself technically as a pooling model.
no injectable field found in FXML Controller class for the id
|java|javafx|scenebuilder|
I am not quite sure where the issue lies: FastAPI generates an OpenAPI file which I believe to be valid, but the openapi-generator-cli in the latest version thinks the OpenAPI file is invalid. The error message is:

```
Exception in thread "main" org.openapitools.codegen.SpecValidationException: There were issues with the specification. The option can be disabled via validateSpec (Maven/Gradle) or --skip-validate-spec (CLI).
 | Error count: 1, Warning count: 2
Errors:
	-attribute paths.'/units/{id}/charts/'(get).parameters.[group_mode].schemas.required is not of type `array`
Warnings:
	-attribute paths.'/units/{id}/charts/'(get).parameters.[group_mode].schemas.required is not of type `array`

	at org.openapitools.codegen.config.CodegenConfigurator.toContext(CodegenConfigurator.java:701)
	at org.openapitools.codegen.config.CodegenConfigurator.toClientOptInput(CodegenConfigurator.java:728)
	at org.openapitools.codegen.cmd.Generate.execute(Generate.java:519)
	at org.openapitools.codegen.cmd.OpenApiGeneratorCommand.run(OpenApiGeneratorCommand.java:32)
	at org.openapitools.codegen.OpenAPIGenerator.main(OpenAPIGenerator.java:66)
```

This is how the OpenAPI parameter looks: [![OpenAPI schema image][1]][1]

And this is how it looks in the FastAPI code:

```
group_mode: GroupMode = Query(
    default="1d",
    description="Group by a parameter. Options are Minute, Hour, Day, Week, Month, Year.",
    required=True,
),
```

The first question would be where the issue lies. Is the OpenAPI correct and are the settings for the code generator wrong (the client is typescript-axios, but I tried with others, same issue)? Or is the code generator actually correct and the FastAPI-generated spec wrong? Or is it a coding issue on my side?

  [1]: https://i.stack.imgur.com/RRcOE.png
|swagger|fastapi|openapi|code-generation|openapi-generator|
After opening an R engine, I assigned to it a dataframe that I get successfully from the database. After printing some values from the R engine I found out that all the string values have mojibake, like the following example: "2024-00010005" -> "ÿþ2024-00010005ÿþ"

```java
if (conn != null) {
    try {
        if (!conn.isClosed()) {
            stmt = conn.createStatement();
            rs = stmt.executeQuery(querry);
            String sampleCode = null;
            ArrayList<Double> Test_Dauer_h = new ArrayList<Double>();
            while (rs.next()) {
                if (i == 0) {
                    sampleCode = rs.getString("sampleCode");
                }
                Test_Dauer_h.add(i, Double.parseDouble(rs.getString("Test_Dauer_h")));
                System.out.println(sampleCode + ";[" + Test_Dauer_h.get(i) + "];");
                i++;
            }
            System.out.println("(" + i + " rows readed)");
            // Engine initialisieren & R Objekte erzeugen..
            if (i > 0) {
                String[] header = {"sampleCode", "Test_Dauer_h"};
                String[] _sampleCode = new String[i];
                double[] _Test_Dauer_h = new double[i];
                REXP h = null;
                for (int j = 0; j < i; j++) {
                    _sampleCode[j] = sampleCode;
                }
                engine = REngine.engineForClass("org.rosuda.REngine.JRI.JRIEngine",
                        new String[] {"-no-save"}, new REngineStdOutput(), false);
                h = REXP.createDataFrame(new RList(
                        new REXP[] {new REXPString(_sampleCode), new REXPDouble(_Test_Dauer_h)}, header));
                engine.assign("df_lab", h);
                engine.assign("labSiteCode", labSiteCode);
                engine.parseAndEval("print(labSiteCode);print(max(df_lab$Test_Dauer_h)); print(df_lab$sampleCode[1]); print(df_lab$HGS[1])");
                engine.parseAndEval("library(randomForestSRC)");
                engine.parseAndEval("library(sqldf)");
                engine.parseAndEval("library(stringr)");
            }
        }
    } catch (SQLException e) {
        e.printStackTrace();
        JOptionPane.showMessageDialog(null, e.getMessage() + "\n" + querry);
    } catch (REXPMismatchException e) {
        e.printStackTrace();
        JOptionPane.showMessageDialog(null, e.getMessage());
    } finally {
        try {
            rs.close();
            stmt.close();
            if (i > 0)
                engine.close();
        } catch (SQLException e) {
            e.printStackTrace();
            JOptionPane.showMessageDialog(null, e.getMessage() + "\n" + querry);
        }
        SqlServerJdbcUtils.disconnect(conn);
    }
}
```

What did I do wrong in my program?
Background - I have been asked to take 5 Azure SQL servers off the public network and place them onto private endpoints. The SQL servers are used by an application in a container within Kubernetes. I am confident the network is in place as intended. The problem is that the application is still connecting to the old FQDNs, not the FQDNs of the private endpoints. My understanding is that the hostname used by the application is generated by the Kubernetes secret resource below, and this comes from an output in another project via a remote state file. I am creating the following DNS records in one state file and have set an output like so:

```
data "terraform_remote_state" "staged_azure" {
  backend = "azurerm"
  config = {
    subscription_id      = var.tf_state_subscription_id
    resource_group_name  = var.tf_state_rg_name
    storage_account_name = var.tf_state_sa_name
    container_name       = var.tf_state_container_name
    key                  = var.tf_state_staged_azure_key
    access_key           = var.tf_state_access_key
  }
}

variable "dns_records" {
  default = {
    "btarget"  = "172.16.102.4"
    "public-0" = "172.16.102.5"
    "public-1" = "172.16.102.6"
    "public-2" = "172.16.102.7"
  }
}

resource "azurerm_private_dns_a_record" "dns-records" {
  for_each            = var.dns_records
  name                = each.key
  records             = [each.value]
  resource_group_name = "${var.application_name}-${var.environment_tag}-rg"
  ttl                 = 3600
  zone_name           = azurerm_private_dns_zone.dns-zone.name
}

output "sql_admin_fqdns" {
  description = "FQDNs of all other SQL Servers"
  sensitive   = false
  value       = { for entry in azurerm_private_dns_a_record.dns-records : entry.name => entry.fqdn }
}
```

I am then calling the FQDN value from another project with a different state file like this:

```
resource "kubernetes_secret" "cms_database" {
  metadata {
    name      = join("-", [var.application_name, "database", each.key])
    namespace = kubernetes_namespace.magnolia.id
  }

  data = {
    username       = each.value
    password       = data.terraform_remote_state.staged_azure.outputs.sql_admin_passwords[each.key]
    hostname       = data.terraform_remote_state.staged_azure.outputs.sql_admin_fqdns[each.key].fqdn
    schema         = data.terraform_remote_state.staged_azure.outputs.sql_admin_schemas[each.key]
    port           = 1433
    resource-group = data.terraform_remote_state.staged_azure.outputs.sql_admin_rg
  }

  for_each = data.terraform_remote_state.staged_azure.outputs.sql_admin_usernames
}
```

When running the plan on the second project it is not getting the correct value from the output in the other project:

> Error: Unsupported attribute
>
> on tomcat.tf line 59, in resource "kubernetes_secret" "bca_cms_database":
>
> 59: hostname = data.terraform_remote_***.staged_azure.outputs.sql_admin_fqdns[each.key].fqdn
>
> data.terraform_remote_***.staged_azure.outputs.sql_admin_fqdns is object with 4 attributes
>
> each.key is "public-2"
>
> Can't access attributes on a primitive-typed value (string).

I've tried so many different approaches and the data source never pulls through the correct value, even though I can see the correct value being output from the source project. I am expecting the Kubernetes secret resource to grab the FQDNs of the private DNS records from the output in the remote state project and inject them into the container.

I posted on the Reddit Terraform sub and they said that HashiCorp recommends avoiding referencing remote states and that I should access the data from Azure directly using an Azure data source. I've not been able to figure out how to do this yet. The Kubernetes secret resource was working fine when it was pulling the FQDNs from the SQL servers on the public network.
Use `Invoke-WebRequest` instead for your first call:

```
$reportResponse = Invoke-WebRequest -Uri $generateReportUri -Method Post -Body ($reportRequest | ConvertTo-Json) -Headers @{
    "Authorization" = "Bearer $((Get-AzAccessToken).Token)"
}
```
|r|conditional-statements|counter|incremental|
The config is just a js/ts file. So if all you need is the colors, a simple approach is to write that as a `const` and then export it separately. Example: ```ts import type { Config } from 'tailwindcss'; export const colors = { 'custom-color-one': '#FFA210', 'custom-color-two': '#3FCC48', } as const; // "as const" makes it readonly and improves intellisense const config: Config = { theme: { extend: { colors // ...the rest of your config }, }, }; export default config; ``` In your target file, you can then: ```tsx import { colors } from '../../tailwind.config'; <MyComponent color={colors['custom-color-one']} /> ```
Is it possible to store an enum with associated values using CoreData? I can't find any relevant solution.
Enum with associated values in Core Data
|swift|core-data|enums|associated-value|
**Firmware** is the low-level software that implements a hardware component's programming and functionality, while **drivers** are the bridge between a hardware component and the OS, through which the OS can communicate with the hardware. Drivers expose functional APIs that the OS can use to operate the hardware.

In your case, your Linux system has a newer version of the driver that cannot work with the device's firmware version, so it complains and asks you to install newer firmware.

By the way, firmware need not be hard-burned into the hardware; it can be re-programmed, depending on the hardware. Most devices support EEPROM/flash for this, and on the software side some devices even run whole Linux kernels nowadays.
As I am new to coding I get stuck easily and don't know the next step. I have a lab about `addEventListener`, where a click of a button shows your for loop with office locations.

```
const grupList = document.querySelector("ul");
console.log(grupList);
const listItems = document.querySelector("li");
console.log(listItems);

const offices = document.getElementById("ofLocations");
let officeLocations = ["New York", "London", "Paris", "Tokyo", "Sydney"];

function getOfficeLocations(arr) {
  for (let i = 0; i < arr.length; i++) {
    console.log(arr[i]);
    offices.addEventListener("click", getOfficeLocation {
    })
  }
}

getOfficeLocation(officeLocations);
```

This is what I have done so far.
I need some assistance to put a for loop iteration in a bootstrap group list button click
|for-loop|button|addeventlistener|
null
Try this code (assuming data is in Column `A`):

    Sub FirstVisibleRow()
        MsgBox Columns("A").SpecialCells(xlCellTypeVisible).Find("Retail").Row
    End Sub

If you just need the first visible row regardless of content, you can also use `Find("*")`.
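Note that `Find` returns `Nothing` when no visible cell matches, which would make the `.Row` call above throw a run-time error. A guarded variant (sketch):

    Sub FirstVisibleRetailRow()
        Dim hit As Range
        ' Search only the visible cells of column A
        Set hit = Columns("A").SpecialCells(xlCellTypeVisible).Find("Retail")
        If hit Is Nothing Then
            MsgBox "No visible cell containing 'Retail' was found"
        Else
            MsgBox hit.Row
        End If
    End Sub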
I wish to add the overall intercept of a fixed effects model ("within" panel data analysis). This is common practice in Stata and I wish to mimic the Stata output by including the overall intercept. ``` library(stargazer) library(plm) data("Hedonic", package = "plm") mod_fe <- plm(mv ~ age + crim, data = Hedonic, index = "townid") overallint <- within_intercept(mod_fe) overallint # 10.25964 stargazer(mod_fe, type = "text") Dependent variable: --------------------------- mv ---------------------------------------- age -0.004*** (0.001) crim -0.009*** (0.002) ---------------------------------------- Observations 506 R2 0.147 Adjusted R2 -0.046 F Statistic 35.431*** (df = 2; 412) Note: *p<0.1; **p<0.05; ***p<0.01 ``` I could not find any relevant information to add the intercept. The `mod_fe` object does not store the intercept so I could not pull it from there. The `within_intercept` function calculates the actual value.
If you did not require the generic `Computer` struct, then you could directly extend `P` (which is functionally the same as @JanMensch's answer):

    extension P {
        static func compute() -> Self {
            instance()
        }
    }

    let someArray: [P.Type] = [
        A.self,
        B.self
    ]

    for value in someArray {
        value.compute()
    }

but if you needed it, I don't think this would be possible. You're trying to create a new type at runtime (`Computer<A.Type>` and `Computer<B.Type>`). You cannot do that. Swift generics only help with code duplication, AFAIK.

EDIT: I have another roundabout solution where the `Computer<>` is transient in an extension on `P`. However, I'm not sure if this would fit your use case without additional information.

    extension P {
        static func computeImpl() -> Self {
            Computer<Self>.compute()
        }
    }

    struct Computer<T: P> {
        static func compute() -> T {
            print(T.self)
            return T.instance()
        }
    }

    let someArray: [P.Type] = [
        A.self,
        B.self
    ]

    for value in someArray {
        value.computeImpl()
    }
I'm trying to use `gnutls` to encrypt communications. I installed `gnutls` with `brew install gnutls` and copied the `.h` files from its `include` directory into my project. Now when I try to use functions from the header I `include`, I get errors:

```
error: undefined reference to 'gnutls_hash_fast'
error: undefined reference to 'gnutls_hash_get_len'
...
```

Below is `crypto.h` from `gnutls`.

``` /* * Copyright (C) 2008-2012 Free Software Foundation, Inc. * * Author: Nikos Mavrogiannopoulos * * This file is part of GnuTLS. * * The GnuTLS is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public License * as published by the Free Software Foundation; either version 2.1 of * the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, but * WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public License * along with this program.
If not, see <https://www.gnu.org/licenses/> * */ #ifndef GNUTLS_CRYPTO_H #define GNUTLS_CRYPTO_H #include "gnutls.h" #ifdef __cplusplus extern "C" { #endif typedef struct api_cipher_hd_st *gnutls_cipher_hd_t; int gnutls_cipher_init(gnutls_cipher_hd_t *handle, gnutls_cipher_algorithm_t cipher, const gnutls_datum_t *key, const gnutls_datum_t *iv); int gnutls_cipher_encrypt(const gnutls_cipher_hd_t handle, void *text, size_t textlen); int gnutls_cipher_decrypt(const gnutls_cipher_hd_t handle, void *ciphertext, size_t ciphertextlen); int gnutls_cipher_decrypt2(gnutls_cipher_hd_t handle, const void *ciphertext, size_t ciphertextlen, void *text, size_t textlen); int gnutls_cipher_encrypt2(gnutls_cipher_hd_t handle, const void *text, size_t textlen, void *ciphertext, size_t ciphertextlen); /** * gnutls_cipher_flags_t: * @GNUTLS_CIPHER_PADDING_PKCS7: Flag to indicate PKCS#7 padding * * Enumeration of flags to control block cipher padding, used by * gnutls_cipher_encrypt3() and gnutls_cipher_decrypt3(). 
* * Since: 3.7.7 */ typedef enum gnutls_cipher_flags_t { GNUTLS_CIPHER_PADDING_PKCS7 = 1 } gnutls_cipher_flags_t; int gnutls_cipher_encrypt3(gnutls_cipher_hd_t handle, const void *ptext, size_t ptext_len, void *ctext, size_t *ctext_len, unsigned flags); int gnutls_cipher_decrypt3(gnutls_cipher_hd_t handle, const void *ctext, size_t ctext_len, void *ptext, size_t *ptext_len, unsigned flags); void gnutls_cipher_set_iv(gnutls_cipher_hd_t handle, void *iv, size_t ivlen); int gnutls_cipher_tag(gnutls_cipher_hd_t handle, void *tag, size_t tag_size); int gnutls_cipher_add_auth(gnutls_cipher_hd_t handle, const void *text, size_t text_size); void gnutls_cipher_deinit(gnutls_cipher_hd_t handle); unsigned gnutls_cipher_get_block_size(gnutls_cipher_algorithm_t algorithm) __GNUTLS_CONST__; unsigned gnutls_cipher_get_iv_size(gnutls_cipher_algorithm_t algorithm) __GNUTLS_CONST__; unsigned gnutls_cipher_get_tag_size(gnutls_cipher_algorithm_t algorithm) __GNUTLS_CONST__; /* AEAD API */ typedef struct api_aead_cipher_hd_st *gnutls_aead_cipher_hd_t; int gnutls_aead_cipher_init(gnutls_aead_cipher_hd_t *handle, gnutls_cipher_algorithm_t cipher, const gnutls_datum_t *key); int gnutls_aead_cipher_set_key(gnutls_aead_cipher_hd_t handle, const gnutls_datum_t *key); int gnutls_aead_cipher_decrypt(gnutls_aead_cipher_hd_t handle, const void *nonce, size_t nonce_len, const void *auth, size_t auth_len, size_t tag_size, const void *ctext, size_t ctext_len, void *ptext, size_t *ptext_len); int gnutls_aead_cipher_encrypt(gnutls_aead_cipher_hd_t handle, const void *nonce, size_t nonce_len, const void *auth, size_t auth_len, size_t tag_size, const void *ptext, size_t ptext_len, void *ctext, size_t *ctext_len); int gnutls_aead_cipher_encryptv(gnutls_aead_cipher_hd_t handle, const void *nonce, size_t nonce_len, const giovec_t *auth_iov, int auth_iovcnt, size_t tag_size, const giovec_t *iov, int iovcnt, void *ctext, size_t *ctext_len); int gnutls_aead_cipher_encryptv2(gnutls_aead_cipher_hd_t handle, 
const void *nonce, size_t nonce_len, const giovec_t *auth_iov, int auth_iovcnt, const giovec_t *iov, int iovcnt, void *tag, size_t *tag_size); int gnutls_aead_cipher_decryptv2(gnutls_aead_cipher_hd_t handle, const void *nonce, size_t nonce_len, const giovec_t *auth_iov, int auth_iovcnt, const giovec_t *iov, int iovcnt, void *tag, size_t tag_size); void gnutls_aead_cipher_deinit(gnutls_aead_cipher_hd_t handle); /* Hash - MAC API */ typedef struct hash_hd_st *gnutls_hash_hd_t; typedef struct hmac_hd_st *gnutls_hmac_hd_t; size_t gnutls_mac_get_nonce_size(gnutls_mac_algorithm_t algorithm) __GNUTLS_CONST__; int gnutls_hmac_init(gnutls_hmac_hd_t *dig, gnutls_mac_algorithm_t algorithm, const void *key, size_t keylen); void gnutls_hmac_set_nonce(gnutls_hmac_hd_t handle, const void *nonce, size_t nonce_len); int gnutls_hmac(gnutls_hmac_hd_t handle, const void *text, size_t textlen); void gnutls_hmac_output(gnutls_hmac_hd_t handle, void *digest); void gnutls_hmac_deinit(gnutls_hmac_hd_t handle, void *digest); unsigned gnutls_hmac_get_len(gnutls_mac_algorithm_t algorithm) __GNUTLS_CONST__; unsigned gnutls_hmac_get_key_size(gnutls_mac_algorithm_t algorithm) __GNUTLS_CONST__; int gnutls_hmac_fast(gnutls_mac_algorithm_t algorithm, const void *key, size_t keylen, const void *text, size_t textlen, void *digest); gnutls_hmac_hd_t gnutls_hmac_copy(gnutls_hmac_hd_t handle); int gnutls_hash_init(gnutls_hash_hd_t *dig, gnutls_digest_algorithm_t algorithm); int gnutls_hash(gnutls_hash_hd_t handle, const void *text, size_t textlen); void gnutls_hash_output(gnutls_hash_hd_t handle, void *digest); void gnutls_hash_deinit(gnutls_hash_hd_t handle, void *digest); unsigned gnutls_hash_get_len(gnutls_digest_algorithm_t algorithm) __GNUTLS_CONST__; int gnutls_hash_fast(gnutls_digest_algorithm_t algorithm, const void *text, size_t textlen, void *digest); gnutls_hash_hd_t gnutls_hash_copy(gnutls_hash_hd_t handle); /* KDF API */ int gnutls_hkdf_extract(gnutls_mac_algorithm_t mac, const 
gnutls_datum_t *key, const gnutls_datum_t *salt, void *output); int gnutls_hkdf_expand(gnutls_mac_algorithm_t mac, const gnutls_datum_t *key, const gnutls_datum_t *info, void *output, size_t length); int gnutls_pbkdf2(gnutls_mac_algorithm_t mac, const gnutls_datum_t *key, const gnutls_datum_t *salt, unsigned iter_count, void *output, size_t length); /* register ciphers */ /** * gnutls_rnd_level_t: * @GNUTLS_RND_NONCE: Non-predictable random number. Fatal in parts * of session if broken, i.e., vulnerable to statistical analysis. * @GNUTLS_RND_RANDOM: Pseudo-random cryptographic random number. * Fatal in session if broken. Example use: temporal keys. * @GNUTLS_RND_KEY: Fatal in many sessions if broken. Example use: * Long-term keys. * * Enumeration of random quality levels. */ typedef enum gnutls_rnd_level { GNUTLS_RND_NONCE = 0, GNUTLS_RND_RANDOM = 1, GNUTLS_RND_KEY = 2 } gnutls_rnd_level_t; int gnutls_rnd(gnutls_rnd_level_t level, void *data, size_t len); void gnutls_rnd_refresh(void); /* API to override ciphers and MAC algorithms */ typedef int (*gnutls_cipher_init_func)(gnutls_cipher_algorithm_t, void **ctx, int enc); typedef int (*gnutls_cipher_setkey_func)(void *ctx, const void *key, size_t keysize); /* old style ciphers */ typedef int (*gnutls_cipher_setiv_func)(void *ctx, const void *iv, size_t ivsize); typedef int (*gnutls_cipher_getiv_func)(void *ctx, void *iv, size_t ivsize); typedef int (*gnutls_cipher_encrypt_func)(void *ctx, const void *plain, size_t plainsize, void *encr, size_t encrsize); typedef int (*gnutls_cipher_decrypt_func)(void *ctx, const void *encr, size_t encrsize, void *plain, size_t plainsize); /* aead ciphers */ typedef int (*gnutls_cipher_auth_func)(void *ctx, const void *data, size_t datasize); typedef void (*gnutls_cipher_tag_func)(void *ctx, void *tag, size_t tagsize); typedef int (*gnutls_cipher_aead_encrypt_func)( void *ctx, const void *nonce, size_t noncesize, const void *auth, size_t authsize, size_t tag_size, const void *plain, 
size_t plainsize, void *encr, size_t encrsize); typedef int (*gnutls_cipher_aead_decrypt_func)( void *ctx, const void *nonce, size_t noncesize, const void *auth, size_t authsize, size_t tag_size, const void *encr, size_t encrsize, void *plain, size_t plainsize); typedef void (*gnutls_cipher_deinit_func)(void *ctx); int gnutls_crypto_register_cipher( gnutls_cipher_algorithm_t algorithm, int priority, gnutls_cipher_init_func init, gnutls_cipher_setkey_func setkey, gnutls_cipher_setiv_func setiv, gnutls_cipher_encrypt_func encrypt, gnutls_cipher_decrypt_func decrypt, gnutls_cipher_deinit_func deinit) _GNUTLS_GCC_ATTR_DEPRECATED; int gnutls_crypto_register_aead_cipher( gnutls_cipher_algorithm_t algorithm, int priority, gnutls_cipher_init_func init, gnutls_cipher_setkey_func setkey, gnutls_cipher_aead_encrypt_func aead_encrypt, gnutls_cipher_aead_decrypt_func aead_decrypt, gnutls_cipher_deinit_func deinit) _GNUTLS_GCC_ATTR_DEPRECATED; typedef int (*gnutls_mac_init_func)(gnutls_mac_algorithm_t, void **ctx); typedef int (*gnutls_mac_setkey_func)(void *ctx, const void *key, size_t keysize); typedef int (*gnutls_mac_setnonce_func)(void *ctx, const void *nonce, size_t noncesize); typedef int (*gnutls_mac_hash_func)(void *ctx, const void *text, size_t textsize); typedef int (*gnutls_mac_output_func)(void *src_ctx, void *digest, size_t digestsize); typedef void (*gnutls_mac_deinit_func)(void *ctx); typedef int (*gnutls_mac_fast_func)(gnutls_mac_algorithm_t, const void *nonce, size_t nonce_size, const void *key, size_t keysize, const void *text, size_t textsize, void *digest); typedef void *(*gnutls_mac_copy_func)(const void *ctx); int gnutls_crypto_register_mac( gnutls_mac_algorithm_t mac, int priority, gnutls_mac_init_func init, gnutls_mac_setkey_func setkey, gnutls_mac_setnonce_func setnonce, gnutls_mac_hash_func hash, gnutls_mac_output_func output, gnutls_mac_deinit_func deinit, gnutls_mac_fast_func hash_fast) _GNUTLS_GCC_ATTR_DEPRECATED; typedef int 
(*gnutls_digest_init_func)(gnutls_digest_algorithm_t, void **ctx); typedef int (*gnutls_digest_hash_func)(void *ctx, const void *text, size_t textsize); typedef int (*gnutls_digest_output_func)(void *src_ctx, void *digest, size_t digestsize); typedef void (*gnutls_digest_deinit_func)(void *ctx); typedef int (*gnutls_digest_fast_func)(gnutls_digest_algorithm_t, const void *text, size_t textsize, void *digest); typedef void *(*gnutls_digest_copy_func)(const void *ctx); int gnutls_crypto_register_digest( gnutls_digest_algorithm_t digest, int priority, gnutls_digest_init_func init, gnutls_digest_hash_func hash, gnutls_digest_output_func output, gnutls_digest_deinit_func deinit, gnutls_digest_fast_func hash_fast) _GNUTLS_GCC_ATTR_DEPRECATED; /* RSA-PKCS#1 1.5 helper functions */ int gnutls_encode_ber_digest_info(gnutls_digest_algorithm_t hash, const gnutls_datum_t *digest, gnutls_datum_t *output); int gnutls_decode_ber_digest_info(const gnutls_datum_t *info, gnutls_digest_algorithm_t *hash, unsigned char *digest, unsigned int *digest_size); int gnutls_decode_rs_value(const gnutls_datum_t *sig_value, gnutls_datum_t *r, gnutls_datum_t *s); int gnutls_encode_rs_value(gnutls_datum_t *sig_value, const gnutls_datum_t *r, const gnutls_datum_t *s); int gnutls_encode_gost_rs_value(gnutls_datum_t *sig_value, const gnutls_datum_t *r, const gnutls_datum_t *s); int gnutls_decode_gost_rs_value(const gnutls_datum_t *sig_value, gnutls_datum_t *r, gnutls_datum_t *s); #ifdef __cplusplus } #endif #endif /* GNUTLS_CRYPTO_H */ ``` I think I need another task to use `gnutls`, but I don't know what it is. Do you know the solution??
How to use gnutls in Android Studio
|android|c|gnutls|
1. `$facet` - Allows multiple pipelines to be executed in a single aggregate query.

   1.1. "inRange" - Get the documents with `dDate` within the search date range.

   1.2. "beforeRange" - Get the first matching document whose `dDate` is before the start of the search date range.

   1.3. "afterRange" - Get the first matching document whose `dDate` is after the end of the search date range.

2. `$set` - Add the `docs` field by combining the three array fields into a single array.

3. `$unwind` - Deconstruct the `docs` array field into multiple documents.

4. `$replaceWith` - Replace the input document with the `docs` object.

5. `$sort` - Order the documents by the `dDate` field ascending.

```
db.collection.aggregate([
  {
    $facet: {
      inRange: [
        {
          $match: {
            dDate: {
              $gte: ISODate("2024-01-22"),
              $lte: ISODate("2024-01-24")
            }
          }
        }
      ],
      beforeRange: [
        { $match: { dDate: { $lt: ISODate("2024-01-22") } } },
        { $sort: { dDate: -1 } },
        { $limit: 1 }
      ],
      afterRange: [
        { $match: { dDate: { $gt: ISODate("2024-01-24") } } },
        { $sort: { dDate: 1 } },
        { $limit: 1 }
      ]
    }
  },
  {
    $set: {
      docs: {
        $concatArrays: [ "$beforeRange", "$inRange", "$afterRange" ]
      }
    }
  },
  { $unwind: "$docs" },
  { $replaceWith: "$docs" },
  { $sort: { dDate: 1 } }
])
```

[Demo @ Mongo Playground](https://mongoplayground.net/p/3mG-XyYE-sW)
Here's why this might result in a substantial size difference:

**Windows Compatibility Pack:** The Microsoft.Windows.SDK.NET.dll is part of the Windows Compatibility Pack, and its purpose is to bridge the gap between .NET and Windows-specific APIs. When you target Windows 10, the compiler includes the necessary components of the Compatibility Pack to ensure that your application can utilize Windows 10-specific features and APIs.

**APIs and Features:** Targeting a higher version of the Windows SDK allows your application to take advantage of Windows 10-specific features and APIs. The Windows Compatibility Pack includes additional functionality that might not be present in the earlier version targeted.

**Runtime Dependencies:** Different Windows SDK versions introduce changes in runtime dependencies. When you target a specific version, your application includes the required DLLs and resources to ensure it runs on systems with that SDK version.

To investigate the size difference more thoroughly, you could inspect the contents of the output folder for each build. Look for additional DLLs, resources, or folders that might be present in the output when targeting Windows 10.

**The result of my test:** As can be seen from the resulting diagram, there is an additional folder containing the large file Microsoft.Windows.SDK.NET.dll (10.0.19041.0) in the bin folder.

[![enter image description here][1]][1]

[![enter image description here][2]][2]

[1]: https://i.stack.imgur.com/yMjN9.png
[2]: https://i.stack.imgur.com/GiZAm.png
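For illustration, the switch that triggers this is the `TargetFramework` moniker in the project file (the exact version numbers below are only examples; 10.0.19041.0 is the SDK version seen in the test above):

```xml
<!-- OS-agnostic TFM: no Microsoft.Windows.SDK.NET.dll in the output -->
<TargetFramework>net6.0</TargetFramework>

<!-- Windows 10 TFM: the SDK projection assemblies are copied to bin -->
<TargetFramework>net6.0-windows10.0.19041.0</TargetFramework>
```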
How to customise the look of carousel slider in Flutter?
|flutter|user-interface|slider|carousel|
null
I send an HTTP request to the Helium LoRaWAN server from .NET Framework 4.8, but I get an SSL/TLS error.

Error message: The request was aborted: Could not create SSL/TLS secure channel.

There was no such error a while ago. How can such an error appear later, and how can I solve it? This was tried many times on different networks and the same error occurred on all of them.

Note: When the same request is sent with Postman or the Mozilla RESTClient, no error occurs and the operation completes successfully.

with HttpWebRequest

```c#
ServicePointManager.SecurityProtocol = SecurityProtocolType.SystemDefault | SecurityProtocolType.Ssl3 | SecurityProtocolType.Tls | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls12 | SecurityProtocolType.Tls13;
ServicePointManager.Expect100Continue = true;
ServicePointManager.DefaultConnectionLimit = 9999;

RequestObj reqObj = new RequestObj
{
    ..
    ..
};
string jsonContent = JsonConvert.SerializeObject(reqObj);

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
request.Method = "POST";
System.Text.UTF8Encoding encoding = new System.Text.UTF8Encoding();
Byte[] byteArray = encoding.GetBytes(jsonContent);
request.ContentType = "application/json";
request.ContentLength = byteArray.Length;

try
{
    using (Stream dataStream = request.GetRequestStream())
    {
        dataStream.Write(byteArray, 0, byteArray.Length);
    }
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        using (Stream stream = response.GetResponseStream())
        using (StreamReader reader = new StreamReader(stream))
        {
            string responseBody = reader.ReadToEnd();
            Console.WriteLine("HttpWebRequest response => " + response.StatusCode.ToString() + ") => " + responseBody);
        }
    }
}
catch (WebException ex)
{
    if (ex.Response != null)
    {
        using (Stream stream = ex.Response.GetResponseStream())
        using (StreamReader reader = new StreamReader(stream))
        {
            string errorResponse = reader.ReadToEnd();
            Console.WriteLine("HttpWebRequest errorResponse => " + errorResponse);
        }
    }
    else
    {
        Console.WriteLine("HttpWebRequest ex.Message => " + ex.Message);
    }
}
```

with RestSharp

```c#
try
{
    ServicePointManager.SecurityProtocol = SecurityProtocolType.SystemDefault | SecurityProtocolType.Ssl3 | SecurityProtocolType.Tls | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls12 | SecurityProtocolType.Tls13;
    ServicePointManager.Expect100Continue = true;
    ServicePointManager.DefaultConnectionLimit = 9999;

    RequestObj reqObj = new RequestObj
    {
        ..
        ..
    };
    string jsonContent = JsonConvert.SerializeObject(reqObj);

    var request = new RestRequest(Method.POST);
    request.AddJsonBody(jsonContent);
    request.AddHeader("Content-Type", "application/json");
    var client = new RestClient(url);
    var response = client.Execute(request);
    Console.WriteLine("RestSharp response.Content => " + response.Content + "\n\nRestSharp response.ErrorMessage => " + response.ErrorMessage); // + "\n\nRestSharp response => " + JsonConvert.SerializeObject(response));
}
catch (Exception ex)
{
    Console.WriteLine("RestSharp Req ex => ", ex);
}
```

Error message for both => The request was aborted: Could not create SSL/TLS secure channel.
|java|r|character-encoding|mojibake|jri|
I created a product schema in MongoDB with variants for the same product. For example, take a phone, something like this:

```
{
  "_id": { "$oid": "65d881d4d080a591933b1be2" },
  "title": "Iphone 15",
  "category": { "$oid": "65d881b1831df3b991251f56" },
  "brand": "Apple",
  "properties": { "memory": ["64GB", "128GB", "256GB"], "color": ["black", "white"] },
  "price": { "$numberLong": "1200" },
  "variants": [
    { "title": "Iphone 15 64GB black", "SKU": "1123-1231-45lsaph15", "price": { "$numberLong": "1200" }, "properties": { "memory": "64GB", "color": "black" }, "stock": { "$numberLong": "24" } },
    { "title": "Iphone 15 128GB black", "SKU": "1123-1231-75lsaph15", "price": { "$numberLong": "1300" }, "properties": { "memory": "128GB", "color": "black" }, "stock": { "$numberLong": "10" } },
    { "title": "Iphone 15 64GB white", "SKU": "1123-1231-46lsaph15", "price": { "$numberLong": "1200" }, "properties": { "memory": "64GB", "color": "white" }, "stock": { "$numberLong": "20" } },
    { "title": "Iphone 15 128GB white", "SKU": "1123-1231-76lsaph15", "price": { "$numberLong": "1350" }, "properties": { "memory": "128GB", "color": "white" }, "stock": { "$numberLong": "17" } }
  ]
}
```

I retrieve this product like this:

```
const wholeProductData: any = await fetchSingleProduct(id);
const variant = wholeProductData.variants.find((v: any) => v.SKU === params.id);
```

Let's say that now our product is the black one with 64GB:

```
{
  title: 'Iphone 15 64GB black',
  SKU: '1123-1231-45lsaph15',
  price: 1200,
  properties: { memory: '64GB', color: 'black' },
  stock: 24
}
```

My question is: for this object, how can I retrieve/filter the related variants? I.e. for each property key, get the variants that differ only in that property key. This is what I mean: for `color` I want to find the variants that have the other properties the same (`memory: '64GB'`) but other values for `color` (i.e. white).
Something like iterating for only one property key **example:** Initial list with variants: ``` [ { title: 'Iphone 15 64GB black', SKU: '1123-1231-45lsaph15', price: 1200, properties: { memory: '64GB', color: 'black' }, stock: 24 }, { title: 'Iphone 15 128GB black', SKU: '1123-1231-75lsaph15', price: 1300, properties: { memory: '128GB', color: 'black' }, stock: 10 }, { title: 'Iphone 15 64gb white', SKU: '1123-1231-46lsaph15', price: 1300, properties: { memory: '64GB', color: 'white' }, stock: 10 }, { title: 'Iphone 15 128gb white', SKU: '1123-1231-76lsaph15', price: 1300, properties: { memory: '128GB', color: 'white' }, stock: 10 }, ] ``` desired result: ``` color: [ { title: 'Iphone 15 64gb white', SKU: '1123-1231-46lsaph15', price: 1300, properties: { memory: '64GB', color: 'white' }, stock: 10 }, { title: 'Iphone 15 64GB black', SKU: '1123-1231-45lsaph15', price: 1200, properties: { memory: '64GB', color: 'black' }, stock: 24 } ] memory: [ { title: 'Iphone 15 64GB black', SKU: '1123-1231-45lsaph15', price: 1200, properties: { memory: '64GB', color: 'black' }, stock: 24 }, { title: 'Iphone 15 128GB black', SKU: '1123-1231-75lsaph15', price: 1300, properties: { memory: '128GB', color: 'black' }, stock: 10 }, ] ``` **EDIT1:** Temporary solution with result to get a better understanding of the desired result: https://playcode.io/1774782
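To make the desired transformation above concrete, it can be sketched as a plain helper (hypothetical function name, not from the original code) that, for each property key of the current variant, keeps the variants agreeing with it on every *other* property:

```javascript
// For each property key of `current`, collect the variants whose other
// properties all equal those of `current` (the current variant included).
function relatedVariants(current, variants) {
  const keys = Object.keys(current.properties);
  const result = {};
  for (const key of keys) {
    result[key] = variants.filter((v) =>
      keys.every(
        (k) => k === key || v.properties[k] === current.properties[k]
      )
    );
  }
  return result;
}
```

For the 64GB black variant this yields a `color` group containing the 64GB black and 64GB white variants, and a `memory` group containing the 64GB black and 128GB black variants, matching the desired result above (up to ordering).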
I copied a quick SDL example into my IDE, but the program ignores any other commands I write, like `printf()`. Example:

```
#include <stdio.h>
#include "SDL.h"
#include <MCSControl.h>

int main(int argc, char *argv[])
{
    SDL_Window *window;
    SDL_Renderer *renderer;
    SDL_Surface *surface;
    SDL_Event event;

    printf("Test");

    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Couldn't initialize SDL: %s", SDL_GetError());
        return 3;
    }

    if (SDL_CreateWindowAndRenderer(320, 240, SDL_WINDOW_RESIZABLE, &window, &renderer)) {
        SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Couldn't create window and renderer: %s", SDL_GetError());
        return 3;
    }

    while (1) {
        SDL_PollEvent(&event);
        if (event.type == SDL_QUIT) {
            break;
        }

        SDL_SetRenderDrawColor(renderer, 0x00, 0x00, 0x00, 0x00);
        SDL_RenderClear(renderer);
        SDL_RenderPresent(renderer);
    }

    printf("test");

    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();

    printf("Test");

    return 0;
}
```

This program does not print any of the things I told it to, but it opens a window successfully. Does anybody know why?
C program opens SDL window successfully but doesn't printf
|c|sdl|
null
I am currently making a WYSIWYG editor and need to support rotation by pressing the "r" key while dragging some element with the mouse. I noticed that the keydown event is not fired when I press a key while dragging something with the mouse.

```js
document.addEventListener('keydown', function(event) {
  console.log(`Key pressed: ${event.key}`);
});

document.addEventListener('mousedown', function(event) {
  console.log(`Mouse button pressed: ${event.button}`);
});
```

This is example code you can try on your own (make something draggable as well).

So my question: is it even possible to do this in the DOM, or is it an unavoidable limitation?
One thing I would check - are the private DNS zones linked to the vNet? This has cost me more pain than anything else in setups like this. Does the frontend resolve backend-dev.azurewebsites.net to the expected private IP?
In version 5, you had to set middleware options at client instantiation. Since version 6, you can set retry settings via per-request options. To configure the default middleware's behaviour for actions like redirects and retries, add a `RequestOption` instance to the options collection via the `requestConfiguration` lambda:

    RetryHandlerOption retryHandlerOption = new RetryHandlerOption(...);
    graphClient.me().get(requestConfiguration -> {
        requestConfiguration.options.add(retryHandlerOption);
    });

[Per-request options][1]

[RetryHandlerOption][2]

[1]: https://github.com/microsoftgraph/msgraph-sdk-java/blob/dev/docs/upgrade-to-v6.md#per-request-options
[2]: https://github.com/microsoft/kiota-java/blob/main/components/http/okHttp/src/main/java/com/microsoft/kiota/http/middleware/options/RetryHandlerOption.java
The `t` macro does not support variables. You should try using `<Trans />`, which supports variables and can be helpful in your case. Here is an example:

```javascript
// strings.json
"user_id": "User {{id}} payments"
```

```javascript
const id = "123"

<Trans i18nKey={"user_id"} values={{id}} />

// output
"User 123 payments"
```

There are more options here depending on your setup, check https://react.i18next.com/latest/trans-component
    For dataRow = lastRow To 3 Step -1
        prodTran = Range("A" & dataRow).Value
        prodOIS = Range("AA" & dataRow).Value
        ' Each comparison must be written out in full; VBA does not
        ' support the "x = a Or b" shorthand
        If prodTran = "Ordered" Or prodTran = "Cancelled" _
           Or prodOIS = "Cancelled" Or prodOIS = "Refund" Then
            Rows(dataRow).Delete
        End If
    Next dataRow
After opening an R engine, I assigned to it a dataframe that I get successfully from a database. After printing some values from the R engine, I found out that all the string values have mojibake, like the following example:

"2024-00010005" -> "ÿþ2024-00010005ÿþ"

```java
if(conn != null) {
    try{
        if(!conn.isClosed()){
            stmt = conn.createStatement();
            rs = stmt.executeQuery(querry);
            String sampleCode = null;
            ArrayList<Double> Test_Dauer_h = new ArrayList<Double>();
            while(rs.next()) {
                if(i == 0) {
                    sampleCode = rs.getString("sampleCode");
                }
                Test_Dauer_h.add(i, Double.parseDouble(rs.getString("Test_Dauer_h")));
                System.out.println(sampleCode+";["+Test_Dauer_h.get(i)+"];");
                i++;
            }
            System.out.println("(" + i + " rows readed)");

            // Initialize engine & create R objects
            if(i > 0) {
                String[] header = {"sampleCode","Test_Dauer_h"};
                String[] _sampleCode = new String[i];
                double[] _Test_Dauer_h = new double[i];
                REXP h = null;
                for(int j = 0; j < i; j++) {
                    _sampleCode[j] = sampleCode;
                }
                engine = REngine.engineForClass("org.rosuda.REngine.JRI.JRIEngine", new String[] {"-no-save"}, new REngineStdOutput(), false);
                h = REXP.createDataFrame(new RList(new REXP[] {new REXPString(_sampleCode),new REXPDouble(_Test_Dauer_h)},header));
                engine.assign("df_lab", h);
                engine.assign("labSiteCode", labSiteCode);
                engine.parseAndEval("print(labSiteCode);print(max(df_lab$Test_Dauer_h)); print(df_lab$sampleCode[1]); print(df_lab$HGS[1])");
                engine.parseAndEval("library(randomForestSRC)");
                engine.parseAndEval("library(sqldf)");
                engine.parseAndEval("library(stringr)");
            }
        }
    }catch (SQLException e){
        e.printStackTrace();
        JOptionPane.showMessageDialog(null, e.getMessage() + "\n" + querry);
    }catch (REXPMismatchException e) {
        e.printStackTrace();
        JOptionPane.showMessageDialog(null, e.getMessage());
    }finally{
        try{
            rs.close();
            stmt.close();
            if(i > 0) engine.close();
        }catch (SQLException e) {
            e.printStackTrace();
            JOptionPane.showMessageDialog(null, e.getMessage() +"\n" + querry);
        }
        SqlServerJdbcUtils.disconnect(conn);
    }
}
```

What did I do wrong in my program?

[Output][1]

[1]: https://i.stack.imgur.com/T9AAP.png
Since you have explicitly written a `decreases` clause, Dafny will use that clause. Your assumption that it will compare `x + y` with the tuple `x, y` is wrong. It would only have chosen the tuple `x, y` if you had not provided a `decreases` clause. In either case it compares the `decreases` clause of the invoking call with that of the recursive call: `x + y` with `x + y` (at the recursive call's values), or `x, y` with `x, y` (at the recursive call's values).

Now take the case where the function is called with `x = 3` and `y = 2`. Here `x + y` is 5, and when you call recursively in the last else-if branch it will be `x = 2` and `y = 3`, so `x + y` is still 5. It is not decreasing, hence Dafny complains.
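The situation can be reproduced with a minimal hypothetical function (not the poster's code): the recursive call keeps `x + y` constant, so the explicit `decreases x + y` clause cannot be proved to decrease:

```dafny
function F(x: nat, y: nat): nat
  decreases x + y
{
  if x == 0 then y
  // (x - 1) + (y + 1) == x + y, so "decreases x + y" fails here
  else F(x - 1, y + 1)
}
```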
Parallelizing tasks with separate DbContext instances is fine, provided those tasks can run completely independently. However, you still need to consider details like locking in the database. Where you have code-based processing time, parallelization can help, but if the bulk of the time is spent fetching/updating data you are likely best looking first at methods to do that as efficiently as possible. EF isn't the best option for batch-type operations vs. executing SQL statements, if that is an option. First look at how you might do it without EF, then consider whether EF improves anything.

As noted in the above comments, an additional option is to use the `ExecuteUpdate()` method available in EF Core 7+. However, note that this method only executes against the database and will not update tracked instances that might already be loaded. For example:

    var record = context.MyTables.Single(x => x.Id == tableId);
    // ...
    context.MyTables
        .Where(x => x.CategoryId == categoryId)
        .ExecuteUpdate(s => s.SetProperty(t => t.Value, t => t.Value + 1));

    var records = context.MyTables
        .Where(x => x.CategoryId == categoryId)
        .ToList();

If the first read record belonged to the category in question, the records loaded at the end would still have the un-altered value from the originally tracked instance. The database row for this entity will have been updated, but EF does not apply the changes to tracked instances in memory.

If using `ExecuteUpdate` from an injected `DbContext` instance then you may need to be wary of any tracked references. This could involve calling `context.ChangeTracker.Clear()` prior to any `ExecuteUpdate()` call, or ensuring any reads afterwards use `AsNoTracking()` to ensure the current data state comes from the database.
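To make the two refresh options in the last paragraph concrete, a sketch (reusing the hypothetical `MyTables` entity from above):

```csharp
// Option 1: drop stale tracked instances around the bulk update
context.ChangeTracker.Clear();
context.MyTables
    .Where(x => x.CategoryId == categoryId)
    .ExecuteUpdate(s => s.SetProperty(t => t.Value, t => t.Value + 1));

// Option 2: read back without tracking, so values come from the database
var fresh = context.MyTables
    .AsNoTracking()
    .Where(x => x.CategoryId == categoryId)
    .ToList();
```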
Surely not needed anymore, but for future visitors: nitpicking the docs for each possible value, the following was found.

Given the 4 possible keys of the options array (feel free to omit any of the `'key' => value` pairs):

```php
['path' => '/categories/misc',   # Base path for all generated links.
 'query' => ['sort' => 'age',    # Some additional variables on the links.
             'thing' => 5],
 'fragment' => 'SeaTurtles',     # Value that will appear after the '#' sign.
 'pageName' => 'myPageNumber']   # The name of the variable used for pagination
                                 # that will appear in the url/request.
                                 # The default value is 'page'.
```

Taking "page number 3" as an example case, these options would generate links in the form:

```
http[s]://theSite.com/categories/misc?myPageNumber=3&sort=age&thing=5#SeaTurtles
```

---

'path' -> [Documentation for Path Option](https://laravel.com/docs/10.x/pagination#customizing-pagination-urls)

'query' -> [Documentation for Query Option](https://laravel.com/docs/10.x/pagination#appending-query-string-values) (Note I could be mistaken on the syntax of this one.)

'fragment' -> [Documentation for Fragment Option](https://laravel.com/docs/10.x/pagination#appending-hash-fragments)

'pageName' -> No documentation. In the AbstractPaginator class it is shown to be the default page-number variable name that appears in the url after the '?'. It is mentioned in the ['path' documentation](https://laravel.com/docs/10.x/pagination#customizing-pagination-urls).
I have a lab about `addEventListener`, where a click of a button should show my for loop with office locations.

```
const grupList = document.querySelector("ul");
console.log(grupList);
const listItems = document.querySelector("li");
console.log(listItems);

const offices = document.getElementById("ofLocations");
let officeLocations = ["New York", "London", "Paris", "Tokyo", "Sydney"];

function getOfficeLocations(arr) {
  for (let i = 0; i < arr.length; i++) {
    console.log(arr[i]);
    offices.addEventListener("click", getOfficeLocation {
    })
  }
}

getOfficeLocation(officeLocations);
```

This is what I have done so far.
For those who are looking to access a standard 2D array, you can use a nested loop.

**views.py**

    content = {'f': [[1,2,3,4],[4,5,6,7]]}
    return render(request, 'index.html', content)

**index.html**

    {% for array in f %}
        {% for content in array %}
            <h1>{{ content }}</h1>
        {% endfor %}
    {% endfor %}

This actually works.
I had to write a script where I wanted to provide default parameters for folder and file locations, but since the user could overwrite them AND launch the script from any directory, I needed to handle this in a way that was as user-friendly as possible (e.g. using the autocomplete the PS console provides for file paths).

I thought I'd share this here as I also struggled to handle relative paths properly (and wasted way too much time with .NET objects trying to solve this) until [@Artem Yukhimenko][1] provided the crucial cmdlet `Resolve-Path`. I want to give the credit to him.

```
param (
    [string] $SubFolderOne = "./SubfolderOne/",
    [string] $ConfigFile = "./config.json",
    [string] $SiblingFolder = "../SiblingFolder/"
)

function ResolveDefaultParameters {
    # If the script has been run from a different directory and the path variables
    # have been omitted, the default values would point to the wrong location.
    # This function resolves the path relative to the script's directory in that case.
    param (
        [string] $Path,
        [string] $VarName
    )
    try {
        # this will succeed if the path is already correct
        $Path = $Path | Resolve-Path -ErrorAction Stop
    }
    catch {
        try {
            # This fixes the path so the default parameter value is joined with the
            # script's directory. If the parameter was provided with a wrong path,
            # the error will stop the script.
            $Path = Join-Path $PSScriptRoot $Path | Resolve-Path -ErrorAction Stop
        }
        catch {
            # throw manually, because the error from the `Resolve-Path` cmdlet
            # would be misleading
            throw "Can't resolve path for `$$VarName`: '$Path'"
        }
    }
    return $Path
}

$SubFolderOne = ResolveDefaultParameters -Path $SubFolderOne -VarName "SubFolderOne"
$ConfigFile = ResolveDefaultParameters -Path $ConfigFile -VarName "ConfigFile"
$SiblingFolder = ResolveDefaultParameters -Path $SiblingFolder -VarName "SiblingFolder"

# after handling the passed path parameters, we can set the script's location as the
# current location so we can access other files with known locations more easily
$origLocation = Get-Location
Set-Location $PSScriptRoot

try {
    # do the work here
    Write-Host "SubFolderOne: $SubFolderOne"
    Write-Host "ConfigFile: $ConfigFile"
    Write-Host "SiblingFolder: $SiblingFolder"

    # execute a related script
    .\Subroutines\RelatedScript.ps1
}
catch {
    throw $_.Exception.Message
}
finally {
    # return to the original location
    Set-Location $origLocation
}
```

[1]: https://stackoverflow.com/users/12303627/artem-yukhimenko
When using ES modules you must load the **`fs`** module to enable file read/write support, according to this issue: [issue][1]

```javascript
import * as XLSX from 'xlsx'
import * as fs from 'fs';
XLSX.set_fs(fs);

// -------------------------------------------

const data = [
  { name: "Alice", age: 25, gender: "F" },
  { name: "Bob", age: 30, gender: "M" },
  { name: "Charlie", age: 35, gender: "M" },
];

const wb = XLSX.utils.book_new();
const ws = XLSX.utils.json_to_sheet(data);
XLSX.utils.book_append_sheet(wb, ws, "Sheet1");
XLSX.writeFile(wb, "sample_file.xlsx");
```

[1]: https://github.com/SheetJS/sheetjs/issues/2634#issuecomment-1231412497
Is there any way to set up a type checker to validate that SQL queries are using the right columns from the right tables and the right dialect?
Is there any way to set up VS Code to check that SQL queries are valid
I had both of these conditions inside an `if` statement to check whether `x` was even or odd, but `!(x&1)` executed the body of the `if` for an even `x`, while `x&1==0` didn't.

I expected both to behave the same for even `x`: `1 & 0` is `0`, 1 in a 32- or 64-bit representation is `000..01`, and if, say, `x` is something like `10010101100` (even), then their bitwise AND should yield 0. Hence I'm still not sure why only `!(x&1)` works. Please correct me if I am wrong anywhere. Thank you.
|c++|c++17|bitwise-operators|
Why do x&1==0 and !(x&1) not yield the same results in an if statement in C++?
I'm getting a really high chi-squared value; is there an issue with the sample used or with how I calculated the chi-squared value? I used this site to calculate my chi-squared value: https://datatab.net/statistics-calculator/hypothesis-test/chi-square_test_calculator

[Data - has age groups and the type of coffee they prefer (the numbers in the cells are the number of people that prefer that coffee in that age group). This was the data used to calculate the chi-squared values.](https://i.stack.imgur.com/AC3c3.png)

Any help will be greatly appreciated. I calculated the chi-squared value and got a value of 200 and a p-value of less than 0.05.
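As a cross-check on the online calculator, the test statistic can be computed by hand from the contingency table: for each cell, take the expected count from the row and column totals and sum `(observed - expected)² / expected`. This is a minimal pure-Python sketch; the counts below are made up for illustration, since the actual table is only available as an image:

```python
def chi_squared(table):
    # table: list of rows of observed counts from the contingency table
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# hypothetical counts: rows = age groups, columns = preferred coffee type
observed = [[10, 20, 15],
            [30, 40, 25],
            [20, 10, 30]]
print(chi_squared(observed))
```

If this hand computation agrees with the calculator, the value itself is correct, and the question becomes whether the sample and the degrees of freedom make such a value plausible.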
Getting a really high chi-squared value: is there an issue with the sample used or with how I calculated the chi-squared value?
|excel|
I have written my own 2D convolution function as follows:

```python
import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt

def TwoD_convolution(img, kernel):

    # Flipping the kernel by 180 degrees
    kernel = kernel[::-1]
    for i in range(len(kernel)):
        kernel[i] = kernel[i][::-1]

    # Filling the new image with zeroes
    convolve = np.zeros((img.shape[0]+len(kernel)-1, img.shape[1]+len(kernel[0])-1, 3), np.uint8)

    midx = len(kernel)//2
    midy = len(kernel[0])//2

    # Trying to fill each cell one by one, and RGB values

    for i in range(convolve.shape[0]):
        for j in range(convolve.shape[1]):
            for k in range(3):
                cur = 0 # current sum
                for x in range(-midx,midx+1):
                    for y in range(-midy,midy+1):
                        if i+x >= 0 and i+x < img.shape[0] and j+y >= 0 and j+y < img.shape[1]:
                            # going through every neighbour of the middle cell
                            cur += ((img[i+x][j+y][k])*(kernel[midx+x][midy+y]))

                convolve[i][j][k] = cur

    return convolve
```

To get a sharpened image, I am calling the function as follows:

```python
display_image(TwoD_convolution(img,[[-0.5,-1,-0.5],[-1,7,-1],[-0.5,-1,-0.5]]))
```

where `display_image` is defined as follows:

```python
def display_image(img):
    plt.imshow(cv.cvtColor(img, cv.COLOR_BGR2RGB))
    plt.show()
```

It is displaying this image:

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/yFImE.jpg

I am sure it is getting sharpened on some pixels, but why are the colors at the edges so random? Where am I going wrong?

Thanks
You can create a job with a remotely partitioned step where each worker step is assigned a partition of the data and is deployed on a different node. You can find an example [here](https://github.com/spring-projects/spring-batch/tree/main/spring-batch-samples#remote-partitioning-sample).
I am new to Xamarin Forms and trying to **modify** the notification content before displaying it.

Problem: I can see the modified content message and also the normal message (before the content was modified). How do I cancel the previous message? I want to show only the modified message.

I used this [tutorial][1] to display the Firebase message, using the package Plugin.FirebasePushNotification.

    CrossFirebasePushNotification.Current.OnNotificationReceived += (s, p) =>
    {
        ModifyMsgAsync(this, p).Wait();
    };

    ModifyMsgAsync(Context context, FirebasePushNotificationDataEventArgs message)
    {
        NotificationCompat.Builder builder = new NotificationCompat.Builder(context, "123");
        builder.SetContentTitle("Changed the title");
        builder.SetContentText("Message body is changed");
        builder.SetPriority(NotificationCompat.PriorityHigh);
        builder.SetAutoCancel(true); // disappear after some time

        NotificationManagerCompat managerCompat = NotificationManagerCompat.From(context);
        managerCompat.Notify(notification_id, builder.Build());
    }

[1]: https://www.telerik.com/blogs/how-to-use-push-notifications-xamarin-forms
Xamarin Forms Firebase push notification: edit the message content before display
|android|firebase|xamarin.forms|push-notification|
The issue you're facing with `model_from_json()` is that it returns the class name mentioned in your JSON configuration, not an actual Keras model instance. In your case, it returns "MyModel", as specified in the "class_name" field.

To address this, create an instance of your custom model class from the config and then load the model's weights separately using `load_weights()`. You can try this approach (note that `from_config()` expects a dict, so the JSON is parsed first):

    import json

    with open('model_config.json', 'r') as json_file:
        json_config = json.load(json_file)

    from MyModel import MyModel
    model = MyModel.from_config(json_config["config"])
    model.load_weights('model_weights.h5')
Adding to Eric Melski's comment: if you also wanted your `make all` functionality to build both programs, this would be a good substitute:

    target1_SRC=123 456
    target2_SRC=abc def

    target1: TARGET=target1
    target2: TARGET=target2

    default: all

    target1: ; @echo $($(TARGET)_SRC)
    target2: ; @echo $($(TARGET)_SRC)

    all: target1 target2

Here I also added the `default` rule so that both targets are made by default.