I have been tasked with creating an unsaved changes alert in a form partial for a Ruby on Rails app (full MVC architecture). While not new to Rails, I am new to Views, Turbolinks, and jQuery and have been struggling here and there, so please bear with me. Users should encounter the alert when they have made changes to the form that are unsaved/not submitted, and attempt to navigate elsewhere in the app, close their tab/browser window, refresh the browser, or use the browser's navigation arrows. Currently, the code is working somewhat as intended. My main issue is that the `turbolinks:click` event listener continues to listen for click events after the alert has been dismissed and the user begins to navigate away from the form. If I dismiss the alert and click anywhere else in the app, the alert reappears. I then have to dismiss the alert every time I click somewhere. If I click 'cancel' on the alert, it appears as if the alert windows have stacked on top of one another (likely the amount of times I clicked around). I am writing this as a script inside of the _form partial. How can I terminate the `turbolinks:click` event listener once the user has dismissed the alert? This is my first question on Stack Overflow - please let me know if any protocol wasn't followed. Thanks!

```
<script>
  function unsavedChangesAlert() {
    // initialize
    let formChanged = false;

    // add selectors, array for readability/maintainability
    const formSelectors = [
      '.custom-control-input', // handles section (GL, WC, etc.) toggles
      '.gl_fields select',
      '.gl_fields input[type="checkbox"]',
      '.wc_fields select',
      '.wc_fields input.commaClass',
      '.wc_fields input[type="checkbox"]',
      '.auto_fields select',
      '.auto_fields input[type="checkbox"]',
      '.umb_fields select',
      '.umb_fields input[type="checkbox"]',
      '.pl_fields select',
      '.pl_fields input[type="checkbox"]',
      '.im_fields input.commaClass',
      '.oh_fields input.commaClass',
      '.gk_fields input.commaClass',
      '.cg_fields input.commaClass',
      '.pollution_fields input.commaClass',
      '.pollution_fields input[type="checkbox"]',
      '.re_fields input.commaClass',
      '.re_fields input[type="checkbox"]',
      '.ava_fields input.commaClass',
      '.ava_fields input[type="checkbox"]',
      '.db_fields input.commaClass',
      '.db_fields input[type="checkbox"]',
    ];

    // take formSelectors array, join into single string
    const formInputs = document.querySelectorAll(formSelectors.join(', '));

    // event listeners for form inputs
    $(formInputs).on('change keydown input', function() {
      formChanged = true;
    });

    // to prevent alert on form submission; set flag to false
    let formSubmitted = false;

    // add event listener for beforeunload (browser refresh, browser close window/tab)
    $(window).on('beforeunload', function(event) {
      if (formChanged && !formSubmitted) {
        event.preventDefault();
        window.scrollTo(0, document.body.scrollHeight);
      };
    });

    $('#edit_ins_reqs_form').submit(function() {
      // set flag to true when form is submitted
      formSubmitted = true;
    });

    let confirmationShown = false;

    $(document).on('turbolinks:click', function(event) {
      console.log('one');
      const editForm = $('#edit_ins_reqs_form');
      // check if the clicked element or its ancestors contain the edit form
      if (!$(event.target).closest('#edit_ins_reqs_form').length) {
        console.log("two")
        const saveButton = $(event.target).closest('.submit');
        // check to exclude the save button
        if (!saveButton.length && !$(event.target).hasClass('submit')) {
          if (!confirmationShown && !confirm('turbolinks!!! You have unsaved changes. Are you sure you want to leave this page?')) {
            event.preventDefault();
            $(window).scrollTop($(document).height());
            confirmationShown = true;
          } else {
            // remove event listener once triggered
            $(document).off('turbolinks:click')
          }
        }
        console.log("three")
      }
    });
  };

  // call it
  $(document).on('turbolinks:load', function() {
    unsavedChangesAlert();
  });
</script>
```
Terminating turbolinks:click event listener once a user has navigated away from a form
|javascript|jquery|ruby-on-rails|event-handling|turbolinks|
null
I have created brokers in ActiveMQ Artemis on Windows, on Linux, and in WSL. There was never a problem, except on one of my machines running Windows with WSL2. I did everything the same when installing Artemis:

```
sudo groupadd artemis
sudo useradd -s /bin/false -g artemis -d /opt/artemis artemis
cd /opt
sudo wget https://archive.apache.org/dist/activemq/activemq-artemis/2.12.0/apache-artemis-2.12.0-bin.tar.gz
sudo tar -xvzf apache-artemis-2.12.0-bin.tar.gz
sudo mv apache-artemis-2.12.0 artemis
sudo chown -R artemis: artemis
sudo chmod o+x /opt/artemis/bin/
sudo rm apache-artemis-2.12.0-bin.tar.gz
```

It installs, but when I try to create my own broker instance:

```
/opt/artemis/bin/artemis create --user app --password pwd --allow-anonymous test
```

I get the following error message:

```
Cannot initialize queue:Function not implemented
```

I've tried it several times, even uninstalled ActiveMQ Artemis and removed the user and group and started the whole process again, but the result was always the same. I can't figure out what the difference would be or how to fix the problem. Any help would be highly appreciated!

UPDATE 1: There is not much log, but turning on verbose mode gives the following lines:

```
Executing org.apache.activemq.artemis.cli.commands.Create create --verbose --user app --password pwd --allow-anonymous test
Home::/opt/artemis, Instance::null
Cannot initialize queue:Function not implemented
```
ActiveMQ Artemis: can't create broker: function not implemented
Show/hide of an HTML table is not working in JavaScript (IE8)
**Problem**

* Merge `fr` and `events` based on a match between `fr['Date']` and the exact or *next* date in `events['Earnings_Date']` (i.e. looking forward), grouped by column 'Symbol'. Keep only the first match.

**Setup**

Let's add 2 symbols (one expecting a match, one expecting zero). Also, it is perhaps easier to use [`pl.Expr.str.to_date`](https://docs.pola.rs/py-polars/html/reference/expressions/api/polars.Expr.str.to_date.html) instead of your [`pl.Expr.str.strptime`](https://docs.pola.rs/py-polars/html/reference/expressions/api/polars.Expr.str.strptime.html).

* `fr`

```python
import polars as pl

fr = (
    pl.DataFrame(
        {
            'Symbol': ['A', 'A', 'A', 'B', 'B', 'C'],
            'Date': ['2010-08-29', '2010-09-01', '2010-11-30',
                     '2010-09-05', '2010-12-01', '2010-09-01'],
        }
    )
    .with_columns(pl.col('Date').str.to_date('%Y-%m-%d'))
    .set_sorted(('Symbol', 'Date'))
)
fr

shape: (6, 2)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Symbol ┆ Date       β”‚
β”‚ ---    ┆ ---        β”‚
β”‚ str    ┆ date       β”‚
β•žβ•β•β•β•β•β•β•β•β•ͺ════════════║
β”‚ A      ┆ 2010-08-29 β”‚
β”‚ A      ┆ 2010-09-01 β”‚
β”‚ A      ┆ 2010-11-30 β”‚
β”‚ B      ┆ 2010-09-05 β”‚
β”‚ B      ┆ 2010-12-01 β”‚
β”‚ C      ┆ 2010-09-01 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

* `events`

```python
events = (
    pl.DataFrame(
        {
            'Symbol': ['A', 'A', 'B'],
            'Earnings_Date': ['2010-06-01', '2010-09-01', '2010-12-01'],
            'Event': [1, 4, 7],
        }
    )
    .with_columns(pl.col('Earnings_Date').str.to_date('%Y-%m-%d'))
    .set_sorted(('Symbol', 'Earnings_Date'))
)
events

shape: (3, 3)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Symbol ┆ Earnings_Date ┆ Event β”‚
β”‚ ---    ┆ ---           ┆ ---   β”‚
β”‚ str    ┆ date          ┆ i64   β”‚
β•žβ•β•β•β•β•β•β•β•β•ͺ═══════════════β•ͺ═══════║
β”‚ A      ┆ 2010-06-01    ┆ 1     β”‚
β”‚ A      ┆ 2010-09-01    ┆ 4     β”‚
β”‚ B      ┆ 2010-12-01    ┆ 7     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
```

**Code**

```python
fr = (
    fr.join_asof(
        events,
        left_on='Date',
        right_on='Earnings_Date',
        strategy='forward',
        by='Symbol'
    )
    .with_columns(
        pl.when(pl.struct('Symbol', 'Earnings_Date').is_first_distinct())
        .then(pl.col('Earnings_Date', 'Event'))
    )
)
fr

shape: (6, 4)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Symbol ┆ Date       ┆ Earnings_Date ┆ Event β”‚
β”‚ ---    ┆ ---        ┆ ---           ┆ ---   β”‚
β”‚ str    ┆ date       ┆ date          ┆ i64   β”‚
β•žβ•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ═══════════════β•ͺ═══════║
β”‚ A      ┆ 2010-08-29 ┆ 2010-09-01    ┆ 4     β”‚
β”‚ A      ┆ 2010-09-01 ┆ null          ┆ null  β”‚
β”‚ A      ┆ 2010-11-30 ┆ null          ┆ null  β”‚
β”‚ B      ┆ 2010-09-05 ┆ 2010-12-01    ┆ 7     β”‚
β”‚ B      ┆ 2010-12-01 ┆ null          ┆ null  β”‚
β”‚ C      ┆ 2010-09-01 ┆ null          ┆ null  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
```

**Explanation**

* Use [`pl.DataFrame.join_asof`](https://docs.pola.rs/py-polars/html/reference/dataframe/api/polars.DataFrame.join_asof.html) with `strategy='forward'`.
* Next, inside [`pl.with_columns`](https://docs.pola.rs/py-polars/html/reference/dataframe/api/polars.DataFrame.with_columns.html) apply [`pl.when`](https://docs.pola.rs/py-polars/html/reference/expressions/api/polars.when.html) to set duplicates to `None` (`null`):
  * Use [`pl.struct`](https://docs.pola.rs/py-polars/html/reference/expressions/api/polars.struct.html) to create a single 'key' for the combination `['Symbol', 'Earnings_Date']` and check [`pl.Expr.is_first_distinct`](https://docs.pola.rs/py-polars/html/reference/expressions/api/polars.Expr.is_first_distinct.html).
  * Where `True` we want to keep the values for `['Earnings_Date', 'Event']`, so we can pass [`pl.col`](https://docs.pola.rs/py-polars/html/reference/expressions/col.html) to `.then`. Where `False`, we will get `None`.
Here is the log. Remote clients also can't connect. Why does `ls` work, but not `LIST`? vsFTPd works fine via SFTP, but then it does not lock the client in the home directory; locking the client in home works only via FTP.

```
root@zzzzzz:~# ftp localhost
Connected to localhost.
220 (vsFTPd 3.0.3)
Name (localhost:root): aaaa
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
200 EPRT command successful. Consider using EPSV.
150 Here comes the directory listing.
-------------------------------
226 Directory send OK.
ftp> quote LIST
425 Use PORT or PASV first.
ftp> quote PASV
227 Entering Passive Mode (0,0,0,0,190,115).
ftp> quote LIST
---- 1 minute delay ----
425 Failed to establish connection.
ftp> recv
(remote-file) robots.txt
(local-file) 111
local: 111 remote: robots.txt
200 EPRT command successful. Consider using EPSV.
150 Opening BINARY mode data connection for robots.txt (23 bytes).
226 Transfer complete.
23 bytes received in 0.02 secs (1.4171 kB/s)
ftp> quote RECV
500 Unknown command.
ftp> quote RETR
425 Use PORT or PASV first.
ftp> quote PASV
227 Entering Passive Mode (0,0,0,0,66,14).
ftp> quote RETR
550 Failed to open file.
ftp> quote RETR robots.txt
---- 1 minute delay ----
425 Failed to establish connection.
ftp>
```
FTP ls works fine, but LIST is not working (vsFTPd server)
|ftp|
{"Voters":[{"Id":573032,"DisplayName":"Roman C"},{"Id":16217248,"DisplayName":"CPlus"},{"Id":17562044,"DisplayName":"Sunderam Dubey"}]}
{"Voters":[{"Id":12265927,"DisplayName":"Puteri"},{"Id":1974224,"DisplayName":"Cristik"},{"Id":17562044,"DisplayName":"Sunderam Dubey"}],"SiteSpecificCloseReasonIds":[18]}
Out of the three following one-liners

1. `return list.size() == new HashSet<>(list).size();`
2. `return list.size() == list.stream().distinct().count();`
3. `return list.stream().allMatch(ConcurrentHashMap.newKeySet()::add);`

the last one (#3) has the following benefits:

* Performance: it short-circuits, i.e. it stops on the first duplicate (while #1 and #2 always iterate till the end) – as Pshemo [commented](/q/30053487#comment81961406_30053822).
* Versatility: it allows handling not only collections (e.g. lists) but also streams (without explicitly collecting them).

<sub>Additional notes:
&nbsp;β€’ [`ConcurrentHashMap.newKeySet()` is a way to create a concurrent hash set](/q/6992608).
&nbsp;β€’ A previous version of this answer suggested the following code: `return list.stream().sequential().allMatch(new HashSet<>()::add);` β€” but, as M.&nbsp;Justin [commented](/q/30053487#comment137947083_40977178), calling `BaseStream#sequential()` doesn't guarantee that the operations will be executed from the same thread and doesn't eliminate the need for synchronization.
&nbsp;β€’ The performance benefit from short-circuiting won't usually matter, because we typically use such code in assertions and other correctness checks, where items are _expected_ to be unique; in contexts where duplicates are really a possibility, we typically not only check for their presence but also find and handle them.
&nbsp;β€’ M.&nbsp;Justin provides a [clean (not using side-effects) short-circuiting solution for Java 22](/a/78169084), but that's not a one-liner.</sub>
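As a small hedged sketch of approach #3 (the class and helper names `UniqueCheck`/`allUnique` are my own, not from the question):

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

public class UniqueCheck {

    // Short-circuits: Set#add returns false for an element that is already
    // present, so allMatch stops consuming the stream at the first duplicate.
    static <T> boolean allUnique(List<T> list) {
        return list.stream().allMatch(ConcurrentHashMap.newKeySet()::add);
    }

    public static void main(String[] args) {
        System.out.println(allUnique(List.of("a", "b", "c"))); // true
        System.out.println(allUnique(List.of("a", "b", "a"))); // false
    }
}
```

Note that the set is created once when the method reference is bound, so all elements of one call share it; a fresh set is created on each call to `allUnique`.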
I have a blue container; when I double-tap it, a red container appears. The red container's position is fixed and does not change while the blue container moves. When the blue container's border reaches the red container's border, the blue container should not be able to move any further in that direction, but should still be able to move in the other directions (Flutter).

```dart
import 'package:flutter/material.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: MyHomePage(),
    );
  }
}

class MyHomePage extends StatefulWidget {
  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  double blueContainerX = 50;
  double blueContainerY = 50;
  bool showRedContainer = false;
  double redContainerX = 50;
  double redContainerY = 50;
  double blueContainerWidth = 200;
  double blueContainerHeight = 200;

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('Container Demo'),
      ),
      body: Center(
        child: Stack(
          children: [
            // Red Container (Fixed or Positioned based on showRedContainer)
            if (showRedContainer)
              Positioned(
                left: redContainerX,
                top: redContainerY,
                child: Container(
                  width: 50,
                  height: 50,
                  decoration: BoxDecoration(
                    border: Border.all(
                      color: Colors.red,
                      width: 2.0, // Border width
                    ),
                  ),
                ),
              ),
            // Blue Container (Draggable)
            Positioned(
              left: blueContainerX,
              top: blueContainerY,
              child: GestureDetector(
                onDoubleTap: () {
                  setState(() {
                    if (!showRedContainer) {
                      // Set red container's position only when it's double-tapped
                      showRedContainer = true;
                      redContainerX = blueContainerX;
                      redContainerY = blueContainerY;
                    }
                  });
                },
                onPanUpdate: (details) {
                  setState(() {
                    // Update blue container position
                    blueContainerX += details.delta.dx;
                    blueContainerY += details.delta.dy;

                    // Limit blue container movement within screen bounds
                    blueContainerX = blueContainerX.clamp(0.0,
                        MediaQuery.of(context).size.width - blueContainerWidth);
                    blueContainerY = blueContainerY.clamp(0.0,
                        MediaQuery.of(context).size.height - blueContainerHeight);

                    // Stop blue container movement when red container borders align
                    if ((blueContainerX + blueContainerWidth == redContainerX + 50 &&
                            details.delta.dx > 0) ||
                        (blueContainerY + blueContainerHeight == redContainerY + 50 &&
                            details.delta.dy > 0)) {
                      blueContainerX = blueContainerX;
                      blueContainerY = blueContainerY;
                    }
                  });
                },
                child: Container(
                  width: blueContainerWidth,
                  height: blueContainerHeight,
                  decoration: BoxDecoration(
                    border: Border.all(
                      color: Colors.blue,
                      width: 2.0, // Border width
                    ),
                  ),
                ),
              ),
            ),
          ],
        ),
      ),
    );
  }
}
```

[![enter image description here](https://i.stack.imgur.com/rNECn.jpg)](https://i.stack.imgur.com/rNECn.jpg)

[![enter image description here](https://i.stack.imgur.com/oo6il.jpg)](https://i.stack.imgur.com/oo6il.jpg)
```
spring.datasource.platform=h2
spring.datasource.url=jdbc:h2:file:~/DemoDb
spring.datasource.generate-unique-name=false
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
spring.jpa.show-sql=true
spring.jpa.hibernate.ddl-auto=update
spring.h2.console.path=/h2-console
spring.h2.console.enabled=true
server.port=8181
```

I read that from Spring version 3 onwards, it does not take `spring.datasource.url`; rather, a random instance of the DB comes up while launching the Spring Boot app. But it's not showing up in the logs, so I am also suspecting that IntelliJ is not taking application.properties into account at all. I tried deleting .idea and re-opening the IDE, but it didn't help. Kindly help me fix the issue.
I am trying to use the H2 in-memory DB from my Spring Boot application (version 3.1.10), but it's not connecting to H2 properly
|spring|spring-boot|maven|intellij-idea|h2|
null
Add options when initializing the app:

```dart
Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized();
  await Firebase.initializeApp(
    options: FirebaseOptions(
      apiKey: 'api key',
      appId: 'appId',
      messagingSenderId: 'messagingSenderId',
      projectId: 'projectId',
      storageBucket: "storageBucket",
    ),
  );
  runApp(MyApp());
}
```
|python|openai-api|
{"Voters":[{"Id":23899445,"DisplayName":"Tanisha Sharma"}],"DeleteType":1}
Server Linux distros are not capable of running Selenium because they don't have a user interface to run Chrome. You can try to set it up on a Windows VDS or with a remote driver. Also, I can't be exactly sure about the error since you didn't provide all the logs.
|c#|linq|linq2db|
|javascript|typescript|axios|response|
{"Voters":[{"Id":68587,"DisplayName":"John Kugelman"},{"Id":9952196,"DisplayName":"Shawn"},{"Id":6570042,"DisplayName":"Gabor Lengyel"}],"SiteSpecificCloseReasonIds":[18]}
This is my code in my +page.server.js:

```js
export const actions = {
  default: async ({ request }) => {
    const data = await request.formData();
    const ln = data.get('lastname');
    return ln;
  }
};

export function load() {
  if (!db) throw error(404);
  return {
    todos: db.getTodos()
  };
}
```

My question: how do I use the output variable of the actions object to filter what I get from db.getTodos() in the load function? I can implement the filter on the front end in +page.svelte by using a bind:value on the variable, but I would rather filter on the server side.
Sveltekit: using a form variable for filtering a database
As noted in the comments, an `awk` solution is simple. For example:

```
awk '!index($0,q) || ++c>n' n=3 q=please "$filename"
```

---

It is not very convenient to try to count with `sed`, although it is possible. Parameterising arguments is also complicated. Ignoring both those issues, here is a sed script to delete the first 3 occurrences of lines containing "please":

```
sed '
1 {
    x
    s/^/.../
    x
}
/please/ {
    x
    s/.//
    x
    t del
    b
    :del
    d
}
' "$filename"
```

Applying either script to:

```
1 leave this line alone
2 leave this line alone
1 please delete this line
3 leave this line alone
2 please delete this line
4 leave this line alone
5 leave this line alone
3 please delete this line
6 leave this line alone
4 please leave this line alone
7 leave this line alone
```

produces:

```
1 leave this line alone
2 leave this line alone
3 leave this line alone
4 leave this line alone
5 leave this line alone
6 leave this line alone
4 please leave this line alone
7 leave this line alone
```

---

If you would prefer to enter a number rather than a long string of dots, you can do:

```
sed '
1 {
    x
    :start
    s/^/ /
    t nop
    :nop
    s/././3
    t end
    b start
    :end
    x
}
/please/ {
    x
    s/.//
    x
    t del
    b
    :del
    d
}
'
```
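A quick sanity check of the awk one-liner (the sample file and its contents are my own, not from the question):

```shell
#!/bin/sh
# Build a small sample file: three "please" lines to delete, one to keep.
cat > sample.txt <<'EOF'
keep one
please delete 1
please delete 2
keep two
please delete 3
please keep me
EOF

# Delete the first n=3 lines containing q=please; everything else prints.
awk '!index($0,q) || ++c>n' n=3 q=please sample.txt
```

This prints `keep one`, `keep two`, and `please keep me`: lines without the search string always pass the first condition, and `++c>n` only becomes true for the fourth and later matching lines.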
To be honest, I'm not entirely clear on your question. If I understand correctly, you're looking to categorize values into intervals of 50, such as 1-50, 51-100, 101-150, and so on. After that, you want to group the IDs based on these intervals and sort them according to the "sort_me" column. Besides concatenating those IDs, you want to count the number of IDs in each group and discard those with 1 or fewer. If this is what you are looking for, then you may try the following query:

```sql
WITH Intervals AS (
    SELECT
        id,
        CASE
            WHEN value BETWEEN 1 AND 50 THEN '1-50'
            WHEN value BETWEEN 51 AND 100 THEN '51-100'
            WHEN value BETWEEN 101 AND 150 THEN '101-150'
            -- Add more intervals as needed
            ELSE 'Unknown'
        END AS value_interval,
        sort_me
    FROM your_table
)
SELECT
    value_interval,
    GROUP_CONCAT(id ORDER BY sort_me) AS grouped_ids,
    COUNT(*) AS num_ids
FROM Intervals
GROUP BY value_interval
HAVING num_ids > 1
ORDER BY value_interval;
```

There is no straight answer for your query as far as I know. Well, as I said, I might be interpreting your question wrong.
Creating a Google Log-In Extension for a personal project, so I've set up functionality that allows me to log in to the extension with my Google account, and it works. I'm using an initial HTML file called "popup.html" to hold the logic of Google's authentication by calling "popup.js" as the backend script, displaying a "Waiting for Log-in" message, and then, after successful sign-in, setting that `<div>` to display the contents of another HTML file called "authorized.html" by assigning them to its innerHTML property. The issue is that "authorized.html"'s own script "authorized.js" will not print anything to my console, nor will it even throw an error on the console, despite the other contents of "authorized.html" successfully being displayed. Only the relevant parts of my code are below, but I'd like to know why I'm having this issue and how to circumvent it. Thank you.

popup.html:

```
<!DOCTYPE html>
<html>
  <head>
    <title>Google OAuth Sign-In</title>
    <script src="popup.js"></script>
  </head>
  <body>
    <div id="Sign-In TempUI" class="content-body">Waiting for Google Sign-In</div>
  </body>
</html>
```

popup.js:

```
console.log('popup.js loaded');

document.addEventListener('DOMContentLoaded', function () {
  const NewGUI = document.getElementById('Sign-In TempUI');

  chrome.identity.getAuthToken({ interactive: true }, function (token) {
    if (chrome.runtime.lastError) {
      console.error(chrome.runtime.lastError.message);
      return;
    }
    console.log('Token acquired:', token);
    loadAuthorizedUI(token);
  });

  function loadAuthorizedUI(token) {
    console.log('Debug:', token);
    fetch('authorized.html')
      .then(response => response.text())
      .then(html => {
        console.log('HTML content:', html);
        NewGUI.innerHTML = html;
      })
      .catch(error => console.error('Error fetching or processing HTML:', error));
  }
});
```

authorized.html:

```
<!-- Page displayed after receiving token from OAuthentication.
     Should display GUI for Email handling -->
<!DOCTYPE html>
<html>
  <head>
    <title>Authorized UI</title>
  </head>
  <body>
    <h1>Welcome! You are now authorized.</h1>
    <ul id="emailList">
    </ul>
    <script src="authorized.js"></script>
    <div>Hit end</div>
  </body>
</html>
```

authorized.js:

```
//throw new Error('This is a test error from authorized.js');
console.log('authorized.js loaded');
```

Before, I was starting another `document.addEventListener('DOMContentLoaded', function ())` and then trying to replace the contents of emailList after getting its element by id, but it won't even open authorized.js. console.log doesn't print anything and the exception doesn't actually display anything on the console. However, when I move `<script src="authorized.js"></script>` to popup.html, it suddenly displays, albeit with errors.
You can easily create a function to do that for you, change the **length**, or even add it to the **native Array** as a `remove()` function for reuse.

Imagine you have this array:

```
var arr = [1, 2, 3, 4, 5]; //the array 1 to 5
```

OK, just simply run this:

```
arr.length = 0; //change the length
```

and the result is:

```
[] //result
```

an easy way to empty an array. You can also use a loop, which is not necessary but is just another way to do it:

```
/* could be arr.pop() or arr.splice(0)
   don't need to return as main array get changed */
function remove(arr) {
  while (arr.length) {
    arr.shift();
  }
}
```

There are also tricky ways you can think about, for example something like this:

```
arr.splice(0, arr.length); //[]
```

So if arr has 5 items, it will splice 5 items from 0, which means nothing will remain in the array. There are also other ways, like simply reassigning the array, for example:

```
arr = []; //[]
```

If you look at the Array functions, there are many other ways to do this, but the most recommended one could be changing the length.

As I said in the first place, you can also prototype `remove()`, as it's the answer to your question. You can simply choose one of the methods above and prototype it to the Array object in JavaScript, something like:

```
Array.prototype.remove = Array.prototype.remove || function() {
  this.splice(0, this.length);
};
```

and you can simply call it like this to empty any array in your JavaScript application:

```
arr.remove(); //[]
```
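One practical difference worth spelling out (this illustration is my own): `arr.length = 0` and `arr.splice(0, arr.length)` empty the array in place, so every reference to it sees the change, while `arr = []` only rebinds the variable:

```javascript
// Emptying in place: all references see the change.
const shared = [1, 2, 3];
const alias = shared;        // second reference to the same array

shared.length = 0;           // empties the array itself
console.log(alias.length);   // 0 -- the alias sees the change

// Reassigning: only the variable changes, not the old array.
let replaced = [1, 2, 3];
const oldRef = replaced;

replaced = [];               // rebinds the variable to a new empty array
console.log(oldRef.length);  // 3 -- the old reference still holds the items
```

This is why the in-place methods are the safer choice when other code (or a prototype method like `remove()` above, which uses `this.splice`) may hold a reference to the same array.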
The error was that I was passing File objects to the server function.
I'm trying to connect to a Node.js socket.io server with C# using [this Client-CSharp](https://github.com/doghappy/socket.io-client-csharp) solution. And it connects when I'm using only the "http" protocol and NOT using the `hostname` argument in the listen method. I use an SSL certificate and I get a 404 error for http requests (from a browser, https is OK). But everything works with the socket.

Example: working when the Node.js server is

```
server.listen(port, () => {
  console.log('running');
});
```

.NET

```
var client = new SocketIOClass("http://serverdomain.com:2222");
```

And not working when something on the server/client side changes. Node.js

```
const hostname = '127.0.0.1';

server.listen(port, hostname, () => {
  console.log('running');
});
```

or https .NET

```
var client = new SocketIOClass("https://serverdomain.com:2222");
```

How does it work? And how critical is specifying the hostname? Does using HTTP in this case for sockets mean insecure transmission? I'm absolutely confused. I tried using different libraries but the end result is the same (remove hostname and it works).
You are using the wrong compiler.

```
obj-m += hello_module.o

CROSS=/home/thanh/work/bbb/bb-kernel/dl/gcc-8.5.0-nolibc/arm-linux-gnueabi/bin/arm-linux-gnueabi-
KERNEL=/home/thanh/work/bbb/bb-kernel/KERNEL/

all:
	make ARCH=arm CROSS_COMPILE=${CROSS} -C ${KERNEL} M=$(PWD) modules

clean:
	make -C ${KERNEL} M=$(PWD) clean
```
I am using ESP-IDF directly (VSCode ESP-IDF extension). For PlatformIO it is not so different. My device is an ESP32-S3. There are two things you want to change:

- ***IDE serial monitor baudrate*** - the baudrate the IDE monitor is expecting.
- ***ESP32 console output baudrate*** - the baudrate of the ESP32 device for the UART used for console output.

# Setting the IDE Serial Monitor Baudrate

For the ESP-IDF extension, go to the ***settings*** and type `esp-idf baud rate` in the search box; you will get two results: ***"ESP-IDF Flash Baud rate"*** and ***"ESP-IDF Monitor Baud rate"***. The monitor is the one you are looking for. Set it to the value you want, e.g. 230400.

For the PlatformIO extension, you can set the monitor speed by adding/changing the `monitor_speed` option in ***platformio.ini***, e.g. `monitor_speed = 230400`.

# Setting the ESP32 Console Output Baudrate

There are two ways I found.

### 1. Hard code the baudrate

The default console output is `UART0`, so right at the beginning of the program add `uart_set_baudrate(UART_NUM_0, 230400)`:

```c
#include "driver/uart.h"

void app_main() {
    uart_set_baudrate(UART_NUM_0, 230400);
    // ...
}
```

When using this option, the serial monitor may output some garbage before the code reaches this line, due to some init code logs that are sent before changing the baudrate to what the monitor is expecting.

### 2. Configure ESP-IDF sdkconfig File

To open a configuration editor you can use the ESP-IDF command `ESP-IDF: SDK Configuration editor` (or the corresponding icon) or type `idf.py menuconfig` (for PlatformIO `pio run -t menuconfig`) in the terminal. The configurations you want to change are:

1. ***Component config -> ESP System Settings -> Channel for console output***: Set it to `Custom UART`.
2. ***Component config -> ESP System Settings -> UART peripheral to use for console output (0-1)***: Set it to `UART0`.
3. ***Component config -> ESP System Settings -> UART console baud rate***: Set it to the value you want, e.g. 230400.

P.S.
The ***CONFIG_CONSOLE_UART_BAUDRATE*** option of the ***sdkconfig*** is no longer supported, see [ESP-IDF Monitor](https://docs.espressif.com/projects/esp-idf/en/stable/esp32/migration-guides/release-5.x/5.0/tools.html#esp-idf-monitor).
I'm attempting to calculate the sum of the differences between the 'from' and 'to' columns in the 'absences' table, but I'm encountering an error.

> Column not found: 1054 Unknown column 'absences.hours_difference' in 'field list'

```php
$classesAbsences = Classes::withSum([
    'absences' => function ($query) {
        $query->selectRaw('TIMESTAMPDIFF(HOUR, `from`, `to`) AS hours_difference');
    }
], 'absences.hours_difference')
->get();
```
Calculating the sum of differences between two columns in Laravel
SOLUTION: by removing the combiner class, the reducer works just fine. I guess there was something wrong with it, but I cannot say what exactly.
I'm not sure why exactly it worked before; I guess it was just a coincidence. `this.VCR.indexOf` actually expects a `ViewRef`, and you pass a `ComponentRef` there, casting it to `any`. The fix is quite simple: you just need to pass `componentRef.hostView` to `VCR.indexOf`:

```ts
let vcrIndex: number = this.VCR.indexOf(componentRef.hostView);
```

[Here is a working demo][1] using Angular 17.

----------

Just in case, `ComponentFactoryResolver` was deprecated a long time ago (v13), and you can pass the component type directly to `this.VCR.createComponent`:

```
let childComponentRef = this.VCR.createComponent(ChildComponent);
```

[1]: https://stackblitz.com/edit/stackblitz-starters-vcnrvd
|ios|flutter|xcode|flutter-dependencies|razorpay|
Run your frontend on port 3000 and your backend on port 4000, and add a proxy entry to the frontend/package.json file: `"proxy": "http://xxx.xxx.x.xx:4000"`
I'm taking a Computational Neuroscience class on [Coursera](https://www.coursera.org/learn/computational-neuroscience/) and am stuck on one of the quiz problems. The question is framed as the following:

> Suppose that we had a linear recurrent network of 5 input nodes and 5 output nodes. Let us say that our network's weight matrix W is:
>
>     W = [0.6 0.1 0.1 0.1 0.1]
>         [0.1 0.6 0.1 0.1 0.1]
>         [0.1 0.1 0.6 0.1 0.1]
>         [0.1 0.1 0.1 0.6 0.1]
>         [0.1 0.1 0.1 0.1 0.6]
>
> (Essentially, all 0.1, besides 0.6 on the diagonals.)
>
> Suppose that we have a static input vector u:
>
>     u = [0.6]
>         [0.5]
>         [0.6]
>         [0.2]
>         [0.1]
>
> Finally, suppose that we have a recurrent weight matrix M:
>
>     M = [-0.25, 0, 0.25, 0.25, 0]
>         [0, -0.25, 0, 0.25, 0.25]
>         [0.25, 0, -0.25, 0, 0.25]
>         [0.25, 0.25, 0, -0.25, 0]
>         [0, 0.25, 0.25, 0, -0.25]
>
> Which of the following is the steady state output v_ss of the network? (Hint: See the lecture on recurrent networks, and consider writing some Octave or Matlab code to handle the eigenvectors/values (you may use the "eig" function))

**The notes for the class can be found [here](https://d18ky98rnyall9.cloudfront.net/_19b2cf740de5dc223e1ad6e8978b4acf_Suppl-Mat-Recurrent-Networks.pdf?Expires=1483747200&Signature=O7ioJ76sx9SKvvFUJUaxukab4EYD-ADsopZKCnIWreLEO0Swyw54fZnXOCgRxdgyHZrbDxuUvV-41j9Bx7HSABhO4-d3z1cfELBdIg25L8jUSNDtGZsLv6uMHl4dIf~2mHPPfK9YRvk3MF-wXLUAyzwbUKteS59c6mQyGT8kgAk_&Key-Pair-Id=APKAJLTNE6QMUY6HBC5A). Specifically, the equation for the steady state formula can be found on slides 5 and 6.**

I have the following code:

```python
import numpy as np

# Construct W, the network weight matrix
W = np.ones((5,5))
W = W / 10.
np.fill_diagonal(W, 0.6)

# Construct u, the static input vector
u = np.zeros(5)
u[0] = 0.6
u[1] = 0.5
u[2] = 0.6
u[3] = 0.2
u[4] = 0.1

# Construct M, the recurrent weight matrix
M = np.zeros((5,5))
np.fill_diagonal(M, -0.25)

for i in range(3):
    M[2+i][i] = 0.25
    M[i][2+i] = 0.25

for i in range(2):
    M[3+i][i] = 0.25
    M[i][3+i] = 0.25

# We need to matrix multiply W and u together to get h
# NOTE: cannot use W * u, that's going to do a scalar multiply
# it's element wise otherwise
h = W.dot(u)
print 'This is h'
print h

# Ok then the big deal is:
#                                  h dot e_i
#       v_ss = sum_(over all eigens) ------------ e_i
#                                  1 - lambda_i
eigs = np.linalg.eig(M)
eigenvalues = eigs[0]
eigenvectors = eigs[1]

v_ss = np.zeros(5)

for i in range(5):
    v_ss += (np.dot(h, eigenvectors[:, i])) / ((1.0 - eigenvalues[i])) * eigenvectors[:, i]

print 'This is our steady state v_ss'
print v_ss
```

The correct answer is:

```
[0.616, 0.540, 0.609, 0.471, 0.430]
```

This is what I am getting:

```
This is our steady state v_ss
[ 0.64362264  0.5606784   0.56007018  0.50057043  0.40172501]
```

Why is my output different, and how can I fix it?
1. Delete `C:\Users\USERNAME\.cache\firebase`.
2. Run the `firebase-win` exe. The authentication process will start.
3. When asked "Allow Firebase to collect CLI and Emulator Suite usage and error reporting information?", press `Y`.
4. Copy the Google link it prints, paste it into a browser, and log into your Google account to allow access to Firebase.
5. You should see: `Success! Logged in as tahira6020@gmail.com`

[![Firebase CLI login success][1]][1]

  [1]: https://i.stack.imgur.com/O3U3j.png
How to properly use Go html/template precompiled?
I am building a Chrome extension that will call my backend APIs to validate something. I would like to know whether other people could build Chrome extensions that manipulate the request and response to compromise the validation process. For example:

API: `https://my-domain.com/validation`

request:

```json
{
    "key": "aaaa-bbbb-cccc-dddd"
}
```

response:

```json
{
    "status": "IN_VALID_KEY"
}
```

Would someone be able to build a Chrome extension that monitors requests to https://my-domain.com/validation and changes the response to be

```json
{
    "status": "VALID"
}
```

Could that happen?
Is it possible to manipulate 3rd party Chrome extensions' network requests?
|google-chrome|google-chrome-extension|google-chrome-devtools|google-chrome-app|
So the fix was an error in the casing of some of the login credentials. But I also had not been populating the `SecurityStamp`, similar to this post: https://github.com/aspnet/AspNetCore/issues/10689. Even though the seed entered a value, explicitly adding it in the seed code seemed to fix the issue. I set it to a GUID at the time of seeding.
Something like this seems to be what you are looking for. It is skeleton code to initialize the game field with the specified number of treasures and bullets, and leave the remaining cells blank.

    using System;
    using System.Text;

    public enum Status
    {
        Hidden,
        Shown,
        Marked,
    }

    public enum Contents
    {
        Blank,
        Treasure,
        Bullets,
    }

    public class Point
    {
        public Point(int x, int y, Status status, Contents contents)
        {
            X = x;
            Y = y;
            Status = status;
            Contents = contents;
        }
        public int X { get; }
        public int Y { get; }
        public Status Status { get; set; }
        public Contents Contents { get; set; }
    }

    public class Grid
    {
        static readonly Random rng = new Random();

        public Grid(int size, int treasures, int bullets)
        {
            Size = size;
            Field = new Point[size, size];
            Contents[] shuffle = new Contents[size * size];
            // Array starts filled with Contents.Blank = 0
            // Add treasures at the beginning of the array
            for (int index = 0; index < treasures; index++)
            {
                shuffle[index] = Contents.Treasure;
            }
            // Add bullets after the treasures
            for (int index = 0; index < bullets; index++)
            {
                shuffle[treasures + index] = Contents.Bullets;
            }
            // Shuffle the array randomly
            Array.Sort(shuffle, (x, y) => rng.Next(0, 2) * 2 - 1);
            for (int i = 0; i < size; i++)
            {
                for (int j = 0; j < size; j++)
                {
                    // each field defaults to hidden status
                    Status status = Status.Hidden;
                    // pick the index in the shuffle that
                    // corresponds to the i,j coordinates
                    int index = i * size + j;
                    Contents contents = shuffle[index];
                    Field[i, j] = new Point(i, j, status, contents);
                }
            }
        }
        public int Size { get; }
        public Point[,] Field { get; }

        public override string ToString()
        {
            StringBuilder sb = new StringBuilder();
            sb.AppendLine($"MineSweeper({Size},{Size})");
            for (int j = 0; j < Size; j++)
            {
                for (int i = 0; i < Size; i++)
                {
                    Point point = Field[i, j];
                    switch (point.Status)
                    {
                        case Status.Hidden:
                            sb.Append("β–‘β–‘ ");
                            break;
                        case Status.Shown:
                            if (point.Contents == Contents.Treasure)
                            {
                                sb.Append(" T ");
                            }
                            else if (point.Contents == Contents.Bullets)
                            {
                                sb.Append(" B ");
                            }
                            else
                            {
                                sb.Append("   ");
                            }
                            break;
                        case Status.Marked:
                            sb.Append("[] ");
                            break;
                        default:
                            throw new NotSupportedException(point.Status.ToString());
                    }
                }
                sb.AppendLine();
            }
            return sb.ToString();
        }
    }

    static class Program
    {
        static void Main(string[] args)
        {
            Grid grid = new Grid(8, 6, 9);
            Console.WriteLine(grid);
        }
    }

The result is all the cells are hidden

```
MineSweeper(8,8)
β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘
β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘
β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘
β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘
β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘
β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘
β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘
β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘ β–‘β–‘
```

But if I run with `Status status = Status.Shown;` in the constructor for `Grid` then the result is (as expected) random placement of treasures and bullets scattered across the otherwise blank grid:

```
MineSweeper(8,8)
B B T T B T B B T T T B B B B
```
I had a surprisingly hard time getting my rspec + capybara + chrome (headful) system specs to run with chrome headless. ### What worked for me Add this to the very end of `spec/rails_helper.rb` (not inside any other block/code, just at the very end of the file on its own). ```rb # spec/rails_helper.rb if ENV['HEADLESS'] == "true" RSpec.configure do |config| config.before(:each, type: :system) do driven_by :selenium_chrome_headless end end end ``` Now you can do ```sh # run specs in chrome headless HEADLESS=true bundle exec rspec # run specs in chrome normally (headful) bundle exec rspec ``` ### Note Note that if you don't care about having the option and just want headless all the time, just plonk this at the end of `spec/rails_helper.rb` ```rb RSpec.configure do |config| config.before(:each, type: :system) do driven_by :selenium_chrome_headless end end ```
You are using the wrong compiler. Cross-compile the module with the ARM toolchain by setting `ARCH` and `CROSS_COMPILE` in the Makefile:

    obj-m += hello_module.o

    CROSS=/home/thanh/work/bbb/bb-kernel/dl/gcc-8.5.0-nolibc/arm-linux-gnueabi/bin/arm-linux-gnueabi-
    KERNEL=/home/thanh/work/bbb/bb-kernel/KERNEL/

    all:
    	make ARCH=arm CROSS_COMPILE=${CROSS} -C ${KERNEL} M=$(PWD) modules

    clean:
    	make -C ${KERNEL} M=$(PWD) clean
I am using a `react-bootstrap` `Table`. The `filteredData` function should return the records from the data that match `searchTerm`, but I am getting empty data on search now. How can I loop through each row object and return `true` if any field matches the search term?

```javascript
import { Table, Pagination, Form } from 'react-bootstrap';
```

```javascript
const dummyData = [
  { consumerID: 1, consumerName: 'John', emailId: 25, mobileno: '234567', vehiclePlte: 'abcg', SchemeCashback: 'abcg', Cashbackalance: 'abcg', status: 'approved', createdDate: new Date('2024-03-18'), edit: 'edite', delete: 'delete' },
  { consumerID: 1, consumerName: 'John', emailId: 25, mobileno: '234567', vehiclePlte: 'abcg', SchemeCashback: 'abcg', Cashbackalance: 'abcg', status: 'approved', createdDate: new Date('2024-03-17'), edit: 'edite', delete: 'delete' },
  { consumerID: 1, consumerName: 'John', emailId: 25, mobileno: '234567', vehiclePlte: 'abcg', SchemeCashback: 'abcg', Cashbackalance: 'abcg', status: 'approved', createdDate: new Date('2024-03-16'), edit: 'edite', delete: 'delete' },
  { consumerID: 1, consumerName: 'John', emailId: 25, mobileno: '234567', vehiclePlte: 'abcg', SchemeCashback: 'abcg', Cashbackalance: 'abcg', status: 'approved', createdDate: new Date('2024-03-15'), edit: 'edite', delete: 'delete' },
  { consumerID: 1, consumerName: 'John', emailId: 25, mobileno: '234567', vehiclePlte: 'abcg', SchemeCashback: 'abcg', Cashbackalance: 'abcg', status: 'approved', createdDate: new Date('2024-03-14'), edit: 'edite', delete: 'delete' },
  { consumerID: 1, consumerName: 'John', emailId: 25, mobileno: '234567', vehiclePlte: 'abcg', SchemeCashback: 'abcg', Cashbackalance: 'abcg', status: 'approved', createdDate: new Date('2024-03-13'), edit: 'edite', delete: 'delete' },
  { consumerID: 1, consumerName: 'John', emailId: 25, mobileno: '234567', vehiclePlte: 'abcg', SchemeCashback: 'abcg', Cashbackalance: 'abcg', status: 'approved', createdDate: new Date('2024-03-12'), edit: 'edite', delete: 'delete' },
  { consumerID: 1, consumerName: 'John', emailId: 25, mobileno: '234567', vehiclePlte: 'abcg', SchemeCashback: 'abcg', Cashbackalance: 'abcg', status: 'approved', createdDate: new Date('2024-03-11'), edit: 'edite', delete: 'delete' },
  { consumerID: 1, consumerName: 'John', emailId: 25, mobileno: '234567', vehiclePlte: 'abcg', SchemeCashback: 'abcg', Cashbackalance: 'abcg', status: 'approved', createdDate: new Date('2024-03-19'), edit: 'edite', delete: 'delete' },
  { consumerID: 1, consumerName: 'John', emailId: 25, mobileno: '234567', vehiclePlte: 'abcg', SchemeCashback: 'abcg', Cashbackalance: 'abcg', status: 'approved', createdDate: new Date('2024-03-22'), edit: 'edite', delete: 'delete' },
  { consumerID: 1, consumerName: 'John', emailId: 25, mobileno: '234567', vehiclePlte: 'abcg', SchemeCashback: 'abcg', Cashbackalance: 'abcg', status: 'approved', createdDate: new Date('2024-03-24'), edit: 'edite', delete: 'delete' },
  // Add more dummy data as needed
];

const [data, setData] = useState(dummyData);
```

```javascript
const filteredData = useMemo(() => {
  return data.filter(row =>
    Object.entries(row).every(([key, value]) => {
      if (key === 'createdDate') {
        // Check if createdDate is null or matches the search date
        return !searchDate || value.toISOString().split('T')[0] === searchDate;
      }
      return value.toString().toLowerCase().includes(searchTerm.toLowerCase());
    })
  );
}, [data, searchTerm, searchDate]);
```

```javascript
<input
  type="text"
  placeholder="Search..."
  value={searchTerm}
  className='rounded border border-1 border-solid border-dark'
  onChange={(e) => setSearchTerm(e.target.value)}
/>
```
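Since the goal is "return true if any field matches", `Array.prototype.some` over the entries is the natural shape; the `every` in the question keeps a row only when *every* field contains the term, which would explain empty results. A sketch as a plain function outside React (the name `rowMatches` is my own, not from any library):

```javascript
// A row matches when the date filter passes AND any non-date field contains the term.
function rowMatches(row, searchTerm, searchDate) {
  const dateOk = !searchDate ||
    (row.createdDate && row.createdDate.toISOString().split('T')[0] === searchDate);
  const textOk = !searchTerm ||
    Object.entries(row).some(([key, value]) =>
      key !== 'createdDate' &&
      String(value).toLowerCase().includes(searchTerm.toLowerCase())
    );
  return dateOk && textOk;
}

console.log(rowMatches({ consumerName: 'John', status: 'approved' }, 'john', '')); // true
console.log(rowMatches({ consumerName: 'John', status: 'approved' }, 'xyz', ''));  // false
```

Inside the `useMemo` this would replace the `every` callback, keeping the `createdDate` branch separate so the date filter stays an AND condition.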
Creating a Chrome extension, but display text from a JavaScript file is not showing up in the HTML display; the HTML is the innerHTML of another HTML file
|javascript|html|google-chrome-extension|error-handling|
You must make `ads.txt` reachable on the server so Google can verify authorization for your website to serve ads. If your site uses a URL rewrite rule in `web.config`, exclude `/ads.txt` from it so the file is still served directly. For example:

```
<add input="{REQUEST_URI}" pattern="(/ads.txt)" negate="true" />
```
I'd like to make a manual multiple login. There are 3 user types: Customer, Seller, and Admin. Now I'm testing the Customer login, but this error appeared. I read the Laravel documentation and asked ChatGPT, but the same error is still showing. Could somebody help me fix this error?

# Error Message

    Illuminate\Auth\SessionGuard::__construct(): Argument #2 ($provider) must be of type Illuminate\Contracts\Auth\UserProvider, null given, called in /Users/yuto/18tha-gj_mall/vendor/laravel/framework/src/Illuminate/Auth/AuthManager.php on line 127

# What I did was

1. Make 2 models, Users/Customer and Users/Seller
2. Modify config/auth.php
3. Create a new function "authenticate" on Auth/LoginController

# Customer Model

    <?php

    namespace App\Models\Users;

    use Illuminate\Contracts\Auth\Authenticatable;
    use Illuminate\Database\Eloquent\Factories\HasFactory;
    use Illuminate\Database\Eloquent\Model;
    use Illuminate\Auth\Authenticatable as AuthenticableTrait;

    class Customer extends Model implements Authenticatable
    {
        use HasFactory, AuthenticableTrait;

        protected $fillable = [
            'first_name',
            'last_name',
            'email',
            'password',
            'payment_id',
        ];
    }

# config/auth.php

    <?php

    return [

        /*
        |--------------------------------------------------------------------------
        | Authentication Defaults
        |--------------------------------------------------------------------------
        */

        'defaults' => [
            'guard' => 'web',
            'passwords' => 'users',
        ],

        /*
        |--------------------------------------------------------------------------
        | Authentication Guards
        |--------------------------------------------------------------------------
        */

        'guards' => [
            'web' => [
                'driver' => 'session',
                'provider' => 'users',
            ],
        ],

        /*
        |--------------------------------------------------------------------------
        | User Providers
        |--------------------------------------------------------------------------
        */

        'providers' => [
            'customer' => [
                'driver' => 'eloquent',
                'model' => App\Models\Users\Customer::class,
            ],
            'seller' => [
                'driver' => 'eloquent',
                'model' => App\Models\Users\Seller::class,
            ],
        ],

        /*
        |--------------------------------------------------------------------------
        | Resetting Passwords
        |--------------------------------------------------------------------------
        */

        'passwords' => [
            'customer' => [
                'provider' => 'customer',
                'table' => 'password_reset_tokens',
                'expire' => 60,
                'throttle' => 60,
            ],
            'seller' => [
                'provider' => 'seller',
                'table' => 'password_reset_tokens',
                'expire' => 60,
                'throttle' => 60,
            ],
        ],

        /*
        |--------------------------------------------------------------------------
        | Password Confirmation Timeout
        |--------------------------------------------------------------------------
        */

        'password_timeout' => 10800,

    ];

# LoginController

    <?php

    namespace App\Http\Controllers\Auth;

    use App\Http\Controllers\Controller;
    use Illuminate\Foundation\Auth\AuthenticatesUsers;
    use Illuminate\Support\Facades\Auth;
    use Illuminate\Http\Request;
    use Illuminate\Http\RedirectResponse;

    class LoginController extends Controller
    {
        use AuthenticatesUsers;

        /**
         * Where to redirect users after login.
         *
         * @var string
         */
        protected $redirectTo = '/home';

        /**
         * Create a new controller instance.
         *
         * @return void
         */
        public function authenticate(Request $request): RedirectResponse
        {
            $credentials = $request->validate([
                'email' => ['required', 'email'],
                'password' => ['required'],
            ]);

            if (Auth::guard('customer')->attempt($credentials)) {
                $request->session()->regenerate();
                return redirect()->route('home');
            }

            if (Auth::guard('seller')->attempt($credentials)) {
                $request->session()->regenerate();
                return redirect()->route('home');
            }

            return back()->withErrors([
                'email' => 'The provided credentials do not match our records.',
            ])->onlyInput('email');
        }
    }
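For reference, this exception usually means a guard's `provider` key doesn't resolve: the `web` guard in the config above points at a provider named `users`, and the controller calls `Auth::guard('customer')` and `Auth::guard('seller')`, but none of those three guards/providers line up with the `customer` and `seller` providers that are actually defined. A `guards` section that references the defined providers might look like this (a sketch inferred from the code above, not a verified fix):

```php
'guards' => [
    'customer' => [
        'driver'   => 'session',
        'provider' => 'customer',
    ],
    'seller' => [
        'driver'   => 'session',
        'provider' => 'seller',
    ],
],
```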
Laravel 10: I'm creating a multiple login but having trouble with the authentication configuration
|php|laravel|
I was writing a type safe wrapper over "request-response"-style goroutines using Go generics and stumbled upon a compile error: ```go type WorkerContext[In any, Out any] interface { Process(In) (Out, bool) Destroy() } type Worker[In any, Out any] struct { context WorkerContext[In, Out] in chan In out chan struct{Out; bool} } ``` Here, a goroutine is represented by a `Worker` instance, and the user code that is executed within the goroutine to handle requests is represented by a `WorkerContext` interface (which may potentially hold arbitrary external resources, such as a child process that the goroutine is communicating with). The idea here is that the `WorkerContext.Process()` function returns the result of processing a request, and a flag indicating if the external resource can accept more requests. Both values are transmitted via the channel back to the requesting thread. There, the flag is used to decide whether to put the worker (with its associated goroutine and the context) back into the worker pool or destroy it (creating a new one when the next request comes up). However, I can't seem to declare such a channel: ``` src/taskpool/taskpool.go:24:22: embedded field type cannot be a (pointer to a) type parameter ``` --- Why is that so? The restriction feels kinda arbitrary, and what is the idiomatic way to do what I want to do here?
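As I understand the spec restriction, an embedded field's name and promoted method set come from the concrete type, which a type parameter doesn't fix at compile time, so embedding one is disallowed. One workaround I'd sketch (field and type names below are my own, not from any canonical API) is to replace the anonymous `struct{Out; bool}` with a named generic struct whose type parameter is an ordinary field:

```go
package main

import "fmt"

// result replaces the anonymous struct{Out; bool}: ordinary named
// fields avoid embedding a type parameter.
type result[Out any] struct {
	val Out
	ok  bool
}

type Worker[In, Out any] struct {
	in  chan In
	out chan result[Out]
}

func main() {
	w := Worker[int, string]{
		in:  make(chan int, 1),
		out: make(chan result[string], 1),
	}
	w.in <- 42
	n := <-w.in
	w.out <- result[string]{val: fmt.Sprintf("processed %d", n), ok: true}
	r := <-w.out
	fmt.Println(r.val, r.ok) // processed 42 true
}
```

The receiving side then reads `r.ok` to decide whether to return the worker to the pool, exactly as the embedded-struct version intended.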
Go: "embedded type cannot be a type parameter"
|go|generics|
This didn't work for me since /snap/bin was already in the path (Ubuntu 22). Despite the path being OK, I got:

> /system.slice/cron.service is not a snap cgroup

when running anything snap-based from cron. I had to start a dummy process that just loads the Selenium model and loops infinitely; then it worked. A better solution than the one above is to add:

```
loginctl enable-linger user
```

There seems to be some system magic that happens when you log off.
I have a LabVIEW program which contains voltage, current, and power data in the same waveform. I am planning to extract each of them one by one and put them into an array. Currently, I have extracted only the voltage waveform and converted the data into a CSV file. However, the problem is that the time stamp (dt) and the y value end up in the same cell of the CSV file, which requires post-processing in Excel to get a tab-delimited format. I would like to get the end result without post-processing. Can anyone help me correct the code with a delimiter?

[LabVIEW Program for single parameter writing into CSV file](https://i.stack.imgur.com/jYEQK.png)

[CSV file](https://i.stack.imgur.com/z5ffw.jpg)

Moreover, if I want to add two more signals (current and power), how can I append them as another two columns in the CSV file and synchronize them with the relative time (dt)?
Dynamic Nested Multi-Dimensional Arrays in Rust
|json|dictionary|rust|multidimensional-array|vector|
I think it's a UV problem rather than a script problem. You are using the Unity cube object, which is UV-mapped that way (each side overlaps with the other sides). Create a cube in a 3D program like Blender and import it, and you will have something like this:

[![Cube without overlap UV][1]][1]

so when you draw on one side it will not show on the other sides.

  [1]: https://i.stack.imgur.com/vmvaJ.png
I'm encountering an issue with React Router v6 where a child route component (<List />) is not updating when navigating from a parent route (~diary) to the child route (~diary/list). Here's an overview of my setup: **Route Configuration:** index.tsx ``` const router = createBrowserRouter([ { path: "/", element: <Root />, children: [ { path: "react", element: <React /> }, { path: "diary", element: <Diary />, children: [{ path: "list", element: <List /> }], }, ], }, ]); ``` **Components:** diary.tsx ``` import { ReactElement } from "react"; import { useNavigate } from "react-router-dom"; export const Diary = (): ReactElement => { const navigate = useNavigate(); return ( <div> <h2>Welcome to Express Diary</h2> <p>Record your diary life</p> <button onClick={() => navigate("/diary/list")}> Open your Express Diary </button> </div> ); }; ``` list.tsx ``` import { ReactElement } from "react"; export const List = (): ReactElement => { return ( <div> <h3>This is the Diary List!</h3> </div> ); }; ``` **Issue:** When navigating from ~diary to ~diary/list, the URL changes correctly, but the <List /> component does not update/render. There are no errors in the console, and navigation logic seems correct. I've ensured that all components are correctly imported, and there are no conflicts with styles or components. **Expected Behavior:** When navigating from ~diary to ~diary/list, I expect the <List /> component to update/render according to the route change.
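One detail worth checking against the code above: with nested routes, a child route's element renders through the parent's `<Outlet />`, and the `Diary` component shown never renders one, so `/diary/list` matches but has nowhere to appear. A sketch of `diary.tsx` with an outlet (same names as above, untested):

```jsx
import { ReactElement } from "react";
import { Outlet, useNavigate } from "react-router-dom";

export const Diary = (): ReactElement => {
  const navigate = useNavigate();
  return (
    <div>
      <h2>Welcome to Express Diary</h2>
      <p>Record your diary life</p>
      <button onClick={() => navigate("/diary/list")}>
        Open your Express Diary
      </button>
      <Outlet /> {/* child routes such as "list" render here */}
    </div>
  );
};
```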
React Router v6: Child route component not updating when navigating from parent route
|reactjs|react-router|react-router-dom|
I made it! I added `alpha=year` to my code and got this:

![Final graph][1]

  [1]: https://i.stack.imgur.com/Lt27G.jpg
So I am new to Rust but recently came across a problem I don't know how to solve: working with nested (multi-dimensional) key-value pair structures in Rust that are dynamically generated based on string splitting. The sample dataset looks something like this:

    Species | Category
    Dog     | Eukaryota, Animalia, Chordata, Mammalia, Carnivora, Canidae, Canis, C. familiaris
    Cat     | Eukaryota, Animalia, Chordata, Mammalia, Carnivora, Feliformia, Felidae, Felinae, Felis, F. catus
    Bear    | Eukaryota, Animalia, Chordata, Mammalia, Carnivora, Ursoidea, Ursidae, Ursus
    ...

The goal would be to split on the comma delimiter and create a map or vector, essentially creating "layers" of nested keys (either as a vector array or as keys leading to a *final value*). From my understanding, Rust has a crate called "serde_json" which can be used to create key-value pairings like so:

    let mut array = Map::new();
    for (k, v) in data.into_iter() {
        array.insert(k, Value::String(v));
    }

As for comma-delimited string splitting, it might look something like this:

    let categories = "a, b, c, d, e, f".split(", ");
    let category_data = categories.collect::<Vec<&str>>();

However, the end goal would be to create a recursively nested map or vector that follows the *Category* column and can ultimately be serialized to a JSON output. How would this be implemented in Rust?

In addition, while we might know the number of rows in the sample dataset, isn't it quite resource-intensive to calculate all the "comma-delimited layers" in the *Category* column just to know the final size of the structure, as required by Rust's memory-safe design when "initializing" an array with a defined size? Would this specifically need to know the maximum number of layers to be doable, or can we build an arbitrarily nested multi-dimensional structure without having to specify a map or vector size up front?

For further reference, in PHP this might be implemented like so:

    $output_array = array();
    foreach($data_rows as $data_row) {
        $temp =& $output_array;
        foreach(explode(', ', $data_row["Category"]) as $key) {
            $temp =& $temp[$key];
        }
        // Check if array is already initialized, if not create new array with new data
        if(!isset($temp)) {
            $temp = array($data_row["Species"]);
        } else {
            array_push($temp, $data_row["Species"]);
        }
    }

How would a similar solution be implemented in Rust? Thanks in advance!
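For comparison with the PHP above, here is one way I'd sketch it in Rust using only the standard library (a recursive enum instead of PHP's untyped arrays; `serde_json`'s `Map`/`Value` has the same recursive shape, so the pattern transfers). Names are illustrative. Note that `BTreeMap` and `Vec` grow dynamically, so there is no need to know the number of layers or rows up front:

```rust
use std::collections::BTreeMap;

// A tree where each level is keyed by one taxonomy rank; the deepest
// level stores the species names.
#[derive(Debug)]
enum Node {
    Branch(BTreeMap<String, Node>),
    Leaf(Vec<String>),
}

fn insert(root: &mut BTreeMap<String, Node>, path: &[&str], species: &str) {
    if path.len() == 1 {
        // Last rank: append the species to the leaf list.
        if let Node::Leaf(list) = root
            .entry(path[0].to_string())
            .or_insert_with(|| Node::Leaf(Vec::new()))
        {
            list.push(species.to_string());
        } // a branch/leaf clash at the same key is silently ignored here
        return;
    }
    // Intermediate rank: descend, creating the branch if missing.
    if let Node::Branch(children) = root
        .entry(path[0].to_string())
        .or_insert_with(|| Node::Branch(BTreeMap::new()))
    {
        insert(children, &path[1..], species);
    }
}

fn main() {
    let rows = [
        ("Dog", "Eukaryota, Animalia, Chordata, Mammalia, Carnivora, Canidae"),
        ("Cat", "Eukaryota, Animalia, Chordata, Mammalia, Carnivora, Felidae"),
    ];
    let mut tree: BTreeMap<String, Node> = BTreeMap::new();
    for (species, category) in rows {
        let path: Vec<&str> = category.split(", ").collect();
        insert(&mut tree, &path, species);
    }
    println!("{:#?}", tree);
}
```

Swapping `BTreeMap<String, Node>` for `serde_json::Map<String, Value>` (with leaves as `Value::Array`) would make the same structure directly serializable to JSON.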
You can use the following function: ```cs int ShiftOneBit(int value, int bit) { return (value & ~bit) | ((value & bit) >> 1); } ``` Then use it like this ```cs x = ShiftOneBit(x, 0b100); x = ShiftOneBit(x, 0b10000); ``` [**dotnetfiddle**][1] [1]: https://dotnetfiddle.net/kgiZs1
I found a simple solution that works reliably. If you assign a unique `GlobalKey` to your `GoogleMap` instance, and update the `GlobalKey` when the app is resumed, then on resume the old (disposed) Maps widget will be replaced with a new one, which is correctly redrawn.

    class _CreateEventViewState extends State<_CreateEventView>
        with WidgetsBindingObserver {
      // Add
      GlobalKey? _googleMapWidgetKey;

      // Add
      @override
      void initState() {
        super.initState();
        WidgetsBinding.instance.addObserver(this); // Add
        _googleMapWidgetKey = GlobalKey(); // Add
      }

      // Add:
      @override
      void didChangeAppLifecycleState(AppLifecycleState state) async {
        if (state == AppLifecycleState.resumed) {
          WidgetsBinding.instance.addPostFrameCallback((_) {
            if (mounted) {
              setState(() => _googleMapWidgetKey = GlobalKey());
            }
          });
        }
      }

      @override
      Widget build(BuildContext context) {
        return GoogleMap(
          key: _googleMapWidgetKey, // Add
          // ...
        );
      }
    }
I'm writing a middleware to validate a JWT token and add the current authenticated user to the request instance for protected routes in Actix Web. Here is my code:

    use std::{
        future::{ready, Ready},
        rc::Rc,
    };

    use actix_web::{
        body::EitherBody,
        dev::{forward_ready, Service, ServiceRequest, ServiceResponse, Transform},
        http::header::AUTHORIZATION,
        Error, HttpMessage, HttpResponse,
    };
    use futures_util::{future::LocalBoxFuture, FutureExt};
    use serde_json::json;

    struct AuthUser {
        id: usize,
        email: Option<String>,
    }

    pub struct JwtAuthChecker;

    impl<S, B> Transform<S, ServiceRequest> for JwtAuthChecker
    where
        S: Service<ServiceRequest, Response = ServiceResponse<B>, Error = Error> + 'static, // update here
        S::Future: 'static,
        B: 'static,
    {
        type Response = ServiceResponse<EitherBody<B>>;
        type Error = Error;
        type InitError = ();
        type Transform = AuthenticationMiddleware<S>;
        type Future = Ready<Result<Self::Transform, Self::InitError>>;

        fn new_transform(&self, service: S) -> Self::Future {
            ready(Ok(AuthenticationMiddleware {
                service: Rc::new(service), // convert S to Rc<S>
            }))
        }
    }

    pub struct AuthenticationMiddleware<S> {
        service: Rc<S>,
    }

    impl<S, B> Service<ServiceRequest> for AuthenticationMiddleware<S>
    where
        S: Service<ServiceRequest, Response = ServiceResponse<B>, Error = Error> + 'static, // update here
        S::Future: 'static,
        B: 'static,
    {
        type Response = ServiceResponse<EitherBody<B>>;
        type Error = Error;
        type Future = LocalBoxFuture<'static, Result<Self::Response, Self::Error>>;

        forward_ready!(service);

        fn call(&self, req: ServiceRequest) -> Self::Future {
            // Do something with the request here
            let auth = req.headers().get(AUTHORIZATION);
            if auth.is_none() {
                let response = json!({
                    "status": false,
                    "message": "Unauthorized"
                });
                let http_res = HttpResponse::Unauthorized().json(response);
                let (http_req, _) = req.into_parts();
                let res = ServiceResponse::new(http_req, http_res);
                // Map to R type
                return (async move { Ok(res.map_into_right_body()) }).boxed_local();
            }
            let jwt_token = get_jwt_token_from_header(auth.unwrap().to_str().unwrap());
            // Clone the service to keep a reference after moving into the async block
            let service = self.service.clone();
            async move {
                // Getting some data here (just demo code for an async function)
                let user = get_some_data(jwt_token).await;
                req.extensions_mut().insert(user);
                // Continue with the next middleware / handler
                let res = service.call(req).await?;
                // Map to L type
                Ok(res.map_into_left_body())
            }
            .boxed_local()
        }
    }

    async fn get_some_data(token: &str) -> AuthUser {
        // will fetch user data from token payload here
        let user = AuthUser {
            id: 1,
            email: Some(String::from("dev@gmail.com")),
        };
        return user;
    }

    fn get_jwt_token_from_header(jwt_token: &str) -> &str {
        let bytes = jwt_token.as_bytes();
        let token_len = jwt_token.len();
        for (i, &item) in bytes.iter().enumerate() {
            if item == b' ' {
                let next_index = i + 1;
                return &jwt_token[next_index..token_len];
            }
        }
        return &jwt_token[0..token_len];
    }

I'm getting confused inside the async block in the `call()` method: how can I validate the JWT token there and put the user instance into the current request instance? I have followed [this tutorial on ginkcode][1] but didn't get it properly.

  [1]: https://ginkcode.com/post/implement-a-middleware-in-actix-web
I'm encountering a problem while implementing navigation within a widget added to the device's home screen using the home_widget package in Flutter. The widget consists of two buttons: a profile button and a settings button.

![Home widget design][1]

Expected Behavior: Upon clicking the profile button, users should be navigated to the app's profile screen. Similarly, clicking the settings button should navigate users to the app's settings screen.

Current Issue: Despite implementing the necessary code to handle navigation within the widget using the home_widget package, I'm facing difficulties in achieving the desired navigation behavior. Clicking on the profile or settings button within the widget does not trigger the expected navigation.

Steps Taken:

1. Utilized the home_widget package to enable widget creation and management on the home screen.
2. Designed the widget to include a profile button.
3. Configured the widget to handle navigation upon clicking the profile button using `HomeWidgetLaunchIntent.getActivity`:

```kotlin
val pendingIntent = HomeWidgetLaunchIntent.getActivity(
    context,
    MainActivity::class.java
)
setOnClickPendingIntent(R.id.btnProfile, pendingIntent)
```

I'm seeking assistance in identifying a solution or workaround to pass data along with the navigation request within the home_widget package.

  [1]: https://i.stack.imgur.com/iu50M.jpg
I'm deploying a zip package as an Azure Function with the use of Terraform. The relevant code fragments are below:

    resource "azurerm_storage_account" "mtr_storage" {
      name                     = "mtrstorage${random_string.random_storage_account_suffix.result}"
      resource_group_name      = azurerm_resource_group.mtr_rg.name
      location                 = azurerm_resource_group.mtr_rg.location
      account_kind             = "BlobStorage"
      account_tier             = "Standard"
      account_replication_type = "LRS"

      network_rules {
        default_action             = "Deny"
        ip_rules                   = ["127.0.0.1", "my.id.addr.here"]
        virtual_network_subnet_ids = [azurerm_subnet.mtr_subnet.id]
        bypass                     = ["AzureServices", "Metrics"]
      }

      tags = {
        environment = "local"
      }
    }

    resource "azurerm_storage_blob" "mtr_hello_function_blob" {
      name                   = "MTR.ListBlobsFunction.publish.zip"
      storage_account_name   = azurerm_storage_account.mtr_storage.name
      storage_container_name = azurerm_storage_container.mtr_hello_function_container.name
      type                   = "Block"
      source                 = "./example_code/MTR.ListBlobsFunction/MTR.ListBlobsFunction.publish.zip"
    }

    resource "azurerm_service_plan" "mtr_hello_function_svc_plan" {
      name                = "mtr-hello-function-svc-plan"
      location            = azurerm_resource_group.mtr_rg.location
      resource_group_name = azurerm_resource_group.mtr_rg.name
      os_type             = "Linux"
      sku_name            = "B1"

      tags = {
        environment = "local"
      }
    }

    resource "azurerm_linux_function_app" "mtr_hello_function" {
      name                       = "mtr-hello-function"
      location                   = azurerm_resource_group.mtr_rg.location
      resource_group_name        = azurerm_resource_group.mtr_rg.name
      service_plan_id            = azurerm_service_plan.mtr_hello_function_svc_plan.id
      storage_account_name       = azurerm_storage_account.mtr_storage.name
      storage_account_access_key = azurerm_storage_account.mtr_storage.primary_access_key

      app_settings = {
        "FUNCTIONS_WORKER_RUNTIME" = "dotnet"
        "AzureWebJobsStorage"      = azurerm_storage_account.mtr_storage.primary_connection_string
        "WEBSITE_RUN_FROM_PACKAGE" = "https://${azurerm_storage_account.mtr_storage.name}.blob.core.windows.net/${azurerm_storage_container.mtr_hello_function_container.name}/${azurerm_storage_blob.mtr_hello_function_blob.name}"
      }

      site_config {
        always_on                              = true
        application_insights_connection_string = azurerm_application_insights.mtr_ai.connection_string

        application_stack {
          dotnet_version              = "8.0"
          use_dotnet_isolated_runtime = true
        }

        cors {
          allowed_origins = ["*"]
        }
      }

      tags = {
        environment = "local"
      }
    }

The zip file is uploaded to the storage account, is downloadable, and has the correct structure (or so I think). The Function App is created as well; however, when I scroll down to see the functions, it tells me there was an error loading functions: `Encountered an error (ServiceUnavailable) from host runtime.` It's probably some configuration error, but I just can't see it...
|sveltekit|
You can simply replace the month names by using the Intl API. The small function below should support nearly all dates formatted like this.

```javascript
function convertLocalizedDateStringToDateObject(strToParse, from) {
  for (let i = 0; i < 12; i++) {
    const date = new Date(new Date().getFullYear(), i, 1);
    const short = date.toLocaleDateString(from, { month: "short" });
    const long = date.toLocaleDateString(from, { month: "long" });
    const shortReplace = date.toLocaleDateString("en-US", { month: "short" });
    const longReplace = date.toLocaleDateString("en-US", { month: "long" });

    const shortMonth = strToParse.replace(short, shortReplace);
    const longMonth = strToParse.replace(long, longReplace);

    if (!isNaN(new Date(shortMonth).valueOf())) {
      return new Date(shortMonth);
    } else if (!isNaN(new Date(longMonth).valueOf())) {
      return new Date(longMonth);
    }
  }
  return new Date(strToParse);
}

// March in Germany is MΓ€rz
console.log(convertLocalizedDateStringToDateObject("MΓ€r 29, 2024", "de-DE"));
console.log(convertLocalizedDateStringToDateObject("24. Dezember 2024", "de-DE"));

// January in Austria is JΓ€nner
console.log(convertLocalizedDateStringToDateObject("JΓ€n 2, 2024", "de-AT"));
console.log(convertLocalizedDateStringToDateObject("7. JΓ€nner 2024", "de-AT"));
```
I tried using the socket.io-client npm package in React Native, but no event is emitted when the socket is initiated. I have tried everything possible, but I still don't get a response on the Node server.

Here is the socket.io setup:

    import { io } from "socket.io-client";

    const socket = io.connect("http://localhost:4000");

    export default socket;

Here is the code initiated when a screen loads:

    useEffect(() => {
      socket.emit('test', 'hello')
    }, [])

Here is the Node server listener:

    socket.on('test', (mssg) => {
      console.log(mssg)
      // socket.emit('mssg', uuid.uuid())
    })
Socket.io not emitting event to node server on react native
|node.js|react-native|socket.io|
I'm currently trying to deploy a TypeScript Hapi.js server to Vercel and facing difficulties with the deployment. I've tried following some Express-with-TypeScript deployment tutorials, since I couldn't find any for Hapi.js. I ended up writing this in my `vercel.json`:

```json
{
  "version": 2,
  "builds": [
    {
      "src": "dist/src/index.js",
      "use": "@vercel/node",
      "config": { "includeFiles": ["dist/src/**"] }
    }
  ],
  "routes": [
    { "src": "/(.*)", "dest": "dist/src/index.js" }
  ]
}
```

And this doesn't work. I've tried changing the src to just `/`, but then only the base route worked and not the other routes. Here is the [Github repo][1], if anyone wants to look at it. Thanks for any help in advance!

[1]: https://github.com/VallenDraa/mock-development-api
Vercel wildcard route's src results in 404 error in Hapi.js backend
We have built several solutions for clients based on Azure Functions and Service Bus to move data between systems. Each solution hosts many (100+) individual functions triggering on individual topics from many different systems. Although some solutions are older, the majority are .NET 8 isolated functions, and any new solution should be based on that. These functions use the Worker.Extensions.ServiceBus bindings to trigger off the queues.

Lately we have experienced issues with a system that is unable to process the amount of data, and we therefore need to throttle requests. How do I throttle messages read from Azure's queues while still using Azure Functions? In other words, how do I make sure only a certain number of messages are processed at a time from select queues?
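For reference, the Service Bus trigger's concurrency can be capped at the host level. A sketch of a `host.json` for the v5 Service Bus extension is below — note the caveats: these limits apply to *every* Service Bus trigger in the function app (so limiting only select queues would likely mean moving those functions into their own app), and the exact setting names should be verified against the extension version in use:

```json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "prefetchCount": 0,
      "maxConcurrentCalls": 2
    }
  }
}
```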
This is my first question here, so thanks in advance to anyone replying to this humble student. I'm trying to define a function using psycopg2 and have split the problem into two stages. First, the parameters necessary for the connection, which I managed to work out by reading the documentation. Now my issue is passing the query through the function. For example:

```python
def databasequery(database, user, password, host, port):
    psycopg2.connect(
        database=database,
        user=user,
        password=password,
        host=host,
        port=port,
    )
    conn = get_connection()
    curr = conn.cursor()

print(databasequery("Select * from customers", "your_database", "your_user", "your_password", "your_host", "your_port"))
```

How can I add the query to pass through the function?
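A minimal sketch of one way to thread the query through: take the SQL string as a parameter, execute it, and return the rows. Here `connect_fn` is an injected stand-in for `psycopg2.connect` (an assumption made purely so the shape can be exercised without a live database — it is not part of the psycopg2 API):

```python
# Sketch: the query is just another argument; the function returns the rows
# instead of printing None. connect_fn stands in for psycopg2.connect so the
# structure can be tested without a real Postgres server.
def databasequery(query, connect_fn, **conn_params):
    conn = connect_fn(**conn_params)   # e.g. psycopg2.connect(database=..., ...)
    try:
        cur = conn.cursor()
        cur.execute(query)
        return cur.fetchall()
    finally:
        conn.close()
```

Called as `databasequery("SELECT * FROM customers", psycopg2.connect, database="your_database", user="your_user", ...)`, the query travels through as a normal argument and the caller gets the result rows back.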
I found the reason for this behavior: `files.exist` relies on running the shell command `test` over SSH ([GitHub][1]). That command is simply not available on pure SFTP servers, and OpenSSH does not allow configuring which commands are available via SFTP. Due to the `chroot`, it's also difficult to allow SSH access from a given list of IPs.

I will never use any code from `Fabric` again. I don't know yet how I can get my code working. Paramiko itself might be better, but it's a lot of work to migrate the code to another framework.

[1]: https://github.com/fabric/patchwork/blob/master/patchwork/files.py#L46
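A sketch of an existence check that needs no shell command at all, so it should work on SFTP-only/chrooted servers. The `sftp` argument is assumed to be a Paramiko-style `SFTPClient` (whose `.stat()` raises `FileNotFoundError` for a missing path); any object with that shape works:

```python
# Sketch: file-existence check over pure SFTP -- no `test`, no shell.
def sftp_exists(sftp, path):
    """Return True if `path` exists on the server, using only the SFTP stat op."""
    try:
        sftp.stat(path)
        return True
    except FileNotFoundError:
        return False
```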
I have the following loop in asm:

```
.LBB5_5:
    vaddpd ymm0, ymm0, ymmword, ptr, [rdi, +, 8*rcx]
    vaddpd ymm1, ymm1, ymmword, ptr, [rdi, +, 8*rcx, +, 32]
    vaddpd ymm2, ymm2, ymmword, ptr, [rdi, +, 8*rcx, +, 64]
    vaddpd ymm3, ymm3, ymmword, ptr, [rdi, +, 8*rcx, +, 96]
    add rcx, 16
    cmp rax, rcx
    jne .LBB5_5
```

This is part of a bigger function which calculates the sum of an `[f64]` array in Rust. I benchmarked this code with the criterion crate and found that `1 000 000 000` doubles take `200 000 000` cycles on my Rocket Lake CPU (i7-11700K).

In various sources I find that the latency of a floating-point addition is 4 cycles on this CPU. That would mean each of the `vaddpd` instructions can only run every 4th cycle, because it carries a dependency on the previous sum, so I could do at most 4 double additions per cycle. My measurement shows 5 additions per cycle. (It uses the `RDTSC` instruction for timing; I am not sure if that can be problematic.)

I mostly want to understand what is going on and test how well I understand the CPU pipeline.
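The reasoning above can be written out as plain arithmetic (the 4-cycle `vaddpd` latency, 4 accumulators, and the benchmark numbers are all taken from the question):

```python
# Predicted ceiling from the dependency-chain argument vs. the measurement.
latency = 4              # cycles until an accumulator's result is ready again
accumulators = 4         # ymm0..ymm3 are independent dependency chains
doubles_per_vaddpd = 4   # one 256-bit ymm register holds 4 x f64

# Each chain can issue one vaddpd every `latency` cycles, so with 4 chains the
# loop sustains at most accumulators/latency = 1 vaddpd per cycle.
vaddpd_per_cycle = accumulators / latency
doubles_per_cycle = vaddpd_per_cycle * doubles_per_vaddpd

measured = 1_000_000_000 / 200_000_000   # doubles per cycle from the benchmark
print(doubles_per_cycle, measured)       # 4.0 5.0
```

The measured 5 doubles per cycle exceeds the predicted 4, which is exactly the discrepancy the question is about.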
|python|matrix|linear-algebra|algebra|