How do you open long and short entries? With `strategy.entry`:

```
strategy.entry(id, direction, qty, limit, stop, oca_name, oca_type, comment, alert_message, disable_alert) → void
```

Note that the ID must match when you invoke `strategy.close("...")`:

```
if long_condition
    strategy.entry("Long", ...)
if close_condition
    strategy.close("Long", ...)
```
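For context, a minimal self-contained sketch of the id pairing (the moving-average conditions here are placeholders I've assumed, not part of the signature above):

```
//@version=5
strategy("Entry/close id demo", overlay=true)

long_condition = ta.crossover(ta.sma(close, 14), ta.sma(close, 28))
close_condition = ta.crossunder(ta.sma(close, 14), ta.sma(close, 28))

if long_condition
    strategy.entry("Long", strategy.long)   // opens a position under the id "Long"
if close_condition
    strategy.close("Long")                  // the same id closes that position
```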
In my work I use both STATA and R. Currently I want to perform a dose-response meta-analysis and use the "dosresmeta" package in R. The following syntax is used:

```
DRMeta <- dosresmeta(
  formula = logRR ~ dose,
  type = type,
  cases = n_cases,
  n = n,
  lb = RR_lo,
  ub = RR_hi,
  id = ID,
  data = DRMetaAnalysis
)
```

When executing this syntax, however, I encounter a problem. The error message appears:

> Error in if (delta < tol) break : missing value where TRUE/FALSE needed.

The reason for this error message is that I am missing some values for the variables "n_cases" and "n", which the authors have not provided. Interestingly, STATA does not require this information for the calculation of the dose-response meta-analysis. Is there a way to perform the analysis in R without requiring the values for "n_cases" and "n"? What can I do if I have not been given these values? I have already asked the authors for the missing values, unfortunately without success. However, I need these studies for the dose-response meta-analysis, so it is not an option to exclude them.
|r|missing-data|meta-analysis|
I've got a page with the layout as in the attached snippet. In the right column there is a scrollable div that has some max-height set. The problem is that on smaller screens a user has to scroll down to the bottom of the page to see the bottom of the scrollable div (and the page is quite long). Is it possible to make sure that the bottom of the div will never be below the current view, so that the whole scrollable div is accessible all the time, without scrolling the page, no matter the size of the screen? For screens with max-width: 991px the right column is hidden.

From the comments: I've tried using `vh`, but if the part *above* the scrollable div (from "Right column" to the button) is taller than 50vh, then setting the right div to be 50vh will put its bottom below the viewport. And the problem is that that header can be 10% of the viewport on big screens but 75% on small screens.

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-css -->

    body, html {
      margin: 0;
      padding: 0;
      height: 100%;
    }

    .background {
      background-color: #f8f8f8; /* Background color for the entire page */
      height: 100%;
    }

    .container {
      display: flex;
      justify-content: space-between; /* Separate the columns */
      margin: 5px; /* Add margin for visual separation */
    }

    .left, .right {
      flex: 0 0 40%;
      background-color: #fff; /* Background color for columns */
      padding: 20px; /* Add padding for content */
    }

    .left {
      margin-top: 10px; /* Add margin-top to separate from content above */
    }

    .right {
      margin-top: 30px;
      margin-left: 20px; /* Add margin-left to separate from left column */
      max-height: 1350px;
      overflow-y: auto; /* Enable vertical scrolling */
    }

<!-- language: lang-html -->

    <body>
      <div class="background">
        <div class="container">
          <div class="left">
            <h2>Left Column</h2>
            <p>
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam a massa bibendum, placerat sapien ultrices, molestie sem. Nam porttitor risus fringilla purus consectetur egestas. Sed elit arcu, eleifend eget velit at, rutrum cursus metus. Suspendisse sollicitudin orci leo, ac tristique nisi porta nec. Etiam at dui at turpis commodo congue nec vel turpis. Aliquam a libero sit amet lorem consectetur aliquet a vehicula nulla. Donec porta suscipit libero, at finibus nisi lacinia id. Quisque mattis magna interdum, tincidunt lacus vel, sodales ligula. Ut volutpat vestibulum justo non semper. Vivamus dictum mollis dignissim. Nullam condimentum venenatis metus sit amet commodo. Suspendisse potenti. Aliquam ac suscipit eros. Etiam metus elit, varius et feugiat quis, pellentesque elementum nibh. Morbi gravida sodales velit. Vestibulum sed turpis diam. Phasellus sed malesuada ante, id ornare augue. Integer faucibus risus sed dolor tincidunt maximus ac gravida libero. Pellentesque malesuada orci non tortor sodales ultrices. Etiam dolor eros, tempus id luctus nec, commodo et purus. Suspendisse lacus nisl, tincidunt eu diam eu, eleifend ultricies ligula. Nam velit justo, fermentum sit amet laoreet ac, consectetur ut dui. Quisque lobortis dictum justo, id fringilla ipsum scelerisque vel. Nulla a molestie sapien. Aenean varius dui lacus, tempor malesuada erat tincidunt luctus. Nam quis consectetur nisi, ut gravida mauris. Vivamus eu placerat magna, nec eleifend nibh. Phasellus iaculis congue dui, sodales dignissim nunc sollicitudin in. Cras ultricies erat id aliquam dignissim. Cras in purus malesuada, vulputate dolor vel, congue dui.
Pellentesque accumsan, leo volutpat molestie tempor, odio libero ultricies ligula, quis aliquam purus neque a elit. Vestibulum sollicitudin placerat ullamcorper. Cras vel lorem eu ipsum sollicitudin rhoncus in et nisl. Cras auctor, elit et gravida facilisis, mauris ante molestie sem, in placerat orci lorem et nisl. Maecenas pellentesque, tellus a mollis posuere, nunc massa viverra nisi, id cursus leo quam non dolor. Suspendisse eget diam porta, aliquam tortor id, convallis enim. Phasellus nec lorem id arcu condimentum lobortis ut eget purus. </p> </div> <div> <h2>Right Column</h2> <p>Here are some elements</p> <button>Some button</button> <div class="right"> <p> Lorem ipsum dolor sit amet, consectetur adipiscing elit. Mauris ut lectus ante. Etiam sollicitudin tortor ac nisl efficitur, a porttitor mauris bibendum. Curabitur placerat massa ligula, a tempor sem ullamcorper ac. Nam porta leo neque, vel molestie nisi ultricies at. Aliquam erat volutpat. Phasellus in augue lacus. Proin mi enim, aliquet eu elementum ut, pellentesque sit amet tellus. Proin neque mauris, molestie sed sem dictum, dictum suscipit ligula. Nulla commodo odio sem, in volutpat ante auctor tempor. Phasellus tincidunt nisi est, at dignissim augue blandit nec. Praesent et magna at libero lacinia mollis sed id neque. Pellentesque risus nibh, iaculis id congue quis, mollis sit amet nisl. Integer ut quam orci. Curabitur imperdiet eleifend ex, id vehicula est tempor vel. Donec gravida sollicitudin augue, vitae vehicula lorem dictum tempor. Donec elementum faucibus nulla, ac euismod odio volutpat sed. Praesent convallis purus sed sem tempor, sit amet tristique leo pulvinar. Fusce id enim at ante imperdiet elementum. Nullam ac odio eu leo vehicula aliquam eget sed elit. Fusce feugiat orci arcu, sagittis faucibus sem porttitor id. Etiam et mi ac justo consequat malesuada ac at risus. Ut viverra, mauris dapibus consectetur ornare, justo metus ullamcorper nisl, eu commodo arcu enim ac ex. Suspendisse facilisis ex eu nunc egestas pharetra. Sed egestas ultricies nisi, ac tincidunt sem consectetur ut. Nulla euismod maximus nisl sed fringilla. Morbi congue, risus ac ultricies malesuada, tellus mauris gravida leo, sit amet tincidunt massa nulla ut diam. Donec porttitor venenatis metus vel commodo. Duis varius mattis orci et tincidunt. Nulla viverra porttitor mi et feugiat. Nullam volutpat est urna, iaculis finibus felis finibus nec. Curabitur ut vestibulum enim. Phasellus at iaculis metus. Nulla ac porta purus. Nullam sit amet mi non massa pharetra convallis a vel urna. Aenean a aliquam nisi. Cras vehicula interdum dui et convallis. Praesent quis justo tempor, semper felis sit amet, finibus arcu. Quisque posuere luctus eros aliquam convallis. Integer dictum et dui tristique blandit. Nam efficitur sodales eros sed vulputate. Aliquam eget ex mattis, congue orci ac, euismod magna. Donec egestas placerat laoreet. Nam a mi purus. Aliquam erat volutpat. Praesent aliquet, nisl at tempus fermentum, justo diam tempor orci, at hendrerit diam felis ac libero. Aliquam erat volutpat. Proin lobortis enim odio, sed mollis sapien semper eu. Etiam porta orci nisi. In hac habitasse platea dictumst. Vestibulum efficitur sed dolor at viverra. Duis consectetur sed eros in volutpat. Donec condimentum dolor non faucibus ultrices. Nunc ornare eros at arcu volutpat hendrerit. Fusce ut varius lacus, sed eleifend mauris. Quisque tortor quam, condimentum ut erat vitae, placerat congue erat. Sed in felis purus. 
Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Aenean sed dictum diam. Etiam ullamcorper turpis at consectetur lobortis. Suspendisse ultrices, purus et facilisis molestie, quam dui placerat lacus, nec tempor turpis felis id elit. Pellentesque vulputate tortor vitae efficitur ullamcorper. Quisque rhoncus erat eu velit eleifend tempus. Etiam non metus vel erat venenatis tincidunt. Vivamus eu sapien sed enim eleifend rhoncus. Pellentesque dignissim purus a arcu venenatis, vel finibus metus tempus. Cras hendrerit feugiat est non semper. Morbi egestas arcu id justo lobortis feugiat. Praesent sollicitudin sollicitudin semper. Maecenas faucibus non nisl vitae luctus. Cras semper non nisl vitae fermentum. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Phasellus eu posuere ipsum. Proin viverra laoreet risus, fermentum interdum risus ultrices vitae. Aliquam ultricies lorem vitae massa gravida, ut dapibus nisi tristique. Phasellus in tellus finibus, volutpat risus ut, bibendum nibh. In a venenatis mi, id eleifend nunc. Donec quis nisi nec erat dictum rhoncus sed nec velit. Nunc viverra nisi quis facilisis imperdiet. Cras id turpis ac nunc pharetra posuere eu quis sem. Suspendisse luctus, ex eget aliquet ultrices, diam magna luctus nisi, maximus facilisis ex lectus vitae enim. Donec varius justo a justo suscipit, ac lacinia urna aliquet. Curabitur elementum vehicula mauris, non consequat mauris dictum nec. Integer convallis pulvinar ex, in accumsan lectus. Vivamus lacinia augue at eros sodales suscipit. Mauris ullamcorper urna nec tristique suscipit. Sed sed felis pretium, ultricies orci sit amet, scelerisque nisl. Fusce laoreet magna ligula, sit amet tristique turpis varius nec. Vivamus quis turpis semper, rhoncus mauris vel, congue enim. Pellentesque nibh nulla, bibendum sed dapibus vitae, scelerisque ullamcorper diam. Donec viverra hendrerit nisi, sed sollicitudin libero iaculis et. Praesent sit amet velit mi. Ut semper libero a finibus pretium. Ut molestie purus eget rutrum blandit. Aenean tincidunt enim a nisi malesuada, viverra semper nibh eleifend. Maecenas tincidunt quis risus a pellentesque. Duis sodales purus dolor, id malesuada lorem dapibus eget. Nam ultricies dignissim erat quis varius. Donec at aliquam enim, ut auctor magna. Duis nunc est, bibendum non nisl vitae, congue blandit erat. Duis bibendum sed neque ut lobortis. Duis nisi lacus, dapibus quis sapien sed, vestibulum volutpat elit. Phasellus ex neque, tincidunt a quam fermentum, tincidunt luctus velit. Sed luctus, quam vel convallis volutpat, mauris lectus sagittis nulla, a vehicula tortor augue sed tellus. </p> <!-- Add more content as needed --> </div> </div> </div> </div> </body> <!-- end snippet -->
In a KQL query displaying Azure Function App logs, to be used in Grafana, we want to have a variable "Show Host Status Messages" in Grafana.

If "$ShowHostStatus" == True, show all messages. If "$ShowHostStatus" == False, show all messages but filter out all messages starting with "Host Status" (where Message !startswith "Host Status").

Would it be possible to use a query similar to this, applying some inline filter for messages inside {Message_Filtered_For_Host_Status}?

```
FunctionAppLogs
| where AppName == "$FunctionApp"
| extend Message = iff(("$ShowHostStatus" == "True"), Message, {Message_Filtered_For_Host_Status})
| project TimeGenerated, FunctionName, Level, Message
```
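Alternatively, a row filter instead of rewriting the Message column might do it; a minimal sketch of that idea (assuming, on my side, that Grafana substitutes "$ShowHostStatus" as a literal string before the query runs):

```
FunctionAppLogs
| where AppName == "$FunctionApp"
| where "$ShowHostStatus" == "True" or Message !startswith "Host Status"
| project TimeGenerated, FunctionName, Level, Message
```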
KQL Query to filter Message based on Grafana Variable
|azure|azure-functions|grafana|kql|
LLM model invocation should be **parallel**:

```
chain1 = prompt | model | outputparser
chain2 = prompt2 | model2 | outputparser
```
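A minimal sketch of one way to run both chains concurrently with `RunnableParallel` (the input key "topic" is only an assumed placeholder; use whatever variables your prompts expect):

```python
from langchain_core.runnables import RunnableParallel

# chain1 and chain2 as defined above; both are invoked concurrently
# on the same input, and the outputs are collected into one dict.
parallel_chain = RunnableParallel(first=chain1, second=chain2)

result = parallel_chain.invoke({"topic": "parallel LLM calls"})
# result == {"first": <chain1 output>, "second": <chain2 output>}
```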
How to invoke multiple LLM models in a single chain, or invoke multiple LLM models in parallel, in LangChain?
|fastapi|openai-api|langchain|large-language-model|azure-openai|
I noticed that Sublime started giving me incomplete results when hovering over a method to go to its source. I figured the indexes had gotten corrupted, so I made a backup folder and moved the indexes there, thinking that Sublime would automatically rebuild the indexes. Nothing. Clicking on Help > Indexing status revealed no workers, no attempt to re-index. I finally remembered that the last time this happened, a preference had gotten changed. Sure enough, when I opened my preferences, I found this at the bottom of the file: ``` // "index_files": false, // HOW DID THIS NONSENSE GET HERE? NEVER, NEVER, NEVER! "index_files": false, ``` The first line is from the first time this happened, and I commented it out. The second line has just appeared out of thin air-- I didn't put it there! My question is, does anybody know why this would happen, and is there any way to prevent it? (Sublime Build 4169 on Windows 10)
If you have the two tables either: both in the Oracle database; or have the MariaDB database accessible from the Oracle database (i.e. via a database link) then you can find all the relationships between the two tables using the query: ```lang-sql WITH cardboard_bounds (id, cardboard_number, start_dt, end_dt, production_line) AS ( SELECT id, cardboard_number, date_time, LEAD(date_time, 1, SYSTIMESTAMP) OVER ( PARTITION BY productionline_number ORDER BY date_time ), productionline_number FROM cardboard ), production_bounds (production_number, posnr, process_active, production_line, start_dt, end_dt) AS ( SELECT PRODUCTION_NUMBER, POSNR, PROCESS_ACTIVE, PRODUCTION_LINE, DATETIME as start_dt, LEAD(DATETIME, 1, SYSTIMESTAMP) OVER ( PARTITION BY production_line ORDER BY DATETIME ASC ) AS end_dt FROM productions ) SELECT p.production_number, p.posnr, p.process_active, p.production_line, GREATEST(p.start_dt, c.start_dt) AS start_dt, LEAST(p.end_dt, c.end_dt) AS end_dt, c.id, c.cardboard_number FROM production_bounds p INNER JOIN cardboard_bounds c ON p.production_line = c.production_line AND p.start_dt < c.end_dt AND p.end_dt > c.start_dt ``` Which, for the sample data (with both tables in Oracle): ```lang-sql CREATE TABLE cardboard ( id int, Cardboard_Number varchar2(100), date_Time TIMESTAMP(0), ProductionLine_Number int ); INSERT INTO cardboard VALUES (2,'WDL-005943998-1', TIMESTAMP '2014-08-05 10:03:32', 1), (4,'spL1ml82N4o',TIMESTAMP '2024-02-29 17:13:54', 1), (5,'WDL-005943998-1',TIMESTAMP '2024-03-01 09:44:42', 1), (6,'WDL-005943998-1',TIMESTAMP '2024-03-01 10:34:57', 1), (7,'950024027237',TIMESTAMP '2024-03-01 10:44:57', 1), (8,'950024027237',TIMESTAMP '2024-03-01 10:52:57', 1), (9,'WDL-005943998-1',TIMESTAMP '2024-03-01 13:58:43', 2), (10,'WDL-005943998-1',TIMESTAMP '2024-03-01 13:58:46', 2), (11,'spL1ml82N4o',TIMESTAMP '2024-03-01 14:09:43', 2), (12,'WDL-005943998-1',TIMESTAMP '2024-03-12 15:48:36', 2); CREATE TABLE Productions ( PRODUCTION_NUMBER NUMBER, POSNR NUMBER, DATETIME TIMESTAMP(0), PROCESS_ACTIVE VARCHAR2(1), PRODUCTION_LINE NUMBER ); BEGIN INSERT INTO Productions(PRODUCTION_NUMBER, POSNR, DATETIME, PROCESS_ACTIVE, PRODUCTION_LINE) VALUES (461793, 1, TO_TIMESTAMP('2014-08-04 09:01:41', 'YYYY-MM-DD HH24:MI:SS'), '1', 1); INSERT INTO Productions(PRODUCTION_NUMBER, POSNR, DATETIME, PROCESS_ACTIVE, PRODUCTION_LINE) VALUES (461793, 1, TO_TIMESTAMP('2014-08-04 11:01:41', 'YYYY-MM-DD HH24:MI:SS'), '0', 1); INSERT INTO Productions(PRODUCTION_NUMBER, POSNR, DATETIME, PROCESS_ACTIVE, PRODUCTION_LINE) VALUES (461618, 2, TO_TIMESTAMP('2014-08-05 10:01:41', 'YYYY-MM-DD HH24:MI:SS'), '1', 1); INSERT INTO Productions(PRODUCTION_NUMBER, POSNR, DATETIME, PROCESS_ACTIVE, PRODUCTION_LINE) VALUES (461619, 2, TO_TIMESTAMP('2014-08-05 10:02:46', 'YYYY-MM-DD HH24:MI:SS'), '1', 2); INSERT INTO Productions(PRODUCTION_NUMBER, POSNR, DATETIME, PROCESS_ACTIVE, PRODUCTION_LINE) VALUES (461618, 2, TO_TIMESTAMP('2014-08-05 10:05:09', 'YYYY-MM-DD HH24:MI:SS'), '0', 1); INSERT INTO Productions(PRODUCTION_NUMBER, POSNR, DATETIME, PROCESS_ACTIVE, PRODUCTION_LINE) VALUES (461619, 2, TO_TIMESTAMP('2014-08-05 10:07:46', 'YYYY-MM-DD HH24:MI:SS'), '0', 2); INSERT INTO Productions(PRODUCTION_NUMBER, POSNR, DATETIME, PROCESS_ACTIVE, PRODUCTION_LINE) VALUES (461818, 1, TO_TIMESTAMP('2014-08-14 22:53:12', 'YYYY-MM-DD HH24:MI:SS'), '1', 1); INSERT INTO Productions(PRODUCTION_NUMBER, POSNR, DATETIME, PROCESS_ACTIVE, PRODUCTION_LINE) VALUES (461818, 1, TO_TIMESTAMP('2014-08-14 23:25:30', 'YYYY-MM-DD HH24:MI:SS'), '0', 1); 
END; / ``` Outputs: | PRODUCTION\_NUMBER | POSNR | PROCESS\_ACTIVE | PRODUCTION\_LINE | START\_DT | END\_DT | ID | CARDBOARD\_NUMBER | | -----------------:|-----:|:--------------|---------------:|:--------|:------|--:|:----------------| | 461618 | 2 | 1 | 1 | 2014-08-05 10:03:32. | 2014-08-05 10:05:09.000000 | 2 | WDL-005943998-1 | | 461618 | 2 | 0 | 1 | 2014-08-05 10:05:09. | 2014-08-14 22:53:12.000000 | 2 | WDL-005943998-1 | | 461818 | 1 | 1 | 1 | 2014-08-14 22:53:12. | 2014-08-14 23:25:30.000000 | 2 | WDL-005943998-1 | | 461818 | 1 | 0 | 1 | 2014-08-14 23:25:30. | 2024-02-29 17:13:54.000000 | 2 | WDL-005943998-1 | | 461818 | 1 | 0 | 1 | 2024-02-29 17:13:54. | 2024-03-01 09:44:42.000000 | 4 | spL1ml82N4o | | 461818 | 1 | 0 | 1 | 2024-03-01 09:44:42. | 2024-03-01 10:34:57.000000 | 5 | WDL-005943998-1 | | 461818 | 1 | 0 | 1 | 2024-03-01 10:34:57. | 2024-03-01 10:44:57.000000 | 6 | WDL-005943998-1 | | 461818 | 1 | 0 | 1 | 2024-03-01 10:44:57. | 2024-03-01 10:52:57.000000 | 7 | 950024027237 | | 461818 | 1 | 0 | 1 | 2024-03-01 10:52:57. | 2024-03-28 12:20:50.832254 | 8 | 950024027237 | | 461619 | 2 | 0 | 2 | 2024-03-01 13:58:43. | 2024-03-01 13:58:46.000000 | 9 | WDL-005943998-1 | | 461619 | 2 | 0 | 2 | 2024-03-01 13:58:46. | 2024-03-01 14:09:43.000000 | 10 | WDL-005943998-1 | | 461619 | 2 | 0 | 2 | 2024-03-01 14:09:43. | 2024-03-12 15:48:36.000000 | 11 | spL1ml82N4o | | 461619 | 2 | 0 | 2 | 2024-03-12 15:48:36. | 2024-03-28 12:20:50.832254 | 12 | WDL-005943998-1 | If you want to search for active rows with a specific `cardboard_number` then add those filters: ```lang-sql WITH cardboard_bounds (id, cardboard_number, start_dt, end_dt, production_line) AS ( SELECT id, cardboard_number, date_time, LEAD(date_time, 1, SYSTIMESTAMP) OVER ( PARTITION BY productionline_number ORDER BY date_time ), productionline_number FROM cardboard ), production_bounds (production_number, posnr, process_active, production_line, start_dt, end_dt) AS ( SELECT PRODUCTION_NUMBER, POSNR, PROCESS_ACTIVE, PRODUCTION_LINE, DATETIME as start_dt, LEAD(DATETIME, 1, SYSTIMESTAMP) OVER ( PARTITION BY production_line ORDER BY DATETIME ASC ) AS end_dt FROM productions ) SELECT p.production_number, p.posnr, p.process_active, p.production_line, GREATEST(p.start_dt, c.start_dt) AS start_dt, LEAST(p.end_dt, c.end_dt) AS end_dt, c.id, c.cardboard_number FROM production_bounds p INNER JOIN cardboard_bounds c ON p.production_line = c.production_line AND p.start_dt < c.end_dt AND p.end_dt > c.start_dt WHERE cardboard_number = 'WDL-005943998-1' AND process_active = 1 ``` Which outputs: | PRODUCTION\_NUMBER | POSNR | PROCESS\_ACTIVE | PRODUCTION\_LINE | START\_DT | END\_DT | ID | CARDBOARD\_NUMBER | | -----------------:|-----:|:--------------|---------------:|:--------|:------|--:|:----------------| | 461618 | 2 | 1 | 1 | 2014-08-05 10:03:32. | 2014-08-05 10:05:09.000000 | 2 | WDL-005943998-1 | | 461818 | 1 | 1 | 1 | 2014-08-14 22:53:12. | 2014-08-14 23:25:30.000000 | 2 | WDL-005943998-1 | [fiddle](https://dbfiddle.uk/jHkQWqLt)
Well, the current version of XSLT is 3.0; there you could use `xsl:where-populated` as follows:

```
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    version="3.0"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    exclude-result-prefixes="#all"
    expand-text="yes">

  <xsl:strip-space elements="*"/>
  <xsl:output indent="yes"/>

  <xsl:mode on-no-match="shallow-copy"/>

  <xsl:key name="lov" match="ValueGroup[@AttributeID='tec_att_ignore_basic_data_text']/Value" use="@QualifierID" />

  <xsl:template match="ValueGroup[@AttributeID='prd_att_description']/Value[key('lov', @QualifierID)/@ID='Y']"/>

  <xsl:template match="ValueGroup[@AttributeID='prd_att_description']">
    <xsl:where-populated>
      <xsl:next-match/>
    </xsl:where-populated>
  </xsl:template>

</xsl:stylesheet>
```

I don't think there is a direct equivalent in XSLT 1/2, but for your use case I think you basically want the empty template:

```
<xsl:template match="ValueGroup[@AttributeID='prd_att_description'][not(Value[not(key('lov', @QualifierID)/@ID='Y')])]"/>
```
I am a bit stuck with writing a regex in JavaScript. I want to write a regex to match a string that is

- if it contains only digits, consisting of **NOT ONLY** '0's (i.e. not just zeros);
- if it doesn't only contain digits, starting with one of these letters: t, y, s, j, o, d;

Test cases for matching:

- "yes"
- "1"
- "1010"
- " 0 "

Test cases for not matching:

- "no"
- "0"
- "00"
- ""

I have written this so far, but it isn't quite right, as it doesn't pass all the test cases, and I don't know what is wrong:

```
return value.match(/!(^0*$)|(^[tysjod])/i);
```

Can I wrap `^` and `$` in the brackets like this? Just wondering...
I have a problem with a script when trying to run a flow in Power Automate. It was working last week, but now for some reason it is not working and returns this error:

```
{
  "error": {
    "code": 502,
    "source": "flow-apim-msmanaged-na-westus2-01.azure-apim.net",
    "clientRequestId": "80fc1432-3104-42df-b926-6e57c5c04aa4",
    "message": "BadGateway",
    "innerError": {
      "message": "We were unable to run the script. Please try again.\nOffice JS error: Line 11: Range clear: The request failed with status code of 404.\r\nclientRequestId: 80fc1432-3104-42df-b926-6e57c5c04aa4",
      "logs": []
    }
  }
}
```

The code looks like this:

```
function main(workbook: ExcelScript.Workbook, wsName: string, startCell: string, strArr: string) {
  // Convert the strArr to an array
  let newDataArr: string[][] = JSON.parse(strArr);

  // Declare and assign the worksheet
  let ws = workbook.getWorksheet(wsName);

  // Clear the existing data in the worksheet
  let clearRange = ws.getUsedRange();
  clearRange.clear(ExcelScript.ClearApplyTo.contents);

  // Paste the new data into the worksheet
  let dataRng = ws.getRange(startCell).getAbsoluteResizedRange(newDataArr.length, newDataArr[0].length);
  dataRng.setValues(newDataArr);
}
```

I don't know what the problem with this is. I would like to clear the data in the worksheet before pasting the new data.
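For reference, a guarded version I'm considering (based on my assumption that the 404 from `Range clear` happens when the sheet is already empty, so `getUsedRange()` returns `undefined`):

```typescript
function main(workbook: ExcelScript.Workbook, wsName: string, startCell: string, strArr: string) {
  // Convert the strArr to an array
  const newDataArr: string[][] = JSON.parse(strArr);
  const ws = workbook.getWorksheet(wsName);

  // getUsedRange() returns undefined on an empty sheet, so guard before clearing
  const usedRange = ws.getUsedRange();
  if (usedRange) {
    usedRange.clear(ExcelScript.ClearApplyTo.contents);
  }

  // Paste the new data into the worksheet
  const dataRng = ws.getRange(startCell).getAbsoluteResizedRange(newDataArr.length, newDataArr[0].length);
  dataRng.setValues(newDataArr);
}
```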
Hi, I'm new to Next.js (v14). I want to implement a multi-page form using react-hook-form. However, it seems that `useForm` is executed every time, and the `defaultValues` are re-applied on page routing (clicking `<Link to=XX>`). How can I keep the form data across multiple pages? Please help me. Here is my code.

_app.tsx

```tsx
return (
  <div>
    <FormProvider>
      <Component {...pageProps} />
    </FormProvider>
  </div>
);
```

FormProvider.tsx

```tsx
export const FormProvider = ({
  children,
}: {
  children: React.ReactNode;
}) => {
  const defaultValues: MyForm = { name: '', description: '' };
  const form = useForm({
    defaultValues,
    resolver: zodResolver(MyFormSchema),
  });
  const onSubmit = () => console.log("something to do");

  return (
    <Form {...form}>
      <form onSubmit={form.handleSubmit(onSubmit)}>{children}</form>
    </Form>
  );
};
```
**Print Binary for Any Datatype**

```c
#include <stdio.h>

// Assumes little endian
void printBits(size_t const size, void const * const ptr)
{
    unsigned char *b = (unsigned char*) ptr;
    unsigned char byte;
    int i, j;

    for (i = size-1; i >= 0; i--) {
        for (j = 7; j >= 0; j--) {
            byte = (b[i] >> j) & 1;
            printf("%u", byte);
        }
    }
    puts("");
}
```

Test:

```c
#include <limits.h>

int main(int argc, char* argv[])
{
    int i = 23;
    unsigned int ui = UINT_MAX;
    float f = 23.45f;

    printBits(sizeof(i), &i);
    printBits(sizeof(ui), &ui);
    printBits(sizeof(f), &f);
    return 0;
}
```
I am practicing Bayesian modeling and trying to translate WINBUGS models into JAGS in R. I have looked at the JAGS manual and still haven't been able to work out this one, which gives the error: "Possible directed cycle involving some or all of the following nodes". The example is taken from the paper: <https://onlinelibrary.wiley.com/doi/full/10.1002/jrsm.1112>

Can anyone give me some directions, please?

```
model{
for(i in 1:n) {
# Calculate probabilities:
P[i, 1, 1] <- p[i, 1]
P[i, 1, 2] <- P[i, 1, 1] + (1-p[i,1])*p[i,2]
P[i, 1, 3] <- P[i, 1, 2] + (1-p[i,1])*(1-p[i,2])*p[i,3]
P[i, 1, 4] <- P[i, 1, 3] + (1-p[i,1])*(1-p[i,2])*(1-p[i,3])*p[i,4]
P[i, 1, 5] <- P[i, 1, 4] + (1-p[i,1])*(1-p[i,2])*(1-p[i,3])*(1-p[i,4])*p[i,5]
P[i, 1, 6] <- P[i, 1, 5] + (1-p[i,1])*(1-p[i,2])*(1-p[i,3])*(1-p[i,4])*(1-p[i,5])*p[i,6]
P[i, 1, 7] <- P[i, 1, 6] + (1-p[i,1])*(1-p[i,2])*(1-p[i,3])*(1-p[i,4])*(1-p[i,5])*(1-p[i,6])*p[i,7]

P[i, 2, 2] <- p[i, 2]
P[i, 2, 3] <- P[i, 2, 2] + (1-p[i,2])*p[i,3]
P[i, 2, 4] <- P[i, 2, 3] + (1-p[i,2])*(1-p[i,3])*p[i,4]
P[i, 2, 5] <- P[i, 2, 4] + (1-p[i,2])*(1-p[i,3])*(1-p[i,4])*p[i,5]
P[i, 2, 6] <- P[i, 2, 5] + (1-p[i,2])*(1-p[i,3])*(1-p[i,4])*(1-p[i,5])*p[i,6]
P[i, 2, 7] <- P[i, 2, 6] + (1-p[i,2])*(1-p[i,3])*(1-p[i,4])*(1-p[i,5])*(1-p[i,6])*p[i,7]

P[i, 3, 3] <- p[i, 3]
P[i, 3, 4] <- P[i, 3, 3] + (1-p[i,3])*p[i,4]
P[i, 3, 5] <- P[i, 3, 4] + (1-p[i,3])*(1-p[i,4])*p[i,5]
P[i, 3, 6] <- P[i, 3, 5] + (1-p[i,3])*(1-p[i,4])*(1-p[i,5])*p[i,6]
P[i, 3, 7] <- P[i, 3, 6] + (1-p[i,3])*(1-p[i,4])*(1-p[i,5])*(1-p[i,6])*p[i,7]

P[i, 4, 4] <- p[i, 4]
P[i, 4, 5] <- P[i, 4, 4] + (1-p[i,4])*p[i,5]
P[i, 4, 6] <- P[i, 4, 5] + (1-p[i,4])*(1-p[i,5])*p[i,6]
P[i, 4, 7] <- P[i, 4, 6] + (1-p[i,4])*(1-p[i,5])*(1-p[i,6])*p[i,7]

P[i, 5, 5] <- p[i, 5]
P[i, 5, 6] <- P[i, 5, 5] + (1-p[i,5])*p[i,6]
P[i, 5, 7] <- P[i, 5, 6] + (1-p[i,5])*(1-p[i,6])*p[i,7]

P[i, 6, 6] <- p[i, 6]
P[i, 6, 7] <- P[i, 6, 6] + (1-p[i,6])* p[i,7]

P[i, 7, 7] <- p[i, 7]

logit(p[i,1])<-mu[1] + delta[study[i],1]
for(k in 2:7) {
p[i,k]<-max((exp(mu[k]+delta[study[i],k])*(1+exp(mu[k-1]+delta[study[i],(k-1)]))-exp(mu[k-1]+delta[study[i],(k-1)])*(1+exp(mu[k]+delta[study[i],k])))/(1+exp(mu[k]+delta[study[i],k])))
}

#Model outcomes; start and end are the start and end of the intervals the deaths occur in (denoted by j and k in the paper)
probs[i]<-P[study[i], start[i], end[i]]
D[i]~dbin(probs[i], N[i])
}

# Generate random effects:
for(i in 1:studies) {
delta[i, 1:7]~ dmnorm(zero[] , Omega[,])
}

#Convert odds to probabilities. theta[1:7] are the parameters of primary interest.
for(i in 1:7) { theta[i]<-exp(mu[i])/(1+exp(mu[i])) } # Priors mu[1]~dnorm(0,0.001) I(, mu[2]) mu[2]~dnorm(0,0.001) I(mu[1], mu[3]) mu[3]~dnorm(0,0.001) I(mu[2], mu[4]) mu[4]~dnorm(0,0.001) I(mu[3], mu[5]) mu[5]~dnorm(0,0.001) I(mu[4], mu[6]) mu[6]~dnorm(0,0.001) I(mu[5], mu[7]) mu[7]~dnorm(0,0.001) I(mu[6], ) Omega[1 : 7 , 1 : 7] ~ dwish(R[ , ], 7) Sigma[1 : 7 , 1 : 7] <- inverse(Omega[ , ]) } DATA <- list(studies=50, D=c(85, 10, 108, 11, 27, 22, 87, 52, 220, 103, 61, 45, 32, 89, 70, 291, 245, 155, 62, 14, 17, 83, 3, 20, 32, 0, 18, 0, 28, 0, 40, 2, 31, 0, 5, 3, 2, 31, 4, 268, 80, 1, 1, 10, 3, 109, 41, 864, 26, 74, 2, 5, 13, 113, 41, 13, 38, 38, 187, 5, 6, 8, 1, 15, 6, 24, 33, 1, 118, 1288, 8, 7, 11, 51, 37, 39, 13, 65, 0, 28, 31, 15, 6, 14, 20, 26, 22, 2, 20, 8, 17, 33, 15, 209, 89, 44, 23, 11, 11, 261, 873, 8, 9, 10, 13, 7, 9, 22, 12, 21, 39, 8, 71, 22, 107), N=c(525, 171, 161, 208, 197, 170, 1039, 584, 532, 312, 209, 148, 103, 278, 1425, 1355, 1064, 819, 149, 106, 92, 510, 101, 98, 78, 197, 197, 106, 106, 155, 155, 103, 101, 108, 108, 103, 103, 101, 554, 550, 241, 140, 139, 138, 128, 259, 150, 1840, 376, 350, 137, 135, 314, 301, 443, 402, 229, 1404, 1366, 180, 175, 113, 105, 145, 108, 102, 78, 993, 992, 5787, 220, 212, 205, 194, 143, 134, 526, 513, 100, 100, 72, 41, 169, 163, 149, 129, 103, 235, 233, 213, 205, 188, 155, 1560, 1351, 120, 108, 85, 74, 4929, 4668, 109, 101, 92, 82, 69, 103, 94, 213, 201, 180, 395, 387, 315, 293), start=c(1, 1, 2, 1, 2, 3, 1, 1, 2, 4, 5, 6, 7, 1, 1, 2, 4, 6, 1, 1, 4, 1, 1, 2, 5, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 3, 1, 2, 1, 2, 1, 1, 2, 4, 6, 1, 1, 1, 1, 2, 1, 2, 1, 2, 1, 3, 1, 1, 2, 1, 2, 1, 2, 1, 1, 2, 5, 1, 2, 1, 1, 2, 3, 4, 6, 1, 1, 2, 1, 2, 4, 6, 1, 2, 4, 5, 6, 1, 2, 4, 5, 6, 7, 1, 3, 1, 1, 3, 5, 1, 2, 1, 2, 3, 4, 5, 1, 2, 1, 2, 4, 1, 2, 1, 2), end=c(3, 1, 6, 1, 2, 3, 3, 1, 3, 4, 5, 6, 7, 4, 1, 3, 5, 7, 2, 3, 5, 4, 1, 4, 7, 1, 3, 1, 5, 1, 4, 1, 5, 1, 2, 3, 1, 5, 1, 7, 7, 1, 3, 5, 7, 5, 3, 4, 1, 3, 1, 2, 1, 4, 2, 3, 3, 1, 3, 1, 4, 1, 3, 2, 1, 4, 7, 1, 7, 3, 1, 2, 3, 5, 7, 2, 1, 3, 1, 3, 5, 7, 1, 3, 4, 5, 6, 1, 3, 4, 5, 6, 7, 2, 3, 4, 2, 4, 5, 1, 3, 1, 2, 3, 4, 5, 1, 3, 1, 3, 5, 1, 5, 1, 7), n=115, study=c(1, 2, 2, 3, 3, 3, 4, 5, 5, 5, 5, 5, 5, 6, 7, 7, 7, 7, 8, 9, 9, 10, 11, 11, 11, 12, 12, 13, 13, 14, 14, 15, 15, 16, 16, 16, 17, 17, 18, 18, 19, 20, 20, 20, 20, 21, 22, 23, 24, 24, 25, 25, 26, 26, 27, 27, 28, 29, 29, 30, 30, 31, 31, 32, 33, 33, 33, 34, 34, 35, 36, 36, 36, 36, 36, 37, 38, 38, 39, 39, 39, 39, 40, 40, 40, 40, 40, 41, 41, 41, 41, 41, 41, 42, 42, 43, 44, 44, 44, 45, 45, 46, 46, 46, 46, 46, 47, 47, 48, 48, 48, 49, 49, 50, 50), zero = c(0,0,0,0,0,0,0) , R = cbind(c(2, 0, 0, 0, 0,0,0 ), c(0, 2, 0, 0, 0,0,0), c(0, 0, 2, 0, 0,0,0), c(0, 0, 0, 2, 0,0,0), c(0, 0, 0, 0, 2,0,0), c(0, 0, 0, 0, 0,2,0), c(0, 0, 0, 0, 0,0,2))) ```
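One direction I have been wondering about, for anyone who can confirm: my assumption is that the WinBUGS-style `I(lower, upper)` truncations on the mutually dependent `mu[k]` priors are what create the cycle in JAGS, and that the ordering could instead be expressed without them, e.g.:

```
# hypothetical rewrite of the mu priors (untested against this model)
for (k in 1:7) { mu0[k] ~ dnorm(0, 0.001) }
mu <- sort(mu0)
```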
Translating WINBUGS model to JAGS model: possible directed cycle
|bayesian|jags|winbugs|
I am using selenium-manager in Python, and I am getting an error in GitLab CI. The error is as follows. [enter image description here](https://i.stack.imgur.com/HT0TH.png) To find the cause, I tried to run selenium-manager directly, but I get the same error. To investigate the cause, I tried to get the json file by "wget" immediately before the run as shown below, and it succeeded. Only selenium-manager failed. [enter image description here](https://i.stack.imgur.com/WdcD3.png) Do you know what the cause is?
selenium-manager failed with "dns error: failed to lookup address information: Name does not resolve"
|selenium-webdriver|dns|seleniummanager|
The following approach should work. In the call `.plot.scatter()`:

- Use `c=` instead of `color=` to tell matplotlib you want to use colormapping
- Use `cmap=` with one of matplotlib's [colormaps](https://matplotlib.org/stable/users/explain/colors/colormaps.html) to tell which colors you want; setting the number of colors in the colormap will give a colorbar with separations (instead of a continuous one)
- Set `colorbar=False`, so we can create the colorbar separately
- Set `vmin=` and `vmax=` to the lowest and highest color number and extend by a half; this will put the tick positions nicely in the center of each color
- Grab the `ax` that is output by `ax = ....plot.scatter(...)`

Then, create the colorbar. It needs a "scalar mappable", which is the matplotlib element that stores the scatter dots with their color information. In this case, it is stored in `ax.collections[0]`. The colorbar can then be accessed to set the ticks and their labels.

Here is some test code:

```python
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np

# a dummy dataframe for testing
f_trains = pd.DataFrame({"Latitude": np.random.normal(0.1, 1, 100).cumsum() + 5,
                         "Longitude": np.random.normal(0.1, 1, 100).cumsum() + 5,
                         "LstPrice": np.random.randint(1, 12, 100)
                         })

# suppose the correspondence between the integers and the labels is given in a dictionary
lstPrice_dict = {1: '1000-2000', 2: '2000-3000', 3: '3000-4000', 4: '4000-5000', 5: '5000-6000',
                 6: '6000-7000', 7: '7000-8000', 8: '8000-9000', 9: '9000-10000', 10: '10000-11000',
                 11: '11000-12000'}
num_colors = len(lstPrice_dict)
cmap = plt.get_cmap('RdYlBu', num_colors)

ax = f_trains.plot.scatter("Latitude", "Longitude",
                           c=f_trains["LstPrice"],
                           cmap=cmap,
                           vmin=min(lstPrice_dict.keys()) - 0.5,
                           vmax=max(lstPrice_dict.keys()) + 0.5,
                           colorbar=False)
cbar = plt.colorbar(ax.collections[0], ax=ax)
cbar.set_ticks(list(lstPrice_dict.keys()))
cbar.set_ticklabels(lstPrice_dict.values())
cbar.ax.set_title("LstPrice", ha='left')
plt.tight_layout()
plt.show()
```

[![pandas scatterplot with colorbar legend][1]][1]

[1]: https://i.stack.imgur.com/fzLBQ.png
A slight modification, because the previous methods didn't work for me. Use a negative velocity for left and a positive one for right:

```
onHorizontalDragEnd: (dragDetail) {
  if (dragDetail.primaryVelocity! < -5) {
    print("left");
  } else if (dragDetail.primaryVelocity! > 5) {
    print("right");
  }
},
```
When the app is loading, in the initial time between the app window opening and the storyboard graphics loading, I see what appears to be the app icon in full-screen mode. This is clipping out of the horizontal boundaries and looks ugly. A second or so later the storyboard images load and it looks fine.

How can I get rid of that initial ugly icon? I'm assuming it's just a larger version of the app icon, but it might be another version of the app image loading in from somewhere.

I have already changed in Info.plist:

From:

```
<key>UILaunchStoryboardName</key>
<string>LaunchScreen.storyboard</string>
```

To:

```
<key>UILaunchStoryboardName</key>
<string>LaunchScreen</string>
```

...without this change there would be a white screen before the storyboard loaded up.
Flutter App iOS Xcode - seeing large project logo before storyboard
|ios|flutter|xcode|
I solved this by: 1. Downloading the `whl` from [here](https://github.com/GoogleCloudPlatform/gcloud-python-wheels/blob/master/wheelhouse/docopt-0.6.2-py2.py3-none-any.whl) 2. Installing it with: ``` python -m pip install docopt-0.6.2-py2.py3-none-any.whl ```
I am facing the problem of not getting the user_id in GA4 reports and explorations for paid users. The page path for these users is a popup path for our payment processor, or a FlutterViewController for our mobile apps. We want to find which page actually made the user who paid a new user.

We get page paths with the number of new users, and on viewing the user segment we get the user_id for many of them. But when we track the number of purchases, the page paths are the popup path or FlutterViewController, and it shows as if almost all of them were not new users, with a few returning users too.

Since we only started sending user_id from all the pages of our website a week ago, we are not seeing many of the old users' user_ids. But we should see it for the last week's users - at least for the users who paid after that, which we are not able to see.

What is the way to get the user_ids of the paid users, and the page path that made them a new user?

I tried using page path as a dimension and new users, returning users and total purchasers as metrics. I did not get the same person as a new user as well as a purchaser for a particular path. All the paths for purchasers were either popup or FlutterViewController. When I tried first source for purchasers, I got our payment processor's URL instead of the page where they first met our pages. We expect to get the user IDs and see the proper page source.

Thanks and Regards, Shubham
Unable to get user ID for paid users on GA4
|google-analytics-4|
You will need to create a credential provider for Android (e.g. a passkey provider). Apps don't implement CTAP directly, they leverage platform credential management APIs that handle transports for you, including FIDO Cross-Device Authentication. All documentation is available here: https://developer.android.com/training/sign-in/credential-provider
With [okhttp3-tls][1] it is possible in one line:

```kotlin
import okhttp3.tls.certificatePem

val certPem = cert.certificatePem()
```

[1]: https://mvnrepository.com/artifact/com.squareup.okhttp3/okhttp-tls
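For example, generating a throwaway self-signed certificate and printing its PEM (a small usage sketch; the common name is arbitrary):

```kotlin
import okhttp3.tls.HeldCertificate
import okhttp3.tls.certificatePem

fun main() {
    // Build a self-signed certificate and emit it in PEM format.
    val heldCertificate = HeldCertificate.Builder()
        .commonName("example.com")
        .build()
    println(heldCertificate.certificate.certificatePem())
}
```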
|typescript|react-native|authentication|expo|supabase|
I seem to have solved it. The login app sends a POST request to a server-side API that accepts the user input, and then the PHP code checks if the password is valid.

Here is the PHP that receives the user input POST, then gives the user input to the function that does the validation:

```
if ($_SERVER['REQUEST_METHOD'] === 'POST' && isset($_POST['username']) && isset($_POST['password'])) {
    $username = $_POST['username'];
    $password = $_POST['password'];
    $login_result = user_login($username, $password);
}
```

Here is the verification logic used:

```
function user_login($username, $password) {
    global $CFG, $DB, $USER;

    if (!$user = $DB->get_record('user', array('username' => $username))) {
        return false; // User not found
    }

    // Verify the password using password_verify() instead of directly comparing hashes
    if (password_verify($password, $user->password)) {
        return true; // Passwords match
    } else {
        return false; // Passwords do not match
    }
}
```
I'm normally very comfortable performing joins in my work/on online practice sets, but sort of blanked when I got this question. Here's the data:

The image for the data set

> ![](https://i.stack.imgur.com/iUmN0.png)

I was asked to perform a select * statement on the two tables, and to write down the result for all 5 kinds of joins. So basically:

```
select * from table_a a left join table_b b on a.column_a = b.column_b
```

For this I answered: `1 1 0 0 1 1 1 1 0 0 null`

Can somebody tell me if the answer is wrong/right? Also, can you please list down the output for all 5 types of joins (left, right, inner, outer, cross) for this, with a good explanation? Any help would be appreciated!

I listed out my answer in the block above.
I've been searching high and low for a solution to this but I've only found lots of answers for Google Apps instead of Excel Online. Not sure if the Google Apps script can be used in Excel Online, but here goes anyway. What I'm looking for is guidance on how to create an 'onEdit' type script for an Excel Online spreadsheet that will insert a timestamp in the row (let's say A10) where a cell in a different column/same row (let's say N10) is changed. I am trying to accomplish this task using Excel Online through Microsoft 365, not a locally installed Excel application. Iteration settings cannot be altered on this version to my knowledge. Could anyone provide a solution that does not involve altering iteration settings? Any help would be HIGHLY appreciated!
|vba|power-automate|
Say we have a list. How can we reorder it so that the total difference between two consecutive elements is the smallest possible?

For example, list = [7, 4, 2, 6]. The differences between two consecutive elements are 3, 2, 4. Therefore, the total difference is 9.

We can rearrange it to become [2, 4, 6, 7] (not sorting). The differences between two consecutive elements are 2, 2, 1. Therefore, the total difference is 5.

I already created a function to calculate the total difference of a list.

```
def total_diff(test_list):
    t = 0
    for i in range(1, len(test_list)):
        # absolute value, so the total matches the differences described above
        t += abs(test_list[i] - test_list[i-1])
    return t
```

What should I do next?
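One observation so far, though I'm not sure it's the intended approach: the rearranged example [2, 4, 6, 7] is itself in ascending order, and for absolute consecutive differences a sorted order always achieves the smallest possible total, because the sorted total telescopes to max - min, and any ordering of the same values must cover at least that span. A minimal sketch:

```python
def reorder_min_total_diff(test_list):
    # Sorted order minimizes the sum of absolute consecutive differences:
    # it telescopes to max(test_list) - min(test_list), which is a lower
    # bound for every possible ordering of the same values.
    return sorted(test_list)

print(reorder_min_total_diff([7, 4, 2, 6]))  # [2, 4, 6, 7] -> total difference 5
```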
Reorder a list so that the total consecutive difference between two elements is the smallest possible
|python|list|reorderlist|
I have two models with an `M2M field`. Because there won't be any updates or deletions (I just need to read data from the db), I'm looking to have a single db hit to retrieve all the required data.

I used `prefetch_related` with `Prefetch` to be able to filter data and also have the filtered objects in a cached list using `to_attr`. I tried to achieve the same result using `annotate` along with `Subquery`, but here I can't understand why the annotated field contains only one value instead of a list of values.

Let's review the code I have:

- some Routes may have more than one special point (Point instances with is_special=True).

### models.py

```python
class Route(models.Model):
    indicator = models.CharField()


class Point(models.Model):
    indicator = models.CharField()
    route = models.ManyToManyField(to=Route, related_name="points")
    is_special = models.BooleanField(default=False)
```

### views.py

```python
routes = Route.objects.filter(...).prefetch_related(
    Prefetch(
        "points",
        queryset=Point.objects.filter(is_special=True),
        to_attr="special_points",
    )
)
```

This will work as expected, but it will result in a separate database query to fetch the points data. In the following code I tried to use Subquery instead, to have a single database hit.

```python
routes = Route.objects.filter(...).annotate(
    special_points=Subquery(
        Point.objects.filter(route=OuterRef("pk"), is_special=True).values("indicator")
    )
)
```

The problem is that in the second approach I will have __either one or no__ special-point indicator when printing `route_instance.special_points`, even though when using prefetch the printed result for the same Route instance shows that there are two more special points.

- I know in the first approach `route_instance.special_points` will contain the Point instances and not their indicators, but that is the problem.
- I checked the SQL code of the Subquery and there is no sign of a limit in the query, as I did not use slicing in the Python code either. But again, the result is limited to either one (if one or more exist) or none if there isn't any special point.

### This is how I check db queries

```python
# Enable query counting
from django.db import connection
connection.force_debug_cursor = True

route_analyzer(data, err)

# Output the number of queries
print(f"Total number of database queries: {len(connection.queries)}")
for query in connection.queries:
    print(query["sql"])

# Disable query counting
connection.force_debug_cursor = False
```
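For completeness, here is an aggregation-based sketch I'm also considering (assuming PostgreSQL, since `ArrayAgg` lives in `django.contrib.postgres`):

```python
from django.contrib.postgres.aggregates import ArrayAgg
from django.db.models import Q

# Collect all special-point indicators per route into a list, in one query.
routes = Route.objects.filter(...).annotate(
    special_points=ArrayAgg(
        "points__indicator",
        filter=Q(points__is_special=True),
    )
)
# route_instance.special_points would then be a list of indicators.
```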
Is there an easy way to open PowerShell in admin mode for the current folder on Windows 11?
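The closest workaround I know of is launching an elevated instance from a regular prompt that is already in the folder (a sketch; substitute `pwsh` for PowerShell 7):

```powershell
# Start an elevated PowerShell that begins in the current folder
Start-Process powershell -Verb RunAs -ArgumentList "-NoExit", "-Command", "Set-Location -LiteralPath '$PWD'"
```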
|powershell|windows-11|
```
Error while searching:- [{'code': 3, 'summary': 'Illegal query', 'message': "Could not set 'ranking.features.query(query_embedding)' to 'embed(e5, What is the Goodwill as of December 2023 in SUBSIDIARIES CONSOLIDATED BALANCE SHEETS?)': Multiple embedders are provided but no embedder id is given. Valid embedders are colbert,e5"}]
```

The problem is only at retrieval: if the query has anything to do with embeddings, it is not able to embed the query string. I tried a simple query to see if there are actually embeddings in the data, and there are. I am not sure why it is able to identify the embedder while indexing a document, but not in a query.

Mapping:

```
self.app_package = ApplicationPackage(name=self.app_name)
# self.app_package.schema.mode = "streaming"
self.meta_variables = ['doc_id','document_name', 'type', 'reportedTime', 'period', 'IsNro', 'pageNumber',
                       'language', 'company_ID', 'company_name', 'company_ticker', 'company_countryCode',
                       'company_quantum', 'company_currency', 'company_fiscalYear', 'company_fyAdjustment']
self.app_package.schema.add_fields(
    Field(
        name="text",
        type="string",
        indexing=["index", "summary"],
        index="enable-bm25"
    ),
    Field(
        name="embedding",
        type="tensor<float>(x[1024])",
        indexing=["input text", "embed e5", "attribute", "summary", "index"],
        attribute=["distance-metric: angular"],
        is_document_field=False
    ),
    Field(
        name="colbert",
        type="tensor<float>(dt{}, x[128])",
        indexing=["input text", "embed colbert", "attribute", "summary", "index"],
        attribute=["distance-metric: angular"],
        is_document_field=False
    ),
    Field(name="doc_id", type="int", indexing=["attribute", "summary"]),
    Field(name="document_name", type="string", indexing=["attribute", "summary"], match=['word']),
    Field(name="type", type="string", indexing=["attribute", "summary"], match=['exact']),
    Field(name="reportedTime", type="string", indexing=["attribute", "summary"], match=['word']),
    Field(name="period", type="string", indexing=["attribute", "summary"], match=['exact']),
    Field(name="IsNro", type="bool", indexing=["attribute", "summary"]),
    Field(name="pageNumber", type="int", indexing=["attribute", "summary"]),
    Field(name="language", type="string", indexing=["attribute", "summary"], match=['exact']),
    Field(name="company_ID", type="int", indexing=["attribute", "summary"]),
    Field(name="company_name", type="string", indexing=["attribute", "summary"], match=['exact']),
    Field(name="company_ticker", type="string", indexing=["attribute", "summary"], match=['exact']),
    Field(name="company_countryCode", type="string", indexing=["attribute", "summary"], match=['exact']),
    Field(name="company_quantum", type="string", indexing=["attribute", "summary"], match=['exact']),
    Field(name="company_currency", type="string", indexing=["attribute", "summary"], match=['exact']),
    Field(name="company_fiscalYear", type="string", indexing=["attribute", "summary"], match=['exact']),
    Field(name="company_fyAdjustment", type="bool", indexing=["attribute", "summary"], match=['exact']),
)

self.app_package.schema.add_rank_profile(
    RankProfile(
        name="default",
        first_phase="closeness(field, embedding)",
        inputs=[("query(query_embedding)", "tensor<float>(x[1024])")],
    )
)

self.app_package.schema.add_rank_profile(
    RankProfile(
        name="combined_ranking",
        first_phase="cos_sim",
        second_phase=SecondPhaseRanking(expression="0.05 * bm25(text) + 0.15 * cos_sim + 0.8 * max_sim", rerank_count=10),
        # global_phase=GlobalPhaseRanking(expression="0.05 * bm25(text) + 0.25 * cos_sim + 0.7 * max_sim"),
        functions=[Function(name="unpack", expression="cell_cast(attribute(colbert), float)"),
                   Function(name="cos_sim", expression="cosine_similarity(query(query_embedding), attribute(embedding),x)"),
                   Function(name="max_sim",
                            expression="""sum(
                                reduce(
                                    sum(
                                        query(qt) * attribute(colbert) , x
                                    ),
                                    max, dt
                                ),
                                qt
                            )/32.0
                            """)],
        inputs=[
            ("query(query_embedding)", "tensor<float>(x[1024])"),
            ("query(qt)", "tensor<float>(qt{}, x[128])")
        ],
        match_features=["max_sim", "cos_sim", "bm25(text)"]
    )
)

self.app_package.components = [
    Component(id="colbert", type="colbert-embedder",
              parameters=[
                  Parameter("transformer-model", {"url": "https://huggingface.co/mixedbread-ai/mxbai-colbert-large-v1/resolve/main/onnx/model.onnx?download=true"}),
                  Parameter("tokenizer-model", {"url": "https://huggingface.co/mixedbread-ai/mxbai-colbert-Large-v1/raw/main/tokenizer.json"})
              ]),
    Component(id="e5", type="hugging-face-embedder",
              parameters=[
                  Parameter("transformer-model", {"url": "https://huggingface.co/BAAI/bge-large-en-v1.5/resolve/main/onnx/model.onnx?download=true"}),
                  Parameter("tokenizer-model", {"url": "https://huggingface.co/BAAI/bge-large-en-v1.5/resolve/main/tokenizer.json"})
              ])
]
```

Query that got this error:

```
def get_top_para_finance(self, query: str, doc_id: int):
    # print(self.vespa_app.application_package.get_model(model_id='colbert'))
    with self.vespa_app.syncio(connections=12) as session:
        start = time.time()
        print(f"Got the Query:- {query}")
        st = time.time()
        # embeddings = self.vespa_obj.embedding_function.embed_query(query)
        print(f"Time to get the Embeddings:- {round(time.time()-st, 2)}s")
        result = self.vespa_app.query(
            yql="select * from sources * where {targetHits: 10}nearestNeighbor(embedding, query_embedding) and doc_id = " + f"{doc_id}",
            query=query,
            ranking="default",
            body={
                "input.query(qt)": f"embed(colbert, {query})",
                "input.query(query_embedding)": f"embed(e5, {query})",
            },
            hits=1,
            # timeout = "1ms"
        )
        assert(result.is_successful())
        end = time.time()
        total_time = round(end-start, 2)
        print(f"Search time:- {total_time}s")
        return self.display_hits_as_df(result, self.vespa_obj.meta_variables+['text']), total_time
```
Vespa not able to identify the embedder id during the query, even when it is in the valid embedders list
|python|database|full-text-search|vespa|vector-database|
I want to develop an AutoCAD plugin. The main code uses C#, but I need a custom class (an object consisting of a text and some …), so I created the custom class with ObjectARX. After compilation this produces .dbx and .lib files, but C# uses .dll files, and I don't know how to use this custom class from C#. I tried hybrid programming, but immediately realised that this was illogical, and I also couldn't find a similar example.
I'm currently developing a Word add-in task pane app. I am trying to retrieve the following information about the currently signed-in user from the task pane app (Office Add-in): username, email address, user group details. I'm trying to fetch the current signed-in user information from Active Directory.

**Scenario**

My document contains who can work on which sections (User1 - section 1, User2 - section 2). I need to know the user_name of whoever is currently logged into the Word application, so that I can authenticate users against the article without asking them to log in each time.

**Flow**

The Article Creator is responsible for creating the document. This process creates the Word document, which is then transferred to the user. The Article Admin is responsible for adding custom content controls into the document, where he implements controls based on the current signed-in user and their user group. The authentication rule is simple here: the document will load content controls according to the logged-in user, and the user can work on the editable content assigned to them.
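A sketch of the direction I have been considering (my assumptions: the add-in would be registered for SSO in Azure AD, and the Graph call shown is a placeholder to illustrate the fields, not working project code):

```typescript
// Get an SSO token for the signed-in Office user, then look up profile details.
async function getSignedInUser(): Promise<void> {
  const token = await Office.auth.getAccessToken({ allowSignInPrompt: true });

  // In production the token should be exchanged server-side (on-behalf-of flow)
  // for a Microsoft Graph token; a direct call is shown only to illustrate
  // which user fields would be fetched.
  const response = await fetch("https://graph.microsoft.com/v1.0/me", {
    headers: { Authorization: `Bearer ${token}` },
  });
  const user = await response.json();
  console.log(user.displayName, user.mail);
}
```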
You can try to use [Ext.XTemplate][1] and refer to [customTplCombo][2] [1]: https://docs.sencha.com/extjs/4.1.3/#!/api/Ext.XTemplate [2]: https://docs.sencha.com/extjs/4.2.0/extjs-build/examples/form/combos.html
I have tried the approach below: ``` table2_desc = spark.sql("DESCRIBE EXTENDED default.table2") table1_desc = spark.sql("DESCRIBE EXTENDED default.table1") print("Extended Description of default.table2:") table2_desc.show(truncate=False) print("Extended Description of default.table1:") table1_desc.show(truncate=False) ``` **Results:** ``` Extended Description of default.table2: +----------------------------+---------------------------------------------------+-------+ |col_name |data_type |comment| +----------------------------+---------------------------------------------------+-------+ |id |int |NULL | |data |string |NULL | | | | | |# Delta Statistics Columns | | | |Column Names |id, data | | |Column Selection Method |first-32 | | | | | | |# Detailed Table Information| | | |Catalog |spark_catalog | | |Database |default | | |Table |table2 | | |Created Time |Fri Mar 29 05:22:19 UTC 2024 | | |Last Access |UNKNOWN | | |Created By |Spark 3.4.1 | | |Type |MANAGED | | |Location |dbfs:/user/hive/warehouse/table2 | | |Provider |delta | | |Owner |root | | |Is_managed_location |true | | |Table Properties |[delta.minReaderVersion=1,delta.minWriterVersion=2]| | +----------------------------+---------------------------------------------------+-------+ ``` ``` Extended Description of default.table1: +----------------------------+---------------------------------------------------+-------+ |col_name |data_type |comment| +----------------------------+---------------------------------------------------+-------+ |id |int |NULL | |data |string |NULL | | | | | |# Delta Statistics Columns | | | |Column Names |id, data | | |Column Selection Method |first-32 | | | | | | |# Detailed Table Information| | | |Catalog |spark_catalog | | |Database |default | | |Table |table1 | | |Created Time |Fri Mar 29 05:20:22 UTC 2024 | | |Last Access |UNKNOWN | | |Created By |Spark 3.4.1 | | |Type |MANAGED | | |Location |dbfs:/user/hive/warehouse/table1 | | |Provider |delta | | |Owner |root | | |Is_managed_location |true | | |Table Properties |[delta.minReaderVersion=1,delta.minWriterVersion=2]| | +----------------------------+---------------------------------------------------+-------+ ``` In the code above, I am getting the extended descriptions of `default.table2` and `default.table1`.
I don’t have sudo access and contacting sys-admin takes a non trivial amount of time. Here is the output of `nvcc -V` nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2024 NVIDIA Corporation Built on Tue_Feb_27_16:19:38_PST_2024 Cuda compilation tools, release 12.4, V12.4.99 Build cuda_12.4.r12.4/compiler.33961263_0 Output of `nvidia-smi` ``` +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 550.67 Driver Version: 550.67 CUDA Version: 12.4 | |-----------------------------------------+------------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+========================+======================| | 0 NVIDIA RTX A6000 Off | 00000000:1C:00.0 Off | Off | | 30% 32C P8 19W / 300W | 23MiB / 49140MiB | 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ | 1 NVIDIA RTX A6000 Off | 00000000:1E:00.0 Off | Off | | 30% 33C P8 20W / 300W | 11MiB / 49140MiB | 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ | 2 NVIDIA RTX A6000 Off | 00000000:3D:00.0 Off | Off | | 30% 32C P8 27W / 300W | 11MiB / 49140MiB | 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ | 3 NVIDIA RTX A6000 Off | 00000000:3E:00.0 Off | Off | | 30% 34C P8 25W / 300W | 11MiB / 49140MiB | 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ | 4 NVIDIA RTX A6000 Off | 00000000:3F:00.0 Off | Off* | |ERR! 49C P5 ERR! 
/ 300W | 11MiB / 49140MiB | 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ | 5 NVIDIA RTX A6000 Off | 00000000:40:00.0 Off | Off | | 30% 31C P8 6W / 300W | 11MiB / 49140MiB | 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ | 6 NVIDIA RTX A6000 Off | 00000000:41:00.0 Off | Off | | 30% 31C P8 16W / 300W | 11MiB / 49140MiB | 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ | 7 NVIDIA RTX A6000 Off | 00000000:5E:00.0 Off | Off | | 30% 29C P8 6W / 300W | 11MiB / 49140MiB | 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ +-----------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=========================================================================================| | 0 N/A N/A 4216 G /usr/libexec/Xorg 9MiB | | 0 N/A N/A 4466 G /usr/bin/gnome-shell 4MiB | | 1 N/A N/A 4216 G /usr/libexec/Xorg 4MiB | | 2 N/A N/A 4216 G /usr/libexec/Xorg 4MiB | | 3 N/A N/A 4216 G /usr/libexec/Xorg 4MiB | | 4 N/A N/A 4216 G /usr/libexec/Xorg 4MiB | | 5 N/A N/A 4216 G /usr/libexec/Xorg 4MiB | | 6 N/A N/A 4216 G /usr/libexec/Xorg 4MiB | | 7 N/A N/A 4216 G /usr/libexec/Xorg 4MiB | +-----------------------------------------------------------------------------------------+ ``` when I try to run ``` cuda_available = torch.cuda.is_available() print("CUDA Available:", cuda_available) if cuda_available: print("CUDA version:", torch.version.cuda) print("cuDNN version:", torch.backends.cudnn.version()) else: print("CUDA not available") ``` I get the following error: /home/user_name/anaconda3/envs/llm2/lib/python3.10/site-packages/torch/cuda/__init__.py:141: UserWarning: CUDA initialization: CUDA driver initialization failed, you might not have a CUDA gpu. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.) return torch._C._cuda_getDeviceCount() > 0 CUDA Available: False CUDA not available Is it possible to fix this error without sudo access ? The two possible solutions are :- 1) Update drivers 2) Build pytorch for cuda 12.4 from source IIRC both of these require sudo access
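One more avenue I can try without sudo (an assumption on my part: the initialization failure could be caused by the single GPU that `nvidia-smi` reports as `ERR!`, and masking it needs no elevated rights):

```python
import os

# Hide GPU 4 (the one showing ERR!) before torch initializes CUDA.
# The variable must be set before the first CUDA call, hence before importing torch.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3,5,6,7"

import torch
print("CUDA Available:", torch.cuda.is_available())
```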
Try this, worked for me:

```css
.BodyCover {
    position: absolute;
    top: 0;
    right: 0;
    bottom: 0;
    left: 0;
}
```

[http://jsfiddle.net/jeL3D/][1]

[1]: http://jsfiddle.net/jeL3D/
I'm updating my Spring Boot application from Java 11 to Java 17, and Spring Boot `2.7.18` to `3.2.4`, and running the tests I'm getting an error.

So, I have an entity called `WorkloadDAO` like this:

```
@Entity
@Table(name = "workloads")
@IdClass(WorkloadId.class)
public class WorkloadDAO {

    @Id
    @NonNull()
    private String id;

    @Id
    @Column(name = "created_at", insertable = false, updatable = false, columnDefinition = "TIMESTAMP")
    private LocalDateTime createdAt;

    // more attributes

    @PrePersist
    protected void onCreate() {
        createdAt = LocalDateTime.now();
    }
```

The `createdAt` attribute will always be received with a null value, and it will be set by the `onCreate()` method annotated with `@PrePersist` before saving the instance in the database. As you can see in the annotation `@IdClass(WorkloadId.class)`, I'm setting WorkloadId as the id of this class. The `WorkloadId` class is like this:

```
public class WorkloadId implements Serializable {
    private String id;
    private LocalDateTime createdAt;
}
```

So far, so good until the test. For example, I'm running this test:

```
@Test
void testSaveWorkload() {
    String id = "someid";
    WorkloadDAO dao = WorkloadDAO.builder()
            .id(id)
            .build();

    WorkloadDAO response = repository.saveAndFlush(dao);

    assertThat(repository.findById(WorkloadId.builder()
            .id(id)
            .createdAt(response.getCreatedAt())
            .build()))
            .get()
            .extracting("createdAt").isNotNull();
}
```

Before updating Java and Spring Boot, this test ran successfully. But now, I'm getting the following error:

```
org.springframework.orm.jpa.JpaSystemException: identifier of an instance of com.adevinta.delivery.workloadapi.data.WorkloadDAO was altered from WorkloadId(id=someid, createdAt=2024-03-28T12:26:34.355098) to WorkloadId(id=someid, createdAt=null)
```

Can you help me understand why I'm getting this? Thanks!

What I expect to happen is for the test to work, without setting the `createdAt` field beforehand.
JPA/Hibernate JpaSystemException: identifier of an instance of X was altered from Y to Z
|java|spring-boot|hibernate|jpa|java-17|
null
I am having trouble getting the desired output using a Jolt transform. My input JsonArray looks like this:

```
[
  {
    "from": [
      { "area1": 1 },
      { "area2": 1 },
      { "area3": 1 }
    ],
    "id": 111,
    "to": "destination1"
  },
  {
    "from": [
      { "area1": 2 },
      { "area2": 2 },
      { "area3": 2 }
    ],
    "id": 222,
    "to": "destination2"
  }
]
```

I am using this Jolt spec to create an entry for every JsonObject in the "from" JsonArray, and I want to include the "id" and "to" keys as well:

```
[
  {
    "operation": "shift",
    "spec": {
      "*": {
        "from": {
          "*": {
            "@": "[&]",
            "@(2,id)": "[&].id",
            "@(2,to)": "[&].to"
          }
        }
      }
    }
  }
]
```

This is the output:

```
[
  [
    {
      "area1" : 1,
      "id" : 111,
      "to" : "destination1"
    },
    {
      "area1" : 2
    }
  ],
  [
    {
      "area2" : 1,
      "id" : 111,
      "to" : "destination1"
    },
    {
      "area2" : 2
    }
  ],
  [
    {
      "area3" : 1,
      "id" : 111,
      "to" : "destination1"
    },
    {
      "area3" : 2
    }
  ]
]
```

My desired output would look something like this:

```
[
  {
    "area1" : 1,
    "id" : 111,
    "to" : "destination1"
  },
  {
    "area1" : 2,
    "id" : 222,
    "to" : "destination2"
  },
  {
    "area2" : 1,
    "id" : 111,
    "to" : "destination1"
  },
  {
    "area2" : 2,
    "id" : 222,
    "to" : "destination2"
  },
  {
    "area3" : 1,
    "id" : 111,
    "to" : "destination1"
  },
  {
    "area3" : 2,
    "id" : 222,
    "to" : "destination2"
  }
]
```

I am not sure why the "id" and "to" keys aren't included from the second JsonObject of the input. Does anyone know how to fix the Jolt spec to get the desired output?
You can use this formula:

```
=LET(items,$A$2:$A$13,
closBalance,$E$2:$E$13,
IFNA(XLOOKUP(INDEX(items,ROW()-1),TAKE(items,ROW()-2),
TAKE(closBalance,ROW()-2),,0,-1),0))
```

`XLOOKUP` searches only the rows above and returns the last match (search mode `-1`).

Be aware that you should add the formulas only from row 7 onward. Problems occur when you add a new item that has a fixed Opening Balance; in that case you have to delete the formula.

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/cHLci.png
Here's how I made it work in my app:

1) Edit scheme > App Language > Arabic (or Right-to-Left Pseudolanguage)

If you only do this step, the RTL layout will appear correctly in the simulator but not on a real device (unless the device's language is RTL).

2) Set the locale language

This will force the app's preferred language to always remain your RTL language.

```swift
@main
struct AppName: App {
    init() {
        UserDefaults.standard.set(["ar"], forKey: "AppleLanguages")
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
```
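A possible companion step, not from the original answer: if UIKit-backed views still render left-to-right after the locale override, forcing the semantic content attribute at the UIKit layer is a common extra measure. A sketch under that assumption, e.g. placed in the same `init()`:

```swift
import UIKit

// Force RTL for UIKit-hosted views too; SwiftUI bridges several of these.
UIView.appearance().semanticContentAttribute = .forceRightToLeft
```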
I am building an HTML/CSS/Javascript application that dynamically embeds facebook posts based on an API that returns a list of facebook post ID's. I loop through the post ID's, generate an embed code with postId inserted for each post, and append to the DOM. Simple stuff. However I need to append some simple text above each post, and I cannot figure out how to do this dynamically in Javascript. **UPDATE 3.28** I know that I cannot append anything to the iframe generated by the embed code, and that I have to append the text before or after the iframe element as mentioned in comments below. I updated my code below so that the additional text is being appended to the parent container for the posts, as opposed to trying to append to the iframe itself. This works as far as getting the text to appear on the page, but I'm getting a list of text elements that appear after all the iframe elements, as opposed to one text element for each iframe. See below for screenshots of what is generated: [![What appears on the page][1]][1] [![Resulting code in inspector][2]][2] <!-- begin snippet: js hide: false console: true babel: false --> <!-- language: lang-js --> // Array of facebook post ID's facebookArr = [{ post_id: "pfbid0cm7x6wS3jCgFK5hdFadprTDMqx1oYr6m1o8CC93AxoE1Z3Fjodpmri7y2Qf1VgURl" }, { post_id: "pfbid0azgTbbrM5bTYFEzVAjkVoa4vwc5Fr3Ewt8ej8LVS1hMzPquktzQFFXfUrFedLyTql" } ]; // Variables to store post ID, embed code, parent container let postId = ""; let embedCode = ""; let facebookContainer = document.getElementById("facebook-feed-container"); $(facebookContainer).empty(); // Loop through data to display posts facebookArr.forEach((post) => { let relativeContainer = document.createElement("div"); postId = post.post_id; postLink = `${postId}/?utm_source=ig_embed&amp;utm_campaign=loading`; // ---> UPDATE: separate container element let iframeContainer = document.createElement("div"); embedCode = `<iframe src="https://www.facebook.com/plugins/post.php?href=https%3A%2F%2Fwww.facebook.com%2FIconicCool%2Fposts%2F${postId}&show_text=true&width=500" width="200" height="389" style="border:none;overflow:hidden" scrolling="no" frameborder="0" allowfullscreen="true" allow="autoplay; clipboard-write; encrypted-media; picture-in-picture; web-share" id=fb-post__${postId}></iframe>`; // Update the DOM iframeContainer.innerHTML = embedCode; // ADDITIONAL TEXT let additionalText = document.createElement("div"); additionalText.className = "absolute"; additionalText.innerText = "additional text to append"; relativeContainer.append(additionalText, iframeContainer); facebookContainer.append(relativeContainer); }); <!-- language: lang-css --> #facebook-feed-container { display: flex; flex-direction: row; row-gap: 1rem; column-gap: 3rem; padding: 1rem; } .absolute { position: absolute; color: red; } <!-- language: lang-html --> <script src="https://code.jquery.com/jquery-3.6.0.min.js"></script> <div id="facebook-feed-container"></div> <!-- end snippet --> [1]: https://i.stack.imgur.com/uzsHz.jpg [2]: https://i.stack.imgur.com/C2rfZ.png
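One hedged note on the snippet above: `position: absolute` resolves against the nearest *positioned* ancestor, and `relativeContainer` never gets a `position` value, so each `.absolute` label is placed relative to the page rather than its own card, which matches the screenshots. A minimal sketch of the likely missing piece (one line inside the existing loop):

```js
// Sketch: make each wrapper its own positioning context so the label
// anchors to its card instead of the page.
relativeContainer.style.position = "relative";
```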
Excel Online: Add timestamp when specific cell changes in the same row
null
Please try this solution, which will avoid overlapping elements:

```
@Preview
@Composable
fun TestPreview() {
    MaterialTheme() {
        Surface {
            Row(Modifier.fillMaxWidth(), horizontalArrangement = Arrangement.SpaceBetween) {
                // Icon(imageVector = Icons.Filled.CropFree, contentDescription = "image")
                Spacer(modifier = Modifier.width(24.dp))
                Text(text = "Text")
                Icon(imageVector = Icons.Filled.CropFree, contentDescription = "image")
            }
        }
    }
}
```

[![Preview][1]][1]

[1]: https://i.stack.imgur.com/VrJx2.png
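A variant of the same idea, not from the original answer: letting the text take the remaining width avoids relying on `SpaceBetween`, assuming the usual androidx.compose imports plus `TextAlign`, and the default 24.dp icon size:

```kotlin
Row(Modifier.fillMaxWidth(), verticalAlignment = Alignment.CenterVertically) {
    Spacer(Modifier.width(24.dp)) // balances the trailing icon's default width
    Text(
        text = "Text",
        modifier = Modifier.weight(1f), // take whatever width remains
        textAlign = TextAlign.Center,
    )
    Icon(imageVector = Icons.Filled.CropFree, contentDescription = "image")
}
```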
To me, the only difference is that the regular operation needs one more instantiation, and the result is held by this new instance. Thus the regular implementation should call the in-place one. [But](https://docs.python.org/3/reference/datamodel.html#emulating-numeric-types): *these (in-place) methods should attempt to do the operation in-place (modifying self) and return the result (which could be, but does not have to be, self). If a specific method is not defined, or if that method returns NotImplemented, the augmented assignment falls back to the normal methods.*

Here, I understand that the standard way is the opposite of mine: `__iadd__` falls back to `__add__`. Why?

A bit of context: the question came up while implementing a Polynomial class, for learning purposes. I have written:

```
class A:
    ...

    def __iadd__(self, other):
        "processing resulting in modification of the attributes of self"
        return self

    def __add__(self, other):
        res = self.copy()  # A.copy() being implemented as well
        res += other
        return res
```
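A small sketch of why the documented direction still covers the simple case: defining only `__add__` already gives a working `+=` through the fallback, while `__iadd__` exists specifically to skip the copy for mutable objects (the class names here are illustrative):

```python
class Poly:
    def __init__(self, coeffs):
        self.coeffs = list(coeffs)

    def __add__(self, other):
        # Out-of-place: pad both coefficient lists and build a new instance.
        size = max(len(self.coeffs), len(other.coeffs))
        a = self.coeffs + [0] * (size - len(self.coeffs))
        b = other.coeffs + [0] * (size - len(other.coeffs))
        return Poly(x + y for x, y in zip(a, b))

p = q = Poly([1, 2])
p += Poly([3])      # no __iadd__ defined: falls back to p = p + Poly([3])
print(p.coeffs)     # [4, 2]
print(q.coeffs)     # [1, 2] -- q still holds the original, untouched object
```

So the fallback direction makes `__add__` alone sufficient, and `__iadd__` becomes a pure optimization (mutate in place, return self) rather than the primitive the other method is built on.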
The web application loads only via the address `ip:8080`. I have a web application deployed on a `Wildfly` server behind an `Nginx` reverse proxy, and I am trying to load this application in the browser by domain name, without success, because the application loads only via the IP address. This is the first time I use a `Wildfly` server and `Nginx`, so I don't have experience, but I have been reading a lot. My application is a `Maven` project in `Java`; everything in the project itself works fine. Below are my configurations.

My `standalone.xml`:

```
<subsystem xmlns="urn:jboss:domain:deployment-scanner:2.0">
    <deployment-scanner name="itcmedbr.war" path="deployments" relative-to="jboss.server.base.dir" scan-interval="5000" auto-deploy-exploded="true" runtime-failure-causes-rollback="${jboss.deployment.scanner.rollback.on.failure:false}"/>
</subsystem>
```

My `nginx` conf:

```
server {
    listen 80;
    listen [::]:80;
    server_name itcmedbr.com www.itcmedbr.com;

    # Load configuration files for the default server block.

    location / {
        root /opt/wildfly/standalone/data/content/39/c296b5d6d608465514ecce78b062f85b5f9001/content;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $server_name:$server_port;
        proxy_set_header Origin http://myipaddress;
        proxy_set_header Upgrade $http_upgrade;

        proxy_pass http://127.0.0.1:8080;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
```

In my mind, I understood that when a deploy is made a file is created, so I looked in the `wildfly` folders and found the path that I specified in the root parameter of my `nginx.conf`, but the problem is still the same, i.e. typing `domain.com` does not load the page. What must I do to solve this problem? Thanks and best regards.

1 - deploy the war file

```
ls /opt/wildfly/standalone/deployments/
itcmedbr.war itcmedbr.war.deployed README.txt
```

2 - configuration of `nginx`:

```
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  _;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        # include /etc/nginx/default.d/*.conf;

        location / {
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }

    # Settings for a TLS enabled server.
#    server {
#        listen       443 ssl http2 default_server;
#        listen       [::]:443 ssl http2 default_server;
#        server_name  _;
#        root         /usr/share/nginx/html;
#
#        ssl_certificate "/etc/pki/nginx/server.crt";
#        ssl_certificate_key "/etc/pki/nginx/private/server.key";
#        ssl_session_cache shared:SSL:1m;
#        ssl_session_timeout 10m;
#        ssl_ciphers PROFILE=SYSTEM;
#        ssl_prefer_server_ciphers on;
#
#        # Load configuration files for the default server block.
#        include /etc/nginx/default.d/*.conf;
#
#        location / {
#        }
#
#        error_page 404 /404.html;
#        location = /40x.html {
#        }
#
#        error_page 500 502 503 504 /50x.html;
#        location = /50x.html {
#        }
#    }
}
```

2.1 - configuration of `itcmedbr.com.conf` in `/etc/nginx/conf.d`:

```
server {
    listen 80;
    listen [::]:80;
    server_name itcmedbr.com www.itcmedbr.com;

    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;

    # Redirect all port 80 (HTTP) requests to port 443 (HTTPS).
    return 301 https://itcmedbr.com$request_uri;

    # Load configuration files for the default server block.

    location / {
        proxy_set_header Origin http://ipaddress;
        proxy_pass http://127.0.0.1:8080;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
```

3 - status of `nginx`:

```
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2024-03-03 18:36:43 -03; 41s ago
```

4 - status of `wildfly.service`:

```
Loaded: loaded (/etc/systemd/system/wildfly.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2024-03-03 18:25:40 -03; 41s ago
```

5 - `firewall` ports:

```
firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: http https ssh
  ports: 8080/tcp 9990/tcp 3306/tcp 80/tcp 443/tcp
  protocols:
  forward: no
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

ss -tunelp | grep 80
tcp LISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=159107,fd=6),("nginx",pid=159105,fd=6)) ino:2538418 sk:4b <->
tcp LISTEN 0 2048 xxx.xxx.xx.xxx:8080 0.0.0.0:* users:(("java",pid=158828,fd=494)) uid:990 ino:2536856 sk:4c <->
tcp LISTEN 0 511 [::]:80 [::]:* users:(("nginx",pid=159107,fd=7),("nginx",pid=159105,fd=7)) ino:2538419 sk:4f v6only:1 <->

ss -tunelp | grep 443
tcp LISTEN 0 511 0.0.0.0:443 0.0.0.0:* users:(("nginx",pid=159107,fd=8),("nginx",pid=159105,fd=8)) ino:2538420 sk:4d <->
tcp LISTEN 0 2048 xxx.xxx.xx.xxx:8443 0.0.0.0:* users:(("java",pid=158828,fd=495)) uid:990 ino:2536857 sk:4e <->
tcp LISTEN 0 511 [::]:443 [::]:* users:(("nginx",pid=159107,fd=9),("nginx",pid=159105,fd=9)) ino:2538421 sk:50 v6only:1 <->
```

Now when I type `itcmedbr.com` I receive "This site can't be reached. The connection was reset.", and if I type `ip:8080` in the browser, the "`Welcome to Wildfly`" page is loaded.
Trying to solve the problem, I did the following steps:

1 - uninstall and remove `Nginx`

2 - install `nginx` with `certbot`:

```
yum install nginx certbot python3-certbot-nginx
```

3 - create my server conf:

```
vi /etc/nginx/conf.d/itcmedbr.conf

server {
    server_name itcmedbr.com;
}
```

4 - configure `certbot`:

```
certbot --nginx
define email
define domain
```

5 - reload and restart `nginx` and `wildfly`

6 - when testing `nginx`, the result is:

```
nginx -t
nginx: [warn] conflicting server name "itcmedbr.com" on 0.0.0.0:80, ignored   <== (I don't know why this occurs or how to solve it)
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
```

7 - resulting `nginx.conf`:

```
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  _;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }

    # Settings for a TLS enabled server.

    server {
        listen       443 ssl http2 default_server;
        listen       [::]:443 ssl http2 default_server;
        server_name  _;
        root         /usr/share/nginx/html;

        ssl_certificate "/etc/ssl/certs/nginx-selfsigned.crt";
        ssl_certificate_key "/etc/ssl/private/nginx-selfsigned.key";
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 10m;
        ssl_ciphers PROFILE=SYSTEM;
        ssl_prefer_server_ciphers on;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }

    server {
        server_name itcmedbr.com; # managed by Certbot
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }

        listen [::]:443 ssl ipv6only=on; # managed by Certbot
        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/itcmedbr.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/itcmedbr.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    }

    server {
        if ($host = itcmedbr.com) {
            return 301 https://$host$request_uri;
        } # managed by Certbot

        listen 80 ;
        listen [::]:80 ;
        server_name itcmedbr.com;
        return 404; # managed by Certbot
    }
}
```

As we can see, `nginx.conf` was updated by certbot and is managed by certbot too. After all of this, I now have a new problem: when I type `ipaddress:9990` in the browser to open the wildfly console, I receive the same message, "This site can't be reached."
Solved: you have to update your app through Expo. Run `eas update`. Just that!
I am getting this error running a Next.js project locally, using next-auth v5:

> Unhandled Runtime Error
> MissingSecret: Missing secret, please set AUTH_SECRET or config.secret. Read more at https://errors.authjs.dev#missingsecret

But my `.env` file has (key changed):

```
AUTH_SECRET="srqoi/XynrMAjcjuMx6T5kGMXRAc+giSoSIxvpESUpA="
```

Also, I can console.log my `process.env` and I get, among others, this _on the server terminal/console_:

```
{
  ...
  AUTH_SECRET: 'srqoi/XynrMAjcjuMx6T5kGMXRAc+giSoSIxvpESUpA='
}
```

but *it's empty in the browser when console.log is executed at runtime*.

What am I doing wrong?

Context: I am getting the error at this line:

```
export const { auth, signIn, signOut } = NextAuth(authConfig);
```

This is my `authConfig`:

```
import { getUser } from "@/services/authService";
import type { NextAuthConfig } from "next-auth";
import Credentials from "next-auth/providers/credentials";
import { z } from "zod";
import dotenv from "dotenv";

dotenv.config();
console.dir(process.env);

export const authConfig = {
  pages: {
    signIn: "/login",
  },
  callbacks: {
    authorized({ auth, request: { nextUrl } }) {
      console.log("running auth/config.js -> callbacks.authorized");
      const isLoggedIn = !!auth?.user;
      const isOnDashboard = nextUrl.pathname.startsWith("/dashboard");
      if (isOnDashboard) {
        if (isLoggedIn) return true;
        return false; // Redirect unauthenticated users to login page
      } else if (isLoggedIn) {
        return Response.redirect(new URL("/dashboard", nextUrl));
      }
      return true;
    },
  },
  providers: [
    Credentials({
      async authorize(credentials) {
        console.log(
          "runnning auth/config.ts -> providers.Credentials.authorize - Received credentials:",
          credentials
        );
        const parsedCredentials = z
          .object({ email: z.string().email(), password: z.string().min(6) })
          .safeParse(credentials);

        if (parsedCredentials.success) {
          const { email, password } = parsedCredentials.data;
          const user = await getUser(email, password);
          if (!user) return null;
          return user;
        }

        console.log("Invalid credentials");
        return null;
      },
    }),
  ],
  secret: process.env.AUTH_SECRET,
} satisfies NextAuthConfig;
```
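Worth flagging for readers hitting the same symptom: the empty browser-side value is expected behavior, not the bug. Next.js only inlines environment variables whose names start with `NEXT_PUBLIC_` into client bundles; server-only variables such as `AUTH_SECRET` are deliberately absent there, so a browser console.log proves nothing about the server-side error. A sketch of the distinction (the variable names are illustrative):

```ts
// Server side (route handlers, middleware, server components):
const secret = process.env.AUTH_SECRET;            // present if .env is loaded

// Client side: only build-time-inlined public variables survive:
const flag = process.env.NEXT_PUBLIC_FEATURE_FLAG; // anything else is undefined
```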
[![vs code suggestion][1]][1] [1]: https://i.stack.imgur.com/oBDKa.png How can I disable just this suggestion?
How to disable this suggestion in VS Code?
This should do the trick ```css .tab-background[selected] { background-color: #dd9933 !important; background-image: none !important; } ```
null
null
null
null
null
I'm trying to test the notifications component. When the user clicks on one of the cards, it should call the method `markNotificationAsRead`, but when I write the test like this:

```
it('should mark notification as read and navigate to correct page when clicked', async () => {
    vi.mock('../services/notifications.http', () => ({
      markNotificationAsRead: vi.fn(),
      markAllNotificationsAsRead: vi.fn(),
      loadNotifications: vi.fn(),
    }));

    vi.mock('@tanstack/react-query', async () => {
      const mod = await vi.importActual<typeof import('@tanstack/react-query')>('@tanstack/react-query');
      return {
        ...mod,
        useInfiniteQuery: vi.fn(() => ({
          data: {
            pages: [
              [
                {
                  id: '1',
                  text: 'Notification 1',
                  title: 'Title 1',
                  data: {
                    id: 1,
                    notify_type: 'type1',
                    category: 'category1',
                  },
                  read: false,
                  type: 'type1',
                  created_at: '2022-01-01T00:00:00Z',
                },
              ],
              [
                {
                  id: '2',
                  text: 'Notification 2',
                  title: 'Title 2',
                  data: {
                    id: 2,
                    notify_type: 'type2',
                    category: 'category2',
                  },
                  read: true,
                  type: 'type2',
                  created_at: '2022-01-02T00:00:00Z',
                },
              ],
            ],
          },
          fetchNextPage: vi.fn(),
          hasNextPage: true,
          isFetching: false,
          isLoading: false,
        })),
        useQueryClient: vi.fn(() => ({
          invalidateQueries: vi.fn(),
        })),
      };
    });

    const markNotificationAsRead = vi.spyOn(notificationsHttp, 'markNotificationAsRead');

    renderWithProviders(<Notifications />);

    const firstNotificationCard = screen.getAllByRole('listitem', { name: 'Notification' })[0];

    await userEvent.click(firstNotificationCard);

    await waitFor(() => {
      expect(markNotificationAsRead).toHaveBeenCalled();
    });
});
```

When I run this test I get:

```
AssertionError: expected "spy" to be called at least once
```

I tried to put the click in an `act` function and tried `fireEvent`, and none of them worked for me. Also, I'm sure that the card functions normally when clicking on it in the normal UI.
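One hedged observation on the test above: Vitest hoists `vi.mock` calls to the top of the module at transform time, and it is meant to be called at module scope; placed inside the `it` body it typically does not re-mock modules that were already imported (for runtime mocking there is `vi.doMock`). The conventional placement, sketched with the question's own paths:

```ts
import { vi } from 'vitest';

// Meant to live at module top level; Vitest hoists it above the imports
// so the mock is in place before the component module loads.
vi.mock('../services/notifications.http', () => ({
  markNotificationAsRead: vi.fn(),
  markAllNotificationsAsRead: vi.fn(),
  loadNotifications: vi.fn(),
}));
```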
`UserEvent` doesn't fire on 1 element from getAll* in vitest & testing-library/react
|react-testing-library|vitest|
I need to set up a module for creating AWS security groups, which can be reused to create multiple security groups with different port and CIDR values. For example, the security group for one EC2 instance will need ports 80 and 443 open to the internet, another EC2 instance will need port 8000 only, RDS will need port 5432 from the security groups of the two EC2 instances, and Redis will need port 6379 from the security groups of the two EC2 instances.

```
dynamic "ingress" {
  iterator = port
  for_each = var.ingress_ports

  content {
    description      = "HTTP connection from Web"
    from_port        = port.value
    to_port          = port.value
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}
```

I tried `for_each`, passing ports as values, but I need a way to loop over or reference another security group from variables. In some cases the source will be a CIDR/IP; in other cases it will be another security group. Is there a way to address this?
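For illustration, a hedged sketch of one common shape for this (the variable names are mine, and `optional()` needs Terraform 1.3+): each rule carries either CIDR blocks or source security-group IDs, and the `aws_security_group` ingress block accepts both attributes:

```hcl
variable "ingress_rules" {
  type = list(object({
    port            = number
    cidr_blocks     = optional(list(string), [])
    security_groups = optional(list(string), []) # source SG IDs, e.g. the EC2 SGs
  }))
}

resource "aws_security_group" "this" {
  name   = var.name   # hypothetical module inputs
  vpc_id = var.vpc_id

  dynamic "ingress" {
    for_each = var.ingress_rules
    content {
      from_port       = ingress.value.port
      to_port         = ingress.value.port
      protocol        = "tcp"
      cidr_blocks     = ingress.value.cidr_blocks
      security_groups = ingress.value.security_groups
    }
  }
}
```

A caller could then pass, say, `security_groups = [module.web_sg.id]` for the RDS rule on 5432 and plain `cidr_blocks = ["0.0.0.0/0"]` for the web rules on 80/443 (all names here are hypothetical).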
Transforming JsonArray to shift keys into inner JsonArray using Jolt Transform
|jolt|
null
GitLab + NPM/Yarn Cache + React Firebase Hosting ```yml stages: - test - build - deploy default: image: node:21 cache: # Cache modules in between jobs key: files: - yarn.lock paths: - node_modules/ ########################## # Firebase Preview Links # ########################## preview_deploy: stage: test image: node:21 # only: # - merge_requests rules: - if: $CI_PIPELINE_SOURCE == 'merge_request_event' && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop") before_script: - npm install -g firebase-tools script: - yarn install --immutable --immutable-cache - yarn build - | echo "{\"commit\": \"https://gitlab.com/ORG/REPO/-/commit/${CI_COMMIT_SHA}\", \"ref\": \"${CI_COMMIT_REF_NAME}\", \"job\": \"https://gitlab.com/ORG/REPO/-/jobs/${CI_JOB_ID}\"}" > dist/build.json - firebase --project "${FIREBASE_PROJECT_ID}" --token "${FIREBASE_TOKEN}" hosting:channel:deploy "${CI_COMMIT_SHA}" environment: name: preview-staging ```
You could use a tuple, but its restrictions make it almost useless: the JSON items must be named Item1, Item2, and so on, and you cannot have only one item.

```
var json = @"{""Item1"": ""abc"", ""Item2"": 123}";
var tuple = JsonSerializer.Deserialize<(string, int)>(json, new JsonSerializerOptions { IncludeFields = true });
```

You could give the elements names, but that only changes how you consume the result; you cannot change the JSON naming. Names are just syntactic sugar for tuples and are not available at runtime.
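A runnable sketch of that named variant (names and values are illustrative): the element names compile away, so the JSON keys must still be Item1/Item2 even though the code reads more nicely.

```csharp
using System;
using System.Text.Json;

var json = @"{""Item1"": ""abc"", ""Item2"": 123}";
var options = new JsonSerializerOptions { IncludeFields = true };

// Names exist only at compile time; the serializer still binds Item1/Item2,
// because ValueTuple members are plain fields named that way.
var person = JsonSerializer.Deserialize<(string Name, int Count)>(json, options);
Console.WriteLine($"{person.Name} / {person.Count}"); // abc / 123
```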