I wanted to have a CPU CloudWatch alarm for my ECS service. Previously I used the CPU Utilization metric, which was in percentage, and I set the alarm at >80%. It seems they have removed it ([AWS Docs][1]). [1]: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-metrics-ECS.html So now we have CpuUtilized. I want to know how to configure it for 80%. Its values come in the hundreds; I have 4 vCPU for my container.
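For reference, a back-of-the-envelope for the alarm threshold, assuming `CpuUtilized` is reported in CPU units where 1 vCPU = 1024 units (an assumption, but it would explain raw values in the hundreds rather than a percentage):

```python
# Assumption: CpuUtilized is in CPU units (1 vCPU = 1024 units), so an
# "80% of 4 vCPU" alarm becomes a static threshold expressed in units.
vcpus = 4
units_per_vcpu = 1024
target_percent = 80

threshold = vcpus * units_per_vcpu * target_percent / 100
print(threshold)  # 3276.8
```

With that assumption, the alarm would compare `CpuUtilized > 3276.8` (Average statistic) instead of comparing a percentage.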
How to configure CPU utilized metric for ECS in AWS for Alarm?
|amazon-web-services|amazon-ecs|amazon-cloudwatch|
@MarkSeemann's approach works pretty well, but after fiddling around with a few designs, I think you can get the cleanest syntax by simply using a `Writer` monad with `Maybe` values:

```haskell
type Errors e a = Writer e (Maybe a)
```

Note here that the `Writer` layer is *not* a transformer. This is a plain old writer monad whose values are lifted into `Maybe`. You write `do`-notation for the `Writer`, which takes care of collecting error messages, but the values you extract from the monad are always `Maybe` values that allow you to carry around multiple failed computations as `Nothing`s where the cause of the failure has already been logged to the `Writer`.

So, in your `do`-blocks, a statement like:

```haskell
r1 <- f1
```

is running an `f1 :: Errors [Text] Int` computation to yield an `r1 :: Maybe Int` value. (Again, `r1` is *not* an `Int`. All the extracted values are lifted into `Maybe`.)

When you have a function like `g` that takes unlifted values but can generate its own errors:

```haskell
g :: Int -> Int -> Errors [Text] Int
```

you use an adapter to explicitly mark the function as requiring its arguments to be present:

```haskell
r1 <- f1
r2 <- f2
r3 <- f3
rg <- call2 g r2 r3  -- `call2` adapter ensures r2/r3 are present
```

You can also lift pure functions like `(,)` or the `Result` constructor from your example with additional adapters:

```haskell
liftE2 Result r1 rg  -- `liftE2` lifts Result :: Int -> Int -> Result
```

The resulting scenarios look pretty clean. For example:

```haskell
f1, f2, f3, f4, f5 :: Errors [Text] Int
f1 = err "f1 failed"
f2 = yield 2
f3 = err "f3 failed"
f4 = yield 4
f5 = yield 5

g :: Int -> Int -> Errors [Text] Int
g a b | a + b <= 6 = err "g: a + b NOT > 6"
      | otherwise  = yield $ a * b

scenario1 :: Errors [Text] Result
scenario1 = do
  r1 <- f1
  r2 <- f2
  r3 <- f3
  rg <- call2 g r2 r3
  liftE2 Result r1 rg
```

and the adapters are straightforward, if tedious:

```haskell
call2 :: (Monoid e) => (a1 -> a2 -> Errors e r) -> Maybe a1 -> Maybe a2 -> Errors e r
call2 f x1 x2 = makeCall (f <$> x1 <*> x2)
  where makeCall = fromMaybe (pure Nothing)

liftE2 :: (Monoid e) => (a1 -> a2 -> r) -> (Maybe a1 -> Maybe a2 -> Errors e r)
liftE2 f x1 x2 = pure (f <$> x1 <*> x2)
```

A complete example with more arities of adapters defined:

```haskell
{-# LANGUAGE OverloadedStrings #-}

module Collect where

import Data.Text
import Data.Maybe (fromMaybe)
import Control.Monad.Writer

type Errors e a = Writer e (Maybe a)

runErrors :: Errors e a -> Either e a
runErrors act = case runWriter act of
  (Nothing, e) -> Left e
  (Just x, _)  -> Right x

err :: t -> Errors [t] a
err t = errs [t]

errs :: (Monoid e) => e -> Errors e a
errs e = tell e >> pure Nothing

yield :: (Monoid e) => a -> Errors e a
yield = pure . Just

call  :: (Monoid e) => (a1 -> Errors e r) -> Maybe a1 -> Errors e r
call2 :: (Monoid e) => (a1 -> a2 -> Errors e r) -> Maybe a1 -> Maybe a2 -> Errors e r
call3 :: (Monoid e) => (a1 -> a2 -> a3 -> Errors e r) -> Maybe a1 -> Maybe a2 -> Maybe a3 -> Errors e r
call  f x1       = makeCall (f <$> x1)
call2 f x1 x2    = makeCall (f <$> x1 <*> x2)
call3 f x1 x2 x3 = makeCall (f <$> x1 <*> x2 <*> x3)

makeCall :: (Monoid e) => Maybe (Writer e (Maybe a)) -> Writer e (Maybe a)
makeCall = fromMaybe (pure Nothing)

liftE  :: (Monoid e) => (a1 -> r) -> (Maybe a1 -> Errors e r)
liftE2 :: (Monoid e) => (a1 -> a2 -> r) -> (Maybe a1 -> Maybe a2 -> Errors e r)
liftE3 :: (Monoid e) => (a1 -> a2 -> a3 -> r) -> (Maybe a1 -> Maybe a2 -> Maybe a3 -> Errors e r)
liftE  f x1       = pure (f <$> x1)
liftE2 f x1 x2    = pure (f <$> x1 <*> x2)
liftE3 f x1 x2 x3 = pure (f <$> x1 <*> x2 <*> x3)

data Result = Result {r1 :: !Int, rg :: !Int} deriving (Show)

f1, f2, f3, f4, f5 :: Errors [Text] Int
f1 = err "f1 failed"
f2 = yield 2
f3 = err "f3 failed"
f4 = yield 4
f5 = yield 5

g :: Int -> Int -> Errors [Text] Int
g a b | a + b <= 6 = err "g: a + b NOT > 6"
      | otherwise  = yield $ a * b

scenario1 :: Errors [Text] Result
scenario1 = do
  r1 <- f1
  r2 <- f2
  r3 <- f3
  rg <- call2 g r2 r3
  liftE2 Result r1 rg

scenario2 :: Errors [Text] Result
scenario2 = do
  r1 <- f1
  r2 <- f2
  r4 <- f4
  rg <- call2 g r2 r4
  liftE2 Result r1 rg

scenario3 :: Errors [Text] Result
scenario3 = do
  r1 <- f1
  r2 <- f2
  r5 <- f5
  rg <- call2 g r2 r5
  liftE2 Result r1 rg

scenario4 :: Errors [Text] Result
scenario4 = do
  r1 <- f4
  r2 <- f2
  r5 <- f5
  rg <- call2 g r2 r5
  liftE2 Result r1 rg

main = mapM_ (print . runErrors) [scenario1, scenario2, scenario3, scenario4]
```
I am trying to record video using Selenium in headless mode. I am using Xvfb and FFmpeg bindings for Python. I've already tried:

```python
import subprocess
import threading
import time

from chromedriver_py import binary_path
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from xvfbwrapper import Xvfb


def record_video(xvfb_width, xvfb_height, xvfb_screen_num):
    subprocess.call(
        [
            'ffmpeg',
            '-f', 'x11grab',
            '-video_size', f'{xvfb_width}x{xvfb_height}',
            '-i', xvfb_screen_num,
            '-codec:v', 'libx264',
            '-r', '12',
            'videos/video.mp4',
        ]
    )


with Xvfb() as xvfb:
    '''
    xvfb.xvfb_cmd[1] returns the screen number:
    :217295622
    :319294854
    :<generated_dynamically>
    '''
    xvfb_width, xvfb_height, xvfb_screen_num = xvfb.width, xvfb.height, xvfb.xvfb_cmd[1]
    thread = threading.Thread(target=record_video, args=(xvfb_width, xvfb_height, xvfb_screen_num))
    thread.start()

    opts = webdriver.ChromeOptions()
    opts.add_argument('--headless')
    try:
        driver = webdriver.Chrome(service=Service(executable_path=binary_path), options=opts)
    finally:
        driver.close()
        driver.quit()
```

As far as I understand, `xvfb.xvfb_cmd[1]` returns information about the virtual display, doesn't it? When I executed this script, I got the error message:

```bash
[x11grab @ 0x5e039cfe2280] Failed to query xcb pointer0.00 bitrate=N/A speed=N/A :1379911620: Generic error in an external library
```

I also tried to use the following commands:

`xvfb-run --listen-tcp --server-num 1 --auth-file /tmp/xvfb.auth -s "-ac -screen 0 1920x1080x24" python main.py &`

`ffmpeg -f x11grab -video_size 1920x1080 -i :1 -codec:v libx264 -r 12 videos/video.mp4`

In the commands above, `xvfb-run --server-num 1` and `ffmpeg -i :1` are used. Why? Overall, when Selenium is running in headless mode, what's going on behind the scenes? Is it using a virtual display? If yes, how can I detect the display id of it, etc.? Am I on the right path? I am not using Docker or any kind of virtualization. All kinds of tests are running on my local Ubuntu machine.
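As far as I can tell, `--headless` Chrome renders off-screen and never draws to an X display, so `x11grab` pointed at the Xvfb screen has nothing to capture; the usual pattern is to run Chrome *without* `--headless` inside Xvfb, with `DISPLAY` set to the Xvfb screen number. A sketch of the command assembly (the display string and output path are assumptions for illustration):

```python
# Sketch: assemble the x11grab command for a given X display. With the
# question's `xvfb-run --server-num 1`, the server listens on display ':1',
# which is why ffmpeg is pointed at `-i :1`.
def build_ffmpeg_cmd(width, height, display, out='videos/video.mp4'):
    # ffmpeg captures whatever is drawn on the X display, so Chrome must
    # actually render there: drop '--headless' and export DISPLAY instead.
    return [
        'ffmpeg', '-f', 'x11grab',
        '-video_size', f'{width}x{height}',
        '-i', display,
        '-codec:v', 'libx264', '-r', '12',
        out,
    ]

display = ':1'  # matches `xvfb-run --server-num 1`
cmd = build_ffmpeg_cmd(1920, 1080, display)
```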
Record video with Xvfb + FFmpeg using Selenium in headless mode
|python|selenium-webdriver|ffmpeg|google-chrome-headless|xvfb|
I created an MVC application with EF Core. Several problems appeared one after the other. I eventually could create a controller on the Orders table of the Northwind database. It came with views automatically. When launching the application, this opened the HomeController, which is provided by default. As I want to open the OrdersController, in the address bar, after

> https://localhost:7290/

I add `orders`. But I get a transient error, even though the context has this property:

```
.UseSqlServer(
    @"Server=(localdb)\mssqllocaldb;Database=EFMiscellanous.ConnectionResiliency;Trusted_Connection=True",
    options => options.EnableRetryOnFailure(
        maxRetryCount: 5,
        maxRetryDelay: System.TimeSpan.FromSeconds(30),
        errorNumbersToAdd: null));
```

I first tried with nothing in the parentheses for EnableRetryOnFailure, and then tried something I saw in the forums. Well, the example was on MySQL; as I use SQL Express, perhaps that's good to know. The database is on an instance of SQL Express rather than on localdb, but I am not sure this is the point. I remember that the two controllers can target two ports; how do I verify that? And supposing it is not the point either, what must I do?
JavaScript function to validate an email address using regular expressions, ensuring it follows the standard format
|javascript|coding-style|
Using [Python][1] (3.10.14 at the time of writing), how could one build a 3D mesh object (which can be saved in either STL, PLY or GLB/GLTF format) using:

- a 3D path as the sweep axis,
- a 2D rectangular shape with those constraints:
  - the 3D path is a true 3D path, which means that each coordinate varies in space; it's not contained in a single plane
  - the upper and lower edges of the rectangular shape must always be horizontal (which means that no banking occurs, i.e. there is no rotation of the shape during the sweep along the 3D axis)
  - the 3D path always passes perpendicularly through the center of the rectangle?

We can consider the 3D trajectory as being composed of straight segments only (no curves). This means that two segments of the 3D axis meet at an angle, i.e. that the derivative at this point is not continuous. The resulting 3D mesh should not have holes at those locations. Therefore, the "3D join style" should be determined with a given cap style (e.g. as described [here][2] for 2 dimensions). The 3D path is given as a numpy 3D array as follows:

```py
import numpy as np

path = np.array([
    [ 5.6, 10.1,  3.3],
    [ 5.6, 12.4,  9.7],
    [10.2, 27.7, 17.1],
    [25.3, 34.5, 19.2],
    [55. , 28.3, 18.9],
    [80.3, 24.5, 15.4]
])
```

The 2D rectangular shape is given as a [Shapely 2.0.3][3] [Polygon][4] feature:

```py
from shapely.geometry import Polygon

polygon = Polygon([[0, 0], [1.2, 0], [1.2, 0.8], [0, 0.8], [0, 0]])
```

### What I achieved so far

I'm currently giving [Trimesh 4.2.3][5] ([Numpy 1.26.4][6] being available) a try by using [`sweep_polygon`][7], but without success: each time the rectangular shape has to change direction, it also rotates around an axis perpendicular to the plane defined by the two edges meeting at the vertex where the direction changes, violating the second constraint here above.
```py
import numpy as np
from shapely.geometry import Polygon
from trimesh.creation import sweep_polygon

polygon = Polygon([[0, 0], [1.2, 0], [1.2, 0.8], [0, 0.8], [0, 0]])

path = np.array([
    [ 5.6, 10.1,  3.3],
    [ 5.6, 12.4,  9.7],
    [10.2, 27.7, 17.1],
    [25.3, 34.5, 19.2],
    [55. , 28.3, 18.9],
    [80.3, 24.5, 15.4]
])

mesh = sweep_polygon(polygon, path)
```

In addition, the [`sweep_polygon`][7] doc says:

> Doesn't handle sharp curvature well.

which is a little obscure.

[![Mesh rendered in meshlab. The shape's tilt is clearly visible as it rises to the right.][8]][8]

Mesh rendered in [meshlab][9]. The shape's tilt is clearly visible as it rises to the right.

The final goal is to run that in a Docker container on a headless server.

[1]: https://www.python.org/
[2]: https://matplotlib.org/stable/gallery/lines_bars_and_markers/joinstyle.html
[3]: https://pypi.org/project/shapely/2.0.3/
[4]: https://shapely.readthedocs.io/en/stable/reference/shapely.Polygon.html
[5]: https://pypi.org/project/trimesh/4.2.3/
[6]: https://pypi.org/project/numpy/1.26.4/
[7]: https://trimesh.org/trimesh.creation.html#trimesh.creation.sweep_polygon
[8]: https://i.stack.imgur.com/0gkpV.png
[9]: https://www.meshlab.net/
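One possible way around `sweep_polygon`'s rolling frames is to compute the section frames yourself so the rectangle's horizontal axis never tilts; a minimal sketch of that frame choice (not a full mesh builder), assuming the path is never exactly vertical:

```python
import numpy as np

# Sketch of a 'no banking' section frame for one path direction t: the
# rectangle's horizontal axis is chosen perpendicular to t but with a zero
# z-component, so the top and bottom edges stay horizontal.
# Assumption: t is never parallel to the z axis (cross product would vanish).
def horizontal_frame(t):
    t = np.asarray(t, dtype=float)
    t = t / np.linalg.norm(t)
    z = np.array([0.0, 0.0, 1.0])
    right = np.cross(z, t)                 # equals (-t_y, t_x, 0): horizontal by construction
    right = right / np.linalg.norm(right)
    up = np.cross(t, right)                # completes the right-handed frame
    return right, up

right, up = horizontal_frame([1.0, 2.0, 0.5])
```

At each interior vertex of the path, one option is to build the section from the averaged directions of the two adjacent segments, giving a miter-style join without holes.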
You can do it this way, as mentioned in the comment:

```php
$router->group(['prefix' => '/api/v1'], static function (Router $router): void {
    // ...
    $router->get('login', [AccessController::class, 'show']);
    // ...
});
```
When I try to knit an R Markdown document containing a flextable to PDF, it just stalls. There are no issues knitting to HTML or with other types of tables like kable. Minimal reprex below. When rendering, it just freezes on 'output file: example.knit.md' and never finishes rendering.

````
---
title: "Untitled"
output: pdf_document
date: "2024-03-08"
---

```{r}
flextable::flextable(cars)
```
````

I installed TinyTeX and can successfully knit PDFs that do not have any flextables in them. After reading this link [getting-flextable-to-knit-to-pdf](https://stackoverflow.com/questions/64935647/getting-flextable-to-knit-to-pdf) I discovered this was a known issue with earlier versions of flextable. However, following the solution, I added 'latex_engine: xelatex' but still no luck. I'm running flextable 0.8.3 and pandoc 2.19.2. When rendering:

```
processing file: flextable_example.Rmd
  |...................................                 |  50% ordinary text without R code
  |......................................................................| 100% label: unnamed-chunk-1

"C:/Program Files/RStudio/resources/app/bin/quarto/bin/tools/pandoc" +RTS -K512m -RTS flextable_example.knit.md --to latex --from markdown+autolink_bare_uris+tex_math_single_backslash --output flextable_example.tex --lua-filter "C:\Users\a0778291\AppData\Local\R\win-library\4.2\rmarkdown\rmarkdown\lua\pagebreak.lua" --lua-filter "C:\Users\a0778291\AppData\Local\R\win-library\4.2\rmarkdown\rmarkdown\lua\latex-div.lua" --embed-resources --standalone --highlight-style tango --pdf-engine pdflatex --variable graphics --variable "geometry:margin=1in" --include-in-header "C:\Users\a0778291\AppData\Local\Temp\RtmpoJlm1X\rmarkdown-str7d047674de6.html"

output file: flextable_example.knit.md
```

**freezes here**

Session info below.
```
R version 4.2.2 (2022-10-31 ucrt)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 22621)

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] flextable_0.8.3

loaded via a namespace (and not attached):
 [1] Rcpp_1.0.9           lattice_0.20-45      png_0.1-8            digest_0.6.31        R6_2.5.1             grid_4.2.2           jsonlite_1.8.4       evaluate_0.19        zip_2.2.2            cli_3.6.1
[11] rlang_1.1.1          gdtools_0.2.4        uuid_1.1-0           data.table_1.14.6    rstudioapi_0.14      xml2_1.3.3           Matrix_1.5-1         reticulate_1.26.9000 rmarkdown_2.19       tools_4.2.2
[21] officer_0.5.0        xfun_0.36            fastmap_1.1.0        compiler_4.2.2       systemfonts_1.0.4    base64enc_0.1-3      htmltools_0.5.4      knitr_1.41
```
On the Strapi application, I accidentally checked the 'active false' box for the user when it should have been 'true'. Since then, I can't log in to Strapi. I'm convinced that to fix this issue I need to go into the database, but I can't find which file I should modify to change my user's status back to 'active true'. Database: SQLite. Language: JavaScript. I tried to roll back to an old version from GitHub, but no success... I also tried to search for all 'active' occurrences in VS Code, but I can't figure out which one I should fix.
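If it is indeed an admin user, one heavily hedged option is to flip the flag directly in the SQLite file from a small script. The default path (`.tmp/data.db` in the project root), the `admin_users` table and the `is_active` column are assumptions that differ across Strapi versions, so inspect the schema first:

```python
import sqlite3

# Sketch only: the table name ('admin_users') and column ('is_active') are
# assumptions and vary between Strapi versions -- inspect the schema first.
# In a default Strapi project the SQLite file is often at .tmp/data.db.
def reactivate_admin(db_path, email):
    con = sqlite3.connect(db_path)
    try:
        # List the tables so you can confirm the right one exists.
        tables = [row[0] for row in con.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")]
        con.execute(
            "UPDATE admin_users SET is_active = 1 WHERE email = ?", (email,))
        con.commit()
        return tables
    finally:
        con.close()
```

Stop the Strapi server before editing the file, and back it up first.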
How to kill a bash script as well as the python program started within the bash script?
|python|bash|kill-process|
I coded a very simple COBOL program that should take data from SYSIN DD * and put it in my WORKING-STORAGE variable, but it doesn't work as expected. The problem is when I try to pass a value of 10 to a PIC 9(10) variable, coded like this in JCL:

```
//sysin dd *
10
/*
```

I get 1000000000 instead of 0000000010. Is there a simple way to make it work without changing the input data? Thanks in advance :)
How to process a non-formatted numeric variable from JCL in COBOL
|cobol|jcl|
I have created a button to insert a row. Each row I add says "Insert expense here". The problem is that when I click the button and insert the row, the new row only populates where it says B7. If I were to delete "test" in B7, leave it blank, and then click the button, it would get generated under B5 where it says expenses. I just want a new row to be added so I can list my expenses accordingly. I also attached a screenshot. Here is my code in VBA below:

```vb
Sub Button17_Click()
    Range("B" & Rows.Count).End(xlUp).Select
    ActiveCell.EntireRow.Insert
    ActiveCell.FormulaR1C1 = "Insert An Expense…"
End Sub
```

"Insert An Expense..." only gets added below the cell that has "test" populated. If the cell is blank, "Insert an Expense" gets populated above the cell that says "expenses".

[!["Insert An Expense..." only gets added below the cell that has "test" populated. If the cell is blank, "Insert an Expense" gets populated above the cell that says "expenses"][1]][1]

[1]: https://i.stack.imgur.com/GYUKf.png
In my case, the error was in the controller-manager, not in the API server logs. Check the logs of the `...-controller-manager-...` pod in the `kube-system` namespace:

```
kubectl get pod -n kube-system
kubectl logs -n kube-system <name-of-the-pod>
```

Mine was full of errors about an invalid certificate (wrong IP address).
In Flutter we do serialization (toJson) of the model, but how can we generate a toJson that includes the getters? Simple example:

```dart
class MyClass {
  const MyClass({
    required this.a,
    required this.b,
  });

  final double a;
  final double b;

  double get average => (a + b) / 2;
}
```

We use the `json_serializable` package for generating the `toJson` method. It generates the `my_class.g.dart` file where we have the toJson and fromJson methods. The problem is that this way we do not have serialization of the getter (the `average` field in our example). What can we do to provide full serialization including class getters? Is there a way to achieve it using `json_serializable`, `freezed` or any other package?
Would anyone suspect why my React Native app on the emulator just suddenly stops working and shows a "No connection to the Metro server" error out of the blue? The screenshots refer to the same error. The one with the longer message is what comes up after I refresh the emulator. [First error](https://i.stack.imgur.com/YvkNP.png) [Second error after refresh](https://i.stack.imgur.com/PG4fT.png)

System: macOS 14.2.1
Android Studio: Flamingo 2022.2.1 Patch 1
Node: 18.15.0

I can give the settings of the emulator too, but I don't know how relevant they are since the emulator has worked 75% of the time. I have still to find a pattern on when it decides to go AWOL on me, but here's what has worked in the past.

- First time it happened: just restarting the emulator and the Metro server worked.
- Second time it happened: just restarting the emulator and the Metro server DID NOT work. I had to run `adb reverse tcp:8081 tcp:8081`, which worked.
- Third time it happened (now): none of the above works in any capacity or order.

Disclaimer: I have tried restarting everything, restarting by clearing the cache on npx react-native, closing the emulator, closing the Metro server, closing the terminals, killing the node process manually with kill -9, and deleting node modules and installing again. I didn't do anything to mess with the server or app; some changes I did I reverted to see if that was what was bothering it (not so likely, because the change was simply in one of the components, nothing that would affect networking in any way). This is my very first time working with React Native and emulators, so of course there is a million things I could have done wrong, but this thing bamboozles me because it works fine one time and then suddenly not.
Firstly, I created a .NET 6 Function App and set the console title. It worked successfully: ![enter image description here](https://i.imgur.com/wGWtAXE.png) Then I created a .NET Isolated Function App and changed the console title. It didn't work as expected: ![enter image description here](https://i.imgur.com/eejjhto.png) I observed that the isolated function app has a separate worker process, and setting the title in the console runner has no effect. A similar issue was reported in 2022 about Azure-functions-dotnet-worker console name/title changing, where many developers commented that it is required functionality, but it was closed due to inactivity. Raise a new ticket in the GitHub Azure-Functions-dotnet-worker repository referencing the previous one - [GitHub-AF-dotnet-worker-issue-960](https://github.com/Azure/azure-functions-dotnet-worker/issues/960).
I created a build.bat file to zip up the correct files for my browser extension and make Firefox and Chromium builds. The problem is that, for some reason, these zips seem to be corrupt in some way.

```
@echo off
setlocal

:: Define variables
set ZIP_NAME_FIREFOX=Firefox.zip
set ZIP_NAME_CHROMIUM=Chromium.zip
set SOURCE_FOLDER=%CD%
set TEMP_FOLDER=%SOURCE_FOLDER%\temp

:: Ensure the temp folder is clean
if exist "%TEMP_FOLDER%" rmdir /s /q "%TEMP_FOLDER%"
mkdir "%TEMP_FOLDER%"

:: Copy files and folders individually to avoid cyclic copy
echo Copying files to the temp directory...
xcopy "%SOURCE_FOLDER%\mrbeastify.js" "%TEMP_FOLDER%" /Q
xcopy "%SOURCE_FOLDER%\manifest v3.json" "%TEMP_FOLDER%" /Q
xcopy "%SOURCE_FOLDER%\images" "%TEMP_FOLDER%\images\" /E /Q
xcopy "%SOURCE_FOLDER%\icons" "%TEMP_FOLDER%\icons\" /E /Q

echo Creating Firefox zip folder...
powershell Compress-Archive -Path "%SOURCE_FOLDER%\mrbeastify.js", "%SOURCE_FOLDER%\manifest.json", "%SOURCE_FOLDER%\images", "%SOURCE_FOLDER%\icons" -DestinationPath "%SOURCE_FOLDER%\%ZIP_NAME_FIREFOX%" -Force
echo Firefox zip folder created successfully.

:: Rename manifest for Chromium
echo Preparing files for Chromium zip...
rename "%TEMP_FOLDER%\manifest v3.json" "manifest.json"

echo Creating Chromium zip folder...
powershell Compress-Archive -Path "%TEMP_FOLDER%\mrbeastify.js", "%TEMP_FOLDER%\manifest.json", "%TEMP_FOLDER%\images", "%TEMP_FOLDER%\icons" -DestinationPath "%SOURCE_FOLDER%\%ZIP_NAME_CHROMIUM%" -Force
echo Chromium zip folder created successfully.

:: Cleanup
if exist "%TEMP_FOLDER%" rmdir /s /q "%TEMP_FOLDER%"
echo All operations completed successfully.
pause
```

When I zip the files manually, it works fine. I can load the extension temporarily and nothing goes wrong. When I use the build.bat file and load the extension, my script cannot locate any of the images in the folders, and the extension icons also don't load.
I then decided to see what was up with the images, so I tried to delete the images folder *inside the zip* and replace it with a version known to work. Windows actually *wasn't able to delete these folders* (luckily 7-Zip could). When I finally copied the images and icons folders in manually, it all worked. What on earth could be going wrong here? I already checked whether the file was locked by some program (it wasn't) and whether running cmd as admin works (it doesn't).
bat file creates a "corrupt" zip
|batch-file|
Can't Log into Strapi after Accidentally Setting User to 'Active False'
|javascript|node.js|sqlite|github|strapi|
Had the same issue. Check whether the problem is caused by deferred JavaScript loading. I have WP Rocket for some caching. It's normally great, but sometimes it can mess things up. If you do have a caching plugin, or are running a caching program somewhere on your server or site, check that that feature is turned off for the particular page that runs any of your carousels.
Copy From One Closed Workbook to Another (`PERSONAL.xlsb`!?)

```vb
Sub CopyRawData()
    
    Const SRC_FOLDER_PATH As String = "U:\Documents\Macro Testing\Raw Data\"
    Const SRC_FILE_PATTERN As String = "SLTEST_*.csv"
    Const SRC_FIRST_ROW_RANGE As String = "A2:G2"
    Const DST_FILE_PATH As String _
        = "U:\Documents\Macro Testing\Data\Finished Data.xlsx"
    Const DST_SHEET_NAME As String = "Banana"
    Const DST_FIRST_CELL As String = "A2"
    
    Dim sFileName As String: sFileName = Dir(SRC_FOLDER_PATH & SRC_FILE_PATTERN)
    If Len(sFileName) = 0 Then
        MsgBox "No file matching the pattern """ & SRC_FILE_PATTERN _
            & """ found in """ & SRC_FOLDER_PATH & """!", vbExclamation
        Exit Sub
    End If
    
    Dim sFilePath As String, sFilePathFound As String
    Dim sFileDate As Date, sFileDateFound As Date
    
    ' Find the most recently modified matching file.
    Do While Len(sFileName) > 0
        sFilePathFound = SRC_FOLDER_PATH & sFileName
        sFileDateFound = FileDateTime(sFilePathFound)
        If sFileDate < sFileDateFound Then
            sFileDate = sFileDateFound
            sFilePath = sFilePathFound
        End If
        sFileName = Dir
    Loop
    
    Application.ScreenUpdating = False
    
    Dim swb As Workbook: Set swb = Workbooks.Open(sFilePath, , True) ' , Local:=True)
    Dim sws As Worksheet: Set sws = swb.Sheets(1)
    
    Dim srg As Range, slcell As Range, rCount As Long
    
    With sws.Range(SRC_FIRST_ROW_RANGE)
        Set slcell = .Resize(sws.Rows.Count - .Row + 1) _
            .Find("*", , xlValues, , xlByRows, xlPrevious)
        If slcell Is Nothing Then
            swb.Close SaveChanges:=False
            MsgBox "No data found in workbook """ & sFilePath & """!", _
                vbExclamation
            Exit Sub
        End If
        rCount = slcell.Row - .Row + 1
        Set srg = .Resize(rCount)
    End With
    
    Dim dwb As Workbook: Set dwb = Workbooks.Open(DST_FILE_PATH)
    Dim dws As Worksheet: Set dws = dwb.Sheets(DST_SHEET_NAME)
    Dim drg As Range: Set drg = dws.Range(DST_FIRST_CELL) _
        .Resize(rCount, srg.Columns.Count)
    
    srg.Copy Destination:=drg
    
    swb.Close SaveChanges:=False
    
    With drg
        ' Clear below.
        .Resize(dws.Rows.Count - .Row - rCount + 1).Offset(rCount).Clear
        ' Format.
        .HorizontalAlignment = xlCenter
        .VerticalAlignment = xlCenter
        '.EntireColumn.AutoFit
    End With
    
    'dwb.Close SaveChanges:=True
    
    Application.ScreenUpdating = True
    
    MsgBox "Raw data copied.", vbInformation
    
End Sub
```
The reason it only works for one subdomain is that `next-auth` by default sets the cookies on the domain that the signIn was performed on. To change the default behavior and give it a custom domain (e.g. a domain with an empty subdomain, which means all subdomains: `.my-site.com`) while using next-auth version 5, we should configure the cookies' options like so:

```js
const nextAuthUrl = process.env.NEXTAUTH_URL

NextAuth({
  cookies: {
    sessionToken: {
      name: `${nextAuthUrl.startsWith('https://') ? "__Secure-" : ""}next-auth.session-token`,
      options: {
        domain: nextAuthUrl === "localhost"
          ? "." + nextAuthUrl
          : "." + new URL(nextAuthUrl).hostname, // support all subdomains
        httpOnly: true,
        sameSite: 'lax',
        path: '/',
        secure: nextAuthUrl.startsWith('https://'),
      }
    }
  }
})
```

You might also need to configure other cookies as well, depending on your use case. You can read more about it in the official documentation: https://next-auth.js.org/configuration/options#cookies
I've been working on a project where I need to access data from APIs. In these APIs are values named 'decimal' which I need to access. I have no problem accessing and displaying the decimal value itself, but when I try to display only the decimal values based on another value within the JSON, I seem to struggle. My code currently looks like this:

```
import pandas as pd
import requests as r

api = 'https://content.toto.nl/content-service/api/v1/q/event-list?startTimeFrom=2024-03-31T22%3A00%3A00Z&startTimeTo=2024-04-01T21%3A59%3A59Z&started=false&maxMarkets=10&orderMarketsBy=displayOrder&marketSortsIncluded=--%2CCS%2CDC%2CDN%2CHH%2CHL%2CMH%2CMR%2CWH&marketGroupTypesIncluded=CUSTOM_GROUP%2CDOUBLE_CHANCE%2CDRAW_NO_BET%2CMATCH_RESULT%2CMATCH_WINNER%2CMONEYLINE%2CROLLING_SPREAD%2CROLLING_TOTAL%2CSTATIC_SPREAD%2CSTATIC_TOTAL&eventSortsIncluded=MTCH&includeChildMarkets=true&prioritisePrimaryMarkets=true&includeCommentary=true&includeMedia=true&drilldownTagIds=691&excludeDrilldownTagIds=7291%2C7294%2C7300%2C7303%2C7306'
re = r.get(api)
red = re.json()

# Finding the 'type' variable within the JSON file
match_result = pd.json_normalize(red, record_path=['data', 'events', 'markets', 'outcomes'])
match_result = match_result['type']

# For every type variable within the file we check if type == 'MR'
for type in match_result:
    if type == 'MR':
        # If the type == 'MR' I want to print the decimal belonging to that
        # type value, but this is where I'm doing something wrong
        decimal = pd.json_normalize(match_result, record_path=['prices'])
        decimal = decimal['decimal']
        print(decimal)
    else:
        pass
```

I've been looking all over YouTube and Stack Overflow to find what I'm doing wrong but can't seem to figure it out. Also important to note is that in the variable 'match_result' I access the type by using the record_path, but for the 'decimal' variable I need to go further into the record path, using 'prices' in the for loop. This is where I think I'm doing it wrong, but I still have no idea what it is I'm doing wrong.
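For comparison, the filter-then-extract step can be done without a second `json_normalize` call by walking the nested structure directly; a sketch on a static sample (the key names are assumptions taken from the record_path in the question, not from the live API):

```python
# Static sample shaped like the described response; the exact key names
# ('data', 'events', 'markets', 'outcomes', 'prices', 'type', 'decimal')
# are assumptions based on the record_path used in the question.
sample = {
    "data": {
        "events": [
            {"markets": [
                {"outcomes": [
                    {"type": "MR",    "prices": [{"decimal": 2.45}]},
                    {"type": "OTHER", "prices": [{"decimal": 1.10}]},
                ]},
            ]},
        ],
    },
}

# Walk the nesting directly, filtering on 'type' while collecting 'decimal',
# instead of normalizing 'type' into a Series of strings and re-normalizing.
decimals = [
    price["decimal"]
    for event in sample["data"]["events"]
    for market in event["markets"]
    for outcome in market["outcomes"]
    if outcome["type"] == "MR"
    for price in outcome["prices"]
]
print(decimals)  # [2.45]
```

The key point is that once `match_result['type']` is a column of strings, the link back to each row's `prices` list is lost; filtering has to happen while the nested objects are still intact.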
How to access a value within a nested json file based on conditions
|python|pandas|
I'm encountering an issue while trying to build a Docker image for my Python application. Whenever I run the `docker build` command, I'm getting the following error:

*ERROR: failed to solve: the Dockerfile cannot be empty*

**Here's my Dockerfile:**

```Dockerfile
# Use an official Python runtime as the base image
FROM python:latest

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . .

# Install any needed dependencies specified in requirements.txt
# RUN pip install --no-cache-dir -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV NAME World

# Run name.py when the container launches
CMD ["python", "name.py"]
```

**And here's the content of my name.py file:**

`print("Hello World!")`

I'm confused as to why Docker is complaining that the Dockerfile is empty when it clearly contains instructions. Any ideas on what might be causing this issue?

**Additional Information:**

Docker version: 25.0.3
Operating system: Windows
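One hedged guess worth checking on Windows: a Dockerfile saved as UTF-16 (which some editors and PowerShell redirections produce) starts with a byte-order mark, and the build can then fail to read any instructions from it. A quick sketch to inspect the first bytes of the file (the diagnosis itself is an assumption, not a confirmed cause):

```python
# Sketch: check the Dockerfile's first bytes. A file saved as UTF-16 starts
# with a byte-order mark (FF FE or FE FF); UTF-8 with BOM starts with
# EF BB BF. Plain ASCII/UTF-8 without BOM is what Docker expects.
def detect_bom(path):
    with open(path, 'rb') as f:
        head = f.read(3)
    if head[:2] in (b'\xff\xfe', b'\xfe\xff'):
        return 'utf-16'
    if head == b'\xef\xbb\xbf':
        return 'utf-8-sig'
    return None
```

If `detect_bom('Dockerfile')` reports anything other than `None`, re-saving the file as UTF-8 without BOM would be the thing to try first.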
Let's say you are in the `WAIT_PASSWORD` state. What happens in this state is that your `counter_wait` starts to increment; it is a 33-bit register, so it will take an eternity to overflow since your clock is 1 Hz. Now, you get to the `RIGHT_PASS` state under two conditions: first, `password == 2'b11`, and second, `counter_wait` must be less than 5. And this is the issue: when you are in the `WAIT_PASSWORD` state and you wait too long for the password to become `2'b11` (more than 5 seconds, i.e. until `counter_wait` becomes greater than or equal to 5), then you will never be able to enter the `RIGHT_PASS` state again and you will always end up in that same `WAIT_PASSWORD` state.

```
WAIT_PASSWORD: begin
    if (counter_wait < 5 && password == 2'b11)
        next_state = RIGHT_PASS;
    else if (counter_wait > 5 && password == 2'b00)
        next_state = STOP;
    else if (password == 2'b01 || password == 2'b10)
        next_state = WRONG_PASS;
    else
        next_state = WAIT_PASSWORD;
end
```
I have a CSS variable `--standard-backcolor` which is assigned like:

`--standard-backcolor: green`

This works fine when I use it like:

`background-color: var(--standard-backcolor);`

But when I retrieve the color from another source, like Angular Material:

`--standard-backcolor: mat.get-color-from-palette($greyPalette, 900);`

it is not resolved properly. I would guess this is caused by the fact that I don't assign the value directly, but in a more indirect way. I've tried as well:

```
$test: mat.get-color-from-palette($greyPalette, 900);

:root & {
    --standard-backcolor: $test;
    ...
```

which fails as well :-/

=> How do I assign the CSS variable a retrieved value so I can use it later?
SCSS variables: How to assign a retrieved value
|sass|angular-material|
You can delete an endpoint using the Python Azure ML SDK:

```python
from azureml.core import Workspace, Webservice

service = Webservice(workspace=ws, name='your-service-name')
service.delete()
```

Then if you want to re-create it, you can redeploy the model:

```python
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.model import Model

service_name = 'my-custom-env-service'

inference_config = InferenceConfig(entry_script='score.py', environment=environment)
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(workspace=ws,
                       name=service_name,
                       models=[model],
                       inference_config=inference_config,
                       deployment_config=aci_config,
                       overwrite=True)
service.wait_for_deployment(show_output=True)
```

There is no current way to schedule or temporarily disable the endpoint. The only way would be to delete and re-create it using the Azure ML SDK. The other option would be to use an Azure Function App for deploying ML models; that way you only pay for requests made.
Self resolved. The functionality of the old torch.fft corresponds to the new **torch.fft.fft, fft2**, or **fftn**. The important thing is the value of signal_ndim in torch.fft, i.e., how many dimensions of FFT you want to perform. To prepare, we first convert Real (3, 4, 2) -> Complex (3, 4):

```
a = torch.view_as_complex(a)
```

Then:

```
# If signal_ndim=1 (1D FFT over the last dimension)
torch.fft.fft(a)   # or torch.fft.fftn(a, dim=-1)

# If signal_ndim=2 (2D FFT over the last two dimensions)
torch.fft.fft2(a)  # or torch.fft.fftn(a, dim=(-2, -1))

# If signal_ndim=k (k >= 3, FFT over the last k dimensions)
torch.fft.fftn(a, dim=tuple(range(-k, 0)))
```
I am following the book "Learn Python The Hard Way", and in ex 46 we make a skeleton structure for projects and install pytest to test the project code. All cool and dandy until I try to run the test for my package. The test is marked as successful, but nothing from my module code shows up; the code only runs when I test the module individually with a "test_Dandy2.py" file. This is how my directory is organized (based on the book exercise):

```
project_1\
    bin\
    docs\
    project1\
        __init__.py
        Dandy2.py
    tests\
        __init__.py
        test_project1.py
        test_Dandy2.py
    setup.py
```

I tried to write `import Dandy2` in the `__init__.py` file of my package directory, and then I got an ImportError. I looked all over Stack Overflow, and most answers talk about the project file being outside the Python path. However, my test is marked as successful, and when running a test for the module individually it runs the code normally. I don't know if this is how it should work, as the book gives no explanation whatsoever and the LPTHW forum is very dead. Maybe it is because I am using a Windows OS and running pytest in PowerShell, and there is something to tweak in order to make it work (like I had to do to run scripts, where I needed to change the execution policy of my system).
I think a simple "blowing out" the current selection would suffice here, and that would then allow you to "return" to the page, and click on the "same" row again. Take this setup: <CollectionView x:Name="MyColview" VerticalOptions="FillAndExpand" MaximumHeightRequest="700" SelectionChanged="MyColview_SelectionChanged" SelectionMode="Single" > <CollectionView.Header> <Grid ColumnDefinitions="100,100,100,150,200"> <Label Text="First" Grid.Column="0" FontAttributes="Italic,Bold" /> <Label Text="Last" Grid.Column="1" FontAttributes="Italic,Bold" /> <Label Text="City" Grid.Column="2" FontAttributes="Italic,Bold" /> <Label Text="HotelName" Grid.Column="3" FontAttributes="Italic,Bold" /> <Label Text="Description" Grid.Column="4" FontAttributes="Italic,Bold" /> </Grid> </CollectionView.Header> <CollectionView.ItemTemplate> <DataTemplate> <Grid ColumnDefinitions="100,100,100,150,200" Padding="0,5,0,5"> <Label Grid.Column="0" Text="{Binding FirstName}" /> <Label Grid.Column="1" Text="{Binding LastName}" /> <Label Grid.Column="2" Text="{Binding City}" /> <Label Grid.Column="3" Text="{Binding HotelName}" /> <Label Grid.Column="4" Text="{Binding Description}" HorizontalOptions="FillAndExpand" /> <BoxView Grid.ColumnSpan="5" Margin="0,-4,0,0" Color="SkyBlue" MinimumHeightRequest="1" MaximumHeightRequest="1" HeightRequest="1" VerticalOptions="Start" /> </Grid> </DataTemplate> </CollectionView.ItemTemplate> </CollectionView> So, we use the "regular" Selectionchanged event. My row click code looks like this: private async void MyColview_SelectionChanged(object sender, SelectionChangedEventArgs e) { if (MyColview.SelectedItem != null) { tblHotel MySelHotel = (tblHotel)MyColview.SelectedItem; await Navigation.PushAsync(new EditHotel(MySelHotel, this)); MyColview.SelectedItem = null; } } So, note close in above how I "blow out" and set the selection to null. 
Keep in mind that changing the selected item to null WILL trigger the selection changed event again, but by wrapping the code in a null check, it works fine. So, I pass the current row object and "form" to the next page. On my target page, I have this: public tblHotel MyHotel { get; set; } TestHotels frmPrevious; public EditHotel(tblHotel MyH, TestHotels frmP) { InitializeComponent(); MyHotel = MyH; frmPrevious = frmP; BindingContext = MyHotel; } private void cmdTest_Clicked(object sender, EventArgs e) { DisplayAlert("City", MyHotel.City, "ok"); } void MyClose() { dbLocal.con.Update(MyHotel); frmPrevious.MyRefresh(); } protected override bool OnBackButtonPressed() { MyClose(); return false; } (a bit of extra test code - but you get the idea here). And my layout? Again, probably not all that important, but I have this: <Grid ColumnDefinitions="400,400"> <VerticalStackLayout Grid.Column="0" HorizontalOptions="Start" Padding="20" MinimumWidthRequest="380" > <Label>HotelName</Label> <Editor Text="{Binding HotelName}"/> <Label>First Name</Label> <Editor Text="{Binding FirstName}"/> <Label>Last Name</Label> <Editor Text="{Binding LastName}"/> <Label>City</Label> <Editor Text="{Binding City}"/> </VerticalStackLayout> <VerticalStackLayout Grid.Column="1" Padding="20" HorizontalOptions="Start" MinimumWidthRequest="380" > <Label>Description</Label> <Editor Text="{Binding Description}" HeightRequest="200" /> <Button Text="Test Button" x:Name="cmdTest" Clicked="cmdTest_Clicked" /> </VerticalStackLayout> </Grid> So, when I run this, I get the following effect; note how I am able to click the 2nd row again when I return: [![enter image description here][1]][1] [1]: https://i.stack.imgur.com/0Vy4c.gif
IIUC, you can try to group the properties by `ItemTypeX` and then explode: ```py df.columns = df.columns.str.replace(r"\[\d+\]", "", regex=True) df = df.set_index(["Time", "Property"]) df = df.T.groupby(df.columns).agg(list).T print(df.reset_index().explode(df.columns.to_list())) ``` Prints: ```none Time Property ItemType0.property 0 1 2 1 0 1 2 0 1 2 2 0 1 2 2 1 2 3 2 3 2 3 2 3 ```
When I try to knit an Rmarkdown document containing a flextable to pdf it just stalls. No issues knitting to html or with other types of tables like kable. Min reprex below. When rendering it will just freeze on 'output file: example.knit.md' and never finish rendering. --- title: "Untitled" output: pdf_document date: "2024-03-08" --- ```{r} flextable::flextable(cars) ``` I installed tiny_tex and can successfully knit pdfs that do not have any flextables in them. After reading this link [getting-flextable-to-knit-to-pdf](https://stackoverflow.com/questions/64935647/getting-flextable-to-knit-to-pdf) I discovered this was a known issue with earlier versions of flextable. However, following the solution I added `'latex_engine: xelatex'` but still no luck. I'm running flextable 0.8.3. pandoc 2.19.2 When rendering: processing file: flextable_example.Rmd |................................... | 50% ordinary text without R code |......................................................................| 100% label: unnamed-chunk-1 "C:/Program Files/RStudio/resources/app/bin/quarto/bin/tools/pandoc" +RTS -K512m -RTS flextable_example.knit.md --to latex --from markdown+autolink_bare_uris+tex_math_single_backslash --output flextable_example.tex --lua-filter "C:\Users\a0778291\AppData\Local\R\win-library\4.2\rmarkdown\rmarkdown\lua\pagebreak.lua" --lua-filter "C:\Users\a0778291\AppData\Local\R\win-library\4.2\rmarkdown\rmarkdown\lua\latex-div.lua" --embed-resources --standalone --highlight-style tango --pdf-engine pdflatex --variable graphics --variable "geometry:margin=1in" --include-in-header "C:\Users\a0778291\AppData\Local\Temp\RtmpoJlm1X\rmarkdown-str7d047674de6.html" output file: flextable_example.knit.md **freezes here** Session info below. 
R version 4.2.2 (2022-10-31 ucrt) Platform: x86_64-w64-mingw32/x64 (64-bit) Running under: Windows 10 x64 (build 22621) attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] **flextable_0.8.3** loaded via a namespace (and not attached): [1] Rcpp_1.0.9 lattice_0.20-45 png_0.1-8 digest_0.6.31 R6_2.5.1 grid_4.2.2 jsonlite_1.8.4 evaluate_0.19 zip_2.2.2 cli_3.6.1 [11] rlang_1.1.1 gdtools_0.2.4 uuid_1.1-0 data.table_1.14.6 rstudioapi_0.14 xml2_1.3.3 Matrix_1.5-1 reticulate_1.26.9000 rmarkdown_2.19 tools_4.2.2 [21] officer_0.5.0 xfun_0.36 fastmap_1.1.0 compiler_4.2.2 systemfonts_1.0.4 base64enc_0.1-3 htmltools_0.5.4 knitr_1.41
Docker build fails with "failed to solve: the Dockerfile cannot be empty" error
|python|windows|docker|dockerfile|docker-build|
null
Just adding an actual image inside the div makes the outline disappear. The white outline stays there only to remind you that an image is supposed to be loaded there; it is either still loading or it failed to load.
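For example (a minimal sketch; the file names here are placeholders):

```html
<!-- An <img> with a missing or broken src shows the placeholder outline -->
<div><img src="broken.jpg" alt="broken"></div>

<!-- Once the image actually loads, the outline disappears -->
<div><img src="photo.jpg" alt="loaded"></div>
```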
I'm trying to add items to a linked list, however, if a node's data repeats instead of returning the new node it should return the original node with the same data. I have a test file along with the methods I'm testing. the trouble comes from my `addOnce()` method returning the node parsed through instead of the first occurrence. trying to avoid > "Error: The initial item was not returned for the " + result" line of code here's the test block of code, no editing is required ```Java import java.util.Iterator; public class OrderedListTest { static private class Courses implements Comparable<Courses>{ String rubric; int number; int occurance; public Courses(String rub, int num, int occ) { rubric = rub; number = num; occurance = occ; } public int compareTo(Courses other) { if (rubric.compareTo(other.rubric) < 0) return -1; else if (rubric.compareTo(other.rubric) > 0) return 1; else return number - other.number; } public String toString() { return rubric + " " + number; } } public static void main(String[] args) { Courses listOfCourses[] = { new Courses("COSC", 2436, 1), new Courses("ITSE", 2409, 1), new Courses("COSC", 1436, 1), new Courses("ITSY", 1300, 1), new Courses("ITSY", 1300, 2), new Courses("COSC", 1436, 2), new Courses("COSC", 2436, 2), new Courses("ITSE", 2417, 1), new Courses("ITNW", 2309, 1), new Courses("CPMT", 1403, 1), new Courses("CPMT", 1403, 2)}; OrderedAddOnce<Courses> orderedList = new OrderedAddOnce<Courses>(); Courses result; for (int i = 0; i < listOfCourses.length; i++){ result = orderedList.addOnce(listOfCourses[i]); if (result == null) System.out.println("Error: findOrAdd returned null for " + listOfCourses[i]); else { if (result.occurance != 1) System.out.println("Error: The initial item was not returned for " + result); if (result.compareTo(listOfCourses[i]) != 0) System.out.println("Error: " + listOfCourses[i] + " was passed to findOrAdd but " + result + " was returned"); } } Iterator<Courses> classIter = orderedList.iterator(); 
while(classIter.hasNext()) { System.out.println(classIter.next()); } // There should be 7 courses listed in order } } ``` Here's the code in need of debugging, specifically in the `addOnce()` method ```Java import java.util.Iterator; import java.util.NoSuchElementException; /** * * @author User */ // Interface for COSC 2436 Labs 3 and 4 /** * @param <E> The class of the items in the ordered list */ interface AddOnce <E extends Comparable<? super E>> { /** * This method searches the list for a previously added * object, if an object is found that is equal (according to the * compareTo method) to the current object, the object that was * already in the list is returned, if not the new object is * added, in order, to the list. * * @param an object to search for and add if not already present * * @return either item or an equal object that is already on the * list */ public E addOnce(E item); } //generic linked list public class OrderedAddOnce<E extends Comparable<? super E>> implements Iterable<E>, AddOnce<E> { private Node<E> firstNode; public OrderedAddOnce() { this.firstNode = null; } @Override public E addOnce(E item) { Node<E> current; if (firstNode == null || item.compareTo(firstNode.data) <= 0) { Node<E> newNode = new Node<>(item); newNode.next = firstNode; firstNode = newNode; return firstNode.data; } current = firstNode; while (current.next != null && item.compareTo(current.next.data) > 0) { current = current.next; } Node<E> newNode = new Node<>(item); newNode.next = current.next; current.next = newNode; return newNode.data; } @Override public Iterator iterator() { return new AddOnceIterator(); } private class AddOnceIterator implements Iterator{ private Node<E> currentNode = firstNode; @Override public boolean hasNext() { return currentNode != null; } @Override public E next() { if (!hasNext()) { throw new NoSuchElementException(); } E data = currentNode.data; currentNode = currentNode.next; return data; } } private class Node<E> { public E data; public 
Node<E> next; public Node(E intialData){ this.data = intialData; this.next = null; } } } ``` I've tried swapping the return statement value between `newNode.data`, `current.data`, and `item`, but it returns the same error messages regardless: ``` Error: The initial item was not returned for ITSY 1300 Error: The initial item was not returned for COSC 1436 Error: The initial item was not returned for COSC 2436 Error: The initial item was not returned for CPMT 1403 COSC 1436 COSC 1436 COSC 2436 COSC 2436 CPMT 1403 CPMT 1403 ITNW 2309 ITSE 2409 ITSE 2417 ITSY 1300 ITSY 1300 ``` I'm expecting the list to contain no duplicates.
null
[`Apache Commons IO`][1] comes to the rescue (again). The `Commons IO` method [`FilenameUtils.separatorsToSystem(String path)`][2] will do what you want. Needless to say, `Apache Commons IO` will do a lot more besides and is worth looking at. [1]: https://commons.apache.org/proper/commons-io/ [2]: https://commons.apache.org/proper/commons-io/apidocs/org/apache/commons/io/FilenameUtils.html#separatorsToSystem(java.lang.String)
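To illustrate what the method does (this is a rough sketch of the behaviour, not the Commons IO source): it replaces whichever separator style doesn't match the current platform with `File.separatorChar`.

```java
import java.io.File;

public class SeparatorDemo {

    // Rough stdlib equivalent of FilenameUtils.separatorsToSystem(path)
    static String separatorsToSystem(String path) {
        if (path == null) {
            return null;
        }
        return File.separatorChar == '\\'
                ? path.replace('/', '\\')   // Windows: forward slashes -> backslashes
                : path.replace('\\', '/');  // Unix: backslashes -> forward slashes
    }

    public static void main(String[] args) {
        // On Unix this prints C:/projects/data/reports
        System.out.println(separatorsToSystem("C:\\projects/data\\reports"));
    }
}
```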
null
So I'm trying to deploy a brand new application to Google App Engine from Github actions and I'm getting this error message: ```Failed to create cloud build: invalid bucket "<PROJECT_NUMBER>.cloudbuild-logs.googleusercontent.com"; default Cloud Build service account or user-specified service account does not have access to the bucket``` I have tried to specify a different bucket for logging other than the default, using the flag: `--bucket gs://generic-app`. I've also tried to specify a different service account that has owner role (overkill, I know), exported it's key and passed it into the `credentials_json` prop. But I get the same error message. Why would this be happening? Full workflow file ```yaml name: Deploy to Goggle App Engine on: push: branches: - main - staging jobs: build-and-deploy: runs-on: ubuntu-latest steps: - name: Checkout repository uses: actions/checkout@v2 - name: Setup Node.js and yarn uses: actions/setup-node@v2 with: node-version: '18' - name: Install dependencies run: yarn - name: Build Node Project run: yarn build # Production - name: Google Cloud Auth Production if: github.event_name == 'push' && github.ref == 'refs/heads/main' uses: 'google-github-actions/auth@v2' with: credentials_json: '${{ secrets.GCP_SA_KEY_PRODUCTION }}' project_id: ${{ secrets.GCP_PROJECT_ID_PRODUCTION }} # Staging - name: Google Cloud Auth Staging if: github.event_name == 'push' && github.ref == 'refs/heads/staging' uses: 'google-github-actions/auth@v2' with: credentials_json: '${{ secrets.GCP_SA_KEY_STAGING }}' project_id: ${{ secrets.GCP_PROJECT_ID_STAGING }} - name: Set up Cloud SDK uses: 'google-github-actions/setup-gcloud@v2' - name: Deploy to Google App Engine run: | gcloud app deploy app.yaml --quiet --version service-default --no-promote --bucket gs://generic-app ```
I have an issue creating a SQL dump file. Here is the code snippet: ```` import java.io.*; public class MySQLBackup { public static void main(String[] args) { try { // MySQL dump command with output file path String[] command = {"C:\\xampp\\mysql\\bin\\mysqldump.exe", "-u", "root", "-p", "mta_db", "-r", "D:/backu1p.sql"}; // Create ProcessBuilder instance ProcessBuilder processBuilder = new ProcessBuilder(command); // Redirect error stream to output stream processBuilder.redirectErrorStream(true); // Start the process Process process = processBuilder.start(); // Read the output of the process InputStream inputStream = process.getInputStream(); InputStreamReader inputStreamReader = new InputStreamReader(inputStream); BufferedReader bufferedReader = new BufferedReader(inputStreamReader); String line; StringBuilder output = new StringBuilder(); while ((line = bufferedReader.readLine()) != null) { output.append(line).append("\n"); } // Wait for the process to complete int exitValue = process.waitFor(); // Print the output System.out.println(output); // Check if the process exited successfully if (exitValue == 0) { System.out.println("Backup completed successfully."); } else { // Print error stream if process failed System.out.println("Backup failed. Exit value: " + exitValue); InputStream errorStream = process.getErrorStream(); InputStreamReader errorStreamReader = new InputStreamReader(errorStream); BufferedReader errorBufferedReader = new BufferedReader(errorStreamReader); while ((line = errorBufferedReader.readLine()) != null) { System.out.println(line); } } } catch (IOException | InterruptedException e) { e.printStackTrace(); } } } ```` When I run the corresponding command in cmd, it runs perfectly, but when I run it from the NetBeans IDE, it stops at the ProcessBuilder line. When I investigated further, I found that the file is created but empty (zero bytes), and if I try to delete it, an error message states that mysqldump is using the file. I don't know what to do.
I tried changing the approach by using `Runtime.getRuntime().exec()`: `String backupFilePath = PRECLOUD_DIRECTORY + File.separator + "backup.sql";` ```` String dumpFilePath = "D:/your_dump_file.sql"; String command = "mysqldump -u root -p mta_db >" + backupFilePath; try { Process process = Runtime.getRuntime().exec(command);..... ```` but it is still not working.
``` if(!empty($meerdereantwoorden)) { // Multiple choice meerdere antwoorden controleren if($_POST['answer'] === 'GET' && $_POST['answer'] === 'POST') { $antwoordmeerdereantwoorden = 'Vraag 4 (meerdere antwoorden): correct'; } else { $antwoordmeerdereantwoorden = 'Vraag 4 (meerdere antwoorden): incorrect'; } } else{} ``` ``` <form name="vragen" method="post" action="vijfvragen.php"> <fieldset> <legend>Vragen</legend> <p><strong>Vraag 1: </strong>Hoe heet de serverside scriptingtaal die we nu aan het leren zijn? <input type = "text" size="18" name ="invulvraag" autocomplete="off" placeholder='Vul hier je antwoord in.'></p> <hr> <p><strong>Vraag 2: </strong>Om sommige stukken code opnieuw te gebruiken, zonder het steeds opnieuw te typen, welke tag is dan het handigst om te gebruiken? </p><select name="combobox"> <option> </option> <option>for loop</option> <option>function()</option> <option>array</option> </select> <hr> <p><strong>Vraag 3: </strong>Met welk HTML-element kan je &lt;fieldset&gt; een naam geven?</p> <p><input type="radio" value="<legend>" name="eenantwoord">&lt;legend&gt;<p> <p><input type="radio" value="<br>" name="eenantwoord">&lt;br&gt;<p> <p><input type="radio" value="echo" name="eenantwoord">echo<p> <p><input type="radio" value="<hr>" name="eenantwoord">&lt;hr&gt;<p> <hr> <p><b>Meerdere antwoorden mogelijk!</b></p> <p><strong>Vraag 4: </strong>Welke van deze onderstaande antwoorden worden gebruikt om informatie naar een server te sturen?</p> <label><input type="checkbox" value="GET" name="answer[]">GET</label><br> <label><input type="checkbox" value="POST" name="answer[]">POST</label><br> <label><input type="checkbox" value="multipass" name="answer[]">Multipass</label><br> <label><input type="checkbox" value="array" name="answer[]">Array</label><br> <br> <input type="submit" value="Vragen versturen" name='verstuurd'><br> <p><strong><font color="#ff0000">Verstuur je vragen, zodat het nagekeken kan worden!</strong></p> <font color='#000000'> 
</fieldset> </form> ``` I tried `in_array` and `array_intersect` as well, but to no avail. The issue is that when I select the two correct options, the code doesn't grade them: it should echo "Vraag 4 (meerdere antwoorden): correct", which it doesn't do, for either correct or incorrect answers. The same happens when you choose one incorrect and one correct option. I tried `array_intersect`, `in_array`, and a `foreach`, but it didn't work. I also tried ChatGPT and Gemini, still to no use haha. Sometimes they gave me the correct echo, but not the incorrect echo, or the other way around.
How can I grade a multiple-choice question with multiple correct answers?
I'm trying to implement both a PCA and t-sne algorithms to cluster the embeddings resulting of using ImageBind. For this, using scikit-learn v.1.3.2 library in python v.3.8.18, I'm trying to use the sklearn library but the error mentioned in the title keeps popping at: ```python from sklearn.decomposition import PCA from sklearn.manifold import TSNE ``` This is the traceback of the error: ```python ModuleNotFoundError Traceback (most recent call last) Cell In[1], line 4 1 import numpy as np 2 import pandas as pd ----> 4 from sklearn.decomposition import PCA 5 from sklearn.manifold import TSNE 7 import matplotlib.pyplot as plt File ~\AppData\Roaming\Python\Python38\site-packages\sklearn\__init__.py:83 69 # We are not importing the rest of scikit-learn during the build 70 # process, as it may not be compiled yet 71 else: (...) 77 # later is linked to the OpenMP runtime to make it possible to introspect 78 # it and importing it first would fail if the OpenMP dll cannot be found. 79 from . import ( 80 __check_build, # noqa: F401 81 _distributor_init, # noqa: F401 82 ) ---> 83 from .base import clone 84 from .utils._show_versions import show_versions 86 __all__ = [ 87 "calibration", 88 "cluster", (...) 129 "show_versions", 130 ] File ~\AppData\Roaming\Python\Python38\site-packages\sklearn\base.py:19 17 from ._config import config_context, get_config 18 from .exceptions import InconsistentVersionWarning ---> 19 from .utils import _IS_32BIT 20 from .utils._estimator_html_repr import estimator_html_repr 21 from .utils._metadata_requests import _MetadataRequester ModuleNotFoundError: No module named 'sklearn.utils' ``` I've checked that I'm using the latest version available of sklearn as well as that the packet .utils exists in my version of sklearn. I've tried to add the isolated library using both pip and conda in my environment, and installing the previous version (1.2.2). I've also checked that the path is included. Nothing seems to work. 
Does anyone have any idea what the problem might be? Edit: Hey, I'm new to both python and stackoverflow so please try to be nice:)
You probably need to add a bit more explanation: why are you waiting for an exception? I assume you expect a `FileNotFoundError`. It is good practice to only catch the exception you want to handle; otherwise, another exception could occur that isn't related to the file not being found. You could instead use `os.path.exists()` from the `os` module to check whether a path exists. Another problem causing your error could be that you are modifying your list at the same time as you are looping through it. When you remove an element from the list, the indexes shift, so your current element is now the next element. In the next iteration of the `for` loop after a removal, you will actually skip one element because the list was updated. I suggest you create a new list with the valid paths instead of removing from the one you are looping over.
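A minimal sketch of that suggestion (the paths here are just placeholders):

```python
import os

paths = ["/tmp", "/definitely/missing", "/usr"]

# Build a new list instead of removing entries while iterating:
# removing during iteration shifts the indexes and skips elements.
valid_paths = [p for p in paths if os.path.exists(p)]

print(valid_paths)
```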
The Jinja below does the job ```yaml result: | {% for i in my_list|map('split', '/') %} {% for j in range(2, i|length +1) %} {{ i[:j]|join('/') }} {% endfor %} {% endfor %} ```
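For illustration, here is the same logic in plain Python (the input list is made up): every prefix of at least two path components is emitted for each entry.

```python
my_list = ["a/b/c", "x/y"]

result = []
for parts in (p.split("/") for p in my_list):
    # emit every prefix of length 2..len(parts), joined back with "/"
    for j in range(2, len(parts) + 1):
        result.append("/".join(parts[:j]))

print(result)  # ['a/b', 'a/b/c', 'x/y']
```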
I've been trying to achieve the result shown in the example image below, with no luck so far. I have a table called Nodes that consists of a base doc, a current doc, and a target: BaseDocType, BaseDocID, DocType, DocID, TargetDocType, TargetDocID. I want to fetch all the related nodes for any specific node, if that's possible. If anyone can help, I would appreciate it a lot. It's a SQL Server database. ` With CTE1 (ID, BaseDocType, BaseDocID, DocType, DocID, TargetDocType, TargetDocID) As ( Select ID, BaseDocType, BaseDocID, DocType, DocID, TargetDocType, TargetDocID From Doc.Nodes Where DocType=8 and DocID = 2 Union All Select a.ID, a.BaseDocType, a.BaseDocID, a.DocType, a.DocID, a.TargetDocType, a.TargetDocID From Doc.Nodes a Inner Join CTE1 b ON (a.BaseDocType = a.BaseDocType and a.BaseDocID = b.BaseDocID and a.DocType != b.DocType and a.DocID != b.DocID) ) Select * From CTE1` This query is not working. [Example][1] [1]: https://i.stack.imgur.com/t3q8u.jpg
I made it work by forcing IPv4 for this domain on my server. 1. Run ``` ping generativelanguage.googleapis.com -4 ``` to get the IPv4 address of the domain name. 2. Edit `/etc/hosts` and add a line mapping the domain to the address you found in step 1, e.g.: ``` 172.217.2.202 generativelanguage.googleapis.com ```
I am working on a web API that consumes the Spotify API, and I can't get the call to the search endpoint to work. It's my first time writing an API that consumes another one, and also my first time using Stack Overflow, so I don't really know how to explain it, but I will paste the part of the code that I think is important. ``` @RestController @RequestMapping("/spotify/api") public class SearchController { private final AuthSpotifyClient authSpotifyClient; private final SearchClient searchClient; public SearchController(AuthSpotifyClient authSpotifyClient, SearchClient searchClient) { this.authSpotifyClient = authSpotifyClient; this.searchClient = searchClient; } @GetMapping("/search") public ResponseEntity<SearchResponse> search(@RequestBody String q, @RequestBody String type){ var request = new LoginRequest( "confidential", "confidential", "confidential" ); var token = authSpotifyClient.login(request).getAcessToken(); var response = searchClient.search("Bearer " + token, q, type); return ResponseEntity.ok(response); } } @FeignClient( name = "SearchClient", url = "https://api.spotify.com/v1/search?" ) public interface SearchClient { @GetMapping("?q={q}&type={type}") SearchResponse search(@RequestHeader("Authorization") String authorization, @PathVariable String q, @PathVariable String type); } ``` When I run it, the endpoint returns 400 Bad Request.
Calling search endpoint for Spotify API in Java
|java|spring-boot|spotify|webapi|
I need to receive a notification/event through Firebase in an Android app that is running in the background; after that, I need the app to come to the foreground. In addition, I need this notification/event to be able to carry information in XML or JSON format. Could someone help me? I tried the FCM push notifications and FCM In-App Messaging tutorials, but did not find the scenario I need.
|php|html|if-statement|multiple-choice|array-intersect|
null
```swift struct ContentView: View { @State var stepperValue: Int = 0 @State private var modalIsPresented = false var body: some View { List([1], id: \.self) { item in Button(action: printTapped) { HStack(content: { Stepper("\(stepperValue)", value: $stepperValue) }) } } .sheet(isPresented: $modalIsPresented) { modalIsPresented = false } content: { ContentView() } } func printTapped() { modalIsPresented = true } } ```
null
I'm trying to convert my column of string data type to timestamp in my Azure databricks. I'm using `10.4 LTS (includes Apache Spark 3.2.1, Scala 2.12)` So I wrote the following query Alter table convertToTimeStamp alter column FinalDate timestamp My sample data looks like |FinalDate | |---------------------| |2/18/2021 7:20:12 PM | |2/22/2021 5:25:13 PM | |4/23/2021 3:19:35 AM | But I'm getting ParseException saying that extraneous input 'timestamp' expecting {<EOF>, ';'}(line 1, pos 54) Can you please guide me to resolve the issue?
extraneous input 'timestamp' expecting {<EOF>, ';'}(line 1, pos 54)
|sql|
I have two models with `M2M field`. Because there wont be any update or deletion (just need to read data from db) I'm looking to have single db hit to retrieve all the required data. I used `prefetch_related` with `Prefetch` to be able to filter data and also have filtered objects in a cached list using `to_attr`. I tried to achieve the same result using `annotate` along with `Subquery`. but here I can't understand why the annotated filed contains only one value instead of a list of values. let's review the code I have: - some Routes may have more than one special point (Point instances with is_special=True). ### models.py ```python class Route(models.Model): indicator = models.CharField() class Point(models.Model): indicator = models.CharField() route = models.ManyToManyField(to=Route, related_name="points") is_special=models.BooleanField(default=False) ``` ### views.py ```python routes = Route.objects.filter(...).prefetch_related( Prefetch( "points", queryset=Point.objects.filter(is_special=True), to_attr="special_points", ) ) ``` this will work as expected but it will result in a separate database querying to fetch the points data. in the following code I tried to use Subquery instead to have a single database hit. ```python routes = Route.objects.filter(...).annotate( special_points=Subquery( Point.objects.filter(route=OuterRef("pk"), is_special=True).values("indicator") ) ``` the problem is in the second approach will have __either one or none__ special-point indicator when printing `route_instance.special_points` even if when using prefetch the printed result for the same instance of Route shows that there are two more special points. - I know in the first approach `route_instance.special_points` will contains the Point instances and not their indicators but that is the problem. - I checked the SQL code of the Subquery and there is no sign of limitation in the query as I did not used slicing in the python code as well. 
but again the result is limited to either one (if one or more exists) or none if there isn't any special point.
I feel like arguing about __pass-by-reference__ vs __pass-by-value__ is not really helpful. If you say that Java is __pass-by-whatever__, you are not providing a complete answer. Here is some additional information that will hopefully help you understand what actually happens in memory. Crash course on stack/heap before we get to the Java implementation: Values go on and off the stack in a nice orderly fashion, like a stack of plates at a cafeteria. Memory in the heap (also known as dynamic memory) is haphazard and disorganized. The JVM just finds space wherever it can, and frees it up as the variables that use it are no longer needed. Okay. First off, local primitives go on the stack. So this code: int x = 3; float y = 101.1f; boolean amIAwesome = true; results in this: ![primitives on the stack][1] When you declare and instantiate an object. The actual object goes on the heap. What goes on the stack? The address of the object on the heap. C++ programmers would call this a pointer, but some Java developers are against the word "pointer". Whatever. Just know that the address of the object goes on the stack. Like so: int problems = 99; String name = "Jay-Z"; ![a b*7ch aint one!][2] An array is an object, so it goes on the heap as well. And what about the objects in the array? They get their own heap space, and the address of each object goes inside the array. JButton[] marxBros = new JButton[3]; marxBros[0] = new JButton("Groucho"); marxBros[1] = new JButton("Zeppo"); marxBros[2] = new JButton("Harpo"); ![marx brothers][3] So, what gets passed in when you call a method? If you pass in an object, what you're actually passing in is the address of the object. Some might say the "value" of the address, and some say it's just a reference to the object. This is the genesis of the holy war between "reference" and "value" proponents. What you call it isn't as important as that you understand that what's getting passed in is the address to the object. 
private static void shout(String name){ System.out.println("There goes " + name + "!"); } public static void main(String[] args){ String hisName = "John J. Jingleheimerschmitz"; String myName = hisName; shout(myName); } One String gets created, space for it is allocated in the heap, and the address to the String is stored on the stack and given the identifier `hisName`. Since `myName` is assigned the same address as the first identifier, no new String is created and no new heap space is allocated, but a new identifier is created on the stack. Then we call `shout()`: a new stack frame is created and a new identifier, `name`, is created and assigned the address of the already-existing String. ![la da di da da da da][4] So, value, reference? You say "potato". [1]: http://i.stack.imgur.com/7nGKU.png [2]: http://i.stack.imgur.com/yTIYp.png [3]: http://i.stack.imgur.com/v2b33.png [4]: http://i.stack.imgur.com/q0prc.png
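As a small illustration of that last point (my own sketch, not part of the original answer): the called method receives a copy of the address, so it can mutate the shared object, but reassigning its local copy has no effect on the caller.

```java
public class PassDemo {

    static void reassign(StringBuilder sb) {
        sb.append(" world");             // mutates the object both variables point to
        sb = new StringBuilder("other"); // only rebinds the local copy of the address
        sb.append("!");                  // affects the new object, invisible to the caller
    }

    public static void main(String[] args) {
        StringBuilder s = new StringBuilder("hello");
        reassign(s);
        System.out.println(s);           // prints: hello world
    }
}
```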
The workaround I found was to do it in **two steps**:

1. Add an empty column to my table using this syntax:

        ALTER TABLE `total_bike` ADD COLUMN Duration INT64

2. Update that new column with the calculation:

        UPDATE `testing-pancho-coursera.Coursera_Capstone_Bike_Model.total_bike`
        SET Duration = TIMESTAMP_DIFF(ended_at, started_at, MINUTE)
        WHERE TRUE;
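If recreating the table is acceptable in your project, the two steps could also be collapsed into a single statement. This is only a sketch, assuming the same table and column names as above:

    CREATE OR REPLACE TABLE `testing-pancho-coursera.Coursera_Capstone_Bike_Model.total_bike` AS
    SELECT
      *,
      TIMESTAMP_DIFF(ended_at, started_at, MINUTE) AS Duration
    FROM `testing-pancho-coursera.Coursera_Capstone_Bike_Model.total_bike`;

The trade-off is that this rewrites the whole table instead of updating it in place.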
*(Might not be relevant, but this worked for me, not sure why.)*

Try opening this single project (/ folder) in a **new workspace** -- with no other projects / folders, just this **single** one. Then run the code.

- My situation is a bit different -- once I ran the code, the debugger jumped directly into the compiled ts file (yes, a ts file, not js, not sure why).
- (I am using `tsx` to run the ts code.)
- My guess is that VS Code has some config that tries to search from **the default root project dir** -- but it cannot correctly locate it if you have multiple projects in your workspace.
- *(Everything felt fine for me before, until now.)*
|asp.net-core|console-application|sharepoint-online|office365api|azure-app-registration|
I just encountered the same issue. In case somebody needs to solve it using plain JavaScript, you can use the code below. It attaches the handler precisely to the anchor tag, avoiding any disruption to the default collapse events of the Bootstrap accordion.

As stated in the official Bootstrap documentation [here][1], to render an accordion item as expanded, we need to add the `.show` class on the `.accordion-collapse` element, drop the `.collapsed` class from the `.accordion-button` element, and set its `aria-expanded` attribute to `true`.

`HTML`
```
<div class="accordion" id="accordionExample">
  <div class="accordion-item">
    <h2 class="accordion-header">
      <button class="accordion-button" type="button" data-bs-toggle="collapse" data-bs-target="#collapseOne" aria-expanded="true" aria-controls="collapseOne">
        <!-- <a href="javascript:void(0);" onclick="handleAccordionClick('https://www.google.com/')">GOOGLE</a> -->
        <a href="https://www.google.com/" target="_blank">GOOGLE</a>
      </button>
    </h2>
    <div id="collapseOne" class="accordion-collapse collapse show" data-bs-parent="#accordionExample" style="">
      <div class="accordion-body">
        <strong>This is the item's accordion body.</strong>
      </div>
    </div>
  </div>
</div>
```

`JavaScript`
```
function handleAccordionClick(url) {
  // Navigate to the specified URL
  window.open(url, "_blank");
}

var accordionButtons = document.querySelectorAll(".accordion-button");

for (var i = 0; i < accordionButtons.length; i++) {
  var anchorTag = accordionButtons[i].querySelector('a');

  anchorTag.addEventListener('click', function(event) {
    // Keep the click on the anchor from also toggling the accordion
    event.stopPropagation();

    var currentAccordionButton = this.parentElement;
    var ariaControls = currentAccordionButton.getAttribute("aria-controls");

    // var headerAccordion = currentAccordionButton.parentElement;
    // var containerAccordion = headerAccordion.parentElement;
    // var accordionDiv = containerAccordion.querySelector("#" + ariaControls);
    var accordionDiv = document.querySelector("#" + ariaControls);

    accordionDiv.classList.add("show");
    currentAccordionButton.setAttribute("aria-expanded", "true");
    currentAccordionButton.classList.remove("collapsed");
  });
}
```

[1]: https://getbootstrap.com/docs/5.3/components/accordion/
For anyone still looking for an answer (I know I did), I opted for a different solution: a generic compressor that can be used anywhere you need a `byte[]` compressed. You are doing a little more work because you are reading and writing from a stream twice (optimizing that is left as an exercise for the reader), but compressing generically is a really elegant solution altogether.

Here's one for GZip, together with the interface it implements:

```csharp
using System.IO;
using System.IO.Compression;
using System.Threading;
using System.Threading.Tasks;

public interface ICompressor
{
    Task<byte[]> CompressAsync(byte[] data, CancellationToken cancellationToken = default);
    Task<byte[]> DecompressAsync(byte[] compressedData, CancellationToken cancellationToken = default);
}

public class GZipCompressor : ICompressor
{
    public async Task<byte[]> CompressAsync(byte[] data, CancellationToken cancellationToken = default)
    {
        using var uncompressedStream = new MemoryStream(data);
        using var compressedStream = new MemoryStream();
        using (var compressorStream = new GZipStream(compressedStream, CompressionMode.Compress))
        {
            await uncompressedStream.CopyToAsync(compressorStream, cancellationToken);
        }
        return compressedStream.ToArray();
    }

    public async Task<byte[]> DecompressAsync(byte[] compressedData, CancellationToken cancellationToken = default)
    {
        using var compressedStream = new MemoryStream(compressedData);
        using var decompressorStream = new GZipStream(compressedStream, CompressionMode.Decompress);
        using var decompressedStream = new MemoryStream();
        await decompressorStream.CopyToAsync(decompressedStream, cancellationToken);
        return decompressedStream.ToArray();
    }
}
```

And here's one for Deflate:

```csharp
public class DeflateCompressor : ICompressor
{
    public async Task<byte[]> CompressAsync(byte[] data, CancellationToken cancellationToken = default)
    {
        using var uncompressedStream = new MemoryStream(data);
        using var compressedStream = new MemoryStream();
        using (var compressorStream = new DeflateStream(compressedStream, CompressionMode.Compress))
        {
            await uncompressedStream.CopyToAsync(compressorStream, cancellationToken);
        }
        return compressedStream.ToArray();
    }

    public async Task<byte[]> DecompressAsync(byte[] compressedData, CancellationToken cancellationToken = default)
    {
        using var compressedStream = new MemoryStream(compressedData);
        using var decompressorStream = new DeflateStream(compressedStream, CompressionMode.Decompress);
        using var decompressedStream = new MemoryStream();
        await decompressorStream.CopyToAsync(decompressedStream, cancellationToken);
        return decompressedStream.ToArray();
    }
}
```

Now you just call this before sending your data to AWS and you're golden.
Here's a brute-force hard-coding method…

The Chinese characters are stored as Unicode. You can convert Integers to Unicode characters with the `ChrW` function, and convert Unicode characters to their Integer value with `AscW`.

And so, running a loop to convert each character one-by-one into Integers, and aggregating them into a function that reverses that, I get this:

    Function ChineseWarning() As String
        ChineseWarning = ChrW(-28647) & ChrW(26159) & ChrW(22806) & ChrW(-28440) & _
            ChrW(23492) & ChrW(20358) & ChrW(30340) & ChrW(-28427) & _
            ChrW(20214) & ChrW(-244) & ChrW(-24866) & ChrW(25802) & _
            ChrW(-28637) & ChrW(32080) & ChrW(25110) & ChrW(-27253) & _
            ChrW(21855) & ChrW(-27068) & ChrW(20214) & ChrW(21069) & _
            ChrW(-30005) & ChrW(20572) & ChrW(12289) & ChrW(30475) & _
            ChrW(12289) & ChrW(-32643) & ChrW(12290)
    End Function

Then you can use that in your `Replace`:

    .HTMLBody = Replace(.HTMLBody, ChineseWarning(), "")
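If you would rather generate the `ChrW(...)` expression than hard-code it by hand, the per-character loop described above can be sketched outside VBA too. Here is a hypothetical Python helper (not part of the original macro) that emits the expression for any string, reproducing VBA's signed 16-bit `Integer` behaviour: `AscW` returns negative values for code points at or above `&H8000`, which is why the numbers above are negative. For example, `郵` (U+90F5) comes out as `ChrW(-28427)`, matching the hard-coded value.

```python
def to_vba_chrw(text: str) -> str:
    """Build a VBA expression like ChrW(-28647) & ChrW(26159) & ... for `text`."""
    parts = []
    for ch in text:
        code = ord(ch)
        # VBA's AscW returns a signed 16-bit Integer, so code points at or
        # above &H8000 wrap around to negative numbers.
        if code >= 0x8000:
            code -= 0x10000
        parts.append(f"ChrW({code})")
    return " & ".join(parts)

print(to_vba_chrw("郵件"))  # ChrW(-28427) & ChrW(20214)
```

Paste the output into a VBA `Function` like the one above and the macro file stays pure ASCII.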
You are trying to find `class="time"` with `driver.find_element(By.CLASS_NAME, "time")`, but the target element's attribute is `class="set"`. `9 months ago` is the *text* of the element, which you can get with the `.text` property.

Try

```python
activity = driver.find_element(By.CLASS_NAME, "set")
print(activity.text)
```

or

```python
activity = driver.find_element(By.XPATH, "//time")
print(activity.text)
```
When I try to knit an R Markdown document containing a flextable to PDF, it just stalls. There are no issues knitting to HTML or with other types of tables like kable. Minimal reprex below. When rendering, it just freezes on 'output file: example.knit.md' and never finishes.

    ---
    title: "Untitled"
    output: pdf_document
    date: "2024-03-08"
    ---

    ```{r}
    flextable::flextable(cars)
    ```

I installed tinytex and can successfully knit PDFs that do not have any flextables in them. After reading [getting-flextable-to-knit-to-pdf](https://stackoverflow.com/questions/64935647/getting-flextable-to-knit-to-pdf) I discovered this was a known issue with earlier versions of flextable. However, following the solution there I added `latex_engine: xelatex`, but still no luck.

I'm running flextable 0.8.3, pandoc 2.19.2.

When rendering:

    processing file: flextable_example.Rmd
    |...................................                 |  50%
      ordinary text without R code
    |....................................................| 100%
    label: unnamed-chunk-1
    "C:/Program Files/RStudio/resources/app/bin/quarto/bin/tools/pandoc" +RTS -K512m -RTS flextable_example.knit.md --to latex --from markdown+autolink_bare_uris+tex_math_single_backslash --output flextable_example.tex --lua-filter "C:\Users\a0778291\AppData\Local\R\win-library\4.2\rmarkdown\rmarkdown\lua\pagebreak.lua" --lua-filter "C:\Users\a0778291\AppData\Local\R\win-library\4.2\rmarkdown\rmarkdown\lua\latex-div.lua" --embed-resources --standalone --highlight-style tango --pdf-engine pdflatex --variable graphics --variable "geometry:margin=1in" --include-in-header "C:\Users\a0778291\AppData\Local\Temp\RtmpoJlm1X\rmarkdown-str7d047674de6.html"
    output file: flextable_example.knit.md

**freezes here**

Session info below.
R version 4.2.2 (2022-10-31 ucrt) Platform: x86_64-w64-mingw32/x64 (64-bit) Running under: Windows 10 x64 (build 22621) attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] **flextable_0.8.3** loaded via a namespace (and not attached): [1] Rcpp_1.0.9 lattice_0.20-45 png_0.1-8 digest_0.6.31 R6_2.5.1 grid_4.2.2 jsonlite_1.8.4 evaluate_0.19 zip_2.2.2 cli_3.6.1 [11] rlang_1.1.1 gdtools_0.2.4 uuid_1.1-0 data.table_1.14.6 rstudioapi_0.14 xml2_1.3.3 Matrix_1.5-1 reticulate_1.26.9000 rmarkdown_2.19 tools_4.2.2 [21] officer_0.5.0 xfun_0.36 fastmap_1.1.0 compiler_4.2.2 systemfonts_1.0.4 base64enc_0.1-3 htmltools_0.5.4 knitr_1.41
Django using Subquery in annotate: How to fetch all rows that match the condition of filtering
|django|django-orm|
I am facing a GitHub issue when cloning a repo. I am using a Mac and getting this error:

    'C:\Windows\System32\OpenSSH\ssh.exe': C:\Windows\System32\OpenSSH\ssh.exe: command not found
    fatal: Could not read from remote repository.

    Please make sure you have the correct access rights
    and the repository exists.

Please help me. I am curious: if I am using a Mac, why is it showing a Windows path?
Facing git repo clone issue with correct access rights