I'm trying to make my own Brick Breaker game using the SDL library, and I'm having some issues with collision resolution and ball bouncing when it comes to bouncing off the paddle. I'm calculating the collision angle based on the collision point between the ball and the paddle, and computing the ball's velocity direction from that angle. The issue I'm experiencing is that after the collision with the paddle, the ball doesn't change its Y direction; instead it just keeps going down and hitting the paddle. It continues to do that until it eventually "slides" off the paddle. I don't think my math is wrong here (although it could easily be), but I simply don't know what the issue is or how to fix it. This is the code responsible for collision detection and for calculating the angle and direction.

```
void Ball::CollisionWithPaddle(Paddle*& paddle)
{
    if (CollisionManager::GetInstance()->Collision(GetHitbox(), paddle->GetHitbox()))
    {
        double collisionAngle = CalculateCollisionAngle(paddle->GetHitbox());
        Vector2Df newVelocityDirection = CalculateNewVelocityDirection(collisionAngle);
        velocity = newVelocityDirection * velocity.Magnitude();
        AdjustBallPosition(paddle);
    }
}

double Ball::CalculateCollisionAngle(const SDL_Rect& paddle)
{
    double collisionPointX = transform->X + width / static_cast<double>(2);
    double collisionPointY = transform->Y + height / static_cast<double>(2);

    double relativeX = collisionPointX - (paddle.x + paddle.w / static_cast<double>(2));
    double relativeY = collisionPointY - (paddle.y + paddle.h / static_cast<double>(2));

    double collisionAngle = atan2(relativeY, relativeX);
    return collisionAngle;
}

Vector2Df Ball::CalculateNewVelocityDirection(double collisionAngle)
{
    // Calculate the direction of the new velocity based on the collision angle
    double newVelocityX = std::cos(collisionAngle);
    double newVelocityY = -std::sin(collisionAngle);

    Vector2Df newVelocityDirection(newVelocityX, newVelocityY);
    return newVelocityDirection.Normalize();
}

void Ball::AdjustBallPosition(Paddle* paddle)
{
    transform->Y = paddle->GetHitbox().y - height;
}
```

This is the code where I'm simply updating the position of the ball with the current velocity.

```
void Ball::Update()
{
    transform->X += velocity.X * Engine::GetInstance()->GetDeltaTime();
    transform->Y += velocity.Y * Engine::GetInstance()->GetDeltaTime();

    rec.x = transform->X;
    rec.y = transform->Y;
}
```
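For what it's worth, the angle math in the snippet can be checked in isolation outside SDL. A minimal Python sketch (illustrative only, not taken from the game code) suggests the `-sin` flip may be the culprit: the negation assumes a math-convention Y-up axis, but SDL's Y axis grows downward, so a ball whose centre is above the paddle's centre ends up with a *positive* (downward) Y velocity:

```python
import math

def new_velocity_direction(ball_cx, ball_cy, paddle_cx, paddle_cy, negate_sin=True):
    """Reproduces CalculateCollisionAngle + CalculateNewVelocityDirection,
    with a flag to toggle the -sin negation for comparison."""
    angle = math.atan2(ball_cy - paddle_cy, ball_cx - paddle_cx)
    vy = -math.sin(angle) if negate_sin else math.sin(angle)
    return math.cos(angle), vy

# Ball centre 10 px above the paddle centre (smaller Y = higher up in SDL).
vx, vy = new_velocity_direction(100, 90, 100, 100, negate_sin=True)
print(vy)   # positive => still moving DOWN in SDL coordinates

vx, vy = new_velocity_direction(100, 90, 100, 100, negate_sin=False)
print(vy)   # negative => moving up, away from the paddle
```

If this matches what happens in the game, dropping the negation (or simply forcing `velocity.Y` negative after a paddle hit) would keep the ball from re-colliding every frame.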
Hello Stack Overflow community, I am encountering a peculiar issue with my PyTorch model where the presence of an initialized but unused feed-forward network (FFN) affects the model's accuracy. Specifically, when the FFN is initialized in my CRS_A class but not used in the forward pass, my model's accuracy is higher compared to when I completely remove (or comment out) the FFN initialization. The FFN is defined as follows in my model's constructor:

```
class CRS_A(nn.Module):
    def __init__(self, modal_x, modal_y, hid_dim=128, d_ff=512, dropout_rate=0.1):
        super(CRS_A, self).__init__()
        self.cross_attention = CrossAttention(modal_y, modal_x, hid_dim)
        self.ffn = nn.Sequential(
            nn.Conv1d(modal_x, d_ff, kernel_size=1),
            nn.GELU(),
            nn.Dropout(dropout_rate),
            nn.Conv1d(d_ff, 128, kernel_size=1),
            nn.Dropout(dropout_rate),
        )
        self.norm = nn.LayerNorm(modal_x)
        self.linear1 = nn.Conv1d(1024, 512, kernel_size=1)
        self.linear2 = nn.Conv1d(512, 300, kernel_size=1)
        self.dropout1 = nn.Dropout(0.1)
        self.dropout2 = nn.Dropout(0.1)

    def forward(self, x, y, adj):
        x = x + self.cross_attention(y, x, adj)        # torch.Size([5, 67, 1024])
        x = self.norm(x).permute(0, 2, 1)
        x = self.dropout1(F.gelu(self.linear1(x)))     # torch.Size([5, 512, 67])
        x_e = self.dropout2(F.gelu(self.linear2(x)))   # torch.Size([5, 300, 67])
        return x_e, x
```

As you can see, `self.ffn` is not used in the forward pass. Despite this, removing or commenting out the FFN's initialization leads to a noticeable drop in accuracy. Could this be due to some form of implicit regularization, or is there another explanation for this behavior? Has anyone encountered a similar situation, and how did you address it? Any insights or explanations would be greatly appreciated.
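One non-mysterious explanation worth ruling out before "implicit regularization": constructing layers such as `nn.Conv1d` draws numbers from the global RNG to initialize their weights, so an initialized-but-unused module shifts the random stream for every layer created after it (and for dropout masks and data shuffling later). The principle can be sketched with plain Python's `random` module standing in for PyTorch's RNG (so the sketch runs anywhere; the PyTorch analogue would involve `torch.manual_seed` and module construction order):

```python
import random

def build_model(include_unused_block):
    random.seed(0)  # same seed either way
    if include_unused_block:
        # Stands in for the "FFN" init: consumes draws from the RNG stream.
        _unused = [random.gauss(0, 1) for _ in range(4)]
    # Weights of the layers that ARE used in forward().
    used_weights = [random.gauss(0, 1) for _ in range(4)]
    return used_weights

with_ffn = build_model(True)
without_ffn = build_model(False)
print(with_ffn == without_ffn)  # False: the used layers start from different values
```

If this is the cause, the accuracy gap should shrink or vanish when results are averaged over several random seeds.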
I have completed implementing the watch function and have successfully implemented receiving push notifications as well. However, I want to synchronize the calendar after receiving the push notification, and that is where I'm failing. My application is a web server application, so I implemented the Google OAuth2 login and callback using servlets. [The document I referred to for implementing the servlet.](https://developers.google.com/api-client-library/java/google-api-java-client/oauth2#web_server_applications)

**I'll provide explanations along with the code.**

### Google OAuth2 Login

```
@WebServlet(urlPatterns = "/oauth2Login")
public class GoogleAuthenticationServlet extends AbstractAuthorizationCodeServlet {

    @Override
    protected AuthorizationCodeFlow initializeFlow() {
        return GoogleApiConfig.initializeFlow();
    }

    @Override
    protected String getRedirectUri(HttpServletRequest httpServletRequest) {
        return GoogleApiConfig.getRedirectUri(httpServletRequest);
    }

    @Override
    protected String getUserId(HttpServletRequest httpServletRequest) {
        return GoogleApiConfig.getClientId(httpServletRequest);
    }
}
```

### Callback

```
@WebServlet(urlPatterns = "/oauth2callback")
public class Oauth2CallbackServlet extends AbstractAuthorizationCodeCallbackServlet {

    @Override
    protected void onSuccess(HttpServletRequest req, HttpServletResponse resp, Credential credential) throws IOException {
        String userId = req.getSession().getId();
        resp.sendRedirect("/google/watch?userId=" + userId + "&accessToken=" + credential.getAccessToken());
    }

    ...

    @Override
    protected String getUserId(HttpServletRequest var1) {
        return GoogleApiConfig.getClientId(var1);
    }
}
```

### watch

```
@GetMapping("/google/watch")
public ResponseEntity<Channel> watchCalendar(@RequestParam String userId, @RequestParam String accessToken) {
    accessToken = "Bearer " + accessToken;
    HttpHeaders headers = new HttpHeaders();
    headers.add("Authorization", accessToken);

    Channel channel = adateService.executeWatchRequest(userId);

    return ResponseEntity.ok()
            .headers(headers)
            .body(channel);
}
```

### push notification

```
@PostMapping("/notifications")
public ResponseEntity<List<GoogleCalendarEventResponse>> printNotification(
        @RequestHeader(WebhookHeaders.RESOURCE_ID) String resourceId,
        @RequestHeader(WebhookHeaders.RESOURCE_URI) String resourceUri,
        @RequestHeader(WebhookHeaders.CHANNEL_ID) String channelId,
        @RequestHeader(WebhookHeaders.CHANNEL_EXPIRATION) String channelExpiration,
        @RequestHeader(WebhookHeaders.RESOURCE_STATE) String resourceState,
        @RequestHeader(WebhookHeaders.MESSAGE_NUMBER) String messageNumber,
        HttpServletRequest request) {

    log.info("Request for calendar sync, channelId=" + channelId
            + ", expiration=" + channelExpiration
            + ", messageNumber=" + messageNumber);

    String userId = request.getSession().getId();
    adateService.listEvents(userId);

    return ResponseEntity.status(HttpStatus.CREATED).build();
}
```

I checked the logs of the watch and the push notification, and both were successful.
### watch log

```
{
  "expiration": 1714380331000,
  "id": "<id>",
  "kind": "api#channel",
  "resourceId": "<resourceId>",
  "resourceUri": "https://www.googleapis.com/calendar/v3/calendars/primary/events?alt=json",
  "token": "tokenValue"
}
```

### push notification log

```
2024-03-30T08:06:08.102Z INFO 957 --- [nio-8080-exec-1] c.f.d.a.c.i.GoogleCalendarControllerImpl : Request for calendar sync, channelId=<calendar-id>, expiration=Mon, 29 Apr 2024 06:30:11 GMT, messageNumber=<message-number>
```

### events sync

```
@Transactional
public void listEvents(String sessionId) {
    try {
        Calendar calendar = googleApiConfig.calendarService(sessionId);
        Calendar.Events.List request = calendar.events().list("primary");
        String syncToken = getNextSyncToken();

        List<Event> events;
        if (syncToken == null) {
            events = request.execute().getItems();
        } else {
            request.setSyncToken(syncToken);
            events = request.execute().getItems();
            googleApiConfig.getSyncSettingsDataStore().set(SYNC_TOKEN_KEY, syncToken);
        }

        syncEvents(events);
    } catch (IOException e) {
        throw new AdateIOException(e);
    }
}

---

public Calendar calendarService(String sessionId) {
    Credential credential = getCredentials(sessionId);
    return new Calendar.Builder(HTTP_TRANSPORT, JSON_FACTORY, credential)
            .setApplicationName(APPLICATION_NAME)
            .build();
}

---

public DataStore<String> getSyncSettingsDataStore() {
    return syncSettingsDataStore;
}
```

I initialized `syncSettingsDataStore` using the `@PostConstruct` annotation:

```
syncSettingsDataStore = GoogleApiConfig.getDataStoreFactory().getDataStore("SyncSettings");
```

After reviewing the logs, it appears that an error occurred within the event sync code, and the following error message was identified.

### error log

```
403 Forbidden
GET https://www.googleapis.com/calendar/v3/calendars/primary/events
{
  "code" : 403,
  "errors" : [ {
    "domain" : "global",
    "message" : "Method doesn't allow unregistered callers (callers without established identity). Please use API Key or other form of API consumer identity to call this API.",
    "reason" : "forbidden"
  } ],
  "message" : "Method doesn't allow unregistered callers (callers without established identity). Please use API Key or other form of API consumer identity to call this API.",
  "status" : "PERMISSION_DENIED"
}
```

After checking the error message, I found the phrase **"Please use API Key or other form of API consumer identity,"** which led me to consider passing the accessToken as a parameter. However, I couldn't figure out how to pass the accessToken as a parameter for push notifications, so I couldn't implement it. I've read [the post][1], but I'm still having trouble understanding it. It's driving me crazy!

I'm still new to posting questions on Stack Overflow, so if there's any incorrect information or if you need additional details, please feel free to leave a comment. My service is a web application, so I referred to the following documents.

- https://developers.google.com/identity/protocols/oauth2/web-server
- https://developers.google.com/api-client-library/java/google-api-java-client/oauth2#web_server_applications
- https://developers.google.com/calendar/api/guides/sync

[1]: https://stackoverflow.com/questions/31932239/how-to-handle-google-calendar-api-push-notifications
```
let departmentsMap = {
    '1576996323453': 'QA',
    '1874996373493': 'Dev',
    '1374990372493': 'BA',
    '1874926373494': 'Tech Support',
}
```

Above is my sample departments map object. I am running an aggregation pipeline on an employees collection where each document has a department field whose value is the id of the corresponding department (a key in the above object). Now I want to project as below:

```
{
    $project: {
        department: // I want to get the department name here. The data is in a local variable.
    }
}
```

Immediate help will be appreciated.
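Since the map lives in application code rather than in a collection, one hedged option is to build a `$switch` expression from it on the client and drop that into `$project`. A sketch (in Python, pymongo-style pipeline dicts; the field name `department` and the map values come from the question, everything else is illustrative):

```python
departments_map = {
    '1576996323453': 'QA',
    '1874996373493': 'Dev',
    '1374990372493': 'BA',
    '1874926373494': 'Tech Support',
}

# Build one $switch branch per entry in the local map.
project_stage = {
    '$project': {
        'department': {
            '$switch': {
                'branches': [
                    {'case': {'$eq': ['$department', dept_id]}, 'then': name}
                    for dept_id, name in departments_map.items()
                ],
                'default': 'Unknown',
            }
        }
    }
}

branches = project_stage['$project']['department']['$switch']['branches']
print(len(branches))  # 4 — one branch per map entry
```

The resulting `project_stage` dict would then be placed in the aggregation pipeline passed to the driver; the same structure translates directly to the Node.js shell syntax in the question.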
How to get corresponding values of fields by passing a map object in the $project stage of a MongoDB aggregation
|mongodb|aggregation-framework|
So I followed a tutorial on [YouTube][1] by Kavsoft. This tutorial demonstrates a really popular feature that big social media apps use, such as Instagram, TikTok, and X. It's when a user scrolls between their posts, reels, and tagged posts; it lives on Instagram's user profile tab, TikTok has a similar one on its user profile tab, and X has it on its home tab. I can't seem to get it to work if I add a ScrollView. To keep it simple, this is what the ProfileScreen looks like:

    struct ProfileScreen: View {
        @State private var selectedTab: ProfileScreenTabOption? = .media

        var body: some View {
            ScrollView { // Comment out this ScrollView and it works
                VStack(alignment: .leading) {
                    Color.blue
                        .frame(height: 350)

                    customTabBar()

                    ScrollView(.horizontal) {
                        LazyHStack(spacing: 16) {
                            sampleView(.red, count: 3)
                                .id(ProfileScreenTabOption.media)
                                .containerRelativeFrame(.horizontal)

                            sampleView(.purple, count: 50)
                                .id(ProfileScreenTabOption.saved)
                                .containerRelativeFrame(.horizontal)
                        }
                        .scrollTargetLayout()
                    }
                    // Uncomment to play around with height
                    //.frame(height: selectedTab == .media ? 300 : 1000)
                    .scrollPosition(id: $selectedTab)
                    .scrollIndicators(.hidden)
                    .scrollTargetBehavior(.viewAligned)
                    .ignoresSafeArea()
                }
            }
        }
    }

The ProfileScreen uses an enum that only has 2 cases, media and saved:

    enum ProfileScreenTabOption: Identifiable, CaseIterable {
        case media
        case saved

        var id: Self { return self }

        var systemName: String {
            switch self {
            case .media: "photo.on.rectangle"
            case .saved: "bookmark"
            }
        }
    }

The ProfileScreen also uses 2 ViewBuilders:

    extension ProfileScreen {
        private func customTabBar() -> some View {
            HStack(spacing: 0) {
                ForEach(ProfileScreenTabOption.allCases) { tab in
                    Image(systemName: tab.systemName)
                        .frame(maxWidth: .infinity)
                        .frame(height: 29)
                        .onTapGesture {
                            withAnimation(.snappy(duration: 0.1)) {
                                selectedTab = tab
                            }
                        }
                }
            }
        }

        private func sampleView(_ color: Color, count: Int) -> some View {
            ScrollView(.vertical) {
                LazyVGrid(columns: Array(repeating: GridItem(.flexible(), spacing: 1), count: 3), spacing: 1) {
                    ForEach(1...count, id: \.self) { x in
                        RoundedRectangle(cornerRadius: 0)
                            .fill(color.gradient)
                            .frame(height: 150)
                            .overlay {
                                Text("\(x)")
                            }
                    }
                }
            }
            .scrollDisabled(true) // comment this together with the ProfileScreen ScrollView
        }
    }

I tried to get this to work for a couple of hours but had no luck. I know now that it has to do with the nested ScrollView that contains the sampleViews. I also tried changing the LazyHStack inside the nested ScrollView to an HStack, but this makes the scroll view's height match the tallest sampleView; when scrolling you will see unwanted white space below the shortest sampleView. I also tried changing the height of the nested ScrollView, but it's an ugly transition. Is there a way to get this to work, or a package that does this in SwiftUI?

[1]: https://www.youtube.com/watch?v=UQ8ZQIhi8ow
You can use a [runsettings][1] file to pass in a value. Using that value, you can read a JSON file and get further configuration. That configuration (say, chrome / windows10) can then be passed to the browser instance at run time to decide which configuration to run.

Another idea: say your pipeline runs twice a day; based on the day/time, pass a different browser. This way you don't need to run all tests in all browsers, and you can save time and cost.

[1]: https://playwright.dev/dotnet/docs/test-runners#using-the-runsettings-file
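The "pick a configuration from JSON based on day/time" idea above can be sketched like this (in Python rather than the .NET test project, and with an invented JSON shape, purely to illustrate the selection logic):

```python
import json
from datetime import datetime

# Hypothetical configuration file contents: one browser/OS set per time slot.
CONFIG_JSON = """
{
  "morning": {"browser": "chromium", "os": "windows10"},
  "evening": {"browser": "firefox",  "os": "windows10"}
}
"""

def pick_config(now, config_text):
    # Run the chromium set before noon and the firefox set after,
    # so the two daily pipeline runs together cover both browsers.
    configs = json.loads(config_text)
    slot = "morning" if now.hour < 12 else "evening"
    return configs[slot]

print(pick_config(datetime(2024, 3, 30, 9, 0), CONFIG_JSON)["browser"])   # chromium
print(pick_config(datetime(2024, 3, 30, 18, 0), CONFIG_JSON)["browser"])  # firefox
```

In the .NET setup, the chosen `browser`/`os` pair would be what gets forwarded to the Playwright browser instance at run time.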
Windows 11. I would like to filter double values.

[enter image description here][1]
[enter image description here][2]

[1]: https://i.stack.imgur.com/ZckkI.png
[2]: https://i.stack.imgur.com/ICCrr.png

1. testo = text
2. numerico = numeric
3. valuta = currency

I can filter strings:

    Dim srchStr As String = Me.TextBox1.Text
    Dim strFilter As String = "MyCol1 LIKE '*" & srchStr.Replace("'", "''") & "*'"
    dv.RowFilter = strFilter

I can filter integers:

    Dim srchStr As String = Me.TextBox1.Text
    Dim id As Integer

    If Integer.TryParse(srchStr, id) Then
        dv.RowFilter = "code = " & id
    Else
        MessageBox.Show("Error: ........")
    End If

    Dim strFilter As String = "code = " & id
    dv.RowFilter = strFilter

but I cannot filter a double value. I actually use this code to filter strings in my DataGridView:

    Private Sub MyTabDataGridView_DoubleClick(ByVal sender As Object, ByVal e As System.EventArgs) Handles MyTabDataGridView.DoubleClick
        Try
            'My row
            Dim row As Integer = MyTabDataGridView.CurrentRow.Index
            'My column
            Dim column As Integer = MyTabDataGridView.CurrentCell.ColumnIndex
            'Value at my column and row
            Dim ColumnRow As String = MyTabDataGridView(column, row).FormattedValue.ToString
            'Header text
            Dim HeaderText As String = MyTabDataGridView.Columns(column).HeaderText

            'I exclude the errors
            If HeaderText = "id" Or HeaderText = "MyCol3" Or HeaderText = "MyCol4" Or HeaderText = "MyCol5" Then
                Exit Sub
            End If

            'Ready to filter
            Dim strFilter As String = HeaderText & " Like '*" & ColumnRow.Replace("'", "''") & "*'"
            dv.RowFilter = strFilter
        Catch ex As Exception
        End Try
    End Sub

Any suggestion will be highly appreciated.
`List.containsAll()` and `Truth.assertThat(list).contains(originalList)` behave differently.

Working as expected:

```
assertThat(getItemsByChannelId.containsAll(entitiesChannelAPage2)).isTrue()
```

Not working as expected:

```
assertThat(getItemsByChannelId).contains(entitiesChannelAPage2)
```

I am trying to verify that the list of data classes returned from a query on the local DB is the same set of items that was previously inserted, after performing some deletions. Is there something I am missing here, or is the first approach the ideal solution for such validation? I was hoping to utilize Truth's API to the utmost, especially to get a meaningful error message rather than the obscure `expected to be true`.
I started working with the Moodle CMS not long ago. I am trying to create my own theme that will match the expected design, but I've hit a problem: I don't understand how the theme system works in this CMS, and the documentation is not very helpful. How can I create my own navigation template and have full control over the styles? This is also a general question, as I don't understand where the templates come from; some are from the parent theme. And what is rendering? Can I turn off Bootstrap and replace it with Tailwind?

### Basic theme architecture:

```
├── config.php
├── lang
│   ├── en
│   │   └── theme_dpu.php
├── lib.php
├── pix
│   ├── favicon.ico
│   └── screenshot.png
├── scss
│   └── post.scss
├── settings.php
├── templates
└── version.php
```
I recently started working with the Moodle CMS and am trying to create a theme that matches the expected design. However, I'm having trouble understanding how the theme system works in this CMS, as the documentation isn't very helpful. How can I create my own navigation template and have full control over the styles? This is also a general question, as I don't understand where the templates come from. Some are from the parent theme, and what is rendering? Can I disable Bootstrap and replace it with Tailwind? **Basic theme architecture** ``` ├── config.php ├── lang │ ├── en │ │ └── theme_dpu.php ├── lib.php ├── pix │ ├── favicon.ico │ └── screenshot.png ├── scss │ └── post.scss ├── settings.php ├── templates └── version.php ```
General questions about creating a custom theme in Moodle CMS
|php|content-management-system|moodle|moodle-theme|moodle-boost|
It appears that it's important to specify that you want the 7 items extracted by asking for `tokens=1-7`. Token 1 will be `%%G`, with each successive token using `%%H`, `%%I`, etc. To print one line per token, echo each token's variable with its own `echo` command, either stacked one per line or grouped parenthetically and chained with `&`:

    do (echo %%G) & (echo %%H) & (echo %%I) ... etc

In total, the following worked for me (run directly at the command line, hence the single `%` signs):

    for /f "tokens=1-7" %G in ("1 2 7 16 21 26 688") do (
        echo %G
        echo %H
        echo %I
        echo %J
        echo %K
        echo %L
        echo %M
    )
I'm currently developing a plugin for JetBrains Rider (version 2023.2.3). A key feature of this plugin involves creating and running a .NET executable file using System.Diagnostics.Process, which gives me access to the process ID (PID). My goal is to programmatically attach Rider's debugger to this process, similar to how it's done via the UI as described in the Rider documentation: https://www.jetbrains.com/help/rider/attach-to-process.html#attach-to-local

In a similar Visual Studio plugin, I achieve this functionality using the following C# code:

    foreach (Process item in _dte.Debugger.LocalProcesses)
    {
        if (item.ProcessID == newp.Id)
        {
            item.Attach();
            break;
        }
    }

However, I'm facing challenges replicating this in Rider. In my Rider plugin, I'm using IExecutableAction, which triggers an execute function. In this function, I have access to the IDataContext, from which I can obtain various components like `context.GetComponent<ISolutionBuilder>()`. Despite trying various components related to the debugger, I haven't found any that offer the functionality to attach to a process.

Is there a specific component or method in Rider's API that enables attaching the debugger to a process programmatically? Any insights or guidance would be greatly appreciated!
How to Programmatically Attach JetBrains Rider's Debugger to a Process in a Plugin
|.net|debugging|rider|jetbrains-rider|rider-plugin|
In the framework of a microservice architecture, I have a microservice that is coupled (domain coupling) to others, in the sense that it needs both to know about their base classes (C#) and to send requests (REST, HTTP/1) to them. In order to avoid implementing the standard DTO approach (thus avoiding manual code duplication) and writing the CRUD calls myself, I am really interested in the automation NSwag allows for. Practically, I want to automatically generate the whole C# client (base classes & CRUD operations) from the swagger.json exposed by the individual microservices this given microservice depends on.

- C# to JSON: I implement this with the NSwag CLI, to have the ability to generate the schemas on build. Due to type-name collisions occurring when auto-generating the OpenApiDocument (data models share common general data structures, and might also embed properties defined in other microservices' data models), I follow the standard approach of using fully qualified type names for base classes (see for example [here](https://stackoverflow.com/questions/56475384/swagger-different-classes-in-different-namespaces-with-same-name-dont-work)). This allows me to generate the following simplified OpenAPI document (only the schemas are shown). The schema contains 2 MetaInfo properties that have the exact same definition but live in different namespaces. I have read [`it is not good practice to expose namespaces`](https://github.com/RicoSuter/NSwag/issues/2728), but I accept this risk. Fine, the swagger.json file is generated as expected.
```
"components": {
  "schemas": {
    "TestOpenAPI.Models.Architecture": {
      "type": "object",
      "properties": {
        "MetaInfo": {
          "allOf": [
            { "$ref": "#/components/schemas/TestOpenAPI.Models.MetaInfo" }
          ],
          "nullable": true
        },
        "OtherMetaInfo": {
          "allOf": [
            { "$ref": "#/components/schemas/TestOpenAPI.Models.ConflictingMetaInfoNamespace.MetaInfo" }
          ],
          "nullable": true
        }
      },
      "additionalProperties": false
    },
    "TestOpenAPI.Models.ConflictingMetaInfoNamespace.MetaInfo": {
      "required": [ "ID" ],
      "type": "object",
      "properties": {
        "ID": { "type": "string", "format": "uuid" },
        "Url": { "type": "string", "nullable": true }
      },
      "additionalProperties": false
    },
    "TestOpenAPI.Models.MetaInfo": {
      "required": [ "ID" ],
      "type": "object",
      "properties": {
        "ID": { "type": "string", "format": "uuid" },
        "Url": { "type": "string", "nullable": true }
      },
      "additionalProperties": false
    }
  }
}
```

- JSON to C#: in a first approach, I programmatically implemented the following CSharpGenerator to include a TypeNameGenerator that "merges" both of them (I also accept the risk associated with this assumption).

```
NSwag.OpenApiDocument nswDocument = await NSwag.OpenApiDocument.FromJsonAsync(outputString);

var settings = new CSharpClientGeneratorSettings
{
    CSharpGeneratorSettings =
    {
        Namespace = NAMESPACE,
        TypeNameGenerator = new CustomTypeNameGenerator(),
        JsonLibrary = NJsonSchema.CodeGeneration.CSharp.CSharpJsonLibrary.SystemTextJson
    }
};

var generator = new CSharpClientGenerator(nswDocument, settings);
var code = generator.GenerateFile();
```

With the TypeNameGenerator doing the merging:

```
public class CustomTypeNameGenerator : ITypeNameGenerator
{
    public string Generate(JsonSchema schema, string typeNameHint, IEnumerable<string> reservedTypeNames)
    {
        return typeNameHint.Split('.', '+').Last();
    }
}
```

This works fine. But now I want to automate this at build time using the NSwag CLI.
Here is the command that works very well for all the other types I have to deal with in my microservices:

```
<Target Name="NSwag" AfterTargets="CreateSwaggerJson" Condition=" '$(Configuration)' == 'Debug' ">
  <PropertyGroup>
    <OpenApiDocument>./wwwroot/Architecture/json-schemas/swagger.json</OpenApiDocument>
    <NSwagConfiguration>NSwag/nswag.json</NSwagConfiguration>
    <GeneratedOutput>Client.g.cs</GeneratedOutput>
  </PropertyGroup>
  <Exec Command="$(NSwagExe_Net70) run $(NSwagConfiguration) /variables:OpenApiDocument=$(OpenApiDocument),GeneratedOutput=$(GeneratedOutput)" />
</Target>
```

Here is the associated nswag.json:

```
{
  "runtime": "Net70",
  "defaultVariables": null,
  "documentGenerator": {
    "fromDocument": {
      "json": "./swagger.json",
      "flattenInheritanceHierarchy": false
    }
  },
  "codeGenerators": {
    "openApiToCSharpClient": {
      "clientBaseClass": null,
      "generateClientClasses": true,
      "generateClientInterfaces": true,
      "clientBaseInterface": null,
      "injectHttpClient": true,
      "disposeHttpClient": false,
      "jsonLibrary": "SystemTextJson",
      "output": "./Client.cs",
      "namespace": "MyNamespace"
    }
  }
}
```

Could you please advise me on the options for injecting the TypeNameGenerator into this automated process? In particular, I would need guidance on how the [options of nswag.json](https://github.com/RicoSuter/NSwag/blob/313ea53f4f8a53c0e66b0f84f63f3224bfcebcac/src/NSwag.Sample.NET70Minimal/nswag.json) could handle custom type name generators.

In summary, what I tried:

- programmatically implementing the whole C# to JSON to C# process for standard classes (value and reference types) > works well
- implementing the C# to JSON process on build using `dotnet swagger tofile` > works well, but I am forced to use fully qualified names to handle namespace collisions
I'm very new to Xamarin, but not C#. I'm trying to draw concentric circles and various shapes to a StackLayout, where traditionally I would use a Canvas in WPF. I'm trying to avoid using SkiaSharp - and I wondered what control I should be using, if not a StackLayout? I can get it to draw shapes, but they clip one another and don't overlap as I want. See top left circles in image: [![enter image description here][1]][1] Any help would be really appreciated. [1]: https://i.stack.imgur.com/KDyya.jpg
Drawing shapes in Xamarin programmatically. Allow overlap and transparency
|c#|xamarin.forms|drawing|stacklayout|
Can anyone help me define the custom distribution below in the `flexsurvreg` function?

    custom_pdf = function(y, b0, b1, b2, b3, sigma, alpha) { # pdf of proposed model
      z = (y - b0 - b1*t2 - b2*t3 - b3*t4) / sigma
      ft = exp(z)
      num = alpha * ft * ((exp(-ft))^alpha) * ((1 - exp(-ft))^(alpha - 1))
      den = sigma * (((1 - exp(-ft))^alpha) + ((exp(-ft))^alpha))^2
      res = num / den
      return(res)
    }

    custom_cdf = function(y, b0, b1, b2, b3, sigma, alpha) { # cdf of proposed model
      z = (y - b0 - b1*t2 - b2*t3 - b3*t4) / sigma
      ft = exp(z)
      num = (1 - exp(-ft))^alpha
      den = (1 - exp(-ft))^alpha + (exp(-ft))^alpha
      res = num / den
      return(res)
    }

    alpha = 4.62
    sigma = 6.2
    b0 = 8.7
    b1 = -0.07
    b2 = 1.4
    b3 = 2.5

    flexsurvreg(Surv(data_log$log_time, data_log$SURVIVAL_STATUS) ~ 1,
                data = data_log, dist = custom_pdf, inits = c(1, 1, 1, 1, 1, 1))

When running the `flexsurvreg` call, I am getting the error:

    Error in parse.dist(dist) : "dist" should be a string for a built-in
    distribution, or a list for a custom distribution
With `group = datetime`: (I increased the `fps` just for the reprex.) ``` r library(gganimate) anim <- ggplot(clust, aes(x = longitude, y = latitude, size = mag, group = datetime)) + geom_point(show.legend = FALSE, alpha = 0.7, colour = '#562486') + theme_bw() + transition_time(datetime) + labs(title = "{format(frame_time, '%Y-%b-%d %H:%M:%S')}", x = "Longitude", y = "Latitude") + shadow_mark(past = TRUE, future = FALSE, alpha = 0.5) animate(anim, height = 4, width = 7, units = "in", fps = 5, end_pause = 0, res = 100) ``` ![](https://i.imgur.com/LngsZan.gif)<!-- --> <sup>Created on 2024-03-29 with [reprex v2.1.0](https://reprex.tidyverse.org)</sup>
That's likely coming from a profile file, either `Rprofile.site` or `.Rprofile`. See `?Startup` for information on where those files are found. Use `Rscript --vanilla -e "1"` to avoid running them.
I'm confused about this: is this Autopilot-only? I thought I was going to get this on GKE Standard too: https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#-managed_secondary_ranges_default

Can I enable this for new Standard GKE clusters as well?

> For Autopilot clusters running GKE 1.27 and later, GKE assigns Service
> IP addresses from a Google-managed range by default, 34.118.224.0/20,
> eliminating the need to specify your own range for Services. The
> following considerations apply...

The docs don't explicitly say Autopilot-only. Perhaps that's the case, but I'd like to confirm whether there is a way to configure Standard GKE for it as well.

Edit
====

Adding more context: the following Terraform creates a cluster with a service range in a `10.x.0.0/20` network. I can't see an option in the Terraform resource to use the managed service range.

    resource "google_container_cluster" "test01" {
      provider = google-beta
      name     = var.test01_name

      release_channel {
        channel = "STABLE"
      }

      private_cluster_config {
        enable_private_nodes   = true
        master_ipv4_cidr_block = var.test01_master_ipv4_cidr_block
      }

      remove_default_node_pool = true
      initial_node_count       = 1

      node_config {
        service_account = google_service_account.test.email
      }

      cluster_autoscaling {
        enabled = true

        resource_limits {
          resource_type = "memory"
          minimum       = 0
          maximum       = 1000
        }

        resource_limits {
          resource_type = "cpu"
          minimum       = 0
          maximum       = 100
        }

        auto_provisioning_defaults {
          oauth_scopes = [
            "https://www.googleapis.com/auth/cloud-platform"
          ]
          service_account = google_service_account.test.email

          shielded_instance_config {
            enable_integrity_monitoring = true
            enable_secure_boot          = false
          }
        }
      }

      master_auth {
        client_certificate_config {
          issue_client_certificate = false
        }
      }

      location   = var.default_region
      network    = google_compute_network.net.id
      subnetwork = google_compute_subnetwork.net.id

      workload_identity_config {
        workload_pool = "${data.google_project.project.project_id}.svc.id.goog"
      }

      addons_config {
        http_load_balancing {
          disabled = false
        }
        gcp_filestore_csi_driver_config {
          enabled = true
        }
        gce_persistent_disk_csi_driver_config {
          enabled = true
        }
      }

      cost_management_config {
        enabled = true
      }

      lifecycle {
        ignore_changes = [
          node_pool,
        ]
      }
    }
Use: `href='../../css/style.css'` — each `../` goes up one directory level from the current file's location before descending into the `css` folder.
I'm currently working on a Liquibase project where my Liquibase application needs to dynamically connect to multiple databases based on records stored in a connection table. Each record in this table contains JSON data with connection details such as connection name, username, and password. My goal is to fetch records from this table, connect to the corresponding database using the provided credentials, and generate initial SQL scripts for the tables in that database, i.e., apply the changes to each datasource listed in the connection table.

Here's the workflow I'm envisioning:

> Liquibase connects to the default database, a master DB that has a connection table with multiple datasource records.

> Liquibase reads the datasources from the connection table.

> Using the details from the connection table, Liquibase dynamically connects to the specified database.

> Once connected, it generates initial SQL scripts for the tables in the connected database.

> In the same way, it needs to apply new SQL changes to all the datasources in the connection table.

I've explored various approaches, but I'm struggling to implement this dynamically changing database connection and script generation in Liquibase. If anyone in the Liquibase community has experience with similar requirements or can provide guidance on how to achieve this, I'd greatly appreciate your insights! Please share your thoughts, suggestions, or code examples that can help me solve this challenge.
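One hedged way to realize the workflow described above is to treat Liquibase as a CLI and drive it from a small script that reads the connection table and invokes `liquibase` once per datasource. A sketch in Python; the JSON keys and table contents below are invented placeholders (only the `--url`/`--username`/`--password`/`--changelog-file` flags and the `update` command are real Liquibase CLI options):

```python
import json

# Pretend these rows were fetched from the master DB's connection table.
connection_rows = [
    '{"name": "tenant_a", "url": "jdbc:postgresql://db-a/app", "username": "a", "password": "pa"}',
    '{"name": "tenant_b", "url": "jdbc:postgresql://db-b/app", "username": "b", "password": "pb"}',
]

def build_update_commands(rows, changelog="changelog.xml"):
    """Build one `liquibase update` invocation per datasource record."""
    commands = []
    for row in rows:
        ds = json.loads(row)
        commands.append([
            "liquibase",
            f"--url={ds['url']}",
            f"--username={ds['username']}",
            f"--password={ds['password']}",
            f"--changelog-file={changelog}",
            "update",
        ])
    return commands

cmds = build_update_commands(connection_rows)
print(len(cmds))   # 2 — one run per record in the connection table
print(cmds[0][1])  # --url=jdbc:postgresql://db-a/app
```

Each command list could then be handed to `subprocess.run`; the same loop works for `generate-changelog` against a fresh datasource to produce the initial scripts.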
Liquibase as SaaS: configuring multiple database connections dynamically
|database|bigdata|liquibase|liquibase-sql|liquibase-cli|
null
It looks like you may be referring to a compile-time error. However, the `On Error` statement only handles a run-time error. So you should manually compile your code before trying to run it. This way you'll avoid a compile-time error when running it. First, make sure that you have the following statement at the very top of your module, before any procedures. This will force you to declare all variables. So, if you try to use an undeclared variable, you'll get a compile-time error. Option Explicit Then manually compile your code, and fix any errors that pop up. Once any errors are fixed and your workbook is saved, any errors that pop up will occur at run-time, which your `On Error` statement will catch. Visual Basic Editor (Alt+F11) >> Debug >> Compile VBAProject
I need to access a view from a different source database. I am using Azure Data Studio with a SQL Database project. First I exported the source database into a dacpac, and in the new project created a database reference pointing to the source dacpac. `CREATE VIEW [dbo].[v_activitypointer] AS SELECT * FROM [$(dvdbname)].[dbo].[ap_partitioned]; GO` It works with the above statement selecting all columns with \*, and the project builds and deploys successfully. But instead of all columns I need only a few; when I change the view to specify a column, it fails with a SQL71561 SqlComputedColumn build error: `CREATE VIEW [dbo].[v_activitypointer] AS SELECT [ucode] FROM [$(dvdbname)].[dbo].[ap_partitioned]; GO` stdout: c:\\dbt\\cicdtest\\v_activitypointer.sql(2,13,2,13): Build error SQL71561: SqlView: \[dbo\].\[v_activitypointer\] has an unresolved reference to object \[$(dvdbname)\].\[dbo\].\[ap_partitioned\].\[ucode\]. \[c:\\dbt\\cicdtest\\cicdtest.sqlproj\] stdout: c:\\dbt\\cicdtest\\v_activitypointer.sql(2,13,2,13): Build error SQL71561: SqlComputedColumn: \[dbo\].\[v_activitypointer\].\[ucode\] has an unresolved reference to object \[$(dvdbname\]).\[dbo\].\[ap_partitioned\].\[ucode\]. 
\[c:\\dbt\\cicdtest\\cicdtest.sqlproj\] stdout: 0 Warning(s) stdout: 2 Error(s) Here's the .sqlproj file: `<?xml version="1.0" encoding="utf-8"?> <Project DefaultTargets="Build"> <Sdk Name="Microsoft.Build.Sql" Version="0.1.12-preview" /> <PropertyGroup> <Name>cicdtest</Name> <ProjectGuid>{25E6C2C6-1C07-4516-BDC0-06E5AF0DCE07}</ProjectGuid> <DSP>Microsoft.Data.Tools.Schema.Sql.SqlServerlessDatabaseSchemaProvider</DSP> <ModelCollation>1033, CI</ModelCollation> <VerificationExtract>false</VerificationExtract> <VerifyExtendedTransactSQLObjectName>False</VerifyExtendedTransactSQLObjectName> </PropertyGroup> <ItemGroup> <SqlCmdVariable Include="dvdbname"> <Value>$(SqlCmdVar__1)</Value> <DefaultValue>dataverse_uunq6705</DefaultValue> </SqlCmdVariable> </ItemGroup> <ItemGroup> <ArtifactReference Include="..\dataverse_uunq6705.dacpac"> <SuppressMissingDependenciesErrors>False</SuppressMissingDependenciesErrors> <DatabaseVariableLiteralValue>dataverse_uunq6705</DatabaseVariableLiteralValue> </ArtifactReference> </ItemGroup> <Target Name="BeforeBuild"> <Delete Files="$(BaseIntermediateOutputPath)\project.assets.json" /> </Target> </Project>` I tried turning off VerificationExtract & VerifyExtendedTransactSQLObjectName, but it made no difference.
I am currently running Fail2Ban version 0.11.2 on my host machine to monitor and manage access to an Nginx service that is operating within a Docker container. The Nginx logs are bind-mounted from the container to a directory on the host machine, and Fail2Ban is configured to observe this bound directory for any malicious activity. In fail2ban.conf, dbfile option is enabled and dbpurgeage value is increased to 86400. I'm seeking some insights into an issue I've encountered with Fail2Ban on my server. Specifically, I am seeing inconsistencies in the 'Total banned' count reported by Fail2Ban for my `nginx-bad-request` jail. Despite being aware of numerous IP addresses that should have triggered the fail conditions, the 'Total banned' metric is capped at 10. When running the `fail2ban-client status` command for the `nginx-bad-request` jail, the output is as follows: ``` $ sudo fail2ban-client status nginx-bad-request Status for the jail: nginx-bad-request |- Filter | |- Currently failed: 0 | |- Total failed: 1 | `- File list: /path/to/log/nginx/access.log `- Actions |- Currently banned: 10 |- Total banned: 10 `- Banned IP list: [Redacted IP List] ``` Based on my logs and monitoring, I am confident that the actual number of IPs that should be banned exceeds this figure. However, the reported 'Total banned' does not reflect this higher number, instead showing a limit of 10, which corresponds to the number of currently banned IPs. This is happening to other jails as well. I'm curious to know if there is a configuration setting that I might be overlooking, or if there's a known limitation within Fail2Ban that could be causing this. Is there a way to ensure that the 'Total banned' count accurately reflects all IPs that have been banned over time, rather than just the current snapshot? Thank you in advance for your assistance and any advice you can provide!
Understanding Discrepancies in 'Total Banned' Count Reported by Fail2Ban
|fail2ban|
null
I'm using ```Make``` (```Automake```) to compile and execute unit tests. However, these tests need to read and write test data. If I just specify a path, the tests only work from a specific directory. This is a problem: first, the test executables may be executed by ```make``` from a directory different from the one at compile time, and secondly, they should be runnable even manually or in ```VPATH```-builds, which currently breaks ```make distcheck```. Even using ```srcdir``` via config.h isn't particularly useful, because it is of course evaluated at compile time rather than at runtime. <del>What would be nice is if the builddir would get passed at runtime instead.</del> What I think is necessary is a way to get the srcdir relative to the builddir into the executable. This needs to be determined/adjusted at runtime, for the reasons written earlier. While this wouldn't solve the problem of being executed outside the builddir, I don't think that is a particular problem, because who would do that? But would it be better to specify the paths via command arguments or via the environment? And is it "better" to specify individual files or a generic directory? I would consider ```PATH```-like search behaviour overkill for just a test, or would that be recommended? Or should I just give up and specify an absolute path? But I think that would break ```VPATH```-builds. So the question is: how would I best specify the path to a test file, in terms of portability, interoperability, maintainability and common sense?
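For the environment route, one way to wire it up (a sketch, with Python standing in for the compiled test) is to have the Makefile export the source directory to the test harness, e.g. `AM_TESTS_ENVIRONMENT = srcdir='$(srcdir)'; export srcdir;`, and then resolve it at runtime with a fallback for manual runs. The variable name `srcdir` and the fallback policy here are assumptions about your setup, not Automake requirements:

```python
import os

def test_data_path(*parts):
    """Locate a test data file relative to srcdir at runtime.

    Reads srcdir from the environment (as exported by the Makefile for
    `make check` and VPATH builds) and falls back to the current
    directory, so the test still works when run manually.
    """
    base = os.environ.get("srcdir", ".")
    return os.path.join(base, *parts)

path = test_data_path("testdata", "input.txt")
```

A C test executable would do the same thing with `getenv("srcdir")`, which keeps the path decision at runtime rather than baking it into config.h.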
Can I have the Google-managed services range on a standard GKE cluster created with Terraform (non-Autopilot)?
Your code might compile now, but it will not work as you expect. The `INT8_*` macros have totally different values than the `INT_*` ones. `INT_MIN`/`INT_MAX` are the min/max values for an `int` (and are defined in the `<limits.h>` header). It is typically **32 bit**, so the values that you used before were probably **-2147483648** and **2147483647** respectively (they could be larger for 64 bit integers, or smaller for 16 bit ones). On the other hand, `INT8_MIN`/`INT8_MAX` are the min/max values for an **8 bit** signed integer (AKA `int8_t`), which are **-128** and **127** respectively. BTW - they are also defined in a different header (`<stdint.h>`), which might explain why using it solved your compilation error. **The bottom line:** In order to get the behavior you had before, you should use `std::numeric_limits<int>::min()` and `std::numeric_limits<int>::max()`, from the [`<limits>` header][1]. Note that `INT_MIN` and similar constants are actually macros "inherited" from C. In C++ we prefer to use the `std::numeric_limits` template (mentioned above), which accepts the type as a template argument. This makes it less likely to make mistakes. You could even use `decltype(variable)` as the template argument. Finally - you also mentioned `INT16_MIN`/`INT16_MAX` in your title: these are the corresponding min/max values for a **16 bit** signed integer - i.e. **-32768** and **32767** respectively. The same principle applies to similar constants. Again, they have an equivalent in `std::numeric_limits`, which is recommended in C++. [1]:https://en.cppreference.com/w/cpp/types/numeric_limits
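All of the ranges above follow directly from two's-complement width — a quick numeric check (Python is used here purely for the arithmetic; the constants themselves live in `<stdint.h>` / `<limits.h>`):

```python
# An n-bit two's-complement signed integer spans [-2**(n-1), 2**(n-1) - 1].
def int_range(bits):
    return -2**(bits - 1), 2**(bits - 1) - 1

assert int_range(8)  == (-128, 127)                # INT8_MIN  / INT8_MAX
assert int_range(16) == (-32768, 32767)            # INT16_MIN / INT16_MAX
assert int_range(32) == (-2147483648, 2147483647)  # INT_MIN / INT_MAX for a 32-bit int
```

This is why switching from `INT_MIN`/`INT_MAX` to the `INT8_*` macros silently shrank the range by a factor of roughly 16 million.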
I am trying to access a webpage using `Selenium` for `C#` and I am struggling to make some elements accessible. Some elements seem to be readily available whilst other do not. An excerpt of the structure of the page is as follows: <header class="l-quotepage__header"> <div class="c-faceplate c-faceplate--bond is-positive /*debug*/" data-faceplate="" data-faceplate-symbol="1rPFR0010870956" data-ist="1rPFR0010870956" data-ist-init="{&#34;symbol&#34;:&#34;1rPFR0010870956&#34;,&#34;high&#34;:0,&#34;low&#34;:0,&#34;previousClose&#34;:114.45,&#34;totalVolume&#34;:0,&#34;tradeDate&#34;:&#34;2024-03-27 09:34:21&#34;,&#34;variation&#34;:0,&#34;last&#34;:114.45,&#34;exchangeCode&#34;:&#34;PAR&#34;,&#34;category&#34;:&#34;BND&#34;,&#34;decimals&#34;:2}" data-ist-variation-indicator=""> <input class="c-faceplate__accordion-toggle" id="faceplate-1492129610" type="checkbox"/> <div class="c-faceplate__body"> <div class="c-faceplate__company">[...]</div> <div class="c-faceplate__data" data-faceplate-target=""> <ul class="c-list-info__list c-list-info__list--split-half"> <li class="c-list-info__item"> <p class="c-list-info__heading u-color-neutral">open</p> <p class="c-list-info__value u-color-big-stone"> <span class="c-instrument c-instrument--open" data-ist-open="">0.00</span> </p> </li> <li class="c-list-info__item"> <p class="c-list-info__heading u-color-neutral">previous close</p> <p class="c-list-info__value u-color-big-stone"> <span class="c-instrument c-instrument--previousclose" data-ist-previousclose="">114.45</span> </p> </li> <li class="c-list-info__item"> <p class="c-list-info__heading u-color-neutral">high</p> <p class="c-list-info__value u-color-big-stone"> <span class="c-instrument c-instrument--high" data-ist-high="">0.00</span> </p> </li> <li class="c-list-info__item"> <p class="c-list-info__heading u-color-neutral">low</p> <p class="c-list-info__value u-color-big-stone"> <span class="c-instrument c-instrument--low" data-ist-low="">0.00</span> </p> </li> </ul> <ul 
class="c-list-info__list c-list-info__list--split-half"> <li class="c-list-info__item"> <p class="c-list-info__heading u-color-neutral">volume</p> <p class="c-list-info__value u-color-big-stone"> <span class="c-instrument c-instrument--totalvolume" data-ist-totalvolume="">0</span> </p> </li> <li class="c-list-info__item"> <p class="c-list-info__heading u-color-neutral">last trading time</p> <p class="c-list-info__value u-color-big-stone"> <span class="c-instrument c-instrument--tradedate" data-ist-tradedate="">27.03.24 / 09:34:21</span> </p> </li> </ul> <ul class="c-list-info__list c-list-info__list--split-half@sm-max c-list-info__list--nowrap@md-min"> <li class="c-list-info__item c-list-info__item--has-picto c-list-info__item--fixed-width"> <p class="c-list-info__heading u-color-neutral">low threshold</p> <p class="c-list-info__value u-color-big-stone">114.94</p> </li> <li class="c-list-info__item c-list-info__item--has-picto c-list-info__item--fixed-width"> <p class="c-list-info__heading u-color-neutral">high threshold</p> <p class="c-list-info__value u-color-big-stone">116.94</p> </li> </ul> </div> </div> <label class="c-faceplate__accordion-button" for="faceplate-1492129610"/> </div> </header> Whilst I can access anything located within <div class="c-faceplate__company">[...]</div> I cannot access any of the data located within <div class="c-faceplate__data" data-faceplate-target=""> I can locate the parent `IWebElement element` (here in `C#`) defined for the `XPath` //*div[@class="c-faceplate__data"] but, even after creating a `WebBrowserWait wait` to wait for it to be in `element.Displayed` or `element.Enabled` mode, makes it time out unsuccessfully (for long time spans such as up to 60s). 
The data under this section are displayed on the UI side by toggling the button <input class="c-faceplate__accordion-toggle" id="faceplate-1492129610" type="checkbox"/> that comes with its label <label class="c-faceplate__accordion-button" for="faceplate-1492129610"/> although on a full-size screen the data under <div class="c-faceplate__data" data-faceplate-target=""> are visible by default; only on a reduced-size screen do they disappear, and they can only be made visible again by pressing the button. Other than this, the cross reference between the input 'id' and the label 'for', here "faceplate-1492129610", is random. The "faceplate-#" changes at each page call or refresh. I am wondering to what extent this button controls access to the div section too, how I can check whether that is the case, and how I can get the Selenium browser to bypass this hurdle.
Selenium access to html webpage times out
|c#|html|selenium-webdriver|
I have a query that I was asked to modify. The original field is Startdate with a MM/DD/YYYY format: 1/1/2020. I was asked to change the format to YYYY-MM-DD (mod_startdate1), but it needs a 00:00:00 timestamp included: 2020-01-01. The best I can get to is mod_startdate2, but note that the minute field is replicating the month field: 2020-01-01 00:01:00. Query used for the screenshot: select startdate, from_unixtime(unix_timestamp(startdate,'mm/dd/yyyy'), 'yyyy-mm-dd') as mod_startdate1, from_unixtime(unix_timestamp(startdate,'mm/dd/yyyy'), 'yyyy-mm-dd HH:mm:ss') as mod_startdate2 from datamart_core.fbp_baseline limit 50 --returns date format, returns timestamp, but the month is also populating as the minutes HIVE SQL format[enter image description here](https://i.stack.imgur.com/6Qb3J.jpg) I'd appreciate any ideas to help resolve my query. Thank you. I've found multiple repositories, but have not found 'the' solution yet.
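The symptom — minutes echoing the month — usually comes from case sensitivity in the format pattern: in the Java `SimpleDateFormat` patterns that Hive's `unix_timestamp`/`from_unixtime` use, uppercase `MM` means month while lowercase `mm` means minutes. The same month-vs-minute distinction can be illustrated with Python's `strptime`/`strftime`, where `%m` is month and `%M` is minutes:

```python
from datetime import datetime

# Parse the MM/DD/YYYY source value; in strptime, %m = month, %d = day.
d = datetime.strptime("1/1/2020", "%m/%d/%Y")

# The month code belongs only in the date part; minutes (%M) stay zero.
formatted = d.strftime("%Y-%m-%d %H:%M:%S")
print(formatted)  # 2020-01-01 00:00:00
```

By analogy, using the month code only in the date portion of the Hive pattern (and the minute code only in the time portion) should stop the month value from leaking into the minutes field.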
HIVE Sql Date conversion
|hive|
null
The name of the `inline` specifier is somewhat misleading, as it suggests that the function be inlined. However, `inline` foremost specifies the **linkage of the function** (it's also a hint to the compiler to consider inlining). For a function declared `inline`, no linkable symbol is generated in the compiled object. Therefore, `inline` functions only make sense when *defined* (not merely declared) in a header file, which is included by, possibly, many compilation units. The `inline` specifier then prevents multiple (in fact *any*) symbols for this function from being emitted by the compiler in the respective object files. If you need a small function only once in one compilation unit, you don't need to declare it anywhere else. Moreover, you don't need to declare it inline; instead, place it in the anonymous namespace to prevent it from being visible (in the object file generated). So, either (that's most likely your use case) // foo.hpp: inline void foo(bar x) { /* ... */ } // full definition // application.cpp: #include "foo.hpp" /* ... */ foo(X); or // application.cpp: namespace { inline void foo(bar x) // inline specifier redundant { /* ... */ } } /* ... */ foo(X);
I found the answer (and I'll put it in an edit of this post, because the stupid tool here pretends my code is not formatted, its so-called Ctrl-K does not work, and the [?] does not exist!)
I'm working my way through C++ concurrency in action and ran into a problem trying to understand listing 5.12, reproduced below ([GitHub code sample](https://github.com/anthonywilliams/ccia_code_samples/blob/main/listings/listing_5.12.cpp)). I understand why the following should work when the release and acquire memory fences are in. ``` #include <assert.h> std::atomic<bool> x,y; std::atomic<int> z; void write_x_then_y() { x.store(true,std::memory_order_relaxed); std::atomic_thread_fence(std::memory_order_release); y.store(true,std::memory_order_relaxed); } void read_y_then_x() { while(!y.load(std::memory_order_relaxed)); std::atomic_thread_fence(std::memory_order_acquire); if(x.load(std::memory_order_relaxed)) ++z; } int main() { x=false; y=false; z=0; std::thread a(write_x_then_y); std::thread b(read_y_then_x); a.join(); b.join(); assert(z.load()!=0); } ``` However if I remove the fences, this example unexpectedly still works and `z == 1`. See below for my modified example: ``` #include <atomic> #include <cassert> #include <thread> std::atomic_int x(0); std::atomic_int y(0); std::atomic_int z(0); void read_y_then_x() { while (!(y.load(std::memory_order_relaxed) == 1)) ; // std::atomic_thread_fence(std::memory_order_acquire); if (x.load(std::memory_order_relaxed) == 1) { z.fetch_add(1, std::memory_order_relaxed); } } void write_x_then_y() { x.store(1, std::memory_order_relaxed); // std::atomic_thread_fence(std::memory_order_release); y.store(1, std::memory_order_relaxed); } int main() { for (int i = 0; i < 100'000; i++) { z.store(0); x.store(0); y.store(0); std::thread t2(read_y_then_x); std::thread t1(write_x_then_y); t2.join(); t1.join(); assert(z.load(std::memory_order_relaxed) == 1); } } ``` Is there a memory ordering constraint I'm missing here? Is there a release sequence getting formed that I am unaware of? I'm running this on an M1 mac and compiling with `g++`, I don't think that matters here, though.
Unexpected inter-thread happens-before relationships from relaxed memory ordering
|c++|concurrency|lock-free|
null
You can't call a React hook in a nested callback, i.e. calling `usePathname` in `getData` is not valid. Also note that `usePathname` only returns the current path string; query-string values come from the `useSearchParams` hook. You can call `useSearchParams` in the `Temps` component and pass the `lat` and `lon` values to the query function. Basic Example: ```jsx
"use client"
import { useQuery } from "@tanstack/react-query";
import { useSearchParams } from "next/navigation";
import fetchData from "../app/fetchData.jsx";

export default function Temps() {
  const searchParams = useSearchParams();
  const lat = searchParams.get('lat');
  const lon = searchParams.get('lon');

  const { data, error, isError } = useQuery({
    queryKey: ["wdata", lat, lon],
    queryFn: () => fetchData({ lat, lon }),
  });

  if (isError) return <span>Error: {error.message}</span>;

  if (data) return (
    <div>
      <p>{data.toString()}</p>
    </div>
  );
}
``` ```javascript
"use client"

async function getData({ lat, lon }) {
  const options = {
    method: "GET",
    headers: {
      accept: "application/json",
    },
  };

  try {
    const response = await fetch(
      `http://api.openweathermap.org/data/2.5/weather?lat=${lat}&lon=${lon}&appid=${process.env.API_KEY}&units=imperial`,
      options
    );
    const data = await response.json();
    return data;
  } catch (err) {
    console.error(err);
    throw err; // re-throw for the query
  }
}

export default async function fetchData({ lat, lon }) {
  return getData({ lat, lon });
}
```
SQL71561: SqlComputedColumn error when specific columns are selected
|sql-server|database|dacpac|azure-data-studio|
null
I would like to bootstrap a multilevel sem model including indirect relationships, however when I include the bootstrap argument I get the following error: FactorA <- sem(model1, data = Daily_Diary_Study_WERKDOC_parcels_CMC_2, + cluster = "id", + estimator = "ML", + se = "bootstrap", bootstrap = 10000, fixed.x = FALSE) Error in lav_options_set(opt) : lavaan ERROR: `se' argument must one of "none", "standard" or "robust.huber.white" in the multilevel case Is it even possible to bootstrap a multilevel sem model in R? Or merely the indirect relationships I aim to test within this model would also do. Thank you! I tried this: FactorA <- sem(model1, data = Daily_Diary_Study_WERKDOC_parcels_CMC_2, + cluster = "id", + estimator = "ML", + se = "bootstrap", bootstrap = 10000, fixed.x = FALSE) And I expect bootstrapped confidence intervals for my estimates
|r|confidence-interval|bootstrapping|mediator|multilevel-analysis|
I have understood and solved the notebook available on Coursera for the Deep Learning Specialization (Sequence Models course) by Andrew Ng. In the notebook, he provides a detailed walkthrough for building a wake word detection model. However, at the end, he loads a pre-trained model trained on the word "activate." I attempted to use Google Colab and my own data. I collected 369 voices of people saying "Alexa," which are available on Kaggle. However, they have a sample rate of 16 kHz. I also used Google voice commands as negative sounds and collected some clips from YouTube that contain various environmental sounds. I followed all the steps exactly as instructed, but when I try to create the dataset, the RAM quickly fills up, and I cannot create 4000 samples as mentioned by Andrew in his notebook. Here is my code for "create_training_examples": ``` nsamples = 2500 X_train = [] Y_train= [] X_test = [] Y_test = [] train_count = 0 test_count = 0 to_test = False for i in range(0, nsamples): if i % 500 == 0: print(i) rand = random.randint(0,61) if i%5 == 0: x, y = create_data_example(backgrounds_list[rand], alexa_list, negatives_list, Ty, name=str(i),to_test = True) X_test.append(x.swapaxes(0,1)) Y_test.append(y.swapaxes(0,1)) test_count+=1 else: x, y = create_data_example(backgrounds_list[rand], alexa_list, negatives_list, Ty, name=str(i),to_test = False) X_train.append(x.swapaxes(0,1)) Y_train.append(y.swapaxes(0,1)) train_count+=1 print("Number of training samples:", train_count) print("Number of testing samples:", test_count) X_train = np.array(X_train) Y_train = np.array(Y_train) np.save('XY_train/X_train.npy', X_train) np.save('XY_train/Y_train.npy', Y_train) X_test = np.array(X_test) Y_test = np.array(Y_test) np.save('XY_test/X_test.npy', X_test) np.save('XY_test/Y_test.npy', Y_test) print('done saving') print('X_train.shape: ',X_train.shape) print('Y_train.shape: ',Y_train.shape) ``` Here is the model I use: ``` def model(input_shape): X_input = Input(shape = 
input_shape) X = Conv1D(196,15,strides=4)(X_input) X = BatchNormalization()(X) X = Activation('relu')(X) X = Dropout(0.8)(X) X = GRU(128,return_sequences = True)(X) X = Dropout(0.8)(X) X = BatchNormalization()(X) X = GRU(128,return_sequences=True)(X) X = Dropout(0.8)(X) X = BatchNormalization()(X) X = Dropout(0.8)(X) X = TimeDistributed(Dense(1, activation = "sigmoid"))(X) # time distributed (sigmoid) model = Model(inputs = X_input, outputs = X) return model ``` and for training: ``` opt = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, decay=0.01) model.compile(loss='binary_crossentropy', optimizer=opt, metrics=["accuracy"]) model.fit(X, Y, batch_size = 5, epochs=20) ``` I tried reducing the sample rate from 44100 to 8000, but I still didn't get any good results after training. Do you have any advice or suggestions?
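On the RAM issue specifically: appending every spectrogram to a Python list keeps all samples in memory at once before `np.array` is even called. One way to keep memory bounded (a sketch under the assumption that every example has a fixed shape — the `Tx`/`n_freq`/`Ty` values below are the ones from the Coursera notebook and may differ for your 16 kHz data) is to preallocate memory-mapped `.npy` files and write each example straight to disk:

```python
import numpy as np

n_samples = 8                       # set to 2500 for the real dataset
Tx, n_freq, Ty = 5511, 101, 1375    # notebook shapes; adjust to your data

# Preallocate on-disk, memory-mapped arrays instead of growing in-RAM lists.
X = np.lib.format.open_memmap("X_train.npy", mode="w+",
                              dtype=np.float32, shape=(n_samples, Tx, n_freq))
Y = np.lib.format.open_memmap("Y_train.npy", mode="w+",
                              dtype=np.float32, shape=(n_samples, Ty, 1))

for i in range(n_samples):
    # x, y = create_data_example(...)  # as in the question
    x = np.zeros((n_freq, Tx), dtype=np.float32)  # placeholder example
    y = np.zeros((1, Ty), dtype=np.float32)
    X[i] = x.swapaxes(0, 1)   # written through to disk, not held in RAM
    Y[i] = y.swapaxes(0, 1)

X.flush()
Y.flush()
```

The resulting files can later be opened lazily with `np.load(..., mmap_mode="r")`, so training can also stream from disk instead of loading everything at once.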
Getting error while running flexsurvreg function
I've been using dynamic imports with ssr set to false as a workaround for several shadcn components. I also think modals should be dynamic in general. E.g.: const Modal = dynamic(() => import('./pathToFile/Modal'), { ssr: false, loading: () => <AnyPlaceHolder /> })
Had the same error. Solved it by deleting the tags associated with the problematic commits (did not need the tags anyway) and then mirroring the repo. `git tag -d TAGNAME` I was able to list the problematic commits with the following command: `git fsck`
I am using the [ServiceBusExplorer][1] 5.0.18 version. I am connecting to a service bus by using its connection string. It's not showing the topics now. It was working fine before but suddenly stopped showing the topics now. It's working fine for the other service bus. What could be the issue? ![image](https://github.com/paolosalvatori/ServiceBusExplorer/assets/3524495/72fbc260-e02c-4b65-bb02-8b779d04f2f3) [1]: https://github.com/paolosalvatori/ServiceBusExplorer
ServiceBusExplorer is not showing the topics
|azure|azureservicebus|azure-servicebus-topics|service-bus-explorer|
{"Voters":[{"Id":1745001,"DisplayName":"Ed Morton"},{"Id":1773798,"DisplayName":"Renaud Pacalet"},{"Id":724039,"DisplayName":"Luuk"}]}
Using any awk and 1 pass just storing the values for one $2 at a time in memory: $ cat tst.awk { if ( $2 == prev[2] ) { if ( $3 != prev[3] ) { type = "diff" } } else { prt() type = "same" numVals = 0 } vals[++numVals] = $0 split($0,prev) } END { prt() } function prt( i) { for ( i=1; i<=numVals; i++ ) { print vals[i], type } } $ awk -f tst.awk test.txt 49808830/ccs 9492 TACA 3 same 175833950/ccs 971 ACCC 1 same 180422692/ccs 971 ACCC 10 same 110952448/ccs 9714 TAGAG 2 diff 117309969/ccs 9714 TAGAG 4 diff 119998610/ccs 9714 TAGAG 5 diff 171509463/ccs 9714 TAGAT 4 diff
Specifying my config via `CMD` did the trick: ``` FROM node:16-bullseye # install rabbitmq RUN apt-get update && apt-get install -y erlang rabbitmq-server # enable default rabbitmq plugins RUN rabbitmq-plugins enable rabbitmq_management CMD service rabbitmq-server start \ # create new vhost and admin user && rabbitmqctl add_vhost test-vhost \ && rabbitmqctl add_user admin admin \ && rabbitmqctl set_user_tags admin administrator \ && rabbitmqctl set_permissions -p "test-vhost" "guest" ".*" ".*" ".*" \ && rabbitmqctl set_permissions -p "test-vhost" "admin" ".*" ".*" ".*" \ # create exchange and queue && rabbitmqadmin declare exchange --vhost=test-vhost name=test-ex type=fanout durable=true \ && rabbitmqadmin declare queue --vhost=test-vhost name=testq-1 durable=true \ && rabbitmqadmin declare binding --vhost=test-vhost source=test-ex destination_type=queue destination=testq-1 routing_key=route1 \ && /bin/bash ```
try changing const supabase = createClient('https://mylink.supabase.co', 'mykey') to const supabase = await createClient('https://mylink.supabase.co', 'mykey')
I am designing a CNN classifier for image classification with reproducibility. I am using the GPU of the Google Colab for this. To ensure the result is reproducible, I am enabling the TensorFlow ops deterministic using the "tf.config.experimental.enable_op_determinism()" command and getting the reproducible output. The problem is that when I am trying to create a gradient-based saliency map, I get an error. The error says, "UnimplementedError: {{function_node __wrapped__FusedBatchNormGradV3_device_/job:localhost/replica:0/task:0/device:GPU:0}} A deterministic GPU implementation of fused batch-norm backprop, when training is disabled, is not currently available. [Op:FusedBatchNormGradV3] name: " Here is my code for creating the saliency map, def salency_map(model, sample_index): # Choose a random sample from the test set #sample_index = np.random.randint(0, len(X_test)) sample_image = x_test[sample_index][np.newaxis] #sample_image = x_test[sample_index] sample_label = y_test[sample_index] sample_class_index = np.argmax(sample_label) Y_true = np.argmax(y_test[sample_index]) Y_pred_classes = np.argmax(model.predict(sample_image), axis=1) print("Actual Class: ", labels[int(Y_true)]) print("Predicted Class: ", labels[int(Y_pred_classes)]) # Initialize GradCAM object explainer = GradCAM() # Explain the model predictions on the sample image grid = explainer.explain((sample_image, None), model, class_index=sample_class_index) # Visualize the saliency map plt.figure(figsize=(4, 2)) plt.subplot(1, 2, 1) plt.title("Original Image") plt.imshow(sample_image.squeeze()) plt.subplot(1, 2, 2) plt.title("Saliency Map") plt.imshow(grid) plt.colorbar() plt.show() How to overcome this issue? After training the model, I tried to enable the non-deterministic mode again, But it did not work.
A deterministic GPU implementation of fused batch-norm backprop, when training is disabled, is not currently available
|tensorflow|gpu|deterministic|reproducible-research|
null
I'm new to using JavaFX and programming in general, so this might be a stupid question. I'm trying to make a function that makes the thread wait without freezing the program, which would result in something like the image above. [What I'm trying to achieve](https://i.stack.imgur.com/9ScnL.png) I have tried using Thread.sleep, but it froze the GUI, and also something like a Timeline or PauseTransition like this: ``` public static void wait(int milliseconds) { Timeline timeline = new Timeline(new KeyFrame(Duration.millis(milliseconds))); timeline.setOnFinished(event -> { }); timeline.play(); } ``` but it doesn't work since the JavaFX machinery runs on a different thread. Edit: Something to keep in mind is that there isn't anything specific to do after the pause; since the function doesn't know what I need the pause for, it just needs to stop the main thread for x amount of time without freezing the GUI. Example of what I mean: System.out.println("some information"); pause(4000); System.out.println("idk"); pause(1000); button.setVisible(true); pause(5000); MyImage.setImage(AImage);
main.py ``` from flask import Flask, render_template, request import os app = Flask(__name__) from pyresparser import ResumeParser import warnings warnings.filterwarnings('ignore') import en_core_web_sm nlp = en_core_web_sm.load() def extract_resume_info(resume_path): # Extract text from the resume file (.docx, .pdf, and .txt formats are supported) # Extract data from the resume using ResumeParser data = ResumeParser(resume_path).get_extracted_data() return data['skills'] @app.route('/') def index(): return render_template('index.html') @app.route('/upload', methods=['POST']) def upload(): if request.method == 'POST': resume_file = request.files['resume'] if resume_file.filename == '': return render_template('index.html', message='No file selected') if resume_file: if not os.path.exists('temp'): os.makedirs('temp') resume_path = os.path.join('temp', resume_file.filename) resume_file.save(resume_path) resume_info = extract_resume_info(resume_path) return render_template('result.html', resume_info=resume_info) if __name__ == '__main__': app.run(debug=True) ``` config.cfg ``` [nlp] pipeline = [] disabled = [] before_creation = null after_creation = null after_pipeline_creation = null batch_size = 1000 lang = en tokenizer = () ``` This is a web app in which you upload your resume file and get bullet points in return, like your experience, university, etc. Also, can you recommend some other resume-parsing modules that work well? I am using pyresparser.
How to solve Config validation error when tokenizer is not callable in Python?
|python|token|tokenize|resume|pyresttest|
null
I'm inserting using DSL context: ``` val userRecord = create.newRecord(ITR_NE_USER, user) val x = create.executeInsert(userRecord) ``` I would expect `userRecord` to be updated with the generated ID <Int> (this is Kotlin) - it's still null after the insert. How do I make it give me the value back?
Jooq - Insert does not update object with generated id
|kotlin|jooq|
In this code, the language change is made when I reload the page, so I would like a solution so that it happens instantly and I don't have to reload the page. ``` <template> <div ref="scene" class="scene"></div> </template> <script> import { onMounted, ref } from "vue"; import Matter from "matter-js"; import { useI18n } from "vue-i18n"; export default { setup() { const scene = ref(null); const balls = ref([]); const { t } = useI18n(); onMounted(() => { const engine = Matter.Engine.create({ gravity: { x: 0, y: 0.3 } }); const render = Matter.Render.create({ element: scene.value, engine: engine, options: { width: window.innerWidth / 5, height: window.innerHeight / 2, wireframes: false, background: "transparent", }, }); Matter.Render.run(render); const createPhysicalButton = (key) => { const buttonWidth = 165; const buttonHeight = 45; const cornerRadius = 20; const x = Math.random() * (window.innerWidth / 8 - buttonWidth * 2 - 100) + buttonWidth + 50; const y = Math.random() * (window.innerHeight / 2 - buttonHeight * 2 - 100) + buttonHeight + 50; const angle = Math.random() * Math.PI * 2; const body = Matter.Bodies.rectangle(x, y, buttonWidth, buttonHeight, { chamfer: { radius: cornerRadius }, angle: angle, isStatic: false, restitution: 0.4, frictionAir: 0.004, density: 0.01, render: { visible: false }, }); body.initialPosition = { x, y }; const button = document.createElement("button"); button.innerText = t(key); button.style.padding = "10px"; button.style.borderRadius = `${cornerRadius}px`; button.style.border = "1px solid #333"; button.style.backgroundColor = "transparent"; button.style.position = "absolute"; button.style.left = `${x - buttonWidth / 2}px`; button.style.top = `${y - buttonHeight / 2}px`; button.style.width = `${buttonWidth}px`; button.style.height = `${buttonHeight}px`; scene.value.appendChild(button); const limiterVitesse = (corps, vitesseMax) => { const vitesse = Matter.Vector.magnitude(corps.velocity); if (vitesse > vitesseMax) { 
Matter.Body.setVelocity( corps, Matter.Vector.mult( Matter.Vector.normalise(corps.velocity), vitesseMax ) ); } }; Matter.Events.on(engine, "afterUpdate", () => { button.style.left = `${body.position.x - buttonWidth / 2}px`; button.style.top = `${body.position.y - buttonHeight / 2}px`; button.style.transform = `rotate(${body.angle}rad)`; [...balls.value, ...buttons].forEach((corps) => limiterVitesse(corps, 10) ); const maxSpeed = 10; const speed = Matter.Vector.magnitude(body.velocity); if (speed > maxSpeed) { Matter.Body.setVelocity( body, Matter.Vector.mult( Matter.Vector.normalise(body.velocity), maxSpeed ) ); } }); return body; }; const buttons = [ createPhysicalButton("button.fun"), createPhysicalButton("button.awareness"), createPhysicalButton("button.act"), createPhysicalButton("button.enjoy"), createPhysicalButton("button.learn"), createPhysicalButton("button.ecology"), createPhysicalButton("button.education"), ]; Matter.World.add(engine.world, buttons); const colors = ["#551126", "#C1DD13", "#ABCDFF"]; const createBall = (color) => { const ballSize = 6; const x = Math.random() * (window.innerWidth / 8 - ballSize * 2 - 100) + ballSize + 50; const y = Math.random() * (window.innerHeight / 2 - ballSize * 2 - 100) + ballSize + 50; const angle = Math.random() * Math.PI * 2; const ball = Matter.Bodies.circle(x, y, ballSize, { angle: angle, isStatic: false, restitution: 0.4, frictionAir: 0.004, density: 0.01, render: { fillStyle: color, strokeStyle: "transparent", }, }); balls.value.push(ball); Matter.World.add(engine.world, ball); return ball; }; const createdBalls = colors.map((color) => createBall(color)); Matter.World.add(engine.world, createdBalls); const wallOptions = { isStatic: true, render: { visible: false }, }; const wallThickness = 200; // Augmentez l'épaisseur des murs pour empêcher le passage des balles // Assurez-vous que la position et la taille des murs couvrent correctement les bords de la scène const leftWall = 
Matter.Bodies.rectangle(-wallThickness / 2, window.innerHeight / 4, wallThickness, window.innerHeight * 2, wallOptions); const rightWall = Matter.Bodies.rectangle(window.innerWidth / 5 + wallThickness / 2, window.innerHeight / 4, wallThickness, window.innerHeight * 2, wallOptions); const topWall = Matter.Bodies.rectangle( window.innerWidth / 16, -wallThickness / 2, window.innerWidth * 2, wallThickness, wallOptions ); const bottomWall = Matter.Bodies.rectangle( window.innerWidth / 16, window.innerHeight / 2 + wallThickness / 2, window.innerWidth * 2, wallThickness, wallOptions ); Matter.World.add(engine.world, [ leftWall, rightWall, topWall, bottomWall, ]); Matter.Events.on(engine, "collisionStart", (event) => { event.pairs.forEach((pair) => { const { bodyA, bodyB } = pair; if ( balls.value.includes(bodyA) && (leftWall === bodyB || rightWall === bodyB || topWall === bodyB || bottomWall === bodyB) ) { Matter.Body.setVelocity(bodyA, { x: -bodyA.velocity.x * 0.5, y: -bodyA.velocity.y * 0.5, }); } if ( balls.value.includes(bodyB) && (leftWall === bodyA || rightWall === bodyA || topWall === bodyA || bottomWall === bodyA) ) { Matter.Body.setVelocity(bodyB, { x: -bodyB.velocity.x * 0.5, y: -bodyB.velocity.y * 0.5, }); } }); }); let lastMousePosition = { x: 0, y: 0 }; window.addEventListener("mousemove", (event) => { const sceneRect = scene.value.getBoundingClientRect(); const mousePosition = { x: event.clientX - sceneRect.left, y: event.clientY - sceneRect.top, }; const mouseVelocity = { x: mousePosition.x - lastMousePosition.x, y: mousePosition.y - lastMousePosition.y, }; buttons.forEach((button) => { const distance = Matter.Vector.magnitude( Matter.Vector.sub(button.position, mousePosition) ); if (distance < 50 && Matter.Vector.magnitude(mouseVelocity) > 0) { const maxMouseSpeed = 20; const buttonSpeed = Matter.Vector.magnitude(mouseVelocity); if (buttonSpeed > maxMouseSpeed) { Matter.Body.setVelocity( button, Matter.Vector.mult( Matter.Vector.normalise(mouseVelocity), 
maxMouseSpeed ) ); } else { Matter.Body.setVelocity(button, mouseVelocity); } } }); balls.value.forEach((ball) => { const distance = Matter.Vector.magnitude( Matter.Vector.sub(ball.position, mousePosition) ); if (distance < 50 && Matter.Vector.magnitude(mouseVelocity) > 0) { const maxMouseSpeed = 20; const ballSpeed = Matter.Vector.magnitude(mouseVelocity); if (ballSpeed > maxMouseSpeed) { Matter.Body.setVelocity( ball, Matter.Vector.mult( Matter.Vector.normalise(mouseVelocity), maxMouseSpeed ) ); } else { Matter.Body.setVelocity(ball, mouseVelocity); } } }); lastMousePosition = mousePosition; }); const runner = Matter.Runner.create(); Matter.Runner.run(runner, engine); }); return { scene }; }, }; </script> <style> .scene { width: 50vw; height: 30vh; position: absolute; /* ou 'relative' selon le besoin */ top: 60%; left: 73%; transform: translate(-50%, -50%); margin-top: 100px; } button { z-index: 10; border-radius: 20px; border: 1px solid #333; background-color: transparent; transform-origin: center center; cursor: auto; } </style> ``` In this case, I don't need to reload the page, but the buttons are no longer subject to the same reactivity as before. 
``` <template> <div ref="scene" class="scene"></div> </template> <script> import { onMounted, ref, watch } from "vue"; import Matter from "matter-js"; import { useI18n } from "vue-i18n"; export default { setup() { const scene = ref(null); const balls = ref([]); const buttons = ref([]); const { t, locale } = useI18n(); const updatePhysicalButtons = () => { if (!scene.value) { console.warn("Scene not ready."); return; } const existingButtons = scene.value.querySelectorAll('button'); existingButtons.forEach(button => button.remove()); const buttonKeys = [ "button.fun", "button.awareness", "button.act", "button.enjoy", "button.learn", "button.ecology", "button.education", ]; buttonKeys.forEach(key => createPhysicalButton(key)); }; const createPhysicalButton = (key) => { const buttonWidth = 165; const buttonHeight = 45; const cornerRadius = 20; const x = Math.random() * (window.innerWidth / 8 - buttonWidth * 2 - 100) + buttonWidth + 50; const y = Math.random() * (window.innerHeight / 2 - buttonHeight * 2 - 100) + buttonHeight + 50; const angle = Math.random() * Math.PI * 2; const body = Matter.Bodies.rectangle(x, y, buttonWidth, buttonHeight, { chamfer: { radius: cornerRadius }, angle: angle, isStatic: false, restitution: 0.4, frictionAir: 0.004, density: 0.01, render: { visible: false }, }); body.initialPosition = { x, y }; const button = document.createElement("button"); button.innerText = t(key); button.style.padding = "10px"; button.style.borderRadius = `${cornerRadius}px`; button.style.border = "1px solid #333"; button.style.backgroundColor = "transparent"; button.style.position = "absolute"; button.style.left = `${x - buttonWidth / 2}px`; button.style.top = `${y - buttonHeight / 2}px`; button.style.width = `${buttonWidth}px`; button.style.height = `${buttonHeight}px`; scene.value.appendChild(button); const limiterVitesse = (corps, vitesseMax) => { const vitesse = Matter.Vector.magnitude(corps.velocity); if (vitesse > vitesseMax) { Matter.Body.setVelocity( corps, 
Matter.Vector.mult( Matter.Vector.normalise(corps.velocity), vitesseMax ) ); } }; Matter.Events.on(engine, "afterUpdate", () => { button.style.left = `${body.position.x - buttonWidth / 2}px`; button.style.top = `${body.position.y - buttonHeight / 2}px`; button.style.transform = `rotate(${body.angle}rad)`; [...balls.value, ...buttons.value].forEach((corps) => limiterVitesse(corps, 10) ); const maxSpeed = 10; const speed = Matter.Vector.magnitude(body.velocity); if (speed > maxSpeed) { Matter.Body.setVelocity( body, Matter.Vector.mult( Matter.Vector.normalise(body.velocity), maxSpeed ) ); } }); buttons.value.push(body); return body; }; let engine; onMounted(() => { engine = Matter.Engine.create({ gravity: { x: 0, y: 0.3 } }); const render = Matter.Render.create({ element: scene.value, engine: engine, options: { width: window.innerWidth / 5, height: window.innerHeight / 2, wireframes: false, background: "transparent", }, }); Matter.Render.run(render); updatePhysicalButtons(); const colors = ["#551126", "#C1DD13", "#ABCDFF"]; const createBall = (color) => { const ballSize = 6; const x = Math.random() * (window.innerWidth / 8 - ballSize * 2 - 100) + ballSize + 50; const y = Math.random() * (window.innerHeight / 2 - ballSize * 2 - 100) + ballSize + 50; const angle = Math.random() * Math.PI * 2; const ball = Matter.Bodies.circle(x, y, ballSize, { angle: angle, isStatic: false, restitution: 0.4, frictionAir: 0.004, density: 0.01, render: { fillStyle: color, strokeStyle: "transparent", }, }); balls.value.push(ball); Matter.World.add(engine.world, ball); return ball; }; const createdBalls = colors.map((color) => createBall(color)); Matter.World.add(engine.world, createdBalls); const wallOptions = { isStatic: true, render: { visible: false }, }; const wallThickness = 200; const leftWall = Matter.Bodies.rectangle(-wallThickness / 2, window.innerHeight / 4, wallThickness, window.innerHeight * 2, wallOptions); const rightWall = Matter.Bodies.rectangle(window.innerWidth / 5 + 
wallThickness / 2, window.innerHeight / 4, wallThickness, window.innerHeight * 2, wallOptions); const topWall = Matter.Bodies.rectangle( window.innerWidth / 16, -wallThickness / 2, window.innerWidth * 2, wallThickness, wallOptions ); const bottomWall = Matter.Bodies.rectangle( window.innerWidth / 16, window.innerHeight / 2 + wallThickness / 2, window.innerWidth * 2, wallThickness, wallOptions ); Matter.World.add(engine.world, [ leftWall, rightWall, topWall, bottomWall, ]); Matter.Events.on(engine, "collisionStart", (event) => { event.pairs.forEach((pair) => { const { bodyA, bodyB } = pair; if ( balls.value.includes(bodyA) && (leftWall === bodyB || rightWall === bodyB || topWall === bodyB || bottomWall === bodyB) ) { Matter.Body.setVelocity(bodyA, { x: -bodyA.velocity.x * 0.5, y: -bodyA.velocity.y * 0.5, }); } if ( balls.value.includes(bodyB) && (leftWall === bodyA || rightWall === bodyA || topWall === bodyA || bottomWall === bodyA) ) { Matter.Body.setVelocity(bodyB, { x: -bodyB.velocity.x * 0.5, y: -bodyB.velocity.y * 0.5, }); } }); }); let lastMousePosition = { x: 0, y: 0 }; window.addEventListener("mousemove", (event) => { const sceneRect = scene.value.getBoundingClientRect(); const mousePosition = { x: event.clientX - sceneRect.left, y: event.clientY - sceneRect.top, }; const mouseVelocity = { x: mousePosition.x - lastMousePosition.x, y: mousePosition.y - lastMousePosition.y, }; buttons.value.forEach((button) => { const distance = Matter.Vector.magnitude( Matter.Vector.sub(button.position, mousePosition) ); if (distance < 50 && Matter.Vector.magnitude(mouseVelocity) > 0) { const maxMouseSpeed = 20; const buttonSpeed = Matter.Vector.magnitude(mouseVelocity); if (buttonSpeed > maxMouseSpeed) { Matter.Body.setVelocity( button, Matter.Vector.mult( Matter.Vector.normalise(mouseVelocity), maxMouseSpeed ) ); } else { Matter.Body.setVelocity(button, mouseVelocity); } } }); balls.value.forEach((ball) => { const distance = Matter.Vector.magnitude( 
Matter.Vector.sub(ball.position, mousePosition) ); if (distance < 50 && Matter.Vector.magnitude(mouseVelocity) > 0) { const maxMouseSpeed = 20; const ballSpeed = Matter.Vector.magnitude(mouseVelocity); if (ballSpeed > maxMouseSpeed) { Matter.Body.setVelocity( ball, Matter.Vector.mult( Matter.Vector.normalise(mouseVelocity), maxMouseSpeed ) ); } else { Matter.Body.setVelocity(ball, mouseVelocity); } } }); lastMousePosition = mousePosition; }); const runner = Matter.Runner.create(); Matter.Runner.run(runner, engine); }); watch(locale, () => { updatePhysicalButtons(); }, { immediate: true }); return { scene }; }, }; </script> <style> .scene { width: 50vw; height: 30vh; position: absolute; top: 60%; left: 73%; transform: translate(-50%, -50%); margin-top: 100px; } button { z-index: 10; border-radius: 20px; border: 1px solid #333; background-color: transparent; transform-origin: center center; cursor: auto; } </style> ``` So this is what I tried based on the suggestions above.
Problems with matter.js and i18n in vue.js
|vue.js|vuejs3|vue-i18n|matter.js|
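One likely cause, sketched here as an assumption rather than taken from the post: `updatePhysicalButtons` removes and recreates the DOM buttons on every locale change, but the old Matter bodies (and their `afterUpdate` handlers) stay in the world, so the fresh DOM buttons end up detached from the bodies that still drive the physics. An alternative is to keep each body and DOM element and only re-translate the label when the locale changes. A minimal framework-free sketch of that idea — the `messages` table and the `buttonRecords` shape are hypothetical stand-ins for vue-i18n's `t()` and the real button list:

```javascript
// Hypothetical translation tables standing in for vue-i18n's t().
const messages = {
  en: { "button.fun": "Fun", "button.act": "Act" },
  fr: { "button.fun": "Amusement", "button.act": "Agir" },
};

// One record per button: the Matter body would stay untouched;
// only the DOM label is re-rendered when the locale changes.
const buttonRecords = [
  { key: "button.fun", el: { innerText: "" } },
  { key: "button.act", el: { innerText: "" } },
];

function applyLocale(locale) {
  for (const rec of buttonRecords) {
    // Update the text in place; positions, velocities and the event
    // handlers attached to the physics body are left alone.
    rec.el.innerText = messages[locale][rec.key];
  }
}

applyLocale("en"); // labels become "Fun", "Act"
applyLocale("fr"); // labels become "Amusement", "Agir"
```

In the component, the same idea would roughly translate to a `watch(locale, ...)` that only rewrites each button's `innerText` via `t(key)`, instead of destroying and recreating the buttons and their bodies.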
Suppose the computation time of a process is 200 CPU cycles. Meanwhile, an I/O operation for another process is in progress through DMA, and after 100 CPU cycles the end of the I/O operation is signalled to the system by an interrupt. Assuming the execution time of the ISR is 10 CPU cycles, how much system time do the mentioned operations occupy? Please help me; I don't have any idea how to approach this.
The end of the I/O operation is notified to the system by an interrupt. How much system time do the mentioned operations occupy?
|operating-system|
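Not an answer from the original thread, but the arithmetic can be sketched under one common set of assumptions: the DMA transfer steals no CPU cycles, so the CPU is occupied only by the process computation and by the ISR, and the completion interrupt (arriving at cycle 100) is serviced within the 200-cycle window:

```python
# Sketch under stated assumptions: DMA consumes no CPU cycles; only the
# process computation and the interrupt service routine use the CPU.
process_cycles = 200  # computation time of the running process
isr_cycles = 10       # ISR run when the DMA transfer completes

# The I/O finishes at cycle 100, fully overlapped with the computation,
# so the only extra CPU cost it adds is the ISR itself.
total_cpu_cycles = process_cycles + isr_cycles
print(total_cpu_cycles)  # 210
```

Under different assumptions (cycle stealing by the DMA controller, or context-switch overhead counted separately) the total would differ, so any answer should state its assumptions explicitly.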
`.prev()` gives you only the immediately previous element, and if you specify a filter selector like `.prev('.selected')` it only returns the previous element IF it matches that selector. Meaning if the desired `.selected` element is not immediately before the current element, it will return nothing. If you truly want to get the closest previous element with a class (even if it is not immediately before the current element), you have to approach it like this:

```js
currentElem
    .prevAll('.selected') // returns all previous elems with 'selected' class in reverse DOM order (the closest is the first)
    .eq(0);               // return the first element in the set - the closest one
```

Therefore in your case:

```js
$('.selectable').click(function () {
    $(this).closest('.dropdown-wrapper').prevAll('.selected').eq(0).val('hit');
});
```
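The "closest previous sibling that matches" lookup the answer describes can also be sketched without jQuery. This plain-JS analogue (not part of the answer, and using a predicate instead of a CSS selector) walks `previousElementSibling` until the first match:

```javascript
// Analogue of $(elem).prevAll(sel).eq(0): scan previous siblings in
// reverse DOM order and return the first one the predicate accepts.
function closestPrevMatching(elem, matches) {
  for (let node = elem.previousElementSibling; node; node = node.previousElementSibling) {
    if (matches(node)) return node;
  }
  return null; // no previous sibling matched
}

// Tiny demo with plain objects standing in for DOM nodes.
const selected = { cls: "selected", previousElementSibling: null };
const middle = { cls: "other", previousElementSibling: selected };
const current = { cls: "selectable", previousElementSibling: middle };

const found = closestPrevMatching(current, (n) => n.cls === "selected");
console.log(found === selected); // true
```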
I have a df such as:

```python
data = pd.DataFrame({
    'class': ['First', 'First', 'First', 'Second', 'Second', 'Second', 'Third', 'Third', 'Third'],
    'who': ['child', 'man', 'woman', 'child', 'man', 'woman', 'child', 'man', 'woman'],
    'survived': [5, 42, 89, 19, 8, 60, 25, 38, 56]
})
```

Could someone help me write code with seaborn where:

- the barplot is grouped by "class" in the order `["First", "Second", "Third"]`, and the bars within each group are ordered `["child", "man", "woman"]`;
- the y axis displays the percentage of survived, so for instance for First child the bar should be at 5*100/(5+19+25)%;
- each bar displays its actual 'survived' value on top.

In the end the plot should look like this:

[![enter image description here][1]][1]

Here is the code I have so far:

```python
# Calculate percentage of survived within each group
data['Percentage'] = (data['survived'] / grouped_total_survived) * 100

# Plot
ax = sns.barplot(x='class', y='Percentage', hue='who', data=data, estimator=sum, ci=None)
ax.set(ylabel='Survived Count', title='Survived Count and Percentage')
```

  [1]: https://i.stack.imgur.com/JXcxD.png
Barplot in python with % value in Y axis and count value in text on top of the bars
|python|python-3.x|seaborn|
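Not from the question itself, but the percentage computation it asks for can be sketched with a `groupby` transform, assuming the denominator is the total `survived` across classes for the same `who` value (so "First child" gives 5*100/(5+19+25)). Each row is divided by its group total; the seaborn plotting and `bar_label` calls are left as comments since they need a display:

```python
import pandas as pd

data = pd.DataFrame({
    'class': ['First', 'First', 'First', 'Second', 'Second', 'Second',
              'Third', 'Third', 'Third'],
    'who': ['child', 'man', 'woman', 'child', 'man', 'woman',
            'child', 'man', 'woman'],
    'survived': [5, 42, 89, 19, 8, 60, 25, 38, 56],
})

# Percentage within each 'who' group: e.g. First/child -> 5 * 100 / (5 + 19 + 25)
data['Percentage'] = data['survived'] * 100 / data.groupby('who')['survived'].transform('sum')

# Plotting sketch (requires seaborn and a display):
# import seaborn as sns
# ax = sns.barplot(x='class', y='Percentage', hue='who', data=data,
#                  order=['First', 'Second', 'Third'],
#                  hue_order=['child', 'man', 'woman'])
# for container, (_, grp) in zip(ax.containers, data.groupby('who')):
#     ax.bar_label(container, labels=grp.sort_values('class')['survived'].tolist())

print(round(data.loc[0, 'Percentage'], 2))  # 10.2
```

The `ax.containers` / `bar_label` pairing in the comment is a sketch of the usual way to annotate grouped bars; the exact label ordering should be checked against the plotted hue order.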
The current branch `feature` is ahead of `master` by several commits and is to be merged into `master` with the `--no-ff` option. This can be achieved by

```sh
git checkout master
git merge feature --no-ff -m "commit msg"
```

For several reasons (the working directory is not clean, the commits contain a lot of changes) I don't want to run `git checkout master`. How can I merge the current `feature` branch into `master` with the `--no-ff` option without checking it out?
How to merge current branch into another with --no-ff option without checkout?
|git|
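One approach, sketched with plumbing commands (an assumption-laden sketch, not a definitive answer): when `feature` is strictly ahead of `master`, the tree of the would-be `--no-ff` merge commit is exactly `feature`'s tree, so the merge commit can be built with `git commit-tree` and `master` advanced with `git update-ref`, never touching the working directory. The throwaway demo repo below only exists to make the sketch self-contained:

```shell
#!/bin/sh
set -e

# Throwaway demo repository (the demo checks out feature to create
# history; the technique itself never checks out master).
dir=$(mktemp -d)
cd "$dir"
git init -q -b master
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m base
git checkout -q -b feature
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m work

# The actual technique: since feature is strictly ahead of master,
# the merge result's tree is feature's tree. Build the two-parent
# merge commit with plumbing and move master to it.
tree=$(git rev-parse 'feature^{tree}')
merge=$(git -c user.email=a@b -c user.name=t commit-tree "$tree" \
          -p master -p feature -m "commit msg")
git update-ref refs/heads/master "$merge"

git log --oneline -1 master
```

If the branches have diverged (so a real content merge is needed), this shortcut no longer applies; newer Git versions offer `git merge-tree`, or a temporary worktree can be used instead.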