Print warning if db is out of sync with code |
|sql|database|mongodb|orm|typeorm| |
|sql|oracle-database| |
To quote from the same [documentation](https://docs.databricks.com/en/delta/clone-parquet.html#requirements-and-limitations-for-cloning-parquet-and-iceberg-tables), it is mentioned as follows:
> You must register Parquet tables with partitions to a catalog such as the Hive metastore before cloning and using the table name to identify the source table. **You cannot use path-based clone syntax for Parquet tables with partitions**.
This means that if your source Parquet table is partitioned and registered in the Hive metastore, you cannot use the following path-based syntax to clone it:
```sql
CREATE OR REPLACE TABLE <target-table-name> CLONE parquet.`/path/to/data`;
```
Instead, the suggested way would be:
```sql
CREATE OR REPLACE TABLE <target-table-name> CLONE <source-table-name>;
```
or
```sql
CREATE OR REPLACE TABLE cdf.customerOrderArchive CLONE <source-table-name>;
``` |
I have a table called `sans` that stores event information with columns for `id`, `date`, and `hour`.

I'm trying to write a query using Eloquent to fetch the first event of each day. My current code works fine for today's events, but I'm having trouble modifying it to fetch the first event for each day beyond today.

Can you help me with this?

My current code:

```php
$sans = sans::where('id', $id)
    ->where('user_id', $user->id)
    ->whereDate('date', '>=', date('Y-m-d'))
    ->where('Hour', '>=', date('H:i'))
    ->first();
```

But it only works correctly for today's sans.
|
fetch first daily row using Laravel Eloquent |
I found the solution myself. I need to use an inner join on the table and then filter on user_id:
```javascript
const sites = await supabase.from('sites').select('*, members!inner(user_id)').eq('members.user_id', user.id)
```
Probably there is a better or more efficient way. Any advice is still welcome! |
Converting an RGB image to a labels image (1 channel):

```python
import numpy as np

img = …  # RGB image (MxNx3)
lut = np.arange(256 * 256 * 256, dtype=np.uint32).reshape(256, 256, 256)
labels_img = lut[img[:, :, 0], img[:, :, 1], img[:, :, 2]]
```

Then each label can be converted back to an RGB color simply by:

```python
color_label = labels_img[5, 7]
rgb = np.unravel_index(color_label, lut.shape)
``` |
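Note that the lookup table is just the identity mapping `label = 65536*R + 256*G + B`, so if the ~64 MiB LUT is a concern, the same packing can be computed arithmetically. A minimal sketch with plain integers (with NumPy, the identical expressions apply element-wise to whole channel arrays):

```python
def rgb_to_label(r, g, b):
    # Pack three 8-bit channels into one integer label,
    # exactly what indexing the identity LUT returns.
    return r * 65536 + g * 256 + b

def label_to_rgb(label):
    # Inverse of the packing, equivalent to
    # np.unravel_index(label, (256, 256, 256)).
    r, rest = divmod(label, 65536)
    g, b = divmod(rest, 256)
    return r, g, b

print(rgb_to_label(5, 7, 9))   # 329481
print(label_to_rgb(329481))    # (5, 7, 9)
```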
I was wondering what the problem in this code is:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <limits.h>

#define MAX_ROOMS 100 // Maximum number of rooms in the map
#define INF INT_MAX   // Infinity value for distances

// Structure to represent an edge in the graph
typedef struct {
    int source;
    int destination;
    int weight;
} Edge;

// Structure to represent a graph
typedef struct {
    int numRooms;
    int numEdges;
    Edge edges[MAX_ROOMS * MAX_ROOMS]; // Assuming each room can be connected to all other rooms
} Graph;

// Structure to represent a queue node for BFS
typedef struct {
    int vertex;
    int distance;
} QueueNode;

// Structure to represent a queue
typedef struct {
    int front, rear;
    int capacity;
    QueueNode* array;
} Queue;

// Function to create a new queue
Queue* createQueue(int capacity) {
    Queue* queue = (Queue*)malloc(sizeof(Queue));
    queue->capacity = capacity;
    queue->front = queue->rear = -1;
    queue->array = (QueueNode*)malloc(queue->capacity * sizeof(QueueNode));
    return queue;
}

// Function to check if the queue is empty
bool isEmpty(Queue* queue) {
    return queue->front == -1;
}

// Function to add an element to the queue
void enqueue(Queue* queue, int vertex, int distance) {
    QueueNode newNode;
    newNode.vertex = vertex;
    newNode.distance = distance;
    queue->array[++queue->rear] = newNode;
    if (queue->front == -1) {
        queue->front = 0;
    }
}

// Function to remove an element from the queue
QueueNode dequeue(Queue* queue) {
    QueueNode node = queue->array[queue->front];
    if (queue->front == queue->rear) {
        queue->front = queue->rear = -1;
    }
    else {
        queue->front++;
    }
    return node;
}

// Function to read the map from a text file and construct the graph
Graph readMapFromFile(const char* filename) {
    FILE* file;
    errno_t err = fopen_s(&file, filename, "r");
    if (err != 0) {
        printf("Error opening file.\n");
        exit(EXIT_FAILURE);
    }
    Graph graph;
    fscanf_s(file, "%d %d", &graph.numRooms, &graph.numEdges);
    for (int i = 0; i < graph.numEdges; i++) {
        fscanf_s(file, "%d %d %d", &graph.edges[i].source, &graph.edges[i].destination, &graph.edges[i].weight);
    }
    fclose(file);
    return graph;
}

// Function to perform BFS and find the center of Harry's friends' component
int bfsFindCenterOfFriendsComponent(Graph graph, int mainSourceRoom, int* numFriends) {
    // Initialize distances array
    int distances[MAX_ROOMS];
    for (int i = 0; i < graph.numRooms; i++) {
        distances[i] = INF;
    }
    // Initialize visited array
    bool visited[MAX_ROOMS] = { false };
    // Perform BFS
    Queue* queue = createQueue(graph.numRooms);
    enqueue(queue, mainSourceRoom, 0);
    visited[mainSourceRoom] = true;
    distances[mainSourceRoom] = 0;
    while (!isEmpty(queue)) {
        QueueNode current = dequeue(queue);
        int currentRoom = current.vertex;
        int currentDistance = current.distance;
        for (int i = 0; i < graph.numEdges; i++) {
            if (graph.edges[i].source == currentRoom) {
                int neighbor = graph.edges[i].destination;
                int weight = graph.edges[i].weight;
                if (!visited[neighbor]) {
                    visited[neighbor] = true;
                    distances[neighbor] = currentDistance + weight;
                    enqueue(queue, neighbor, distances[neighbor]);
                }
            }
        }
    }
    // Find the farthest room from the main source room
    int farthestRoom = mainSourceRoom;
    int maxDistance = 0;
    for (int i = 0; i < graph.numRooms; i++) {
        if (distances[i] > maxDistance && distances[i] != INF) {
            maxDistance = distances[i];
            farthestRoom = i;
        }
    }
    // Now, we perform BFS again starting from the farthest room to find the center
    // This time, we use the farthest room as the source
    // We calculate distances from the farthest room to all other rooms
    // The room with the minimum maximum distance to other rooms will be the center
    // Reset distances array
    for (int i = 0; i < graph.numRooms; i++) {
        distances[i] = INF;
    }
    // Reset visited array
    for (int i = 0; i < graph.numRooms; i++) {
        visited[i] = false;
    }
    // Perform BFS from the farthest room
    enqueue(queue, farthestRoom, 0);
    visited[farthestRoom] = true;
    distances[farthestRoom] = 0;
    while (!isEmpty(queue)) {
        QueueNode current = dequeue(queue);
        int currentRoom = current.vertex;
        int currentDistance = current.distance;
        for (int i = 0; i < graph.numEdges; i++) {
            if (graph.edges[i].source == currentRoom) {
                int neighbor = graph.edges[i].destination;
                int weight = graph.edges[i].weight;
                if (!visited[neighbor]) {
                    visited[neighbor] = true;
                    distances[neighbor] = currentDistance + weight;
                    enqueue(queue, neighbor, distances[neighbor]);
                }
            }
        }
    }
    // Find the room with the minimum maximum distance to other rooms
    int centerRoom = farthestRoom;
    int minMaxDistance = INF;
    for (int i = 0; i < graph.numRooms; i++) {
        if (distances[i] != INF && distances[i] < minMaxDistance) {
            minMaxDistance = distances[i];
            centerRoom = i;
        }
    }
    *numFriends = 0;
    // Count the number of friends (rooms with non-INF distances to the center)
    for (int i = 0; i < graph.numRooms; i++) {
        if (distances[i] != INF && i != centerRoom) {
            (*numFriends)++;
        }
    }
    // Return the center room of Harry's friends' component
    return centerRoom;
}

// Function to find the shortest path from source to destination using Dijkstra's algorithm
void dijkstraShortestPath(int source, int destination, Graph graph) {
    int distances[MAX_ROOMS];
    bool visited[MAX_ROOMS]; // Added
    // Initialize distances and visited arrays
    for (int i = 0; i < graph.numRooms; i++) {
        distances[i] = INF;
        visited[i] = false;
    }
    // Set distance of source to 0
    distances[source] = 0;
    // Dijkstra's algorithm
    for (int count = 0; count < graph.numRooms - 1; count++) {
        int minDistance = INF;
        int minIndex = -1;
        // Find vertex with minimum distance
        for (int v = 0; v < graph.numEdges; v++) { // Iterate over the number of edges
            if (!visited[v] && distances[v] <= minDistance) {
                minDistance = distances[v];
                minIndex = v;
            }
        }
        // Mark the picked vertex as visited
        visited[minIndex] = true;
        // Update distance value of the adjacent vertices
        for (int v = 0; v < graph.numEdges; v++) { // Iterate over the number of edges
            if (!visited[v] && graph.edges[v].weight && distances[minIndex] != INF && distances[minIndex] + graph.edges[minIndex].weight < distances[v]) {
                distances[v] = distances[minIndex] + graph.edges[minIndex].weight;
            }
        }
    }
    // Print the shortest distance from source to destination
    printf("Shortest distance from %d to %d: %d\n", source, destination, distances[destination]);
}

int main() {
    const char* filename = "g1-8.txt"; // Change this to the name of your map file
    // Step 1: Read the map from the file and construct the graph
    Graph graph = readMapFromFile(filename);
    // Step 2: Identify the main source room
    int mainSourceRoom = 0; // Assume the first room as the main source room
    // Step 3: Find the center of Harry's friends' component
    int numFriends = 0; // Number of Harry's friends
    int friendsComponentCenter = bfsFindCenterOfFriendsComponent(graph, mainSourceRoom, &numFriends); // Implement this function
    // Step 4: Find the shortest path from the main source to Harry's room
    int harrysRoom = friendsComponentCenter; // Placeholder
    dijkstraShortestPath(mainSourceRoom, harrysRoom, graph);
    // Step 5: Print out the questions and their answers
    printf("2.1. In which room does Harry find himself lost, in the beginning? Room %d\n", mainSourceRoom);
    printf("2.2. How many friends does Harry have on this map? %d\n", numFriends);
    printf("2.3. What vertex is Harry's dorm room? Room %d\n", harrysRoom);
    printf("2.4. What is a path with the least energy spending to safely reach his room? [Main Source Room -> Harry's Room]\n");
    printf("2.5. How much energy does the path consume (or gain)? 0 \n(Since the distance from the main source room to Harry's room is 0)\n");
    return 0;
}
```
The graph I used is:
```none
0 3 0 5 0 2 0 0
0 0 -4 0 0 0 0 0
0 0 0 0 0 0 0 4
0 0 0 0 6 0 0 0
0 0 0 -3 0 0 0 8
0 0 0 0 0 0 3 0
0 0 0 0 0 -6 0 7
0 0 0 0 0 0 0 0
```
But there is also another graph which is:
```none
0 2 0 0 1 0 0 0 0 0 0 0 2
0 0 0 0 0 -1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 3 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 1
0 0 -2 0 0 0 0 0 0 0 0 0 -1
0 0 0 0 0 0 0 0 -2 0 0 0 0
0 -2 0 0 0 0 0 1 0 0 0 0 0
0 0 0 0 -2 0 0 0 0 0 0 0 0
2 4 0 0 0 0 0 0 0 -1 0 0 0
0 0 0 2 0 0 0 0 0 0 0 0 0
0 0 -1 0 0 0 7 0 0 0 0 0 0
0 0 0 3 0 0 0 0 0 0 3 0 3
0 0 2 0 0 0 0 0 0 3 0 0 0
```
The result is:

```none
Shortest distance from 0 to 0: 0
1. In which room does Harry find himself lost, in the beginning? Room 0
2. How many friends does Harry have on this map? 0
3. What vertex is Harry's dorm room? Room 0
4. What is a path with the least energy spending to safely reach his room? [Main Source Room -> Harry's Room]
5. How much energy does the path consume (or gain)? 0
(Since the distance from the main source room to Harry's room is 0)
```

But it's not quite right, so is anyone able to help me here? |
For debugging with a GDB server on a host target I need to retrieve the IP address to pass to the "miDebuggerServerAddress" command. Since the IP will not always be the same, I have a bash script running as a preLaunchTask that returns the IP address of the host. Is there a way to remember this value or to pass it to the launch.json file afterward?
Here is a snippet of my launch.json file:

```json
{
    "name": "GDB test",
    "type": "cppdbg",
    "request": "launch",
    "preLaunchTask": "TaskGetIP",
    "program": "${input:promptPathDgbAppl}",
    "stopAtEntry": false,
    "cwd": "${workspaceFolder}",
    "miDebuggerServerAddress": "${return_of_GetIP}:41882",

    "environment": [],
    "externalConsole": false,
    "MIMode": "gdb",
    "miDebuggerPath": "/usr/bin/gdb",
    "setupCommands": [
        {
            "description": "Enable pretty-printing for gdb",
            "text": "-enable-pretty-printing",
            "ignoreFailures": true
        },
        {
            "description": "Set Disassembly Flavor to Intel",
            "text": "-gdb-set disassembly-flavor intel",
            "ignoreFailures": true
        }
    ],
},
```
Here is my tasks.json:

```json
{
    "label": "TaskGetIP",
    "type": "shell",
    "command": "GetIP.sh"
},
```
So I would like the return value of GetIP.sh to be stored or used in the "miDebuggerServerAddress" setting. Is there a way to do this?
Thank you |
Using VS Code launch.json and tasks.json, how can I use the return value of a script running in tasks.json inside launch.json |
When running into this HTTP 401 I noticed I had open privacy declarations for the app in question. After finishing the privacy/government declarations for this application, I could upload the .pem certificate. Hope this helps. |
I am developing a Flutter app where I want to schedule notifications from the app itself based on some conditions related to the user, but I was not able to find anything useful in the [OneSignal Flutter package](https://pub.dev/packages/onesignal_flutter).
I have seen this [question](https://stackoverflow.com/questions/56827952/sending-onesignal-notification-from-flutter) on stackoverflow that shows people are actually able to schedule notifications using OneSignal Flutter package, but, the `shared` property used in that example is not exposed anymore, and I couldn't find something useful in the [documentation](https://documentation.onesignal.com/docs/flutter-sdk-setup) of the OneSignal Flutter SDK.
Does anyone know if this functionality was removed from the OneSignal Flutter package, or just changed to something else?
I have installed the OneSignal package on my app, and it is perfectly working when I send a notification from OneSignal website, but, my goal is to schedule that notification from the app itself through OneSignal package. |
How can I schedule OneSignal notifications programmatically from a Flutter app |
|flutter|dart|push-notification|notifications|onesignal| |
1. Are you sure you did not accidentally update to 4.27.**3** or later? I got exactly your problem after I installed the 4.28.0 version - see below...
2. You need Hyper-V enabled for this. If you are using Windows Home Edition (check with PowerShell `Get-WindowsEdition -Online`, which should answer at least `Edition : Professional`) - no chance: upgrade your Windows to Professional Edition - see maybe [tag:docker-for-windows]?

From my point of view, the Docker Desktop version 4.28.0 currently seems to have a bug, because after I uninstalled 4.28.0 and replaced it with a fresh install of Docker Desktop version 4.27.2 (see [Docker Desktop release notes][2]), everything works fine for me with VS 2022 and ASP.NET 8.
... don't update Docker Desktop until this is fixed! ;)
In [GitHub, docker/for-win: ERROR: request returned Internal Server Error for API route and version...][1] there is a hint about upgrading WSL2, which might help too.
[1]: https://github.com/docker/for-win/issues/13909
[2]: https://docs.docker.com/desktop/release-notes/#4272 |
This is the first time I've written code in JavaScript, and I'm no better in other programming languages, but I managed to get the code to work (kinda).
I think this code could be way shorter, but there's also another problem that I don't quite get.
It would be really helpful for the future if someone could point out my mistakes and show me easier ways to do this.
```javascript
var main = function() {
//put the day of payment in here
const payday = 25;
//This code is needed for the following Calculation
const now = new Date();
currentDay = now.getDate();
const nextMonth = new Date(now.getFullYear(), now.getMonth() + 1, 1);
const diffDays = Math.ceil(Math.abs(nextMonth.getTime() - now.getTime()) / (1000 * 60 * 60 * 24));
now.setMonth(now.getMonth() + 1);
now.setDate(0);
const daysInCurrentMonth = now.getDate();
//Not necessary
//console.log('Days in current month: ' + daysInCurrentMonth);
//console.log('Days until next month: ' + diffDays);
//Calculation of days left until payment
//Here 25th is the day of Payment you can change that to your liking
const daysTilPayment = Math.ceil(diffDays - (daysInCurrentMonth - payday) - 1 );
//Folowing code is to determine if the payday is on a weekend
//Also it changes the paydate to 23th since thats how banks operate Here in Switzerland
//If your paydate will be changed to the closest workday (Friday) Then
//change the 2 on Rules for Saturdays to 1
//If that doesnt apply to you at all remove the 'Rules for Saturdays' + 'Rules for Sundays'
//Sunday = 0 and Saturday = 6
//to determine weekday of payment in current month
const d = new Date();
d.setDate(d.getDate() + daysTilPayment);
let paymentDay = d.getDay();
//to determine weekday of payment in next month
const thenextMonth = new Date (d.setMonth(d.getMonth() +4));
let paymentDay2 = d.getDay();
{
if (paymentDay >5)
//Rules for Saturdays
//To subtract 2
output = Math.ceil(daysTilPayment - 2);
//End of rules for Saturdays
else if (paymentDay <1)
//Rules for Sundays
//To subtract 2
output = Math.ceil(daysTilPayment - 2);
//End of rules for Sundays
else
//Rules for normal days
//does nothin
output = daysTilPayment
//End of rules for normal days
}
//this made the repetition above necessary
//it is to get rid of the countdown going negativ after payday
if (output < 0)
//output = Math.ceil(daysInCurrentMonth - currentDay + payday-2);
{
if (paymentDay2 >5)
//Rules for Saturdays
{//To put a 0 in front of single digit numbers and subtract 2
if (daysTilPayment+ daysInCurrentMonth - currentDay + payday -2 < 10)
output = ': 0'+Math.ceil(daysInCurrentMonth - currentDay + payday-2);
else
output = ': '+Math.ceil(daysInCurrentMonth - currentDay + payday-2);
}
//End of rules for Saturdays
else if (paymentDay2 <1)
//Rules for Sundays
{//To put a 0 in front of single digit numbers and subtract 2
if (daysTilPayment+ daysInCurrentMonth - currentDay + payday -2 < 10)
output = ': 0'+Math.ceil(daysInCurrentMonth - currentDay + payday-2);
else
output = ': '+Math.ceil(daysInCurrentMonth - currentDay + payday-2);
}
//End of rules for Sundays
else
//Rules for normal days
{//Just to put a 0 in front of single digit numbers
if (daysTilPayment+ daysInCurrentMonth - currentDay + payday < 10)
output = ': 0'+Math.ceil(daysInCurrentMonth - currentDay + payday);
else
output = ': '+Math.ceil(daysInCurrentMonth - currentDay + payday);
}
//End of rules for normal days
}
else
{
if (paymentDay >5)
//Rules for Saturdays
{//To put a 0 in front of single digit numbers and subtract 2
if (daysTilPayment -2 < 10)
output = ': 0'+Math.ceil(daysTilPayment - 2);
else
output = ': '+Math.ceil(daysTilPayment - 2);
}
//End of rules for Saturdays
else if (paymentDay <1)
//Rules for Sundays
{//To put a 0 in front of single digit numbers and subtract 2
if (daysTilPayment -2 < 10)
output = ': 0'+Math.ceil(daysTilPayment - 2);
else
output = ': '+Math.ceil(daysTilPayment - 2);
}
//End of rules for Sundays
else
//Rules for normal days
{//Just to put a 0 in front of single digit numbers
if (daysTilPayment < 10)
output = ': 0'+daysTilPayment;
else
output = ': '+daysTilPayment;
}
//End of rules for normal days
}
//for debug purpose
//console.log(output);
return output.toLocaleLowerCase('de-DE');
}
```
Since I want to use this in a widget, I'm returning the value with toLocaleLowerCase, but the output is sometimes a numeric result, which can't be returned that way.
To work around this I've put a ':' in front, so that the output is text and not a numeric result.
It fixed the issue, but now the widget always outputs ':nn' instead of just 'nn'.
It's not a big deal, I just want to know how it would have been done correctly.
I'm sorry if I've talked nonsense.
I really have no idea what I'm doing. |
Countdown to varying payday in Javascript |
|javascript|date|time|widget|countdown| |
Try using `datetime.datetime.utcnow()` |
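A quick sketch of the difference between the naive `utcnow()` result and the timezone-aware variant that newer Python versions recommend instead:

```python
from datetime import datetime, timezone

# Naive UTC timestamp, as suggested above (deprecated since Python 3.12)
naive_utc = datetime.utcnow()

# Timezone-aware equivalent, preferred on newer Python versions
aware_utc = datetime.now(timezone.utc)

print(naive_utc.tzinfo)   # None (naive)
print(aware_utc.tzinfo)   # UTC
```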
If you have an infinite `Stream` like a `NetworkStream`, replacing string tokens on-the-fly would make a lot of sense. But because you are processing a finite stream of which you need the *complete* content, such a filtering doesn't make sense because of the performance impact.
My argument is that you would have to use buffering. The size of the buffer is of course restricting the amount of characters you can process. Assuming you use a ring buffer or kind of a queue, you would have to remove one character to append a new. This leads to a lot of drawbacks when compared to processing the complete content.
Pros & Cons
| Full-content search & replace | Buffered (real-time) search & replace |
| -------- | -------------- |
| Single pass over the full range | multiple passes over the same range |
| Advanced search (e.g. backtracking, look-ahead etc.) | Because characters get dropped at the start of the buffer, the search engine loses context |
| Because we have the full context available, we won't ever miss a match | Because characters get dropped at the start of the buffer, there will be edge cases where the searched token does not fit into the buffer. Of course, we can resize the buffer to a length = n * token_length. However, this limitation impacts performance as the optimal buffer size is now constrained by the token size. The worst case is that the token length equals the length of the stream. Based on the search algorithm we would have to keep input in the buffer until we match all key words and then discard the matched part. If there is no match the buffer size would equal the size of the stream. In streaming scenarios e.g. network socket, we don't even know the stream size in advance (expect it to be infinite). In this case we have to define a "random" buffer size and risk to lose matches that span across multiple buffer reads. It makes sense to evaluate the expected input and search keys for the worst-case scenarios and switch to a full-context search to get rid of the buffer management overhead. Just to highlight that the efficiency of a real-time search is very much limited by the input and the search keys (expected scenario). It can't be faster than full-context search. It potentially consumes less memory under optimal circumstances. |
| Don't have to worry about an optimal buffer size to maximize efficiency | Buffer size becomes more and more important the longer the source content is. Too small buffers result in too many buffer shifts and too many passes over the same range. Note, the key cost for the original search & replace task is the string search and the replace/modification, so it makes sense to reduce the number of comparisons to an absolute minimum. |
| Consumes more memory. However, this is not relevant in this case as we will store the complete response in (client) memory anyways. And if we use the appropriate text search technique, we avoid all the extra `string` allocations that occur during the search & replace. | Only the size of the buffers are allocated. However, this is not relevant in this case as we will store the complete response in (client) memory anyways.
| Does not allow to filter the stream in real-time (not relevant in this case)). However, a hybrid solution is possible where we would filter certain parts in real-time (e.g. a preamble) | Allows to filter the stream in real-time so that we can make decisions based on content e.g. abort the reading (not relevant in this case).|
I stop here, as the most relevant performance costs of search & replace are better for the full-content solution. For the real-time search & replace we would basically have to implement our own algorithm that has to compete against the .NET search and replace algorithms. No problem, but considering the effort and the final use case I would not waste any time on that.
An efficient solution could implement a custom `TextReader`, an advanced `StreamReader` that operates on a `StringBuilder` to search and replace characters. While `StringBuilder` offers a significant performance advantage over string search & replace, it does not allow complex search patterns like word boundaries. For example, word boundaries are only possible if the pattern explicitly includes the bounding characters.
For example, replacing "int" in the input "internal int " with "pat" produces "paternal pat". If we want to replace only "int" in "internal int ", we would have to use a regular expression. Because regular expressions only operate on `string`, we have to pay with efficiency.
The following example implements a `StringReplaceStreamReader` that extends `TextReader` to act as a specialized `StreamReader`. For best performance, tokens are replaced *after* the complete stream has been read.
For brevity it only supports the `ReadToEndAsync`, `Read`, and `Peek` methods.
It supports simple search where the search pattern is simply matched against the input (called simple-search).
Then it also supports two variants of regular expression search and replace for more advanced search and replace scenarios.
The first variant is based on a set of key-value pairs while the second variant uses a regex pattern provided by the caller.
Because simple-search involves iteration of the source dictionary entries + multiple passes (one for each entry) this search mode is expected to be the slowest algorithm, although the replace using a `StringBuilder` itself is actually faster. Under these circumstances, the `Regex` search & replace is expected to be significantly faster than the simple search and replace using the `StringBuilder` as it can process the input in a single pass.
The `StringReplaceStreamReader` search behavior is configurable via constructor.
Usage example
------------------
```c#
private Dictionary<string, string> replacementTable;
private async Task ReplaceInStream(Stream sourceStream)
{
this.replacementTable = new Dictionary<string, string>
{
{ "private", "internal" },
{ "int", "BUTTON" },
{ "Read", "Not-To-Read" }
};
// Search and replace using simple search
// (slowest).
await using var streamReader = new StringReplaceStreamReader(sourceStream, Encoding.Default, this.replacementTable, StringComparison.OrdinalIgnoreCase);
string text = await streamReader.ReadToEndAsync();
// Search and replace variant #1 using regex with key-value pairs instead of a regular expression pattern
// for advanced scenarios (fast)
await using var streamReader2 = new StringReplaceStreamReader(sourceStream, Encoding.Default, this.replacementTable, SearchMode.Regex);
string text2 = await streamReader2.ReadToEndAsync();
// Search and replace variant #2 using regex and regular expression pattern
// for advanced scenarios (fast).
// The matchEvaluator callback actually provides the replacement value for each match.
// Creates the following regular expression:
// "\bprivate\b|\bint\b|\bRead\b"
string searchPattern = this.replacementTable.Keys
.Select(key => $@"\b{key}\b")
.Aggregate((current, newValue) => $"{current}|{newValue}");
await using var streamReader3 = new StringReplaceStreamReader(sourceStream, Encoding.Default, searchPattern, Replace);
string text3 = await streamReader3.ReadToEndAsync();
}
private string Replace(Match match)
=> this.replacementTable.TryGetValue(match.Value, out string replacement)
? replacement
: match.Value;
```
Implementation
-------------------
**SearchMode**
```c#
public enum SearchMode
{
Default = 0,
Simple,
Regex
}
```
**StringReplaceStreamReader.cs**
```c#
public class StringReplaceStreamReader : TextReader, IDisposable, IAsyncDisposable
{
public Stream BaseStream { get; }
public long Length => this.BaseStream.Length;
public bool EndOfStream => this.BaseStream.Position == this.BaseStream.Length;
public SearchMode SearchMode { get; }
private const int DefaultCapacity = 4096;
private readonly IDictionary<string, string> stringReplaceTable;
private readonly StringComparison stringComparison;
private readonly Encoding encoding;
private readonly Decoder decoder;
private readonly MatchEvaluator? matchEvaluator;
private readonly Regex? regularExpression;
private readonly byte[] byteBuffer;
private readonly char[] charBuffer;
public StringReplaceStreamReader(Stream stream, Encoding encoding, IDictionary<string, string> stringReplaceTable)
: this(stream, encoding, stringReplaceTable, StringComparison.OrdinalIgnoreCase, SearchMode.Simple)
{
}
public StringReplaceStreamReader(Stream stream, Encoding encoding, IDictionary<string, string> stringReplaceTable, SearchMode searchMode)
: this(stream, encoding, stringReplaceTable, StringComparison.OrdinalIgnoreCase, searchMode)
{
}
public StringReplaceStreamReader(Stream stream, Encoding encoding, IDictionary<string, string> stringReplaceTable, StringComparison stringComparison)
: this(stream, encoding, stringReplaceTable, stringComparison, SearchMode.Simple)
{
}
public StringReplaceStreamReader(Stream stream, Encoding encoding, IDictionary<string, string> stringReplaceTable, StringComparison stringComparison, SearchMode searchMode)
{
ArgumentNullException.ThrowIfNull(stream, nameof(stream));
ArgumentNullException.ThrowIfNull(stringReplaceTable, nameof(stringReplaceTable));
this.BaseStream = stream;
this.encoding = encoding ?? Encoding.Default;
this.decoder = this.encoding.GetDecoder();
this.stringReplaceTable = stringReplaceTable;
this.stringComparison = stringComparison;
this.SearchMode = searchMode;
this.regularExpression = null;
this.matchEvaluator = ReplaceMatch;
if (searchMode is SearchMode.Regex)
{
RegexOptions regexOptions = CreateDefaultRegexOptions(stringComparison);
var searchPatternBuilder = new StringBuilder();
foreach (KeyValuePair<string, string> entry in stringReplaceTable)
{
// Creates the following regular expression:
// "\b[search_key]\b|\b[search_key]\b"
string pattern = @$"\b{entry.Key}\b";
searchPatternBuilder.Append(pattern);
searchPatternBuilder.Append('|');
}
string searchPattern = searchPatternBuilder.ToString().TrimEnd('|');
this.regularExpression = new Regex(searchPattern, regexOptions);
}
this.byteBuffer = new byte[StringReplaceStreamReader.DefaultCapacity];
int charBufferSize = this.encoding.GetMaxCharCount(this.byteBuffer.Length);
this.charBuffer = new char[charBufferSize];
}
public StringReplaceStreamReader(Stream stream, Encoding encoding, string searchAndReplacePattern, MatchEvaluator matchEvaluator)
: this(stream, encoding, searchAndReplacePattern, matchEvaluator, RegexOptions.None)
{
}
public StringReplaceStreamReader(Stream stream, Encoding encoding, string searchAndReplacePattern, MatchEvaluator matchEvaluator, RegexOptions regexOptions)
{
ArgumentNullException.ThrowIfNull(stream, nameof(stream));
ArgumentException.ThrowIfNullOrWhiteSpace(searchAndReplacePattern, nameof(searchAndReplacePattern));
ArgumentNullException.ThrowIfNull(matchEvaluator, nameof(matchEvaluator));
this.BaseStream = stream;
this.encoding = encoding ?? Encoding.Default;
this.decoder = this.encoding.GetDecoder();
this.matchEvaluator = matchEvaluator;
this.SearchMode = SearchMode.Regex;
this.stringReplaceTable = new Dictionary<string, string>();
this.stringComparison = StringComparison.OrdinalIgnoreCase;
if (regexOptions is RegexOptions.None)
{
regexOptions = CreateDefaultRegexOptions(stringComparison);
}
else if ((regexOptions & RegexOptions.Compiled) == 0)
{
regexOptions |= RegexOptions.Compiled;
}
this.regularExpression = new Regex(searchAndReplacePattern, regexOptions);
this.byteBuffer = new byte[StringReplaceStreamReader.DefaultCapacity];
int charBufferSize = this.encoding.GetMaxCharCount(this.byteBuffer.Length);
this.charBuffer = new char[charBufferSize];
}
public override int Peek()
{
int value = Read();
this.BaseStream.Seek(this.BaseStream.Position - 1, SeekOrigin.Begin);
return value;
}
public override int Read()
=> this.BaseStream.ReadByte();
public override Task<string> ReadToEndAsync()
=> ReadToEndAsync(CancellationToken.None);
public override async Task<string> ReadToEndAsync(CancellationToken cancellationToken)
{
if (!this.BaseStream.CanRead)
{
throw new InvalidOperationException("Source stream is not readable.");
}
var textBuilder = new StringBuilder(StringReplaceStreamReader.DefaultCapacity);
int bytesRead = 0;
int charsRead = 0;
while (!this.EndOfStream)
{
cancellationToken.ThrowIfCancellationRequested();
bytesRead = await this.BaseStream.ReadAsync(this.byteBuffer, 0, this.byteBuffer.Length, cancellationToken);
bool flush = this.EndOfStream;
charsRead = this.decoder.GetChars(this.byteBuffer, 0, bytesRead, this.charBuffer, 0, flush);
textBuilder.Append(charBuffer, 0, charsRead);
}
cancellationToken.ThrowIfCancellationRequested();
SearchAndReplace(textBuilder, cancellationToken, out string result);
return result;
}
public ValueTask DisposeAsync()
=> ((IAsyncDisposable)this.BaseStream).DisposeAsync();
private void SearchAndReplace(StringBuilder textBuilder, CancellationToken cancellationToken, out string result)
{
cancellationToken.ThrowIfCancellationRequested();
if (this.SearchMode is SearchMode.Simple or SearchMode.Default)
{
foreach (KeyValuePair<string, string> entry in this.stringReplaceTable)
{
cancellationToken.ThrowIfCancellationRequested();
textBuilder.Replace(entry.Key, entry.Value);
}
result = textBuilder.ToString();
}
else if (this.SearchMode is SearchMode.Regex)
{
string input = textBuilder.ToString();
result = this.regularExpression!.Replace(input, this.matchEvaluator!);
}
else
{
throw new NotImplementedException($"Search mode {this.SearchMode} is not implemented.");
}
}
private string ReplaceMatch(Match match)
=> this.stringReplaceTable.TryGetValue(match.Value, out string replacement)
? replacement
: match.Value;
private RegexOptions CreateDefaultRegexOptions(StringComparison stringComparison)
{
RegexOptions regexOptions = RegexOptions.Multiline | RegexOptions.Compiled;
if (stringComparison is StringComparison.CurrentCultureIgnoreCase or StringComparison.InvariantCultureIgnoreCase or StringComparison.OrdinalIgnoreCase)
{
regexOptions |= RegexOptions.IgnoreCase;
}
return regexOptions;
}
}
``` |
Since MultiIndexes are iterables, an easy way would be to use [`zip`](https://docs.python.org/3/library/functions.html#zip):
```
out = list(zip(*idx))
```
Output: `[('A', 'A', 'B', 'B'), ('C', 'D', 'C', 'D')]`
As lists:
```
out = list(map(list, zip(*idx)))
# [['A', 'A', 'B', 'B'], ['C', 'D', 'C', 'D']]
```
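For a quick self-contained check, here is a small runnable sketch (the index contents are hypothetical, mirroring the output shown above):

```python
import pandas as pd

# Build a small MultiIndex like the one in the question
idx = pd.MultiIndex.from_product([["A", "B"], ["C", "D"]])

# Transpose the row tuples into one tuple per level
out = list(zip(*idx))
print(out)  # [('A', 'A', 'B', 'B'), ('C', 'D', 'C', 'D')]

# Equivalent, and faster on large indexes: read each level directly
levels = [list(idx.get_level_values(i)) for i in range(idx.nlevels)]
print(levels)  # [['A', 'A', 'B', 'B'], ['C', 'D', 'C', 'D']]
```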
#### timings
Tested on 20 levels, ~1M rows:
```
# list(zip(*idx))
733 ms ± 17.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# [idx.get_level_values(i) for i in range(len(idx[0]))]
175 ms ± 3.59 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# list(map(idx.get_level_values, range(idx.nlevels)))
175 ms ± 969 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
# [[x for x in idx.get_level_values(level)] for level in range(idx.nlevels)]
1.15 s ± 15.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
``` |
Vanilla extract Next.js storybook: Can't resolve @vanilla-extract/css/recipe' |
|webpack|next.js|storybook|vanilla-extract| |
The problem in your initial definition of the class is that you've written:
class name(object, name):
This means that the class inherits from the base class `object` and a base class called `name`. However, there is no base class called `name`, so it fails. Instead, all you need to do is accept the variable in the special `__init__` method, which means the class takes it as a parameter.
class name(object):
def __init__(self, name):
print(name)
If you want to use the variable in other methods that you define within the class, you can assign `name` to `self.name` and use that in any other method of the class without needing to pass it in again.
For example:
class name(object):
def __init__(self, name):
self.name = name
def PrintName(self):
print(self.name)
a = name('bob')
a.PrintName()
bob
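For completeness, here is a small sketch showing why the original definition fails: the base-class list is evaluated when the `class` statement runs, and `name` does not exist yet at that point.

```python
# The base-class lookup happens at class-definition time,
# so referencing `name` before it exists raises a NameError.
try:
    class name(object, name):
        pass
except NameError as error:
    message = str(error)

print(message)  # name 'name' is not defined
```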
|
null |
|java|java-17| |
|apache-kafka|micronaut| |
Suppose that I have a dataset with firms and the associated addresses. In my analysis I am putting them into different groups. So, my dataframe, which I call ResultPlot, could look like this:
**Name Address Group**
Firm1, Berlin, group1
Firm2, Copenhagen, NA
Firm3, Amsterdam, group2
Firm4, Stockholm, group1
Firm5, Frankfurt, group2
...
In this example, Firm1 and Firm4 are put in the same group, and Firm3 and Firm5 are put in another group. In general a group may consist of any number of firms, and a firm may also be independent (no group). I can plot the group structure on a map by using the following code:
```
library(leaflet)
MyDomainColors=ResultPlot %>% select(Group) %>% distinct() %>% arrange()
MyDomainColors=as.vector(MyDomainColors[,1])
pal2 <- colorFactor(rainbow(length(MyDomainColors)), domain = MyDomainColors,na.color = "#808080")
leaflet(ResultPlot) %>% addProviderTiles(providers$CartoDB.PositronNoLabels) %>% addCircleMarkers(label=ResultPlot$Group, stroke = FALSE, fillOpacity = 1, radius=5, color = ~pal2(Group))
```
But I would also like a table (can be a picture) where firms are in the first column, while the next columns show the group structures. Here is an example that I made in Excel with one group structure (firm1 and firm4; firm3 and firm5):
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/GkBRh.png
I mean, the idea is that I change the column "Group" in the dataframe "ResultPlot", and the colors in the table should be adjusted automatically. I would like to have the same colors as in my leaflet plot (here I use "rainbow" which I found on the internet, but I am fine with setting colors in a different way).
Also, I might look into different group structures, so it would be perfect if it was possible to add more columns with group structures ("Group structure 2", "Group structure 3", etc.). |
I would like a table where cells are colored (defined by values) |
|visual-studio-code|vscode-extensions| |
**[Important concepts][1]**
- The memory request is mainly used during (Kubernetes) Pod scheduling.
- The memory limit defines a memory limit for that cgroup.
According to the article [Containerize your Java applications][2] the best way to configure your JVM is to use the following JVM args:
-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0
Along with that you should always [set the JVM to crash][3] if it runs out of memory. There is nothing worse than a health endpoint that thinks it's healthy, but the JVM has run out of memory!
-XX:+CrashOnOutOfMemoryError
***Note, there is a bug where you need to specify 75.0 and not 75***
To simulate what happens in Kubernetes with limits in the Linux container run:
docker run --memory="300m" eclipse-temurin:21-jdk java -XX:+UseContainerSupport -XX:MinRAMPercentage=50.0 -XX:MaxRAMPercentage=75.0 -XX:+CrashOnOutOfMemoryError -XshowSettings:vm -version
result:
VM settings:
Max. Heap Size (Estimated): 218.50M
Using VM: OpenJDK 64-Bit Server VM
It also works on old school Java 8:
docker run --memory="300m" eclipse-temurin:8-jdk java -XX:+UseContainerSupport -XX:MinRAMPercentage=50.0 -XX:MaxRAMPercentage=75.0 -XX:+CrashOnOutOfMemoryError -XshowSettings:vm -version
This way the JVM will read your memory limit from the cgroups (cgroups v1 or cgroups v2). Having a limit is extremely important to prevent evictions and noisy neighbours. I personally set the limit 10% over the request.
Older builds of Java, including early Java 8 releases, don't read cgroups v2, and Docker Desktop uses cgroups v2.
[To force Docker Desktop to use legacy cgroups1][4] set `{"deprecatedCgroupv1": true}` in `~/Library/Group\ Containers/group.com.docker/settings.json`
[1]: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run
[2]: https://learn.microsoft.com/en-us/azure/developer/java/containers/overview
[3]: https://www.codementor.io/@suryab/outofmemoryerror-related-jvm-arguments-w6e4vgipt
[4]: https://github.com/docker/for-mac/issues/6073#issuecomment-1028933577 |
I have a table where I store the following:
```
uuid, client_code, account, item, quantity, timestamp
```
Every time an item quantity is updated somewhere else, a new record is created in this table and items can be repeated by account from the client, so I could end up with several records per client for the same or different accounts with the same item, but different quantity.
I'm trying to come up with a query that would give me the maximum aggregated quantities by client and the time. As an example:
| Timestamp | item | quantity | account |
| -------- | -------- | -------- | -------- |
| 2024-01-01T01:00:10 | 1245 | 10 | a |
| 2024-01-01T05:00:10 | 1245 | 300 | a |
| 2024-01-01T04:00:10 | 1245 | 20 | b |
| 2024-01-01T07:00:10 | 1111 | 30 | a |
For this example, it'd need to return that the highest quantity for item `1245` was at 2024-01-01T05:00:10 with 320, 300 from account a, and 20 from account b since it was the latest movement.
For item `1111` the highest quantity was at 2024-01-01T07:00:10 with a total quantity of 30.
I've tried to figure this out many times but got nowhere. I tried several queries and window functions, but couldn't make it work.
Could someone help me see how this could be achieved? Thanks!
|
I am trying to remove the last occurrence of the string where there is a hyphen (-), leaving alone all the other strings that do not meet that criterion:
|Name1 |ID |ID_SPLIT |ID_SPLIT2
|:---------:|----------------:|-------------------:|---------------------
|FUH-V2 |FUH07V2NUM | |
|FUH-V2 |FUHV2-DEN | FUHV2 | FUHV2
|FUH-V2 |FUH30V2NUM | |
df['ID'].str.split('-').str[:-1].str.join('-')
|
null |
This is a frequently asked question about how powershell formats output. Aside from making the window bigger:
```
gwmi Win32_Process | % CommandLine
sihost.exe
C:\Windows\system32\svchost.exe -k UnistackSvcGroup
taskhostw.exe {222A245B-E637-4AE9-A93F-A59CA119A75E}
```
Note that get-process in powershell 7 has the commandline property too. You can add it in powershell 5.1 like this in your $profile:
```
$updateTypeData = @{
TypeName = 'System.Diagnostics.Process'
MemberName = 'CommandLine'
MemberType = [Management.Automation.PSMemberTypes]::ScriptProperty
Force = $true
Value = { (Get-CimInstance Win32_Process -Filter "ProcessId = $($this.Id)").CommandLine }
}
Update-TypeData @updateTypeData
```
```
get-process notepad | % commandline
"C:\Windows\system32\notepad.exe"
```
|
I am using Google's [Vertex AI SDK for Python](https://cloud.google.com/python/docs/reference/aiplatform/latest) to access the [Gemini Pro](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/gemini) model and generate content. I would like to set a timeout (e.g. 30 seconds) for the generate content call to complete or raise an exception.
Setting a timeout is easy if I use an HTTP library like Requests and query the Gemini REST endpoint directly, but how can I implement the same functionality with the Vertex AI SDK for Python?
Here is an example of the code I use to generate content:
```
from vertexai import init
from vertexai.preview.generative_models import GenerativeModel
from google.oauth2.service_account import Credentials
credentials = Credentials.from_service_account_file(
'path/to/json/credential/file.json')
init(
project=credentials.project_id, location='northamerica-northeast1',
credentials=credentials)
model = GenerativeModel('gemini-pro')
response = model.generate_content("Pick a number")
print("Pick a number:", response.candidates[0].content.text)
``` |
How to set a timeout on Google Gemini generate content request with the Vertex AI SDK for Python |
I've been trying to solve the problem using your code, but I couldn't. However, I'm quite sure the problem is the way you are defining `target`.
Your code needs some kind of restriction which prevents two or more drones from having the same target patch.
Therefore, I can think of 2 alternatives:
1. Make sure your drones update the target all the time, so they will choose another one if the patch they had chosen has just been reached by another drone.
2. When defining the target, ensure a drone can't choose a patch which has already been set as a target by another drone.
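The second alternative can be sketched language-agnostically. Here is a minimal Python illustration (not NetLogo; the drone/patch representation and the `distance` helper are hypothetical, just to show the claiming idea):

```python
# Hypothetical sketch: each drone claims a target patch, and claimed
# patches are excluded from the choices of the other drones.
claimed = set()

def distance(drone, patch):
    # Plain Euclidean distance between (x, y) positions
    return ((drone[0] - patch[0]) ** 2 + (drone[1] - patch[1]) ** 2) ** 0.5

def pick_target(drone, candidate_patches):
    # Only consider patches no other drone has already claimed
    free = [p for p in candidate_patches if p not in claimed]
    if not free:
        return None  # nothing left to target
    target = min(free, key=lambda p: distance(drone, p))
    claimed.add(target)
    return target
```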
Hope this can help you! |
I had to switch to the new preview tool "Index Lookup" because the outdated "Vector DB Lookup" no longer appears in my tool list. Unfortunately, it is no longer possible to specify a filter for the Azure AI search. Does anyone know whether the option still exists or how it could be solved differently?
I can't find the setting and there's nothing in the documentation about it. |
Missing filter option in Prompt flow tool "Index Lookup" |
|azure-promptflow| |
null |
Below is a workaround solution that has been identified. In this approach we rely on adb (Android Debug Bridge) commands to switch between the primary and secondary display. Whenever the script needs to perform an action on the secondary screen, the app activity running on that screen is switched to the primary screen so that Appium can continue the execution without interruption.
Switching screens can be achieved using **adb shell cmd activity display move-stack <taskID> <displayID>**, which moves the task with taskID (the task ID of the activity) to the display with displayID (display 0 represents primary, 1 represents secondary, and so on).
To capture the task ID we can use another adb command, **adb shell am stack list**, which lists all the stacks along with the task IDs of all activities running on the device (the command needs to be executed while the application is running, either on an emulator or a real device).
These adb commands can be integrated into the Appium scripts so that we can dynamically capture the task ID and flip the screen at run time. The calls are made once we start the application or attach to an existing one.
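As a sketch of how these adb calls could be driven from a Python Appium script (the package name and the exact `am stack list` output format are assumptions; the format varies between Android versions, so verify it on your device first):

```python
import re
import subprocess

def parse_task_id(stack_list_output, package):
    """Find the taskId of the activity belonging to `package` in the
    output of `adb shell am stack list` (format assumed: taskId=<n>)."""
    for line in stack_list_output.splitlines():
        if package in line:
            match = re.search(r"taskId=(\d+)", line)
            if match:
                return int(match.group(1))
    return None

def move_to_primary_display(package):
    # Capture the task id at run time, then move it to display 0
    output = subprocess.run(
        ["adb", "shell", "am", "stack", "list"],
        capture_output=True, text=True, check=True,
    ).stdout
    task_id = parse_task_id(output, package)
    if task_id is None:
        raise RuntimeError("No running task found for " + package)
    subprocess.run(
        ["adb", "shell", "cmd", "activity", "display",
         "move-stack", str(task_id), "0"],
        check=True,
    )
```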
Challenges with this approach:
1. Switching the screens of the application may affect its behaviour because of display size, activity settings, etc., so it is advised to discuss this approach with the team and adopt it only after sufficient testing.
2. Switching screens can lead to flaky tests, so check the execution behaviour of the scripts before fully adopting this approach into your framework.
3. The adb command may not get executed from the Appium scripts for different reasons:
a. Improper setting of the adb path
b. JDK version issues
c. Permission issue within AndroidManifest.xml: to enable this, the line `<uses-permission android:name="android.permission.DUMP" />` needs to be added, after which the APK needs to be recompiled and reinstalled (basically a different version). |
is there any way to do trigger from Jenkins job build as(folder) to Jenkins job build as( pipeline) ?
I need to trigger from configure in said Jenkins job .
I can do trigger from project job build as( pipeline) to project job build as( pipeline) but any job build as(folder) I can't .
could any one please help me and explain why and how I can do trigger
Note : I'm admin in Jenkins . |
trouble to trigger Jenkins job |
|jenkins|triggers|jenkins-pipeline|jenkins-groovy|helper| |
null |
I have an SQL query exactly as described in [this post][1]. To sum it up, it reads all Cardboards specified with an offset. This query is executed on a MariaDB database.<br>
Now, I have my Cardboards (ID, Cardboard_number, DateTime, Production_LineNumber) in my C# ASP.NET program. I have to read all production processes in which each Cardboard was used (basically production.start <= cardboard.datetime <= production.end).
The table for the Productions in the Oracle database looks like this (I did not create that table myself and I am not able to change anything because it is used in a production program as well):
- PRODUCTION_NUMBER (NUMBER)
- POSNR (NUMBER)
- DATETIME (TIMESTAMP(6))
- PROCESS_ACTIVE (VARCHAR2(1))
- PRODUCTION_LINE (NUMBER)
The PROCESS_ACTIVE column is used like a flag, when starting a production process a row is inserted with DATETIME=sysdate, PROCESS_ACTIVE = 1 and a stopped one is indicated with sysdate, PROCESS_ACTIVE = 0.
I have created a query which sums up my processes, so I get a Start und End for each PRODUCTION_NUMBER and POSNR:
```
SELECT *
FROM
(
SELECT PRODUCTION_NUMBER, POSNR, PROCESS_ACTIVE, PRODUCTION_LINE, DATETIME as START, LEAD(DATETIME, 1, SYSDATE) OVER (ORDER BY DATETIME ASC) AS END
FROM ssq_lauftr
GROUP BY PRODUCTION_NUMBER, POSNR, PROCESS_ACTIVE, PRODUCTION_LINE, DATETIME
ORDER BY DATETIME
)
WHERE PROCESS_ACTIVE = 1
```
I iterate over all Cardboards retrieved from the MariaDB in my C# code and execute this query for **every cardboard** (where *cardboard* is the injected object from my C# loop):
```
SELECT *
FROM
(
SELECT PRODUCTION_NUMBER, POSNR, PROCESS_ACTIVE, PRODUCTION_LINE, DATETIME as START, LEAD(DATETIME, 1, SYSDATE) OVER (ORDER BY DATETIME ASC) AS END
FROM ssq_lauftr
WHERE PRODUCTION_LINE = cardboard.PRODUCTION_LINENUMBER and DATETIME <= cardboard.DATETIME
GROUP BY PRODUCTION_NUMBER, POSNR, PROCESS_ACTIVE, PRODUCTION_LINE, DATETIME
ORDER BY DATETIME
)
WHERE PROCESS_ACTIVE = 1 AND cardboard.DATETIME <= END
```
which is working quite well with a low number of cardboards.
The problem with this solution is that if I have a lot of Cardboards, the whole function for reading all Productions takes too long. Is there a way (e.g. with PL/SQL) to make this process more efficient? The SQL statement above is quite fast, but iterating over the list in C# and adding the results to my ProductionSet slows down the application a lot.
**Edit:**<br>
The cardboards are stored in the MariaDB, while the Productions are stored in the Oracle DB.
My current C# code for the described functionality looks like this:
Cardboards = [.. _mariaDB.Cardboards.FromSql($@"
SET @cb_num = {request.Cardboard_Number};
select *
from (
SELECT *,
sum(Cardboard_Number LIKE CONCAT(@cb_num, '%')) over (
partition by ProductionLine_Number
order by timestamp, id
rows BETWEEN {CardboardOffset} preceding AND {CardboardOffset} following
) matches
from cardboards
) cardboards_with_matches
where matches
")];
HashSet<Production> productionSet = [];
for(int i = 0; i < Cardboards.Count(); i++)
{
productionSet.UnionWith(_oracleDB.Production.FromSqlRaw($@"
SELECT *
FROM
(
SELECT PRODUCTION_NUMBER, POSNR, PROCESS_ACTIVE, PRODUCTION_LINE, DATETIME as START, LEAD(DATETIME, 1, SYSDATE) OVER (ORDER BY DATETIME ASC) AS END
FROM Productions
WHERE PRODUCTION_LINE = {Cardboards.ElementAt(i).ProductionLine_Number} and {Cardboards.ElementAt(i).DateTime} <= cardboard.DATETIME
GROUP BY PRODUCTION_NUMBER, POSNR, PROCESS_ACTIVE, PRODUCTION_LINE, DATETIME
ORDER BY DATETIME
)
WHERE PROCESS_ACTIVE = 1 AND {Cardboards.ElementAt(i).DateTime} <= END
"));
}
[MariaDB fiddle][2] - Fiddle for Cardboards
[1]: https://stackoverflow.com/questions/78224006/sql-how-to-get-elements-after-and-before-the-searched-one
[2]: https://dbfiddle.uk/7sFrE5pU |
```
PS F:\SCALER\NODE JS\Module1\mongoose> npm install mongoose
npm ERR! code EPERM
npm ERR! syscall mkdir
npm ERR! path F:\
npm ERR! errno -4048
npm ERR! Error: EPERM: operation not permitted, mkdir 'F:\'
npm ERR! [Error: EPERM: operation not permitted, mkdir 'F:\'] {
npm ERR! errno: -4048,
npm ERR! code: 'EPERM',
npm ERR! syscall: 'mkdir',
npm ERR! path: 'F:\\'
npm ERR! }
npm ERR!
npm ERR! The operation was rejected by your operating system.
npm ERR! It's possible that the file was already in use (by a text editor or antivirus),
npm ERR! or that you lack permissions to access it.
npm ERR!
npm ERR! If you believe this might be a permissions issue, please double-check the
npm ERR! permissions of the file and its containing directories, or try running
npm ERR! the command again as root/Administrator.
npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\Lab\AppData\Local\npm-cache\_logs\2024-03-28T11_14_21_900Z-debug-0.log
PS F:\SCALER\NODE JS\Module1\mongoose>
```
Not able to install mongoose in this particular folder |
Postgresql find aggregated maximum value by different times |
|postgresql| |
null |
I'm trying to get my s3 bucket working to store access logs. Below is how I'm deploying the required policy for it using terraform.
```
resource "aws_s3_bucket_policy" "bucket_logging_policy" {
bucket = aws_s3_bucket.s3_access_logs_bucket.id
policy = jsonencode({
"Version" : "2012-10-17",
"Statement" : [
{
"Effect" : "Allow",
"Principal" : {
"Service" : "logging.s3.amazonaws.com"
},
"Action" : "s3:PutObject",
"Resource" : "arn:aws:s3:::${aws_s3_bucket.s3_access_logs_bucket.id}/*"
}
]
})
}
```
While building however, Sonarqube is throwing errors in below lines:
"Effect" : "Allow" -\> Non-conforming requests should be denied.
"Action" : "s3:PutObject", -\> All S3 actions should be restricted.
I tried the general suggestions given by Sonarqube (below) and AWS, but no luck. I'm new to this, so I'm having trouble figuring it out. Any idea where the policy is not adhering to standards?
```
resource "aws_s3_bucket_policy" "bucket_logging_policy" {
bucket = aws_s3_bucket.s3_access_logs_bucket.id
policy = jsonencode({
"Version" = "2012-10-17",
"Statement" = [
{
"Effect": "Allow",
"Principal": {
"Service": "logging.s3.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::${aws_s3_bucket.s3_access_logs_bucket.id}/*"
},
{
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::${aws_s3_bucket.s3_access_logs_bucket.id}/*",
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
}
]
})
}
```
Also made sure I'm compliant with conditions given here https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-server-access-logging.html |
Sonarqube not allowing me to set policy for S3 bucket |
|amazon-web-services|amazon-s3|terraform|sonarqube|sonarqube-scan| |
null |
You can run the same thing but programmatically create the cases.
In your example, you have a short list of colours, but if you have many different categories, you can run something like:
SET SESSION group_concat_max_len = 10000;
SET @cases = NULL;
SELECT
GROUP_CONCAT(
DISTINCT
CONCAT(
0xd0a,
"COUNT(CASE WHEN color = '",
color,
"' THEN 1 END) AS ",
color
)
SEPARATOR ','
) INTO @cases
FROM table_name t;
SET @sql = CONCAT(
"SELECT
t.keyword,",
@cases,
"
FROM table_name t
GROUP BY keyword
"
);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
|
I'm not sure if there's a right place for this question, but I'd like to know what you think. To what extent is it good practice to break wide HTML class lines? I'm using Tailwind Elements (tw-elements) components with Tailwind CSS, and for some simple code examples like a checkbox, it uses very wide lines. I've started breaking the lines for better readability, but the problem is that this produces a giant block of code for a simple checkbox. What do you think about that? What do you usually do in these cases?
A example of what was said:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-html -->
<input
class="relative float-left -ms-[1.5rem] me-[6px] mt-[0.15rem] h-[1.125rem] w-[1.125rem]
appearance-none rounded-[0.25rem] border-[0.125rem] border-solid border-secondary-500 outline-none
before:pointer-events-none before:absolute before:h-[0.875rem] before:w-[0.875rem] before:scale-0
before:rounded-full before:bg-transparent before:opacity-0 before:shadow-checkbox before:shadow-transparent
before:content-[''] checked:border-primary checked:bg-primary checked:before:opacity-[0.16]
checked:after:absolute checked:after:-mt-px checked:after:ms-[0.25rem]
checked:after:block checked:after:h-[0.8125rem] checked:after:w-[0.375rem] checked:after:rotate-45
checked:after:border-[0.125rem] checked:after:border-l-0 checked:after:border-t-0
checked:after:border-solid checked:after:border-white checked:after:bg-transparent
checked:after:content-[''] hover:cursor-pointer hover:before:opacity-[0.04] hover:before:shadow-black/60
focus:shadow-none focus:transition-[border-color_0.2s] focus:before:scale-100
focus:before:opacity-[0.12] focus:before:shadow-black/60
focus:before:transition-[box-shadow_0.2s,transform_0.2s] focus:after:absolute focus:after:z-[1]
focus:after:block focus:after:h-[0.875rem] focus:after:w-[0.875rem] focus:after:rounded-[0.125rem]
focus:after:content-[''] checked:focus:before:scale-100 checked:focus:before:shadow-checkbox
checked:focus:before:transition-[box-shadow_0.2s,transform_0.2s] checked:focus:after:-mt-px
checked:focus:after:ms-[0.25rem] checked:focus:after:h-[0.8125rem] checked:focus:after:w-[0.375rem]
checked:focus:after:rotate-45 checked:focus:after:rounded-none checked:focus:after:border-[0.125rem]
checked:focus:after:border-l-0 checked:focus:after:border-t-0 checked:focus:after:border-solid
checked:focus:after:border-white checked:focus:after:bg-transparent rtl:float-right
dark:border-neutral-400 dark:checked:border-primary dark:checked:bg-primary flex items-center justify-center text-gray-500 dark:text-white"
>
|
|r|colors|r-leaflet| |
|java|migration|java-17|java-module| |
I want to process a million records, do some processing on them, and export the records into an Excel file. When I fetch records and fill them into a DataTable, it uses very high memory, around 8 to 10 GB. My SQL query result contains around 300 columns.
I have tried batching and I also tried SqlDataReader. With and without batching it takes almost the same memory, and using SqlDataReader did not reduce the memory either. Below is my current code. Is there any way to stop this high utilization?
Fetch records in batches of 10,000 records:
```
DataTable dt = new DataTable();
for (int i = 0; i <= 0; i++)
{
int startRow = 1;
int endRow = 10000;
for (int j = 0; (startRow == 1 || dt.Rows.Count >= 10000); j++)
{
dt = null;
dt = _dataService.SqlExecuteDT(data);
startRow = startRow + 10000;
}
}
```
Code for call SP and fill in datatable.
```
public static DataTable SqlExecuteDT(string spname, Dictionary<string, object> parameters, DBName dBName = DBName.DBCONN)
{
using (SqlConnection conn = new SqlConnection(DBConnection.GetConnectionString(dBName)))
{
using (SqlCommand cmd = new SqlCommand(spname, conn))
{
cmd.CommandTimeout = Timeout;
cmd.CommandType = System.Data.CommandType.StoredProcedure;
if (parameters != null)
{
foreach (KeyValuePair<string, object> kvp in parameters)
cmd.Parameters.AddWithValue(kvp.Key, kvp.Value ?? DBNull.Value);
}
conn.Open();
// Use a forward-only, read-only data reader for memory efficiency
using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
{
// Retrieve column schema
DataTable schemaTable = reader.GetSchemaTable();
// Create DataTable to hold the data
DataTable dataTable = new DataTable();
// Add columns to DataTable based on schema
foreach (DataRow row in schemaTable.Rows)
{
string columnName = row["ColumnName"].ToString();
Type dataType = (Type)row["DataType"];
dataTable.Columns.Add(columnName, dataType);
}
// Populate DataTable with data
while (reader.Read())
{
DataRow dataRow = dataTable.NewRow();
for (int i = 0; i < reader.FieldCount; i++)
{
dataRow[i] = reader[i];
}
dataTable.Rows.Add(dataRow);
}
return dataTable;
}
}
}
}
```
|
Doctrine "matching" method on EXTRA_LAZY ManyToMany relation make invalid SQL request |
|php|mysql|doctrine-orm|doctrine| |
null |
I have a Spring MVC web app with Spring Security using Keycloak v23.0.3 and Spring Boot v3.2.4. In Keycloak I set up parameters "SSO Session Idle" and "Access Token Lifespan" to 1 minute, but a session still doesn't expire after user being idle during 1 minute. I expect redirection to login screen after being idle this time. Please help me with this setting. Below is my configuration.
_build.gradle_
```kotlin
dependencies {
...
implementation 'org.springframework.boot:spring-boot-starter-web'
implementation 'org.springframework.boot:spring-boot-starter-security'
implementation 'org.springframework.boot:spring-boot-starter-oauth2-client'
...
}
```
_application.yaml_
```yaml
...
spring:
security:
oauth2:
client:
registration:
keycloak:
client-id: local-client
client-secret: [my-client-secret]
scope: openid
provider:
keycloak:
issuer-uri: http://localhost:8180/realms/local-realm
user-name-attribute: preferred_username
...
```
_MySecurityConfig.java_
```java
@Configuration
@EnableWebSecurity
public class MySecurityConfig {
@Bean
public SecurityFilterChain configure(HttpSecurity http) throws Exception {
http.authorizeHttpRequests(auth -> auth
.anyRequest()
.fullyAuthenticated())
.oauth2Login(Customizer.withDefaults())
.logout(logout -> logout
.logoutSuccessHandler(oidcLogoutSuccessHandler())
.permitAll());
return http.build();
}
OidcClientInitiatedLogoutSuccessHandler oidcLogoutSuccessHandler() {
OidcClientInitiatedLogoutSuccessHandler successHandler =
new OidcClientInitiatedLogoutSuccessHandler(clientRegistrationRepository);
successHandler.setPostLogoutRedirectUri("{baseUrl}");
return successHandler;
}
}
```
_Keycloak > local-realm > Realm settings > Sessions_
[![enter image description here][1]][1]
_Keycloak > local-realm > Realm settings > Tokens_
[![enter image description here][2]][2]
[1]: https://i.stack.imgur.com/JeMxc.png
[2]: https://i.stack.imgur.com/MoN2b.png |
Keycloak session doesn't expire |
|java|keycloak|spring-security-oauth2| |
I recently created a Github Codespace but when I tried to run applications like Firefox, I got this:
`Error: no DISPLAY environment variable specified`
I guess this is because GitHub Codespaces don't have a built-in GUI.
My home computer's operating system is Windows 11, and my Codespace is a 4-core, 16 GB RAM, 32 GB storage Ubuntu machine with only VS Code as a GUI.
I really just want to access web pages, and DON'T mention w3m.
I tried to follow [this](https://babyprogrammer.com/blog/running-gui-code-applications-in-github-codespaces-on-windows/) article to solve the problem, but when I tried to connect to my Codespace via the GitHub CLI, I got this:
`C:\Users\my-user-name>gh cs ssh -- -XY`
`selecting ssh keys: checking configured keys: could not find ssh executable: exec: "ssh": executable file not found in %PATH%`
I found [this](https://github.com/microsoft/vscode-dev-containers/blob/main/script-library/docs/desktop-lite.md) and tried it out, but I can't find the .devcontainer folder on my Codespace and basically don't know what to do.
Can you help? Thank you very much. |
High memory utilizing when fetch data from Store Procedure to DataTable |
|c#|asp.net|.net|asp.net-core|asp.net-web-api| |
null |
I am quite new to React and am trying to understand the code in Tic-tac-toe.
```jsx
import { useState } from "react";

// side function Square (will have value and onClick)
function Square({ value, onSquareClick }) {
  return (
    <button className="square" onClick={onSquareClick}>
      {value}
    </button>
  );
}

// function to find winner
function calculateWinner(squares) {
  const lines = [
    [0, 1, 2],
    [3, 4, 5],
    [6, 7, 8],
    [0, 3, 6],
    [1, 4, 7],
    [2, 5, 8],
    [0, 4, 8],
    [2, 4, 6],
  ];
  for (let i = 0; i < lines.length; i++) {
    const [a, b, c] = lines[i];
    if (squares[a] === squares[b] && squares[a] === squares[c]) {
      return squares[a];
    }
  }
  return null;
}

export default function Board() {
  // 1st, set 9 array for squares
  const [squares, setSquares] = useState(Array(9).fill(null));
  const [xIsNext, setXIsNext] = useState(true);

  function handleClick(i) {
    if (winner || squares[i]) {
      return;
    }
    const nextSquares = squares.slice();
    if (xIsNext) {
      nextSquares[i] = "X";
    } else {
      nextSquares[i] = "O";
    }
    setXIsNext(!xIsNext);
    setSquares(nextSquares);
  }

  const winner = calculateWinner(squares);
  let status;
  if (winner) {
    status = "Winner is: " + winner;
  } else {
    status = "There is no winner";
  }

  return (
    <>
      <div className="status">{status}</div>
      <div className="board-row">
        <Square value={squares[0]} onSquareClick={() => handleClick(0)} />
        <Square value={squares[1]} onSquareClick={() => handleClick(1)} />
        <Square value={squares[2]} onSquareClick={() => handleClick(2)} />
      </div>
      <div className="board-row">
        <Square value={squares[3]} onSquareClick={() => handleClick(3)} />
        <Square value={squares[4]} onSquareClick={() => handleClick(4)} />
        <Square value={squares[5]} onSquareClick={() => handleClick(5)} />
      </div>
      <div className="board-row">
        <Square value={squares[6]} onSquareClick={() => handleClick(6)} />
        <Square value={squares[7]} onSquareClick={() => handleClick(7)} />
        <Square value={squares[8]} onSquareClick={() => handleClick(8)} />
      </div>
    </>
  );
}
```
In the Board component, handleClick begins with `if (winner || squares[i]) { return; }` so that either condition makes the function return early. But the two conditions seem to respond differently: if a winner exists, no more moves are allowed at all, whereas if you click a square that is already filled, you can still move on and fill other squares.
So why do these two conditions respond differently when they are handled by the same early return?
Hope that makes sense. Thank you so much everyone! |
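As a standalone sketch (not from the tutorial, and with a deliberately simplified winner check), here is the same guard pulled out of React, to show that both conditions only abort the *current* click; the apparent difference is that `winner` stays truthy for every later click, while `squares[i]` is only truthy for clicks on that one square:

```javascript
// Minimal stand-in for the Board logic: same guard, no React.
function makeGame() {
  const squares = Array(9).fill(null);
  let winner = null;
  function handleClick(i, mark) {
    if (winner || squares[i]) {
      return false; // early return: only THIS click is ignored
    }
    squares[i] = mark;
    // toy winner check just for the sketch: "X" wins by holding 0, 1, 2
    if (squares[0] === "X" && squares[1] === "X" && squares[2] === "X") {
      winner = "X";
    }
    return true;
  }
  return { handleClick, getWinner: () => winner };
}

const g = makeGame();
console.log(g.handleClick(0, "X")); // true  - empty square, no winner yet
console.log(g.handleClick(0, "O")); // false - square 0 is already filled
console.log(g.handleClick(4, "O")); // true  - other squares still accept clicks
g.handleClick(1, "X");
g.handleClick(2, "X");              // "X" now wins
console.log(g.handleClick(5, "O")); // false - winner blocks every later click
```

So the early return treats both conditions identically per click; what differs is how long each condition remains true afterwards.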
Why does the early return in Tic-tac-toe behave differently for the two conditions? |
|reactjs|tic-tac-toe| |
null |
[enter image description here](https://i.stack.imgur.com/eK3C7.png)
I have a task to automate a certain Excel worksheet. The worksheet implements part of its logic with an Excel add-in called Solver. It uses a single value (-1.95624) in cell $O$9 (the result of the computations highlighted in red and blue ink in the diagram) as an input value, and then returns three values for C, B1 and B2 using an algorithm called "GRG Nonlinear". My task is to emulate this logic in Python. Below is my attempt. The main problem is that I am not getting the same values for C, B1 and B2 as those computed by Excel's Solver add-in.
```python
import numpy, scipy, matplotlib
import pandas as pd
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
import warnings

xData = numpy.array([-2.59772914040242,-2.28665528866907,-2.29176070881848,-2.31163972446061,-2.28369414349715,-2.27911303233721,-2.28222332344644,-2.39089535619106,-2.32144325648778,-2.17235002006179,-2.22906032068685,-2.42044014499938,-2.71639505549322,-2.65462061336346,-2.47330475191616,-2.33132910807216,-2.33025978869114,-2.61175064230516,-2.92916553244925,-2.987503044973,-3.00367414706232,-1.45507812104723])  # Use the same table name as the parameter
yData = numpy.array([0.0692847120775066,0.0922342111029099,0.0918076382491768,0.0901635409944003,0.0924824386284127,0.092867647175396,0.092605957740688,20.0838696111204451,0.0893625419994501,0.102261091024881,0.097171046758256,70.0816272542472914,0.0620128251290935,0.0657047909578125,0.0777509345715382,0.088561321341585,0.088647672874835,90.0683859871424735,0.0507304952495273,0.0479936476914665,0.0472601632188253,0.18922126828463])  # Use the same table name as the parameter

def func(x, a, b, Offset):  # Sigmoid A With Offset from zunzun.com
    return 1.0 / (1.0 + numpy.exp(-a * (x - b))) + Offset

# function for genetic algorithm to minimize (sum of squared error)
def sumOfSquaredError(parameterTuple):
    warnings.filterwarnings("ignore")  # do not print warnings by genetic algorithm
    val = func(xData, *parameterTuple)
    return numpy.sum((yData - val) ** 2.0)

def generate_Initial_Parameters():
    # min and max used for bounds
    maxX = max(xData)
    minX = min(xData)
    maxY = max(yData)
    minY = min(yData)
    parameterBounds = []
    parameterBounds.append([minX, maxX])  # search bounds for a
    parameterBounds.append([minX, maxX])  # search bounds for b
    parameterBounds.append([0.0, maxY])  # search bounds for Offset
    # "seed" the numpy random number generator for repeatable results
    result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3)
    return result.x

# generate initial parameter values
geneticParameters = generate_Initial_Parameters()

# curve fit the test data
params, covariance = curve_fit(func, xData, yData, geneticParameters, maxfev=50000)

# Convert parameters to Python built-in types
params = [float(param) for param in params]  # Convert numpy float64 to Python float
C, B1, B2 = params
OutputDataSet = pd.DataFrame({"C": [C], "B1": [B1], "B2": [B2], "ProType": [input_value_1], "RegType": [input_value_2]})
```
Any ideas would be helpful. Thanks in advance.
Given these datasets for xData and yData, the correct output should be:
C = -2.35443383, B1 = -14.70820051, B2 = 0.0056217
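For what it's worth, Excel's "GRG Nonlinear" is a gradient-based local optimizer, while the code above seeds `curve_fit` from a global genetic search, so the two can settle into different local minima. Below is a minimal sketch of minimizing the same kind of sum-of-squared-errors objective with a gradient-based `scipy.optimize.minimize` call; SLSQP is used here only as a rough analogue of GRG (an assumption, not an exact equivalence), and the data and starting point are made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Same model shape as func() above: sigmoid with offset
def sigmoid(x, a, b, offset):
    return 1.0 / (1.0 + np.exp(-a * (x - b))) + offset

# Made-up data generated from known parameters, so the fit is checkable
x = np.linspace(-3.0, 3.0, 50)
true_params = (2.0, 0.5, 0.1)
y = sigmoid(x, *true_params)

def sse(p):
    # sum of squared errors, the same objective Solver minimizes
    return float(np.sum((y - sigmoid(x, *p)) ** 2))

# Gradient-based local optimization from an explicit starting point,
# mimicking how Solver starts from the values already in the sheet
res = minimize(sse, x0=[1.0, 0.0, 0.0], method="SLSQP")
print(res.x)  # should land near (2.0, 0.5, 0.1)
```

Starting the local optimizer from the same initial guesses the worksheet uses, rather than from the genetic-search result, may get you closer to Solver's C, B1 and B2.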
|
[This is the error I get](https://i.stack.imgur.com/ADz54.png)
Although I tried removing the HttpResponse from my code, even that didn't solve the issue.
```python
from multiprocessing import context
from django.http import HttpResponse
from django.shortcuts import redirect, render
def unauthenticated_user(view_func):
def wrapper_func(request, *args, **kwargs):
if request.user.authenticated:
return redirect('homepage')
else:
return view_func(request,*args, **kwargs)
return wrapper_func
def allowed_users(allowed_roles=[]):
def decorator(view_func):
def wrapper_func(request,*args, **kwargs):
group is not None
if request.user.groups.exists():
group=request.user.groups.all()[0].name
if group in allowed_roles:
return view_func(request,*args, **kwargs)
if group == 'user':
return render(request,'loginapp/user_dashboard')
'''else:
return HttpResponse('you are authorized')'''
return wrapper_func
return decorator
def admin_only(view_func):
def wrapper_func(request,*args, **kwargs):
group is not None
if request.user.groups.exists():
group=request.user.groups.all()[0].name
if group == 'user':
return render(request,'loginapp/user_dashboard')
if group == 'admin':
return render(request,'loginapp/admin_dashboard')
return wrapper_func
```
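As an aside on the code above: the bare `group is not None` lines execute before `group` has ever been assigned, which raises a `NameError`, and when no branch matches, the wrappers return `None` instead of a response. A framework-free sketch of the same decorator pattern with those two issues fixed (the `Req` class and the string return values are stand-ins for Django's request and responses, just for illustration):

```python
# Framework-free sketch of the decorator with two fixes: `group` is
# initialised before it is tested, and every code path returns a value.
def allowed_users(allowed_roles=None):
    allowed_roles = allowed_roles or []
    def decorator(view_func):
        def wrapper_func(request, *args, **kwargs):
            group = None  # initialise before use (fixes the NameError)
            if request.groups:  # stand-in for request.user.groups.exists()
                group = request.groups[0]
            if group in allowed_roles:
                return view_func(request, *args, **kwargs)
            return "user_dashboard"  # always return a response
        return wrapper_func
    return decorator

class Req:
    """Minimal stand-in for a Django request, only for this sketch."""
    def __init__(self, groups):
        self.groups = groups

@allowed_users(allowed_roles=["admin"])
def view(request):
    return "admin_dashboard"

print(view(Req(["admin"])))  # admin_dashboard
print(view(Req(["user"])))   # user_dashboard
```

In the real Django decorators the same two fixes apply: set `group = None` before the `if request.user.groups.exists():` check, and make sure every path through `wrapper_func` returns an `HttpResponse` (e.g. a final `render` or `redirect`).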
|
|django|django-views| |
null |
I'm using df.compare to diff two CSVs that have the same index/row names. The diff works as expected, but the output shows row index numbers (0, 2, 5, ...) wherever it finds a difference between the CSVs.
What I'm looking for is for df.compare to show the row text instead of the row numbers where it finds the differences.
```
diff_out_csv = old.compare(latest,align_axis=1).rename(columns={'self':'old','other':'latest'})
```
Current output -
```
NUMBER1 NUMBER2 NUMBER3
old latest old latest old latest
0 -14.1685 -14.0132 -1.2583 -1.2611 NaN NaN
2 -9.7875 -12.2739 -0.3532 -0.3541 86.0 100.0
3 -0.0365 -0.0071 -0.0099 -0.0039 6.0 2.0
4 -1.9459 -1.5258 -0.5402 -0.0492 73.0 131.0
```
Desired Output -
```
NUMBER1 NUMBER2 NUMBER3
old latest old latest old latest
JACK -14.1685 -14.0132 -1.2583 -1.2611 NaN NaN
JASON -9.7875 -12.2739 -0.3532 -0.3541 86.0 100.0
JACOB -0.0365 -0.0071 -0.0099 -0.0039 6.0 2.0
JIMMY -1.9459 -1.5258 -0.5402 -0.0492 73.0 131.0
```
I was able to replace the column names using `df.compare.rename(columns={})`, but how do I replace 0, 2, 3, 4 with the text names?
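From what I can tell, `compare` simply reuses whatever index the frames already carry, so making the name column the index before comparing should do it, e.g. `pd.read_csv(path, index_col=0)` or `df.set_index("NAME")` (the column name `"NAME"` is assumed here). A small sketch with made-up frames:

```python
import pandas as pd

# Made-up frames keyed by name; with a labelled index, compare() keeps
# the labels instead of positional 0, 1, 2, ...
old = pd.DataFrame({"NUMBER1": [1.0, 2.0]}, index=["JACK", "JASON"])
latest = pd.DataFrame({"NUMBER1": [1.5, 2.0]}, index=["JACK", "JASON"])

diff = old.compare(latest, align_axis=1).rename(
    columns={"self": "old", "other": "latest"}
)
print(diff)  # only JACK differs, and the row is labelled "JACK"
```

The same `compare(...).rename(...)` call from the question then produces rows labelled JACK, JASON, etc. instead of 0, 2, 3, 4.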
|
|python|pandas|dataframe| |
{"OriginalQuestionIds":[37683558],"Voters":[{"Id":523612,"DisplayName":"Karl Knechtel","BindingReason":{"GoldTagBadge":"python"}}]} |
|javascript|mongodb|npm|mongoose| |
When configuring the cluster, I got to select the subnet group, and I chose the default one, which was a mixture of public and private subnets. AWS then randomly chose a subnet from it for my cluster, which happened to be a private one; that is why I hadn't been able to connect to it locally. I just had to create a new subnet group with only public subnets and use it while creating an entirely new cluster, since modifying the subnets of an existing cluster isn't possible. |
This worked for me:
`flutter run -d chrome --web-browser-flag "--disable-web-security"`
|
I am trying to write custom code that should only run when someone books a subscription at a specific membership level. I am using the WishList Member plugin 3.9 and I won't be able to upgrade it.
Please let me know if you know of any hook related to this. |
WishList Member 3.9 WordPress plugin: hook for when a new member takes a subscription |
|wordpress|woocommerce|membership| |
My company has had a big issue with Vuforia Engine for the past 3 days.
When we scan a visual with our application, a blank screen appears before our content shows up, and there is a lot of latency. This wasn't the case before the website update.
Can you please help me? Is anyone else in the same situation?
Thank you.
We tried to reduce the size of our files, as they may have been too heavy, but it didn't help. |
AR Image Display Issue |
|image-processing|augmented-reality|display|vuforia| |
null |
I've just noticed that in SQL Server Management Studio (SSMS) 19.1.56 I can't set a bookmark on a SQL script file open in a query window if the script has been saved to the solution open in the SSMS Solution Explorer. Is there a way to do this, or does some option need to be set to allow it?
I can set bookmarks in unsaved query windows. I can also set bookmarks in SQL script files open in query windows as long as the scripts aren't included in the open solution. It's only SQL scripts included in the open solution that I can't bookmark. |
Setting bookmarks in SQL script saved to SSMS solution |
|sql-server|ssms| |